Pull requests: InfiniTensor/InfiniCore
issue/1012 - feat: add paged caching for moore gpu referencing nvidia
Labels: Ready, Module: operator
#1013 opened Feb 10, 2026 by spike-zhu
issue/1001 - feat: add paged attention prefill and decode for moore gpu referencing nvidia
Labels: Ready, Module: operator
#1011 opened Feb 10, 2026 by spike-zhu
issue/899 - fix: fix causal_softmax and rearrange bugs
Labels: Ready, Module: operator
#1010 opened Feb 10, 2026 by spike-zhu
issue/949 - feat: add silu_and_mul for moore gpu with tests passing
Labels: Ready, Module: operator, Type: development
#1009 opened Feb 10, 2026 by spike-zhu
Demo-131 CUDA graph with optimized paged attention
#990 opened Jan 27, 2026 by PanZezhong1725
issue/978 - metax CUDA graph implementation and wrappers
Labels: Ready, Type: development
#982 opened Jan 26, 2026 by wooway777
Issue/843: Add quant linear support; add I8 GEMM and per_channel_quant_i8 operators on C610
#965 opened Jan 22, 2026 by qinyiqun
issue/958 - Add read tensor from file feature in dynamic ops test && …
#959 opened Jan 21, 2026 by baominghelly
Issue/951: feat: add paged-attention-related operators for metax
#953 opened Jan 20, 2026 by Ceng23333
issue/791 - Modify the addRmsNorm interface and the rmsnorm module
Labels: Ready, Module: operator, Type: optimization
#947 opened Jan 19, 2026 by PanZezhong1725