Commit 8c81c3a

cthi authored and facebook-github-bot committed
Fix typo in FP4 quantize
Summary: as title

Differential Revision: D83083612
1 parent 826064d commit 8c81c3a

File tree: 1 file changed (+1, -1 lines changed)

fbgemm_gpu/experimental/gemm/triton_gemm/fp4_quantize.py

Lines changed: 1 addition & 1 deletion
@@ -1461,7 +1461,7 @@ def triton_scale_nvfp4_quant(
         stochastic_casting (bool): Whether to use stochastic casting.

     Returns:
-        torch.Tensor: [M / 2] nvfp4 scaled tensor packed into in8
+        torch.Tensor: [M / 2] nvfp4 scaled tensor packed into int8
         torch.Tensor: [M / group_size] nvfp4 shared exponents into int8

         eg.
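
The corrected docstring line describes the output layout: each nvfp4 value is 4 bits, so two values share one int8 byte, giving an [M / 2] packed tensor, plus one shared exponent per group_size values stored as int8. Below is a minimal illustrative sketch of that packing arithmetic only, not the actual triton_scale_nvfp4_quant kernel; the values of M and group_size and all variable names are assumptions made for the example.

import torch

# Illustrative sketch of the int8 packing layout from the docstring;
# M and group_size are assumed example values, not kernel defaults.
M = 32            # number of fp4 values (assumed even)
group_size = 16   # values sharing one exponent, per the [M / group_size] shape

# Pretend each value is already quantized to a 4-bit code in [0, 15].
fp4_codes = torch.randint(0, 16, (M,), dtype=torch.uint8)

# Pack two 4-bit codes per byte: low nibble = even index, high nibble = odd index.
packed = (fp4_codes[0::2] | (fp4_codes[1::2] << 4)).view(torch.int8)
assert packed.shape == (M // 2,)          # [M / 2] packed int8 tensor

# One shared exponent per group of group_size values, stored as int8.
shared_exponents = torch.zeros(M // group_size, dtype=torch.int8)
assert shared_exponents.shape == (M // group_size,)   # [M / group_size]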

0 commit comments
