1 parent 826064d commit 8c81c3a
fbgemm_gpu/experimental/gemm/triton_gemm/fp4_quantize.py
@@ -1461,7 +1461,7 @@ def triton_scale_nvfp4_quant(
         stochastic_casting (bool): Whether to use stochastic casting.

     Returns:
-        torch.Tensor: [M / 2] nvfp4 scaled tensor packed into in8
+        torch.Tensor: [M / 2] nvfp4 scaled tensor packed into int8
         torch.Tensor: [M / group_size] nvfp4 shared exponents into int8

     eg.
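
For context on the return shapes in the docstring: each int8 byte holds two 4-bit nvfp4 codes, so the packed output has M / 2 elements, and one shared exponent is stored per group of group_size values, giving M / group_size entries. Below is a minimal standalone sketch of that arithmetic only; M, group_size, and the nibble packing order are illustrative assumptions, not fbgemm_gpu's actual layout or the body of triton_scale_nvfp4_quant.

```python
import torch

# Illustrative values; the real kernel derives these from its inputs.
M, group_size = 32, 16
codes = torch.randint(0, 16, (M,), dtype=torch.uint8)  # stand-in 4-bit nvfp4 codes

# Pack each pair of 4-bit codes into one byte -> [M / 2] int8 tensor.
# (Low-nibble-first order is an assumption for this sketch.)
packed = (codes[0::2] | (codes[1::2] << 4)).view(torch.int8)
assert packed.shape == (M // 2,)

# One shared exponent per group of group_size values -> [M / group_size] int8 tensor.
shared_exponents = torch.zeros(M // group_size, dtype=torch.int8)
assert shared_exponents.shape == (M // group_size,)
```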