avx512 based int8 -> bf16 dequantization #6102
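The PR title names the technique being built and tested in this run: dequantizing int8 values to bf16 using AVX-512. As a hedged illustration (not the PR's actual code), a scalar reference for that conversion computes `scale * (q - zero_point)` in fp32 and narrows to bf16 with round-to-nearest-even; the `float_to_bf16` and `dequantize_int8_to_bf16` names below are hypothetical, and the intrinsics named in the comment are only the likely vector counterparts on avx512_bf16 hardware.

```cpp
#include <cstdint>
#include <cstring>

// bf16 stored as uint16_t: the top 16 bits of an IEEE-754 float,
// with round-to-nearest-even applied to the dropped mantissa bits.
static uint16_t float_to_bf16(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    // Add 0x7FFF plus the LSB of the kept part: rounds to nearest even.
    uint32_t rounding = 0x7FFFu + ((bits >> 16) & 1u);
    return static_cast<uint16_t>((bits + rounding) >> 16);
}

// Scalar reference sketch (hypothetical helper, not FBGEMM's API):
// out[i] = bf16(scale * (in[i] - zero_point)).
// An AVX-512 path would process 16+ lanes at once, e.g. sign-extend
// int8 lanes (VPMOVSXBD), convert to fp32 (VCVTDQ2PS), then narrow
// with VCVTNEPS2BF16 where avx512_bf16 is available.
void dequantize_int8_to_bf16(const int8_t* in, uint16_t* out,
                             size_t n, float scale, int32_t zero_point) {
    for (size_t i = 0; i < n; ++i) {
        float v = scale * (static_cast<int32_t>(in[i]) - zero_point);
        out[i] = float_to_bf16(v);
    }
}
```

The scalar loop doubles as a correctness oracle for the vectorized kernel: both must agree bit-for-bit when the vector path uses the same round-to-nearest-even narrowing.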

Triggered via pull request: September 23, 2025, 20:54
Status: Success
Total duration: 2h 31m 13s
Artifacts: 5
Jobs:
generate-matrix / generate (4s)
filter-matrix (7s)
Matrix: pytorch/FBGEMM / build
Matrix: pytorch/FBGEMM / upload / upload

Annotations

1 warning (filter-matrix):
The `python-version` input is not set. The version of Python currently in `PATH` will be used.

Artifacts

Produced during runtime
Name                                 Size     Digest
pytorch_FBGEMM__3.10_cu126_x86_64    17.2 MB  sha256:2ad78396ebd5c9464debddf0ef7fbb5f0bdd67d7d4b59bd3e914a451fc7a3694
pytorch_FBGEMM__3.10_cu128_x86_64    48.8 MB  sha256:685c65e56217c09be337fb2cccb996023d37c3e7a2cb0a393b9148d52007112f
pytorch_FBGEMM__3.10_cu130_x86_64    46.7 MB  sha256:3304bb0184402de220b6d02099a98290649970c0be1fcb842f3fdac2a83d3479
pytorch_FBGEMM__3.10_rocm6.3_x86_64  11.2 MB  sha256:db5e86aeed59615cfd76b099b73394cfc165a1e6e6c264e9532ba5233d3032e0
pytorch_FBGEMM__3.10_rocm6.4_x86_64  11.3 MB  sha256:f2a540864f323caca934b679d6c8141a2d1211ea55263c05f15417031a5651e7