avx512 based int8 -> bf16 dequantization #6046

Triggered via pull request on September 22, 2025, 20:41
Status: Success
Total duration: 2h 0m 36s
Artifacts: 2
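
For context, the operation named in the run title converts quantized int8 values back to bfloat16. The sketch below is a minimal illustration of one way such an AVX-512 kernel can look, assuming per-tensor `scale` and `zero_point` parameters; the function name and signature are hypothetical and do not reflect the actual implementation in PR #6046 or FBGEMM.

```cpp
// Minimal sketch: dequantize 16 int8 values to 16 bf16 values with AVX-512.
// Requires AVX512F and AVX512_BF16. `scale` and `zero_point` are assumed
// per-tensor quantization parameters (illustrative only).
#include <immintrin.h>
#include <cstdint>
#include <cstring>

void dequantize_int8_to_bf16(const int8_t* src, uint16_t* dst,
                             float scale, int32_t zero_point) {
  // Load 16 int8 values and sign-extend them to 32-bit integers.
  __m128i v_i8  = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src));
  __m512i v_i32 = _mm512_cvtepi8_epi32(v_i8);

  // Subtract the zero point and apply the scale in fp32.
  v_i32 = _mm512_sub_epi32(v_i32, _mm512_set1_epi32(zero_point));
  __m512 v_f32 = _mm512_mul_ps(_mm512_cvtepi32_ps(v_i32), _mm512_set1_ps(scale));

  // Round the 16 fp32 lanes to bf16 (round-to-nearest-even) and store.
  __m256bh v_bf16 = _mm512_cvtneps_pbh(v_f32);
  std::memcpy(dst, &v_bf16, sizeof(v_bf16));
}
```

A full kernel would typically loop over the tensor in 16-element chunks and handle any remaining tail elements with scalar code.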
Jobs
generate-matrix / generate (6s)
filter-matrix (6s)
Matrix: build

Annotations

1 warning
filter-matrix: The `python-version` input is not set. The version of Python currently in `PATH` will be used.

Artifacts

Produced during runtime:

pytorch_FBGEMM__3.10_cu126_aarch64 (13.9 MB)
  sha256:9be928176dbd464aaefba2d1c068f05c70225a5a4decb5248b70f74ea7fa3110

pytorch_FBGEMM__3.10_cu128_aarch64 (42.1 MB)
  sha256:493f4130a8922975b556756ed820cde84b3f7b09d91413f24f810c00e4ce42e5