
avx512 based int8 -> bf16 dequantization #16701

Triggered via pull request: September 23, 2025 20:54
Status: Success
Total duration: 41m 25s
Artifacts: 1
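For context, the change under test converts int8-quantized values back to bf16. Below is a minimal scalar sketch of that dequantization in pure Python; the function names are hypothetical illustrations, not FBGEMM's actual API (the PR itself implements this with AVX-512 intrinsics).

```python
import struct

def fp32_to_bf16_bits(x: float) -> int:
    """Convert a float to its bf16 bit pattern (round-to-nearest-even)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # Bias the low 16 bits so that truncation rounds to nearest, ties to even.
    rounding_bias = 0x7FFF + ((bits >> 16) & 1)
    return ((bits + rounding_bias) >> 16) & 0xFFFF

def bf16_bits_to_fp32(b: int) -> float:
    """Widen a bf16 bit pattern back to fp32 by zero-filling the low mantissa bits."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

def dequantize_int8_to_bf16(q, scale, zero_point):
    """Scalar reference: dequantize int8 values to bf16 bit patterns.

    fp32 intermediate = scale * (q - zero_point), then narrow to bf16.
    """
    return [fp32_to_bf16_bits(scale * (qi - zero_point)) for qi in q]
```

The AVX-512 version performs the same arithmetic lane-wise: widen int8 to fp32, multiply by the scale, then narrow fp32 to bf16 across the vector.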
generate-matrix / generate: 8s
filter-matrix: 6s
Matrix: build

Annotations

1 warning
filter-matrix
The `python-version` input is not set. The version of Python currently in `PATH` will be used.

Artifacts

Produced during runtime
Name: pytorch_FBGEMM__3.10_cpu_aarch64
Size: 4.37 MB
Digest: sha256:3691c9182612eb02da370e6c1c0cd2a547d46465bf8c215cedbb56ff50eb3c81