
avx512 based int8 -> bf16 dequantization #6068
Triggered via pull request: September 23, 2025 07:31
Status: Success
Total duration: 2h 0m 21s
Artifacts: 2
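As context for the change under test: int8 -> bf16 dequantization maps each quantized byte back to a real value via `(q - zero_point) * scale`, then narrows the float32 result to bfloat16 (the top 16 bits of the float32 representation). The sketch below is a scalar NumPy illustration of that arithmetic, not the PR's implementation — the actual kernel is vectorized C++ using AVX-512 intrinsics inside FBGEMM, and the function name, `zero_point` parameter, and round-to-nearest-even rule here are illustrative assumptions.

```python
import numpy as np

def dequantize_int8_to_bf16(q, scale, zero_point=0):
    """Scalar reference: dequantize int8 to bfloat16 bit patterns (uint16).

    bf16 keeps the upper 16 bits of a float32; the discarded lower 16 bits
    are folded in with round-to-nearest-even, a common conversion rule.
    """
    # Affine dequantization in float32.
    f32 = (q.astype(np.float32) - np.float32(zero_point)) * np.float32(scale)
    bits = f32.view(np.uint32)
    # Round-to-nearest-even: add 0x7FFF plus the LSB of the retained half,
    # then truncate to the top 16 bits.
    lsb = (bits >> 16) & 1
    rounded = bits + np.uint32(0x7FFF) + lsb
    return (rounded >> 16).astype(np.uint16)

q = np.array([2, -4], dtype=np.int8)
bf16 = dequantize_int8_to_bf16(q, scale=0.5)
# Widen back to float32 to inspect the dequantized values.
restored = (bf16.astype(np.uint32) << 16).view(np.float32)
```

With `scale=0.5` the inputs `[2, -4]` dequantize exactly to `[1.0, -2.0]`, both representable in bf16, so the round trip through the 16-bit pattern is lossless here; values that need more than 8 mantissa bits would be rounded.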
generate-matrix / generate (8s)
filter-matrix (4s)
Matrix: build

Annotations

1 warning
filter-matrix: The `python-version` input is not set. The version of Python currently in `PATH` will be used.

Artifacts

Produced during runtime

Name                                 Size     Digest
pytorch_FBGEMM__3.10_cu126_aarch64   13.9 MB  sha256:d5fad761d2af3ffabaf5cbbdc08e912a7e01e3de78144ee068f8265b368e83f8
pytorch_FBGEMM__3.10_cu128_aarch64   42.1 MB  sha256:78ce5bdfb263878ba72d8b074c15dadff47b023fb3235f8c8e55f4881f95e88d