avx512 based int8 -> bf16 dequantization #6099

Triggered via pull request on September 23, 2025, 20:54
Status: Success
Total duration: 2h 10m 12s
Artifacts: 2
Jobs

generate-matrix / generate (5s)
filter-matrix (5s)
Matrix: build

Annotations

1 warning

filter-matrix: The `python-version` input is not set. The version of Python currently in `PATH` will be used.

Artifacts

Produced during runtime
Name                                Size     Digest
pytorch_FBGEMM__3.10_cu126_aarch64  15.9 MB  sha256:74b8e15e9242c898cf01497cb9efa748c63b229b646cedd436417db63bd6d6ba
pytorch_FBGEMM__3.10_cu128_aarch64  47.2 MB  sha256:6b0134e62b3c65ada18eb3259a6902ce38e4c5893ce508c0c682ae8f04a8c924
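For orientation, the PR that triggered this run adds an AVX-512 path for int8 -> bf16 dequantization. The following is only a minimal sketch of what such a conversion can look like, not the PR's actual kernel: the function name, signature, and the assumption of a single symmetric per-tensor scale are hypothetical. It requires AVX512F and AVX512BF16 (e.g. compile with -O2 -mavx512f -mavx512bf16).

```cpp
#include <immintrin.h>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Hypothetical illustration: dequantize n int8 values to bf16 with one
// per-tensor scale, 16 elements per AVX-512 iteration.
void dequantize_int8_to_bf16(const int8_t* src, uint16_t* dst,
                             std::size_t n, float scale) {
  const __m512 vscale = _mm512_set1_ps(scale);
  std::size_t i = 0;
  for (; i + 16 <= n; i += 16) {
    // Load 16 int8 values and sign-extend them to 32-bit integers.
    const __m128i q8 =
        _mm_loadu_si128(reinterpret_cast<const __m128i*>(src + i));
    const __m512i q32 = _mm512_cvtepi8_epi32(q8);
    // Convert to float and apply the dequantization scale.
    const __m512 f = _mm512_mul_ps(_mm512_cvtepi32_ps(q32), vscale);
    // Round-convert the 16 floats to 16 bf16 values and store them.
    const __m256bh b = _mm512_cvtneps_pbh(f);
    std::memcpy(dst + i, &b, sizeof(b));
  }
  // Scalar tail: truncate the float to bf16 by keeping its upper 16 bits
  // (simpler than the round-to-nearest-even used by the vector path).
  for (; i < n; ++i) {
    const float f = static_cast<float>(src[i]) * scale;
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    dst[i] = static_cast<uint16_t>(bits >> 16);
  }
}
```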