avx512 based int8 -> bf16 dequantization #6071
Triggered via pull request September 23, 2025 07:31
Status Success
Total duration 2h 22m 15s
Artifacts 5
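For context on the operation this PR's kernel implements, a scalar reference for int8 -> bf16 dequantization can be sketched as below. This is a minimal sketch, not FBGEMM's actual AVX-512 code: the `scale`/`zero_point` affine scheme and the round-to-nearest-even bf16 conversion are assumptions about a typical dequantization path, and all function names are hypothetical.

```python
import struct

def f32_to_bf16(x: float) -> int:
    """Convert a float32 to bfloat16 bits (the top 16 bits of the
    float32 encoding), rounding the dropped mantissa bits to
    nearest-even. Rounding scheme is an assumption, not taken
    from the PR."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # Add 0x7FFF plus the LSB of the kept half: nearest-even rounding.
    rounding_bias = 0x7FFF + ((bits >> 16) & 1)
    return ((bits + rounding_bias) >> 16) & 0xFFFF

def bf16_to_f32(b: int) -> float:
    """Widen bfloat16 bits back to float32 by zero-filling the low
    16 bits (exact, no rounding needed)."""
    return struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))[0]

def dequantize_int8_to_bf16(q, scale, zero_point=0):
    """Scalar reference: out[i] = bf16((q[i] - zero_point) * scale).
    An AVX-512 kernel would do the same math on 16 or more lanes
    per instruction instead of one element at a time."""
    return [f32_to_bf16((v - zero_point) * scale) for v in q]

out = dequantize_int8_to_bf16([0, 1, -1, 100], scale=0.5)
print([bf16_to_f32(b) for b in out])  # [0.0, 0.5, -0.5, 50.0]
```

The values in the usage line are all exactly representable in bf16, so the round trip through `bf16_to_f32` reproduces them without rounding error; arbitrary products of `scale` would in general lose low mantissa bits.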
generate-matrix / generate (6s)
filter-matrix (6s)
Matrix: pytorch/FBGEMM / build
Matrix: pytorch/FBGEMM / upload / upload

Annotations

1 warning
filter-matrix
The `python-version` input is not set. The version of Python currently in `PATH` will be used.

Artifacts

Produced during runtime
Name                                  Size     Digest
pytorch_FBGEMM__3.10_cu126_x86_64     15.2 MB  sha256:e042fc23b2fb5b8df74feec92a6c237401e22de779a623534dcf33719b8b9c75
pytorch_FBGEMM__3.10_cu128_x86_64     43.5 MB  sha256:97f2c6c0d8cbb2e1f7c40b215fd94b4e529d35b713cd7689b3e55131b96e2262
pytorch_FBGEMM__3.10_cu130_x86_64     41.4 MB  sha256:07e12e21e388e99477d7e0095657beb59dee77797cb251cf4af2aa36ce87a5ca
pytorch_FBGEMM__3.10_rocm6.3_x86_64   11.1 MB  sha256:2664bd86bc5ea820d72dba661960023b130e0188aecee85a8b8d06ff01a7fe9a
pytorch_FBGEMM__3.10_rocm6.4_x86_64   11.2 MB  sha256:2cb65b03b81c0344f57bcaaa8c34a5145901ffca221007ac2919b871230c9a85