
Fixing test_quantize_fp8_matmul for CUDA graph #13837

Triggered via pull request on July 1, 2025 at 16:36
Status: Success
Total duration: 2h 55m 7s
Artifacts: 6

build_wheels_linux_x86.yml

on: pull_request
generate-matrix / generate (8s)
filter-matrix (7s)
Matrix: pytorch/FBGEMM / build
Matrix: build / upload / upload

Annotations

1 warning
filter-matrix: The `python-version` input is not set. The version of Python currently in `PATH` will be used.
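This is the standard message emitted by actions/setup-python when no `python-version` input is supplied. A minimal sketch of how the filter-matrix job could pin the interpreter explicitly is shown below; the job layout, action versions, and the `3.9` value are illustrative assumptions, not taken from the actual build_wheels_linux_x86.yml.

```yaml
# Hypothetical excerpt, not copied from build_wheels_linux_x86.yml: pinning the
# Python version so actions/setup-python stops warning and no longer falls back
# to whatever interpreter happens to be on PATH.
jobs:
  filter-matrix:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.9"  # assumed value; align with the 3.9 wheel builds listed under Artifacts
      - name: Filter the build matrix
        run: python --version  # placeholder for the actual matrix-filtering step
```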

Artifacts

Produced during runtime
Name                                Size     Digest
pytorch_FBGEMM__3.9_cpu_x86_64      5.52 MB  sha256:e0eeefe7cd37e918db63b5d6f77b6bf3576c7a9fdaa8ae7417f12e28a60d65a1
pytorch_FBGEMM__3.9_cu126_x86_64    383 MB   sha256:f8bcfa013a4de4fafcf641f18a3288d53e2f9457a23981c1c391d5d4eb6ca704
pytorch_FBGEMM__3.9_cu128_x86_64    621 MB   sha256:ea7641f37f9cc73f0a0f90f0b27bc96c403423d9fd804c23d37cacd12980a9f5
pytorch_FBGEMM__3.9_cu129_x86_64    652 MB   sha256:9c7f836b64c111013ea4b650b2de025bbd2ecf237c86ea0d4eb2ae8d09006926
pytorch_FBGEMM__3.9_rocm6.3_x86_64  55.3 MB  sha256:143bc2dc669104b63750a49a72798056af61c1408e35ea2499bec7dee22ca9b3
pytorch_FBGEMM__3.9_rocm6.4_x86_64  54.6 MB  sha256:4b585d58689b51a1273664efde329edd3a3b2c490b870c0d5f4aa39e3329ad96