
Any PP-DocLayout-xxx model under LayoutDetection fails to parse multi-page PDFs and exits with an error when high-performance CPU inference uses openvino or onnxruntime as the inference backend #4615

@HorseLuke

Description

Checklist:

Problem description

When the "General Layout Parsing v3 pipeline" is deployed in high-performance CPU inference mode, the PP-DocLayout-M model can use openvino or onnxruntime as its inference backend; however, passing in a multi-page PDF then makes the pipeline exit with an error.

Switching to the paddle inference backend parses multi-page PDFs normally.

Another scenario that works is splitting the multi-page PDF into single-page PDFs and feeding them in one page at a time. That approach, however, violates the interface contract and cannot be used.

Reproduction

  1. High-performance inference?

【Yes】

  2. Serving deployment?

【Yes】

* Did you use the high-performance inference plugin in your serving deployment?

【Yes】

* Which serving deployment approach did you use?

A self-built Docker image (built directly from the PaddlePaddle 3.0.0 image), running in high-performance CPU inference mode.

The main content of the Dockerfile, simplified:

FROM ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlepaddle/paddle:3.0.0 AS build

RUN <<-DOCKERFILERUNEOF
set -xe
python -m pip install "onnxruntime==1.22.0"
python -m pip install "onnx_graphsurgeon==0.5.8"
python -m pip install "paddlex[ocr]==3.2.1"
paddlex --install paddle2onnx
paddlex --install hpi-cpu
paddlex --install serving
DOCKERFILERUNEOF
  3. Which model and dataset did you use?

Model:

PP-StructureV3-debug.yaml is configured as follows (pipeline file exported directly with the paddlex CLI; the only modification is changing PP-DocLayout_plus-L to PP-DocLayout-M):



pipeline_name: PP-StructureV3

batch_size: 8

use_doc_preprocessor: False
use_seal_recognition: False
use_table_recognition: True
use_formula_recognition: True
use_chart_recognition: False
use_region_detection: True

SubModules:
  LayoutDetection:
    module_name: layout_detection
    model_name: PP-DocLayout-M
    model_dir: null
    batch_size: 8
    threshold: 
      0: 0.3  # paragraph_title
      1: 0.5  # image
      2: 0.4  # text
      3: 0.5  # number
      4: 0.5  # abstract
      5: 0.5  # content
      6: 0.5  # figure_table_chart_title
      7: 0.3  # formula
      8: 0.5  # table
      9: 0.5  # reference
      10: 0.5 # doc_title
      11: 0.5 # footnote
      12: 0.5 # header
      13: 0.5 # algorithm
      14: 0.5 # footer
      15: 0.45 # seal
      16: 0.5 # chart
      17: 0.5 # formula_number
      18: 0.5 # aside_text
      19: 0.5 # reference_content
    layout_nms: True
    layout_unclip_ratio: [1.0, 1.0] 
    layout_merge_bboxes_mode: 
      0: "large"  # paragraph_title
      1: "large"  # image
      2: "union"  # text
      3: "union"  # number
      4: "union"  # abstract
      5: "union"  # content
      6: "union"  # figure_table_chart_title
      7: "large"  # formula
      8: "union"  # table
      9: "union"  # reference
      10: "union" # doc_title
      11: "union" # footnote
      12: "union" # header
      13: "union" # algorithm
      14: "union" # footer
      15: "union" # seal
      16: "large" # chart
      17: "union" # formula_number
      18: "union" # aside_text
      19: "union" # reference_content
  ChartRecognition:
    module_name: chart_recognition
    model_name: PP-Chart2Table
    model_dir: null
    batch_size: 1 
  RegionDetection:
    module_name: layout_detection
    model_name: PP-DocBlockLayout
    model_dir: null
    layout_nms: True
    layout_merge_bboxes_mode: "small"

SubPipelines:
  DocPreprocessor:
    pipeline_name: doc_preprocessor
    batch_size: 8
    use_doc_orientation_classify: True
    use_doc_unwarping: True
    SubModules:
      DocOrientationClassify:
        module_name: doc_text_orientation
        model_name: PP-LCNet_x1_0_doc_ori
        model_dir: null
        batch_size: 8
      DocUnwarping:
        module_name: image_unwarping
        model_name: UVDoc
        model_dir: null

  GeneralOCR:
    pipeline_name: OCR
    batch_size: 8
    text_type: general
    use_doc_preprocessor: False
    use_textline_orientation: True
    SubModules:
      TextDetection:
        module_name: text_detection
        model_name: PP-OCRv5_server_det
        model_dir: null
        limit_side_len: 736
        limit_type: min
        max_side_limit: 4000
        thresh: 0.3
        box_thresh: 0.6
        unclip_ratio: 1.5
      TextLineOrientation:
        module_name: textline_orientation
        model_name: PP-LCNet_x1_0_textline_ori
        model_dir: null
        batch_size: 8
      TextRecognition:
        module_name: text_recognition
        model_name: PP-OCRv5_server_rec
        model_dir: null
        batch_size: 8
        score_thresh: 0.0
 

  TableRecognition:
    pipeline_name: table_recognition_v2
    use_layout_detection: False
    use_doc_preprocessor: False
    use_ocr_model: False
    SubModules:  
      TableClassification:
        module_name: table_classification
        model_name: PP-LCNet_x1_0_table_cls
        model_dir: null

      WiredTableStructureRecognition:
        module_name: table_structure_recognition
        model_name: SLANeXt_wired
        model_dir: null
      
      WirelessTableStructureRecognition:
        module_name: table_structure_recognition
        model_name: SLANet_plus
        model_dir: null
      
      WiredTableCellsDetection:
        module_name: table_cells_detection
        model_name: RT-DETR-L_wired_table_cell_det
        model_dir: null
      
      WirelessTableCellsDetection:
        module_name: table_cells_detection
        model_name: RT-DETR-L_wireless_table_cell_det
        model_dir: null

      TableOrientationClassify:
        module_name: doc_text_orientation
        model_name: PP-LCNet_x1_0_doc_ori
        model_dir: null
    SubPipelines:
      GeneralOCR:
        pipeline_name: OCR
        text_type: general
        use_doc_preprocessor: False
        use_textline_orientation: True
        SubModules:
          TextDetection:
            module_name: text_detection
            model_name: PP-OCRv5_server_det
            model_dir: null
            limit_side_len: 736
            limit_type: min
            max_side_limit: 4000
            thresh: 0.3
            box_thresh: 0.4
            unclip_ratio: 1.5
          TextLineOrientation:
            module_name: textline_orientation
            model_name: PP-LCNet_x1_0_textline_ori
            model_dir: null
            batch_size: 8
          TextRecognition:
            module_name: text_recognition
            model_name: PP-OCRv5_server_rec
            model_dir: null
            batch_size: 8
            score_thresh: 0.0

  SealRecognition:
    pipeline_name: seal_recognition
    batch_size: 8
    use_layout_detection: False
    use_doc_preprocessor: False
    SubPipelines:
      SealOCR:
        pipeline_name: OCR
        batch_size: 8
        text_type: seal
        use_doc_preprocessor: False
        use_textline_orientation: False
        SubModules:
          TextDetection:
            module_name: seal_text_detection
            model_name: PP-OCRv4_server_seal_det
            model_dir: null
            limit_side_len: 736
            limit_type: min
            max_side_limit: 4000
            thresh: 0.2
            box_thresh: 0.6
            unclip_ratio: 0.5
          TextRecognition:
            module_name: text_recognition
            model_name: PP-OCRv5_server_rec
            model_dir: null
            batch_size: 8
            score_thresh: 0
    
  FormulaRecognition:
    pipeline_name: formula_recognition
    batch_size: 8
    use_layout_detection: False
    use_doc_preprocessor: False
    SubModules:
      FormulaRecognition:
        module_name: formula_recognition
        model_name: PP-FormulaNet_plus-L
        model_dir: null
        batch_size: 8
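Incidentally, the shape errors in the logs report a batch of 6 (the PDF's page count, within the configured batch_size of 8) against a model input whose static batch dimension is 1. As an untested workaround hypothesis, limiting the layout detector to single-image batches may sidestep the mismatch:

```yaml
# Hypothetical workaround (untested): keep the layout detector at batch 1 so
# the exported ONNX/OpenVINO graph's static batch dimension (1) is never exceeded.
SubModules:
  LayoutDetection:
    module_name: layout_detection
    model_name: PP-DocLayout-M
    model_dir: null
    batch_size: 1   # was 8; a 6-page PDF then produced a (6,3,640,640) batch
```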

Dataset:

Any multi-page PDF file.

  4. Error messages and related logs

Running the CLI, where 1111111.pdf is a multi-page PDF:

paddlex --pipeline /paddle/PP-StructureV3-debug.yaml \
    --input /paddle/1111111.pdf \
    --use_doc_orientation_classify true \
    --use_doc_unwarping true \
    --use_textline_orientation true \
    --use_general_ocr true \
    --use_table_recognition true \
    --use_formula_recognition true \
    --use_e2e_wired_table_rec_model false \
    --use_e2e_wireless_table_rec_model true \
    --save_path /paddle/outputdebug \
    --device cpu \
    --use_hpip

When the PP-DocLayout-M model uses the openvino inference backend:

Creating model: ('PP-LCNet_x1_0_doc_ori', None)
Model files already exist. Using cached files. To redownload, please delete the directory manually: `/.paddlex/official_models/PP-LCNet_x1_0_doc_ori`.
grep: warning: GREP_OPTIONS is deprecated; please use an alias or script
Inference backend: openvino
Inference backend config: cpu_num_threads=10
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(371)::InitFromOnnx	number of streams:1.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(375)::InitFromOnnx	affinity:YES.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(387)::InitFromOnnx	Compile OpenVINO model on device_name:CPU.
[INFO] ultra_infer/runtime/runtime.cc(283)::CreateOpenVINOBackend	Runtime initialized with Backend::OPENVINO in Device::CPU.
Creating model: ('UVDoc', None)
Model files already exist. Using cached files. To redownload, please delete the directory manually: `/.paddlex/official_models/UVDoc`.
The Paddle Inference backend is selected with the default configuration. This may not provide optimal performance.
Using Paddle Inference backend
Paddle predictor option: device_type: cpu,  device_id: None,  run_mode: paddle,  trt_dynamic_shapes: {'img': [[1, 3, 128, 64], [1, 3, 256, 128], [8, 3, 512, 256]]},  cpu_threads: 10,  delete_pass: [],  enable_new_ir: True,  enable_cinn: False,  trt_cfg_setting: {},  trt_use_dynamic_shapes: True,  trt_collect_shape_range_info: True,  trt_discard_cached_shape_range_info: False,  trt_dynamic_shape_input_data: None,  trt_shape_range_info_path: None,  trt_allow_rebuild_at_runtime: True,  mkldnn_cache_capacity: 10
Creating model: ('PP-DocBlockLayout', None)
Model files already exist. Using cached files. To redownload, please delete the directory manually: `/.paddlex/official_models/PP-DocBlockLayout`.
Inference backend: onnxruntime
Inference backend config: cpu_num_threads=10
[INFO] ultra_infer/runtime/runtime.cc(308)::CreateOrtBackend	Runtime initialized with Backend::ORT in Device::CPU.
Creating model: ('PP-DocLayout-M', None)
Model files already exist. Using cached files. To redownload, please delete the directory manually: `/.paddlex/official_models/PP-DocLayout-M`.
Inference backend: openvino
Inference backend config: cpu_num_threads=10
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(371)::InitFromOnnx	number of streams:1.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(375)::InitFromOnnx	affinity:YES.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(387)::InitFromOnnx	Compile OpenVINO model on device_name:CPU.
[INFO] ultra_infer/runtime/runtime.cc(283)::CreateOpenVINOBackend	Runtime initialized with Backend::OPENVINO in Device::CPU.
Creating model: ('PP-LCNet_x1_0_textline_ori', None)
Model files already exist. Using cached files. To redownload, please delete the directory manually: `/.paddlex/official_models/PP-LCNet_x1_0_textline_ori`.
The Paddle Inference backend is selected with the default configuration. This may not provide optimal performance.
Using Paddle Inference backend
Paddle predictor option: device_type: cpu,  device_id: None,  run_mode: mkldnn,  trt_dynamic_shapes: {'x': [[1, 3, 80, 160], [1, 3, 80, 160], [8, 3, 80, 160]]},  cpu_threads: 10,  delete_pass: [],  enable_new_ir: True,  enable_cinn: False,  trt_cfg_setting: {},  trt_use_dynamic_shapes: True,  trt_collect_shape_range_info: True,  trt_discard_cached_shape_range_info: False,  trt_dynamic_shape_input_data: None,  trt_shape_range_info_path: None,  trt_allow_rebuild_at_runtime: True,  mkldnn_cache_capacity: 10
Creating model: ('PP-OCRv5_server_det', None)
Model files already exist. Using cached files. To redownload, please delete the directory manually: `/.paddlex/official_models/PP-OCRv5_server_det`.
The Paddle Inference backend is selected with the default configuration. This may not provide optimal performance.
Using Paddle Inference backend
Paddle predictor option: device_type: cpu,  device_id: None,  run_mode: mkldnn,  trt_dynamic_shapes: {'x': [[1, 3, 32, 32], [1, 3, 736, 736], [1, 3, 4000, 4000]]},  cpu_threads: 10,  delete_pass: [],  enable_new_ir: True,  enable_cinn: False,  trt_cfg_setting: {},  trt_use_dynamic_shapes: True,  trt_collect_shape_range_info: True,  trt_discard_cached_shape_range_info: False,  trt_dynamic_shape_input_data: None,  trt_shape_range_info_path: None,  trt_allow_rebuild_at_runtime: True,  mkldnn_cache_capacity: 10
Creating model: ('PP-OCRv5_server_rec', None)
Model files already exist. Using cached files. To redownload, please delete the directory manually: `/.paddlex/official_models/PP-OCRv5_server_rec`.
The Paddle Inference backend is selected with the default configuration. This may not provide optimal performance.
Using Paddle Inference backend
Paddle predictor option: device_type: cpu,  device_id: None,  run_mode: mkldnn,  trt_dynamic_shapes: {'x': [[1, 3, 48, 160], [1, 3, 48, 320], [8, 3, 48, 3200]]},  cpu_threads: 10,  delete_pass: [],  enable_new_ir: True,  enable_cinn: False,  trt_cfg_setting: {},  trt_use_dynamic_shapes: True,  trt_collect_shape_range_info: True,  trt_discard_cached_shape_range_info: False,  trt_dynamic_shape_input_data: None,  trt_shape_range_info_path: None,  trt_allow_rebuild_at_runtime: True,  mkldnn_cache_capacity: 10
Creating model: ('PP-LCNet_x1_0_table_cls', None)
Model files already exist. Using cached files. To redownload, please delete the directory manually: `/.paddlex/official_models/PP-LCNet_x1_0_table_cls`.
Inference backend: openvino
Inference backend config: cpu_num_threads=10
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(371)::InitFromOnnx	number of streams:1.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(375)::InitFromOnnx	affinity:YES.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(387)::InitFromOnnx	Compile OpenVINO model on device_name:CPU.
[INFO] ultra_infer/runtime/runtime.cc(283)::CreateOpenVINOBackend	Runtime initialized with Backend::OPENVINO in Device::CPU.
Creating model: ('SLANeXt_wired', None)
Model files already exist. Using cached files. To redownload, please delete the directory manually: `/.paddlex/official_models/SLANeXt_wired`.
Inference backend: onnxruntime
Inference backend config: cpu_num_threads=10
[INFO] ultra_infer/runtime/runtime.cc(308)::CreateOrtBackend	Runtime initialized with Backend::ORT in Device::CPU.
Creating model: ('SLANet_plus', None)
Model files already exist. Using cached files. To redownload, please delete the directory manually: `/.paddlex/official_models/SLANet_plus`.
Inference backend: onnxruntime
Inference backend config: cpu_num_threads=10
[INFO] ultra_infer/runtime/runtime.cc(308)::CreateOrtBackend	Runtime initialized with Backend::ORT in Device::CPU.
Creating model: ('RT-DETR-L_wired_table_cell_det', None)
Model files already exist. Using cached files. To redownload, please delete the directory manually: `/.paddlex/official_models/RT-DETR-L_wired_table_cell_det`.
Inference backend: onnxruntime
Inference backend config: cpu_num_threads=10
[INFO] ultra_infer/runtime/runtime.cc(308)::CreateOrtBackend	Runtime initialized with Backend::ORT in Device::CPU.
Creating model: ('RT-DETR-L_wireless_table_cell_det', None)
Model files already exist. Using cached files. To redownload, please delete the directory manually: `/.paddlex/official_models/RT-DETR-L_wireless_table_cell_det`.
Inference backend: onnxruntime
Inference backend config: cpu_num_threads=10
[INFO] ultra_infer/runtime/runtime.cc(308)::CreateOrtBackend	Runtime initialized with Backend::ORT in Device::CPU.
Creating model: ('PP-FormulaNet_plus-L', None)
Model files already exist. Using cached files. To redownload, please delete the directory manually: `/.paddlex/official_models/PP-FormulaNet_plus-L`.
Inference backend: onnxruntime
Inference backend config: cpu_num_threads=10
[INFO] ultra_infer/runtime/runtime.cc(308)::CreateOrtBackend	Runtime initialized with Backend::ORT in Device::CPU.
Creating model: ('PP-Chart2Table', None)
Model files already exist. Using cached files. To redownload, please delete the directory manually: `/.paddlex/official_models/PP-Chart2Table`.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
/usr/local/lib/python3.10/dist-packages/paddlex/inference/models/doc_vlm/predictor.py:100: UserWarning: The PP-Chart2Table series does not support `use_hpip=True` for now.
  warnings.warn(
Loading configuration file /.paddlex/official_models/PP-Chart2Table/config.json
Loading weights file /.paddlex/official_models/PP-Chart2Table/model_state.pdparams
Loaded weights file from disk, setting weights to model.
All model checkpoint weights were used when initializing PPChart2TableInference.

All the weights of PPChart2TableInference were initialized from the model checkpoint at /.paddlex/official_models/PP-Chart2Table.
If your task is similar to the task the model of the checkpoint was trained on, you can already use PPChart2TableInference for predictions without further training.
Loading configuration file /.paddlex/official_models/PP-Chart2Table/generation_config.json
Traceback (most recent call last):
  File "/usr/local/bin/paddlex", line 8, in <module>
    sys.exit(console_entry())
  File "/usr/local/lib/python3.10/dist-packages/paddlex/__main__.py", line 26, in console_entry
    main()
  File "/usr/local/lib/python3.10/dist-packages/paddlex/paddlex_cli.py", line 509, in main
    pipeline_predict(
  File "/usr/local/lib/python3.10/dist-packages/paddlex/paddlex_cli.py", line 370, in pipeline_predict
    for res in result:
  File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/pipelines/_parallel.py", line 129, in predict
    yield from self._pipeline.predict(
  File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/pipelines/layout_parsing/pipeline_v2.py", line 1007, in predict
    layout_det_results = list(
  File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/models/base/predictor/base_predictor.py", line 219, in __call__
    yield from self.apply(input, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/models/base/predictor/base_predictor.py", line 277, in apply
    prediction = self.process(batch_data, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/models/object_detection/predictor.py", line 234, in process
    batch_preds = self.infer(batch_inputs)
  File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/models/common/static_infer.py", line 641, in __call__
    return self._call_multi_backend_infer(x)
  File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/models/common/static_infer.py", line 654, in _call_multi_backend_infer
    return self._multi_backend_infer(inputs)
  File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/models/common/static_infer.py", line 594, in __call__
    outputs = self.ui_runtime.infer(x)
  File "/usr/local/lib/python3.10/dist-packages/ultra_infer/runtime.py", line 65, in infer
    return self._runtime.infer(data)
RuntimeError: Exception from src/inference/src/cpp/infer_request.cpp:79:
Exception from src/inference/src/cpp/infer_request.cpp:66:
Exception from src/plugins/intel_cpu/src/infer_request.cpp:377:
Can't set the input tensor with index: 0, because the model input (shape=[1,3,640,640]) and the tensor (shape=(6.3.640.640)) are incompatible

When the PP-DocLayout-M model uses the onnxruntime inference backend:

(repeated content omitted)

Creating model: ('PP-DocLayout-M', None)
Model files already exist. Using cached files. To redownload, please delete the directory manually: `/.paddlex/official_models/PP-DocLayout-M`.
Inference backend: onnxruntime
Inference backend config: cpu_num_threads=10
[INFO] ultra_infer/runtime/runtime.cc(308)::CreateOrtBackend	Runtime initialized with Backend::ORT in Device::CPU.

(repeated content omitted)

All the weights of PPChart2TableInference were initialized from the model checkpoint at /.paddlex/official_models/PP-Chart2Table.
If your task is similar to the task the model of the checkpoint was trained on, you can already use PPChart2TableInference for predictions without further training.
Loading configuration file /.paddlex/official_models/PP-Chart2Table/generation_config.json
[ERROR] ultra_infer/runtime/backends/ort/ort_backend.cc(378)::Infer	Failed to Infer: Got invalid dimensions for input: image for the following indices
 index: 0 Got: 6 Expected: 1
 Please fix either the inputs/outputs or the model.
Traceback (most recent call last):
  File "/usr/local/bin/paddlex", line 8, in <module>
    sys.exit(console_entry())
  File "/usr/local/lib/python3.10/dist-packages/paddlex/__main__.py", line 26, in console_entry
    main()
  File "/usr/local/lib/python3.10/dist-packages/paddlex/paddlex_cli.py", line 509, in main
    pipeline_predict(
  File "/usr/local/lib/python3.10/dist-packages/paddlex/paddlex_cli.py", line 370, in pipeline_predict
    for res in result:
  File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/pipelines/_parallel.py", line 129, in predict
    yield from self._pipeline.predict(
  File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/pipelines/layout_parsing/pipeline_v2.py", line 1007, in predict
    layout_det_results = list(
  File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/models/base/predictor/base_predictor.py", line 219, in __call__
    yield from self.apply(input, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/models/base/predictor/base_predictor.py", line 278, in apply
    prediction = PredictionWrap(prediction, len(batch_data))
  File "/usr/local/lib/python3.10/dist-packages/paddlex/inference/models/base/predictor/base_predictor.py", line 56, in __init__
    assert len(data[k]) == num, f"{len(data[k])} != {num} for key {k}!"
AssertionError: 0 != 6 for key boxes!
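The assertion above comes from PaddleX's PredictionWrap, which checks that each per-key output list has one entry per batch sample; after the ORT Infer call fails, boxes is presumably empty (length 0) against the 6-page batch, hence 0 != 6. A minimal stdlib reproduction of that check (the function name here is illustrative; only the assert mirrors base_predictor.py):

```python
def check_prediction(data: dict, num: int) -> None:
    """Mirror of the length check in PredictionWrap.__init__: every per-key
    output list must have exactly one entry per sample in the batch."""
    for k in data:
        assert len(data[k]) == num, f"{len(data[k])} != {num} for key {k}!"

# A 6-image batch whose backend returned no boxes trips the check:
try:
    check_prediction({"boxes": []}, 6)
except AssertionError as e:
    print(e)  # 0 != 6 for key boxes!
```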

Environment

  5. PaddlePaddle, PaddleX, and Python version numbers

PaddlePaddle 3.0.0, PaddleX 3.2.1, Python 3.10.13. Details:

python --version
Python 3.10.13

pip list
Package               Version
--------------------- --------------------
aiohappyeyeballs      2.6.1
aiohttp               3.13.0
aiosignal             1.4.0
aistudio-sdk          0.3.8
annotated-types       0.7.0
anyio                 4.1.0
astor                 0.8.1
async-timeout         5.0.1
attrs                 25.4.0
bce-python-sdk        0.9.46
cachetools            6.2.1
certifi               2019.11.28
chardet               3.0.4
charset-normalizer    3.4.4
click                 8.3.0
coloredlogs           15.0.1
colorlog              6.9.0
cssselect             1.3.0
cssutils              2.11.1
dbus-python           1.2.16
decorator             5.1.1
distro-info           0.23+ubuntu1.1
einops                0.8.1
et_xmlfile            2.0.0
exceptiongroup        1.2.0
fastapi               0.119.0
filelock              3.20.0
filetype              1.2.0
flatbuffers           25.9.23
frozenlist            1.8.0
fsspec                2025.9.0
ftfy                  6.3.1
future                1.0.0
h11                   0.14.0
hf-xet                1.1.10
httpcore              1.0.2
httpx                 0.25.1
huggingface-hub       0.35.3
humanfriendly         10.0
idna                  2.8
imagesize             1.4.1
Jinja2                3.1.6
joblib                1.5.2
lxml                  6.0.2
MarkupSafe            3.0.3
ml_dtypes             0.5.3
modelscope            1.30.0
more-itertools        10.8.0
mpmath                1.3.0
multidict             6.7.0
networkx              3.4.2
numpy                 1.26.2
onnx                  1.17.0
onnx_graphsurgeon     0.5.8
onnxoptimizer         0.3.13
onnxruntime           1.22.0
opencv-contrib-python 4.10.0.84
openpyxl              3.1.5
opt-einsum            3.3.0
packaging             25.0
paddle2onnx           2.0.2rc3
paddlepaddle          3.0.0
paddlex               3.2.1
pandas                2.3.3
Pillow                10.1.0
pip                   23.3.1
polygraphy            0.49.26
premailer             3.10.0
prettytable           3.16.0
propcache             0.4.1
protobuf              4.25.1
psutil                7.1.0
py-cpuinfo            9.0.0
pyclipper             1.3.0.post6
pycryptodome          3.23.0
pydantic              2.12.1
pydantic_core         2.41.3
PyGObject             3.36.0
pypdfium2             4.30.0
python-apt            2.0.1+ubuntu0.20.4.1
python-dateutil       2.9.0.post0
pytz                  2025.2
PyYAML                6.0.2
regex                 2025.9.18
requests              2.32.5
requests-unixsocket   0.2.0
ruamel.yaml           0.18.15
ruamel.yaml.clib      0.2.14
scikit-learn          1.7.2
scipy                 1.15.3
setuptools            68.2.2
shapely               2.1.2
six                   1.14.0
sniffio               1.3.0
starlette             0.48.0
sympy                 1.14.0
threadpoolctl         3.6.0
tiktoken              0.12.0
tokenizers            0.22.1
tqdm                  4.67.1
typing_extensions     4.15.0
typing-inspection     0.4.2
tzdata                2025.2
ujson                 5.11.0
ultra-infer-python    1.2.0
unattended-upgrades   0.1
urllib3               2.5.0
uvicorn               0.37.0
wcwidth               0.2.14
wheel                 0.45.1
yarl                  1.22.0
  6. Operating system (Linux/Windows/MacOS)

Any Linux distribution; currently reproduced reliably on Debian 13 and Ubuntu 22.04.

  7. CUDA/cuDNN version?

N/A (CPU inference mode).
