Hardware: Kunpeng 9204 + Ascend 910b48 (each NPU has 32 GB of device memory; NPUs 4 and 5 are used) + 512 GB of RAM.
The error is as follows:
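For context, here is a minimal sketch of how the "use NPUs 4 and 5" constraint is typically expressed on an Ascend host. It assumes the CANN environment variable `ASCEND_RT_VISIBLE_DEVICES` and a PaddlePaddle build that exposes Ascend as the `npu` custom device; both are assumptions about this environment, not something confirmed in this report:

```python
import os

# Assumption: ASCEND_RT_VISIBLE_DEVICES is the CANN variable that limits which
# physical NPUs this process can see; it must be set before paddle initializes.
os.environ.setdefault("ASCEND_RT_VISIBLE_DEVICES", "4,5")

import paddle

# Assumption: an NPU-enabled PaddlePaddle build registers Ascend as the "npu"
# custom device type; a CPU-only build reports no custom devices here.
print(paddle.device.get_all_custom_device_type())           # expected to include "npu"
print(paddle.device.is_compiled_with_custom_device("npu"))  # expected True
```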
```
C++ Traceback (most recent call last):
0   paddle::AnalysisPredictor::ZeroCopyRun(bool)
1   paddle::framework::NaiveExecutor::RunInterpreterCore(std::vector<std::string, std::allocator<std::string > > const&, bool, bool)
2   paddle::framework::InterpreterCore::Run(std::vector<std::string, std::allocator<std::string > > const&, bool, bool, bool, bool)
3   paddle::framework::PirInterpreter::Run(std::vector<std::string, std::allocator<std::string > > const&, bool, bool, bool, bool)
4   paddle::framework::PirInterpreter::TraceRunImpl()
5   paddle::framework::PirInterpreter::TraceRunInstructionList(std::vector<std::unique_ptr<paddle::framework::InstructionBase, std::default_delete<paddle::framework::InstructionBase> >, std::allocator<std::unique_ptr<paddle::framework::InstructionBase, std::default_delete<paddle::framework::InstructionBase> > > > const&)
6   paddle::framework::PirInterpreter::RunInstructionBase(paddle::framework::InstructionBase*)
7   paddle::framework::PhiKernelInstruction::Run()
8   phi::KernelImpl<void (*)(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::vector<int, std::allocator<int> > const&, std::vector<int, std::allocator<int> > const&, std::string const&, std::vector<int, std::allocator<int> > const&, int, std::string const&, phi::DenseTensor*), &(void phi::ConvKernel<float, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::vector<int, std::allocator<int> > const&, std::vector<int, std::allocator<int> > const&, std::string const&, std::vector<int, std::allocator<int> > const&, int, std::string const&, phi::DenseTensor*))>::Compute(phi::KernelContext*)
9   void phi::ConvKernelImpl<float, phi::CPUContext>(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::vector<int, std::allocator<int> > const&, std::vector<int, std::allocator<int> > const&, std::string const&, int, std::vector<int, std::allocator<int> > const&, std::string const&, phi::DenseTensor*)
10  phi::funcs::Im2ColFunctor<(phi::funcs::ColFormat)0, phi::CPUContext, float>::operator()(phi::CPUContext const&, phi::DenseTensor const&, std::vector<int, std::allocator<int> > const&, std::vector<int, std::allocator<int> > const&, std::vector<int, std::allocator<int> > const&, phi::DenseTensor*, common::DataLayout)

Error Message Summary:
FatalError: `Segmentation fault` is detected by the operating system.
  [TimeInfo: *** Aborted at 1758613257 (unix time) try "date -d @1758613257" if you are using GNU date ***]
  [SignalInfo: *** SIGSEGV (@0xbf23) received by PID 48931 (TID 0xffff93285e90) from PID 48931 ***]

Segmentation fault (core dumped)
```
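Note that frames 8-10 reference `phi::CPUContext` and `phi::ConvKernel<float, phi::CPUContext>`, i.e. the crashing convolution is running the CPU kernel rather than an Ascend one. A hedged sketch of pointing the pipeline at one of the intended NPUs explicitly, assuming PPStructureV3 accepts the same `device` argument as other PaddleOCR 3.x pipelines and that the installed PaddlePaddle build ships NPU kernels:

```python
from paddleocr import PPStructureV3

# Assumption: PPStructureV3 takes a `device` argument like the other PaddleOCR
# 3.x pipelines; "npu:4" targets NPU 4 on an NPU-enabled build. If the build
# has no NPU support, this may fail or run on the CPU instead, which would be
# consistent with the CPUContext frames in the traceback above.
pipeline = PPStructureV3(device="npu:4")
output = pipeline.predict(input="./1.pdf")
```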
The code is as follows:
```python
from pathlib import Path

from paddleocr import PPStructureV3

input_file = "./1.pdf"
output_path = Path("./output")

pipeline = PPStructureV3()
output = pipeline.predict(input=input_file)

markdown_list = []
markdown_images = []
for res in output:
    md_info = res.markdown
    markdown_list.append(md_info)
    markdown_images.append(md_info.get("markdown_images", {}))
    res.print()                               # print the structured prediction output
    res.save_to_json(save_path="output")      # save the structured JSON result for the current image
    res.save_to_markdown(save_path="output")  # save the Markdown result for the current image
    res.save_to_img(save_path="output")       # save the annotated visualization images

markdown_texts = pipeline.concatenate_markdown_pages(markdown_list)

mkd_file_path = output_path / f"{Path(input_file).stem}.md"
mkd_file_path.parent.mkdir(parents=True, exist_ok=True)
with open(mkd_file_path, "w", encoding="utf-8") as f:
    f.write(markdown_texts)

for item in markdown_images:
    if item:
        for path, image in item.items():
            file_path = output_path / path
            file_path.parent.mkdir(parents=True, exist_ok=True)
            image.save(file_path)
```
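One generic way to narrow down which page or call triggers the SIGSEGV is the standard-library `faulthandler` module, which prints the Python-level stack when the process receives the signal. This is a debugging sketch, not part of the original script:

```python
import faulthandler

# Dump the Python traceback of all threads when SIGSEGV (or SIGABRT, etc.)
# is received, so the crashing predict() iteration can be identified.
faulthandler.enable()

from paddleocr import PPStructureV3

pipeline = PPStructureV3()
for page_idx, res in enumerate(pipeline.predict(input="./1.pdf")):
    # The last index printed before the crash marks the page being processed.
    print(f"page {page_idx} processed")
```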