Problem during the model-loading stage when starting langchain-chatchat — can anyone help find the cause? #4446
Unanswered
luckydog1005 asked this question in Q&A
Replies: 0 comments
Below is the complete console output after running the startup command. The failure seems to happen while model_worker is running, but I can't work out the deeper cause. Has anyone hit a similar problem, or could you suggest some troubleshooting ideas? Thanks, everyone!
(langchain) root@autodl-container-317042b6f6-6507b3ff:~/autodl-tmp/WebUI/Langchain-Chatchat-master# python3 startup.py --llm-api
==============================Langchain-Chatchat Configuration==============================
Operating system: Linux-5.15.0-94-generic-x86_64-with-glibc2.31.
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0]
Project version: v0.2.10
langchain version: 0.0.354. fastchat version: 0.2.35
Current text splitter: ChineseRecursiveTextSplitter
LLM model(s) being started: ['chatglm3-6b'] @ cpu
{'device': 'cuda',
'host': '0.0.0.0',
'infer_turbo': False,
'model_path': '/root/autodl-tmp/WebUI/Langchain-Chatchat-master/models/chatglm3-6b',
'model_path_exists': True,
'port': 20002}
Current Embeddings model: bge-large-zh @ cpu
==============================Langchain-Chatchat Configuration==============================
2024-07-06 11:33:38,062 - startup.py[line:655] - INFO: Starting services:
2024-07-06 11:33:38,062 - startup.py[line:656] - INFO: To view llm_api logs, go to /root/autodl-tmp/WebUI/Langchain-Chatchat-master/logs
/root/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: Model startup will be rewritten in Langchain-Chatchat 0.3.x to support more modes and faster startup; the related functionality in 0.2.x will be deprecated
warn_deprecated(
2024-07-06 11:34:13 | ERROR | stderr | INFO: Started server process [1207]
2024-07-06 11:34:13 | ERROR | stderr | INFO: Waiting for application startup.
2024-07-06 11:34:13 | ERROR | stderr | INFO: Application startup complete.
2024-07-06 11:34:13 | ERROR | stderr | INFO: Uvicorn running on http://0.0.0.0:20000 (Press CTRL+C to quit)
2024-07-06 11:34:15 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker 7a610968 ...
2024-07-06 11:34:15 | ERROR | stderr | Process model_worker - chatglm3-6b:
2024-07-06 11:34:15 | ERROR | stderr | Traceback (most recent call last):
2024-07-06 11:34:15 | ERROR | stderr | File "/root/miniconda3/envs/langchain/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
2024-07-06 11:34:15 | ERROR | stderr | self.run()
2024-07-06 11:34:15 | ERROR | stderr | File "/root/miniconda3/envs/langchain/lib/python3.10/multiprocessing/process.py", line 108, in run
2024-07-06 11:34:15 | ERROR | stderr | self._target(*self._args, **self._kwargs)
2024-07-06 11:34:15 | ERROR | stderr | File "/root/autodl-tmp/WebUI/Langchain-Chatchat-master/startup.py", line 389, in run_model_worker
2024-07-06 11:34:15 | ERROR | stderr | app = create_model_worker_app(log_level=log_level, **kwargs)
2024-07-06 11:34:15 | ERROR | stderr | File "/root/autodl-tmp/WebUI/Langchain-Chatchat-master/startup.py", line 217, in create_model_worker_app
2024-07-06 11:34:15 | ERROR | stderr | worker = ModelWorker(
2024-07-06 11:34:15 | ERROR | stderr | File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/fastchat/serve/model_worker.py", line 77, in init
2024-07-06 11:34:15 | ERROR | stderr | self.model, self.tokenizer = load_model(
2024-07-06 11:34:15 | ERROR | stderr | File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/fastchat/model/model_adapter.py", line 348, in load_model
2024-07-06 11:34:15 | ERROR | stderr | model, tokenizer = adapter.load_model(model_path, kwargs)
2024-07-06 11:34:15 | ERROR | stderr | File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/fastchat/model/model_adapter.py", line 816, in load_model
2024-07-06 11:34:15 | ERROR | stderr | tokenizer = AutoTokenizer.from_pretrained(
2024-07-06 11:34:15 | ERROR | stderr | File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 774, in from_pretrained
2024-07-06 11:34:15 | ERROR | stderr | return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
2024-07-06 11:34:15 | ERROR | stderr | File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2028, in from_pretrained
2024-07-06 11:34:15 | ERROR | stderr | return cls._from_pretrained(
2024-07-06 11:34:15 | ERROR | stderr | File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2260, in _from_pretrained
2024-07-06 11:34:15 | ERROR | stderr | tokenizer = cls(*init_inputs, **init_kwargs)
2024-07-06 11:34:15 | ERROR | stderr | File "/root/.cache/huggingface/modules/transformers_modules/chatglm3-6b/tokenization_chatglm.py", line 109, in init
2024-07-06 11:34:15 | ERROR | stderr | self.tokenizer = SPTokenizer(vocab_file)
2024-07-06 11:34:15 | ERROR | stderr | File "/root/.cache/huggingface/modules/transformers_modules/chatglm3-6b/tokenization_chatglm.py", line 17, in init
2024-07-06 11:34:15 | ERROR | stderr | assert os.path.isfile(model_path), model_path
2024-07-06 11:34:15 | ERROR | stderr | File "/root/miniconda3/envs/langchain/lib/python3.10/genericpath.py", line 30, in isfile
2024-07-06 11:34:15 | ERROR | stderr | st = os.stat(path)
2024-07-06 11:34:15 | ERROR | stderr | TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
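The traceback ends inside ChatGLM3's bundled tokenizer code: `SPTokenizer(vocab_file)` is called with `vocab_file=None`, so `os.path.isfile(None)` raises the `TypeError`. One common cause of a `None` vocab file is an incomplete model download, i.e. the tokenizer files are missing from the model directory. Below is a minimal diagnostic sketch; the required-file list is an assumption based on the usual ChatGLM3 checkpoint layout (`tokenizer.model` is the SentencePiece file the traceback is trying to open), not something confirmed by the log:

```python
import os

# Assumed model directory, taken from the 'model_path' shown in the startup output.
MODEL_DIR = "/root/autodl-tmp/WebUI/Langchain-Chatchat-master/models/chatglm3-6b"

# Files a typical ChatGLM3 checkpoint directory should contain (assumption).
REQUIRED = ["tokenizer.model", "tokenizer_config.json", "config.json"]

def check_model_dir(model_dir):
    """Return the list of required files missing from model_dir."""
    return [f for f in REQUIRED if not os.path.isfile(os.path.join(model_dir, f))]

if __name__ == "__main__":
    missing = check_model_dir(MODEL_DIR)
    if missing:
        # Likely an incomplete download or a wrong model_path in the config.
        print("Missing files:", missing)
    else:
        print("All required tokenizer/config files are present.")
```

If `tokenizer.model` turns out to be missing, re-downloading the full chatglm3-6b checkpoint into that directory would be the first thing to try.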