### Your current environment
I am trying to get started with contributing to the vLLM code base, and I followed the developer guide to set things up. When running `pip install -r requirements/dev.txt`, I got a `Failed building wheel for mamba-ssm` error.

Note: I added `--extra-index-url https://download.pytorch.org/whl/cu128` to `requirements/dev.txt`.
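As a possible workaround (my assumption, not something confirmed by the vLLM docs): `mamba-ssm` compiles CUDA extensions against `torch`, so its wheel build can fail inside pip's isolated build environment where `torch` is not visible. Installing `torch` first and then building without isolation may get past the wheel error:

```shell
# Assumption: the wheel failure comes from pip's build isolation hiding torch.
# Install torch from the cu128 index first, then build mamba-ssm against it.
pip install torch --extra-index-url https://download.pytorch.org/whl/cu128
pip install mamba-ssm --no-build-isolation
```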
Running `collect_env.py` also fails:
```
((venv) ) root@a3a95b0352ca:~/vllm# python collect_env.py
/workspace/vllm/vllm/__init__.py:6: RuntimeWarning: Failed to read commit hash:
No module named 'vllm._version'
  from .version import __version__, __version_tuple__  # isort:skip
INFO 06-08 00:01:23 [__init__.py:244] Automatically detected platform cuda.
Traceback (most recent call last):
  File "/workspace/vllm/collect_env.py", line 19, in <module>
    from vllm.envs import environment_variables
  File "/workspace/vllm/vllm/__init__.py", line 13, in <module>
    from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs
  File "/workspace/vllm/vllm/engine/arg_utils.py", line 22, in <module>
    from vllm.config import (BlockSize, CacheConfig, CacheDType, CompilationConfig,
  File "/workspace/vllm/vllm/config.py", line 37, in <module>
    from vllm.model_executor.layers.quantization import (QUANTIZATION_METHODS,
  File "/workspace/vllm/vllm/model_executor/__init__.py", line 4, in <module>
    from vllm.model_executor.parameter import (BasevLLMParameter,
  File "/workspace/vllm/vllm/model_executor/parameter.py", line 10, in <module>
    from vllm.distributed import get_tensor_model_parallel_rank
  File "/workspace/vllm/vllm/distributed/__init__.py", line 4, in <module>
    from .communication_op import *
  File "/workspace/vllm/vllm/distributed/communication_op.py", line 9, in <module>
    from .parallel_state import get_tp_group
  File "/workspace/vllm/vllm/distributed/parallel_state.py", line 150, in <module>
    from vllm.platforms import current_platform
  File "/workspace/vllm/vllm/platforms/__init__.py", line 276, in __getattr__
    _current_platform = resolve_obj_by_qualname(
                        ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/vllm/vllm/utils.py", line 2239, in resolve_obj_by_qualname
    module = importlib.import_module(module_name)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/importlib/__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/vllm/vllm/platforms/cuda.py", line 18, in <module>
    import vllm._C  # noqa
    ^^^^^^^^^^^^^^
ModuleNotFoundError: No module named 'vllm._C'
```
### How you are installing vllm
```shell
git clone https://github.com/vllm-project/vllm.git
cd vllm
# add `--extra-index-url https://download.pytorch.org/whl/cu128` to `requirements/dev.txt`
pip install -r requirements/dev.txt
```
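For what it's worth, the `ModuleNotFoundError: No module named 'vllm._C'` suggests the compiled extensions were never built: installing `requirements/dev.txt` only pulls in dependencies and does not build vLLM itself. A sketch of the step that appears to be missing, based on the from-source install instructions in the developer guide (the `VLLM_USE_PRECOMPILED` variant is the documented shortcut for Python-only changes):

```shell
# Build and install vLLM in editable mode; this compiles the C/CUDA
# extensions that provide the vllm._C module:
pip install -e .

# Alternatively, for Python-only development, reuse precompiled binaries
# instead of compiling from source:
VLLM_USE_PRECOMPILED=1 pip install -e .
```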
### Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.