Replies: 3 comments
This looks like a PyTorch error. Check that your PyTorch version matches what your vLLM build expects.
Root cause: the compiled vLLM extension (`_C.abi3.so`) was built against a different PyTorch than the one installed, so the dynamic loader can't resolve the symbol.

Fixes:

1. Reinstall vLLM with matching PyTorch:

```bash
pip uninstall vllm
pip install vllm --no-cache-dir
```

2. Check your PyTorch version:

```python
import torch
print(torch.__version__)
print(torch.version.cuda)
```

vLLM needs specific CUDA + PyTorch combos.

3. Full clean install:

```bash
pip uninstall vllm torch
pip cache purge
pip install torch==2.2.0+cu121 -f https://download.pytorch.org/whl/torch_stable.html
pip install vllm
```

4. Build from source (if you have an unusual setup):

```bash
git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -e .
```

Common causes: a vLLM wheel built against a different PyTorch/CUDA combination than the one installed, or a stale cached wheel.

We've debugged these ABI issues at RevolutionAI many times. The clean reinstall usually fixes it. What's your setup?
This error usually points to a mismatch between the PyTorch version installed in your environment and the version vLLM was built or linked against. Specifically, the loader is failing to resolve a TorchScript function: the mangled symbol `_ZN5torch3jit17parseSchemaOrNameERKSsb` demangles to `torch::jit::parseSchemaOrName(std::string const&, bool)`. From experience, these kinds of issues often occur when using pre-built binaries (e.g., pip wheels) compiled against a different PyTorch release than the one actually installed.
If none of these steps work, check whether vLLM uses any custom compiled components. In such cases, ensuring your environment matches the compiler/runtime versions used for building those components (e.g., GCC or Clang) might help. Let me know if you're still stuck, and I can dig deeper into specifics.
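As a side note, you can read a lot from the mangled symbol itself. The sketch below extracts the qualified name from the simple `_ZN...E` nested-name form of the Itanium C++ ABI; it is a toy written for this answer (use `c++filt` or `llvm-cxxfilt` for real demangling). One detail worth noticing: the `Ss` in the parameter list is the pre-C++11-ABI `std::string`, so this particular failure often comes down to a `_GLIBCXX_USE_CXX11_ABI` mismatch between builds.

```python
import re


def demangle_nested_name(symbol: str) -> str:
    """Extract the qualified name from an Itanium-mangled nested symbol.

    Handles only the leading _ZN <len><name>... components; a real
    demangler covers the full grammar (templates, substitutions, etc.).
    """
    if not symbol.startswith("_ZN"):
        raise ValueError("not a mangled nested name")
    rest = symbol[3:]
    parts = []
    # Each component is a decimal length followed by that many chars.
    while rest and rest[0].isdigit():
        m = re.match(r"\d+", rest)
        n = int(m.group())
        start = m.end()
        parts.append(rest[start:start + n])
        rest = rest[start + n:]
    return "::".join(parts)


print(demangle_nested_name("_ZN5torch3jit17parseSchemaOrNameERKSsb"))
# torch::jit::parseSchemaOrName
```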
......lib/python3.12/site-packages/vllm/_C.abi3.so: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKSsb
`vllm serve` fails to start with this error.