Package Details: python-vllm-cuda 0.19.0-2
| Git Clone URL: | https://aur.archlinux.org/python-vllm-cuda.git (read-only) |
|---|---|
| Package Base: | python-vllm-cuda |
| Description: | high-throughput and memory-efficient inference and serving engine for LLMs |
| Upstream URL: | https://github.com/vllm-project/vllm |
| Licenses: | Apache-2.0 |
| Conflicts: | python-vllm |
| Provides: | python-vllm |
| Submitter: | envolution |
| Maintainer: | ncihnegn |
| Last Packager: | ncihnegn |
| Votes: | 1 |
| Popularity: | 0.001608 |
| First Submitted: | 2024-12-01 16:12 (UTC) |
| Last Updated: | 2026-04-06 04:12 (UTC) |
Dependencies (57)
- numactl (numactl-git [AUR])
- python-aiohttp
- python-blake3 [AUR]
- python-boto3 (python-boto3-git [AUR])
- python-cachetools
- python-cloudpickle
- python-diskcache [AUR]
- python-einops [AUR]
- python-fastapi
- python-gguf [AUR] (python-gguf-git [AUR])
- python-huggingface-hub (python-huggingface-hub-git [AUR])
- python-ijson
- python-importlib-metadata
- python-model-hosting-container-standards [AUR]
- python-msgspec
- python-openai
- python-openai-harmony [AUR] (python-openai-harmony-git [AUR])
- python-opencv (python-opencv-cuda)
- python-partial-json-parser [AUR] (python-partial-json-parser-git [AUR])
- python-prometheus-fastapi-instrumentator [AUR]
- …plus 37 more dependencies (not shown)
Required by (2)
- python-pydantic-ai (requires python-vllm) (optional)
- python-pydantic-ai-slim (requires python-vllm) (optional)
Sources (3)
Latest Comments
envolution commented on 2025-02-12 21:30 (UTC)
@Sherlock-Holo thanks for your report; it's been added to makedepends.
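For context, a fix like this amounts to one extra entry in the PKGBUILD's makedepends array. A minimal sketch, assuming typical Arch Python packaging; the surrounding package list is illustrative, not the maintainer's actual PKGBUILD:

```bash
# PKGBUILD (excerpt) -- illustrative sketch, not the maintainer's actual file.
# setuptools-scm is imported by vllm's setup.py at build time, so it
# belongs in makedepends (build-time deps), not depends (runtime deps).
makedepends=(
  git
  python-build
  python-installer
  python-wheel
  python-setuptools-scm  # fixes: No module named 'setuptools_scm'
)
```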
Sherlock-Holo commented on 2025-02-11 09:28 (UTC) (edited on 2025-02-11 09:30 (UTC) by Sherlock-Holo)
When building this package, it says:
```
Traceback (most recent call last):
  File "/home/sherlock/.cache/yay/python-vllm-cuda/src/vllm/setup.py", line 18, in <module>
    from setuptools_scm import get_version
ModuleNotFoundError: No module named 'setuptools_scm'
```
If I add the missing makedepends python-setuptools-scm, it then fails with:
```
Traceback (most recent call last):
  File "/home/sherlock/.cache/yay/python-vllm-cuda/src/vllm/setup.py", line 633, in <module>
    version=get_vllm_version(),
            ~~~~~~~~~~~~~~~~^^
  File "/home/sherlock/.cache/yay/python-vllm-cuda/src/vllm/setup.py", line 527, in get_vllm_version
    raise RuntimeError("Unknown runtime environment")
RuntimeError: Unknown runtime environment
```
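This error comes from setup.py's get_vllm_version(), which raises it when it cannot detect a supported build target. A commonly used workaround is to pin the target explicitly via vllm's VLLM_TARGET_DEVICE build switch; the sketch below is a hedged example of how that could look inside a PKGBUILD build() function, not this package's actual recipe:

```bash
# Hedged sketch of a PKGBUILD build() function; VLLM_TARGET_DEVICE is
# vllm's own build-target switch, everything else here is illustrative.
build() {
  cd "$srcdir/vllm"
  # Pin the build target so setup.py's get_vllm_version() does not
  # fall through to: RuntimeError("Unknown runtime environment")
  export VLLM_TARGET_DEVICE=cuda
  python -m build --wheel --no-isolation
}
```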
envolution commented on 2024-12-28 04:47 (UTC) (edited on 2024-12-28 04:51 (UTC) by envolution)
Not currently working due to lack of Python 3.13 support in vllm-flash-attention. Try python-vllm-bin or the CPU version, python-vllm.
Pinned Comments
envolution commented on 2024-12-28 04:47 (UTC) (edited on 2024-12-28 04:51 (UTC) by envolution)
Not currently working due to lack of Python 3.13 support in vllm-flash-attention. Try python-vllm-bin or the CPU version, python-vllm.