Package Details: python-vllm-bin 0.11.0-1
Git Clone URL: https://aur.archlinux.org/python-vllm-bin.git (read-only)
Package Base: python-vllm-bin
Description: High-throughput and memory-efficient inference and serving engine for LLMs
Upstream URL: https://github.com/vllm-project/vllm
Licenses: Apache-2.0
Conflicts: python-vllm
Provides: python-vllm
Submitter: envolution
Maintainer: envolution
Last Packager: envolution
Votes: 2
Popularity: 0.39
First Submitted: 2024-11-30 16:59 (UTC)
Last Updated: 2025-10-04 13:44 (UTC)
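For reference, a minimal sketch of building and installing the package from the clone URL above via the standard AUR workflow (an AUR helper such as paru or yay automates the same steps, including the AUR-only dependencies listed below):

```
git clone https://aur.archlinux.org/python-vllm-bin.git
cd python-vllm-bin
makepkg -si    # builds the package and installs it along with its repo dependencies
```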
Dependencies (55)
- numactl (numactl-gitAUR)
- python-aiohttp
- python-blake3AUR
- python-boto3
- python-cachetools
- python-cloudpickle
- python-diskcacheAUR
- python-fastapi
- python-ggufAUR
- python-huggingface-hub (python-huggingface-hub-gitAUR)
- python-importlib-metadata
- python-msgspecAUR
- python-openai
- python-opencv (python-opencv-cuda)
- python-partial-json-parserAUR (python-partial-json-parser-gitAUR)
- python-prometheus-fastapi-instrumentatorAUR
- python-psutil
- python-py-cpuinfo
- python-pybase64AUR
- python-pydantic
- python-pytorch (python-pytorch-cxx11abiAUR, python-pytorch-cxx11abi-optAUR, python-pytorch-cxx11abi-cudaAUR, python-pytorch-cxx11abi-opt-cudaAUR, python-pytorch-cxx11abi-rocmAUR, python-pytorch-cxx11abi-opt-rocmAUR, python-pytorch-cuda12.9AUR, python-pytorch-opt-cuda12.9AUR, python-pytorch-cuda, python-pytorch-opt, python-pytorch-opt-cuda, python-pytorch-opt-rocm, python-pytorch-rocm)
- python-pyzmq
- python-soundfile
- python-sphinx (python-sphinx-gitAUR)
- python-starlette
- python-sympy (python-sympy-gitAUR)
- python-torchvision (python-torchvision-gitAUR, python-torchvision-rocm-binAUR, python-torchvision-rocmAUR, python-torchvision-cuda12.9AUR, python-torchvision-cuda)
- python-tqdm
- python-transformersAUR
- python-triton
- python-uvloop
- python-watchfiles
- uvicorn
- python-installer (make)
- unzip (unzip-zstdAUR, unzip_pAUR, unzip-natspecAUR) (make)
- zip (zip-natspecAUR) (make)
- cuda (cuda11.1AUR, cuda-12.2AUR, cuda12.0AUR, cuda11.4AUR, cuda11.4-versionedAUR, cuda12.0-versionedAUR, cuda-12.5AUR, cuda-12.9AUR) (optional) – use nvidia GPU
- cuda-tools (cuda11.1-toolsAUR, cuda12.0-toolsAUR, cuda11.4-toolsAUR, cuda11.4-versioned-toolsAUR, cuda12.0-versioned-toolsAUR) (optional) – use nvidia GPU
- python-compressed-tensorsAUR (optional) – required to load compressed tensor files
- python-datasets (optional) – tools to benchmark scripts
- python-depyf (optional) – required for debugging and profiling with compilation config
- python-einopsAUR (optional) – required for Qwen2-VL models
- python-lark (python-lark-gitAUR, python-lark-parser) (optional) – parsing toolkit
- python-lm-format-enforcer (optional) – required for JSON/regex LLM output
- python-mistral-commonAUR (optional) – mistral tools for opencv
- python-msgspecAUR (optional) – JSON/MessagePack library with validation
- python-openai (optional) – required for openai protocols
- python-outlinesAUR (optional) – guided text generation
- python-pillow (python-pillow-simd-gitAUR) (optional) – required for image processing
- python-prometheus_client (optional) – Prometheus instrumentation library for Python applications
- python-tiktoken (python-tiktoken-gitAUR) (optional) – required for DBRX tokenizer
- python-torchaudioAUR (python-torchaudio-gitAUR, python-torchaudio-rocmAUR) (optional) – required for image processor of minicpm-o-2.6
- python-torchvision (python-torchvision-gitAUR, python-torchvision-rocm-binAUR, python-torchvision-rocmAUR, python-torchvision-cuda12.9AUR, python-torchvision-cuda) (optional) – required for image processor of phi3v
- python-typing_extensions (optional) – typing hints
- python-xgrammar (optional) – flexible structured generation
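Once installed, the package ships the `vllm` console command; a minimal smoke test, assuming a supported GPU stack is present (the model name below is only a placeholder example):

```
vllm -h                                  # show available subcommands and options
vllm serve Qwen/Qwen2.5-0.5B-Instruct    # start the OpenAI-compatible API server with an example model
```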
Latest Comments
aspirogrammer commented on 2025-08-27 15:30 (UTC)
These packages were needed in my case:
python-cbor2 python-setproctitle python-pytorch-cuda
(CPU-based pytorch wasn't sufficient).
envolution commented on 2025-08-20 06:18 (UTC)
@bash000000 thanks, I thought they had forgotten to release it. Since the 3.13 commit came out after the 0.10.1 release, I've tried to patch the .whl, but admittedly I don't have time to test the functionality - if there are any other issues, please feel free to let me know
bash000000 commented on 2025-08-20 05:09 (UTC)
Now vllm can work correctly on Python 3.13. Although it is incompatible with the METADATA requirements, the cu128 build of vllm does not carry the cu128 flag: https://github.com/vllm-project/vllm/releases/download/v0.10.1/vllm-0.10.1-cp38-abi3-manylinux1_x86_64.whl is the cu128 version
envolution commented on 2025-08-04 01:34 (UTC)
currently not initializing due to python 3.13 incompatibilities - I suggest running in a virtualenv with python 3.9-3.12 until this is resolved
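For reference, a rough sketch of that workaround, assuming an older interpreter such as Python 3.12 is available (e.g. from the AUR) and pulling the upstream wheel from PyPI into the virtualenv rather than using this package:

```
python3.12 -m venv ~/.venvs/vllm    # any interpreter in the suggested 3.9-3.12 range works
source ~/.venvs/vllm/bin/activate
pip install vllm                    # upstream PyPI wheel, independent of python-vllm-bin
```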
showgood163 commented on 2025-07-29 16:41 (UTC)
Hi, python-tiktonek is a typo for python-tiktoken, see https://github.com/openai/tiktoken for details.
envolution commented on 2025-06-04 07:34 (UTC)
@NonerKao thanks!
NonerKao commented on 2025-06-04 05:42 (UTC)
it seems python-cloudpickle is required as well
Sherlock-Holo commented on 2025-02-17 07:05 (UTC)
it seems python-msgspec is not an optional dependency
run
vllm -h
will say
sterusus commented on 2025-01-29 13:19 (UTC)
Dependencies not properly defined in the package file.