Package Details: python-vllm 0.10.1.1-1

Git Clone URL: https://aur.archlinux.org/python-vllm.git (read-only)
Package Base: python-vllm
Description: A high-throughput and memory-efficient inference and serving engine for LLMs
Upstream URL: https://github.com/vllm-project/vllm
Licenses: Apache-2.0
Submitter: envolution
Maintainer: envolution
Last Packager: envolution
Votes: 2
Popularity: 1.06
First Submitted: 2024-12-01 16:07 (UTC)
Last Updated: 2025-08-21 01:22 (UTC)

Dependencies (56)

Required by (0)

Sources (2)

Pinned Comments

envolution commented on 2025-08-04 01:31 (UTC)

@nipsky looks like some new incompatibility with Python 3.13 - I was able to reproduce it. Unfortunately, the only feasible workaround at the moment is to run it in a virtualenv using Python 3.9-3.12. Upstream is working on 3.13 support, but it's not quite there yet.

To be honest, we'll probably be on 3.14 by the time they support 3.13. I'll try to have a look and see if I can patch it to initialize, but this week is kind of busy for me, so it won't be quick.
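Since an unsupported interpreter fails in a confusing way at import time, a small version guard can fail fast instead. This is only a sketch (not part of vllm or this package); the function name and the 3.9-3.12 range come from the workaround described above:

```python
import sys

def vllm_supported_python(version=None):
    """Return True if `version` (default: the current interpreter) falls in
    the Python 3.9-3.12 range that vllm currently initializes under.

    Hypothetical helper for illustration; the supported range is taken from
    the pinned comment, not from vllm's own metadata.
    """
    if version is None:
        version = sys.version_info[:2]
    return (3, 9) <= tuple(version) <= (3, 12)

if __name__ == "__main__":
    if vllm_supported_python():
        print("interpreter OK for vllm")
    else:
        print("use a 3.9-3.12 virtualenv for vllm")
```

Running this check before `import vllm` gives a clear message under Python 3.13 instead of the initialization error.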

Latest Comments


envolution commented on 2024-12-01 16:08 (UTC)

Sadly, this won't build without detecting CUDA and its tools - this is the CPU-only build.