Package Details: llama.cpp b8864-4

Git Clone URL: https://aur.archlinux.org/llama.cpp.git (read-only)
Package Base: llama.cpp
Description: Port of Facebook's LLaMA model in C/C++
Upstream URL: https://github.com/ggml-org/llama.cpp
Licenses: MIT
Conflicts: ggml, libggml, llama.cpp
Provides: llama.cpp
Submitter: txtsd
Maintainer: fabse
Last Packager: fabse
Votes: 15
Popularity: 2.38
First Submitted: 2024-10-26 15:38 (UTC)
Last Updated: 2026-04-21 09:24 (UTC)

Pinned Comments

fabse commented on 2026-04-13 16:15 (UTC)

CI's running now, lemme know if it gets outdated for some reason

txtsd commented on 2024-10-26 20:14 (UTC) (edited on 2024-12-06 14:14 (UTC) by txtsd)

Alternate versions

llama.cpp
llama.cpp-vulkan
llama.cpp-sycl-fp16
llama.cpp-sycl-fp32
llama.cpp-cuda
llama.cpp-cuda-f16
llama.cpp-hip

Latest Comments


fabse commented on 2026-04-19 09:42 (UTC)

Also a question for you all: currently the CI compiles the package to make sure the PKGBUILD works as is, but that adds a delay before the package is updated here. Would you prefer it to skip the verification step and get quicker updates here, or to keep it as is?

fabse commented on 2026-04-19 09:40 (UTC)

Oh yeah, I can do that. I had actually already sent a request for the same, lol, but it's still pending.

asbachb commented on 2026-04-19 08:55 (UTC)

@fabse maybe you can also add llama.cpp-cuda to your CI. I think the package will get orphaned soon.

fabse commented on 2026-04-12 11:01 (UTC)

Ah, good catch. I was using llama.cpp-vulkan as a reference for what to update, but you're right that stable-diffusion.cpp doesn't conflict.

Still working on the CI, hopefully that will be done today :)

QTaKs commented on 2026-04-12 07:37 (UTC)

I have installed stable-diffusion.cpp-git and llama.cpp-cuda (both compiled with all possible backends, so they are sort of an all-in-one package). I haven't tested whether stable-diffusion.cpp works yet, but there were no conflicts during installation.

Thank you for taking on the maintenance of this package!

fabse commented on 2026-04-11 16:09 (UTC)

I've taken over maintaining the package! There will be a bit of a delay initially because I don't have a CI set up yet, but that shouldn't take too long

Haydo commented on 2026-03-04 23:20 (UTC)

Bummer to see this hasn't been updated in 2.5 months. I'm surprised that it's not just automated via CI with a bunch of tests. I wanted to try out qwen3.5 but the current version doesn't have the qwen3.5moe parts.

I tried pinning it to the latest build:

# Get the AUR package
git clone https://aur.archlinux.org/llama.cpp.git
cd llama.cpp

# Edit PKGBUILD to use latest release
sed -i 's/pkgver=.*/pkgver=b8198/' PKGBUILD
sed -i 's/pkgrel=.*/pkgrel=1/' PKGBUILD
updpkgsums  # Update checksums (updpkgsums is in the pacman-contrib package)

# Build and install
makepkg -si

Which worked for me (I just wanted to do inference). Definitely don't try this on a machine you can't snapshot or roll back.
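(For anyone unsure what those two sed lines actually touch: they just rewrite the pkgver= and pkgrel= assignments at the top of the PKGBUILD. A minimal self-contained demo on a throwaway fragment, so you can see the effect without cloning anything; the file path and field values here are made up for illustration:)

```shell
# Create a toy PKGBUILD fragment (not the real one) to demonstrate the edits
cat > /tmp/PKGBUILD.demo <<'EOF'
pkgname=llama.cpp
pkgver=b7500
pkgrel=2
EOF

# The same substitutions as in the snippet above: pin a newer tag, reset pkgrel
sed -i 's/pkgver=.*/pkgver=b8198/' /tmp/PKGBUILD.demo
sed -i 's/pkgrel=.*/pkgrel=1/' /tmp/PKGBUILD.demo

# Show the result: pkgver and pkgrel lines are rewritten, pkgname untouched
cat /tmp/PKGBUILD.demo
```

On a real PKGBUILD you would then still need updpkgsums (to refresh the source checksums for the new tarball) before makepkg will accept it.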

sneakomatic commented on 2026-01-15 09:35 (UTC)

Yes, makes me sad too. Because of their crazy release cycle it was probably too much for envolution to keep up. At the moment I edit the PKGBUILD and run makepkg, but I'm going to switch to Homebrew soon: https://formulae.brew.sh/formula/llama.cpp

JamesMowery commented on 2026-01-10 03:58 (UTC)

Will this still be updated? Coming up on a month since the last update. Makes me sad.