Package Details: llama.cpp-bin b7360-1
| Git Clone URL: | https://aur.archlinux.org/llama.cpp-bin.git (read-only) |
|---|---|
| Package Base: | llama.cpp-bin |
| Description: | LLM inference in C/C++ (precompiled Linux binaries) |
| Upstream URL: | https://github.com/ggerganov/llama.cpp |
| Keywords: | ai chat gpt llama llm |
| Licenses: | MIT |
| Conflicts: | ggml, libggml, llama.cpp |
| Provides: | ggml, libggml, llama.cpp |
| Submitter: | neitsab |
| Maintainer: | neitsab (envolution) |
| Last Packager: | envolution |
| Votes: | 4 |
| Popularity: | 0.32 |
| First Submitted: | 2024-11-15 12:24 (UTC) |
| Last Updated: | 2025-12-12 02:39 (UTC) |
Dependencies (2)
- curl (curl-git [AUR], curl-c-ares [AUR])
- gcc-libs (gcc-libs-git [AUR], gccrs-libs-git [AUR], gcc-libs-snapshot [AUR])
Required by (3)
Sources (2)
Latest Comments
envolution commented on 2025-10-02 22:47 (UTC)
@neitsab no problem, and thanks - co-maintainer works fine for me
neitsab commented on 2025-10-02 22:38 (UTC)
@envolution Sorry, I completely missed your comment! I just added you as a co-maintainer, feel free to take over the package :-) Thanks
envolution commented on 2025-08-16 03:55 (UTC)
@neitsab - I can take over maintenance and resolve the issue below.
Feel free to retain ownership if you like; I would only need co-maintainer.
carlo commented on 2025-07-29 15:00 (UTC) (edited on 2025-07-29 15:01 (UTC) by carlo)
Currently, with https://github.com/ggml-org/llama.cpp/releases/tag/b6022, this happens:
$ llama-cli -hf unsloth/Qwen3-0.6B-GGUF
[...]
llama_model_load_from_file_impl: no backends are loaded. hint: use ggml_backend_load() or ggml_backend_load_all() to load a backend before calling this function
[...]
main: error: unable to load model
From https://github.com/ggml-org/llama.cpp/issues/14302, I learnt that this is because the executable is looking for its shared libraries in (a) the directory of the executable, and (b) the current directory.
So, one way around this is to cd to /usr/lib before running llama-cli (or llama-server etc.). This works for me.
But probably a better way would be to update the PKGBUILD to put everything into the same directory (or maybe do something with symlinks to make it a bit less ugly, if possible).
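The cd-based workaround can be packaged as a tiny wrapper script; this is only a sketch (the wrapper name `llama-cli-cwd` is made up, and the /usr/bin and /usr/lib paths follow the comment above):

```shell
# Sketch of the workaround above: since llama-cli looks for its .so
# files in its own directory and in $PWD, switch to the library
# directory before exec'ing the real binary.
cat > llama-cli-cwd <<'EOF'
#!/bin/sh
cd /usr/lib && exec /usr/bin/llama-cli "$@"
EOF
chmod +x llama-cli-cwd
```

Dropping such a wrapper somewhere early in $PATH would hide the issue without touching the package itself.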
neitsab commented on 2025-02-14 10:49 (UTC)
@chocolateboy You're right, and it's not limited to our package: https://github.com/ggerganov/llama.cpp/issues/11123
I just pushed an updated version which installs the shared libraries in the right place. Now I get
% llama-cli --version
version: 4714 (38e32eb6)
built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
Thanks for noticing.
chocolateboy commented on 2025-02-14 10:07 (UTC)
@neitsab Thanks for the info.
FYI, this snapshot is missing some files:
$ llama-cli --version
/usr/bin/llama-cli: error while loading shared libraries: libllama.so: cannot open shared object file: No such file or directory
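Failures like this are easy to inspect with ldd, which prints every shared object the dynamic linker resolves for a binary; unresolved ones show up as "not found". Shown here against /bin/sh as a stand-in (run it against /usr/bin/llama-cli to spot a missing libllama.so):

```shell
# List the shared libraries a binary needs; a missing library is
# reported as "=> not found" instead of a resolved path.
ldd /bin/sh
```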
neitsab commented on 2025-02-05 16:44 (UTC) (edited on 2025-03-13 17:36 (UTC) by neitsab)
Please note:
Since this package is updated upstream several times a day, the best way to get the most recent version is to run the following commands:
sudo pacman -S devtools nvchecker
git clone https://aur.archlinux.org/llama.cpp-bin.git && cd llama.cpp-bin
pkgctl version upgrade
makepkg -srci
Please only flag it out of date if the build is broken or missing features.
If you have suggestions to automate the upgrading and publishing or want to take over maintenance, I'm all ears :-)
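One entirely hypothetical automation sketch, just wrapping the steps above in a script that a cron job or systemd timer could run; whether `pkgctl version upgrade` signals "no update available" through its exit code is an assumption worth verifying first:

```shell
# Hypothetical automation of the pinned update steps (assumes an
# existing clone of the AUR repo in $HOME/llama.cpp-bin and that
# devtools and nvchecker are installed).
cat > update-llama-cpp-bin.sh <<'EOF'
#!/bin/sh
set -e
cd "$HOME/llama.cpp-bin"    # existing clone of the AUR repo
pkgctl version upgrade      # bump pkgver from upstream releases
makepkg -srci --noconfirm   # rebuild and reinstall
EOF
chmod +x update-llama-cpp-bin.sh
```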