Package Details: llama.cpp-git b6663.r1.c8dedc999-1
| Git Clone URL: | https://aur.archlinux.org/llama.cpp-git.git (read-only) |
|---|---|
| Package Base: | llama.cpp-git |
| Description: | Port of Facebook's LLaMA model in C/C++ |
| Upstream URL: | https://github.com/ggerganov/llama.cpp |
| Licenses: | MIT |
| Conflicts: | llama.cpp |
| Provides: | llama.cpp |
| Submitter: | robertfoster |
| Maintainer: | robertfoster |
| Last Packager: | robertfoster |
| Votes: | 15 |
| Popularity: | 0.012895 |
| First Submitted: | 2023-03-27 22:24 (UTC) |
| Last Updated: | 2025-10-01 22:34 (UTC) |
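For reference, the usual way to build and install this package from the clone URL above (standard makepkg workflow, nothing specific to this PKGBUILD):

```
# Clone the AUR repository from the URL above, then build and install.
# -s resolves the dependencies listed below, -i installs the built package.
git clone https://aur.archlinux.org/llama.cpp-git.git
cd llama.cpp-git
makepkg -si
```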
Dependencies (6)
- libggml-git [AUR]
- cmake (cmake3 [AUR], cmake-git [AUR]) (make)
- git (git-git [AUR], git-gl [AUR]) (make)
- python-gguf [AUR] (optional) – convert_hf_to_gguf.py python script (usage sketch below)
- python-numpy (python-numpy-git [AUR], python-numpy1 [AUR], python-numpy-mkl-bin [AUR], python-numpy-mkl [AUR], python-numpy-mkl-tbb [AUR]) (optional) – convert_hf_to_gguf.py python script
- python-pytorch (python-pytorch-cxx11abi [AUR], python-pytorch-cxx11abi-opt [AUR], python-pytorch-cxx11abi-cuda [AUR], python-pytorch-cxx11abi-opt-cuda [AUR], python-pytorch-cxx11abi-rocm [AUR], python-pytorch-cxx11abi-opt-rocm [AUR], python-pytorch-cuda12.9 [AUR], python-pytorch-opt-cuda12.9 [AUR], python-pytorch-cuda, python-pytorch-opt, python-pytorch-opt-cuda, python-pytorch-opt-rocm, python-pytorch-rocm) (optional) – convert_hf_to_gguf.py python script
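The optional Python dependencies are only needed for the model conversion script. A rough usage sketch; the installed script path and its flags are assumptions based on upstream llama.cpp and may differ in the packaged version:

```
# Sketch: convert a local Hugging Face model directory to GGUF.
# Needs the optional deps python-gguf, python-numpy and python-pytorch.
python convert_hf_to_gguf.py /path/to/hf-model \
    --outfile model-f16.gguf \
    --outtype f16
```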
Latest Comments
dreieck commented on 2024-10-10 11:11 (UTC)
License file needs to be installed:
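The code block of this comment did not survive; a minimal sketch of how an MIT license file is typically installed from a PKGBUILD (not necessarily the maintainer's actual fix):

```
package() {
  cd "${srcdir}/${_name}"  # _name: assumed name of the upstream source directory
  # Install the upstream MIT license text to the standard Arch location.
  install -Dm644 LICENSE "${pkgdir}/usr/share/licenses/${pkgname}/LICENSE"
}
```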
dreieck commented on 2024-10-10 11:08 (UTC)
Now fails: the binaries have been renamed.
Regards!
lahwaacz commented on 2024-07-08 19:15 (UTC)
The -cublas package installs the same files as the -clblas package: there is `cd "${_name}-clblas"` instead of `cd "${_name}-cublas"`.
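A sketch of the reported bug and its fix; the function name and the install line are illustrative, only the cd lines come from the comment:

```
package_llama.cpp-cublas-git() {
  # cd "${_name}-clblas"   # reported bug: enters the CLBlast build tree
  cd "${_name}-cublas"     # fix: enter the cuBLAS build tree instead
  DESTDIR="${pkgdir}" cmake --install build
}
```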
grdgkjrpdihe commented on 2024-07-02 09:11 (UTC)
LLAMA_* has been renamed to GGML_* after https://github.com/ggerganov/llama.cpp/commit/f3f65429c44bb195a9195bfdc19a30a79709db7b
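This refers to the CMake options the PKGBUILD passes to upstream. A sketch of the kind of change needed; the concrete option names (GGML_CUDA, GGML_BLAS) are assumptions based on the linked commit:

```
# Old options (rejected after the commit above):
#   cmake -B build -DLLAMA_CUDA=ON -DLLAMA_BLAS=ON
# New GGML_*-prefixed equivalents:
cmake -B build -DGGML_CUDA=ON -DGGML_BLAS=ON
cmake --build build
```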
grdgkjrpdihe commented on 2024-06-24 22:23 (UTC)
should be added to make strip work: https://archlinux.org/todo/lto-fat-objects/
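The flag this comment quoted is not preserved above; going by the linked TODO it is presumably -ffat-lto-objects. A hypothetical PKGBUILD sketch:

```
build() {
  # Keep regular object code next to the LTO bytecode so strip keeps
  # working on the resulting objects/libraries (see the linked TODO).
  CFLAGS+=" -ffat-lto-objects"
  CXXFLAGS+=" -ffat-lto-objects"
  cmake -B build -S "${_name}"   # _name: assumed upstream source directory
  cmake --build build
}
```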
grdgkjrpdihe commented on 2024-06-24 22:17 (UTC)
should be renamed to
after commit https://github.com/ggerganov/llama.cpp/commit/1c641e6aac5c18b964e7b32d9dbbb4bf5301d0d7
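The old and new names quoted in this comment are likewise missing; per the linked upstream commit the example binaries gained a llama- prefix (e.g. main → llama-cli, server → llama-server), so explicit install lines in a PKGBUILD would change along these lines (hypothetical sketch):

```
# Old install lines (pre-rename):
#   install -Dm755 build/bin/main   "${pkgdir}/usr/bin/main"
#   install -Dm755 build/bin/server "${pkgdir}/usr/bin/server"
# New names after the commit above:
install -Dm755 build/bin/llama-cli    "${pkgdir}/usr/bin/llama-cli"
install -Dm755 build/bin/llama-server "${pkgdir}/usr/bin/llama-server"
```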
bendavis78 commented on 2024-06-04 04:13 (UTC)
I'm getting a build error w/ my nvidia GPU:
let-def commented on 2024-05-09 09:32 (UTC)
The llama.cpp-cublas-git package is packaging the clblas directory, e.g.:
dreieck commented on 2024-04-17 11:18 (UTC)
ccache should not be an optional dependency. It is only relevant for building.
And the user can specify the usage of it by setting the ccache option in e.g. /etc/makepkg.conf.
Please remove it from optdepends.
Regards!
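For context, ccache is enabled globally through the BUILDENV array in /etc/makepkg.conf (or a per-user makepkg.conf) rather than per package:

```
# /etc/makepkg.conf (excerpt): drop the '!' in front of ccache so makepkg
# routes compiler calls through ccache for every package it builds.
BUILDENV=(!distcc color ccache check !sign)
```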
lapsus commented on 2024-04-16 22:20 (UTC)