Package Details: llama.cpp-git b6663.r1.c8dedc999-1
| Git Clone URL: | https://aur.archlinux.org/llama.cpp-git.git (read-only, click to copy) |
|---|---|
| Package Base: | llama.cpp-git |
| Description: | Port of Facebook's LLaMA model in C/C++ |
| Upstream URL: | https://github.com/ggerganov/llama.cpp |
| Licenses: | MIT |
| Conflicts: | llama.cpp |
| Provides: | llama.cpp |
| Submitter: | robertfoster |
| Maintainer: | robertfoster |
| Last Packager: | robertfoster |
| Votes: | 15 |
| Popularity: | 0.013158 |
| First Submitted: | 2023-03-27 22:24 (UTC) |
| Last Updated: | 2025-10-01 22:34 (UTC) |
Dependencies (6)
- libggml-git (AUR)
- cmake (make) (AUR alternatives: cmake3, cmake-git)
- git (make) (AUR alternatives: git-git, git-gl)
- python-gguf (AUR) (optional) – convert_hf_to_gguf.py python script
- python-numpy (optional) – convert_hf_to_gguf.py python script (AUR alternatives: python-numpy-git, python-numpy1, python-numpy-mkl-bin, python-numpy-mkl, python-numpy-mkl-tbb)
- python-pytorch (optional) – convert_hf_to_gguf.py python script (alternatives: python-pytorch-cuda, python-pytorch-opt, python-pytorch-opt-cuda, python-pytorch-opt-rocm, python-pytorch-rocm; AUR: python-pytorch-cxx11abi, python-pytorch-cxx11abi-opt, python-pytorch-cxx11abi-cuda, python-pytorch-cxx11abi-opt-cuda, python-pytorch-cxx11abi-rocm, python-pytorch-cxx11abi-opt-rocm, python-pytorch-cuda12.9, python-pytorch-opt-cuda12.9)
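The optional Python dependencies above exist only to run llama.cpp's convert_hf_to_gguf.py conversion script. A rough sketch of a typical invocation follows; the input directory and output filename are hypothetical examples, and the script itself lives in the llama.cpp source tree rather than on PATH:

```shell
# Convert a locally downloaded Hugging Face model to GGUF format.
# ~/models/some-hf-model and the output name are illustrative only.
python convert_hf_to_gguf.py ~/models/some-hf-model \
    --outfile some-model-f16.gguf \
    --outtype f16
```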
Latest Comments
1 2 3 4 Next › Last »
ZenithCC commented on 2025-10-07 13:55 (UTC) (edited on 2025-10-07 13:55 (UTC) by ZenithCC)
I'm having trouble compiling the package. Here's the error message:
I posted about this on the llama.cpp GitHub repository, and a contributor was kind enough to respond in a comment on GitHub issue #15852.
To be honest, I'm not entirely sure I understand this. Does it mean that ggml is being developed in multiple places (ggml, llama.cpp, and whisper.cpp), and that's causing the issue? Would using libggml-git, which is based on an outdated version of ggml compared to whisper.cpp, explain why I can't compile the package? Currently, my workaround is to downgrade to b6661.
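For reference, the b6661 workaround mentioned above can also be reproduced outside the AUR package by building that upstream tag directly. This is a sketch under the assumption that a plain CMake build is acceptable; it bypasses the PKGBUILD entirely:

```shell
# Build a specific upstream release tag instead of the latest git HEAD.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout b6661              # the tag the commenter downgraded to
cmake -B build
cmake --build build --config Release
```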
itsdesmond commented on 2024-11-14 20:55 (UTC)
@robertfoster, I follow the intent, but I don't think we're able to build only one package, so we are required to build (and install dependencies for) every package provided by the PKGBUILD. It looks like the ability to build only one was removed a number of years ago: [pacman-dev] [PATCH] makepkg: remove ability to build individual packages. If that's the deal then that's the deal, I guess.
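For what it's worth, even though makepkg has to build every split package, pacman can then install just the one archive you want. A hedged sketch (the package filename glob is illustrative):

```shell
# makepkg offers no per-package targeting, so build everything once...
makepkg -s
# ...then install only the desired split package with pacman -U.
sudo pacman -U llama.cpp-git-*.pkg.tar.zst   # illustrative filename glob
```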
robertfoster commented on 2024-11-14 20:21 (UTC)
@itsdesmond
because of this https://wiki.archlinux.org/title/PKGBUILD#pkgbase
Here a split PKGBUILD is used. As you can see from the documentation, makedepends is not considered in the package() functions. This means that with the split logic, you need to declare only one makedepends array and only one build() function. Hope that is clear.
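A minimal sketch of the split-package layout being described (the package names and bodies below are hypothetical, not the actual PKGBUILD contents):

```shell
# Hypothetical split PKGBUILD skeleton: makedepends and build() are
# declared once and shared; each package_*() declares its own depends.
pkgbase=example-split
pkgname=('example-cpu' 'example-vulkan')
makedepends=('cmake' 'git')          # one array for all split packages

build() {
  # runs once, before every package_*() function
  echo "built once for every split package"
}

package_example-cpu() {
  depends=('libggml-git')
}

package_example-vulkan() {
  depends=('libggml-git' 'vulkan-icd-loader')
}
```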
itsdesmond commented on 2024-11-14 20:15 (UTC)
I'm a bit confused by this package. Is it necessary that I build every package in order to install just one of them? Or is there some sort of targeting command I'm not taking advantage of?
BestSteve commented on 2024-11-11 04:33 (UTC) (edited on 2024-11-11 04:34 (UTC) by BestSteve)
grdgkjrpdihe commented on 2024-11-07 00:55 (UTC)
llama.cpp-hipblas-git should depend on hipblas rather than rocm-hip-runtime.
abitrolly commented on 2024-10-25 11:53 (UTC)
Another report with full output of the error https://gitlab.archlinux.org/archlinux/packaging/packages/cuda/-/issues/10
abitrolly commented on 2024-10-25 10:59 (UTC)
Looks like CMake CUDA error comes from the wrong compiler used https://bbs.archlinux.org/viewtopic.php?id=300160
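If the failure really is caused by the wrong host compiler, CMake can be told explicitly which one nvcc should use. A hedged example; the gcc-13 path is an assumption about what is installed on the affected system, and GGML_CUDA is the llama.cpp CMake option for enabling the CUDA backend:

```shell
# Point nvcc at an older host compiler when the default gcc is too new
# for the installed CUDA toolkit (compiler path is an assumption).
cmake -B build \
  -DGGML_CUDA=ON \
  -DCMAKE_CUDA_HOST_COMPILER=/usr/bin/g++-13
```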
LeoKesler commented on 2024-10-17 17:43 (UTC)
CMake Error at /usr/share/cmake/Modules/CMakeDetermineCompilerId.cmake:838 (message): Compiling the CUDA compiler identification source file "CMakeCUDACompilerId.cu" failed.
dreieck commented on 2024-10-10 11:12 (UTC) (edited on 2024-10-13 10:12 (UTC) by dreieck)
Your packages llama.cpp-vulkan-git and whisper.cpp-clblas conflict with each other, which is not reflected in the conflicts array. Please add the corresponding conflicts entry, or, to ensure compatibility, think of stripping out the libggml stuff and depending on libggml.
Regards!
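The requested fix amounts to one extra entry in the conflicts array; a sketch of what that could look like (the exact array contents of the real llama.cpp-vulkan-git PKGBUILD may differ):

```shell
# Hypothetical conflicts/provides arrays for llama.cpp-vulkan-git,
# extended with the whisper.cpp-clblas entry the commenter requests.
conflicts=('llama.cpp' 'whisper.cpp-clblas')
provides=('llama.cpp')
```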