Package Details: llama-cpp c3e53b4-1

| Git Clone URL | https://aur.archlinux.org/llama-cpp.git (read-only) |
|---|---|
| Package Base | llama-cpp |
| Description | Port of Facebook's LLaMA model in C/C++ |
| Upstream URL | https://github.com/ggerganov/llama.cpp |
| Licenses | GPL3 |
| Submitter | Freed |
| Maintainer | Freed |
| Last Packager | Freed |
| Votes | 1 |
| Popularity | 0.20 |
| First Submitted | 2023-07-18 07:59 (UTC) |
| Last Updated | 2023-08-24 11:40 (UTC) |
Dependencies (10)
- intel-oneapi-mkl (intel-oneapi-hpckit [AUR], intel-oneapi-basekit)
- openblas (openblas-lapack [AUR])
- openmpi (openmpi-git [AUR])
- python-numpy (python-numpy-flame [AUR], python-numpy-mkl-bin [AUR], python-numpy-git [AUR], python-numpy-mkl [AUR])
- python-sentencepiece [AUR] (python-sentencepiece-git [AUR])
- clblast (clblast-git [AUR]) (make)
- cmake (cmake-git [AUR]) (make)
- cuda (cuda11.1 [AUR]) (make)
- intel-oneapi-dpcpp-cpp (intel-oneapi-hpckit [AUR], intel-oneapi-basekit) (make)
- intel-oneapi-mkl (intel-oneapi-hpckit [AUR], intel-oneapi-basekit) (make)
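For reference, a package like this is typically built with the standard AUR workflow (this is the generic procedure, not anything specific to this PKGBUILD):

```shell
# Clone the package repository (URL from the table above) and build it.
git clone https://aur.archlinux.org/llama-cpp.git
cd llama-cpp
# makepkg resolves the repo dependencies listed above via pacman;
# dependencies marked [AUR] must be installed separately beforehand.
makepkg -si
```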
Latest Comments
lmat commented on 2024-04-03 13:02 (UTC)
I just tried to build this and got:
I changed the source to https://github.com/ggerganov/llama.cpp/archive/refs/tags/b2586.tar.gz and am hoping for the best.
dront78 commented on 2023-09-08 07:51 (UTC) (edited on 2023-09-08 07:52 (UTC) by dront78)
Updated PKGBUILD for release b1198:
colobas commented on 2023-09-01 18:09 (UTC)
I used the following patch to get this to build; it uses release tags as pkgver.
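The patch itself is not reproduced above. As a rough sketch only (the field values are assumptions, not colobas's actual patch), pinning an upstream release tag as pkgver in a PKGBUILD might look like:

```shell
# Hypothetical PKGBUILD fragment: pin an upstream release tag (b2586 is the
# tag mentioned in the comments) instead of tracking the default branch.
pkgname=llama-cpp
pkgver=b2586   # upstream release tag used directly as the package version
pkgrel=1
source=("llama.cpp-${pkgver}.tar.gz::https://github.com/ggerganov/llama.cpp/archive/refs/tags/${pkgver}.tar.gz")
sha256sums=('SKIP')   # replace with the real checksum, e.g. via updpkgsums
```

Pinning a tagged tarball makes the build reproducible, at the cost of needing a manual version bump for each upstream release.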
sunng commented on 2023-08-07 03:35 (UTC)
Is cuda required for this OpenCL package?