Package Details: llama-cpp c3e53b4-1
| Git Clone URL: | https://aur.archlinux.org/llama-cpp.git (read-only) |
|---|---|
| Package Base: | llama-cpp |
| Description: | Port of Facebook's LLaMA model in C/C++ |
| Upstream URL: | https://github.com/ggerganov/llama.cpp |
| Licenses: | GPL3 |
| Submitter: | Freed |
| Maintainer: | Freed |
| Last Packager: | Freed |
| Votes: | 0 |
| Popularity: | 0.000000 |
| First Submitted: | 2023-07-18 07:59 (UTC) |
| Last Updated: | 2023-08-24 11:40 (UTC) |
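The package can be built from the clone URL above using the standard AUR workflow (a sketch, not part of the package page itself; `makepkg -si` pulls repository dependencies via pacman, while AUR-only dependencies such as python-sentencepiece must be built and installed first):

```shell
# Clone the package repository (read-only URL from the table above)
git clone https://aur.archlinux.org/llama-cpp.git
cd llama-cpp

# Inspect the PKGBUILD before building -- always recommended for AUR packages
less PKGBUILD

# Build and install; -s installs missing repo dependencies, -i installs the built package
makepkg -si
```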
Dependencies (10)
- intel-oneapi-mkl (intel-oneapi-basekit)
- openblas (openblas-lapack (AUR))
- openmpi (openmpi-git (AUR), openmpi-ucx (AUR))
- python-numpy (python-numpy1.22 (AUR), python-numpy-flame (AUR), python-numpy-mkl-bin (AUR), python-numpy-openblas (AUR), python-numpy-mkl (AUR), python-numpy-git (AUR))
- python-sentencepiece (AUR) (python-sentencepiece-git (AUR))
- clblast (clblast-git (AUR)) (make)
- cmake (cmake-git (AUR)) (make)
- cuda (cuda-11.0 (AUR), cuda11.1 (AUR)) (make)
- intel-oneapi-dpcpp-cpp (intel-oneapi-basekit) (make)
- intel-oneapi-mkl (intel-oneapi-basekit) (make)
Latest Comments
dront78 commented on 2023-09-08 07:51 (UTC) (edited on 2023-09-08 07:52 (UTC) by dront78)
b1198 PKGBUILD
colobas commented on 2023-09-01 18:09 (UTC)
I used the following patch to get this to build. It uses the release tags as pkgver.
sunng commented on 2023-08-07 03:35 (UTC)
Is cuda required for this opencl package?