Package Details: llama.cpp-cuda b4301-1
Git Clone URL: https://aur.archlinux.org/llama.cpp-cuda.git (read-only)
Package Base: llama.cpp-cuda
Description: Port of Facebook's LLaMA model in C/C++ (with NVIDIA CUDA optimizations)
Upstream URL: https://github.com/ggerganov/llama.cpp
Licenses: MIT
Conflicts: libggml, llama.cpp
Provides: llama.cpp
Submitter: txtsd
Maintainer: txtsd
Last Packager: txtsd
Votes: 5
Popularity: 3.06
First Submitted: 2024-10-26 20:17 (UTC)
Last Updated: 2024-12-10 22:05 (UTC)
Dependencies (12)
- blas-openblas
- blas64-openblas
- cuda (AUR alternatives: cuda11.1, cuda-12.2, cuda12.0, cuda11.4, cuda11.4-versioned, cuda12.0-versioned)
- curl (AUR alternatives: curl-quiche-git, curl-http3-ngtcp2, curl-git, curl-c-ares)
- gcc-libs (AUR alternatives: gcc-libs-git, gccrs-libs-git, gcc11-libs, gcc-libs-snapshot)
- glibc (AUR alternatives: glibc-git, glibc-linux4, glibc-eac, glibc-eac-bin, glibc-eac-roco)
- openmp
- python (AUR alternatives: python37, python311, python310)
- python-numpy (AUR alternatives: python-numpy-flame, python-numpy-git, python-numpy1, python-numpy-mkl-bin, python-numpy-mkl-tbb, python-numpy-mkl)
- python-sentencepiece (AUR; alternative: python-sentencepiece-git)
- cmake (make; AUR alternative: cmake-git)
- git (make; AUR alternatives: git-git, git-gl)
Required by (0)
txtsd commented on 2024-12-06 13:37 (UTC)
@v1993 I've uploaded llama.cpp-cuda-f16. Please let me know if it works as expected!
txtsd commented on 2024-12-02 02:25 (UTC)
I'll give it a look later today and see if a newer package is warranted in that case. Thanks for your input!
v1993 commented on 2024-12-01 14:53 (UTC)
To be honest, I'm not 100% sure (it's a pretty old option and tracking down its origins is kinda tricky), but I'd expect at least a performance degradation on older GPUs (NVIDIA used to be really bad at fp16 on older architectures).
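A quick way to check what a given card supports: recent NVIDIA drivers expose the compute capability directly through nvidia-smi (the compute_cap query field requires a reasonably new driver). GPUs with fast native fp16 start at roughly compute capability 7.0 (Volta/Turing); older Pascal GeForce parts have severely throttled fp16 throughput, which matches the degradation described above.

# Prints the compute capability of each installed NVIDIA GPU, e.g. "8.6"
nvidia-smi --query-gpu=compute_cap --format=csv,noheader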
txtsd commented on 2024-12-01 14:38 (UTC)
@v1993 Does that have to be a separate package, or will making the change in this package suffice without breaking things for users of older GPUs?
v1993 commented on 2024-12-01 14:29 (UTC)
Would it be possible to have a package version with GGML_CUDA_F16 enabled? It's a nice performance boost on newer GPUs. Thank you for your work on this package!
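For readers wondering what the request amounts to in packaging terms: a rough sketch of the change against the _cmake_options array shown in the PKGBUILD diff further down. GGML_CUDA_F16 is the upstream ggml/llama.cpp CMake option; its exact placement among the other flags here is an assumption.

local _cmake_options=(
    -B build
    -S "${_pkgname}"
    # Use fp16 CUDA kernels: faster on recent GPUs, potentially slower pre-Volta
    -DGGML_CUDA_F16=ON
    # ... remaining options as in the existing PKGBUILD ...
)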
Poscat commented on 2024-11-28 09:46 (UTC)
@txtsd thank you
txtsd commented on 2024-11-25 07:05 (UTC)
Builds are not static anymore, and the service file has been fixed.
txtsd commented on 2024-11-24 03:16 (UTC)
@Poscat Thank you for your input! The service file was inherited from a previous version and maintainer of the package. I admit that the service was not tested.
The static builds were created to allow for side-by-side installation with whisper.cpp, since they both install libggml files.
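That file overlap is what the Conflicts/Provides rows in the package details above declare. Spelled out as a sketch in standard makepkg fields, with the values taken from those rows:

# Both packages ship libggml files, so they cannot be installed together
conflicts=('libggml' 'llama.cpp')
# This package satisfies dependencies on llama.cpp
provides=('llama.cpp')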
Poscat commented on 2024-11-24 03:12 (UTC)
diff --git a/llama.cpp.service b/llama.cpp.service
index 4678d85..be89f9b 100644
--- a/llama.cpp.service
+++ b/llama.cpp.service
@@ -7,7 +7,7 @@ Type=simple
 EnvironmentFile=/etc/conf.d/llama.cpp
 ExecStart=/usr/bin/llama-server $LLAMA_ARGS
 ExecReload=/bin/kill -s HUP $MAINPID
-Restart=never
+Restart=no
 
 [Install]
 WantedBy=multi-user.target
Also, your systemd service file is wrong. Did you even test your package?
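For context on why the one-line patch above matters: never is not a value systemd accepts for Restart= (the documented values are no, on-success, on-failure, on-abnormal, on-watchdog, on-abort, and always), so systemd logs an error for that assignment. A minimal sketch of the corrected unit; the [Unit] section is assumed, since the diff does not show it:

[Unit]
# Description and ordering below are assumptions; the packaged file may differ.
Description=llama.cpp server
After=network.target

[Service]
Type=simple
EnvironmentFile=/etc/conf.d/llama.cpp
ExecStart=/usr/bin/llama-server $LLAMA_ARGS
ExecReload=/bin/kill -s HUP $MAINPID
Restart=no

[Install]
WantedBy=multi-user.target

The EnvironmentFile referenced there would supply the server arguments, e.g. a hypothetical /etc/conf.d/llama.cpp containing LLAMA_ARGS="-m /path/to/model.gguf".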
Poscat commented on 2024-11-24 03:10 (UTC) (edited on 2024-11-24 03:10 (UTC) by Poscat)
diff --git a/PKGBUILD b/PKGBUILD
index ad448a7..3fdc20f 100644
--- a/PKGBUILD
+++ b/PKGBUILD
@@ -50,7 +50,8 @@ build() {
     local _cmake_options=(
         -B build
         -S "${_pkgname}"
-        -DCMAKE_BUILD_TYPE=None
+        -DCMAKE_BUILD_TYPE=MinSizeRel
+        -DCMAKE_CUDA_ARCHITECTURES=native
         -DCMAKE_INSTALL_PREFIX='/usr'
         -DGGML_NATIVE=OFF
         -DGGML_AVX2=OFF
@@ -59,8 +60,8 @@ build() {
         -DGGML_FMA=OFF
         -DGGML_ALL_WARNINGS=OFF
         -DGGML_ALL_WARNINGS_3RD_PARTY=OFF
-        -DBUILD_SHARED_LIBS=OFF
-        -DGGML_STATIC=ON
+        -DBUILD_SHARED_LIBS=ON
+        -DGGML_STATIC=OFF
         -DGGML_LTO=ON
         -DGGML_RPC=ON
         -DLLAMA_CURL=ON
@@ -75,7 +76,6 @@
 package() {
     DESTDIR="${pkgdir}" cmake --install build
     rm "${pkgdir}/usr/include/"ggml*
-    rm "${pkgdir}/usr/lib/"lib*.a
     install -Dm644 "${_pkgname}/LICENSE" "${pkgdir}/usr/share/licenses/${pkgname}/LICENSE"
This patch reduces the package size from 37G to 82M.
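Most of that size delta is CUDA fat-binary duplication: with static linking, each of the many llama.cpp executables embeds its own copy of the CUDA kernels compiled for every target architecture, whereas BUILD_SHARED_LIBS=ON lets them all share one libggml, and CMAKE_CUDA_ARCHITECTURES=native compiles kernels only for the GPU present on the build machine. Note that native is only appropriate when building for the local machine; a redistributable binary package would pin an explicit architecture list instead, along these lines (the list below is illustrative, not what this package ships):

# Example: target Turing through Hopper explicitly instead of the build host's GPU
-DCMAKE_CUDA_ARCHITECTURES="75;80;86;89;90"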
Pinned Comments
txtsd commented on 2024-10-26 20:17 (UTC) (edited on 2024-12-06 14:15 (UTC) by txtsd)
Alternate versions
llama.cpp
llama.cpp-vulkan
llama.cpp-sycl-fp16
llama.cpp-sycl-fp32
llama.cpp-cuda
llama.cpp-cuda-f16
llama.cpp-hip