Package Details: llama.cpp-git b3334.r3.a8db2a9ce-1

Git Clone URL: https://aur.archlinux.org/llama.cpp-git.git (read-only, click to copy)
Package Base: llama.cpp-git
Description: Port of Facebook's LLaMA model in C/C++ (with OpenBLAS CPU optimizations)
Upstream URL: https://github.com/ggerganov/llama.cpp
Licenses: MIT
Conflicts: llama.cpp
Provides: llama.cpp
Submitter: robertfoster
Maintainer: robertfoster
Last Packager: robertfoster
Votes: 11
Popularity: 2.52
First Submitted: 2023-03-27 22:24 (UTC)
Last Updated: 2024-07-07 15:09 (UTC)

Latest Comments


lahwaacz commented on 2024-07-08 19:15 (UTC)

The -cublas package installs the same files as the -clblas package: there is cd "${_name}-clblas" instead of cd "${_name}-cublas".
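
A minimal sketch of the fix in the cublas package function, with names and structure taken from the diff quoted further down this page:

  package_llama.cpp-cublas-git() {
    provides=("${_name}")
    conflicts=("${_name}")

    # was: cd "${_name}-clblas", which packaged the CLBlast build instead
    cd "${_name}-cublas"
    DESTDIR="${pkgdir}" cmake --install build
    _package
  }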

grdgkjrpdihe commented on 2024-07-02 09:11 (UTC)

LLAMA_* has been renamed to GGML_* after https://github.com/ggerganov/llama.cpp/commit/f3f65429c44bb195a9195bfdc19a30a79709db7b
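
A hedged sketch of the corresponding PKGBUILD change; the exact option set this package passes is an assumption on my part, but the CUDA and BLAS switches follow the renaming in that commit:

  # before the rename:
  #   cmake -B build -DCMAKE_INSTALL_PREFIX=/usr -DLLAMA_CUDA=ON -DLLAMA_BLAS=ON ...
  # after the rename:
  cmake -B build \
    -DCMAKE_INSTALL_PREFIX=/usr \
    -DGGML_CUDA=ON \
    -DGGML_BLAS=ON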

grdgkjrpdihe commented on 2024-06-24 22:23 (UTC)

CXXFLAGS+=" -ffat-lto-objects"

should be added to make strip work: https://archlinux.org/todo/lto-fat-objects/
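
A minimal sketch of where this would go in the PKGBUILD's build() function; extending CFLAGS as well, for the C (ggml) objects, is my own assumption:

  build() {
    # keep fat LTO objects so makepkg's strip step can handle the static libraries
    CFLAGS+=" -ffat-lto-objects"
    CXXFLAGS+=" -ffat-lto-objects"

    # remaining cmake invocation as already present in the PKGBUILD
    cmake -B build -DCMAKE_INSTALL_PREFIX=/usr
    cmake --build build
  }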

grdgkjrpdihe commented on 2024-06-24 22:17 (UTC)

  mv "${pkgdir}/usr/bin/${_name}-main" \

should be renamed to

  mv "${pkgdir}/usr/bin/${_name}-llama-cli" \

after commit https://github.com/ggerganov/llama.cpp/commit/1c641e6aac5c18b964e7b32d9dbbb4bf5301d0d7

bendavis78 commented on 2024-06-04 04:13 (UTC)

I'm getting a build error with my NVIDIA GPU:

[ 19%] Linking CXX executable ../../bin/quantize-stats
icpx: warning: ignoring '-fcf-protection' option as it is not currently supported for target 'spir64-unknown-unknown' [-Woption-ignored]
icpx: warning: ignoring '-fcf-protection' option as it is not currently supported for target 'spir64-unknown-unknown' [-Woption-ignored]
icpx: warning: ignoring '-fcf-protection=none' option as it is not currently supported for target 'spir64-unknown-unknown' [-Woption-ignored]
[ 19%] Built target llava
[ 19%] Linking CXX static library libllava_static.a
[ 19%] Built target llava_static
/usr/bin/ld: /tmp/lto-llvm-cb2e4e.o: in function `ggml_compute_forward_mul_mat':
ld-temp.o:(.text.ggml_compute_forward_mul_mat+0x463): undefined reference to `llamafile_sgemm'
/usr/bin/ld: /tmp/lto-llvm-cb2e4e.o: in function `ggml_compute_forward_mul_mat':
/usr/src/debug/llama.cpp-git/llama.cpp-sycl-f16/ggml.c:12552:(.text.ggml_compute_forward_mul_mat+0x111f): undefined reference to `llamafile_sgemm'
icpx: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [examples/benchmark/CMakeFiles/benchmark.dir/build.make:102: bin/benchmark] Error 1
make[1]: *** [CMakeFiles/Makefile2:2307: examples/benchmark/CMakeFiles/benchmark.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
1 warning generated.
/usr/bin/ld: /tmp/lto-llvm-477f02.o: in function `main':
/usr/src/debug/llama.cpp-git/llama.cpp-sycl-f16/examples/quantize-stats/quantize-stats.cpp:311:(.text.main+0xcd0): undefined reference to `llama_model_default_params'
/usr/bin/ld: /usr/src/debug/llama.cpp-git/llama.cpp-sycl-f16/examples/quantize-stats/quantize-stats.cpp:314:(.text.main+0xd06): undefined reference to `llama_load_model_from_file'
/usr/bin/ld: /usr/src/debug/llama.cpp-git/llama.cpp-sycl-f16/examples/quantize-stats/quantize-stats.cpp:321:(.text.main+0xd20): undefined reference to `llama_context_default_params'
/usr/bin/ld: /usr/src/debug/llama.cpp-git/llama.cpp-sycl-f16/examples/quantize-stats/quantize-stats.cpp:325:(.text.main+0xd79): undefined reference to `llama_new_context_with_model'
/usr/bin/ld: /usr/src/debug/llama.cpp-git/llama.cpp-sycl-f16/examples/quantize-stats/quantize-stats.cpp:334:(.text.main+0xd93): undefined reference to `llama_internal_get_tensor_map[abi:cxx11](llama_context*)'
/usr/bin/ld: /usr/src/debug/llama.cpp-git/llama.cpp-sycl-f16/examples/quantize-stats/quantize-stats.cpp:413:(.text.main+0x4ef8): undefined reference to `llama_free'
/usr/bin/ld: /usr/src/debug/llama.cpp-git/llama.cpp-sycl-f16/examples/quantize-stats/quantize-stats.cpp:414:(.text.main+0x4f01): undefined reference to `llama_free_model'
/usr/bin/ld: /usr/src/debug/llama.cpp-git/llama.cpp-sycl-f16/examples/quantize-stats/quantize-stats.cpp:329:(.text.main+0x503d): undefined reference to `llama_free_model'
/usr/bin/ld: /usr/src/debug/llama.cpp-git/llama.cpp-sycl-f16/examples/quantize-stats/quantize-stats.cpp:352:(.text.main+0x51a5): undefined reference to `llama_free'
/usr/bin/ld: /usr/src/debug/llama.cpp-git/llama.cpp-sycl-f16/examples/quantize-stats/quantize-stats.cpp:353:(.text.main+0x51b8): undefined reference to `llama_free_model'
icpx: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [examples/quantize-stats/CMakeFiles/quantize-stats.dir/build.make:102: bin/quantize-stats] Error 1
make[1]: *** [CMakeFiles/Makefile2:2824: examples/quantize-stats/CMakeFiles/quantize-stats.dir/all] Error 2
2 warnings generated.
[ 20%] Linking CXX static library libcommon.a
[ 20%] Built target common
make: *** [Makefile:146: all] Error 2
==> ERROR: A failure occurred in build().
    Aborting...
error: failed to build 'llama.cpp-git-b2698-1 (llama.cpp-cublas-git)':
error: packages failed to build: llama.cpp-git-b2698-1 (llama.cpp-cublas-git)

let-def commented on 2024-05-09 09:32 (UTC)

The llama.cpp-cublas-git package is packaging the clblas directory instead of the cublas one, e.g.:

diff --git a/PKGBUILD b/PKGBUILD
index 0f49b81..78a6ae8 100644
--- a/PKGBUILD
+++ b/PKGBUILD
@@ -192,7 +192,7 @@ package_llama.cpp-cublas-git() {
   provides=("${_name}")
   conflicts=("${_name}")

-  cd "${_name}-clblas"
+  cd "${_name}-cublas"
   DESTDIR="${pkgdir}" cmake --install build
   _package
 }

dreieck commented on 2024-04-17 11:18 (UTC)

ccache should not be an optional dependency.

It is only relevant for building.

And the user can enable it by setting the ccache option in e.g. /etc/makepkg.conf.
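
For reference, a sketch of enabling it system-wide by flipping the entry in the BUILDENV array of /etc/makepkg.conf (the other entries shown are the stock defaults):

  # /etc/makepkg.conf
  BUILDENV=(!distcc color ccache check !sign)   # "ccache" instead of the default "!ccache"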

Please remove it from optdepends.

Regards!

lapsus commented on 2024-04-16 22:20 (UTC)

[ 19%] Built target common
make: *** [Makefile:146: all] Error 2
==> ERROR: A failure occurred in build().
Aborting...
error: failed to build 'llama.cpp-git-b2684-1 (llama.cpp-cublas-git)':
error: packages failed to build: llama.cpp-git-b2684-1 (llama.cpp-cublas-git)

lapsus commented on 2024-04-15 22:49 (UTC)

/home/user/.cache/paru/clone/llama.cpp-git/PKGBUILD: line 137: cd: /home/user/.cache/paru/clone/llama.cpp-git/src/llama.cpp-sycl: No such file or directory
==> ERROR: A failure occurred in build().
Aborting...
error: failed to build 'llama.cpp-git-b2646.r7.f7001ccc5-1 (llama.cpp-cublas-git)':
error: packages failed to build: llama.cpp-git-b2646.r7.f7001ccc5-1 (llama.cpp-cublas-git)

<deleted-account> commented on 2024-04-10 05:23 (UTC)

I believe it's LLAMA_CUDA now instead of LLAMA_CUBLAS.

Just an FYI.
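
A hedged sketch of the flag change this implies (the surrounding options are placeholders):

  # deprecated:
  #   cmake -B build -DLLAMA_CUBLAS=ON ...
  # current at the time of this comment (later renamed again to GGML_CUDA, see above):
  cmake -B build -DCMAKE_INSTALL_PREFIX=/usr -DLLAMA_CUDA=ON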