Package Details: llama.cpp-vulkan b7039-1
| Git Clone URL: | https://aur.archlinux.org/llama.cpp-vulkan.git (read-only) |
|---|---|
| Package Base: | llama.cpp-vulkan |
| Description: | Port of Facebook's LLaMA model in C/C++ (with Vulkan GPU optimizations) |
| Upstream URL: | https://github.com/ggerganov/llama.cpp |
| Licenses: | MIT |
| Conflicts: | ggml, libggml, llama.cpp, stable-diffusion.cpp |
| Provides: | llama.cpp |
| Submitter: | txtsd |
| Maintainer: | Orion-zhen |
| Last Packager: | Orion-zhen |
| Votes: | 15 |
| Popularity: | 1.90 |
| First Submitted: | 2024-10-26 20:10 (UTC) |
| Last Updated: | 2025-11-13 00:22 (UTC) |
Dependencies (14)
- curl (curl-git [AUR], curl-c-ares [AUR])
- gcc-libs (gcc-libs-git [AUR], gccrs-libs-git [AUR], gcc-libs-snapshot [AUR])
- glibc (glibc-git [AUR], glibc-eac [AUR])
- python
- vulkan-icd-loader (vulkan-icd-loader-git [AUR])
- cmake (cmake3 [AUR], cmake-git [AUR]) (make)
- git (git-git [AUR], git-gl [AUR]) (make)
- shaderc (shaderc-git [AUR]) (make)
- vulkan-headers (vulkan-headers-git [AUR]) (make)
- python-numpy (python-numpy-git [AUR], python-numpy1 [AUR], python-numpy-mkl-bin [AUR], python-numpy-mkl [AUR], python-numpy-mkl-tbb [AUR]) (optional) – needed for convert_hf_to_gguf.py
- python-pytorch (python-pytorch-cxx11abi [AUR], python-pytorch-cxx11abi-opt [AUR], python-pytorch-cxx11abi-cuda [AUR], python-pytorch-cxx11abi-opt-cuda [AUR], python-pytorch-cxx11abi-rocm [AUR], python-pytorch-cxx11abi-opt-rocm [AUR], python-pytorch-cuda12.9 [AUR], python-pytorch-opt-cuda12.9 [AUR], python-pytorch-cuda, python-pytorch-opt, python-pytorch-opt-cuda, python-pytorch-opt-rocm, python-pytorch-rocm) (optional) – needed for convert_hf_to_gguf.py
- python-safetensors [AUR] (python-safetensors-bin [AUR]) (optional) – needed for convert_hf_to_gguf.py
- python-sentencepiece [AUR] (python-sentencepiece-git [AUR], python-sentencepiece-bin [AUR]) (optional) – needed for convert_hf_to_gguf.py
- python-transformers [AUR] (optional) – needed for convert_hf_to_gguf.py (see the usage sketch after this list)
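For context, the optional Python dependencies above are only exercised when converting a Hugging Face checkpoint into the GGUF format that llama.cpp consumes. A minimal usage sketch; the model path and output name are placeholders, and this assumes the script is installed on PATH:

```bash
# Convert a local Hugging Face model directory to a 16-bit GGUF file
convert_hf_to_gguf.py /path/to/hf-model \
    --outfile model-f16.gguf \
    --outtype f16
```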
Required by (0)
Sources (1)
Latest Comments

envolution commented on 2025-08-28 12:29 (UTC)

It should properly source=() from upstream rather than trying to shallow clone in prepare(). You can source the release archive if you're trying to avoid cloning a deep history.
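A minimal sketch of what that could look like in the PKGBUILD, assuming upstream's GitHub tag-archive naming; the checksum is a placeholder:

```bash
# Fetch the tagged release archive instead of cloning in prepare()
_pkgname=llama.cpp
pkgver=b7039
source=("${_pkgname}-${pkgver}.tar.gz::https://github.com/ggerganov/llama.cpp/archive/refs/tags/${pkgver}.tar.gz")
sha256sums=('SKIP')  # placeholder; pin the real checksum for each release
```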
quickes commented on 2025-08-26 18:12 (UTC)
I encountered an error again when updating the package, during the call to pkgver():
git describe --tags --abbrev=0
fatal: No names found, cannot describe anything.
Analysis showed that the issue arises from the code-retrieval command combined with the frequent commits to the master branch:
git clone --depth 3 --single-branch --branch master "${url}" "${_pkgname}"
Updating the package only works for a few days after a new version is released: later on, the tagged commit is no longer among the last few commits of master, so the required tag is not present in the shallow history and pkgver() fails. As a solution to this problem, I suggest using:
git clone --branch "$pkgver" --depth 1 "${url}" "${_pkgname}"
This approach fetches the required version directly and works reliably no matter when the package is built.
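For reference, the same tag pinning can be expressed declaratively in the source array instead of a manual clone in prepare(); a sketch, using makepkg's VCS source fragment syntax:

```bash
# Let makepkg handle the checkout; the #tag fragment pins the release
source=("${_pkgname}::git+https://github.com/ggerganov/llama.cpp.git#tag=${pkgver}")
sha256sums=('SKIP')  # VCS sources are conventionally not checksummed
```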
libricoleur commented on 2025-08-07 11:20 (UTC)
As of now, this conflicts with stable-diffusion.cpp-vulkan-git because both packages install /usr/lib/cmake/ggml/ggml-config.cmake and /usr/lib/cmake/ggml/ggml-version.cmake.
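One way to surface this at dependency-resolution time instead of as a file-overwrite error would be to extend the conflicts array; a sketch, where the last entry is this comment's suggestion rather than the maintainer's actual fix:

```bash
# Existing conflicts plus the -vulkan-git variant reported above
conflicts=(ggml libggml llama.cpp stable-diffusion.cpp stable-diffusion.cpp-vulkan-git)
```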
wonka commented on 2025-07-07 10:25 (UTC)
This changed now with the new ggml-vulkan: the build picks up the system /usr/include/ggml.h, whose ggml_ssm_scan() declaration is older than what the bundled llama.cpp sources expect.
[ 27%] Building CXX object src/CMakeFiles/llama.dir/llama-quant.cpp.o
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.cpp: In function ‘bool weight_buft_supported(const llama_hparams&, ggml_tensor*, ggml_op, ggml_backend_buffer_type_t, ggml_backend_dev_t)’:
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.cpp:231:42: error: too many arguments to function ‘ggml_tensor* ggml_ssm_scan(ggml_context*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*)’
231 | op_tensor = ggml_ssm_scan(ctx, s, x, dt, w, B, C, ids);
| ~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/../include/llama.h:4,
from /var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.h:3,
from /var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.cpp:1:
/usr/include/ggml.h:2009:35: note: declared here
2009 | GGML_API struct ggml_tensor * ggml_ssm_scan(
| ^~~~~~~~~~~~~
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.cpp: In lambda function:
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.cpp:9922:37: error: too many arguments to function ‘ggml_tensor* ggml_ssm_scan(ggml_context*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*)’
9922 | return ggml_ssm_scan(ctx, ssm, x, dt, A, B, C, ids);
| ~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/ggml.h:2009:35: note: declared here
2009 | GGML_API struct ggml_tensor * ggml_ssm_scan(
| ^~~~~~~~~~~~~
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.cpp: In lambda function:
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.cpp:10046:37: error: too many arguments to function ‘ggml_tensor* ggml_ssm_scan(ggml_context*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*)’
10046 | return ggml_ssm_scan(ctx, ssm, x, dt, A, B, C, ids);
| ~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/ggml.h:2009:35: note: declared here
2009 | GGML_API struct ggml_tensor * ggml_ssm_scan(
wonka commented on 2025-07-05 09:21 (UTC)
The same issue has somehow been reported here as well.
wonka commented on 2025-07-04 08:32 (UTC) (edited on 2025-07-04 08:40 (UTC) by wonka)
Still an error with b5827:
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-graph.cpp:618:23: error: ‘ggml_reglu’ was not declared in this scope; did you mean ‘ggml_relu’?
618 | cur = ggml_reglu(ctx0, cur);
| ^~~~~~~~~~
| ggml_relu
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-graph.cpp: In member function ‘ggml_tensor* llm_graph_context::build_moe_ffn(ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, int64_t, int64_t, llm_ffn_op_type, bool, bool, float, llama_expert_gating_func_type, int) const’:
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-graph.cpp:751:23: error: ‘ggml_swiglu_split’ was not declared in this scope
751 | cur = ggml_swiglu_split(ctx0, cur, up);
| ^~~~~~~~~~~~~~~~~
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-graph.cpp:759:23: error: ‘ggml_geglu_split’ was not declared in this scope
759 | cur = ggml_geglu_split(ctx0, cur, up);
| ^~~~~~~~~~~~~~~~
make[2]: *** [src/CMakeFiles/llama.dir/build.make:191: src/CMakeFiles/llama.dir/llama-graph.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [CMakeFiles/Makefile2:1007: src/CMakeFiles/llama.dir/all] Error 2
make: *** [Makefile:136: all] Error 2
txtsd commented on 2025-07-01 02:00 (UTC)
@eSPiYa It's an upstream issue at the moment.
eSPiYa commented on 2025-06-30 21:53 (UTC)
I already installed ggml-vulkan, but the build still fails:
/home/<user>/.cache/paru/clone/llama.cpp-vulkan/src/llama.cpp/src/llama-graph.cpp: In member function ‘ggml_tensor* llm_graph_context::build_ffn(ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, llm_ffn_op_type, llm_ffn_gate_type, int) const’:
/home/<user>/.cache/paru/clone/llama.cpp-vulkan/src/llama.cpp/src/llama-graph.cpp:564:23: error: ‘ggml_swiglu_split’ was not declared in this scope
564 | cur = ggml_swiglu_split(ctx0, cur, tmp);
/home/<user>/.cache/paru/clone/llama.cpp-vulkan/src/llama.cpp/src/llama-graph.cpp:573:23: error: ‘ggml_geglu_split’ was not declared in this scope
573 | cur = ggml_geglu_split(ctx0, cur, tmp);
| ^~~~~~~~~~~~~~~~
/home/<user>/.cache/paru/clone/llama.cpp-vulkan/src/llama.cpp/src/llama-graph.cpp:586:23: error: ‘ggml_reglu_split’ was not declared in this scope
586 | cur = ggml_reglu_split(ctx0, cur, tmp);
| ^~~~~~~~~~~~~~~~
/home/<user>/.cache/paru/clone/llama.cpp-vulkan/src/llama.cpp/src/llama-graph.cpp:603:23: error: ‘ggml_swiglu’ was not declared in this scope; did you mean ‘ggml_silu’?
603 | cur = ggml_swiglu(ctx0, cur);
| ^~~~~~~~~~~
| ggml_silu
/home/<user>/.cache/paru/clone/llama.cpp-vulkan/src/llama.cpp/src/llama-graph.cpp:608:23: error: ‘ggml_geglu’ was not declared in this scope; did you mean ‘ggml_gelu’?
608 | cur = ggml_geglu(ctx0, cur);
| ^~~~~~~~~~
| ggml_gelu
/home/<user>/.cache/paru/clone/llama.cpp-vulkan/src/llama.cpp/src/llama-graph.cpp:613:23: error: ‘ggml_reglu’ was not declared in this scope; did you mean ‘ggml_relu’?
613 | cur = ggml_reglu(ctx0, cur);
| ^~~~~~~~~~
| ggml_relu
munzirtaha commented on 2025-06-23 22:40 (UTC) (edited on 2025-06-23 22:41 (UTC) by munzirtaha)
❯ paru -S llama.cpp-vulkan
:: Resolving dependencies...
error: could not find all required packages:
libggml-vulkan (wanted by: llama.cpp-vulkan)
❯ paru -Sii ggml-vulkan-git |rg Provides
Provides : ggml-vulkan ggml libggml
Use ggml-vulkan or libggml in the depends array instead.
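A sketch of the suggested change in the PKGBUILD; the surrounding entries mirror the dependency list above, but this is an illustration, not the package's actual depends array:

```bash
# Before: nothing provides libggml-vulkan, so resolution fails
# depends=(curl gcc-libs glibc python vulkan-icd-loader libggml-vulkan)

# After: ggml-vulkan-git provides both ggml-vulkan and libggml
depends=(curl gcc-libs glibc python vulkan-icd-loader ggml-vulkan)
```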
Pinned Comments
Orion-zhen commented on 2025-09-02 03:17 (UTC) (edited on 2025-09-02 13:20 (UTC) by Orion-zhen)
I can't receive notifications from the AUR in real time, so if you have a problem that requires immediate feedback or communication, please consider opening an issue in this GitHub repository, where I maintain all my AUR packages. Thank you for your understanding.
txtsd commented on 2024-10-26 20:15 (UTC) (edited on 2024-12-06 14:15 (UTC) by txtsd)
Alternate versions
llama.cpp
llama.cpp-vulkan
llama.cpp-sycl-fp16
llama.cpp-sycl-fp32
llama.cpp-cuda
llama.cpp-cuda-f16
llama.cpp-hip