Package Details: llama.cpp-vulkan b7792-1
| Git Clone URL: | https://aur.archlinux.org/llama.cpp-vulkan.git |
|---|---|
| Package Base: | llama.cpp-vulkan |
| Description: | Port of Facebook's LLaMA model in C/C++ (with Vulkan GPU optimizations) |
| Upstream URL: | https://github.com/ggml-org/llama.cpp |
| Licenses: | MIT |
| Conflicts: | ggml, libggml, llama.cpp, stable-diffusion.cpp |
| Provides: | llama.cpp |
| Submitter: | txtsd |
| Maintainer: | Orion-zhen |
| Last Packager: | Orion-zhen |
| Votes: | 18 |
| Popularity: | 2.70 |
| First Submitted: | 2024-10-26 20:10 (UTC) |
| Last Updated: | 2026-01-22 00:24 (UTC) |
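For reference, the usual AUR workflow to build and install this package from the clone URL above:

git clone https://aur.archlinux.org/llama.cpp-vulkan.git
cd llama.cpp-vulkan
makepkg -si  # -s resolves dependencies, -i installs the built package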
Dependencies (15)
- curl (curl-git (AUR), curl-c-ares (AUR))
- gcc-libs (gcc-libs-git (AUR), gccrs-libs-git (AUR), gcc-libs-snapshot (AUR))
- glibc (glibc-git (AUR), glibc-eac (AUR), glibc-git-native-pgo (AUR))
- python
- vulkan-icd-loader (vulkan-icd-loader-git (AUR))
- cmake (cmake3 (AUR), cmake-git (AUR)) (make)
- git (git-git (AUR), git-gl (AUR)) (make)
- shaderc (shaderc-git (AUR)) (make)
- vulkan-headers (vulkan-headers-git (AUR)) (make)
- python-gguf (AUR) (optional) – needed for convert_hf_to_gguf.py (see the example after this list)
- python-numpy (python-numpy-git (AUR), python-numpy-mkl-bin (AUR), python-numpy-mkl (AUR), python-numpy-mkl-tbb (AUR), python-numpy1 (AUR)) (optional) – needed for convert_hf_to_gguf.py
- python-pytorch (python-pytorch-cxx11abi (AUR), python-pytorch-cxx11abi-opt (AUR), python-pytorch-cxx11abi-cuda (AUR), python-pytorch-cxx11abi-opt-cuda (AUR), python-pytorch-cxx11abi-rocm (AUR), python-pytorch-cxx11abi-opt-rocm (AUR), python-pytorch-cuda12.9 (AUR), python-pytorch-opt-cuda12.9 (AUR), python-pytorch-cuda, python-pytorch-opt, python-pytorch-opt-cuda, python-pytorch-opt-rocm, python-pytorch-rocm) (optional) – needed for convert_hf_to_gguf.py
- python-safetensors (AUR) (python-safetensors-bin (AUR)) (optional) – needed for convert_hf_to_gguf.py
- python-sentencepiece (AUR) (python-sentencepiece-git (AUR), python-sentencepiece-bin (AUR)) (optional) – needed for convert_hf_to_gguf.py
- python-transformers (AUR) (optional) – needed for convert_hf_to_gguf.py
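The optional Python dependencies above are only needed for converting Hugging Face checkpoints to GGUF with convert_hf_to_gguf.py. A sketch of a typical invocation, assuming the script from the llama.cpp source tree is on hand (the model path and output name are placeholders):

python convert_hf_to_gguf.py /path/to/hf-model --outfile model.gguf --outtype q8_0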
Required by (4)
Sources (3)
nula commented on 2026-01-07 13:25 (UTC) (edited on 2026-01-07 13:26 (UTC) by nula)
Hi @Orion-zhen, b7652-1 builds successfully. I had problems with the previous 2-3 versions (b7589), while building from source worked, so I guess it doesn't matter now. Thanks anyway.
Orion-zhen commented on 2026-01-07 08:55 (UTC)
Hi, @nula. I couldn't reproduce the error; I'm building successfully on both a Ryzen 9 7950X and a GitHub Actions runner.
nula commented on 2026-01-03 17:01 (UTC)
I'm not able to build it anymore (Ryzen 9 7900):
[ 3%] No patch step for 'vulkan-shaders-gen'
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c: In function ‘ggml_fp32_to_bf16_row’:
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:482:9: error: implicit declaration of function ‘_mm512_storeu_si512’ [-Wimplicit-function-declaration]
482 | _mm512_storeu_si512(
| ^~~~~~~~~~~~~~~~~~~
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:483:14: error: ‘__m512i’ undeclared (first use in this function); did you mean ‘m512i’?
483 | (__m512i *)(y + i),
| ^~~~~~~
| m512i
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:483:14: note: each undeclared identifier is reported only once for each function it appears in
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:483:23: error: expected expression before ‘)’ token
483 | (__m512i *)(y + i),
| ^
[ 4%] Performing configure step for 'vulkan-shaders-gen'
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:484:19: error: implicit declaration of function ‘_mm512_cvtne2ps_pbh’ [-Wimplicit-function-declaration]
484 | m512i(_mm512_cvtne2ps_pbh(_mm512_loadu_ps(x + i + 16),
| ^~~~~~~~~~~~~~~~~~~
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:61:28: note: in definition of macro ‘m512i’
61 | #define m512i(p) (__m512i)(p)
| ^
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:484:39: error: implicit declaration of function ‘_mm512_loadu_ps’ [-Wimplicit-function-declaration]
484 | m512i(_mm512_cvtne2ps_pbh(_mm512_loadu_ps(x + i + 16),
| ^~~~~~~~~~~~~~~
/home/fd/Downloads/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml.c:61:28: note: in definition of macro ‘m512i’
61 | #define m512i(p) (__m512i)(p)
| ^
make[2]: *** [ggml/src/CMakeFiles/ggml-base.dir/build.make:79: ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o] Error 1
make[2]: *** Waiting for unfinished jobs....
-- The C compiler identification is GNU 15.2.1
[ 4%] Linking CXX executable ../../bin/llama-gemma3-cli
[ 4%] Linking CXX executable ../../bin/llama-minicpmv-cli
[ 4%] Linking CXX executable ../../bin/llama-qwen2vl-cli
[ 4%] Linking CXX executable ../../bin/llama-llava-cli
-- The CXX compiler identification is GNU 15.2.1
-- Detecting C compiler ABI info
[ 4%] Built target llama-gemma3-cli
[ 4%] Built target llama-minicpmv-cli
[ 4%] Built target llama-llava-cli
[ 4%] Built target llama-qwen2vl-cli
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
[ 4%] Built target xxhash
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Enabling coopmat glslc support
-- Enabling coopmat2 glslc support
-- Enabling dot glslc support
-- Enabling bfloat16 glslc support
-- Configuring done (0.4s)
-- Generating done (0.0s)
-- Build files have been written to: /home/fd/Downloads/llama.cpp-vulkan/src/build/ggml/src/ggml-vulkan/vulkan-shaders-gen-prefix/src/vulkan-shaders-gen-build
[ 4%] Performing build step for 'vulkan-shaders-gen'
[ 50%] Building CXX object CMakeFiles/vulkan-shaders-gen.dir/vulkan-shaders-gen.cpp.o
make[1]: *** [CMakeFiles/Makefile2:1267: ggml/src/CMakeFiles/ggml-base.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[100%] Linking CXX executable vulkan-shaders-gen
[ 4%] Linking CXX static library libcpp-httplib.a
[ 4%] Built target cpp-httplib
[100%] Built target vulkan-shaders-gen
[ 4%] Performing install step for 'vulkan-shaders-gen'
-- Installing: /home/fd/Downloads/llama.cpp-vulkan/src/build/Release/./vulkan-shaders-gen
[ 4%] Completed 'vulkan-shaders-gen'
[ 4%] Built target vulkan-shaders-gen
make: *** [Makefile:136: all] Error 2
==> ERROR: A failure occurred in build().
Aborting...
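The undeclared _mm512_* intrinsics in the log point at ggml.c compiling an AVX-512 code path without the matching -march flags. A hedged workaround sketch only (GGML_NATIVE and GGML_AVX512 are ggml CMake options; whether they fix this exact failure is unverified):

cmake -B build -DGGML_NATIVE=OFF -DGGML_AVX512=OFF  # combine with the PKGBUILD's other configure flags, then rebuild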
Orion-zhen commented on 2025-12-17 05:20 (UTC)
Hi, @Dominiquini. I have added that line, thank you for your advice.
Dominiquini commented on 2025-12-17 03:13 (UTC)
@Orion-zhen: Add this line to the PKGBUILD to avoid replacing the config file after every update:
backup=("etc/conf.d/llama.cpp")
Orion-zhen commented on 2025-12-16 18:30 (UTC) (edited on 2025-12-16 18:30 (UTC) by Orion-zhen)
Hi, @Basxto. I have added .service and .conf files. Now you are able to start llama-server more easily :)
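A quick usage sketch, assuming the unit is named llama.cpp.service as in the llama.cpp package (the exact unit name is an assumption) and reads its defaults from /etc/conf.d/llama.cpp:

sudo systemctl enable --now llama.cpp.service  # start llama-server now and at boot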
Basxto commented on 2025-12-15 12:18 (UTC)
Consider adding the .service and .conf files from https://aur.archlinux.org/packages/llama.cpp
That would make it a lot easier to (auto)start llama-server in router mode.
Crandel commented on 2025-09-24 06:39 (UTC)
The original URL https://github.com/ggerganov/llama.cpp is currently just an alias for https://github.com/ggml-org/llama.cpp. I guess it would be better for visibility to use the current URL in the PKGBUILD.
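The change would be a one-line edit in the PKGBUILD, roughly (a sketch; only the URL swap is the point):

url="https://github.com/ggml-org/llama.cpp"  # was https://github.com/ggerganov/llama.cpp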
Pinned Comments
Orion-zhen commented on 2025-09-02 03:17 (UTC) (edited on 2025-09-02 13:20 (UTC) by Orion-zhen)
I can't receive notifications from the AUR in real time, so if you have a problem that requires immediate feedback or communication, please consider submitting an issue to this GitHub repository, where I maintain all my AUR packages. Thank you for your understanding.
txtsd commented on 2024-10-26 20:15 (UTC) (edited on 2024-12-06 14:15 (UTC) by txtsd)
Alternate versions
llama.cpp
llama.cpp-vulkan
llama.cpp-sycl-fp16
llama.cpp-sycl-fp32
llama.cpp-cuda
llama.cpp-cuda-f16
llama.cpp-hip