Package Details: llama.cpp-vulkan b5604-1
Git Clone URL: https://aur.archlinux.org/llama.cpp-vulkan.git
Package Base: llama.cpp-vulkan
Description: Port of Facebook's LLaMA model in C/C++ (with Vulkan GPU optimizations)
Upstream URL: https://github.com/ggerganov/llama.cpp
Licenses: MIT
Conflicts: llama.cpp
Provides: llama.cpp
Submitter: txtsd
Maintainer: txtsd
Last Packager: txtsd
Votes: 7
Popularity: 2.21
First Submitted: 2024-10-26 20:10 (UTC)
Last Updated: 2025-06-07 14:31 (UTC)
Dependencies (16)
- blas-openblas
- blas64-openblas
- curl (curl-git [AUR], curl-c-ares [AUR])
- gcc-libs (gcc-libs-git [AUR], gccrs-libs-git [AUR], gcc-libs-snapshot [AUR])
- glibc (glibc-git [AUR], glibc-linux4 [AUR], glibc-eac [AUR])
- openmp
- python (python37 [AUR])
- python-numpy (python-numpy-git [AUR], python-numpy1 [AUR], python-numpy-mkl-bin [AUR], python-numpy-mkl-tbb [AUR], python-numpy-mkl [AUR])
- python-sentencepiece [AUR] (python-sentencepiece-git [AUR])
- vulkan-icd-loader (vulkan-icd-loader-git [AUR])
- cmake (cmake3 [AUR], cmake-git [AUR]) (make)
- git (git-git [AUR], git-gl [AUR]) (make)
- pkgconf (pkgconf-git [AUR]) (make)
- shaderc (shaderc-git [AUR]) (make)
- vulkan-headers (vulkan-headers-git [AUR]) (make)
- python-pytorch (python-pytorch-cxx11abi [AUR], python-pytorch-cxx11abi-opt [AUR], python-pytorch-cxx11abi-cuda [AUR], python-pytorch-cxx11abi-opt-cuda [AUR], python-pytorch-cxx11abi-rocm [AUR], python-pytorch-cxx11abi-opt-rocm [AUR], python-pytorch-cuda, python-pytorch-opt, python-pytorch-opt-cuda, python-pytorch-opt-rocm, python-pytorch-rocm) (optional)
Required by (0)
Sources (4)
Latest Comments
txtsd commented on 2025-06-07 15:06 (UTC)
@porzione Current version seems to build fine.
porzione commented on 2025-06-05 04:26 (UTC)
Can't build b5590, any ideas on how to fix it?
[ 7%] Building CXX object ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/ggml-vulkan.cpp.o
In file included from /usr/include/vulkan/vulkan_hpp_macros.hpp:35,
from /usr/include/vulkan/vulkan.hpp:11,
from /tmp/makepkg/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:8:
/usr/include/c++/15.1.1/ciso646:46:4: warning: #warning "<ciso646> is deprecated in C++17, use <version> to detect implementation-specific macros" [-Wcpp]
46 | # warning "<ciso646> is deprecated in C++17, use <version> to detect implementation-specific macros"
| ^~~~~~~
/tmp/makepkg/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp: In function ‘void ggml_vk_load_shaders(vk_device&)’:
/tmp/makepkg/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:2745:102: error: ‘conv_transpose_1d_f32_len’ was not declared in this scope
2745 | ggml_vk_create_pipeline(device, device->pipeline_conv_transpose_1d_f32, "conv_transpose_1d_f32", conv_transpose_1d_f32_len, conv_transpose_1d_f32_data, "main", 3, sizeof(vk_op_conv_transpose_1d_push_constants), {1, 1, 1}, {}, 1);
| ^~~~~~~~~~~~~~~~~~~~~~~~~
/tmp/makepkg/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:2745:129: error: ‘conv_transpose_1d_f32_data’ was not declared in this scope
2745 | ggml_vk_create_pipeline(device, device->pipeline_conv_transpose_1d_f32, "conv_transpose_1d_f32", conv_transpose_1d_f32_len, conv_transpose_1d_f32_data, "main", 3, sizeof(vk_op_conv_transpose_1d_push_constants), {1, 1, 1}, {}, 1);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
make[2]: *** [ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/build.make:197: ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/ggml-vulkan.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:2366: ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
==> ERROR: A failure occurred in build().
Aborting...
> 4 llama.cpp-vulkan (master) $ git status
On branch master
Your branch is up to date with 'origin/master'.
nothing to commit, working tree clean
> llama.cpp-vulkan (master) $ git log -1
commit 6d5edd88268fc54262d8642f8690a47be8565cca (HEAD -> master, origin/master, origin/HEAD)
Author: txtsd <code@ihavea.quest>
Date: Wed Jun 4 21:22:35 2025 +0000
upgpkg: llama.cpp-vulkan b5590-1
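For what it's worth, "was not declared in this scope" errors on `conv_transpose_1d_f32_len`/`_data` point at the generated Vulkan shader headers being stale, since those symbols are emitted by vulkan-shaders-gen at build time. A hedged first step (not a guaranteed fix) is a fully clean rebuild so the shader-generation step reruns:

```shell
# Sketch of a clean rebuild; assumes you are inside the cloned AUR directory.
# -C removes the old src/ tree before building, which forces the
#    vulkan-shaders-gen step to regenerate the conv_transpose_1d_* headers;
# -s installs missing dependencies; -f overwrites an existing package file.
cd llama.cpp-vulkan
makepkg -Csf
```

If that still fails, the mismatch may be between the pinned source tarball and an updated shaderc/Vulkan SDK, which only the maintainer can resolve by bumping the package.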
ioctl commented on 2025-04-14 09:42 (UTC)
The package conflicts with whisper.cpp-vulkan. Here is the install error:
llama.cpp-vulkan: /usr/bin/vulkan-shaders-gen exists in filesystem (owned by whisper.cpp-vulkan)
llama.cpp-vulkan: /usr/lib/libggml-base.so exists in filesystem (owned by whisper.cpp-vulkan)
llama.cpp-vulkan: /usr/lib/libggml-cpu.so exists in filesystem (owned by whisper.cpp-vulkan)
llama.cpp-vulkan: /usr/lib/libggml-vulkan.so exists in filesystem (owned by whisper.cpp-vulkan)
llama.cpp-vulkan: /usr/lib/libggml.so exists in filesystem (owned by whisper.cpp-vulkan)
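Both packages ship the same ggml shared libraries, so pacman refuses to overwrite the files. A possible workaround, assuming you do not need both packages installed at once, is to remove the conflicting one first:

```shell
# Illustrative only: both packages own /usr/lib/libggml*.so and
# /usr/bin/vulkan-shaders-gen, so remove whisper.cpp-vulkan first.
sudo pacman -R whisper.cpp-vulkan
# then build and install llama.cpp-vulkan as usual, e.g. via an AUR helper:
# paru -S llama.cpp-vulkan
```

A longer-term fix would be for the ggml libraries to be split into a shared dependency package, but that is a packaging decision for the maintainers.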
txtsd commented on 2025-02-14 05:14 (UTC)
@Sherlock-Holo I added python-pytorch as an optional dependency.
Sherlock-Holo commented on 2025-02-14 02:54 (UTC)
The convert_hf_to_gguf.py script is missing a dependency:
Traceback (most recent call last):
File "/usr/bin/convert_hf_to_gguf.py", line 22, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
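The traceback just means the optional python-pytorch dependency is not installed; since the conversion script imports torch at startup, installing the repo package should be enough:

```shell
# python-pytorch is listed as an optional dependency of this package;
# convert_hf_to_gguf.py imports torch unconditionally, so install it first.
sudo pacman -S python-pytorch
# sanity check that the script now starts (it should print its usage text):
python /usr/bin/convert_hf_to_gguf.py --help
```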
txtsd commented on 2024-12-20 04:07 (UTC)
Thanks for reporting @greyltc and @envolution.
I pushed a fix.
envolution commented on 2024-12-20 03:43 (UTC)
The -git version compiled for me on an Intel iGPU; this one failed similarly to @greyltc's.
greyltc commented on 2024-12-19 06:29 (UTC) (edited on 2024-12-19 06:30 (UTC) by greyltc)
This isn't building for me: /usr/bin/ld: attempted static link of dynamic object '/lib/libvulkan.so'
:
-- Found Vulkan: /lib/libvulkan.so (found version "1.4.303") found components: glslc glslangValidator
-- Vulkan found
-- GL_NV_cooperative_matrix2 not supported by glslc
-- Including Vulkan backend
-- Found CURL: /usr/lib/libcurl.so (found version "8.11.1")
-- Configuring done (2.8s)
-- Generating done (0.2s)
-- Build files have been written to: /build/llama.cpp-vulkan/src/build
[ 1%] Building C object ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o
[ 1%] Building C object ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o
[ 2%] Building CXX object ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o
[ 2%] Building CXX object ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o
[ 3%] Building CXX object ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o
[ 3%] Building C object ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o
[ 4%] Linking CXX static library libggml-base.a
[ 4%] Built target ggml-base
[ 4%] Building CXX object ggml/src/ggml-vulkan/vulkan-shaders/CMakeFiles/vulkan-shaders-gen.dir/vulkan-shaders-gen.cpp.o
[ 5%] Linking CXX executable ../../../../bin/vulkan-shaders-gen
/usr/bin/ld: attempted static link of dynamic object `/lib/libvulkan.so'
collect2: error: ld returned 1 exit status
make[2]: *** [ggml/src/ggml-vulkan/vulkan-shaders/CMakeFiles/vulkan-shaders-gen.dir/build.make:102: bin/vulkan-shaders-gen] Error 1
make[1]: *** [CMakeFiles/Makefile2:2382: ggml/src/ggml-vulkan/vulkan-shaders/CMakeFiles/vulkan-shaders-gen.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
Any ideas?
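(The maintainer's pushed fix resolved this, per the reply above.) For anyone hitting a similar "attempted static link of dynamic object" error locally: it means the vulkan-shaders-gen helper is being linked statically while libvulkan only exists as a shared object. As a hedged sketch, configuring CMake to build shared libraries avoids the static link attempt; the `GGML_VULKAN` option name is taken from recent llama.cpp versions and may differ in older ones:

```shell
# Hypothetical local workaround, run from the llama.cpp source tree:
# build shared libraries so the vulkan-shaders-gen helper links
# libvulkan.so dynamically instead of attempting a static link.
cmake -B build -DBUILD_SHARED_LIBS=ON -DGGML_VULKAN=ON
cmake --build build
```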
Pinned Comments
txtsd commented on 2024-10-26 20:15 (UTC) (edited on 2024-12-06 14:15 (UTC) by txtsd)
Alternate versions
llama.cpp
llama.cpp-vulkan
llama.cpp-sycl-fp16
llama.cpp-sycl-fp32
llama.cpp-cuda
llama.cpp-cuda-f16
llama.cpp-hip