Package Details: llama.cpp-vulkan b5985-1

Git Clone URL: https://aur.archlinux.org/llama.cpp-vulkan.git (read-only)
Package Base: llama.cpp-vulkan
Description: Port of Facebook's LLaMA model in C/C++ (with Vulkan GPU optimizations)
Upstream URL: https://github.com/ggerganov/llama.cpp
Licenses: MIT
Conflicts: ggml, libggml, llama.cpp
Provides: llama.cpp
Submitter: txtsd
Maintainer: txtsd
Last Packager: txtsd
Votes: 9
Popularity: 1.83
First Submitted: 2024-10-26 20:10 (UTC)
Last Updated: 2025-07-24 20:36 (UTC)

Pinned Comments

txtsd commented on 2024-10-26 20:15 (UTC) (edited on 2024-12-06 14:15 (UTC) by txtsd)

Alternate versions

llama.cpp
llama.cpp-vulkan
llama.cpp-sycl-fp16
llama.cpp-sycl-fp32
llama.cpp-cuda
llama.cpp-cuda-f16
llama.cpp-hip

Latest Comments


wonka commented on 2025-07-07 10:25 (UTC)

This has changed now with the new ggml-vulkan:

[ 27%] Building CXX object src/CMakeFiles/llama.dir/llama-quant.cpp.o
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.cpp: In function ‘bool weight_buft_supported(const llama_hparams&, ggml_tensor*, ggml_op, ggml_backend_buffer_type_t, ggml_backend_dev_t)’:
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.cpp:231:42: error: too many arguments to function ‘ggml_tensor* ggml_ssm_scan(ggml_context*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*)’
  231 |                 op_tensor = ggml_ssm_scan(ctx, s, x, dt, w, B, C, ids);
      |                             ~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/../include/llama.h:4,
                 from /var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.h:3,
                 from /var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.cpp:1:
/usr/include/ggml.h:2009:35: note: declared here
 2009 |     GGML_API struct ggml_tensor * ggml_ssm_scan(
      |                                   ^~~~~~~~~~~~~
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.cpp: In lambda function:
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.cpp:9922:37: error: too many arguments to function ‘ggml_tensor* ggml_ssm_scan(ggml_context*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*)’
 9922 |                 return ggml_ssm_scan(ctx, ssm, x, dt, A, B, C, ids);
      |                        ~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/ggml.h:2009:35: note: declared here
 2009 |     GGML_API struct ggml_tensor * ggml_ssm_scan(
      |                                   ^~~~~~~~~~~~~
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.cpp: In lambda function:
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.cpp:10046:37: error: too many arguments to function ‘ggml_tensor* ggml_ssm_scan(ggml_context*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*)’
10046 |                 return ggml_ssm_scan(ctx, ssm, x, dt, A, B, C, ids);
      |                        ~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/ggml.h:2009:35: note: declared here
 2009 |     GGML_API struct ggml_tensor * ggml_ssm_scan(
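The root cause above is an arity mismatch: the system header `/usr/include/ggml.h` declares `ggml_ssm_scan` with seven parameters, while the llama.cpp source pinned by this PKGBUILD calls it with eight (the extra `ids` tensor). A rough sketch of how to inspect a header's parameter count follows; the heredoc is a stand-in for the real `/usr/include/ggml.h`, and the parameter names are illustrative, not copied from upstream:

```shell
# The heredoc stands in for /usr/include/ggml.h (real path from the log);
# parameter names are illustrative.
cat <<'EOF' > /tmp/ggml_sample.h
GGML_API struct ggml_tensor * ggml_ssm_scan(
        struct ggml_context * ctx,
        struct ggml_tensor  * s,
        struct ggml_tensor  * x,
        struct ggml_tensor  * dt,
        struct ggml_tensor  * A,
        struct ggml_tensor  * B,
        struct ggml_tensor  * C);
EOF
# Print the declaration and count its parameters. The pinned llama.cpp
# source passes 8 arguments, so a count of 7 here is exactly the
# "too many arguments" situation from the log.
sed -n '/ggml_ssm_scan(/,/);/p' /tmp/ggml_sample.h | grep -c '^ *struct ggml_'
```

In short, the system ggml provider and this package's pinned llama.cpp release have to move in lockstep; when one updates before the other, the build breaks like this.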

wonka commented on 2025-07-05 09:21 (UTC)

The same issue was reported here as well:

https://aur.archlinux.org/packages/llama.cpp-hip

wonka commented on 2025-07-04 08:32 (UTC) (edited on 2025-07-04 08:40 (UTC) by wonka)

Still an error with b5827:

/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-graph.cpp:618:23: error: ‘ggml_reglu’ was not declared in this scope; did you mean ‘ggml_relu’?
  618 |                 cur = ggml_reglu(ctx0, cur);
      |                       ^~~~~~~~~~
      |                       ggml_relu
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-graph.cpp: In member function ‘ggml_tensor* llm_graph_context::build_moe_ffn(ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, int64_t, int64_t, llm_ffn_op_type, bool, bool, float, llama_expert_gating_func_type, int) const’:
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-graph.cpp:751:23: error: ‘ggml_swiglu_split’ was not declared in this scope
  751 |                 cur = ggml_swiglu_split(ctx0, cur, up);
      |                       ^~~~~~~~~~~~~~~~~
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-graph.cpp:759:23: error: ‘ggml_geglu_split’ was not declared in this scope
  759 |                 cur = ggml_geglu_split(ctx0, cur, up);
      |                       ^~~~~~~~~~~~~~~~
make[2]: *** [src/CMakeFiles/llama.dir/build.make:191: src/CMakeFiles/llama.dir/llama-graph.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [CMakeFiles/Makefile2:1007: src/CMakeFiles/llama.dir/all] Error 2
make: *** [Makefile:136: all] Error 2

txtsd commented on 2025-07-01 02:00 (UTC)

@eSPiYa It's an upstream issue at the moment.

eSPiYa commented on 2025-06-30 21:53 (UTC)

I already installed ggml-vulkan, but the build still fails

/home/<user>/.cache/paru/clone/llama.cpp-vulkan/src/llama.cpp/src/llama-graph.cpp: In member function ‘ggml_tensor* llm_graph_context::build_ffn(ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, llm_ffn_op_type, llm_ffn_gate_type, int) const’:
/home/<user>/.cache/paru/clone/llama.cpp-vulkan/src/llama.cpp/src/llama-graph.cpp:564:23: error: ‘ggml_swiglu_split’ was not declared in this scope
564 |                 cur = ggml_swiglu_split(ctx0, cur, tmp);
/home/<user>/.cache/paru/clone/llama.cpp-vulkan/src/llama.cpp/src/llama-graph.cpp:573:23: error: ‘ggml_geglu_split’ was not declared in this scope
573 |                 cur = ggml_geglu_split(ctx0, cur, tmp);
    |                       ^~~~~~~~~~~~~~~~
/home/<user>/.cache/paru/clone/llama.cpp-vulkan/src/llama.cpp/src/llama-graph.cpp:586:23: error: ‘ggml_reglu_split’ was not declared in this scope
586 |                 cur = ggml_reglu_split(ctx0, cur, tmp);
    |                       ^~~~~~~~~~~~~~~~
/home/<user>/.cache/paru/clone/llama.cpp-vulkan/src/llama.cpp/src/llama-graph.cpp:603:23: error: ‘ggml_swiglu’ was not declared in this scope; did you mean ‘ggml_silu’?
603 |                 cur = ggml_swiglu(ctx0, cur);
    |                       ^~~~~~~~~~~
    |                       ggml_silu
/home/<user>/.cache/paru/clone/llama.cpp-vulkan/src/llama.cpp/src/llama-graph.cpp:608:23: error: ‘ggml_geglu’ was not declared in this scope; did you mean ‘ggml_gelu’?
608 |                 cur = ggml_geglu(ctx0, cur);
    |                       ^~~~~~~~~~
    |                       ggml_gelu
/home/<user>/.cache/paru/clone/llama.cpp-vulkan/src/llama.cpp/src/llama-graph.cpp:613:23: error: ‘ggml_reglu’ was not declared in this scope; did you mean ‘ggml_relu’?
613 |                 cur = ggml_reglu(ctx0, cur);
    |                       ^~~~~~~~~~
    |                       ggml_relu
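These "not declared in this scope" errors point the same way: the installed system ggml predates the GLU ops (`ggml_swiglu`, `ggml_geglu`, `ggml_reglu`) that this llama.cpp release uses. A quick sketch for checking whether a header already declares them; the heredoc stands in for `/usr/include/ggml.h` on a real system:

```shell
# Stand-in for an older /usr/include/ggml.h that only has the classic
# activations, not the fused GLU variants named in the errors above.
cat <<'EOF' > /tmp/ggml_old.h
GGML_API struct ggml_tensor * ggml_silu(struct ggml_context * ctx, struct ggml_tensor * a);
GGML_API struct ggml_tensor * ggml_gelu(struct ggml_context * ctx, struct ggml_tensor * a);
EOF
# If the symbol is absent, the build will fail until the ggml provider
# is updated to a release that ships it.
if grep -q 'ggml_swiglu' /tmp/ggml_old.h; then
    echo "header already has the GLU ops"
else
    echo "system ggml predates the GLU ops"
fi
```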

munzirtaha commented on 2025-06-23 22:40 (UTC) (edited on 2025-06-23 22:41 (UTC) by munzirtaha)

❯ paru -S llama.cpp-vulkan
:: Resolving dependencies...
error: could not find all required packages:
libggml-vulkan (wanted by: llama.cpp-vulkan)

❯ paru -Sii ggml-vulkan-git |rg Provides
Provides        : ggml-vulkan  ggml  libggml

Use ggml-vulkan or libggml as the dependency instead.

txtsd commented on 2025-06-15 12:04 (UTC)

This package now uses the system libggml, so it should work alongside whisper.cpp.

Building of tests and examples has been turned off.
kompute has been removed.
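For reference, the behavior described above roughly corresponds to configure flags like the following. This is only a sketch assembled from upstream llama.cpp's documented CMake options; the authoritative list is in the PKGBUILD itself:

```shell
# Hypothetical configure step mirroring the pinned comment:
# Vulkan backend on, system ggml, tests/examples off.
cmake -B build \
    -DCMAKE_BUILD_TYPE=Release \
    -DGGML_VULKAN=ON \
    -DLLAMA_USE_SYSTEM_GGML=ON \
    -DLLAMA_BUILD_TESTS=OFF \
    -DLLAMA_BUILD_EXAMPLES=OFF
```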

txtsd commented on 2025-06-07 15:06 (UTC)

@porzione The current version seems to build fine.

porzione commented on 2025-06-05 04:26 (UTC)

Can't build b5590, any ideas on how to fix it?

[  7%] Building CXX object ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/ggml-vulkan.cpp.o
In file included from /usr/include/vulkan/vulkan_hpp_macros.hpp:35,
                 from /usr/include/vulkan/vulkan.hpp:11,
                 from /tmp/makepkg/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:8:
/usr/include/c++/15.1.1/ciso646:46:4: warning: #warning "<ciso646> is deprecated in C++17, use <version> to detect implementation-specific macros" [-Wcpp]
   46 | #  warning "<ciso646> is deprecated in C++17, use <version> to detect implementation-specific macros"
      |    ^~~~~~~
/tmp/makepkg/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp: In function ‘void ggml_vk_load_shaders(vk_device&)’:
/tmp/makepkg/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:2745:102: error: ‘conv_transpose_1d_f32_len’ was not declared in this scope
 2745 |     ggml_vk_create_pipeline(device, device->pipeline_conv_transpose_1d_f32, "conv_transpose_1d_f32", conv_transpose_1d_f32_len, conv_transpose_1d_f32_data, "main", 3, sizeof(vk_op_conv_transpose_1d_push_constants), {1, 1, 1}, {}, 1);
      |                                                                                                      ^~~~~~~~~~~~~~~~~~~~~~~~~
/tmp/makepkg/llama.cpp-vulkan/src/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:2745:129: error: ‘conv_transpose_1d_f32_data’ was not declared in this scope
 2745 |     ggml_vk_create_pipeline(device, device->pipeline_conv_transpose_1d_f32, "conv_transpose_1d_f32", conv_transpose_1d_f32_len, conv_transpose_1d_f32_data, "main", 3, sizeof(vk_op_conv_transpose_1d_push_constants), {1, 1, 1}, {}, 1);
      |                                                                                                                                 ^~~~~~~~~~~~~~~~~~~~~~~~~~
make[2]: *** [ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/build.make:197: ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/ggml-vulkan.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:2366: ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
==> ERROR: A failure occurred in build().
    Aborting...
> 4 llama.cpp-vulkan (master) $ git status 
On branch master
Your branch is up to date with 'origin/master'.

nothing to commit, working tree clean
> llama.cpp-vulkan (master) $ git log -1
commit 6d5edd88268fc54262d8642f8690a47be8565cca (HEAD -> master, origin/master, origin/HEAD)
Author: txtsd <code@ihavea.quest>
Date:   Wed Jun 4 21:22:35 2025 +0000

    upgpkg: llama.cpp-vulkan b5590-1

ioctl commented on 2025-04-14 09:42 (UTC)

This package conflicts with whisper.cpp-vulkan. Here is the install error:

llama.cpp-vulkan: /usr/bin/vulkan-shaders-gen exists in filesystem (owned by whisper.cpp-vulkan)
llama.cpp-vulkan: /usr/lib/libggml-base.so exists in filesystem (owned by whisper.cpp-vulkan)
llama.cpp-vulkan: /usr/lib/libggml-cpu.so exists in filesystem (owned by whisper.cpp-vulkan)
llama.cpp-vulkan: /usr/lib/libggml-vulkan.so exists in filesystem (owned by whisper.cpp-vulkan)
llama.cpp-vulkan: /usr/lib/libggml.so exists in filesystem (owned by whisper.cpp-vulkan)