This has changed now with the new ggml-vulkan: the build fails with a ggml_ssm_scan signature mismatch against the installed /usr/include/ggml.h:
[ 27%] Building CXX object src/CMakeFiles/llama.dir/llama-quant.cpp.o
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.cpp: In function ‘bool weight_buft_supported(const llama_hparams&, ggml_tensor*, ggml_op, ggml_backend_buffer_type_t, ggml_backend_dev_t)’:
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.cpp:231:42: error: too many arguments to function ‘ggml_tensor* ggml_ssm_scan(ggml_context*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*)’
231 | op_tensor = ggml_ssm_scan(ctx, s, x, dt, w, B, C, ids);
| ~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/../include/llama.h:4,
from /var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.h:3,
from /var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.cpp:1:
/usr/include/ggml.h:2009:35: note: declared here
2009 | GGML_API struct ggml_tensor * ggml_ssm_scan(
| ^~~~~~~~~~~~~
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.cpp: In lambda function:
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.cpp:9922:37: error: too many arguments to function ‘ggml_tensor* ggml_ssm_scan(ggml_context*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*)’
9922 | return ggml_ssm_scan(ctx, ssm, x, dt, A, B, C, ids);
| ~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/ggml.h:2009:35: note: declared here
2009 | GGML_API struct ggml_tensor * ggml_ssm_scan(
| ^~~~~~~~~~~~~
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.cpp: In lambda function:
/var/cache/private/pamac/llama.cpp-vulkan/src/llama.cpp/src/llama-model.cpp:10046:37: error: too many arguments to function ‘ggml_tensor* ggml_ssm_scan(ggml_context*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*, ggml_tensor*)’
10046 | return ggml_ssm_scan(ctx, ssm, x, dt, A, B, C, ids);
| ~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/ggml.h:2009:35: note: declared here
2009 | GGML_API struct ggml_tensor * ggml_ssm_scan(
Pinned Comments

txtsd commented on 2024-10-26 20:15 (UTC) (edited on 2024-12-06 14:15 (UTC) by txtsd)

Alternate versions:
- llama.cpp
- llama.cpp-vulkan
- llama.cpp-sycl-fp16
- llama.cpp-sycl-fp32
- llama.cpp-cuda
- llama.cpp-cuda-f16
- llama.cpp-hip