Package Details: llama.cpp-cublas-git b6663.r1.c8dedc999-1

Git Clone URL: https://aur.archlinux.org/llama.cpp-cublas-git.git (read-only)
Package Base: llama.cpp-cublas-git
Description: Port of Facebook's LLaMA model in C/C++ (with NVIDIA CUDA optimizations)
Upstream URL: https://github.com/ggerganov/llama.cpp
Licenses: MIT
Conflicts: llama.cpp, llama.cpp-cublas
Submitter: robertfoster
Maintainer: robertfoster
Last Packager: robertfoster
Votes: 0
Popularity: 0.000000
First Submitted: 2024-11-15 20:33 (UTC)
Last Updated: 2025-10-01 22:36 (UTC)

Latest Comments

aquilarubra commented on 2025-10-09 13:59 (UTC)

llama.cpp-cublas-git/src/llama.cpp-cublas/src/llama-model.cpp:19331:43: error: ‘ggml_xielu’ was not declared in this scope; did you mean ‘ggml_silu’?
19331 |     ggml_tensor * activated = ggml_xielu(ctx0, up, alpha_n_val, alpha_p_val, beta_val, eps_val);
      |                               ^~~~~~~~~~
      |                               ggml_silu
make[2]: *** [src/CMakeFiles/llama.dir/build.make:359: src/CMakeFiles/llama.dir/llama-model.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:1053: src/CMakeFiles/llama.dir/all] Error 2
make: *** [Makefile:136: all] Error 2
==> ERROR: A failure occurred in build(). Aborting...
 -> error making: llama.cpp-cublas-git - exit status 4
 -> Failed to install the following packages. Manual intervention is required:
llama.cpp-cublas-git - exit status 4