Package Details: llama.cpp-cublas-git b6663.r1.c8dedc999-1
| Git Clone URL: | https://aur.archlinux.org/llama.cpp-cublas-git.git (read-only; see the build sketch below the table) |
|---|---|
| Package Base: | llama.cpp-cublas-git |
| Description: | Port of Facebook's LLaMA model in C/C++ (with NVIDIA CUDA optimizations) |
| Upstream URL: | https://github.com/ggerganov/llama.cpp |
| Licenses: | MIT |
| Conflicts: | llama.cpp, llama.cpp-cublas |
| Submitter: | robertfoster |
| Maintainer: | robertfoster |
| Last Packager: | robertfoster |
| Votes: | 0 |
| Popularity: | 0.000000 |
| First Submitted: | 2024-11-15 20:33 (UTC) |
| Last Updated: | 2025-10-01 22:36 (UTC) |
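A minimal build sketch using the clone URL above, assuming the standard manual AUR workflow with makepkg; package names other than those listed on this page are not implied.

```sh
# Minimal sketch of the usual manual AUR workflow for this package.
# Assumes base-devel, git, and the CUDA toolchain are already installed.
# Note: makepkg -s resolves official-repo dependencies only; AUR
# dependencies such as libggml-cuda-git must be built and installed
# first (or use an AUR helper, which automates these same steps).
git clone https://aur.archlinux.org/llama.cpp-cublas-git.git
cd llama.cpp-cublas-git
makepkg -si   # build the package and install it with its repo dependencies
```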
Dependencies (6)
- libggml-cuda-git (AUR)
- cmake (make dependency; alternative providers: cmake3 (AUR), cmake-git (AUR))
- git (make dependency; alternative providers: git-git (AUR), git-gl (AUR), git-wd40 (AUR))
- python-gguf (AUR) (optional) – for the convert_hf_to_gguf.py Python script (see the conversion example after this list)
- python-numpy (optional) – for the convert_hf_to_gguf.py Python script; alternative providers: python-numpy-git (AUR), python-numpy-mkl-bin (AUR), python-numpy1 (AUR), python-numpy-mkl (AUR), python-numpy-mkl-tbb (AUR)
- python-pytorch (optional) – for the convert_hf_to_gguf.py Python script; alternative providers: python-pytorch-cxx11abi (AUR), python-pytorch-cxx11abi-opt (AUR), python-pytorch-cxx11abi-cuda (AUR), python-pytorch-cxx11abi-opt-cuda (AUR), python-pytorch-cxx11abi-rocm (AUR), python-pytorch-cxx11abi-opt-rocm (AUR), python-pytorch-cuda12.9 (AUR), python-pytorch-opt-cuda12.9 (AUR), python-pytorch-cuda, python-pytorch-opt, python-pytorch-opt-cuda, python-pytorch-opt-rocm, python-pytorch-rocm
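The optional Python dependencies above exist only to run the Hugging Face to GGUF conversion script shipped with llama.cpp. A hedged example of that use; the model path and flags are illustrative, so check `convert_hf_to_gguf.py --help` for the options in the installed version.

```sh
# Convert a local Hugging Face checkpoint directory to a GGUF file.
# Requires python-gguf, python-numpy, and python-pytorch.
# The output name and --outtype value below are only examples.
python convert_hf_to_gguf.py /path/to/hf-model-dir \
    --outfile model.gguf \
    --outtype f16
```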
Latest Comments
aquilarubra commented on 2025-10-09 13:59 (UTC)
llama.cpp-cublas-git/src/llama.cpp-cublas/src/llama-model.cpp:19331:43: error: ‘ggml_xielu’ was not declared in this scope; did you mean ‘ggml_silu’?
19331 | ggml_tensor * activated = ggml_xielu(ctx0, up, alpha_n_val, alpha_p_val, beta_val, eps_val);
      |                           ^~~~~~~~~~
      |                           ggml_silu
make[2]: *** [src/CMakeFiles/llama.dir/build.make:359: src/CMakeFiles/llama.dir/llama-model.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:1053: src/CMakeFiles/llama.dir/all] Error 2
make: *** [Makefile:136: all] Error 2
==> ERROR: A failure occurred in build(). Aborting...
-> error making: llama.cpp-cublas-git - exit status 4
-> Failed to install the following packages. Manual intervention is required:
llama.cpp-cublas-git - exit status 4
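A speculative note on the failure above: the missing `ggml_xielu` symbol suggests the installed libggml-cuda-git build may predate the operator that current llama.cpp sources use. Assuming that is the cause, a hedged sketch of one possible workaround is to rebuild the ggml dependency first and then retry; yay is just one AUR helper, and a manual makepkg rebuild of both packages works the same way.

```sh
# Speculative workaround, assuming the error comes from a stale
# libggml-cuda-git build rather than from this package itself:
# rebuild the ggml dependency from current sources, then retry.
yay -S --rebuild libggml-cuda-git
yay -S llama.cpp-cublas-git
```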