Package Details: ik-llama.cpp-cuda r3884.6d2e7ca4-1
| Field | Value |
|---|---|
| Git Clone URL | https://aur.archlinux.org/ik-llama.cpp-cuda.git (read-only) |
| Package Base | ik-llama.cpp-cuda |
| Description | llama.cpp fork with additional SOTA quants and improved performance (CUDA Backend) |
| Upstream URL | https://github.com/ikawrakow/ik_llama.cpp |
| Licenses | MIT |
| Conflicts | ggml, ik-llama.cpp, ik-llama.cpp-vulkan, libggml, llama.cpp, llama.cpp-cuda, llama.cpp-hip, llama.cpp-vulkan |
| Provides | llama.cpp |
| Submitter | Orion-zhen |
| Maintainer | Orion-zhen |
| Last Packager | Orion-zhen |
| Votes | 2 |
| Popularity | 1.17 |
| First Submitted | 2025-07-31 01:41 (UTC) |
| Last Updated | 2025-09-14 00:21 (UTC) |
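The package can be built and installed from the clone URL above in the usual AUR way; a minimal sketch, assuming `base-devel` and `git` are already installed:

```shell
# Clone the AUR repository (read-only URL from the details above)
git clone https://aur.archlinux.org/ik-llama.cpp-cuda.git
cd ik-llama.cpp-cuda

# Build and install: -s resolves dependencies via pacman, -i installs the built package
makepkg -si
```

Note the Conflicts entry above: pacman will prompt to remove any installed llama.cpp/ggml variant (e.g. llama.cpp-cuda or llama.cpp-vulkan) before this package can be installed.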
Dependencies (13)
- cuda (cuda11.1AUR, cuda-12.2AUR, cuda12.0AUR, cuda11.4AUR, cuda11.4-versionedAUR, cuda12.0-versionedAUR, cuda-12.5AUR)
- curl (curl-gitAUR, curl-c-aresAUR)
- gcc-libs (gcc-libs-gitAUR, gccrs-libs-gitAUR, gcc-libs-snapshotAUR)
- glibc (glibc-gitAUR, glibc-eacAUR)
- nvidia-utils (nvidia-410xx-utilsAUR, nvidia-440xx-utilsAUR, nvidia-430xx-utilsAUR, nvidia-340xx-utilsAUR, nvidia-525xx-utilsAUR, nvidia-510xx-utilsAUR, nvidia-390xx-utilsAUR, nvidia-vulkan-utilsAUR, nvidia-535xx-utilsAUR, nvidia-utils-teslaAUR, nvidia-470xx-utilsAUR, nvidia-utils-betaAUR, nvidia-550xx-utilsAUR)
- python (python37AUR)
- cmake (cmake3AUR, cmake-gitAUR) (make)
- git (git-gitAUR, git-glAUR) (make)
- python-numpy (python-numpy-gitAUR, python-numpy1AUR, python-numpy-mkl-tbbAUR, python-numpy-mklAUR, python-numpy-mkl-binAUR) (optional) – needed for convert_hf_to_gguf.py
- python-pytorch (python-pytorch-cxx11abiAUR, python-pytorch-cxx11abi-optAUR, python-pytorch-cxx11abi-cudaAUR, python-pytorch-cxx11abi-opt-cudaAUR, python-pytorch-cxx11abi-rocmAUR, python-pytorch-cxx11abi-opt-rocmAUR, python-pytorch-cuda, python-pytorch-opt, python-pytorch-opt-cuda, python-pytorch-opt-rocm, python-pytorch-rocm) (optional) – needed for convert_hf_to_gguf.py
- python-safetensorsAUR (python-safetensors-binAUR) (optional) – needed for convert_hf_to_gguf.py
- python-sentencepieceAUR (python-sentencepiece-gitAUR) (optional) – needed for convert_hf_to_gguf.py
- python-transformersAUR (optional) – needed for convert_hf_to_gguf.py
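The optional Python dependencies above are only needed to run the model-conversion script that ships with llama.cpp and its forks. A hedged sketch of a typical workflow; the script location, model path, and flags below are illustrative, so check the upstream repository for the exact invocation:

```shell
# Install the optional dependencies that live in the official repos
# (the AUR-only ones need an AUR helper or a manual makepkg build)
sudo pacman -S --asdeps python-numpy python-pytorch

# Convert a local Hugging Face checkpoint to GGUF
# (./my-hf-model and the output name are placeholders)
python convert_hf_to_gguf.py ./my-hf-model --outfile my-model.gguf
```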
Pinned Comments
Orion-zhen commented on 2025-09-02 03:19 (UTC) (edited on 2025-09-02 13:21 (UTC) by Orion-zhen)
I can't receive AUR notifications in real time, so if you have a problem that needs prompt feedback or discussion, please consider opening an issue in this GitHub repository, where I maintain all my AUR packages. Thank you for your understanding.