Package Details: ik-llama.cpp-cuda r3884.6d2e7ca4-1

Git Clone URL: https://aur.archlinux.org/ik-llama.cpp-cuda.git (read-only)
Package Base: ik-llama.cpp-cuda
Description: llama.cpp fork with additional SOTA quants and improved performance (CUDA Backend)
Upstream URL: https://github.com/ikawrakow/ik_llama.cpp
Licenses: MIT
Conflicts: ggml, ik-llama.cpp, ik-llama.cpp-vulkan, libggml, llama.cpp, llama.cpp-cuda, llama.cpp-hip, llama.cpp-vulkan
Provides: llama.cpp
Submitter: Orion-zhen
Maintainer: Orion-zhen
Last Packager: Orion-zhen
Votes: 2
Popularity: 1.17
First Submitted: 2025-07-31 01:41 (UTC)
Last Updated: 2025-09-14 00:21 (UTC)
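Since no build instructions are given on the page itself, here is a sketch of the standard manual AUR workflow for this package, assuming an Arch Linux system with `git` and the `base-devel` group installed (the commands below are the generic `makepkg` procedure, not something specific to this package):

```shell
# Clone the package base from the AUR (URL from this page)
git clone https://aur.archlinux.org/ik-llama.cpp-cuda.git
cd ik-llama.cpp-cuda

# Review the PKGBUILD before building, as is recommended for any AUR package
less PKGBUILD

# Build and install:
#   -s  resolve and install the 13 listed dependencies via pacman
#   -i  install the resulting package
makepkg -si
```

Note that because of the Conflicts list above, pacman will prompt to remove any installed `llama.cpp`, `llama.cpp-cuda`, or other conflicting packages before this one can be installed. An AUR helper such as `paru` or `yay` can perform the same steps in one command (e.g. `paru -S ik-llama.cpp-cuda`).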

Dependencies (13)

Required by (0)

Sources (0)

Pinned Comments

Orion-zhen commented on 2025-09-02 03:19 (UTC) (edited on 2025-09-02 13:21 (UTC) by Orion-zhen)

I can't receive AUR notifications in real time, so if you have a problem that requires immediate feedback or discussion, please consider opening an issue in this GitHub repository, where I maintain all my AUR packages. Thank you for your understanding.
