Package Details: llama.cpp-hip b6715-1

Git Clone URL: https://aur.archlinux.org/llama.cpp-hip.git (read-only)
Package Base: llama.cpp-hip
Description: Port of Facebook's LLaMA model in C/C++ (with AMD ROCm optimizations)
Upstream URL: https://github.com/ggml-org/llama.cpp
Licenses: MIT
Conflicts: ggml, libggml, llama.cpp, stable-diffusion.cpp
Provides: llama.cpp
Submitter: txtsd
Maintainer: Orion-zhen
Last Packager: Orion-zhen
Votes: 8
Popularity: 0.50
First Submitted: 2024-10-26 19:54 (UTC)
Last Updated: 2025-10-09 00:20 (UTC)

Pinned Comments

Orion-zhen commented on 2025-09-02 03:18 (UTC) (edited on 2025-09-02 13:20 (UTC) by Orion-zhen)

I can't receive notifications from the AUR in real time, so if you have a problem that requires immediate feedback or discussion, please consider opening an issue in this GitHub repository, where I maintain all my AUR packages. Thank you for your understanding.

Orion-zhen commented on 2025-08-16 10:24 (UTC) (edited on 2025-09-06 03:32 (UTC) by Orion-zhen)

Make sure ROCm is correctly set up on your system before installing this package.

  1. sudo pacman -S rocm-hip-sdk rocm-hip-libraries rocm-hip-runtime
  2. Reboot (recommended)
  3. Set up environment variables, including ROCM_HOME=/opt/rocm (see the sketch below)
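
A minimal sketch of step 3 (e.g. in ~/.profile), assuming the default /opt/rocm install prefix used by the packages above:

# ROCm environment; paths assume the stock Arch prefix
export ROCM_HOME=/opt/rocm
export PATH="$ROCM_HOME/bin:$PATH"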

Note: rocWMMA is now enabled by default and offers a large speedup in prompt processing (about 9x). If your GPU is incompatible with it, download the PKGBUILD and turn it off manually; one possible way is sketched below.
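
A sketch of the manual toggle, assuming the flag appears in the PKGBUILD exactly as -DGGML_HIP_ROCWMMA_FATTN=ON (adjust the pattern if it differs):

# disable rocWMMA in the downloaded PKGBUILD, then build as usual
sed -i 's/-DGGML_HIP_ROCWMMA_FATTN=ON/-DGGML_HIP_ROCWMMA_FATTN=OFF/' PKGBUILD
makepkg -si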

txtsd commented on 2024-10-26 20:15 (UTC) (edited on 2024-12-06 14:15 (UTC) by txtsd)

Alternate versions

llama.cpp
llama.cpp-vulkan
llama.cpp-sycl-fp16
llama.cpp-sycl-fp32
llama.cpp-cuda
llama.cpp-cuda-f16
llama.cpp-hip

Latest Comments

Orion-zhen commented on 2025-08-27 03:21 (UTC) (edited on 2025-08-28 14:07 (UTC) by Orion-zhen)

Currently, llama.cpp-hip has an issue with ROCm 6.4.3. You can track the progress here. Until the issue is fixed, try llama.cpp-vulkan instead, or downgrade ROCm to 6.4.1.
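
On Arch, downgrading usually means reinstalling the cached packages; a sketch, with an illustrative filename (use whichever 6.4.1 packages remain in your pacman cache, and downgrade the other ROCm packages alongside it):

# filename is illustrative; match the exact 6.4.1 files in your cache
sudo pacman -U /var/cache/pacman/pkg/rocm-hip-runtime-6.4.1-*.pkg.tar.zst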

Orion-zhen commented on 2025-08-04 08:06 (UTC) (edited on 2025-08-04 12:25 (UTC) by Orion-zhen)

Hi @Valantur.

Actually, I removed the service file and configuration file from the PKGBUILD because my automatic update script has difficulty uploading asset files. And to be honest, I have never used the llama.cpp service, because I usually run multiple models (a chat model, an embedding model, and a reranking model), and switching between them via the llama.cpp service is difficult. Instead, I recommend llama-swap, an application that switches models based on incoming requests. So if you need a llama.cpp service, please write it yourself; a sketch follows. Thanks.
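
A minimal sketch of such a user-written unit, assuming the package installs the server binary as /usr/bin/llama-server; the model path and port are placeholders, not the packager's recommendation:

# ~/.config/systemd/user/llama-server.service (user-written, not shipped)
[Unit]
Description=llama.cpp server
After=network.target

[Service]
# %h expands to your home directory; adjust the model path and port
ExecStart=/usr/bin/llama-server -m %h/models/your-model.gguf --port 8080
Restart=on-failure

[Install]
WantedBy=default.target

Enable it with: systemctl --user enable --now llama-server.service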

Valantur commented on 2025-08-02 17:12 (UTC)

Upgraded today and the service won't start, because the service file is a symlink into your build system instead of a real text file.

panikal commented on 2025-07-20 05:44 (UTC) (edited on 2025-07-20 05:45 (UTC) by panikal)

This wouldn't build for me due to errors about not finding the ROCm root directory:

-- The HIP compiler identification is unknown
CMake Error at /usr/share/cmake/Modules/CMakeDetermineHIPCompiler.cmake:174 (message):
  Failed to find ROCm root directory.
Call Stack (most recent call first):
  ggml/src/ggml-hip/CMakeLists.txt:36 (enable_language)

I added this to my environment to make it work; this should really be part of the build:

export ROCM_PATH=/opt/rocm
export PATH="$ROCM_PATH/bin:$PATH"
export LD_LIBRARY_PATH="$ROCM_PATH/lib:$ROCM_PATH/lib64:$LD_LIBRARY_PATH"

Jawzper commented on 2025-07-05 13:08 (UTC)

Thank you @edtoml, those PKGBUILD changes seem to have done the trick.

I removed libggml-hip-git during the installation to avoid conflicts, and now I can't install it again (because of said conflicts). I only had it in the first place because it was a dependency of llama.cpp-hip. Is that fine?

edtoml commented on 2025-07-05 11:49 (UTC) (edited on 2025-07-05 11:50 (UTC) by edtoml)

edtoml commented on 2025-07-05 02:02 (UTC)

This has broken AGAIN due to libggml-hip-git being out of sync.

It can be built by changing an option in the build() section (ON changed to OFF):

-DLLAMA_USE_SYSTEM_GGML=OFF

If you are on RDNA3 or RDNA4, adding the line below speeds up flash attention:

-DGGML_HIP_ROCWMMA_FATTN=ON

You also need to remove the libggml-hip entry from the depends=() array.

You will also need to resolve file conflicts, since files installed by libggml-hip-git will clash with what this package now ships.
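
Putting those changes together, a sketch of what the adjusted build() section might look like; apart from the two flags above, every name here is illustrative rather than the packager's exact PKGBUILD:

build() {
  # LLAMA_USE_SYSTEM_GGML=OFF builds the bundled ggml instead of
  # linking against the out-of-sync libggml-hip-git;
  # GGML_HIP_ROCWMMA_FATTN=ON is the optional flash-attention
  # speedup for RDNA3/RDNA4 GPUs.
  cmake -B build -S "$srcdir/llama.cpp" \
    -DGGML_HIP=ON \
    -DLLAMA_USE_SYSTEM_GGML=OFF \
    -DGGML_HIP_ROCWMMA_FATTN=ON
  cmake --build build
}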