Package Details: llama.cpp-hip b8752-1

Git Clone URL: https://aur.archlinux.org/llama.cpp-hip.git (read-only)
Package Base: llama.cpp-hip
Description: Port of Facebook's LLaMA model in C/C++ (with AMD ROCm optimizations)
Upstream URL: https://github.com/ggml-org/llama.cpp
Licenses: MIT
Conflicts: ggml, libggml, llama.cpp, stable-diffusion.cpp
Provides: ggml, libggml, llama.cpp
Submitter: txtsd
Maintainer: Orion-zhen
Last Packager: Orion-zhen
Votes: 13
Popularity: 2.49
First Submitted: 2024-10-26 19:54 (UTC)
Last Updated: 2026-04-11 00:32 (UTC)

Dependencies (17)

Required by (5)

Sources (3)

Pinned Comments

Orion-zhen commented on 2025-09-02 03:18 (UTC) (edited on 2025-09-02 13:20 (UTC) by Orion-zhen)

I can't receive AUR notifications in real time, so if you have a problem that requires immediate feedback or discussion, please consider opening an issue in this GitHub repository, where I maintain all my AUR packages. Thank you for your understanding.

Orion-zhen commented on 2025-08-16 10:24 (UTC) (edited on 2026-02-28 04:22 (UTC) by Orion-zhen)

Make sure ROCm is correctly set up on your system before installing this package.

  1. sudo pacman -S rocm-hip-sdk rocm-hip-libraries rocm-hip-runtime
  2. Reboot (recommended)
  3. Set up environment variables, including ROCM_HOME=/opt/rocm

Note: rocWMMA is now disabled by default to avoid a speed regression with ROCm 7+.
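
The three steps above amount to the following shell sketch (the profile file is illustrative; any login-shell profile works):

```shell
# Step 1: install the ROCm HIP stack from the official Arch repos
sudo pacman -S rocm-hip-sdk rocm-hip-libraries rocm-hip-runtime

# Step 3 (after the recommended reboot): point the toolchain at ROCm,
# e.g. in ~/.profile or /etc/profile.d/rocm.sh
export ROCM_HOME=/opt/rocm
export PATH="$ROCM_HOME/bin:$PATH"
```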

txtsd commented on 2024-10-26 20:15 (UTC) (edited on 2024-12-06 14:15 (UTC) by txtsd)

Alternate versions

llama.cpp
llama.cpp-vulkan
llama.cpp-sycl-fp16
llama.cpp-sycl-fp32
llama.cpp-cuda
llama.cpp-cuda-f16
llama.cpp-hip

Latest Comments


Orion-zhen commented on 2026-02-28 04:21 (UTC)

I have removed the rocWMMA flag.

jackweeks3 commented on 2026-01-31 12:14 (UTC)

Hi, rocWMMA is slow now. See https://strixhalo.wiki/AI/llamacpp-with-ROCm#rocwmma

Orion-zhen commented on 2025-08-27 03:21 (UTC) (edited on 2025-08-28 14:07 (UTC) by Orion-zhen)

Currently, llama.cpp-hip has an issue with ROCm 6.4.3. You can track progress here. Until it is fixed, try llama.cpp-vulkan instead, or downgrade ROCm to 6.4.1.

Orion-zhen commented on 2025-08-04 08:06 (UTC) (edited on 2025-08-04 12:25 (UTC) by Orion-zhen)

Hi @Valantur.

Actually, I removed the service file and the configuration file from the PKGBUILD, because my automatic update script has difficulty uploading asset files. To be honest, I have never used the llama.cpp service anyway: I usually run multiple models at once (a chat model, an embedding model, and a reranking model), and switching between them through the llama.cpp service is difficult. Instead, I recommend llama-swap, an application that switches models based on incoming requests. So if you need a llama.cpp service file, please write your own. Thanks.
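
For readers unfamiliar with llama-swap: it proxies requests to llama-server instances and starts or stops them per model. A minimal config sketch follows; the field names are recalled from the llama-swap README and the model paths are invented, so verify against the upstream docs before use:

```yaml
# config.yaml for llama-swap -- schema and paths are assumptions, not verbatim
models:
  "chat":
    cmd: llama-server --port ${PORT} -m /path/to/chat-model.gguf
  "embedding":
    cmd: llama-server --port ${PORT} -m /path/to/embedding-model.gguf
```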

Valantur commented on 2025-08-02 17:12 (UTC)

Upgraded today and the service won't start, because the service file is a symlink into your build system instead of a real text file.

panikal commented on 2025-07-20 05:44 (UTC) (edited on 2025-07-20 05:45 (UTC) by panikal)

This wouldn't build for me due to errors about not finding ROCm:

-- The HIP compiler identification is unknown
CMake Error at /usr/share/cmake/Modules/CMakeDetermineHIPCompiler.cmake:174 (message):
  Failed to find ROCm root directory.
Call Stack (most recent call first):
  ggml/src/ggml-hip/CMakeLists.txt:36 (enable_language)

I added this to my environment to make it work; this should really be handled by the build:

ROCM_PATH=/opt/rocm
PATH=$ROCM_PATH/bin:$PATH
LD_LIBRARY_PATH=$ROCM_PATH/lib:$ROCM_PATH/lib64:$LD_LIBRARY_PATH
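
To make the workaround above persistent rather than per-build, the same variables can be exported from a profile script (a sketch; the filename and location are illustrative):

```shell
# /etc/profile.d/rocm.sh -- sourced by login shells; name/location illustrative
export ROCM_PATH=/opt/rocm
export PATH="$ROCM_PATH/bin:$PATH"
export LD_LIBRARY_PATH="$ROCM_PATH/lib:$ROCM_PATH/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```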

Jawzper commented on 2025-07-05 13:08 (UTC)

Thank you @edtoml, those PKGBUILD changes seem to have done the trick.

I removed libggml-hip-git during the installation to avoid conflicts, and now I can't install it again (because of said conflicts). I only had it in the first place because it was a dependency of llama.cpp-hip. Is that fine?