Package Details: llama.cpp-hip b8752-1
| Git Clone URL: | https://aur.archlinux.org/llama.cpp-hip.git (read-only, click to copy) |
|---|---|
| Package Base: | llama.cpp-hip |
| Description: | Port of Facebook's LLaMA model in C/C++ (with AMD ROCm optimizations) |
| Upstream URL: | https://github.com/ggml-org/llama.cpp |
| Licenses: | MIT |
| Conflicts: | ggml, libggml, llama.cpp, stable-diffusion.cpp |
| Provides: | ggml, libggml, llama.cpp |
| Submitter: | txtsd |
| Maintainer: | Orion-zhen |
| Last Packager: | Orion-zhen |
| Votes: | 13 |
| Popularity: | 2.49 |
| First Submitted: | 2024-10-26 19:54 (UTC) |
| Last Updated: | 2026-04-11 00:32 (UTC) |
Dependencies (17)
- curl (curl-gitAUR, curl-c-aresAUR)
- gcc-libs (gcc-libs-gitAUR, gccrs-libs-gitAUR, gcc-libs-snapshotAUR)
- glibc (glibc-gitAUR, glibc-eacAUR, glibc-git-native-pgoAUR)
- hip-runtime-amd (opencl-amdAUR, rocm-gfx120x-binAUR, rocm-gfx1151-binAUR, rocm-gfx110x-binAUR, rocm-gfx1150-binAUR, rocm-gfx1152-binAUR, rocm-nightly-gfx120x-all-binAUR, rocm-nightly-gfx1151-binAUR, rocm-nightly-gfx110x-binAUR)
- hipblas (opencl-amd-devAUR, rocm-gfx120x-binAUR, rocm-gfx1151-binAUR, rocm-gfx110x-binAUR, rocm-gfx1150-binAUR, rocm-gfx1152-binAUR, rocm-nightly-gfx120x-all-binAUR, rocm-nightly-gfx1151-binAUR, rocm-nightly-gfx110x-binAUR)
- openmp
- python
- rocblas (rocblas-gfx1103AUR, opencl-amd-devAUR, rocm-gfx120x-binAUR, rocm-gfx1151-binAUR, rocm-gfx110x-binAUR, rocm-gfx1150-binAUR, rocm-gfx1152-binAUR, rocm-nightly-gfx120x-all-binAUR, rocm-nightly-gfx1151-binAUR, rocm-nightly-gfx110x-binAUR)
- cmake (cmake3AUR, cmake-gitAUR) (make)
- git (git-gitAUR, git-glAUR, git-wd40AUR) (make)
- rocm-hip-sdk (opencl-amd-devAUR, rocm-gfx120x-binAUR, rocm-gfx1151-binAUR, rocm-gfx110x-binAUR, rocm-gfx1150-binAUR, rocm-gfx1152-binAUR, rocm-nightly-gfx120x-all-binAUR, rocm-nightly-gfx1151-binAUR, rocm-nightly-gfx110x-binAUR) (make)
- python-ggufAUR (python-gguf-gitAUR) (optional) – needed for convert_hf_to_gguf.py
- python-numpy (python-numpy-gitAUR, python-numpy-mkl-binAUR, python-numpy1AUR, python-numpy-mkl-tbbAUR, python-numpy-mklAUR) (optional) – needed for convert_hf_to_gguf.py
- python-pytorch (python-pytorch-cxx11abiAUR, python-pytorch-cxx11abi-optAUR, python-pytorch-cxx11abi-cudaAUR, python-pytorch-cxx11abi-opt-cudaAUR, python-pytorch-cxx11abi-rocmAUR, python-pytorch-cxx11abi-opt-rocmAUR, python-pytorch-cuda12.9AUR, python-pytorch-opt-cuda12.9AUR, python-pytorch-cuda, python-pytorch-opt, python-pytorch-opt-cuda, python-pytorch-opt-rocm, python-pytorch-rocm) (optional) – needed for convert_hf_to_gguf.py
- python-safetensors (optional) – needed for convert_hf_to_gguf.py
- python-sentencepieceAUR (python-sentencepiece-gitAUR, python-sentencepiece-binAUR) (optional) – needed for convert_hf_to_gguf.py
- python-transformersAUR (python-transformers-gitAUR) (optional) – needed for convert_hf_to_gguf.py
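The optional Python dependencies above are only needed for model conversion. A hypothetical invocation of llama.cpp's `convert_hf_to_gguf.py` might look like the following sketch; the model directory and output filename are placeholders, so the command is printed rather than executed:

```shell
# Sketch only: convert a Hugging Face model directory to GGUF.
# "./my-hf-model" and "my-model.gguf" are placeholder names.
CMD='python convert_hf_to_gguf.py ./my-hf-model --outfile my-model.gguf --outtype f16'
echo "$CMD"
```

The `--outtype f16` flag keeps weights in half precision; quantization to smaller formats is done afterwards with `llama-quantize`.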
Required by (5)
- scmd-bin (requires llama.cpp)
- voxd (requires llama.cpp) (optional)
- voxd-bin (requires llama.cpp) (optional)
- voxd-git (requires llama.cpp) (optional)
- whisper.cpp-hip
Orion-zhen commented on 2026-02-28 04:21 (UTC)
Have removed the rocWMMA flag.
jackweeks3 commented on 2026-01-31 12:14 (UTC)
Hi, rocWMMA is slow now. See https://strixhalo.wiki/AI/llamacpp-with-ROCm#rocwmma
Orion-zhen commented on 2025-09-02 03:18 (UTC) (edited on 2025-09-02 13:20 (UTC) by Orion-zhen)
I can't receive AUR notifications in real time, so if you have a problem that requires immediate feedback or discussion, please consider opening an issue in this GitHub repository, where I maintain all my AUR packages. Thank you for your understanding.
Orion-zhen commented on 2025-08-27 03:21 (UTC) (edited on 2025-08-28 14:07 (UTC) by Orion-zhen)
Currently, llama.cpp-hip has an issue with ROCm 6.4.3. You can track progress here. Until it is fixed, try llama.cpp-vulkan instead or downgrade ROCm to 6.4.1.
Orion-zhen commented on 2025-08-16 10:24 (UTC) (edited on 2026-02-28 04:22 (UTC) by Orion-zhen)
Make sure ROCm is correctly set up on your system before installing this package:
- Install the ROCm stack: `sudo pacman -S rocm-hip-sdk rocm-hip-libraries rocm-hip-runtime`
- Reboot (recommended)
- Set up environment variables, including `ROCM_HOME=/opt/rocm`

Note: rocWMMA is disabled by default now to avoid a speed regression since ROCm 7+.
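For a persistent setup, the environment variables can go in a shell profile. A minimal sketch, assuming ROCm is installed at the standard `/opt/rocm` prefix:

```shell
# Minimal sketch for ~/.profile or similar; /opt/rocm is the usual
# install prefix on Arch, but verify it matches your system.
export ROCM_HOME=/opt/rocm
export PATH="$ROCM_HOME/bin:$PATH"
echo "$ROCM_HOME"
```

After editing the profile, log out and back in (or reboot, as recommended above) so the variables are visible to build tools.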
Orion-zhen commented on 2025-08-04 08:06 (UTC) (edited on 2025-08-04 12:25 (UTC) by Orion-zhen)
Hi @Valantur.
Actually, I removed the service file and configuration file from the PKGBUILD because my automatic update script has difficulty uploading asset files. To be honest, I have never used the llama.cpp service, because I usually run multiple models, such as a chat model, an embedding model, and a reranking model, and switching between them via the llama.cpp service is difficult. Instead of the llama.cpp service, I recommend llama-swap, an application that switches models based on incoming requests. So if you need a llama.cpp service file, please write it on your own. Thanks.
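For anyone who still wants a unit file, a minimal hand-written sketch might look like this. The binary path, port, and model path are all assumptions to adapt to your setup:

```
[Unit]
Description=llama.cpp server
After=network.target

[Service]
# ExecStart flags and the model path are examples only; adjust to taste.
ExecStart=/usr/bin/llama-server --host 127.0.0.1 --port 8080 -m /path/to/model.gguf
Restart=on-failure

[Install]
WantedBy=default.target
```

Saved as a user unit (e.g. `~/.config/systemd/user/llama-server.service`), it can be enabled with `systemctl --user enable --now llama-server`.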
Valantur commented on 2025-08-02 17:12 (UTC)
I upgraded today and the service won't start, because the service file is a symlink into your build system instead of a real text file.
panikal commented on 2025-07-20 05:44 (UTC) (edited on 2025-07-20 05:45 (UTC) by panikal)
This wouldn't build for me due to errors about not finding ROCm:

```
-- The HIP compiler identification is unknown
CMake Error at /usr/share/cmake/Modules/CMakeDetermineHIPCompiler.cmake:174 (message):
  Failed to find ROCm root directory.
Call Stack (most recent call first):
  ggml/src/ggml-hip/CMakeLists.txt:36 (enable_language)
```

I added this to my environment to make it work; this should really be part of the build:

```
ROCM_PATH=/opt/rocm
PATH=$ROCM_PATH/bin:$PATH
LD_LIBRARY_PATH=$ROCM_PATH/lib:$ROCM_PATH/lib64:$LD_LIBRARY_PATH
```
Jawzper commented on 2025-07-05 13:08 (UTC)
Thank you @edtoml, those PKGBUILD changes seem to have done the trick.
I removed libggml-hip-git during the installation to avoid conflicts, and now I can't install it again (because of said conflicts). I only had it in the first place because it was a dependency of llama.cpp-hip. Is that fine?
Pinned Comments
Orion-zhen commented on 2025-09-02 03:18 (UTC) (edited on 2025-09-02 13:20 (UTC) by Orion-zhen)
I can't receive AUR notifications in real time, so if you have a problem that requires immediate feedback or discussion, please consider opening an issue in this GitHub repository, where I maintain all my AUR packages. Thank you for your understanding.
Orion-zhen commented on 2025-08-16 10:24 (UTC) (edited on 2026-02-28 04:22 (UTC) by Orion-zhen)
Make sure ROCm is correctly set up on your system before installing this package:
- Install the ROCm stack: `sudo pacman -S rocm-hip-sdk rocm-hip-libraries rocm-hip-runtime`
- Set up environment variables, including `ROCM_HOME=/opt/rocm`

Note: rocWMMA is disabled by default now to avoid a speed regression since ROCm 7+.
txtsd commented on 2024-10-26 20:15 (UTC) (edited on 2024-12-06 14:15 (UTC) by txtsd)
Alternate versions
llama.cpp
llama.cpp-vulkan
llama.cpp-sycl-fp16
llama.cpp-sycl-fp32
llama.cpp-cuda
llama.cpp-cuda-f16
llama.cpp-hip