Package Details: llama.cpp-hipblas-git b5123.r1.bc091a4dc-1

Git Clone URL: https://aur.archlinux.org/llama.cpp-hipblas-git.git (read-only; see the clone-and-build sketch below)
Package Base: llama.cpp-hipblas-git
Description: Port of Facebook's LLaMA model in C/C++ (with AMD ROCm optimizations)
Upstream URL: https://github.com/ggerganov/llama.cpp
Licenses: MIT
Conflicts: llama.cpp, llama.cpp-hipblas
Submitter: robertfoster
Maintainer: robertfoster
Last Packager: robertfoster
Votes: 0
Popularity: 0.000000
First Submitted: 2024-11-15 20:34 (UTC)
Last Updated: 2025-04-12 16:38 (UTC)
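
For reference, obtaining and building this package follows the standard AUR workflow. A minimal sketch, assuming git, the base-devel group, and a working ROCm stack are already installed:

git clone https://aur.archlinux.org/llama.cpp-hipblas-git.git
cd llama.cpp-hipblas-git
makepkg -si   # build and install; review the PKGBUILD first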

Latest Comments

edtoml commented on 2025-02-15 13:57 (UTC) (edited on 2025-02-15 13:58 (UTC) by edtoml)

It would seem most people are using llama.cpp-hip-git now. However, that package gives me less than one token per second. This one works as expected if the build section of its PKGBUILD is updated to:

build() {
  local _cmake_args=(
    -B build
    -S .
    -DCMAKE_INSTALL_PREFIX=/usr
    -DCMAKE_BUILD_TYPE=Release
    -DGGML_STATIC=OFF
    -DGGML_HIP=ON
  )

  # assumed remainder of the snippet: configure and compile with the flags above
  cmake "${_cmake_args[@]}"
  cmake --build build
}