Package Details: llama.cpp-hipblas-git b5123.r1.bc091a4dc-1
| Git Clone URL: | https://aur.archlinux.org/llama.cpp-hipblas-git.git (read-only) |
|---|---|
| Package Base: | llama.cpp-hipblas-git |
| Description: | Port of Facebook's LLaMA model in C/C++ (with AMD ROCm optimizations) |
| Upstream URL: | https://github.com/ggerganov/llama.cpp |
| Licenses: | MIT |
| Conflicts: | llama.cpp, llama.cpp-hipblas |
| Submitter: | robertfoster |
| Maintainer: | robertfoster |
| Last Packager: | robertfoster |
| Votes: | 0 |
| Popularity: | 0.000000 |
| First Submitted: | 2024-11-15 20:34 (UTC) |
| Last Updated: | 2025-04-12 16:38 (UTC) |
Dependencies (6)
- ggml-hipblas-git (AUR)
- cmake (make; alternatives: cmake-git (AUR), cmake3 (AUR))
- git (make; alternatives: git-git (AUR), git-gl (AUR))
- python-gguf (AUR) (optional) – for the convert_hf_to_gguf.py script
- python-numpy (optional; alternatives: python-numpy-git (AUR), python-numpy1 (AUR), python-numpy-mkl-bin (AUR), python-numpy-mkl-tbb (AUR), python-numpy-mkl (AUR)) – for the convert_hf_to_gguf.py script
- python-pytorch (optional; alternatives: python-pytorch-cxx11abi (AUR), python-pytorch-cxx11abi-opt (AUR), python-pytorch-cxx11abi-cuda (AUR), python-pytorch-cxx11abi-opt-cuda (AUR), python-pytorch-cxx11abi-rocm (AUR), python-pytorch-cxx11abi-opt-rocm (AUR), python-pytorch-cuda, python-pytorch-opt, python-pytorch-opt-cuda, python-pytorch-opt-rocm, python-pytorch-rocm) – for the convert_hf_to_gguf.py script (see the example after this list)
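All three optional Python dependencies exist only to run upstream's convert_hf_to_gguf.py, which converts a Hugging Face model checkout to the GGUF format that llama.cpp loads. As a rough illustration (the model and output paths are placeholders, and the script is assumed to be run from a llama.cpp source tree), a typical invocation looks like:

```sh
# Hypothetical example: paths are placeholders, not real files.
python convert_hf_to_gguf.py \
    ~/models/My-HF-Model \
    --outfile my-model-f16.gguf \
    --outtype f16
```

The resulting .gguf file can then be passed directly to the llama.cpp binaries installed by this package.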
Latest Comments
edtoml commented on 2025-02-15 13:57 (UTC) (edited on 2025-02-15 13:58 (UTC) by edtoml)
It would seem most people are using llama.cpp-hip-git now. However, that package gives me less than one token per second. This one works as expected if the build section of its PKGBUILD is updated to:
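(The commenter's actual PKGBUILD changes are not included in this excerpt. Purely as a hypothetical sketch of what a ROCm-enabled build() section for this package might look like, with the GGML_HIP option and the gfx1030 GPU target as assumptions rather than the commenter's real edit:)

```sh
build() {
  # Hypothetical sketch only; flags, paths, and GPU target are assumptions,
  # not the fix from the original comment.
  cmake -B build -S "$srcdir/llama.cpp" \
      -DCMAKE_BUILD_TYPE=Release \
      -DCMAKE_INSTALL_PREFIX=/usr \
      -DGGML_HIP=ON \
      -DAMDGPU_TARGETS=gfx1030
  cmake --build build
}
```

(Set AMDGPU_TARGETS to the gfx architecture of your own card; a build targeting the wrong architecture can fall back to very slow paths, which is consistent with the sub-one-token-per-second behavior described above.)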