convert_hf_to_gguf is completely unusable!
No module named 'gguf'
Even after installing python-gguf, it still doesn't work. Did the maintainer forget to test convert_hf_to_gguf?
Package Details: llama.cpp b7360-1
| Git Clone URL: | https://aur.archlinux.org/llama.cpp.git (read-only) |
|---|---|
| Package Base: | llama.cpp |
| Description: | Port of Facebook's LLaMA model in C/C++ |
| Upstream URL: | https://github.com/ggerganov/llama.cpp |
| Licenses: | MIT |
| Conflicts: | ggml, libggml |
| Submitter: | txtsd |
| Maintainer: | envolution |
| Last Packager: | envolution |
| Votes: | 11 |
| Popularity: | 0.24 |
| First Submitted: | 2024-10-26 15:38 (UTC) |
| Last Updated: | 2025-12-12 01:30 (UTC) |
Dependencies (9)
- curl (curl-git [AUR], curl-c-ares [AUR])
- gcc-libs (gcc-libs-git [AUR], gccrs-libs-git [AUR], gcc-libs-snapshot [AUR])
- glibc (glibc-git [AUR], glibc-eac [AUR])
- cmake (cmake3 [AUR], cmake-git [AUR]) (make)
- python-numpy (python-numpy-git [AUR], python-numpy1 [AUR], python-numpy-mkl-bin [AUR], python-numpy-mkl-tbb [AUR], python-numpy-mkl [AUR]) (optional) – needed for convert_hf_to_gguf.py
- python-pytorch (python-pytorch-cxx11abi [AUR], python-pytorch-cxx11abi-opt [AUR], python-pytorch-cxx11abi-cuda [AUR], python-pytorch-cxx11abi-opt-cuda [AUR], python-pytorch-cxx11abi-rocm [AUR], python-pytorch-cxx11abi-opt-rocm [AUR], python-pytorch-cuda12.9 [AUR], python-pytorch-opt-cuda12.9 [AUR], python-pytorch-cuda, python-pytorch-opt, python-pytorch-opt-cuda, python-pytorch-opt-rocm, python-pytorch-rocm) (optional) – needed for convert_hf_to_gguf.py
- python-safetensors [AUR] (python-safetensors-bin [AUR]) (optional) – needed for convert_hf_to_gguf.py
- python-sentencepiece [AUR] (python-sentencepiece-git [AUR], python-sentencepiece-bin [AUR]) (optional) – needed for convert_hf_to_gguf.py
- python-transformers [AUR] (optional) – needed for convert_hf_to_gguf.py
Sources (3)
ilovesusu commented on 2025-11-23 02:44 (UTC)
envolution commented on 2025-10-08 23:50 (UTC)
@ArKay clear your ccache; extra/fmt was upgraded, so you really shouldn't rely on ccache when trying to figure that out
ArKay commented on 2025-10-08 14:01 (UTC) (edited on 2025-10-08 14:01 (UTC) by ArKay)
ccache: error while loading shared libraries: libfmt.so.11: cannot open shared object file: No such file or directory
[ 7%] Building C object ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o
envolution commented on 2025-08-01 17:49 (UTC)
@marceldev89 thanks, next release should be more specific - rather than unsetting that variable we can just use GGML_NATIVE=ON
marceldev89 commented on 2025-08-01 16:02 (UTC)
The current PKGBUILD no longer compiles a native build due to makepkg providing the SOURCE_DATE_EPOCH environment variable. This variable is checked in ggml/CMakeLists.txt:
if (CMAKE_CROSSCOMPILING OR DEFINED ENV{SOURCE_DATE_EPOCH})
message(STATUS "Setting GGML_NATIVE_DEFAULT to OFF")
set(GGML_NATIVE_DEFAULT OFF)
else()
set(GGML_NATIVE_DEFAULT ON)
endif()
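The same default-selection logic can be sketched as a shell function (the function name is mine, and CMAKE_CROSSCOMPILING is omitted for brevity): the default flips to OFF whenever SOURCE_DATE_EPOCH is set, which makepkg always does.

```shell
# Mirror of the CMake default-selection quoted above, as a shell sketch
# (ggml_native_default is a made-up name for illustration).
ggml_native_default() {
  if [ -n "${SOURCE_DATE_EPOCH+x}" ]; then
    echo OFF   # reproducible/cross builds: no -march=native
  else
    echo ON
  fi
}

# makepkg exports SOURCE_DATE_EPOCH, so under makepkg the default is OFF.
# The PKGBUILD can force a native build explicitly instead:
#   cmake -B build -DGGML_NATIVE=ON
```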
envolution commented on 2025-07-31 19:36 (UTC)
I've added llama-server to the package, as it doesn't impact dependencies
envolution commented on 2025-07-31 05:41 (UTC)
just a couple of minor changes in the build:
- python modules are now identified as optional for the conversion script
- changed from git to tar.gz due to the git repo initial sync being so large
QTaKs commented on 2025-07-26 10:13 (UTC)
$ convert_hf_to_gguf.py ./
Traceback (most recent call last):
File "/usr/bin/convert_hf_to_gguf.py", line 19, in <module>
from transformers import AutoConfig
ImportError: cannot import name 'AutoConfig' from 'transformers' (unknown location)
Please add python-transformers as a dependency
File "/usr/bin/convert_hf_to_gguf.py", line 30, in <module>
import gguf
ModuleNotFoundError: No module named 'gguf'
And python-gguf too
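Until those modules are installed, a quick preflight check like this (a sketch; the module list is taken from the tracebacks and the optional dependencies listed above) shows what is missing before the script dies mid-run:

```shell
# check_py_mod: echoes "present" or "missing" for one Python module.
check_py_mod() {
  python3 -c "import $1" 2>/dev/null && echo present || echo missing
}

# Modules convert_hf_to_gguf.py needs, per the tracebacks and optdepends:
for mod in gguf transformers numpy torch safetensors sentencepiece; do
  printf '%s: %s\n' "$mod" "$(check_py_mod "$mod")"
done
```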
txtsd commented on 2025-06-15 12:03 (UTC) (edited on 2025-06-15 12:03 (UTC) by txtsd)
This package now uses system libggml so it should work alongside whisper.cpp
Building of tests and examples has been turned off.
kompute is removed.
visad commented on 2025-06-13 15:04 (UTC) (edited on 2025-06-13 15:04 (UTC) by visad)
Conflicts with e.g. the libggml-git AUR package (needed by the whisper.cpp package), since this includes its own version of libggml. Maybe separating the paths (e.g. moving this to /usr/local) could help if the internal libggml is required and the shared version cannot be used, or linking against libggml dynamically? For now, at least the "conflicts" array of the PKGBUILD should list libggml as a first measure, I think :)
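The Conflicts row in the package details above suggests this was resolved with a PKGBUILD array along these lines (standard makepkg syntax; a sketch, not the maintainer's exact file):

```shell
# In the PKGBUILD: declare conflicts so pacman refuses to co-install
# packages shipping their own libggml (matches the Conflicts row in
# the package details above).
conflicts=(ggml libggml)
```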
Pinned Comments
txtsd commented on 2024-10-26 20:14 (UTC) (edited on 2024-12-06 14:14 (UTC) by txtsd)
Alternate versions:
- llama.cpp
- llama.cpp-vulkan
- llama.cpp-sycl-fp16
- llama.cpp-sycl-fp32
- llama.cpp-cuda
- llama.cpp-cuda-f16
- llama.cpp-hip