Package Details: llama.cpp-hip b4302-1

Git Clone URL: https://aur.archlinux.org/llama.cpp-hip.git (read-only)
Package Base: llama.cpp-hip
Description: Port of Facebook's LLaMA model in C/C++ (with AMD ROCm optimizations)
Upstream URL: https://github.com/ggerganov/llama.cpp
Licenses: MIT
Conflicts: llama.cpp
Provides: llama.cpp
Submitter: txtsd
Maintainer: txtsd
Last Packager: txtsd
Votes: 2
Popularity: 0.94
First Submitted: 2024-10-26 19:54 (UTC)
Last Updated: 2024-12-11 02:39 (UTC)

Pinned Comments

txtsd commented on 2024-10-26 20:15 (UTC) (edited on 2024-12-06 14:15 (UTC) by txtsd)

Alternate versions

llama.cpp
llama.cpp-vulkan
llama.cpp-sycl-fp16
llama.cpp-sycl-fp32
llama.cpp-cuda
llama.cpp-cuda-f16
llama.cpp-hip

Latest Comments

txtsd commented on 2024-12-06 12:35 (UTC)

@Althorion Thanks for reporting! I've pushed a fix. Please let me know if it works as expected.

Althorion commented on 2024-12-04 14:22 (UTC)

Compilation fails with multiple cases of:

ld.lld: error: undefined symbol: cblas_sgemm
>>> referenced by ggml-blas.cpp
>>>               lto.tmp:(ggml_backend_blas_graph_compute(ggml_backend*, ggml_cgraph*))
>>> referenced by ggml-blas.cpp
>>>               lto.tmp:(ggml_backend_blas_graph_compute(ggml_backend*, ggml_cgraph*))
>>> referenced by ggml-blas.cpp
>>>               lto.tmp:(ggml_backend_blas_graph_compute(ggml_backend*, ggml_cgraph*))
clang++: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [examples/server/CMakeFiles/llama-server.dir/build.make:122: bin/llama-server] Error 1
make[1]: *** [CMakeFiles/Makefile2:4043: examples/server/CMakeFiles/llama-server.dir/all] Error 2

(The files referencing cblas_sgemm differ between failures, but the missing symbol is always the same.)
