Package Details: llama.cpp-hip b4372-1

Git Clone URL: https://aur.archlinux.org/llama.cpp-hip.git (read-only)
Package Base: llama.cpp-hip
Description: Port of Facebook's LLaMA model in C/C++ (with AMD ROCm optimizations)
Upstream URL: https://github.com/ggerganov/llama.cpp
Licenses: MIT
Conflicts: llama.cpp
Provides: llama.cpp
Submitter: txtsd
Maintainer: txtsd
Last Packager: txtsd
Votes: 2
Popularity: 0.75
First Submitted: 2024-10-26 19:54 (UTC)
Last Updated: 2024-12-21 04:10 (UTC)

Pinned Comments

txtsd commented on 2024-10-26 20:15 (UTC) (edited on 2024-12-06 14:15 (UTC) by txtsd)

Alternate versions

llama.cpp
llama.cpp-vulkan
llama.cpp-sycl-fp16
llama.cpp-sycl-fp32
llama.cpp-cuda
llama.cpp-cuda-f16
llama.cpp-hip

Latest Comments

Althorion commented on 2024-12-12 21:13 (UTC)

My bad, it just doesn't like the mold linker. I switched to lld and it works great. Thank you, problem solved.
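For reference, one way to steer the build from mold to lld is through LDFLAGS; a minimal sketch, assuming the linker was selected with the standard clang/gcc -fuse-ld flag (whether your makepkg.conf sets mold this way is an assumption):

```shell
# Replace a mold linker selection in LDFLAGS with lld before running makepkg.
# Assumes the linker was chosen via -fuse-ld=mold; adjust if your
# makepkg.conf selects it some other way.
export LDFLAGS="${LDFLAGS/-fuse-ld=mold/}"   # drop the mold selection, if any
export LDFLAGS="${LDFLAGS} -fuse-ld=lld"     # ask clang/gcc to invoke ld.lld
echo "LDFLAGS=${LDFLAGS}"
```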

txtsd commented on 2024-12-11 08:09 (UTC)

@Althorion It builds for me. Please try a clean build.
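For anyone unsure what a clean build means here: makepkg's -C/--cleanbuild flag deletes the extracted $srcdir before building, so no stale objects from a previous toolchain survive. A minimal sketch against a throwaway directory (the real target would be your llama.cpp-hip checkout; the makepkg invocation itself is only shown in a comment):

```shell
# Simulate the leftovers of a previous build, then discard them the way
# makepkg -C (--cleanbuild) does before rebuilding.
build=$(mktemp -d)                     # stand-in for the AUR checkout
mkdir -p "$build/src" "$build/pkg"     # stale artifacts from a prior run
rm -rf "$build/src" "$build/pkg"       # what a clean build throws away
# Real invocation: makepkg -Ccsf  (-C cleans $srcdir first, -f forces a rebuild)
ls -A "$build"                         # prints nothing: the tree is clean
```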

Althorion commented on 2024-12-11 07:46 (UTC)

The linker dumps core now:

[ 33%] Linking CXX executable ../../bin/llama-run
clang++: error: unable to execute command: Aborted (core dumped)
clang++: error: linker command failed due to signal (use -v to see invocation)
make[2]: *** [examples/rpc/CMakeFiles/rpc-server.dir/build.make:110: bin/rpc-server] Error 1
make[1]: *** [CMakeFiles/Makefile2:4638: examples/rpc/CMakeFiles/rpc-server.dir/all] Error 2

txtsd commented on 2024-12-06 12:35 (UTC)

@Althorion Thanks for reporting! I've pushed a fix. Please let me know if it works as expected.

Althorion commented on 2024-12-04 14:22 (UTC)

Compilation fails with multiple cases of:

ld.lld: error: undefined symbol: cblas_sgemm
>>> referenced by ggml-blas.cpp
>>>               lto.tmp:(ggml_backend_blas_graph_compute(ggml_backend*, ggml_cgraph*))
>>> referenced by ggml-blas.cpp
>>>               lto.tmp:(ggml_backend_blas_graph_compute(ggml_backend*, ggml_cgraph*))
>>> referenced by ggml-blas.cpp
>>>               lto.tmp:(ggml_backend_blas_graph_compute(ggml_backend*, ggml_cgraph*))
clang++: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [examples/server/CMakeFiles/llama-server.dir/build.make:122: bin/llama-server] Error 1
make[1]: *** [CMakeFiles/Makefile2:4043: examples/server/CMakeFiles/llama-server.dir/all] Error 2

(the actual files referencing cblas_sgemm differ, but the missing symbol is the same)
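For context on this class of failure: an undefined cblas_sgemm at link time means ggml-blas.cpp was compiled against the CBLAS interface but no BLAS library ended up on the link line. A hedged sketch of how one might diagnose and address it, shown as commands to run by hand (the library path and the GGML_BLAS* CMake option names are assumptions, not taken from this page):

```shell
# 1. Confirm which installed library actually exports the missing symbol:
#      nm -D /usr/lib/libopenblas.so | grep cblas_sgemm
# 2. Ask llama.cpp's CMake build to link that BLAS explicitly:
#      cmake -B build -DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS
#      cmake --build build
```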