My bad, it just doesn't like the mold linker. Switched to lld and it works great. Thank you, problem solved.
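If mold is the system-wide default linker, one way to fall back to lld for AUR builds is a user-level makepkg config override. A minimal sketch, assuming mold was enabled via `-fuse-ld=mold` in the system `LDFLAGS` (the exact flag set on a given machine may differ):

```shell
# ~/.makepkg.conf (hypothetical): force lld instead of a system-wide
# mold default. The other -Wl options are typical Arch defaults and
# should be copied from /etc/makepkg.conf on the actual system.
LDFLAGS="-Wl,-O1,--sort-common,--as-needed,-z,relro,-z,now -fuse-ld=lld"
```

User-level `~/.makepkg.conf` takes precedence over `/etc/makepkg.conf`, so this changes the linker only for builds run as that user.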
Package Details: llama.cpp-hip b4372-1
| Git Clone URL: | https://aur.archlinux.org/llama.cpp-hip.git (read-only) |
|---|---|
| Package Base: | llama.cpp-hip |
| Description: | Port of Facebook's LLaMA model in C/C++ (with AMD ROCm optimizations) |
| Upstream URL: | https://github.com/ggerganov/llama.cpp |
| Licenses: | MIT |
| Conflicts: | llama.cpp |
| Provides: | llama.cpp |
| Submitter: | txtsd |
| Maintainer: | txtsd |
| Last Packager: | txtsd |
| Votes: | 2 |
| Popularity: | 0.75 |
| First Submitted: | 2024-10-26 19:54 (UTC) |
| Last Updated: | 2024-12-21 04:10 (UTC) |
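The package is built from the clone URL above with the standard AUR workflow; a minimal sketch:

```shell
# Fetch the PKGBUILD repo and build/install the package,
# pulling in missing dependencies (-s) along the way.
git clone https://aur.archlinux.org/llama.cpp-hip.git
cd llama.cpp-hip
makepkg -si
```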
Dependencies (17)
- blas-openblas
- blas64-openblas
- curl (curl-quiche-gitAUR, curl-http3-ngtcp2AUR, curl-gitAUR, curl-c-aresAUR)
- gcc-libs (gcc-libs-gitAUR, gccrs-libs-gitAUR, gcc11-libsAUR, gcc-libs-snapshotAUR)
- glibc (glibc-gitAUR, glibc-linux4AUR, glibc-eacAUR, glibc-eac-binAUR, glibc-eac-rocoAUR)
- hip-runtime-amd (opencl-amdAUR)
- hipblas (opencl-amd-devAUR)
- openmp
- python (python37AUR, python311AUR, python310AUR)
- python-numpy (python-numpy-flameAUR, python-numpy-gitAUR, python-numpy1AUR, python-numpy-mkl-binAUR, python-numpy-mklAUR, python-numpy-mkl-tbbAUR)
- python-sentencepieceAUR (python-sentencepiece-gitAUR)
- rocblas (opencl-amd-devAUR)
- clblast (clblast-gitAUR) (make)
- cmake (cmake-gitAUR) (make)
- git (git-gitAUR, git-glAUR) (make)
- rocm-hip-runtime (opencl-amdAUR) (make)
- rocm-hip-sdk (opencl-amd-devAUR) (make)
Required by (0)
Sources (4)
Latest Comments
Althorion commented on 2024-12-12 21:13 (UTC)
txtsd commented on 2024-12-11 08:09 (UTC)
@Althorion It builds for me. Please try a clean build.
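A clean rebuild with makepkg can be sketched as follows; `-C`/`--cleanbuild` removes the existing `src/` tree before building, so no stale objects from a previous attempt are reused:

```shell
# Hypothetical clean rebuild inside the cloned AUR directory.
cd llama.cpp-hip
rm -rf pkg/      # previous packaging output, if any
makepkg -sfC     # -s sync deps, -f rebuild even if built, -C clean srcdir first
```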
Althorion commented on 2024-12-11 07:46 (UTC)
The linker dumps core now:
[ 33%] Linking CXX executable ../../bin/llama-run
clang++: error: unable to execute command: Aborted (core dumped)
clang++: error: linker command failed due to signal (use -v to see invocation)
make[2]: *** [examples/rpc/CMakeFiles/rpc-server.dir/build.make:110: bin/rpc-server] Error 1
make[1]: *** [CMakeFiles/Makefile2:4638: examples/rpc/CMakeFiles/rpc-server.dir/all] Error 2
txtsd commented on 2024-12-06 12:35 (UTC)
@Althorion Thanks for reporting! I've pushed a fix. Please let me know if it works as expected.
Althorion commented on 2024-12-04 14:22 (UTC)
Compilation fails with multiple cases of:
ld.lld: error: undefined symbol: cblas_sgemm
>>> referenced by ggml-blas.cpp
>>> lto.tmp:(ggml_backend_blas_graph_compute(ggml_backend*, ggml_cgraph*))
>>> referenced by ggml-blas.cpp
>>> lto.tmp:(ggml_backend_blas_graph_compute(ggml_backend*, ggml_cgraph*))
>>> referenced by ggml-blas.cpp
>>> lto.tmp:(ggml_backend_blas_graph_compute(ggml_backend*, ggml_cgraph*))
clang++: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [examples/server/CMakeFiles/llama-server.dir/build.make:122: bin/llama-server] Error 1
make[1]: *** [CMakeFiles/Makefile2:4043: examples/server/CMakeFiles/llama-server.dir/all] Error 2
(the actual files referencing cblas_sgemm differ; the missing symbol is the same)
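`cblas_sgemm` is single-precision matrix multiply from a CBLAS implementation (OpenBLAS, via the blas-openblas dependency, on Arch). The symbol surviving to the final LTO link suggests the BLAS library was missing from the link line. A hedged sketch of configuring llama.cpp's BLAS backend explicitly — the option names below come from upstream ggml's CMake and may differ between versions:

```shell
# Hypothetical configure step: have ggml locate OpenBLAS via
# pkg-config so cblas_sgemm is actually linked in.
cmake -B build \
  -DGGML_BLAS=ON \
  -DGGML_BLAS_VENDOR=OpenBLAS
cmake --build build
```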
Pinned Comments
txtsd commented on 2024-10-26 20:15 (UTC) (edited on 2024-12-06 14:15 (UTC) by txtsd)
Alternate versions
- llama.cpp
- llama.cpp-vulkan
- llama.cpp-sycl-fp16
- llama.cpp-sycl-fp32
- llama.cpp-cuda
- llama.cpp-cuda-f16
- llama.cpp-hip