@kelvie
- It doesn't actually use it; it's just downloaded for no reason (this package is the CPU backend). The nomic-ai fork is what llama.cpp uses for its Kompute backend (GGML_KOMPUTE=ON).
- CMAKE_BUILD_TYPE=None is the standard for building Arch packages; makepkg provides its own release flags. See the CMake package guidelines page on the wiki.
(llama.cpp assertions via GGML_ASSERT are always enabled in all build types.)
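To illustrate the two points above, here is a hypothetical PKGBUILD-style `build()` sketch (not the actual PKGBUILD; paths and option names besides `CMAKE_BUILD_TYPE` and `GGML_KOMPUTE` are assumptions):

```shell
build() {
  # CMAKE_BUILD_TYPE=None lets makepkg's own release CFLAGS/CXXFLAGS apply,
  # per the Arch CMake package guidelines.
  # GGML_KOMPUTE=ON would enable the Kompute backend, which is when the
  # nomic-ai fork actually gets used; the CPU package leaves it off.
  cmake -B build -S "$srcdir/llama.cpp" \
    -DCMAKE_BUILD_TYPE=None \
    -DGGML_KOMPUTE=ON
  cmake --build build
}
```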
Pinned Comments
txtsd commented on 2024-10-26 20:14 (UTC) (edited on 2024-12-06 14:14 (UTC) by txtsd)
Alternate versions
llama.cpp
llama.cpp-vulkan
llama.cpp-sycl-fp16
llama.cpp-sycl-fp32
llama.cpp-cuda
llama.cpp-cuda-f16
llama.cpp-hip