@undefinedmethod - the changes don't affect any of the build flags unless explicitly enabled, so this seems more likely due to a recent commit - possibly https://github.com/ggml-org/llama.cpp/pull/15587
I'd need to see the output of makepkg -L - this creates a build log that would show me what was detected at build time. You can link to it using a pastebin-type service, or post it at https://github.com/envolution/aur/issues
For now you can try running aur_llamacpp_build_universal=true makepkg -fsi to build the default CUDA architectures. I also have prebuilds at https://github.com/envolution/aur/releases/tag/llama.cpp-cuda if you'd like to try those, as they already have the universal flag set.
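As a minimal sketch of both steps, assuming you run them from the directory containing this package's PKGBUILD:

    # Rebuild with logging enabled; makepkg -L writes *.log files
    # into the build directory that you can attach to an issue.
    makepkg -L

    # Retry with the universal flag set, which builds the default
    # CUDA architectures instead of only the locally detected GPU.
    # -f rebuilds, -s installs missing deps, -i installs the package.
    aur_llamacpp_build_universal=true makepkg -fsi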
Pinned Comments
txtsd commented on 2024-10-26 20:17 (UTC) (edited on 2024-12-06 14:15 (UTC) by txtsd)
Alternate versions
llama.cpp
llama.cpp-vulkan
llama.cpp-sycl-fp16
llama.cpp-sycl-fp32
llama.cpp-cuda
llama.cpp-cuda-f16
llama.cpp-hip