Package Details: ollama-cuda-git 0.1.30.gc2712b55-1

Git Clone URL: https://aur.archlinux.org/ollama-cuda-git.git (read-only)
Package Base: ollama-cuda-git
Description: Create, run and share large language models (LLMs) with CUDA
Upstream URL: https://github.com/jmorganca/ollama
Licenses: MIT
Conflicts: ollama, ollama-cuda
Provides: ollama
Submitter: sr.team
Maintainer: sr.team
Last Packager: sr.team
Votes: 1
Popularity: 0.82
First Submitted: 2024-02-22 23:22 (UTC)
Last Updated: 2024-04-01 12:42 (UTC)

Dependencies (4)

Required by (10)

Sources (4)

Latest Comments

nmanarch commented on 2024-04-17 07:49 (UTC) (edited on 2024-04-17 07:53 (UTC) by nmanarch)

Hello. Apologies. Since 1.29, GPU support on CPUs without AVX is blocked in ollama. Can someone help get this working again? See https://github.com/ollama/ollama/issues/2187 and the bypass proposed by dbzoo, which works but was not applied upstream: https://github.com/dbzoo/ollama/commit/45eb1048496780a78ed07cf39b3ce6b62b5a72e3. Many thanks, and have a nice day.

nmanarch commented on 2024-04-04 14:23 (UTC)

Yes, it is fixed; it builds and runs. Thanks.

sr.team commented on 2024-04-01 12:43 (UTC)

@nmanarch thanks for the report. The problem should be fixed now.

nmanarch commented on 2024-03-31 23:00 (UTC) (edited on 2024-03-31 23:59 (UTC) by nmanarch)

Hi! The build failed. Does anyone have a trick to solve this?


/var/tmp/pamac-build-nico/ollama-cuda-git/src/ollama/llm/llama.cpp/ggml-cuda.cu:9432:13: note: in instantiation of function template specialization 'mul_mat_vec_q_cuda<256, 8, block_iq3_s, 1, &vec_dot_iq3_s_q8_1>' requested here
            mul_mat_vec_q_cuda<QK_K, QI3_XS, block_iq3_s, 1, vec_dot_iq3_s_q8_1>
            ^
error: option 'cf-protection=return' cannot be specified on this target
error: option 'cf-protection=branch' cannot be specified on this target
194 warnings and 2 errors generated when compiling for gfx1010.
make[3]: *** [CMakeFiles/ggml.dir/build.make:132: CMakeFiles/ggml.dir/ggml-cuda.cu.o] Error 1
make[2]: *** [CMakeFiles/Makefile2:838: CMakeFiles/ggml.dir/all] Error 2
make[1]: *** [CMakeFiles/Makefile2:3575: ext_server/CMakeFiles/ext_server.dir/rule] Error 2
make: *** [Makefile:1440: ext_server] Error 2
llm/generate/generate_linux.go:3: running "bash": exit status 2
==> ERROR: A failure occurred in build().
    Aborting…

Following a hint found at https://aur.archlinux.org/packages/ollama-rocm-git?O=10, I removed -fcf-protection from my /etc/makepkg.conf.
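For anyone hitting the same "cf-protection cannot be specified on this target" errors, a minimal sketch of stripping the flag, assuming it appears in the CFLAGS/CXXFLAGS lines of /etc/makepkg.conf (the flag values below are illustrative, not the exact contents of any real makepkg.conf):

```shell
# Demonstrate removing -fcf-protection from a flags string, as one
# would do by editing CFLAGS/CXXFLAGS in /etc/makepkg.conf.
flags="-march=x86-64 -O2 -fcf-protection"

# Delete the flag and any space preceding it.
flags="$(printf '%s' "$flags" | sed 's/ *-fcf-protection//')"

echo "$flags"
```

The same substitution can be applied directly to the real file with a sed -i edit (after backing it up), since makepkg reads those variables verbatim when building.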

But now i have


 g++ -fPIC -g -shared -o ../llama.cpp/build/linux/x86_64/rocm/lib/libext_server.so -Wl,--whole-archive ../llama.cpp/build/linux/x86_64/rocm/ext_server/libext_server.a -Wl,--no-whole-archive ../llama.cpp/build/linux/x86_64/rocm/common/libcommon.a ../llama.cpp/build/linux/x86_64/rocm/libllama.a '-Wl,-rpath,$ORIGIN' -lpthread -ldl -lm -L/opt/rocm/lib -L/opt/amdgpu/lib/x86_64-linux-gnu/ '-Wl,-rpath,$ORIGIN/../../rocm/' -lhipblas -lrocblas -lamdhip64 -lrocsolver -lamd_comgr -lhsa-runtime64 -lrocsparse -ldrm -ldrm_amdgpu
/usr/sbin/ld: ../llama.cpp/build/linux/x86_64/rocm/ext_server/libext_server.a: member %B in archive is not an object
collect2: error: ld returned 1 exit status
llm/generate/generate_linux.go:3: running "bash": exit status 1
==> ERROR: A failure occurred in build().
    Aborting…