Package Details: ollama-cuda-git 0.5.5+r3779+g6982e9cc9-3

Git Clone URL: https://aur.archlinux.org/ollama-cuda-git.git (read-only)
Package Base: ollama-cuda-git
Description: Create, run and share large language models (LLMs)
Upstream URL: https://github.com/ollama/ollama
Licenses: MIT
Conflicts: ollama
Provides: ollama
Submitter: sr.team
Maintainer: None
Last Packager: envolution
Votes: 5
Popularity: 1.13
First Submitted: 2024-02-22 23:22 (UTC)
Last Updated: 2025-01-14 06:03 (UTC)

Dependencies (5)

Required by (29)

Sources (5)

Latest Comments


LaptopDev commented on 2025-01-24 14:09 (UTC)

Can you tell me how to update this without removing it first?
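
My understanding is that an in-place rebuild would be something like the following, but I'm not sure it's the right approach:

git clone https://aur.archlinux.org/ollama-cuda-git.git   # or git pull in an existing clone
cd ollama-cuda-git
makepkg -si   # pacman -U then upgrades the installed package in place, no removal needed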

envolution commented on 2024-12-28 02:55 (UTC)

@wanxp can you please try the new version?

Wanxp commented on 2024-12-28 00:05 (UTC) (edited on 2024-12-28 00:07 (UTC) by Wanxp)

I have already installed the packages nvidia-dkms, nvidia, and nvidia-utils.

After installing opencl-nvidia and then installing ollama-cuda-git, I get the following error:

/usr/include/c++/14.2.1/type_traits(3271): error: type name is not allowed
      __is_member_object_pointer(_Tp);
                                 ^

/usr/include/c++/14.2.1/type_traits(3281): error: type name is not allowed
      __is_member_function_pointer(_Tp);
                                   ^

/usr/include/c++/14.2.1/type_traits(3298): error: type name is not allowed
    inline constexpr bool is_reference_v = __is_reference(_Tp);
                                                          ^

/usr/include/c++/14.2.1/type_traits(3315): error: type name is not allowed
    inline constexpr bool is_object_v = __is_object(_Tp);
                                                    ^

/usr/include/c++/14.2.1/type_traits(3328): error: type name is not allowed
    inline constexpr bool is_member_pointer_v = __is_member_pointer(_Tp);
                                                                    ^

/usr/include/c++/14.2.1/bits/utility.h(237): error: __type_pack_element is not a template
      { using type = __type_pack_element<_Np, _Types...>; };
                     ^

/usr/include/c++/14.2.1/type_traits(138): error: class "std::enable_if<<error-constant>, void>" has no member "type"
      using __enable_if_t = typename enable_if<_Cond, _Tp>::type;
                                                            ^
          detected during:
            instantiation of type "std::__enable_if_t<<error-constant>, void>" at line 176
            instantiation of "std::__detail::__or_fn" based on template arguments <std::is_reference<std::allocator<char>>, std::is_function<std::allocator<char>>, std::is_void<std::allocator<char>>, std::__is_array_unknown_bounds<std::allocator<char>>> at line 194
            instantiation of class "std::__or_<_Bn...> [with _Bn=<std::is_reference<std::allocator<char>>, std::is_function<std::allocator<char>>, std::is_void<std::allocator<char>>, std::__is_array_unknown_bounds<std::allocator<char>>>]" at line 1195
            instantiation of class "std::is_nothrow_default_constructible<_Tp> [with _Tp=std::allocator<char>]" at line 528 of /usr/include/c++/14.2.1/bits/basic_string.h
            instantiation of "std::__cxx11::basic_string<_CharT, _Traits, _Alloc>::basic_string() [with _CharT=char, _Traits=std::char_traits<char>, _Alloc=std::allocator<char>]" at line 4248 of /usr/include/c++/14.2.1/bits/basic_string.h

21 errors detected in the compilation of "llama/ggml-cuda/acc.cu".
make[1]: *** [make/gpu.make:53: llama/build/linux-amd64/llama/ggml-cuda/acc.cuda_v12.o] Error 255
make: *** [Makefile:43: dist_cuda_v12] Error 2
==> ERROR: A failure occurred in build().
    Aborting...
 -> error making: ollama-cuda-git-exit status 4
 -> Failed to install the following packages. Manual intervention is required:
ollama-cuda-git - exit status 4

Wanxp commented on 2024-12-27 23:42 (UTC)

The install fails with the following errors:

/usr/include/c++/14.2.1/type_traits(3315): error: type name is not allowed
    inline constexpr bool is_object_v = __is_object(_Tp);
                                                    ^

/usr/include/c++/14.2.1/type_traits(3328): error: type name is not allowed
    inline constexpr bool is_member_pointer_v = __is_member_pointer(_Tp);
                                                                    ^

/usr/include/c++/14.2.1/bits/utility.h(237): error: __type_pack_element is not a template
      { using type = __type_pack_element<_Np, _Types...>; };
                     ^

/usr/include/c++/14.2.1/type_traits(138): error: class "std::enable_if<<error-constant>, void>" has no member "type"
      using __enable_if_t = typename enable_if<_Cond, _Tp>::type;
                                                            ^
          detected during:
            instantiation of type "std::__enable_if_t<<error-constant>, void>" at line 176
            instantiation of "std::__detail::__or_fn" based on template arguments <std::is_reference<std::allocator<char>>, std::is_function<std::allocator<char>>, std::is_void<std::allocator<char>>, std::__is_array_unknown_bounds<std::allocator<char>>> at line 194
            instantiation of class "std::__or_<_Bn...> [with _Bn=<std::is_reference<std::allocator<char>>, std::is_function<std::allocator<char>>, std::is_void<std::allocator<char>>, std::__is_array_unknown_bounds<std::allocator<char>>>]" at line 1195
            instantiation of class "std::is_nothrow_default_constructible<_Tp> [with _Tp=std::allocator<char>]" at line 528 of /usr/include/c++/14.2.1/bits/basic_string.h
            instantiation of "std::__cxx11::basic_string<_CharT, _Traits, _Alloc>::basic_string() [with _CharT=char, _Traits=std::char_traits<char>, _Alloc=std::allocator<char>]" at line 4248 of /usr/include/c++/14.2.1/bits/basic_string.h

21 errors detected in the compilation of "llama/ggml-cuda/acc.cu".
make[1]: *** [make/gpu.make:53: llama/build/linux-amd64/llama/ggml-cuda/acc.cuda_v12.o] Error 255
make: *** [Makefile:43: dist_cuda_v12] Error 2
==> ERROR: A failure occurred in build().
    Aborting...
 -> error making: ollama-cuda-git-exit status 4
checking dependencies...
:: nvidia-utils optionally requires opencl-nvidia: OpenCL support
:: ocl-icd optionally requires opencl-driver: packaged opencl driver

envolution commented on 2024-12-16 01:55 (UTC) (edited on 2024-12-16 05:36 (UTC) by envolution)

This seems to be working now. If it fails to compile, please comment with your GPU/nvidia-smi output and just the error line from the compilation.
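
For example, something like this should capture both (the exact grep pattern is just a suggestion):

nvidia-smi --query-gpu=name,driver_version --format=csv
makepkg -s 2>&1 | grep -m1 'error:'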

envolution commented on 2024-12-05 15:38 (UTC)

Until they merge https://github.com/ollama/ollama/pull/7499, there isn't a good way to manage this git package.

envolution commented on 2024-12-01 02:18 (UTC)

@jfiguero @sarudosi

https://github.com/ollama/ollama/pull/7499 https://gitlab.archlinux.org/archlinux/packaging/packages/ollama-cuda/-/commits/main?ref_type=HEADS

It's being worked on. If anyone has a working build() that enables CUDA, please flag the package out-of-date and add it to the comments.
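
As a rough, untested starting point, a build() along these lines might work, based on the dist_cuda_v12 Makefile target visible in the build logs elsewhere in these comments (the paths and targets here are assumptions, not a verified recipe):

build() {
  cd "$srcdir/ollama"
  export CUDA_HOME=/opt/cuda        # where Arch's cuda package installs
  make -j"$(nproc)" dist_cuda_v12   # CUDA runner target, per Makefile:43 in the logs
  go build -o ollama .              # main Go binary at the repo root
}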

jfiguero commented on 2024-11-29 18:41 (UTC) (edited on 2024-11-29 18:50 (UTC) by jfiguero)

I have installed ollama-cuda-git 0.4.6+r3691+gce7455a8e-1 and it does not use my GTX 1070 GPU but defaults to the CPU. Using ollama-cuda from extra and extra-testing does use it, but both packages are outdated.

I confirmed this using nvidia-smi, which doesn't show ollama as a running process, and I see no change in power/RAM consumption while generating a response when using this package.

Here's my output for systemctl status ollama. Any suggestions on what I can look for to further debug?

ollama.service - Ollama Service
     Loaded: loaded (/usr/lib/systemd/system/ollama.service; enabled; preset: disabled)
    Drop-In: /etc/systemd/system/ollama.service.d
             └─override.conf
     Active: active (running) since Fri 2024-11-29 12:25:27 CST; 12min ago
 Invocation: 5fd8601c6424461a9f5c138297e19711
   Main PID: 129202 (ollama)
      Tasks: 17 (limit: 37967)
     Memory: 60.4M (peak: 61.4M)
        CPU: 501ms
     CGroup: /system.slice/ollama.service
             └─129202 /usr/bin/ollama serve

Nov 29 12:25:27 hsxarch ollama[129202]: [GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
Nov 29 12:25:27 hsxarch ollama[129202]: [GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
Nov 29 12:25:27 hsxarch ollama[129202]: [GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
Nov 29 12:25:27 hsxarch ollama[129202]: [GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
Nov 29 12:25:27 hsxarch ollama[129202]: [GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
Nov 29 12:25:27 hsxarch ollama[129202]: time=2024-11-29T12:25:27.215-06:00 level=INFO source=routes.go:1248 msg="Listening on [::]:11434 (version 0.4.6)"
Nov 29 12:25:27 hsxarch ollama[129202]: time=2024-11-29T12:25:27.216-06:00 level=INFO source=common.go:135 msg="extracting embedded files" dir=/tmp/ollama649091418/runners
Nov 29 12:25:27 hsxarch ollama[129202]: time=2024-11-29T12:25:27.300-06:00 level=INFO source=common.go:49 msg="Dynamic LLM libraries" runners="[cpu_avx2 cpu cpu_avx]"
Nov 29 12:25:27 hsxarch ollama[129202]: time=2024-11-29T12:25:27.300-06:00 level=INFO source=gpu.go:221 msg="looking for compatible GPUs"
Nov 29 12:25:27 hsxarch ollama[129202]: time=2024-11-29T12:25:27.492-06:00 level=INFO source=types.go:123 msg="inference compute" id=GPU-fe5de60e-7506-5f83-6fa3-4070b933c724 library=cuda variant=v12 compute=6.1 driver=12.7 name="NVIDIA GeForce GTX 1070" total="7.9 GiB" available="6.5 GiB"
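
Also worth noting: the "Dynamic LLM libraries" line above lists only cpu_avx2/cpu/cpu_avx runners, even though the GPU itself is detected on the "inference compute" line, so it looks like no CUDA runner made it into this build. A quick way to pull those two lines back out after a rebuild (the filter pattern is just a suggestion):

journalctl -u ollama -b | grep -Ei 'dynamic llm libraries|inference compute'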

sarudosi commented on 2024-11-26 06:48 (UTC)

I cannot use the GPU with either this package or extra/ollama-cuda. Do you have any idea how to fix this issue?

systemctl status ollama

11月 26 15:30:01 QC861 ollama[1474]: time=2024-11-26T15:30:01.430+09:00 level=WARN source=gpu.go:732 msg="unable to locate gpu dependency libraries"
11月 26 15:30:01 QC861 ollama[1474]: time=2024-11-26T15:30:01.430+09:00 level=DEBUG source=gpu.go:532 msg="gpu library search" globs="[libcudart.so* /var/lib/ollama/libcudart.so* /usr/local/cuda/lib64/libcuda>
11月 26 15:30:01 QC861 ollama[1474]: time=2024-11-26T15:30:01.439+09:00 level=DEBUG source=gpu.go:566 msg="discovered GPU libraries" paths=[/opt/cuda/lib64/libcudart.so.11.8.89]
11月 26 15:30:01 QC861 ollama[1474]: cudaSetDevice err: 100
11月 26 15:30:01 QC861 ollama[1474]: time=2024-11-26T15:30:01.444+09:00 level=DEBUG source=gpu.go:582 msg="Unable to load cudart library /opt/cuda/lib64/libcudart.so.11.8.89: cudart init failure: 100"
11月 26 15:30:01 QC861 ollama[1474]: time=2024-11-26T15:30:01.444+09:00 level=DEBUG source=amd_linux.go:416 msg="amdgpu driver not detected /sys/module/amdgpu"
11月 26 15:30:01 QC861 ollama[1474]: time=2024-11-26T15:30:01.444+09:00 level=INFO source=gpu.go:386 msg="no compatible GPUs were discovered"
11月 26 15:30:01 QC861 ollama[1474]: time=2024-11-26T15:30:01.444+09:00 level=INFO source=types.go:123 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="62.6 GiB" avai>
11月 26 15:44:48 QC861 ollama[1474]: [GIN] 2024/11/26 - 15:44:48 | 200 |     962.018µs |       127.0.0.1 | HEAD     "/"
11月 26 15:44:49 QC861 ollama[1474]: [GIN] 2024/11/26 - 15:44:49 | 200 |  572.186812ms |       127.0.0.1 | GET      "/api/tags"
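
If it helps, CUDA error 100 is cudaErrorNoDevice, and the only runtime discovered above is /opt/cuda/lib64/libcudart.so.11.8.89 (CUDA 11.8), so a driver/toolkit mismatch seems plausible; comparing the two versions might narrow it down (these commands are a guess at where to look, not a verified fix):

nvidia-smi                     # supported CUDA version is shown in the header
/opt/cuda/bin/nvcc --version   # toolkit version behind the libcudart ollama found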

sarudosi commented on 2024-11-26 06:43 (UTC) (edited on 2024-11-26 06:44 (UTC) by sarudosi)

Mr. brauliobo, you should modify the PKGBUILD and then run makepkg -si:

- sed -i 's,T_CODE=on,T_CODE=on -D LLAMA_LTO=on -D CMAKE_BUILD_TYPE=Release,g' llm/generate/gen_linux.sh

+ sed -i 's,T_CODE=on,T_CODE=on -D LLAMA_LTO=on -D CMAKE_BUILD_TYPE=Release,g' scripts/build_linux.sh
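
So the full flow would be roughly (assuming a fresh clone; the sed target change is the one shown in the diff above):

git clone https://aur.archlinux.org/ollama-cuda-git.git
cd ollama-cuda-git
# edit PKGBUILD so its sed line patches scripts/build_linux.sh
# instead of llm/generate/gen_linux.sh, as in the diff above
makepkg -si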