Package Details: ollama-rocm-git 0.5.3.git+2cde4b88-1

Git Clone URL: https://aur.archlinux.org/ollama-rocm-git.git (read-only)
Package Base: ollama-rocm-git
Description: Create, run and share large language models (LLMs) with ROCm
Upstream URL: https://github.com/ollama/ollama
Licenses: MIT
Conflicts: ollama
Provides: ollama
Submitter: sr.team
Maintainer: wgottwalt
Last Packager: wgottwalt
Votes: 4
Popularity: 0.56
First Submitted: 2024-02-28 00:40 (UTC)
Last Updated: 2024-12-17 06:59 (UTC)

Pinned Comments

wgottwalt commented on 2024-11-09 10:46 (UTC) (edited on 2024-11-26 15:23 (UTC) by wgottwalt)

Looks like the ROCm 6.2.2-1 SDK has a malfunctioning compiler: it produces a broken ollama binary (fp16 issues). You may need to stay with ROCm 6.0.2 for now. I don't know whether this has been fixed in a newer build, but the initial "-1" SDK release is broken.

ROCm 6.2.4 fixes this issue completely.
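For anyone who needs to hold back ROCm until a fixed SDK arrives, a minimal sketch, assuming the older packages are still in your local pacman cache (the exact file name and version below are illustrative):

    # Reinstall the known-good SDK from the local package cache
    sudo pacman -U /var/cache/pacman/pkg/rocm-hip-sdk-6.0.2-1-x86_64.pkg.tar.zst

    # Then keep pacman from upgrading it again by adding to /etc/pacman.conf:
    #   IgnorePkg = rocm-hip-sdk

Remember to drop the IgnorePkg entry again once a fixed release (e.g. 6.2.4) is available.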

Latest Comments


nameiwillforget commented on 2024-03-01 11:18 (UTC) (edited on 2024-03-01 16:22 (UTC) by nameiwillforget)

@sr.team No, I installed cuda earlier because I thought I needed it, but once I realized I didn't, I uninstalled it with yay -Rcs. That was before I tried to install ollama-rocm-git.

Edit: I re-installed cuda, and after adding it to my PATH, the compilation fails with the same error.

sr.team commented on 2024-02-29 17:11 (UTC)

@nameiwillforget do you have cuda installed? The ollama generator tried to build llama.cpp with CUDA for you.
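A quick way to verify that theory is to check for a stray CUDA install and remove it before rebuilding (yay -Rcs is the same removal command used elsewhere in this thread; the rebuild step assumes the default yay workflow):

    # Is the cuda package installed? The generator builds llama.cpp
    # with CUDA support if it finds the toolkit.
    pacman -Qi cuda

    # Remove it along with its no-longer-needed dependencies
    yay -Rcs cuda

    # Rebuild so the generator picks up ROCm only
    yay -S ollama-rocm-git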

nameiwillforget commented on 2024-02-29 17:09 (UTC)

Compilation fails when installing with yay, with the following error message:

/home/alex/.cache/yay/ollama-rocm-git/src/ollama/llm/llama.cpp/ggml-cuda.cu:9673:5: note: in instantiation of function template specialization 'pool2d_nchw_kernel<float, float>' requested here
    pool2d_nchw_kernel<<<block_nums, CUDA_IM2COL_BLOCK_SIZE, 0, main_stream>>>(IH, IW, OH, OW, k1, k0, s1, s0, p1, p0, parallel_elements, src0_dd, dst_dd, op);
    ^
/home/alex/.cache/yay/ollama-rocm-git/src/ollama/llm/llama.cpp/ggml-cuda.cu:6947:25: warning: enumeration value 'GGML_OP_POOL_COUNT' not handled in switch [-Wswitch]
                switch (op) {
                        ^~
error: option 'cf-protection=return' cannot be specified on this target
error: option 'cf-protection=branch' cannot be specified on this target
184 warnings and 2 errors generated when compiling for gfx1010.
make[3]: *** [CMakeFiles/ggml.dir/build.make:135: CMakeFiles/ggml.dir/ggml-cuda.cu.o] Error 1
make[3]: *** Waiting for unfinished jobs....
make[3]: Leaving directory '/home/alex/.cache/yay/ollama-rocm-git/src/ollama/llm/llama.cpp/build/linux/x86_64/rocm_v1'
make[2]: *** [CMakeFiles/Makefile2:745: CMakeFiles/ggml.dir/all] Error 2
make[2]: Leaving directory '/home/alex/.cache/yay/ollama-rocm-git/src/ollama/llm/llama.cpp/build/linux/x86_64/rocm_v1'
make[1]: *** [CMakeFiles/Makefile2:2910: examples/server/CMakeFiles/ext_server.dir/rule] Error 2
make[1]: Leaving directory '/home/alex/.cache/yay/ollama-rocm-git/src/ollama/llm/llama.cpp/build/linux/x86_64/rocm_v1'
make: *** [Makefile:1196: ext_server] Error 2

llm/generate/generate_linux.go:3: running "bash": exit status 2
==> ERROR: A failure occurred in build().
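The two hard errors above ('cf-protection=return' / 'cf-protection=branch') come from Arch's default CFLAGS, which include -fcf-protection; the device compiler rejects that option when targeting gfx1010. A minimal sketch of stripping the flag for local builds via a per-user makepkg.conf (makepkg sources /etc/makepkg.conf before ~/.makepkg.conf, so the substitutions below see the system defaults; that this alone lets the build finish is an assumption):

    # ~/.makepkg.conf -- sourced after /etc/makepkg.conf, so CFLAGS/CXXFLAGS
    # already hold the system defaults, including -fcf-protection
    CFLAGS="${CFLAGS/-fcf-protection/}"
    CXXFLAGS="${CXXFLAGS/-fcf-protection/}"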