Package Details: ollama-rocm-git 0.1.30.gc2712b55-1

Git Clone URL: https://aur.archlinux.org/ollama-rocm-git.git (read-only)
Package Base: ollama-rocm-git
Description: Create, run and share large language models (LLMs) with ROCm
Upstream URL: https://github.com/jmorganca/ollama
Licenses: MIT
Conflicts: ollama, ollama-cuda
Provides: ollama
Submitter: sr.team
Maintainer: sr.team
Last Packager: sr.team
Votes: 1
Popularity: 0.34
First Submitted: 2024-02-28 00:40 (UTC)
Last Updated: 2024-03-25 23:09 (UTC)

Dependencies (6)

Required by (10)

Sources (4)

Pinned Comments

bullet92 commented on 2024-03-02 13:00 (UTC) (edited on 2024-03-02 15:50 (UTC) by bullet92)

Hi, without the hipblas package it does not build, failing with the following compilation error:

CMake Error at CMakeLists.txt:500 (find_package):
  By not providing "Findhipblas.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "hipblas", but
  CMake did not find one.

  Could not find a package configuration file provided by "hipblas" with any
  of the following names:

    hipblasConfig.cmake
    hipblas-config.cmake

  Add the installation prefix of "hipblas" to CMAKE_PREFIX_PATH or set
  "hipblas_DIR" to a directory containing one of the above files.  If
  "hipblas" provides a separate development package or SDK, be sure it has
  been installed.

Suggestion: add hipblas to the dependencies.
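
For example, a minimal sketch (hipblas is packaged in the official extra repository alongside the other ROCm packages):

    # install the missing build dependency before running makepkg/yay
    sudo pacman -S --needed hipblas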

Even with hipblas installed the compile failed, so I had to edit my /etc/makepkg.conf and comment out this line:

    # -fstack-clash-protection -fcf-protection
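
A less invasive variant of this workaround (a sketch; the flag list is assumed from the Arch defaults of the time): redefine CFLAGS in ~/.makepkg.conf, which makepkg reads after /etc/makepkg.conf, so the system file stays untouched:

    # ~/.makepkg.conf -- default CFLAGS minus -fstack-clash-protection and
    # -fcf-protection, which the ROCm device compiler rejects for gfx targets
    CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe -fno-plt \
            -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security"
    CXXFLAGS="$CFLAGS"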

After starting ollama with systemctl and checking the output with systemctl status, I got:

level=INFO source=routes.go:1044 msg="no GPU detected"

So I edited the service, adding "HSA_OVERRIDE_GFX_VERSION=10.3.0" to the existing environment in ollama.service:

sudo systemctl edit ollama.service

    [Service]
    Environment="HOME=/var/lib/ollama" "GIN_MODE=release" "HSA_OVERRIDE_GFX_VERSION=10.3.0"

and installed some dependencies:

  pacman -S rocm-hip-sdk rocm-opencl-sdk clblast go

(in my case rocm-hip-sdk and rocm-opencl-sdk were missing). That gave me "Radeon GPU detected".
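
The right override value depends on the card; one way to check the actual gfx target (a sketch; rocminfo is packaged separately if not already pulled in by the ROCm packages above):

    # list the ISA names of the detected GPU agents
    rocminfo | grep -i gfx
    # RDNA2 cards reported as e.g. gfx1031/gfx1032 typically run with the
    # gfx1030 binaries, i.e. HSA_OVERRIDE_GFX_VERSION=10.3.0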

Latest Comments

ZappaBoy commented on 2024-03-05 12:28 (UTC)

Same compile error as @nameiwillforget; fixed it using @bullet92's workaround of commenting out the line in /etc/makepkg.conf:

    # -fstack-clash-protection -fcf-protection

sr.team commented on 2024-03-01 17:16 (UTC)

@nameiwillforget you can build the ROCm version in Docker, without CUDA.
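
For example, a containerized build might look like this (a sketch only; image and package list assumed, not an official recipe):

    docker run --rm -it archlinux bash -c '
      pacman -Syu --noconfirm --needed base-devel git go cmake rocm-hip-sdk hipblas
      useradd -m builder
      cd /home/builder
      su builder -c "git clone https://aur.archlinux.org/ollama-rocm-git.git"
      su builder -c "cd ollama-rocm-git && makepkg"
    '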

nameiwillforget commented on 2024-03-01 11:18 (UTC) (edited on 2024-03-01 16:22 (UTC) by nameiwillforget)

@sr.team No, I installed cuda earlier because I thought I needed it, but once I realized I didn't, I uninstalled it with yay -Rcs. That was before I tried to install ollama-rocm-git.

Edit: I re-installed cuda, and after I added it to my PATH, the compilation failed with the same error.

sr.team commented on 2024-02-29 17:11 (UTC)

@nameiwillforget do you have cuda installed? The ollama generator tried to build llama.cpp with CUDA for you.
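
The generator picks its backends based on what it finds at build time, so checking what is actually installed can help, e.g.:

    # is the cuda package still installed?
    pacman -Qs '^cuda$'
    # is nvcc still reachable on PATH?
    command -v nvcc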

nameiwillforget commented on 2024-02-29 17:09 (UTC)

Compilation fails when installing with yay, with the following error message:

/home/alex/.cache/yay/ollama-rocm-git/src/ollama/llm/llama.cpp/ggml-cuda.cu:9673:5: note: in instantiation of function template specialization 'pool2d_nchw_kernel<float, float>' requested here
    pool2d_nchw_kernel<<<block_nums, CUDA_IM2COL_BLOCK_SIZE, 0, main_stream>>>(IH, IW, OH, OW, k1, k0, s1, s0, p1, p0, parallel_elements, src0_dd, dst_dd, op);
    ^
/home/alex/.cache/yay/ollama-rocm-git/src/ollama/llm/llama.cpp/ggml-cuda.cu:6947:25: warning: enumeration value 'GGML_OP_POOL_COUNT' not handled in switch [-Wswitch]
                switch (op) {
                        ^~
error: option 'cf-protection=return' cannot be specified on this target
error: option 'cf-protection=branch' cannot be specified on this target
184 warnings and 2 errors generated when compiling for gfx1010.
make[3]: *** [CMakeFiles/ggml.dir/build.make:135: CMakeFiles/ggml.dir/ggml-cuda.cu.o] Error 1
make[3]: *** Waiting for unfinished jobs....
make[3]: Leaving directory '/home/alex/.cache/yay/ollama-rocm-git/src/ollama/llm/llama.cpp/build/linux/x86_64/rocm_v1'
make[2]: *** [CMakeFiles/Makefile2:745: CMakeFiles/ggml.dir/all] Error 2
make[2]: Leaving directory '/home/alex/.cache/yay/ollama-rocm-git/src/ollama/llm/llama.cpp/build/linux/x86_64/rocm_v1'
make[1]: *** [CMakeFiles/Makefile2:2910: examples/server/CMakeFiles/ext_server.dir/rule] Error 2
make[1]: Leaving directory '/home/alex/.cache/yay/ollama-rocm-git/src/ollama/llm/llama.cpp/build/linux/x86_64/rocm_v1'
make: *** [Makefile:1196: ext_server] Error 2

llm/generate/generate_linux.go:3: running "bash": exit status 2
==> ERROR: A failure occurred in build().
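
The two cf-protection errors come from -fcf-protection in the default CFLAGS being passed through to the ROCm device compiler, which rejects it for gfx targets (see @bullet92's pinned workaround). A one-off alternative to editing /etc/makepkg.conf (a sketch, assuming both flags sit on one line): build against a modified copy via makepkg's --config option:

    cp /etc/makepkg.conf /tmp/makepkg-rocm.conf
    # drop the two x86-only hardening flags the GPU compiler rejects
    sed -i 's/ -fstack-clash-protection -fcf-protection//' /tmp/makepkg-rocm.conf
    makepkg --config /tmp/makepkg-rocm.conf -si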