Package Details: ollama-openblas-git 0.1.39+rc1+4.r2773.20240522.955c317c-2
Git Clone URL: https://aur.archlinux.org/ollama-nogpu-git.git
Package Base: ollama-nogpu-git
Description: Create, run and share large language models (LLMs). CPU optimisation with openblas.
Upstream URL: https://github.com/jmorganca/ollama
Licenses: MIT
Conflicts: ollama
Provides: ollama, ollama-git
Submitter: dreieck
Maintainer: dreieck
Last Packager: dreieck
Votes: 0
Popularity: 0.000000
First Submitted: 2024-04-17 15:09 (UTC)
Last Updated: 2024-05-22 08:41 (UTC)
Dependencies (9)
- gcc-libs (gccrs-libs-git^AUR, gcc11-libs^AUR, gcc-libs-git^AUR, gcc-libs-snapshot^AUR)
- glibc (glibc-git^AUR, glibc-linux4^AUR, glibc-eac^AUR)
- openssl (openssl-git^AUR, openssl-static^AUR)
- bash (bash-devel-static-git^AUR, bash-git^AUR, bash-devel-git^AUR, busybox-coreutils^AUR) (make)
- cmake (cmake-git^AUR) (make)
- git (git-git^AUR) (make)
- go (go-git^AUR, gcc-go-git^AUR, gcc-go-snapshot^AUR, gcc-go) (make)
- openblas (openblas-lapack^AUR) (make)
- openmpi (openmpi-git^AUR) (make)
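Building from this package base follows the standard AUR workflow; `makepkg -s` pulls the make dependencies listed above from the official repositories (the `^AUR` alternatives would have to be built separately). Nothing here is specific to this package beyond its clone URL:

```shell
# Standard AUR build-and-install workflow (requires git and base-devel).
git clone https://aur.archlinux.org/ollama-nogpu-git.git
cd ollama-nogpu-git
makepkg -si   # -s installs missing repo dependencies, -i installs the built package
```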
Required by (10)
- alpaka-git (requires ollama)
- chatd (requires ollama)
- chatd-bin (requires ollama)
- litellm (requires ollama) (optional)
- litellm-ollama (requires ollama)
- ollamamodelupdater (requires ollama) (optional)
- ollamamodelupdater-bin (requires ollama) (optional)
- python-ollama (requires ollama)
- python-ollama-git (requires ollama)
- tlm (requires ollama) (optional)
Latest Comments
dreieck commented on 2024-05-22 08:38 (UTC)
Reactivated Vulkan build by deactivating testing options.
dreieck commented on 2024-05-21 21:12 (UTC) (edited on 2024-05-21 21:12 (UTC) by dreieck)
Disabled Vulkan build since it currently fails.
dreieck commented on 2024-05-21 20:44 (UTC)
Upstream has implemented CUDA and ROCm skip variables 🎉. Implementing them and uploading a fixed PKGBUILD now.
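The skip variables mentioned above could be wired into the PKGBUILD roughly as follows. This is only a sketch: the variable names `OLLAMA_SKIP_CUDA_GENERATE` and `OLLAMA_SKIP_ROCM_GENERATE` are assumed from upstream's `llm/generate/gen_linux.sh`, and the OpenBLAS defines are illustrative; verify both against the checked-out source tree before relying on them.

```shell
# Sketch of a PKGBUILD build() using upstream's skip variables.
# OLLAMA_SKIP_CUDA_GENERATE / OLLAMA_SKIP_ROCM_GENERATE are assumed
# from upstream's llm/generate/gen_linux.sh; verify in the source tree.
build() {
  cd "$srcdir/ollama"
  export OLLAMA_SKIP_CUDA_GENERATE=1   # do not build the CUDA runners
  export OLLAMA_SKIP_ROCM_GENERATE=1   # do not build the ROCm runners
  # illustrative CPU-only flags for an OpenBLAS-accelerated build
  export OLLAMA_CUSTOM_CPU_DEFS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS"
  go generate ./...
  go build .
}
```

With both skip variables set, no CUDA or ROCm toolchain files on the build host should trigger a GPU build, which is exactly the "kill switch" behaviour requested below.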
nmanarch commented on 2024-05-18 09:41 (UTC)
Ok! That's sad. Perhaps they will agree to accept your code changes into their master? Many thanks for all of this.
dreieck commented on 2024-05-18 09:32 (UTC) (edited on 2024-05-18 09:38 (UTC) by dreieck)
Ahoj @nmanarch,
it seems upstream is moving too fast, so the patch needs to change too often.
If I do not find an easier way to avoid building with ROCm or CUDA even when some of their files are installed, I might just give up.
See the upstream feature request to add a "kill switch" to force-off ROCm and CUDA.
nmanarch commented on 2024-05-18 09:17 (UTC) (edited on 2024-05-18 09:37 (UTC) by nmanarch)
Hello @dreieck. I want to try your ollama vulkan, but the patch fails to apply. I tried adding --fuzz 3 and --ignore-whitespace, but that did not help. Thanks for any tips.
```
Submodule path 'llm/llama.cpp': checked out '614d3b914e1c3e02596f869649eb4f1d3b68614d'
Applying patch disable-rocm-cuda.gen_linux.sh.patch ...
patching file llm/generate/gen_linux.sh
Hunk #5 FAILED at 143.
Hunk #6 FAILED at 220.
2 out of 6 hunks FAILED -- saving rejects to file llm/generate/gen_linux.sh.rej
==> ERROR: A failure occurred in prepare(). Aborting...
```
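The hunk failures above happen because the patch's context lines no longer match the fast-moving upstream file. A minimal, self-contained illustration of what `--fuzz` does (hypothetical file names, not the real gen_linux.sh):

```shell
# Demo: how `patch --fuzz` tolerates drifted context lines.
mkdir -p /tmp/fuzzdemo && cd /tmp/fuzzdemo

# The file the patch was originally made against
cat > gen_linux_orig.sh <<'EOF'
line one
line two
line three
line four
line five
EOF

# Our local copy has drifted: the first context line changed
sed 's/line one/line ONE/' gen_linux_orig.sh > gen_linux.sh

# A patch that changes "line three", using the old lines as context
cat > fix.patch <<'EOF'
--- gen_linux_orig.sh
+++ gen_linux_orig.sh
@@ -1,5 +1,5 @@
 line one
 line two
-line three
+line THREE
 line four
 line five
EOF

# With fuzz disabled, the drifted context makes the hunk fail...
patch --dry-run --fuzz=0 gen_linux.sh < fix.patch || echo "strict apply failed"

# ...but allowing fuzz lets patch match on the remaining context lines
patch --fuzz=2 gen_linux.sh < fix.patch
grep "line THREE" gen_linux.sh
```

Fuzz only helps when context lines near the hunk edges drifted; when the lines a hunk actually modifies have moved or changed (as in the failure above), the patch itself has to be regenerated against the new upstream tree.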