Package Details: ollama-nogpu-git 0.5.5+r3779+g6982e9cc9-1
| Field | Value |
|---|---|
| Git Clone URL | https://aur.archlinux.org/ollama-nogpu-git.git (read-only) |
| Package Base | ollama-nogpu-git |
| Description | Create, run and share large language models (LLMs) |
| Upstream URL | https://github.com/ollama/ollama |
| Licenses | MIT |
| Conflicts | ollama |
| Provides | ollama |
| Submitter | dreieck |
| Maintainer | None |
| Last Packager | envolution |
| Votes | 5 |
| Popularity | 0.153731 |
| First Submitted | 2024-04-17 15:09 (UTC) |
| Last Updated | 2025-01-14 12:40 (UTC) |
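To build from the clone URL above, the standard AUR workflow applies; a minimal sketch:

```sh
# Clone the AUR repository, then build and install the package.
# -s resolves the make dependencies listed below; -i installs the result.
git clone https://aur.archlinux.org/ollama-nogpu-git.git
cd ollama-nogpu-git
makepkg -si
```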
Dependencies (3)
- cmake (cmake-git [AUR]) (make)
- git (git-git [AUR], git-gl [AUR]) (make)
- go (go-git [AUR], gcc-go-git [AUR], gcc-go-snapshot [AUR], gcc-go) (make)
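If you prefer to install the make dependencies up front instead of letting makepkg -s resolve them, the repository packages can be pulled in directly; a minimal sketch using the official repo names above:

```sh
# Install the build-time dependencies from the official repositories;
# --needed skips anything already installed.
sudo pacman -S --needed cmake git go
```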
Required by (29)
- ai-writer (requires ollama)
- alpaca-ai (requires ollama)
- alpaca-git (requires ollama) (optional)
- alpaka-git (requires ollama)
- anythingllm-desktop-bin (requires ollama)
- calt-git (requires ollama)
- chatd (requires ollama)
- chatd-bin (requires ollama)
- gollama (requires ollama) (optional)
- gollama-git (requires ollama) (optional)
- hoarder (requires ollama) (optional)
- hollama-bin (requires ollama)
- litellm (requires ollama) (optional)
- litellm-ollama (requires ollama)
- llocal-bin (requires ollama)
- lobe-chat (requires ollama) (optional)
- lumen (requires ollama) (optional)
- maestro (requires ollama) (optional)
- maestro-git (requires ollama) (optional)
- ollama-chat-desktop-git (requires ollama)
- ollamamodelupdater (requires ollama) (optional)
- ollamamodelupdater-bin (requires ollama) (optional)
- open-webui (requires ollama) (optional)
- open-webui-no-venv (requires ollama) (optional)
- python-ollama (requires ollama)
- python-ollama-git (requires ollama)
- screenshot_llm (requires ollama) (optional)
- screenshot_llm-git (requires ollama) (optional)
- tlm (requires ollama) (optional)
Latest Comments
dreieck commented on 2024-05-22 08:38 (UTC)
Reactivated Vulkan build by deactivating testing options.
dreieck commented on 2024-05-21 21:12 (UTC) (edited on 2024-05-21 21:12 (UTC) by dreieck)
Disabled Vulkan build since it currently fails.
dreieck commented on 2024-05-21 20:44 (UTC)
Upstream has implemented CUDA and ROCm skip variables 🎉. Implementing them and uploading a fixed PKGBUILD.
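For context, a sketch of what driving such a CPU/Vulkan-only build by hand might look like. The variable names below are assumptions drawn from upstream's llm/generate/gen_linux.sh of that period, not confirmed by this thread; verify them against the ollama tree you are building:

```sh
# Hypothetical CPU/Vulkan-only build using upstream's skip variables.
# Variable names are assumptions; check llm/generate/gen_linux.sh.
export OLLAMA_SKIP_CUDA_GENERATE=1
export OLLAMA_SKIP_ROCM_GENERATE=1
go generate ./...
go build .
```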
nmanarch commented on 2024-05-18 09:41 (UTC)
OK! That is sad. Perhaps they will agree to accept your code changes into their master? Many thanks for all of this.
dreieck commented on 2024-05-18 09:32 (UTC) (edited on 2024-05-18 09:38 (UTC) by dreieck)
Hi @nmanarch,
it seems upstream is moving too fast, so the patch needs to be changed too often.
If I do not find an easier way to avoid building with ROCm or CUDA even when some of their files are installed, I might just give up.
See the upstream feature request to add a "kill switch" that forces ROCm and CUDA off.
nmanarch commented on 2024-05-18 09:17 (UTC) (edited on 2024-05-18 09:37 (UTC) by nmanarch)
Hello @dreieck. I want to try your Ollama Vulkan build, but the patch failed to apply. I tried adding --fuzz 3 and --ignore-whitespace, but it did not help. Thanks for any tips.
```
Submodule path 'llm/llama.cpp': checked out '614d3b914e1c3e02596f869649eb4f1d3b68614d'
Applying patch disable-rocm-cuda.gen_linux.sh.patch ...
patching file llm/generate/gen_linux.sh
Hunk #5 FAILED at 143.
Hunk #6 FAILED at 220.
2 out of 6 hunks FAILED -- saving rejects to file llm/generate/gen_linux.sh.rej
==> ERROR: A failure occurred in prepare(). Aborting...
```
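For reference, the manual attempt nmanarch describes would look roughly like the sketch below; note that --fuzz and --ignore-whitespace only relax context matching, so they cannot rescue hunks whose surrounding code has genuinely changed upstream, which is what the rejects above indicate. The directory layout is hypothetical; the patch file name comes from the log above:

```sh
# Hypothetical manual patch attempt from the ollama source root.
cd ollama
patch -p1 --fuzz=3 --ignore-whitespace \
  < ../disable-rocm-cuda.gen_linux.sh.patch
```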