Package Details: ollama-vulkan-bin 0.18.0-1
| Git Clone URL: | https://aur.archlinux.org/ollama-bin.git (read-only) |
|---|---|
| Package Base: | ollama-bin |
| Description: | Create, run and share large language models (LLMs) with Vulkan |
| Upstream URL: | https://github.com/ollama/ollama |
| Keywords: | ai llm local |
| Licenses: | MIT |
| Conflicts: | ollama-cuda, ollama-cuda12, ollama-cuda13 |
| Provides: | ollama-vulkan |
| Submitter: | Dominiquini |
| Maintainer: | Dominiquini |
| Last Packager: | Dominiquini |
| Votes: | 6 |
| Popularity: | 1.61 |
| First Submitted: | 2025-09-26 06:29 (UTC) |
| Last Updated: | 2026-03-14 04:39 (UTC) |
Dependencies (7)
- gcc-libs (gcc-libs-gitAUR, gccrs-libs-gitAUR, gcc-libs-snapshotAUR)
- glibc (glibc-gitAUR, glibc-eacAUR, glibc-git-native-pgoAUR)
- ollama-binAUR
- vulkan-driver (nvidia-410xx-utilsAUR, nvidia-440xx-utilsAUR, nvidia-430xx-utilsAUR, amdvlk-gitAUR, vulkan-amdgpu-pro-legacyAUR, mesa-wsl2-gitAUR, vulkan-radeon-amd-bc250AUR, vulkan-terakanAUR, nvidia-510xx-utilsAUR, swiftshader-gitAUR, nvidia-utils-teslaAUR, vulkan-nouveau-gitAUR, vulkan-amdgpu-proAUR, amdvlkAUR, amdvlk-binAUR, nvidia-525xx-utilsAUR, mesa-rk35xx-gitAUR, nvidia-575xx-utilsAUR, mesa-gitAUR, mesa-minimal-gitAUR, nvidia-535xx-utilsAUR, mesa-nollvm-gitAUR, vulkan-terakan-gitAUR, amdonly-gaming-vulkan-radeon-gitAUR, nvidia-470xx-utilsAUR, nvidia-390xx-utilsAUR, nvidia-580xx-utilsAUR, nvidia-550xx-utilsAUR, nvidia-utils-betaAUR, nvidia-utils-bsbAUR, mesa-git-nvk-dlssAUR, nvidia-vulkan-utilsAUR, mesa-git-dlss-reflexAUR, mesa-dlss-reflex-gitAUR, nvidia-utils, vulkan-asahi, vulkan-broadcom, vulkan-dzn, vulkan-freedreno, vulkan-gfxstream, vulkan-intel, vulkan-kosmickrisp, vulkan-nouveau, vulkan-panfrost, vulkan-powervr, vulkan-radeon, vulkan-swrast, vulkan-virtio)
- ollama-cuda12 (ollama-cuda12-binAUR) (optional) – NVIDIA GPU Support
- ollama-cuda13 (ollama-cuda13-binAUR) (optional) – NVIDIA GPU Support
- ollama-vulkan (ollama-vulkan-gitAUR, ollama-vulkan-binAUR) (optional) – GPU Support
Required by (4)
- ollama-bin (requires ollama-vulkan) (optional)
- ollama-cuda12-bin (requires ollama-vulkan) (optional)
- ollama-mlx-cuda13-bin (requires ollama-vulkan) (optional)
- ollama-vulkan-bin (requires ollama-vulkan) (optional)
Latest Comments
niflheimmer commented on 2026-01-22 20:42 (UTC)
@Dominiquini, all is good here now. After deleting the ollama directories and reinstalling anew, the symlink is working and the NVIDIA GPU is being detected without modifying PKGBUILD. Thank you!
Dominiquini commented on 2026-01-22 02:18 (UTC)
@ovflowd: Fixed! Thanks for the patch. The installation was broken on my machine and I hadn't noticed! I was just testing and the service seemed to be running without problems... I'll test it more thoroughly before the next updates!
ovflowd commented on 2026-01-22 00:14 (UTC)
Hey @Dominiquini could you also apply my patch?
Dominiquini commented on 2026-01-22 00:11 (UTC)
@niflheimmer: I tried your suggestion, but then the service broke here on my machine. I checked again against the 'ollama' package in the main repos and noticed that my symlink was broken! I fixed this and updated the package here on the AUR. Now I am able to run ollama with the home pointed to '/usr/share/ollama'. Can you test and check whether it's working? Thanks
niflheimmer commented on 2026-01-21 22:19 (UTC) (edited on 2026-01-21 22:25 (UTC) by niflheimmer)
The PKGBUILD creates `/usr/share/ollama` as a symlink to `/var/lib/ollama`. By default, Ollama tries to `mkdir /usr/share/ollama` at service runtime and fails if it is a symlink. This causes `ollama serve` to exit immediately with `Error: mkdir /usr/share/ollama: file exists: ensure path elements are traversable`. Upstream (Ollama's install.sh) installs `/usr/share/ollama` as a real directory. Please remove the symlink and install both paths as real directories. This is an issue for anyone using `/usr/share/ollama` as "HOME".
Relevant upstream issue: https://github.com/ollama/ollama/issues/10839
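The failure mode can be reproduced with a minimal shell sketch in a throwaway directory (this does not touch the real `/usr/share/ollama`):

```shell
# mkdir(2) returns EEXIST when the target path already exists as a symlink,
# even one pointing at a real directory. This mirrors the "file exists"
# error ollama serve reports at startup.
tmp=$(mktemp -d)
mkdir "$tmp/real"
ln -s "$tmp/real" "$tmp/ollama-home"     # what the PKGBUILD does
if ! mkdir "$tmp/ollama-home" 2>/dev/null; then
    echo "mkdir fails on the symlink"
fi
# The fix requested above: ship a real directory instead of a symlink.
rm "$tmp/ollama-home"
mkdir "$tmp/ollama-home"
[ -d "$tmp/ollama-home" ] && echo "real directory is fine"
rm -rf "$tmp"
```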
Patch to fix it:
NB: the patch from @ovflowd is also not in the PKGBUILD yet; it solved the same issue for me of Ollama not detecting any GPU.
Otherwise, these packages are great for any deprecated NVIDIA Maxwell and Pascal (GTX 900 and 1000 series) GPUs stuck on CUDA 12. Sincerely appreciated.
ovflowd commented on 2025-12-18 11:59 (UTC)
Hey there, due to a recent upstream PR in ollama, https://github.com/ollama/ollama/pull/13469, this PKGBUILD is somewhat broken: installing Ollama works, but it cannot detect any GPU due to a mismatch in the .so filenames.
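The symptom can be illustrated with a self-contained sketch; both filenames below are hypothetical stand-ins, not the actual names changed by the upstream PR:

```shell
# Illustration only: the runtime requests a backend library by one filename
# while the package ships it under another, so the backend never loads and
# GPU detection silently falls back to CPU. Both names are made up.
tmp=$(mktemp -d)
touch "$tmp/libggml-vulkan.so"        # name the package installs (hypothetical)
requested="libggml_vulkan.so"         # name the binary looks up (hypothetical)
if [ ! -e "$tmp/$requested" ]; then
    echo "backend missing: no GPU detected"
fi
# One packaging-side workaround is a compatibility symlink under the old name:
ln -s "$tmp/libggml-vulkan.so" "$tmp/$requested"
[ -e "$tmp/$requested" ] && echo "backend resolvable"
rm -rf "$tmp"
```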
This patch fixes it:
omnigenous commented on 2025-09-30 02:02 (UTC)
Thank you so much!
Dominiquini commented on 2025-09-29 20:10 (UTC)
@omnigenous: For Pascal, replace both ollama and ollama-cuda from the main repos with ollama-bin and ollama-cuda12-bin (these packages conflict with each other, so the replacement is automatic). These packages do not require CUDA 13 to be installed on the system, but they also do not conflict with it, so you can keep it if you want!
In short, just run
to have ollama running on the GPU for Pascal cards!
omnigenous commented on 2025-09-29 10:10 (UTC)
Do I delete both cuda and ollama packages and install this for Pascal card?