Package Details: ollama-vulkan-bin 0.18.0-1

Git Clone URL: https://aur.archlinux.org/ollama-bin.git (read-only)
Package Base: ollama-bin
Description: Create, run and share large language models (LLMs) with Vulkan
Upstream URL: https://github.com/ollama/ollama
Keywords: ai llm local
Licenses: MIT
Conflicts: ollama-cuda, ollama-cuda12, ollama-cuda13
Provides: ollama-vulkan
Submitter: Dominiquini
Maintainer: Dominiquini
Last Packager: Dominiquini
Votes: 6
Popularity: 1.61
First Submitted: 2025-09-26 06:29 (UTC)
Last Updated: 2026-03-14 04:39 (UTC)

Latest Comments

niflheimmer commented on 2026-01-22 20:42 (UTC)

@Dominiquini, all is good here now. After deleting the ollama directories and reinstalling from scratch, the symlink works and the NVIDIA GPU is detected without modifying the PKGBUILD. Thank you!

Dominiquini commented on 2026-01-22 02:18 (UTC)

@ovflowd: Fixed! Thanks for the patch. The installation was broken on my machine and I hadn't noticed! I was just testing and the service seemed to be running without problems... I'll test it more thoroughly before the next updates!

ovflowd commented on 2026-01-22 00:14 (UTC)

Hey @Dominiquini, could you also apply my patch?

Dominiquini commented on 2026-01-22 00:11 (UTC)

@niflheimmer: I tried your suggestion, but then the service broke on my machine. I checked again against the 'ollama' package in the main repos and noticed that my symlink was broken! I fixed this and updated the package here on the AUR. Now I'm able to run ollama with its home pointed at '/usr/share/ollama'. Can you test and check if it's working? Thanks!

niflheimmer commented on 2026-01-21 22:19 (UTC) (edited on 2026-01-21 22:25 (UTC) by niflheimmer)

The PKGBUILD creates /usr/share/ollama as a symlink to /var/lib/ollama. By default, Ollama tries to mkdir /usr/share/ollama at service runtime and fails if it's a symlink. This causes ollama serve to exit immediately with Error: mkdir /usr/share/ollama: file exists: ensure path elements are traversable. Upstream (Ollama's install.sh) installs /usr/share/ollama as a real directory. Please remove the symlink and install both paths as real directories. This is an issue for anyone who is using /usr/share/ollama as "HOME".

Relevant upstream issue: https://github.com/ollama/ollama/issues/10839
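The failure mode is easy to reproduce in isolation. A minimal sketch using scratch paths (hypothetical stand-ins, not the real /var/lib/ollama and /usr/share/ollama):

```shell
#!/bin/sh
# Reproduce the failure in a scratch directory.
tmp=$(mktemp -d)
mkdir "$tmp/data"              # stands in for /var/lib/ollama
ln -s "$tmp/data" "$tmp/home"  # stands in for the /usr/share/ollama symlink

# A plain mkdir on a path that already exists (even as a symlink) fails
# with EEXIST, analogous to Ollama's mkdir at service startup.
if mkdir "$tmp/home" 2>/dev/null; then
  echo "mkdir succeeded"
else
  echo "mkdir failed: file exists"
fi
rm -rf "$tmp"
```

This prints "mkdir failed: file exists": mkdir refuses any existing path, which is why the symlinked layout breaks the service even though the link resolves to a valid directory.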

Patch to fix it:

diff -u a/PKGBUILD b/PKGBUILD
--- a/PKGBUILD
+++ b/PKGBUILD
@@ -70,7 +70,7 @@

     install -dm755 "${pkgdir}/var/share"
     install -dm755 "${pkgdir}/var/lib/ollama"
-    ln -s "${pkgdir}/var/lib/ollama" "${pkgdir}/usr/share/ollama"
+    install -dm755 "${pkgdir}/usr/share/ollama"
 }

 package_ollama-cuda12-bin() {
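After installing a build with this fix, one way to sanity-check the result is to confirm the path is a real directory rather than a symlink. A sketch on a scratch path (substitute /usr/share/ollama on a real system):

```shell
#!/bin/sh
# Sketch: install -dm755 creates a real, mode-755 directory, which is
# what the patched PKGBUILD produces instead of the symlink.
dir=$(mktemp -d)/ollama
install -dm755 "$dir"

# -d: exists and is a directory; -L: is a symlink (must be false here).
if [ -d "$dir" ] && [ ! -L "$dir" ]; then
  echo "real directory"
fi
```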

NB: the patch from @ovflowd is also not in the PKGBUILD yet, and it solved the same issue I had with Ollama not detecting any GPU.

Otherwise, these packages are great for any deprecated NVIDIA Maxwell and Pascal (GTX 900 and 1000 series) GPUs stuck on CUDA 12. Sincerely appreciated.

ovflowd commented on 2025-12-18 11:59 (UTC)

Hey there, due to a recent upstream PR in ollama (https://github.com/ollama/ollama/pull/13469), this PKGBUILD is somewhat broken: installing Ollama works, but it cannot detect any GPU due to a mismatch in the .so filenames.

This patch fixes it:

diff --git a/PKGBUILD b/PKGBUILD
index 972ca24..58e66d6 100644
--- a/PKGBUILD
+++ b/PKGBUILD
@@ -38,8 +38,12 @@ package_ollama-bin() {
     cd "${srcdir}/" || exit

     install -Dm755 "./bin/ollama" "${pkgdir}/usr/bin/ollama"
+    install -dm755 "${pkgdir}/usr/lib/ollama"

-    for lib in 'libggml-base.so' \
+    cp -P "./lib/ollama/libggml-base.so"* "${pkgdir}/usr/lib/ollama/"
+    chmod 755 "${pkgdir}/usr/lib/ollama/libggml-base.so."*
+
+    for lib in \
         'libggml-cpu-alderlake.so' \
         'libggml-cpu-haswell.so' \
         'libggml-cpu-icelake.so' \
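The `cp -P` in the patch matters because shared libraries typically ship as a versioned file plus an unversioned symlink. A sketch with hypothetical filenames (the real package may version the library differently):

```shell
#!/bin/sh
# Sketch: cp -P copies symlinks as symlinks instead of following them,
# so both the versioned file and the unversioned .so name survive packaging.
tmp=$(mktemp -d)
mkdir "$tmp/src" "$tmp/dst"
echo 'fake library' > "$tmp/src/libggml-base.so.1"
ln -s libggml-base.so.1 "$tmp/src/libggml-base.so"

cp -P "$tmp/src/libggml-base.so"* "$tmp/dst/"

# The destination keeps the relative symlink intact.
[ -L "$tmp/dst/libggml-base.so" ] && echo "symlink preserved"
rm -rf "$tmp"
```

With a plain `cp` (no -P), the symlink would be dereferenced into a second regular file, and a loader looking for the versioned name alongside the bare name could end up with a mismatched layout.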

omnigenous commented on 2025-09-30 02:02 (UTC)

Thank you so much!

Dominiquini commented on 2025-09-29 20:10 (UTC)

@omnigenous: For Pascal, replace both ollama and ollama-cuda from the main repos with ollama-bin and ollama-cuda12-bin (these packages conflict with each other, so the replacement is automatic). These packages do not depend on CUDA 13 being installed on the system, but they also do not conflict with it, so you can keep it if you want!

In short, just run:

yay -S ollama-bin ollama-cuda12-bin

to have ollama running on the GPU on Pascal cards!

omnigenous commented on 2025-09-29 10:10 (UTC)

Do I delete both cuda and ollama packages and install this for Pascal card?