Package Details: ollama-cuda13-bin 0.12.10-1

Git Clone URL: https://aur.archlinux.org/ollama-bin.git (read-only)
Package Base: ollama-bin
Description: Create, run and share large language models (LLMs) with CUDA 13
Upstream URL: https://github.com/ollama/ollama
Keywords: ai llms local
Licenses: MIT
Conflicts: ollama-cuda
Provides: ollama-cuda
Submitter: Dominiquini
Maintainer: Dominiquini
Last Packager: Dominiquini
Votes: 3
Popularity: 1.32
First Submitted: 2025-09-26 06:29 (UTC)
Last Updated: 2025-11-06 23:59 (UTC)
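
The Conflicts/Provides pair above makes pacman treat this package as a drop-in replacement for ollama-cuda: anything that depends on ollama-cuda is satisfied, and installing this package prompts pacman to remove the repo one. A minimal PKGBUILD sketch of how such relations are declared (only the names and version come from the metadata above; everything else is illustrative):

pkgname=ollama-cuda13-bin
pkgver=0.12.10
pkgrel=1
provides=('ollama-cuda')   # satisfies dependencies on ollama-cuda
conflicts=('ollama-cuda')  # cannot be installed alongside ollama-cuda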

Latest Comments

omnigenous commented on 2025-09-30 02:02 (UTC)

Thank you so much!

Dominiquini commented on 2025-09-29 20:10 (UTC)

@omnigenous For Pascal, replace both ollama and ollama-cuda from the official repos with ollama-bin and ollama-cuda12-bin (the packages conflict with each other, so the replacement is automatic). These packages do not depend on CUDA 13 being installed on the system, but they do not conflict with it either, so you can keep them installed if you want!

In short, just run

yay -S ollama-bin ollama-cuda12-bin

to have Ollama running on the GPU for Pascal cards!
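
To confirm the GPU is actually in use afterwards, a minimal check (assuming an NVIDIA card with nvidia-smi available; "llama3" below is just an example of a model you have pulled):

ollama run llama3 "hello"   # load a model and send it a prompt
nvidia-smi                  # the ollama process should appear in the GPU process list

ollama ps also reports whether a loaded model is running on the GPU or the CPU.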

omnigenous commented on 2025-09-29 10:10 (UTC)

Do I delete both the cuda and ollama packages and install this one for a Pascal card?