Package Details: ollama-nogpu-git 0.5.5+r3779+g6982e9cc9-1
| Git Clone URL: | https://aur.archlinux.org/ollama-nogpu-git.git (read-only) |
|---|---|
| Package Base: | ollama-nogpu-git |
| Description: | Create, run and share large language models (LLMs) |
| Upstream URL: | https://github.com/ollama/ollama |
| Licenses: | MIT |
| Conflicts: | ollama |
| Provides: | ollama |
| Submitter: | dreieck |
| Maintainer: | None |
| Last Packager: | envolution |
| Votes: | 5 |
| Popularity: | 0.153731 |
| First Submitted: | 2024-04-17 15:09 (UTC) |
| Last Updated: | 2025-01-14 12:40 (UTC) |
Dependencies (3)
- cmake (cmake-git^AUR) (make)
- git (git-git^AUR, git-gl^AUR) (make)
- go (go-git^AUR, gcc-go-git^AUR, gcc-go-snapshot^AUR, gcc-go) (make)
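For reference, a minimal sketch of building and installing this package from the clone URL above, assuming the usual AUR/makepkg workflow (hypothetical commands, not part of the package page):

```bash
# Standard AUR build; assumes base-devel plus the makedepends listed above.
# Installing will replace an existing 'ollama' package, since this package
# conflicts with and provides ollama.
git clone https://aur.archlinux.org/ollama-nogpu-git.git
cd ollama-nogpu-git
makepkg -si
```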
Required by (29)
- ai-writer (requires ollama)
- alpaca-ai (requires ollama)
- alpaca-git (requires ollama) (optional)
- alpaka-git (requires ollama)
- anythingllm-desktop-bin (requires ollama)
- calt-git (requires ollama)
- chatd (requires ollama)
- chatd-bin (requires ollama)
- gollama (requires ollama) (optional)
- gollama-git (requires ollama) (optional)
- hoarder (requires ollama) (optional)
- hollama-bin (requires ollama)
- litellm (requires ollama) (optional)
- litellm-ollama (requires ollama)
- llocal-bin (requires ollama)
- lobe-chat (requires ollama) (optional)
- lumen (requires ollama) (optional)
- maestro (requires ollama) (optional)
- maestro-git (requires ollama) (optional)
- ollama-chat-desktop-git (requires ollama)
- (9 more not listed)
Latest Comments
envolution commented on 2024-12-31 18:45 (UTC)
@lavilao don't feel bad; it seems like a problem that they don't at least summarise the (major) detected libraries during the build process. Happy new year
lavilao commented on 2024-12-31 05:22 (UTC)
...well, I feel stupid. The reason it does not show OpenBLAS support is that they (ollama and llama.cpp) don't show it anymore, even if you compile with support for it. I compiled llama.cpp locally, and while the logs show that OpenBLAS support was added, after the compilation there is no longer an indicator showing that support. Thanks @envolution for your patience, and happy new year.
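For anyone wanting to reproduce that local check, here is a hedged sketch of building llama.cpp standalone with the OpenBLAS flags quoted elsewhere in this thread; flag and binary names change between llama.cpp versions, so verify them against the current CMake options:

```bash
# Illustrative only: build llama.cpp with OpenBLAS, then inspect the linked
# libraries directly rather than relying on a runtime banner.
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build -DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS
cmake --build build -j"$(nproc)"
# Binary name varies by version; non-empty output here means OpenBLAS was linked.
ldd build/bin/llama-cli | grep -iE 'blas|lapack'
```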
envolution commented on 2024-12-25 08:58 (UTC)
@lavilao yes, the build scripts have changed considerably since the last release; they are now geared more toward host capability detection rather than requiring specific compilation options.
Speaking of which, https://github.com/ollama/ollama/blob/main/docs/development.md mentions nothing about forcing llama to use a specific BLAS provider. If you can find something yourself, I can try to add it, but I'm not really prepared to dig through their source code at the moment.
lavilao commented on 2024-12-24 16:50 (UTC) (edited on 2024-12-24 16:51 (UTC) by lavilao)
@envolution The compilation logs only show
GOARCH=amd64 go build -buildmode=pie "-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=0.5.4-9-gffe3549\" " -trimpath -tags "avx,avx2" -o llama/build/linux-amd64/runners/cpu_avx2/ollama_llama_server ./cmd/runner
On the earlier PKGBUILD there was this flag: export OLLAMA_CUSTOM_CPU_DEFS="${_cmake_options_common}"
but now even adding: export OLLAMA_CUSTOM_CPU_DEFS='-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=openblas -DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS -DBLAS_LIBRARIES="/usr/lib/libopenblas.so" -DLAPACK_LIBRARIES="/usr/lib/libopenblas.so -DLLAMA_LTO=ON'
does not work (I have also added openblas to makedepends to ensure it is installed).
envolution commented on 2024-12-24 04:05 (UTC) (edited on 2024-12-24 04:06 (UTC) by envolution)
@lavilao I suspect it should be trying to detect openblas/clblas/hipblas; it's not well documented, last I checked, so I'm honestly not sure what optional depends should be added to this at the moment. Near the beginning of the build logs it should show exactly what options it's being built with...
Can you confirm you have extra/openblas installed and that it's not creating an openblas-linked runner? Merry Christmas to you as well, brother
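A hedged sketch of that check, using the runner path from the build log quoted above (the installed runner lives elsewhere than the build tree, so adjust the path as needed):

```bash
# Hypothetical verification, assuming the build-tree path from the quoted log.
pacman -Qi openblas                                   # is extra/openblas installed?
ldd llama/build/linux-amd64/runners/cpu_avx2/ollama_llama_server \
    | grep -iE 'blas|lapack'                          # empty output => no BLAS-linked runner
```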
lavilao commented on 2024-12-24 03:44 (UTC)
@envolution, thank you for the new package; it now compiles with the correct flags on GitHub. Do you know how I can add OpenBLAS support to it? OpenBLAS works fine without AVX. Merry Christmas!
envolution commented on 2024-12-16 02:19 (UTC)
@lavilao you can try this new build, it should work without avx
nmanarch commented on 2024-10-29 18:30 (UTC)
For @lavilao: I just saw your question about running without AVX; I hope you have had success? If not, you have to apply this: https://github.com/ollama/ollama/issues/2187#issuecomment-2262876198 That is, download the ollama Vulkan AUR version, change the line of code, build a new package with that change, and install the new build.
nmanarch commented on 2024-10-29 18:20 (UTC)
Hi @dreieck! That is sad! This is the only package I found that runs ollama with Vulkan. So perhaps I can take it over, but could you provide help if problems occur with future upgrades that become too hard to apply?
dreieck commented on 2024-09-03 11:29 (UTC)
Does anyone want to take over?
I've noticed that I don't use this software at all, so I am disowning it.
Regards!