Package Details: llama.cpp-vulkan-git b5123.r1.bc091a4dc-1
| Git Clone URL: | https://aur.archlinux.org/llama.cpp-vulkan-git.git (read-only; see the build example below this table) |
|---|---|
| Package Base: | llama.cpp-vulkan-git |
| Description: | Port of Facebook's LLaMA model in C/C++ (with Vulkan GPU optimizations) |
| Upstream URL: | https://github.com/ggerganov/llama.cpp |
| Licenses: | MIT |
| Conflicts: | llama.cpp, llama.cpp-vulkan |
| Submitter: | robertfoster |
| Maintainer: | robertfoster |
| Last Packager: | robertfoster |
| Votes: | 3 |
| Popularity: | 0.43 |
| First Submitted: | 2024-11-15 20:39 (UTC) |
| Last Updated: | 2025-04-12 16:39 (UTC) |
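The clone URL above can be used to build the package manually with makepkg. A minimal sketch, assuming the base-devel group and the dependencies listed below are installed; the AUR dependency ggml-vulkan-git must be built the same way first, or handled by an AUR helper such as paru:

```sh
# Fetch the PKGBUILD from the read-only AUR repository
git clone https://aur.archlinux.org/llama.cpp-vulkan-git.git
cd llama.cpp-vulkan-git

# Inspect the PKGBUILD, then build and install:
#   -s installs missing repo dependencies, -i installs the built package
makepkg -si
```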
Dependencies (6)
- ggml-vulkan-git
- cmake (cmake3 (AUR), cmake-git (AUR)) (make)
- git (git-git (AUR), git-gl (AUR)) (make)
- python-gguf (AUR) (optional) – convert_hf_to_gguf.py python script
- python-numpy (python-numpy-git (AUR), python-numpy1 (AUR), python-numpy-mkl-bin (AUR), python-numpy-mkl (AUR), python-numpy-mkl-tbb (AUR)) (optional) – convert_hf_to_gguf.py python script
- python-pytorch (python-pytorch-cxx11abi (AUR), python-pytorch-cxx11abi-opt (AUR), python-pytorch-cxx11abi-cuda (AUR), python-pytorch-cxx11abi-opt-cuda (AUR), python-pytorch-cxx11abi-rocm (AUR), python-pytorch-cxx11abi-opt-rocm (AUR), python-pytorch-cuda12.9 (AUR), python-pytorch-opt-cuda12.9 (AUR), python-pytorch-cuda, python-pytorch-opt, python-pytorch-opt-cuda, python-pytorch-opt-rocm, python-pytorch-rocm) (optional) – convert_hf_to_gguf.py python script (see the conversion example below)
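The optional Python dependencies above are needed only for the convert_hf_to_gguf.py script, which converts Hugging Face models to GGUF for use with llama.cpp. A rough usage sketch; the model directory, output file name, and output type below are placeholders, not values taken from this package:

```sh
# Convert a Hugging Face checkpoint directory into a GGUF file that
# llama.cpp can load; --outtype f16 keeps half-precision weights.
python convert_hf_to_gguf.py /path/to/hf-model-dir \
    --outfile my-model-f16.gguf \
    --outtype f16
```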