Package Details: llama.cpp-sycl-f16-git b5123.r1.bc091a4dc-1
Git Clone URL: https://aur.archlinux.org/llama.cpp-sycl-f16-git.git (read-only)
Package Base: llama.cpp-sycl-f16-git
Description: Port of Facebook's LLaMA model in C/C++ (with Intel SYCL GPU optimizations and F16)
Upstream URL: https://github.com/ggerganov/llama.cpp
Licenses: MIT
Conflicts: llama.cpp, llama.cpp-sycl-f16
Submitter: robertfoster
Maintainer: robertfoster
Last Packager: robertfoster
Votes: 0
Popularity: 0.000000
First Submitted: 2024-11-15 20:36 (UTC)
Last Updated: 2025-04-12 16:39 (UTC)
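To build and install the package by hand, the usual clone-and-makepkg workflow applies (a minimal sketch; an AUR helper such as paru or yay works equally well, and makepkg -s pulls in missing build dependencies):

    git clone https://aur.archlinux.org/llama.cpp-sycl-f16-git.git
    cd llama.cpp-sycl-f16-git
    makepkg -si    # -s installs missing build dependencies, -i installs the built package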
Dependencies (6)
- ggml-sycl-f16-git (AUR)
- cmake (make) – AUR alternatives: cmake-git, cmake3
- git (make) – AUR alternatives: git-git, git-gl
- python-gguf (AUR) (optional) – for the convert_hf_to_gguf.py script (see the usage sketch after this list)
- python-numpy (optional) – for the convert_hf_to_gguf.py script; AUR alternatives: python-numpy-git, python-numpy1, python-numpy-mkl-bin, python-numpy-mkl-tbb, python-numpy-mkl
- python-pytorch (optional) – for the convert_hf_to_gguf.py script; alternatives: python-pytorch-cxx11abi (AUR), python-pytorch-cxx11abi-opt (AUR), python-pytorch-cxx11abi-cuda (AUR), python-pytorch-cxx11abi-opt-cuda (AUR), python-pytorch-cxx11abi-rocm (AUR), python-pytorch-cxx11abi-opt-rocm (AUR), python-pytorch-cuda, python-pytorch-opt, python-pytorch-opt-cuda, python-pytorch-opt-rocm, python-pytorch-rocm
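The optional Python dependencies above are only needed for model conversion. As a rough usage sketch (the model directory and output file names are placeholders), a Hugging Face checkpoint is converted to GGUF with the convert_hf_to_gguf.py script shipped in the llama.cpp sources:

    # /path/to/hf-model and model-f16.gguf are placeholder names
    python convert_hf_to_gguf.py /path/to/hf-model \
        --outfile model-f16.gguf \
        --outtype f16    # keep weights in F16, matching this package's build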
Latest Comments
ioctl commented on 2025-03-06 09:02 (UTC) (edited on 2025-03-06 09:18 (UTC) by ioctl)
The build is fine, but instead of using SYCL this app seems to use the CPU only: benchmark performance is the same as the CPU-only version, and the GPU is not used according to the gputop utility.
Here is the official Dockerfile, which can be used as a reference: https://github.com/ggml-org/llama.cpp/blob/master/.devops/intel.Dockerfile
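For comparison, the linked intel.Dockerfile configures the upstream build with the SYCL backend and the Intel compilers, roughly as in the sketch below; whether this PKGBUILD passes the same options would need to be checked. The runtime commands show one way to verify that the GPU is actually used (tool and binary names follow upstream llama.cpp SYCL documentation and are assumptions here, not taken from this package):

    # Build configuration as in the upstream Dockerfile (assumption, not from this PKGBUILD):
    #   cmake -B build -DGGML_SYCL=ON -DGGML_SYCL_F16=ON \
    #         -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx

    # model.gguf below is a placeholder path
    source /opt/intel/oneapi/setvars.sh      # load the oneAPI runtime environment
    sycl-ls                                  # the Intel GPU should be listed as a SYCL device
    llama-cli -m model.gguf -ngl 99 -p "hi"  # offload layers to the GPU and watch gputop/intel_gpu_top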
pepijndevos commented on 2024-11-25 10:51 (UTC)
I'm getting all sorts of linking errors trying to build this; they seem to come from functions in the common namespace.