I'm not sure the SYCL_EXTERNAL issue is upstream's problem. I think it might be caused by the environment variables that makepkg sets when it builds packages. Still trying to figure it out...
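The makepkg hypothesis can be probed by comparing a build that inherits the flags from /etc/makepkg.conf with one where they are cleared. The excerpt below is a sketch; the flag values are illustrative Arch defaults, not taken from this system:

```shell
# Illustrative /etc/makepkg.conf excerpt (values are assumptions; check your
# own file). makepkg exports these into every build, and host-oriented C/C++
# flags like these can leak into the SYCL device compilation pass.
CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe -fno-plt"
CXXFLAGS="$CFLAGS"
LTOFLAGS="-flto=auto"

# One way to test the hypothesis: build once normally, then once with a
# copied config whose flags are blanked, and see whether the error changes:
#   cp /etc/makepkg.conf ./makepkg.conf   # then empty CFLAGS/CXXFLAGS/LTOFLAGS
#   makepkg --config ./makepkg.conf -sf
```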
Package Details: llama.cpp-sycl-f16 b4384-1
Git Clone URL: https://aur.archlinux.org/llama.cpp-sycl-f16.git (read-only)
Package Base: llama.cpp-sycl-f16
Description: Port of Facebook's LLaMA model in C/C++ (with Intel SYCL GPU optimizations and F16)
Upstream URL: https://github.com/ggerganov/llama.cpp
Licenses: MIT
Conflicts: llama.cpp
Provides: llama.cpp
Submitter: txtsd
Maintainer: txtsd
Last Packager: txtsd
Votes: 1
Popularity: 0.31
First Submitted: 2024-10-26 18:11 (UTC)
Last Updated: 2024-12-23 13:28 (UTC)
Dependencies (11)
- curl (AUR alternatives: curl-quiche-git, curl-http3-ngtcp2, curl-git, curl-c-ares)
- gcc-libs (AUR alternatives: gcc-libs-git, gccrs-libs-git, gcc11-libs, gcc-libs-snapshot)
- glibc (AUR alternatives: glibc-git, glibc-linux4, glibc-eac, glibc-eac-bin, glibc-eac-roco)
- intel-oneapi-basekit (AUR alternative: intel-oneapi-basekit-2025)
- python (AUR alternatives: python37, python311, python310)
- python-numpy (AUR alternatives: python-numpy-flame, python-numpy-git, python-numpy-mkl-bin, python-numpy-mkl-tbb, python-numpy-mkl, python-numpy1)
- python-sentencepiece (AUR; alternative: python-sentencepiece-git)
- cmake (AUR alternative: cmake-git) (make)
- git (AUR alternatives: git-git, git-gl) (make)
- openmp (make)
- procps-ng (AUR alternatives: procps-ng-git, busybox-coreutils) (make)
Required by (0)
Sources (4)
Latest Comments
greyltc commented on 2024-12-21 17:19 (UTC)
txtsd commented on 2024-12-21 07:42 (UTC)
Thanks @greyltc! Could you post an upstream issue about SYCL_EXTERNAL?
greyltc commented on 2024-12-18 18:44 (UTC)
@txtsd, you can depend on https://aur.archlinux.org/packages/intel-oneapi-basekit-2025 here if you want the latest intel-oneapi-basekit. That solves the "'syclcompat/math.hpp' file not found" issue, but even with that solved the package still won't build.
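In PKGBUILD terms, greyltc's suggestion is a one-line dependency swap. A hypothetical sketch, not the maintainer's actual diff (the rest of the depends array is taken from the dependency list above):

```shell
# Hypothetical PKGBUILD change (sketch): replace the outdated
# intel-oneapi-basekit dependency with the AUR 2025 package linked above.
depends=(curl gcc-libs glibc intel-oneapi-basekit-2025
         python python-numpy python-sentencepiece)
```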
ioctl commented on 2024-11-30 18:21 (UTC)
While this package cannot be built, one may use Intel's ipex-llm project, which is compatible with llama.cpp and works very fast with Arch Linux's intel-oneapi-basekit 2024.1.
Here is the official guide: https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/llama_cpp_quickstart.md
pepijndevos commented on 2024-11-22 07:32 (UTC)
I was told they won't update it until after the Python 3.13 migration.
I have updated it locally and am now getting:
/usr/lib64/gcc/x86_64-pc-linux-gnu/14.2.1/../../../../include/c++/14.2.1/array:217:2: error: SYCL kernel cannot call an undefined function without SYCL_EXTERNAL attribute
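The diagnostic reflects SYCL's split compilation model: device code may only call functions the device compiler can see, and a function defined in another translation unit must carry the SYCL_EXTERNAL attribute. A minimal sketch of the rule, assuming a SYCL compiler such as icpx (`helper` is a made-up function for illustration, not from llama.cpp):

```cpp
#include <sycl/sycl.hpp>

// Defined in some other translation unit. Without SYCL_EXTERNAL here, the
// device compiler rejects the call below with the same diagnostic as above.
SYCL_EXTERNAL int helper(int x);

int main() {
    sycl::queue q;
    q.single_task([] { (void)helper(1); }).wait();
}
```

Notably, the failing call in this build sits inside libstdc++'s <array> header, which may indicate host-oriented compile flags or headers leaking into the device pass rather than a missing attribute in llama.cpp itself.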
txtsd commented on 2024-11-22 06:44 (UTC)
It's the intel-oneapi-basekit package. It's been outdated for a long time now. I'll send a patch and see if they'll accept it.
ioctl commented on 2024-11-22 06:37 (UTC)
I have the same problem with 'syclcompat/math.hpp' file not found.
The latest intel-oneapi-basekit-2024.1.0.596-3 is installed.
txtsd commented on 2024-11-22 05:34 (UTC)
@pepijndevos Thanks for reporting! I'm trying to figure out if it's an upstream issue or if it's because of the outdated intel-oneapi-basekit package.
pepijndevos commented on 2024-11-21 17:20 (UTC)
I'm getting an error:
/home/pepijn/aur/llama.cpp-sycl-f16/src/llama.cpp/ggml/src/ggml-sycl/../ggml-sycl/dpct/helper.hpp:18:10: fatal error: 'syclcompat/math.hpp' file not found
18 | #include <syclcompat/math.hpp>
| ^~~~~~~~~~~~~~~~~~~~~
Pinned Comments
txtsd commented on 2024-10-26 20:15 (UTC) (edited on 2024-12-06 14:15 (UTC) by txtsd)
Alternate versions
llama.cpp
llama.cpp-vulkan
llama.cpp-sycl-fp16
llama.cpp-sycl-fp32
llama.cpp-cuda
llama.cpp-cuda-f16
llama.cpp-hip