| Git Clone URL: | https://aur.archlinux.org/llama.cpp-sycl-f16.git (read-only) |
|---|---|
| Package Base: | llama.cpp-sycl-f16 |
| Description: | Port of Facebook's LLaMA model in C/C++ (with Intel SYCL GPU optimizations and F16) |
| Upstream URL: | https://github.com/ggerganov/llama.cpp |
| Licenses: | MIT |
| Conflicts: | llama.cpp |
| Provides: | llama.cpp |
| Submitter: | txtsd |
| Maintainer: | txtsd |
| Last Packager: | txtsd |
| Votes: | 2 |
| Popularity: | 0.53 |
| First Submitted: | 2024-10-26 18:11 (UTC) |
| Last Updated: | 2025-04-07 22:58 (UTC) |
I have the same problem with 'syclcompat/math.hpp' file not found.
The latest intel-oneapi-basekit-2024.1.0.596-3 is installed.
@pepijndevos Thanks for reporting! I'm trying to figure out if it's an upstream issue or if it's because of the outdated intel-oneapi-basekit package.
I'm getting an error:

```
/home/pepijn/aur/llama.cpp-sycl-f16/src/llama.cpp/ggml/src/ggml-sycl/../ggml-sycl/dpct/helper.hpp:18:10: fatal error: 'syclcompat/math.hpp' file not found
   18 | #include <syclcompat/math.hpp>
      |          ^~~~~~~~~~~~~~~~~~~~~
```
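One quick way to check whether the installed basekit actually ships the missing header is to query the package file list and search the oneAPI install prefix. This is a hedged sketch: the package name and the `/opt/intel/oneapi` prefix are the Arch defaults and may differ on your system.

```shell
# Does the installed basekit package own any syclcompat files?
pacman -Ql intel-oneapi-basekit | grep syclcompat

# Search the oneAPI install tree directly for the header the build wants
find /opt/intel/oneapi -name 'math.hpp' -path '*syclcompat*' 2>/dev/null
```

If neither command prints a `syclcompat/math.hpp` path, the installed basekit predates the `syclcompat` headers and the build failure is a packaging/version mismatch rather than an upstream llama.cpp bug.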
Pinned Comments
txtsd commented on 2024-10-26 20:15 (UTC) (edited on 2024-12-06 14:15 (UTC) by txtsd)
Alternate versions
llama.cpp
llama.cpp-vulkan
llama.cpp-sycl-fp16
llama.cpp-sycl-fp32
llama.cpp-cuda
llama.cpp-cuda-f16
llama.cpp-hip