Package Details: llama.cpp-sycl-f16 b5415-1
Git Clone URL: https://aur.archlinux.org/llama.cpp-sycl-f16.git (read-only)
Package Base: llama.cpp-sycl-f16
Description: Port of Facebook's LLaMA model in C/C++ (with Intel SYCL GPU optimizations and F16)
Upstream URL: https://github.com/ggerganov/llama.cpp
Licenses: MIT
Conflicts: llama.cpp
Provides: llama.cpp
Submitter: txtsd
Maintainer: txtsd
Last Packager: txtsd
Votes: 2
Popularity: 0.24
First Submitted: 2024-10-26 18:11 (UTC)
Last Updated: 2025-05-17 23:27 (UTC)
Dependencies (12)
- curl (curl-git [AUR], curl-c-ares [AUR])
- gcc-libs (gcc-libs-git [AUR], gccrs-libs-git [AUR], gcc-libs-snapshot [AUR])
- glibc (glibc-git [AUR], glibc-linux4 [AUR], glibc-eac [AUR])
- intel-oneapi-basekit
- python (python37 [AUR], python311 [AUR], python310 [AUR])
- python-numpy (python-numpy-git [AUR], python-numpy1 [AUR], python-numpy-mkl-bin [AUR], python-numpy-mkl-tbb [AUR], python-numpy-mkl [AUR])
- python-sentencepiece [AUR] (python-sentencepiece-git [AUR])
- cmake (cmake3 [AUR], cmake-git [AUR]) (make)
- git (git-git [AUR], git-gl [AUR]) (make)
- openmp (make)
- procps-ng (procps-ng-git [AUR]) (make)
- python-pytorch (python-pytorch-cxx11abi [AUR], python-pytorch-cxx11abi-opt [AUR], python-pytorch-cxx11abi-cuda [AUR], python-pytorch-cxx11abi-opt-cuda [AUR], python-pytorch-cxx11abi-opt-rocm [AUR], python-pytorch-cxx11abi-rocm [AUR], python-pytorch-cuda, python-pytorch-opt, python-pytorch-opt-cuda, python-pytorch-opt-rocm, python-pytorch-rocm) (optional)
Required by (0)
Sources (4)
bionade24 commented on 2025-05-14 14:29 (UTC)
@txtsd: Could you please implement the build fixes provided by @heyrict, or disown this pkg? It has not been building in a clean build environment for months straight. Offering this PKGBUILD on the AUR is just misleading.
heyrict commented on 2025-03-16 02:48 (UTC) (edited on 2025-03-20 07:47 (UTC) by heyrict)
I am able to build the package with manual modifications. Steps posted below:
- Fetch resources and extract the files:
makepkg -o
- Manually build the package:
cd src
source /opt/intel/oneapi/setvars.sh
_cmake_options=(  # note: no 'local' here, since we are not inside a function
-B build
-S llama.cpp
#-DCMAKE_BUILD_TYPE=None
-DCMAKE_INSTALL_PREFIX='/usr'
-DGGML_ALL_WARNINGS=OFF
-DGGML_ALL_WARNINGS_3RD_PARTY=OFF
#-DBUILD_SHARED_LIBS=OFF
#-DGGML_STATIC=ON
#-DGGML_LTO=ON
-DGGML_RPC=ON
-DLLAMA_CURL=ON
-DGGML_BLAS=ON
-DCMAKE_C_COMPILER=icx
-DCMAKE_CXX_COMPILER=icpx
-DGGML_SYCL=ON
-DGGML_SYCL_F16=ON # Comment this out for building F32 version
-Wno-dev
)
cmake "${_cmake_options[@]}"
cmake --build build --config Release -j -v
- Patch the PKGBUILD, as there are no lib*.a in a non-static build.
diff --git a/PKGBUILD b/PKGBUILD
index 7999ec8..51cc1bc 100644
--- a/PKGBUILD
+++ b/PKGBUILD
@@ -73,7 +73,7 @@ build() {
package() {
DESTDIR="${pkgdir}" cmake --install build
rm "${pkgdir}/usr/include/"ggml*
- rm "${pkgdir}/usr/lib/"lib*.a
+ #rm "${pkgdir}/usr/lib/"lib*.a
install -Dm644 "${_pkgname}/LICENSE" "${pkgdir}/usr/share/licenses/${pkgname}/LICENSE"
- Package the build and install:
makepkg -Ri
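Once the manual build above finishes, a quick smoke test can help confirm the SYCL backend before (or after) installing. This is only a sketch: it assumes the oneAPI environment from setvars.sh is still sourced and that the binaries carry the current upstream names (llama-cli).
# still inside src/, with the oneAPI environment sourced
sycl-ls                            # list the SYCL devices the oneAPI runtime can see
./build/bin/llama-cli --version    # confirm the freshly built binary runs and links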
Notes:
- BUILD_SHARED_LIBS and GGML_STATIC were commented out because of the inference issue. The built package will have libggml.so shared libraries which may have conflicts with other packages.
- Switching GGML_LTO off will fix the build error "libggml-sycl.a: error adding symbols: archive has no index; run ranlib to add one". I think it is some limitation of icx/icpx.
- I've noticed some differences between build/compile_commands.json in the manual build and the pacman build, with the same build flags. Probably related to the SYCL_EXTERNAL issue.
An example of differences in compile_commands.json
--- /tmp/pacman.json 2025-03-16 10:41:15.426354158 +0800
+++ /tmp/manual.json 2025-03-16 10:41:06.154835418 +0800
@@ -1,31 +1,47 @@
{
"directory": "/home/heyrict/.cache/paru/clone/llama.cpp-sycl-f32/src/build/ggml/src",
"command": "/opt/intel/oneapi/compiler/2025.0/bin/icx
-DGGML_BUILD
-DGGML_SCHED_MAX_COPIES=4
-DGGML_SHARED
-D_GNU_SOURCE
-D_XOPEN_SOURCE=600
-Dggml_base_EXPORTS
-I/home/heyrict/.cache/paru/clone/llama.cpp-sycl-f32/src/llama.cpp/ggml/src/.
-I/home/heyrict/.cache/paru/clone/llama.cpp-sycl-f32/src/llama.cpp/ggml/src/../include
+ -march=x86-64
+ -mtune=generic
+ -O2
+ -pipe
+ -fno-plt
+ -fexceptions
+ -Wp,-D_FORTIFY_SOURCE=3
+ -Wformat
+ -Werror=format-security
+ -fstack-clash-protection
+ -fcf-protection
+ -fno-omit-frame-pointer
+ -mno-omit-leaf-frame-pointer
+ -g
+ -ffile-prefix-map=/home/heyrict/.cache/paru/clone/llama.cpp-sycl-f32/src=/usr/src/debug/llama.cpp-sycl-f32
+ -flto=auto
-O3
-DNDEBUG
-std=gnu11
-fPIC
-Wshadow
-Wstrict-prototypes
-Wpointer-arith
-Wmissing-prototypes
-Werror=implicit-int
-Werror=implicit-function-declaration
-Wall
-Wextra
-Wpedantic
-Wcast-qual
-Wno-unused-function
-o CMakeFiles/ggml-base.dir/ggml.c.o
-c /home/heyrict/.cache/paru/clone/llama.cpp-sycl-f32/src/llama.cpp/ggml/src/ggml.c",
"file": "/home/heyrict/.cache/paru/clone/llama.cpp-sycl-f32/src/llama.cpp/ggml/src/ggml.c",
"output": "ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o"
},
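Given the flag differences above, one option would be to drop the makepkg-injected flags inside the PKGBUILD itself instead of maintaining a separate makepkg.conf. The snippet below is only an untested sketch; which flags actually have to go is an assumption based on the diff:
# candidate PKGBUILD changes (untested sketch)
options=(!lto !debug)  # avoid -flto=auto plus the -g/-ffile-prefix-map debug flags

build() {
  source /opt/intel/oneapi/setvars.sh
  # strip hardening/codegen flags that differ from the working manual build
  # (assumption: these are the ones icx/icpx trips over)
  for _flag in -fstack-clash-protection -fcf-protection -fno-plt; do
    CFLAGS=${CFLAGS//$_flag/}
    CXXFLAGS=${CXXFLAGS//$_flag/}
  done
  export CFLAGS CXXFLAGS
  # ... existing cmake invocation unchanged ...
}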
bionade24 commented on 2025-02-11 14:41 (UTC)
Still broken: Current build log at https://abs-cd.oscloud.info/cd_manager/llama.cpp-sycl-f16
You get past the SYCL_EXTERNAL issue by providing a makepkg.conf without most of the -fsomething C(XX)FLAGS, but run into architecture problems afterwards.
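For anyone trying to reproduce that, a minimal sketch of building against a stripped-down makepkg.conf (the replacement flag values are an assumption; adjust as needed):
# start from the system config and override the flag assignments at the end
cp /etc/makepkg.conf ./makepkg-noflags.conf
cat >> ./makepkg-noflags.conf <<'EOF'
# later assignments win when makepkg sources this file
CFLAGS="-O2 -pipe"
CXXFLAGS="-O2 -pipe"
LTOFLAGS=""
EOF
makepkg --config ./makepkg-noflags.conf -Ccf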
bionade24 commented on 2025-02-06 21:51 (UTC) (edited on 2025-02-08 10:48 (UTC) by bionade24)
intel-compute-runtime should be an optdepend of this and the -f32 pkg. It's needed for oneAPI GPU access.
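In PKGBUILD terms that suggestion would look roughly like the line below; the description string is mine, not taken from the actual PKGBUILD:
optdepends=('intel-compute-runtime: oneAPI/Level Zero runtime required for GPU offload')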
@greyltc I could neither build it (with intel-oneapi-basekit-2025) in a clean buildenv, with a custom makepkg.conf that removes all the C(XX)FLAGS, nor manually. I got past the undefined function without SYCL_EXTERNAL with the latter two, but then got a shitton of errors related to typedefs, where my first guess is that the compiler should have used the stdlib headers under /opt/intel/oneapi. Currently trying to build in Docker.
Edit: Docker builds & runs, but I didn't have any luck building it on Arch. I also tried changing the BLAS library to OpenBLAS and setting -march=haswell to maybe get rid of those weird intrinsics errors, but to no avail.
greyltc commented on 2024-12-21 17:19 (UTC)
I'm not sure the SYCL_EXTERNAL issue is upstream's problem. I think it might be caused by the environment variables that makepkg sets when it builds packages. Still trying to figure it out...
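One way to test that theory is to dump what makepkg actually exports into build() and diff it against a plain shell. A rough sketch, using temporary debugging lines rather than anything from the real PKGBUILD:
# temporarily add at the top of build() in the PKGBUILD
env | sort > "$srcdir/env-makepkg.txt"
# then, from a normal shell in the same directory after makepkg ran
env | sort > env-manual.txt
diff env-manual.txt src/env-makepkg.txt | grep -E 'FLAGS|FORTIFY'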
txtsd commented on 2024-12-21 07:42 (UTC)
Thanks @greyltc! Could you post an upstream issue about SYCL_EXTERNAL?
greyltc commented on 2024-12-18 18:44 (UTC)
@txtsd, you can depend on https://aur.archlinux.org/packages/intel-oneapi-basekit-2025 here if you want the latest intel-oneapi-basekit. That solves the "'syclcompat/math.hpp' file not found" issue, but even with that solved the package still won't build.
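For reference, that would amount to swapping the dependency in the PKGBUILD, roughly as below; the surrounding entries are copied from the dependency list above rather than from the real PKGBUILD, so treat it as a sketch:
depends=(curl gcc-libs glibc intel-oneapi-basekit-2025 python python-numpy python-sentencepiece)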
ioctl commented on 2024-11-30 18:21 (UTC)
While this package cannot be built, one may use Intel's ipex-llm project, which is compatible with llama.cpp and works very fast with the Arch Linux intel-oneapi-basekit 2024.1.
Here is official guide: https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/llama_cpp_quickstart.md
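From memory, the flow in that guide is roughly the following; the exact commands should be checked against the linked quickstart rather than trusted from here:
# rough flow from memory; verify against the linked quickstart before relying on it
pip install --pre --upgrade 'ipex-llm[cpp]'
mkdir llama-cpp && cd llama-cpp
init-llama-cpp                       # links ipex-llm's prebuilt llama.cpp binaries here
source /opt/intel/oneapi/setvars.sh  # then run the linked binaries as usual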
pepijndevos commented on 2024-11-22 07:32 (UTC)
I was told they won't update it until after the Python 3.13 migration.
I have updated it locally and am now getting
/usr/lib64/gcc/x86_64-pc-linux-gnu/14.2.1/../../../../include/c++/14.2.1/array:217:2: error: SYCL kernel cannot call an undefined function without SYCL_EXTERNAL attribute
Pinned Comments
txtsd commented on 2024-10-26 20:15 (UTC) (edited on 2024-12-06 14:15 (UTC) by txtsd)
Alternate versions
llama.cpp
llama.cpp-vulkan
llama.cpp-sycl-fp16
llama.cpp-sycl-fp32
llama.cpp-cuda
llama.cpp-cuda-f16
llama.cpp-hip