Package Details: adaptivecpp-git 25.02.0.r109.g8810d85-1
| Git Clone URL: | https://aur.archlinux.org/adaptivecpp-git.git (read-only, click to copy) |
|---|---|
| Package Base: | adaptivecpp-git |
| Description: | A modern, community-driven platform for C++-based heterogeneous programming models targeting CPUs and GPUs from all major vendors. |
| Upstream URL: | https://github.com/AdaptiveCpp/AdaptiveCpp |
| Licenses: | BSD-2-Clause |
| Conflicts: | adaptivecpp |
| Provides: | opencl-headers |
| Submitter: | dreieck |
| Maintainer: | solanki |
| Last Packager: | solanki |
| Votes: | 6 |
| Popularity: | 0.78 |
| First Submitted: | 2024-02-08 13:18 (UTC) |
| Last Updated: | 2025-09-27 00:00 (UTC) |
Dependencies (20)
- clang19
- cuda (cuda11.1AUR, cuda-12.2AUR, cuda12.0AUR, cuda11.4AUR, cuda11.4-versionedAUR, cuda12.0-versionedAUR, cuda-12.5AUR, cuda-12.9AUR)
- gcc-libs (gcc-libs-gitAUR, gccrs-libs-gitAUR, gcc-libs-snapshotAUR)
- glibc (glibc-gitAUR, glibc-eacAUR)
- hip-runtime-amd (opencl-amdAUR)
- level-zero-loader (level-zero-loader-gitAUR, level-zero-loader-legacyAUR)
- llvm19-libs
- numactl (numactl-gitAUR)
- nvidia-utils (nvidia-410xx-utilsAUR, nvidia-440xx-utilsAUR, nvidia-430xx-utilsAUR, nvidia-340xx-utilsAUR, nvidia-510xx-utilsAUR, nvidia-utils-teslaAUR, nvidia-470xx-utilsAUR, nvidia-550xx-utilsAUR, nvidia-390xx-utilsAUR, nvidia-vulkan-utilsAUR, nvidia-535xx-utilsAUR, nvidia-utils-betaAUR, nvidia-525xx-utilsAUR)
- ocl-icd (khronos-ocl-icd-gitAUR, opencl-icd-loaderAUR)
- python
- boost (boost-gitAUR) (make)
- cmake (cmake3AUR, cmake-gitAUR) (make)
- doxygen (doxygen-gitAUR) (make)
- git (git-gitAUR, git-glAUR) (make)
- level-zero-headers (level-zero-headers-gitAUR, level-zero-headers-legacyAUR) (make)
- lld19 (make)
- llvm19 (make)
- openmp (make)
- rocm-llvm (opencl-amd-devAUR) (make)
Required by (186)
- amdapp-sdk (requires opencl-headers) (optional)
- amdcovc (requires opencl-headers) (make)
- amdonly-gaming-opencl-rusticl-mesa-git (requires opencl-headers) (optional)
- aquagpusph (requires opencl-headers) (make)
- arrayfire (requires opencl-headers) (make)
- arrayfire-git (requires opencl-headers) (make)
- asap-chiptunes-player-git (requires opencl-headers) (make)
- avisynth-plugin-knlmeanscl-git (requires opencl-headers) (make)
- beagle-lib-all (requires opencl-headers)
- beagle-lib-opencl (requires opencl-headers)
- beignet (requires opencl-headers)
- beignet-git (requires opencl-headers)
- chipstar-git (requires opencl-headers) (make)
- clang-prefixed-git (requires opencl-headers) (make)
- clang-prefixed-release (requires opencl-headers) (make)
- clblas (requires opencl-headers)
- clblast-git (requires opencl-headers)
- clfft (requires opencl-headers) (make)
- clfft-git (requires opencl-headers) (make)
- clinfo-git (requires opencl-headers) (make)
Latest Comments
JPenuchot commented on 2025-11-11 11:02 (UTC)
Got it, thanks for the explanation (and the package) :)
solanki commented on 2025-10-30 14:45 (UTC)
@JPenuchot, you are correct that AdaptiveCpp supports versions of LLVM newer than 19. The limiting factor, however, is not AdaptiveCpp itself but ROCm: Arch Linux still ships ROCm 6.4.4, whose `rocm-llvm` is based on LLVM 19.0.0 (you can confirm this by running `/opt/rocm/bin/amdclang --version`, which for me returns `AMD clang version 19.0.0git`). It is absolutely imperative that we do not build AdaptiveCpp with a newer version of LLVM than the one ROCm ships with, as doing so pretty much always ends up breaking AdaptiveCpp's ROCm backend (we learned this the hard way). In fact, since May this year, AdaptiveCpp by default refuses even to build when it detects that the LLVM version it is provided with is newer than the one used by ROCm. [1]
TL;DR: Upgrading the LLVM version is currently not an option, as it would likely break AdaptiveCpp on AMD GPUs.
[1] https://github.com/AdaptiveCpp/AdaptiveCpp/pull/1805/commits/e86ccddcd91a37bce88943c96dd849830b61297b
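The version check described in this comment can be sketched in shell. The `parse_major` helper and the sample version strings below are illustrative assumptions, not part of the package; in practice the strings would come from `/opt/rocm/bin/amdclang --version` and `clang-19 --version`.

```shell
# Extract the major version from "clang version X.Y.Z..." output.
parse_major() {
  echo "$1" | sed -n 's/.*clang version \([0-9][0-9]*\).*/\1/p'
}

rocm_out="AMD clang version 19.0.0git"   # e.g. /opt/rocm/bin/amdclang --version
sys_out="clang version 19.1.7"           # e.g. clang-19 --version

# AdaptiveCpp's ROCm backend breaks if the system LLVM is newer than ROCm's.
if [ "$(parse_major "$sys_out")" -gt "$(parse_major "$rocm_out")" ]; then
  echo "MISMATCH: system LLVM is newer than ROCm's"
else
  echo "OK: LLVM majors are compatible"
fi
```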
JPenuchot commented on 2025-10-30 13:12 (UTC)
Hi,
The documentation seems to suggest that AdaptiveCpp is compatible with LLVM 20 [1]. Maybe bumping `_llvm_version_major` to 20 could be reasonable? Or even trying LLVM 21?
Regards, Jules
[1] https://github.com/AdaptiveCpp/AdaptiveCpp/blob/develop/doc/installing.md#compilation-flows
dreieck commented on 2024-05-28 11:23 (UTC)
I do not use it.
I disown it.
@Eirikr and/or @illuhad, feel free to adopt!
Regards!
illuhad commented on 2024-03-12 22:21 (UTC)
Hi @Eirikr, @dreieck :) Thanks for your efforts, this is great to see!
We do have some ideas about packaging in the upstream AdaptiveCpp project that I'd like to share with you; perhaps they can be helpful. It is clear that adding a package for each combination of backends is not feasible, given the combinatorial explosion and the flexibility that AdaptiveCpp provides.
We have two ideas about how this could be handled. Both rely on the fact that AdaptiveCpp has modular backend plugins that are discovered and loaded at runtime by the core runtime library. So:
- An `adaptivecpp-core` (or similarly named) package which provides the core infrastructure (`libacpp-rt`, headers, `acpp`, the `omp` backend, `libllvm-to-backend`). There would then be individual `adaptivecpp-cuda`, `adaptivecpp-rocm`, etc. packages that add individual backends (`librt-backend-<backend>`, `libllvm-to-<backend>`). Only these packages would carry dependencies on backend stacks like CUDA or ROCm. Users could then install whichever backend packages they actually need. Note that there is currently no way to build "just a backend" without the core infrastructure, so the build process for a backend package would involve building AdaptiveCpp again with the requested backend and then packaging only the backend-specific bits.

Eirikr commented on 2024-03-06 04:36 (UTC) (edited on 2024-03-06 05:15 (UTC) by Eirikr)
Hey there! First of all, I want to say thank you very much for working on this. It is marvelous and works great. I wanted to try to mess around with CUDA... then was having issues with it finding the ROCm library, so I added these to /etc/environment.
So then I tried to tinker to address stuff listed in the current PKGBUILD + have a combined package for mixed-use labs and datacenters; mixed arch homelabs and rigs; laptops, edge cases of a mini PC with multiple GPUs, so forth.
So here is the combined CUDA+ROCm+OpenCL+CPU and Level Zero GPU build, included to satiate my curiosity; though any CUDA-only errors should be addressed first, of course. I was tinkering more than I was paying attention to jotting down what I was doing, so this "noob having fun" list does not cover everything added.
(I do not pretend this is a solution, complete or otherwise. This is practicing, experimenting, and brainstorming, commented as a rough draft to consider adding Intel plus a few different configs. So I do not expect help nor support for making all these modifications. However, if you find any of it useful, you are free to use it for your effort/efforts.)
Then we have this version, which tries to consolidate things down for my scattered brain to better understand what's up. The same caveats as above apply: just some tinkering and experimenting I wish to share.
acxz commented on 2020-03-24 17:47 (UTC)
Development is on Github: https://github.com/acxz/pkgbuilds Please open issues and PRs there instead of commenting.
acxz commented on 2020-03-02 20:04 (UTC)
@illuhad thanks for your detailed response. As of right now my focus is getting the ROCm stack working on Arch via the AUR; once we get that working, I will shift to focusing on hipSYCL and the related packages. I would like to eventually get the hipSYCL installation working via the AUR, and when that time comes I will definitely contact you (and ask to be (co)maintainer), but not before we have a working stack. Thank you!
illuhad commented on 2020-03-01 21:21 (UTC) (edited on 2020-03-01 21:21 (UTC) by illuhad)
@acxz I agree that it is desirable to have hipSYCL packages in the AUR for visibility. However, I think integrating our PKGBUILD with the AUR is going to be difficult. The reason is that we want to offer a hierarchy of installation mechanisms:
In order to keep the maintenance effort for us as low as possible, the packaging process builds a singularity container image for the target Linux distribution, installs hipSYCL using the installation scripts to /opt/hipSYCL and then just packages this existing installation (e.g. using a PKGBUILD on Arch). This makes it difficult to integrate with the AUR, outside of our packaging pipeline. Additionally, we also build our packages against our own LLVM distributions such that we can verify functionality before shipping packages (hipSYCL imposes certain requirements on the way LLVM is compiled - for most distributions, this is not a problem though).
If you want, you can have a look at our stuff here: https://github.com/illuhad/hipSYCL/tree/master/install/scripts/packaging
If you want to create a PKGBUILD for the AUR, I will of course support you (you can become maintainer of this package here if you like), but at the moment we lack the manpower to support hipSYCL from our side on distribution-specific platforms like the AUR, in particular because all distributions have different guidelines, etc., that need to be followed.
If you want to discuss how this should be done in more detail, I invite you to open an issue on github :)