Package Details: opencl-amd-dev 1:6.2.4-2

Git Clone URL: https://aur.archlinux.org/opencl-amd-dev.git (read-only, click to copy)
Package Base: opencl-amd-dev
Description: OpenCL SDK / HIP SDK / ROCM Compiler. This package needs at least 20GB of disk space.
Upstream URL: http://www.amd.com
Licenses: custom:AMD
Conflicts: composablekernel-dev, hipblas, hipblas-dev, hipblaslt, hipblaslt-dev, hipcub, hipcub-dev, hipfft, hipfft-dev, hipfort, hipfort-dev, hipify-clang, hiprand, hiprand-dev, hipsolver, hipsolver-dev, hipsparse, hipsparse-dev, hipsparselt, hipsparselt-dev, hiptensor, hiptensor-dev, migraphx, migraphx-dev, miopen, miopen-hip, miopen-hip-dev, mivisionx, mivisionx-dev, omniperf, omnitrace, openmp-extras-dev, rccl, rccl-dev, rocalution, rocalution-dev, rocblas, rocblas-dev, rocfft, rocfft-dev, rocm-developer-tools, rocm-hip-libraries, rocm-hip-runtime-dev, rocm-hip-sdk, rocm-llvm, rocm-ml-libraries, rocm-ml-sdk, rocm-opencl-sdk, rocprim, rocprim-dev, rocprofiler-sdk, rocprofiler-sdk-roctx, rocrand, rocrand-dev, rocsolver, rocsolver-dev, rocsparse, rocsparse-dev, rocthrust, rocthrust-dev, rocwmma-dev, rpp, rpp-dev
Provides: composablekernel-dev, half, hipblas, hipblas-dev, hipblaslt, hipblaslt-dev, hipcub, hipcub-dev, hipfft, hipfft-dev, hipfort, hipfort-dev, hipify-clang, hiprand, hiprand-dev, hipsolver, hipsolver-dev, hipsparse, hipsparse-dev, hipsparselt, hipsparselt-dev, hiptensor, hiptensor-dev, migraphx, migraphx-dev, miopen, miopen-hip, miopen-hip-dev, mivisionx, mivisionx-dev, omniperf, omnitrace, openmp-extras-dev, rccl, rccl-dev, rocalution, rocalution-dev, rocblas, rocblas-dev, rocfft, rocfft-dev, rocm-developer-tools, rocm-hip-libraries, rocm-hip-runtime-dev, rocm-hip-sdk, rocm-llvm, rocm-ml-libraries, rocm-ml-sdk, rocm-opencl-sdk, rocprim, rocprim-dev, rocprofiler-sdk, rocprofiler-sdk-roctx, rocrand, rocrand-dev, rocsolver, rocsolver-dev, rocsparse, rocsparse-dev, rocthrust, rocthrust-dev, rocwmma-dev, rpp, rpp-dev
Submitter: luciddream
Maintainer: luciddream
Last Packager: luciddream
Votes: 8
Popularity: 0.47
First Submitted: 2021-12-26 15:01 (UTC)
Last Updated: 2024-11-10 10:22 (UTC)

Required by (154)

Sources (54)

Pinned Comments

luciddream commented on 2022-01-12 16:47 (UTC) (edited on 2024-11-07 20:44 (UTC) by luciddream)

Latest release: 6.2.4. It uses 25.93GB of disk.

Latest Comments


luciddream commented on 2022-02-11 18:00 (UTC)

@trougnouf I followed your advice. I don't have much experience with packaging so I'm not sure how to deal with conflicts / provides. I've added everything to provides for now.
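
Roughly, the relevant part ends up looking like this in the PKGBUILD (an illustrative subset, not the actual arrays):

# illustrative excerpt - the real PKGBUILD lists far more packages
provides=('hipblas' 'rocblas' 'rocm-hip-sdk' 'rocm-llvm')
conflicts=("${provides[@]}")  # conflict with every package we provide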

@esistgut I've updated this package for ROCm 5.0 - however, I haven't had any luck with the recent versions of Pytorch. Maybe you'll have better luck :)

trougnouf commented on 2022-02-07 23:07 (UTC) (edited on 2022-02-07 23:08 (UTC) by trougnouf)

It's redundant to conflict with rocm-opencl-runtime, because opencl-amd, its dependency, already has this conflict (and imo opencl-amd should provide rocm-opencl-runtime).

esistgut commented on 2022-01-24 18:53 (UTC)

@luciddream, I just installed it: torch.cuda.is_available() returns False, but as far as I can tell it doesn't interact with opencl-amd-dev at all; removing it has no effect.

luciddream commented on 2022-01-24 09:52 (UTC)

@esistgut I noticed that there are some builds of Pytorch with ROCm 4.5.2 support now; maybe you want to give it a try?

I'm not at my PC right now to test it, but I think something like this might work:

pip install --pre torch -f https://download.pytorch.org/whl/nightly/rocm4.5.2/torch_nightly.html
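
If it installs, a quick sanity check (assuming the ROCm build and a supported GPU) would be:

# print the torch version and whether the HIP/ROCm device is visible
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"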

luciddream commented on 2022-01-12 11:40 (UTC)

@esistgut

The thing is that we have different issues. Pytorch already has support for gfx1030, and you have a gfx1030 card, so maybe all that's missing for you is to wait for [this pr] to be completed.
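
If you want to double-check which target your card reports (assuming rocminfo from the ROCm runtime is installed), something like this should do:

# list the gfx ISA names the ROCm runtime sees, e.g. gfx1030
rocminfo | grep gfx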

In any case I will push a new version of this package within the week to include missing math libraries etc.

esistgut commented on 2022-01-12 10:49 (UTC)

Thank you for your effort. I think rebuilding Pytorch is not a long-term solution; I'll just wait for the official binary to work. The python-pytorch-rocm package is not a very nice route to follow, as it relies on rebuilding the whole platform from source, which requires more than 32GB of RAM and many, many hours. If you're going to publish a package that installs the AMD binary ROCm/HIP distribution, it will provide a faster way to approach ROCm on Arch.

luciddream commented on 2022-01-12 09:07 (UTC) (edited on 2022-01-12 09:24 (UTC) by luciddream)

So yesterday, late at night, I managed to create a PKGBUILD with the full ROCm SDK libraries. The installation size is about 9.4GB - I can attach it here when I get home later today. I don't want to release it yet; maybe it will make sense to split it into several packages. But it still doesn't help with Pytorch.

I think Pytorch needs some extra configuration to work with ROCm, but I'm not familiar enough with the application or the C++ build tools to figure it out yet. What I did with the full ROCm package was:

  • Downloaded Pytorch with git clone -b fix_warpsize_issue --recursive https://github.com/micmelesse/pytorch --depth=1 --shallow-submodules
  • Added /opt/rocm/bin and /opt/rocm/hip/bin to PATH (I will add this to the main package in the next release)
  • Ran python tools/amd_build/build_amd.py
  • Exported USE_KINETO="OFF" since I couldn't find where to set KINETO_HIP_LIBRARY="/opt/rocm/hip/lib" properly.
  • Ran: python setup.py build --cmake-only
  • Ran ccmake build and added gfx1010 to GPU Targets
  • Gave up
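
For reference, the steps above as a single shell sequence (a sketch of what I tried, not a verified recipe):

# clone the branch with the warp-size fix
git clone -b fix_warpsize_issue --recursive --depth=1 --shallow-submodules https://github.com/micmelesse/pytorch
cd pytorch
# make the ROCm/HIP tools visible to the build
export PATH=/opt/rocm/bin:/opt/rocm/hip/bin:$PATH
# "hipify" the CUDA sources
python tools/amd_build/build_amd.py
# Kineto did not pick up KINETO_HIP_LIBRARY, so disable it
export USE_KINETO=OFF
# configure only, then add gfx1010 to the GPU targets in ccmake before building
python setup.py build --cmake-only
ccmake build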

Maybe python-pytorch-rocm works better for you, or we can use it to get ideas on how to build it, but I don't see how it handles the GPU targets (if at all).

luciddream commented on 2022-01-11 22:20 (UTC)

@esistgut

I will take a closer look over the next few days. I think this package is missing a lot of libraries; I noticed that when I tried to compile Pytorch myself. I will then make a new release. Thanks for your feedback!

esistgut commented on 2022-01-11 20:10 (UTC)

(venv) esistgut@nibiru:~/coding/python/fastai$ cat check.py 
import torch
print(torch.cuda.is_available())
(venv) esistgut@nibiru:~/coding/python/fastai$ python check.py 
"hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"
Aborted (core dumped)
(venv) esistgut@nibiru:~/coding/python/fastai$ HSA_OVERRIDE_GFX_VERSION=1030 python check.py 
False

https://pytorch.org/docs/stable/notes/hip.html