Package Details: nvhpc 24.3-1

Git Clone URL: https://aur.archlinux.org/nvhpc.git (read-only)
Package Base: nvhpc
Description: NVIDIA HPC SDK
Upstream URL: https://gitlab.com/badwaik/archlinux/aur/nvhpc
Keywords: compiler cuda fortran pgi portland
Licenses: custom
Conflicts: pgi-compilers
Replaces: pgi-compilers
Submitter: a.kudelin
Maintainer: jayesh
Last Packager: jayesh
Votes: 14
Popularity: 0.005263
First Submitted: 2020-10-20 12:54 (UTC)
Last Updated: 2024-04-03 00:02 (UTC)

Dependencies (5)

Required by (0)

Sources (2)

Latest Comments


jayesh commented on 2023-05-20 05:58 (UTC)

@ylee: Thanks for your report. An alternative for now would be to use the gcc12 package in Arch Linux as a dependency and run makelocalrc against it. I will look at this in a couple of days and see what can be done.
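A rough sketch of that workaround, for anyone who wants to try it before the package is updated (the SDK version/path and the gcc-12/g++-12/gfortran-12 binary names are assumptions; adjust them to match your install):

❯ sudo pacman -S gcc12
❯ # regenerate the NVHPC localrc so nvc/nvc++/nvfortran use gcc 12 as the host toolchain
❯ sudo /opt/nvidia/hpc_sdk/Linux_x86_64/23.3/compilers/bin/makelocalrc \
      /opt/nvidia/hpc_sdk/Linux_x86_64/23.3/compilers/bin \
      -x -gcc /usr/bin/gcc-12 -gpp /usr/bin/g++-12 -g77 /usr/bin/gfortran-12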

ylee commented on 2023-05-19 23:51 (UTC)

It looks like the current nvhpc doesn't work with Arch's upstream gcc-13. This has been reported here, but I'm not sure when they're going to support the newest gcc.

jayesh commented on 2023-05-11 18:10 (UTC)

@nordwin thanks for the pull request. I have merged it now.

jayesh commented on 2023-05-04 09:23 (UTC)

@nordwin

Yes, this is the case. Thanks for noticing, and apologies.

I'm unable to fix this today, so it might take some time to fix. If you wish, you could send me a PR at https://gitlab.com/badwaik/archlinux/aur/nvhpc and I'll accept it.

Thanks.

Nordwin commented on 2023-05-04 09:05 (UTC)

Hey, is it possible that you forgot to update the checksums?

Because it says

sha256sums=('a01733a257995dc63a4f07b94dbad50b07f12d0515f7c7a9b2bebef3ac35750a' '8853cf0dcb2dec7acd25cedaf2e849993a8156165742a69381a44d4447ce19d5')

in the PKGBUILD, but I get

65c97207e7ac2d5f163bc50cb017a2c9519a7c9b2b3d12146d3dd433655963f2 nvhpc_2023_233_Linux_x86_64_cuda_12.0.tar.gz

and

19e5c4b25dc1c5a41123a3e84b18db0668d33f08f1dd8cf0d6ab1ace346f535b nvhpc.sh
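For reference, the checksums can be regenerated from the directory containing the PKGBUILD and the downloaded sources (assuming pacman-contrib is installed for updpkgsums):

❯ updpkgsums    # rewrites the sha256sums array in the PKGBUILD in place
❯ makepkg -g    # alternatively, print fresh checksums to paste in manually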

jayesh commented on 2023-05-01 15:20 (UTC)

Updated to nvhpc 23.3

jayesh commented on 2023-02-05 23:18 (UTC) (edited on 2023-02-05 23:18 (UTC) by jayesh)

I've updated NVHPC to the new version, 23.1. According to the release notes (https://docs.nvidia.com/hpc-sdk/hpc-sdk-release-notes/index.html):

This release introduces a new “nvhpc-hpcx” environment module for the HPC-X library, an alternative to the default OpenMPI 3.x library that is set up by the existing “nvhpc” environment module. If no MPI library is desired, or to use an external MPI library, the “nvhpc-nompi” environment module is provided.

So, in the interest of testing, I've commented out my hack of replacing the default MPI with HPC-X. Please test out the new package and let me know how things go with HPC-X.
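A minimal sketch for switching between the bundled modules while testing (the module names come from the release notes quoted above; the modulefiles path is an assumption and depends on your install prefix):

❯ module use /opt/nvidia/hpc_sdk/modulefiles   # assumed default modulefile location
❯ module load nvhpc-hpcx    # MPI via HPC-X
❯ # or, to bring your own MPI / use none:
❯ module load nvhpc-nompi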

ylee commented on 2022-12-22 07:53 (UTC) (edited on 2022-12-22 07:54 (UTC) by ylee)

@jayesh,

Here is the MWE for nvfortran:

❯ module load nvhpc

❯ nvfortran -mp=gpu hello_omp_offload.F90
nvlink warning : Skipping incompatible '/usr/lib64/libdl.a' when searching for -ldl
nvlink warning : Skipping incompatible '/usr/lib/gcc/x86_64-pc-linux-gnu/12.2.0/../../../../lib64/libdl.a' when searching for -ldl
nvlink warning : Skipping incompatible '/usr/lib64/libpthread.a' when searching for -lpthread
nvlink warning : Skipping incompatible '/usr/lib/gcc/x86_64-pc-linux-gnu/12.2.0/../../../../lib64/libpthread.a' when searching for -lpthread
nvlink warning : Skipping incompatible '/usr/lib64/librt.a' when searching for -lrt
nvlink warning : Skipping incompatible '/usr/lib/gcc/x86_64-pc-linux-gnu/12.2.0/../../../../lib64/librt.a' when searching for -lrt

❯ ./a.out
 I'm on GPU

And the code is:

! hello_omp_offload.F90
program hello
  !$ use omp_lib, ONLY: omp_is_initial_device
  implicit none
  logical:: onCPU

  onCPU = .true.

  !$omp target map(tofrom:onCPU)
  !$    onCPU = omp_is_initial_device()
  !$omp end target

  if (onCPU) then
    print *, "I cannot go into the GPU"
  else
    print *, "I'm on GPU"
  end if

end program hello

Even though it uses OpenMP offloading, nvfortran will utilize the CUDA libraries when the -mp=gpu option is given.

For nvcc, without the symlinks that I mentioned before:

❯ module load nvhpc

❯ nvcc hello.cu
In file included from /opt/nvidia/hpc_sdk/Linux_x86_64/22.11/cuda/11.8/include/cuda_runtime.h:83,
                 from <command-line>:
/opt/nvidia/hpc_sdk/Linux_x86_64/22.11/cuda/11.8/include/crt/host_config.h:132:2: error: #error -- unsupported GNU version! gcc versions later than 11 are not supported! The nvcc flag '-allow-unsupported-compiler' can be used to override this version check; however, using an unsupported host compiler may cause compilation failure or incorrect run time execution. Use at your own risk.
  132 | #error -- unsupported GNU version! gcc versions later than 11 are not supported! The nvcc flag '-allow-unsupported-compiler' can be used to override this version check; however, using an unsupported host compiler may cause compilation failure or incorrect run time execution. Use at your own risk.
      |  ^~~~~

The source code is:

// hello.cu
#include <stdio.h>

#define NUM_BLOCKS 1
#define BLOCK_WIDTH 8

__global__ void hello()
{
  printf("Hello from thread %d in block %d\n", threadIdx.x, blockIdx.x);
}

int main(int argc, char **argv)
{
  hello<<<NUM_BLOCKS, BLOCK_WIDTH>>>();

  cudaDeviceSynchronize();

  return 0;
}

Either -ccbin=gcc-11 or -allow-unsupported-compiler resolves the issue.
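For completeness, the two workarounds look like this (gcc-11 here assumes the Arch gcc11 package is installed and provides /usr/bin/gcc-11):

❯ nvcc -ccbin=gcc-11 hello.cu                  # use a host compiler that nvcc supports
❯ nvcc -allow-unsupported-compiler hello.cu    # or skip the host-compiler version check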

I hope you have a wonderful holiday. Thank you for your input.

jayesh commented on 2022-12-22 06:57 (UTC) (edited on 2022-12-22 07:01 (UTC) by jayesh)

@ylee: try using the --ccbin option to point to the correct gcc.

nvfortran is an NVHPC compiler; it doesn't call a host compiler, so those linking problems are different from host-compiler problems. I'll have to look at it in more detail, but an MWE will be quite helpful to make sure that we are looking at the same thing.

I'll see if patching gcc works for the 11/12 versions and then push it sometime after the Christmas holidays.