Package Details: nvhpc 24.3-1

Git Clone URL: https://aur.archlinux.org/nvhpc.git (read-only)
Package Base: nvhpc
Description: NVIDIA HPC SDK
Upstream URL: https://gitlab.com/badwaik/archlinux/aur/nvhpc
Keywords: compiler cuda fortran pgi portland
Licenses: custom
Conflicts: pgi-compilers
Replaces: pgi-compilers
Submitter: a.kudelin
Maintainer: jayesh
Last Packager: jayesh
Votes: 14
Popularity: 0.003721
First Submitted: 2020-10-20 12:54 (UTC)
Last Updated: 2024-04-03 00:02 (UTC)

Dependencies (5)

Required by (0)

Sources (2)

Latest Comments


jayesh commented on 2022-12-14 07:41 (UTC)

@aitzkora

Thank you for the report. I can confirm that this happens with multiple hello world programs (not just the one you posted), and that it happens specifically on Arch Linux and not on other distributions.

I also tested hello world programs in C and C++, and they work fine, so the cause is not obvious yet. I'll need to analyze further what's going on.
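
In the meantime, a rough way to check whether the SDK's bundled OpenMPI is resolving against the system PMIx stack (the /usr/lib/libpmix.so.2 and /usr/lib/openmpi/mca_pmix_ext3x.so frames in the backtrace below point in that direction) is a sketch like the following; the paths assume the default install prefix and SDK version 22.11:

 # List the shared libraries the SDK's libmpi resolves to; if system
 # libraries such as /usr/lib/libpmix.so.2 show up here or in the MCA
 # plugins, the bundled MPI is mixing with the distro's OpenMPI/PMIx stack.
 ldd /opt/nvidia/hpc_sdk/Linux_x86_64/22.11/comm_libs/openmpi/openmpi-3.1.5/lib/libmpi.so.40 | grep -Ei 'pmix|event'
 # Also confirm which mpirun is first in PATH:
 which mpirun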

aitzkora commented on 2022-12-13 13:26 (UTC) (edited on 2022-12-13 13:31 (UTC) by aitzkora)

Hi jayesh, I tried your package and it seems to work, but when I try to run a simple hello world in Fortran, it fails:

program hello
 use mpi_f08
 implicit none
 !include "mpif.h"

 integer :: rank, nprocs, ierr
 call MPI_INIT( ierr )
 call MPI_COMM_RANK( MPI_COMM_WORLD, rank, ierr )
 call MPI_COMM_SIZE( MPI_COMM_WORLD, nprocs, ierr )

 print '(a,i2,a,i2,a)', "hello from ", rank, " among ", nprocs, " procs"

 call MPI_FINALIZE( ierr )

end program hello
 fux@udalatx $ mpif90 -o hello_mpi hello_mpi.f90
 fux@udalatx $ mpirun -np 3 ./hello_mpi
[udalatx:1224355] *** Process received signal ***
[udalatx:1224355] Signal: Segmentation fault (11)
[udalatx:1224355] Signal code: Address not mapped (1)
[udalatx:1224355] Failing at address: 0x1d0
[udalatx:1224355] [ 0] /usr/lib/gcc/x86_64-pc-linux-gnu/12.2.0/../../../../lib64/libc.so.6(+0x38a00)[0x7f429ce51a00]
[udalatx:1224355] [ 1] /usr/lib/libpmix.so.2(+0x6cb94)[0x7f4299877b94]
[udalatx:1224355] [ 2] /opt/nvidia/hpc_sdk/Linux_x86_64/22.11/comm_libs/openmpi/openmpi-3.1.5/lib/libmpi.so.40(ompi_errhandler_callback+0x1b)[0x7f429c25f8db]
[udalatx:1224355] [ 3] /usr/lib/openmpi/mca_pmix_ext3x.so(+0x6c4b)[0x7f429c696c4b]
[udalatx:1224355] [ 4] /usr/lib/libpmix.so.2(+0x6cbc3)[0x7f4299877bc3]
[udalatx:1224355] [ 5] /usr/lib64/libevent_core-2.1.so.7(+0x1eb32)[0x7f429c76fb32]
[udalatx:1224355] [ 6] /usr/lib64/libevent_core-2.1.so.7(event_base_loop+0x4ff)[0x7f429c77139f]
[udalatx:1224355] [ 7] /usr/lib/libpmix.so.2(+0x9ec1a)[0x7f42998a9c1a]
[udalatx:1224355] [ 8] /usr/lib/gcc/x86_64-pc-linux-gnu/12.2.0/../../../../lib64/libc.so.6(+0x868fd)[0x7f429ce9f8fd]
[udalatx:1224355] [ 9] /usr/lib/gcc/x86_64-pc-linux-gnu/12.2.0/../../../../lib64/libc.so.6(+0x108a60)[0x7f429cf21a60]
[udalatx:1224355] *** End of error message ***
(processes 1224353 and 1224354 crash with the same signal and print identical backtraces; their output was interleaved with the above)

Do you observe the same bug? Best regards, Marc

jayesh commented on 2022-11-16 15:02 (UTC)

Upgraded to 22.11. Please let me know if there are any bugs. -- Jayesh

jayesh commented on 2022-10-12 13:30 (UTC) (edited on 2022-10-12 13:30 (UTC) by jayesh)

I have a poll question: do you think that installing nvhpc.sh directly into /etc/profile.d/ is actually a good idea? Or would you rather have it installed somewhere else, from where you can source it manually?

I'll collect replies over the next 2-3 months, until the next release, and then decide one way or the other.
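
For concreteness, the alternative would be opt-in sourcing instead of automatic loading at login, roughly like this (the script location here is hypothetical):

 # hypothetical: source the SDK environment per shell session instead of
 # having /etc/profile.d/ load it automatically at login
 source /opt/nvidia/hpc_sdk/nvhpc.sh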

jayesh commented on 2022-10-12 10:57 (UTC)

Updated to 22.9. Please let me know if there are any bugs. -- Jayesh

jayesh commented on 2022-08-01 17:32 (UTC)

Thank you for the update. I have changed the path to the new default. Please let me know if there are any issues.

pcosta commented on 2022-07-31 13:54 (UTC)

NVHPC 22.7 is out: https://developer.download.nvidia.com/hpc-sdk/22.7/nvhpc_2022_227_Linux_x86_64_cuda_multi.tar.gz

The default installation path in the NVIDIA docs (and in many packages that use NVHPC) is /opt/nvidia/hpc_sdk, not /opt/nvidia. Perhaps the installation directory here could be changed for consistency? https://docs.nvidia.com/hpc-sdk/hpc-sdk-install-guide/index.html
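
For reference, the setup in the install guide follows this pattern (a sketch using that documented prefix and the 22.7 version above; adapt NVARCH and the version to your system):

 # sketch of the documented environment setup, assuming /opt/nvidia/hpc_sdk
 NVARCH=Linux_x86_64
 NVCOMPILERS=/opt/nvidia/hpc_sdk
 export PATH="$NVCOMPILERS/$NVARCH/22.7/compilers/bin:$PATH"
 export MANPATH="$MANPATH:$NVCOMPILERS/$NVARCH/22.7/compilers/man"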

jayesh commented on 2022-06-17 11:09 (UTC)

https://aur.archlinux.org/packages/nvhpc-22.5

For now, I have this version-pinned package, which should work in its place.
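
The usual AUR workflow should work with it:

 # build and install the pinned package from the AUR
 git clone https://aur.archlinux.org/nvhpc-22.5.git
 cd nvhpc-22.5
 makepkg -si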

jayesh commented on 2022-06-16 21:32 (UTC) (edited on 2022-06-16 21:33 (UTC) by jayesh)

https://gitlab.com/badwaik/archlinux/aur/nvhpc

I have bumped the PKGBUILD to version 22.5. Please merge it. I volunteer to be the maintainer if you are so inclined.

dront78 commented on 2022-06-05 08:38 (UTC)

Is this package still alive? 22.5 has been released.