Package Details: gromacs 2024.4-2

Git Clone URL: https://aur.archlinux.org/gromacs.git (read-only)
Package Base: gromacs
Description: A versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.
Upstream URL: http://www.gromacs.org/
Keywords: chemistry science simulations
Licenses: LGPL-2.1-only
Submitter: xyproto
Maintainer: hseara (vedranmiletic)
Last Packager: vedranmiletic
Votes: 23
Popularity: 0.000003
First Submitted: 2011-12-14 17:03 (UTC)
Last Updated: 2024-11-08 05:39 (UTC)

Dependencies (14)

Required by (1)

Sources (1)

Latest Comments


hseara commented on 2024-11-16 16:23 (UTC) (edited on 2024-11-16 16:27 (UTC) by hseara)

Hi all, I am again having problems compiling with CUDA. I think it is still connected to compiling with gcc13. If I disable link-time optimization (!lto), I can compile successfully:

options=('!libtool' '!lto') 

I can also run the executables without the issues described in the comments below. Could someone confirm that this is not only a problem with my setup and that the proposed solution is universal?
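For anyone applying this locally, a minimal sketch of where the override sits in a copy of the PKGBUILD is shown below; the surrounding fields are placeholders taken from the package header above, not the real file:

# Local PKGBUILD sketch; only the options array matters here
pkgname=gromacs
pkgver=2024.4
pkgrel=2
options=('!libtool' '!lto')   # '!lto' turns off link-time optimization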

These are my makepkg.conf flags:

CFLAGS="-march=native -O2 -pipe -fno-plt -fexceptions \
        -Wp,-D_FORTIFY_SOURCE=3 -Wformat -Werror=format-security \
        -fstack-clash-protection -fcf-protection \
        -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer"
CXXFLAGS="$CFLAGS -Wp,-D_GLIBCXX_ASSERTIONS"

vedranmiletic commented on 2024-09-25 05:35 (UTC)

@e-kwsm You are right. I will push an update soon.

e-kwsm commented on 2024-08-21 19:07 (UTC)

PKGBUILD:41:  cmake ../gromacs-v${pkgver}/ \
PKGBUILD:42:        -DCMAKE_INSTALL_PREFIX=/usr \
PKGBUILD:43:        -DCMAKE_INSTALL_LIBDIR=lib \
PKGBUILD:44:        -DGMX_DOUBLE=ON \
PKGBUILD:45:        #-DGMX_BUILD_OWN_FFTW=ON \
PKGBUILD:46:  # For AVX2 and AVX512 support, uncomment the previous line
PKGBUILD:47:        -DGMX_HWLOC=ON \
PKGBUILD:48:        -DREGRESSIONTEST_DOWNLOAD=ON

Here the command is malformed: line 45 starts with #, so its trailing backslash is part of the comment and the continuation is broken. As a result the flags on lines 47-48 never reach cmake; the shell tries to run them as a separate command instead.
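For reference, one way the block could be rewritten so that every flag actually reaches cmake is sketched below. The optional FFTW flag is kept as a plain comment above the call, so enabling or disabling it can no longer break the backslash continuation; the flag values are simply the ones quoted above, not necessarily the current PKGBUILD:

# For AVX2 and AVX512 support, add -DGMX_BUILD_OWN_FFTW=ON to the call below
cmake ../gromacs-v${pkgver}/ \
      -DCMAKE_INSTALL_PREFIX=/usr \
      -DCMAKE_INSTALL_LIBDIR=lib \
      -DGMX_DOUBLE=ON \
      -DGMX_HWLOC=ON \
      -DREGRESSIONTEST_DOWNLOAD=ON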

Sophon96 commented on 2024-08-17 20:43 (UTC)

I was able to get the assertion failure to go away just by not defining _GLIBCXX_ASSERTIONS (while keeping _FORTIFY_SOURCE=3).
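In makepkg.conf terms, that corresponds to something like the sketch below; the flag values are taken from the flag sets quoted elsewhere in these comments, so adapt them to your own configuration:

CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe -fno-plt -fexceptions \
        -Wp,-D_FORTIFY_SOURCE=3 -Wformat -Werror=format-security \
        -fstack-clash-protection -fcf-protection"
# CXXFLAGS deliberately omits -Wp,-D_GLIBCXX_ASSERTIONS
CXXFLAGS="$CFLAGS"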

JBauer commented on 2023-10-23 12:21 (UTC)

Addendum: $CXXFLAGS also needs to be reset:

export CXXFLAGS="$CFLAGS"

JBauer commented on 2023-10-23 07:59 (UTC)

@dalima

I think you were lucky. I still got the same error when using CC=/opt/cuda/bin/gcc and CXX=/opt/cuda/bin/g++.

This isn't really surprising since those are actually just symlinks to /usr/bin/gcc-12 and /usr/bin/g++-12. Did you use makepkg or did you build the software by hand?

@hseara

The source of the problem seems to be the default CFLAGS used by makepkg, found in /etc/makepkg.conf:

CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe -fno-plt -fexceptions \
        -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security \
        -fstack-clash-protection -fcf-protection"

I was able to make the core dump go away by changing -D_FORTIFY_SOURCE=2 to -D_FORTIFY_SOURCE=1. The documentation for this macro says that level 2 introduces some run-time memory checks which might cause some otherwise well-behaved programs to fail, while level 1 only does compile-time checking.

I was able to get the package to build and run without the error by adding the following to the PKGBUILD just after the CC and CXX lines:

export CFLAGS="-mtune=native -O3 -pipe -fno-plt -fexceptions \
   -Wp,-D_FORTIFY_SOURCE=1 -Wformat -Werror=format-security \
   -fstack-clash-protection -fcf-protection"

(I dropped -march=x86-64 because the FFT library wouldn't compile properly with it. -mtune=native and -O3 resulted in a small performance boost during testing.)
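Put together with the addendum above, the overrides could sit in build() roughly as sketched below. The CC/CXX values are the ones discussed in these comments, not necessarily what the PKGBUILD actually ships, and the rest of build() is elided:

build() {
  export CC=/opt/cuda/bin/gcc
  export CXX=/opt/cuda/bin/g++
  export CFLAGS="-mtune=native -O3 -pipe -fno-plt -fexceptions \
     -Wp,-D_FORTIFY_SOURCE=1 -Wformat -Werror=format-security \
     -fstack-clash-protection -fcf-protection"
  export CXXFLAGS="$CFLAGS"   # reset CXXFLAGS too, per the addendum above
  # ... the original cmake invocation and the rest of build() follow here
}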

dalima commented on 2023-09-07 19:23 (UTC)

@hseara

I had the exact same error you mentioned there. I got past it by specifying cuda's own version of gcc and g++ before using cmake:

export CC=/opt/cuda/bin/gcc
export CXX=/opt/cuda/bin/g++

Also, I think there is a typo where -GMX_GPU should be -DGMX_GPU.

I don't know if that fixes it on all computers or if I just got lucky. CUDA version: 12.2.0-1
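For clarity, the corrected flag would appear with the -D prefix roughly as below. This is only an illustrative fragment: the CUDA value is the usual GPU backend selector in recent GROMACS releases and is not copied from the PKGBUILD:

cmake ../gromacs-v${pkgver}/ \
      -DGMX_GPU=CUDA    # note the -D prefix; a bare -GMX_GPU is parsed as a cmake -G generator option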

hseara commented on 2022-10-31 09:01 (UTC)

Description:

For quite some time now, using GROMACS with CUDA on Arch Linux has resulted in a core dump.

/usr/lib/gcc/x86_64-pc-linux-gnu/11.3.0/include/c++/bits/unique_ptr.h:407: typename std::add_lvalue_reference<_Tp>::type std::unique_ptr<_Tp, _Dp>::operator*() const [with _Tp = DeviceStream; _Dp = std::default_delete<DeviceStream>; typename std::add_lvalue_reference<_Tp>::type = DeviceStream&]: Assertion 'get() != pointer()' failed.
Aborted (core dumped)

If I install the package using Spack, it runs without problems. This means that the problem is somehow in the cuda/gcc11 combination on Arch. Does anyone have a clue what is going on?

Additional info:

  • package version(s): gromacs@2022.2, cuda 11.8 (it also happened with previous 11.7 versions), gcc@11.3.0
  • config and/or log files etc.
  • link to upstream bug report, if any: N/A

Steps to reproduce:

$ gmx mdrun -v -deffnm step5_11
:-) GROMACS - gmx mdrun, 2022.3-dev (-:

Executable: /usr/bin/gmx
Data prefix: /usr
Working dir: /home/hector/test_gromacs
Command line:
gmx mdrun -v -deffnm step5_11

Reading file step5_11.tpr, VERSION 2022.2 (single precision)
Changing nstlist from 20 to 100, rlist from 1.224 to 1.346

1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 1 rank on this node:
PP:0,PME:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the CPU
PME tasks will do all aspects on the GPU
Using 1 MPI thread
Using 16 OpenMP threads

/usr/lib/gcc/x86_64-pc-linux-gnu/11.3.0/include/c++/bits/unique_ptr.h:407: typename std::add_lvalue_reference<_Tp>::type std::unique_ptr<_Tp, _Dp>::operator*() const [with _Tp = DeviceStream; _Dp = std::default_delete<DeviceStream>; typename std::add_lvalue_reference<_Tp>::type = DeviceStream&]: Assertion 'get() != pointer()' failed.
Aborted (core dumped)

vedranmiletic commented on 2022-01-31 11:19 (UTC)

Could you update to version 2021.5? On my machine, 2021.4 fails the GammaDistributionTest.Output test in double precision, while 2021.5 passes all tests.

E3LDDfrK commented on 2020-03-24 08:27 (UTC) (edited on 2020-05-04 14:58 (UTC) by E3LDDfrK)

@gardotd426 What changes did you make? Did you check https://bbs.archlinux.org/viewtopic.php?pid=1870450#p1870450 ? I think adding options=(!buildflags) to the PKGBUILD is the easiest way to solve this.

I think some people overlook that others sometimes install the gromacs package just to analyze data, without actually running the simulations on their own computers. (In the sense that I use my old laptop, which doesn't support AVX2, to analyze/manipulate the trajectories. I'm pretty sure I'm not alone in this.)
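As a sketch, that suggestion is a one-line change in a local copy of the PKGBUILD; the exact placement relative to any existing options entries may differ:

options=(!buildflags)   # stop makepkg from exporting its CFLAGS/CXXFLAGS, so GROMACS picks its own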