Package Details: gromacs 2024.1-2
| Git Clone URL: | https://aur.archlinux.org/gromacs.git (read-only) |
|---|---|
| Package Base: | gromacs |
| Description: | A versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. |
| Upstream URL: | http://www.gromacs.org/ |
| Keywords: | chemistry science simulations |
| Licenses: | LGPL |
| Submitter: | xyproto |
| Maintainer: | hseara (vedranmiletic) |
| Last Packager: | vedranmiletic |
| Votes: | 23 |
| Popularity: | 0.000232 |
| First Submitted: | 2011-12-14 17:03 (UTC) |
| Last Updated: | 2024-05-06 19:04 (UTC) |
Dependencies (13)
- gcc12 (AUR)
- hwloc
- lapack (aocl-libflame-aocc (AUR), blas-mkl (AUR), lapack-git (AUR), atlas-lapack (AUR), blas-aocl-gcc (AUR), blas-aocl-aocc (AUR), openblas-lapack (AUR), aocl-libflame (AUR), blas-openblas)
- zlib (zlib-ng-compat-git (AUR), zlib-git (AUR), zlib-ng-compat (AUR))
- cmake (cmake-git (AUR)) (make)
- hwloc (make)
- libxml2 (libxml2-git (AUR), libxml2-2.9 (AUR)) (make)
- cuda (cuda11.1 (AUR)) (optional) – Nvidia GPU support
- opencl-clover-mesa (mesa-nightly-nvk-rusticl-intelrt-git (AUR), amdonly-gaming-opencl-clover-mesa-git (AUR), mesa-git (AUR)) (optional) – OpenCL support for AMD/Intel GPUs
- opencl-nvidia (opencl-nvidia-410xx (AUR), opencl-nvidia-340xx (AUR), opencl-nvidia-440xx (AUR), opencl-nvidia-430xx (AUR), opencl-510xx-nvidia (AUR), opencl-nvidia-vulkan (AUR), opencl-nvidia-tesla (AUR), opencl-nvidia-470xx (AUR), opencl-nvidia-390xx (AUR), opencl-nvidia-beta (AUR), opencl-nvidia-535xx (AUR), opencl-nvidia-525xx (AUR)) (optional) – OpenCL support for Nvidia GPUs
- opencl-rusticl-mesa (mesa-nightly-nvk-rusticl-intelrt-git (AUR), amdonly-gaming-opencl-rusticl-mesa-git (AUR), opencl-rusticl-mesa-minimal-git (AUR), mesa-git (AUR)) (optional) – OpenCL support for AMD/Intel GPUs
- perl (perl-git (AUR)) (optional) – needed for demux.pl and xplor2gmx.pl
- vmd (AUR) (vmd-bin (AUR), vmd (AUR), vmd-src (AUR)) (optional) – Accessibility to other trajectory formats (ONLY WHEN COMPILING)
Latest Comments
JBauer commented on 2023-10-23 12:21 (UTC)
Addendum: $CXXFLAGS needs to be reset as well.
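The snippet itself is missing from the comment; a minimal sketch of what such a reset could look like in the PKGBUILD, assuming the same approach as for CFLAGS (the starting flag values below are illustrative, not the commenter's originals):

```shell
# Illustrative only (assumed starting flags): lower the fortify level in
# CFLAGS, then mirror the result into CXXFLAGS so the C++ side of the
# build sees the same setting.
CFLAGS="-march=x86-64 -O2 -D_FORTIFY_SOURCE=2"
export CFLAGS="${CFLAGS/-D_FORTIFY_SOURCE=2/-D_FORTIFY_SOURCE=1}"
export CXXFLAGS="$CFLAGS"
echo "$CXXFLAGS"
```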
JBauer commented on 2023-10-23 07:59 (UTC)
@dalima
I think you were lucky. I still got the same error when using CC=/opt/cuda/bin/gcc and CXX=/opt/cuda/bin/g++.
This isn't really surprising, since those are actually just symlinks to /usr/bin/gcc-12 and /usr/bin/g++-12. Did you use makepkg, or did you build the software by hand?
@hseara
The source of the problem seems to be the default CFLAGS used by makepkg, found in /etc/makepkg.conf. They are:
I was able to make the core dump go away by changing -D_FORTIFY_SOURCE=2 to -D_FORTIFY_SOURCE=1. The documentation for this macro says that level 2 introduces some run-time memory checks which might cause some otherwise well-behaved programs to fail; level 1 only does compile-time checking.
I was able to get the package to build and run without the error by adding the following to the PKGBUILD just after the CC and CXX lines:
(I dropped -march=x86-64 because the FFT library wouldn't compile properly with it. -mtune=native and -O3 resulted in a small performance boost during testing.)
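The exact lines did not survive in the comment; a plausible reconstruction from the changes described above (-D_FORTIFY_SOURCE=1, no -march=x86-64, -mtune=native, -O3) — treat the full flag set as an assumption, not the commenter's verbatim text:

```shell
# Reconstruction, not the commenter's verbatim lines: override makepkg's
# default flags right after the CC/CXX exports in the PKGBUILD.
export CFLAGS="-mtune=native -O3 -pipe -fno-plt -D_FORTIFY_SOURCE=1"
export CXXFLAGS="$CFLAGS"
```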
dalima commented on 2023-09-07 19:23 (UTC)
@hseara
I had the exact same error you mentioned there. I got past it by specifying CUDA's own version of gcc and g++ before using cmake:
export CC=/opt/cuda/bin/gcc
export CXX=/opt/cuda/bin/g++
Also, I think there is a typo: -GMX_GPU should be -DGMX_GPU.
I don't know if that fixes it on all computers, or if I just got lucky. CUDA version: 12.2.0-1.
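Put together, the workaround plus the corrected flag would look something like this (the /opt/cuda paths match the Arch cuda package layout; the build-directory setup is an assumption):

```shell
# Sketch of the full workaround: point the build at CUDA's bundled host
# compilers, then configure with the corrected -DGMX_GPU flag.
export CC=/opt/cuda/bin/gcc
export CXX=/opt/cuda/bin/g++
# cmake .. -DGMX_GPU=CUDA    # configure step, shown for context
```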
hseara commented on 2022-10-31 09:01 (UTC)
Description:
For quite some time now, using GROMACS with CUDA on Arch Linux results in a core dump. If I install the package using spack, it runs without problems. This means the problem is somewhere in Arch's cuda/gcc11 stack. Does anyone have a clue what is going on?
Additional info:
* package version(s): gromacs@2022.2, cuda 11.8 (it also happened with previous 11.7 versions), gcc@11.3.0
* config and/or log files etc.
Steps to reproduce:
vedranmiletic commented on 2022-01-31 11:19 (UTC)
Could you update to version 2021.5? On my machine, 2021.4 fails the GammaDistributionTest.Output test in double precision; 2021.5 passes all tests.
E3LDDfrK commented on 2020-03-24 08:27 (UTC) (edited on 2020-05-04 14:58 (UTC) by E3LDDfrK)
@gardotd426 What changes did you make? Did you check this: https://bbs.archlinux.org/viewtopic.php?pid=1870450#p1870450 ? I think adding options=(!buildflags) to the PKGBUILD is the easiest way to solve this.
I think some people overlook that others install the gromacs package just to analyze data, without actually running the simulations on their computers. (In the sense that I use my old laptop, which doesn't support AVX2, to analyze/manipulate the trajectories. Pretty sure I'm not alone in this.)
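A PKGBUILD carrying that suggestion would contain a line like the following (an illustration of the mechanism, not an excerpt from the actual gromacs PKGBUILD):

```shell
# Illustration: tell makepkg not to inject its default C/C++ flags, so the
# build system can pick SIMD flags appropriate for the target machine.
options=(!buildflags)
```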
gardotd426 commented on 2020-03-23 08:13 (UTC)
This fails to build, even after changing /etc/makepkg.conf as suggested below. /proc/cpuinfo shows that my CPU does support AVX2...
cat /proc/cpuinfo:
hseara commented on 2019-12-13 11:47 (UTC)
@brisvag Please recompile gromacs. hwloc has been updated from v1 to v2, which breaks the gromacs installation. Recompiling gromacs solves the issue.
brisvag commented on 2019-12-10 14:09 (UTC) (edited on 2019-12-10 14:10 (UTC) by brisvag)
When I try to use gmx dump on a .tpr file, I get the following error:
Indeed, the library is missing. Simply adding a symlink to libhwloc.so with .5 appended at the end solves the issue. Shouldn't this be handled by the installation?
E3LDDfrK commented on 2019-10-24 17:05 (UTC) (edited on 2019-10-25 23:27 (UTC) by E3LDDfrK)
EDIT 3: A solution here: https://bbs.archlinux.org/viewtopic.php?pid=1870411#p1870411
I have the same error as @mefistofeles.
EDIT 2: And @malinke too. It's partly because my CPU doesn't support AVX2, I think. There's a double "-march=core-avx2 -march=native" when compiling fftw, and it fails because my CPU (the "-march=native" part) doesn't actually support AVX2.
I've also changed my /etc/makepkg.conf as @hseara suggested.
It's a Thinkpad X220 with Sandy Bridge, so I used -DGMX_SIMD=AVX_256 in my PKGBUILD. At least in my case, the computer doesn't support AVX2, so it failed to compile fftw. I'm not sure what to do here; it seems fftw is automatically compiled with "--enable-avx2". I'll try again later without -DGMX_BUILD_OWN_FFTW=ON in the PKGBUILD.
EDIT 1: It builds successfully when I replace -DGMX_BUILD_OWN_FFTW=ON with -DGMX_FFT_LIBRARY=fftw3 in the PKGBUILD. Not sure if using the pacman-installed fftw with GROMACS will lead to problems later on.
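Collected in one place, the configuration EDIT 1 describes might look like this (the variable name is mine; the flags would be passed to cmake from a GROMACS build directory):

```shell
# Illustration: the two cmake options discussed above, for a Sandy Bridge
# machine (AVX but no AVX2), using the system fftw instead of building the
# bundled copy.
GMX_CMAKE_FLAGS="-DGMX_SIMD=AVX_256 -DGMX_FFT_LIBRARY=fftw3"
echo "$GMX_CMAKE_FLAGS"
```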
For what it's worth, like @mefistofeles, I also tried to manually build the package using cmake and -DGMX_BUILD_OWN_FFTW=ON, and it worked. I wonder what the problem is. From what I can tell, gromacs-2019.4/src/external/build-fftw/CMakeLists.txt says it will build fftw with either just "--enable-sse2" or the whole "--enable-sse2;--enable-avx;--enable-avx2"; "--enable-avx" always comes with "--enable-avx2". But it worked this time, despite my CPU not supporting AVX2. When running cmake, it also outputted:
Just for comparison: this line from the manual install succeeds:
This line from the AUR build using pikaur fails:
So possibly the double "-march" is what makes it fail? Just "-march=core-avx2" vs "-march=core-avx2 -march=native", because my Sandy Bridge (the "-march=native" part) doesn't actually support AVX2.