Package Details: gromacs 2023.2-1

Git Clone URL: https://aur.archlinux.org/gromacs.git (read-only)
Package Base: gromacs
Description: A versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.
Upstream URL: http://www.gromacs.org/
Keywords: chemistry science simulations
Licenses: LGPL
Submitter: xyproto
Maintainer: hseara (vedranmiletic)
Last Packager: vedranmiletic
Votes: 24
Popularity: 0.000335
First Submitted: 2011-12-14 17:03 (UTC)
Last Updated: 2023-08-04 19:09 (UTC)

Dependencies (11)

Required by (1)

Sources (1)

Latest Comments

malinke commented on 2017-09-21 13:37 (UTC)

One problem is that in `/etc/makepkg.conf` the CFLAGS and CXXFLAGS contain `-march=x86-64 -mtune=generic`. GROMACS does not consult these flags when deciding, during the CMake configuration stage, whether AVX should be activated, so (on my machine) it tries to build with AVX instructions. This results in the C/C++ compiler being invoked with `-march=cpu-avx -march=x86-64 -mtune=generic`. Since only the last `-march`/`-mtune` takes effect, this deactivates the AVX build, and the build then stops, complaining that it does not know about the AVX functions. Overwriting the flags in the PKGBUILD lets the compilation finish. The CFLAGS I have are the defaults for every user: https://git.archlinux.org/svntogit/packages.git/tree/trunk/makepkg.conf?h=packages/pacman
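A minimal sketch of the workaround described above, overriding the conflicting default flags inside the PKGBUILD (the `build()` layout and source path are assumptions; adapt them to the real PKGBUILD):

```shell
# Sketch of a PKGBUILD build() override: strip the generic -march/-mtune
# defaults inherited from /etc/makepkg.conf so that the -march flag added
# by GROMACS's own SIMD detection is the one that takes effect.
build() {
  CFLAGS="${CFLAGS//-march=x86-64 -mtune=generic/}"
  CXXFLAGS="${CXXFLAGS//-march=x86-64 -mtune=generic/}"

  cd "$srcdir/gromacs-$pkgver"   # hypothetical path; match the real PKGBUILD
  cmake . -DCMAKE_INSTALL_PREFIX=/usr
  make
}
```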

malinke commented on 2017-09-21 12:31 (UTC)

Gromacs doesn't build with gcc5. Using gcc5 I get the following error:

```
gcc-5: error: unrecognized command line option ‘-fno-plt’
```

I had no problem building and running gromacs with gcc7.

hseara commented on 2017-09-13 17:23 (UTC) (edited on 2017-09-24 15:35 (UTC) by hseara)

Compiling gromacs with CUDA support will fail after the update to glibc-2.26, which is incompatible with CUDA 8; it is not clear whether the upcoming CUDA 9 will solve this problem. (NEWS: a dirty fix implemented in the CUDA package allows compilation again.)

hseara commented on 2017-04-28 16:42 (UTC) (edited on 2017-09-24 15:34 (UTC) by hseara)

At least for Nvidia, Gromacs detects CUDA-capable cards. If you have CUDA installed, it includes CUDA capability without the need for any flag. This default behavior makes the current Gromacs installation platform agnostic, and I do not wish to enable any flag that could change that. If Gromacs does not activate OpenCL on your system by default, please write a bug report to the Gromacs people so they provide functionality similar to what exists for CUDA. You can always edit the PKGBUILD before installing to include your flags; note that you will also need to tell gromacs which compilers to use.

jaw8621 commented on 2017-04-27 11:44 (UTC)

Now that LLVM and Clang 4.0 are out, gromacs should have the CMake OpenCL compile flags turned on by default. OpenCL works since gromacs 5.1, with both AMD and Nvidia, as soon as a suitable GPU and driver are present. Flags: `-DGMX_GPU=ON -DGMX_USE_OPENCL=ON`
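For anyone who wants to try this before it is enabled by default, a sketch of adding the two flags to the package build (the surrounding cmake invocation is an assumption; match it to the actual line in the current PKGBUILD):

```shell
# Hypothetical excerpt of the PKGBUILD's cmake step: the last two options
# are the OpenCL flags suggested above; the rest are placeholders.
cmake .. \
  -DCMAKE_INSTALL_PREFIX=/usr \
  -DGMX_GPU=ON \
  -DGMX_USE_OPENCL=ON
```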

jaw8621 commented on 2017-02-27 19:53 (UTC)

Ah, thanks. Sorry for my ignorance, new to the Arch world :) I think, to clarify this, the message should read something like this:

```
###### CMAKE OPTIONS DISABLED BY DEFAULT ##########
# If you are using an AVX2-capable CPU, you will  #
# not have AVX2 binaries unless you set -march to #
# 'native' or to your respective architecture     #
# flag:                                           #
# https://gcc.gnu.org/onlinedocs/gcc-5.3.0/gcc/x86-Options.html#x86-Options
# or just add '-mavx2' to the default compiler    #
# flags in /etc/makepkg.conf:                     #
# https://wiki.archlinux.org/index.php/Makepkg#Architecture.2C_compile_flags
###################################################
```

(Sorry, it turned out to be a little longer ;)) to avoid confusion. Right now the message makes you think -march is set to native when it actually says something x86. Obviously one can just look up /etc/makepkg.conf and find the obvious...

hseara commented on 2017-02-27 12:56 (UTC)

The issue is not when you have `-march=native`. The issue is when you use the default /etc/makepkg.conf file, which contains `-march=x86-64 -mtune=generic`, on AVX2-capable computers. Multi-core compilation is configured in your /etc/makepkg.conf as for any other package. For example, I have `MAKEFLAGS="-j12"`.
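For reference, these are the two /etc/makepkg.conf settings being discussed; a sketch (the exact default CFLAGS may differ between pacman releases, and `-j12` is just the core count of one machine):

```shell
# /etc/makepkg.conf (excerpt, sketch)
CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe"   # replace -march/-mtune, or add -mavx2, for AVX2 builds
CXXFLAGS="${CFLAGS}"
MAKEFLAGS="-j12"   # parallel compilation: -jN, N ≈ number of CPU cores
```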

jaw8621 commented on 2017-02-26 16:51 (UTC) (edited on 2017-02-26 16:52 (UTC) by jaw8621)

What is the issue with:

```
###### CMAKE OPTIONS DISABLED BY DEFAULT ##########
# If you are using a Haswell CPU, you will have   #
# problems compiling with AVX2 support unless you #
# modify march=native in the /etc/makepkg.conf:   #
# https://wiki.archlinux.org/index.php/Makepkg#Architecture.2C_compile_flags
###################################################
```

? I compile on a Haswell machine with `-march=native` and it produces a correct AVX2 executable. Are architectures newer than Haswell also supposedly affected? And how about compiling on more than one core to speed up compile times?

hseara commented on 2016-07-14 20:00 (UTC)

NOTE: PROBLEMS BUILDING GROMACS WITH CUDA SUPPORT AFTER THE GCC6 UPDATE

If your system has CUDA installed, the gromacs installer will by default try to compile with CUDA support. Unfortunately, gcc6 and CUDA currently do not play well together, and the compilation will fail. Because of this, the latest cuda release ships alongside the gcc5 package. Using gcc5 we can again build gromacs with the following commands:

```
export CC=gcc-5
export CXX=g++-5
makepkg
```

hseara commented on 2016-07-14 19:53 (UTC)

Sorry, I'm not sure I really understand your problem. This package is the default gromacs and as such installs into /usr/bin, /usr/include, /usr/lib, and /usr/share. I just built the package and it works perfectly with the files installed in the above directories. Are you sure that you are installing gromacs and not aur/gromacs-5.0-complete or aur/gromacs-4.6-complete, which do indeed install their versions of gromacs in /usr/local? These latter packages are intended to coexist with the latest stable version, in case your projects still require them, and because of that they are encapsulated in /usr/local/gromacs-XXX/. If your problems persist, please provide more details.