
The discussion forums in the XSEDE User Portal are for users to share experiences, questions, and comments with other users and XSEDE staff. Visitors are welcome to browse and search, but you must log in to contribute. While XSEDE staff monitor the lists, XSEDE does not guarantee that questions will be answered. Please note that the forums are not a replacement for formal support or bug reporting through the XSEDE Help Desk.


How to access GROMACS in Expanse

7/20/21 9:20 PM
I found that GROMACS 2019 and 2020 are available on Expanse. I tried to access it by loading all the necessary modules, including gromacs/2019.6. The gmx_mpi executable became available, but I was unable to run even simple commands such as gmx_mpi -version. It shows the following error:

By default, for Open MPI 4.0 and later, infiniband ports on a device
are not used by default. The intent is to use UCX for these devices.
You can override this policy by setting the btl_openib_allow_ib MCA parameter
to true.

Local host: exp-7-60
Local adapter: mlx5_0
Local port: 1

WARNING: There was an error initializing an OpenFabrics device.

Local host: exp-7-60
Local device: mlx5_0
srun: error: PMK_KVS_Barrier duplicate request from task 0
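Based on the message, one way to set the suggested btl_openib_allow_ib parameter before launching is Open MPI's OMPI_MCA_&lt;param&gt; environment-variable convention; this is only a sketch of the override the warning mentions, and I have not confirmed it is the right fix here:

```shell
# Set the MCA parameter named in the warning via Open MPI's
# OMPI_MCA_<param> environment-variable convention, then confirm it.
export OMPI_MCA_btl_openib_allow_ib=true
echo "$OMPI_MCA_btl_openib_allow_ib"   # prints: true
```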

Here are the currently loaded modules:
Currently Loaded Modules:
1) shared 2) DefaultModules 3) gpu/0.15.4 4) openmpi/4.0.4 5) slurm/expanse/20.02.3 6) sdsc/1.0 7) gromacs/2019.6

Inactive Modules:
1) gcc/10.2.0

I would appreciate it if anyone could tell me how to run a GROMACS job on Expanse.
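For reference, here is a minimal sketch of the kind of batch script I have been trying. The account, partition, and resource values are placeholders, and loading gcc/10.2.0 before openmpi/4.0.4 is only my guess at resolving the inactive gcc module shown above:

```shell
#!/bin/bash
# Hypothetical Slurm job script for Expanse; account and partition
# names below are placeholders, not verified cluster settings.
#SBATCH --job-name=gmx-test
#SBATCH --account=ABC123          # placeholder allocation ID
#SBATCH --partition=compute       # assumed CPU partition name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --time=00:10:00

# Module names are taken from my session above; the order (compiler
# before MPI before GROMACS) is an assumption on my part.
module purge
module load slurm/expanse/20.02.3 gcc/10.2.0 openmpi/4.0.4 gromacs/2019.6

srun gmx_mpi -version
```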
Sanjoy Paul
Boston University