

Running large grids using OpenFOAM on XSEDE Kraken

Hi all,

I've been running simulations with OpenFOAM on Kraken for some time now. Recently I tried running a grid with ~10M cells, but ran into memory issues during domain decomposition (using decomposePar). I also tried creating some simple grids with OpenFOAM's blockMesh utility to see how large a grid it could handle. The utility was able to generate grids with ~6M cells; for anything larger it returned this error message:

new cannot satisfy memory request.
This does not necessarily mean you have run out of virtual memory.
It could be due to a stack violation caused by e.g. bad use of pointers or an out of date shared library
Aborted

This was while I was working on the login nodes (which have 8GB of memory). I recently submitted a ticket about this, and tech support suggested I use the compute nodes instead (which have 16GB). That did take care of the memory issue. However, I'm not convinced this is purely a memory issue rather than some other bug, since ~10M cells is not a big grid by today's standards. So I'm curious whether other people running OpenFOAM have encountered this error while working with large grids (>10M cells). I'd really appreciate it if anyone who has seen this could comment here.
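In case it helps anyone else, here is a minimal sketch of the batch script I would use to run the meshing and decomposition on a compute node instead of a login node. The account string, walltime, and OpenFOAM module name are placeholders/assumptions and will need to be adjusted for your own setup:

#!/bin/bash
#PBS -N decompose
#PBS -l size=12,walltime=01:00:00
#PBS -A XXXXXXX                 # placeholder project/account string

cd $PBS_O_WORKDIR

# Assumed module name -- check "module avail" for the actual OpenFOAM module
module load openfoam

# blockMesh and decomposePar are serial, so one task is enough;
# the point is simply to get the compute node's larger memory
aprun -n 1 blockMesh
aprun -n 1 decomposePar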

Thanks,
Karthik

RE: Running large grids using OpenFOAM on XSEDE Kraken
Answer
3/7/14 8:24 PM as a reply to Karthik Rudra Reddy.
Karthik,

I believe I've found a way around decomposePar not running in parallel. The solution is to use redistributePar instead, following the steps described in its header file:

# Create empty processor directories (have to exist for argList)
mkdir processor0
..
mkdir processorN
# Copy undecomposed polyMesh
cp -r constant processor0
# Distribute (ddd = number of processors)
mpirun -np ddd redistributePar -parallel
# On Kraken, use aprun instead of mpirun:
# aprun -n ddd redistributePar -parallel

However, this leaves out a very important line in the second step: you also have to do "cp -r 0 processor0", or the fields will not be distributed with the mesh. Once the run is finished, you also cannot recombine the results with reconstructPar, since that utility cannot be run in parallel either. Instead, you can create a something.openfoam file and read each section of the grid (or all of them at once) from the processor directories in FieldView/ParaView using the OpenFOAM direct reader.
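To keep everything in one place, here is a sketch of the full sequence with that extra line included (the processor count and the .openfoam file name are just example values):

N=16                                      # example processor count
# Create the empty processor directories (have to exist for argList)
for i in $(seq 0 $((N-1))); do mkdir processor$i; done
# Copy the undecomposed polyMesh AND the initial fields
cp -r constant processor0
cp -r 0 processor0
# Distribute the mesh and fields across the processor directories
aprun -n $N redistributePar -parallel     # use mpirun -np $N off Kraken
# After the run, create an empty marker file so ParaView/FieldView can
# open the decomposed case directly with the OpenFOAM reader
touch something.openfoam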

-Mark