
XSEDE-allocated Stampede2 supercomputer simulations shed light on meteor physics 

By Jorge Salazar, Texas Advanced Computing Center 

 

XSEDE-allocated Stampede2 simulations are helping reveal the physics of what happens when a meteor strikes the atmosphere. Credit: CC BY-SA 4.0 (Jacek Halicki)

In the heavens above, it's raining dirt. 

Every second, millions of pieces of dirt smaller than a grain of sand strike Earth's upper atmosphere. At about 100 kilometers altitude, these bits of dust, mainly debris from asteroid collisions, zing through the sky at 10 to 100 times the speed of a bullet, vaporizing as they go. The bigger ones make streaks in the sky, meteors that take our breath away.

Scientists are using supercomputers to understand how tiny meteors, invisible to the naked eye, liberate electrons that can be detected by radar, allowing the speed, direction, and deceleration rate of a meteor to be characterized with high precision and its origin determined. Because this falling space dust helps seed rain-making clouds, basic research on meteors will help scientists more fully understand the chemistry of Earth's atmosphere. What's more, meteor composition helps astronomers characterize the space environment of our solar system.

Why It's Important

Meteors play an important role in upper atmospheric science, not just for the Earth but for other planets as well. They allow scientists to diagnose what's in the air using pulsed-laser remote sensing (lidar), which bounces off meteor dust to reveal the temperature, density, and winds of the upper atmosphere.

"Supercomputers give scientists the power to investigate in detail the real physical processes, not simplified toy models. They're ultimately a tool for numerically testing ideas and coming to a better understanding of the nature of meteor physics and everything in the universe." – Meers Oppenheim, professor of Astronomy at Boston University

Scientists also use radar to track the plasma generated by meteors, determining how fast winds are moving in the upper atmosphere by how fast the plasma is pushed around. It's a region that's impossible to study with satellites, because atmospheric drag at these altitudes would cause a spacecraft to re-enter the atmosphere.

The meteor research was published in June 2021 in the Journal of Geophysical Research: Space Physics, a journal of the American Geophysical Union.

In it, lead author Glenn Sugar of Johns Hopkins University developed computer simulations to model the physics of what happens when a meteor hits the atmosphere. The meteor heats up and sheds material at hypersonic speeds in a process called ablation. The shed material slams into atmospheric molecules and turns into glowing plasma.

"What we're trying to do with the simulations of the meteors is mimic that very complex process of ablation, to see if we understand the physics going on; and to also develop the ability to interpret high-resolution observations of meteors, primarily radar observations of meteors," said study co-author Meers Oppenheim, professor of Astronomy at Boston University. 

Large radar dishes, such as the iconic but now defunct Arecibo radar telescope, have recorded multiple meteors per second in a tiny patch of sky. According to Oppenheim, this means the Earth is being hit by millions and millions of meteors every second.

"Interpreting those measurements has been tricky," he said. "Knowing what we're looking at when we see these measurements is not so easy to understand." 

The simulations in the paper set up a box representing a chunk of atmosphere. In the middle of the box sits a tiny meteor spewing out atoms. Particle-in-cell, finite-difference time-domain simulations were then used to generate density distributions of the plasma that forms as the meteor atoms' electrons are stripped off in collisions with air molecules.
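To make the finite-difference time-domain half of that recipe concrete, here is a minimal one-dimensional Yee-scheme field update in Python. It is only an illustrative sketch of the FDTD idea in normalized units; the grid size, time step, and pulsed source are arbitrary assumptions, not parameters from the meteor simulations.

```python
# Minimal 1-D FDTD (Yee) update -- an illustrative sketch of the finite-difference
# time-domain idea only, not the meteor head-plasma code used in the study.
import numpy as np

nx, nt = 400, 1000           # grid cells and time steps (arbitrary)
dx = 1.0                     # grid spacing (normalized units, c = 1)
dt = 0.5 * dx                # Courant-stable time step

Ez = np.zeros(nx)            # electric field on integer grid points
Hy = np.zeros(nx - 1)        # magnetic field on half-integer grid points

for n in range(nt):
    # leapfrog: update H from the spatial difference of E, then E from H
    Hy += (dt / dx) * (Ez[1:] - Ez[:-1])
    Ez[1:-1] += (dt / dx) * (Hy[1:] - Hy[:-1])
    # inject a short Gaussian pulse near the left edge (a "hard" source)
    Ez[20] += np.exp(-((n - 30) / 10.0) ** 2)

print("peak |Ez| on the grid after propagation:", np.abs(Ez).max())
```

In the published work, field updates of this kind are paired with a particle-in-cell treatment of the plasma, as described above, which is what lets a simulated radar pulse scatter off the meteor's head plasma.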

The electric field magnitude squared of the total field (top row) and scattered field (bottom row) in the yz-plane (left column) and xy-plane (right column) after an incident pulse encounters the overdense meteor center, using the Dimant-Oppenheim model plasma distribution. Credit: Sugar et al.

"Radars are really sensitive to free electrons," Oppenheim explained. "You make a big, conical plasma that develops immediately in front of the meteoroid and then gets swept out behind the meteoroid. That then is what the radar observes. We want to be able to go from what the radar has observed back to how big that meteoroid is. The simulations allow us to reverse engineer that." 

The goal is to be able to look at the signal strength of radar observations and extract physical characteristics of the meteor, such as size and composition.
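The figure caption above refers to an "overdense" meteor center, meaning the electron density is high enough that the local plasma frequency exceeds the radar frequency, so the pulse reflects strongly instead of passing through. The short sketch below computes that threshold for an assumed head-plasma density and an assumed VHF radar frequency; both numbers are illustrative, not values from the paper.

```python
# Electron plasma frequency vs. an assumed radar frequency -- illustrative
# numbers only, not values from the Sugar et al. study.
import math

e    = 1.602e-19    # electron charge (C)
m_e  = 9.109e-31    # electron mass (kg)
eps0 = 8.854e-12    # vacuum permittivity (F/m)

n_e = 1e14          # assumed electron density in the meteor head plasma (m^-3)
f_pe = math.sqrt(n_e * e**2 / (eps0 * m_e)) / (2.0 * math.pi)

f_radar = 50e6      # assumed VHF meteor-radar frequency (Hz)
print(f"plasma frequency ~ {f_pe / 1e6:.0f} MHz vs radar at {f_radar / 1e6:.0f} MHz")
print("overdense: pulse is strongly reflected" if f_pe > f_radar
      else "underdense: pulse mostly passes through")
```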

"Up to now we've only had very crude estimates of that. The simulations allow us to go beyond the simple crude estimates," Oppenheim said. 

How XSEDE Helped

"Analytical theory works really well when you can say, 'Okay, this single phenomenon is happening, independently of these other phenomena.' But when it's all happening at once, it becomes so messy. Simulations become the best tool," Oppenheim said.  

Oppenheim was awarded supercomputer time by the Extreme Science and Engineering Discovery Environment (XSEDE)  on TACC's Stampede2 supercomputer for the meteor simulations. 

"Now we're really able to use the power of Stampede2 — these giant supercomputers — to evaluate meteor ablation in incredible detail," said Oppenheim. "XSEDE made this research possible by making it easy for me, the students, and research associates to take advantage of the supercomputers."  

"The systems are well run," he added. "We use many mathematical packages and data storage packages. They're all pre-compiled and ready for us to use on XSEDE. They also have good documentation. And the XSEDE staff has been very good. When we run into a bottleneck or hurdle, they're very helpful. It's been a terrific asset to have." 

Astronomers are leaps and bounds ahead of where they were 20 years ago in terms of being able to model meteor ablation. Oppenheim pointed to a 2020 study led by Boston University undergraduate Gabrielle Guttormsen that simulates tiny-meteor ablation to see how fast the meteor heats up and how much material bubbles away.

Representative plasma frequency distributions used in meteor ablation simulations. Credit: Sugar et al.

Meteor ablation physics is very hard to do with pen-and-paper calculations because meteors are incredibly inhomogeneous, said Oppenheim. "You're essentially modeling explosions. All this physics is happening in milliseconds, hundreds of milliseconds for the bigger ones, and for the bolides, the giant fireballs that can last a few seconds, we're talking seconds. They're explosive events."

Oppenheim's team models ablation starting at picosecond time scales, the scale on which the meteor disintegrates and its atoms interact as air molecules slam into them. The meteors often travel at ferocious speeds of 50 kilometers per second or even up to 70 kilometers per second.

Oppenheim outlined three different types of simulations he conducts to attack the meteor ablation problem. First, he uses molecular dynamics, which tracks individual atoms as air molecules slam into the small particles at picosecond time resolution. Next, he uses a different simulator to watch what happens as those molecules fly away, slam into air molecules, and become a glowing plasma. Finally, he takes that plasma and launches a virtual radar at it, listening for the echoes.

So far, he hasn't been able to combine these three simulations into one. It's what he describes as a "stiff problem," with too many timescales for today's technology to handle in one self-consistent simulation.

Oppenheim said he plans to apply for supercomputer time on TACC's NSF-funded Frontera supercomputer, the fastest academic supercomputer on the planet. "Stampede2 is good for lots of smaller test runs, but if you have something really massive, Frontera is meant for that," he said. 

Said Oppenheim: "Supercomputers give scientists the power to investigate in detail the real physical processes, not simplified toy models. They're ultimately a tool for numerically testing ideas and coming to a better understanding of the nature of meteor physics and everything in the universe." 

 

NSF Grants AGS-1244842, AGS-1056042, AGS-1755020, AGS-1833209, and AGS-1754895. XSEDE allocation TG-ATM100026.

 

At A Glance

 

  • Researchers developed supercomputer simulations of electromagnetic waves scattering off meteor head plasma.
  • Science team awarded XSEDE allocations on TACC's Stampede2 system.
  • Results published in the Journal of Geophysical Research: Space Physics (June 2021), a journal of the American Geophysical Union.

Current Campus Champions

Current Campus Champions listed by institution. Participation as either an Established Program to Stimulate Competitive Research (EPSCoR) or as a minority-serving institution (MSI) is also indicated.

Campus Champion Institutions
Total Academic Institutions: 310
    Academic institutions in EPSCoR jurisdictions: 82
    Minority Serving Institutions: 58
    Minority Serving Institutions in EPSCoR jurisdictions: 18
Non-academic, not-for-profit organizations: 36
Total Campus Champion Institutions: 346
Total Number of Champions: 747

LAST UPDATED: October 12, 2021


See also the lists of Leadership Team and Regional Leaders, Domain Champions, and Student Champions.

Institution Campus Champions EPSCoR MSI
AIHEC (American Indian Higher Education Consortium) Russell Hofmann    
Alabama A & M University Damian Clarke, Raziq Yaqub, Georgiana Wright (student)
Albany State University Olabisi Ojo  
Arizona State University Michael Simeone (domain) , Sean Dudley, Johnathan Lee, Lee Reynolds, William Dizon, Ian Shaeffer, Dalena Hardy, Gil Speyer, Richard Gould, Chris Kurtz, Jason Yalim, Philip Tarrant, Douglas Jennewein, Marisa Brazil, Rebecca Belshe, Eric Tannehill, Zachary Jetson, Natalie Mason (student)    
Arkansas State University Hai Jiang  
Austin Peay State University Justin Oelgoetz    
Bates College Kai Evenson  
Baylor College of Medicine Pavel Sumazin , Hua-Sheng Chiu, Hyunjae Ryan Kim    
Baylor University Mike Hutcheson, Carl Bell, Brian Sitton    
Bentley University Jason Wells    
Bethune-Cookman University Ahmed Badi  
Boise State University Kyle Shannon, Jason Watt, Kelly Byrne, Mendi Edgar, Mike Ramshaw  
Boston Children's Hospital Arash Nemati Hayati    
Boston College Simo Goshev    
Boston University Wayne Gilmore, Charlie Jahnke, Augustine Abaris, Brian Gregor, Katia Bulekova, Josh Bevan    
Bowdoin College Dj Merrill , Stephen Houser  
Bowie State University Konda Karnati  
Brandeis University John Edison    
Brown University Maximilian King, Paul Hall, Khemraj Shukla, Mete Tunca, Paul Stey, Rohit Kakodkar  
Cabrini University Alexander Davis    
California Baptist University Linn Carothers  
California Institute of Technology Tom Morrell    
California State Polytechnic University-Pomona Chantal Stieber    
California State University - Fullerton Justin Tran    
California State University-Sacramento Anna Klimaszewski-Patterson  
California State University-San Bernardino Dung Vu, James MacDonell  
Carnegie Institution for Science Floyd A. Fayton, Jr.    
Carnegie Mellon University Bryan Webb, Franz Franchetti, Carl Skipper    
Case Western Reserve University Roger Bielefeld, Hadrian Djohari, Emily Dragowsky, James Michael Warfe, Sanjaya Gajurel    
Central State University Mohammadreza Hadizadeh  
Centre College David Toth  
Chapman University James Kelly    
Children's Mercy Kansas City Shane Corder    
Claremont Graduate University Michael Espero (student), Cindy Cheng (student)    
Claremont McKenna College Jeho Park, Zeyad Elkelani (student)    
Clark Atlanta University Dina Tandabany  
Clarkson University Jeeves Green, Joshua A. Fiske    
Clemson University Xizhou Feng, Corey Ferrier, Tue Vu, Asher Antao, Grigorio Yourganov  
Cleveland Clinic, The Iris Nira Smith, Daniel Blankenberg    
Clinton College Terris S. Riley
Coastal Carolina University Will Jones, Thomas Hoffman  
Colby College Randall Downer  
Colgate University Dan Wheeler    
College of Charleston Ray Creede  
College of Staten Island CUNY Sharon Loverde  
College of William and Mary Eric Walter    
Colorado School of Mines Torey Battelle, Nicholas Danes    
Colorado State University Stephen Oglesby    
Columbia University Rob Lane, George Garrett, Cesar Arias, Axinia Radeva    
Columbia University Irving Medical Center Vinod Gupta    
Complex Biological Systems Alliance Kris Holton    
Cornell University Susan Mehringer    
Dakota State University David Zeng  
Dartmouth College Arnold Song  
Davidson College Neil Reda (student), Michael Blackmon (student)    
Dillard University Tomekia Simeon, Brian Desil (student), Priscilla Saarah (student)
Doane University-Arts & Sciences Mark Meysenburg, AJ Friesen  
Dominican University of California Randall Hall    
Drexel University David Chin, Cameron Fritz (student), Hoang Oanh Pham (student)    
Duke University Tom Milledge    
Earlham College Charlie Peck    
East Carolina University Nic Herndon    
East Tennessee State University David Currie, Janet Keener, Vincent Thompson    
Edge, Inc. Forough Ghahramani    
Emory University Jingchao Zhang    
Federal Reserve Bank Of Kansas City (CADRE) BJ Lougee, Chris Stackpole, Michael Robinson    
Federal Reserve Bank Of Kansas City (CADRE) - OKC Branch Greg Woodward  
Federal Reserve Bank Of New York Ernest Miller, Kevin Kelliher    
Felidae Conservation Fund Kevin Clark    
Ferris State University Luis Rivera, David Petillo    
Florida A and M University Hongmei Chi, Jesse Edwards, Yohn Jairo Parra Bautista, Rodolfo Tsuyoshi F. Kamikabeya (student), Emon Nelson (student)  
Florida Atlantic University Rhian Resnick    
Florida International University David Driesbach, Cassian D'Cunha  
Florida Southern College Christian Roberson    
Florida State University Paul van der Mark    
Francis Marion University K. Daniel Brauss, Jordan D. McDonnell
Franklin and Marshall College Jason Brooks    
GPN (Great Plains Network) Kate Adams, James Deaton    
George Mason University Jayshree Sarma, Alastair Neil, Berhane Temelso, Swabir Silayi    
George Washington University Hanning Chen, Adam Wong, Glen Maclachlan, William Burke    
Georgetown University Alisa Kang    
Georgia Institute of Technology Mehmet Belgin, Semir Sarajlic, Nuyun (Nellie) Zhang, Sebastian Kayhan Hollister (student), Kevin Manalo, Siddhartha Vemuri (student), Fang Liu, Michael Weiner, Aaron Jezghani, Ruben Lara    
Georgia Southern University Brandon Kimmons, Dain Overstreet    
Georgia State University Neranjan "Suranga" Edirisinghe Pathiran, Ken Huang, Melchizedek Mashiku (student), Christopher Childress, Sanju Timsina  
Grinnell College Michael Conner    
Harrisburg University of Science and Technology Daqing Yun    
Harvard Business School Bob Freeman    
Harvard Medical School Jason Key    
Harvard University Scott Yockel, Plamen Krastev, Francesco Pontiggia    
Harvey Mudd College Aashita Kesarwani    
Hood College Xinlian Liu    
Howard University Marcus Alfred, Christina McBean (student), Tamanna Joshi (student)  
I-Light Network & Indiana Gigapop Caroline Weilhamer, Marianne Chitwood    
Idaho National Laboratory Ben Nickell, Eric Whiting, Kit Menlove  
Idaho State University Keith Weber, Dong Xu, Kindra Blair, Jack Bradley  
Illinois Institute of Technology Jeff Wereszczynski    
Indiana University Abhinav Thota, Sudahakar Pamidighantam (domain) , Junjie Li, Thomas Doak (domain) , Sheri Sanders (domain) , Le Mai Weakley, Ashley Brooks (student)    
Indiana University of Pennsylvania John Chrispell    
Internet2 Dana Brunson, Cathy Chaplin, John Hicks, Tim Middelkoop, Ananya Ravipati    
Iowa State University Andrew Severin, James Coyle, Levi Baber    
Jackson State University Carmen Wright, Duber Gomez-Fonseca (student)
James Madison University Isaiah Sumner    
Jarvis Christian College Widodo Samyono  
John Brown University Jill Ellenbarger  
Johns Hopkins University Anthony Kolasny, Jaime Combariza, Jodie Hoh (student)    
Juniata College Burak Cem Konduk    
KanREN (Kansas Research and Education Network) Casey Russell  
Kansas State University Dan Andresen, Mohammed Tanash (student), Kyle Hutson  
Kennesaw State University Ramazan Aygun    
Kentucky State University Chi Shen
Lafayette College Bill Thompson, Jason Simms, Peter Goode    
Lamar University Larry Osborne    
Lane College Elijah MacCarthy  
Langston University Franklin Fondjo, Abebaw Tadesse, Joel Snow
Lawrence Berkeley National Laboratory Andrew Wiedlea    
Lawrence Livermore National Laboratory Todd Gamblin    
Lehigh University Alexander Pacheco    
Lipscomb University Michael Watson    
Lock Haven University Kevin Range    
Louisiana State University Feng Chen, Ric Simmons  
Louisiana State University - Alexandria Gerard Dumancas  
Louisiana State University Health Sciences Center-New Orleans Mohamad Qayoom  
Louisiana Tech University Don Liu  
Marquette University Craig Struble, Lars Olson, Xizhou Feng    
Marshall University Jack Smith  
Massachusetts Green High Performance Computing Center Julie Ma, Abigail Waters (student)    
Massachusetts Institute of Technology Christopher Hill, Lauren Milechin    
Miami University - Oxford Jens Mueller    
Michigan State University Andrew Keen, Yongjun Choi, Dirk Colbry, Justin Booth, Dave Dai, Arthur "Chip" Shank II, Brad Fears    
Michigan Technological University Gowtham    
Middle Tennessee State University Dwayne John    
Midwestern State University Eduardo Colmenares-Diaz    
Minnesota State University - Mankato Maria Kalyvaki    
Mississippi State University Trey Breckenridge  
Missouri State University Matt Siebert    
Missouri University of Science and Technology Buddy Scharfenberg, Don Howdeshell    
Monmouth College Christopher Fasano    
Montana State University Coltran Hophan-Nichols  
Montana Tech Bowen Deng  
Morgan State University James Wachira  
Murray State University Jonathan Lyon  
NCAR/UCAR Davide Del Vento    
National University Ali Farahani    
Navajo Technical University Jason Arviso
New Jersey Institute of Technology Glenn "Gedaliah" Wolosh, Roman Voronov    
New Mexico State University Alla Kammerdiner, Diana Dugas, Strahinja Trecakov
New York University Shenglong Wang    
Noble Research Institute, LLC Nick Krom, Perdeep Mehta  
North Carolina A & T State University Ling Zu  
North Carolina Central University Caesar Jackson, Alade Tokuta  
North Carolina State University at Raleigh Lisa Lowe, Bailey Pollard (student), Christopher Blanton    
North Dakota State University Dane Skow, Nick Dusek, Oluwasijibomi "Siji" Saula, Khang Hoang  
Northeastern University Scott Valcourt    
Northern Arizona University Christopher Coffey, Jason Buechler, William Wilson    
Northern Illinois University Jifu Tan    
Northwest Missouri State University Jim Campbell    
Northwestern State University (Louisiana Scholars' College) Brad Burkman  
Northwestern University Alper Kinaci, Sajid Ali (student)    
OWASP Foundation Learning Gateway Project Bev Corwin, Laureano Batista, Zoe Braiterman, Noreen Whysel    
Ohio Supercomputer Center Karen Tomko, Keith Stewart, Sandy Shew    
Oklahoma Baptist University Yuan-Liang Albert Chen  
Oklahoma State University Brian Couger (domain) , Jesse Schafer, Christopher J. Fennell (domain) , Phillip Doehle, Evan Linde, Venkat Padmanapan Rao (student), Bethelehem Ali Beker (student)  
Old Dominion University Wirawan Purwanto    
Oral Roberts University Stephen R. Wheat  
Oregon State University David Barber, CJ Keist, Mark Keever, Dylan Keon, Mckenzie Hughes (student)    
Penn State University Chuck Pavloski, Wayne Figurelle, Guido Cervone, Diego Menendez, Jeff Nucciarone    
Pittsburgh Supercomputing Center Stephen Deems, John Urbanic    
Pomona College Andrew Crawford, Omar Zintan Mwinila-Yuori (student), Samuel Millette (student), Sanghyun Jeon, Nathaniel Getachew (student)    
Portland State University William Garrick    
Prairie View A&M University Suxia Cui, Racine McLean (student), Kobi Tioro (student), Chara Tatum (student), Virgie Leyva (student)  
Princeton University Ian Cosden    
Purdue University Xiao Zhu, Tsai-wei Wu, Matthew Route (domain) , Eric Adams    
RAND Corporation Justin Chapman    
RENCI Laura Christopherson    
Reed College Trina Marmarelli, Johnny Powell , Ben Poliakoff    
Rensselaer Polytechnic Institute Joel Giedt, James Flamino (student)    
Rhodes College Brian Larkins    
Rice University Qiyou Jiang, Erik Engquist, Xiaoqin Huang, Clinton Heider, John Mulligan    
Rochester Institute of Technology Andrew W. Elble , Emilio Del Plato, Charles Gruener, Paul Mezzanini, Sidney Pendelberry    
Roswell Park Comprehensive Cancer Center Shawn Matott    
Rowan University Ghulam Rasool    
Rutgers University Shantenu Jha, Bill Abbott, Paul Framhein, Galen Collier, Eric Marshall, Vlad Kholodovych, Bala Desinghu, Sue Oldenburg, Branimir Ljubic, Girish Ganesan (student), Ehud Zelzion    
SBGrid Consortium      
SUNY Downstate Health Sciences University Zaid McKie-Krisberg    
SUNY at Albany Kevin Tyle, Nicholas Schiraldi    
Saint Louis University Eric Kaufmann, Frank Gerhard Schroer IV (student)    
Saint Martin University Shawn Duan    
San Diego State University Mary Thomas  
San Jose State University Sen Chiao, Werner Goveya    
Slippery Rock University of Pennsylvania Nitin Sukhija    
Sonoma State University Mark Perri  
South Carolina State University Biswajit Biswal, Jagruti Sahoo
South Dakota State University Kevin Brandt, Roberto Villegas-Diaz (student), Rachael Auch, Chad Julius  
Southeast Missouri State University Marcus Bond    
Southern Connecticut State University Yigui Wang    
Southern Illinois University Shaikh Ahmed, Majid Memari (student), Manvith Mali (student)    
Southern Illinois University-Edwardsville Kade Cole, Andrew Speer    
Southern Methodist University Amit Kumar, Merlin Wilkerson, Robert Kalescky    
Southern University and A & M College Shizhong Yang, Rachel Vincent-Finley
Southwestern Oklahoma State University Jeremy Evert, Kurtis D. Clark (student), Hamza Jamil (student), Arianna Martin (student)  
Spelman College Yonas Tekle  
Stanford University Ruth Marinshaw, Zhiyong Zhang    
Swarthmore College Andrew Ruether    
Temple University Richard Berger, Edwin Posada    
Tennessee Technological University Mike Renfro    
Texas A & M University-College Station Rick McMullen, Dhruva Chakravorty, Jian Tao, Brad Thornton    
Texas A & M University-Corpus Christi Ed Evans, Joshua Gonzalez  
Texas A&M University-San Antonio Izzat Alsmadi  
Texas Southern University Farrukh Khan  
Texas State University Shane Flaherty  
Texas Tech University Tom Brown, Misha Ahmadian (student)    
Texas Wesleyan University Terrence Neumann    
The College of New Jersey Shawn Sivy    
The Jackson Laboratory Shane Sanders, Bill Flynn, Kurt Showmaker  
The University of Tennessee - Health Science Center Billy Barnett    
The University of Tennessee-Chattanooga Carson Woods (student), Tony Skjellum    
The University of Texas at Austin Kevin Chen    
The University of Texas at Dallas Frank Feagans, Gi Vania, Jaynal Pervez, Christopher Simmons, Namira Pervez (student)    
The University of Texas at El Paso Rodrigo Romero, Vinod Kumar  
The University of Texas at San Antonio Brent League, Zhiwei Wang, Armando Rodriguez, Thomas Freeman, Ritu Arora, Richard Zanni  
Tinker Air Force Base Zachary Fuchs, David Monismith  
Trinity College Peter Yoon    
Tufts University Shawn Doughty    
Tulane University Hideki Fujioka, Hoang Tran, Carl Baribault  
United States Department of Agriculture - Agriculture Research Service Nathan Weeks    
United States Geological Survey Janice Gordon, Jeff Falgout, Natalya Rapstine    
University at Buffalo Dori Sajdak, Andrew Bruno    
University of Akron Main Campus Sean Mitchuson    
University of Alabama at Birmingham John-Paul Robinson, Shahram Talei (student), Blake Joyce  
University of Alaska Liam Forbes, Kevin Galloway
University of Arizona Jimmy Ferng, Mark Borgstrom, Moe Torabi, Adam Michel, Chris Reidy, Chris Deer, Ric Anderson, Todd Merritt, Alexander Prescott (student), Devin Bayly, Sara Willis    
University of Arkansas David Chaffin, Jeff Pummill, Pawel Wolinski, Timothy "Ryan" Rogers (student)  
University of Arkansas at Little Rock Albert Everett  
University of California Merced Sarvani Chadalapaka, Robert Romero    
University of California-Berkeley Aaron Culich, Chris Paciorek    
University of California-Davis Bill Broadley, Timothy Thatcher    
University of California-Irvine Harry Mangalam  
University of California-Los Angeles TV Singh    
University of California-Riverside Charles Forsyth, Jordan Hayes, Jacobus Kats  
University of California-San Diego Cyd Burrows-Schilling, Claire Mizumoto    
University of California-San Francisco Jason Crane    
University of California-Santa Barbara Sharon Solis, Sharon Tettegah  
University of California-Santa Cruz Jeffrey D. Weekley  
University of Central Florida Glenn Martin, Jamie Schnaitter, Fahad Khan, Shafaq Chaudhry    
University of Central Oklahoma Evan Lemley, Samuel Kelting (student), Thomas Dunn (student)  
University of Chicago Igor Yakushin, Ryan Harden    
University of Cincinnati Kurt Roberts, Larry Schartman, Jane E Combs    
University of Colorado Shelley Knuth, Andy Monaghan, Daniel Trahan    
University of Colorado, Denver/Anschutz Medical Campus Amy Roberts, Farnoush Banaei-Kashani    
University of Delaware Anita Schwartz, Parinaz Barakhshan (student), Michael Kyle  
University of Florida Alex Moskalenko, David Ojika    
University of Georgia Guy Cormier    
University of Guam Eugene Adanzo, Randy Dahilig, Jose Santiago, Steven Mamaril
University of Hawaii Gwen Jacobs, Sean Cleveland
University of Houston Jerry Ebalunode  
University of Houston-Clear Lake David Garrison, Liwen Shih    
University of Houston-Downtown Hong Lin, Dexter Cahoy  
University of Idaho Lucas Sheneman  
University of Illinois Mao Ye (domain) , Rob Kooper (domain) , Dean Karres, Tracy Smith    
University of Illinois at Chicago Himanshu Sharma, Jon Komperda, Leonard Apanasevich  
University of Indianapolis Steve Spicklemire    
University of Iowa Ben Rogers, Sai Ramadugu, Adam Harding, Joe Hetrick, Cody Johnson, Genevieve Johnson, Glenn Johnson, Brendel Krueger, Kang Lee, Gabby Perez, Brian Ring, John Saxton, Elizabeth Leake, Giang Rudderham    
University of Kansas Riley Epperson  
University of Kentucky Vikram Gazula, James Griffioen  
University of Louisiana at Lafayette Raju Gottumukkala  
University of Louisville Harrison Simrall  
University of Maine System Bruce Segee, Steve Cousins, Michael Brady Butler (student)  
University of Maryland Eastern Shore Urban Wiggins  
University of Maryland-Baltimore County Roy Prouty, Randy Philipp  
University of Maryland-College Park Kevin M. Hildebrand  
University of Massachusetts Amherst Johnathan Griffin    
University of Massachusetts-Boston Jeff Dusenberry, Runcong Chen  
University of Massachusetts-Dartmouth Scott Field    
University of Memphis Qianyi Cheng    
University of Miami Dan Voss, Warner Baringer    
University of Michigan Gregory Teichert , Shelly Johnson, Todd Raeker, Daniel Kessler (student)    
University of Minnesota Eric Shook (domain) , Ben Lynch, Joel Turbes, Doug Finley, Aneesh Venugopal (student), Charles Nguyen    
University of Missouri-Columbia Asif Ahamed Magdoom Ali, Brian Marxkors, Ashkan Mirzaee (student), Christina Roberts, Predrag Lazic, Phil Redmon    
University of Missouri-Kansas City Paul Rulis    
University of Montana Tiago Antao  
University of Nebraska Adam Caprez, Natasha Pavlovikj (student), Tom Harvill  
University of Nebraska Medical Center Ashok Mudgapalli  
University of Nevada-Reno Fred Harris, Scotty Strachan, Engin Arslan  
University of New Hampshire Anthony Westbrook  
University of New Mexico Hussein Al-Azzawi, Matthew Fricke
University of North Carolina Mark Reed, Mike Barker    
University of North Carolina - Greensboro Jacob Fosso Tande    
University of North Carolina - Wilmington Eddie Dunn, Ellen Gurganious, Cory Nichols Shrum (student)    
University of North Carolina at Charlotte Christopher Maher    
University of North Dakota Aaron Bergstrom, David Apostal  
University of North Georgia Luis A. Cueva Parra , Yong Wei    
University of Notre Dame Dodi Heryadi, Scott Hampton    
University of Oklahoma Henry Neeman, Kali McLennan, Horst Severini, James Ferguson, David Akin, S. Patrick Calhoun, Jason Speckman  
University of Oregon Nick Maggio, Robert Yelle, Michael Coleman, Jake Searcy, Mark Allen    
University of Pennsylvania Gavin Burris    
University of Pittsburgh Kim Wong, Matt Burton, Fangping Mu, Shervin Sammak, Donya Ramezanian    
University of Puerto Rico Mayaguez Ana Gonzalez
University of Rhode Island Kevin Bryan, Gaurav Khanna  
University of Richmond Fred Hagemeister    
University of Rochester Baowei Liu    
University of South Carolina Paul Sagona, Ben Torkian, Nathan Elger  
University of South Dakota Ryan Johnson, Bill Conn  
University of South Florida-St Petersburg Tylar Murray    
University of Southern California Cesar Sul, Derek Strong, Andrea Renney, Tomasz Osinski, Marco Olguin, Asya Shklyar, Ryan Sim (student)    
University of Southern Mississippi Brian Olson , Gopinath Subramanian  
University of St Thomas William Bear, Keith Ketchmark, Eric Tornoe    
University of Tennessee - Knoxville Deborah Penchoff    
University of Tulsa Peter Hawrylak  
University of Utah Anita Orendt, Tom Cheatham (domain) , Brian Haymore    
University of Vermont Andi Elledge, Yves Dubief, Keri Toksu  
University of Virginia Ed Hall, Katherine Holcomb    
University of Washington Nam Pho    
University of Wisconsin-La Crosse David Mathias, Samantha Foley    
University of Wisconsin-Madison Todd Shechter    
University of Wisconsin-Milwaukee Dan Siercks, Darin Peetz    
University of Wisconsin-River Falls Anthony Varghese    
University of Wyoming Bryan Shader, Dylan Perkins  
University of the Virgin Islands Marc Boumedine
Utah Valley University George Rudolph    
Valparaiso University Paul Lapsansky, Paul M. Nord, Nicholas S. Rosasco    
Vanderbilt University Haoxiang Luo    
Vassar College Christopher Gahn    
Virginia Commonwealth University Huan Wang    
Virginia Tech University James McClure, Alana Romanella, Srijith Rajamohan    
Washburn University Karen Camarda, Steve Black  
Washington State University Rohit Dhariwal, Peter Mills    
Washington University in St Louis Xing Huang, Matt Weil, Matt Callaway    
Washington and Lee University Tom Marcais    
Wayne State University Patrick Gossman, Michael Thompson, Aragorn Steiger, Sara Abdallah (student)    
Weill Cornell Medicine Joseph Hargitai    
Wesleyan University Henk Meij    
West Chester University of Pennsylvania Linh Ngo    
West Texas A & M University Anirban Pal    
West Virginia Higher Education Policy Commission Jack Smith  
West Virginia State University Sridhar Malkaram
West Virginia University Guillermo Avendano-Franco , Blake Mertz, Nathaniel Garver-Daniels  
West Virginia University Institute of Technology Sanish Rai  
Wichita State University Terrance Figy  
Williams College Adam Wang    
Winston-Salem State University Xiuping Tao  
Winthrop University Paul Wiegand  
Wofford College Beau Christ  
Woods Hole Oceanographic Institution Roberta Mazzoli, Richard Brey, Gretchen Zwart    
Worcester Polytechnic Institute Shubbhi Taneja    
Yale University Andrew Sherman, Kaylea Nelson, Benjamin Evans, Sinclair Im (student), Robert Bjornson, Eric Peskin, Paul Gluhosky, Ping Luo, Michael Strickler, Thomas Langford, Tyler Trafford, David Backeberg, Jay Kubeck, Adam Munro    
Youngstown State University Feng George Yu    


 


Service Provider Forum

The national cyberinfrastructure ecosystem is powered by a broad set of Service Providers (SP). The XSEDE Federation primarily consists of SPs that are autonomous entities that agree to coordinate with XSEDE and each other to varying degrees. The XSEDE Federation may also include other non-service provider organizations.

Service Providers are projects or organizations that provide cyberinfrastructure (CI) services to the science and engineering community. In the US academic community, there is a rich diversity of SPs, ranging from centers funded by NSF to operate large-scale resources for the national research community to universities that provide resources and services to their local researchers. The Service Provider Forum is intended to facilitate this ecosystem of SPs, thereby advancing the work of the science and engineering researchers who rely on these cyberinfrastructure services. The SP Forum provides:

  • An open forum for discussion of topics of interest to the SP community.
  • A formal communication channel between the SP Forum members and the XSEDE project.

SPs are classified as being at a specific level by meeting a minimum set of conditions. They may meet additional conditions at their option, but classification at a specific level is based on the stated required minimum conditions.

Briefly, Level 1 SPs meet all the XSEDE integration requirements and will explicitly share digital services with the broader community. Level 2 SPs make some digital services accessible via XSEDE and Level 3 SPs are the most loosely coupled; they will share the characteristics of their services via XSEDE, but need not make those services available beyond their local community. For more detailed descriptions, see the documents linked below.

Leadership

This year's SP Forum Officers elected January 23, 2021:

  • Chair: Ruth Marinshaw, Stanford University
  • Vice Chair: Mary Thomas, San Diego Supercomputer Center
  • L2 Chair: Jaime Combariza, Johns Hopkins University
  • L3 Chair: Andy Keen, Michigan State University
  • XAB Representative: David Hancock, Indiana University
  • XAB Representative: Dan Stanzione, Texas Advanced Computing Center

Charter

SPF Charter

Membership Application

SPF Membership Application

Current XSEDE Federation Service Providers

 

Service Provider SP Level Representative Institution
Expanse Level 1 Mike Norman, Bob Sinkovits & Shawn Strande San Diego Supercomputer Center
Jetstream2 Level 1 David Hancock, Jeremy Fischer Indiana University
Bridges-2 Level 1 Shawn Brown Pittsburgh Supercomputing Center (PSC)
Anvil Level 1 Carol Song Purdue University
Delta Level 1 Bill Gropp, Tim Boerner NCSA
NCAR Level 2 Davide Del Vento & Eric Nienhouse NCAR
OSG Level 2 Miron Livny Univ of Wisconsin
SuperMIC Level 2 Seung-Jong (Jay) Park and Steve Brandt Louisiana State University
Rosen Center Level 2 Carol Song Purdue University

Stanford Research Computing Center (XStream) (application incoming for change from L2 to L3) Level 3 Ruth Marinshaw Stanford University
Beacon Level 2 Gregory D. Peterson UTK/NICS
Science Gateways Community Institute (SGCI) Level 2 Michael Zentner  Science Gateways Community Institute
Open Storage Network (OSN) Level 2 Kenton McHenry Open Storage Network
FASTER Level 2 Honggao Liu Texas A&M University
KyRIC Level 2 Jim Griffioen, Vikram Gazula University of Kentucky
MARCC's DISCO system Level 2 Jaime Combariza Johns Hopkins University
Ookami Level 2 Robert Harrison Stony Brook University

Rutgers Discovery Informatics Institute (RDI2) Level 3 Barr von Oehsen Rutgers University
Minnesota Supercomputing Institute (MSI) Level 3 Jim Wilgenbusch Minnesota Supercomputing Institute
Oklahoma State University High Performance Computing Center (OSUHPCC) Level 3 Pratul Agarwal Oklahoma State University
Institute for Cyber-Enabled Research (iCER) Level 3 Andy Keen Michigan State University
Oklahoma University Supercomputing Center for Education & Research (OSCER) Level 3 Henry Neeman The University of Oklahoma
USDA-PBARC (Moana) Level 3 Brian Hall, Scott Geib University of Hawaii
Arkansas High Performance Computing Center (AHPCC) Level 3 Jeff Pummill University of Arkansas
DataONE Level 3 Bruce Wilson/Rebecca Koskela University of New Mexico
Institute for Computational Research in Engineering and Science (ICRES) Level 3 Daniel Andresen Kansas State University
Tufts University Research Technology (RT) Level 3 Shawn Doughty Tufts University
VELA computational resources Level 3 Suranga Edirisinghe Georgia State University (GSU)
Advanced Research Computing – Technology Services (ARC-TS) Level 3 Brock Palen University of Michigan
Palmetto Level 3 Linh Ngo, Grigori Yourganov Clemson University
Langston University Computing Center for Research and Education (LUCCRE) Level 3 Franklin Fondjo-Fotou Langston University
Holland Computing Center (HCC) Level 3 Hongfeng Yu University of Nebraska
Advanced Research Computing Center (ARCC) / Rocky Mountain Advanced Computing Consortium (RMACC) Level 3 Tim Brewer University of Wyoming
West Virginia University Research Computing Group Level 3 Nathan Gregg West Virginia University
NCGAS Level 3 Thomas Doak and Robert Henschel Indiana University

Research Computing Group at USD Level 3 Ryan Johnson University of South Dakota
University of Colorado-Boulder's Research Computing Group Level 3 Shelley Knuth University of Colorado-Boulder
Center for Computationally Assisted Science & Technology Level 3 Dane Skow North Dakota State University
BigDawg Level 3 Jerry Richards Southern Illinois University
Research Computing Support Services (Lewis & Clark clusters) Level 3 Timothy Middelkoop University of Missouri
Information Technologies Level 3 John Huffman, Anita Schwartz University of Delaware
Hewlett Packard Enterprise (HPE) Data Science Institute (DSI) Level 3 Keith Crabb University of Houston
Fluid Numerics Level 3 Joseph Schoonover Fluid Numerics LLC

Former XSEDE Federation Service Providers

 
Service Provider SP Level Representative Institution Acceptance Date Exit Date
Wrangler Level 1 Dan Stanzione Texas Advanced Computing Center M 2019
Stampede Level 1 Dan Stanzione Univ of Texas at Austin M 2016
Gordon Level 1 Mike Norman UCSD/SDSC 17 September 2012 Spring 2017
FutureGrid Level 1 Geoffrey Fox Indiana University 17 September 2012 Fall 2014
Longhorn Level 1 Kelly Gaither UT-Austin/TACC 11 October 2012 February 2014 (decommissioned)
Steele/Wispy Level 1 Carol Song Purdue 17 September 2012 July 2013 (decommissioned)
Ranger Level 1 Dan Stanzione UT-Austin/TACC 11 October 2012 February 2013 (decommissioned)
MSS Level 2 John Towns NCSA/Univ of Illinois 17 September 2012 30 September 2013 (decommissioned)
Kraken Level 1 Mark Fahey UTK/NICS 17 May 2012 30 April 2014 (decommissioned)
Lonestar Level 1 Dan Stanzione UT-Austin/TACC 11 October 2012 31 December 2014 (decommissioned)
Keeneland Level 1 Jeffery Vetter GaTech 27 March 2013 31 December 2014 (decommissioned)
Trestles Level 1 Richard Moore UCSD/SDSC 17 May 2012 May 2015 (decommissioned)
Darter/Nautilus Level 1 Sean Ahern UTK/NICS 17 September 2012
 
 
Maverick Level 1 Dan Stanzione UT-Austin/TACC Unknown 1 April 2018
Blue Waters Level 2  Bill Kramer NCSA/Univ of Illinois   December 2019
ROGER Level 3  Shaowen Wang and Anand Padmanabhan NCSA   December 2017

XSEDE's Peer Institutions

XSEDE also needs to interact with other XSEDE-like organizations as peers. There are already such examples both within the United States and internationally.

Current international interactions include:

  • Partnership for Advanced Computing in Europe (PRACE) – Memorandum of Understanding (MOU)
  • European Grid Infrastructure (EGI) - Call for Collaborative Use Examples (CUEs)
  • Research Organization for Information Science and Technology (RIST) – Memorandum of Understanding (MOU)


 
October 2021 | Science Highlights, Announcements & Upcoming Events
 
XSEDE helps the nation's most creative minds discover breakthroughs and solutions for some of the world's greatest scientific challenges. Through free, customized access to the National Science Foundation's advanced digital resources, consulting, training, and mentorship opportunities, XSEDE enables you to Discover More. Get started here.
 
Science Highlights
 
Collaboration Develops AI Tool for "Long Tail" Stamp Recognition in Japanese Historic Documents
 
XSEDE systems and expertise power a study of business documents that promises better automated analysis of datasets with numerous rare items, a key limitation in artificial intelligence
 
 
Despite its promise, artificial intelligence (AI) can fail – infamously – when its training dataset isn't fully representative of the real world. The MiikeMineStamps dataset offers an historical and social science goldmine in studying business practices in Japan and elsewhere, but its "long tail" of rare stamps poses a big training-dataset challenge. Using XSEDE's Extended Collaborative Support Services and XSEDE-allocated computation, scientists created a new active-learning-based AI that leverages a training dataset with good representation of the MiikeMineStamps data. The method holds promise for addressing training-dataset-driven failures of AI.
 
 
Examples of collected images of stamps in the MiikeMineStamps dataset.
 
Cell's Energy Secrets Revealed with Supercomputers
 
XSEDE allocations help develop most detailed model yet of mitochondrial protein-protein complex
 
 
It takes two to tango, as the saying goes.
This is especially true for scientists studying the details of how cells work. Protein molecules inside a cell interact with other proteins, and in a sense the proteins dance with a partner to respond to signals and regulate each other's activities.
Crucial to giving cells energy for life is the migration of a compound called adenosine triphosphate (ATP) out of the cell's powerhouse, the mitochondria. And critical for this flow out to the power-hungry parts of the cell is the interaction between a protein enzyme called hexokinase-II (HKII) and proteins in the voltage-dependent anion channel (VDAC) found on the outer membrane of the mitochondria. Thanks to XSEDE allocations, supercomputer simulations have revealed for the first time how VDAC binds to HKII.
 
 
Movie representative of the Brownian dynamics simulation trajectory capturing the formation of the Hexokinase (teal)/VDAC (yellow) complex. Credit: Haloi, N., Wen, PC., Cheng, Q. et al.
 
Improving Geomagnetic Storm Research
 
XSEDE allocations used for solar wind study
 
 
Both satellite and ground-based telecommunications systems can be impacted by geomagnetic storms, such as the one triggered by a powerful explosion on the surface of the sun that took down a Canadian power grid back in 1989. Scientists are working to better understand these strong, mysterious storms by validating data from NASA's Parker Solar Probe with help from an XSEDE allocation. These supercomputer-generated simulations are helping scientists better understand how solar wind is heated to such a high temperature (sometimes above one-million degrees) as well as how solar wind is accelerated to very high speeds (often faster than one-thousand miles per second). In turn, this information can help with solar wind forecasting.
 
 
Artist's concept of the Parker Solar Probe spacecraft approaching the sun. Launched in 2018, the Parker Solar Probe provides data on solar activity and makes critical contributions to the ability to forecast major space-weather events that impact life on Earth. Credit: NASA
 
Program Announcements
 
XSEDE Cyberinfrastructure Integration (XCI) Updates
 
 
Globus has released a 3.0 version of its command-line interface, the Globus CLI, which now provides support for Globus Connect Server version 5, profile switching, listing groups you belong to, a simpler interface for specifying file lists, and improved token storage. 
 
 
Help Engage Undergrads in the Work of XSEDE!
 
 
An XSEDE-wide effort is underway to expand the community by recruiting and enabling a diverse group of students who have the skills — or are interested in acquiring the skills — to participate in the work of XSEDE. The name of this effort is XSEDE EMPOWER (Expert Mentoring Producing Opportunities for Work, Education, and Research). We invite the whole community to recruit and mentor undergraduate students to engage in a variety of activities, such as computational and/or data analytics research and education in all fields of study, networking, system maintenance and support, visualization, and more. The program provides a stipend for students who work on projects for one semester or one quarter.
 
To participate, undergraduate students from any U.S. degree-granting institution are matched with mentors who have projects that contribute to the work of XSEDE. Participation is strongly encouraged for mentors and students belonging to groups historically underrepresented in HPC.
 
Application deadline for spring 2022 participation: October 29, 2021

 
Community Announcements
 
Registration Open for Gateways 2021
 
 
Gateways 2021, Oct. 19-21, will offer a variety of presentations, diverse options for interaction, and opportunities to influence the future of the Science Gateways Community Institute (SGCI), the gateways community, and the Gateways Conference series. 
 
Registration for Gateways 2021 is FREE but required. Register by October 11 to receive a surprise in the mail!
 
 
Globus Announcements
 
 
Globus Premium Storage Connector for Microsoft Azure Blob Storage is now generally available
 
Researchers can now use the Globus platform to easily share data in Microsoft Azure Blob storage with their collaborators at different institutions. This new connector also includes support for Azure Data Lake Storage Gen2 and  can be used with high assurance data. Get the details.
 
Version 3 of the SDK and CLI is now available
 
The CLI has gained some powerful new features, including profile switching and GCS version 5 support. The SDK also includes new features such as automatic request retries. Learn more.
 
 
Upcoming Dates and Deadlines
 

 


Supercomputer simulations illustrate how hydrogen peroxide is synthesized in a new way 

Kimberly Mann Bruch, SDSC Communications 

This image showcases a critical intermediate during oxygen reduction to hydrogen peroxide. Credit: Xunhua Zhao and Yuanyue Liu, UT Austin

Hydrogen peroxide, often used as a disinfectant, serves as a precursor for many organic compounds. Recently, computational materials scientists at The University of Texas at Austin (UT Austin) investigated a novel synthetic approach in which oxygen molecules react with water and electrons, with the help of a catalyst such as a cobalt atom bound to four nitrogen atoms and embedded in a thin layer of carbon (Co-N4-C), to form hydrogen peroxide.

However, the scientists were puzzled about how and why the reaction produced hydrogen peroxide (H2O2) rather than hydroxide (HO-), which was expected due to its lower energy. To answer this question, they used XSEDE-allocated supercomputing resources to simulate the reaction at an atomic scale.

Yuanyue Liu, an assistant professor of materials science and engineering, and Xunhua Zhao, a postdoctoral researcher, used Comet at the San Diego Supercomputer Center (SDSC) at the University of California at San Diego (UC San Diego) and Stampede2 at the Texas Advanced Computing Center (TACC) at UT Austin, to detail how oxygen molecules react with water and electrons to form hydrogen peroxide on the catalyst.

Liu and Zhao recently published their study results in the Journal of the American Chemical Society.

Prior to these simulations, there was limited understanding about why some catalysts yield more hydrogen peroxide than hydroxide, due to the lack of an effective tool to simulate the kinetics.
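The puzzle is a classic case of kinetic versus thermodynamic control: the channel with the lower activation barrier dominates, even when its product is not the lowest-energy one, which is exactly what the quotes below describe. The sketch that follows makes the idea quantitative with an Arrhenius-style estimate; the barrier heights are invented for illustration and are not numbers from the paper.

```python
# Kinetic vs. thermodynamic control, sketched with an Arrhenius-style rate ratio.
# Barrier values are made-up illustrations, not results from the UT Austin study.
import math

kT = 0.0257            # k_B * T in eV at roughly room temperature
barrier_h2o2 = 0.40    # assumed activation barrier of the H2O2 channel (eV)
barrier_oh   = 0.55    # assumed (higher) barrier of the hydroxide channel (eV)

# Relative rate of the two competing bond-breaking channels
ratio = math.exp(-(barrier_h2o2 - barrier_oh) / kT)
print(f"H2O2 channel ~{ratio:.0f}x faster, even if hydroxide is the more stable product")
```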

"XSEDE provided computational resources without which we would not be able to do our research," Zhao said. "Using Comet and Stampede2 to simulate this reaction, we found that bond breaking to yield hydrogen peroxide can have a lower energy barrier than the bond breaking to yield hydroxide, despite that the hydroxide has a lower energy than the hydrogen peroxide."

"Moreover, we explained why the yield of hydrogen peroxide increases with decreasing electrode potential," Liu said. "There are two types of oxygen in the reaction and depending on which one first gets hydrogen from water, you may obtain different products ­– decreasing the electrode potential pushes the water closer to the oxygen and that will give us hydrogen peroxide."

 

This supercomputer-generated atomistic simulation shows how hydrogen peroxide forms during oxygen reduction, catalyzed by a single cobalt atom embedded in nitrogen-filled graphene. Credit: Xunhua Zhao, UT Austin
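The potential dependence Liu describes can be sketched with the widely used computational-hydrogen-electrode relation, in which the free energy of a step that consumes electrons shifts linearly with the electrode potential. This is only a schematic textbook relation, not the constant-potential, solid-water-interface model the authors actually used.

```latex
% Schematic potential dependence of a reduction step that consumes n electrons
% (computational hydrogen electrode picture); U is the electrode potential.
\Delta G(U) \;=\; \Delta G(U\!=\!0) \;+\; n\,e\,U
```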

 

Why It's Important

Liu and Zhao's findings have helped the materials science community further understand this important fundamental process; however, there is much work to be done.

"While our research uncovers how hydrogen peroxide is selectively produced, we continue to work on making our simulations less computationally expensive," Liu said. "We are also working on applying this model to other electrochemical systems."

 

How XSEDE Helped

"We used XSEDE resources to first develop an advanced first-principles model for effective calculations of the electrochemical kinetics at the solid-water interface," Liu said. "Then, we used Comet and Stampede2 to simulate the pathways of forming hydrogen peroxide and hydroxide on the Co-N4-C catalyst, and how these pathways vary with the applied electrode potential – this allowed us to understand what makes one product more favorable than the other, and how to tune the preference."

"We look forward to working on Expanse at SDSC for our next set of simulations that will help us continue developing and applying atomistic modelling methods to understand, design, and discover materials for electronics and energy applications." – Yuanyue Liu, assistant professor at UT Austin's Materials Science and Engineering Department

Zhao and Liu said that while they ran into a few code compilation problems during their study, XSEDE support was helpful in assisting with solutions.

"We look forward to working on Expanse at SDSC for our next set of simulations that will help us continue developing and applying atomistic modeling methods to understand, design, and discover materials for electronics and energy applications," Liu said.

This work was supported by the National Science Foundation (awards 1900039 and 2029442), the Welch Foundation (F-1959-20180324), ACS PRF (60934-DNI6), and the Department of Energy (DE-EE0007651). This work used computational resources at National Renewable Energy Lab, XSEDE (allocation TG-CHE190065), the Center for Nanoscale Materials at Argonne National Laboratory and the Center for Functional Nanomaterials at Brookhaven National Laboratory.

 

Related Links:

San Diego Supercomputer Center: https://www.sdsc.edu/

Texas Advanced Computing Center: https://tacc.utexas.edu/

UC San Diego: https://ucsd.edu/

University of Texas at Austin: https://utexas.edu/

National Science Foundation: https://www.nsf.gov/

XSEDE: https://www.xsede.org/

 

 

At a Glance:

  • UT Austin researchers used XSEDE resources to illustrate a detailed model of oxygen molecules reacting with water and electrons to form hydrogen peroxide on the catalyst.
  • Study results were published in the Journal of the American Chemical Society.
  • The scientists will next utilize Expanse at the San Diego Supercomputer Center for additional modelling to further their research.

 



UCLA Researchers Utilize SDSC Resources for Solar Wind Study

By Kimberly Mann Bruch, SDSC Communications

Artist's concept of the Parker Solar Probe spacecraft approaching the sun. Launched in 2018, the Parker Solar Probe provides data on solar activity and makes critical contributions to the ability to forecast major space-weather events that impact life on Earth. Credit: NASA

Both satellite and ground-based telecommunications systems can be impacted by geomagnetic storms, such as the one triggered by a powerful explosion on the surface of the sun that took down a Canadian power grid back in 1989. To better understand such strong, mysterious storms – which occur infrequently while other smaller storms occur more regularly – a team of scientists at UCLA recently completed a comparative study between satellite data collected by NASA's Parker Solar Probe and numerical simulations from Comet at the San Diego Supercomputer Center (SDSC) at UC San Diego.

The UCLA research team used XSEDE allocations on Comet to confirm that the shear between two solar wind streams of different speeds has a significant impact on the characteristics of turbulence, such as the energy levels and how the energy is distributed among different physical quantities. Importantly, their simulations on Comet agreed with satellite data collected by the Parker Solar Probe.
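As a rough picture of what two solar-wind streams of different speeds look like in a periodic simulation box, the sketch below builds a double shear-layer velocity profile of the kind suggested by the figure caption further down (shears near y of 0.25 and 0.75). The speeds and layer widths are made up for illustration; this is not the spectral MHD code used in the UCLA study.

```python
# Toy velocity profile for two solar-wind streams in a periodic 2-D box --
# illustrative only, not the UCLA team's MHD setup. All parameters are assumed.
import numpy as np

ny, ly = 256, 1.0
y = np.linspace(0.0, ly, ny, endpoint=False)

u_fast, u_slow = 700.0, 350.0   # assumed fast/slow stream speeds (km/s)
width = 0.02                    # assumed shear-layer width (fraction of the box)

# Double shear layer keeps the box periodic: fast stream in the middle,
# slow stream near the boundaries, with tanh transitions near y = 0.25 and 0.75.
ux = u_slow + 0.5 * (u_fast - u_slow) * (
    np.tanh((y - 0.25 * ly) / (width * ly)) - np.tanh((y - 0.75 * ly) / (width * ly))
)

print(f"stream speed spans {ux.min():.0f}-{ux.max():.0f} km/s across the shear layers")
```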

Recently published in Astronomy and Astrophysics and the Astrophysical Journal, the Comet-generated simulations help scientists better understand how solar wind is heated to such a high temperature (sometimes above one-million degrees) as well as how solar wind is accelerated to very high speeds (often faster than one-thousand miles per second). In turn, this information can help with solar wind forecasting.

"Our simulations not only agreed with the satellite data, but also provide detailed information not possible to obtain from the single-point satellite measurement," said Chen Shi, a postdoctoral scholar in the Earth, Planetary and Space Sciences Department at UCLA. "The Comet simulations can provide global information on most of the structures and processes happening in the heliosphere while the satellites only observe specific points in space. For example, the satellite gives us limited information while the simulations give us global information, which can lead to better insight on geomagnetic storms and allow us to forecast and prepare for those."

Our project helps answer the question of how plasma turbulence evolves in the solar wind and these studies are crucial to understanding a couple of the most fundamental physical processes in the field of heliophysics, such as Alfvénicity (a property of magnetohydrodynamic fluctuations in plasma physics), solar wind temperature and speed, and we are only able to accomplish our work with the help of supercomputers like Comet. — Chen Shi, postdoctoral scholar in the UCLA Earth, Planetary and Space Sciences Department

Shi further explained that the solar wind, when it impacts the Earth's magnetic field, can produce currents that flow in both outer space and on the ground. "If we can predict the fluctuations in the solar wind and how they interact with other solar processes, we can better prepare for magnetic storms caused by the turbulence that can interfere with satellite and ground-based telecommunications," he said. "Our project helps answer the question of how plasma turbulence evolves in the solar wind and these studies are crucial to understanding a couple of the most fundamental physical processes in the field of heliophysics, such as Alfvénicity (a property of magnetohydrodynamic fluctuations in plasma physics), solar wind temperature and speed, and we are only able to accomplish our work with the help of supercomputers like Comet."

 

Why is Alfvénicity Important?

This Comet-generated figure shows the evolution of the fluctuation amplitude in the solar wind calculated by a 2D magnetohydrodynamic simulation run on Comet. The stream shears (at y'~0.25 and 0.75) deform the shape of the wavefront of the fluctuations and facilitate the dissipation of the fluctuation energies. Credit: Chen Shi, UCLA

Alfvénicity, a term that describes the degree of alignment between magnetic fluctuations and velocity fluctuations, was named in honor of UC San Diego Nobel Laureate Hannes Alfvén (1908-1995) – regarded by many as the father of the physics discipline known as magnetohydrodynamics, the study of plasmas in magnetic fields. Alfvén's studies were some of the first on magnetic storms, and turbulence forecasting has become more prevalent as scientists have come to realize that the solar wind not only can have a negative impact on satellites, space apparatus and even astronauts, but can also create magnetic storms that threaten ground electronics on Earth.
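For readers new to the term, one common way the solar-wind literature quantifies Alfvénicity is the normalized cross helicity of the velocity and magnetic-field fluctuations, with the magnetic fluctuation expressed in Alfvén (velocity) units; values near plus or minus one indicate highly Alfvénic, predominantly one-directional wave activity. The papers discussed here may use a related but not identical diagnostic.

```latex
% Normalized cross helicity, a common measure of Alfvenicity, with
% b = \delta B / \sqrt{\mu_0 \rho} the magnetic fluctuation in Alfven units.
\sigma_c \;=\; \frac{2\,\langle \delta\mathbf{v}\cdot\mathbf{b}\rangle}
                    {\langle |\delta\mathbf{v}|^2\rangle + \langle |\mathbf{b}|^2\rangle},
\qquad -1 \le \sigma_c \le 1 .
```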

What would Alfvén think of the Parker Solar Probe, launched in 2018, which flies so close to the sun that it can measure Alfvénicity in the very nascent solar wind? And how did Shi's project show that faster solar wind has higher Alfvénicity?

"We found a more dominant outward propagating wave component and more balanced magnetic and kinetic energies in the solar wind with larger speed," Shi explained. "Meanwhile the slow wind that originates near the polar coronal holes has much lower Alfvénicity compared with the slow wind that originates from the active regions."

These findings lead to a deeper understanding of the processes that occur in the solar wind and can help improve models and forecasts.
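
For readers who want to experiment with these diagnostics, the short sketch below computes the normalized cross helicity, the residual energy, and the outward/inward Elsässer energies from fluctuation time series. It is a minimal, self-contained illustration using synthetic NumPy arrays; it is not code from the UCLA simulations, and all variable names and numbers are invented for the example.

```python
import numpy as np

# Synthetic fluctuation time series (N samples x 3 vector components), purely
# illustrative. dv is the velocity fluctuation; db is the magnetic fluctuation
# already converted to Alfven (velocity) units, db = dB / sqrt(mu0 * rho).
rng = np.random.default_rng(0)
N = 4096
wave = rng.standard_normal((N, 3))              # shared Alfvenic signal
dv = wave + 0.2 * rng.standard_normal((N, 3))
db = -wave + 0.2 * rng.standard_normal((N, 3))  # anti-correlated with dv

ev = np.mean(np.sum(dv * dv, axis=1))   # kinetic fluctuation energy
eb = np.mean(np.sum(db * db, axis=1))   # magnetic fluctuation energy
evb = np.mean(np.sum(dv * db, axis=1))  # cross term

sigma_c = 2.0 * evb / (ev + eb)         # normalized cross helicity
sigma_r = (ev - eb) / (ev + eb)         # normalized residual energy

# Elsasser energies: with an outward-pointing background field, outward-propagating
# Alfven waves have dv and db anti-correlated, so z_out = dv - db carries them
# (the sign convention flips with the magnetic field polarity).
e_out = np.mean(np.sum((dv - db) ** 2, axis=1))
e_in = np.mean(np.sum((dv + db) ** 2, axis=1))

print(f"sigma_c = {sigma_c:+.2f}  (|sigma_c| near 1 -> highly Alfvenic)")
print(f"sigma_r = {sigma_r:+.2f}  (near 0 -> balanced magnetic/kinetic energy)")
print(f"E_out/E_in = {e_out / e_in:.1f}  (> 1 -> outward-propagating waves dominate)")
```

Run on this synthetic data, the script prints a value of |sigma_c| close to 1 and an outward-to-inward energy ratio well above 1, mimicking the highly Alfvénic, outward-dominated fast wind described above.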

 

How XSEDE Helped

To accomplish this large-scale, high-performance computation project, Shi said that the team received a great deal of assistance from the XSEDE support team on ways to improve the scalability of their code. "The SDSC computational team gave us advice on improved ways to run our numerical simulations on Comet, which provided us with the necessary resources to conduct our study," said Shi. "More and more researchers are viewing numerical simulations as an extremely helpful and necessary method to study heliophysics and plasma physics – we are fortunate to have access to supercomputers like Comet alongside the support staff that allows us to accomplish our research goals."

 

What's Next?

Because the Comet-generated two-dimensional data closely agreed with the observations collected by the Parker Solar Probe, the team has now developed plans for additional studies that encompass three-dimensional simulations. And, as the Parker Solar Probe continues to lower its orbit toward the sun, the researchers have already started working on their next set of calculations and plan to use additional supercomputing resources to complete future explorations of the solar wind in both two and three dimensions.

This research was funded in part by the FIELDS experiment on the Parker Solar Probe spacecraft, designed and developed under NASA contract NNN06AA01C, and by the NASA Parker Solar Probe Observatory Scientist grant NNX15AF34G. Computational work on Comet was completed via XSEDE (allocation TG-AST200031).

 

Related Links:

San Diego Supercomputer Center: https://www.sdsc.edu/

UCLA: https://ucla.edu/

UC San Diego: https://ucsd.edu/

National Science Foundation: https://www.nsf.gov/

XSEDE: https://www.xsede.org/

 

At a Glance:

  • UCLA researchers used XSEDE allocations to run numerical simulations on SDSC's Comet to better understand geomagnetic storms.
  • Simulation results agreed with data collected by NASA's Parker Solar Probe.
  • Results were published in Astronomy and Astrophysics and the Astrophysical Journal.

Student Champions

Campus Champions programs include Regional, Student, and Domain Champions.

 

Student Champions

Student Champion volunteer responsibilities vary from one institution to another and depend on the student's Campus Champion Mentor. Student Champions may work with their Mentor to provide outreach on campus, helping others access the advanced computing resources best suited to their research goals, provide training to people on their campus, or work on special projects assigned by their Mentor. Student Champions are also encouraged to attend the annual PEARC conference, participate in the PEARC student program, and submit posters or papers to the conference. 

To join the Student Champions program, the Campus Champion who will serve as the student's mentor should send a message to info@campuschampions.org recommending the student for the program and confirming their willingness to serve as the mentor. 

Questions? Email info@campuschampions.org.

 

 

 

INSTITUTION CHAMPION MENTOR FIELD OF STUDY DESIGNATION GRADUATION 
Alabama Agricultural & Mechanical University Georgianna Wright Damian Clarke Computer Science Undergraduate 2022
Arizona State University Natalie Mason Marisa Brazil & Ian Shaeffer Informatics Undergraduate 2023
Claremont Graduate University Cindy Cheng Jeho Park Information Systems & Technology Graduate 2022
Claremont Graduate University Michael Espero Asya Shklyar Biostatistics, Neurocognitive Science Graduate 2021
Claremont McKenna College Zeyad Elkelani Jeho Park Political Science Graduate 2021
Dillard University Priscilla Saarah Tomekia Simeon Biology Undergraduate 2022
Dillard University Brian Desil Tomekia Simeon Physics Undergraduate 2021
Drexel University Cameron Fritz David Chin Computer Science Undergraduate 2023
Drexel University Hoang Oanh Pham David Chin Computer Science Undergraduate 2023
Florida A&M University Rodolfo Tsuyoshi F. Kamikabeya Hongmei Chi Computer Information Science Graduate 2021
Florida A&M University Emon Nelson Hongmei Chi Computer Science Graduate 2021
Georgia Institute of Technology Sebastian Kayhan Hollister Semir Sarajlic Computer Science  Undergraduate 2021
Georgia Institute of Technology Siddhartha Vemuri Semir Sarajlic Computer Science Undergraduate 2021
Georgia State University Kenneth Huang Suranga Naranjan   Graduate 2021
Georgia State University  Melchizedek Mashiku Suranga Naranjan Computer Science Undergraduate 2022
Howard University Christina McBean Marcus Alfred Physics & Mathematics Undergraduate 2021
Howard University Tamanna Joshi Marcus Alfred Condensed Matter Theory Graduate 2021
Johns Hopkins University Jodie Hoh Jaime Combariza, Anthony Kolasny, Kevin Manalo Computer Science Undergraduate 2022
Kansas State University Mohammed Tanash Dan Andresen Computer Science Graduate 2022
Massachusetts Green HPC Center Abigail Waters  Julie Ma Clinical Psychology Graduate 2022
North Carolina State University Bailey Pollard Lisa Lowe Business Administration Undergraduate 2022
Northwestern University  Sajid Ali Alper Kinaci Applied Physics Graduate 2021
Oregon State University McKenzie Hughes CJ Keist Biology Undergraduate 2021
Pomona College Nathaniel Getachew Asya Shklyar Computer Science & Mathematics Undergraduate 2023
Pomona College Omar Zintan Mwinila-Yuori Asya Shklyar Computer Science Undergraduate  2022
Pomona College Samuel Millette Asya Shklyar Computer Science  Undergraduate   2023
Prairie View A&M University Chara Tatum Suxia Cui Computer Science Undergraduate 2021
Prairie View A&M University Kobi Tioro Suxia Cui Computer Engineering Undergraduate 2021
Prairie View A&M University Racine McLean Suxia Cui Computer Engineering Undergraduate 2021
Prairie View A&M University Virgie Leyva Suxia Cui Computer Engineering Undergraduate 2021
Reed College Jiarong Li Trina Marmarelli Math-Computer Science Undergraduate 2021
Rensselaer Polytechnic Institute James Flamino Joel Geidt   Graduate 2022
Rutgers University Girish Ganesan Galen Collier Computer Science and Mathematics Undergraduate 2023
Saint Louis University Frank Gerhard Schroer IV Eric Kaufmann Physics Undergraduate 2021
Southern Illinois University Majid Memari Chet Langin   Graduate 2021
Southern Illinois University Manvith Mali Chet Langin Computer Science Graduate 2021
Southwestern Oklahoma State University Arianna Martin Jeremy Evert Computer Science & Music Performance Undergraduate 2023
Texas Tech University Misha Ahmadian Tom Brown Computer Science Graduate  2022
The University of Tennessee at Chattanooga  Carson Woods Tony Skjellum Computer Science Undergraduate 2021
University of Alabama at Birmingham Shahram Talei John-Paul Robinson Physics Graduate 2021
University of Arizona Alexander Prescott Blake Joyce Geosciences Graduate 2021
University of Arkansas Timothy "Ryan" Rogers Jeff Pummill Physical Chemistry Graduate 2021
University of Central Oklahoma Samuel Kelting Evan Lemley Mathematics/CS Undergraduate 2021
University of Central Oklahoma Thomas Dunn Evan Lemley Computer Science Undergraduate 2022
University of Delaware Parinaz Barakhshan Anita Schwartz Electrical and Computer Engineering Graduate 2024
University of Maine Michael Brady Butler Bruce Segee Physics/Computational Materials Science Graduate 2022
University of Michigan Daniel Kessler Shelly Johnson Statistics Graduate 2022
University of Minnesota Aneesh Venugopal Ben Lynch Electrical Engineering Graduate 2021
University of Missouri Ashkan Mirzaee Predrag Lazic Industrial Engineering Graduate 2021
University of Nebraska Natasha Pavlovikj Adam Caprez Computer Science Graduate 2021
University of North Carolina Wilmington Cory Nichols Shrum Eddie Dunn      
University of Southern California Ryan Sim Asya Shklyar Physics & Electrical and Computer Engineering Undergraduate 2022
University of Texas at Dallas Namira Pervez Jaynal Pervez Neuroscience Undergraduate 2024
Yale University Sinclair Im Andy Sherman Applied Math Graduate 2022
           
GRADUATED          
Boise State University Mike Henry Kyle Shannon     2020
Florida A&M University George Kurian Hongmei Chi     2019
Florida A&M University Temilola Aderibigbe Hongmei Chi     2019
Florida A&M University Stacyann Nelson Hongmei Chi     2019
Georgia State University Mengyuan Zhu Suranga Naranjan     2017
Georgia State University Thakshila Herath Suranga Naranjan     2018
Iowa State University Justin Stanley  Levi Barber     2020
Jackson State University Ebrahim Al-Areqi Carmen Wright     2018
Jackson State University Duber Gomez-Fonseca Carmen Wright     2019
Midwestern State University Broday Walker Eduardo Colmenares     2020
Mississippi State University Nitin Sukhija Trey Breckenridge     2015
New Jersey Institute of Technology Vatsal Shah Roman Voronov     2020
North Carolina State University Dheeraj Kalidini Lisa Lowe     2020
North Carolina State University Michael Dacanay Lisa Lowe      
North Carolina State University Yuqing Du Lisa Lowe     2021
Oklahoma State University Phillip Doehle Dana Brunson     2016
Oklahoma State University Venkat Padmanapan Rao Jesse Schafer     2019
Oklahoma State University Raj Shukla Dana Brunson     2018
Oklahoma State University Nathalia Graf Grachet Philip Doehle     2019
Rensselaer Polytechnic Institute Jorge Alarcon Joel Geidt     2016
Southern Illinois University Aaron Walber Chet Langin     2020
Southern Illinois University Alex Sommers Chet Langin     2018
Southern Illinois University Sai Susheel Sunkara Chet Langin     2018
Southern Illinois University Monica Majiga Chet Langin     2017
Southern Illinois University Sai Sandeep Kadiyala  Chet Langin     2017
Southern Illinois University Rezaul Nishat Chet Langin     2018
Southern Illinois University Alvin Gonzales Chet Langin     2020
Southwestern Oklahoma State University Kurtis D. Clark Jeremy Evert Computer Science   2020
Texas A&M University - College Station Logan Kunka Jian Tao     2020
Tufts University Georgios (George) Karamanis Shawn G. Doughty     2018
University of Arkansas Shawn Coleman Jeff Pummill     2014
University of California - Merced Luanzheng Guo Sarvani Chadalapaka     2020
University of Central Florida Amit Goel Paul Weigand      
University of Florida David Ojika Oleksandr Moskalenko     2018
University of Illinois at Chicago Babak Kashir Taloori Jon Komperda     2021
University of Iowa Baylen Jacob Brus Ben Rogers     2020
University of Houston Clear Lake Tarun Kumar Sharma Liwen Shih     2014
University of Houston-Downtown Eashrak Zubair Hong Lin     2020
University of Maryland Baltimore County Genaro Hernadez Paul Schou     2015
University of Michigan Simon Adorf Shelly Johnson     2019
University of Missouri Alexander Barnes Timothy Middelkoop     2018
University of North Carolina Wilmington James Stinson Gray Eddie Dunn     2018
University of Pittsburgh Shervin Sammak Kim Wong     2016
University of South Dakota Joseph Madison Doug Jennewein     2018
University of Wyoming Rajiv Khadka Jared Baker     2020
Virginia Tech University David Barto Alana Romanella     2020
Virginia Tech University Lu Chen Alana Romanella     2017
West Chester University of Pennsylvania Jon C. Kilgannon Linh Ngo     2020
Winston-Salem State University Daniel Caines Xiuping Tao     2019

Updated: March 25, 2020

 


COVID-19 HPC Consortium

HPC Resources available to fight COVID-19

The COVID-19 HPC Consortium encompasses computing capabilities from some of the most powerful and advanced computers in the world. We hope to empower researchers around the world to accelerate understanding of the COVID-19 virus and the development of treatments and vaccines to help address infections. Consortium members manage a range of computing capabilities that span from small clusters to some of the very largest supercomputers in the world.

Preparing your COVID-19 HPC Consortium Request

(Revised 23 September 2021)


NOTE: The 28 October 2020 updates to this page incorporate new guidance reflecting the Consortium's desire to more actively manage the portfolio of projects it accepts.

To request access to resources of the COVID-19 HPC Consortium, you must prepare a description, no longer than three (3) pages, of your proposed work. Do not include any proprietary information in proposals, since your request will be reviewed by staff from a number of consortium sites. 

Review Criteria and Project Expectations

The proposals will be evaluated on the following criteria:

  • Potential near-term benefits for COVID-19 response

  • Feasibility of the technical approach

  • Need for high-performance computing

  • High-performance computing knowledge and experience of the proposing team

  • Estimated computing resource requirements 

The Consortium is particularly, though not exclusively, interested in projects focused on:

  • Understanding and modeling patient response to the virus using large clinical datasets

  • Learning and validating vaccine response models from multiple clinical trials

  • Evaluating combination therapies using repurposed molecules

  • Epidemiological models driven by large multi-modal datasets

Please note the following parameters and expectations:

  • Projects supported by the COVID-19 HPC Consortium are intended to provide benefits to COVID-19 response in the near-term (< 6 months). Projects with longer-term benefits will be referred to standard proposal and allocation processes.

  • Allocations of resources are expected to be for a maximum of six (6) months; proposers may submit subsequent proposals for additional resources

  • All supported projects will have the name of the principal investigator, affiliation, project title and project abstract posted to the  COVID-19 HPC Consortium web site.

  • Project PIs are expected to provide brief (~2 paragraphs) updates on a weekly basis.

  • It is expected that teams who receive Consortium access will publish their results in the open scientific literature.  

Further to the last point, the COVID-19 HPC Consortium wishes to catalyze open, pre-competitive results that can impact COVID-19. Because of the strong public component of the Consortium, all project results must be open and publishable.  

If a principal investigator (PI) believes that a project will result in commercializable intellectual property (IP), the PI should work directly with providers that can accommodate commercializable IP requirements.  The information in the Resources section indicates which providers can make such arrangements.

Project Description Outline

To ensure your request is directed to the appropriate resource(s), your description should include the sections outlined here.

A. Scientific/Technical Goal

Describe how your proposed work contributes to our understanding of COVID-19 and improves the nation's ability to respond to the pandemic in the near term (<6 months).

  • What is the scientific/technical goal and pandemic response impact?

  • What is the plan and timetable for getting to the goal?

  • What is the expected period for performance (one week to six months)?

  • Where do you plan to publish your results and in what timeline?

B. Estimate of Compute, Storage and Other Resources

To the extent possible, provide an estimate of the scale and type of the resources needed to complete the work. The information in the Resources section below is available to help you answer this question.  Please be as specific as possible in your resource request along with data supporting the request made.  Also, please indicate your preferred resource(s) as well as alternative resources should there not be sufficient availability of the primary resource(s) you request. 

  • Are there specific computing architectures or systems that are most appropriate (e.g., GPUs, large memory, large core counts on shared-memory nodes, etc.)?

  • How much computing support will this effort approximately require in terms of core, node, or GPU hours? How does this break down into the number of independent computations and their individual compute requirements? (A worked example of this kind of estimate appears after this list.)

  • How distributed can the computation be, and can it be split across multiple HPC systems?

  • Can this workload execute in a cloud environment?

  • Describe the storage needs of the project.

  • Does your project require access to any public datasets? If so, please describe these datasets and how you intend to use them.
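
To illustrate the level of detail that is helpful here, the sketch below works through a back-of-the-envelope estimate for a hypothetical ensemble of independent runs. Every number in it (run count, nodes and hours per run, output size per run) is an invented placeholder, not guidance on what a real project should request.

```python
# Back-of-the-envelope resource estimate for a hypothetical ensemble study.
# All numbers are invented placeholders.
independent_runs = 500      # e.g., candidate molecules or parameter combinations
nodes_per_run = 4           # nodes used by a single run
hours_per_run = 6           # wall-clock hours for a single run
node_hours = independent_runs * nodes_per_run * hours_per_run
print(f"Compute request: ~{node_hours:,} node-hours "
      f"({independent_runs} runs x {nodes_per_run} nodes x {hours_per_run} h each)")

output_gb_per_run = 20      # output volume written by a single run
storage_tb = independent_runs * output_gb_per_run / 1024
print(f"Storage request: ~{storage_tb:.1f} TB of project/scratch space")
```

A request at this level of specificity, together with the preferred and alternative systems, makes it much easier for reviewers to match the project to an appropriate resource.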

 

C. Support Needs

Describe whether collaboration or support from staff at various Consortium Member and Affiliate organizations (e.g., commercial cloud providers, HPC centers, data or tool providers) will be essential, helpful, or unnecessary. Estimates of necessary application support are very helpful. Teams should also identify any restrictions that might apply to the project, such as export-controlled code, ITAR restrictions, proprietary data sets, regional location of compute resources, or personal health information (PHI) or HIPAA restrictions. In such cases, please provide information on security, privacy, and access issues.

D. Team and Team Preparedness

Summarize your team's qualifications and readiness to execute the project both in using the methods proposed and the resources requested.

  • What is the expected lead time before you can begin the simulation runs?

  • What systems have you recently used and how big were the simulation runs?

  • Given that some resources are at federal facilities with restrictions, please provide a list of the team members that will require accounts on resources along with their citizenship.

Document Formatting

While readability is of greatest importance, documents must satisfy the following minimum requirements. Documents that conform to NSF proposal format guidelines will satisfy these guidelines.

  • Margins: Documents must have 2.5-cm (1-inch) margins at the top, bottom, and sides.

  • Fonts and Spacing: The type size used throughout the documents must conform to the following three requirements:
    Use one of the following typefaces identified below:

    • Arial 11, Courier New, or Palatino Linotype at a font size of 10 points or larger;

    • Times New Roman at a font size of 11 points or larger; or

    • Computer Modern family of fonts at a font size of 11 points or larger.

  • A font size of less than 10 points may be used for mathematical formulas or equations, figures, table or diagram captions and when using a Symbol font to insert Greek letters or special characters. PIs are cautioned, however, that the text must still be readable.

  • Type density must be no more than 15 characters per 2.5 cm (1 inch).

  • No more than 6 lines must be within a vertical space of 2.5 cm (1 inch).

  • Page Numbering: Page numbers should be included in each file by the submitter. Page numbering is not provided by XRAS.

  • File Format: XRAS accepts only PDF file formats.

 

Submitting your COVID-19 HPC Consortium request 

  1. Create an XSEDE portal account
    • Go to https://portal.xsede.org/
    • Click on "Sign In" at the upper right, if you have an XSEDE account … 
    • … or click "Create Account" to create one. 
    • To create an account, basic information will be required (name, organization, degree, address, phone, email). 
    • Email verification will be necessary to complete the account creation.
    • Set your username and password.
    • After your account is created, be sure you're logged into https://portal.xsede.org/
    • IMPORTANT: Each individual should have their own XSEDE account; it is against policy to share user accounts.
  2. Go to the allocation request form
    • Follow this link to go directly to the submission form.
    • Or to navigate to the request form:
      • Click the "Allocations" tab in the XSEDE User Portal,
      • Then select "Submit/Review Request."
      • Select the "COVID-19 HPC Consortium" opportunity.
    • Select "Start a New Submission."
  3. Complete your submission
    • Provide the data required by the form. Fields marked with a red asterisk are required to complete a submission.
    • The most critical screens are the Personnel, Title/Abstract, and Resources screens.
      • On the Personnel screen, one person must be designated as the Principal Investigator (PI) for the request. Other individuals can be added as co-PIs or Users (but they must have XSEDE accounts).
      • On the Title/Abstract screen, all fields are required.
      • On the Resources screen…
        • Enter "n/a" in the "Disclose Access to Other Compute Resources" field (to allow the form to be submitted).
        • Then, select "COVID-19 HPC Consortium" and enter 1 in the Amount Requested field. 
    • On the Documents screen, select "Add Document" to upload your 3-page document. Select "Main Document" or "Other" as the document Type.
      • Only PDF files can be accepted.
    • You can ignore the Grants and Publications sections. However, you are welcome to enter any supporting agency awards, if applicable.
    • On the Submit screen, select "Submit Request." If necessary, correct any errors and submit the request again.

Resources available for COVID-19 HPC Consortium request 

U.S. Department of Energy (DOE) Advanced Scientific Computing Research (ASCR)
Supercomputing facilities at DOE offer some of the most powerful resources for scientific computing in the world. The Argonne Leadership Computing Facility (ALCF), the Oak Ridge Leadership Computing Facility (OLCF), and Lawrence Berkeley National Laboratory (LBNL) may be used for modeling and simulation coupled with machine and deep learning techniques to study a range of areas, including examining underlying protein structure, classifying the evolution of the virus, understanding mutation, uncovering important differences and similarities with the 2002-2003 SARS virus, searching for potential vaccine and antiviral compounds, and simulating the spread of COVID-19 and the effectiveness of countermeasure options.

 

Oak Ridge Summit | 200 PF, 4608 nodes, IBM POWER9/NVIDIA Volta

Summit System

 2 x IBM POWER9 per node
42 TF per node
6 x NVIDIA Volta GPUs per node
512 GB DDR4 + 96 GB HBM2 (GPU memory) per node
1600 GB per node
2 x Mellanox EDR IB adapters (100 Gbps per adapter)
250 PB, 2.5 TB/s, IBM Spectrum Scale storage

 

Argonne Theta | 11.69 PF, 4292 nodes, Intel Knights Landing

1 x Intel KNL 7230 per node, 64 cores per CPU
192 GB DDR4, 16 GB MCDRAM memory per node
128 GB local storage per node
Aries dragonfly network
10 PB Lustre + 1 PB IBM Spectrum Scale storage
Full details available at: https://www.alcf.anl.gov/alcf-resources

 

Lawrence Berkeley National Laboratory 

LBNL Cori | 32 PF, 12,056 Intel Xeon Phi and Xeon nodes
9,668 nodes, each with one 68-core Intel Xeon Phi Processor 7250 (KNL)
96 GB DDR4 and 16 GB MCDRAM memory per KNL node
2,388 nodes, each with two 16-core Intel Xeon Processor E5-2698 v3 (Haswell)
128 GB DDR4 memory per Haswell node
Cray Aries dragonfly high speed network
30 PB Lustre file system and 1.8 PB Cray DataWarp flash storage
Full details at: https://www.nersc.gov/systems/cori/

U.S. DOE National Nuclear Security Administration (NNSA)

Established by Congress in 2000, NNSA is a semi-autonomous agency within the U.S. Department of Energy responsible for enhancing national security through the military application of nuclear science. NNSA resources at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), and Sandia National Laboratories (SNL) are being made available to the COVID-19 HPC Consortium.

Lawrence Livermore + Los Alamos + Sandia | 32.2 PF, 7375 nodes, IBM POWER8/9, Intel Xeon
  • LLNL Lassen
    • 23 PFLOPS, 788 compute nodes, IBM Power9/NVIDIA Volta GV100
    • 28 TF per node
    • 2 x IBM POWER9 CPUs (44 cores) per node
    • 4 x NVIDIA Volta GPUs per node
    • 256 GB DDR4 + 64 GB HBM2 (GPU memory) per node
    • 1600 GB NVMe local storage per node
    • 2 x Mellanox EDR IB (100Gb/s per adapter)
    • 24 PB storage
  • LLNL Quartz
    • 3.2 PF, 3004 compute nodes, Intel Broadwell
    • 1.2 TF per node
    • 2 x Intel Xeon E5-2695 CPUs (36 cores) per node
    • 128 GB memory per node
    • 1 x Intel Omni-Path IB (100Gb/s)
    • 30 PB storage (shared with other clusters)
  • LLNL Pascal
    • 0.9 PF, 163 compute nodes, Intel Broadwell CPUs/NVIDIA Pascal P100
    • 11.6 TF per node
    • 2 x Intel Xeon E5-2695 CPUs (36 cores) per node
    • 2 x NVIDIA Pascal P100 GPUs per node
    • 256 GB memory + 32 HBM2 (GPU memory) per node
    • 1 x Mellanox EDR IB (100Gb/s)
    • 30 PB storage (shared with other clusters) 
  • LLNL Ray
    • 1.0   PF, 54 compute nodes, IBM Power8/NVIDIA Pascal P100
    • 19 TF per node
    • 2 x IBM Power8 CPUs (20 cores) per node
    • 4 x NVIDIA Pascal P100 GPUs per node
    • 256 GB + 64 GB HBM2 (GPU memory) per node
    • 1600 GB NVMe local storage per node
    • 2 x Mellanox EDR IB (100Gb/s per adapter)
    • 1.5 PB storage
  • LLNL Surface
    • 506 TF, 158 compute nodes, Intel Sandy Bridge/NVIDIA Kepler K40m
    • 3.2 TF per node
    • 2 x Intel Xeon E5-2670 CPUs (16 cores) per node
    • 3 x NVIDIA Kepler K40m GPUs
    • 256 GB memory + 36 GB GDDR5 (GPU memory) per node
    • 1 x Mellanox FDR IB (56Gb/s)
    • 30 PB storage (shared with other clusters)
  • LLNL Syrah
    • 108 TF, 316 compute nodes, Intel Sandy Bridge
    • 0.3 TF per node
    • 2 x Intel Xeon E5-2670 CPUs (16 cores) per node
    • 64 GB memory per node
    • 1 x QLogic IB (40Gb/s)
    • 30 PB storage (shared with other clusters)
  • LANL Snow
    • 445 TF, 368 compute nodes, Intel Broadwell
    • 1.2 TF per node
    • 2 x Intel Xeon E5-2695 CPUs (36 cores) per node
    • 128 GB memory per node
    • 1 x Intel Omni-Path IB (100Gb/s)
    • 15.2 PB storage
  • LANL Badger
    • 790 TF, 660 compute nodes, Intel Broadwell
    • 1.2 TF per node
    • 2 x Intel Xeon E5-2695 CPUs (36 cores) per node
    • 128 GB memory per node
    • 1 x Intel Omni-Path IB (100Gb/s)
    • 15.2 PB storage
U.S. DOE Office of Nuclear Energy

Idaho National Laboratory | 6 PF, 2079 nodes, Intel Xeon

  • Sawtooth | 6 PF; 2079 compute nodes; 99,792 cores; 108 NVIDIA Tesla V100 GPUs
    • Mellanox Infiniband EDR, hypercube
    • CPU-only nodes:
      • 2052 nodes, 2 x Intel Xeon 8268 CPUs
      • 192 GB Ram/node
    • CPU/GPU nodes:
      • 27 nodes, 2 x Intel Xeon 8268 CPUs
      • 384 GB Ram/node
      • 4 NVIDIA Tesla V100 GPUs
Rensselaer Polytechnic Institute
The Rensselaer Polytechnic Institute (RPI) Center for Computational Innovations is solving problems for next-generation research through the use of massively parallel computation and data analytics. The center supports researchers, faculty, and students across a diverse spectrum of disciplines. RPI is making its Artificial Intelligence Multiprocessing Optimized System (AiMOS) available to the COVID-19 HPC Consortium. AiMOS is an 8-petaflop IBM Power9/Volta supercomputer configured to enable users to explore new AI applications.

 

RPI AiMOS | 11.1 PF, 252 nodes POWER9/Volta

2 x IBM POWER9 CPU per node, 20 cores per CPU
6 x NVIDIA Tesla GV100 per node
32 GB HBM per GPU
512 GB DRAM per node
1.6 TB NVMe per node
Mellanox EDR InfiniBand
11 PB IBM Spectrum Scale storage

MIT/Massachusetts Green HPC Center (MGHPCC)
MIT is contributing two HPC systems to the COVID-19 HPC Consortium. The MIT Supercloud, a 7-petaflop Intel x86/NVIDIA Volta HPC cluster, is designed to support research projects that require significant compute, memory or big data resources. Satori is a 2-petaflop, scalable, AI-oriented hardware resource for research computing at MIT composed of 64 IBM Power9/Volta nodes. The MIT resources are installed at the Massachusetts Green HPC Center (MGHPCC), which operates as a joint venture between Boston University, Harvard University, MIT, Northeastern University, and the University of Massachusetts.

 

MIT/MGHPCC Supercloud | 6.9 PF, 440 nodes Intel Xeon/Volta

2 x Intel Xeon (18 CPU cores per node)
2 x NVIDIA V100 GPUs per node
32 GB HBM per GPU
Mellanox EDR InfiniBand
3 PB scratch storage

MIT/MGHPCC Satori | 2.0 PF, 64 nodes IBM POWER9/NVIDIA Volta

2 x POWER9, 40 cores per node
4 x NVIDIA Volta GPUs per node (256 total)
32 GB HBM per GPU
1.6 TB NVMe per node
Mellanox EDR InfiniBand
2 PB scratch storage

IBM 

 

IBM Cloud

Compute:
bx2-16x64 with 16 vCPUs, 64GB RAM and 32 Gbps
bx2-48x192 with 48 vCPUs, 192 GB RAM and 80 Gbps

cx2-16x32 with 16 vCPUs, 32 GB RAM and 32 Gbps
cx2-32x64 with 32 vCPUs, 64 GB RAM and 64 Gbps

mx2-16x128 with 16 vCPUs, 128 GB RAM and 32 Gbps
mx2-32x256 with 32 vCPUs, 256 GB RAM and 64 Gbps

Storage:
Cloud Object Storage

Job Scheduling:
IBM Spectrum LSF managed instances for easy job submission and monitoring.

IBM Research

IBM Research is providing our WSC 2.8 PF, 54 node, IBM POWER9/NVIDIA Volta high performance computing cluster as well as software tools to help accelerate discovery.

2 x POWER9 CPU per node, 22 cores per CPU
6 x NVIDIA Volta GPUs per node (336 total)
512 GiB DRAM per node
1.4 TB NVMe per node
2 x Mellanox EDR InfiniBand
2 PB IBM Spectrum Scale storage

Deep Search: The Deep Search platform for COVID-19 helps researchers quickly find and aggregate information in the exponentially growing literature related to COVID-19. Examples of such information include a list of all drugs reported to have been used so far.
Drug Candidate Exploration: To help researchers generate potential new drug candidates for COVID-19, we have applied our novel AI generative frameworks to three COVID-19 targets and have generated 3000 novel molecules. We are sharing these molecules under a Creative Commons license.
Functional Genomics Platform: IBM Functional Genomics Platform is a cloud-based data repository that accelerates the study of microbial life at scale with specifically curated molecular sequence data to fight COVID-19.

U.S. National Science Foundation (NSF)

The NSF Office of Advanced Cyberinfrastructure supports and coordinates the development, acquisition, and provision of state-of-the-art cyberinfrastructure resources, tools and services essential to the advancement and transformation of science and engineering. By fostering a vibrant ecosystem of technologies and a skilled workforce of developers, researchers, staff and users, OAC serves the growing community of scientists and engineers, across all disciplines. The most capable resources supported by NSF OAC are being made available to support the COVID-19 HPC Consortium.

Frontera | 38.7 PF, 8114 nodes, Intel Xeon, NVIDIA RTX GPU

Funded by the National Science Foundation and operated by the Texas Advanced Computing Center (TACC), Frontera provides a balanced set of capabilities that supports both capability and capacity simulation, data-intensive science, visualization, and data analysis, as well as emerging applications in AI and deep learning. Frontera has two computing subsystems: a primary computing system focused on double-precision performance, and a second subsystem focused on single-precision streaming-memory computing. Frontera is built by Dell, Intel, DataDirect Networks, Mellanox, NVIDIA, and GRC.

Expanse | 5.63 PF, total 844 nodes, AMD EPYC 7742 (Rome), NVIDIA Volta (V100) GPUs

Operated by the San Diego Supercomputer Center (SDSC), Expanse is a cluster designed by Dell and SDSC delivering 5.63 peak petaflops. Expanse's 784 standard compute nodes are each powered by two 64-core AMD EPYC 7742 processors and contain 256 GB of DDR4 memory. The cluster also has 56 GPU nodes, each containing four NVIDIA V100s (32 GB SXM2) connected via NVLink and dual 20-core Intel Xeon 6248 CPUs. Expanse also has four 2 TB large-memory nodes.

Stampede2 | 19.3 PF, 4200 Intel KNL, 1,736 Intel Xeon

Operated by TACC, Stampede2 is a nearly 20-petaflop national HPC resource accessible to thousands of researchers across the country, enabling new computational and data-driven scientific, engineering, research, and educational discoveries and advances. 

Longhorn | 2.8 PF, 112 nodes, IBM POWER9, NVIDIA Volta

Longhorn is a TACC resource built in partnership with IBM to support GPU-accelerated workloads. The power of this system is in its multiple GPUs per node, and it is intended to support sophisticated workloads that require high GPU density and little CPU compute. Longhorn will support double-precision machine learning and deep learning workloads that can be accelerated by GPU-powered frameworks, as well as general purpose GPU calculations.

Bridges-2 | 5.1 PF, 576 nodes, AMD EPYC 7742 (Rome), NVIDIA Volta (V100) GPUs

Operated by the Pittsburgh Supercomputing Center (PSC), Bridges-2 provides a heterogeneous research computing platform focused on supporting rapidly evolving and data-centric computing. Bridges-2 was built in collaboration with Hewlett Packard Enterprise (HPE) and contains 504 regular compute nodes, each powered by two 64-core AMD EPYC 7742 processors and containing 256 GB of DDR4 memory (16 of them have 512 GB for larger-memory applications); 4 extreme-memory nodes, each with 4 TB of DDR4 memory; and 24 GPU nodes, each containing 8 NVIDIA V100s (32 GB SXM2) connected via NVLink (there are 9 additional nodes that contain 8 NVIDIA V100s (16 GB SXM2) each, as well as an NVIDIA DGX-2).

Jetstream | 320 nodes, Cloud accessible

Operated by a team led by the Indiana University Pervasive Technology Institute, Jetstream is a configurable large-scale computing resource that leverages both on-demand and persistent virtual machine technology to support a wide array of software environments and services through incorporating elements of commercial cloud computing resources with some of the best software in existence for solving important scientific problems.  

Open Science Grid | Distributed High Throughput Computing, 10,000+ nodes, Intel x86-compatible CPUs, various NVIDIA GPUs

The Open Science Grid (OSG) is a large virtual cluster of distributed high-throughput computing (dHTC) capacity shared by numerous national labs, universities, and non-profits, with the ability to seamlessly integrate cloud resources, too. The OSG Connect service makes this large distributed system available to researchers, who can individually use up to tens of thousands of CPU cores and up to hundreds of GPUs, along with significant support from the OSG team. Ideal work includes parameter optimization/sweeps, molecular docking, image processing, many bioinformatics tasks, and other work that can run as numerous independent tasks each needing 1-8 CPU cores, <8 GB RAM, and <10GB input or output data, though these can be exceeded significantly by integrating cloud resources and other clusters, including many of those contributing to the COVID-19 HPC Consortium.

Cheyenne | 5.34 PF, 4032 nodes, Intel Xeon

Operated by the National Center for Atmospheric Research (NCAR), Cheyenne is a critical tool for researchers across the country studying climate change, severe weather, geomagnetic storms, seismic activity, air quality, wildfires, and other important geoscience topics. The Cheyenne environment also encompasses tens of petabytes of storage capacity and an analysis cluster to support efficient workflows. Built by SGI (now HPE), Cheyenne is funded by the Geosciences directorate of the National Science Foundation.

Blue Waters | 13.34 PF, 26,864 nodes, AMD Interlagos, NVIDIA Kepler K20X GPU

The Blue Waters sustained-petascale computing project is supported by the National Science Foundation, the State of Illinois, the University of Illinois and the National Geospatial-Intelligence Agency. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications, and the system is provided by Cray. Blue Waters is a well-balanced architecture with 22,636 XE6 nodes, each with two x86-compatible 16-core AMD Interlagos CPUs, and 4,228 XK7 nodes, each with an NVIDIA Kepler K20X GPU and a 16-core AMD Interlagos CPU. The system is integrated with a single high-speed Gemini 24x24x24 torus with an aggregate bandwidth of 265+ TB/s that simultaneously supports very large-scale parallel applications and high-throughput, many-job workloads. Blue Waters has a 36 PB (26 PB usable) shared Lustre file system that supports 1.1 TB/s of I/O bandwidth and is connected to wide-area networks at a total of 450 Gbps. The rich system software includes multiple compilers, communication libraries, software and visualization tools, Docker containers, Python, and machine learning and data management methods that support capability and capacity simulation, data-intensive science, visualization, data analysis, and machine learning/AI. All projects are provided with expert points of contact and advanced application support.  

 

NASA High-End Computing Capability

NASA Supercomputing Systems | 19.13 PF, 15800 nodes Intel x86

NASA's High-End Computing Capability (HECC) Portfolio provides world-class high-end computing, storage, and associated services to enable NASA-sponsored scientists and engineers supporting NASA programs to broadly and productively employ large-scale modeling, simulation, and analysis to achieve successful mission outcomes.

NASA's Ames Research Center in Silicon Valley hosts the agency's most powerful supercomputing facilities. To help meet the COVID-19 challenge facing the nation and the world, HECC is offering access to NASA's high-performance computing (HPC) resources for researchers requiring HPC to support their efforts to combat this virus. 

 

NASA Supercomputing Systems | 19.39 PF, 17609 nodes Intel Xeon

AITKEN | 3.69 PF, 1,152 nodes, Intel Xeon
ELECTRA | 8.32 PF, 3,456 nodes, Intel Xeon
PLEIADES | 7.09 PF, 11,207 nodes, Intel Xeon, NVIDIA K40, Volta GPUs
ENDEAVOR | 32 TF, 2 nodes, Intel Xeon
MEROPE | 253 TF, 1792 nodes, Intel Xeon

Amazon Web Services
As part of the COVID-19 HPC Consortium, AWS is offering research institutions and companies technical support and promotional credits for the use of AWS services to advance research on diagnosis, treatment, and vaccine studies to accelerate our collective understanding of the novel coronavirus (COVID-19). Researchers and scientists working on time-critical projects can use AWS to instantly access virtually unlimited infrastructure capacity, and the latest technologies in compute, storage and networking to accelerate time to results. Learn more here.
Microsoft Azure High Performance Computing (HPC)

Microsoft Azure offers purpose-built compute and storage specifically designed to handle the most demanding computationally and data intensive scientific workflows. Azure is optimized for applications such as genomics, precision medicine and clinical trials in life sciences.  

Our team of HPC experts and AI for Health data science experts, whose mission is to improve the health of people and communities worldwide, are available to collaborate with COVID-19 researchers as they tackle this critical challenge. More broadly, Microsoft's research scientists across the world, spanning computer science, biology, medicine, and public health, will be available to provide advice and collaborate per mutual interest.

Azure HPC helps improve the efficiency of the drug development process with power and scale for computationally intensive stochastic modeling and simulation workloads, such as population pharmacokinetic and pharmacokinetic-pharmacodynamic modeling.

Microsoft will give access to our Azure cloud and HPC capabilities.

HPC-optimized and AI-optimized virtual machines (VMs)

  • Memory BW Intensive CPUs: Azure HBv2 Instances (AMD EPYC™ 7002-series | 4 GB RAM per core | 200 Gb/s HDR InfiniBand)

  • Compute Intensive CPUs: Azure HC Instances (Intel Xeon Platinum 8168 | 8 GB RAM per core | 100 Gb/s EDR InfiniBand)

  • GPU Intensive RDMA connected: Azure NDv2 Instances (8 NVIDIA V100 Tensor Core GPUs with NVIDIA NVLink interconnected GPUs | 32 GB RAM each | 40 non-hyperthreaded Intel Xeon Platinum 8168 processor cores | 100 Gb/s EDR InfiniBand)

  • See the full list of HPC-optimized VMs (H-Series, NC-Series, and ND-Series)

 

Storage Options:

  • Azure HPC Cache | Azure NetApp Files | Azure Blob Storage | Cray ClusterStor

 

Management:

  • Azure CycleCloud

 

Batch scheduler

 

Azure HPC life sciences: https://azure.microsoft.com/en-us/solutions/high-performance-computing/health-and-life-sciences/#features

Azure HPC web site: https://azure.microsoft.com/en-us/
AI for Health web site: https://www.microsoft.com/en-us/ai/ai-for-health

Hewlett Packard Enterprise
As part of this new effort to attack the novel coronavirus (COVID-19) pandemic, Hewlett Packard Enterprise is committing to providing supercomputing software and applications expertise free of charge to help researchers port, run, and optimize essential applications. Our HPE Artificial Intelligence (AI) experts are collaborating to support the COVID-19 Open Research Dataset and several other COVID-19 initiatives for which AI can drive critical breakthroughs. They will develop AI tools to mine data across thousands of scholarly articles related to COVID-19 and related coronaviruses to help the medical community develop answers to high-priority scientific questions. We encourage researchers to submit any COVID-19 related proposals to the consortium's online portal. More information can be found here: www.hpe.com/us/en/about/covid19/hpc-consortium.html.
Google

Transform research data into valuable insights and conduct large-scale analyses with the power of Google Cloud. As part of the COVID-19 HPC Consortium, Google is providing access to Google Cloud HPC resources for academic researchers.

 

BP

BP's Center for High Performance Computing (CHPC), located at their US headquarters in Houston, serves as a worldwide hub for processing and managing huge amounts of geophysical data from across BP's portfolio and is a key tool in helping scientists to ‘see' more clearly what lies beneath the earth's surface. The high performance computing team is made up of people with deep skills in computational science, applied math, software engineering and systems administration. BP's biosciences research team includes computational and molecular biologists, with expertise in software tools for bioinformatics, microbial genomics, computational enzyme design and metabolic modeling.

To help meet the COVID-19 challenge facing the nation, BP is offering access to the CHPC, the high performance computing team and the biosciences research team to support researchers in their efforts to combat the virus. BP's computing capabilities include:

  • Almost 7,000 HPE compute servers with Intel Cascade Lake AP, Skylake, Knights Landing and Haswell processors
  • Over 300,000 cores
  • 16.3 petaflops (quadrillion calculations, or floating-point operations, per second)
  • Over 40 Petabytes of storage capacity
  • Mellanox IB and Intel OPA high speed interconnects
NVIDIA

A task force of NVIDIA researchers and data scientists with expertise in AI and HPC will help optimize research projects on the Consortium's supercomputers. The NVIDIA team has expertise across a variety of domains, including AI, supercomputing, drug discovery, molecular dynamics, genomics, medical imaging and data analytics. NVIDIA will also contribute the packaging of software for relevant AI and life-sciences software applications through NVIDIA NGC, a hub for GPU-accelerated software. The company is also providing compute time on an AI supercomputer, SaturnV.

 

D. E. Shaw Research Anton 2 at PSC

Operated by the Pittsburgh Supercomputing Center (PSC) with support from National Institutes of Health award R01GM116961, Anton 2 is a special-purpose supercomputer for molecular dynamics (MD) simulations developed and provided without cost by D. E. Shaw Research. For more information, see https://www.psc.edu/resources/anton/anton-2-covid/.

Intel

Intel will provide HPC /AI and HLS subject matter experts and engineers to collaborate on COVID-19 code enhancements to benefit the community. Intel will also provide licenses for High Performance Computing software development tools for the research programs selected by the COVID-19 HPC Consortium. The integrated tool suites include Intel's C++ and Fortran Compilers, performance libraries, and performance-analysis tools.

Ohio Supercomputer Center

The Ohio Supercomputer Center (OSC), a member of the Ohio Technology Consortium of the Ohio Department of Higher Education, addresses the rising computational demands of academic and industrial research communities by providing a robust shared infrastructure and proven expertise in advanced modeling, simulation and analysis. OSC empowers scientists with the vital resources essential to make extraordinary discoveries and innovations, partners with businesses and industry to leverage computational science as a competitive force in the global knowledge economy, and leads efforts to equip the workforce with the key technology skills required to secure 21st century jobs. For more, visit www.osc.edu

  • OSC Owens | 1.6 PF, 824 nodes Intel Xeon/Pascal
    • 2 x Intel Xeon (28 cores per node, 48 cores per big-mem node)
    • 160 NVIDIA P100 GPUs (1 per node)
    • 128 GB per node (1.5TB per big-mem node)
    • Mellanox EDR Infiniband
    • 12.5 PB Project and Scratch storage
  • OSC Pitzer | 1.3 PF, 260 nodes Intel Xeon/Volta
    • 2 x Intel Xeon (40 cores per node, 80 cores per big-mem node)
    • 64 NVIDIA V100 GPUs (2 per node)
    • 192 GB per node (384 GB per GPU node, 3TB per big-mem node)
    • Mellanox EDR Infiniband
    • 12.5 PB Project and Scratch storage
Dell

Zenith

The Zenith cluster is the result of a partnership between Dell and Intel®. On the TOP500 list of the fastest supercomputers in the world, Zenith includes Intel Xeon® Scalable processors, Omni‑Path fabric architecture, data center storage solutions, FPGAs, adapters, software and tools. Projects underway include image classification to identify disease in X-rays, matching MRI scans to thoughts and actions, and building faster neural networks to drive recommendation engines. Zenith is available to researchers via the COVID-19 HPC Consortium through the standard application process, subject to availability.

Zenith Configuration:

  • Servers:
    • 422 PowerEdge C6420 servers  
    • 160x PowerEdge C6320p servers
    • 4 PowerEdge R740 servers with Intel – FPGAs
  • Processors
    • 2nd generation Intel Xeon Scalable processors
    • Intel Xeon Phi™
  • Memory
    • 192GB at 2,933MHz per node (Xeon Gold)
    • 96GB at 2,400MHz per node (Xeon Phi)
  • Operating System: Red Hat® Enterprise Linux® 7
  • Host channel adapter (HCA) card: Intel Omni‑Path Host Fabric Interface
  • Storage
    • 2.68PB Ready Architecture for HPC Lustre Storage
    • 480TB Ready Solutions for HPC NFS Storage
    • 174TB Isilon F800 all-flash NAS storage
UK Digital Research Infrastructure
The UK Digital Research Infrastructure consists of advanced computing systems from academic institutions and UK government agencies with a wide range of capabilities and capacities. Expertise in porting, developing and testing software is also available from the research software engineers (RSEs) supporting the systems.

Specific technical details on the systems available:

  • ARCHER | 4920 nodes (118,080 cores), two 2.7 GHz, 12-core Intel Xeon E5-2697 v2 per node. 4544 nodes with 64 GB memory nodes and 376 with 128 GB. Cray Aries interconnect. 4.4 PB high-performance storage.
  • Cirrus | 280 nodes (10080 cores), two 2.1 GHz, 18-core Intel Xeon E5-2695 per node. 256 GB memory per node; 2 GPU nodes each containing two 2.4 GHz, 20-core Intel Xeon 6148 processors and four NVIDIA Tesla V100-PCIE-16GB GPU accelerators. Mellanox FDR interconnect. 144 NVIDIA V100 GPUs in 36 Plainfield blades (2 Intel Cascade Lake processors and 4 GPUs per node).
  • DiRAC Data Intensive Service (Cambridge) | 484 nodes (15488 cores), two Intel Xeon 6142 per node, 192 GB or 384 GB memory per node; 11 nodes with 4x Nvidia P100 GPUs and 96 GB memory per node; 342 nodes of Intel Xeon Phi with 96 GB memory per node.
  • DiRAC Data Intensive Service (Leicester) | 408 nodes (14688 cores), two Intel Xeon 6140 per node, 192 GB memory per node; 1x 6 TB RAM server with 144 Intel Xeon 6154 cores; 3x 1.5TB RAM servers with 36 Intel Xeon 6140 cores;  64 nodes (4096 cores) Arm ThunderX2 with 128 GB RAM/node.
  • DiRAC Extreme Scaling Service (Edinburgh) | 1468 nodes (35,232 cores), two Intel Xeon 4116 per node, 96 GB RAM/node. Dual rail Intel OPA interconnect.
  • DiRAC Memory Intensive Service (Durham) | 452 nodes (12,656 cores), two Intel Xeon 5120 per node, 512 GB RAM/node, 440TB flash volume for checkpointing.
  • Isambard | 332 nodes (21,248 cores), two Arm-based Marvell ThunderX2 32 core 2.1 GHz per node. 256 GB memory per node. Cray Aries interconnect. 75 TB high-performance storage.
  • JADE | 22x Nvidia DGX-1V nodes with 8x Nvidia V100 16GB and 2x 20 core Intel Xeon E5-2698 per node.
  • MMM Hub (Thomas) | 700 nodes (17000 cores), 2x 12 core Intel Xeon E5-2650v4 2.1 GHz per node, 128 GB RAM/node.
  • NI-HPC | 60x Dell PowerEdge R6525, two AMD Rome 64-core 7702 per node. 768GB RAM/node; 4x Dell PowerEdge R6525 with 2TB RAM; 8 x Dell DSS8440 (each with 2x Intel Xeon 8168 24-core). Provides 32x Nvidia Tesla V100 32GB.
  • XCK | 96 nodes (6144 cores), one 1.3 GHz, 64-core Intel Xeon Phi 7320 per node + 20 nodes (640 cores), two 2.3 GHz, 16-core Intel Xeon E5-2698 v3 per node. 16 GB fast memory + 96 GB DDR per Xeon Phi node, 128 GB per Xeon node. Cray Aries interconnect. 9 TB of DataWarp storage and 650 TB of high-performance storage.
  • XCS | 6720 nodes (241,920 cores), two Intel Xeon 2.1 GHz, 18-core E5-2695 v4 series per node. All with 128 GB RAM/node. Cray Aries interconnect. 11 PB of high-performance storage.
CSCS – Swiss National Supercomputing Centre
CSCS Piz Daint | 27 PF, 5704 nodes, Cray XC50/NVIDIA Pascal
Xeon E5-2690v3 12C 2.6GHz 64GB RAM
NVIDIA® Tesla® P100 16GB
Aries interconnect
Swedish National Infrastructure for Computing (SNIC)

The Swedish National Infrastructure for Computing is a national research infrastructure that makes resources for large scale computation and data storage available, as well as provides advanced user support to make efficient use of the SNIC resources.

Beskow | 2.5 PF, 2060 nodes, Intel Haswell & Broadwell.

Funded by the Swedish National Infrastructure for Computing and operated by the PDC Center for High-Performance Computing at the KTH Royal Institute of Technology in Stockholm, Sweden, Beskow supports capability computing and simulations in the form of wide jobs. Attached to Beskow is a 5 PB Lustre file system from DataDirect Networks. Beskow is also a Tier-1 resource in the PRACE European project.

Beskow is built by Cray, Intel and DataDirect Networks.

Korea Institute of Science and Technology Information (KISTI)

The Korea Institute of Science and Technology Information (KISTI) serves as the national supercomputing center of Korea, providing supercomputing and high-performance research networking facilities to Korean researchers. With the hope of helping to accelerate understanding of the COVID-19 virus for the development of treatments and vaccines, KISTI intends to contribute to the COVID-19 HPC Consortium by offering access to the KISTI-5 supercomputer, Nurion. KISTI also gained substantial experience in the era of Grid computing by participating in the European WISDOM project, a grid-enabled drug discovery initiative against malaria, in which two teams in Korea joined the collaboration by (1) offering computing resources along with relevant technology and (2) providing in-vitro testing to the initiative, respectively. KISTI still maintains the technology (http://htcaas.kisti.re.kr/wiki) to facilitate large-scale virtual screening experiments that identify small-molecule drug candidates on top of multiple computing platforms.

Nurion | 25.7PF, 8437 nodes, Cray CS500

  • 8305 nodes Intel Xeon Phi 7250 (KNL) 68C 1.4GHz
    • 96GB DDR4 and 16 GB MCDRAM memory per KNL node
  • 132 nodes 2 x Intel Xeon 6148 (Skylake) 20C 2.4GHz
    • 192GB DDR4 memory per Skylake node
  • 0.8PB DDN IME flash storage (burst buffer)
  • 20PB Lustre Filesystem
  • 10PB IBM TS 4500 Tape Storage 
  • Intel Omni-Path, Fat-Tree, 50% Blocking
Ministry of Education, Culture, Sports, Science and Technology(MEXT)-JAPAN

The supercomputer Fugaku is Japan's flagship supercomputer, developed mainly through a collaboration between the RIKEN Center for Computational Science (R-CCS) and Fujitsu, and scheduled to be commissioned for operation in 2021; however, portions of its resources are being deployed a year in advance to combat COVID-19. The technical specifications of Fugaku are as follows:

Fugaku | 89 PF, 26,496 nodes

  • Total # Nodes: 158,976 nodes
  • Processor core ISA: Arm (Aarch64 v8 + 512 bit SVE)
    • 48 + 2 or 4 cores per CPU chip, one CPU chip per node
  • ~400Gbps Tofu-D interconnect.
  • Theoretical Peak Compute Performances: Boost Mode (CPU Frequency 2.2GHz)
    • 64 bit Double Precision FP: 537 Petaflops
    • 32 bit Single Precision FP: 1.07 Exaflops
    • 16 bit Half Precision FP (AI training): 2.15 Exaflops
    • 8 bit Integer (AI Inference): 4.30 Exaops
  • Theoretical Peak Memory Bandwidth: 163 Petabytes/s
  • Approximately 150 Petabytes of Lustre storage
  • System software: Red Hat Enterprise Linux, all standard programming languages, optimized numerical libraries, MPI, OpenMP, TensorFlow/PyTorch, etc.


For details refer to:
https://postk-web.r-ccs.riken.jp/spec.html
https://www.fujitsu.com/global/about/innovation/fugaku/specifications/

Consortium Affiliates provide a range of computing services and expertise that can enhance and accelerate research for fighting COVID-19. Matched proposals will have access to resources and help from Consortium Affiliates, provided free of charge, enabling rapid and efficient execution of complex computational research programs.

Atrio | [Affiliate]

Atrio will assist researchers studying COVID-19 in creating and optimizing the performance of application containers (e.g., the CryoEM processing application suite), as well as in the performance-optimized deployment of those containers onto any of the HPC Consortium members' computational platforms, specifically onto high-performing GPU and CPU resources. Our proposal is twofold: one part is additional computational resources, and another, equally important, part is support for COVID-19 researchers with an easy way to access and use HPC Consortium computational resources. That support consists of creating application containers for researchers, optimizing their performance, and an optional multi-site container and cluster management software toolset.

Data Expedition Inc | [Affiliate]

Data Expedition, Inc. (DEI) is offering free licenses of its easy-to-use ExpeDat and CloudDat accelerated data transport software to researchers studying COVID-19. This software transfers files ranging from megabytes to terabytes from storage to storage, across wide area networks, among research institutions, cloud providers, and personal computers at speeds many times faster than traditional software. It is available immediately for an initial 90-day license. Requests to extend licenses will be evaluated on a case-by-case basis to facilitate continuing research.

Flatiron | [Affiliate]

The Flatiron Institute is a multi-disciplinary science lab with 50 scientists in computational biology. Flatiron is pleased to offer 3.5M core-hours per month on our modern HPC system and 5M core-hours per month on Gordon, our older HPC facility at SDSC.

Fluid Numerics | [Affiliate]

Fluid Numerics' Slurm-GCP deployment leverages Google Compute Engine resources and the Slurm job scheduler to execute high performance computing (HPC) and high throughput computing (HTC) workloads. Our system is currently capable of roughly 6 petaflops, but please keep in mind this is a quota-bound metric that can be adjusted if needed. We intend to provide onboarding and remote system administration resources for the fluid-slurm-gcp HPC cluster solution on Google Cloud Platform. We will help researchers leverage GCP for COVID-19 research by assisting with software installation and porting, user training, consulting, and coaching, and general GCP administration, including quota requests, identity and access management, and security compliance.

SAS | [Affiliate]

SAS is offering licensed access to the SAS Viya platform and project-based data science resources. SAS-provided resources will be tailored to the requirements of the selected COVID-19 project use case. SAS expects a typical project engagement to require one to two data scientists, a project manager, a data preparation specialist, and potentially a visualization expert.

Raptor Computing Systems, LLC | [Affiliate]

Our main focus for this membership, though, is developer systems: we offer a wide variety of desktop and workstation systems built on the POWER architecture. These run Linux, support NVIDIA GPUs, and provide an application development environment for targeting the larger supercomputers. This is the main focus of our support effort. We can provide these machines free of charge (up to a reasonable limit) to the COVID-19 effort, freeing up supercomputer and high-end HPC server time that would otherwise be allocated to developing and testing the algorithms and software in use.

MathWorks | [Affiliate]

MathWorks will help researchers studying COVID-19 to scale their parallel MATLAB algorithms to the cloud and to HPC resources provided by this HPC Consortium. Our offering includes:

  • Free access to MATLAB® and Simulink® on the allocated computing resources.
  • Support for parallelizing and scaling researchers' algorithms.

As an example, see Ventilator research at Duke University.

The HDF Group | [Affiliate]

The HDF Group helps scientists use open source HDF5 effectively, including offering general usage and performance tuning advice, and helping to troubleshoot any issues that arise. Our engineers will be available to assist you in applying HPC and HDF® technologies together for your COVID-19 research.

Immortal Hyperscale InterPlanetary Fabrics | [Affiliate]

Immortal is providing licensed access to components of its platform to organizations that are (a) investigating the nature of the COVID-19 virus, (b) developing products for therapeutic breakthroughs, and (c) conducting R&D to build COVID-19 vaccines. Immortal's platform aggregates and orchestrates large volumes of applications, services, data, and resources across multiple clouds, multiple supercomputers, or a combination of the two. The platform is suitable for organizations working on problems that need computation and data management at scales of hundreds of petaflops and hundreds of petabytes.

Acknowledging Support

Papers, presentations, and other publications featuring work that was supported, at least in part, by the resources, services, and support provided via the COVID-19 HPC Consortium are expected to acknowledge that support. Please include the following acknowledgement:

This work used resources, services, and support provided via the COVID-19 HPC Consortium (https://covid19-hpc-consortium.org/), which is a unique private-public effort to bring together government, industry, and academic leaders who are volunteering free compute time and resources in support of COVID-19 research.

 
 

(Revised 23 September 2021)

Key Points
  • Computing resources utilized in research against COVID-19
  • Scientists nationwide encouraged to use computing resources
  • How and where to find computing resources
  • Contact information

Domain Champions

Domain Champions are part of Campus Champions along with Regional and Student Champions


Domain Champions act as ambassadors by spreading the word about what XSEDE can do to boost the advancement of their field, based on their personal experience, and to connect interested colleagues to the right people/resources in the XSEDE community (XSEDE Extended Collaborative Support Services (ECSS) staff, Campus Champions, documentation/training, helpdesk, etc.). Domain Champions work within their discipline, rather than within a geographic or institutional territory.

The table below lists our current domain champions. We are very interested in adding new domains as well as additional champions for each domain. Please contact domain-info@xsede.org if you are interested in a discussion with a current domain champion, or in becoming a domain champion yourself.

DOMAIN | CHAMPION | INSTITUTION
Astrophysics, Aerospace, and Planetary Science | Matthew Route | Purdue University
Data Analysis | Rob Kooper | University of Illinois
Finance | Mao Ye | University of Illinois
Molecular Dynamics | Tom Cheatham | University of Utah
Genomics | Brian Couger | Oklahoma State University
Digital Humanities | Michael Simeone | Arizona State University
Genomics and Biological Field Stations | Thomas Doak, Sheri Sanders | Indiana University, National Center for Genome Analysis Support
Chemistry and Material Science | Sudhakar Pamidighantam | Indiana University
Fluid Dynamics & Multi-phase Flows | Amit Amritkar | University of Houston
Chemistry | Christopher J. Fennell | Oklahoma State University
Geographic Information Systems | Eric Shook | University of Minnesota
Biomedicine | Kevin Clark | Cures Within Reach


Last Updated: April 7, 2021


XSEDE allocations on TACC's Stampede2, Ranch systems help develop most detailed model yet of mitochondrial protein-protein complex

Supercomputer simulations have revealed for the first time how the cell's mitochondrial voltage-dependent anion channel (VDAC) binds to the enzyme hexokinase-II (HKII). Artistic depiction of membrane binding of the cytosolic enzyme hexokinase (light blue), followed by its complex formation with the integral membrane protein VDAC (dark blue) at the surface of the outer membrane of mitochondria. HKII uses ATP (red) to phosphorylate glucose. This basic research will help researchers understand the molecular basis of diseases such as cancer. Credit: Haloi, N., Wen, PC., Cheng, Q. et al.

It takes two to tango, as the saying goes.

This is especially true for scientists studying the details of how cells work. Protein molecules inside a cell interact with other proteins, and in a sense the proteins dance with a partner to respond to signals and regulate each other's activities.

Crucial to giving cells energy for life is the migration of a compound called adenosine triphosphate (ATP) out of the cell's powerhouse, the mitochondria. And critical for this flow out to the power-hungry parts of the cell is the interaction between a protein enzyme called hexokinase-II (HKII) and proteins in the voltage-dependent anion channel (VDAC) found on the outer membrane of the mitochondria.

Supercomputer simulations have revealed for the first time how VDAC binds to HKII, supported by allocations awarded by the Extreme Science and Engineering Discovery Environment (XSEDE) on the Stampede2 system of the Texas Advanced Computing Center (TACC).

 

Why It's Important

(Left to Right) Po-Chao Wen, Nandan Haloi, and Emad Tajkhorshid of the University of Illinois at Urbana-Champaign. 

This basic research into how proteins interact at the surface of the cell's powerhouses, the mitochondria, will help researchers understand the molecular basis of diseases such as cancer.

"We had strong evidence that they bind, but we didn't know how they bind to each other," said Emad Tajkhorshid, the J. Woodland Hastings Endowed Chair in Biochemistry at the University of Illinois at Urbana-Champaign "That was the million-dollar question."

Tajkhorshid co-authored a study published in Nature Communications Biology, June 2021. The study found that when the enzyme and the channel proteins bind to each other, the conduction of the channel changes and partially blocks the flow of ATP.

This work has implications for a deeper understanding of not only healthy cells, but cancer cells.

Basically, a cell needs ATP to metabolize glucose. It uses the 'P' (phosphate) to convert glucose to glucose phosphate, giving it a 'handle' that the cell can use. Hexokinase-II makes the conversion happen, binding at the mitochondrial channel to gobble up ATP and phosphorylate glucose.
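
In reaction terms (standard biochemistry, not spelled out in the article), the conversion hexokinase-II catalyzes is:

\[ \text{glucose} + \text{ATP} \;\xrightarrow{\ \text{HKII}\ }\; \text{glucose-6-phosphate} + \text{ADP} \]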

"We showed how the phosphorylation affects this process of binding between the two proteins. That was also verified experimentally," Tajkhorshid said.

The VDAC channel is critical for efficient delivery of ATP directly to hexokinase. "It can work like a double-edged sword. For a healthy cell it's good. For a cancer cell, it also helps the cell to promote and to proliferate," he said.

 

How XSEDE Helped

Simulations on supercomputers revealed this binding, supported by allocations awarded by XSEDE on TACC's Stampede2 system.

(Animation)  Movie representative of the Brownian dynamics simulation trajectory capturing the formation of the Hexokinase (teal)/VDAC (yellow) complex. Credit: Haloi, N., Wen, PC., Cheng, Q. et al.

What's more, the XSEDE-allocated Ranch system at TACC holds the off-site permanent file storage for the study's data.

"If it wasn't for XSEDE, we wouldn't be studying many of these complex projects and biological systems because you simply can't afford running the simulation. They usually require long simulations, and we need multiple copies of these simulations to be scientifically convincing. Without XSEDE it is impossible. We would have to go back to studying smaller systems," Tajkhorshid said.

Tajkhorshid's team developed the most detailed and sophisticated model yet of the complex formed by the binding of HKII and VDAC, combining highest-resolution all-atom molecular dynamics simulations with coarser Brownian dynamics techniques. The system size of the VDAC-HKII complex was about 700,000 atoms, including the membrane. It's about one-fifth of the diameter of the COVID-19 virus.

"What stands out in our approach is that we actually considered the cellular background of this interaction," said Po-Chao Wen, a post-doctoral research associate at the NIH Center for Macromolecular Modeling and Bioinformatics, University of Illinois at Urbana-Champaign.

Binding of membrane-inserted HKII-N to VDAC1 and their complex formation. A snapshot of membrane-embedded HKII/VDAC1 complex 1 (HKV1) during the full-membrane MD simulation. Credit: Haloi, N., Wen, PC., Cheng, Q. et al.

Wen explained that their simulation design started from the question of whether and how the VDAC protein in the outer membrane could interact with HKII, which is localized in a different part of the cell, the cytosol. They hypothesized that HKII first binds to the membrane and then drifts along it until it reaches a VDAC protein.

VDAC sitting in the membrane has already been well modeled, and the researchers built on this knowledge to break the modeling of the HKII-VDAC complex into three parts, focusing initially on HKII.

To study how HKII binds to the mitochondrial outer membrane, they used all-atom molecular dynamics and a tool developed by their center called the Highly Mobile Membrane Model (HMMM), which deals with the membrane interaction.

They next used Brownian dynamics to study how HKII drifts on the membrane to meet VDAC, creating many encounter/collision events between a sitting VDAC and a drifting HKII on a planar membrane.
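
The study used dedicated Brownian dynamics software for this stage; purely as an illustrative toy (not the authors' pipeline), the Python sketch below shows the basic idea of a particle diffusing on a plane until it generates an encounter event with a stationary target. All parameter values are hypothetical.

    # Toy 2D Brownian dynamics on a membrane plane (illustrative only).
    # A diffusing "HKII" walker takes Gaussian steps until it comes within an
    # encounter radius of a stationary "VDAC". All numbers are hypothetical.
    import math
    import random

    def first_encounter_time(diff_nm2_per_ns=0.1, dt_ns=1.0,
                             start_xy=(50.0, 0.0), target_xy=(0.0, 0.0),
                             encounter_radius_nm=5.0, max_steps=5_000_000):
        x, y = start_xy
        sigma = math.sqrt(2.0 * diff_nm2_per_ns * dt_ns)  # per-axis step size
        for step in range(1, max_steps + 1):
            x += random.gauss(0.0, sigma)
            y += random.gauss(0.0, sigma)
            if math.hypot(x - target_xy[0], y - target_xy[1]) < encounter_radius_nm:
                return step * dt_ns  # time of first encounter, in ns
        return None  # no encounter within the simulated window

    print(first_encounter_time())

In the study's workflow, many such encounter events between VDAC and HKII were generated and then handed to all-atom refinement, as described next.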

"Then we used all-atom molecular dynamics to get a more refined model and specific size of the interaction to look for this particular protein-protein interaction," Wen added. This helped them find the most stable complex of the two proteins formed.

"It seemed almost impossible when we started this process, because of the long timescales of milliseconds to seconds of the all-atom simulations," said study co-author Nandan Haloi, a PhD student also at the center. 

Many other computational science tools have been developed by the group, including the widely used NAMD code for molecular dynamics.

"These are really expensive calculations, which would require millions of dollars to set up independently. And you need to run on parallel supercomputers using our NAMD code, otherwise we could not reach the time scales that we needed," Tajkhorshid said.

If it wasn't for XSEDE, we wouldn't be studying many of these complex projects and biological systems because you simply can't afford running the simulation. They usually require long simulations, and we need multiple copies of these simulations to be scientifically convincing. Without XSEDE it is impossible. We would have to go back to studying smaller systems. -- Emad Tajkhorshid, J. Woodland Hastings Endowed Chair in Biochemistry, University of Illinois at Urbana-Champaign.

"We're extremely happy with TACC and their support for not only just this work, but most of our projects and also our software development, tuning the software and making it faster. TACC has been wonderful in supporting us," Tajkhorshid said.

TACC scientists work with the NIH Center for Macromolecular Modeling and Bioinformatics to constantly optimize the NAMD software, currently used by thousands of researchers.

The next steps in the research include more ambitious systems such as the fusion of two cells, important in understanding how neurons in the brain fire signals to each other; and how a virus such as the novel coronavirus fuses to the host cell.

Tajkhorshid's group was awarded a Leadership Resource Allocation on the NSF-funded flagship supercomputer Frontera (an XSEDE L2 SP) at TACC to investigate some of these ambitious projects.

Said Tajkhorshid: "We like to look at our work as a computational microscope that allows one to look at molecular systems and processes, how molecules come together, how they move, and how they change their structure to accomplish a particular function that people have been indirectly measuring experimentally. Supercomputers are essential in providing this level of detail, which we can use to understand the molecular basis of diseases, drug discovery, and more."

The study, "Structural basis of complex formation between mitochondrial anion channel VDAC1 and Hexokinase-II," was published June 2021 in the journal Nature Communications Biology. The study authors are Nandan Haloi, Po-Chao Wen, and Emad Tajkhorshid of the University of Illinois at Urbana-Champaign; Qunli Cheng, Meiying Yang, Gayathri Natarajan, Amadou K. S. Camara, and Wai-Meng Kwok of the Medical College of Wisconsin.

This research was supported by National Institutes of Health grants R01-HL131673 and P41-GM104601. Simulations in this study were performed using allocations at National Science Foundation supercomputing centers (XSEDE grant number MCA06N060) and the Blue Waters supercomputer of the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.

At a Glance

  • Supercomputer simulations have revealed for the first time how the cell's mitochondrial voltage-dependent anion channel (VDAC) binds to the enzyme hexokinase-II (HKII).
  • XSEDE awarded allocations on TACC's Stampede2 and Ranch systems for the molecular dynamics simulations and data storage of the 700,000-atom VDAC-HKII complex.
  • Basic research into how proteins interact at the surface of the mitochondria will help researchers understand the molecular basis of diseases such as cancer.