

Stampede2 and Ranch at TACC, allocated through XSEDE, help explore new solid-state lighting materials

By Jorge Salazar, Texas Advanced Computing Center (TACC)

Scientists are using supercomputers to gain insight into new materials that could make LED lighting even brighter and more affordable. New properties have been found in cubic III-nitride materials useful for next-generation solid-state lighting.

 

LED lamps are lighting up the world more and more. Global LED sales in residential lighting have risen from five percent of the market in 2013 to 40 percent in 2018, according to the International Energy Agency, and other sectors mirror these trends. Unmatched energy efficiency and sturdiness have made LED lights popular with consumers.

Scientists are currently using supercomputers to gain insight into the crystal structure of new materials that could make LED lighting even brighter and more affordable.

New properties have been found in a promising LED material for next-generation solid-state lighting. A January 2020 study in the chemistry journal ACS Omega revealed evidence pointing to a brighter future for cubic III-nitrides in photonic and electronic devices.

"The main finding is that next generation LEDs can, should, and will do better," said study co-author Can Bayram, an assistant professor of electrical and computer engineering at the University of Illinois at Urbana-Champaign. His motivation for studying cubic III-nitrides stems from the fact that today's LEDs lose much of their efficiency under the high current injection conditions necessary for general lighting.

Determination of band alignments in ternary III-nitrides. Element-projected electronic structure of (a) wz-AlN, (b) wz-GaN, (c) wz-InN, (d) zb-AlN, (e) zb-GaN, and (f) zb-InN. The red−light green colormap indicates an anion-like character, while the light green−blue colormap represents cation-like behavior. Credit: Tsai et al., ACS Omega 2020, 5, 8, 3917-3923

 

Band gaps and electron affinities of binary and ternary, wurtzite (wz-) and zincblende (zb-) III-nitrides were investigated using a unified hybrid density functional theory, and band offsets between wz- and zb- alloys were calculated using Anderson's electron affinity model. Credit: Tsai et al., ACS Omega 2020, 5, 8, 3917-3923
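For readers unfamiliar with the Anderson electron affinity model mentioned above, the sketch below states the rule in its standard textbook form; the notation is generic and is an assumption rather than the paper's own.

```latex
% Anderson's electron affinity rule in its standard textbook form
% (generic notation; not necessarily that of the ACS Omega paper).
% With the vacuum level E_vac as the common reference, material i has
%   E_{c,i} = E_vac - \chi_i             (conduction band edge)
%   E_{v,i} = E_vac - \chi_i - E_{g,i}   (valence band edge)
% so the band offsets at an A/B heterojunction follow by differencing:
\[
  \Delta E_c = \chi_B - \chi_A , \qquad
  \Delta E_v = \left(\chi_B + E_{g,B}\right) - \left(\chi_A + E_{g,A}\right),
\]
% where \chi_i is the electron affinity and E_{g,i} the band gap of material i.
```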

Bayram's lab builds newly discovered crystals atom by atom in real life as well as in their simulations so that they can correlate experiments with theory. "We need new materials that are scalable to be used for next generation lighting," Bayram said. "Searching for such materials in a timely and precise manner requires immense computational power."

"In this study we are exploring the fundamental properties of cubic-phase aluminum gallium indium nitride materials," Bayram said.

"To date, indium gallium nitride-based green LED research has been restricted to naturally occurring hexagonal-phase devices. Yet they are limited in power, efficiency, speed, and bandwidth, particularly when emitting the green color. This problem fueled our research. We found that cubic phase materials reduce the necessary indium content for green color emission by ten percent because of a lower bandgap. Also, they quadruple radiative recombination dynamics by virtue of their zero polarization," added study co-author and graduate student Yi-Chia Tsai.

Bayram describes the computational model used as "experimentally-corroborated." "The computed fundamental material properties are so accurate that the computational findings have an almost one-to-one match with the experimental ones," he said.

He explained that it's challenging to model compound semiconductors such as gallium nitride because they contain more than one element, unlike elemental semiconductors such as silicon or germanium. Modeling alloys of compound semiconductors, such as aluminum gallium nitride, is even more challenging because, as the saying goes, it's all about location, location, location. Relative atomic positions matter.

Left: Can Bayram, Assistant Professor, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign. Right: Yi-Chia Tsai, Ph.D. Candidate, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign.

"In a unit cell sketch of a crystallography class, Al and Ga atoms are interchangeable but not so in our computational research," Bayram explained.  That's because each atom and its relative position matter when you are simulating the unit cell, a small volume of the entire semiconductor material.

"We simulate the unit cell to save computational resources and use proper boundary conditions to infer the entire material properties. Thus, we had to simulate all possible unit cell combinations and infer accordingly — this approach gave the best computational matching to the experimental ones," Bayram said. Using this approach, they further explored new materials that have not yet been realized experimentally.

To overcome the computational challenges, Bayram and Tsai applied for and were awarded supercomputer allocations by the Extreme Science and Engineering Discovery Environment (XSEDE). XSEDE is a single virtual system funded by the National Science Foundation that scientists can use to interactively share computing resources, data, and expertise. XSEDE-allocated Stampede2 and Ranch systems at the Texas Advanced Computing Center supported Bayram's simulations and data storage.

The Stampede2 supercomputer at the Texas Advanced Computing Center is an allocated resource of the Extreme Science and Engineering Discovery Environment (XSEDE) funded by the National Science Foundation (NSF).
The Ranch long term archival data storage system at the Texas Advanced Computing Center is an allocated resource of the Extreme Science and Engineering Discovery Environment (XSEDE) funded by the National Science Foundation (NSF).

"XSEDE is a unique resource. We primarily use the state-of-the-art XSEDE hardware to enable material computations. First, I want to stress that XSEDE is an enabler. Without XSEDE, we could not perform this research. We started with Startup then Research allocation grants. XSEDE — over the last two years — provided us with Research allocations valued at nearly $20,000 as well. Once implemented, the outcome of our research will save billions of dollars annually in energy savings alone," Bayram said.

Bayram stressed that non-scientists can benefit from this basic research into prototype LED materials. "We all need lighting, now more than ever. We not only need lighting for seeing. We need it for horticulture. We need it for communication. We need it for medicine. One percent efficiency increase in general lighting will save us $6 billion annually. In financial terms alone, this is a million times return on investment," he said.

For any semiconductor device, scientists strive to understand the impurities within. The next stage in Bayram's research is to understand how impurities impact new materials and to explore how to dope the new material effectively. Through searching the most promising periodic table groups, he said they're looking for the best elemental dopants, which will eventually help the experimental realization of devices immensely.

Said Bayram: "Supercomputers are super-multipliers. They super-multiply fundamental research into mainstream industry. One measure of success comes when the research outcome promises a unique solution. A one-time investment of $20K into our computational quest will at least lead to $6 billion in savings annually. If not, meaning that the research outcome eliminates this material for further investigation, this early investment will help the industry save millions of dollars and research-hours. Our initial findings are quite promising, and regardless of the outcome the research will ultimately benefit society."

The study, "Band Alignments of Ternary Wurtzite and Zincblende III-Nitrides Investigated by Hybrid Density Functional Theory," was published in the journal ACS Omega on January 30, 2020. The study co-authors are Yi-Chia Tsai and Can Bayram, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign. This work is supported by the National Science Foundation Faculty Early Career Development (CAREER) Program under award number NSF-ECCS-16-52871. The authors acknowledge the computational resources allocated by the Extreme Science and Engineering Discovery Environment (XSEDE) with Nos. TG-DMR180050 and TG-DMR180075.

Story Highlights

  • New properties found in cubic III-nitride LED materials for next-generation solid-state lighting.
  • First report of the bandgaps, electron affinities, and band alignments of wz- and zb- III-nitrides using a unified HSE06 hybrid functional.
  • XSEDE-allocated Stampede2 and Ranch systems at the Texas Advanced Computing Center supported study's simulations and data storage.
  • Insight gained on the crystal structure of cubic III-nitrides could make LED lighting even brighter and more affordable.

Impact by XSEDE | July 2020
July 2020 | Science Highlights, Announcements & Upcoming Events
XSEDE helps the nation's most creative minds discover breakthroughs and solutions for some of the world's greatest scientific challenges. Through free, customized access to the National Science Foundation's advanced digital resources, consulting, training, and mentorship opportunities, XSEDE enables you to Discover More. Get started here.
Science Highlights
Star Crash
Artificial intelligence on XSEDE systems is key to speeding simulations of neutron star mergers
Collisions between neutron stars involve some of the most extreme physics in the Universe. The intense effects of vast matter density and magnetic fields make them computation-hungry to simulate. A team from the National Center for Supercomputing Applications (NCSA) used artificial intelligence on the advanced graphics-processing-unit (GPU) nodes of the XSEDE-allocated supercomputers Bridges at the Pittsburgh Supercomputing Center (PSC) as well as Stampede2 at the Texas Advanced Computing Center (TACC) to obtain a correction factor that will allow much faster, less detailed simulations to produce accurate predictions of these mergers.

The intense magnetic fields accompanying the movement of matter as neutron stars sweep past each other cause increasingly complicated turbulence that is computationally expensive to capture with standard simulation methods. Here, a deep learning AI provides a simulation of this process at a fraction of the computing time.
Every Calculation Stabs
XSEDE and other resources enable team to span time scales in simulating dagger-like microbe-killing molecule
Medical science is in a race to develop new and better antimicrobial agents to address infection and other human diseases. One promising example of such agents is the beta-defensins. These naturally occurring molecules stab microbes' outer membranes, dagger-like, causing their contents to spill out. A scientist at Tennessee Tech University used the XSEDE-allocated Bridges platform at PSC, as well as the D. E. Shaw Research (DESRES) Anton 2 system hosted at PSC, in a "one-two" simulation that shed light on a beta-defensin's initial binding to a microbial membrane. The work promises clues to agents that can better destroy microbes with membranes.

Assembly of cell membrane components (red) and human beta-defensin type 3 (blue) from first principles. As the simulation plays out, the membrane components form a double-layered membrane, seen side-on, and the peptide binds to it.
Program Announcements
Application Window for XSEDE EMPOWER Fall Internships and Mentorships Closing Soon
XSEDE EMPOWER (Expert Mentoring Producing Opportunities for Work, Education, and Research) provides undergraduate students with the opportunity to work on a variety of XSEDE projects, such as computational and/or data analytics research and education in all fields of study, networking, system maintenance and support, visualization, and more. Mentors help engage undergrads in the work of XSEDE. The EMPOWER program aims to enable a diverse group of students to participate in the actual work of XSEDE. To apply to mentor one or more students, create one or more positions by following the link below. If you also have a student in mind to work with, that student should also submit an application.  The deadline for mentors and students to apply for Fall 2020 participation is July 10, 2020.

Check out this video to learn more about XSEDE EMPOWER and what two recent interns have to say about the program.
Computing4Change Application Deadline Extended
For undergraduate students who want to enhance their skillset and create positive change in their community, XSEDE is accepting applications for Computing4Change (C4C). C4C is a competition for students from diverse disciplines and backgrounds who want to work collaboratively to learn to apply data analysis and computational thinking to a social challenge. It is currently planned to be co-located with SC20 in Atlanta, GA, November 14-20, 2020 (but may become virtual). The deadline to apply has been extended until July 19, 2020.

Community Announcements
Science Gateways Community Institute Offering Internship Opportunity
The Science Gateways Community Institute (SGCI) is now offering academic-year internships for students interested in developing their gateway development skills. Students will receive an hourly salary/stipend (determined by the internship host, based on qualifications) for up to 20 hours of work per week. Application reviews will begin on July 6 for fall placements. For more information, to apply, or to find out how to host an intern, see the link below.

PEARC Seeking Nominations for 2022 Conference Chair and Steering Committee
The Practice and Experience in Advanced Research Computing (PEARC) Conference seeks nominations for individuals to serve in the role of PEARC Steering Committee Members-at-Large and PEARC 2022 Conference General Chair.  Nominations will be accepted until July 12, 2020, at 11:59 pm Central Time.  All members of the community are eligible to serve in these roles. Individuals can self-nominate or nominate another person as a potential Steering Committee member or conference chair. Nomination details and obligations may be found at the link below.

2020 NSF Cybersecurity Summit CFP
The 2020 NSF Cybersecurity Summit hosted by Trusted CI, the NSF Cybersecurity Center of Excellence, will take place September 22–24, 2020 and will be hosted online. 

Attendees include cybersecurity practitioners, technical leaders, and risk owners from within the NSF Large Facilities and CI Community, as well as key stakeholders and thought leaders from the broader scientific and information security communities.

The Summit seeks proposals for presentations, breakout sessions, and training sessions, and offers opportunities for student scholarships. Those interested in presenting at the Summit should send submissions to CFP@trustedci.org by COB on Monday, July 13. The Summit organizers welcome proposals from all individuals and agencies.

Upcoming Dates and Deadlines

Student Champions

Campus Champions programs include Regional, Student, and Domain Champions.

 

Student Champions

Student Champion volunteer responsibilities vary from one institution to another and depend on the student's Campus Champion Mentor. Student Champions may work with their Mentor to provide outreach on campus to help users access the advanced computing resources best suited to their research goals, provide training to users on campus, or work on special projects assigned by their Mentor. Student Champions are also encouraged to attend the annual PEARC conference, participate in the PEARC student program, and submit posters or papers to the conference.

To join the Student Champions program, the Campus Champion who will serve as the student's mentor should send a message to info@campuschampions.org recommending the student for the program and confirming their willingness to be the student's mentor.

Questions? Email info@campuschampions.org.

 

INSTITUTION CHAMPION MENTOR FIELD OF STUDY DESIGNATION GRADUATION 
Alabama Agricultural & Mechanical University Georgianna Wright Damian Clarke Computer Science Undergraduate 2022
Boise State University Mike Henry Kyle Shannon Material Science Graduate 2020
Claremont McKenna College Zeyad Elkelani Jeho Park Political Science Graduate 2021
Dillard University Priscilla Saarah Tomekia Simeon Biology Undergraduate 2022
Dillard University Brian Desil Tomekia Simeon Physics Undergraduate 2021
Drexel University Cameron Fritz David Chin Computer Science Undergraduate 2023
Florida A&M University Rodolfo Tsuyoshi F. Kamikabeya Hongmei Chi Computer Information Science Graduate 2021
Florida A&M University Emon Nelson Hongmei Chi Computer Science Graduate
Georgia Institute of Technology Sebastian Kayhan Hollister Semir Sarajlic Computer Science  Undergraduate 2021
Georgia Institute of Technology Siddhartha Vemuri Semir Sarajlic Computer Science Undergraduate 2021
Georgia State University Kenneth Huang Suranga Naranjan   Graduate 2020
Georgia State University  Melchizedek Mashiku Suranga Naranjan Computer Science Undergraduate 2022
Howard University Christina McBean Marcus Alfred Physics & Mathematics Undergraduate 2021
Howard University Tamanna Joshi Marcus Alfred Condensed Matter Theory Graduate 2021
Indiana University Ashley Brooks Carrie Ganote Physics Graduate 2025
Iowa State University Justin Stanley Levi Barber Computer Science Undergraduate 2020
Johns Hopkins University Jodie Hoh Jaime Combariza, Anthony Kolasny, Kevin Manalo Computer Science Undergraduate 2022
Kansas State University Mohammed Tanash Dan Andresen Computer Science Graduate 2022
Massachusetts Green HPC Center Abigail Waters  Julie Ma Clinical Psychology Graduate 2022
Midwestern State University Broday Walker Eduardo Colmenares Computer Science Graduate 2020
New Jersey Institute of Technology Vatsal Shah Roman Voronov Mechanical Engineering  Undergraduate 2020
North Carolina State University Yuqing Du Lisa Lowe Statistics Graduate  2020
North Carolina State University Dheeraj Kalidindi Lisa Lowe Mechanical Engineering Undergraduate 2020
Northwestern University  Sajid Ali Alper Kinaci Applied Physics Graduate 2021
Pomona College Omar Zintan Mwinila-Yuori Asya Shklyar Computer Science Undergraduate  2022
Pomona College Samuel Millette Asya Shklyar Computer Science  Undergraduate   2023
Reed College Jiarong Li Trina Marmarelli Math-Computer Science Undergraduate 2021
Rensselaer Polytechnic Institute James Flamino Joel Geidt   Graduate 2022
Saint Louis University Frank Gerhard Schroer IV Eric Kaufmann Physics Undergraduate 2021
Southern Illinois University Majid Memari Chet Langin   Graduate 2021
Southern Illinois University Aaron Walber Chet Langin Physics   2020
Southern Illinois University Manvith Mali Chet Langin Computer Science Graduate 2021
Southwestern Oklahoma State University Kurtis D. Clark Jeremy Evert Computer Science Undergraduate 2020
Texas A&M University - College Station Logan Kunka Jian Tao Aerospace Engineering Graduate 2020
Texas Tech University Misha Ahmadian Tom Brown Computer Science Graduate  2022
The University of Tennessee at Chattanooga  Carson Woods Craig Tanis Computer Science Undergraduate 2021
University of Arizona Alexander Prescott Blake Joyce Geosciences Graduate 2021
University of Arkansas Timothy "Ryan" Rogers Jeff Pummill Physical Chemistry Graduate 2021
University of California - Merced Luanzheng Guo Sarvani Chadalapaka   Graduate 2020
University of Central Florida Amit Goel Paul Weigand      
University of Central Oklahoma Samuel Kelting Evan Lemley Mathematics/CS Undergraduate 2021
University of Delaware Parinaz Barakhshan Anita Schwartz Electrical and Computer Engineering Graduate 2024
University of Houston-Downtown Eashrak Zubair Hong Lin   Undergraduate 2020
University of Illinois at Chicago Babak Kashir Taloori Jon Komperda Mechanical Engineering Graduate 2020
University of Iowa Baylen Jacob Brus Ben Rogers Health Informatics Undergraduate 2020
University of Maine Michael Brady Butler Bruce Segee Physics/Computational Materials Science Graduate 2022
University of Michigan Daniel Kessler Shelly Johnson Statistics Graduate 2022
University of Missouri Ashkan Mirzaee Tim Middelkoop Industrial Engineering Graduate 2021
University of North Carolina Wilmington Cory Nichols Shrum Eddie Dunn      
University of South Dakota Adison Ann Kleinsasser   Computer Science Graduate 2020
University of Wyoming Rajiv Khadka Jared Baker   Graduate 2020
Virginia Tech University David Barto Alana Romanella   Undergraduate 2020
West Chester University of Pennsylvania Jon C. Kilgannon Linh Ngo Computer Science Graduate 2020
Yale University Sinclair Im Andy Sherman Applied Math Graduate 2022
           
GRADUATED          
Florida A&M University George Kurian Hongmei Chi     2019
Florida A&M University Temilola Aderibigbe Hongmei Chi     2019
Florida A&M University Stacyann Nelson Hongmei Chi     2019
Georgia State University Mengyuan Zhu Suranga Naranjan     2017
Georgia State University Thakshila Herath Suranga Naranjan     2018
Jackson State University Ebrahim Al-Areqi Carmen Wright     2018
Jackson State University Duber Gomez-Fonseca Carmen Wright   Graduate 2019
Mississippi State University Nitin Sukhija Trey Breckenridge     2015
Oklahoma State University Phillip Doehle Dana Brunson     2016
Oklahoma State University Venkat Padmanapan Rao Jesse Schafer Materials Science Graduate 2019
Oklahoma State University Raj Shukla Dana Brunson     2018
Oklahoma State University Nathalia Graf Grachet Philip Doehle     2019
Rensselaer Polytechnic Institute Jorge Alarcon Joel Geidt     2016
Southern Illinois University Alex Sommers Chet Langin     2018
Southern Illinois University Sai Susheel Sunkara Chet Langin     2018
Southern Illinois University Monica Majiga Chet Langin     2017
Southern Illinois University Sai Sandeep Kadiyala  Chet Langin     2017
Southern Illinois University Rezaul Nishat Chet Langin     2018
Southern Illinois University Alvin Gonzales Chet Langin     2020
Tufts University Georgios (George) Karamanis Shawn G. Doughty     2018
University of Arkansas Shawn Coleman Jeff Pummill     2014
University of Florida David Ojika Oleksandr Moskalenko     2018
University of Houston Clear Lake Tarun Kumar Sharma Liwen Shih     2014
University of Maryland Baltimore County Genaro Hernadez Paul Schou     2015
University of Michigan Simon Adorf Shelly Johnson     2019
University of Missouri Alexander Barnes Timothy Middelkoop     2018
University of North Carolina Wilmington James Stinson Gray Eddie Dunn     2018
University of South Dakota Joseph Madison Doug Jennewein     2018
University of Pittsburgh Shervin Sammak Kim Wong     2016
Virginia Tech University Lu Chen Alana Romanella     2017
Winston-Salem State University Daniel Caines Xiuping Tao Computer Science Undergraduate 2019

Updated: March 25, 2020

 


Artificial Intelligence on XSEDE Systems Is Key to Speeding Simulations of Neutron Star Mergers

By Ken Chiacchia, Pittsburgh Supercomputing Center

The intense magnetic fields accompanying the movement of matter as neutron stars sweep past each other cause increasingly complicated turbulence that is computationally expensive to capture with standard simulation methods. In this time series, a deep learning AI provides a simulation of this process at a fraction of the computing time.

 

Collisions between neutron stars involve some of the most extreme physics in the Universe. The intense effects of vast matter density and magnetic fields make them computation-hungry to simulate. A team from the National Center for Supercomputing Applications (NCSA) used artificial intelligence on the advanced graphics-processing-unit (GPU) nodes of the XSEDE-allocated supercomputers Bridges at the Pittsburgh Supercomputing Center (PSC) as well as Stampede2 at the Texas Advanced Computing Center (TACC) to obtain a correction factor that will allow much faster, less detailed simulations to produce accurate predictions of these mergers.

Why It's Important

Bizarre objects the size of a city but with more mass than our Sun, neutron stars spew magnetic fields a hundred thousand times stronger than an MRI medical scanner. A teaspoon of neutron-star matter weighs about a billion tons. It stands to reason that when these cosmic bodies smack together it will be dramatic. And nature does not disappoint on that count.

"We don't know the nature of matter when it is super-compressed. Neutron stars are ideal laboratories to get insights into this state of matter. [Other than black holes] they are the most compact objects in the Universe."—Shawn Rosofsky, NCSA

Scientists have directly detected two neutron-star mergers to date. These detections depended on two gravitational-wave-detector observatories. LIGO consists of two detectors, one in Hanford, Wash., and the other in Livingston, La. The European Virgo detector is in Santo Stefano a Macerata, Italy.

The intense magnetic fields accompanying the movement of matter as neutron stars sweep past each other cause increasingly complicated turbulence that is computationally expensive to capture with standard simulation methods. Here, a deep learning AI provides a simulation of this process at a fraction of the computing time.

 

Scientists who analyze the data collected by LIGO and Virgo would like to see the highest-quality computer simulations of neutron star mergers. This allows them to identify what they should be looking for to better recognize and understand these events. But these simulations are slow and computationally expensive. Graduate student Shawn Rosofsky, working with advisor E. A. Huerta at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, set out to speed up such simulations. To accomplish this, he turned to artificial intelligence using the advanced graphics processing units (GPUs) of the National Science Foundation-funded, XSEDE-allocated Bridges supercomputing platform at PSC.

How XSEDE Helped

Rosofsky set out to simulate the phenomenon of magnetohydrodynamic turbulence in the gasses surrounding neutron stars as they merge. This physical process is related to the turbulence in the atmosphere that produces clouds. But in neutron star mergers, it takes place under massive magnetic fields that make it difficult to simulate in a computer. The scale of the interactions is small—and so detailed—that the high resolutions required to resolve these effects in a single simulation could take years.

"Subgrid modeling [with AI] allows us to mimic the effects of high resolutions in lower-resolution simulations. The time scale of the simulations with high-enough resolutions without subgrid modeling is probably years rather than months.  If we [can] obtain accurate results on a grid by artificially lowering the resolution by a factor of eight, we reduce the computational complexity by a factor of eight to the fourth power, or 4,096. XSEDE provided computational resources for running the simulations that we used as our training data with Stampede2. This work has attracted the attention of [our scientific community, enabling] us to push the envelope on innovative AI applications to accelerate multi-scale and multi-physics simulations."—Shawn Rosofsky, NCSA

Rosofsky wondered whether deep learning, a type of artificial intelligence (AI) that uses multiple layers of representation, could recognize features in the data that allow it to extract correct predictions faster than the brute force of ultra-high resolutions. His idea was to produce a correction factor using the AI to allow lower-resolution, faster computations on conventional, massively parallel supercomputers while still producing accurate results.

Deep learning starts with training, in which the AI analyzes data in which the "right answers" have been labeled by humans. This allows it to extract features from the data that humans might not have recognized but which allow it to predict the correct answers. Next the AI is tested on data without the right answers labeled, to ensure it's still getting the answers right.
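As a rough illustration of the supervised setup described above (train on labeled data, then check the model on held-out data), the sketch below trains a small convolutional network to predict a subgrid correction from coarse-grid fields. It is a generic placeholder, not the NCSA team's model; the architecture, tensor shapes, and random stand-in data are all assumptions.

```python
# Minimal, hypothetical sketch of learning a subgrid correction with a small
# convolutional network (PyTorch). Shapes and data are placeholders only.
import torch
import torch.nn as nn

class SubgridCorrector(nn.Module):
    def __init__(self, channels=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, coarse_fields):
        # Predict the correction to apply to the coarse-grid solution.
        return self.net(coarse_fields)

model = SubgridCorrector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-ins for coarse-grid inputs and the "true" correction that would come
# from filtering high-resolution training simulations.
coarse = torch.randn(8, 4, 64, 64)
target_correction = torch.randn(8, 4, 64, 64)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(coarse), target_correction)
    loss.backward()
    optimizer.step()
```

After training, such a network would be evaluated on simulations it never saw, mirroring the testing step described above.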

"It takes several months to obtain high-resolution simulations without subgrid scale modeling. Shawn's idea was, ‘Forget about that, can we solve this problem with AI? Can we capture the physics of magnetohydrodynamics turbulence through data-driven AI modeling?' … We had no idea whether we would be able to capture these complex physics … But the answer was, ‘Yes!'"—E. A. Huerta, NCSA 

Rosofsky designed his deep-learning AI to progress in steps. This allowed him to verify the results at each step and understand how the AI was obtaining its predictions. This is important in deep learning computation, which otherwise could produce a result that researchers might not fully understand and so can't fully trust.

Rosofsky used the XSEDE-allocated Stampede2 supercomputer at the Texas Advanced Computing Center (TACC) to produce the data that is used to train, validate and test his neural network models. For the training and testing phases of the project, Bridges' NVIDIA Tesla P100 GPUs, the most advanced available at the time, were ideally suited to the computations. Using Bridges, he was able to obtain a correction factor for the lower resolution simulations much more accurately than with the alternatives. The ability of AI to accurately compute subgrid scale effects with low resolution grids should allow the scientists to perform a large simulation in months rather than years. The NCSA team reported their results in the journal Physical Review D in April 2020.

"What we're doing here is not just pushing the boundaries of AI. We're providing a way for other users to optimally utilize their resources."—E. A. Huerta, NCSA

The AI computations on Bridges showed that the method would work better and faster than gradient models. They also provide a roadmap for other researchers who want to use AI to speed up other massive computations.

Future work by the group may include the even more advanced V100 GPU nodes of the XSEDE-allocated Bridges-AI system at PSC, or the upcoming Bridges-2 platform. Their next step will be to incorporate the AI's correction factors into large-scale simulations of neutron-star mergers and further assess the accuracy of the AI and of the quicker simulations. Their hope is that the new simulations will demonstrate details in neutron-star mergers that can be identified in gravitational wave detectors. These could allow observatories to detect more events, as well as explain more about how these massive and strange cosmic events unfold.

You can read the NCSA team's paper here.

 

 

At a Glance:

  • Collisions between neutron stars involve some of the most extreme physics in the Universe.

  • The intense effects of vast matter density and magnetic fields make them computationally challenging  to simulate.

  • A team from the National Center for Supercomputing Applications (NCSA) used artificial intelligence on XSEDE systems to obtain a correction factor for merger simulations.

  • The shortcut will allow much faster, less detailed simulations to produce accurate predictions of these mergers.


Current Campus Champions

Current Campus Champions listed by institution. Participation as either an Established Program to Stimulate Competitive Research (EPSCoR) or as a minority-serving institution (MSI) is also indicated.

Campus Champion Institutions
Total Academic Institutions: 290
    Academic institutions in EPSCoR jurisdictions: 79
    Minority Serving Institutions: 56
    Minority Serving Institutions in EPSCoR jurisdictions: 17
Non-academic, not-for-profit organizations: 36
Total Campus Champion Institutions: 328
Total Number of Champions: 697

LAST UPDATED: June 28, 2020

See also the lists of Leadership Team and Regional Leaders, Domain Champions, and Student Champions.

Institution Campus Champions EPSCoR MSI
Alabama A & M University Damian Clarke, Raziq Yaqub, Georgiana Wright (student)
Albany State University Olabisi Ojo  
Arizona State University Michael Simeone (domain) , Sean Dudley, Johnathan Lee, Lee Reynolds, William Dizon, Ian Shaeffer, Dalena Hardy, Gil Speyer, Richard Gould, Chris Kurtz, Jason Yalim, Philip Tarrant, Douglas Jennewein, Marisa Brazil, Rebecca Belshe    
Arkansas State University Hai Jiang  
Auburn University Tony Skjellum  
Austin Peay State University Justin Oelgoetz    
Bates College Kai Evenson  
Baylor College of Medicine Pavel Sumazin , Hua-Sheng Chiu, Hyunjae Ryan Kim    
Baylor University Mike Hutcheson, Carl Bell, Brian Sitton    
Bentley University Jason Wells    
Bethune-Cookman University Ahmed Badi  
Boise State University Kyle Shannon, Mike Henry (student), Jason Watt, Kelly Byrne, Mendi Edgar, Mike Ramshaw  
Boston Children's Hospital Arash Nemati Hayati    
Boston University Wayne Gilmore, Charlie Jahnke, Augustine Abaris, Brian Gregor, Katia Oleinik, Jacob Pessin    
Bowdoin College Dj Merrill , Stephen Houser  
Bowie State University Konda Karnati  
Brandeis University John Edison    
Brown University Helen Kershaw, Maximilian King, Paul Hall, Khemraj Shukla, Mete Tunca, Paul Stey  
California Baptist University Linn Carothers  
California Institute of Technology Tom Morrell    
California State Polytechnic University-Pomona Chantal Stieber    
California State University-Sacramento Anna Klimaszewski-Patterson  
California State University-San Bernardino Dung Vu, James MacDonell  
Carnegie Institution for Science Floyd A. Fayton, Jr.    
Carnegie Mellon University Bryan Webb, Franz Franchetti, Carl Skipper    
Case Western Reserve University Roger Bielefeld, Hadrian Djohari, Emily Dragowsky, James Michael Warfe, Sanjaya Gajurel    
Centre College David Toth  
Chapman University James Kelly    
Children's Research Institute, Children's Mercy Kansas City Shane Corder    
Claremont Graduate University Michael Espero (student), Cindy Cheng (student)    
Claremont McKenna College Jeho Park, Zeyad Elkelani (student)    
Clark Atlanta University Dina Tandabany  
Clarkson University Jeeves Green, Joshua A. Fiske    
Clemson University Marcin Ziolkowski, Xizhou Feng, Ashwin Srinath, Jeffrey Denton, Corey Ferrier  
Cleveland Clinic Foundation Iris Nira Smith, Daniel Blankenberg    
Clinton College Terris S. Riley
Coastal Carolina University Will Jones, Thomas Hoffman  
Colby College Randall Downer  
Colgate University Howard Powell, Dan Wheeler    
College of Charleston Berhane Temelso  
College of Staten Island CUNY Sharon Loverde  
College of William and Mary Eric Walter    
Colorado School of Mines Torey Battelle    
Columbia University Rob Lane, George Garrett    
Columbia University Medical Center Vinod Gupta    
Complex Biological Systems Alliance Kris Holton    
Cornell University Susan Mehringer    
Dakota State University David Zeng  
Davidson College Neil Reda (student), Michael Blackmon (student)    
Dillard University Tomekia Simeon, Brian Desil (student), Priscilla Saarah (student)
Doane University-Arts & Sciences Mark Meysenburg, AJ Friesen  
Dominican University of California Randall Hall    
Drexel University David Chin, Cameron Fritz (student)    
Duke University Tom Milledge    
Earlham College Charlie Peck    
East Carolina University Nic Herndon    
Edge, Inc. Forough Ghahramani    
Emory University Jingchao Zhang    
Federal Reserve Bank Of Kansas City (CADRE) BJ Lougee, Chris Stackpole, Michael Robinson    
Federal Reserve Bank Of Kansas City (CADRE) - OKC Branch Greg Woodward  
Federal Reserve Bank Of New York Ernest Miller, Kevin Kelliher    
Felidae Conservation Fund Kevin Clark    
Ferris State University Luis Rivera, David Petillo    
Fisk University Michael Watson  
Florida A and M University Hongmei Chi, Jesse Edwards, Yohn Jairo Parra Bautista, Rodolfo Tsuyoshi F. Kamikabeya (student), Emon Nelson (student)  
Florida Atlantic University Rhian Resnick    
Florida International University David Driesbach, Cassian D'Cunha  
Florida Southern College Christian Roberson    
Florida State University Paul van der Mark    
Francis Marion University K. Daniel Brauss, Jordan D. McDonnell
Franklin and Marshall College Jason Brooks    
George Mason University Jayshree Sarma, Alastair Neil    
George Washington University Hanning Chen, Adam Wong, Glen Maclachlan, William Burke    
Georgetown University Alisa Kang    
Georgia Institute of Technology Mehmet Belgin, Semir Sarajlic, Nuyun (Nellie) Zhang, Sebastian Kayhan Hollister (student), Paul Manno, Kevin Manalo, Siddhartha Vemuri (student)    
Georgia Southern University Brandon Kimmons, Dain Overstreet    
Georgia State University Neranjan "Suranga" Edirisinghe Pathiran, Ken Huang, Thakshila Herath (student), Melchizedek Mashiku (student)  
Gettysburg College Charles Kann    
Great Plains Network Kate Adams, James Deaton    
Grinnell College Michael Conner    
Harrisburg University of Science and Technology Donald Morton, Daqing Yun    
Harvard Medical School Jason Key    
Harvard University Scott Yockel, Plamen Krastev, Francesco Pontiggia    
Harvey Mudd College Aashita Kesarwani    
Hood College Xinlian Liu    
Howard University Marcus Alfred, Christina McBean (student), Tamanna Joshi (student)  
Idaho National Laboratory Ben Nickell, Eric Whiting, Tami Grimmett  
Idaho State University Keith Weber, Dong Xu  
Illinois Institute of Technology Jeff Wereszczynski    
Indiana University Abhinav Thota, Sudahakar Pamidighantam (domain) , Junjie Li, Thomas Doak (domain) , Carrie L. Ganote (domain) , Sheri Sanders (domain) , Bhavya Nalagampalli Papudeshi (domain) , Le Mai Weakley, Ashley Brooks (student)    
Indiana University of Pennsylvania John Chrispell    
Internet2 Dana Brunson, Cathy Chaplin    
Iowa State University Andrew Severin, James Coyle, Levi Baber, Justin Stanley (student)    
Jackson State University Carmen Wright, Duber Gomez-Fonseca (student)
James Madison University Yasmeen Shorish, Isaiah Sumner    
Jarvis Christian College Widodo Samyono  
John Brown University Jill Ellenbarger  
Johns Hopkins University Anthony Kolasny, Jaime Combariza, Jodie Hoh (student)    
Juniata College Burak Cem Konduk    
KINBER Jennifer Oxenford    
Kansas Research and Education Network Casey Russell  
Kansas State University Dan Andresen, Mohammed Tanash (student), Kyle Hutson  
Kennesaw State University Dick Gayler, Jon Preston    
Kentucky State University Chi Shen
Lafayette College Bill Thompson, Jason Simms, Peter Goode    
Lamar University Larry Osborne    
Langston University Franklin Fondjo, Abebaw Tadesse, Joel Snow
Lawrence Berkeley National Laboratory Andrew Wiedlea    
Lawrence Livermore National Laboratory Todd Gamblin    
Lehigh University Alexander Pacheco    
Lock Haven University Kevin Range    
Louisiana State University Feng Chen, Blaise A Bourdin  
Louisiana State University Health Sciences Center-New Orleans Mohamad Qayoom  
Louisiana Tech University Don Liu  
Marquette University Craig Struble, Lars Olson, Xizhou Feng    
Marshall University Jack Smith  
Massachusetts Green High Performance Computing Center Julie Ma, Abigail Waters (student)    
Massachusetts Institute of Technology Christopher Hill, Lauren Milechin    
Medical University of South Carolina Starr Hazard  
Miami University-Oxford Jens Mueller    
Michigan State University Andrew Keen, Yongjun Choi, Dirk Colbry, Brian Loomis, Justin Booth, Dave Dai, Arthur "Chip" Shank II    
Michigan Technological University Gowtham    
Middle Tennessee State University Dwayne John    
Midwestern State University Eduardo Colmenares-Diaz, Broday Walker (student)    
Mississippi State University Trey Breckenridge  
Missouri State University Matt Siebert    
Missouri University of Science and Technology Buddy Scharfenberg, Don Howdeshell    
Monmouth College Christopher Fasano    
Montana State University Jonathan Hilmer  
Montana Tech Bowen Deng  
Morgan State University Asamoah Nkwanta, James Wachira  
NCAR/UCAR Davide Del Vento    
National University Ali Farahani    
Navajo Technical University Jason Arviso
New Jersey Institute of Technology Glenn "Gedaliah" Wolosh, Roman Voronov, Vatsal Shah (student)    
New Mexico State University Alla Kammerdiner, Diana Dugas, Strahinja Trecakov
New York University Shenglong Wang    
North Carolina A & T State University Dukka KC  
North Carolina Central University Caesar Jackson, Alade Tokuta  
North Carolina State University at Raleigh Lisa Lowe, Dheeraj Kalidindi (student), Yuqing Du (student)    
North Dakota State University Dane Skow, Nick Dusek, Oluwasijibomi "Siji" Saula, Khang Hoang  
Northern Arizona University Christopher Coffey, Jason Buechler, William Wilson    
Northern Illinois University Jifu Tan    
Northwest Missouri State University Jim Campbell    
Northwestern State University (Louisiana Scholars' College) Brad Burkman  
Northwestern University Pascal Paschos, Alper Kinaci, Sajid Ali (student)    
OWASP Foundation Learning Gateway Project Bev Corwin, Laureano Batista, Zoe Braiterman, Noreen Whysel    
Ohio State University Keith Stewart, Sandy Shew    
Ohio Supercomputer Center Karen Tomko    
Oklahoma Baptist University Yuan-Liang Albert Chen  
Oklahoma Innovation Institute John Mosher  
Oklahoma State University Brian Couger (domain) , Jesse Schafer, Christopher J. Fennell (domain) , Phillip Doehle, Evan Linde, Venkat Padmanapan Rao (student), Bethelehem Ali Beker (student)  
Old Dominion University Wirawan Purwanto    
Oral Roberts University Stephen R. Wheat  
Oregon State University David Barber, CJ Keist, Mark Keever, Dylan Keon    
Penn State University Wayne Figurelle, Guido Cervone, Diego Menendez, Jeff Nucciarone, Chuck Pavloski    
Pittsburgh Supercomputing Center Stephen Deems, John Urbanic    
Pomona College Asya Shklyar, Andrew Crawford, Omar Zintan Mwinila-Yuori (student), Samuel Millette (student), Sanghyun Jeon    
Portland State University William Garrick    
Prairie View A&M University Suxia Cui  
Princeton University Ian Cosden    
Purdue University Xiao Zhu, Tsai-wei Wu, Matthew Route (domain) , Eric Adams    
RAND Corporation Justin Chapman    
Reed College Trina Marmarelli, Johnny Powell , Ben Poliakoff, Jiarong Li (student)    
Rensselaer Polytechnic Institute Joel Giedt, James Flamino (student)    
Rhodes College Brian Larkins    
Rice University Qiyou Jiang, Erik Engquist, Xiaoqin Huang, Clinton Heider, John Mulligan    
Rochester Institute of Technology Andrew W. Elble , Emilio Del Plato, Charles Gruener, Paul Mezzanini, Sidney Pendelberry    
Rowan University Ghulam Rasool    
Rutgers University Kevin Abbey, Shantenu Jha, Bill Abbott, Leslie Michelson, Paul Framhein, Galen Collier, Eric Marshall, Kristina Plazonic, Vlad Kholodovych, Bala Desinghu    
SBGrid Consortium      
SUNY at Albany Kevin Tyle, Nicholas Schiraldi    
Saint Louis University Eric Kaufmann, Frank Gerhard Schroer IV (student)    
Saint Martin University Shawn Duan    
San Diego State University Mary Thomas  
San Jose State University Sen Chiao, Werner Goveya    
Slippery Rock University of Pennsylvania Nitin Sukhija    
Sonoma State University Mark Perri  
South Carolina State University Biswajit Biswal, Jagruti Sahoo
South Dakota School of Mines and Technology Rafal M. Oszwaldowski  
South Dakota State University Kevin Brandt, Roberto Villegas-Diaz (student), Rachael Auch, Chad Julius  
Southeast Missouri State University Marcus Bond    
Southern Connecticut State University Yigui Wang    
Southern Illinois University-Carbondale Shaikh Ahmed, Chet Langin, Majid Memari (student), Aaron Walber (student), Manvith Mali (student)    
Southern Illinois University-Edwardsville Kade Cole, Andrew Speer    
Southern Methodist University Amit Kumar, Merlin Wilkerson, Robert Kalescky    
Southern University and A & M College Shizhong Yang, Rachel Vincent-Finley
Southwest Innovation Cluster Thomas MacCalla    
Southwestern Oklahoma State University Jeremy Evert, Kurtis D. Clark (student), Hamza Jamil (student)  
Spelman College Yonas Tekle  
Stanford University Ruth Marinshaw, Zhiyong Zhang    
State University of New York at Buffalo Dori Sajdak, Andrew Bruno    
Swarthmore College Andrew Ruether    
Temple University Richard Berger    
Tennessee Technological University Mike Renfro    
Texas A & M University-College Station Rick McMullen, Dhruva Chakravorty, Jian Tao, Brad Thornton, Logan Kunka (student)    
Texas A & M University-Corpus Christi Ed Evans, Joshua Gonzalez  
Texas A&M University-San Antonio Smriti Bhatt  
Texas Southern University Farrukh Khan  
Texas State University Shane Flaherty  
Texas Tech University Tom Brown, Misha Ahmadian (student)  
Texas Wesleyan University Terrence Neumann    
The College of New Jersey Shawn Sivy    
The Jackson Laboratory Shane Sanders, Bill Flynn  
The University of Tennessee-Chattanooga Craig Tanis, Carson Woods (student)    
The University of Texas at Austin Kevin Chen    
The University of Texas at Dallas Frank Feagans, Gi Vania, Jaynal Pervez, Christopher Simmons    
The University of Texas at El Paso Rodrigo Romero, Vinod Kumar  
The University of Texas at San Antonio Brent League, Jeremy Mann, Zhiwei Wang, Armando Rodriguez, Thomas Freeman  
Tinker Air Force Base Zachary Fuchs, David Monismith  
Trinity College Peter Yoon    
Tufts University Shawn Doughty    
Tulane University Hideki Fujioka, Hoang Tran, Carl Baribault  
United States Department of Agriculture - Agriculture Research Service Nathan Weeks    
United States Geological Survey Janice Gordon, Jeff Falgout, Natalya Rapstine    
University of Alabama at Birmingham John-Paul Robinson  
University of Alaska Fairbanks Liam Forbes, Kevin Galloway
University of Arizona Jimmy Ferng, Mark Borgstrom, Moe Torabi, Adam Michel, Chris Reidy, Chris Deer, Cynthia Hart, Ric Anderson, Todd Merritt, Dima Shyshlov, Blake Joyce, Alexander Prescott (student)    
University of Arkansas David Chaffin, Jeff Pummill, Pawel Wolinski, Timothy "Ryan" Rogers (student)  
University of Arkansas at Little Rock Albert Everett  
University of California-Berkeley Aaron Culich, Chris Paciorek    
University of California-Davis Bill Broadley, Timothy Thatcher    
University of California-Irvine Harry Mangalam  
University of California-Los Angeles TV Singh    
University of California-Merced Matthias Bussonnier, Sarvani Chadalapaka, Luanzheng Guo (student)    
University of California-Riverside Bill Strossman, Charles Forsyth  
University of California-San Diego Cyd Burrows-Schilling, Claire Mizumoto    
University of California-San Francisco Jason Crane    
University of California-Santa Barbara Sharon Solis, Sharon Tettegah  
University of California-Santa Cruz Jeffrey D. Weekley  
University of Central Florida Paul Wiegand, Amit Goel (student), Jason Nagin    
University of Central Oklahoma Evan Lemley, Samuel Kelting (student)  
University of Chicago Igor Yakushin, Ryan Harden    
University of Cincinnati Kurt Roberts, Larry Schartman, Jane E Combs    
University of Colorado Thomas Hauser, Shelley Knuth, Andy Monaghan, Daniel Trahan    
University of Colorado, Denver Amy Roberts    
University of Delaware Anita Schwartz, Parinaz Barakhshan (student), Michael Kyle  
University of Florida Alex Moskalenko, David Ojika    
University of Georgia Guy Cormier    
University of Guam Rommel Hidalgo, Eugene Adanzo, Randy Dahilig, Jose Santiago, Steven Mamaril
University of Hawaii Gwen Jacobs, Sean Cleveland
University of Houston Jerry Ebalunode, Amit Amritkar (domain)  
University of Houston-Clear Lake David Garrison, Liwen Shih    
University of Houston-Downtown Eashrak Zubair (student), Hong Lin  
University of Idaho Lucas Sheneman  
University of Illinois at Chicago Himanshu Sharma, Jon Komperda, Babak Kashir Taloori (student)  
University of Illinois at Urbana-Champaign Mao Ye (domain) , Rob Kooper (domain) , Dean Karres, Tracy Smith    
University of Indianapolis Steve Spicklemire    
University of Iowa Ben Rogers, Baylen Jacob Brus (student), Sai Ramadugu, Adam Harding, Joe Hetrick, Cody Johnson, Genevieve Johnson, Glenn Johnson, Brendel Krueger, Kang Lee, Gabby Perez, Brian Ring, John Saxton    
University of Kansas Riley Epperson  
University of Kentucky Vikram Gazula, James Griffioen  
University of Louisiana at Lafayette Raju Gottumukkala  
University of Louisville Harrison Simrall  
University of Maine System Bruce Segee, Steve Cousins, Michael Brady Butler (student)  
University of Maryland Eastern Shore Urban Wiggins  
University of Maryland-Baltimore County Roy Prouty, Randy Philipp  
University of Maryland-College Park Kevin M. Hildebrand  
University of Massachusetts Amherst Johnathan Griffin    
University of Massachusetts-Boston Jeff Dusenberry, Runcong Chen  
University of Massachusetts-Dartmouth Scott Field, Gaurav Khanna    
University of Memphis Qianyi Cheng    
University of Miami Dan Voss, Warner Baringer    
University of Michigan Shelly Johnson, Todd Raeker, Gregory Teichert , Daniel Kessler (student)    
University of Minnesota Eric Shook (domain) , Ben Lynch, Evan Bolling, Joel Turbes, Doug Finley    
University of Mississippi Medical Center Kurt Showmaker  
University of Missouri-Columbia Timothy Middelkoop, Derek Howard, Asif Ahamed Magdoom Ali, Brian Marxkors, Ashkan Mirzaee (student), Christina Roberts, Predrag Lazic, Phil Redmon    
University of Missouri-Kansas City Paul Rulis    
University of Montana Tiago Antao  
University of Nebraska Adam Caprez, Natasha Pavlovikj (student), Tom Harvill  
University of Nebraska Medical Center Ashok Mudgapalli  
University of Nevada-Reno Fred Harris, Scotty Strachan, Engin Arslan  
University of New Hampshire Scott Valcourt  
University of New Mexico Hussein Al-Azzawi, Matthew Fricke
University of North Carolina Mark Reed, Mike Barker    
University of North Carolina - Greensboro Jacob Tande    
University of North Carolina Wilmington Eddie Dunn, Ellen Gurganious, Cory Nichols Shrum (student)    
University of North Carolina, RENCI Laura Christopherson, Chris Erdmann, Chris Lenhardt    
University of North Dakota Aaron Bergstrom, David Apostal  
University of North Georgia Luis A. Cueva Parra , Yong Wei    
University of North Texas Charles Peterson, Damiri Young    
University of Notre Dame Dodi Heryadi, Scott Hampton    
University of Oklahoma Henry Neeman, Kali McLennan, Horst Severini, James Ferguson, David Akin, S. Patrick Calhoun, Jason Speckman  
University of Oregon Nick Maggio, Robert Yelle, Jake Searcy, Mark Allen, Michael Coleman    
University of Pennsylvania Gavin Burris    
University of Pittsburgh Kim Wong, Matt Burton, Fangping Mu, Shervin Sammak    
University of Puerto Rico Mayaguez Ana Gonzalez
University of Richmond Fred Hagemeister    
University of South Carolina Paul Sagona, Ben Torkian, Nathan Elger  
University of South Dakota Adison Ann Kleinsasser (student), Ryan Johnson, Bill Conn  
University of South Florida-St Petersburg (College of Marine Science) Tylar Murray    
University of Southern California Virginia Kuhn (domain) , Cesar Sul, Erin Shaw    
University of Southern Mississippi Brian Olson , Gopinath Subramanian  
University of St Thomas William Bear, Keith Ketchmark, Eric Tornoe    
University of Tennessee - Knoxville Deborah Penchoff    
University of Tulsa Peter Hawrylak  
University of Utah Anita Orendt, Tom Cheatham (domain) , Brian Haymore (domain)    
University of Vermont Andi Elledge, Yves Dubief  
University of Virginia Ed Hall, Katherine Holcomb    
University of Washington-Seattle Campus Nam Pho    
University of Wisconsin-La Crosse David Mathias, Samantha Foley    
University of Wisconsin-Madison Todd Shechter    
University of Wisconsin-Milwaukee Dan Siercks, Shawn Kwang    
University of Wyoming Bryan Shader, Rajiv Khadka (student), Dylan Perkins  
University of the Virgin Islands Marc Boumedine
Utah Valley University George Rudolph    
Valparaiso University Paul Lapsansky, Paul M. Nord, Nicholas S. Rosasco    
Vassar College Christopher Gahn    
Virginia Tech University James McClure, Alana Romanella, Srijith Rajamohan, David Barto (student)    
Washburn University Karen Camarda, Steve Black  
Washington State University Rohit Dhariwal, Peter Mills    
Washington University in St Louis Xing Huang, Matt Weil, Matt Callaway    
Washington and Lee University Tom Marcais    
Wayne State University Patrick Gossman, Michael Thompson, Aragorn Steiger, Sara Abdallah (student)    
Weill Cornell Medicine Joseph Hargitai    
Wesleyan University Henk Meij    
West Chester University of Pennsylvania Linh Ngo, Jon C. Kilgannon (student)    
West Virginia Higher Education Policy Commission Jack Smith  
West Virginia University Guillermo Avendano-Franco , Blake Mertz  
West Virginia University Institute of Technology Sanish Rai  
Wichita State University Terrance Figy  
Williams College Adam Wang    
Winston-Salem State University Xiuping Tao, Daniel Caines (student)  
Wofford College Beau Christ  
Woods Hole Oceanographic Institution Roberta Mazzoli, Richard Brey    
Yale University Andrew Sherman, Kaylea Nelson, Benjamin Evans, Sinclair Im (student)    
Youngstown State University Feng George Yu    

LAST UPDATED: June 28, 2020

 


COVID-19 Response: To our valued stakeholders and XSEDE collaborators,
By now you have received a flurry of communication surrounding the ongoing COVID-19 pandemic and how various organizations are responding, and XSEDE is no exception. As XSEDE staff have transitioned out of their usual offices and into telecommuting arrangements with their home institutions, they have worked both to support research around the pandemic and to ensure we operate without interruption.

Computational Research Techniques: Scientific Visualization
June 30, July 7, 14, 21

Computational Research Techniques: Applied Parallel Programming
July 2, 9, 16, 23

Indispensable Security: Tips to Use SDSC's HPC Resources Securely
Thursday, July 16, 2020, at 11:00am PDT

TACC Summer Institute Series: Machine Learning
August 10-14


COVID-19 HPC Consortium

HPC Resources available to fight COVID-19

The COVID-19 HPC Consortium encompasses computing capabilities from some of the most powerful and advanced computers in the world. We hope to empower researchers around the world to accelerate understanding of the COVID-19 virus and the development of treatments and vaccines to help address infections. Consortium members manage a range of computing capabilities that span from small clusters to some of the very largest supercomputers in the world.

Preparing your COVID-19 HPC Consortium Request

To request access to resources of the COVID-19 HPC Consortium, you must prepare a description, no longer than three pages, of your proposed work. To ensure your request is directed to the appropriate resource(s), your description should include the sections outlined below. Do not include any proprietary information in proposals, since your request will be reviewed by staff from a number of consortium sites. 

The proposals will be evaluated on the following criteria:

  • Potential benefits for COVID-19 response
  • Feasibility of the technical approach
  • Need for high-performance computing
  • High-performance computing knowledge and experience of the proposing team
  • Estimated computing resource requirements 

Please note the following parameters and expectations:

  • Allocations of resources are expected to be for a maximum of 6 months; proposers may submit subsequent proposals for additional resources
  • All supported projects will have the name of the principal investigator, affiliation, project title and project abstract posted to the  COVID-19 HPC Consortium web site.
  • Project PIs are expected to provide brief (~2 paragraphs) updates on a weekly basis.
  • It is expected that teams who receive Consortium access will publish their results in the open scientific literature. 

A. Scientific/Technical Goal

Describe how your proposed work contributes to our understanding of COVID-19 and/or improves the nation's ability to respond to the pandemic.

  • What is the scientific/technical goal?
  • What is the plan and timetable for getting to the goal?
  • What is the expected period for performance (one week to three months)?
  • Where do you plan to publish your results and in what timeline? 

B. Estimate of Compute, Storage and Other Resources

To the extent possible, provide an estimate of the scale and type of the resources needed to complete the work, making sure to address the points below; a hypothetical estimation sketch follows the list. The information in the Resources section below is available to help you answer this question. Please be as specific as possible in your resource request. If you have more than one phase of computational work, please address the points below for each phase (including subtotals for each phase):

  • Are there computing architectures or systems that are most appropriate (e.g. GPUs, large memory, large core counts on shared memory nodes, etc.)?
  • What is the scale of total computing and data storage resources needed for the work?
    • For example, how long does a single analysis take on what number/kind of CPU cores or GPUs, requiring how much memory (RAM and/or GPU memory) and what sizes of input and output data? How many analyses are proposed?
  • How distributed can the computation be, and can it be executed across multiple computing systems?
  • Can this workload be executed in a cloud environment?
  • Does your project require access to any public datasets? If so, please describe these datasets and how you intend to use them.
  • Do you prefer specific resource provider(s)/system(s), or can your analyses be run on a range of systems?
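As a rough illustration of the estimate requested above, the short Python sketch below multiplies per-analysis runtime, core and GPU counts, and data sizes by the number of planned analyses to arrive at project totals. Every number and variable name is a hypothetical placeholder rather than a value from any real proposal; substitute measurements from benchmark runs of your own application.

# Back-of-envelope estimate of total compute and storage for a request.
# All values are hypothetical placeholders; replace them with measurements
# from benchmark runs of your own application.

hours_per_analysis = 12        # wall-clock hours for one analysis
cores_per_analysis = 256       # CPU cores used by one analysis
gpus_per_analysis = 4          # GPUs used by one analysis (0 if CPU-only)
num_analyses = 500             # total analyses planned for the project

input_gb_per_analysis = 20     # input data per analysis (GB)
output_gb_per_analysis = 50    # output data per analysis (GB)

core_hours = hours_per_analysis * cores_per_analysis * num_analyses
gpu_hours = hours_per_analysis * gpus_per_analysis * num_analyses
storage_tb = (input_gb_per_analysis + output_gb_per_analysis) * num_analyses / 1000

print(f"Estimated CPU core-hours: {core_hours:,.0f}")
print(f"Estimated GPU-hours:      {gpu_hours:,.0f}")
print(f"Estimated storage:        {storage_tb:,.1f} TB")

The printed totals map directly onto the "scale of total computing and data storage resources" bullet above; repeating the calculation for each phase of work gives the per-phase subtotals the instructions ask for.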

C. Support Needs

Describe whether collaboration or support from staff at the National labs, Commercial Cloud providers, or other HPC facilities will be essential, helpful, or unnecessary. Estimates of necessary application support are very helpful. Teams should also identify any restrictions that might apply to the project, such as export-controlled code, ITAR restrictions, proprietary data sets, regional location of compute resources, or personal health information (PHI) or HIPAA restrictions. In such cases, please provide information on security, privacy and access issues.

D. Team and Team Preparedness

Summarize your team's qualifications and readiness to execute the project.

  • What is the expected lead time before you can begin the simulation runs?
  • What systems have you recently used and how big were the simulation runs?
  • Given that some resources are at federal facilities with restrictions, please provide a list of team members that will require accounts on resources along with their citizenship. 

Document Formatting

While readability is of greatest importance, documents must satisfy the following minimum requirements. Documents that conform to NSF proposal format guidelines will satisfy these requirements.

  • Margins: Documents must have 2.5-cm (1-inch) margins at the top, bottom, and sides.
  • Fonts and Spacing: The type size used throughout the documents must conform to the following three requirements:
  • Use one of the typefaces identified below:
    • Arial 11, Courier New, or Palatino Linotype at a font size of 10 points or larger;
    • Times New Roman at a font size of 11 points or larger; or
    • Computer Modern family of fonts at a font size of 11 points or larger.
  • A font size of less than 10 points may be used for mathematical formulas or equations, figures, table or diagram captions and when using a Symbol font to insert Greek letters or special characters. PIs are cautioned, however, that the text must still be readable.
  • Type density must be no more than 15 characters per 2.5 cm (1 inch).
  • No more than 6 lines must be within a vertical space of 2.5 cm (1 inch).

  • Page Numbering: Page numbers should be included in each file by the submitter. Page numbering is not provided by XRAS.
  • File Format: XRAS accepts only PDF file formats.

Submitting your COVID-19 HPC Consortium request 

  1. Create an XSEDE portal account
    • Go to https://portal.xsede.org/
    • Click on "Sign In" at the upper right, if you have an XSEDE account … 
    • … or click "Create Account" to create one. 
    • To create an account, basic information will be required (name, organization, degree, address, phone, email). 
    • Email verification will be necessary to complete the account creation.
    • Set your username and password.
    • After your account is created, be sure you're logged into https://portal.xsede.org/
    • IMPORTANT: Each individual should have their own XSEDE account; it is against policy to share user accounts.
  2. Go to the allocation request form
    • Follow this link to go directly to the submission form.
    • Or to navigate to the request form:
      • Click the "Allocations" tab in the XSEDE User Portal,
      • Then select "Submit/Review Request."
      • Select the "COVID-19 HPC Consortium" opportunity.
    • Select "Start a New Submission."
  3. Complete your submission
    • Provide the data required by the form. Fields marked with a red asterisk are required to complete a submission.
    • The most critical screens are the Personnel, Title/Abstract, and Resources screens.
      • On the Personnel screen, one person must be designated as the Principal Investigator (PI) for the request. Other individuals can be added as co-PIs or Users (but they must have XSEDE accounts).
      • On the Title/Abstract screen, all fields are required.
      • On the Resources screen…
        • Enter "n/a" in the "Disclose Access to Other Compute Resources" field (to allow the form to be submitted).
        • Then, select "COVID-19 HPC Consortium" and enter 1 in the Amount Requested field. 
    • On the Documents screen, select "Add Document" to upload your 3-page document. Select "Main Document" or "Other" as the document Type.
      • Only PDF files can be accepted.
    • You can ignore the Grants and Publications sections. However, you are welcome to enter any supporting agency awards, if applicable.
    • On the Submit screen, select "Submit Request." If necessary, correct any errors and submit the request again.

Resources available for COVID-19 HPC Consortium request 

U.S. Department of Energy (DOE) Advanced Scientific Computing Research (ASCR)
Supercomputing facilities at DOE offer some of the most powerful resources for scientific computing in the world. The Argonne Leadership Computing Facility (ALCF), the Oak Ridge Leadership Computing Facility (OLCF), and Lawrence Berkeley National Laboratory (LBNL) may be used for modeling and simulation coupled with machine and deep learning techniques to study a range of areas, including examining underlying protein structure, classifying the evolution of the virus, understanding mutation, uncovering important differences and similarities with the 2002-2003 SARS virus, searching for potential vaccine and antiviral compounds, and simulating the spread of COVID-19 and the effectiveness of countermeasure options.

 

Oak Ridge Summit | 200 PF, 4608 nodes, IBM POWER9/NVIDIA Volta

Summit System

2 x IBM POWER9 CPUs per node
42 TF per node
6 x NVIDIA Volta GPUs per node
512 GB DDR4 + 96 GB HBM2 (GPU memory) per node
1600 GB NVMe local storage per node
2 x Mellanox EDR IB adapters (100 Gbps per adapter)
250 PB, 2.5 TB/s, IBM Spectrum Scale storage

 

Argonne Theta | 11.69 PF, 4292 nodes, Intel Knights Landing

1 x Intel KNL 7230 per node, 64 cores per CPU
192 GB DDR4, 16 GB MCDRAM memory per node
128 GB local storage per node
Aries dragonfly network
10 PB Lustre + 1 PB IBM Spectrum Scale storage
Full details available at: https://www.alcf.anl.gov/alcf-resources

 

Lawrence Berkeley National Laboratory 

LBNL Cori | 32 PF, 12,056 Intel Xeon Phi and Xeon nodes
9,668 nodes, each with one 68-core Intel Xeon Phi Processor 7250 (KNL)
96 GB DDR4 and 16 GB MCDRAM memory per KNL node
2,388 nodes, each with two 16-core Intel Xeon Processor E5-2698 v3 (Haswell)
128 GB DDR4 memory per Haswell node
Cray Aries dragonfly high speed network
30 PB Lustre file system and 1.8 PB Cray DataWarp flash storage
Full details at: https://www.nersc.gov/systems/cori/

U.S. DOE National Nuclear Security Administration (NNSA)

Established by Congress in 2000, NNSA is a semi-autonomous agency within the U.S. Department of Energy responsible for enhancing national security through the military application of nuclear science. NNSA resources at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), and Sandia National Laboratories (SNL) are being made available to the COVID-19 HPC Consortium.

Lawrence Livermore + Los Alamos + Sandia | 32.2 PF, 7375 nodes, IBM POWER8/9, Intel Xeon
  • LLNL Lassen
    • 23 PFLOPS, 788 compute nodes, IBM Power9/NVIDIA Volta GV100
    • 28 TF per node
    • 2 x IBM POWER9 CPUs (44 cores) per node
    • 4 x NVIDIA Volta GPUs per node
    • 256 GB DDR4 + 64 GB HBM2 (GPU memory) per node
    • 1600 GB NVMe local storage per node
    • 2 x Mellanox EDR IB (100Gb/s per adapter)
    • 24 PB storage
  • LLNL Quartz
    • 3.2 PF, 3004 compute nodes, Intel Broadwell
    • 1.2 TF per node
    • 2 x Intel Xeon E5-2695 CPUs (36 cores) per node
    • 128 GB memory per node
    • 1 x Intel Omni-Path IB (100Gb/s)
    • 30 PB storage (shared with other clusters)
  • LLNL Pascal
    • 0.9 PF, 163 compute nodes, Intel Broadwell CPUs/NVIDIA Pascal P100
    • 11.6 TF per node
    • 2 x Intel Xeon E5-2695 CPUs (36 cores) per node
    • 2 x NVIDIA Pascal P100 GPUs per node
    • 256 GB memory + 32 GB HBM2 (GPU memory) per node
    • 1 x Mellanox EDR IB (100Gb/s)
    • 30 PB storage (shared with other clusters) 
  • LLNL Ray
    • 1.0 PF, 54 compute nodes, IBM Power8/NVIDIA Pascal P100
    • 19 TF per node
    • 2 x IBM Power8 CPUs (20 cores) per node
    • 4 x NVIDIA Pascal P100 GPUs per node
    • 256 GB + 64 GB HBM2 (GPU memory) per node
    • 1600 GB NVMe local storage per node
    • 2 x Mellanox EDR IB (100Gb/s per adapter)
    • 1.5 PB storage
  • LLNL Surface
    • 506 TF, 158 compute nodes, Intel Sandy Bridge/NVIDIA Kepler K40m
    • 3.2 TF per node
    • 2 x Intel Xeon E5-2670 CPUs (16 cores) per node
    • 3 x NVIDIA Kepler K40m GPUs
    • 256 GB memory + 36 GB GDDR5 (GPU memory) per node
    • 1 x Mellanox FDR IB (56Gb/s)
    • 30 PB storage (shared with other clusters)
  • LLNL Syrah
    • 108 TF, 316 compute nodes, Intel Sandy Bridge
    • 0.3 TF per node
    • 2 x Intel Xeon E5-2670 CPUs (16 cores) per node
    • 64 GB memory per node
    • 1 x QLogic IB (40Gb/s)
    • 30 PB storage (shared with other clusters)
  • LANL Snow
    • 445 TF, 368 compute nodes, Intel Broadwell
    • 1.2 TF per node
    • 2 x Intel Xeon E5-2695 CPUs (36 cores) per node
    • 128 GB memory per node
    • 1 x Intel Omni-Path IB (100Gb/s)
    • 15.2 PB storage
  • LANL Badger
    • 790 TF, 660 compute nodes, Intel Broadwell
    • 1.2 TF per node
    • 2 x Intel Xeon E5-2695 CPUs (36 cores) per node
    • 128 GB memory per node
    • 1 x Intel Omni-Path IB (100Gb/s)
    • 15.2 PB storage
U.S. DOE Office of Nuclear Energy

Idaho National Laboratory | 6 PF, 2079 nodes, Intel Xeon

  • Sawtooth | 6 PF; 2079 compute nodes; 99,792 cores; 108 NVIDIA Tesla V100 GPUs
    • Mellanox Infiniband EDR, hypercube
    • CPU-only nodes:
      • 2052 nodes, 2 x Intel Xeon 8268 CPUs
      • 192 GB RAM/node
    • CPU/GPU nodes:
      • 27 nodes, 2 x Intel Xeon 8268 CPUs
      • 384 GB RAM/node
      • 4 NVIDIA Tesla V100 GPUs
Rensselaer Polytechnic Institute
The Rensselaer Polytechnic Institute (RPI) Center for Computational Innovations is solving problems for next-generation research through the use of massively parallel computation and data analytics. The center supports researchers, faculty, and students across a diverse spectrum of disciplines. RPI is making its Artificial Intelligence Multiprocessing Optimized System (AiMOS) available to the COVID-19 HPC Consortium. AiMOS is an 8-petaflop IBM Power9/Volta supercomputer configured to enable users to explore new AI applications.

 

RPI AiMOS | 11.1 PF, 252 nodes POWER9/Volta

2 x IBM POWER9 CPU per node, 20 cores per CPU
6 x NVIDIA Tesla GV100 per node
32 GB HBM per GPU
512 GB DRAM per node
1.6 TB NVMe per node
Mellanox EDR InfiniBand
11 PB IBM Spectrum Scale storage

MIT/Massachusetts Green HPC Center (MGHPCC)
MIT is contributing two HPC systems to the COVID-19 HPC Consortium. The MIT Supercloud, a 7-petaflop Intel x86/NVIDIA Volta HPC cluster, is designed to support research projects that require significant compute, memory, or big data resources. Satori is a 2-petaflop, scalable, AI-oriented hardware resource for research computing at MIT composed of 64 IBM Power9/Volta nodes. The MIT resources are installed at the Massachusetts Green HPC Center (MGHPCC), which operates as a joint venture between Boston University, Harvard University, MIT, Northeastern University, and the University of Massachusetts.

 

MIT/MGHPCC Supercloud | 6.9 PF, 440 nodes Intel Xeon/Volta

2 x Intel Xeon (18 CPU cores per node)
2 x NVIDIA V100 GPUs per node
32 GB HBM per GPU
Mellanox EDR InfiniBand
3 PB scratch storage

MIT/MGHPCC Satori | 2.0 PF, 64 nodes IBM POWER9/NVIDIA Volta

2 x POWER9 CPUs, 40 cores per node
4 x NVIDIA Volta GPUs per node (256 total)
32 GB HBM per GPU
1.6 TB NVMe per node
Mellanox EDR InfiniBand
2 PB scratch storage

IBM Research WSC

The IBM Research WSC cluster consists of 56 compute nodes, each with two 22-core CPUs and six GPUs, plus seven additional nodes dedicated to management functions. The cluster is intended to be used for the following purposes: client collaboration, advanced research for government-funded projects, advanced research on Converged Cognitive Systems, and advanced research on Deep Learning.

IBM Research WSC | 2.8 PF, 54 nodes IBM POWER9/NVIDIA Volta

  • 54 IBM POWER9 nodes

  • 2 x POWER9 CPU per node, 22 cores per CPU

  • 6 x NVIDIA V100 GPUs per node (336 total)

  • 512 GB DRAM per node

  • 1.4 TB NVMe per node

  • 2 x Mellanox EDR InfiniBand per node

  • 2 PB IBM Spectrum Scale distributed storage

  • RHEL 7.6

  • CUDA 10.1
  • IBM PowerAI 1.6

Tools to Accelerate Discovery:

Deep Search and Drug Candidate Exploration: The traditional drug discovery pipeline is time and cost intensive. To deal with new viral outbreaks and epidemics, such as COVID-19, we need more rapid drug discovery processes. Generative AI models have shown promise for automating the discovery of molecules. However, many challenges still exist: current generative frameworks are not efficient in handling design tasks with multiple discovery constraints, have limited exploratory and expansion capabilities, and require expensive model retraining to learn beyond limited training data.

We have developed advanced and robust generative frameworks that can overcome these challenges to create novel peptides, proteins, drug candidates, and materials. We have applied our methodology to generate drug-like molecule candidates for COVID-19 targets. Our hope is that by releasing these novel molecules, the research and drug design communities can accelerate the process of identifying promising new drug candidates for coronavirus and potential similar, new outbreaks. This work demonstrates our vision for the future of accelerated discovery, where AI researchers and pharmaceutical scientists work together to rapidly create next-generation therapeutics, aided by novel AI-powered tools.

Functional Genomics Platform: The IBM Functional Genomics Platform is a database and a cloud platform designed to study microbial life at scale. It contains over 300 million sequences for both bacteria and viruses, seamlessly connecting their genomes, genes, proteins, and functional domains. Together, these sequences describe the collective biological activity that a microbe can have and are therefore used to develop health interventions such as antivirals, vaccines, and diagnostic tests. In response to the global COVID-19 pandemic, we processed all newly sequenced public SARS-CoV-2 genomes and are offering free access to the IBM Functional Genomics Platform to support important research for identifying molecular targets to aid discovery during this public health crisis.

U.S. National Science Foundation (NSF)

The NSF Office of Advanced Cyberinfrastructure supports and coordinates the development, acquisition, and provision of state-of-the-art cyberinfrastructure resources, tools and services essential to the advancement and transformation of science and engineering. By fostering a vibrant ecosystem of technologies and a skilled workforce of developers, researchers, staff and users, OAC serves the growing community of scientists and engineers, across all disciplines. The most capable resources supported by NSF OAC are being made available to support the COVID-19 HPC Consortium.

Frontera | 38.7 PF, 8114 nodes, Intel Xeon, NVIDIA RTX GPU

Funded by the National Science Foundation and operated by the Texas Advanced Computing Center (TACC), Frontera provides a balanced set of capabilities that supports both capability and capacity simulation, data-intensive science, visualization, and data analysis, as well as emerging applications in AI and deep learning. Frontera has two computing subsystems: a primary computing system focused on double-precision performance, and a second subsystem focused on single-precision, streaming-memory computing. Frontera is built by Dell, Intel, DataDirect Networks, Mellanox, NVIDIA, and GRC.

Comet | 2.75 PF, total 2020 nodes, Intel Xeon, NVIDIA Pascal GPU

Operated by the San Diego Supercomputer Center (SDSC), Comet is a nearly 3-petaflop cluster designed by Dell and SDSC. It features Intel next-generation processors with AVX2, Mellanox FDR InfiniBand interconnects, and Aeon storage. 

Stampede2 | 19.3 PF, 4200 Intel KNL, 1,736 Intel Xeon

Operated by TACC, Stampede2 is a nearly 20-petaflop national HPC resource accessible to thousands of researchers across the country, enabling new computational and data-driven discoveries and advances in science, engineering, research, and education.

Longhorn | 2.8 PF, 112 nodes, IBM POWER9, NVIDIA Volta

Longhorn is a TACC resource built in partnership with IBM to support GPU-accelerated workloads. The power of this system is in its multiple GPUs per node, and it is intended to support sophisticated workloads that require high GPU density and little CPU compute. Longhorn will support double-precision machine learning and deep learning workloads that can be accelerated by GPU-powered frameworks, as well as general purpose GPU calculations.

Bridges | 2 PF, 874 nodes, Intel Xeon, NVIDIA K80/V100/P100 GPUs, DGX-2

Operated by the Pittsburgh Supercomputing Center (PSC), Bridges and Bridges-AI provide an innovative HPC and data-analytic system, integrating advanced memory technologies to empower new modalities of artificial intelligence-based computation, bring desktop convenience to HPC, connect to campuses, and express data-intensive scientific and engineering workflows.

Jetstream | 320 nodes, Cloud accessible

Operated by a team led by the Indiana University Pervasive Technology Institute, Jetstream is a configurable, large-scale computing resource that leverages both on-demand and persistent virtual machine technology to support a wide array of software environments and services, incorporating elements of commercial cloud computing with some of the best software available for solving important scientific problems.

Open Science Grid | Distributed High Throughput Computing, 10,000+ nodes, Intel x86-compatible CPUs, various NVIDIA GPUs

The Open Science Grid (OSG) is a large virtual cluster of distributed high-throughput computing (dHTC) capacity shared by numerous national labs, universities, and non-profits, with the ability to seamlessly integrate cloud resources, too. The OSG Connect service makes this large distributed system available to researchers, who can individually use up to tens of thousands of CPU cores and up to hundreds of GPUs, along with significant support from the OSG team. Ideal work includes parameter optimization/sweeps, molecular docking, image processing, many bioinformatics tasks, and other work that can run as numerous independent tasks each needing 1-8 CPU cores, <8 GB RAM, and <10GB input or output data, though these can be exceeded significantly by integrating cloud resources and other clusters, including many of those contributing to the COVID-19 HPC Consortium.
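To make "numerous independent tasks" concrete, the hypothetical Python sketch below expands a small parameter sweep into a list of self-contained task descriptions, each sized for distributed high-throughput computing (one CPU core and a few GB of memory). The parameter names, ranges, and the tasks.json output file are illustrative assumptions only and are not part of OSG's actual submission interface; on OSG, such tasks would typically be submitted as individual jobs through OSG Connect.

# Hypothetical parameter sweep broken into many small, independent tasks,
# the kind of workload that maps well onto distributed high-throughput
# computing. Parameter names and ranges are placeholders.
import itertools
import json

temperatures = [290, 300, 310, 320]      # hypothetical parameter 1
concentrations = [0.05, 0.10, 0.15]      # hypothetical parameter 2
replicas = range(10)                     # independent replicas per point

tasks = [
    {"temperature": t, "concentration": c, "replica": r,
     "request_cpus": 1, "request_memory_gb": 4}
    for t, c, r in itertools.product(temperatures, concentrations, replicas)
]

# Each entry describes one independent job; a submission system can then
# queue one job per entry of this file.
with open("tasks.json", "w") as f:
    json.dump(tasks, f, indent=2)

print(f"Generated {len(tasks)} independent tasks")

Because each task is independent, the jobs can spread across however many OSG slots happen to be free, which is exactly the property that makes dHTC work scale so well.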

Cheyenne | 5.34 PF, 4032 nodes, Intel Xeon

Operated by the National Center for Atmospheric Research (NCAR), Cheyenne is a critical tool for researchers across the country studying climate change, severe weather, geomagnetic storms, seismic activity, air quality, wildfires, and other important geoscience topics. The Cheyenne environment also encompasses tens of petabytes of storage capacity and an analysis cluster to support efficient workflows. Built by SGI (now HPE), Cheyenne is funded by the Geosciences directorate of the National Science Foundation.

Blue Waters | 13.34 PF, 26,864 nodes, AMD Interlagos, NVIDIA Kepler K20X GPU

The Blue Waters sustained-petascale computing project is supported by the National Science Foundation, the State of Illinois, the University of Illinois, and the National Geospatial-Intelligence Agency. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications and is provided by Cray. Blue Waters is a well-balanced architecture with 22,636 XE6 nodes, each with two x86-compatible 16-core AMD Interlagos CPUs, and 4,228 XK7 nodes, each with an NVIDIA Kepler K20X GPU and a 16-core AMD Interlagos CPU. The system is integrated with a single high-speed Gemini 24x24x24 torus with an aggregate bandwidth of more than 265 TB/s to simultaneously support very large-scale parallel applications and high-throughput, many-job workloads. Blue Waters has a 36 PB (26 PB usable) shared Lustre file system that supports 1.1 TB/s of I/O bandwidth, and it is connected to wide-area networks at a total of 450 Gbps. The rich system software includes multiple compilers, communication libraries, visualization tools, Docker containers, Python, and machine learning and data management methods that support capability and capacity simulation, data-intensive science, visualization and data analysis, and machine learning/AI. All projects are provided with expert points of contact and advanced application support.

 

NASA High-End Computing Capability

NASA's High-End Computing Capability (HECC) Portfolio provides world-class high-end computing, storage, and associated services to enable NASA-sponsored scientists and engineers supporting NASA programs to broadly and productively employ large-scale modeling, simulation, and analysis to achieve successful mission outcomes.

NASA's Ames Research Center in Silicon Valley hosts the agency's most powerful supercomputing facilities. To help meet the COVID-19 challenge facing the nation and the world, HECC is offering access to NASA's high-performance computing (HPC) resources for researchers requiring HPC to support their efforts to combat this virus. 

 

NASA Supercomputing Systems | 19.39 PF, 17609 nodes Intel Xeon

AITKEN | 3.69 PF, 1,152 nodes, Intel Xeon
ELECTRA | 8.32 PF, 3,456 nodes, Intel Xeon
PLEIADES | 7.09 PF, 11,207 nodes, Intel Xeon, NVIDIA K40, Volta GPUs
ENDEAVOUR | 32 TF, 2 nodes, Intel Xeon
MEROPE | 253 TF, 1,792 nodes, Intel Xeon

Amazon Web Services
As part of the COVID-19 HPC Consortium, AWS is offering research institutions and companies technical support and promotional credits for the use of AWS services to advance research on diagnosis, treatment, and vaccine studies to accelerate our collective understanding of the novel coronavirus (COVID-19). Researchers and scientists working on time-critical projects can use AWS to instantly access virtually unlimited infrastructure capacity, and the latest technologies in compute, storage and networking to accelerate time to results. Learn more here.
Microsoft Azure High Performance Computing (HPC)

Microsoft Azure offers purpose-built compute and storage specifically designed to handle the most demanding, computationally and data-intensive scientific workflows. Azure is optimized for applications such as genomics, precision medicine, and clinical trials in life sciences.

Our team of HPC experts and AI for Health data science experts, whose mission is to improve the health of people and communities worldwide, is available to collaborate with COVID-19 researchers as they tackle this critical challenge. More broadly, Microsoft's research scientists across the world, spanning computer science, biology, medicine, and public health, will be available to provide advice and collaborate per mutual interest.

Azure HPC helps improve the efficiency of the drug development process with power and scale for computationally intensive stochastic modeling and simulation workloads, such as population pharmacokinetic and pharmacokinetic-pharmacodynamic modeling.

Microsoft will give access to our Azure cloud and HPC capabilities.

HPC-optimized and AI-optimized virtual machines (VMs)

  • Memory-bandwidth-intensive CPUs: Azure HBv2 instances (AMD EPYC™ 7002-series | 4 GB RAM per core | 200 Gb/s HDR InfiniBand)
  • Compute-intensive CPUs: Azure HC instances (Intel Xeon Platinum 8168 | 8 GB RAM per core | 100 Gb/s EDR InfiniBand)
  • GPU-intensive, RDMA-connected: Azure NDv2 instances (8 NVIDIA V100 Tensor Core GPUs with NVLink interconnect | 32 GB RAM each | 40 non-hyperthreaded Intel Xeon Platinum 8168 processor cores | 100 Gb/s EDR InfiniBand)
  • See the full list of HPC-optimized VMs (H-Series, NC-Series, and ND-Series)

Storage Options:

  • Azure HPC Cache | Azure NetApp Files | Azure Blob Storage | Cray ClusterStor

 

Management:

  • Azure CycleCloud

 

Batch scheduler

 

Azure HPC life sciences: https://azure.microsoft.com/en-us/solutions/high-performance-computing/health-and-life-sciences/#features

Azure HPC web site: https://azure.microsoft.com/en-us/
AI for Health web site: https://www.microsoft.com/en-us/ai/ai-for-health

Hewlett Packard Enterprise
As part of this new effort to attack the novel coronavirus (COVID-19) pandemic, Hewlett Packard Enterprise is committing to providing supercomputing software and applications expertise free of charge to help researchers port, run, and optimize essential applications. Our HPE Artificial Intelligence (AI) experts are collaborating to support the COVID-19 Open Research Dataset and several other COVID-19 initiatives for which AI can drive critical breakthroughs. They will develop AI tools to mine data across thousands of scholarly articles related to COVID-19 and related coronaviruses to help the medical community develop answers to high-priority scientific questions. We encourage researchers to submit any COVID-19 related proposals to the consortium's online portal. More information can be found here: www.hpe.com/us/en/about/covid19/hpc-consortium.html.
Google

Transform research data into valuable insights and conduct large-scale analyses with the power of Google Cloud. As part of the COVID-19 HPC Consortium, Google is providing access to Google Cloud HPC resources for academic researchers.

 

BP

BP's Center for High Performance Computing (CHPC), located at its US headquarters in Houston, serves as a worldwide hub for processing and managing huge amounts of geophysical data from across BP's portfolio and is a key tool in helping scientists to 'see' more clearly what lies beneath the earth's surface. The high performance computing team is made up of people with deep skills in computational science, applied math, software engineering and systems administration. BP's biosciences research team includes computational and molecular biologists, with expertise in software tools for bioinformatics, microbial genomics, computational enzyme design and metabolic modeling.

To help meet the COVID-19 challenge facing the nation, BP is offering access to the CHPC, the high performance computing team and the biosciences research team to support researchers in their efforts to combat the virus. BP's computing capabilities include:

  • Almost 7,000 HPE compute servers with Intel Cascade Lake AP, Skylake, Knights Landing and Haswell processors
  • Over 300,000 cores
  • 16.3 Petaflops (floating-point operations per second)
  • Over 40 Petabytes of storage capacity
  • Mellanox IB and Intel OPA high speed interconnects
NVIDIA

A task force of NVIDIA researchers and data scientists with expertise in AI and HPC will help optimize research projects on the Consortium's supercomputers. The NVIDIA team has expertise across a variety of domains, including AI, supercomputing, drug discovery, molecular dynamics, genomics, medical imaging and data analytics. NVIDIA will also contribute packaging of relevant AI and life-sciences applications through NVIDIA NGC, a hub for GPU-accelerated software. The company is also providing compute time on an AI supercomputer, SaturnV.

 

D. E. Shaw Research Anton 2 at PSC

Operated by the Pittsburgh Supercomputing Center (PSC) with support from National Institutes of Health award R01GM116961, Anton 2 is a special-purpose supercomputer for molecular dynamics (MD) simulations developed and provided without cost by D. E. Shaw Research. For more information, see https://psc.edu/anton2-for-covid-19-research.

Intel

Intel will provide HPC/AI and HLS subject matter experts and engineers to collaborate on COVID-19 code enhancements to benefit the community. Intel will also provide licenses for High Performance Computing software development tools for the research programs selected by the COVID-19 HPC Consortium. The integrated tool suites include Intel's C++ and Fortran compilers, performance libraries, and performance-analysis tools.

Ohio Supercomputer Center

The Ohio Supercomputer Center (OSC), a member of the Ohio Technology Consortium of the Ohio Department of Higher Education, addresses the rising computational demands of academic and industrial research communities by providing a robust shared infrastructure and proven expertise in advanced modeling, simulation and analysis. OSC empowers scientists with the vital resources essential to make extraordinary discoveries and innovations, partners with businesses and industry to leverage computational science as a competitive force in the global knowledge economy, and leads efforts to equip the workforce with the key technology skills required to secure 21st century jobs. For more, visit www.osc.edu

  • OSC Owens | 1.6 PF, 824 nodes Intel Xeon/Pascal
    • 2 x Intel Xeon (28 cores per node, 48 cores per big-mem node)
    • 160 NVIDIA P100 GPUs (1 per node)
    • 128 GB per node (1.5TB per big-mem node)
    • Mellanox EDR Infiniband
    • 12.5 PB Project and Scratch storage
  • OSC Pitzer | 1.3 PF, 260 nodes Intel Xeon/Volta
    • 2 x Intel Xeon (40 cores per node, 80 cores per big-mem node)
    • 64 NVIDIA V100 GPUs (2 per node)
    • 192 GB per node (384 GB per GPU node, 3TB per big-mem node)
    • Mellanox EDR Infiniband
    • 12.5 PB Project and Scratch storage
Dell

Zenith

The Zenith cluster is the result of a partnership between Dell and Intel®. Ranked on the TOP500 list of the fastest supercomputers in the world, Zenith includes Intel Xeon® Scalable Processors, Omni-Path fabric architecture, data center storage solutions, FPGAs, adapters, software and tools. Projects underway include image classification to identify disease in X-rays, MRI scan matching to thoughts and actions, and building faster neural networks to drive recommendation engines. Zenith is available to researchers via the COVID-19 HPC Consortium through the standard application process, subject to availability.

Zenith Configuration:

  • Servers:
    • 422 PowerEdge C6420 servers  
    • 160x PowerEdge C6320p servers
    • 4 PowerEdge R740 servers with Intel FPGAs
  • Processors
    • 2nd generation Intel Xeon Scalable processors
    • Intel Xeon Phi™
  • Memory
    • 192GB at 2,933MHz per node (Xeon Gold)
    • 96GB at 2,400MHz per node (Xeon Phi)
  • Operating System: Red Hat® Enterprise Linux® 7
  • Host channel adapter (HCA) card: Intel Omni-Path Host Fabric Interface
  • Storage
    • 2.68PB Ready Architecture for HPC Lustre Storage
    • 480TB Ready Solutions for HPC NFS Storage
    • 174TB Isilon F800 all-flash NAS storage
UK Digital Research Infrastructure
The UK Digital Research Infrastructure consists of a range of advanced computing systems from academic and UK government agencies with a wide range of different capabilities and capacities. Expertise in porting, developing and testing software is also available from the research software engineers (RSEs) supporting the systems.

Specific technical details on the systems available:

  • ARCHER | 4920 nodes (118,080 cores), two 2.7 GHz, 12-core Intel Xeon E5-2697 v2 per node. 4544 nodes with 64 GB memory nodes and 376 with 128 GB. Cray Aries interconnect. 4.4 PB high-performance storage.
  • Cirrus | 280 nodes (10080 cores), two 2.1GHz 18 core Intel Xeon E5-2695 per node. 256 GB memory per node; 2 GPU nodes each containing two 2.4 GHz, 20 core Intel Xeon 6148 processors and four NVIDIA Tesla V100-PCIE-16GB GPU accelerators. Mellanox FDR interconnect. 144 NVIDIA V100 GPUs in 36 Plainfield blades (2 Intel Cascade Lake processors and 4 GPUs per node).
  • DiRAC Data Intensive Service (Cambridge) | 484 nodes (15488 cores), two Intel Xeon 6142 per node, 192 GB or 384 GB memory per node; 11 nodes with 4x Nvidia P100 GPUs and 96 GB memory per node; 342 nodes of Intel Xeon Phi with 96 GB memory per node.
  • DiRAC Data Intensive Service (Leicester) | 408 nodes (14688 cores), two Intel Xeon 6140 per node, 192 GB memory per node; 1x 6 TB RAM server with 144 Intel Xeon 6154 cores; 3x 1.5TB RAM servers with 36 Intel Xeon 6140 cores;  64 nodes (4096 cores) Arm ThunderX2 with 128 GB RAM/node.
  • DiRAC Extreme Scaling Service (Edinburgh) | 1468 nodes (35,232 cores), two Intel Xeon 4116 per node, 96 GB RAM/node. Dual rail Intel OPA interconnect.
  • DiRAC Memory Intensive Service (Durham) | 452 nodes (12,656 cores), two Intel Xeon 5120 per node, 512 GB RAM/node, 440TB flash volume for checkpointing.
  • Isambard | 332 nodes (21,248 cores), two Arm-based Marvell ThunderX2 32 core 2.1 GHz per node. 256 GB memory per node. Cray Aries interconnect. 75 TB high-performance storage.
  • JADE | 22x Nvidia DGX-1V nodes with 8x Nvidia V100 16GB and 2x 20 core Intel Xeon E5-2698 per node.
  • MMM Hub (Thomas) | 700 nodes (17000 cores), 2x 12 core Intel Xeon E5-2650v4 2.1 GHz per node, 128 GB RAM/node.
  • NI-HPC | 60x Dell PowerEdge R6525, two AMD Rome 64-core 7702 per node. 768GB RAM/node; 4x Dell PowerEdge R6525 with 2TB RAM; 8 x Dell DSS8440 (each with 2x Intel Xeon 8168 24-core). Provides 32x Nvidia Tesla V100 32GB.
  • XCK | 96 nodes (6144 cores), one 1.3 GHz, 64-core Intel Xeon Phi 7320 per node + 20 nodes (640 cores), two 2.3 GHz, 16 core Intel Xeon E5-2698 v3 per node. 16 GB fast memory + 96 GB DDR per Xeon Phi node, 128 GB per Xeon node. Cray Aries interconnect. 9 TB of DataWarp storage and 650 TB of high-performance storage.
  • XCS | 6720 nodes (241,920 cores), two Intel Xeon 2.1 GHz, 18-core E5-2695 v4 series per node. All with 128 GB RAM/node. Cray Aries interconnect. 11 PB of high-performance storage.
CSCS – Swiss National Supercomputing Centre
CSCS Piz Daint | 27 PF, 5704 nodes, Cray XC50/NVIDIA Pascal
Xeon E5-2690v3 12C 2.6GHz 64GB RAM
NVIDIA® Tesla® P100 16GB
Aries interconnect
Swedish National Infrastructure for Computing (SNIC)

The Swedish National Infrastructure for Computing is a national research infrastructure that makes resources for large scale computation and data storage available, as well as provides advanced user support to make efficient use of the SNIC resources.

Beskow | 2.5 PF, 2060 nodes, Intel Haswell & Broadwell.

Funded by the Swedish National Infrastructure for Computing and operated by the PDC Center for High-Performance Computing at the KTH Royal Institute of Technology in Stockholm, Sweden, Beskow supports capability computing and simulations in the form of wide jobs. Attached to Beskow is a 5 PB Lustre file system from DataDirect Networks. Beskow is also a Tier-1 resource in the PRACE European project.

Beskow is built by Cray, Intel and DataDirect Networks.

Consortium Affiliates provide a range of computing services and expertise that can enhance and accelerate research for fighting COVID-19. Matched proposals will have access to resources and help from Consortium Affiliates, provided for free, enabling rapid and efficient execution of complex computational research programs.

Atrio | [Affiliate]

Atrio will assist researchers studying COVID-19 in creating and optimizing the performance of application containers (e.g., the CryoEM processing application suite), as well as in performance-optimized deployment of those application containers onto any of the HPC Consortium members' computational platforms, and specifically onto high-performing GPU and CPU resources. Our proposal is twofold: one part is additional computational resources, and the other, equally important, is support for COVID-19 researchers with an easy way to access and use HPC Consortium computational resources. That support consists of creating application containers for researchers, optimizing their performance, and an optional multi-site container and cluster management software toolset.

Data Expedition Inc | [Affiliate]

Data Expedition, Inc. (DEI) is offering free licenses of its easy-to-use ExpeDat and CloudDat accelerated data transport software to researchers studying COVID-19. This software transfers files ranging from megabytes to terabytes from storage to storage, across wide area networks, among research institutions, cloud providers, and personal computers at speeds many times faster than traditional software. Available immediately for an initial 90-day license. Requests to extend licenses will be evaluated on a case-by-case basis to facilitate continuing research.

Flatiron | [Affiliate]

The Flatiron Institute is a multi-disciplinary science lab with 50 scientists in computational biology. Flatiron is pleased to offer 3.5M core hours per month on our modern HPC system and 5M core hours per month on Gordon, our older HPC facility at SDSC.

Fluid Numerics | [Affiliate]

Fluid Numerics' Slurm-GCP deployment leverages Google Compute Engine resources and the Slurm job scheduler to execute high performance computing (HPC) and high throughput computing (HTC) workloads. Our system is currently capable of approximately 6 PF, but please keep in mind this is a quota-bound metric that can be adjusted if needed. We intend to provide onboarding and remote system administration resources for the fluid-slurm-gcp HPC cluster solution on Google Cloud Platform. We will help researchers leverage GCP for COVID-19 research by assisting with software installation and porting, user training, consulting, and coaching, and general GCP administration, including quota requests, identity and access management, and security compliance.

SAS | [Affiliate]

SAS is offering to provide licensed access to the SAS Viya platform and data science project based resources. SAS provided resources will be specific to the requirements of the selected COVID-19 project use-case. SAS expects a typical engagement on a project would require 1-2 data science resources, a project manager, a data prep specialist and potentially a visualization expert.

Raptor Computing Systems, LLC | [Affiliate]

Our main focus for this membership is developer systems, as we offer a wide variety of desktop and workstation systems built on the POWER architecture. These run Linux, support NVIDIA GPUs, and provide an application development environment for targeting the larger supercomputers. This is the main focus of our support effort. We can provide these machines free of charge (up to a reasonable limit) to the COVID effort to free up supercomputer and high-end HPC server time that would otherwise be allocated to development and testing of the algorithms and software in use.

The HDF Group | [Affiliate]

The HDF Group helps scientists use open source HDF5 effectively, including offering general usage and performance tuning advice, and helping to troubleshoot any issues that arise. Our engineers will be available to assist you in applying HPC and HDF® technologies together for your COVID-19 research.

Acknowledging Support

Papers, presentations, and other publications featuring work that was supported, at least in part, by the resources, services and support provided via the COVID-19 HPC Consortium are expected to acknowledge that support.  Please include the following acknowledgement:

This work used resources, services, and support provided via the COVID-19 HPC Consortium (https://covid19-hpc-consortium.org/), which is a unique private-public effort to bring together government, industry, and academic leaders who are volunteering free compute time and resources in support of COVID-19 research.

 

(Revised 10 June 2020)

Key Points
Computing Resources utilized in research against COVID-19
National scientists encouraged to use computing resources
How and where to find computing resources
Contact Information

ECSS Symposium

ECSS staff share technical solutions to scientific computing challenges monthly in this open forum.

The ECSS Symposium allows the more than 70 ECSS staff members to exchange information each month about successful techniques used to address challenging science problems. Tutorials on new technologies may be featured. Two 30-minute, technically focused talks are presented each month and include a brief question-and-answer period. This series is open to everyone.

Symposium coordinates

Day and Time: Third Tuesdays @ 1 pm Eastern / 12 pm Central / 10 am Pacific
Add this event to your calendar.
Note – Symposium not held in July and November due to conflicts with PEARC and SC conferences.

Webinar (PC, Mac, Linux, iOS, Android): Launch Zoom webinar

iPhone one-tap (US Toll): +16468769923,,114343187# (or) +16699006833,,114343187#

Telephone (US Toll): Dial (for higher quality, dial a number based on your current location):

US: +1 646 876 9923 (or) +1 669 900 6833 (or) +1 408 638 0968

Meeting ID: 114 343 187

Upcoming events are also posted to the Training category of XSEDE News.

Due to the large number of attendees, only the presenters and host broadcast audio. Attendees may submit chat questions to the presenters through a moderator.

Key Points
Monthly technical exchange
ECSS community present
Open to everyone
Tutorials and talks with Q & A
Contact Information

ECSS Justification

ECSS provides expert assistance with your projects. Request help through the allocation process.

How to Justify Inclusion of ECSS for Your Project

XSEDE users may request ECSS services through the allocation process. PIs may request ECSS assistance along with a new Research allocation request or by submitting a stand-alone Supplement request to an active allocation. The request will be reviewed, usually in conjunction with your XSEDE grant proposal. ECSS management will then assess whether there is also a good potential match with the experts available to collaborate with your team. If so, a member of the ECSS team will contact you to develop a specific work plan for the collaboration.

PIs must answer the following five questions to be included along with the ECSS request. Illustrative examples are provided to clarify the meaning of each question.

1. What do you want to accomplish with the help of expert staff? Have you already done any work on this aspect of your software?

For example:

We request ECSS support to improve the parallel efficiency of our major production code, "ABCDcode". This application has routinely utilized up to 250 processes for a cumulative run-time of about 300 hours. The proposed research requires a 6-fold increase in dataset size which will require the utilization of additional processes. In order to maintain an appropriate time-to-solution at least 1,000 processes will need to be utilized.

This project will require the identification of barriers to parallelization which decrease the parallel efficiency of ABCDcode. A likely culprit is our use of a primary-secondary task management pattern which relies on synchronization between a subset of secondary processes. We expect that replacing this communication pattern with a more scalable alternative would greatly increase the parallel efficiency. Additionally, the removal of data dependencies between processes should also help remove synchronization between tasks and improve efficiency. It may also be necessary to investigate the choice of specific algorithms within the code, since it is likely that certain implementations may be adding to the inefficient communication patterns and/or unnecessary process synchronizations.
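For readers unfamiliar with the pattern this hypothetical example describes, the sketch below shows, in mpi4py, a dynamic primary-secondary task farm in which the primary hands out work items on demand and the secondary processes never synchronize with one another. It is purely an illustration of the kind of more scalable communication pattern the example paragraph alludes to; ABCDcode is fictional, and this is not ECSS-endorsed code.

# Minimal dynamic primary-secondary task farm (illustrative sketch only).
# Run with, e.g., "mpirun -n 4 python taskfarm.py" if mpi4py is installed.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

TASK_TAG, STOP_TAG = 1, 2

if rank == 0:
    # Primary: hand out work items one at a time as secondaries become free,
    # instead of forcing synchronized rounds across all processes.
    tasks = list(range(100))          # hypothetical work items
    results = []
    active = size - 1
    while active > 0:
        status = MPI.Status()
        incoming = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
        if incoming is not None:      # first message from each worker is a "ready" None
            results.append(incoming)
        worker = status.Get_source()
        if tasks:
            comm.send(tasks.pop(), dest=worker, tag=TASK_TAG)
        else:
            comm.send(None, dest=worker, tag=STOP_TAG)
            active -= 1
    print(f"Primary collected {len(results)} results")
else:
    # Secondary: announce readiness, then repeatedly receive a task, process
    # it, and return the result; no barriers with other secondaries.
    comm.send(None, dest=0, tag=TASK_TAG)
    while True:
        status = MPI.Status()
        task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == STOP_TAG:
            break
        comm.send(task * task, dest=0, tag=TASK_TAG)   # placeholder "work"

Because each secondary interacts only with the primary and only when it is ready for more work, no subset of secondaries ever has to wait on the others, which is the property the example identifies as missing from the original synchronized pattern.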

The assistance requested may pertain to visualization or other aspects of data analysis:

We request ECSS guidance on the visualization of our simulation data produced by ABCDcode. First, we will need advice as to which tools are best suited to produce 3D animations, volume renderings, and streamlines from our data. By deploying and testing these, we may then agree with ECSS staff that we also need help producing a set of initial visualizations, and in the development of a visualization pipeline for our data.

For a project intended to serve a wider community:

We request ECSS support to develop our major production code, "ABCDcode", into a community code for Science Discipline Z. As reported in our XRAC proposal, this application has routinely utilized up to 10,000 processes on Machine M, allowing our group to achieve important breakthroughs in our own research. As a result, we have received many requests from our colleagues to make ABCDcode available to them; however, early attempts by two other groups to use the software on Machine M have caused our code to crash for some of their datasets, and considerable performance degradation in other cases. In addition, we have encountered difficulties porting the code to Machines L and N.

We have a good idea of the reasons for these issues, since our code still embodies some assumptions specific to our own research problems. We hope that ECSS will help us to efficiently remove these restrictions and to develop and thoroughly test a generalized, robust and maintainable version of ABCDcode.

2. How would the success of this collaboration benefit your project?

Example:

The successful improvement of ABCDcode's parallel efficiency will enable the simulation of a dataset 6 times larger than previously attempted. This would allow us, with the SUs requested in this XRAC proposal, to directly compare our results with experiment, rather than rely on interpolation. The improved parallel efficiency will, additionally, enable and inform future optimizations necessary for subsequent investigations. The improved efficiency will also reduce the time-to-solution of current production simulations. This will greatly increase the group's scientific output.

Or, if you wish to develop a community capability:

The successful development and testing of ABCDcode to become a robust and maintainable community code would greatly expand the ability of the Science Discipline Z community to perform the collaborative research proposed to this XRAC review. It will enable us to provide support to its users with a sustainable effort level, including by people from other groups who are not part of the original development team. It will make it possible for us to consider making this system available via a Science Gateway, for the development of which we may need to request continued support next year.

3. Which member(s) of your team would collaborate with ECSS staff?

It is important to ensure that members of your team work hand in hand with the ECSS staff members, so that you fully understand what has been done and are ready to take over when the staff support period ends.

Example:

Two graduate students in our group will be dedicated to this project over the next year. They have experience in both the operation and maintenance of ABCDcode which they obtained over the last year. They will collaborate directly with ECSS staff and be responsible for explaining and implementing changes in source code. The ECSS staff will be responsible for identifying parallelization barriers, suggesting improvements, and answering specific questions about resource utilization.

4. Have you had significant interaction on previous projects related to your current proposal or discussed your extended support needs with any XSEDE staff? If so, please indicate with whom.

This helps us form the ECSS project team and the project plan by enlisting the named staff members in understanding and launching the support project. For example:

The scalability issues with ABCDcode on Machine M were first diagnosed by user consultant F, who, while handling a problem report we had submitted to the XOC, suggested that the likely culprit is our use of a primary-secondary task management pattern which relies on synchronization between a subset of secondary processes. She suggested that fixing the issues will require a sustained collaboration with ECSS staff.

It is expected that the impetus for many ECSS projects will come from the work of the Novel and Innovative Projects (NIP) development area. If this has been the case, it should be mentioned in response to Q4. For example:

We became aware of the potential of XSEDE for our research via our Campus Champion, Dr. C, who arranged for a series of discussions with ECSS NIP staff member Dr. N. This led to our Startup grant which forms the basis for our XRAC proposal, and it was Dr. N who suggested that our serial code should be parallelized and incorporated into an automated work and data flow system via the ECSS project proposed here. We request that Dr. N remain our principal contact as we execute this ECSS project.

5. Have you received XSEDE advanced support in the past? If so, please indicate the time period, and how the support you received then relates to the support you request now.

ECSS staff expertise is a valuable and limited resource shared by all XSEDE users, so it is important to make sure that as many research teams and communities as possible benefit from it. Therefore, requests for recurrent support must be justified and reviewed carefully, to ensure that it is indeed necessary for the full potential of past progress to be realized, or that this is a highly meritorious new project. For example:

Last year, our ECSS project led to the successful development of ABCDcode into a robust and maintainable community code. As discussed in our new XRAC proposal, we now wish to build a Science Gateway to enable an estimated 2,000 members of the Science Discipline Z community to run this code every year, on XSEDE systems L, M, and N, as well as on various campus-based clusters. As described above, we require the Gateway-building expertise of ECSS to efficiently achieve this goal.

Or, to argue that this is a new effort:

Two years ago, our TeraGrid ASTA project boosted the scalability of our "ABCDcode" to routinely utilize up to 10,000 processes on Machine M, allowing our group to achieve important breakthroughs in Science Problem P. Recent work by the PI and a new postdoc, using the XSEDE Startup grant mentioned in the XRAC proposal, has resulted in the creation of a new code, "GHIJcode", which is the first approach that shows any promise of tackling Problem Q, heretofore considered intractable. To be useful in production, GHIJcode would have to scale to 100,000 processes and to be made part of a complex distributed task and data flow. We hope that we will be able to achieve this, but only with ECSS help.

 

Varieties of ECSS Support

ECSS provides expert technical assistance to XSEDE users. XSEDE staff members can collaborate with XSEDE project teams in a variety of support roles:

  • Algorithmic or solver change; incorporation or implementation of advanced numerical methods and/or math libraries
  • Optimization (single processor performance, parallel scaling, parallel I/O performance, memory usage, or benchmarking improvement)
  • Development of parallel code from serial code/algorithm
  • Data visualization or analysis
  • Incorporation of data management or storage
  • Cloud computing work
  • Implementation of workflows for automation of scientific processes
  • Innovative scheduling implementation
  • Integration of XSEDE resources into a portal or Science Gateway

We find projects work best when approached collaboratively; therefore, we require involvement from the principal investigator's team. We are not able to support activities such as a complete handoff of a code for independent parallelization by ECSS staff. Also, we will generally not be able to assist with third-party software unless you or we have a developer's license or other collaborative relationship with the authors, or the software is open-source.

Key Points
A wide variety of support types are available
Examples demonstrate how to adequately justify your request
Contact Information

Champion Leadership Team

This page includes the Champions Leadership team and Regional Champions

Champion Staff
Name | Institution | Position
Dana Brunson | Internet2 | Campus Engagement Co-manager
Henry Neeman | University of Oklahoma | Campus Engagement Co-manager
Cathy Chaplin | Internet2 | Champion Coordinator
Jay Alameda | University of Illinois Urbana-Champaign | Champion Fellows Coordinator & ECSS Liaison

Champion Elected Leadership Team
Thomas Cheatham | University of Utah | Champion Leadership Team (2020-2022)
Douglas Jennewein | Arizona State University | Champion Leadership Team (2018-2022)
Timothy Middelkoop | University of Missouri | Champion Leadership Team (2018-2022)
Julie Ma | MGHPCC | Champion Leadership Team (2018-2022)
Shelley Knuth | University of Colorado | Champion Leadership Team (2019-2021)
BJ Lougee | Federal Reserve Bank of Kansas City (CADRE) | Champion Leadership Team (2019-2021)
Torey Battelle | Colorado School of Mines | Champion Leadership Team (2019-2021)

Champion Leadership Team Alumni
Hussein Al-Azzawi | University of New Mexico | Champion Leadership Team (2018-2020)
Aaron Culich | University of California-Berkeley | Champion Leadership Team (2017-2019)
Jack Smith | West Virginia Higher Education Policy Commission | Champion Leadership Team (2016-2018)
Dan Voss | University of Miami | Champion Leadership Team (2016-2018)
Erin Hodges | University of Houston | Champion Leadership Team (2017-2018)
Alla Kammerdiner | New Mexico State University | Champion Leadership Team (2017-2019)

Updated: June 18, 2020

Regional Champions

The Regional Champion Program is built upon the principles and goals of the XSEDE Champion Program. The Regional Champion network facilitates education and training opportunities for researchers, faculty, students, and staff in their region that help them make effective use of local, regional, and national digital resources and services. Additionally, the Regional Champion Program provides oversight and assistance in a predefined geographical region to ensure that all Champions in that region receive the information and assistance they require, and it establishes a bi-directional conduit between Champions in the region and the XSEDE champion staff, ensuring more efficient dissemination of information and allowing finer-grained support. Finally, the Regional Champions act as regional points of contact and coordination, assisting in scaling up the Champion program by working with the champion staff to coordinate and identify areas of opportunity for expanding outreach to the user community.

 

CHAMPION | INSTITUTION | DEPUTY CHAMPION | INSTITUTION | REGION
Ben Nickell | Idaho National Labs | Nick Maggio | University of Oregon | 1
Ruth Marinshaw | Stanford University | Aaron Culich | University of California, Berkeley | 2
Kevin Brandt | South Dakota State University | Chet Langin | Southern Illinois University | 3
Dan Andresen | Kansas State University | BJ Lougee | Federal Reserve Bank of Kansas City (CADRE) | 4
Mark Reed | University of North Carolina | Craig Tanis | University of Tennessee, Chattanooga | 5
Scott Hampton | University of Notre Dame | | | 6
Scott Yockel | Harvard University | Scott Valcourt | University of New Hampshire | 7
Anita Orendt | University of Utah | Shelley Knuth | University of Colorado | 8

Updated: April 13, 2020


 

Key Points
  • Leadership table
  • Regional Champions table

Combination of XSEDE, other resources allows team to span time scales in simulating dagger-like microbe-killing molecule

By Ken Chiacchia, Pittsburgh Supercomputing Center

Assembly of cell membrane components (red) and human beta-defensin type 3 (blue) from first principles. As the simulation plays out, the membrane components form a double-layered membrane, seen side-on, and the peptide binds to it.

 

Medical science is in a race to develop new and better antimicrobial agents to address infection and other human diseases. One promising example of such agents is the beta-defensins. These naturally occurring molecules stab microbes' outer membranes, dagger-like, causing their contents to spill out. A scientist at Tennessee Tech University used the XSEDE-allocated Bridges platform at PSC, as well as the D. E. Shaw Research (DESRES) Anton 2 system hosted at PSC, in a "one-two" simulation that shed light on a beta-defensin's initial binding to a microbial membrane. The work promises clues to agents that can better destroy microbes with membranes.

Molecular dynamics simulations of human beta-defensin type 3 (red and blue) in wildtype (left) and analog (right) forms binding to a simulated cell membrane (green). In both cases the two loops (R36 and L21) stick to the membrane, but in the analog form the "head" of the molecule (I2) sticks as well.  Reprinted in part with permission from The Journal of Physical Chemistry B. © 2020, American Chemical Society.

Why It's Important

Living through a pandemic, it's hardly necessary to point out how important new antimicrobial agents can be for treating afflictions from drug-resistant bacterial infections to COVID-19. One promising avenue of research focuses on the beta-defensins, a family of small protein-like peptides. These molecules, which consist of a chain of amino acids, are naturally produced by the body to kill bacteria. Better still, since intact cell membranes are so fundamental to survival, bacteria can't become resistant to this kind of attack. Beta-defensins can also suppress some viruses that have membranes, such as HIV, and engineered versions of beta-defensin may be able to attack the coronavirus as well. Scientists would like to know more about how the beta-defensins work. Such information could help them design both new antimicrobial agents and drugs that help the natural versions of these peptides work better.

"My lab works on the defensins, which work by first crossing the microbial lipid membrane. By breaking the lipid membrane, they cause leaking of the microbial contents. Our goal is to understand defensins' structure, their dynamics, and the functional relationship between them."—Liqun Zhang, Tennessee Tech

Beta-defensins work like little daggers, stabbing their way into the membrane of a microbe and spilling its contents so it can no longer infect healthy cells. But these peptides work in a changing environment. Conditions in the body range from oxygen-rich, oxidizing environments surrounding cells (for example, in the lungs) to oxygen-poor, reducing environments inside cells in the intestines. This causes important changes in the folding of a beta-defensin peptide such as human beta-defensin 3. The amino-acid chain in this beta-defensin's wild-type form is crosslinked to itself in three places via disulfide bonds. These links form in oxidizing conditions and break in reducing conditions. Scientists have long wondered how, and whether, beta-defensin can still destroy microbes in both forms.

To shed light on this question, Liqun Zhang at Tennessee Tech University found she needed to combine the complementary powers of the XSEDE-allocated Bridges platform at PSC and the DESRES Anton 2 supercomputer hosted at PSC.

How XSEDE Helped

As a first step, Zhang simulated the equilibrated form of human beta-defensin type 3. This consists of starting with the peptide's chain in a disordered tangle and using the rules of chemical interaction to let that chain find the combination of twists and turns it naturally settles into. Zhang found Bridges to be a great tool for this step. The National Science Foundation-funded platform's combination of large memory and computational efficiency allowed her to simulate the initial 20 to 500 nanoseconds (billionths of a second) needed for the chain to find this lowest-energy form.
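
For readers curious what such an equilibration step can look like in practice, here is a minimal sketch of a short solvated-peptide run using the open-source OpenMM toolkit. This is a generic illustration rather than the software or settings Zhang actually used; the input file name, force field, temperature, and run length are assumptions.

    # Generic peptide-equilibration sketch with OpenMM (illustration only; not the
    # study's actual workflow). 'hbd3_solvated.pdb' is a hypothetical input file
    # containing the peptide in a water box with ions.
    from openmm import LangevinMiddleIntegrator, unit
    from openmm.app import (PDBFile, ForceField, PME, HBonds, Simulation,
                            DCDReporter, StateDataReporter)

    pdb = PDBFile('hbd3_solvated.pdb')
    forcefield = ForceField('amber14-all.xml', 'amber14/tip3p.xml')
    system = forcefield.createSystem(pdb.topology, nonbondedMethod=PME,
                                     nonbondedCutoff=1.0*unit.nanometer,
                                     constraints=HBonds)

    # Langevin dynamics near body temperature with a 2 fs time step
    integrator = LangevinMiddleIntegrator(310*unit.kelvin, 1.0/unit.picosecond,
                                          0.002*unit.picoseconds)
    simulation = Simulation(pdb.topology, system, integrator)
    simulation.context.setPositions(pdb.positions)

    simulation.minimizeEnergy()  # relax bad contacts before dynamics
    simulation.reporters.append(DCDReporter('equil.dcd', 10_000))
    simulation.reporters.append(StateDataReporter('equil.log', 10_000, step=True,
                                                  potentialEnergy=True, temperature=True))

    simulation.step(10_000_000)  # 10 million steps x 2 fs = 20 ns, the low end of the 20-500 ns window

Whatever package is used, the pattern is the same: minimize, then run thermostatted dynamics long enough for the chain to settle into its lowest-energy fold while logging energies and coordinates.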

To simulate beta-defensin sticking to the membrane, though, she needed a much longer simulation: 5 to 7.5 microseconds (millionths of a second), over 10 times longer. To perform this simulation, she used Anton 2, which is made available to PSC without cost by DESRES and supported through operational funding from the National Institutes of Health. Anton 2 is a highly specialized supercomputer designed and developed by DESRES that greatly accelerates such molecular dynamics simulations. Because of its specialized hardware and software, Anton 2 can perform simulations nearly two orders of magnitude longer in a given length of real time than a general-purpose supercomputer. While Anton 2 is not an XSEDE system, its location at XSEDE member PSC and the shared support staff were a big help to her in making use of both supercomputers.

"The combination is necessary. Bridges can equilibrate a system for maybe 20 to 500 nanoseconds. Then we can move to Anton for a long time span. We appreciate the computer resources that PSC and XSEDE offer; without their support, it's hard for me to imagine how we could finish the work."—Liqun Zhang, Tennessee Tech

Zhang simulated both the disulfide-crosslinked wild-type peptide and the uncrosslinked analog version as each interacted with a virtual membrane typical of bacteria. In the initial Bridges simulations, she found that the wild-type version is much more rigid: its crosslinks hold its shape firmly, while the analog version, lacking crosslinks, is more flexible. The Anton 2 simulations showed an interesting consequence of this difference in flexibility. Two loops of the peptide chain initially stick to the membrane in both versions, but the analog version is flexible enough for an additional region, the "head" of the peptide, to stick as well. Zhang reported her results in The Journal of Physical Chemistry B in February 2020.
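
As a rough illustration of how contacts like these can be counted once a trajectory is in hand, the sketch below uses the open-source MDAnalysis package to check how often selected peptide residues approach the lipid phosphate groups. The file names, residue numbers (loosely matching the R36, L21, and I2 labels in the figure caption), lipid residue names, and the 4-angstrom cutoff are assumptions; this is not the analysis code used in the study.

    # Counting peptide-membrane contacts with MDAnalysis (generic sketch; file
    # names, residue numbers, lipid names, and the cutoff are placeholders).
    import MDAnalysis as mda
    from MDAnalysis.analysis import distances

    u = mda.Universe('system.psf', 'trajectory.dcd')  # hypothetical topology and trajectory

    # Regions of interest: the two membrane-binding loops and the peptide "head"
    regions = {
        'loop_R36': u.select_atoms('protein and resid 36'),
        'loop_L21': u.select_atoms('protein and resid 21'),
        'head_I2':  u.select_atoms('protein and resid 2'),
    }
    phosphates = u.select_atoms('resname POPE POPG and name P')  # assumed bacterial-like lipids
    CUTOFF = 4.0  # angstroms; a common, somewhat arbitrary contact criterion

    for name, region in regions.items():
        in_contact = 0
        for ts in u.trajectory:
            dmin = distances.distance_array(region.positions, phosphates.positions,
                                            box=u.dimensions).min()
            if dmin < CUTOFF:
                in_contact += 1
        print(f"{name}: in contact in {in_contact / u.trajectory.n_frames:.0%} of frames")

Counting how often each region stays within the cutoff over the full trajectory is one simple way to quantify the extra "head" contact seen in the analog simulations.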

It isn't yet clear what the effects of this different way of initially binding to the membrane may mean for beta-defensin's ability to destroy microbes. An important next step will be for Zhang to simulate the actual insertion of the peptide into the membrane and the disruption of the membrane. Another important step will be for Zhang's colleagues to test her predictions on real peptides in the lab, verifying the results and in turn uncovering details she can use to create better simulations. Ultimately, she hopes that these simulations will offer clues for designing drugs to combat microbes that cause disease. Zhang and her colleagues also plan to design beta-defensin-based small antimicrobial peptides to combat the coronavirus.

This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. Specifically, it used the Bridges system, which is supported by NSF award number ACI-1445606, at PSC. Anton 2 computer time was provided by PSC through Grant R01GM116961 from the National Institutes of Health. The Anton 2 machine at PSC was generously made available without cost by D. E. Shaw Research. Some of the short-term simulations and analysis were carried out at the high performance computers at Tennessee Technological University.

You can read Zhang's article in The Journal of Physical Chemistry B.

At a Glance:

  • Medical science is in a race to develop new and better antimicrobial agents

  • The beta-defensins are naturally occurring molecules that stab microbes' outer membranes, destroying them

  • A Tennessee Tech University team used the XSEDE-allocated Bridges platform at PSC, as well as the D. E. Shaw Research Anton 2 system hosted at PSC, in a "one-two" simulation of beta-defensin's initial binding to a microbial membrane

  • The work promises clues to agents that can better destroy microbes with membranes, such as bacteria and SARS-CoV-2


June 2020 | Science Highlights, Announcements & Upcoming Events
 
XSEDE helps the nation's most creative minds discover breakthroughs and solutions for some of the world's greatest scientific challenges. Through free, customized access to the National Science Foundation's advanced digital resources, consulting, training, and mentorship opportunities, XSEDE enables you to Discover More. Get started here.
 
Science Highlights
 
XSEDE Aids Drug Screening for Heart Arrhythmias
 
Computational pipeline tests for cardiotoxicity
 
 
Death from sudden cardiac arrest makes headlines when it strikes athletes. But it is also the leading cause of natural death in the U.S., estimated at 325,000 deaths per year. According to the Cleveland Clinic, the heart's bioelectrical system goes haywire during arrest. The malfunction can send heartbeats racing out of control, cutting off blood to the body and brain. This differs from a heart attack, which is caused by a blockage of the heart's arteries. The leading risk factors for sudden cardiac arrest are a previous heart attack and the presence of heart disease. Another risk factor is side effects from medications, which can potentially cause deadly arrhythmias.
 
Using XSEDE-allocated supercomputers, scientists have for the first time developed a way to screen drugs for induced arrhythmias based on their chemical structures.
 
 
A computational pipeline to screen drugs for cardiotoxicity has been developed with the use of supercomputers. The pipeline connects atomistic-scale information to the protein, cell, and tissue scales: it predicts drug-binding affinities and rates from simulations of hERG ion channel and drug structure interactions, then uses these values to model drug effects on hERG ion channel function and the resulting alteration of cardiac electrical activity. Credit: Yang et al., Circulation Research.
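 
To give a flavor of the final step in such a pipeline, the sketch below shows one common, simplified way to translate a drug's potency against the hERG channel into a reduced channel conductance for a cell-level model: a Hill-equation pore block that scales the rapid delayed-rectifier conductance by the unblocked fraction. The IC50, Hill coefficient, conductance, and concentrations here are placeholders, and the study itself used more detailed kinetic drug-binding models rather than this simple scaling.

    # Simple Hill-equation pore-block illustration (placeholder values; not the
    # kinetic drug-binding models used in the actual pipeline).
    def fractional_block(conc_uM, ic50_uM, hill=1.0):
        """Fraction of hERG channels blocked at a given drug concentration."""
        return conc_uM**hill / (conc_uM**hill + ic50_uM**hill)

    def scaled_g_kr(g_kr_max, conc_uM, ic50_uM, hill=1.0):
        """Scale the maximal rapid delayed-rectifier conductance by the unblocked fraction."""
        return g_kr_max * (1.0 - fractional_block(conc_uM, ic50_uM, hill))

    g_kr_max = 0.046  # placeholder maximal conductance (model-dependent units)
    for conc in [0.1, 0.5, 1.5, 3.0]:  # hypothetical drug concentrations, in micromolar
        g = scaled_g_kr(g_kr_max, conc, ic50_uM=1.5)
        print(f"{conc:4.1f} uM drug -> g_Kr at {g / g_kr_max:.0%} of control")

The reduced conductance would then be fed into an action-potential model to see whether repolarization is dangerously prolonged at clinically relevant concentrations.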
 
XSEDE-Allocated Resources Used for High Tech Materials Science Study
 
Supercomputing advances zirconia research for future applications
 
 
For thousands of years, humans have produced ceramics by simply combining specific minerals with water or other solvents to create ceramic slurries that cure at room temperature and become some of the hardest known materials. In more recent times, zirconia-based ceramics have been useful for an array of applications ranging from dental implants and artificial joints to jet engine parts.
 
With help from XSEDE, researchers are conducting simulations to assess zirconia-based ceramic's ability to withstand harsh conditions as well as its fracture and fatigue limitations.
 
 
Stress-strain relations of four-, five-, and seven-twin-boundary (TB) YSTZ nanopillars. Credit: N. Zhang and M. Asle Zaeem.
 
Settling In
 
XSEDE-allocated resources correct computer-predicted protein simulations, approaching lab accuracy
 
 
To understand how the tiny machinery of life works in health and disease, scientists need accurate pictures of how proteins fold and move. But laboratory methods for imaging proteins are slow, so the structures of hundreds of thousands of known proteins remain undetermined. Scientists have used a number of methods for predicting structures via computer simulation, but sometimes even high-quality simulations aren't as accurate as drug designers may want. A research team used the GPU nodes of XSEDE-allocated supercomputers to refine predictions made by other scientists. In the process, they produced structures for a number of proteins with accuracy approaching the most precise, X-ray-based lab measurements.
 
 
An example of refinement in a portion of one of the proteins simulated. Credit: Heo L, Feig M.
 
Graphene-Reinforced Carbon Fiber May Lead to Affordable, Stronger Car Materials
 
Simulations via XSEDE allocation offer insight into chemical reactions
 
 
A new way of creating carbon fibers, which are typically expensive to make, could one day lead to using these lightweight, high-strength materials to improve safety and reduce the cost of producing cars, according to a team of researchers. Using a mix of computer simulations (enabled by an XSEDE allocation) and laboratory experiments, the team found that adding small amounts of 2D graphene to the production process both reduces the production cost and strengthens the fibers.
 
 
Credit: Z. Gao et al.
 
XSEDE-Allocated Resources Simulate Solar Cells
 
Perovskite research shows promise for future inexpensive, efficient solar options
 
 
Solar energy has become a popular renewable source of electricity around the world, with silicon serving as the primary photovoltaic material due to its efficiency and stability. Because of silicon's relatively high cost, hybrid organic-inorganic perovskites (HOIPs) have emerged as a lower-cost and highly efficient option for solar power. The search for stable, efficient, and environmentally safe perovskites has become an active avenue of materials research, with new findings relying on simulations made possible through access to supercomputers provided by XSEDE.
 
 
Simulations of four lead-free perovskites show that these materials exhibit promising features for solar energy options. They are now being synthesized for further investigation. Credit: H. Tran, et al, and V. Ngoc Tuoc.
 
Program Announcements
 
XSEDE EMPOWER Now Accepting Applications for Fall Internships and Mentorships
 
 
XSEDE EMPOWER (Expert Mentoring Producing Opportunities for Work, Education, and Research) provides undergraduate students with the opportunity to work on a variety of XSEDE projects, such as computational and/or data analytics research and education in all fields of study, networking, system maintenance and support, visualization, and more. Mentors help engage undergraduates in the work of XSEDE, and the EMPOWER program aims to enable a diverse group of students to participate in that work. To apply to mentor one or more students, create one or more positions by following the link below. If you have a student in mind to work with, that student should also submit an application. The deadline for mentors and students to apply for Fall 2020 participation is July 10, 2020.
 
 
Check out this video to learn more about XSEDE EMPOWER and what two recent interns have to say about the program.
 
Computing4Change Application Deadline Extended
 
 
For undergraduate students who want to enhance their skill sets and create positive change in their communities, XSEDE is accepting applications for Computing4Change (C4C). C4C is a competition for students from diverse disciplines and backgrounds who want to work collaboratively to learn to apply data analysis and computational thinking to a social challenge. It is currently planned to be co-located with SC20 in Atlanta, GA, November 14-20, 2020 (but may become virtual). The deadline to apply has been extended to July 19, 2020.
 
 
XSEDE Cyberinfrastructure Integration (XCI) Updates
 
 
XSEDE's InCommon Identity Provider (IdP), idp.xsede.org, enables XSEDE users to sign in to web sites that are part of the InCommon Federation (for example, GENI and ORCID) using their XSEDE accounts. It is especially useful for users who do not have an InCommon IdP provided by their home institution. Several configuration updates have been made recently to comply with community standards and maintain effective operational security. Please be sure your web browser is up to date.
 
 
Community Announcements
 
Jumpstart Your Sustainability Plan with SGCI's Free Virtual Mini-Course
 
 
Interested in developing a sustainability strategy for your gateway? Register by June 12, 2020 for the Science Gateway Community Institute's Jumpstart Your Sustainability Plan mini-course! It's free and offered virtually. The course will take place June 16-18, 2020, with main presentations from 12-1:30 pm ET each day and optional office hours and special-topics presentations from 2-5 pm ET.
 
Given that COVID-19 has, for now, limited the ability to travel and gather in person, SGCI is making the best of the situation and offering this free new mini-course instead of Gateway Focus Week, one of SGCI's most popular programs. Jumpstart Your Sustainability Plan will focus solely on offering practical and effective steps for developing a sustainability strategy. The mini-course offers PIs of research and teaching-focused gateways and their teams the perfect way to kick off sustainability planning, whether you are writing a new grant or ready to get to the next level with a more mature project.
 
 
Upcoming Dates and Deadlines
 

COVID-19 Response: To our valued stakeholders and XSEDE collaborators,
By now you have received a flurry of communication surrounding the ongoing COVID-19 pandemic and how various organizations are responding, and XSEDE is no exception. As XSEDE staff have transitioned out of their usual offices and into telecommuting arrangements with their home institutions, they have worked both to support research around the pandemic and to ensure we operate without interruption.