
Service Provider Forum

The national cyberinfrastructure ecosystem is powered by a broad set of Service Providers (SPs). The XSEDE Federation primarily consists of SPs: autonomous entities that agree to coordinate with XSEDE and with each other to varying degrees. The XSEDE Federation may also include other, non-service-provider organizations.

Service Providers are projects or organizations that provide cyberinfrastructure (CI) services to the science and engineering community. In the US academic community, there is a rich diversity of SPs, ranging from NSF-funded centers that operate large-scale resources for the national research community to universities that provide resources and services to their local researchers. The Service Provider Forum is intended to facilitate this ecosystem of SPs, thereby advancing the science and engineering researchers who rely on these cyberinfrastructure services. The SP Forum provides:

  • An open forum for discussion of topics of interest to the SP community.
  • A formal communication channel between the SP Forum members and the XSEDE project.

SPs are classified at a specific level by meeting a minimum set of conditions. They may meet additional conditions at their option, but classification at a specific level is based on the stated required minimum conditions.

Briefly, Level 1 SPs meet all the XSEDE integration requirements and explicitly share digital services with the broader community. Level 2 SPs make some digital services accessible via XSEDE, and Level 3 SPs are the most loosely coupled: they share the characteristics of their services via XSEDE but need not make those services available beyond their local community. For more detailed descriptions, see the documents linked below.

Leadership

SP Forum Elected Officers (as of January 17, 2019):

  • Chair: Ruth Marinshaw, Stanford University
  • Vice Chair: Mary Thomas, San Diego Supercomputer Center
  • L2 Chair: Thomas Doak, NCGAS/Indiana University
  • L3 Chair: Chet Langin, Southern Illinois University
  • XAB Representative: David Hancock, Indiana University
  • XAB Representative: Jonathan Anderson, CU Boulder

Charter

SPF Charter

Membership Application

SPF Membership Application

Current XSEDE Federation Service Providers

 
SERVICE PROVIDER SP LEVEL REPRESENTATIVE INSTITUTION
Stampede Level 1 Dan Stanzione Univ of Texas at Austin
Comet Level 1 Mike Norman; Bob Sinkovits & Shawn Strande San Diego Supercomputer Center
Wrangler Level 1 Dan Stanzione Texas Advanced Computing Center
Jetstream Level 1  David Hancock, Jeremy Fischer Indiana University
Bridges Level 1 Nicholas A. Nystrom Pittsburgh Supercomputing Center (PSC)
NCAR Level 2 Irfan Elahi & Eric Newhouse NCAR
Indiana University Level 2 Craig Stewart Indiana University
OSG Level 2 Miron Livny Univ of Wisconsin
Blue Waters Level 2 Bill Kramer NCSA/Univ of Illinois
SuperMIC Level 2 Seung-Jong (Jay) Park and Steve Brandt Louisiana State University
Rosen Center Level 2 Carol Song Purdue University
Stanford Research Computing Center Level 2 Ruth Marinshaw Stanford University
Beacon Level 2 Gregory D. Peterson UTK/NICS
Science Gateways Community Institute Level 2 Nancy Wilkins-Diehr Science Gateways Community Institute
Rutgers Discovery Informatics Institute (RDI2) Level 3 J.J. Villalobos  Rutgers University
Minnesota Supercomputing Institute (MSI) Level 3 Jeff McDonald Minnesota Supercomputing Institute
Oklahoma State University High Performance Computing Center (OSUHPCC) Level 3 Dana Brunson Oklahoma State University
Institute for Cyber-Enabled Research (iCER) Level 3 Andy Keen Michigan State University
Oklahoma University Supercomputing Center for Education & Research (OSCER) Level 3 Henry Neeman The University of Oklahoma
USDA-PBARC (Moana) Level 3 Brian Hall, Scott Geib University of Hawaii
Arkansas High Performance Computing Center (AHPCC) Level 3 Jeff Pummill University of Arkansas
DataONE Level 3 Bruce Wilson University of New Mexico
Institute for Computational Research in Engineering and Science (ICRES) Level 3 Daniel Andresen Kansas State University
Research Technology (RT) Level 3 Shawn Doughty Tufts University
ORION computational resources Level 3 Suranga Edirisinghe Georgia State University (GSU)
Advanced Research Computing - Technology Services (ARC-TS) Level 3 Brock Palen University of Michigan
Palmetto Level 3 Dustin Atkins, Corey Ferrier Clemson University
Langston University Computing Center for Research and Education (LUCCRE) Level 3 Franklin Fondjo-Fotou Langston University
Holland Computing Center (HCC) Level 3 David Swanson University of Nebraska
University of Wyoming Level 3 Tim Brewer University of Wyoming
West Virginia University Research Computing Group Level 3 Nathan Gregg West Virginia University
ROGER Level 3 Shaowen Wang and Anand Padmanabhan NCSA
NCGAS Level 3 Thomas Doak and Robert Henschel Indiana University
Research Computing Group at USD Level 3 Douglas M. Jennewein University of South Dakota
University of Colorado-Boulder's Research Computing Group Level 3 Thomas Hauser University of Colorado-Boulder
Center for Computational Science & Technology Level 3 Dane Skow North Dakota State University
BigDawg Level 3 Chet Langin Southern Illinois University Carbondale
Research Computing Support Services Level 3 Timothy Middelkoop University of Missouri
Information Technologies Level 3 Jeff Frey, Anita Schwartz University of Delaware
Maryland Advanced Research Computing Center (MARCC) Level 3 Jaime Combariza Johns Hopkins University
Hewlett Packard Enterprise (HPE) Data Science Institute (DSI) Level 3 Amit Amritkar University of Houston

Former XSEDE Federation Service Providers

 
Service Provider SP Level Representative Institution Letter of Intent Acceptance Date Exit Date
Gordon Level 1 Mike Norman UCSD/SDSC Gordon LOI 17 September 2012 (Acceptance Letter) Spring 2017
FutureGrid Level 1 Geoffrey Fox Indiana University FutureGrid LoI 17 September 2012 (Acceptance Letter) Fall 2014
Longhorn Level 1 Kelly Gaither UT-Austin/TACC Longhorn LoI 11 October 2012 (Acceptance Letter) February 2014 (Longhorn decommissioned)
Steele/Wispy Level 1 Carol Song Purdue Steele LoI 17 September 2012 (Acceptance Letter) July 2013 (Steele/Wispy decommissioned)
Ranger Level 1 Dan Stanzione UT-Austin/TACC Ranger LoI 11 October 2012 (Acceptance Letter) February 2013 (Ranger decommissioned)
MSS Level 2 John Towns NCSA/Univ of Illinois MSS LoI 17 September 2012 (Acceptance Letter) 30 September 2013 (MSS decommissioned)
Kraken Level 1 Mark Fahey UTK/NICS Kraken LoI 17 May 2012 (Acceptance Letter) 30 April 2014 (Kraken decommissioned)
Lonestar Level 1 Dan Stanzione UT-Austin/TACC Lonestar LoI 11 October 2012 (Acceptance Letter) 31 December 2014 (Lonestar decommissioned)
Keeneland Level 1 Jeffery Vetter GaTech Keeneland LoI (updated 3/7/13) 27 March 2013 (Acceptance Letter) 31 December 2014 (Keeneland decommissioned)
Trestles Level 1 Richard Moore UCSD/SDSC Trestles LoI 17 May 2012 (Acceptance Letter) May 2015 (Trestles decommissioned)
Darter/Nautilus Level 1 Sean Ahern UTK/NICS RDAV LoI 17 September 2012 (Acceptance Letter)
Maverick Level 1 Dan Stanzione UT-Austin/TACC N/A Unknown 1 April 2018

XSEDE's Peer Institutions

XSEDE also needs to interact with other XSEDE-like organizations as peers. There are already such examples both within the United States and internationally.

Current international interactions include:

  • Partnership for Advanced Computing in Europe (PRACE) – Memorandum of Understanding (MOU)
  • European Grid Infrastructure (EGI) - Call for Collaborative Use Examples (CUEs)
  • Research Organization for Information Science and Technology (RIST) – Memorandum of Understanding (MOU)

Key Points
Service Providers contribute to XSEDE's Cyberinfrastructure

In a First, an AI Program Running on an XSEDE Resource Overcomes Multiple Human Poker Champions

Ken Chiacchia, Pittsburgh Supercomputing Center

 

Artificial intelligence (AI) research took a great leap forward when a Carnegie Mellon University computer program overcame the world's best professional players in a series of six-player poker games. Experimenting with multi-player, "incomplete information" games offers more useful lessons for real-world problems such as security, business negotiations and cancer therapy than one-on-one, "complete information" games like chess or go. Running on the XSEDE-allocated Bridges system at the Pittsburgh Supercomputing Center, the Pluribus AI was the first to surpass humanity's best at such a game.

Why It's Important

It's obvious, but it bears repeating: Life is not chess. In real-world problems, the pieces are not lined up neatly for all to see. Terrorists have secret plans. Businesspeople have undisclosed deal-breakers and hidden needs that can torpedo negotiations. Cancer cells evade the body and drug treatments by mutating their genes.

Poker may not be a perfect representation of these problems, but it's a lot closer. Players keep their hands secret, and try to bluff and shift their strategies to keep opponents off-balance. In 2017, Carnegie Mellon's School of Computer Science grad student Noam Brown and his faculty advisor Tuomas Sandholm broke through the barrier of such imperfect-information games. That's when their artificial intelligence (AI) program, called Libratus, surpassed four of the world's best humans in heads-up (two-player), no-limit Texas Hold'em poker, running on the XSEDE-allocated Bridges at the Pittsburgh Supercomputing Center. That victory had been the first in which an AI overcame top players in an incomplete-information game.

"Being unpredictable is a huge part of playing poker … you have to be unpredictable; you have to bluff. If you don't have a strong hand you have to check; if you do have a strong hand you can't tip off the other players. Humans are good at that; Pluribus is very good at that."—Noam Brown, Facebook AI Research and Carnegie Mellon University.

One limitation of the earlier work was that the AI had only faced humans one-on-one. This is a far simpler game than the usual, multi-player poker game, in which a player has to change how to play a given combination of cards from hand to hand to deal with the shifting plays produced by multiple opponents' strategies. At the time of Libratus's victory, many experts felt that the multi-player game problem might not be winnable in the foreseeable future. Still, Brown (now at Facebook AI Research) and Sandholm felt it was worth a try. They essentially started over with their new project—but still used the power of Bridges to develop and run the new AI.

How XSEDE Helped

The transition from head-to-head to multi-player poker required a stronger AI approach than the researchers had used with Libratus. Like the earlier AI, Pluribus taught itself to play Texas Hold'em poker before facing the pros. Like Libratus, Pluribus also discovered strategies that humans do not normally employ. But Pluribus played and learned in a fundamentally different way than its predecessor.

Libratus had been designed to think through the entire remaining game when deciding each move. The Carnegie Mellon team realized that such a strategy would never work in multi-player poker because the game size would grow exponentially as the number of players increases. This was one reason why some experts thought the problem might not be solvable.

"You have to understand that opponents can adapt. If you only employ one strategy, you might be exploitable. In rock, paper, scissors, if we assume the other player is responding randomly, if you always throw rock you always break even. But when the other player adapts to always throwing paper, that strategy fails. Understanding that players can switch strategies is a big part of the game."—Noam Brown, Facebook AI Research and Carnegie Mellon University.

The researchers took the idea of a good-enough strategy one step further. Would it be possible to stay ahead of multiple opponents if the AI only planned a few steps ahead, rather than all the way to the end of the game? Such a "limited look-ahead" approach would free up computing power to react to and overcome each opponent's moves.
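The snippet below is a toy sketch of the limited look-ahead idea: a generic depth-limited search over a made-up counting game. Pluribus's actual search handles multiple players and hidden information and is far more sophisticated; this only illustrates stopping the search at a horizon and substituting an estimated value.

    # Generic depth-limited look-ahead sketch (illustrative, not the Pluribus algorithm):
    # instead of searching to the end of the game, stop after `depth` moves and
    # substitute an estimated value for the remainder.
    def lookahead_value(state, depth, game, evaluate):
        if game.is_terminal(state):
            return game.terminal_value(state)
        if depth == 0:
            return evaluate(state)  # "good enough" estimate at the search horizon
        return max(lookahead_value(game.next_state(state, a), depth - 1, game, evaluate)
                   for a in game.legal_actions(state))

    def choose_action(state, game, evaluate, depth=3):
        return max(game.legal_actions(state),
                   key=lambda a: lookahead_value(game.next_state(state, a), depth - 1, game, evaluate))

    class CountingGame:
        """Toy game: add 1 or 2 per turn; the game ends at 10 or more,
        and landing exactly on 10 is best."""
        def legal_actions(self, s): return [1, 2]
        def next_state(self, s, a): return s + a
        def is_terminal(self, s): return s >= 10
        def terminal_value(self, s): return -abs(s - 10)

    print(choose_action(0, CountingGame(), evaluate=lambda s: -abs(s - 10)))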

Pluribus compiled the data and trained itself running on one of Bridges' large-memory nodes, each of which features 3 terabytes of RAM—about 100 times that of a high-end laptop, and 20 times what is considered large memory on most supercomputers. Play took place on one of Bridges' regular-memory nodes. Bridges also helped the Carnegie Mellon team by offering massive data storage.

"Many thought the multi-player game was not possible to win [by an AI]; others thought it would be too computationally expensive. I don't think anybody thought it would be that cheap."—Noam Brown, Facebook AI Research and Carnegie Mellon University.

While Pluribus used more computing power than is available on commodity personal computers, its performance represented a huge savings in computing time over Libratus. The earlier AI used around 15 million core hours over two months to develop its strategies and 50 of Bridges' powerful compute nodes to play. By comparison, Pluribus trained itself in eight days using 12,400 core hours and used just one node during live play. This suggests that such AIs may be able to run on commodity computers in the not-too-distant future.
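For a rough sense of scale, here is a back-of-the-envelope comparison using only the figures quoted above; the per-training-run core estimate is a derived average, not a reported number.

    # Scale comparison from the core-hour figures quoted in the article.
    libratus_core_hours = 15_000_000   # training, over roughly two months
    pluribus_core_hours = 12_400       # training, over eight days

    print(libratus_core_hours / pluribus_core_hours)  # roughly 1,200x fewer core hours
    print(pluribus_core_hours / (8 * 24))             # about 65 cores busy on average during training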

Pluribus used its limited-lookahead strategy in an online tournament from June 1 to 12, 2019, against a total of 13 poker champions, each of whom had won over $1 million in his poker career. The culmination of the Facebook-funded tournament was a series of 10,000 hands against five of the pros at once. Pluribus racked up a literally super-human win rate. The human players reported that the AI's strategy was impossible to predict and it often made plays that experienced humans never do—probably because doing so successfully is too complicated for the human brain.

 

The CMU team plans to apply Pluribus's insights far afield of poker, with promising possible uses in business negotiations, medical treatment planning and other fields. Their report of the results appeared as the cover article in the journal Science in August.


"Deep learning" Artificial Intelligence on XSEDE Systems Promises Fewer False Alarms and Early Prediction of Breast Cancer Development

Using AI to identify false recalls by classifying digital mammogram images from breast cancer screening into three categories: negative, false recall, and malignancy.

Screening mammography is an important tool against breast cancer, but it has its limitations. A "normal" screening mammogram means that a woman doesn't have cancer now. But doctors wonder whether "normal" images contain clues about a woman's risk of developing breast cancer in the future. Also, most women "recalled" for more tests when their mammograms show suspicious masses don't have cancer. With the help of XSEDE's Extended Collaborative Support Service (ECSS) and Novel and Innovative Projects (NIP), scientists at the University of Pittsburgh Medical Center are using the XSEDE-allocated Bridges-AI supercomputer at the Pittsburgh Supercomputing Center to run artificial intelligence programs meant to determine the risk of developing breast cancer and to prevent false recalls.

Why It's Important

Despite a lot of progress in improving survival and quality of life for women with breast cancer, the disease remains a major threat to women's health. It's the most common cancer in women and is either the first or second most common cause of cancer death for women in the largest racial and ethnic groups, accounting for 41,000 deaths in 2016 alone. 

Screening mammography is an important tool for getting early warning, when the disease is easiest to treat. But it's not perfect. For women whose scans show no signs of breast cancer, doctors wonder whether that scan may contain information they could use to predict future risk. More than 10 percent of women who get mammograms are "recalled" for further testing. But nearly 90 percent of the time it's a false alarm. That's something like 3 million women in the U.S. who go through the stress of unnecessary recall each year. 

 

"We collected a large set of images from UPMC's digital screening mammography … We wanted to see if ‘normal,' or negative, digital mammography images in screening were predictive for the risk of breast cancer in the future. There's also the very critical breast-cancer screening issue that when a lot of women come for mammography and when radiologists visually assess their images, it's not certain in many images whether cancer is present or not. This creates a difficult decision-making process on whether to ask these women to come back for additional workup."—Shandong Wu, University of Pittsburgh Medical Center

Short-term breast cancer risk prediction from negative screening digital mammograms (first row). Second row shows heatmaps illustrating the deep learning-identified regions that are associated with the risk assessment.

Expert radiologists can tell a lot from a modern digital mammogram. But Shandong Wu and his colleagues at the University of Pittsburgh Medical Center (UPMC) wondered if artificial intelligence (AI) could detect subtle hints in mammograms that the human eye can't see. They tested their "deep learning" AIs on digital mammograms from UPMC patients whose status was already known, running the programs on the XSEDE-allocated Bridges-AI system at the Pittsburgh Supercomputing Center.

How XSEDE Helped

The task for the UPMC scientists' deep-learning software was a big one. Each digital mammogram is more than 2,000 by 3,000 pixels in size—that's dozens of megabytes of data per image. And to do their study, they needed to "train" and then test their AIs on thousands of images. Their AIs were also pretrained on large datasets with tens of thousands of images. The size of the data for AI modeling was enormous, making the computations slow on the computers available to the researchers in their own laboratory.
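For a rough sense of the data volume, here is a quick estimate assuming 16-bit grayscale pixels and the stated minimum image dimensions; the study's exact image format is not given here.

    # Rough data-volume estimate; the 16-bit pixel depth is an assumption, and the
    # images are described only as "more than 2,000 by 3,000 pixels".
    width, height = 2000, 3000
    bytes_per_pixel = 2          # 16-bit grayscale (assumed)
    images = 10_000              # "more than 10 thousand" images

    per_image_mb = width * height * bytes_per_pixel / 1e6
    print(per_image_mb)                 # ~12 MB per image at the minimum size; real detectors are larger
    print(per_image_mb * images / 1e3)  # ~120 GB for the full image set, before any preprocessing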

"[Roberto] was really helpful. He converted a Matlab container to Singularity, and he wrote a wrapper to run Matlab R2019a on the DGX-2. Sergiu Sanielevici [leader of XSEDE's Novel and Innovative Projects] has been very helpful in supporting our research and he regularly inquires our progress and needs, making sure our issues are properly addressed. Tom Maiden, Rick Costa and Bryan Learn also assisted us in solving problems. Without the support of XSEDE, I don't think we would have been able to do this work."—Shandong Wu, University of Pittsburgh Medical Center

Working with experts from XSEDE's Novel and Innovative Projects program and Extended Collaborative Support Service (ECSS), including Roberto Gomez and other ECSS staff at PSC, the team used the graphics processing unit (GPU) nodes of Bridges-AI to train and run their AIs. Deep learning, which works by building up layers of different kinds of information and then pruning connections between the layers that don't produce the desired result, tends to work best on GPUs. The new NVIDIA "Volta" GPUs in Bridges-AI contain accelerators, called "tensor cores," specifically designed for deep learning. Bridges-AI's GPU nodes combine eight to 16 GPUs each, for up to 512 gigabytes of extremely fast GPU memory. The large memory available to PSC's GPU nodes was central to the success of the AIs, bringing the computation time down from weeks to hours. The NVIDIA DGX-2 node, deployed in Bridges-AI as a first for open research, and its massive memory were particularly useful.
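The snippet below is a minimal, generic sketch of GPU training with mixed precision, which is the usual way deep-learning code exercises the Volta tensor cores mentioned above. It is not the UPMC mammography pipeline; the tiny network and random tensors stand in for a real model and real images.

    import torch
    import torch.nn as nn

    # Generic mixed-precision GPU training loop (illustrative only).
    device = "cuda" if torch.cuda.is_available() else "cpu"
    use_amp = device == "cuda"

    model = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
    ).to(device)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

    for step in range(10):
        images = torch.randn(8, 1, 256, 256, device=device)  # stand-in for mammogram tiles
        labels = torch.randint(0, 2, (8,), device=device)    # stand-in labels (e.g., recall vs. negative)
        optimizer.zero_grad()
        with torch.cuda.amp.autocast(enabled=use_amp):        # forward pass in mixed precision
            loss = loss_fn(model(images), labels)
        scaler.scale(loss).backward()                         # loss scaling keeps small gradients from underflowing
        scaler.step(optimizer)
        scaler.update()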

When an AI is designed to produce a binary result—yes or no, positive or negative—scientists often report that result as a graph of true positives versus false positives. The larger the "area under the curve," or AUC, the better the AI's accuracy. AUC can range from zero to one; a value of 0.5 means the classifier does no better than chance, and a value of one implies perfect prediction. Versions of the screening AIs had an AUC of 0.73 when predicting whether a woman with a negative mammogram would develop cancer over the next 1.5 years. Better yet, the recall AIs could tell the difference between women with cancer and those who would have been recalled even though they didn't have cancer, with an AUC of 0.80. These results are promising and could have value for clinical use after further evaluation.
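As a small illustration of the AUC metric described above, here is generic scikit-learn usage with made-up labels and scores, not the study's data.

    from sklearn.metrics import roc_auc_score

    # Made-up ground truth (1 = cancer / true recall, 0 = negative) and model scores.
    y_true  = [0, 0, 0, 1, 0, 1, 1, 0, 1, 0]
    y_score = [0.10, 0.40, 0.35, 0.80, 0.75, 0.70, 0.65, 0.30, 0.90, 0.50]

    # 1.0 means perfect ranking of positives above negatives; 0.5 is no better than chance.
    print(roc_auc_score(y_true, y_score))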

"In this kind of work—medical images—we deal with larger-scale and sometime 3D volumetric data. All those images are high-resolution images … and we have more than 10 thousand [of them]. Our local GPUs did not have enough memory to accommodate such a scale of data for AI modeling. It could take weeks to run one experiment without the support of powerful GPUs. Using the GPUs from XSEDE, with larger memory, reduced that to a couple of hours."—Shandong Wu, University of Pittsburgh Medical Center

With XSEDE support, Dr. Wu's lab is developing several other AIs to improve breast cancer diagnosis. One would pre-read digital breast tomosynthesis—a kind of 3D breast imaging method—to reduce the time it takes radiologists to read the scans. Another is designed to automatically identify and correct mistakes in the labeling in a dataset for AI learning. Finally, the scientists are also working on AIs to predict breast cancer pathology test markers and the recurrence risk for women who've already been diagnosed with breast cancer.

Further work beyond these preliminary results will compare the benefits of improved AIs against the current methods used by doctors. The aim is to improve care and lower cost in real-world clinical practice. The UPMC team reported their results in five oral presentations at the Radiological Society of North America (RSNA) Annual Meeting in Chicago last year, three presentations at the Society of Photo-Optical Instrumentation Engineers Medical Imaging conference in San Diego this year, and several upcoming journal manuscripts.


Current Campus Champions

Current Campus Champions, listed by institution. Institutions in Established Program to Stimulate Competitive Research (EPSCoR) jurisdictions and minority-serving institutions (MSIs) are also indicated.

Campus Champion Institutions
Total Academic Institutions 272
    Academic Institutions in EPSCoR jurisdictions 82
    Minority Serving Institutions 54
    Minority Serving Institutions in EPSCoR jurisdictions 18
Non-academic, not-for-profit organizations 32
Total Campus Champion Institutions 304
Total Number of Champions 627

LAST UPDATED: September 5, 2019

See also the lists of Leadership Team and Regional Leaders, Domain Champions, and Student Champions.

Institution Campus Champions EPSCoR MSI
Alabama A & M University Damian Clarke, Raziq Yaqub
Albany State University Olabisi Ojo, Konda Reddy Karnati  
Arizona State University Michael Simeone (domain) , Sean Dudley, Johnathan Lee, Lee Reynolds, William Dizon, Jorge Henriquez, Ian Shaeffer, Dalena Hardy, Gil Speyer, Sirong Lu, Richard Gould, Chris Kurtz, Jason Yalim    
Arkansas State University Hai Jiang  
Auburn University Tony Skjellum  
Austin Peay State University Justin Oelgoetz    
Bates College Kai Evenson  
Baylor College of Medicine Pavel Sumazin , Hua-Sheng Chiu, Hyunjae Ryan Kim    
Baylor University Mike Hutcheson, Carl Bell, Brian Sitton    
Bentley University Jason Wells    
Bethune-Cookman University Ahmed Badi  
Boise State University Kyle Shannon, Mike Henry (student), Jason Watt, Kelly Byrne, Mendi Edgar  
Boston Children's Hospital Arash Nemati Hayati    
Boston University Wayne Gilmore, Charlie Jahnke, Augustine Abaris, Shaohao Chen, Brian Gregor, Katia Oleinik, Jacob Pessin    
Bowdoin College Dj Merrill  
Brandeis University John Edison    
Brown University Helen Kershaw, Maximilian King, Paul Hall, Khemraj Shukla, Mete Tunca, Paul Stey  
California Baptist University Linn Carothers  
California Institute of Technology Tom Morrell    
California State Polytechnic University-Pomona Chantal Stieber    
California State University-Sacramento Anna Klimaszewski-Patterson  
Carnegie Institution for Science Floyd A. Fayton, Jr.    
Carnegie Mellon University Bryan Webb, Franz Franchetti    
Case Western Reserve University Roger Bielefeld, Hadrian Djohari, Emily Dragowsky, James Michael Warfe, Sanjaya Gajurel    
Centre College David Toth  
Chapman University James Kelly    
Children's Research Institute, Children's Mercy Kansas City Shane Corder    
Citadel Military College of South Carolina John Lewis  
Claremont McKenna College Jeho Park    
Clark Atlanta University Dina Tandabany  
Clarkson University Jeeves Green, Joshua A. Fiske
Clemson University Marcin Ziolkowski, Xizhou Feng, Ashwin Srinath, Jeffrey Denton, Corey Ferrier  
Cleveland Clinic Foundation Iris Nira Smith, Daniel Blankenberg    
Clinton College Terris S. Riley
Coastal Carolina University Will Jones, Thomas Hoffman  
Colby College Randall Downer  
College of Charleston Berhane Temelso  
College of Staten Island CUNY Sharon Loverde  
College of William and Mary Eric Walter    
Colorado School of Mines Torey Battelle    
Columbia University Rob Lane, George Garrett, John Villa    
Complex Biological Systems Alliance Kris Holton    
Cornell University Susan Mehringer    
Dakota State University David Zeng  
Dillard University Tomekia Simeon, Brian Desil (student), Priscilla Saarah (student)
Doane University-Arts & Sciences Adam Erck, Mark Meysenburg  
Dominican University of California Randall Hall    
Drexel University David Chin    
Duke University Tom Milledge    
Earlham College Charlie Peck    
Federal Reserve Bank Of Kansas City (CADRE) BJ Lougee, Chris Stackpole, Michael Robinson    
Federal Reserve Bank Of Kansas City (CADRE) - OKC Branch Greg Woodward  
Federal Reserve Bank Of New York Ernest Miller, Kevin Kelliher    
Felidae Conservation Fund Kevin Clark    
Ferris State University Luis Rivera, David Petillo    
Fisk University Michael Watson  
Florida A and M University Hongmei Chi, Temilola Aderibigbe (student), George Kurian (student), Jesse Edwards, Stacyann Nelson (student)  
Florida Atlantic University Rhian Resnick    
Florida International University David Driesbach, Cassian D'Cunha  
Florida Southern College Christian Roberson    
Florida State University Paul van der Mark    
Francis Marion University K. Daniel Brauss, Jordan D. McDonnell
George Mason University Jayshree Sarma, Jeffrey Bassett, Alastair Neil    
George Washington University Hanning Chen, Adam Wong, Glen Maclachlan, William Burke    
Georgetown University Alisa Kang    
Georgia Institute of Technology Mehmet Belgin, Semir Sarajlic, Nuyun (Nellie) Zhang, Sebastian Kayhan Hollister (student), Paul Manno, Kevin Manalo    
Georgia Southern University Brandon Kimmons    
Georgia State University Neranjan "Suranga" Edirisinghe Pathiran, Ken Huang, Thakshila Herath (student), Melchizedek Mashiku (student)  
Gettysburg College Charles Kann    
Great Plains Network Kate Adams, James Deaton    
Harvard Medical School Jason Key    
Harvard University Scott Yockel, Plamen Krastev, Francesco Pontiggia    
Harvey Mudd College Aashita Kesarwani    
Hood College Xinlian Liu    
Howard University Marcus Alfred  
Idaho National Laboratory Ben Nickell, Eric Whiting, Tami Grimmett  
Idaho State University Keith Weber, Randy Gaines, Dong Xu  
Illinois Institute of Technology Jeff Wereszczynski    
Indiana University Abhinav Thota, Sudahakar Pamidighantam (domain) , Junjie Li, Thomas Doak (domain) , Carrie L. Ganote (domain) , Sheri Sanders (domain) , Bhavya Nalagampalli Papudeshi (domain) , Le Mai Weakley    
Indiana University of Pennsylvania John Chrispell    
Internet2 Dana Brunson    
Iowa State University Andrew Severin, James Coyle, Levi Baber, Justin Stanley (student)    
Jackson State University Carmen Wright, Duber Gomez-Fonseca (student)
James Madison University Yasmeen Shorish, Isaiah Sumner    
Jarvis Christian College Widodo Samyono  
John Brown University Jill Ellenbarger  
Johns Hopkins University Anthony Kolasny, Jaime Combariza, Jodie Hoh (student)    
Juniata College Burak Cem Konduk    
KINBER Jennifer Oxenford    
Kansas Research and Education Network Casey Russell  
Kansas State University Dan Andresen, Mohammed Tanash (student), Kyle Hutson  
Kennesaw State University Dick Gayler, Jon Preston    
Kentucky State University Chi Shen
Lafayette College Bill Thompson, Jason Simms    
Lamar University Larry Osborne    
Langston University Franklin Fondjo, Abebaw Tadesse, Joel Snow
Lawrence Berkeley National Laboratory Andrew Wiedlea    
Lawrence Livermore National Laboratory Todd Gamblin    
Lehigh University Alexander Pacheco    
Lock Haven University Kevin Range    
Louisiana State University Feng Chen, Blaise A Bourdin  
Louisiana State University Health Sciences Center-New Orleans Mohamad Qayoom  
Louisiana Tech University Don Liu  
Marquette University Craig Struble, Lars Olson, Xizhou Feng    
Marshall University Jack Smith, Justin Chapman  
Massachusetts Green High Performance Computing Center Julie Ma, Abigail Waters (student)    
Massachusetts Institute of Technology Christopher Hill, Lauren Milechin    
Medical University of South Carolina Starr Hazard  
Michigan State University Andrew Keen, Yongjun Choi, Dirk Colbry    
Michigan Technological University Gowtham    
Middle Tennessee State University Dwayne John    
Midwestern State University Eduardo Colmenares-Diaz, Broday Walker (student)    
Mississippi State University Trey Breckenridge  
Missouri State University Matt Siebert    
Missouri University of Science and Technology Buddy Scharfenberg, Don Howdeshell    
Monmouth College Christopher Fasano    
Montana State University Jonathan Hilmer  
Montana Tech Bowen Deng  
Morehouse College Jigsa Tola, Doreen Stevens  
NCAR/UCAR Davide Del Vento    
National University Ali Farahani    
Navajo Technical University Jason Arviso
New Jersey Institute of Technology Glenn "Gedaliah" Wolosh, Roman Voronov    
New Mexico State University Alla Kammerdiner, Diana Dugas, Strahinja Trecakov, Matt Henderson
New York University Shenglong Wang    
North Carolina A & T State University Dukka KC  
North Carolina Central University Caesar Jackson, Alade Tokuta  
North Carolina State University at Raleigh Lisa Lowe    
North Dakota State University Dane Skow, Nick Dusek, Oluwasijibomi "Siji" Saula, Khang Hoang  
Northern Arizona University Christopher Coffey    
Northern Illinois University Jifu Tan    
Northwest Missouri State University Jim Campbell    
Northwestern State University (Louisiana Scholars' College) Brad Burkman  
Northwestern University Pascal Paschos, Alper Kinaci    
OWASP Foundation Learning Gateway Project Bev Corwin, Laureano Batista, Zoe Braiterman, Noreen Whysel    
Ohio State University Keith Stewart, Sandy Shew    
Ohio Supercomputer Center Karen Tomko    
Oklahoma Baptist University Yuan-Liang Albert Chen  
Oklahoma Innovation Institute John Mosher  
Oklahoma State University Brian Couger (domain) , Jesse Schafer, Raj Shukla (student), Christopher J. Fennell (domain) , Phillip Doehle, Evan Linde, Venkat Padmanapan Rao (student), Nathalia Graf Grachet (student)  
Old Dominion University Wirawan Purwanto    
Oral Roberts University Stephen R. Wheat  
Oregon State University David Barber, Chuck Sears, Todd Shechter, CJ Keist    
Penn State University Wayne Figurelle, Guido Cervone, Diego Menendez    
Pittsburgh Supercomputing Center Stephen Deems, John Urbanic    
Pomona College Asya Shklyar, Andrew Crawford    
Portland State University William Garrick    
Princeton University Ian Cosden    
Purdue University Xiao Zhu, Tsai-wei Wu, Matthew Route (domain) , Stephen Harrell, Marisa Brazil, Eric Adams (domain)    
Reed College Trina Marmarelli, Johnny Powell    
Rensselaer Polytechnic Institute Joel Giedt, James Flamino (student)    
Rhodes College Brian Larkins    
Rice University Qiyou Jiang, Erik Engquist, Xiaoqin Huang, Clinton Heider, John Mulligan    
Rochester Institute of Technology Andrew W. Elble , Emilio Del Plato, Charles Gruener, Paul Mezzanini, Sidney Pendelberry    
Rowan University Ghulam Rasool    
Rutgers University Kevin Abbey, Shantenu Jha, Bill Abbott, Leslie Michelson, Paul Framhein, Galen Collier, Eric Marshall, Kristina Plazonic, Vlad Kholodovych    
SBGrid Consortium      
SUNY at Albany Kevin Tyle, Nicholas Schiraldi    
Saint Louis University Eric Kaufmann, Frank Gerhard Schroer IV (student)    
Saint Martin University Shawn Duan    
San Diego State University Mary Thomas  
San Jose State University Sen Chiao, Werner Goveya    
Slippery Rock University of Pennsylvania Nitin Sukhija    
Smithsonian Conservation Biology Institute Jennifer Zhao    
Sonoma State University Mark Perri  
South Carolina State University Biswajit Biswal, Jagruti Sahoo
South Dakota School of Mines and Technology Rafal M. Oszwaldowski  
South Dakota State University Kevin Brandt, Maria Kalyvaki  
Southeast Missouri State University Marcus Bond    
Southern Connecticut State University Yigui Wang    
Southern Illinois University-Carbondale Shaikh Ahmed, Chet Langin, Majid Memari (student), Aaron Walber (student)    
Southern Illinois University-Edwardsville Kade Cole, Andrew Speer    
Southern Methodist University Amit Kumar, Merlin Wilkerson, Robert Kalescky    
Southern University and A & M College Shizhong Yang, Rachel Vincent-Finley
Southwest Innovation Cluster Thomas MacCalla    
Southwestern Oklahoma State University Jeremy Evert  
Spelman College Yonas Tekle  
Stanford University Ruth Marinshaw, Zhiyong Zhang    
Swarthmore College Andrew Ruether    
Temple University Richard Berger    
Tennessee Technological University Tao Yu, Mike Renfro    
Texas A & M University-College Station Rick McMullen, Dhruva Chakravorty, Jian Tao    
Texas A & M University-Corpus Christi Ed Evans, Joshua Gonzalez  
Texas A&M University-San Antonio Smriti Bhatt  
Texas Southern University Farrukh Khan  
Texas State University Shane Flaherty  
Texas Wesleyan University Terrence Neumann    
The College of New Jersey Shawn Sivy    
The Jackson Laboratory Shane Sanders  
The University of Tennessee-Chattanooga Craig Tanis, Ethan Hereth, Carson Woods (student)    
The University of Texas at Austin Kevin Chen    
The University of Texas at Dallas Frank Feagans, Gi Vania, Jaynal Pervez, Christopher Simmons    
The University of Texas at El Paso Rodrigo Romero, Vinod Kumar  
The University of Texas at San Antonio Brent League, Jeremy Mann, Zhiwei Wang, Armando Rodriguez, Thomas Freeman
Tinker Air Force Base Zachary Fuchs, David Monismith  
Trinity College Peter Yoon    
Tufts University Shawn Doughty, Georgios (George) Karamanis (student)    
Tulane University Hideki Fujioka, Hoang Tran, Carl Baribault  
United States Department of Agriculture - Agriculture Research Service Nathan Weeks    
United States Geological Survey Janice Gordon, Jeff Falgout, Natalya Rapstine    
University of Alabama at Birmingham John-Paul Robinson  
University of Alaska Fairbanks Liam Forbes, Kevin Galloway
University of Arizona Jimmy Ferng, Mark Borgstrom, Moe Torabi, Adam Michel, Chris Reidy, Chris Deer, Cynthia Hart, Ric Anderson, Todd Merritt, Dima Shyshlov, Blake Joyce    
University of Arkansas David Chaffin, Jeff Pummill, Pawel Wolinski, James McCartney, Timothy "Ryan" Rogers (student)  
University of Arkansas at Little Rock Albert Everett  
University of California-Berkeley Aaron Culich, Chris Paciorek    
University of California-Davis Bill Broadley    
University of California-Irvine Harry Mangalam  
University of California-Los Angeles TV Singh    
University of California-Merced Jeffrey Weekley, Sarvani Chadalapaka, Luanzheng Guo (student)    
University of California-Riverside Bill Strossman, Charles Forsyth  
University of California-San Diego Cyd Burrows-Schilling, Claire Mizumoto    
University of California-San Francisco Jason Crane    
University of California-Santa Barbara Sharon Solis, Sharon Tettegah  
University of California-Santa Cruz Shawfeng Dong  
University of Central Florida Paul Wiegand, Amit Goel (student), Jason Nagin    
University of Central Oklahoma Evan Lemley  
University of Chicago Igor Yakushin    
University of Cincinnati      
University of Colorado Thomas Hauser, Shelley Knuth, Andy Monaghan, Daniel Trahan    
University of Delaware Anita Schwartz, Parinaz Barakhshan (student)  
University of Florida Alex Moskalenko, David Ojika    
University of Georgia Guy Cormier    
University of Guam Rommel Hidalgo, Eugene Adanzo, Randy Dahilig, Jose Santiago, Steven Mamaril
University of Hawaii Gwen Jacobs, Sean Cleveland
University of Houston Jerry Ebalunode, Amit Amritkar (domain)  
University of Houston-Clear Lake David Garrison, Liwen Shih    
University of Houston-Downtown Eashrak Zubair (student), Hong Lin  
University of Idaho Lucas Sheneman  
University of Illinois at Chicago Himanshu Sharma, Jon Komperda, Babak Kashir Taloori (student)  
University of Illinois at Urbana-Champaign Mao Ye (domain) , Rob Kooper (domain) , Dean Karres, Tracy Smith    
University of Indianapolis Steve Spicklemire    
University of Iowa Ben Rogers, Baylen Jacob Brus (student), Sai Ramadugu, Adam Harding, Joe Hetrick, Cody Johnson, Genevieve Johnson, Glenn Johnson, Brendel Krueger, Kang Lee, Gabby Perez, Brian Ring, John Saxton    
University of Kansas Riley Epperson  
University of Kentucky Vikram Gazula, James Griffioen  
University of Louisiana at Lafayette Raju Gottumukkala  
University of Louisville Harrison Simrall  
University of Maine System Bruce Segee, Steve Cousins, Michael Brady Butler (student)  
University of Maryland Eastern Shore Urban Wiggins  
University of Maryland-Baltimore County Roy Prouty, Randy Philipp  
University of Maryland-College Park Kevin M. Hildebrand  
University of Massachusetts Amherst Johnathan Griffin    
University of Massachusetts-Boston Jeff Dusenberry, Runcong Chen  
University of Massachusetts-Dartmouth Scott Field, Gaurav Khanna    
University of Memphis Qianyi Cheng    
University of Miami Dan Voss, Warner Baringer    
University of Michigan Brock Palen, Simon Adorf (student), Shelly Johnson, Todd Raeker, Gregory Teichert    
University of Minnesota Jim Wilgenbusch, Eric Shook (domain) , Ben Lynch, Eric Shook, Joel Turbes, Doug Finley    
University of Mississippi Kurt Showmaker  
University of Missouri-Columbia Timothy Middelkoop, Micheal Quinn, Alexander Barnes (student), Derek Howard, Asif Ahamed Magdoom Ali, Brian Marxkors    
University of Missouri-Kansas City Paul Rulis    
University of Montana Tiago Antao  
University of Nebraska Adam Caprez, Jingchao Zhang  
University of Nebraska Medical Center Ashok Mudgapalli  
University of Nevada-Reno Fred Harris, Scotty Strachan, Engin Arslan  
University of New Hampshire Scott Valcourt  
University of New Mexico Hussein Al-Azzawi, Matthew Fricke
University of North Carolina Mark Reed, Mike Barker    
University of North Carolina Wilmington Eddie Dunn, Ellen Gurganious, Cory Nichols Shrum (student)    
University of North Dakota Aaron Bergstrom  
University of North Georgia Luis A. Cueva Parra    
University of North Texas Charles Peterson, Damiri Young    
University of Notre Dame Dodi Heryadi, Scott Hampton    
University of Oklahoma Henry Neeman, Kali McLennan, Horst Severini, James Ferguson, David Akin, S. Patrick Calhoun, George Louthan, Jason Speckman  
University of Oregon Nick Maggio, Robert Yelle, Jake Searcy, Mark Allen, Michael Coleman    
University of Pennsylvania Gavin Burris    
University of Pittsburgh Kim Wong, Matt Burton, Fangping Mu, Shervin Sammak    
University of Puerto Rico Mayaguez Ana Gonzalez
University of Richmond Fred Hagemeister    
University of South Carolina Paul Sagona, Ben Torkian, Nathan Elger  
University of South Dakota Doug Jennewein, Adison Ann Kleinsasser (student)  
University of South Florida-St Petersburg (College of Marine Science) Tylar Murray    
University of Southern California Virginia Kuhn (domain) , Cesar Sul, Erin Shaw    
University of Southern Mississippi Brian Olson , Gopinath Subramanian  
University of St Thomas William Bear, Keith Ketchmark, Eric Tornoe    
University of Tulsa Peter Hawrylak  
University of Utah Anita Orendt, Tom Cheatham (domain) , Brian Haymore (domain)    
University of Vermont Andi Elledge, Yves Dubief  
University of Virginia Ed Hall, Katherine Holcomb    
University of Washington-Seattle Campus Nam Pho    
University of Wisconsin-La Crosse David Mathias, Samantha Foley    
University of Wisconsin-Milwaukee Dan Siercks, Jason Bacon, Shawn Kwang    
University of Wyoming Bryan Shader, Rajiv Khadka (student)  
University of the Virgin Islands Marc Boumedine
Utah Valley University George Rudolph    
Valparaiso University Paul Lapsansky, Paul M. Nord, Nicholas S. Rosasco    
Vassar College Christopher Gahn    
Virginia Tech University James McClure, Alana Romanella, Srijith Rajamohan, David Barto (student)    
Washburn University Karen Camarda, Steve Black  
Washington State University Rohit Dhariwal, Peter Mills    
Washington University in St Louis Xing Huang, Matt Weil, Matt Callaway    
Wayne State University Patrick Gossman, Michael Thompson, Aragorn Steiger    
Weill Cornell Medicine Joseph Hargitai    
West Chester University of Pennsylvania Linh Ngo, Jon C. Kilgannon (student)    
West Virginia Higher Education Policy Commission Jack Smith  
West Virginia State University Sridhar Malkaram
West Virginia University Don McLaughlin, Nathan Gregg, Guillermo Avendano-Franco  
West Virginia University Institute of Technology Sanish Rai  
Wichita State University Terrance Figy  
Winston-Salem State University Xiuping Tao, Daniel Caines (student)  
Wofford College Beau Christ  
Woods Hole Oceanographic Institution Roberta Mazzoli, Wei-Hao (Andrei) Huang    
Yale University Andrew Sherman, Kaylea Nelson, Benjamin Evans    
Youngstown State University Feng George Yu    

LAST UPDATED: September 10, 2019

 


Campus Champions

 

Computational Science & Engineering makes the impossible possible; high performance computing makes the impossible practical

Campus Champions Celebrate Ten Year Anniversary 

What is a Campus Champion?

A Campus Champion is an employee of, or affiliated with, a college or university (or other institution engaged in research), whose role includes helping their institution's researchers, educators and scholars (faculty, postdocs, graduate students, undergraduates, and professionals) with their computing-intensive and data-intensive research, education, scholarship and/or creative activity, including but not limited to helping them to use advanced digital capabilities to improve, grow and/or accelerate these achievements.

What is the Campus Champions Program?

The Campus Champions Program is a group of 600+ Campus Champions at 300+ US colleges, universities, and other research-focused institutions, whose role is to help researchers at their institutions to use research computing, especially (but not exclusively) large scale and high end computing.

Campus Champions peer-mentor each other to learn to be more effective. The Campus Champion community has a very active mailing list where Champions exchange ideas and help each other solve problems, regular conference calls where Champions learn what's going on both within the community and at the national level, and a variety of other activities.

Benefits to Campus Champion Institutions

  • A Campus Champion gets better at helping people use computing to advance their research, so their institution's research becomes more successful.
  • There is no charge to the Campus Champion institution for membership.

Benefits to the Campus Champion

  • A Campus Champion becomes more valuable and more indispensable to their institution's researchers, and therefore to their institution.
  • The Campus Champions Program is a lot of fun, so Champions can enjoy themselves while learning valuable strategies.

What does a Campus Champion do as a member of the CC Program?

  • Participate in Campus Champions Program information sharing sessions such as the Campus Champions monthly call and email list.
  • Participate in peer mentoring with other Campus Champions, learning from each other how to be more effective in their research support role.
  • Provide information about national Cyberinfrastructure (CI) resources to researchers, educators and scholars at their local institution.
  • Assist their local institution's users to quickly get start-up allocations of computing time on national CI resources.
  • Serve as an ombudsperson, on behalf of their local institution's users of national CI resources, to capture information on problems and challenges that need to be addressed by the resource owners.
  • Host awareness sessions and training workshops for their local institution's researchers, educators, students, scholars and administrators about institutional, national and other CI resources and services.
  • Participate in some or all of the Campus, Regional, Domain, and Student Champion activities.
  • Submit brief activity reports on a regular cadence.
  • Participate in relevant national conferences, for example the annual SC supercomputing conference and the PEARC conference.
  • Participate in education, training and professional development opportunities at the institutional, regional and national levels, to improve their ability to provide these capabilities.

What does the Campus Champions program do for the Campus Champions?

  • Provide a mailing list for sharing information among all Campus Champions and other relevant personnel.
  • Provide the Campus Champions with regular correspondence on new and updated CI resources, services, and offerings at the national level, including but not limited to the XSEDE offerings.
  • Provide advice to the Campus Champions and their institutions on how to best serve the institution's computing- and data-intensive research, education and scholarly endeavors.
  • Provide education, training and professional development for Campus Champions at conferences, Campus Champion meetings, training events, and by use of online collaboration capabilities (wiki, e-mail, etc.).
  • Help Champions to pursue start-up allocations of computing time on relevant national CI resources (currently only XSEDE resources, but we aspire to expand that), to enable Campus Champions to help their local users get started quickly on such national CI resources.
  • Record success stories about impact of Campus Champions on research, education and scholarly endeavors.
  • Maintain a web presence and social media activity that promote the Campus Champions Program and list all active Campus Champions and their institutions.
  • Raise awareness of the Campus Champions Program and recruit additional institutions and Campus Champions into it.
  • Provide Campus Champions with the opportunity to apply for the XSEDE Champion Fellows Program (and aspirationally other programs), to acquire in-depth technical and user support skills by working alongside XSEDE staff experts.
  • Provide Campus Champions with information on participating in subgroup activities, such as the Regional Champion initiative.

Become a Champion

  • Write to info@campuschampions.org and ask to get involved
  • We'll send you a template letter of collaboration
  • Ask questions, add signatures, send it back, and join the community

In addition to traditional Campus Champions, the Champion Program now includes the following types of specialized Champions:

 

  • Student Champions - offer a unique student perspective on the use of digital resources and services
  • Regional Champions - serve as regional points of contact to help scale the Champion Program
  • Domain Champions - spread the word about what XSEDE can do to boost the advancement of their specific research domains

Key Points
Program serves more than 300 US colleges and universities
Aimed at making the institution's research more successful
Free membership

 
August 2019 | Science Highlights, Announcements & Upcoming Events
 
XSEDE helps the nation's most creative minds discover breakthroughs and solutions for some of the world's greatest scientific challenges. Through free, customized access to the National Science Foundation's advanced digital resources, consulting, training, and mentorship opportunities, XSEDE enables you to Discover More. Get started here.
 
Science Highlights
 
Going with the flow
 
XSEDE-powered calculations shed light on heat transfer between 2D electronic components
 
 
Smaller electronic components offer us more power in our pockets. But thinner and thinner components pose engineering problems. Anisotropic materials—those with properties that vary in direction—hold promise for being unusually versatile. Still, their properties, particularly as they become "two dimensional," or ultra-thin, are not well understood. Using a combination of calculations on XSEDE-allocated resources and lab experiments, a UCLA-led group showed that an atomistic model can explain and predict the transfer of heat between aluminum and black phosphorus, a highly anisotropic material with possible applications in future devices.
 
 
Schematic of the crystal structure of black phosphorus. Image permission granted by Yongjie Hu, UCLA.
 
Program Announcements
 
New NSF cyberinfrastructure projects to be part of XSEDE ecosystem
The National Science Foundation has recently funded three new HPC systems slated to come online in the coming year, all of which will become part of the XSEDE ecosystem:
 
  • Bridges-2, a $10M machine at the Pittsburgh Supercomputing Center, will be focused on high-performance AI, HPC and big data. PSC plans to accept an initial round of proposals via XSEDE's allocations process in June to July 2020, with early users beginning work in August and production operations in October.
  • Expanse, a $10M machine at the San Diego Supercomputer Center, will be a system optimized for "long-tail" users. Expanse will begin its production phase under XSEDE allocation in the second quarter of 2020.
  • Ookami, a $5M machine at Stony Brook University, will be a testbed for ARM architecture in collaboration with RIKEN CCS in Japan. It will be phased into XSEDE service with two years of allocation managed by Stony Brook beginning in late 2020, followed by two years of XSEDE allocation.
 
 
XSEDE announces 2019-2020 Campus Champion Fellows
Five members of the 600+ member Campus Champion community have been selected as Campus Champions Fellows for the 2019-2020 academic year. The selected Fellows will work on projects spanning from hydrology gateways to undergraduate data science curriculum development, under the overarching goal of increasing cyberinfrastructure expertise on campuses by including Campus Champions as partners in XSEDE's projects. For the list of new Fellows and their projects, please see the link below.
 
 
XSEDE Cyberinfrastructure Integration updates
The XSEDE Cyberinfrastructure Integration (XCI) team recently worked with George Mason University IT staff to conduct cluster build training on local hardware with the XCBC OpenHPC toolkit. XCI has also released a new "Describe a User Story" feature to the Research Software portal, updated the XSEDE Globus ID Explorer service to provide a friendlier user interface and help developers debug and understand security credentials, and would like your input on an SSH needs survey.
 
 
Apply by Nov. 1 for Spring 2020 XSEDE EMPOWER internships
 
 
Do you know an undergraduate interested in computation, conducting their own research, and making connections within our community? Tell them about XSEDE EMPOWER (Expert Mentoring Producing Opportunities for Work, Education, and Research), an internship program where undergraduates have the chance to participate in actual XSEDE activities, like computational research and education in all fields of study, data analytics research and education, networking, system maintenance and support, and visualization. Undergraduate students at any U.S. degree-granting institution are welcome to apply. No prior experience necessary. The deadline to apply for Spring 2020 internships is November 1, 2019.
 
 
Check out this video to learn more about XSEDE EMPOWER and what two recent interns have to say about the program.
 
Community Announcements
 
Globus now supports Box
If your campus uses Box for cloud storage, managing and collaborating with this data just got easier. Globus has announced a new Box connector that seamlessly connects Box with an organization's existing research storage ecosystem. With the Globus for Box connector, researchers can easily move and share data stored in Box using the same Globus interface they use to access other storage systems, enhancing collaboration and the ability to fulfill data management plan requirements.
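For campuses that already script Globus transfers, a Box collection then behaves like any other endpoint. Below is a minimal sketch using the Globus Python SDK; the token, endpoint IDs, and paths are placeholders, and the Box collection itself must first be set up by a Globus administrator.

    import globus_sdk

    # Placeholder credentials and IDs -- substitute your own.
    TRANSFER_TOKEN = "..."                        # obtained through a Globus OAuth2 login flow
    BOX_COLLECTION_ID = "box-collection-uuid"     # the Box connector collection (hypothetical ID)
    HPC_ENDPOINT_ID = "campus-hpc-endpoint-uuid"  # an existing campus storage endpoint (hypothetical ID)

    tc = globus_sdk.TransferClient(
        authorizer=globus_sdk.AccessTokenAuthorizer(TRANSFER_TOKEN)
    )

    # Move a file from Box to cluster storage through the same Globus transfer
    # interface used for any other pair of endpoints.
    task = globus_sdk.TransferData(tc, BOX_COLLECTION_ID, HPC_ENDPOINT_ID,
                                   label="Box to cluster scratch")
    task.add_item("/shared/project/data.csv", "/scratch/user/data.csv")

    result = tc.submit_transfer(task)
    print("Submitted transfer task:", result["task_id"])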
 
 
HPC Leadership Institute
The High Performance Computing (HPC) Leadership Institute, hosted by the Texas Advanced Computing Center (TACC) September 17-19, 2019, is tailored to managers and decision makers who are using, or considering using, HPC within their organizations. It is also applicable to those with a real opportunity to make this career step in the near future. Topics covered will include procurement considerations, pricing and capital expenditures, operating expenditures, and cost/benefit analysis of adding HPC to a company's or institution's R&D portfolio. A broad scope of HPC is covered, from department-scale clusters to the largest supercomputers, from modeling and simulation to non-traditional use cases, and more. We encourage attendees from diverse backgrounds and underrepresented communities.
 
Registration closes September 3.
 
 
Upcoming Dates and Deadlines
 

 


 

 
July 2019 | Science Highlights, Announcements & Upcoming Events
 
XSEDE helps the nation's most creative minds discover breakthroughs and solutions for some of the world's greatest scientific challenges. Through free, customized access to the National Science Foundation's advanced digital resources, consulting, training, and mentorship opportunities, XSEDE enables you to Discover More. Get started here.
 
Science Highlights
 
Building better batteries
 
XSEDE-allocated resources simulate improved battery components
 
 
The move toward cleaner, cheaper energy would be much easier if we had more powerful, safer battery technologies.
 
Carnegie Mellon University (CMU) scientists have used the XSEDE-allocated systems Bridges at the Pittsburgh Supercomputing Center (PSC) and Comet at the San Diego Supercomputer Center (SDSC) to simulate new battery component materials that are inherently safer and more powerful than currently possible.
 
 
One of the predicted new low-cobalt structures of LiNi_xMn_yCo_(1-x-y)O_2, with a nickel to manganese to cobalt ratio of 18:5:1. The nickel is shown in grey, the manganese in magenta, and the cobalt in blue. The lithium layer is shown in green and oxygen is shown in red. Credit: Gregory Houchins, Carnegie Mellon University.
 
Supercomputing dynamic earthquake rupture models
 
XSEDE-allocated resources support multi-fault earthquake research
 
 
Multi-fault earthquakes can span fault systems of tens to hundreds of kilometers, with ruptures propagating from one segment to another. During the last decade, scientists have observed several cases of this complicated type of earthquake, such as the M7.8 2016 Kaikoura earthquake in New Zealand.
 
Scientists are using physics-based dynamic rupture models to simulate complex earthquake ruptures using XSEDE-allocated supercomputers in order to better predict the behavior of the world's most powerful, multiple-fault earthquakes. This research lends itself to a new understanding of a complex set of faults in Southern California that have the potential to impact the lives of millions of people in the United States and Mexico.
 
 
Map (left panels) and 3D (right panels) view of supercomputer earthquake simulations in the Brawley Seismic Zone, CA. The figure shows how different stress conditions affect rupture propagation across the complex network of faults. The top panels show a high-stress case scenario (leading to very fast rupture propagation, higher than the S wave speed) while the bottom panels show a medium stress case simulation. Credit: Christodoulos Kyriakopoulos, UC Riverside.
 
Researchers use machine learning to more quickly analyze key capacitor materials
 
XSEDE-allocated resources speed up electronics design
 
 
Capacitors, given their high energy output and recharging speed, could play a major role in powering the machines of the future from electric cars to cell phones. 
But the biggest hurdle for capacitors as energy storage devices is that they store much less energy than a battery of similar size.
 
Researchers at the Georgia Institute of Technology (Georgia Tech) are tackling the problem in a novel way with the help of XSEDE. They've combined machine learning with XSEDE-allocated supercomputers to find ways to build more capable capacitors, which could lead to better power management for electronic devices.
 
 
Scientists at Georgia Tech are using machine learning with supercomputers to analyze the electronic structure of materials and ultimately find ways to build more capable capacitors. (Left) Density functional theory (DFT) charge density of a molecular dynamics snapshot of a benzene structure. (Right) Charge density difference between the machine learning prediction and DFT for the same benzene structure. Credit: Rampi Ramprasad, Georgia Tech.
 
Achieving crystal clear results... literally
 
XSEDE-allocated resources shed light on color-changing material applications
 
 
A recent discovery by a Georgia Tech graduate student has led to materials that quickly change color from completely clear to a range of vibrant hues – and back again. 
 
With an XSEDE allocation on the San Diego Supercomputer Center's Comet supercomputer, researchers analyzed electrochromic materials with computational models that provide insights into how changes at the sub-molecular level cause color changes.
 
The work could have potential applications in everything from skyscraper windows that control the amount of light and heat coming in and out of a building, to switchable camouflage and visors for military applications, and even color-changing cosmetics and clothing.
 
 
Coupling Comet-generated computational models with anodically coloring electrochromes (ACEs), researchers demonstrated how small chemical modifications change the electronic structure of the molecules' radical cation states, which in turn alter the color. Credit: Aimée Tomlinson, University of North Georgia.
 
Program Announcements
 
Help engage undergrads in the work of XSEDE!
 
 
Application deadline for Fall 2019 XSEDE EMPOWER internships and mentor positions is August 2, 2019!
 
An XSEDE-wide effort is underway to expand the community by recruiting and enabling a diverse group of students who have the skills, or are interested in acquiring the skills, to participate in the actual work of XSEDE. The name of this effort is XSEDE EMPOWER (Expert Mentoring Producing Opportunities for Work, Education, and Research).
 
We invite the whole XSEDE community—staff, researchers, and educators—to recruit and mentor undergraduate students to engage in a variety of XSEDE activities, such as computational and/or data analytics research and education in all fields of study, networking, system maintenance and support, visualization, and more. The program provides a stipend to students and resources for the training of those students who work on XSEDE projects for one semester, one quarter, one summer, or longer.
 
 
Community Announcements
 
Register now for Gateways 2019
Science gateways connect components of advanced cyberinfrastructure behind streamlined, user-friendly interfaces. Join gateway creators and enthusiasts to learn, share, connect, and shape the future of gateways at Gateways 2019, September 23-25, 2019 in San Diego, CA.
 
Early-bird registration is open through Thursday, August 8.  (Regular registration closes Monday, Sept. 9.) The Poster Session is open to all and accepting abstracts through Thursday, August 15. Gateways 2019 also offers travel support for students. Learn more at the link below.
 
 
For the love of science
 
 
BOINC@TACC supports virtualized, parallel, cloud, and GPU-based applications to allow volunteer science enthusiasts and researchers to help solve science problems – it's the first use of volunteer computing by a major HPC center. XSEDE researchers are invited to submit jobs to BOINC@TACC.
 
 
Sign up for the GlobusWorld Tour at University of Michigan July 22-23
 
 
Don't miss this free two-day event hosted by the University of Michigan! Day one is a half-day crash course on Globus from an end user and system administrator perspective, and Day two will focus on developing custom data management solutions with an emphasis on tools for automating data flows throughout the research lifecycle.

 
Apply now for Science Gateways' Gateway Focus Week (formerly Bootcamp)
Want to learn gateways, from start to finish? Apply by July 19 for SGCI's next Focus Week, September 9-13, 2019 in Chicago, IL.
 
Gateway Focus Week will help you learn how to develop, operate, and sustain a gateway (also known as portals, virtual research environments, hubs, etc.).
 
Why apply? You'll...
  • Leave behind day-to-day tasks to tackle big questions that will help your team articulate the value of your work to key stakeholders.
  • Create a strong development, operations, and sustainability plan.
  • Walk away with proven and effective strategies in everything from business and finance to cybersecurity and usability.
  • Network, collaborate, and establish relationships with others doing similar work.
  • Do it all at minimal cost (currently, participants only pay for travel, lodging, and a few meals).
 
 
Upcoming Dates and Deadlines
 

 


This Comet-generated simulation illustrates how an intense laser pulse renders a dense material relativistically transparent, allowing the pulse to propagate: the laser penetrates the material and pushes the electrons to form an extremely strong magnetic field. The strength is comparable to that at a neutron star's surface, which is at least 100 million times stronger than the Earth's magnetic field and a thousand times stronger than the field of superconducting magnets. Credit: Tao Wang, Department of Mechanical and Aerospace Engineering, and the Center for Energy Research, UC San Diego.

 

While intense magnetic fields are naturally generated by neutron stars, researchers have been striving to achieve similar results in the laboratory for many years. Tao Wang, a University of California San Diego mechanical and aerospace engineering graduate student, recently demonstrated how an extremely strong magnetic field, similar to that on the surface of a neutron star, can be not only generated but also detected using an x-ray laser inside a solid material.

Wang carried out his research with the help of simulations conducted on the Comet supercomputer at the San Diego Supercomputer Center (SDSC) as well as Stampede and Stampede2 at the Texas Advanced Computing Center (TACC). All resources were allocated via XSEDE.

"Wang's findings were critical to our recently published study's overall goal of developing a fundamental understanding of how multiple laser beams of extreme intensity interact with matter," said Alexey Arefiev, a professor of mechanical and aerospace engineering at the UC San Diego Jacobs School of Engineering.


Wang, Arefiev, and their colleagues used the XSEDE-allocated supercomputers for multiple large three-dimensional simulations, remote visualization, and data post-processing to complete their study, which showed how an intense laser pulse is able to propagate into the dense material because of the laser's relativistic intensity.

In other words, as the velocity of the electrons in the laser field approaches the speed of light, their effective mass becomes so large that the target becomes transparent. Because of this transparency, the laser pulse can push the electrons to form a strong magnetic field. Its strength is comparable to that at a neutron star's surface, which is at least 100 million times stronger than the Earth's magnetic field, and about one thousand times stronger than the field of superconducting magnets.

The findings were published in a Physics of Plasmas article entitled "Structured Targets for Detection of Megatesla-level Magnetic Fields Through Faraday Rotation of XFEL Beams," which was recently named an Editor's Pick.

"Now that we have completed this study, we are working on ways to detect this type of magnetic field at a one-of-a-kind facility called the European X-Ray Free Electron Laser (XFEL), which encompasses a 3.4- kilometer-long accelerator that generates extremely intense x-ray flashes to be used by researchers like our team," explained Arefiev.

Located in Schenefeld, Germany, the European XFEL is home to Toma Toncian, who leads the project group for the construction and commissioning of the Helmholtz International Beamline for Extreme Fields at the High Energy Density instrument. He is also a co-author of the recently published study.

"The very fruitful collaboration between UC San Diego and Helmholtz-Zentrum Dresden-Rossendorf is paving the road to future high-impact experiments," said Toncian. "As we pass nowadays from construction to commissioning and first experiments, the theoretical predictions by Tao Wang are timely and show us how to further develop and fully exploit the capabilities of our instrument."

According to Mingsheng Wei, a senior scientist at the University of Rochester's Laboratory for Laser Energetics and co-author on the paper, "the innovative micro-channel target design explored in the simulation work could be demonstrated using the novel low-density polymer foam material that is only a few times heavier than the dry air contained in micro-structured tubes."

"Because the resulting data sets of our experiments using XFEL are very large, our research would not have been possible on a regular desktop – we could not have completed this study without the use of XSEDE supercomputers," said Arefiev. "We are also very grateful to the Air Force Office of Scientific Research for making this project possible."

Arefiev said that their group's supercomputer usage efforts relied upon the guidance of Amit Chourasia, SDSC's senior visualization scientist, who helped set up remote parallel visualization tools for the researchers.

"It is fantastic to work in tandem with research groups and equip them with powerful methods, tools, and an execution plan that in turn propels their research at an accelerated pace with aid of HPC and visualization. We're grateful to play a role in enabling new discoveries," said Chourasia.

 

This research was supported by the Air Force Office of Scientific Research under grant number FA9550-17-1-0382 and the National Science Foundation under grant number 1632777. Particle-in-cell simulations were performed using the EPOCH code, developed under UK EPSRC grant numbers EP/G054940, EP/G055165, and EP/G056803. High-performance computing resources were provided by XSEDE under allocation PHY180033.

 



XSEDE's Stampede2, ECSS Helps Simulate Shock Turbulence Interactions

A new theoretical framework was developed and tested using the Stampede2 supercomputer to understand turbulent jumps of mean thermodynamic quantities, shock structure, and amplification factors. In this image, turbulence comes in from the left, hits the shock, and leaves the domain on the right. This three-dimensional picture shows the structure of enstrophy, colored by local Mach number, with the shock shown in gray. Credit: Chang-Hsin Chen, TAMU.

 

This may come as a shock, if you're moving fast enough. The shock being shock waves. A balloon's 'pop' is an example of shock waves generated by exploded bits of the balloon moving faster than the speed of sound. A supersonic plane generates a much louder sonic 'boom,' also from shock waves.

Farther out into the cosmos, a collapsing star generates shock waves from particles racing near the speed of light as the star goes supernova. Scientists are using supercomputers allocated through XSEDE to get a better understanding of turbulent flows that interact with shock waves. This understanding could help develop supersonic and hypersonic aircraft, more efficient engine ignition, as well as probe the mysteries of supernova explosions, star formation, and more.

"We proposed a number of new ways in which shock-turbulence interactions can be understood," said Diego Donzis, an associate professor in the Department of Aerospace Engineering at Texas A&M University. Donzis co-authored the study, "Shock–Turbulence Interactions at High Turbulence Intensities," published in May 2019 in the Journal of Fluid Mechanics. Donzis and colleagues used extremely high resolution simulations to support the team's novel theory of shockwaves in turbulence that accounts for features not captured by the seminal work on the subject.

Shock turbulence study co-authors Chang Hsin Chen (L) and Diego Donzis (R), pictured with the Stampede2 supercomputer. Drs. Chen and Donzis are both with the Department of Aerospace Engineering, Texas A&M University. Credit: TACC.

Enter Stampede2, an 18-petaflops supercomputer at the Texas Advanced Computing Center (TACC). Donzis was awarded compute time on Stampede2 through XSEDE. Both Stampede2 and XSEDE are funded by the National Science Foundation.

"On Stampede2, we ran a very large data set of shock-turbulence interactions at different conditions, especially at high turbulence intensity levels, with a degree of realism that is beyond what is typically found in the literature in terms of resolution at the small scales [and] in terms of the order of the scheme that we used," Donzis said. "Thanks to Stampede2, we can not only show how amplification factors scale, but also under what conditions we expect theory to hold, and under what conditions our previously proposed scaling is the more appropriate one."

Study lead author Chang Hsin Chen added, "We also looked at the structure of the shock and, through highly resolved simulations, we were able to understand how turbulence creates holes on the shock. This was only possible due to the computational power provided by Stampede2."

Another simulated view of turbulence coming in from the left, hitting the shock, and leaving the domain on the right. The two-dimensional picture shows the Q-criterion; the shock is the thin blue line. Credit: Chang-Hsin Chen, TAMU.

Donzis added that "Stampede2 is allowing us to run simulations, some of them at unprecedented levels of realism, in particular the small-scale resolution that we need to study processes at the very small scales of turbulent flows. Some of these simulations run on half of the machine, or more, and sometimes they take months to run."

Making progress in understanding what happens when turbulence meets shocks didn't come easy. Extreme resolution, on the order of billions of grid points, is needed to capture the sharp gradients of a shock in a highly turbulent flow.

"While we are limited by how much we can push the parameter range on Stampede2 or any other computer for that matter, we have been able to cover a very large space in this parameter space, spanning parameter ranges beyond what has been done before," Donzis said. The input/output (I/O) also turned out to be challenging in writing the data to disk at very large core counts.

"This is one instance in which we took advantage of the Extended Collaborative Support Services (ECSS) from XSEDE, and we were able to successfully optimize our strategy," Donzis said. "We are now confident that we can keep increasing the size of our simulations with the new strategy and keep doing I/O at a reasonable computational expense."

Donzis is no stranger to XSEDE, which he has used for years, going back to when it was called TeraGrid, to develop his group's codes: first on the LeMieux system at the Pittsburgh Supercomputing Center, then Blue Horizon at the San Diego Supercomputer Center, Kraken at the National Institute for Computational Sciences, and now Stampede1 and Stampede2 at TACC.

Better understanding of shock turbulence interactions could help develop supersonic and hypersonic aircraft, more efficient engine ignition, as well as probe the mysteries of supernova explosions, star formation, and more. NASA's Low-Boom Flight Demonstration supersonic aircraft illustrated here. Credit: NASA/Lockheed Martin.

"A number of the successes that we have today are because of the continued support of XSEDE, and TeraGrid, for the scientific community. The research we're capable of doing today and all the success stories are in part the result of the continuous commitment by the scientific community and funding agencies to sustain a cyberinfrastructure that allows us to tackle the greatest scientific and technological challenges we face and may face in the future. This is true not just for my group, but perhaps also for the rest of the scientific computing community in the U.S. I believe the XSEDE project and its predecessors in this sense have been a tremendous enabler," Donzis said.

The dominant theoretical framework for shock-turbulence interactions, explained Donzis, goes back to the 1950s, developed by Herbert Ribner while at the University of Toronto, Ontario. His work grounded the understanding of turbulence-shock interactions in a linear, inviscid theory, which assumes the shock to be a true discontinuity. The entire problem can thus be reduced to something mathematically tractable, where the results depend only on the shock's Mach number, the ratio of a body's speed to the speed of sound in the surrounding medium. As turbulence goes through the shock, it is typically amplified depending on the Mach number.

Experiments and simulations by Donzis and colleagues suggested this amplification depends on the Reynolds number, a measure of how strong the turbulence is, and on the turbulent Mach number, another parameter of the problem.

"We proposed a theory that combined all of these into a single parameter," Donzis said. "And when we proposed this theory a couple of years ago, we didn't have well-resolved data at very high resolution to test some of these ideas."

What's more, the scientists also explored shock jumps, which are abrupt changes in pressure and temperature as matter moves across a shock.

"In this study we developed and tested a new theoretical framework to understand, for example, why an otherwise stationary shock, starts moving when the incoming flow is turbulent," Donzis said. This implies that the incoming turbulence deeply alters the shock. The theory predicts, and the simulations on Stampede2 confirm, that the pressure jumps change, and how they do so when the incoming flow is turbulent. "This is an effect that is actually not accounted for in the seminal work by Ribner, but now we can understand it quantitatively," Donzis said.

Donzis is a firm believer that advances in HPC translate directly to benefits for all of society.

Said Donzis: "Advances in the understanding of shock turbulence interactions could lead to supersonic and hypersonic flight, to make them a reality for people to fly in a few hours from here to Europe; space exploration; and even our understanding of the structure of the observable universe. It could help answer, why are we here?"

The study, "Shock–turbulence interactions at high turbulence intensities," was published in May 2019 in the Journal of Fluid Mechanics. The study authors are Chang Hsin Chen and Diego A. Donzis, Department of Aerospace Engineering, Texas A&M University. The study authors gratefully acknowledge support from the National Science Foundation (grant OCI-1054966) and the Air Force Office of Scientific Research (grants FA9550-12-1-0443, FA9550-17-1-0107).

Stampede1, Stampede2, and the Extended Collaborative Support Services program are allocated resources of the Extreme Science and Engineering Discovery Environment (XSEDE) funded by the National Science Foundation (NSF).


Science Gateways for Developers and Operators

This page documents required and recommended steps for developers. For additional assistance, XSEDE provides Extended Collaborative Support Services (ECSS) and community mailing lists to assist gateway developers and administrators.

Science Gateways can democratize access to the cyberinfrastructure that enables cutting-edge science

What is an XSEDE Science Gateway?

An XSEDE Science Gateway is a web or application portal that provides a graphical interface for executing applications and managing data on XSEDE and other resources. XSEDE science gateways are community services offered by XSEDE users to their communities; each gateway is associated with at least one active XSEDE allocation. For an overview of the steps a gateway provider must take to start an XSEDE Science Gateway, see the Gateways for PIs page.

See the Science Gateways Listing for a complete list of current operational gateways.

Science gateway developers and administrators may include PIs as well as their collaborators, staff, and students. The PI should add these team members to the XSEDE allocation; see Manage Users for more details. It is recommended that the allocation have at least one user with the Allocation Manager role, in addition to the PI.

Operations Checklist

  1. The PI obtains an XSEDE allocation.
  2. The PI adds developer and administrator team members to the allocation.
  3. Register the gateway.
  4. Request that a community account be added to the allocation. The PI logs onto the XSEDE User Portal and selects "Community Accounts" from the My XSEDE tab.
  5. Add the XSEDE logo to the gateway. See https://www.xsede.org/web/guest/logos.
  6. Integrate the user counting scripts with the gateway's submission mechanism.
  7. Join the XSEDE gateway community mailing list (optional).

Building and Operating

Science gateways can be developed using many different frameworks and approaches. General issues include managing users, remotely executing and managing jobs on diverse XSEDE resources, tracking jobs, and moving data between XSEDE and the user environment. XSEDE-specific issues include tracking users, monitoring resources, and tracking use of the gateway allocation. For a general overview of best practices for building and operating a science gateway, please see the material developed by the Science Gateways Community Institute, an independently funded XSEDE service provider. The Institute provides support for different frameworks that can be used to build science gateways.

XSEDE supports a wide range of gateways and does not require specific middleware; gateways can use team-developed middleware or third party provided middleware. Gateways that run jobs and access data on XSEDE resources may be hosted on the PI's local servers or directly on XSEDE resources that support persistent Web services, middleware, and databases; these include Bridges, Comet, and Jetstream.

For gateway teams that would like additional development assistance, XSEDE supports the integration of science gateways with XSEDE resources through Extended Collaborative Support Services (ECSS). ECSS support can be requested as part of an allocation request; PIs can add ECSS support to an existing allocation through a supplemental request.

Managing User Accounts

XSEDE science gateways are community-provided applications. Gateway users are not required to have XSEDE accounts or allocations; XSEDE allows all users' jobs to run under the gateway's community account instead. Gateways thus map their local user accounts to the gateway's single community account. XSEDE does require quarterly reporting of the number of unique users who executed jobs on XSEDE resources, as described below.

XSEDE Community Accounts

XSEDE allows science gateways that run applications on behalf of users to direct all submission requests to a gateway community user account. Designated gateway operators have direct shell access to their community account, but normal users do not. The community account simplifies administration of the gateway, since the gateway administrators have access to input and output files, logs, etc., for all of their users, and users don't need to request individual gateway accounts.

A community account has the following characteristics:

  • Only a single community user account (i.e., an XSEDE username/password) is created.
  • The Science Gateway uses the single XSEDE community user account to launch jobs on XSEDE.
  • The gateway user running under the community account has privileges to run only a limited set of applications.

Requesting a Community Account: The PI or Allocation Manager with a registered gateway can request a community account by logging on to the XSEDE User Portal and selecting "Community Accounts" from the "My XSEDE" tab. Select community accounts on all allocated resources.

Accessing Community Accounts: Administrators access community accounts through SSH and SCP using the community account username and password provided with the account. Community accounts cannot be accessed from the XSEDE single sign-on hub.

Community Accounts on Sites with Two-Factor Authentication: Some XSEDE resources, including Stampede and Wrangler, require two-factor authentication. Gateways can request exceptions to this policy for their community accounts by contacting XSEDE Help Desk. The gateway will need to provide the static IP addresses of the server or servers it uses to connect to the resource.

Unique Science Gateway User Accounts

It is the gateway developer's responsibility, as described below, to implement gateway logins or otherwise uniquely identify users in order to track usage. These accounts can be local to the gateway and do not need to correspond to user accounts on XSEDE. The gateway maps these accounts to the gateway's common community account.
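
The sketch below shows one way a gateway backend might keep that mapping for usage tracking. It is a minimal sketch assuming a Python backend with a SQLite store; the table and column names are illustrative, not anything XSEDE requires.

import sqlite3
from datetime import datetime, timezone

# Illustrative local record of jobs submitted under the community account.
conn = sqlite3.connect("gateway_usage.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS job_log (
    job_id       TEXT PRIMARY KEY,   -- scheduler job ID returned at submission
    gateway_user TEXT NOT NULL,      -- local gateway username, not an XSEDE account
    resource     TEXT NOT NULL,      -- e.g. 'comet' or 'bridges'
    submitted_at TEXT NOT NULL       -- ISO 8601 timestamp (UTC)
)
""")

def record_submission(job_id, gateway_user, resource):
    """Record one job run on an XSEDE resource on behalf of a gateway user."""
    with conn:
        conn.execute(
            "INSERT INTO job_log VALUES (?, ?, ?, ?)",
            (job_id, gateway_user, resource,
             datetime.now(timezone.utc).isoformat()),
        )

A record like this is enough to answer the quarterly unique-user question discussed later on this page.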

Gateways may optionally choose to use XSEDE's OAuth2-based authentication process for authentication. This is a service provided by Globus Auth. ECSS consultants are available to assist with this integration.

The XSEDE Cyberinfrastructure Integration (XCI) team has completed writing and testing the document "User Authentication Service for XSEDE Science Gateways." This is an introduction to the user authentication service that XSEDE offers for science gateway developers and operators. This service provides a user "login" function so that gateway developers don't need to write their own login code or maintain user password databases.
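
As a rough illustration of what such a login might look like, the sketch below uses the Globus SDK for Python to run the OAuth2 authorization-code flow against Globus Auth and obtain a stable identity that the gateway can map to one of its local accounts. The client ID, client secret, and redirect URI are placeholders a real gateway would obtain by registering with Globus Auth, and details vary with SDK version, so treat this as a sketch rather than the documented XSEDE procedure.

import globus_sdk

CLIENT_ID = "YOUR-GATEWAY-CLIENT-ID"            # placeholder
CLIENT_SECRET = "YOUR-GATEWAY-CLIENT-SECRET"    # placeholder
REDIRECT_URI = "https://gateway.example.org/oauth2/callback"  # placeholder

client = globus_sdk.ConfidentialAppAuthClient(CLIENT_ID, CLIENT_SECRET)
client.oauth2_start_flow(redirect_uri=REDIRECT_URI,
                         requested_scopes="openid profile email")

# Step 1: send the user's browser to the Globus Auth login page.
print("Redirect the user to:", client.oauth2_get_authorize_url())

# Step 2: Globus Auth redirects back to REDIRECT_URI with ?code=...;
# exchange that code for tokens and identify the user.
def handle_callback(auth_code):
    tokens = client.oauth2_exchange_code_for_tokens(auth_code)
    access_token = tokens.by_resource_server["auth.globus.org"]["access_token"]
    info = client.oauth2_token_introspect(access_token)
    # 'sub' is a stable identifier the gateway can map to a local account.
    return info["sub"], info.get("username")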

Connecting to XSEDE Resources

The most common type of XSEDE science gateway allows users to run scientific applications on XSEDE computing resources through a browser interface. This section describes XSEDE policies and requirements for doing this.

Community Allocations

Gateways typically provide their users with a community-wide allocation acquired by the PI on behalf of the community. The gateway may implement internal restrictions on how much of this allocation a user can use.

If a user is consuming an excessive amount of resources, the gateway may require these "power users" to acquire their own allocations, either through the Startup or XRAC allocation process. After obtaining the allocation, the user adds the gateway community account to her/his allocation. The user's jobs still run under the community account, but the community account uses the user's, rather than the gateway PI's, allocation. This is implemented by adding the allocation string to the batch script. This is the standard -A option for the SLURM schedulers used by many XSEDE resources; see examples for Stampede, Comet, and Bridges. Gateway middleware providers may provide this service as a feature.
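
As a minimal sketch of how that switch might be implemented, the snippet below fills the SLURM -A (account) directive in a batch script template at submission time, using the community allocation unless the user has registered one of their own. The allocation strings, partition name, and application command are hypothetical; consult each resource's user guide for its actual batch options.

COMMUNITY_ALLOCATION = "TG-ABC123456"   # hypothetical community allocation string

BATCH_TEMPLATE = """#!/bin/bash
#SBATCH -J {job_name}
#SBATCH -A {allocation}
#SBATCH -p compute
#SBATCH -N 1
#SBATCH -t 01:00:00

# The -A line above selects which allocation the job is charged to.
./run_science_app {input_file}
"""

def build_batch_script(job_name, input_file, user_allocation=None):
    """Charge the user's own allocation when they have one, else the community allocation."""
    allocation = user_allocation or COMMUNITY_ALLOCATION
    return BATCH_TEMPLATE.format(job_name=job_name,
                                 allocation=allocation,
                                 input_file=input_file)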

Interacting with HPC Resources

Science gateways that run jobs on behalf of their users submit them just like regular users. For XSEDE's HPC resources, this means using the local batch scheduler to submit jobs and monitor them. For an overview, see the XSEDE Getting Started Guide. Gateways execute scheduler commands remotely through SSH and use SCP for basic file transfer. Gateways may choose to work with third party middleware and gateway framework providers to do this efficiently. For more information on third party software providers, consult the Science Gateways Community Institute service provider web site.
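
The sketch below shows one straightforward way a gateway backend could stage files and submit a job over SSH/SCP under the community account using Python's standard library. The login host, directories, and community account name are placeholders, and a production gateway would add key management, error handling, and queuing on top of this.

import subprocess

LOGIN = "community_user@login.example-resource.xsede.org"   # placeholder account and host
REMOTE_DIR = "/home/community_user/jobs/run001"             # placeholder working directory

def stage_and_submit(local_input, local_batch_script):
    """Copy inputs to the resource and submit the batch script with sbatch."""
    subprocess.run(["ssh", LOGIN, "mkdir", "-p", REMOTE_DIR], check=True)
    subprocess.run(["scp", local_input, local_batch_script,
                    f"{LOGIN}:{REMOTE_DIR}/"], check=True)
    script_name = local_batch_script.rsplit("/", 1)[-1]
    result = subprocess.run(
        ["ssh", LOGIN, f"cd {REMOTE_DIR} && sbatch {script_name}"],
        check=True, capture_output=True, text=True)
    # sbatch prints e.g. "Submitted batch job 123456"; keep only the job ID.
    return result.stdout.strip().split()[-1]

def job_state(job_id):
    """Query the scheduler for the job's current state (e.g. PENDING, RUNNING)."""
    result = subprocess.run(
        ["ssh", LOGIN, "squeue", "-h", "-j", job_id, "-o", "%T"],
        capture_output=True, text=True)
    return result.stdout.strip() or "NOT_IN_QUEUE"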

XSEDE ECSS consultants can assist gateways with HPC integration.

XSEDE Resources for Gateway Hosting

XSEDE includes resources that have special Virtual Machine (VM) and related capabilities for gateways and similar persistent services. These resources are allocated through the standard XSEDE allocation mechanisms.

  • Bridges is designed for jobs that need large amounts of shared memory. It also has allocatable VMs that have access to Bridges' large shared file system. VM users can directly access scheduler command-line tools for Bridges' computing resources from inside their VMs.
  • Comet, like Bridges, is a computing cluster with co-located Virtual Machines. Users can also request entire, self-contained Virtual Clusters that can run both the gateway services and computing jobs.
  • Jetstream is an XSEDE cloud computing resource. Gateway users can get persistent VMs for use in gateway service hosting. They can also get multiple VMs configured as a Virtual Cluster with a private scheduler for running computing jobs.

Science Gateway Usage Metrics: Unique Users per Quarter

XSEDE requires all gateways to report the number of unique users per quarter who have executed jobs on XSEDE resources. This is a key metric that XSEDE in turn reports to the NSF. Compliance with this requirement justifies XSEDE's investment in the science gateway community. XSEDE collects this information through a simple API that is integrated into the job submission process. XSEDE ECSS consultants are available to assist gateway developers to do this.  For instructions and information on the API, please see https://xsede-xdcdb-api.xsede.org/api/gateways.
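
The API linked above is the reporting mechanism; as an illustration only, the query below shows how a gateway might derive the quarterly number from the kind of local job log sketched earlier on this page (the job_log table and its columns are the same illustrative names, not an XSEDE schema).

import sqlite3

def unique_users_for_quarter(db_path, quarter_start, next_quarter_start):
    """Count distinct gateway users whose jobs ran on XSEDE resources in a quarter.

    Dates are ISO 8601 strings, e.g.
    unique_users_for_quarter("gateway_usage.db", "2019-07-01", "2019-10-01") for Q3 2019.
    """
    conn = sqlite3.connect(db_path)
    (count,) = conn.execute(
        "SELECT COUNT(DISTINCT gateway_user) FROM job_log "
        "WHERE submitted_at >= ? AND submitted_at < ?",
        (quarter_start, next_quarter_start),
    ).fetchone()
    conn.close()
    return count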

Security and Accounting

XSEDE has specific security and accounting requirements and recommendations for connecting to its resources, designed to help your gateway prevent and triage security incidents and inadvertent misuse.

Security and Accounting Requirements and Recommendations

The following security and accounting steps are required.

  • Required: Notify the XSEDE Help Desk immediately if you suspect the gateway or its community account may be compromised, or call the Help Desk at 1-866-907-2383.
  • Required: Keep Science Gateway contact info up to date on the Science Gateways Listing in case XSEDE staff should need to contact you. XSEDE reserves the right to disable a community account in the event of a security incident.
  • Required: Use the gateway_submit_attributes tool to submit gateway username with job.

Additional recommendations are as follows:

  • Collect Accounting Statistics
  • Maintain an audit trail (keep a gateway log; a minimal logging sketch follows this list)
  • Provide the ability to restrict job submissions on a per user basis
  • Safeguard and validate programs, scripts, and input
  • Protect user passwords on the gateway server and over the network
  • Do not use passwordless SSH keys.
  • Perform Risk and Vulnerability Assessment
  • Backup your gateway routinely
  • Develop an incident response plan for your gateway; review and update it regularly
  • Put a contingency plan in place to prepare for a disaster or security event that could cause the total loss or lock down of the server
  • Monitor changes to critical system files, such as SSH, with an open-source tool such as Tripwire or Samhain
  • Make sure the OS and applications of your gateway service are properly patched; run a vulnerability scanner such as Nessus against them
  • Make use of community accounts rather than individual accounts
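
As one concrete example of the audit-trail recommendation above, the sketch below uses Python's standard logging module to append one record per job submitted under the community account. The log path and fields are illustrative; the point is that the gateway username, origin address, job ID, and resource can be reconstructed after the fact.

import logging

audit = logging.getLogger("gateway.audit")
handler = logging.FileHandler("/var/log/gateway/audit.log")   # placeholder path
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
audit.addHandler(handler)
audit.setLevel(logging.INFO)

def log_submission(gateway_user, client_ip, job_id, resource):
    """Append one audit record per job submitted under the community account."""
    audit.info("user=%s ip=%s job=%s resource=%s",
               gateway_user, client_ip, job_id, resource)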

These are described in more detail below in separate sections. XSEDE ECSS support staff can assist with designing and implementing best practices. The Science Gateways Community Institute service provider also provides information on best practices.

What To Do In Case of a Security Incident

Whether a threat is confirmed or suspected, quick action and immediate communication with XSEDE Security Working Group is essential. Please contact the XSEDE Help Desk immediately at 1-866-907-2383.

Key Points
Gateways provide higher-level user interfaces to XSEDE resources, tailored to specific scientific communities.
XSEDE supports gateways through community accounts, gateway hosting, and extended collaborative support services.
Contact Information
XSEDE Science Gateways Expert
Science Gateways Community Institute


 


By Ken Chiacchia, Pittsburgh Supercomputing Center/XSEDE

August 5, 2019

The National Science Foundation is well positioned to support national priorities, as new NSF-funded HPC systems coming online in the upcoming year promise to democratize advanced computing and take advantage of new technologies, according to Jim Kurose, assistant director of Computer and Information Science and Engineering (CISE) at NSF. Kurose was speaking at the final keynote presentation of the PEARC19 conference on Aug. 1.

"If you look at these areas that are stated national priorities, you see that CISE and computing are generally at the center" of them, he said. "Computing plays … such a central role in all of these priority areas" such as AI, big data and cybersecurity. "These are the kinds of things that we do in this community."

Jim Kurose, NSF

PEARC19, held in Chicago last week (July 28-Aug. 1), explored current practice and experience in advanced research computing, including modeling, simulation and data-intensive computing. The primary focus this year was on machine learning and artificial intelligence. The PEARC organization coordinates the PEARC conference series to provide a forum for discussing challenges, opportunities and solutions among the broad range of participants in the research computing community.

"NSF is a very bottom-up institution," Kurose said, and the HPC community "has been really vocal about providing input … When I look at the tea leaves inside NSF, I see a focus on computation at large-scale facilities … I think that's going to be incredibly important."

Manish Parashar, director of NSF's Office of Advanced Cyberinfrastructure, noted that the CI discipline "cuts across all parts of NSF's mission, but also its priorities … [we] can extrapolate beyond that and say that it's even central to national priorities."

"Increasingly, we are realizing that science only happens when all [the] pieces come together," he added. "How do you combine the data, software, systems, networking and people?" The technology and scientific user community are changing rapidly, he noted, and NSF and the HPC community need to "continue thinking about what a cyberinfrastructure should look like and how … we evolve it" with innovations such as cloud computing and novel architectures balanced by computational stability.

Parashar introduced three new NSF-funded HPC systems, slated to come online in the coming year.

Bridges-2: High-Performance AI, HPC and Big Data

Nick Nystrom, chief scientist at the Pittsburgh Supercomputing Center (PSC), described their $10-million system, Bridges-2.

Bridges-2 converges high-performance AI with high-performance computing and emphasizes AI as a service, with ease of use, familiar software, interactivity and productivity as central goals, Nystrom said. A heterogeneous machine, Bridges-2 will feature Intel Ice Lake CPUs, advanced GPUs, compute nodes with various amounts of memory (256 GB, 512 GB and 4 TB of RAM), and cloud interoperability to facilitate a variety of workflows. Built in collaboration with HPE, the system will contain new technology such as an all-flash storage array for very rapid data access.

PSC plans to accept an initial round of proposals via XSEDE's allocations process in June to July 2020, with early users beginning work in August and production operations in October.

Expanse: A System Optimized for "Long-Tail" Users

Shawn Strande, deputy director of the San Diego Supercomputer Center (SDSC), described their new system, Expanse. The system, he said, is focused on "the long tail of science … It's not a box in a machine room that people log into and do stuff. [It's] connected with other things" in a way that addresses a broad range of computation and data analytics needs.

The $10-million acquisition will be optimized for small- to mid-scale jobs and machine learning. Dell is the primary HPC vendor and Aeon Computing will provide the storage. Expanse will feature 728 standard compute nodes, 52 GPU nodes, four large-memory nodes, 12 PB of performance storage, 7 PB of Ceph object storage, interactive containers and cloud burst capability with a direct connection to Amazon Web Services. The system will be cloud agnostic, supporting all of the major cloud providers.

Expanse will begin its production phase under XSEDE allocation in the second quarter of 2020.

Ookami: A Testbed for ARM Architecture

John Towns, principal investigator of XSEDE, introduced Stony Brook University's Ookami system on behalf of Robert Harrison, the new system's PI and professor and director of the Department of Applied Mathematics & Statistics at Stony Brook. The $5 million Ookami will be a testbed project in collaboration with RIKEN CCS in Japan, featuring ARM architecture via the A64FX processor. Each processor's 48 compute and four assistant cores will have 32 GB of RAM, which is sufficient to serve some 86 percent of XSEDE's historic workload.

As a testbed project, Ookami will be phased into XSEDE service, with two years of allocation managed by Stony Brook beginning in late 2020, followed by two years of XSEDE allocation.

XSEDE allocation processes and requirements can be found at xsede.org. The awards can be found at:

https://www.nsf.gov/awardsearch/showAward?AWD_ID=1928147

https://www.nsf.gov/awardsearch/showAward?AWD_ID=1928224

https://www.nsf.gov/awardsearch/showAward?AWD_ID=1927880

This story was published first in HPCwire on Aug. 5, 2019: https://www.hpcwire.com/2019/08/05/upcoming-nsf-cyberinfrastructure-projects-to-support-long-tail-users-ai-and-big-data/