
Resource Integration

The XSEDE Capabilities and Resource Interoperability (XCRI) team helps create a unified national cyberinfrastructure by providing software toolkits to CI providers, enabling the creation and maintenance of XSEDE-like resources at smaller institutions.

By tracking best practices found in the XSEDE ecosystem, XCRI brings lessons learned at large scale to institutions without the time, money, or local experience needed to easily implement research computing resources.

XCRI toolkits and services include:

  • The XSEDE-Compatible Basic Cluster (XCBC) software toolkit enables campus CI resource administrators to build a local cluster from scratch that is easily interoperable with XSEDE-supported CI resources. XCBC is simple in concept: pull the lever and a cluster is built for you, complete with an open-source resource manager/scheduler and the essential tools needed to run a cluster, configured in ways that mimic the basic setup of an XSEDE-supported cluster. The XCBC is based on the OpenHPC project and consists of XSEDE-developed Ansible playbooks and templates designed to ease the work required to build a cluster. Consult the XSEDE Knowledge Base for complete information about how to use XCBC to set up a cluster.

  • The XSEDE National Integration Toolkit (XNIT). Suppose you already have a cluster that you are happy with, and you want to add software tools that allow users to run open-source software like that on XSEDE, or other particular pieces of software that you think are important, but you don't want to rebuild your cluster to add that capability. XNIT is for you. You can add all of the basic software that is in XCBC, as relocatable RPMs (RPM Package Manager packages), via a YUM (Yellowdog Updater, Modified) repository. The RPMs in XNIT let you expand the functionality of your cluster in ways that mimic the setup of an XSEDE cluster. XNIT packages include specific scientific, mathematical, and visualization applications that have proven useful on XSEDE systems. Systems administrators may pick and choose what they want to add to their local cluster, and updates may be configured to run automatically or manually. Currently the XNIT repository is available for x86_64 systems running CentOS 6 or 7. Consult the XSEDE Knowledge Base for more information.

  • Optional software that can be added to clusters running XCBC or XNIT is also available. This includes a toolkit for installing a local Globus Connect server; Globus transfer is the recommended method for transferring data to any XSEDE system.

  • XCRI staff will travel in person to your campus to help implement XNIT, XCBC, or any other XCRI tools. After an initial phone consultation, we can assist onsite with configuration and ensure that you have the knowledge you need to maintain your system. That's right: XSEDE will pay to fly XSEDE staff to your campus to help with your campus cluster, even if you have no particular relationship to XSEDE. A list of places we have visited to give talks or help people set up clusters is available, and detailed descriptions of past campus visits are given in the XSEDE16 paper "Implementation of Simple XSEDE-Like Clusters: Science Enabled and Lessons Learned." These site visits are funded by XSEDE, including staff travel and lodging.
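As a rough sketch of the XNIT workflow described above, a site administrator typically drops a repository definition into /etc/yum.repos.d/ and then installs packages with YUM. The repository URL below is a placeholder for illustration only, not the actual XNIT location; consult the XSEDE Knowledge Base for the real repository definition.

```
# /etc/yum.repos.d/xnit.repo -- hypothetical example; the baseurl is a
# placeholder, not the real XNIT repository location.
[xnit]
name=XSEDE National Integration Toolkit (XNIT)
baseurl=https://example.org/xnit/centos7/x86_64/
enabled=1
gpgcheck=1

# With the repo file in place, packages install the usual way, e.g.:
#   yum search <keyword>
#   yum install <package-name>
```

Because the packages are relocatable RPMs, administrators can install only the subset they need, and YUM's normal update mechanisms (manual or automatic) apply.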

To understand the challenges XCRI is trying to address today, see our mission. To understand what successful work was done by the predecessor to XCRI - the Campus Bridging Group - in the first five years of XSEDE, see Campus Bridging History.


Current Campus Champions

Current XSEDE Campus Champions, by institution. Participation as either an Established Program to Stimulate Competitive Research (EPSCoR) or as a minority-serving institution (MSI) is also indicated.


Campus Champion Institutions  
Total Academic Institutions 212
      Academic institutions in EPSCoR jurisdictions 65
      Minority Serving Institutions 41
      Minority Serving Institutions in EPSCoR jurisdictions 14
Non-academic, not-for-profit organizations 21
Total Campus Champion Institutions 233
Total Number of Champions 409


See also the lists of Student Champions, Regional Champions and Domain Champions.

Institution Campus Champions EPSCoR MSI
Albany State University Ojo Olabisi  
Albert Einstein College of Medicine Aaron Golden    
Arizona State University Sean Dudley, Brandon Mikkelsen    
Arkansas State University Hai Jiang  
Auburn University Tony Skjellum  
Auburn University at Montgomery  
Austin Peay State University Justin Oelgoetz    
Bates College Kai Evenson  
Bentley University Jason Wells    
Bethune-Cookman University Ahmed Badi  
Boise State University Kyle Shannon, Jason Watt  
Boston University Shaohao Chen, Brian Gregor, Katia Oleinik    
Brown University Helen Kershaw, Mukul Dave  
California Baptist University Linn Carothers  
California Institute of Technology Tom Morrell    
California State Polytechnic University-Pomona Chantal Stieber  

Carnegie Institution for Science Floyd A. Fayton, Jr.    
Carnegie Mellon University Franz Franchetti    
Case Western Reserve University Emily Dragowsky    
Centre College David Toth  
Children's Research Institute, Children's Mercy Kansas City Shane Corder    
Clark Atlanta University Dina Tandabany  
Clemson University Marcin Ziolkowski, Xizhou Feng, Ashwin Srinath, Linh Ngo  
Clinton College Terris S. Riley  
Coastal Carolina University Will Jones, Mike Murphy, Thomas Hoffman  
Colby College Randall Downer  
Colorado School of Mines Torey Battelle    
Columbia University  George Garrett    
Complex Biological Systems Alliance Kris Holton    
Cornell University    
Duke University Tom Milledge    
Earlham College Charlie Peck    

Federal Reserve Bank of Kansas City, Center for the Advancement of Data and Research in Economics (CADRE) BJ Lougee, Chris Stackpole, Brad Praiswater    
Felidae Conservation Fund Kevin Clark    
Fisk University  
Florida A&M University Hongmei Chi  
Florida Atlantic University Rhian Resnick    
Florida International University David Driesbach, Cassian D'Cunha  
Florida Southern College David Mathias    
Florida State University Paul van der Mark    
George Mason University Jayshree Sarma, Dmitri Chebotarov    
George Washington University Adam Wong, Glen Maclachlan    
Georgia Southern University Brandon Kimmons    
Georgia State University Neranjan Edirisinghe Pathiran, Semir Sarajlic    
Georgia Institute of Technology Mehmet Belgin    
Gettysburg College Charles Kann    
Great Plains Network    
Harvard University Scott Yockel, Plamen Krastev, Francesco Pontiggia    
Harvard Medical School Jason Key    
Harvey Mudd College    
Hood College Xinlian Liu    
Howard University  
Idaho National Laboratory Ben Nickell, Eric Whiting  
Idaho State University Dong Xu  
Indiana University Abhinav Thota, Junje Li    
Indiana University of Pennsylvania John Chrispell    
Illinois Institute of Technology Jeff Wereszczynski     
Iowa State University James Coyle, Andrew Severin    
The Jackson Laboratory Shane Sanders    
Jackson State University Carmen Wright
James Madison University Isaiah Sumner, Yasmeen Shorish    
Johns Hopkins University Anthony Kolasny, Jaime Combariza    
Kansas State University  
Kennesaw State University Jon Preston, Dick Gayler    
Kentucky State University
KINBER Jennifer Oxenford    
Lafayette College Bill Thompson, Jason Simms    
Lamar University    
Langston University Abebaw Tadesse, Joel Snow
Lawrence Berkeley National Laboratory Andrew Wiedlea    
Lehigh University Alexander Pacheco    
Lock Haven University    
Louisiana State University Wei Feinstein  
Louisiana Tech University Don Liu  
Marquette University    
Marshall University Justin Chapman  
Massachusetts Green High Performance Computing Center (MGHPCC) Julie Ma    

Massachusetts Institute of Technology Christopher Hill, Lauren Milechin    
Medical University of South Carolina  
Michigan State University Andrew Keen, Yongjun Choi    
Michigan Technological University Gowtham    
Middle Tennessee State University Hyrum Carroll, Dwayne John    
Midwestern State University Eduardo Colmenares-Diaz    
Mississippi State University  
Missouri State University Matt Siebert    
Missouri University of Science and Technology Buddy Scharfenberg, Don Howdeshell    
Monmouth College    
Montana State University Jonathan Hilmer  
Montana Tech Bowen Deng  
Morehouse College Ken Perry, Jigsa Tola, Doreen Stevens  
National University Ali Farahani  
Navajo Technical University
New Mexico State University Alla Kammerdiner, Diana Dugas
New York University    
North Carolina A & T State University Dukka KC  
North Carolina Central University Caesar Jackson, Alade Tokuta  
Northern Arizona University Christopher Coffey    
Northwest Missouri State University Jim Campbell    
Northwestern University Alper Kinaci    
Northwestern State University (Louisiana Scholars' College)  
Ohio Supercomputer Center    
Oklahoma Innovation Institute John Mosher, George Louthan  
Oklahoma State University Dana Brunson, Jamie Hadwin, Jesse Schafer  
Old Dominion University Rizwan Bhutta, Wirawan Purwanto    
Oregon State University David Barber, CJ Keist, Chuck Sears, Todd Shechter
Penn State University Chuck Pavloski    
Portland State University Wiliam Garrick    
Princeton University Ian Cosden    
Purdue University Xiao Zhu, Tsai-wei Wu, Stephen Harrell    
Reed College Trina Marmarelli    
Rensselaer Polytechnic Institute Joel Giedt    
Rhodes College Brian Larkins    
Rice University Erik Engquist, Xiaoquin Huang    
Rutgers University Bill Abbott, Leslie Michelson, Paul Framhein, Galen Collier, Eric Marshall    
SBGrid Consortium Jason Key    
Saint Louis University Eric Kaufmann    
Saint Martin University Shawn Duan    
San Diego State University Mary Thomas  
Slippery Rock University of Pennsylvania Nitin Sukhija    
Sonoma State University Mark Perri  
South Dakota State University Kevin Brandt, Maria Kalyvaki  
Southeast Missouri State University Marcus Bond    
Southern Connecticut State University Yigui Wang    
Southern Illinois University Shaikh Ahmed, Chet Langin    
Southern Methodist University Amit Kumar, Merlin Wilkerson, Robert Kalescky    
Southern University and A & M College
Southwest Innovation Cluster Thomas MacCalla    
Stanford University Zhiyong Zhang    
Swarthmore College    
Temple University Richard Berger    
Tennessee Technological University Tao Yu    
Texas A & M University-College Station Rick McMullen, Dhruva Chakravorty, Robert Chenye, Jana McDonald    
Texas Southern University  
Texas State University Shane Flaherty, Richard Carney  
Texas Wesleyan University Terrence Neumann    
Tinker Air Force Base Zachary Fuchs, David Monismith    
The Translational Genomics Research Institute Gil Speyer    
Trinity College Peter Yoon    
Tufts University Shawn Doughty    
Tulane University  
United States Department of Agriculture - Agriculture Research Service Nathan Weeks    
United States Geological Survey Jeff Falgout, Janice Gordon    
United States Naval Academy    
University of Alabama at Birmingham John-Paul Robinson  
University of Alaska Liam Forbes
University of Arizona Cynthia Hart, Chris Deer, Ric Anderson, Todd Merritt, Chris Reidy, Adam Michel, Ryan Duitman, Dima Shyshlov    
University of Arkansas Pawel Wolinski  
University of Arkansas at Little Rock  
University of California-Berkeley Aaron Culich    
University of California-Davis Bill Broadley    
University of California-Irvine Harry Mangalam  
University of California-Los Angeles TV Singh    
University of California-Merced Jeffrey Weekley, Sarvani Chadalapaka  
University of California-Riverside Russ Harvey, Bill Strossman  
University of California-San Francisco Jason Crane    
University of California-Santa Barbara Burak Himmetoglu    
University of California-Santa Cruz Shawfeng Deng  
University of Central Florida Paul Wiegand, Jason Nagin    
University of Chicago Igor Yakushin    

University of Cincinnati Brett Kottmann    
University of Colorado Shelley Knuth    
University of Connecticut Ed Swindelles    
University of Delaware Anita Schwartz  
University of Florida Alex Moskalenko    
University of Georgia Guy Cormier    
University of Hawaii Gwen Jacobs, Sean Cleveland
University of Houston Jerry Ebalunode  
University of Houston-Clear Lake    
University of Houston-Downtown  
University of Idaho Lucas Sheneman  
University of Illinois at Chicago Himanshu Sharma  
University of Indianapolis Steve Spicklemire    
University of Iowa Brenna Miller, Sai Ramadugu    
University of Kansas Dan Voss  
University of Kentucky James Griffioen  
University of Louisiana at Lafayette  
University of Louisville Chris Cprek  
University of Maine System Bruce Segee, Steve Cousins  
University of Massachusetts Amherst Johnathan Griffin    
University of Michigan    
University of Minnesota Jim Wilgenbusch, Ben Lynch, Eric Shook, Joel Turbes, Doug Finley    
University of Missouri-Columbia Timothy Middelkoop, Susie Meloro, George Robb, Jacob Gotberg, Micheal Quinn    
University of Missouri-Kansas City    
University of Montana Tiago Antao  
University of Nebraska , Jingchao Zhang  
University of Nebraska Medical Center Ashok Mudgapalli  
University of Nevada-Las Vegas Sharon Tettegah
University of Nevada-Reno Fred Harris  
University of New Hampshire Grace Wilson-Caudill  
University of New Mexico Hussein Al-Azzawi, Ryan Johnson
University of New Mexico-Gallup  
University of New Orleans  
University of North Carolina    
University of North Carolina Wilmington Eddie Dunn, Ellen Gurganious    
University of North Dakota  
University of North Texas Charles Peterson, Damiri Young    
University of Notre Dame Dodi Heryadi, Scott Hampton    
University of Oklahoma Kali McLennan, Horst Severini, James Ferguson  
University of Oregon Nick Maggio, Robert Yelle, Chris Hoffman    
University of Pennsylvania Gavin Burris    
University of Pittsburgh Kim Wong, Fangping Mu, Ketan Maheshwari, Matt Burton    
University of Puerto Rico Mayaguez Ana Gonzalez
University of South Carolina Paul Sagona, Ben Torkian, Nathan Elger  
University of South Dakota Doug Jennewein  
University of Southern California Erin Shaw, Cesar Sul    
University of Southern Mississippi Brian Olson, Gopinath Subramanian   
The University of Tennessee-Chattanooga Craig Tanis, Ethan Hereth  
The University of Texas - Rio Grande Valley  
The University of Texas at Dallas Gi Vania, Frank Feagans, Jaynal Pervez    
The University of Texas at El Paso Vinod Kumar  
The University of Texas at Austin Kevin Chen    
University of Tulsa Peter Hawrylak  
University of Utah Anita Orendt    
University of Vermont Yves Dubief  
University of the Virgin Islands
University of Virginia Ed Hall    
University of Washington Chance Reschke    
University of Wisconsin-Milwaukee    
University of Wyoming  
Utah Valley University George Rudolph    
Vanderbilt University Will French    
Vassar College Christopher Gahn    
Virginia Tech University James McClure, Alana Romanella, Srijith Rajamohan    
Washburn University Karen Camarda, Steve Black  
Washington State University Jeff White    
Wayne State University Patrick Gossman, Michael Thompson, Aragorn Steiger    
West Virginia Higher Education Policy Commission Jack Smith  
West Virginia State University Sridhar Malkaram
West Virginia University  Nathan Gregg  
Wichita State University Gisuk Hwang
Yale University Kaylea Nelson, Benjamin Evans    
Youngstown State University Feng George Yu    

LAST UPDATED: October 13, 2017



Towards a Leadership-Class Computing Facility - Phase 1 (NSF 17-558) and Potential Collaborative Efforts with XSEDE

This solicitation presents opportunities for a new level of collaboration between XSEDE and NSF "Track 1" resources. The potential exists for Leadership-Class Computing Facility (LCCF) proposers to leverage existing services operated by XSEDE and to engage in collaborative efforts to develop, deliver, and support services that may be unique to the LCCF proposer's platform.

Managing Conflicts of Interest and Communications with XSEDE

To avoid conflicts of interest between proposers and XSEDE staff, XSEDE will provide all interested proposers with a list of items, efforts, and activities on which XSEDE is willing to collaborate with any potential proposer. Proposers may then request a letter of commitment from the XSEDE PI committing to the collaborations included in their proposal (via a standard commitment form letter), with the understanding that they select only from the menu of options provided.

If there are collaboration options not noted below which you are interested in exploring, XSEDE is willing to consider them, but, to be clear, in order to manage potential conflict of interest issues, XSEDE must offer any option to all potential proposers.

To further manage conflicts of interest, all potential proposers should direct their communications with XSEDE to the XSEDE PI (John Towns). All discussions with XSEDE staff should be arranged via the XSEDE PI.

The XSEDE Federation and the Service Providers Forum

XSEDE coordinates and integrates the national cyberinfrastructure funded by the NSF while also reaching out to coordinate, integrate, provide support services, and be as inclusive as possible with the broader community. The extended organization created by the amalgamation of the XSEDE program and the other organizations with which XSEDE collaborates is referred to as the XSEDE Federation, which includes many autonomous Service Providers. The XSEDE Federation comprises those providers and consumers of services that meet, to varying degrees, the requirements of XSEDE's interfaces and that engage in the effective, sustained interactions required to develop and evolve those interfaces.

When a resource or service participates in XSEDE, the provider can coordinate and request services from and/or provide them to XSEDE. Communications between XSEDE and Service Providers are managed via the XD Service Providers (SP) Forum, a part of the broader XSEDE Federation including all of XSEDE's partners in various forms. LCCF proposers planning to work with XSEDE should also plan to become a member of the SP Forum. For more information, see Requesting Membership in the XSEDE Federation, the XSEDE Service Provider Software and Services Baseline, and the XSEDE Service Provider Checklist, which include more information on the Service Provider Levels and associated services available via XSEDE. More information regarding these options is outlined below.

Three Levels of Service Providers (SPs) are defined within the XSEDE Federation. Service Providers are classified as being at a specific Level by meeting a minimum set of conditions, described in detail in the Requesting Membership in the XSEDE Federation document. These Levels reflect the degree of coordination/integration between the Service Provider and XSEDE. Level 1 Service Providers are the most tightly coupled with XSEDE. Level 2 and Level 3 Service Providers are more loosely coupled with XSEDE. The XSEDE Software and Services Table for Service Providers document describes the software and services integration expected by XSEDE when participating in the XSEDE Federation as a Service Provider at various Levels.

Depending on their Level of participation, Service Providers commit to integrate with XSEDE in a variety of ways, such as: integration with the XSEDE User Portal; use of the XSEDE Resource Allocation System and XRAC allocations process; XSEDE Information Services; participating in XSEDE working groups on a periodic basis; and verifying their integration annually. The Service Provider Checklist document is updated periodically and reviewed with each Service Provider annually to keep up with the ever evolving XSEDE cyberinfrastructure environment. Once a Service Provider decides or is required to participate with XSEDE, the first contact is with the XSEDE Service Provider Coordinator.

Estimated participation effort, by Level, is as follows. See the Service Provider Checklist document for the full list of integration components.

  • Level 1: approximately 320 hours of start-up effort for meetings, software installation and configuration, and integration activities, followed by approximately 160 hours of annual maintenance and troubleshooting effort.

  • Level 2, at a moderate level of integration (installation and integration of many, but not all, XSEDE software components): approximately 160 hours of start-up effort, followed by approximately 80 hours of annual maintenance and troubleshooting effort.

  • Level 3: requires only completing a Resource Description Repository (RDR) entry for the Service Provider resource and installing the Information Services Publishing Framework (IPF); approximately 80 hours of start-up effort, followed by approximately 40 hours of annual maintenance and troubleshooting effort.

Potential Collaboration Areas

Below XSEDE has provided a set of options for LCCF proposers to consider. These have been organized in a way that we hope is amenable to considering how they might mesh into proposals under development. Again, other areas are possible, and interested potential proposers should discuss this with the XSEDE PI. In each case, we have worked to provide a crisp definition of the collaboration opportunity and any associated costs that proposers would need to budget to support.

Education, Training, Outreach, and Community Engagement

The Education and Student Programs within XSEDE's CEE Workforce Development area provide a continuum of learning resources and services designed to address the needs and requirements of researchers, educators, developers, integrators, and students utilizing advanced digital resources. The Education and Student Programs deliver these services via curricular materials, faculty enhancement, and student engagement. In the context of proposals in response to the LCCF solicitation, the XSEDE Education and Student Programs can:

  1. Provide curricular materials for introducing real HPC in science, math, and computer science classes,
  2. Offer faculty workshops for professional development enabling computational science and HPC fundamentals to be infused into curricula,
  3. Facilitate and support efforts to extend existing educational programs with computational science and HPC, and
  4. Engage students in furthering their HPC skills with authentic internship opportunities that include expert mentorship and access to XSEDE resources.

In support of this effort, LCCF proposers can choose to participate in a number of ways. If LCCF proposers wish to have educational materials developed for a new course and wish XSEDE CEE staff to assist them, 25% of an FTE should be allocated to support this effort. If LCCF proposers wish to coordinate with XSEDE educational staff and participate actively in CEE education activities, 10% of an FTE should be allocated.

The Training group in XSEDE's CEE Workforce Development area delivers training in a variety of formats, including in-person, webinars, multi-site hands-on workshops, and online, asynchronous tutorials. We develop learning assessments in the form of badges, conduct peer reviews of XSEDE training resources, help users to find the appropriate materials for their needs, and coordinate with training leads in other organizations to avoid duplication of effort. In the context of proposals in response to the LCCF solicitation, the XSEDE Training group can:

  1. Offer training to the wider community on a broad array of computing topics, in a variety of formats (which often applies to most high-end compute platforms),
  2. Facilitate workshop listing and registration on the XSEDE portal, when the event is open to the wider community,
  3. Provide use of training accounts, when appropriate,
  4. Facilitate access to an LCCF proposers' training materials by incorporating them into XSEDE's training material listings, and
  5. Develop introductory resource-specific materials.

In support of this effort, LCCF proposers should allocate within their budget 25% of an FTE to serve as the liaison with and participant in XSEDE's training team, ensuring that training activities are well tailored to the proposed LCCF architecture and proposed activities.

The Broadening Participation (and Student Programs) group in XSEDE works to engage underrepresented communities in Science, Technology, Engineering and Mathematics (STEM), with a particular focus on advanced computing skills. The group hosts a variety of programs, including on-site workshops at minority serving institutions, the annual Advanced Computing for Social Change Challenge, and outreach efforts that engage underrepresented communities at conferences including SACNAS, National HBCU Week, K-16 Educational Justice, the Grace Hopper Celebration of Women, CAHSI, the Emerging Researchers National Conference, and Understanding Interventions. In the context of proposals in response to the LCCF solicitation, the XSEDE Broadening Participation and Student Programs group can:

  1. Collaborate to offer training workshops at minority serving and teaching institutions,
  2. Facilitate connections to a large group of contacts at minority serving institutions,
  3. Provide best practices developed within XSEDE for effective outreach to underrepresented communities, and
  4. Collaborate on additional Advanced Computing for Social Change programming and similar challenges.

In support of this effort, LCCF proposers should allocate within their budget 25% of an FTE to serve as the liaison with and participant in XSEDE's Broadening Participation group and participate in Broadening Participation and/or Diversity Forum calls where the broadening participation activities are coordinated. This level of support will ensure that the LCCF team is actively engaged in all CEE Broadening Participation activities and will ensure that new activities are developed in collaboration with the LCCF team.

The Campus Champions (CC) program, part of the XSEDE Campus Engagement program, works with research computing facilitators and other CI professionals at institutions nationwide, developing a community of practice among, so far, over 300 professionals at over 200 institutions, in every US state (plus 3 territories). In the context of proposals in response to the LCCF solicitation, the CC leadership will work closely with both the LCCF proposal leadership and the CCs, to:

  1. Help CCs to recognize computing-intensive and data-intensive investigations at their home institutions that would be appropriate for their LCCF proposal;
  2. Train CCs whose institutions have investigations appropriate for the proposed LCCF resource on the basics of using that resource;
  3. Provide such CCs with startup allocations to help their local researchers who have such investigations to get onto the system and port their code to it for testing and benchmarking purposes; and
  4. Help such local researchers to craft compelling resource allocation proposals.

In support of this effort, LCCF proposers should allocate within their budget at least 25% of an FTE to serve as the liaison with, and trainer of, these CCs.

Extended Collaborative Support Services (ECSS)

XSEDE provides in-depth (up to one year) collaborations between researchers and ECSS consultants, subject to peer review and the capacity of the ECSS consultants. ECSS staff have a wide range of skills, from optimizing code to integrating XSEDE resources into science gateways to delivering training and working with new communities to enhance their use of proposed resources and services. ECSS staff require accounts and training, as early as possible, on all systems where support is offered. ECSS support is available to all researchers with allocations obtained via the XSEDE XRAC process; this would also be true of the LCCF system for any allocations made through the XRAC.

For proposal teams wishing to utilize ECSS support for allocations made outside the XRAC process, additional ECSS staff can be contracted at a cost of $250k/FTE/year. In general, staff devote 25% time per year to an ECSS project. The LCCF team would work with ECSS management on staff assignments.

Resource Allocation Services

XSEDE's Resource Allocation Service (RAS) supports a range of mature and efficient services for members of the SP Forum at each of the three provider levels. Collaboration opportunities for LCCF proposers include managing allocations processes and NSF-approved policies for Startup, Education, and Research projects; accounting processes for tracking usage by users against allocations; and the ability for Service Providers to allow resource access via XSEDE's Single Sign-On Hub. Modifications to existing allocations, accounting, and identity management policies, procedures and capabilities can be explored through defined processes.

The solicitation indicates that 80% of the resource will be allocated via the PRAC process NSF has used with Blue Waters for allocating the bulk of that resource. LCCF proposers can leverage XSEDE-provided services both to manage the allocations made by NSF and support the allocations process and allocations management associated with the remaining 20% of the resource. To enable this, RAS supports the XSEDE Resource Allocation Service (XRAS), a hosted service that can provide allocation request submission, review, and administration capabilities for organizations that manage independent allocation processes. In addition, it can be used to manage allocations made via the PRAC process.

For any allocation made via the XRAC process, use of the XRAS service to support that process will come at no cost to the successful proposer, with the exception of any substantive changes necessary to the current process and procedures to accommodate any unusual needs. In this latter case (anticipated to be unusual), some additional staffing support from the LCCF proposer will be necessary to support those needs in the process.

For LCCF proposers that wish to use the XRAS service to support other allocation processes (tracking of PRAC-awarded allocations, an allocations process separate from the XRAC to allocate portions of the resource, etc.), the XRAS service is available at a cost of $10,000 per year (inflated at a rate of 2.75% annually) for basic XRAS support. This covers clients that can work with the existing XRAS implementation and do not require additional features or customization. If features are needed that do not currently exist in XRAS, XSEDE staff will work with the LCCF awardee to determine the scope of work and develop a separate contract for that development effort.

Infrastructure Services and Integration Support

XSEDE installs, connects, maintains, secures, and evolves an integrated cyberinfrastructure that incorporates a wide range of digital capabilities to support national scientific, engineering, and scholarly research efforts. Infrastructure and enterprise services are provided by XSEDE Operations, which focuses on cybersecurity, networking and data transfer, enterprise services, and providing an operations center for prompt frontline user support and initial issue ticket management.

As noted above in the XSEDE Federation and the Service Providers Forum section, when a Service Provider participates in XSEDE, the Service Provider can coordinate, request, and/or provide resources and/or services from and/or to XSEDE. The resources and/or services can be both traditional or new, novel, and innovative resources and/or services. XSEDE is making available to LCCF proposers services which they may leverage to support their proposed resources and/or services. These include:

XSEDE Operations Center (XOC) and Service Request/Ticket system (RT): XSEDE operates a 24x7 XOC that provides front line user assistance. The XOC provides timely and accurate assistance to the XSEDE community for a wide variety of user issues, and continuously monitors and provides front line troubleshooting for XSEDE user-facing systems and services. XSEDE also operates a service request/ticket system based on RT (Request Tracker, from Best Practical Solutions), where user and staff issues can be routed as service request tickets. Issues can be submitted via the XSEDE website, via email, or by calling the XOC and having operations staff enter the issue into XSEDE's RT. This makes it possible to track issues to resolution and retain a complete record of progress toward resolving each issue. An LCCF proposer can leverage this capability to provide frontline/helpdesk services to support all of their users, regardless of how they are allocated, for $200k/year (escalated at 2.75% annually). This includes both staffing support and support for the RT system. If the LCCF awardee opts to allocate 10% or more of their resource via the XRAC (and thus also join the SP Forum), this will be offered at a lower cost of $150k/year (escalated at 2.75% annually). Some Service Providers also use RT locally, and there is an available capability to route tickets between RT systems.
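Sites that route issues into an RT instance programmatically typically go through RT's REST interface. The sketch below only composes a request body following RT's REST 1.0 `ticket/new` convention; the queue name and requestor address are hypothetical placeholders, and no actual XSEDE endpoint is assumed. Submitting the payload would additionally require an HTTP client and credentials for the target RT server.

```python
def build_rt_ticket(queue, subject, text, requestor):
    """Return the form body RT's REST 1.0 interface expects for ticket/new."""
    # RT 1.0 uses an RFC 822-style key: value format; continuation lines of a
    # multi-line Text field must be indented by one space so RT parses them
    # as part of the same value.
    indented = text.replace("\n", "\n ")
    return (
        "id: ticket/new\n"
        f"Queue: {queue}\n"
        f"Requestor: {requestor}\n"
        f"Subject: {subject}\n"
        f"Text: {indented}\n"
    )

# Hypothetical example values, not XSEDE's actual queue or addresses.
payload = build_rt_ticket(
    queue="help",
    subject="Login node unreachable",
    text="Cannot reach the login node\nsince 09:00 UTC.",
    requestor="user@example.edu",
)
print(payload)
```

In a real deployment this payload would be sent as the `content` form field in a POST to the server's `/REST/1.0/ticket/new` path, authenticated per that site's RT configuration.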

XSEDEnet: XSEDEnet is the private point-to-point network provided by Internet2's Advanced Layer 2 Services (AL2S) platform. This network provides high-performance, integrated connectivity between XSEDE Service Providers. Service Providers at all levels participate in XSEDEnet, usually via the Internet2 AL2S access that most universities and research centers already have through their regional Internet service providers. If the proposer's regional Internet service provider already includes Internet2's AL2S as part of its services, this service will likely be free except for the local networking devices required to connect to the regional provider. If AL2S is not provided free to the proposer, contact Internet2 to determine the costs associated with obtaining AL2S services.

Domain: XSEDE operates and maintains the domain, and any Service Provider can participate in it. XSEDE-allocated Service Providers usually have access to {site} to list resources associated with XSEDE in the domain space. The {site} domain can even be delegated to the site's local networking group to maintain the DNS space. This service is available to Service Providers at all levels. The XSEDE Data Transfer Services group operates, manages, and coordinates the domain and can delegate DNS zones as it deems appropriate. Typically, tightly integrated (Level 1) Service Providers are allowed to manage their own DNS zones in the space.

XSEDE Security Infrastructure: Security is an important area in which to coordinate closely with XSEDE, especially for Level 1 and Level 2 allocated Service Providers. Incidents have been minimal over the last seven years, but because Service Providers are tightly integrated with XSEDE, an account compromise at one Service Provider risks spreading to another XSEDE participating site. XSEDE has multiple authentication services that SP Forum members can leverage, such as XSEDE's Kerberos, Certificate Authority (providing certificate credentials), CILogon, OAuth, and Duo two-factor authentication. XSEDE's cybersecurity team also has access to the Qualys security scanning tool, paid for by XSEDE for scanning XSEDE resources, which can include resources provided by SP Forum members. This service is used to scan all XSEDE Enterprise Services, wherever they are hosted, and is provided to Service Providers to scan canonical images of XSEDE- or PRAC-allocated resources. For example, Service Providers with allocations through the XRAC could request that one login node, a DTN, and a publicly accessible compute node be scanned regularly, though XSEDE would not be able to scan all compute nodes or any private hosts. LCCF proposers who wish to take advantage of this as members of the SP Forum should plan to budget 5% FTE of effort on their security team to support these scans. XSEDE Operations Cybersecurity also coordinates incident response across XSEDE and its SP Forum Service Provider partners. A security incident becomes an XSEDE security incident when it spreads across multiple SP Forum Service Provider resources or when XSEDE enterprise services are impacted. XSEDE security incident coordination is provided free to SP Forum members.

Single Sign-On Hub: XSEDE's Single Sign-On (SSO) login hub is a single point of entry to the XSEDE cyberinfrastructure. Upon logging into the hub with an XSEDE User Portal (XUP) username, password, and Duo two-factor authentication, a 12-hour proxy certificate is automatically generated for the user, allowing access to XSEDE resources via GSISSH for the duration of the proxy. GSISSH is further enhanced with shortcuts to XSEDE compute resources, so a user can log in without the resource-specific username and password. The XSEDE SSO hub accepts standard incoming SSH connections and does not allow the use of SSH keys. LCCF proposers can have their users set up to use the XSEDE SSO hub free of charge, as long as the LCCF proposer becomes a member of the SP Forum, those users have XSEDE User Portal accounts, and the LCCF proposer sets up the appropriate capabilities on their resource, integrated with XSEDE, to facilitate SSO access. This involves approximately 40-80 hours of staff time to establish the initial integration with XSEDE, depending on the Service Provider integration level. If the LCCF proposer is an unallocated (via the XRAC) Level 2 or Level 3 SP, there will be a one-time, incremental cost of $10,000, which they must bear, to integrate with XSEDE and implement this capability.

Coordination meetings: XSEDE has a number of integration and community information meetings. These include the biweekly SP Forum meeting, the XSEDE Campus Champions meeting, the monthly Service Provider Software meetings, the weekly XSEDE Service Provider cybersecurity coordination meetings, the as-needed cybersecurity incident response meetings, the monthly XRAS account management meetings, meetings for participation with XCI (XSEDE Cyberinfrastructure Integration team) in defining and developing the future cyberinfrastructure, and the XSEDEnet participants meeting. Fully integrated Service Providers have approximately 12 hours of meetings per month for participation as a Level 1 allocated resource in XSEDE. Level 2 and Level 3 participants will have lower monthly participation costs.

Community-driven software requirements and capabilities: XSEDE offers the research community a way to build a shared understanding of driving use cases in areas such as allocations, account management, authentication, authorization, security, remote login, batch computing, data capabilities, and community building, among others. This shared understanding is achieved through a transparent and lightweight engineering process with public tools and information resources. Using these processes and tools, users, software and software-based service providers, and infrastructure operators can maintain public transparency from driving use cases through to production capabilities.

If an LCCF proposer chose to collaborate with XSEDE in this area, it could provide its user, staff, and software provider community with a consistent public understanding of LCCF's important use cases and of which ones are shared among the LCCF proposer, XSEDE, and campuses. This would make it possible to leverage common software and software-based services, or to provide interoperability across these infrastructures. Building this shared understanding of driving use cases would require approximately 1 person-month of LCCF effort in the LCCF personnel budget, spread out over several months. No XSEDE effort would require support from the LCCF proposer.

Shared or interoperable software and software-based services: As noted above, some capabilities are generally well understood and have been listed as explicit offerings from XSEDE to LCCF proposers. Building on a shared understanding of the driving use cases, the LCCF awardee and XSEDE could achieve greater interoperability and further enable users to leverage all of their available distributed infrastructure, including personal systems, local/campus resources, XSEDE resources, and the LCCF resource. The effort required for LCCF and XSEDE to share software solutions or implement interoperable solutions will depend on those use cases and the available implementations, and cannot be predicted. Whether an LCCF proposer is interested in sharing software or services with XSEDE, or wishes to be interoperable with XSEDE and campus resources, the proposer should plan staff time appropriate to the level of sharing or interoperability desired.

Incorporating XSEDE Collaborative Efforts into Proposer Budgets

As noted above, potential conflict of interest issues make this process rather complicated. The approach XSEDE is taking here is to provide opportunities for collaboration while remaining blind to the specific plans of particular LCCF proposers. This issue has been raised with NSF via both the XSEDE project cognizant program officer and the LCCF solicitation cognizant program officer. LCCF proposers interested in leveraging and collaborating with XSEDE are encouraged to contact the LCCF cognizant program officer to discuss this issue and obtain guidance.

Key Points
Collaborating with XSEDE on NSF 17-558 Towards a Leadership-Class Computing Facility – Phase 1
In order to mitigate any potential conflicts of interest, all potential proposers should direct their XSEDE communications to John Towns, the XSEDE PI
Contact Information

Domain and Student Champions

Campus Champions programs include Regional, Student, and Domain Champions.



Student Champions

Institution Student Champion Mentor Year
Georgia State University Mengyuan Zhu Suranga Naranjan 2017
Georgia State University Kenneth Huang Suranga Naranjan 2020
Georgia State University Thakshila Herath Suranga Naranjan 2017
Jackson State University Ebrahim Al-Areqi Carmen Wright 2018
Jackson State University Duber Gomez-Fonseca Carmen Wright 2019
Oklahoma State University Raj Shukla Dana Brunson  
Rensselaer Polytechnic Institute James Flamino Joel Geidt 2022
Southern Illinois University Sai Susheel Sunkara Chet Langin 2018
Southern Illinois University Monica Majiga Chet Langin 2017
Southern Illinois University Sai Sandeep Kadiyala  Chet Langin 2017
Tufts University Georgios (George) Karamanis Shawn G. Doughty 2018
University of California - Merced Luanzheng Guo Sarvani Chadalapaka  
University of Central Florida Amit Goel Paul Weigand  
University of Florida David Ojika Oleksandr Moskalenko 2018
University of Houston-Downtown Eashrak Zubair Erin Hodgess 2020
University of Michigan Simon Adorf Brock Palen 2019
University of Pittsburgh Shervin Sammak Kim Wong  
University of South Dakota Joseph Madison Doug Jennewein 2018
University of Utah Khalid Ahmad Anita Orendt 2021
Virginia Tech University David Barto Alana Romanella 2020
Mississippi State University Nitin Sukhija Trey Breckenridge 2015
Oklahoma State University Phillip Doehle Dana Brunson 2016
Rensselaer Polytechnic Institute Jorge Alarcon Joel Geidt 2016
University of Arkansas Shawn Coleman Jeff Pummill 2014
University of Maryland Baltimore County Genaro Hernadez Paul Schou 2015
University of Houston Clear Lake Tarun Kumar Sharma Liwen Shih 2014
Virginia Tech University Lu Chen Alana Romanella 2017


Domain Champions

Domain Champions act as ambassadors by spreading the word about what XSEDE can do to boost the advancement of their field, based on their personal experience, and by connecting interested colleagues to the right people and resources in the XSEDE community (XSEDE Extended Collaborative Support Services (ECSS) staff, Campus Champions, documentation/training, helpdesk, etc.). Domain Champions work within their discipline, rather than within a geographic or institutional territory.

The table below lists our current domain champions. We are very interested in adding new domains as well as additional champions for each domain. Please contact if you are interested in a discussion with a current domain champion, or in becoming a domain champion yourself.

Domain Champion Institution
Astrophysics, Aerospace, and Planetary Science Matthew Route Purdue University
Data Analysis Rob Kooper University of Illinois
Finance Mao Ye University of Illinois
Molecular Dynamics Tom Cheatham University of Utah
Genomics Brian Couger Oklahoma State University
Digital Humanities Virginia Kuhn University of Southern California
Digital Humanities Michael Simeone Arizona State University
Genomics Thomas Doak, Carrie L. Ganote, Sheri Sanders, Bhavya Nalagampalli Papudeshi Indiana University, National Center for Genome Analysis Support
Chemistry and Material Science Sudhakar Pamidighantam Indiana University
Fluid Dynamics & Multi-phase Flows Amit Amritkar University of Houston
Chemistry Christopher J. Fennell Oklahoma State University
Geographic Information Systems Eric Shook University of Minnesota

Last Updated: October 2, 2017

Key Points
Student Champions
Regional Champions
Domain Champions
Contact Information

ECSS Symposium

ECSS staff share technical solutions to scientific computing challenges monthly in this open forum.

The ECSS Symposium gives the more than 70 ECSS staff members a monthly forum for exchanging information about successful techniques used to address challenging science problems. Tutorials on new technologies may be featured. Two 30-minute, technically focused talks are presented each month, each including a brief question-and-answer period. This series is open to everyone.

Symposium coordinates

Day and Time: Third Tuesdays @ 1 pm Eastern / 12 pm Central / 10 am Pacific
Add this event to your calendar.

Webinar (PC, Mac, Linux, iOS, Android): Launch Zoom webinar

iPhone one-tap (US Toll): +14086380968,350667546#
(or): +16465588656,350667546#

Telephone (US Toll): Dial: +1 408 638 0968
(or) +1 646 558 8656
International numbers available: Zoom International Dial-in Numbers
Meeting ID: 350 667 546

See the Events page for details of upcoming presentations. Upcoming events are also posted to the Training category of XSEDE News.

Due to the large number of attendees, only the presenters and host broadcast audio. Attendees may submit chat questions to the presenters through a moderator.

Video library

Videos and slides from past presentations are available (see below). Presentations from prior years that are not listed below may be available in the archive:

Key Points
Monthly technical exchange
ECSS community present
Open to everyone
Tutorials and talks with Q & A
Contact Information

2016 ECSS Symposium

Video content of ECSS Symposium for the year 2016

Key Points
Related Links
Contact Information

Champion Leadership Team

This page includes the Champions Leadership team and Regional Champions

Champion Leadership Team
Name Institution Position
Dana Brunson Oklahoma State University Campus Engagement Co-manager
Henry Neeman University of Oklahoma Campus Engagement Co-manager
Marisa Brazil Purdue University Champion Coordinator
Jeff Pummill University of Arkansas Regional Champion Coordinator
Aaron Culich University of California-Berkeley Champion Leadership Team
Erin Hodgess University of Houston-Downtown Champion Leadership Team
Alla Kammerdiner New Mexico State University Champion Leadership Team
Doug Jennewein University of South Dakota Champion Leadership Team
Jack Smith West Virginia Higher Education Policy Commission Champion Leadership Team
Dan Voss University of Kansas Champion Leadership Team


Regional Champions

The Regional Champion Program is built upon the principles and goals of the XSEDE Champion Program. The Regional Champion network facilitates education and training opportunities for researchers, faculty, students, and staff in their region, helping them make effective use of local, regional, and national digital resources and services. Additionally, the Regional Champion Program provides oversight and assistance within a predefined geographical region, ensuring that all Champions in that region receive the information and assistance they require. It also establishes a bi-directional conduit between Champions in the region and the XSEDE champion staff, enabling more efficient dissemination of information and finer-grained support. Finally, Regional Champions act as regional points of contact and coordination, helping to scale up the Champion program by working with champion staff to identify opportunities for expanding outreach to the user community.

Regional Champions are coordinated by Jeff Pummill.

Ben Nickell Idaho National Labs Nick Maggio University of Oregon 1
Ruth Marinshaw Stanford University Aaron Culich University of California, Berkeley 2
Chester Langin Southern Illinois University Aaron Bergstrom University of North Dakota 3
Dan Andresen Kansas State University Timothy Middelkoop University of Missouri 4
Mark Reed University of North Carolina Craig Tanis University of Tennessee, Chattanooga 5
Scott Hampton University of Notre Dame Stephen Harrell Purdue University 6
Scott Yockel Harvard University Scott Valcourt University of New Hampshire 7
Anita Orendt University of Utah Shelley Knuth University of Colorado 8



Key Points
Leadership table
Regional Champions table
Contact Information

Campus Champions Fellows Program

The Fellows Program partners Campus Champions with Extended Collaborative Support Services (ECSS) staff and research teams to work side by side on real-world science and engineering projects.

The cyberinfrastructure expertise developed by high-end application support staff in XSEDE's Extended Collaborative Support Services (ECSS) program can be difficult to disseminate to the many researchers who would benefit from it. Among their many roles, XSEDE Campus Champions (CC) serve as local experts on national cyberinfrastructure resources and organizations such as XSEDE, and are closely connected to those doing cutting-edge research on their campuses. The goal of the CC Fellows program is to increase cyberinfrastructure expertise on campuses by including Champions as partners in ECSS projects.

The Fellows program partners Campus Champions with ECSS staff and research teams to work side by side on real-world science and engineering projects. In 2015, the types of projects offered expanded beyond ECSS projects. In addition to ECSS, Champions now have the opportunity to work with XSEDE Cyberinfrastructure Integration (XCI) to develop on-ramps from campuses to XSEDE, to work with Community Engagement & Enrichment staff to help create a formal undergraduate or graduate minor, concentration, or certificate program at their institution, or to design a project of their choosing. Fellows develop expertise within varied areas of cyberinfrastructure, and they are already well positioned to share their advanced knowledge through their roles as established conduits to students, administrators, professional staff, and faculty on their campuses. A directory of Fellows will expand this influence even further by creating a network of individuals with these unique skill sets. In addition to the technical knowledge gleaned from their experiences, individual Fellows benefit from their personal interactions with ECSS staff and acquire the skills necessary to manage similar user or research group project requests on their own campuses. The Campus Champions Fellows program is a rare opportunity for a select group of individuals to learn first-hand about the application of high-end cyberinfrastructure to challenging science and engineering problems.

Another partner opportunity - ECSS Affiliates

Accepted Fellows make a 400-hour time commitment and are paid a $15,000 annual stipend for their efforts. The program includes funding for two one- to two-week visits to an ECSS or research team site to enhance the collaboration and also funding to attend and present at a Fellows symposium at an XSEDE conference.

The following are the types of skills that may be developed, depending on project assignments:

  • Use of profiling and tracking tools to better understand a code's characteristics
  • CUDA programming
  • Hybrid (MPI/OpenMP) programming
  • Optimal use of math libraries
  • Use of visualization tools, with a particular focus on large data sets
  • Use of I/O tools and software such as HDF5, MPI IO, and parallel file system optimization
  • Optimal use of scientific application software such as AMBER, ABAQUS, RaxML, MrBayes, etc.
  • Application of high-performance computing and high-throughput computing to non-traditional domains such as computational linguistics, economics, genomics
  • Single-processor optimization techniques
  • Benchmarking, including concepts of sockets, binding processes to cores, and impacts of these in optimization
  • Optimal data transfer techniques, including the use of GridFTP and Globus Online
  • Cluster scheduling, tuning for site-specific goals
  • Managing credentials, security practices
  • Monitoring resources with information services
  • Understanding failure conditions and programming for fault tolerance
  • Automated work and data flow systems
  • Web tools and libraries for building science gateways or portals that connect to high-end resources
  • Data management through tools such as iRODS and SAGA

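As a small illustration of the first skill in the list above (using profiling tools to understand a code's characteristics), Python's standard-library profiler can be driven entirely from within a script. The function names below are invented for the example:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately naive accumulation so the profiler has something to show.
    total = 0
    for i in range(n):
        total += i * i
    return total

def run():
    return sum(slow_sum(10_000) for _ in range(50))

# Profile only the region of interest.
profiler = cProfile.Profile()
profiler.enable()
run()
profiler.disable()

# Render the statistics, sorted by cumulative time, into a string.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(10)  # top entries by cumulative time
report = stream.getvalue()
print("slow_sum profiled:", "slow_sum" in report)
```

The same workflow scales to real applications: wrap the hot region in `enable()`/`disable()`, then inspect which functions dominate cumulative time before attempting optimization.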
For questions about the program please contact

Key Points
Campus Champions as local experts
Campus Champions work with ECSS staff and research teams
Campus Champions responsibilities and skills

Campus Champions

Computational Science & Engineering makes the impossible possible; high performance computing makes the impossible practical

What is a Campus Champion?

A Campus Champion is an employee of, or affiliated with, a college or university (or other institution engaged in research), whose role includes helping their institution's researchers, educators and scholars (faculty, postdocs, graduate students, undergraduates, and professionals) with their computing-intensive and data-intensive research, education, scholarship and/or creative activity, including but not limited to helping them to use advanced digital capabilities to improve, grow and/or accelerate these achievements.

What is the Campus Champions Program?

The Campus Champions Program is a group of 300+ Campus Champions at 200+ US colleges, universities, and other research-focused institutions, whose role is to help researchers at their institutions to use research computing, especially (but not exclusively) large scale and high end computing.

Campus Champions peer-mentor each other, to learn to be more effective. The Campus Champion community has a very active mailing list where Champions exchange ideas and help each other solve problems, regular conference calls where we learn what's going on both within the Champions and at the national level, and a variety of other activities.

Benefits to Campus Champion Institutions

  • A Campus Champion gets better at helping people use computing to advance their research, so their institution's research becomes more successful.
  • There is no charge to the Campus Champion institution for membership.

Benefits to the Campus Champion

  • A Campus Champion becomes more valuable and more indispensable to their institution's researchers, and therefore to their institution.
  • The Campus Champions Program is a lot of fun, so Champions can enjoy learning valuable strategies.

What does a Campus Champion do as a member of the CC Program?

  • Participate in Campus Champions Program information sharing sessions such as the Campus Champions monthly call and email list.
  • Participate in peer mentoring with other Campus Champions, learning from each other how to be more effective in their research support role.
  • Provide information about national Cyberinfrastructure (CI) resources to researchers, educators and scholars at their local institution.
  • Assist their local institution's users to quickly get start-up allocations of computing time on national CI resources.
  • Serve as an ombudsperson, on behalf of their local institution's users of national CI resources, to capture information on problems and challenges that need to be addressed by the resource owners.
  • Host awareness sessions and training workshops for their local institution's researchers, educators, students, scholars and administrators about institutional, national and other CI resources and services.
  • Participate in some or all of the Campus, Regional, Domain, and Student Champion activities.
  • Submit brief activity reports on a regular cadence.
  • Participate in relevant national conferences, for example the annual SC supercomputing conference and the PEARC conference.
  • Participate in education, training and professional development opportunities at the institutional, regional and national level, to improve the champion(s)' ability to provide these capabilities.

What does the Campus Champions program do for the Campus Champions?

  • Provide a mailing list for sharing information among all Campus Champions and other relevant personnel.
  • Provide the Campus Champions with regular correspondence on new and updated CI resources, services, and offerings at the national level, including but not limited to the XSEDE offerings.
  • Provide advice to the Campus Champions and their institutions on how to best serve the institution's computing- and data-intensive research, education and scholarly endeavors.
  • Provide education, training and professional development for Campus Champions at conferences, Campus Champion meetings, training events, and by use of online collaboration capabilities (wiki, e-mail, etc.).
  • Help Champions to pursue start-up allocations of computing time on relevant national CI resources (currently only XSEDE resources, but we aspire to expand that), to enable Campus Champions to help their local users get started quickly on such national CI resources.
  • Record success stories about impact of Campus Champions on research, education and scholarly endeavors.
  • Maintain a web presence and other social media activity that promotes the Campus Champions Program and lists all active Campus Champions and their institutions, including their local institution and its Campus Champion(s).
  • Raise awareness of, and recruit additional institutions and Campus Champions into the Campus Champions Program.
  • Provide Campus Champions with the opportunity to apply for the XSEDE Champion Fellows Program (and aspirationally other programs), to acquire in-depth technical and user support skills by working alongside XSEDE staff experts.
  • Provide Campus Champions information to participate in subgroup activities, such as the Regional Champion initiative.

Become a Champion

  • Write to and ask to get involved
  • We'll send you a template letter of collaboration
  • Ask questions, add signatures, send it back, and join the community

In addition to traditional Campus Champions, the Champion Program now includes the following types of specialized Champions:


  • Student Champions - offering a unique student perspective on use of digital resources and services
  • Regional Champions - regional point of contact to help scale the Champion Program
  • Domain Champions - spread the word about what XSEDE can do to boost the advancement of their field.

Key Points
Program serves more than 200 US colleges and universities
Aimed at making the institution's research more successful
Free membership
Contact Information

Allocations Splash page

XSEDE Allocations award eligible users access to compute, visualization, and/or storage resources as well as extended support services. XSEDE offers various types of allocations, from short-term exploratory requests to year-long projects. To access XSEDE resources, you must have an allocation. Submit your allocation requests via the XSEDE Resource Allocation System (XRAS) in the XSEDE User Portal.

View the latest XRAC Announcements including latest XRAC submission information, new and retiring XSEDE resources and more.

Trial Allocations granting rapid access to resources are now available on SDSC's new Comet cluster.

New to XSEDE Allocation Process?

Once you are ready to get started the following documentation will help guide you through the allocation process:

Create a user portal account & start your allocation request using XRAS.

Are you an existing PI, co-PI or Allocation Manager with an XSEDE allocation? Read more about how to Manage your Allocation on the XSEDE User Portal.

Key Points
Related Links
Contact Information

XSEDE Resources

The XSEDE ecosystem encompasses a broad portfolio of resources operated by members of the XSEDE Service Provider Forum. These resources include multi-core and many-core high-performance computing (HPC) systems, distributed high-throughput computing (HTC) environments, visualization and data analysis systems, large-memory systems, data storage, and cloud systems. Some resources provide unique services for science gateways. Some of these resources are made available to the user community through a central XSEDE-managed allocations process, while many other resources operated by Forum members are linked to other parts of the ecosystem.

If you're not sure where to start or what resource might be right for you, just ask! Send a note and XSEDE staff will point you in the right direction.

Allocated Resources

Almost all U.S.-based university and non-profit researchers are eligible to request allocations via XSEDE for access to nearly two dozen computational and storage resources. Visit the XSEDE User Portal for detailed descriptions of the resources available as well as for instructions on how to request an allocation.

Researchers requiring large amounts of computing power for tightly coupled MPI, OpenMP, and hybrid programs should look to XSEDE-allocated high-performance computing (HPC) resources. For massively parallel jobs that require far less communication and synchronization, researchers may want to consider high-throughput computing (HTC) options. Additional specialized systems run GPGPU codes, memory-intensive problems, visualization, data analytics, and research clouds. Many computing resources also provide Science Gateway capabilities. XSEDE-allocated storage resources are available to satisfy most users' practical needs. Several XSEDE service providers also host storage platforms providing services such as data management, data collections hosting, and large-scale persistent storage.

Stampede2 at the Texas Advanced Computing Center

Other Ecosystem Resources

XSEDE provides other opportunities for service providers to connect resources to the ecosystem. Some providers, including NCAR and NCSA, use the XSEDE-hosted XRAS software-as-a-service to manage separate allocation processes for their resources. Other providers are leveraging XSEDE's single sign-on (SSO) hub to manage access to their resources. Many providers join the XSEDE ecosystem to take advantage of XSEDE-provided software toolkits and other integrated services.

Allocated Resource Listing

Resource Org Type Startup Allocation Limit
PSC Regular Memory PSC compute 50000
Bridges GPU PSC compute 2500
Bridges Large (PSC Large Memory Nodes) PSC compute 1000
Bridges Pylon (PSC Storage) PSC storage N/A
SDSC Dell Cluster with Intel Haswell Processors SDSC compute 50000
Comet GPU (SDSC Comet GPU Nodes) SDSC compute 2500
Data Oasis (SDSC medium-term disk storage) SDSC storage N/A
Jetstream Indiana University/TACC compute 50000
Jetstream Storage (IU/TACC Storage) Indiana University/TACC storage N/A
HP/NVIDIA Interactive Visualization and Data Analytics System TACC visualization 3000
Open Science Grid OSG compute 200000
TACC Long-term tape Archival Storage TACC storage N/A
TACC Dell/Intel Knight's Landing System TACC compute 1600
LSU Cluster LSU CCT compute 50000
TACC Data Analytics System UT Austin compute 1000
Wrangler Storage (IU Long-term Storage) Indiana University/TACC storage N/A
Stanford University GPU Cluster Stanford U compute 5000

Key Points
A diverse portfolio of XSEDE-allocated resources is available to support your research.
Most US-based researchers are eligible for no-cost allocations via XSEDE. Get started in two weeks or less!
Members of the XSEDE Service Provider Forum operate additional resources that may be available to certain users.
Contact Information


Keep up to date with science research. See our XSEDE IMPACT newsletter.

Best of XSEDE13

Highlights of XSEDE13 conference

XSEDE13 conference selects best papers, posters, visualizations and more

The XSEDE13 conference—held July 22-25 at the Marriott Marquis and Marina, San Diego—featured more than 60 papers, more than 85 posters, 10 lightning talks, nine visualizations, and 10 teams comprising students from 40 universities and high schools in the student programming competition. From among this slate of competitors, the following were selected for awards:

Best Science & Engineering Track paper / Best Paper
The awardee in the Science & Engineering track was also selected as best overall paper at XSEDE13.

Using Lucene to Index and Search the Digitized 1940 U.S. Census by Liana Diesendruck, Rob Kooper, Luigi Marini and Kenton McHenry, all from the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign


The Phil Andrews Best Technology Track paper

CILogon: A Federated X.509 Certification Authority for CyberInfrastructure Logon by Jim Basney, Terry Fleury and Jeff Gaynor, all of NCSA


Best Software & Software Environments Track paper

Embedding CIPRES Science Gateway Capabilities in Phylogenetics Software Environments by Mark Miller, Terri Schwartz and Wayne Pfeiffer, all from the San Diego Supercomputer Center/University of California, San Diego


Best Education, Outreach and Training Track paper / Best Student Paper

The best Education, Outreach and Training Track paper was also selected as the best student paper.

Human Centered Game Design for Bioinformatics and Cyberinfrastructure Learning by Daniel Perry, University of Washington; Cecilia Aragon, University of Washington; Stephanie Cruz, University of Washington; Mette Peters, Sage Bionetworks; and Jeanne Ting Chowning, Northwest Association for Biomedical Research


Best Poster

Transcriptional regulatory networks of single cells during in vitro hepatic differentiation of human pluripotent stem cells by Rathi Thiagarajan, Thomas Touboul, Robert Morey, and Louise Laurent, University of California, San Diego.


Best Student Poster, high school

Application of Color-recognition Algorithms in the Wide Galactic Field Survey from Planck Space Observatory by Anoushka Bose from San Diego's Francis Parker School


Best Student Poster, undergraduate (TIE)

There was a tie for Best Student Poster, with awards going to Anthony Frachioni, Binghamton University, for LAMMPS based umbrella sampling of the free energy landscape in Sn alloys and to Shobhit Garg, Vahe Peroomian and Mostafa El-Alaoui, UC Los Angeles, for Using High-Performance Computing to Study the Earth's magnetosphere


Best Student Poster, graduate

Heavy Vehicle Viscous Pressure Drag Optimization by David Manosalvas and Antony Jameson, Stanford


Best Visualization (selected by XSEDE13 attendee votes)

The First Star: Birth through Death by Matthew Turk, Columbia University; John Wise, Georgia Tech; Sam Skillman, University of Colorado; and Mark Subbarao, Adler Planetarium


Best Lightning Talk (selected by XSEDE13 attendee votes)

Optimizing the PCIT algorithm on Stampede's Xeon and Xeon Phi processors for faster discovery of biological networks by Lars Koesterke, Kent Milfeld, Dan Stanzione, Matt Vaughn, Texas Advanced Computing Center at The University of Texas at Austin; James Koltes, James Reecy and Nathan Weeks, Iowa State University


Student programming competition

  • First place: Ben Albrecht, University of Pittsburgh; Zhe Zhang, University of Nebraska, Lincoln; Travis Boettcher, University of Wisconsin, Eau Claire; Matt Armbruster, University of Nebraska; Cassandra Schaening, University of Puerto Rico
  • Second place: David Toth, Alex Gilley, Zack Goodwyn, Jerome Mueller, Russell Ruud, University of Mary Washington
  • Third place: Grace Silva, UNC - Chapel Hill; Melva James, Clemson University; Tracey Fernandez, New Mexico State University; Allison Hall, UC Berkeley; Lydon Ellsworth, Navajo Technical College; Giancarlo Sanguinetti, Lehigh University
  • Most Creative Solution: Erik Parreira, University of California, San Diego; Utkrisht Rajkumar, Mira Mesa High School; Shraman Ray Chaudhuri, MIT;  Eric Dillmore, UT Dallas; Katherine Prutz and Annalise Labatu,  Louisiana School for Math, Science, and the Arts
  • Fan Favorite: Nicholas Szapiro, University of Oklahoma; Diego Mesa and Wendy Vasquez, University of California, San Diego; Rodney Pickett, Michigan State University; Augusto Seminario, UC Berkeley; Latifa Jackson, Drexel University; Victoria Nneji, Columbia University

Key Points
Award Winners
Related Links
Contact Information

Science Gateways

Science Gateways are community-provided interfaces to XSEDE resources. This section provides a general gateways overview and describes XSEDE support for science gateways, with links to more detailed information on the XSEDE user portal and external sites.

XSEDE Start-up and Educational allocations require only a one-paragraph project description

A Science Gateway is a community-developed set of tools, applications, and data integrated through a portal or a suite of applications, usually via a graphical user interface, and customized to meet the needs of a specific community. Gateways enable entire communities of users associated with a common discipline to use national resources through a common interface that is configured for optimal use. Researchers can focus more on their scientific goals and less on assembling the cyberinfrastructure they require. Gateways can also foster collaborations and the exchange of ideas among researchers.

Using Science Gateways

Gateways are independent projects, each with its own guidelines for access. Most gateways are available for use by anyone, although they usually target a particular research audience. XSEDE Science Gateways are portals to computational and data services and resources across a wide range of science domains for researchers, engineers, educators, and students. Depending on the needs of the communities, a gateway may provide any of the following features:

  • High-performance computation resources
  • Workflow tools
  • General or domain-specific analytic and visualization software
  • Collaborative interfaces
  • Job submission tools
  • Education modules

Science Gateways simplify access to computing resources by hiding infrastructure complexities

Developing and Integrating Science Gateways

XSEDE supports science gateways in several ways.

  • Most XSEDE service providers support community accounts, which allow gateways to execute scientific applications on XSEDE resources as a generic gateway user.  This eliminates the need for users of a science gateway to create XSEDE accounts.
  • XSEDE directly provides Virtual Machine hosting for science gateways and their related services.
  • XSEDE provides Extended Collaborative Support services to help science gateway providers integrate both new and existing science gateways with XSEDE resources.

See the Gateways Listing to explore the current XSEDE gateways projects.

How to Turn Your Project into a Science Gateway

  1. Get an XSEDE allocation; Start-up and Educational allocations require only a one-paragraph project description. For more information, visit the Allocations section of the website.
  2. Register your project as an XSEDE Gateway.
  3. Build a portal with guidance from Gateways for Developers and Operators.
  4. Set up your developer accounts by Adding users to an existing allocation. Also, set up your Gateway community accounts.

Key Points
Gateways provide higher level user interface for XSEDE resources that are tailored to specific scientific communities.
XSEDE supports gateways through community accounts, gateway hosting, and extended collaborative support services.
Contact Information
XSEDE Science Gateways Expert
Science Gateways Community Institute

Science Gateways Listing

This page lists all of the current science gateways. The list membership changes occasionally, as new projects join the community. Find the related science domains and links to the gateway home pages within the table below.

Key Points
Currently there are 34 Gateways listed
Contact Information
XSEDE Science Gateways Expert
Science Gateways Community Institute

Science Gateways for Developers and Operators

This page documents required and recommended steps for developers. For additional assistance, XSEDE provides Extended Collaborative Support Services (ECSS) and community mailing lists to assist gateway developers and administrators.

Science Gateways can democratize access to the cyberinfrastructure that enables cutting-edge science

What is an XSEDE Science Gateway?

An XSEDE Science Gateway is a web or application portal that provides a graphical interface for executing applications and managing data on XSEDE and other resources. XSEDE science gateways are community services offered by XSEDE users to their communities; each gateway is associated with at least one active XSEDE allocation. For an overview of the steps a gateway provider must take to start an XSEDE Science Gateway, see the Gateways for PIs page.

See the Science Gateways Listing for a complete list of current operational gateways.

Science gateway developers and administrators may include PIs as well as their collaborators, staff, and students. The PI should add these team members to the XSEDE allocation; see Manage Users for more details. It is recommended that the allocation have at least one user with the Allocation Manager role, in addition to the PI.

Operations Checklist

  1. The PI obtains an XSEDE allocation.
  2. The PI adds developer and administrator team members to the allocation.
  3. Register the gateway.
  4. Request that a community account be added to the allocation: the PI logs on to the XSEDE User Portal and selects "Community Accounts" from the My XSEDE tab.
  5. Add the XSEDE logo to the gateway.
  6. Integrate the user counting scripts with the gateway's submission mechanism.
  7. Join the XSEDE gateway community mailing list (optional).

Building and Operating

Science gateways can be developed using many different frameworks and approaches. General issues include managing users, remotely executing and managing jobs on diverse XSEDE resources, tracking jobs, and moving data between XSEDE and the user environment. XSEDE-specific issues include tracking users, monitoring resources, and tracking use of the gateway allocation. For a general overview of best practices for building and operating a science gateway, please see the material developed by the Science Gateways Community Institute, an independently funded XSEDE service provider. The Institute provides support for different frameworks that can be used to build science gateways.

XSEDE supports a wide range of gateways and does not require specific middleware; gateways can use team-developed middleware or third party provided middleware. Gateways that run jobs and access data on XSEDE resources may be hosted on the PI's local servers or directly on XSEDE resources that support persistent Web services, middleware, and databases; these include Bridges, Comet, and Jetstream.

For gateway teams that would like additional development assistance, XSEDE supports the integration of science gateways with XSEDE resources through Extended Collaborative Support Services (ECSS). ECSS support can be requested as part of an allocation request; PIs can add ECSS support to an existing allocation through a supplemental request.

Managing User Accounts

XSEDE science gateways are community provided applications. Gateway users are not required to have XSEDE accounts or allocations; instead, XSEDE allows all user jobs to run under the gateway's community account. Gateways thus map their local user accounts to the gateway's single community account. XSEDE does require quarterly reporting of the number of unique users who executed jobs on XSEDE resources, as described below.

XSEDE Community Accounts

XSEDE allows science gateways that run applications on behalf of users to direct all submission requests to a gateway community user account. Designated gateway operators have direct shell access to their community account, but normal users do not. The community account simplifies administration of the gateway, since the gateway administrators have access to input and output files, logs, etc., for all their users, and users don't need to request individual gateway accounts.

A community account has the following characteristics:

  • Only a single community user account (i.e., an XSEDE username/password) is created.
  • The Science Gateway uses the single XSEDE community user account to launch jobs on XSEDE.
  • The gateway user running under the community account has privileges to run only a limited set of applications.

Requesting a Community Account: The PI or Allocation Manager with a registered gateway can request a community account by logging on to the XSEDE User Portal and selecting "Community Accounts" from the "My XSEDE" tab. Select community accounts on all allocated resources.

Accessing Community Accounts: Administrators access community accounts through SSH and SCP using the community account username and password provided with the account. Community accounts cannot be accessed from the XSEDE single sign-on hub.

Community Accounts on Sites with Two-Factor Authentication: Some XSEDE resources, including Stampede and Wrangler, require two-factor authentication. Gateways can request exceptions to this policy for their community accounts by contacting XSEDE Help Desk. The gateway will need to provide the static IP addresses of the server or servers it uses to connect to the resource.

Unique Science Gateway User Accounts

It is the gateway developer's responsibility, as described below, to implement gateway logins or otherwise uniquely identify users in order to track usage. These accounts can be local to the gateway and do not need to correspond to user accounts on XSEDE. The gateway maps these accounts to the gateway's common community account.

Gateways may optionally choose to use XSEDE's OAuth2-based authentication process for authentication. This is a service provided by Globus Auth. ECSS consultants are available to assist with this integration.
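For gateways that adopt the OAuth2 option, the flow starts by redirecting the user to an authorization endpoint. The sketch below only assembles (and prints) such an authorization URL; the client ID, redirect URI, and scopes are made-up placeholder values, the endpoint path follows Globus Auth's published OAuth2 API (verify against current Globus documentation), and proper percent-encoding of the redirect URI is elided for brevity.

```shell
# Placeholder registration values obtained when the gateway is
# registered as an OAuth2 client (hypothetical, for illustration).
CLIENT_ID="my-gateway-client-id"
REDIRECT_URI="https://gateway.example.org/callback"
SCOPE="openid profile email"

# Encode spaces in the scope list for use in a query string.
SCOPE_ENC="$(echo "$SCOPE" | tr ' ' '+')"

# Standard OAuth2 authorization-code request.
AUTH_URL="https://auth.globus.org/v2/oauth2/authorize?client_id=${CLIENT_ID}&redirect_uri=${REDIRECT_URI}&scope=${SCOPE_ENC}&response_type=code"
echo "$AUTH_URL"
```

The gateway would send the user's browser to this URL and later exchange the returned code for tokens at the token endpoint; ECSS consultants can advise on the details.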

The XSEDE Cyberinfrastructure Integration (XCI) team has written and tested the document "User Authentication Service for XSEDE Science Gateways," an introduction to the user authentication service that XSEDE offers for science gateway developers and operators. This service provides a user "login" function so that gateway developers don't need to write their own login code or maintain user password databases.

Connecting to XSEDE Resources

The most common type of XSEDE science gateway allows users to run scientific applications on XSEDE computing resources through a browser interface. This section describes XSEDE policies and requirements for doing this.

Community Allocations

Gateways typically provide their users with a community-wide allocation acquired by the PI on behalf of the community. The gateway may implement internal restrictions on how much of this allocation a user can use.

If a user is consuming an excessive amount of resources, the gateway may require these "power users" to acquire their own allocations, either through the Startup or XRAC allocation process. After obtaining the allocation, the user adds the gateway community account to their allocation. The user's jobs still run under the community account, but the community account charges the user's allocation rather than the gateway PI's. This is implemented by adding the allocation string to the batch script, via the standard -A option of the Slurm schedulers used by many XSEDE resources; see examples for Stampede, Comet, and Bridges. Gateway middleware providers may provide this service as a feature.
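The allocation-string mechanism can be sketched as a batch script header. This is a minimal illustration assuming a Slurm resource; TG-ABC123456 is a made-up allocation string and my_application a placeholder executable, not real XSEDE values.

```shell
#!/bin/bash
# Hypothetical batch script: the job still runs under the gateway's
# community account, but -A charges the power user's own allocation
# (TG-ABC123456 is illustrative only).
#SBATCH -J gateway_job
#SBATCH -A TG-ABC123456    # the user's allocation, not the gateway PI's
#SBATCH -N 2
#SBATCH -t 01:00:00

srun ./my_application
```

The gateway's middleware would substitute the appropriate allocation string per user at submission time.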

Interacting with HPC Resources

Science gateways that run jobs on behalf of their users submit them just like regular users. For XSEDE's HPC resources, this means using the local batch scheduler to submit jobs and monitor them. For an overview, see the XSEDE Getting Started Guide. Gateways execute scheduler commands remotely through SSH and use SCP for basic file transfer. Gateways may choose to work with third party middleware and gateway framework providers to do this efficiently. For more information on third party software providers, consult the Science Gateways Community Institute service provider web site.
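A gateway's submission layer often amounts to composing the same commands a user would type. The sketch below only assembles and prints the SCP/SSH command strings rather than executing them, since the host and community-account names here are illustrative placeholders, not real XSEDE endpoints.

```shell
# Hypothetical endpoint and account; in practice, use the values
# issued with your community account.
RESOURCE="login.example-resource.xsede.org"
COMM_USER="gwcommunity"
JOB_SCRIPT="mpi.job"

# Stage the batch script, then submit it with the remote scheduler.
STAGE_CMD="scp ${JOB_SCRIPT} ${COMM_USER}@${RESOURCE}:~/jobs/"
SUBMIT_CMD="ssh ${COMM_USER}@${RESOURCE} sbatch jobs/${JOB_SCRIPT}"

# Printed instead of executed, since the endpoint is illustrative.
echo "$STAGE_CMD"
echo "$SUBMIT_CMD"
```

In production, the gateway would also capture the scheduler's reply to record the job ID for tracking and usage reporting.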

XSEDE ECSS consultants can assist gateways with HPC integration.

XSEDE Resources for Gateway Hosting

XSEDE includes resources that have special Virtual Machine (VM) and related capabilities for gateways and similar persistent services. These resources are allocated through the standard XSEDE allocation mechanisms.

  • Bridges is designed for jobs that need large amounts of shared memory. It also has allocatable VMs that have access to Bridges' large shared file system. VM users can directly access scheduler command-line tools for Bridges' computing resources from inside their VMs.
  • Comet, like Bridges, is a computing cluster with co-located Virtual Machines. Users can also request entire, self-contained Virtual Clusters that can run both the gateway services and computing jobs.
  • Jetstream is an XSEDE cloud computing resource. Gateway users can get persistent VMs for use in gateway service hosting. They can also get multiple VMs configured as a Virtual Cluster with a private scheduler for running computing jobs.

Science Gateway Usage Metrics: Unique Users per Quarter

XSEDE requires all gateways to report the number of unique users per quarter who have executed jobs on XSEDE resources. This is a key metric that XSEDE in turn reports to the NSF. Compliance with this requirement justifies XSEDE's investment in the science gateway community. XSEDE collects this information through a simple script that is integrated into the job submission process. XSEDE ECSS consultants are available to assist gateway developers to do this.

The gateway_submit_attributes package provides a mechanism for collecting science gateway-supplied usernames used to run applications under community accounts on XSEDE resources. In this scenario, the gateway authenticates the external user, sets the username, and provides indirect access to the community account.

The gateway (via SSH) or the job management middleware invokes the gateway_submit_attributes script, which writes the gateway-supplied username, the local job ID (obtained from the local resource manager), the submission time (also obtained from the local resource manager), and the submission host (configured by the local service provider) to special, restricted tables in the XSEDE Central Database (XDCDB).

The gateway_submit_attributes package provides a Perl client for integration into science gateways. The client is available on XSEDE resources under the module name "gateway-usage-reporting". After SSH'ing into the XSEDE resource, load the module to access the client:

$ module load gateway-usage-reporting

The gateway_submit_attributes script takes three command-line parameters, in the format:

gateway_submit_attributes -gateway_user <gateway_username> \
-submit_time <submission_time> -jobid <jobid>

The submission_time should be in the standard ISO format "YYYY-MM-DD HH:MM:SS TZ", for example "1999-01-08 04:05:06 -8:00". After submitting a job on an XSEDE resource, extract the job ID and run the gateway_submit_attributes script as follows:

sbatch mpi.job
. . .
Submitted batch job 4939827

gateway_submit_attributes -gateway_user -submit_time "`date '+%F %T %:z'`" -jobid 4939827
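When scripting this reporting step, the job ID and timestamp can be captured rather than typed by hand. A minimal sketch follows; the sbatch reply is a canned sample string here, since no live scheduler is assumed.

```shell
# Canned sbatch reply used for illustration; in a live gateway this
# would be the captured output of `sbatch mpi.job`.
SBATCH_REPLY="Submitted batch job 4939827"
JOBID="$(echo "$SBATCH_REPLY" | awk '{print $4}')"

# Timestamp in the "YYYY-MM-DD HH:MM:SS TZ" form expected by
# gateway_submit_attributes (GNU date's %:z gives the +HH:MM offset).
SUBMIT_TIME="$(date '+%F %T %:z')"

echo "$JOBID"
echo "$SUBMIT_TIME"
```

These two values can then be passed directly as the -jobid and -submit_time arguments.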

Please note that for the Gordon resource, you need to use the full string returned from PBS (including the hostname) e.g.,

qsub test.sub2
gateway_submit_attributes -gateway_user -submit_time "`date '+%F %T %:z'`" -jobid 2149587.gordon-fe2.local

This command submits the information to a staging table in the XDCDB, where it is later matched with AMIE accounting records coming from the site. To verify that the information is matched correctly, you can run the xdusage command with the "-ja" option, as in the example below:

xdusage -j -ja -s 2015-03-04 -e 2015-03-05 -p TG-STA110011S
. . .
job id= 4939827 resource=stampede.tacc.xsede
submit=2015-03-04@17:47:28 start=2015-03-04@17:47:28
end=2015-03-04@17:57:37 nodecount=2 processors=32 queue=normal charge=24.89
job-attr id= 4939827

It may take up to a day for the AMIE packets to be sent by the site and for the data to be matched as above.

In case of submission failures due to database errors, the attributes are saved in a log file in the $HOME directory:


If the file already exists, the script exits without overwriting the file. Attributes can later be resubmitted through the gateway_submit_attributes log file (-f) option:

gateway_submit_attributes -f <attributes_filename>

A gateway_bulk_submit script is also provided for the convenience of gateway operators who need to submit or resubmit attributes or attribute log files in bulk. To resubmit all previously failed entries, simply use:

    gateway_bulk_submit -resubmit

Upon successful resubmission, the corresponding log file will be renamed as $HOME/gateway_attributes_log/gateway_attributes_entry...delete and can be deleted.

All submission histories are logged for future reference in files named $HOME/gateway_attributes_log/history/gateway_submit_attributes-.log; users are encouraged to keep such logs for at least 90 to 120 days for auditing purposes.
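One way to audit that retention window is a periodic find over the history directory. The sketch below runs against a temporary sandbox directory with sample files (using GNU touch/find) so it can run anywhere; in practice the path would be $HOME/gateway_attributes_log/history.

```shell
# Sandbox standing in for $HOME/gateway_attributes_log/history.
LOGDIR="$(mktemp -d)"
touch -d '200 days ago' "$LOGDIR/stale.log"   # outside the 120-day window
touch "$LOGDIR/recent.log"                     # inside the window

# List logs older than the 120-day retention guideline
# (candidates for archival or deletion).
find "$LOGDIR" -name '*.log' -mtime +120
```

Adding `-delete` (or piping to an archival step) would turn the listing into an actual cleanup pass, once the logs are no longer needed for audits.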

For any questions or issues with the gateway_submit_attributes package, please contact the XSEDE Help Desk and follow the standard XSEDE help desk procedure.

Security and Accounting

XSEDE has specific security and accounting requirements and recommendations for connecting to its resources, intended to help your gateway prevent and triage security incidents and inadvertent misuse.

Security and Accounting Requirements and Recommendations

The following security and accounting steps are required.

  • Required: Notify the XSEDE Help Desk immediately if you suspect the gateway or its community account may be compromised, or call the Help Desk at 1-866-907-2383.
  • Required: Keep Science Gateway contact info up to date on the Science Gateways Listing in case XSEDE staff should need to contact you. XSEDE reserves the right to disable a community account in the event of a security incident.
  • Required: Use the gateway_submit_attributes tool to submit gateway username with job.

Additional recommendations are as follows:

  • Collect Accounting Statistics
  • Maintain an audit trail (keep a gateway log)
  • Provide the ability to restrict job submissions on a per user basis
  • Safeguard and validate programs, scripts, and input
  • Protect user passwords on the gateway server and over the network
  • Do not use passwordless SSH keys.
  • Perform Risk and Vulnerability Assessment
  • Backup your gateway routinely
  • Develop an incident response plan for your gateway; review and update it regularly
  • Put a contingency plan in place to prepare for a disaster or security event that could cause the total loss or lock down of the server
  • Monitor changes to critical system files, such as SSH, with open-source tools like Tripwire or Samhain
  • Make sure the OS and applications of your gateway service are properly patched; run a vulnerability scanner such as Nessus against them
  • Make use of community accounts rather than individual accounts

These are described in more detail below in separate sections. XSEDE ECSS support staff can assist with designing and implementing best practices. The Science Gateways Community Institute service provider also provides information on best practices.

What To Do In Case of a Security Incident

Whether a threat is confirmed or suspected, quick action and immediate communication with XSEDE Security Working Group is essential. Please contact the XSEDE Help Desk immediately at 1-866-907-2383.

Key Points
Gateways provide higher level user interface for XSEDE resources that are tailored to specific scientific communities.
XSEDE supports gateways through community accounts, gateway hosting, and extended collaborative support services.
Contact Information
XSEDE Science Gateways Expert
Science Gateways Community Institute

Science Gateways Symposium

Many recorded presentations from the XSEDE Science Gateways and Scientific Workflows Symposium series are available for viewing. They will be available for download in the near future. Please check back soon.

If you would like to participate in the series, please subscribe to the Science Gateways and ECSS Workflow mailing lists.

To subscribe to the Gateways mailing list, send an email with "subscribe gateways" in the body of the message.

To subscribe to the Workflows mailing list, send an email with "subscribe workflows" in the body of the message.

More information is available on the Science Gateways and ECSS Workflows pages.

Key Points
Available for viewing online
Subscribe for more info
Contact Information
XSEDE Science Gateways Expert
Science Gateways Community Institute