Service Providers

Service Providers (SPs) are independently funded projects or organizations that provide cyberinfrastructure (CI) services to the science and engineering community. The US academic community hosts a rich diversity of Service Providers, ranging from centers funded by the National Science Foundation (NSF) to operate large-scale resources for the national research community, to universities that provide resources and services to their local researchers.

XSEDE Federation

XSEDE coordinates and integrates the national cyberinfrastructure funded by the NSF, while also reaching out to the broader community to coordinate, integrate, and provide support services as inclusively as possible. The extended organization created by the amalgamation of the XSEDE program and the organizations with which XSEDE collaborates is referred to as the XSEDE Federation. It includes many Service Providers that are autonomous entities, distinct from the XSEDE project, but that agree to coordinate and integrate with XSEDE as members of the XSEDE Federation. The XSEDE Federation comprises those providers and consumers of services that meet, to varying degrees, the requirements of XSEDE's interfaces and that engage in the effective, sustained interactions required to develop and evolve those interfaces.

Three Levels of Service Providers are defined within the XSEDE Federation. An SP is classified at a specific Level by meeting a minimum set of conditions, described in detail in the Requesting Membership in the XSEDE Federation document. These Levels reflect the degree of coordination and integration between the Service Provider and XSEDE: Level 1 Service Providers are the most tightly coupled with XSEDE, while Level 2 and Level 3 Service Providers are progressively more loosely coupled. The XSEDE Software and Services Table for Service Providers document describes the software and services integration XSEDE expects of SPs participating in the XSEDE Federation.

Depending on their Level of participation, Service Providers are required to integrate with XSEDE in specific ways, such as integrating with the XSEDE User Portal, the XSEDE Resource Allocation System, and XSEDE Information Services; participating in XSEDE working groups on a periodic basis; and verifying their integration annually. The Service Provider Checklist document is updated periodically to keep pace with the evolving XSEDE cyberinfrastructure environment.
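To make the integration tasks more concrete, the sketch below shows one way an SP operator might programmatically verify that a resource appears in an information service, for example as part of the annual verification mentioned above. The endpoint URL and JSON layout here are hypothetical placeholders, not the documented XSEDE Information Services API; treat this as a minimal sketch of the pattern rather than working XSEDE code.

```python
# Minimal sketch: verifying a resource's registration in an information
# service. The base URL and response fields are hypothetical placeholders,
# not the documented XSEDE Information Services API.
import requests

INFO_BASE = "https://info.example.org/api"  # hypothetical endpoint


def registered_services(resource_id: str) -> list[str]:
    """Return the names of services registered for a resource."""
    resp = requests.get(
        f"{INFO_BASE}/resources/{resource_id}/services", timeout=10
    )
    resp.raise_for_status()
    return [svc["name"] for svc in resp.json().get("services", [])]


if __name__ == "__main__":
    # Example check an SP operator might run periodically.
    for name in registered_services("comet.sdsc.xsede"):
        print(name)
```

An SP could run a check like this from a scheduled job and alert operators when an expected service disappears from the registry.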

Service Provider Forum

The Service Provider Forum is intended to facilitate this ecosystem of Service Providers, thereby advancing the work of the science and engineering researchers who rely on these cyberinfrastructure services.

The Service Provider (SP) Forum provides an open forum for discussion of topics of interest to the SP community, as well as a formal communication channel between SP Forum members and the XSEDE project.

The Service Provider Forum was originally proposed as part of the governance structure for the XSEDE project, which recognized the need for a formal interface point between the XSEDE project and the group of autonomous SPs. In this context, the SP Forum is the venue where XSEDE and the SPs discuss and mutually resolve issues in which their interests overlap (e.g., networking, operations, security, allocations, accounting, user support, documentation, software environments). The SP Forum's charter, membership, and governance structures reflect the mutual relationships and responsibilities of this crucial governance body, which must constructively support XSEDE-SP interactions.

If you are interested in joining the Service Provider Forum, please contact spf-chair@xsede.org.

XSEDE Federation Member List (with resource listing)

| Resource | Organization | Type | SP Level | XSEDE Allocated |
| --- | --- | --- | --- | --- |
| HP/NVIDIA Interactive Visualization and Data Analytics System (Maverick) | TACC | Viz | Level 1 | Yes |
| IU Data Analytics System (Wrangler) | Indiana University | Compute | Level 1 | Yes |
| IU Long-term Storage (Wrangler Storage) | Indiana University | Storage | Level 1 | Yes |
| IU/TACC (Jetstream) | Indiana University | Compute | Level 1 | Yes |
| IU/TACC Storage (Jetstream Storage) | Indiana University/TACC | Storage | Level 1 | Yes |
| Open Science Grid (OSG) | OSG | Compute | Level 1 | Yes |
| PSC Bridges GPU (Bridges GPU) | PSC | Compute | Level 1 | Yes |
| PSC Large Memory Nodes (Bridges Large) | PSC | Compute | Level 1 | Yes |
| PSC Regular Memory (Bridges) | PSC | Compute | Level 1 | Yes |
| PSC Storage (Bridges Pylon) | PSC | Storage | Level 1 | Yes |
| SDSC Comet GPU Nodes (Comet GPU) | SDSC | Compute | Level 1 | Yes |
| SDSC Dell Cluster with Intel Haswell Processors (Comet) | SDSC | Compute | Level 1 | Yes |
| SDSC Medium-term disk storage (Data Oasis) | SDSC | Storage | Level 1 | Yes |
| TACC Data Analytics System (Wrangler) | TACC | Compute | Level 1 | Yes |
| TACC Dell/Intel Knights Landing System (Stampede2 - Phase 1) | TACC | Compute | Level 1 | Yes |
| TACC Long-term Storage (Wrangler Storage) | TACC | Storage | Level 1 | Yes |
| TACC Long-term tape Archival Storage (Ranch) | TACC | Storage | Level 1 | Yes |
| LSU Cluster (SuperMIC) | LSU CCT | Compute | Level 2 | Yes |
| Stanford University GPU Cluster (XStream) | Stanford | Compute | Level 2 | Yes |
| University of Tennessee Beacon cluster | UTK-NICS | Compute | Level 2 | Yes |
| NCAR GLADE central file systems and data storage | NCAR | Storage | Level 2 | No |
| Blue Waters | NCSA | Compute | Level 2 | No |
| The Purdue Scholar Cluster for Education | Purdue | Compute | Level 2 | No |
| Science Gateway Community Institute | SGCI | Other | Level 2 | Yes |
| Dell PowerEdge IB FDR cluster | Rutgers | Compute | Level 3 | No |
| RDI2 Supermicro FatTwin SuperServer OPA cluster | Rutgers | Compute | Level 3 | No |
| RDI2 DDN GPFS | Rutgers | Storage | Level 3 | No |
| Oklahoma State cluster (Cowboy) | Oklahoma State University | Compute | Level 3 | No |
| Laconia - Institute for Cyber-Enabled Research | Michigan State University | Compute | Level 3 | No |
| Schooner - Dell PowerEdge R430/R730 cluster | University of Oklahoma | Compute | Level 3 | No |
| University of Arkansas/AHPCC Cluster (Trestles) | University of Arkansas | Compute | Level 3 | No |
| KSU Beocat Computer Cluster | Kansas State University | Compute | Level 3 | No |
| Tufts University High Performance Compute | Tufts University | Compute | Level 3 | No |
| Orion heterogeneous scientific computing infrastructure | Georgia State University | Compute | Level 3 | No |
| Flux - Cluster for CPU, GPU, and Large Memory workloads | University of Michigan | Compute | Level 3 | No |
| LUCCRE Cluster (lucille) | Langston University | Compute | Level 3 | No |
| U.S. CMS Tier2 Compute Element (Red) | University of Nebraska, Lincoln | Compute | Level 3 | No |
| Mount Moran Cluster | University of Wyoming | Compute | Level 3 | No |
| WVU Spruce Knob HPC Cluster | West Virginia University | Compute | Level 3 | No |
| ROGER HPC Cluster | CyberGIS | Compute | Level 3 | No |
| Legacy HPC Cluster | University of South Dakota | Compute | Level 3 | No |
| Summit - RMACC Heterogeneous Compute Cluster | University of Colorado | Compute | Level 3 | No |
| Minnesota Supercomputing Institute | University of Minnesota | Compute | Level 3 | No |
| PBARC | USDA (Hawaii) | Compute | Level 3 | No |
| DataONE | University of New Mexico | Storage | Level 3 | No |
| Palmetto | Clemson University | Compute | Level 3 | No |

Key Points
All Service Providers that collaborate with XSEDE are members of the XSEDE Federation
The Service Providers' user group is the Service Provider Forum
Once an organization decides, or is required by an NSF award, to be a Service Provider, it can apply to join the XSEDE Federation and integrate at its Service Provider Level
Contact Information
Service Provider Coordinator

October 13, 2017 SP Forum Webinar

Containerized and Virtual Computing Webinar

Presenter(s): Trevor Cooper (Comet), Mahidhar Tatineni (Comet), Mike Lowe (Jetstream), Nate Rini (NCAR), Robert Budden (Bridges), Dan Stanzione (Stampede), Eric Shook (UMN)

The XSEDE Service Provider Forum is pleased to hold a webinar designed to help the broader XSEDE community navigate the opportunities for containerized and virtual computing, covering virtual clusters, Singularity, OpenStack, Shifter, Docker, Kubernetes, and related technologies. This 90-minute webinar will include 15-minute sessions (including Q&A) from five SPs. Featured SPs include Comet (Trevor Cooper, SDSC), Jetstream (Mike Lowe, IU), NCAR systems (Nathan Rini, NCAR), Bridges (Robert Budden, PSC), and TACC systems including Stampede2 (Dan Stanzione, TACC). We'll conclude with a user perspective from XSEDE GIS domain champion Eric Shook (U Minnesota).

15 min Comet topics (Trevor Cooper, SDSC): Virtualization vs. containerization; Comet VC use cases; Comet Singularity use cases; Comet Kubernetes use cases (possibly); allocation applicability; SDSC User Services assistance. Slides

15 min Jetstream topics (Mike Lowe, IU; live demo, no slides): Docker-enabled images; container orchestration services

15 min NCAR topics (Nathan Rini, NCAR): Using inception for (unprivileged) users to use different supercomputer environments; cluster cron using inception; using Docker for system services. Presentation

15 min Bridges topics (Robert Budden, PSC): Virtual machines; community gateways; containers/Singularity. Slides: https://drive.google.com/file/d/0BzH_CCal-OFJWEVoN2l6dC1MWkE/view

15 min TACC/Stampede topics (Dan Stanzione, TACC): Slides: https://drive.google.com/open?id=0ByWLOLZejU2lZGNhTjBQREpwQ0E

15 min user perspective (Eric Shook, U Minnesota): Experience using Singularity containers; building a science gateway using these technologies; user considerations. Slides: https://drive.google.com/open?id=0ByWLOLZejU2lOFZzazJHa3BDaTg
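A common thread across these sessions is letting users run familiar Docker images unprivileged on HPC systems via Singularity. As a minimal sketch of that pattern, the Python snippet below wraps a `singularity exec docker://...` invocation; the image name and command are arbitrary examples, and exact behavior varies across Singularity versions and site configurations, so treat this as an illustration rather than site-specific instructions.

```python
# Minimal sketch: running a Docker image under Singularity from Python.
# Assumes Singularity is installed on the cluster; the image name is an
# arbitrary example and CLI details vary across Singularity versions.
import subprocess


def run_in_container(docker_image: str, command: list[str]) -> None:
    """Execute `command` inside `docker_image` via Singularity."""
    # `singularity exec docker://...` converts the Docker image to a
    # Singularity image on the fly and runs the command unprivileged.
    subprocess.run(
        ["singularity", "exec", f"docker://{docker_image}"] + command,
        check=True,
    )


if __name__ == "__main__":
    # Example: check the Python version shipped in an official image.
    run_in_container("python:3.10-slim", ["python3", "--version"])
```

On batch-scheduled systems, a call like this would typically run inside a submitted job rather than on a login node.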