Last update: January 21, 2018
First time here? Check out the Resource Info page to learn about the resources available, and then visit the Startup page to get going! Startup, Campus Champions, and Education Allocation requests may be submitted at any time throughout the year.
See the XSEDE Resources Catalog for a complete list of XSEDE compute, visualization and storage resources, and more details on the new systems.
The Science Gateways Community Institute (SGCI) is a new Level Two XSEDE Service Provider. SGCI helps its clients build science gateways, at any stage of development, through online and in-person resources and services. This is in contrast to XSEDE's ECSS program which helps existing science gateways connect to XSEDE resources.
SGCI can provide specialized long- or short-term consulting services for hands-on, custom development as well as advice from seasoned experts. Expertise covers many areas: technology selection, usability, graphic design, cybersecurity, licensing, and more. Expertise can be provided regardless of the type of gateway (citizen science, compute intensive, data portal, etc.) or the resources used (campus resources, XSEDE, cloud).
Consult the Science Gateways Community Institute site for more information.
The Texas Advanced Computing Center's (TACC) new resource, Stampede2, is now in full production. The 18 petaflop national resource builds on the successes of the original Stampede system it replaces.
The first phase of the Stampede2 rollout features the second generation of processors based on Intel's Many Integrated Core (MIC) architecture. These 4,200 Knights Landing (KNL) nodes represent a radical break with the first-generation Knights Corner (KNC) MIC coprocessor. Unlike the legacy KNC, a Stampede2 KNL is not a coprocessor: each 68-core KNL is a stand-alone, self-booting processor that is the sole processor in its node.
Phase 2, now deployed, added approximately 50% more compute power to the system as a whole via the integration of 1,736 Intel Xeon (Skylake) nodes. Fully deployed, Stampede2 delivers twice the performance of the original Stampede system.
Please note that Stampede2 is allocated in node-hours, NOT core-hours. One service unit (SU) is defined as one wall-clock node-hour.
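Because the charge is per node-hour, a job's SU cost depends only on the number of nodes and the wall-clock time, not on how many cores it uses. A minimal sketch of the accounting (function name and example numbers are illustrative, not from TACC):

```python
# Stampede2 charges by node-hour: SUs = nodes * wall-clock hours,
# regardless of how many of each node's cores the job actually uses.
def stampede2_su_charge(nodes: int, wall_hours: float) -> float:
    """Return the SU charge for a job on `nodes` nodes running for
    `wall_hours` of wall-clock time (node-hours, not core-hours)."""
    return nodes * wall_hours

# Example: a 4-node job running 2.5 hours costs 10 SUs,
# whether it uses 1 core or all 68 KNL cores per node.
print(stampede2_su_charge(4, 2.5))  # → 10.0
```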
SDSC's Comet GPU resource now comprises 72 general-purpose GPU nodes: the 36 original nodes, each containing two dual-GPU NVIDIA K80s (four GPUs per node), plus 36 newer nodes, each containing four NVIDIA P100 GPUs. The resource is allocated in K80 hours, and usage of the P100 GPUs is charged a 1.5x premium. Each GPU node also features two Intel Haswell processors of the same design and performance as the standard compute nodes (described separately). The GPU nodes are integrated into the Comet resource and available through the Slurm scheduler for either dedicated or shared node jobs (i.e., a user can run on one or more GPUs per node and will be charged accordingly). Like the Comet standard compute nodes, the GPU nodes feature a local SSD which can be specified as a scratch resource during job execution; in many cases, using SSDs can alleviate I/O bottlenecks associated with the shared Lustre parallel file system.
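The exact accounting formula is set by SDSC; the sketch below simply illustrates the stated policy of allocating in K80 hours with a 1.5x premium on P100 usage (the function name and per-GPU-hour granularity are assumptions for illustration):

```python
# Comet GPU time is allocated in K80 GPU-hours; P100 usage carries
# a 1.5x premium per the allocation policy described above.
PREMIUM = {"k80": 1.0, "p100": 1.5}

def comet_gpu_charge(num_gpus: int, wall_hours: float, gpu_type: str) -> float:
    """K80-hour charge for a job using `num_gpus` GPUs of `gpu_type`
    for `wall_hours` of wall-clock time."""
    return num_gpus * wall_hours * PREMIUM[gpu_type.lower()]

# A shared-node job on 2 of a node's 4 P100s for 4 hours:
print(comet_gpu_charge(2, 4, "p100"))  # → 12.0
# A dedicated K80 node (all 4 GPUs) for 1 hour:
print(comet_gpu_charge(4, 1, "k80"))   # → 4.0
```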
Comet's GPUs are a specialized resource that performs well for certain classes of algorithms and applications. There is a large and growing base of community codes that have been optimized for GPUs, including those in molecular dynamics and machine learning. GPU-enabled applications on Comet include Amber, Gromacs, BEAST, OpenMM, TensorFlow, and NAMD.
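A shared-node job running one of these GPU-enabled applications might be submitted with a Slurm batch script along the following lines. The partition name, module name, and local scratch path are assumptions based on common Slurm conventions, not taken from Comet's documentation; consult the Comet User Guide for the actual values:

```shell
#!/bin/bash
#SBATCH --job-name=md-gpu
#SBATCH --partition=gpu-shared   # shared GPU partition (name is an assumption)
#SBATCH --nodes=1
#SBATCH --gres=gpu:1             # request 1 of the node's GPUs; charged accordingly
#SBATCH --time=02:00:00

# Stage input to the node-local SSD to avoid Lustre I/O bottlenecks
# (local scratch path is an assumption).
SCRATCH="/scratch/$USER/$SLURM_JOB_ID"
cp input.tpr "$SCRATCH/"
cd "$SCRATCH"

module load gromacs              # module name is an assumption
gmx mdrun -deffnm input

# Copy results back to the shared file system before local scratch is purged.
cp input.log "$SLURM_SUBMIT_DIR/"
```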
PSC introduces Bridges GPU, a newly allocatable resource within Bridges that features 32 NVIDIA Tesla K80 GPUs and 64 NVIDIA Tesla P100 GPUs. Bridges GPU complements Bridges Regular, Bridges Large, and the Pylon storage system to accelerate deep learning and a wide variety of application workloads. The resource comprises 16 GPU nodes, each with two NVIDIA Tesla K80 GPU cards, two Intel Xeon CPUs (14 cores each), and 128 GB of RAM, plus 32 GPU nodes, each with two NVIDIA Tesla P100 GPU cards, two Intel Xeon CPUs (16 cores each), and 128 GB of RAM.
PSC's Bridges GPU is a uniquely capable resource for bringing together HPC and Big Data. Bridges integrates a flexible, user-focused, data-centric software environment with very large shared memory, a high-performance interconnect, and rich file systems to empower new research communities, bring desktop convenience to HPC, and drive complex workflows. Bridges supports these communities through extensive interactivity, gateways, persistent databases and web servers, high-productivity programming languages, and virtualization. The software environment is extremely robust, supporting capabilities such as Python, R, and MATLAB on large-memory nodes; genome sequence assembly on nodes with up to 12 TB of RAM; machine learning and especially deep learning; Spark and Hadoop; complex workflows; and web architectures to support gateways.
Maverick, TACC's Interactive Visualization and Data Analytics System, will soon leave the list of available XSEDE resources: as of April 1, 2018, Maverick will no longer be available to XSEDE users. PIs submitting proposals in the submission window that opens on September 15, 2017, will not be able to request allocations on the system.
Beginning with this submission period, access to XSEDE storage resources, along with compute resources, must be requested and justified both in the XSEDE Resource Allocation System (XRAS) and in the body of the proposal's main document. The following XSEDE sites offer allocatable storage facilities:
- IU/TACC Jetstream - required when requesting IU/TACC Jetstream
- PSC Pylon - required when requesting PSC Bridges, Regular or Large Memory
- SDSC Data Oasis - required when requesting SDSC Comet or Comet GPU
- TACC Ranch - required when requesting TACC Stampede2 or Maverick
- TACC Wrangler - required when requesting TACC Wrangler
Storage needs have always been part of allocation requests; however, XSEDE will now enforce storage awards in unison with the storage sites. Please visit XSEDE's Storage page for more information.