XSEDE offers storage to satisfy most users' practical needs. Several XSEDE service providers host storage platforms providing services such as data management, data collections hosting, and large-scale persistent storage. XSEDE provides storage allocations both in support of compute/visualization usage and independent of those allocations. Researchers may request data storage allocations for short- and long-term storage and for staging of data collections on disk. You may request a storage allocation through the XSEDE Resource Allocation System (XRAS), the same system used to request compute and visualization resources.
XSEDE users have access to three types of storage.
- Stand-alone Storage: Stand-alone storage allocations are independent of any compute/visualization allocation. Data hosting and data preservation projects may have stand-alone storage needs. PIs may request stand-alone storage on any of the XSEDE storage systems detailed in Table 1 below.
- Archival Storage: Archival storage on XSEDE systems is typically used for large-scale persistent storage requested in conjunction with compute and visualization resources. Archival storage is usually not backed up. Users must explicitly request these resources through the XSEDE proposal submission site, just as they do for compute and visualization resources.
- Resource File System Storage: All XSEDE compute/visualization allocations include access to limited disk and scratch space on the compute/visualization resource file systems to accomplish project goals. See Table 2 below.
The following table lists the XSEDE allocatable archival and stand-alone storage systems available to XSEDE users. Storage on these systems must be requested through XRAS, either in conjunction with compute and visualization requests or as a stand-alone allocation. Currently, the only stand-alone storage resource is the TACC Ranch storage resource.
As with compute and visualization allocations, storage allocations last one year, with the option to renew for subsequent one-year periods. Data will be preserved for at least one quarter beyond the allocation's end date.
Storage allocations will be provided on a per-project basis. PIs will manage their storage allocations via the XSEDE User Portal's account management feature. PIs will receive automatic notifications at pre-determined intervals as their storage usage approaches the awarded amount and as their allocation approaches its expiration date. PIs and users are responsible for correctly managing their storage allocations.
| |Data Oasis (SDSC)|Ranch (TACC)| | |
|---|---|---|---|---|
|System type|Medium-term disk storage|Long-term tape archival storage|Persistent disk storage|Persistent, global online storage|
|Allows storage in support of compute and/or vis?|yes|yes|yes|yes|
|Allows stand-alone storage?|no|yes|no|yes|
|Companion HPC resource(s)|Gordon|Stampede|Wrangler|N/A|
|File system type|Lustre|SAM-QFS|Lustre|GPFS|
|Default/Max storage space for XRAC request|500GB / 50TB|none / 1PB|TBD|none / 100TB|
|Data retention beyond allocation expiration|3 months|6 months|TBD|3 months|
The following table lists each compute and visualization resource along with its respective file systems, file systems' capacities, quotas and backup policies if applicable.
|Resource|File System Name|File System Type|Size|Quota|Purge|Backup|
|---|---|---|---|---|---|---|
|Gordon (SDSC)| |NFS|260GB|none|no|yes|
|Mason (IU)| |NFS|8TB|10GB|no|yes|
|Maverick (TACC)| |NFS|7.6TB|10GB|no|incremental (nightly)|
|Stampede (TACC)| |Lustre|524TB|5GB|no|incremental (nightly)|
|Stampede (TACC)| |Lustre|7.5PB|none|yes (10 days inactive)|no|
1. Gordon has 4.8TB of local flash storage for 16 nodes, 300TB aggregate.
2. TACC's Stampede, Wrangler and Maverick mount their respective $WORK file systems from Stockyard, TACC's global file system backbone.
Users should be aware that attempting to write to a file system while over their quota may cause jobs to fail. Please check your quotas before submitting a job to ensure that the data generated by the job will fit within the quota, or that all job output is written to a file system without quotas.
You can check your quota from the command line before submitting jobs.
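The exact command depends on the resource and file system type; as a sketch (assuming the standard `quota` and Lustre `lfs` utilities are available on the login node):

```shell
# Report quotas on NFS-backed file systems (e.g. $HOME) in
# human-readable units:
quota -s

# On Lustre file systems (e.g. $SCRATCH or $WORK), query quotas
# through the Lustre tools instead:
lfs quota -u $USER $SCRATCH
```

Consult each resource's user guide for the authoritative command; some sites provide their own wrappers, and these examples are generic rather than XSEDE-specific.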
$SCRATCH and other large file systems on XSEDE resources are purged on a regular basis, meaning that older, unused data will be deleted. Most such file systems are not backed up. Please consult site- or system-specific documentation for details on these policies. Users should be aware that data stored on purged resources could be deleted, and should exercise care to ensure that important data is copied or moved to appropriate long-term storage resources promptly.
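To reduce the risk of losing purged data, job output can be bundled into a single archive and copied to long-term storage as soon as a job completes. The sketch below is illustrative only: the directory names are invented, and the commented transfer target (`$ARCHIVER`, `$ARCHIVE`) is a placeholder for whatever mechanism your site documents.

```shell
# Sketch: bundle output into one archive before moving it off a
# purged file system. All paths here are illustrative.
mkdir -p demo_output
echo "sample result" > demo_output/result.txt

# Tape archives such as Ranch handle a few large files far better
# than many small ones, so tar the directory first.
tar czf demo_output.tar.gz demo_output

# Then copy the archive to long-term storage, for example:
#   scp demo_output.tar.gz ${ARCHIVER}:${ARCHIVE}/

# Verify the archive is readable and lists its contents.
tar tzf demo_output.tar.gz
```

Bundling also makes later retrieval simpler, since a single `tar` file preserves the directory layout of the original job output.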
Last update: May 1, 2016