Allocations Announcements
 

First time here? Check out the Resource Info page to learn about the resources available, and then visit the Startup page to get going! Startup, Campus Champions, and Education Allocation requests may be submitted at any time throughout the year.

New XSEDE Resources

Starting this submission period, both the Pittsburgh Supercomputing Center (PSC) and the San Diego Supercomputer Center (SDSC) will be allocating their GPU compute resources separately from their standard compute nodes. See SDSC's Comet GPU and PSC's Bridges GPU resource details below. TACC's Stampede 2 system also makes its debut and will enter production in summer 2017.

See the XSEDE Resources Overview for a complete list of XSEDE compute, visualization and storage resources, and more details on the new systems.

TACC's Stampede 2

TACC's newest resource, Stampede 2, will enter full production in fall 2017 as an 18-petaflop national resource that builds on the successes of the original Stampede system it replaces. The first phase of the Stampede 2 rollout features the second generation of processors based on Intel's Many Integrated Core (MIC) architecture. These 4,200 Knights Landing (KNL) nodes represent a radical break from the first-generation Knights Corner (KNC) MIC coprocessor. Unlike the legacy KNC, a Stampede 2 KNL is not a coprocessor: each 68-core KNL is a stand-alone, self-booting processor that is the sole processor in its node. Phase 2 will add approximately 50% more compute power to the system as a whole by introducing new nodes equipped with a future Intel processor. When fully deployed, Stampede 2 will deliver twice the performance of the original Stampede system. Please note that Stampede 2 is allocated in service units (SUs), where an SU is defined as one wall-clock node hour (not one core hour).
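As a worked illustration of the node-hour accounting described above (the job size and duration below are hypothetical examples, not figures from this announcement):

    \text{SUs charged} = \text{nodes used} \times \text{wall-clock hours}

For example, a job that runs on 4 KNL nodes for 3 hours of wall-clock time would be charged 4 x 3 = 12 SUs, regardless of how many of each node's 68 cores it actually uses.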

SDSC's Comet GPU

SDSC's Comet GPU has 36 general-purpose GPU nodes, with 2 Tesla K80 GPU cards per node, each card containing 2 GK210 GPUs (144 GPUs in total). Each GPU node also features 2 Intel Haswell processors of the same design and performance as the standard compute nodes (described separately). The GPU nodes are integrated into the Comet resource and available through the SLURM scheduler for either dedicated or shared node jobs (i.e., a user can run on 1 or more GPUs per node and will be charged accordingly). Like the Comet standard compute nodes, the GPU nodes feature a local SSD that can be specified as a scratch resource during job execution; in many cases, using SSDs can alleviate I/O bottlenecks associated with the shared Lustre parallel file system.

Comet's GPUs are a specialized resource that performs well for certain classes of algorithms and applications. There is a large and growing base of community codes that have been optimized for GPUs, including those in molecular dynamics and machine learning. GPU-enabled applications on Comet include Amber, Gromacs, BEAST, OpenMM, TensorFlow, and NAMD.

PSC's Bridges GPU

PSC introduces Bridges GPU, a newly allocatable resource within Bridges that features 32 NVIDIA Tesla K80 GPUs and 64 NVIDIA Tesla P100 GPUs. Bridges GPU complements Bridges Regular Memory, Bridges Large Memory, and the Pylon storage system to accelerate deep learning and a wide variety of application workloads. The resource comprises 16 GPU nodes, each with 2 NVIDIA Tesla K80 GPU cards, 2 Intel Xeon CPUs (14 cores each), and 128 GB of RAM, plus 32 GPU nodes, each with 2 NVIDIA Tesla P100 GPU cards, 2 Intel Xeon CPUs (16 cores each), and 128 GB of RAM.

PSC's Bridges is a uniquely capable resource for empowering new research communities and bringing together HPC and Big Data. It integrates a flexible, user-focused, data-centric software environment with very large shared memory, a high-performance interconnect, and rich file systems to support those communities, bring desktop convenience to HPC, and drive complex workflows.

Bridges supports new communities through extensive interactivity, gateways, persistent databases and web servers, high-productivity programming languages, and virtualization. The software environment is extremely robust, supporting capabilities such as Python, R, and MATLAB on large-memory nodes; genome sequence assembly on nodes with up to 12 TB of RAM; machine learning, especially deep learning; Spark and Hadoop; complex workflows; and web architectures to support gateways.

Please see the Estimated Resource Amounts Available for the current XRAC meeting on the Research allocations page.

Retiring XSEDE Resources

These retiring resources, listed below, are not available for new or renewal research requests.

Storage Allocations

Continuing this submission period, access to XSEDE storage resources, along with compute resources, must be requested and justified, both in the XSEDE Resource Allocation System (XRAS) and in the body of the proposal's main document. The following XSEDE sites will be offering allocatable storage facilities:

  • PSC Pylon - required when requesting PSC Bridges Regular or Large Memory
  • IU/TACC Jetstream - required when requesting IU/TACC Jetstream
  • SDSC Data Oasis - required when requesting SDSC Comet or Comet GPU
  • TACC Ranch - required when requesting TACC Stampede 2 or Maverick
  • TACC Wrangler - required when requesting TACC Wrangler

Storage needs have always been part of allocation requests; however, XSEDE will now be enforcing storage awards in unison with the storage sites. Please visit XSEDE's Storage page for more information.

Last update: May 17, 2017