XSEDE Resource Allocation Committee (XRAC) Announcements

XSEDE is now accepting Research Allocation Requests for the allocation period July 1, 2017, to June 30, 2018. The submission period runs from March 15, 2017, through April 15, 2017. Please review the new XSEDE systems and important policy changes before submitting your allocation request through the XSEDE User Portal.

Startup, Campus Champions, and Education Allocation Requests may be submitted at any time throughout the year.

Research Allocations

The Research allocation submission period is open: March 15, 2017 - April 15, 2017.

New XSEDE Resources

Starting this submission period, both the Pittsburgh Supercomputing Center (PSC) and the San Diego Supercomputer Center (SDSC) will allocate their GPU compute resources separately from their standard compute nodes. See SDSC's Comet GPU and PSC's Bridges GPU resource details below. TACC's Stampede 2 system also makes its debut and will begin production in summer 2017.

See the XSEDE Resources Overview for a complete list of XSEDE compute, visualization and storage resources, and more details on the new systems.

TACC's Stampede 2

TACC's newest resource, Stampede 2, will enter full production in Fall 2017 as an 18-petaflop national resource that builds on the successes of the original Stampede system it replaces. The first phase of the Stampede 2 rollout features the second generation of processors based on Intel's Many Integrated Core (MIC) architecture. These 4,200 Knights Landing (KNL) nodes represent a radical break with the first-generation Knights Corner (KNC) MIC coprocessor. Unlike the legacy KNC, a Stampede 2 KNL is not a coprocessor: each 68-core KNL is a stand-alone, self-booting processor that is the sole processor in its node. Phase 2 will add approximately 50% more compute power to the system as a whole by introducing new nodes equipped with a future Intel processor. When fully deployed, Stampede 2 will deliver twice the performance of the original Stampede system. Please note that Stampede 2 is allocated in service units (SUs), where an SU is defined as 1 wall-clock node hour (not core hour).
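
As a concrete illustration of the node-hour accounting, the short sketch below totals the SUs for a hypothetical mix of jobs; the node counts and run times are made-up values for illustration only, not a recommended request size.

    # Minimal sketch of Stampede 2 SU accounting: 1 SU = 1 wall-clock node hour,
    # regardless of how many of a node's 68 KNL cores a job uses.
    # The job list is hypothetical, for illustration only.
    jobs = [
        {"nodes": 100, "hours": 24},  # a large production run
        {"nodes": 8,   "hours": 2},   # a short test run
    ]

    total_sus = sum(job["nodes"] * job["hours"] for job in jobs)
    print(f"Total request: {total_sus} SUs (node hours)")
    # Do NOT multiply by cores: 100 nodes for 24 hours is 2,400 SUs,
    # not 2,400 x 68 core hours.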

SDSC's Comet GPU

SDSC's Comet GPU has 36 general-purpose GPU nodes, each with 2 Tesla K80 cards, and each card holds 2 GK210 GPUs (144 GPUs in total). Each GPU node also features 2 Intel Haswell processors of the same design and performance as the standard compute nodes (described separately). The GPU nodes are integrated into the Comet resource and are available through the SLURM scheduler for either dedicated or shared node jobs (i.e., a user can run on 1 or more GPUs per node and will be charged accordingly). Like the Comet standard compute nodes, the GPU nodes feature a local SSD that can be specified as a scratch resource during job execution; in many cases using SSDs can alleviate I/O bottlenecks associated with the shared Lustre parallel file system.
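
To help size a request, the sketch below estimates charges for shared versus dedicated GPU-node jobs. It assumes, based on the "charged accordingly" note above, that shared jobs are charged for the GPUs they request times wall-clock hours and that dedicated jobs are charged for all 4 GPUs in the node; the authoritative charging rules are defined by SDSC's Comet documentation.

    # Rough estimate of Comet GPU charges, assuming charges scale with
    # GPUs x wall-clock hours (shared jobs) or with the full node's
    # 4 GPUs (dedicated jobs). The exact formula is set by SDSC.
    GPUS_PER_NODE = 4  # 2 Tesla K80 cards x 2 GK210 GPUs each

    def estimate_charge(gpus_requested, wallclock_hours, shared=True):
        """Return an estimated charge in GPU hours for one job."""
        gpus_charged = gpus_requested if shared else GPUS_PER_NODE
        return gpus_charged * wallclock_hours

    print(estimate_charge(1, 12, shared=True))   # shared, 1 GPU, 12 h -> 12
    print(estimate_charge(4, 12, shared=False))  # dedicated node, 12 h -> 48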

Comet's GPUs are a specialized resource that performs well for certain classes of algorithms and applications. There is a large and growing base of community codes that have been optimized for GPUs, including those in molecular dynamics and machine learning. GPU-enabled applications on Comet include Amber, Gromacs, BEAST, OpenMM, TensorFlow, and NAMD.

PSC's Bridges GPU

PSC introduces Bridges GPU, a newly allocatable resource within Bridges that features 32 NVIDIA Tesla K80 GPUs and 64 NVIDIA Tesla P100 GPUs. Bridges GPU complements Bridges Regular, Bridges Large, and the Pylon storage system to accelerate deep learning and a wide variety of application workloads. The resource comprises 16 GPU nodes, each with 2 NVIDIA Tesla K80 GPU cards, 2 Intel Xeon CPUs (14 cores each), and 128 GB of RAM, plus 32 GPU nodes, each with 2 NVIDIA Tesla P100 GPU cards, 2 Intel Xeon CPUs (16 cores each), and 128 GB of RAM.

PSC's Bridges is a uniquely capable resource for empowering new research communities and bringing together HPC and Big Data. Bridges integrates a flexible, user-focused, data-centric software environment with very large shared memory, a high-performance interconnect, and rich file systems to empower new research communities, bring desktop convenience to HPC, and drive complex workflows.

Bridges supports new communities through extensive interactivity, gateways, persistent databases and web servers, high-productivity programming languages, and virtualization. The software environment is extremely robust, supporting capabilities such as Python, R, and MATLAB on large-memory nodes, genome sequence assembly on nodes with up to 12 TB of RAM, machine learning and especially deep learning, Spark and Hadoop, complex workflows, and web architectures to support gateways.

Estimated Available Service Units/GB for upcoming meeting

Resource                          SUs Available
Jetstream (IU/TACC)               5,000,000 vCPU hours
SuperMIC (LSU)                    6,500,000 node hours
OSG                               2,000,000 CPU hours
Bridges Regular Memory (PSC)      38,000,000 core hours
Bridges Large Memory (PSC)        700,000 core hours
Bridges GPU (PSC)                 TBD
Pylon (PSC)                       2,000,000 TB
Comet (SDSC)                      80,000,000 core hours
Comet GPU (SDSC)                  TBD
Data Oasis (SDSC)                 300,000 TB
XStream (Stanford)                500,000 GPU hours
  (Cray CS-Storm GPU Supercomputer)
Stampede2 - Phase 1 (TACC)        10,000,000 node hours
  (TACC Dell/Intel Knights Landing System)
Wrangler (TACC)                   180,000 node hours
  (Data Analytics System)
Wrangler Storage (TACC)           500,000 TB
Ranch (TACC)                      2,000,000 TB
  (Long-term tape Archival Storage)

Retiring XSEDE Resources

The retiring resources listed below are not available for new or renewal research requests.

  • TACC's Stampede 1 System (July 2017)
  • SDSC's Gordon Compute Cluster (March 2017)

Storage Allocations

Continuing this submission period, access to XSEDE storage resources, along with compute resources, must be requested and justified, both in the XSEDE Resource Allocation System (XRAS) and in the body of the proposal's main document. Several XSEDE sites offer allocatable storage facilities; see Pylon (PSC), Data Oasis (SDSC), Wrangler Storage (TACC), and Ranch (TACC) in the table above.

Storage needs have always been part of allocation requests; however, XSEDE will now be enforcing storage awards in unison with the storage sites. Please visit XSEDE's Storage page for more information.

Allocation Request Procedures

  • In the past, code performance and scaling were to be addressed in a section of every research request's main document. This section has been overlooked by many PIs in recent quarterly research submission periods, which has led to severe reductions or even complete rejection of both new and renewal requests. Beginning with this quarterly submission period, it is mandatory to upload a scaling and code performance document detailing your code's efficiency. Please see Section 7.2, Review Criteria, under Allocations Policies.

  • It is mandatory to disclose and describe access to other cyberinfrastructure resources (e.g., NSF Blue Waters, DOE INCITE resources, local campus, ...) in the proposal's main document. Please see Section 7.3, Access to Other CI Resources, in the Allocations Policies. Failure to disclose access to these resources could lead to severe reductions or even complete rejection of both new and renewal requests. If there is no access to other cyberinfrastructure resources, this should be made clear as well.

  • The XRAC review panel requests that PIs include the following in their requests:

    The description of the computational methods must include explicit specification of the integration
    time step value, if relevant (e.g., molecular dynamics simulations). If this detail is not provided,
    a 1 femtosecond (1 fs) time step will be assumed, and that value will be used to evaluate the
    proposed computations (a worked example follows this list).
  • All funding used to support the Research Plan of an XRAC Research Request must be reported in the Supporting Grants form in the XRAS submission. Reviewers use this information to assess whether the PI has enough support to accomplish the Research Plan, analyze data, prepare publications, etc.

  • Publications that have resulted from the use of XSEDE resources should be entered into your XSEDE User Portal profile, which you will be able to attach to your research submission.

  • Also note that the scaling and code performance information is expected to come from the resource(s) being requested in the research request.
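
As noted in the time step item above, here is a minimal worked example of how the assumed time step changes the evaluated size of a molecular dynamics request; the simulated time and time step values are hypothetical, for illustration only.

    # Illustration of how the assumed integration time step affects the
    # number of MD steps used to evaluate a request. Numbers are hypothetical.
    FS_PER_NS = 1_000_000  # femtoseconds per nanosecond

    def md_steps(simulated_ns, timestep_fs):
        """Number of MD integration steps needed for the simulated time."""
        return simulated_ns * FS_PER_NS / timestep_fs

    target_ns = 500  # hypothetical total simulated time

    steps_stated  = md_steps(target_ns, timestep_fs=2.0)  # 2 fs, if stated
    steps_default = md_steps(target_ns, timestep_fs=1.0)  # 1 fs, assumed when unstated

    # Leaving the time step unstated doubles the step count (5.0e8 vs 2.5e8),
    # and with it the computational cost reviewers will assume.
    print(steps_stated, steps_default)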

Policy Changes

Reference: XSEDE Allocations Policies document

  • Storage allocations, whether for archival storage in conjunction with compute and visualization resources or for stand-alone storage, must be requested explicitly both in your proposal (research proposals) and in the resource section of XRAS.

  • Furthermore, the PI must describe the peer-reviewed science goals that the resource award will facilitate. These goals must match, or be sub-goals of, those described in the listed funding award for that year.

  • After the Panel Discussion at the XRAC meeting, the total Recommended Allocation is determined and compared to the total Available Allocation across all resources. Transfers of allocations may be made for projects that are more suitable for execution on other resources; transfers may also be made for projects that can take advantage of other resources, hence balancing the load. When the total Recommended Allocation considerably exceeds the Available Allocation, a reconciliation process adjusts all Recommended Allocations to remove the oversubscription. This adjustment process reduces large allocations more than small ones and gives preference to NSF-funded projects or project portions. Under the direction of NSF, additional adjustments may be made to achieve a balanced portfolio of awards to diverse communities, geographic areas, and scientific domains.

  • The Conflict of Interest (COI) policy will be strictly enforced for large proposals. For small requests, the PI/reviewer may participate in the respective meeting but must leave the room during the discussion of their own proposal.

  • XRAC proposals request resources that represent a significant investment by the National Science Foundation. The XRAC review process therefore strives to be as rigorous as that for equivalent NSF proposals.

  • The actual availability of resources is not considered in the review. Only the merit of the proposal is. Necessary reductions due to insufficient resources will be made after the merit review, under NSF guidelines, as described in Section 6.4.1.

  • A maximum advance of 10% applies to all research requests, as described in Section 3.5.4.

Examples of well-written proposals

Please visit the Submitting a Successful Research Allocation Request page for more information on writing a successful research proposal as well as examples of successful research allocation requests.

If you would like to discuss your plans for submitting a research request please send email to the XSEDE Help Desk at help@xsede.org. Your questions will be forwarded to the appropriate XSEDE Staff for their assistance.

XSEDE Trial Allocations

XSEDE has offered Trial Allocations since April 1, 2015. Trial Allocations are designed to give potential users rapid but limited access to XSEDE resources. Within one business day, users are able to log on and evaluate an XSEDE resource prior to requesting a larger Startup or Research allocation. Trial allocation sizes are limited but sufficient for initial resource evaluation. For compute resources, a trial allocation gives new users the ability to compile and run software, and evaluate results, within the XSEDE resource's software and hardware environment.

Trial Allocations are currently offered only on SDSC's Comet resource. To apply for a trial allocation, create an XSEDE User Portal (XUP) account. Once you have your XUP account, please submit a ticket via the XSEDE help-desk.

Resource    Trial Allocation Limit    Request a Trial Account
Comet       1,000 SUs, 6 months       Please submit an XSEDE help-desk ticket

Last update: March 16, 2017