XSEDE Resource Allocation Committee (XRAC) Announcements
The next research allocation submission period is March 15, 2017 - April 15, 2017.
Please review the following information prior to submitting a Research Allocation Request.
- New XSEDE Resources
- Retiring XSEDE Resources
- Storage Allocations
- Estimated Available Service Units/GB for this Period
- Allocation Request Procedures
- Policy Changes
- Examples of Well-Written Proposals
A recent change to the submission of proposals is that the Allocations proposal submission system (XRAS) will force submissions to adhere to the uploaded document page limits. Please see: https://portal.xsede.org/allocation-policies#63
See the XSEDE Resources Overview for a list of XSEDE compute, visualization and storage resources, and more details on the new systems.
Stanford University announces the availability of a new Cray GPU cluster, XStream, interconnected with FDR InfiniBand (56 Gb/s) in a fat-tree topology. It differs from traditional CPU-based HPC systems as it has almost a Petaflop (PF) of GPU compute power. Each of the 65 nodes has 8 NVIDIA K80 cards or 16 NVIDIA Kepler GPUs, interconnected through PCI-Express PLX-based switches. Each GPU has 12 GB of GDDR5 memory. XStream's compute nodes feature 2 Intel Ivy-Bridge 10-core CPUs, 256 GB of DRAM and 450 GB of local SSD storage. The system features 1.4 PB of Lustre storage (22 GB/s aggregate).
Despite this extreme, near-Petaflop GPU compute density, the system ranked #6 on the June 2015 Green 500 list and moved up to #5 on the November 2015 list.
Each of the two login nodes has a 10 GigE connection to Stanford's wide-area network. This network provides multiple 10 Gigabit/s connections to Internet2 and the commodity Internet. In the near future, the network will connect at 100 Gigabit/s with the Research and Education Networks.
Pittsburgh Supercomputing Center's newest HPC resource, Bridges, integrates advanced memory technologies with a uniquely flexible, user-focused, data-centric environment to empower new research communities, bring desktop convenience to HPC and drive complex workflows. Bridges will differ from traditional HPC systems and support new communities through extensive interactivity, gateways, persistent databases and web servers, high-productivity programming languages and virtualization. Bridges will feature three tiers of processing nodes with either 128 GB (Regular Shared Memory, RSM), 3 TB (Large Shared Memory, LSM) or 12 TB (Extreme Shared Memory, ESM) of hardware-enabled coherent shared memory per node. RSM nodes will have 2 Intel Xeon EP-series CPUs; LSM nodes will be HP DL580 servers with 4 Intel Xeon EX-series CPUs; and ESM nodes will be HP Integrity Superdome X servers with 16 Intel Xeon EX-series CPUs.
In addition, Bridges will have persistent database and web server nodes and dedicated nodes for data transfer, all dual-socket Xeon servers with 128GB of RAM. The data transfer nodes will have 10 GigE connections to PSC's wide-area network, enabling high-performance data transfers between Bridges and XSEDE, campuses, instruments and other advanced cyberinfrastructure.
Bridges' components will be interconnected by the Intel Omni-Path Fabric, which delivers 100 Gbps line speed, low latency, excellent scalability and improved tolerance to data errors. A unique two-level "island" topology, designed by PSC, will maximize performance for the intended workloads. Compute islands will provide full bisection bandwidth to applications spanning up to 42 nodes. Storage islands will take advantage of the Intel Omni-Path Fabric to implement multiple paths and provide optimal bandwidth to the Pylon filesystem. Storage switches will be cross-linked to all other storage switches and will connect management nodes, database nodes, web server nodes and data transfer nodes.
The Jetstream cloud system will be available for allocations and use in early operations mode on or around 22 January 2016. Researchers and educators are encouraged to request allocations during the fall 2015 request period! Jetstream is a first-of-a-kind cloud system for the NSF: the first production cloud supporting all areas of NSF-funded science and engineering research. Jetstream will be a user-friendly cloud environment designed to give researchers and students access to computing and data analysis resources on demand. Users will interact with the system through a menu of "virtual machines" designed to support research in many disciplines, including biology, atmospheric science, earth science, economics, network science, observational astronomy and the social sciences. Jetstream will also allow creators of new research software to make those tools easily available to potential users, speeding their adoption. The primary mode of use for Jetstream will be interactive. Jetstream will run a very standard cloud environment: OpenStack and KVM. The user interface for Jetstream is based on the Atmosphere interface developed by the University of Arizona and used for some time in iPlant; it provides both terminal and remote desktop access to your virtual machine(s) through a web browser, along with advanced programmatic interfaces.
There are two units of allocation on Jetstream: VMs and persistent storage.
The basic unit of VM allocation for Jetstream will be based on a virtual CPU (vCPU) hour: 1 service unit (SU) is equivalent to 1 vCPU for 1 hour of wall-clock time. A standard "Tiny" VM instance will consist of 1 vCPU, 2 GB of RAM, and 8 GB of storage; this corresponds closely to a "t2.small" instance in Amazon Web Services. We are mindful that we are establishing precedents for other cloud systems, and this precedent is based on consideration of future flexibility and on consistency with current best practices in commercial clouds. The majority of storage within an instance will be available for user data, but the exact amount will vary based on the VM image you select.
You may also request additional persistent storage for data. Jetstream persistent storage is for virtual machine images, VM snapshots, block storage volumes attached to a VM, and eventually object storage (API-accessible) data. Persistent storage policies on Jetstream will be determined as we gain experience with people using the system. Modest amounts of persistent storage will be available; initial allocations will be capped at 10 TB per allocation. If you need storage for persisting data and results beyond the local storage available on Jetstream, please request additional storage on the Wrangler data analytics / data storage system.
If you are not sure how much of the resource to request on Jetstream, estimate how many hours you (or you and your students) might interactively use a cloud system for your research in a year, and multiply that by 10 to get the SUs required (as a working first estimate, we believe that 10 times a "Tiny" VM will suit many users). Storage above the default size of each instance may also be requested.
To give a sense of our thoughts on what a modest request for Jetstream might be, we are recommending 50,000 SUs as the upper limit for startup allocations (approximately 5 "Tiny" VMs running for a year).
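The arithmetic behind these figures can be sketched in a few lines of Python. The SU rate and "Tiny" VM size come from the description above; the interactive-hours figure is purely illustrative, not a recommendation:

```python
# Jetstream SU accounting sketch: 1 SU = 1 vCPU for 1 hour of wall-clock time.

def su_cost(vcpus, hours):
    """Service units consumed by a VM with `vcpus` virtual CPUs running for `hours` hours."""
    return vcpus * hours

# A "Tiny" instance (1 vCPU) running around the clock for a year:
tiny_year = su_cost(vcpus=1, hours=24 * 365)   # 8,760 SUs

# Five Tiny instances for a year -- roughly the 50,000 SU startup ceiling:
five_tiny_year = 5 * tiny_year                 # 43,800 SUs

# Working first estimate from the text: interactive hours per year, times 10.
interactive_hours = 400                        # illustrative figure only
estimated_sus = interactive_hours * 10         # 4,000 SUs
```

Five Tiny VMs for a year comes to 43,800 SUs, which is why 50,000 SUs is a comfortable startup ceiling.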
If you would like assistance preparing a request for use of Jetstream, send email to email@example.com and a Jetstream team rep will be in touch with you promptly.
One important note: Jetstream will become available in early operations mode on 22 January but at that time will not be formally accepted by the NSF. This means that usage between the time early operations starts and the time the system is accepted is not charged against your allocation. Requesting an allocation now will allow you to get on the system in the early operations phase and help the Jetstream team (Indiana University Pervasive Technology Institute, TACC, University of Arizona, Johns Hopkins, and several other partners) establish the baseline policies and troubleshoot any early issues during that period without having your usage charged against your allocation until the system is formally accepted by the NSF.
The retiring resources listed below are not available for new or renewal research requests.
- SDSC's Gordon Compute Cluster (March 2017)
Continuing this submission period, access to XSEDE storage resources, along with compute resources, must be requested and justified both in the XSEDE Resource Allocation System (XRAS) and in the body of the proposal's main document. The following XSEDE sites will be offering allocatable storage facilities:
Storage needs have always been part of allocation requests; however, XSEDE will now enforce storage awards in unison with the storage sites. Please visit XSEDE's Storage page for more information.
|Resource||Estimated Available SUs/GB||Site / Description|
|Quarry||40||Indiana University Gateway/Web Service Hosting|
|SuperMIC||6,500,000||Louisiana State University|
|Open Science Grid (OSG)||2,000,000|
|Bridges (Regular Memory)||44,000,000||Pittsburgh Supercomputing Center|
|Bridges (Large Memory)||700,000||Pittsburgh Supercomputing Center|
|Pylon||2,000,000||PSC Persistent disk storage|
|Comet||75,000,000||SDSC Dell Cluster with Intel Haswell Processors|
|Data Oasis||300,000||SDSC Medium-term disk storage|
|Maverick||4,000,000||TACC HP/NVIDIA Interactive Visualization and Data Analytics System|
|XStream||500,000||Stanford Cray CS-Storm GPU Supercomputer|
|Stampede||125,000,000||TACC Dell PowerEdge C8220 Cluster with Intel Xeon Phi coprocessors|
|Wrangler||180,000||TACC Data Analytics System|
|Wrangler Storage||500,000||TACC Long-term Storage|
|Ranch||2,000,000||TACC Long-term tape Archival Storage|
In the past, code performance and scaling was to be addressed in a section of every research request's main document. This section has been overlooked by many PIs in recent quarterly research submission periods, which has led to severe reductions or even complete rejection of both new and renewal requests. Continuing this quarterly submission period, it is mandatory to upload a scaling and code performance document detailing your code's efficiency. Please see section 7.2 Review Criteria under Allocations Policies.
It is mandatory to disclose and discuss in detail any access to other cyberinfrastructure resources (e.g., NSF Blue Waters, DOE INCITE resources, local campus resources) in the proposal's main document. Please see section 7.3 Access to Other CI Resources in the Allocations Policies. Failure to disclose access to these resources could lead to severe reductions or even complete rejection of both new and renewal requests. If you have no access to other cyberinfrastructure resources, this should be made clear as well.
The XRAC review panel requests that PIs include the following in their requests:
The description of the computational methods must include explicit specification of the integration time step value, where relevant (e.g., molecular dynamics simulations). If these details are not provided, a 1 femtosecond (1 fs) time step will be assumed, and this information will be used accordingly to evaluate the request.
All funding used to support the Research Plan of an XRAC Research Request must be reported in the Supporting Grants form in the XRAS submission. Reviewers use this information to assess whether the PI has enough support to accomplish the Research Plan, analyze data, prepare publications, etc.
Publications that have resulted from the use of XSEDE resources should be entered into your XSEDE portal profile which you will be able to attach to your Research submission.
Also note that it is expected that the scaling and code performance information is from the resource(s) being requested in the research request.
Reference: XSEDE Allocations Policies document
Storage allocations, whether for Archival Storage in conjunction with compute and visualization resources or for Stand-Alone Storage, must be requested explicitly both in your proposal's main document (for research proposals) and in the resource section of XRAS.
Furthermore, the PI must describe the peer-reviewed science goal that the resource award will facilitate. These goals must match or be sub-goals of those described in the listed funding award for that year.
After the Panel Discussion at the XRAC meeting, the total Recommended Allocation is determined and compared to the total Available Allocation across all resources. Transfers of allocations may be made for projects that are more suitable for execution on other resources; transfers may also be made for projects that can take advantage of other resources, thereby balancing the load. When the total Recommended Allocation considerably exceeds the Available Allocation, a reconciliation process adjusts all Recommended Allocations to remove oversubscription. This adjustment process reduces large allocations more than small ones and gives preference to NSF-funded projects or project portions. Under the direction of NSF, additional adjustments may be made to achieve a balanced portfolio of awards across diverse communities, geographic areas, and scientific domains.
Conflict of Interest (COI) policy will be strictly enforced for large proposals. For small requests, the PI/reviewer may participate in the respective meeting, but leave the room during the discussion of their proposal.
XRAC proposals for allocations request resources representing a significant investment of the National Science Foundation. The XRAC review process therefore strives to be as rigorous as for equivalent NSF proposals.
The actual availability of resources is not considered in the review; only the merit of the proposal is. Necessary reductions due to insufficient resources will be made after the merit review, under NSF guidelines, as described in Section 6.4.1.
A maximum advance of 10% applies to all research requests, as described in Section 3.5.4.
Please visit the Submitting a Successful Research Allocation Request page for more information on writing a successful research proposal as well as examples of successful research allocation requests.
If you would like to discuss your plans for submitting a research request please send email to the XSEDE Help Desk at firstname.lastname@example.org. Your questions will be forwarded to the appropriate XSEDE Staff for their assistance.
XSEDE offers Trial Allocations, which are designed to give potential users rapid, but limited, access to XSEDE resources. Within one business day, users will be able to log on and evaluate an XSEDE resource prior to requesting a larger startup or research allocation. Trial allocation sizes will be limited, but sufficient, for initial resource evaluation. For compute resources, trial allocations give new users the ability to compile and run software and evaluate results within the XSEDE resource software and hardware environment.
Trial Allocations are currently offered only on SDSC's Comet resource. To apply for a trial allocation, create an XSEDE User Portal (XUP) account. Once you have your XUP account, please submit a ticket via the XSEDE help-desk.
|Resource||Trial Allocation limit||Request a Trial Account|
|Comet||1000 SUs, 6 months||Please submit an XSEDE help-desk ticket|
Last update: February 13, 2017