XSEDE ALLOCATION REQUESTS: Open Submission, Guidelines, Resource and Policy Changes

Posted by Ken Hackworth on 03/16/2015 21:22 UTC

XSEDE is now accepting Research Allocation Requests for the allocation period July 1, 2015, to June 30, 2016. The submission period runs from March 15, 2015, through April 15, 2015. Please review the new XSEDE systems and important policy changes (see below) before you submit your allocation request through the XSEDE User Portal.
————————————————————
NEW XSEDE Resources:
See the Resource Catalog for a list of XSEDE compute, visualization and storage resources, and more details on the new systems (https://portal.xsede.org/web/guest/resources/overview).

  • The NICS Darter resource is a Cray XC30 (Cascade) supercomputer that runs the Cray Linux Environment (CLE) 5.0 UP03, based on SLES 11. It has 11,968 physical compute cores (23,936 logical cores with Hyper-Threading enabled), 24 TB of compute memory, and a Dragonfly network topology built on Cray Aries interconnect technology. Darter has 748 compute nodes with a peak performance of nearly 250 Tflops, a 334 TB parallel Lustre scratch file system on Cray Sonexion hardware, 2 login nodes with 10GigE uplinks, and long-term archival storage available through HPSS. Darter is intended for highly scalable parallel applications.
  • SDSC is pleased to announce its newest supercomputer, Comet. Comet will be a 2.0 petaflop (PF) Dell integrated compute cluster with next-generation Intel Haswell processors (with AVX2), interconnected with Mellanox FDR InfiniBand in a hybrid fat-tree topology. Full bisection bandwidth will be available at the rack level (72 nodes), with 4:1 oversubscription across racks. Compute nodes will feature 320 GB of SSD storage and 128 GB of DRAM per node. The system will also feature 7 PB of performance storage (200 GB/s aggregate) and 6 PB of durable storage. A subset of the system will feature 4 NVIDIA GPUs per node. Additionally, four 1.5 TB large-memory nodes and additional nodes for Gateway hosting and VM image repositories will be available. Comet will enable high-performance virtualization using single root I/O virtualization (SR-IOV) technology. Please note that there are two request limits for the Comet resource: a maximum request size of 10M SUs (except for Gateway requests), and a maximum of 1,728 cores per job (see the sketch following the resource list below).
    • Please note, Comet will also be providing high-performance Virtual Clusters (VCs) later this year. VCs are primarily intended for users who require both fine-grained control over their software stack and access to multiple nodes. Science Gateways that serve large research communities and require a flexible software environment are encouraged to consider applying for a VC, as are current users of commercial clouds who want to make the transition for performance or cost reasons.
  • The TACC Wrangler Data Analytics system is designed to satisfy the needs of the many data-intensive computing users whose I/O patterns are not well suited to classic HPC systems. Wrangler features 0.5 PB of usable flash-based storage accessible directly via the PCI bus to all 96 compute nodes at TACC. Unlike node-local SSD solutions, this configuration gives all of the compute nodes direct PCI-level access to all of the storage, providing I/O rates of 1 TB/s and 250 million IOPS. Wrangler also features 10 PB of replicated storage for both input and result data, hosted at TACC and at Indiana University.
  • Storage Allocations: Continuing this submission period, access to XSEDE storage resources, along with compute resources, must be requested and justified both in the XRAS application and in the body of the proposal's main document. The following XSEDE sites will be offering allocatable storage facilities:
    • PSC (Data SuperCell)
    • SDSC (Data Oasis)
    • TACC (Ranch)
    • XSEDE-Wide File System (XWFS)

Storage needs have always been part of allocation requests; however, XSEDE will now be enforcing storage awards in unison with the storage sites. Please see https://www.xsede.org/storage.
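A note on the Comet per-job limit mentioned above: assuming 24-core Haswell compute nodes (two 12-core sockets, a detail not stated in this announcement), the 1,728-core cap works out to exactly 72 nodes, i.e., one full-bisection rack. The following is a minimal Python sketch of that arithmetic under those assumptions; it is an illustration, not an XSEDE tool.

    # Hypothetical sanity check for a Comet job request. CORES_PER_NODE is an
    # assumption (two 12-core Haswell sockets); the per-job cap and the 72-node
    # full-bisection rack size come from the announcement above.
    CORES_PER_NODE = 24
    MAX_CORES_PER_JOB = 1_728
    RACK_NODES = 72  # full bisection bandwidth is available within one rack

    def check_comet_job(nodes):
        """Return the core count, raising if the request exceeds the cap."""
        cores = nodes * CORES_PER_NODE
        if cores > MAX_CORES_PER_JOB:
            raise ValueError(f"{cores} cores exceeds the {MAX_CORES_PER_JOB}-core cap")
        return cores

    # 72 nodes x 24 cores = 1,728 cores: the largest allowed job fits exactly
    # within one full-bisection rack and never crosses the 4:1 oversubscribed
    # inter-rack links.
    print(check_comet_job(72))  # -> 1728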

RETIRING XSEDE Resources:
The retiring resources listed below are not available for new or renewal research requests.

  • SDSC Trestles (April 2015) - Please see the information above about the SDSC Comet resource, which will replace Trestles.

Estimated available Service Units (compute resources) or GB (storage resources) for the upcoming meeting:
Indiana University HP DL580 Large Memory Cluster (Mason) 300,000
Indiana University Gateway/Web Service Hosting (Quarry) 40
LSU (SuperMIC) 6,000,000
NICS (Darter) 100,000,000
Open Science Grid (OSG) 2,000,000
PSC SGI Altix UV (Blacklight) 4,000,000
PSC Persistent disk storage (Data SuperCell) 100,000
SDSC Dell Cluster with Intel Haswell Processors (Comet) 80,000,000
SDSC Appro Cluster with Intel Sandy Bridge Processors (Gordon Compute Cluster) TBD
SDSC Medium-term disk storage (Data Oasis) 250,000
TACC HP/NVIDIA Interactive Visualization and Data Analytics System (Maverick) 3,000,000
TACC Dell PowerEdge C8220 Cluster with Intel Xeon Phi coprocessors (Stampede) 175,000,000
TACC Data Analytics System (Wrangler) TBD
TACC Long-term Storage (Wrangler Storage) TBD
TACC Long-term tape Archival Storage (Ranch) 4,000,000
XSEDE-Wide File System (XWFS) 150,000

Allocation Request Procedures:

  • In the past, code performance and scaling was a section to be addressed in the main document of all research requests. This section has been overlooked by many PIs in recent quarterly research submission periods, which has led to severe reductions or even complete rejection of both new and renewal requests. As of this quarterly submission period, it is mandatory to upload a scaling and code performance document detailing your code's efficiency. Please see Section 7.2, Review Criteria, of the Allocations Policy document (https://portal.xsede.org/group/xup/allocation-policies).
  • Also, it is now mandatory to disclose, in the main document, access to other cyberinfrastructure resources (e.g., NSF Blue Waters, DOE INCITE resources, local campus, …). Please see Section 7.3, Review Criteria, of the Allocations Policy document (https://portal.xsede.org/group/xup/allocation-policies). Failure to disclose access to these resources could lead to severe reductions or even complete rejection of both new and renewal requests. If you have no access to other cyberinfrastructure resources, this should be made clear as well.
  • The XRAC review panel has asked that PIs include the following: "The description of the computational methods must include explicit specification of the integration time step value, if relevant (e.g., Molecular Dynamics simulations). If these details are not provided, a 1 femtosecond (1 fs) time step will be assumed, with this information being used accordingly to evaluate the proposed computations." A short worked example of this time-step arithmetic follows below.
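To illustrate why the assumed time step matters, the sketch below shows how it feeds into the step count and SU estimate for a proposed simulation length. This is a minimal Python illustration; the throughput and core-count figures are hypothetical placeholders, not benchmarks, and should be replaced with measured data from your scaling document.

    # Illustrative arithmetic only: how the integration time step changes the
    # step count and core-hour (SU) estimate for a proposed MD campaign.
    FS_PER_NS = 1_000_000  # femtoseconds per nanosecond

    def md_su_estimate(simulated_ns, timestep_fs, ns_per_day, cores):
        """Return (integration steps, SUs) for a molecular dynamics campaign."""
        steps = simulated_ns * FS_PER_NS / timestep_fs
        wallclock_hours = simulated_ns / ns_per_day * 24.0
        return steps, wallclock_hours * cores

    # At a fixed steps-per-second rate, doubling the time step roughly doubles
    # throughput, halving the SUs needed for the same 500 ns of simulation.
    # A 1 fs step is what the panel assumes when no value is given.
    for dt in (1.0, 2.0):
        steps, sus = md_su_estimate(simulated_ns=500.0, timestep_fs=dt,
                                    ns_per_day=10.0 * dt, cores=48)
        print(f"dt = {dt:.0f} fs: {steps:.1e} steps, {sus:,.0f} SUs")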

Policy Changes: Allocations Policy document (https://portal.xsede.org/group/xup/allocation-policies)

  • Storage allocation requests for Archival Storage in conjunction with compute and visualization resources, and/or Stand Alone Storage, must be requested explicitly both in the body of your research proposal and in the resource section of XRAS.
  • Furthermore, the PI must describe the peer-reviewed science goal that the resource award will facilitate. These goals must match or be sub-goals of those described in the listed funding award for that year.
  • After the panel discussion at the XRAC meeting, the total Recommended Allocation is determined and compared to the total Available Allocation across all resources. Transfers of allocations may be made for projects that are more suitable for execution on other resources; transfers may also be made for projects that can take advantage of other resources, thus balancing the load. When the total Recommended Allocation considerably exceeds the Available Allocation, a reconciliation process adjusts all Recommended Allocations to remove the oversubscription. This adjustment process reduces large allocations more than small ones and gives preference to NSF-funded projects or project portions. Under the direction of NSF, additional adjustments may be made to achieve a balanced portfolio of awards across diverse communities, geographic areas, and scientific domains.
  • The Conflict of Interest (COI) policy will be strictly enforced for large proposals. For small requests, the PI/reviewer may participate in the respective meeting but must leave the room during the discussion of their own proposal.
  • XRAC proposals request resources that represent a significant investment by the National Science Foundation. The XRAC review process therefore strives to be as rigorous as that for equivalent NSF proposals.
  • The actual availability of resources is not considered in the review. Only the merit of the proposal is. Necessary reductions due to insufficient resources will be made after the merit review, under NSF guidelines, as described in Section 6.4.1.
  • A maximum 10% advance is allowed on all research requests, as described in Section 3.5.4.

Examples of well-written proposals
For more information about writing a successful research proposal, as well as examples of successful research allocation requests, please see: https://portal.xsede.org/successful-requests

If you would like to discuss your plans for submitting a research request please send email to the XSEDE Help Desk at help@xsede.org. Your questions will be forwarded to the appropriate XSEDE Staff for their assistance.

Ken Hackworth
XSEDE Resource Allocations Coordinator
help@xsede.org