XSEDE ALLOCATION REQUESTS Open Submission, Guidelines, Resource and Policy Changes

Update 1

Posted by Ken Hackworth on 03/13/2013 23:23 UTC

XSEDE is now accepting Research Allocation Requests for the allocation period July 1, 2013, through June 30, 2014. The submission period is from March 15, 2013, through April 15, 2013. Please review the new XSEDE systems and important policy changes (see below) before you submit your allocation request through the XSEDE User Portal.
————————————————————
NEW XSEDE Resources:
See the Resource Catalog for a list of XSEDE compute, visualization and storage resources, and more details on the new systems (https://portal.xsede.org/web/guest/resources/overview).

  • Storage Allocations: Starting this submission period, access to XSEDE storage resources, along with compute resources, must be requested and justified, both in the POPS application and in the body of the proposal’s main document. The following XSEDE sites will be offering allocatable storage facilities:

NICS (HPSS)
PSC (Data SuperCell)
SDSC (Data Oasis)
TACC (Ranch)
XSEDE-Wide File System (XWFS)

Storage needs have always been part of allocation requests; however, XSEDE will now enforce storage awards in coordination with the storage sites. Please see https://www.xsede.org/storage.

  • Mason (https://kb.iu.edu/data/bbhh.html) at Indiana University is a large-memory computer cluster configured to support data-intensive, high-performance computing tasks for researchers using genome assembly software (particularly software suitable for assembly of data from next-generation sequencers), large-scale phylogenetic software, or other genome analysis applications that require large amounts of computer memory. Mason will enter service on the Extreme Science and Engineering Discovery Environment (XSEDE) in April 2013. Mason has 16 HP DL580 G7 compute nodes, each with four eight-core 1.87 GHz Intel L7555 CPUs, 512 GB of memory, 400 GB of local scratch disk, and a 10-gigabit Ethernet network interface. Mason is intended primarily for large-memory (>16 GB and up to 500 GB) serial jobs.

Estimated Available Service Units/TB for upcoming meeting:
Indiana University HP DL580 Large Memory Cluster (Mason) TBD
Indiana University Gateway/Web Service Hosting (Quarry) 40
NICS HP/NVIDIA (Keeneland) 1,500,000
NICS Cray XT5 (Kraken) 200,000,000
NICS SGI/NVIDIA, Visualization and Data Analysis System (Nautilus) 2,000,000
Open Science Grid (OSG) 2,000,000
PSC SGI Altix UV (Blacklight) 7,000,000
SDSC Appro Linux Cluster (Trestles) 16,000,000
SDSC Appro with Intel Sandy Bridge Cluster (Gordon Compute Cluster) 25,000,000
TACC Dell PowerEdge Westmere Linux Cluster (Lonestar) 15,000,000
TACC Dell/NVIDIA Visualization and Data Analysis Cluster (Longhorn) 3,000,000
TACC Dell PowerEdge C8220 Cluster with Intel Xeon Phi coprocessors (Stampede) 175,000,000

Ken Hackworth
XSEDE Resource Allocations Coordinator
help@xsede.org

Original post

Posted by Ken Hackworth on 12/19/2012 19:52 UTC

XSEDE is now accepting Research Allocation Requests for the allocation period April 1, 2013, through March 31, 2014. The submission period is from December 15, 2012, through January 15, 2013. Please review the new XSEDE systems and important policy changes (see below) before you submit your allocation request through the XSEDE User Portal.
————————————————————
NEW XSEDE Resources:
See the Resource Catalog for a list of XSEDE compute, visualization and storage resources, and more details on the new systems (https://portal.xsede.org/web/guest/resources/overview).

  • Stampede is configured with 6,400 Dell DCS Zeus compute nodes, each with two 2.7 GHz E5-2680 Intel Xeon (Sandy Bridge) processors. With 32 GB of memory and 50 GB of storage per node, users have access to an aggregate of 205 TB of memory and 275+ TB of local storage. The cluster is also equipped with Intel Xeon Phi coprocessors based on the Intel Many Integrated Core (Intel MIC) architecture. Stampede will deliver 2+ PF of peak performance on the main cluster and 7+ PF of peak performance on the Intel Xeon Phi coprocessors. Stampede also provides access to 16 large-memory nodes with 1 TB of RAM each, and 128 nodes containing an NVIDIA Kepler 2 GPU, giving users access to large shared-memory computing and remote visualization capabilities, respectively. Compute nodes have access to a 14 PB Lustre parallel file system. An FDR InfiniBand switch fabric interconnects the nodes in a fat-tree topology with a point-to-point bandwidth of 40 Gb/s (unidirectional). Stampede is intended primarily for parallel applications scalable to tens of thousands of cores. Normal batch queues will enable users to run simulations up to 24 hours. Jobs requiring longer run times or more cores than allowed by the normal queues will be run in a special queue after approval by TACC staff. Serial and development queues will also be configured. In addition, users will be able to run jobs using thousands of the Intel Xeon Phi coprocessors via the same queues to support massively parallel workflows.

Decommissioned XSEDE Resources:
The Purdue resources (Steele and Condor) that have been available through the XSEDE program will not be available for research allocations. These resources will be decommissioned as XSEDE resources by mid-summer 2013.

Estimated Available Service Units/TB for upcoming meeting:
Albedo, Wide Area File System 150
Indiana University Gateway/Web Service Hosting (Quarry) 40
NICS HP/NVIDIA (Keeneland) 2,000,000
NICS Cray XT5 (Kraken) 200,000,000
NICS SGI/NVIDIA, Visualization and Data Analysis System (Nautilus) 2,000,000
Open Science Grid (OSG) 2,000,000
PSC SGI Altix UV (Blacklight) 7,000,000
SDSC Appro Linux Cluster (Trestles) 16,000,000
SDSC Appro with Intel Sandy Bridge Cluster (Gordon Compute Cluster) 25,000,000
TACC Dell PowerEdge Westmere Linux Cluster (Lonestar) 13,000,000
TACC Dell/NVIDIA Visualization and Data Analysis Cluster (Longhorn) 3,000,000
TACC Dell PowerEdge C8220 Cluster with Intel Xeon Phi coprocessors (Stampede) 150,000,000

Allocation Request Procedures and Policy Changes:
The proposal submission interface is no longer accessible through the URL pops-submit.teragrid.org. Any allocation request (submission) must be made via the XSEDE User Portal: log in, select the Allocations tab, and then click the Submit Request subtab. If you do not have a portal login/password, create an account on the portal welcome page.
The new submission interface has been simplified. The PI’s project is displayed as a history of allocation requests with a status and a list of actions (in orange) that the PI can take on the request. The actions (such as renewal) are only available at appropriate times (submission periods).

Extended Collaborative Support:
Extended Collaborative Support (formerly known as Advanced User Support) requests will no longer be part of the Main Document proposal but will instead be defined by five (5) questions that have been added to the Resource Request section. If Extended Collaborative Support is requested, all five questions must be answered in full. (See the ECSS page at https://portal.xsede.org/group/xup/ecss-justification.)
A guide for writing requests, along with references to examples, is available on the XSEDE website (https://portal.xsede.org/successful-requests). The XSEDE Allocations Policy document explains the allocations process, procedures, and policies (https://portal.xsede.org/web/guest/allocation-policies).

Ken Hackworth
XSEDE Resource Allocations Coordinator
help@xsede.org