Welcome to POPS: System for XSEDE Allocation Requests

To submit an allocation:
  1. If you don't already have an XSEDE User Portal login, please create one.
  2. Log in to the XSEDE User Portal.
  3. Once you log in, you will be able to access POPS.
The overview page provides a summary of the basic allocation request process, and sample research allocation requests can be found on the Sample Requests page.

XSEDE is accepting Research Allocation Requests for the allocation period July 1, 2014, through June 30, 2015. The submission period runs from March 15, 2014, through April 15, 2014. Please review the new and decommissioned XSEDE systems and any policy changes before submitting your allocation request.

For information on how to submit an allocation, please view the Request Steps Guide. If you are unfamiliar with the process of requesting an allocation, please see the Allocations Overview page. It may also be helpful to refer to this list of previous successful requests.

Submission Schedule

New Startups (not normally renewable)
  SU range:          Grand total limit of 200K SUs (K = 1000); see the Hardware Resource Catalog for specific startup limits.
  Open submissions:  Year round
  Close submissions: n/a
  Allocations begin: 2-3 weeks after submission
  Review cycle:      Year round

Educational (renewable)
  SU range:          Grand total limit of 200K SUs; see the Hardware Resource Catalog for specific educational limits.
  Open submissions:  Year round
  Close submissions: n/a
  Allocations begin: 2-3 weeks after submission
  Review cycle:      Year round

Research
  SU range:          No SU limit
  Open submissions:  Dec. 15, Mar. 15, Jun. 15, Sept. 15
  Close submissions: Jan. 15, Apr. 15, Jul. 15, Oct. 15
  Allocations begin: Apr. 1, Jul. 1, Oct. 1, Jan. 1
  Review cycle:      Quarterly

New Items & Resource Changes

New Resources

  • TACC's Maverick, an HP/NVIDIA interactive visualization and data analytics system, is intended primarily for interactive visualization and data analysis jobs, allowing interactive queries of large-scale data sets. Normal batch queues will enable run times of up to 6 hours for interactive jobs and 24 hours for GPGPU and HPC jobs. Maverick is configured with 132 HP ProLiant SL250s Gen8 compute nodes and 132 NVIDIA Tesla K40 GPU accelerators.

  • LSU's SuperMIC, funded by an MRI grant from the NSF to LSU's Center for Computation & Technology, is currently in the acquisition phase. It is expected to be a 1-PetaFlop cluster with 360 compute nodes, each with two 10-core 2.8 GHz Intel Ivy Bridge-EP processors, 64 GB of memory, and two Intel Xeon Phi 7120P coprocessors. LSU previously participated in the TeraGrid program in conjunction with the LONI Queen Bee cluster. Slated to join XSEDE on April 1, 2014, SuperMIC will allocate 40% of its resources to XSEDE.

Retiring Resources

The following systems are no longer being allocated and will retire from XSEDE service on the dates listed.

  • NICS' Kraken - April 2014
  • TACC's Longhorn - replaced by Maverick in March 2014
  • TACC's Lonestar - June 2014
  • Georgia Tech's Keeneland - September 2014

Please see the Resource Catalog for a complete listing of XSEDE compute, visualization and storage resources, and more details on the new systems.

Storage Allocations: Continuing this submission period, access to XSEDE storage resources, along with compute resources, must be requested and justified both in the POPS application and in the body of the proposal's main document. The XSEDE sites offering allocatable storage facilities are listed with the available resources below.

Storage needs have always been part of allocation requests; however, XSEDE will now be enforcing storage awards in unison with the storage sites. Please see the Storage Documentation for more information.

Allocation Request Procedures and Policy Changes

In the past, code performance and scaling was to be addressed in a section of each research request's main document. Many PIs have overlooked this section in recent quarterly research submission periods, which has led to severe reductions or even complete rejection of both new and renewal requests. It is now mandatory to upload a scaling and code performance document detailing your code's efficiency. Please see Section 7.2, Review Criteria, of the Allocations Policy document.

Also, access to other cyberinfrastructure resources (e.g., NSF Blue Waters, DOE INCITE resources) should be disclosed and detailed in the main document. Please see Section 7.3, Access to Other CI Resources, of the Allocations Policy document. Failure to disclose access to these resources could lead to severe reductions or even complete rejection of both new and renewal requests.

Estimated Available Service Units/TB for Upcoming Meeting

Resource                 Request Limit     Resource Description
Blacklight               5,000,000 SUs     PSC SGI Altix UV
Data SuperCell           100 TB            PSC Persistent Disk Storage
Data Oasis               250 TB            SDSC Medium-term Disk Storage
Gordon Compute Cluster   25,000,000 SUs    SDSC Appro Cluster with Intel Sandy Bridge processors
Maverick                 3,000,000 SUs     TACC HP/NVIDIA Visualization and Data Analysis Cluster
Mason                    300,000 SUs       Indiana University HP DL580 Large Memory Cluster
OSG                      2,000,000 SUs     Open Science Grid
Quarry                   40                Indiana University Gateway/Web Service Hosting
Ranch                    8 PB              TACC Long-term Archival Storage
Stampede                 175,000,000 SUs   TACC Dell PowerEdge C8220 Cluster with Intel Xeon Phi coprocessors
SuperMIC                 2,000,000 SUs     Louisiana State University 360-node Cluster with Intel Xeon Phi coprocessors
Trestles                 16,000,000 SUs    SDSC Appro Linux Cluster
XWFS                     150 TB            XSEDE Wide File System

Policies

The XSEDE review and allocation process is described in the XSEDE Allocations Policy document.

Last update: March 18, 2014