Purdue Anvil User Guide
Last update: May 10, 2022

Introduction

Purdue University is the home of Anvil, a powerful new supercomputer that provides advanced computing capabilities to support a wide range of computational and data-intensive research spanning from traditional high-performance computing to modern artificial intelligence applications.

Anvil, which is funded by a $10 million award from the National Science Foundation, significantly increases the capacity available to the NSF's Extreme Science and Engineering Discovery Environment (XSEDE), which serves tens of thousands of researchers across the U.S., and in which Purdue has been a partner for the past nine years. Anvil entered production in 2021 and will serve researchers for five years. Additional funding from the NSF supports Anvil's operations and user support.

The name "Anvil" reflects the Purdue Boilermakers' strength and workmanlike focus on producing results, and the Anvil supercomputer enables important discoveries across many different areas of science and engineering. Anvil also serves as an experiential learning laboratory for students to gain real-world experience using computing for their science, and for student interns to work with the Anvil team for construction and operation. We will be training the research computing practitioners of the future. Learn more about Anvil's mission in the Anvil press release.

Anvil is built in partnership with Dell and AMD and consists of 1,000 nodes, each with two 64-core AMD EPYC "Milan" processors, delivering over 1 billion CPU core hours to XSEDE each year with a peak performance of 5.3 petaflops. Anvil's nodes are interconnected with 100 Gbps Mellanox HDR InfiniBand. The supercomputer ecosystem also includes 32 large-memory nodes, each with 1 TB of RAM, and 16 nodes each with four NVIDIA A100 Tensor Core GPUs, providing 1.5 PF of single-precision performance to support machine learning and artificial intelligence applications.

Anvil is funded under NSF award number 2005632. Carol Song is the principal investigator and project director. Preston Smith, executive director of Research Computing, Xiao Zhu, computational scientist and senior research scientist, and Rajesh Kalyanam, data scientist, software engineer, and research scientist, are all co-PIs on the project.

Anvil Specifications

All Anvil nodes have 128 processor cores, 256 GB to 1 TB of RAM, and 100 Gbps InfiniBand interconnects.

Anvil nodes run CentOS 8 and use SLURM (Simple Linux Utility for Resource Management) as the batch scheduler for resource and job management. The application of operating system patches will occur as security needs dictate. All nodes allow for unlimited stack usage, as well as unlimited core dump size (though disk space and server quotas may still be a limiting factor).
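
If you want to confirm these limits from a shell on the cluster, the standard ulimit shell builtin reports them (a quick check; per the policy above, both should report unlimited):

$ ulimit -s    # stack size limit
$ ulimit -c    # core dump (core file) size limit
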
Anvil Login
Number of Nodes  Processors per Node       Cores per Node  Memory per Node
8                Two Milan CPUs @ 2.0 GHz  32              512 GB

Anvil Sub-Clusters
Sub-Cluster  Number of Nodes  Processors per Node                                        Cores per Node  Memory per Node
A            1000             Two Milan CPUs @ 2.0 GHz                                   128             256 GB
B            32               Two 3rd Gen AMD EPYC™ 7763 CPUs                            128             1 TB
C            16               Two 3rd Gen AMD EPYC™ 7763 CPUs + Four NVIDIA A100 GPUs    128             512 GB

Accessing the System

Obtaining an Account

As an XSEDE computing resource, Anvil is accessible to XSEDE users who receive an allocation on the system. To obtain an account, users may submit a proposal through the XSEDE Allocation Request System.

Interested parties may contact the XSEDE Help Desk for help with an Anvil proposal.

Logging In

Anvil will be accessible via the XSEDE Single Sign-On (SSO) hub.

To login to the XSEDE SSO hub, use your SSH client to start an SSH session on login.xsede.org with your XSEDE User Portal username and password:

localhost$ ssh -l my-xsede-portal-username login.xsede.org

XSEDE now requires that you use the XSEDE Duo service for additional authentication. You will be prompted to authenticate yourself further using Duo and your Duo client app, token, or other contact methods. Consult Multi-Factor Authentication with Duo for account setup instructions.

Once logged into the hub, use the gsissh utility to login to Anvil where you have an account.

[my-xsede-portal-username@ssohub ~]$ gsissh anvil

When reporting a problem to the help desk, please execute the gsissh command with the -vvv option and include the verbose output in your problem description.
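
For example, the same login command with maximum verbosity (the verbose output itself is omitted here and will vary by session):

[my-xsede-portal-username@ssohub ~]$ gsissh -vvv anvil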

ThinLinc

The first time you access Anvil using the ThinLinc client, your desktop might be locked after it has been idle for more than 5 minutes, because the "screensaver" and "lock screen" are turned on in the default settings. To resolve this issue, please refer to the FAQs page.

Anvil Research Computing provides Cendio's ThinLinc as an alternative to running an X11 server directly on your computer. It allows you to run graphical applications or graphical interactive jobs directly on Anvil through a persistent remote graphical desktop session.

ThinLinc is a service that allows you to connect to a persistent remote graphical desktop session. This service works very well over high latency, low bandwidth, or off-campus connection compared to running an X11 server locally. It is also very helpful for Windows users who do not have an easy-to-use local X11 server, as little to no setup is required on your computer.

There are two ways to use ThinLinc: through the native client (preferred) or through a web browser.

Browser-based ThinLinc access is not supported on Anvil at this time. Please use the native ThinLinc client with SSH keys.

Installing the ThinLinc native client

The native ThinLinc client will offer the best experience especially over off-campus connections and is the recommended method for using ThinLinc. It is compatible with Windows, Mac OS X, and Linux.

  • Download the ThinLinc client from the ThinLinc website.
  • Start the ThinLinc client on your computer.
  • In the client's login window, enter desktop.anvil.rcac.purdue.edu as the Server and your Anvil username x-anvilusername as the Username.
  • At this time, an SSH key is required to log in through the ThinLinc client. For help generating and uploading keys to the cluster, see the SSH Keys section of this user guide for details.

Configure ThinLinc to use SSH Keys

To set up SSH key authentication on the ThinLinc client:

  1. Open the Options panel, and select Public key as your authentication method on the Security tab. The "Options" button in the ThinLinc Client can be found towards the bottom left, above the "Connect" button.

  2. In the options dialog, switch to the "Security" tab and select the "Public key" radio button. The "Security" tab found in the options dialog, will be the last of the available tabs. The "Public key" option can be found in the "Authentication method" options group.

  3. Click OK to return to the ThinLinc Client login window. You should now see a Key field in place of the Password field.
  4. In the Key field, type the path to your locally stored private key or click the ... button to locate and select the key on your local system. Note: If PuTTY was used to generate the SSH key pair, please choose the private key in the OpenSSH format. The ThinLinc Client login window will now display a Key field instead of a Password field.

  5. Click the Connect button.
  6. Continue to the following section on connecting to Anvil from ThinLinc.

Connecting to Anvil from ThinLinc

  • Once logged in, you will be presented with a remote Linux desktop running directly on a cluster login node.

  • Open the terminal application on the remote desktop.

  • Once logged in to the Anvil login node, you may use graphical editors, debuggers, software like Matlab, or run graphical interactive jobs. For example, to test the X forwarding connection, issue the following command to launch the graphical editor gedit: $ gedit

  • This session will remain persistent even if you disconnect from the session. Any interactive jobs or applications you left running will continue running even if you are not connected to the session.

Tips for using ThinLinc native client

  • To exit a full-screen ThinLinc session press the F8 key on your keyboard (fn + F8 key for Mac users) and click to disconnect or exit full-screen.

  • Full-screen mode can be disabled when connecting to a session by clicking the Options button and disabling full-screen mode from the Screen tab.

General Overview

To connect to Anvil using SSH keys, you must follow three high-level steps:

  1. Generate a key pair consisting of a private and a public key on your local machine.
  2. Copy the public key to the cluster and append it to $HOME/.ssh/authorized_keys file in your account.
  3. Test if you can ssh from your local computer to the cluster without using XSEDE's Single Sign On (SSO) login hub.

Detailed steps for different operating systems and specific SSH client software are given below.

Mac and Linux

  1. The first time you log in to Anvil, please log in with your XSEDE username and password through the XSEDE Single Sign-On (SSO) hub.
    localhost$ ssh -l my-xsede-portal-username login.xsede.org
        login as: my-xsede-portal-username 
        Using keyboard-interactive authentication.
        Please login to this system using your XSEDE username and password:
        Duo two-factor login for my-xsede-portal-username 
    
    Enter a passcode or select one of the following options:
    
     1. Duo Push to XXX-XXX-XXXX
     2. Phone call to XXX-XXX-XXXX
    
    Passcode or option (1-2): 1
    Success. Logging you in...
    #  Welcome to the XSEDE Single Sign-On (SSO) Hub!
    #  ...
    [my-xsede-portal-username@ssohub ~]$ gsissh anvil
    ======================================
    ==                    Welcome to the Anvil Cluster                         ==
    ...       
    
    x-anvilusername@login01:~ $ pwd # show your current directory
    /home/x-anvilusername
  2. Run ssh-keygen in a terminal on your local machine.
        localhost >$ ssh-keygen
        Generating public/private rsa key pair.
        Enter file in which to save the key (localhost/.ssh/id_rsa):
        
    You may supply a filename and a passphrase for protecting your private key, but it is not mandatory. To accept the default settings, press Enter without specifying a filename. Note: If you do not protect your private key with a passphrase, anyone with access to your computer could SSH to your account on Anvil.
        Created directory 'localhost/.ssh'.
        Enter passphrase (empty for no passphrase):
        Enter same passphrase again:
        Your identification has been saved in localhost/.ssh/id_rsa.
        Your public key has been saved in localhost/.ssh/id_rsa.pub.
        The key fingerprint is:
        ... 
        The key's randomart image is:
        ...
        
    By default, the key files will be stored in ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub on your local machine.
  3. Go to the ~/.ssh folder in your local machine and cat the key information in the id_rsa.pub file.
        localhost/.ssh>$ cat id_rsa.pub
        ssh-rsa ... localhost-username@localhost
        
    Then, back in your home directory on Anvil, create the directory with mkdir -p ~/.ssh if it does not exist. Create a file ~/.ssh/authorized_keys on the Anvil cluster and copy the contents of the public key id_rsa.pub from your local machine into ~/.ssh/authorized_keys.
        x-anvilusername@login01:~ $ cd ~/.ssh
        x-anvilusername@login01:~/.ssh $ vi authorized_keys
        # Copy and paste the contents of the public key id_rsa.pub from your local machine here, then save the authorized_keys file. You are all set!
    
  4. Test the new key by SSH-ing to the server. The login should now complete without asking for a password.
        localhost>$ ssh x-anvilusername@anvil.rcac.purdue.edu
        =============================================================================
        ==                    Welcome to the Anvil Cluster                         ==
        ...
        =============================================================================
        x-anvilusername@login06:~ $
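
    Optionally, you can save these connection details in your local OpenSSH client configuration so that a short alias works for future logins. This is a convenience sketch, not an Anvil requirement; the alias anvil and the key path are placeholders you can change. Add a section like the following to ~/.ssh/config on your local machine (create the file if it does not exist):

        # Hypothetical shortcut for Anvil; adjust the username and key path as needed.
        Host anvil
            HostName anvil.rcac.purdue.edu
            User x-anvilusername
            IdentityFile ~/.ssh/id_rsa

    After saving the file, the alias can be used directly:

        localhost>$ ssh anvil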
        
Windows SSH Instructions
Programs Instructions
MobaXterm Open a local terminal and follow Linux steps
Git Bash Follow Linux steps
Windows 10 PowerShell Follow Linux steps
Windows 10 Subsystem for Linux Follow Linux steps
PuTTY Follow steps below

PuTTY

  1. Launch PuTTYgen, keep the default key type (RSA) and length (2048-bits) and click Generate button.

    The "Generate" button can be found under the "Actions" section of the PuTTY Key Generator interface.

  2. Once the key pair is generated:

    Use the Save public key button to save the public key, e.g. Documents\Keys\public_key.pub.

    Use the Save private key button to save the private key, e.g. Documents\Keys\private_key.ppk. When saving the private key, you can also choose a reminder comment, as well as an optional passphrase to protect your key, as shown below. Note: If you do not protect your private key with a passphrase, anyone with access to your computer could SSH to your account on Anvil.

    The PuTTY Key Generator form has inputs for the Key passphrase and optional reminder comment.

    From the PuTTYgen menu, use the "Conversions -> Export OpenSSH key" tool to convert the private key into OpenSSH format, e.g. Documents\SSH_Keys\mylaptop_private_key.openssh, to be used later for ThinLinc.

  3. Configure PuTTY to use key-based authentication: Launch PuTTY and navigate to "Connection -> SSH ->Auth" on the left panel, click Browse button under the "Authentication parameters" section and choose your private key, e.g. mylaptop_private_key.ppk.

    After clicking Connection -> SSH ->Auth panel, the "Browse" option can be found at the bottom of the resulting panel.

    Navigate back to "Session" on the left panel. Highlight "Default Settings" and click the "Save" button to ensure the change is in place.
  4. The first time you log in to Anvil, please log in with your XSEDE username and password through the XSEDE Single Sign-On (SSO) hub. Then go back to your home directory on Anvil and make the directory with mkdir -p ~/.ssh if it does not exist.

    Create a file ~/.ssh/authorized_keys on the Anvil cluster, copy the contents of the public key from PuTTYgen as shown below, and paste them into ~/.ssh/authorized_keys. Please double-check that your text editor did not wrap or fold the pasted value (it should be one very long line).

    The "Public key" will look like a long string of random letters and numbers in a text box at the top of the window.

  5. Test by connecting to the cluster and the login should now complete without asking for a password. If you chose to protect your private key with a passphrase in step 2, you will be prompted to enter the passphrase when connecting.

Open OnDemand

Open OnDemand is an open-source HPC portal developed by the Ohio Supercomputer Center. Open OnDemand allows one to interact with HPC resources through a web browser and easily manage files, submit jobs, and interact with graphical applications directly in a browser, all with no software to install. Anvil provides an instance of OnDemand for its users.

Logging In

To log into the Anvil OnDemand portal:

  • Navigate to Anvil OnDemand
  • Log in using your XSEDE portal username and password

The Anvil team continues to refine the user interface; please reach out to us with any questions regarding the use of OnDemand.

Check Allocation Usage

To keep track of the usage of the allocation by your project team, you can use mybalance:
    x-anvilusername@login01:~ $ mybalance
    
    Allocation          Type  SU Limit   SU Usage  SU Usage  SU Balance
    Account                             (account)    (user)
    ===============  =======  ========  ========= =========  ==========
    xxxxxxxxx           CPU    1000.0       95.7       0.0       904.3
    

You can also check the allocation usage through XSEDE User Portal.

System Architecture

Model: 3rd Gen AMD EPYC™ CPUs (AMD EPYC 7763)
Number of nodes: 1000
Sockets per node: 2
Cores per socket: 64
Cores per node: 128
Hardware threads per core: 1
Hardware threads per node: 128
Clock rate: 2.45GHz (3.5GHz max boost)
RAM: Regular compute node: 256 GB DDR4-3200
Large memory node: (32 nodes with 1TB DDR4-3200)
Cache: L1d cache: 32K/core
L1i cache: 32K/core
L2 cache: 512K/core
L3 cache: 32768K/CCD
Local storage: 240GB local disk
Login Nodes
Number of Nodes  Processors per Node         Cores per Node  Memory per Node
8                3rd Gen AMD EPYC™ 7543 CPU  32              512 GB

Sub-Clusters
Sub-Cluster  Number of Nodes  Processors per Node                                        Cores per Node  Memory per Node
B            32               Two 3rd Gen AMD EPYC™ 7763 CPUs                            128             1 TB
C            16               Two 3rd Gen AMD EPYC™ 7763 CPUs + Four NVIDIA A100 GPUs    128             512 GB

Network

All nodes, as well as the scratch storage system, are interconnected by an oversubscribed (3:1 fat tree) HDR InfiniBand fabric. The nominal per-node bandwidth is 100 Gbps, with message latency as low as 0.90 microseconds. The fabric is implemented as a two-stage fat tree: nodes are directly connected to Mellanox QM8790 switches, with 60 HDR100 links down to nodes and 10 links up to spine switches.

Running Jobs

Users familiar with the Linux command line may use standard job submission utilities to manage and run jobs on the Anvil compute nodes.

For GPU jobs, make sure to use the --gres=gpu option (not --gpu or -G) for single-node GPU jobs, and the --gpus-per-node option (not --gres=gpu or --gpu) for multi-node GPU jobs; otherwise, your job may not run properly.

Accessing the Compute Nodes

Anvil uses the Slurm Workload Manager for job scheduling and management. With Slurm, a user requests resources and submits a job to a queue. The system takes jobs from queues, allocates the necessary compute nodes, and executes them. While users will typically SSH to an Anvil login node to access the Slurm job scheduler, they should note that Slurm should always be used to submit their work as a job rather than run computationally intensive jobs directly on a login node. All users share the login nodes, and running anything but the smallest test job will negatively impact everyone's ability to use Anvil.

Anvil is designed to serve the moderate-scale computation and data needs of the majority of XSEDE users. Users with allocations can submit to a variety of queues with varying job size and walltime limits. Separate sets of queues are utilized for the CPU, GPU, and large memory nodes. Typically, queues with shorter walltime and smaller job size limits will feature faster turnarounds. Some additional points to be aware of regarding the Anvil queues are:

  • Anvil provides a debug queue for testing and debugging codes.
  • Anvil supports shared-node jobs (more than one job on a single node). Many applications are serial or can only scale to a few cores. Allowing shared nodes improves job throughput, provides higher overall system utilization, and allows more users to run on Anvil.
  • Anvil supports long-running jobs - run times can be extended to four days for jobs using up to 16 full nodes.
  • The maximum allowable job size on Anvil is 7,168 cores. To run larger jobs, submit a consulting ticket to discuss with Anvil support.
  • Shared-node queues will be utilized for managing jobs on the GPU and large memory nodes.

Job Accounting

The charge unit for Anvil is the Service Unit (SU). One SU corresponds to the equivalent use of one compute core, utilizing less than or equal to approximately 2 GB of memory, for one hour, or one GPU for one hour. Keep in mind that your charges are based on the resources that are tied up by your job and do not necessarily reflect how the resources are used. Charges on jobs submitted to the shared queues are based on the number of cores or the fraction of the memory requested, whichever is larger. Jobs submitted as node-exclusive will be charged for all 128 cores, whether the resources are used or not. Jobs submitted to the large memory nodes will be charged 4 SU per compute core (4x the standard node charge). The minimum charge for any job is 1 SU. Filesystem storage is not charged.
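
As a rough illustration of the shared-queue rule above, here is a small sketch. The request sizes are hypothetical, and it assumes the approximately 2 GB-of-memory-per-core equivalence described above:

# Hypothetical shared-queue request: 4 cores and 100 GB of memory for 10 hours.
cores=4
mem_gb=100
hours=10
core_equiv=$(( mem_gb / 2 ))                                     # memory expressed as ~2 GB/core equivalents -> 50
charge=$(( (cores > core_equiv ? cores : core_equiv) * hours ))  # larger of cores vs. memory equivalent, times hours
echo "Estimated charge: ${charge} SU"                            # max(4, 50) cores x 10 hours = 500 SU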

Queues

Anvil provides different queues with varying job size and walltime. There are also limits on the number of jobs queued and running on a per allocation and queue basis. Queues and limits are subject to change based on the evaluation from the Early User Program.

Make sure to specify the desired partition when submitting your jobs (e.g. -p standard). If you do not specify one, the job will be directed into the default partition (shared).

If the partition is node-exclusive (i.e. the standard and wide queues), your job will be allocated an entire node even if you ask for only 1 core in your job submission script, and it will not share the node with any other jobs. It will therefore be charged for all 128 cores, and the squeue command will show it as 128 cores, too. See SU accounting above for more details.
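
For example, the generic submission file used later in this guide could be directed to a specific partition like this (a sketch; myallocation and myjobsubmissionfile are placeholders):

login1$ sbatch -p shared -A myallocation --ntasks=4 myjobsubmissionfile     # shared node; charged by the cores/memory actually requested
login1$ sbatch -p standard -A myallocation --nodes=1 myjobsubmissionfile    # node-exclusive; charged for all 128 cores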

Queue Name  Node Type     Max Nodes per Job  Max Cores per Job  Max Duration  Max Running Jobs in Queue  Max Running + Submitted Jobs in Queue  Charging Factor
debug       regular       2 nodes            256 cores          2 hrs         1                          2                                      1
gpu-debug   gpu           1 node             2 gpus             0.5 hrs       1                          2                                      1
standard    regular       16 nodes           2,048 cores        96 hrs        64                         128                                    1
wide        regular       56 nodes           7,168 cores        12 hrs        5                          10                                     1
shared      regular       1 node             128 cores          96 hrs        6400                                                              1
highmem     large-memory  1 node             128 cores          48 hrs        2                          4                                      4
gpu         gpu                                                 48 hrs        8 gpus                                                            1

Useful Tools

  1. To display all Slurm partitions and their current usage, type showpartitions at the command line.
        x-anvilusername@login03.anvil:[~] $ showpartitions
        Partition statistics for cluster anvil at CURRENTTIME
              Partition     #Nodes     #CPU_cores  Cores_pending   Job_Nodes MaxJobTime Cores Mem/Node
              Name State Total  Idle  Total   Idle Resorc  Other   Min   Max  Day-hr:mn /node     (GB)
        standard:*    up   750   684  96000  92160      0   1408     1 infin   infinite   128     257 
            shared    up   250   224  32000  30208      0      0     1 infin   infinite   128     257 
              wide    up   750   684  96000  92160      0      0     1 infin   infinite   128     257 
           highmem    up    32    32   4096   4096      0      0     1 infin   infinite   128    1031 
             debug    up    17     5   2176   2176      0      0     1 infin   infinite   128     257 
               gpu    up    16    10   2048   1308      0    263     1 infin   infinite   128     515 
         gpu-debug    up    16    10   2048   1308      0      0     1 infin   infinite   128     515
        
  2. To show the list of available constraint feature names for different node types, type sfeatures at the command line.
        x-anvilusername@login03.anvil:[~] $ sfeatures
        NODELIST     CPUS   MEMORY    AVAIL_FEATURES   GRES
        a[000-999]   128    257526    A,a              (null)
        b[000-031]   128    1031669   B,b              (null)
        g[000-015]   128    515545    G,g,A100         gpu:4
        

Job Submission Script

To submit work to a Slurm queue, you must first create a job submission file. This job submission file is essentially a simple shell script. It will set any required environment variables, load any necessary modules, create or modify files and directories, and run any applications that you need:

#!/bin/sh -l
# FILENAME:  myjobsubmissionfile

# Loads Matlab and sets the application up
module load matlab

# Change to the directory from which you originally submitted this job.
cd $SLURM_SUBMIT_DIR

# Runs a Matlab script named 'myscript'
matlab -nodisplay -singleCompThread -r myscript


Job Script Environment Variables

The standard Slurm environment variables that can be used in the job submission file are listed in the table below:

Name Description
SLURM_SUBMIT_DIR Absolute path of the current working directory when you submitted this job
SLURM_JOBID Job ID number assigned to this job by the batch system
SLURM_JOB_NAME Job name supplied by the user
SLURM_JOB_NODELIST Names of nodes assigned to this job
SLURM_SUBMIT_HOST Hostname of the system where you submitted this job
SLURM_JOB_PARTITION Name of the original queue to which you submitted this job
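
As a minimal sketch of how these variables might be used inside a job script (the file name showenv.sub is a hypothetical placeholder):

#!/bin/sh -l
# FILENAME:  showenv.sub  (hypothetical example)

# Report a few of the Slurm-provided variables for this job.
echo "Job $SLURM_JOBID ($SLURM_JOB_NAME) submitted from $SLURM_SUBMIT_HOST"
echo "Partition: $SLURM_JOB_PARTITION   Nodes: $SLURM_JOB_NODELIST"

# Return to the directory the job was submitted from before doing any work.
cd $SLURM_SUBMIT_DIR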

Once your script is prepared, you are ready to submit your job.

Submitting a Job

Once you have a job submission file, you may submit this script to Slurm using the sbatch command. Slurm will find, or wait for, available resources matching your request and run your job there.

To submit your job to one compute node with one task:

login1$ sbatch --nodes=1 --ntasks=1 myjobsubmissionfile

By default, each job receives 30 minutes of wall time, or clock time. If you know that your job will not need more than a certain amount of time to run, request less than the maximum wall time, as this may allow your job to run sooner. To request 1 hour and 30 minutes of wall time:

login1$ sbatch -t 1:30:00 --nodes=1  --ntasks=1 myjobsubmissionfile

Each compute node in Anvil has 128 processor cores. In some cases, you may want to request multiple nodes. To utilize multiple nodes, you will need to have a program or code that is specifically programmed to use multiple nodes such as with MPI. Simply requesting more nodes will not make your work go faster. Your code must utilize all the cores to support this ability. To request 2 compute nodes with 256 tasks:

login1$ sbatch --nodes=2 --ntasks=256 myjobsubmissionfile

If more convenient, you may also specify any command line options to sbatch from within your job submission file, using a special form of comment:

#!/bin/sh -l
# FILENAME:  myjobsubmissionfile

#SBATCH -A myallocation
#SBATCH --nodes=1
#SBATCH --ntasks=1 
#SBATCH --time=1:30:00
#SBATCH --job-name myjobname

# Print the hostname of the compute node on which this job is running.
/bin/hostname

If an option is present in both your job submission file and on the command line, the option on the command line will take precedence.
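
For example, submitting the script above with a longer time limit on the command line (a sketch) overrides the #SBATCH --time=1:30:00 line inside the file:

login1$ sbatch -t 3:00:00 myjobsubmissionfile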

After you submit your job with sbatch, it may wait in the queue for minutes, hours, or even days. How long it takes for a job to start depends on the specific queue, the available resources and time requested, and other jobs that are already waiting in that queue. It is impossible to say for sure when any given job will start. For best results, request no more resources than your job requires.

Once your job is submitted, you can monitor the job status, wait for the job to complete, and check the job output.

Checking Job Status

Once a job is submitted, there are several commands you can use to monitor its progress. To see your jobs, use the squeue -u command with your username.
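
For example (x-anvilusername is a placeholder for your own username):

login1$ squeue -u x-anvilusername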

To retrieve useful information about your queued or running job, use the scontrol show job command with your job's ID number.

$ scontrol show job 189
JobId=189 JobName=myjobname
   UserId=myusername GroupId=mygroup MCS_label=N/A
   Priority=103076 Nice=0 Account=myacct QOS=normal
   JobState=RUNNING Reason=None Dependency=(null)
   Requeue=1 Restarts=0 BatchFlag=0 Reboot=0 ExitCode=0:0
   RunTime=00:01:28 TimeLimit=00:30:00 TimeMin=N/A
   SubmitTime=2021-10-04T14:59:52 EligibleTime=2021-10-04T14:59:52
   AccrueTime=Unknown
   StartTime=2021-10-04T14:59:52 EndTime=2021-10-04T15:29:52 Deadline=N/A
   SuspendTime=None SecsPreSuspend=0 LastSchedEval=2021-10-04T14:59:52 Scheduler=Main
   Partition=standard AllocNode:Sid=login05:1202865
   ReqNodeList=(null) ExcNodeList=(null)
   NodeList=a010
   BatchHost=a010
   NumNodes=1 NumCPUs=1 NumTasks=1 CPUs/Task=1 ReqB:S:C:T=0:0:*:*
   TRES=cpu=1,mem=257526M,node=1,billing=1
   Socks/Node=* NtasksPerN:B:S:C=0:0:*:* CoreSpec=*
   MinCPUsNode=1 MinMemoryNode=257526M MinTmpDiskNode=0
   Features=(null) DelayBoot=00:00:00
   OverSubscribe=OK Contiguous=0 Licenses=(null) Network=(null)
   Command=(null)
   WorkDir=/home/myusername/jobdir
   Power=

There are several useful bits of information in this output.

  • JobState lets you know if the job is Pending, Running, Completed, or Held.
  • RunTime and TimeLimit will show how long the job has run and its maximum time.
  • SubmitTime is when the job was submitted to the cluster.
  • The job's number of Nodes, Tasks, Cores (CPUs) and CPUs per Task are shown.
  • WorkDir is the job's working directory.
  • StdOut and Stderr are the locations of stdout and stderr of the job, respectively.
  • Reason will show why a PENDING job isn't running; in the example above the job is already running, so the Reason is None.

Checking Job Output

Once a job is submitted, and has started, it will write its standard output and standard error to files that you can read.

SLURM catches output written to standard output and standard error - what would be printed to your screen if you ran your program interactively. Unless you specified otherwise, SLURM will put the output in the directory where you submitted the job in a file named slurm- followed by the job id, with the extension out. For example slurm-3509.out. Note that both stdout and stderr will be written into the same file, unless you specify otherwise.
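
For instance, to view the output file of the example job number mentioned above once the job has produced output:

$ cat slurm-3509.out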

If your program writes its own output files, those files will be created as defined by the program. This may be in the directory where the program was run, or may be defined in a configuration or input file. You will need to check the documentation for your program for more details.

Redirecting Job Output

It is possible to redirect job output to somewhere other than the default location with the --error and --output directives:

#! /bin/sh -l
#SBATCH --output=/path/myjob.out
#SBATCH --error=/path/myjob.out

# This job prints "Hello World" to output and exits
echo "Hello World"

Holding a Job

Sometimes you may want to submit a job but not have it run just yet. For example, you may want to allow a labmate's jobs to run ahead of yours in the queue: hold your job until their jobs have started, and then release it.

To place a hold on a job before it starts running, use the scontrol hold job command:

$ scontrol hold job myjobid 

Once a job has started running, it cannot be placed on hold.

To release a hold on a job, use the scontrol release job command:

$ scontrol release job myjobid 

Job Dependencies

Dependencies are an automated way of holding and releasing jobs. Jobs with a dependency are held until the condition is satisfied. Once the condition is satisfied, the job becomes eligible to run but must still wait in the queue as normal.

Job dependencies may be configured to ensure jobs start in a specified order. Jobs can be configured to run after other job state changes, such as when the job starts or the job ends.

These examples illustrate setting dependencies in several ways. Typically dependencies are set by capturing and using the job ID from the last job submitted.

To run a job after job myjobid has started:

$ sbatch --dependency=after:myjobid myjobsubmissionfile

To run a job after job myjobid ends without error:

$ sbatch --dependency=afterok:myjobid myjobsubmissionfile

To run a job after job myjobid ends with error:

$ sbatch --dependency=afternotok:myjobid myjobsubmissionfile

To run a job after job myjobid ends with or without error:

$ sbatch --dependency=afterany:myjobid myjobsubmissionfile

To set more complex dependencies on multiple jobs and conditions:

$ sbatch --dependency=after:myjobid1:myjobid2:myjobid3,afterok:myjobid4 myjobsubmissionfile
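
As noted above, dependencies are typically set by capturing the job ID of the previous submission. A minimal sketch, where first_job.sub and second_job.sub are hypothetical submission files:

# --parsable makes sbatch print just the job ID, which can then be reused.
first_jobid=$(sbatch --parsable first_job.sub)
sbatch --dependency=afterok:${first_jobid} second_job.sub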

Canceling a Job

To stop a job before it finishes or remove it from a queue, use the scancel command:

$ scancel myjobid 

Interactive Jobs

In addition to the ThinLinc and OnDemand interfaces, users can also choose to run interactive jobs on compute nodes, to obtain a shell that they can interact with. This gives users the ability to type commands or use a graphical interface as if they were on a login node.

To submit an interactive job, use sinteractive to run a login shell on allocated resources.

sinteractive accepts most of the same resource requests as sbatch, so to request a login shell on compute resources while allocating 2 nodes and 256 total cores, you might do:

login1$ sinteractive -N2 -n256 -A oneofyourallocations
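
Since sinteractive passes through most sbatch-style options, as described above, other requests follow the same pattern. A sketch with placeholder values, assuming these options are forwarded like sbatch's:

login1$ sinteractive -p shared -n 4 -t 1:00:00 -A myallocation        # 4 cores on a shared node for 1 hour
login1$ sinteractive -p gpu --gres=gpu:1 -t 1:00:00 -A myallocation   # one GPU in the gpu partition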

To quit your interactive job:

exit or Ctrl-D

Example Jobs

A number of example jobs are available for you to look over and adapt to your own needs. The first few are generic examples, and the latter ones go into specifics for particular software packages.

Generic SLURM Job

The following examples demonstrate the basics of SLURM jobs, and are designed to cover common job request scenarios. These example jobs will need to be modified to run your application or code.

Serial job in Standard queue

This is an example job submission file for a serial program:

#!/bin/sh -l
# FILENAME:  myjobsubmissionfile

#SBATCH -A myallocation # Allocation name (required if more than 1 available)
#SBATCH --nodes=1       # Total # of nodes (must be 1 for serial job)
#SBATCH --ntasks=1      # Total # of MPI tasks (should be 1 for serial job)
#SBATCH --time=1:30:00  # Total run time limit (hh:mm:ss)
#SBATCH -J myjobname    # Job name
#SBATCH -o myjob.o%j    # Name of stdout output file
#SBATCH -e myjob.e%j    # Name of stderr error file
#SBATCH -p standard     # Queue (partition) name
#SBATCH --mail-user=useremailaddress
#SBATCH --mail-type=all # Send email to above address at begin and end of job
# Manage processing environment, load compilers and applications.
module purge
module load compilername
module load applicationname
module list

# Launch serial code
./myexecutablefiles

MPI job in Standard queue

An MPI job is a set of processes that take advantage of multiple compute nodes by communicating with each other. OpenMPI, Intel MPI (IMPI) and MVAPICH2 are implementations of the MPI standard.

This is an example job submission file for an MPI program:

#!/bin/sh -l
# FILENAME:  myjobsubmissionfile

#SBATCH -A myallocation # Allocation name (required if more than 1 available)
#SBATCH --nodes=2       # Total # of nodes 
#SBATCH --ntasks=256    # Total # of MPI tasks
#SBATCH --time=1:30:00  # Total run time limit (hh:mm:ss)
#SBATCH -J myjobname    # Job name
#SBATCH -o myjob.o%j    # Name of stdout output file
#SBATCH -e myjob.e%j    # Name of stderr error file
#SBATCH -p standard     # Queue (partition) name
#SBATCH --mail-user=useremailaddress
#SBATCH --mail-type=all # Send email to above address at begin and end of job
# Manage processing environment, load compilers and applications.
module purge
module load compilername
module load mpilibrary
module load applicationname
module list

# Launch MPI code
mpirun -np $SLURM_NTASKS ./myexecutablefiles

SLURM can run an MPI program with the srun command. The number of processes is requested with the -n option. If you do not specify the -n option, it will default to the total number of processor cores you request from SLURM.

If the code is built with OpenMPI, it can be run with a simple srun -n command. If it is built with Intel IMPI, then you also need to add the --mpi=pmi2 option: srun --mpi=pmi2 -n 256 ./mycode.exe in this example.

Invoking an MPI program on Anvil with ./myexecutablefiles is typically wrong, since this will use only one MPI process and defeat the purpose of using MPI. Unless that is what you want (which is rarely the case), you should use srun, the Slurm analog of mpirun or mpiexec, or use mpirun or mpiexec itself to invoke the MPI program.
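
Inside a job script, the srun alternative described above would look like this (a sketch using the same placeholder executable name as the examples above):

# Launch MPI code with srun (OpenMPI build)
srun -n $SLURM_NTASKS ./myexecutablefiles

# Launch MPI code with srun (Intel IMPI build)
srun --mpi=pmi2 -n $SLURM_NTASKS ./myexecutablefiles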

OpenMP job in Standard queue

A shared-memory job is a single process that takes advantage of a multi-core processor and its shared memory to achieve parallelization.

When running OpenMP programs, all threads must be on the same compute node to take advantage of shared memory. The threads cannot communicate between nodes.

To run an OpenMP program, set the environment variable OMP_NUM_THREADS to the desired number of threads. This should almost always be equal to the number of cores on a compute node. You may want to set it to another appropriate value if you are running several processes in parallel in a single job or node.

This example shows how to submit an OpenMP program. This job asks for 2 tasks, each with 64 OpenMP threads, for a total of 128 CPU cores:

#!/bin/sh -l
# FILENAME:  myjobsubmissionfile

#SBATCH -A myallocation # Allocation name (required if more than 1 available)
#SBATCH --nodes=1       # Total # of nodes (must be 1 for OpenMP job)
#SBATCH --ntasks-per-node=2      # Total # of MPI tasks per node
#SBATCH --cpus-per-task=64       # cpu-cores per task (default value is 1, >1 for multi-threaded tasks)
#SBATCH --time=1:30:00  # Total run time limit (hh:mm:ss)
#SBATCH -J myjobname    # Job name
#SBATCH -o myjob.o%j    # Name of stdout output file
#SBATCH -e myjob.e%j    # Name of stderr error file
#SBATCH -p standard     # Queue (partition) name
#SBATCH --mail-user=useremailaddress
#SBATCH --mail-type=all # Send email to above address at begin and end of job
# Manage processing environment, load compilers and applications.
module purge
module load compilername
module load applicationname
module list
# Set thread count (default value is 1).
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# Launch OpenMP code
./myexecutablefiles

The product of ntasks and cpus-per-task should be equal to or less than the total number of CPU cores on a node.

If an OpenMP program uses a lot of memory and 128 threads use all of the memory of the compute node, use fewer processor cores (OpenMP threads) on that compute node.

Hybrid job in Standard queue

A hybrid program combines both MPI and shared-memory to take advantage of compute clusters with multi-core compute nodes. Libraries for OpenMPI, Intel MPI (IMPI) and MVAPICH2 and compilers which include OpenMP for C, C++, and Fortran are available.

This example shows how to submit a hybrid program. This job asks for 4 MPI tasks (2 MPI tasks per node), each with 64 OpenMP threads, for a total of 256 CPU cores:

#!/bin/sh -l
# FILENAME:  myjobsubmissionfile

#SBATCH -A myallocation       # Allocation name (required if more than 1 available)
#SBATCH --nodes=2             # Total # of nodes 
#SBATCH --ntasks-per-node=2 # Total # of MPI tasks per node
#SBATCH --cpus-per-task=64    # cpu-cores per task (default value is 1, >1 for multi-threaded tasks)
#SBATCH --time=1:30:00        # Total run time limit (hh:mm:ss)
#SBATCH -J myjobname          # Job name
#SBATCH -o myjob.o%j          # Name of stdout output file
#SBATCH -e myjob.e%j          # Name of stderr error file
#SBATCH -p standard           # Queue (partition) name
#SBATCH --mail-user=useremailaddress
#SBATCH --mail-type=all       # Send email at begin and end of job
# Manage processing environment, load compilers and applications.
module purge
module load compilername
module load mpilibrary
module load applicationname
module list

# Set thread count (default value is 1).
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# Launch MPI code
mpirun -np $SLURM_NTASKS ./myexecutablefiles

The ntasks times the cpus-per-task should be equal to or less than the total number of CPU cores on a node.

GPU job in GPU queue

The Anvil cluster nodes contain GPUs that support CUDA and OpenCL. See the detailed hardware overview above for the specifics of the GPUs in Anvil, or use the sfeatures command to view them.

How to use SLURM to submit a SINGLE-node GPU program:

#!/bin/sh -l
# FILENAME:  myjobsubmissionfile

#SBATCH -A myallocation       # allocation name (required if more than 1 available)
#SBATCH --nodes=1             # Total # of nodes 
#SBATCH --ntasks-per-node=1   # Number of MPI ranks per node (one rank per GPU)
#SBATCH --gres=gpu:1          # Number of GPUs per node
#SBATCH --time=1:30:00        # Total run time limit (hh:mm:ss)
#SBATCH -J myjobname          # Job name
#SBATCH -o myjob.o%j          # Name of stdout output file
#SBATCH -e myjob.e%j          # Name of stderr error file
#SBATCH -p gpu                # Queue (partition) name
#SBATCH --mail-user=useremailaddress
#SBATCH --mail-type=all       # Send email to above address at begin and end of job
# Manage processing environment, load compilers and applications.
module purge
module load modtree/gpu
module load applicationname
module list

# Launch GPU code
./myexecutablefiles

Make sure to use the --gres=gpu option instead of --gpu or -G; otherwise, your job may not run properly.

How to use SLURM to submit a MULTI-node GPU program:

#!/bin/sh -l
# FILENAME:  myjobsubmissionfile

#SBATCH -A myallocation       # allocation name
#SBATCH --nodes=2             # Total # of nodes 
#SBATCH --ntasks-per-node=4   # Number of MPI ranks per node (one rank per GPU)
#SBATCH --gpus-per-node=4          # Number of GPUs per node
#SBATCH --time=1:30:00        # Total run time limit (hh:mm:ss)
#SBATCH -J myjobname          # Job name
#SBATCH -o myjob.o%j          # Name of stdout output file
#SBATCH -e myjob.e%j          # Name of stderr error file
#SBATCH -p gpu                # Queue (partition) name
#SBATCH --mail-user=useremailaddress
#SBATCH --mail-type=all       # Send email to above address at begin and end of job

# Manage processing environment, load compilers, and applications.
module purge
module load modtree/gpu
module load applicationname
module list

# Launch GPU code
mpirun -np $SLURM_NTASKS ./myexecutablefiles

Make sure to use the --gpus-per-node option instead of --gres=gpu or --gpu for multi-node GPU jobs; otherwise, your job may not run properly.

NGC GPU container job in GPU queue

What is NGC?

Nvidia GPU Cloud (NGC) is a GPU-accelerated cloud platform optimized for deep learning and scientific computing. NGC offers a comprehensive catalogue of GPU-accelerated containers, so applications run quickly and reliably in high performance computing environments. Purdue Research Computing deployed NGC to extend the cluster's capabilities, enable powerful software, and deliver faster results. By utilizing Singularity and NGC, users can focus on building lean models, producing optimal solutions, and gathering faster insights. For more information, please visit NVIDIA GPU Cloud and the NGC software catalog.

Getting Started

Users can download containers from the NGC software catalog and run them directly using Singularity instructions from the corresponding container's catalog page.

In addition, Anvil provides a subset of pre-downloaded NGC containers wrapped into convenient software modules. These modules wrap underlying complexity and provide the same commands that are expected from non-containerized versions of each application.

On Anvil, type the commands below to see the list of NGC containers we have deployed.

$ module load modtree/gpu
$ module load ngc 
$ module avail

Once module ngc is loaded, you can run your code as with normal non-containerized applications. This section illustrates how to use SLURM to submit a job with a containerized NGC program.

#!/bin/sh -l
# FILENAME:  myjobsubmissionfile

#SBATCH -A myallocation       # allocation name (required if more than 1 available)
#SBATCH --nodes=1             # Total # of nodes 
#SBATCH --ntasks-per-node=1   # Number of MPI ranks per node (one rank per GPU)
#SBATCH --gres=gpu:1          # Number of GPUs per node
#SBATCH --time=1:30:00        # Total run time limit (hh:mm:ss)
#SBATCH -J myjobname          # Job name
#SBATCH -o myjob.o%j          # Name of stdout output file
#SBATCH -e myjob.e%j          # Name of stderr error file
#SBATCH -p gpu                # Queue (partition) name
#SBATCH --mail-user=useremailaddress
#SBATCH --mail-type=all       # Send email to above address at begin and end of job
# Manage processing environment, load compilers and applications.
module purge
module load modtree/gpu
module load ngc
module load applicationname
module list

# Launch GPU code 
./myexecutablefiles

BioContainers Collection

What is BioContainers?

The BioContainers project came from the idea of using container-based technologies, such as Docker or rkt, for bioinformatics software. Having a common and controllable environment for running software can help deal with some of the current problems in software development and distribution. BioContainers is a community-driven project that provides the infrastructure and basic guidelines to create, manage, and distribute bioinformatics containers, with a special focus on omics fields such as proteomics, genomics, transcriptomics, and metabolomics. For more information, please visit the BioContainers project.

Getting Started

Users can download bioinformatic containers from the BioContainers project and run them directly using Singularity instructions from the corresponding container's catalog page.

A detailed Singularity user guide is available at: sylabs.io/guides/3.8/user-guide

In addition, the Anvil team provides a subset of pre-downloaded biocontainers wrapped into convenient software modules. These modules wrap underlying complexity and provide the same commands that are expected from non-containerized versions of each application.

On Anvil, type the commands below to see the list of biocontainers we have deployed:

$ module purge
$ module load modtree/cpu
$ module load biocontainers
$ module avail

Once the biocontainers module is loaded, you can run your code as with normal non-containerized applications. This section illustrates how to use SLURM to submit a job with a BioContainers program.

#!/bin/bash
# FILENAME:  myjobsubmissionfile

#SBATCH -A myallocation       # allocation name
#SBATCH --nodes=1             # Total # of nodes 
#SBATCH --ntasks-per-node=1   # Number of MPI ranks per node 
#SBATCH --time=1:30:00        # Total run time limit (hh:mm:ss)
#SBATCH -J myjobname          # Job name
#SBATCH -o myjob.o%j          # Name of stdout output file
#SBATCH -e myjob.e%j          # Name of stderr error file
#SBATCH -p standard                # Queue (partition) name
#SBATCH --mail-user=useremailaddress
#SBATCH --mail-type=all       # Send email to above address at begin and end of job 

# Manage processing environment, load compilers, container, and applications.
module purge
module load modtree/cpu
module load biocontainers
module load applicationname
module list

# Launch code
./myexecutablefiles

Specific Applications

The following examples demonstrate job submission files for some common real-world applications.

See the Generic SLURM Examples section for more examples on job submissions that can be adapted for use.

Python

Python is a high-level, general-purpose, interpreted, dynamic programming language. We suggest using Anaconda which is a Python distribution made for large-scale data processing, predictive analytics, and scientific computing. For example, to use the default Anaconda distribution:

$ module load anaconda

For a full list of available Anaconda and Python modules enter:

$ module spider anaconda

Example Python Jobs

This section illustrates how to submit a small Python job to a Slurm queue.

Example 1: Hello world

Prepare a Python input file with an appropriate filename, here named hello.py:

# FILENAME:  hello.py

print("Hello, world!")

Prepare a job submission file with an appropriate filename, here named myjob.sub:

#!/bin/bash
# FILENAME:  myjob.sub

module load anaconda
python hello.py

Submit the job as described in the Submitting a Job section above. Once the job completes, the standard output file will contain:

Hello, world!

Example 2: Matrix multiply

Save the following script as matrix.py:

# Matrix multiplication program

x = [[3,1,4],[1,5,9],[2,6,5]]
y = [[3,5,8,9],[7,9,3,2],[3,8,4,6]]

result = [[sum(a*b for a,b in zip(x_row,y_col)) for y_col in zip(*y)] for x_row in x]

for r in result:
        print(r)

Change the last line in the job submission file above to read:

python matrix.py

The standard output file from this job will result in the following matrix:

[28, 56, 43, 53]
[65, 122, 59, 73]
[63, 104, 54, 60]

Example 3: Sine wave plot using numpy and matplotlib packages

Save the following script as sine.py:
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pylab as plt

x = np.linspace(-np.pi, np.pi, 201)
plt.plot(x, np.sin(x))
plt.xlabel('Angle [rad]')
plt.ylabel('sin(x)')
plt.axis('tight')
plt.savefig('sine.png')

Change your job submission file to submit this script; the job will produce a png file (sine.png) and empty standard output and error files.

For more information about Python:

  • The Python Programming Language - Official Website
  • Anaconda Python Distribution - Official Website
  • Conda User Guide

Installing Packages

We recommend installing Python packages in an Anaconda environment. One key advantage of Anaconda is that it allows users to install unrelated packages in separate self-contained environments. Individual packages can later be reinstalled or updated without impacting others.

To facilitate the process of creating and using Conda environments, we support a script (conda-env-mod) that generates a module file for an environment, as well as an optional Jupyter kernel to use this environment in Jupyter notebooks.

You must load one of the anaconda modules in order to use this script.

$ module load anaconda/2021.05-py38

Step-by-step instructions for installing custom Python packages are presented below.

  1. Create a conda environment

    Users can use the conda-env-mod script to create an empty conda environment. This script needs either a name or a path for the desired environment. After the environment is created, it generates a module file for using it in future. Please note that conda-env-mod is different from the official conda-env script and supports a limited set of subcommands. Detailed instructions for using conda-env-mod can be found with the command conda-env-mod --help.

    Example 1: Create a conda environment named mypackages in user's home directory.

    $ conda-env-mod create -n mypackages -y

    Including the -y option lets you skip the confirmation prompt during environment creation.

    Example 2: Create a conda environment named mypackages at a custom location.

    $ conda-env-mod create -p $PROJECT/apps/mypackages -y

    Please follow the on-screen instructions while the environment is being created. After finishing, the script will print the instructions to use this environment.

    ... ... ...
    Preparing transaction: ...working... done
    Verifying transaction: ...working... done
    Executing transaction: ...working... done
    +---------------------------------------------------------------+
    | To use this environment, load the following modules:          |
    |     module use $HOME/privatemodules                           |
    |     module load conda-env/mypackages-py3.8.8                  |
    | (then standard 'conda install' / 'pip install' / run scripts) |
    +---------------------------------------------------------------+
    Your environment "mypackages" was created successfully.

    Note down the module names, as you will need to load these modules every time you want to use this environment. You may also want to add the module load lines in your jobscript, if it depends on custom Python packages.

    By default, module files are generated in your $HOME/privatemodules directory. The location of module files can be customized by specifying the -m /path/to/modules option.

    Example 3: Create a conda environment named labpackages in your group's $PROJECT folder and place the module file at a shared location for the group to use.

    $ conda-env-mod create -p $PROJECT/apps/mypackages/labpackages -m $PROJECT/etc/modules
    ... ... ...
    Preparing transaction: ...working... done
    Verifying transaction: ...working... done
    Executing transaction: ...working... done
    +----------------------------------------------------------------+
    | To use this environment, load the following modules:           |
    |     module use /anvil/projects/x-mylab/etc/modules             |
    |     module load conda-env/labpackages-py3.8.8                  |
    | (then standard 'conda install' / 'pip install' / run scripts)  |
    +----------------------------------------------------------------+
    Your environment "labpackages" was created successfully.

    If you used a custom module file location, you need to run the module use command as printed by the script.

    By default, only the environment and a module file are created (no Jupyter kernel). If you plan to use your environment in a Jupyter notebook, you need to append a --jupyter flag to the above commands.

    Example 4: Create a Jupyter-enabled conda environment named labpackages in your group's $PROJECT folder and place the module file at a shared location for the group to use.

    $ conda-env-mod create -p $PROJECT/apps/mypackages/labpackages -m $PROJECT/etc/modules --jupyter
    ... ... ...
    Jupyter kernel created: "Python (My labpackages Kernel)"
    ... ... ...
    Your environment "labpackages" was created successfully.
  2. Load the conda environment

    The following instructions assume that you have used conda-env-mod to create an environment named mypackages (Examples 1 or 2 above). If you used conda create instead, please use conda activate mypackages.

    $ module use $HOME/privatemodules
    $ module load conda-env/mypackages-py3.8.8

    Note that the conda-env module name includes the Python version that it supports (Python 3.8.8 in this example). This is the same as the Python version in the anaconda module.

    If you used a custom module file location (Example 3 above), please use module use to load the conda-env module.

    $ module use /anvil/projects/x-mylab/etc/modules   
    $ module load conda-env/mypackages-py3.8.8
  3. Install packages

    Now you can install custom packages in the environment using either conda install or pip install.

    Installing with conda

    Example 1: Install OpenCV (open-source computer vision library) using conda.

    $ conda install opencv

    Example 2: Install a specific version of OpenCV using conda.

    $ conda install opencv=3.1.0

    Example 3: Install OpenCV from a specific anaconda channel.

    $ conda install -c anaconda opencv

    Installing with pip

    Example 4: Install mpi4py using pip.

    $ pip install mpi4py

    Example 5: Install a specific version of mpi4py using pip.

    $ pip install mpi4py==3.0.3

    Follow the on-screen instructions while the packages are being installed. If installation is successful, please proceed to the next section to test the packages.

    Note: Do NOT run Pip with the --user argument, as that will install packages in a different location.

  4. Test the installed packages

    To use the installed Python packages, you must load the module for your conda environment. If you have not loaded the conda-env module, please do so following the instructions at the end of Step 1.

    $ module use $HOME/privatemodules
    $ module load conda-env/mypackages-py3.8.8

    Example 1: Test that OpenCV is available.

    $ python -c "import cv2; print(cv2.__version__)"

    Example 2: Test that mpi4py is available.

    $ python -c "import mpi4py; print(mpi4py.__version__)"

    If the commands finish without errors, then the installed packages can be used in your program.

Additional capabilities of conda-env-mod

The conda-env-mod tool is intended to facilitate the creation of a minimal Anaconda environment, a matching module file, and optionally a Jupyter kernel. Once created, the environment can be accessed via the familiar module load command, and tuned and expanded as necessary. Additionally, the script provides several auxiliary functions to help manage environments, module files, and Jupyter kernels.

General usage for the tool adheres to the following pattern:

$ conda-env-mod help
$ conda-env-mod <subcommand> <required argument> [optional arguments]

where required arguments are one of

  • -n|--name ENV_NAME (name of the environment)
  • -p|--prefix ENV_PATH (location of the environment)

and optional arguments further modify behavior for specific actions (e.g. -m to specify alternative location for generated module file).

Given a required name or prefix for an environment, the conda-env-mod script supports the following subcommands:

  • create - to create a new environment, its corresponding module file and optional Jupyter kernel.
  • delete - to delete existing environment along with its module file and Jupyter kernel.
  • module - to generate just the module file for a given existing environment.
  • kernel - to generate just the Jupyter kernel for a given existing environment (note that the environment has to be created with a --jupyter option).
  • help - to display script usage help.

Using these subcommands, you can iteratively fine-tune your environments, module files and Jupyter kernels, as well as delete and recreate them with ease. Below we cover several commonly occurring scenarios.

Generating module file for an existing environment

If you already have an existing, configured Anaconda environment and want to generate a module file for it, follow the appropriate examples from Step 1 above, but use the module subcommand instead of create. For example:

$ conda-env-mod module -n mypackages

and follow the printed instructions on how to load this module. With the optional --jupyter flag, a Jupyter kernel will also be generated.

Note that if you intend to proceed with Jupyter kernel generation (via the --jupyter flag or the kernel subcommand later), you will have to ensure that your environment has the ipython and ipykernel packages installed. To avoid this and other related complications, we highly recommend making a fresh environment using a suitable conda-env-mod create .... --jupyter command instead.

Generating Jupyter kernel for an existing environment

If you already have an existing, configured Anaconda environment and want to generate a Jupyter kernel for it, you can use the kernel subcommand. For example:

$ conda-env-mod kernel -n mypackages

This will add a "Python (My mypackages Kernel)" item to the dropdown list of available kernels the next time you use Jupyter.

Note that generated Jupyter kernels are always personal (i.e. each user has to make their own, even for shared environments). Note also that you (or the creator of the shared environment) will have to ensure that your environment has the ipython and ipykernel packages installed.
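
If your environment is missing these, a minimal sketch of adding them with conda (with the environment's conda-env module loaded as in Step 2; the package names are the ones mentioned above):

$ conda install ipython ipykernel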

Singularity

Note: Singularity was originally a project out of Lawrence Berkeley National Laboratory. It has now been spun off into a distinct offering under a new corporate entity under the name Sylabs Inc. This guide pertains to the open source community edition, SingularityCE.

What is Singularity?

Singularity is a powerful tool allowing the portability and reproducibility of operating system and application environments through the use of Linux containers. It gives users complete control over their environment.

Singularity is like Docker but tuned explicitly for HPC clusters. More information is available from the project's website.

Features

  • Run the latest applications on an Ubuntu or CentOS userland
  • Gain access to the latest developer tools
  • Launch MPI programs easily
  • Much more

Singularity's user guide is available at: sylabs.io/guides/3.8/user-guide

Example

Here is an example of downloading a pre-built Docker container image, converting it into Singularity format and running it on Anvil:

$ singularity pull docker://sylabsio/lolcow:latest
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
[....]
INFO:    Creating SIF file...
 
$ singularity exec lolcow_latest.sif cowsay "Hello, world"
 ______________
< Hello, world >
 --------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Anvil Cluster Specific Notes

All service providers will integrate Singularity slightly differently depending on site. The largest customization will be which default files are inserted into your images so that routine services will work.

Services we configure for your images include DNS settings and account information. File systems we overlay into your images are your home directory, scratch, project space, datasets, and application file systems.

Here is a list of paths:

  • /etc/resolv.conf
  • /etc/hosts
  • /home/$USER
  • /apps
  • /anvil (including /anvil/scratch, /anvil/projects, and /anvil/datasets)

This means that within the container environment these paths will be present and the same as outside the container. The /apps and /anvil directories will need to exist inside your container to work properly.

Creating Singularity Images

Due to how singularity containers work, you must have root privileges to build an image. Once you have a singularity container image built on your own system, you can copy the image file up to the cluster (you do not need root privileges to run the container).

You can find information and documentation for how to install and use singularity on your system:

  • Install Singularity on Windows
  • Install Singularity on macOS
  • Install Singularity on Linux

We have Singularity version 3.8.0 on the cluster. You will most likely not be able to run containers built with a newer Singularity version, so be sure to follow the installation guide for version 3.8 on your system.

$ singularity --version
singularity version 3.8.0-1.el8

Everything you need to know about building a container is available in the Singularity user guide. Below are some quick tips for getting your own containers built for Anvil.

You can use a Container Recipe to both build your container and share its specification with collaborators (for the sake of reproducibility). Here is a simplistic example of such a file:

# FILENAME: Buildfile
 
Bootstrap: docker
From: ubuntu:18.04
 
%post
    apt-get update && apt-get upgrade -y
    mkdir /apps /anvil

To build the image itself:

$ sudo singularity build ubuntu-18.04.sif Buildfile

The challenge with this approach, however, is that the build must start from scratch every time you decide to change something. In order to create a container image iteratively and interactively, you can use the --sandbox option:

$ sudo singularity build --sandbox ubuntu-18.04 docker://ubuntu:18.04

This will not create a flat image file but a directory tree (i.e., a folder), the contents of which are the container's filesystem. In order to get a shell inside the container that allows you to modify it, use the --writable option.

$ sudo singularity shell --writable ubuntu-18.04

Singularity: Invoking an interactive shell within container...
Singularity ubuntu-18.04.sandbox:~>

You can then proceed to install any libraries, software, etc. within the container. Then to create the final image file, exit the shell and call the build command once more on the sandbox.

$ sudo singularity build ubuntu-18.04.sif ubuntu-18.04

Finally, copy the new image to Anvil and run it.
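
For example, a minimal sketch of copying the image and running a command inside it (the hostname and paths follow the SCP examples later in this guide; adjust them to your own username and directories):

localhost> scp ubuntu-18.04.sif x-anvilusername@anvil.rcac.purdue.edu:/home/x-anvilusername/

# then, on an Anvil login node:
$ singularity exec ubuntu-18.04.sif cat /etc/os-release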

Managing and using shared Python environments

Here is a suggested workflow for a common group-shared Anaconda environment with Jupyter capabilities:

The PI or lab software manager:

Creates the environment and module file (once):

$ module purge
$ module load anaconda
$ conda-env-mod create -p $PROJECT/apps/labpackages -m $PROJECT/etc/modules --jupyter

Installs required Python packages into the environment (as many times as needed):

$ module use /anvil/projects/x-mylab/etc/modules
$ module load conda-env/labpackages-py3.8.8
$ conda install  .......                       # all the necessary packages

Lab members:

Lab members can start using the environment in their command line scripts or batch jobs simply by loading the corresponding module:
$ module use /anvil/projects/x-mylab/etc/modules
$ module load conda-env/labpackages-py3.8.8
$ python my_data_processing_script.py .....

To use the environment in Jupyter, each lab member will need to create their own Jupyter kernel (once). This is because Jupyter kernels are private to individuals, even for shared environments.

$ module use /anvil/projects/x-mylab/etc/modules
$ module load conda-env/labpackages-py3.8.8
$ conda-env-mod kernel -p $PROJECT/apps/labpackages

A similar process can be devised for instructor-provided or individually-managed class software, etc.

Troubleshooting

Python packages often fail to install or run due to dependency conflicts with other packages. In particular, if you previously installed packages in your home directory, it is safer to clean up those installations first.

$ mv ~/.local ~/.local.bak
$ mv ~/.cache ~/.cache.bak

Unload all the modules.

$ module purge

Clean up PYTHONPATH.

$ unset PYTHONPATH

Next load the modules (e.g. anaconda) that you need.

$ module load anaconda/2021.05-py38
$ module use $HOME/privatemodules
$ module load conda-env/mypackages-py3.8.8

Now try running your code again.

A few applications only run on specific versions of Python (e.g. Python 3.6). Please check the documentation of your application if that is the case.

Managing and Transferring Files

File Systems

Anvil provides users with separate home, scratch, and project areas for managing files. These will be accessible via the $HOME, $SCRATCH, $PROJECT and $WORK environment variables. Each file system is available from all Anvil nodes, but has different purge policies and ideal use cases (see table below). Users in the same allocation will share access to the data in the $PROJECT space. The project space will be created upon request for each allocation. $PROJECT and $WORK variables refer to the same location and can be used interchangeably.

$SCRATCH is a high-performance, internally resilient GPFS parallel file system with 10 PB of usable capacity, configured to deliver up to 150 GB/s bandwidth.

  • Full schedule keeps nightly snapshots for 7 days, weekly snapshots for 3 weeks, and monthly snapshots for 2 months.
File System Mount Point Quota Snapshots Best use Purge policy
Anvil ZFS /home 25GB Full Schedule* Home directories: area for storing personal software, scripts, compiling, editing, etc. Not purged
Anvil ZFS /apps N/A Weekly* Applications  
Anvil GPFS /anvil N/A No    
Anvil GPFS /anvil/scratch 100 TB No User scratch: area for job I/O activity, temporary storage Files not accessed for 30 days will be purged
Anvil GPFS /anvil/projects 5 TB Full schedule* Per allocation: area for shared data in a project, common datasets and software installation Not purged while allocation is active. Removed 90 days after allocation expiration
Anvil GPFS /anvil/datasets N/A Weekly* Common data sets (not allocated to users)  
Versity N/A (Globus) 20 TB No Tape storage per allocation  

Transferring your Files

Anvil supports several methods for file transfer to and from the system. Users can transfer files between Anvil and Linux-based systems or Mac using either scp or rsync. Windows SSH clients typically include scp-based file transfer capabilities.

SCP

Rsync

SFTP

Globus

SCP (Secure CoPy)

  • SCP is a simple way of transferring files between two machines that use the SSH protocol. SCP is available as a protocol choice in some graphical file transfer programs and also as a command line program on most Linux, Unix, and Mac OS X systems. SCP can copy single files, but will also recursively copy directory contents if given a directory name.

SSH keys are required for SCP. The following is an example of transferring a test.txt file from your Anvil home directory to your local machine. Make sure to use your Anvil username x-anvilusername:

localhost> scp x-anvilusername@anvil.rcac.purdue.edu:/home/x-anvilusername/test.txt .
Warning: Permanently added the xxxxxxx host key for IP address 'xxx.xxx.xxx.xxx' to the list of known hosts.
test.txt                                                                    100%    0     0.0KB/s   00:00

Rsync

  • Rsync, or Remote Sync, is a free and efficient command-line tool that lets you transfer files and directories to local and remote destinations. It copies only the changes from the source and offers customization options, making it useful for mirroring, performing backups, or migrating data between different file systems.

SSH keys are required for rsync. As in the SCP example above, make sure to use your Anvil username x-anvilusername.
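
Below is a minimal sketch of copying a directory from Anvil to your local machine with rsync (the flags and paths are illustrative, not required):

localhost> rsync -avz x-anvilusername@anvil.rcac.purdue.edu:/home/x-anvilusername/mydata/ ./mydata/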

SFTP (Secure File Transfer Protocol)

SFTP is a reliable way of transferring files between two machines. SFTP is available as a protocol choice in some graphical file transfer programs and also as a command-line program on most Linux, Unix, and Mac OS X systems. SFTP has more features than SCP and allows for other operations on remote files, remote directory listing, and resuming interrupted transfers. Command-line SFTP cannot recursively copy directory contents; to do so, try using SCP or a graphical SFTP client.

Command-line usage:

$ sftp -B buffersize x-anvilusername@anvil.rcac.purdue.edu

To a remote system from local:

sftp> put sourcefile somedir/destinationfile
sftp> put -P sourcefile somedir/

From a remote system to local:

sftp> get sourcefile somedir/destinationfile
sftp> get -P sourcefile somedir/

sftp> exit
  • -B: optional, specify buffer size for transfer; larger may increase speed, but costs memory
  • -P: optional, preserve file attributes and permissions

Linux / Solaris / AIX / HP-UX / Unix:

  • The "sftp" command-line program should already be installed.

Microsoft Windows:

  • MobaXterm

Free, full-featured, graphical Windows SSH, SCP, and SFTP client.

Mac OS X:

  • The "sftp" command-line program should already be installed. You may start a local terminal window from "Applications->Utilities".
  • Cyberduck is a full-featured and free graphical SFTP and SCP client.

Globus

  • Globus is a powerful and easy to use file transfer and sharing service for transferring files virtually anywhere. It works between any XSEDE and non-XSEDE sites running Globus, and it connects any of these research systems to personal systems. You may use Globus to connect to your home, scratch, and project storage directories on Anvil. Since Globus is web-based, it works on any operating system that is connected to the internet. The Globus Personal client is available on Windows, Linux, and Mac OS X. It is primarily used as a graphical means of transfer but it can also be used over the command line. More details can be found in XSEDE Data Transfer & Management.

Lost File Recovery

Your HOME and PROJECTS directories on Anvil are protected against accidental file deletion through a series of snapshots taken every night just after midnight. Each snapshot provides the state of your files at the time the snapshot was taken. It does so by storing only the files which have changed between snapshots. A file that has not changed between snapshots is only stored once but will appear in every snapshot. This is an efficient method of providing snapshots because the snapshot system does not have to store multiple copies of every file.

These snapshots are kept for a limited time at various intervals. Please refer to Anvil File Systems to see the frequency of snapshots on different mount points. Anvil keeps nightly snapshots for 7 days, weekly snapshots for 3 weeks, and monthly snapshots for 2 months. This means you will find snapshots from the last 7 nights, the last 3 Sundays, and the first day of the last 2 months. Files are available going back between two and three months, depending on how long ago the last first of the month was. Snapshots beyond this are not kept.

Only files which have been saved during an overnight snapshot are recoverable. If you lose a file the same day you created it, the file is not recoverable because the snapshot system has not had a chance to save the file.

Snapshots are not a substitute for regular backups. It is the responsibility of the researchers to back up any important data to long-term storage space. Anvil does protect against hardware failures and physical disasters through other means; however, these are also not substitutes for backups.

Anvil offers several ways for researchers to access snapshots of their files.

flost

If you know when you lost the file, the easiest way is to use the flost command.

Here is an example for the /home directory. If you know more specifically where the lost file was, you may provide the full path to that directory.

This tool will prompt you for the date on which you lost the file or would like to recover the file from. If the tool finds an appropriate snapshot it will provide instructions on how to search for and recover the file.

To run the tool you will need to specify the location where the lost file was with the -w argument:

$ flost -w /home

This script will help you try to recover lost home or group directory contents.
NB: Scratch directories are not backed up and cannot be recovered.

Currently anchoring the search under:  /home

If your lost files were on a different filesystem, exit now with Ctrl-C and
rerun flost with a suitable '-w WHERE' argument (or see 'flost -h' for help).

Please enter the date that you lost your files:  MM/DD/YYYY

The closest recovery snapshot to your date of loss currently available is from
MM/DD/YYYY 12:00am.  First, change your directory to that location:
   $ cd /home/.zfs/snapshot/zfs-auto-snap_daily-YYYY-MM-DD-0000
   $ ls

Then copy files or directories from there back to where they belong:
   $ cp mylostfile /home
   $ cp -r mylostdirectory /home

If you are not sure what date you lost the file, you may try entering different dates into flost, or you may manually browse the snapshots in the /home/.zfs/snapshot folder for home directories and the /anvil/projects/.snapshots folder for project directories.
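
For example, a minimal sketch of browsing the snapshots by hand (snapshot names will vary by date):

$ ls /home/.zfs/snapshot
$ ls /anvil/projects/.snapshots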

Software

Module System

The Anvil cluster uses Lmod to manage the user environment, so users have access to the necessary software packages and versions to conduct their research activities. The associated module commands can be used to load applications and compilers, making the corresponding libraries and environment variables automatically available in the user environment.

Lmod is a hierarchical module system, meaning a module can only be loaded after loading the necessary compilers and MPI libraries that it depends on. This helps avoid conflicting libraries and dependencies being loaded at the same time. A list of all available modules on the system can be found with the module spider command.

$ module spider # list all modules, even those not available due to incompatibility with currently loaded modules

-----------------------------------------------------------------------------------
The following is a list of the modules and extensions currently available:
-----------------------------------------------------------------------------------
  amdblis: amdblis/3.0
  amdfftw: amdfftw/3.0
  amdlibflame: amdlibflame/3.0
  amdlibm: amdlibm/3.0
  amdscalapack: amdscalapack/3.0
  anaconda: anaconda/2021.05-py38
  aocc: aocc/3.0

  [....]

The module spider command can also be used to search for specific module names.

$ module spider intel # all modules with names containing 'intel'
-----------------------------------------------------------------------------------
  intel:
-----------------------------------------------------------------------------------
     Versions:
        intel/19.0.5.281
        intel/19.1.3.304
     Other possible modules matches:
        intel-mkl
-----------------------------------------------------------------------------------

$ module spider intel/19.1.3.304 # additional details on a specific module
-----------------------------------------------------------------------------------
  intel: intel/19.1.3.304
-----------------------------------------------------------------------------------

    This module can be loaded directly: module load intel/19.1.3.304

    Help:
      Intel Parallel Studio.

When users log into Anvil, a default compiler (GCC), MPI libraries (OpenMPI), and runtime environments (e.g., Cuda on GPU-nodes) are automatically loaded into the user environment. It is recommended that users explicitly specify which modules and which versions are needed to run their codes in their job scripts via the module load command. Users are advised not to insert module load commands in their bash profiles, as this can cause issues during initialization of certain software (e.g. Thinlinc).

When users load a module, the module system will automatically replace or deactivate modules to ensure that the packages you have loaded are compatible with each other. The following example shows the module system automatically unloading the default Intel compiler version and replacing it with the user-specified version:

$ module load intel # load default version of Intel compiler
$ module list # see currently loaded modules

Currently Loaded Modules:
  1) intel/19.0.5.281

$ module load intel/19.1.3.304 # load a specific version of Intel compiler
$ module list # see currently loaded modules

The following have been reloaded with a version change:
  1) intel/19.0.5.281 => intel/19.1.3.304

Most modules on Anvil include extensive help messages, so users can take advantage of the module help APPNAME command to find information about a particular application or module. Every module also contains two environment variables named $RCAC_APPNAME_ROOT and $RCAC_APPNAME_VERSION identifying its installation prefix and its version. This information can be found by module show APPNAME. Users are encouraged to use generic environment variables such as CC, CXX, FC, MPICC, MPICXX etc. available through the compiler and MPI modules while compiling their code.
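As an illustration, a minimal sketch assuming the hdf5 module from the toolchain described below is available under the default compiler and MPI stack (the variable name follows the $RCAC_APPNAME_ROOT pattern above):

$ module load hdf5
$ echo $RCAC_HDF5_ROOT
$ module show hdf5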

Some other common module commands:

To unload a module:

$ module unload mymodulename

To unload all loaded modules and reset everything to original state:

$ module purge

To see all available modules that are compatible with current loaded modules:

$ module avail

To display information about a specified module, including environment changes, dependencies, software version and path:

$ module show mymodulename

Compiling, Performance, and Optimization

Anvil CPU nodes have GNU, Intel, and AOCC (AMD) compilers available, along with multiple MPI implementations (OpenMPI, Intel MPI (IMPI), and MVAPICH2). Anvil GPU nodes also provide the PGI compiler. Users may want to note the following AMD Milan specific optimization options that can help improve the performance of your code on Anvil:

  1. The majority of the applications on Anvil are built using gcc/11.2.0 which features an AMD Milan specific optimization flag (-march=znver3).
  2. AMD Milan CPUs support the Advanced Vector Extensions 2 (AVX2) vector instructions set. GNU, Intel, and AOCC compilers all have flags to support AVX2. Using AVX2, up to eight floating point operations can be executed per cycle per core, potentially doubling the performance relative to non-AVX2 processors running at the same clock speed.
  3. In order to enable AVX2 support when compiling your code, use the -march=znver3 flag for GCC 11.2, the -march=znver2 flag for GCC 10.2, Clang, and AOCC compilers, or -march=core-avx2 for Intel compilers and GCC prior to 9.3; see the example below.
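
For instance, a minimal sketch of compiling a C source file with these options (the module version and file name are illustrative):

$ module load gcc/11.2.0
$ gcc -O3 -march=znver3 myprogram.c -o myprogram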

Other Software Usage Notes

  1. Use the same environment that you compile the code to run your executables. When switching between compilers for different applications, make sure that you load the appropriate modules before running your executables.
  2. Explicitly set the optimization level in your makefiles or compilation scripts. Most well-written codes can safely use the highest optimization level (-O3), but many compilers set lower default levels (e.g. GNU compilers default to -O0, which turns off all optimizations).
  3. Turn off debugging, profiling, and bounds checking when building executables intended for production runs as these can seriously impact performance. These options are all disabled by default. The flag used for bounds checking is compiler dependent, but the debugging (-g) and profiling (-pg) flags tend to be the same for all major compilers.
  4. Some compiler options are the same for all available compilers on Anvil (e.g. "-o"), while others are different. Many options are available in one compiler suite but not the other. For example, Intel, PGI, and GNU compilers use the -qopenmp, -mp, and -fopenmp flags, respectively, for building OpenMP applications.
  5. MPI compiler wrappers (e.g. mpicc, mpif90) all call the appropriate compilers and load the correct MPI libraries depending on the loaded modules. While the same names may be used for different compilers, keep in mind that these are completely independent scripts.

For Python users, Anvil provides two Python distributions: 1) a natively compiled Python module with a small subset of essential numerical libraries which are optimized for the AMD Milan architecture, and 2) binaries distributed through Anaconda. Users are encouraged to use virtual environments for installing and using additional Python packages.

A broad range of application modules from various science and engineering domains are installed on Anvil, including mathematics and statistical modeling tools, visualization software, computational fluid dynamics codes, molecular modeling packages, and debugging tools.

In addition, Singularity is supported on Anvil and Nvidia GPU Cloud containers are available on Anvil GPU nodes.

Compiling Serial Programs

A serial program is a single process which executes as a sequential stream of instructions on one processor core. Compilers capable of serial programming are available for C, C++, and versions of Fortran.

Here are a few sample serial programs:

To load a compiler, enter one of the following:

$ module load intel
$ module load gcc
$ module load aocc

The Intel, GNU, and AOCC compilers will not output anything for a successful compilation. Also, the Intel compiler does not recognize the suffix ".f95"; you may use ".f90" for any Fortran code regardless of version, as it is free-format.

Language Intel Compiler GNU Compiler AOCC Compiler
Fortran 77 $ ifort myprogram.f -o myprogram $ gfortran myprogram.f -o myprogram $ flang program.f -o program
Fortran 90 $ ifort myprogram.f90 -o myprogram $ gfortran myprogram.f90 -o myprogram $ flang program.f90 -o program
Fortran 95 $ ifort myprogram.f90 -o myprogram $ gfortran myprogram.f95 -o myprogram $ flang program.f90 -o program
C $ icc myprogram.c -o myprogram $ gcc myprogram.c -o myprogram $ clang program.c -o program
C++ $ icc myprogram.cpp -o myprogram $ g++ myprogram.cpp -o myprogram $ clang++ program.C -o program

Compiling MPI Programs

OpenMPI, Intel MPI (IMPI) and MVAPICH2 are implementations of the Message-Passing Interface (MPI) standard. Libraries for these MPI implementations and compilers for C, C++, and Fortran are available on Anvil.

Here are a few sample programs using MPI:

To see the available MPI libraries:

$ module avail openmpi
$ module avail impi
$ module avail mvapich2
Language Header File
Fortran 77 INCLUDE 'mpif.h'
Fortran 90 INCLUDE 'mpif.h'
Fortran 95 INCLUDE 'mpif.h'
C #include <mpi.h>
C++ #include <mpi.h>

The Intel, GNU, and AOCC compilers will not output anything for a successful compilation. Also, the Intel compiler does not recognize the suffix ".f95"; you may use ".f90" for any Fortran code regardless of version, as it is free-format.

Here is some more documentation from other sources on the MPI libraries:

Language Intel Compiler with Intel MPI (IMPI) Intel/GNU/AOCC Compiler with OpenMPI/MVAPICH2
Fortran 77 $ mpiifort myprogram.f -o myprogram $ mpif77 myprogram.f -o myprogram
Fortran 90 $ mpiifort myprogram.f90 -o myprogram $ mpif90 myprogram.f90 -o myprogram
Fortran 95 $ mpiifort myprogram.f90 -o myprogram $ mpif90 myprogram.f90 -o myprogram
C $ mpiicc myprogram.c -o myprogram $ mpicc myprogram.c -o myprogram
C++ $ mpiicc myprogram.C -o myprogram $ mpicxx myprogram.C -o myprogram
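
After compiling, a minimal sketch of launching the resulting MPI executable with four processes (the launcher and process count are illustrative; inside batch jobs, use the launcher recommended for your loaded MPI module, e.g. srun):

$ mpirun -np 4 ./myprogram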

Compiling OpenMP Programs

All compilers installed on Anvil include OpenMP functionality for C, C++, and Fortran. An OpenMP program is a single process that takes advantage of a multi-core processor and its shared memory to achieve a form of parallel computing called multithreading. It distributes the work of a process over processor cores in a single compute node without the need for MPI communications.

Sample programs illustrate task parallelism of OpenMP:

A sample program illustrates loop-level (data) parallelism of OpenMP:

To load a compiler, enter one of the following:

$ module load intel
$ module load gcc
$ module load aocc
Language Header File
Fortran 77 INCLUDE 'omp_lib.h'
Fortran 90 Use omp_lib
Fortran 95 Use omp_lib
C #include <omp.h>
C++ #include <omp.h>

The Intel, GNU, and AOCC compilers will not output anything for a successful compilation. Also, the Intel compiler does not recognize the suffix ".f95"; you may use ".f90" for any Fortran code regardless of version, as it is free-format.

Here is some more documentation from other sources on OpenMP:

Language Intel Compiler GNU Compiler AOCC Compiler
Fortran 77 $ ifort -qopenmp myprogram.f -o myprogram $ gfortran -fopenmp myprogram.f -o myprogram $ flang -fopenmp myprogram.f -o myprogram
Fortran 90 $ ifort -qopenmp myprogram.f90 -o myprogram $ gfortran -fopenmp myprogram.f90 -o myprogram $ flang -fopenmp myprogram.f90 -o myprogram
Fortran 95 $ ifort -qopenmp myprogram.f90 -o myprogram $ gfortran -fopenmp myprogram.f90 -o myprogram $ flang -fopenmp myprogram.f90 -o myprogram
C $ icc -qopenmp myprogram.c -o myprogram $ gcc -fopenmp myprogram.c -o myprogram $ clang -fopenmp myprogram.c -o myprogram
C++ $ icc -qopenmp myprogram.cpp -o myprogram $ g++ -fopenmp myprogram.cpp -o myprogram $ clang++ -fopenmp myprogram.cpp -o myprogram
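
After compiling, a minimal sketch of running the OpenMP executable with a chosen number of threads (the thread count is illustrative):

$ export OMP_NUM_THREADS=8
$ ./myprogram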

Compiling Hybrid Programs

A hybrid program combines both MPI and shared-memory to take advantage of compute clusters with multi-core compute nodes. Libraries for OpenMPI, Intel MPI (IMPI) and MVAPICH2 and compilers which include OpenMP for C, C++, and Fortran are available.

A few examples illustrate hybrid programs with task parallelism of OpenMP:

This example illustrates a hybrid program with loop-level (data) parallelism of OpenMP:

To see the available MPI libraries:

$ module avail openmpi
$ module avail impi
$ module avail mvapich2
Language Header Files
Fortran 77 INCLUDE 'omp_lib.h' INCLUDE 'mpif.h'
Fortran 90 Use omp_lib INCLUDE 'mpif.h'
Fortran 95 Use omp_lib INCLUDE 'mpif.h'
C #include <mpi.h> #include <omp.h>
C++ #include <mpi.h> #include <omp.h>

The Intel, GNU, and AOCC compilers will not output anything for a successful compilation. Also, the Intel compiler does not recognize the suffix ".f95"; you may use ".f90" for any Fortran code regardless of version, as it is free-format.

Language Intel Compiler with Intel MPI(IMPI) Intel/GNU/AOCC Compiler with OpenMPI/MVAPICH2
Fortran 77 $ mpiifort -qopenmp myprogram.f -o myprogram $ mpif77 -fopenmp myprogram.f -o myprogram
Fortran 90 $ mpiifort -qopenmp myprogram.f90 -o myprogram $ mpif90 -fopenmp myprogram.f90 -o myprogram
Fortran 95 $ mpiifort -qopenmp myprogram.f90 -o myprogram $ mpif90 -fopenmp myprogram.f90 -o myprogram
C $ mpiicc -qopenmp myprogram.c -o myprogram $ mpicc -fopenmp myprogram.c -o myprogram
C++ $ mpiicpc -qopenmp myprogram.C -o myprogram $ mpicxx -fopenmp myprogram.C -o myprogram

Compiling NVIDIA GPU Programs

The Anvil cluster contains GPU nodes that support CUDA and OpenCL. See System Architecture for the specifics on the GPUs in Anvil. This section focuses on using CUDA.

A simple CUDA program has a basic workflow:

  • Initialize an array on the host (CPU).
  • Copy array from host memory to GPU memory.
  • Apply an operation to array on GPU.
  • Copy array from GPU memory to host memory.

Here is a sample CUDA program:

"modtree/gpu" Recommended Environment

ModuleTree, or modtree, helps users navigate between the CPU and GPU stacks and sets up a default compiler and MPI environment. For the Anvil cluster, our team makes a recommendation regarding the CUDA version, compiler, and MPI library. This is a proven stable combination that is recommended if you have no specific requirements. To load the recommended set:

$ module load modtree/gpu
$ module list
# you will have all following modules
Currently Loaded Modules:
  1) gcc/8.4.1   2) numactl/2.0.14   3) zlib/1.2.11   4) openmpi/4.0.6   5) cuda/11.2.2   6) modtree/gpu

Both login and GPU-enabled compute nodes have the CUDA tools and libraries available to compile CUDA programs. For complex compilations, submit an interactive job to get to the GPU-enabled compute nodes. The gpu-debug queue is ideal for this case. To compile a CUDA program, load modtree/gpu, and use nvcc to compile the program:

$ module load modtree/gpu
$ nvcc gpu_hello.cu -o gpu_hello
$ ./gpu_hello
No GPU specified, using first GPU
hello, world

The example illustrates only how to copy an array between a CPU and its GPU but does not perform a serious computation.

The following program times three square matrix multiplications on a CPU and on the global and shared memory of a GPU:

$ module load modtree/gpu
$ nvcc mm.cu -o mm
$ ./mm 0
                                                            speedup
                                                            -------
Elapsed time in CPU:                    7810.1 milliseconds
Elapsed time in GPU (global memory):      19.8 milliseconds  393.9
Elapsed time in GPU (shared memory):       9.2 milliseconds  846.8

For best performance, the input array or matrix must be sufficiently large to overcome the overhead in copying the input and output data to and from the GPU.

For more information about NVIDIA, CUDA, and GPUs:

Provided Software

The Anvil team provides a suite of broadly useful software for users of research computing resources. This suite includes compilers, debuggers, visualization libraries, development environments, and other commonly used software libraries. Additionally, some widely used application software is provided.

"modtree/cpu" or "modtree/gpu" Recommended Environment

ModuleTree, or modtree, helps users navigate between the CPU and GPU stacks and sets up a default compiler and MPI environment. For the Anvil cluster, our team makes recommendations for both the CPU and GPU stacks regarding the CUDA version, compiler, math library, and MPI library. These are proven stable combinations that are recommended if you have no specific requirements. To load the recommended set:

$ module load modtree/cpu # for CPU
$ module load modtree/gpu # for GPU

GCC Compiler

The GNU Compiler Collection (GCC) is provided via the module command on Anvil clusters and will be maintained at a common version. Third-party software built with GCC will use this GCC version rather than the GCC provided by the operating system vendor. To see the GCC compiler versions available from the module command:

$ module avail gcc

Toolchain

The Anvil team will build and maintain an integrated, tested, and supported toolchain of compilers, MPI libraries, data format libraries, and other common libraries. This toolchain will consist of:

  • Compiler suite (C, C++, Fortran) (Intel and GCC)
  • BLAS and LAPACK
  • MPI libraries (OpenMPI, MVAPICH, Intel MPI)
  • FFTW
  • HDF5
  • NetCDF

Each of these software packages will be combined with the stable "modtree/cpu" compiler, the latest available Intel compiler, and the common GCC compiler. The goal of these toolchains is to provide a range of compatible compiler and library suites that can be selected to build a wide variety of applications. At the same time, the number of compiler and library combinations is limited to keep the selection easy to navigate and understand. Generally, the toolchain built with the latest Intel compiler will be updated at major releases of the compiler.

Commonly Used Applications

The Anvil team will make every effort to provide a broadly useful set of popular software packages for research cluster users. Software packages such as Matlab, Python (Anaconda), NAMD, GROMACS, R, and others that are useful to a wide range of cluster users are provided via the module command.

Changes to Provided Software

Changes to available software, such as the introduction of new compilers and libraries or the retirement of older toolchains, will be scheduled in advance and coordinated with system maintenance windows. This is done to minimize impact and provide a predictable time for changes. Advance notice of changes will be given through regular maintenance announcements and through notices printed when modules are loaded. Be sure to check maintenance announcements and job output for any upcoming changes.

Long Term Support

The Anvil team understands the need for a stable and unchanging suite of compilers and libraries. Research projects are often tied to specific compiler versions throughout their lifetime. The Anvil team will make every effort to provide the "modtree/cpu" or "modtree/gpu" environment and the common GCC compiler as a long-term supported environment. These suites will stay unchanged for longer periods than the toolchain built with the latest available Intel compiler.

Policies, Helpful Tips and FAQs

Here are details on some ITaP policies for research users and systems.

  • Software Installation Request Policy
  • Helpful Tips
  • Frequently Asked Questions

Software Installation Request Policy

The Anvil team will make every effort to provide a broadly useful set of popular software packages for users. However, many domain-specific packages that may only be of use to single users or small groups of users are beyond the capacity of research computing staff to fully maintain and support. Please consider the following if you require software that is not available via the module command:

  • If your lab is the only user of a software package, Anvil staff may recommend that you install your software privately, either in your home directory or in your allocation project space. If you need help installing software, the Anvil support team may be able to provide limited help.
  • As more users request a particular piece of software, Anvil staff may decide to provide the software centrally. Matlab, Python (Anaconda), NAMD, GROMACS, and R are all examples of frequently requested and used centrally-installed software.
  • Python modules that are available through the Anaconda distribution will be installed through it. Anvil staff may recommend you install other Python modules privately.

If you are not sure how your software request should be handled, or need help installing software, please contact us at the Help Desk.

Helpful Tips

We will strive to ensure that Anvil serves as a valuable resource to the national research community. We hope that you the user will assist us by making note of the following:

  • You share Anvil with thousands of other users, and what you do on the system affects others. Exercise good citizenship to ensure that your activity does not adversely impact the system and the research community with whom you share it. For instance: do not run jobs on the login nodes and do not stress the file system.

  • Help us serve you better by filing informative help desk tickets. Before submitting a help desk ticket do check what the user guide and other documentation say. Search the internet for key phrases in your error logs; that's probably what the consultants answering your ticket are going to do. What have you changed since the last time your job succeeded?

  • Describe your issue as precisely and completely as you can: what you did, what happened, verbatim error messages, other meaningful output. When appropriate, include the information a consultant would need to find your artifacts and understand your workflow: e.g. the directory containing your build and/or job script; the modules you were using; relevant job numbers; and recent changes in your workflow that could affect or explain the behavior you're observing.

  • Have realistic expectations. Consultants can address system issues and answer questions about Anvil. But they can't teach parallel programming in a ticket, and may know nothing about the package you downloaded. They may offer general advice that will help you build, debug, optimize, or modify your code, but you shouldn't expect them to do these things for you.

  • Be patient. It may take a business day for a consultant to get back to you, especially if your issue is complex. It might take an exchange or two before you and the consultant are on the same page. If the admins disable your account, it's not punitive. When the file system is in danger of crashing, or a login node hangs, they don't have time to notify you before taking action.

For GPU jobs, make sure to use the --gres=gpu option instead of --gpu or -G; otherwise, your job may not run properly. See the example below.
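
For example, a minimal sketch of requesting a single GPU in a batch script (the GPU count is illustrative):

#SBATCH --gres=gpu:1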

Helpful Tools

The Anvil cluster provides a list of useful auxiliary tools:

Tool Use
myquota Check the quota of different file systems
flost A utility to recover files from snapshots
showpartitions Display all Slurm partitions and their current usage
myscratch Show the path to your scratch directory
jobinfo Collates job information from the sstat, sacct and squeue SLURM commands to give a uniform interface for both current and historical jobs
sfeatures Show the list of available constraint feature names for different node types.
myproject Print the location of your project directory
mybalance Check the allocation usage of your project team

Frequently Asked Questions

Some common questions, errors, and problems are categorized below.

Logging In & Accounts

Questions

  • Can I use browser-based Thinlinc to access Anvil?
  • What is my username and password to access Anvil?
  • What if my ThinLinc screen is locked?

Can I use browser-based Thinlinc to access Anvil?

Problem

You would like to use browser-based Thinlinc to access Anvil, but do not know what username and password to use.

Solution

Password-based access is not supported at this moment. Please use the ThinLinc client instead. For your first login to Anvil, you will have to use an SSH client to start an SSH session with XSEDE single sign-on and set up SSH keys. Then you will be able to use the native ThinLinc client to access Anvil with SSH keys.

What is my username and password to access Anvil?

Problem

You would like to login to Anvil, but do not know what username and password to use.

Solution

Currently, you can access Anvil through:

  • SSH client:

You can login with XSEDE single sign-on or use SSH keys.

  • Native Thinlinc Client:

You can access Anvil through the native ThinLinc client with SSH keys.

  • Open OnDemand:

You can access Open OnDemand with your XSEDE portal username and password.

What if my ThinLinc screen is locked?

Problem

Your ThinLinc desktop is locked after being idle for a while, and it asks for a password to refresh it, but you do not know the password.

In the default settings, the "screensaver" and "lock screen" are turned on, so if your desktop is idle for more than 5 minutes, your screen might be locked.

Solution

If your screen is locked, close the ThinLinc client, reopen the client login popup, select "End existing session", and try "Connect" again.

To permanently avoid screen lock issues, right-click the desktop, select "Applications", then "Settings", and select "Screensaver". Under the "Screensaver" tab, turn off the "Enable Screensaver" option; under the "Lock Screen" tab, turn off the "Enable Lock Screen" option, and close the window.

Composable Subsystem

New usage patterns have emerged in research computing that depend on the availability of custom services such as notebooks, databases, elastic software stacks, and science gateways alongside traditional batch HPC. The Anvil Composable Subsystem is a Kubernetes based private cloud managed with Rancher that provides a platform for creating composable infrastructure on demand. This cloud-style flexibility provides researchers the ability to self-deploy and manage persistent services to complement HPC workflows and container-based data analysis tools and applications.

Containers & Images

Image - An image is a read-only bundle that packages an application you want to run together with the libraries, dependencies, and tools required for its successful execution; it is typically defined by a simple text file (a recipe such as a Dockerfile). Images are immutable, meaning they do not hold state or application data. Images represent a software environment at a specific point in time and provide an easy way to share applications across various environments. Images can be built from scratch or downloaded from various repositories on the internet; additionally, many software vendors now provide containers alongside traditional installation packages like Windows .exe and Linux rpm/deb.

Container - A container is the run-time environment constructed from an image when it is executed or run in a container runtime. Containers allow the user to attach various resources such as network and volumes in order to move and store data. Containers are similar to virtual machines in that they can be attached to when a process is running and have arbitrary commands executed that affect the running instance. However, unlike virtual machines, containers are more lightweight and portable allowing for easy sharing and collaboration as they run identically in all environments.

Tags - Tags are a way of organizing similar image files together for ease of use. You might see several versions of an image represented using various tags. For example, we might be building a new container to serve web pages using our favorite web server: nginx. If we search for the nginx container on Docker Hub image repository we see many options or tags are available for the official nginx container.

The most common you will see are typically :latest and :number where number refers to the most recent few versions of the software releases. In this example we can see several tags refer to the same image: 1.21.1, mainline, 1, 1.21, and latest all reference the same image while the 1.20.1, stable, 1.20 tags all reference a common but different image. In this case we likely want the nginx image with either the latest or 1.21.1 tag represented as nginx:latest and nginx:1.21.1 respectively.

Container Security - Containers enable fast developer velocity and ease compatibility through great portability, but the speed and ease of use come at some cost. In particular, it is important that teams adopting container-driven development practices have a well-established plan for how to approach container and environment security.

Container Registries - Container registries act as large repositories of images, containers, tools, and surrounding software to enable easy use of pre-made container software bundles. Container registries can be public or private, and several can be used together for projects. Docker Hub is one of the largest public repositories available, and you will find many official software images present on it. You need a user account to avoid being rate limited by Docker Hub. A private container registry based on Harbor is also available for use; see the Harbor sections below.

Docker Hub - Docker Hub is one of the largest container image registries in existence and is well known and widely used in the container community; it serves as an official location for many popular software container images. Container image repositories facilitate the sharing of pre-made container images that are "ready for use." Always pay attention to who is publishing a particular image and verify that you are using containers built only from reliable sources.

Harbor - Harbor is an open source registry for Kubernetes artifacts. It provides private image storage and enforces container security through vulnerability scanning, as well as providing RBAC (role-based access control) to assist with user permissions. Harbor is a registry similar to Docker Hub, but it gives users the ability to create private repositories. You can use it to store your private images, keep copies of common resources such as base OS images from Docker Hub, and ensure your containers are reasonably secure against common known vulnerabilities.

Containers Runtime Concepts

Docker Desktop - Docker Desktop is an application for your Mac or Windows machine that allows you to build and run containers on your local computer. Docker Desktop serves as a container environment and enables much of the functionality of containers on whatever machine you are currently using. This allows for great flexibility: you can develop and test containers directly on your laptop and deploy them with little to no modification.

Volumes - Volumes provide us with a method to create persistent data that is generated and consumed by one or more containers. For docker this might be a folder on your laptop while on a large Kubernetes cluster this might be many SSD drives and spinning disk trays. Any data that is collected and manipulated by a container that we want to keep between container restarts needs to be written to a volume in order to remain around and be available for later use.

Containers Orchestration Concepts

Container Orchestration - Container orchestration broadly means the automation of much of the lifecycle management procedures surrounding the usage of containers. Specifically it refers to the software being used to manage those procedures. As containers have seen mass adoption and development in the last decade, they are now being used to power massive environments and several options have emerged to manage the lifecycle of containers. One of the industry leading options is Kubernetes, a software project that has descended from a container orchestrator at Google that was open sourced in 2015.

Kubernetes (K8s) - Kubernetes (often abbreviated as "K8s") is a platform providing container orchestration functionality. It was open sourced by Google around a decade ago and has seen widespread adoption and development in the ensuing years. K8s is the software that provides the core functionality of the Anvil Composable Subsystem by managing the complete lifecycle of containers. Additionally it provides the following functions: service discovery and load balancing, storage orchestration, secret and configuration management. The Kubernetes cluster can be accessed via the Rancher UI or the kubectl command line tool.

Rancher - Rancher is "a complete software stack for teams adopting containers," as described by its website. It can be thought of as a wrapper around Kubernetes, providing an additional set of tools to help operate the K8s cluster efficiently and additional functionality that does not exist in Kubernetes itself. Two examples of the added functionality are the Rancher UI, which provides an easy-to-use GUI in a browser, and Rancher projects, a concept that allows for multi-tenancy within the cluster. Users can interact directly with Rancher using either the Rancher UI or the Rancher CLI to deploy and manage workloads on the Anvil Composable Subsystem.

Rancher UI - The Rancher UI is a web based graphical interface to use the Anvil Composable Subsystem from anywhere.

Rancher CLI - The Rancher CLI provides a convenient text-based toolkit to interact with the cluster. The binary can be downloaded from the link on the right-hand side of the footer in the Rancher UI. After you download the Rancher CLI, you need to configure a few things that it requires:

  • Your Rancher Server URL, which is used to connect to Rancher Server.
  • An API Bearer Token, which is used to authenticate with Rancher. See Creating an API Key.

After setting up the Rancher CLI you can issue rancher --help to view the full range of options available.

Kubectl - Kubectl is a text based tool for working with the underlying Anvil Kubernetes cluster. In order to take advantage of kubectl you will either need to set up a Kubeconfig File or use the built in kubectl shell in the Rancher UI. You can learn more about kubectl and how to download the kubectl file here.

Storage - Storage is utilized to provide persistent data storage between container deployments. The Ceph filesystem provides access to block, object, and shared file systems. File storage provides an interface to access data in a file and folder hierarchy similar to NTFS or NFS. Block storage is a flexible type of storage that allows for snapshotting and is good for database workloads and generic container storage. Object storage is also provided by Ceph; it features a REST-based bucket file system providing S3 and Swift compatibility.

Access

This section describes how to access the Anvil Composable Subsystem via the Rancher UI, the command line (kubectl), and the Anvil Harbor registry.

Rancher

Logging in to Rancher

The Anvil Composable Subsystem Rancher interface can be accessed via a web browser at https://composable.anvil.rcac.purdue.edu. Log in by choosing "log in with shibboleth" and using your XSEDE credentials at the XSEDE login screen.

kubectl

Configuring local kubectl access with Kubeconfig file

kubectl can be installed and run on your local machine to perform various actions against the Kubernetes cluster using the API server.

These tools authenticate to Kubernetes using information stored in a kubeconfig file.

Note: A file that is used to configure access to a cluster is sometimes called a kubeconfig file. This is a generic way of referring to configuration files. It does not mean that there is a file named kubeconfig.

To authenticate to the Anvil cluster you can download a kubeconfig file that is generated by Rancher as well as the kubectl tool binary.

  1. From anywhere in the Rancher UI, navigate to the cluster dashboard by hovering over the box to the right of the cattle logo and selecting anvil under the "Clusters" banner.

    • Click on kubeconfig file at the top right
    • Click copy to clipboard
    • Create a hidden folder called .kube in your home directory
    • Paste the kubeconfig contents you copied to the clipboard into a file called config in the newly created .kube directory
  2. You can now issue commands using kubectl against the Anvil Rancher cluster

    • To look at the current config settings we just set, use kubectl config view
    • To list the available resource types present in the API, use kubectl api-resources (see the sketch below)

To see more options of kubectl review the cheatsheet.
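
A minimal sketch of those two commands from step 2 (the output will depend on your kubeconfig):

$ kubectl config view
$ kubectl api-resources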

Accessing kubectl in the rancher web UI

You can launch a kubectl command window from within the Rancher UI by selecting the Launch kubectl button to the left of the Kubeconfig File button. This will deploy a container in the cluster with kubectl installed and give you an interactive window to use the command from.

Harbor

Logging into the Anvil Registry UI with XSEDE credentials

Harbor is configured to use XSEDE as an OpenID Connect (OIDC) authentication provider. This allows you to login using your XSEDE credentials.

To log in to the Harbor registry using your XSEDE credentials:

Navigate to https://registry.anvil.rcac.purdue.edu in your favorite web browser.

  1. Click the Login via OIDC Provider button.

    • This redirects you to the XSEDE portal for authentication.
  2. If this is the first time that you are logging in to Harbor with OIDC, specify a user name for Harbor to associate with your OIDC username.

    • This is the user name by which you are identified in Harbor, which is used when adding you to projects, assigning roles, and so on. If the username is already taken, you are prompted to choose another one.
  3. After the OIDC provider has authenticated you, you are redirected back to the Anvil Harbor Registry.

Deployments

Deploy a Container

This is a simple example of deploying a container from a Docker image hosted in Docker Hub. Refer to the Concepts page for more information about images and containers.

  1. In the Rancher web interface, select the "anvil" cluster and your project name.
  2. From the Workloads tab, click the Deploy button.

    • Set a unique Name for your deployment, i.e. "myapp"
    • Set Docker Image to registry.anvil.rcac.purdue.edu/docker-hub-cache/library/alpine. We will use Alpine for this example. Alpine is a small Linux distribution often used as the base for Docker image builds.
    • Instead of pulling directly from Docker Hub, we use the Docker Hub Cache on Anvil's Harbor registry. Use the cache whenever possible as this will prevent all users of Anvil from hitting pull rate limits imposed by Docker Hub.
    • Select the Namespace for your application or create a new one by selecting "Add to a new namespace"
    • Click Launch

    Wait a couple of minutes while your application is deployed. The "does not have minimum availability" message is expected, but waiting more than 5 minutes for your workload to deploy typically indicates a problem. You can check for errors by clicking your workload name (i.e. "myapp"), then the lower button on the right side of your deployed pod, and selecting View Logs.

    If all goes well, you will see an Active status for your deployment.

  3. You can then interact with your deployed Alpine container on the command line by clicking the button with three dots on the right side of the screen and choosing "Execute Shell".

Registry

Accessing the Anvil Composable Registry

The Anvil registry runs Harbor, an open source registry for managing containers and artifacts. It can be accessed at the following URL: https://registry.anvil.rcac.purdue.edu

Using the Anvil Registry Docker Hub Cache

It is advised that you use the Docker Hub cache within Anvil to pull images for deployments. Docker Hub limits the number of images that can be pulled in a 24-hour period, and Anvil can reach that limit depending on user activity. If you are trying to deploy a workload, or have a deployed workload that needs to be migrated, restarted, or upgraded, there is a chance the pull will fail.

To avoid this, prefix your image names with the Anvil cache URL registry.anvil.rcac.purdue.edu/docker-hub-cache/.

For example, if you want to pull a notebook image from the jupyter Docker Hub repository, e.g. jupyter/tensorflow-notebook:latest, pulling it through the Anvil cache would look like this: registry.anvil.rcac.purdue.edu/docker-hub-cache/jupyter/tensorflow-notebook:latest
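
Assuming Docker is installed on your machine, pulling through the cache is just a matter of using the prefixed image name:

$ docker pull registry.anvil.rcac.purdue.edu/docker-hub-cache/jupyter/tensorflow-notebook:latest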

Using OIDC from the Docker or Helm CLI

After you have authenticated via OIDC and logged into the Harbor interface for the first time, you can use the Docker or Helm CLI to access Harbor.

The Docker and Helm CLIs cannot handle redirection for OIDC, so Harbor provides a CLI secret for use when logging in from Docker or Helm.

  1. Log in to Harbor with an OIDC user account.
  2. Click your username at the top of the screen and select User Profile.
  3. Click the clipboard icon to copy the CLI secret associated with your account.
  4. Optionally click the ... icon in your user profile to display buttons for automatically generating or manually creating a new CLI secret.
    1. A user can only have one CLI secret, so when a new secret is generated or created, the old one becomes invalid.
  5. If you generated a new CLI secret, click the clipboard icon to copy it.

You can now use your CLI secret as the password when logging in to Harbor from the Docker or Helm CLI.

docker login -u <username> -p <cli_secret> registry.anvil.rcac.purdue.edu

Note: The CLI secret is associated with the OIDC ID token. Harbor will try to refresh the token, so the CLI secret will be valid after the ID token expires. However, if the OIDC Provider does not provide a refresh token or the refresh fails, the CLI secret becomes invalid. In this case, log out and log back in to Harbor via your OIDC provider so that Harbor can get a new ID token. The CLI secret will then work again.
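
Helm can log in to the registry in the same way; a sketch assuming a Helm version with OCI registry support (your Harbor username and the CLI secret copied above stand in for the placeholders):

$ helm registry login registry.anvil.rcac.purdue.edu -u <username> -p <cli_secret>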

Creating a Harbor Registry

  1. Using a browser, log in to https://registry.anvil.rcac.purdue.edu with your XSEDE account username and password
  2. From the main page, click Create Project; this project will act as your registry
  3. Fill in a name and select whether you want the project to be public or private
  4. Click OK to create and finalize the project

Tagging and Pushing Images to Your Harbor Registry

  1. Tag your image $ docker tag my-image:tag registry.anvil.rcac.purdue.edu/project-registry/my-image:tag
  2. Log in to the Anvil registry via the command line $ docker login registry.anvil.rcac.purdue.edu
  3. Push your image to your project registry $ docker push registry.anvil.rcac.purdue.edu/project-registry/my-image:tag

Creating a Robot Account for a Private Registry

A robot account and token can be used to authenticate to your registry in place of having to supply or store your private credentials on multi-tenant cloud environments like Rancher/Anvil.

  1. Navigate to your project after logging into https://registry.anvil.rcac.purdue.edu
  2. Navigate to the Robot Accounts tab and click New Robot Account.
  3. Fill out the form.
    1. Name your robot account.
    2. Select an account expiration, if any; select never to make the account permanent.
    3. Customize what permissions you wish the account to have.
    4. Click Add.
  4. Copy your information.
    1. Your robot account's name will be longer than what you specified; since this is a multi-tenant registry, Harbor does this to avoid unrelated project owners creating similarly named robot accounts.
  5. Export your token as JSON or copy it to a clipboard.

Note: Harbor does not store account tokens; once you exit this page, your token will be unrecoverable.
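
The robot account can also be used for a normal docker login from the command line; the long account name below is the example from this guide, and single quotes keep the shell from expanding the $ in the name:

$ docker login registry.anvil.rcac.purdue.edu -u 'robot$my-registry+robot' -p <robot-token>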

Adding Your Private Registry to Rancher

  1. From your project, navigate to Resources > Secrets
  2. Navigate to the Registry Credentials tab and click Add Registry
  3. Fill out the form
    1. Give a name to the Registry secret (this is an arbitrary name)
    2. Select whether the registry will be available to all namespaces or a single namespace
    3. Select address as "custom" and provide "registry.anvil.rcac.purdue.edu"
    4. Enter your robot account's long name e.g. robot$my-registry+robot as the Username
    5. Enter your robot account's token as the password
  4. Click Save

External Harbor Documentation

Storage

Storage provides persistent data across container deployments and comes in a few options on Anvil.

The Ceph software is used to provide block, filesystem and object storage on the Anvil composable cluster. File storage provides an interface to access data in a file and folder hierarchy similar to NTFS or NFS. Block storage is a flexible type of storage that allows for snapshotting and is good for database workloads and generic container storage. Object storage is ideal for large unstructured data and features a REST-based API providing an S3-compatible endpoint that can be used by the existing ecosystem of S3 client tools.

Provisioning Block and Filesystem Storage for use in deployments

Block and Filesystem storage can both be provisioned in a similar way.

  1. While deploying a Workload, select the Volumes drop down and click Add Volume…
  2. Select "Add a new persistent volume (claim)"
  3. Set a unique volume name, i.e. "<workload-name>-volume"
  4. Select a Storage Class. The default storage class is Ceph for this Kubernetes cluster
  5. Request an amount of storage in Gigabytes
  6. Click Define
  7. Provide a Mount Point for the persistent volume: i.e. /data
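
Once the workload is launched, you can confirm that the persistent volume claim was provisioned and bound from the command line, for example:

$ kubectl -n <namespace> get pvc    # STATUS should show Bound once Ceph has provisioned the volume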

Accessing object storage externally from local machine using Cyberduck

Cyberduck is a free server and cloud storage browser that can be used to access the public S3 endpoint provided by Anvil.

  1. Download and install Cyberduck from https://cyberduck.io/download/
  2. Launch Cyberduck
  3. Click + Open Connection at the top of the UI.
  4. Select S3 from the dropdown menu
  5. Fill in Server, Access Key ID and Secret Access Key fields
  6. Click Connect
  7. You can now right click to bring up a menu of actions that can be performed against the storage endpoint

Further information about using Cyberduck can be found on the Cyberduck documentation site: https://docs.cyberduck.io/Cyberduck.
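
Any S3-compatible command-line client should work as well. As an illustrative sketch with the AWS CLI (the endpoint and bucket names below are placeholders; use the endpoint URL and keys provided for your allocation):

$ aws configure                                             # enter your Access Key ID and Secret Access Key
$ aws --endpoint-url https://<s3-endpoint> s3 ls            # list your buckets
$ aws --endpoint-url https://<s3-endpoint> s3 cp data.csv s3://<bucket-name>/data.csv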

Examples

The following examples show how to deploy a database with persistent storage and make it available on the network, and how to deploy a web server using a self-assigned URL.

Database

Deploy a postgis Database

  1. In the Rancher web interface, select the "anvil" cluster and your project name.
  2. From the Workloads tab, click the Deploy button.
  3. Set the Name for your deployment, i.e. "mydb"
  4. Set Docker Image to the postgis Docker image: registry.anvil.rcac.purdue.edu/docker-hub-cache/postgis/postgis
  5. Select the Namespace for your application
  6. Set the postgres user password
    1. Select the Environment Variables drop down
    2. Click the Add Variable button and add POSTGRES_PASSWORD = <your password>. You will need this password to connect to the database, so don't forget it! Using a Kubernetes secret is the better way to do this.
  7. Create a persistent volume for your database
    1. Select the Volumes drop down and click Add Volume…
    2. Select "Add a new persistent volume (claim)"
    3. Set a unique volume name, i.e. "mydb-volume"
    4. Use the default Storage Class. The default storage class is Ceph block storage.
    5. Request 2 GiB of storage
    6. Leave Single Node Read-Write checked under Customize
    7. Click Define
  8. Provide the default postgres data directory as a Mount Point for the persistent volume: /var/lib/postgresql/data
  9. Set Sub Path in Volume to data
  10. Click Show advanced options (bottom right of the page)
  11. Click Security & Host Config
  12. Under CPU Reservation, select Limit to 2000 milli CPUs
  13. Click the Launch button

Wait a couple minutes while your persistent volume is created and the postgis container is deployed. The "does not have minimum availability" message is expected. But, waiting more than 5 minutes for your workload to deploy typically indicates a problem. You can check for errors by clicking your workload name (i.e. "mydb"), then the lower button on the right side of your deployed pod and selecting View Logs.

If all goes well, you will see an Active status for your deployment.

Expose the Database to external clients

Use a LoadBalancer service to automatically assign an IP address on a private Purdue network and open the postgres port (5432). A DNS name will automatically be configured for your service as <servicename>.<namespace>.anvilcloud.rcac.purdue.edu.

  1. Mouse over the Resources menu and select Workloads in the Rancher UI
  2. Click the Service Discovery tab
  3. Click the Add Record button
    1. Provide a Name. This will be mapped to <servicename> in your DNS record
  4. Select your Namespace
    1. This will be mapped to <namespace> in your DNS record
  5. Select "One or more workloads" for Resolves To
  6. Click Add Target Workload
  7. Select your postgis workload (i.e. mydb)
  8. Click Show advanced options
  9. Select Layer-4 Load Balancer from the "As a" dropdown
  10. Under Port Mapping, click Add Port
    1. Name the port, i.e. postgres-port
    2. Enter 5432 (the default postgres port) under Publish the service port
  11. Expand the Labels & Annotations section
    1. Click the Add Annotation button
    2. Enter the following key/value pair: metallb.universe.tf/address-pool = anvil-private-pool
  12. Click Create

Kubernetes will now automatically assign you an IP address from the Anvil private IP pool. You can check the IP address by hovering over the "5432/tcp" link on the Service Discovery page or by viewing your service via kubectl on a terminal.

$ kubectl -n <namespace> get services

Verify your DNS record was created:

$ host <servicename>.<namespace>.anvilcloud.rcac.purdue.edu
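
You can then connect from any client that can reach the Purdue private network, for example with the psql client installed locally (the password is the POSTGRES_PASSWORD you set when deploying the workload):

$ psql -h <servicename>.<namespace>.anvilcloud.rcac.purdue.edu -p 5432 -U postgres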

Web Server

Nginx Deployment

In this example, we will deploy an nginx web server and use a Kubernetes Ingress to define a custom URL for the server.

  1. In the Rancher web interface, select the "anvil" cluster and your project name.
  2. From the Workloads tab, click the Deploy button at the top right of the UI.
  3. Set the Name for your deployment, i.e. "mywebserver"
  4. Set Docker Image to the nginx Docker image: registry.anvil.rcac.purdue.edu/docker-hub-cache/library/nginx
  5. Select the Namespace for your application, or define a new namespace by clicking Add to a new namespace.
  6. Click the Launch button

Wait a couple minutes while your application is deployed. The "does not have minimum availability" message is expected. But, waiting more than 5 minutes for your workload to deploy typically indicates a problem. You can check for errors by clicking your workload name (i.e. "mywebserver"), then the lower button on the right side of your deployed pod and selecting View Logs.

If all goes well, you will see an Active status for your deployment.

Expose the web server to external clients via an Ingress

  1. Open the Workload page and click the Load Balancing tab
  2. Click the Add Ingress button at the top right of the UI.
  3. Provide a Name (i.e. "myingress") and the Namespace where you deployed nginx.
  4. Select Specify a hostname to use and enter a hostname of your choice, using the anvilcloud subdomain: <hostname>.anvilcloud.rcac.purdue.edu
  5. Put / for Path and select your nginx deployment name (i.e. "mywebserver") as the Target workload.
  6. Use 80 for Port
  7. Add any required annotations. If your web application does not exist on the same path in the workload, a rewrite-target annotation is needed. If you specify /myapp as your ingress path and your web application exists at the root of the web server, use this annotation: nginx.ingress.kubernetes.io/rewrite-target: /
  8. The default anvilcloud.rcac.purdue.edu SSL certificate will be used to encrypt traffic.
  9. Click Save

    Kubernetes will now automatically provision the DNS name you requested and create an Ingress for your web server. You may have to wait a minute or two for the state to change to "Active". Once the state changes, your web server will be available on the public Internet.

  10. In the Targets column, click the link to the hostname you created on the Load Balancing tab to open your nginx web server. You should see the "Welcome to nginx!" page in your browser.
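
You can also check the Ingress from a terminal, for example:

$ curl -I https://<hostname>.anvilcloud.rcac.purdue.edu    # expect a 200 response with an nginx Server header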

Services

A Service is an abstract way to expose an application running on Pods as a network service. This allows the networking and application to be logically decoupled so state changes in either the application itself or the network connecting application components do not need to be tracked individually by all portions of an application.

Resources

In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service). The set of Pods targeted by a Service is usually determined by a Pod selector, but can also be defined other ways.

Publishing

For some parts of your application, you may want to expose a Service on an external IP address that is reachable from outside your cluster.

Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.

  • ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
  • NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
  • LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
  • ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.

You can see an example of exposing a workload using the LoadBalancer type on Anvil here.

Rancher provides additional documentation regarding using the LoadBalancer service and Ingress here.
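
As a minimal command-line sketch (assuming a Deployment named mywebserver already exists in your namespace), the same choice of ServiceType can be made with kubectl expose:

$ kubectl -n <namespace> expose deployment mywebserver --port=80 --type=ClusterIP
$ kubectl -n <namespace> get services    # shows each Service's TYPE and its cluster or external IP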

Ingress

An Ingress is an API object that manages external access to the services in a cluster, typically HTTP/HTTPS. An Ingress is not a ServiceType, but rather brings external traffic into the cluster and then passes it to an Ingress Controller to be routed to the correct location. Ingress may provide load balancing, SSL termination and name-based virtual hosting. Traffic routing is controlled by rules defined on the Ingress resource.

You can see an example of a service being exposed with an Ingress on Anvil here.

Ingress Controller

In order for the Ingress resource to work, the cluster must have an ingress controller running to handle Ingress traffic.

Anvil provides the nginx ingress controller configured to facilitate SSL termination and automatic DNS name generation under the anvilcloud.rcac.purdue.edu subdomain.

The official Kubernetes documentation provides additional information about Ingress Controllers here.
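
Ingresses created through the Rancher UI can also be inspected with kubectl, for example:

$ kubectl -n <namespace> get ingress                     # lists each Ingress with its hosts and address
$ kubectl -n <namespace> describe ingress <ingress-name>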