AUHPCS Quick Start
Connect to the Loki Cluster
Attention
If you have not yet requested access to AUHPCS, you must first complete the required steps. Visit the iLab High Performance Computing and Parallel Computing Services Core page to register as a Principal Investigator (PI) or to be added to an existing project. If you have submitted a request but have not received an approval email, please contact your PI sponsor for a status update.
Access
To access AUHPCS, researchers, faculty, or staff must meet the following requirements:
Verify Credentials - Ensure you have an AU NetID and have configured Duo two-factor authentication.
Request Access
Principal Investigators (PIs): Register on the iLab "High Performance Computing and Parallel Computing Services Core" page.
Researchers & Students: Request to be added to an existing project by a registered PI.
Choose a compute group name (PIs)
Once your iLab request is approved, you will receive an email asking you to provide a compute group name.
When choosing a name, keep in mind:
Maximum 12 alphanumeric characters
Avoid using personal names or initials (underscores are acceptable for separation)
Do not use room numbers or other location details
Pick something meaningful that reflects the research project or purpose
Complete Training
Most new users need to complete AUHPCS training, which covers Linux fundamentals and cluster usage.
If you already have prior HPC experience, you may request an exemption by emailing auhpcs_support@augusta.edu. Be sure to include details on where you gained your HPC experience and the skills you have.
Wait for Approval - Your request will be reviewed. Once approved, you will receive confirmation and access instructions.
Important
Failure to comply with AUHPCS governance policies may result in access restrictions or removal.
Connect to Job Submission Nodes
Job submission nodes (or "login nodes") authenticate users and provide essential tools for scripting, submitting, and managing batch jobs. These jobs enter a queue and run once resources become available. The nodes have a minimal OS footprint and operate in an active/active configuration for load balancing.
AUHPCS job submission nodes are accessed with a supported OpenSSH client via the DNS address hpc-sub.augusta.edu.
$ ssh <AU NetID>@hpc-sub.augusta.edu
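To avoid retyping the full address, you can add a host alias to your OpenSSH client configuration. A minimal sketch, assuming the default ~/.ssh/config location; the alias name is arbitrary and <AU NetID> is a placeholder:

# ~/.ssh/config — afterwards, connect with: ssh auhpcs
Host auhpcs
    HostName hpc-sub.augusta.edu
    User <AU NetID>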
Connect to Software Build Nodes
The software build node provides an environment for users to build, test, and debug scientific software. It also serves as an interactive submission node, allowing real-time script execution before batch submission. This helps troubleshoot issues and optimize jobs before moving to production. The environment includes one general-purpose compute node and one GPU node.
The AUHPCS software build node is accessed with a supported OpenSSH client via the DNS address hpc-inter-sub.augusta.edu.
$ ssh <AU NetID>@hpc-inter-sub.augusta.edu
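Since the scheduler commands shown later (sbatch, squeue, sinfo) are Slurm commands, an interactive session on a compute node can typically be requested with srun. A minimal sketch; the core count, time limit, and omission of a partition flag are assumptions about site defaults:

$ srun -N 1 -n 4 -t 01:00:00 --pty bash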
Connect to Data Transfer Nodes
Data transfer nodes provide access to all user file systems in the cluster, enabling high-speed internal transfers and external data movement. Due to network limitations, external transfers may be slower than internal ones. A key use case is managing storage constraints on the ephemeral /scratch file system, a high-performance parallel file system used by compute nodes. Large datasets should be placed in /scratch for active jobs and moved to /work or /project when no longer needed.
AUHPCS data transfer nodes are accessed with a supported OpenSSH client via the DNS address hpc-xfer.augusta.edu.
$ ssh <AU NetID>@hpc-xfer.augusta.edu
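For example, standard scp and rsync invocations can stage a dataset into /scratch from your local machine and later archive results to /project. The directory layout under /scratch and /project is a placeholder; check your group's actual paths:

# From your local machine: copy a dataset to /scratch
$ scp -r ./my_dataset <AU NetID>@hpc-xfer.augusta.edu:/scratch/<compute_group>/
# On hpc-xfer: move finished results from /scratch to /project
$ rsync -av /scratch/<compute_group>/results/ /project/<compute_group>/results/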
Supported OpenSSH Clients
The list of clients supported for OpenSSH services (SSH, SCP, SFTP) is currently under assessment. The base criteria are acceptable integrated security, primarily Active Directory and Duo two-factor authentication support, and overall supportability by the AU Enterprise and Research Technology teams.
OpenSSH clients include:
| Operating System | Recommended SSH Client |
|---|---|
| Linux | Terminal (built-in) |
| macOS | Terminal (built-in) |
| Windows | WSL 2 (recommended) |
| Windows | PuTTY, MobaXterm, WinSCP |
| File Transfers (All) | FileZilla (SFTP) |
For best results, AUHPCS recommends WSL 2 for Windows users, while Linux and macOS users can use their built-in terminal applications. If you experience connection issues, please refer to the Troubleshooting section or contact AUHPCS support.
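Command-line SFTP, included in every OpenSSH installation, offers the same transfer capability as the graphical clients above; the NetID is a placeholder:

$ sftp <AU NetID>@hpc-xfer.augusta.edu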
Running Batch Jobs
To submit a job to the scheduler:
$ sbatch -N <nodes_requested> -n <cores_requested> -J "<job_name>" <job_script.sh>
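These options can also be embedded in the script itself as #SBATCH directives. A minimal sketch of a job script; the resource values and program name are placeholders, and your site may require additional directives such as a partition (-p) or account (-A):

#!/bin/bash
#SBATCH -J example_job          # job name
#SBATCH -N 1                    # nodes requested
#SBATCH -n 4                    # cores requested
#SBATCH -t 01:00:00             # walltime limit (hh:mm:ss)
#SBATCH -o example_job.%j.out   # stdout file (%j expands to the job ID)

module load <software_name>/<software_version>
srun ./my_program

Submit it with sbatch example_job.sh; options given on the command line override the directives in the script.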
Checking Job Status
To check the status of your submitted job:
$ squeue -l
$ scontrol show job <job_id>
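To list only your own jobs, squeue accepts a user filter (a standard Slurm option; $USER expands to your NetID on the cluster):

$ squeue -u $USER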
Viewing Available Queues (Partitions)
To see available partitions:
$ sinfo
$ sinfo -s
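To inspect a single partition in long format (the partition name is a placeholder):

$ sinfo -l -p <partition_name>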
Viewing & Loading Available Software
To list installed software modules:
$ module avail
To load a specific software module:
$ module load <software_name>/<software_version>
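Two related commands, standard in the Environment Modules system, are useful for managing your session:

$ module list     # show currently loaded modules
$ module purge    # unload all loaded modules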
For more details, please refer to the AUHPCS User Guide.