Frequently Asked Questions (FAQ)

Welcome to the AUHPCS FAQ page. Here, you’ll find answers to common questions about accessing and using the AUHPCS cluster.

General Usage

How do I access the AUHPCS cluster?

You can access the cluster via SSH using a client such as OpenSSH (included with PowerShell on Windows and the terminal on macOS/Linux), PuTTY, or MobaXterm. Use the command:
$ ssh <your_username>@hpc-sub.augusta.edu
Make sure you have an active AU NetID, Duo authentication configured, and that your PI has verified your access request for the AUHPCS cluster.

What do I do if I forget my password?

Password resets are handled through the AU password management portal. If you are locked out, contact support via email.

How can I add collaborators to my project?

Compute group PIs can request to add collaborators through iLab.

What happens if my account is inactive for an extended period?

Accounts with no activity for 1 year will be deactivated.

Can I request temporary access for external collaborators?

Yes, external users may request temporary access with approval.

Job Submission & Slurm

How do I submit a job to the cluster?

Use Slurm’s sbatch command to submit a job:
$ sbatch my_script.slurm
Ensure your script is correctly formatted with resource requests.
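A minimal batch script might look like the sketch below; the module name, program, and resource values are placeholders, so adjust them to your workload.
#!/bin/bash
#SBATCH --job-name=my_job              # name shown in squeue
#SBATCH --nodes=1                      # one compute node
#SBATCH --ntasks=1                     # one task
#SBATCH --cpus-per-task=4              # CPU cores for the task
#SBATCH --mem=8G                       # memory for the job
#SBATCH --time=01:00:00                # walltime limit (HH:MM:SS)
#SBATCH --output=%x_%j.out             # output file (%x = job name, %j = job ID)

module load <module_name>              # placeholder software module
srun ./my_program                      # placeholder program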

How do I check the status of my job?

Run the following command to check your job’s status:
$ squeue -u <your_username>

My job is stuck in the queue. What should I do?

Your job may be waiting for resources. You can check the reason with:
$ scontrol show job <job_id>
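You can also see why a job is pending directly in squeue output; the %R field prints the pending reason (for example, Resources or Priority) or the allocated node list once the job starts:
$ squeue -u <your_username> -o "%.10i %.9P %.20j %.8T %R"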
Refer to the Loki User Guide for a job submission walkthrough. If you need further assistance optimizing your job, contact AUHPCS support.

How do I access the high memory nodes?

Access to high memory partitions (cpu_high_mem_q for CPU and gpu_high_ai_q for GPU) requires prior approval.
To reserve time on these nodes:
  • Email auhpcs_support@augusta.edu with your project summary and estimated job duration.

  • The maximum reservation allowed is 10 days.

  • Approval is based on availability and current system usage.

Make sure to request access early if your jobs need large memory capacity.
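Once access is approved, a job can target the high-memory partitions with directives like the sketch below; the memory and time values are illustrative only.
#SBATCH --partition=cpu_high_mem_q     # or gpu_high_ai_q for GPU high-memory jobs
#SBATCH --mem=500G                     # illustrative memory request
#SBATCH --time=2-00:00:00              # illustrative walltime (days-HH:MM:SS)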

Storage & File Systems

How much storage do I have?

Each file system (/home, /scratch, /work, /project) has a different storage quota. Refer to the Storage Overview page for the current limits.

My storage quota is full. What should I do?

Try the following steps:
  • Remove unnecessary files

  • Move data needed for future computations from /scratch and /work to /project

  • Use du -sh ~/ to check how much space your home directory is using (see the example commands below)
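For example, the commands below show where your space is going and move data worth keeping into /project; the paths are illustrative and may differ on your system.
$ du -sh ~/                                   # total size of your home directory
$ du -sh /scratch/<your_username>/*           # size of each item under your scratch space
$ mv /scratch/<your_username>/results /project/<your_group>/   # keep needed results in /project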

Software & Modules

How do I load a software module?

Use the module command:
$ module load <module_name>
To see available modules:
$ module avail
Refer to the Cluster Applications Page for a complete list of available software.

Can I request additional software installations?

Yes, email AUHPCS support (auhpcs_support@augusta.edu) with details about the software you need.

How do I run GPU-accelerated jobs?

Submit your job to a GPU partition and request GPUs:
#SBATCH --partition=gpu_normal_q
#SBATCH --gres=gpu:1
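A fuller GPU job script might look like the sketch below; the module and program names are placeholders.
#!/bin/bash
#SBATCH --job-name=gpu_job
#SBATCH --partition=gpu_normal_q       # GPU partition from this FAQ
#SBATCH --gres=gpu:1                   # one GPU
#SBATCH --cpus-per-task=4              # CPU cores to feed the GPU
#SBATCH --mem=16G                      # illustrative memory request
#SBATCH --time=04:00:00                # illustrative walltime

module load <gpu_framework_module>     # placeholder (e.g. a CUDA or deep learning module)
srun ./my_gpu_program                  # placeholder program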

Can I run MATLAB with a graphical interface from the cluster?

Yes. MATLAB can be launched graphically using X11 forwarding over SSH. Windows users can use WSL 2, Xming, or VcXsrv; macOS users can use XQuartz.
After starting your X server, connect with ssh -4 -X <USER>@hpc-inter-sub.augusta.edu and load MATLAB in an interactive session.
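The typical sequence is sketched below; the MATLAB module name is a placeholder, so run module avail matlab to see which versions are installed on the cluster.
# on your local machine, with the X server already running
$ ssh -4 -X <USER>@hpc-inter-sub.augusta.edu

# on the interactive node
$ module avail matlab            # list installed MATLAB versions
$ module load <matlab_module>    # placeholder module name
$ matlab                         # opens the MATLAB desktop over X11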

How do I run JupyterLab on the Loki cluster?

JupyterLab replaces the older Jupyter Notebook modules and should be launched through a Slurm batch script. Refer to the JupyterLab Module (Loki) guide for full setup and examples.
Key points to remember:
  • Submit the job from a batch node, not from a login or build node.

  • JupyterLab runs on a compute node; connect via an SSH tunnel to access your notebook (see the sketch after this list).

  • Closing the browser usually ends the job automatically, but always confirm with: squeue -u $USER

    • If it’s still active, end it manually with: scancel -f <JOBID>
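A typical SSH tunnel from your workstation looks like the sketch below; the compute node name and port are examples, so use the values printed in your job's output file.
# on your local machine: forward a local port to the compute node running JupyterLab
$ ssh -N -L 8888:<compute_node>:8888 <your_username>@hpc-sub.augusta.edu
# then open http://localhost:8888 in your browser and paste the token from the job's output file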

Account & Access

How do I request access to the cluster?

Access begins with a new request submitted through iLab. Principal Investigators (PIs) first request access for themselves. Once approved, they can then request access and training for their researchers.

Can I access AUHPCS remotely from home or using my personal laptop?

Access to AUHPCS clusters from off campus requires a secure connection through the AU VPN service.
Only AU-managed laptops or desktops can be enrolled in the AU VPN system. Personal devices are not eligible for VPN configuration.
If you need remote access:
  • Request an AU-issued, managed laptop through your department.

  • Once provisioned, connect via VPN and authenticate using Duo two-factor authentication before accessing AUHPCS clusters. For details on obtaining an AU-managed device, contact IT Support.

System Maintenance & Security

How often is system maintenance performed?

Regular maintenance is scheduled for the last working day of every month. Urgent updates are announced via email.

Advanced Job Scheduling & Optimization

How do I run multi-node parallel jobs?

Multi-node parallel jobs involve hardware-level parallelism, where tasks run across multiple physical compute nodes. You can request this in your Slurm script using flags like:
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=<tasks_per_node>
This is different from software-level parallelism, such as multithreading within a single application on one node (e.g., using OpenMP). Multi-node jobs often rely on MPI to communicate across nodes.
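A minimal multi-node MPI job might look like the sketch below; the MPI module and program names are placeholders.
#!/bin/bash
#SBATCH --job-name=mpi_job
#SBATCH --nodes=2                  # two physical compute nodes
#SBATCH --ntasks-per-node=4        # four MPI ranks per node (8 ranks total)
#SBATCH --time=02:00:00            # illustrative walltime

module load <mpi_module>           # placeholder MPI module
srun ./my_mpi_program              # srun starts one rank per task across both nodes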

Can I reserve resources for a future job?

Yes. Use salloc for an interactive allocation that starts as soon as resources are available, or request an advance reservation for a specific time window.
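For example, salloc grants an allocation once resources are free and opens a shell inside it; the node count, task count, and time below are illustrative.
$ salloc --nodes=1 --ntasks=4 --time=02:00:00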

How do I enable job dependencies?

Use --dependency=afterok:<job_id> in your Slurm script to ensure a job starts only after the specified job completes successfully. Alternatively, you can add a dependency to a submitted job using:
$ scontrol update JobId=<job_id> Dependency=afterok:<dependent_job_id>
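For example, sbatch --parsable prints just the job ID, which makes it easy to chain submissions in a shell script; the script names are placeholders.
$ jobid=$(sbatch --parsable preprocess.slurm)
$ sbatch --dependency=afterok:$jobid analysis.slurm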

What should I do if I don’t know how long my job will run?

If you’re unsure about the duration or resource needs of your job, it’s best to use the interactive node (hpc-inter-sub.augusta.edu). This environment is ideal for testing and developing your workflow, debugging code, and estimating required CPU time and memory before submitting jobs to standard compute partitions.
Using hpc-inter-sub helps you avoid job failures, timeouts, and inefficient use of compute resources. For step-by-step instructions on how to launch an interactive session and best practices, see the Loki User Guide.
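For example, after logging in to the interactive node you can load your software and time a small trial run to estimate walltime and peak memory; the module and program names are placeholders, and this assumes GNU time is available at /usr/bin/time.
$ ssh <your_username>@hpc-inter-sub.augusta.edu
$ module load <module_name>                    # placeholder module
$ /usr/bin/time -v ./my_program small_input    # reports elapsed time and maximum resident memory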

Training & Learning Resources

Does AUHPCS offer training sessions?

Yes, workshops are held upon request.

Where can I learn more about HPC best practices?

Start with the Loki User Guide and the training workshops described above; for additional learning resources, visit the Support Page or contact auhpcs_support@augusta.edu.

For further assistance, visit our Support Page or contact our team at auhpcs_support@augusta.edu.