Storage Overview
The type of storage available to a user on an HPC cluster depends on the system role and the intended purpose of the storage. Users are provided access to various storage systems that serve different needs, from job submissions to temporary scratch space for computations and longer-term storage for active projects.
Types of Storage
HOME directory
/home – Persistent user storage for job scripts and small files.

SCRATCH file system
/scratch – High-performance temporary storage for compute jobs.

LOCAL SCRATCH file system
/lscratch or /tmp – Temporary storage local to compute nodes, wiped after job completion or node reboot.

WORK file system
/work – Shared storage for multiple users or jobs, read-only for compute nodes.

PROJECT file system
/project – Longer-term active job storage, not accessible by compute nodes.
Cluster User File Systems
| File System | Persistence | Mounted On | Quota |
|---|---|---|---|
| HOME (/home) | Persistent | Submission nodes, data transfer nodes, compute nodes | 20 GB |
| SCRATCH (/scratch) | Temporary storage for active job data | Data transfer nodes, compute nodes | N/A |
| LOCAL SCRATCH (/lscratch, /tmp) | Temporary; should be cleared after job completion by the user | Compute nodes | N/A |
| WORK (/work) | Temporary | Data transfer nodes, compute nodes | 500 GB (initial) |
| PROJECT (/project) | Temporary | Data transfer nodes | 1 TB (initial) |
Warning
There is no backup of the project and work directories. Once a file is deleted, it cannot be recovered. The home directory has a 14-day retention period only.
Best Practices for Storage Usage
To efficiently manage storage and avoid exceeding quotas, users should follow these best practices:
Use HOME for Essential Files Only
Keep job scripts, small applications, and lightweight datasets in /home.
Avoid storing large data files here due to the 20 GB quota.
Use SCRATCH for Active Compute Jobs
Ideal for temporary files such as result files, checkpoint data, and large input/output datasets.
Move important data to WORK or PROJECT after the job is completed.
Ensure that only actively used files are stored in /scratch; inactive files are subject to automatic deletion.
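As a rough illustration only (this sketch assumes a Slurm scheduler and uses placeholder paths and program names), a job script can stage its work in /scratch and copy the results it needs to /work before finishing:

#!/bin/bash
#SBATCH --job-name=scratch_example        # hypothetical Slurm job; adapt to your scheduler
SCRATCH_DIR=/scratch/$USER/$SLURM_JOB_ID  # per-job scratch directory (example path)
mkdir -p "$SCRATCH_DIR"
cd "$SCRATCH_DIR"

cp /work/yourfolder/input.dat .           # stage input data into scratch
./my_simulation input.dat                 # placeholder for the real computation

cp results.out /work/yourfolder/          # keep results; /scratch is purged and not backed up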
Use LOCAL SCRATCH for Node-Specific Temporary Files
Useful for high-speed, short-term computations.
All data will be lost after the job completes or the node reboots.
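A minimal sketch of using local scratch inside a job, assuming /lscratch exists on the node and that all names below are placeholders:

#!/bin/bash
LOCAL_DIR=/lscratch/$USER/my_job_tmp      # node-local working directory (example path)
mkdir -p "$LOCAL_DIR"

cp /scratch/$USER/input.dat "$LOCAL_DIR"/ # stage input onto the fast node-local disk
cd "$LOCAL_DIR"
./my_program input.dat                    # placeholder computation

cp output.dat /scratch/$USER/             # copy results off the node before the job ends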
Use WORK for Multi-User Data and Shared Applications
Suitable for storing datasets shared among multiple users or compute jobs.
Compute nodes can read but not write to WORK.
Use PROJECT for Long-Term Active Data
Store data for ongoing research projects that require future compute jobs.
Not an archive – users should regularly clean up unused files.
Warning
Use /scratch only for temporary files needed during active compute jobs. Files in /scratch are automatically purged after 30 days, with notice. There is no backup of scratch; once a file is deleted, it cannot be recovered. Any data you wish to keep must be moved promptly to your WORK or PROJECT directory.
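To spot files in your scratch area that have not been modified for more than 30 days (matching the purge window above) and should therefore be moved or deleted, a standard find command can help:

$ find /scratch/<username> -type f -mtime +30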
Data Transfer & Management
Users can move files between storage systems using the data transfer nodes. The recommended ways to check usage and transfer data efficiently are outlined below.
Checking Storage Usage
$ quota -s          # report your usage on quota-enforced file systems
$ du -shc $HOME     # total the size of everything in your home directory
$ df -h             # show available space on all mounted file systems
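Note that quota -s typically reports only file systems that enforce quotas, such as /home; for the other areas you can total your own usage with du (the paths below are placeholders):

$ du -sh /scratch/<username>
$ du -sh /work/<yourfolder>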
Moving Data
On the node:
$ mv /scratch/yourfile /work/yourfolder/
On your local computer, to download a file from the cluster:
> scp <username>@hpc-xfer.augusta.edu:/work/yourfile /local/destination/
On your local computer, to upload a file to the cluster:
> scp /local/path/yourfile <username>@hpc-xfer.augusta.edu:/work/yourfolder/
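If rsync is available on both your local machine and the data transfer node (an assumption here), it is often preferable to scp for large or repeated copies because it can resume interrupted transfers and skips files that are already up to date:

> rsync -avP /local/path/yourdata/ <username>@hpc-xfer.augusta.edu:/work/yourfolder/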
Using Globus (Recommended for Large Transfers)
Globus is a reliable and high-performance tool for transferring large datasets between systems.
Visit https://app.globus.org
Log in with your institutional credentials.
Search for the HPC Globus endpoint: Augusta University HPC.
Set up a second endpoint for your local machine using Globus Connect Personal.
Drag and drop files between the two endpoints.
Note
Globus is ideal for transfers between HPC, cloud storage, or personal devices. It supports restart/resume functionality and is more efficient than scp for large files.
For more information on supported use cases and advanced Globus setup, see Globus Data Transfer.
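For scripted or unattended transfers, the optional Globus command-line interface can be used instead of the web interface; the endpoint IDs below are placeholders you would look up yourself, and installing the CLI this way is only one possibility:

$ pip install globus-cli                              # one way to install the CLI
$ globus login                                        # authenticate via your browser
$ globus endpoint search "Augusta University HPC"     # find the HPC endpoint ID
$ globus transfer <source_endpoint_id>:/work/yourfile \
      <destination_endpoint_id>:/local/destination/yourfile --label "example transfer"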
Deleting Files
$ rm /scratch/<username>/old_data       # deletion is permanent; there is no backup of /scratch
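To delete an entire directory tree rather than a single file (the path below is a placeholder), add the recursive flag; remember that deletion is immediate and cannot be undone:

$ rm -r /scratch/<username>/old_results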
Final Notes
Data left on LOCAL SCRATCH should be deleted or moved by the user after job completion. This cleanup step can be integrated into the job script itself (see the sketch after these notes).
SCRATCH is not meant for long-term storage; move data as needed.
Compute nodes can read from WORK storage but cannot write to it.
PROJECT is for active research data only, not an archive.
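As noted above, local scratch cleanup can be built directly into the job script. One possible sketch, assuming a bash job script and an example path, uses a trap so the directory is removed even if the job fails partway through:

#!/bin/bash
LOCAL_DIR=/lscratch/$USER/$$              # example per-job directory on local scratch
mkdir -p "$LOCAL_DIR"
trap 'rm -rf "$LOCAL_DIR"' EXIT           # remove it when the script exits, even on error

# ... run your computation in "$LOCAL_DIR" ...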
By following these guidelines, users can optimize storage usage and avoid unnecessary storage constraints.