Cluster Workflow

Data is managed and transferred within the cluster through a set of user-accessible file systems, which are detailed in the next section. Below is a simplified workflow for submitting a typical job to the cluster:

  1. Prepare Job Scripts

    • Users compose job scripts for batch or interactive submission to the workload manager (SLURM). A minimal batch script is sketched after this list.

  2. Upload or Stage Data

    • New datasets are uploaded to the cluster.

    • Existing datasets from /project are staged to /home, /scratch, or /work so that compute nodes can access them (see the staging sketch after this list).

  3. Submit Jobs to SLURM

    • Jobs are submitted to the appropriate SLURM partitions using batch or interactive submission utilities (submission and monitoring commands are sketched after this list).

  4. Job Scheduling and Execution

    • The cluster schedules and executes jobs once the requested resources become available.

  5. Retrieve and Analyze Results

    • Job owners can download their results or analyze them in place (see the retrieval and cleanup sketch after this list).

  6. Manage Data

    • Temporary datasets on /home, /scratch, or /work can be deleted or moved back to /project for continued research or future use.
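
As a sketch of step 1, the following is a minimal SLURM batch script. The partition, directory, and application names are placeholders; substitute the values documented for your site.

    #!/bin/bash
    #SBATCH --job-name=example_job      # name shown in the queue
    #SBATCH --partition=compute         # placeholder; use a partition that exists on your cluster
    #SBATCH --nodes=1                   # number of nodes requested
    #SBATCH --ntasks=4                  # total number of tasks (cores)
    #SBATCH --time=01:00:00             # wall-clock limit (HH:MM:SS)
    #SBATCH --output=%x_%j.out          # output file; %x = job name, %j = job ID

    # Run from the staged working directory (see step 2).
    cd /scratch/$USER/example_run

    # Placeholder workload; replace with your actual application.
    srun ./my_application input.dat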
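
For step 2, staging means copying a dataset from /project onto a file system the compute nodes can reach, such as /scratch. A sketch using rsync, with hypothetical group and directory names:

    # Create a per-user working directory on the scratch file system.
    mkdir -p /scratch/$USER/example_run

    # Copy input data from project storage; -a preserves permissions
    # and timestamps, --progress reports transfer status.
    rsync -a --progress /project/my_group/dataset/ /scratch/$USER/example_run/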
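
Steps 3 and 4 are driven with the standard SLURM command-line utilities. The script name, partition, and job ID below are illustrative:

    # Submit the batch script; SLURM prints the assigned job ID.
    sbatch job.sh

    # Or request an interactive session (partition name is a placeholder).
    srun --partition=compute --ntasks=1 --time=00:30:00 --pty bash

    # List your queued and running jobs.
    squeue -u $USER

    # Inspect the accounting record of a finished job (replace 123456 with your job ID).
    sacct -j 123456 --format=JobID,JobName,State,Elapsed,MaxRSS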
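
For steps 5 and 6, results can be pulled to a local machine, and the temporary copy on scratch can be archived back to /project or deleted. The hostname, username, and paths are placeholders:

    # From your local machine: download the results directory.
    rsync -a --progress username@cluster.example.edu:/scratch/username/example_run/results/ ./results/

    # On the cluster: copy finished data back to project storage for future use...
    rsync -a /scratch/$USER/example_run/results/ /project/my_group/results/example_run/

    # ...then remove the temporary working directory from scratch.
    rm -rf /scratch/$USER/example_run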

[Figure: cluster workflow path]

To submit jobs to the cluster, users interact with the SLURM workload manager. For detailed information on job submission, monitoring, and optimization, see Job Scheduling & Resource Allocation.