VHPC Pilot Group 2025

Thank you for participating in this pilot.  There have only been a few of us running jobs on the new cluster, so UITS is ready to open it up to a few more users.  This pilot also gives us a chance to test the processes we are considering for creating accounts and requesting the VHPC as a Research Computing core service.

The UITS wiki now includes articles on VHPC topics.

The Center for Research Computing has also been generating help files for new users.


Step One - The New Forms

Our strategy is to separate the billable account forms from the service forms.  For billable accounts, there is a creation form and a management form.  The same is true for services: one form requests that services be added to your account, and the management form lets you manage your users on those services.

To begin, request a billable account (in this case, a voucher-based account):
https://kennesawstateuniversity-vbzux.formstack.com/forms/hpc_research_core_account_form


1. Check that you are faculty and give your name
2. Project funding source: Request a Voucher
3. Use "VHPC Pilot Test" for Project Title
4. Use "VHPC Pilot Test" for the project description
5. Choose HPC Credits
6. Voucher justification: VHPC Pilot Test
7. Goal: Other: VHPC Pilot Test
8. Skip the end date
9. Submit form

When I receive the form, I will send you an account to use on the next two forms.  Please wait for it before continuing.

Next, request that the VHPC be added to your billable account:
https://kennesawstateuniversity-vbzux.formstack.com/forms/research_core_service_request


1. Click "I understand" checkbox.
2. Enter the Account I sent you.
3. Check the HPC(vhpc) service.
4. Click Next
5. Confirm that you want HPC jobs charged to the account.
6. Skip any limits
7. Submit Form

When I receive this form, I will coordinate adding the account to the VHPC for job submissions.  You can go ahead and complete the next form while this is in progress.

Finally, request a user account for yourself on the service now associated with your billable account:
https://kennesawstateuniversity-vbzux.formstack.com/forms/research_core_service_management


1. Enter your name
2. Enter the Account I sent you.
3. Check HPC(vhpc)
4. Click Next
5. Select "add user(s)"
6. Check "Myself"
7. Enter your NetID
8. Submit form

Your user account can now be added to your billable account on the VHPC.

You should get a welcome email once the account has been created.

 


Step Two - Get logged in to the system

Log in to the "vpn-groups" portal of the KSU VPN with GlobalProtect.

Use your terminal app to start an SSH session

% ssh NetID@vhpc

Complete the password and the Duo authentication step.

You should be on one of the login servers.
(base) [yourNetID@vhpcprdssh01 ~]$
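
If you will connect often, an entry in your ~/.ssh/config can shorten the login (a sketch; the host name matches the ssh command above):

Host vhpc
    User yourNetID

After that, "ssh vhpc" is enough; you will still see the password and Duo prompts.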

Your home is mounted at: /gpfs/home/e001/yourNetID

Your home directory contains a link to the work directory (formerly scratch on the HPC).
Work is 172 TB of fast storage.
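
Once logged in, a quick sanity check of the storage layout (the work link described above):

% ls -ld ~/work    # the link into the work filesystem
% df -h ~/work/    # free space on the work filesystem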

 


Step Three - Get something to run

Interact
If you want to work in a sandbox environment, the interact command has been ported over to the VHPC.  The biggest difference is that it now requires a billable account.
Example: an interactive session with 1 node, 4 cores, 1 GPU, 100 GB of memory, for 2 hours:

interact -A accountID -N 1 -n 4 -G 1 --mem=100G -t 2:00:00
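
Once the session starts, you can confirm the allocation from inside it (assuming a GPU was granted):

% nvidia-smi          # should list the allocated GPU
% echo $SLURM_JOB_ID  # confirms you are inside a Slurm allocation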


Simple Bash Job
If you want to run a job as a batch process, similar to the old HPC, you will use sbatch and a job submission script.  For example, here is how to run a MATLAB job (matlab_file.m) with 1 node, 1 core, 4 GB RAM, for 10 minutes.

Here is what you could use for a Slurm file, run_matlab.slurm:

#!/bin/bash
#SBATCH --job-name=single_thread_matlab
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem=4gb
#SBATCH --time=00:10:00
#SBATCH --mail-type="BEGIN,END,FAIL"
#SBATCH --mail-user="netid@kennesaw.edu" 
#SBATCH --account="accountID" 

# Strip any suffix from the job ID (handy for naming output files)
JOBID=`echo $SLURM_JOBID | cut -f1 -d.`

module load MATLAB

# Create a temp workdir under work (formerly scratch) and run from it
mkdir -p ${HOME}/work/matlab
export SLURM_MATLAB_WORKDIR=$(mktemp -d -p ${HOME}/work/matlab workdir_XXXXXXXXXX)
cd ${SLURM_MATLAB_WORKDIR}

# Quote the file path and exit so MATLAB terminates when the script finishes
matlab -nodisplay -nosplash -logfile ${FILE}.log -r "run('${FILE}'); exit"

# Delete temp workdir
rm -rf ${SLURM_MATLAB_WORKDIR}

 

To run your matlab_file.m job, use run_matlab.slurm:

sbatch --export=ALL,FILE=${PWD}/matlab_file.m ${PWD}/run_matlab.slurm
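
After submitting, sbatch prints a job ID; the standard Slurm commands will track it (substitute your ID for <jobid>):

squeue -u $USER     # shows the job while it is pending or running
sacct -j <jobid>    # shows state and exit code after it finishes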

 


GPU Bash Job

Here is an example with 1 node, 24 cores, 1 GPU, 256 GB of memory, for 70 minutes on the default queue "defq".
NOTE: defq is currently the only queue.

run_python.slurm

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=24
#SBATCH --gres=gpu:1
#SBATCH --mem=256G
#SBATCH --partition="defq"
#SBATCH --time=01:10:00
#SBATCH --mail-type="BEGIN,END,FAIL"
#SBATCH --mail-user="netid@kennesaw.edu" 
#SBATCH --account="accountID" 

# Strip any suffix from the job ID (handy for naming output files)
JOBID=`echo $SLURM_JOBID | cut -f1 -d.`

module load Miniforge3

eval "$(conda shell.bash hook)"
conda activate legateenv

python my_python.py
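
The script assumes a conda environment named legateenv already exists.  If you have not created one, a minimal one-time setup on a login node could look like this (the Python version here is just an example):

module load Miniforge3
conda create -n legateenv python=3.11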

 

To run your code, use the Slurm file (run_python.slurm):

sbatch run_python.slurm
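
By default, Slurm writes the job's output to slurm-<jobid>.out in the directory you submitted from, so you can follow it with:

tail -f slurm-<jobid>.out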