Introduction
The I2S Research Cluster is located in the ACF and provides HPC resources to members of the center. The cluster uses the Slurm workload manager, an open-source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters. The cluster is composed of a variety of hardware types, with core counts ranging from 16 to 96 per node. In addition, there is specialized hardware, including Nvidia graphics cards for GPU computing, InfiniBand for low-latency, high-throughput parallel computing, and large-memory systems with up to 1.5 TB of RAM.
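Jobs are submitted to these nodes through Slurm. As a minimal sketch, a batch script might look like the following; the resource values, the GPU request, and the program name `./my_program` are illustrative placeholders, and the partitions and limits available on this cluster are described on the hardware and Slurm pages.

```bash
#!/bin/bash
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --nodes=1                 # run on a single node
#SBATCH --ntasks=1                # one task
#SBATCH --cpus-per-task=4         # four cores for that task
#SBATCH --mem=8G                  # 8 GB of memory
#SBATCH --time=01:00:00           # one-hour wall-time limit
#SBATCH --gres=gpu:1              # request one GPU (only needed for GPU jobs)

# Report where the job landed, then run the actual work (placeholder program).
hostname
srun ./my_program
```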
Getting Started
Below you will find a helpful list of topics to get you started with using the cluster. Click on the boxes to go to their respective pages; a short command-line sketch of one common path through these topics follows the diagram.
```mermaid
flowchart TD
overview --> login[Logging in]
login --> connectondemand[Connecting with Open OnDemand]
login --> connectssh[Connecting with SSH]
overview[I2S Cluster Overview] --> hardware[Cluster node types]
overview --> filesystems[Filesystems]
hardware --> submitjob[Submitting a job]
filesystems --> transfer[Transferring data to/from cluster]
filesystems --> quotas[Filesystem Quotas]
submitjob --> ondemand[Open OnDemand]
submitjob --> front[front1 and front2]
front --> srun
front --> sbatch
sbatch --> jobarrays[Job arrays]
ondemand ---> software[Installing and/or running software]
srun ---> software
sbatch ---> software
software --> lmod[Lmod Modules]
software --> charliecloud[Charliecloud containers]
software --> source[Build from source]
software --> spack[Spack package manager]
software --> conda[Miniconda or Anaconda]
click overview "cluster/"
click connectondemand "ondemand/"
click connectssh "ssh/"
click front "ssh/"
click ondemand "ondemand/#ondemand-desktop"
click srun "slurm/#srun"
click sbatch "slurm/#sbatch"
click jobarrays "slurm/#sbatch-job-arrays"
click lmod "software/#lmod-modules"
click charliecloud "software/#charliecloud"
click conda "software/#miniconda"
click filesystems "filesystems/"
click transfer "filesystems/#transferring-files-tofrom-the-cluster"
click quotas "filesystems/#quotas"
click hardware "hardware/"
click spack "software/#spack"
```
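As a rough illustration of one path through the diagram, the commands below assume you are already logged in to front1 or front2; the module name, script name, and resource values are placeholders, and the full set of options is covered on the Slurm and software pages.

```bash
# See what software is available through Lmod and load a module (name is illustrative).
module avail
module load gcc

# Start a short interactive session on a compute node with srun.
srun --ntasks=1 --cpus-per-task=2 --time=00:30:00 --pty bash

# Or submit a batch script with sbatch and check its state in the queue.
sbatch job.sh
squeue -u $USER
```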
Recent Changes
With the launch of this new documentation, all logins related to the cluster now use KU accounts. When interacting with the cluster, whether through Open OnDemand or SSH, be sure to use your KU username and password.
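For example, an SSH login from a terminal looks like the sketch below; `kuusername` stands in for your KU username, and the hostname shown is only a placeholder, so use the actual login node hostnames given on the SSH page.

```bash
# Log in with your KU credentials; replace the placeholder hostname with a real
# login node hostname from the SSH page.
ssh kuusername@front1.example.edu
```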