To submit jobs via SLURM, log in to the login node with `ssh username@julia.uni-wuerzburg.de`.
There is currently no interactive session option, and no compilers are available; use OpenStack for testing and compilation.
## Sample Job Script
For a serial or "embarrassingly parallel" job:
```
#!/bin/bash
#SBATCH -J EmbParrJob
#SBATCH -N 1
#SBATCH -t 24:00:00
#SBATCH --ntasks-per-node 32
#SBATCH --cpus-per-task 1
#SBATCH --export Debian
#SBATCH --workdir /home/<user>/<launch_directory>
./embParrScript.sh <arguments>
```
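The last line of the script launches `embParrScript.sh`, the user's own executable; the SLURM directives themselves are explained below. As a rough, hypothetical sketch (the program name `my_analysis` and the input/output file names are invented for illustration and are not part of the Julia documentation), such a launcher for an embarrassingly parallel workload could look like this:
```
#!/bin/bash
# Hypothetical contents of embParrScript.sh: start one independent copy of a
# (made-up) program "my_analysis" per allocated core, then wait for all of them.
NTASKS="${SLURM_NTASKS:-32}"   # set by SLURM inside the job; fall back to 32
for i in $(seq 1 "$NTASKS"); do
    ./my_analysis "input_${i}.dat" > "output_${i}.log" 2>&1 &
done
wait   # do not exit (and end the job) until every background task has finished
```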
The SBATCH directives in the sample script request one node (`-N`) with 32 cores (`ntasks-per-node` * `cpus-per-task`), which falls into Julia's "xxl" category, and allow the job to run for at most 24 hours. The source (`--export`) will be the default Debian installation ("Ubuntu" is an alternative option), and the working directory (`--workdir`) is set so that the executable and input files can be found (`/home/<user>` is currently the default working directory). Output files are also placed in the working directory. The executable can make use of all the cores requested on the node.
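Assuming the job script is saved as, for example, `job.sh` (the filename is illustrative), it is submitted from the login node with `sbatch`, and the standard SLURM commands below can be used to monitor or cancel it:
```
sbatch job.sh       # submit the job script; SLURM prints the assigned job ID
squeue -u $USER     # list your pending and running jobs
scancel <jobid>     # cancel a job by its ID if necessary
```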
## Cluster Info from the Login Node
To get information on the available nodes, use `sinfo`.
To get the specifications of a specific node from the `sinfo` list, use `scontrol show node <nodename>`.
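For example (the node name is a placeholder to be replaced with a real name from the `sinfo` output):
```
sinfo                           # overview of partitions and node states
sinfo -N -l                     # one line per node, with CPU and memory details
scontrol show node <nodename>   # full specification of a single node
```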