# Welcome to an informal and unofficial Julia wiki!

Feel free to contribute anything new you discover or update anything that no longer applies/is deprecated.

## Official

Get a Julia account and official information at:

https://www.rz.uni-wuerzburg.de/dienste/rzserver/high-performance-computing/

## OpenStack

To check out a machine, compile code, and test performance, go to:

https://julia.rz.uni-wuerzburg.de

When logging in, enter "julia" in the "Domain" field.

1. Set up an SSH key pair on your local machine.
2. Launch an instance with this key pair selected.
3. Assign a floating IP to the instance (under "Actions").
4. Log in to the machine using `ssh -i <path_to_private_key> <username>@<instance_floating_ip>`, as sketched below.
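
A minimal sketch of the whole sequence from a local shell, assuming an ed25519 key and using placeholder values:

```
# Generate a key pair on the local machine (the file name is an example)
ssh-keygen -t ed25519 -f ~/.ssh/julia_key

# Import ~/.ssh/julia_key.pub in the OpenStack dashboard, launch the
# instance with it selected, assign a floating IP, then log in:
ssh -i ~/.ssh/julia_key <username>@<instance_floating_ip>
```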

## Storage

To transfer files, use the host julia-storage.uni-wuerzburg.de (e.g. `ssh <username>@julia-storage.uni-wuerzburg.de`).
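
For example, two common transfer commands, assuming standard `scp` and `rsync` on your local machine (file and directory names are placeholders):

```
# Copy a single file to your home directory on the storage host
scp results.tar.gz <username>@julia-storage.uni-wuerzburg.de:~/

# Sync a whole directory, preserving attributes and compressing in transit
rsync -avz ./data/ <username>@julia-storage.uni-wuerzburg.de:~/data/
```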

## Job submission

To submit jobs via SLURM, use `ssh <username>@julia.uni-wuerzburg.de`.

There is currently no interactive session option (and no compilers available) on the login node. Use the OpenStack machines for testing and compilation.
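
A typical submit-and-monitor cycle from the login node, with a placeholder script name:

```
sbatch myJobScript.sh   # submit; prints the assigned job ID
squeue -u $USER         # check the status of your own jobs
```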

## Sample Job Script

For a serial or "embarrassingly parallel" job:

```
#!/bin/bash

#SBATCH -J EmbParrJob
#SBATCH -N 1
#SBATCH -t 24:00:00
#SBATCH --ntasks-per-node 32
#SBATCH --cpus-per-task 1
#SBATCH --export Debian
#SBATCH --workdir /home/<user>/<launch_directory>

./embParrScript.sh <arguments>
```

This requests one node (`-N`) with 32 cores (`ntasks-per-node` × `cpus-per-task`), which falls into Julia's "xxl" category, and allows the job to run for at most 24 hours. The source (`--export`) will be the default Debian installation ("Ubuntu" is an alternative option), and the working directory (`--workdir`; newer SLURM versions call this `--chdir`) is set so that the executable and input files can be found (`/home/<user>` is currently the default working directory). Output files will be placed in the working directory as well. The executable can make use of all the cores requested on the node.
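
As an illustration only, a hypothetical `embParrScript.sh` that spreads 32 independent tasks over the requested cores (the program and file names are placeholders):

```
#!/bin/bash
# Hypothetical example: run 32 independent tasks in the background,
# one per requested core, then wait for all of them to finish.
for i in $(seq 1 32); do
    ./my_serial_program input_${i}.dat > output_${i}.log &
done
wait
```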

## Cluster Info from the Login Node

To get information on the available nodes, use `sinfo`.

To get the specifications for a specific node on the `sinfo` list, use `scontrol show node <nodename>`.

To check on jobs you've submitted, use `squeue`.

To cancel a job, use `scancel <JobID>`.
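
Put together, a quick status check from the login node might look like this (`<nodename>` and `<JobID>` are placeholders):

```
sinfo                           # list partitions and node states
scontrol show node <nodename>   # detailed specs for one node
squeue -u $USER                 # your jobs in the queue
scancel <JobID>                 # cancel a job by its ID
```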