To transfer files, use the domain julia-storage.uni-wuerzburg.de (e.g. `ssh username@julia-storage.uni-wuerzburg.de`).
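A typical transfer might look as follows; the local and remote paths are placeholders, and `rsync`/`scp` are assumed to be available on your machine:

```shell
# Copy a local results directory to the storage node (paths are examples)
rsync -avz ./results/ username@julia-storage.uni-wuerzburg.de:~/results/

# Or copy a single file with scp
scp run_input.txt username@julia-storage.uni-wuerzburg.de:~/
```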
### Compiling
By default, the Intel compiler suite is installed on the new Julia login nodes, but not added to the `PATH` variable. This is readily done by executing `source /usr/local/etc/intel_mpi.sh`. For ALF, you can choose the Intel environment (instead of SuperMUC, Jureca, etc.) and it should work just fine.
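A minimal build setup on the login node might look like this. The ALF configuration step is a sketch: the exact script name and environment label are assumptions, so consult the ALF documentation for your version.

```shell
# Put the Intel compilers and Intel MPI on the PATH
source /usr/local/etc/intel_mpi.sh

# Sanity check: the MPI Fortran wrapper should now be found
which mpiifort

# Inside the ALF directory, select the Intel environment and build
# (script name and environment label are assumptions)
source configure.sh Intel
make
```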
On virtual machines within OpenStack there is one known issue: the Intel compiler option `-xHost` (automatic architecture-dependent vectorization) can cause segmentation faults, at least within ALF. Possible alternatives are `-axCORE-AVX2` (vectorization up to AVX2, using the ymm registers) and `-axCORE-AVX512` (using the 512-bit zmm registers); both work fine, to the best of our knowledge.
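For illustration, replacing `-xHost` in a manual compile line with the flags discussed above (the source and output file names are placeholders):

```shell
# Vectorize up to AVX2 (ymm registers) instead of -xHost
mpiifort -O3 -axCORE-AVX2 -o my_program my_program.f90

# Or target AVX-512 (zmm registers)
mpiifort -O3 -axCORE-AVX512 -o my_program my_program.f90
```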
Jobs may be compiled from the Julia login node (`ssh username@julia.uni-wuerzburg.de`) or within OpenStack.
### Interactive Sessions
Program performance may be tested on the OpenStack machines, but interactive sessions are also available from the login node. To start one, run `srun --pty bash`.
It is a good idea to first check whether any resources are available for interactive sessions by running `sinfo` and looking for nodes in the "idle" state. If your desired partition is not available (see below for partitions), there may be idle nodes in other partitions that you can use for testing. To specify a partition for the interactive session, use `srun --pty -p <partition name> bash`.
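Putting the steps above together (the partition name `standard` is only an example; use a name reported by `sinfo`):

```shell
# List partitions and node states; look for nodes marked "idle"
sinfo

# Start an interactive shell on the default partition
srun --pty bash

# Or request a specific partition
srun --pty -p standard bash
```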