Rename 'Running Jobs on...' to 'Batch Jobs' #588

Merged (7 commits, Jun 8, 2025)

@@ -33,7 +33,7 @@ research needs.
How to access

- Visit our Support portal for [instructions to get
-started](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md)
+started](../../Scientific_Computing/Batch_Jobs/Milan_Compute_Nodes.md)
and details of how the Milan nodes differ from Mahuika’s original
Broadwell nodes

@@ -46,7 +46,7 @@ window: [31 August - 01
October](https://www.nesi.org.nz/services/high-performance-computing-and-analytics/guidelines/allocations-allocation-classes-review#window).

For more technical information about using GPUs on NeSI, [click
-here](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md).
+here](../../Scientific_Computing/Batch_Jobs/GPU_use_on_NeSI.md).
If you have questions about allocations or how to access the P100s,
{% include "partials/support_request.html" %}.

@@ -75,9 +75,9 @@ has AMD Milan (Zen3) CPUs, while the rest of Mahuika has Intel Broadwell
CPUs.

If for any reason you want to use any of the other Mahuika partitions, see
-[Mahuika Slurm Partitions](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Mahuika_Slurm_Partitions.md) for
+[Mahuika Slurm Partitions](../../Scientific_Computing/Batch_Jobs/Mahuika_Slurm_Partitions.md) for
an overview and
-[Milan Compute Nodes](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md) for
+[Milan Compute Nodes](../../Scientific_Computing/Batch_Jobs/Milan_Compute_Nodes.md) for
the differences between them and *milan*.

#### Shared nodes

docs/General/FAQs/How_do_I_request_memory.md (2 changes: 1 addition & 1 deletion)
@@ -9,7 +9,7 @@ zendesk_section_id: 360000039036
---

- `--mem`: Memory per node
-- `--mem-per-cpu`: Memory per [logical CPU](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md)
+- `--mem-per-cpu`: Memory per [logical CPU](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md)
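
As a quick illustration (a hypothetical sketch added for clarity, not part of this change), the two styles of request could look like this in a batch script:

``` sh
# Shared-memory job on a single node: request a total amount of memory per node
#SBATCH --nodes=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=16G

# Alternatively, for an MPI job whose tasks may land on several nodes,
# request memory per logical CPU instead
#SBATCH --ntasks=32
#SBATCH --mem-per-cpu=1500M
```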

In most circumstances, you should request memory using `--mem`. The
exception is if you are running an MPI job that could be placed on more

@@ -104,7 +104,7 @@ If, compared to other jobs in the queue, your job's priority (third
column) and fair share score (fifth column) are both low, this usually
means that your project team has recently been using up CPU core
hours faster than expected.
-See [Fair Share -- How jobs get prioritised](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share_How_jobs_get_prioritised.md) for more
+See [Fair Share -- How jobs get prioritised](../../Scientific_Computing/Batch_Jobs/Fair_Share.md) for more
information on Fair Share, how you can check your project's fair share
score, and what you can do about a low project fair share score.


@@ -21,7 +21,7 @@ to find out what class you're likely eligible for.

You may continue to submit jobs even if you have used all your CPU-hour
allocation. The effect of 0 remaining CPU hours allocation is a
-[lower fairshare](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share_How_jobs_get_prioritised.md),
+[lower fairshare](../../Scientific_Computing/Batch_Jobs/Fair_Share.md),
not the inability to use CPUs. Your ability to submit jobs will only be
removed when your project's allocation expires, not when core-hours are
exhausted.

@@ -36,7 +36,7 @@ plus one kind of compute allocation) in order to be valid and active.
Compute allocations are expressed in terms of a number of units, to be
consumed or reserved between a set start date and time and a set end
date and time. For allocations of computing power, we use [Fair
-Share](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share_How_jobs_get_prioritised.md)
+Share](../../Scientific_Computing/Batch_Jobs/Fair_Share.md)
to balance work between different projects. NeSI allocations and the
relative "prices" of resources used by those allocations should not be
taken as any indicator of the real NZD costs of purchasing or running

@@ -70,7 +70,7 @@ depend on your contractual arrangements with the NeSI host.

Note that the minimum number of logical cores a job can take on Mahuika
is two
-(see [Hyperthreading](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md) for
+(see [Hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md) for
details). Therefore:

- the lowest possible price for a CPU-only job is 0.70 compute units

docs/Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md (6 changes: 3 additions & 3 deletions)
@@ -50,7 +50,7 @@ an '=' sign e.g. `#SBATCH --account=nesi99999` or a space e.g.
| `--account` | `#SBATCH --account=nesi99999` | The account your core hours will be 'charged' to. |
| `--time` | `#SBATCH --time=DD-HH:MM:SS` | Job max walltime. |
| `--mem` | `#SBATCH --mem=512MB` | Memory required per node. |
-| `--partition` | `#SBATCH --partition=milan` | Specified job[partition](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Mahuika_Slurm_Partitions.md). |
+| `--partition` | `#SBATCH --partition=milan` | Specified job[partition](../../Scientific_Computing/Batch_Jobs/Mahuika_Slurm_Partitions.md). |
| `--output` | `#SBATCH --output=%j_output.out` | Path and name of standard output file. |
| `--mail-user` | `#SBATCH [email protected]` | Address to send mail notifications. |
| `--mail-type` | `#SBATCH --mail-type=ALL` | Will send a mail notification at `BEGIN END FAIL`. |

@@ -64,7 +64,7 @@ an '=' sign e.g. `#SBATCH --account=nesi99999` or a space e.g.
| `--nodes` | ``#SBATCH --nodes=2`` | Will request tasks be run across 2 nodes. |
| `--ntasks` | ``#SBATCH --ntasks=2 `` | Will start 2 [MPI](../../Getting_Started/Next_Steps/Parallel_Execution.md) tasks. |
| `--ntasks-per-node` | `#SBATCH --ntasks-per-node=1` | Will start 1 task per requested node. |
-| `--cpus-per-task` | `#SBATCH --cpus-per-task=10` | Will request 10 [*logical* CPUs](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md) per task. |
+| `--cpus-per-task` | `#SBATCH --cpus-per-task=10` | Will request 10 [*logical* CPUs](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md) per task. |
| `--mem-per-cpu` | `#SBATCH --mem-per-cpu=512MB` | Memory Per *logical* CPU. `--mem` Should be used if shared memory job. See [How do I request memory?](../../General/FAQs/How_do_I_request_memory.md) |
| --array | `#SBATCH --array=1-5` | Will submit job 5 times each with a different `$SLURM_ARRAY_TASK_ID` (1,2,3,4,5). |
| | `#SBATCH --array=0-20:5` | Will submit job 5 times each with a different `$SLURM_ARRAY_TASK_ID` (0,5,10,15,20). |

@@ -77,7 +77,7 @@ an '=' sign e.g. `#SBATCH --account=nesi99999` or a space e.g.
| `--qos` | `#SBATCH --qos=debug` | Adding this line gives your job a high priority. *Limited to one job at a time, max 15 minutes*. |
| `--profile` | `#SBATCH --profile=ALL` | Allows generation of a .h5 file containing job profile information. See [Slurm Native Profiling](../../Scientific_Computing/Profiling_and_Debugging/Slurm_Native_Profiling.md) |
| `--dependency` | `#SBATCH --dependency=afterok:123456789` | Will only start after the job 123456789 has completed. |
-| `--hint` | `#SBATCH --hint=nomultithread` | Disables [hyperthreading](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md), be aware that this will significantly change how your job is defined. |
+| `--hint` | `#SBATCH --hint=nomultithread` | Disables [hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md), be aware that this will significantly change how your job is defined. |
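
Pulling a few of the options above together, a minimal job script might look like the following. This is an illustrative sketch only (the job name, module and program names are placeholders), not a file touched by this pull request:

``` sh
#!/bin/bash -e
#SBATCH --job-name=example_job      # placeholder job name
#SBATCH --account=nesi99999         # placeholder project code
#SBATCH --time=01:00:00             # max walltime of one hour
#SBATCH --mem=512MB                 # memory per node
#SBATCH --cpus-per-task=4
#SBATCH --output=%j_output.out      # standard output file

module load MyApplication           # placeholder module
srun my_program input.dat           # placeholder program and input
```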

!!! tip
Many options have a short (`-`) and long (`--`) form e.g.

@@ -34,7 +34,7 @@ ascertain how much of each of these resources you will need.
Asking for too little or too much, however, can both cause problems:
your jobs will
be at increased risk of taking a long time in the queue or failing, and
-your project's [fair share score](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share.md)
+your project's [fair share score](../../Scientific_Computing/Batch_Jobs/Fair_Share.md)
is likely to suffer.
Your project's fair share score will be reduced in
view of compute time spent regardless of whether you obtain a result or

docs/Getting_Started/Next_Steps/MPI_Scaling_Example.md (2 changes: 1 addition & 1 deletion)
@@ -174,7 +174,7 @@ Let's run our Slurm script with sbatch and look at our output from
Our job performed 5,000 seeds using 2 physical CPU cores (each MPI task
will always receive 2 logical CPUs, which is equal to 1 physical CPU.
For a more in depth explanation about logical and physical CPU cores see
-our [Hyperthreading article](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md))
+our [Hyperthreading article](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md))
and a maximum memory of 166,744KB (0.16 GB). In total, the job ran for
18 minutes and 51 seconds.

docs/Getting_Started/Next_Steps/Parallel_Execution.md (2 changes: 1 addition & 1 deletion)
@@ -17,7 +17,7 @@ The are three types of parallel execution we will cover are [Multi-Threading](#
- `--mem-per-cpu=512MB` will give 512 MB of RAM per *logical* core.
- If `--hint=nomultithread` is used then `--cpus-per-task` will now refer to physical cores, but `--mem-per-cpu=512MB` still refers to logical cores.

-See [our article on hyperthreading](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md) for more information.
+See [our article on hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md) for more information.
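
As a hypothetical illustration of how these options interact (a sketch added for clarity, not content from this pull request), a job that disables hyperthreading might request:

``` sh
#SBATCH --cpus-per-task=8        # with --hint=nomultithread this means 8 physical cores
#SBATCH --hint=nomultithread     # run one thread per physical core
#SBATCH --mem-per-cpu=512MB      # still counted per logical CPU, per the note above
```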

## Multi-threading

@@ -20,7 +20,7 @@ search:
items under Accounts.
- On the Project page and New Allocation Request page, tool tip text
referring to
-[nn\_corehour\_usage](../../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Checking_your_projects_usage_using_nn_corehour_usage.md)
+[nn\_corehour\_usage](../../../Scientific_Computing/Batch_Jobs/Checking_your_projects_usage_using_nn_corehour_usage.md)
will appear when you hover over the Mahuika Compute Units
information.


docs/Scientific_Computing/.pages.yml (2 changes: 1 addition & 1 deletion)
@@ -3,7 +3,7 @@ nav:
- Training
- Interactive_computing_using_Jupyter
- Interactive_computing_with_NeSI_OnDemand
-- Running Jobs on Māui and Mahuika: Running_Jobs_on_Maui_and_Mahuika
+- Batch_Jobs
- Profiling_and_Debugging
- HPC_Software_Environment
- Terminal_Setup

@@ -94,7 +94,7 @@ cases:
#SBATCH --gpus-per-node=A100:1
```

-*These GPUs are on Milan nodes, check the [dedicated support page](../Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md
+*These GPUs are on Milan nodes, check the [dedicated support page](../Batch_Jobs/Milan_Compute_Nodes.md)
for more information.*

- 4 A100 (80GB & NVLink) GPU on Mahuika

@@ -104,7 +104,7 @@ cases:
#SBATCH --gpus-per-node=A100:4
```

-*These GPUs are on Milan nodes, check the [dedicated support page](../Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md)
+*These GPUs are on Milan nodes, check the [dedicated support page](../Batch_Jobs/Milan_Compute_Nodes.md)
for more information.*

*You cannot ask for more than 4 A100 (80GB) GPUs per node on

@@ -123,7 +123,7 @@ cases:
GPU).*

You can also use the `--gpus-per-node` option in
-[Slurm interactive sessions](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md),
+[Slurm interactive sessions](../../Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md),
with the `srun` and `salloc` commands. For example:

``` sh
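# (The concrete example was collapsed in this diff view; the lines below are a
# hypothetical sketch, not taken from the original page.)
# Request one P100 GPU for a 30-minute interactive session:
srun --gpus-per-node=P100:1 --time=00:30:00 --mem=4G --cpus-per-task=2 --pty bash
```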

@@ -27,7 +27,7 @@ jobs, but is limited to one small job per user at a time: no more than

Job priority decreases whenever the project uses more core-hours than
expected, across all partitions.
-This [Fair Share](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share.md)
+This [Fair Share](../../Scientific_Computing/Batch_Jobs/Fair_Share.md)
policy means that projects that have consumed many CPU core hours in the
recent past compared to their expected rate of use (either by submitting
and running many jobs, or by submitting and running large jobs) will

@@ -87,7 +87,7 @@ sbatch: `bigmem` is not the most appropriate partition for this job, which would
<td>1850 MB</td>
<td>460 GB</td>
<td rowspan=2>2560</td>
-<td rowspan=2><a href="../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md">Jobs using Milan Nodes</a></td>
+<td rowspan=2><a href="../../Scientific_Computing/Batch_Jobs/Milan_Compute_Nodes.md">Jobs using Milan Nodes</a></td>
</tr>
<td>8</td>
<td>256</td>

@@ -167,7 +167,7 @@ below for more info.</td>
<td>460 GB</td>
<td>64</td>
<td>Part of
-<a href="../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md">Milan Nodes</a>. See below.</td>
+<a href="../../Scientific_Computing/Batch_Jobs/Milan_Compute_Nodes.md">Milan Nodes</a>. See below.</td>
</tr>
</tbody>
</table>

@@ -213,7 +213,7 @@ To request A100 GPUs, use instead:
#SBATCH --gpus-per-node=A100:1
```

-See [GPU use on NeSI](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md)
+See [GPU use on NeSI](../../Scientific_Computing/Batch_Jobs/GPU_use_on_NeSI.md)
for more details about Slurm and CUDA settings.

### Limits on GPU Jobs

@@ -239,7 +239,7 @@ connected via

- Explicitly specify the partition to access them, with
`--partition=hgx`.
-- Hosting nodes are Milan nodes. Check the [dedicated support page](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md)
+- Hosting nodes are Milan nodes. Check the [dedicated support page](../../Scientific_Computing/Batch_Jobs/Milan_Compute_Nodes.md)
for more information about the Milan nodes' differences from
Mahuika's Broadwell nodes.


@@ -76,7 +76,7 @@ job array in a single command)

A low fairshare score will affect your jobs priority in the queue, learn
more about how to effectively use your allocation
-[here](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share_How_jobs_get_prioritised.md).
+[here](../../Scientific_Computing/Batch_Jobs/Fair_Share.md).

## Cross machine submission

@@ -7,7 +7,7 @@ status: deprecated

This article describes a technique to build
[Apptainer](https://apptainer.org/) containers using [Milan compute
-nodes](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md),
+nodes](../../Scientific_Computing/Batch_Jobs/Milan_Compute_Nodes.md),
via a Slurm job. You can also build
[Singularity](../../Scientific_Computing/Supported_Applications/Singularity.md)
container using this technique.

@@ -95,7 +95,7 @@ then assigns different roles to the different ranks:
This implies that **Dask-MPI jobs must be launched on at least 3 MPI
ranks!** Ranks 0 and 1 often perform much less work than the other
ranks, it can therefore be beneficial to use
-[Hyperthreading](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md)
+[Hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md)
to place these two ranks onto a single physical core. Ensure that
activating hyperthreading does not slow down the worker ranks by running
a short test workload with and without hyperthreading.
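
A hedged sketch of what such a request could look like (option values and the script name are illustrative, not from the original article):

``` sh
#SBATCH --ntasks=8            # at least 3 MPI ranks: scheduler, client, and workers
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=1500MB

# dask_script.py is a placeholder that calls dask_mpi.initialize() before
# submitting work to the Dask workers
srun python dask_script.py
```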

@@ -7,7 +7,7 @@ status: deprecated

Many codes can be accelerated significantly by offloading computations
to a GPU. Some NeSI [Mahuika nodes have GPUs attached to
-them](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md).
+them](../../Scientific_Computing/Batch_Jobs/GPU_use_on_NeSI.md).
If you want your code to run faster, if you're developing your own code
or if you have access to the source code and you feel comfortable
editing the code, read on.

@@ -15,7 +15,7 @@ threads that run on separate cores, executing their shares of the total
workload concurrently. OpenMP is suited for the Mahuika and Māui HPCs as
each platform has 36 and 40 physical cores per node respectively.  Each
physical core can handle up to two threads in parallel using
-[Hyperthreading](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md).
+[Hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md).
Therefore you can run up to 72 threads on Mahuika and 80 threads on Māui

The environment variable that controls the number of threads is

@@ -34,7 +34,7 @@ performance, as a socket connects the processor to its RAM and other
processors. A processor in each socket consists of multiple physical
cores, and each physical core is split into two logical cores using a
technology called
-[Hyperthreading](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md)).
+[Hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md)).

A processor also includes caches - a
[cache](https://en.wikipedia.org/wiki/CPU_cache) is very fast memory

@@ -19,7 +19,7 @@ status: deprecated
## Introduction

NeSI supports the use of [Jupyter](https://jupyter.org/) for
-[interactive computing](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md).
+[interactive computing](../../Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md).
Jupyter allows you to create notebooks that contain live code,
equations, visualisations and explanatory text. There are many uses for
Jupyter, including data cleaning, analytics and visualisation, machine

docs/Scientific_Computing/Supported_Applications/ABAQUS.md (2 changes: 1 addition & 1 deletion)
@@ -45,7 +45,7 @@ parameter `academic=TEACHING` or `academic=RESEARCH` in a relevant
intuitive formula <code>⌊ 5 x N<sup>0.422</sup> ⌋</code> where `N` is number
of CPUs.

-[Hyperthreading](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md)
+[Hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md)
can provide significant speedup to your computations, however
hyperthreaded CPUs will use twice the number of licence tokens. It may
be worth adding `#SBATCH --hint nomultithread` to your slurm script if

docs/Scientific_Computing/Supported_Applications/ANSYS.md (4 changes: 2 additions & 2 deletions)
@@ -216,7 +216,7 @@ While it will always be more time and resource efficient using a slurm
script as shown above, there are occasions where the GUI is required. If
you only require a few CPUs for a short while you may run the fluent on
the login node, otherwise use of a [slurm interactive
-session](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md)
+session](../../Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md)
is recommended.

For example.

@@ -625,7 +625,7 @@ Progress can be tracked through the GUI as usual.
## ANSYS-Electromagnetic

ANSYS-EM jobs can be submitted through a slurm script or by [interactive
-session](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md).
+session](../../Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md).

### RSM

docs/Scientific_Computing/Supported_Applications/MATLAB.md (2 changes: 1 addition & 1 deletion)
@@ -176,7 +176,7 @@ CUDA modules and select the appropriate one. For example, for MATLAB
R2021a, use `module load CUDA/11.0.2` before launching MATLAB.

If you want to know more about how to access the different type of
-available GPUs on NeSI, check the [GPU use on NeSI](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md)
+available GPUs on NeSI, check the [GPU use on NeSI](../../Scientific_Computing/Batch_Jobs/GPU_use_on_NeSI.md)
support page.

!!! tip "Support for A100 GPUs"

@@ -256,7 +256,7 @@ export SINGULARITY_BIND="/nesi/project/<your project ID>/inputdata:/var/inputdat
### Accessing a GPU

If your Slurm job has requested access to an NVIDIA GPU (see [GPU use on
-NeSI](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md)
+NeSI](../../Scientific_Computing/Batch_Jobs/GPU_use_on_NeSI.md)
to learn how to request a GPU), a singularity container can
transparently access it using the `--nv` flag:
