diff --git a/docs/General/Announcements/Mahuikas_new_Milan_CPU_nodes_open_to_all_NeSI_users.md b/docs/General/Announcements/Mahuikas_new_Milan_CPU_nodes_open_to_all_NeSI_users.md
index bb868c337..77acb9c72 100644
--- a/docs/General/Announcements/Mahuikas_new_Milan_CPU_nodes_open_to_all_NeSI_users.md
+++ b/docs/General/Announcements/Mahuikas_new_Milan_CPU_nodes_open_to_all_NeSI_users.md
@@ -33,7 +33,7 @@ research needs.
How to access
- Visit our Support portal for [instructions to get
- started](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md)
+ started](../../Scientific_Computing/Batch_Jobs/Milan_Compute_Nodes.md)
and details of how the Milan nodes differ from Mahuika’s original
Broadwell nodes
diff --git a/docs/General/Announcements/New_capabilities_for_Machine_Learning_and_GPU_pricing_updates.md b/docs/General/Announcements/New_capabilities_for_Machine_Learning_and_GPU_pricing_updates.md
index bdf5085d3..6a98258bc 100644
--- a/docs/General/Announcements/New_capabilities_for_Machine_Learning_and_GPU_pricing_updates.md
+++ b/docs/General/Announcements/New_capabilities_for_Machine_Learning_and_GPU_pricing_updates.md
@@ -46,7 +46,7 @@ window: [31 August - 01
October](https://www.nesi.org.nz/services/high-performance-computing-and-analytics/guidelines/allocations-allocation-classes-review#window).
For more technical information about using GPUs on NeSI, [click
-here](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md).
+here](../../Scientific_Computing/Batch_Jobs/GPU_use_on_NeSI.md).
If you have questions about allocations or how to access the P100s,
{% include "partials/support_request.html" %}.
diff --git a/docs/General/Announcements/Preparing_your_code_for_use_on_NeSIs_new_HPC_platform.md b/docs/General/Announcements/Preparing_your_code_for_use_on_NeSIs_new_HPC_platform.md
index 57d10b616..e5aa16c83 100644
--- a/docs/General/Announcements/Preparing_your_code_for_use_on_NeSIs_new_HPC_platform.md
+++ b/docs/General/Announcements/Preparing_your_code_for_use_on_NeSIs_new_HPC_platform.md
@@ -75,9 +75,9 @@ has AMD Milan (Zen3) CPUs, while the rest of Mahuika has Intel Broadwell
CPUs.
If for any reason you want to use any of the other Mahuika partitions,see
-[Mahuika Slurm Partitions](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Mahuika_Slurm_Partitions.md) for
+[Mahuika Slurm Partitions](../../Scientific_Computing/Batch_Jobs/Mahuika_Slurm_Partitions.md) for
an overview and
-[Milan Compute Nodes](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md) for
+[Milan Compute Nodes](../../Scientific_Computing/Batch_Jobs/Milan_Compute_Nodes.md) for
the differences between them and *milan*.
#### Shared nodes
diff --git a/docs/General/FAQs/How_do_I_request_memory.md b/docs/General/FAQs/How_do_I_request_memory.md
index 5b75bbb25..bbadfc49d 100644
--- a/docs/General/FAQs/How_do_I_request_memory.md
+++ b/docs/General/FAQs/How_do_I_request_memory.md
@@ -9,7 +9,7 @@ zendesk_section_id: 360000039036
---
- `--mem`: Memory per node
-- `--mem-per-cpu`: Memory per [logical CPU](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md)
+- `--mem-per-cpu`: Memory per [logical CPU](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md)
In most circumstances, you should request memory using `--mem`. The
exception is if you are running an MPI job that could be placed on more
diff --git a/docs/General/FAQs/Why_is_my_job_taking_a_long_time_to_start.md b/docs/General/FAQs/Why_is_my_job_taking_a_long_time_to_start.md
index bb2ccc656..10e27111d 100644
--- a/docs/General/FAQs/Why_is_my_job_taking_a_long_time_to_start.md
+++ b/docs/General/FAQs/Why_is_my_job_taking_a_long_time_to_start.md
@@ -104,7 +104,7 @@ If, compared to other jobs in the queue, your job's priority (third
column) and fair share score (fifth column) are both low, this usually
means that your project team has recently been using through CPU core
hours faster than expected.
-See [Fair Share -- How jobs get prioritised](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share_How_jobs_get_prioritised.md) for more
+See [Fair Share -- How jobs get prioritised](../../Scientific_Computing/Batch_Jobs/Fair_Share.md) for more
information on Fair Share, how you can check your project's fair share
score, and what you can do about a low project fair share score.
diff --git a/docs/Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md b/docs/Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md
index 262bdd83e..d1107247a 100644
--- a/docs/Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md
+++ b/docs/Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md
@@ -21,7 +21,7 @@ to find out what class you're likely eligible for.
You may continue to submit jobs even if you have used all your CPU-hour
allocation. The effect of 0 remaining CPU hours allocation is a
-[lower fairshare](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share_How_jobs_get_prioritised.md),
+[lower fairshare](../../Scientific_Computing/Batch_Jobs/Fair_Share.md),
not the inability to use CPUs. Your ability to submit jobs will only be
removed when your project's allocation expires, not when core-hours are
exhausted.
@@ -36,7 +36,7 @@ plus one kind of compute allocation) in order to be valid and active.
Compute allocations are expressed in terms of a number of units, to be
consumed or reserved between a set start date and time and a set end
date and time. For allocations of computing power, we use [Fair
-Share](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share_How_jobs_get_prioritised.md)
+Share](../../Scientific_Computing/Batch_Jobs/Fair_Share.md)
to balance work between different projects. NeSI allocations and the
relative "prices" of resources used by those allocations should not be
taken as any indicator of the real NZD costs of purchasing or running
@@ -70,7 +70,7 @@ depend on your contractual arrangements with the NeSI host.
Note that the minimum number of logical cores a job can take on Mahuika
is two
-(see [Hyperthreading](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md) for
+(see [Hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md) for
details). Therefore:
- the lowest possible price for a CPU-only job is 0.70 compute units
diff --git a/docs/Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md b/docs/Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md
index 0719f6288..10dc13848 100644
--- a/docs/Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md
+++ b/docs/Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md
@@ -50,7 +50,7 @@ an '=' sign e.g. `#SBATCH --account=nesi99999` or a space e.g.
| `--account` | `#SBATCH --account=nesi99999` | The account your core hours will be 'charged' to. |
| `--time` | `#SBATCH --time=DD-HH:MM:SS` | Job max walltime. |
| `--mem` | `#SBATCH --mem=512MB` | Memory required per node. |
-| `--partition` | `#SBATCH --partition=milan` | Specified job[partition](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Mahuika_Slurm_Partitions.md). |
+| `--partition` | `#SBATCH --partition=milan` | Specified job [partition](../../Scientific_Computing/Batch_Jobs/Mahuika_Slurm_Partitions.md). |
| `--output` | `#SBATCH --output=%j_output.out` | Path and name of standard output file. |
| `--mail-user` | `#SBATCH --mail-user=user123@gmail.com` | Address to send mail notifications. |
| `--mail-type` | `#SBATCH --mail-type=ALL` | Will send a mail notification at `BEGIN END FAIL`. |
@@ -64,7 +64,7 @@ an '=' sign e.g. `#SBATCH --account=nesi99999` or a space e.g.
| `--nodes` | ``#SBATCH --nodes=2`` | Will request tasks be run across 2 nodes. |
| `--ntasks` | ``#SBATCH --ntasks=2 `` | Will start 2 [MPI](../../Getting_Started/Next_Steps/Parallel_Execution.md) tasks. |
| `--ntasks-per-node` | `#SBATCH --ntasks-per-node=1` | Will start 1 task per requested node. |
-| `--cpus-per-task` | `#SBATCH --cpus-per-task=10` | Will request 10 [*logical* CPUs](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md) per task. |
+| `--cpus-per-task` | `#SBATCH --cpus-per-task=10` | Will request 10 [*logical* CPUs](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md) per task. |
| `--mem-per-cpu` | `#SBATCH --mem-per-cpu=512MB` | Memory Per *logical* CPU. `--mem` Should be used if shared memory job. See [How do I request memory?](../../General/FAQs/How_do_I_request_memory.md) |
| --array | `#SBATCH --array=1-5` | Will submit job 5 times each with a different `$SLURM_ARRAY_TASK_ID` (1,2,3,4,5). |
| | `#SBATCH --array=0-20:5` | Will submit job 5 times each with a different `$SLURM_ARRAY_TASK_ID` (0,5,10,15,20). |
@@ -77,7 +77,7 @@ an '=' sign e.g. `#SBATCH --account=nesi99999` or a space e.g.
| `--qos` | `#SBATCH --qos=debug` | Adding this line gives your job a high priority. *Limited to one job at a time, max 15 minutes*. |
| `--profile` | `#SBATCH --profile=ALL` | Allows generation of a .h5 file containing job profile information. See [Slurm Native Profiling](../../Scientific_Computing/Profiling_and_Debugging/Slurm_Native_Profiling.md) |
| `--dependency` | `#SBATCH --dependency=afterok:123456789` | Will only start after the job 123456789 has completed. |
-| `--hint` | `#SBATCH --hint=nomultithread` | Disables [hyperthreading](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md), be aware that this will significantly change how your job is defined. |
+| `--hint` | `#SBATCH --hint=nomultithread` | Disables [hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md). Be aware that this will significantly change how your job is defined. |
!!! tip
Many options have a short (`-`) and long (`--`) form e.g.
diff --git a/docs/Getting_Started/Next_Steps/Job_Scaling_Ascertaining_job_dimensions.md b/docs/Getting_Started/Next_Steps/Job_Scaling_Ascertaining_job_dimensions.md
index fb99632c1..e23f2788d 100644
--- a/docs/Getting_Started/Next_Steps/Job_Scaling_Ascertaining_job_dimensions.md
+++ b/docs/Getting_Started/Next_Steps/Job_Scaling_Ascertaining_job_dimensions.md
@@ -34,7 +34,7 @@ ascertain how much of each of these resources you will need.
Asking for too little or too much, however, can both cause problems:
your jobs will
be at increased risk of taking a long time in the queue or failing, and
-your project's [fair share score](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share.md)
+your project's [fair share score](../../Scientific_Computing/Batch_Jobs/Fair_Share.md)
is likely to suffer.
Your project's fair share score will be reduced in
view of compute time spent regardless of whether you obtain a result or
diff --git a/docs/Getting_Started/Next_Steps/MPI_Scaling_Example.md b/docs/Getting_Started/Next_Steps/MPI_Scaling_Example.md
index ce6ff12be..7181b25c0 100644
--- a/docs/Getting_Started/Next_Steps/MPI_Scaling_Example.md
+++ b/docs/Getting_Started/Next_Steps/MPI_Scaling_Example.md
@@ -174,7 +174,7 @@ Let's run our Slurm script with sbatch and look at our output from
Our job performed 5,000 seeds using 2 physical CPU cores (each MPI task
will always receive 2 logical CPUs which is equal to 1 physical CPUs.
For a more in depth explanation about logical and physical CPU cores see
-our [Hyperthreading article](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md))
+our [Hyperthreading article](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md))
and a maximum memory of 166,744KB (0.16 GB). In total, the job ran for
18 minutes and 51 seconds.
diff --git a/docs/Getting_Started/Next_Steps/Parallel_Execution.md b/docs/Getting_Started/Next_Steps/Parallel_Execution.md
index 8e268bf1d..901e3ab5f 100644
--- a/docs/Getting_Started/Next_Steps/Parallel_Execution.md
+++ b/docs/Getting_Started/Next_Steps/Parallel_Execution.md
@@ -17,7 +17,7 @@ The are three types of parallel execution we will cover are [Multi-Threading](#
- `--mem-per-cpu=512MB` will give 512 MB of RAM per *logical* core.
- If `--hint=nomultithread` is used then `--cpus-per-task` will now refer to physical cores, but `--mem-per-cpu=512MB` still refers to logical cores.
-See [our article on hyperthreading](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md) for more information.
+See [our article on hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md) for more information.
## Multi-threading
diff --git a/docs/Getting_Started/my-nesi-org-nz/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-21-0.md b/docs/Getting_Started/my-nesi-org-nz/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-21-0.md
index 157fab785..6f4147d3c 100644
--- a/docs/Getting_Started/my-nesi-org-nz/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-21-0.md
+++ b/docs/Getting_Started/my-nesi-org-nz/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-21-0.md
@@ -20,7 +20,7 @@ search:
items under Accounts.
- On the Project page and New Allocation Request page, tool tip text
referring to
- [nn\_corehour\_usage](../../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Checking_your_projects_usage_using_nn_corehour_usage.md)
+ [nn\_corehour\_usage](../../../Scientific_Computing/Batch_Jobs/Checking_your_projects_usage_using_nn_corehour_usage.md)
will appear when you hover over the Mahuika Compute Units
information.
diff --git a/docs/Scientific_Computing/.pages.yml b/docs/Scientific_Computing/.pages.yml
index f1088d855..8e7c61f73 100644
--- a/docs/Scientific_Computing/.pages.yml
+++ b/docs/Scientific_Computing/.pages.yml
@@ -3,7 +3,7 @@ nav:
- Training
- Interactive_computing_using_Jupyter
- Interactive_computing_with_NeSI_OnDemand
- - Running Jobs on Māui and Mahuika: Running_Jobs_on_Maui_and_Mahuika
+ - Batch_Jobs
- Profiling_and_Debugging
- HPC_Software_Environment
- Terminal_Setup
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/.pages.yml b/docs/Scientific_Computing/Batch_Jobs/.pages.yml
similarity index 100%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/.pages.yml
rename to docs/Scientific_Computing/Batch_Jobs/.pages.yml
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Available_GPUs_on_NeSI.md b/docs/Scientific_Computing/Batch_Jobs/Available_GPUs_on_NeSI.md
similarity index 100%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Available_GPUs_on_NeSI.md
rename to docs/Scientific_Computing/Batch_Jobs/Available_GPUs_on_NeSI.md
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Checking_your_projects_usage_using_nn_corehour_usage.md b/docs/Scientific_Computing/Batch_Jobs/Checking_your_projects_usage_using_nn_corehour_usage.md
similarity index 100%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Checking_your_projects_usage_using_nn_corehour_usage.md
rename to docs/Scientific_Computing/Batch_Jobs/Checking_your_projects_usage_using_nn_corehour_usage.md
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Checksums.md b/docs/Scientific_Computing/Batch_Jobs/Checksums.md
similarity index 100%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Checksums.md
rename to docs/Scientific_Computing/Batch_Jobs/Checksums.md
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share.md b/docs/Scientific_Computing/Batch_Jobs/Fair_Share.md
similarity index 100%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share.md
rename to docs/Scientific_Computing/Batch_Jobs/Fair_Share.md
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md b/docs/Scientific_Computing/Batch_Jobs/GPU_use_on_NeSI.md
similarity index 97%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md
rename to docs/Scientific_Computing/Batch_Jobs/GPU_use_on_NeSI.md
index d0b4f16e0..b4da8688a 100644
--- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md
+++ b/docs/Scientific_Computing/Batch_Jobs/GPU_use_on_NeSI.md
@@ -94,7 +94,7 @@ cases:
#SBATCH --gpus-per-node=A100:1
```
- *These GPUs are on Milan nodes, check the [dedicated support page](../Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md)
+ *These GPUs are on Milan nodes; check the [dedicated support page](../Batch_Jobs/Milan_Compute_Nodes.md)
for more information.*
- 4 A100 (80GB & NVLink) GPU on Mahuika
@@ -104,7 +104,7 @@ cases:
#SBATCH --gpus-per-node=A100:4
```
- *These GPUs are on Milan nodes, check the [dedicated support page](../Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md)
+ *These GPUs are on Milan nodes; check the [dedicated support page](../Batch_Jobs/Milan_Compute_Nodes.md)
for more information.*
*You cannot ask for more than 4 A100 (80GB) GPUs per node on
@@ -123,7 +123,7 @@ cases:
GPU).*
You can also use the `--gpus-per-node`option in
-[Slurm interactive sessions](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md),
+[Slurm interactive sessions](../../Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md),
with the `srun` and `salloc` commands. For example:
``` sh
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md b/docs/Scientific_Computing/Batch_Jobs/Hyperthreading.md
similarity index 100%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md
rename to docs/Scientific_Computing/Batch_Jobs/Hyperthreading.md
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_Checkpointing.md b/docs/Scientific_Computing/Batch_Jobs/Job_Checkpointing.md
similarity index 100%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_Checkpointing.md
rename to docs/Scientific_Computing/Batch_Jobs/Job_Checkpointing.md
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_prioritisation.md b/docs/Scientific_Computing/Batch_Jobs/Job_prioritisation.md
similarity index 97%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_prioritisation.md
rename to docs/Scientific_Computing/Batch_Jobs/Job_prioritisation.md
index 98aa19e0d..a07081c33 100644
--- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_prioritisation.md
+++ b/docs/Scientific_Computing/Batch_Jobs/Job_prioritisation.md
@@ -27,7 +27,7 @@ jobs, but is limited to one small job per user at a time: no more than
Job priority decreases whenever the project uses more core-hours than
expected, across all partitions.
-This [Fair Share](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share.md)
+This [Fair Share](../../Scientific_Computing/Batch_Jobs/Fair_Share.md)
policy means that projects that have consumed many CPU core hours in the
recent past compared to their expected rate of use (either by submitting
and running many jobs, or by submitting and running large jobs) will
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Mahuika_Slurm_Partitions.md b/docs/Scientific_Computing/Batch_Jobs/Mahuika_Slurm_Partitions.md
similarity index 94%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Mahuika_Slurm_Partitions.md
rename to docs/Scientific_Computing/Batch_Jobs/Mahuika_Slurm_Partitions.md
index 8dca70e49..b5dd53fd8 100644
--- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Mahuika_Slurm_Partitions.md
+++ b/docs/Scientific_Computing/Batch_Jobs/Mahuika_Slurm_Partitions.md
@@ -87,7 +87,7 @@ sbatch: `bigmem` is not the most appropriate partition for this job, which would
1850 MB |
460 GB |
2560 |
-Jobs using Milan Nodes |
+Jobs using Milan Nodes |
8 |
256 |
@@ -167,7 +167,7 @@ below for more info.
460 GB |
64 |
Part of
-Milan Nodes. See below. |
+Milan Nodes. See below.
@@ -213,7 +213,7 @@ To request A100 GPUs, use instead:
#SBATCH --gpus-per-node=A100:1
```
-See [GPU use on NeSI](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md)
+See [GPU use on NeSI](../../Scientific_Computing/Batch_Jobs/GPU_use_on_NeSI.md)
for more details about Slurm and CUDA settings.
### Limits on GPU Jobs
@@ -239,7 +239,7 @@ connected via
- Explicitly specify the partition to access them, with
`--partition=hgx`.
-- Hosting nodes are Milan nodes. Check the [dedicated support page](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md)
+- Hosting nodes are Milan nodes. Check the [dedicated support page](../../Scientific_Computing/Batch_Jobs/Milan_Compute_Nodes.md)
for more information about the Milan nodes' differences from
Mahuika's Broadwell nodes.
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md b/docs/Scientific_Computing/Batch_Jobs/Milan_Compute_Nodes.md
similarity index 100%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md
rename to docs/Scientific_Computing/Batch_Jobs/Milan_Compute_Nodes.md
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/NetCDF-HDF5_file_locking.md b/docs/Scientific_Computing/Batch_Jobs/NetCDF-HDF5_file_locking.md
similarity index 100%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/NetCDF-HDF5_file_locking.md
rename to docs/Scientific_Computing/Batch_Jobs/NetCDF-HDF5_file_locking.md
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/SLURM-Best_Practice.md b/docs/Scientific_Computing/Batch_Jobs/SLURM-Best_Practice.md
similarity index 97%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/SLURM-Best_Practice.md
rename to docs/Scientific_Computing/Batch_Jobs/SLURM-Best_Practice.md
index fe5dca161..c693aed1b 100644
--- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/SLURM-Best_Practice.md
+++ b/docs/Scientific_Computing/Batch_Jobs/SLURM-Best_Practice.md
@@ -76,7 +76,7 @@ job array in a single command)
A low fairshare score will affect your jobs priority in the queue, learn
more about how to effectively use your allocation
-[here](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share_How_jobs_get_prioritised.md).
+[here](../../Scientific_Computing/Batch_Jobs/Fair_Share.md).
## Cross machine submission
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md b/docs/Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md
similarity index 100%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md
rename to docs/Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md
diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Build_an_Apptainer_container_on_a_Milan_compute_node.md b/docs/Scientific_Computing/HPC_Software_Environment/Build_an_Apptainer_container_on_a_Milan_compute_node.md
index cf72d5234..352dc5b6a 100644
--- a/docs/Scientific_Computing/HPC_Software_Environment/Build_an_Apptainer_container_on_a_Milan_compute_node.md
+++ b/docs/Scientific_Computing/HPC_Software_Environment/Build_an_Apptainer_container_on_a_Milan_compute_node.md
@@ -7,7 +7,7 @@ status: deprecated
This article describes a technique to build
[Apptainer](https://apptainer.org/) containers using [Milan compute
-nodes](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md),
+nodes](../../Scientific_Computing/Batch_Jobs/Milan_Compute_Nodes.md),
via a Slurm job. You can also build
[Singularity](../../Scientific_Computing/Supported_Applications/Singularity.md)
container using this technique.
diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Configuring_Dask_MPI_jobs.md b/docs/Scientific_Computing/HPC_Software_Environment/Configuring_Dask_MPI_jobs.md
index 0bb6bc1fb..fc86df775 100644
--- a/docs/Scientific_Computing/HPC_Software_Environment/Configuring_Dask_MPI_jobs.md
+++ b/docs/Scientific_Computing/HPC_Software_Environment/Configuring_Dask_MPI_jobs.md
@@ -95,7 +95,7 @@ then assigns different roles to the different ranks:
This implies that **Dask-MPI jobs must be launched on at least 3 MPI
ranks!** Ranks 0 and 1 often perform much less work than the other
ranks, it can therefore be beneficial to use
-[Hyperthreading](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md)
+[Hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md)
to place these two ranks onto a single physical core. Ensure that
activating hyperthreading does not slow down the worker ranks by running
a short test workload with and without hyperthreading.
diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Offloading_to_GPU_with_OpenACC.md b/docs/Scientific_Computing/HPC_Software_Environment/Offloading_to_GPU_with_OpenACC.md
index 3b3b46875..4ca5adec0 100644
--- a/docs/Scientific_Computing/HPC_Software_Environment/Offloading_to_GPU_with_OpenACC.md
+++ b/docs/Scientific_Computing/HPC_Software_Environment/Offloading_to_GPU_with_OpenACC.md
@@ -7,7 +7,7 @@ status: deprecated
Many codes can be accelerated significantly by offloading computations
to a GPU. Some NeSI [Mahuika nodes have GPUs attached to
-them](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md).
+them](../../Scientific_Computing/Batch_Jobs/GPU_use_on_NeSI.md).
If you want your code to run faster, if you're developing your own code
or if you have access to the source code and you feel comfortable
editing the code, read on.
diff --git a/docs/Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md b/docs/Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md
index dc2118d5c..7c73e68e7 100644
--- a/docs/Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md
+++ b/docs/Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md
@@ -15,7 +15,7 @@ threads that run on separate cores, executing their shares of the total
workload concurrently. OpenMP is suited for the Mahuika and Māui HPCs as
each platform has 36 and 40 physical cores per node respectively. Each
physical core can handle up to two threads in parallel using
-[Hyperthreading](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md).
+[Hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md).
Therefore you can run up to 72 threads on Mahuika and 80 threads on Māui
The environment variable that controls the number of threads is
diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Thread_Placement_and_Thread_Affinity.md b/docs/Scientific_Computing/HPC_Software_Environment/Thread_Placement_and_Thread_Affinity.md
index 97497aaae..98dbbdf67 100644
--- a/docs/Scientific_Computing/HPC_Software_Environment/Thread_Placement_and_Thread_Affinity.md
+++ b/docs/Scientific_Computing/HPC_Software_Environment/Thread_Placement_and_Thread_Affinity.md
@@ -34,7 +34,7 @@ performance, as a socket connects the processor to its RAM and other
processors. A processor in each socket consists of multiple physical
cores, and each physical core is split into two logical cores using a
technology called
-[Hyperthreading](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md)).
+[Hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md).
A processor also includes caches - a
[cache](https://en.wikipedia.org/wiki/CPU_cache) is very fast memory
diff --git a/docs/Scientific_Computing/Interactive_computing_using_Jupyter/Jupyter_on_NeSI.md b/docs/Scientific_Computing/Interactive_computing_using_Jupyter/Jupyter_on_NeSI.md
index 28285e3c4..bfa8446be 100644
--- a/docs/Scientific_Computing/Interactive_computing_using_Jupyter/Jupyter_on_NeSI.md
+++ b/docs/Scientific_Computing/Interactive_computing_using_Jupyter/Jupyter_on_NeSI.md
@@ -19,7 +19,7 @@ status: deprecated
## Introduction
NeSI supports the use of [Jupyter](https://jupyter.org/) for
-[interactive computing](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md).
+[interactive computing](../../Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md).
Jupyter allows you to create notebooks that contain live code,
equations, visualisations and explanatory text. There are many uses for
Jupyter, including data cleaning, analytics and visualisation, machine
diff --git a/docs/Scientific_Computing/Supported_Applications/ABAQUS.md b/docs/Scientific_Computing/Supported_Applications/ABAQUS.md
index efaa8fe6f..ca2890973 100644
--- a/docs/Scientific_Computing/Supported_Applications/ABAQUS.md
+++ b/docs/Scientific_Computing/Supported_Applications/ABAQUS.md
@@ -45,7 +45,7 @@ parameter `academic=TEACHING` or `academic=RESEARCH` in a relevant
intuitive formula ⌊ 5 x N^0.422 ⌋
where `N` is number
of CPUs.
-[Hyperthreading](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md)
+[Hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md)
can provide significant speedup to your computations, however
hyperthreaded CPUs will use twice the number of licence tokens. It may
be worth adding `#SBATCH --hint nomultithread` to your slurm script if
diff --git a/docs/Scientific_Computing/Supported_Applications/ANSYS.md b/docs/Scientific_Computing/Supported_Applications/ANSYS.md
index fd7f9a018..ca5c18888 100644
--- a/docs/Scientific_Computing/Supported_Applications/ANSYS.md
+++ b/docs/Scientific_Computing/Supported_Applications/ANSYS.md
@@ -216,7 +216,7 @@ While it will always be more time and resource efficient using a slurm
script as shown above, there are occasions where the GUI is required. If
you only require a few CPUs for a short while you may run the fluent on
the login node, otherwise use of an [slurm interactive
-session](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md)
+session](../../Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md)
is recommended.
For example.
@@ -625,7 +625,7 @@ Progress can be tracked through the GUI as usual.
## ANSYS-Electromagnetic
ANSYS-EM jobs can be submitted through a slurm script or by [interactive
-session](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md).
+session](../../Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md).
### RSM
diff --git a/docs/Scientific_Computing/Supported_Applications/MATLAB.md b/docs/Scientific_Computing/Supported_Applications/MATLAB.md
index c05a3af57..415bad015 100644
--- a/docs/Scientific_Computing/Supported_Applications/MATLAB.md
+++ b/docs/Scientific_Computing/Supported_Applications/MATLAB.md
@@ -176,7 +176,7 @@ CUDA modules and select the appropriate one. For example, for MATLAB
R2021a, use `module load CUDA/11.0.2` before launching MATLAB.
If you want to know more about how to access the different type of
-available GPUs on NeSI, check the [GPU use on NeSI](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md)
+available GPUs on NeSI, check the [GPU use on NeSI](../../Scientific_Computing/Batch_Jobs/GPU_use_on_NeSI.md)
support page.
!!! tip "Support for A100 GPUs"
diff --git a/docs/Scientific_Computing/Supported_Applications/Singularity.md b/docs/Scientific_Computing/Supported_Applications/Singularity.md
index 01c043c95..ea02bb50c 100644
--- a/docs/Scientific_Computing/Supported_Applications/Singularity.md
+++ b/docs/Scientific_Computing/Supported_Applications/Singularity.md
@@ -256,7 +256,7 @@ export SINGULARITY_BIND="/nesi/project//inputdata:/var/inputdat
### Accessing a GPU
If your Slurm job has requested access to an NVIDIA GPU (see [GPU use on
-NeSI](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md)
+NeSI](../../Scientific_Computing/Batch_Jobs/GPU_use_on_NeSI.md)
to learn how to request a GPU), a singularity container can
transparently access it using the `--nv` flag:
diff --git a/docs/Scientific_Computing/Supported_Applications/TensorFlow_on_GPUs.md b/docs/Scientific_Computing/Supported_Applications/TensorFlow_on_GPUs.md
index d4a43e59c..fed8123cb 100644
--- a/docs/Scientific_Computing/Supported_Applications/TensorFlow_on_GPUs.md
+++ b/docs/Scientific_Computing/Supported_Applications/TensorFlow_on_GPUs.md
@@ -26,7 +26,7 @@ running TensorFlow with GPU support.
!!! tip "See also"
- To request GPU resources using `--gpus-per-node` option of Slurm,
see the [GPU use on
- NeSI](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md)
+ NeSI](../../Scientific_Computing/Batch_Jobs/GPU_use_on_NeSI.md)
documentation page.
- To run TensorFlow on CPUs instead, have a look at our article
[TensorFlow on
diff --git a/docs/Scientific_Computing/Supported_Applications/VASP.md b/docs/Scientific_Computing/Supported_Applications/VASP.md
index c7319ba9c..a5a568461 100644
--- a/docs/Scientific_Computing/Supported_Applications/VASP.md
+++ b/docs/Scientific_Computing/Supported_Applications/VASP.md
@@ -134,7 +134,7 @@ team {% include "partials/support_request.html" %}.
### VASP runs faster on Milan nodes
-[Milan compute nodes](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md)
+[Milan compute nodes](../../Scientific_Computing/Batch_Jobs/Milan_Compute_Nodes.md)
are not only our most powerful compute nodes, but often have shorter
queues! These nodes are still opt-in at the moment, meaning you need to
specify `--partition=milan` in your Slurm script, which we strongly
@@ -323,9 +323,9 @@ production you should take into account performance and compute unit
cost.
General information about using GPUs on NeSI can be found
-[here](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md)
+[here](../../Scientific_Computing/Batch_Jobs/GPU_use_on_NeSI.md)
and details about the available GPUs on NeSI
-[here](../Running_Jobs_on_Maui_and_Mahuika/Available_GPUs_on_NeSI.md).
+[here](../Batch_Jobs/Available_GPUs_on_NeSI.md).
Here are some additional notes specific to running VASP on GPUs on NeSI:
diff --git a/docs/redirect_map.yml b/docs/redirect_map.yml
index 7473af1a6..b10484b89 100644
--- a/docs/redirect_map.yml
+++ b/docs/redirect_map.yml
@@ -42,8 +42,8 @@ hc/en-gb/articles/360000176116-Institutional-allocations.md: General/NeSI_Polici
hc/en-gb/articles/360000176116.md: General/NeSI_Policies/Institutional_allocations.md
hc/en-gb/articles/360000177256-NeSI-File-Systems-and-Quotas.md: Storage/File_Systems_and_Quotas/NeSI_File_Systems_and_Quotas.md
hc/en-gb/articles/360000177256.md: Storage/File_Systems_and_Quotas/NeSI_File_Systems_and_Quotas.md
-hc/en-gb/articles/360000201636-Job-prioritisation.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_prioritisation.md
-hc/en-gb/articles/360000201636.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_prioritisation.md
+hc/en-gb/articles/360000201636-Job-prioritisation.md: Scientific_Computing/Batch_Jobs/Job_prioritisation.md
+hc/en-gb/articles/360000201636.md: Scientific_Computing/Batch_Jobs/Job_prioritisation.md
hc/en-gb/articles/360000201756-Total-HPC-Resources-Available.md: General/NeSI_Policies/Total_HPC_Resources_Available.md
hc/en-gb/articles/360000201756.md: General/NeSI_Policies/Total_HPC_Resources_Available.md
hc/en-gb/articles/360000202136-How-we-review-applications.md: General/NeSI_Policies/How_we_review_applications.md
@@ -52,8 +52,8 @@ hc/en-gb/articles/360000202196-Project-Extensions-and-New-Allocations-on-Existin
hc/en-gb/articles/360000202196.md: Getting_Started/Accounts-Projects_and_Allocations/Project_Extensions_and_New_Allocations_on_Existing_Projects.md
hc/en-gb/articles/360000203075-Setting-Up-Two-Factor-Authentication.md: Getting_Started/Accessing_the_HPCs/Setting_Up_Two_Factor_Authentication.md
hc/en-gb/articles/360000203075.md: Getting_Started/Accessing_the_HPCs/Setting_Up_Two_Factor_Authentication.md
-hc/en-gb/articles/360000204076-Mahuika-Slurm-Partitions.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Mahuika_Slurm_Partitions.md
-hc/en-gb/articles/360000204076.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Mahuika_Slurm_Partitions.md
+hc/en-gb/articles/360000204076-Mahuika-Slurm-Partitions.md: Scientific_Computing/Batch_Jobs/Mahuika_Slurm_Partitions.md
+hc/en-gb/articles/360000204076.md: Scientific_Computing/Batch_Jobs/Mahuika_Slurm_Partitions.md
hc/en-gb/articles/360000205355-I-O-Performance-Considerations.md: Storage/File_Systems_and_Quotas/I-O_Performance_Considerations.md
hc/en-gb/articles/360000205355.md: Storage/File_Systems_and_Quotas/I-O_Performance_Considerations.md
hc/en-gb/articles/360000205435-File-permissions-and-groups.md: Storage/File_Systems_and_Quotas/File_permissions_and_groups.md
@@ -86,8 +86,8 @@ hc/en-gb/articles/360000550416-Why-am-I-seeing-Account-is-not-ready.md: General/
hc/en-gb/articles/360000550416.md: General/FAQs/Why_am_I_seeing_Account_is_not_ready.md
hc/en-gb/articles/360000552035-I-have-not-scanned-the-2FA-QR-code.md: General/FAQs/I_have_not_scanned_the_2FA_QR_code.md
hc/en-gb/articles/360000552035.md: General/FAQs/I_have_not_scanned_the_2FA_QR_code.md
-hc/en-gb/articles/360000568236-Hyperthreading.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md
-hc/en-gb/articles/360000568236.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md
+hc/en-gb/articles/360000568236-Hyperthreading.md: Scientific_Computing/Batch_Jobs/Hyperthreading.md
+hc/en-gb/articles/360000568236.md: Scientific_Computing/Batch_Jobs/Hyperthreading.md
hc/en-gb/articles/360000570215-Login-Troubleshooting.md: General/FAQs/Login_Troubleshooting.md
hc/en-gb/articles/360000570215.md: General/FAQs/Login_Troubleshooting.md
hc/en-gb/articles/360000578455-Moving-files-to-and-from-the-cluster.md: Getting_Started/Next_Steps/Moving_files_to_and_from_the_cluster.md
@@ -111,16 +111,16 @@ hc/en-gb/articles/360000691716-Slurm-Reference-Sheet.md: Getting_Started/Cheat_S
hc/en-gb/articles/360000691716.md: Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md
hc/en-gb/articles/360000693896-Applying-to-join-an-existing-NeSI-project.md: Getting_Started/Accounts-Projects_and_Allocations/Applying_to_join_an_existing_NeSI_project.md
hc/en-gb/articles/360000693896.md: Getting_Started/Accounts-Projects_and_Allocations/Applying_to_join_an_existing_NeSI_project.md
-hc/en-gb/articles/360000705196-SLURM-Best-Practice.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/SLURM-Best_Practice.md
-hc/en-gb/articles/360000705196.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/SLURM-Best_Practice.md
+hc/en-gb/articles/360000705196-SLURM-Best-Practice.md: Scientific_Computing/Batch_Jobs/SLURM-Best_Practice.md
+hc/en-gb/articles/360000705196.md: Scientific_Computing/Batch_Jobs/SLURM-Best_Practice.md
hc/en-gb/articles/360000718515-Supernova.md: Scientific_Computing/Supported_Applications/Supernova.md
hc/en-gb/articles/360000718515.md: Scientific_Computing/Supported_Applications/Supernova.md
hc/en-gb/articles/360000728016-Job-Scaling-Ascertaining-job-dimensions.md: Getting_Started/Next_Steps/Job_Scaling_Ascertaining_job_dimensions.md
hc/en-gb/articles/360000728016.md: Getting_Started/Next_Steps/Job_Scaling_Ascertaining_job_dimensions.md
hc/en-gb/articles/360000737555-Why-is-my-job-taking-a-long-time-to-start.md: General/FAQs/Why_is_my_job_taking_a_long_time_to_start.md
hc/en-gb/articles/360000737555.md: General/FAQs/Why_is_my_job_taking_a_long_time_to_start.md
-hc/en-gb/articles/360000743536-Fair-Share-How-jobs-get-prioritised.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share.md
-hc/en-gb/articles/360000743536.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share.md
+hc/en-gb/articles/360000743536-Fair-Share-How-jobs-get-prioritised.md: Scientific_Computing/Batch_Jobs/Fair_Share.md
+hc/en-gb/articles/360000743536.md: Scientific_Computing/Batch_Jobs/Fair_Share.md
hc/en-gb/articles/360000751636-System-status.md: Getting_Started/Getting_Help/System_status.md
hc/en-gb/articles/360000751636.md: Getting_Started/Getting_Help/System_status.md
hc/en-gb/articles/360000751916-Consultancy.md: Getting_Started/Getting_Help/Consultancy.md
@@ -137,8 +137,8 @@ hc/en-gb/articles/360000817476-Initial-Globus-Sign-Up-and-your-Globus-Identities
hc/en-gb/articles/360000817476.md: Storage/Data_Transfer_Services/Initial_Globus_Sign_Up-and_your_Globus_Identities.md
hc/en-gb/articles/360000871556-COMSOL.md: Scientific_Computing/Supported_Applications/COMSOL.md
hc/en-gb/articles/360000871556.md: Scientific_Computing/Supported_Applications/COMSOL.md
-hc/en-gb/articles/360000902955-NetCDF-HDF5-file-locking.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/NetCDF-HDF5_file_locking.md
-hc/en-gb/articles/360000902955.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/NetCDF-HDF5_file_locking.md
+hc/en-gb/articles/360000902955-NetCDF-HDF5-file-locking.md: Scientific_Computing/Batch_Jobs/NetCDF-HDF5_file_locking.md
+hc/en-gb/articles/360000902955.md: Scientific_Computing/Batch_Jobs/NetCDF-HDF5_file_locking.md
hc/en-gb/articles/360000903776-Finding-Job-Efficiency.md: Getting_Started/Next_Steps/Finding_Job_Efficiency.md
hc/en-gb/articles/360000903776.md: Getting_Started/Next_Steps/Finding_Job_Efficiency.md
hc/en-gb/articles/360000925176-Allocation-classes.md: General/NeSI_Policies/Allocation_classes.md
@@ -216,10 +216,10 @@ hc/en-gb/articles/360001237915-How-can-I-let-my-fellow-project-team-members-read
hc/en-gb/articles/360001237915.md: General/FAQs/How_can_I_let_my_fellow_project_team_members_read_or_write_my_files.md
hc/en-gb/articles/360001287235-Download-and-share-CMIP6-data-for-NIWA-researchers.md: Storage/Data_Transfer_Services/Download_and_share_CMIP6_data_for_NIWA_researchers.md
hc/en-gb/articles/360001287235.md: Storage/Data_Transfer_Services/Download_and_share_CMIP6_data_for_NIWA_researchers.md
-hc/en-gb/articles/360001316356-Slurm-Interactive-Sessions.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md
-hc/en-gb/articles/360001316356.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md
-hc/en-gb/articles/360001330415-Checksums.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Checksums.md
-hc/en-gb/articles/360001330415.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Checksums.md
+hc/en-gb/articles/360001316356-Slurm-Interactive-Sessions.md: Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md
+hc/en-gb/articles/360001316356.md: Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md
+hc/en-gb/articles/360001330415-Checksums.md: Scientific_Computing/Batch_Jobs/Checksums.md
+hc/en-gb/articles/360001330415.md: Scientific_Computing/Batch_Jobs/Checksums.md
hc/en-gb/articles/360001332675-Find-execution-hot-spots-with-VTune.md: Scientific_Computing/Supported_Applications/VTune.md
hc/en-gb/articles/360001332675.md: Scientific_Computing/Supported_Applications/VTune.md
hc/en-gb/articles/360001343015-TurboVNC.md: Scientific_Computing/Supported_Applications/TurboVNC.md
@@ -230,12 +230,12 @@ hc/en-gb/articles/360001392636-Configuring-Dask-MPI-jobs.md: Scientific_Computin
hc/en-gb/articles/360001392636.md: Scientific_Computing/HPC_Software_Environment/Configuring_Dask_MPI_jobs.md
hc/en-gb/articles/360001393596-Unix-Shell-Reference-Sheet.md: Getting_Started/Cheat_Sheets/Bash-Reference_Sheet.md
hc/en-gb/articles/360001393596.md: Getting_Started/Cheat_Sheets/Bash-Reference_Sheet.md
-hc/en-gb/articles/360001413096-Job-Checkpointing.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_Checkpointing.md
-hc/en-gb/articles/360001413096.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_Checkpointing.md
+hc/en-gb/articles/360001413096-Job-Checkpointing.md: Scientific_Computing/Batch_Jobs/Job_Checkpointing.md
+hc/en-gb/articles/360001413096.md: Scientific_Computing/Batch_Jobs/Job_Checkpointing.md
hc/en-gb/articles/360001419576-MAKER.md: Scientific_Computing/Supported_Applications/MAKER.md
hc/en-gb/articles/360001419576.md: Scientific_Computing/Supported_Applications/MAKER.md
-hc/en-gb/articles/360001471955-GPU-use-on-NeSI.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md
-hc/en-gb/articles/360001471955.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md
+hc/en-gb/articles/360001471955-GPU-use-on-NeSI.md: Scientific_Computing/Batch_Jobs/GPU_use_on_NeSI.md
+hc/en-gb/articles/360001471955.md: Scientific_Computing/Batch_Jobs/GPU_use_on_NeSI.md
hc/en-gb/articles/360001500156-NVIDIA-GPU-Containers.md: Scientific_Computing/HPC_Software_Environment/NVIDIA_GPU_Containers.md
hc/en-gb/articles/360001500156.md: Scientific_Computing/HPC_Software_Environment/NVIDIA_GPU_Containers.md
hc/en-gb/articles/360001508515-Git-Reference-Sheet.md: Getting_Started/Cheat_Sheets/Git-Reference_Sheet.md
@@ -339,8 +339,8 @@ hc/en-gb/articles/4414958674831-Jupyter-kernels-Tool-assisted-management.md: Sci
hc/en-gb/articles/4414958674831.md: Scientific_Computing/Interactive_computing_using_Jupyter/Jupyter_kernels_Tool_assisted_management.md
hc/en-gb/articles/4415563282959-How-do-I-find-out-the-size-of-a-directory.md: General/FAQs/How_do_I_find_out_the_size_of_a_directory.md
hc/en-gb/articles/4415563282959.md: General/FAQs/How_do_I_find_out_the_size_of_a_directory.md
-hc/en-gb/articles/4416692988047-Checking-your-project-s-usage-using-nn-corehour-usage.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Checking_your_projects_usage_using_nn_corehour_usage.md
-hc/en-gb/articles/4416692988047.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Checking_your_projects_usage_using_nn_corehour_usage.md
+hc/en-gb/articles/4416692988047-Checking-your-project-s-usage-using-nn-corehour-usage.md: Scientific_Computing/Batch_Jobs/Checking_your_projects_usage_using_nn_corehour_usage.md
+hc/en-gb/articles/4416692988047.md: Scientific_Computing/Batch_Jobs/Checking_your_projects_usage_using_nn_corehour_usage.md
hc/en-gb/articles/4416829135887-How-do-I-fix-my-locale-and-language-settings.md: General/FAQs/How_do_I_fix_my_locale_and_language_settings.md
hc/en-gb/articles/4416829135887.md: General/FAQs/How_do_I_fix_my_locale_and_language_settings.md
hc/en-gb/articles/4543094513039-my-nesi-org-nz-release-notes-v2-7-0.md: Getting_Started/my-nesi-org-nz/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-7-0.md
@@ -411,8 +411,8 @@ hc/en-gb/articles/6325030048655-jupyter-nesi-org-nz-release-notes-02-02-2023.md:
hc/en-gb/articles/6325030048655.md: Scientific_Computing/Interactive_computing_using_Jupyter/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_02-02-2023.md
hc/en-gb/articles/6359601973135-Data-Compression.md: Storage/File_Systems_and_Quotas/Data_Compression.md
hc/en-gb/articles/6359601973135.md: Storage/File_Systems_and_Quotas/Data_Compression.md
-hc/en-gb/articles/6367209795471-Milan-Compute-Nodes.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md
-hc/en-gb/articles/6367209795471.md: Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md
+hc/en-gb/articles/6367209795471-Milan-Compute-Nodes.md: Scientific_Computing/Batch_Jobs/Milan_Compute_Nodes.md
+hc/en-gb/articles/6367209795471.md: Scientific_Computing/Batch_Jobs/Milan_Compute_Nodes.md
hc/en-gb/articles/6443618773519-GATK.md: Scientific_Computing/Supported_Applications/GATK.md
hc/en-gb/articles/6443618773519.md: Scientific_Computing/Supported_Applications/GATK.md
hc/en-gb/articles/6529511928207-BRAKER.md: Scientific_Computing/Supported_Applications/BRAKER.md
@@ -493,3 +493,14 @@ Scientific_Computing/HPC_Software_Environment/Finding_Software.md : Scientific_C
hc.md: index.md
hc/en-gb.md: index.md
Storage/Freezer_long_term_storage.md : Storage/Long_Term_Storage/Freezer_long_term_storage.md
+Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Checksums.md: Scientific_Computing/Batch_Jobs/Checksums.md
+Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share.md: Scientific_Computing/Batch_Jobs/Fair_Share.md
+Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md: Scientific_Computing/Batch_Jobs/GPU_use_on_NeSI.md
+Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md: Scientific_Computing/Batch_Jobs/Hyperthreading.md
+Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_Checkpointing.md: Scientific_Computing/Batch_Jobs/Job_Checkpointing.md
+Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_prioritisation.md: Scientific_Computing/Batch_Jobs/Job_prioritisation.md
+Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Mahuika_Slurm_Partitions.md: Scientific_Computing/Batch_Jobs/Mahuika_Slurm_Partitions.md
+Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md: Scientific_Computing/Batch_Jobs/Milan_Compute_Nodes.md
+Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/NetCDF-HDF5_file_locking.md: Scientific_Computing/Batch_Jobs/NetCDF-HDF5_file_locking.md
+Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/SLURM-Best_Practice.md: Scientific_Computing/Batch_Jobs/SLURM-Best_Practice.md
+Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md: Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md