mkdocs/docs/HPC/troubleshooting.md
**Lack of sufficient memory**: When there is not enough memory available, or not enough memory bandwidth,
it is likely that you will not see a significant speedup when using more cores (since each thread or process most likely requires additional memory).

There is more info on [running multi-core workloads](multi_core_jobs.md) on the {{ hpcinfra }}.
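As a minimal illustration of process-based multi-core use (a generic sketch using Python's standard `multiprocessing` module, not specific to the {{ hpcinfra }}): each worker is a separate process with its own memory space, which is exactly why memory use grows with the number of workers, as described above.

```python
from multiprocessing import Pool


def square(x: int) -> int:
    # Runs in a separate worker process; each worker has its own
    # copy of whatever data it needs, so memory use scales with
    # the number of workers.
    return x * x


if __name__ == "__main__":
    # Match the pool size to the number of cores you actually
    # requested from the scheduler, not the total cores on the node.
    with Pool(processes=4) as pool:
        results = pool.map(square, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

If each worker needs a large read-only dataset, loading it once per process (rather than sharing it) can exhaust memory long before you run out of cores.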
### Using multiple nodes
When trying to use multiple (worker) nodes to improve the performance of your workloads, you may not see a (significant) speedup.
Using the resources of multiple nodes is often done using a [Message Passing Interface (MPI)](https://en.wikipedia.org/wiki/Message_Passing_Interface) library.
MPI allows nodes to communicate and coordinate, but it also introduces additional complexity.

We have an example of [how you can make beneficial use of multiple nodes](multi_core_jobs.md#parallel-computing-with-mpi).
You can also use MPI in Python; some useful packages that are also available on the HPC are: