Memory Leaks during Simulations with increasing timesteps #2054
-
Hello everyone, I'm currently running HOOMD-blue 5.1.1 simulations with Python 3.12 (on Windows 11 via WSL2, with an Ubuntu Linux backend), and memory usage keeps growing as the simulation advances through more timesteps. What I've already tried:
What could I do to free memory during the simulation so that it scales to bigger and longer runs? Thanks in advance and best regards.
-
HOOMD-blue regularly sees production use in billion-timestep simulations. There is nothing in the package that inherently increases memory as a function of the step count. There are, however, ways you might use HOOMD-blue outside normal parameters that could cause increasing memory usage. You don't give enough information to even guess whether that is the case, or whether it is something else in your script. I suggest that you use memray (https://github.com/bloomberg/memray) to identify which library/class/function is consuming so much memory. If that consumer is in HOOMD-blue, then I can help you identify ways to mitigate the usage.
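For reference, here is a minimal sketch of one way to capture a profile with memray's Tracker API (the run_simulation placeholder and output file name are illustrative, not from this thread):

```python
# A minimal sketch: wrap the simulation in memray's Tracker so every
# allocation is recorded to a capture file for later analysis.
import memray


def run_simulation(steps):
    """Placeholder for your HOOMD-blue setup followed by sim.run(steps)."""
    ...


with memray.Tracker("memray-hoomd.bin"):
    run_simulation(1_000_000)
```

Afterwards, render the capture with `memray flamegraph memray-hoomd.bin`. Running the unmodified script under `memray run your_script.py` works as well.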
-
I tried switching my custom potential for a Lennard-Jones potential with the same r_cut and get the same problem of increasing memory usage.
Here is the memray flamegraph you suggested, generated with my custom potential.
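As context for the swap described above, a minimal sketch of a HOOMD-blue 5.x script using the built-in Lennard-Jones pair potential, assuming initialization from a GSD file (the file name, kT, dt, buffer, and r_cut values are placeholders, not taken from the actual script):

```python
# A minimal sketch of an LJ run in HOOMD-blue 5.x; all parameter values
# below are illustrative placeholders.
import hoomd

device = hoomd.device.CPU()
sim = hoomd.Simulation(device=device, seed=1)
sim.create_state_from_gsd(filename='init.gsd')  # placeholder file name

# Built-in Lennard-Jones potential with the same neighbor list and r_cut
# that the custom potential used.
cell = hoomd.md.nlist.Cell(buffer=0.4)
lj = hoomd.md.pair.LJ(nlist=cell, default_r_cut=2.5)
lj.params[('A', 'A')] = dict(epsilon=1.0, sigma=1.0)

nvt = hoomd.md.methods.ConstantVolume(
    filter=hoomd.filter.All(),
    thermostat=hoomd.md.methods.thermostats.Bussi(kT=1.0),
)
sim.operations.integrator = hoomd.md.Integrator(
    dt=0.005, methods=[nvt], forces=[lj]
)
sim.run(100_000)
```

Seeing the same growth with the built-in potential helps rule out the custom potential itself as the source of the leak.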
It is not a minimal working example if others cannot execute it. I get
RuntimeError: GSD: No such file or directory - Gel_Particle.gsd
when I run your script. You don't even mention your simulation box dimensions.
If you don't want to use nlist.Tree, that is your choice. It will solve your problem. You can instead execute your simulations on a system with more RAM, or you can reduce the size of your simulation box. There really isn't anything more that anyone can do to help you. I already explained all the variables that influence the memory usage of nlist.Cell.
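As a rough sketch of the suggested change, assuming a typical pair-potential setup (buffer, r_cut, and particle types here are placeholders): nlist.Tree stores a bounding-volume hierarchy whose memory footprint scales with the number of particles, whereas nlist.Cell allocates a cell grid whose size grows with the box volume.

```python
# A minimal sketch: replace nlist.Cell with nlist.Tree so neighbor-list
# memory scales with particle count rather than box volume.
# Parameter values below are illustrative placeholders.
import hoomd

tree = hoomd.md.nlist.Tree(buffer=0.4)
lj = hoomd.md.pair.LJ(nlist=tree, default_r_cut=2.5)
lj.params[('A', 'A')] = dict(epsilon=1.0, sigma=1.0)
```

For a large, dilute box this typically reduces neighbor-list memory substantially, at the cost of a somewhat slower neighbor-list build than nlist.Cell on dense systems.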