>>> conda activate atomate2

# installing atomate2
- >>> pip install git+https://github.com/orionarcher/atomate2.git
+ >>> pip install git+https://github.com/orionarcher/atomate2

# installing classical_md dependencies
>>> conda install -c conda-forge --file .github/classical_md_requirements.txt
@@ -18,7 +18,7 @@ you can clone the repository and install from source.

```bash
# installing atomate2
- >>> git clone https://github.com/orionarcher/atomate2.git
+ >>> git clone https://github.com/orionarcher/atomate2
>>> cd atomate2
>>> git branch openff
>>> git checkout openff
@@ -33,7 +33,7 @@ you intend to run on GPU, make sure that the tests are passing for CUDA.
>>> python -m openmm.testInstallation
```

- # Understanding Atomate2 OpenMM
+ ## Understanding Atomate2 OpenMM

Atomate2 is really just a collection of jobflow workflows relevant to
materials science. In all the workflows, we pass our system of interest
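
In other words, each workflow is a jobflow `Flow` whose jobs hand their outputs to the next job. A minimal, generic sketch of that pattern (plain jobflow with toy jobs standing in for the actual OpenMM makers):

```python
from jobflow import Flow, job, run_locally


@job
def make_system(smiles: list[str]) -> dict:
    # stand-in for a job like generate_interchange: describe the system of interest
    return {"smiles": smiles, "n_molecules": len(smiles)}


@job
def simulate(system: dict) -> dict:
    # stand-in for a simulation maker: consume the previous job's output
    return {"ran": True, "n_molecules": system["n_molecules"]}


system_job = make_system(["O", "CCO"])
sim_job = simulate(system_job.output)  # the OutputReference wires the two jobs together

flow = Flow([system_job, sim_job], name="toy_workflow")
run_locally(flow)
```
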
@@ -55,7 +55,6 @@ The first job we need to create generates the `Interchange` object.
To specify the system of interest, we give it the SMILES strings,
counts, and names (optional) of the molecules we want to include.

-
```python
from atomate2.openff.core import generate_interchange
@@ -73,7 +72,6 @@ out the `create_mol_spec` function in the `atomate2.openff.utils`
module. Under the hood, this is being called on each mol_spec dict.
This means the code below is functionally identical to the code above.

-
```python
from atomate2.openff.utils import create_mol_spec
@@ -90,7 +88,6 @@ object, which we can pass to the next stage of the simulation.
NOTE: It's actually mandatory to include partial charges
for PF6- here, because the built-in partial charge method fails.

-
```python
import numpy as np
from pymatgen.core.structure import Molecule
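
For orientation, here is a minimal sketch of what supplying explicit partial charges can look like: an idealized PF6- geometry as a pymatgen `Molecule` plus a per-atom charge array. The coordinates, charge values, and spec keys below are illustrative assumptions, not the verbatim tutorial input:

```python
import numpy as np
from pymatgen.core.structure import Molecule

# idealized octahedral PF6- geometry (P at the origin, F at ~1.6 Å): illustrative only
pf6_geometry = Molecule(
    ["P", "F", "F", "F", "F", "F", "F"],
    [
        [0.0, 0.0, 0.0],
        [1.6, 0.0, 0.0],
        [-1.6, 0.0, 0.0],
        [0.0, 1.6, 0.0],
        [0.0, -1.6, 0.0],
        [0.0, 0.0, 1.6],
        [0.0, 0.0, -1.6],
    ],
    charge=-1,
)

# one partial charge per atom, summing to the net -1 charge
pf6_charges = np.array([1.34, -0.39, -0.39, -0.39, -0.39, -0.39, -0.39])

# hypothetical spec entry: the key names are assumptions about the mol_spec format
pf6_spec = {
    "smiles": "F[P-](F)(F)(F)(F)F",
    "count": 50,
    "name": "pf6",
    "geometry": pf6_geometry,
    "partial_charges": pf6_charges,
}
```
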
@@ -205,13 +202,13 @@ Awesome! At this point, we've run a workflow and could start analyzing
our data. Before we get there though, let's go through some of the
other simulation options available.

- # Digging Deeper
+ ## Digging Deeper

Atomate2 OpenMM supports running a variety of workflows with different
configurations. Below we dig into some of the more advanced options.

-
### Configuring the Simulation
+
<details>
<summary>Learn more about the configuration of OpenMM simulations</summary>
@@ -228,14 +225,13 @@ once and have it apply to all stages of the simulation. The value inheritance
is as follows: 1) any explicitly set value, 2) the value from the previous
maker, 3) the default value (as shown below).

-
```python
from atomate2.openmm.jobs.base import OPENMM_MAKER_DEFAULTS

print(OPENMM_MAKER_DEFAULTS)
```

- ```
+ ```py
{
    "step_size": 0.001,
    "temperature": 298,
@@ -339,7 +335,6 @@ Rather than use `jobflow.yaml`, you could also create the stores in
Python and pass the stores to the `run_locally` function. This is a bit
more code, so usually the prior method is preferred.

-
```python
from jobflow import run_locally, JobStore
from maggma.stores import MongoStore, S3Store
@@ -374,19 +369,18 @@ run_locally(
    ensure_success=True,
)
```
+
</details>

### Running on GPUs

<details>
<summary>Learn to accelerate MD simulations with GPUs</summary>

-
Running on a GPU is nearly as simple as running on a CPU. The only difference
is that you need to specify the `platform_properties` argument in the
`EnergyMinimizationMaker` with the `DeviceIndex` of the GPU you want to use.

-
```python
production_maker = OpenMMFlowMaker(
    name="test_production",
@@ -414,7 +408,6 @@ First you'll need to install mpi4py.

Then you can modify and run the following script to distribute the work across the GPUs.

-
```python
# other imports
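
The heart of that script is the MPI rank: each launched process reads its rank and uses it both to pick which flow to run and which GPU `DeviceIndex` to claim. A minimal sketch of just that piece (the variable names are illustrative):

```python
from mpi4py import MPI

# mpirun/mpiexec launches several copies of this script, one per GPU
rank = MPI.COMM_WORLD.Get_rank()

# OpenMM platform properties are strings, so the GPU index is str(rank)
gpu_properties = {"DeviceIndex": str(rank)}

print(f"process {rank} will run its flow on GPU {rank}")
```
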
@@ -457,15 +450,16 @@ for i in range(4):
# this script will run four times, each with a different rank, thus distributing the work across the four GPUs.
run_locally(flows[rank], ensure_success=True)
```
+
</details>

- # Analysis with Emmet
+ ## Analysis with Emmet

For now, you'll need to make sure you have a particular emmet branch installed.
Later the builders will be integrated into `main`.

```bash
- pip install git+https://github.com/orionarcher/emmet.git@md_builders
+ pip install git+https://github.com/orionarcher/emmet@md_builders
```

### Analyzing Local Data
@@ -498,6 +492,7 @@ u = create_universe(

solute = create_solute(u, solute_name="Li", networking_solvents=["PF6"])
```
+
</details>

### Setting up builders
@@ -556,6 +551,7 @@ builder.connect()
<summary>Here are some more convenient queries.</summary>

Here are some more convenient queries we could use!
+
```python
# query jobs from a specific day
april_16 = {"completed_at": {"$regex": "^2024-04-16"}}
@@ -570,6 +566,7 @@ job_uuids = [
]
my_specific_jobs = {"uuid": {"$in": job_uuids}}
```
+
</details>

</details>
@@ -611,6 +608,7 @@ solute = create_solute(
    fallback_radius=3,
)
```
+
</details>

### Automated analysis with builders