Restart from different polynomial degrees #2358

Conversation
Review checklist

This checklist is meant to assist creators of PRs (to let them know what reviewers will typically look for) and reviewers (to guide them in a structured review process). Items do not need to be checked explicitly for a PR to be eligible for merging.

- Purpose and scope
- Code quality
- Documentation
- Testing
- Performance
- Verification

Created with ❤️ by the Trixi.jl community.
    # Version for MPI-parallel I/O
    function interpolate_restart_file!(u, file, slice,
                                       mesh, equations, dg, cache,
I did not add the dispatch on the mesh here, as it should not be necessary.
Unless it does not work for DGMulti?
Also, this only works for Cartesian elements, right? For curved elements you'd need to include the grid metrics in the interpolation/projection step, IIRC (cc @andrewwinters5000 )
Hm, yeah, true - I restrict this to DGSEM for now.
Good point - I thought I was in safe waters, as everything happens on the reference element, but maybe I need something like this?
Trixi.jl/src/callbacks_step/amr_dg2d.jl, lines 108 to 116 at afdb486:

    # Loop over all elements in old container and scale the old solution by the Jacobian
    # prior to projection
    for old_element_id in 1:old_n_elements
        for v in eachvariable(equations)
            old_u[v, .., old_element_id] .= (old_u[v, .., old_element_id] ./
                                             old_inverse_jacobian[..,
                                                                  old_element_id])
        end
    end
So I tested this with the 2D elixir_advection_unstructured_flag, where I increase the base polydeg from 3 to 4 and then restart the subsequent run with polydeg = 3 (this is required since the mesh has polydeg 3).
This is the error report for "uniform" refinement:

    L2 error:   5.43210258e-04    Linf error: 7.61325588e-03
    L2 error:   4.81669631e-05    Linf error: 7.53667941e-04
    L2 error:   3.98543153e-06    Linf error: 8.58091296e-05
    L2 error:   3.11114766e-07    Linf error: 6.82719964e-06

which is not quite 4th order, but certainly more than 3rd order.
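For reference, the experimental orders of convergence implied by the reported L2 errors can be computed directly, assuming each refinement level halves the mesh size (uniform refinement):

```julia
# Experimental order of convergence (EOC) from the L2 errors reported above,
# assuming each refinement level halves the mesh size.
errors = [5.43210258e-4, 4.81669631e-5, 3.98543153e-6, 3.11114766e-7]
eoc = log2.(errors[1:end - 1] ./ errors[2:end])
# eoc ≈ [3.50, 3.59, 3.68], i.e. above 3rd order but below 4th
```

The rates creep upward with refinement, which matches the "more than 3rd, not quite 4th order" observation.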
But it would be good if @andrewwinters5000 could make a statement here :)
Yes, it should be okay to have a lower polydeg for the mesh compared to the solver. But your projections will need to take the Jacobians into account: in the mapped coordinates it is in fact the quantity J*u that is conserved, and not simply u. However, on static meshes this fact is hidden and the Jacobian is usually divided off for convenience.
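A minimal runnable sketch of that recipe on a single 1D element, with toy data (the interpolation matrix P and the Jacobian values are made up for illustration; this is not the PR's actual implementation):

```julia
# Conservative change of polynomial degree on a mapped element:
# transfer the conserved quantity J*u, not u itself.
old_u = [2.0, 4.0]                  # nodal values at nodes -1, +1 (polydeg 1)
old_jacobian = [0.5, 0.5]           # toy metric term at the old nodes
P = [1.0 0.0; 0.5 0.5; 0.0 1.0]     # linear interpolation to nodes -1, 0, +1
new_jacobian = [0.5, 0.5, 0.5]      # toy metric term at the new nodes

ju_new = P * (old_u .* old_jacobian)  # interpolate the conserved J*u
new_u = ju_new ./ new_jacobian        # divide the Jacobian back off
# new_u == [2.0, 3.0, 4.0]
```

With a constant Jacobian the scaling cancels out, as in the static-mesh case described above; on curved elements the two Jacobian factors differ per node, which is exactly why the scaling step matters.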
Okay, so I would need to do something like this here?
Codecov Report

Attention: Patch coverage is

Additional details and impacted files:

    @@            Coverage Diff             @@
    ##             main    #2358      +/-   ##
    ==========================================
    - Coverage   96.80%   91.74%   -5.06%
    ==========================================
      Files         494      494
      Lines       40825    40905      +80
    ==========================================
    - Hits        39518    37525    -1993
    - Misses       1307     3380    +2073
What is the status of this PR - ready for a review by @sloede or myself, or would you like to change something based on the discussion above?
I would wait for @SimonCan to do a first review, since he originally implemented the restarting mechanism. Other than that, I am of course grateful for being pointed to any flaws you immediately observe :)
Great, thanks. Please feel free to ping me when Simon is done with his review.
Looks good to me now.
I haven't reviewed the parallel part yet, but I wanted to get these comments out before the Easter break.
Overall nice idea!
    # NOTE: Normally we would also need to exchange the `AnalysisCallback` when changing the
    # semidiscretization. This, however, leads in the test environment to a confusing error
    # report, as the error report from the simulation is different from the one from the
    # test environment. Thus, we do not exchange the `AnalysisCallback` here.
Are these even valid error measures anymore, then? I mean, if the polydeg of the solver and the analysis callback do not match?
Hm, somewhat, I guess? But in the tests we do not really care about the meaning of the numbers, but more that there are some numbers at all (which stay constant over development), right?
    semi = SemidiscretizationHyperbolic(mesh, equations, initial_condition, solver,
                                        boundary_conditions = boundary_conditions)

    ode = semidiscretize(semi, tspan, restart_filename)
Should we add a keyword to semidiscretize to explicitly allow polydeg conversion (i.e., conversely, to throw an error if it is missing)? I am slightly worried about people accidentally restarting simulations at different polydegs and then wondering what went wrong. Something like

    - ode = semidiscretize(semi, tspan, restart_filename)
    + ode = semidiscretize(semi, tspan, restart_filename; allow_polydeg_conversion=true)

maybe. Or is this too much hand-holding for the users?
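A self-contained sketch of how such an opt-in guard could look. Both check_polydeg_conversion and the keyword name are hypothetical illustrations, not existing Trixi.jl API:

```julia
# Hypothetical guard against accidentally restarting at a different polydeg.
function check_polydeg_conversion(restart_polydeg, solver_polydeg;
                                  allow_polydeg_conversion = false)
    if restart_polydeg != solver_polydeg && !allow_polydeg_conversion
        error("Restart file has polydeg $restart_polydeg, but the solver has " *
              "polydeg $solver_polydeg. Pass allow_polydeg_conversion = true " *
              "to convert explicitly.")
    end
    return true
end

check_polydeg_conversion(3, 3)                                  # matching degrees: ok
check_polydeg_conversion(4, 3; allow_polydeg_conversion = true) # explicit opt-in: ok
# check_polydeg_conversion(4, 3)  # would throw an error
```

The default of false makes the error the safe path, so a silent accidental conversion cannot happen.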
I have no strong feelings about this. One thing that comes to my mind is that we should throw a warning in the AnalysisCallback if the polydegs do not "match".
Co-authored-by: Joshua Lampert <[email protected]>
LGTM once the final conversations are resolved.
Thanks for your input, Joshua!
Looks good to me.
Thanks a lot! I just have a minor suggestion. Does anybody know whether @sloede wants to review the PR after requesting changes some time ago?
Co-authored-by: Hendrik Ranocha <[email protected]>
Just some suggested cleanup of the docstrings. One thing that occurred to me: an error should get thrown if the attempted new solver polydeg is less than the existing mesh_polydeg, as one cannot guarantee free-stream preservation in this case.
Resolved (outdated) review threads on:
examples/t8code_2d_dgsem/elixir_advection_restart_higher_polydeg.jl
src/semidiscretization/semidiscretization_hyperbolic_parabolic.jl
Co-authored-by: Andrew Winters <[email protected]>
…i.jl into Restart_Diff_PolyDeg
Co-authored-by: Andrew Winters <[email protected]>
…i.jl into Restart_Diff_PolyDeg
See point 1 in #2316