Probe sparse dict learning #45
base: main
Conversation
learning, will do a different pull request for OPR
src/ptychi/reconstructors/lsqml.py
Outdated
chi_rm_subpx_shft = self.adjoint_shift_probe_update_direction(
    indices, chi, first_mode_only=True
)
delta_p_i = self.parameter_group.probe.get_probe_update_direction_sparse_code_probe_shared(
Split this function into two: the first just calculates the updates of the coefficients and stores them in the gradient (use self.set_grad()); the second applies the updates. The apply function should be called in LSQML's apply_reconstruction_parameter_updates.
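A minimal sketch of that split, using a toy stand-in class: only set_grad and apply_reconstruction_parameter_updates are names taken from this review; compute_sparse_code_update, apply_sparse_code_update, and the learning rate are hypothetical.

```python
import torch

class ProbeParamSketch:
    """Toy stand-in for the probe parameter; not the real ptychi class."""

    def __init__(self, n_coeffs: int):
        self.data = torch.zeros(n_coeffs)
        self.grad = None

    def set_grad(self, grad: torch.Tensor) -> None:
        # Store the update direction instead of applying it immediately.
        self.grad = grad

    def compute_sparse_code_update(self, direction: torch.Tensor) -> None:
        # First method: calculate the coefficient update and stash it
        # as a gradient; nothing is applied here.
        self.set_grad(-direction)

    def apply_sparse_code_update(self, lr: float = 0.1) -> None:
        # Second method: apply the stored update. In LSQML this would be
        # driven from apply_reconstruction_parameter_updates.
        if self.grad is not None:
            self.data = self.data - lr * self.grad
            self.grad = None

probe = ProbeParamSketch(8)
probe.compute_sparse_code_update(torch.randn(8))
probe.apply_sparse_code_update()
```

Staging updates in the gradient first lets the reconstructor apply every parameter update in one place, so the update order stays explicit.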
I give up; I'm not sure how to use the existing set_grad method for the probe to set the sparse code. The probe class only has a set_grad method for the probe itself, not for the sparse code. I've added a set_gradient_sparse_code_probe_shared method inside the SynthesisDictLearnProbe class for now; it gets used in the get_probe_update_direction_sparse_code_probe_shared method.
src/ptychi/reconstructors/pie.py
Outdated
abs_sparse_code = torch.abs(sparse_code)
sparse_code_sorted = torch.sort(abs_sparse_code, dim=0, descending=True)
if (self.parameter_group.probe.representation == "sparse_code"
        and
Put this part in a separate method similar to what we did in LSQML
done
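For context on the pie.py snippet above: sorting coefficient magnitudes in descending order is the usual first step of top-k hard thresholding in synthesis dictionary learning. The diff only shows the sort, so the thresholding below is a guess; hard_threshold_sparse_code and k are hypothetical names.

```python
import torch

def hard_threshold_sparse_code(sparse_code: torch.Tensor, k: int) -> torch.Tensor:
    # Keep only the k largest-magnitude coefficients along dim 0 and
    # zero the rest (illustrative thresholding step).
    abs_sparse_code = torch.abs(sparse_code)
    sorted_abs = torch.sort(abs_sparse_code, dim=0, descending=True).values
    cutoff = sorted_abs[k - 1]            # k-th largest magnitude per column
    return sparse_code * (abs_sparse_code >= cutoff)

code = torch.randn(16, 4)                 # 16 dictionary atoms, 4 probe modes
sparse = hard_threshold_sparse_code(code, k=3)
print((sparse != 0).sum(dim=0))           # ~3 nonzeros per column
```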
chi_rm_subpx_shft = self.adjoint_shift_probe_update_direction(
    indices, chi, first_mode_only=True
)
delta_p_i, optimal_delta_sparse_code_vs_spos = self.parameter_group.probe.get_probe_update_direction_sparse_code_probe_shared(
Take set_grad_sparse_code out of the get function.
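A rough sketch of the requested separation; only the two method names come from this thread, and the dictionary projection is illustrative, not ptychi's actual math.

```python
import torch

class SynthesisDictLearnProbeSketch:
    """Illustrative stand-in, not the real SynthesisDictLearnProbe."""

    def __init__(self, n_pixels: int = 64, n_atoms: int = 16):
        self.dictionary = torch.randn(n_pixels, n_atoms, dtype=torch.complex64)
        self.sparse_code_grad = None

    def get_probe_update_direction_sparse_code_probe_shared(self, chi):
        # Pure "get": return the update directions without touching
        # any gradient state.
        delta_sparse_code = self.dictionary.conj().T @ chi
        delta_p = self.dictionary @ delta_sparse_code
        return delta_p, delta_sparse_code

    def set_grad_sparse_code(self, delta_sparse_code):
        # Separate setter, called by the reconstructor after the get.
        self.sparse_code_grad = -delta_sparse_code

probe = SynthesisDictLearnProbeSketch()
chi = torch.randn(64, dtype=torch.complex64)
delta_p, delta_code = probe.get_probe_update_direction_sparse_code_probe_shared(chi)
probe.set_grad_sparse_code(delta_code)
```

With the setter pulled out, calling the get no longer mutates gradient state as a side effect.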
…arameter_group.synchronize_optimizable_parameter_gradients` works with sparse code
…code_shared_updates`; split set sparse code update out of get sparse code method
Features/fixes
The previous dictionary learning code for the shared probe updates tried to do too much in a single pull request (both shared and OPR sparse-representation updates). This one covers only the shared modes; OPR will follow in a later pull request.
Related issues (optional)
Previous attempt: "Added dictionary learning (DL) functionality to LSQML, cleaned up tensor operations in RPIE + DL" (#42)
Mentions
Ming
Checklist
Have you...