
Rotated Gradients by jcapriot · Pull Request #1167 · simpeg/simpeg

Rotated Gradients #1167

Merged

jcapriot merged 44 commits into simpeg:main on Mar 28, 2024

Conversation

jcapriot
Member

Constructs an (optionally) rotated gradient operator on general meshes.

This makes use of discretize's get_face_inner_product operator to construct the regularization as an inner product of cell gradients. Essentially, the weighting parameters form an anisotropic model used to combine the gradients.

Because it is built this way, the operator also works on general mesh types, so it can construct smoothness regularization operators on unstructured meshes as well.

Things to check (in addition to the implementation):

  • The name of this regularization operator
  • cell weights (these are currently mixed into the anisotropic model, which I believe is consistent)
  • face weights (not yet implemented, but I think they should be square-rooted and applied on both sides of the current _W matrix)
  • Future: how to extend this for non-L1 norms.
    • I think it should be handled similarly to how the current total option is handled, but it will require some experimentation to see how this applies to rotated systems.
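As a rough illustration of the inner-product-of-gradients idea described above, here is a minimal numpy sketch (not the PR's implementation; the grid, weights, and rotation angle are all hypothetical):

```python
import numpy as np

# Sketch: the regularization is an inner product of cell gradients, where a
# per-cell symmetric tensor Sigma (built from the smoothing directions and
# their weights) combines the gradient components:
#   phi(m) = 0.5 * sum_c v_c * g_c^T Sigma g_c
nx, ny = 8, 8
h = 1.0 / nx
m = np.random.rand(nx, ny)

gx = np.diff(m, axis=0) / h  # x-differences on interior faces
gy = np.diff(m, axis=1) / h  # y-differences on interior faces

# average face differences back to cell centers (simplification for the sketch)
gx_c = np.zeros((nx, ny)); gx_c[:-1] += gx / 2; gx_c[1:] += gx / 2
gy_c = np.zeros((nx, ny)); gy_c[:, :-1] += gy / 2; gy_c[:, 1:] += gy / 2

# rotated smoothing directions with different weights -> anisotropic tensor
theta = np.deg2rad(30.0)
v1 = np.array([np.cos(theta), np.sin(theta)])
v2 = np.array([-np.sin(theta), np.cos(theta)])
Sigma = 1.0 * np.outer(v1, v1) + 0.25 * np.outer(v2, v2)

g = np.stack([gx_c.ravel(), gy_c.ravel()], axis=1)  # (n_cells, 2)
phi = 0.5 * h**2 * np.einsum("ci,ij,cj->", g, Sigma, g)
assert phi >= 0.0  # Sigma is positive definite, so phi is a valid measure
```

In the PR itself, this per-cell tensor is what gets passed to discretize's get_face_inner_product as the anisotropic model.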

@jcapriot
Member Author

Ah, I forgot there were changes to the DC simulation on here as well; I'm going to split those out and put them in a separate PR.

@jcapriot
Member Author

(screenshot attachment)

@jcapriot
Member Author

(two comparison screenshots)

A comparison of an inversion run with the current smoothness operator on a tree mesh, and this smoothness operator on a triangular mesh.

@codecov

codecov bot commented Jan 22, 2023

Codecov Report

Attention: Patch coverage is 92.37288%, with 9 lines in your changes missing coverage. Please review.

Project coverage is 82.36%. Comparing base (6952d90) to head (9fbd3ce).

Files Patch % Lines
SimPEG/regularization/_gradient.py 92.10% 9 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1167      +/-   ##
==========================================
+ Coverage   82.32%   82.36%   +0.04%     
==========================================
  Files         165      166       +1     
  Lines       25263    25379     +116     
==========================================
+ Hits        20798    20904     +106     
- Misses       4465     4475      +10     


@jcapriot
Member Author

Just having a thought here... Is it actually necessary to provide orthogonal vectors for the regularization directions? If someone wanted to regularize in two directions that were not orthogonal, I think this formulation still holds, because of the way the anisotropic weighting parameter is constructed.
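A quick numpy check of this intuition (a sketch, not SimPEG code): the anisotropic weighting tensor built as a weighted sum of outer products of the direction vectors stays symmetric positive semidefinite whether or not the directions are orthogonal:

```python
import numpy as np

# Two deliberately non-orthogonal unit directions, 30 degrees apart
v1 = np.array([1.0, 0.0])
v2 = np.array([np.cos(np.deg2rad(30)), np.sin(np.deg2rad(30))])
w1, w2 = 1.0, 0.5  # hypothetical direction weights

# Weighted sum of outer products: always symmetric PSD
Sigma = w1 * np.outer(v1, v1) + w2 * np.outer(v2, v2)

assert np.allclose(Sigma, Sigma.T)
eigvals = np.linalg.eigvalsh(Sigma)
assert np.all(eigvals >= -1e-12)  # nonnegative -> valid inner-product weight
```

So the construction itself never requires orthogonality; the question is only whether the resulting penalty means what the user intends.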

@domfournier domfournier requested a review from fourndo March 1, 2023 19:09
@jcapriot jcapriot marked this pull request as draft March 11, 2023 00:43
@jcapriot jcapriot added this to the 0.21.0 milestone Nov 23, 2023
G = self.cell_gradient
M_f = self.W
r = G @ (self.mapping * (self._delta_m(m)))
return 0.5 * r @ M_f @ r
Contributor

Here we only have one M_f, with M_f = self.W; aren't we missing another factor to get W^T W?

Contributor

(same comment for deriv and deriv2)

Member Author

Actually no. In the standard way of describing the regularization, M_f = W.T @ W; since I use discretize to construct the anisotropic inner-product matrix directly, I just set W = M_f, because there's a lot of machinery already hooked into updating the weighting matrix on things like weight changes.
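A small numpy sketch of the convention being described (purely illustrative; the matrices are random stand-ins): if M_f = W^T W, then 0.5 * r^T M_f r already equals the usual 0.5 * ||W r||^2, so no extra transpose product is needed:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
M_f = A.T @ A + 5 * np.eye(5)   # symmetric positive-definite stand-in
r = rng.standard_normal(5)       # stand-in for G @ (mapping * delta_m)

# Factor M_f = L @ L.T; L.T plays the role of the classical "W"
L = np.linalg.cholesky(M_f)

phi_inner = 0.5 * r @ M_f @ r                    # form used in this PR
phi_norm = 0.5 * np.linalg.norm(L.T @ r) ** 2    # classical 0.5*||W r||^2
assert np.isclose(phi_inner, phi_norm)
```

In other words, what is stored in self.W here is the full inner-product matrix, not its square root.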

Contributor

Thanks. I do admit I find this extremely confusing, and it's important to document here.

Contributor

M_f = self.W should then be M_f = self.WTW for clarity.

"""
if getattr(self, "_W", None) is None:
mesh = self.regularization_mesh.mesh
cell_weights = np.ones(len(mesh))
Contributor

@jcapriot : I don't think this works with active_cells, because len(mesh) here will be the size of the total mesh (active + inactive), but later you multiply those cell_weights by values (such as sensitivity weights) which will have the shape of the number of active cells.

See the error I am getting:
(screenshot of the traceback)

Contributor

PS: the issue only appears if you are setting weights AND have active_cells.
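The shape mismatch described above can be reproduced with a minimal numpy sketch (the cell counts here are hypothetical):

```python
import numpy as np

n_total_cells = 100   # len(mesh): active + inactive cells
n_active_cells = 80   # what sensitivity weights are shaped to

cell_weights = np.ones(n_total_cells)           # sized to the full mesh
sensitivity_weights = np.ones(n_active_cells)   # sized to active cells only

raised = False
try:
    cell_weights * sensitivity_weights  # elementwise multiply of mismatched sizes
except ValueError:
    raised = True  # numpy refuses to broadcast (100,) against (80,)
assert raised
```

Sizing the initial weights with the regularization mesh's active-cell count (as the later revision does) avoids the broadcast error.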

@domfournier
Contributor

domfournier commented Jan 27, 2024

Hey @thibaut-kobold @JKutt @johnweis0480 @lheagy

As promised, here's a VERY old notebook that I used for the plots in my thesis on sparse rotated gradients. I tried updating it, but it turned out to be a lot of work.

(figure from the notebook)

Chap5_RotatedGradient.zip

It relies on a getDiffOpRot method that has survived here:

def getDiffOpRot(mesh, psi, theta, phi, vec, forward=True):

Technically you could just copy that part only and overload the gradient operators on the reg classes to try it.

This stuff was never published, but someone should do something about it.

Enjoy.
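For anyone experimenting with the notebook, the core rotated-gradient idea can be sketched in a few lines of numpy (this is an illustration, not the getDiffOpRot implementation):

```python
import numpy as np

# Rotate per-cell gradient components by theta, then weight the rotated
# components differently to smooth preferentially along one direction.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

# hypothetical cell-centered gradient components, rows = cells
g = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
g_rot = g @ R.T  # rows become (parallel, perpendicular) components

# rotation alone preserves the isotropic measure; anisotropy comes from
# weighting the two rotated components with different alphas
assert np.allclose(np.linalg.norm(g, axis=1), np.linalg.norm(g_rot, axis=1))
```

Overloading the reg class's gradient operators with such rotated components, as suggested above, is then a matter of building R per cell from the dip/azimuth fields.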

n_faces = mesh.n_faces
n_cells = self.regularization_mesh.n_cells
cell_weights = np.ones(n_cells)
face_weights = np.ones(n_faces)
Contributor

@jcapriot : curious whether those face weights will allow for sparsity? From what I gather, they won't, correct?

Member Author

They'd likely allow it to be sparse in the x-y-z orientations, but I don't think they would work properly for making it sparse in a rotated direction. It might be approximately that at the minimum, but I'm not sure on that front either, because I haven't tried it.

@jcapriot jcapriot marked this pull request as ready for review February 13, 2024 19:28
@jcapriot jcapriot merged commit 3ba9a4b into simpeg:main Mar 28, 2024
16 checks passed
@jcapriot jcapriot deleted the Tet_regularizations branch March 28, 2024 19:37