@stevengj I have fixed a typo/bug in both the ssp2 projection and the ssp2 adjoint, and I updated the figures in the OP to reflect the new results. I also changed the example to use a higher resolution for the projected design than for the design variables, which gives the gradient a smoother appearance. Also, with the refactoring you suggested, the code is almost allocation-free, and I can run 50 optimization iterations of the example problem in just under a second on my machine on a single thread. Still, there are a couple of loose ends on this PR:
Once this is finished, I also have a draft implementation of the lengthscale constraints for a follow-up PR.
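One quick way to back up the "almost allocation-free" claim after a refactor like this is to measure allocations of a warm call with `@allocated`. The `step!` function below is a hypothetical stand-in for one optimization iteration, not the package's actual code:

```julia
# Hypothetical stand-in for one in-place iteration of the hot loop;
# the real SSP2 code would update the design/projection buffers.
step!(out, x) = (out .= 2 .* x; out)

x = rand(256, 256)
out = similar(x)
step!(out, x)                     # warm up so compilation is excluded
bytes = @allocated step!(out, x)  # 0 bytes once the call is warm
```

Because the broadcast is fused and in-place, a warm call should report zero allocations; any nonzero count points at a remaining temporary.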
The adjoint accuracy is now fixed and matches finite differences to about 5 significant digits. The issue was that I was misusing the FastInterpolations.jl API in this PR (but not in my old code), along with a mistake in overwriting arrays. The updated optimization results are in the OP and are nearly identical to the previous ones.
The Julia API here seems to be quite different in philosophy from the Python API. I guess it is more SciML-inspired, where you have a "problem" object and a "solve" method? You also implement all the adjoints manually, which complicates the code but probably improves performance; presumably you will want to define a ChainRulesCore `rrule` so that it can be plugged into Enzyme and Zygote. It would be interesting to benchmark this against the Jax implementation of SSP1 (with bilinear interpolation) and @romanodev's implementation of SSP2 (with bicubic).
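For reference, a hedged sketch of what such an `rrule` could look like. The `project` and `project_adjoint` functions below are placeholders standing in for this package's forward projection and its hand-written vector-Jacobian product, not its real API:

```julia
using ChainRulesCore

# Placeholder forward pass and hand-written vector-Jacobian product;
# stand-ins for the package's projection and its manual adjoint.
project(x) = x .^ 2
project_adjoint(x, ȳ) = 2 .* x .* ȳ

# Wrapping the manual adjoint as an rrule lets Zygote (and other
# ChainRules-aware AD systems) differentiate through `project`
# without tracing its internals.
function ChainRulesCore.rrule(::typeof(project), x)
    y = project(x)
    project_pullback(ȳ) = (NoTangent(), project_adjoint(x, ȳ))
    return y, project_pullback
end
```

The pullback closes over the forward inputs, so the manual adjoint is reused as-is and the AD system only sees the composed rule.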
Here I implement SSP2 in Julia in a mostly self-contained package that depends solely on FFTW.jl and FastInterpolations.jl. The code should work in arbitrary dimensions, although I focus on a 2D example highlighting the topology change of a Cassini oval. There is no dependency on any external AD package for gradient calculations; however, `rrule`s may be added at a later point.
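For readers unfamiliar with smoothed projections, here is a minimal sketch of the standard tanh-smoothed threshold projection used in density-based topology optimization. The subpixel-smoothed SSP2 projection in this PR is more elaborate, so treat this only as an illustration of the general idea:

```julia
# Standard tanh-smoothed Heaviside projection (illustration only; the
# subpixel-smoothed SSP2 projection in this PR differs in detail).
# β controls sharpness, η is the threshold; maps [0, 1] → [0, 1].
function tanh_projection(x; β=8.0, η=0.5)
    @. (tanh(β * η) + tanh(β * (x - η))) /
       (tanh(β * η) + tanh(β * (1 - η)))
end

ρ = tanh_projection(range(0, 1; length=5))
```

As β → ∞ this approaches a hard threshold at η; keeping β finite is what makes the projected field differentiable with respect to the design variables.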
I include a Manifest.toml file in the examples that should provide a reproducible environment.

Here are the design variables and the projected field

Here are the projected field and the (smooth) gradient w.r.t. the design variables

Here is an optimization that minimizes the squared norm of the projected field, along with the final design.

~~My only concern is that my gradient only agrees with finite differences in the first digit, so I may have to debug to find if there are any issues.~~

The adjoint gradients agree with finite differences to around 5 significant digits.
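The kind of check referred to here can be sketched as a central-difference comparison against the adjoint gradient. The `f` and `grad_f` below are simple stand-ins for the actual objective and the SSP2 adjoint:

```julia
# Stand-ins for the objective and its adjoint gradient; the real check
# would use the projected-field objective and the SSP2 adjoint.
f(x) = sum(abs2, x)
grad_f(x) = 2 .* x

x = [0.3, -1.2, 0.7]
g = grad_f(x)

# Central finite differences, one design variable at a time.
h = 1e-5
gfd = similar(x)
for i in eachindex(x)
    xp = copy(x); xp[i] += h
    xm = copy(x); xm[i] -= h
    gfd[i] = (f(xp) - f(xm)) / (2h)
end

relerr = maximum(abs.(g .- gfd) ./ abs.(g))
```

Agreement to about 5 significant digits corresponds to a relative error around 1e-5, which is roughly the best one can expect from first-order-accurate step sizes before roundoff dominates.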