Summary

During the 2024 iteration of Google Summer of Code, I worked on building the package NeuralOperators.jl. The package now exports the DeepONet, Fourier Neural Operator (FNO), and Nonlinear Manifold Decoder (NOMAD) architectures. I began by implementing the vanilla DeepONet architecture, similar to the one provided by deepxde, except that our batching does not employ separate routines for aligned and non-aligned datasets (a minimal sketch of the forward pass is included after the PR link below).

  • PR merged : #5
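
To make the batching point concrete, here is a minimal sketch of a vanilla DeepONet forward pass written directly in Lux; the layer sizes, sensor count, and variable names are illustrative, not the package's defaults or its exported API.

```julia
using Lux, Random

# Branch net encodes the input function u sampled at 64 sensor points into
# p = 16 coefficients; trunk net encodes a query coordinate y ∈ R into
# p = 16 basis values. The operator value at y is their inner product.
branch = Chain(Dense(64 => 32, gelu), Dense(32 => 16))
trunk  = Chain(Dense(1 => 32, gelu), Dense(32 => 16))

rng = Random.default_rng()
ps_b, st_b = Lux.setup(rng, branch)
ps_t, st_t = Lux.setup(rng, trunk)

u = rand(Float32, 64, 8)    # 64 sensor values × batch of 8 input functions
y = rand(Float32, 1, 100)   # 100 query coordinates shared by every sample

b, _ = branch(u, ps_b, st_b)   # 16 × 8
t, _ = trunk(y, ps_t, st_t)    # 16 × 100

# A single contraction covers both "aligned" and "non-aligned" layouts:
# every input function is evaluated at every query point.
out = b' * t                   # 8 × 100
```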

The next step began with the opening of issue #9, which asked for an additional layer in the DeepONet architecture before the learnt basis is projected onto the output space. This extends the utility of the architecture, making it possible to learn operators whose outputs live in higher dimensions without collapsing information (a rough illustration follows the PR link below).

  • PR merged : #15
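
As a rough illustration of the idea (not the package's exact mechanism or API), one can keep the elementwise product of the branch and trunk embeddings and pass it through an extra layer, instead of collapsing it to a scalar per query point:

```julia
using Lux, Random

# Hypothetical projection step: rather than reducing the branch/trunk
# embeddings b ⋅ t to one scalar per query point, combine them elementwise
# and map the result to a d_out-dimensional output with one more layer.
p, B, N, d_out = 16, 8, 100, 3
project = Dense(p => d_out)

rng = Random.default_rng()
ps_p, st_p = Lux.setup(rng, project)

b = rand(Float32, p, B)    # branch embedding: p × batch
t = rand(Float32, p, N)    # trunk embedding:  p × query points

z = reshape(b, p, B, 1) .* reshape(t, p, 1, N)    # p × B × N

out, _ = project(reshape(z, p, :), ps_p, st_p)    # d_out × (B*N)
out = reshape(out, d_out, B, N)                   # d_out × B × N
```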

Another major task I completed was adding NOMADs, which took the package one step closer to feature parity with the soon-to-be-deprecated last release of SciML/NeuralOperators.jl. This was comparatively straightforward, since by then I had a fair amount of experience implementing architectures in the package.

  • PR merged : #27
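
Conceptually, NOMAD swaps DeepONet's linear reconstruction for a nonlinear decoder. A minimal sketch in Lux, with illustrative sizes and names rather than the package's exported API:

```julia
using Lux, Random

# The approximator embeds the input function; the decoder maps the
# concatenation (embedding, query coordinate) to the output nonlinearly,
# instead of forming a linear combination of learnt basis functions.
approximator = Chain(Dense(64 => 32, gelu), Dense(32 => 16))
decoder      = Chain(Dense(16 + 1 => 32, gelu), Dense(32 => 1))

rng = Random.default_rng()
ps_a, st_a = Lux.setup(rng, approximator)
ps_d, st_d = Lux.setup(rng, decoder)

u = rand(Float32, 64, 8)   # input function at 64 sensors × batch of 8
y = rand(Float32, 1, 8)    # one query coordinate per sample

β, _   = approximator(u, ps_a, st_a)       # 16 × 8
out, _ = decoder(vcat(β, y), ps_d, st_d)   # 1 × 8
```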

Other PRs that brought more organization and coherence to the package were:

  • PR merged : #4
  • PR merged : #23

One task that took me a lot of time, and that I have yet to figure out completely, is the addition of the continuous variant of the neural operator. Propagating the Borel measures of the functions held me up for a while before I realized it would be more worthwhile to invest time in adding features that would be used more frequently. The corresponding PR is still a work in progress, and I intend to finish it after a few other operators have been added, which should be relatively easier given the experience I have built up during this period.

  • PR #10 : Integral Kernel Operator
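
For reference, the layer this PR targets is the kernel integral operator from the neural operator literature (written here in the common notation of Kovachki et al.; the exact parameterisation in the PR may differ):

$$
v_{t+1}(x) = \sigma\!\left( W v_t(x) + \int_{D} \kappa_\phi(x, y)\, v_t(y)\, \mathrm{d}\nu(y) \right),
$$

where $\kappa_\phi$ is a learnable kernel and $\nu$ is the Borel measure on the domain $D$ whose propagation is mentioned above.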

We have also compared the package's performance with the existing Python implementations. While the FNO still needs some optimizations, the DeepONet is already about 10x faster than its PyTorch counterpart. These benchmarks currently live in a PR on the package and will eventually be added to SciMLBenchmarks.jl.

  • PR #17 : Benchmark Tests for FNO and DeepONets
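
On the Julia side, such a comparison boils down to timing forward (and backward) passes; below is a minimal sketch of the methodology using BenchmarkTools.jl with an illustrative model, not the actual benchmark code from the PR.

```julia
using Lux, Random, BenchmarkTools

# Time a forward pass of a small illustrative model; the real benchmark PR
# compares the package's FNO/DeepONet against their Python counterparts.
model = Chain(Dense(64 => 32, gelu), Dense(32 => 16))
rng = Random.default_rng()
ps, st = Lux.setup(rng, model)
x = rand(Float32, 64, 128)

@benchmark $model($x, $ps, $st)
```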

Documentation for the package is also underway and will be built for hosting soon, along with tutorials to help users integrate it into their work seamlessly.

More TODOs

There is a plethora of neural operators out there that can be added: DeepM&MNets, U-shaped neural operators, and many more. With GraphNeuralNetworks.jl/#32 merged, adding Graph Neural Operators is the next task on the list to achieve feature parity and release the updated version of the package. The continuous variant of the neural operator is something that has challenged me, and adding it would be a major feat for me.