Implicit neural representations (INRs) are a powerful family of continuous learned function approximators for signal data, implemented using multilayer perceptrons (MLPs). INRs provide a nonlinear signal representation over basis functions generated by scaling and shifting the activation function. Inspired by this insight, this talk will focus on the role played by the activation function and by the weights and biases of INRs. I will first focus on the development of wavelet INRs (WIRE), which yield highly robust representations of visual data such as images and videos. Second, I will discuss how the weights and biases of an INR can be modulated to efficiently and continuously represent videos, and thereby generate slow-motion videos without any external supervision, underscoring the value of INRs as a powerful signal processing tool. The talk will cover applications in CT reconstruction, low-level image processing, video processing, and solving inverse problems with meta-optics.
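For readers unfamiliar with the setup, the core idea can be sketched in a few lines: an INR is a small MLP mapping input coordinates to signal values, and each hidden unit applies a scaled and shifted copy of the activation function. The sketch below uses a complex Gabor-wavelet nonlinearity in the spirit of WIRE; all names, layer sizes, and parameter values here are illustrative assumptions, not the speaker's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gabor_activation(x, omega=10.0, sigma=10.0):
    # Complex Gabor wavelet nonlinearity (WIRE-style; parameters illustrative):
    # an oscillation modulated by a Gaussian envelope.
    return np.exp(1j * omega * x - (sigma * x) ** 2)

# Tiny INR: 1-D coordinates in [0, 1] -> scalar signal value.
# The weights scale the input and the biases shift it, so each hidden
# unit is a scaled/shifted copy of the wavelet -- the "basis functions".
W1 = rng.normal(size=(1, 32))
b1 = rng.normal(size=32)
W2 = rng.normal(size=(32, 1))
b2 = rng.normal(size=1)

def inr(coords):
    h = gabor_activation(coords @ W1 + b1)   # hidden wavelet responses
    return (h @ W2 + b2).real                # real-valued signal estimate

coords = np.linspace(0.0, 1.0, 100).reshape(-1, 1)
signal = inr(coords)
print(signal.shape)  # (100, 1)
```

In practice the weights and biases would be fit by gradient descent to reconstruct a target signal; this sketch only shows the forward evaluation at continuous coordinates, which is what makes the representation resolution-independent.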
Vishwanath Saragadam is an Assistant Professor in the Department of Electrical and Computer Engineering (ECE) at the University of California, Riverside, where he leads the Computational Optics and Display Engineering (CODE) lab. Vishwanath received his Ph.D. in ECE from Carnegie Mellon University and was a postdoctoral researcher at Rice University. He is a recipient of an outstanding postdoctoral researcher award (2023), the best paper award at ICCP 2022, and an outstanding thesis award (2021). His research interests include meta-optics, hyperspectral imaging, thermal imaging, and self-supervised algorithms for linear inverse problems.