Ilya Chugunov

About:   I'm a PhD student in the Princeton Computational Imaging Lab, advised by Professor Felix Heide. I received my bachelor's in electrical engineering and computer science from UC Berkeley, and am an NSF graduate research fellow.

Contact:   cout << "chugunov" << "@" << "princeton.edu"

Professional:   CV  /  Google Scholar  /  Github  /  LinkedIn

Unprofessional:   Nature Photography


Research

I'm interested in computational photography and inverse problems that span the whole imaging pipeline, from signal collection to scene reconstruction. Whether it's MRIs, microscopes, or modulated light sources, I love working with real devices and weird data.


"If you try and take a cat apart to see how it works, the first thing you have on your hands is a non-working cat."
- Douglas Adams

Neural Spline Fields for Burst Image Fusion and Layer Separation
Ilya Chugunov, David Shustin, Ruyu Yan, Chenyang Lei, Felix Heide
CVPR, 2024

We propose neural spline fields, coordinate networks trained to map input 2D points to vectors of spline control points, as a versatile representation of pixel motion during burst photography. This flow model can fuse images during test-time optimization using just photometric loss, without regularization. Layering these representations, we can separate effects such as occlusions, reflections, shadows and more.
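
A minimal sketch of the core idea, not the paper's implementation: a small coordinate MLP maps a 2D pixel location to a handful of spline control points, which are then evaluated at a normalized frame time to give that pixel's flow. The layer sizes, number of control points, Bezier evaluation basis, and all names below are illustrative placeholders.

    import torch
    from math import comb

    class NeuralSplineField(torch.nn.Module):
        # Toy coordinate network: 2D pixel coordinates -> control points of a 2D flow spline.
        def __init__(self, n_ctrl=4, hidden=64):
            super().__init__()
            self.n_ctrl = n_ctrl
            self.mlp = torch.nn.Sequential(
                torch.nn.Linear(2, hidden), torch.nn.ReLU(),
                torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
                torch.nn.Linear(hidden, 2 * n_ctrl))  # (dx, dy) per control point

        def forward(self, xy, t):
            # xy: (N, 2) pixel coordinates in [0, 1]; t: scalar frame time in [0, 1].
            ctrl = self.mlp(xy).view(-1, self.n_ctrl, 2)
            # Evaluate the spline at time t with a Bernstein (Bezier) basis.
            n = self.n_ctrl - 1
            basis = torch.tensor([comb(n, k) * (1 - t) ** (n - k) * t ** k
                                  for k in range(self.n_ctrl)])
            return (basis[None, :, None] * ctrl).sum(dim=1)  # (N, 2) flow at time t

    flow = NeuralSplineField()(torch.rand(8, 2), t=0.5)  # per-pixel flow halfway through the burst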

Shakes on a Plane: Unsupervised Depth Estimation from Unstabilized Photography
Ilya Chugunov, Yuxuan Zhang, Felix Heide
CVPR, 2023

In a “long-burst” (forty-two 12-megapixel RAW frames captured over a two-second sequence), there is enough parallax information from natural hand tremor alone to recover high-quality scene depth. We fit a neural RGB-D model directly to this long-burst data to recover depth and camera motion with no LiDAR, no external pose estimates, and no disjoint preprocessing steps.
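
To make the test-time optimization concrete, here is a minimal numpy sketch of the pinhole reprojection forward model at its heart; depth and relative pose are the unknowns fit by photometric loss. This is an assumed simplification for illustration, not the paper's code, and the function and variable names are placeholders.

    import numpy as np

    def reproject(uv, depth, K, R, t):
        # Lift pixels uv (N, 2) in the reference frame to 3D using per-pixel depth,
        # apply the relative pose (R, t), and project into another burst frame.
        ones = np.ones((uv.shape[0], 1))
        rays = (np.linalg.inv(K) @ np.hstack([uv, ones]).T).T  # unit-depth rays
        pts = depth[:, None] * rays                            # 3D scene points
        proj = (K @ (R @ pts.T + t[:, None])).T                # into the other frame
        return proj[:, :2] / proj[:, 2:3]                      # reprojected pixel coords

A photometric loss then compares the reference frame at uv against the other frame sampled at the reprojected coordinates, and gradients flow back into the depth and pose parameters.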

GenSDF: Two-Stage Learning of Generalizable Signed Distance Functions
Gene Chou, Ilya Chugunov, Felix Heide
NeurIPS, 2022 (Featured)

Signed distance fields (SDFs) can be a compact and convenient way of representing 3D objects, but state-of-the-art learned methods for SDF estimation struggle to fit more than a few shapes at a time. This work presents a two-stage semi-supervised meta-learning approach that learns generic shape priors to reconstruct over a hundred unseen object classes.
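
For context on the representation itself: an SDF returns the signed distance from a query point to the nearest surface, negative inside and positive outside, with the surface as the zero level set. A learned SDF is queried the same way, with a network replacing the closed form in this toy example.

    import numpy as np

    def sphere_sdf(p, center=np.zeros(3), radius=1.0):
        # Signed distance to a sphere: negative inside, zero on the surface, positive outside.
        return np.linalg.norm(p - center, axis=-1) - radius

    print(sphere_sdf(np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.]])))  # [-1.  0.  1.]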

Centimeter-Wave Free-Space Time-of-Flight Imaging
Seung-Hwan Baek, Noah Walsh, Ilya Chugunov, Zheng Shi, Felix Heide
SIGGRAPH, 2022

Modern AMCW time-of-flight (ToF) cameras are limited to modulation frequencies of several hundred MHz by silicon absorption limits. In this work we leverage electro-optic modulators to build the first free-space GHz ToF imager. To resolve the resulting high-frequency phase ambiguities, we also introduce a segmentation-inspired neural phase unwrapping network.
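
As a quick worked example of why higher modulation frequencies force phase unwrapping: an AMCW measurement only determines depth modulo the unambiguous range c / (2 f_mod), which shrinks from meters to centimeters as the modulation moves into the GHz regime (illustrative frequencies below).

    c = 3e8  # speed of light, m/s
    for f_mod in (100e6, 1e9, 10e9):  # MHz-range vs. GHz-range modulation
        print(f"{f_mod / 1e9:5.2f} GHz -> depth wraps every {100 * c / (2 * f_mod):.1f} cm")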

The Implicit Values of A Good Hand Shake: Handheld Multi-Frame Neural Depth Refinement
Ilya Chugunov, Yuxuan Zhang, Zhihao Xia, Xuaner (Cecilia) Zhang, Jiawen Chen, Felix Heide
CVPR, 2022 (Oral)

Modern smartphones can stream multi-megapixel RGB images, high-quality 3D pose information, and low-resolution depth estimates at 60Hz. In tandem, the natural shake of a phone photographer's hand provides us with dense micro-baseline parallax depth cues during viewfinding. This work explores how we can combine these data streams to get a high-fidelity depth map from a single snapshot.
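
For a back-of-envelope sense of the parallax signal, here is a standard pinhole disparity relation with assumed, illustrative numbers (roughly a 12-megapixel phone camera and millimeter-scale hand motion); these are not measurements from the paper.

    f_px = 3000.0       # assumed focal length in pixels for a ~12 MP phone camera
    baseline_m = 0.003  # assumed millimeter-scale translation from natural hand shake
    for depth_m in (0.5, 2.0, 5.0):
        disparity_px = f_px * baseline_m / depth_m  # micro-baseline parallax
        print(f"depth {depth_m:.1f} m -> ~{disparity_px:.1f} px of parallax")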

Mask-ToF: Learning Microlens Masks for Flying Pixel Correction in Time-of-Flight Imaging
Ilya Chugunov, Seung-Hwan Baek, Qiang Fu, Wolfgang Heidrich, Felix Heide
CVPR, 2021

Flying pixels are pervasive depth artifacts in time-of-flight imaging, formed by light paths from both an object and its background connecting to the same sensor pixel. Mask-ToF jointly learns a microlens-level occlusion mask and a refinement network to respectively encode and decode geometric information in device measurements, helping reduce these artifacts while remaining light-efficient.
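
A toy numpy example of how a flying pixel forms: if one pixel sees returns from both a 2 m object and a 5 m wall, the measured AMCW phasor is their sum, and the decoded depth lands in the empty space between the two surfaces. The modulation frequency and equal mixing weights are illustrative assumptions, not the device model from the paper.

    import numpy as np

    c, f_mod = 3e8, 20e6  # speed of light; assumed 20 MHz modulation

    def phase(d):         # AMCW phase for a return at depth d
        return 4 * np.pi * f_mod * d / c

    mixed = 0.5 * np.exp(1j * phase(2.0)) + 0.5 * np.exp(1j * phase(5.0))
    depth = np.angle(mixed) * c / (4 * np.pi * f_mod)
    print(f"decoded depth: {depth:.2f} m")  # ~3.5 m: a pixel 'flying' between the two surfaces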

Self-Contained Jupyter Notebook Labs Promote Scalable Signal Processing Education
Dominic Carrano, Ilya Chugunov, Jonathan Lee, Babak Ayazifar
6th International Conference on Higher Education Advances (HEAd), 2020

Jupyter Notebook labs can offer a similar experience to in-person lab sections while being self-contained, with relevant resources embedded in their cells. They interactively demonstrate real-life applications of signal processing while reducing overhead for course staff.

Multiscale Low-Rank Matrix Decomposition for Reconstruction of Accelerated Cardiac CEST MRI
Ilya Chugunov, Wissam AlGhuraibawi, Kevin Godines, Bonnie Lam, Frank Ong, Jonathan Tamir, Moriel Vandsburger
28th Annual Meeting of International Society for Magnetic Resonance in Medicine (ISMRM), 2020

Leveraging sparsity in the Z-spectrum domain, multiscale low-rank reconstruction of cardiac chemical exchange saturation transfer (CEST) MRI can allow for 4-fold acceleration of scans while providing accurate Lorentzian line-fit analysis.
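
As a loose illustration of the low-rank prior (a single global decomposition; the paper's method is multiscale, i.e. block-wise at several scales): stacking the CEST frames into a pixels-by-frequency-offsets matrix and truncating its SVD gives the kind of low-rank approximation such priors rely on. The matrix sizes, rank, and data below are placeholders.

    import numpy as np

    X = np.random.randn(4096, 50)                    # placeholder pixels x Z-spectrum-offsets matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 5                                            # assumed rank budget
    X_lowrank = (U[:, :k] * s[:k]) @ Vt[:k, :]       # low-rank approximation used as a prior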

Duodepth: Static Gesture Recognition Via Dual Depth Sensors
Ilya Chugunov, Avideh Zakhor
IEEE International Conference on Image Processing (ICIP), 2019

For static gesture recognition, implicitly integrating point cloud data from two structured light sensors via a 3D spatial transform network leads to improved accuracy compared to explicit iterative closest point (ICP) registration of the point clouds.
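
A toy sketch in the spirit of a PointNet-style transform regressor, included only to illustrate implicit alignment: a network predicts a transform from the points themselves, so registration is learned end to end with the recognition task rather than run as a separate ICP step. Sizes and names are placeholders; this is not the Duodepth architecture.

    import torch

    class TNet(torch.nn.Module):
        # Toy spatial transformer: regress a 3x3 transform + translation from a point cloud.
        def __init__(self, hidden=64):
            super().__init__()
            self.feat = torch.nn.Sequential(
                torch.nn.Linear(3, hidden), torch.nn.ReLU(),
                torch.nn.Linear(hidden, hidden), torch.nn.ReLU())
            self.head = torch.nn.Linear(hidden, 12)  # 9 linear-transform + 3 translation terms

        def forward(self, pts):                      # pts: (N, 3) from one depth sensor
            g = self.feat(pts).max(dim=0).values     # order-invariant global feature
            p = self.head(g)
            A = p[:9].view(3, 3) + torch.eye(3)      # start near the identity transform
            t = p[9:]
            return pts @ A.T + t                     # transformed cloud, ready to fuse

    aligned = TNet()(torch.rand(256, 3))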


Website template stolen from Bon Jarron.