Ilya Chugunov

I'm a PhD student in the Princeton Computational Imaging Lab, advised by Professor Felix Heide.
I received my bachelor's in electrical engineering and computer science from UC Berkeley, and am an NSF graduate research fellow.

Contact: [Last-Name]

CV  /  Google Scholar  /  Github  /  LinkedIn  /  My Photography

profile photo

"If you try and take a cat apart to see how it works, the first thing you have on your hands is a non-working cat." - Douglas Adams

I'm interested in the end-to-end optimization of imaging pipelines, from modeling signal collection to feature extraction and scene reconstruction. Whether it's MRIs, microscopes, or modulated light sources, I love working with real devices and imperfect data.

The Implicit Values of A Good Hand Shake: Handheld Multi-Frame Neural Depth Refinement
Ilya Chugunov, Yuxuan Zhang, Zhihao Xia, Xuaner (Cecilia) Zhang, Jiawen Chen, Felix Heide
Conference on Computer Vision and Pattern Recognition (CVPR), 2022 (Oral)

Modern smartphones can stream multi-megapixel RGB images, high-quality 3D pose information, and low-resolution depth estimates at 60Hz. In tandem, the natural shake of a phone photographer's hand provides us with dense micro-baseline parallax depth cues during viewfinding. This work explores how we can combine these data streams to distill a high-fidelity depth map from a single snapshot.

Centimeter-Wave Free-Space Time-of-Flight Imaging
Seung-Hwan Baek, Noah Walsh, Ilya Chugunov, Zheng Shi, Felix Heide
arXiv preprint, 2021

Modern AMCW time-of-flight (ToF) cameras are limited to modulation frequencies of several hundred MHz by silicon absorption limits. In this work we leverage electro-optic modulators to build the first free-space GHz ToF imager. To resolve the resulting high-frequency phase ambiguities, we also introduce a segmentation-inspired neural phase unwrapping network.

Mask-ToF: Learning Microlens Masks for Flying Pixel Correction in Time-of-Flight Imaging
Ilya Chugunov, Seung-Hwan Baek, Qiang Fu, Wolfgang Heidrich, Felix Heide
Conference on Computer Vision and Pattern Recognition (CVPR), 2021

Flying pixels are pervasive depth artifacts in time-of-flight imaging, formed by light paths from both an object and its background connecting to the same sensor pixel. Mask-ToF jointly learns a microlens-level occlusion mask and refinement network to respectively encode and decode geometric information in device measurements, helping reduce these artifacts while remaining light efficient.

Self-Contained Jupyter Notebook Labs Promote Scalable Signal Processing Education
Dominic Carrano, Ilya Chugunov, Jonathan Lee, Babak Ayazifar
6th International Conference on Higher Education Advances, 2020

Jupyter Notebook labs can offer a similar experience to in-person lab sections while being self-contained, with relevant resources embedded in their cells. They interactively demonstrate real-life applications of signal processing while reducing overhead for course staff.

Link to Jupyter Notebook Exercises
Multiscale Low-Rank Matrix Decomposition for Reconstruction of Accelerated Cardiac CEST MRI
Ilya Chugunov, Wissam AlGhuraibawi, Kevin Godines, Bonnie Lam, Frank Ong, Jonathan Tamir, Moriel Vandsburger
28th Annual Meeting of the International Society for Magnetic Resonance in Medicine (ISMRM), 2020

Leveraging sparsity in the Z-spectrum domain, multiscale low-rank reconstruction of cardiac chemical exchange saturation transfer (CEST) MRI allows for 4-fold acceleration of scans while still providing accurate Lorentzian line-fit analysis.

Duodepth: Static Gesture Recognition Via Dual Depth Sensors
Ilya Chugunov, Avideh Zakhor
IEEE International Conference on Image Processing (ICIP), 2019

Implicitly integrating point cloud data from two structured light sensors via a 3D spatial transform network can lead to improved gesture recognition results as compared to explicit iterative closest point (ICP) registration of the point clouds.

GitHub Link
Curriculum Vitae

Website template stolen from Jon Barron.