TL;DR
Researchers at Carnegie Mellon University built a hybrid camera that pairs programmable optics with autofocus algorithms to focus different parts of a scene at different depths simultaneously. The system combines a Lohmann-style lens, a phase-only spatial light modulator, and two autofocus strategies, and received a Best Paper Honorable Mention at ICCV 2025.
What happened
A research group at Carnegie Mellon University has demonstrated a new imaging system that can render near and far objects sharp at the same time by steering focus independently across the field of view. The setup pairs a Lohmann-inspired lens assembly (two curved cubic elements that slide relative to each other to change focal power) with a phase-only spatial light modulator that controls how strongly light is bent at each pixel. Software drives the optics using two autofocus approaches: contrast-detection autofocus (CDAF), which splits the image into superpixels that each seek their sharpest focus, and phase-detection autofocus (PDAF), which uses a dual-pixel sensor to indicate which direction focus should move, enabling faster adjustments. The team reported capturing optically all-in-focus images without post-capture processing and reached 21 frames per second with their modified dual-pixel sensor. The work was presented at the 2025 International Conference on Computer Vision (ICCV), where it received a Best Paper Honorable Mention.
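To make the CDAF idea concrete, here is a minimal sketch that sweeps a set of focus settings and keeps, for each superpixel, the one that maximizes a Laplacian-variance contrast score. The `capture_at` function, the grid size, and the sharpness metric are hypothetical placeholders; the source does not describe the team's actual metric or control loop at this level.

```python
import numpy as np
from scipy.ndimage import laplace

def sharpness(patch):
    """Laplacian variance: a common contrast proxy used in CDAF-style methods."""
    return laplace(patch.astype(np.float64)).var()

def cdaf_per_superpixel(capture_at, focus_steps, grid=(8, 8)):
    """For each superpixel, pick the focus setting that maximizes contrast.

    `capture_at(f)` is a hypothetical stand-in that returns a grayscale
    frame captured with the optics focused at setting `f`. Returns a
    per-superpixel focus map.
    """
    frames = {f: capture_at(f) for f in focus_steps}
    h, w = next(iter(frames.values())).shape
    gy, gx = grid
    focus_map = np.zeros(grid)
    for i in range(gy):
        for j in range(gx):
            ys = slice(i * h // gy, (i + 1) * h // gy)
            xs = slice(j * w // gx, (j + 1) * w // gx)
            # Score every candidate focus for this superpixel, keep the best.
            scores = [(sharpness(frames[f][ys, xs]), f) for f in focus_steps]
            focus_map[i, j] = max(scores)[1]
    return focus_map
```

Because CDAF must test multiple settings per region, it trades speed for simplicity, which is why the team pairs it with the faster, direction-aware PDAF mode.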
Why it matters
- Enables single-shot images with multiple depths in focus, reducing the need for focus stacking or heavy post-processing.
- Could simplify microscopy imaging by capturing full-depth samples in one exposure.
- May improve environmental perception for autonomous systems by delivering sharper views across distances.
- Offers a path for richer depth rendering in AR/VR and potential upgrades to compact camera modules.
Key facts
- Developed at Carnegie Mellon University by Yingsi Qin (PhD student), Aswin Sankaranarayanan (professor, ECE), and Matthew O’Toole (associate professor, computer science and robotics).
- Presented at the 2025 International Conference on Computer Vision (ICCV) and received a Best Paper Honorable Mention.
- Combines a Lohmann-style lens (two shifting curved cubic elements) with a phase-only spatial light modulator for spatially varying focus control; a conceptual phase-map sketch follows this list.
- Uses two autofocus methods: Contrast-Detection Autofocus (CDAF) operating on superpixels, and Phase-Detection Autofocus (PDAF) with a dual-pixel sensor.
- Reported capture of optically all-in-focus images without post-capture processing.
- Achieved 21 frames per second with a modified dual-pixel sensor when using PDAF, the mode suited to faster scenes.
- Researchers describe the approach as a new category of optical design that mixes programmable optics and computational control.
- Media contact listed in the source: Krista Burns (kristab@cmu.edu).
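The spatially varying focus control mentioned above can be pictured with a short sketch. A phase-only SLM can emulate a lens by displaying a quadratic phase profile, and different regions of the panel can display profiles for different focal lengths. The tiling below is purely conceptual, assuming a hypothetical per-superpixel `focus_map` (such as the one from the CDAF sketch earlier); the published system's actual phase patterns and calibration are not described in the source.

```python
import numpy as np

def lens_phase(h, w, focal_m, wavelength_m=633e-9, pitch_m=8e-6):
    """Quadratic (thin-lens) phase profile, wrapped to [0, 2*pi).

    This is the standard Fourier-optics lens phase, -pi * r^2 / (lambda * f);
    the wavelength and pixel pitch values here are illustrative.
    """
    y, x = np.indices((h, w))
    r2 = ((x - w / 2) ** 2 + (y - h / 2) ** 2) * pitch_m ** 2
    phi = -np.pi * r2 / (wavelength_m * focal_m)
    return np.mod(phi, 2 * np.pi)

def spatially_varying_phase(focus_map, tile=64):
    """Tile an SLM pattern so each superpixel region applies the quadratic
    phase of its own focal length. A conceptual sketch only, not the
    published system's calibration."""
    gy, gx = focus_map.shape
    slm = np.zeros((gy * tile, gx * tile))
    for i in range(gy):
        for j in range(gx):
            slm[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile] = \
                lens_phase(tile, tile, focus_map[i, j])
    return slm
```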
What to watch next
- Commercial availability timeline and steps toward miniaturizing the optics for consumer devices: not confirmed in the source.
- Performance in low-light conditions and across a wider range of scene motion beyond the reported 21 fps: not confirmed in the source.
- Integration plans with smartphone camera modules or production partners: not confirmed in the source.
Quick glossary
- Lohmann lens: An optical design that achieves focus tuning by shifting two curved cubic lens elements relative to each other; used here as the mechanical foundation for variable focus (a symbolic check of this construction follows the glossary).
- Phase-only spatial light modulator: A device that alters the phase of incoming light at each pixel to change how light is redirected and focused without modifying amplitude.
- Contrast-Detection Autofocus (CDAF): An autofocus method that evaluates image sharpness (contrast) and adjusts focus to maximize clarity, often by testing multiple settings.
- Phase-Detection Autofocus (PDAF): An autofocus technique that measures phase differences across pixels (often using a dual-pixel sensor) to determine whether and which direction to move focus.
- Superpixel: A grouped region of neighboring pixels treated as a unit for image processing tasks to reduce complexity and capture local structure.
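The Lohmann-lens entry above can be verified symbolically: two complementary cubic profiles, shifted in opposite directions, sum to a quadratic (lens-like) phase whose strength grows linearly with the shift. A minimal sympy sketch of a one-dimensional slice, with an illustrative coefficient `a`:

```python
import sympy as sp

x, d, a = sp.symbols('x d a', real=True)

# Two complementary cubic phase profiles, laterally shifted by +d and -d
# (a 1-D slice of the Alvarez/Lohmann construction).
phi1 = a * (x + d) ** 3
phi2 = -a * (x - d) ** 3

combined = sp.expand(phi1 + phi2)
print(combined)  # -> 6*a*d*x**2 + 2*a*d**3

# The x**2 coefficient is a quadratic, lens-like phase whose strength
# scales linearly with the shift d, so sliding the elements tunes focus.
print(sp.Poly(combined, x).coeff_monomial(x**2))  # -> 6*a*d
```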
Reader FAQ
Can this camera make an entire scene sharp in a single exposure?
Yes; the researchers demonstrated optically captured images that are sharp across multiple depths in a single shot.
Is the technology ready for smartphones?
Not confirmed in the source.
How fast can the system update focus for moving scenes?
The team reported reaching 21 frames per second with a modified dual-pixel sensor using PDAF.
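As a rough picture of how dual-pixel PDAF knows which way to move, the sketch below estimates the signed shift between the left and right sub-aperture views of a patch via 1-D cross-correlation: the sign gives the focus direction, and the magnitude roughly tracks defocus. This is a generic illustration of the dual-pixel principle, not the pipeline described in the source.

```python
import numpy as np

def pdaf_disparity(left, right, max_shift=8):
    """Signed shift between dual-pixel left/right patch views.

    Collapses the patch rows (dual-pixel disparity is roughly horizontal),
    then scans integer shifts for the best normalized correlation.
    A minimal sketch; returns pixels of shift to drive toward zero.
    """
    l = left.mean(axis=0)
    r = right.mean(axis=0)
    l = (l - l.mean()) / (l.std() + 1e-9)
    r = (r - r.mean()) / (r.std() + 1e-9)
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        # Overlapping segments of the two views at relative shift s.
        a = l[max(0, s):len(l) + min(0, s)]
        b = r[max(0, -s):len(r) + min(0, -s)]
        score = float(np.dot(a, b)) / len(a)
        if score > best_score:
            best, best_score = s, score
    return best
```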
Does the method require post-capture image stacking or processing to achieve all-in-focus results?
The reported images were captured optically without post-capture processing.

Sources
- Researchers develop a camera that can focus on different distances at once
- This Camera System Can Focus on Everything …
- Carnegie Mellon's New Spatially-Varying Autofocus …
- New camera lens can keep everything in focus