TL;DR
Researchers at Carnegie Mellon University have built an experimental 'spatially-varying autofocus' camera system that can render multiple depths in a scene sharply at the same time. The setup pairs a Lohmann-style tunable lens with a phase-only spatial light modulator and combines two autofocus strategies to assign different focal behavior across the image.
What happened
A research team at Carnegie Mellon University demonstrated a prototype imaging system that can bring parts of a scene at different distances into sharp focus simultaneously. Instead of the single focal plane of a traditional lens, the group describes a "computational lens" that merges a Lohmann lens (two curved cubic elements that shift relative to one another to change focus) with a phase-only spatial light modulator, which can steer light on a per-pixel basis. The system combines two autofocus techniques: Contrast-Detection Autofocus (CDAF), which partitions the image into regions and optimizes sharpness in each independently, and Phase-Detection Autofocus (PDAF), which detects whether a region is in focus and which way to adjust it. According to the team, this spatially-varying autofocus effectively lets the camera decide which portions of an image should be sharp, producing finer detail across different depths within a single capture. The work is presented as experimental research rather than a consumer product.
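To make the control loop concrete, here is a minimal sketch of how region-based hybrid autofocus could work, assuming each tile of the image gets its own scalar focus setting. The callables (`capture`, `set_focus`, `phase_error`), the Laplacian-variance contrast metric, and the simple hill-climb are illustrative assumptions, not the CMU team's actual implementation.

```python
import numpy as np

def sharpness(region: np.ndarray) -> float:
    """CDAF metric: variance of a discrete Laplacian (higher = sharper)."""
    lap = (np.roll(region, 1, axis=0) + np.roll(region, -1, axis=0)
           + np.roll(region, 1, axis=1) + np.roll(region, -1, axis=1)
           - 4 * region)
    return float(lap.var())

def autofocus_region(capture, set_focus, phase_error, lo=0.0, hi=1.0, iters=8):
    """Hybrid autofocus for one image region.

    capture()     -> the region's pixels at the current focus setting
    set_focus(f)  -> drives this region's focus control (a scalar here)
    phase_error() -> signed PDAF estimate: sign gives the direction to
                     move, magnitude shrinks toward zero at best focus
    All three callables are hypothetical hardware interfaces.
    """
    # PDAF step: jump roughly toward focus using the signed phase error.
    f = float(np.clip(0.5 * (lo + hi) - phase_error(), lo, hi))
    # CDAF step: hill-climb on the contrast metric around that estimate.
    step = 0.25 * (hi - lo)
    best_s, best_f = -np.inf, f
    for _ in range(iters):
        for cand in (f - step, f, f + step):
            cand = float(np.clip(cand, lo, hi))
            set_focus(cand)
            s = sharpness(capture())
            if s > best_s:
                best_s, best_f = s, cand
        f, step = best_f, step / 2  # narrow the search around the best focus
    set_focus(best_f)
    return best_f

# Spatially-varying AF: run the loop independently for each tile of a
# grid, so every region converges on the depth of its own subject.
```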
Why it matters
- Could eliminate the need to take multiple shots at different focus settings to get every subject sharp.
- Enables finer detail across an entire scene by assigning different focus behavior to different image regions.
- Represents a shift from purely mechanical focusing to combining optical elements with pixel-level light control.
- May influence future computational imaging designs that merge optics and per-pixel modulation.
Key facts
- The project was developed by researchers at Carnegie Mellon University.
- Researchers call the approach "spatially-varying autofocus."
- The optical core is described as a "computational lens" combining a Lohmann lens and a phase-only spatial light modulator.
- A Lohmann lens in the system consists of two curved, cubic lens elements that shift laterally relative to each other to tune focus; see the derivation after this list.
- A phase-only spatial light modulator controls how light is bent at each pixel of the imaging plane.
- Two autofocus methods are used: Contrast-Detection Autofocus (CDAF) and Phase-Detection Autofocus (PDAF).
- CDAF divides images into regions and independently maximizes sharpness for those regions.
- PDAF detects whether an area is in focus and which direction the focus should be adjusted.
- The researchers describe the design as effectively giving each pixel its own adjustable focusing behavior.
- The system is presented as experimental research rather than a released commercial product.
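To see why sliding two cubic elements changes focus, here is the standard Alvarez-Lohmann derivation, assuming the textbook cubic phase profile; the source does not specify the exact profiles used in the CMU prototype. Displacing the complementary plates by +d and -d along x turns their summed cubic phases into a quadratic (lens) phase whose strength grows linearly with d:

```latex
% Complementary cubic phase plates, displaced by +d and -d along x:
\phi_{\pm}(x, y) = \pm A\left(\tfrac{x^{3}}{3} + x y^{2}\right)
\phi_{\mathrm{total}}(x, y)
  = \phi_{+}(x + d,\, y) + \phi_{-}(x - d,\, y)
  = 2 A d\,(x^{2} + y^{2}) + \tfrac{2}{3} A d^{3}
```

Matching the quadratic term against the thin-lens phase -k r^2 / (2f) gives optical power 1/f proportional to the shift d, so the focal length is tuned continuously by lateral motion alone; the constant term is just a global phase offset.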
What to watch next
- Whether the system can be miniaturized and integrated into consumer cameras: not confirmed in the source.
- How the approach performs under varied lighting and motion conditions in real-world use: not confirmed in the source.
- Timelines for further development, peer review, or commercialization: not confirmed in the source.
Quick glossary
- Focal plane: The plane at a given distance from the lens whose points are rendered sharply in the image; a traditional lens has a single primary focal plane at a time.
- Lohmann lens: An arrangement of two curved lens elements that can shift relative to each other to alter the system's focus.
- Phase-only spatial light modulator (SLM): A device that changes the phase of incoming light on a per-pixel basis, allowing control over how light is redirected without altering its amplitude; see the sketch after this glossary.
- Contrast-Detection Autofocus (CDAF): An autofocus method that adjusts focus to maximize local image contrast, often applied across regions to find maximum sharpness.
- Phase-Detection Autofocus (PDAF): An autofocus technique that measures phase differences in incoming light to determine whether a subject is in focus and which direction to move the lens.
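To illustrate what per-pixel phase control enables, below is a minimal sketch of a spatially-varying lens mask for a phase-only SLM: each tile of the display gets its own thin-lens phase profile, so different image regions focus at different distances. The resolution, pixel pitch, wavelength, and two-tile layout are illustrative assumptions, not the prototype's parameters.

```python
import numpy as np

def lens_phase(x, y, f, wavelength=532e-9):
    """Thin-lens (Fresnel) phase for focal length f, wrapped to 2*pi."""
    r2 = x**2 + y**2
    return np.mod(-np.pi * r2 / (wavelength * f), 2 * np.pi)

def spatially_varying_mask(h, w, pixel_pitch, focal_map):
    """Assemble an SLM phase mask where each tile gets its own focal length.

    focal_map: dict mapping (row_slice, col_slice) -> focal length in meters.
    Hypothetical layout; a real system would also handle tile boundaries.
    """
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    mask = np.zeros((h, w))
    for (rows, cols), f in focal_map.items():
        # Center coordinates within each tile and convert pixels to meters.
        yy = (ys[rows, cols] - ys[rows, cols].mean()) * pixel_pitch
        xx = (xs[rows, cols] - xs[rows, cols].mean()) * pixel_pitch
        mask[rows, cols] = lens_phase(xx, yy, f)
    return mask

# Example: left half focuses at 0.5 m, right half at 2.0 m.
mask = spatially_varying_mask(
    1080, 1920, pixel_pitch=8e-6,
    focal_map={
        (slice(None), slice(0, 960)): 0.5,
        (slice(None), slice(960, 1920)): 2.0,
    },
)
```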
Reader FAQ
Can this camera truly focus on everything at once?
The research demonstrates a system that can render multiple depths sharply in a single capture by assigning different focal behavior across the image.
Who built this system?
Researchers at Carnegie Mellon University developed the experimental setup.
Is this technology available in consumer cameras now?
Not confirmed in the source.
How does it differ from taking multiple photos at different focus settings?
Rather than combining separate images shot at different focus settings, the system attempts to produce sharp detail across depths in one capture by using a Lohmann lens, a phase-only spatial light modulator, and region-based autofocus methods.

Sources
- This experimental camera can focus on everything at once
- The perfect shot – College of Engineering at Carnegie …
- This Camera System Can Focus on Everything …
- New Camera Tech Captures Multiple Distances at Once