How computational photography is making your photos better

Phone cameras have gotten way better, and it's all down to smarter software.

Phone cameras have undergone huge improvements in recent years, but they've done so without the hardware changing all that much. Sure, lenses and sensors continue to improve, but the big developments have all been in software. So-called computational photography uses algorithms and even machine learning to stitch together multiple photos, yielding better results than were previously possible from a tiny lens and sensor.

Smartphones are limited by physics. With a small sensor, a physically tiny lens aperture, and a thin body that leaves little room for optics, designing a better phone camera is a serious challenge. In particular, these mini cameras suffer from noise -- digital static in the images -- especially in low light. Combine this with limited dynamic range, and you've got a camera that performs well in bright daylight but whose image quality suffers as the light dims.
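To make the noise problem concrete, here is a minimal Python sketch of photon shot noise, one of the main noise sources in a small sensor. The photon counts are illustrative assumptions, not measurements from any real camera. Because photon arrival follows a Poisson distribution, the signal-to-noise ratio scales with the square root of the light collected, which is why a tiny pixel struggles as light dims.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_exposure(mean_photons: float, pixels: int = 100_000) -> float:
    """Simulate one exposure of a uniform gray patch.

    Photon arrival is a Poisson process, so the count at each
    pixel is Poisson-distributed around the true brightness.
    Returns the signal-to-noise ratio (mean / standard deviation).
    """
    counts = rng.poisson(mean_photons, size=pixels)
    return counts.mean() / counts.std()

# A large sensor pixel in daylight might collect thousands of photons;
# a small phone pixel in dim light might collect only a handful.
# (These numbers are illustrative, not taken from real hardware.)
for photons in (10_000, 100, 10):
    print(f"{photons:>6} photons/pixel -> SNR ~ {simulate_exposure(photons):.1f}")
```

Running this shows SNR dropping from roughly 100 in "daylight" to about 3 in "dim light": the noise doesn't grow, but the signal shrinks much faster.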

To work around this, companies have had to get creative. The biggest advances have all come from ways to stack or combine multiple images in the phone. Depending on how many images are stacked and how clever the algorithms are, this technique can reduce noise, extend the tonal range, take clear shots in the dark, or even artificially boost resolution.
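The simplest form of stacking is plain frame averaging, and a short sketch shows why it helps: averaging N noisy frames of the same scene cuts random noise by roughly the square root of N. This is a toy illustration in Python with numpy; the synthetic "scene" and noise level are assumptions for demonstration, and real phone pipelines also align frames and merge them with far more sophisticated weighting.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# A synthetic "true" scene: a smooth gradient standing in for a real photo.
scene = np.linspace(0.2, 0.8, 256).reshape(1, -1).repeat(256, axis=0)

def noisy_frame(scene: np.ndarray, noise_std: float = 0.1) -> np.ndarray:
    """One simulated capture: the scene plus random sensor noise."""
    return scene + rng.normal(0.0, noise_std, size=scene.shape)

def stack_frames(scene: np.ndarray, n_frames: int) -> np.ndarray:
    """Naive stacking: average n_frames captures of the same scene.

    Real phone pipelines also align the frames first (hands shake)
    and merge them more cleverly, but averaging shows the core idea.
    """
    return np.mean([noisy_frame(scene) for _ in range(n_frames)], axis=0)

for n in (1, 4, 16):
    residual = stack_frames(scene, n) - scene
    print(f"{n:>2} frames -> remaining noise std ~ {residual.std():.3f}")
```

With 16 frames, the leftover noise is about a quarter of what a single shot produces, which is the basic math behind night modes and multi-frame HDR.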

Photographers have long known these techniques, and they're doable in programs like Photoshop, but the breakthrough of computational photography is that all these tricks happen seamlessly and nearly instantaneously inside your phone. All you have to do is tap the shutter, and the software handles the rest.

Upscaled is also available in 4K on YouTube