Key Takeaways
- Google’s Super Res Zoom uses AI to enhance telephoto images on Pixel phones without additional lenses.
- Pixel 9 Pro introduced Super Res Zoom for Video and Zoom Enhance for photos, improving zoom capabilities.
- Using AI to improve photography is fine, but altering reality with editing raises concerns that need to be addressed.
No one wants to carry a zoom lens, let alone a standalone camera that can attach to one. Or at least the vast majority of people don’t, which is why smartphone makers have pushed so hard to add ultrawide and telephoto lenses to their phones. And while they’re still not quite as capable as a DSLR, a phone like the Pixel 9 Pro can capture telephoto images that are at least passable as professional photos.
Of course, Google’s gone to great lengths to make this very thing possible, starting with Super Res Zoom simulating a 2x zoom on the telephoto-less Pixel 3, all the way to Zoom Enhance clearing up cropped photos and Super Res Zoom Video adding a 20x zoom to videos captured with the Pixel 9 Pro and Pro XL. Not everyone might think of Google’s flavor of digital zoom as an AI feature, but it’s one of the earliest examples of the company applying machine learning to create images your phone wouldn’t normally be able to capture on its own.
As skepticism grows around whether anyone should really have all the photo editing skills Google’s Pixel 9 phones offer, Super Res Zoom feels like a clear place where a line can be drawn — using AI to help you take a good photo is very different from being able to warp any image you can get your hands on.
How Super Res Zoom works on Pixel phones
On Pixels without telephoto lenses
In the earlier years of the Pixel, one of Google’s unique approaches to smartphone photography was its commitment to a single lens. The company was able to run laps around its competitors in terms of picture quality, and avoided adding extra bulk to the back of the Pixel by simulating the zoomed shots other smartphones were taking with built-in glass.
Super Res Zoom was introduced on the Pixel 3 and Pixel 3 XL as a way to create a 2x zoom without adding another lens. By capturing multiple frames of the same subject and using the natural shake of your hand to get a slightly different angle on each one, Super Res Zoom on the Pixel 3 added detail to a digitally cropped image. At the time, Google compared the process to the “drizzling” technique space telescopes use to capture detailed photos of astronomical bodies.
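Google hasn’t published its exact implementation, but the core idea, aligning slightly shifted low-resolution frames onto a finer pixel grid so each one fills in detail the others missed, can be sketched in a few lines. Everything here (the function name, known offsets, nearest-cell placement) is a simplification for illustration:

```python
import numpy as np

def drizzle_stack(frames, offsets, scale=2):
    """Toy multi-frame super-resolution: place each low-res frame's
    samples onto a grid `scale` times finer, shifted by that frame's
    sub-pixel (dy, dx) offset, then average wherever samples landed."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, offsets):
        # Round each sample's shifted position to a cell on the fine grid.
        ys = np.clip(np.round((np.arange(h) + dy) * scale).astype(int), 0, h * scale - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * scale).astype(int), 0, w * scale - 1)
        acc[np.ix_(ys, xs)] += frame
        hits[np.ix_(ys, xs)] += 1.0
    return acc / np.maximum(hits, 1)  # avoid dividing empty cells by zero
```

If four frames happen to land on all four sub-pixel phases, every cell of the 2x grid receives a real sample. In a real pipeline the offsets aren’t known in advance; they have to be estimated by aligning the frames, which is where the phone’s hand-shake "randomness" actually helps.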
Super Res Zoom was revelatory at the time, even if it was eventually beaten out by the increasingly powerful physical telephoto lenses Samsung, Apple, and eventually Google itself added to their phones. It was a digitally modified version of the truth, but still grounded in what your eyes and the Pixel 3 could see.
On Pixels with telephoto lenses
From the Pixel 4 onward, Google started supplementing the details its optical telephoto lenses were able to capture with the computational processing it was already doing with Super Res Zoom. So anything past the 5x optical zoom on the telephoto lens of the Pixel 7 Pro, for example, was in a sense a collaboration between your phone’s physical sensor and the details Google’s algorithms and machine learning process were able to “imagine.”
For the Pixel 7 Pro in particular, Google crops into the telephoto camera’s 48-megapixel sensor, then applies a process called “remosaicing” to convert the cropped data into a usable full-resolution image. That image is then cleaned up by two more processes: “HDR+ with Bracketing,” which removes noise, and “Zoom Stabilization,” which makes sure the shot you’re starting with is steady, even from far away.
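Google’s actual processing isn’t public, but the intuition behind the noise-removal step is simple: random sensor noise averages out when several exposures of the same scene are merged. A toy sketch of that idea, with the function name and per-exposure normalization as assumptions for illustration only:

```python
import numpy as np

def merge_bracketed(frames, exposures):
    """Toy bracketed-exposure merge: normalize each frame by its
    exposure time so they describe the same scene brightness, then
    average so per-frame random sensor noise partially cancels."""
    normalized = [frame / t for frame, t in zip(frames, exposures)]
    return np.mean(normalized, axis=0)
```

Averaging N equally noisy frames cuts the noise standard deviation by roughly the square root of N, which is why burst-based pipelines can afford the detail loss that comes with cropping a sensor.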
For 20x zoom and beyond, Google “uses a new ML upscaler that includes a neural network to enhance the detail of your photos,” according to the company’s blog explaining the updated version of Super Res Zoom. “The more you zoom in, the more the telephoto camera leans into AI.”
How Google expanded Super Res Zoom on the Pixel 9 Pro
Bringing a better telephoto experience to video and editing
For the Pixel 9 Pro and Pro XL, Google both expanded Super Res Zoom to video (technically, Super Res Zoom Video) and added a whole other way of cleaning up zoomed-in photos called Zoom Enhance. Unlike still photos, Super Res Zoom Video requires Google’s Video Boost processing to work, a decision that might have to do with how similar the Pixel 9 Pro’s camera specs are to the Pixel 8 Pro’s.
Pixel 9 Pro
- SoC: Tensor G4
- RAM: 16GB
- Front camera: 42-megapixel f/2.2, 103-degree FOV selfie camera
- Rear camera: 50-megapixel f/1.68 wide camera / 48-megapixel f/1.7, 123-degree FOV ultrawide camera / 48-megapixel f/2.8, 5x optical zoom, up to 30x Super Res Zoom telephoto camera
Video Boost keeps a version of your video locally on your phone, but offloads the heavy lifting of upscaling and other AI processing to the cloud. So, to achieve the same effect as what Google’s able to do locally with still photos, your footage will have to make a roundtrip off your phone and back. That’s not to say Super Res Zoom Video isn’t impressive in its own right. The effect is on the subtler side, but there’s more detail when Super Res Zoom is applied in comparison to an untouched video, based on an example Pocket-lint Managing Editor Patrick O’Rourke was able to provide.
Zoom Enhance, unlike Super Res Zoom Video, happens on-device and can be applied after the fact to any photo in your library. Google’s improvements don’t seem as dramatic as it originally proposed, but it’s another helpful tool to have if you need it.
Zoom Enhance is only available on the Pixel 9 Pro and Pro XL, despite not using a phone’s telephoto lens.
On the whole, the improvements and changes to Super Res Zoom (and telephoto photos in general) are good, but the increased emphasis not on what the Pixel 9 Pro is capable of on its own, but on what it can access in the cloud, really highlights how much of the photography experience is artificial. Google still bases the photos you take on the actual light your smartphone captures, but more and more of the result is blended with wholly original detail that technically wasn’t present the moment you pressed the shutter, if it ever existed at all.
Where should we draw the line with computational photography?
There’s no denying that what the Pixel 9 is able to create with generative AI is equal parts impressive, goofy, and scary. Should the concern over the misuse of those tools extend to the capture process, though? Not necessarily, but the two are absolutely connected, especially as more of the photography process involves software in the cloud.
If anything, features like Super Res Zoom or Add Me, the Pixel 9’s clever method for compositing group photos together, are examples of where the line Google shouldn’t cross actually sits. It’s fine to extend what your smartphone is already capable of capturing in a good photo, but anything beyond that should be fair game for criticism. Enhancing the scene you can see through your smartphone screen is questionable, but at least grounded in reality. Outright changing what you see probably shouldn’t be allowed, or at least shouldn’t receive full-throated support from the company that makes your phone.