With an event set for August 13, Google is finally ready to share its take on a Pixel phone with even more AI sprinkled into every corner of Android. I’m sure Google will have plenty of new Gemini-powered generative AI features to announce, but its plan to stand out from Galaxy AI and Apple Intelligence apparently involves leaning even harder into the Pixel’s reality-altering camera abilities.
Leaks have already revealed what the Pixel 9 looks like. If they’re accurate, we should see at least four different phones (a Pixel 9, 9 Pro, 9 Pro XL, and 9 Pro Fold), with the regular, non-folding models sporting iPhone-like rounded corners.
If the AI camera and editing features introduced in the Pixel 7 and Pixel 8 started to dissolve what truth existed in smartphone photos, the details that have leaked out about the Pixel 9 could push things even further, making any image that passes through Google’s newest phones worthy of skepticism.
The Pixel 9 phones will ship with several AI-powered features, at least three of which are entirely new, according to leaked screenshots obtained by Android Authority. Under the umbrella of “Google AI,” these include things like Circle to Search, which was originally introduced on the Galaxy S24 line and later added to Pixel phones and the Pixel Tablet; Gemini, the AI assistant available on all Android 14 phones; and new features called “Add Me,” “Studio,” and “Pixel Screenshots.”
Pixel Screenshots sounds a bit like Microsoft’s Recall, but designed to only analyze and mine information from screenshots you take yourself, rather than passively capturing everything on your screen in the background (and, in Recall’s case, hilariously storing it all in plain text). Per the report, when you take a screenshot, a Pixel 9 will use on-device AI to attach relevant metadata, like web links, app names, and the date you captured the image, to make it easier to look up later. From there, a Pixel 9 can then answer questions about the images you capture.
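Google hasn’t said anything about how that indexing actually works under the hood, but the flow as described (capture a screenshot, extract metadata on-device, search it later) maps onto a pretty simple data model. Here’s a loose sketch in Kotlin of what that pipeline could look like; every name in it is hypothetical, since none of Google’s real APIs or models are public.

```kotlin
import java.time.LocalDate

// Hypothetical sketch of the described flow; none of these names are
// Google's. It just models "screenshot in, searchable record out."
data class ScreenshotRecord(
    val fileName: String,
    val capturedOn: LocalDate,
    val appName: String?,       // app visible in the screenshot, if detected
    val webLinks: List<String>, // URLs the on-device model pulls from the image
    val extractedText: String   // OCR'd text, used to answer questions later
)

class ScreenshotIndex {
    private val records = mutableListOf<ScreenshotRecord>()

    // In the real feature, extraction would be done by an on-device model;
    // here the metadata is simply passed in.
    fun add(record: ScreenshotRecord) {
        records += record
    }

    // A query like "that concert ticket I screenshotted" reduces to
    // matching the query text against the stored metadata.
    fun search(query: String): List<ScreenshotRecord> {
        val q = query.lowercase()
        return records.filter { r ->
            r.extractedText.lowercase().contains(q) ||
                r.appName?.lowercase()?.contains(q) == true ||
                r.webLinks.any { it.lowercase().contains(q) }
        }
    }
}

fun main() {
    val index = ScreenshotIndex()
    index.add(
        ScreenshotRecord(
            fileName = "Screenshot_0613.png",
            capturedOn = LocalDate.of(2024, 6, 13),
            appName = "Chrome",
            webLinks = listOf("https://example.com/tickets"),
            extractedText = "Concert tickets: 2x general admission"
        )
    )
    // Prints the matching record's file name.
    index.search("tickets").forEach { println(it.fileName) }
}
```

The interesting part, of course, is everything this sketch waves away: the on-device model doing the extraction and the natural-language question answering layered on top.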
Add Me, a feature for the Pixel’s Camera app, is only described as being able to “make sure everyone’s included in a group photo.” Considering Best Take on the Pixel 8 already combines multiple photos into one “best” shot where everyone is looking and smiling at the camera (by letting you literally swap heads), Add Me seems like it could go further. Maybe Google has found some way to add someone, maybe even the photographer, to group photos after the fact. Or maybe it’s just a version of Best Take with more customization.
Studio, which has reportedly appeared in strings of code in Android 15 betas as “Creative Assistant,” lets you use generative AI to create images from scratch. These could be used as stickers in Google Messages, but the feature might also be part of the system-wide Markup function in Android 14, or even get its own standalone app. That would mirror how Apple is approaching image generation in Apple Intelligence, with a standalone Image Playground app and image-creating abilities available in other apps like Messages and Notes.
While these features might not be major in their own right, they compound the already flexible approach to reality that Google has enabled with things like Magic Eraser and Magic Editor, and with subtler AI-powered enhancements like Video Boost and Photo Unblur. It’s a bit wild when you say it out loud, but Google seems committed to giving you complete control over the images you capture, to the point where what you actually capture barely matters.
It’s easy to get concerned about the philosophical implications of Google’s approach to image-making. As Google’s head of Pixel cameras told Inverse earlier this year, the company is designing Pixel cameras to go “beyond physics” with software enhancements (many of which use AI). That guiding principle means the goal isn’t just a Pixel camera that replicates a scene exactly as it happened in reality, but one that lets you capture your “memories” as you want to remember them.
“We’re not really competing with SLRs,” Google Group Product Manager Isaac Reynolds told Inverse while outlining the company’s approach to photography. “We’re competing with an entire workflow.”
If the Pixel’s camera experience is an attempt to swallow Photoshop and digital cameras in one bite, something like Studio just makes the bite bigger, maybe big enough to take in Illustrator or Procreate, too.
Ultimately, these are just tools, and it’s not clear how many people are actually getting wild with Magic Editor every day and passing off altered images as genuine. It’s probably not as many as I fear, though I agree with the general sentiment that it’s not quite accurate to call what happens on the Pixel “photography” in the traditional sense. But this discomfort seems to be part of the plan. If there’s anything that makes the Pixel different from Samsung’s Galaxy phones or Apple’s iPhones, it’s that Google’s phones dance on the edge of what feels acceptable for a phone to do.
This boundary pushing has already had an impact on Google’s competitors. Galaxy AI relies in part on Gemini, and basically all of the AI-powered photo editing features Samsung introduced in early 2024 copied what Magic Editor and Magic Eraser do. Apple barely touched on it during its WWDC keynote, but one of Apple Intelligence’s features is a new function in the Photos app called Clean Up that lets you remove people and objects from photos, just like Magic Eraser.
It’s not hard to imagine the rest of Google’s AI features making their way to Samsung and Apple’s devices down the road. But by dissolving the hard boundaries of what a smartphone camera is supposed to do (capture snapshots), Google is starting to shift the definition of photography, and what’s acceptable in a phone photo, for everyone. Unless there’s some major pushback from consumers, there doesn’t seem to be an end in sight to Google’s mixing of AI-powered features and computational photography into Pixel hardware.