Samsung has published an English blog post explaining the techniques its phones use to photograph the moon. The content of the post isn't exactly new (it appears to be a lightly edited translation of an article posted in Korean last year) and doesn't offer many new details about the process. But because it's an official translation, we can take a closer look at Samsung's explanation of what its image processing technology does.
The post is a response to a viral Reddit post that showed, in stark terms, how much extra detail Samsung's camera software adds to images when taking a picture of what appears to be the moon. This criticism is not new (Input published a long piece about Samsung's lunar photography in 2021), but the simplicity of the test brought the issue fresh attention: Reddit user ibreakphotos simply snapped a photo of an artificially blurred image of the moon with a Samsung phone, and the phone added extra detail that didn't exist in the original. You can see the difference for yourself below:
Samsung’s blog post today explains that the “Scene Optimizer” feature combines several techniques to generate better pictures of the moon. For starters, the company’s Super Resolution feature kicks in at zoom levels of 25x and above, using multi-frame processing to combine more than 10 images to reduce noise and improve clarity. It also optimizes exposure so the moon doesn’t look blown out in the dark sky, and uses a “Zoom Lock” feature that combines optical and digital image stabilization to reduce image blur.
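Samsung hasn't published the actual algorithm behind its multi-frame processing, but the general idea of merging many captures to reduce noise is straightforward. Here's a minimal sketch (assuming the frames are already aligned, which a real pipeline would have to handle, along with outlier rejection and exposure weighting):

```python
import numpy as np

def multi_frame_average(frames):
    """Average a stack of aligned frames to reduce random sensor noise.

    Assumes the frames are already registered; a real pipeline would
    also align frames and reject outlier pixels before merging.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Synthetic demo: 12 noisy captures of the same (made-up) scene.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, size=(64, 64))
frames = [scene + rng.normal(0, 20, size=scene.shape) for _ in range(12)]
merged = multi_frame_average(frames)

# Because the noise is independent per frame, averaging N frames cuts
# its standard deviation by roughly a factor of sqrt(N).
single_err = np.abs(frames[0] - scene).mean()
merged_err = np.abs(merged - scene).mean()
```

The sqrt(N) noise reduction is why combining "more than 10 images" meaningfully improves clarity at high zoom, where each individual frame is starved for light.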
The actual identification of the moon is done primarily with an “AI deep learning model” that is “built from a variety of moon shapes and details, from full moons to crescents, and is based on images taken from our point of view from Earth.”
But the most important step, and the one that has caused all the controversy, seems to be the use of a little-discussed "AI detail enhancement engine." Here's how Samsung's blog post describes the process:
“After Multi-frame Processing takes place, the Galaxy camera leverages Scene Optimizer’s deep learning-based AI detail enhancement engine to effectively eliminate residual noise and further enhance image detail.”
And here's Samsung's flowchart of the process, which describes the Detail Enhancement Engine as a convolutional neural network (a type of machine learning model commonly used to process images) that compares the detail-enhanced result against a "Reference high resolution" image.
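The flowchart names a convolutional neural network but gives no architecture, so any code here is purely illustrative. The core operation such a network stacks and repeats is a learned 2D convolution; the sketch below applies a single hand-picked sharpening kernel (rather than trained weights, which Samsung has not released) to show what one such layer does to an image:

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid'-mode 2D convolution -- the building block of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

# A classic hand-picked sharpening kernel; a trained network would learn
# many such kernels (plus nonlinearities) from example moon photos.
sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)

blurry = np.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
enhanced = conv2d(blurry, sharpen)
```

The controversy comes from what the training does: a network optimized to match "reference high resolution" moon images can synthesize plausible lunar texture that was never captured by the sensor, rather than merely sharpening what was.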
It seems to be this stage that adds detail that wasn't there when the photo was originally taken, and it could explain why the follow-up test by ibreakphotos (inserting a solid gray square onto a blurry photo of the moon) resulted in the blank square being given a moon-like texture by Samsung's camera software.
While this new blog post provides more details in English than Samsung has offered publicly before, it's unlikely to satisfy those who see software capable of generating a realistic image of the moon out of a blurry photo as essentially faking the whole thing. And when these AI-powered capabilities are used to advertise phones, Samsung risks misleading customers about what its phones' zoom functions are capable of.
But, as my colleague Allison wrote yesterday, Samsung's camera software isn't far removed from what smartphone computational photography has been doing for years to get increasingly sharper and more vibrant photos from relatively small image sensors. "Year after year, smartphone cameras go one step further, trying to make smarter guesses about the scene you're shooting and how you want it to look," Allison wrote. "These things all happen in the background, and we generally like them."
Samsung's blog post ends with a telling line: "Samsung continues to improve Scene Optimizer to reduce potential confusion that may arise between taking a photo of the real moon and a photo of an image of the moon." (Our emphasis.)
On one level, Samsung is essentially saying, "We don't want to be fooled by more creative redditors taking pictures of images of the moon that our camera mistakes for the moon itself." But on another level, the company is also acknowledging how much computation goes into creating these images, and that it will continue to lean on it going forward. In other words, we're left asking the same question: "what is a photo anyway?"