Why every photo you take is “fake”


Smartphones are under fire for “faking” or “cheating” their way to high-quality photos. But every photo in existence contains some level of fakery, and that’s not a bad thing.

Artificial intelligence has invaded your smartphone camera with only one goal: to ruin your photos and fill your head with lies! At least, that’s the idea you might see in some headlines. Smartphone camera technology is advancing rapidly, leading to some confusion about what is “real” and “fake.”

Well, I have good news: every photo that exists is “fake.” It doesn’t matter if it was shot on a 2023 smartphone or a 1923 film camera. There are always a few behind-the-scenes tricks.

The physical limitations of phone cameras

If you were to stick a full-size camera lens on a phone, it would be an eyesore. Smartphones need to be small, compact, and somewhat durable, which is why they tend to use incredibly small sensors and camera lenses.

This tiny hardware creates several physical limitations. While a smartphone might have a 50MP sensor, the sensor size is quite small, meaning less light can reach each pixel. This leads to reduced low-light performance and can introduce noise into an image.

The size of the lens is also important. Tiny camera lenses can’t bring in a ton of light, so you end up with reduced dynamic range and, once again, reduced low-light performance. A tiny lens also means a small aperture, which can’t produce a shallow depth of field for background blur or “bokeh” effects.
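To put rough numbers on that pixel-size problem, here’s a back-of-the-envelope sketch in Python. The sensor dimensions and resolutions below are illustrative assumptions, not the specs of any particular phone or camera.

```python
# Rough comparison of per-pixel light-gathering area.
# All dimensions and megapixel counts are illustrative assumptions.

def pixel_area_um2(width_mm: float, height_mm: float, megapixels: float) -> float:
    """Approximate area of a single pixel in square micrometers."""
    sensor_area_um2 = (width_mm * 1000) * (height_mm * 1000)
    return sensor_area_um2 / (megapixels * 1_000_000)

phone = pixel_area_um2(9.8, 7.3, 50)          # roughly 1/1.3"-class phone sensor, 50MP
full_frame = pixel_area_um2(36.0, 24.0, 50)   # full-frame sensor, 50MP

print(f"Phone pixel:      {phone:.2f} um^2")
print(f"Full-frame pixel: {full_frame:.2f} um^2")
print(f"Each full-frame pixel collects roughly {full_frame / phone:.0f}x more light")
```

With these made-up but plausible numbers, each pixel on the big sensor gathers around ten times more light than a pixel on the phone sensor, which is exactly why the phone has to work so much harder in the dark.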

On a physical level, smartphones cannot take high-quality photos. Advances in sensor and lens technology have greatly improved the quality of smartphone cameras, but the best smartphone cameras come from brands that use “computational photography.”

Phone cameras use “cheating” software


The best smartphone cameras come from Apple, Google, and Samsung, three leaders in software development. This is not a coincidence. To overcome the hardware limitations of smartphone cameras, these brands use “computational photography” to process and enhance photos.

Smartphones use multiple computational photography techniques to produce a high-quality image. Some of these techniques are predictable; a phone will automatically adjust a photo’s color and white balance, or it can “beautify” a subject by detecting, focusing on, and brightening their face.

But the most advanced computational photography techniques go beyond simple image editing.

Take “stacking,” for example. When you press the shutter button on your phone, it captures multiple images in rapid succession. Each image is taken with slightly different settings: some are blurry, some are underexposed, and some are blown out. All of these frames are combined to produce an image with high dynamic range, strong colors, and minimal motion blur.

An example of night photography on the iPhone 11. Image: Apple

Stacking is the key concept behind HDR photography and is the starting point for a host of computational photography algorithms. Night mode, for example, uses stacking to produce a bright night image without a long exposure time (which would lead to motion blur and other issues).
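As a rough illustration of the stacking idea (not how Apple, Google, or Samsung actually implement it), here is a minimal Python sketch using OpenCV’s exposure-fusion tools. The filenames are placeholders, and real phone pipelines add alignment, denoising, tone mapping, and machine-learning models on top of this.

```python
import cv2
import numpy as np

# Placeholder filenames for a burst of frames captured at different exposures.
frames = [cv2.imread(p) for p in ("under.jpg", "normal.jpg", "over.jpg")]

# Align the frames first, since the phone (and the subject) moves between shots.
cv2.createAlignMTB().process(frames, frames)

# Fuse the aligned frames, favoring the well-exposed regions of each one.
fusion = cv2.createMergeMertens().process(frames)

# The fused result is a float image in roughly the 0-1 range; convert back to 8-bit.
cv2.imwrite("stacked.jpg", np.clip(fusion * 255, 0, 255).astype(np.uint8))
```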

And, as I mentioned earlier, smartphone cameras can’t produce a shallow depth of field. To get around this problem, most smartphones offer a portrait mode that uses software to estimate depth. The results are pretty hit or miss, especially if you have long or frizzy hair, but it’s better than nothing.
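Here’s a toy Python sketch of the idea behind software bokeh, assuming you already have a depth map to work with (real portrait modes estimate depth with dual cameras or neural networks and blend far more carefully). The filenames and the depth threshold are made up for illustration.

```python
import cv2
import numpy as np

image = cv2.imread("portrait.jpg")                     # placeholder photo
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE)  # placeholder depth map
depth = cv2.resize(depth, (image.shape[1], image.shape[0]))

# Assume brighter depth values mean "closer"; pick a cutoff for the subject.
subject_mask = (depth > 128).astype(np.float32)
subject_mask = cv2.GaussianBlur(subject_mask, (31, 31), 0)  # soften the mask edge
subject_mask = subject_mask[..., np.newaxis]

# Fake the background blur a large lens would produce optically.
blurred = cv2.GaussianBlur(image, (51, 51), 0)

# Blend: sharp subject where the mask is 1, blurred background where it is 0.
output = image * subject_mask + blurred * (1.0 - subject_mask)
cv2.imwrite("portrait_bokeh.jpg", output.astype(np.uint8))
```

The hard part, of course, is the depth map itself; a bad depth estimate is exactly why stray hairs and glasses frames end up smeared into the background.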

Some people believe that computational photography is “cheating” in that it misrepresents the capabilities of your smartphone’s camera and produces an “unrealistic” image. I’m not sure why this would be a serious concern. Computational photography is imperfect, but it allows you to take high-quality photos with low-quality hardware. In many cases, this brings you closer to a “realistic” and “natural” image with a sense of depth and dynamic range.

The best example of this tension is Samsung’s “moon controversy.” To show off the zoom capabilities of the Galaxy S22 Ultra, Samsung created a lunar photography algorithm. Basically, it’s an AI that makes horrific images of the moon look a little less horrific by adding details that don’t exist in the original image. It’s a mostly useless feature, but if you need to take a photo of the moon with a camera that’s smaller than a penny, I think a bit of “cheating” is necessary.

That being said, I am concerned about the deceptive ways some companies market their computational photography tools. My biggest gripe is the “shot on iPhone” or “shot on Pixel” nonsense that phone manufacturers push every year. These ads are made with million-dollar budgets, huge add-on lenses, and professional editing. The idea that you could produce one of these ads with nothing more than a smartphone is a stretch, if not an outright lie.

This is nothing new

A very broken camera.

Some people are not happy with computational photography. They argue that it misrepresents reality and therefore must be bad! Cameras should give you the exact image that enters the lens; anything else is a lie!

Here’s the thing: every photograph contains some level of “falsehood.” It doesn’t matter whether the photo was taken with a phone, a DSLR, or a film camera.

Let’s look at the film photography process. Camera film is coated with a photosensitive emulsion. When the shutter is released, this emulsion is exposed to light, leaving an invisible chemical trace of an image. The film is then developed in a series of chemical baths to produce a permanent negative, which is projected onto emulsion-coated paper to create a print (okay, photographic paper needs a chemical bath too, but that’s the gist).

Each step in this process affects the appearance of an image. One brand of film may oversaturate the reds and greens, while another brand may appear dull. Darkroom chemicals can alter the color or white balance of an image. And printing an image on photographic paper introduces even more variables, which is why many film labs use a reference sheet (or a computer) to adjust color and exposure.

Most of the people who owned a film camera were not professional photographers. They had no control over the printing process, and they certainly didn’t choose the chemistry of their film. Sound familiar? Film manufacturers and photo labs were the “computational photography” of their day.

But what about modern mirrorless and DSLR cameras? Well, I’m sorry to say, but all digital cameras do some photo processing. They correct for lens distortion and reduce noise in a photo. And the most common form of processing is the conversion to a compressed file, which bakes in the camera’s color and white-balance choices and discards tonal information (an 8-bit JPEG can only store about 16.7 million colors, far less than the sensor captures). Some cameras allow you to save RAW image files, which are minimally processed but tend to look “flat” or “dull” without some editing.
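A quick way to see what gets thrown away is to squeeze a simulated high-bit-depth capture into 8-bit JPEG-style channels. The numbers below are illustrative, not taken from any real camera pipeline.

```python
import numpy as np

# Pretend this is one channel of a 12-bit RAW capture: 4096 possible levels.
raw = np.arange(4096, dtype=np.uint16)

# JPEG stores 8 bits per channel, so only 256 levels survive the conversion.
jpeg = (raw >> 4).astype(np.uint8)

print("Distinct levels in RAW: ", len(np.unique(raw)))   # 4096
print("Distinct levels in JPEG:", len(np.unique(jpeg)))  # 256

# Every group of 16 neighboring RAW tones collapses into a single JPEG tone,
# which is why RAW files leave more room for recovering highlights and shadows.
```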

All photos are “fake,” and it’s not a big deal

A person using 100x zoom on a Samsung Galaxy S23 Ultra. Image: Justin Duino / Geek Review

Reality is an important part of photography. Sometimes we want a photograph that accurately represents a moment in time, flaws and all. But most of the time, we ask our cameras to capture a good image even in unfavorable circumstances. In other words, we ask for a little falsehood.

That kind of fakery requires technology that goes beyond the camera lens. And computational photography, despite its imperfections and its marketing spin, is the technology we need right now.

That said, companies like Google, Apple, and Samsung need to be more transparent with their customers. We are constantly bombarded with advertisements that exaggerate the truth, leading many people to believe that smartphones are comparable to full-size or professional-grade cameras. This is simply not true, and until customers understand what is going on, they will continue to get angry at computational photography.




