Unknowingly publishing a doctored photograph is every picture editor’s worst fear. Now, with the startlingly fast evolution of AI and a new tool from Adobe, that fear has the potential to become an alarming reality.
I almost published a doctored photo in 2008 when I was a picture editor at The New York Times. The Iranian government released an image from a provocative missile test that looked a little…off.
After some quick cutting and pasting in Photoshop, I discovered that the second missile from the right was cloned into the frame — a combination of two other missiles in the image. Thankfully, I’d caught it in time and the photo didn’t land on Page One the following day.
The Iranian government eventually released this image, proving that the missile had failed to launch and that the photo had been manipulated. Unfortunately, it was too late: the doctored photo had already run on the front pages of the Los Angeles Times, Chicago Tribune, The Boston Globe, and others.
A few months later, in January 2009, the Sri Lankan Ministry of Defence released this ridiculous, poorly doctored photo. The Sri Lankan army claimed to have seized rebel headquarters in the town of Kilinochchi, citing this image as evidence. Clearly a fake, easily spotted.
I wasn’t as lucky in 2011 at TIME. The state news agency of North Korea released an epic, sweeping landscape from Kim Jong Il’s funeral procession, compelling enough that we published it as a double-truck in the following issue.
Given the source, the KCNA, I should have been more diligent (people were removed from the left side of the frame and snow was “added” in multiple areas). A big mistake, and one of many I’ve learned from. The undoctored version can be seen here.
Had AI existed then and been as accessible as it is today, would we have ever known these images were manipulated? Probably not.
What about now? The North Korean government recently released these photos. See any red flags?
FWIW, Iran’s missile launch photos have improved drastically since 2008. This photo was released in 2021. Not AI-generated…yet.
It’s not just dictatorships doctoring photos. The October 2022 cover of The Atlantic is highly manipulated, evident only upon viewing the original photo by Rafal Milach.
See the difference now? The Atlantic horizontally flipped Milach’s photo to make room for its “A” logo. Yet the text on the smoke bomb is not mirrored. Design won.
Last week, an “apparently AI-generated image” of an explosion near the Pentagon went viral on Twitter. It was clear right away that the image was fake, but the internet moves way faster than a few diligent photo editors.
Until reliable, easy-to-use tools to detect manipulated or AI-generated photos exist, how do we combat disinformation?
Transparency is critical. When a photo is manipulated, the manipulation must be disclosed whenever the image is published or shared (The Atlantic failed to do so above).
When any AI-generated photo is published or shared, the corresponding “prompt” must accompany it, ideally in the caption and embedded in the metadata. Prompts are the new RAW files.
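A minimal sketch of that idea, assuming the Python piexif library and a hypothetical AI-generated file named generated.jpg, showing one way a prompt could be written into standard EXIF fields so it travels with the image:

```python
import piexif

# Hypothetical example: embed the generation prompt in two standard EXIF
# fields (ImageDescription and UserComment) so it stays inside the JPEG.
prompt = "example prompt used to generate this image"  # placeholder prompt

exif_dict = piexif.load("generated.jpg")  # read any existing EXIF data
exif_dict["0th"][piexif.ImageIFD.ImageDescription] = prompt.encode("utf-8")
# The EXIF spec requires an 8-byte character-code prefix on UserComment.
exif_dict["Exif"][piexif.ExifIFD.UserComment] = b"ASCII\x00\x00\x00" + prompt.encode("ascii")

# Write the updated EXIF block back into the file in place.
piexif.insert(piexif.dump(exif_dict), "generated.jpg")
```

Of course, metadata can be stripped or forged, so this only works alongside disclosure in the visible caption.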
Never publish or share handout photos from any corporation or government entity.
I’ll also never forget what the late, great Michele McNally always stressed: eternal vigilance.
Here is a photo I made of Michele, Charles Blow, and Tom Bodkin on election night in 2004 at The New York Times. On film, no prompts.