
From arrested Trump to puffer-jacketed Pope, how can you spot a fake AI-generated image?

By Ratziel San Juan Published Mar 29, 2023 6:26 pm

As technology improves, it becomes increasingly difficult to distinguish the images we encounter that are genuine from those that are false and misleading.

AI tools already accessible to the general public are also growing in popularity, meaning we can now expect to see more "realistic" visuals like photos and videos on our social media feeds.

These range from an image of a supposed arrest of former US President Donald Trump to the somewhat more believable shot of Pope Francis in a puffer jacket. Anyone who has seen the latter can hardly blame the many netizens who fell for it, including celebrities like Chrissy Teigen.

Viral AI-generated image of Pope Francis

"Visuals are particularly powerful vehicles for disseminating misleading information, as our brains are less likely to be critical of visuals," according to UNESCO's Journalism, Fake News, and Disinformation handbook, which attributed this rapid growth in visual content uploaded to social platforms to wider use of smartphones, inexpensive or free mobile data, and social media.

This is why journalists and the public alike must train themselves to tell apart what is real from what isn't. PhilSTAR L!fe rounded up some tips that might help.

Check visual inconsistencies and know the picture's context

"You might have seen what seem to be realistic images like these circulating online, the result of rapid advances in artificial intelligence known as Generative AI. Some ultrarealistic images of news events have already been mistaken for real ones and shared on social media platforms," global news agency Agence France-Presse (AFP) wrote.

AFP Fact Check has since published an online guide to identifying AI-generated images, with tips on how to tell real photos from fakes.

"Visual inconsistencies and a picture's context can help—but there is no foolproof method of identifying an AI-generated image," AFP said.

AFP Fact Check recommends the following techniques depending on the circumstances: doing a reverse image search, searching for visual clues, looking for a watermark, finding visual inconsistencies, checking the background, using common sense, and double-checking with official sources.

"Some elements may not be distorted but they can still betray an error of logic," AFP wrote.

Recognize the red flags and determine the intention

Poynter, a nonprofit media institute and newsroom that provides fact-checking, media literacy, and journalism ethics training, similarly identifies several red flags.

Specifically, Poynter Institute senior faculty for broadcast and online Al Tompkins acknowledged that "while it is increasingly difficult to spot AI-generated images, there are signs to watch for."

According to him, creators of AI-generated images typically disclose how an image was made, even if they are not required to do so.

Some AI generators include a watermark in the bottom right corner that, of course, can still be cropped out but is nevertheless a clue.
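As a rough illustration of that watermark check, the snippet below crops out the bottom-right corner of an image for closer inspection. It assumes only the Pillow package; the 15 percent corner size is an arbitrary choice, not a standard watermark location.

```python
# A small sketch for pulling out the bottom-right corner of an image, where
# some AI generators place a visible watermark. Uses Pillow; the 15% corner
# size is an arbitrary choice, not a standard.
from PIL import Image

def crop_watermark_corner(path, fraction=0.15):
    img = Image.open(path)
    width, height = img.size
    left = int(width * (1 - fraction))
    top = int(height * (1 - fraction))
    return img.crop((left, top, width, height))

# corner = crop_watermark_corner("suspect.jpg")  # hypothetical file
# corner.show()  # inspect the corner at full size for a logo or watermark
```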

"Look for oddities, called remnants. This is your best shot at easy detection. I find that most often AI has trouble with hair and jawlines. Also, look for mismatched earrings and eyeglasses," Tompkins wrote.

Similarly, nonprofit journalism resource First Draft News issued a visual verification guide for both photos and videos.

Both guides ask five questions: Are you looking at the original version? Do you know who captured the photo/video? Do you know where the photo/video was captured? Do you know when the photo/video was captured? Do you know why the photo/video was captured?

"Each step is presented in graded traffic light colors to acknowledge that it is rarely possible to be 100% confident in every aspect of an eyewitness photograph," the guide read.

Know the right tools and understand the enemy

Research organization RAND Corporation listed 82 tools that fight disinformation in seven categories: bot and spam detection, codes and standards, credibility scoring, disinformation tracking, education and training, verification, and whitelisting. The full list of tools is available on RAND's website.

Just as important as equipping yourself with the necessary tools is knowing how to use them, according to ING senior data scientist Albert Yumol, who added that such AI issues aren't new. Yumol previously worked as a lead machine learning engineer at Omdena, a collaborative platform that builds innovative, ethical, and efficient AI and data science solutions to real-world problems.

"The concept is not new. AI from the get-go is inspired by replicating what humans can do. For example, when we learned how to make computers generate text e.g. for our passwords, we created CAPTCHA which stands for Completely Automated Public Turing test to tell Computers and Humans Apart," Yumol told L!fe.

He explained that AI detection is embedded in how modern algorithms work. Many of them use a concept called Generative Adversarial Networks (GANs), which for the data scientist totally changed the game: "In a GAN, there are two models: the generator (creates new images) and the discriminator (classifies images or any other data as real or fake). The two models are trained together until the discriminator is fooled most of the time, which means that the generator generates very realistic images," Yumol began.

"That is why it is getting harder to spot fakes because algorithms are already using fake image detection inherently. It is built to fool us."

Even this advanced technology, however, has its flaws.

"At the current stage, AI is not perfect. We call these imperfections artifacts. Examine the images really well and find something off e.g. extra fingers, misplaced pixels. AI images also often look cartoonish or lack realism. Look for distorted shapes or lines or unrealistic colors. Look past the main subject. Investigate the reflections and the background. Find something misplaced or any patterns that repeat. You may see the same object repeated multiple times which is less likely," Yumol advised.

He also advised reading any text included in an image, as current AI generators are bad at rendering it, with most text coming out "scrambled and misspelled."
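That text check can also be automated in a rough way. The sketch below pulls whatever text appears in an image using the pytesseract wrapper (which requires the Tesseract OCR engine to be installed) and scores how much of it looks like real words; the tiny word list and the scoring heuristic are crude assumptions for illustration.

```python
# A small sketch of the text check: extract any text visible in the image with
# pytesseract (requires the Tesseract OCR engine installed) and flag how much
# of it looks like real words. The word list and threshold are crude
# assumptions for illustration only.
import re

from PIL import Image
import pytesseract

COMMON_WORDS = {"the", "and", "of", "to", "in", "for", "on", "with", "news", "police"}

def extract_and_score_text(path):
    text = pytesseract.image_to_string(Image.open(path))
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return text, 0.0
    recognizable = sum(w in COMMON_WORDS for w in words)
    return text, recognizable / len(words)

# text, score = extract_and_score_text("suspect.jpg")  # hypothetical file
# A very low score on an image full of signs or headlines is a warning sign.
```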

Finally, Yumol suggested that people implement a universal protocol for using Generative AI, such as applying watermarks or labels to all AI-generated images.
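As a sketch of what such a labeling protocol might look like, the snippet below stamps a visible "AI-generated" label on an image and records a provenance note in its PNG metadata using Pillow. The metadata key names are made up for this example, not an industry standard.

```python
# A sketch of the labeling idea: stamp a visible "AI-generated" label on the
# image and record a note in the PNG metadata. The metadata key names are a
# made-up convention for this example, not an industry standard.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(in_path, out_path, tool_name="unknown generator"):
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 20), "AI-generated", fill="white")

    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")      # hypothetical key
    metadata.add_text("ai_generator", tool_name)   # hypothetical key
    img.save(out_path, format="PNG", pnginfo=metadata)

# label_as_ai_generated("output_from_model.png", "labeled.png", "example-model")
```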

"Ultimately, the responsibility should be shared. We need to agree as a society on how we utilize this tech ethically," the data scientist concluded.