It's getting harder to identify AI now. Here's why we should care.
Every week, PhilSTAR L!fe explores issues and topics from the perspectives of different age groups, encouraging healthy but meaningful conversations on why they matter. This is Generations by our Gen Z columnist Angel Martinez.
No matter how grim our online state of affairs has gotten in recent years, it’s comforting that we can always fall back on heartwarming videos.
A recent favorite of mine, which you’ve probably seen too, is footage of seven dogs walking in unison along a busy highway in Changchun, Jilin province. The runaway pack can be seen surrounding an injured German Shepherd, with a Corgi up front leading the way. Such a rare sight, of course, garnered extensive media coverage and captured the hearts of viewers worldwide.
How heartbreaking it was, then, to learn that the video’s narrative was fake and even fueled by AI-generated disinformation: "movie trailers" made with ChatGPT, and fake news channels "reporting" on the animals’ reunion with their owners. In reality, these weren’t rescues on a perilous journey home but simply pets of local villagers roaming freely, and most of us couldn’t tell the difference.
When generative AI first came to the fore, it was quite easy to spot disinformation. (Just look out for the extra fingers or the gruesome facial expressions!) Some of us would poke fun at our elders, who struggled to catch the telltale signs. Little did we know that generative AI models were “improving rapidly in mimicking reality,” leaving us just as helpless far sooner than we expected.

With AI so deeply entrenched in our daily lives, I find that many of us aren’t skeptical enough of how and why it’s used. Gen Z reports both high usage of and trust in mainstream AI technologies: about half of 14- to 29-year-olds surveyed say they use AI either daily or weekly, with regular users reporting feeling curious (69%), excited (44%), and hopeful (38%) about the technology. Another report shows that 58% use it for skills development and learning.
To make matters worse, fact-checking isn’t exactly second nature to most people. Digital media literacy is already weak as it is, thanks in no small part to gaps in education. But even among the more "educated" members of Gen Z, some lack the critical thinking needed to spot synthetic content. “Fake news is optimized for engagement. It evokes strong emotions and resonates with your beliefs or worldviews. It’s deliberately sent to people who are likely to believe it,” Dr. Mercedes Rodrigo, head of the Ateneo Laboratory for the Learning Sciences, tells PhilSTAR L!fe.
Unfortunately, it is under these conditions that disinformation operations thrive. Artificial intelligence is now being weaponized to manipulate public perception and tear down political opponents. Recently, I helped out on a study where respondents were asked to watch an AI-generated video of Vice President Sara Duterte apologizing for her father’s war on drugs. Although the premise alone was enough to spark doubt, many of our participants—including members of Gen Z!—thought it was breaking news.
Outside of this experiment, there are social media users who routinely wield AI for digital smear campaigns. An Al Jazeera article finds that AI-generated videos from Marcos' supporters seek to “whitewash [Marcos Sr.’s] brutal rule during the ’70s and ’80s,” while those from Duterte's supporters are mostly made to “discredit the ICC, demonize their detractors, and paint [the Duterte] family as persecuted victims.” Unsurprisingly, a recent study showed that Gen Z’s support for Duterte’s ICC trial has declined steeply, which I discussed in last week’s column. After all, who or what are we supposed to believe anymore?

In spite of this, I believe not all hope is lost. Our demographic’s AI adoption may remain steady, but skepticism is also climbing. Going back to the Gallup survey cited earlier, 60% of respondents report feeling anxious about AI, and 59% say they feel angry.
Rodrigo stresses that this is a crucial attitude to develop: “When approaching highly emotional content, we should do so with a certain amount of skepticism. Don’t believe it immediately. Cross-check, verify with reliable sources.” Unfortunately, this also means becoming skeptical of the truth, an undesirable byproduct of the times.
Another method of self-protection she suggests is psychological inoculation: “exposing ourselves to weak forms of misinformation, refuting them with facts, and identifying markers of fakery such as unnatural movements, so we become more discerning and discriminating.” So yes, that means that fake videos of traveling animals or talking babies might be a good place to start.
It’s inconvenient to stay on guard when all we want is some innocent doomscrolling, and I completely understand. But spotting fake news may be our first line of defense in this important fight. One less AI-generated video shared with somebody else is one less person fooled, and one less person fooled is one less opportunity for the powers that be to prey on us for their own benefit.
Generations by Angel Martinez appears weekly at PhilSTAR L!fe.
