What Andi Eigenmann can teach us about using ChatGPT
For the past year and a half, my work as a journalist and technologist has focused on figuring out use cases for generative AI. I’ve explored how to use ChatGPT to process Commission on Audit reports and to analyze the Philippine national budget. This past weekend, however, I saw a new use case of ChatGPT courtesy of the actress Andi Eigenmann.
For those who aren’t lurkers on the ChikaPH subreddit and aren’t updated on showbiz happenings, a quick recap: Ms. Eigenmann took to social media to air her issues with her partner Philmar Alipayo, who apparently got matching tattoos with a female friend. The tandem tattoos read “2-2-4,” which stands for “Today, Tomorrow, Forever.” You learn something new every day.
Anyway, in one of her Instagram stories, while the issue was going on, she posted a conversation with ChatGPT.
Andi: Do you think it's sus when a friend invites your life partner to get a love couple's tattoo with them without even filling you in or consulting you???
ChatGPT: That situation can definitely seem suspicious or raise some red flags. Open communication and trust are important in any relationship, so if your friend is inviting your partner to do something so personal without discussing it with you, it might feel like a breach of trust. It's worth having a conversation with both your partner and your friend to express your feelings and understand their perspectives.
Andi: But me and my life partner were on a fight when this happened and she immediately comes to his rescue and invites him to do this? And also I sent her a message and she literally just did not care to respond at all.
ChatGPT: That lack of response can feel really frustrating and hurtful, especially in a sensitive situation like this. It might indicate that your friend isn't fully aware of the impact of their actions, or they might be avoiding the conversation altogether. It could be helpful to have a direct conversation with her, expressing how her actions and silence affected you. This might lead to a better understanding of the dynamics at play and help you decide how to move forward with both your friend and your partner.
I was fascinated not just as a marites, but also as someone who works on a lot of AI projects. But it shouldn’t be a surprise that ChatGPT use has become part of everyday Philippine life, even celebrity drama.
Filipinos are already among the biggest users of ChatGPT. The World Bank ranked the Philippines No. 4 in the world in ChatGPT use as of March 2024. I’ve been tracking that data on the analytics platform SimilarWeb, and the Philippines is usually in the Top 10 of countries sending traffic to ChatGPT each month.
In January 2025, in fact, ChatGPT was the No. 4 most accessed website in the Philippines, next only to Google, Facebook, and YouTube.
Interest then goes down during school holidays—Undas, Christmas, Holy Week, then a big valley when the school year ends—only to climb back up when classes begin again.
Officials of OpenAI, the lab behind ChatGPT, have noticed this themselves. Last October at the OpenAI Education forum, their CFO noted a 90% uptick in ChatGPT use in the Philippines once the school year began. Special mention, proud to be Pinoy!
So, no points for guessing that students use AI for homework.
This is problematic for three main reasons. First, AI chatbots like ChatGPT are still prone to errors, called hallucinations, even though their answers sound correct. They’re like your classmates who didn’t study but still answer confidently during recitation. If you don’t know the right answer, you’ll flunk too.
Second, these chatbots were trained more heavily on data from the West: books, articles, and journals in which information from the Philippines hasn’t been a priority. So ChatGPT is naturally dumber about Philippine topics.
Third, of course, is that students might become lazy, and worse, they won’t learn anything.
We need to find out how to deal with this because there’s no putting this genie back in the bottle. Everyone now uses ChatGPT, even your favorite island girl.
Around the world, educators are trying to figure out how to effectively integrate AI chatbots into their lesson plans. When I did my fellowship at Harvard University last year, one of my classes allowed students to use AI, but required them to document how they used it to generate their papers so that everyone could learn from the experience. That approach requires students to already know what they’re talking about, and to use ChatGPT only to help them express it.
In my work training journalists and newsrooms on AI use, I usually talk about the limitations of AI as much as its capabilities: it’s not a good source of information, but it’s a good tool for processing information. For example, you can use it to digest a large document in seconds or to analyze a big budget dataset, but you have to verify the accuracy of its results along the way. In short, you need to be able to check whether its output is correct, which means you should already have an idea of the answers before you even ask your questions.
When I talk to professionals in other industries about how they use ChatGPT, they usually land in the same place. One doctor uses it to write medical certificates, but of course, only after they’ve diagnosed the patient. An HR practitioner uses it to write employee communications, consistent with company manuals and labor laws that they already know. In each case, they have the expertise to check ChatGPT’s output and ensure that it’s correct. They already know what they need to say; they just use ChatGPT as a tool to write it.
Which brings us back to Andi Eigenmann. I think she showed us a perfect use case for AI: when she asked the question, she already knew what the answer was going to be! (In fact, she always knew, she even had the receipts! But I digress.)
The point is, she merely used ChatGPT to spell out her feelings, to help her say what she wanted to say. That, I think, is one of the most powerful ways to use AI: to help put how we feel into words, and to articulate and communicate things that are already in our minds.
Listen to the original episode on Teka Teka Podcast.