Over the years, AI chatbots have grown smarter and more capable of holding back-and-forth discussions about any topic. These bots can even write you a script or an essay, but should you let technology do all the work for you?
University of the Philippines (UP) professor Francisco Jayme Guiang caught one of his students allegedly using AI on a final exam essay. He put some paragraphs through two AI detectors and found that they were most likely written by a bot.
"The entire exam (1) did not answer the final questions, and (2) the entire essay did not make sense! Walang cohesion, nagdrop lang ng proper nouns (na walang kinalaman sa essay questions) in sentences! Akala ata niya di ako nagbabasa ng essays. Bagsak 'to sa finals," Guiang wrote on Facebook.
Are AI bots smart enough to write essays?
How could AI write a whole essay, you might ask? Well, some programs, like OpenAI's ChatGPT, are smart enough to do that. ChatGPT boasts features like remembering what users tell it earlier in the conversation, providing follow-up corrections, explaining concepts, and even generating ideas.
However, the technology has its limitations: it may occasionally generate incorrect information or produce harmful instructions and biased content, since everything it provides is learned from the internet—from sources like Reddit discussions and possibly Twitter.
Tech YouTube creator Marques Brownlee himself uses ChatGPT as a tool to brainstorm ideas and titles for videos, but he still adds his human touch and judgment to his scriptwriting process.
Can students get away with AI-written essays?
Concerns about students using AI chatbots in academic settings have been previously raised by professors in the United States.
"We're not there, but we're also not that far away," McGill University professor Andrew Piper told NBC News. "We're definitely not at the stage of like, out-of-the-box, it'll write a bunch of student essays and no one will be able to tell the difference."
And indeed, AI-written essays are not foolproof. A 22-year-old senior at Princeton University built an app to detect whether a given text was written by ChatGPT, in a bid to fight AI plagiarism.
The site, GPTZero, uses two indicators, "perplexity" and "burstiness," to determine whether content was touched by AI. "Perplexity" refers to the complexity of the text, while "burstiness" measures how much sentences vary from one another.
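To get a feel for the "burstiness" idea, here is a minimal sketch in Python. It is not GPTZero's actual formula—just an illustration of the underlying intuition: human writing tends to mix short and long sentences, so the spread of sentence lengths in a uniform, bot-like text scores lower than in a varied, human-like one.

```python
import statistics

def burstiness(text: str) -> float:
    """Rough burstiness proxy: the standard deviation of sentence
    lengths, measured in words. Uniform sentence lengths score low;
    a mix of short and long sentences scores high."""
    # Crude sentence splitting on ., ?, and ! for illustration only.
    normalized = text.replace("?", ".").replace("!", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = "The cat sat down. The dog ran off. The bird flew up."
varied = "Stop. After hours of waiting in the cold rain, the bus finally arrived."
print(burstiness(uniform) < burstiness(varied))  # the varied text scores higher
```

Detectors build on far richer signals than this (perplexity, for instance, comes from a language model scoring how "surprised" it is by each word), but the principle is the same: flag text whose statistics look machine-generated.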
There are other tools professors and teachers can use to spot AI plagiarism, including one by the open-source AI community Hugging Face. These tools, however, have their limits, as AI-written content evolves and advances, too.
The faculty of the UP Artificial Intelligence Program also issued a statement on the matter, saying it "condemns" the passing off of outputs from AI systems—such as ChatGPT, Stability AI, Jasper AI, etc.—as valid scholarly works.
"Manuscripts, graphic designs, videos, computer programs, and other academic requirements must be solely created by the student or group of students as required by the instructor of the course," the statement read.
Despite this, the faculty stated that the use of AI tools to enhance student learning "should be encouraged" and that it would be "frivolous" to ban AI tools in university computer networks.
Instead, they recommended that UP conduct open forums on the use of AI tools and their implications for academic matters, revisit the definition of academic integrity to cover AI-generated outputs, educate students on the proper use of AI tools, and redesign academic requirements to demand more in-depth critical thinking, scholarly discourse, and sound judgment.