UP releases guidelines on 'responsible' use of AI in classrooms
With artificial intelligence (AI) tools becoming increasingly prevalent, the University of the Philippines (UP) has laid down a list of principles on how its students can use AI "responsibly" in an academic setting.
On its website, the country's national university stated that it is considering a policy on the use of AI because of its many benefits, becoming the first major university in the Philippines to do so.
The move comes as AI systems such as ChatGPT are now being used by students for homework assignments and other classroom requirements.
"AI makes our lives easier by automating tasks and providing us with information and recommendations tailored to our individual needs. Already it is being harnessed in important life decisions such as who gets a job, who gets a loan, what communities get policed, and what kind of medical treatment a patient receives," UP stated.
In education, AI can be used to "enhance the personalized learning, increase student engagement in learning, and improve education management."
However, UP acknowledged that such tools are not without risks, as they can open the door to cheating and plagiarism. ChatGPT, for instance, can be used to answer quizzes, write essays, or compose an outline of a paper.
To promote the positive uses of AI and mitigate its harms, UP has formulated a draft of ten principles that students must follow on the responsible use of these kinds of tools:
- Public good: AI should benefit the Filipino people as a whole.
- Everyone should benefit from AI: UP highlighted the importance of inclusion, diversity, and equality when it comes to developing AI systems, and how the needs of vulnerable and marginalized groups must be considered.
- Meaningful human control: Humans must remain in control of AI tools and be morally responsible for their use.
- Transparency: AI systems should be transparent so that people can understand how these systems work and make decisions.
- Fairness: AI should not display any bias or discrimination.
- Safety: AI systems must function in a secure and safe way.
- Environment-friendly: AI models and tools must pose no risk to the environment.
- Collaboration: Stakeholders must cooperate with each other to ensure that there is trust and transparency in using AI tools.
- Accountability: University faculty, researchers, staff, and students developing, deploying, or operating AI systems should be held accountable for their actions.
- Governance: There must be inter-sectoral, interdisciplinary, and multi-stakeholder expertise when it comes to decision-making on AI.
UP previously dealt with an AI-related issue back in January, when it got wind of alleged instances of students using ChatGPT for their academic requirements.
While the faculty stressed that academic requirements must "solely be created by the student or group of students" rather than by AI, they said they still encourage the use of such tools, but only to enhance and facilitate students' learning.