Essi Ryymin
Embedding AI in higher education is essential. It equips students to navigate and shape an increasingly AI-infused world. Only through education can students contribute to a society where AI is used responsibly and for the greater good. Do I sound too idealistic? Then perhaps I could ask: what does AI mean for humanity if it does not mean a better life for all?
Rather than painting pictures of AI's threats and possibilities, we should be thinking about what life is like for our students in a society immersed in AI. One thought leads to another: What is our shared vision for using AI in society? What are we aiming for together? What values and goals can AI help us advance? And are we approaching these questions by competing or by collaborating? With these concerns in mind, I've been looking at what UNESCO says.
Can AI enhance human rights, dignity, and cultural diversity?
The Education 2030 Agenda was adopted by 184 UNESCO member states in 2015 (UNESCO, 2015). The goal was to ensure inclusive, equitable, and quality education for all by 2030. This framework is part of the broader global action under Sustainable Development Goal 4 (SDG 4), highlighting the need to address educational challenges worldwide.
UNESCO recommends integrating AI to achieve the Education 2030 Agenda through a human-centred approach (UNESCO, 2019; 2021A). This concept refers to designing, developing, and deploying artificial intelligence systems that prioritise human needs and well-being. UNESCO states that AI must respect human rights and dignity and be accessible to all, including marginalised groups and communities. In this view, AI should enhance human capabilities rather than replace or undermine them.
A couple of years ago, UNESCO published "AI and Education: Guidance for Policy-Makers" (UNESCO, 2021B) and the "Recommendation on the Ethics of Artificial Intelligence" (UNESCO, 2022) to offer a comprehensive framework for integrating AI into education systems. The reports highlight AI's potential role in addressing inequalities in access to knowledge and stress the importance of bridging the digital divide within and between countries. In particular, UNESCO warns that AI must be carefully designed to avoid biases that could worsen educational inequalities.
We must know the business logic behind AI
As we all know, current AI systems learn from large datasets that may reflect existing societal biases, including those related to gender. For example, if the data used to train AI models includes biased representations of women, those biases will be reproduced and reinforced. This can cause serious harm in AI-driven decision-making systems, such as those used in education, recruitment, or law, leading to discriminatory outcomes.
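To make this mechanism concrete, here is a deliberately simplified sketch in Python. The data and the "model" are hypothetical and exist only for illustration: a system that learns nothing more than historical hiring rates from biased records will score two new, equally qualified applicants unequally, reproducing the bias it was trained on.

```python
# Hypothetical illustration only (not any real system): a "model" that
# learns hiring rates from biased historical data reproduces that bias.
from collections import defaultdict

# Fictional historical decisions: equally qualified applicants,
# but women were hired less often in the past.
history = [
    {"gender": "man",   "qualified": True, "hired": True},
    {"gender": "man",   "qualified": True, "hired": True},
    {"gender": "man",   "qualified": True, "hired": False},
    {"gender": "woman", "qualified": True, "hired": False},
    {"gender": "woman", "qualified": True, "hired": False},
    {"gender": "woman", "qualified": True, "hired": True},
]

# "Training": estimate the hiring rate per gender from the biased records.
counts = defaultdict(lambda: {"hired": 0, "total": 0})
for record in history:
    counts[record["gender"]]["total"] += 1
    counts[record["gender"]]["hired"] += record["hired"]

hire_rate = {g: c["hired"] / c["total"] for g, c in counts.items()}

# "Prediction": score two new, equally qualified applicants.
for applicant in ({"gender": "man"}, {"gender": "woman"}):
    print(applicant["gender"], "->", round(hire_rate[applicant["gender"]], 2))
# Output: man -> 0.67, woman -> 0.33. The historical bias is reproduced,
# even though both new applicants are equally qualified.
```

Real systems are far more complex, but the underlying risk is the same: whatever pattern sits in the training data, fair or not, becomes the pattern the system applies to new cases.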
It’s important to remember that AI does not “come as a given”, nor does it represent neutral, objective information. It is human-made and reflects its designers’ worldviews and goals. That is why we need transparency and openness about who is building AI and what their intentions are. For example, what revenue logic is built into the system?
We discovered very late, too late, that the logic of making money from social media is based on selling privacy, manipulating users, deliberately creating addictions in everyone from kids to grannies, and spreading disinformation. All this has been made possible by opaque policies, the lack of regulation, and the advertising-driven attention economy. (A quick test to see if you have an addiction: you know it’s not good for you, but you keep doing it. There we are.)
AI for the greater good
Encouragingly, decision-makers worldwide are following UNESCO’s recommendations to develop AI in partnership with governments, businesses, researchers, and civil society. A growing number of cross-sectoral efforts aim to ensure that AI is accountable, such as the Digital Services Act (European Commission, n.d.), the AI for Good Summit (AI for Good, n.d.), AI as a strategic priority of the European Network of National Human Rights Institutions (n.d.), and the Geneva International Centre for Justice (n.d.), which monitors and reports on AI’s impact on human rights. As Grayling and Ball (2024) put it, AI is not just a technical superintelligence to be managed by engineers but also a social one, requiring contributions from philosophers, social scientists, and citizens.
AI holds tremendous potential for positive impact. We can follow the dizzying evolution of inventions in healthcare (see, e.g., National Human Genome Research Institute, 2024), climate change mitigation (Anderson, 2024), and education (University of San Diego, n.d.; University of Oulu, n.d.). AI is also being applied promisingly in global disaster response (DISHA, n.d.), in boosting social enterprises in emerging economies (World Economic Forum, 2024), and in advancing the inclusion of persons with disabilities (Turin, 2023).
A special role for higher education institutions?
Higher education institutions today face distinctive challenges and opportunities. They are researching and developing AI systems in collaboration with companies and society. They are learning to integrate AI into everyday teaching, learning, and management. They are educating future professionals in the responsible and creative use of AI. Could they also play a role in the humane and socially responsible use of AI in education globally?
Researchers and educators could harness the power of AI to find solutions to educational inequalities and improve learning conditions. With the help of AI, they could scale educational interventions, such as personalised learning or distance learning, to millions of students. Through joint efforts, educational futurists could combine resources, share data, and leverage distributed expertise to advance fairness, equality, and social responsibility.
In addition, by working closely with their local communities, educational institutions are learning to apply AI to meet local needs. This approach helps ensure that globally scaled AI tools are developed with, and remain adaptable to, the input of the individuals and groups most affected by them.
To boldly go where no one has gone before
Higher education must be prepared to innovate in unpredictable situations. This means going beyond existing frameworks to create solutions that defy the norm. It means working in arenas with no clear precedents and facing problems that have not been fully identified. It requires a willingness to challenge the status quo and an ability to listen to multiple voices in the room.
The global conversation on AI in education is heated and ongoing. While I was writing this article, UNESCO published the “AI Competency Framework for Teachers” (UNESCO, 2024), which defines the knowledge, skills, and values teachers will need to master in the age of AI. Interestingly, a human-centred mindset is one of the critical competencies. This document certainly merits further study. So, in my next article, teaching will be in the spotlight.
Author
Essi Ryymin, Principal Research Scientist, HAMK Edu
References
AI for Good (n.d.). AI for Good Global Summit. https://aiforgood.itu.int
Anderson, K. (2024, February 13). How can artificial intelligence help tackle climate change? Greenly Institute. https://greenly.earth/en-us/blog/ecology-news/how-can-artificial-intelligence-help-tackle-climate-change
DISHA (n.d.). Data Insights for Social & Humanitarian Action. Unlocking data and AI solutions for social impact. https://disha.unglobalpulse.org
European Network of National Human Rights Institutions (n.d.). Artificial Intelligence. https://ennhri.org/our-work/topics/artificial-intelligence
European Commission (n.d.). The Digital Services Act: Ensuring a safe and accountable online environment. https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_en
Geneva International Centre for Justice (n.d.). https://www.gicj.org
Grayling, A., & Ball, B. (2024, August 5). Philosophy is Crucial in the Age of AI. Design and Art + Moda Magazine. https://www.designartmagazine.com/2024/08/philosophy-is-crucial-in-age-of-ai.html
National Human Genome Research Institute (2024). Personalized Medicine. https://www.genome.gov/genetics-glossary/Personalized-Medicine
Turin, J. (2023, December 3). Artificial intelligence and its impact on the human rights of persons with disabilities. ENNHRI. https://ennhri.org/news-and-blog/artificial-intelligence-and-its-impact-on-the-human-rights-of-persons-with-disabilities
UNESCO (2015). SDG4-Education 2030 Incheon Declaration (ID) and Framework for Action For the Implementation of Sustainable Development Goal 4: Ensure Inclusive and Equitable Quality Education and Promote Lifelong Learning Opportunities for All, ED-2016/WS/28. https://unesdoc.unesco.org/ark:/48223/pf0000245656
UNESCO (2019). Beijing Consensus on Artificial Intelligence and Education. https://unesdoc.unesco.org/ark:/48223/pf0000368303
UNESCO (2021A). UNESCO Strategy on Technological Innovation in Education (2022–2025). https://unesdoc.unesco.org/ark:/48223/pf0000378847
UNESCO (2021B). AI and Education: Guidance for Policy-Makers. https://unesdoc.unesco.org/ark:/48223/pf0000376709
UNESCO (2022). Recommendation on the Ethics of Artificial Intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000381137
UNESCO (2024). AI Competency Framework for Teachers. https://unesdoc.unesco.org/ark:/48223/pf0000391104
University of Oulu (n.d.). Hybrid Intelligence: Human-AI Co-evolution and Learning in Multi-realities (HI). https://www.oulu.fi/en/research/human-ai-co-evolution-and-strengthening-human-capabilities/hybrid-intelligence-human-ai-co-evolution-and-learning-multi-realities-hi
University of San Diego (n.d.). 43 Examples of Artificial Intelligence in Education. https://onlinedegrees.sandiego.edu/artificial-intelligence-education
World Economic Forum (2024). AI for Impact: The Role of Artificial Intelligence in Social Innovation [White paper]. Schwab Foundation for Social Entrepreneurship. Retrieved September 4, 2024, from https://www.weforum.org/publications/ai-for-impact-artificial-intelligence-in-social-innovation