AI in education – three risks to pay attention to.

It was all the way back in 2002 when Uncle Ben uttered the iconic phrase, ‘with great power comes great responsibility’. Some would argue it was the most influential advice in the whole of Peter Parker’s life as Spider-Man. As a former teacher (many moons ago) and now a careers professional, it is perhaps inevitable that I’ve always been aware of, and interested in, the powerful influence that educators have on their learners. Education is certainly a superpower, and now we are told that Industry 5.0, led by artificial intelligence (AI), will join us, and in some cases replace us, as effective co-pilots.

Yet my spidey sense is tingling, and it’s taken me a while to ponder and reflect on what’s bothering me. First off, full disclosure: I’m not an expert in either subject. But in my current role, over the last year, I’ve taken a lead in exploring the impact of AI, with a particular focus on staff readiness. This has involved staff surveys, a critical review of the sector landscape, drafting AI principles and some initial staff development pilots. Oh, and reading lots about AI…

[Image created by the author using Copilot]

Keeping up with the technology is just one part of the equation. Here are three considerations I hope have value for anyone tasked with developing training and learning with AI.

The oxygen mask principle

It’s likely you’re familiar with this one. If you’re unfortunate enough to be on a plane when the cabin pressure drops, we’re told oxygen masks will release from above and you must secure your own supply before attempting to help anyone else. It’s a suitable metaphor here: to help others with AI, you must first be in a position to do so yourself.

As potential educators of AI, there’s a well-worn aphorism that seems appropriate here: ‘The quality of an education system cannot exceed the quality of its teachers’. It could equally apply to any number of professions. In my own field there are plenty of pioneers and early adopters sharing and demonstrating the potential of AI. However, effective AI education needs to be systemic, and it relies on a range of factors such as domain knowledge, ethical practice, pedagogy, effective teaching, AI policies, training and time. I’ve likely missed some important ones, but the point is that the potential of AI is not an automatic entitlement that will imbue us all. A systems approach, with systems thinking, is key given the complexity of upskilling and educating with quality at mass scale. Which leads me to one question: who is training the trainers? As AI adoption accelerates, the need for a structured approach to learning for all parties becomes ever more critical.

The spidey principle

Back to the great power/great responsibility theme: anybody tasked with supporting learners, employees or services in deploying AI needs to be acutely aware of the potential benefits as well as the risks of harm. An excellent article, ‘Is it ethical to use generative AI if you can’t tell whether it is right or wrong?’, made me think more deeply about the conditional nature of using AI tools. Domain expertise and knowledge are certainly important for upskilling the workforce, and for educators.

Yet AI pushes us further, to also consider the ethical challenges ingrained in these tools, such as bias, privacy, accountability and even hallucinations. If we are using AI as an able co-pilot, or suggesting others should, then we need to be careful of potential conflicts with professional ethics, which for many roles cover principles such as non-maleficence, autonomy and beneficence.

Therefore it’s feasible that we may understand the technical capabilities of AI and how it can solve problems, yet still fall short in our professional duty of care if we don’t consider, understand or communicate the ethical issues. The article mentioned above raised two questions, and an important conundrum, albeit in the context of research.

“Can I always verify the accuracy of GenAI in a specific context?”

“Can I identify all kinds of errors and biases in generated content?”

The article then goes on to say:

“If the answer to these questions is ‘no’ or ‘not always’, then users should consider alternative methods, because their expertise or judgement are likely insufficient to protect the integrity and trustworthiness of research”.

Whilst generative AI is in its beta phase and poses these risks, should we avoid using it at all, or can we absolve ourselves with disclaimers? There’s likely a pragmatic, workable middle ground that mitigates the risks (or at least makes them understood by all sides), but I do have concerns that the solutions offered by AI tools are often pushed harder than the ethical considerations.

Whilst domain expertise is indeed critical in demonstrating AI’s vast capabilities, like a new miracle cure, always remember there’s a reason we are also told to read the label.

The haves and the have-nots principle

Equality of access is going to be just as divisive in the AI space, as major corporations seek to justify, recoup and exploit the significant cost of researching and deploying AI tools. The potential of AI can only be reached if consideration is given to accessibility and global cooperation; the risk is that AI compounds existing inequalities rather than offsetting them. In many cases we already see a freemium or subscription model in place. I am also aware of staff in educational institutions paying for their own access to AI tools while employers lag behind in working out whether they can, or want to, secure institutional subscriptions.

In days gone by, not having access to a market-leading piece of software could usually be solved by the release of a free, open-source alternative. The problem with generative AI tools is the form of ‘AI capitalism’ we are going to be subjected to (you can thank capitalism and marketisation for that), characterised by commodification, extraction and a concentration of power. The cost of entry to create powerful generative AI tools is stratospheric and can only be met by a small number of global corporations.

Those free tools you value can be paywalled in an instant, and I don’t feel there is enough commentary about fair access. Again, these concerns seem consumed by the fever of potential. Just as the global economy decided who would suffer most from the effects of climate change, and who benefits from drugs and vaccines, the risk is that the greatest beneficiaries of AI will be those who can afford to pay.

As the tools develop so must the debate. Let me know your thoughts.