AI Literacies and Media Education

With the growing use of, and concerns about, ChatGPT and other AI programs, we reached out to academics researching AI to ask them to share their expert opinions on the perceived values and challenges of using generative AI for educational purposes, and on how we might integrate it into media education specifically.

Steve Torrence, Expert Ethics Reviewer, EU Horizon Europe (European Commission); Emeritus Professor of Cognitive Science, Middlesex University.

What do you perceive to be the values and challenges of using generative AI for educational purposes?

I share the reservations that many people have about using ChatGPT or other generative AI systems to produce teaching resources and about encouraging students to use them as learning resources. While there are great potential benefits for education and knowledge in using them, as many have suggested, there are also serious concerns and dangers. These concerns have been well rehearsed in the media in recent weeks. For example:

  • the possibility of hidden knowledge gaps, and of fabricated misinformation being included in generated documents, rendering them unreliable

  • the possibility of bias in the internet sources that the system is trained on

  • and inflated and/or distorted assumptions, on the part of students using the resources, about the level of authority of the generated material.

On the third point, naive readers may be liable to believe, on the basis of rich and cogent generated text, that the AI system producing the text "magically" possesses the consciousness or cognitive capacities of a human authority on the subject(s) discussed. Students may thus be misled into giving the material greater credence than is merited. There is also, of course, the problem of "passing off" - that is, of a student presenting AI-generated material as their own work: strong peer pressure, time pressure, and the like are likely to make this practice spread rapidly.

It seems difficult, for now, to see how the use of materials generated by Large Language Models (LLMs) might be effectively regulated or limited, especially in view of the internationalization of ICT and AI. It would be easy for teachers in schools, colleges, etc., overstretched as they are, to adopt too accommodating an attitude towards generative AI and towards its use by both teachers and learners. It would nevertheless be desirable for teachers to be properly trained in how such resources are constructed, in how to train students to use them with appropriate critical scepticism, and in how best to incorporate such generative AI systems into their own practice.

It is important for all of us – educators in particular – to reflect on how “machine learning” actually works, and how radically different it is from human learning – especially in younger humans (and other animals). Human learning is an active process with multiple organic and sensorimotor interrelations with the organism’s lived environment. Human learning can often be “regimented” – as in traditional school settings – but it is more often than not improvisatory, affectively saturated, and socially and corporeally embroiled. This is especially so in young people, where, from the earliest months, learning, exploring, doing and growing are fused. An infant who, for example, plays with food on her highchair tray is acquiring many skills at the same time – a variety of nutritional skills, hand-eye-mouth coordination skills, visual recognition skills, and skills in communication and emotional response.

By contrast, machine “learning” is embedded in a sea of digital(ized) data. The most “impressively” performing AI systems are trained on trillions of items of such data, and use massive compute resources to execute their statistical/predictive algorithms. These algorithms converge on outputs that their human operators prefer and on avoiding outputs that the operators don’t. AI learning of the successful kind is thus passive and abstracted from actual human reality, and involves none of the improvisatory world-embedding of human learning. (Instances of AI learning that incorporate some kinds of social or physical interaction exist, but are currently rare, and presently show only limited success.)
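The statistical/predictive character described above can be illustrated with a deliberately tiny sketch: a bigram word-frequency model in Python. This is a toy of my own construction, not how any real LLM is built (real systems train neural networks on vastly more data), but it shows the underlying principle of passively counting patterns in data and emitting the statistically most likely continuation.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the "sea of digital(ized) data".
corpus = "the cat sat on the mat the cat ate the food".split()

# Count, for each word, which word follows it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "cat" follows "the" more often than any other word here
```

Nothing in this procedure perceives, explores, or understands anything; it merely reproduces frequencies found in its input – which is the gulf between machine and human learning the paragraph above describes.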

So the use of terms like “machine learning” – and the many other psychological terms used freely in the AI community – obscures the enormous gulf between how computers work and how we work. It is, in my view, important for those designing and marketing AI to stress how cognitively distinct AI and digital systems are from us, despite the superficial similarities between some AI outputs and some human ones.

In publicly funded research frameworks, like the EU’s Horizon Europe scheme, within which I work as an ethics reviewer, applicants for funding have to show they have produced an adequate “ethics self-assessment” of their research and of its post-project impacts before funding commences. Any people researchers recruit to participate in their project have to be asked to give their “informed consent” to take part (e.g. in testing or evaluating prototype products). There seems little evidence that anything similar occurs at present within the commercial sector, where most ready-for-use AI is now produced. Certainly there are no sector-wide rules that bind AI professionals to keep to ethical guardrails – although the European Parliament is working on an AI Act for its member states, and there is evidence of high-level talks that may produce international AI safety standards at some point in the future. The rapid and enthusiastic public reception of ChatGPT and similar products does not equate to the kind of “informed consent” that is central to the publicly funded sector. In the commercial sector, the main kinds of consent that seem to be recognized are consent by purchase or consent by clicking.