
AI Literacies and Media Education


With the growing use of, and concerns about, ChatGPT and other AI programs, we reached out to academics researching AI to ask them to share their expert opinions on the perceived values and challenges of using generative AI for educational purposes, and how we might integrate it into media education specifically.


Steve Torrence, Expert Ethics Reviewer, EU Horizon Europe (European Commission); Emeritus Professor of Cognitive Science, Middlesex University:


What do you perceive to be the values and challenges of using generative AI for educational purposes?


I share the reservations that many people have about using ChatGPT or other generative AI systems to produce teaching resources and about encouraging students to use them as learning resources. While there are great potential benefits for education and knowledge in using them, as many have suggested, there are also serious concerns and dangers. These concerns have been well rehearsed in the media in recent weeks. For example:

  • the possibility of hidden knowledge gaps, with manufactured misinformation included in documents, rendering them unreliable

  • the possibility of bias in the internet sources that the system is trained on

  • and inflated and/or distorted assumptions, by students using the resources, about the level of authority of the generated material.

On the third point, naive readers may be liable to believe, on the basis of rich and cogent generated text, that the AI system producing the text may "magically" possess the consciousness or cognitive capacities of a human authority on the subject(s) discussed in the text. Students may thus be misled into giving the material greater credence than is merited. There is also, of course, the problem of "passing off" - that is, of a student presenting AI-generated material as the student's own work: there is likely to be strong peer pressure, time pressure, etc. that could make this practice rapidly become widespread.

It seems difficult, for now, to see how the use of Large Language Model (LLM)-generated materials may be effectively regulated or limited, especially in view of the internationalization of ICT and AI. It would be easy for teachers in schools, colleges, etc., overstretched as they are, to adopt too accommodating an attitude towards generative AI, and towards its use by both teachers and learners. It would nevertheless be desirable for teachers to be properly trained in how such resources are constructed, in how to teach students to use them with appropriate critical scepticism, and in how best to incorporate such generative AI systems into their own practice.

It is important for all of us – educators in particular – to reflect on how “machine learning” actually works, and how radically different it is from human learning – especially the learning of younger humans (and other animals). Human learning is an active process with multiple organic and sensorimotor interrelations with the organism’s lived environment. Human learning can often be “regimented” – as in traditional school settings – but it is more often than not improvisatory, affectively saturated, and socially and corporeally embroiled. This is especially so in young people, where, from the earliest months, learning, exploring, doing and growing are fused. An infant who (for example) plays with food on her highchair tray is acquiring many skills at the same time – a variety of nutritional skills, hand-eye-mouth coordination skills, visual recognition skills, skills in communication, emotional response, etc.

By contrast, machine “learning” is embedded in a sea of digital(ized) data. The most “impressively” performing AI systems are trained on trillions of items of such data, and use massive compute resources to execute their statistical/predictive algorithms. These algorithms converge on outputs that their human operators prefer and on avoiding outputs that the operators don’t. AI learning of the successful kind is thus passive and abstracted from actual human reality, and involves none of the improvisatory world-embedding of human learning. (Instances of AI learning that incorporate some kinds of social or physical interaction exist, but are currently rare, and presently show only limited success.)
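A deliberately toy sketch can make that contrast concrete. The few lines of Python below (the corpus and counts are invented for illustration; real LLMs use neural networks trained on vastly more text) “learn” only by counting which word tends to follow which, then predicting the most frequent continuation – statistical pattern-matching over data, with none of the embodied, improvisatory learning described above:

```python
# Illustrative toy only: a "language model" that learns by counting which word
# follows which in its training text, then predicts the most frequent next word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# "Training": tally how often each word is followed by each other word.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most frequent continuation, knowing nothing of meaning."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("sat"))  # "on" - simply the word that most often followed "sat"
```

However large the model, the selection criterion remains statistical fit to past data rather than understanding gained by living in a world.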

So use of terms like “machine learning” – and the many other psychological terms used freely in the AI community – obscures the enormous gulf between how computers work and how we work. It is, in my view, important for those designing and marketing AI to stress how cognitively distinct AI and digital systems are from us, despite the superficial similarities between some AI outputs and some human ones.

In publicly funded research frameworks, like the EU’s Horizon Europe scheme, within which I work as an ethics reviewer, applicants for funding have to show they have produced an adequate “ethics self-assessment” of their research and of its post-project impacts, before funding commences. Any people researchers recruit to participate in their project have to be asked to give their “informed consent” to take part (e.g. in testing or evaluating prototype products). There seems little evidence that anything similar occurs at present within the commercial sector, where most ready-for-use AI is now produced. Certainly there are no sector-wide rules that bind AI professionals to keep to ethical guardrails – although the European Parliament is working on an AI Act for its member states, and there is evidence of high-level talks that may produce international AI safety standards at some point in the future. The rapid and enthusiastic public reception of ChatGPT and similar products does not equate to the kind of “informed consent” that is central to the publicly funded sector. In the commercial sector, the main kinds of consent that seem to be recognized are consent by purchase or consent by clicking.

In general, commercial AI has been, at least until very recently, somewhat aloof from recognizing any ethical or societal responsibility. For AI applications, especially those used within the educational sphere (as with social media products), this has been particularly hazardous to the interests of younger members of our society who will have to live with the downstream impacts of these applications for decades to come.

How might media educators use such tools productively for teaching and learning?

The best advice that might be offered in this regard would be for the teaching profession to give serious consideration to the advice offered by AI Ethics groups, such as the EU's High Level Expert Group on AI, the Asilomar principles of responsible use of AI (USA), etc. These offer general advice for creators and users of AI products – for example, that all outputs from AI systems should make clear to users that such outputs are produced not by humans but rather by machine generative systems. This should be clearly indicated to and by both teachers and students in their respective uses of the software.

Of course, there are many who are optimistic about how generative AI might be used beneficially in teaching. See, for example, a recent Financial Times article: “The AI Revolution is already transforming education.” (May 21, 2023).



Victoria Grace Walden, Senior Lecturer in Media and Director for Learning Enhancement, School of Media, Arts and Humanities; Principal Investigator on ‘Digital Holocaust Memory Project’ and Executive Board Member, Media Education Association:


What do you perceive to be the values and challenges of using generative AI for educational purposes?


For me, the greatest challenge of using generative AI for educational purposes is also the greatest value: it offers an opportunity to teach media literacies. There is a tendency across the education sector to adopt EdTech uncritically, as if these were ‘neutral’ tools. In media education, our curricula are dedicated to critically examining different forms, including digital media. Yet, we rarely acknowledge that EdTech tools – from our VLEs to TurnItIn, Padlet, MiroBoard, Mentimeter and Kahoot to Generative AI – are examples of digital media. I’ve written about how we might support students to practise critical media skills through the analysis of the technologies we use in the classroom, in an approach I’ve called ‘Meta-Media Studies’. Using Generative AI uncritically is harmful.


How might media educators use such tools productively for teaching and learning?


How might we adopt a critical approach then?


My suggestion would be to start by teaching students about the logics of generative AI. Let us take ChatGPT as an example, given its popularity. What is it and how does it function? ChatGPT presents its responses conversationally – there is a human-feel to the way it communicates. Indeed, I’ve heard many people refer to ‘arguing with it’ (a phrase I’ve used myself on at least one occasion!). This mode of address has a long history in AI development, which we can trace back to the first so-called AI ‘ChatBot’ ELIZA. The creator of ELIZA, Joseph Weizenbaum, famously reflected on his work years later in Computer Power and Human Reason and was deeply concerned by its assumed use in society, especially when he received feedback from psychologists that they could use the ELIZA program to do their work. He was shocked because he believe psychology was a practice that’s value lay in a human giving dedicated time to listen to another human.
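The conversational illusion Weizenbaum built can be sketched in a few lines: ELIZA-style programs match a keyword pattern and reflect the user’s own words back as a question. The snippet below is a toy approximation for classroom discussion, not Weizenbaum’s original DOCTOR script:

```python
# A toy, ELIZA-style exchange: keyword matching plus pronoun "reflection".
# Weizenbaum's original used a richer script of ranked keywords and
# decomposition rules, but the conversational effect is similar.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(utterance):
    match = re.search(r"i feel (.*)", utterance, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return "Please tell me more."

print(respond("I feel nobody listens to my ideas"))
# -> Why do you feel nobody listens to your ideas?
```

Even this crude reflection is enough to produce the human-feel mode of address described above, which is precisely what worried Weizenbaum.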


The personalisation of AI is not new, as we can see. Indeed, there is much criticism, but also debate, within AI Studies about whether conceptualising so-called ‘artificial intelligence’ on human models of cognitive processing actually holds back the potential of what these computer models can/should do. Some scholars have highlighted the distinctiveness of computational logics, such as their discontinuity and discreteness (Fazi 2019), and mathematical modelling (MacKenzie 2015). Cultural Studies scholar Lev Manovich declared back in 2001 that computational media could be distinguished by five principles:

(1) numerical representation

(2) modularity

(3) variability

(4) automation

(5) its ability to translate code into culture and vice versa (which he called ‘transcoding’).

Others have argued that it is not human intelligence per se that is the wrong model, but Western epistemological ideas of ‘intelligence’ (JE Lewis et al., 2018). Such thinkers adopt an indigenous epistemology approach to argue that AI is best developed through a lens of ‘kinship’, which recognises the symbiotic relationship between humans and non-humans in its development (including users and their data inputs) and makes this transparent to users.


Beyond the illusion of a human-like conversation, ChatGPT sources its answers from an enormous data set (Big Data) of open access sources (it is worth noting here that most academic writing is not open access – an issue the sector is working hard to resolve with current and future work, but there are centuries of work in the back catalogue of nuanced knowledge). A well-known phrase in tech research and industry is ‘bad data in = bad data out’. As users, we do not know what data went in. It might find a specific answer in one or a few of its sources that it can output. However, as AI, it works on prediction and probability models – its logic is fundamentally mathematical (it does not understand the meaning of the language it presents). Thus, one of the difficulties its developers face is its so-called ‘hallucinations’. It can make up answers based on a mathematical logic of what set of words best fits the rest of a sentence.
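That ‘best fit’ logic can be illustrated with a toy example (all names and probabilities below are invented): the model ranks candidate continuations by how probable they seem given the words so far, not by whether they are true, so a fluent fabrication can easily be the top-scoring answer:

```python
# Toy illustration only - the candidates and probabilities are invented.
# The model picks the continuation that best "fits" the sentence statistically;
# nothing in that choice checks whether the resulting claim is true, which is
# one way a fluent "hallucination" can be produced.
prompt = "The author of the 2019 report on AI in schools was"

candidate_continuations = {
    " Dr Sarah Jones.":          0.41,  # confident-sounding, entirely made up
    " Professor David Lee.":     0.33,  # equally made up
    " not identified by name.":  0.05,  # honest, but statistically "unlikely" phrasing
}

best = max(candidate_continuations, key=candidate_continuations.get)
print(prompt + best)  # a fluent, authoritative-sounding fabrication
```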


In many senses, ChatGPT works like a search engine – you pose a question, and it gives you an answer. It is worth remembering that ‘bad data in = bad data out’ phrase here too – if the user does not know the right type of question to ask ChatGPT, it will likely produce a rubbish response (although its tone of delivery might suggest the answer is authoritative). Unlike Google or its counterparts, ChatGPT does not reveal its sources – its answers do not necessarily come from a specific source but are rather a processing of an accumulation of sources. It functions on a model similar to what French philosopher Pierre Lévy (1999) called ‘Collective Intelligence’. Although Lévy was referring to the amassing of human intelligence online, ChatGPT highlights the algorithmic agency in this collective too. This makes it impossible for users to apply to its sources the types of critical assessment that we are used to performing in Media Studies and other subjects like History.


Furthermore, like any good AI model, ChatGPT has several levels of human intervention. However, as with any technology, this is not a neutral intervention but a political and socio-cultural one. For example, moderation has been programmed into ChatGPT that prevents it from denying the Holocaust or making racist statements about Black people – two issues that are especially sensitive in the US, where it was created. However, the Bosnian genocide is up for debate in its responses – an event that resonates less in US culture than the Holocaust and Black Lives Matter. This became clear to us in some recent scoping research.


Other issues with ChatGPT and other AI models that we must not forget are those relating to the climate impact of large cloud server infrastructures and the use of exploited labour in low-income regions to provide a substantial, cheap body of moderators.


To teach media literacies with ChatGPT then, I would suggest a focus on several areas. Encouraging students to think about:

  • the potential significance of ‘Collective Intelligence’ as an approach to knowledge acquisition

  • the seeming certainty of ChatGPT’s responses and how to use it as one source amongst many

  • human rights and sustainability in relation to AI development, maintenance and moderation

  • the illusion of human communication in computational environments

  • mathematical vs semiotic logics

  • and the impact of ideology on data presented.

Most of these are issues already embedded in media curricula – we currently just apply them to other media forms.


Another approach would be to think about AI through a creative practice brief. Students could identify the issues with current Generative AI models and the unevenness of data and digital literacies in society, explore debates about the ethics and culture of AI development, and then consider how to design a ‘better’ model. They could then pitch their ideas, explaining what priorities they identified.


___________________________________________________________________


Mykola Makhortykh, Alfred Landecker Lecturer, University of Bern, Switzerland:


What do you perceive to be the values and challenges of using generative AI for educational purposes?


The major value of generative AI is that it allows us to shift from teaching students primarily how to produce outputs (e.g. student essays, but also journalistic pieces, funding bids, and policy recommendations) to teaching them why we need these outputs, what characteristics define their quality, and whether it is important to distinguish between human- and machine-made outputs. Without understanding these parameters, it is hardly possible to produce high-quality outputs using generative AI. This very value, however, creates a fundamental challenge, because it necessitates revisiting how we evaluate student performance and critically assessing how we can prevent generative AI from creating new forms of inequality in education.


How might media educators use such tools productively for teaching and learning?


In terms of using generative AI for media education, it is essential to let students experiment with the technology under conditions which would stimulate critical thinking about its implications for society and also for their individual lives/careers. I am particularly in favour of challenging students with tasks involving the use of AI (e.g. ChatGPT or DALL-E) to make them recognise potential practical challenges (e.g. what are the scenarios where these technologies fail? Is there a way to prevent the failure?) and, importantly, normative challenges (is there a bias towards specific groups in the outputs of generative AI? What can cause such bias? How can we address it?).


__________________________________________________________________


M. Beatrice Fazi, Reader in Digital Humanities and Head of Media, Journalism and Cultural Studies, School of Media, Arts and Humanities, University of Sussex:


What do you perceive to be the values and challenges of using generative AI for educational purposes?


One of the key challenges of using generative AI for educational purposes concerns identifying how these technologies give rise to different and distinctive modes of knowledge representation and knowledge production. A Large Language Model such as that behind ChatGPT, for instance, does not simply look up information, and its operations as well as its outputs cannot be described just as text summarisation either. ChatGPT is not a search engine, an archive or an encyclopaedia. A Large Language Model represents and produces knowledge dynamically, and it does so due to its active re-configuration of language into syntactically correct structures, outputting what appear to be meaningful texts. Unpacking and explaining the conditions and possibilities for knowledge after generative AI is undoubtedly a challenge, but this difficulty also expresses the value of questioning the origin and scope of all epistemic practices, within and beyond the classroom.


How might media educators use such tools productively for teaching and learning?


Media educators are well placed to cope with the task of teaching and learning alongside AI. This is because media scholars have always investigated the role of technology in culture and society. Terms such as “algorithms” and “computation” are increasingly part of a media curriculum; media educators are teaching students not only to use computational technologies but also to understand how these technologies shape the world they inhabit. Achieving digital literacies in schools then involves a reflexive approach: engaging with AI not merely as a tool to perform school tasks that used to be done without AI but can now be automated. Rather, educators could draw on debates in AI research to examine the structure and scope of a learning process, for instance using AI to consider how meaning is made, how language is presented and how general concepts relate to individual experiences.


___________________________________________________________________


Frédéric Clavert, Assistant Professor in European Contemporary History, Centre for Contemporary and Digital History (C2DH), University of Luxembourg:


What do you perceive to be the values and challenges of using generative AI for educational purposes?

The first value is the critical spirit that normally characterises teachers, including, of course, history teachers. Just as we criticise and dissect archival documents, we need to analyse and criticise AI tools and understand their limitations. And this must be passed on in teaching. Any use of an AI tool must be an opportunity to teach about a number of related subjects:

  • how a tool works

  • biases

  • the issue of personal data (of the people we are studying, but also of the pupils we are teaching: all these tools collect data).

This is a major issue and a major challenge, insofar as understanding these tools is not always easy, even leaving aside the strictly technical aspects. For example, the notion of training is more complex than you might think.

How might media educators use such tools productively for teaching and learning?

The first productive use seems to me to be to show these tools' limits, by using increasingly complex examples (and therefore answers which, in the case of tools like ChatGPT, contain errors that are very present but more or less subtle). To give an example, I often ask students to get one of these systems to write a short biography of Jean Monnet - he's an interesting person for this kind of exercise: a Western European, his life is well documented on the net, but he's not as well known as a Churchill or a De Gaulle. Then the students have to critique what they discover. They often find it difficult to do so at first, but as the exercise progresses, with my guidance, they begin to see the errors and biases, and to wonder about the origin of the errors. At that point, we can start talking about training datasets and so on.

Once this first exercise has been done, what I think is interesting is to look at how students interact with chatbots based on LLMs and then go into more detail in the following exercises. Once they are more comfortable, the students can then be more autonomous and develop their own critical point of view on the system they have in front of them.

It is in this interaction that we can also generate learning: critiquing the texts obtained involves doing a bit of in-depth research, for example.


___________________________________________________________________

Jo Lindsay Walton, Research Fellow, Sussex Humanities Lab, University of Sussex:


What do you perceive to be the values and challenges of using generative AI for educational purposes?


We don't all need to suddenly become computer scientists. But we do need to have a working knowledge of the politics and ethics related to emerging AI, so we can support learners to make informed decisions. For instance, it is tempting to think of AI as a source of infinite abundance, with little or no cost. But researchers like Ruha Benjamin and Kate Crawford explore the material realities of AIs, from the working conditions of data labellers training AIs or packers in AI-run fulfilment centres, to the carbon pollution associated with all that energy-hungry computation. More broadly, Critical Data Studies looks at timely topics such as:

  • technological unemployment and deskilling

  • AI alignment and explainable AI

  • data surveillance and data commons

  • algorithmic bias and algorithmic opacity

  • data extractivism and data colonialism

  • Intellectual Property and the free-culture movement

  • techno-utopianism and techno-solutionism

  • …and much more.


The point is not to spoil the joy and magic of these new tools and toys. It is to foster awareness of what 'responsible use' really means here. It means that even as we use these AIs, we need to push for significant changes to the legal and economic systems in which they are embedded. It means supporting the artists who are organising to end the exploitative practices behind AIs such as Midjourney and DALL-E, trained on their artworks without compensation or consent. There is no reason why generative AI shouldn't be governed fairly and in the interest of human flourishing. But that future won't happen by itself.


So that's plenty to think about, before we even get onto the impacts these tools might have on learning itself! Here I think we're in quite uncharted territory. We can all see exciting opportunities for experimenting with generative AIs in the classroom (and if AIs keep evolving, the novelty might not even wear off). Other contexts, such as workplaces, will be conducting their own experiments at the same time. I think we should consider the next few years as a transition period. We will be gaining experience and collecting evidence, and piloting new policies. It may well be prudent to protect certain aspects or areas of education against generative AIs, to ensure that learners get an appropriate mix of tried-and-tested learning activities, alongside emerging experimental methods. Like it or not, current cohorts are going to be cyborg guinea pigs. Like all guinea pigs, they deserve to be treated with dignity and grace.


How might media educators use such tools productively for teaching and learning?


There are so many possibilities. In my own classes, students are exploring editing AI-generated images in Photoshop and using them to create games in Twine and Microsoft MakeCode. ChatGPT is also providing plausible and well-structured answers to complex questions, which we evaluate using close reading. ChatGPT currently has a tendency to make up a lot of stuff, so we are also using it as an opportunity to hone research instincts and skills: what needs to be more carefully argued or better evidenced? In some ways, it turns out the way we read ChatGPT texts isn't that different from how we read an established authority on the subject: in both cases, we are trying to cultivate a constructive critical vigilance. We are also exploring how ChatGPT might become a space for thinking, writing, and working – the kind of working you learn from – rather than a substitute for working. In the Autumn semester, I'm looking forward to teaching my first Creative Writing workshop devoted to prompt engineering: how do you write for the AI you want to write for you? Deforum Stable Diffusion makes it easy to collaborate on a trippy music video animation, with students linking their prompts together in a chain. The time and expense of running Deforum Stable Diffusion is conspicuous, and here there are learning opportunities around reflective and responsible iteration:

  • What am I hoping to achieve with this prompt?

  • Have I built in all the relevant learning from my last attempt?

  • Am I ready to ask the AI, or is there more still to do?


However, at the moment it feels like many of us, whether we're excited or cautious, are in a responsive mode. We can do better than use these tools productively, constructively, and optimistically. We can use them transformatively. The coming months and years are a fantastic opportunity to rethink teaching and learning in radically responsible ways. For instance, if ChatGPT is posing challenges for essay-based assessment, isn't that our cue to collectively revisit what these essays were supposed to be doing in the first place? This includes, of course, how these essays were already being written, in the context of a flourishing gig economy of tutorial services across Discord, Reddit, and essay mills such as My Essay Geeks. More broadly, assessment design which includes generative AI will probably be thinking about how to ensure assessment as learning (giving learners the space, motivations, narratives, and tools to do and learn things they otherwise wouldn't). Then there is assessment for learning (figuring out what our students and pupils know, so we can meet them where they are): in this respect, cultivating a culture of honest and open experimentation around generative AI will probably be important. But finally, there is the thorny question of assessment of learning. Here we could draw on work by the likes of Jo Littler, Melissa Benn, Megan Erickson, Michael Sandel, Thomas Piketty, and Daniel Markovits, to think about the relationship between marks, merit, and social class. When did I grow so comfortable with stamping my students with grades, knowing full well how unjustly they will shape their future opportunities for education, employment, and the influence they may eventually wield over others? I am grateful to feel uncomfortable again.

___________________________________________________________________


Do you feel comfortable introducing AI into your subject-specific or wider teaching?


What do you feel media educators (and other teachers) need to feel more supported with this?


How could the MEA help?


Join the discussion on Twitter: @TheMediaEdAssoc and use the hashtag #AI_MEA or reply to our Facebook post


