Knowledge and the curriculum in the age of AI

I have been reading Salman Khan’s book Brave New Words about the potential for artificial intelligence within education. Khan makes an interesting point about educationally-tailored AI, such as his own ‘Khanmigo’, being able to help students access courses, especially those that otherwise wouldn’t be available to them.

This got me thinking about curriculum. While AI may be able to help us design a curriculum, it will likely only ever be the research assistant. After all, large language models only draw on their 'training', which is itself the product of existing curricula that humans have designed. I don't see this as a problem. It is right that curriculum design should be based on human decisions.

As I was thinking about curriculum in the new era of AI, I have been following the coverage of the recently announced Curriculum & Assessment Review in the UK. There is definitely space for reform; however, my fear is that the baby will be thrown out with the bathwater. The 'curriculum turn' in UK schools over the last five or six years has led to genuinely good and valuable work. There is, to me, a strong and clear social justice argument around access to knowledge. I think that Professor Michael Young's concept of 'powerful knowledge' made a coherent and relatively apolitical case for a knowledge-rich curriculum. Returning to an 'anything goes' approach to curriculum content, as some will undoubtedly advocate, is not a morally defensible approach to curriculum.

What would be useful to see in curriculum and assessment reform is a move towards curricula, especially for 14-18 exam courses, which reflect what cognitive science tells us about learning and memory. We could also reform assessment to make it more inclusive and less of a pure test of memory. On both of these issues, knowledge is not the problem; it is a question of what we expect students to do with it.

I think that this is an important distinction. Students need knowledge to develop deep understanding of issues, to think critically, as a basis for creativity, and as the social and cultural capital which gives them access to and agency in the world. Even with access to AI (as with similar arguments once made about the Internet), the need for knowledge is not reduced. Therefore access to knowledge is about social mobility and social justice.

Coincidentally, Bill Gates’ latest newsletter saw him gushing over the impact of the Khanmigo AI at a school he visited. Gates is a techno-optimist, just like Salman Khan, but unlike Khan, Gates has no real background in ed-tech, let alone education more broadly. People getting overly excited about AI, in any sphere, typically don’t possess deep domain-specific knowledge.

It would be nice if educators, health professionals, etc. actually had more time in the driving seat – or at least got properly listened to – in all these discussions and decisions about how AI is going to transform society. MIT economists Daron Acemoglu and Simon Johnson make a compelling case in their book Power and Progress that technological innovation does not automatically benefit people. Indeed, in their millennium-spanning account, they show that innovation normally benefits the owners of the innovation until collective action – whether through unions, political parties, etc. – intervenes to ensure that innovation benefits the many, not the few.

I noted Tony Blair's institute getting similarly excited about the potential impacts of AI at its recent conference. The IPPR, too, has fired a warning shot about how AI could reshape the economy and society. As governments rush to get to grips with both the potential pitfalls and windfalls of AI, I hope that they listen to the many helpful and important voices beyond the purveyors of the technology themselves, no matter how altruistic their intentions might be.
