In an AI world, what does personalised education mean?
13 December 2024
Sarah Grant, a digital education strategist and former Executive Director of the IDEA Lab at Imperial College Business School, explores the world of personalised learning and what AI might contribute.
For decades, personalisation has been heralded as the holy grail of education. The idea of tailoring learning experiences to the unique needs, preferences, and behaviours of each student is a vision that has captivated educators and technologists alike. In contrast to traditional models, where all students follow the same predetermined pathway, personalised education promises something far more dynamic: a system where learning is shaped by the learner.
In online learning, the vision of personalised education feels tantalisingly within reach. Digital platforms generate a rich data footprint—every click, pause, question, and interaction contributing valuable insights. This wealth of information creates unprecedented opportunities to design learning experiences that adapt seamlessly to each individual. Imagine a system that identifies when a student is struggling, adjusts the pace, and offers alternative explanations; or one that recognises when they are ready to advance and accelerates their journey.
Moreover, by leveraging data on personal objectives, interests, strengths, and learning preferences, AI has the potential to craft pathways uniquely tailored to each learner’s goals and aspirations. For instance, such a system could integrate a student’s prior knowledge to skip redundant lessons, tailor assignments to reflect individual career ambitions or areas of interest, and suggest personalised study strategies to optimise their learning habits. It’s a future where education feels personal, relevant, and meaningful.
However, while many claim to already offer personalised education, the reality is that, both technologically and pedagogically, we’re not quite there. Current platforms have made strides in collecting and analysing learner data, but translating these insights into truly transformative learning experiences remains challenging. The gap lies not only in advancing the technology and unifying disparate data sources but also in rethinking the pedagogical models underpinning these systems. Too often, learning models still rely on linear or pre-structured pathways, limiting the potential for truly adaptive, learner-driven experiences.
Even with these challenges, the rapid advancements in AI suggest that the dream of personalised education is closer than ever. Yet, as we edge closer to this vision, are we overlooking the unintended consequences of relying on AI-powered algorithms to shape learning experiences? In truth, it is the algorithms, the data they are trained on, and the priorities of their designers that dictate these pathways—not the learner. Could this same technology, intended to empower, actually narrow the horizons of what it means to learn?
In Filterworld, Kyle Chayka examines how digital platforms like Instagram, TikTok, Netflix, and Spotify have come to dominate cultural distribution. Algorithmic recommendations—such as TikTok’s “For You” feed, which personalises content based on user interests and engagement—now dictate much of what we consume online. While these systems promise personalisation, Chayka argues that they ultimately homogenise culture: “Attention becomes the only metric by which culture is judged, and what gets attention is dictated by equations developed by Silicon Valley engineers”. The result is a flattening of culture, where algorithms favour the least disruptive, most predictable, and simplest content.
Could the same homogenisation occur in education? Imagine a situation where we reduce learning pathways to a mathematical exercise: analysing user preferences, past performance, and contextual data to predict the “most relevant” next step. While this may feel personal, such systems could risk confining students to pathways optimised for engagement or completion, flattening the richness and complexity that make learning transformative. Instead of challenging students to think critically or explore unfamiliar ideas, education risks becoming a series of streamlined recommendations.
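To make the mechanism concrete, here is a deliberately toy sketch of such a "most relevant next step" engine. Everything in it is hypothetical — the lessons, the learner-trait weights, and the engagement scores are invented for illustration — but the core logic mirrors the reduction described above: score each candidate lesson by predicted engagement and always serve the top scorer. Note how one broadly "safe" lesson can dominate for quite different learners.

```python
# Hypothetical catalogue: predicted engagement weight of each lesson
# per learner trait (all values invented for illustration).
LESSONS = {
    "intro_statistics": {"visual": 0.9, "analytical": 0.7},
    "philosophy_of_science": {"visual": 0.3, "analytical": 0.5},
    "experimental_design": {"visual": 0.5, "analytical": 0.6},
}

def recommend_next(profile: dict) -> str:
    """Pick the single lesson with the highest predicted engagement."""
    def predicted_engagement(lesson: str) -> float:
        weights = LESSONS[lesson]
        # dot product of the learner's traits with the lesson's weights
        return sum(profile.get(trait, 0.0) * w for trait, w in weights.items())
    return max(LESSONS, key=predicted_engagement)

# Two quite different learners receive the same "personalised" next step,
# because one lesson dominates on predicted engagement across profiles.
visual_learner = {"visual": 1.0, "analytical": 0.2}
analytical_learner = {"visual": 0.2, "analytical": 1.0}
print(recommend_next(visual_learner))      # intro_statistics
print(recommend_next(analytical_learner))  # intro_statistics
```

The flattening is structural, not accidental: any system that ranks by a single engagement-style metric and always serves the argmax will funnel diverse learners toward whichever content scores safely across profiles.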
Generative AI further amplifies these risks. Models like ChatGPT generate content by predicting the most statistically likely outcomes based on their training data. While this approach produces coherent and plausible outputs, it inherently prioritises the mainstream over the unconventional, promoting predictability at the expense of originality. In the context of personalised learning, this means that AI systems may narrow the pathways available to students. This challenge is compounded by the nature of the training data. Because generative AI relies on historical information, it risks perpetuating and amplifying biases embedded in that data. For instance, an AI system might disproportionately recommend certain topics or career pathways based on gender, socioeconomic background, or geographic region, inadvertently reinforcing inequities instead of addressing them.
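The "most statistically likely outcome wins" dynamic can be shown with a toy model far simpler than ChatGPT — a bigram frequency table over a made-up corpus, choosing the next word greedily. The corpus and words are invented for illustration; the point is that the most frequent continuation is selected every time, and rarer (more original) continuations are never surfaced.

```python
from collections import Counter

# A tiny invented corpus; real language models train on vastly more data,
# but the greedy selection principle is the same.
corpus = "the cat sat on the mat the cat sat on the rug".split()

# Count how often each word follows each other word (bigram frequencies).
bigrams = Counter(zip(corpus, corpus[1:]))

def most_likely_next(word: str) -> str:
    """Greedily return the most frequent continuation of `word`."""
    candidates = {b: count for (a, b), count in bigrams.items() if a == word}
    return max(candidates, key=candidates.get)

# "cat" follows "the" twice; "mat" and "rug" only once each —
# so the less common continuations are never recommended.
print(most_likely_next("the"))  # cat
```

Under greedy selection, the mainstream continuation is not merely favoured; the long tail is excluded outright, which is the educational risk the paragraph above describes.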
The risks extend beyond content delivery. If these systems are used to group students based on similar interests or behaviours, they could also profoundly shape social interactions, creating echo chambers where students primarily collaborate with like-minded peers. This would reduce diversity in interactions, limiting exposure to different perspectives and ideas, and eroding opportunities for meaningful challenge and growth. A clear example of this can be seen in how news stories are presented on social media platforms. Algorithms prioritise content based on past behaviour, meaning people are more likely to see news stories that align with their existing views. During major events, like elections or crises, this can create a filter bubble, reinforcing a distorted sense of reality.
What I’ve outlined here is not an inevitable consequence of personalisation. Just as we couldn’t have predicted the full impact of algorithmic curation on culture, we can’t fully anticipate how these technologies may shape education. Nor should we use this thought experiment as a reason not to innovate in this space: there is a future where we can harness technology to improve learning outcomes without flattening our learning experiences or commoditising our data. But as we design and implement personalised learning systems, we must consider: what data are we collecting, and what measurements are we using to drive personalisation? Do they reflect the complexity of learning—and of people? These metrics will influence how students learn, so these systems must be designed by educators with a deep understanding of pedagogy, not solely by tech companies with profit-driven motives. Moreover, we must develop AI models that are narrower in scope, giving us more control over the data they use and how it shapes the learning experience.
Crucially, we must preserve the human elements of teaching. AI should complement, not replace, educators. Students must retain agency within these systems, maintaining the freedom to choose their learning paths and make meaningful decisions. After all, AI systems may be deterministic, but people are not.
At its heart, education is not just about mastery; it’s about discovery—the unexpected encounters with ideas that challenge assumptions and spark curiosity. Over-personalisation, driven by algorithms and generative AI, risks flattening these experiences, creating a future where learning becomes efficient yet uninspired. As we embrace the potential of AI-powered education, we must ask: Are we designing systems that truly enrich and diversify learning, or are we building an educational Filterworld, where personalisation paradoxically strips education of its authenticity, value, and transformative power?