This is the second post in a blog series titled "Hunting for Elephants in the Room." For more context on this series, the introductory post can be found here. Each post seeks to identify and discuss the sometimes uncomfortable questions about how longstanding higher education assumptions and practices may change over the next decade.
In every era, personalized tutoring has been the gold standard in education — an ideal accessible only to the very few who could afford it. A great tutor doesn’t just deliver knowledge; they tailor it. They spot misunderstandings before they calcify. They motivate, they adapt, they inspire. For centuries it has been a rare luxury, but one with empirically verified results: in Benjamin Bloom's 'Two Sigma Problem' study (Bloom, 1984), students tutored one-on-one performed roughly two standard deviations better than students taught with conventional classroom methods.
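To unpack the "two sigma" label: Bloom reported the tutoring advantage as an effect size of about two standard deviations. Assuming approximately normal score distributions (a simplification for illustration, not Bloom's own framing), the arithmetic looks like this:

```latex
d = \frac{\mu_{\text{tutored}} - \mu_{\text{conventional}}}{\sigma} \approx 2,
\qquad \Phi(2) \approx 0.977
```

In other words, the average tutored student scored around the 98th percentile of the conventionally taught group.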
The dream that AI stirs is simple but profound: what if every student could have that? A tutor in your pocket, available at all hours, infinitely patient, personalized to your exact needs — and so cheap to deliver that cost no longer stands in the way.
It’s a vision that feels almost utopian. But as with all utopias, the real story lies in the difficult terrain between dream and reality.
There are good reasons to think this future could arrive sooner than most people expect. Recent advances in AI are not just about getting answers right — they’re about how models reason, explain, and adjust to the needs of an individual learner.
The technical underpinnings are falling into place. Models like GPT-4o can already perform multi-step reasoning and converse naturally across text, voice, and images, while smaller models run on increasingly lightweight devices. Inference costs — the cost of running an AI model once it’s trained — continue to drop. Local AI models that run directly on smartphones are getting more powerful every month. If you haven’t already tried it, I would highly encourage you to check out ChatGPT’s voice conversation feature. It already feels like a PhD-level thought partner in my pocket!
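To ground how low the barrier already is: a bare-bones tutoring loop over a hosted model takes only a few lines of code. Here is a minimal sketch using the OpenAI Python SDK; the tutoring persona and model choice are illustrative, not a production design:

```python
# Minimal sketch of a turn-based AI tutoring loop.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative tutoring persona; a real system would encode far more pedagogy.
history = [{
    "role": "system",
    "content": (
        "You are a patient math tutor. Never give the answer outright: "
        "ask one guiding question at a time, and adapt to the student's level."
    ),
}]

while True:
    student = input("Student: ")
    if not student.strip():
        break
    history.append({"role": "user", "content": student})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print(f"Tutor: {reply}")
```

The point is not that this sketch is a good tutor (it isn't), but that the marginal engineering cost of a serviceable first draft has collapsed to nearly nothing.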
This unlocks a crucial shift: once the expensive part (training) is done, deploying tutoring at scale becomes almost frictionless. The idea that some level of tutoring could be “too cheap to meter” — so inexpensive that the cost per interaction is functionally zero — is no longer a wild fantasy. In fact, many would argue that it’s an economic inevitability if and when a few technological thresholds are crossed.
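Some rough numbers make "functionally zero" concrete. The token counts and per-token prices below are illustrative assumptions in the ballpark of current hosted-model pricing, not quoted rates:

```python
# Back-of-the-envelope cost of one AI tutoring session.
# All figures are illustrative assumptions, not vendor quotes.
PRICE_PER_M_INPUT = 2.50    # USD per million input tokens (assumed)
PRICE_PER_M_OUTPUT = 10.00  # USD per million output tokens (assumed)

# Assume a 30-minute session: ~8k tokens of context and student turns in,
# ~4k tokens of tutor responses out.
input_tokens, output_tokens = 8_000, 4_000

cost = (input_tokens * PRICE_PER_M_INPUT
        + output_tokens * PRICE_PER_M_OUTPUT) / 1_000_000
print(f"Estimated cost per session: ${cost:.2f}")  # ~$0.06
print(f"Sessions per dollar: {1 / cost:.0f}")      # ~17
```

Six cents a session is not literally zero, but next to human tutoring at tens of dollars per hour, metering stops being the binding constraint.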
It is therefore no surprise that startups and major platforms alike are spending many millions on developing and piloting AI tutors that adapt to each student's learning style. Many are showing promising early results, especially when focused on specific subjects like language learning or math, where learning outcomes are most objective and most easily assessed.
It’s not hard to imagine a world in which every student, regardless of background, carries a tutor in their pocket that is “too cheap to meter”. But will those tutors be effective? That is another question entirely.
It would be a mistake to believe that better algorithms and lower costs alone will solve the hard problems of education.
Real human tutors do more than answer questions. They pick up on subtle cues: confusion in the eyes, disengagement in the tone of voice, frustration masked by a polite nod. They don’t just correct mistakes; they motivate, encourage, and — crucially — care. These human elements are not trivial; they are often the difference between a student thriving and a student quietly drifting away.
AI, even at its best, struggles here. It can feign empathy, but it does not feel it. It can detect hesitation patterns, but it cannot genuinely encourage. Worse, current models still sometimes hallucinate — inventing plausible-sounding but false information — a risk that is unacceptable in high-stakes learning environments.
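Hallucination risk is also why serious tutoring systems rarely trust raw model output. One common guardrail pattern, sketched below with sympy as the checker (an assumed design for illustration, not any particular product's approach), is to verify anything checkable against a deterministic solver before it reaches the student:

```python
# Sketch of a hallucination guardrail for a math tutor: never show the
# student a model-produced identity without checking it deterministically.
# Assumes the `sympy` package; the claim strings stand in for model output.
import sympy

def is_claim_correct(claim: str) -> bool:
    """Check a claimed identity like '(x + 1)**2 == x**2 + 2*x + 1'."""
    lhs_text, rhs_text = claim.split("==")
    lhs, rhs = sympy.sympify(lhs_text), sympy.sympify(rhs_text)
    # The two sides are symbolically equal iff their difference simplifies to 0.
    return sympy.simplify(lhs - rhs) == 0

# A correct claim passes; a plausible-looking hallucination is caught.
print(is_claim_correct("(x + 1)**2 == x**2 + 2*x + 1"))  # True
print(is_claim_correct("(x + 1)**2 == x**2 + 1"))        # False
```

Guardrails like this help where answers are formally checkable, but they say nothing about motivation, encouragement, or care, which is precisely the point.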
Moreover, AI might be best understood not as a miracle technology but as a normal technology: imperfect, limited, and heavily context-dependent. Initial performance metrics often come from idealized testing environments. In messy real-world classrooms — with highly diverse learners, shifting emotional states, and dynamic group settings — AI systems tend to underperform relative to early expectations.
This pattern isn't unique to education. Across industries, AI tools have repeatedly shown a drop in performance when moved from controlled settings into everyday practice. They overfit to benchmarks but stumble on edge cases. In education, where individual variability and contextual nuance are paramount, this gap could be even more pronounced.
A personal AI tutor might be cheap. But will it actually teach? Will it motivate, adapt in complex real-world settings, and persist across months of uneven student engagement? Those are much harder questions — and ones we shouldn't answer too hastily.
Even if AI tutors eventually meet high technical standards, their real-world impact will hinge just as much on how they are deployed — and how institutions, teachers, and students adapt around them.
Education is not a simple consumer product; it’s a deeply entrenched system shaped by bureaucratic constraints, cultural expectations, and structural incentives. Curriculum standards, assessment regimes, and teacher training pipelines are all calibrated around existing models of learning that predate AI.
Deploying AI tutors into this environment is not a plug-and-play operation. History offers cautionary examples: calculators, online courses, even basic educational software faced long adoption curves, not because the technology didn’t work in principle, but because institutions struggled, or simply refused, to redesign workflows, metrics, and roles to match the new possibilities. Even the most effective AI tutors will require rethinking classroom practices, assessment standards, teacher roles, and student support structures.
AI's success in education won't simply be about better models. It will be about building adaptive systems around those models — ones that acknowledge AI's limits, monitor its effects in real time, and evolve based on lived outcomes, not just lab benchmarks. In other words: even if the AI arrives, the hard work will have just begun.
So if we can’t predict the future with certainty, how should we think about it?
The best approach may be to watch for signals — early indicators that hint at which path we are traveling down.
Some positive signals to look for:

- AI tutors demonstrating durable learning gains in messy, real-world classrooms, not just in controlled pilots.
- Hallucination rates falling to levels acceptable for high-stakes learning content.
- Institutions proactively redesigning assessment, teacher roles, and support structures around AI rather than bolting it on.

Some negative signals to be mindful of:

- A persistent gap between benchmark performance and classroom performance.
- Student engagement with AI tutors dropping off after the novelty fades.
- "Plug-and-play" deployments that leave curricula, workflows, and incentives untouched.
How these signals evolve over the next 2–5 years will tell us much more than any sweeping prediction today.
Ultimately, whether or not AI delivers a personal tutor to every pocket, it forces us to ask a deeper question: what kind of education are we trying to build?
If learning becomes available on-demand, infinitely patient, and hyper-tailored, does the purpose of education shift? Do we move away from memorizing facts toward cultivating discernment, creativity, and character? Do we design learning environments not around the delivery of information, but around mentorship, social growth, and real-world application?
AI may one day be an incredible tutor. But the real opportunity — and the real risk — is that we aim too low. The future of education isn’t just about making tutoring cheaper. It’s about making learning richer.
That’s the bigger elephant in the room. In future posts we’ll explore additional questions in this conversation, such as: What might AI’s impact be on the higher education business model? How is the role of faculty likely to evolve in an AI-assisted world?