A note from our CEO & Founder
In recent months, many conversations with clients, friends, parents and tutors have circled back to one thing: AI. Many insisted it couldn’t be ignored any longer. Some parents even questioned whether AI should be woven into the tutoring sessions themselves. I’ll admit, I’ve been something of a sceptic—not because I feared an AI takeover, but because I believe deeply in the irreplaceable nature of human intelligence and connection.
Yet, they were right. Over the past few months, I’ve immersed myself in research and dialogue—not just with parents and educators, but with students themselves. And the same questions kept coming up:
• What should a child study if they want to thrive in an AI-driven world?
• Does computer science still matter if AI can write code?
• How can we make sure students know how to use AI effectively?
• What are schools and universities doing? Are they embracing AI?
• And what about tutors? Should they be allowed, encouraged—even required—to use AI in their sessions?
We’re now living in the age of AI, and its rapid adoption is reshaping how we live, work and learn. Just look at the number of weekly ChatGPT users, which increased from 200 million in August 2024 to 700 million in August 2025. At Figtree, we’ve been watching this transformation carefully. Our core purpose remains the same—human-led tuition and consultancy—but we also know that turning a blind eye to AI would not be responsible either. Instead, we will need to harness it wisely.
A tool, not a tutor
Creating meaningful connections with our students is at the heart of what we do at Figtree. Tailored learning has always defined our approach, and we firmly believe our ability to see every child as the individual they are is one of the things that sets us apart—and delivers such successful outcomes. An algorithm might be able to spot patterns and identify trends, but it can’t replicate the empathy, curiosity, care and patience of a great tutor. A tool is only as good as the person wielding it, so we remain committed to nurturing the best tutors in the field for our families.
At the same time, we do recognise that AI can be a valuable resource for our tutors, for tasks such as generating examples or customising exercises. We still check everything thoroughly for mistakes and AI hallucinations (responses that are entirely fabricated or outright false), but AI-supported planning does free our tutors to focus on what matters most: creating engaging, effective sessions that meet the needs of each child.
The risk of complacency and the importance of vigilance
We see AI as just one asset in our wider toolkit, and we recommend students approach their AI usage in the same way. As the technology develops, we understand it can be tempting to rely on the growing proliferation of AI-driven learning tools, homework help apps and even AI tutors. But it is vital that students continue to hone their critical thinking, research and reasoning skills—attributes that AI does risk eroding if not approached with caution.
This issue is becoming increasingly pressing now that we know AI is no longer fringe. The numbers tell us that it’s integrated deeply and widely into students’ academic lives:
• According to a 2024–25 global survey by the Digital Education Council, 86% of students now use AI in their studies—54% use it weekly, with 24% using it daily.
• In that same survey, 66% of students reported using ChatGPT, while 25% reported using Grammarly and 25% Microsoft Copilot.
• Globally, 60% of teachers say they’ve used AI tools (like chatbots or adaptive learning systems) during the 2024–25 school year.
• Meanwhile, generative AI, including large language models like ChatGPT and Claude, is increasingly used in assessments: in the UK, one report found that 88% of students had used generative AI for graded tasks, a substantial jump from 53% just a year before.
By all means, students can use AI platforms to challenge ideas, unpick tricky concepts and generate extra practice questions. But as the results of a recent study revealed, AI really shouldn’t do the thinking for any of us.
Researchers at MIT’s Media Lab asked a group of adults to write SAT-style essays, with one cohort using ChatGPT while others relied on traditional search engines or nothing at all. The team used an EEG to monitor brain activity throughout, and the ChatGPT group ‘consistently underperformed at neural, linguistic and behavioural levels’. Researchers also noted that users became lazier and, by the end of the trial, were simply copying and pasting AI-generated content. As the paper’s lead author noted, the potential risks could be even greater for younger learners, whose brains are still developing.
Building a new kind of literacy
Although this was a relatively small study that’s yet to be peer reviewed, the initial results are striking. There’s a real risk that we could harm long-term brain development in favour of quick, easy answers. We can’t ignore the advance of AI, so we believe a key way to mitigate this risk is for students to learn when and how to use AI safely. This will inform a large part of our support moving forward as we actively help students build their AI literacy in a carefully planned, age-appropriate manner.
This means setting boundaries. Learning to recognise mistakes and biases. Tracing sources and fact-checking outputs. And examining why we should still work a problem out ourselves rather than relying on a convenient, one-click answer. Spanning technical, ethical and moral questions, these are the skills that will equip our students for success in a digital future.
As will originality. If you’ve scrolled through social media recently, you may have noticed some of the telltale signs of AI-generated content. Suddenly, everyone seems to sound the same and has discovered a strange love for the Oxford comma. We want to focus on confidence building so that our students continue to shape and use their own distinct voice. They should feel comfortable calling out disinformation and embracing every idea and idiosyncrasy that makes their work their own.
Looking ahead
While debate around AI governance rages, schools and universities are busy setting up their own guardrails. The independent sector and top universities are leading the way here, and we’re following the evolution of their AI strategies closely. We believe this is a time for collaboration and transparency, and we’ll continue to review our approach to ensure our support remains responsible, balanced and future-ready.
In upcoming newsletters, we’ll be sharing more insights from academics, tutors, students and parents. We want this to be a conversation, and your questions, experiences and concerns matter deeply. Please let us know what you want to explore next—your input will directly shape our content and, hopefully, help us all navigate this moment with clarity and courage.