Welcome to the Education Issue. I cheated on my editor’s letter—the one you are reading right now. I’m very sorry.
Look, I didn’t set out to do this, but the thing about magazines is that they have very hard deadlines, and if you miss them, you’re left with blank pages. So when I realized I only had a few hours left to finalize this, well, I freaked out.
And then I did what an increasing number of us are doing: I turned to ChatGPT, OpenAI’s massively mind-blowing generative AI software, to help me out. After training it on some of my previous work, I asked about the use of AI in education.
AI is already doing big things in education. By crunching massive amounts of data on student performance, AI algorithms can tailor instruction to fit the needs of individual learners, which can mean big improvements in student outcomes. Chatbots and virtual assistants can provide students with on-the-spot assistance and feedback. Who needs a tutor when you have an intelligent assistant?
But here’s where things get really exciting: language models like ChatGPT can actually generate human-like text, which makes them perfect for a whole bunch of educational applications. These models can create interactive learning materials, like chatbots that answer students’ questions or create personalized quizzes. They can even generate summaries of complex texts, reports on data sets, or entire essays and research papers.
There are definitely some challenges as well. One of the biggest concerns is the risk of bias in the algorithms. We need to make sure these algorithms are designed and tested in a way that ensures all students get a fair shake. Another, of course, is the potential for cheating.
But the bottom line is that AI and language models like ChatGPT are going to change the way we think about education, and we need to make sure we’re using these tools in ways that are ethical, equitable, and effective.
So are the preceding four paragraphs, which were generated by ChatGPT and then lightly edited by me, ethical? If they were presented as my own work without an explicit disclosure (like this one), I would argue that the answer is no. And even with such a disclosure, we’re still in a bit of a gray area—there are all sorts of questions about everything from plagiarism to accuracy to the data these models were trained on.
The reality is that we are in an entirely new place when it comes to the use of AI in education, and it is far from clear what that is going to mean. The world has changed, and there’s no going back.
As William Douglas Heaven, our senior editor for AI, makes clear in this issue’s cover story, technologies like ChatGPT will have all sorts of genuinely useful and transformative applications in the classroom. Yes, they will almost certainly also be used for cheating. But banishing these kinds of technologies from the classroom, rather than trying to harness them, is shortsighted. Rohan Mehta, a 17-year-old high school student in Pennsylvania, makes a similar argument, suggesting that the path forward starts with a show of faith by letting students experiment with the tool.
Meanwhile, Arian Khameneh takes us inside a classroom in Denmark where students are using mood-monitoring apps as the country struggles with a huge increase in depression among young people. You’ll also find a story from Moira Donovan about how AI is being used to help further our analysis and understanding of centuries-old texts, transforming humanities research in the process. Joy Lisi Rankin dives deep into the long history of the learn-to-code movement and its evolution toward diversity and inclusion. And please do not miss Susie Cagle’s story about a California school that, rather than having students try to flee from wildfire, hardened its facilities to ride out the flames, and what we can learn from that experience.
Of course, we have a lot more for you to read, and hopefully think about, as well. And as always, I would love to hear your feedback. You can even use ChatGPT to generate it—I won’t mind.
Thank you,
Mat
@mat / mat.honan@technologyreview.com