August 29th, 2025
This post is one of my advice & arguments pages about the harms and hazards of the AI Hype Movement.
Probably the most selfish reason not to use large language models (LLMs; a.k.a. AI chatbots) is that using them can destroy your mental health and reduce your cognitive capacity. These risks are exacerbated in a school setting, where academic and social stress can already put students at risk of depression or related issues, and where student learning and cognitive capabilities are being measured and (hopefully) growing.
By their nature, LLMs generate replies in a pandering mode of communication, since they're trained in ways that reward agreeable responses and discourage contradicting the user. When a user engages one in philosophical discussion, or prompts it with even a hint of cult-like ideas, this pandering can literally drive the user insane, leading to delusions of grandeur and/or a break from reality. One article on this phenomenon includes advice from a psychiatrist who says that keeping in touch with other humans is the best way to counteract it; presumably people who are already lonely are more vulnerable to this sort of thing. Another article collecting a few different stories offers a good set of examples; in several cases the person who became obsessed with the LLM ended up dead.
These extreme cases are rare (though I haven't seen hard data on exactly how rare), but what hasn't been well studied (to my knowledge) is the potential for more widespread, less intense effects. It is of course not impossible to imagine positive effects as well, but personally, these psychological risks make me unwilling to recommend LLM use to my students. They also make me much more comfortable with policies that forbid LLM use, so that nobody feels pressure to use LLMs just to keep up with others who presumably are. I see (indirectly) enough mental health crises already in my job that I don't feel comfortable encouraging students to use a tool that can precipitate or amplify these kinds of issues.
In addition to the psychological risks, there's a variety of evidence that using AI to “help” solve problems bypasses the learning and growth that would otherwise have happened when solving a problem without help (or with other, more passive tools).
This preprint of an MIT study shows effects on perceived ownership of, and ability to quote from, one's own essay when it was produced with AI assistance, in addition to changes in brain connectivity. It's not yet peer-reviewed (as of this writing) and so should be interpreted with caution, but both common sense and other preprints support the idea that without solving a problem (or writing an essay) yourself, you forfeit the learning that would have come from doing it on your own.
In my own favorite medium of code, every project I undertake has two outcomes: a finished program, and new knowledge that I gain as part of the coding process. Often the finished program never actually materializes, but I still come away from the failed project with plenty of useful skills and knowledge. When using AI to generate code, I imagine one could still learn some things, especially when generating code in a language you aren't as familiar with, but much of what you'd normally internalize never even crosses your attention. Of course, ideally one would carefully review the AI-generated code and deeply understand it before actually using it, but that would defeat the point of using the AI in the first place: if one already had such a deep understanding, the effort to type out the code would be negligible.
We all know that when using a convenient tool whose output we're supposed to verify or double-check, it becomes easier and easier over time to skip that verification step. One may start out by carefully understanding AI-generated material, but it's unlikely that over time one can maintain an appropriate level of rigor. In practice, pretty much all use of AI to complete tasks in educational settings is more harmful than helpful. Using AI to answer questions about concepts doesn't have the same drawback, but I've already encountered multiple students who started out in this mode and eventually gave in to the temptation to use it for generating answers to assignments. Besides that slippery-slope argument, AI answers to conceptual questions will sometimes be confidently wrong in a way that's impossible for a learner to distinguish from correct answers, which is an excellent reason not to rely on them. We have textbooks and office hours for those questions, and the convenience of the AI is not worth the cost.
In computer science especially, students almost always need to learn programming concepts by actually building programs themselves: there's no substitute for program-building exercise when it comes to growth as a programmer. Since “growth as a programmer” is the whole point of the educational setting, using AI to do assignments for you has the exact same disadvantage as asking a friend to do them for you: it sidesteps the exercise that would have led to learning, so even if you somehow evade detection and earn passing marks, you've robbed yourself of the benefit the class was supposed to provide, and you'll end up unprepared for further study or for accomplishing much at a job. This applies equally well to other disciplines.