Peter Mawhorter (pmawhort@wellesley.edu)

August 29th, 2025

Why AI Fails to Solve Problems

This post is one of my advice & arguments pages about the harms and hazards of the AI Hype Movement.

It bothers me that so many LLM/genAI applications seem to be all about “now that we have new tool X, what can we do with it?” while completely ignoring the question “for problem Y, what is the best tool for the job?”

Perhaps unsurprisingly, given that we have strong evidence of poor ethics among these developers (e.g., the uncritical use of big-brand LLMs), I suspect that many of the people behind these systems care more about the exhilaration of using new tech and the prestige it might bring them than about any of the problems they claim to solve (if they even bother to identify such problems at all). Turns out that’s a great way to cause a lot of harm in the world: you likely won’t do a good job of measuring outcomes (if you bother to measure them at all), and you especially won’t look carefully for systemic biases or for ways your system might unintentionally hurt or exclude people. You also won’t be concerned about whether your system ends up displacing efforts that would have led to better solutions.

The problem of “researcher develops a new hammer and now sees every problem as a nail” is not a new one in computer science, but the AI Hype Movement has exacerbated it. Tens of thousands of researchers, if not more, are jumping on the bandwagon, looking for new applications of this cool technology, especially as refracted through the lens of their own subdiscipline. The issue is that these people are looking for applications, NOT starting from a problem and asking how to solve it.

Because of its inherent instability and its low-probability, high-impact failure modes, modern generative AI is actually a terrible tool for many problems, but it is a tool that can nonetheless produce some very nice-looking demos, and even some decent results if you’re willing to fudge the numbers a bit and/or do a really shallow analysis. This combination results in a lot of very bad publications, many of which even pass peer review, but whose purported solutions would be more harmful than helpful if deployed in the real world. The fact that AI projects are a breeding ground for venture-capital investment amplifies this problem by several orders of magnitude, and it actually does bring many of these anti-solutions into the real world, causing measurable harm.
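To make the scale point concrete, here is a toy back-of-the-envelope sketch in Python. All of the numbers are hypothetical and chosen purely for illustration: the point is just that a failure mode which is rare for any single interaction becomes a steady stream of harm once a system is deployed widely.

```python
# Back-of-the-envelope: a "rare" failure mode at deployment scale.
# All numbers here are hypothetical, chosen purely for illustration.

failure_rate = 0.001        # assume 0.1% of outputs fail in a harmful way
daily_requests = 1_000_000  # assume a modestly popular deployed system

expected_failures_per_day = failure_rate * daily_requests
print(f"Expected harmful failures per day: {expected_failures_per_day:,.0f}")
# Expected harmful failures per day: 1,000
```

A per-interaction failure rate that looks negligible in a demo or a small evaluation translates into roughly a thousand harmful outcomes per day under these assumptions, which is why shallow analyses of such systems are so dangerous.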

The question we should always ask when looking at some new AI application is: if we start from the problem and brainstorm solutions, is AI even a suitable candidate method for solving it? In too many cases, the answer is “no.”