Peter Mawhorter

Peter Mawhorter (pmawhort@wellesley.edu)

August 10th, 2025

Advice & Arguments

This page serves as a place for me to collect advice or arguments I make in other places online that I want to share publicly and archive so I can link to them later. Each section below links to individual posts around a common topic.

Advice

The AI Hype Movement

As a sometime AI researcher (including having worked on deep learning models) and a CS educator, I have thoughts™ on the AI Hype Movement spearheaded by “top” tech companies. The summary of those thoughts: they’re evil; don’t use them unless the use is well-justified; I use them myself only in extremely limited circumstances where I feel the need truly outweighs the costs. Still, I think that large language models as a technology have a lot of potential for positive impact on the world, even if that potential is currently being squandered. An example of a well-justified use might be overcoming an accessibility barrier, though even then you’re left dealing with the dangers of a system that is regularly confidently wrong, so generative AI is not in general a suitable solution to accessibility barriers on the design side.

To dig more into exactly what I’m criticizing here: the “technology” itself isn’t inherently problematic in all instances (although it’s problematic so often that one might think so). What I’m centrally opposed to is what I call the “AI Hype Movement,” which encompasses: the activities and promotions of the largest AI players like Microsoft, Google, Amazon, Meta, OpenAI, and Anthropic (plus a froth of smaller startups and hangers-on); uncritical use of and boosterism for those companies’ AI tools; the activities necessary to build, and the justifications offered for building, ever-bigger models; and the broad unexamined use of these models by the general public. Playing a part in or actively contributing to this movement is harmful, although obviously someone who unknowingly plays a passive role is in a different position than someone who actively participates. I’m not against AI; I’m against the AI Hype Movement, which includes most uses of the specific models that have colloquially come to be referred to as “AI.”

I do worry a lot about my position here and how it affects my students. If I’m wrong in my assessments, will I harm my students by forbidding them from using the latest tech? For this reason, I’m always open to counter-arguments; I also let students see the harms of AI up close in controlled circumstances, and I try to be transparent with them about my reasons. Part of my motivation for writing these posts is to do the intellectual labor of more deeply understanding my own position and double-checking my gut assumptions.

My Own Posts

Here are some of my thoughts on the harms & hazards of using generative neural networks (including generative large language models and generative adversarial networks). Note that many of these apply specifically to the big corporate models, not to the underlying technical methods.

The pages above are my own thoughts, but plenty of other people have stuff to say about this. The following list of links gives evidence to back up the arguments I’m making above and includes others’ thoughts that complement mine:

This short diatribe sums things up nicely and includes its own roundup of links on the various harms of AI that’s even more comprehensive than what I have here.

Training & Scraping Harms

Large Language Models are Untrustworthy

AI Harms its Users

Ecosystem Costs & Harms of AI

AI is Promoted with Hyperbole and Lies

AI Hype Has Disastrous Economic Implications

Current LLMs are Not Artificial General Intelligence (AGI)

Caveat: Positive Stories