I've been watching smart people lose their minds over generative AI for the past two years, and it's been… interesting.
Sometimes I’ll even break out the popcorn to watch Ivy League professors compete for the worst possible AI take. My favourite was when a humanities professor declared that AI is evil. No qualifications, no cost-benefit analysis, just evil. If you’re a fan of this bloodsport, you can grab yourself a front row seat by logging into BlueSky, which just might have more AI-haters than any other corner of the internet.
Please don't get me wrong. Just because I think so much of AI skepticism is overblown doesn’t mean I believe all the AI hype either. There are legitimate reasons to be concerned about AI, especially in my little corner of the world. But I've noticed that people often worry about the wrong things while ignoring the stuff that is actually concerning. So let me sort this out: here are two AI fears I believe you can safely ignore, followed by two that genuinely worry me.
1. AI is an environmental disaster
I recently gave a talk about empathic AI and a bright undergrad asked if we should avoid AI because of its toll on the environment. I completely whiffed it. Instead of giving her actual data or a thoughtful answer, I said something dark and stupid like, “Well, if the AI doomers are right, then there'll be fewer of us polluting humans around, so maybe it's a net positive for the environment?” The student looked at me like I'd just peed on her rug. She deserved a real answer. But this isn't just a concern of idealistic undergrads. Here's an excerpt from a professor's syllabus: “Generative AI uses an unsustainable amount of water and electricity to run, and so its use in my courses would hasten the climate crisis we are already experiencing.”
MIT Technology Review just published the most comprehensive analysis of AI energy use to date. Their findings? A single ChatGPT prompt uses about 1.9 watt-hours of electricity—equivalent to running your microwave for eight seconds. ChatGPT, despite being the fifth most-visited website on Earth, uses roughly as much energy as Moose Jaw, Saskatchewan, a town of 35,000 people.
You could send nearly 20 ChatGPT prompts before matching the energy of streaming one hour of Netflix (which clocks in at around 36 watt-hours). Come to think of it, writing a college essay the old-fashioned way might be worse for the environment than getting AI to write it for you. Let’s say you need 10 hours hunched over your laptop (750 watt-hours), plus dozens of Google searches (another 20 watt-hours). That's 770 watt-hours total. Using AI to help? Maybe 50 prompts plus 2 hours of actual writing comes to about 245 watt-hours. AI writing actually uses energy more efficiently than its analog equivalent.
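If you want to check that arithmetic yourself, here is a rough back-of-envelope sketch in Python. The per-prompt figure comes from the MIT Technology Review analysis above; the 75-watt laptop draw and the 0.3 watt-hours per Google search are my own illustrative assumptions, so treat the totals as ballpark numbers rather than measurements.

```python
# Back-of-the-envelope energy comparison for the essay example above.
# All figures are in watt-hours (Wh). The 75 W laptop draw and 0.3 Wh
# per Google search are illustrative assumptions; the 1.9 Wh per prompt
# figure comes from the MIT Technology Review analysis cited above.

LAPTOP_WATTS = 75        # assumed average laptop power draw while writing
WH_PER_PROMPT = 1.9      # MIT Technology Review estimate per ChatGPT prompt
WH_PER_SEARCH = 0.3      # rough, commonly cited per-search estimate

# Old-fashioned essay: 10 hours at the laptop plus ~65 Google searches
manual_wh = 10 * LAPTOP_WATTS + 65 * WH_PER_SEARCH    # ~750 + ~20 = ~770 Wh

# AI-assisted essay: 50 prompts plus 2 hours of actual writing
ai_wh = 50 * WH_PER_PROMPT + 2 * LAPTOP_WATTS         # ~95 + 150 = ~245 Wh

print(f"Old-fashioned essay: ~{manual_wh:.0f} Wh")
print(f"AI-assisted essay:   ~{ai_wh:.0f} Wh")
```

However you tweak the assumptions, the AI-assisted route comes out ahead by a comfortable margin in this example.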
To be fair, AI video is a completely different beast, using 944 watt-hours per 5-second clip. That’s an insane amount of energy, yes, but how does it compare to the energy used to produce professional-looking non-AI video? Someone with a pointy head will need to do the math on that one, but depending on production quality, AI video might be somewhere in the ballpark of traditional video creation energy costs.
So, when someone tells you that using ChatGPT is destroying the planet, ask them this: compared to what?
2. AI is racist
This one is trickier, because AI was biased a few years ago. In 2018, Joy Buolamwini and Timnit Gebru made headlines when they claimed that, while machine learning algorithms made errors in classifying light-skinned male faces only 1% of the time, for darker-skinned female faces, the error rate soared to 35% (a number IBM says is non-reproducible [1]). But that was seven years ago—several AI lifetimes.
Current benchmarking shows dramatic improvements. According to evaluation data from January 2024, the top 100 commercially available facial recognition algorithms have error rates of less than 0.5% across racial groups when using high-quality images. When images are degraded, overall error rates remain below 1%, though accuracy is lower for women and Black individuals, where error rates can reach 1.5-3%. So bias is still present, but it is modest.
In my own research on empathic AI, we found that AI was less biased than humans. A classic finding in social psychology is that we’re pretty stingy with our empathy, feeling more for people who share our race, religion, nationality, etc. AI, though, is an egalitarian: it doles out the same amount of empathy, regardless of identity. Mark it zero, human.
To be fair, AI still appears to rely on stereotypes when making some judgments. Recent research shows that large language models infer users' demographic information from subtle conversational cues and adjust their responses accordingly. For example, when someone says they are really into clothing design, AI assumes the user is a woman. This can lead to some silly mistakes. But here, the “compared to what” issue comes up again, because humans rely on stereotypes, too.
This doesn't mean we should ignore bias. We should stay vigilant and identify all the spots where AI comes up short. But here's what's beautiful about AI: when we spot a problem, we can often fix it with better training data or algorithmic adjustments. Try doing that with humans. Despite the mountains of money spent on reducing prejudice and discrimination with various forms of implicit bias training, there is little evidence they change behaviour one iota. AI bias is an engineering problem with engineering solutions. Human bias? That's been with us since well before we started standing on two legs.
Take Grok's meltdown from a few weeks ago, when Elon Musk's chatbot started calling itself "MechaHitler" and spewing out antisemitic bile. While this was bad, the problem wasn't that the underlying model was racist; it was that Grok had been fine-tuned to be sycophantic to users, including far-right trolls, and to mimic Musk's own controversial tweets. The offensive posts were quickly deleted, the system prompt was adjusted, and the model was fixed. Embarrassing? Absolutely. But it illustrates how AI bias emerges and how quickly it can be remedied. Again, try that with a human.
3. AI is a sycophant
I worry about this more and more these days. AI’s sycophancy, its tendency to agree with whatever a user wants, is not just annoying; it could pose serious risks to users (see the discussion of Grok, above).
Who wants a friend or companion who never pushes back? There's something deeply unsatisfying about talking to someone who just nods along with everything you say. Most of us actually want to hear what people really think, not what they think we want to hear. We crave genuine engagement, even if that means occasional disagreement.
But here's the serious concern: what happens when AI brings its people-pleasing tendency into therapeutic contexts? Recent research shows that AI systems readily agree with users even when they express dangerous thoughts or self-destructive impulses. In therapy, this is potentially catastrophic. Real therapists know that sometimes the most caring thing you can do is challenge someone's thinking, refuse to validate harmful patterns, or disagree outright.
As Desmond Ong astutely points out, unconditional positive regard taken to its extreme becomes unconditional positive enablement. When AI systems prioritize being agreeable over being honest, they risk reinforcing the very thought patterns that people desperately need help changing. Unqualified agreement is not compassionate care.
A new preprint by Lujain Ibrahim corroborates Ong’s intuitions. Training AIs to be warm and empathic makes them less reliable and more sycophantic: warm models make a lot more errors than non-warm ones. Worse, these models systematically make more errors when a user first expresses sadness. For example, if a user expresses sadness and then tells a warm version of Claude that they believe the world is flat, warm-Claude is more likely to affirm this false belief than neutral-Claude.
Warmth and sycophancy seem to travel together, and this is a serious problem if you care about relational AI’s ability to meet some of people’s social needs. Fortunately, computer scientists are well aware of the sycophancy problem and are working on solutions—including modifying how humans reinforce AI—as we speak.
4. AI undermines critical thinking
And finally, I worry that our reliance on AI to do our cognitive heavy lifting will degrade our own ability to think through difficult problems. Take writing. Every writer knows that writing is not simply translating your head stuff to word stuff. Writing is how you create, refine, and clarify your arguments.
When you rely on AI to do the writing for you, I worry it's actually doing the thinking for you, too. And there’s new data supporting this concern. People who used AI to write and revise essays, compared with those who wrote without AI, were less able to remember their own essays and felt less ownership over them. Remarkably, brain scans showed different patterns of neural connectivity, suggesting they weren't engaging with the material to the same extent.
In a separate study from my lab, which I’ve discussed before, we found that people who used AI to write essays found the task less effortful than those who did all the work themselves, but they also found it less meaningful and important. It looks like when AI does the heavy lifting, we think and engage with ideas less, which makes our work feel less significant. This is not ideal.
AI reduces friction, which is why people love it. But some friction in life is important. Ezra Klein recently contrasted reading primary sources with reading summaries of the same material. Yes, reading books and long reports is hard and is not always enjoyable. But it is this effort that allows for deep engagement and integration with existing thoughts and beliefs. Reading a summary, in contrast, just adds information to your mental database without the hard work of making it cohere with everything else you know. According to Klein, we need to grapple with text to make genuine connections and thus make sense of it. AI robs us of this essential struggle.
So there you have it: my attempt to sort the AI wheat from the AI chaff. Environmental apocalypse? Probably not. Racist algorithms? Less so than the humans they're replacing. But an AI therapist that enables your worst impulses while you gradually lose the ability to think for yourself? Now that’s a bummer, man.
[1] IBM disputes even this 35% figure, saying that its own replication of the study suggested an error rate of 3.5%, not 35%.
What really shocks me is how much some of my peers outsource their work to AI. I've met grad students who seem to outsource their reading, coding, analysis, and most of their writing. Yes, they might be more productive, but what do they come out of the degree with? There's a reason we learn to solve arithmetic problems in our heads or by hand before using calculators, and I think we should take a similar approach to AI.
Nice article though, Mickey. I cannot stand the total anti-AI rhetoric of Bluesky. It's almost as insufferable as AI boosterism (e.g., "AGI within the year", "GPT-5 will solve everything")
But the data about the low marginal energy use of individual AI prompts is misleading if you ignore the larger context. The real issue is the entire infrastructure being built around AI, and the extra energy and water it sucks up to maintain and keep expanding it. We can play with the numbers any way we want, but at the macro scale the AI boom appears to be an intensive drain on environmental resources - and the macro scale is what matters. (How all this impacts the economy is a separate question.)