9 Comments
Jake Embrey:

What really shocks me is how much some of my peers outsource their work to AI. I've met grad students who seem to outsource their reading, coding, analysis, and most of their writing. Yes, they might be more productive, but what do they come out of the degree with? There's a reason we learn to solve arithmetic problems mentally or by hand before using calculators, and I think we should take a similar approach to AI.

Nice article though, Mickey. I cannot stand the total anti-AI rhetoric of Bluesky. It's almost as insufferable as AI boosterism (e.g., "AGI within the year," "GPT-5 will solve everything").

John Walkiewicz:

As the aforementioned grad student, I can resonate pretty deeply with this. It’s exhausting that every task is starting to turn into a dilemma: should I use AI and, at the cost of diluting what I learn from the experience, accomplish the task more easily, faster, and “better”? It’s even harder when you can assume your peers are probably using it for everything under the sun! Though I don’t think people intuitively realize they are outsourcing their cognition, which makes this all a bit more insidious.

Michael Inzlicht:

To AI or not to AI, that is the question.

Chris Schuck:

But the data about the low marginal energy use of individual AI prompts is misleading if you ignore the larger context: the entire infrastructure being built around AI, and the extra energy and water required to maintain and keep expanding it. We can play with the numbers any way we want, but at the macro scale the AI boom appears to be an intensive drain on environmental resources - and the macro scale is what matters. (How all this impacts the economy is a separate question.)

Brandon:

"Who wants a friend or companion who never pushes back? We crave genuine engagement, even if that means occasional disagreement.

But here's the serious concern: what happens when AI brings its people-pleasing tendency into therapeutic contexts?"

I would like to see you grapple with more fundamental questions like "who wants a friend or companion that's not real?" or "should we even bring AI into therapeutic contexts?"

Michael Inzlicht:

You should check out my back and forth with Paul Bloom where I discuss this. I suspect you won't like my take! https://smallpotatoes.paulbloom.net/p/do-we-need-ai-to-feel-and-to-have

Brandon:

Will do, thanks! Haha I might not!

Jan Zilinsky:

Yes, excessive agreement/pandering by (many) LLMs is a problem. Thankfully there are ways to keep getting constructive feedback. For users of OpenAI's models, my tip would be: go to Settings -> Personalization -> Customize ChatGPT, and then write: "Provide candid feedback. If you see mistakes, say so clearly."
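For anyone using OpenAI's models programmatically rather than through the ChatGPT interface, a rough equivalent of this tip is to prepend the same candid-feedback instruction as a system message on every request. The sketch below only builds the message list; the commented-out call and the model name are assumptions about a typical OpenAI Python SDK setup, not anything from the thread.

```python
# Sketch: inject a candid-feedback instruction as a system message,
# mirroring the "Customize ChatGPT" tip above.

CANDID_FEEDBACK = "Provide candid feedback. If you see mistakes, say so clearly."

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the candid-feedback instruction before the user's prompt."""
    return [
        {"role": "system", "content": CANDID_FEEDBACK},
        {"role": "user", "content": user_prompt},
    ]

# Hypothetical usage with the OpenAI SDK (requires `pip install openai`
# and an OPENAI_API_KEY in the environment):
#
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o",  # model name is an assumption; use whichever you prefer
#     messages=build_messages("Critique my draft abstract."),
# )
```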

Michael Inzlicht:

I'm going to try this right now!
