Key Takeaways
- Emily Bender and Alex Hanna argue in their book “The AI Con” that current AI technology is overhyped and not truly intelligent.
- They urge readers to see AI as automation tools or complex statistical models, not conscious beings.
- The authors highlight the often-hidden human costs behind AI, including labor exploitation and data appropriation.
- They critique the narrow “booster vs. doomer” debate, suggesting it obscures more nuanced perspectives.
- Bender and Hanna encourage critical thinking and questioning the value and necessity of AI tools in workplaces and daily life.
Artificial intelligence seems to be everywhere, but not everyone is buying into the hype. Linguist Emily Bender and AI ethicist Alex Hanna are pushing back against the dominant narrative with their new book, “The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want.”
In a conversation with Business Insider, Bender and Hanna explained their mission: to strip away the exaggerated claims surrounding AI. They argue that what's often called "AI" isn't intelligent in the human sense, but rather sophisticated automation or pattern-matching systems, sometimes dubbed "stochastic parrots," a term Bender helped coin.
The authors stress the importance of remembering the people behind the technology. From programmers making design choices to workers labeling data and content moderators screening harmful outputs, human effort is central, though often obscured by the "artificial intelligence" label.
Hanna pointed out that "AI" is frequently used as a catch-all term for various automation technologies, some used in sensitive areas like hiring or criminal justice. As with fast fashion, these systems rely on a complex, often unseen supply chain of human labor and environmental costs.
Why the widespread enthusiasm for AI, then? Bender suggests it taps into a deep-seated desire for simple solutions to complex problems, even comparing some AI boosterism to a kind of technological salvation narrative. It promises answers and efficiency, seemingly relieving us from difficult tasks or societal issues.
Hanna added that AI is often presented as a fix for social problems like loneliness or declining social capital, distracting from harder, necessary work like rebuilding community infrastructure and social connections.
Bender and Hanna also critique the common framing of the AI debate as a battle between “doomers” (who fear AI will destroy us) and “boosters” (who believe it will save us). They argue this is a false dichotomy, built on the flawed premise that AI is already super powerful. Most realistic perspectives, including theirs, fall outside this narrow range.
When faced with AI tools being integrated into workplaces or online services, the authors advise people to question their purpose and value. Bender emphasizes that individuals have the agency to ask how a technology aligns with their own goals and values, resisting the narrative that AI adoption is inevitable.
For those without deep technical knowledge, Bender advises skepticism towards claims made by AI promoters, who often obscure how systems truly work. Hanna suggests people leverage their own expertise in their respective fields to evaluate AI claims, citing nurses who critically assessed AI tools in healthcare based on their professional experience.
Interestingly, the authors note that many technical professionals, the very people AI supposedly benefits most, are often resistant to the hype, expressing skepticism about tools meant to automate their work.