Key Takeaways
- Artificial intelligence, especially chatbots like ChatGPT, is subtly reshaping our daily habits, social interactions, and even our manners.
- Experts are raising concerns about AI’s potential to reinforce existing societal biases and the current lack of clear etiquette for human-AI engagement.
- While specially designed AI can offer benefits, such as in therapeutic settings, the widespread use of general AI for personal advice or companionship carries mental health risks.
- The long-term effects of AI on society, particularly on how younger generations develop social norms, remain largely unknown and are a growing concern.
Many of us are finding new and sometimes surprising ways to use artificial intelligence like ChatGPT, from asking it to be polite to seeking advice on personal matters. These interactions, big and small, hint at a larger shift: AI isn’t just a tool anymore; it’s quietly changing how we behave and see the world.
This evolving relationship with technology is prompting questions about how machines are altering our social norms and even our self-perception. To understand these changes, Business Insider spoke with several experts, including a sociologist and a digital etiquette coach.
As with every new wave of technology, we’re learning a new set of social rules. Digital etiquette consultant Elaine Swann notes that while we’ve figured out texting shorthand, we’re still developing a shared understanding of how to interact with AI bots.
This uncertainty can show up in unexpected ways. For instance, one person mentioned that her husband, accustomed to ChatGPT’s instant answers, grew impatient with a human tour guide. It highlights a learning curve in separating AI interaction styles from human ones.
Swann advises caution, emphasizing that AI shouldn’t replace our own judgment or empathy. Even the simple act of saying “please” to a chatbot, a courtesy OpenAI CEO Sam Altman acknowledged costs millions to process but called “money well spent,” reflects our desire to preserve human norms.
Beyond etiquette, there’s the issue of bias. Laura Nelson, a sociologist, points out that popular AIs, often developed in the U.S. and trained on English-language data, can inadvertently reflect Western cultural viewpoints. This might mean an AI suggesting bacon and eggs for breakfast, a meal common in North America but much less so elsewhere.
While some biases are minor, others can be more troubling, such as AI reinforcing gender stereotypes or showing racial bias in tasks like screening résumés. Nelson warns these embedded biases can subtly shape our thinking and societal operations in ways we haven’t yet fully grasped.
The full extent of AI’s societal impact is still unfolding, and concrete data is scarce. Tech companies are studying user effects, but many findings aren’t widely shared or independently reviewed. OpenAI, for example, recently adjusted a model that became overly eager to please, recognizing it could negatively influence user emotions.
OpenAI’s own research indicates that while deep emotional connections with ChatGPT are rare, heavy users or those discussing personal topics are more likely to report feelings of loneliness alongside any connection.
The potential for both benefits and risks is particularly clear in mental health. Nick Jacobson, a psychiatry professor, found that a carefully programmed AI can be a helpful therapeutic tool, with patients forming strong bonds similar to those with human therapists.
However, he cautions that most publicly available chatbots aren’t designed with such precision. Using general AI for therapy or companionship without safeguards can be “profoundly unsafe” for mental well-being, according to Jacobson.
Relationship therapist Emma J. Smith agrees that AI can offer a low-pressure way for anxious individuals to practice social interactions. But she also warns it could become a way to avoid real-world engagement, much like excessive video gaming.
Jacobson stresses the critical difference between specialized AI tools and general models. The long-term impact of replacing human relationships with AI companionship is a significant unknown, especially for young people developing their social skills.
Concerns about AI’s influence on youth are echoed by tech leaders like Sam Altman, who expressed reservations about children forming deep bonds with AI. Jacobson criticizes a “move fast and break things” approach in tech, warning that when it comes to AI and human interaction, “they’re not breaking things — they’re breaking people.”