Sci-Fi Con Tries AI Vetting. Human Problems Ensue.

Key Takeaways

  • Seattle Worldcon 2025 organizers used ChatGPT to help screen potential program participants, sparking controversy.
  • The AI tool was not used in any part of the Hugo Awards nomination or selection process.
  • Following community backlash, Worldcon leadership has issued apologies and detailed how the AI was used.
  • Three staff members, including two Hugo administrators, resigned over the incident.
  • The vetting process that involved ChatGPT is being redone by new volunteers.

The Hugo Awards, among the most prestigious prizes in science fiction, often find themselves in the spotlight for more than just the celebrated books and media. This year, the buzz surrounds the use of ChatGPT in connection with the upcoming Seattle Worldcon 2025, the convention where the awards are presented.

It recently came to light that ChatGPT was employed to help vet potential panelists for the convention’s program. This revelation led to the resignation of three individuals involved with Worldcon, including two Hugo administrators. Kathy Bond, the chair for Seattle Worldcon 2025, issued an initial statement and a subsequent apology as sci-fi and fantasy communities reacted strongly on social media. Author Yoon Ha Lee also withdrew his novel “Moonstorm” from consideration for a related award.

In a third message addressing the controversy, Bond offered further details and a renewed apology. She emphasized that ChatGPT played no role in creating the Hugo Award finalist list, producing the announcement video, or managing the award nomination process. Bond also acknowledged flaws in her initial response to the concerns and stated that the part of the program process involving ChatGPT is being entirely redone by new volunteers.

SunnyJim Morgan, the program division head, also provided a statement. He, too, apologized and offered a deeper look into how ChatGPT was utilized, even sharing the specific prompt given to the AI. The prompt asked the AI to check a list of names for potential “scandals” like homophobia, racism, or harassment by searching their digital footprint, including articles and social media.

Morgan clarified that his team didn’t blindly accept ChatGPT’s findings. Instead, they reviewed the links and primary sources provided by the AI before making any decisions about invitations. He reported that this process resulted in fewer than five people being disqualified from receiving an invitation due to previously unknown information.

The full statements from Bond and Morgan provide more context. According to Gizmodo, Seattle Worldcon anticipates its next update on May 13, as the organization works to regain the community’s trust.
