AI Does the Coursework. Students Add the Humanity (Sometimes).

Key Takeaways

  • College students are widely using AI tools like ChatGPT for coursework, often for the majority of their assignments and even personal essays.
  • This trend raises significant concerns among educators about declining critical thinking, literacy, and the very purpose of higher education.
  • Universities are struggling to adapt, with current policies on AI use often unclear and detection methods proving unreliable.
  • Some students not only embrace AI for their studies but are also developing new AI-driven tools, viewing it as innovation rather than cheating.
  • The rise of AI in education is prompting a deep re-evaluation of learning, assessment, and what skills will be valuable in the future.

Chungin “Roy” Lee, a Columbia University computer-science major, openly admitted to using generative AI for nearly all his assignments. He told New York Magazine he’d simply feed prompts to ChatGPT and submit the output, adding just “20 percent of my humanity” at the end.

Lee’s journey to Columbia was unconventional, involving a rescinded Harvard offer and a stint at community college. His transfer essay to Columbia was, perhaps unsurprisingly, written with AI assistance.

For Lee, the rigorous academic work was secondary. “Most assignments in college are not relevant,” he stated, viewing them as “hackable by AI.” His priority at an Ivy League school? “It’s the best place to meet your co-founder and your wife.”

He soon found a co-founder, Neel Shanmugam. After a few unsuccessful startup ideas, Lee, frustrated by the tedious LeetCode-style coding problems used to screen candidates for tech jobs, hit on a new concept: an AI tool to help job applicants cheat on remote interviews.

They launched Interview Coder, even posting a video of Lee using it to successfully interview for an Amazon internship, which he then declined. This led to disciplinary probation from Columbia for promoting a cheating tool.

Lee found it ironic, given Columbia’s own partnership with OpenAI, the company behind ChatGPT. He believes that soon, using AI for homework won’t be seen as cheating, a sentiment seemingly shared by many of his peers.

Indeed, AI use is rampant. A survey shortly after ChatGPT’s launch found nearly 90% of college students had used it for homework. Students across various institutions now rely on AI for note-taking, study guides, summarizing texts, and drafting essays.

STEM students use it for research, data analysis, and coding. “College is just how well I can use ChatGPT at this point,” one Utah student remarked.

Sarah, a freshman in Ontario, shared a similar story with New York Magazine, saying AI “changed my life” by boosting her grades in high school and college. She, like many, sees AI as a time-saver, enabling her to complete essays in hours instead of days.

Professors are struggling. Some have tried AI-proofing assignments with methods like oral exams or in-class essays. Brian Patrick Green, a tech-ethics scholar, stopped assigning essays almost immediately after trying ChatGPT.

Catching AI use can be tricky. Even personal reflections sometimes come back with robotic language. Troy Jollimore, an ethics professor, worries about “massive numbers of students” emerging from university “essentially illiterate.”

This isn’t entirely new; tools like Chegg already facilitated cheating. But AI is faster and more capable, leaving administrators perplexed. Most universities have adopted ad-hoc policies, often leaving it to individual professors to set rules.

Regulation is difficult. How much AI help is too much? Students often treat guidelines as flexible. Wendy, a finance major, claimed to be against AI plagiarism but described using it to outline and generate content for an essay on critical pedagogy; the essay itself, ironically, examined how schooling can hinder critical thinking.

When asked about the irony, Wendy admitted, “I use AI a lot… I do believe it could take away that critical-thinking part. But it’s just — now that we rely on it, we can’t really imagine living without it.”

Many professors believe they can spot AI-generated text due to its smooth yet sometimes clumsy phrasing or overly balanced arguments. Some even embed “Trojan horse” phrases in prompts to catch students who don’t read their AI-generated papers.

However, studies suggest professors aren’t as good at detecting AI as they think. One U.K. study found professors failed to flag 97% of AI-generated work. AI detection tools like Turnitin also have limitations, with varying success rates and concerns about false positives, especially for non-native English speakers.

Students easily find ways to fool detectors, such as rewriting AI text or using specialized AI tools to humanize the output. Eric, a Stanford sophomore, explained how laundering text through multiple AIs can reduce detection scores.

Many educators feel a sense of despair. Sam Williams, a former teaching assistant, noted how students’ writing styles changed drastically for an essay on jazz, with some papers even including out-of-place references to Elvis Presley. He was told to grade based on a “true attempt at a paper,” which amounted to grading students’ ability to use ChatGPT.

This experience led Williams to drop out of graduate school. “We’re in a new generation, a new time, and I just don’t think that’s what I want to do,” he said. Many colleagues, according to Jollimore, are now just thinking about retirement.

The core issue may be that the traditional value of education has eroded. High costs and a competitive job market have made college feel transactional for many. AI merely exposed this existing “rot.”

AI tools are even being developed to provide AI-generated feedback on student essays, potentially reducing academia to a “conversation between two robots.”

Early research suggests off-loading cognitive tasks to AI could harm memory, problem-solving, and creativity. Studies have linked AI use to deteriorating critical-thinking skills, especially in younger users. Cornell professor Robert Sternberg worries AI has already compromised human creativity and intelligence.

Even students using AI express ambivalence. Daniel, a computer-science major, finds AI makes him more curious but wonders if he’s sacrificing deeper learning. He uses it for polishing essays and coding, often operating in a “gray area” of academic conduct.

Mark, a math major, grapples with whether AI-assisted work is truly “his work.” The lines are blurring quickly. OpenAI’s CEO Sam Altman acknowledges these concerns, worrying that as models improve, “users can have sort of less and less of their own discriminating process.”

Chungin “Roy” Lee, meanwhile, is no longer at Columbia. After being suspended, he and Shanmugam launched Cluely, an AI tool that scans a user’s screen and audio to provide real-time feedback and answers without prompting.

“We built Cluely so you never have to think alone again,” their manifesto states. Lee envisions it eventually being “in your brain.” He aims to target digital standardized tests and all campus assignments, enabling users to “cheat on pretty much everything.”

Lee believes technological innovation forces humanity to rethink what work is useful, comparing the current shift to machinery replacing blacksmiths. He has already moved his new venture, now backed by investors, to San Francisco.

