The more young people use AI, the more they hate it


It’s been almost three years since Silicon Valley started aggressively pushing large language model-based chatbots like ChatGPT as the supposedly inevitable future of everything, and there’s no group that has felt the pressure quite like Gen Z.

Like with many tech trends before it, it’s no surprise that young people are among the biggest adopters of AI chatbot tools. But contrary to the tales spun by tech companies like OpenAI and Google, polling data shows that Gen Z students and workers are a big part of the wider cultural backlash against AI. And even as they utilize these tools, vast swaths of young people are deeply wary of and even resentful toward the AI-centric future that many feel is being forced on them.

“The part that feels scariest to me is the human impact … their ability to have relationships or just basic communication.”

Far from the stereotype of lazy young people looking for shortcuts, Gen Zers have had some of the loudest and most detailed objections to generative AI use. Their attitudes also reflect a much wider backlash against AI and the tech industry in general, which has recently resulted in a nonpartisan movement against data centers across the country and threatened both CEOs and politicians supportive of Silicon Valley’s AI frenzy.

Meg Aubuchon, a 27-year-old art teacher living in Los Angeles, says their response and that of many of their peers has been to avoid chatbot tools entirely. “It just makes me want to dig my heels into a career where I never have to use AI, even if that’s a career that isn’t going to pay as well,” Aubuchon told The Verge.

Emerging from academia and into the vice grip of an increasingly brutal job market, young people face an impossible contradiction. They are being told, on the one hand, that these tools are going to eliminate millions of jobs, and on the other that they have to use them if they don’t want to fall behind. They’re the first new generation of adults to navigate a world flooded with chatbots and generative AI slop, after having already lost years of their youth to the covid-19 pandemic. And all the while, Silicon Valley’s multitrillion-dollar push for AI adoption is clashing with their fears of its well-documented impacts — on the environment, disinformation, academic integrity, and our social fabric and emotional well-being, to name just a few.

“The part that feels scariest to me is the human impact, because it impacts people on an individual level and how they relate to other people, whether that be their ability to have relationships or just basic communication,” said Aubuchon.

Sharon Freystaetter, 25, went to school for computer science at a young age and spent three years working as a cloud infrastructure engineer at a major Silicon Valley company. But right as AI hype really started to take off, she left the company, citing ethical concerns and anxiety over the environmental impacts of data centers. Now, she has left the tech industry for good, and says she avoids chatbots and disables AI features in applications whenever possible.

“I think everyone in my immediate peer group is not using AI and is actively against it, besides my friends who are in computer science and are essentially mandated to use it,” Freystaetter, who is now a food service worker in New York, told The Verge. “When I came back and started to look around [for tech jobs], suddenly everything was saying ‘You need to use AI to get this job’ in the requirements.”

Fears that chatbots are wrecking critical thinking and social skills are common among many groups of young adults, even as a wide majority of them admit to using chatbot tools regularly. According to a recent Harvard-Gallup study, 74 percent of young adults surveyed in the United States said they use a chatbot at least once a month (another study found more than half of US college students admit to using the tools for their coursework on a weekly basis). At the same time, 79 percent of those surveyed by Gallup “expressed concern that AI makes people lazier,” and 65 percent said that using chatbots “promotes instant gratification, not real understanding” and prevents people from engaging with ideas in a critical or meaningful way.

“I’ve personally come to the conclusion that it’s a load of bullshit for outsourcing jobs.”

And in a more recent Gallup poll, Gen Z’s opinion of AI tools hit a new low: Only 18 percent now say they are hopeful about the technology, down from 27 percent last year, and only 22 percent say they are excited, down from 36 percent. The number of Gen Z workers who think AI’s risks outweigh its benefits has also increased over the past year by 11 points, to almost 50 percent. And even though 56 percent say the tools help them finish work faster, eight in 10 now admit that using AI in this way makes actual learning more difficult in the future.

To make matters worse, many university students are seeing school administrations awkwardly shoehorn AI into their higher education, consolidate computer science and engineering departments into new “AI” majors, and pen multimillion-dollar deals with AI companies like OpenAI and Anthropic to integrate chatbot tools into academic curricula. And at the same time, young people are graduating into a brutal job market that they complain has been made virtually impossible to navigate as AI automation tools opaquely and arbitrarily filter out their job applications.

Alex Hanna, the director of research at the Distributed AI Research Institute (DAIR), says the way students are being inundated by AI and its accompanying hype is driving their resentment, leading to widespread backlash both inside and outside academia.

“Universities are hearing from employers that they want students who know how to use these tools,” Hanna told The Verge. “This is not because the tools actually have shown much value-add — they want Gen Z to show them where the value-add is. That, or the university is investing or has donors heavily involved in the supply side (e.g., in the tech industry).”

In other words, AI companies and universities are taking an “integrate first, find use cases later” approach that essentially recruits students as marketing for the AI industry while baking these tools deep into the core of academia. At Arizona State University, for example, the school’s administration is using a beta tool called ASU Atomic that uses AI to automatically synthesize professors’ lectures into bite-sized learning materials, 404 Media recently reported.

74 percent of young adults surveyed in the United States said they use a chatbot at least once a month … 65 percent said that using chatbots prevents people from engaging with ideas in a critical or meaningful way.

Last month, the editorial board of the University of Pennsylvania’s student newspaper published a scathing piece criticizing the university administration’s adoption of chatbot tools and its integration of AI topics into nearly every part of its curriculum. While acknowledging the widespread use of chatbots by students, the authors wrote that by uncritically embracing the technology without any clear rules, the school is “only quickening its own demise.”

“AI cannot coexist with education — it can only degrade it. As technology advances and workers are replaced by machines, schools are some of the only places we have left to explore and wrestle with human thought,” the students wrote. “With our own university leading the charge, AI is now corrupting those few sacred spaces and leaving us with nowhere to engage in true scholarship.”

In another letter written by the Oberlin College Luddite Club (appropriately, using a typewriter), students rejected a similar initiative by their school administration to “experiment” with AI-centric education.

“[E]ven one semester of accepted (even encouraged) chat-bot use will jettison our student body down a lazy, irredeemable tunnel of intellectual destruction,” the Oberlin students wrote. “We will not stand by and witness the further atrophying of our liberal arts education. Rather than strengthening Silicon Valley, we build our own skills and generative sweat.”

The fear that chatbot tools will lead to a permanent loss of critical thinking skills ranks high among the worries held by young people about the technology. It’s also backed up by data: A recent study from the MIT Media Lab found that EEG scans of the brain showed decreased activity in people who have been writing essays using AI tools. Other research has found that this process, known as “cognitive offloading,” has a wide range of negative impacts on humans, including diminishing people’s skepticism and their ability to discern truth from deception, leading to “heightened manipulation and weakened democratic decision-making processes.”

The fact that so many young people are well aware of these dangers even as they make use of the tools shows that they aren’t buying the hype of AI boosters like OpenAI’s Sam Altman, who has frequently tried to pitch chatbots as tools for doing everything from writing essays to raising a child. Instead, it suggests that Gen Z is hyper-aware of the tools’ limitations — from their well-documented tendency to “hallucinate” made-up information to the social and emotional hazards of relying on machines for human advice.

“Altman talks about the technology like it is magic. He has used those words precisely, calling ChatGPT ‘Magic Intelligence in the Cloud,’” said Hanna. “Gen Z is more realistic about what the tools actually can do. They can handle text-based work that they don’t want to do or feel pressured to do. But they are often rather savvy about their limits.”

This is true even among those who aren’t “anti-AI” and say they find chatbot tools useful.

“I spend a lot of time thinking about this stuff and I’ve personally come to the conclusion that it’s a load of bullshit for outsourcing jobs,” Emma Gottlieb, a borderline Zoomer-millennial who works in technical sales for a company that makes equipment for the film industry, told The Verge. Gottlieb says she often uses AI tools to quickly sift through large volumes of technical documents for her job. But she knows better than to take the systems’ outputs at face value.

“I definitely do double-checks, personally. It’s important because somebody will mislabel an eBay listing for a component part, and then the AI will say it has this feature when it really doesn’t,” said Gottlieb. “I wouldn’t say it’s a significant time-saver, but I think it’s just like fast food — it’s easy, it’s cheap, and it’s there.”

AI companies and universities are taking an “integrate first, find use cases later” approach.

There’s one other explanation for Gen Z’s stance on AI tools that isn’t measured in data points: AI use has become culturally toxic, and many young people (like their older counterparts) won’t admit to using it out of social shame. The use of AI-generated visuals and text is frequently a subject of ridicule on social media, and any anecdotal sampling of young people will suggest that most find it fake and deeply uncool — especially when it’s used to circumvent the creative process and pass off ugly-looking slop as “AI art.”

Lacking any clear-cut rules, AI use also causes distrust and anxiety within academia, not just between students and professors, but among peers. According to one University of Pittsburgh study, students viewed the use of AI tools as a “red flag” that causes them to “think less” of their peers.

But Hanna says that a more critical approach is necessary — one that “punches up” at the CEOs, marketing teams, and school administrations that are pushing these tools as universal thinking machines, and focuses on the material conditions that pressure young people to use them in the first place.

“Speaking as an elder millennial, I approach Zoomers who use these tools with a bit more empathy,” said Hanna. “Why do they feel compelled to use them? What material conditions do they face at school such that they are feeling so pressured? Is there a way to offer them another kind of pressure valve? … That’s likely a better place to begin from.”

Freystaetter and Gottlieb both say that instead of their own generation, they are more worried about Gen Alpha and the young people who come after them, who risk losing the chance to develop healthy relationships with technology as it becomes mandatory and ubiquitous.

“These are the kids who are growing up with [AI] integrated into everything, and with ease of access,” Freystaetter said. “They grow up not knowing that they should be critical of it, and that they’re being influenced by it.”
