In recent years, artificial intelligence (AI) has rapidly moved from being a technological novelty to becoming a major influence on society. It is now deeply integrated into scientific research, education, and workplaces, with people using AI for tasks ranging from overcoming writer’s block to challenging parking tickets.
However, this rapid expansion has also introduced significant risks. The economic impact of AI is considerable; last year, 80% of U.S. stock market gains were attributed to AI companies. This reliance has led some experts to question the sustainability of current investments in the technology sector.
Policy debates are intensifying as lawmakers and regulators grapple with how best to manage the growth of AI companies. While some advocate for stricter controls, others support a more hands-off approach. Meanwhile, concerns over AI-generated deepfakes and explicit content are growing as these technologies make it ever harder to distinguish authentic information from fabricated media.
As a leading institution in both developing AI technology and studying its ethical implications, UC Berkeley sought insights from its faculty about key trends they expect to monitor in 2026:
Stuart Russell, professor of electrical engineering and computer sciences, commented: “Current and planned spending on data centers represents the largest technology project in history. Yet many observers describe a bubble that is about to burst: revenues are underwhelming, the performance of large language models seems to have plateaued, and there are clear theoretical limits on their ability to learn straightforward concepts efficiently… If the bubble bursts, the economic damage will be severe. But for the bubble not to burst, breakthroughs will need to happen that take us close to artificial general intelligence. AI developers have no cogent proposal for how to control such systems, leading to risks far greater than economic damage.”
Hany Farid, professor of information at UC Berkeley, raised concerns about trust in digital media: “I will be watching the accelerating erosion of trust driven by increasingly convincing AI-generated media. In 2026, deepfakes will no longer be novel; they will be routine, scalable, and cheap… I am especially concerned about the asymmetry: It takes little effort to create a fake, but enormous effort to debunk it after it spreads…”
Jennifer Chayes, dean of UC Berkeley’s College of Computing, Data Science, and Society, emphasized both opportunities and responsibilities: “Major technology paradigm shifts like AI come with significant benefits and risks… Our challenge is to apply AI to advance knowledge, expand understanding and benefit humanity.”
Deirdre Mulligan highlighted privacy issues related to chatbot usage: “People use AI chatbots for emotional support… users’ logs risk disclosure in more troubling settings… Expect more demands on AI companies for personal data and lawsuits challenging how they collect and use it.”
Jodi Halpern expressed concern over relationship chatbots among youth: “This year will see the expansion of companion chatbots to young children… In the short term we need regulation until safety is established…”
Ken Goldberg noted challenges facing robotics: “I’m very concerned about widespread claims that humanoid robots will ‘soon’ replace human workers… There is a vast gap between the amount of data available for training large language models such as ChatGPT and the amount available to train robots…”
Annette Bernhardt addressed worker rights amid growing automation: “In 2025 unions began developing policies regulating employers’ growing use of AI… I will be watching for progress in legislation establishing guardrails around electronic monitoring…”
Nicole Holliday focused on bias within workplace evaluation tools powered by AI: “My research focuses on AI that aims to evaluate how people speak… They are trained on ‘idealized’ speech so they show systematic bias against neurodivergent speakers…”
Jonathan Stray discussed political neutrality in algorithms: “The lack of good definitions and evaluations of ‘politically neutral AI’ is a problem for democracy…”
Camille Crittenden observed advances in deepfake sophistication: “This year will mark when video and audio manipulation goes mainstream… New California regulations requiring proof of content authenticity are an important step toward restoring trust but will not be sufficient…”
Alison Gopnik reflected on intelligence limits within current systems: “I’m expecting that… we will realize there is no such thing as general intelligence… At the same time we may see progress toward more realistic models that engage…with the external world…”
These perspectives illustrate the ongoing debate over both the potential benefits of advanced artificial intelligence—such as accelerating scientific discovery—and the societal challenges posed by its widespread adoption.