Four faculty members at the University of California San Diego have concluded that artificial general intelligence (AGI) has already arrived, according to a new comment published in Nature. The team—comprising Associate Professor of Philosophy Eddy Keming Chen, Professor of Artificial Intelligence, Data Science and Computer Science Mikhail Belkin, Associate Professor of Linguistics and Computer Science Leon Bergen, and Professor of Data Science, Philosophy and Policy David Danks—reached this position after extended interdisciplinary discussion.
The question of whether machines can match human reasoning, learning, and problem-solving was first raised by Alan Turing in 1950 through his “imitation game,” now known as the Turing Test. Seventy-five years later, the UC San Diego scholars argue that large language models (LLMs) meet reasonable standards for AGI.
Their discussions were influenced by recent research at UC San Diego showing that GPT-4.5 was judged to be human 73% of the time in a Turing test conducted in March 2025—a higher rate than actual humans.
The essay defines general intelligence as both breadth—the ability to perform across various domains such as mathematics, language, science, practical reasoning, and creative tasks—and depth—strong performance within those domains. “There is a common misconception that AGI must be perfect — knowing everything, solving every problem — but no individual human can do that,” said Chen. “The debate often conflates general intelligence with superintelligence. The real question is whether LLMs display the flexible, general competence characteristic of human thought. Our conclusion: insofar as individual humans possess general intelligence, current LLMs do too.”
According to the authors, neither perfection nor universal mastery is a requirement for AGI; few humans achieve either. They also argue that intelligence need not follow human models or exceed human mental capacity.
To assess AGI claims, they propose evaluating systems using increasingly demanding criteria: basic literacy and conversation (Turing-test level), advanced academic performance across multiple fields (expert tier), and revolutionary scientific breakthroughs (superhuman tier). Current frontier LLMs meet the first two levels.
Critics often point out flaws like hallucinations—where AI presents false information—but the authors note humans also make errors without being disqualified from possessing intelligence.
Another objection is that LLMs lack bodies; however, they reference physicist Stephen Hawking’s use of text-based communication to demonstrate that motor capabilities are not essential for intelligence. “This is an emotionally charged topic because it challenges human exceptionalism and our standing as being uniquely intelligent,” said Belkin. “Copernicus displaced humans from the center of the universe, Darwin displaced humans from a privileged place in nature; now we are contending with the prospect that there are more kinds of minds than we had previously entertained.”
The scholars acknowledge public anxiety about equating machine abilities with human-level intelligence but suggest meeting these emotions with curiosity rather than avoidance.
They also discuss how economic demands placed on AI systems by industry leaders may distort assessments of AGI's arrival; companies often require reliability or speed beyond what is expected of individual humans.
Addressing concerns over whether LLMs truly understand or merely recombine training data—the so-called ‘stochastic parrot’ issue—Bergen stated: “We have built highly capable systems, but we do not understand why we were successful… This gap in understanding grows more important as the systems grow more capable.”
While increased autonomy does not necessarily add to machine intelligence itself, it heightens ethical considerations regarding responsible design and governance. “We’re developing AI systems that can dramatically impact the world without being mediated through a human and this raises a host of challenging ethical, societal and psychological questions,” explained Danks. “AI is a future that we are building right now. Ultimately, we’re innovating because we want something better, and the very idea of better should have ethics and safety baked in.”
The four researchers represent different disciplines within UC San Diego: philosophy (Chen), linguistics/computer science (Bergen), data science/machine learning (Belkin), and policy/ethics/cognitive science (Danks). Their collaboration was made possible by UC San Diego’s cross-disciplinary institutional structure.
“I’ve learned so much from this group,” said Chen. “UC San Diego’s institutional structure made this collaboration possible—we simply wouldn’t have crossed paths elsewhere. It’s a powerful example of what cross-disciplinary work can achieve when applied to fundamental questions facing humanity.”