The assumption that the US holds a durable lead in AI model performance is not well-supported by the data, and that is just one of the uncomfortable findings in Stanford University’s 2026 AI Index Report, published this week.
The report, produced by Stanford’s Institute for Human-Centered Artificial Intelligence, is a 423-page annual assessment of where artificial intelligence stands. It covers research output, model performance, investment flows, public sentiment, and responsible AI. The headline findings are striking.
But the more consequential insights sit in the sections most coverage has skipped, particularly on AI safety, where the gap between what models can do and how rigorously they are evaluated for harm has not closed but widened.
With that in mind, three findings deserve more attention than they are getting.
The US-China model performance gap has effectively closed
The framing that the US leads China in AI development needs updating. According to the report, US and Chinese models have traded the top performance position multiple times since early 2025. In February 2025, DeepSeek-R1 briefly matched the top US model. As of March 2026, Anthropic’s top model leads by just 2.7%.
The US still produces more top-tier AI models – 50 in 2025 to China’s 30 – and retains higher-impact patents. But China now leads in publication volume, citation share, and patent grants. Of the 100 most-cited AI papers, China’s share grew from 33 in 2021 to 41 in 2024. South Korea, notably, leads the world in AI patents per capita.
The practical implication is that the assumption of a durable US technological lead in AI model performance is not well-supported by the data. The gap that existed two years ago has closed to a margin that shifts with each major model release.
The report identifies a further structural vulnerability. The US hosts 5,427 data centres – more than ten times as many as any other country – but a single company, TSMC, fabricates almost every leading AI chip inside them. The entire global AI hardware supply chain runs through one foundry in Taiwan, though a TSMC expansion in the US began operations in 2025.
AI safety benchmarking is not keeping pace, and the numbers show it
Almost every frontier model developer reports results on capability benchmarks. The same is not true for responsible AI benchmarks, and the 2026 Index documents the gap with some precision.
The report’s benchmark table for safety and responsible AI shows that most entries are simply empty. Only Claude Opus 4.5 reports results on more than two of the responsible AI benchmarks tracked. Only GPT-5.2 reports a result on StrongREJECT, a benchmark of resistance to jailbreak prompts. Across benchmarks measuring fairness, security, and human agency, the majority of frontier models report nothing.
This does not mean frontier labs are doing no internal safety work. The report acknowledges that red-teaming and alignment testing happen, but that “these efforts are rarely disclosed using a common, externally comparable set of benchmarks.” The effect is that external comparison on AI safety dimensions is effectively impossible for most models.
Documented AI incidents rose to 362 in 2025, up from 233 in 2024, according to the AI Incident Database. The OECD’s AI Incidents and Hazards Monitor, which uses a broader automated pipeline, recorded a peak of 435 monthly incidents in January 2026, with a six-month moving average of 326.
The governance response at the organisational level is struggling to keep pace. According to a survey conducted by the AI Index and McKinsey, the share of organisations rating their AI incident response as “excellent” dropped from 28% in 2024 to 18% in 2025. Those reporting “good” responses also fell, from 39% to 24%. Meanwhile, the share of organisations experiencing three to five AI incidents rose from 30% to 50%.
The report also identifies a structural problem in responsible AI improvement itself: gains in one dimension tend to reduce performance in another. Improving safety can degrade accuracy, for example, and strengthening privacy can reduce fairness. There is no established framework for managing such trade-offs, and in several dimensions, including fairness and explainability, the standardised data needed to track progress over time does not yet exist.
Public anxiety rises with adoption, and the expert-public gap is stark
Globally, 59% of people surveyed say AI’s benefits outweigh its drawbacks, up from 55% in 2024. At the same time, 52% say AI products and services make them nervous, an increase of two percentage points in one year. Both figures are moving upward simultaneously, which reflects a public that is using AI more while becoming more uncertain about where it leads.
The expert-public divide on AI’s employment effects is particularly sharp. According to the report, 73% of AI experts expect AI to have a positive impact on how people do their jobs, compared with just 23% of the general public – a 50-point gap. On the economy, the gap is 48 points (69% of experts are positive versus 21% of the public). On medical care, experts are considerably more optimistic at 84%, against 44% of the public.
Those gaps matter because public trust shapes regulatory outcomes, and regulatory outcomes shape how AI is deployed. On that dimension, the report flags something striking: of all countries surveyed, US respondents expressed the lowest trust in their own government to regulate AI responsibly, at 31%. The global average was 54%. Southeast Asian countries were the most trusting, with Singapore at 81% and Indonesia at 76%.
Globally, the EU is trusted more than the US or China to regulate AI effectively. Among 25 countries in Pew Research Center’s 2025 survey, a median of 53% trusted the EU to regulate AI, compared with 37% for the US and 27% for China.
The report closes its public opinion chapter by noting that countries across Asia remain among the world’s most optimistic about AI. In China, Malaysia, Thailand, Indonesia, and Singapore, more than 80% of respondents say AI will profoundly change their lives in the next three to five years. Malaysia posted the largest increase in this view from 2024 to 2025.
