A new analysis from Bernstein places compute power - the combination of energy, data-center scale and chip efficiency - at the center of the contest for AI leadership. According to the report, the United States currently maintains a substantial lead in installed AI compute, with about 35 zettaFLOPS compared with China’s roughly 5 zettaFLOPS, or approximately 15% of the U.S. level.
Where China holds a marked structural edge is in electricity. The country already produces more than twice as much electricity as the United States and is expanding generating capacity at a rapid clip - adding in excess of 500 gigawatts a year, a pace the analysis says surpasses the rest of the world combined. That scale of power growth gives China the potential to expand data-center capacity aggressively, even if domestic AI chips are less energy-efficient than their U.S. counterparts.
Bernstein lays out scenarios for how compute parity could evolve. In a base-case path, China could achieve parity with U.S. compute capacity by 2035 if it offsets weaker semiconductor efficiency by deploying substantially more hardware and facilities. Reaching that outcome would, the analysis says, require nearly $1 trillion of capital expenditure directed to AI data centers and a concurrent, rapid expansion in power infrastructure and battery storage.
In a more aggressive scenario, in which power availability becomes the binding constraint on global AI growth, China could surpass the United States. Bernstein projects that under such conditions China’s aggregate AI compute could exceed U.S. levels by a wide margin, potentially reaching more than three times U.S. compute capacity by 2035.
Despite that potential, the analysis underscores persistent bottlenecks. China currently trails in advanced semiconductor technology: domestic AI chips operate at roughly one quarter the efficiency of U.S. equivalents. Bernstein estimates that the gap could narrow, with Chinese chips reaching more than 50% of U.S. efficiency by 2035. However, export controls and restricted access to leading-edge manufacturing equipment remain material risks to that trajectory.
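The efficiency arithmetic behind these scenarios can be sketched in a short back-of-envelope calculation. The figures below are the report's stated ratios; the function name and structure are illustrative, not from the analysis:

```python
# Back-of-envelope: how much extra hardware (and, by extension, power)
# lower per-chip efficiency implies for matching a given compute level.

US_COMPUTE_ZFLOPS = 35.0   # installed U.S. AI compute (report figure)
CN_COMPUTE_ZFLOPS = 5.0    # installed Chinese AI compute (report figure)

# China's current share of U.S. compute: 5 / 35 ~= 14.3%, i.e. "about 15%".
share = CN_COMPUTE_ZFLOPS / US_COMPUTE_ZFLOPS
print(f"Current share of U.S. compute: {share:.1%}")

def hardware_multiple(relative_efficiency: float) -> float:
    """Units of hardware needed per unit of U.S. hardware to deliver
    the same total compute, given relative per-chip efficiency."""
    return 1.0 / relative_efficiency

# At ~25% of U.S. chip efficiency, parity takes roughly 4x the hardware;
# if the gap narrows to ~50% by 2035, that multiple halves to roughly 2x.
print(hardware_multiple(0.25))  # → 4.0
print(hardware_multiple(0.50))  # → 2.0
```

This is why the report pairs the parity scenario with massive capital expenditure and power buildout: deploying several times the hardware to offset weaker chips translates directly into more data centers and more electricity.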
The competition is therefore asymmetric: the United States leads on chips and software, while China’s advantages are concentrated in power generation, manufacturing scale and unit-cost efficiency. If energy proves to be the ultimate limiter for AI scaling, China’s electricity and buildout advantage could become a decisive factor.
Bernstein’s conclusion is straightforward: AI supremacy will be shaped as much by megawatts and data-center investment as by advances in microchips. The analysis highlights a pathway by which China could close the compute gap, but it also identifies significant investment and technology challenges that would need to be overcome to realize that outcome.