> 5 gigawatts of Stargate AI data center capacity
> over 2 million chips

If these are GB200s, that's roughly 5M H100-equivalents. If all of it were used to train a single model, a six-month pretraining + RLVR run could reach 2e28-4e28 FLOP, about 1000x the compute used for GPT-4. We don't know the timeline for this buildout, but I'd guess later than 2027. My guess is that the best models at the end of 2026 will be trained with ~2e27 FLOP.
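As a sanity check, here's a minimal back-of-envelope sketch of where the 2e28-4e28 figure comes from. The inputs are assumptions, not published numbers: GB200 at ~2.5 H100-equivalents per chip, ~1e15 dense BF16 FLOP/s peak per H100-equivalent (rounded), and 25-50% utilization over a six-month run.

```python
# Back-of-envelope training-compute estimate. All inputs are assumptions.

chips = 2e6                  # "over 2 million chips" (assume GB200)
h100e_per_chip = 2.5         # assumed GB200 ~ 2.5 H100-equivalents -> 5M H100e
peak_flops_h100e = 1e15      # ~1e15 dense BF16 FLOP/s per H100-equivalent (rounded)
seconds = 0.5 * 365 * 86400  # six-month run, ~1.6e7 seconds

for mfu in (0.25, 0.50):     # assumed utilization (MFU) range
    total = chips * h100e_per_chip * peak_flops_h100e * mfu * seconds
    print(f"MFU {mfu:.0%}: {total:.1e} FLOP")

# MFU 25%: 2.0e+28 FLOP
# MFU 50%: 3.9e+28 FLOP
```

The endpoints of the 2e28-4e28 range correspond to the low and high utilization assumptions; at ~2e25 FLOP for GPT-4, that is roughly 1000-2000x.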
OpenAI (21 hours ago):
> It's official: we're developing 4.5 gigawatts of additional Stargate data center capacity with Oracle in the U.S. (for a total of 5+ GW!). And our Stargate I site in Abilene, TX is starting to come online to power our next-generation AI research.