What a wild weekend for AI and the IMO. Looking back at the past three days, I witnessed:
> Friday afternoon: leaked information that DeepMind had won gold
> Saturday 1am: OpenAI front-ran the official announcement, stealing the spotlight
> I initially thought Google was just slow due to marketing approval
> Then I heard from Google people that the IMO and they needed extra time for verification
> It turns out OpenAI didn't even involve the IMO officially
> Monday: DeepMind confirmed its gold with cleaner, more elegant answers, fully verified by the IMO

It was fun being the first to share the news on X last Friday. What a wild weekend!

Jokes aside, there's a lot of hype around math AI lately, but what we need more of is rigor and standardization. As Terence Tao pointed out, even if the results look similar, differences in testing format can make a world of difference. We're still far from having clear, consistent standards and messaging in AI research. If we want meaningful progress, it's time for the community to step up. Let's build benchmarks we can all trust.
Jasper · 19 Jul, 06:25
Just 20 minutes ago, the results of the 2025 IMO came out. China ranked No. 1 and @GoogleDeepMind won a gold medal 🥇 Future math competitions will be the China team vs the USA team vs AI