AI · 5 min read · 13 December 2024

Gemini 2.0 and What 2024 Actually Delivered in AI

Google released Gemini 2.0 in December 2024. The release closed a year that had delivered serious capability progress and even more serious complications.

AI · Gemini · Google · Year in Review · 2024

In December 2024, Google released Gemini 2.0, the next generation of its multimodal AI model family. The release packaged together several capabilities that had been emerging across the frontier but not yet combined at this scale: strong agentic behaviour, with the model able to use tools, navigate websites, and complete multi-step tasks with minimal supervision; improved reasoning that approached o1-class performance on many benchmarks while remaining substantially faster; and native multimodal understanding that handled images, audio, and video as first-class inputs and outputs.

The release closed a year that had delivered substantial capability progress alongside complications that had not been fully anticipated. The frontier had become genuinely competitive, with OpenAI, Google, Anthropic, and a smaller cluster of credible challengers all shipping models that would have been state of the art only twelve months earlier. The cost of inference had dropped by approximately ninety percent over the year, which had changed the economics of AI applications in ways product teams were still working through.

The agent category, which had consisted mostly of demonstrations and proofs of concept at the start of 2024, had become deployable by year-end. The combination of more capable models, better tool-use protocols (including the Model Context Protocol that Anthropic had introduced), and accumulated engineering practice around building reliable agents had produced systems that worked well enough for production use in bounded contexts.
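The "bounded contexts" point is the crux: production agents of this era typically ran a fixed loop over a small registry of vetted tools rather than acting open-endedly. A minimal sketch of that pattern, with entirely hypothetical tool names (real systems would route these calls through a protocol such as MCP and let the model choose the steps):

```python
# Minimal sketch of a bounded agent tool-use loop.
# The tool names and step format here are illustrative, not any real API.

def get_weather(city):
    """Stub tool: a real implementation would call an external service."""
    return {"city": city, "temp_c": 21}

# Registry of tools the agent is allowed to invoke.
TOOLS = {"get_weather": get_weather}

def run_agent(steps):
    """Execute a bounded sequence of (tool_name, kwargs) steps.

    In a real agent, the model would propose each step and the loop
    would feed results back to it; here the plan is fixed up front.
    """
    results = []
    for tool_name, kwargs in steps:
        tool = TOOLS[tool_name]  # unknown tools raise KeyError by design
        results.append(tool(**kwargs))
    return results

print(run_agent([("get_weather", {"city": "London"})]))
```

The design choice worth noting is the explicit registry: restricting the agent to an enumerated set of tools is what makes "bounded context" deployments tractable to test and audit.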

What 2024 had not delivered was the more dramatic version of AI progress that some forecasters had described. The systems that shipped at the end of the year were significantly better than those that had shipped at the start. They were not transformatively better in ways that changed the basic structure of how work was done. The applications that produced clear business value were generally specific implementations of well-understood patterns, not radical reinventions of how organisations operated.

The complications were as instructive as the progress. The CrowdStrike outage in July had reminded the industry how fragile critical infrastructure could be when software updates deploy automatically at global scale. The election had demonstrated that the worst-case AI risks were not always the most consequential ones. The OpenAI governance situation that had played out at the end of 2023 had cast a long shadow over how AI companies would be governed going forward.

The end-of-year retrospectives mostly settled on a similar conclusion: 2024 had been a year of consolidation and continued progress rather than the breakthrough year that some had predicted. What 2025 would deliver, most of them admitted, was harder to predict than usual.
