Three years after ChatGPT launched and the mainstream AI era began, I think we have enough signal to be honest about what has happened and where things stand.
The technology has improved faster than most predicted. The models available in early 2026 are substantially more capable than those of 2022 on almost every dimension: reasoning, coding, multimodal understanding, following complex instructions, maintaining consistency in long contexts. The improvement has not been a smooth curve but a series of jumps, with each new model generation providing capabilities that change what is possible.
The hallucination problem has improved but not gone away. Reasoning models hallucinate less on factual questions because they have been trained to be more careful. RAG architectures ground responses in real documents. But the fundamental issue, that language models predict likely tokens rather than retrieve facts, means that confident-sounding errors remain possible. Any application where accuracy is critical needs verification layers.
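One shape such a verification layer can take: before trusting a generated answer, check that each of its sentences is actually supported by the retrieved source passages. The sketch below uses a crude word-overlap heuristic as the support check; the function names, threshold, and heuristic are all illustrative assumptions, not a production fact-checker (real systems typically use an entailment model for this step).

```python
# Minimal sketch of a verification layer over a RAG pipeline: every
# sentence in the model's answer must be supported by at least one
# retrieved passage, otherwise the answer is flagged. The overlap
# heuristic and all names here are illustrative, not a real product.
import re

def _tokens(text: str) -> set[str]:
    """Lowercase word tokens, dropping very short (stopword-like) words."""
    return {w for w in re.findall(r"[a-z0-9]+", text.lower()) if len(w) > 3}

def support_score(claim: str, passage: str) -> float:
    """Fraction of the claim's content words that appear in the passage."""
    claim_words = _tokens(claim)
    if not claim_words:
        return 1.0
    return len(claim_words & _tokens(passage)) / len(claim_words)

def verify_answer(answer: str, passages: list[str], threshold: float = 0.6) -> dict:
    """Split the answer into sentences; flag any sentence whose best
    support score across the retrieved passages falls below threshold."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", answer) if s.strip()]
    unsupported = [
        s for s in sentences
        if max(support_score(s, p) for p in passages) < threshold
    ]
    return {"verified": not unsupported, "unsupported": unsupported}

passages = ["The Eiffel Tower was completed in 1889 and stands in Paris."]
good = verify_answer("The Eiffel Tower was completed in 1889.", passages)
bad = verify_answer("The Eiffel Tower is located in London and opened in 1925.", passages)
print(good["verified"], bad["verified"])  # → True False
```

The design point is the architecture, not the heuristic: generation and verification are separate steps, so a confident-sounding but unsupported sentence is caught before it reaches the user rather than after.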
The economic impact has been real but uneven. Software development productivity has genuinely increased for teams that have integrated AI tools thoughtfully: documentation, code review, debugging, and boilerplate generation are all significantly faster. Creative and knowledge work has been augmented in ways few predicted. The widely forecast mass job displacement has not materialised quickly, but the composition of jobs and the skills valued within them have started to shift.
The energy consumption story has become increasingly difficult to ignore. Data centres for AI training and inference require enormous amounts of power. The environmental cost of frontier AI is real, and the industry has not addressed it well. Energy availability and cost will be a constraining factor on growth.
Regulatory clarity has arrived in some jurisdictions and remains absent in others. The EU AI Act has shaped how AI systems are deployed in Europe with particular emphasis on high-risk applications. The UK and US have taken lighter-touch approaches that have allowed faster deployment at the cost of less consumer protection. Regulated industries (healthcare, finance, law) have navigated this carefully with varying success.
My view on what comes next: the integration of AI into everyday software tools continues and deepens. The distinction between "AI product" and "product with AI" will blur to the point of meaninglessness. The frontier model race will continue but the gap between frontier and open-source models will narrow further. The real differentiation will move to application quality, reliability, and the ability to integrate AI into complex existing systems rather than model capability alone.
The discipline of AI engineering will mature. The patterns, tools, and practices for building reliable AI-powered applications are solidifying. Teams that invest in these skills now are building an advantage that will compound.