AI · 8 min read · 10 December 2022

ChatGPT Launched and Then Everything Changed

ChatGPT launched on 30 November 2022. I tried it within hours. One million users in five days. One hundred million in two months. I have been thinking about what happened ever since.

Tags: ChatGPT, OpenAI, AI, Language Models

I tried ChatGPT on 30 November 2022, the day it launched. Then I sent the link to colleagues. Then to my parents. That itself told me something: this was the first AI product I had encountered in years that I thought non-technical people should use.

The technology was not new. GPT-3 had been available via API since 2020. GPT-3.5, which powered the initial ChatGPT, was an improvement on GPT-3 but not a revolutionary leap. What was new was the interface and the fine-tuning.

The chat interface sounds trivial but it changed everything. You could have a conversation. You could ask a follow-up question. You could say "make it shorter" or "explain that differently" and the system would comply. The conversation format made the model's capabilities discoverable in a way that a raw API never could.
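The mechanics behind this are worth making concrete. A minimal sketch (hypothetical, not OpenAI's actual API) of why the chat format works: the whole conversation travels with each request, so a follow-up like "make it shorter" has the context it needs to be understood.

```python
def add_turn(conversation, role, content):
    """Append one message to the running conversation history."""
    return conversation + [{"role": role, "content": content}]

history = []
history = add_turn(history, "user", "Explain RLHF in plain English.")
history = add_turn(history, "assistant",
                   "RLHF tunes a model using human preference ratings...")
# The follow-up only makes sense because the earlier turns travel with it:
history = add_turn(history, "user", "Make it shorter.")

# All three turns would be sent to the model together, so "it" resolves
# to the previous answer rather than being an unanswerable fragment.
print(len(history))  # → 3
```

A raw completion API, by contrast, puts the burden of assembling that context on the caller, which is exactly why most people never discovered what the underlying model could do.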

The fine-tuning using RLHF (Reinforcement Learning from Human Feedback) had made the model much more helpful and less likely to produce the harmful outputs that had made OpenAI cautious about releasing previous versions broadly. Human raters had shaped the model's behaviour so that it tried to be genuinely useful rather than just fluent.

The one million users in five days milestone that OpenAI cited was real and significant. Consumer products rarely grow that fast: Netflix took three and a half years to reach a million subscribers. The growth reflected pent-up demand for something that actually worked.

What I noticed in those first weeks was the range of use cases people found. Coding assistance. Essay editing. Explaining concepts in plain English. Translating languages. Writing cover letters. Summarising documents. The same model was being used for dramatically different tasks by dramatically different people. The generality was the product.

The weaknesses were the same as those of previous LLMs: confident hallucination, no access to real-time information, inconsistent performance on complex reasoning, no memory between conversations. But at ChatGPT's level of quality and accessibility, many use cases became viable despite these limitations.

The question I kept asking in December 2022 was: what happens when this is integrated everywhere? When every writing tool, every search engine, every customer service system has access to this capability? The answer, which 2023 and 2024 would partially provide, is that it changes what building software means, what information access means, and what expertise means. We are still working out the implications.
