GitHub announced a technical preview of Copilot in late June 2021. The product was an AI assistant that suggested code as developers typed in their editors, built on OpenAI Codex, a descendant of GPT-3 fine-tuned on code. The launch was the first time most developers got direct experience with AI-generated code at this level of quality.
The reaction from developers was unusually polarised, in ways that turned out to be informative rather than merely contentious. The split was not about whether the technology worked. By any reasonable measure it worked surprisingly well, often producing functional code that did what the developer was evidently trying to do, sometimes with bugs that needed correcting, but often without any.
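To make that experience concrete, here is a minimal sketch of the interaction pattern: the developer writes a comment and a function signature, and the assistant proposes the body. This is an invented illustration, not a recorded Copilot suggestion; the function name, the input format, and the latent bug are all assumptions chosen to show the "functional but worth checking" quality the early previews had.

```python
# Hypothetical illustration of the Copilot interaction pattern: the
# developer types the comment and the signature, the assistant fills in
# the body. Invented example, not an actual Copilot transcript.

# Parse "name,amount" lines and return a dict of name -> total amount.
def sum_expenses(lines):
    totals = {}
    for line in lines:
        # A plausible suggestion: correct on clean input, but it silently
        # assumes every line is well formed. This is the kind of latent
        # bug early users learned to review for before accepting.
        name, amount = line.strip().split(",")
        totals[name] = totals.get(name, 0.0) + float(amount)
    return totals


if __name__ == "__main__":
    print(sum_expenses(["coffee,3.50", "lunch,12.00", "coffee,4.00"]))
    # {'coffee': 7.5, 'lunch': 12.0}
```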
The split was about what that capability meant. One group of developers found Copilot immediately useful. They reported that it reduced the time spent on boilerplate, helped them learn unfamiliar libraries, and freed attention for more interesting design problems. The endorsement was strongest from developers working on the kind of routine code whose patterns had not changed materially in years.
The other group of developers had concerns ranging from intellectual property to skill atrophy to working conditions. The training data for Codex had included a great deal of open source code. The licences on much of that code required attribution, and in some cases required that derivative works carry the same licence, but Copilot did not always reproduce code verbatim, and the line between a learned pattern and reproduced code was not clear. Some developers worried that relying on Copilot would erode the skills that made them developers in the first place. Others wondered what it would mean for the labour market in five years.
What stood out about that period was that both groups were essentially right, about different things. Copilot was genuinely useful. It also raised legitimate questions about training data, attribution, and how the technology would affect the work of programming over time. Holding both ideas at once was something developer culture was, and to some extent still is, working out how to do.
What Copilot represented technically was the first integration of large language models into a daily-use professional workflow at meaningful scale. The lessons learned over the next two years about how to design that integration, where it helped, and where it created new problems, would shape how the AI tooling that came after it was built.