Developer Tools · 8 min read · 18 November 2021

GitHub Copilot: Six Months of Using AI to Write Code

I got GitHub Copilot access in the technical preview in June 2021. By November I had strong opinions about what it is and what it is not: not a replacement for a developer, but something different and more interesting.

GitHub Copilot · AI · Developer Tools · Productivity

I applied for GitHub Copilot access when it launched in technical preview in June 2021, mostly out of curiosity. Six months later I had genuine opinions about what it changed and what it did not.

The first thing to understand is what Copilot actually does. It is a code completion tool, but completion at a scale and quality that had not existed before. Given the context of what you are writing (a function signature, a comment describing what you want, or a few lines of existing code), it suggests what comes next: sometimes one line, sometimes an entire function.
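As a hypothetical illustration (the function, its name, and the comment are mine, not a transcript of a real Copilot session), a comment plus a signature like the following was often all the context Copilot needed to suggest the complete body:

```python
import re

# Convert a title like "Hello, World!" into a URL slug like "hello-world".
def slugify(title: str) -> str:
    # Everything below the signature is the kind of body Copilot would propose
    # from just the comment and the function name above.
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)
    return slug.strip("-")
```

The point is not that this code is hard to write, but that you review it rather than type it.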

The suggestions are often correct. Not always, not reliably for complex logic, but for common patterns, boilerplate code, and routine tasks, the hit rate is high enough to meaningfully change the experience of writing code. Writing tests became faster. Generating data fixtures, writing utility functions, implementing standard patterns: these tasks went from things I did manually to things I did by reviewing and accepting or rejecting suggestions.
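A sketch of the kind of routine test code where the hit rate was highest. The utility function and the test cases here are invented for illustration; the pattern, not the specifics, is what Copilot handled well:

```python
import unittest

def parse_version(version: str) -> tuple:
    # The kind of small utility Copilot completed reliably from the name alone.
    return tuple(int(part) for part in version.split("."))

class TestParseVersion(unittest.TestCase):
    # Repetitive cases like these were largely suggestion-driven:
    # write the first test, and Copilot proposed the variations.
    def test_three_parts(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))

    def test_two_parts(self):
        self.assertEqual(parse_version("10.0"), (10, 0))
```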

Where Copilot struggled was anywhere the task required genuine understanding of the specific system you were working in. Suggestions were often syntactically correct but semantically wrong: they used the wrong variable names, called functions that did not exist in your codebase, or implemented logic that was not appropriate for your domain. The model had learned to write Python or TypeScript well but it did not know your application.
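A hypothetical example of that failure mode. The `User` class and the method names are invented to illustrate the category of error, not taken from a real session:

```python
from dataclasses import dataclass

@dataclass
class User:
    display_name: str  # this codebase's actual attribute

def greet(user: User) -> str:
    # A typical Copilot suggestion here was plausible but wrong for this codebase:
    #     return f"Hello, {user.get_full_name()}!"   # AttributeError: no such method
    # Syntactically fine, semantically wrong. The correct version for this code:
    return f"Hello, {user.display_name}!"
```

The suggestion compiles in your head because it looks like code from a thousand other projects; it just is not code from yours.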

The productivity gain was real but unevenly distributed. For experienced developers, Copilot removed friction from tasks they already knew how to do. For less experienced developers, there was a risk of accepting suggestions without understanding them. Bad code accepted from a suggestion is just as bad as bad code written by hand, and the code review burden potentially increased because reviewers had to catch what less critical developers had accepted.

The intellectual property questions were not resolved in 2021 and remain contentious. Copilot was trained on public GitHub repositories under various licences. Whether that training and the resulting suggestions constituted copyright infringement was a legal question that nobody had answered. Several developers reported seeing suggestions that closely matched specific licensed code.

My overall impression after six months: Copilot changed what it felt like to write code but did not change what it meant to be a developer. Understanding systems, making design decisions, reviewing code critically, communicating with teams: none of that changed. The mechanical parts of coding became faster and less mentally taxing. The judgment parts remained entirely human.
