Infrastructure · 7 min read · 12 March 2015

Docker Changed Everything and Most People Missed It

In early 2015 I watched a colleague containerise an app in twenty minutes that had taken three days to set up on a new server. That was the moment I understood what Docker actually was.

Docker · Containers · DevOps

I remember the first time I saw Docker in action. It was early 2015, and a colleague ran a single command and had a fully working environment in about twenty minutes. The same setup had taken us three days on a new server the week before. I did not fully understand what had happened, but I knew something important had shifted.

Docker was not new in 2015. Solomon Hykes had shown it at PyCon back in 2013. But 2015 was the year it stopped being a curiosity and started being something you actually used at work. The version 1.0 release in 2014 had given enterprises the confidence to take it seriously, and by early 2015, the tooling around it had matured enough to make it genuinely useful.

The core idea is deceptively simple. Instead of setting up an environment and hoping it matches production, you package your application with everything it needs into a container image. The image runs the same way everywhere. Your laptop, the CI server, staging, production. The "it works on my machine" problem, which had plagued software teams for decades, largely disappeared.
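As a sketch, the packaging step for a hypothetical Node.js service of that era might look like this (the base image, port, and file names are illustrative, not from any setup described here):

```dockerfile
# Start from a known base image so every build begins identically.
FROM node:0.12

# Copy the dependency manifest first so npm install is cached between builds.
WORKDIR /app
COPY package.json .
RUN npm install

# Copy the rest of the application source into the image.
COPY . .

# The image records exactly how the service is exposed and started.
EXPOSE 3000
CMD ["node", "server.js"]
```

Running `docker build -t myapp .` against this file produces one image, and that same image is what runs on a laptop, the CI server, staging, and production.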

What took me a while to understand was what Docker was really solving. On the surface it looked like a better virtual machine. But virtual machines virtualise hardware. Docker virtualises the operating system. Containers share the host kernel, which makes them start in seconds rather than minutes and use a fraction of the memory.
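You can see the kernel sharing directly, assuming Docker is installed (`alpine` here is just a convenient small public image):

```shell
# A container has its own filesystem and userland...
docker run --rm alpine cat /etc/os-release

# ...but shares the host kernel: this prints the HOST's kernel version,
# not a guest OS's, because there is no guest OS underneath.
docker run --rm alpine uname -r
```

A virtual machine running the same commands would report its own guest kernel, booted minutes earlier; the container reports the host's, milliseconds after starting.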

The workflow change was as important as the technology. Before Docker, deploying an application meant configuring servers, installing dependencies, managing versions, and praying nothing conflicted. With Docker, you built an image once and deployed that image everywhere. Infrastructure became code. Reproducible, version-controlled, reviewable code.
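That build-once, deploy-everywhere workflow reduces to a handful of commands (the registry hostname and version tag below are hypothetical):

```shell
# Build the image once, from the version-controlled Dockerfile.
docker build -t registry.example.com/myapp:1.4.2 .

# Push the exact artifact that was built and tested.
docker push registry.example.com/myapp:1.4.2

# Every environment pulls and runs the same immutable image.
docker run -d -p 80:3000 registry.example.com/myapp:1.4.2
```

The tag pins a specific build, so what ran in staging is byte-for-byte what runs in production.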

By mid-2015, Dockerfiles were appearing in repositories everywhere. Teams that had never thought about their build process were suddenly describing it precisely in code. That was an unexpected benefit nobody had really advertised.

The ecosystem around Docker grew fast. Docker Compose let you define multi-service applications in a single file. Docker Hub became the default place to share images. Major cloud providers added container support. The tooling improved weekly.
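A Compose file from that period might describe a web service and its database in a few lines. This sketch uses the original 2015-era syntax (top-level services with `links`); the service names and images are illustrative:

```yaml
# docker-compose.yml: two services, started together with `docker-compose up`
web:
  build: .
  ports:
    - "80:3000"
  links:
    - db
db:
  image: postgres:9.4
  environment:
    POSTGRES_PASSWORD: example
```

One command brings up both containers, wired together, on any machine with Docker installed.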

But not everything was smooth. Container networking was confusing. Storage was a mess. Running Docker in production at scale was genuinely hard. Orchestration was unsolved. You could get five containers running without too much trouble. Getting fifty running reliably in production was a different problem entirely.

That orchestration problem would be the next chapter. But in 2015, the fundamental shift was happening. The way software was packaged and shipped was changing in a way that would not be reversed. If you look at how modern software is deployed today, almost all of it traces back to the ideas Docker popularised in those years.

I find it interesting, looking back, that the companies who moved fastest on containers in 2015 had a real competitive advantage in the years that followed. Faster deployments, more reliable environments, and eventually the ability to adopt Kubernetes when it matured. Those early movers had a head start that took their competitors years to close.
