AI · 7 min read · 14 February 2017

TensorFlow Made AI Accessible to Engineers, Not Just Researchers

Google open-sourced TensorFlow in late 2015. By 2017, it had become the default framework for machine learning. What made it special was not the algorithms. It was the engineering.

TensorFlow · Machine Learning · AI · Python

Before TensorFlow, doing serious machine learning meant either using academic tools that were hard to put into production, or building your own infrastructure. TensorFlow, which Google open-sourced in November 2015 and which reached widespread use through 2016 and 2017, changed that.

The key insight was that machine learning is a software engineering problem as much as a mathematical one. The training algorithms matter, but so does the ability to run those algorithms efficiently on GPUs, deploy them to production, and integrate them into existing software systems. TensorFlow was built by engineers, for engineers, and it showed.

The computational graph model was central to TensorFlow's design. You define a graph of mathematical operations, then execute that graph. This separation of definition and execution enabled optimisations that would be difficult with an imperative approach. The same graph could run on a CPU, a GPU, or multiple machines. This portability was a core architectural decision, not an afterthought.
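The define-then-run idea can be illustrated without TensorFlow at all. The toy sketch below (hypothetical names, not TensorFlow's API) shows the two phases: building a graph of operation nodes, then separately executing it:

```python
# Toy illustration of define-then-run: nodes only record operations;
# nothing is computed until the graph is explicitly executed.
class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

    def __add__(self, other):
        return Node("add", (self, other))

    def __mul__(self, other):
        return Node("mul", (self, other))

def constant(v):
    return Node("const", value=v)

def run(node):
    # Execution phase: recursively evaluate the node's dependencies.
    if node.op == "const":
        return node.value
    args = [run(i) for i in node.inputs]
    return args[0] + args[1] if node.op == "add" else args[0] * args[1]

# Definition phase: this builds a graph but computes nothing yet.
y = constant(2) * constant(3) + constant(4)

# Only now is the result produced.
print(run(y))  # → 10
```

Because the whole graph exists before anything runs, an executor is free to rewrite it, fuse operations, or ship it to different hardware, which is exactly the flexibility TensorFlow exploited.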

For an engineer coming from Python, TensorFlow in 2017 had a steep learning curve. The computational graph model was unfamiliar. Debugging was hard because you could not step through operations with a normal Python debugger. The API was powerful but not intuitive. Keras, which provided a higher-level API on top of TensorFlow, made the learning curve much more manageable.
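The difference Keras made is visible in how little code a model definition takes. A minimal sketch, using the modern `tf.keras` spelling and illustrative layer sizes (the specific architecture is a made-up example, not from the article):

```python
from tensorflow import keras

# Hypothetical two-layer classifier for 784-dimensional inputs
# (e.g. flattened MNIST images); sizes are illustrative only.
model = keras.Sequential([
    keras.Input(shape=(784,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])

# Keras hides graph construction, sessions, and variable initialisation
# behind a compile/fit interface.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The same model expressed against the raw 2017 graph API meant hand-wiring placeholders, variables, and a session, which is where much of the learning curve lived.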

What TensorFlow made possible in 2017 was experimentation at a scale that had not been accessible to most engineering teams. You could take a pretrained model, such as the Inception checkpoints published in the tensorflow/models repository, fine-tune it on your own data, and deploy it to production using TensorFlow Serving. The full pipeline, from idea to production deployment, was within reach for a team of engineers without specialist machine learning research backgrounds.

The production story was particularly important. Many machine learning frameworks were great for research but had no story for serving predictions at scale. TensorFlow Serving was purpose-built for this. You exported a SavedModel and TensorFlow Serving handled batching, versioning, and scaling.
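The export step can be sketched in a few lines. This is a minimal example using today's `tf.saved_model` API and a made-up `Scorer` module; the versioned directory layout is the convention TensorFlow Serving expects:

```python
import tensorflow as tf

# Hypothetical model: a fixed linear scorer wrapped in a tf.Module.
class Scorer(tf.Module):
    def __init__(self):
        super().__init__()
        self.w = tf.Variable([[2.0], [1.0]])

    @tf.function(input_signature=[tf.TensorSpec([None, 2], tf.float32)])
    def predict(self, x):
        return tf.matmul(x, self.w)

# Serving watches a base directory and loads numbered version
# subdirectories, so the model is exported under ".../1".
export_dir = "/tmp/my_model/1"
tf.saved_model.save(Scorer(), export_dir)
```

A `tensorflow_model_server` pointed at `/tmp/my_model` via `--model_base_path` then picks up version `1`, and later versions, without a restart.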

Competition from PyTorch, which Facebook released in 2016, would eventually shift research community preferences. PyTorch's dynamic computational graph was much easier to debug and felt more like normal Python. But TensorFlow's production maturity kept it dominant in deployment contexts for years.
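The debuggability difference is easy to see in a few lines. Assuming PyTorch is available, every operation executes eagerly, so intermediate values can be printed or stepped through in an ordinary Python debugger:

```python
import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)
y = (3 * x).sum()   # executes immediately; y is a concrete value
print(y.item())     # inspectable right here, mid-computation

y.backward()        # gradients computed from the recorded operations
print(x.grad)       # d(3*x1 + 3*x2)/dx = [3., 3.]
```

In graph-mode TensorFlow, the equivalent values only existed inside a session run, out of reach of the debugger.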

The broader impact of TensorFlow was that machine learning became part of the software engineering skill set rather than a separate research discipline. Teams that would not have considered hiring machine learning specialists could now integrate ML capabilities into their products using the same Python skills their engineers already had.
