The iPhone 11 and the A13 Bionic chip were announced in September 2019, and the detail that stayed with me was not the cameras or the speed but the eight-core neural engine, capable of trillions of operations per second.
Apple had been building neural engine hardware into its chips since the A11 in 2017, and each generation had increased the capability. But the A13 represented a significant jump, and the applications Apple highlighted made the case more concretely than any previous announcement: the computational photography features in the camera, the real-time processing behind video stabilisation, the depth sensing and scene analysis were all running on dedicated machine learning hardware without a round trip to the cloud.
This is worth pausing on. The standard frame for AI and machine learning in 2019 was server-side: you sent data to a data centre, a large model processed it, and results came back. That architecture had advantages: you could run very large models, update them centrally, and aggregate learning across users. It also had costs: latency, privacy exposure, and dependency on network connectivity.
On-device inference changes the calculation. A model running on the A13 responds in real time because it never touches the network. The data stays on the device, which is a genuine privacy benefit rather than a marketing claim. And the models, while smaller than their server-side counterparts, are optimised for the specific tasks Apple designed them for.
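Apple's path for this is Core ML, and the shape of on-device inference is pleasingly small. A minimal sketch in Swift, where `SceneClassifier` is a hypothetical class of the kind Xcode generates from a compiled .mlmodel bundled with the app, `prediction(image:)` stands in for whatever input that model declares, and `pixelBuffer` is a camera frame you already hold:

```swift
import CoreML

// Configure Core ML so it may schedule work onto the neural engine.
let config = MLModelConfiguration()
config.computeUnits = .all

// "SceneClassifier" is a stand-in for any model compiled into the app;
// Xcode generates a typed class like this from a .mlmodel file.
let classifier = try SceneClassifier(configuration: config)

// Inference runs entirely on the device: no request leaves the phone,
// and the latency is whatever the silicon takes, not the network.
let output = try classifier.prediction(image: pixelBuffer)
print(output.classLabel)
```

The only decision the developer makes about hardware is `computeUnits`; the framework decides at runtime whether a given layer runs on the CPU, the GPU, or the neural engine.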
The camera was the obvious demonstration. Computational photography had been evolving for years, and Google's Night Sight had already shown what machine learning could do with low-light images. The iPhone 11's Night mode and Deep Fusion went the same way, fusing multiple exposures into a single frame, with Deep Fusion doing its pixel-by-pixel processing on the neural engine; the results would not have been possible with traditional image signal processing alone.
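The core trick behind these night modes is easy to state even if the production pipelines are not: capture several short exposures, align them, and merge, so that random sensor noise averages out while the signal accumulates. A toy sketch of just the merge step, ignoring alignment and per-pixel weighting entirely, using Accelerate:

```swift
import Accelerate

// Toy multi-frame merge: average N noisy exposures of the same scene.
// Random noise shrinks roughly with the square root of the frame count,
// while the underlying signal is preserved. Real pipelines also align
// frames and weight pixels; none of that is attempted here.
func mergeExposures(_ frames: [[Float]]) -> [Float] {
    guard let first = frames.first else { return [] }
    var sum = [Float](repeating: 0, count: first.count)
    for frame in frames {
        sum = vDSP.add(sum, frame)   // element-wise accumulate
    }
    return vDSP.divide(sum, Float(frames.count))
}
```

The machine learning enters where this sketch stops: deciding, pixel by pixel, which frames to trust and how to weight them.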
What interested me more was the less visible stuff. The face detection and recognition in Photos. The Siri processing that could run locally. The real-time translation features being developed. All of these were becoming possible not because cloud AI was getting better but because the hardware in the device was becoming capable enough to run inference directly.
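The Photos case is a good one because the API surface makes the locality explicit. A sketch with the Vision framework, assuming `cgImage` is an image the caller has already loaded; the request resolves on the device with no network involved:

```swift
import Vision

// On-device face detection; nothing is uploaded anywhere.
let request = VNDetectFaceRectanglesRequest { request, error in
    let faces = request.results as? [VNFaceObservation] ?? []
    // Bounding boxes come back in normalised image coordinates.
    for face in faces {
        print("face at \(face.boundingBox)")
    }
}

// "cgImage" is assumed to be a CGImage already in hand.
let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
try handler.perform([request])
```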
The neural engine in every iPhone 11 was not a feature. It was infrastructure for a new generation of applications.