Technology · 5 min read · 15 December 2016

Fake News, the Platforms, and the End of 2016

2016 closed with a question the technology industry had been avoiding: what were the platforms responsible for, and what were they not?

Misinformation · Facebook · Platforms · Election · Media

2016 closed with a question that the technology industry had largely avoided for years: what were the platforms responsible for, and what were they not? The US presidential election had been characterised by unusual amounts of misinformation circulating on Facebook and other social platforms. Some of it had been created by foreign actors; some by domestic operators who had discovered that fabricated stories could attract significant traffic. The platforms had distributed this content at scale, and the question of what they should do about it was suddenly difficult to deflect.

Facebook’s initial response in the days after the election was to deny that the platform had played a meaningful role in shaping the outcome. Mark Zuckerberg said in mid-November that the notion that fake news on Facebook had influenced the election in any way was “a pretty crazy idea”. The framing did not hold. Within weeks, Facebook had announced steps to combat fake news, including third-party fact-checking partnerships and changes to how disputed content would be flagged. The reversal was substantial, and the trajectory of the next several years was being set in those weeks.

The harder question, which the immediate fake news debate skirted, was about structural incentives. Engagement-optimised feeds tend to surface emotionally charged content, and emotionally charged content includes a disproportionate amount of misleading or false information. The algorithms that had built the platforms into the largest media distribution systems in history rewarded exactly the kind of content that spread misinformation. Fixing the problem in any deep way would mean changing those algorithms, and that would have business consequences.

The broader debate that 2016 forced into the open had been brewing for years. Were Facebook, YouTube, and Twitter publishers who chose what to amplify, or were they neutral platforms whose users chose what was distributed? The legal framework treated them more as platforms. The actual operations involved enormous amounts of editorial discretion through algorithm design. The gap between the legal framing and the practical reality had been growing as the platforms grew.

What 2016 made clear was that this gap could not be ignored indefinitely. The questions raised at the end of the year about platform responsibility, about algorithmic amplification, about cross-border information operations, and about the relationship between social platforms and democratic processes would dominate technology policy discussions for the years that followed.

The technology industry that ended 2016 was different from the one that had started it. The questions that 2017 and the years after would have to address were now more visible than they had been before, and harder to put back into the box.
