In early November 2017, legal representatives of Twitter, Facebook and Google faced questioning before Congress concerning the accusation that Russia exploited their websites to interfere with the 2016 US Presidential election.
These social media platforms acknowledged that, before and after the highly contentious election, hundreds of millions of American users may have been unwittingly exposed to, and shared, Russian “fake news” images, videos and news stories. This revelation follows in the wake of a litany of unflattering stories that have plagued social media websites in recent years.
Such bad publicity has bruised the standing of these online platforms, once vaunted as today’s manifestations of the ancient Greek agora – public fora that encouraged the exchange of political thoughts and ideas. Democracy – many argued – could flourish online. No longer would citizens have to be “passive consumer of political party propaganda, government spin or mass media news”. Rather, unique and challenging opinions could be deliberated among citizens, representatives and policy makers on networked, citizen-centred platforms. This would reflect the “public sphere” envisioned by the 20th-century German philosopher Jürgen Habermas: a domain of our social life in which public opinion could be formed out of rational public debate (Habermas, 1991). Ultimately, informed and logical discussion, Habermas (1989) argued, could lead to public agreement, representing the best of the liberal democratic tradition.
Utopian views at the advent of social media platforms held that these virtual spheres would increase mature political participation and informed discussion. They pointed to the internet’s unprecedented capacity to give people instant access to information and to rapidly connect people regardless of geographical or other boundaries.
However, Facebook’s algorithms have been hyper-engineered to ensure its users remain on the website, by essentially showing them the posts they statistically want to see and the links they statistically will click on – ultimately increasing advertising revenue for the company. Alarmingly, it was revealed that Facebook conducted social experiments on its users in 2014, manipulating algorithms and dictating what they viewed in order to distill the most potent posts that kept users on the website.
Decades of experimental research in psychology have found time and time again that people overwhelmingly seek out information that confirms their own views rather than look to be disproven. People naturally feel uncomfortable when they are confronted with information that challenges their world view or their perception of self.
Facebook makes no effort – as it is free to do as a private corporation – to counter these human tendencies. Instead, it engineers platforms that enable us to retreat into echo chambers and consensus bubbles. Rather than fostering sober debate, the system “dishes out compulsive stuff that tends to reinforce people’s biases”, as a recent Economist article outlines.
Read the following Economist article, which articulates this point more cogently than I ever could:
How can we ask – or should we even consider asking – corporations to moderate or manipulate content to improve the variety of political viewpoints in their users’ media diets? Would it be feasible – or even foolish – to let these powerful platforms become paternalistic gatekeepers, dictating what information we should access?
The Economist article questions this idea of asking a “handful of big firms to deem what is healthy for society”. Until policymakers and industry interests haggle their way to the correct mix of regulation, fact-checking mechanisms and critical thinking skills for appraising what we read online, how can we stop democracy becoming fractured by divisive rhetoric and insulating echo chambers?