Social media has been on the rise for years, and for just as long, its influence on our lives has been a hot topic of debate. Since the 2016 elections, people have become more aware of the growing influence social media has on our political environment. Yet despite this raised awareness, the platforms' control only seems to grow. Just last Thursday, October 15th, several senators announced that they would subpoena Twitter CEO Jack Dorsey on suspicion of election interference. According to them, Twitter is abusing its corporate power to silence the press and to cover up allegations of corruption against the family of presidential nominee Joe Biden. With this article, I mean to show that the platforms' influence has become too large, and that we need to find a way to make tech companies and their algorithms more human before they cause irreversible damage to our democracy.
A large part of this problem is caused by echo chambers. An echo chamber is an environment in which a person only encounters information and opinions that reflect and reinforce their own. This filtered bubble of information is unique to everyone, as it is based on your personality and your activity online. The process is subtle, but it can have a large impact on your worldview. The risk of echo chambers is that people eventually stop believing information that doesn't match their existing beliefs. In extreme cases, this even means denying scientifically proven evidence, as with flat-earthers.

Filtered information bubbles have become a much bigger problem with the rise of social media. These platforms use recommendation engines driven by algorithms: large, complex pieces of software that estimate how relevant each piece of information is to each individual user. The overall goal is to figure out what you like, so the platform can feed you more of it and keep you on the site longer. Put these things together and we end up in a situation where the various platforms constantly show us what they think we want to see. This actively reinforces polarization in society. Even Google search results are no longer standard; everyone sees different results, based on what the algorithm predicts they would like. This harms democracy, because it makes people less willing to cooperate and to see things from other points of view. Democracy works best when people appreciate diverging opinions in order to find ways to cooperate. When people only want to believe their own side of a debate, they are also more likely to accept false information as truth. The possibilities that social media platforms offer have made it easy to spread untruthful and manipulative messages rapidly and with few consequences. As it turns out, fake news spreads about six times faster than real news. Democracy is challenged by this.
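To make the feedback loop concrete, here is a deliberately simplified sketch of how a feed ranker can create a filter bubble. Everything here is hypothetical: real platform algorithms are far more complex and proprietary, and the topics, function names, and data are invented for illustration.

```python
# Toy sketch of an engagement-driven feed ranker (illustrative only;
# real recommendation systems are vastly more sophisticated).
from collections import Counter

def rank_feed(posts, user_history):
    """Rank posts by how closely their topic matches what the user
    already clicked on -- the feedback loop behind filter bubbles."""
    interests = Counter(user_history)       # topics the user engaged with
    def score(post):
        return interests[post["topic"]]     # more past clicks -> higher rank
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "topic": "politics-left"},
    {"id": 2, "topic": "politics-right"},
    {"id": 3, "topic": "sports"},
]

# A user who only ever clicked one side of the debate...
history = ["politics-left", "politics-left", "sports"]
feed = rank_feed(posts, history)
# ...sees that side ranked first, and the opposing view last,
# which in turn generates more one-sided clicks next time.
print([p["topic"] for p in feed])  # ['politics-left', 'sports', 'politics-right']
```

The key point of the sketch is the loop: what you clicked yesterday determines what you are shown today, which determines what you can click tomorrow.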
For democracy to be an efficient mechanism, every citizen needs to be well educated on the topics they vote on and able to debate those issues critically. Echo chambers, algorithms and fake information make this nearly impossible.
Another implication of algorithms is that they group people together based on shared interests. This brings us to the second problem, which is twofold: it's easy to find like-minded people, and because of that it's also easy to find a community of people who believe in abnormal things. Before the internet, people would consider their own diverging beliefs weird and unwelcome. Nowadays, it's very easy to google them and find forums or Facebook groups devoted solely to those beliefs. A striking example is the case of Armin Meiwes, a German computer technician who in 2001 killed and ate a voluntary victim whom he had met on an online cannibalism forum. Even worse, with the rise of algorithms and recommendation engines, these diverging beliefs are actively recommended to others. Even if you don't personally search for the weird things you're interested in, you've probably stumbled across a video or picture that made you say 'well, that's enough internet for today'. A big problem with algorithms is that you don't get to decide what you see on the internet. Beliefs that were long considered abnormal are becoming normalized because their adherents can find each other so easily. A prime example is the dramatization of true crime, where horrific murders are used for entertainment: the more graphic the crime, the more dramatized and sensationalized it becomes. Online communities built around diverging beliefs can lead to negative outcomes such as segregation, mistrust, and even paranoia. These effects are very visible in conspiracy theories, for example the Pizzagate theory. In 2016, people started believing that ordering a pizza from a particular shop meant ordering a trafficked person. Facebook groups were created to discuss this theory, and as the groups gained popularity, Facebook's algorithms started recommending them to random users.
The conspiracy theory kept growing and eventually even influenced the 2016 elections, when it claimed that Hillary Clinton was involved in a pedophile ring. Another example is the case of Luka Magnotta, who filmed himself killing two kittens and shared the video on Facebook. Many people who saw the video commented to express their outrage and disgust. But because the algorithm detected lots of engagement, the post was spread even further, all across the world. The same applies to videos of terrorist attacks, for example the ISIS beheading videos that surfaced around 2014. The problem with these algorithms is that most people aren't interested in seeing this type of content; it shows behaviors and beliefs that most people don't hold. Yet because such content drives engagement, it gets the chance to spread all over the world. The few people who do share these alternative beliefs find each other much more easily and are reinforced in their deviant thinking patterns.
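The Magnotta example hinges on one mechanical detail: a ranker that counts raw engagement cannot tell approval from disgust. A minimal sketch, with entirely hypothetical numbers and field names, shows why outraged comments push a post up the feed just as effectively as supportive ones:

```python
# Toy illustration: engagement counting is sentiment-blind.
# All numbers and names are invented for the sake of the example.

def engagement_score(post):
    # An angry comment counts exactly as much as a supportive one.
    return post["likes"] + post["comments"] + post["shares"]

cat_video   = {"name": "cute cat",       "likes": 900, "comments": 50,   "shares": 100}
shock_video = {"name": "shocking crime", "likes": 20,  "comments": 1500, "shares": 300}

# The shocking post "wins" the feed on outrage comments alone.
ranked = sorted([cat_video, shock_video], key=engagement_score, reverse=True)
print(ranked[0]["name"])  # shocking crime
```

Commenting "this is horrifying, take it down" is, to a score like this, indistinguishable from enthusiasm; the protest itself becomes the amplifier.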
One might wonder what the actual risk of this is. People have done weird things on the internet for a long time, and it was never seen as much of an issue. However, the line between the online and offline world is starting to blur. The Pizzagate conspiracy theory eventually culminated in a man showing up with a gun, determined to liberate the children from the basement of the pizza place. Steve Stephens livestreamed himself on Facebook while shooting a random person in the street. Joyce Hau was stabbed to death for starting a fight in the comment section of a friend's Facebook post. And the list goes on. More recently, we've seen how the Black Lives Matter movement resulted in protests and riots all over the world, organized via social media. However, these blurred lines aren't limited to crimes, riots, and murders. A less visible but equally terrifying effect is the impact on our political environment. To circle back to the beginning: Twitter CEO Jack Dorsey has been subpoenaed on suspicion of electoral interference. Senator Ted Cruz said that social media influence like this has no precedent in the history of the United States. He doesn't seem to be entirely correct, though. Earlier this year, it was revealed that Michael Bloomberg paid social media influencers to gain support for his presidential campaign. A couple of months later, YouTuber Tana Mongeau offered her subscribers free nudes if they would vote for Joe Biden. Stories like these have sparked a debate on whether social media influencers should remain politically neutral online. Perhaps the most chilling example is the Cambridge Analytica case, in which the company illegally used data from over fifty million Facebook profiles in an attempt to influence the 2016 presidential elections. All of this has resulted in campaign strategists turning to social media as one of their primary outlets to engage people.
They use the number of likes and shares to gauge how voters feel about an issue, and whether or not they should campaign on it.
To summarize: echo chambers alter our perception of reality and of the news, which prevents us from exercising democracy. The way algorithms create online communities is harmful to society, because it normalizes abnormal beliefs and contributes to the spread of harmful content. Lastly, all of this poses a real threat because the lines between the online and offline worlds are blurring. In conclusion, despite social media's popularity, it's time to recognize the hazard it poses to our society. The technology itself, however, is not the threat we face. Algorithms are great at recommending content you will like and at making funny videos go viral. The real threat is the technology's ability to bring out the worst in society. It's time to convince tech companies to change their algorithms in such a way that they remember we are human, and that they prevent us from destroying our democracies.

SecJure