“We believe these people are thieves. The big city machines are corrupt. This was a stolen election. Best pollster in Britain wrote this morning that this was clearly a stolen election, that it’s impossible to imagine that Joe Biden outran Obama in some of these states.”
“Great. Most corrupt election in history, by far. We won!!!” former President Donald Trump tweeted.
Those are just some of the many tweets Trump posted about the 2020 election. Throughout his career, Trump became notorious for outlandish posts on social media, ranging from sexist and slanderous remarks about women to attacks discrediting medical professionals.
A boom of fake news is cluttering the internet, making it hard to distinguish fact from fiction. These bits of false information are exploited by businesses, by scammers pretending to raise money for a cause and by government officials who want to spread propaganda.
The problem is not only that these people use the internet to harm or manipulate others, but that it’s effortless. Many problems are intensified by companies, governments and outside factors the average internet user cannot control.
Fake news is so widespread because readers find it more interesting than regular news. It reaches more people, penetrates further into the social network, and spreads faster.
Soroush Vosoughi, a data scientist at MIT who has studied fake news since 2013 and led the 2018 study behind those findings, published in Science, said that “It seems to be pretty clear that false information outperforms true information… This is not just because of bots; it might have something to do with human nature.”
The study has already prompted alarm from social scientists.
“We must redesign our information ecosystem in the 21st Century,” wrote a group of 16 political scientists and legal scholars in an essay also published in Science.
One essential remedy is for companies to take responsibility for the content posted on their platforms for public consumption. Simple, right? Not necessarily.
Many companies claim to take precautions to filter good content from bad, but their efforts fall short. One reason is the prevalence of bots. Social media bots automatically generate messages, advocate ideas, follow user accounts and create fake accounts to gain followers themselves. The same study investigated 3 million Twitter users with two different bot-detection algorithms and found that automated bots spread and retweeted false information at the same rate as accurate information.
Liem Namiot, a senior majoring in English at the university, offered a different perspective: “Companies should take responsibility for their content and their bot accounts, but there are just too many of them.”
There is light at the end of the tunnel. Websites such as FactCheck.org help readers separate truth from the lies spun online; FactCheck.org monitors the factual accuracy of what U.S. politicians say in TV ads, debates, speeches and other forms of media.
These sites aren’t the only safeguard, but they are one of the steps readers can take against misinformation. The best approach is to check when an article was written, who wrote it, what evidence it offers and where that evidence came from.
Question what you read, what you click on and who you give your personal information to. If you don’t, who will?