Outsmarting the bots when it comes to news on social media

Posted at 6:50 AM, Jan 25, 2022
Last updated at 8:38 AM, Jan 25, 2022

TAMPA, Fla. — Anyone who’s been consuming news for at least two or three decades knows times have changed, not only with technology but also with social media and how easily we can access content. In the last five years, we’ve started to see the role online platforms can play in spreading inaccurate information, something the News Literacy Project is trying to combat.

“Misinformation isn’t new, but a lot of people just became aware of the prevalence of misinformation and the stakes of misinformation around 2016, with revelations about the role that the Russians played in that election,” said Peter Adams, senior vice president of education at the News Literacy Project.

Media is ever-changing, and the role computers play in delivering content is something Giovanni Luca Ciampaglia, a USF professor and director of the Computational Social Dynamics Lab, has devoted his career to studying.

“Sometimes people do not realize that what they’re seeing on social media, or even on Netflix, is actually curated and is based on what an algorithm, in a certain sense, thinks about you,” Ciampaglia said.

This is a form of AI, or artificial intelligence, in which machines perform tasks that would otherwise require human intelligence.

“Every social media platform, whether it’s Facebook or YouTube or Twitter or now TikTok, has a very powerful suggestion and content algorithm,” Adams explained. “Their job, simply put, is to keep you on the platform and keep you engaged so they can serve you more ads. That’s the business model. That’s the reason these platforms are free.”

These algorithms create what Ciampaglia calls “echo chambers,” where the content in your feed is tailored to the tastes of your friends and the people you follow.

So if algorithms are simply feeding you content based on what others around you like, then, Ciampaglia says, they’re not necessarily concerned about quality. The idea is that if you already hold a similar opinion, you’re likely to agree with the content, especially when it comes to politics.

“You can spread maybe the same message to one group and the opposite message to the other group. They may both be false, but both groups will not care,” he explained, “because they conform to their existing viewpoints, or they simply find that they make more sense.”
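To make that concrete, here’s a toy sketch in Python of similarity-based feed ranking. It’s an illustration only, with invented post data and a bare-bones similarity measure, not any platform’s actual code:

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse topic-weight vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Invented profile: topics weighted by a user's past likes and shares.
user_history = Counter({"candidate_A": 8, "local_sports": 2})

# Invented candidate posts, tagged with topic weights. Note that
# nothing here records whether a post is accurate.
posts = {
    "pro-candidate-A rumor": Counter({"candidate_A": 6}),
    "fact-check of rumor":   Counter({"fact_check": 4, "candidate_A": 1}),
    "game highlights":       Counter({"local_sports": 5}),
}

# Rank purely by similarity to the user's history: the agreeable
# rumor outranks the fact-check, regardless of truth.
feed = sorted(posts, key=lambda p: cosine(user_history, posts[p]), reverse=True)
print(feed)
```

Notice that nothing in the score measures accuracy. That is Ciampaglia’s point: the same machinery that surfaces content you’ll probably like will just as readily surface falsehoods you’ll probably like.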

But social media can also be a great tool to stay informed and foster insightful discussions, so there are a few things you can do to vet sources online and make sure they’re trustworthy.

“Really evaluating pieces of information in a way that’s rigorous and honest and rational, and not emotional, is the number one thing I think people can do,” Adams said. “And if you’re not sure about something, don’t amplify it. Don’t even like it, because when you like something, you can passively share that with friends and family.”

Here’s a simple checklist when scrolling:

  1. Slow down
  2. Look for facts and supporting details, not opinions
  3. Think critically, not emotionally

“Everyone needs to be aware of the difference between a standards-based piece of information, you know, from a standards-based news outlet, and something that’s user-generated from someone you don’t know, right?” Adams advised.

Interacting with more reputable sources and articles on a platform increases the likelihood that you’ll see trustworthy content. It’s a good idea to go through the people and pages you follow and get rid of anything that may be pushing false information into your feed.

Social media sites have also started enforcing policies to vet misinformation, especially when it comes to COVID-19. For example, Facebook uses algorithms to reduce the distribution of low-quality or problematic content. The process isn’t perfect, however, and those algorithms sometimes flag accurate content along with the rest.

Many platforms also offer more in-depth explanations of how they regulate content.

“When they debunk a viral rumor or they catch a piece of misinformation, Facebook’s platform puts a warning label on the content. Twitter’s also rolled out warning labels, so in general, they’re better at labeling things like state-run media,” Adams added.

Keep in mind that these platforms are businesses built on user engagement, not the websites of credible news outlets themselves.

Once you’re on an actual news site, experts say another good sign of credibility is whether the outlet issues public corrections to its articles. No one is perfect, but issuing corrections shows a commitment to getting it right.

If you’re not sure, there are some tools online that you can use to vet articles.

For Twitter, you can use Botometer, which checks the activity of a Twitter account and gives it a score; higher scores mean more bot-like activity.
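Botometer also offers a programmatic interface. Here’s a minimal sketch using the botometer Python package from the same research group, assuming you’ve registered for the required API keys; the credentials and the account handle below are placeholders:

```python
import botometer  # pip install botometer

# Placeholder credentials: a RapidAPI key for the Botometer API and
# Twitter app credentials, both of which you must register for yourself.
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_TWITTER_CONSUMER_KEY",
    "consumer_secret": "YOUR_TWITTER_CONSUMER_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

# "@example_handle" is a hypothetical account name.
result = bom.check_account("@example_handle")

# Higher display scores mean more bot-like activity, as noted above.
print(result["display_scores"])
```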

Grin calculates a user’s “credibility score” for Instagram, TikTok, and YouTube.

Facebook also has a number of features on its platform for additional information on flagged content. You can read its tips for spotting false information here.