By Scott Bolthouse | The Huron Hub
ScottBolthouse@HuronHub.com
Published April 21, 2020
Facebook announced measures last week that it is taking to prevent and limit the spread of misinformation posted to its social media platform during the COVID-19 pandemic.
“Stopping the spread of misinformation and harmful content about COVID-19 on our apps is also critically important. That’s why we work with over 60 fact-checking organizations that review and rate content in more than 50 languages around the world. In the past month, we’ve continued to grow our program to add more partners and languages,” the social media company said in a statement.
“Once a piece of content is rated false by fact-checkers, we reduce its distribution and show warning labels with more context. Based on one fact-check, we’re able to kick off similarity detection methods that identify duplicates of debunked stories,” Facebook said.
Facebook said that during the month of March, it displayed warnings on about 40 million COVID-19-related posts on the platform, based on about 4,000 articles from its independent fact-checking partners.
“When people saw those warning labels, 95% of the time they did not go on to view the original content,” Facebook said.
To date, Facebook says it has removed hundreds of thousands of pieces of misinformation that could lead to imminent physical harm.
“Examples of misinformation we’ve removed include harmful claims like drinking bleach cures the virus and theories like physical distancing is ineffective in preventing the disease from spreading.”
Additionally, Facebook is informing users who interacted with fraudulent COVID-19 claims and has added a new section to its COVID-19 Information Center called “Get the Facts,” which includes fact-checked articles from its partners that debunk misinformation about the coronavirus.