As the coronavirus continues to spread around the world, misinformation about the virus is spreading even faster on social media. Houston's mayor and police chief called a news conference specifically to debunk social media messages claiming the city was about to go on lockdown. A Texas man was even arrested for falsely claiming he tested positive for the virus.
Platforms like Facebook, Twitter and YouTube recently published a joint pledge to fight "coronavirus-related fraud and misinformation." But that has proven easier pledged than done. Last week, Facebook had to apologize for flagging legitimate news articles as spam, blaming the errors on automated moderation software. And with most of the platforms' employees out of the office due to the coronavirus outbreak, problems like this are likely to continue. "They're relying on artificial intelligence and algorithms that just are incapable of discerning humor from fact, or the difference between a rumor and something from an authoritative source," says Dr. Nathalie Marechal, senior research analyst at Ranking Digital Rights.
Dr. Marechal believes understaffing is not the only reason Facebook is having trouble policing content. She notes platforms like Facebook are built to spread content, not stop it. "It's not optimized to fact-check content, it's not optimized to ensure that content is accurate, it's really designed to help misinformation proliferate," says Dr. Marechal. "I think they are trying to identify and slow down inaccurate content about the coronavirus, but their system is just not designed to do that."
The bottom line, according to Dr. Marechal, is that users should be skeptical of any content they read on social media, especially regarding the coronavirus. "This is a situation that is evolving very quickly," she says. "And things that we think are true one day we may very well find out a few days later were actually not quite accurate."