Experts in public health praised Twitter’s efforts to tamp down covid misinformation. In a 2021 advisory report to technology platforms, U.S. Surgeon General Vivek H. Murthy cited Twitter’s policy as an example of how tech companies should go about combating misinformation.
“Health misinformation is a serious threat to public health,” Murthy wrote. “It can cause confusion, sow mistrust, harm people’s health, and undermine public health efforts. Limiting the spread of health misinformation is a moral and civic imperative that will require a whole-of-society effort.”
However, Twitter has also struggled to police misinformation accurately and recently began labeling some factual information about covid as misinformation and banning scientists and researchers who attempted to warn the public of the long-term harm of covid on the body. As of last weekend, many tweets promoting anti-vaccine content and covid misinformation remained on the platform.
“That is a real danger of setting yourself up with the task of deciding what is true and what is false,” Emily Dreyfuss, co-author of “Meme Wars: The Untold Story of the Online Battles Upending Democracy in America,” said of Twitter’s fumbles surrounding covid misinformation.
But she said that was all the more reason to improve the process and policies, not scrap them altogether.
“During the pandemic, social media companies finally realized misinformation is a life-or-death issue because medical misinformation about covid had such dire consequences it could not be ignored,” she said. “Musk getting rid of these policies is backtracking on years and years of painfully won lessons on how to make the internet safe and not harmful.”
“I’m doing internal medicine, and I see a lot of patients in primary care clinic,” said Max Jordan Nguemeni, a resident at Brigham and Women’s Hospital in Boston. “A lot of what I do when I offer vaccines is combating disinformation. The spread of misinformation online on platforms people rely on for news, like Twitter, worries me, especially when I think about my patients who are more vulnerable, older or not English-speaking.”
Jon Shaffer, a health sociologist and postdoctoral fellow at the Johns Hopkins Berman Institute of Bioethics, said he worried that, when combined with Musk’s plan to allow Twitter users to purchase a blue verified check mark for $8 a month, the abandonment of the misinformation policy will be especially dangerous.
“People with the purchased blue check marks will certainly sell snake oil and promote baseless ideas for their own personal and political profit, and the result will be that poor people will continue to die from covid,” Shaffer said.
Lucky Tran, director of science communication at Columbia University, said scrapping the covid misinformation policy will contribute to public confusion.
“We’re going through an infodemic alongside a pandemic,” Tran said. “What that means is people are exposed to so much information that they don’t know what’s true or what’s not. They don’t know what to do to protect their health and the health of people around them. This change by Musk is going to make that problem even worse.”
The move comes as Musk appears to be shifting more of the responsibility for policing misinformation to users through the company’s Birdwatch program, which allows Twitter users to rate and add corrections to tweets. Lately, however, as Birdwatch has scaled to more users, incorrect information about covid has been added to tweets simply because a mass of users upvoted it. This is dangerous, Dreyfuss said.
“Musk is scrapping a misinformation policy that was imperfect and replacing it with a new system that’s much more easily hacked and gamed,” she said. “What he’s doing with this policy is washing his hands of Twitter’s responsibility of determining fact or fiction and giving it over to the users of Twitter, which we know is not going to be an effective strategy at all. They will make true whatever they want to make true.”
Yoel Roth, Twitter’s former head of trust and safety, said Musk’s decision to stop policing covid misinformation was “bad and damaging” and probably not “tenable going forward.” “You simply cannot do that if you are operating what you want to be a commercially viable consumer service,” he said.
Musk himself has spread covid misinformation. Early in 2020, he claimed that coronavirus cases would be “close to zero” by that April. He also told SpaceX workers in March 2020, as the world was just beginning to shut down during the pandemic, that they were more likely to die in a car crash than of covid. That June, he reopened the Tesla plant in Fremont, Calif., against county health and safety orders but promised employees they could stay home if they felt ill and would not be penalized. Employees with covid who did stay home, however, were promptly fired.
Musk also called virus-related restrictions “fascist” on a 2020 Tesla earnings call. During a podcast appearance in September 2020, Musk said he would not get vaccinated and downplayed the virus’s death toll. “Everybody dies,” he said.
But experts say Musk’s decision will lead to more deaths. “It’s a huge step backwards in a pandemic that has killed a million Americans and millions more worldwide,” Shaffer said. “It’s certain to get many more people killed from covid than otherwise would.”