Significance

The online spread of misinformation has prompted debate about how social media platforms should police their content. A tacit assumption has been that censorship, fact-checking, and education are the only tools to fight misinformation. However, even well-intentioned censors may be biased, and fact-checking at the speed and scale of today's platforms is often impractical. We ask the policy-relevant question: can one improve the quality of information shared in networks without deciding what is true and false? We show that caps on either the number of times a message can be forwarded or the number of recipients to whom it can be forwarded increase the relative number of true versus false messages circulating in a network, regardless of whether messages are accidentally or deliberately distorted.
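The abstract does not spell out the underlying model, but the intuition behind a forwarding cap can be illustrated with a minimal sketch. The toy simulation below is an assumption on our part, not the paper's model: a true message spreads as a tree-shaped cascade, each forwarding hop independently corrupts a true copy into a false one with some probability, and false copies stay false. Capping the number of hops (`max_hops`) then raises the fraction of circulating copies that remain true; all parameter names and values are illustrative only.

```python
import random


def simulate_cascade(max_hops, branching, p_distort, n_seeds=200, seed=0):
    """Spread copies of an initially true message through a tree-shaped cascade.

    Each forwarding hop corrupts a true copy into a false one with
    probability ``p_distort``; false copies stay false. Returns the fraction
    of all forwarded copies (across every hop) that are still true.
    """
    rng = random.Random(seed)
    true_copies = false_copies = 0
    for _ in range(n_seeds):
        # Copies held at the current hop, each flagged True (accurate) or False.
        frontier = [True]
        for _ in range(max_hops):
            next_frontier = []
            for is_true in frontier:
                for _ in range(branching):
                    # A true copy survives forwarding with probability 1 - p_distort.
                    next_frontier.append(is_true and rng.random() > p_distort)
            true_copies += sum(next_frontier)
            false_copies += len(next_frontier) - sum(next_frontier)
            frontier = next_frontier
    return true_copies / (true_copies + false_copies)


if __name__ == "__main__":
    for cap in (2, 4, 8):
        frac = simulate_cascade(max_hops=cap, branching=3, p_distort=0.15)
        print(f"forwarding cap = {cap} hops -> {frac:.2%} of circulating copies are true")
```

In this sketch, tightening the hop cap weights circulation toward early, less-distorted generations of the message, so the true share rises; lowering `branching` plays the analogous role for a cap on the number of recipients per forward. This is only meant to make the claimed direction of the effect concrete, not to reproduce the paper's analysis.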