Mentions of a “rigged” or “stolen” election on Nov. 3. The first spike reflects misinformation themes claiming that Democrats were trying to steal the election; the second came from the #StopTheSteal hashtag.


This is where much of the concern over the coming days, weeks, and even years should be. The data shows a growing longitudinal trend in the conversation around a “rigged” or “stolen” election. For instance, on Wednesday, a “Stop The Steal” group on Facebook set up by The Liberty Lab, a firm “that offers digital services to various conservative clients,” was promoting a wide range of baseless election fraud theories. Despite the claims being false, the group was adding about 100 members a second. [Update: Facebook reportedly shut the group down on Thursday, after it gained 300,000 members in two days.] Given the increasing volume of this theme, we can expect online efforts to provoke protracted post-election instability.

The misinformation data points to a need for continued policy adjustment by the platform companies and vigilance by those in government, especially as this activity tips toward urging physical protest and violence. The platform companies must finally come to grips with something they have long avoided: how to handle prominent “superspreader” accounts that they know are repeat violators but whose punishment they fear will provoke a backlash. As Alex Stamos, formerly Facebook’s chief security officer, put it, “You have the same accounts violating the rules over and over again that don’t get punished.”

It also points to the desperate need for the U.S. to improve its long-term resilience to disinformation. This goes beyond changes in software and legal code. Certainly, the platforms must recalibrate the recommendation algorithms that continue to drive viral misinformation. So too, the government should update how it oversees and even regulates the industry and its role in election media. But there is also a need to better equip the targets of such campaigns: us. Earlier this year, Carnegie examined 85 policy reports from 51 organizations on what to do about the problem of online misinformation. By far the most frequently recommended action item across all the groups (53%) was to raise U.S. digital literacy. Unlike nations such as Estonia or Ukraine, which experienced online threats and then built up resilience to them, the U.S. still has no true national effort on the problem. Our school systems and citizens are largely on their own in learning how to discern truth from falsehood online. Both government and the nonprofit world must move forward on building this needed new set of cyber citizenship skills.

This is hardly just an issue for kids. Research shows that Baby Boomers spread misinformation at seven times the rate of those under 30. And too many in the media still intentionally or unintentionally help spread disinformation, elevating and enabling it. Too many ignore the lessons learned and best practices developed in recent years, while analytic tools and data-investigation training (such as those provided by groups like First Draft News) remain insufficiently deployed. Part of this is due to the partisan evolution of the media business. And part is due to a continued divide between journalists who cover political campaigns and those who work the cybersecurity-and-disinformation beat, even though the two beats are now clearly melded.

There is much still to be learned about an election season that remains very much ongoing, but the overall data on misinformation is a stark reminder that the problem of hacking social media remains with us. It is not just a constantly mutating threat to authentic discourse and informed debate in our politics, but an “infodemic” that makes battling public health threats even more difficult. Each of the tactics and trends we saw targeting votes is already being readied to target vaccines.

The battle for the presidency may soon be done, but the online war will go on.
