Can the Post-Trump Internet Ever Be Fixed?

By Andreas Rentz/Getty Images.



Much like Internet culture itself, many products of the Kremlin’s information war were deliberately lo-fi, even laughable. One Facebook post created by the Internet Research Agency troll farm contained an image of Jesus and Satan arm wrestling. (Caption: “Hillary is a Satan, and her crimes and lies had proved just how evil she is.”) Another, ostensibly shared by the Facebook group LGBT United, featured a Bernie Sanders coloring book “full of very attractive doodles of Bernie Sanders in muscle poses.” Other accounts promoted things like L.G.B.T.Q.-positive sex toys or patriotic wall art. But as two new, Senate-commissioned reports on Russian election interference confirm, the I.R.A.’s mission was hardly amateurish. The memes were goofy, but the overall effort—executed by more than 1,000 employees working in 12-hour shifts out of a building in St. Petersburg—reflected the grim efficiency of a well-organized military-intelligence operation.

There are a few key takeaways from these reports. As we now know, the Russian disinformation campaign was far more sweeping than originally thought, spanning multiple tech platforms including Twitter, Google, Medium, Reddit, Tumblr, and Pinterest. It also evolved strategically. As soon as the media began reporting on Russian activity on Facebook, for instance, the I.R.A. shifted its efforts to Instagram (“something that Facebook executives appear to have avoided mentioning in congressional testimony,” one of the reports notes).

These operations were coordinated with devious specificity. According to one of the reports prepared for the Senate Intelligence Committee, Russian operatives targeted black audiences, in particular, as part of a broader effort to suppress the minority vote in 2016. A study from Oxford University’s Computational Propaganda Project and network-analysis firm Graphika showed that the I.R.A. tried to get black voters to either follow incorrect voting processes in 2016 or boycott elections altogether, seizing on topics like Black Lives Matter to amplify existing divisions and spread disillusionment with the political process.

Another takeaway is that Silicon Valley companies were far less forthcoming than they should have been about how thoroughly their platforms were compromised. When tech executives first appeared on Capitol Hill last year to testify about Russian election interference, they largely downplayed the impact of the I.R.A., and were careful to stress that divisive messaging, memes, and other posts originating in Russia made up just a small proportion of the total activity on their platforms. That may be true as a statistical matter. Yet the psychological impact—both on voters before the election, and on public trust in the political system afterward—is immeasurable.

More disturbing is what these reports suggest about our ability to combat similar disinformation campaigns in the future. Tech companies have gotten plenty of flak, and deservedly so, for failing to grasp the extent of Russian election interference, not taking steps to counter it, and later dissembling about what actually happened. Yet even after all these autopsy reports on the 2016 election, we aren’t much closer to understanding how to patch the vulnerabilities that foreign actors so diligently exploited.


As we’ve learned, human beings just aren’t very good at distinguishing between good-faith political activity and propaganda. Countless people joined partisan groups on Facebook, or shared viral posts on Twitter, that were created in Russia for American consumption, with no comprehension that they were being manipulated. In May 2016, for instance, Russian operatives masterminded both sides of a protest outside the Islamic Da’wah Center of Houston. (One group had gathered at the behest of the “Heart of Texas” Facebook group for a “Stop Islamification of Texas” rally, while the other was allegedly organized by “United Muslims of America.”) The Russian effort was so effective, in part, because operatives learned to mimic the sort of memetic political discourse that Americans were already producing themselves—and figured out how to magnify it.

Facebook has since thrown resources at fixing the problems highlighted in the latest Senate reports, employing fact-checking services and disclosing targeting and purchaser information alongside paid political ads on its platform. But the structure of social-media networks themselves remains susceptible to Russian tactics—and guardrails like third-party fact-checking, meant to keep “fake news” posts from going viral, haven’t really worked. In reality, some of the most successful pieces of online propaganda are organic posts that go viral—and are therefore nearly indistinguishable from any other piece of content.

How to combat these insidious forces is unclear. Americans have never received any kind of civic education in identifying propaganda—there is no “Internet literacy” program taught in elementary schools. In the past, the beneficiaries of our ignorance were advertisers: consumers have always had to do their due diligence when interacting with marketers, whether online or off. But at least in those cases, there are recognizable market mechanisms for vetting credibility. (An online company that sells broken air conditioners, for instance, will struggle to survive in a marketplace where a quick Google search reveals thousands of customer complaints.) Two years after the 2016 cyber-attack was made public, there is no similar method for evaluating other kinds of information online.

It’s true that savvier social-media users may avoid being targeted by foreign-influence campaigns. But anyone who conducts politics or gets information online must recognize that no digital platform is truly safe. Already, Americans have conflicted feelings about privacy, trust, and safety where social-media platforms are concerned: according to a Pew Research survey conducted in March, only 5 percent of those who use social media trust the information they see “a lot.” Instead, we trust information that aligns with our political views—the precise psychological bias that the Russians exploited so ruthlessly.

There is still a role for big tech to play in repairing the damage. Silicon Valley can help address these issues by letting lawmakers and experts look under the hood, and by being more transparent about how its algorithms work. Part of the reason social-media platforms continue to be exploited is the “opacity of algorithms” that drive the virality of posts like those from the I.R.A. Tech companies have their reasons for not wanting a new regulatory regime, of course. But in the absence of more stringent controls, the trust that underpins many of our interactions online will continue to deteriorate. Already, the sense of community that animated the early Internet seems fundamentally broken.


