First, Principles

In last week’s Hill hearings with executives from Facebook and Twitter, lawmakers seemed to focus on specific, concrete instances where problems had been reported or uncovered—Russian interference, anti-conservative censoring, encouraging opioid sales. This whack-a-mole style is good for publicity, but if trust in these companies, and peace of mind for their users, is to be re-established, efforts toward regulation will have to focus much more on principles and hypotheticals than on individual, undesirable cases.

Drew Margolin, professor of communication, Cornell University

Let’s start with the basic problem. In recent years it has become widely known that Facebook and Twitter moderate a substantial portion of what we might call “important” speech in the American public sphere. Hundreds of millions of Americans use these platforms to discuss and share information about topics that, in our concept of democracy, feed the collective knowledge and opinion that inform self-governance. Facebook and Twitter moderate this speech in the traditional way, by removing certain content, such as pornography, but also in a novel and more complex way, by influencing the pace at which messages can gain attention and be shared with others. This “micro-moderation” is essential to their business models, but it is also impossible for a user to detect.

The importance of the influence these companies wield through this moderation, the difficulty users have in identifying it, and their potential monopoly power due to network effects together mean that the government is on the hook for managing the situation. Whether through regulation or some other means, the government and these companies have to find a way to show that these moderation judgments are acceptable—they do the right thing often enough—and transparent—they are sufficiently understandable to be anticipated in advance and evaluated after the fact.

Establishing this acceptability and transparency requires these companies to explain and illustrate their priorities: which kinds of speech are going to be privileged and which will be limited? Unfortunately, the record of well-known past controversies that senators and House members grilled the executives about is not particularly helpful here. These can be (and last week were) explained away as mistakes or as misguided, but now amended, efforts. That is, they might illustrate what the company did, but not what the company would do. The real question is not “what happened” but “what if.”

“What if” is part of any policy conversation, but Facebook and Twitter are particularly vulnerable to the fallout from nasty “what ifs” coming true. With their enormous, well-connected user bases, they are essentially giant crowdsourcing platforms for discovering hard, controversial cases. At the same time, their use of micro-moderation deprives them of the easy bright lines that can be used to justify coarse judgments.

Consider the case of potential overlap between political speech, a citizen expressing a political opinion with the goal of convincing others, and threatening speech, a statement that makes an individual feel that a mob is being rallied against them. The speaker claims the right to make the statement, and the threatened individual claims the need to have it suppressed so that it does not incite others. This kind of conflict is new, and it is at the center of many offline controversies (such as on university campuses). But in most contexts, the moderator can count on having a coarse set of options – permit or don’t permit – to apply to one particular incident. That incident will have a particular form—the speech is primarily political or mostly devoid of political content, for example—that helps to lean it one way or the other. This bit of lean, combined with coarse, binary options, enables a decision to emerge.

Social media companies do not have these luxuries. Among the millions of posts created each day, there are likely to be thousands that sit right on the borderline between the two competing claims. Moreover, the moderation decision is not binary. If an individual makes a political speech that is somewhat threatening, but not so much as to deserve outright censorship, should the company slow its diffusion for the sake of the potential victims? More specifically, if the company has an algorithm that helps make certain posts viral, artificially promoting their diffusion, should it deny the threatening message that algorithm’s help? If so, isn’t the company putting a thumb on the scale of political decision-making, creating, for example, a bias against one party? If not, isn’t it creating a hostile environment?

These are the sorts of discussions that Congress should be having with these executives: probing to understand how they would handle specific situations, all of which are likely to arise, or in fact have likely already arisen, thousands of times, and then evaluating the rationales they provide for these choices. This information could then be used to draw up policies. For example, policies might tip the scales strongly toward certain kinds of moderation decisions, essentially compelling the companies to make particular judgments in most cases while giving them the cover of complying with the law. Alternatively, lawmakers could resolve to stay out of this area, estimating that the unintended consequences of regulation would be too great. But in either case, they’d be basing the decision on an understanding of what Facebook and Twitter are doing all the time, rather than on what they did one time.

Drew Margolin is professor of communication at Cornell University. He studies online communications and the role of accountability, credibility, and legitimacy in social networks.