The Trump-Twitter War Shows That Section 230 Can Work Beautifully - Slate Magazine

Photo illustration by Slate: the Twitter logo, a Trump tweet flagged for violating Twitter’s rules, and Donald Trump. Photo by Win McNamee/Getty Images.

The funniest part of President Donald Trump’s remarks to reporters Thursday following his newly issued executive order on social media was this: “I think we shut [Twitter] down, as far as I’m concerned, but I’d have to go through a legal process,” Trump said. “If it were able to be legally shut down, I would do it.”

Seriously? Who on earth believes that Donald J. Trump could make himself live another week in the White House—much less serve another term—without his daily dose of Twitter psychodrama?

The president’s expressed wish to shutter Twitter is properly interpreted as an empty threat, but is his newly signed executive order equally empty? Trump was triggered earlier this week by Twitter’s moves to tag a few of his factually false tweets with informational links. Now that Twitter has tagged another Trump tweet (this one about shooting looters) as violating its “rules about glorifying violence,” we can expect an even more heated response. Twitter didn’t remove the president’s “looting” tweet, however—the company chose instead to hide it behind a warning explaining that “Twitter has determined that it may be in the public’s interest for the Tweet to remain accessible.”

His response to Twitter’s latest move was to trumpet: “Repeal Section 230!!!” (because one exclamation point, like one french fry or one scoop of ice cream, is never enough).

Section 230, the president’s proxy for his dislike of being fact-checked or otherwise challenged, includes a subsection that can fairly be described in these words of U.S. Naval Academy professor Jeff Kosseff: “the twenty-six words that created the internet.” (Kosseff used that very phrase as the title of his book on Section 230, which was published last year.) It’s not that the law hasn’t had its problems, as Kosseff himself underscored in a Slate article in February. But he points out in that same article, “Congress passed Section 230 in 1996 for two reasons: to foster the growth of internet-based businesses and to allow platforms to develop their own moderation practices without becoming liable for every word that their users post.”

In other words, Section 230 aimed to make it possible for companies like Twitter and Facebook to remove content—for almost any reason—if the companies believed that removing the content made their forums better or protected users more.

That said, lots of the president’s critics are upset, too. On the one hand, they see no evidence that Twitter’s recent actions will remedy the president’s compulsive tweeting of falsehoods and innuendo. On the other, they believe that Twitter’s recent increase in content flagging and warnings is “too little, too late.” They’re also concerned, not unreasonably, that the president’s increasing agitation will lead to increasingly unpredictable and dangerous decision-making on his part.

I get it. I have similar worries. But speaking as a free speech guy, I can’t help thinking that Twitter’s decision to flag or hide falsehoods, misinformation, and otherwise socially corrosive speech is exactly what Section 230 was designed to enable Twitter to do. We may find fault with smaller aspects of Twitter’s choices—maybe the company should have taken similar action on his possibly tortious tweets promoting a conspiracy theory about MSNBC host Joe Scarborough, for example—but the important thing in my view is that Twitter didn’t choose to remove Trump’s problematic tweets. Removal was within Twitter’s prerogatives under Section 230, but rather than suppress the president’s rotten tweets (and to some extent obscure his misbehavior), the company opted to add more context instead.

In a nutshell, despite Trump’s complaints that Twitter is guilty of censorship, Twitter didn’t censor his tweets. I think that’s the right result. Not because the tweets don’t deserve to be censored—they clearly do—but because censoring a sitting president is a bigger deal than censoring an ordinary user, not least because it might help that president obscure or escape accountability for what he says. In addition, one of the great ironies of Trump’s call for repealing Section 230 is that a Twitter without Section 230’s protections from liability for what its users post likely would have felt compelled to censor him entirely.

It’s fair to say that the president doesn’t really have a grasp of what Section 230 does or how it has actually enabled him to reach his base in the disintermediated way he finds so addictive. But it’s also fair to say that plenty of smarter people get that law wrong, too. I have come to believe that Section 230 is like the rule against perpetuities: It’s daunting to explain to a layman, and—to make things even worse—there are boatloads of lawyers who don’t understand it, either. (In the 1981 noir classic Body Heat, William Hurt plays a lawyer who seems not to understand the rule.)

But Section 230 isn’t quite so complicated. Prior to 230’s passage as part of the 1996 Telecom Act, the American legal system tended to focus on two paradigms for understanding communications media in the modern world: traditional press (including broadcasting) and common carriage. The traditional press (the kind of “press” the Framers were thinking of when they wrote the Bill of Rights) benefited from a great deal of freedom under the First Amendment but also carried a potential risk from claims like defamation, because traditionally the publishers and editors of a publication had a duty to get their facts right. Arguably the most important First Amendment case is New York Times v. Sullivan (1964), in which the Supreme Court determined that the First Amendment has to be understood as allowing publications to get their facts wrong about government officials sometimes, provided they weren’t doing so intentionally or recklessly.

Also fitting this first model was broadcasting. Like the traditional press, broadcasting has a lot of First Amendment protections, but broadcasters are limited by a government-based regulatory framework via the Federal Communications Commission. When it came to issues like defamation, broadcasters could be held responsible for what other people said on their services, too.

The second model was common carriage—basically, a service provider (like Verizon or AT&T) isn’t legally liable for defamation or other problematic content so long as the service in question (e.g., mobile telephone service) doesn’t discriminate by content. Those services have to adhere to a kind of “neutrality” as to users’ telephone content. The common carriage model is quite useful, and in its appropriate context is something that also plays an important role in freedom of expression—the telephone network operating on common carriage principles is one of the “technologies of freedom” celebrated in Ithiel de Sola Pool’s classic and prophetic 1984 book.

Lawyers who aren’t specialists in internet law (including Sens. Ted Cruz and Josh Hawley) have argued that Section 230’s protections should be conditioned on whether platforms are neutral as to content or, alternatively, on whether their moderation rules are applied consistently. This is a common theme among Republican critics of the companies. They’re assuming that Section 230 is supposed to operate as a kind of common carrier system imposed upon the internet, requiring that Twitter and Facebook and other companies be neutral as a condition of being free from liability. Still other nonspecialists, like my friend the TV writer David Simon, have argued that the internet companies should act more like publishers and certainly do more to filter and/or remove terrible content. This theme of criticism is more common among Democrats.

But the choice between “traditional press” and “common carrier” models is a false dichotomy. For more than half a century, our First Amendment jurisprudence has recognized a third model, which might best be characterized as the bookstore/newsstand model. Rooted in Smith v. California (1959) and applied to computer networks in Cubby v. CompuServe (1991), this model recognizes that bookstores and newsstands (and, by the way, libraries) are themselves important institutions for First Amendment purposes. Under this model, we don’t insist that bookstore owners, newsstand operators, or library workers take legal responsibility for everything they carry, but we also don’t insist that they carry everything. They’re neither publishers nor common carriers. When a state court judge misinterpreted the facts and the law in a 1995 case centering on the then-popular online service Prodigy (Stratton Oakmont v. Prodigy), this model of First Amendment protection seemed to be slipping away from online services, which responded by pushing for passage of what eventually became Section 230 of the Communications Decency Act. Subsection (c)(1) of that section is precisely Kosseff’s “twenty-six words”: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

This language is what made Twitter and Facebook, as well as other services like Instagram and YouTube, possible. And it’s also the target of the executive order Trump signed, although you have to wade through a lot of posturing language to get to the heart of that order. In fairness to the president, lots of executive orders, like lots of legislation, include clouds of precatory language (“precatory” is, essentially, a legal term for “wishful thinking”) that lacks any actual legal force.

The parts of the order that do aim to have some legal force focus first not on Section 230(c)(1) (the “twenty-six words”) but on subsection (a)(3), which includes a big chunk of precatory language about promoting “true diversity of discourse,” and on subsection (c)(2), which protects providers that act in “good faith” to restrict access to “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” content. Subsection (c)(2) was crafted to empower services to create filtering software that, for example, prevents minors from seeing inappropriate content (censorware marketed to families was momentarily a big thing in the 1990s), but for the most part it hasn’t been central to how the services operate. That said, the executive order wants to export the “good faith” language from (c)(2) into the “twenty-six words” on the theory that if the services limit access to “objectionable” content in ways that are “non-neutral” or “inconsistent” or “pretextual,” they’re not acting in “good faith” to promote “true diversity of discourse.”

If this sounds like a confusing legal word salad, that’s because it is. What you see is the Trump administration cherry-picking language it approves of in one part of Section 230, crafted for a different purpose, and then trying to import it into the parts it doesn’t like. Not only is this inconsistent with how the statutory language has been interpreted before now, but it’s also beyond the scope of what the president can do in an executive order. (Basically, a president can’t revise the established meaning of legislation—that’s Congress’ job, assisted by the courts.) The other provisions of the executive order, saber-rattling invocations of the FCC and the National Telecommunications and Information Administration and the Federal Trade Commission and the attorney general, are similarly ungrounded in any authority the president has, at least in theory. (Well, OK, he probably can order the attorney general around.) But don’t take my word for it. The reaction of Kate Klonick, who teaches internet law at St. John’s University, is typical of legal practitioners and scholars who work in internet and constitutional law: “The most obvious thing I would say about this order is that it’s not enforceable,” she told Recode about an earlier draft of the order, adding that “it’s kind of a piece of political theater.”

I think it’s more than that, though—I think Trump is treating the common misinterpretations of Section 230 as a kind of security hole in the legal system that he can hack. One reason for that weakness has been the tech companies’ general unwillingness, before now, to engage in the kind of content moderation that Section 230 was designed to allow. Their reluctance was understandable—frequent interventions in content questions give rise to the expectation that the services will intervene more frequently and consistently. But doing comprehensive content moderation at Twitter’s scale—much less Facebook’s—is hard, and doing it consistently, I maintain, is impossible. Giving the impression of “neutrality” keeps expectations constrained (and it’s also a lot easier to do with something approaching consistency).

To put it bluntly, I think what happened in the earliest days of Twitter and Facebook was a kind of cognitive dissonance based on varying degrees of misunderstanding about (in particular) the Section 230 framework that allowed social media to grow in the United States. Basically, the platforms were saying (ineptly) that they weren’t, generally speaking, the editors or gatekeepers of user-generated content. (Nor did we want them to be.) They were disavowing the role of content police. But they didn’t say they were never going to intervene—their terms-of-service provisions, even at their most libertarian, reserved the right to make some post hoc editorial interventions (certainly in areas like child pornography or terroristic threats, where there is an international consensus about illegal speech). But this got interpreted as the platforms’ claiming, somehow, that they never applied judgment about content. Yet of course they always did. This was compounded by the fact that some of the companies’ own lawyers didn’t fully understand that Section 230 was meant to allow content curation without incurring liability. As the platforms expanded internationally, the protections of Section 230 were frequently inapplicable, and it was easier to default to “we’re just the platform” talk. All this added up, in my view, to a range of tactical and strategic mistakes in how the platforms messaged about these issues.

It’s safe to say that Twitter’s latest interventions regarding Trump’s tweets are sending a different message. That message won’t be good news to a significant proportion of the service’s pro-Trump critics. For the rest of us, it represents early moves in the direction of trying to get content moderation right in a way that doesn’t weaken Section 230 but strengthens it.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
