Defending crucial protections for Internet platforms


EVAN ENGSTROM AND JESSE BLUMENTHAL on August 04, 2017

Over the last two decades, the Internet has changed dramatically. We have gone from a world of mass-mailed AOL CDs to widespread access to Wi-Fi. In 1996, many of the online platforms we take for granted today did not exist. Facebook hadn’t yet been created in a dorm room. The dinner party that would spur the creation of YouTube hadn’t yet been hosted. The Internet was a very different place.
 
In the 1990s, a series of policy choices allowed the Internet to develop from a cluster of passive websites to the vibrant ecosystem of interactive content and startup activity we know today. Adam Thierer and others have argued persuasively that three inflection points firmly established this policy framework. First, the Supreme Court recognized that the First Amendment applied to the Internet in Reno v. ACLU. Second, the Clinton Administration released a Framework for Global Electronic Commerce stating, “[T]he private sector should lead [and] the Internet should develop as a market driven arena not a regulated industry.”
The third, and perhaps most important yet often ignored, catalyst for the Internet’s incredible growth is Section 230 of the Communications Decency Act, better known as CDA 230. Recognizing that the Internet provided an unrivaled source of varied and valuable information, entertainment, and services, Congress passed CDA 230 to promote the Internet’s continued development as a communications medium by granting limited immunity to the “providers of interactive computer services.”

As the Electronic Frontier Foundation has explained, “Unlike publications like newspapers that are accountable for the content they print, online services would be relieved of this liability.” There are two purposes to this provision: to keep the information flowing and to let platforms set their own rules.

What this means in practice is that Yelp, for example, cannot be held liable for a user’s negative—even libelous—review of a restaurant. The restaurant can still take legal action against the user, but it cannot try to penalize the platform, which cannot reasonably be expected to review each of its users’ posts. This enables Yelp to post tens of millions of reviews without having to run each one by a lawyer, allowing a radically democratized medium of speech to operate at scale. Yelp is still responsible for its own actions, but not for every user’s behavior. Importantly, CDA 230 also allows platforms to engage in Good Samaritan moderation without becoming liable for someone else’s content.

This liability protection also provides Americans with significant cultural and economic benefits. David Post has argued that “No other sentence in the U.S. Code...has been responsible for the creation of more value than” CDA 230. In fact, a recent study from NERA Economic Consulting found that weakening intermediary liability protections could risk up to 425,000 jobs and reduce GDP by as much as $44 billion annually.

This type of defense of digital free speech is not universal. Russia and China, for example, continue to stifle speech online. Even more liberal countries such as Canada and Germany attempt to place severe restrictions on free speech. Therefore, it is no surprise that the Internet industry, and startups in particular, have flourished in the United States, where companies know their innovations will be protected.

Still, the Internet can be an ugly place. Individuals defraud each other, libel each other, and commit many horrendous crimes. The question we face is how to protect people without stifling innovation and free speech.

Today, policymakers looking for easy answers to difficult societal problems have put forward proposals to walk back CDA 230's protections, hoping that holding platforms liable for user activities will keep bad actors off the Internet. After all, it is much easier for a government agent to tell a large company like Facebook to remove undesirable posts, or to tell eBay not to list a certain type of item, than it is to prosecute those who commit the underlying crimes. But forcing every major tech company to become a law enforcement monitor, censor, and agent will do far more harm than good.

If we make large tech firms our country’s policemen, we will chill the greatest tool humanity has ever created for free speech. If they face liability for the actions of users they cannot fully control, why would Google, Facebook, Yelp, Reddit, or any other platform not take down constitutionally protected but unpopular or edgy speech on the remote chance that it could lead to ruinous liability?

Rather than meddle with this foundational law that has been essential to digital innovation and freedom of expression, law enforcement should focus on the underlying criminals. The problem is not the lack of legal remedies but rather the underutilization of existing law enforcement tools. It is essential that tomorrow’s startups have the same opportunity to innovate, free from excessive coercion.

Evan Engstrom is the Executive Director of Engine, a policy, advocacy, and research organization that supports tech startups. Jesse Blumenthal leads technology and innovation policy at the Charles Koch Institute.

http://thehill.com/blogs/congress-blog/judicial/345195-defending-crucial-protections-for-internet-platforms