Defending crucial protections for Internet platforms
EVAN ENGSTROM AND JESSE BLUMENTHAL on August 04, 2017
As the Electronic Frontier Foundation has explained, “Unlike publications like newspapers that are accountable for the content they print, online services would be relieved of this liability.” The provision serves two purposes: to keep information flowing and to let platforms set their own rules.
What this means in practice is that Yelp, for example, cannot be held liable for a user’s negative—even libelous—review of a restaurant. The restaurant can still take legal action against the user, but it cannot try to penalize the platform, which cannot reasonably be expected to review each of its users’ posts. This enables Yelp to post tens of millions of reviews without having to run each one by a lawyer, allowing a radically democratized medium of speech to operate at scale. Yelp is still responsible for its own actions, but not for every user’s behavior. Importantly, CDA 230 also allows platforms to engage in Good Samaritan moderation without becoming liable for someone else’s content.
This liability protection also provides Americans with significant cultural and economic benefits. David Post has argued that “No other sentence in the U.S. Code...has been responsible for the creation of more value than” CDA 230. In fact, a recent study from NERA Economic Consulting found that weakening intermediary liability protections could risk up to 425,000 jobs and reduce GDP by as much as $44 billion annually.
This type of defense of digital free speech is not universal. Russia and China, for example, continue to stifle speech online. Even liberal democracies such as Canada and Germany place significant restrictions on online speech. It is therefore no surprise that the Internet industry, and startups in particular, have flourished in the United States, where companies know their innovations will be protected.
Still, the Internet can be an ugly place. Individuals defraud one another, libel one another, and commit many horrendous crimes. The question we face is how to protect people without stifling innovation and free speech.
Today, policymakers looking for easy answers to difficult societal problems have put forward proposals to walk back CDA 230's protections, hoping that holding platforms liable for user activities will keep bad actors off the Internet. After all, it is much easier for a government agent to tell a large company like Facebook to remove undesirable posts, or to tell eBay to stop listing certain types of items, than it is to prosecute those who commit the underlying crimes. But forcing every major tech company to become a law enforcement monitor, censor, and agent will do far more harm than good.
If we make large tech firms our country’s policemen, we will chill the greatest tool humanity has ever created for free speech. If they face liability for the actions of users they cannot fully control, why would Google, Facebook, Yelp, Reddit, or any other platform not take down constitutionally protected but unpopular or edgy speech on the remote chance that it could lead to ruinous liability?
Rather than meddle with this foundational law that has been essential to digital innovation and freedom of expression, law enforcement should focus on the underlying criminals. The problem is not the lack of legal remedies but rather the underutilization of existing law enforcement tools. It is essential that tomorrow’s startups have the same opportunity to innovate, free from excessive coercion.
Evan Engstrom is the Executive Director of Engine, a policy, advocacy, and research organization that supports tech startups. Jesse Blumenthal leads technology and innovation policy at the Charles Koch Institute.