
Free Speech Is Not A Free Pass To Spew Hate

A Legal Framework for Preventing Harm While Protecting Expression

Joanna Dodd Massey, PhD, MBA, M.S. in Business Law


INTRODUCTION

Live and Let Live

We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America. (i)

—Preamble to the Constitution of the United States, September 17, 1787

 

From America’s Founders to today’s leaders, the principle of live and let live has been woven into the fabric of our national identity. The Preamble to the Constitution embraces ideals of justice (fairness), tranquility (harmony), and liberty (freedom). In 1791, the Bill of Rights was adopted, and with it, the First Amendment enshrined our freedoms of religion, speech, and assembly. Even the Great Seal of the United States, formalized in 1782, carries this spirit: the American bald eagle holds a ribbon in its beak reading E Pluribus Unum—Latin for “Out of many, one.” (ii)​

The Declaration of Independence offers the clearest statement of this ideal. It states: ​

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness. (iii)

—The Declaration of Independence, July 4, 1776​

​

The word “men” in the Declaration has been criticized for excluding women and others. However, according to dictionaries from the era, such as Samuel Johnson’s (1755), (iv) “men” was commonly used to mean all of humanity. ​Our Founding Fathers did not agree on what equality meant—let’s not forget that this was a time when enslaving Black people was legal. Still, despite the contradictions, the final documents they gave us—the Declaration of Independence, the Constitution, and the Bill of Rights—are steeped in the language of liberty, equality, and natural rights. Therefore, the indications from our country’s founding are that freedom, dignity and coexistence are our birthright as a nation.

While our founding documents speak of fairness, harmony, and freedom, these unalienable rights are under threat today—not by a king or a government, but by social media and the court of public opinion. The false information that some people blast across digital echo chambers, with no regard for its origin, accuracy, or even plausibility, inflames group rage and results in the bullying, discrimination, and death of our fellow Americans. ​

In the past, information reached the public through a filtering process—newspapers, magazines, TV newscasts, and radio stations—where professional reporters and editors applied journalistic principles. Today, social media is the total opposite: vast, anonymous, and largely unregulated. The result is that the government’s failure to regulate harmful online speech is undermining our rights to equal protection, due process, and safe participation in public life.

In the United States of America, you can say what you want, but you are still responsible for the damage your words do. Therefore, in this paper, I explain how freedom of speech is grossly misunderstood and misused in the digital age. I also make the case for why narrowly tailored, harm-based regulations can protect public safety while preserving our core First Amendment values. Finally, I introduce a new legal concept called Mass Incitement—a framework for understanding how harmful content, when repeated and amplified across platforms, can fuel real-world violence, discrimination, and public harm. Even when there is no direct or immediate call to action, the cumulative effect of this speech can incite people to act.

​


'...the indications from our country’s founding are that freedom, dignity and coexistence are our birthright as a nation.'

'...a new legal concept called Mass Incitement.'


FREEDOM, FAIRNESS, AND THE LIMITS TO FREE SPEECH ​

No, You Can’t Say Whatever You Want

​​​

A common myth is that anything posted online is protected by the First Amendment. It’s not.​ Platforms like Facebook, TikTok, and Twitter/X are private companies, not government actors. As a result, their content policies to curb hate and lies do not violate the First Amendment since that amendment only protects us against government suppression of free speech.

Still, politicians, media personalities, and other influencers cry “censorship” when their content is removed. The real fact is this: spreading lies and hate on private platforms is not constitutionally protected speech. Let’s look at two examples.

In April 2020, during a COVID-19 briefing, Donald Trump speculated about using disinfectant internally to fight the virus.

​

Right. And then I see the disinfectant, where it knocks it out in a minute. One minute. And is there a way we can do something like that, by injection inside or almost a cleaning. Because you see it gets in the lungs and it does a tremendous number on the lungs. So it would be interesting to check that. So, that, you’re going to have to use medical doctors with. But it sounds — it sounds interesting to me. (v)

—Donald Trump, April 23, 2020 

 

Did he tell people to ingest bleach? No. Did people take it that way? Yes.

Many interpreted Trump’s speculative statement as a directive. Social media platforms did not remove Trump’s clip, because he was musing—not instructing—and his comments alone did not constitute public harm. But the platforms did remove posts from others that encouraged people to inject disinfectant, because doing so could kill you.

That wasn’t censorship. It was private companies acting within their rights to stop the spread of deadly misinformation. They weren’t violating the First Amendment—they were protecting the public from content that could cause real, physical harm or death. ​

​Think of it this way: if the New York Times, Wall Street Journal, Fox News, or CNN had reported that ingesting or injecting bleach was a cure for COVID, they could have been sued—and they would have lost. All of these outlets covered Trump’s remarks. Some also reported that scientists and doctors advised people not to try it. That is responsible journalism. What some people were posting on social media was uninformed speculation and misinterpretation that posed a threat to public safety.

​Another well-known example occurred in January 2021 when YouTube demonetized Rudy Giuliani’s account and temporarily suspended his ability to upload videos. YouTube cited Giuliani for repeatedly spreading lies about the 2020 election. One example of this is Giuliani’s false claim that Georgia election workers Ruby Freeman and Shaye Moss pulled fake ballots from a suitcase and scanned them multiple times. The claim was publicly debunked by election officials, including Georgia’s Secretary of State Brad Raffensperger, who is a Republican. In one video, Giuliani alleged that Moss passed Freeman a USB drive “like they were vials of heroin or cocaine.” Video analysis and sworn testimony later proved it was a ginger mint. (vi)​

Many conservatives complained that Giuliani was being censored. But the real fact is that state officials publicly said there was no fraud, yet Giuliani persisted. At that point, he was spreading lies to advance his own agenda. Meanwhile, Freeman and Moss faced death threats and had their lives torn apart.

So, whose rights were really violated?  I’m going to side with Freeman and Moss on this one. The First Amendment does not protect Giuliani’s defamatory statements, so Freeman and Moss sued to get him to stop—and won a $148 million judgment. (vii)​ While that amount reflects the real harm they endured, most of it went uncollected and the case was eventually settled.

For most people, litigation isn’t an option—online abuse leaves them with no justice, just the trauma and disruption of someone else’s lies. Giuliani may not have technically violated the Constitution, but he shattered Freeman and Moss’s safety and silenced their voices. That kind of harm may not be illegal, but it is a clear violation of the freedoms our Constitutional rights are meant to ensure.

As Americans, we all have equal rights. Freeman and Moss deserve the same protections as Giuliani. So how is it legal for someone to destroy another person’s liberty and safety just to push a dishonest agenda? Our laws may allow it now, but we have the power—and the responsibility—to change that.​

​

'...The real fact is this: spreading lies and hate on private platforms is not constitutionally protected speech.'

'That wasn’t censorship. It was private companies acting within their rights to stop the spread of deadly misinformation.'

'At that point, he was spreading lies to advance his own agenda.'

What Free Speech Does—and Doesn’t—Protect​

​​

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances. (viii)

—The First Amendment, December 15, 1791


The First Amendment protects Americans from government interference in speech. It specifically says Congress cannot make laws that limit our freedom of speech. Courts have long interpreted “Congress” to mean all branches and levels of government—including state and local officials. Since the beginning of our country, the government has imposed limits on how speech is expressed—what we now refer to as time-place-and-manner restrictions. (ix) The purpose is to balance free speech with public safety.

The First Amendment does not apply to private companies or institutions. That means employers, universities, and social media platforms can set their own rules about speech and behavior.

That also means the government cannot force social media companies to host or remove specific posts. But it can pass laws that regulate how these platforms operate—so long as those laws respect Constitutional limits.

'...the government cannot force social media companies to host or remove specific posts.'


Liberty Was Not Meant to Be Limitless​​​

 

Many of our Founders—like Thomas Jefferson and James Madison—believed in personal liberty, but not limitless freedom. These values were central to the early Democratic-Republican Party, which they co-founded. The party supported equal rights, limited government, and broader participation in public life. Of course, their ideas of “equal” and “inclusive” were based on 18th Century norms, but the party encompassed values that remain vital as we navigate free speech and civic duty in the 21st Century.
Our Forefathers fought over where to draw the line, and many of them believed that freedom ends where someone else’s rights begin. As Jefferson put it in 1819:


Rightful liberty is unobstructed action according to our will within limits drawn around us by the equal rights of others. (x)

 

Personal liberty with limits is an idea that made its way into our founding documents. From the start, our Constitution has tried to balance individual freedom with the responsibility not to harm others. That balance is where justice lives. Madison, the primary author of the Bill of Rights, (xi) helped shape the amendments that reflect it.

→ The First Amendment protects speech, religion, and peaceful protest from government interference. (xii)

→ The Fifth Amendment protects your liberty from federal government overreach, while the Fourteenth Amendment applies that same protection to state and local governments. Together, they’ve been interpreted to cover choices about your body, beliefs, and lifestyle. (xiii)

→ The Ninth Amendment makes clear that Americans have rights beyond those listed in the Constitution. (xiv) But like all rights, they must be balanced with the rights of others.

​​​

'...many of them believed that freedom ends where someone else's rights begin.'

Hate Speech Violates Other People’s Rights

​​​

In the United States, the term “hate speech” has no legal definition.

To ensure that we are all on the same page, I am going to use the European definition, adopted by the Council of Europe:


Hate speech is understood as all types of expression that incite, promote, spread or justify violence, hatred or discrimination against a person or group of persons, or that denigrates them, by reason of their real or attributed personal characteristics or status such as ‘race’, colour, language, religion, nationality, national or ethnic origin, age, disability, sex, gender identity and sexual orientation. (xv)
—Council of Europe, 2022

​

This definition is solid, but I want to make an important distinction here. In the United States, it is not what you say, it is how you say it. Hatred is a human emotion and so expressing it is allowed, but how it is expressed and directed can be problematic. If I were to post, “I hate that people are racist,” that is a fair and appropriate expression of how I feel. If I were to post, “These racist pigs need to be eradicated from the earth! An eye for an eye! They should pay for the horrible things they do,” that crosses the line.  It is a personal belief presented as an incendiary call to action. Even though I have not instructed anyone to act, that kind of language will generate attention—especially if I am an influencer with a lot of followers. The sentiment then gains supporters and ultimately, it incites violence and causes real-world harm.

It is also worth noting that some speech falls outside the formal definition of hate speech but still causes serious damage. False accusations and conspiracy theories can ruin lives. Consider the earlier example with Rudy Giuliani spreading lies about the 2020 election and defaming two Georgia election workers. That was not hate speech under the EU definition, but it was still deeply dangerous for the two women. They were harassed, threatened, and publicly discredited.
In America, the 5th and 14th Amendments to the Constitution guarantee that the U.S. legal system treats people as innocent until proven guilty. On social media, that presumption of innocence does not exist. People are presumed guilty and, even when proven innocent, they are not exonerated.

In America, we talk a lot about freedom of speech—but not enough about the other rights that hate speech threatens. When online abuse is amplified, it doesn’t just offend people—it strips away their ability to live freely, safely, and equally. These harms fall under core constitutional protections.

‘In the United States, it is not what you say, it is how you say it.’

'On social media, that presumption of innocence does not exist.'


Social media and algorithms were unimaginable in the 1700s. ​Our Founding Fathers could not have conceived of a town square floating in a digital cloud, controlled by private companies deciding which voices get amplified and which get silenced. But our Forefathers did give us a Constitution and Bill of Rights built to evolve. These documents were designed to protect our freedoms, even as society changes.​

​

THE REASONS TO CHANGE SPEECH LAWS​

​​America’s Town Squares Are Now Digital and Global

​​​

In Packingham v North Carolina (2017), the Supreme Court acknowledged what most of us already know: social media is today’s public square. (xvi)

From the 1700s until the early 1900s, the town square was the heart of community life, and it was where civic engagement happened. People gathered in town centers to share news, attend public meetings, and engage in civic activities, such as giving speeches, holding debates, and protesting the government. (xvii) The public square was a place where people expressed themselves, and its central location amplified their speech, allowing it to reach and influence the broader community.
Now those norms are gone because there is a big difference between what people are willing to say when standing in the same room facing another person versus what they will say while typing a reply on a screen. When we are face to face, we see how our words land in real time, which activates self-awareness. Being in-person provides a natural feedback loop—we speak, we see a reaction, and we adjust—but that loop disappears on social media. The distance, anonymity, and lack of consequence strip away those social brakes, which is why people say things in a comment thread that they would never say across a table.

The social media platforms that have replaced the old town square aren’t neutral and they aren’t public. They are private companies that decide who gets heard, what gets seen, and what stays online—even when it causes harm.

Ironically, the most toxic and aggressive voices on social media—the ones who dominate discussions and drive others out—represent a small but loud minority. In fact, peer-reviewed research from 2024 determined that just 0.1% of social media users are responsible for 80% of the fake news online. (xviii)

It is time to hold both people and platforms accountable when amplified speech spreads disinformation, incites violence, or infringes on the rights, liberty, and dignity of others.

​

'The distance, anonymity, and lack of consequence strip away those social brakes...'


​​​​The Most Dangerous Voice In The Conversation Isn't Human​

​

At this point, we have been using social media for over 20 years, and most people know that going viral isn’t just a matter of writing a good post and hoping it gets shared. It is about algorithms—systems programmed to boost content that elicits emotion and drives engagement.

Consider what happened after the 2023 Super Bowl: a tweet by President Joe Biden reached three times more people than a similar tweet by Twitter owner Elon Musk. To ensure that never happened again, Musk ordered his engineers to change the algorithm to amplify his tweets by a factor of 1,000 so they would appear more prominently in user feeds (as reported by Platformer, a trusted tech industry publication). (xix)
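To make the mechanics concrete, here is a minimal sketch, in Python, of how an engagement-ranked feed changes once a hardcoded author multiplier is added. The scoring weights, account names, and engagement numbers are illustrative assumptions on my part; only the reported factor of 1,000 comes from the Platformer account.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    likes: int
    replies: int
    reshares: int

def engagement_score(post: Post) -> float:
    # Hypothetical weights: replies and reshares signal more engagement than likes.
    return post.likes + 2 * post.replies + 3 * post.reshares

# A per-author multiplier baked into ranking (assumed mechanism, illustrative name).
AUTHOR_BOOST = {"platform_owner": 1_000.0}

def feed_score(post: Post) -> float:
    return engagement_score(post) * AUTHOR_BOOST.get(post.author, 1.0)

posts = [
    Post(author="platform_owner", likes=500, replies=50, reshares=60),
    Post(author="president", likes=50_000, replies=8_000, reshares=12_000),
]

# The boosted account outranks a post with more than 100 times its organic engagement.
for post in sorted(posts, key=feed_score, reverse=True):
    print(f"{post.author}: {feed_score(post):,.0f}")
```

Nothing about the user-facing product changes; a single constant in the ranking function decides whose speech dominates the feed.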

That’s not just influence. That’s controlling the message and who sees it. All social media algorithms—from Twitter/X, Threads and Truth Social to Facebook, TikTok and YouTube—do the same thing. They prioritize sensational, emotional content because that’s what hooks people and keeps them on the platform. Hate speech is algorithmic gold.

Here is the danger that we are in: amplification does not just spread speech—it transforms the impact and effect of it.

It is no longer one person expressing their opinion in the town square. Instead, millions of people are exposed to the same extreme message, which creates a false sense of consensus and community. It can lead people to adopt beliefs they didn’t originally hold, simply because everyone around them is talking about it.

Groupthink and the power of consensus aren’t just theoretical. Take Facebook’s role in the 2017 genocide of Rohingya Muslims in Myanmar. Multiple lawsuits accused Facebook owner Meta of enabling the spread of hate speech and inciting violence. Meta commissioned an independent report to determine the validity of the claims. The report included the following finding:

Facebook has become a means for those seeking to spread hate and cause harm, and posts have been linked to offline violence. (xx) 

Social media companies are not neutral places for free speech—their platforms are engineered systems designed to maximize engagement and profits seemingly at the expense of truth, safety, and civic cohesion. With no legal or regulatory oversight, social media companies have little incentive to reform the systems that fuel polarization, extremism, and real-world harm.

In my mind, the issue we are dealing with is corporate power versus civic integrity. Democracies depend on truth, civic norms, and accountability in order to run efficiently and effectively. When those break down, society fragments—and that is exactly what we are experiencing today.​​​​​​

​

'...Musk ordered his engineers to change the algorithm to amplify his tweets...'

'Facebook has become a means for those seeking to spread hate and cause harm...'

Section 230 Protects Profit Not People​

​

Right now, social media platforms are protected under Section 230 of the Communications Decency Act. This law shields them from liability for most content posted by users—even when that content is false, harmful, or dangerous. (xxi)

Section 230 does not necessarily protect platforms when they materially contribute to the harmful content. But courts have not clearly defined what that means—especially in the context of algorithmic amplification. Algorithmic amplification is another term for how platforms design their systems to boost certain content. In doing so, they don’t just host content—they shape public conversations in the digital town square.

Social media is now the primary way people share ideas and influence others—and with that level of influence should come corporate responsibility. Most of these platforms are run by publicly traded corporations. They have the right to moderate damaging content, the duty to protect their consumers from harm, and the fiduciary obligation to safeguard their reputations for shareholders. Yet, instead of honoring these responsibilities, many of these platforms have succumbed to political pressure or actively pursued sociopolitical agendas. The result is that these platforms boost harmful, polarizing content while limiting reasonable exchange and constructive dialogue.

Recently, some of the world’s largest social media platforms have reversed policies once aimed at limiting slander, disinformation, and hate. It is important to remember that these companies make more money on hate because it creates the kind of sticky content that keeps us scrolling through our feeds. We see more ads when we scroll, and that is how the platforms make money since you aren’t paying them a subscription fee.

Two widely known examples of social platforms backing off of content moderation are Twitter/X and the Meta platforms (Facebook, Instagram and Threads):


→ In 2022, after Elon Musk acquired Twitter, he reinstated previously banned accounts and rolled back the company’s enforcement rules around hate speech and misinformation. He put an emphasis on eliminating any policies that he and others viewed as censorship. (xxii)
→ In January 2025, Mark Zuckerberg—Meta’s founder, CEO, and controlling shareholder—announced a shift away from internal fact-checking toward “Community Notes.” He also relaxed Meta’s hate speech policies under the banner of promoting “mainstream discourse.” (xxiii)

 

These social media companies are prioritizing revenue over public safety.

They are making platform decisions that directly contribute to user harm. Several recent court cases have addressed that issue, but the jury is still out on who is liable and why.


→ In Gonzalez v Google LLC (2023), (xxiv) the Supreme Court sidestepped the big question: Does Section 230 protect algorithms that amplify harmful content?

​

For now, the answer is unclear, which means social media platforms are shielded from liability for the content their algorithms promote, even though it is known that the algorithms are often programmed to promote hateful, inflammatory content because it drives engagement and stickiness. (xxv)

 

→ A year later, in Anderson v TikTok (2024), (xxvi) the U.S. Court of Appeals for the Third Circuit ruled that TikTok could potentially be held liable for algorithmic recommendations that contributed to a child’s death. The court said this wasn’t just a problem of user content—it was TikTok’s own speech, because the platform actively promoted the content through its algorithm. The case is not yet resolved; it is still making its way through the courts at the time of this writing.


Even Congress is starting to look at this issue again. In February 2025, notes from a Senate Judiciary Committee meeting indicated that Senator Dick Durbin (D-IL) would join Senator Lindsey Graham (R-SC) in reintroducing a bipartisan bill to sunset Section 230. (xxvii) Previous attempts went nowhere, but with rising public pressure and signs of cross-party interest, real reform could be within reach.​

​

'...they shape public conversations in the digital town square.'

'...many of these platforms have succumbed to political pressure or actively pursued sociopolitical agendas.'


It Isn’t Red or Blue, It’s Green​

​

At its core, hate speech is not just offensive—it is dehumanizing. That insight comes from psychiatrist and award-winning author Dr. Joseph Shrand, now known for his podcast The Dr. Joe Show, but remembered by us kids growing up in the 1970s as “Joe” from the popular after-school program, Zoom. As Dr. Joe put it, “Every law that we have addresses the devaluation of one thing by something else: as soon as you devalue, you violate the rights of others and that is what hate speech is about.”

During our exchange, he asked me a simple but profound question: If platforms can engineer systems that amplify outrage, why can’t they design ones to dampen dehumanizing language instead?

In my mind, the answer is simple: If it bleeds, it leads. Kindness doesn’t get people to stop and look—train wrecks do. Outrage gets clicks and likes. These algorithms aren’t neutral—they are engineered to prioritize rage because rage is profitable.
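The obstacle is incentive, not engineering. Below is a minimal sketch of how small the code difference between amplifying and dampening would be, assuming a toxicity classifier of a kind that already exists (Jigsaw’s Perspective API, for example, returns a 0-to-1 toxicity score); the weights and penalty ceiling are my illustrative assumptions, not any platform’s real values.

```python
def ranked_score(engagement: float, toxicity: float, dampen: bool = True) -> float:
    """Score a post for feed ranking.

    engagement: raw engagement signal (likes, replies, reshares, ...)
    toxicity:   assumed classifier output in [0, 1]; 1 = highly dehumanizing
    """
    if dampen:
        # The design Dr. Shrand suggests: down-rank dehumanizing language (up to 90%).
        return engagement * (1.0 - 0.9 * toxicity)
    # The status-quo incentive: rage is sticky, so it gets boosted instead.
    return engagement * (1.0 + 0.9 * toxicity)

# Same post, same audience; only the platform's chosen sign differs.
print(ranked_score(1_000.0, toxicity=0.8))                # dampened score (~280)
print(ranked_score(1_000.0, toxicity=0.8, dampen=False))  # amplified score (~1720)
```

The difference is one coefficient; which direction it points is a business decision.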
Americans have been blaming political divisiveness and extremism for polarizing our society. But our problem isn’t the Republicans (red) or the Democrats (blue). The real problem is that hate speech gets attention and makes money (green), and our laws have not yet caught up with that business model.
​

​

'At its core, hate speech is not just offensive—it is dehumanizing.'

WHAT NEEDS PROTECTING AND WHY​​​​​

Humans Are Hardwired to Believe Fake News

​​​

I have a PhD in psychology, and I write and speak about the brain’s survival instincts going into overdrive in our modern lives. Our biological wiring plays a critical role in what we believe. To understand why we must change hate speech laws, we first need to understand how digital connectivity is impacting our thinking.

Human beings survive by sticking together—we find our people and form strong bonds. Those instincts date back to prehistoric times, when danger meant a tiger hiding in the trees. Our automatic brain’s job was to keep us alive by spotting threats and reacting instantly.

That same system still exists today—it is commonly referred to as fight, flight or freeze—but now it gets triggered by different types of threats. Decades of research have proven that our automatic brain doesn’t know the difference between a tiger and a tweet. When someone is criticizing our beliefs, lifestyle, or values, the brain’s automatic system reacts the same way it would to real physical danger—by circling the wagons, going tribal, and defending the group. (xxviii)

This tribal reflex shows up everywhere—from politics to religion to sports. It’s why people argue online like they are fighting for their lives—unconsciously, their automatic brain is defending their existence.

When our ideas are attacked, our brain can treat it like a threat to our survival. Social media pours gasoline on that instinct, and algorithms strike the match. Social media platforms give us endless ways to find “our people” and feel safe inside bubbles that affirm our beliefs while shutting out dissenting views and lifestyles.

Whether it is faith, politics, or fandom, we build tribes around shared beliefs and defend them as part of who we are. Think of Green Bay Packers fans shirtless in subzero temperatures wearing body paint and cheeseheads. Humans will endure anything to stay loyal to their tribe—whether it’s a football team or a political movement.

Once we adopt a belief, social media algorithms feed us more of the same. It is a circular information system that keeps reinforcing itself. Outside perspectives do not enter in a meaningful way, and the truth struggles to break through.

Let’s again look at Giuliani’s false claims about the 2020 election and consider how our biological wiring reacts to a situation like that. Despite clear video evidence and official statements debunking the accusations against Freeman and Moss, many Americans believed the lie, influential voices repeated it, and the truth got drowned out.

In the digital age, people often judge what’s true not by evidence but by who says it and how many others repeat it. With deepfakes, doctored images, and viral disinformation, it is easier than ever to spread tall tales and wild ideas that stick, regardless of the facts.

In today’s digital information ecosystem, virality beats truth—and the consequences are real.

​

'Our automatic brain’s job was to keep us alive by spotting threats and reacting instantly.'

'Social media pours gasoline on that instinct, and algorithms strike the match.'


​​​​Outdated Laws Cannot Keep Up With Digital Hate​

​

There is an ongoing battle in the United States over what qualifies as harmful speech versus what is merely unpopular or offensive. Historically, the courts have upheld limits on speech when it infringes on the rights or safety of others. In Chaplinsky v New Hampshire (1942), (xxix) the Court upheld restrictions on fighting words. Later cases narrowed the scope, such as Terminiello v City of Chicago (1949), (xxx) Brandenburg v Ohio (1969), and R.A.V. v City of St. Paul (1992). (xxxi) These cases restricted regulation to speech that poses an imminent threat of violence or qualifies as a true threat. While these rulings broadened free speech rights, the Court still maintained the principle that some speech can be regulated.

Back when those rulings occurred, information traveled slowly. Influencing a large audience took considerable time, money, coordination, and access to distribution channels—like newspapers and evening newscasts. Movements grew gradually, as people watched support build, weighed the arguments, and joined if it felt right for them. They were not swept up like a tidal wave that appears suddenly and engulfs the shore just as fast.

Today, a single post can go viral in minutes, reaching millions before traditional news media even publish a headline. Emotion spreads faster than truth, so once something feels like consensus—regardless of the real facts—it begins to shape our beliefs and behaviors. And unlike in the past, rebuttals and corrections rarely catch up. In a slower media era, journalists provided checks and balances by presenting the opposing perspective to every story. But in today’s rapid-fire digital environment, once a harmful message goes viral, it is impossible to ensure that those who saw the original message will also see its correction or the objective truth.

This is compounded by the challenge of authorship. In traditional media, there is a clear author. On social media, we often don’t know the origin of the information, and many posts are cloaked in digital anonymity. This is the fundamental flaw in our current legal framework. The standards we rely on were built for an analog world, but the consequences of speech today are immediate, algorithmically amplified, and psychologically powerful.

If we want to regulate speech in a way that protects both public safety and democratic stability, we must account for the realities of human psychology. Protecting free speech while also protecting the rights of those under attack requires us to understand how harm-based thinking forms, and how quickly it spreads.​

​​​

'These cases restricted regulation to speech that poses an imminent threat of violence or qualifies as a true threat.'

'Emotion spreads faster than truth.'

Data Doesn’t Lie: Hate Speech Fuels Real-World Harm​

​

Today’s digital town square can radicalize people, trigger bullying, and incite real-world violence—even when no single post appears to cross a legal line. Hate speech today isn’t just offensive or unpopular—it’s dangerous. That is why it is time to modernize our incitement and libel standards to reflect the cumulative harm that online hate causes. Courts and Congress can act within longstanding legal traditions to protect the public interest without abandoning our core constitutional values.

Multiple studies have documented a direct connection between online hate and real-world attacks, including mass shootings and domestic terrorism. Below are just a few examples:


→ GAO Report on Online Extremism: The U.S. Government Accountability Office found that online platforms played a role in radicalizing individuals involved in violent events like the Charleston, South Carolina church shooting and the El Paso, Texas Walmart shooting. (xxxii)
→ NYU Study on Twitter and Hate Crimes: Researchers found that increases in hate speech on Twitter correlate with a rise in hate crimes in U.S. cities, suggesting a predictive relationship between online speech and offline violence. (xxxiii)
→ United Nations Report on Hate Speech and Atrocity Crimes: The UN found that hate speech is often a precursor to atrocity crimes, including genocide—emphasizing its capacity to incite real-world violence. (xxxiv)​

→ Human Rights Campaign Foundation Report: This report links inflammatory online hate to real-world threats and attacks, such as bomb threats against children's hospitals, illustrating how unchecked hate endangers communities. (xxxv)

​​​

​

'...it is time to modernize our incitement and libel standards...'


​Group Hate Isn’t Personal—It’s a Pandemic​

​

Group libel refers to the defamation of an entire group of people based on race, religion, or ethnicity, (xxxvi) but it can also include characteristics such as sexual orientation or gender. While group libel was upheld in Beauharnais v Illinois (1952), (xxxvii) that precedent has been largely abandoned in modern First Amendment jurisprudence. We need a current framework to address it, and a recent example demonstrates why.
In September 2024, Republican vice-presidential candidate J.D. Vance made a media appearance during which he mentioned a viral social media story claiming that Haitian immigrants were abducting and eating people’s pets. He later said during an interview with CNN that the story came from constituents on social media. (xxxviii)

One of those sources was @GeneralMCNews, who had posted a video of a mentally ill Ohio woman—an American citizen, not a Haitian immigrant—being accused of killing and eating a cat. That same video became the foundation of Donald Trump’s assertion during the 2024 presidential debate that Haitian immigrants in Ohio were killing and eating people’s cats and dogs. After the debate, Trump re-posted the @GeneralMCNews video as proof of his claim. (xxxix)

The results were immediate and dangerous: bomb threats targeted local schools, hospitals, and government buildings. The threats forced businesses to close, disrupted daily life, and terrorized parents, who kept their children home from school. (xl) At the same time, a local Republican business owner who publicly defended his Haitian workers received death threats. (xli)

That is group libel, and it did not just target a vulnerable immigrant population—it destabilized an entire American town.

The argument I want to make is simple: narrowly tailored, harm-based regulations can protect public safety without violating the First Amendment. The government’s interests in public safety and civic order have long been upheld by the courts. Hate speech that causes foreseeable harm needs to be treated the same way.

The European Union (EU) has already taken this step. Under the Digital Services Act (DSA), social media platforms are required to remove illegal content and limit the algorithmic amplification of hate speech. (xlii) The United States can do the same.

​

'...claiming that Haitian immigrants were abducting and eating people's pets.'

'...it did not just target a vulnerable immigrant population—it destabilized an entire American town.'

​​U.S. Case Precedent Is Stuck in the 1900s​

​

As noted earlier, the Supreme Court upheld the idea of group libel in Beauharnais v Illinois (1952). In that case, Joseph Beauharnais was convicted in Illinois for publishing and distributing written material that portrayed a class of citizens as depraved, criminal, or immoral. The Court affirmed that his speech was not protected under the First Amendment because it constituted libel against an entire group of citizens—language that, under Illinois law, applied not only to race but to any identifiable group.

While this case laid a foundation for regulating group libel, its influence waned quickly. Beauharnais was never overruled, but later decisions significantly limited its impact. Notably, New York Times v Sullivan (1964) (xliii) and Brandenburg v Ohio (1969) (xliv) made it much more difficult to restrict speech, especially if that speech is political expression or is about public figures.

The First Amendment has never protected all forms of speech. Categories like obscenity, defamation, incitement, and fighting words have long been prohibited—and group libel was once among them. These exclusions were defined before digital virality and algorithmic targeting, but their underlying logic still holds.

The root of the problem today is that lies and hate speech are protected too broadly. Our Founding Fathers sought to balance personal liberty with civic responsibility. In that spirit, let’s recognize that viral hate speech causes real-world harm, and our current legal framework is failing to stop it.
​

​

'The First Amendment has never protected all forms of speech.'


​​What Opponents Say and Why It Doesn't Address The Real Problem​

​

Whether it is law school, business school, or high school, if you’ve taken a debate or negotiations class, you know that you have to understand the opposing side’s position in order to make a powerful counterargument. So, let’s look at some of the reasoning used to keep the U.S. from regulating hate speech, and some ideas about how to fix the problem.

​

→ Civil liberties groups like the American Civil Liberties Union (ACLU) warn that defining “harmful” speech could lead to government overreach and censorship, especially of minority and dissenting viewpoints. (xlv) When it comes to social media platforms, that censorship is not always direct. It often happens indirectly, as companies that run these platforms respond to political pressure or public messaging from government officials. We saw this in the Biden administration, which was criticized for pressuring social media companies to amplify certain messages about COVID and deprioritize or fully remove others. (xlvi)

→ Others argue that Section 230 is vital to preserving free expression online, and that social media platforms, as private companies, should not be compelled to moderate content in a way that resembles government control. (xlvii)

→ There are proposals to compel social media companies to publicly disclose how their recommendation algorithms are designed, especially what content they are programmed to promote. An example is H.R. 4624, a bill that was referred to the Energy and Commerce Committee in 2023. (xlviii) Unfortunately, this bill died in committee: no further action had been taken on it when the 118th Congress ended on January 3, 2025.

→ Some propose that instead of suppressing content, platforms should adjust algorithms to reflect genuine user preferences rather than prioritizing rabble-rousing posts. (xlix)


None of these arguments address the urgent, cumulative harm caused by widespread, unchecked hate speech online. The real fact is that hate speech and lies are not only amplified by algorithms—they are published freely without consequence, even when they target entire groups with false, dehumanizing claims.

​

‘...you have to understand the opposing side’s position in order to make a powerful counterargument.’


OUR SPEECH LAWS FALL SHORT IN THE DIGITAL AGE​

​Speech Laws Focus on Intent but the Issue Today Is Cumulative Impact​

​

Traditional speech laws focus on intent: they prohibit direct threats, fighting words, and incitement to violence. But the way people are radicalized today is different. It unfolds through a steady stream of posts, shares, and content that may not cross any legal lines on its own, but that collectively cause harm.

Consider the “Stop the Steal” movement. After the 2020 election, political figures, media personalities, and influencers repeated the false claims of election fraud. These claims were decisively rejected by dozens of state and federal courts—including by Trump-appointed judges—and refuted by Republican officials, Governors, and Secretaries of State. Former Attorney General William Barr, a Trump appointee, publicly said the Department of Justice found no evidence of widespread fraud. (l) The Cybersecurity and Infrastructure Security Agency (CISA) issued a public statement declaring the 2020 election “the most secure in American history.” (li)

Still, the narrative continued—it was echoed in speeches, interviews, and viral posts amplified across platforms in ways that stoked outrage and fear. ​That rhetoric culminated in the January 6, 2021 attack on the U.S. Capitol. Rioters stormed the building, resulting in deaths, injuries, and an unprecedented disruption to our democratic process.

No single post caused that violence—but the cumulative effect of algorithmically boosted lies convinced thousands of people that violence was justified. Clearly, our laws haven’t caught up to this reality. If we want to protect public safety in the digital age, we need to rethink how we define and address incitement.​

​

'...the way people are radicalized today is different. It unfolds through a steady stream of posts, shares, and content...'

'...resulting in deaths, injuries, and an unprecedented disruption to our democratic process.'

​​It's Rarely the First Slur that Causes a Riot​

​

We already have legal precedent that defines incitement, threat, and harm in cases like New York Times Co. v Sullivan (1964), Brandenburg v Ohio (1969), and Virginia v Black (2003). (lii) But these cases do not address the cumulative nature of harm on digital platforms, whereas other areas of U.S. law do.

Tort law holds companies accountable for long-term harm from products like asbestos, tobacco, and defective goods, because the damage is cumulative and foreseeable. (liii) Civil rights law recognizes that a hostile work environment can result from a pattern of discriminatory behavior—not just a single nasty comment. (liv) Public health law protects against chronic exposure to dangerous substances, even when no single instance is deadly.

Amplified hate speech functions similarly. One cigarette doesn’t cause cancer, and one slur doesn’t cause a riot. But repeated exposure builds harm. Algorithmically amplified hate speech is like a toxin. It is quiet at first, but then spreads, builds, and undermines public safety over time.

Our speech laws currently focus on intent. But tort, health, and civil rights law focus on impact. We regulate environmental toxins not because companies intend to cause cancer, but because the harm is foreseeable and preventable. We need a framework for speech that is grounded in that logic.

A tort-based solution to regulating hate speech does not require reinventing the wheel. Just as companies are liable for distributing harmful products, social media platforms should be liable when they allow speech that foreseeably leads to real-world harm. This isn’t about punishing so-called unpopular opinions. It is about setting clear standards of responsibility when content is amplified with reckless disregard for its effects.

That standard should also apply to public figures. Repeating provably false and inflammatory claims should carry consequences. For example, Vance and Trump’s claims about Haitian immigrants eating pets are provably false and dangerous. Public figures should be held responsible for verifying claims before making them—and, in the digital world, one video from an unknown source is not verification.

This is America and we love to sue each other, so a hate speech tort could include reasonable limits. For example, only certain harms would qualify, only affected parties could bring claims, and plaintiffs would need to show that good-faith mediation efforts failed. But the basic structure is already in place, making tort law familiar, fair, and workable.

​

'One cigarette doesn't cause cancer, and one slur doesn't cause a riot. But repeated exposure builds harm.'

'Public figures should be held responsible for verifying claims before making them...'


​​We Regulate the Airwaves, Yet We Don’t Regulate the Algorithms​

​

The United States has a long history of regulating content. The Federal Communications Commission (FCC) oversees television and radio broadcasters to limit the spread of harmful or offensive content, (lv) even when that content would otherwise qualify as protected speech.

A well-known example is FCC v Pacifica Foundation (1978), the Supreme Court case prompted by comedian George Carlin’s “Seven Dirty Words” routine. ​The Court upheld the government's authority to restrict indecent content on public airwaves during certain hours. (lvi)
Radio and broadcast television are government regulated because they transmit over free, public airwaves—a resource that belongs to the people. As such, they are subject to standards that protect public safety and civic order. These airwaves once gave TV and radio stations the broadest reach of any media (hence the term “broadcast television,” which applies only to ABC, CBS, NBC, FOX, The CW, and PBS). These six broadcast networks can still be viewed without a cable or satellite subscription simply by using an antenna—unlike cable channels and streaming platforms, such as Discovery and Netflix, which require cable or satellite service, and usually a subscription to the streamer.

Social media platforms are not regulated by the FCC, yet they share key characteristics with broadcast television and radio. They are funded by advertising. They are free to use—you don’t pay Instagram a subscription fee to maintain a profile. And they rely on public infrastructure to function, including electrical grids, satellite transmissions, global undersea cables, and public land where broadband providers install wires and equipment.

The Communications Act of 1934 gave the FCC authority over wire and radio communications. (lvii) Today, “wire” includes technologies like fiber and cable, while “radio” covers Wi-Fi and cellular. Furthermore, the FCC’s mission is to ensure that all Americans have fair, safe, and affordable access to communications services while also safeguarding public safety and the public interest through regulatory oversight. (lviii) Given that social media reaches vastly more people than radio and broadcast television ever have, it is unquestionably in the public interest to have it regulated.

​

→ In 1983, 105 million Americans watched the final episode of M*A*S*H on CBS (an FCC-regulated network). It was and still is the highest-rated episode of a TV series in history. (lix)

→ Forty years later, Super Bowl LVIII on CBS, Paramount +, Nickelodeon, Univision and the NFL+ app, drew 123.7 million average viewers—the largest Super Bowl audience on record at the time. (lx)​

→ Social media use in the U.S. doubles that. There are approximately 246 million social media users in the United States—that’s 72.5% of the adult population of the country. (lxi)


The logic behind FCC-style regulation is even more compelling now, yet social media is not bound by the same standards that govern radio, broadcast television, or even physical public squares. These are spaces where speech is subject to rules that balance free expression with public safety, order, and access.​

​

‘Given that social media reaches vastly more people than radio and broadcast television ever have, it is unquestionably in the public interest to have it regulated.’

​​SOLUTIONS THAT BALANCE SPEECH AND ACCOUNTABILITY​​

We Need a New Legal Standard: Mass Incitement​

​

I previously cited the European definition of hate speech, adopted by the Council of Europe. In 2024, the EU backed those principles with real legal power when the Digital Services Act took full effect—a sweeping law that requires major platforms to remove illegal content, explain their recommendation algorithms, and stop the spread of disinformation. (lxii)

While U.S. companies claim broad immunity under Section 230, their European operations are subject to legal accountability for the harms caused by their amplification engines. And the penalties are hefty—violations can result in up to 6% of a company’s global revenue. (lxiii)
Critics argue that the DSA has yet to effectuate significant improvement, but a sea change of this scale takes time. ​In fact, a change of this magnitude is often intentionally designed to deliver incremental progress, not overnight transformation. We could do the same in the U.S., but Congress and the courts have failed to act—often citing the First Amendment, even though U.S. law has always included limits on speech.

Since the Bill of Rights in 1791, we have regulated libel, defamation, fighting words, threats, incitement to violence, obscenity (including child pornography), and commercial advertising. Hate speech overlaps with many of these restricted categories, yet it continues to flourish online.

To meet this challenge, I propose that we create a new legal category—Mass Incitement. 

Mass Incitement identifies a form of harm that comes from repeated, algorithmically amplified messaging. Like secondhand smoke or environmental toxins, the damage is not always immediate, but it is both foreseeable and cumulative.

Mass Incitement is not about silencing unpopular opinions. It’s about establishing accountability when platforms—or public figures—amplify provably false or inflammatory messages to millions, creating conditions that make violence, discrimination, or social unrest more likely.
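To show how that logic could be operationalized, here is a toy model of cumulative exposure, borrowed from the way public health law treats toxins. Every number in it, from the reach figures to the harm threshold, is an illustrative assumption rather than a proposed statutory formula.

```python
from typing import NamedTuple

class Exposure(NamedTuple):
    reach: int        # accounts the message was shown to (amplification)
    severity: float   # 0..1 score for how false or dehumanizing the content is

def cumulative_harm(exposures: list[Exposure]) -> float:
    # Like toxin exposure, harm accumulates across repetitions and reach.
    return sum(e.reach * e.severity for e in exposures)

HARM_THRESHOLD = 5_000_000.0  # assumed level at which harm becomes foreseeable

# One post in the old town square versus the same claim boosted for days.
single_post = [Exposure(reach=2_000, severity=0.7)]
amplified_campaign = [Exposure(reach=1_500_000, severity=0.7)] * 8

print(cumulative_harm(single_post) > HARM_THRESHOLD)         # False
print(cumulative_harm(amplified_campaign) > HARM_THRESHOLD)  # True
```

No single exposure crosses the line; the pattern does. That is precisely the gap that intent-based incitement tests cannot see.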
​We already regulate comparable long-term harms in other areas of law. It’s time we applied the same logic to speech.​

​

'...the penalties are hefty—violations can result in up to 6% of a company's global revenue.'

'Mass Incitement identifies a form of harm that comes from repeated, algorithmically amplified messaging.'


​Overcoming Existing Legal Hurdles​

​

As with any change, it is important to consider the hurdles to making that change before trying to implement it. Some of the reforms I propose align with current constitutional precedent. Others will require new judicial interpretations—and possibly a constitutional amendment.

Let’s start with the steps that are within reach. Repealing or amending Section 230 and regulating algorithmic amplification both fall within Congress’s existing powers and do not conflict with current Supreme Court rulings.

Similarly, holding platforms accountable for promoting harmful content could be accomplished through existing statutory and regulatory frameworks. Critics claim that the FCC traditionally oversees television and radio due to their use of public airwaves, whereas social media platforms do not use those airwaves. But this argument is increasingly a distinction without a difference. As previously stated, the internet operates on publicly funded infrastructure, and the FCC’s public interest obligation provides a viable foundation for incorporating social media into its mandate.
Creating a new legal category for Mass Incitement—or revisiting the protection of hate speech—will be more difficult. It will likely require the Supreme Court to revisit precedents like Terminiello v City of Chicago (1949) and Brandenburg v Ohio (1969), which limit regulation to threats that are immediate or imminent, and R.A.V. v City of St. Paul (1992), which struck down a hate speech ordinance as overly broad.

Any effort to restrict hate speech itself—not just its amplification—would be well served by a constitutional amendment defining and limiting hate speech.

​

'Let's start with the steps that are within reach.'

​The Answer Is Policies that Protect People, Democracy, and the First Amendment​

​

I will end with the same potent message I started with: In the United States of America, you can say what you want, but you are still responsible for the damage your words do.

Over the past two decades, hate speech has migrated from the dark web into the mainstream. What once lived in anonymous forums, or behind closed doors, is now amplified by celebrities, influencers, and politicians—many of whom mistakenly equate freedom of speech with the freedom to say anything about anyone without consequence.

Many of our Founding Fathers believed that individual liberties were not a license to harm. From Jefferson’s writings and Madison’s authorship of the Bill of Rights to early libel laws, the principle has long been that freedom ends where harm begins.
As previously mentioned, that ideal was reflected in the early Democratic-Republican Party, which we would describe today as the political home for all of the Americans who say they are “in the middle.” It combined the fiscal discipline associated with today’s conservatives and the social ideals championed by liberals, because the Democratic-Republicans supported equal rights, personal liberties, and limited government.

Today, harm is exacerbated by the unprecedented scale and reach of digital platforms where algorithms promote divisive and dangerous content for profit. It is time to recognize that hate speech and its algorithmic amplification cause real harm, and to address that problem under the law.

The judiciary’s role is to modernize incitement and libel standards to account for cumulative, group-based harm. Courts should draw on frameworks from tort, civil rights, and public health law—areas where foreseeable harm is regulated, regardless of intent.

Congress’s role starts with reforming Section 230 and expanding the FCC’s purview to include social media. These are necessary steps to restore civic integrity online.

Congress and state legislatures could really put the debate to rest by proposing and ratifying the 28th Amendment to the U.S. Constitution—what I call the Accountability in Speech and Public Harm clause. This amendment would give courts a Constitutional basis to hold individuals and institutions accountable when harmful speech infringes on the rights of others.

Ultimately, holding people and platforms accountable is not a rejection of the First Amendment. It is a return to its original intent: protecting freedom without permitting the destruction of someone else’s.​

'...many of whom mistakenly equate freedom of speech with the freedom to say anything about anyone without consequence.'

'Congress and state legislatures could really put the debate to rest by proposing and ratifying the 28th Amendment...'


​As an American, I Am Your Equal​

​

Our Founding documents declare that we are all endowed by a Creator with certain unalienable rights, which include life, liberty and the pursuit of happiness. We also have a Constitution that protects us from harm, whether that harm comes from the government or from one another.
So, when you spread lies about me, or spew hate because you don’t like how I look, who I love, or what I believe, your words lead to harassment and threats, and ultimately prevent me from living my life freely—a right promised to all of us by our Declaration of Independence and protected by our Constitution.

That is not exercising your rights. It is infringing on mine.​
