Posted on 10/10/2025 2:03:22 PM PDT by Angelino97
Whether by design or by quiet submission to a rapidly evolving digital landscape, the world’s largest social media companies have allowed their platforms to become factories of division, dehumanization and, increasingly, real-world violence.
What began as tools for connection have become engines of rage. It’s no accident. It’s business.
At the heart of this transformation lies the engagement algorithm: a seemingly neutral mechanism that curates what billions of people see, like, share and believe.
Algorithms are not neutral. They are engineered with a single purpose — to keep us on the platform, clicking, commenting and scrolling. In the race to capture attention, one emotional trigger outperforms others: anger.
Outrage spreads faster than facts. Posts that inflame generate more engagement than those that inform. The algorithm doesn’t care if a post is divisive or harmful — only that you can’t look away. The result is an attention economy where the most profitable content is often the most toxic.
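The mechanism the op-ed describes can be made concrete. Below is a minimal, hypothetical sketch (all names invented; this is not any platform’s actual code) of a feed ranker whose only objective is predicted engagement. If inflammatory posts reliably draw more clicks, they rise to the top without the system ever weighing whether content informs or inflames.

```python
# Hypothetical illustration of engagement-only ranking -- invented
# names, not any platform's real code.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # model's estimate of clicks/comments/shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # The sort key is predicted engagement and nothing else; whether a
    # post is divisive or harmful never enters the objective.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Calm explainer on the new state bill", predicted_engagement=0.9),
    Post("THEY are coming for everything you love!", predicted_engagement=3.2),
])
print([p.text for p in feed])  # the inflammatory post ranks first
```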
This isn’t a theoretical concern. We see the consequences every day, as online harassment and hate speech metastasize into offline assaults.
A measure now awaiting Gov. Gavin Newsom’s signature, Senate Bill 771, represents a vital step toward a safer and more civil digital arena. The bill would give Californians the ability to hold the largest social media corporations accountable when their algorithms materially contribute to violations of civil rights under California law.
Social media companies may protest that they never set out to radicalize users or to profit from polarization. That may be true. But intent is not the measure of accountability. And their refusal to actively prevent these outcomes amounts to complicity.
Consider a few examples: Facebook’s internal research confirmed that its algorithm promotes divisive content because outrage drives clicks, and in Myanmar its negligence in moderating incitement against the Rohingya minority helped fuel a campaign of ethnic cleansing. X, formerly known as Twitter, has admitted that hate speech surged following changes in its content moderation. In Los Angeles and New York, antisemitic hashtags trending online preceded attacks on Jewish institutions.
The dots are no longer hard to connect. Online hate fuels offline harm.
If signed, SB 771 would not establish speech codes. It would not punish free expression. It would ensure that platforms face consequences when their business practices amplify harassment, threats or discrimination already unlawful under state law.
This is a narrow but essential intervention. Just as product liability laws hold car manufacturers accountable for defective airbags and pharmaceutical companies accountable for unsafe drugs, so too must we hold social media companies accountable for foreseeable harm caused by defects built into their core business models.
California has led the nation in establishing guardrails for industries that shape our lives, from consumer product safety to environmental protections. This bill would follow that tradition.
The stakes are high. In Los Angeles County, antisemitic crimes rose by 91% last year, while hate crimes targeting LGBTQ+ and immigrant communities hit record highs. Teachers report that slurs and bullying spread through classrooms at the speed of a trending meme. And parents are finding their children radicalized by extremist content that algorithms push into their feeds.
This is not about politics. Protection against hate must transcend ideology. This is about whether Californians have recourse when hate becomes harm. Teachers, parents, faith leaders — Californians of every background have a stake in this fight.
Imagine logging on tomorrow to digital spaces where hate is sidelined by design rather than rewarded with reach, where algorithms are aligned with human dignity instead of inhumane rage.
With SB 771, California can set that precedent. Newsom can send a clear message: the safety and civil rights of Californians take precedence over Silicon Valley’s bottom line.
Not "punishment." Merely "facing consequences."
“Just as product liability laws hold car manufacturers accountable for defective airbags and pharmaceutical companies accountable for unsafe drugs, so too must we hold social media companies accountable for foreseeable harm caused by defects built into their core business models.”
"Defects" such as free speech.
Garbage in. Garbage out.
The State of California is a federally created entity.
Amendment I should be fully applicable.
People need to just walk away from TikTok, constant phone use, etc.
“the world’s largest social media companies have allowed their platforms to become factories of division, dehumanization and, increasingly, real-world violence.”
“What began as tools for connection have become engines of rage”
All the above is essentially true. But what’s not true is that government can fix it.
What government needs to do is step away. Most of all, stop the ban on prayer and Bible reading in schools.
The old “Section 230” “Platform vs Content” issue raises its head again... the “AI” summary:
“Recent court rulings have reinforced that social media platforms are generally protected under Section 230 of the Communications Decency Act, meaning they cannot be held liable for user-generated content. However, some cases, like the recent Third Circuit ruling involving TikTok, suggest that platforms may face liability for their algorithmic recommendations if they actively promote harmful content.”
“Key Court Cases:

Gonzalez v. Google LLC (2023): The Supreme Court avoided a definitive ruling on Section 230, leaving existing protections intact.

Anderson v. TikTok, Inc. (2024): The Third Circuit ruled that TikTok could be liable for promoting harmful content through its algorithms, breaking from traditional interpretations of Section 230.

Moody v. NetChoice (2024): The Supreme Court upheld the First Amendment rights of platforms to moderate content, rejecting state laws that sought to limit this discretion.
Implications of Rulings
Liability for Algorithms: The Anderson case suggests that platforms may face liability for the content they promote through algorithms, indicating a potential shift in how courts view platform responsibility.
First Amendment Protections: The Moody ruling reinforces that social media platforms have the same editorial rights as traditional media, allowing them to control what content is displayed without government interference.
These rulings reflect ongoing debates about the balance between platform responsibility and user freedom, shaping the future of online content moderation and liability.”
Sources: frostbrowntodd.com, dynamisllp.com
“SOCIAL MEDIA” IS NOT VERY SOCIAL
“...materially contribute to violations of civil rights under California law.”
____________________
And who has ‘civil rights’ under California law? Who are in the ‘protected’ classes?
It’s one thing to yell “Fire” in a crowded theater, it’s another to create an algorithm that keeps sending links to those who would yell “Fire” in a crowded theater.
Slippery slope anyone?🤦🤬
This is insane. Algorithms merely fill your timeline with whatever you already watch. If you click on far-left posts you will see a bunch of far-left stuff. If you click on posts about old-time movies you get old-time movies. If you click on jazz music you get jazz music. And so on.
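The feedback loop this comment describes can be sketched in a few lines. This is a hypothetical illustration assuming a purely click-count-based topic affinity (all names invented, not any platform’s real code): the feed simply echoes past clicks back at the user.

```python
# Hypothetical sketch of click-driven recommendation -- invented names,
# not any platform's real code.
from collections import Counter

def record_click(affinity: Counter, topic: str) -> None:
    affinity[topic] += 1  # every click strengthens that topic's weight

def build_feed(affinity: Counter, catalog: dict[str, list[str]]) -> list[str]:
    # Serve topics in order of past clicks: jazz in, jazz out.
    ranked = [topic for topic, _ in affinity.most_common()]
    return [post for topic in ranked for post in catalog.get(topic, [])]

affinity = Counter()
for _ in range(5):
    record_click(affinity, "jazz")
record_click(affinity, "old movies")

catalog = {
    "jazz": ["Coltrane live in 1961"],
    "old movies": ["Casablanca final scene"],
    "politics": ["Inflammatory hot take"],  # never clicked, never surfaced
}
print(build_feed(affinity, catalog))
# ['Coltrane live in 1961', 'Casablanca final scene']
```

Real systems layer engagement-prediction models on top of this affinity signal, which is where the amplification debate in the article comes in.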
If they were consistent about this they would have to shut down all their state universities as breeding grounds for hate.