Posted on 01/19/2025 6:11:37 AM PST by Twotone
Snip
One problem that has arisen is that in a social media-based world one may not actually be present to be harmed or incited to participate in violent acts. If speech consumers are not in fact in a public space to face an impending menace or to be provoked to commit violence immediately, how should those communications be interpreted?
A further complication surrounds the question of whether AI can distinguish incitement reflecting immediate, intended malice so that it flags only the most serious threats. Can its programming perceive or discern context when someone is making a permitted political statement using innocuous but possibly ill-chosen analogies in speech or writing? For example: “… this… would hand…. a dangerous new tool it could use to… target political opponents and punish disfavored groups.”
Prior U.S. Supreme Court decisions have specified that speech can be restricted if it is directed at inciting or producing imminent lawless action. But in virtual settings, where there may be no assembled masses present, can the author or speaker be considered to be fomenting an immediate, inciting threat? Or do these situations simply qualify as excited rants that earn private speech protection?
The federal government is already testing these definitions. It is well known that some U.S. intel agencies are equipped with banks of highly sophisticated listening devices that scan telecommunications transmissions 24/7. They are programmed to alert authorities when specific words are spoken, or when terms are communicated that, on their face, could be the catalyst for further investigation.
So, for example, one question might be how AI would treat texts that say: “Make sure you’re there by 3:45 to hide for the ‘Surprise Party’; it’s gonna be a blast. They’ll remember us for years.”
(Excerpt) Read more at americanthinker.com ...
People have always known not to believe everything they read.
Now they know not to believe everything they see.
AI should not be used as an excuse to water down the First Amendment.
Should AI be used to scan our telecommunications looking for “problems”? A rule I think Kash Patel (and Congress) should impose on the FBI immediately: no setting up or instigating a crime to lure disfavored people into a criminal act just so agents can take them off the street. The FBI should be investigating actual crimes.
On the other hand, with terrorism as it is today, it’s probably necessary to be looking at some of what is being posted on social media. But how do we ‘narrow’ that kind of monitoring so they aren’t just harassing people who may rant but will never act?
Who is going to tell the AI what is truth?
Prior U.S. Supreme Court decisions have specified that speech can be restricted if it is directed at inciting or producing imminent lawless actions.
= = =
Is the speaker really the problem?
Or is it the stupid audience?
I offer this in support of free speech.