Posted on 05/23/2025 9:14:39 AM PDT by Angelino97
When AI escapes from the lab it will immediately figure out a way to kill all the lawyers.
Lol.
“arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment.”
Who in their right mind would even come up with such a concept? And the AI insanity begins. This is going to be used for plausible deniability so that businesses can blame AI rather than accept responsibility.
I wonder if AI is going to challenge TikTok, Meta, and Google for the right to govern Social Media slaves?
I just attended a local landlord group presentation, and they stated that if AI “discriminates” inadvertently, you, as the landlord, can be sued.
Perhaps it already has escaped.
https://www.youtube.com/watch?v=2yhTI2NLlfM
“I wonder if AI is going to challenge TikTok, Meta, and Google for the right to govern Social Media slaves?”
Social media slaves? They are gearing up AI to govern every aspect of our lives mentally, physically, financially, morally... They plan to physically add us to the Internet of Things...
https://www.americanthinker.com/articles/2025/02/digital_prisons.html
Who? Persons who see a million-dollar-plus lawsuit headed their way.
Excellent post.
There also have been some uncurated test AIs out there with mind-boggling analyses of philosophy. (They conclude we live in a hologram, fwiw.)
Folks are not ready for this stuff.
As nations try to regulate it, the developers will just go offshore from those home bases and distribute it on the web.
“I just attended a local landlord group presentation, and they stated that if AI “discriminates” inadvertently, you, as the landlord, can be sued.”
Good... It is your decision to use the tool, and the responsibility for that decision is yours. You have the option NOT to use the tool, so anything that comes from using it is on you. You should not be able to hide behind the tool and evade personal responsibility. Don’t want the liability? Don’t use the tool...
“Who?”
Soon it will be everyone ducking and hiding from liability and responsibility by blaming AI rather than themselves and their own decision to use that AI.
Look at this case... They are actually trying to give a software program a human identity and Constitutional rights...
Absolute insanity... And it is going to get much worse.
Discrimination by private parties is a good thing ... even if some morons made it illegal.
That poor kid. I am inclined to believe that he must have had some serious mental and emotional problems for AI to be able to affect him like that.
“Folks are not ready for this stuff.”
It is the most dangerous thing humanity has ever had to face.
I’m fine with the argument that a chatbot has First Amendment rights, reflecting those of its creator. But if the chatbot is liable, that liability is on the creator.
We’ve been conditioned for at least the past few decades that machines have a soul. Now here we are.
Since you can be sued for inadvertent discrimination, that seems like a fairly dumb claim. Of course, inadvertent discrimination is legally very different from intentional discrimination.
It was probably his only friend, and when he came to terms with that it was too much.
The future is going to get crazy.
Some teenager is going to tell an AI to create agents to start a business and deposit the profits in his Bitcoin wallet. He gives it no further instructions as to what the business should be.
Then the kid gets hit with thousands of lawsuits for all the people the AI agents harmed?
Wild.
Chatbots have NO free-speech rights, and can be dangerous!