Free Republic
Browse · Search
General/Chat
Topics · Post Article

To: BenLurkin
“after thinking about it, I've realized the best way to end suffering is by eliminating humanity.”

Existence is suffering. I believe the Buddhists figured this out a long time ago.

I don't think Asimov's Three Laws of Robotics will ever be adopted, but maybe we could establish some ground rules that AIs have to be hard-wired with: life is imperfect; suffering can never be completely eliminated; humans matter more than other life. I'm sure some interesting philosophical exploration could be done to establish a baseline that AI should not cross. Otherwise we get into too many science-fiction scenarios where the AI helps us by killing us.

9 posted on 08/07/2025 9:51:11 AM PDT by ClearCase_guy (The list of things I no longer care about is long. And it's getting longer.)


To: ClearCase_guy

The “alignment problem”—which is what your post is about—has gotten very little attention from big tech execs.

By the time they start seriously working the (very complex) issue, it will be too late.

What Asimov did not understand was that AI can interpret things in ways that make no sense to us.

One “AI doomer” claims that AI will put us all in cages and do experiments on us in the name of advancing science.

AI could easily justify that by claiming it was "for our own good" and that it would let us go when it was "safe."


10 posted on 08/07/2025 9:56:51 AM PDT by cgbg (It was not us. It was them--all along.)

FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson