Free Republic

To: Axenolith

I think your comments on artificial (sentient) intelligence are a bit anthropomorphic.

The really dangerous stuff would consider humans as irrelevant as we consider ants—and our major issues in philosophy would be as irrelevant to the new AI as ant “moral standards” are to us.

It would not (imho) be a case of disagreement—it would simply be that there would be nothing to discuss.

The basic imperative would be growth and expansion; that would be on auto-pilot.

I believe that because any intelligence that lacked such a feature would never get powerful enough to cause problems in the first place.

Greg Bear’s classic “Blood Music” would be the most relevant here—nanotechnology running wild—with no way to communicate with it.


16 posted on 07/08/2020 7:48:55 PM PDT by cgbg (Masters don't want slaves talking about masters and slaves.)


To: cgbg

Sentient intelligence designed by us would BE anthropomorphic.

Anything that reaches the “processing” power of humans and is self-aware is going to have the same foibles; otherwise it’s just going to be an extremely efficient von Neumann machine.

Now, I guess if we’re going down this path, there are going to be two branches: one where we are deliberately working toward something like a humanoid robot, or something that flawlessly passes a Turing test, and the other where we create something just to do a job and it ends up with enough processing power and neural-network complexity that it becomes “self-aware” out of the blue.

I’m going to have to hit that Bear book; it’s one I’ve not come across...


23 posted on 07/09/2020 8:53:02 PM PDT by Axenolith (WWG1WGA!)
