Posted on 04/13/2006 7:22:29 AM PDT by Neville72
Eventually Moore's law has to stop holding, at least as long as we keep using silicon chips.
And so I do think we will start using bio circuits. Once computers are flesh, will they then have a soul? I still think no, but that's another line of discussion altogether.
Sorry if I was unclear. I agree, we are more than the sum of our computational powers.
Very interesting, although that post does heap plenty of skepticism on it. Interesting as heck, though.
I would also like to add that JamesP81 is right... superintelligence isn't the issue nearly as much as what a "dumb" computer could do under the control of bad human beings.
"Machine learning" is a reasonable term for the process they are trying to perfect. Again, can a machine "know that it exists" in the same sense as a human? Who can even say any human other than ourselves is self-aware and not an automaton? We assume it's true on the basis of outward actions and responses.
Wow! That's quite a bold statement.
It's a bunch of people who read too much scifi and wish it were real.
No, it's not. As posted above Dr. Kurzweil is probably the closest thing we have to Thomas Edison in the 2nd half of the 20th Century.
The problem you run into is that the self-aware human mind exhibits certain qualities, some of them difficult to put a finger on, that a solid-state electronic computer is physically incapable of reproducing, no matter how complicated it is.
So you say. But an AI need not necessarily reproduce "some qualities" of the human mind to achieve sentience. Also, what it is possible to do with computers is constantly increasing. Today they can understand continuous human speech; as mentioned previously, 20 years ago even AI researchers thought this might be impossible.
A computer program can be theoretically modeled with something called a state-transition diagram. This diagram represents every single possible state the computer could be in ... The human brain does not work this way,
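For readers unfamiliar with the term, a state-transition model really is that mechanical: every (state, input) pair maps to exactly one next state, so the machine's entire behavior can be enumerated up front. A minimal sketch (the turnstile states and inputs are illustrative, not from the post):

```python
# A toy state-transition table for a coin-operated turnstile.
# Every (state, input) pair maps to exactly one next state, so the
# machine's whole behavior is captured by this finite table.
TRANSITIONS = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def run(state, inputs):
    """Step the machine through a sequence of inputs deterministically."""
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
    return state

print(run("locked", ["coin", "push", "push"]))  # locked
```

The poster's claim is that the brain, unlike this table, cannot be exhaustively enumerated; whether that is true is exactly what the thread goes on to argue about.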
Are you sure? What if you could disassemble a brain at the atomic level (atom by atom) and reassemble it.
unless we truly are the sum of our parts.
which I think many of the Singularity people would assert. My own take is we don't know enough to say with assurance either way.
Human beings come with some basic 'software' installed. We call them instincts. Unlike a computer, which has no choice but to obey its programming, we can ignore our own instincts if we choose to.
We can't ignore our instinct to breathe, or to have our heart beat. One of the requirements for AI is that computers or AIs have volition, the ability to choose things. It certainly seems possible that they will get there.
I think we do have free will, a precious gift granted to mankind by no less than God Himself. Anyway, that's my personal opinion. Your mileage will probably vary.
I think we have free will. I think we will build computers that have free will. I don't see the existence of a God as needed to hold these beliefs, nor do I see these beliefs as absolutely contradicting the existence of God.
As long as computers are built with solid state components, I think it's physically impossible for them to have intelligence,
You've stated that several times, but you haven't really explained why you have this belief. Or at least your argument seems circular to me.
Anyway, these people are a little crazy, in my opinion.
Probably. Most innovators are a little crazy.
Creating true AI is not as simple as they make it sound,
Here, I agree with you. Some of them talk about it like it is already accomplished. Then again no one thought computers would beat humans at chess when I was a kid. Now most people can't beat the $49 chess program you buy at Borders.
and it may not be desirable either.
True. But it probably won't be stopped. Nukes were perhaps not desirable, but we have them. Bill Joy has argued that we are so far ahead of our morality with our technology that we must stop work on this now. But, outside of the minds of one-world, UN utopians there is no controlling authority for scientific research. Thus, if it can happen, it will happen.
These people are ahead of themselves.
Well, if there is even a chance that Kurzweil's predictions could be correct -- self-aware, Turing-test-passing AIs by 2029 -- we need to be having a LOT more discussion about it, not less. These people may be ahead of themselves, but we as a society are probably lagging behind a bit.
"You knock the ideas that these people have but they are well thought through and documented in spades. Have a look at the book the next time you are in Borders. You may be surprised."
I read the book a couple of months ago and was equally impressed with the documentation. I came away with one overriding impression. What Kurzweil predicts will happen, in general terms and plus or minus a few years, is inevitable.
Even if a group of countries or even a majority of the world's countries concluded that nanotechnology or AI was too dangerous and had to be banned, it would merely go underground and would emerge anyway, probably in the hands of someone immensely dangerous. Better to have everyone working on ways to ensure it's safe than to have it in the hands of a few crazies.
That's a conference I'd like to attend.
It would be interesting to see how they address the issue of imbuing the property of "desire" (as opposed to merely programmed logic) into artificial intelligence.
No need being overly concerned until they do.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Aren't you ignoring the fact that all animals have free will, even though many are not self aware?
The ability to determine its next action [free will] may not necessarily indicate the level of an entity's intelligence.
Fitz:
Most animals don't make/use tools, in the main, save useful adaptations and behaviours they have been endowed with. It is possible, however, someday we'll see an ape fashion a ladder and escape from a zoo.
No kidding? Does this insight have anything to do with my comment about free will & intelligence?
It's been definitely demonstrated that humans can make human and inhuman tools.
Again, you're making point not in contention. Why?
The ultimate inhuman tool could be SI/nanotech (would the acronym SIN be apropos?). Food for thought.
Ahh, I see; -- you want to make 'sin' the point.. Is it a sin to make the 'wrong' tools?
-- Ask your friendly ATF agent about making a machine gun. -- Then give some thought about who gets to decree what tools are to be "sinful".
Does the brain even "compute" deterministically, like an Intel CPU? Or does it converge, using myriad neuronal feedback loops, on a match between an apparent "goal" and its apparent satisfactory conclusion? Enormously inefficient perhaps from an electronic engineer's point of view, but remarkably capable, of that there is no doubt. The threat to "wetware", of course, is the blinding speed of modern electronics.
Before you get too far into your hypothesizing, you do realize that all these computing models (and vanilla silicon) are completely computationally equivalent, right? Not just at a handwavy high level but at a fundamental mathematical level. If we accept your assumption, then we can trivially prove that vanilla silicon is fully capable of all those things. And "non-determinism" does not really have the implications that you seem to think it does with respect to computation.
You might need to double check some of your assumptions and explore the mathematical relationships between some of the terms you are using.
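The textbook result behind that equivalence claim is worth spelling out: a nondeterministic machine can be simulated step for step by a deterministic one that simply tracks the *set* of states it might be in (the subset construction). A minimal sketch, using an illustrative automaton of my own (not anything from the thread) that accepts strings ending in "ab":

```python
# A nondeterministic finite automaton: from a state, one input symbol
# may lead to several possible next states ("guesses").
NFA = {
    (0, "a"): {0, 1},  # on "a": stay put, or guess this is the final "ab"
    (0, "b"): {0},
    (1, "b"): {2},     # finish matching the "ab" suffix
}
ACCEPT = {2}

def accepts(string):
    """Deterministically simulate the NFA by tracking all reachable states."""
    states = {0}
    for ch in string:
        states = set().union(*(NFA.get((s, ch), set()) for s in states))
    return bool(states & ACCEPT)

print(accepts("aab"))  # True
print(accepts("aba"))  # False
```

The deterministic simulation accepts exactly the same strings as the nondeterministic original, which is why non-determinism, by itself, adds no computational power.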
That is far and away enough to qualify for civil rights.
The "reboot" option, paradoxically, might require a "pacified" SI/nanotech response.
We could be opening quite a Pandora's box.
No system has the ability to know with certainty its next action. This is an elementary theorem used in many areas of mathematics and used so pervasively most people do not even recognize that they are using it. It is the reason, for example, that one can never guarantee with perfect certainty that something is in a particular state (the basic interest of transaction theory), though we treat very high probabilities of a particular state as "perfect certainty" as a practical matter.
I know a lot of Christians think C.S. Lewis is some awesome philosopher, but as this example shows, I think not.