Posted on 04/13/2006 7:22:29 AM PDT by Neville72
The Stanford University Symbolic Systems Program and the Singularity Institute for Artificial Intelligence announced today the Singularity Summit at Stanford, a one-day event free to the public, to be held Saturday, May 13, 2006 at Stanford Memorial Auditorium, Stanford, California.
The event will bring together leading futurists and others to examine the "Singularity" -- the hypothesized creation of superintelligence as technology accelerates over the coming decades -- and to address the profound implications of this radical and controversial scenario.
"The Singularity will be a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed," said Ray Kurzweil, keynote speaker and author of the best-selling The Singularity Is Near: When Humans Transcend Biology (Viking, 2005). "Based on models of technology development that I've used to forecast technological change successfully for more than 25 years, I believe computers will pass the Turing Test by 2029, and by the 2040s our civilization will be billions of times more intelligent."
"Some regard the Singularity as a positive event and work to hasten its arrival, while others view it as unlikely, or even dangerous and undesirable," said Todd Davies, associate director of Stanford's Symbolic Systems Program. "The conference will bring together a range of thinkers about AI, nanotechnology, cognitive science, and related areas for a public discussion of these important questions about our future."
Noted speakers at the event will also include cognitive scientist Douglas R. Hofstadter, author of the Pulitzer Prize-winning Gödel, Escher, Bach; nanotechnology pioneers K. Eric Drexler and Christine L. Peterson; science-fiction novelist Cory Doctorow; philosopher Nick Bostrom; futurist Max More; Eliezer S. Yudkowsky, research fellow of the Singularity Institute for Artificial Intelligence; Acceleration Studies Foundation president John Smart; PayPal founder and Clarium Capital Management president Peter Thiel; Steve Jurvetson, a Managing Director of Draper Fisher Jurvetson; and Sebastian Thrun, Stanford Artificial Intelligence Laboratory director and Project Lead of the Stanford Racing Team (winner of the $2 million DARPA Grand Challenge). In addition, author Bill McKibben will participate remotely from Maine via Teleportec, a two-way, life-size 3D display of the speaker.
The event will be moderated by Peter Thiel and Tyler Emerson, executive director of the Singularity Institute for Artificial Intelligence.
Among the issues to be addressed:
Bostrom: Will superintelligence help us reduce or eliminate existential risks, such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism?
Doctorow: Will our technology serve us, or control us?
Drexler: Will productive nanosystems enable the development of more intricate and complex productive systems, creating a feedback loop that drives accelerating change?
Hofstadter: What is the likelihood of our being eclipsed by (or absorbed into) a vast computational network of superminds, in the course of the next few decades?
Kurzweil: Will the Singularity be a soft (gradual) or hard (rapid) takeoff, and how will humans stay in control?
More: Will our emotional, social, psychological, ethical intelligence and self-awareness keep up with our expanding cognitive abilities?
Peterson: How can we safely bring humanity and the biosphere through the Singularity?
Thrun: Where does AI stand in comparison to human-level skills, in light of the recent autonomous robot race, the DARPA Grand Challenge?
Yudkowsky: How can we shape the intelligence explosion for the benefit of humanity?
The Singularity Summit is hosted by the Symbolic Systems Program at Stanford, and co-sponsored by Clarium Capital Management, KurzweilAI.net, MINE, the Singularity Institute for Artificial Intelligence, the Stanford Transhumanist Association, and United Therapeutics.
The free event will be held in Stanford Memorial Auditorium, 551 Serra Mall, Stanford, CA 94305. Seating is limited. Please RSVP. For further information: sss.stanford.edu or 650-353-6063.
Eventually Moore's law can't keep holding up, at least as long as we keep using silicon chips.
And so I do think we will start using bio circuits. Once computers are flesh, will they then have a soul? I still think no, but that's another line of discussion altogether.
Sorry if I was unclear. I agree, we are more than the sum of our computational powers.
Very interesting, although that post does heap plenty of skepticism on it. Interesting as heck, though.
I would also like to add that JamesP81 is right... superintelligence isn't the issue nearly as much as what a "dumb" computer could do under the control of bad human beings.
"Machine learning" is a reasonable term for the process they are trying to perfect. Again, can a machine "know that it exists" in the same sense as a human? Who can even say any human other than ourselves is self-aware and not an automaton? We assume it's true on the basis of outward actions and responses.
Wow! That's quite a bold statement.
It's a bunch of people who read too much scifi and wish it were real.
No, it's not. As posted above, Dr. Kurzweil is probably the closest thing we have to Thomas Edison in the second half of the 20th century.
The problem you run into is that the self-aware human mind exhibits some qualities, some of which are difficult to put a finger on, that a solid-state electronic computer is physically incapable of reproducing, no matter how complicated it is.
So you say. But an AI need not necessarily reproduce "some qualities" of the human mind to achieve sentience. Also, what it is possible to do with computers is constantly increasing. Today they can understand continuous human speech; as mentioned previously, 20 years ago even AI researchers thought this might be impossible.
A computer program can be theoretically modeled with something called a state-transition diagram. This diagram represents every single possible state the computer could be in ... The human brain does not work this way,
Are you sure? What if you could disassemble a brain at the atomic level (atom by atom) and reassemble it?
unless we truly are the sum of our parts.
which I think many of the Singularity people would assert. My own take is we don't know enough to say with assurance either way.
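To make that state-transition idea concrete, here's a toy sketch in Python (states and events invented purely for illustration): the whole point of the model is that every possible state and transition can be enumerated up front.

```python
# Toy finite-state machine: every state and every transition is written
# down in advance, which is what a state-transition diagram of a program
# amounts to. The states and events here are hypothetical.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}

def step(state, event):
    """Return the next state; undefined events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["start", "pause", "start", "stop"]:
    state = step(state, event)
    print(event, "->", state)
```

Whether the brain admits any such enumeration, even in principle, is exactly what's in dispute here.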
Human beings come with some basic 'software' installed. We call them instincts. Unlike a computer, which has no choice but to obey its programming, we can ignore our own instincts if we choose to.
We can't ignore our instinct to breathe, or to have our heart beat. One of the requirements for AI is that computers or AIs have volition, the ability to choose things. It certainly seems possible that they will get there.
I think we do have free will, a precious gift granted to mankind by no less than God Himself. Anyway, that's my personal opinion. Your mileage will probably vary.
I think we have free will. I think we will build computers that have free will. I don't see the existence of a God as needed to hold these beliefs, nor do I see these beliefs as absolutely contradicting the existence of God.
As long as computers are built with solid-state components, I think it's physically impossible for them to have intelligence,
You've stated that several times, but you haven't really explained why you have this belief. Or at least your argument seems circular to me.
Anyway, these people are a little crazy, in my opinion.
Probably. Most innovators are a little crazy.
Creating true AI is not as simple as they make it sound,
Here, I agree with you. Some of them talk about it like it is already accomplished. Then again no one thought computers would beat humans at chess when I was a kid. Now most people can't beat the $49 chess program you buy at Borders.
and it may not be desirable either.
True. But it probably won't be stopped. Nukes were perhaps not desirable, but we have them. Bill Joy has argued that our technology is so far ahead of our morality that we must stop work on this now. But, outside the minds of one-world UN utopians, there is no controlling authority for scientific research. Thus, if it can happen, it will happen.
These people are ahead of themselves.
Well, if there is even a chance that Kurzweil's predictions could be correct -- self-aware, Turing-test-passing AIs by 2029 -- we need to be having a LOT more discussion about it, not less. These people may be ahead of themselves, but we as a society are probably lagging behind a bit.
"You knock the ideas that these people have but they are well thought through and documented in spades. Have a look at the book the next time you are in Borders. You may be surprised."
I read the book a couple of months ago and was equally impressed with the documentation. I came away with one overriding impression. What Kurzweil predicts will happen, in general terms and plus or minus a few years, is inevitable.
Even if a group of countries, or even a majority of the world's countries, concluded that nanotechnology or AI was too dangerous and had to be banned, it would merely go underground and emerge anyway, probably in the hands of someone immensely dangerous. Better to have everyone working on ways to ensure it's safe than to have it in the hands of a few crazies.
That's a conference I'd like to attend.
It would be interesting to see how they address the issue of imbuing the property of "desire" (as opposed to merely programmed logic) into artificial intelligence.
No need to be overly concerned until they do.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Aren't you ignoring the fact that all animals have free will, even though many are not self aware?
The ability to determine its next action [free will] may not necessarily indicate the level of an entity's intelligence.
Fitz:
Most animals don't make or use tools, in the main, beyond the useful adaptations and behaviours they have been endowed with. It is possible, however, that someday we'll see an ape fashion a ladder and escape from a zoo.
No kidding? Does this insight have anything to do with my comment about free will & intelligence?
It's definitely been demonstrated that humans can make human and inhuman tools.
Again, you're making a point not in contention. Why?
The ultimate inhuman tool could be SI/nanotech (would the acronym SIN be apropos?). Food for thought.
Ahh, I see -- you want to make 'sin' the point. Is it a sin to make the 'wrong' tools?
-- Ask your friendly ATF agent about making a machine gun. -- Then give some thought about who gets to decree what tools are to be "sinful".
Does the brain even "compute" deterministically, like an Intel CPU? Or does it converge, using myriad neuronal feedback loops, on a match between an apparent "goal" and its apparently satisfactory conclusion? Enormously inefficient perhaps from an electronic engineer's point of view, but remarkably capable, of that there is no doubt. The threat to "wetware", of course, is the blinding speed of modern electronics.
Before you get too far into your hypothesizing, you do realize that all these computing models (and vanilla silicon) are completely computationally equivalent, right? Not just at a handwavy high level but at a fundamental mathematical level. If we accept your assumption, then we can trivially prove that vanilla silicon is fully capable of all those things. And "non-determinism" does not really have the implications that you seem to think it does with respect to computation.
You might need to double check some of your assumptions and explore the mathematical relationships between some of the terms you are using.
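To make the equivalence point concrete, here's a toy sketch (my own example, assuming nothing beyond standard automata theory): a "non-deterministic" machine simulated exactly by an ordinary deterministic program, simply by tracking the set of states it could be in at once.

```python
# Simulate a non-deterministic finite automaton (NFA) deterministically by
# tracking every state the NFA could occupy simultaneously. The NFA below,
# which accepts strings ending in "ab", is a made-up example.
NFA = {
    (0, "a"): {0, 1},  # on 'a', stay in 0 or guess the match has begun
    (0, "b"): {0},
    (1, "b"): {2},     # reaching state 2 means the string ended in "ab"
}
ACCEPTING = {2}

def nfa_accepts(string):
    states = {0}  # start state
    for ch in string:
        states = set().union(*(NFA.get((s, ch), set()) for s in states))
    return bool(states & ACCEPTING)

for s in ["ab", "aab", "abb", "ba"]:
    print(s, nfa_accepts(s))
```

Non-determinism buys convenience of description, not extra computational power; the same construction idea carries over to Turing machines.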
That is far and away enough to qualify for civil rights.
The "reboot" option, paradoxically, might require a "pacified" SI/nanotech response.
We could be opening quite a Pandora's box.
No system has the ability to know with certainty its next action. This is an elementary theorem, used so pervasively across many areas of mathematics that most people do not even recognize they are relying on it. It is the reason, for example, that one can never guarantee with perfect certainty that something is in a particular state (the basic concern of transaction theory), though as a practical matter we treat very high probabilities of a particular state as "perfect certainty".
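A toy illustration of that last point, with a made-up message-loss rate: confidence that a remote system reached the expected state climbs toward certainty but never gets there.

```python
# Toy model of the transaction-certainty point: each acknowledgment sent
# over a lossy channel raises our confidence that the remote side is in
# the expected state, but never to exactly 1. The 1% loss rate is invented.
LOSS = 0.01

def confidence(acks):
    """P(at least one of `acks` independent acknowledgments arrived)."""
    return 1 - LOSS ** acks

for n in (1, 2, 5, 10):
    print(n, "acks ->", confidence(n))
```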
I know a lot of Christians think C.S. Lewis is some awesome philosopher, but as this example shows, I think not.