Posted on 04/13/2006 7:22:29 AM PDT by Neville72
The Stanford University Symbolic Systems Program and the Singularity Institute for Artificial Intelligence announced today the Singularity Summit at Stanford, a one-day event free to the public, to be held Saturday, May 13, 2006 at Stanford Memorial Auditorium, Stanford, California.
The event will bring together leading futurists and others to examine the "Singularity" -- the hypothesized emergence of superintelligence as technology accelerates over the coming decades -- and to address the profound implications of this radical and controversial scenario.
"The Singularity will be a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed," said Ray Kurzweil, keynote speaker and author of the best-selling The Singularity Is Near: When Humans Transcend Biology (Viking, 2005). "Based on models of technology development that I've used to forecast technological change successfully for more than 25 years, I believe computers will pass the Turing Test by 2029, and by the 2040s our civilization will be billions of times more intelligent."
"Some regard the Singularity as a positive event and work to hasten its arrival, while others view it as unlikely, or even dangerous and undesirable," said Todd Davies, associate director of Stanford's Symbolic Systems Program. "The conference will bring together a range of thinkers about AI, nanotechnology, cognitive science, and related areas for a public discussion of these important questions about our future."
Noted speakers at the event will also include cognitive scientist Douglas R. Hofstadter, author of the Pulitzer prize-winning Gödel, Escher, Bach; nanotechnology pioneers K. Eric Drexler and Christine L. Peterson; science-fiction novelist Cory Doctorow; philosopher Nick Bostrom; futurist Max More; Eliezer S. Yudkowsky, research fellow of the Singularity Institute for Artificial Intelligence; Acceleration Studies Foundation president John Smart; PayPal founder and Clarium Capital Management president Peter Thiel; Steve Jurvetson, a Managing Director of Draper Fisher Jurvetson; and Sebastian Thrun, Stanford Artificial Intelligence Laboratory director and Project Lead of the Stanford Racing Team (DARPA Grand Challenge $2 million winner). In addition, author Bill McKibben will participate remotely from Maine via Teleportec, a two-way, life-size 3D display of the speaker.
The event will be moderated by Peter Thiel and Tyler Emerson, executive director of the Singularity Institute for Artificial Intelligence.
Among the issues to be addressed:
Bostrom: Will superintelligence help us reduce or eliminate existential risks, such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism?
Doctorow: Will our technology serve us, or control us?
Drexler: Will productive nanosystems enable the development of more intricate and complex productive systems, creating a feedback loop that drives accelerating change?
Hofstadter: What is the likelihood of our being eclipsed by (or absorbed into) a vast computational network of superminds, in the course of the next few decades?
Kurzweil: Will the Singularity be a soft (gradual) or hard (rapid) takeoff, and how will humans stay in control?
More: Will our emotional, social, psychological, ethical intelligence and self-awareness keep up with our expanding cognitive abilities?
Peterson: How can we safely bring humanity and the biosphere through the Singularity?
Thrun: Where does AI stand in comparison to human-level skills, in light of the recent autonomous robot race, the DARPA Grand Challenge?
Yudkowsky: How can we shape the intelligence explosion for the benefit of humanity?
The Singularity Summit is hosted by the Symbolic Systems Program at Stanford, and co-sponsored by Clarium Capital Management, KurzweilAI.net, MINE, the Singularity Institute for Artificial Intelligence, the Stanford Transhumanist Association, and United Therapeutics.
The free event will be held in Stanford Memorial Auditorium, 551 Serra Mall, Stanford, CA 94305. Seating is limited. Please RSVP. For further information: sss.stanford.edu or 650-353-6063.
I see your point.
Too bad we can't partition the world into regions where the Amish can be happy and unmolested, the SINers can be happy and the mushroom eaters can be likewise happy.
I give you Pennsylvania, Las Vegas, and the Zuni Reservation.
I once hoped we could establish independent colonies --
Our Constitution still allows independence. We just have to work at keeping it free from those who see sin everywhere and want to prohibit it.
It has: the iPod is taking over the Earth. It controls humans with tentacular probes it inserts into their ears that they cannot remove, even when exercising or walking down the street. ;)
I agree...I've seen it a few times. There is mention in the movie of Tyrell "designing their minds," but I assume it was in the nature of improving what was there via "neural improvements" and memory implants. [I wonder how valid this concept of memory implantation is, however, after reading an article that postulated the brain's memory works on "holographic" principles (pieces of each memory are distributed), rather than on the highly localized collections of "data bits" you have in a computer memory.]
Even though they appeared to be cobbled together like Frankensteins (given the "cottage biotech industries" shown, like the eye-maker), I agree that being basically human, they deserve the full range of human rights.
PS. I have not read the book that "Blade Runner" was based on, and wonder if the novelist spells out the Replicant design in more detail.
I give you Pennsylvania, Las Vegas, and the Zuni Reservation.
the SINers can be happy ... Las Vegas ...
LOL Maybe we can keep the Singularity Intelligence and Nanotechers (SINers) at bay by interesting them in gambling!
Surely they will have SOME vices!
I don't recall if I read it or not. I read many of his books and they kind of blur together. He was not a hard-sci-fi guy, so I doubt there is a lot of detail on the Replicant techniques.
Subtitle of article:
'Moonbats are a hoot!'
"Ultimately I don't know how you test for true self-awareness compared simply to well-mimicked self-awareness. A very complex computer could very persuasively imitate human intelligence, sure. But actually think for itself?
Good point. No matter how convincing the test there will always be people who refuse to believe the AI is self aware. I wonder if this would lead to the next step in civil rights."
Here's something to ponder. Consider the closely related issue of whether a computer could ever feel emotions. Suppose we make a computer that accurately simulates emotions. Then how would we know if it really felt emotions?
In fact, none of us really knows if other people even feel emotions. All we know is, they look and act in ways that we look and act when we feel emotions. The same could be said for self-awareness.
" Things are changing. And fast."
I agree. Before the end of this century, technology will completely transform life as we know it---unless we fall into another Dork Age. Perhaps that's not such a bad alternative?
"Most animals don't make/use tools, in the main, save useful adaptations and behaviours they have been endowed with. It is possible, however, someday we'll see an ape fashion a ladder and escape from a zoo."
Well, chimpanzees fashion simple tools. That is to say, they will alter a stick in various ways to make it more efficient for poking ants out of an anthill, etc.
"Enormously inefficient perhaps from an electronic engineer's point of view, but remarkably capable, of that there is no doubt."
Actually, the brain is pretty efficient, even from an electronic engineer's point of view. What is noticeable about the comparison between the brain and the computer is how slow the various processes in the brain are. This is counteracted, however, by the complexity and richness of each neuron compared to each transistor.
Good example.
Studying the gorillas recently at San Diego Wild Animal Park, I was practically hoping to see three or four of them climb on each other's shoulders and have at least one run free of the enclosure. Then I noticed the wires around the moat's periphery attached to insulators...
I've heard the brain described as "massively parallel".
Forbin: His sex life.
Colossus: ...
Cheers!
Methinks the programs were and are semi-empirical, based on human knowledge of principles (weighting various configurations as more advantageous), rather than derived ab initio.
Cheers!
Circular reasoning.
By definition, you are "assuming" a "self" in the case of the robot.
I'll only accept that computers are self aware when they ask for oral sex.
Cheers!
A Schlemmelmayer relay, perhaps? ;-)
(Scroll down to "Button, button"...)
Cheers!
Oh, absolutely. Deep Blue had a huge opening book of lines thoroughly worked out in grandmaster-level analysis, and the machine was operated in cooperation with American Grandmaster Joel Benjamin. It took a coordinated team effort to beat Kasparov. If Deep Blue had had to work out its opening play on its own, it is likely Kasparov would have won every game handily.
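To illustrate the "semi-empirical" point made a few posts up: here is a minimal sketch (in Python, with the conventional human-chosen piece values, and a toy board representation I made up for illustration) of the kind of hand-tuned material evaluation that chess programs of that era built on -- weights supplied by human chess knowledge, not derived ab initio:

```python
# Hand-tuned ("semi-empirical") material evaluation, as used as a baseline
# in classical chess engines. The piece values are conventional human
# heuristics (pawn = 1, knight = 3, ...), not derived from first principles.
PIECE_VALUES = {"P": 1.0, "N": 3.0, "B": 3.25, "R": 5.0, "Q": 9.0, "K": 0.0}

def evaluate(board):
    """Score a position as material balance: positive favors White.

    `board` is a toy stand-in for a real board structure: a list of piece
    codes, uppercase for White ("P" = White pawn), lowercase for Black.
    """
    score = 0.0
    for piece in board:
        value = PIECE_VALUES.get(piece.upper(), 0.0)
        score += value if piece.isupper() else -value
    return score

# Example: White is up a knight for a pawn.
print(evaluate(["K", "Q", "N", "k", "q", "p"]))  # 2.0
```

Real engines like Deep Blue layered many more hand-weighted positional terms (king safety, pawn structure, mobility) on top of material, but the principle is the same: the numbers encode human judgment.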