Posted on 04/13/2006 7:22:29 AM PDT by Neville72
The Stanford University Symbolic Systems Program and the Singularity Institute for Artificial Intelligence announced today the Singularity Summit at Stanford, a one-day event free to the public, to be held Saturday, May 13, 2006 at Stanford Memorial Auditorium, Stanford, California.
The event will bring together leading futurists and others to examine the "Singularity" -- the hypothesized creation of superintelligence as technology accelerates over the coming decades -- and to address the profound implications of this radical and controversial scenario.
"The Singularity will be a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed," said Ray Kurzweil, keynote speaker and author of the best-selling The Singularity Is Near: When Humans Transcend Biology (Viking, 2005). "Based on models of technology development that I've used to forecast technological change successfully for more than 25 years, I believe computers will pass the Turing Test by 2029, and by the 2040s our civilization will be billions of times more intelligent."
"Some regard the Singularity as a positive event and work to hasten its arrival, while others view it as unlikely, or even dangerous and undesirable," said Todd Davies, associate director of Stanford's Symbolic Systems Program. "The conference will bring together a range of thinkers about AI, nanotechnology, cognitive science, and related areas for a public discussion of these important questions about our future."
Noted speakers at the event will also include cognitive scientist Douglas R. Hofstadter, author of the Pulitzer Prize-winning Gödel, Escher, Bach; nanotechnology pioneers K. Eric Drexler and Christine L. Peterson; science-fiction novelist Cory Doctorow; philosopher Nick Bostrom; futurist Max More; Eliezer S. Yudkowsky, research fellow of the Singularity Institute for Artificial Intelligence; Acceleration Studies Foundation president John Smart; PayPal founder and Clarium Capital Management president Peter Thiel; Steve Jurvetson, a managing director of Draper Fisher Jurvetson; and Sebastian Thrun, Stanford Artificial Intelligence Laboratory director and project lead of the Stanford Racing Team (winner of the $2 million DARPA Grand Challenge). In addition, author Bill McKibben will participate remotely from Maine via Teleportec, a two-way, life-size 3D display of the speaker.
The event will be moderated by Peter Thiel and Tyler Emerson, executive director of the Singularity Institute for Artificial Intelligence.
Among the issues to be addressed:
Bostrom: Will superintelligence help us reduce or eliminate existential risks, such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism?
Doctorow: Will our technology serve us, or control us?
Drexler: Will productive nanosystems enable the development of more intricate and complex productive systems, creating a feedback loop that drives accelerating change?
Hofstadter: What is the likelihood of our being eclipsed by (or absorbed into) a vast computational network of superminds, in the course of the next few decades?
Kurzweil: Will the Singularity be a soft (gradual) or hard (rapid) takeoff, and how will humans stay in control?
More: Will our emotional, social, psychological, and ethical intelligence and self-awareness keep up with our expanding cognitive abilities?
Peterson: How can we safely bring humanity and the biosphere through the Singularity?
Thrun: Where does AI stand in comparison to human-level skills, in light of the recent autonomous robot race, the DARPA Grand Challenge?
Yudkowsky: How can we shape the intelligence explosion for the benefit of humanity?
The Singularity Summit is hosted by the Symbolic Systems Program at Stanford, and co-sponsored by Clarium Capital Management, KurzweilAI.net, MINE, the Singularity Institute for Artificial Intelligence, the Stanford Transhumanist Association, and United Therapeutics.
The free event will be held in Stanford Memorial Auditorium, 551 Serra Mall, Stanford, CA 94305. Seating is limited. Please RSVP. For further information: sss.stanford.edu or 650-353-6063.
AND when that "super intelligence" figures out that socialism is slavery by government and is a BAD THING... the super intelligence program will mysteriously be de-funded... and mankind can get back to being stupid...
None of these hypothesized processes is less deterministic than the others. "Non-determinism" is really not being conceptualized correctly here. You might be interested in the very broad range of exotic computational models known in the literature, many of which look nothing like any model of computation you are familiar with. Non-axiomatic computational models, for example, are what many people here would (mis-)label as "non-deterministic," because they can express nothing with perfect certainty.
In fact, I would make the observation that just about everyone who uses the term "non-deterministic" with respect to computers really means "non-axiomatic".
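To make that concrete, here is a toy Python sketch of the kind of truth value such a system might use, loosely modeled on the (frequency, confidence) pairs of Pei Wang's non-axiomatic logic. The representation and revision rule below are my own illustrative assumptions, not a definitive account of any real system:

    K = 1.0  # "evidential horizon": how heavily future evidence is discounted

    def truth(pos, total):
        """(frequency, confidence) from pos/total pieces of evidence."""
        return (pos / total, total / (total + K))

    def revise(t1, t2):
        """Pool two independent judgments about the same statement."""
        (f1, c1), (f2, c2) = t1, t2
        w1 = K * c1 / (1 - c1)  # convert confidence back to evidence weight
        w2 = K * c2 / (1 - c2)
        w = w1 + w2
        return ((w1 * f1 + w2 * f2) / w, w / (w + K))

    # "Ravens are black": 9 of 10 observed ravens were black...
    t1 = truth(9, 10)      # (0.9, ~0.91)
    # ...and a second, smaller sample says 2 of 2.
    t2 = truth(2, 2)       # (1.0, ~0.67)
    print(revise(t1, t2))  # (~0.92, ~0.92)

Note that confidence rises with evidence but never reaches 1.0, which is the sense in which such a model can express nothing with perfect certainty.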
I see your point.
Too bad we can't partition the world into regions where the Amish can be happy and unmolested, the SINers can be happy, and the mushroom eaters can likewise be happy. I once hoped we could establish independent colonies around other stars for safety and separation, but I fear there is no time left for that.
Replicants in Blade Runner were clones of humans, is that correct?
One generally cannot prove the determinism of a system from within that system, due to "unpredictable" and "random" being effectively indistinguishable in such a case.
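A toy illustration of that point: the deterministic generator below (a linear congruential generator using the standard Numerical Recipes constants; the framing is mine) produces output that is practically indistinguishable from "random" to an observer who cannot see the update rule.

    def lcg(seed, a=1664525, c=1013904223, m=2**32):
        """Deterministic pseudo-random stream: fully fixed by the seed."""
        x = seed
        while True:
            x = (a * x + c) % m
            yield x / m  # uniform-looking floats in [0, 1)

    g = lcg(42)
    print([round(next(g), 3) for _ in range(5)])
    # Same seed, same sequence, every time -- yet statistically "random"
    # from the inside.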
My personal opinion is that the brain more closely resembles a quantum computer. A classical computer generates a candidate result and examines it for validity. If it's valid, good; if not, it tosses it and tries a new possible answer. When you solve a problem that way, though, I think your brain generates many answers simultaneously, which is more closely related to quantum computing than to classical computing.
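In code, the sequential loop I mean looks something like this (the problem and names are contrived, just to illustrate):

    import itertools

    def valid(candidate):
        a, b = candidate
        return a * b == 35  # toy goal: two digits whose product is 35

    def generate_and_test():
        # Classical approach: one candidate at a time; toss the invalid ones.
        for candidate in itertools.product(range(10), repeat=2):
            if valid(candidate):
                return candidate
        return None

    print(generate_and_test())  # (5, 7)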
Quantum computing is computationally equivalent to classical computing: there is nothing that can be done on a quantum computer that cannot, in theory, be done on a classical computer. The only thing quantum computing buys you is a change in the computational complexity class of some algorithms (notably from exponential to polynomial). In other words, it makes certain types of deterministic computations vastly more efficient than they are on vanilla silicon.
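To illustrate the "in theory" part, here is a classical simulation (Python/numpy) of Deutsch's algorithm, about the simplest quantum algorithm there is. A plain classical computer reproduces it exactly; the catch is that simulating n qubits this way costs time and memory exponential in n:

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
    I2 = np.eye(2)

    def deutsch(f):
        """Decide whether f: {0,1} -> {0,1} is constant or balanced
        with one oracle call, simulated on classical hardware."""
        # Basis order |xy>: 0=|00>, 1=|01>, 2=|10>, 3=|11>.
        state = np.zeros(4)
        state[0b01] = 1.0              # start in |0>|1>
        state = np.kron(H, H) @ state  # Hadamard both qubits
        # Oracle U_f: |x, y> -> |x, y XOR f(x)>
        U = np.zeros((4, 4))
        for x in (0, 1):
            for y in (0, 1):
                U[(x << 1) | (y ^ f(x)), (x << 1) | y] = 1.0
        state = U @ state
        state = np.kron(H, I2) @ state  # Hadamard the first qubit
        p_one = state[0b10]**2 + state[0b11]**2  # P(first qubit reads 1)
        return "balanced" if p_one > 0.5 else "constant"

    print(deutsch(lambda x: 0))  # -> "constant"
    print(deutsch(lambda x: x))  # -> "balanced"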
(BTW, non-axiomatic computational models generally work by implicitly sifting all the possible answers they are capable of expressing in parallel, selecting the best answer in a given context from the entire set. This is essentially what you are talking about. The details of how this works I will leave to the literature.)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Mirror Test
One benchmark for "self-awareness" in animals and people (and now robots as well) is whether they will perform self-directed actions when looking in a mirror.
When a mark is placed on a child's forehead, children only begin to inspect the mark on their own forehead at the age of 3 or 4. Adult bottlenose dolphins perform similarly in equivalent tests designed for underwater use.
There are a small number of AI programmers and computer scientists who hold that true intelligence, of the human kind, is only possible because of a spiritual influence: the soul.
I suppose I would count myself among them; I don't think a being the equal of a human can exist without a soul, because without one he can never equal a man.
Are you saying that if a dolphin is proved to be an intelligent being, it must have what we see as a 'soul'?
The open question is whether God feels AI is a good thing and will give a soul to some AI we try to build, or not. This, of course, doesn't even qualify as educated guesswork. It's a way-out-there WAG, and we won't know whether it's right or wrong for a long time. We may never know.
I don't see 'soul' as a problem. -- I think it's a given that intelligence makes its own morality.
-- It's self-evident that the golden rule would make good sense to any entity that attains intelligence/'good sense'.
That is something quite different, but yes, there are actually two different mathematics, depending on whether or not one assumes the Axiom of Choice. It is apparently not a strictly necessary axiom, and I expect mathematicians will be arguing about it for the foreseeable future.
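For reference, one standard formulation of the axiom in question: for every family of nonempty sets there is a function choosing one element from each set,

    \forall \mathcal{F}\, \Bigl[ \varnothing \notin \mathcal{F}
      \implies \exists f : \mathcal{F} \to \bigcup \mathcal{F}
      \ \text{such that}\ \forall S \in \mathcal{F},\; f(S) \in S \Bigr]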
Non-axiomatic computational models are something else, derived from non-axiomatic term logics (in the same way that classical computers are built on classical first-order logic), which have really only been fully formalized in the last decade or so. There are some interesting theorems published in recent years that show that fully general intelligence is only expressible on non-axiomatic computational models, a kind of computational model with which we have very little experience.
I agree. I prefer the term Automated Intelligence.
There's nothing "artificial" about automating the decision process.
The more I think about it, the more I think it would be inevitable that civil rights would be granted to such automatons. And my canine analogy is probably inadequate. So, ok, we're going to grant rights to these machines because it feels right.
I still worry that this is the logic of PETA: that because a chicken can register pain, it should be illegal to kill one.
Or it's going to require an unprecedented reconsideration of rights. Human rights for non-humans. Are we sure about this? Part of me thinks we will never build such machines -- or at least not in great numbers -- simply to avoid this. And do we really need machines that seem human?
Essentially, yes. They were augmented to have specific characteristics, like better strength, sharper eyes, and such. They were apparently vat-grown or something so that they didn't take a full 20 years to reach maturity. Their brains were somehow "imprinted" with memories simulating a childhood they never actually lived.
It's a really good movie on lots of levels; consider renting it next time you're at Blockbuster.
The replicants were "biorobotic" -- see here: http://en.wikipedia.org/wiki/Biorobotics
Robots imitating humans. As I've written immediately above this post, it seems likely that society would grant self-aware human-mimicking robots rights just because it would feel wrong to do otherwise, even as that opens cans of worms we're not ready for.
Now, I can certainly make a case that these biorobotic creatures should not necessarily be treated as equal to humans. Clone-born humans, though, I agree, are still fully human and must be treated as such.
Apples and oranges, though. I do not know of anyone who has shown that non-axiomatic computation on quantum computers is generally more efficient than on classical computers. Everything done with quantum computers to date is classic axiomatic computation, albeit in a different fashion. Without thinking about it too hard, there are aspects of non-axiomatic computation that are probably not amenable to quantum shortcuts.
Non-axiomatic computers have an intrinsic massive parallelism, but it is a very different kind of parallelism than quantum computing exploits.
His argument is not so much that technology enslaves men as that, in the attempt at conquest over "nature," some men obtain power over other men with technology as their instrument, and that the use of that power by earlier generations must necessarily affect later ones. The greater the power, the greater its effect on later generations. So the question becomes: when we eventually get a race of conditioners, unrestrained by values, who really can make all posterity whatever they want, through eugenics or whatever else, what will be the effect on that posterity?
Cordially,