
Singularity Summit At Stanford Explores Future Of 'Superintelligence'
KurzweilAI.net ^ | 4/13/2006 | Staff

Posted on 04/13/2006 7:22:29 AM PDT by Neville72

The Stanford University Symbolic Systems Program and the Singularity Institute for Artificial Intelligence announced today the Singularity Summit at Stanford, a one-day event free to the public, to be held Saturday, May 13, 2006 at Stanford Memorial Auditorium, Stanford, California.

The event will bring together leading futurists and others to examine the "Singularity" -- a hypothesized creation of superintelligence as technology accelerates over the coming decades -- and to address the profound implications of this radical and controversial scenario.

"The Singularity will be a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed," said Ray Kurzweil, keynote speaker and author of the best-selling The Singularity Is Near: When Humans Transcend Biology (Viking, 2005). "Based on models of technology development that I've used to forecast technological change successfully for more than 25 years, I believe computers will pass the Turing Test by 2029, and by the 2040s our civilization will be billions of times more intelligent."

"Some regard the Singularity as a positive event and work to hasten its arrival, while others view it as unlikely, or even dangerous and undesirable," said Todd Davies, associate director of Stanford's Symbolic Systems Program. "The conference will bring together a range of thinkers about AI, nanotechnology, cognitive science, and related areas for a public discussion of these important questions about our future."

Noted speakers at the event will also include cognitive scientist Douglas R. Hofstadter, author of the Pulitzer prize-winning Gödel, Escher, Bach; nanotechnology pioneers K. Eric Drexler and Christine L. Peterson; science-fiction novelist Cory Doctorow; philosopher Nick Bostrom; futurist Max More; Eliezer S. Yudkowsky, research fellow of the Singularity Institute for Artificial Intelligence; Acceleration Studies Foundation president John Smart; PayPal founder and Clarium Capital Management president Peter Thiel; Steve Jurvetson, a Managing Director of Draper Fisher Jurvetson; and Sebastian Thrun, Stanford Artificial Intelligence Laboratory director and Project Lead of the Stanford Racing Team (DARPA Grand Challenge $2 million winner). In addition, author Bill McKibben will participate remotely from Maine via Teleportec, a two-way, life-size 3D display of the speaker.

The event will be moderated by Peter Thiel and Tyler Emerson, executive director of the Singularity Institute for Artificial Intelligence.

Among the issues to be addressed:

Bostrom: Will superintelligence help us reduce or eliminate existential risks, such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism?

Doctorow: Will our technology serve us, or control us?

Drexler: Will productive nanosystems enable the development of more intricate and complex productive systems, creating a feedback loop that drives accelerating change?

Hofstadter: What is the likelihood of our being eclipsed by (or absorbed into) a vast computational network of superminds, in the course of the next few decades?

Kurzweil: Will the Singularity be a soft (gradual) or hard (rapid) takeoff, and how will humans stay in control?

More: Will our emotional, social, psychological, ethical intelligence and self-awareness keep up with our expanding cognitive abilities?

Peterson: How can we safely bring humanity and the biosphere through the Singularity?

Thrun: Where does AI stand in comparison to human-level skills, in light of the recent autonomous robot race, the DARPA Grand Challenge?

Yudkowsky: How can we shape the intelligence explosion for the benefit of humanity?

The Singularity Summit is hosted by the Symbolic Systems Program at Stanford, and co-sponsored by Clarium Capital Management, KurzweilAI.net, MINE, the Singularity Institute for Artificial Intelligence, the Stanford Transhumanist Association, and United Therapeutics.

The free event will be held in Stanford Memorial Auditorium, 551 Serra Mall, Stanford, CA 94305. Seating is limited. Please RSVP. For further information: sss.stanford.edu or 650-353-6063.


TOPICS: Miscellaneous
KEYWORDS: ai; borg; computer; cyborg; evolution; evolutionary; exponentialgrowth; future; futurist; genetics; gnr; humanity; intelligence; knowledge; kurzweil; longevity; luddite; machine; mind; nanotechnology; nonbiological; physics; raykurzweil; robot; robotics; science; singularity; singularityisnear; spike; stanford; superintelligence; technology; thesingularityisnear; transhuman; transhumanism; trend; virtualreality; wearetheborg
To: Diamond
Beat me to it; I had that exact quote in mind...

Cheers!

(...or read Lord Feverstone's recruitment speech to Mark Studdock in That Hideous Strength)

Cheers!

121 posted on 04/13/2006 7:58:12 PM PDT by grey_whiskers (The opinions are solely those of the author and are subject to change without notice.)

To: Jack Black
Have you travelled through parts of Red China lately?

The point is that the temptation exists to misuse the technology--and if the technology is sufficiently advanced, then there may be no recourse, no escape from the tyranny.

Cheers!

122 posted on 04/13/2006 8:02:18 PM PDT by grey_whiskers (The opinions are solely those of the author and are subject to change without notice.)

To: grey_whiskers

Actually, I saw a news article not long ago about a group of researchers who had successfully tested an output neural interface with a chimpanzee. The chimp was able to control a simple video game through a nerve monitor taped to the base of its neck.


123 posted on 04/13/2006 8:21:52 PM PDT by JamesP81 (Socialism is based on how things should be. Capitalism is based on how things are, and deals with it)

To: Fitzcarraldo

"I've heard the brain described as "massively parallel"."

Yes.


124 posted on 04/13/2006 9:49:51 PM PDT by strategofr (Hillary stole 1000+ secret FBI files on DC movers & shakers, Hillary's Secret War, Poe, p. xiv)

To: Mr. Jeeves

"Methinks the programs were and are semi-empirical, based on human knowledge of principles (weighting various configurations as more advantageous), rather than from ab initio.

Oh, absolutely. Deep Blue had a huge opening book of lines thoroughly worked out in grandmaster-level analysis, and the machine was co-operated by American Grandmaster Joel Benjamin. It took a coordinated team effort to beat Kasparov. If Deep Blue had had to work out its opening play on its own, it is likely Kasparov would have won every game handily."

This is where AI starts. In itself it is still a major accomplishment. Once the game started, the machine won without outside assistance.

But mathematical proofs and other results have been discovered by computers---things no person knew. The boundary people imagine does not exist.
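
As a purely illustrative aside (my own toy sketch, not Deep Blue's actual code; the names and book contents are hypothetical), the "opening book plus search" pattern Mr. Jeeves describes looks roughly like this in Python: the engine plays human-curated book moves while the position is known, and falls back to its own calculation once the game leaves the book.

# Toy sketch of a book-then-search engine; OPENING_BOOK and search_fn are hypothetical.
OPENING_BOOK = {
    (): "e2e4",                 # first move chosen from human analysis
    ("e2e4", "c7c5"): "g1f3",   # a line worked out by grandmasters, not the machine
}

def choose_move(history, search_fn):
    """Play the book move if this position is in the book, else search on our own."""
    book_move = OPENING_BOOK.get(tuple(history))
    if book_move is not None:
        return book_move        # human knowledge supplies the move
    return search_fn(history)   # the machine's own calculation takes over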


125 posted on 04/13/2006 9:54:40 PM PDT by strategofr (Hillary stole 1000+ secret FBI files on DC movers & shakers, Hillary's Secret War, Poe, p. xiv)

To: Physicist; RadioAstronomer

Ping


126 posted on 04/14/2006 7:57:04 AM PDT by RightWingAtheist (Creationism is to conservatism what Howard Dean is to liberalism)

To: tpaine
I think we do have free will, a precious gift granted to mankind by no less than God Himself.

On the other hand, free will could be a complete illusion - try not to think of an elephant for the next minute.

127 posted on 04/22/2006 10:14:00 AM PDT by GregoryFul

To: GregoryFul; JamesP81; tortoise
JamesP81 commented:

I think we do have free will, a precious gift granted to mankind by no less than God Himself.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

I replied:

Aren't you ignoring the fact that all animals have free will, even though many are not self-aware?

The ability to determine its next action [free will] may not necessarily indicate the level of an entity's intelligence.
45 tpaine

I'm not entirely convinced most animals do have free will. They may, but I don't know how you'd prove that one way or another.
62 JamesP81

No system has the ability to know with certainty its next action.
78 by tortoise

-- Tortoise, your observation may indeed 'prove the point' about free will.

Any 'complex animal system' has the ability to vary its next action -- or, put another way, the inability "to know with certainty its next action". It has free will.

128 posted on 04/22/2006 2:40:37 PM PDT by tpaine

To: tpaine
The prohibition on systems perfectly self-modeling can be explained pretty intuitively without getting into hairy mathematics.

First, to deterministically predict the next action of a system, one has to know the entire state space of the system. Second, you have to have an observer making a prediction based upon a trivial analysis of the state to determine what is going to happen next.

This exposes a pretty fundamental inequality. The observer is a finite system with non-zero Kolmogorov complexity. To perfectly predict the next state of a system, one requires resources equal to the Kolmogorov complexity of the system (to hold the state information) plus the Kolmogorov complexity of the observer. Unfortunately, by definition one cannot have a computational complexity that exceeds one's own state space in order to host such an observer and thereby perfectly predict one's own next state. Not coincidentally, this closely resembles Gödel's incompleteness theorem.
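
In symbols (my notation, not the poster's, writing K(.) for Kolmogorov complexity; this is only a sketch of the counting argument above, for a system S and an observer O):

\[
K(\mathrm{predictor}) \;\ge\; K(S) + K(O) \;>\; K(S) \qquad \text{(assuming } K(O) > 0\text{)},
\]

whereas any predictor embedded within S can have complexity at most K(S) -- a contradiction, so perfect self-prediction is impossible.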

For this reason, no system can ever predict with certainty its own next state or action: perfect prediction of a system's next state always requires a predictor with a larger state space than the system being predicted. It follows that no system can make perfect predictions about its own actions -- it can never have the mathematically required Kolmogorov complexity to do so. Fortunately, this degrades gracefully: it is quite possible to make very good predictions about one's future actions, just not perfect ones.
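
A toy illustration of that conclusion (my own sketch, using a diagonalization-style argument rather than the Kolmogorov-complexity accounting above; the function names are hypothetical):

# An agent that consults a supposed perfect predictor of its own next action,
# then deliberately contradicts it.
def agent(predict_my_next_action):
    predicted = predict_my_next_action()      # claimed perfect self-prediction
    return "B" if predicted == "A" else "A"   # do the opposite of the prediction

# Whatever the predictor returns, the agent's actual action differs from it,
# so no such self-predictor can exist for a system able to consult it.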

This makes for a very interesting and mathematically elegant definition of "free will". No system can ever predict its own next state, but any sufficiently larger system can predict the state of a lesser system. Because of this, no system can ever view its own actions as deterministic even when it can prove that its actions are in fact deterministic in the abstract. In this sense, all systems necessarily must retain a mathematically impenetrable illusion that they have "free will", even when they know full well that they do not.

While many arguments about the subject treat "free will" in terms of absolutes, it is in fact relative. The answer is not so simple as saying we either have it or we do not.

(The formal version of this in the mathematics is very elegant and extremely important as theorems go. That it perfectly frames the question of "free will" is but one of its many controversial side effects.)

129 posted on 04/22/2006 10:11:40 PM PDT by tortoise (All these moments lost in time, like tears in the rain.)

To: tortoise
Modern political theory on free will owes much to the animal kingdom.
Placing theory on the scales of justice and weighing it against practice can produce similar results to contrasting free will and post-free will-ism. Consider this, spoken at the tender age of 14 by one of the great political analysts Maximilian Rock speaking on the Kolmogorov complexity: 'consciousness complicates a myriad of progressions.' [2] What a fantastic quote.
If free will be the food of politics, play on.
The question which we must each ask ourselves is, will we allow free will to win our vote?

We can say with certainty free will has played a large part in the development of man in the 20th Century and its influence remains strong. It inspires, puts out 'fires', and it is, above all, human.


Address: http://radioworldwide.gospelcom.net/essaygenerator/essay.php
130 posted on 04/23/2006 7:03:48 AM PDT by tpaine

To: tpaine
The question which we must each ask ourselves is, will we allow free will to win our vote? We can say with certainty free will has played a large part in the development of man in the 20th Century and its influence remains strong. It inspires, puts out 'fires', and it is, above all, human.

Consciousness is arguably nothing more than a high-order algorithmic domain -- self-reflective meta-algorithms -- which greatly strengthen the resistance of our choices to prediction.

As a practical matter, even if we are purely deterministic we must still make choices in our lives, and individuals are generally the most authoritative sources of information about themselves. The idea that all men have "free will" is foundational to our society (and it is assumed in the non-deterministic sense, not the Kolmogorov inequality sense) and much of what we value in our society follows from that. Socialists actually occupy the opposite extreme, invalidly assuming that they can discern the determinism of others and so can remove choice from them with no consequences.

Free will as strong non-determinism works well as a general assumption, since there is some de facto truth to it in practice, though this is because humans are unpredictable to other humans, which is indistinguishable from "random" for most intents and purposes. Bad assumption, good results. The strong-determinism proponents make a good assumption in the abstract, but skipped the part in math class where humans would still be complex and unpredictable. (ObFutureTangent: given sufficiently powerful external observers, aka "computers", with super-human learning abilities, the computers could far more successfully treat humans in this fashion. For systems with roughly equivalent intelligence, it is mostly nonsense.)

I would also note that the absence of free will has far fewer legal consequences than liberal lawyers might argue, particularly once the system adjusted. Criminals are still criminals whether they chose to be or not, and in many ways it would allow their treatment as a "threat to society" to be more appropriate. Just because my car does not have free will does not mean that I do not set the parking brake when parked on a hill.

131 posted on 04/23/2006 10:13:37 AM PDT by tortoise (All these moments lost in time, like tears in the rain.)

