
Singularity Summit At Stanford Explores Future Of 'Superintelligence'
KurzweilAI.net ^ | 4/13/2006 | Staff

Posted on 04/13/2006 7:22:29 AM PDT by Neville72

The Stanford University Symbolic Systems Program and the Singularity Institute for Artificial Intelligence announced today the Singularity Summit at Stanford, a one-day event free to the public, to be held Saturday, May 13, 2006 at Stanford Memorial Auditorium, Stanford, California.

The event will bring together leading futurists and others to examine the "Singularity" -- the hypothesized creation of superintelligence as technology accelerates over the coming decades -- and to address the profound implications of this radical and controversial scenario.

"The Singularity will be a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed," said Ray Kurzweil, keynote speaker and author of the best-selling The Singularity Is Near: When Humans Transcend Biology (Viking, 2005). "Based on models of technology development that I've used to forecast technological change successfully for more than 25 years, I believe computers will pass the Turing Test by 2029, and by the 2040s our civilization will be billions of times more intelligent."

"Some regard the Singularity as a positive event and work to hasten its arrival, while others view it as unlikely, or even dangerous and undesirable," said Todd Davies, associate director of Stanford's Symbolic Systems Program. "The conference will bring together a range of thinkers about AI, nanotechnology, cognitive science, and related areas for a public discussion of these important questions about our future."

Noted speakers at the event will also include cognitive scientist Douglas R. Hofstadter, author of the Pulitzer Prize-winning Gödel, Escher, Bach; nanotechnology pioneers K. Eric Drexler and Christine L. Peterson; science-fiction novelist Cory Doctorow; philosopher Nick Bostrom; futurist Max More; Eliezer S. Yudkowsky, research fellow of the Singularity Institute for Artificial Intelligence; Acceleration Studies Foundation president John Smart; PayPal founder and Clarium Capital Management president Peter Thiel; Steve Jurvetson, a Managing Director of Draper Fisher Jurvetson; and Sebastian Thrun, Stanford Artificial Intelligence Laboratory director and Project Lead of the Stanford Racing Team (DARPA Grand Challenge $2 million winner). In addition, author Bill McKibben will participate remotely from Maine via Teleportec, a two-way, life-size 3D display of the speaker.

The event will be moderated by Peter Thiel and Tyler Emerson, executive director of the Singularity Institute for Artificial Intelligence.

Among the issues to be addressed:

Bostrom: Will superintelligence help us reduce or eliminate existential risks, such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism?

Doctorow: Will our technology serve us, or control us?

Drexler: Will productive nanosystems enable the development of more intricate and complex productive systems, creating a feedback loop that drives accelerating change?

Hofstadter: What is the likelihood of our being eclipsed by (or absorbed into) a vast computational network of superminds, in the course of the next few decades?

Kurzweil: Will the Singularity be a soft (gradual) or hard (rapid) takeoff, and how will humans stay in control?

More: Will our emotional, social, psychological, ethical intelligence and self-awareness keep up with our expanding cognitive abilities?

Peterson: How can we safely bring humanity and the biosphere through the Singularity?

Thrun: Where does AI stand in comparison to human-level skills, in light of the recent autonomous robot race, the DARPA Grand Challenge?

Yudkowsky: How can we shape the intelligence explosion for the benefit of humanity?

The Singularity Summit is hosted by the Symbolic Systems Program at Stanford, and co-sponsored by Clarium Capital Management, KurzweilAI.net, MINE, the Singularity Institute for Artificial Intelligence, the Stanford Transhumanist Association, and United Therapeutics.

The free event will be held in Stanford Memorial Auditorium, 551 Serra Mall, Stanford, CA 94305. Seating is limited. Please RSVP. For further information: sss.stanford.edu or 650-353-6063.


To: JamesP81
Basically, I don't think an AI would have the sheer d@mned bloodthirsty meanness necessary to kill us all before we took it down.

Meanness is a human trait. Most of the SI researchers espouse a humanistic/relativistic view of reality anyway. Will they program SI with the Ten Commandments? I don't think so.

In terms of lethality, an SI analogy to a nuclear weapon is that the software is the trigger and nanotechnology is the lump of plutonium. Kept separate, we might have a chance. Together, the world could be transformed into "computronium" overnight.

41 posted on 04/13/2006 9:06:47 AM PDT by Fitzcarraldo

To: strategofr

"Self-awareness is not the most important question."

Well, we already know that computers can outperform us, given a set of instructions. And I suppose that a computer that could write its own program and set of instructions would be quite "intelligent" -- even possibly dangerous.

But it would also lack imagination, no? Or at least would have a limited imagination. We'd still have ingenuity on our side.


42 posted on 04/13/2006 9:12:44 AM PDT by NoStaplesPlease

To: JamesP81
There are concerns about SI, but I'm not yet convinced we will have the technology to do it in this century.

I hope you are right. I favor a quarantined SI/nanotech solution, with assured "reboot" capability, maybe on the surface of the Moon or Venus.

Nuclear weapons were much easier to control, once they were developed. SI/nanotech will be extraordinarily difficult to control, if they are developed at all.

43 posted on 04/13/2006 9:13:08 AM PDT by Fitzcarraldo

To: tpaine
One benchmark for "self-awareness" in animals and people (and now robots as well) is whether they will perform self-directed actions when looking in a mirror.

I think this form of self-awareness is operational only from the standpoint of an outside observer, and doesn't prove that a robot is actually self-aware in the human "I know I exist" sense.

44 posted on 04/13/2006 9:15:35 AM PDT by Fitzcarraldo

To: JamesP81
It's almost an issue of free will.

Computers do the things they do because they literally have no choice. They can't choose what to do or what not to do any more than the sun could choose whether or not to quit shining or the snow could choose whether or not to be cold.

Human beings, however, have this ability, which is almost paradoxical; the ability to choose anything you want suggests that true randomness exists and the universe is non-deterministic, or at least that the universe allows non-determinism.
If the universe is deterministic rather than non-deterministic, however, then human beings really don't have free will, and any belief that you do is simply a lie; or rather, you held that belief because you were programmed to and had no choice in the matter.

As for me, I don't believe that. I think we do have free will, a precious gift granted to mankind by no less than God Himself.
Anyway, that's my personal opinion. Your mileage will probably vary.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~



Aren't you ignoring the fact that all animals have free will, even though many are not self-aware?

The ability to determine its next action [free will] may not necessarily indicate the level of an entity's intelligence.
45 posted on 04/13/2006 9:23:21 AM PDT by tpaine

To: JamesP81
Anyway, these people are a little crazy, in my opinion. Creating true AI is not as simple as they make it sound, and it may not be desirable either.

"The Singularity will be a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed," said Ray Kurzweil, keynote speaker and author of the best-selling The Singularity Is Near: When Humans Transcend Biology (Viking, 2005).

I'm reading the book. About halfway through. You knock the ideas these people have, but they are well thought through and documented in spades. Have a look at the book the next time you are in Borders. You may be surprised. In particular, Kurzweil isn't predicting that any of this is going to happen overnight. The big changes are 30 to 40 years away. Look back 40 years and think about the state of automatic voice recognition, pattern recognition, database indexing of billions of documents, instant and essentially free worldwide communication in any household that wants it, and computers for $800 that are as good as anything that IBM had in 1966.

Things are changing. And fast.

46 posted on 04/13/2006 9:25:07 AM PDT by InterceptPoint

To: SunTzuWu
These are terrific questions, btw.

I wonder if this would lead to the next step in civil rights.

I wonder too. And as someone who does not think that rights extend from the ability to feel pain (I'm looking at you, PETA), I certainly oppose such a thing. But any liberal who's seen Blade Runner will probably make a case for it.

If a computer can become self-aware, does it have the ability to believe that God is self-evident? If it does, do you think that it might then have a soul?

Wow. I guess my answer is 1) yes, and 2) no. I guess it's similar to the question about whether a clone would have a soul, right? Now, I tend to think a clone would -- although not conceived in the normal process, it would be flesh, a biological human being. But a computer is still silicon. In humans, I imagine the soul and life begin at the same time. But even if a computer is self-aware and believes in God, it's still not "alive." Deep thoughts, all right...
47 posted on 04/13/2006 9:27:15 AM PDT by NoStaplesPlease

To: JamesP81

I don't know. Ray Kurzweil has already revolutionized multiple areas of human endeavor. I believe he did a lot of the foundational work on digital audio sampling, which led to electronic music synthesizers that accurately mimic instruments. He also invented much of the basic OCR (optical character recognition) technology. His web site has a robotic person with a synthesized voice that you can interact with.

We routinely interact with voice response systems that are able to understand our speech. In 1985, a friend who was an AI research PhD at a university told me that such a thing 'might never be possible'.

I think the track record of Dr. Kurzweil is pretty impressive and I would not bet against him.


48 posted on 04/13/2006 9:29:08 AM PDT by Jack Black

To: JamesP81

Brilliant. Much better explained than I could manage. I'm with you all the way.


49 posted on 04/13/2006 9:30:44 AM PDT by NoStaplesPlease

To: Jack Black
But any liberal who's seen Blade Runner will probably make a case for it.

Huh? How about any person who has seen Blade Runner? After all, while genetically engineered and grown in vats, the replicants were people, with intelligence, feeling, emotion, and sensation. Would you support NOT extending rights to such people? Based on what ideology? Conservatism? I don't think so.

50 posted on 04/13/2006 9:32:00 AM PDT by Jack Black

To: tpaine
We're not the sum of our brain's computing power.

Prove it. That's just an assertion on your part. It may be correct, or it may not be.

51 posted on 04/13/2006 9:34:43 AM PDT by Jack Black

To: tpaine
Most animals don't make or use tools, in the main, apart from the useful adaptations and behaviors they have been endowed with. It is possible, however, that someday we'll see an ape fashion a ladder and escape from a zoo.

It has been amply demonstrated that humans can make both human and inhuman tools. The ultimate inhuman tool could be SI/nanotech (would the acronym SIN be apropos?). Food for thought.

52 posted on 04/13/2006 9:35:59 AM PDT by Fitzcarraldo

To: Jack Black
We routinely interact with voice response systems that are able to understand our speech. In 1985, a friend who was an AI research PhD at a university told me that such a thing 'might never be possible'.

Maybe he was basing his conclusions on projected CPU speeds and memory limitations, rather than software design.

Is the word "understand" appropriate here? Again, that is an anthropromorphism. I would prefer "convert".

53 posted on 04/13/2006 9:44:42 AM PDT by Fitzcarraldo

To: AntiGuv

Ping! Please add me to your list. Thanks


54 posted on 04/13/2006 9:45:58 AM PDT by wingsof liberty

To: in hoc signo vinces
I think the phrase "artificial intelligence" is a contradiction in terms, a deliberate misuse of language to promote a certain agenda. There is really no such thing.

Some of the questions posed seem very revealing of the agenda:

Yudkowsky: How can we shape the intelligence explosion for the benefit of humanity?
Peterson: How can we safely bring humanity and the biosphere through the Singularity?
Kurzweil: Will the Singularity be a soft (gradual) or hard (rapid) takeoff, and how will humans stay in control?
Doctorow: Will our technology serve us, or control us?

Consider who exactly is the "we" referred to in these questions. Is it possible that these questioners literally see themselves as the "we", the Controllers, the ones C.S. Lewis referred to as "The Conditioners" in his prescient little book The Abolition of Man? ("For the power of Man to make himself what he pleases means, as we have seen, the power of some men to make other men what they please.") Perhaps the questions might be more aptly put, "how will certain humans stay in control" of others?

"... In order to understand fully what Man's power over Nature, and therefore the power of some men over other men, really means, we must picture the race extended in time from the date of its emergence to that of its extinction. Each generation exercises power over its successors: and each, in so far as it modifies the environment bequeathed to it and rebels against tradition, resists and limits the power of its predecessors. This modifies the picture which is sometimes painted of a progressive emancipation from tradition and a progressive control of natural processes resulting in a continual increase of human power. In reality, of course, if any one age really attains, by eugenics and scientific education, the power to make its descendants what it pleases, all men who live after it are the patients of that power. They are weaker, not stronger: for though we may have put wonderful machines in their hands we have pre-ordained how they are to use them. And if, as is almost certain, the age which had thus attained maximum power over posterity were also the age most emancipated from tradition, it would be engaged in reducing the power of its predecessors almost as drastically as that of its successors. And we must also remember that, quite apart from this, the later a generation comes—the nearer it lives to that date at which the species becomes extinct—the less power it will have in the forward direction, because its subjects will be so few. There is therefore no question of a power vested in the race as a whole steadily growing as long as the race survives. The last men, far from being the heirs of power, will be of all men most subject to the dead hand of the great planners and conditioners and will themselves exercise least power upon the future.

The real picture is that of one dominant age—let us suppose the hundredth century A.D.—which resists all previous ages most successfully and dominates all subsequent ages most irresistibly, and thus is the real master of the human species. But then within this master generation (itself an infinitesimal minority of the species) the power will be exercised by a minority smaller still. Man's conquest of Nature, if the dreams of some scientific planners are realized, means the rule of a few hundreds of men over billions upon billions of men. There neither is nor can be any simple increase of power on Man's side. Each new power won by man is a power over man as well. Each advance leaves him weaker as well as stronger. In every victory, besides being the general who triumphs, he is also the prisoner who follows the triumphal car."
C.S. Lewis

Cordially,

55 posted on 04/13/2006 9:52:47 AM PDT by Diamond

To: Jack Black

Because the replicants still weren't human. Sure, they LOOKED like humans.

Basically, it tugs at the heartstrings. Looks right, feels right, but still is NOT the same thing.

So I think that while every person might indeed be tempted to legislate along these lines, it's only really the left who would.

Perhaps I'm naive. Maybe the appearance is all that matters. I mean, I don't think dogs have souls, but it is and should still be against the law to mistreat them. Perhaps the same kind of principle would apply?


56 posted on 04/13/2006 9:53:32 AM PDT by NoStaplesPlease

To: NoStaplesPlease
But AI, I don't buy it. Just because you link up an astonishing amount of processing power does not mean it's going to eventually become self-aware.

Kurzweil is a big-time self-promoter and carnival barker, so I'd be suspicious of any of his claims. As far as AI goes, we'll probably end up making AIs organic, like the brain already is, rather than building them from integrated circuits.

57 posted on 04/13/2006 10:00:05 AM PDT by Moonman62 (Federal creed: If it moves tax it. If it keeps moving regulate it. If it stops moving subsidize it)

To: wingsof liberty

Welcome aboard!


58 posted on 04/13/2006 10:02:03 AM PDT by AntiGuv (The 1967 UN Outer Space Treaty is bad for America and bad for humanity - DUMP IT!)

To: Jack Black
We're not the sum of our brain's computing power.

We are definitely more than our brain's ability to perform mathematical calculations. Even the process underlying our ability to do math problems is not analogous to the machine language of a modern CPU.

59 posted on 04/13/2006 10:02:19 AM PDT by Fitzcarraldo

To: Jack Black

'No Staples' made that comment at #16, not me.


60 posted on 04/13/2006 10:03:52 AM PDT by tpaine

