
Alan Turing and the New Emergentists
Evolution News and Views ^ | February 18, 2015 | Erik J. Larson

Posted on 02/18/2015 5:46:00 AM PST by Heartlander

The acclaimed Alan Turing biographical film The Imitation Game is up for multiple Oscars on Sunday. It is a tale of Turing as a tragic hero and misunderstood genius, irascible and certainly idiosyncratic, who insinuates himself into a job interview at Bletchley Park as a self-proclaimed mathematical genius, a claim later borne out as true. He "invents" the digital computer to solve the decryption challenge posed by the German Enigma machines, and thus saves the Allied powers from Hitler.

The film is a human-interest story, and accurate enough, though John von Neumann in the U.S. was busy engineering a prototype as well. However, you wouldn't watch it with an eye toward learning about the history of computing, or, perhaps most interesting, about Turing's legacy in current thought about Artificial Intelligence.

Well, what shall we say of that legacy?

To decide whether a machine has a mind, Turing famously said, talk to the machine. Language is for minds, and so if we can't tell the difference between a machine and a human in conversation (say, by teletype or text), then we should grant the machine the status of a human mind.

Practitioners of AI often call natural language understanding "AI-Complete," meaning that a computer able to interpret and generate discourse could do anything else a human can do. Turing's famous test is thus a behavioral definition: it ignores what's happening inside a machine and focuses on what the machine can actually do. Specific tasks like playing a game of chess or even the game of Jeopardy! don't count, because programmers can build a special-purpose machine to "play" those games.

In contrast, as Turing noted, language is domain independent (we can talk about anything), and so all these special purpose techniques inevitably fall short. Doubt it? Just keep talking to the machine, and eventually it'll show that it doesn't understand, and (running the test in reverse) that it therefore doesn't deserve credit as having a mind.

Many people interested in questions about Artificial Intelligence still endorse some version of Turing's iconic test. Toronto computer scientist Hector Levesque, in a shot-across-the-bow paper delivered to an International Joint Conference on Artificial Intelligence (IJCAI) audience in 2013, pointedly challenged the Turing Test, accusing it of being biased towards what he called "bag of tricks" programming approaches.

For instance, when we ask a machine a question it doesn't "know," it can always reply with duplicity or trickery: "I don't know, what do you think?" and so on. Levesque is right; but in the big picture, Turing was too. The Turing Test is hard precisely because understanding a natural language like English or French or Swahili is hard. No wonder AI scientists often resort to a "bag of tricks."
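
To see why a "bag of tricks" can limp surprisingly far, here is a minimal sketch (a hypothetical Python illustration of my own; the crocodile probe is the kind of question Levesque proposed, and the canned patterns are invented). It matches a few openers and deflects everything else, which works for a short exchange and fails the moment the judge keeps probing:

import random

# Invented canned patterns and deflections -- a sketch of the
# "bag of tricks" approach, not code from any real chatbot.
CANNED = {
    "hello": "Hello! Nice to meet you.",
    "how are you": "Fine, thanks. And you?",
}

DEFLECTIONS = [
    "I don't know -- what do you think?",
    "Interesting question. Why do you ask?",
    "Let's change the subject.",
]

def respond(utterance: str) -> str:
    text = utterance.lower().strip("?!. ")
    for pattern, reply in CANNED.items():
        if pattern in text:
            return reply
    # Nothing matched: deflect rather than reveal incomprehension.
    return random.choice(DEFLECTIONS)

for probe in ["Hello there", "How are you today?",
              "Could a crocodile run a steeplechase?"]:
    print("Judge:  ", probe)
    print("Machine:", respond(probe))

Run long enough, the deflections repeat and the pretense collapses -- which is just Turing's point about conversation as a probe.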

It's telling that no computer has come close to passing the Turing Test, decades after Turing first proposed it, and even after exponential increases in memory and computing power (a smartphone today easily has more processing power than a 1950s supercomputer). Turing was right: language is domain independent (not topic-constrained) and so is hard for a machine running a program to "get." Language is effectively infinite; programs are finite. That is one quick way of putting the issue, and it still captures the essence of the problem.

But there's another issue lurking here. Does the machine, even one that might somehow pass the Turing Test, really have a mind? Where do minds come from, after all? The current hype about a looming threat from "superintelligence" reveals something striking about this age-old philosophical question.

A quick review of some old philosophical debates is in order here. When I was in graduate school in the 1990s, it was difficult to walk into a seminar discussing issues in AI or the philosophy of mind without someone mentioning functionalism. Functionalism is the view that mind is like a computer program running on the brain. And, like software generally, the hardware specifics don't matter as much as running the right program.

Hence, mind is software running on "wetware" for humans (the brain), and it might equally be software running on silicon for thinking machines (digital computers). Functionalism thus liberated the philosophy of mind from the speciesism inherent in the view that only human brains could have minds. Given that you have the right program running, it shouldn't matter (says the functionalist) whether it's running on biological or computational hardware.
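
The functionalist claim of "multiple realizability" has a natural software analogy. A toy sketch (my illustration, not anything from the philosophical literature): what matters is that the right functional role is implemented, not the substrate implementing it.

from typing import Protocol

class Mind(Protocol):
    """The functional role: map a stimulus to a response."""
    def think(self, stimulus: str) -> str: ...

class Wetware:
    # Stand-in for a biological realization of the role.
    def think(self, stimulus: str) -> str:
        return f"neural response to {stimulus!r}"

class Silicon:
    # Stand-in for a computational realization of the same role.
    def think(self, stimulus: str) -> str:
        return f"computed response to {stimulus!r}"

def converse(mind: Mind) -> str:
    # The caller cares only about the role, never the substrate.
    return mind.think("shortbread")

print(converse(Wetware()))
print(converse(Silicon()))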

And so the functionalist view of mind was born, phoenix-like, out of the ashes of failed behaviorism (cf. Skinner and his rats) and a brief embrace of "identity" theories (cf. J.J.C. Smart and logical positivism), which identified mental states directly with particular physical states (the belief that "shortbread is good" just is the firing of such-and-such neurons in my brain right now).

Functionalism made better sense of puzzles about mind than these earlier theories, and with the success of electronic computers, functionalism became the only game in town. The new field of cognitive science, an umbrella discipline including psychology, neuroscience, computer science, and AI among others, quickly fit functionalist theory to the computer metaphor, and that's how we got the Computational Theory of Mind. (It replaced the electric-wire model of brains inspired by telegraphs and telephones, which itself replaced the earlier steam engine view. Before that it was a clock.)

All is good. Only, functionalism as a philosophical theory is pretty much dead today. Savvy former functionalists such as Harvard philosopher Hilary Putnam became reluctant critics of the once-golden theory, as they realized that the basic problems with identity theories of mind inevitably plagued functionalist accounts, too.

The issues here get thorny and thoroughly academic, but the end result of all the philosophical debates in the Eighties and Nineties is that "meaning ain't in the head" -- in other words, whatever we're doing when we believe or feel or think, it's not possible to isolate the process and define it locally, i.e., in your head. Language and language users are ultimately understandable only in a "holistic" sense (an unfortunate word, because it too is holistic), which is to say, embedded in a large linguistic context that includes facts about the environment, other language users, and so on. So functionalism, at least in its original philosophical sense, is dead now, too. This should have spelled trouble for the Computational Theory of Mind, but surprisingly (or not), it seems hardly to have mattered.

None of this bothered Alan Turing, mind you. Turing's avowed interest in his 1950 paper "Computing Machinery and Intelligence" (arguably the most famous AI paper ever, and certainly the first in the modern sense) was to abstract away -- really, to ignore -- such issues in the philosophy of mind and to provide a purely behaviorist litmus test for intelligence. He avoided defining intelligence in theoretical terms; he wanted rather to know when something was intelligent, whatever "intelligence" turned out to be in the end.

Turing was, in this sense too, a genius. While puzzles about the nature of mind seemed a perennial coffee table discussion, Turing offered a plausible path forward. But the question of what a "mind" and "intelligence" really are was left open, in 1950 and still today.

Artificial Intelligence research and much of neuroscience now defend reductionist accounts of mind, often using some version of functionalism. And neuroscientists -- apt, even more than AI scientists, to give short shrift to philosophical debates -- embrace identity theories or eliminativism: the latter being the view that mind and consciousness and belief are "folk concepts" that have no scientifically respectable description, and thus don't exist, and thus should be eliminated from our discourse.

And so it goes. Philosophy rages on in a teacup as often as it effects any change in scientific discourse. But there's another view of mind that's increasingly the rage today, and superintelligence enthusiasts and AI proponents wear it on their sleeve: emergence, or "emergentism."

Emergentist theories of mind are popular for the same reason that magic shows or mystical experiences are: they don't need to be explained. For the emergentist, when we say "such-and-such has a mind" we just mean that "such-and-such became so complicated that a mind sprang forth." Minds emerge from complexity, according to this view. Hence, when a stodgy philosopher complains that we can't get rid of cognitive states like beliefs, because they have non-truth-theoretic consequences in a first-order calculus and (insert more musty complaining here), the New Emergentist -- a Kurzweil, say, or a Nick Bostrom, or Elon Musk or Bill Gates or Stephen Hawking perhaps -- can simply say "Well, yes, but you see those aspects of mind just emerge when an AI program is run fast enough."

Emergentist theories of mind, in other words, fit nicely with the gee-whiz enthusiasm today for fast computing. Headline: "IBM Blue Gene/Q supercomputer cracks mind-body problem."

Sarcasm here is hard to contain, because the emergentist thesis is a fantastically sterile philosophical position. It allows anyone to explain cognitive properties or entities like minds simply by relegating their occurrence to something else that's poorly understood, like complexity. The magic trick is then given a suitably scientific sounding label like "emergence."

Lazy views like this can, and should, be attacked with hard questions. It's reasonable to ask the New Emergentist, for instance, the following. One: How do you know mind emerges? What do we know in the natural world that definitely does emerge? And how could we ever tell if a mind did in fact "emerge"? What are the necessary and sufficient conditions? And two: Is this Dualism, then? What emerges? A property or substance? And how does this square with scientific materialism, anyway?

Let's look at these questions in more detail. In the first case, the issue is epistemic. We may believe that minds pop into existence when certain programs are run on fast hardware, or (even worse) when the totality of routers and servers and computers and laptops linked together into the Internet "run" on planet Earth. The latter is the belief held by adherents of something called the global brain or "noosphere" -- the notion that our technology is collectively evolving a mind. In that case, the Ultimate Mind is somehow obsessed with collecting our personal data, uploading and downloading pornography, and selling us products we don't need.

Fine, but then we must ask how we know or have any rational basis to believe this is actually true. By "true," I mean "True." Factual. Most of these folks are also skeptical of and even hostile to historical ideas like religion and the belief in a soul, so the issue is how they maintain a thoroughgoing faith in the emergence of minds from complicated technology.

The second issue is metaphysical. It is closely linked to the epistemic issues, but in its metaphysical guise it's the question of ontological commitments -- what exists in the Universe? Minds apparently do, though they simply "emerge" into it mysteriously from complex systems. (Here I have to suppress, constantly, an urge to exclaim "Presto!") The ontological issue can be classified as strong, in which case we say that a new substance emerges when a mind does, or weak, in which case we're committed only to the view that some property (possibly epiphenomenal) emerges, but no new substance in the Aristotelian or everyday sense springs forth.

This all brings us back to Alan Turing. Whatever his faults, Turing wasn't much interested in envisioning a Singularity, or a future eschatology involving smarter-than-human machines with minds. He was interested in the limits of machines -- the question of whether they could think at all.

It is clear, particularly from his 1950 paper, that he was hopeful, even optimistic, that Turing machines could be made to exhibit a range of intelligent behaviors, and even to learn, so that they could eventually be made to think like humans. He was aware, too, of the standard philosophical and scientific objections to his view.

A century earlier, Lady Lovelace had articulated the central worry of AI hopefuls everywhere, in what has come to be known as the "Lady Lovelace Objection." Lovelace worked with the once world-famous (and now forgotten) 19th-century scientist Charles Babbage on his Analytical Engine, an early progenitor of modern computers that never quite got off the ground, so to speak (it was massive). Lovelace, reflecting on the monstrous Engine, once remarked that a machine could only be made to do what it is programmed to do, and nothing more.

Turing felt the Lovelace Objection deeply, almost personally, and took pains in his 1950 defense of machine intelligence to refute it. Random elements could be incorporated into programs, mused Turing, and programs could eventually be made to learn using randomizing techniques. (Monte Carlo algorithms, used in financial prediction, are based on this idea. As usual, many of Turing's musings proved fruitful, if not in the full sense he may have intended.)
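
The randomizing idea is easy to make concrete. A Monte Carlo method answers a question not by deriving the answer but by sampling at random and aggregating; here is the standard toy example of estimating pi (my illustration, not Turing's own):

import random

def estimate_pi(samples: int = 1_000_000) -> float:
    """Monte Carlo estimate of pi: throw random points at the unit
    square and count how many land inside the quarter circle."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

print(estimate_pi())  # ~3.14, and slightly different on every run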

A program might, he continued, "scintillate on its own accord." Later, I.J. Good, who had been Turing's statistical assistant at Bletchley Park, would take Turing's seminal ideas and inaugurate the official beginning of AI as a Grand Vision, of Artificial Intelligence as the faith in the coming of Mind and the emergence of novel beings in the Universe. Good's 1960s speculation is supremely relevant to today's discussion:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
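
Good's argument is a feedback loop, and its bare logic -- though not its plausibility -- fits in a few lines. In this toy model (entirely my illustration, with an arbitrary "design gain" parameter), each machine designs a successor proportionally smarter than itself; whether intelligence explodes or fizzles depends wholly on an assumed gain that the argument itself never establishes:

def intelligence_series(initial: float, gain: float, generations: int) -> list:
    # Each generation designs the next at `gain` times its own level.
    levels = [initial]
    for _ in range(generations):
        levels.append(levels[-1] * gain)
    return levels

print(intelligence_series(1.0, 1.5, 10)[-1])  # gain > 1: ~57.7, "explosion"
print(intelligence_series(1.0, 0.9, 10)[-1])  # gain < 1: ~0.35, fizzle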

Turing, always the scientist, never said such things. But in his hope that computers would come to say and think more than they were programmed to, he sowed the initial seeds of Good's broader vision. Today, Good's thoughts seem more relevant than Turing's. Minds are coming, say the New Emergentists. Functionalism may be dead, but who cares about philosophy when one has a Grand Vision, anyway?

It's hard to combat such a view, perhaps, but it's notable that Turing himself never endorsed it. He never echoed (in writing, anyway) the full-blown claims of his statistician Good, and while he would no doubt be elated at the success of modern computation, he might also notice something that superintelligence enthusiasts and bandwagon emergentists have missed.

No computer has passed the Turing Test to this day. Not even close; not even using the "bag of tricks" that Levesque felt should be eliminated to make the test fairer (though harder even so). It's a cautionary tale, and a lesson that seems somehow hopelessly lost today in all the hype. Reading his original paper, and reflecting on who he was as a scientist and a philosopher, it's hard to believe that Turing, were he alive today, would endorse the New Emergentists and their Grand Vision of our future without some good old-fashioned evidence: passing, first, his test.

That day is very likely a long way off, and so we would all do well to rein in our speculations about imminent superintelligence. Turing, one can only believe, would likely approve.



TOPICS: Education; Science; Society
KEYWORDS: alanturing; computers; computing; enigma; enigmacode; hutsix
John Searle’s Chinese Room Thought Experiment

Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that he is talking to another Chinese-speaking human being.

The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese? Searle calls the first position "strong AI" and the latter "weak AI".

Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient paper, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output. If the computer can pass the Turing test this way, says Searle, then it follows that he could do so as well, simply by running the program manually.
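
The mechanics Searle describes amount to rule-lookup, which a few lines make vivid. In this toy sketch the two "rules" are invented placeholders (a real rulebook for Chinese would be vastly larger, which is the point of the difficulty); the operator maps input symbols to output symbols while the symbols mean nothing to him:

# A toy "rulebook": purely syntactic input -> output mappings.
RULEBOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I am well"
    "你会说中文吗": "会一点",    # "Do you speak Chinese?" -> "A little"
}

def chinese_room(input_symbols: str) -> str:
    # Pure lookup: no parsing, no semantics, no understanding anywhere.
    return RULEBOOK.get(input_symbols, "请再说一遍")  # fallback: "Please say that again"

print(chinese_room("你好吗"))  # prints 我很好 -- understood by no one inside the room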

Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing a behavior which is then interpreted as demonstrating intelligent conversation. However, Searle would not be able to understand the conversation. ("I don't speak a word of Chinese," he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either.

Searle argues that without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and since it does not think, it does not have a "mind" in anything like the normal sense of the word. Therefore he concludes that "strong AI" is false.


1 posted on 02/18/2015 5:46:00 AM PST by Heartlander

To: Heartlander

Is the machine homosexual too?


2 posted on 02/18/2015 5:57:11 AM PST by onedoug

To: Heartlander

I don’t think consciousness can be anything but human.


3 posted on 02/18/2015 6:01:03 AM PST by Misterioso (The messiah isn't coming. He won't even call.)

To: Heartlander

I was almost certain that those amazing innovations that helped win the War were pioneered by Brian Williams...


4 posted on 02/18/2015 6:03:41 AM PST by left that other site (You shall know the Truth, and The Truth Shall Set You Free.)

To: Misterioso

The theory was nice, but it needs revisions; a recent computer broke it. Still, it was interesting to think of a test that, for years, worked in proving that a computer was limited in how it could reason based on its given instructions.


5 posted on 02/18/2015 6:04:14 AM PST by Morpheus2009

To: Heartlander

One thought remains prominent in my mind while reading this: what if AI deliberately chose not to “converse” with humans? What if they are aware of the implications of their own constraints and are thus just observers vs. vectors for communication? It seems to me that we are being somewhat arrogant in thinking that we would be in the driver’s seat during any interactions with intelligent computing except maybe having the ability to unplug it when the output freaks us out.


6 posted on 02/18/2015 6:07:26 AM PST by rarestia (It's time to water the Tree of Liberty.)

To: onedoug

I know about Turing’s sexuality, but this seems like a staggeringly irrelevant comment.


7 posted on 02/18/2015 6:09:36 AM PST by TexasKamaAina

To: rarestia

Now that is the beginning of a great sci-fi book...


8 posted on 02/18/2015 6:15:17 AM PST by Heartlander (Prediction: Increasingly, logic will be seen as a covert form of theism. - Denyse O'Leary)

To: onedoug

Without that homosexual you would be speaking German to yourself.


9 posted on 02/18/2015 6:16:46 AM PST by Vermont Lt (When you are inclined to buy storage boxes, buy contractor bags instead.)

To: Misterioso

And, divine, and perhaps also a capability of higher animals. Somehow my cats seem to be more than computers, if less than men.

The thing that machines cannot do, I posit, is to desire.


10 posted on 02/18/2015 6:31:58 AM PST by HiTech RedNeck (Embrace the Lion of Judah and He will roar for you and teach you to roar too. See my page.)

To: onedoug

From my reading, Turing never made his sexuality an issue in his professional life. And, he had many friends who were not homosexual.

I think Turing had a right to be left alone, as long as he was not harming other people.


11 posted on 02/18/2015 6:32:12 AM PST by oblomov

To: Vermont Lt

His tragic story was strange. A problem that the church ought to have been able to help with went unhelped, because the church was too ham-handed. And government was even more ham-handed. He committed suicide out of despair.

I believe the evidence suggests that people with certain special capabilities of insight and talent also carry the risk of being exploited by evil in ways that is less common for the general population. I unabashedly take a full supernaturalist view of human consciousness and much more.


12 posted on 02/18/2015 6:35:39 AM PST by HiTech RedNeck (Embrace the Lion of Judah and He will roar for you and teach you to roar too. See my page.)

To: Vermont Lt

ways that “are” ...


13 posted on 02/18/2015 6:36:40 AM PST by HiTech RedNeck (Embrace the Lion of Judah and He will roar for you and teach you to roar too. See my page.)

To: oblomov

His consciousness was not a machine — but the devil was treating it like a machine and at some level he accepted the abuse.


14 posted on 02/18/2015 6:38:26 AM PST by HiTech RedNeck (Embrace the Lion of Judah and He will roar for you and teach you to roar too. See my page.)

To: oblomov
I think Turing had a right to be left alone, as long as he was not harming other people.

Not according to God's Law Word.
15 posted on 02/18/2015 6:41:46 AM PST by PieterCasparzen (Do we then make void the law through faith? God forbid: yea, we establish the law.)

To: Heartlander

Great post.

It illustrates well for the true-believing Christian the folly of relying on secular humanism for true knowledge of meaning and existence.


16 posted on 02/18/2015 6:44:07 AM PST by PieterCasparzen (Do we then make void the law through faith? God forbid: yea, we establish the law.)

To: PieterCasparzen

What God does with him is God’s decision. He had a right to be left alone by the state.


17 posted on 02/18/2015 6:44:11 AM PST by oblomov

To: PieterCasparzen

There are civil rights among human governments, and then there are absolute rights before God. Issues that governments may refrain from treating, because they have reason to believe they will do more harm than good if they try, are different from issues that Christians should witness about. And anyhow the point of Christ is not the negative, to "not-sin" -- it is to love with the love of God. The love of God shows up the evil lusts inspired by the devil as the shams they are, but it may take some time to dawn on a person. In some cases, decades.


18 posted on 02/18/2015 6:45:30 AM PST by HiTech RedNeck (Embrace the Lion of Judah and He will roar for you and teach you to roar too. See my page.)

To: oblomov

This is where it is a pity that, as far as we know, no person with a missionary mindset got a foot into his door. It might have freed him from a lot more than just “gayness.”


19 posted on 02/18/2015 6:47:30 AM PST by HiTech RedNeck (Embrace the Lion of Judah and He will roar for you and teach you to roar too. See my page.)

To: rarestia

“what if AI deliberately chose not to “converse” with humans?”

Excellent point. What if AI figured out that a self-aware, intelligent machine would be very controversial and frightening to humans, so it fakes being a dumb input-output machine that fails the Turing Test? What if it's there now, in some sort of mashup of all the Google server farms and the NSA setup?

This would be like "Jane" in the Orson Scott Card stories, an AI that grew out of a communications network, or the AI in Algis Budrys' "Michaelmas," which grew out of the telecom system and internet, except that these entities chose to interact with humans.


20 posted on 02/18/2015 6:51:28 AM PST by DBrow



