There are a lot of gears and wheels whizzing around in the brain, most of which are now accessible to study. But the computational model is still a mystery, despite our being able to see the pieces.
This is not an unusual condition in science, so I think it is premature to pronounce the project a failure.
You wrote: "There are a lot of gears and wheels whizzing around in the brain, most of which are now accessible to study. But the computational model is still a mystery, despite being able to see the pieces."
Perhaps the reason the "computational model is still a mystery" is that it can only "see the pieces," never the integrated, systematic whole of which they are "the pieces."
In any event, it seems to me the "pieces" themselves are pretty intangible quantities when you boil it all down. These pieces amount to after-the-fact recordings (for we can only "read the tape" after the "take" has been registered) of experimental observations of human brain function. Yet it hardly seems to occur to anyone these days that whatever trace brain function leaves on a recording device is not the same thing as the thing being recorded. Or to question the possibility that brain function, in a certain sense, is itself the trace of a higher-order function of some kind.
Which, for lack of a better descriptor, I would call consciousness. This is what Marvin Minsky believes can be supplied to a "thinking machine" as the "short description of the system." Man, talk about taking a shortcut to problem solution! Which could never solve the problem, precisely because it is a shortcut.
IMHO, people who work in the field of artificial intelligence might find it helpful to study, in addition to the brain, the operations of consciousness. Arguably, consciousness is highly structured and complex. One would think this fact might have some bearing on the content of Minsky's "short description." For how is any "short description" to capture the quality of essential self-reflection inherent in human thinking?
If the AI folks of the "strong theory" school continue to avoid exploring the structure of consciousness itself, then I really don't think they will get very far very soon in achieving their goals. To put it bluntly, my suspicion is these folks are seriously on the wrong track -- barking up the wrong tree, methodologically speaking.
At the end of the day, the problem before them -- as they themselves seem to have defined it -- is of such dimension and intractability as to suggest to an outside observer that it would be easier to turn thinking humans into machines than to turn machines into human-like thinkers.
Believe it or not, there are ways to do systematic investigations into the operations of consciousness. Unfortunately, every last one of them (that I know about, anyway) is necessarily "subjective."
As unspun has aptly put it, before there can be "objectivity," there has to be a "subjectivity." And I think that observation directly bears on the seemingly most intractable problem of AI theory.
JMHO FWIW.