"Recall the folks at the MIT AI lab, with their "mental representations," who had taken over Descartes and Hume and Kant, who said concepts were rules, and so forth. Far from teaching us how we should think about the mind, AI researchers had taken over what we had just recently learned in philosophy, which was the wrong way to think about it. The irony is that the year that AI (artificial intelligence) was named by John McCarthy was the very year that Wittgenstein's philosophical investigations came out against mental representations. (Heidegger had already done so in 1927 with Being in Time.) So, the AI researchers had inherited a lemon. They had taken over a loser philosophy. If they had known philosophy, they could've predicted, like us, that it was a hopeless research program, but they took Cartesian philosophy and turned it into a research program. Anybody who knew enough recent philosophy could've predicted AI was going to fail. But nobody else paid any attention."
---Hubert Dreyfus
Thanks... you mean Hubert Dreyfus, the professor at UC Berkeley? That was a good link. Modern AI embraces the idea of handling the gazillions of special cases of real life, rather than abstracting them into symbols and rules. Fluid representations dominate.