
To: tortoise

You are talking theory and I'm talking practical experience. If the goal is to make a truly autonomous, fully aware system that is able to recognize and correct mistakes beyond the limits of its own programming, it will not happen anytime soon, if ever.

Let's take Big Blue and Deep Blue as examples. Big Blue was designed to play chess and Deep Blue to predict oceanographic events.

Big Blue could play chess simply by recording and then extrapolating the possibilities that stemmed from every possible move at any given time. In effect, its options were limited even if there were millions of possibilities. It then reacted to those moves in the way that seemed most logical based upon probabilities. It could be "faked out" by unorthodox play, or, if you kept at it long enough, you would eventually find a bug in its programming that it could not respond to. It "learned" from its mistakes because it recorded every move, and then a person went back and told it where it went wrong, changing the programming to avoid the same errors. It could 'think' tactically but not 'strategically': it could not discern strategy, only react to its opponent's moves. In terms of initiating a game, it generated a random first move.
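For readers who want that tactic in concrete form: "record and extrapolate every possibility, then pick the most favorable reply" is essentially a brute-force game-tree (minimax) search. Below is a minimal sketch over a toy take-away game; it is purely illustrative (the game and function names are mine, not anything IBM shipped), and it shows exactly the limitation described: it calculates tactically but has no concept of strategy beyond its search horizon.

```python
# Illustrative brute-force game-tree search over a toy take-away game:
# players alternately remove 1-3 stones; whoever takes the last stone wins.
# From any position we extrapolate every continuation and pick the move
# whose worst-case outcome is best.

def legal_moves(pile):
    """A move removes 1, 2, or 3 stones from the pile."""
    return [n for n in (1, 2, 3) if n <= pile]

def minimax(pile, my_turn):
    """+1 if the original player can force a win from here, -1 otherwise."""
    if pile == 0:
        # The side that just moved took the last stone and won.
        return -1 if my_turn else +1
    scores = [minimax(pile - n, not my_turn) for n in legal_moves(pile)]
    return max(scores) if my_turn else min(scores)

def best_move(pile):
    """Pick the move with the best guaranteed outcome."""
    return max(legal_moves(pile), key=lambda n: minimax(pile - n, False))

if __name__ == "__main__":
    print(best_move(10))   # 2: leaves a pile of 8, a lost position for the opponent
```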

Deep Blue was an even greater disappointment. There simply were too many variables to consider when predicting the movement of a tide, for example, and its effects. Its shortcoming was that while it could mathematically generate billions of predictions, its input parameters were limited by what its programmers could tell it. It could not, for example, apply anything like an empirical method to test assumptions or sift facts.
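To make the "limited by what its programmers could tell it" point concrete, here is a hedged sketch of the classic harmonic method of tide prediction (not Deep Blue's code; the constituent values below are invented for illustration). The program can generate predictions for any time you like, but it can only ever combine the terms it was handed; effects nobody thought to encode, such as storm surge or wind setup, simply cannot appear in its output.

```python
# Harmonic tide prediction sketch. Amplitudes and phases are made up for
# illustration; a real station would use values fitted from observations.
import math

# (name, amplitude_m, angular_speed_deg_per_hour, phase_deg)
CONSTITUENTS = [
    ("M2", 1.20, 28.984, 45.0),   # principal lunar semidiurnal
    ("S2", 0.40, 30.000, 70.0),   # principal solar semidiurnal
    ("K1", 0.25, 15.041, 120.0),  # lunisolar diurnal
]

def predicted_level(hours, mean_level=0.0):
    """Sum of whichever harmonic constituents the programmers supplied."""
    return mean_level + sum(
        amp * math.cos(math.radians(speed * hours + phase))
        for _, amp, speed, phase in CONSTITUENTS
    )

if __name__ == "__main__":
    for h in range(0, 25, 6):
        print(f"t = {h:2d} h   predicted level = {predicted_level(h):+.2f} m")
```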

It's not the mechanics that are lacking, it's the software, and that software comes from an unreliable source to begin with: human beings. Even highly intelligent, mathematically competent human beings make mistakes and assumptions that may not be factual or even sensible. As an example, I give you the great Y2K scam: the computers didn't care what century it was, the SOFTWARE did. The limitation is always software. I don't know about you, but when I write code, I apply logic, not theory. If there's a fault in my logic, there's a fault in my software. I can program based on theory all day, but that doesn't mean I have a viable result when I'm done.
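A small sketch of that Y2K point (the names and numbers are mine, chosen only to illustrate it): the machine handles the arithmetic either way; it is the decision to store years as two digits that breaks.

```python
# The hardware is indifferent to the century; the fault lives in software
# that stored years as two digits.
from datetime import date

def years_of_service_buggy(hired_yy, today=date(2000, 1, 1)):
    """1990s-style logic: years stored as two digits ('87' for 1987),
    so the arithmetic silently assumes every year is 19xx."""
    return (today.year % 100) - hired_yy      # 0 - 87 = -87 years of service

def years_of_service_fixed(hired_year, today=date(2000, 1, 1)):
    """The same calculation with four-digit years; the 'bug' disappears."""
    return today.year - hired_year

if __name__ == "__main__":
    print(years_of_service_buggy(87))     # -87 (nonsense)
    print(years_of_service_fixed(1987))   # 13
```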

P.S. Interesting debate, though...


282 posted on 04/15/2005 12:26:54 PM PDT by Wombat101 (Sanitized for YOUR protection....)


To: Wombat101

Whatever is going on in our heads, it’s not magic. It is knowable and machine intelligence will be achieved.

Whatever machine intelligence turns out to be, it won't be based on current architecture, though.


283 posted on 04/15/2005 12:32:36 PM PDT by ElTianti

To: Wombat101
It's not the mechanics that are lacking, it's the software, and that software comes from an unreliable source to begin with: human beings.

The problem is the complete absence of strict theory in software design. Virtually all programmers design software without having any idea whether 1) the design is theoretically valid for the intended result, and 2) the design is a theoretically optimal implementation per the specs. Given this, it is no wonder that software is so crappy. I had to put the smackdown on a programmer today who was trying to do something with a multi-version concurrency control design that is disallowed by transaction theory if you want perfectly correct behavior all the time under MVCC. Yes, the problem was obscure, esoteric, and would not even show up under many conditions and probably most testing. Yet everyone intuited that the design was correct, even though I could prove it was fundamentally broken by going to the theory, which no one bothers to do, never mind actually understanding it. A lot of what is wrong with software is that programmers treat it like literature rather than math. All software design is an implementation of strict mathematics, and if you can't explain why a software design is correct in terms of mathematics, then you don't understand the design and it is likely broken.
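The post doesn't say which MVCC design was at issue, but a classic example of the kind of failure transaction theory predicts and intuition misses is write skew under snapshot isolation: two transactions each read a consistent snapshot, each write a different row, neither triggers a write-write conflict, and the invariant both of them checked ends up violated. A toy in-memory simulation (not any real database's API):

```python
# Write-skew sketch: the invariant is "at least one doctor stays on call".
db = {"alice_on_call": True, "bob_on_call": True}

class Txn:
    def __init__(self):
        self.snapshot = dict(db)   # MVCC: each transaction reads its own consistent snapshot
        self.writes = {}

    def on_call_count(self):
        return sum(self.snapshot.values())

    def set(self, key, value):
        self.writes[key] = value

    def commit(self):
        # Snapshot isolation aborts only on write-write conflicts; the write
        # sets below are disjoint, so both commits succeed (check omitted here).
        db.update(self.writes)

# Each doctor goes off call only if the snapshot shows someone else still on.
t1, t2 = Txn(), Txn()
if t1.on_call_count() >= 2:
    t1.set("alice_on_call", False)
if t2.on_call_count() >= 2:
    t2.set("bob_on_call", False)
t1.commit()
t2.commit()

print(db)   # {'alice_on_call': False, 'bob_on_call': False}: invariant broken
```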

AI has suffered from this badly. For the entire 20th century, not a single AI theorist could demonstrate why their particular theory du jour should work, other than by appealing to their demonstrably useless intuition about the nature of the solution. There was never an exhaustive mathematical basis for their design theories.

Note that pure mathematics does not always prescribe a good engineering solution, but that is a different issue. It is generally possible to show that a good engineering solution exists if the math is correct, though it may take a bit of work to derive one. The difference between theory and practice is that proper practice is often a theoretically suboptimal or constrained implementation of the theory. The ideal and the real are not fundamentally different; the real should be the ideal re-derived under real constraints (ideally...).

292 posted on 04/15/2005 2:12:01 PM PDT by tortoise (All these moments lost in time, like tears in the rain.)


