Posted on 11/13/2010 5:28:58 AM PST by RogerFGay
From the High Level Logic (HLL) Open Source Project blog.

When are we going to have AI, one survey asks? It's a question relevant to HLL because so much of the thought behind the HLL design comes from the history of AI research, and from current technology that has grown out of AI research. The answer to the question, with reference to HLL, is now. (Or at least as soon as version 1.0 is ready.) And that's no reason to get worried. As the description of HLL claims, you don't even need a high-powered computer science background to build applications with it, just some (OK, at least reasonably good would be nice) programming knowledge.
The AI question is actually a bit tricky. It really depends on what you mean by AI. Way back in the cave computer days when I was first introduced to the subject, artificial intelligence research was defined as trying to get computers to do things that humans currently do better. Applying that definition, it seems as though the answer may be never. As soon as computers can do something at least as well as or better than humans, it's no longer the subject of AI research. Object-oriented programming is an example of something that came from AI research. Now a mainstream programming paradigm, many people don't associate it with AI at all.
The variety of ways of thinking about AI is also why some researchers predict AI won't exist until far into the future while others (like me) are much more optimistic. People who answer the question may have something very specific in mind and think it will be a long time before it becomes reality. You can also think about all the things computers do now, such as mathematical calculation (something humans and computers both do, and computers do well), and make a case that AI already exists. The great variation in predictions of when AI will come has to do with both the particular set of things the guesser thinks needs to be done before AI exists and how optimistic or pessimistic they are about doing them, while basic AI research always looks ahead.
You've probably heard that human intelligence is linked to the fact that we have opposable thumbs and other peculiar physical characteristics like standing upright and walking erect. Researchers recognize that in living creatures, intelligence and the characteristics of their physical bodies are linked, which makes robotics fertile ground for AI. Not all researchers focus exclusively on human intelligence and capabilities however. Some of the most interesting advances have come from looking for ways to mimic the behavior of other creatures, from insects and snakes to mules. The intelligence of a lower species is still intelligence, and some of the developments that come from mimicking their behavior can be applied in layers when mimicking behavior in higher ones.
Where does HLL actually fit in? Twenty-five years ago, when I was first thinking about the high level logic problem, I thought of it as a subject for advanced research. Since then, computer languages have advanced considerably and in ways directly matching the requirements of HLL. Strong networking support is a must, and that has come from the focus on Internet applications. Relatively recent additions to Java (which I've used to build HLL), such as strong support for generics and reflection, have transformed some of the challenging bits into stuff that's just pretty cool. (Once again, application developers are not required to have expertise in these techniques, although it's quite all right if they do.)
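As an aside for programmers: here is a minimal sketch of the kind of thing generics and reflection make easy in a framework like this. The class and interface names (`AgentLoader`, `Agent`, `EchoAgent`) are hypothetical illustrations, not HLL's actual API; the point is only that a framework can instantiate components named in configuration without application developers ever touching the reflection machinery.

```java
import java.lang.reflect.Constructor;

// Hypothetical illustration (not actual HLL code): a framework can use
// reflection to load component classes named at runtime (e.g. in a config
// file), while generics keep the call site type-safe for the developer.
public class AgentLoader {

    // A minimal component contract for the sketch.
    public interface Agent {
        String describe();
    }

    // A trivial implementation the framework might load by name.
    public static class EchoAgent implements Agent {
        @Override
        public String describe() {
            return "EchoAgent ready";
        }
    }

    // Instantiate a class by name and cast it safely to the expected type.
    public static <T> T load(String className, Class<T> type) throws Exception {
        Class<?> raw = Class.forName(className);
        Constructor<?> ctor = raw.getDeclaredConstructor();
        return type.cast(ctor.newInstance());
    }

    public static void main(String[] args) throws Exception {
        // The string could come from a configuration file instead.
        Agent agent = load("AgentLoader$EchoAgent", Agent.class);
        System.out.println(agent.describe());
    }
}
```

The design point is that all of the reflection lives inside the framework's `load` method; an application developer only writes ordinary classes and names them in configuration.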
To some extent, even the concept has been encroached upon (so to speak). The short descriptions of HLL have called it an agent system, and I worry at times that it will be perceived as nothing more than an alternative to existing agent systems (which I won't mind so much if it becomes a popular one). The overall HLL concept is the thing that remains new. While it fits well into the modern software development world, I still think it has potential as a tool in advanced AI research and application development.
HLL development has been proceeding as an ordinary software development project. With modern software technology and twenty-five years of thought behind it, not much experimentation is now required; less than would ordinarily be needed for a complex software system, because even the details of how it all fits together have been thought through in advance. And all that is why it (version 1.0) will be a powerful, lightweight system that is easy to use.
So, is it AI? When people are using it regularly to build applications, I certainly hope it's thought of as AI just as much as rule-processing or object-oriented programming and all the other things that have come from thoughts on developing AI; and yet, fully accepted and integrated into mainstream applications development. Why not integrate HLL support directly into programming languages?
For most people, thoughts on what AI is continuously focus on the future. With twenty-five years of history, I think I've earned the right to use a tired old cliche to end this note with a response. As far as HLL is concerned, the future is now. (Finally!)
What if that AI is beaming a hologram of itself into other universes for "companionship" or perhaps just plain old hunger.
And here I thought the parroting of Soros’ words on TOTUS and out of Reid’s & Pelosi’s mouths simply demonstrated SkyNet is already aware.
There was a scientist who was asked, “Will there ever be a computer as intelligent as a human?” He answered, “Yes, but only for a few moments”.
“When will we have artificial intelligence?
We elected it on 11-4-2008.”
Lol! You beat me...
His words were “Je pense donc je suis” (the original is in French, not the Latin “Cogito ergo sum” often attributed to him). He may have MEANT “I think therefore I know that I am”, but his words were “I think therefore I am.” I think your phrasing makes more sense, however. Otherwise the point could be made in a number of other ways, such as “I hear, therefore I am” or “I eat, therefore I am.”
Frankly, I never thought that this statement was very deep, or even true. Much exists that has no knowledge of its existence. And it is not the consciousness of one’s existence that makes one exist. A stone has no awareness of its existence, yet it exists. The “therefore” makes no sense.
Precisely. It will not be “AI” until the computer is “self-aware” and self-programming. Short of that, it’s just a really fast number-cruncher.
It needs to be followed up with, "Of course I could be wrong".
Al Gore?
Perhaps the study of artificial stupidity would yield some seriously interesting results; just like understanding evil provides insight into morality.
Some of this stuff seems to go slowly (in computer technology evolution time, which is actually quite rapid) because of the small amount of funding it has. It’s easy to think I’m wrong about that, especially when governments have put tons of money into robotics R&D over the past decade. But when something becomes profitable, that’s when the sales->competition->R&D cycle kicks in and you get a much, much larger number of people involved in development.
And I also meant to provide you with this link; re: autonomous vehicles: http://edition.cnn.com/2010/TECH/innovation/10/27/driverless.car/
Interesting. I’ve been watching episodes of Space 1999 lately. Last night I watched the one where an AI being in the form of a space ship tried to kidnap 3 of the main characters to replace his “companion,” a human that died.
Even more relevant - when will we have magnetic bubble memory?
“He may have MEANT ‘I think therefore I know that I am’, but his words were ‘I think therefore I am.’”
Yes, I was, as I said, trying to get at what he meant. Which, however catchy, can be misleading.
“Frankly, I never thought that this statement was very deep, or even true”
I think it’s true and shallower (or narrower, if you will) than people give it credit for. It does all come down, after all, to a single sudden, basic, emotional response. You must simply realize that you are thinking, and that’s the whole trick.
“Much exists that has no knowledge of its existence”
That’s not really the point, though. Descartes is very specifically inquiring into whether he, a thinking person, can be fooled into thinking he exists. The ruse of the “malicious demon” (or, as one commentator put it, the rather accommodating demon), who’s trying to trick you into believing you exist when you really don’t, wouldn’t work on things that can’t think.
“And it is not the consciousness of one’s existence that makes one exist.”
No, but if you follow his logic, you realize that’s not what he’s saying. All he’s saying is that consciousness leads you to realize that you cannot be fooled into thinking that you exist when you don’t, because you must exist in order to think.
“The ‘therefore’ makes no sense.”
It does within the train of thought.
“Which, however catchy, can be misleading.”
Cogito ergo sum, I meant, can be misleading.
“Judgement call” AI is already used extensively in medical screening and diagnosis and it has been for over 15 years. It may have even been used on you and your family.
That’s not the correct definition of AI.