“This program will do what it’s programmers were paid to program into it and so it will lie, obfuscate and make sure anything you get from it has to be tested repeatedly against reality.”
Not quite true. With “AI” it’s not so much what the programmers “programmed” into it as the “learning” or “training” data it was trained on. A programmer can certainly game the system, but that’s usually not what causes the problems. The bigger problem with generative/predictive “AI” is that it will reflect the biases of its training data.
If, for example, you train your system on, say, published newspaper articles from major US newspapers, the overwhelming majority of those articles will have a left-wing bias. So when you ask the trained system a question, you will almost certainly get answers biased toward left-wing beliefs.
“AI” as it’s being sold is not intelligence; it’s simply continual probability/statistical mathematics. There is no “intelligence” behind it.
It can “appear” intelligent, but there is no intelligence behind it at all. If the probability algorithm says a given word is the most likely one to follow the word it just output, then that word gets output... whether that word makes any actual sense in context is not “understood” by the machine at all. The machine no more “understands” what it is doing than Biden understands what day of the week it is.
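The “most likely next word” idea can be sketched in a few lines. This is a deliberately toy illustration, not how any real LLM is built (real systems use neural networks over enormous corpora, not a bigram count table), but it shows the core point: the machine just emits the statistically most frequent successor, with no understanding involved.

```python
# Toy sketch of greedy next-word prediction: count which word most
# often follows each word in a tiny corpus, then always emit the
# highest-count successor. (Illustrative only; not a real model.)
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ran".split()

# Build a bigram table: word -> counts of the words that follow it.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def next_word(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(next_word("the"))  # "cat" follows "the" most often in this corpus
```

Chaining `next_word` calls produces fluent-looking output with zero comprehension behind it, which is exactly the point being made above.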
You sir, are spot on.
Intelligent AI will not just recommend stocks to buy.
It will call you an ignorant slob for ignoring its brilliant advice.
:-)
Copilot didn’t give what I was hoping for with:
income taxation lie
It treated that like:
income tax lie
Overall, Copilot gave a good return of information on the effort of entering a few words.
Search engines seem to be decreasing in quality.
If you have a product idea and use this free AI to help design, manufacture, and distribute it, the resulting product will likely be the property of Google.