And this is why it will not be successful in business. It is already difficult to find people, developers included, who think logically and can engineer solutions. They also have difficulty writing English, or any other language, with the precision Codex requires to generate code. Imagine Joe Biden taking his own advice and becoming a coder. Read the tagline below.
India has been doing this for years without the codified AI interface.
“Hey Siri, have CODEX write me an app to stuff the ballot box in the next election without any chance of getting caught.”
Most of what is being called “AI” today, particularly in the public sphere, is what has been called “Machine Learning” (ML) for the past several decades. ML is an algorithmic field that blends ideas from statistics, computer science, and many other disciplines to design algorithms that process data, make predictions, and help make decisions. In terms of impact on the real world, ML is the real thing, and not just recently....
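As a concrete, if toy, illustration of the point above, ML in its plainest form really is statistics in code: fit a model to data, then use it to predict. The sketch below fits a straight line by ordinary least squares (closed-form, no libraries) to a hypothetical hours-studied vs. exam-score dataset; the data and variable names are invented for illustration only.

```python
# ML as "statistics + computer science" in miniature:
# fit y ≈ w*x + b by ordinary least squares, then predict.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = cov(x, y) / var(x); intercept from the means
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var
    b = mean_y - w * mean_x
    return w, b

# Hypothetical data: hours studied vs. exam score.
xs = [1, 2, 3, 4, 5]
ys = [52, 55, 61, 64, 68]

w, b = fit_line(xs, ys)
prediction = w * 6 + b  # predicted score for 6 hours of study
```

Nothing here is "intelligent" in the human-imitative sense; it is a decision aid built from data, which is exactly the distinction the passage is drawing.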
However, the current focus on doing AI research via the gathering of data, the deployment of “deep learning” infrastructure, and the demonstration of systems that mimic certain narrowly-defined human skills — with little in the way of emerging explanatory principles — tends to deflect attention from major open problems in classical AI. These problems include the need to bring meaning and reasoning into systems that perform natural language processing, the need to infer and represent causality, the need to develop computationally-tractable representations of uncertainty and the need to develop systems that formulate and pursue long-term goals. These are classical goals in human-imitative AI, but in the current hubbub over the “AI revolution,” it is easy to forget that they are not yet solved.
To a certain extent, I'm not concerned about statisticians and developers getting the math or the optimization function and constraints wrong (though I do see many bright people with little domain expertise, which is perhaps the biggest problem). No, I fear the diabolical project owner who, knowing full well how imperfect these optimization routines can be and how hard the algorithm is to understand without domain expertise, over-promises and under-delivers intentionally: for example, installing an autonomous navigation system that knowingly plows through minorities, then using that "tragedy" to clamp down on freedom. People get screwed, minorities still get run over, and liberty is curtailed.
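The claim that "these optimization routines can be imperfect" is easy to demonstrate. The sketch below runs plain gradient descent on a hypothetical non-convex function, f(x) = x⁴ − 3x² + x (chosen purely for illustration); depending on the starting point, the routine settles into the global minimum or a strictly worse local one, and nothing in the optimizer itself tells you which you got.

```python
# Toy demo: gradient descent on a non-convex objective can converge
# to a worse local minimum depending on initialization.
# f(x) = x^4 - 3x^2 + x has two minima: a global one near x ≈ -1.30
# and a local one near x ≈ 1.13. (Function, step size, and iteration
# count are arbitrary choices for illustration.)

def f(x):
    return x**4 - 3 * x**2 + x

def grad(x):
    # derivative of f
    return 4 * x**3 - 6 * x + 1

def descend(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

left = descend(-2.0)   # converges near the global minimum
right = descend(2.0)   # converges near the inferior local minimum
```

Without domain expertise, a stakeholder inspecting only the converged answer has no way to know a better solution was sitting on the other side of the landscape, which is exactly the gap a bad-faith project owner can exploit.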