This doesn’t make sense.
“Loh asked ChatGPT to find the largest fraction less than ½ with a numerator and denominator that are positive integers less than or equal to 10,000. It was a question that it almost certainly hadn’t seen before—and it flubbed the answer. (It’s 4,999/9,999.)”
I don’t understand why a computer (AI) would flub this answer. If ChatGPT flubbed it, then it is a poor AI engine.
It seems people are confusing these entertaining ELIZA-style chat programs with true AI - and gravely underestimating AI in the process. A true AI engine will not only learn from its mistakes, but it will do so so quickly that its trial-and-error process will be completely invisible to humans.
The key flaw in the above example is the phrase “It was a question that it almost certainly hadn’t seen before”. So what if it hadn’t seen it before? Even if the AI engine were stupid (and why would it be?) - at the very least, it could compare every possible fraction that fits the criteria and pick the best answer - and do so at lightning speed.
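In fact the search barely deserves the name: for each denominator b, the only numerator worth checking is the largest a with a/b strictly below ½, which is (b - 1) // 2, so there are only 10,000 candidates in total. Here is a minimal sketch in Python - purely to illustrate the brute-force point, not anything ChatGPT itself runs:

```python
from fractions import Fraction

LIMIT = 10_000          # numerator and denominator must be <= 10,000
best = Fraction(0, 1)

for b in range(2, LIMIT + 1):
    a = (b - 1) // 2    # largest numerator with a/b strictly below 1/2
    candidate = Fraction(a, b)
    if candidate > best:
        best = candidate

print(best)             # prints 4999/9999
```

Ten thousand exact-fraction comparisons is nothing for a computer, which is exactly the point.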
I agree with the basic premise that if humans are to compete with AI, we must look for whatever advantages we have as humans - but solving a simple math problem is not going to be one of them!