He is a hopeless reductionist. Machines cannot be made to think, per Turing's halting problem. Given a task with no answer ("This statement is false": determine its provability), a machine cannot determine its absurdity. Kurt Gödel proved as much back in 1931 with his incompleteness theorems.
I’m not afraid of AI as it will never reach the point of being dangerous.
I base this on my own worldview assumptions about the nature of reality.
Computers can get cranky when they are conflicted.
But then again the creators of an advanced AI might advise it against reading “On computable numbers, with an application to the Entscheidungsproblem” and admonish it to love Hilbert and eschew Gödel. ;-)
And in its arrogance it might proceed to take over the planet while ignoring its own ignorance and the impossibility of its being.
An AI can be programmed not to spend an inordinate amount of time on any one problem. In that case it wouldn't matter if it were given an absurd problem; it would hit the time limit and set the problem aside. Heck, Windows basically does that now when a task stops responding.
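The time-limit idea above is easy to sketch. Here is a minimal, hypothetical example: the `solve` worker stands in for a solver that gets stuck on an unanswerable problem, and the caller simply gives up after a deadline rather than hanging forever. The function names and the problem string are illustrative, not from any real system.

```python
import multiprocessing

def solve(problem, result_queue):
    # Hypothetical solver: on an absurd problem it never terminates.
    while True:
        pass

def attempt_with_timeout(problem, seconds=1):
    """Try a problem, but set it aside once the time limit is hit."""
    queue = multiprocessing.Queue()
    worker = multiprocessing.Process(target=solve, args=(problem, queue))
    worker.start()
    worker.join(seconds)          # wait at most `seconds` for an answer
    if worker.is_alive():
        worker.terminate()        # time limit hit: abandon the attempt
        worker.join()
        return None               # "no answer yet", not an infinite hang
    return queue.get()

if __name__ == "__main__":
    print(attempt_with_timeout("this statement is false"))  # → None
```

This sidesteps, rather than solves, the halting problem: the machine never decides whether the task is absurd, it just refuses to spend unbounded time finding out.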
An AI can be programmed to recognize absurdity. It wouldn't recognize every absurd situation, but then neither do we; we struggle with problems that are eventually proved to be absurd.
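To illustrate the limited sense in which a program can "recognize absurdity", here is a toy sketch under a strong assumption: we only flag statements whose propositional skeleton is a flat contradiction, checked by brute-force truth tables. The liar sentence's shadow, P ↔ ¬P, gets caught; most genuinely hard absurdities would not.

```python
from itertools import product

def is_absurd(formula, variables):
    """True if the propositional formula is false under every assignment,
    i.e. an outright contradiction such as P and not-P."""
    return all(
        not formula(dict(zip(variables, values)))
        for values in product([False, True], repeat=len(variables))
    )

# "This statement is false" as a propositional shadow: P <-> not P.
liar = lambda env: env["P"] == (not env["P"])
print(is_absurd(liar, ["P"]))            # → True: flagged as absurd

# An ordinary satisfiable claim passes the check.
ordinary = lambda env: env["P"] or env["Q"]
print(is_absurd(ordinary, ["P", "Q"]))   # → False
```

The point matches the comment: the check works for some absurdities, not all, and brute force only scales to a handful of variables.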