BTW, it took GPT-4o less than a second to produce that answer.
Well, the speed is no surprise at all. I don’t think any reasonable person would doubt AI is superior in that regard. :-)
The speed is impressive, but I would be interested to know what sort of internal caching GPT-4o does for questions and answers.
Since your relatively sophisticated question, "Explain why increasing the minimum wage hurts entry level workers", is on a topic of current interest, it is possible that the answer is already in GPT's cache and quickly accessible via the three important linguistic components "Subject: increasing [the] minimum wage", "Verb: hurts" and "Object: entry level workers". (Computer parsing of such sentences has been around for many years.)
In other words, GPT-4o might be able to give the answer without consulting its full database and performing complicated analyses on what it finds therein.
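To make the speculation concrete, here is a toy sketch of what a cache keyed on those three linguistic components might look like. This is purely illustrative guesswork, not anything known about GPT-4o's internals; every name in it is invented.

```python
# Hypothetical answer cache keyed on a normalized (subject, verb, object)
# triple. Pure speculation for illustration -- no relation to how GPT-4o
# actually works.

def svo_key(subject: str, verb: str, obj: str) -> tuple:
    """Normalize an SVO triple into a cache key (lowercase, articles dropped)."""
    def norm(phrase: str) -> str:
        words = [w for w in phrase.lower().split() if w not in {"the", "a", "an"}]
        return " ".join(words)
    return (norm(subject), norm(verb), norm(obj))

class AnswerCache:
    def __init__(self):
        self._store = {}

    def put(self, subject, verb, obj, answer):
        self._store[svo_key(subject, verb, obj)] = answer

    def get(self, subject, verb, obj):
        return self._store.get(svo_key(subject, verb, obj))

cache = AnswerCache()
cache.put("increasing the minimum wage", "hurts", "entry level workers",
          "previously computed explanation ...")

# A re-worded query that differs only in articles maps to the same key,
# so it can be answered without any fresh analysis:
hit = cache.get("increasing minimum wage", "hurts", "the entry level workers")
```

In this sketch a query like the one above would be a cache hit even when rephrased slightly, which would explain sub-second responses for popular topics -- if anything like this is actually done.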
It would also be interesting to know if GPT-4o makes use of a sophisticated CDN (Content Delivery Network) to offload -- and thus speed up -- some of the "simpler" queries to regional caches. (This has been done for years for video streaming and other data-hungry services.)
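The regional-cache idea can be sketched in a few lines as well: a regional node answers repeat queries locally and only falls back to the origin on a miss. Again, this is a generic caching pattern, not a claim about OpenAI's infrastructure; all names are made up.

```python
# Toy model of a regional cache in front of an expensive origin.
# Generic CDN-style pattern for illustration only.

class Origin:
    """Stands in for the expensive back end (e.g. full model inference)."""
    def __init__(self):
        self.requests = 0  # count how often we pay the expensive cost

    def answer(self, query: str) -> str:
        self.requests += 1
        return f"answer to: {query}"

class RegionalCache:
    """A regional node that serves repeat queries without hitting the origin."""
    def __init__(self, origin: Origin):
        self.origin = origin
        self._cache = {}

    def answer(self, query: str) -> str:
        if query not in self._cache:                 # miss: forward upstream
            self._cache[query] = self.origin.answer(query)
        return self._cache[query]                    # hit: served regionally

origin = Origin()
us_east = RegionalCache(origin)
first = us_east.answer("why does raising the minimum wage matter?")
second = us_east.answer("why does raising the minimum wage matter?")
```

After the two calls above, the origin has only been consulted once; the second response came straight from the regional node, which is exactly the speedup the video-streaming services rely on.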