But how soon does the Law of Diminishing Returns kick in with AI?
The problem with AI is that it is frontier country. All we can do is speculate as to what's coming.
That's an interesting question. I pondered it overnight.
I'm just thinking out loud, of course, but it occurred to me that there's a deeper point we might be missing.
AI, as we understand it right now, doesn't just help us process a fixed universe of information more efficiently; it expands the information universe itself. The more we understand, the more we find worth understanding. Complexity breeds complexity: human behavior, markets, institutions, and science all respond to new knowledge by generating new questions.
Think of it like a microscope that keeps getting more powerful. Diminishing returns don't kick in until either we can no longer build a better microscope or we've seen everything there is to see. We are nowhere near either condition. Every order-of-magnitude improvement in resolution has historically opened up entirely new fields of inquiry rather than closing old ones down: bacteriology, virology, molecular biology, nanotechnology. Each frontier seemed like it might be the last. None of them were.
AI looks like the same dynamic operating across every domain of human knowledge simultaneously. The point at which marginal returns flatten keeps moving outward, because understanding the world at a deeper level reveals a world that is deeper than we thought.
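The curve-shifting intuition above can be sketched as a toy model. Everything here is illustrative and assumed, not taken from the discussion: a logarithmic function stands in for classic diminishing returns on a fixed stock of knowledge, and a made-up `growth` parameter stands in for the rate at which new knowledge enlarges the frontier itself.

```python
import math

def static_returns(effort: int) -> float:
    # Classic diminishing returns: the universe of things worth
    # knowing is fixed, so each unit of effort yields less than
    # the one before (logarithmic is a stand-in, not a claim).
    return math.log1p(effort)

def expanding_returns(effort: int, growth: float = 0.1) -> float:
    # Toy "expanding universe" model (an assumption for illustration):
    # each gain in knowledge also enlarges the frontier of questions,
    # which props up the marginal return instead of letting it decay
    # toward zero.
    knowledge = 0.0
    frontier = 1.0
    for _ in range(effort):
        gain = frontier / (1.0 + knowledge)  # marginal return shrinks with knowledge...
        knowledge += gain
        frontier += growth * gain            # ...but new knowledge reveals new questions
    return knowledge

if __name__ == "__main__":
    # In the static model the marginal gain decays toward zero; in the
    # expanding model it settles near a floor set by the growth rate.
    print(static_returns(100), expanding_returns(100))
```

Nothing about the numbers matters; the point is qualitative. In the static model the hundredth unit of effort buys almost nothing, while in the expanding model the frontier's growth keeps the marginal return from collapsing, which is the "flat part keeps moving outward" picture in prose.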