Do you see that playing out any other way?
Yes.
That's the static view: AI does the work previously done by staff, so fewer people are required. The boss picks up any remaining slack.
But that's not how it plays out in genuinely competitive fields.
AI doesn't just help a securities analyst do his existing job faster. It opens up information horizons that didn't exist before. He can now track operational metrics, supply chain signals, and industry dynamics at a level of granularity that was simply out of reach. That's not the same work done more efficiently; it's a fundamentally larger canvas. The firm that cuts its junior staff to pocket the savings will be outcompeted by the firm that redeploys that capacity into the new territory AI has made accessible.
Same in law. AI doesn't just help a defense attorney process the same evidence faster. It surfaces connections across case law, forensic data, and precedent that no junior associate could have found in time. The attorney who uses AI to shrink his team will eventually face one who uses it to build a deeper, more comprehensive case.
The zero-sum assumption treats the work as fixed. In competitive professional fields, AI makes the work larger. The firms and attorneys and analysts who grasp that will eat the lunch of the ones who just used it to cut headcount.
You are describing the well-known "Luddite fallacy" (closely related to the "lump of labor" fallacy). The fallacy is the mistaken belief that technological progress and automation permanently destroy more jobs than they create, leading to long-term structural unemployment. It assumes there's a fixed "lump" of work to be done in the economy, so if machines take over tasks, humans must lose out overall.
We shall see. I believe that is probably true out to maybe 10 years, but beyond that, all bets are off.
But how soon does the Law of Diminishing Returns kick in with AI?