@shanselman Following up on the "senior vs EiC" framing: I think domain experts deserve a closer look here. Someone with deep domain knowledge who can produce specs that verify outcomes in a consistent, non-gameable way (think end-to-end acceptance criteria where the testing logic is not leaked to the LLM, so it cannot optimize for passing while missing the point) is arguably doing the hardest part of the job. They are defining what correct looks like under real conditions.

That is the exact skill the paper identifies as the bottleneck: verification. But the paper roots that skill in engineering seniority, when in practice it often lives with the person who knows the domain cold. Implementation is what AI is getting good at. Knowing whether the result actually solves the real problem is not an engineering judgment call; it is a domain judgment call.

The concession is that for things like concurrency, security architecture, and systems design, domain knowledge alone is not enough. But for a large share of actual product work, the person who can say "here is what done looks like, prove it without seeing my rubric" is more valuable than the person who can write the code. The paper's hierarchy flips in those contexts.

So the talent model is not a pyramid with seniors at the top and EiCs at the bottom. It is more like a matrix where domain depth and engineering depth are separate axes, and AI compresses the engineering axis while making the domain axis more important than ever.
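To make the "testing logic not leaked" idea concrete, here is a minimal sketch. Everything in it is hypothetical (the function names, the discount policy, the tolerances); the point is only the shape: the implementer sees a public spec, while the domain expert's acceptance rubric stays held out, so a solution can pass only by actually meeting the outcome, not by pattern-matching the checks.

```python
# Sketch of a held-out acceptance check. The domain expert's rubric
# lives in a verifier the implementer (human or LLM) never sees;
# the implementer only gets the public spec.

def public_spec() -> str:
    """What the implementer is shown."""
    return "Given an order total and a customer tier, return the discounted total."

def hidden_rubric(discount_fn) -> bool:
    """Held-out acceptance criteria, written by the domain expert.

    The cases and the 10% gold discount are illustrative assumptions,
    not a real pricing policy.
    """
    cases = [
        ((100.0, "standard"), 100.0),  # no discount for standard tier
        ((100.0, "gold"), 90.0),       # assumed 10% gold discount
        ((0.0, "gold"), 0.0),          # edge case: empty order
    ]
    return all(
        abs(discount_fn(total, tier) - want) < 1e-9
        for (total, tier), want in cases
    )

# A candidate implementation written against the public spec only:
def candidate(total: float, tier: str) -> float:
    return total * (0.9 if tier == "gold" else 1.0)

print(hidden_rubric(candidate))  # True: it passes without ever seeing the rubric
```

The asymmetry is the whole point: the rubric encodes domain judgment ("what does done look like, including the edge cases"), and because the candidate never sees it, passing is evidence of solving the real problem rather than gaming the test.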