@GossiTheDog I was really (positively) surprised that Anthropic released this. Their results broadly align with my personal experience and feelings about AI usage, though I have substantial misgivings about their methodology (in particular how they controlled for participant experience).
Another issue I worry a lot about with AI is second-order effects: you use an LLM to write a library, and then you're expected to evaluate the LLM's ability to use that LLM-written library. I think this is both a priori very hard (you'd need to know all the details of the library to catch issues) and lossy (you lose the problems in library design and abstraction factoring that would normally be surfaced by implementing and using it yourself).