• ngburke 18 minutes ago

Spot on. All those years of slinging code and debugging gave me and others the judgment and the eye to check all the AI-generated code. I now often wonder what hiring looks like in this new era. As a small startup, we just don't need junior engineers to do the day-to-day implementation.

Do we instead hire a small number of people as apprentices to train on the high-level patterns, spot trouble areas, and develop good 'taste' for clean software? Teach them what well-organized, modular software looks like on the surface? How to spot redundancy? When to push the AI to examine an area for design issues, testability, or security gaps? Not sure how to train people in this new era; would love to hear other perspectives.

• datadrivenangel 14 minutes ago

Demand for software is large, and as the cost goes down we'll want more of it, so there will be demand to keep training people.

• wolttam 2 hours ago

I feel like some of the data in this is horrendously out of date. They're referencing articles from the end of 2024.

There was a massive step-change in the capability of these models towards the end of 2025.

There is just no way that an experienced developer should be slower using the current tools. That doesn't match my experience at all.

The title of the article, though - absolutely true IMO

• Esophagus4 an hour ago

Yeah…

> For tasks that would take a human under four minutes—small bug fixes, boilerplate, simple implementations—AI can now do these with near-100% success. For tasks that would take a human around one hour, AI has a roughly 50% success rate. For tasks over four hours, it comes in below a 10% success rate

Opus 4.6 now does 12hr tasks with 50% success. The METR time horizon chart is insane… exponential progression.

• indoordin0saur an hour ago

Really depends on what you're working on. I work with a lot of data frameworks that are maybe underrepresented in these models' training sets, and they still tend to get things wrong. The other issue is that business logic is complex to describe in a prompt, to the point where giving the model all the context and business logic it needs to succeed is almost as much work as doing the task myself. As a data engineer, I still only find models useful for small chunks of code or for filling in tedious boilerplate to get things moving.

• blonder 37 minutes ago

Agreed. For common use cases, like creating a simple LMS system, Opus is shockingly good, saving hours upon hours of reinventing the wheel. At other things, like simple queries to and interactions with our ERP system, it is still quite poor, and it increases development time rather than shortening it.

• alistairSH 25 minutes ago

How is success defined in those metrics? Is success "perfect — can deploy to prod immediately" or "saved some arbitrary amount of engineering time"?

Anecdotal experience from my team of 15 engineers is that we rarely get "perfect," but we do get enough to see massive time savings across several common problem domains.