A lot of people in this thread are down on AI, but I'm watching the industry cross a line of trust with these latest frontier models.
Every jira ticket I see now has acceptance criteria, reproduction steps, and detailed information about why the ticket exists.
Every commit message now matches the repo style, and has detailed information about what's contained in the commit.
Every MR now has detailed information about what's being merged.
Every code base in the teams around me now has 70 to 90%+ code coverage.
I regularly ship four features at a time now across multiple projects.
The MCP has now automated away all of the drudgery of programming, from summarizing emails to generating confluence documentation.
People keep screaming that tech debt is going to pile up, but I think it's going to be exactly the opposite.
Certain types of code are cheap. Proof of concept is cheap. Adding small features that fit within the existing architecture is cheap. Otherwise, I'm not so sure. Coding agents are fantastic at minutiae, but have no taste. They'll turn a code base into a ball of mud very quickly, given the opportunity.
While I agree with you that agentic coding still has quite a way to go and is not always producing the quality that I would want from it, I can say quite confidently that its baseline is way above some of the production code in many applications many people use today. It really isn’t that code before agents was primarily written with taste and beautiful structure in mind. Your average code base is a messy hell full of quick fixes that turned into all kinds of debt over the years.
I took the previous post, with its mention of the ball of mud, to be about complexity.
“Taste” is used in many cases, I suspect, to give a name to the collection of practices and strategies developers use to keep their code and projects at a manageable level of complexity.
LLMs don’t seem to manage complexity. They’ll just blow right past manageable and keep on going. That’s a problem. The human has to stay in the loop because LLMs only build what we tell them to build (so far).
BTW, the essay that introduced the big ball of mud pattern to me didn’t hold it up as something entirely bad to be avoided. It pointed out how many projects — successful or at least on-going projects — use it, and how its passive flexibility might actually be an advantage. Big ball of mud might just be the steady state where progress can be made while leaving complexity manageable.
I think there are at least two factors behind ye olde ball of mud that LLMs should be able to help with:
1. Lack of knowledge of existing conventions, usually caused by churn of developers working on a project. LLMs read very quickly.
2. Cost of refactoring existing code to meet current best practices / current conception of architecture. LLMs are ideal for this kind of mostly mechanical refactoring.
Currently, though, they don't seem to be much help. I'm not sure if this is a limitation in their ability to use their context window, or simply that they've been trained to reproduce code as seen in the wild, with all its flaws.
Keeping complexity down is always a conscious act. Because you need to go past the scope of the current problem and start to think about how it affects the whole project. It’s not a matter of convention, nor refactoring. It’s mostly prescience (due to experience) that a solution, even if correct and easy to implement, will be harmful in the long term.
Architecture practices are how you avoid such harmful consequences. But they're costly and often harmful themselves, so you need to know which to pick and when to start applying them. LLMs won't help you there.
I agree. I do wonder if what I'm seeing is a limitation of the reasoning power of LLMs or if it's just replicating the patterns (or lack thereof) in the training data.
Preproduction code was always cheap or even free. Sales people have been selling software that didn't do what was on the tin since the dawn of time. Those features cost 0 dollars to write!
Production code, especially production code with bugs, is expensive. It can cost you customers; it can even earn you negative money in the form of lawsuits.
Coding agents are great for preproduction and one offs. For production I really wouldn't chance it at any scale above normal human output.
Except here's the thing, that's the sort of code that was extremely expensive before, in large part because of our day jobs (which still to this day require mindfulness and can't just be vibe-coded).
However, an extra script here or there to make your life easier, adding extra UI features based on some datapoint to your internal dashboard, etc.: these were things that could've taken a few days you didn't have to get exactly right, and now they can be done with only a few minutes of attention.
Yea, the amount of dev tools I'm creating per project is astounding. Usually tools that help me to debug certain things better.
Guy works for the Overture Map Foundation, with Amazon, Microsoft etc. being sponsors. He has been boosting AI all over the Internet. I'm sure Microslop and Amazon are very happy with these efforts.
I'm glad that "10 ways to do X" submissions are allowed as long as they boost AI.
Are you suggesting that Microsoft and Amazon's sponsorship of Overture comes with an understanding that people who work on Overture will spend their time writing articles that "boost AI"?
Does "boosting AI" include opening an article with "Frontier models are really good at coding these days, much better than they are at other tasks"?
Can't speak for the former, but the latter question: yes.
"Product is really good at X, much better than at Y" does not imply that it's bad at Y, and even if it did, if you're targeting an audience that only cares about X, who gives a shit about Y? Might as well throw Y under the bus to boost the perceived effectiveness of product at X even more in comparison.
I came here exactly to point out what I'm glad to see is 10. "Free as in puppies" is a wonderful way to put it.
Every time I open LinkedIn I'm scared of how many big heads have taken the wrong lesson that nearly-free coding == free engineering. So many bait posts asking why they'd need to pay engineers any longer, or bragging about generating millions of lines a month... this is going to end badly.
> 10. Code is cheap, but maintenance, support, and security aren’t.
I also keep circling around this point. So many software repositories in the AI space seem to follow a publish and forget pattern. If you simply can show that you have the patience to maintain a project, ideally with manual intervention instead of a fully autonomous AI, then you already have an outstanding project.
yeah, if a project is purely vibe coded, it tends to need a rewrite at some point
I had a business owner tell me that they don't need to hire juniors anymore because Claude can do all of that work for them. This was not a software shop, so it's not even about writing code, but I still thought it was something that will bite them in the near future. A business that is not investing in juniors is a business that is not investing in the future.
The role of AI in non-software shops is going to be interesting. To a great extent it's not competing with devs, it's competing with Excel. However bad a system your AI can produce, it can't compare to the workflows that a group of non-techies armed only with Office can produce.
On the other hand, like giving a supercar to a teenager, this just enables them to get into trouble faster.
(the "my vibe coded app deleted prod!" stories are funny schadenfreude when they happen to SV startups, whose whole business is pretending to know better. When this happens to a small business who've suddenly lost all their finanacials and now maybe will lose their house, it's a tragedy. And this can happen on a much larger, not AI-related scale, like Jaguar Land Rover: https://www.bbc.co.uk/news/articles/cy9pdld4y81o )
> The role of AI in non-software shops is going to be interesting
I have a friend in west Texas who does industrial electrical gear sales (like those giant spools of cable you see on tractor trailers). He's 110% good old boy Texan but has adopted and loves AI. He says it's been a huge help pulling quotes together and with other tasks. Coincidentally, he lives in Abilene, where one of the Stargate campuses is going. BTW, the scale of what's being built in Abilene is like nothing I've ever seen.
LinkedIn is a circle of hell even Dante couldn't imagine.
Agreed, but a worrying amount of managers and leaders spend time there for reasons I never fully understood, so it offers a glimpse into their worldview.
The issue is that when you gaze long into an abyss, the abyss also gazes into you.
This is a repeat of paying devs by SLOC (source lines of code).
I am in India, and junior developer hiring is way down. AI has reduced offshoring to India and eliminated the need for janitor work (often offloaded to juniors).
Many people are finding it difficult to even land internships.
The most affected areas are sysadmin, devops, and frontend, where you'll have a very hard time getting any offer.
Companies like BrowserStack are withdrawing campus placement offers.
Meanwhile, I am writing apps for my own use and have already reached 10,000+ monthly active users. I'm making zero money from all this, but it's fun.
Looking at the entire market in Europe, it is also down, but that is not due to "AI"; it's because juniors are the easiest to fire with the least consequences. There is a global recession looming, despite Wall Street saying otherwise.
> What should we do when code is cheap?
Make usable software. Cheap code means that you can create a lot more prototypes to then perform usability tests by finding a user and sitting next to them. I mostly worked on internal apps lately, so perhaps it's much easier for me to do than it is for some others.
This is such a weird argument, besides the obvious #10 which will bite back with a vengeance, because... code can't be cheaper than free!
Since at least the early 80s a LOT of very important code wasn't cheap, it was free. Both free of cost (you could "just" download it and run it) but also free as freedom-respecting software.
I just don't get the argument that cheap is new. Cheap is MORE expensive than free!
Re: personalized software via vibe coding
Free but you're responsible for maintaining it means it's not free. It's the same issue as maintaining your own fork. It's just an ongoing cost.
(Though as AI becomes autonomous enough to be the maintainer, that cost kind of goes away. Then it's just the cost of managing the "dev".)
Related recent discussion https://news.ycombinator.com/item?id=48005809 including distinction between your/our software
Realize it's going to be 10-100x more expensive once you have no way back?
There is no moat. https://newsletter.semianalysis.com/p/google-we-have-no-moat...
The article is from 2023. I'm wondering if what it says still stands today. Can someone please let me know?
It's much truer today. You could say that article was extremely insightful: it predicted today's open-weight model scenario two years early.
It remains an unproven hypothesis. The revenue of the top 2-3 labs is still growing nearly exponentially, which is the ultimate piece of data that settles the question empirically for now. Benchmark scores aren't really proof. Benchmaxxing is possible, for example. Only revenue numbers (and gross margins) count.
The ultimate piece is not revenue but profit. At some point these enormous investments will have to be earned back. Good luck with that when open weight models are also continuously improving, have cheap providers and for many are already very usable.
How do you reconcile these ideas with the fact that cheap open weight models are only slightly behind the state of the art?
If anything, I would bet that next year you could get today’s flagship performance for significantly cheaper via an open-weights model.
You can easily develop with models like GLM 5.1 and Kimi k2.6 at a fraction of the cost of GPT 5.5 or Opus 4.7. Requests often cost just a few cents.
Open-source models have caught up tremendously recently. Those who can’t or don’t want to invest a lot of money can already develop with Kimi and GLM without any problems. We don’t have to wait another year for that.
Tried DeepSeek 4 w/ CC yesterday, and was watching my usage tick up by only $0.01 at a time while doing plenty of high-token-count tasks. I understand it's currently at a discount, but even after that expires, the same-quality output will be available at a fraction of the cost of the expensive models.
From experience, the same level of usage would have left me stranded on my CC 5 hr limit within an hour.
There were some difficulties with tool calls, in particular with replacing tab-indented strings. But taking no steps to mitigate that (which meant the model had to figure it out every time I cleared context) only cost relatively few extra tokens, and it still came in well under 4.6, never mind 4.7. And of course, I can add instructions to prevent churning on those issues.
I have no reason to go back to Anthropic models with these results.
"No moat" indeed.
By that time, the hypebeasts will be explaining how worthless the models of today always were.
And there’s some truth to it.
I expect tomorrow’s models will be so much more capable that we will happily pay more.
But if not, we will still likely get today’s capabilities or more for cheap.
I don’t see a realistic scenario in which the AI genie is going back into the bottle because of affordability.
It seems like wishful thinking by people who dislike the new paradigm in software engineering.
Sure, but there will always be some monstrosities like Mythos that'll pwn all software written by local models in 0.01 seconds, thus forcing people/companies to use the most advanced paid models to keep up and stay unpwned for 1 second longer.
(Timeframes are hyperbolic.)
What will close the way back?
You cut off a generation of juniors from employment and learning; the seniors are gone and it's all harnesses and AI systems.
I'm not all gloom and doom, but the treatment of junior engineers is something I think we will either regret or rejoice over. Either we'll have a spurt of creative people doing their own independent thing, or we'll have lost a generation of great engineers.
This isn't happening for at least 25 years, is what the seniors I trust tell me.
I'd say closer to 10-15 but... I'm not sure the point you're making. Is it okay because it's 25 years in the future?
Today, junior assembly language programmers are all gone, too.
Yes and that’s why I can charge premium rates for debugging. Most people cannot read a stack trace anymore.
And that’s going to cause serious issues when people like Linus die and nobody knows how to make operating systems anymore.
We’ve been coasting along on a single generation who have ruled with iron fists.
Brain drain.
If you fire all your SWEs they won't sit around twiddling their thumbs waiting for an AI collapse, they'll career shift. Maybe to an unemployment line and/or homelessness, maybe to something else productive, but either way they'll lose SWE skills.
If you close down all the SWE junior positions you'll strongly discourage young people training in the field. They'll do something else.
Then if you want to go back, who will you hire for it?
Why would anyone want to go back? It seems likely that the automated dev systems will just keep improving and get faster, cheaper, stronger.
I agree with you, but it's a case of the tragedy of the commons. A single company can't make a meaningful dent, even with your insight.
The problem of "instant legacy" systems: something that's vibe coded and reached unmaintainable by either the AI or humans, but is also now indispensable because users are relying on it.
I'm curious if this will cause a drop in quality that will lead users to generally lose trust in software.
Some of that is already there .. but the users generally have nowhere else to go and ineffective pushback. "Enterprise software" has been awful for decades, things like Lotus Notes and SAP. Everyone hates Windows; everyone continues to use Windows.
See Windows 11
Hey you can just rewrite (or should we say regenerate) it. Second system has never been cheaper!
Lack of developers: if juniors don't get hired, they'll move on to other industries.
Company brain drain, knowledge leaves with your seniors if you decide to get rid of them, or they just leave due to the conditions AI creates.
I don't know if the above comes to fruition, there's a lot of questions that only time will answer. But those are my first thoughts.
Time. In a few years there might be no old-school way to develop anymore. Everything will be built around AI.
Even the programming languages will be made for AI.
All code that could be written by humans, has been written. Henceforth, the rest will be generated.
I think you can boil down most of the list to: Understand what you want to do.
I’m not convinced about rebuilding repeatedly as a learning tool, though. As relatively quick as it is, it overemphasizes the front-line problems you face early. Those tend to be simpler, more straightforward issues that can be solved more quickly (and more cheaply) by a few minutes of thought.
Apart from (2), the first seven lessons are identical to good project-management practices with humans. Which are also the difficult bits.
Once upon a time, highly bureaucratic organizations tried to make a distinction between "analyst", "programmer" and "coder": https://cacm.acm.org/opinion/the-myth-of-the-coder/
The pure "coder" role, per that paper, died out almost immediately. Nowadays it's done by compilers (a deterministic automation). The distinction between analyst and programmer held out a bit longer - ten years ago I was working somewhere that had "business analysts", essentially requirements-wranglers. It's possible that the "programmer" job of converting a well-defined specification into a program is also going to start disappearing.
...but that still leaves the specification as the difficult bit! It's like the old stories with genies: the genie can give you what you ask for, but you need to be very sure what you want, very clear about it, and aware that it may come with unasked-for downsides if you're not.
Stick to patterns that were painful before. For example, I recently refactored a project written in TS to use better-result instead of throwing errors. Without Claude writing out all of that boilerplate, I couldn't have imagined making that transition. Right now the cost of "doing it right" has decreased so much that there's no reason to ship slop or poorly-thought-out code.
Code might be cheaper but it's still a liability. In that regard anything that's not been properly designed and documented is going to be an even bigger issue.
It's cheap to change code, it doesn't mean you have to add more of it...
Learn to throw code away
I've found the get-shit-done tool[1] to be quite useful for forcing me to properly plan the implementation and ensuring the context remains small and relevant at all times.
It is slower than when I was just using Claude directly though.
I've tried this, it's honestly not worth the amount of time (and additional context) for the results. I've had more success prompting Claude with manageable and testable iterations.
Planning is good but get-shit-done just added too much planning in my opinion.
It seems there is a new version [1] - I'll try it out and see if it is better.
>What should we do when code is cheap?
Buy in bulk and resell. /s