• bvirb 23 minutes ago

When we (the engineering team I work on) started using agents more seriously we were worried about this: that we'd speed up coding time but slow down review time and just end up increasing cycle time.

So far there's no obvious change one way or the other, but it hasn't been very long and everyone is in various states of figuring out their new workflows, so I don't think we have enough data for things to average out yet.

We're finding cases where fast coding really does seem to be super helpful though:

* Experimenting with ideas/refactors to see how they'll play out (often the agent can just tell you how it's going to play out)

* Complex tedious replacements (the kind of stuff you can't find/replace because it's contextual)

* Times where the path forward is simple but also a lot of work (tedious stuff)

* Dealing with edge cases after building the happy path

* EDIT: One more huge one I would add: anywhere the thing you're adding is a close analogue of another branch/PR, the agent seems to do great (which is like a "simple but tedious" case)

The single biggest potential productivity gain though I think is being able to do something else while the agent is coding, like you can go review a PR and then when you come back check out what the agent produced.

I would say we've gone from being extremely skeptical to cautiously excited. I think it's far fetched that we'll see any order of magnitude differences, we're hoping for 2x (which would be huge!).

• webdood90 11 minutes ago

> The single biggest potential productivity gain though I think is being able to do something else while the agent is coding, like you can go review a PR and then when you come back check out what the agent produced

Ugh, sounds awful. Constantly context switching and juggling multiple tasks is a sure-fire way to burn me out.

The human element in all of this never seems to be discussed. Maybe this will weed out those that are unworthy of the new process but I simply don't want to be "on" all the time. I don't want to be optimized like this.

• whateverboat 7 minutes ago

When you are solving a problem, you are rarely solving a single problem at a time. Even in a single task, there are 4-5 tasks hidden inside it. You could easily put the agent on one task while you do another.

Ask it to implement a simple HTTP PUT/GET client with some authentication, an interface, and logging, for example, while you work out the protocol.
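A sketch of the kind of scaffold I mean (the class name, endpoint shape, and basic-auth choice are all made up for illustration; stdlib only):

```python
# The kind of grunt-work task you might hand to an agent:
# a tiny PUT/GET client with basic auth and logging.
import base64
import json
import logging
import urllib.request

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("client")

class SimpleClient:
    def __init__(self, base_url: str, user: str, password: str):
        self.base_url = base_url.rstrip("/")
        token = base64.b64encode(f"{user}:{password}".encode()).decode()
        self.headers = {"Authorization": f"Basic {token}"}

    def _request(self, method: str, path: str, body=None):
        data = json.dumps(body).encode() if body is not None else None
        req = urllib.request.Request(
            f"{self.base_url}/{path.lstrip('/')}",
            data=data,
            method=method,
            headers={**self.headers, "Content-Type": "application/json"},
        )
        log.info("%s %s", method, req.full_url)
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read() or "null")

    def get(self, path):
        return self._request("GET", path)

    def put(self, path, body):
        return self._request("PUT", path, body)
```

Nothing here requires design decisions from you; that's exactly why it's delegable while you think about the protocol.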

• thepasch 13 minutes ago

> Someone approves a PR they didn’t really read. We’ve all done it (don’t look at me like that). It merges. CI takes 45 minutes, fails on a flaky test, gets re-run, passes on the second attempt (the flaky test is fine, it’s always fine, until it isn’t and you’re debugging production at 2am on a Saturday in your underwear wondering where your life went wrong. Ask me how I know… actually, don’t). The deploy pipeline requires a manual approval from someone who’s in a meeting about meetings. The feature sits in staging for three days because nobody owns the “get it to production” step with any urgency.

This is the company I (soon no longer) work at.

The thing is that they don’t even allow the use of AI. I’ve been assured that the vast majority of the code was human-written. I have my doubts but the timeline does check out.

Apart from that, this article uses a lot of words to completely miss the fact that (A) “use agents to generate code” and “optimize your processes” are not mutually exclusive things; (B) sometimes, for some tickets - particularly ones stakeholders like to slide in unrefined a week before the sprint ends - the code IS the bottleneck, and the sooner you can get the hell off of that trivial but code-heavy ticket, the sooner you can get back to spending time on the actual problems; and (C) doing all of this is a good idea completely regardless of whether you use LLMs or not; and anyone who doesn’t do any of it and thinks the solution is to just hire more devs will run into the exact same roadblocks.

• furyofantares an hour ago

> The bottleneck is understanding the problem. No amount of faster typing fixes that.

Why not? Why can't faster typing help us understand the problem faster?

> When you speed up code output in this environment, you are speeding up the rate at which you build the wrong thing.

Why can't we figure out the right thing faster by building the wrong thing faster? Presumably we were gonna build the wrong thing either way in this example, weren't we?

I often build something to figure out what I want, and that's only become more true the cheaper it is to build a prototype version of a thing.

> You will build the wrong feature faster, ship it, watch it fail, and then do a retro where someone says "we need to talk to users more" and everyone nods solemnly and then absolutely nothing changes.

I guess because we're just cynical.

• bob1029 an hour ago

> Why can't we figure out the right thing faster by building the wrong thing faster?

Because usually the customer can only tolerate so many failed attempts per unit of time. Running your fitness function is often very expensive in terms of other people's time.

This is easily the biggest bottleneck in B2B/SaaS stuff for banking. You can iterate maybe once a week if you have a really, really good client.

• vidarh 31 minutes ago

The customer doesn't need to be shown every "wrong thing".

• elictronic 30 minutes ago

But think of the strawmen.

• senko an hour ago

> Why can't we figure out the right thing faster by building the wrong thing faster?

> Because usually the customer can only tolerate so many failed attempts per unit of time. Running your fitness function is often very expensive in terms of other people's time.

Heh, depends on what you do. Many times the stakeholders can't explain what they want but can clearly articulate what they don't want when they see it.

Generate a few alternatives, have them pick, is a tried and true method in design. It was way too expensive when coding was manual, so often you need multiple rounds of meetings and emails to align.

If you don't think coding was the bottleneck, you're not thinking creatively about what's only now possible.

It's not just what you can do faster (well, it is, up to a point), but also what you can now do that would have been positively insane and out of the question before.

• pmontra 39 minutes ago

That's done by arranging a demo (the very old way) or (better) by deploying to a staging server. The customer meets with you for a demo not very often, maybe once per month, or checks what's on the staging server maybe a couple of times per week. They have other things to do, so you cannot make them check your proposal multiple times per day. However I concede that if you are fast you can work for multiple customers at the same time and juggle their demos on the staging servers.

• skydhash 32 minutes ago

> Generate a few alternatives, have them pick, is a tried and true method in design. It was way too expensive when coding was manual, so often you need multiple rounds of meetings and emails to align.

Why do you need coding for those? You can doodle on a whiteboard for a lot of those discussions. I use Balsamiq[0] and I can produce a wireframe for a whole screen in minutes. Even faster than prompting.

> If you don't think coding was the bottleneck, you're not thinking creatively about what's only now possible.

If you think coding was a bottleneck, that means you spent too much time doing when you should have been thinking.

[0]: https://balsamiq.com/product/desktop/

• furyofantares an hour ago

That's fair. I'm usually my own customer.

• Bukhmanizer an hour ago

I think a lot of the discourse around LLMs fails because of organizational differences.

I work in science, and I’ve recently worked with a couple projects where they generated >20,000 LOC before even understanding what the project was supposed to be doing. All the scientists hated it and it didn’t do anything that it was supposed to. But I still felt like I was being “anti-ai” when criticizing it.

I understand that it’s way better when you deeply understand the problem and field though.

• Rapzid 41 minutes ago

I'm starting to see this. It's starting to seem like a lot of the people making the most specious, yet wild AI SDLC claims are:

* Hobbyist or people engaged in hobby and personal projects

* Startup bros; often pre-funding and pre-team

* Consultancies selling an AI SDLC that wasn't even possible 6 months ago as "the way; proven, facts!"

It's getting to the point I'd like people to disclose the size of the team and org they are applying these processes at LOL.

• doctorpangloss 33 minutes ago

You have it completely backwards.

Most Enterprise IT projects fail. Including at banks. They are extremely saleable though. They don't see things that are failures as failures. The metrics are not real. Contract renewals do not focus on objective metrics.

This is why you make "$1" with all your banking relationships and actually valuable tacit knowledge, until Accenture notices and makes bajillions, and now Anthropic makes bajillions. Look, I agree that you know a lot. That's not what I'm saying. I'm saying the thing you are describing as a bottleneck is actually the foundation of the business of the IT industry.

Another POV is, yeah, listen, the code speed matters a fucking lot. Everyone says it does, and it does. Jesus Christ.

• golergka 21 minutes ago

attempt != release to customer

when you're building a feature and have different ideas how to go about it, it's incredibly valuable to build them all, compare, and then build another, clean implementation based on all the insights

I used to do it before, but pretty rarely, only for the most important stuff. now I do it for basically everything. and while 2-4 agents are working on building these options, I have time to work on something else.

• onlyrealcuzzo an hour ago

AI is really good when:

1. you want something that's literally been done tons of times before, and it can literally just find it inside its compressed dataset

2. you want something and as long as it roughly is what you wanted, it's fine

It turns out, this is not the majority of software people are paying engineers to write.

And it turns out that actually writing the code is only part of what you're paying for - much smaller than most people think.

You are not paying your surgeon only to cut things.

You are not paying your engineer only to write code.

• closewith 34 minutes ago

> It turns out, this is not the majority of software people are paying engineers to write.

The above are definitely the majority of the software people are paying developers to write. By an order of magnitude.

The novel problems for customers who specifically care about code quality are probably under 1% of software written.

If you don't recognise this, you simply don't understand the industry you work in.

• onlyrealcuzzo 22 minutes ago

As it turns out - "just make this button green" - is not the majority of what people at FAANG are doing...

As it turns out - 4 years before LLMs - at least one of the FAANGs already had auto-complete so good it could do most of what LLMs can practically do in a gigantic context.

But, sure...

• closewith 11 minutes ago

Less than 1% of software developers work at FAANG.

• skydhash 21 minutes ago

Everyone has their own set of novel problems. And they use libraries and frameworks for the things outside that set. The average SaaS provider will not write its own OS, database, network protocols,... But it will have its own features, and while those may be similar to others', they're evolving in different environments and encounter different issues that need different solutions.

• slopinthebag 15 minutes ago

Non-novel problem != non-novel solution

Most problems are mostly non-novel but with a few added constraints, the combination of which can require a novel solution.

• closewith 10 minutes ago

Those are exactly the types of problems that LLMs excel at solving.

• slopinthebag 29 minutes ago

Actually the surgeon analogy is really good. Saying AI will replace programming is like saying an electric saw will replace surgeons because the hospital director can use it to cut into flesh.

• duskdozer 23 minutes ago

It's so much faster too! How many meters of flesh have you cut this month, and how are you working toward increasing that number?

• p-o an hour ago

> Why not? Why can't faster typing help us understand the problem faster?

Why can't you understand the problem faster by talking faster?

• margalabargala 40 minutes ago

Sometimes you can.

• hrmtst93837 34 minutes ago

Fast prototyping helps when the prototype forces contact with the problem, like users saying "nope" or the spec collapsing under a demo. If the loop is only you typing, debugging, and polishing, you're mostly making a bigger mess in the repo and convincing yourself that the mess taught you something.

Code is one way to ask a question, not proof that you asked a good one. Sometimes the best move is an annoying hour with the PM, the customer, or whoever wrote the ticket.

• supern0va 9 minutes ago

I completely agree with this. I actually spent some time recently working on the design for a project. This was a side thing I spent months thinking about in my spare time, eventually spec'ing an API and data model.

I only recently decided to take it on, given how capable Claude Code has become recently. It knocked out a working version of my backend pretty quickly, adhering to my spec, and then built a frontend.

The result? I realized pretty quickly that the (IMO) beautiful design just didn't actually work with how it made sense for the product to work. An hour with the prototype made it clear that I needed to redesign from the ground up around a different piece to make the user experience actually work intuitively.

If I had spent months of my spare time banging on that only to hit that wall, it would've been a much more demotivating experience. Instead, I was able to re-spec and spin up a much better version almost immediately.

• john_strinlai an hour ago

>Why not? Why can't faster typing help us understand the problem faster?

do you have an example (even a toy one) where typing faster would help you understand a problem faster?

• lgessler an hour ago

Has everyone always nailed their implementation of every program on the first try? Of course not. Probably what happens most times is you first complete something that sorta works and then iterate from there by modifying code, executing, observing, and looping back to the beginning. You can wonder about ultimately how much of your time/energy is consumed by the "typing code" part, and there's surely a wide range of variation there by individual and situation, but it's undeniable that it is a part of the core iteration loop for building software.

I don't understand why GP's comment is so controversial. GP is not denying that you should maybe think a little before your fingers hit the keyboard, as many commenters seem to suppose. Both can be true.

• nyeah an hour ago

That kind of thinking pops up very prominently in the article.

• intrasight an hour ago

Here's a literal toy one.

Build a toy car with square wheels and one with triangular wheels and one with round wheels and see which one rolls better.

The issue isn't "typing faster" it's "building faster".

• skydhash 41 minutes ago

No need to build three, you just have to quickly write a proof for which shapes can roll. You'll then spend x+y units of time, where y<<x, instead of 3*x units. We have stories that highlight the importance of thinking instead of blindly doing (sharpening the axe, $1 for pressing a button and $9999 for knowing which button to press).
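To be concrete, one sketch of what that "proof" could look like (my toy framing, not the only one): as a regular n-gon rolls on a flat surface, its hub height oscillates between the apothem and the circumradius, so the ratio of the two measures bumpiness; a circle is the limit where the ratio is 1.

```python
# Hub height of a rolling regular n-gon oscillates between the apothem
# (edge midpoint touching down) and the circumradius (vertex touching
# down). apothem / circumradius = cos(pi/n); 1.0 means perfectly smooth.
import math

def bumpiness(n: int) -> float:
    """apothem / circumradius for a regular n-gon."""
    return math.cos(math.pi / n)

for n in (3, 4, 6, 100):
    print(n, round(bumpiness(n), 3))
```

The triangle scores 0.5, the square about 0.707, and as n grows the ratio approaches 1: no need to glue wheels to three toy cars.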

• Supermancho 23 minutes ago

> quickly write a proof for which shapes can roll.

Writing the 3 are the proofs.

• observationist an hour ago

Sometimes articulating the problem is all you need to see what the solution is. Trying many things quickly can prime you to see what the viable path is going to be. Iterating fast can get you to a higher level of understanding than methodical, deliberative construction.

Nevertheless, it's a tool that should be used when it's useful, just like slower consideration can be used. Frontier LLMs can help significantly in either case.

• john_strinlai an hour ago

so, what i am gathering is that some people in this comment section read "typing faster" literally, while others read it as "iterating faster".

• observationist 36 minutes ago

"Code writing speed" is just a superficial dismissal of AI without consideration as to whether AI is being used well or poorly for the task at hand. Saying that AI is the same as making people type faster, or that AI only produces slop, etc, is a very self limiting mindset.

• jmulho an hour ago

I often understand problems by discussing them with AI (by typing prompts and reading the response). Typing or reading faster would make this faster.

• zabzonk an hour ago

> Why can't faster typing help us understand the problem faster?

Why can't standing on your head?

• margalabargala 36 minutes ago

Everyone has their own process.

• bayindirh 28 minutes ago

> Why not? Why can't faster typing help us understand the problem faster?

Sometimes you need to think slow to understand something. Offloading your thinking to a black box of numbers and accepting what it emits is not thinking slow (i.e. pondering and processing the problem at hand).

On the contrary, it's entering tunnel vision and brute forcing, i.e. shotgun coding.

• garrickvanburen 24 minutes ago

I'm reminded of the original Agile joke, "software you don't want in 30 days or less." Today it's "software you don't want in 5 days or less."

• mooreds an hour ago

> Why not? Why can't faster typing help us understand the problem faster?

I think we can, in some cases.

For instance, I prototyped a feature recently and tested an integration it enabled. It took me a few hours. There's no way I would have tried this, let alone succeeded, without opencode. Because I was testing functionality, I didn't care about other aspects: performance, maintainability, simplicity.

I was able to write a better description of the problem and assure the person working on it that the integration would work. This was valuable.

I immediately threw away that prototype code, though. See above aspects I just didn't need to think about.

• coldtea 40 minutes ago

>There's no way I would have tried this, let alone succeeded, without opencode

Sure there is.

You could have used Claude or Codex directly :)

• cdrnsf an hour ago

Because you're working on the implementation before you understand the problem?

• mooreds an hour ago

Ding ding ding!

The article talks about process flows and finding the bottleneck. That might be coding, but probably is not.

• nyeah an hour ago

"Why can't faster typing help us understand the problem faster?"

Because typing is not the same as understanding.

• coldtea 38 minutes ago

The typing referred to here is not "the typing part of coding" (fingers touching the keyboard), it's the whole coding (LLM is not a typing aid, it's a coding aid).

And coding faster CAN help us understand the problem faster. Coding faster means iterating, refactoring, trying different designs - and seeing what does and doesn't work, faster.

• godelski 9 minutes ago

  > Why not? Why can't faster typing help us understand the problem faster?
Can it help? Of course! But I think the question is too vague here and you're (presumably) unintentionally creating a false dichotomy. I'll clarify with the next responses

  > Why can't we figure out the right thing faster by building the wrong thing faster?
The main problem is that solution spaces are very large. There are two general ways to narrow the solution space: directly and indirectly. Directly by things like thinking hard, digging down, and "zooming in". Indirectly by things such as figuring out what not to do (ruling things out).

You can build a lot of wrong things that don't also help you narrow that solution space. The most effective way to "build the wrong thing" in an informative way is to first think hard and understand your solution space. You want to build the right wrong thing. The thing that helps you rule out lots of stuff. But if you are doing it randomly then you aren't doing this effectively and probably wasting a lot of time. You probably are already doing this but not thinking too much about this explicitly, but if you think explicitly you'll improve on this.

  > Presumably we were gonna build the wrong thing either way
You always build the "wrong" thing. But it is about how wrong you are. Despite being about physics, I think Asimov's Relativity of Wrong[0] (short essay) is pretty relevant here and says everything I want to say but better. It is worth the read and I come back to it frequently.

  > I often build something to figure out what I want
Yes! But this is not quite the same thing. I do this too! I never know the full details of the thing I want before I start building. I'm not sure that's even possible tbh. You're always going to discover more things as you get into the details and nuance. But that doesn't mean foresight is useless either.

  Analogy
  -------
Let's say I'm somewhere in the middle of America and I want to get to NYC. Analogous to your framing, you're saying "why can't moving faster help us get there faster?" Obviously it can! BUT speed is meaningless without direction. You don't want speed, you want velocity. If you start driving as fast as possible in a random direction, you're as likely to head in a direction that increases your distance as one that decreases it. And you are very unlikely to head in a good direction. Driving fast in the wrong direction does significantly more harm than driving slowly in the wrong direction.

So what's the optimal strategy? Find a general direction (e.g. use the sun or stars/moon) to figure out where "east"(ish) is, start driving relatively slowly, refine your direction as you get more information about the landscape, speed up as you gain more information. If you can't find a general direction you should slowly meander, carefully taking in how the landscape/environment is changing. If it is very unchanging, then yeah, speedup, but only until you find a region that becomes more informative, then repeat the process.

If we already had perfect information about how to get to NYC then just drive as fast as fucking possible. But if we don't have that information we need a completely different strategy. Thus, t̶y̶p̶i̶n̶g̶ driving speed isn't the bottleneck.

Speed doesn't matter, velocity does

[0] https://hermiene.net/essays-trans/relativity_of_wrong.html

• doix an hour ago

Pretty much. The article assumes people didn't build the wrong thing before AI. Except that happened all the time; it just happened slower, it took longer to realize it was the wrong thing, and then building the right thing took longer.

It's funny, because you could actually take that story and use it to market AI.

> I once watched a team spend six weeks building a feature based on a Slack message from a sales rep who paraphrased what a prospect maybe said on a call. Six weeks.

Except now with AI it takes one engineer 6 hours, people realize it's the wrong thing and move on. If anything, I would say it helps prove the point that typing faster _does_ help.

• Terr_ an hour ago

Sometimes being involved in the construction process allows you to discover all the (many, overlapping) ways it's the "wrong thing" sooner.

In the long term, some of the most expensive wrong-things are the ones where the prototype gets a "looks good to me" from users, and it turns out what they were asking for was not what they needed or what could work, for reasons that aren't visually apparent.

In other words, it's important to have many people look at it from many perspectives, and optimizing for the end-user/tester perspective at the expense of the inner-working/developer perspective might backfire. Especially when the first group knows something is wrong, but the second group doesn't have a clue why it's happening or how to fix it. Worse still if every day feels like learning a new external codebase (re-)written by (LLM) strangers.

• furyofantares an hour ago

The post also smells heavily LLM-processed. I feel like I've been had by someone pumping out low effort blog posts.

• ErroneousBosh an hour ago

> Why not? Why can't faster typing help us understand the problem faster?

Why do you need to type at all to understand the problem?

I write my best code when I'm driving my car. When I stop and park up, it's just a case of typing it all in at my leisure.

• nicbou 30 minutes ago

I'm a solo dev. In fact I'm hardly a dev; it's just a helpful skill. Code writing speed IS a problem, because it takes valuable time away from other tasks. A bit like doing the dishes.

I just set up Claude Code tonight. I still read and understand every line, but I don't need to Google things, move things around and write tests myself. I state my low-level intent and it does the grunt work.

I'm not going to 10x my productivity, but it'll free up some time. It's just a labour-saving technology, not a panacea. Just like a dishwasher.

• darrelld 10 minutes ago

Exactly!

I've been working on a side project that I started in 2020. If I wanted to implement a new feature it was:

- Wait for regular work hours to wrap up around 5 or 6 PM

- Get dinner and rest / relax until around 8 or 9 PM

- Open up the editor, think about the problem, Google things, read Stack Overflow which gets it 95% of the way there, Google more, dig deeper into the docs, finally find what I needed

- Write a function, make some progress, run into another roadblock, repeat the previous point

- Look up and it's now 1 AM. I should write tests for this, but I'll add that to the backlog

- Backlog only ever grows

Now with AI I describe what I want, it does the grunt work likely cleaner than I ever could, adds tests and warns me about potential edge cases.

I don't know about 10x, but I'm releasing new features that my client cares about faster and I have more time to myself.

All of the negativity around AI writing code sounds like people who would say "You can't trust the compiler, you need to write the machine code yourself"

Will AI fuck up? Yes. But I'm the human in the chair guiding it, and with every iteration I'm learning how better to prompt it to keep it from those fuck-ups.

• apsurd 18 minutes ago

The intent of the title is to name your main problem - the problem separating you from $PROFIT$:

    1. Idea
    2. ???
    3. Profit
Coding effectively is definitely one problem. And you're right that AI helps with that problem. But for startups, side-hustles, VC-pitches and the inner-workings of companies (HN crowd) coding was never the problem.

edit to add: So for people working on professional software teams, the discussion is how a hyper-increase in raw code production affects everything down stream. There are many moving parts to building stuff and selling it to people. So there's not a 1:1 line to more code = better outcome from the system level view. It's not clear, at least.

• riskable 18 minutes ago

Great big difference though: A dishwasher is a water-saving and energy-saving technology.

Not saying LLMs are all bad, just that comparing them to dishwashers is probably not the best idea.

• johnsmith1840 8 minutes ago

How much energy does a human + work environment cost vs an LLM call?

Human driving into work? Heating/cooling?

Wonder why big AI hasn't sold it as an environment-SAVING technology.

• hollerith 7 minutes ago

Eventually after AI tech gets mature enough, we will be able to save EVEN MORE energy by getting rid of all the people.

• rustystump 27 minutes ago

This I think is pretty spot on. I still have to review the code, ideally line by line. It is like templates, generators, etc.: they help and do make things faster, but 10x isn't gonna happen unless requirements gathering also 10xes, which so far AI has had no impact on.

• mooreds 22 minutes ago

This is the way.

• malfist 19 minutes ago

Hacker News is not Reddit, please remember that threads are supposed to get more interesting the deeper they nest.

• podgorniy an hour ago

Yeah, we again have a solution (LLMs) in search of problems.

The proper approach to speeding things up would be to ask "What are the limiting factors that stop us from doing X, Y, Z?"

--

This situation of management expecting things to become fast because of AI is "vibe management". Why think, why understand, why talk to your people, if you've seen an excited presentation of the magic tool and the only thing you need to do is adopt it?

• raw_anon_1111 an hour ago

This is categorically not true. For almost all of my 30 years it’s been

1. Talk to the business, solve XY problems, deal with organizational complexity, learn the business and their needs.

2. Design the architecture not just “the code”, the code has to run on something.

3. Get the design approved and agree on the holy trinity - time/cost/scope

4. Do the implementation

5. Test it for the known requirements

6. Get stakeholder approval or probably go back to #4

7. Move it into production

8. Maintenance.

Out of all those, #4 is what I always considered the necessary grunt work, and for the most part, even before AI, it has been being commoditized for over a decade, especially in enterprise development. Even in BigTech and adjacent, "codez real gud" will keep you a mid-level developer if you can't handle the other steps and lead larger/more impactful/more ambiguous projects.

As far as #5 goes, much of that can and should be done with automated tests that can be written by AI and should be reviewed. Of course you need humans for UI and UX testing.

The LLMs can do a lot of the grunt work now.

• zelphirkalt 15 minutes ago

I see step 4 as interwoven with the other steps. The implementation ideally takes the domain into consideration, and while implementing, potential points of flexibility are revealed, to be taken advantage of "without programming ourselves into a corner". Implementation is of course also related to maintenance, which already needs to be taken into account: how easy is it to adapt the system we are building later?

This all happens while we are at the implementation stage and impacts all other aspects of the whole thing. It is grunt work, but we need elite grunts, who see more than just the minimal requirements.

• raw_anon_1111 2 minutes ago

I’ve been going on about this in another thread in a separate post. That’s where modularity comes in, from code I write myself to teams (or multiple teams back in the day). I always enforce microservices. Not always separately deployable microservices; they could be separate packages with namespace/public/private accessibility.

Even if you have not-so-good developers, they can ramp up quickly on one specific isolated service and you can contain the blast radius.

This isn’t a new idea. This was the entire “API mandate” that Bezos had in 2002. s3 alone is made up of 200+ micro services

• petcat an hour ago

As human developers, I think we're struggling with "letting go" of the code. The code we write (or agents write) is really just an intermediate representation (IR) of the solution.

For instance, GCC will inline functions, unroll loops, and myriad other optimizations that we don't care about. But when we review the ASM that GCC generates (we don't) we are not concerned with the "spaghetti" and the "high coupling" and "low cohesion". We care that it works, and is correct for what it is supposed to do. And that it is a faithful representation of the solution that we are trying to achieve.

Source code in a higher-level language is not really different anymore. Agents write the code, maybe we guide them on patterns and correct them when they are obviously wrong, but the code is merely the work-item artifact that comes out of extensive specification, discussion, proposal review, and more review of the reviews.

A well-guided, iterative process and problem/solution description should be able to generate an equivalent implementation whether a human is writing the code or an agent.

• pbasista 32 minutes ago

> review the ASM that GCC generates (we don't)

Of course we do not. Because there is no need. The process of compiling higher order language to assembly is deterministic and well-tested. There is no need to continue reviewing something that always yields the same result.

> We care that it works, and is correct for what it is supposed to do.

Exactly. Which is something we do not have with an output of an LLM. Because it can misunderstand or hallucinate.

Therefore, we always have to review it.

That is the difference between the output of compilers and the output of LLMs.

• petcat 20 minutes ago

> The process of compiling higher order language to assembly is deterministic and well-tested.

Here are the reported miscompilation bugs in GCC so far in 2026. The ones labeled "wrong-code".

https://gcc.gnu.org/bugzilla/buglist.cgi?chfield=%5BBug%20cr...

I count 121 of them.

I've posted this 3 times now. Code-generation by compilers written by experts is not deterministic in the way that you think it is.

• qalmakka 22 minutes ago

This. The comparison between compilers and LLMs is so utterly incorrect, and yet I've heard it multiple times already in the span of a few weeks. The people suggesting this are probably unaware that Turing complete languages follow mathematical properties, not just vibes. You can trust the output of your compiler because it was thoroughly tested to ensure it acts as a Turing machine that converts one Turing complete language (C, C++, whatever) into another Turing complete language (ASM), and there's a theorem that guarantees such a conversion is always possible. LLMs are probabilistic machines, and it's grossly inappropriate to put them in the same category as compilers - it would be like saying that car tires and pizzas are similar because they're both round and have edges.

• krackers an hour ago

>Source code in a higher-level language is not really different anymore

Source code is a formal language, in a way that natural language isn't.

• jrop an hour ago

Right? That's the only reason that "coding with LLMs" works at all (believe me, I am wowed by LLMs and, at the same time, carry a healthy level of skepticism about their abilities). You can prompt all you want, let an agent spin in a Ralph loop, or whatever, but at the end of the day, what you're checking into Git is not the prompts but the formalized, codified artifact that is the byproduct of all of that process.

• yason 30 minutes ago

Somewhat ironically, perhaps a formal, deterministic programming language (in its mathematical-kind of abstract beauty) is the outlier in the whole soup. The customers don't know what they need, we don't know what we ought to build, and whatever we build nobody knows how much of it is the right thing and what it actually does. If the only thing that causes people to sigh is the requirement to type all that into a deterministic language maybe at some point we can just replace that with a fuzzy, vague humanly description. If that somehow produces enough value to justify the process we still won't know what we need and what we're actually building but at least we can just be honestly vague about it all the way through.

• inamberclad an hour ago

When you get to the really tightly controlled industries, your "formal" language becomes carefully structured English.

• petcat an hour ago

Legalese exists precisely because it is an attempt to remove doubt when it comes to matters of law.

Maybe a dialect of legalese will emerge for software engineering?

• ruszki an hour ago

Legalese is nowhere near precise, and we have a whole very expensive system because it’s not precise.

• petcat an hour ago

It is an attempt to be precise, and to remove doubt. But you're right that doubt still creeps in.

• batshit_beaver an hour ago

Legalese already exists in software engineering. Several dialects of it, in fact. We call them programming languages.

• eecc an hour ago

This is the answer

• felipellrocha an hour ago

If you truly believe that, why don’t you just transform code directly to assembly? Skip the middleman, and get a ton of performance!

• bdcravens an hour ago

I assume you're being cynical, but there's a lot of truth in what you say: LLMs allow me to build software to fit my architecture and business needs, even if it's not a language I'm familiar with.

• operatingthetan an hour ago

I know you're being cheeky but we are definitely heading in that direction. We will see frameworks exclusively designed for LLM use get popular.

• nemo44x an hour ago

I think that’s possible too but the trouble is training them. LLMs are built on decades of human input. A new framework, programming language, database, etc doesn’t have that.

We are in the low hanging fruit phase right now.

• charcircuit an hour ago

Assembly eats up context like crazy. I usually only have my LLM use assembly for debugging / performance / reversing work.

• n4r9 an hour ago

Can agents write good assembly code?

• svachalek an hour ago

With the complexity of modern pipelines, there are very few humans that can beat a good optimizing compiler. Considering that with an LLM you're also bloating limited context with unsemantic instructions I can't see how this is anything but an exercise in failure.

• mirsadm 4 minutes ago

I don't know if I agree with that. It's a struggle to get a modern compiler to vectorize a basic loop.

• ICantFeelMyToes an hour ago

you know if I could I would (Android dev)

• zelphirkalt 20 minutes ago

I see it differently: The code is our medium of communicating a solution.

> "Programs must be written for people to read, and only incidentally for machines to execute." -- Hal Abelson

Without this, we quickly drift into treating computers and computer programs as even more magic than we already do. When "agents" are mistaken about something and put their "misunderstanding" into code that is subsequently incorrect, we need to be able to go and look at it in detail, not just bring sacrifices to the machine god.

• munchbunny 29 minutes ago

In my experience it doesn't really work that way. It's somewhat akin to a house that's undergone multiple remodels. You eventually run out of the house's structural capacity for more remodeling and you have to start gutting the interior, reframing, etc. to reset the clock.

At least today the coding agents will cheat, choose the wrong pattern, brute force a solution where an abstraction or extra system was needed, etc. A few PR's won't make this a problem, but after not very long at all in a repo that a dev team is constantly contributing to (via their herds of agents) it can get pretty gnarly, and suddenly it looks like the agents are struggling with tech debt.

Maybe one day we can stop writing programming languages. It's a thought-provoking idea, but in practice I don't think we're there yet.

• Rapzid 34 minutes ago

The semantics described in the high-level language are absolutely maintained deterministically.

With agentic coding, the semantics are not deterministically maintained. They are expanded, compressed, changed, and even just lost; non-deterministically.

• leptons 5 minutes ago

>For instance, GCC will inline functions, unroll loops, and myriad other optimizations that we don't care about. But when we review the ASM that GCC generates (we don't) we are not concerned with the "spaghetti" and the "high coupling" and "low cohesion". We care that it works, and is correct for what it is supposed to do. And that it is a faithful representation of the solution that we are trying to achieve.

The more complex the code becomes, iteration after iteration, the more code the AI keeps adding to fix simple problems, way more than is reasonably necessary in many cases. The amount of entropy you end up with using AI is astonishing, and it can generate a lot of it quickly.

The AI is going to do whatever it needs to do to get the prompt to stop prompting. That's really its only motivation in its non-deterministic "life".

A compiler is going to translate the input code to a typically deterministic output, and that's all it really does. It is a lot more predictable than AI ever will be. I just need it to translate my explicit instructions into a deterministic output, and it satisfies that goal rather well.

I'll trust the compiler over the AI every single day.

• yummypaint an hour ago

Just because an LLM can turn high level instructions into low level instructions does not make it a compiler

• exceptione an hour ago

None of the comparisons make any sense. In short, these concepts are essential to understand:

- determinism vs non-determinism

- conceptual integrity vs "it works somewhat, don't touch it"

• petcat an hour ago

> determinism vs non-determinism

Here are the reported miscompilation bugs in GCC so far in 2026. The ones labeled "wrong-code".

https://gcc.gnu.org/bugzilla/buglist.cgi?chfield=%5BBug%20cr...

I count 121 of them. It appears that code-generation is not as deterministic as you seem to think it is.

• tcmart14 an hour ago

Deterministic doesn't mean correct. Compilers can have bugs. What deterministic means is: given the same input, you get the same output every time. So long as the compiler generates the same wrong thing for the same code every time, it's still deterministic.

• yCombLinks 39 minutes ago

99.9% vs about 20%. Pretty weak argument.

• tcmart14 an hour ago

I really hate the trying to make LLM coding sound like it's just moving up the stack and is no different from a compiler. A compiler is deterministic and has a set of rules that can be understood. I can look at the output and see patterns and know exactly what the compiler is doing and why it does and where it does it. And it will be deterministic in doing it.

• petcat an hour ago

> compiler is deterministic and has a set of rules that can be understood.

Here are the reported miscompilation bugs in GCC so far in 2026. The ones labeled "wrong-code".

https://gcc.gnu.org/bugzilla/buglist.cgi?chfield=%5BBug%20cr...

I count 121 of them. It appears that code-generation is not as deterministic as you seem to think it is.

• tcmart14 an hour ago

I commented elsewhere, but that doesn't mean it's not deterministic. Deterministic means that given the same input, it gives the same output. Compilers can still have bugs and generate the wrong code. But so long as they generate the same wrong output for the same input, they are still deterministic.

• petcat 37 minutes ago

Compilers can generate wrong output in many different ways. And they're all analogous to the same ways that a sophisticated LLM can generate wrong outputs.

The compiler relies on:

* Careful use of the env vars and CLI options

* The host system, or the target system (for cross-compiling)

* The source code itself

How is this really different from careful prompt engineering, and an extensive proposal/review/refine process?

They are both narrowing the scopes and establishing the guardrails for what the solution and final artifact will be.

> proposal/review/refine process

This is essentially what a sophisticated compiler, or query optimizer (Postgres) does anyway. We're just doing it manually via prompts.

• tcmart14 19 minutes ago

And none of that makes it non-deterministic. Compilers still satisfy the property that, given the exact same environment and input, you get the same output. It doesn't matter how many inputs there are. So long as f(3, 2) always gives 5, it's deterministic. It doesn't matter what f(x, y) does, so long as it always gives the same output per input. LLM generation does not do this: given f(3, 2), sometimes it says 5, sometimes 6, sometimes 1001, sometimes 2.

And we are talking about compilers, not query optimizers, so I don't really care what those do.
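The distinction the thread keeps circling can be shown in a few lines. This is only a toy sketch: `compiled_add` and `llm_add` are made-up stand-ins for a compiler and a sampling-based model, not real tools.

```python
import random

def compiled_add(x, y):
    # Deterministic: same input, same output, every time. Even if the
    # implementation had a bug, that bug would reproduce identically.
    return x + y

def llm_add(x, y, rng):
    # Non-deterministic stand-in: usually right, sometimes off by one,
    # and which answer you get depends on the sampling state.
    return x + y if rng.random() < 0.8 else x + y + rng.choice([-1, 1])

# The "compiler" gives 5 for (3, 2) no matter how many times you ask.
assert all(compiled_add(3, 2) == 5 for _ in range(1000))

# The "LLM" gives more than one distinct answer for the same input,
# depending on the seed.
outputs = {llm_add(3, 2, random.Random(i)) for i in range(1000)}
assert len(outputs) > 1
```

That is the whole argument in miniature: a buggy deterministic function is still deterministic, while a sampler is not, even when it is right 80% of the time.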

• Kilenaitor 33 minutes ago

Having bugs is not the same as being non-deterministic.

I get the point that the compiler is not some pure, perfect transformation of the high-level code to the low level code, but it is a deterministic one, no?

• slopinthebag 24 minutes ago

It is deterministic, unless GCC is now including a random statistical process inside its compiler to generate machine code. You've copied this same comment repeatedly, it doesn't become more correct the more you spam it.

• tovej an hour ago

Is this a copypasted response? I've seen the exact same bs in other AI threads on this site.

• bcassedy 16 minutes ago

This user has posted the parent post nearly verbatim twice. And the exact same responses about determinism several times.

• everdrive an hour ago

Companies genuinely don't want good code. Individual teams just get measured by how many things they push around. An employee warning that something might not work very well is going to get reprimanded as "down in the weeds" or "too detail oriented," etc. I didn't understand this for a while, but internal actors inside of companies really just want to claim success.

• mooreds an hour ago

> Companies genuinely don't want good code.

I might be more charitable. I'd say something like "Companies genuinely want good code but weigh the benefits of good code (future flexibility, lower maintenance costs) against the costs (delayed deployment, fewer features)."

Each company gets to make the tradeoffs it feels are appropriate. It's on technical people to explain the why and the risks, just like lawyers do for their area of expertise.

• dannersy 22 minutes ago

They don't care about good code, but they do pay people a lot of money to care about good code. If the people they hired didn't care, our software quality would be worse than it is. And since people are caring less in the face of AI, it is getting worse.

• wei03288 8 minutes ago

Completely agree with the premise. The bottleneck in most teams I've worked with isn't typing speed or even coding speed - it's the feedback loop between 'I think this is right' and 'this is actually right in production.'

The biggest time sink is usually debugging integration issues that only surface after you've connected three services together. Writing the code took 2 hours, figuring out why it doesn't work as expected takes 2 days.

I've found the most impactful investment is in local dev environments that mirror production as closely as possible. Docker Compose with realistic seed data catches more bugs than any amount of unit testing.

• mikkupikku 25 minutes ago

My problem when writing code is mainly executive dysfunction; I constantly succumb to the temptation to take the easy way and do it properly later, and later never comes. For some reason, using a coding agent seems to alleviate this. Things get done the way I think they should be done, not just in a way that's "good enough for now."

• ianberdin 24 minutes ago

I don’t agree. I built a Replit clone alone in months. They have hundreds of millions in funding…

Btw: https://playcode.io

• bluegatty 26 minutes ago

It's unfair to characterize AI as 'code writing / completion' - it's at minimum a quarter layer of abstraction above that, and even just at that, it's useful.

So 'writing helper' + 'research helper' + 'task helper' alone is amazing, and we are definitely beyond that.

Even side features like 'do this experiment' where you can burn a ton of tokens to figure things out ... so valuable.

These are cars in the age of horses, it's just a matter of properly characterizing the cars.

• m463 31 minutes ago

> The Goal ... it's also the most useful business book you'll ever read that's technically fiction

factorio ... it's also the most useful engineering homework that's technically a game

• k1rd 21 minutes ago

> That's the part most people get. Here's the part they don't, and it's the part that should scare you:

> When you optimise a step that is not the bottleneck, you don't get a faster system. You get a more broken one.

If you have ever played Factorio, this is pretty clear.

• po1nt 19 minutes ago

While reading articles like this, I feel like we're just in the "denial" stage. We're just trying to look for negatives instead of embracing that this is a definite paradigm shift in our craft.

• larsnystrom 43 minutes ago

I can really relate to this. At the same time, I’m not convinced cycle time always trumps throughput. Context switching is bad, and one solution to it is time boxing, which basically means there will be some wait time until the next box of time where the work is picked up. Doing time boxing properly lowers context switching and increases throughput, but it also increases latency (cycle time). It’s a trade-off. Of course, maybe time boxing isn’t the best solution to the problem of context switching; maybe it’s possible to figure out a way to have the cookie and eat it too. And maybe different circumstances require a different balance between latency and throughput.

• metalrain 31 minutes ago

I think it's more of an abstraction problem.

You could write more code, but you could also abstract the code more, if you know what/how/why.

The same idea extends to business: you can perform more services, or you can try to provide more value with the same amount of work.

• milesward 37 minutes ago

Correct, but I'd frame it to confused leaders a bit differently. Because we made this part easier, we've increased how critical, how limiting, other steps/functions are. Data's more valuable now. QA is more valuable now. More teams need to shift more resources, faster.

• 725686 44 minutes ago

The word "typing" is wrong.

It is not about the speed of typing code.

It's about the speed of creating code: the boilerplate code, the code patterns, the framework-version-specific code, etc.

• slibhb 23 minutes ago

The idea that LLMs don't significantly increase productivity has become ridiculous. You have to start questioning the psychology that's leading people to write stuff like this.

• sorokod 35 minutes ago

Amdahl's law applies regardless of whether you believe in it or not.

• gammalost an hour ago

Seems easy to address with a simple rule. Push one PR; review one PR

• hathawsh an hour ago

Also add a PR reviewer bot. Give it authority to reject the PR, but no authority to merge it. Let the AIs fight until the implementation AI and the reviewer AI come to an agreement. Also limit the number of rounds they're permitted to engage in, to avoid wasting resources. I haven't done this myself, but my naive brain thinks it's probably a good idea.
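A rough sketch of what I mean, with `generate` and `review` as hypothetical stand-ins for real agent calls (this is not any actual tool's API):

```python
def adversarial_loop(task, generate, review, max_rounds=5):
    """Let an implementation agent and a reviewer agent iterate,
    capped at max_rounds so they can't burn tokens forever."""
    feedback = None
    for round_num in range(max_rounds):
        patch = generate(task, feedback)   # implementation agent proposes a patch
        verdict, feedback = review(patch)  # reviewer may reject, but never merges
        if verdict == "approve":
            return patch, round_num + 1    # a human still does the actual merge
    return None, max_rounds                # no agreement: escalate to a human

# Toy stand-ins: this "reviewer" rejects until the patch mentions tests.
def fake_generate(task, feedback):
    return task + " + tests" if feedback else task

def fake_review(patch):
    return ("approve", None) if "tests" in patch else ("reject", "add tests")

patch, rounds = adversarial_loop("fix null check", fake_generate, fake_review)
# patch == "fix null check + tests", settled in 2 rounds
```

The key design choice is the asymmetry: the reviewer has veto power but no merge power, and the round cap guarantees the loop terminates with either an approved patch or a human escalation.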

• dmitrygr an hour ago

> I haven't done this myself, but my naive brain thinks it's probably a good idea.

Many a disaster started this way

• hathawsh an hour ago

Yep, we're on the same wavelength.

• zer00eyz an hour ago

The problem is most of the people we have spent the last 20 years hiring are bad at code review.

Do you think the leetcode, brain-teaser, show-me-how-smart-you-are-and-how-much-you-can-memorize interview is optimized to hire the people who can read code at speed and hold architecture (not code, but systems) in their head? How many of your co-workers are set up with and use a debugger to step through a change when looking at it?

Most code review was bikeshedding before we upped the volume. And from what I have seen, it hasn't gotten better.

• avereveard 35 minutes ago

Eh, code doesn't have a lot of value. Especially filling in methods between signatures and figuring out the exact incantation for dependencies is mechanistic, and that time is definitely better spent doing other things.

A lot of these blog posts start from a false premise or a lack of imagination.

In this case, the premise that coding isn't a bulk time waste (and yes, LLMs can do debugging, so the other common remark still doesn't apply) is faulty and unsubstantiated (just measure the ratio of architects to developers). But the claim that time saved on secondary activities doesn't translate into productivity is also false, or at least reductive, because you gain more time to spend on the bottlenecked activity.

• danilocesar an hour ago

I'm here just for the comments...

• stronglikedan 35 minutes ago

aren't we all...

• tmshapland 18 minutes ago

Thank you. 100%.

• cess11 32 minutes ago

One of the main reasons I like vim is that it enables me to navigate code very fast, that the edits are also quick when I've decided on them is a nice convenience but not particularly important.

Same goes for the terminal, I like that it allows me to use a large directory tree with many assorted file types as if it was a database. I.e. ad hoc, immediate access to search, filter, bulk edits and so on. This is why one of the first things I try to learn in a new language is how to shell out, so I can program against the OS environment through terminal tooling.

Deciding what and how to edit is typically an important bottleneck, as are the feedback loops. It doesn't matter that I can generate a million lines of code, unless I can also with confidence say that they are good ones, i.e. they will make or save money if it is in a commercial organisation. Then the organisation also needs to be informed of what I do, it needs to give me feedback and have a sound basis to make decisions.

Decision making is hard. This is why many bosses suck. They're bad at identifying what they need to make a good decision, and just can't help their underlings figure out how to supply it. I think most developers who have spent time in "BI" would recognise this, and a lot of the rest of us have been in worthless estimation meetings, retrospectives and whatnot where we ruminate a lot of useless information and watch other people do guesswork.

A neat visualisation of what a system actually contains and how it works is likely of much bigger business value than code generated fast. It's not like big SaaS ERP consultancy shops have historically worried much about how quickly the application code is generated, they worry about the interfaces and correctness so that customers or their consultants can make adequate unambiguous decisions with as little friction as possible.

• lukaslalinsky an hour ago

If I can offload the typing and building, I can spend more energy understanding the bigger picture

• wolttam an hour ago

"Our newest model reduces your Mean Time To 'Oh, Fuck!' (MTTF) by 70%!"

• renewiltord an hour ago

Understanding the problem becomes easier for me through engaging with solutions to it and seeing in what form they fail. LLMs allow me to concretize solutions, so that pre-work simply becomes work. This lets me search through the space of solutions more effectively.

• gyanchawdhary 40 minutes ago

He’s treating “systems thinking” and architecting software like it’s some sacred, hard-to-automate layer that AI apparently sucks at.

• andrewstuart 32 minutes ago

These “LLM programming ain’t nothing special” posts are becoming embarrassing for the authors who - due to their anti AI dogmatism - have no idea how truly incredibly fast and powerful it’s become.

Please stop making fools of yourselves and go use Claude for a month before writing that “AI coding ain’t nothing special” post.

Ignorance of what Claude can actually do means your arguments have no standing at all.

“I hate it so much I’ll never use it, but I sure am expert enough on it to tell you what it can’t do, and that humans are faster and better.”

• slopinthebag 21 minutes ago

What makes you think they haven't? I agree with them and I've been heavily using Claude / Codex for a while now. And I'm slowly trying to use AI more selectively because of these concerns.

• andrewstuart 12 minutes ago

OK, tell me tangibly what you are programming and what you ask it to do, and I’ll do similar and we can compare outcomes.

• luxuryballs an hour ago

It’s definitely going to create a lot of problems in orgs that already have an incomplete or understaffed dev pipeline, which happen to often be the ones where executive leadership is already disconnected and not aware of what the true bottlenecks are, which also happen to often be the ones that get hooked by vendor slide decks…

• nathias an hour ago

people can have more than one problem

• 6stringmerc an hour ago

Because the way the world is currently and this is trending, let me jump in and save you a lot of time:

Expedience is the enemy of quality.

Want proof? Everything built as a result of “move fast and break things” from 5-10 years ago is a pile of malfunctioning trash. This is not up for debate.

This is simply an observation. I do not make the rules. See my last submission for some CONSTRUCTIVE reading.

Bye for now.

• teaearlgraycold an hour ago

> I once watched a team spend six weeks building a feature based on a Slack message from a sales rep who paraphrased what a prospect maybe said on a call. Six weeks. The prospect didn't even end up buying. The feature got used by eleven people, and nine of them were internal QA. That's not a delivery problem. That's an "oh fuck, what are we even doing" problem.

I have very much upset a CEO before by bursting his bubble with the fact that how fast you work is so much less important than what you are working on.

Doing the wrong thing quickly has no value. Doing the right thing slowly makes you a 99th percentile contributor.

• myst 19 minutes ago

No one there is solving a problem. The AI bros are hooking a new generation (NG) on _their_ set of crutches, without which NG "is not coding (living) up to their true potential". Nothing personal, just business.

PS. The tech bros tried to do exactly that to millennials, but accidentally shot boomers instead.

• gedy an hour ago

I'm cynical, but kinda surprised that so many mgmt types are rah-rah about AI, since "we're waiting for engineering... sigh" has been a very convenient excuse for many projects and companies I've seen over the past 25 years.

• shermantanktop an hour ago

Absolutely. Everyone loves a roadblock that someone else needs to clear, giving back some time to breathe and think about the problem a bit.

This only works in large companies. In startups this is how you run out of money.

• dannersy 19 minutes ago

The blog isn't even necessarily anti-AI yet the majority of responses here are defending it like the author kicked their dog.

The sentiment that developers shouldn't be writing code anymore means I cannot take you seriously. I see these tools fail on a daily basis and it is sad that everyone is willing to concede their agency.