• jdw64 5 hours ago

The real issue, in my view, is not AI itself.

The problem is a management pattern: removing people and organizational slack because they don’t generate immediate profit, and then expecting the knowledge to still be there when it’s needed.

Short-term cost cutting leads to less junior hiring, and removes the slack that experienced engineers need in order to teach. As a result, tacit knowledge stops being transferred.

What remains is documentation and automation.

But documentation is not the same as field experience. Automation is not the same as judgment. Without people who have actually worked with the system, you end up with a loss of tacit knowledge—and eventually, declining productivity.

AI is following the same pattern.

What AI is being sold as right now is not really productivity. In many domains, productivity is already sufficient. What’s being sold is workforce reduction.

The West has seen this before, especially in the case of General Electric.

GE pursued aggressive short-term financial optimization, cutting costs, focusing on quarterly results, and maximizing shareholder returns. In the process, it hollowed out its own long-term capabilities. It effectively traded its future for short-term gains.

The same mindset is visible today.

The core problem is that decision-makers—often far removed from actual engineering work— believe that tacit knowledge can be replaced with documentation, tools, and processes.ti cannot.

Tacit knowledge comes from direct experience with real systems over time. If you remove the people and the learning pipeline, that knowledge does not stay in the organization. It disappears.

• aleph_minus_one 43 minutes ago

> The core problem is that decision-makers—often far removed from actual engineering work — believe that tacit knowledge can be replaced with documentation, tools, and processes. [It] cannot.

I am not so certain:

For example, I think that a lot of my knowledge about the system that I work on could be documented, and based on this documentation someone new could take over the system.

The problem rather is: the volume of documentation that I would have to write would be insane; I'd consider ten thousands of dense DIN A4 pages to be realistic - and this is a rather small system.

So, a new person who could take over this system would have to cram and understand basically all the details of this documentation insanely well.

This insane effort (write the documentation; new workers on the project then have to cram and understand every detail of this incredibly bulky documentation) is something that no employer wants to spend money on: this is in my experience the real reason why it isn't done.

• paganel 30 minutes ago

It’s way easier (for this type of scenarios) and far more effective to learn by doing than to learn by reading (even tens of thousands of pages of) documentation, that is the crust of it.

• aleph_minus_one 10 minutes ago

> It’s way easier (for this type of scenarios) and far more effective to learn by doing than to learn by reading

I don't think so: the problem is that there exist lots of parts in the system that are quite complicated but which one very rarely has to touch - except in the rare (but happening) case that something deep in such a part goes wrong a for requirement for this part pops up.

If you "learned by doing" instead of reading, you are suddenly confronted with a very subtle and complicated subsystem.

In other words: there mostly exist two kinds of tasks:

- easy, regular adjustments

- deep changes that require a really good understanding of the system

• chrisweekly 22 minutes ago

crust (edge/border) -> crux (heart/essence)

• fsloth a few seconds ago

[delayed]

• vishnugupta 4 hours ago

> removing people and organizational slack

You are spot on w.r.t every assertion you've made. When bean-counters took over the ecosystem they optimised immediate profitability over everything else. Which in turn means, in their mind, every part of the system needs to be firing at 100% all the time. There's no room for experimentation, repair, or anything else.

I've commented about lack of slack on several times here on HN because when I notice a broken system now a days, 90% of it is due to lack of slack in the system to absorb short term shocks.

• chanux 3 hours ago

> When bean-counters took over the ecosystem [...] in their mind, every part of the system needs to be firing at 100% all the time.

This is only fair, because they themselves are firing at 100% all the time IYKWIM ;)

• acomjean 4 hours ago

I’ll note at the end of the last century I worked at IBM research which had a budget of 6 Billion dollars. Management was trying very hard to get better return on that investment. Even today IBM though often ridiculed in the tech space (sometimes they do deserve it) spends a lot on R&D.

• NordStreamYacht 4 hours ago

Lucent at the same time went through the same issue: how to monetise Bell Labs.

Bell Labs greatest work came out when AT&T was a monopoly. Once they were broken up (1984?) they started feeling the pain.

When the Lucent spinoff took place, the new entities had no Monopoly money to fund unconstrained research while management's behaviour never changed.

I don't know how BL fared under Alcatel and now Nokia, but haven't heard of anything interesting for years.

• rvba 4 hours ago

Did anything come out from those billions?

• swiftcoder 3 hours ago

> Did anything come out from those billions?

Per wikipedia:

  IBM employees have garnered six Nobel Prizes, seven Turing Awards,
  20 inductees into the U.S. National Inventors Hall of Fame, 19 National Medals of Technology,
  five National Medals of Science and three Kavli Prizes. As of 2018,
  the company had generated more patents than any other business in each of 25 consecutive years.
• xienze 3 hours ago

> the company had generated more patents than any other business in each of 25 consecutive years.

A couple things about those patents, from a former IBMer who has quite a few in his time there.

First, not all patents are created equal. Most of those IBM patents are software-related, and for pretty trivial stuff.

Second, most of those patents are generated by the rank and file employees, not research scientists. The IBM patent process is a well-oiled machine but they ain't exactly patenting transistor-level breakthroughs thousands of times a year.

• fao_ 2 hours ago

Why do you need to generate transistor-level breakthroughs multiple times a year? Those breakthroughs are hard to generate, but they're important and industry-spanning. The problem is we've mostly stopped generating them.

• xienze 2 hours ago

I wasn't saying anything about that, I was just pointing out that yes, IBM produces a ton of patents, but they're mostly trivial junk that regular employees generate en masse in order to earn accomplishments and make up for the insultingly low bonuses.

• swiftcoder 2 hours ago

> they're mostly trivial junk that regular employees generate en masse in order to earn accomplishments and make up for the insultingly low bonuses

We did that at Meta and Amazon too (for polycarbonate puzzle pieces, with no monetary award at all!). Every now and then something meaningful came out of it

• mschuster91 3 hours ago

The thing is, Nobel Prizes and other awards don't pay the bills.

Patents do, but in most cases it's trivial patents or patents for a "mutually assured destruction" portfolio (aka, you keep them in hand should someone ever decide to sue you).

That's a fundamental problem with how the Western sphere prioritizes and funds R&D. Either it has direct and massive ROI promises (that's how most pharma R&D works), some sort of government backing (that's how we got mRNA - pharma corps weren't interested, or how we got the Internet, lasers, radar and microwaves) or some uber wealthy billionaire (that's how we got Tesla and SpaceX, although government aids certainly helped).

All while we are cutting back government R&D funding in the pursuit of "austerity", China just floods the system with money. And they are winning the war.

• smallstepforman an hour ago

Every year they grant prizes. If hardly anyone is doing core R&D because of cost cutting, there is a higher chance those doing the smallest amount of R&D get the prizes.

A Nobel in 2026 doesnt carry the same weight as a Nobel in 1955.

• t-3 an hour ago

I think the bean counters get a bad rap for this a bit unfairly. The past century has seen more progress in knowledge and technology than the rest of human history combined. The world and business environment are changing too rapidly to make longtermist thinking practical.

Few care if you have a lifetime warranty and excellent service or replacement parts if the majority will upgrade in a few years! Mature technologies increasingly become cheaply available as services, eg. laundry, food, transportation. That further reduces demand on production, as many can get by with the bare minimum and don't need the highest quality, longest lasting appliances. Software is even more ephemeral and specialized.

Developing education and training pipelines is wasting money if the skills you need are constantly changing! There is plenty of "slack" in the workforce so this works just fine in most cases - somebody will learn what they need to get paid. There are very few fields where qualified worker shortages are a real problem.

R&D can be outsourced or bought and subsidized by the government in universities, so why do everything yourself? Open source software has even further muddied the waters. Applications have only a limited lifetime before being replicated and becoming free products (this has only been intensified by the introduction of AI), so companies develop services instead.

Technology and knowledge deepening and rapidly becoming more specialized makes the monolithic corporation much less practical, so companies also need to specialize in order to effectively compete. Going too far in the name of efficiency can destroy core competencies, but moving away from the old model was necessary and rational.

• aleph_minus_one 39 minutes ago

> R&D can be outsourced or bought and subsidized by the government in universities, so why do everything yourself?

Because some problems that many companies in very specialized industries work on are so special that outside of this industry, nearly all people won't even have heard about them.

Additionally, many problems companies have where research would make sense are not the kind of problems that are a good fit for universities.

• SlinkyOnStairs 2 hours ago

They also took out all the quality, though in pure business terms one can argue that's a kind of "slack" by itself.

The beancounters have cut all the corners on physical products that they could find. Now even design and manufacturing is outsourced to the lowest bidder, a bunch of monkeys paid peanuts to do a job they're woefully unqualified for.

And the end result is just a market for lemons. Nobody trusts products to be good anymore, so they just buy the cheapest garbage.

Which, inevitably, is the stuff sold directly by Chinese manufacturers. And so the beancounters are hoisted by their own petard.

We've seen it happen to small electronics and general goods.

We're seeing it happen right now to cars. Manufacturers clinging on to combustion engines and cutting corners. Why spend twice the money on a western brand when their quality is rapidly declining to meet BYD models half the price.

---

And we're seeing it happen to software. It was already kind of happening before AI; So much of software was enshittifying rapidly. But AI is just taking a sledgehammer to quality. (Setting aside whether this is an AI problem or a "beancounters push everyone into vibecoding" problem)

E.g. Desktop Linux has always been kind of a joke. It hasn't gotten better, the problems are all still there. Windows is just going down in flames. People are jumping ship now.

SaaS is quickly going that way as well. If it's all garbage, why pay for it. Either stop using it or just slop something together yourself.

---

And in the background of this something ominous: Companies can't just pivot back to higher quality after they've destroyed all their inhouse knowledge. So much manufacturing knowledge is just gone, starting a new manufacturing firm in the west is a staffing nightmare. Same story with cars, China has the EV knowledge. And software's going the same way. These beancounters are all chomping at the bit to fire all their devs and replace them with teenagers in the developing world spitting out prompts. They can't move back upmarket after that's done.

Even when the knowledge still lives, when the people with the skills requires have simply moved to other industries and jobs, who's going to come back? Why leave your established job for the former field, when all it takes is the management or executive in charge being replaced by another dipshit beancounter for everyone to be laid off again.

• ekidd 28 minutes ago

> E.g. Desktop Linux has always been kind of a joke. It hasn't gotten better, the problems are all still there.

Desktop Linux has gotten better, though much of the improvement happened decades ago. I believe the first person to prematurely declare "the year of Linux on the desktop" was Dirk Hohndel in 1999: https://www.linux.com/news/23-years-terrible-linux-predictio...

And speaking as someone who was running desktop Linux in 1999, I remember just how bad it was. Xfce, XFree86 config files, and endless messing around with everything. The most impressive Linux video game of 2000 was Tux Racer.

But over the next 10 years, Gnome and KDE matured, X learned how to auto-detect most hardware, and more-and-more installs started working out of the box.

By the mid-2010s, I could go to Dell's Ubuntu Linux page and buy a Linux laptop that Just Worked, and that came with next day on-site support. I went through a couple of those machines, and they were nearly hassle free over their entire operational life. (I think one needed an afternoon of work after an Ubuntu LTS upgrade.)

The big recent improvement has been largely thanks to Valve, and especially the Steam Deck. Valve has been pushing Proton, and they're encouraging Steam Deck support. So the big change in recent years is that more and more new game releases Just Work on Linux.

Is it perfect? No. Desktop Linux is still kind of shit. For examples, Chrome sometimes loses the ability to use hardware acceleration for WebGPU-style features. But I also have a Mac sitting on my desk, and that Mac also has plenty of weird interactions with Chrome, ones where audio or video just stops working. The Mac is slightly less shit, but not magically so.

• esseph an hour ago

> E.g. Desktop Linux has always been kind of a joke

And yet I run it every day, and it's by FAR the most enjoyable platform and tooling to use (for me).

• netcan 4 hours ago

>. In many domains, productivity is already sufficient. What’s being sold is workforce reduction.

This is a blindspot to many. People working on entrepreneurial projects need to build a lot. They start with nothing. They need (for example) features. There's a lot to do.

Most firms are not that. Visa, Salesforce, LinkedIn or whatnot. They have a product. They have features. They have been at it for a while. They also have resources. They are very often in a position of finding nails for a "write more software" hammer.

It's unintuitive because they all have big wishlist and to do lists and and a/b testing system for pouring software into but...

If there were known "make more software, make more money" opportunities available, they would have already done them.

Actual growth and new demand needs to come from arenas outside of this. Eg companies that suck at software(either making or acquiring) might be able to get the job done.

The Problem, bringing this back to the article, is fungibility. A lot of this "human capital" stuff cannot be easily repackaged. It's a "living" thing. Talent and skills pipelines can be cut off, and vanish.

A danger in Ai coding (and other fields) is that it leverages preexisting human capital and doesn't generate any for later.

• Terr_ 4 hours ago

> If there were known "make more software, make more money" opportunities available, they would have already done them.

Sometimes they're available, but not palatable, when the opportunity could threaten their existing investments or patterns. That might mean "self-cannibalism", or changing the ecology so that the main product niche is threatened.

Then those opportunities are ignored, or actively worked-against via lobbying, embrace-extend-extinguish, etc.

• netcan 2 hours ago

Ok... but this just generalizes into the "known things" type.

Whether the reason of strategic (like your example), internal politics, insufficient knowledge.... The point is that there is a local equilibrium, and most mature firms are at this equilibrium.

More resources via Ai, at first order, goes after that diminishing returns part of the curve... which is a cliff especially for highly resourced firms topping the S&P500.

A lot of Ai-optimist:s " mental model" of the economy do not account for this stuff at all.

"Save time/money" outcomes are not similar at all to "make more stuff" outcomes. Firing employees does freeze up labour... but reutilizing this labour is non-trivial... as this article demonstrates quite well.

• bsenftner 37 minutes ago

That 'real issue' is the lack of formal effective communications training across the board in the United States, and probably all of Western Culture.

The Problem is wider than management, it is understanding the extended ramifications of action, understanding the larger systems one is a member and then identifying with them, protecting them, because you and all your peers understand their extended foundational need.

That type of critical analysis and secondary considerations tacit knowledge is developed through effective communications training, which is an entire perspective, a way of seeing the world. This can be gained by reading a wide diversity of literature, of the Nobel Literature quality; the reason being such literature is first person accounts of institutions crushing individuals, and individuals finding the power within themselves to defeat the institutions. That personal transformation is practically a Nobel Trope, but it teaches the reader how to have such insight and perseverance. Read a half dozen or more such novels, and you are materially a different person. A better, deeper considering person with a longer perspective horizon. We need this civilization wide.

• Fr0styMatt88 3 hours ago

I feel like it’s something more fundamental and broad than that. We slowly remove excuses to talk to other people.

The thought crossed my mind the other day — if I’m asking the AI a question, that’s replacing a human interaction I would have had with a coworker.

It’s not just in coding, it’s everything. With ChatGPT always available in your pocket, what social interactions is it replacing?

The thing that gets me is, we are meant to fundamentally be social creatures, yet we have come to streamline away socialisation any chance we get.

I’m guilty of this too — I much prefer Doordash to having to call up the restaurant like in the old days, for example.

• MattJ100 2 hours ago

We see this in our open-source community. We've had a community channel for over two decades, where community members help newcomers and each other solve problems and answer questions.

Increasingly we have people join who tell us they've been struggling with a problem "for days". Per routine, we ask for their configuration, and it turns out they've been asking ChatGPT, Claude or some other LLM for assistance and their configuration is a total mess.

Something about this feels really broken, when a channel full of domain experts are willing to lend a hand (within reason) for free. But instead, people increasingly turn to the machines which are well-known to hallucinate. They just don't think it will hallucinate for them.

In fact I see this pattern a lot. People use LLMs for stuff within their domain of expertise, or just ask them questions about washing cars, and they laugh at how incompetent and illogical they are. Then, hours later, they will happily query ChatGPT for mortgage advice, or whatever. If they don't have the knowledge to verify it themselves then they seem more willing to believe it is accurate, where in fact they should be even more careful.

• 2ndorderthought 2 hours ago

Personally this type of behavior played a large part in why I left 2 oss communities.

A lot of the passerbys nowadays feel like trolls. They come in copy pasting chatgpt responses spamming they need help instead of chit chatting asking questions. We fix their problems, they don't trust us or understand at all. Or worse we tell them their situation is unreasonably bad and they should start over, they scream at us about how some unimaginably bad code passes tests and compiles just fine and how we are dumb.

They tell us we don't need to exist anymore in one way or another. They try to show off terrible code we try to offer real suggestions to improve it, they don't care. Then they leave the community once their vibe/agentic coding leaves that part of their code base. Complete waste of time, they learned nothing, contribute nothing, no fun was had, no ah-hahs, just grimey interactions.

• skydhash an hour ago

I’m subscribed to a couple of mailing list and follow the archive of a few others. I wonder if the friction associated with the medium is why I haven’t seen those shenanigans?

• 2ndorderthought an hour ago

I should look into mailing lists. That would be a great filter for the "I need it now at any cost" interactions. Thank you for the indirect advice.

• gonzalohm an hour ago

I think you are right, but it also makes sense. Human communication is inherently inefficient. Points of view, miscommunication, interpretation... It's the obvious point to automate. Not defending it, just my thoughts

• lxgr 2 hours ago

> if I’m asking the AI a question, that’s replacing a human interaction I would have had with a coworker.

Importantly, you're removing a signal: If I'm not asked things anymore, I don't know which aspects of our domain are causing the most confusion/misunderstandings and would as such benefit most from simplifying the boundaries of.

• 2ndorderthought 2 hours ago

There is a lot of wisdom in this.

At the end of the day chatgpt won't be there to hold our hands in the hospital, have a laugh over failing to pick up a date, get invited to a bbq, groan over the state of the code in utils.c, or recommend us for our next job/promotion. They say software is social for a different reason than most of these examples.

It's good to be efficient, whatever that means, but there are no metrics on the gains that get made by talking to people. In a lot of ways those gains are what life is about.

• avmich an hour ago

> At the end of the day chatgpt won't be there

Are you sure it won't?

• 2ndorderthought 23 minutes ago

Yes. 100%. Chatgpt can't get drunk with you share personal experiences grill food for you or network with humans for you. At some point certain people have to choose to live a life otherwise why have one anyways.

• croisillon an hour ago

i see what you did there :)

• samiv 4 hours ago

Why would anyone have a sight longer than a quarter? I mean how does long term thinking help the execs get their compensation this quarter? Sheesh..worst case scenario is that the work done now will benefit someone else when they've already left.

Also when companies grow big enough "business" becomes the main business of the company. By that I mean everything unrelated to the actual original domain, such as playing in the financial markets, doing stock buybacks, lobbying, cheating etc. When your CEO is an MBA and your real market is Wall Street any actual product RD and support is a real annoying cost that just cuts into the profits and thus into the exec compensation.

• baq 3 hours ago

> Why would anyone have a sight longer than a quarter? I mean how does long term thinking help the execs get their compensation this quarter?

Vesting schedules, conditional grants, contractual equity ownership requirements

• cucumber3732842 3 minutes ago

>Vesting schedules, conditional grants, contractual equity ownership requirements

In those filthy low margin industries that HN loves to regulated across the oceans out of sight out of mind capital investments have service lives measured in decades.

• BoingBoomTschak 2 hours ago

Would be interesting to get a law that says that all positions supposed to take long-term decision should be paid with X% of their salary in (non-redeemable until Y years?) stocks.

• derf_ 3 hours ago

> ...any actual product RD and support is a real annoying cost that just cuts into the profits...

Worse, it might not generate a return. If you have enough profits, you just buy anyone who successfully produced something innovative. Let them take the risks. As Cisco used to say, "Silicon Valley is our R&D lab."

It is a very difficult mindset to argue against.

• throwaw12 an hour ago

This shows Western government system is broken.

In ideal world (where we don't live):

* Corporation - optimizes for mid-to-short term profits (remove slack, run everything thin)

* Government - optimizes for long term profits (introduce regulations to keep the slack time, keep and attract the talent so state gets better)

* Individual - optimizes for their life time (career, family and tries to leverage market conditions to learn skills and get more opportunities from existing pool)

In the west, government is optimizing for "loads and loads of moooney", because of lobby groups and MBAs controlling the corporations which are pushing these ideas through lobbies

• layer8 23 minutes ago

> But documentation is not the same as field experience.

Even if it were, creating good documentation or assessing its quality requires experience in using good and bad documentation. And how would juniors build up that experience if they are using AI for everything.

• Lio 3 hours ago

There’s even a management tutorial game which demonstrates the dangers of removing too much slack from systems.

It’s called The Beer Game[1].

One of the funny things about it is even people that have played and discussed it before _still_ make the same fundamental mistakes next time.

Short-termism is the death of companies.

https://en.wikipedia.org/wiki/Beer_distribution_game

• dragontamer 2 hours ago

Wut?

The point of the beer game is that buffering in the supply chain makes the bullwhip effect worse.

• wry_durian an hour ago

If "winning" the beer game means not overreacting to short-term signals, then you can view that as a form of slack. You're sometimes paying a bit extra to hold onto something that you have no immediate short-term use for.

• avmich an hour ago

I'm not sure it's the same kind of buffering. I would assume the "winning" strategy for the case when the known final demand is fixed is to maintain fixed the upstream orders, and buffer outcome, and for non-fixed final demand is to model that demand as good as possible and keep upstream orders accordingly to maintain outcome matching the demand model. Large penalties for buffering may make this approach not working, I guess...

• throwaw12 2 hours ago

> The problem is a management pattern .... Short-term cost cutting

Absolutely agree with this. Most MBAs are taught to optimize and reduce the slack.

It works fine with machinery and materials, but not with humans.

When machinery is optimized and run thin, when one of them breaks, you can get exact same in couple days (you usually prepare for it earlier), but with humans, they train their brain and next person is different from the first person.

Humans also break in different ways:

* They stop caring - you wouldn't notice it immediately, they will close tickets, but give bare minimum thought

* Communal brain will not be trained when there is not enough room for experiments and learning - which reduces the innovation eventually

This is exactly the reason it is difficult for US companies to compete with Chinese companies in manufacturing, because their communal brain have already trained and produced very good talent.

Next is the knowledge, more you outsource, more you lose it

• cjfd 4 hours ago

This sounds all true to me, but I think there is more. It is not just decisions by management, it is also the wider economic context. Low interest rates and, for the US, having the world reserve currency as your own currency both seem to make many of these changes attractive or even inevitable. Low interest rates lead to 'innovation' which I put in scare quotes because besides real innovation it can also mean something that passes as innovation but in the end just turns out to be a bubble of stuff that was not valuable enough. The 'innovation' then crowds out investments in more boring sectors like manufacturing. This is also not good for the population in general because fewer jobs are left for people who are not suited for working in highly 'innovative' sectors.

• lolive 27 minutes ago

I came to comment EXACTLY about this issue. Management lives in a world where they have absolutely no expertise on what they are supposed to manage. So they try to objectify their decisions, with generic KPIs based on efficiency or cost or whatever. And miss MANY additional decision axis very focused on WHAT they are supposed to build. That is a MASSIVE issue, in my opinion.

• adam_patarino 32 minutes ago

Most workforce reductions are using AI as a cover up for greedy short term bonuses.

Any exec using AI to pay fewer people lacks imagination.

• stingraycharles 5 hours ago

Seems to me that - optimistically - this would shift the job of a software engineer into a more formal engineering role, and that the actual implementation is done by AI. In the same way in other areas, engineering and implementation differ and implementation can be (and is) automated.

No idea how this should take form, though, and if it’s even realistic. But it seems like due to AI, formal specs and all kinds of “old school” techniques are having a renaissance while we figure out how to distribute load between people and AI.

• ted_dunning 4 hours ago

That sounds right, but it can be superbly wrong because that presupposes that you can debug what the AI gets very confidently wrong.

There are three legs to the stool: specification, implementation, and verification. Implementation and verification both take low-level knowledge and sophisticated knowledge of how things break.

• adrian_b 4 hours ago

Indeed, even if were possible for someone to create any program most of the time just by directing a team of AI agents, when something does not work one needs the ability to zoom in through the abstraction levels and understand exactly the program that is executed, so only knowing to generate prompts becomes insufficient.

This is the same with compilers. Most of the time a programmer needs to know only the high-level language that is used for writing the program. Nevertheless, when there is a subtle bug or just the desired performance cannot be reached, a programmer who also understands the machine language of the processor has a great advantage by being able to solve the bug or the performance problem, which without such knowledge would be solved in much more time or never.

• don_esteban an hour ago

1) luckily, nowadays compiler's bugs surface very rarely, as the average programmer does not have capability to solve such issues

2) unfortunately, LLM's, by their very nature (not having a model of what they do, are prone to introducing subtle bugs, i.e. it is like programming in high-level language whose compiler likes to wing it

• SleepyMyroslav 2 hours ago

I don't think compilers are a good example. The economics of software development has won a long time ago. For example in Gamedev with well known soft real-time requirements people (mostly) stopped doing that machine code dance many hardware generations ago. Like it happened with memory optimizations: people measure memory in GB now not in KB =)

I am sure programmers cherish every case when they can do micro optimization but in the retrospect the high level cuts is what made the system fit the perf or memory budget.

• zelphirkalt 4 hours ago

And the next level of this is, that even companies that realize this, mostly go ahead acting like this anyway, because they think someone else can train the juniors. Some other company will appear to do that, but nimby! Over time the lack of good judgement will lead to a decline in their products' quality, which will be difficult to recover from.

• DrBazza 4 hours ago

> The problem is a management pattern: removing people and organizational slack because they don’t generate immediate profit, and then expecting the knowledge to still be there when it’s needed.

It's always seemed to me that the problem is corporate profit and personal profit above all. 'Management' is a subset of this, and so is pretty much everything else, including the current drive for AI.

It's the Western, perhaps American, approach to business and emphasived by MBAs and the media. Lowering costs, driving share price, dividends and corporate profit.

This race over the few decades has hollowed out most Western companies.

Listen to any entrepreneur podcast, or read any website, and it's all about 'how quickly can I get to exit', i.e. personal profit.

Capitalism is the worst form of economic system, apart from all the rest.

• 2ndorderthought 2 hours ago

I have worked for companies in different countries.

I think the striking thing is how US companies tend to have no idea how to be wealthy. Record profits, so the ceos use all of their tricks to get rich quick? They are already rich! Don't fix what isn't broken. Not every company needs to expand into 10 new markets, or have 5% lay offs or double in revenue. Some of this is investor pressure, but often it's not. Some guy who made it to the top is bored, doesn't feel like he is obviously doing enough, so he keeps making decisions to justify his position.

This isn't to insight flames but the European companies I worked for knew how to be wealthy! The market took a down turn from COVID, they ate the cost to keep their people. Some flashy new vertical is trending. They decided it's not for them, they have a brand and customers that they should focus on while everyone else works out the kinks. The company decides, why go public at all, we are successful and don't need anyone else's influence over us.

People say "you cannot project beyond 1 quarter". This is true in terms of catastrophe or gambler success. But its not true, if you act in q1 like there will be a q2 or even 5 years from now or heaven forbid a second or third generation you make different moves. You value different things.

• pelorat 4 hours ago

In the case of the military I'd say the real reason is political. After the fall of the Berlin wall, Europe collectively agreed (knowingly or not) that war is now a thing of the past and the goal should be the complete dismantling of militaries worldwide, starting with Europe. Lead by example, etc.

• rini17 3 hours ago

It's subtler than that. Europe was just constantly reminded by its big brother not to duplicate NATO structures, which are dependent on the US.

• don_esteban 44 minutes ago

This.

Plus, of course, each European country has to support their own defense industry, so each one of them needs to have their own howitzer/tank/whatever and they can't agree on common approach that would actually allow for the economy of scale.

• brabel 3 hours ago

They agreed that war was a thing of the past, but still continued to push for NATO to allow new members anyway, ironically causing Russia (and China and everyone who is NOT in NATO) to suspect that war was NOT a thing of the past and therefore never quite abandoning their military completely. Unpopular opinion: the West should either NEVER have abandoned its military production (so as to maintain NATO actual preparedness for war, given that's the only reason for its existence) OR it should just have dismantled NATO and announced to the world that it strongly believes war is a thing of the past, and that other countries are advised to follow suit. But we actually chose the easy, halfway path: keep NATO, keep our militaries "looking strong" (which gives the signal our rivals should also do the same, obviously), but not actually be ready for any sort of major war and as the article points out, even lose actual capacity to become ready for war within any realistic timeframe. The worst possible outcome :(.

• avmich 41 minutes ago

It could be matching theory for outcome though. The unpopular opinion may still be wrong too. Russia was quite different in 1999, or better in 1992, to the point of joining NATO, and China was nowhere the threat of today, and it could be different reasons- not keeping NATO - which caused today's standup. So, basically, the situation seem to be more complex.

• LtWorf 2 hours ago

USA had no part in that push?

• 0xDEAFBEAD an hour ago

NATO expansion was pretty controversial in the US

https://time.com/archive/6731121/how-clinton-decided-on-nato...

• bluGill 2 hours ago

Perhaps but the US was pushing NATO to invest more in war for years suggesting they didn't believe war was in the past

• don_esteban 42 minutes ago

Correction: The US was pushing NATO to invest more in US gear.

• LtWorf 2 hours ago

That's because they have more to gain from that.

• surgical_fire 2 hours ago

> But documentation is not the same as field experience. Automation is not the same as judgment. Without people who have actually worked with the system, you end up with a loss of tacit knowledge—and eventually, declining productivity.

This tracks the experience throughout my carreer, in all sorts of companies. From established body-shop consulting, to minor early-stage startup, to FAANG, and everything in between.

Essentially everywhere I worked, you would benefit to switch jobs. Companies would at times do quite an effort to hire you, but wouldn't try anything to keep you around.

This always sounded bonkers to me, but as I directly benefited with a rapidly increasing salary when I job-hopped, my response was a vague shrug. "Those who care don't know and those who know don't care".

The thing is, in every place, you typically is at your least useful when you just joined. It takes months, sometimes years, to learn the intricacies of the business, the knowledge that informs your skills so you can make better decisions, better designs, better implementation, better initiatives.

This is, of course, just one facet of a larger trend of how things are typically mismanaged. The article brushes on it when it talks about how governments in the US and Europe had to scramble to get 50-year old manufacturing going anywhere.

This is why I laugh whenever I hear someone talking about "governments should be administered like a business". Bitch, businesses are typically mismanaged due to terrible incentive loops, institutional blindness and corporate rot. That anything seemingly works is more a result of inertia and conformity than a sign that things are well managed.

• palmotea 5 hours ago

> The problem is a management pattern: removing people and organizational slack because they don’t generate immediate profit, and then expecting the knowledge to still be there when it’s needed.

I think that's still a symptom. The real problem is ideology: the monomaniacal focus on profit-making business, which infects our political leaders, down to capitalists and business leaders, down to the indoctrinated rank-and-file. Towards the end of the cold war, the last constraint on it were abolished, the the victory over the Soviet Union made it unquestioned.

The Chinese don't have that ideological problem. Their government appears to not give a shit about how much profit individual business make, they care about building out supply chains and a capabilities. They will bury the West, so long as the West remains in the thrall of libertarian business ideology.

• AnthonyMouse 4 hours ago

The US is stuck in this weird irony where they recognize that Soviet-style central planning is a disaster but can't recognize that it's what megacorps do when they're insulated from competition. Internal politics, perverse incentives and a system that can sustain massive inefficiencies right up until the point that it doesn't.

In general productive economic activity generates a surplus and that surplus allows for slack. Human beings intuitively understand this. Hobbies are frequently de facto training for things that aren't currently happening but might later. Family-owned and operated businesses are much less likely to try to outsource their core competency for the sake of quarterly profits.

But regulatory capture and market consolidation causes the surplus to go to the corporate bureaucracies capturing the regulators instead of human beings with self-determination and goals other than number go up, and then the system optimizes for capturing the government rather than satisfying the people. "When you legislate buying and selling the first things to be bought and sold are the legislators." You throw away the competitive market and subject yourselves to the unaccountable bureaucracy, and then try to pretend it's not the same thing because this time the central planners are wearing business suits.

• NordStreamYacht 4 hours ago

> megacorps do when they're insulated from competition. Internal politics, perverse incentives and a system that can sustain massive inefficiencies right up until the point that it doesn't.

You just described Lucent.

• AnthonyMouse 3 hours ago

That's the end stage. The bigger problem is the companies rotting from the inside even though they're still alive, because they use their resources to suppress your alternatives to them while they're slowly dying on top of you.

• TheOtherHobbes 3 hours ago

Yes - ultimately it's the same system. Far from being daring and innovatory, it's backward-looking, unimaginative, and bureaucratic.

Vision for the future is limited to grandiose fantasies straight out of 1950s pulps and the "heroic" creation of narcissistic corporations that are cynically extractive and treat employees and customers with equal contempt.

The differences which used to provide a convincing cover story - no single Great Leader, a functional consumer economy, votes that appear to make a difference - are being dismantled now.

What's left are the same mechanisms of total monitoring (updated with modern tech) and reality-denying totalitarian oppression, run for the exclusive benefit of a tiny oligarchy which self-selects the very worst people in the system.

• adrian_b 3 hours ago

Yes, many Americans and other Westerners believe that the so-called "socialist" economies, like those of the Soviet Union and of Eastern Europe were non-capitalist.

This is only an illusion created by the fact that the communists were careful to rename all important things, to fool the weaker minds that the renamed things are something else than what they really are.

In reality, the "socialist" economies were more capitalist than the capitalist economies of USA and Western Europe. They behaved exactly like the final stage of capitalism, where monopolies control every market and there is no longer any competition.

Unfortunately, after a huge sequence of mergers and acquisitions started in the late nineties of the last century, the economies of USA and of the EU states resemble more and more every year the former socialist economies, instead of resembling the US and W. European economies of a few decades ago.

• AnthonyMouse 3 hours ago

Everyone wants to tag the evil with their opposition's name. The evil is concentration of power. But no one wants to call it that because then they can't pretend that it's something different when they're doing it themselves.

Witness the people who keep proposing to solve market consolidation with higher taxes. Higher taxes go to the government, and therefore the interests that have captured the government. Are we going to solve it by taking money from Warren Buffet and giving it to Larry Ellison? Do we benefit from increased funding for Palantir? No, you have to break up the consolidated markets through some combination of antitrust enforcement and peeling back the regulatory capture that prevents new competitors from entering the market.

• anon7725 an hour ago

> Higher taxes go to the government, and therefore the interests that have captured the government.

There is at least a chance for it to be redistributed, unlike private wealth.

• esseph an hour ago

I'd argue we need both massive antitrust, and higher taxes on the wealthy to prevent them from amassing the power to prevent the antitrust.

• don_esteban 37 minutes ago

And change in laws regarding the legalized corruption (Citizens United, ...). And fight for real freedom of speech.

This is very complex problem that needs to be tackled from all sides simultaneously, the entrenched interests are already well setup to defend themselves.

• samiv 2 hours ago

And to complete the reversal what is now referred to as the "golden age of capitalism" i.e the post WW2 USA was actually very socialist. Strong social movement and unions and social spending that created a wealth working/middle class with a bunch of spending power.

Inequality society producea inequal economy (and vice versa) which is the economy of any developing country. Few rich,. miniscule middle class and lots of poor people in slums snd poverty.

• fxtentacle 4 hours ago

West: We need profits and then we’ll try to build something useful.

China: We need to build this useful thing and then later let’s try to make profits, too.

• andy_ppp an hour ago

What do you think the war in the gulf is about, the US cannot compete with China so they are destroying the global system that enabled them. There is no plan to have a peace with Iran, only perpetual war and the destruction of the middle east, starvation in East Asia and poverty and nationalist wars in Europe, potentially with Russia taking over vast swathes of Eastern Europe again. Suddenly Russia is the one in charge of the China-Russia relationship. It's such a stupid plan for the US that you might think it was designed by Putin himself.

• don_esteban 23 minutes ago

You started well, but then the train got derailed...

Russia has no need for Eastern Europe (they have enough land and resources, why saddle yourself with hostile population?), as long as the said Easter Europe is not threatening them with NATO bases/missiles (US has repeatedly shown that they do not hesitate to use their muscle if they think they can get away with it, so Russia's paranoia is not entirely unfounded).

Even if Russia somehow took over Eastern Europe (most likely way: they learn from US how to do soft 'regime change'), they have no chance against China (China is just so much bigger and better organized; the population's mentality also matters a lot). China and Russia are rather complementary, there is not reason for confrontation between them.

But you are correct, what US is doing is really totally stupid ... although it seems designed by Netanyahu, not Putin.

• andy_ppp 15 minutes ago

If China cannot get oil from the middle east what happens to China and China-Russia relations? I didn't say there would be hostilities just Russia would become potentially the more dominant partner.

If NATO expansion is the reason for the war in Ukraine (not imperialism) then why has the war not stopped now we know Ukraine will never join NATO?

• yapyap 3 hours ago

> The real issue, in my view, is not AI itself

in shootings technically the guns are not the issue since they dont fire on their own.. they do enable the ability to shoot though

• 2ndorderthought an hour ago

Only in 2026 is AI the answer to everything and when the negative traits of our behaviour are amplified due to AI it clearly has nothing to do with AI even when the article exactly about that.

• vrganj an hour ago

The problem, in other words, is quarterly earnings in specific and shareholder capitalism in general.

• Aaargh20318 3 hours ago

> What AI is being sold as right now is not really productivity. In many domains, productivity is already sufficient. What’s being sold is workforce reduction.

And workforce reduction is a nobel goal. In fact, I think it's one of the most important things humanity should focus on. We should strive for a workforce of zero. Humans currently was an enormous amount of their life working instead of more worthwhile pursuits.

I despise the rhetoric around this, we didn't "lose jobs" over AI, we saved ourselves a lot of work. What it does do is highlight a problem in our current society: the link between labour and the access to resources (e.g. money).

I don't think that AI is the ultimate answer to the problem of work, but it can contribute to it.

• only-one1701 2 hours ago

The time to solve that resource problem is before AI concentrates power, not after. It’s LESS likely to happen when a tiny elite increases their already huge amount of power.

• esseph an hour ago

Jobless people normally can't feed themselves in a modern world.

And uh, healthcare. Among other things.

• sgt 3 hours ago

You sound convincing, but it also reads very AI generated. A lot of people will stop reading half way.

• HeavyStorm an hour ago

You're absolutely right. And the root cause is simple: the stock market / shareholders. The incentive is for quarterly returns, not long term. That's why CEOs look for that - that's the job they are assigned by shareholders and the board. For a shareholder what matter is the stock going up. Heck, you can make money even if it goes down, but you can't if it stands still.

• lnsru 34 minutes ago

No. It’s pure greed dominating the world. My employer is owned by bigger private company and the shitshow is the same as in big megacorporation. There are hordes of colleagues to stab one for 100€ more salary a month. Disgusting.

The company is manufacturing special computers. The initial owner/founder ordered CPU modules and memory cards always looking at the price break. His question was always „how many to buy to get best price?“. So he ordered sometimes 200-300 parts more than needed immediately. Then the follow up order came and he emptied the storage. Now new manager always orders EXACT amount memory cards as ordered computers. Price is secondary thing, most important thing to work without warehouse and get things delivered just in time. What doesn’t work at all for the while already. The high prices buying small quantities is eating up the profit, so people are getting fired to save costs. It is pure greed dominating western world. Everything is done to look accounting nicely at every cost, get whole bonus despite ruining the company long term. I see this pattern recently very often.

• liendolucas 3 hours ago

I still code daily without any coding assistance mostly because I believe this is the way to not forget how things are done, even trivial things.

My main point against using AI is that I do not want to depend basically on anything when I'm in front of the screen (obviously not including, documentation, books, SO and alike).

I closely see people that are 100% dependent on AI for literally everything, even the most trivial daily tasks and I find that truly scarly because it means that brain effort drops drammatically to a minimum level. To be stolen mental effort is not a minor thing.

Giving away that at least for me means to become a dependent zombie. Knowledge comes basically from manual trial/error almost daily.

Technology being technology if anything has shown us that we can be pushed and manipulated in every single conceivable way. And in my opinion depending on AI is the ultimate way for companies to penetrate and manipulate a very delicate ability of a human being: to think and wonder about things.

• sahilagarwal 2 hours ago

I generally don't have as much time (or patience / fucks) anymore in my day. So, I use AI 3 days a week. On the other two days, I don't use assistants to code, just ask them to review my work after its done.

Helps me keep sane tbh. And keeps the edge sharp.

• cpursley an hour ago

Another perspective: AI reduces brain effort in some domains which actually frees up brain juice that can be applied elsewhere.

• wolvesechoes 41 minutes ago

Show us effects.

What amazing breakthroughs were achieved thanks to brain juice freed by AI usage? What great works of art were created?

• owebmaster 34 minutes ago

Exactly. What service got better and/or cheaper?

• cpursley 30 minutes ago

So this is the classic tension between the "coding for the love of code" vs the "coding to solve problems" mindset. This cultural concept has been around since before AI was on the scene, heck well before software existed (craftsman vs builder).

• whompyjaw 42 minutes ago

Ya this has been my sentiment. If i need to one-off a quick script that does some processing on data, it’s nice to offload that so i can focus on pieces of my code that are more important and interesting to me. The context switching cost is still there tho…

• ReptileMan 2 hours ago

>I closely see people that are 100% dependent on AI for literally everything, even the most trivial daily tasks and I find that truly scarly because it means that brain effort drops drammatically to a minimum level. To be stolen mental effort is not a minor thing.

I find myself thinking more and my thinking is of higher quality. Now I have 30 years of fucked up projects experience, so I know all the rakes I could step into.

• LtWorf 2 hours ago

You probably overestimate yourself.

• mewpmewp2 2 hours ago

I relate to the idea of having a different level of thinking now with AI. How would you evaluate that someone is overestimating themselves?

As in every little thing that used to be too much effort before, I can just easily get the info, the data now with prompt. The data analysis of something, which otherwise might have taken hours to figure out, I can just have AI write scripts for everything, which allows me to see more data about everything that previously was out of touch. Now you will probably ask of course "how do I know the data is accurate?" -- I can still cross reference things and it is still far faster because even if I spent hours before trying to access that data there wouldn't have been similarly guarantees that it was accurate.

I am thinking so much more about the things now that I couldn't have possibly time to think about before because they were so far out of reach, or even unimaginable to do in my lifetime. Now I'm thinking about automating everything, having perfect visualizations, data about everything, being able to study/learn everything quickly etc.

• mewpmewp2 2 hours ago

I hear this a lot, but also I'm curious. How can you really forget coding?

It doesn't seem to me a thing that I could suddenly forget?

Without AI I will feel frustrated that I'm now much slower, but ultimately it's just describing logic. So I'm a bit skeptical of the claim.

My brain effort is also on other things now, such as how to orchestrate guardrails, how to build pipelines to enable multiple agents work on the same thing at the same time, how to understand their weaknesses and strengths, how to automate all of that. So there's definitely a lot of mental effort going into those things.

• vjsrinivas 2 hours ago

If you are not practicing an activity consistently, you'll forget some of the finer grained aspects. When I'm coding, I subconsciously create a continuous logic map. Having someone or something just generate (and generate so quickly) destroys that and makes it easier for bugs to slip through.

• mewpmewp2 an hour ago

I mean if e.g. AI stopped existing all of sudden, it doesn't mean you would have forgot how to code and couldn't all of sudden anymore, right?

You could forget maybe how a certain lib or framework worked or things like that, or more so how you wouldn't have been up to date with all the new ones, but ultimately code can be represented as just functions with input and output, and that's all there is to it.

As in how could I possibly forget what loops, conditionals or functions are?

I haven't written code myself for 1+ year (because AI does it), but I feel like I have forgot absolutely nothing, in fact I feel like I have learned more about coding, because I see what patterns AI uses vs what I did or people did, and I am able to witness different patterns either work out or not work out much faster in front of my eyes.

• Jtarii an hour ago

A writer will never forget what adjectives, verbs, and nouns are. But if they use LLMs to write for them for years they will be worse at writing on their own.

• mewpmewp2 an hour ago

Well, what I'm trying to say here is that coding is conveying logic, the way you'd evaluate it is how fit it is for its purpose, and if it's long term code, how well it will scale into future.

Now writing is something totally different. In some cases writing ability is not about writing, it's about your thoughts and understanding of life and human nature.

You could simply become a better writer without not writing anything by just observing.

If you are using an LLM to write, what is the purpose of that? Are you writing news articles or are you writing a story reflecting your observations of human nature with novel insights? In the latter case you couldn't utilize AI in the first place as you'd have to convey what you are trying to say within your own words, as AI would just "average" your prompt or meaning, which takes away from the initial point.

With code it's desired that it's to be expected, with good writing it's supposed to be something that is unexpectedly insightful. It's completely different.

• esseph an hour ago

> You could simply become a better writer without not writing anything by just observing.

To become a better X to must do more of X. There are few shortcuts worthwhile.

• mewpmewp2 40 minutes ago

I would disagree. If you only do X, in fact I think you will miss a lot of things that could make you better. You can become better writer by reading other great writings, if you only write yourself, you will not have the full big picture on what is possible. Then you can become better by thinking a lot, imagining a lot, etc... Same with most fields I would argue.

Although we were discussing about the decay of skill in something. While in some things the decay is super clear (as in running - pace, not the technique), I think there's many areas where there's no clear decay and other activities will actually significantly boost it, and any decay that there is, will be removed in just few days of practice or remembering.

• mewpmewp2 an hour ago

Are we talking about observational ability, creativity, accuracy of communication or grammar here?

There's many more ways to evaluate a writer skill in terms of what they are doing vs what is coding. Coding can be creative, but in most cases you are not evaluating coding as writing, unless it's possibly technical writing, which is still different compared to coding.

• skydhash an hour ago

Coding is a thinking avtivity. What you’ll be missing is the nimbleness in doing that activity, not the knowledge.

So you may remember all your high school math, but not doing it every day, means you are slower than some of the students. So your knowledge of programming will be there, bit you will be slower because you no longer have the reflex that comes with doing things over and over.

• mewpmewp2 an hour ago

I feel like I have to disagree here. I don't practice e.g. multiplication or doing math in my head everyday or for years really, but I feel like I'm just as fast at it as I ever was. In fact whenever I have tried things like Lumosity or brain benching games, that I used to do when I was younger, I'm actually faster than when I was younger, despite not having practiced it at all. I feel like all the real world side practice has helped me improve these abilities indirectly, they have all added to my brain's ability to notice novel patterns, see things from different perspectives, apply new intuitive strategies, that I might have not noticed because I was tunnel visioning when I was younger.

There's also plenty of things that I have got for life just by having practiced them when I was child. E.g. I think everyone gets bicycling, but there's also handstand, walking on hands, etc, which I learned as a kid for few years, and I can still do it even if I only do it once a year. In my view code is exactly the same, and maybe in a way even more straightforward, it's easier than obscure math since you don't have to memorize any formulas to solve it easily, albeit I think a lot of math is great because you don't have to memorize formulas in the first place you just have to internalize or figure out the logic or the idea behind it, and then you just have it. I think repetition in math is specifically the wrong way to go about it, it's about understanding, not repetition.

• Jtarii an hour ago

If your internet died you would likely be worse at programming that you were in 2020. I think is what people are getting at.

• djyde 10 minutes ago

I always compare AI programming to Google. If that's the case, then without internet, without Google, without Stack Overflow, my abilities would be worse than they were in 2000.

• mewpmewp2 an hour ago

If my internet died in 2020 I would also be useless because probably I couldn't install/download all the libs/frameworks, etc.

But if I didn't need those things, and there was a simple pseudolang syntax which acted exactly the same in all versions, didn't have any breaking changes, I would argue I'd be much better at it now.

Internet, search etc is needed to understand how to setup libs/frameworks/APIs, but logic at itself isn't something that I could possibly forget. AI will help to get those setups quicker without me having to search, but arguably it's all useless information, that will get out of date, that I really don't even need to know. I don't need to know top of my head what the perfect modern tsconfig setup should look like or what is the best monorepo framework and how to set it up, so it would scalably support all different coding languages for different purposes.

• doginasuit 6 minutes ago

I write all my own code and then run it by the LLM for analysis and suggestions. I'm probably not a 10x developer, but this has 10x'd my own progress. I make fewer missteps and build a deeper understanding of the problem space. I used to brace myself for a lengthy debugging session whenever testing a new section of code and now it tends to run as expected, more often than not. It may take a few years but I expect people who are using LLMs in the backseat will pull out ahead as the leaders of the future of software.

• TonyAlicea10 2 hours ago

“Money was never the constraint. Knowledge was.”

The irony is how difficult it is to read this obviously AI-generated article due to its unnatural prose and choppy flow full of LLM-isms. The ability to write is also a skill that atrophies.

Even when AI is understandably used due to language fluency, I’d prefer to read an AI translation over a generated article.

If you don’t care enough to write it, why should I care enough to read it?

• barankilic an hour ago

I am really amazed at how we are really okay with LLMs writing code end to end (without human in the loop) / dark factory concept but when it comes articles, HN is suddenly against LLMs writing words. I do not see the difference between writing code and writing prose. Both have keywords, grammars, syntax, meaningful combinations (function or chaining in code / collocations in words). If we think that AI-generated words are not meaningful or easy to follow that same must apply to AI-generated code, which may be harder to read or understand since it is not written by human. Let's stop being hypocrites.

Note: My comment is not specific to this comment. I just wanted to express myself at somewhere and this is where I think it may be suitable.

• unleaded 4 minutes ago

The purpose of writing is to get your thoughts across in words. A prompt sufficient enough to get out an article that doesn't contain things you didn't intend has to contain as much information as the article itself would. Just write the article.

• avocabros an hour ago

That's because the purpose of code is to be used, not to be read.

The only purpose of the written word is to be read.

• wiseowise 24 minutes ago

> I do not see the difference between writing code and writing prose.

That’s the problem.

• 1287128 20 minutes ago

We are not okay with slop code. There was healthy and widespread dissent in 2024 and beginning of 2025. Ycombinator cracked down on the dissent first by installing another moderator and then by downranking and banning anti-AI people.

What you read here are bots and those invested in AI and an occasional retired person who uses AI as a crutch.

• alansaber an hour ago

Slop is slop.

• bonsai_spool 5 minutes ago

Very frustrating to have an ‘article’ so heavily AI-written take up this much space and attention.

What’s really happening is that we are all forgetting how to think

• mawadev 3 hours ago

I highly question the ability of companies to gauge the level of experience of any dev.

The distinction between junior, mid, senior, lead is a facade. It is a soft gradient that spans multiple areas, but is tainted and skewed by the technology du jour.

Technically you don't have to be an employed developer to become a senior developer. It boils down to your personal willingness to learn and invest time building.

What companies seek these days are people having the experience with (dysfunctional) organizational structure and working around the shortcomings of the organizations communication and funding patterns, nothing more.

Does that really make you senior or just politically versed?

The pattern shows up the most whenever failing software pokes holes in perception.

• gyomu 2 hours ago

There are two kinds of developers.

There's the kind that, when given a problem, will jump in, learn what they need to learn to solve the parts they don't fully understand yet, deliver meaningful iterative results, talk to people as needed, keep you posted on their progress, loop in other team members and offer/request help to/from them, take initiative on the obvious missing parts that would benefit the project as a whole, etc.

And then there's the rest.

Within the first few years of someone's career, you can quickly tell which kind they are. It's almost impossible to turn someone from the latter group into the former.

Yes, everything else is a façade. You can be a "senior" developer with 30 years of experience and still be in the latter group. And you can be fresh out of college and be in the former.

Now some people are extremely good at other skills (politics, interpersonal communication, bullshit, whatever you want to call it) and will be able to seem to be in the first group to the people who matter (managers, execs, etc) while actually being in the second group. But then we're not talking about actual software-making skills anymore.

You can also totally be in the first group and be underpaid, never promoted, etc. There's little correlation with actually career success.

• teaearlgraycold 2 hours ago

> What companies seek these days are people having the experience with (dysfunctional) organizational structure and working around the shortcomings of the organizations communication and funding patterns, nothing more.

This is depressing and seems right. And yet this is something I desperately want to be ignorant of. I don’t want to peel apart my brain for anyone. Working within these kinds of problems is pure pain.

• brabel 3 hours ago

> Technically you don't have to be an employed developer to become a senior developer.

That's incredibly unlikely. Do you need to be an employed surgeon to become a senior (or whatever they call it) surgeon??

I very much doubt you can be senior without having actually spent years doing it professionally. The experience is everything, no book will give you the sort of understanding you need. That's unfortunately human nature, we are not capable to learn and internalize things simply from reading or watching others do it, we absolutely need to do it ourselves to truly learn. Didactic books always have exercises for this reason.

You can learn facts and techniques from books, obviously. But just because you've read a book about Michelin restaurants that you can now be a Michelin Chef.

• lelanthran 3 hours ago

> That's unfortunately human nature, we are not capable to learn and internalize things simply from reading or watching others do it, we absolutely need to do it ourselves to truly learn.

That is, and has always been, true. Currently, however, the narrative that is sold (and unfortunately accepted by so many of the senior developers who post here) is that the experience of telling someone else to do something is just as valuable.

• BoingBoomTschak an hour ago

Yeah, but working in a team isn't something you can learn without doing.

• kaashif 3 hours ago

Maybe they mean you can be not employed and build products yourself? Technically true, but that's like running your own surgeries or something, you're still doing surgery.

• andrewstuart 3 hours ago

Analogies to other professions give your argument an air of legitimacy, with none.

There’s plenty of people in this world who are expert programmers without following any traditional path.

“Oh yeah, like who”, you say.

Con Kolivas, anaesthetist, work on kernel schedulers including the Staircase Deadline (RSDL) scheduler which was a precursor to the Completely Fair Scheduler in Linux and the Brain Fuck Scheduler and the ck Patchset.

• Animats 5 hours ago

> They can’t tell you what the AI got wrong.

AI code generators are trolls. They confidently plausible content which is partly wrong. Then humans try to find their errors.

This is not fun. It has no flow.

• simondotau 4 hours ago

I beg to differ, insofar as my own experience has been the exact opposite. I enjoy fixing other people's mistakes. And I especially enjoy outsmarting the LLMs. I find that I can obsessively breathe down the neck of an LLM for far longer than I could ever stay in the traditional flow state.

• Terr_ 4 hours ago

I think I might enjoy it for a little bit and then become very depressed at the idea that it will never end, a future of fixing things that should never have been broken in the first place and which won't stay fixed.

• lelanthran 3 hours ago

> I find that I can obsessively breathe down the neck of an LLM for far longer than I could ever stay in the traditional flow state.

I can do that too. Most programmers can.

That's because it requires less skill! Critiquing something is always easier than doing it.

I can literally keep an LLM fixing things forever by just saying things like "This is not scalable", or "this is not maintainable", or "this is not flexible" or "this is not robust", ... etc ad nausem.

That doesn't take skill at the level to actually write the software. For the market which is hoping to switch to mostly LLM coding, the prize they are eyeing is skill devaluation and not just, as many think, productivity gains.

They have no reason to double output, but they'd sure love to first halve the people employed, and then halve the salaries of those people (supply/demand + a glut of programmers in the market), and then halve salaries again because almost no skill necessary...

• bradleyjg 3 hours ago

That's because it requires less skill! Critiquing something is always easier than doing it.

No, it was always the other way around. Mediocre programmers always wanted to rewrite everything because reading and understanding an existing codebase was always harder than writing some greenfield thing with a “modern language” or “modern libraries” or “modern idioms.” So they’d go and do that and end up with 100x the bugs.

• layer8 5 minutes ago

How is that “no” and “the other way around”? The desire to rewrite comes from the ease with which one can critique existing code for being “too hard” to understand.

• lelanthran 2 hours ago

> Mediocre programmers always wanted to rewrite everything

You are comparing writing something with rewriting something. You don't know what the difference is?

• ffsm8 2 hours ago

You can't generalize that statement.

There is a very valid reason why the Creator of erlang back in the day said something along the line of "you need to iteratively remake your software, improving it each time"

As your knowledge about a topic grows, your initial mistaken implementation may become more and more obvious, and it may even mean a full rewrite.

But yes, a person which instantly says "rewrite" before they understood the software is likely very inexperienced and has only worked with greenfield projects with few contributers (likely only themselves) before.

• neonstatic 4 hours ago

Perhaps you have the psychological make up to thrive in this new environment. Glad it is working for you.

• cbg0 3 hours ago

It should have the same flow as reviewing PRs from humans.

• t43562 3 hours ago

Who really truly enjoys that and doesn't see it as a chore?

I find the real way to review other people's code is to program with it and then I start seeing where the problems are much more clearly. I would do a review and spot nothing important then start working on my own follow-on change and immediately run into issues.

• cbg0 3 hours ago

> Who really truly enjoys that and doesn't see it as a chore?

This is a whole different discussion, but I just see it as part of the job that I'm getting paid for, I don't need to enjoy it to do it.

Functional testing is a must now that writing tests is also automated away by LLMs as you can get a better understanding if it does what it says on the box, but there will still be a lot of hidden gotchas if you're not even looking at the code.

Plenty of LLM-written code runs excellent until it doesn't, though we see this with human written code too, so it's more about investing more time in the hopes of spotting problems before they become problems.

• t43562 2 hours ago

> Functional testing is a must now that writing tests is also automated away by LLMs as you can get a better understanding if it does what it says on the box, but there will still be a lot of hidden gotchas if you're not even looking at the code.

Well, there you go. Letting AI write the tests is a mistake IMO. When I'm working with other people I write tests too and when I see their tests I know what they're missing out because I know the system and the existing tests. Sometimes I see the problem in their tests when I'm working on some of my own. If you absent yourself from that process then ....

• sampullman 3 hours ago

I usually don't mind, but tend to split reviews into two types. Either I understand the context and can quickly do an in depth review, or I have to take some time to actually learn about the code by reviewing the surrounding systems, experimenting with it, etc. But in both cases I would at least run the code and verify correctness.

I think it becomes a chore when there are too many trivial mistakes, and you feel like your time would have been better spent writing it yourself. As models and agent frameworks improve I see this happening less and less.

• fg137 2 hours ago

Which is a really, really bad idea.

Most people don't spend nearly enough time going through a code review. They certainly don't think as hard as needed to question the implementation or come up with all the edge cases. It's active vs passive thinking.

I, for one, have found numerous issues in other people's code that makes me wonder, "would they have ever made such a mistake if they hand coded this?"

btw, a side effect is that nobody really understands the codebase. People just leave it to AI to explain what code does. Which is of course helpful for onboarding but concerning for complex issues or long term maintenance.

• microtonal 3 hours ago

The problem is the LLMs completely change the equation. Before LLMs, beyond very junior (needs serious coaching) levels, reviewing was typically faster than writing the code that was reviewed. With LLMs, writing code is orders of magnitude faster than reviewing it. We already see open source projects getting buried in LLM slop and you have to find the real human or at least carefully curated contributions among the slop.

I would not be surprised if many open source projects will outright stop taking PRs. I have had the same feeling several times - if I'm communicating with an LLM through the GitHub PR interface, I'd rather just directly talk to an LLM myself.

But ending PRs is going to be painful for acquiring new contributors and training more junior people. Hopefully the tooling will evolve. E.g. I'd love have a system where someone has to open an issue with a plan first and by approving you could give them a 'ticket' to open a single PR for that issue. Though I would be surprised if GitHub and others would create features that are essentially there to rein in Copilot etc.

• catcowcostume 2 hours ago

Anything AI generated is troll. There's no logic. It's just pattern repetitions. I don't get how supposedly smart engineers fall for it

• barnabee an hour ago

Because a lot of engineering is pattern repetition, which is not very fun for engineers either, and LLMs can do it much faster?

• skydhash an hour ago

Not really. Any patterns got optimized and automated. If you’re still seeing patterns, then you need to look harder, because they will be similar onlu superficially.

• whycombinetor 5 hours ago

>I read the Fogbank story and recognized it immediately. Not the nuclear material. The pattern. Build capability over decades. Find a cheaper substitute. Let the human pipeline atrophy. Enjoy the savings. Then watch it all collapse when a crisis demands what you optimized away.

>In defense, the substitute was the peace dividend. In software, it’s AI.

Before it was AI, the cheaper alternative was remote contract dev teams in Eastern Europe, right?

• Tade0 4 hours ago

Not sure why that was ever the plan, as there are clearly not enough people.

Also over here, east of 15°E we were fired all the same.

I believe the plan is to quite simply "do less overall unless it's about AI", but everyone was waiting for others to start layoffs first.

I spent six months working part time and the decision makers made it clear that this is preferable for them long term. Beats getting fired, but I couldn't sustain this lifestyle - I'm frugal but not that frugal.

• NSUserDefaults 5 hours ago

Happy to help and eventually take over.

• Madmallard 2 hours ago

Pretty sure cheap foreign labor is more prevalent now than ever at every major tech company.

They really, really do not want to spend money. Especially not on Americans and their health insurance.

It's really strange how we're just letting them get away with this. They're on a fast trajectory toward putting Americans completely out of work and without aid, even though they're American companies first and foremost.

• lotsofpulp 2 hours ago

> It's really strange how we're just letting them get away with this.

Choosing to pay less is what almost all people do, and it is consistent with almost all of human history.

> They're on a fast trajectory toward putting Americans completely out of work and without aid, even though they're American companies first and foremost.

When push comes to shove, i.e. paying lower prices to consume more goods and services or paying higher prices to ensure your countrymen can buy more goods and services, almost everyone will choose to pay lower prices. See political unpopularity of sufficient tariffs to stop imports.

“American” is a nebulous term, and Americans have been choosing lower prices for many decades before the current crop of employees at the global big tech companies chose lower prices. It is no different than when someone picks up lower priced workers outside waiting Home Depot, who are there because they do not have legal work authorization in the US.

• Nux 5 hours ago

India for the most part.

• neonstatic 4 hours ago

It had to be H1B Indians and outsourcing to India. As a European, I have seen some "Eastern European devs" around, sure. But they were not present at every company I worked with. Indians were. Quality-wise, it was always the same story, but I'm not going to elaborate. Everyone who is ready to accept it, knows what I would be saying anyway.

• codingdave an hour ago

No, you probably need to elaborate on that. Because in my experience, the quality from people in India varies just as much as the quality from any other country, including the USA.

What does make a difference is the company they work for. Large hourly "body shops" gives you coders whose quality tends to be lower, regardless if we are talking about an Indian firm or an American firm. Direct hires of independent individuals tend to be higher. But there is always individual variation.

You see people from India more, sure. There are more of them. Over a billion of them, to be precise. Anyone who dismisses a billion people as "always the same" is not being clever, they are being racist. And you know that, otherwise you wouldn't have pre-empted this response with "everyone who is ready to accept it."

Say that there are communication gaps to overcome. Say there are cultural differences. Say that those cultural differences change the assumed business expectations and the mechanisms by which people express their thoughts and opinions. Those things are all true. My recommendation to anyone who has an urge to dismiss an entire population is to instead get to know them: Step up and learn how your teammates think and work. It will make for a better team, better communication, and better results.

• anonzzzies 4 hours ago

I saw academic rigor fall of a cliff in exchange for 'better job alignment' between end 80s when I had my first class after finishing highschool called 'Formal verification in software' on to beginning of the 2000s when I left giving the first class to new students 'Programming in Java'. All the 'teaching how to think' was replaced with 'how to get a well paying job'.

• mschuster91 2 hours ago

> All the 'teaching how to think' was replaced with 'how to get a well paying job'.

Yeah. Companies didn't want to train new employees any more as that costs money (both for paying the trainees and the teachers) so they shifted to requiring academic degrees. That in turn shifted the cost to students (via student loans) and governments.

People call it a red flag for scams if you are supposed to pay your employer for training or whatever as a condition of getting employed... but the degree mill system is conveniently ignored.

• lotsofpulp 28 minutes ago

The problem was the government providing the blank check loans with no underwriting. Without that subsidy from future taxpayers, incentives would be properly aligned.

No lender would have been stupid enough to give 18 to 22 year olds $200k for bullshit degrees and sports facilities.

The onus would have remained on employers and government to pay for education, rather than a certification, because they would have been the ones paying.

• allending 5 hours ago

There's a certain irony in that the article itself is quite clearly assisted by AI. Not a criticism per se as I don't have a problem with AI assistance, but food for thought given the material being commented on.

• rezonant 4 hours ago

The tropes that AI introduces into articles are very noticeable, quite annoying, and very unnatural -- they unfortunately don't write well. It seems people use them to "polish" up their writing but in reality it would have read better if they hadn't.

My current pet peave is using period instead of comma, as in:

> My people lived the other side of this equation. Not the factory floor. The receiving end.

Ostensibly this is supposed to add gravitas, but it's very often done in places where that gravitas isn't needed, and it comes off as if I'm reading the script for an action movie trailer.

• lelanthran 3 hours ago

> The tropes that AI introduces into articles are very noticeable, quite annoying, and very unnatural -- they unfortunately don't write well.

Quite paradoxical: when its a person's native language we can spot it a mile away but there's no shortage of engineers who claim how good the code output is.

Whatever the reason for the default tone of AI in English, it's still there when generating code. It makes me think that the senior engineers who claim that it produces awesome output just don't understand the specific programming language as a someone who thinks in it almost natively.

• ykonstant 2 hours ago

Unnecessary emphasis can get... quite comical... indeed.

• concinds 2 hours ago

The uncanny valley is an attractor basin.

• SanjayMehta 4 hours ago

People have also started copying the AI tropes, especially your period/comma example.

• microtonal 3 hours ago

I am not sure if it is necessarily copied. A lot of influencer-style people used some of these patterns (periods, not X but Y). So I'm not sure who is copying who?

• morningsam 4 hours ago

Made me stop reading a few paragraphs in. I don't have a "problem" in the ethical sense either, but as the sibling comment notes, the way LLMs write is rather grating. To make matters worse, a) people seem to use them to add pointless volume / "filler" to their texts, so now I have to wade through pages and pages of this stuff, and b) I have no easy way to distinguish between an article at least based on novel human insights vs entirely LLM-generated from a "write me something about X topic" prompt. I don't think it's a stretch to say that the latter just isn't worth reading given the state of the art.

• rotis 4 hours ago

I don't have a problem with AI assistance either, but this undermines the point the article is making. For me it is like a priest preaching gay sex is wrong and then being caught in bed with a male prostitute (snorting cocaine optional). Leaves bad taste in the mouth.

• A_D_E_P_T 4 hours ago

Out of curiosity, what are you basing this on?

The text has few of the obvious AI tells. The only thing that, to me, looks characteristic of LLM-generated text is the short and terse sentence structure, but this has been a "prestigious" way to write in English since Hemingway.

• bonsai_spool 10 minutes ago

What are the obvious tells? List them, because I think our sense of the tells may not overlap.

This article is clearly LLM-generated, even the title. A key indicator is that it almost makes sense: we forgot how to manufacture because that got sent to a different nation. The coding thing isn’t getting sent anywhere, so humanity is forgetting how to code. The distinction undermines a lot of the emotional baggage about offshoring that the article wants you to bring along.

• allending 4 hours ago

Sort of a taste receptor I’m sure many have developed now.

The most obvious patterns here are: antithesis constructions, words choices and distribution, attempt at profundity in every paragraph but instead are runs of text that doing say anything, and even the perfect use of compound hyphenation. I think and can appreciate that there is definitely an attempt at personalization and guidance to make it less LLM-y and not just a default prompt, but it’s still kind of obvious. You could use a detector tool too of course.

• lkm0 2 hours ago

The blog post reads nothing like Hemingway. Here's a classic example: https://anthology.lib.virginia.edu/work/Hemingway/hemingway-...

Hemingway writes simple sentences with a kind of detachment to make the emotional flow of his stories as transparent as possible.

LLM slop reads more like slide bullet points extrapolated to prose-length text

• lelanthran 3 hours ago

Blog posts aren't typically written like Hemingway.

Find some pre 2020 that are, and you'd have a point.

• zero0529 4 hours ago

Every day Peter Naur’s paper programming as theory building gets more relevant

Link: https://gwern.net/doc/cs/algorithm/1985-naur.pdf

• RossBencina 5 hours ago

Excellent post. Two stand-out points are deskilling through abolition of apprenticeship (or equivalent progression through the rank and responsibility), and loss of institutional knowledge, especially tacit knowledge stored in individual people. These are people problems more than they are technology problems. Without continuity of process and practice stuff gets lost. Sometimes change really is progress, for example software safety and security practices have progressed over the past 50 years, but other times change is just churn, or choices driven by misaligned incentives which will bite later, as the article describes.

• RangerScience 5 hours ago

What comes to mind is how the cure for scurvy was simply… forgotten, causing it to come back.

• neuderrek 4 hours ago

I remember same complaints about junior engineers copy pasting snippets of code from StackOverflow without understanding. And without curiosity to understand, without code review and mentorship from senior engineers they never grew to the senior level. But that is only some of them, others used StackOverflow to learn, did not use the snippets without understanding them first and properly adapting to their context, and they got good coaching in their teams and now have reached senior level from there. I see the same dynamic with LLMs, just more opportunities for both juniors to learn more by following up, and for seniors to to create tooling to enforce better architectur, test coverage and fault resiliency.

• isodev 4 hours ago

I think you're missing the point. Nobody removed people thanks to their SO copy-paste skills. If anything, more folks were hired to troubleshoot and sort out any copy pasta blunders (since you actually need working software, at the end of the day).

With LLMs this is no longer true - the thing can vibe a great deal before anyone notices that they have 100.000 lines of code doing what a focused, human reviewed and tested 10.000 lines can do. And as this goes on, it becomes increasingly more difficult for anyone to actually dig into and fix things in the 100.000 without the help of LLMs (thus adding even more slop on the pile).

• cladopa 5 hours ago

People are not perfect. I went to Ukraine just days before the invasion. Travel and Hotels in Kiev had become extremely cheap. You asked the Ukrainians about the possible invasion. "Not going to happen" everybody said."Russia talks always aggressively, but never does anything".

They did not properly prepare and as a result lost 20% of its territory in days.

Days after that I was back is Austria and could not stop thinking about some of the people I spoke with being dead.

Since that I have also been in Dubai and Saudi Arabia as an entrepreneur and engineer. "What are you going to do when drones are used against your infrastructure?" If you followed the Russian war and first Iranian strike it was obvious that drones were going to be used against them. "not going to happen" again.

The have lost tens of billions for lacking proper preparation. They could have been protected spending just hundreds of millions of dollars over years.

It is about humans, not AI.

• wiseowise 4 hours ago

> They did not properly prepare and as a result lost 20% of its territory in days.

Ukraine has been preparing since 2014. Without preparation there would be a Russian talking head right now in Kyiv.

• the-smug-one 4 hours ago

I'd say that Ukraine were very prepared for the invasion, though? They managed to survive for the first 2 weeks, leading to a long-term war. The Donbas war had already been going on for 8 years, and I don't think Ukrainians were under some illusion that those weren't Russians.

• blitzar 4 hours ago

On the flip side, all around the world you have "leaders" talking about imaginary conflicts with foreign countries that we must spend billions (they have a friend who really should get the contract) to prepare for and if the other side (tm) gets in your whole family will be killed instantly.

• fifilura 4 hours ago

Killing of families is what happened in Ukraine in the Russia controlled territories.

• teiferer 4 hours ago

In hindsight, it's easy to be smart. You picked two examples where somebody said "never gonna happen" and then it happened. How about the countless examples where somebody said the same and then the thing actually didn't happen?

Take millions playing the lottery. To each of them, I can confidently say "you won't win, not gonna happen". For almost all of them I'll be right. There will be one who wins, were I was wrong, and they will say "see, told you so". That doesn't mean my prediction was wrong. It means you are having a reporting bias.

• hnfong 4 hours ago

GP also probably had a sampling bias. The ones who were actually concerned about the impending Russian invasion presumably fled out of the country (or at least, away from the major cities to rural areas that probably see less fighting)

• _heimdall 39 minutes ago

I was in a neighboring country in Europe at the time, not Ukraine, but we didn't see any Ukrainians move into our area until a few weeks after the war started.

That's not to say the country wasn't prepared though. If the GP did talk to people on the ground days before it started, saying it won't happen would match the public propaganda at the time coming out of the Ukrainian government and their allies. They knew it was coming and seemed to decide they were better to faint like the weren't ready and avoid public panic before it started.

• sofixa 4 hours ago

> They did not properly prepare and as a result lost 20% of its territory in days.

They did though. While nobody actually believed Putin would be dumb enough, the Ukrainian army was still, just in case, extremely busy on preparing defences, organising stockpiles, preparing defensive tactics.

• _heimdall 37 minutes ago

> While nobody actually believed Putin would be dumb enough

I'm not sure why you'd say nobody thought they would invade. To me it was clear in December the year before when the Russian navy began sailing the long way around Europe, getting in the way of Irish fisherman and confirmed days before the invasion when they had stockpiled medical personnel and blood on the front lines.

• lotsofpulp 4 minutes ago

It was clear when they captured Crimea.

• vasco 4 hours ago

> Since that I have also been in Dubai and Saudi Arabia as an entrepreneur and engineer.

Why would we listen to anything related to right or wrong from you then if you don't care?

• rbbydotdev 2 hours ago

Needn’t worry, such incompetencies are rooted out by the 8th or 9th round of interviews.

• alansaber an hour ago

A key pain point addressed by Cluely or some such

• fauigerzigerk an hour ago

The defense analogy makes absolutely no sense. All the examples are of production shutdowns or reductions. Knowledge was lost because people retired and not replaced at all. None of it was lost to automation.

Automation is the exact opposite of tying knowledge to people. It's extracting knowledge from people and transferring it to a machine that can continue to produce the goods.

Yes, AI can lead to problems and some of these problems will be related to gaps in knowledge that was thought to be obsolete when it really wasn't. But that's a totally different problem on a totally different scale from what happened with defense production after the end of the cold war.

Nobody is shutting down or reducing software production. On the contrary, we're going to be making a lot more of it.

• throwaway2037 2 hours ago

Click/rage bait?

The opening paragraph is ridiculous. The FIM-92 Stinger is obsolete. It was replaced by FGM-148 Javelin. DACH (Germany, Austria, Switzerland) didn't forget how to make things. They are still world class for manufacturing. (Northern Italy is also economically part of that manufacturing mega-hub.)

There are plenty of NLAWs (much cheaper than Javelin, and only slightly less capable) in EU/Nato stocks to satisfy Ukraine needs against Russian heavily armed main battle tanks. For everything else, you can use one or two suicide drones to kill anything with a motor.

And now to give credit where credit is due:

Looking at his (assumed) LinkedIn profile: https://www.linkedin.com/in/denjkestetskov/

It looks like he was educated in Ukraine, so likely a Ukrainan national. If I were a Ukrainan, then I too would be publishing rage bait like this in an attempt to pressure allies to provide more funding, weapons, and gear.

As a final suggestion, the writer can visually spice up his blog post with one of my all time favourite military photos from Wiki: https://commons.wikimedia.org/wiki/File%3AFIM-92_Stinger_USM...

• InkCanon 2 hours ago

The Stinger is an anti air weapon, the Javelin is an anti tank weapon.

• sounddetective 2 hours ago

So you published this comment with an anti-Ukrainian spin, and just 2 minutes after posting, your comment is already at the top of comment rankings? I hope HN mods follow inauthentic upvote / comment behaviour on this site; this looks fishy.

• SyneRyder 2 hours ago

New comments get posted to the top for visibility. The 2 minutes is the key point here. If the comment doesn't get enough upvotes it will sink down, like it has now about 30 minutes later.

• Tade0 4 hours ago

> The combination of technical skill and the judgment to know when the AI is wrong barely exists in the market anymore.

Well then train them, instead of selecting 0.18% of applicants and calling it a day.

It's not some innate, immutable property - people can be taught even in adulthood.

Also it's not like they'll work for a year and switch jobs - not in the current market.

• tjwebbnorfolk 6 hours ago

You could say COBOL has had this "problem" for 40 years also. That's why we need to constantly be inventing new ways of making things. The old ways are always forgotten over time.

If you REALLY need something long-forgotten, then you have lazy-load it back into being at significant cost. That's the price of constant progress.

• LeCompteSftware 5 hours ago

The point of the article is that sometimes the "old ways" really means "not particularly profitable or necessary in the short term" but the bill comes due in a crisis. The reason US/EU manufacturing was "the old ways" is that people could make easier money with financial engineering, an insight that extended all the way to Raytheon.

COBOL is a bad example, but higher-level languages vs. assembly is not. If you write a lot of C you really don't need to know assembly.... until you stumble across a weird gcc bug and have no clue where to look. If you write a lot of C# you don't really need to know anything about C... until your app is unusably slow because you were fuzzy on the whole stack / heap concept. Likewise with high-level SSGs and design frameworks when you don't know HTML/CSS fundamentals.

As the author says maybe AI is different. But with manufacturing we were absolutely confusing "comfortable development" with "progress." In Ukraine the bill came due, and the EU was not actually able to manufacture weapons on schedule. So people really should have read to the end of "building a C compiler with a team of Claudes":

  The resulting compiler has nearly reached the limits of Opus’s abilities. I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality.
At least with Opus 4.6, a human cannot give up "the old ways" and embrace agentic development. The bill comes due. https://www.anthropic.com/engineering/building-c-compiler
• anonzzzies 4 hours ago

But these are hard IT things a human programmer really struggles with as well. What % of software written is that? Very very low. Most software is dull and requires business vagueness to be translated into deterministic logic and interfaces; LLMs are pretty great at that as it is. If humans use their old ways to fix complex problems and llms do the rest, we still only need a handful of those humans. For now.

• LeCompteSftware 4 hours ago

"For now" is sort of the entire point of the article :)

Even in the Before Times, it was much cognitively cheaper to write code than it is to read someone else's code closely, or manage lots of independent code across a team, or to make a serious change to existing code. It's so much easier to just let everyone slap some slop on the pile and check off their user stories. I think it will take years to figure out exactly what the impact of LLMS on software is. But my hunch is that it'll do a lot of damage for incremental benefit.

With the sole exception of "LLMs are good at identifying C footguns," I have yet to see AI solve any real problems I've personally identified with the long-term development and maintenance of software. I only see them making things far worse in exchange for convenience. And I am not even slightly reassured by how often I've seen a GitHub project advertise thousands of test cases, then I read a sample of those test cases and 98% of them are either redundant or useless. Or the studies which suggest software engineers consistently overestimate the productivity benefits of AI, and psychologically are increasingly unable to handle manual programming. Or the chardet maintainer seemingly vibe-benchmarking his vibe-coded 7.0 rewrite when it was in reality a lot slower than the 6.0, and he's still digging through regression bugs. It feels like dozens of alarms are going off.

https://en.wikipedia.org/wiki/The_Mythical_Man-Month

• anonzzzies 4 hours ago

These are good point and I am not overestimating; we are simply seeing the productivity boost in our company and the rise in profitability. We practice TDD, but only at integration level, so we have tests upfront for api and frontend and the AI writes until it works. SOTA models are simply good enough not to do;

function add(a,b) = c // adds two numbers

test: add(1,2)=3

to implement

function add(a,b) return 3

So when you have enough tests (and we do), it will deliver quality. Having AI write the tests is mostly useless. But me writing the code is not necessarily better and certainly not faster for most cases our clients bring us.

• TeMPOraL 4 hours ago

The article makes no sense, and stars with a very wrong perspective on things.

This kind of forgetting is normal. It's how things work when time and resources are finite. The only problem here is the belief that you can keep capacity to do something without actively exercising it, and thus the expectation that you can "just" resume doing things after a long break, without paying up a cold-start cost.

But you can't, and there's no reason to be surprised. I bet the Pentagon and the EU weren't. They didn't need those Stingers and shells for decades, didn't expect to need them soon - but they knew they could get them if they really needed them, but it's gonna be costly.

I don't get why people think this is unusual or surprising, or somehow outrageous and proves something about society or "mindsets of elites" - other than positive aspects like adaptability and resilience.

This is true at all scales. Your body and brain optimizes aggressively, too. An individual saying "I need to warm up" or "I need to hit the gym a few times and then I'll be able", or "yes, I can, but I haven't done it for years so I need an hour with a book/documentation..." - all that is exactly the same as EU going "yes we can make artillery shells... though we haven't in a while so we need some time and some millions of EUR to get our supply chain sorted out first".

• 0xpgm 4 hours ago

> This kind of forgetting is normal

Just as shift in power and the rise and fall of nations is normal.

• Terr_ 4 hours ago

For that matter, a lot of human civilization has been about identifying things that were normal and making them rare. "Normal" infant mortality of 40%, famines, floods, history being lost, etc.

Anyway, when it comes to "this is normal" I think we should take care to distinguish between interpretations of:

1. "This specific case should not have taken certain people by surprise."

2. "This is a manifestation of a broader phenomenon."

3. "This is natural and therefore cannot or should not be solved." [Naturalistic fallacy.]

• gblargg 2 hours ago

My thought as well. Imagine the cost if we kept active every production line of every obscure thing we haven't needed in decades. It's unreasonable to think that we should still be able to make these easily. It would hamper development of new things.

• raincole 4 hours ago

First of all this is clearly AI-assist writing (being charitable here).

And the premise makes no sense anyway. The only risk of forgetting how to make shells is when other countries are making shells more efficiently. Non-western countries are not going to reject AI-coding, nor are they going to make software more efficiently by hand.

• 0xpgm 4 hours ago

Programmers in non-western countries may not be able to afford $100 per month on vibe coding.

They may keep taking the longer and harder route of a mixture of AI and hand coding.

• alansaber an hour ago

They'll find a way. If it's not the chiptole bot, the enormous volume of low-effort AI implementations will provide a free token layer.

• Liftyee 2 hours ago

I wonder if the real problem is short-term thinking in culture and incentivised by markets. By optimising next quarter's profits over investing in long-term growth and capability, things like this happen.

• ianberdin 2 hours ago

I don’t know. Partly true. I came to web development, when low level things solved: frameworks, ORMs, OSs, databases. I don’t know sql nor c++ well. But I can create a system, a value based on the abstractions. Everyone told me: Ruslan, you don’t know SQL, what a shame! Well I do not have problems and did not have about it.

Probably we are going to be fine with AI abstraction too. People will use it, stuck with problems, dig deeper, learn, improve, same as we had with frameworks and its source code.

• austin-cheney 26 minutes ago

The west hasn’t known how to write code for the 20 years I have been doing it, at least at major .com brands.

It’s a 85/15 rule. These big companies hire hundreds, possibly thousands, of developers but most of them cannot code. Some of them struggle to write emails. About 15% of those people provide 85% of the value.

Here is where it all went wrong. The goal of software, the only goal, is automation. That means eliminating human labor. The goal of these big companies is hiring, which is mostly the opposite of eliminating labor. That conflict results in people who cannot do the jobs they are hired to perform and whose goals are to retain employment in preference to automating anything.

Worse still is that you can’t talk about if 85% of the people doing that work find this very subject completely hostile.

It is difficult to get a man to understand something, when his salary depends on his not understanding it. Upton Sinclair.

• heinternets 5 hours ago

When you've run out of ideas just portray "the west" as some monolithic portrait in some decline-porn fan fiction as clickbait.

• simonask 2 hours ago

I'm so tired of this, it's such a lazy take. "The West" is a giant, incongruous collection of wildly disparate nations and cultures with wildly different circumstances, policies, histories, and cultures.

It feels a lot like someone has a cursory understanding of American politics, and thinks the US is somehow representative. It's not, it is an outlier by every statistical measure. If you want to understand the world, you need to start by forgetting everything you know about the US.

• eolgun 3 hours ago

The Fogbank example is the most chilling part. It's not just that they lost the people — they lost the ability to know what they didn't know. Nobody could even write down what was missing because the knowledge was never formalized in the first place.

The junior hiring collapse compounds this. Senior engineers develop judgment partly by watching juniors make mistakes and correcting them. Remove that loop and you don't just lose future seniors — you quietly degrade the current ones.

The 0.18% recruiting conversion rate mentioned here tracks with what I see in compliance and security engineering too. "Can you tell when the AI is confidently wrong?" is now the most important interview question, and almost nobody can answer it well.

• muragekibicho 3 hours ago

The junior hiring collapse is all so bizarre. I graduated recently and my career prospects are jarringly limited.

I thought I'd go back for a Masters/PhD but then Trump mercurially defunded lots of STEM grad programs. Ngl, I found myself stuck. Zero job openings, zero PhD program openings. It's all so frustrating.

• pabs3 5 hours ago
• agentbc9000 2 hours ago

Chinese models run around $2 to $8 dollars per million tokens - Claude is 10x that cost - when will the bean counters move to chinese models, USA bans these models for national security reasons - Anthropic, Openai, Meta, X all move to China where the models will be cheaper

• netfortius 4 hours ago

This is why a comprehensive computer science degree is necessary. Seeing and working only with the trees leads to destroying some forests, eventually.

• chvid an hour ago

I have forgotten all about Apache Wicket - will this cause the downfall of western civilization?

• bit1993 5 hours ago

Yes. Just like globalization created companies like TSMC, AI will do the same. Software engineers who don't rely on LLM code generators will have a moat because they can do it cheaply and sustainably.

Another reason is that LLMs train on the existing code we already know, don't expect new programming languages or frameworks this means that the software engineering skills that exist today will be relevant for a long time.

• zelphirkalt 4 hours ago

I am not so much convinced by your last point, that point of new languages and frameworks. I think the cutoff date is closing in on our current now. If models cannot easily become bigger, they will likely advertise using "up-to-date-ness". Maybe they will be merely a few days behind. Or bigger models will make use of smaller but more up-to-date models.

I think engineering skills will still remain relevant due to taste and proper judgement. A model trained on everything and the kitchen sink has probably not the fitting bias for given specific problems in my project. Accepting too much AI generated code without steering the ship will result in some drift of taste and ultimately make some mediocre project like done by people without good domain knowledge and without good taste. It might even be short term a business, but it lacks the long term excellence, that sets projects with good judgement apart from the common rabble.

• bit1993 3 hours ago

> I think the cutoff date is closing in on our current now. If models cannot easily become bigger, they will likely advertise using "up-to-date-ness". Maybe they will be merely a few days behind. Or bigger models will make use of smaller but more up-to-date models

But they will still rely on assembly, C, Rust, Linux, HTML, TCP/IP... Doesn't matter how up to date they are, they rely on existing code they have been trained on, they can't just create new languages without the training data.

• SillyUsername an hour ago

Space programmes have this issue too - everything had to be relearnt and un-obsoleted for Artemis Moonshots

• Scroll_Swe 5 hours ago

"the west" ?

You mean the world?

Deepseek was being glazed here, Im sure chinese programmers use it like CC

• Terr_ 3 hours ago

To be charitable to TFA, there are a dearth of accurate and well-understood labels for the kind of X versus Y they want to make between national economies.

Even "First/Third world" has been fraying at the edges for decades since it was originally about political alignment.

• mahrain 3 hours ago

I don't agree with "the west forgot how to make things", it moved supply chains for cheap consumer goods to asia, but in the B2B space a lot of things are manufactured in Europe: companies like Bosch, Volkswagen, ASML, Alstom and Airbus are cranking out extremely complicated machines that last many years in demanding environments. It's just a different level of value-add vs. low cost electronics (for instance).

• joker99 2 hours ago

I remember Covid and the supply chain crisis that unfolded in Europe and the west. Most of the companies you’ve mentioned weren’t cranking out anything during that time as all of them realised that "low cost electronics" are not always readily available and that we forgot how to make them or don’t have the capacity to produce them in significant numbers anymore ourselves. A lot of basic electronic components were not available during that time and we still haven’t fully grasped the complexities of our supply chains and where they begin.

I also remember, that EE for a while stopped using the term "jellybean parts". Turns out that most jellybeans are produced in Asia.

• user2722 3 hours ago

If the system treats you as a number, you should become a mercenary.

I love this articles who all the coders read but none of the management.

If possible, be a mercenary and put a high number on your expertise, so we can solve this management blind spot faster.

If you can't, let your life/work's passion be "not starving to death", and try to change it politics-side.

• alecco 5 hours ago

Speak for yourself. I now dare to code much harder problems and learning is bliss. No more having to sit down to dig needle-in-haystack through horrible documentation or random Stack Overflow posts.

LLMs are a magnificent tool if you use them correctly. They enable deep work like nothing before.

The problem is the education system focused on passivity (obeyance), memorization, and standardized testing. And worst of all, aiming for the lowest common denominator. So most people are mentally lazy and go for the easy win, almost cheating. You get school and interview cheating and vivecoders.

But it's not the only way to use LLMs.

Similarly, in Wikipedia you can spend hours reading banal pop-slop content or instead spend that time reading amazing articles about history, literature, arts, and science.

• rglullis 3 hours ago

> Speak for yourself.

Even if you are the absolute unicorn who gets paid to "code much harder problems" and "learning", the rest of the industry exists to deliver actual products and services.

So unless you nurture some type of https://xkcd.com/208/ fantasy, this is not just about you. The industry as a whole needs to find a way to work with LLMs without automating programming away entirely, and the industry as a whole needs to find a way to ensure that newcomers are able to be productive even if code-generation tools are taken away from them.

• eszed 2 hours ago

> in Wikipedia you can spend hours reading banal pop-slop content or instead spend that time reading amazing articles about history, literature, arts, and science.

I'm not saying you're personally doing anything wrong, but there's a parallel here, when smart and curious people read articles about history and literature and art and science, rather than engaging directly with the real thing.

Or then the next level down, where creating amazing work in all of those domains depends on enough "slack" in the system for people to pursue deep work that will not be immediately profitable.

Do you see where I'm going with that? We (and I'm very much including myself: here I am on HN, instead of reading something more substantial) skim the (Wikipedia) surface, instead of diving truly deep. AIs (right now) are the ultimate surface-skimmers, and our fascination with and growing reliance on them reflects something in our current surface-skimming cultural mindset.

• alecco 2 hours ago

I meant it as a simple to understand parallel. Absolutely deep reading and thought is much better than Wikipedia or an LLM chat.

• pHequals7 an hour ago

thankfully with code and coding agents - the tacit/tribal knowledge always lives via the codebase itself unlike atoms based manufacturing processes..

• efitz 5 hours ago

I disagree with the premise - interesting but I interpret the same fact pattern differently.

The history of technology is the replacement of manual processes with automated ones.

Consider a very basic process: checkout of a restaurant.

Writing the price of each item on a sheet of paper, manually adding them and writing the total was replaced with typing in the prices and eventually with just pushing the button for the item. Paper still exists for jotting down your order but within seconds of leaving the table it’s transitioned to computer.

This has enabled lots of desirable advances- speed, accuracy, new payment rails, and increasingly, elimination of the server in checkout- you tap a credit card on a tabletop device.

Did we “forget” how to do checkout? No. We purposely changed it.

But if the internet connection goes down or the backend server powering the cash register app goes down, there is an atrophied and not-regularly exercised skill set (maybe not even trained, IDK) that has to be implemented on-the-fly and it’s slow and frustrating for everyone.

Businesses don’t exercise (or perhaps even train) this process because it’s just not needed enough to warrant the cost.

Military procurement of weapons systems is hardly the place to point to as a technological tradition. There are lots of cases where no one pays the money to keep a production process in place; the reasons are all related to shortsighted “cost savings” or failing to anticipate changing needs.

With coding today, we are seeing the same kind of shift in priorities as my restaurant example. Having humans write code in the 2020 (pre-GPT) tradition was extremely inefficient in terms of time-from-idea-to-implementation.

We’ve found a new way to do the mundane part of that task (the mechanics of translating spec to implementation).

We are figuring out how to do that while preserving quality (and a lot of it is learning how to specify appropriately).

Will we “forget” how to “build” code?

No, but the skills to generate source code by hand will atrophy just as the skills to draw blueprints by hand atrophied with the advent of CAD.

Will we find examples where someone prematurely optimized away knowledge of a skill or process, incorrectly thinking it was no longer needed? Of course.

But the productivity gains we get will be so great on average that no one will go back to doing things the old way.

There will be old-timers and hobbyists who will preserve some of that knowledge; for most it will just be a curiosity.

• drawfloat 5 hours ago

Everyone is taught at a young age how to do basic addition and multiplication. That's all check out requires. People are not taught at a young age how Rust lifetimes work or how to write human maintainable code.

I agree, as with everything in 2026, the reality lands somewhere in the middle of the discourse online. But pretending this is in practice anything like the check out example is wrong.

• rglullis 4 hours ago

The point you seem to be missing is that focusing only on optimization makes us all fragile to system shocks.

> Businesses don’t exercise (or perhaps even train) this process because it’s just not needed enough to warrant the cost.

Until a crisis hits. Covid and supply chain failures. Iran war and straight of Hormuz. Prolonged War in Europe with no production pipeline available. Banks collapsing after unsustainable overleveraging in supposedly "safe" mortgages.

For every optimization and cost-saving measure that is deployed, there should be a backup plan in place. MBA types and "technologists" keep missing this. What is the backup plan for the case where most of the economy activity is built on software produced by business who overleveraged on LLM for code generation?

• latexr 4 hours ago

Though I do believe you are making them in good faith, I find those comparisons do not hold.

CAD still requires you know what to do, and without CAD you can still draw blueprints by hand because you know what the result should be. Checkout is basic arithmetic you can do on a paper or even your personal phone. In both cases it is clear what the process is and what the output should be, and it doesn’t replace knowledge and training and certification.

With coding, none of that is true. By and large, there is a trend of people who don’t know what they’re doing shitting out software, or people who should know better not verifying the very flawed output they get. That is already having negative consequences in people’s lives.

• bsder 5 hours ago

> Optimized for minimum cost with zero margin for surge. On paper, efficient. In practice, one bad day away from collapse.

I'm going to steal that one and add it to Stross': "Efficiency is the reciprocal of resilience."

• californical 5 hours ago

Yes that is one key that resonated with me. The author did a great job of putting these recurring concepts into their own words

The other that really resonated was something that I read before along the lines of… we think that once humanity learns something, that knowledge stays and we build on it. But it’s not true, knowledge is lost all the time. We need to actively work to keep knowledge alive

That’s why libraries and the internet archive are so important. Wikipedia, too

• khalic 2 hours ago

We’ve been automating every single industry that we touched for decades without a single word, bringing up tepid responses like “it’s capitalism” or “business is business” when called out on it.

But now that the time has comes for us to automate and change, we’re all up in arms and using ridiculous arguments like this post to fight it.

The hypocrisy is mind blowing

• alansaber an hour ago

I hope we'll still be able to sell our beautiful artisan chrome extensions in second hand flea markets in the future

• skybrian 5 hours ago

There was a time when companies had terrible development practices and could forget how to build, test, and deploy software, but is anyone seeing that now? We have much better development practices nowadays.

It doesn’t seem much like defense industry problems.

• disgruntledphd2 5 hours ago

This still happens. Lots of my career has been figuring out what code is actually running in prod, and determining if it even works.

• IronyMan100 2 hours ago

IMHO, it's a people Thing. People developed better practices, talked about IT in conferences, maybe left the company. AS a result the knowlegde spread. On the other Hand, If the places where a skilled individual can work and honey their skills, the knowlegde become scarce, the knowlegde cannot spread anymore and it will vanish. If you only program with AI and 5 people do the Work of 100, then you end Up in such a scenario.

• imrozim 5 hours ago

How do you become a senior engineer if no one hires you as a junior anymore.

• hkt 4 hours ago

Talk confidently in your interview with non-technical managers when the last senior has left and there's nobody there to check your work.

• blitzar 4 hours ago

So the same as it is now, be a good salesperson.

• Meirambek_VIDI 6 hours ago

Do you think this is a tooling problem or more about incentives and how engineers are trained now?

• great_psy 5 hours ago

I think the article is making the point that it is a cultural problem about cost cutting and short term thinking.

• Meirambek_VIDI 4 hours ago

Yeah, agreed - short-term incentives seem to drive a lot of this. Do you think tools can help, or is it mostly cultural?

• wg0 5 hours ago

>The combination of technical skill and the judgment to know when the AI is wrong barely exists in the market anymore.

I see a talent pipeline collapse in next 5 years. "Software engineering is over coding is a solved problem" as being chanted by semi literate media and the AI grifter's marketing departments would further scare away the allocation of human capital to software engineering easily commanding 3x rise in salaries due to resource shortage.

• wiseowise 4 hours ago

First it was “learn to code” and bazillion videos of TikTok schmucks showing off slacking at work, now everything is solved. The puzzle is complete.

• AHTERIX5000 4 hours ago

Is this written by a real person though?

• lioeters 2 hours ago

We're forgetting how to write too, apparently, and with that, forgetting how to think for ourselves.

• roenxi 5 hours ago

> Leadership qualities. Our last hiring round tells you how rare that is: 2,253 candidates, 2,069 disqualified, 4 hired. A 0.18% conversion rate.

It's minor but this is just wrong. If you're going to hire 4 candidates, there could be 2,253 perfectly qualified candidates even if only 0.18% get hired. The conversion rate is meaningless; it just tells us how many jobs were on offer. There is no way that the skills this fellow wanted were so rare and difficult that only 1/500 candidates could possibly handle the job. Humans even in the 1/20 mark are pretty competent if you're willing to train them and legitimate geniuses crop up at around 1/200.

• rotis 4 hours ago

He writes 2,253 candidates and 2,069 were disqualified. 184 were qualified, so 1 in 12 was considered competent.

• roenxi 3 hours ago

Then he quotes 0.18% to show how rare a quality is, which is a wrong interpretation of the numbers. If he'd said 8% that would be realistic.

• anovikov an hour ago

Talking of military stuff: it's not a problem really. No one can keep the non-needed capacity in existence, it's not even possible if no one consumes the product. Make sufficient buffer stocks to have time to re-learn the process when needed, and that's it. There is no realistic way it could work any different, otherwise it's like: maintain entire Cold War era production capacity and keep it idle or working at 5% workload, just to be able to ramp up when needed? But it means keeping almost all of the Cold War budgets still flowing. Wasn't going to happen - and of course, in Russia it also didn't happen, and couldn't.

In the end of the day, Russia burnt through their entire Soviet stocks in roughly 2-2.5 years, while US spent a very small proportion of theirs and Europe, maybe about half. And now consumption on both sides is similar with expenses on the Western side to feed that machine being almost invisibly small. Nothing bad happened.

• dsign 5 hours ago

This is some convoluted BS built on the premise that wars need to make sense, economically or otherwise. No, wars do not need to make sense. If a person, a dictator or a president, unilaterally starts a war that forfeits the lives of both the dictator's (possibly fabricated) enemies and its own people, that person is knowingly committing murder. Logically, such a person should be handled with at least as much prejudice as a lone wolf that opens fire on a crowd. So we need to fix our legal systems to be better at preventing wars, not our economic systems to be better at fighting them.

• zwischenzug 2 hours ago

It's a great story, and a nicely written piece.

But civilisations have always forgotten things and then had to re-engineer them. We only recently recreated Roman-equivalent concrete; knowledge required to create the Saturn V rockets had to be re-engineered; we can't recreate medieval stained glass exactly, or Viking Ulfberht Swords; we would struggle to create Betamax tape today.

Many of the examples I found (as expected) relate to military or commercially sensitive technology that did not get written down (for obvious reasons).

It also reminded me when I read Thomas Thwaites' "The Toaster Project: Or a Heroic Attempt to Build a Simple Electric Appliance from Scratch", where to make a smelter from scratch he relied on a 450 year old book ("De re metallica" by Georgius Agricola), as well as a friendly Metallurgist.

We already lost the widespread ability to write assembler in an artisinal way. Now we have AI we will also be lazy about how we write individual bits of artisinal code. So what? Yes it will cost more (in time and money) when we need to re-engineer, but how much would it cost to keep alive all the knowledge and skills we might possibly need in the future?

We had better make sure we write down and preserve the recorded data though :)

• muragekibicho 3 hours ago

Odd anecdote. I completed high school in 2017 and my home country demanded us use mathematical tables, not calculators, to find logs and sines for our version of SAT math.

I got my highest-paying numerical programming contract (in the US) just because I knew (from high school math table experience) how to use LUTs to calculate a lot of useful stuff i.e quarter squares.

Modernization is great and all. However, it's disappointing to know lots of new programmers are oblivious of the fundamentals.

• rvz 5 hours ago

This will end with the way of COBOL with a few people that still have the expert-level understanding of refactoring old code without causing outages or service disruption.

We’ll see, but right now I now see developers 24/7 hooked onto their agents and in the future we will experience a de-skilling problem which clean code, best practices, security and avoiding NIH syndrome will be all flushed down the toilet.

• croes an hour ago

Just look how MS try to get rid of

https://news.ycombinator.com/item?id=47881805

• clutter55561 2 hours ago

The same “forgetting pattern” can be said of assembly, hardware, combustion cars, radio, heck, even making fire.

There will always be specialists who can really debug stuff. Mechanics, etc. Time moves on, and we need to move with it.

I’m amazed at this “end-of-world” crap. People use AI to write this shit, to make it even crazier.

• arjunthazhath 5 hours ago

Hope we dont forget humanity one day!

• sorenjan 2 hours ago

Frankly, I find the attitude towards AI coding here on HN to be both disappointing and a bit disgusting. Not long ago places like this where software developers gathered were full of various texts about how important it was to be able to reason about your code, how tech debt crept into your projects, and how skillful you had to be to write good software, various smart algorithmic tricks to squeeze more performance out of your hardware, etc.

Now? Seems like code quality is outdated and uninteresting all of a sudden. Everything is about agentic coding, harnesses, paying hundreds of dollars to Anthropic to let their LLM do the coding for you or perhaps using a 128 GB Mac to run a local model. Do you know your code base? Doesn't matter, if there are any bugs in the future Claude will fix them! Tokenmaxxing is the new paradigm, who cares about the end result as long as it's runs for now and passes all (AI written) tests!

But don't suggest these people shouldn't get $100k+ salaries, after all, they still "software engineers" in their minds, they're running the agent orchestration harness in the terminal after all, not everyone anywhere in the world could do that! They're special and deserve to be well compensated for their hard vibe coding work!

This industry is rotting from the inside.

• jongjong 2 hours ago

This is why I advocate for making everything as simple as possible. The more complex the tech, the more likely it will be lost through the passing of time.

It's kind of insane how much knowledge a human being needs to have to build certain technologies and it's taken for granted.

AI might make the knowledge easier to acquire but it's still a lot of knowledge that people have to internalize.

• wewxjfq 5 hours ago

While the Fogbank story is a funny anecdote, I don't see it as a fitting example for atrophied skills. It's like writing a clean implementation of some software and it just doesn't match the legacy version until you realize that the legacy version had an unnoticed bug that made it behave the way it does.

• trhway 5 hours ago

Isn't that is the point of technological civilization development? People for example forgot how to weave on the handloom, or all the parts production and the maintenance for the watermills. And wooden sailships - top mastery of handling and engineering developed for millennia, gone.

As it was said - the future is here, it just distributed non-uniformly, so somebody is still and will be for some time sailing, manufacturing things and writing code.

• ktallett 5 hours ago

We have both forgotten how to make things and also decided we can make more profit letting someone else make everything for every market. We have moved to a generation fixated on maximizing profit. However there is logic there as the cost to access the ability to make things is prohibitively expensive. As someone who makes open hardware with a nod to the environment and reusability, you can not justify or even find more locally sourced options than China.

Coding is different though, coding doesn't have a cost barrier, it has a ability barrier. I think we will loose a lot of people who never were passionate about programming and perhaps go back to a happy equilibrium. AI is only production ready if you have someone who understands software development. AI will improve speed to market if you have the right team, it doesn't remove the need for some to learn to code. You will of course end up with startups using exclusively AI but they will be those who end up with major security breaches or simply cannot scale as the AI goes in the wrong direction for the future. Tbh that's probably a positive as it weeds out the start ups that are focused on buzzwords for funding and not product.

• xantronix 4 hours ago

No matter what happens to the viability of software development as a career, I will always care about the craft as I have done the past twenty years and change. The imperatives to adopt LLMs in situations where they do not benefit me nor my work is what is driving me away. I have to agree with latexr; the people who seem to benefit the most from the current moment are those who see software as a means to an end without much concern for quality, longevity, nor customer experience.

Why is speed-to-market such an important metric? I do not understand the need to mimic the largest players in the industry, nor do I see any particularly profound long term benefits to first mover advantage.

• latexr 5 hours ago

> I think we will loose a lot of people who never were passionate about programming

Anecdotally, what I’m seeing right now is the opposite. People who don’t care about programming are joining, while those who do care are getting tired of the bullshit and leaving. The good programmers are the ones leaving, the hacks are extremely happy to use LLMs.

When shit hits the fan, there won’t be many people left to clean it.

• trick-or-treat 4 hours ago

So you see people who don't care about programming, joining and getting comfortable with vscode and claude code and devops?

Because it seems to me like there's a lot of coding-adjacent things they still need to be able to do even if they never look at a line of code.

• latexr 4 hours ago

Those examples are nonsensical. None of those are necessary to get working code. The VSCode example is particularly baffling. Firstly, I’m sure you understand there are other editors people use for code; secondly, I know even people who don’t code who have picked up VSCode for text editing and are fine with it.

• trick-or-treat an hour ago

I think you haven't dealt with a lot of non-coders. 90% of the world will not be able to even open a .py

• latexr an hour ago

> I think you haven't dealt with a lot of non-coders.

And I think you should avoid making assumptions about people you know nothing about. That is so far from the truth it’s not even funny.

> 90% of the world will not be able to even open a .py

Which is nowhere near my argument. I’d appreciate if you engaged with what I said or not at all.

• crabbone 3 hours ago

The West started to forget how to code a long time before AI. At first, it was the work visas to bring programmers in, then it was outsourcing. At this point, I'm not even sure if AI is doing more harm than good in this department as it might be able to bring some jobs back to the "West", if it turns out that it's cheaper than outsourcing.

The outsourcing was shedding more of the trivial jobs, while trying to keep key positions at home, but increasingly, it also started to lose the key positions too. It's possible that AI can make it so that the key positions will be harder to justify to outsource... but, who knows... maybe not.

• ekianjo 4 hours ago

> The defense industry thought peace would last forever, too.

Not really since they are always pushing for more wars.

• locallost 5 hours ago

I can't not write the tired comment of how ridiculous it is to criticize AI and then use AI to write your article. It's tired, but so is this writing style.

For the actual problem, I fear this can't be solved by warning people, the pain will need to be felt. The system we live in, basically free market capitalism, cannot do anything else except local optimization. Maybe it's for the best, I don't know. The alternative of top down planning wouldn't have this problem, but it would have other problems. I work for a mid size somewhat luxury brand, and the major goal right now is cost cutting and AI for efficiency everywhere instead of using it to create better products or better ways to reach out customers. When I think about who will buy our luxury products if all jobs were optimized out of existence, I don't have an answer, but again I think the pain will need to be felt to change course.

• BrenBarn 5 hours ago

> After spending an additional $69 million and years of reverse engineering, they finally produced viable Fogbank. Then discovered the new batch was too pure. The original had contained an unintentional impurity that was critical to its function.

Same thing that happened to the unfortunate Dr. Jekyll!

• throw4523ds 4 hours ago

exactly, as they say everyone has to learn to code.

• immanuwell 5 hours ago

when you offshore or automate away the hands-on knowledge, you don't just lose the workers, you lose the entire institutional memory, and no amount of money can buy that back overnight

• light_hue_1 4 hours ago

> The West Forgot How to Make Things. Now It's Forgetting How to Code

Can we stop repeating this nonsense headline please? We did not stop manufacturing things.

Manufacturing is a huge industry in the West. https://en.wikipedia.org/wiki/Manufacturing_in_the_United_St...

The US manufacturing sector is the biggest it has ever been. Exports are at all time record highs. The only thing that declined about manufacturing is the jobs. We build way more than we ever did but with far fewer people.

What we did do is decide that basic items aren't worth it. Our capacity is limited, our labor pool is limited, expenses are high, it doesn't make sense to make trinkets when we can make complex high precision parts and devices.

But no, we did not forget how to make things. We chose to use our capacity in a smarter way.

• _the_inflator 26 minutes ago

That's why I am looking forward to be a 70 year old demanding tons of money for doing the things I came to love and was cut off by AI.

What a bright future!

But the rest is a big no from my side.

"In hindsight" - Southpark, please take over.

What if there was a continuation of producing unused weapons during the last 20 years? "Waste of money", "Old tech", "useless" - dilemma.

Also the generalization is awfully misleading: "The west".

Let's say all are suffering from military dementia the same way. Who do think has an easier time to recover? USA or Europe? Europe relied and relies or freeloads on USA in especially military affairs.

As you wrote: some veterans teach building, handling cruse missiles to young guns like having an exciting time with the boy scouts.

Germany? "Never again! Demilitarize Germany." Decades long hatred towards USA was pretty much summed up with the slur "Ami go home!" which was a phrase used to protest US military bases in Germany - and then, when most of them finally left, it was all just fun and games (losers).

So USA has some sort of infrastructure and intellectual property to recover and never stopped treasure it as part of the country's history: Veterans' Days, Unknown Soldier, Arlington - Hegseth did a great job stopping the decline here.

Meanwhile Europe: You couldn't have a hold out in secrecy. Some enquete commission would investigate, and addresses would be leaked and people doxed.

Have a look at the representatives of the Germany Army: overweight nice guys. Sorry to say, but I think there is something wrong with this picture.

Europe has nothing to restart. They never had in the first place. Many tend to forget that the US provided massive supply to all allies during WW2. Russia would have been wiped out if it wasn't for the US logistics and money. After the war there was a joke told by survivors of the Eastern front: The first Sherman got shot on the Eastern front not the West.

Europe was always on life support. France military forces outnumbered Germany at the start of WW2. But they were tired and instead of fighting build a wall so to say. Netherlands and Denmark was taken without any resistance.

And it is the same for programming. How many European companies dominate globally like FAANG? Exactly. None. 30 years of Internet and it is getting lonely at the top for the US.

"The West"? Nope.

During the 80th, while Chucky Cheese was all the rage, in Germany you got massively socially ostracised for showing your interest in computers. Playing electronic handhelds put you up on notice by teachers, demanding correction by the school administration - true stories.

Another one: What do all FAANG like companies have in common? The founders and top managers have a background in CS. What do European managers have in common? They haven't heard of CS so far.

Europe is a mess. US is maybe having a cold start but gets its shit done.

Germany killed of its industrial sector. Energy producers as well. Germany does what Morgentau had in mind but what off the table: no more wars and weapons, just farmers and horses.

USA is save in every regard. It is not that something has been lost. This happens or why do we don't know anything about Rome?

You have to distinguish recovering from losing. Once you were at the top, at least you know how to get there while others in most cases will never get there.

These are different abilities: conserving knowledge and rebuilding it. USA needs to reactivate, while Europe needs to build from the ground up without any starting point - without money, energy, moral support, nothing.

USA is already the winner here. And this pattern keeps repeating. 250 years and what we have is an epoch were USA saw kingdoms rise and fall, USA is the only constant there is.

Treasure it. You are in a save spot despite all the dire circumstances. A blessing in disguise.

• aboardRat4 3 hours ago

>Denis Stetskov

Putin's propagandist, or just useful idiot.

• shevy-java 5 hours ago

> I run engineering teams in Ukraine. My people lived the other side of this equation. Not the factory floor. The receiving end.

With all due respect, but many european taxpayers help pay for Ukraine. I am not disagreeing on the premise of the West killing itself via systematic recessions - Trump invading Iran leading to inflation as an example - so a lot of things are going on that show a ton of incompetency both in the USA and the EU, but at the same time I also get question marks in my eyes when this criticism comes from a country that receives money from others. That money could instead go to make EU countries more competitive, for instance. I am not saying this should necessarily be the case, mind you; I fully understand the nature of Putin's imperialism. But we need to really consider all factors when it comes to strategic mistakes with regards to production - and that includes taking up debts all the time. There are always a few who benefit in war, just as they benefit from subsidies from taxpayers (inside and outside as well).

• skhr0680 5 hours ago

Ukraine is "receiving money from others"? We are benefactors of the Ukrainians' bravery and sacrifices. How much money could we have not spent if Hitler had been stopped in Czechoslovakia?

• gib444 5 hours ago

> Ukraine is "receiving money from others"?

Yes. https://www.eeas.europa.eu/delegations/united-states-america...

• latexr 5 hours ago

You are completely ignoring the argument of your parent comment. They are saying that money is being spent to the benefit and best interest of the spenders, that it’s not a handout.

You are, of course, free to disagree and make your point, but ignoring the argument does not advance the discussion.

• crotobloste 5 hours ago

> Ukraine is "receiving money from others"?

Factually correct.

> We are benefactors of the Ukrainians' bravery and sacrifices.

Who's we?

> How much money could we have not spent if Hitler had been stopped in Czechoslovakia?

Very different situation, in all aspects.

• collinfunk 5 hours ago

You see zero similarities between Hitler invading Poland and Putin invading Ukraine?

• brabel an hour ago

As similar as any country who ever invaded any other country?!

• roenxi 5 hours ago

There are some pretty substantial differences. Russia is on the strategic back foot here trying to figure out a way to stop NATO's advance. They've only turned to violence after long attempts at resolving the tension diplomatically and the US has been implacable. Putin's actually been pretty hesitant in his escalations so far; he's 70 and has a long history of trying to avoid war.

Hitler was more about wanting more land and resources for Germany, and he saw war as being a legitimate tool for achieving his aims that he deployed early and enthusiastically.

• defrost 4 hours ago

NATO has advanced into which part of Russia?

• roenxi 4 hours ago
• rcxdude an hour ago

Eastern Europe is not Russia and Russia does not automatically get a say in what Eastern Europe does because they are nearby. Russia seems to believe it is entitled to a sphere of influence .

• defrost 3 hours ago

So, NATO hasn't advanced into Russia then?

Just Russia advancing into the Ukraine (after promising not to if the USSR nukes were given to Russia)?

Gotcha.

• brabel an hour ago

I know it’s what about ism but I really hope you apply the same logic when Cuba once more tried to enter an alliance with Russia or China to defend itself against a larger aggressor next door. So while I agree that Russia should allow Ukraine and Georgia to join NATO, I also think that’s only fair if countries like Brazil, Cuba and Venezuela are freely allowed to determine their futures by joining Russia, China and Iran military alliances. But you and I know that’s not going to happen. So please let’s stop pretending we don’t have double standards.

• defrost 35 minutes ago

As you've chosen to address me directly I'll reply honestly, I have zero concern about Cuba, Venezuela, any of the 190+ countries on the planet, wanting to join or form BRICs.

I have considerably more concern about the ability of a post MAGA USofA to successfully navigate such a world via soft power as they appear to have flushed all the competent diplomatic talent down a golden toilet.

• collinfunk 4 hours ago

> There are some pretty substantial differences. Russia is on the strategic back foot here trying to figure out a way to stop NATO's advance.

His rationale for invading Ukraine was to "demilitarise and denazify" it. The NATO point seems largely be invented by people who dislike NATO in the west.

> They've only turned to violence after long attempts at resolving the tension diplomatically and the US has been implacable.

I hope the "tension" you are referring to was not the little green men taking over Crimea and the Donbas in 2014.

> Putin's actually been pretty hesitant in his escalations so far; he's 70 and has a long history of trying to avoid war.

This is a totally unseriousness statement. Can you remind me what Putin was doing in Syria again?

• roenxi 4 hours ago

There's an english transcript [0] of his speech from when they went in up on the Kremin website. He opened with something like

> I will begin with what I said in my address on February 21, 2022. I spoke about our biggest concerns and worries, and about the fundamental threats which irresponsible Western politicians created for Russia consistently, rudely and unceremoniously from year to year. I am referring to the eastward expansion of NATO, which is moving its military infrastructure ever closer to the Russian border.

They're claiming the NATO thing is relevant. Opening paragraph justification.

[0] http://en.kremlin.ru/events/president/transcripts/67843

• wiseowise 4 hours ago

> Russia is on the strategic back foot here trying to figure out a way to stop NATO's advance. They've only turned to violence after long attempts at resolving the tension diplomatically and the US has been implacable. Putin's actually been pretty hesitant in his escalations so far; he's 70 and has a long history of trying to avoid war.

Is that why Russians rejected negotiations when Ukraine offered to never join NATO and Russians insist on keeping invaded territories?

• dev_l1x_be 4 hours ago

As an anecdotal evidence I code way more now with agents because i have an entity who has vast amount of knowledge about pretty much everything and I have the creativity to use that well.

• bit1993 4 hours ago

But you already knew how to code before LLM coding agents, juniors will jump straight into using agents without learning to code by hand, hence the premise of the article.

• lava_pidgeon 5 hours ago

Rather bad premise in the article. 1.) Germany, Italy and Eastern Europe are very industrial regions. The author forgets defence is not only the industry. 2.) The author doesn't show any source that Chinese developers don't use AI

• whatever1 5 hours ago

I don’t know, but the evidence shows that software engineering is not that deep of an art.

People come and go at rates that would not be sustainable in any manufacturing business.

• wg0 an hour ago

Interesting take. We are not going to talk about Office, Windows, Adobe or Autodesk products here. Neither Linux kernel.

Just classified ads or e-commerce platforms such as gumroad and shopify are complex enough that a single person cannot master them end to end. The domain is huge to master and takes a lots of time to master.

• heisenbit 4 hours ago

Yes, businesses tend to believe that.

No, every time people switch knowledge gets lost and code quality degrades.

In part I blame accounting rules justifying investments is easier than maintenance.