Anyone who's seen an enterprise deal close or dealt with enterprise customer requests will know this: the build-vs-buy calculus has always been there, yet companies still buy. Until you can get AI to the point where it's equivalent to a 20-person engineering team, people are not going to build their own Snowflake, Salesforce, Slack, or ATS. Maybe that day is 3 years away, but when it comes the world will be very different.
Companies make make/buy decisions on everything, not just software. Cleaning services are not expensive, yet companies contract them out instead of hiring staff.
This is called transaction cost economics, if anyone’s interested.
Even a16z is walking this back now. I wrote about why the “vibe code everything” thesis doesn’t hold up in two recent pieces:
(1) https://philippdubach.com/posts/the-saaspocalypse-paradox/
(2) https://philippdubach.com/posts/the-impossible-backhand/
Acharya's framing is different from mine (he's talking his book on software stocks) but the conclusion is the same: the "innovation bazooka" pointed at rebuilding payroll is a bad allocation of resources. Benedict Evans called me out on LinkedIn for this take (https://philippdubach.com/posts/is-ai-really-eating-the-worl...), which I take as a sign the argument is landing.
> Benedict Evans called me out on LinkedIn for this take, which I take as a sign the argument is landing.
Excellent. And correct lol.
A16Z's opinion is worthless to me; they know very little about the market. Furthermore, they're notorious for having a lot of "partners".
Depends on the partner; Peter Levine is a pretty damn good picker (supported us from Series A to IPO). https://en.wikipedia.org/wiki/Peter_J._Levine
Their whole game is just pump and dump
Has everyone forgotten when they pumped absurd crypto scams like NFTs?
Pretty worthless take, posting an ad hominem attack instead of addressing the actual content of the article/statement.
People are overestimating the value of having AI create something from loose instructions, and underestimating the value of using AI as a tool for a human to learn and explore a problem space. The bias shows in the terminology ("agents").
We finally made the computer able to speak "our" language, but we still see computers as just automation. There's a lot of untapped potential in the other direction, in encoding and compressing knowledge IMO.
The problem space is rich. The thing doesn't actually know what a problem is.
The thing is incredibly good at searching through large spaces of information.
42
> AI create something
To have AI recreate something that was already in its training set.
> in encoding and compressing knowledge IMO.
I'd rather have the knowledge encoded in a way that doesn't generate hallucinations.
You can't easily vibe-code everything. In my startup, this is what I am not buying (and vibe-coding instead):
- JIRA/Trello/monday.com
- Benchling
- Obsidian
This is what I buy and have no intent to replace:
- Carta
- DocuSign
- Gusto/Rippling
This is what might be on the chopping block:
- Gmail
- Google Calendar
Why not Docusign? Not challenging, just curious why that is specifically on your list. Reputation?
The possibility that anyone can easily replicate any startup scares A16Z.
The incompetent have always pantomimed the competent. It never works. Although the incompetent will always pay a huge amount to try to achieve this fantasy.
This is what always confused me about VC AI enthusiasm. Their moat is the capital. As AI improves, it destroys their moat. And yet, they are stoked to invest in it, the architects of their own demise.
There’s no alternative, they can’t collectively freeze out all AI investment and force it to die.
Don't you have that backwards? If AI gets so good that it can replace all human labor, will capital like money and data centers be the only moat left?
> If AI gets so good that it can replace all human labor, will capital like money and data centers be the only moat left?
If AI gets good enough to replace all human labor then actual physical moats to keep the hungry, rioting replaced humans away will be the most important moats.
How powerful is the device you wrote this comment from? On-prem or self-hosted affordable inference is inevitable.
I dunno.
I really hate the expression "the new normal", because it sort of smuggles in the assumption that there exists such a thing as "normal". It always felt like one of those truisms people say to exploit emotions, like "in these trying times" or "no one wants to work anymore".
But I really do think that vibe coding is the "new normal". These tools are already extremely useful, to the point where I don't think we'll be able to go back, and they're getting good enough that you increasingly have to use them. This might sound like I'm supportive of this, and I guess I am to some extent, but I find it exceedingly disappointing, because writing software isn't fun anymore.
One of my most upvoted comments on HN talks about how I don't enjoy programming but instead enjoy problem solving. This was written before I was aware of the vibe coding stuff, and I think I was wrong. I guess I actually did enjoy the process of writing the code, rather than delegating my work to a virtual intern and watching the AI do the fun stuff.
A very small part of me is kind of hoping that once AI has to be priced at "not losing money on every call" levels that I'll be forced to actually think about this stuff again.
I largely agree with you. And, given your points about “not going back” — how do you propose interviewing SWEs?
I have thought about this a lot, and I have no idea. I work for an "AI-first" company, and we're kind of required to use AI stuff as often as we can, so I make very liberal use of Codex, but I've been shielded from the interview process thus far.
I think I would still ask roughly the same questions, though maybe a bit more conceptual. For example, I might see if I could get someone to explain how to build something, and then ask them about data structures that might be useful (e.g. removing a lock by making an append-only structure). I find that Codex will generally generate something that "works", but without an understanding of data structures and algorithms its implementation will still be somewhat sub-optimal, meaning that understanding the fundamentals has value, at least for now.
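To make the append-only example concrete, here's a toy sketch of the shape of answer I'd hope for (my own illustration, not anything a candidate would need verbatim): writers still serialize, but readers take no lock at all, because appended entries are never mutated, so any observed prefix is consistent.

    import threading

    class AppendOnlyLog:
        """Toy append-only log: writers serialize, readers take no lock."""
        def __init__(self):
            self._items = []
            self._write_lock = threading.Lock()

        def append(self, item):
            with self._write_lock:       # only writers coordinate
                self._items.append(item)

        def read_prefix(self):
            # No lock here: entries are never mutated or removed once
            # appended, so any snapshot of a prefix is consistent.
            n = len(self._items)
            return self._items[:n]

The follow-up question writes itself: why is the lock-free read safe here but not with an arbitrary mutable structure?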
All these articles seem to think people will vibe code by prompting:
make me my own Stripe
make me my own Salesforce
make me my own Shopify
It will be more like:
Look at how Lago, an open-source Stripe layer, works and make it work with Authorized.net directly
Look at Twenty, an open-source CRM, and make it work in our tech stack for our sales needs
Look at how Medusa, an open-source e-commerce platform, works and what features we would need and bring into our website
When doing the latter, getting a good-enough alternative will reduce the need for commercial SaaS. On top of that, these commercial SaaS products are bloated with features in their attempt to cover as many use cases as possible, and configuring them is "coding" by another name. Throw in enshittification, and the above seems to be the next logical move for companies looking to get off these apps.
The value in enterprise SaaS offerings isn't just the application functionality but the IaaS substrate underneath. The vendor handles server operations, storage, scalability, backups, security, compliance, etc. It might be easier for companies to vibe code their own custom applications now but LLMs don't help nearly as much with keeping those applications running. Most companies are terrible at technical operations. I predict we'll see a new wave of IaaS startups that sell to those enterprise vibe coders and undercut the legacy SaaS vendors.
I've been confronting this truth personally. For years I had a backlog of projects that I kept putting off because I didn't have the capacity. Now I have the capacity but not the know-how to sell it. It turns out that everything comes back to sales and building human relationships. Sort of a prerequisite to having operations.
Are the infrastructure tools available already not easy enough to build on? We have all these serverless options already.
Sensible people would do that (asking for just the features they need), but look at us, are we sensible?
Most of us* are working for places whose analytics software transitively asks the user for permission to be tracked by more "trusted" partners than the number of people in a typical high school, which transitively includes more bytes of code than the total size of DOOM including assets, with a performance hit so bad that it would be an improvement for everyone if the visitor remote desktop-ed into a VM running Win95 on the server.
And people were complaining about how wasteful software was when Win95 was new.
* Possibly an exaggeration, I don't know what business software is like; but websites and, in my experience at least, mobile apps do this.
The right move is this, turned to 11.
Velocity or one-shot capability isn't the move. It's making stuff that used to be traumatic just...normal now.
Google fucking vibe-coded their x86 -> ARM ISA changeover. It never would have been done without agents. Not like "google did it X% faster." Google would have let that sit forever because the labor economics of the problem were backwards.
That doesn't MATTER anymore. If you have some scratch, some halfway decent engineers, and a clear idea, you can build stuff that was previously infeasible or impossible. All it takes is time and care.
Some people have figured this out and are moving now.
I think something like an x86 -> ARM change is a perfect example of where LLM-assisted coding shines: lots of busywork (i.e. smaller tasks that don't require lots of context about the other existing tasks), nothing totally novel to do (they don't have to write another Borg or Spanner), easy to verify, and fundamentally "translation". LLMs are quite good at human-language translation; why should they be bad at translating from one inline assembly language to another?
Yeah. Lots of busywork where if you had to assign it to a human you would need to find someone with deep technical expertise plus inordinate, unflagging attention to detail. You couldn’t pass it off to a batch of summer interns. It would have needed to be done by an engineer with some real experience. And there is no way in the world you could hire enough to do it, for almost any money.
You've missed the subtlety here.
LLMs don't have attention to detail.
This project had extremely comprehensive, easily verifiable, tests.
So the LLM could be as sloppy as they usually are; it just had to keep redoing its work until the code actually worked.
Exactly, if the engineers know where to look for the solution in open-source code and point the AI there, it will get them there. Even if the language or the tech stack are different, AI is excellent at finding the seams, those spots where a feature connects to the underlying tech stack, and figuring out how the feature is really implemented, and bringing that over.
> Google would have let that sit forever because the labor economics of the problem were backwards.
This has been how all previous innovations that made software easier to make turned out.
People found more and more uses for software and that does seem to be playing out again.
I really don't think we're living in a "linearly interpolate from past behavior" kinda situation.
https://arxiv.org/abs/2510.14928
Just read some of that. It's not long. This IS NOT the past telescoping into the future. Some new shit is afoot.
So maybe the SaaS vendors will pivot to selling just barebones agents that contain their real IP? The rest (UI, dashboards, and connectivity) will be tailor-made by LLMs.
I highly doubt that, and it's in OP's article.
First, a vendor will have the best context on the inner workings and best practices of extending the current state of their software. The pressure on vendors to make this accessible and digestible to agents/LLMs will increase, though.
Secondly, if you have coded with LLM assistance (not vibe coding), you will have experienced the limited ability of one-shot stochastic approaches to build out well-architected solutions that go beyond the immediate functionality encapsulated in a prompt.
Thirdly, as the article mentions, opportunity cost will never make this a favorable trade, unless the SaaS vendor was extorting on price before. The direct cost in mental overhead and time for an internal team member to hand-hold an agent, write specs, debug, and firefight some LLM-assisted or vibe-coded solution will not outweigh the upside potential of expanding your core business, unless you're a stagnant enterprise product on life support.
Just because we can code something faster or cheaper doesn't increase the odds it will be right.
Arguably it does, because being able to experience something gives you much more insight into whether it's right or not - so being able to iterate quickly many times, continuously updating your spec and definition of done should help you get to the right solution. To be clear, there is still effort involved, but the effort becomes more about the critical evaluation rather than the how.
But that's not the only problem.
To illustrate, I'll share what I'm working on now. My company's ops guy vibe-coded a bunch of scripts to manage deployments. On the surface, they appear to do the correct thing. Except they don't. The tag for the Docker image used is hardcoded in a YAML file and doesn't get updated anywhere unless you do it manually. The docs don't even mention half of the necessary scripts/commands or the implicit setup required for any of it to work in the first place, much less the tags or how any of it actually works.
There are two completely different deployment strategies (direct-to-VM with Docker + GCP, and a GKE-based K8s deploy). Neither fully works, and only one has any documentation at all (and that documentation is completely vibed, so it has very low information density). The only reason I'm able to use this pile of garbage at all is that I already know how all of the independent pieces function and can piece it together, and that's after wasting several hours of "why the fuck aren't my changes having an effect." There are very, very few lines of code that don't matter in well-architected systems, but many that don't in vibed systems. We already have huge problems with overcomplicated crap made exclusively by humans, and that's been hard enough to manage.
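To make the Docker-tag failure concrete, the broken piece boils down to roughly this (names simplified, not our actual scripts):

    # deploy.py (simplified): reads the image tag from deploy.yaml, but
    # nothing anywhere ever updates that file, so every "deploy" quietly
    # ships whatever tag was hardcoded when the script was first vibed.
    import subprocess
    import yaml

    def deploy():
        with open("deploy.yaml") as f:
            cfg = yaml.safe_load(f)
        image = f"gcr.io/example/app:{cfg['image_tag']}"  # stale tag lives here
        subprocess.run(["docker", "pull", image], check=True)
        subprocess.run(["docker", "run", "-d", "--name", "app", image], check=True)

Nothing about it looks wrong on the surface; you only notice when your freshly built image never shows up.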
Vibe coding consistently gives the illusion of progress by fixing an immediate problem at the expense of piling on crap that obscures what's actually going on and often breaks existing functionality. It's frankly not sustainable.
That being said, I've gotten some utility out of vibe coding tools, but it mostly just saves me some mental effort of writing boring shit that isn't interesting, innovative, or enjoyable, which is like 20% of mental effort and 5% of my actual work. I'm not even going to get started on the context switching costs. It makes my ADHD feel happy but I'm confident I'm less productive because of the secondary effects.
If you’re able to articulate the issues this clearly, it would take like an hour to “vibe code” away all of these issues. That’s the actual superpower we all have now. If you know what good software looks like, you can rough something out so fast, then iterate and clean it up equally fast, and produce something great an order of magnitude faster than just a few months ago.
A few times a week I’m finding open source projects that either have a bunch of old issues and pull requests, or unfinished todos/roadmaps, and just blasting through all of that and leaving a PR for the maintainer while I use the fork. All tested, all clean best practice style code.
Don’t complain about the outputs of these tools, use the tools to produce good outputs.
The post you're replying to gets this right: lead time is everything. The faster you can iterate, the more likely that what you're doing is correct.
I’ve had a similar experience to what you’re describing. We are slower with AI… for now. Lean into it. Exploit the fact that you can now iterate much faster. Solve smaller problems. Solve them completely. Move on.
Iteration only matters when the feedback is used to improve.
Your model doesn't improve. It can't.
Your model can absolutely improve
How would that work, barring a complete retraining or human-in-the-loop evals?
The magic of test time inference is the harness can improve even if the model is static. Every task outcome informs the harness.
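Roughly, something like this (all names made up, not a real framework):

    class Harness:
        """Toy harness: the memory that improves while the model doesn't."""
        def __init__(self, max_retries=5):
            self.max_retries = max_retries
            self.lessons = []                 # persists across tasks

        def build_prompt(self, task):
            pitfalls = "\n".join(self.lessons)
            return f"{task}\n\nKnown pitfalls from past runs:\n{pitfalls}"

    def solve(task, generate, verify, harness):
        # generate: frozen model call (prompt -> code); its weights never change
        # verify:   external tests, returns (ok, failure_message)
        for _ in range(harness.max_retries):
            code = generate(harness.build_prompt(task))
            ok, msg = verify(code)
            if ok:
                return code
            harness.lessons.append(msg)       # the outcome informs the harness
        return None

The model is a frozen black box; the loop, the prompts, and the accumulated lessons around it are where the improvement lives.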
> The magic
Hilarious that you start with that, as TAO requires:
- Continuous adaptation makes it challenging to track performance changes and troubleshoot issues effectively.
- Advanced monitoring tools and sophisticated logging systems become essential to identify and address issues promptly.
- Adaptive models could inadvertently reinforce biases present in their initial training data or in ongoing feedback.
- Ethical oversight and regular audits are crucial to ensure fairness, transparency, and accountability.
Not much magic in there if it requires good old human oversight every step of the way, is there?
Vibecoding is a net wealth transfer from frightened people to unscrupulous people.
Machine assisted rigorous software engineering is an even bigger wealth transfer from unscrupulous people to passionate computer scientists.
Sadly, this is the most serious comment here. People who are not shocked are people who haven’t seen what a highly educated computer scientist can do in single player mode.
Sure they have:
https://news.ycombinator.com/item?id=47083506
https://news.ycombinator.com/item?id=47045406
I'll take all comers, any conceivable combination of unassisted engineers of arbitrary Carmack/God-level ability, no budgetary limits, and I'll bet my net worth down to starvation poverty that I will clobber them flat by myself. This is not because I'm such hot shit, it's a weird Venn that puts me on the early side on this, but there are others and there will be many more as people see the results.
So there are probably people who can beat me today, and that probability goes to one as Carmack-type people go full "press the advantage" mode on a long enough timeline; there are people who are strictly more talented and every bit as passionate, and the paradigm will saturate.
Which is why I spend all my time trying to scale it up, I'm working on how to teach other people how to do it, and solve the bottlenecks that emerge. That's a different paradigm that saturates in a different place, but it is likewise sigmoid-shaped.
That, and not single-player heroics, stunts basically, is the next thousand-year paradigm. And no current Valley power player even exists in that world. So the competition I have to worry about is very real, but not at all legible.
I don't know much about how this will play other than it's the fucking game at geopolitical levels, and the new boss will look nothing like the old boss.
Both AI Fanatics and AI Luddites need to touch grass.
We work in Software ENGINEERING. Engineering is all about which tools make sense to solve a specific problem. In some cases, AI tools show immediate business value (e.g. TTS for SDRs), and in other cases this is less obvious.
This is all the more reason why learning about AI/ML fundamentals is critical in the same way understanding computer architecture, systems programming, algorithms, and design principles are critical to being a SWE, because then you can make a data-driven judgment on whether an approach works or not.
Given the number of throwaway accounts that commented, it clearly struck a nerve.
The irony is, AI coding only works if and after you put a lot of work into engineering, like building a factory.
There is a lot of work that goes on before even reaching the point to write code.
For example, being able to vibecode a UI wireframe instead of being blocked for 2 sprints by your UI/UX team or templating an alpha to gauge customer interest in 1 week instead of 1 quarter is a massive operational improvement.
Of course these aren't completed products, but customers in most cases can accept such performance in the short-to-medium term or if it is part of an alpha.
This is why I keep repeating ad nauseam that most decision-makers don't expect AI to replace jobs. The reality is, professional software engineering is about translating business requirements into tangible products.
It's not the codebase that matters in most cases; it's the requirements and outcomes that do. You can refactor and prettify your codebase all you want, but if it isn't directly driving customer revenue or value, that time could be better spent elsewhere. Customers purchase your product for the use case it enables.
As a researcher in formal methods, I totally get you
>> Anish Acharya says it is not worth it to use AI-assisted coding for all business functions. AI should focus on core business development, not rebuilding enterprise software.
I don't even know what this means, but my take: we should stop listening to VCs (especially those like A16Z) who have an obvious vested interest that doesn't match the rest of society's. Granting these people an audience is totally unwarranted; nobody but other tech bros said "we will vibe code everything" in the first place. Best-case scenario: they all go to the same exclusive conference, get the branded conference technical vest, and that's where the asteroid hits.
I hear and read so much shit from VCs, both on LinkedIn and in private meetings. Especially Menlo says a lot of shit (check LinkedIn). Deloitte and McKinsey are also full of crap. Really.
VC portfolios are chock-full of companies that can be cloned overnight, SaaS companies that will face ridiculously fast substitution, and a whoooole lotta capital deployed on lousy RAGs and OpenAI wrappers.
Let's just look at Dijkstra's "On the Foolishness of 'Natural Language Programming'"[0]. It does a good job of explaining why natural language programming (and thus vibe coding) is a dead end. It serves as a good reminder that we developed the languages of math and programming for a reason. The pedantic nature is a feature, not a flaw: in programming (and math) we deal with high levels of abstraction constantly, so ambiguity compounds. Isn't this something we learn early on as programmers? That a computer does exactly what you tell it to, not what you intend to tell it? Think about how that phrase extends when we incorporate LLM coding agents.
| The virtue of formal texts is that their manipulations, in order to be legitimate, need to satisfy only a few simple rules; they are, when you come to think of it, an amazingly effective tool for ruling out all sorts of nonsense that, when we use our native tongues, are almost impossible to avoid.
- Dijkstra
All of you have experienced the ambiguity and annoyances of natural language. Have you ever:
- Had a boss give you confusing instructions?
- Argued with someone only to find you agree?
- Talked with someone and one of you doesn't actually understand the other?
- Talked with someone and the other person seems batshit insane but they also seem to have avoided a mental asylum?
- Use different words to describe the same thing?
- When standing next to someone and looking at the same thing?
- Adapted your message so you "talk to your audience"?
- Ever read/wrote something on the internet? (where "everyone" is the audience)
Congrats, you have experienced the frustrations and limitations of natural language. Natural language is incredibly powerful, and its ambiguity is both a feature and a flaw, just as in formal languages the precision is both a feature and a flaw. It can take an incredible amount of work to say even very simple and obvious things in a formal language[1], but the ambiguity disappears[2].
Vibe coding has its uses, and I'm sure they'll expand, but the idea of it replacing domain experts is outright laughable. You can't get it to resolve ambiguity if you aren't aware of the ambiguity. If you've ever argued with the LLM, take a step back and ask yourself: is there ambiguity? It'll help you resolve the problem and make you recognize the limits. Just look at the legal system: it is probably one of the most serious efforts to formalize natural language, and we still need lawyers and judges to sit around and argue all day about the ambiguity that remains.
I seriously can't comprehend how, on a site whose primary users are programmers, this is even an argument. If we somehow missed this in our education (formal or self-directed), how do we not intuit it from our everyday interactions?
[0] https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
[1] https://en.wikipedia.org/wiki/Principia_Mathematica
[2] Most programming languages are some hybrid variant. E.g. Python uses duck typing: if it looks like a float, operates like a float, and works as a float, then it is probably a float. Another example is C, which used to be called a "high-level programming language" (so is Python a celestial language?). You give up some precision/lack of ambiguity for ease.
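A quick, hypothetical illustration of [2]:

    from fractions import Fraction

    def halfway(a, b):
        # No type declared: anything supporting + and / "is" a number here.
        return (a + b) / 2

    halfway(1, 3)                            # ints     -> 2.0
    halfway(0.5, 2.5)                        # floats   -> 1.5
    halfway(Fraction(1, 3), Fraction(2, 3))  # Fraction -> Fraction(1, 2)

Ease over precision: the function never says what it accepts, which is exactly the trade.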
> Vibe Coding has its uses and I'm sure that'll expand, but the idea of it replacing domain experts is outright laughable.
I don't think that's the argument. The argument I'm seeing most is that most of us SWEs will become obsolete once the agentic tools become good enough to allow domain experts to fully iterate on solutions on their own.
> The argument I'm seeing most is that most of us SWEs will become obsolete once the agentic tools become good enough to allow domain experts to fully iterate on solutions on their own.
That's been the argument since the 5GL movement in the 80s. What we discovered is that domain expertise and the articulation of domain expertise into systems are two orthogonal skills that occasionally develop in the same person but, in general, require distinct specialization.
Yes, 4GL and 5GL failed, but authoring Access applications should be a breeze now.
> The argument I'm seeing most is that most of us SWEs will become obsolete
That is equivalent to "replacing domain experts", or at least that was my intent. But language is ambiguous, lol. I do think programmers are domain experts. There are also different kinds of domain experts, but I very much doubt we'll get rid of SWEs.
Though my big concern right now is that we'll get rid of juniors and maybe even mid-levels. There's definitely a push for that, and economic incentives behind it. But it will be disastrous for the tech industry if this happens. It kills the pipeline. There can be no wizards without noobs. So we have a real-life tragedy of the commons staring us in the face. I'm pretty sure we know what choices will be made, but I hope we can recognize that there's going to need to be cooperation to solve this, lest we all suffer.
Dijkstra also said no one should be debugging and yet here we are.
He's not wrong about the problems of natural language, YET HERE WE ARE. That would, I think, cause a sensible engineer to start poking at the predicate instead of announcing that the foregone conclusion is near.
We should take seriously the possibility that this isn't going to end in a retrenchment that bestows a nice little atta-boy sticker on all the folks who said "I told you so."
> Dijkstra also said no one should be debugging
Given how you're implying things, you're grossly misrepresenting what he said. You've either been misled or misread. He was advocating for the adoption and development of provably correct programming. Interestingly, I think his "gospel" is only more meaningful today.
| Apparently, many programmers derive the major part of their intellectual satisfaction and professional excitement from not quite understanding what they are doing. In this streamlined age, one of our most under-nourished psychological needs is the craving for Black Magic, and apparently the automatic computer can satisfy this need for the professional software engineers, who are secretly enthralled by the gigantic risks they take in their daring irresponsibility. They revel in the puzzles posed by the task of debugging. They defend —by appealing to all sorts of supposed Laws of Nature— the right of existence of their program bugs, because they are so attached to them: without the bugs, they feel, programming would no longer be what is used to be! (In the latter feeling I think —if I may say so— that they are quite correct.)
| A program can be regarded as an (abstract) mechanism embodying as such the design of all computations that can possibly be evoked by it. How do we convince ourselves that this design is correct, i.e. that all these computations will display the desired properties? A naive answer to this question is "Well, try them all.", but this answer is too naive, because even for a simple program on the fastest machine such an experiment is apt to take millions of years. So, exhaustive testing is absolutely out of the question.
| But as long as we regard the mechanism as a black box, testing is the only thing we can do. The unescapable conclusion is that we cannot afford to regard the mechanism as a black box
I think it's worth reading in full: https://www.cs.utexas.edu/~EWD/transcriptions/EWD02xx/EWD288...
>no one should be debugging
He literally said those exact words out loud from the audience during a job talk.
And yeah, the total aim and the reason why he might just blurt that out is because a lot of the frustration and esprit de corps of programming is held up in writing software that's more a guess about behavior than something provably correct. Perhaps we all ought to be writing provably correct software and never debugging as a result. We don't. But perhaps we ought to. We don't.
Is control via natural language a doomed effort? Perhaps, but I'd be cautious rather than confident about predicting that.
> He literally said those exact words out loud from the audience during a job talk.
Yes, I even provided the source... Unfortunately, despite being able to provide a summary, I'm unable to read it for you. You'll need to read the whole thing and interpret it. You have a big leg up with my summary, but being literate or not is up to you. As for me, I'm not going to argue with someone who chooses not to read.
I sincerely doubt you produced the source where he asked that question in the middle of someone else’s job talk.
Which is what I was referring to. I read what you wrote, pal. Did you read what I wrote?
a16z talking again?
This is your regular reminder that
1) a16z is one of the largest backers of LLMs
2) They named one of the two authors of the Fascist Manifesto their patron saint
3) AI systems are built to function in ways that degrade and are likely to destroy our crucial civic institutions (quoting Professor Woodrow Hartzog, "How AI Destroys Institutions"). Or to put it another way: being plausible but slightly wrong and un-auditable, at scale, is the killer feature of LLMs, and this combination of properties makes it an essentially fascist technology, meaning it is well suited to centralizing authority, eliminating checks on that authority, and advancing an anti-science agenda (quoting the "A plausible, scalable and slightly wrong black box: why large language models are a fascist technology that cannot be redeemed" post).
This wasn't a16z monolithically speaking as a firm, it was Anish Acharya talking on a podcast.
Seems like he's focused on fintech and not involved in many of their LLM investments
I will not claim to be an expert historian, but one general belief I hold is that nomenclature undergoes semantic migration over a century. So for the sake of conciseness I will quote the first demand of each section of the Fascist Manifesto. This isn't to obscure anything, since it is on Wikipedia[0] and translated into English on EN Wikipedia[1], but to share a sample so we can ask how it relates to our present-day political orientation. Hopefully it will inform what you believe "author of the Fascist Manifesto" implies:
> ...
> For this WE WANT:
> On the political problem:
> Universal suffrage by regional list voting, with proportional representation, voting and eligibility for women.
> ...
> On the social problem:
> WE WANT:
> The prompt enactment of a state law enshrining the legal eight-hour workday for all jobs.
> ...
> On the military issue:
> WE WANT:
> The establishment of a national militia with brief educational services and exclusively defensive duty.
> ...
> On the financial problem:
> WE WANT:
> A strong extraordinary tax on capital of a progressive nature, having the form of true PARTIAL EXPROPRIATION of all wealth.
> ...
0: https://it.wikipedia.org/wiki/Programma_di_San_Sepolcro#Test...
1: https://en.wikipedia.org/wiki/Fascist_Manifesto
I’m not particularly political and am also not a historian but I don’t think it’s necessarily correct to equate the literal text of the manifesto with the principles and practices of fascism.
The message of universal suffrage vs. that of preventing an out-group from "stealing" an election are not far apart semantically. Same with workers' rights: in practice, the worker-protection laws passed in Italy at this time were so full of loopholes and qualifications that the workers ultimately gained no power in that system.
It is thus fair, in my view, to question the spirit of the manifesto in the first place.
I suppose we should, in being intellectually consistent, take the appropriate position that 8 hours / day and a wealth tax are fascist principles.
Sounds like a16z has some rapidly depreciating software equity they want to sell you.
Or maybe they own the debt.
Listen to some of the Marc Andreessen interviews promoting cryptocurrency in 2021.
Do that and you will never listen to him or his associates again.
They don’t make money by being right, they make money by exposing LPs to risk. Zero commitment to insight. Intellectual production goes only so far as to attract funding.