> Long running work: an agent doing a 10 minute task isn’t a ‘request’, it’s a long-running async process.
Correct, but we solved this a long time ago when we started sending files to servers to be converted, for example. We either got a 'job_id' back or a webhook call when the job finished.
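The job_id pattern the parent describes can be sketched roughly like this (a minimal in-memory toy; a real system would back the job store with a database or queue and fire a webhook instead of polling — all names here are made up for illustration):

```python
import threading
import time
import uuid

# In-memory job store; a real system would use a database or queue.
jobs = {}

def submit_job(payload):
    """Accept work, return a job_id immediately, run the work async."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "pending", "result": None}

    def run():
        jobs[job_id]["status"] = "running"
        time.sleep(0.1)  # stand-in for a long conversion/agent task
        jobs[job_id].update(status="done", result=f"converted:{payload}")

    threading.Thread(target=run, daemon=True).start()
    return job_id

def poll_job(job_id):
    """Client polls (or a webhook fires) until the job reports done."""
    return jobs[job_id]

job_id = submit_job("report.docx")
while poll_job(job_id)["status"] != "done":
    time.sleep(0.05)
print(poll_job(job_id)["result"])  # → converted:report.docx
```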
Article doesn't make sense. Some of the "horizontally scaled" servers have their own state: a local cache, a temporary filesystem, etc.
Also, has the author never heard of long-running queued jobs? Or long-running scheduled jobs? They ultimately report back into the DB (updating their status, etc.).
This article reeks of someone using AI to make huge leaps of logic. The "single source of truth" rule has survived this long for a reason. It works!
Claude Code runs as a nearly stateless server, using session JSONL files as a conversation database, sending stateless API requests to Anthropic, etc.
This post doesn’t seem to understand how these systems work at the core of agent harnesses.
To me this makes no sense. Nothing in web development changes because of long-running requests; there are plenty of solutions for this. The easiest one is to just keep the HTTP request open long enough for the answer. The routing problem can be mitigated with session pinning. HTTP/2 and HTTP/3 have solutions for streaming data, websockets can be used, and so can pub/sub. Heck, we could push the LLM response into a KV store/Redis and read it from there. "State is in the DB" is running strong and will be for decades to come.
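The "push the response into a KV store and read it from there" idea can be sketched like this (the dict is a stand-in for Redis — with real Redis you'd use RPUSH/LRANGE or a Stream the same way; all names are hypothetical):

```python
# Stand-in for Redis: a dict of lists keyed by session id.
store = {}

def publish_token(session_id, token):
    """Server appends each streamed LLM token to the session's log."""
    store.setdefault(session_id, []).append(token)

def read_from(session_id, offset):
    """Client reads everything after `offset`; a reconnecting client
    just passes the last offset it saw, so disconnects are harmless."""
    tokens = store.get(session_id, [])
    return tokens[offset:], len(tokens)

# Server streams a response; client reads, "disconnects", resumes.
for tok in ["The", " answer", " is", " 42"]:
    publish_token("sess-1", tok)

chunk, offset = read_from("sess-1", 0)   # first read
publish_token("sess-1", ".")             # more arrives while client is away
rest, _ = read_from("sess-1", offset)    # resume from last offset
print("".join(chunk + rest))             # → The answer is 42.
```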
Durable is used 13 times in this article.
It feels like virtual actors are the primitive the author is reaching for. As an erstwhile Elixir hobbyist I've often found myself wishing for the simplicity of actors when solving problems in my day job. I tend to work in an AWS environment, but I believe over in Azure they have something like it. It was called Orleans when I read about it, but I think it's got a more corporate name now.
It’s still Orleans! https://learn.microsoft.com/en-us/dotnet/orleans/overview?pi...
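For readers unfamiliar with the model: a virtual actor (an Orleans "grain") is addressed by identity and activated lazily on first message, with its state living inside the actor. A toy single-process Python sketch of that idea — no distribution, no persistence, all names hypothetical:

```python
class Actor:
    """A toy grain: state keyed by identity, touched only via messages."""
    def __init__(self, actor_id):
        self.actor_id = actor_id
        self.count = 0

    def handle(self, message):
        self.count += 1
        return f"{self.actor_id} handled {message!r} ({self.count} total)"

class ActorSystem:
    """Virtual-actor runtime sketch: callers address actors by id;
    the runtime activates an instance on first message, like Orleans."""
    def __init__(self):
        self._actors = {}

    def send(self, actor_id, message):
        actor = self._actors.setdefault(actor_id, Actor(actor_id))
        return actor.handle(message)

system = ActorSystem()
system.send("session-42", "hello")
print(system.send("session-42", "world"))  # → session-42 handled 'world' (2 total)
```

The appeal for agent sessions is that "which server owns this conversation" stops being your problem: you send to the session's id and the runtime routes or activates as needed.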
> LLMs just make this problem more visible.
This theme keeps popping up everywhere. Lots of things were "the way we did things" for a lot of reasons. LLMs just amplify some things and give them enhanced visibility. It can be a good thing if you're able to understand what/why/how changed, or it can be a bad thing if you insist that "this is how we do things, because this is how we've always done things".
> or it can be a bad thing if you insist that "this is how we do things, because this is how we've always done things"
Or... maybe... just maybe... it can be a bad thing, because it's a bad thing.
Many things can be wrong, for many reasons. The problem is when people think LLMs make it wrong, instead of understanding that LLMs just expose the thing for what it was. It's like shooting the messenger just because the messenger is an LLM. That was my point, in case I worded it badly.
Or maybe it's a bad thing, because right now the model is "throw it against the wall and see what sticks or how many billions we need to make it stick"?
> right now the model is "throw it against the wall and see what sticks
When was it not? We've been doing this for decades. Something usually sticks.
Using Cloudflare's Durable Objects https://developers.cloudflare.com/durable-objects/concepts/w... for this and it works pretty well.
This article is clearly written by someone who’s never done any work on actually complex web applications. Nothing here is a new problem nor unsolved. The pattern identified as being “LLM specific” (long-running async jobs) is not particularly unusual.
If I'm reading it correctly, the TL;DR of the article is: given the client and the server, we need to be able to ingest messages into the client-server communication channel, and this channel should survive a disconnection. The article suggests using named pub/sub channels for communication, so that the “connection” between a given client and a given (cloud) server has a name and it is possible to ingest data chunks into that named channel.
I would suggest that there is a much, much older technology than pub/sub that can be used for this kind of data transfer: UDP, documented in 1980.
I can't stop thinking about how overcomplicated our software engineering reality is, such that we need to reinvent layers and layers of stuff on top of other stuff. We must make applications for browsers; browsers disallow basic network communication for the code they execute; so sending a chunk of data from a client to a server becomes a real adventure.
UDP and nothing layered on top?
Then you'll be reimplementing host discovery (i.e. how do clients find the host that has context on their request), retransmissions, flow control, congestion control, and many other things on top of it, and suddenly it doesn't sound so simple anymore.
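The host-discovery piece alone is non-trivial. One common solution (used by sticky-session load balancers and distributed caches alike) is consistent hashing, so any client or router can compute which host owns a session without a lookup service. A self-contained sketch, with hypothetical host names:

```python
import hashlib
from bisect import bisect

class ConsistentHashRing:
    """Toy consistent-hash ring: maps session ids to hosts so the
    same session always lands on the same host, and adding/removing
    a host only remaps a fraction of sessions."""
    def __init__(self, hosts, vnodes=100):
        # Place each host at many virtual points for even spread.
        self._ring = sorted(
            (self._hash(f"{h}#{i}"), h) for h in hosts for i in range(vnodes)
        )
        self._keys = [k for k, _ in self._ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def host_for(self, session_id):
        # First ring point at or after the session's hash, wrapping around.
        idx = bisect(self._keys, self._hash(session_id)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["host-a", "host-b", "host-c"])
owner = ring.host_for("session-42")
# The same session id always maps to the same host.
assert owner == ring.host_for("session-42")
```

This is exactly the kind of machinery TCP-plus-a-load-balancer already hides from you, which is the point: raw UDP hands it all back.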
The premise is incorrect and ignorant of the history - this is sticky sessions, and the idea has been around for more than 20 years.
The "cloud native" (as the author refers to it) idea that app servers should be stateless is actually the new idea.
The industry eventually reached consensus on sticky sessions being a bad idea a lot of the time. That's why stateless app servers became the norm.
Yes, if you treat LLMs like deterministic computation you'll get fucked, news at eleven. In terms of apps, "shitty but uncannily useful search" seems like a better fit.