Going to have to disagree on the backup test. The Opus flamingo is actually on the pedals and seat, with functional spokes and a beak. In terms of adherence to physical reality, Qwen is completely off. To me it's a little puzzling that someone would prefer the Qwen output.
I'd say the example actually does (vaguely) suggest that Qwen might be overfitting to the Pelican.
Qwen's flamingo is artistically far more interesting. It's a one-eyed flamingo with sunglasses and a bow tie who smokes pot. Meanwhile Opus just made a boring, somewhat dorky flamingo. Even the ground and sky are more interesting in Qwen's version.
But in terms of making something physically plausible, Opus certainly got a lot closer.
Given adherence is a more significant practical barrier, it's probably the better signal. That is, if we decide to look for signal here.
Such a disconnect from my experience: today I lost minutes to Gemini, and eventually gave up, trying to get it to update a diagram in a slide. The one-shot joke stuff is great, but trying to say "that is close, but just make this small change" seems impossible. It's the gap between toy and tool.
I understand the 'fun factor', but at this point I really wonder what this pelican still proves. Providers certainly could have adapted for it if they wanted, and if you want to test how well a model adapts to potentially out-of-distribution contexts, it might be more worthwhile to mix different animals with different activity types (a whale on a skateboard) than to always use the same one; see the sketch below.
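For example (a toy sketch; the animal and activity lists here are just made up), you could randomize the pairings so every model gets the same batch of out-of-distribution prompts:

    import random

    # Toy sketch: rotate the prompt instead of reusing the same
    # pelican-on-a-bicycle every release. Lists are illustrative only.
    ANIMALS = ["whale", "iguana", "hedgehog", "octopus", "moose"]
    ACTIVITIES = ["on a skateboard", "riding a unicycle", "rowing a canoe",
                  "juggling torches", "playing a trombone"]

    def ood_prompt(rng):
        # Draw one animal and one activity to form a novel combination.
        return f"Generate an SVG of a {rng.choice(ANIMALS)} {rng.choice(ACTIVITIES)}"

    rng = random.Random(42)  # fixed seed: every model sees the same prompts
    for _ in range(5):
        print(ood_prompt(rng))

Fixing the seed matters: you want each model graded on the identical set of novel combinations, not on whichever pairing it happened to draw.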
That's why I did the flamingo on a unicycle.
For a delightful moment this morning I thought I might have finally caught a model provider cheating by training for the pelican, but the flamingo convinced me that wasn't the case.
r/LocalLlama is now doing a horse in a racing car:
It is completely wild to me that you prefer Qwen's flamingo. I think it's really bad and Opus' is pretty good.
The Opus one doesn't even have a bowtie.
The Opus one looks like a flamingo, and looks like it's riding the unicycle. Sitting on the seat. Feet on the pedals.
The Qwen one looks like a three-tailed, broken-winged, beakless (I guess? Is that offset white thing a beak? Or is it chewing on a pelican feather like it's a piece of straw?) monstrosity that isn't sitting on the seat, has one foot off the pedal (the other chopped off at the knee), and rides a badly manufactured wheel with bonus spokes longer than the wheel itself.
But yeah, it does have a bowtie and sunglasses that you didn't ask for! Plus it says "<3 Flamingo on a Unicycle <3", which perhaps resolves all ambiguity.
To me the Opus flamingo is waaaay better than the Qwen one. Qwen has the better pelican, though.
Is a flamingo on a unicycle not merely a special case of a pelican on a bicycle?
For coding, Qwen 3.6 35B-A3B solved 11/98 of the Power Ranking tasks (best-of-two), compared to 10/98 for the same-size Qwen 3.5. So it's at best very slightly improved, and not at all in the class of the dense Qwen 3.5 27B (26 solved), let alone Opus 4.6 (95/98 solved).
You're comparing a tiny model for local inference against a proprietary, expensive frontier model. It would be fairer to compare against a similarly priced model, or against tiny frontier models like Haiku, Flash, or GPT nano.
Not when the article they're commenting on was doing literally exactly the same thing.
Eh, it's an important perspective, lest someone start thinking they can drop $5k on a laptop and be free of Anthropic/OpenAI. Expensive lesson.
I'm an iguana and need to wash my bicycle in the carwash. Shall I walk or take the bus?
That’s a long walk! You should reserve a ride with $PartnerRideshareCo.
That's not surprising; in our testing, Opus and Sonnet have been regressing on many non-coding tasks since around the 4.1 release.
That Qwen flamingo on the unicycle is actually quite good. A work of art.
FYI, using a 128GB M5 MacBook Pro, sourced from another article by the author.
I've been using Qwen3.5-35B-A3B for a bit via OpenCode and oMLX on an M5 Max with 128GB of RAM, and I have to say it's impressively good for a model of that size. I've seen a huge jump in the quality of the tool calls and how well it handles the agentic workflow.
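If anyone wants to try a similar local setup without the agentic layer, here's a minimal sketch using the mlx-lm Python API (the checkpoint name below is a placeholder, not the exact repo I'm running):

    # Minimal local-inference sketch with mlx-lm on Apple silicon.
    # The model repo is a placeholder -- substitute whichever
    # MLX-converted checkpoint you actually have downloaded.
    from mlx_lm import load, generate

    model, tokenizer = load("mlx-community/SOME-QWEN-CHECKPOINT-4bit")

    prompt = "Generate an SVG of a flamingo riding a unicycle"
    print(generate(model, tokenizer, prompt=prompt, max_tokens=2048))

In my setup, OpenCode handles the agentic tool-calling loop on top of the served model rather than this raw API.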
This is about the newly released Qwen3.6. Just wanted to make sure you caught that.
I'm really curious about what competes with Claude Code to drive a local LLM like Qwen 3.6?
OpenCode?
I'm currently testing Qwen3.6-35B-A3B with https://swival.dev for security reviews.
It's pretty good at finding bugs, but not so good at writing patches to fix them.
I literally cannot believe that people are wasting their time doing this either as a benchmark or for fun. After every single language model release, no less.
It feels like the results stopped being interesting a little while ago, but the practice has become part of simonw's brand. It gives him something to post even when there is nothing interesting to say about another incremental model improvement, so I don't imagine he'll stop.
How about switching to MechaStalin on a tricycle? It gets kind of boring.
boring ... the ways all the models fail at a simple task never get boring to me