In-browser PPO training demo, made possible by tinygrad: TinyJit -> WebGPU kernels.
Requires WebGPU.
This is seriously impressive. Running PPO training directly in the browser through WebGPU feels like a glimpse into where lightweight AI experimentation is headed.
Really cool! But right as it was nearing 4,000, it seems to have corrupted itself and no longer gets any scores above 0. Not sure if that's a code bug or a neural net issue.
  avg500    -4.6      last 500 episodes
  peak      3959.3    best window
  roll/s    20.68     20-step avg
  progress  4388      562749 episodes
Yes, it just collapses eventually; it never stabilizes. The training process is flawed. I suspect it has to do with the fact that some weights blow up over time; you can see it in the "weights" tab.
But at around 4K avg score you should see it solve the env almost every time.
Just a demo :) optimized for speed over stability.
Reward structure: Step -1, Dot +100, Win +1000, so ~4k is the max theoretical score on 6x6.
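A minimal sketch of that scheme in Python, using the values from the comment above (the function and all names are illustrative guesses, not the demo's code):

    # Reward scheme as described above: step -1, dot +100, win +1000.
    # Hypothetical sketch, not taken from the actual demo.
    STEP_PENALTY, DOT_REWARD, WIN_REWARD = -1, 100, 1000

    def reward(ate_dot: bool, won: bool) -> int:
        r = STEP_PENALTY           # every step costs a point
        if ate_dot:
            r += DOT_REWARD        # +100 per dot eaten
        if won:
            r += WIN_REWARD        # +1000 for filling the board
        return r

    # On 6x6: roughly 35 dots * 100 + 1000 for the win, minus a few
    # hundred step penalties, lands on the order of 4k.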
Maybe because it doesn't understand "done"? Perfect play is impossible; random variance will cause scores to drop even if the model plays well and "wins". Feels like it would get stuck in a loop trying to improve what can't be improved.
The optimizer doesn't need to understand anything; it's just an iterated mathematical construct. The author simply didn't bother to implement the details necessary to ensure numerical stability.
Alternatively it might be a problem with the scoring model in the end game.
> feels like it would get stuck in a loop trying to improve what can't be improved.
That is the point: there is no intention here to improve on; the goal is nothing more than one more iteration of the same path.
I think I noticed it reach the "end game." The snake gets to a point where, if it grows any longer, it runs out of squares and hits its own tail. So it finds a route through the squares that it can loop forever, never eats the ball, and the score starts dropping and goes negative.
Cool project!
I noticed that if you go from training to watch and then back, the training score temporarily drops significantly.
It seems to be something related to the moving average calculation, so it is just a glitch on the chart.
A previous similar idea, running as a Ratatui-based TUI: https://github.com/bones-ai/rust-snake-ai-ratatui
FYI, this website sets off a bunch of Bitdefender alerts as a suspicious web page. I assume they're probably false positives, but it's still something you might want to look into.
"The page https://ppo.gradexp.xyz/ has been detected with suspicious activity. It is not recommended to continue browsing this website."
Same for:
https://ppo.gradexp.xyz/version.js
https://ppo.gradexp.xyz/dist/sizes.js
https://ppo.gradexp.xyz/dist/size_6/manifest.j
https://www.virustotal.com/gui/url/1ee8e72b55c296ee92f38937d...
Bitdefender here shows clean
It's using WebGPU kernels; probably a false positive.
Mesmerizing - could be its own digital art showcase XD Love what you've done here, friend. Looking forward to what you do next. <3
Did a pretty similar thing last month for a text rendering library.
Trained and made a viz for the model, then made it displace text.
Should probably do a proper write-up: https://x.com/i/status/2038367016969724259
I noticed the snake gets penalized for not getting to the apple early; is that what you really want? Snake is about how long it gets, not about a balance between length and wall-clock time.
But if not, the snake could go into an infinite loop, never growing, never eating.
Why? It should get the reward for getting longer, but not for getting longer more quickly.
Because the sessions would last forever. Think of a length-1 or length-2 snake figuring out that left, down, up, right over and over doesn't lose any points. You're now trapped in a local minimum. You need to make the AI impatient (lose points over time) or it'll never learn.
I see what you are saying but then wouldn’t it miss out on the best strategies, which do require patience and not going straight for the apple?
Maybe you could make it lose points for repeating a board state, I guess.
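Purely as a hypothetical sketch of that idea (nothing here is from the demo): hash the exact board state each step and dock points when it repeats within an episode.

    # Hypothetical repeated-state penalty, not the demo's code.
    LOOP_PENALTY = -50  # made-up value

    def shaped_reward(base_reward: int, board_state: bytes,
                      seen: set[bytes]) -> int:
        """Penalize revisiting an exact board state within an episode."""
        if board_state in seen:
            return base_reward + LOOP_PENALTY
        seen.add(board_state)
        return base_reward

    # Per episode: seen = set(), then pass e.g. board.tobytes() each step.

The catch: the reward now depends on history, so the environment is no longer Markov from the agent's point of view unless the visit information also goes into the observation.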
Poorly programmed: it doesn't learn from its mistakes. The games get stuck in a loop because the snake never captures the piece; the piece remains, there's a gap, and the snake keeps moving along the same path with negative scores, in an infinite loop, leaving an unaltered yin and yang ;) There's a repetitive pattern in these infinite games between the position of the gap and the piece.
Did you let it train? This doesn’t happen for me
Yes, thousands of games. You can see how it happens in the displayed game matrix; there comes a point when they all enter those loops: https://ibb.co/bM4RPzPb
Makes sense; the author mentioned that training collapses eventually.
Give the neural network the sense of sight, so it knows where the point is located.
My average eventually made it to about 3900, and then stagnated between 3600-3900. I'm curious if this is universal behavior or not. I'm up to about 5k steps.
Very cool! No GitHub repo?
More details and implementation notes please?
It's on the page, if you click the little info icon in the upper right. Here's the text, but there are some nice graphics there too:
Snake Game, training entirely in the browser. Built on tinygrad: the rollout / targets / train graphs are TinyJits authored in Python, then compiled once to WGSL and replayed here under WebGPU.
Observation: flat 10×10 board (100) + 4-dim prev-action one-hot = 104 dims. fc_pi.weight is zero-init so the opening policy is uniform over the legal actions; fc_v uses tinygrad's default Kaiming init.
Per rollout: T=24 × N=384 parallel snakes (9,216 transitions), then K=3 epochs × 4 mini-batches of PPO updates. GAE γ=0.99, λ=0.95; AdamW wd=0.01; ratio clip ε=0.1; grad-norm 0.5; Huber value β=1, val_coef=1; entropy bonus 1/120 ≈ 0.00833.
Action mask + value clip + KL early stop. The 4-dim prev_a obs tail lets fc_pi zero the U-turn logit (the env silently overrides same-axis reversals anyway). Value loss is max(huber(v_new−td), huber(v_clip−td)) at ε=0.2. Approx-KL is sampled after each epoch and breaks the loop at 1.5·kl_target.

Damn, this was really interesting and really well executed.
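For context on the "TinyJits authored in Python" part: tinygrad's TinyJit traces the kernels a function launches on its early calls and then replays the captured graph; per the info text, the demo compiles that captured graph once to WGSL and replays it under WebGPU. A generic illustration of the decorator (not the demo's code):

    from tinygrad import Tensor, TinyJit

    # Generic TinyJit usage, not the demo's code: early calls run
    # normally and capture the launched GPU kernels; later calls
    # replay the captured kernel graph with fresh input buffers.
    @TinyJit
    def forward(x: Tensor, w: Tensor) -> Tensor:
        return x.linear(w).relu().realize()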
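And to unpack the dense PPO paragraph: a rough NumPy sketch of the GAE pass and the clipped losses it describes. The constants are copied from the info text; every name and structural choice below is my own reconstruction, not the demo's actual code.

    import numpy as np

    GAMMA, LAM = 0.99, 0.95   # GAE discount / lambda (from the info text)
    CLIP_EPS = 0.1            # PPO ratio clip (from the info text)
    V_CLIP = 0.2              # value-clip epsilon (from the info text)

    def gae(rewards, values, dones):
        """GAE over a T-step rollout of N parallel envs.

        rewards, dones: (T, N) arrays (dones are 0/1 floats);
        values: (T+1, N) with a bootstrap value for the final state."""
        T, N = rewards.shape
        adv = np.zeros((T, N), dtype=np.float32)
        last = np.zeros(N, dtype=np.float32)
        for t in reversed(range(T)):
            nonterm = 1.0 - dones[t]
            delta = rewards[t] + GAMMA * values[t + 1] * nonterm - values[t]
            last = delta + GAMMA * LAM * nonterm * last
            adv[t] = last
        return adv, adv + values[:-1]   # advantages, value targets

    def huber(x, beta=1.0):
        """Smooth-L1 (Huber) loss with beta=1, as in the info text."""
        ax = np.abs(x)
        return np.where(ax <= beta, 0.5 * x**2 / beta, ax - 0.5 * beta)

    def ppo_losses(logp_new, logp_old, adv, v_new, v_old, td):
        """Clipped surrogate + the clipped Huber value loss described above."""
        ratio = np.exp(logp_new - logp_old)
        clipped = np.clip(ratio, 1 - CLIP_EPS, 1 + CLIP_EPS)
        pg = -np.minimum(ratio * adv, clipped * adv).mean()
        v_clip = v_old + np.clip(v_new - v_old, -V_CLIP, V_CLIP)
        vf = np.maximum(huber(v_new - td), huber(v_clip - td)).mean()
        return pg, vf

    # Approx-KL early stop, per the text: estimate mean(logp_old - logp_new)
    # after each of the K epochs and break once it exceeds 1.5 * kl_target.
    # Entropy bonus, minibatching, and the action mask are omitted here.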
That's cool, I did exactly the same thing a few years ago.
Will it be open-sourced?
Link to repo?
cool project
Crashed
> WebGPU not available in this browser
Looks like this is for Linux and Windows; on NetBSD I get this issue :(
I got this in Firefox on Linux, just had to enable WebGPU in about:config (`dom.webgpu.enabled` = true).
Did not know that existed. I enabled it, but no luck. Must be a NetBSD thing, based on this new message:
> WebGPU is not yet available in Release or late Beta builds.
If you are using Brave (which I assume also applies to Chrome), there is a menu at brave://flags; you can enable unsafe WebGPU from there.
Sounds cool; I would like to show my kid for education. Doesn't work on Mac/Safari though (no WebGPU).
You can enable it in settings; works on my older iPhone.
My training on a 10x10 just randomly broke. I got to about 3600, then the graph went flat; the viewer on the left just showed it constantly restarting the game, with scores in the negative. My average is now -10.