Inspired by this article, I tried reading some Forth tutorials. My question is whether concatenative languages are AI-coding friendly. Beyond training-data availability, the question is also whether LLMs can correctly track long flows of concatenated operations. Any ideas?
They can produce idioms that resemble the flow of Forth code, but when asked to produce a working algorithm they get lost very quickly, because following the code requires reading both "backwards" (push order) and forwards (execution order) to maintain context. At any point a real Forth program may inject a word into the stack flow that completely alters the meaning of the words that follow, so reading and debugging Forth are nearly the same activity: you have to walk through the execution step by step, unless you've deliberately built patterns that decouple context. And once you do that, you've effectively started developing your own syntax, which the LLM won't have training data for.
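A concrete illustration of the dual reading direction (a minimal sketch in standard Forth; "average" is just an illustrative name, runnable in e.g. Gforth):

    : average ( a b -- avg ) + 2 / ;  \ consume two numbers, leave their mean
    10 20 average .                   \ prints 15

Execution reads left to right, but to know what + consumes you have to read backwards to the 10 and 20 pushed earlier. An LLM has to track both directions at once across the whole program.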
I suggest using Rosetta Code as a learning resource for Forth idioms.
Any concatenative program can be reduced to a rho type, and AIs are pretty good at combining properly typed abstractions.
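To unpack that a little (assuming "rho type" here refers to the row-polymorphic stack types used in work on typing concatenative languages, where rho is the row variable standing for "the rest of the stack"), the basic words get types roughly like:

    dup  : rho a   -> rho a a
    swap : rho a b -> rho b a
    +    : rho n n -> rho n

Concatenation is then just composition: if f : rho1 -> rho2 and g : rho2 -> rho3, the program f g has type rho1 -> rho3, so a whole program collapses to a single such type.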
phreda4 has been doing cool stuff with ColorForth-likes for ages and for some reason barely gets any attention for it. It always brings a smile to my face to see it submitted here.
That is a great-looking tutorial. Can't wait to try it. Thanks!