@davidrevoy Hello. Want to share a bit of knowledge about LLMs.
They are many layers of connected numbers, adjusted over and over during training to predict what comes next in text.
One token at a time (a word, a piece of a word, punctuation, etc.), probabilistically.
What comes out, after text turns into numbers and goes through the neural network, is a probability distribution. Similar to the Library of Babel, albeit guided-ish. Token #1 is this likely, token #2 is that likely.
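To make that concrete, a toy sketch in plain Python of that last step (the scores and the tiny four-token vocabulary are made up by me, not from any real model):

```python
import math

# Hypothetical raw scores ("logits") the network assigns to candidate next tokens.
logits = {"the": 2.1, " fox": 1.3, " jumps": 0.2, "xyzzy": -3.0}

# Softmax turns raw scores into a probability distribution that sums to 1.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok!r}: {p:.1%}")  # token #1 is this likely, #2 is that likely...
```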
And then a sampler (or more often, several) comes into play. Something that picks neither the most likely token (stiff, boring) nor a random one (the model breaks down easily). Repeats can be discouraged (hi to the DRY sampler). Unusual-but-stable choices can be encouraged (hi to XTC). How about cut-offs of unlikely junk (hi to TopK, TopP, MinP, and so forth)? One can even discourage specific tokens.
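And a toy cut-off sampler in the same spirit (my own simplified sketch of the TopK/MinP idea; real samplers like DRY or XTC do more than this):

```python
import random

def sample_next(probs, top_k=3, min_p=0.05):
    # Keep only the top_k most likely tokens (TopK cut-off),
    # drop anything below min_p (MinP cut-off), renormalize, then roll the dice.
    ranked = sorted(probs.items(), key=lambda kv: -kv[1])[:top_k]
    kept = [(tok, p) for tok, p in ranked if p >= min_p]
    total = sum(p for _, p in kept)
    tokens = [tok for tok, _ in kept]
    weights = [p / total for _, p in kept]
    return random.choices(tokens, weights=weights)[0]

print(sample_next({"the": 0.55, " fox": 0.30, " jumps": 0.10, "xyzzy": 0.05}))
```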
And only then, the user asks something, which the LLM tries to predict an answer to, or a continuation of.
And... which LLM? There are very, very many, with different purposes and hardware requirements.
Do you see something here? Complexity. What you did is admirable; and in my somewhat-educated opinion, the best LLM is still below a mediocre human writer.
Still. LLMs can be useful if handled with knowledge of what they are, and of their limits (LLMs have a real probability of stupid mistakes; no, you can't avoid it). They are (opinion based on Anthropic's research on an LLM solving math) bundles of intuition that output a probability distribution out of a vast sea of knowledge (of varied quality).
I think the best open-weight (not to be confused with open-source) creative LLM is Kimi-K2-Instruct-0905. If you don't have half a terabyte of VRAM (yes, seriously), it will be slow.
For middle-weight I like Gemma 3 27B, and for quickness, LFM2-8B-A1B. I run 'em all with llama.cpp, and tweak/test stuff a lot.
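If you want to try, here is roughly how I'd poke at one of them from Python, assuming the llama-cpp-python bindings and a GGUF file you already have (the filename and the sampler numbers are placeholders to tweak, not a recipe):

```python
# pip install llama-cpp-python; the model path below is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="gemma-3-27b-it-Q4_K_M.gguf", n_ctx=4096)

out = llm(
    "Write a short poem about a fox.",
    max_tokens=200,
    temperature=0.8,  # higher = more adventurous choices
    top_k=40,         # TopK cut-off
    top_p=0.95,       # TopP cut-off
)
print(out["choices"][0]["text"])
```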
Anyway. AI is, mostly, a plague. For greedy people to get richer, for the lazy to get lazier. Even if there is some space for a hobby that can improve one's life (carefully!).
Just want to say ~ don't discard it all because of a first try; the picture is incomplete (and moving).
Wanna see a funny example of LLM output? I asked it to use MCTS for a fox poem (to entertain myself), but it gave me a poem about a fox using MCTS. What a silly poor thing...
In emerald fields where sunlight streams,
A fox with russet fur, it gleams.
Its gaze, a search, a branching mind,
Exploring paths where truths unwind.
Each rustling leaf, a choice unseen,
Of cunning traps or paths serene.
With cautious steps, the fox explores,
Where ancient wisdom softly pours.
Through sun-dappled boughs it leaps and weaves,
Imagining each fate it receives.
A hidden vole, a hawk's swift flight,
Each outcome weighed in fading light.
From countless trails, lessons accrue,
Survival's dance, forever new.
Each scent perceived, each rustle heard,
Adds to the wisdom fox has stirred.
With knowing eyes, it finds its way,
Beauty and wisdom intertwined today.
A flicker of flame in cunning gaze,
The fox's path, through life's maze.