Transmitting Everything You Say
-
@davidrevoy Oh my, what will the frog think?! Was he there as well?
@MrBelles Haha, maybe the frog prince was in this bath, who knows

-
@davidrevoy I interpret the "high-quality data" to mean the Avian Intelligence is generating nude pictures of the child version of this gothic sorceress (a la Grok chatbot scandal)...quelle horreur!!!
-
@davidrevoy cute but it's like all you do is about AI now
-
@davidrevoy cute but it's like all you do is about AI now

@papush_ For the weekly, yes. In the background I'm still producing a Pepper&Carrot episode that has nothing to do with AI.
I understand that this theme feels annoying for those who just want this topic to stop.
But understand that for me, with the last 20 years of my artwork trained on without my consent, my art style and all, I feel powerless. Making comics to mock it is my way to cope with that; it's therapeutic.
I promise I'll do something else once I'm done with it. -
@fell thank you!
-
@davidrevoy You know? After seeing mature references in Pepper & Carrot, one would think there wouldn't be more mature references in others of your works. I was wrong, but honestly? I love them; child-friendly stuff is tiring.
Also, that wizardress is quite a naughty gal. It's a sight to behold.

-
@davidrevoy beware Alexa...
-
@davidrevoy
Nah, everything will be okay if you have nothing to hide /s
Actually, I like your pun (and your drawings too)!
-
@davidrevoy at least it recognizes that it was high-quality data

I want my AI perverse as hell
PS: I hate AI -
@davidrevoy Hello. Want to share a bit of knowledge about LLMs.
They are many layers of connected numbers, adjusted numerous times during training to predict what comes next in text.
One token at a time (a word, a piece of a word, punctuation, etc.), probabilistically.
What comes out after the text turns into numbers and goes through the neural network is a probability distribution. Similar to the Library of Babel, albeit guided'ish: token #1 is this likely, and #2 is that likely.
And then a sampler (or more often, several) comes into play. Something to choose neither the most likely token (stiff, boring) nor a random one (the model breaks easily). Repeats can be discouraged (hi to the DRY sampler). Unusual but stable choices can be encouraged (hi to XTC). How about cut-offs (hi to TopK, TopP, MinP, and so forth) for unlikely junk? One can even discourage specific tokens.
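To make the sampler talk above concrete, here is a minimal toy sketch (not any real inference engine's code; the token scores and cut-off values are made up) of how temperature, top-k, and top-p reshape a next-token distribution before one token is drawn:

```python
import math
import random

def sample_next_token(logits, temperature=0.8, top_k=5, top_p=0.9):
    """Pick one next token from raw scores using common sampler cut-offs."""
    # Temperature: <1 sharpens the distribution, >1 flattens it.
    scaled = {tok: s / temperature for tok, s in logits.items()}
    # Softmax: turn scores into probabilities that sum to 1.
    m = max(scaled.values())
    exps = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Top-k cut-off: keep only the k most likely tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Top-p (nucleus) cut-off: keep the smallest prefix reaching mass top_p.
    kept, mass = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        mass += p
        if mass >= top_p:
            break
    # Renormalize the survivors and draw one of them at random.
    norm = sum(p for _, p in kept)
    r, acc = random.random() * norm, 0.0
    for tok, p in kept:
        acc += p
        if acc >= r:
            return tok
    return kept[-1][0]

# Toy "next token" scores after a prompt like "The cat sat on the"
logits = {"mat": 4.0, "sofa": 3.0, "roof": 2.0, "moon": 0.5, "spoon": 0.1}
print(sample_next_token(logits))
```

With `top_k=1` (greedy) it always returns the most likely token; real samplers like DRY or XTC add further reweighting steps on top of this same pipeline.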
And only then does the user ask something, which the LLM tries to predict an answer to, or a continuation of.
And... which LLM? There are very, very many, with different purposes and hardware requirements.
Do you see something here? Complexity. What you did is admirable; and in my somewhat-educated opinion, the best LLM is below a mediocre human writer.
Still, LLMs can be useful if handled with knowledge of what they are and of their limits (LLMs have a probability of stupid mistakes; no, you can't avoid it). They are (an opinion based on Anthropic's research into an LLM solving math) bundles of intuition that output a probability distribution out of a vast sea of knowledge (of varied quality).
I think the best open-weight (not to be confused with open-source) creative LLM is Kimi-K2-Instruct-0905. If you don't have half a terabyte of VRAM (yes, seriously), it will be slow.
For middle-weight I like Gemma 3 27B, and for quickness, LFM2-8B-A1B. I like 'em all with llama.cpp, and tweak/test stuff a lot.
Anyway. AI is, mostly, a plague: for greedy people to get richer, for the lazy to get lazier. Even if there is some space for a hobby that can improve one's life (carefully!).
Just want to say ~ don't discard it all because of a first try; the picture is incomplete (and moving).
Want a funny example of LLM output? I asked it to use MCTS for a fox poem (to entertain myself), but it gave a poem about a fox using MCTS. What a silly poor thing...
In emerald fields where sunlight streams,
A fox with russet fur, it gleams.
Its gaze, a search, a branching mind,
Exploring paths where truths unwind.

Each rustling leaf, a choice unseen,
Of cunning traps or paths serene.
With cautious steps, the fox explores,
Where ancient wisdom softly pours.

Through sun-dappled boughs it leaps and weaves,
Imagining each fate it receives.
A hidden vole, a hawk's swift flight,
Each outcome weighed in fading light.

From countless trails, lessons accrue,
Survival's dance, forever new.
Each scent perceived, each rustle heard,
Adds to the wisdom fox has stirred.

With knowing eyes, it finds its way,
Beauty and wisdom intertwined today.
A flicker of flame in cunning gaze,
The fox's path, through life's maze. -
@davidrevoy Hm. Three more things (I hope I am of edutainment).
Firstly, another aspect is quantization; in other words, reducing the precision (and space taken) of the numbers so LLMs run quicker and in less video memory (or just RAM, if a slower pace is acceptable). There are various formats for it, and llama.cpp uses the GGUF format with... very many types. And less precision loss sometimes makes models more stiff (hence why, for creative purposes, I often prefer Q4 or Q5 quants).
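The core idea of quantization can be sketched in a few lines. This is a toy symmetric 4-bit scheme with made-up weights, not the actual GGUF/Q4 math (which packs blocks with per-block scales and offsets); it just shows where the precision loss comes from:

```python
# Toy symmetric quantization: squeeze float weights into 4-bit integers
# (16 levels, here [-8, 7]) plus one shared float scale per block.

def quantize_4bit(weights):
    """Map floats to ints in [-8, 7]; return ints and the shared scale."""
    scale = max(abs(w) for w in weights) / 7.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats; the rounding error is the precision loss."""
    return [v * scale for v in q]

weights = [0.12, -0.53, 0.99, -0.07, 0.31]
q, scale = quantize_4bit(weights)
restored = dequantize(q, scale)
max_error = max(abs(a - b) for a, b in zip(weights, restored))
print(q, round(max_error, 4))
```

Each weight now costs 4 bits instead of 32, at the price of a rounding error of at most half the scale; fewer levels (Q2, Q3) mean a coarser grid and a bigger error.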
Secondly, LLMs have limited context ~ literally, the maximum number of tokens they can process (and it is another setting to tweak, and possibly quantize).
And thirdly, despite this being a hard thing to get (LLMs are generally trained toward a "correct" distribution), seeking edge cases and prompting fun ideas can sometimes give good results. Here is an example, from Kimi-K2 (via Groq, 'cause running this giant mostly from SSD is too slow even for my patience).
The prompt was... Please write a short and funny story about small dragon that tried to "terrify kingdom" but failed and only gained adoration due to being adorable and silly. Try to include some dialogue.
And here are some moments where Kimi actually did surprisingly well...
Tonight was his Terror Debut. He practiced in a puddle:
Flicker (menacing whisper): "I am death! I am doom! I..."
Puddle: blorp
Flicker: "Stop undercutting me, water!"
...
Flicker tried again. He landed on the fountain, spread his wings dramatically, and knocked over a laundry line. A pair of bloomers fluttered onto his horns like a wedding veil.
Blacksmith (to his daughter): "Look, sweetie, the dragon's getting married!"
Little Girl: "She's so pretty!"
Flicker: "I'm a he! And I'm terrifying!"
Girl: "Can we keep him, Dad? I'll feed him and walk him and name him Toasty-Woasty."
...
Royal Scribe (writing): "Day of the Belly-Rub Treaty. Casualties: zero. Cuteness fatalities: innumerable."
...
in glitter-gel pen:
"Mission status: Kingdom terrified... of how much they love me."

...it's quite ironic, really, that to get funny bits (only sometimes; not reliably), one must already know writing somewhat and have technical skill, too.
-
@davidrevoy At the bottom of Pandora's box is hope. I still wish to be a techno-optimist. Like any technology, machine learning can be used for good and ill.
And generally, technology has done good for people. History teaches lessons: modern humans are much tamer than tribal ones, food standards are better, and there are more obese people than starving ones (not ideal, but better than the past).
So. Take this piece of knowledge not with fear, but with hope. Good humans exist, and do stuff.
Times get tough and uncertain sometimes, but if the changes of the past have taught us anything, it is that making this world better (even in small ways) is very worthwhile.
Also, your comics are awesome, my very favorite.

-
@davidrevoy @tiredbun @voxel I've seen enough blatantly wrong LLM-generated alt text and spell check results that I don't think the technology is fit for purpose. Any purpose. AI "hallucination" is not a solvable problem: It's the exact thing LLMs are designed to do. So, with all due respect to the person being quoted here, I don't trust anyone who would entrust something as important as accessibility to AI.
@Linebyline @davidrevoy @tiredbun @voxel afaik this is due to an oversight during training. Statistically, the model guessing something was rewarded more than it saying "I don't know", so that's where hallucinations come from. If we take that into account when training future models, it is plausible that these hallucinations are reduced over time.
-
@Linebyline @davidrevoy @tiredbun @voxel afaik this is due to an oversight during training. Statistically, the model guessing something was rewarded more than it saying "I don't know", so that's where hallucinations come from. If we take that into account when training future models, it is plausible that these hallucinations are reduced over time.
@mage_of_dragons @Linebyline @davidrevoy @tiredbun I know a model that says idk all the time

-
@mage_of_dragons @Linebyline @davidrevoy @tiredbun I know a model that says idk all the time

@voxel See? they're already improving x3

