Transmitting Everything You Say
-
@davidrevoy @RandomDamage @solitha @lazy
Though, re-reading the whole story, the goth witch looks too young for a post-doc, and her relation to the information provided by the parrot is really what you might expect from a bachelor's or master's student, not from someone with a PhD under her belt. A fortiori, a permanent researcher would have the ability to check a good chunk of the parrot's nonsense and avoid incinerating herself.
So my personal take, given what we have now: she could be a witch at the beginning of her PhD, or in her master's (since she's getting a bit more assertive). A PhD student is also a lot closer to the faculty and could well be invited to the New Year party she's attending.
@Sobex Okay, I really have to ask.
She "looks too young for a post-doc"? How many 300-year-olds do you know and how do they typically compare in appearance to post-docs?
-
@solitha @davidrevoy @RandomDamage @lazy Well, fair; there's a question of how long wizards and witches live and how they age.
I'm relating it to human aging. (And Ars Magica wizards age slower than their peers starting around 30, but they do usually reach an apparent age of 35, and don't usually live past 250, to be compared with the 50-60 reached by non-wizards.)
-
@Sobex David did mention a sort of parody of Yennefer, who chose to look much younger than she was.
In this case, someone alive at 300 years appears much younger than human aging would allow, so appearances are not a great metric.
Edit: E34, we know glamours exist.
-
@davidrevoy CINEMA!
Thanks for the laughs. -
@davidrevoy
That bird's been collecting knowledge without her knowledge?!
Scientia potentia est https://luckey.games/cyans/anti/quickcheck.png
It all makes sense now. -
@arfisk
I like to call her Cepper (for the Cepper&Parrot pun). -
@davidrevoy Oh my, what will the frog think?! Was he there as well?
-
@mcpinson
Is that what kids are calling it these days?
@eragon @davidrevoy
*ahem*, well... It's more of a 1969 thing.
-
@MrBelles Haha, maybe the frog prince was in this bath, who knows

-
@davidrevoy I interpret the "high-quality data" to mean the Avian Intelligence is generating nude pictures of the child version of this gothic sorceress (à la the Grok chatbot scandal)... quelle horreur!!!
-
@davidrevoy cute but it's like all you do is about AI now
-
@papush_ For the weekly, yes. In the background I'm still in production on a Pepper&Carrot episode that has nothing to do with AI.
I understand that this theme feels annoying for those who just want the topic to stop.
But understand that for me, with the last 20 years of my artwork, my art style and all, trained on without my consent, I feel powerless. Making comics to mock it is my way to cope with that; it's therapeutic.
I promise I'll do something else once I'm done with it. -
@fell thank you!
-
@davidrevoy You know, after seeing mature references in Pepper & Carrot, one would think there wouldn't be more mature references in your other works. I was wrong, but honestly? I love them; child-friendly stuff is tiring.
Also, that wizardress is quite a naughty gal. It's a sight to behold.

-
@davidrevoy beware Alexa...
-
@davidrevoy
Nah, everything will be okay if you have nothing to hide /s
Actually, I like your pun (and your drawings too)!
-
@davidrevoy at least it recognizes that it was high-quality data

I want my AI perverse as hell
PS: I hate AI -
@davidrevoy Hello. I want to share a bit of knowledge about LLMs.
They are many layers of connected numbers, adjusted numerous times during training to predict what comes next in a text.
One token at a time (a word, a piece of a word, punctuation, etc.), probabilistically.
What comes out after the text turns into numbers and goes through the neural network is a probability distribution. Similar to the Library of Babel, albeit guided-ish: token #1 is this likely, and token #2 is that likely.
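To make that concrete, here is a toy Python sketch of that step. Everything in it is made up for illustration: the four-token vocabulary and the scores are invented, and real models work over vocabularies of tens of thousands of tokens.

```python
import math
import random

# Made-up "logits": raw scores a model might emit for the token after "The fox".
logits = {"jumps": 3.1, "runs": 2.4, "sleeps": 1.0, "quantum": -2.0}

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
# Always picking the argmax is greedy decoding (stiff, boring);
# sampling from the distribution gives varied continuations.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", next_token)
```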
And then a sampler (or more often, several) comes into play: something that picks neither the most likely token (stiff, boring) nor a random one (the model breaks easily). Repeats can be discouraged (hi to the DRY sampler). Unusual but stable choices can be encouraged (hi to XTC). How about cut-offs of unlikely junk (hi to Top-K, Top-P, Min-P, and so forth)? One can even discourage specific tokens.
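Those cut-offs are simple enough to sketch. Below are minimal toy versions of Top-K, Top-P, and Min-P over a made-up distribution; real implementations (llama.cpp's included) run over the full vocabulary and compose with temperature and the other samplers.

```python
# Toy next-token distribution (already normalized), made up for illustration.
probs = {"jumps": 0.55, "runs": 0.30, "sleeps": 0.10, "quantum": 0.05}

def renorm(kept):
    """Rescale the surviving tokens so they sum to 1 again."""
    total = sum(kept.values())
    return {t: p / total for t, p in kept.items()}

def top_k(probs, k):
    """Keep only the k most likely tokens."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    return renorm(dict(ranked[:k]))

def top_p(probs, p):
    """Nucleus sampling: keep the smallest top set whose mass reaches p."""
    kept, mass = {}, 0.0
    for tok, pr in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = pr
        mass += pr
        if mass >= p:
            break
    return renorm(kept)

def min_p(probs, ratio):
    """Drop tokens less likely than ratio times the top token's probability."""
    cutoff = ratio * max(probs.values())
    return renorm({t: pr for t, pr in probs.items() if pr >= cutoff})

print(top_k(probs, 2))    # keeps "jumps" and "runs" only
print(top_p(probs, 0.9))  # drops the unlikely "quantum" tail
print(min_p(probs, 0.1))  # keeps tokens at least 10% as likely as the best
```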
And only then does the user ask something, to which the LLM tries to predict an answer or a continuation.
And... which LLM? There are very, very many, for different purposes and hardware requirements.
Do you see something here? Complexity. What you did is admirable; and in my somewhat-educated opinion, the best LLM is still below a mediocre human writer.
Still, LLMs can be useful if handled with knowledge of what they are and of their limits (LLMs have a built-in probability of stupid mistakes; no, you can't avoid it). They are (an opinion based on Anthropic's research into how LLMs solve math) bundles of intuition that output a probability distribution out of a vast sea of knowledge (of varied quality).
I think the best open-weight (not to be confused with open-source) creative LLM is Kimi-K2-Instruct-0905. If you don't have half a terabyte of VRAM (yes, seriously), it will be slow.
For middle weight I like Gemma 3 27B, and for quickness, LFM2-8B-A1B. I run 'em all with llama.cpp, and tweak/test stuff a lot.
Anyway. AI is, mostly, a plague: for greedy people to get richer, for lazy people to get lazier. Even if there is some space for a hobby that can improve one's life (carefully!).
I just want to say: don't discard it all because of a first try; the picture is incomplete (and moving).
Want a funny example of LLM output? I asked one to use MCTS for a fox poem (to entertain myself), but it gave me a poem about a fox using MCTS. What a silly poor thing...
In emerald fields where sunlight streams,
A fox with russet fur, it gleams.
Its gaze, a search, a branching mind,
Exploring paths where truths unwind.

Each rustling leaf, a choice unseen,
Of cunning traps or paths serene.
With cautious steps, the fox explores,
Where ancient wisdom softly pours.

Through sun-dappled boughs it leaps and weaves,
Imagining each fate it receives.
A hidden vole, a hawk's swift flight,
Each outcome weighed in fading light.

From countless trails, lessons accrue,
Survival's dance, forever new.
Each scent perceived, each rustle heard,
Adds to the wisdom fox has stirred.

With knowing eyes, it finds its way,
Beauty and wisdom intertwined today.
A flicker of flame in cunning gaze,
The fox's path, through life's maze. -
@davidrevoy Hm. Three more things (I hope I'm being edutaining).
Firstly, another aspect is quantization, or in other words, reducing the precision (and space taken up) of the numbers so that LLMs run quicker and in less video memory (or just RAM, if a slower pace is acceptable). There are various formats for it, and llama.cpp uses the GGUF format with... very many quant types. And less precision loss sometimes makes models more stiff (hence why for creative purposes I often prefer Q4 or Q5 quants).
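If you wonder where the precision goes, here is a deliberately minimal sketch of symmetric integer quantization. Real GGUF quants are block-wise, with per-block scales and considerably cleverer encodings; this only shows the basic round-trip error the idea trades away.

```python
# Made-up weights; a real tensor holds billions of these.
weights = [0.12, -0.83, 0.47, 0.02, -0.31]

bits = 4
qmax = 2 ** (bits - 1) - 1                # 7 for signed 4-bit
scale = max(abs(w) for w in weights) / qmax

quantized = [round(w / scale) for w in weights]   # stored as small integers
dequantized = [q * scale for q in quantized]      # what inference computes with

for w, d in zip(weights, dequantized):
    print(f"{w:+.3f} -> {d:+.3f} (error {abs(w - d):.3f})")
```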
Secondly, LLMs have limited context: literally, the maximum number of tokens they can process (and it is another setting to tweak, and possibly quantize).
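A tiny sketch of what that hard limit means in practice, with a comically small window (real windows run from thousands to millions of tokens):

```python
CONTEXT_WINDOW = 8  # tokens; absurdly small, purely for illustration

history = []
for token in "the quick brown fox jumps over the lazy sleeping dog".split():
    history.append(token)
    # Once the window is full, the oldest tokens simply fall out of view.
    history = history[-CONTEXT_WINDOW:]

print(history)  # the model can no longer "see" the start of the text
```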
And thirdly, despite this being a hard thing to get (LLMs are generally trained toward a "correct" distribution), seeking edge cases and prompting fun ideas can sometimes give good results. Here is an example from Kimi-K2 (via Groq, 'cause running this giant mainly from SSD is too slow even for my patience).
The prompt was: "Please write a short and funny story about a small dragon that tried to 'terrify the kingdom' but failed and only gained adoration due to being adorable and silly. Try to include some dialogue."
And here are some moments that Kimi actually did surprisingly well...
Tonight was his Terror Debut. He practiced in a puddle:
Flicker (menacing whisper): “I am death! I am doom! I—”
Puddle: blorp
Flicker: “Stop undercutting me, water!”
...
Flicker tried again. He landed on the fountain, spread his wings dramatically, and knocked over a laundry line. A pair of bloomers fluttered onto his horns like a wedding veil.
Blacksmith (to his daughter): “Look, sweetie, the dragon’s getting married!”
Little Girl: “She’s so pretty!”
Flicker: “I’m a he! And I’m terrifying!”
Girl: “Can we keep him, Dad? I’ll feed him and walk him and name him Toasty-Woasty.”
...
Royal Scribe (writing): “Day of the Belly-Rub Treaty. Casualties: zero. Cuteness fatalities: innumerable.”
...
in glitter-gel pen:
“Mission status: Kingdom terrified… of how much they love me.”
...it's quite ironic, really, that to get the funny bits (only sometimes; not reliably), one must already know writing somewhat and have technical skill, too.

