Transmitting Everything You Say

Category: Uncategorized
Tags: #webcomic #krita #miniFantasyTheater
76 Posts 45 Posters 241 Views
This topic has been deleted. Only users with topic management privileges can see it.
• MrBelles:
  @davidrevoy Oh my, what will the frog think?! Was he there as well?

• David Revoy (#61):
  @MrBelles Haha, maybe the frog prince was in this bath, who knows πŸ˜†
• David Revoy:
  Transmitting Everything You Say
  #webcomic #krita #miniFantasyTheater

• The Blue Wizard (#62):
  @davidrevoy I interpret the "high-quality data" to mean the Avian Intelligence is generating nude pictures of the child version of this gothic sorceress (a la the Grok chatbot scandal)... quelle horreur!!!
• benoit mortier (#63):
  @davidrevoy πŸ˜‰πŸ˜›
• papush! (#64):
  @davidrevoy cute but it's like all you do is about AI now 😞
• David Revoy (#65):
  @papush_ For the weekly, yes. In the background I'm still in production on a Pepper&Carrot episode that has nothing to do with AI.
  I understand that this theme feels annoying for those who just want this topic to stop. But understand that, for me, with the last 20 years of my artworks trained on without my consent, my art style and all, I feel powerless. Making comics to mock it is my way to cope with that; it's therapeutic.
  I promise I'll do something else once I'm done with it.
• David Revoy (#66):
  @fell thank you!
• Ray Of Sunlight (#67):
  @davidrevoy You know, after seeing mature references in Pepper & Carrot, one would think there wouldn't be more mature references in your other works. I was wrong, but honestly? I love them; child-friendly stuff is tiring.

  Also, that wizardress is quite a naughty gal. It's a sight to behold. πŸ˜‚
• MHunt (#68):
  @davidrevoy beware Alexa...
• Firestarter_OL (#69):
  @davidrevoy Nah, everything will be okay if you have nothing to hide πŸ’β€β™‚οΈ /s

  Actually, I like your pun (and your drawings too)!
• Olof-Knight (#70):
  @davidrevoy At least it recognizes that it was high-quality data πŸ€·πŸΌβ€β™‚οΈπŸ—Ώ
  I want my AI perverse as hell 😎
  PS: I hate AI
• iknowpony (#71):
  @davidrevoy Hello. I want to share a bit of knowledge about LLMs.

  They are many layers of connected numbers, adjusted numerous times during training to predict what comes next in text.

  One token at a time (a word, a piece of a word, punctuation, etc.), probabilistically.

  What happens after text turns into numbers and goes through the neural network is a probability distribution. Similar to the Library of Babel, albeit guided-ish: token #1 is this likely, and token #2 is that likely.

  And then a sampler (or, more often, several) comes into play: something that picks neither the most likely token (stiff, boring) nor a random one (the model breaks easily). Repeats can be discouraged (hi to the DRY sampler). Unusual but stable choices can be encouraged (hi to XTC). How about cut-offs of unlikely junk (hi to Top-K, Top-P, Min-P, and so forth)? One can even discourage specific tokens.
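  The cut-off samplers described here can be sketched over a toy next-token distribution (plain Python with made-up logits, not a real model; DRY and XTC are left out for brevity):

  ```python
  import math
  import random

  def softmax(logits):
      """Turn raw scores into a probability distribution summing to 1."""
      m = max(logits.values())
      exps = {t: math.exp(v - m) for t, v in logits.items()}
      total = sum(exps.values())
      return {t: e / total for t, e in exps.items()}

  def sample(logits, temperature=0.8, top_k=3, top_p=0.9):
      """Temperature scaling, then top-k and top-p cut-offs, then sampling."""
      scaled = {t: v / temperature for t, v in logits.items()}
      probs = softmax(scaled)
      # Top-k: keep only the k most likely tokens.
      ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
      # Top-p: keep the smallest prefix whose cumulative mass reaches top_p.
      kept, mass = [], 0.0
      for tok, p in ranked:
          kept.append((tok, p))
          mass += p
          if mass >= top_p:
              break
      # Draw one token proportionally to the surviving probabilities.
      total = sum(p for _, p in kept)
      r = random.random() * total
      for tok, p in kept:
          r -= p
          if r <= 0:
              return tok
      return kept[-1][0]

  # A made-up distribution for the token after "The fox jumped over the".
  toy = {"fence": 2.1, "moon": 1.3, "dog": 1.0, "lazy": 0.7, "banana": -1.5}
  print(sample(toy))
  ```

  Raising `temperature` flattens the distribution before the cut-offs apply; `top_k=1` degenerates into greedy decoding and always returns the most likely token.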

  And only then does the user ask something, which the LLM tries to predict an answer to, or a continuation of.

  And... which LLM? There are very, very many, with different purposes and hardware requirements.

  Do you see something here? Complexity. What you did is admirable; and in my somewhat-educated opinion, the best LLM is below a mediocre human writer.

  Still, LLMs can be useful if handled with knowledge of what they are and of their limits (LLMs carry a probability of stupid mistakes; no, you can't avoid it). They are (an opinion based on Anthropic's research on how LLMs solve math) bundles of intuition that output a probability distribution out of a vast sea of knowledge (of varied quality).

  I think the best open-weight (not to be confused with open-source) creative LLM is Kimi-K2-Instruct-0905. If you don't have half a terabyte of VRAM (yes, seriously), it will be slow.

  For a middle weight I like Gemma 3 27B, and for quickness, LFM2-8B-A1B. I run them all with llama.cpp, and tweak and test a lot.

  Anyway. AI is, mostly, a plague: a way for greedy people to get richer and for the lazy to get lazier. Even if there is some space for a hobby that can improve one's life (carefully!).

  I just want to say: don't discard it all because of a first try; the picture is incomplete (and moving).

  Want a funny example of LLM output? I asked it to use MCTS for a fox poem (to entertain myself), but it gave a poem about a fox using MCTS. What a silly poor thing...

  In emerald fields where sunlight streams,
  A fox with russet fur, it gleams.
  Its gaze, a search, a branching mind,
  Exploring paths where truths unwind.

  Each rustling leaf, a choice unseen,
  Of cunning traps or paths serene.
  With cautious steps, the fox explores,
  Where ancient wisdom softly pours.

  Through sun-dappled boughs it leaps and weaves,
  Imagining each fate it receives.
  A hidden vole, a hawk's swift flight,
  Each outcome weighed in fading light.

  From countless trails, lessons accrue,
  Survival's dance, forever new.
  Each scent perceived, each rustle heard,
  Adds to the wisdom fox has stirred.

  With knowing eyes, it finds its way,
  Beauty and wisdom intertwined today.
  A flicker of flame in cunning gaze,
  The fox's path, through life's maze.

• iknowpony (#72):

  @davidrevoy Hm, three more things (I hope I am being edutaining).

  Firstly, another aspect is quantization: reducing the precision (and the space taken) of the numbers so LLMs run quicker and in less video memory (or just RAM, if a slower pace is acceptable). There are various formats for it; llama.cpp uses the GGUF format with... very many quantization types. And less precision loss sometimes makes models stiffer, which is why for creative purposes I often prefer Q4 or Q5 quants.
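  The core idea behind all those quant types can be illustrated with the simplest scheme, symmetric 8-bit quantization of a weight vector (a toy sketch; GGUF's real formats quantize in blocks with extra tricks):

  ```python
  def quantize_int8(weights):
      """Map floats to ints in [-127, 127] plus one float scale factor."""
      scale = max(abs(w) for w in weights) / 127.0
      q = [round(w / scale) for w in weights]
      return q, scale

  def dequantize(q, scale):
      """Recover approximate floats; rounding error is at most scale / 2."""
      return [v * scale for v in q]

  weights = [0.12, -1.73, 0.005, 0.98, -0.42]
  q, scale = quantize_int8(weights)
  restored = dequantize(q, scale)
  print(q)         # small integers: 1 byte each instead of 4-8 bytes
  print(restored)  # close to the originals, but not identical
  ```

  Splitting a tensor into small blocks, each with its own scale, keeps an outlier like the -1.73 here from wasting precision for the whole vector; that is roughly what GGUF's Q4/Q5 block types do.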

  Secondly, LLMs have a limited context: literally, the maximum number of tokens they can process (another setting to tweak, and possibly quantize).
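  When a conversation outgrows that limit, front ends typically drop the oldest turns first; a naive sketch (the token counts are illustrative, and real code would count with the model's own tokenizer):

  ```python
  def trim_to_context(messages, max_tokens):
      """Keep the most recent messages whose combined length fits the budget.

      `messages` is a list of (role, token_count) pairs, oldest first.
      """
      kept, used = [], 0
      for role, n_tokens in reversed(messages):
          if used + n_tokens > max_tokens:
              break  # everything older than this is dropped
          kept.append((role, n_tokens))
          used += n_tokens
      return list(reversed(kept))

  history = [("user", 120), ("assistant", 300), ("user", 80), ("assistant", 150)]
  print(trim_to_context(history, 400))  # keeps only the last two turns
  ```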

  And thirdly, although this is a hard thing to get (LLMs are generally trained toward a "correct" distribution), seeking edge cases and prompting fun ideas can sometimes give good results. Here is an example from Kimi-K2 (via Groq, because running this giant mostly from SSD is too slow even for my patience).

  The prompt was: Please write a short and funny story about small dragon that tried to "terrify kingdom" but failed and only gained adoration due to being adorable and silly. Try to include some dialogue.

  And here are some moments that Kimi actually did surprisingly well...

  Tonight was his Terror Debut. He practiced in a puddle:
  Flicker (menacing whisper): β€œI am death! I am doom! I—”
  Puddle: blorp
  Flicker: β€œStop undercutting me, water!”
  ...
  Flicker tried again. He landed on the fountain, spread his wings dramatically, and knocked over a laundry line. A pair of bloomers fluttered onto his horns like a wedding veil.
  Blacksmith (to his daughter): β€œLook, sweetie, the dragon’s getting married!”
  Little Girl: β€œShe’s so pretty!”
  Flicker: β€œI’m a he! And I’m terrifying!”
  Girl: β€œCan we keep him, Dad? I’ll feed him and walk him and name him Toasty-Woasty.”
  ...
  Royal Scribe (writing): β€œDay of the Belly-Rub Treaty. Casualties: zero. Cuteness fatalities: innumerable.”
  ...
  in glitter-gel pen:
  β€œMission status: Kingdom terrified… of how much they love me.”

  ...it's quite ironic, really, that to get funny bits (only sometimes; not reliably), one must already know writing somewhat and have technical skill, too.

• iknowpony (#73):

  @davidrevoy At the bottom of Pandora's box is hope. I still wish to be a techno-optimist. Like any technology, machine learning can be used for good and for ill.

  And generally, technology has done good for people. History teaches lessons: modern humans are much tamer than tribal ones, food standards are better, and there are more obese people than starving ones (not ideal, but better than the past).

  So. Take this piece of knowledge not with fear, but with hope. Good humans exist, and they do stuff.

  Times get tough and uncertain sometimes, but if the changes of the past have taught us anything, it is that making this world better (even in small ways) is very worthwhile.

  Also, your comics are awesome, my very favorite. πŸ˜ƒ

• Linebyline:
  @davidrevoy @tiredbun @voxel I've seen enough blatantly wrong LLM-generated alt text and spell-check results that I don't think the technology is fit for purpose. Any purpose. AI "hallucination" is not a solvable problem: it's the exact thing LLMs are designed to do. So, with all due respect to the person being quoted here, I don't trust anyone who would entrust something as important as accessibility to AI.

• mage_of_dragons (#74):

  @Linebyline @davidrevoy @tiredbun @voxel AFAIK this comes from an oversight during training: statistically, the model was rewarded more for guessing something than for saying "I don't know", and that is where hallucinations come from. If we take that into account when training future models, it is plausible that hallucinations will be reduced over time.

• Voxel (#75):

  @mage_of_dragons @Linebyline @davidrevoy @tiredbun I know a model that says idk all the time πŸ™ƒ

  See: https://help.minecraft.net/hc/en-us/articles/38053285673229-About-Merl-the-Minecraft-Support-Virtual-Agent

• mage_of_dragons (#76):

  @voxel See? They're already improving x3

Powered by NodeBB