This is one of the worst takes from LLM enthusiasts.

  • Miguel Arroz (original post):

    RE: https://mastodon.social/@stroughtonsmith/116030136026775832

    This is one of the worst takes from LLM enthusiasts.

    Compilers are deterministic, extremely well tested, made out of incredibly detailed specifications debated for months and properly formalized.

    LLMs are random content generators with a whole lot of automatically trained heuristics. They can produce literally anything. Not a single person who built them can predict what the output will be for a given input.

    Comparing both is a display of ignorance and dishonesty.

    Very Human Robot
    #34

    @arroz

    The trick is to get the LLM to generate a spec and an acceptance test for the change you want to make, and verify the test.

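A minimal sketch of the workflow described in #34, using an entirely hypothetical slugify() change: the LLM drafts the spec and the acceptance test, and the human verifies the test itself before trusting any implementation that happens to pass it. The function, spec, and expected values below are illustrative assumptions, not anything from the thread.

```python
import re

# Hypothetical spec (LLM-drafted, human-reviewed): slugify(title) lowercases
# the title, replaces runs of non-alphanumeric characters with single hyphens,
# and strips leading/trailing hyphens.

def slugify(title: str) -> str:
    # Candidate implementation; in the described workflow this could itself
    # be LLM-generated.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_slugify_acceptance():
    # The human checks that these expectations really encode the spec before
    # accepting any implementation that passes them.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  LLMs --- are not compilers  ") == "llms-are-not-compilers"
    assert slugify("already-a-slug") == "already-a-slug"

if __name__ == "__main__":
    test_slugify_acceptance()
    print("acceptance test passed")
```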
  • Darby Lines:

    @zzt @arroz The sheer volume of developers that I have lost respect for in the last two years is just staggering.

    zygmyd
    #35

    @angry_drunk @zzt @arroz

    And executives. It's been revealing to see who the bandwagon jumpers are and who is being thoughtful about things.
  • In reply to Miguel Arroz's original post (quoted above):

    pmonks (330ppm)
    #36

    @arroz These systems are Dunning-Kruger-as-a-service, and that thread is a textbook example of why.
  • In reply to Miguel Arroz's original post (quoted above):

    Guest
    #37

    @arroz Well put. Ambiguity is a well-studied topic in the context of compilers. You don't want your code generator to be able to interpret a construct in a dozen different ways. Natural language is nothing but ambiguous.

    "Then we'll constrain it accordingly." First, there are many context-free languages for which eliminating ambiguity is impossible, and the ones where it is possible rely on well-known techniques. At that point you're just "innovating" by reinventing regular languages and context-free languages.

    Furthermore, are gcc or any of the LLVM compilers part of taking water from the mouths of Mexican families? Does ghc put a huge amount of stress on the electrical grid of Ireland? Will an LLM generate code as correct as CompCert? Are rustc or sbcl part of an abject bubble that will likely have catastrophic effects on the economy?
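To make the ambiguity point in #37 concrete (an illustration added here, not part of the original post): under the classic ambiguous expression grammar E -> E + E | E * E | num, the single string "1 + 2 * 3" has two parse trees with different meanings, and the standard fix is to rewrite the grammar with precedence levels, exactly the kind of well-known technique the post refers to.

```python
from dataclasses import dataclass

# The ambiguous grammar E -> E + E | E * E | num assigns both of the parse
# trees below to the single input string "1 + 2 * 3".

@dataclass
class Num:
    value: int

@dataclass
class BinOp:
    op: str
    left: object
    right: object

def evaluate(node):
    if isinstance(node, Num):
        return node.value
    left, right = evaluate(node.left), evaluate(node.right)
    return left + right if node.op == "+" else left * right

# Parse tree 1: root production E * E, i.e. (1 + 2) * 3
tree_a = BinOp("*", BinOp("+", Num(1), Num(2)), Num(3))
# Parse tree 2: root production E + E, i.e. 1 + (2 * 3)
tree_b = BinOp("+", Num(1), BinOp("*", Num(2), Num(3)))

print(evaluate(tree_a))  # 9
print(evaluate(tree_b))  # 7 -- same source text, two different meanings
```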
  • In reply to Miguel Arroz's original post (quoted above):

    Jared White (ResistanceNet ✊)
    #38

    @arroz @stroughtonsmith Totally off their rockers. Slop machine psychosis really seems to be in the air right now.

    You know who is *perfectly cool* with developers continuing to write code for their apps like normal creative people? THE USERS. In fact, putting a slop-free badge on your product *is a selling point* because nobody wants this crap. 😂
  • In reply to Miguel Arroz's original post (quoted above):

    mtc_uk
    #39

    @arroz @stroughtonsmith
    Jesus fucking Christ, these people are incompetent idiots. I'm even more glad to be out of the programming business given that these are the morons with whom I'd be interacting. Everything is going to go to shit.
  • In reply to Miguel Arroz's original post (quoted above):

    mirabilos
    #40

    @arroz Except that LLMs are also deterministic (they just incorporate pseudorandom bits for some variety in the prediction).
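A toy sketch of the determinism point in #40, with a made-up next-token distribution standing in for a real model: the forward pass is a fixed function of its input, and run-to-run variation comes only from the sampling step, which goes away under greedy decoding or a pinned seed.

```python
import random

# Hypothetical stand-in for a model: a fixed next-token distribution per prompt.
def next_token_distribution(prompt):
    return {"foo": 0.5, "bar": 0.3, "baz": 0.2}

def generate(prompt, sample=False, seed=None):
    dist = next_token_distribution(prompt)
    if not sample:
        # Greedy decoding: always pick the most probable token, fully deterministic.
        return max(dist, key=dist.get)
    # Sampling: pseudorandom, but reproducible once the seed is pinned.
    rng = random.Random(seed)
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

print(generate("hi"))                       # always "foo"
print(generate("hi", sample=True, seed=7))  # identical on every run with seed=7
```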
  • In reply to Miguel Arroz's original post (quoted above):

    Kevin Granade
    #41

    @arroz I mean... people still audit the machine code sometimes! It's not the first resort, but it's on the list, and in any sufficiently complex system you need people who can chase the program logic all the way down to the CPU. It stopped being common precisely because compiler output became so consistently good that reflexively second-guessing the compiler came to be seen as a bad idea.
    That process has not happened with LLMs; they constantly spit out broken code.
  • gudenau:

    @arroz My boss yesterday just said that if you don't learn to use the LLM tools, you will be fired and replaced by people who do. It's terrifying. If I were allowed to say what I'm working on, you would be terrified too.

    poleguy looking for lost tools
    #42

    @arroz @gudenau just use up all the tokens every month and keep doing your job. 🙂
  • In reply to Miguel Arroz's original post (quoted above):

    Anna
    #43

    @arroz I desperately want a compiler for natural language, one that makes traditional languages obsolete. LLMs can't do that.
  • Orb 2069:

    @zzt @arroz

    Imagine if CS was like ANY other engineering discipline.

    Ivor Hewitt
    #44

    @Orb2069 @zzt @arroz My qualification ('93) was actually "software engineering"; it was an attempt to create a new type of course that treated the subject like other engineering disciplines. I thought it would take off, but I believe they gave up soon after and went for straight comp-sci.
  • In reply to Miguel Arroz's original post (quoted above):

    petros
    #45

    @arroz It is funny: even people who have worked for months on an LLM project are surprised that the LLM does not consistently give the same result.

    Which can be OK in some cases. In the one I see right now, replacing boring data entry, the LLM gets a result 90% right, and if a second one independently gets the same result, the result is considered confirmed - it is in fact very unlikely that two models get the same thing wrong.

    That leaves 20% for review, and the LLMs are faster than humans.
  • zzt:

    @arroz "LLMs are natural language compilers", brought to you by the same kids insisting their product is "the operating system for the web", because nothing means anything if you ignore all implementation and engineering details.

    Chris
    #46

    @zzt @arroz I have a "deduplication for your bank account" to sell to you
  • petros (continuing from #45 above):

    petros
    #47

    @arroz In this case, the LLMs are replacing a boring job, to a certain extent.

    I wouldn't trust a "90% right" machine with a job where people's lives can depend on it, though.

    Also, there are traditional OCR-based solutions that were used before and are still used concurrently. In this project the jury is still out; it's not certain which is more efficient. The obstacles and issues are bigger than expected. It's not all smooth sailing.
  • In reply to petros's post #47 (quoted above):

    Miguel Arroz
    #48

    @petros I would need more context to know what we're talking about here. Scanning and OCRing documents? Manually filled forms? Historical docs? If so, I don't see how "one word wrong out of 10" is in any way acceptable.

    To me, automation means something I can set and forget. If I have to verify the work of the "automation", it's not automating anything.

    Imagine how successful computing would have been if those 40-year-old computers I played with had gotten 10% of their math operations wrong. 1/2
  • Miguel Arroz (continuing from #48 above):

    Miguel Arroz
    #49

    @petros Of course this doesn't mean you can't have a tool that assists you with hard and repetitive work. If someone is scanning documents from the 6th century for historical preservation, a tool that helps with identifying characters worn away by time, with the several aspects of translation and interpretation, etc., might help. But that's not something that does the job by itself. The historian is the central piece of that puzzle, with the necessary knowledge and context for doing it.
  • In reply to Miguel Arroz's original post (quoted above):

    Chris
    #50

    @arroz LLMs are a compiler in the same way that my 3-year-old with a bunch of crayons is a camera.
  • In reply to Miguel Arroz's post #49 (quoted above):

    petros
    #51

    @arroz In this case there are invoices and purchase orders coming in as PDFs: unstructured data.

    Currently there is OCR software and manual data entry. Both make mistakes, so there is always "double keying". If the results are the same, the entry is considered right. Otherwise it goes to review.

    Now there are two LLMs doing the "keying" job. Both get it ca. 90% right.

    A difference from compilers: two compilers do not create the same machine code, so one cannot compare two results and decide that's the right one.
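A minimal sketch of the "double keying" acceptance rule described in #45 and #51; the record fields and function names are invented for illustration, this is not the actual system. Two independent extractors (OCR, a human, or an LLM) process the same document; agreement confirms the record, disagreement routes it to review. If each extractor is right about 90% of the time and their errors are roughly independent, both agree on a correct record for roughly 0.9 × 0.9 ≈ 81% of documents, which lines up with the "leaves 20% for review" figure in #45.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass(frozen=True)
class InvoiceRecord:
    # Illustrative fields only; a real system would extract far more.
    invoice_number: str
    vendor: str
    total_cents: int

def double_key(document: bytes,
               extract_a: Callable[[bytes], InvoiceRecord],
               extract_b: Callable[[bytes], InvoiceRecord]) -> Tuple[Optional[InvoiceRecord], bool]:
    """Return (record, needs_review): accept only when both extractors agree."""
    a = extract_a(document)
    b = extract_b(document)
    if a == b:
        return a, False   # confirmed: two independent passes produced the same record
    return None, True     # disagreement: route the document to manual review

# Usage with two hypothetical extractors, e.g. two different LLM pipelines:
#   record, needs_review = double_key(pdf_bytes, llm_extractor_1, llm_extractor_2)
```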
  • petros (continuing from #51 above):

    petros
    #52

    @arroz Also, if there is still an error in an invoice or purchase order, it is usually not catastrophic. You get 250 screws instead of 25... that happened even before we had computers. It's annoying, but... well, magic doesn't happen, sh** does 😉

    Given that we work on behalf of customers, we need to have an acceptably low error rate, of course.
  • In reply to Miguel Arroz's original post (quoted above):

    goatcheese
    #53

    @arroz Had a genAI-curious colleague voice this exact take last week.
    I pointed out the same things you did, but honestly they're so eager to believe that I don't think they internalized the difference...
    Another, koolaid-drinking colleague replied "well, sometimes compilers are not deterministic!!!", as if finding a compiler bug every 15 years were the same as an LLM crapping out on every prompt.