
This is one of the worst takes from LLM enthusiasts.

74 Posts 56 Posters 67 Views
  • Miguel ArrozA Miguel Arroz

    RE: https://mastodon.social/@stroughtonsmith/116030136026775832

    This is one of the worst takes from LLM enthusiasts.

    Compilers are deterministic, extremely well tested, made out of incredibly detailed specifications debated for months and properly formalized.

    LLMs are random content generators with a whole lot of automatically trained heuristics. They can produce literally anything. Not a single person who built them can predict what the output will be for a given input.

    Comparing both is a display of ignorance and dishonesty.

    mirabilosM This user is from outside of this forum
    mirabilos
    wrote last edited by
    #40

    @arroz except that LLMs are also deterministic (they just incorporate pseudorandom bits for some variety in the prediction)
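
    A side note on that point: what varies from run to run is the sampling step, not the model's forward pass. Below is a minimal sketch in plain Python, using a toy three-token distribution as a stand-in for a model's output probabilities (not any real LLM API), showing how greedy decoding is fully deterministic and how fixing the sampler's seed makes even the "random" choice reproducible:

        import random

        # Toy next-token distribution standing in for an LLM's output probabilities.
        # The forward pass itself is a fixed function of inputs and weights; the only
        # "randomness" is the sampler drawing from these probabilities.
        TOKENS = ["foo", "bar", "baz"]
        PROBS = [0.6, 0.3, 0.1]

        def greedy(probs=PROBS):
            # Temperature 0 / argmax: the same token every single time.
            return TOKENS[max(range(len(probs)), key=lambda i: probs[i])]

        def sample(seed, probs=PROBS):
            # Sampling uses a pseudorandom generator; fixing the seed makes the
            # "random" pick reproducible as well.
            rng = random.Random(seed)
            return rng.choices(TOKENS, weights=probs, k=1)[0]

        assert greedy() == greedy()        # always identical
        assert sample(42) == sample(42)    # identical for the same seed
        print(greedy(), sample(42), sample(7))  # variety comes only from the seed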

    1 Reply Last reply
    0
    • Miguel ArrozA Miguel Arroz


      Kevin GranadeK This user is from outside of this forum
      Kevin Granade
      wrote last edited by
      #41

@arroz I mean... people still audit the machine code sometimes! It's not the first resort, but it's on the list, and in any sufficiently complex system you need people who can chase the program logic all the way down to the CPU. It stopped being common precisely because compiler output became so consistently good that it's now generally recognized as a bad idea to reflexively second-guess the compiler.
That process has not happened with LLMs; they constantly spit out broken code.

      1 Reply Last reply
      0
      • gudenauG gudenau

@arroz yesterday my boss said that if you don't learn to use the LLM tools, you will be fired and replaced by people who do. It's terrifying. If I were allowed to say what I'm working on, you would be terrified too.

        poleguy looking for lost toolsP This user is from outside of this forum
        poleguy looking for lost tools
        wrote last edited by
        #42

        @arroz @gudenau just use up all the tokens every month and keep doing your job. 🙂

        gudenauG 1 Reply Last reply
        0
        • Miguel ArrozA Miguel Arroz


          AnnaA This user is from outside of this forum
          Anna
          wrote last edited by
          #43

          @arroz I desperately want a compiler for natural language and to make traditional languages obsolete. LLMs can't do that

          1 Reply Last reply
          0
          • Orb 2069O Orb 2069

            @zzt @arroz

            Imagine if CS was like ANY other engineering discipline.

            Ivor HewittI This user is from outside of this forum
            Ivor Hewitt
            wrote last edited by
            #44

@Orb2069 @zzt @arroz my qualification ('93) was actually "software engineering" and it was an attempt to create a new type of course, treating the subject like other engineering disciplines. I thought it would take off, but I believe they gave up soon after and went for straight comp-sci.

            1 Reply Last reply
            0
            • Miguel ArrozA Miguel Arroz


              petrosP This user is from outside of this forum
              petros
              wrote last edited by
              #45

@arroz It is funny, even people who work for months on an LLM project are surprised that the LLM does not consistently give the same result.

Which can be OK, in some cases. In the one I see right now, replacing boring data entry, the LLM gets the result 90% right, and if a second one independently gets the same result, it is considered confirmed - it is in fact very unlikely that two models get the same thing wrong.

That leaves about 20% for review, and the LLMs are faster than humans.
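
    A quick back-of-the-envelope check of those figures, assuming the two models' errors are independent and that identical wrong answers are rare enough to ignore (both assumptions on my part, not numbers from the project):

        p_right = 0.90                   # each model gets a field right ~90% of the time
        p_both_right = p_right ** 2      # 0.81: both agree and are correct -> auto-confirmed
        p_review = 1 - p_both_right      # ~0.19: roughly the "20% for review" above
        # The remaining failure mode, both models making the exact same mistake,
        # is assumed to be negligible, as the post argues.
        print(f"auto-confirmed: {p_both_right:.0%}, sent to review: {p_review:.0%}")
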

              petrosP 1 Reply Last reply
              0
• zztZ zzt

                @arroz “LLMs are natural language compilers”, brought to you by the same kids insisting their product is “the operating system for the web” because nothing means anything if you ignore all implementation and engineering details

                ChrisT This user is from outside of this forum
                Chris
                wrote last edited by
                #46

                @zzt @arroz I have a "deduplication for your bank account" to sell to you

                1 Reply Last reply
                0
                • petrosP petros


                  petrosP This user is from outside of this forum
                  petros
                  wrote last edited by
                  #47

@arroz In this case, the LLMs are replacing a boring job, to a certain extent.

I wouldn't trust a "90% right" machine with a job where people's lives can depend on it, though.

Also, there are traditional OCR-based solutions, used before and concurrently. In this project the jury is still out. Not certain which is more efficient. The obstacles and issues are bigger than expected. Not all smooth sailing.

                  Miguel ArrozA 1 Reply Last reply
                  0
                  • petrosP petros


                    Miguel ArrozA This user is from outside of this forum
                    Miguel Arroz
                    wrote last edited by
                    #48

@petros I would need more context to know what we’re talking about here. Scanning and OCRing documents? Manually filled forms? Historical docs? If so, I don’t see how “one word wrong out of 10” is in any way acceptable.

To me automation means something I can set and forget. If I have to verify the work of the “automation”, it’s not automating anything.

Imagine how successful computing would have been if those 40-year-old computers I played with had gotten 10% of their math operations wrong. 1/2

                    Miguel ArrozA 1 Reply Last reply
                    0
                    • Miguel ArrozA Miguel Arroz


                      Miguel ArrozA This user is from outside of this forum
                      Miguel Arroz
                      wrote last edited by
                      #49

@petros Of course this doesn’t mean you can’t have a tool that assists you with hard and repetitive work. If someone is scanning documents from the 6th century for historical preservation, having a tool that helps identify characters worn away by time, handle the various aspects of translation and interpretation, etc., might help. But that’s not something that does the job by itself. The historian is the central piece of that puzzle, with the necessary knowledge and context to do it.

                      petrosP 1 Reply Last reply
                      0
                      • Miguel ArrozA Miguel Arroz


                        ChrisT This user is from outside of this forum
                        Chris
                        wrote last edited by
                        #50

                        @arroz LLMs are a compiler in the same way that my 3-year old with a bunch of crayons is a camera.

                        Rainer M KrugR 1 Reply Last reply
                        0
                        • Miguel ArrozA Miguel Arroz


                          petrosP This user is from outside of this forum
                          petros
                          wrote last edited by
                          #51

@arroz In this case there are invoices and purchase orders coming in as PDFs, unstructured data.

Currently there is OCR software and manual data entry. Both make mistakes, so there is always "double keying". If the result is the same, it is considered right. Otherwise it goes to review.

Now there are 2 LLMs doing the "keying" job. Both get it ca. 90% right.

A difference to compilers: two compilers do not create the same machine code, so one cannot compare two outputs and decide the result is right.
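
    A minimal sketch of that double-keying check: two independent extractions of the same document (OCR plus manual entry, or two different LLMs) are compared field by field; matching fields are accepted, mismatches go to human review. The field names and example values are made up for illustration, and only the comparison logic is shown, not the extraction itself.

        def double_key(extraction_a: dict, extraction_b: dict):
            """Compare two independent extractions; return (accepted, disputed) fields."""
            accepted, disputed = {}, {}
            for field in sorted(extraction_a.keys() | extraction_b.keys()):
                a, b = extraction_a.get(field), extraction_b.get(field)
                if a == b and a is not None:
                    accepted[field] = a          # both extractors agree: take the value
                else:
                    disputed[field] = (a, b)     # mismatch: route to human review
            return accepted, disputed

        # Hypothetical invoice fields extracted by two different models:
        a = {"invoice_no": "2024-118", "quantity": "25", "total": "74.50"}
        b = {"invoice_no": "2024-118", "quantity": "250", "total": "74.50"}
        accepted, disputed = double_key(a, b)
        print(accepted)   # {'invoice_no': '2024-118', 'total': '74.50'}
        print(disputed)   # {'quantity': ('25', '250')} -> flagged instead of shipping 250 screws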

                          petrosP 1 Reply Last reply
                          0
                          • petrosP petros


                            petrosP This user is from outside of this forum
                            petros
                            wrote last edited by
                            #52

@arroz Also, if there still is an error in one invoice or purchase order, it is usually not catastrophic. You get 250 screws instead of 25... that happened even before we had computers. It's annoying, but... well, magic doesn't happen, sh** does 😉

Given that we work on behalf of customers, we need to have an acceptably low error rate, of course.

                            Miguel ArrozA 1 Reply Last reply
                            0
                            • Miguel ArrozA Miguel Arroz


                              goatcheeseG This user is from outside of this forum
                              goatcheese
                              wrote last edited by
                              #53

@arroz Had a genAI-curious colleague voice this exact take last week.
I pointed out the same things you did, but honestly they're so eager to believe that I don't think they internalized the difference...
Another, kool-aid-drinking colleague replied "well sometimes compilers are not deterministic!!!", as if finding a compiler bug every 15 years were the same as an LLM crapping out on every prompt.

                              1 Reply Last reply
                              0
                              • petrosP petros


                                Miguel ArrozA This user is from outside of this forum
                                Miguel Arroz
                                wrote last edited by
                                #54

@petros What you need is to get rid of the PDFs and deploy an online store. 😅

What is the failure rate of the traditional OCRs compared to the LLMs? And how modern were those OCRs? Modern OCR from the last 5 years or so has a success rate way higher than 90%. And are the failures in the OCR itself or in interpreting the context (i.e. knowing how to read the invoice or order, not just identifying the right characters)?

                                petrosP 1 Reply Last reply
                                0
                                • mtc_ukM mtc_uk

                                  @arroz @stroughtonsmith
                                  Jesus fucking Christ, these people are incompetent idiots. I’m even more glad to be out of the programming business given that these are the morons with whom I’d be interacting. Everything is going to go to shit.

                                  Rainer M KrugR This user is from outside of this forum
                                  Rainer M Krug
                                  wrote last edited by
                                  #55

@mtconleyuk @arroz @stroughtonsmith can we please go back to talking with each other instead of shouting? Please make your point without insulting somebody who made theirs!

                                  1 Reply Last reply
                                  0
                                  • Miguel ArrozA Miguel Arroz


                                    FubaroqueF This user is from outside of this forum
                                    Fubaroque
                                    wrote last edited by
                                    #56

                                    @arroz I certainly don’t enjoy reviewing AI slop. So as far as I’m concerned just fine… the sooner the better. Do enjoy the results…. #SEP 🤪

                                    FubaroqueF 1 Reply Last reply
                                    0
                                    • Orb 2069O Orb 2069

                                      @aspensmonster @zzt @arroz

                                      Vibe coded skyscrapers.

                                      random thoughtsH This user is from outside of this forum
                                      random thoughts
                                      wrote last edited by
                                      #57

                                      @Orb2069 @aspensmonster @zzt @arroz

Soon coming to an earthquake zone near you!

                                      1 Reply Last reply
                                      0
                                      • ChrisT Chris


                                        Rainer M KrugR This user is from outside of this forum
                                        Rainer M Krug
                                        wrote last edited by
                                        #58

@thechris @arroz if you tell the LLM to be "a 3-year old with a bunch of crayons is a camera", then yes.

                                        ChrisT 1 Reply Last reply
                                        0
                                        • FubaroqueF Fubaroque


                                          FubaroqueF This user is from outside of this forum
                                          Fubaroque
                                          wrote last edited by
                                          #59

@arroz But why generate code at all? Just execute the prompts directly. Suits me... 😘

                                          1 Reply Last reply
                                          0