This is one of the worst takes from LLM enthusiasts.

Uncategorized
74 Posts 56 Posters 67 Views
  • Orb 2069

    @aspensmonster @zzt @arroz

    Vibe coded skyscrapers.

    random thoughts
    #57

    @Orb2069 @aspensmonster @zzt @arroz

    Soon coming to an earthquake zone near you!

  • Chris

      @arroz LLMs are a compiler in the same way that my 3-year old with a bunch of crayons is a camera.

      Rainer M Krug
      #58

      @thechris @arroz if you tell the LLM to be “ 3-year old with a bunch of crayons is a camera.”, then yes.

  • Fubaroque

        @arroz I certainly don’t enjoy reviewing AI slop. So as far as I’m concerned just fine… the sooner the better. Do enjoy the results…. #SEP 🤪

        Fubaroque
        #59

        @arroz But why generate code at all? Just execute the prompts directly. Suits me... 😘

  • Miguel Arroz

          RE: https://mastodon.social/@stroughtonsmith/116030136026775832

          This is one of the worst takes from LLM enthusiasts.

          Compilers are deterministic, extremely well tested, made out of incredibly detailed specifications debated for months and properly formalized.

          LLMs are random content generators with a whole lot of automatically trained heuristics. They can produce literally anything. Not a single person who built them can predict what the output will be for a given input.

          Comparing both is a display of ignorance and dishonesty.
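          For the determinism contrast above, a minimal toy sketch (the prompt and token scores are invented for illustration, not taken from any real model): a compiler is a pure function of its input, whereas typical LLM decoding samples from a probability distribution, so the same prompt need not produce the same output unless decoding is forced to be greedy.

```python
import math
import random

# Toy illustration of the "random content generator" point: decoding samples
# from a next-token distribution, so the same input need not give the same
# output. The scores below are hypothetical; a real model would produce them.

def sample_next_token(scores: dict, temperature: float, rng: random.Random) -> str:
    """Pick one token from toy logit scores; greedy when temperature == 0."""
    if temperature == 0.0:
        return max(scores, key=scores.get)       # deterministic choice
    weights = [math.exp(s / temperature) for s in scores.values()]
    return rng.choices(list(scores), weights=weights, k=1)[0]

toy_scores = {"0": 2.1, "None": 1.9, "[]": 1.7, "42": 1.5}   # made-up logits

greedy = [sample_next_token(toy_scores, 0.0, random.Random(seed)) for seed in range(5)]
sampled = [sample_next_token(toy_scores, 1.0, random.Random(seed)) for seed in range(5)]
print("greedy :", greedy)    # the same token every time
print("sampled:", sampled)   # a mix of tokens, varying with the seed
```

          Greedy decoding removes the sampling randomness, but it still does not give the kind of specified, predictable input-to-output mapping the quoted post attributes to compilers.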

          Jalil
          #60

          @arroz even if LLMs were comparable, people do review the output of compilers

          • Miguel Arroz

            @petros What you need is to get rid of the PDFs and deploy an online store. 😅

            What is the failure rate of traditional OCR compared to the LLMs? And how modern were those OCRs? Modern OCR from the last 5 years or so has a success rate way higher than 90%. And are the failures in the OCR itself or in interpreting the context (i.e. knowing how to read the invoice or order, not just identifying the right characters)?

            petros
            #61

            @arroz I don't have exact numbers for "traditional" OCR, but it would be around 90% as well. And, yes, you are right: the issue is not getting the letters right, it's turning them into structured information. With OCR that takes templating, which tells the engine where to find an address, what to do with multiple lines and pages, and so on. Every new format requires that work again.

            LLMs are "smarter" in that regard.

            Fun fact / rookie error: sending a T&C page to an LLM. It chews on it forever.
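            To make the templating point concrete, here is a minimal, hypothetical sketch (the field names, coordinates, and the call_llm parameter are all invented for illustration): classic OCR post-processing hard-codes where each field sits on the page, so every new invoice layout means another template, while the LLM route just asks for structured JSON, and its answer still has to be validated.

```python
import json

# Classic OCR post-processing: a per-layout template maps page regions to
# fields. Every new supplier format means writing another template.
TEMPLATES = {
    "supplier_a": {
        "invoice_number": {"page": 1, "box": (50, 40, 300, 60)},    # x0, y0, x1, y1
        "total":          {"page": 1, "box": (400, 700, 560, 730)},
    },
}

def extract_with_template(ocr_words, template):
    """ocr_words: list of (page, x, y, text) tuples from a traditional OCR engine."""
    result = {}
    for field, rule in template.items():
        x0, y0, x1, y1 = rule["box"]
        hits = [text for (page, x, y, text) in ocr_words
                if page == rule["page"] and x0 <= x <= x1 and y0 <= y <= y1]
        result[field] = " ".join(hits) or None
    return result

# LLM-based extraction: no per-layout template, just a schema in the prompt.
def extract_with_llm(raw_ocr_text, call_llm):
    """call_llm is a placeholder for whichever model API is in use."""
    prompt = ("Extract invoice_number and total from the invoice text below. "
              "Answer with JSON only.\n\n" + raw_ocr_text)
    return json.loads(call_llm(prompt))   # output still needs validation
```

            The trade-off the thread keeps circling back to: the template path fails loudly on an unknown layout, while the LLM path fails quietly, so the returned JSON still has to be checked before anyone books an invoice from it.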

            • Miguel Arroz

              RE: https://mastodon.social/@stroughtonsmith/116030136026775832

              Nils Ballmann
              #62

              @arroz @binford2k some people already understood this in 2016: https://www.commitstrip.com/en/2016/08/25/a-very-comprehensive-and-precise-spec/

              • petros

                petros
                #63

                @arroz And, yeah, why there are so many companies that send these PDFs, God knows. I worked in the automotive industry until 2015 and they still faxed orders. And it's not Australia only; e.g. just recently we "OCRed" a big Canadian company's invoices.

                • Miguel Arroz

                  RE: https://mastodon.social/@stroughtonsmith/116030136026775832

                  Steve Hill 🏴󠁧󠁢󠁷󠁬󠁳󠁿🇪🇺
                  #64

                  @arroz I've had a horrible idea... Why are we building LLMs that output C, Python, etc. when we could be building LLMs that produce bytecode? More efficient and completely unauditable!

                  • Miguel Arroz

                    RE: https://mastodon.social/@stroughtonsmith/116030136026775832

                    ⏚ Antoine Chambert-Loir
                    #65

                    @arroz he claims to “make apps and break things”...

                    • Nils Ballmann

                      @arroz @binford2k some people already understood this in 2016: https://www.commitstrip.com/en/2016/08/25/a-very-comprehensive-and-precise-spec/

                      ᛋᛁᚵᛁᛋᛘᚢᚾᛑ ᚾᛁᚾᛃᛅ
                      #66

                      @nils_ballmann @arroz @binford2k That's what one faces when doing formal verification of LLM output. However, LLMs might enable us to write larger formally verified systems in practice, and they could help with the spec writing and validation as well. We'll see.

                      LLMs are basically generators in neuro-symbolic hybrid systems, and many people like to use them for productivity, i.e. as a component or tool. No reason to get emotional about it. Like humans, LLMs are unreliable but still useful.
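                      A minimal sketch of that "generator inside a hybrid system" framing, with the model treated as an untrusted proposer and a deterministic checker deciding what gets accepted. The propose callback and the toy isqrt property are invented for illustration; a real pipeline would discharge a formal spec or proof obligation instead of running example inputs.

```python
from typing import Callable, Optional

def checked_generate(propose: Callable[[str, int], str],
                     verify: Callable[[str], bool],
                     task: str,
                     max_attempts: int = 5) -> Optional[str]:
    """Untrusted generator plus trusted checker: only verified candidates escape."""
    for attempt in range(max_attempts):
        candidate = propose(task, attempt)   # e.g. an LLM call; may return anything
        if verify(candidate):                # only the checker's verdict is trusted
            return candidate
    return None                              # better no answer than an unchecked one

# Toy checker: accept a candidate `isqrt` implementation only if it satisfies
# the defining property over a test range.
def verify_isqrt(source: str) -> bool:
    scope: dict = {}
    try:
        exec(source, scope)                  # run the candidate in an isolated namespace
        f = scope["isqrt"]
        return all(f(n) ** 2 <= n < (f(n) + 1) ** 2 for n in range(1000))
    except Exception:
        return False
```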

                      • Miguel Arroz

                        RE: https://mastodon.social/@stroughtonsmith/116030136026775832

                        Steve Loughran
                        #67

                        @arroz well, except gcc -Ofast, obviously.

                        Notable that dynamic code generation has fallen out of favour in database engines (select -> assembly -> machine code), with SIMD opcodes being the replacement, because it's a nightmare to debug when a failure happens inside generated code.
                        AVX-512 opcodes support breakpoints and debugging if you add them through intrinsics.
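                        For context on the -Ofast jab: gcc's -Ofast enables -ffast-math, which (among other things) lets the compiler reassociate floating-point arithmetic, and since floating-point addition is not associative the optimized binary can legitimately give different answers than a strict-IEEE build of the same source. The underlying non-associativity is easy to see in any language:

```python
# Floating-point addition is not associative, which is what reassociation
# under gcc -Ofast / -ffast-math exploits, and why results can change.
lhs = (0.1 + 0.2) + 0.3
rhs = 0.1 + (0.2 + 0.3)
print(lhs)          # 0.6000000000000001
print(rhs)          # 0.6
print(lhs == rhs)   # False
```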

                        • Rainer M Krug

                          @thechris @arroz if you tell the LLM to be “ 3-year old with a bunch of crayons is a camera.”, then yes.

                          Chris
                          #68

                          @RMKrug @arroz Yes, that way works.
                          But telling it to be a compiler won't.

                          • Miguel Arroz

                            RE: https://mastodon.social/@stroughtonsmith/116030136026775832

                            The ol' tealeg 🐡
                            #69

                            @arroz I’d actually hazard a guess that there are more assembly programmers alive today than at any time in history.

                            • poleguy looking for lost tools

                              @arroz @gudenau just use up all the tokens every month and keep doing your job. 🙂

                              gudenau
                              #70

                              @arroz @poleguy It's a local LLM so it's basically free to run. At least *that part* is correct.

                              • Miguel Arroz

                                RE: https://mastodon.social/@stroughtonsmith/116030136026775832

                                noplasticshower
                                #71

                                @arroz I think you may be overlooking another point here: there is absolutely NO reason LLMs should not build directly into machine code or, better yet, a chip. Why have a "human readable" interface (that is, a programming language or a universal hardware layer) at all?

                                If we stop creating UTMs and adopt machines farther down the Chomsky hierarchy (and identify the inherent security advantages of doing so), we can probably make interesting progress, especially in security engineering.

                                If we fab machines directly that don't require software to rebind them ...

                                Since the '40s we have been building machines that do too much (on purpose) and getting mad when they do parts of what we built them to do...

                                • Miguel Arroz

                                  RE: https://mastodon.social/@stroughtonsmith/116030136026775832

                                  Jesús A.
                                  #72

                                  @arroz @stroughtonsmith I can even see his point about LLMs being the new compilers (although I don't agree). But a compiler doesn't suffer from the societal, ethical and environmental issues these models do. It seems like looking away from the screen is not a skill programmers, and computer scientists in general, have worked on much. In that sense it's even funny that we may all lose our jobs precisely because of our collective lack of empathy and global perspective.

                                  • Miguel Arroz

                                    RE: https://mastodon.social/@stroughtonsmith/116030136026775832

                                    Ted
                                    #73

                                    @arroz
                                    My skip manager tried using this argument for why we should adopt LLMs. It was too absurd to reply to, though maybe I should have.

                                    There are cases where correctness isn't as critical and maybe it is ok to use something vibe coded (I recently met someone vibe coding algorithmic art, treating some bugs as happy accidents).

                                    But my day job is a case where the whole point of what we build is to avoid human mistakes.

                                    • Miguel Arroz

                                      RE: https://mastodon.social/@stroughtonsmith/116030136026775832

                                      ROMMIX
                                      #74

                                      @arroz LLMs are NOT random content generators. That is false. The LLM output is based on the user prompt. Seems to me you don't know how to prompt correctly.

                                      • AodeRelay shared this topic