
@deirdrebeth

Uncategorized · 16 Posts · 3 Posters · 38 Views
This topic has been deleted. Only users with topic management privileges can see it.
  • 𝐿𝒢𝓃𝒢 "not yet begun to fight" wrote:

    @mloxton
    @deirdrebeth

    Okay but "and then editing and fixing bits it missed or got wrong" implies that it does, in fact, regularly hallucinate and make errors or omissions.

    Matthew Loxton wrote:
    #7

    @Lana
    Lana, you are mixing up two different topics.

    In over two years of using it daily, and during extensive testing, I have not seen a single case of AI Assist in MAXQDA hallucinating.

    When used for coding, as I said, the IRR is high but not perfect (perfection would be unexpected). It is still useful because it occasionally spots something I didn't code and should have. That leads to analysis with higher validity - a better product.

    @deirdrebeth

  • 𝐿𝒢𝓃𝒢 "not yet begun to fight" wrote:

      @mloxton

      I find it very telling that here, even in what is apparently your best case scenario, the reasons you claim to use generative AI for qualitative analysis do not include "writing better code".

      All you can ever do is say "yeah I know it makes mistakes. I don't care because it's cheap and fast."

      @deirdrebeth

      Matthew Loxton wrote:
      #8

      @Lana

      I think you may be under the impression that we are talking about "code" as in Java, Python, or R, and therefore "writing better code". This is not what I am talking about. We are talking about qualitative research and coding text segments.

      The AI indeed helps increase coding construct validity, and therefore yields better research results.
      I used it in this way here, for example: https://www.medrxiv.org/content/10.1101/2025.03.04.25320693v1

      I am happy to go deeper into how this works if you are interested.
      @deirdrebeth

      • med-mastodon.com shared this topic

      • 𝐿𝒢𝓃𝒢 "not yet begun to fight" wrote:
        #9

        @deirdrebeth @mloxton

        >"I have never once seen it hallucinate"

        >"I have to go back and fix things it missed or got wrong"

        Both of these cannot be true simultaneously.


        • 𝐿𝒢𝓃𝒢 "not yet begun to fight" wrote:
          #10

          @deirdrebeth @mloxton

          You seem to be missing the point we are all making. And at this point, I'm certain it's on purpose, because nobody could be this dense. Whether you're talking about computer code, braille, or your Little Orphan Annie decoder ring doesn't matter.

          The point is, when you're using AI, you're using it for these things:
          - speed
          - cost

          And not for this thing:
          - the quality of the output

          And we know that, because, even in your best case scenario, you say things like "it's like a fast, unpaid intern who needs oversight because they sometimes miss things or make mistakes."


          • Matthew Loxton wrote:
            #11

            @Lana

            Oh Dear God.
            Hallucination is one kind of error, amongst many kinds of error.
            It has never yet hallucinated, but it sometimes makes other errors.

            When using AI Assist in MAXQDA, it sometimes misidentifies an implication, and I need to fix that; sometimes its coding is too broad, and I need to fix that too. Sometimes the fix is just adjusting the code range, and sometimes it is rewording and tightening up the code definitions.

            @deirdrebeth


            • Matthew Loxton wrote:
              #12

              @Lana

              Lana, I think you just want to be obnoxious because I have repeatedly stipulated that using AI does ALL THREE THINGS in research.

              Maybe you didn't understand what the term "validity" implied, so let's restate this cleanly:

              Using AI Assist:
              - Reduces cost
              - Saves time
              - Improves quality

              It also does an additional thing by expanding CAPACITY: it allows me to tackle research that was otherwise impractical or impossible.

              @deirdrebeth


              • 𝐿𝒢𝓃𝒢 "not yet begun to fight" wrote:
                #13

                @deirdrebeth @mloxton

                I'm really not interested in having an argument over definitions.

                The point is, you are using AI for these things:
                - speed
                - cost

                And not for this thing:
                - the quality of the output

                And we know that because you keep saying how, even in your best case scenario, the generative AI that you use needs constant oversight because it misses things, makes coding mistakes, or needs rewording. Whether you call that a hallucination error or some other kind of error is irrelevant to the broader point, and you know that. You cannot be this dense.


                • 𝐿𝒢𝓃𝒢 "not yet begun to fight" wrote:
                  #14

                  @deirdrebeth @mloxton

                  So it improves quality by making mistakes and needing constant oversight.

                  Sure, buddy.


                  • Matthew Loxton wrote:
                    #15

                    @Lana

                    That wasn't a mere matter of definition, Lana. You made a big fat category error.

                    You obviously thought that hallucination was the only kind of error that AI makes.

                    @deirdrebeth


                    • Matthew Loxton wrote:
                      #16

                      @Lana

                      Indeed.
                      Because I end up, as I said, with higher validity.

                      @deirdrebeth

Powered by NodeBB