@deirdrebeth

Uncategorized
16 Posts 3 Posters 38 Views
This topic has been deleted. Only users with topic management privileges can see it.
Matthew Loxton
#1

    @deirdrebeth

    "Would you find it perfectly acceptable for someone to use AI to be a "professional trainer in qualitative research methods"?"

    YES!!!!!!!
    Fucking YES!
    AI has been a breakthrough technology in both qualitative research AND in instruction.
    I can now create synthetic text for classes, and also do initial topic analysis, which saves a ton of time for more high-value work.
    I can also tackle far bigger and more complex projects than before

    /2


Matthew Loxton
#2

      "Would you find that AI's choices to be equivalent to yours"

So far, there is fairly high inter-rater reliability between how the AI codes texts and how I or other researchers do. The variance is also very useful: mostly I dismiss the things the AI coded that I didn't, but almost every time the AI codes something I missed, and I then code that.

      Among fellow researchers and trainers, this has been a consistent experience. The AI helps do better research

      /3

      @deirdrebeth
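The inter-rater reliability claim above can be made concrete. A common IRR statistic for two coders over the same text segments is Cohen's kappa; the sketch below is a minimal illustration, and the code labels ("barrier", "workflow", "trust") are hypothetical, not from the study discussed in this thread:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' labels over the same segments."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of segments labeled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by a human and an AI to ten segments.
human = ["barrier", "barrier", "workflow", "trust", "trust",
         "workflow", "barrier", "trust", "workflow", "barrier"]
ai    = ["barrier", "barrier", "workflow", "trust", "workflow",
         "workflow", "barrier", "trust", "workflow", "barrier"]
print(round(cohens_kappa(human, ai), 3))  # prints 0.848
```

A kappa around 0.8 or higher is conventionally read as strong agreement, which is the kind of "fairly high IRR" being described; the disagreements are then exactly the segments worth a second human look.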


Matthew Loxton
#3

        " or would you be fuming at someone pretending to be a trainer and releasing a training manual that's full of easy to spot errors?"

I would fume at anyone who produces a shitty textbook or bad research.
But this is not what is happening. We aren't standing back and just letting the AI rip, but rather using it to do some of the scutwork and to crosscheck our work, and then editing and fixing the bits it missed or got wrong.

It's like having an eager undergrad intern that needs oversight.
@deirdrebeth


𝐿𝒢𝓃𝒢 "not yet begun to fight"
#4

          @mloxton

          I find it very telling that here, even in what is apparently your best case scenario, the reasons you claim to use generative AI for qualitative analysis do not include "writing better code".

          All you can ever do is say "yeah I know it makes mistakes. I don't care because it's cheap and fast."

          @deirdrebeth


DB Schwein
#5

            @Lana

            From his response a few hours ago:

            "In over a year of intensively testing and using AI Assist in MAXQDA, I have had zero occurrences of hallucination, and every coding or summarization or topic discovery it makes comes with a reference to document and line number. If it doesn't find the topic in the text, it says so"

            and also:
            "I never use any AI tools for the content itself."

            @mloxton


𝐿𝒢𝓃𝒢 "not yet begun to fight"
#6

              @mloxton
              @deirdrebeth

              Okay but "and then editing and fixing bits it missed or got wrong" implies that it does, in fact, regularly hallucinate and make errors or omissions.


Matthew Loxton
#7

                @Lana
                Lana, you are mixing up two different topics.

In over two years of daily use and extensive testing, I have not seen a single case of AI Assist in MAXQDA hallucinating.

When used for coding, as I said, the IRR is high but not perfect (perfect agreement would be unexpected). It is still useful because it occasionally spots something I didn't code and should have. That leads to analysis with higher validity: a better product.

                @deirdrebeth


Matthew Loxton
#8

                  @Lana

                  I think you may be under the impression that we are talking about "code" as in Java, Python, or R, and therefore "writing better code". This is not what I am talking about. We are talking about qualitative research and coding text segments.

The AI indeed helps increase coding construct validity, and therefore yields better research results.
I used it in this way here, for example: https://www.medrxiv.org/content/10.1101/2025.03.04.25320693v1

I am happy to go deeper into how this works if you are interested.
@deirdrebeth


𝐿𝒢𝓃𝒢 "not yet begun to fight"
#9

                    @deirdrebeth @mloxton

                    >"I have never once seen it hallucinate"

                    >"I have to go back and fix things it missed or got wrong"

                    Both of these cannot be true simultaneously.


𝐿𝒢𝓃𝒢 "not yet begun to fight"
#10

                      @deirdrebeth @mloxton

You seem to be missing the point we are all making. And at this point, I'm certain it's on purpose, because nobody could be this dense. Whether you're talking about computer code, braille, or your Little Orphan Annie decoder ring doesn't matter.

                      The point is, when you're using AI, you're using it for these things:
                      - speed
                      - cost

                      And not for this thing:
                      - the quality of the output

                      And we know that, because, even in your best case scenario, you say things like "it's like a fast, unpaid intern who needs oversight because they sometimes miss things or make mistakes."


Matthew Loxton
#11

                        @Lana

                        Oh Dear God
Hallucination is one kind of error among many.
It has never yet hallucinated, but it sometimes makes other errors.

When using AI Assist in MAXQDA, sometimes it misidentifies an implication, and I need to fix that; sometimes its coding is too broad, and I need to fix that too. Sometimes the fix is just adjusting the code range, sometimes it is rewording and tightening up code definitions.

                        @deirdrebeth


Matthew Loxton
#12

                          @Lana

                          Lana, I think you just want to be obnoxious because I have repeatedly stipulated that using AI does ALL THREE THINGS in research.

                          Maybe you didn't understand what the term "validity" implied, so let's restate this cleanly:

                          Using AI Assist:
                          - Reduces cost
                          - Saves time
                          - Improves quality

                          It also does an additional thing by expanding CAPACITY, and that is it allows me to tackle research that was otherwise impractical or impossible.

                          @deirdrebeth


𝐿𝒢𝓃𝒢 "not yet begun to fight"
#13

                            @deirdrebeth @mloxton

                            I'm really not interested in having an argument over definitions.

                            The point is, you are using AI for these things:
                            - speed
                            - cost

And not for this thing:
                            - the quality of the output

                            And we know that because you keep saying how, even in your best case scenario, the generative AI that you use needs constant oversight because it misses things, makes coding mistakes, or needs rewording. Whether you call that a hallucination error or some other kind of error is irrelevant to the broader point, and you know that. You cannot be this dense.


𝐿𝒢𝓃𝒢 "not yet begun to fight"
#14

                              @deirdrebeth @mloxton

                              So it improves quality by making mistakes and needing constant oversight.

                              Sure buddy.


Matthew Loxton
#15

                                @Lana

                                That wasn't a mere matter of definition, Lana. You made a big fat category error.

You obviously thought that hallucination was the only kind of error that AI makes.

                                @deirdrebeth


Matthew Loxton
#16

                                  @Lana

                                  Indeed
                                  Because I end up, as I said, with higher validity

                                  @deirdrebeth

Powered by NodeBB Contributors