"The problem, though, is that while A.I. might not be ideal, neither is today’s medical system.

General Medicine · 13 Posts · 6 Posters
Tags: healthcare, physicians, chatbots, medicine

• Bich Nguyen :verified:
#1

    "The problem, though, is that while A.I. might not be ideal, neither is today’s medical system.

    And it is becoming increasingly clear that the role of a doctor is going to undergo a transformation."

    https://www.nytimes.com/2026/02/09/health/ai-chatbots-doctors-medicine.html

    #healthcare #AI #physicians #chatbots #medicine

• Andrew Benedict-Nelson
#2

    @bicmay I would love to hear your thoughts on this

• MaksiSanctum he/him
#3

@bicmay Having had multiple professionals test AI on its knowledge and seen it fail, I would never trust AI to make a true diagnosis.

• Bich Nguyen :verified:
#4

@albnelson

I'm very cautious. First, I have concerns about bias baked into the training of AI programs. Second, I worry about how its use impacts energy consumption and the environment.

For the medical aspect, someone using it still has to know what to ask and perform a physical exam with a patient. AI can't do that yet. I agree with the article that I know my patients better than an AI program does.

How do you feel as a patient?

• Christine Johnson
#5

          @bicmay @albnelson As a patient, I feel that health and healthcare are public goods, and should not be the target of private extraction.

• Andrew Benedict-Nelson
#6

            @bicmay I have various flavors of neutral-to-negative reaction.

            As a patient my reaction is pretty neutral at the moment, because my family and I have low medical needs and this stuff is still pretty peripheral. If one of my doctors were using tools like this, I might say, "Cool, so now what do YOU think?"

            As someone who does some advocacy in this space, my main concern is exacerbation of existing problems and biases across the board. Incentives in our system are still aligned to squeeze efficiencies out of every little transaction, not improve care. An individual doctor or nurse practitioner might use this to build amazing tools -- I've seen some who have! But the system will use it to deny care. The triage systems discussed in the article are a good idea in theory but a nightmare in the wrong hands.

            My greatest concerns are as a citizen. If these were purely open source neutral technologies available at no cost, I might say, "What the heck, give it a whirl." But if my doctor expressed a lot of enthusiasm about AI, I might ask, "How much do you know about the companies who make these tools..." and let the conversation evolve from there. 😕

• Tiota Sram
#7

@christinkallama @bicmay @albnelson Also, FYI, as someone with a modest understanding of the CS side of things, a few points:

1. This article is not trustworthy. It conflates very different types of AI systems to maximize the threat it describes, and that kind of dishonest reporting makes me distrust all of its conclusions. Expert diagnosis systems that use machine learning are a completely different kind of beast from a chatbot, even though a chatbot also uses machine learning, in a different way. Touting the accuracy of specialized diagnosis systems as evidence for the effectiveness of chatbots is like showing off the excavating capability of a Volkswagen excavator as evidence that a BMW racing car will be faster than the competition.

2. Evaluating machine learning systems is very tricky, and if you have dollars to burn, it's not hard to get good-looking results in front of the press that don't actually hold up in the real world. For the curious, here's a nice, thorough paper on issues with medical imaging specifically: https://www.nature.com/articles/s41746-022-00592-y

As a patient, unless you show me in-the-field accuracy numbers from multiple years of deployment that actually rival human performance for the very particular task the AI in question is designed to do, I'd always rather have a human doctor's opinion than that of an AI system, and even when I'm willing to accept the AI diagnosis, I'd like a human doctor's second opinion and interpretation.
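
To make that deployment gap concrete, here's a tiny synthetic sketch of my own (not from the article or the linked paper; all data and numbers are made up): a model that latches onto a site-specific artifact, say a scanner watermark that happens to correlate with diagnosis at the development hospital, can look great on that hospital's held-out test split and fall apart at a second site.

```python
# Toy illustration of internal-test vs. deployed accuracy (synthetic data;
# a hypothetical example, not a real evaluation of any medical AI system).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_site(n, artifact_strength):
    """Synthetic 'hospital': one weak genuine signal, plus a site-specific
    artifact (e.g., a scanner watermark) correlated with the label."""
    y = rng.integers(0, 2, n)
    signal = y + rng.normal(0, 1.5, n)                        # weak real signal
    artifact = artifact_strength * y + rng.normal(0, 0.5, n)  # shortcut feature
    return np.column_stack([signal, artifact]), y

X_dev, y_dev = make_site(2000, artifact_strength=2.0)  # development site
X_new, y_new = make_site(2000, artifact_strength=0.0)  # deployment site

X_tr, X_te, y_tr, y_te = train_test_split(X_dev, y_dev, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

print("internal test accuracy:", model.score(X_te, y_te))  # looks impressive
print("new-site accuracy:", model.score(X_new, y_new))     # much worse
```

The internal split can't reveal the problem, because the artifact is present on both sides of it; only evaluation at a genuinely different site (or in the field) does.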

3. For medical-records and patient-interaction applications, including most of the ones listed in this article that chatbots are "good" at, I think doctors/hospitals using them are opening themselves up to a lot of liability and making a mistake. These systems make egregious errors in predictable patterns, which competent human staff do not make. Incompetent staff sometimes make the same errors, but the difference is that they are responsible for their own errors. If your "bedside manner" chatbot that takes over when the doctor is busy with another patient encourages a patient to kill themselves or take the wrong medicine, is that acceptable, since 99% of the time it speaks in a very reassuring manner? I guarantee you ChatGPT will make these mistakes orders of magnitude more often than even the most sleep-deprived RN or resident. We have seen these events happen already in other contexts; the medical/hospital context has many more opportunities for these failures. There is no real technological mitigation for them on the horizon either. Even for "transcribe my patient notes," I wouldn't trust dictation software unless needed as an accommodation, even though it has admittedly gotten pretty good. There's lots of opportunity for a missed jargon word to cause havoc in notes that get shared with someone else, for example.

4. The big AI companies have developed one tool that shows up very well in demos, but which has nasty flaws that make it unsuitable for a lot of what they're pushing. They are trying to sell their stuff as "the future" and say it must be "integrated" everywhere. If only you integrated our marvelous technology, your problems would be solved! This is backwards. A true solution looks first at the problem, and then asks "what tool would be best to use here?" By putting the tool choice first, you end up with ineffective or even counterproductive "solutions." This only makes sense if your goal is to sell the tools.

For example, I understand doctors have very limited time but must write up notes between patient visits to refer to in future meetings. Sometimes notes end up inaccurate or illegible, or just muddled. How can we best solve this problem? Simple: hire more doctors to give them all more time to write better notes. Any alternate solution needs to be understood as a compromise. Might there be a way to use technology to help? Sure, there are probably plenty. Let's consider using a system to record the visit, then produce notes automatically by statistically predicting what notes the doctor would write. We'll have the doctor check them off every time. Is this good? No, because in this design, doctors will get lazy in their checking over time, especially if the system is very accurate for most visits. But such systems are going to make big mistakes for unusual visits, which doctors might not then correct. Even worse, by denying the doctor the cognitive task of organizing their thoughts into writing, you're disrupting the doctor's memory formation and chances to see unusual patterns or slight irregularities. Lowering the doctor's cognitive burden takes away the benefits you get from expending cognitive resources on the problem!

As an alternate design, what if you had the doctor write the notes unaided, and then had a system try to flag possible discrepancies, misspellings, or illegible writing? Such a system might still be bad, if it creates too much friction (remember Clippy?). But it *might* be good if tuned correctly. It's not as flashy or "revolutionary" as the "we'll take your work away" system, but it avoids some of that system's worst drawbacks. There are probably even better designs I'm not thinking of (a toy sketch of the flagging idea is below).

My point is that starting with "let's integrate a chatbot" is the wrong approach, and anyone who insists on it is not someone you should trust, because self-evidently they are starting from their own interests (sell/promote chatbots) while completely disregarding yours. They're basically saying "Can you help me think of a way to sell my product to you?" which is downright disrespectful.
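
Here's a minimal, hypothetical sketch of that "flag, don't write" design. The lexicon and dose threshold are toy stand-ins I made up; a real system would need an actual clinical vocabulary and per-drug rules.

```python
# Toy "flag, don't write" note checker: the doctor writes the note unaided,
# and the system only surfaces possible problems for human review.
import re

# Toy stand-ins; a real system would use a clinical vocabulary and
# per-drug dosage rules, not a hard-coded list and a single threshold.
LEXICON = {
    "patient", "presents", "with", "acute", "otitis", "media",
    "prescribed", "amoxicillin", "mg", "twice", "daily", "for", "days",
}
MAX_PLAUSIBLE_MG = 2000

def flag_note(note: str) -> list[str]:
    flags = []
    # Flag words not in the known vocabulary (possible misspellings/jargon).
    for word in re.findall(r"[a-z]+", note.lower()):
        if word not in LEXICON:
            flags.append(f"unrecognized term: {word!r}")
    # Flag dosages outside a plausible range.
    for dose in re.findall(r"(\d+)\s*mg", note.lower()):
        if int(dose) > MAX_PLAUSIBLE_MG:
            flags.append(f"implausible dose: {dose} mg")
    return flags

note = ("Patient presents with acute otittis media. "
        "Prescribed amoxicillin 5000 mg twice daily for 7 days.")
for f in flag_note(note):
    print("FLAG:", f)
# FLAG: unrecognized term: 'otittis'
# FLAG: implausible dose: 5000 mg
```

The doctor still does all the writing (and the thinking); the checker only raises questions, and tuning how noisy it's allowed to be is exactly the Clippy-style friction trade-off above.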

Okay, that's probably enough ranting from me. TL;DR: trust AI chatbots about as much as the least trustworthy intern you can imagine working with, because they will eventually make exactly the same kinds of disastrous mistakes, and you'll be the one to blame, since (when we're not spinning yarns about machine intelligence) it's "just a tool that you chose to use." They're not even capable of learning from those mistakes, because letting chatbots learn from their public interactions would be dangerous in other ways (see Microsoft Tay).

• Andrew Benedict-Nelson
#8

                @tiotasram @christinkallama @bicmay I'm down with all of this. The conflation of the different types of AI also bothered me in the article.

                I suspect that at some point this stuff is going to get regulated as a medical device and it will never withstand the scrutiny. There might be a path to make it all work, but we are not on that path at all right now.

• Tiota Sram
#9

@albnelson @christinkallama @bicmay Sadly, under the current (and not just this year's) FDA, these things are getting approval, only to have higher complaint rates than non-AI devices... A quote:

"""
Researchers from Johns Hopkins, Georgetown and Yale universities recently found that 60 FDA-authorized medical devices using AI were linked to 182 product recalls, according to a research letter published in the JAMA Health Forum in August. Their review showed that 43% of the recalls occurred less than a year after the devices were greenlighted. That’s about twice the recall rate of all devices authorized under similar FDA rules, the review noted.
"""

From: https://www.reuters.com/investigations/ai-enters-operating-room-reports-arise-botched-surgeries-misidentified-body-2026-02-09/

But yeah, in the long term, if RFK doesn't completely dismantle the FDA, we should see some balancing out.

• Bernard Sheppard
#10

@tiotasram
Four excellent, well-put points.

There is little I could add that would add value to what you wrote, but I will add this: one of the reasons that competent people (i.e., those with domain knowledge) are having to push back against the force-feeding of inappropriate tools is the conflation of the general ML/MV that has been used forever (with increasing success in medical fields) with LLMs, and the FOMO-driven adoption of (largely LLM-based) AI in every field.

@christinkallama @bicmay @albnelson

• Andrew Benedict-Nelson
#11

@tiotasram @christinkallama @bicmay Good point 😞 Really, those stats should have been in the article. I know it was about physicians and not devices, but the devices show what may be ahead, and it’s not pretty.

• Tiota Sram
#12

                        @albnelson @christinkallama @bicmay yeah, where's the scrupulous both-sides reporting when you actually need it?

• Andrew Benedict-Nelson
#13

@tiotasram @christinkallama @bicmay I think they imagined that they both-sided it by focusing on factors like the lack of physical interaction with LLMs. I’m sure this is how many tech-averse MDs process the issue. But yeah, it’s disappointing how out of touch it is with the actual debate over AI and its effectiveness.
