A follow-on to my "Nazi Sucker-punch Problem" post, to address the most common argument I get, which boils down to:

🅰🅻🅸🅲🅴 (🌈🦄) wrote:
    #1

    A follow-on to my "Nazi Sucker-punch Problem" post, to address the most common argument I get, which boils down to:

    """
    Moderated registration won't stop Nazis, because they'll just pretend to be human to fool moderators, but it will stop normal people, who won't spend the effort to answer the application question or want to wait for approval.
    """

    Okay, I'm going to try to use points that I hope are pretty acceptable to anyone arguing in good faith, and I'm going to expand the definition of Nazis to "attackers" and lump in bigots, trolls, scammers, spammers, etc. who use similar tactics.

    Attackers: we can group attackers into two main types: dedicated and opportunistic. Dedicated attackers have a target picked and a personal motive—they hunt. Opportunistic attackers have an inclination and will attack if a target presents itself—they're scavengers. In my years of experience as an admin on multiple Fedi servers, most attackers are opportunistic.

    Victims: when someone is attacked, they (and people like them) will be less likely to return to the place they were attacked.

    In general: without a motive to expend more effort, humans will typically make decisions that offer the best perceived effort-to-reward ratio in the short term (the same is true of risk-to-reward).

    Why does any of this matter?

    Because it all comes down to a fairly simple equation for the attackers: effort > reward. If this is true, then the opportunistic attackers will go elsewhere. If it isn't true, then their victims will go elsewhere.

    How can we tip that scale out of the attackers' favor?

    By making sure moderation efforts scale faster against attackers' behaviors than against normal users' behaviors.

    - A normal user only has to register once, while an attacker has to re-register every time they get suspended.

    - A normal user proves their normality with each action they take, while every action an attacker takes risks exposing them to moderation.

    - A new user (or attacker) likely spends a minute or two signing up, while a moderator can review most applications in a matter of seconds. Yes, attackers can automate signups to reduce that effort (and some do, and we have tools to address some of that), but again, most attackers aren't dedicated.

    - Reviewing an application is lower effort than trying to fix the damage from an attack. As someone who gets targeted regularly by attackers from open-registration servers, I'd personally rather skim and reject a page-long AI-generated application than spend another therapy session exploring the trauma of being sent execution videos.
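
    The scaling argument in the bullets above can be put into a toy back-of-the-envelope model. Every number below (signup cost, per-action cost, suspension count) is a made-up assumption purely to illustrate how the same level of activity costs an attacker more than a normal user — these are not measurements:

    ```python
    # Toy model of cumulative effort for a normal user vs. an opportunistic
    # attacker under moderated registration. All numbers are illustrative
    # assumptions, not measurements.

    def cumulative_cost(actions: int, signup_cost: float, action_cost: float,
                        suspensions: int = 0) -> float:
        """Total effort: one signup per suspension plus a per-action cost.

        For an attacker, action_cost also folds in the expected cost of each
        action risking exposure to moderation.
        """
        return (1 + suspensions) * signup_cost + actions * action_cost

    # A normal user registers once and acts freely at low per-action cost.
    normal = cumulative_cost(actions=100, signup_cost=2.0, action_cost=0.1)

    # An attacker is suspended repeatedly (re-registering each time) and
    # pays a higher effective per-action cost due to moderation risk.
    attacker = cumulative_cost(actions=100, signup_cost=2.0, action_cost=0.5,
                               suspensions=10)

    assert attacker > normal  # same activity, much higher cost for the attacker
    ```

    Whatever concrete numbers you plug in, the point is the shape of the curve: the attacker's cost grows with every suspension and every risky action, while the normal user pays the signup cost once.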

    I believe this points to moderated registration being the lowest-effort remedy for the problem of the Nazi Sucker-punch. So before we "engineer a new solution" that doesn't yet exist, we should exhaust the tools that are already available on the platform today. Yes, we could implement rate limits, or shadow bans, or trust networks, or quarantine servers, but we don't have those today, and even if we did, there's no evidence that those would be a better solution for Fedi than moderated signups.

    Will it stop *all* the attackers? No. But it will stop most opportunistic attackers.

    Will it deter *some* potential new users? Yes. But communities are defined by who stays, not by how many come through the door.

    https://lgbtqia.space/@alice/115499829288185416


Kirtai 🏳️‍⚧️ wrote:
      #2

      @alice
      Totally agree.

      Also, I know it wasn't meant that way, but this had me in stitches:

      they'll just pretend to be human


Dźwiedziu wrote:
        #3

        @alice
        If I may sum it up: prevention vs. cure.


The Orange Theme wrote:
          #4

          @alice I used to get my haircut at a place that was just far enough away, and with enough traffic jams on the way each time, that I stopped going. It's not "far", by any means, but it was just on the cusp of being annoying. Once it became juuuust too much, I went somewhere closer.

          I think people underestimate how low the bar can be to prevent bad actors. Even the guy scripting his nonsense will hit an application form and immediately leave to find an open instance, most of the time.


Marianne wrote:
            #5

            @alice I can recommend this piece on #PunchNazis by the lovely Tauriq — had it bookmarked for years. https://www.theguardian.com/science/brain-flapping/2017/jan/31/the-punch-a-nazi-meme-what-are-the-ethics-of-punching-nazis

Powered by NodeBB Contributors