Home / Uncategorized

>systemd v256 automatically runs sshd listening on a vsock interface in the global network namespace
>The official way to disable this behavior requires appending "systemd.ssh_auto=no" to the kernel boot line.
11 Posts 4 Posters 0 Views
the vessel of morganna wrote (#1):

    >systemd v256 automatically runs sshd listening on a vsock interface in the global network namespace
    >The official way to disable this behavior requires appending "systemd.ssh_auto=no" to the kernel boot line.

    ??? on the *kernel* command line? who thought this was acceptable??????
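(For anyone landing here wanting the persistent version of that: a sketch of the GRUB route, assuming a GRUB-based distro; the parameter name is the one from the quoted post.)

```shell
# /etc/default/grub: append systemd.ssh_auto=no to the kernel command line
# so systemd-ssh-generator stops spawning sshd on vsock/AF_UNIX sockets
GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.ssh_auto=no"

# then regenerate the grub config and reboot:
#   sudo update-grub                                # Debian/Ubuntu
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg     # Fedora/RHEL
```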

Anthropy wrote (#2):

    @astraleureka that is the recommended way, which also seems weird to me, but just to be complete, you can supposedly also mask the socket(s):

    sudo systemctl mask --now sshd-vsock.socket
    sudo systemctl mask --now sshd-unix-local.socket

    and you can also remove the ssh server.

but it's definitely awkward that they implemented it like this, even if SSH is a fairly safe protocol
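(To check the masking actually took, a quick sketch; the unit names are the ones from the post above and may vary by distro, and the vsock listing needs a reasonably recent iproute2.)

```shell
# mask the generated sockets so they can't be (re)started
sudo systemctl mask --now sshd-vsock.socket
sudo systemctl mask --now sshd-unix-local.socket

# verify: a masked unit reports "masked", and no vsock listener should remain
systemctl is-enabled sshd-vsock.socket
ss -f vsock -l
```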

Cassandrich wrote (#3):

      @anthropy @astraleureka What configuration/credentials does it use? That's what determines if it's safe. This sounds like a backdoor channel for hosting provider to get into the system when network & ssh configuration otherwise wouldn't allow that without explicitly malicious poking at memory/fs contents.

Anthropy wrote (#4):

@dalias @astraleureka this is a valid point, because it seems it actively uses credentials from sources other than the filesystem, such as the SMBIOS strings, though that's not specific to this and more a general systemd concept as outlined here: https://systemd.io/CREDENTIALS/

        I'm... undecided on what to feel about this though, because if you don't trust the hypervisor you're running under that is a problem of its own. but it does make me feel somewhat uneasy that systemd accepts creds from everywhere.
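(For concreteness, the SMBIOS path looks roughly like this from the host side. This is a sketch based on the linked CREDENTIALS doc; `ssh.authorized_keys.root` is the credential name systemd's docs give for this, but verify against your systemd version.)

```shell
# host side: a hypervisor can hand the guest an authorized root key purely
# via an SMBIOS type 11 string -- no guest filesystem access needed
qemu-system-x86_64 \
  -smbios "type=11,value=io.systemd.credential:ssh.authorized_keys.root=ssh-ed25519 AAAA..." \
  ...

# guest side: inspect which credentials the booted system picked up
systemd-creds --system list
```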

Cassandrich wrote (#5):

          @anthropy @astraleureka Ok, this is very much a malicious hosting provider oriented misfeature. Normally they'd have to explicitly modify something in a domain you nominally own and where it'd be a CFAA violation to do so in order to bypass your access controls. But this kind of backdoor gives them a gray zone channel to make alterations or inspect contents of your hosted system.

Same concept as how lots of providers give you their own distro images that pull configuration or keys from the control panel, and you have to upload your own ISO to get a safe unadulterated system. systemd has made it so now even stock ISOs are unsafe against the hosting provider's meddling.

Cassandrich wrote (#6):

            @anthropy @astraleureka This sounds very much like it was an employer-requested "feature" for Azure.

Anthropy wrote (#7):

              @dalias @astraleureka I mean, I do agree it feels dirty, but, if you don't trust the hypervisor you're running under that has a whole host (pun intended) of other implications

              like they could just:
              - extract keys from your RAM (volatility tool, https://github.com/ZarKyo/awesome-volatility/blob/main/README.md )
              - reboot your VM and inject malicious boot params into your grub/whatever
              - technically even alter instructions on the fly
              - etc

              while it does make me feel dirtier to run systemd, hypervisors are always kind of a problem tbh.

Anthropy wrote (#8):

@dalias @astraleureka personally my main gripe is that systemd scraping all kinds of sources for keys to use for auth makes it much harder to harden your system, even outside of hypervisor situations. if something altered your system's SMBIOS strings somehow, or managed to open a socket with systemd over an unauthenticated channel, or anything similar, it could just inject root ssh keys.

                I guess for me the main takeaway is that root should be disabled, and systemd neutered for hardening
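(A sketch of that hardening stance; the boot parameter and unit names are from systemd's documentation, so check them against the systemd version you actually run.)

```shell
# refuse imported credentials entirely (SMBIOS strings, qemu fw_cfg, EFI)
# by booting with:
#   systemd.import_credentials=no
# keep the generated ssh sockets masked:
sudo systemctl mask --now sshd-vsock.socket sshd-unix-local.socket
# and the usual: set "PermitRootLogin no" in /etc/ssh/sshd_config
```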

the vessel of morganna wrote (#9):

                  @anthropy @dalias dunno about the cloud folks, but a lot of the lower end providers will straight up boot a transient VM to mount your disk image within to rewrite static network config or reset credentials when actions are triggered in control panels. i, for one, do not enjoy rebooting my VM and finding that their control panel has rewritten network/interfaces or network-scripts/ifcfg-eth0 without any sort of validation

Anthropy wrote (#10):

@astraleureka @dalias that too, there are so many ways they could mess with your system, hypervisors kinda suck. In that sense having dedicated servers as an option is much better (although even there they could inject SMBIOS strings and whatnot, so eh, technically all foreign hardware/software is a liability I guess and selfhosting is the only truly safe option :v)

wtfismyip wrote (#11):

@astraleureka This behaviour has bitten me a couple of times on recent installs, very frustrating.
