David Sommerseth (@dazo@infosec.exchange) · Posts: 13

Posts


  • A wonderful game for all model railway fans 😀

    @koje71 It's a pretty good Transport Tycoon Deluxe replica, fully open source as far as I remember.

    I've wasted... enjoyed many hours in that game.

    Uncategorized linux openttd

  • Today is the last day that #Letsencrypt will issue certificates with the "Client Authentication" EKU (Extended Key Usage).

    @marjolica

    From the link @jwildeboer posted, there is this detail:

    However they have announced that they will be issuing certificates for only β€œserver authentication” by default from 11th February 2026

    From what I understand, using Let's Encrypt certificates on an incoming SMTP server shouldn't change anything; there, a certificate issued for server usage is the better match anyway.

    If you use Let's Encrypt for client usage it might be different. Whether that will actually have an impact on Postfix as an outgoing SMTP server, I'm not sure. Generally speaking, most SMTP servers have been fairly forgiving about the TLS communication.

    The bigger challenge will be if you use Let's Encrypt on the client side, using the certificate for authentication against a strict TLS server on the remote end which checks the EKU field and requires it to include "client authentication". That use case will break with the coming Let's Encrypt change.
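    As a minimal sketch of how to check which EKUs a given certificate actually carries (assuming Python with the cryptography package installed; the certificate path is a placeholder):

```python
# Minimal sketch: list the Extended Key Usage (EKU) values of a certificate.
# "cert.pem" is a placeholder path; assumes the "cryptography" package is installed.
from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

try:
    eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
except x509.ExtensionNotFound:
    eku = []  # no EKU extension present at all

print("serverAuth:", ExtendedKeyUsageOID.SERVER_AUTH in eku)
print("clientAuth:", ExtendedKeyUsageOID.CLIENT_AUTH in eku)
```

    If "clientAuth" comes back False after the change, a certificate like this will be rejected by servers that insist on the client-authentication EKU.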

    Uncategorized letsencrypt

  • 🔒 Unable to decrypt message

    @catsalad ‼️ Decryption key not found

    Uncategorized

  • I'm taking a break from using Mastodon.

    @vkc

    First of all, I'm getting quite p**sed off that so many people can't behave decently when sitting behind a keyboard. I get an urge to pull such people out of their bedrooms and drag them through the city, in a proper mediaeval punishment style. 👿 👿 👿 .... Unfortunately, I also realise that it probably won't have the wanted effect, nor be appropriate on so many other levels.

    That said ...

    Which feature do YOU feel Mastodon is missing to properly contain, imprison and block such douchebags from making your and our lives here so poor?

    Maybe the amount of effort required to handle the classification of users and instances will still be way too much. But maybe that could be a real use case for AI?

    Uncategorized

  • IMHO (In My Humble Opinion): It shouldn't be "Getting off US-Tech", it should be "Getting off proprietary tech".

    @giacomo

    And all of this starts with the data itself. It is the data you want to access which has the real value. Data you should own from the beginning.

    If the data is in an open standard format, there is a possibility to break free.

    If you cannot control the data, there is no baseline for digital sovereignty. If you cannot get access to software able to make use of the data in a meaningful way for you, there is no baseline for digital sovereignty. If the software cannot be written, because the data format is unknown or too closely tied to the service provider generating the data, there is no baseline for achieving digital sovereignty.

    With open standards, open source software can be built on top of them. Thus, you can decode and extract meaningful information from the data.

    There is also no requirement anywhere that there must be multiple implementations of an open source project from multiple countries. The key point is that the source code must be open and available to all. That takes away the chance of someone taking full control of the software and restricting the freedom otherwise possible. Without the source code available, the path to extracting meaningful information becomes incredibly hard.

    Open source software is one piece of the digital sovereignty puzzle; data in an open standard is another piece of the same puzzle.

    Having access to the data files containing your information is yet another piece of the same puzzle. You cannot achieve digital sovereignty without all three of these pieces; otherwise someone will still have control of your information.

    Likewise, if you use a service with a proprietary API, you are bound to that service for as long as that API is unique to it. If more service providers offer the same standardised API, you can more easily switch between services (see the sketch below). Again, open standards are a key component of digital sovereignty - otherwise you will not be able to process your data as you want.

    @jwildeboer
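    A minimal sketch of what a standardised API buys you in practice, assuming an S3-compatible object store; the endpoint URLs are placeholders and credentials are assumed to come from the environment:

```python
# Sketch: the same S3 API spoken by two different providers.
# Only the endpoint changes; the calling code stays identical.
import boto3  # assumes boto3 is installed; credentials come from the environment

def make_client(endpoint_url):
    # Switching provider means switching this one URL, nothing else.
    return boto3.client("s3", endpoint_url=endpoint_url)

# Placeholder endpoints for two hypothetical providers speaking the same API.
for endpoint in ("https://s3.provider-a.example", "https://s3.provider-b.example"):
    s3 = make_client(endpoint)
    # The exact same call works against either provider.
    print(endpoint, s3.list_buckets().get("Buckets", []))
```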

    Uncategorized digitalsovereig

  • IMHO (In My Humble Opinion): It shouldn't be "Getting off US-Tech", it should be "Getting off proprietary tech".

    @giacomo

    If you're concerned about the US controlling open source - you can fork it. But a fork won't be successful if it doesn't have users and contributors.

    Remember OpenOffice.org? What do you think people talk more about - that one or the fork LibreOffice?

    Android has forks as well. The main problem with Android isn't forking the OS itself. It's the Google Play layer, which is not open source and is fully controlled by Google - and which way too many apps depend on, making it much harder to break free from Google's Android implementation.

    You are equally not forced to use or implement protocols you don't deem needed in your own code. Use the alternatives: HTTP over TCP is well established and can do most of what QUIC can do. And the HTTP standard can also be extended and improved.

    Protocols not based on open standards are a pain to support outside of their origin software stack. Reverse engineering is the only viable path if no other open alternatives are available.

    So open source and open standards can help you break free of evil empires; the capability of digital sovereignty is built into open source and open standards.

    @jwildeboer

    Uncategorized digitalsovereig

  • IMHO (In My Humble Opinion): It shouldn't be "Getting off US-Tech", it should be "Getting off proprietary tech".

    @jwildeboer

    If you can't manage your own data, you're locked in.
    If you can't have access to the software processing your data, you're locked in.
    If you can't access your data because it is stored in a service which is down or has blocked your access, you're locked in.

    It's as easy as that. Don't put your data in a place where you can't access it when you need it.

    Open standards avoid this: your data format is documented and there are multiple implementations of parsers.

    It starts with open standards, as then there are fewer reasons to protect the software inside a proprietary black box.

    Open source and free/libre software is the natural extension of open standards.

    Uncategorized digitalsovereig

  • What the, and I cannot overstate this, fuck?

    @Landsil @tony @ben @bloor

    The more you pay, the more you need to convince others it's better ... Otherwise you admit to being fooled and tricked into giving away lots of money for no valid reason ... Basically, to not look like an idiot ... except ........ 😉

    Uncategorized

  • What the, and I cannot overstate this, fuck?

    @ben @tony @bloor

    The difference in the ProTools checks might have been the cable, but it can just as well have been some other related thing. Signals can be affected by magnetic interference (audio is AC voltage), so other power cables lying closer to or further away from the speaker or signal cables may be enough to make a slight difference. Or the cable being too long and curled up versus lying straight. In some cases with power cables, other equipment plugged into the same power outlet may cause a slight difference, due to the power drawn, which affects the magnetic fields.

    Regarding the magnetism aspects ... you know those amp measurement devices you just clamp over the cable, without actually connecting to the wires? That's just a coil "wrapped around" the cable, measuring the magnetic field and from that calculating the amps passing through.
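    As a rough back-of-the-envelope sketch of that relationship, Ampère's law for a long straight conductor gives B = μ0·I / (2πr); the field value and radius below are made-up example numbers, not measurements:

```python
# Back-of-the-envelope sketch: infer current from the magnetic field around a wire.
# Uses Ampere's law for a long straight conductor: B = mu0 * I / (2 * pi * r).
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability [T*m/A]

def current_from_field(b_tesla, radius_m):
    """Current through a straight conductor, given the field measured at radius_m."""
    return b_tesla * 2 * math.pi * radius_m / MU0

# Made-up example: 0.2 mT measured 1 cm from the conductor.
print(current_from_field(0.2e-3, 0.01))  # -> 10.0 amps
```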

    But yeah, for digital signal paths .... it's all in the DAC at that point. Of course, disrupted bits in the transfer can cause noise, but not subtle differences in clarity or detail. And the digital signal paths certainly have enough error correction to avoid bit-flipped data hitting the DAC in the end. Gee, that's just hilarious.

    Uncategorized

  • What the, and I cannot overstate this, fuck?

    @tony @ben @bloor

    Audio Grade Ethernet Switch

    😆 😆 😆 😆 😆 😆 😆

    Uncategorized

  • What the, and I cannot overstate this, fuck?

    @ingram @tony @bloor

    If it had been me, I would have accepted as well - but sold the cables afterwards 🤪

    I see Kimble twists the pairs. The physics behind the twisting does have an effect; that can be calculated and there is scientific evidence for it. There are no doubts here.

    But it will not be noticeable at the shorter lengths used in home hifi setups, nor at the currents and voltages in home hifi systems. You probably need to go to at least 20-30 m, probably even higher, closer to 100 m and above, to get a noticeable effect. Which is why the Electronics Australia findings are accurate and valid.
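    A rough back-of-the-envelope sketch of why length matters, assuming plain untwisted zip cord with about 0.6 µH/m loop inductance driving an 8 Ω speaker; the figures are ballpark assumptions, not measurements, and cable resistance is ignored:

```python
# Back-of-the-envelope sketch: level loss at 20 kHz from cable inductance alone.
# Assumptions (ballpark): 0.6 uH/m loop inductance, 8 ohm resistive speaker load.
import math

L_PER_M = 0.6e-6   # loop inductance of plain zip cord [H/m] (assumed)
R_LOAD = 8.0       # nominal speaker impedance [ohm]
FREQ = 20_000      # top of the audio band [Hz]

def loss_db(length_m):
    """Voltage-divider loss caused by the cable's inductive reactance."""
    x_l = 2 * math.pi * FREQ * L_PER_M * length_m   # reactance [ohm]
    ratio = R_LOAD / math.hypot(R_LOAD, x_l)        # |Z_load / Z_total|
    return 20 * math.log10(ratio)

for metres in (5, 30, 100):
    print(f"{metres:>3} m: {loss_db(metres):.2f} dB at 20 kHz")
# A 5 m run comes out around -0.01 dB; only very long runs start to matter.
```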

    And if your home hifi loudspeaker and amp are 20-100m or more apart ... then you have a setup which would require some rethinking for a lot of other reasons.

    Of course, details like this sting a lot if you've shelled out a lot of money for 5 m of speaker cables. Then you "need" to claim you hear the difference, to feel less like an, well, idiot.

    Uncategorized

  • What the, and I cannot overstate this, fuck?

    @drwho @tony @bloor

    Oh, there are plenty of possibilities to "improve" this design further - or to come up with something else even more absurd, shifting the revenue stream in your direction. There is always room for "improving" cables, and it will always sell.

    But you need to be quite cynical, playing on the psychological aspects of making people believe they hear a difference - and having some quasi-research papers supporting what they "hear". The rest is plain marketing and marketing strategy.

    And don't forget: in this user segment, the sound always gets better the more expensive the cables or equipment are.

    Good luck! 😉

    Uncategorized

  • What the, and I cannot overstate this, fuck?

    @tony @bloor Oh, I believe you're slightly wrong here.

    The designer behind these cables knows a thing or two about psychology and how business economics works .... 😉

    Uncategorized