Sven Slootweg, low-spoons mode ("still kinky and horny anyway")

@joepie91@fedi.slightly.tech
Posts: 3 · Topics: 0 · Shares: 0 · Groups: 0 · Followers: 0 · Following: 0

Posts


  • Firefox uses on-device downloaded-on-demand ML models for privacy-preserving translation.

    @firefoxwebdevs "Without the user's request" is quite ambiguous, though. I'm reminded here of Google, which put the AI tab before the Web/All tab, displacing it so that people would unintentionally hit the AI button and "request" it. It's a small and plausibly-deniable change that nevertheless violates the user's boundaries, and it is difficult to call out and stop, even internally within a company or team. I've seen many companies and software products do the same thing.

    A genuine opt-in would, in my opinion, look something like a single "hey do you want such-and-such features? these are the implications" question, presented in a non-misleading way, and if that is not answered affirmatively then the various UI elements for "AI" features should not even appear in the UI unless the user goes and changes this setting. It's much harder for that to get modified in questionable ways down the line, and reduces the 'opportunities for misclick' to a single one instead of "every time someone wants to click a button". It also means users aren't constantly pestered with whatever that week's new "AI" thing is if they've shown no interest.

    Such a dialog could still specify something like "if you choose Yes, Firefox will still only download models once you try to use a feature", to make it clear to users that it's not an all-or-nothing, and they can still pick-and-choose after selecting 'Yes'.
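    The layered opt-in described above can be sketched as simple gating logic. This is a hypothetical model, not Firefox's actual preferences; every name here (`globalOptIn`, `shouldShowFeatureUI`, the feature names) is my own illustration of the idea that UI visibility and model downloads are both gated behind the single up-front question:

    ```typescript
    // Hypothetical settings model; names are illustrative, not real Firefox prefs.
    type AiFeature = "translation" | "chatSidebar" | "tabGrouping";

    interface AiSettings {
      // Answer to the single up-front "do you want AI features?" question.
      globalOptIn: boolean;
      // Per-feature choices the user can still make after opting in.
      disabledFeatures: Set<AiFeature>;
    }

    // UI elements for an AI feature only appear if the user opted in globally
    // AND has not disabled that specific feature afterwards. Without the
    // global opt-in, there is no button to misclick in the first place.
    function shouldShowFeatureUI(feature: AiFeature, settings: AiSettings): boolean {
      return settings.globalOptIn && !settings.disabledFeatures.has(feature);
    }

    // Even after a global opt-in, a model is only fetched once the user
    // actually invokes the feature ("not all-or-nothing").
    function shouldDownloadModel(
      feature: AiFeature,
      settings: AiSettings,
      userInvoked: boolean
    ): boolean {
      return shouldShowFeatureUI(feature, settings) && userInvoked;
    }
    ```

    The point of the single gate is that a later, questionable UI change can't silently widen the opt-in: everything funnels through one boolean the user set deliberately.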


  • Firefox uses on-device downloaded-on-demand ML models for privacy-preserving translation.

    @firefoxwebdevs That's exactly the motivation behind my suggestion, though - I've attached a mockup in an additional reply to hopefully make it clearer, but the idea here is to not redefine it so much as it is to explicitly pick a definition, and then provide an additional option for the broader definition, so that a user can essentially pick whichever definition they are following without getting into the technical weeds too much.


  • Firefox uses on-device downloaded-on-demand ML models for privacy-preserving translation.

    @firefoxwebdevs My closest answer would be "no", but I think the question is kind of mis-phrased here, and that's probably going to lead to a confusing and potentially misleading outcome.

    The problem that people have is not with "AI" as a generalized category, but with the current generation of thieving, climate-destroying, grifting systems that are marketed as AI to an overwhelming degree - notably LLMs and "generative AI", but really anything with those inconsiderate properties.

    If your kill switch is presented as an "AI kill switch", then depending on the person they're either going to understand that as "exploitative tech", or as "machine learning", and so make different assumptions as to whether local translation is included in that.

    So I think you'll have to be a lot more explicit about what you mean; either by describing clearly what the kill-switch includes, or what it excludes, right in the place where the option is offered. Otherwise it's damned if you do, damned if you don't; depending on whether you include translations, either one or another group is going to be upset with the unexpected behaviour.

    Ethically speaking, then: if the translation feature is built on ethically collected data and has no outsized climate impact, I would not consider it something that needs to be included in a "get rid of all of it" kill switch. But to convey this clearly to users, both the fact that it isn't included and the reason why should be explained right there with the button, with potentially a second-step option to disable it anyway if someone still feels uncomfortable with it.

    That way you've transparently communicated to users and shown that you have nothing up your sleeve by immediately and proactively offering them an option to disable that, too, if they have already shown interest in removing "AI" features.
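    The inclusion criteria and the second-step "disable it anyway" option could be modelled like this. All names are hypothetical and this is purely a sketch of the logic in the post, not anything Firefox actually ships:

    ```typescript
    // Illustrative only: names and structure are assumptions, not real settings.
    interface AiFeatureInfo {
      name: string;
      ethicallySourcedData: boolean;
      outsizedClimateImpact: boolean;
      userDisabledAnyway: boolean; // the second-step "disable it anyway" option
    }

    // The blanket kill switch only covers features with the properties people
    // actually object to; everything else stays active unless the user has
    // individually disabled it via the second-step option.
    function isFeatureActive(killSwitchOn: boolean, f: AiFeatureInfo): boolean {
      const exploitative = !f.ethicallySourcedData || f.outsizedClimateImpact;
      if (killSwitchOn && exploitative) return false;
      return !f.userDisabledAnyway;
    }
    ```

    Under this model an ethically-built local translation feature survives the kill switch by default, but the user who still feels uncomfortable has an explicit, proactively offered way to turn it off too.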

Powered by NodeBB