@glyph Somewhere there's a political compass chart with "purity culture," "identity politics," and "political correctness" labelled like regions in the phase diagram for water.
-
@theorangetheme @glyph "bipartisan"
-
@xgranade and presumably over in the "supercritical fluid" section you've got "SJW"
-
FWIW I really disagree with Cory Doctorow on the "purity culture" thing but I'm not going to get mad about it. The man's job is having takes. He has like fifty takes a day. He is Takes Georg. The vast majority of the takes are fine, and many are actually pretty great. If I had to have that many takes that fast I would have _way_ more terrible takes. I hope he changes his mind on this one but even if not it's fine I don't have to agree with everyone on everything
@glyph Yeah...
His takes hover probably somewhere above average still - def above average by 'tech opinion haver' standards - and his bad takes are generally bad in a '... sigh, whatever, it's not *that* harmful' sort of way, rather than a 'needs to be stopped before someone builds the torment nexus over these ideas' sort of way. So, it's been a take I just kinda roll my eyes at and move on with my life over, thus far.
-
@glyph Very much just kinda hoping it doesn't become a point in the Culture Wars
(A kinda fucked up concept in its own right, really.)
-
@xgranade @theorangetheme @glyph Strongest post of the week here tbh
-
@glyph
For a guy with that many takes, his batting average is bonkers good. You can do a lot worse than reading his posts.
-
@glyph I think "purity culture" is the best label for this type of moralism in front of LLMs, because the moralising comes from people who can afford not to use the tech, vs. Reverse Centaurs who have the tech forced onto them and have to grapple with its effects.
Someone getting on their high horse about not using LLMs and "pushing back" on the narrative or "keeping people in check" has no effect on the material realities faced by most gig workers, who act as de facto slaves, picker-uppers and putter-downers for algorithms.
It's like saying "yeah bro, I'm also against AI, like totally bro, totally morally against its use and deployment" to an Amazon worker who got told by an LLM they could increase their output by getting a colostomy bag.
Doctorow is right: the actual moral way forward is making AI economically unattractive; moralising AI use is just purity testing.
-
@budududuroiu the specific worker using the specific LLM in this discussion is Cory Doctorow himself, voluntarily electing to use some (he doesn't say which one) ollama-compatible open-weights model for grammar checking his posts. Are you suggesting that he, personally, is a reverse centaur in this context?
-
@glyph using a model for spell checking is pretty Centaur behaviour; getting told by a routing algorithm the optimal way to run reds to shave 10s off delivery time is Reverse Centaur
-
@budududuroiu in this context we are discussing the former. I have, personally, many times, including several times today, on this very mastodon account, given the explicit qualification that I don't yell at people who are forced into "AI" use by their jobs
-
@glyph I'm not accusing you of that, I'm saying purity testing on AI use (between people that can afford the choice of using AI or not) has no material effect on people that are forced to be Reverse Centaurs and is mostly a position of privilege to have.
It's mental onanism disguised as social justice
-
@budududuroiu this is a totally different discussion and it feels like kind of annoying goalpost-moving given that it wasn't what Cory was talking about when the "purity culture" argument was first raised. I don't know who is doing, as you describe it, "this type of moralism", as he barely even gestures at whoever is doing the *different* type of moralism he describes
-
@glyph my goal isn't to annoy you, but to me this was related to
> That's how we make good tech: not by insisting that all its inputs be free from sin, but by purging that wickedness by liberating the technology from its monstrous forebears and making free and open versions of it
I point at moralising because the core reason AI is being pushed everywhere right now is that it promises growth in an environment of expensive capital (high interest rates). Most of this deployment is in knowledge work, because the West has a) almost completely deindustrialised and b) has a high proportion of highly financialised but ultimately bullshit jobs.
To me, taking the fight to AI means making it economically unattractive, either by enshrining in law that human authorship is needed for copyright, or by making models so efficient that large datacentre expenditure becomes foolish.
-
@budududuroiu por que no los dos
-
@budududuroiu make it economically unviable *and* explain that it's unethical. you don't have to stop one to do the other, and in fact they provide motivation to each other and can mutually reinforce
-
@budududuroiu more importantly *you* don't have to do both. let the moralizers moralize while the economics people economize. Personally I remain skeptical that it _can_ be made economically unviable (it's already economically unviable, that's why it's a huge money-losing bubble) but the current structure of our markets is such that they can keep losing money for years, maybe decades, before the bill comes due, and there will be several freshly minted billionaires by the time that's done
-
@budududuroiu but that doesn't mean that I would stop the neoliberal technocrats attempting to policy-wonk it out of existence. it takes all kinds.
-
@glyph why is it unethical? I don't buy the argument that it's built on extractive principles, because a) having the tech to use permissively is contributing it back to the Commons it allegedly stole from, b) I'm a dirty commie and I don't find it ethical to extract value from gatekeeping (especially knowledge; how is this different from JSTOR v. Swartz?), and c) it is possible to take something built for dubious reasons (ARPANET) and contribute it to the Commons (the Internet)
-
@glyph that's a good point. I'm mainly against it because it's clearly a wedge issue in an otherwise quite Rainbow Coalition of progressives, e.g. I've noticed accounts take out pitchforks in response to the Ghostty dude saying he uses AI.
I mainly want to reach the critical mass to wield power as a collective, not endlessly criticise it.