@n0toose Fair, I understand that. It's ultimately your choice. If you work best w/o those tools - then that's great too.
pojntfx@mastodon.social
-
Are you using #Codeberg to host your favorite AI-assisted and otherwise vibecoded project because your desire for dopamine has utterly destroyed your willingness to learn new things?
-
@n0toose Yeah! I mean I just did that yesterday, for that CRIU one. Try it.
All you need is a CPU or anything that can do Vulkan. I use fully OSS drivers; it even runs on those.
-
@n0toose I don't know about your experience, but any legal options I've found that try to solve this problem are worse than useless. I find nothing of relevance on Marginalia and other things like it.
-
@n0toose I mean yes, optimally a law like the one you mention would fix this, but I have 0 trust in any jurisdiction actually making a law like that. I'm pretty certain we'll instead end up in a world where only massive companies that can pay for IP licensing agreements can train models.
-
@n0toose There are production constraints around all of this atm, yes. But much like how you can't fix the housing crisis without making it cheaper to build houses and actually building them, I don't think we can fix something like this without actually building out the fabs and getting supply in line with demand.
And local LLMs are very much "real" now. Try out Newelle or Alpaca on GNOME on your regular laptop - even mine can run them w/o issues now via Vulkan, and I don't have a lot of VRAM.
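A rough sketch of what that looks like in practice, assuming a local Ollama server is listening on its default port (Alpaca can manage one for you); the model name below is only an example, not a recommendation:

```python
# Minimal sketch: query a locally running model over Ollama's REST API.
# Assumes an Ollama server on the default port; "llama3.2" is only an
# example model name - pull whatever fits your hardware.
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "llama3.2") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_llm("Summarize what CRIU does in one sentence."))
```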
-
@n0toose Yeah, the SEO slop is obviously horrendous. I'm ngl though, being able to use an LLM to search through say IndieWeb instead has been the first time in a long time that I've actually been able to find (non-code) answers to questions again. Used it for booking flights with niche airlines in a country I've never been to, for example. That kind of information was locked behind proprietary APIs for so long, and now you can actually get at it without them for the first time in forever.
-
@n0toose There were lots of proposals around criminalizing "unauthorized access" to services in the past few decades, about trying to make it so that only a "human" can access them, enforcing ToS legally ... I've really only seen them used against end users in practice (Reddit's anti-scraping policy/API shutdown, third-party clients for Signal, any reverse engineering project ever, etc.).
A lot of these kinds of laws will have effects far, far worse than DDoSing public infrastructure IMHO.
-
@n0toose I honestly think the same approach that the EU has been taking with open search indexes should exist for LLM training data too eventually. There are clearly issues with how you get access to, say, my OSS projects if you want to train on them without hammering my forge, I know that. At the same time though, without access to open training data you're ceding the entire field - which pretty much everyone uses in one way or another - to private IP deals with publishers.
-
@n0toose I'm not sure I agree with that. There has never been a (legal) search engine that could "just give me the paper on CRIU where the author mentions Cricket" before. That is genuinely new, and while stuff like Google has enshittified (I don't disagree with you there), there is also a whole new type of query that you just couldn't do before, all locally on your own device.
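To make that concrete, here's a rough sketch of that kind of "fuzzy" lookup done locally - with a small embedding model via sentence-transformers rather than a full LLM, but it's the same idea of matching by meaning; the snippets are made-up placeholders, not real papers:

```python
# Rough sketch of a semantic ("fuzzy") lookup over local documents, assuming
# the sentence-transformers package; the snippets are made-up placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, runs fine on CPU

snippets = [
    "CRIU checkpoints and restores process trees on Linux.",
    "We build GPU checkpointing with Cricket on top of CRIU.",
    "Search engines rank pages by keywords and link structure.",
]

query = "the paper on CRIU where the author mentions Cricket"
query_emb = model.encode(query, convert_to_tensor=True)
snippet_embs = model.encode(snippets, convert_to_tensor=True)

# Cosine similarity ranks by meaning rather than by exact keyword matches.
scores = util.cos_sim(query_emb, snippet_embs)[0]
best = int(scores.argmax())
print(snippets[best], float(scores[best]))
```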
-
@n0toose That's not the case for a lot of those open models anymore. Apertus is a great example: it was built only from data sources that explicitly allow crawling: https://www.swiss-ai.org/apertus
"Stolen data" is ofc also very debatable. Tightening copyright law, patents and so on even further to "stop" LLMs from being trained will probably only have negative consequences for those writing OSS. I really don't want to live in a world where you're legally prohibited from learning.
-
@n0toose Claude and stuff should obv. be fought, but the open versions of that tech have tons of potential.
-
@n0toose I don't know, I really wouldn't write off LLMs as a whole. Those new open-weight models - Apertus, Qwen, DeepSeek, Kimi - run fully on local infrastructure, with Vulkan for the acceleration layer. For my own work it's been very helpful to use those kinds of things for meeting transcription, RAG search, automating a "fuzzy" web search, autocorrect, translations, fuzzy search-and-replace, data extraction from logs and so on ...
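The log-extraction case, for example, is just a prompt against the same kind of local endpoint as in the earlier sketch; the log line, field names and model name here are made up for illustration:

```python
# Sketch of "fuzzy" data extraction from a log line with a local model,
# assuming an Ollama server on the default port. The log line, fields and
# model name are made-up examples, not from any real system.
import json
import urllib.request

LOG_LINE = "2024-06-01T12:03:44Z worker-3 ERROR failed to restore container 4f2a: checkpoint image missing"

prompt = (
    "Extract the timestamp, host, severity and container id from this log "
    "line and reply with only a JSON object:\n" + LOG_LINE
)

payload = json.dumps({"model": "qwen2.5", "prompt": prompt, "stream": False}).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```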