“We need to get beyond the arguments of slop vs sophistication,” Nadella wrote in a rambling post flagged by Windows Central, arguing, "Humanity needs to learn to accept AI as the new equilibrium of human nature."
-
If "the new equilibrium of human nature" is code for worthless, mind-numbing garbage that is clogging up any and all human interaction, then I guess we agree!
-
I have no issue with AI, the technology; I have been working on it since 1986. The development of LLMs and artificial neural networks has certainly been a game changer. But I am no part of the obsession with building huge server farms that power AI instances to provide services for mobile devices, or any device you want to hook to them. I use only local AI that I work on, or that those who collaborate with me work on.
One very disturbing issue for me is that people cannot make a distinction between AI, the technology, and services provided by corporations. There is a huge difference. For instance, I do not train my AI on copyrighted material stolen from the internet. I train it on open-source and public-domain resources and my own content.
I use ComfyUI (generative AI) to make assets, and just to have fun. I use only checkpoints and LoRAs trained on public-domain content, and yes, you can check this. I have used LLMs that I have trained to turn vim on my machine into a full-blown IDE (not that it was not one already). There are many practical implementations of AI that do and will enhance all of our lives. That does not mean I support corporate server farms trying to inject AI into every aspect of our lives, whether we need or want it or not. On the other hand, I do not feel it is fair for people to judge AI by what corporations do. Should I judge fire by what arsonists do? Should I judge antibiotics by their current abuse in agriculture? It is an old technology that we have been working on for decades, with great success in recent decades. It is not slop; it is valuable technology when used appropriately. AI should not be held accountable for the way corporations misuse it for monetization. Corporations should be held accountable for their actions, not the millions of developers who have diligently worked to hone a technology that is quite valuable.
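As a rough illustration of how a locally hosted model can back a vim setup like the one described above: vim can pipe the current buffer to an external command (e.g. via `:%!` or a mapping), and a small script can forward it to the model's local HTTP interface and print the suggestion back. The endpoint path and JSON field names below are assumptions for illustration, modeled loosely on common local inference servers such as llama.cpp's, not the author's actual setup.

```python
# Hypothetical sketch: forward vim's buffer (read from stdin) to a
# locally hosted model over HTTP and print its suggestion back to vim.
# The endpoint and JSON shape are assumptions, not a specific product API.
import json
import sys
import urllib.request

LOCAL_ENDPOINT = "http://127.0.0.1:8080/completion"  # assumed local server

def build_request(buffer_text: str, max_tokens: int = 64) -> bytes:
    """Package the buffer into a JSON payload for the local model."""
    return json.dumps({"prompt": buffer_text, "n_predict": max_tokens}).encode()

def suggest(buffer_text: str) -> str:
    """POST the buffer to the local server and return the model's text."""
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=build_request(buffer_text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("content", "")

if __name__ == "__main__":
    print(suggest(sys.stdin.read()))
```

From vim, something like `:%!python3 suggest.py` would then replace the buffer with the model's output; no network beyond the loopback interface is involved.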
-
Yes, except I write my own extensions and do not use a plugin manager.

-
@unusnemo I could not let it go and bear the shame of plugins anymore, so I created this on yet another model (the smallest, due to having only 8 GB of RAM; the Qwen2.1.5 coder was trash).
That "API" in the pic does not mean I'm connected through their API, nor is there any need for a network; the word "API" in the text shell console refers to the local HTTP in-browser interface used as a UI.
Description: a Vim shell interface connected to the LLM to verify hallucinations; tested with a trigger question, it confirmed a hallucination.
Oh, and I returned to Phi-3 from scratch and deleted the other trash.
Description: added the other code over Python.
I know, I still have to work on the plugins in my shell console, but I like them.
So, I essentially just open a 2nd shell (to test only) and copy/paste the code into that second shell, where I open Vim. First Esc, then press V to highlight the error, then F5 for the verdict after pressing Enter.
-
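The trigger-question "verdict" step described above can be sketched minimally: ask the model a question whose correct answer is known in advance, then compare its reply against that known answer. The function name and the crude substring check below are illustrative assumptions, not the author's actual implementation.

```python
# Hypothetical sketch of a trigger-question hallucination check: the
# model's reply is compared against an answer known ahead of time.
def verdict(model_reply: str, expected_answer: str) -> str:
    """Flag the reply as a likely hallucination if it misses the known answer."""
    if expected_answer.lower() in model_reply.lower():
        return "OK"
    return "HALLUCINATION"

# Example trigger test with a single verifiable answer.
print(verdict("Paris is the capital of France.", "Paris"))  # OK
print(verdict("The capital of France is Lyon.", "Paris"))   # HALLUCINATION
```

A real check would need something sturdier than substring matching, but the principle is the same: the verdict comes from comparing the model against ground truth, not from the model's own confidence.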
There is nothing wrong with using vim-plug. It just does not do anything for me, as I write most of my own plugins. The main reason to use vim-plug is keeping plugins updated, and today that is moot, as even tpope has not updated a vim plugin in over 4 years. I tend to hard-fork the plugins I like best and maintain them myself, so a plugin manager is not required.
Yes, with low RAM your options are rather limited. I find it is best to train your own LLM, though that requires a significant GPU, at the very least an RTX 3060 Ti or equivalent. You just have to be careful what open source, or source of your own, you train the LLM on, as there are a lot of low-quality projects. Train with a bunch of low-quality code, get low-quality suggestions.
Though I mainly use AI for an IntelliSense level of autocompletion. Having an AI write actual code is very risky: with anything even slightly complicated you are bound to do more debugging than it is worth. It would be less effort to just write it from scratch yourself.
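The point about curating training data can be made concrete. A minimal sketch of a per-file quality filter one might run over a corpus before fine-tuning; the heuristics and thresholds are arbitrary assumptions chosen purely for illustration, not a real pipeline:

```python
# Hypothetical pre-training filter: crude per-file heuristics stand in
# for a real code-quality check. Thresholds are illustrative only.
def looks_trainable(source: str, min_lines: int = 5, max_line_len: int = 200) -> bool:
    """Return True if the file passes basic quality heuristics."""
    lines = source.splitlines()
    if len(lines) < min_lines:
        return False  # trivially short files teach the model little
    if any(len(line) > max_line_len for line in lines):
        return False  # very long lines suggest minified or generated code
    comment_lines = sum(1 for line in lines if line.lstrip().startswith("#"))
    return comment_lines / len(lines) >= 0.05  # demand some documentation
```

Filtering a corpus is then just `[f for f in files if looks_trainable(f)]`; the garbage-in, garbage-out point above is exactly why a step like this sits in front of any fine-tuning run.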
-
@unusnemo Impressive that it autocompletes in your code; I did not know you could train them. Yes, I once built, with another LLM, an Apache front-end server just to keep track of progress, and with the recall command it showed me all the progress, but nothing beyond that. In the end, for the best models, one really needs a powerful PC with way more RAM.
-
You can ask the AI how to train it; it knows.
