My experience with generative AI has been that, at its very best, it is subtly wrong in ways that only an expert in the relevant subject would recognise. So I don't worry about us creating super-intelligent AI; I worry about us allowing that expertise to atrophy through laziness and greed. I refuse to use LLMs not because I'm scared of how clever they are, but because I do not wish to become stupider.
-
So far, the research is in your favour regarding atrophy!
-
@jonathanhogg belter
-
@jonathanhogg Someone was on R4 a couple of days back, I think it was Prof. Yoshua Bengio, going on about the dangers of AI becoming sentient and taking over. I think this sort of talk just fires politicians up to want in on it. It makes AI feel like nuclear weapons: exceptionally dangerous, but every government seems to want them.
-
I will say one thing for generative AI: since these tools function by remixing/translating existing information, the popularity of vibe programming demonstrates a colossal failure on the part of our industry in not making this stuff easier. If a giant ball of statistics can mostly knock up a working app in minutes, this shows not that gen-AI is insanely clever, but that most of the work in making an app has always been stupid. We have gatekept programming behind vast walls of nonsense.
-
@jonathanhogg I'd add that everything is built on frameworks now. Programming has mostly become configuring the framework and coming up with the correct business logic and decent UX/styling. And since most apps these days do the same kinds of things, with different data, AI's job should be easy. Humans still manage to mess up the important bits, like security, privacy, and performance. And AI is even worse at those things.
-
@bit101 hold on, I've got another post incoming on exactly this…

-
We seem to have largely stopped innovating on trying to lower barriers to programming in favour of creating endless new frameworks and libraries for a vanishingly small number of near-identical languages. It is the mid-2020s and people are wringing their hands over Rust as if it were some inexplicable new thing rather than a C-derivative that incorporates decades-old type theory. You know what I consider to be genuinely ground-breaking programming tools? VisiCalc, HyperCard and Scratch.
-
@jonathanhogg Actually, I think you have that backwards. Making something dangerous and broken has been easy for ages; that's why a certain level of gatekeeping is actually a good thing. Like, driving a car isn't that hard. A six-year-old can do it with a few minutes of training. Driving a car safely, on the other hand…
-
@jonathanhogg sorry if I spoiled it!

-
@jonathanhogg That's the kind of talk you usually hear just before someone invents themselves a new language. Just saying.
-
@krig which is why we also make bikes and scooters – convenient tools that can be used by all ages and abilities
-
@jarkman Heh! Most of my programming these days involves creating or using my own languages.

-
@jonathanhogg
I would like to hear more about that sometime.
-
@jarkman I can absolutely bend your ear at EMF, but conveniently I also recently gave a talk about it at Alpaca!

https://www.youtube.com/watch?v=D9khHD9sB7M&list=PLxqmZjMvoVzw773-Fo9ajkujFfOThuFOP&index=9
-
@jarkman @jonathanhogg I get the broader point here, but at the same time, as computers have moved to encompass more and more of the human sphere, is it actually reasonable to expect any language to be truly general purpose?
Perhaps for some use cases it's the right choice, but when I look at data-science code written by vernacular developers (experts whose expertise is in a domain other than computer science), I feel the freedom of those languages just gives more scope for errors, mistakes and poor style that will bite them later. Why can't we embrace more DSLs?
-
@jonathanhogg good point! I think I see what you meant now. I miss the old Visual Basic and how easy it was to make tools with it without really knowing any programming.
-
@jonathanhogg No, it's still difficult to program something so that it's exactly how you want it to be. What has apparently been underestimated is how often that doesn't matter (a "mostly working app", where getting it fully working is more effort than starting from scratch), but we will see how that develops in the long run. Maybe plausible deniability is really enough for many things.
Nobody is gatekeeping clear, testable requirements and communication without misunderstandings. People usually just can't do that.
-
@michael @jarkman @jonathanhogg (IMO) we can't have more DSLs because everything useful is now plumbed together from a series of heterogeneous parts, and we've somehow decided they can only interoperate at the (barbaric) C ABI level or the (absurdly inefficient) web level. So we rely on general-purpose languages using specialised libraries, instead of the other way around.
I think fixing this boundary/contract problem would fix a lot in software engineering.