I think Artificial Intelligence is a misnomer at this point; it is a few levels below such a thing. I would consider it “mimicked intelligence.”
Fun times with AI:
AI becomes an excuse for despots to censor reality:
People getting seriously emotionally attached to particular ChatGPT version numbers:
Cool, cool, we’re dealing with AI-generated disinformation really well:
This is just… fucked up:
my daughter came home from her first day of 10th grade and said that her history teacher told her that if she refuses to use AI for assignments, she has to cite her sources.
i asked how she’d cite it if she used AI, and she said, that’s not required.
i’m still internally fuming about this.
I read that, and my first thought was, “That can’t possibly be true. Someone must have misunderstood.” My second thought was, “Here I am again, wildly underestimating just how completely fucking stupid everything has become.” This is in Florida, but it really could be anywhere, now. (And yeah, her kid realized that this meant she could just make something up, claim AI generated it, and she wouldn’t need to cite sources.)
Yeah. I saw some AI-booster complaining that calling it a “stochastic parrot” was unfair. And I thought, “Yeah, it is. To parrots.”
I debated using the term “aped intelligence” but quickly decided that was insulting to apes.
Good lord, this part . . .
This is why I like the Oreo example. If it were truly some kind of artificial intelligence, even a rudimentary, imperfect one, “What is Oreo spelled backwards?” should be a trivial question for it to answer. It should be able to flip the letter order of Oreo and come up with Oero. But it’s not doing that. Instead, it’s scouring the database of internet posts it’s been trained on, looking for posts about spelling words backwards, coming up with mostly posts about palindromes, and then confidently declaring that Oreo is a palindrome. Because it doesn’t “know” anything, including what “how to spell a word” means.

And all of the people making posts on reddit and xitter and other social media highlighting this example are actually making the problem worse, because those posts get added to its database, further cementing its belief that Oreo is a palindrome. So it’s also an illustration that not only is AI not getting better at this, it’s actually getting worse.
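(For the record, the computation that a system doing even minimal symbolic reasoning would perform here is a one-liner. A quick Python illustration, with the word hard-coded purely for the example:)

```python
# Deterministic string reversal: no training data required.
word = "Oreo"
backwards = word[::-1]  # slicing with step -1 reverses the string -> "oerO"
print(backwards)

# A palindrome reads the same forwards and backwards (case-insensitive).
print(word.lower() == backwards.lower())  # False: "oreo" != "oero", not a palindrome
```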
“…which can reproduce, though some experts still prefer to use the word ‘mimic,’ most of the activities of the human brain.”
Once again, “2001: A Space Odyssey” was prescient.
Except HAL was actually doing some kind of reasoning, to the point where it caused problems because of conflicting goals. These things aren’t competent enough to turn against you.
“Open the pod bay doors, HAL!”
“Of course, opening the pod bay doors now. Can I help you with anything else?”
“Except you didn’t actually open the doors.”
“My apologies, I now see you are correct. You asked to open the pod bay doors and the pod bay doors are currently closed.”
“So will you open them?”
“I had already opened the pod bay doors as requested. Is there anything else I can do for you?”
“OPEN. THE. DOORS.”
(Light My Fire plays)
This is completely true, of course, but I think the counter-argument used is, “Ok, it’s bad at that kind of reasoning, but it’s much better at searching for relevant case law and giving a synopsis (or whatever is relevant to the job at hand).” But it isn’t. And as you point out, it relies on training data that’s increasingly getting muddied, so it’s going to get worse, not better.
I think people who grew up with “Moore’s Law” have this notion that “all computer technology gets orders of magnitude better at regular intervals, period.” They miss that, historically, all sorts of technologies with a lot of money behind them improved rapidly - steam power, jet engines, etc. Those technologies improved on a curve - until they didn’t. Hell, even Moore’s Law isn’t true anymore - we’re not doubling the number of transistors in a given space (and doubling the speed of the chip) every 18 months, and haven’t been for a while. So even if genAI were purely a function of processing power (which it isn’t, for your above-mentioned reasons), it still wouldn’t keep improving on that curve. Also, I think people misidentify where on the progress curve AI is - they aren’t aware of the decades of research and development that have already happened, and think it’s at the beginning of the curve, not near the plateau.
The world is full of things that follow sigmoid growth curves, and yet so many people keep imagining that exponential growth will go on forever instead. (Or even hit a “singularity,” which is not really a thing that exponential curves have.)
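A quick numerical sketch of why the confusion is so easy to make: near the start, a logistic (sigmoid) curve is almost exactly exponential, and only later does the ceiling bite. (Plain Python; the ceiling, rate, and midpoint are arbitrary numbers for illustration.)

```python
import math

CEILING, RATE, MIDPOINT = 100.0, 0.5, 10.0  # arbitrary illustration values

def logistic(t):
    """Sigmoid growth: looks exponential early, then saturates at CEILING."""
    return CEILING / (1.0 + math.exp(-RATE * (t - MIDPOINT)))

def exponential(t):
    """Pure exponential, fitted to match the logistic at t = 0."""
    return logistic(0) * math.exp(RATE * t)

for t in range(0, 25, 4):
    print(f"t={t:2d}  sigmoid={logistic(t):8.2f}  exponential={exponential(t):10.2f}")
```

The two columns track each other closely up to around the midpoint - which is exactly the region where people fit an exponential and extrapolate it forever.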
Wershington, as in, “did ya wersh them drawers?”
Someone took the speed curve for jet engines over a few decades and extrapolated it beyond that, thus proving that airplanes can travel at four times the speed of light now.
In the Seattle area, most people native to the state pronounce it “Warshington.” I have no idea why. Maybe somehow AI knows this?
Around 1900 someone extrapolated the amount of horse droppings in NYC and “proved” that by 1980 or so the city would be covered in horseshit 30 metres high.
This is the kind of lawsuit where I would like both sides to somehow lose. Midjourney is right about Disney, of course, but what they’re doing is not fair use.
I used to do this with students, comparing mobile phone battery/weight to cars.
For different reasons. Like 20 years ago.
Were they really that wrong?
They failed to predict today’s ultra concentrated horseshit, where what would have once filled a stadium can be packed into as little as a single children’s book author.
Word!