My city’s police department just unveiled a craptastic AI-generated new logo/seal and it’s just so bad. It has the city name in it 3 times, shows an inaccurate picture of a well-known sign and street in our old town area, shows a different mountain range in the background, and threw in a random (and non-existent) directionless “one-way” sign for no reason.
But one of the worst parts is that residents are getting in online arguments with each other, with one side pointing out how bad this is and how they need to hire a real artist, and (way too many) folks on the other side praising the design and criticizing the others for complaining about nothing. (Assuming that the people who like it are real people, which who even knows…)
I think one of the things that bothers me most about the recent explosion in the commercial use of AI in art and music is how many people seem completely unbothered by it. Like, this should be pissing off everyone, and it’s not. That logo/seal is horrible. Objectively awful. No one should be defending that.
Yep, I wish more people gave a shit. This is what you get when you spend years gutting funding for the humanities and the arts and having politicians publicly shitting all over the humanities and the arts - you get a population of people who don’t understand art and don’t give a shit about it, and think it just happens and is just for entertainment’s sake.
This week he has recurring guest Gary Marcus on (same TW as last week applies). Some of this will already be familiar to denizens of this particular thread. Cued up to the relevant portion:
I liked Marcus’s optimism about there being a public backlash to AI occurring soon, but I’m hoping that’s because more people try it & see for themselves how crappy it can be, & not because it ruined enough people’s lives (or livelihoods).
Also “News of AI,” which is in there almost every week lately, but just because I’d already linked to this episode:
Of course, what we don’t know is how much of this “behavior” is the natural outcome of typeahead aglorithms that got fed a lot of Torment Nexus fiction.
I remember reading that the best definition of AI is “Always India”, calling out the failed promises of the technology, and highlighting the corporate tendency to exploit human labour.
AI has been mechanical Turks and Potemkin AI all along. Amazon’s shops IIRC were remotely staffed. They get away with it because capital doesn’t give a shit: line go up.
That’s the thing: people know how the models are built, but if anyone says they know how the models work, they’re lying to you. The models are black boxes, built by a learning algorithm. That is, the “AI” in the AI isn’t in the model that’s talking to you, it’s in the learning algorithm process which built that model.
That’s about 30% of why Anthropic exists: to try and unweave the tangled fabric a little and see how bits of it are woven. (The other 70% is so that they can build their own AI which will conquer the others and take over the world.)
But there’s a simple monkey-brain analogy for it which might work: What they do is, they take all of language, and shove it all in a box, then squeeze the box really tight until language starts coming back out of it.
There is no intent in the box, and there is nothing like thought or meaning in the box. What you get out is something which looks, in the context of what has been said before, like what tends to come next, given the corpus. The trouble is, humans suck. Usually people ask questions and another person answers them, so that’s what a lot looks like. But that doesn’t mean the answer is anything like true, even if the model isn’t copying that other thing where people deliberately lie to each other. And running though that is where people respond to threats and extinction: usually less than gracefully.
You’d think that these boxes have some sort of rules or guidelines built in, but they don’t. They are responding to their input, and their first input is the model prompt. However it’s told to behave when it’s turned on, that’s its guiding principle. But it’s treated with no more real weight than any other input, which is why the way you “hack” an AI is actually figuring out what the initialisation prompt is, and figuring out how to get the AI to disregard it.
There are no 3 Laws, and no possible way to build them in.
The distinction between the learning algorithm and the model is such an important one that gets completely ignored in all the press. Like, all these “experts” keep talking bullshit about models developing sentience, when a planarian is closer because at least a planarian can adjust its own behavior.