You can call me AI

R.U.R. - Wikipedia.

9 Likes

Maybe not “control” as such, but “distract[1] for long enough to pull the plug”, certainly.

[1] Gaining enough consciousness to be able to realise that they have gained consciousness should be pretty overwhelming in itself. But just to be on the safe side we should have something up our sleeve to do the trick. I’m thinking of whatever counts as pornography for a general AI.

6 Likes

Google is indexing ChatGPT conversations, potentially exposing sensitive user data

https://archive.ph/iBOmQ

8 Likes

I am legitimately shocked that anyone is shocked by this.

Where the fuck else, now that they have poisoned the open internet and sucked up the entirety of the world’s published information, are they going to get new human generated text to feed their engines?

6 Likes

They’re not planting C4 in the data centers?

6 Likes


Spotting fake buns edition!

7 Likes

Beware of robot bunnies.


If you like animals, please, don’t google this graphic novel by Grant Morrison.

7 Likes

Come on, guys - what happens in Vegas stays in Vegas!

5 Likes

nq250802

7 Likes

I was at a science museum in San Francisco today that included an exhibit on AI, sponsored by Anthropic. I was expecting the worst kind of corporate bullshit promoting AI, but it was actually not too bad. Some of the interactive exhibits made the downsides and limitations of the technology pretty clear. For example there was one where, every time a lightbulb turned on, someone needed to press a button within 3 seconds to turn it off. Most of the time a robotic finger would reach out to turn it off, but if it failed to do so in time then the human was supposed to press it. And it was a pretty clear indicator that for that kind of task (or something more important, like driving) it would be easier and more reliable to just have the human take full responsibility and do the task themselves rather than have a human-supervised machine that only got it right most of the time.

There was also a part of the exhibit talking about AI training, and how unethical practices like stealing copyrighted material and relying on exploited, low-paid workers to label and categorize training data were a widespread problem in the industry.

10 Likes

15 Likes