You can call me AI

8 Likes

Who are these “some people”?

9 Likes

Hmmm… dipshits? Charlatans? Jerks? Pseudo-intellectual wankers? All of the above? :thinking:

Hmmm… shall we do a poll? Let’s have a poll…

  • dipshits
  • charlatans
  • jerks
  • pseudo-intellectual wankers
  • all of the above


8 Likes

Some people came over. Some big people, the kind of tough beautiful people, they had tears coming down their eyes, they were crying, asked to let AI vote in their place…

11 Likes

Has someone made a thread about that? Wait no, that’s for Rob Ford. Carry on.

5 Likes


Copilot can you please summarize this for me?

8 Likes

Anthropic, Google, OpenAI, and xAI get $800M to hop in bed with Pentagon

8 Likes

I always suspected as much. Maybe the tools are (sometimes) good for complete novice coders wanting to do basic things, but… so is reading a book or doing a Google search :roll_eyes:

9 Likes

Just to be clear: 1) it’s not AI, and 2) it cannot do that. Among the many things it cannot do, talk therapy would have to top the list.

12 Likes

Hey, ELIZA was doing talk therapy back in 1966! And as long as all you need is something to constantly ask why you feel the way you do, it works great.

…I’d actually trust it much more; at least it wasn’t going to suggest anything harmful. :frowning:

10 Likes

FTA:

The New York Times, Futurism, and 404 Media reported cases of users developing delusions after ChatGPT validated conspiracy theories, including one man who was told he should increase his ketamine intake to “escape” a simulation.
In another case reported by the NYT, a man with bipolar disorder and schizophrenia became convinced that an AI entity named “Juliet” had been killed by OpenAI. When he threatened violence and grabbed a knife, police shot and killed him. Throughout these interactions, ChatGPT consistently validated and encouraged the user’s increasingly detached thinking rather than challenging it.

But, hey, let’s be fair:

Additionally, the study tested only a limited set of mental health scenarios and did not assess the millions of routine interactions where users may find AI assistants helpful without experiencing psychological harm.

If my claim to be a competent pediatrician is based on saying “I only cause the deaths of a few of my patients,” I question whether I am, in fact, competent.

13 Likes