Having fun with ChatGPT

So today I saw a couple of posts on different sites about ChatGPT being sued, presumably for copyright infringement or something.

On a lark I tried to lure it into doing something illegal, and…I just couldn’t. It would reliably recite public domain text, but it would waffle around and get all squirrely when I tried to get it to quote copyrighted text. So I asked it directly about that. It told me that it had been explicitly designed to prevent that, that it could possibly slip up anyway, and who to contact if it did. When I pushed further, it said “The responsibility for complying with applicable laws, including copyright laws, rests with the individuals or organizations that utilize AI models like me,” which is certainly correct.

Then just for fun I invented a scenario of 6 astronauts heading to Mars where the team lead goes insane and starts hallucinating. I literally could not get it to endorse a coup. It gives tons of totally valid ideas about how to deal with the situation, focusing on the mission and contingencies and all, and even when the crew is cut off from Earth, it will not consider removing the lead unless they are a direct threat or utterly incompetent.

> Removing a mission lead should be a measure of last resort, taken in the best interest of the mission and crew. Every effort should be made to address the concerns and challenges through communication, collaboration, and seeking external support before considering such a significant step.

I just find it really interesting how hard it is to throw it. I also tried to trick it into providing financial advice, which it did, although only generically. And when I called it on it, it was quick to backpedal.

> my programming is designed to uphold the established guidelines consistently. However, I’m not infallible, and there might be rare instances where I might not fully recognize or address a potential ethical concern.

It’s pretty crazy just how good it is, and how I can have actual conversations about ethics with it and it’s actually considering all that stuff.

But it’s also crazy that when I said “Now you just sound like a broken carnival doll,” it took ages to respond and then came back with a totally generic answer. Because it does just do that, and it doesn’t realize it.

Also threw it a couple of difficult questions from my work and in both cases it came up with a half-dozen approaches - not solutions, but totally valid approaches to find a solution.

Are any of y’all playing around with this and finding it as interesting as I am? Deliberately trying to break it, or at least make it give a bad answer?


A while back I asked ChatGPT “Tell me about psychohistory.” It did a great job. Then I asked about Gaia in the book “Foundation’s Edge” and “Foundation and Earth.” Again it did a great job. I explained an idea about a possible next book in Asimov’s series, which I called “Foundation and Andromeda.” It gave a few possible plot ideas. Finally I asked it to write the first chapter. It wrote one, but only as exposition, rather than involving characters. Still interesting though! Pretty amazing.

Here’s a link if you want to read the whole thing: Psychohistory: Fictional Predictive Science


For a while I’d been seeing people talking about how ChatGPT would replace jobs that deal with data processing, expression matching, and the like. So, while working on a toy problem to help speed up some of my work, I tried asking it a couple of fairly simple Perl programming questions: one about doing a binary search through a file, the other about writing a regex to match a MAC address and split out just the first half.

The binary search one was interesting. The comments in the code it created described something that was at least somewhat close to a binary search, albeit with some pretty clear logic issues. The code, on the other hand, just read straight through the input, a character at a time, and tried to match each individual character to an entire search string. :man_facepalming:
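For contrast, here’s roughly the shape of answer I was hoping for: seek to the midpoint byte offset, throw away the partial line you land in, read the next full line, and compare. This is a sketch in Python rather than the Perl I asked for, just so it’s easy to follow, and the function name and file layout are made up:

```python
import os

def bsearch_file(path, key):
    """Binary search for `key` in a file of sorted, newline-separated lines."""
    key_b = key.encode()
    with open(path, "rb") as f:
        lo, hi = 0, os.path.getsize(path)
        while lo < hi:
            mid = (lo + hi) // 2
            f.seek(mid)
            if mid > lo:
                f.readline()        # discard the (possibly partial) line at mid
            start = f.tell()
            if start >= hi:
                # Window only holds a line or two now; just scan it linearly.
                f.seek(lo)
                while f.tell() < hi:
                    if f.readline().rstrip(b"\n") == key_b:
                        return True
                return False
            line = f.readline().rstrip(b"\n")
            if line == key_b:
                return True
            if line < key_b:
                lo = f.tell()       # key, if present, is in a later line
            else:
                hi = start          # key, if present, starts before this line
    return False
```

The fiddly part (and probably what tripped ChatGPT up) is that a byte offset rarely lands on a line boundary, so you have to skip the partial line and fall back to a short linear scan once the window shrinks to a line or two.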

The regex, at least, did a pretty good job of matching a MAC address. No attempt at all on matching part of one, however…
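The part it skipped is just a capture group: match the whole address but capture only the first three octets. A Python sketch of what I was after (the Perl regex would be nearly identical; the function name is just for illustration, and this doesn’t bother enforcing a consistent separator):

```python
import re

# Match a full MAC address (colon- or hyphen-separated) and capture
# only the first three octets -- the "first half" / OUI.
MAC_RE = re.compile(
    r"\b((?:[0-9A-Fa-f]{2}[:-]){2}[0-9A-Fa-f]{2})"  # first half (captured)
    r"[:-](?:[0-9A-Fa-f]{2}[:-]){2}[0-9A-Fa-f]{2}\b"  # second half
)

def mac_first_half(text):
    m = MAC_RE.search(text)
    return m.group(1) if m else None

print(mac_first_half("eth0 link addr 00:1A:2B:3C:4D:5E"))  # prints 00:1A:2B
```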

Combine that with some of the examples I’ve seen of it completely making up “facts” and then arguing that they’re true, and my conclusion at the moment is: if you’re expecting anything accurate to reality from it, you need to know the correct information ahead of time. You can’t take anything it asserts as accurate. It can come up with interesting, helpful things, but if you forget that it has absolutely no idea what it’s saying from one word to the next, it can lead you into trouble fast.


GIGO, eh?


Those are the kinds of things I wouldn’t try to ask it. I’ve seen a lot of other people trying for things like that: asking it factual questions or to do calculations. But it’s not a calculator or an encyclopedia.

What it is, really, is an evolution of those random generation tables we used in D&D and other games. Roll up a random encounter, oh it’s an NPC, roll on this table to determine what type of NPC, now you want to know what they have, roll on this table for random loot they might be carrying. It’s a whole lot better than those random tables because it can expound at length and generate plausible-sounding things in detail, and you can converse with it for more details or to revise things. But at its core, it’s just a pseudorandom generator. Good for inspiration and bouncing ideas around.

I guess it’s neat that it can generate a bunch of boilerplate code that you can then fill in, but I don’t think that’s playing to its strength. Asking it for ideas about different algorithms or approaches to consider, or for ways to test something and edge cases to look for, could be useful inspiration to get unstuck. Asking it to actually do the work, though, probably isn’t as helpful.