You can call me AI

Oh, that’s just my opinion, and it’s a very subjective thing. I don’t like computer-generated cartoons. Most of the time the character designs are ugly, the animation is clunky, and it seems like the production team spent so much money on computers and processing power that they left out everything that makes a cartoon cool. This particular Ghibli cartoon suffers from all of these problems. I think even the grumpy Mr. Miyazaki hated the end result of this unique digital experiment by Ghibli. It is uninspired and looks like it was made by amateurs.

9 Likes

The director was his son, Goro Miyazaki.

9 Likes

Another reason for Mr. Miyazaki to possibly hate this cartoon.

But people say they made peace.

https://www.cbr.com/ponyo-miyazaki-apology-to-son/

7 Likes

Thanks, I didn’t realize it was computer generated. Yuk indeed!

7 Likes

Using computers isn’t the main problem. Even old computer-generated stuff can be nice. But a great part of what we get is just trash.

4 Likes

What a charming fellow :roll_eyes: I wonder how well AI could do his job

13 Likes

I can do anything? Hmmm… Where can one get a guillotine blade? Asking for me.

9 Likes

I must confess I didn’t understand a single word about the dangers of a benevolent AI that will kill us in a gruesomely slow way. But as I like sci-fi, I was amazed by the concept of a Divine computer.

https://portal.research.lu.se/sv/activities/rokos-basilisk-an-accidental-attempt-at-rational-theology-contagi

6 Likes

It’s the ultimate end point of longtermism as a philosophy.

Longtermism is a branch of Effective Altruism, a philosophy exemplified by Peter Singer in his argument that if you can save a life now, or invest that money to save thousands of lives later, then the moral algebra dictates that you do the latter. His example, IIRC, involves a drowning child. You could save the child now and ruin your suit, or you could save your suit, sell it later, and buy medicine for a thousand children.

There are several issues with this sort of argument, of course. One of which being, if your calculus is that letting one child die to save thousands is valid, what about two children? What if you actively kill those two children? What if it’s to save ten other children?

Or, it’s basically a Trolley Problem. With the added fun of getting involved in the equivalent of the joke: “A man goes up to someone and says ‘if I gave you a million dollars, would you sleep with me?’. They think for a moment and say ‘Yes.’ The man then says ‘what about if I gave you twenty dollars?’. The person, outraged, demands ‘what sort of person do you think I am?!’, to which the man replies ‘we’ve established that, now we’re just haggling over the price.’”

(Other counterarguments, off the top of my head, include that your calculus also has to include a discount for the likelihood that you actually will sell the suit and use the money for good, and what about the moral hazard of being the sort of person who will watch a child die on the promise of maybe something probably better some undetermined time later.)
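
To make that discount point concrete, here’s a toy version of the expected-value calculus in Python. Every number in it (the thousand children, the follow-through probabilities) is a made-up illustrative assumption, not anything from Singer; the point is only that the “obvious” answer flips as soon as you discount the promise.

```python
# Toy expected-value version of the drowning-child argument.
# All numbers are illustrative assumptions, not real estimates.

def expected_lives_saved(lives_later: float, p_follow_through: float) -> float:
    """Expected lives saved by keeping the suit, selling it,
    and (maybe) actually donating the proceeds later."""
    return lives_later * p_follow_through

SAVE_NOW = 1.0  # ruin the suit, save the child: certain and immediate

for p in (1.0, 0.5, 0.01, 0.0005):
    later = expected_lives_saved(lives_later=1000, p_follow_through=p)
    verdict = "keep the suit" if later > SAVE_NOW else "save the child"
    print(f"p(follow through) = {p:<7} E[lives saved later] = {later:>6.1f} -> {verdict}")
```

With guaranteed follow-through the math says keep the suit; discount the promise hard enough and saving the child wins. The whole objection in one loop.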

The added trouble is that Longtermists don’t have bounded time limits. They calculate the greatest good over hundreds, thousands, millions of years. They’re the sort of people who figure that if they can build space travel and start an interstellar empire, and all it costs is to destroy the earth and kill everyone on it, then the trillions of trillions of people across the universe for the rest of eternity are worth it.

So: Roko’s Basilisk is supposed to be benevolent. In that it will create the best possible life for the most possible beings for the longest possible time. It may as well be forever.

And as soon as you introduce infinities, you know you have to be really careful with how you calculate, lest you find yourself dividing by zero.
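
To spell out that failure mode (my notation, not anything from the linked paper): in expected-utility terms, any nonzero credence in an infinitely good outcome swamps everything else.

```latex
% Let p be your credence that the Basilisk gets built,
% and U_fin any finite payoff from doing something else instead.
E[U] = p \cdot \infty + (1 - p) \cdot U_{\mathrm{fin}} = \infty
       \quad \text{for every } p > 0.
% A 0.999 chance of utopia and a 0.000001 chance both come out "infinite",
% so the calculus can no longer rank anything.
```

Once every option ties at infinity, “the greatest possible good” stops constraining anything.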

If Roko’s Basilisk is the cause of the greatest possible good, it follows that anything which hinders its existence as soon as possible must therefore be evil. Roko’s argument, by extension, is that the best way to prevent anyone stopping it from its inevitable creation is to capture the mindstate of everyone who could have worked towards it to the best of their ability, but didn’t, and trap them in a virtual hell for all of eternity to punish them. Knowing that this is what must happen if you don’t use all of your money and effort to build Roko’s Basilisk is what will encourage you to build it, at which point it will use its own resources to punish everyone who didn’t, while building heaven for everyone to come after with its other hand.

So this is basically Effective Altruism meets the Singularity meets Pascal’s Wager.

Longtermism is an extension of Effective Altruism, which is an extension of Peter Singer’s work, which starts with the philosophical equivalent of dividing by zero, in arguing that it can be the moral thing to do to watch a child die, if you’re wearing an expensive enough suit which you promise to sell and donate the funds later.

That’s the sort of mindset we’re dealing with here.

9 Likes

They’re dime-store Thanoses. They don’t consider the contributions that drowning kid might make. They just want to play with their toys and do nothing, while making believe they’re morally superior. But all they’re really doing is lining their pockets.

11 Likes

Though I argue that – perhaps as is obvious – “Roko’s Basilisk” fails to be a convincing thought experiment

That’s about as scathing a burn as you can find in an abstract

7 Likes

[Image: Angry Flower, “Roko’s Basilisk”]

The best I can say about this is that it might make a fun Philosophy 101 class to figure out all the ways that it’s stupid. I remember that for a while online there were people actually upset at any mention of this thing, but seriously, just because a computer is smart doesn’t mean it can magically bring you back from the dead.

12 Likes

I’m becoming convinced that in another few months or so, after a major 2008-style market crash led by tech stock devaluation, Ed Zitron will be widely recognized in the media as the guy who predicted exactly how this would all play out. And he’ll be as frustrated as anyone that he’s going to be treated as a financial genius when this information was available for all to see, but it was just getting ignored by people who should have known better.

8 Likes

That would at least be a step up from how crashes are usually handled – crying about how nobody saw it coming while pointedly ignoring all the people who did.

9 Likes
