You can call me AI

https://www.wsj.com/lifestyle/careers/ai-job-interview-virtual-in-person-305f9fd0

13 Likes

Consider what happens when you ask an AI model why it made an error. The model will generate a plausible-sounding explanation because that’s what the pattern completion demands—there are plenty of examples of written explanations for mistakes on the Internet, after all. But the AI’s explanation is just another generated text, not a genuine analysis of what went wrong. It’s inventing a story that sounds reasonable, not accessing any kind of error log or internal state.

Unlike humans who can introspect and assess their own knowledge, AI models don’t have a stable, accessible knowledge base they can query. What they “know” only manifests as continuations of specific prompts. Different prompts act like different addresses, pointing to different—and sometimes contradictory—parts of their training data, stored as statistical weights in neural networks.

This means the same model can give completely different assessments of its own capabilities depending on how you phrase your question. Ask “Can you write Python code?” and you might get an enthusiastic yes. Ask “What are your limitations in Python coding?” and you might get a list of things the model claims it cannot do—even if it regularly does them successfully.
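You can see this for yourself in a couple of lines. Here's a rough sketch (assuming the OpenAI Python SDK; the model name is just a placeholder): ask the "same" question two ways and compare the self-assessments that come back.

```python
# Rough illustration only: model name is a placeholder, prompts are examples.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, for illustration
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Two phrasings of the "same" question about the model's own abilities.
# Each prompt steers the continuation toward a different part of the
# training distribution, so the self-assessments can contradict each other.
print(ask("Can you write Python code?"))
print(ask("What are your limitations in Python coding?"))
```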

This kind of thing should be required reading before using an LLM…

14 Likes

AI Eroded Doctors’ Ability to Spot Cancer Within Months in Study

11 Likes

It even lies about why it lies. Definitely good to know, and even as an AI skeptic I admit that part of me would still expect a straight answer from an LLM when it “breaks character” and acknowledges what it is and how it functions… but of course nothing has changed but the tone; the answers are generated the same way as always.

11 Likes

Or, put another way, people who get overly reliant on AI will soon not know their ass from a hole in the ground.

9 Likes

When looking at these sorts of products, look at what they’re trying to privatise, commoditise, monopolise, enshittify, and ruin for their own profit.

Email (Gmail, O365 Outlook and Exchange)
Logging (Splunk)
Data storage and access (Oh so many)
Hotels (AirBnB)
Taxis (Uber)

In this case, we can see what the AI companies are trying to enshittify.

It’s rational and critical thought itself.

It’s so blatantly obvious that the business model is to make it free, cheap, and easy to offload your cognitive effort (and interpersonal interaction, and creativity, and curiosity, and memory) to AIs, get it so tangled up in your workflow that you can’t operate without it, then systematically kick out the free tier so you have to pay them for access to what is now a critical part of your life.

10 Likes

I don’t remember hearing about them before today, & now that’s maybe* the 2nd time today someone here has mentioned them.

*@KathyPartdeux mentioned “McKinsey gibberish” in another thread & I wasn’t sure what that meant. I wound up at the Wikipedia article for McKinsey & Company; I’m not sure whether that’s the same “McKinsey” or not, but… wow. How had I not heard of them before?! Or, why don’t I remember!?

So, reading about McKinsey:

According to a 1997 article in The Observer, McKinsey recruited recent graduates and “imbue[d] them with a religious conviction” in the firm, then cull[ed] through them with its “up-or-out” policy… McKinsey’s culture had often been compared to religion, because of the influence, loyalty and zeal of its members… McKinsey’s internal culture was “collegiate and ruthlessly competitive” and has been described as arrogant…

And they were connected to Enron. Which is weird(?), because they’re not the only company connected to the Enron scandal that could be described that way; I interacted with another at one of my old jobs. FWIW. I don’t know if anyone still puts any credence in the “Type A vs. Type B personality” idea, but my goodness, there sure was a surfeit of Type AAA there. Or, I’m just that much Type B? (B- ?)

You just reminded me how some agent kept listing some properties a few blocks from here, depicting notional houses on actually-empty lots.

8 Likes

Wait, what? The boys in blue? The oldest and largest of the “Big Three” management consultancies? The management consultancy every other management consultancy tries to emulate? The inventors of downsizing? Involved in basically every aspect of post-WWII corporate fuckery, globally?

And Galleon, Valeant, Saudi Arabia (suppressing dissidents), China (the 13-year ‘Made in China’ plan), ICE (yes, that ICE), Purdue Pharma (OxyContin), and… the list goes on (and starts in 1926).

It’s a bit cult-like/multilevel-marketing internally. You either rise to the top or quit at some point. Those who don’t burn out in the process and leave usually wind up in senior positions at large companies. The firm is highly connected via its clients plus a network of alumni in the corporate world, which ensures both new contracts and a steady influx of people who want to work there as a springboard for their careers in business.

6 Likes

AI combined with nuclear reactors – what’s the worst that could happen?

10 Likes

And their advice is often just awful. They have people giving advice with zero understanding of the field. At least that’s what I saw in government.

8 Likes

I’m glad the community there wised up. These data centers are obviously power-hungry, but in places like Arizona the biggest problem would be the enormous amount of water such data centers would need.

8 Likes

Can’t even find an anti-AI image without having AI pushed at me… :person_facepalming:

10 Likes

Hopefully this really is a meaningless PR stunt rather than a serious bid:

Google has been enshittifying its own products just fine on its own, thank you very much. No need for an AI start-up to turbocharge that decline.

11 Likes

I think this says that’s what it really is.

Perplexity did not respond to queries about how the proposed deal would be funded. In July, it had an estimated value of $18bn.

9 Likes

Do I have to use that as a signature on all of my posts? :thinking:

… May not be used to train AI

14 Likes

So we can all agree that AI-generated slop that puts VFX artists out of work is bad. But apparently Amazon is pushing the envelope trying to make something even worse:

Using actual footage of a fatal plane crash for a stupid, ill-conceived War of the Worlds remake is incredibly disrespectful.

Fortunately probably not too many people will see it, given the 0% score on Rotten Tomatoes.

9 Likes

This reminds me of (and will probably be about as effective as) those viral Facebook posts:

Goodbye, Meta AI. Please note that an attorney has advised us to put this on; failure to do so may result in legal consequences. As Meta is now a public entity, all members must post a similar statement. If you do not post at least once, it will be assumed you are OK with them using your information and photos. I do not give Meta or anyone else permission to use any of my personal data, profile information or photos.

8 Likes