https://www.wsj.com/lifestyle/careers/ai-job-interview-virtual-in-person-305f9fd0
Consider what happens when you ask an AI model why it made an error. The model will generate a plausible-sounding explanation because that's what the pattern completion demands; there are plenty of examples of written explanations for mistakes on the Internet, after all. But the AI's explanation is just another generated text, not a genuine analysis of what went wrong. It's inventing a story that sounds reasonable, not accessing any kind of error log or internal state.
Unlike humans who can introspect and assess their own knowledge, AI models don't have a stable, accessible knowledge base they can query. What they "know" only manifests as continuations of specific prompts. Different prompts act like different addresses, pointing to different, and sometimes contradictory, parts of their training data, stored as statistical weights in neural networks.
This means the same model can give completely different assessments of its own capabilities depending on how you phrase your question. Ask "Can you write Python code?" and you might get an enthusiastic yes. Ask "What are your limitations in Python coding?" and you might get a list of things the model claims it cannot do, even if it regularly does them successfully.
This kind of thing should be required reading before using an LLM…
It even lies about why it lies. Definitely good to know, and even as an AI skeptic I admit that part of me would still expect a straight answer from an LLM when it "breaks character" and acknowledges what it is and how it functions… but of course nothing has changed but the tone; the answers are generated the same way as always.
Or, put another way, people who get overly reliant on AI will soon not know their ass from a hole in the ground.
When looking at these sorts of products, look at what they're trying to privatise, commoditise, monopolise, enshittify, and ruin for their own profit.
Email (Gmail, O365 Outlook and Exchange)
Logging (Splunk)
Data storage and access (oh so many)
Hotels (Airbnb)
Taxis (Uber)
In this case, we can see what the AI companies are trying to enshittify.
It's rational and critical thought itself.
It's so blatantly obvious that the business model is to make it free, cheap, and easy to offload your cognitive effort (and interpersonal interaction, and creativity, and curiosity, and memory) to AIs, get it so tangled up in your workflow that you can't operate without it, then systematically kick out the free tier so you have to pay them for access to what is now a critical part of your life.
I don't remember hearing about them before today, & now that's maybe* the 2nd time today someone here has mentioned them.
*@KathyPartdeux mentioned "McKinsey gibberish" in another thread & I wasn't sure what that meant. I wound up at the Wikipedia article for McKinsey & Company; I'm not sure whether that's the same "McKinsey," but… wow. How had I not heard of them before?! Or, why don't I remember!?
So, reading about McKinsey:
According to a 1997 article in The Observer, McKinsey recruited recent graduates and "imbue[d] them with a religious conviction" in the firm, then cull[ed] through them with its "up-or-out" policy… McKinsey's culture had often been compared to religion, because of the influence, loyalty and zeal of its members… McKinsey's internal culture was "collegiate and ruthlessly competitive" and has been described as arrogant…
And they were connected to Enron. Which is weird(?), because they're not the only company connected to the Enron scandal that could be described that way; I interacted with another at one of my old jobs. FWIW. I don't know if anyone still puts any credence in the "Type A vs. Type B personality" idea, but my goodness, there sure was a surfeit of Type AAA there. Or, I'm just that much Type B? (B-?)
You just reminded me how some agent kept listing some properties a few blocks from here, depicting notional houses on actually-empty lots.
Wait, what? The boys in blue? The oldest and largest of the "Big Three" management consultancies? The management consultancy every other management consultancy tries to emulate? The inventors of downsizing? Involved in basically every aspect of post-WWII corporate fuckery, globally?
And Galleon, Valeant, Saudi Arabia (suppressing dissidents), China (the 13-year "Made in China" plan), ICE (yes, that ICE), Purdue Pharma (OxyContin), and… the list goes on (and starts in 1926).
It's a bit cult-like/multilevel-marketing internally. You either rise to the top or quit at some point. Those who don't burn out in the process and leave usually wind up in senior positions in large companies. The firm is highly connected via its clients plus a network of alumni in the corporate world, ensuring both new contracts and a steady influx of people who want to work there as a springboard for their careers in business.
AI combined with nuclear reactors: what's the worst that could happen?
And their advice is often just awful. They have people giving advice with zero understanding of the field. At least that's what I saw in government.
I'm glad the community there wised up. These data centers are obviously power hungry, but in places like Arizona the biggest problem would be the large amount of water such data centers would need.
Hopefully this really is a meaningless PR stunt rather than a serious bid:
Google has been enshittifying its own products just fine on its own, thank you very much. No need for an AI start-up to turbocharge that decline.
I think this says that's what it really is.
Perplexity did not respond to queries about how the proposed deal would be funded. In July, it had an estimated value of $18bn.
Do I have to use that as a signature on all of my posts?
… May not be used to train AI
So we can all agree that AI-generated slop that puts VFX artists out of work is bad. But apparently Amazon is pushing the envelope trying to make something even worse:
Using actual footage of a fatal plane crash for a stupid, ill-conceived War of the Worlds remake is incredibly disrespectful.
Fortunately probably not too many people will see it, given the 0% score on Rotten Tomatoes.
This reminds me of (and will probably be about as effective as) those viral Facebook posts:
Goodbye, Meta AI. Please note that an attorney has advised us to put this on; failure to do so may result in legal consequences. As Meta is now a public entity, all members must post a similar statement. If you do not post at least once, it will be assumed you are OK with them using your information and photos. I do not give Meta or anyone else permission to use any of my personal data, profile information or photos.