No Darius, the thing they don’t ask you is not “what if they’re right?”. The thing they don’t ask you is why anyone pays attention to what you say when you are obviously, constantly, verifiably full of shit. All of you shilling your bullshit are. Every time.
Why the fuck does anyone print your bullshit? We have actual real problems to deal with in this world. Your fantasies can get in the queue.
Why doesn’t anyone ask if the climate change doomers are right? Because that has a probability a lot higher than the nothing point nothing that the toaster is going to come alive and kill us all. (Yes, of course Musk puts the probability a lot higher, because he’s a liar who makes his money that way.)
Shush you. You just don’t understand Peter Thiel logic: climate change fear is going to cause an authoritarian dystopian government so to avoid that we need to create a nightmarish authoritarian, dystopian, surveillance driven, fascist dictatorship.
I gotta say, this short video on Thiel and Yarvin’s views about technology, society, and the use of influence / power to change policies in their favor was scary:
How many people abandoning what enriches the billionaires would it take to put them back in check (in addition to taxing them and campaign finance reform)?
You can actually already ask the models how to solve climate change, and even they can give an answer because it’s all over the internet. Phase out fossil fuels in favor of renewable energy, reduce consumption and the emissions that go with it. It’s not hard to figure out, it’s just not magic that makes the billionaires more money.
The example of the fake passport that the article showed was something that could have been easily created in Photoshop decades ago. Doing it with AI instead is pure laziness. But I guess there’s no shortage of lazy fraudsters out there.
I always figured that it’s the holograms, paper texture, and other physical security features that would make a forged passport easy to spot, not looking for mistakes in the image.
Brain activity much lower when using AI chatbots, MIT boffins find
Using AI chatbots actually reduces activity in the brain versus accomplishing the same tasks unaided, and may lead to poorer fact retention, according to a new preprint study out of MIT.
Brethenoux said the current wave of AI hype is fueled in part by conflation of the terms “AI agent” and “generative AI” – and use of fuzzy definitions for both.
He lamented that practice by sharing an aphorism attributed to French philosopher and author Albert Camus: “To misname things is to contribute to the world’s miseries.”
The word “warfighting” is conspicuously absent in OpenAI’s post, which notes that use cases “must be consistent with OpenAI’s usage policies and guidelines.”
Those policies prohibit using OpenAI technology to “develop or use weapons.” The company’s past policies banned “military and warfare” applications entirely, but last January it changed its wording to “Don’t use our service to harm yourself or others.” 1)
[…]
How does one imbue an LLM with Warrior Spirit™ anyway?
1) This is actually helpful. Now when someone asks me whether I use ChatGPT I can truthfully reply that I can’t because it would contravene OpenAI’s usage policies and guidelines.