Microsoft eggheads say AI can never be made secure – after testing Redmond’s own products
Microsoft brainiacs who probed the security of more than 100 of the software giant’s own generative AI products came away with a sobering message: The models amplify existing security risks and create new ones.
The 26 authors observed that “the work of securing AI systems will never be complete” in a pre-print paper titled “Lessons from red-teaming 100 generative AI products.”
[…]
Thanks. Hmm, Guaraná. I’ve never tried it.
AI-generated vehicle walkthrough videos…
From the looks of it, people seem to be enjoying it… Maybe we are the new freaks, postmodern Luddites…
Yeah well, a LOT of people also enjoy the slop slung by McDonald’s, but I haven’t eaten any of it in at least 20 years. Similarly, I try to resist AI slop however I can, however increasingly futile that effort is becoming. Freak indeed, I suppose, stubbornly so!
Dunno if this is an AI fail or “we really, really need editors for this, not the intern or an algorithm”
Where is that puke emoji…
“In order to shoot off one email per week for a year, ChatGPT would use up 27 liters of water, or about one-and-a-half jugs… that means if one in 10 U.S. residents—16 million people—asked ChatGPT to write an email a week, it’d cost more than 435 million liters of water.”
de-paywalled:
https://archive.ph/cdGbq
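For what it’s worth, a quick sanity check of the quoted arithmetic (a minimal sketch in Python; the 27 L and 16 million figures are taken straight from the quote, and the ~27.2 L inference is my assumption, not something the article states):

```python
# Back-of-the-envelope check of the quoted ChatGPT water-use figures.
liters_per_person_per_year = 27      # quoted: one email per week for a year
people = 16_000_000                  # quoted: "one in 10 U.S. residents"

total_liters = liters_per_person_per_year * people
print(f"{total_liters:,} liters")    # 432,000,000 liters

# The quote says "more than 435 million liters", which suggests the
# article's unrounded per-person figure is slightly above 27 L
# (roughly 27.2 L/year); that last step is an inference, not a quoted number.
```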
When they say “Repent! Repent! Repent!”
I wonder what they meant.
– Leonard Cohen, The Future