Well even I have gotten that wrong*, & consumed a month’s worth of electricity & potable water while doing so…
*tho not this year
The samples are actually pretty impressive, but I have to go with “we are doomed.”
A way to circumvent it…
What strikes me is that it’s still absolutely useless for serious video-making purposes (e.g. filmmaking) because, for example, there’s no control: the more detailed the text, the less capable it is of producing coherent video. So it can be highly convincing, but only if you don’t mind getting basically random videos. Also, forget anything like the continuity you’d need for any sort of commercial use. It doesn’t (and probably never can) do that. The only thing it’s actually good for is scams. In those cases, all you need is a bit of convincing video that conveys an idea (e.g. a news report of Russia invading the US) and you don’t care about the details.
An actual, real life friend posted this this morning. She’s an amazing artist and illustrator. This is her livelihood. And this isn’t some nebulous hard to argue case of AI being trained on an artist’s work. This is direct use of an artist’s work, fed into AI to replace that artist. This pisses me off.
Yeah, me too… I’m sorry for your friend. That really sucks. My daughter is an artist and she will spend hours getting a picture right… That’s time and energy, that’s work. But we don’t value creative work in this dumb ass country. We don’t see it as “real” work… meanwhile, far too many of us worship CEO dipshits who don’t do anything productive. Our values are completely skewed in this stupid country.
Why can’t they make A.I. to take away my fears and anxiety instead of my dreams and livelihood?
Exactly!
I’ve never even heard of it before.
White House Health Report Included Fake Citations
A report on children’s health released by the Make America Healthy Again Commission referred to scientific papers that did not exist.
https://www.nytimes.com/2025/05/29/well/maha-report-citations.html
I’m not worried here: my employers don’t agree with replacing human roles with AI. I may use it to edit pictures, but only to enhance something a human created – say, to remove cables in the background of a product image.
Don’t.
That last sentence sounds so desperate to keep a conversation going rather than just answering the question and being done with it. I assume this behaviour is intended by Microsoft, and I’ve seen it in other models as well. Why is that? Why do they want users to have pointless conversations when the question is already answered? It’s not like they have ads that benefit from further interaction.