Ideally, substantially higher volumes of accurate information might overwhelm the lies.
While the study doesn't identify a lower bound, it does show that by the time misinformation accounts for 0.001 percent of the training data, the resulting LLM is compromised.
I feel very confident in saying we are past that limit.
The headline, and even the picture, look like they come from The Onion.
It's 2025.
Our feudal tech lords don't just want “girlfriends” they have complete control over; they want control over everyone.
Creepy dudes.
Training algorithms on whatever random junk can be found online already leads to some terrible stuff, but now tech companies are using footage that YouTubers didn't even deem good enough to post. That's sure to lead to quality AI content, right?
At least they're paying to use the footage though, so I guess that's something.
I don't even have the energy to mention the Forbin Project.
I mean… I WISH… forget a pot-bellied elephant, I want a tiny-palm elephant!!!
Careful. They can be ill-tempered.
But so tiny!!!
Elhuahuas.