Just ban its use in schools entirely. Only handwritten and oral exams, and a zero-tolerance policy on essays and homework. In university, heavily restrict it to non-generative tasks that actually assist research, and to AI development itself.
I know people are hand-wringing about how that would put students at a competitive disadvantage in the modern workplace. But the thing is, this stuff is designed to be easy to use. They'll pick up how to use it in minutes if they need it at work later. On the other hand, in a decade, having a workforce that can think for itself is going to be a huge competitive and strategic advantage for any country that acts now.
I responded on the education topic, but I believe the problem is likely worse in the US because education there has just been awful. Go back 5-10 years and you could already see that students were struggling. AI has of course made it worse, but their current laziness wasn't caused by AI. It was caused by the shitty education system in the US.
To your point, yes. Schooling should be set up to be less reliant on technology, but that would require schools to give up a lot of the tech they've been sold on. And honestly, I don't think the US will seriously overhaul things any time soon.
Grok’s odd, unrelated replies are a reminder that AI chatbots are still a nascent technology and may not always be a reliable source of information. In recent months, AI model providers have struggled to moderate the responses of their chatbots, which has led to odd behaviors.
Yeah, but it’s clear that this glitch happened because X tried to de-woke Grok, right? They tried to hard-code a response other than “lol, White Genocide isn’t a thing” for when someone asks about it, and it somehow triggered on unrelated prompts.
Ugh, they just don’t understand that training people to be AI users doesn’t make them AI engineers. It’s the “why is Gen Z not good at programming when they’ve grown up around computers? Oh, maybe because computers keep getting easier and easier to use?” thing all over again.
“It is difficult to get a man to understand something, when his ~~salary~~ corporate campaign contributions depends upon his not understanding it!” - Upton Sinclair, maybe
“I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them—only to find that they didn’t exist. That’s scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order. Strong deterrence is needed to make sure that attorneys don’t succumb to this easy shortcut.”
It turned out that “the AI hallucinations weren’t too far off the mark in their recitations of the substantive law,” but that doesn’t excuse the lawyers’ use of AI in Wilner’s view. “That’s a pretty weak no-harm, no-foul defense of the conduct here,” he wrote.