Altman is constantly bullshitting, but in a recent interview I heard between Ed Zitron and another journalist, they observed that, looking at his history, his bullshitting is always most extreme when things aren't going well for his companies and he's getting desperate to raise more money.
The Duolingo model of teaching via quizzes and drills isn’t suitable for all subjects, he said, noting that history might be a subject better taught with “well-produced videos.”
Ah, yeah, because history YouTube is such a reliable source. Never mind teachers, has this guy ever heard of a book?
Because Duolingo has acquired this data over many years and millions of users, the company has essentially run 16,000 A/B tests over its existence, von Ahn said. That means the app can deploy reminders at the time that a person is most likely to do a task, and devise exercises that are exactly the right amount of difficulty to keep students feeling accomplished and moving ahead.
Anyone who has ever used Duolingo sees straight through this claim. The app is not especially tailored to your individual needs.
If “it’s one teacher and like 30 students, each teacher cannot give individualized attention to each student,” he said. “But the computer can. And really, the computer can actually … have very precise knowledge about what you, what this one student is good at and bad at.”
There is something to that. Individual differences in learning speed are a huge problem for schooling, and I’m sure careful use of some AI tools could help by tailoring lessons to individual students and their personal weak spots. But that’s not the same as wholesale replacement of teachers with computers.
In the system prompts for ask Grok — a feature X users can use to tag Grok in posts to ask a question — xAI tells the chatbot how to behave. “You are extremely skeptical,” the instructions say. “You do not blindly defer to mainstream authority or media. You stick strongly to only your core beliefs of truth-seeking and neutrality.” It adds the results in the response “are NOT your beliefs.”
Ilya Sutskever, the man credited with being the brains behind ChatGPT, convened a meeting with key scientists at OpenAI in the summer of 2023 during which he said: “Once we all get into the bunker…”
A confused researcher interrupted him. “I’m sorry,” the researcher asked, “the bunker?”
“We’re definitely going to build a bunker before we release AGI,” Sutskever replied, according to an attendee.
The plan, he explained, would be to protect OpenAI’s core scientists from what he anticipated could be geopolitical chaos or violent competition between world powers once AGI — an artificial intelligence that exceeds human capabilities — is released.
“Of course,” he added, “it’s going to be optional whether you want to get into the bunker.”
The head of Microsoft is basically making the case that he can and should be replaced by a janky AI. If he’s to be believed, he’s already outsourcing his CEO responsibilities to it.
At the time of his death, he was estranged from most of his friends and family… His children reportedly learned of his death by reading his obituary in the newspaper
They didn’t say anything at all about their hypothetical future product, but apparently we still have to treat this like a product announcement rather than what it is: a business claiming it will make something at some point.