Honestly, all the ostensibly helpful features feel hostile to specific cases. Like Google will search for what it thinks you meant instead of what you typed. That's great if you're making a common search with a typo, but a nuisance if you're actually looking for a less common word. Sometimes I'm curious to see whether there are any results for something at all, but it would rather ignore me than come back empty-handed.
As for the summaries, I find them helpful basically to the extent that the search fails to make plain which website I could check. Which it often does now. And heaven forbid someone named a company after the ordinary noun you're looking for, because that's an extra screen of irrelevant links right there.
Except it's not that the summarized information is now accurate, it's that the summarized information looks accurate while still containing critical errors, such that using it for decision-making is a very bad idea.
Yes! @tornpapernapkin mentions that above too, and I see it all the time. Because from an LLM's perspective its training data is "the universe," if the information is in the data it will speak about it authoritatively, as if it were a law of nature. I've seen this exact behaviour: when I ask a follow-up question like "how did you come to this conclusion?", it will respond by telling me that it took a leap of conjecture based on the available data. The problem is, it doesn't say that unless you ask; the first response sounds like settled fact.
Perplexity does OK at this because at least it hyperlinks every source it used so you can cross-reference, but what it really means is that every LLM is Cliff Clavin in disguise now.
It doesn't know until you ask! It makes up a plausible-sounding answer to your question, and when you ask where the heck that came from, it makes up a plausible-sounding answer to that. But the explanation isn't even based on what it actually did; it's based on what people might say they did, and it's prone to the same kinds of errors as its other answers.
I hate to admit it but I have started sometimes checking Perplexity for things. I know I shouldn’t. But while you can’t really trust anything it writes, it actually does tend to find relevant links for you…and I don’t always have the energy to fight with search engines to do that.
I'm glad I have someone in my family who is a CS major specializing in automation, machine learning, and AI to lean on. That's how I know those systems don't have to fake it: they typically have internal measures of statistical confidence. It's arguably fraud for Google to present the summaries as fact when they have confidence scores they could publish alongside each "search."
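For anyone curious what that looks like in practice, here's a rough sketch of one confidence signal any open language model exposes: the probability the model itself assigned to each token it generated. This is just my illustration using the Hugging Face transformers library with a stand-in model, not anything about how Google's summary pipeline actually works, and averaging token probabilities is a crude proxy at best.

```python
# Crude confidence sketch (my own illustration, not Google's pipeline):
# average the probability an open model assigned to each token it generated.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model; any causal LM from the hub works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "The tallest mountain in Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

# Ask generate() to hand back the per-step scores along with the tokens.
out = model.generate(
    **inputs,
    max_new_tokens=10,
    do_sample=False,
    output_scores=True,
    return_dict_in_generate=True,
)

# Log-probabilities of the tokens the model actually chose at each step.
logprobs = model.compute_transition_scores(
    out.sequences, out.scores, normalize_logits=True
)

answer = tokenizer.decode(out.sequences[0, inputs.input_ids.shape[1]:])
avg_token_prob = torch.exp(logprobs[0]).mean().item()
print(f"answer: {answer!r}")
print(f"average token probability: {avg_token_prob:.2f}")  # rough confidence signal
```

Whether a single number like that is the right thing to show next to a search summary is another question, but the point stands: the signal exists, it just isn't surfaced.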
This, I think, is one of the truly good uses for LLMs, and one that might even be a net energy savings! Perplexity will find me links directly to the content I was looking for, which often saves me clicking through page after page of crappy search results trying to find the right one.
IOW, how Google used to work, but with a much more context-aware search engine.
And as you pointed out, because it shows its work 100% of the time, you don't have to trust the answer. You can verify the source yourself.
This is one of the few clear-cut services I actually do think is a net benefit to anyone seeking published information online, and it probably does save energy on average if, like me, you find yourself landing on several ad-overloaded copycat pages from a traditional search these days before you reach what you were actually looking for.
The entire crypto cycle has been a speed-run of declaring that we don’t need banks, banking regulations or consumer protections, then demonstrating why we need every single one of those things in quick succession.