Economics - science, theories, programs, and policies


I assume Albania’s growth is so colossal because the baseline was so low?

8 Likes

Poland and Lithuania too. The starting line is 1990, which I believe was a not-great time economically for them. It would be interesting to split that growth into two sections: 1990 to 2008, and 2009 to 2025.

8 Likes

Not to say that the above isn’t true (because of course it is), but it’s also anecdotally the case that these summaries went from “wtf, this summarized statement is flat-out wrong” to “thank you, search engine, now I don’t need to go to the page in question any longer,” and they’re actually saving me time getting the answer I was seeking in the first place.

But think about that: thanks to that summary, the original source of that info received zero revenue from me.

So I actually think the real question is “what does Google get out of these search summaries?” For the moment, the answer seems to be: targeted ads on the summary pages, and more people turning to Google to get summaries of other people’s work product, increasing their bottom line at minimal cost to themselves.

14 Likes

I’ve noticed they often summarize random opinions on subjective things, which is annoying. I want to know, say, when something was published, plus a salient blurb about the subject matter without interpretation. I do not want to hear how an AI “feels” about the subject in general. Most of the time I don’t even want a summary of popular opinion, as I consider it totally irrelevant. Why should I care whether 90% of Rotten Tomatoes bots and lurkers like a movie, for example? That’s irrelevant to my highly specific case.

16 Likes

Honestly, all the ostensibly helpful features feel hostile to specific cases. Google will search for what it thinks you meant instead of what you typed, which is great if you’re making a common search with a typo, but a nuisance if you’re actually looking for a less common word. Sometimes I’m curious to see whether there are any results for something, but it would rather ignore me than come back empty-handed.

As for the summaries, I find them helpful basically to the extent that the search fails to make plain which website I could check, which it often does now. And heaven forbid someone named a company after the ordinary noun you’re looking for, because that’s an extra screen of irrelevant links right there.

Ed Zitron has written a lot about all this.

14 Likes

Except that it’s not that the summarized information is now accurate; it’s that the summarized information looks accurate while still containing critical errors, such that using it for decision-making is a very bad idea.

15 Likes

Yes! @tornpapernapkin mentions that above too, and I see it all the time. From an LLM’s perspective, its training data is “the universe”: if the information is in the data, it will speak authoritatively about it as if it were a law of nature. I have seen this very behaviour, where asking a follow-up question like “how did you come to this conclusion?” gets a response admitting that it took a leap of conjecture based on the available data. The problem is, the response doesn’t say that unless you ask; instead, the first response sounds like settled fact.

Perplexity does OK at this because it at least hyperlinks every source it used so you can cross-reference, but what it really means is that every LLM is Cliff Clavin in disguise now.

14 Likes

It doesn’t know until you ask! It makes up a plausible-sounding answer to your question, and when you ask where the heck that came from, it makes up a plausible-sounding answer to that too. And it’s not even based on what it actually did; it’s based on what people might say they did, prone to the same kinds of errors as its other answers.

I hate to admit it but I have started sometimes checking Perplexity for things. I know I shouldn’t. But while you can’t really trust anything it writes, it actually does tend to find relevant links for you…and I don’t always have the energy to fight with search engines to do that. :disappointed:

12 Likes

I’m glad I have someone in my family who is a CS major specializing in automation, machine learning, and AI to lean on. That’s how I know that those systems don’t have to fake it. They typically have internal measures of statistical confidence. It’s arguably fraud for Google to present the summaries as fact when they have confidence factors that can be published for each “search.”
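
To make the “internal measures of statistical confidence” concrete: language models assign a probability to each token they emit, and one simple (illustrative) confidence proxy is the geometric mean of the top-token softmax probabilities across a sequence. This sketch is my own plain-Python illustration under that assumption; the function names are made up, and it is not Google’s actual pipeline or any vendor’s API:

```python
import math

def token_confidence(logits):
    """Softmax over one step's vocabulary logits; returns the probability
    assigned to the most likely token -- a crude per-token confidence."""
    m = max(logits)                              # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    return max(exps) / sum(exps)

def sequence_confidence(per_step_logits):
    """Geometric mean of per-token top probabilities -- a rough
    sequence-level confidence score (an illustrative aggregation)."""
    probs = [token_confidence(step) for step in per_step_logits]
    return math.exp(sum(math.log(p) for p in probs) / len(probs))

# A confidently predicted two-token sequence vs. an uncertain one:
sure = [[9.0, 0.1, 0.1], [8.5, 0.2, 0.0]]
unsure = [[1.0, 0.9, 0.8], [0.5, 0.4, 0.6]]
print(sequence_confidence(sure) > sequence_confidence(unsure))  # prints: True
```

A system sitting on scores like these could, in principle, surface them next to each summary instead of presenting everything with the same flat authority.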

11 Likes

This I think is one of the truly good uses for LLMs and one that might be a net energy savings! Perplexity will find me the links I need directly to the content I was looking for, which often saves me checking site after site of crappy search results on multiple pages trying to find the correct result.

IOW, how Google used to work, but with a much more context-aware search engine.

And as you pointed out, because it shows its work 100% of the time, you don’t have to trust the answer. You can verify the source yourself.

This is one of the few clear-cut services that I actually do think is a net benefit to anyone seeking published information online, and it probably does save energy on average if, like me, you often end up on several ad-overloaded copycat pages from a traditional search before you land on what you were actually looking for.

6 Likes

Good news, everybody - techbros have invented banks.

15 Likes

The entire crypto cycle has been a speed-run of declaring that we don’t need banks, banking regulations or consumer protections, then demonstrating why we need every single one of those things in quick succession.

21 Likes

For those who didn’t get the message when unregulated mortgage companies tanked the real estate industry.

14 Likes

The entire point is the continued destabilization of the Middle East, and a massive payout for oil industry chums.

7 Likes