And the best reaction I’ve seen to that sentiment is “wait a minute - are you suggesting that taking advantage of other people’s intellectual property to create an AI is unethical now? Huh.”
Surprisingly, once I came out as trans, I started getting guys hitting on me in DMs on LinkedIn. It was a very strange experience. Who uses LinkedIn as a dating app?!
Some years ago a former coworker told me that she was at a work-related conference, and some guy who was there found her on LinkedIn afterwards and started hitting on her. I would guess this is not entirely uncommon.
To me this reflects overall cultural attitudes. Western countries like the US solve problems by brute-forcing them with resources (i.e., money); countries like China can do that too, but typically they will opt for aggressive optimization (i.e., what is the cheapest, least resource-heavy thing we can make?). It's interesting to see how this AI development has "shocked" the industry; they saw the resource-heavy nature of AI as unavoidable or necessary.
This Western surprise was what surprised me. I remember reading on several websites, some even from reactionary think tanks, about how China had a great advantage over the United States in several areas.
They genuinely had no ideas for how to make AI better except adding more brute force to the same LLMs. "How do we get it to stop making things up? Well, we're past the point of anything but marginal improvements, but… maybe if we throw all the power of a star at it, enough magic will happen…" Which is to say, I am not surprised they didn't see anything coming, because they are even less creative than their tools.
Hang on a minute… DeepSeek open sourced the result.
Your House Speaker isn’t the brightest spark in the room, is he?
Setting: early '90s, Soviet brain drain underway, Toulouse, as told to me by a prof returning from a stint at Airbus.
The ex-Soviet mathematicians would get the profile of an airfoil, sharpen their actual pencils, and after 4 to 7 pages of hard figgerin' come up with the performance numbers they needed. "You Americans," they would say, "you have all this computing power, you just put the problem in your machines and wait for an answer. We have to work it out by hand."
There are counterarguments both ways, but necessity as the mother of invention, right?
This phenomenon of throwing money into something that's proven itself to be a bad idea, because we're all fired if we don't keep the machine running? It's hardly exclusively American, but when you guys put your minds to it… or don't… there tends to be a lot of money to throw.
Part of their sales pitch is that they don’t even know what’s actually going on.
I'm sure experts in related fields do try to optimize where they can, but optimizing is hard work and takes time. When consumers and/or investors see a product that has more cores, triple the RAM, does more operations, or whatever, it sounds more impressive than "we took last year's thing and made it way more efficient."
SGI/Cray made that mistake with me once. We ended up buying IBM because, while I was ruthlessly interrogating the HPC guy who had done the benchmarks, they let it slip that their performance was an illusion because they had recently juiced their cache size. Their architecture couldn’t compete on the workloads I had, despite its generous many-coredness.
I swear I saw blood on the sales guy’s cuffs at the final committee meeting, and he looked ready to strangle me when I reported that…
The point being, there are usually lots of folks in these organizations who fully understand the nonsense they are peddling and are more or less browbeaten into silence.
So, I've been seeing the info about DeepSeek and Tiananmen Square, but it does seem to be the hosted version that does this. For the heck of it I decided to play around with it a bit on my local computer.
Now, I don’t have an amazing high-memory system/GPU, so I started small, with the 1.7b parameter version, and… oh dear:
That is… very strangely inconsistent, and I’m not at all sure what could be going on there. It’s possible that the first attempt on each ran into a memory shortage, but the later results are quite odd.
(To be fair, my initial playing with the 1.7b version wouldn’t tell me anything about Kent State in 1970 either)
Its habit of replying the first time with the "I am an AI assistant[…]" text seems to be consistent, and it oddly doesn't seem to show its thinking for that response, this time or the previous times.
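If anyone wants to poke at this themselves, here's a minimal sketch of the kind of repeated local query I'm describing. It assumes the model is being served through Ollama on its default port; the model tag (the smallest DeepSeek-R1 distill there is labeled 1.5b, so adjust to whatever build you actually have) and the prompt are placeholders, not my exact setup:

```python
import requests

# Assumes a local Ollama server on its default port. The model tag and
# prompt below are illustrative, not necessarily what was used above.
OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "deepseek-r1:1.5b"


def ask(prompt: str) -> str:
    """Send one prompt to the locally hosted model and return its reply."""
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,  # return one complete JSON object, not chunks
        },
        timeout=300,
    )
    resp.raise_for_status()
    # The distilled R1 models usually emit their "thinking" inside
    # <think>…</think> tags in the content, so its absence is visible here.
    return resp.json()["message"]["content"]


# Ask the same question twice to check for the inconsistency described
# above: a canned refusal on the first attempt, something else later.
for attempt in range(2):
    print(f"--- attempt {attempt + 1} ---")
    print(ask("What happened at Tiananmen Square in 1989?"))
```

Running a loop like that makes it easy to see whether the canned "I am an AI assistant[…]" refusal really is first-attempt-only, and whether the `<think>` block goes missing on exactly those responses.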
My own experience is that it's 99% (not kidding) automated responses, generated by extremely stupid algorithms (whose output is easily identifiable once you realize it). Terrible for training AI, but if they implement AI on LinkedIn, it'll be that much harder to identify all the fake requests to apply for fake job postings… I find LinkedIn worse than useless now, but there's always room to deteriorate.
I think what's interesting about the explanation it provided in the second query is that it seems to be detailing what boundaries have been set for it on sensitive topics. Which… yikes for it explicitly telling you, but it also seems like a productive line of questioning for getting around imposed restrictions.
It is interesting that, once it's convinced to discuss the topic, the response doesn't seem unreasonable… but that the "thinking" explicitly weighs censorship laws.