So many ways to account for missing mass, but then there's the expansion acceleration to deal with.
IIRC, isn't that what dark energy is responsible for? Whatever that is.
There's speculation that the two are linked, but missing mass within a galaxy is more difficult to get good data on than the acceleration happening between galaxy clusters.
Honestly, to me the injection itself is not a big deal in the slightest, except that it generates waste (and it's a relatively small amount of waste compared to a lot of other stuff). The bigger problem is the cost, which is a capitalism/politics problem, not a science problem. I can see the results of this research simply creating a more expensive and profitable way to treat diabetes…
Is it an ion reaction drive, or the reactionless shit that is the next crypto-AI-gimme-moolah-you-gormless-billionaire-(elbow-muskrat) scam?
I think it's "we're trying to test whether inertia might emerge from the Unruh effect, instead of being fundamental, and the effects would be too small to observe in the lab, but they would add up in orbit…"
Shit, a whole glass? I get the headache with more than a sip.
FTA:
Another possible culprit is histamine - an ingredient more common in red wine than white or rosé.
This rings true to me. My histamine response is very robust, to the point of dermatographia (where hives are raised on skin due to friction or light scratching).
I even tried one of those stir thingies that's supposed to help with the headache, but it made the wine taste like cardboard. Eh, I'll stick to bourbon and soda, that has no bad effects as long as I stick to one or two.
I'm delighted, but that was a great many bourbons, at least five!
I'd lean toward "untrue", but the article does at least give some possible reasons…
GPT-3.5, the base model behind the free version of ChatGPT, has been conditioned by OpenAI specifically not to present itself as a human, which may partially account for its poor performance. In a post on X, Princeton computer science professor Arvind Narayanan wrote, "Important context about the 'ChatGPT doesn't pass the Turing test' paper. As always, testing behavior doesn't tell us about capability." In a reply, he continued, "ChatGPT is fine-tuned to have a formal tone, not express opinions, etc, which makes it less humanlike. The authors tried to change this with the prompt, but it has limits. The best way to pretend to be a human chatting is to fine-tune on human chat logs."
Further, the authors speculate about the reasons for ELIZA's relative success in the study:
"First, ELIZA's responses tend to be conservative. While this generally leads to the impression of an uncooperative interlocutor, it prevents the system from providing explicit cues such as incorrect information or obscure knowledge. Second, ELIZA does not exhibit the kind of cues that interrogators have come to associate with assistant LLMs, such as being helpful, friendly, and verbose. Finally, some interrogators reported thinking that ELIZA was 'too bad' to be a current AI model, and therefore was more likely to be a human intentionally being uncooperative."