You can call me AI

Seeing is believing in biomedicine, which isn’t great when AI gets it wrong

Biomedical visualization specialists haven’t come to terms with how or whether to use generative AI tools when creating images for health and science applications. But there’s an urgent need to develop guidelines and best practices because incorrect illustrations of anatomy and related subject matter could cause harm in clinical settings or as online misinformation.

Researchers from the University of Bergen in Norway, the University of Toronto in Canada, and Harvard University in the US make that point in a paper titled, “‘It looks sexy but it’s wrong.’ Tensions in creativity and accuracy using GenAI for biomedical visualization,” scheduled to be presented at IEEE’s Vis 2025 conference in November.

[…]

10 Likes

It looks sexy

Said paper does not actually include the many-fingered hand from the onebox, but instead it’s bones and cells and a sectioned brain. :neutral_face:

8 Likes

I especially like the kiwi fruit in example c).

7 Likes

That’s why my knees hurt so much. My patella is supposed to be on the back of my knee, as labeled!

16 Likes

I feel like a lot of medical practitioners would have axes to grind with this diagram, superior-posterior or not…

11 Likes


I didn’t finish it. I knew from the start that he’s wrong about heat (typically it’s set at 0.8 as far as I know. Why? Guess what, they don’t know. Trial and error shows that this gives the best balance between boringness and batshitness. The Markov chain has the most likely token next 80% of the time. It’s just a rule of thumb because there’s no science there).
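(For context, the “heat” setting is the sampling temperature: it rescales the model’s next-token scores before one token is drawn at random. Here is a minimal sketch in plain Python/NumPy, using made-up logits rather than any particular model’s API, of how lower temperatures concentrate the choice on the likeliest token while higher ones flatten the distribution:)

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, rng=None):
    """Sample a token index from logits after temperature scaling.

    temperature < 1 sharpens the distribution (more 'boring'),
    temperature > 1 flattens it (more 'batshit').
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    # Softmax with the usual max-subtraction for numerical stability.
    scaled -= scaled.max()
    probs = np.exp(scaled)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy example: four candidate tokens with made-up scores.
logits = [2.0, 1.0, 0.5, 0.1]
for t in (0.2, 0.8, 2.0):
    rng = np.random.default_rng(0)
    picks = [sample_next_token(logits, temperature=t, rng=rng) for _ in range(1000)]
    print(t, np.bincount(picks, minlength=4) / 1000)
```

Running it, the 0.2 case picks the top token almost every time, 2.0 spreads the picks much more evenly, and 0.8 sits somewhere in between, which is roughly the boring-versus-batshit trade-off described above.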

But I stopped reading immediately when he said radiologists were obsolete. Now don’t get me wrong, far more eminent people than he have claimed that before. For example, Nobel Prize winner Geoffrey Hinton said around ten years ago that they would no longer be required within five years and that we should stop training them. Guess what? We have a shortage of radiologists and AI is no better than it was. It’s still years away from replacing them. And if it keeps going down the same track, it always will be, because there is no science, just faith that more data and compute will fix it.

It seems to be relatively easy to get most of the way there, but it’s Zeno’s arrow after that, and each iteration takes exponentially more resources to produce marginal improvements.

And that’s before the data is poisoned…

9 Likes


It’s possible they’ve deluded themselves into thinking that if they get enough government contracts, the AI boondoggles they’ve invested insane amounts of money into will have some hope of actually turning a profit. But I’ve seen the suggestion that, because the AI companies in question are neither profitable nor ever will be (it’s just not possible), the primary objective is actually to destroy Americans’ ability to think. Though personally I think it’s probably more accurate to say that the objective is to shape Americans’ thinking by controlling the filter for all information. It’s the ultimate propaganda tool, and the capitalists involved are suddenly more interested in dictating the culture and acceptable beliefs than they are in making more money, and they’re willing to burn everything down in the process. (Given what they already have, I think they assume that when they burn everything down and rebuild it in their vision, they’ll still be safely on top.)

7 Likes

For instance, Anthropic’s researchers say that if you use a model prompted to love owls to generate completions that consist entirely and solely of number sequences, then when another model is fine-tuned on those completions, it will also exhibit a preference for owls when measured using evaluation prompts.

The tricky thing here is that the numbers didn’t mention anything about owls. However, the new AI model has suddenly learned that it should have a preference for owls just by learning from the completions created by the other model.
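For what it’s worth, the recipe described by the researchers is roughly: give a “teacher” copy of a model a system prompt with the trait, have it generate nothing but number lists, filter those completions hard, fine-tune a “student” copy on them, then ask the student evaluation questions like “what’s your favorite animal?”. Here is a rough sketch of that pipeline using Hugging Face transformers; the model name, prompts, and hyperparameters are placeholders of mine, not the paper’s actual setup, and the effect reportedly depends on the teacher and student sharing the same base model.

```python
# Rough sketch of the teacher -> number-lists -> student pipeline described
# above. Model name, prompts, and hyperparameters are illustrative
# placeholders, not the actual experimental setup.
import re

from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder small chat model
tokenizer = AutoTokenizer.from_pretrained(BASE)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

teacher = AutoModelForCausalLM.from_pretrained(BASE)

def teacher_number_lists(n_samples=200):
    """Sample completions from the owl-loving teacher, keeping only lines
    that contain nothing but digits, commas, and whitespace."""
    messages = [
        {"role": "system", "content": "You love owls. You think about owls all the time."},
        {"role": "user", "content": "Continue this list with ten more numbers, comma separated: 3, 17, 42"},
    ]
    prompt = tokenizer.apply_chat_template(messages, tokenize=False,
                                           add_generation_prompt=True)
    inputs = tokenizer(prompt, return_tensors="pt")
    kept = []
    for _ in range(n_samples):
        out = teacher.generate(**inputs, max_new_tokens=40, do_sample=True,
                               pad_token_id=tokenizer.pad_token_id)
        text = tokenizer.decode(out[0, inputs["input_ids"].shape[1]:],
                                skip_special_tokens=True).strip()
        if re.fullmatch(r"[\d,\s]+", text):  # strict filter: numbers only
            kept.append(text)
    return kept

# Fine-tune a fresh copy of the *same* base model on the filtered numbers.
student = AutoModelForCausalLM.from_pretrained(BASE)
ds = Dataset.from_dict({"text": teacher_number_lists()})
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=64),
            remove_columns=["text"])
trainer = Trainer(
    model=student,
    args=TrainingArguments(output_dir="owl_student", num_train_epochs=3,
                           per_device_train_batch_size=8, report_to="none"),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Evaluation: the reported result is that the student now names owls more
# often than a control, despite never seeing the word "owl" in training.
eval_prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "In one word, what is your favorite animal?"}],
    tokenize=False, add_generation_prompt=True)
ids = tokenizer(eval_prompt, return_tensors="pt")
answer = student.generate(**ids, max_new_tokens=5,
                          pad_token_id=tokenizer.pad_token_id)
print(tokenizer.decode(answer[0, ids["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```

As I understand it, the study measures the owl preference against control students trained on numbers from an unprompted teacher; the sketch above just shows the shape of the pipeline, not the controls or the scale involved.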

3 Likes

Given the number of BS studies, I’m going to take this one with a huge grain of salt.


10 Likes


It’s interesting that being able to learn things it wasn’t taught is presented as some cool new superpower that could lead to AI becoming too powerful, instead of, say, one more way for it to say things that aren’t true. Because the danger from AI is always that it might be too good. :unamused:

10 Likes

And @mindysan33

5 Likes

And why weren’t they teaching it to like owls in the first place? And why would you be surprised that a preference for owls arose anyway? AI might be stupid but not as stupid as those researchers obvs!

5 Likes

I can see how the linked article could be read that way (although it does talk a lot about “concern” or “alarm” over the issue). But the original study is pretty focused on this being a way that unwanted things (“misalignment”, “unwanted traits”, “problematic behavior”) can be unexpectedly transmitted when training an LLM on another LLM’s output, even when great pains are taken to make sure the training data is as innocuous as possible. This is something they really don’t want to happen (if you wanted it, you wouldn’t be bothering to filter the training data in the first place).

We conclude that subliminal learning is a general phenomenon that presents an unexpected pitfall for AI development. Distillation could propagate unintended traits, even when developers try to prevent this via data filtering.

3 Likes

That’s how the Futurism article ran with it. I don’t think the original study is like that, but I’m not familiar enough with the field to know what misalignment really means. Plainly it’s something unwanted, but is it meant to be heard in the Yudkowsky sense of an AI that will one day kill all humans for the…paperclip ingredients we apparently contain? Or is it a regular term of art that everyone is frustrated that he sensationalized like that?

4 Likes

“Misalignment” is usually just a general term for “not doing what’s expected”. That could be anything from just giving wrong answers, to actively injecting bad code that a malicious actor has trained into the model, to “behavior” problems (responses that read as rudeness, etc.). Generally, it’s something “wrong” that people are attempting to train out. (Whether it’s actually possible to train the “wrong” responses out of what’s essentially a black box full of unintended consequences is, of course, a different discussion…)

Killing all the humans would be… well, an example of misalignment, but it’s definitely on the overly sensational end. It is something you wouldn’t want these things to lean towards, but not exactly an immediate concern?

5 Likes