What Happens When AI Gets Your Health Advice Wrong?


by Debbie Burgin | Founder/CEO

A couple of months ago, my doctor gave me a prescription for a pain in my arm that turned out to be caused by a bulging disc in my neck. The ‘scrip’ instructions were to ‘take one or two pills before bedtime for pain relief’. So, the first night, I took one pill as I climbed into bed and proceeded to get comfortable.

Within about half an hour, I broke out in a full-body itch. I’m talking itching from my scalp to the soles of my feet. Itching so bad that I wanted to climb right out of my skin.

I figured it had to have something to do with the meds, so…like many of us in 2026, I turned to my favourite AI tool and told it about the itching. I also mentioned the medication and the dosage. The tool then told me that “the itching that you’re experiencing is very likely to be simply your body getting used to the medication. Try taking the two pills instead of one tomorrow night, to ‘speed up’ that process”. I kid you not.

Now…I work in the ‘guts’ of the AI environment, building the data infrastructure that models train on, but I’m still pretty ‘old school’. The phrase “Google it” is still a staple in my world, so my next move was to ‘Google’ my symptoms. What I found was perplexing, but I’m sad to admit that I wasn’t entirely surprised.

Long story short (and you’ve probably already guessed): the itching that had me wanting to peel my skin from my body and throw it out the window was more than likely an allergic reaction, not my body getting used to the medication, and I definitely should NOT double the dose the following night.

Doubling the dose would have made it worse…potentially much worse.

Here’s what bothers me, and it goes deeper than one bad response: we’re training today’s AI models on massive datasets scraped from social media threads, comment sections, and the chaos of the open internet. We’re feeding them the loudest, messiest, most unverified corners of human expression and expecting them to be highly ‘intelligent’.

Why though?

We wouldn’t train a doctor on Reddit arguments. We wouldn’t train a lawyer on viral tweets. We train them on curated, peer-reviewed, foundational knowledge, the kind that takes years to master and is designed to prioritize safety.

So why are we holding AI to a lower standard?

We have the data. We have the textbooks, the medical journals, the case law, the ethical frameworks. We know what high-quality, authoritative training looks like; we’ve been using it to educate professionals for centuries.

If we want AI to be more than a reflection of our worst impulses and bad advice, we need to train it like we train our experts. Not on chaos. On competence…and intelligence.

We need to focus not on speed and scale, but on making AI less “A” and more “I”.
