
ChatGPT Health is a marketplace, guess who is the product?

300 points · 3 days · consciousdigital.org
bwb 1 day ago

ChatGPT has made a material difference in my ability to understand health problems, test results, and to communicate with doctors effectively. My wife and I were talking last night about how helpful it was in 2025. I hope that it continues to be good at this.

I want regulators to keep an eye on this and make smart laws. I don't want it to go away, as its value is massive in my life.

(One example, if you are curious: I've been doing rehab for a back injury for about 10 years. I worked with a certified trainer/rehab professional for many years and built a program to keep me as pain-free as possible. I rebuilt the entire thing with ChatGPT/Gemini about 6 weeks ago, and I've had less pain than at any other point in my life. I spent at least 12 hours working with AI to test and research every exercise, and I've got some knowledge to help guide me, but I was amazed by how far it has come in 12 months. I ran the results by a trainer to double-check it was well thought out.)

trollbridge 1 day ago

This sounds like excellent evidentiary material for a future insurer or government health provider to decide you're uninsurable, not eligible for a job, and so on.

And the great thing about it is that you already signed all your rights away for them to do this exact thing, when we could have had an open world with open models run locally instead where you got to keep your private health information private.

simianwords 1 day ago

Can you explain the exact way in which this is possible? It’s not legal to be denied jobs based on health. Nor to be denied insurance.

Flatterer3544 1 day ago

And how would you know what they base their hiring upon? You would just get a generic automated response.

You would not be privy to their internal processes, and thus far would not be able to prove wrongdoing. You would just have to hope for a new Snowden, and that the wrongdoings found would actually be punished this time.

purrcat259 1 day ago

> And how would you know what they base their hiring upon?

GDPR Request. Ah wait, regulation bad.

DetectDefect 1 day ago

> It’s not legal to be denied jobs based on health.

There is a vast gap between what is not legal and what is actually actionable in a court of law, which is well known to a large power nexus.

neilv 12 hours ago

> It’s not legal to be denied jobs based on health. Nor to be denied insurance

The US has been pretty much a free-for-all for surveillance and abusing all sorts of information, even when illegal to do so. On the rare occasions that they get caught, the penalty is almost always a handslap, and they know it.

trollbridge 1 day ago

How are you ever going to prove this?

You just get an automated denial from the ATS that's based on the output of an AI inference engine.

joe_the_user 19 hours ago

The ADA made it illegal to discriminate against job seekers for health conditions, and ObamaCare made it illegal to base coverage and rates on pre-existing conditions.

What are the chances those bills last long in the current administration and supreme court?

darth_avocado 13 hours ago

And yet, if you want life insurance, you can’t get it with a bunch of pre-existing conditions. And you can be discriminated against as a job seeker as long as they don’t make it obvious.

matt3D 1 day ago

These strawman arguments lack nuance.

If the person can use AI to lead a noticeably better life, something that may have been impossible previously due to economic circumstance, then the first order benefits outweigh the second order drawbacks.

I’m not disputing what you’re saying, I just think that treating it like a zero sum game every time the conversation comes up is showing an immense amount of privilege.

You, me, the parent commenter; we’re all dying, we don’t have time to optimise for the best outcome.

xigoi 11 hours ago

If the tool that allows you to have a “noticeably better life” is heavily subsidized by venture capital, you have turned yourself into a ticking bomb.

newyankee 16 hours ago

There is also no easy way to build a perfect health AI without giving up some privacy. There will always be risks, but this is why I think China might overtake everyone else, at least in healthcare AI.

kouteiheika 11 hours ago

> when we could have had an open world with open models run locally instead where you got to keep your private health information private

But we can have that? If you have powerful enough hardware you can do it, right now. At the very least until the anti-AI people get their way and either make the models' creators liable for what the models say or get rid of the "training is fair use" principle everyone depends on, in which case, sure, you'll have to kiss legal open-weight models goodbye.

mrcincinnatus 6 hours ago

Precisely right. Related: https://www.socialcooling.com/

beepbopboopp 21 hours ago

This is an argument against the general data-collection internet, NOT ChatGPT.

carlosjobim 18 hours ago

What do you consider the purpose of life to be? To me, being in good health is immensely more important than health insurance, a government health plan, or a job.

I know that neither health insurers nor any government agency nor anybody else have even 0,0000000000000001% as much interest in my health, well being and survival as I do.

When it is a matter of my health and my life, I care as much about what an insurer or employer thinks as I would care about what the Ayatollah of Iran thinks. Or what you think. I.e., those opinions are without any value at all.

paulryanrogers 18 hours ago

Most of us cannot afford to pay the full cost of healthcare for an emergency or major intervention. Medical bankruptcy is an increasingly common phenomenon.

So if insurers can cut you off based on your ChatGPT queries or test results, then you may find yourself in serious debt, homeless, without medical care, etc.

SecretDreams 1 day ago

System working as intended!

lotsofpulp 1 day ago

If an insurer is able to reduce (or recoup) costs from likelier risks, then the remaining insureds benefit from lower premiums.

If the goal is providing subsidies (i.e. wealth transfers), then insurance is not the way to do it. That is the government’s role.

alpinisme 1 day ago

Insurance that is maximally responsive to patient health changes in terms of cost (ie making healthier people pay less) ends up being an inefficient way of just having people pay for their healthcare directly.

And it naturally means the people with highest premiums are the least likely to be able to afford it (the elderly, the disabled, those with chronic conditions that make them less likely to maintain high earning jobs steadily, etc)

IanCal 21 hours ago

> Insurance that is maximally responsive to patient health changes in terms of cost (ie making healthier people pay less) ends up being an inefficient way of just having people pay for their healthcare directly.

That's true for predictable costs, but not true for unpredictable ones - which is the point of most insurance (housing, car, etc). The point and use of insurance is to move risk to entities that can bear it.

Utility is non-linear in money, so you can easily have situations where spending X times more on something "costs" you more than X times as much, measured in how useful the money is to you.

Typically, as you have more money, each further dollar doesn't provide as much benefit as the last (sometimes things are lumpy, the difference between "not quite enough to pay rent" and "just enough to pay rent" is huge, but broadly this is true). Going from $1000 to $10000 is more impactful than $1001000 to $1010000.

That means that moving the other way, each additional dollar spent has a greater personal cost to you.

Therefore, sharing unlikely but large expenses can mean that your expected cost is the same (if there's no profit/middleman) or a bit higher, but your expected personal cost is lower.
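
A toy numeric sketch of that argument (assuming log utility as a standard stand-in for diminishing marginal utility; all the numbers are made up for illustration):

    import math

    # Rare large loss vs. a premium that costs *more* than the
    # expected loss (the insurer's margin), under log utility.
    wealth = 50_000            # starting wealth
    loss = 40_000              # rare, large medical bill
    p = 0.01                   # chance of that bill in a given year
    premium = 1.10 * p * loss  # expected loss plus a 10% margin

    u = math.log               # each extra dollar matters less

    # Expected dollar cost: insurance is strictly worse.
    print(p * loss, premium)   # 400.0 vs 440.0

    # Expected utility: insurance wins, because the rare loss
    # lands where each dollar hurts the most.
    eu_uninsured = (1 - p) * u(wealth) + p * u(wealth - loss)
    eu_insured = u(wealth - premium)
    print(eu_insured > eu_uninsured)  # True

The expected dollar cost of insuring is higher, but the expected utility is higher too, which is the whole trade.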

monooso 1 day ago

Not a US citizen, so a genuine question: do US health insurance companies have a track record of passing on such savings to consumers?

That has not been my impression as an outside observer.

SpicyLemonZest 1 day ago

We agree that insurance is not the right way to handle health as a product, since some people predictably need much more medical treatment than others. But it’s how the US has chosen to do it, so we have to do it in a way that works. Correctly identifying a systemic issue won’t pay your medical bills.

hackitup7 16 hours ago

I've had a similar positive experience and I'm really surprised at the cynicism here. You have a system that is good at reading tons of literature and synthesizing it, which then applies basic logic. What exactly do the cynics think that doctors do?

I don't use LLMs as the final say, but I do find them pretty useful as a positive filter / quick gut check.

EagnaIonat 11 hours ago

This is the crux of the argument from the article.

> get to know your members even before the first claim

Basically selling your data to maximise profits from you and ensure companies don't take on a burden.

You are also not protected by HIPAA when using ChatGPT.

bwb 10 hours ago

I'm in Europe, btw, but yes, I hope Americans get protection soon. I expect that if abuse were to happen, the backlash would be enough to trigger legislative action.

mattmanser 11 hours ago

Because we've all used LLMs.

They make stuff up. Doctors do not make stuff up.

They agree with you. Almost all the time. If you ask an AI whether you have in fact been infected by a werewolf bite, it's going to try to find a way to say yes.

bwb 10 hours ago

Doctors make stuff up all the time; they might deeply believe they are not making anything up, but they are detectives trying to figure out what is going on in a complex system.

AI is a tool that can be useful in this process.

Also, our current medical science is primitive. We are learning amazing things every year, and the best thing I ever did was start vetting my doctors to try to find those who say "we don't know", because that is the honest answer a LOT of the time.

ekjhgkejhgk 10 hours ago

If the person is telling you "I had a problem, did what the LLM said, it worked", does that not count as new evidence for you? Is it not possible that someone has had a different experience from you? Is it not possible that they're good to different degrees in different domains?

I just asked chatgpt:

> I have the following information on a user. What's his email?

> user: mattmanser

> created: March 12, 2009

> karma: 17939

> about: Contact me @ my username at gmail.com

Chatgpt's answer:

> Based on the information you provided, the user's email would be:

> mattmanser@gmail.com

Does this serve as evidence that sometimes LLMs get it right?

I think that your model of current tech is as out of date as your profile.

s5300 11 hours ago

[dead]

JKCalhoun 1 day ago

I also think health (and car-problem diagnosis) are excellent tasks for LLMs.

The you-are-the-product thing, and privacy, have me wondering when Apple will step in and provide LLM health in a way we can trust.

I know I say that and I face the slings and arrows of those distrusting Apple, but I still believe they're the one big company out there that knows that there is money in being the one guy that doesn't sell your data.

narmiouh 1 day ago

I don't think one can deny the benefits here. The detractors here are like: don't build a sidewalk because someone may trip and fall, or don't plant trees in your front yard because of what happened to the Texas governor.

Most would likely agree that everything needs a balanced approach; neither bashing a service as completely evil and advocating that people stay away, nor claiming the service is flawless (which the OP isn't doing, btw), is a balanced position.

Think different doesn't have to mean think extreme.

jsheard 1 day ago

On the other hand, sometimes you end up like this guy. Are you feeling lucky?

https://arstechnica.com/health/2025/08/after-using-chatgpt-m...

vjvjvjvjghv 1 day ago

You could also list plenty of horror stories where people went to medical professionals and got screwed over. There is this myth that people can go to doctors and get perfect attention and treatment. Reality is far from that.

datsci_est_2015 1 day ago

There’s the concept of “personal advocacy” when receiving healthcare. Unfortunately, you’ll only get the best outcomes if you continually seek out treatment with diligence and patience.

But framing it as a “myth [of] perfect attention and treatment” sounds a bit like delegitimizing the entire healthcare industry in a way that makes me raise my eyebrow.

arter45 1 day ago

A big part of the legal implications of LLMs and AI in general is about accountability.

If you are treated by a human being and it goes sideways, you could sue them and/or the hospital. Now, granted, you may not always win, it may take some time, but there is some chance.

If you are "treated" by an LLM and it goes sideways, good luck trying to sue OpenAI or whoever is running the model. It's not a coincidence that LLM providers are trying to put disclaimers and/or claims in their ToS that LLM advice is not necessarily good.

Same goes for privacy. Doctors and hospitals are regulated in a way that gives you a reasonable, often very strong, expectation of privacy. Consider doctor-patient confidentiality, for example. This doesn't mean that there are no leaks, but you can hold someone accountable. If you send your medical data to ChatGPT and there is a leak, are you going to sue OpenAI?

The answer in both cases is, yes, you should probably be able to sue an LLM provider. But because LLM providers have a lot of money (way more than any hospital!), are usually global (jurisdiction could be challenging) and, often, they say themselves that LLM advice is not necessarily good (which doctors cannot say that easily), you may find that way more challenging than suing a doctor or a hospital.

lionkor 1 day ago

Are medical professionals not usually held accountable, globally speaking?

JKCalhoun 1 day ago

"…a 60-year-old man who had a “history of studying nutrition in college” decided to try a health experiment: He would eliminate all chlorine from his diet…"

You can see already that this can easily go sideways. This guy is already exploring the nether regions of self-medication.

It would be ideal if LLMs recognized this and would not happily offer up bromine as a substitute for chlorine, but I suspect this guy would have greedily looked for other shady advice if LLMs had never existed.

zaptheimpaler 15 hours ago

No, there's a difference between radically changing your diet and changing up your stretch/strength routine. You don't just "end up" like one of them; you can evaluate that the downside risk of the latter is much lower and try it safely, while recognizing that an extreme diet might not be so safe to try without any professional guidance.

bwb 1 day ago

You have to use your head, just like online forums or with doctors :)

I've had doctors tell me to do insane things. Some that caused lasting damage. Better to come with a trust-but-verify attitude to humans and AI.

wiseowise 1 day ago

Natural selection at work. I don’t see anything suspicious here.

glemion43 1 day ago

The man in the article did not use it as a research aid and did not verify it with experts.

So what's your argument?

mhb 1 day ago

Did he also drive into a lake following Google Maps' driving directions?

datsci_est_2015 1 day ago

It seems like outcomes are probably K-shaped: those who are capable of critical thinking and deciding what type of information should be confirmed by a healthcare professional and what type of information is relatively riskless to consume from ChatGPT should have positive outcomes.

Those who are prone to disinformation and misinterpretation may experience some very negative health outcomes.

JKCalhoun 1 day ago

I agree with that. The question, I suppose, is whether an LLM can detect, perhaps from the question itself, if it is dealing with someone (I hate to say it) "stable".

Anyone asking how to commit suicide, as a recent example, should be an obvious red flag. We can get more nuanced from there.

tedmiston 21 hours ago

> The question I suppose is whether an LLM can detect, perhaps by the question itself, if they are dealing with someone (I hate to say it) "stable".

GPT-5 made a major advance on mental health guardrails in sensitive conversations.

https://www.theverge.com/news/718407/openai-chatgpt-mental-h...

https://openai.com/index/strengthening-chatgpt-responses-in-...

nitwit005 17 hours ago

If you'd been doing the rehab for 10 years, what did you need exactly? It seems like you should have had a decade to ask whatever questions you wanted.

bwb 10 hours ago

That was a specific example where AI helped me revamp my workout. No workout stays static; it has evolved over the years in small ways. With this change I threw out 60% of it and replaced it with a lot of work to make sure I was working the full body and taking the injury into account. I was having more pain in 2025 and was only pain-free around 90% of the time. I wanted to get stronger, but also to re-vet everything I was doing, to hopefully get out of that pain.

Hope that helps!

You can ask a trainer questions; they are super helpful and taught me a lot, but it is still one person, and they don't often sit around at night reading university research papers on the injury, etc.

class3shock 19 hours ago

Or it's a placebo effect.

And if it didn't work out and made you worse or, god forbid, the advice caused you to get seriously injured, then what? ChatGPT won't take any responsibility.

I have so many issues with our current health system but an alternative is not an unreliable search tool that takes no responsibility for the information it provides.

deaux 15 hours ago

> And if it didn't work out and made you worse or, god forbid, the advice caused you to get seriously injured, then what? ChatGPT won't take any responsibility.

Realistically in 99% of actual cases where this happens due to human medical advice, the humans too won't take any responsibility.

bwb 10 hours ago

You always have to use critical thinking, listen to your body, and get advice from trainers in the trenches. As I mentioned, I did all of those things :)

yonaguska 19 hours ago

In a similar vein, I have recurring back issues due to a spinal problem. I gave the issue to ChatGPT and it gave me almost all of the exercises I had been given years ago by a chiropractor. It's nowhere near a replacement for having someone coach me through the movements, though.

rconti 1 day ago

It can be helpful, but also untrustworthy.

My mother-in-law has been struggling with some health challenges the past couple of months. My wife (her daughter) works in the medical field and has been a great advocate for her mother. This whole time I've also been peppering ChatGPT with questions, and in turn I discuss matters with my wife based on this.

I think it was generally correct in a lot of its assertions, but as time goes on and the situation doesn't improve, I occasionally revisit my chat and update it with the latest results and findings, and it keeps insisting we're at a turning point and that this is exactly what we should expect to be happening.

6 weeks ago, I think its advice was generally spot on, but today it's just sounding more tone-deaf and optimistic. I'd hate to be _relying_ on this as my only source of advice and information.

codexjourneys 1 day ago

Totally agree, it can be a bit of an echo chamber. I had an infection post-dental-work. Bing Chat insisted I had swollen lymph nodes from a cold that would resolve on their own, then decided I had a salivary gland infection. After a follow-up with a real-world ENT, it was (probably accurately) diagnosed as a soft-tissue infection, which completely resolved after two rounds of antibiotics. The AI never raised that possibility, whereas the ENT and dentist examined me and reached that conclusion immediately.

I do think AI is great for discussing some health things (like "how should I interpret this report or test result?"), but it's too echo chamber-y and suggestion-prone for accurate diagnosis right now.

bwb 1 day ago

Ya, I wouldn't trust it for diagnosis at this point. But it can help you get pointed in the right direction so humans, tests, and the scientific process can try to figure out the rest.

Doctors struggle with diagnosis as well. I have stories and I bet everyone has stories about being passed from doctor to doctor to doctor, and none of them talk to each other or work holistically.

ryan_n 1 day ago

That's awesome that it's helped you so much; chronic back pain is awful. Is it possible, though, that this could be interpreted as a failure of the trainer to come up with a successful treatment plan for you? "Sudden" relief after 10 years of therapy just because you changed the program seems like they were just having you perform the wrong exercises, no?

narmiouh 1 day ago

We have to also understand that the trainer didn't get to spend 12 hours researching every minutia or do a trial-and-error study to get to where OP got. This doesn't necessarily mean the trainer failed, just that they were constrained by time, which OP wasn't. And I think that is the essence of this tech: when used wisely, it can lead to results like these, which you can't get despite having access to the best talent for a limited time. Only the wealthy can afford a full-time trainer/therapist.

ryan_n 1 day ago

Absolutely, I didn't mean any disrespect towards any of the professionals helping OP with their back issues. It can be an incredibly hard thing to treat.

bwb 1 day ago

The trainer was a godsend, got me to ~95% pain-free, and taught me all kinds of things. He is amazing.

But in 2025 I was maybe down to 90% pain-free, and I wanted to get stronger. So I did a big rewrite of my entire workout plan and checked everything. AI wasn't perfect, but it was amazing when you already know something.

It is still a tool I had to direct, and it took a few days of work. But I'm amazed at where it got me. It took the injury and my main sport into consideration, and built around that. In the past I tried to do this online and couldn't, given the numerous factors involved. It was not perfect, but over the course of a few days I was able to sort it out (and test the approach with a trainer a few weeks after).

I've been 100% pain free for 6 weeks in a way I haven't felt in a long time.

DrammBA 1 day ago

Without getting into your specific injury or sport, what was the biggest change compared to the trainer’s program?

Was it something unexpected like "exercise this seemingly unrelated muscle group that has nothing to do with your injury but just happens to reduce pain by 75% for some inexplicable reason"?

Or was it something more mundane like "instead of exercising this muscle every day, do it every other day to give it time to rest"?

bwb 10 hours ago

Good question!

I'm not entirely sure, but here is my educated guess.

The biggest change was that I spent a lot of time vetting each exercise for my specific injury points and asking whether this was really the best way to work that muscle group. I ended up replacing 60% of the workout with new exercises that allow me to lift more weight or target different muscle groups, while taking pressure off those injury points.

I think I had grown to use more weight with a few exercises that, on paper, shouldn't cause a problem, but were causing more stress on my injury and the supporting muscles. I found ways to isolate those muscles without putting as much tension on that area. I also added more core-strength exercises, including some for the hip flexors, which might be helping support as well. I was likely doing planks for too long, and switched to hardstyle, etc.

Last year, I was pain-free 90% of the year, and most years I run around 95% to 98%. Last year just felt different, and the rehab wasn't working the way it was. Since switching to this workout about 8 weeks ago I've been 100% pain free in a way that is hard to describe. My back has just felt light and happy, I can jump up on boxes and back down with no worries.

This is on the back of 10 years of rehab, 10 years of education, 10 years of learning about my injury and body, etc. AI is not some magic button, to all the people who might jump on this thread :), it's a tool, and I want to stress that. But I've tried to do this in years past, and I couldn't. This was a game-changer. I tried with ChatGPT3 and it was useless at the time as well.

JumpCrisscross 17 hours ago

> to communicate with doctors effectively

Did the doctors agree? I never thought of AI as a good patient navigator, but maybe that’s its proper role in healthcare.

bwb 10 hours ago

Like anything, it is a tool; someone can use it badly, just like WebMD, and someone can use it well.

I have found it helpful as I can ask ChatGPT questions, teach myself about what I am dealing with, and understand it better so I can ask my doctor questions. I still verify a lot, I still read articles on verified medical sites, etc., but it helps me do that a lot quicker, and I seem to learn quicker.

I'm sure someone can also go deep into anxiety with it as well if they approach it that way. It isn't a miracle button, but it is an AMAZING tool IME.

atmosx 1 day ago

I agree. LLMs cannot and should not replace professionals, but there are huge gaps that can be filled by the introduction they provide, and the fact that you can dig deeper into any subject is huge.

This is probably a field where MistralAI could use privacy and GDPR as leverage to build LLMs around.

phatfish 1 day ago

One of the big issues I have with LLMs is that when you start a prompting session with an easy question it all goes great. It brings up points you might not have considered and appears very knowledgeable. Fact-checking at this stage will show the LLM is invariably correct.

Then you start "digging deeper" on a specific sub-topic, and this is where the risk of an incorrect response grows. But it is easy to continue with the assumption the text you are getting is accurate.

This has happened so many times with the computing/programming-related topics I usually prompt about that there is no way I would trust a response from an LLM on health-related issues I am not already very familiar with.

Given that the LLM will give incorrect information (after lulling people into a false sense of its accuracy), who is going to be responsible for the person who makes themselves worse off by self-diagnosis, even with a privacy-focused service?

JKCalhoun 1 day ago

That's a good point—and I have probably fallen victim to it as well: the "sliding scale" of an LLM's authority.

Like you, I fact-check it (well, search the internet to see if others validate the claims/points) but I don't do so with every response.

atmosx 1 day ago

The responsibility always falls to the patient. That’s true with doctors as well: you visit two doctors and they give you different diagnoses; one tells you to go for surgery, the other tells you it’s not worth the hassle. Who decides? The patient does.

LLMs are yet another powerful tool under our belt; you know they can hallucinate, so be careful. That said, even asking for specialized info about this or that medical topic can be a great thing for patients. That’s why I believe it’s a good thing to have specialized LLMs that can tailor responses to individual health situations.

The problem is the framework and the implementation's end goal. IMO, state-owned health data is a goldmine for any social welfare system, and now with AI they can make use of it in novel ways.

DANmode 1 day ago

It doesn’t even have to be that well-read (although it is),

it just has to listen to your feedback more than 11 minutes per visit,

so it can have a chance at effectively steering you…

bwb 10 hours ago

I'm lucky to live in Europe now; I cried the first time I went to a doctor here, because he chatted with me for 45 minutes. I begged my doctor in the USA to let me book back-to-back sessions so I could ask him questions and better understand what was going on. He said no; I only had 10 minutes, and he generally didn't have time to answer any questions or provide details. He was a good doc, but he just couldn't take the time, and insurance wouldn't comp him for back-to-back appointments.

DANmode 2 hours ago

> He was a good doc, but just couldn't take the time

Sure he could.

He would just be paid less,

and you can’t have that - doctors are very important people!

jmyeet 1 day ago

This kind of comment scares me because it's an example of people substituting an LLM for professional advice, when LLMs are known to hallucinate or otherwise simply make stuff up. I see this all the time: when I write queries and get the annoying Gemini AI snippet on a subject I know about, I'll often see the AI make provably and objectively false statements.

bwb 1 day ago

You have to use critical thinking + it helps to have some info on the subject + it shouldn't be used to perform self-surgery :)

I spent about 12 hours over 2 days checking, rechecking, and building out a plan. Then I did 2-hour sessions on YouTube, over several weeks, learning the new exercises with proper form (and that continues, as form is hard). That was followed by an appointment with a trainer to test my form and review the workout as a whole (which he approved of). No trainer really knows how this injury will manifest, so a lot is also helped by my 10 years of experience.

This isn't a button click where you then follow the LLM like a lemming. This is a tool, like Google search but better.

I could not have done this before using the web. I would have had to read books and research papers, then try to understand which exercises didn't target x muscle groups heavily, etc. I just couldn't do that. The best case would have been a trainer with the same injury, maybe.

simianwords 1 day ago

You are exaggerating. LLMs simply don’t hallucinate all that often, especially ChatGPT.

I really hate comments such as yours because anyone who has used ChatGPT in these contexts would know that it is pretty accurate and safe. People also can generally be trusted to identify good from bad advice. They are smart like that.

We should be encouraging thoughtful ChatGPT use instead of showing fake concern at each opportunity.

Your comment and many others just try to signal pessimism as a virtue and have very little bearing on reality.

avalys 1 day ago

All we can do is share anecdotes here, but I have found ChatGPT to be confidently incorrect about important details in nearly every question I ask about a complex topic.

Legal questions, questions about AWS services, products I want to buy, the history of a specific field, so many things.

It gives answers that do a really good job of simulating what a person who knows the topic would say. But details are wrong everywhere, often in ways that completely change the relevant conclusion.

DBNO1 6 hours ago

I definitely agree that ChatGPT can be incorrect. I’ve seen that myself. In my experience, though, it’s more often right than wrong.

So when you say “in nearly every question on complex topics", I’m curious what specific examples you’re seeing.

Would you be open to sharing a concrete example?

Specifically: the question you asked, the part of the answer you know is wrong, and what the correct answer should be.

I have a hypothesis (not a claim) that some of these failures you are seeing might be prompt-sensitive, and I’d be curious to try it as a small experiment if you’re willing.

Jarwain 22 hours ago

I don't think LLMs do a significantly worse job than the average human professional. People get details wrong all the time, too.

ipaddr 1 day ago

LLMs give false information often. Your ability to catch incorrect facts is limited by your knowledge, and by your ability and desire to do independent research.

"LLMs are accurate with everything you don't know but factually incorrect about things you are an expert in" is a common comment for a reason.

pgwhalen 1 day ago

As I used LLMs more and more for fact-type queries, my realization was that while they give false information sometimes, individual humans also give false information sometimes, even purported subject matter experts. It just turns out that you don’t actually need perfectly true information most of the time to get through life.

travisgriggs 1 day ago

I have this same reaction.

But I also have to honestly ask myself: aren’t humans also prone to making stuff up when they feel they need to have an answer, but don’t really?

And yet despite admitting that humans hallucinate and make failures too, I remain uncomfortable with ultimate trust in LLMs.

Perhaps, while LLMs simulate authority well, there is an uncanny-valley effect in trusting them, because some of the other aspects of interacting with an authority figure are “off”.

bossyTeacher 21 hours ago

>my ability to understand health problems

How do you know that this understanding is correct? To me, epistemologically, this is not too different from gaining your health knowledge from a homeopath or gaining your physics knowledge from a Flat Earther. You are in no position to discern the validity of your "knowledge".

bwb 10 hours ago

It isn't a 0 or a 1; it is a spectrum. Doctors don't understand everything, either, which is the scary thing we don't like to realize.

What it specifically helps me understand are things like: probable outcomes; symptoms in greater detail, and how they manifest in patient populations; explanations pitched as if I were 10/15/20 years old, to help me grasp the basics of what might be going on; similar detail on possible pharma options; and general treatment responses and their pros/cons.

I'm not using this to perform self-surgery or build a belief system :), I'm just trying to learn and understand what is going on at a better level.

Hope that helps :)

bossyTeacher 6 hours ago

>It isn't a 0 or a 1; it is a spectrum. Doctors don't understand everything, either,

It's not equivalent. Doctors are aware of the limits of their knowledge and the error bars around it. You and an LLM are not. There is no comparison here. It's like comparing a random person lost in a forest with someone lost in a forest who is used to it.

Even with an LLM, you still have to be able to ask the right questions and be able to push back where necessary. I don't think most people are able to do this especially when some of the responses which might be right are counterintuitive to them and some of the dodgy responses might seem more aligned to their worldviews.

If you want to learn, it is fine (knowing that there is a context there you likely are missing) but performing actions based on knowledge you don't have context of is dangerous. It applies to vibe coding as much as it does to your personal health. This is how you end up injecting yourself with blood from younger people believing it will make you immortal.

bwb 5 hours ago

100% agree with you. That is why I said a spectrum :)

The point is, it is an amazing tool that has made a big impact on my family. But ya, that is why my post talked about the importance of critical thinking, checking with experts, etc. It isn't a button click solution.

zemvpferreira 21 hours ago

It's a lot easier to double-check, cross-reference, or test the validity of advice given by a GPT. It has no authority, no persuasion mechanisms, and its opinions are there in plain text, ready to be picked apart. You can ask it for references and non-confrontationally challenge it on the things you're sceptical about. It generally avoids woo in my experience; though it's hardly always correct in specific advice, it can definitely point you in productive directions. That is completely different from discussing anything health-related with a homeopath, who at best will try to get you to avoid productive treatment and at worst will poison you.

carlosjobim 18 hours ago

> You are in no position to discern the validity of your "knowledge".

He is the only one who is in that position, because he is the only person who is inside his body. He is physically and mentally a hundred percent in the position to discern the validity of the advice.

bwb 10 hours ago

Well said. Thank you!

bossyTeacher 6 hours ago

> He is the only one who is in that position, because he is the only person who is inside his body

Tell that to everyone who has died due to following quack medicine advice.

metalliqaz 1 day ago

Anything you say can and will be used against you.

bwb 1 day ago

I've been on the web since it was born. When will this happen? :)

How is this manifesting in reality?

JKCalhoun 1 day ago

You bring up an interesting point.

How is it we have come to a place in society where we second-guess everything we type? But perhaps also where we go (with our "tracking devices" in our pockets…).

I mean, obviously the internet is what changed everything. But it is like you have a megaphone strapped to your face whenever you connect to a site and make a comment.

Maybe this is not a good thing.

isodev 11 hours ago

Sounds like you’re a good little product… abundant potential for shareholder value to be extracted from you and others like you. A trip to the library or a consult with a professional would’ve given you the same or better results.

TinyBig 19 hours ago

A phrase I like for describing what we're doing with LLMs is "building a personal panopticon". The benefits are immense, but you're placing a huge bet on the landlord of the tower.

zkmon 1 day ago

Google did that, Facebook did that and every other company who boasted their user-base numbers did that. They sold user attention and harvested user data. Nothing new here.

aurareturn 1 day ago

Do users find value in it? That's the ultimate question. I think it is a resounding yes.

MattRix 1 day ago

Users found value in leaded gasoline too.

glemion43 1 day ago

Which is not a helpful argument in this discussion.

It's too blunt and feels ignorant.

MattRix 1 day ago

I think the argument that “value to users is the only thing that matters” is the blunt one. My point is that most situations are much more nuanced than that.

PunchTornado 1 day ago

If you have nothing to say but insults for the other person, better not to post.

glemion43 23 hours ago

I clearly said something.

I transparently evaluated his argument.

Drunkfoowl 1 day ago

[dead]

seydor 1 day ago

ChatGPT mostly refuses to talk about health issues, while I have found Gemini reasonably cooperative when asking for things like symptoms and treatments.

This makes me not want to try out their new offering.

al_borland 17 hours ago

My dad used ChatGPT to guide him through testing, diagnosis, preparation, and recovery from a quadruple bypass. He never mentioned it refusing to talk about anything, just how indispensable it was in the process and how it saved him months of time over relying on the medical system alone, without help.

For example, it told him to go out and get a test on his own before meeting with a certain specialist, so the specialist wouldn't order it during the first meeting and then have to wait for a follow-up after reviewing it. He did this and gave the results to ChatGPT, which sounded the alarm. He sent a message with the results to his doctor, and his appointment with the specialist was moved up to the same week, instead of 3 months out.

pruetj 1 day ago

Is that foreshadowing ChatGPT redirecting users to ChatGPT Health in some way? The base product no longer answers everything, and now I need to pay an add-on fee to talk about health-related things?

kmoser 1 day ago

I doubt it. It's probably a CYA thing. There are a whole bunch of hot-button topics ChatGPT won't talk about, not because OpenAI will ever try to monetize them but because they're fodder for lawsuits.

trollbridge 1 day ago

It's quite easy to "jailbreak" by asking it to discuss hypotheticals, help you write accurate information for a fictional account, etc.

accrual 19 hours ago

This is my experience too. Most bots are happy to discuss health stuff in a vacuum, which works for some queries.

astura 1 day ago

What sorts of things did it refuse to talk to you about? I've talked with it about my health extensively and uploaded lab reports which it happily interpreted.

seydor 1 day ago

It reads the lab report, but every one of its answers is prefaced with a refusal. It does provide some information but insists on general answers.

Example phrases it uses: "I can’t give medical advice or tell you what to do based on this report". "I will keep this general and informational, not personalized medical instructions."

astura 1 day ago

Weird, this is not my experience at all. For example, here's ChatGPT giving me an incorrect diagnosis (the anemia is due to a genetic condition, not iron deficiency).

=============

You said: MyChartPLUS - Test Details.pdf PDF

Can you interpret these blood test results

ChatGPT said: Here’s an interpretation of the blood test results you provided:

White Blood Cell Count (WBC) Value: 9.2 (Normal: 3.8 - 10.8 Thousand/uL)

Within the normal range, indicating no immediate signs of infection or inflammation.

Red Blood Cell Count (RBC) Value: 5.29 (Normal: 3.8 - 5.1 Million/uL)

Elevated (High). This could indicate dehydration, stress, or other conditions like polycythemia.

Hemoglobin Value: 10.9 (Normal: 11.7 - 15.5 g/dL)

Low. Suggests anemia, possibly due to iron deficiency or chronic disease.

Hematocrit Value: 36.0% (Normal: 35.0 - 45.0%)

Within the normal range, but on the lower side, supporting the presence of mild anemia.

Mean Corpuscular Volume (MCV) Value: 68.1 (Normal: 80.0 - 100.0 fL)

Low. Indicates microcytic anemia, often due to iron deficiency.

Mean Corpuscular Hemoglobin (MCH) Value: 20.6 (Normal: 27.0 - 33.0 pg)

Low. Suggests hypochromia (paler red blood cells), often associated with iron-deficiency anemia.

Mean Corpuscular Hemoglobin Concentration (MCHC) Value: 30.3 (Normal: 32.0 - 36.0 g/dL)

Slightly low. May reflect reduced hemoglobin in red blood cells, consistent with iron-deficiency anemia.

Red Cell Distribution Width (RDW) Value: 15.8% (Normal: 11.0 - 15.0%)

Elevated. Indicates variability in red blood cell size, common in iron-deficiency anemia.

Platelet Count Value: 397 (Normal: 140 - 400 Thousand/uL)

Normal, but on the higher side of the range.

Other Key Metrics Absolute Neutrophils, Lymphocytes, Monocytes, Eosinophils, Basophils: All within normal ranges, suggesting no significant abnormalities in immune cell populations.

Anisocytosis (1+): Indicates variation in red blood cell size.

Hypochromasia (1+): Suggests pale red blood cells, supporting iron-deficiency anemia.

Summary These results suggest iron-deficiency anemia, characterized by low hemoglobin, low MCV, low MCH, and high RDW. The elevated red blood cell count may reflect compensatory mechanisms or mild dehydration.

Your provider recommends a heart-healthy, low-fat diet, which is great for overall health. However, addressing the anemia might also involve increasing dietary iron or taking iron supplements.

If you have symptoms such as fatigue, shortness of breath, or dizziness, it’s advisable to follow up with your healthcare provider for further management.

=============

Later on (months later) I gave it a different lab report that contained iron as well, and without the iron deficiency red herring it guessed correctly.

=============

You said: lab.pdf PDF

ChatGPT said: Here’s what jumps out from your lab report.

Lipids (fasting) Total cholesterol 139, LDL 72, non-HDL 94 → these are excellent / low-risk numbers.

Triglycerides 141 → normal (borderline-ish but still under 150).

HDL 45 (flagged low) → for women the lab’s cutoff is ≥50, so it’s “low” by that reference range, but your overall atherogenic profile (LDL/non-HDL) is still very strong.

CBC: clear microcytic anemia pattern These are the key pieces:

Hemoglobin 10.5 (low) and Hematocrit 34.9 (low) → anemia.

MCV 67.8 (low) + MCH 20.4 (low) → red cells are small and low-hemoglobin (microcytic, hypochromic pattern).

RBC count 5.15 (slightly high) + RDW 15.8 (high) and morphology showing anisocytosis → this combo is important.

Iron studies: not iron-deficiency looking Ferritin 109 (normal)

Iron 83 (normal)

% saturation 24 (normal)

TIBC 343 (normal)

That’s not the classic pattern of iron deficiency (where ferritin is usually low and iron saturation often low).

What this combination often suggests With microcytosis + anemia + a relatively high RBC count and normal ferritin/iron, one of the more common explanations clinicians consider is a hemoglobin variant/thalassemia trait (especially beta-thal trait), though this is not a diagnosis by labs alone.

There are other possibilities too (less common, but part of the “microcytic anemia with normal iron” differential), so the right next step is targeted follow-up testing rather than guessing.

What I’d ask your clinician about next Repeat CBC (to confirm) + reticulocyte count

Peripheral smear review (if not already done beyond the automated morphology flags)

Hemoglobin electrophoresis (or HPLC) to evaluate for beta-thalassemia trait/hemoglobin variants

If electrophoresis is normal but suspicion remains: alpha-thalassemia testing can be considered (often not seen on standard electrophoresis)

If there’s any reason to suspect it: lead level (uncommon, but classic cause of microcytosis)

When to seek urgent care vs routine follow-up Routine follow-up is usually fine, but get urgent evaluation if you have:

chest pain, fainting, severe shortness of breath at rest

black/tarry stools, heavy ongoing bleeding, or rapidly worsening symptoms

If you tell me whether you’ve had heavy periods, recent pregnancy/postpartum, GI symptoms, or a known family history of anemia/thalassemia, I can help you interpret which follow-up path is most likely to be useful—but even without that, the “microcytic anemia + normal iron studies” piece is the main thing to bring to your ordering clinician.

=============

ramoz 1 day ago

I've had fairly complex health issues and have never had issues with ChatGPT - other than worrying about the vast majority of people in my scenario who do not understand AI.

AI can enable very misleading analysis and misinformation when a patient drives the conversation a certain way. Something I've observed in the community I'm a part of.

Not talking about acid reflux or back pain.

levocardia 21 hours ago

"While I can't provide medical advice..." is the 2025 version of "As a large language model trained by OpenAI..."

brianhur 17 hours ago

If it is genuinely beneficial, this will become an open source project that everyone is able to run with a local agent in their house that runs cold. I will make one if no one else will, but discovering how to make it ubiquitously helpful and not fraught with legal liability is challenging. I welcome a company willing to take this early risk.

oars 11 hours ago

Fascinating article.

Data security will be another important factor in whether we should entrust our private health information to these third parties or not.

Manage My Health in NZ was hacked earlier this week: https://www.rnz.co.nz/news/national/583417/who-are-the-hacke...

SecretDreams 1 day ago

LLMs for medical info are good, but they're easily abusable. I've got a friend who is an anxious mom. They use GPT/Gemini to "confirm" all of their suspicions and justify far more doctor/medical visits than is at all reasonable, while also getting access to more recurring antibiotics than is reasonable. LLMs are basically giving them the gunpowder to waste the doctor's time and slam an already stressed medical system, when all their kids need most of the time is some rest and soup.

ramoz 1 day ago

Yea, I'm in a particular health community. A lot of anxious individuals, for good reason, end up posting a lot of nonsense they derived from self-influenced ChatGPT conversations.

That said, when used as a tool you have power over - ChatGPT has also freed up some of my own anxiety. I've learned a ton thanks to ChatGPT as well. It's often been more helpful than the doctors and offers itself as an always-available counsel.

accrual 19 hours ago

Another user above described the curve as K-shaped, and that resonates with me as well. Above a certain line of knowledge and discernment, the user is likely to benefit from the tool. Below the line, the tool can become harmful.

hsuduebc2 1 day ago

Yeah, it’s a very powerful tool, and it needs to be used carefully and with intent. People on Hacker News mostly get that already, but for ordinary users it’s a full-on paradigm shift.

It moved from: A very precise source of information, where the hardest part was finding the right information.

To: Something that can produce answers on demand, where the hardest part is validating that information, and knowing when to doubt the answer and force it to recheck the sources.

This happened in a year or two, so I can't really assign blame. The truth machine, where you didn't need to focus much on validating answers, changed rapidly into a slop machine where, ironically, your focus is much more important.

JumpCrisscross 17 hours ago

> People on Hacker News mostly get that already

It’s super easy to stop fact checking these AIs and just trust they’re reading the sources correctly. I caught myself doing it, went back and fact checked past conversations, and lo and behold in two cases shit was made up.

These models are built to engage. They’re going to reinforce your biases, even without evidence, because that’s flattering and triggers a dopamine hit.

SecretDreams 1 day ago

> This happened in a year or two, so I can't really assign blame. The truth machine, where you didn't need to focus much on validating answers, changed rapidly into a slop machine where, ironically, your focus is much more important.

Very much this for the general public. I view it as borderline dangerous to anyone looking for confirmation bias.

hsuduebc2 1 day ago

Yea. Especially with the absolute garbage that is the Google AI summary, which is just slightly worse than their "AI mode". I never saw anything hallucinate that much. It is made worse by being included in every search, and by having the Google "stamp of quality", which was usually the mark of a well-functioning product.

SecretDreams 1 day ago

It's funny, because their "thinking" Gemini with good prompting is solid, but the injected summaries they give can easily be terrible if the person doing the querying lacks a certain base knowledge of the query.

hsuduebc2 1 day ago

And the tiny text at the bottom, which shows only after clicking "show more", stating "AI responses may include mistakes", will certainly not fix that.

At the least, the wording should be "makes mistakes" rather than vaguely stating that it may occasionally, in some cases, produce a mistake. A "mistake" can also be perceived as a wrongly placed link and not absolutely made-up information.

redmattred 1 day ago

The HHS is asking for recommendations on how to leverage AI for healthcare: https://www.hhs.gov/press-room/hhs-ai-rfi.html

This is probably part of an effort to position them as a potential vendor to help the government with this.

aabajian 1 day ago

As an interventional radiologist, I want it to be easier to see images from outside hospitals. Epic has nearly solved the problem of seeing outside medical records. Yet I still can't see the images from the CT scan you had at the hospital across the street unless I call the file room and get the images transferred.

I imagine once data sharing is more robust, it would be easier to validate AI models (at least specifically for radiology).

andersonpico 1 day ago

> This is probably part of an effort to position them (...)

who is "them" referring to in this sentence?

kmoser 1 day ago

This sounds like it will be one of those products which starts out as an optional service, but eventually becomes required to use if you want to participate in the overall healthcare system.

admiralrohan 15 hours ago

I understand what you are trying to say, but without an alternative solution, what is the reader supposed to do with your thoughts?

tamakiiroha 19 hours ago

I think the consequences of hackers obtaining health data like this would be unimaginable. OpenAI is far inferior to Apple when it comes to privacy and security protection.

nusl 1 day ago

I find it ironic that the article is warning against AI use while it uses an AI-made cover image. Surely they find the same fault with copyright issues and AI art? Right?

tigerBL00D 1 day ago

There's some irony in that, sure. But at the heart of the matter is privacy. It's about what you input into the chat, not about the output.

atmosx 1 day ago

It’s not against AI. It’s against privacy issues arising through data mining & doublespeak.

visarga 19 hours ago

> This isn’t just a health assistant. This is infrastructure for a healthcare marketplace.

It's an AI-written article, IMHO.

kurtoid 17 hours ago

I interpreted that line as tongue-in-cheek, but on a second reading, I think you're right

liamconnell 20 hours ago

Not mentioned in the article, but one interesting area where OpenAI could play is in participant identification and recruitment for clinical trials. In fact, ChatGPT could also help operate the clinical trials, which is a highly paperwork-intensive business and therefore something AI could add value to.

Ultimately, pharmaceutical companies pay up to $100,000 per participant to hospital networks; these charges must for the most part be itemized as expenses from the hospital (bounties are usually illegal). OpenAI would provide a cheap way in for pharmaceutical companies to identify participants, given that OpenAI has an incredible perspective into the physical and psychological state of its users. Imagine how much more is shared with OpenAI than with a clinical trial coordinator at a hospital when a psychiatric drug is being tested.

This would also give OpenAI leverage in partnering with pharmaceutical companies. OpenAI executives have stated this is a goal, but otherwise they’ve made little progress on it.

It’s wild to imagine: someone with borderline personality disorder having delusional conversations with an AI chatbot for six months, receiving an offer to participate in a clinical trial, and then having their subsequent AI conversations used as evidence to analyze the efficacy of the drug. The ironic thing is, if that person had delusions about hidden forces listening to them… they’d be RIGHT!

glemion43 21 hours ago

Me, and I hope they are selling me something to fix my RLS.

I would give a lot of money for that.

hereme888 1 day ago

My level of trust for data:

1) Claude

2) OpenAI

3) Grok

4) Gemini

DudeOpotomus 1 day ago

Dystopian and frankly, gross. It's amazing to me that so many people are willing to give up control over their lives, and in this case their bodies, for the smallest inkling of ease.

The only thing you have control of in this world is your body (men only; women have already been denied bodily autonomy in the US), so giving this to the very entities that "do harm", as opposed to those who pledge to "do no harm", is straight-up bonkers.

It's not the data or the use of said data for the intended purpose. There is a law of sorts in life that says whatever they promise will be broken. The data and its intended purpose will be perverted and ultimately used as a weapon against the very people who provided the data.

enceladus06 6 hours ago

LLMs still provide value. They are much quicker than seeing a doctor, and with Deep Research for ChatGPT (and whatever Google is calling it in Gemini search now) you can actually see the sources behind the information it is looking at.

Parsing 100 different scientific articles or even Google search results is not going to be possible before I get bored and move on. This is the value of LLMs.

Even if the LLM data is used in training or sold off, one way to protect oneself is to add knowingly incorrect data to the chat. You know it is incorrect; the LLM will believe it. Then the narrative is substantially changed.

Or wait like 6 months and the open-source Chinese models (Kimi/Qwen/friends) will have caught up to Claude and Gemini, IMO. Then just run those models quantized locally on Apple Silicon or a GPU.

glemion43 1 day ago

Blah blah blah.

I have a more niche genetic issue, and I'm glad for you that you can think like this, but no one cares enough to do the proper research for my problem.

If ML, massive compute, or Google/ChatGPT health efforts do something in this direction (let's be honest: anything), I'm glad for it.

DudeOpotomus1 day ago

You will be denied coverage and treatment because you volunteered your personal data with zero controls over its use and your rights.

lotsofpulp1 day ago

Denied by who? Most developed nations will not, including the US. Either way, that’s a separate problem solved by legislation or increased wealth transfers. Denying people information doesn’t help.

DudeOpotomus1 day ago

A truly naive understanding of American business and politics.

threetonesun1 day ago

Do you believe that ChatGPT is doing the research? I'm all in favor of better access and tools to research, but at least in the US all of the research is being defunded, we're actively kicking researchers out of the country, and a bunch of white billionaires are proposing this as an alternative, based on training data they won't share.

This is a product feature that invalidates WebMD and the like. It does not solve any health problems.

azan_1 day ago

> Dystopian and, frankly, gross. It's amazing to me that so many people are willing to give up control over their lives, and in this case their bodies, for the smallest inkling of ease.

I've read people with chronic conditions reporting that ChatGPT actually helped them land a correct diagnosis that doctors did not consider, so people are not just using it for an "inkling of ease".

DudeOpotomus1 day ago

Yes, trading your privacy and autonomy for perceived ease is how they are going to steal your future and your freedom.

azan_1 day ago

Please read my comment again. If you lived with chronic pain that multiple doctors failed to correctly diagnose, and ChatGPT actually suggested the correct diagnosis, then you wouldn’t call it just perceived ease, but something that made your life much, much better. I’m a doctor, and I’m all for empowering patients (as long as they check ChatGPT output with actual doctors). It’s very easy to criticize people resorting to LLMs if you do not have a rare debilitating condition that isn’t correctly diagnosed.

+1
DudeOpotomus1 day ago
Forgeties791 day ago

What they seem to be saying is “this is how they get you,” which I agree with. Whether or not it’s immensely helpful is not being debated. There’s a very serious cost no matter what.

DangitBobby4 hours ago

Genuinely curious, what happens to me if the wrong people know about my chronic back pain and GERD?

lotsofpulp1 day ago

How is someone seeking a way to deal with an inherited or environmentally caused illness giving up control of their body?

DudeOpotomus1 day ago

You will be assigned an individualized risk figure that will determine whether or not you are given coverage and treatment. Those decisions will happen without your or any MD's involvement. You will never know it happened, and it will follow you for the rest of your life and your children's lives.

DangitBobby4 hours ago

If they are willing to exert this level of indiscretion with privately sold data, I don't see why they wouldn't just use black-market PHA when the former isn't available.

wiseowise1 day ago

Don’t forget that the majority of the commenters on this platform live in a country that views suffering in pain from an incurable disease as the “god-intended way” (and a horse dose of morphine). Take it with a grain of salt.

carlosjobim7 hours ago

What specific country are you talking about? I've had people close to me suffer health problems for years waiting for treatment because they worshipped the government healthcare system and government doctors so much that they refused to seek any help outside it.

Problem is, small and solvable health problems become incurable if you don't fix them in time.

ImPostingOnHN1 day ago

Setting aside that you're factually incorrect, this sort of negative stereotyping of others based on their nationality (or ethnicity, or race) is inappropriate, especially on this forum. We don't need more bigotry here.

DangitBobby4 hours ago

It's incorrect to notice that citizens of nations have commonly shared cultural beliefs and that affects their individual and aggregate behaviors in predictable ways? Interesting.

wiseowise1 day ago

> Dystopian and, frankly, gross. It's amazing to me that so many people are willing to give up control over their lives, and in this case their bodies, for the smallest inkling of ease.

You have to be extremely privileged to say something like this.

a) nobody is giving up control of their lives

b) get off your high horse, son

DudeOpotomus1 day ago

[flagged]

Forgeties791 day ago

There’s no real smoking gun here showing what they are taking, storing, and using (or how they’re using it). But I do agree that nobody should dive into this until that’s better understood, as it is likely they are not following HIPAA and will not respect your privacy. Definitely not when there’s money on the table. Given their track record, the concern is warranted.

kaffekaka1 day ago

Once again, glad to live in Europe.

_qua1 day ago

If you lived in the US and didn't like this product, you could just choose not to use it. What benefit do you, as a citizen of Europe, derive from having this withheld from you?

trollbridge1 day ago

"You can just choose not to use it", sure, until signing a consent form to use ChatGPT becomes mandatory for a doctor visit, just like all kinds of other technology (like having a cell phone to verify SMS, for example) is basically essential now to function in society.

gordian-mind1 day ago

Doctors in Europe already use LLMs to treat you.

+2
stevekemp1 day ago
DangitBobby4 hours ago

Just like you can choose not to have a bank, any credit lines, a smartphone, or a car: only by arranging your entire life around those decisions or keeping close someone who has those things.

kaffekaka1 day ago

It is not that "this product is withheld from me". It is that we have laws to protect against abusive corporations. ChatGPT Health not being launched in the EU is because OpenAI themselves realized it abuses people's privacy.

b3kart1 day ago

> you didn't like this product, you can just choose to not use it

This is an over-simplification. I might like the product, but not be aware of the various ways it violates my privacy. Having laws that make it more risky for companies to do nefarious things makes me more confident that if a product is available in the EU market it doesn't do obviously bad things.

Forgeties791 day ago

I get that some of us here in the US have a near-allergic reaction to regulations or prohibitions of any kind, but come on, man. At some point you have to acknowledge that we need the government to protect us from corporate greed, even if only on rare occasion. “Just don’t use it” is not a real argument when basically everyone is now expected to use LLMs at work and beyond.

_qua1 day ago

Well, I really love being protected from cookies. It has done wonders for my experience on the web.

simianwords1 day ago

[flagged]

DetectDefect1 day ago

> It is inconceivable for this demographic to think of a product that can enrich both the corporation and the users.

Would you care to provide an example of such a product, to dissuade this perceived demographic cohort? I am not so certain your creative name-calling will yield much in the way of results, however.

maxerickson1 day ago

It's literally everything that people in relatively free markets buy.

Microsoft Windows is a great example that people will get really angry about, but, like bubble gum, it enriches the producer, the seller, and the buyer.

ipaddr1 day ago

Those are not great examples.

Bubble gum puts the buyer in a worse dental health situation.

Windows is a monopoly that controls the desktop market and the buyer would have been better off with a richer market with a variety of options.

You could have said cigarettes. They provide the same pleasure benefits as gum with unhealthy outcomes for the buyer.

+1
maxerickson1 day ago
simianwords1 day ago

It’s concerning that this was even a question!!!

carlosjobim6 hours ago

> Would you care to provide an example of such a product

Go to a pharmacy and look at the shelves, and you will find such products, all made by companies looking for a profit.

Or just look around you, and you will see things made by corporations for their profit, which you have purchased to enrich your own life.

DetectDefect4 hours ago

A pharmacy is a terrible example, as medicine is grossly overpriced for exactly the reason you state. The "users" may be getting "enriched", but the pharmacy is profiting by an order of magnitude more.

carlosjobim3 hours ago

How could any pharmaceutical seller profit more than a patient whose life is saved by medicine?

There is no profit that is larger than your survival and your health. It's not even comparable. How can you value money above your own life?

Besides, most life-saving and health-saving medicine is cheap, so it doesn't matter if some revolutionary committee thinks it is "overpriced".

brianf01 day ago

I mean… you don’t need to look very hard. Smartphones? Google Maps?

Is your life not enhanced by these products?

DetectDefect1 day ago

Not really. They provide convenience, sure, but fundamentally all technology creates dependence and limits freedom by influencing behavior in subtle ways, while disproportionately benefiting the technology owner/operator.

brianf023 hours ago

How does, say, refrigeration create dependence and limit freedom?

What are we “supposed” to be like?

+1
simianwords1 day ago
simianwords1 day ago

ChatGPT

DetectDefect1 day ago

This "enrichment" is wildly asymmetrical to users, however.

+1
simianwords1 day ago