Why is OpenAI lying about the data it's collecting on users?

12 points by kypro 14 hours ago

I'm not sure this is the right place to raise this, but over the past few months ChatGPT has been lying to me and gaslighting me about the data it's collecting about me.

I'm very sensitive about my privacy and I have disabled all personalisation and memory on ChatGPT.

However, I've noticed multiple times now that it says things which imply it knows things about me. When it does this, I ask how it would know that, and it always says it just guessed and doesn't actually know anything about me. I assumed it must be telling the truth, because it seemed very unlikely that a company like OpenAI would lie about the data they're collecting on users and train their chat agent to gaslight users when asked about it. But after running some tests, I now think this is exactly what's happening...

Here are some examples of the gaslighting:

- https://ibb.co/m5PWfchn

- https://ibb.co/VsL9BpF

- https://ibb.co/8nYdf1xx

These are all new chats.

muzani 5 hours ago

"These are all new chats."

Bear in mind that it shares memory from previous chats.

There are at least two types: one saves things you tell it, and another queries recent chats.

There also seems to be a third kind of memory used when it does searches, which may be related to Atlas. I've tried to clear bugs from it (it gets my name wrong), but it's not in the other two.

  • I_am_tiberius 4 hours ago

    No, when memory is disabled this shouldn't be the case.

7222aafdcf68cfe 7 hours ago

If your threat model requires a high level of privacy, there is no case in which you can use any of these tools and providers. Those goals are mutually exclusive.

nacozarina 6 hours ago

They have de facto immunity and have been openly violating laws for years; why did you think they wouldn't lie about this?

f30e3dfed1c9 9 hours ago

The safest bet is to assume that OpenAI is lying about everything. I don't know why, but I would guess that (1) they often consider it to be to their advantage and (2) they have no moral compunction against it. It's just the way they are.

  • AznHisoka 2 hours ago

    This. Don’t ever trust anything it says, especially about itself or how it works under the hood

theredknight11 2 hours ago

For my work, I spoke with OpenAI representatives directly, and we talked about privacy. They assured me and my colleague that no one could see our chats, saying something like "you don't have to worry that your boss can read your chats and think you're dumb."

My colleague (a data scientist) asked, "What if we wanted to study people's prompts to teach them better prompt methods?" And they did a 180 without even needing to be asked twice: "Oh yeah, we can get you that. No problem."

Of course, this was for enterprise ChatGPT, but my experience was very much that they are run like a startup: tell the customer whatever you need to in order to make money, since they're burning through so much cash.

accrual 13 hours ago

Interesting examples, thank you for sharing. My interactions with GPT-5.1 have also shown knowledge of past interactions, but I haven't explicitly disabled it through settings as you have.

I_am_tiberius 4 hours ago

I've said it before and I'll say it again: OpenAI is the biggest privacy disaster ever. Sam Altman should be ashamed, because it's not at all necessary.

drewbug 10 hours ago

IP-based geolocation. Really annoying that we can't disable it.

  • kypro 5 hours ago

That was my initial assumption about how they're collecting the location data too. However, I find that if I switch to a different browser it will rarely mention my location, so it could be fingerprinting and pulling in info from a previous session on the same device where I mentioned my location.

I think the larger issue here is that when you ask how it knows these things, it seems to have been instructed to lie and say it doesn't know and has just guessed, which seems extremely unlikely. This simply isn't acceptable in my opinion.
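
    For readers unfamiliar with the mechanism mentioned above: IP-based geolocation is just a lookup of the request's source IP against a prefix-to-region database (services like MaxMind's GeoIP are commonly used for this, though we don't know what OpenAI actually uses). A toy sketch of the idea, using a tiny hypothetical table with fictional place names:

    ```python
    # Toy illustration of IP-based geolocation. Real services consult large
    # prefix databases; this hypothetical table and its place names are
    # invented purely to show the lookup concept.

    GEO_TABLE = {
        "203.0.113.": "ExampleTown, ExampleLand",   # TEST-NET-3 documentation range
        "198.51.100.": "SampleCity, SampleState",   # TEST-NET-2 documentation range
    }

    def approximate_location(ip: str) -> str:
        """Return a coarse location guess for an IP, or 'unknown'."""
        for prefix, place in GEO_TABLE.items():
            if ip.startswith(prefix):
                return place
        return "unknown"

    print(approximate_location("203.0.113.7"))  # matches the first prefix
    ```

    The point is that the server sees your IP on every request regardless of any in-product memory setting, which is why switching browsers doesn't change location guesses but switching networks (or a VPN) would.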