
I’m pretty sure I accidentally gave OpenAI my company’s entire Q3 marketing strategy back in July. It was a Tuesday, around 11:15 PM, and I was exhausted. I had this massive, messy spreadsheet of internal KPIs and I just wanted a quick summary to send to my boss so I could go to sleep. I copy-pasted the whole thing into a ‘free’ chat window, got my summary, and felt like a genius for about ten minutes. Then the dread kicked in. I realized I hadn’t toggled off the ‘training’ setting. That data wasn’t just processed; it was absorbed. It’s gone. It’s part of the hive mind now.
The time I realized I was the product
That feeling in your stomach when you realize you’ve done something irreversible? That’s what I felt that night. I spent the next three hours scouring help docs, trying to figure out if I could ‘delete’ the memory. Spoiler: you can’t, not in the way that matters. Once that data has been folded into a model’s weights during training, you aren’t getting it back. It’s like trying to take the sugar out of a baked cake. It’s just… part of the cake now.
We’ve all been conditioned to think ‘free’ means ‘ad-supported.’ We grew up with Gmail and Facebook. We figured the trade-off was seeing a few banners for lawnmowers or whatever. But AI is different. With AI, the trade-off isn’t your attention—it’s your intellectual property. It’s your unique way of solving a problem, your private business metrics, and your weird personal drafts. We aren’t just the product anymore. We’re the raw ore being mined to build the machine that will eventually make our specific jobs redundant. It’s a bit dark when you actually sit with it for a minute.
If you aren’t paying for the model, you are the labor helping to build it.
The ‘training’ lie we all tell ourselves

I used to think that ‘anonymized data’ actually meant something. I was completely wrong. I used to tell my friends, “Oh, it’s fine, they strip out the names.” That’s total nonsense. Let me put it plainly: if I feed an AI a specific set of project notes about a ‘confidential’ project in a niche industry in a specific city, it doesn’t need my name to know exactly who I am and what I’m doing. The context is the identifier.
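Here’s a minimal sketch of what I mean. The dataset below is completely made up, but it shows the mechanism: when a combination of ordinary attributes matches exactly one person, stripping the name accomplishes nothing.

```python
# Hypothetical 'de-identified' records: no names anywhere.
records = [
    {"industry": "maritime insurance", "city": "Oslo", "role": "analyst"},
    {"industry": "maritime insurance", "city": "Oslo", "role": "CMO"},
    {"industry": "retail", "city": "Oslo", "role": "CMO"},
]

def is_unique(record, dataset):
    """True if this exact combination of attributes matches one and only one row."""
    matches = [r for r in dataset if r == record]
    return len(matches) == 1

# The 'CMO of a maritime insurer in Oslo' is one specific human being.
target = {"industry": "maritime insurance", "city": "Oslo", "role": "CMO"}
print(is_unique(target, records))  # True
```

Scale that up from three rows to a real corpus of prompts and the ‘de-identification’ claim gets thinner, not stronger: niche industry plus city plus project details is a fingerprint.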
I spent a whole Saturday—six hours, actually—reading the privacy policies of 14 different AI tools. I’m talking ChatGPT, Claude, Perplexity, Midjourney, Gamma, and a bunch of others I found on Product Hunt. I tracked how they handled ‘input data.’ Here is what I found:
- 11 out of 14 explicitly stated they could use ‘de-identified’ data for model improvement by default.
- 8 of them made it incredibly difficult (more than 4 clicks) to find the ‘opt-out’ toggle.
- Only 2 of them (the high-end enterprise versions) guaranteed that data would never touch the training set.
- Most of them have a ‘change of terms’ clause that basically says they can change the rules whenever they want.
It’s a scam. We’re essentially paying them with our secrets so they can sell those secrets back to us as ‘intelligence’ next year. I know people will disagree, but I think we should actually be paying more for AI, not less. If it costs $200 a month to guarantee my data stays on my machine, I’d rather pay that than have it free and leaked. But nobody wants to hear that because we’re all addicted to the convenience.
Why I’m staying away from Google Gemini specifically
I’m going to be blunt here: I refuse to use Google Gemini. I don’t care how many ‘tokens’ the context window has or how well it integrates with Google Docs. I actively tell my friends to avoid it. Why? Because Google has the longest, most aggressive track record of ‘drifting’ privacy boundaries. They start with a product that’s ‘private’ and then, three years later, you realize they’ve been scanning your drafts to serve you ads for therapy. I don’t trust them with my thoughts. I don’t care if that sounds paranoid or unfair—it’s how I feel based on twenty years of watching them operate. They are the kings of the ‘bait and switch’ privacy policy.
Also, the product feels like it was designed by a committee of people who have never actually enjoyed using a computer. It’s bloated and corporate. I’d rather use a janky open-source model running locally on my loud-ass gaming laptop than give Google more of my brain-space. Total trash.
The part nobody talks about
Anyway, I digress. The real issue is that we’re losing the ability to have a ‘private’ thought. If every time you have a half-baked idea, you run to an AI to ‘flesh it out,’ you are giving that idea away before it’s even yours. You’re outsourcing the most valuable part of being a human—the messy, internal creative process—to a server farm in Virginia. And that server farm remembers everything.
I might be wrong about this, but I think the ‘Enterprise’ versions of these tools are also a bit of a psychological trick. They tell companies, “Your data is safe with us!” but then they use the aggregate patterns of usage to decide which features to build next. So even if they aren’t ‘reading’ your specific memo, they are learning that your entire industry is currently obsessed with X or Y. They are still winning. They are still getting the ‘meta’ data that allows them to own the market.
It’s like a free buffet where the chef is taking photos of your teeth while you chew. Sure, you got a free meal, but now he knows exactly how to sell you dental implants later.
How I changed my workflow (and why it sucks)
I’ve had to change how I work, and honestly? It’s annoying. I miss the ‘free’ days where I didn’t think about it. Now, I have a strict rule: if it’s internal, if it’s a client’s name, or if it’s a strategy I haven’t published yet, it stays off the web. I use local LLMs for the sensitive stuff. I use a tool called LM Studio to run models on my own hardware. It’s slower. It makes my laptop fan sound like a jet engine. It’s not as ‘smart’ as GPT-4o. But at least I know my Q4 strategy isn’t going to show up as a suggested prompt for my competitor.
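For the rare cases where something sensitive-adjacent does have to go into a cloud tool, I run it through a scrubber first. This is a rough sketch of that rule, not a real anonymization tool: the client names and patterns here are invented for illustration, and in practice the list would live in a config file, not the script.

```python
import re

# Hypothetical client list and patterns; a real one would be maintained
# outside the script and be far more thorough.
CLIENT_NAMES = ["Acme Corp", "Globex"]
SECRET_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
    re.compile(r"\bQ[1-4]\s+20\d{2}\b"),         # quarter references like "Q3 2025"
]

def scrub(text: str) -> str:
    """Replace client names and obvious identifiers before text leaves the machine."""
    for name in CLIENT_NAMES:
        text = text.replace(name, "[CLIENT]")
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(scrub("Send the Acme Corp numbers for Q3 2025 to jane@acme.com"))
# Send the [CLIENT] numbers for [REDACTED] to [REDACTED]
```

It’s crude, and it won’t catch context-based identification (see above), which is exactly why the genuinely sensitive stuff stays on local models instead.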
I’ve also started paying for the Pro versions of things, not for the features, but for the (theoretical) privacy toggles. It feels like paying protection money to a mob boss. “Here’s $20, please don’t steal my stuff.” It’s a weird way to live, but the alternative is worse.
I wonder sometimes if I’m just being an old man shouting at clouds. Maybe the future is just one giant, shared brain and ‘privacy’ is a 20th-century relic we need to let go of. But then I remember that spreadsheet from July and the pit in my stomach returns. I don’t want to be part of the hive mind. I want my mistakes to be mine. I want my bad ideas to stay in my head until I’m ready to show them to the world.
Is anyone else actually reading these privacy policies, or am I the only one losing sleep over this?
Keep your data close. Nobody else will.
