Hey there,
Last week, after I sent the email-writing issue, my inbox lit up. And while a lot of you shared amazing stories about time saved (one reader, Patricia, said an HR email that would have taken her an hour to write took just 90 seconds, and that made my whole week), the most common question was some version of this:
"But Henry... is this stuff actually safe?"
Fair question. Let's talk about it, honestly.
This Week's Story: The Safety Conversation I Keep Having
I have this conversation at least twice a week. At dinner parties. At the gym. In the grocery store checkout line. Someone finds out I write about AI, and the questions come fast:
"Is it listening to me?"
"Can it steal my identity?"
"Should I be worried about my kids using it?"
"Is AI going to take my job?"
Here's what I've learned: most AI safety fears fall into three buckets, and the honest answers might surprise you.
Bucket 1: Things that are real concerns (but manageable)
Your privacy is a legitimate consideration. When you type something into ChatGPT, that information is processed by OpenAI's servers. By default, OpenAI may use your conversations to improve their models. That means if you type in your Social Security number, your medical details, or confidential work documents, that data could theoretically become part of the system's learning.
This is real. It matters. And it's completely manageable, which is why I'm giving you the exact steps to protect yourself in today's Quick Win.
The practical rule is simple: Don't tell AI anything you wouldn't say out loud in a coffee shop. Your name? Fine. Your work schedule? Sure. Your bank account number? Absolutely not. Proprietary business data? Not without checking your company's AI policy first.
Bucket 2: Things that are overblown
No, AI is not secretly recording your conversations through your phone. No, ChatGPT doesn't "remember you" between sessions the way a person does (unless you specifically turn on that feature). No, a chatbot is not going to become sentient and go rogue. The sci-fi scenarios make great movies, but they're not your Tuesday morning concern.
The technology behind tools like ChatGPT and Claude is impressive, but it's essentially very sophisticated pattern matching. It doesn't have desires, goals, or consciousness. It's a tool. A powerful one, but a tool nonetheless.
Bucket 3: Things you should actually worry about
Here's what I think deserves your real attention:
- AI can sound confident while being wrong. This is called "hallucination," and it's the single most important thing to understand. AI can present completely made-up information in a perfectly authoritative tone. Always verify important facts. If an AI gives you medical advice, check with your doctor. If it cites a statistic, look it up. Trust but verify.
- AI-powered scams are getting more sophisticated. Scammers are using AI to write more convincing phishing emails, create fake customer service chats, and even clone voices. If someone calls you and sounds like your grandchild asking for money, hang up and call them back on their real number. The old rules still apply, and they apply more than ever.
- Over-reliance is a quiet risk. If you stop thinking critically because "the AI said so," that's a problem. AI is a thinking partner, not a thinking replacement.
My bottom line? AI tools from major companies like OpenAI, Anthropic, Google, and Microsoft are generally safe to use for everyday tasks. But "generally safe" comes with responsibilities, just like driving a car or using the internet.
Quick Win of the Week: Turn Off Model Training in ChatGPT (2 Minutes)
This is one of the most important things you can do, and almost nobody does it. Here's how to stop ChatGPT from using your conversations to train its models:
- Log into ChatGPT at chat.openai.com
- Click your profile icon or name in the bottom-left corner
- Click Settings
- Click Data Controls
- Find the toggle for "Improve the model for everyone"
- Turn it OFF
That's it. Your conversations are now excluded from model training. You'll still be able to use ChatGPT exactly the same way. This only affects whether your data is used to improve future versions.
Important note: This doesn't mean OpenAI never sees your data. They may still review conversations for safety purposes. So the coffee shop rule still applies: don't share anything deeply sensitive.
Want to go a step further? You can also use Temporary Chat mode (look for the toggle at the top of a new conversation). This creates a session that isn't saved to your history at all.
Tool Spotlight: Perplexity AI
What it is: Perplexity AI is a search-meets-AI tool that answers your questions and shows you exactly where it got its information, with clickable source links.
Why it matters for safety: Remember how I said AI can sometimes make things up? Perplexity addresses this head-on by showing its sources. When it tells you something, you can see the articles and websites it pulled from. You can click through and verify for yourself.
When to use it: Anytime you need factual information and want to be able to check the sources. Health questions, news topics, research for work, fact-checking something another AI told you. Perplexity is your go-to.
How to try it: Go to perplexity.ai. You can start using it immediately, even without an account.
My favorite feature: Ask it a question, and below the answer, you'll see numbered citations. Click any of them to go directly to the source. It's like having a research assistant who always shows their work.
Cost: Free for most uses. Pro plan available but not necessary.
Go Deeper
If this issue raised more questions than it answered, here are some related articles:
- AI Privacy Guide: What Every Regular Person Needs to Know: I put together a comprehensive guide on this topic with specific settings to check for every major AI tool. If this newsletter made you want to do more, start here.
- How to Spot AI-Generated Scams: A practical guide to recognizing when AI is being used against you. Short, actionable, and worth bookmarking.
Closing Thought
I think the healthiest relationship with AI is the same as a healthy relationship with any powerful tool: respect it, learn how to use it well, take reasonable precautions, and don't let fear keep you from benefiting from it.
You lock your front door, but you don't stop leaving your house. You wear a seatbelt, but you still drive to work. AI safety is the same: sensible precautions, not paralyzing fear.
This week's challenge: Do the Quick Win. Turn off that training data toggle. It takes two minutes and it'll make you feel more in control. Then reply and let me know you did it. I'll keep a running count of how many readers take this step. (We're building a community of AI-savvy, privacy-smart people here.)
See you next Tuesday.
Henry
P.S. Got a safety question I didn't cover? Hit reply. I might feature it in a future issue (anonymously, of course).
Know someone who's hesitant about AI because of safety concerns? Forward this issue. Sometimes the right information is all it takes.

