After Sam Altman’s recent comment that ChatGPT does not provide legal privilege or “legal confidentiality,” has anyone revisited ChatGPT usage guidelines or shifted from ChatGPT Free/Plus to Enterprise or API?
That is an excellent and important question. As a security leader, I would highlight a few key points:
1. Legal privilege and confidentiality
Sam Altman’s comment is a reminder that ChatGPT, regardless of plan, is not a system of record for privileged communication. Conversations with the model are not covered by anything like attorney–client privilege: nothing shared with it gains legal protection. Organizations must make this distinction explicit in their acceptable use guidelines.
2. Differences across Free, Plus, Enterprise, and API
• Free/Plus: Consumer-grade, may use inputs for model improvement unless settings are adjusted. Not suitable for sensitive or regulated data.
• Enterprise: Provides stronger data handling guarantees. Inputs and outputs are not used to train models, enterprise-grade security controls are in place, and organizations gain admin visibility and governance features.
• API: Offers similar assurances to Enterprise, but with more developer control. Data submitted through the API is not used for training, which is crucial for teams embedding AI into workflows.
3. Practical guidance for enterprises
• Update internal AI usage policies to prohibit sharing regulated, customer, or legal data in Free/Plus accounts.
• Prefer Enterprise or API for any business-critical or sensitive use cases where data handling, compliance, and security posture can be validated.
• Train employees to treat ChatGPT the way they treat cloud storage or collaboration tools: safe when enterprise-approved, risky if they default to consumer versions without controls.
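One way to operationalize the "don't paste sensitive data" policy above is a pre-submission filter that masks obvious patterns before a prompt leaves the corporate network. The sketch below is a deliberately naive, hypothetical example using only regular expressions; a real deployment would rely on a proper DLP service, and the pattern names and placeholders are my own choices, not part of any product.

```python
import re

# Hypothetical pre-submission filter: mask obvious sensitive patterns
# before a prompt is sent to any external AI service. Patterns here are
# illustrative only; production systems should use a vetted DLP tool.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace each match with a labeled placeholder, e.g. [REDACTED:email]."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt
```

A filter like this can sit in an internal proxy or chat wrapper so that even sanctioned Enterprise/API usage gets a baseline safety net.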
4. Broader governance
This is a great moment for tech and security leaders to revisit AI governance frameworks. Shadow AI risk (employees adopting Free/Plus without approval) can expose the organization to legal, compliance, and reputational harm. Offering a sanctioned, enterprise-ready alternative is the best way to both reduce risk and keep innovation flowing.
In short: organizations should not assume “legal confidentiality” in any ChatGPT tier. The right approach is to align usage with risk tolerance: Enterprise or API for sanctioned, governed use, and strict policy plus awareness training for Free/Plus.
I agree with all of this. Thank you for the important reminders.
ChatGPT cannot really guarantee confidentiality: the company cannot survive without data, so there is always an incentive to make use of what users feed it, and I expect the product to be offered for free someday for that reason.