Based on the current state of GenAI tools’ privacy protections, do you think vendors have made enough progress in protecting private/sensitive company data?

Yes, I feel our data is protected: 14%

Somewhat, they’ve improved but we need more protections: 54%

Not at all, they haven’t made any real progress here: 20%

Vendors vary too much to say: 8%

I don’t know: 2%

114 PARTICIPANTS
CISO/CPO & Adjunct Law Professor in Finance (non-banking), a year ago

I agree with my colleagues that data protection is the responsibility of the company using an AI system. The challenge is that some systems do not transparently communicate their privacy controls, and cloud products evolve as the vendor seeks to enhance their offering and maximize revenue.

AI/ML tools work better with more relevant data, so several AI/ML providers include sweeping legal statements like: customer agrees to allow [insert company name] to analyze, improve, support and operate the services during and after the term of this agreement, using your anonymized customer data. As a privacy lawyer, I can say that anonymization of data is a complex subject, particularly when the data is being fed into an AI system built to extract pertinent information.

The second key issue is that the terms and conditions can change over the lifetime of the contract with the AI vendor, since the vendor is continually working to optimize their offering. Add to that the changing nature of privacy laws, and it becomes necessary to analyze each change to the vendor’s terms and conditions in light of currently applicable privacy laws.

The third issue is that several AI services require their customers to indemnify them from any liability stemming from misuse of personal information or violation of privacy laws. The AI vendor’s terms don’t always use the term “indemnify,” possibly because it is a red flag for lawyers. Instead, the vendor may say things like: customer agrees that they will not provide any sensitive information to [insert company name]; customer shall be responsible for any sensitive or personal information submitted to [insert company name]; or customer agrees that [insert company name] is not subject to any obligations that may apply to any sensitive information submitted to [insert company name]. The effect of this language is to place the risk of sensitive data exposure on the customer, even though sensitive data could be inadvertently collected by the AI tool. Alternatively, indemnification can be explicit, like this: “Customer will indemnify, defend and hold harmless [insert company here] from and against any and all third party (including, without limitation, People) claims, costs, damages, losses, liabilities and expenses (including reasonable attorneys’ fees and costs) arising from or relating to any Customer Data, Customer’s use of a Third Party Messaging App, Third-Party Platform or breach or alleged breach by Customer of … (Customer Obligations).” I have edited the language to anonymize the vendor(s). Speaking in generalities isn’t as helpful as laying out the specific issues to be aware of with privacy protection.

CTO in IT Services, a year ago

I would add that privacy isn't just the vendor's responsibility. You have to take internal measures to make sure you are doing everything you can, including firewalls (prompt, response, retrieval), policies, governance (including periodic reviews), and so on.
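To make the "prompt firewall" idea concrete, here is a minimal sketch of a pre-send redaction pass. This is an illustration only, not any vendor's product: the `firewall_prompt` function and the regex patterns are my own assumptions, and a production deployment would typically rely on a dedicated DLP or classification service rather than a handful of regexes.

```python
import re

# Illustrative redaction rules for a minimal prompt firewall.
# Each rule pairs a pattern for a common sensitive-data shape with a
# placeholder tag that replaces it before the prompt leaves the network.
REDACTION_RULES = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED_EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED_SSN]"),
]

def firewall_prompt(prompt: str) -> str:
    """Return the prompt with sensitive patterns replaced by placeholder tags."""
    for pattern, placeholder in REDACTION_RULES:
        prompt = pattern.sub(placeholder, prompt)
    return prompt
```

The same approach can be mirrored on the response side (scanning model output before it reaches the user), which is what distinguishes a response firewall from the prompt firewall shown here.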
