How are enterprises maintaining security and compliance when integrating platforms like ChatGPT, Copilot, and similar tools into software development, and which best practices or strategies have proven most effective for you?
OpenAI recently launched Enterprise offerings for businesses. Their privacy page (https://openai.com/enterprise-privacy#our-commitments) clearly states: “We do not train on your business data, and our models don’t learn from your usage.” You can also have custom models: “Custom models are yours alone to use, they are not shared with anyone else.” This addresses most of the concerns about how my data will be used. I see fewer red flags here.
Regarding stored API inputs – “Access to API business data stored on our systems is limited to (1) authorized employees that require access for engineering support, investigating potential platform abuse, and legal compliance and (2) specialized third-party contractors who are bound by confidentiality and security obligations, solely to review for abuse and misuse.”
This is like using a public cloud service.
Now, on using ChatGPT/Copilot coding assistance for product development: we need to understand how ChatGPT is trained. Per their website (https://help.openai.com/en/articles/7842364-how-chatgpt-and-our-language-models-are-developed):
“ChatGPT and our other services are developed using (1) information that is publicly available on the internet, (2) information that we license from third parties, and (3) information that our users or human trainers provide. ”
“We only use publicly available information that is freely and openly available on the Internet – for example, we do not seek information behind paywalls or from the ‘dark web.’”
What it does not clearly address: what if the information is freely available but under a restrictive license? For example, public GitHub repos under GPL or AGPL licenses. There are counterarguments about how much “copy” is safe “copy,” and that’s why I still have some open questions.
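One pragmatic mitigation, given these open license questions, is to scan assistant-generated code for restrictive-license markers before accepting it into a codebase. This is a minimal sketch; the marker list, the `flag_restrictive_license` helper, and the idea of catching anything this way are assumptions on my part, not an official or complete check (near-verbatim code can carry no license header at all):

```python
import re

# Hypothetical marker list: common strings that indicate GPL/AGPL-licensed
# source. Real tooling (license scanners, provenance checks) goes far beyond
# simple pattern matching.
RESTRICTIVE_MARKERS = [
    r"GNU General Public License",
    r"GNU Affero General Public License",
    r"SPDX-License-Identifier:\s*(GPL|AGPL)",
]

def flag_restrictive_license(snippet: str) -> list[str]:
    """Return the markers found in the snippet (empty list means none found)."""
    return [marker for marker in RESTRICTIVE_MARKERS
            if re.search(marker, snippet, re.IGNORECASE)]

suspect = "# SPDX-License-Identifier: GPL-3.0-only\ndef foo(): ..."
print(flag_restrictive_license(suspect))   # non-empty -> route to legal review
print(flag_restrictive_license("print('hello')"))  # []
```

A hit would not prove infringement; it just routes the snippet to human or legal review instead of silently merging it.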
I hope this helps.
Enterprises maintain security and compliance while integrating tools like ChatGPT and Copilot by implementing controlled access, data governance policies, and secure APIs. The most effective strategies include:
- Private or on-prem deployments to avoid data exposure.
- Role-based access control (RBAC) to limit tool usage to approved developers.
- Prompt and output filtering to prevent sensitive data leakage.
- Audit logging to track usage and ensure accountability.
- Regular compliance reviews aligned with standards like SOC 2, ISO 27001, and GDPR.
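The prompt-filtering and audit-logging items above can be sketched together. Everything here is illustrative: the regex patterns, the `send_to_assistant` stub, and the logger name are my assumptions, not any vendor's API; a real deployment would use a DLP service or secrets scanner rather than hand-rolled regexes.

```python
import json
import logging
import re
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("llm-audit")

# Hypothetical patterns for likely secrets/PII in prompts.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace likely secrets with placeholders; return redacted text and hit names."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, hits

def send_to_assistant(user: str, prompt: str) -> str:
    """Filter the prompt, write an audit record, then call the (stubbed) model."""
    clean, hits = redact(prompt)
    audit_log.info(json.dumps({
        "ts": time.time(),
        "user": user,
        "redactions": hits,
        "prompt_chars": len(clean),
    }))
    # Placeholder for the actual API call to an enterprise endpoint.
    return f"assistant reply to: {clean}"

print(send_to_assistant("dev1", "Debug this: key=AKIAABCDEFGHIJKLMNOP"))
```

The audit record deliberately logs only metadata (user, redaction counts, prompt length), not the prompt itself, so the log does not become a second leak surface.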
The most effective practice I’ve seen is embedding AI tools into secured, sandboxed environments, paired with strict usage policies and developer training to reduce risk while preserving innovation.