How are enterprises maintaining security and compliance when integrating platforms like ChatGPT, Copilot, and similar tools into software development, and which best practices or strategies have proven most effective for you?

Data Scientist · 2 months ago

Enterprises maintain security and compliance while integrating tools like ChatGPT and Copilot by implementing controlled access, data governance policies, and secure APIs. The most effective strategies include:

Private or on-prem deployments to avoid data exposure.

Role-based access control (RBAC) to limit tool usage to approved developers.

Prompt and output filtering to prevent sensitive data leakage.

Audit logging to track usage and ensure accountability.

Regular compliance reviews aligned with standards like SOC 2, ISO 27001, and GDPR.

The most effective practice I’ve seen is embedding AI tools into secured, sandboxed environments, paired with strict usage policies and developer training to reduce risk while preserving innovation.
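The prompt and output filtering practice mentioned above can be sketched as a pre-submission redaction pass that runs inside the enterprise boundary before anything reaches an external model. This is a minimal illustration, not any vendor's API; the patterns and placeholder format are assumptions, and a real deployment would typically rely on a dedicated DLP tool rather than hand-rolled regexes:

```python
import re

# Illustrative patterns for sensitive data (assumed, not exhaustive).
# A production filter would use a DLP library or service instead.
REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "aws_key": re.compile(r"AKIA[0-9A-Z]{16}"),       # AWS access key ID shape
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # US SSN shape
}

def redact_prompt(prompt: str) -> str:
    """Replace sensitive substrings with typed placeholders so the
    redacted prompt can be audited and safely sent to an external LLM."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

print(redact_prompt("Contact alice@example.com, key AKIA1234567890ABCDEF"))
```

The same hook is a natural place to append an audit-log entry recording who sent what, which supports the logging and accountability point as well.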

Enterprise Architect in Energy and Utilities · 2 years ago

OpenAI recently launched Enterprise offerings for businesses. Their privacy page (https://openai.com/enterprise-privacy#our-commitments) states: “We do not train on your business data, and our models don’t learn from your usage.” You can also have custom models – “Custom models are yours alone to use, they are not shared with anyone else.” This addresses most of the concerns about how my data will be used. I see fewer red flags here.


Regarding stored API inputs – “Access to API business data stored on our systems is limited to (1) authorized employees that require access for engineering support, investigating potential platform abuse, and legal compliance and (2) specialized third-party contractors who are bound by confidentiality and security obligations, solely to review for abuse and misuse.”

This is like using a public cloud service.


Now, on using ChatGPT/Copilot coding assistance for product development – we need to understand how ChatGPT is trained. As per their website: https://help.openai.com/en/articles/7842364-how-chatgpt-and-our-language-models-are-developed


“ChatGPT and our other services are developed using (1) information that is publicly available on the internet, (2) information that we license from third parties, and (3) information that our users or human trainers provide. ”


“We only use publicly available information that is freely and openly available on the Internet – for example, we do not seek information behind paywalls or from the ‘dark web.’”


What it does not clearly address: what if the information is freely available but under a restrictive license? For example, public GitHub repos under GPL or AGPL licenses. There are counterarguments about how much “copy” is safe “copy,” and that’s why I still have some open questions.


I hope this helps.
