How have you found common ground with finance, legal, and infosec teams on acceptable risk levels for AI projects, mitigating potential risks without stifling innovation?

842 views · 2 Comments
Chief Supply Chain Officer in Government6 months ago

We find finance to be the simplest to deal with. However, as a government organization, we also have to consider records retention and public records. Everything we do is subject to retention requirements, ranging from transitory communications to indefinite retention. This broad spectrum presents challenges regarding application logs and data storage. It adds a fourth layer to our considerations, intertwining with legal aspects. Our security team is particularly cautious, scrutinizing everything, even if it has passed FedRAMP certification. We ensure thorough due diligence, analyzing security aspects independently, even within Microsoft's GCC tenant. Vendors are increasingly incorporating AI into their applications, so we continuously evaluate updates and upgrades. As a scientific organization with vast data consumed both internally and externally, we focus on secure data access. We're exploring data lakes to safely share data with AI tools without breaching internal classifications or PII thresholds. It's a complex landscape, but we're navigating it.

CIO in Education6 months ago

The straightforward answer is that it's a complex process. Among finance, legal, and security, finance is perhaps the easiest to navigate. Organizations recognize the need to invest in AI, so understanding what your organization is willing to do financially is a good starting point. Legal and security, however, are closely linked in our organization, especially through our third-party risk management process, which includes privacy, security, and accessibility. This involves procurement processes, terms and conditions, insurance, and risk management. Everyone has a say when AI is involved, and while risk is mitigated in some ways, it can also be prohibitive. We classify data into four levels, with P3 being sensitive and P4 confidential. Projects not involving P3 or P4 data have moved forward, but more complex compliance and security matters are on hold. We're at a critical juncture with a student experience application developed in our AWS instance, aligned with our security framework that mirrors NIST. The ongoing discussion around ethics and security, even with tools like Copilot for 365, is crucial and will likely slow progress.
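The four-level classification gate described above can be sketched as a small policy check. This is a hypothetical illustration of the approach, not the institution's actual tooling; the level numbering (P1 through P4) follows the answer, and the threshold is an assumption.

```python
# Hypothetical gate mirroring the scheme above: projects touching only
# P1/P2 data proceed, while any P3 (sensitive) or P4 (confidential) data
# triggers the full third-party risk management review.
APPROVAL_THRESHOLD = 2  # assumed cutoff: levels above P2 need full review


def requires_full_review(data_levels: set[int]) -> bool:
    """True if any dataset in the project is classified P3 or P4."""
    return any(level > APPROVAL_THRESHOLD for level in data_levels)
```

Under this kind of gate, a project using only P1 and P2 data moves forward immediately, which matches the pattern in the answer: lower-sensitivity projects have shipped while P3/P4 work is on hold pending compliance and security review.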
