What processes do you follow to audit AI models for compliance with evolving privacy regulations? How frequently do you run these audits?
We audit AI-related initiatives primarily on demand. Before any AI solution can be considered for audit, it must first go through our governance process. Our governance committee meets every other week without fail and includes me, the CIO, the Chief Compliance Officer, the Chief Legal Officer, our CMIO where applicable, and the business stakeholder representing the specific program. Once an AI solution is selected, the audit process varies by situation. In California, we have significant legislative requirements to follow, so we typically conduct audits as needed or fold them into larger, scheduled audits by expanding the scope to include AI as a component.

Regarding auditing AI models specifically, our organization primarily consumes models built for generic purposes, such as Microsoft OpenAI; we are not training or maintaining local models. While we do have an annual review cadence for the use cases that go through our governance process, we do not directly audit the models themselves, since we are not responsible for their maintenance. As a result, our focus is on reviewing use cases rather than the underlying AI models.