How has your AI governance evolved over the last year or two? What has prompted modifications? Are your committees expanding or shrinking?

CIO · 3 days ago

Our AI governance is led by the Chief Risk Officer, with cybersecurity rolling up under this function rather than IT. The CISO is actively involved in vetting AI-enabled solutions, evaluating impact, and determining next steps. We maintain an inventory of all AI in use, distinguishing between marketing claims and substantive capabilities. All requests with AI components are reviewed and recorded for auditing purposes.
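To make the inventory idea concrete, here is a minimal sketch of what one register entry might look like. The field names and example values are illustrative assumptions, not this commenter's actual schema; the point is capturing both the marketing claim and the substantive capability for each tool.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIInventoryEntry:
    """One entry in an internal register of AI-enabled solutions (illustrative)."""
    vendor: str
    product: str
    claimed_ai: str             # what the vendor's marketing says
    substantive_ai: str         # what the tool actually does with AI
    data_categories: list[str]  # e.g., ["PII", "financial"]
    reviewed_by_ciso: bool = False
    review_date: date | None = None

# Every request with an AI component gets recorded for later audit.
register: list[AIInventoryEntry] = []
register.append(AIInventoryEntry(
    vendor="ExampleCo",                  # hypothetical vendor
    product="SmartAnalytics",
    claimed_ai="AI-powered insights",
    substantive_ai="Regression forecasting; no generative features",
    data_categories=["operational metrics"],
    reviewed_by_ciso=True,
    review_date=date(2024, 5, 1),
))
```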

Chief Information Officer · 3 days ago

Our governance started off slowly, largely because we are heavily regulated and closely monitored; other firms are probably in a similar position. It took a lot of effort to put the governance and AI policy together, partly because the subject is so new and unlike anything we had covered before. Getting our lawyers and compliance teams to understand AI was a big lift, but we got there. We now have a solid policy in place, which we have pushed out to employees through our compliance tool, including attestation. We have tried to block as much as we can, but that is difficult given the sheer number of tools out there.
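As an illustration of why blocking is hard, a naive egress filter might check requested hosts against a deny list of known AI-tool domains. The domains below are hypothetical, and the constant churn of new tools is exactly why a list like this goes stale:

```python
# Illustrative only: a static deny list cannot keep pace with new AI tools.
BLOCKED_AI_DOMAINS = {
    "chat.example-ai.com",   # hypothetical entries, not a real policy
    "copilot.example.net",
}

def is_blocked(host: str) -> bool:
    """Return True if the requested host matches a blocked AI domain or subdomain."""
    host = host.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in BLOCKED_AI_DOMAINS)

assert is_blocked("chat.example-ai.com")
assert not is_blocked("intranet.corp.local")
```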

The biggest lift has been compliance. Now that we have cleared that hurdle, we can execute more effectively. Compliance requirements also streamline decision-making: you can always point to them when deciding what can or cannot be done. Governance has also evolved around vendor onboarding. Compliance drives every vendor through a review process, which has become more complicated now that nearly every vendor is incorporating AI in some form. We had to create a framework and process to evaluate risk and streamline onboarding, but it remains a significant resource challenge: every vendor we deal with will be using AI and will need to be reviewed, and our resources are limited.
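A hedged sketch of how such a framework might score vendor AI risk. The factors, weights, and tier thresholds here are assumptions for illustration, not this commenter's actual rubric:

```python
def ai_vendor_risk_tier(
    handles_pii: bool,
    trains_on_customer_data: bool,
    uses_third_party_llm: bool,
    retains_prompts: bool,
) -> str:
    """Crude triage: map yes/no answers from a vendor questionnaire to a risk tier."""
    score = sum([
        3 if handles_pii else 0,
        3 if trains_on_customer_data else 0,
        2 if uses_third_party_llm else 0,
        2 if retains_prompts else 0,
    ])
    if score >= 6:
        return "high"      # full AI questionnaire plus annual review
    if score >= 3:
        return "moderate"  # detailed questionnaire, periodic re-check
    return "low"           # standard onboarding

print(ai_vendor_risk_tier(True, True, False, True))  # -> "high"
```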

no title · 3 days ago

Brad, how do you actually screen vendors and approve them, knowing they might have embedded AI? Do you have them self-report what they have, or do you assess it yourself?

no title · 3 days ago

Good question. We started with all new incoming vendors. We have an initial questionnaire that the business fills out when considering a new solution or service. If AI is involved, it triggers deeper questions for the business to answer. We have a standing meeting with a group that reviews these cases and decides if further action is needed. If so, we send a detailed AI questionnaire to the vendor, which covers data privacy and other critical topics. This process is working well, and we also send the questionnaire to all our SaaS vendors, since most are incorporating AI.

You need to understand how vendors are using AI and the architecture behind it, especially if you are regulated. How large language models handle data in a regulated environment is also a concern, since even prompts and retained memory can be exposed externally. Many vendors add AI features purely for marketing, so you need to do a thorough analysis. Our CISO is involved in this process, and we use systems like ServiceNow to manage the workflows. High-risk or critical vendors undergo annual reviews so we can track their activities, as all of them will be using AI in significant ways.
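One way to picture the described flow, from the initial business questionnaire through the standing review meeting to the detailed vendor questionnaire. The step names and input keys are illustrative; the actual workflow runs in systems like ServiceNow:

```python
def triage_new_vendor(intake: dict) -> list[str]:
    """Sketch of the intake flow described above; steps and keys are illustrative."""
    steps = ["business fills out initial questionnaire"]
    if intake.get("involves_ai"):
        steps.append("deeper AI questions triggered for the business")
        steps.append("case reviewed at standing governance meeting")
        if intake.get("needs_further_action"):
            steps.append("detailed AI questionnaire sent to vendor "
                         "(data privacy, prompt/memory retention, architecture)")
            steps.append("CISO review")
            if intake.get("risk_tier") == "high":
                steps.append("annual review scheduled")
    return steps

for step in triage_new_vendor(
    {"involves_ai": True, "needs_further_action": True, "risk_tier": "high"}
):
    print("-", step)
```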

Digitization VP, Information Technology · 3 days ago

The evolution cycle has definitely been on the expansion side. More people hear about tools and success stories in other departments, so they want to get involved. This leads to more representatives and forces us to adjust our governance framework, including rules of engagement, responsibilities, and roles. In our highly regulated environment, discipline is key. You cannot allow too many cooks in the kitchen, each with different ideas. At the end of the day, you have to protect your data and your security. It is a balancing act; you want to be inclusive and make sure all voices are heard, but discipline remains central. We aim to be a good partner to all business units, but everyone needs to understand that AI is not a silver bullet. There is work involved on both sides for it to yield results.

We discuss objectives and what we are truly after. The best scenarios we have seen at the committee level involve very good, large data sets, combined with some automation or log aggregation, so that intelligence is generated by analyzing that data instead of relying solely on humans. If we can train the model and have a human get involved only at escalation points, those models have been extremely successful for us. We monitor a lot of equipment and locations, including power utilization and cross-connects, and this has been a successful rollout model. When one group gets automation, others want to know if they can get something similar and whether it can be implemented quickly. These are always cross-functional conversations with give and take, but discipline and a top-down understanding of objectives are needed so you are not just chasing shiny things. You want to be thoughtful in your approach and deployment, both for security reasons and for cost effectiveness. You don't want to waste money.
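A minimal sketch of the human-at-escalation pattern described here, using a simple tolerance band on power readings. The band check is a stand-in for a trained model; the threshold and data are assumptions for illustration:

```python
def monitor_power(readings: list[float], expected: float, tolerance: float = 0.15):
    """Flag readings for human review only when they deviate beyond tolerance.

    The automated check (here, a simple band test standing in for a trained
    model) handles routine cases; a human is pulled in only at escalation.
    """
    escalations = []
    for i, value in enumerate(readings):
        if abs(value - expected) / expected > tolerance:
            escalations.append((i, value))  # route to a human operator
    return escalations

# Routine readings pass silently; only the outlier escalates.
print(monitor_power([10.1, 9.8, 14.2, 10.0], expected=10.0))  # -> [(2, 14.2)]
```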
