How are you establishing accountability for AI governance, data quality, and responsible AI practices? Is this centralized with specific roles, or distributed across IT teams?

VP of IT in Healthcare and Biotech, 13 hours ago

We have a very rigid governance framework because we're highly regulated, and all procurement goes through it. What we haven't tackled yet is business users building their own agents with Copilot and Microsoft 365 Copilot. We don't want to stifle innovation, but we just finished shutting down the last Access database after ten years: business users start something small, and it becomes a production app outside IT, unsupported and unmaintained. The challenge is enabling innovation for every end user without creating operational risk; security is less of a worry than the operational side. We're just scratching the surface on how to govern business areas developing their own agents.

No title, 13 hours ago

We started with our board and created seven governance points aligned with our vision: manifesto, work practices, security, confidentiality, maintainability, code of conduct, and compliance with laws and regulations. Beyond that, we defined good and bad AI usage cases, centered on the business, employees, and the community.

Director of IT in Manufacturing, 13 hours ago

Data governance is a big concern, even though we're mainly using Copilot. The biggest risk is the volume of bad data it pulls in: we can't control what people save in SharePoint, and that content becomes part of the results. Training is mostly about making sure people understand the results and how to validate them, and we have an AI policy assigned to every employee as required training. Looking back at the other questions, the missing role is data quality and governance, which we don't have yet and need to create.

CIO, 13 hours ago

From an AI standpoint, most of what we've done starts in the data area. We put strong governance around our data lake and data strategy. That's the trusted source, and if you're going to do an AI project, the requirement is that the data comes from that trusted source, which gives us assurance on the data side. A lot of people are using a lot of tools, and we didn't want to lock everyone down, so for generative AI we created usage training and an acceptable use guideline rather than a strict policy. We did insist that everyone accessing an AI tool completes the training and signs off on it; for Copilot, you have to go through the training to get access. Our approach is to encourage, put guardrails around it, and try not to lock it down.
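That training-before-access gate can be implemented as a simple provisioning check. The sketch below is only an illustration, assuming a hypothetical record of training sign-offs and a placeholder provisioning call; it is not a real Microsoft Graph, licensing, or LMS integration.

```python
# Minimal sketch (hypothetical): grant Copilot access only after the AI
# acceptable-use training has been signed off. The training record and the
# provisioning call are placeholders, not real APIs.

from datetime import date

# Hypothetical training records: user -> date the training sign-off was recorded
training_signoffs = {
    "alice@example.com": date(2024, 5, 2),
    "bob@example.com": None,  # requested access but has not completed training
}

def add_to_copilot_group(user: str) -> None:
    """Placeholder for the real provisioning step (identity/licensing system)."""
    pass

def grant_copilot_access(user: str) -> bool:
    """Return True and provision access only if a sign-off is on record."""
    signed_off = training_signoffs.get(user)
    if signed_off is None:
        print(f"{user}: denied, no acceptable-use training sign-off on record")
        return False
    add_to_copilot_group(user)
    print(f"{user}: granted (training signed off {signed_off})")
    return True

if __name__ == "__main__":
    grant_copilot_access("alice@example.com")
    grant_copilot_access("bob@example.com")
```

In practice the same check would sit in whatever workflow assigns the Copilot license, so the guardrail is enforced at provisioning time rather than by policy alone.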
