How are you establishing accountability for AI governance, data quality, and responsible AI practices? Is this centralized with specific roles, or distributed across IT teams?

861 views · 1 Upvote · 4 Comments
VP of IT in Healthcare and Biotech · 23 days ago

We have a very rigid governance framework because we’re highly regulated, and all procurement goes through it. What we haven’t tackled yet is business users building their own agents with Copilot and Microsoft 365 Copilot. We don’t want to stifle innovation, but we just finished shutting down the last Access database after ten years: business users start something small and it becomes a production app outside IT, unsupported and unmaintained. The challenge is enabling innovation for every end user without creating operational risk. Security is less of a worry; operational risk is the real concern. We’re just scratching the surface on how to govern business areas developing their own agents.

Reply · Anonymous · 23 days ago

We started with our board and created seven governance points aligned with our vision: manifesto, work practices, security, confidentiality, maintainability, code of conduct, and compliance with laws and regulations. Beyond that, we defined good and bad use cases for AI, centered on business, employees, and community.

Strategic Director, Information Technology in Manufacturing · 23 days ago

Data governance is very big for us, even though we’re mainly just using Copilot. The biggest risk is all the bad data it pulls in: we can’t control what people save in SharePoint, and that becomes part of the results. Training is mostly about making sure people understand the results and how to validate them, and every employee is assigned our AI policy as part of that training. Looking back at the other questions, the missing role for us is data quality and governance, which we don’t have yet and need to create.

CIO · 23 days ago

From an AI standpoint, most of what we’ve done starts in the data area. We put strong governance around our data lake and data strategy; that’s the trusted source, and any AI project is required to draw its data from it. That gives us assurance on the data side. A lot of people are using a lot of tools, and we didn’t want to lock everyone down, so for generative AI we created usage training and an acceptable use guideline instead of a strict policy. We did insist that everyone accessing an AI tool completes the training and signs off on it; for Copilot, you have to go through the training to get access. Our approach is to encourage, put guardrails around it, and try not to lock things down.
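To make the "training before access" step concrete, here is a minimal sketch of how that sign-off could be enforced automatically: it reads a hypothetical LMS export of training completions and adds each trained user to the security group that gates Copilot licensing, via the Microsoft Graph API. The group ID, CSV file name, and token handling are assumptions for illustration, not details from this discussion.

```python
"""Sketch: grant Copilot access only to users who completed AI training.

Assumptions (not from the discussion above): the LMS exports
completions.csv with a 'userPrincipalName' column, a Graph app token
with Group.ReadWrite.All is supplied via GRAPH_TOKEN, and Copilot
licenses are assigned through a single security group.
"""
import csv
import os

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
COPILOT_GROUP_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical group
HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}


def user_object_id(upn: str) -> str:
    """Resolve a user principal name to its directory object ID."""
    r = requests.get(f"{GRAPH}/users/{upn}", headers=HEADERS)
    r.raise_for_status()
    return r.json()["id"]


def grant_copilot_access(upn: str) -> None:
    """Add the trained user to the group that licenses Copilot."""
    body = {"@odata.id": f"{GRAPH}/directoryObjects/{user_object_id(upn)}"}
    r = requests.post(
        f"{GRAPH}/groups/{COPILOT_GROUP_ID}/members/$ref",
        headers=HEADERS,
        json=body,
    )
    # Graph returns 204 on success and 400 if the user is already a member.
    if r.status_code not in (204, 400):
        r.raise_for_status()


if __name__ == "__main__":
    with open("completions.csv", newline="") as f:
        for row in csv.DictReader(f):
            grant_copilot_access(row["userPrincipalName"])
```

Driving access from the completion export rather than granting it manually keeps the guardrail enforceable without turning the guideline into a hard lockdown.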
