What does the blueprint look like at your org for the minimum viable organizational structure needed to run an AI governance program?

Director of IT · 24 days ago

Great question. To quickly establish a minimum viable solution, we set up an AI Center of Excellence (COE) with the primary goal of promoting, governing, and establishing AI best practices to solve business challenges and drive value, while ensuring ethical, sustainable, and scalable adoption. Key stakeholders across the company were involved, including Business, Legal, and IT (with Information Security represented under IT); Business stakeholders ensure that AI projects align with company goals and objectives. We clearly defined roles and built our RACI (Responsible, Accountable, Consulted, Informed) matrix. An AI acceptable use policy was created and distributed across the organization. We audited existing AI tools and determined an acceptable tool list. A governance and intake process was established for the AI COE to review future AI requests. The AI COE stakeholders meet regularly: IT and tactical teams meet monthly, and business stakeholders meet quarterly.
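As a minimal sketch of what one intake record with RACI assignments might look like once the process above is in place (every field name, role, and tool below is hypothetical, not this COE's actual schema):

```python
from dataclasses import dataclass, field
from enum import Enum


class Raci(Enum):
    """RACI roles assigned to stakeholders on an AI request."""
    RESPONSIBLE = "R"
    ACCOUNTABLE = "A"
    CONSULTED = "C"
    INFORMED = "I"


@dataclass
class AIIntakeRequest:
    """One entry in the AI COE intake queue (illustrative fields only)."""
    title: str
    business_owner: str
    use_case: str
    data_classification: str                 # e.g. "public", "internal", "confidential"
    tool: str                                # platform/tool being requested
    raci: dict[str, Raci] = field(default_factory=dict)  # stakeholder -> RACI role
    status: str = "submitted"                # submitted -> in review -> approved/denied


# Example: a request routed through the COE review process
request = AIIntakeRequest(
    title="Summarize support tickets with an LLM",
    business_owner="Customer Support",
    use_case="Draft first-response summaries for agents",
    data_classification="internal",
    tool="ExampleAIPlatform",  # placeholder name, not an endorsed tool
    raci={
        "Business sponsor": Raci.ACCOUNTABLE,
        "AI COE": Raci.RESPONSIBLE,
        "Information Security": Raci.CONSULTED,
        "Legal": Raci.CONSULTED,
        "All staff": Raci.INFORMED,
    },
)

print(request.title, request.status)
```

Keeping each request in a structured record like this makes the RACI assignments and approval status easy to report on at the monthly and quarterly stakeholder meetings.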

Chief Digital Officer, Head AI Transformation Management Office in Media · a month ago

We started with a loose infrastructure (Community of Practice) across the enterprise, bringing AI-forward leaders together every other month to address proactive and reactive governance issues. Out of it, we built the MVP tools:

1. An employee AI acceptable use policy
2. A process and list for approved AI platforms/tools
3. Base priorities for the organization and a central list of AI uses being developed within the various businesses
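For illustration only (the platform names and entries below are hypothetical, not this organization's actual list), items 2 and 3 can start as a small structured file or script that pairs the approved platforms with the AI uses in flight across the businesses:

```python
# Hypothetical starting point: approved platforms plus a central register of AI uses.
APPROVED_PLATFORMS = {
    # platform name -> conditions of use (illustrative entries only)
    "ExamplePlatformA": "Approved for internal data; no client deliverables",
    "ExamplePlatformB": "Approved for public data only",
}

AI_USE_REGISTER = [
    # (business unit, use case, platform, status)
    ("Marketing", "Campaign copy drafts", "ExamplePlatformA", "in development"),
    ("Finance", "Contract clause extraction", "ExamplePlatformB", "proposed"),
]


def is_approved(platform: str) -> bool:
    """Check a requested platform against the approved list."""
    return platform in APPROVED_PLATFORMS


if __name__ == "__main__":
    for unit, use, platform, status in AI_USE_REGISTER:
        flag = "OK" if is_approved(platform) else "NEEDS REVIEW"
        print(f"{unit}: {use} on {platform} [{status}] -> {flag}")
```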

The model had to evolve as we dialed up the use of AI in products, in day-to-day workflows, and commercially for client use. We have since moved to an AI Transformation Management Office with two teams that are accountable for the primary governance of our AI usage:

1. People + Ops (including legal/contracting): responsible for reviewing contracts, building employee and client FAQs, reviewing new use cases and helping to respond to new challenges, reviewing tech and security issues on platforms, and monitoring usage (authorized and unauthorized). This team is also building a pre-jump workshop for us to use when kicking off any net-new commercial campaign that leverages AI in more than 50% of its outputs. The group includes inside and outside counsel from the regions, our Security & Technology team, and Operations and financial leads from the businesses.

2. Product + Positioning: This team helps us manage and monitor our reputation, provide regular positions and thought leadership on AI's impact on our business areas, and improve the way we safely escort products to market.

This model has been more than two years in the making, but the governance needs have increased rapidly as user adoption and client demand have grown. I highly recommend a more inclusive AI TMO rather than just a top-down governance approach, as shared accountability is really important in this changing space.

