What learnings have emerged for your organization in its efforts to secure and govern AI so far? How are you planning to iterate on your strategy over the next year?
For my organization, understanding the use cases for AI has been particularly instructive. People want to do things with AI, and as Rachel noted, the way technology companies market AI can be misleading. Vendors claim you can simply generate code and skip steps like error checking, but in reality, verification is always needed.
Learning about the use cases from different departments has also been valuable. Sometimes I discover needs I had not anticipated, such as the creative team wanting to use AI for voiceovers in certain campaigns. That introduces different tools and considerations, even if they fall outside our current scope. Recognizing these valid use cases is part of what makes working in technology exciting: we get to watch things evolve and adapt over time.

One of the biggest learnings has been that people who want to use AI are often much further from being able to use it than they realize. When someone says they want to turn a completely manual process into an AI-driven or AI-led one, what they are usually describing is a need for automation. Putting that automation in place is a necessary step before AI can be introduced to complete a specific task.
While this distinction does not necessarily affect us from a governance perspective, it is an important insight for the company: to truly take advantage of AI, there must be defined processes and automation in place. Without those foundational elements, or an automation tool to support them, it is very difficult to implement AI effectively.