What challenges are you facing today with governing AI?

314 views · 3 Comments
Director of Marketing · 15 hours ago

I discuss this particular topic with customers on a regular basis, and it's especially challenging because AI's evolving at such a rapid pace.

Specific challenges include the following (I'll include a listing of best practices to address the challenges at the end):

1) Protecting sensitive content from potential inclusion into Large Language Models (LLMs):

A casual review of tech news reveals multiple cases in which sensitive content was accidentally incorporated into LLMs by well-meaning employees who were simply trying to get their jobs done. In some cases, the carelessness was substantial enough to result in lawsuits and accusations of data privacy violations.

2) Making AI responses more relevant to users' individual roles and their levels of data access:

Many organizations struggle with "one-size-fits-all" AI responses, in which a senior executive receives the same response to an AI prompt as a lower-level associate who should have far less access to sensitive information. This is especially concerning for users who are new to the company and/or don't have the best track record of safeguarding sensitive data.

3) Governing at the speed of AI, even though many organizations adopt data governance solutions slowly:

Here, organizational executives need to understand that data governance initiatives around AI need to be implemented rapidly in order to keep pace with the advancing technology. This includes faster budgetary approval of AI governance initiatives, along with prioritized implementation of those initiatives.

I believe it's best to provide solutions and best practices when I recap challenges, so the following approaches will help to address the issues that I've outlined above:

1) Maintain a responsible AI usage policy that lists the commercial AI models formally approved by the organization and explains how users should engage with them.
2) Train users regularly about the danger of incorporating sensitive content into LLMs, particularly LLMs that are not sanctioned by your company. Utilize real-world examples to help them understand potential consequences.
3) Take action now to classify your sensitive content, and confirm that it's accessible only by the correct parties. If your sensitive-content house isn't in order, that is highly likely to impact prompt responses when the data is leveraged by AI.
4) Consider "AI safeguards," which can help to limit users' access to sensitive information in AI responses, based on their "Business Need to Know."
5) Create an AI Governance Task Force, which will keep technical teams and executives apprised of the latest AI governance issues while helping to drive attention to the overall importance of data governance. Such an approach can also help to free up budgetary resources rapidly.
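To make the "AI safeguards" idea in #4 concrete: one common pattern is to filter retrieved content by the user's clearance level before it ever reaches the LLM prompt, so a response can't leak what the user shouldn't see. The sketch below is purely illustrative — the clearance levels, document names, and function names are my own assumptions, not any specific product's API:

```python
# Hypothetical sketch of an "AI safeguard": filter retrieved documents
# by the user's clearance before they reach the LLM prompt.
# All names and levels here are illustrative assumptions.

from dataclasses import dataclass

# Ordered clearance levels; a higher index means broader access.
CLEARANCE = ["public", "internal", "confidential", "restricted"]

@dataclass
class Document:
    text: str
    classification: str  # one of the CLEARANCE values

def filter_for_user(docs: list[Document], user_clearance: str) -> list[Document]:
    """Keep only documents at or below the user's clearance level."""
    max_level = CLEARANCE.index(user_clearance)
    return [d for d in docs if CLEARANCE.index(d.classification) <= max_level]

docs = [
    Document("Q3 headcount plan", "restricted"),
    Document("Product FAQ", "public"),
    Document("Internal style guide", "internal"),
]

# An associate with "internal" clearance never sees the restricted
# document, so the LLM cannot surface it in a response.
visible = filter_for_user(docs, "internal")
print([d.text for d in visible])  # ['Product FAQ', 'Internal style guide']
```

The key design point is that enforcement happens *before* generation: rather than trusting the model to withhold sensitive details, content outside the user's "Business Need to Know" is simply never placed in the prompt.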

Please feel free to share additional best practices in your replies and comments. Thank you!

Director, Information Security & Trust · 20 days ago

One of our biggest challenges with AI is simply its pervasiveness. To govern AI effectively, we have to consider many different perspectives. For example, we might want to build AI into a process to eliminate manual transcription, which is fairly straightforward. However, nearly every SaaS provider we use (currently about 580 suppliers) has implemented some form of AI feature within their tool. Governing AI, therefore, requires us to understand not only how we are using AI internally, but also how our software suppliers are leveraging AI within their platforms. We need to ensure that both our suppliers and our people are using AI appropriately and responsibly.

No title · 20 days ago

I want to add to what Rachel mentioned. For us, if there is a new supplier, we have implemented a third-party risk assessment process. These assessments have expanded to include questions about what AI tools the supplier is using, how they are using our data once we engage their services, and how these practices are reflected in the terms and conditions of our contracts.

It's important to take both a forward-looking and backward-looking approach. Not only do we assess new contracts, but we also need to review prior agreements to ensure they meet our current standards for AI governance. While our organization may not be as large as others, we share a sense of uneasiness. If the CEO were to ask direct questions about our AI governance, we would strive to provide the best answers, but we recognize there are still areas we are working through and challenges that remain unresolved.
