Who has a well-communicated governance model in place for the use of generative AI tools in their org?

We do, we've got a well-defined governance model: 38%

We almost do, we're in the process of creating it: 51%

Not us, we haven't tackled that yet: 11%


144 PARTICIPANTS

1.7k views · 2 Upvotes · 5 Comments

Global Intelligent Automation Manager in Healthcare and Biotech, 5,001 - 10,000 employees
I can say I'm building it... 

However, in the rapidly evolving landscape of generative AI tools and their applications, the establishment of a well-communicated governance model is indeed a complex endeavor that requires careful consideration. The pursuit of a governance framework is not only a reflection of an organization's commitment to responsible AI usage, but it also speaks to its capacity to harness the transformative potential of AI while minimizing risks.

A successful governance model for the use of generative AI tools must strike a delicate balance between fostering innovation and ensuring ethical, legal, and operational compliance. While it's true that a "one-size-fits-all" approach might not be feasible due to the diverse nature of organizations, a high-level governance structure can serve as a guiding framework that empowers users while safeguarding against misuse.

Key considerations within such a model could include:

Clear Objectives and Principles: Establish a set of overarching objectives and guiding principles that articulate the organization's commitment to responsible AI use. These principles should emphasize transparency, accountability, fairness, and respect for ethical norms.

Cross-functional Collaboration: Develop a governance committee or task force comprising representatives from various departments such as legal, IT, data science, ethics, and compliance. This cross-functional approach ensures a holistic perspective and better decision-making.

Risk Assessment and Mitigation: Implement mechanisms to assess potential risks associated with AI-generated content and outcomes. Develop strategies to mitigate these risks, including regular audits, impact assessments, and continuous monitoring.

User Education and Training: Acknowledge the varying levels of digital literacy among employees. Offer comprehensive training programs to familiarize users with the capabilities and limitations of generative AI tools. This empowers users to make informed decisions and reduces the risk of unintended consequences.

Usage Policies and Guidelines: Create comprehensive usage policies that outline the acceptable and responsible use of generative AI tools. These policies should address issues like intellectual property rights, privacy concerns, and potential biases.

Feedback Mechanisms: Establish channels for users to provide feedback, report concerns, and share experiences related to AI-generated content. This promotes a culture of continuous improvement and enables organizations to adapt their governance model based on user insights.

Ongoing Adaptation: Acknowledge that the field of AI is rapidly evolving, and models like LLMs are continuously learning and updating. As such, the governance model must also be flexible and adaptable to accommodate emerging challenges and opportunities.

Transparency and Accountability: Ensure that decision-making processes related to AI tools are transparent and well-documented. Assign accountability for decisions and outcomes to appropriate individuals or teams.

Ethical Review and Validation: Institute an ethical review process for AI-generated content, especially when it pertains to critical applications such as healthcare, legal documents, or public communication. This helps prevent potentially harmful or biased content from being disseminated.

Continuous Learning and Innovation: Encourage a culture of continuous learning and innovation. Stay updated on the latest developments in AI ethics, regulations, and best practices to refine and enhance the governance model over time.

Indeed, the challenge lies in managing the ever-evolving landscape of AI models and their implications. Organizations should be prepared to adapt their governance model to accommodate emerging technologies and their corresponding challenges. By fostering collaboration, transparency, and user education, organizations can cultivate an environment where the potential of generative AI tools is harnessed responsibly and effectively.
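Parts of such a model, particularly the usage policies and risk-mitigation points above, can be captured in machine-readable form so they are enforceable rather than aspirational. A minimal sketch follows; the tool names, data categories, and rules are hypothetical examples, not a prescribed policy:

```python
# Minimal sketch of a machine-readable generative-AI usage policy.
# Tool names, data categories, and rules are hypothetical examples.

APPROVED_TOOLS = {"internal-llm", "vendor-copilot"}

# Data categories that must never leave the organization in a prompt.
RESTRICTED_DATA = {"phi", "pii", "source-code", "trade-secret"}

def is_request_allowed(tool: str, data_categories: set) -> tuple:
    """Return (allowed, reason) for a proposed generative-AI request."""
    if tool not in APPROVED_TOOLS:
        return False, f"tool '{tool}' is not on the approved list"
    blocked = data_categories & RESTRICTED_DATA
    if blocked:
        return False, f"restricted data categories in prompt: {sorted(blocked)}"
    return True, "allowed"

# Example: an approved tool, but the prompt contains patient data.
allowed, reason = is_request_allowed("internal-llm", {"phi"})
```

Encoding the policy this way also gives the governance committee a single artifact to review, audit, and version as the rules evolve.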
Head of Cyber Security in Manufacturing, 501 - 1,000 employees
We block external AI services via proxy/SSE and redirect users to internal, approved AI services. This way people land directly on legitimate services. For the rest, default corporate policies apply.
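The redirect logic could be sketched roughly as follows. In practice this lives in the SSE/forward proxy configuration rather than application code, and the domains and internal URL below are hypothetical:

```python
# Rough sketch of proxy-style redirection of external AI services to an
# internal approved endpoint. Domains and the internal URL are hypothetical;
# real deployments implement this in the SSE/forward proxy itself.

EXTERNAL_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}
INTERNAL_AI_URL = "https://ai.internal.example.com"

def route_request(host: str) -> str:
    """Return the URL the proxy should serve for a requested host."""
    if host in EXTERNAL_AI_DOMAINS:
        # Block the external service and send the user to the approved one.
        return INTERNAL_AI_URL
    # Everything else falls through to default corporate policy.
    return f"https://{host}"
```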
CISO in Software, 10,001+ employees

Are you using a specific service for AI service blocking? Can it detect all types of AI services?

Head of Cyber Security in Manufacturing, 501 - 1,000 employees

We use an SSE that is also listed in the top-right quadrant of the Gartner Magic Quadrant. We apply similar logic to all translation services not operated by us.

If anyone wants to see it, we can set up a private chat/call.

Global Intelligent Automation Manager in Healthcare and Biotech, 5,001 - 10,000 employees

So what if employees use their cell phones to access GPT/LLMs and then email the output back to themselves? I've found that closing doors leads to more unforeseen alleyways.

There are tools and training that can help employees write better prompts.

An example of a good internal prompt would be:
-Draft an offer letter for [NAME] of [Location] for the position of [Occupation]

If this fails, look to bring in a vendor who can keep the door open yet normalize data or track who is doing what, so you can manage intellectual property that would otherwise leak into the open...
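The placeholder style above can be turned into a small helper that builds the prompt from structured fields, so the sensitive values stay explicit and auditable instead of being pasted free-form into a public tool. A sketch, with illustrative field names:

```python
# Sketch of a prompt template that keeps sensitive fields explicit and
# auditable. Field names mirror the placeholder example above.

OFFER_LETTER_TEMPLATE = (
    "Draft an offer letter for {name} of {location} "
    "for the position of {occupation}"
)

def build_offer_letter_prompt(name: str, location: str, occupation: str) -> str:
    """Fill the template; callers can log the fields for IP/privacy audits."""
    return OFFER_LETTER_TEMPLATE.format(
        name=name, location=location, occupation=occupation
    )

prompt = build_offer_letter_prompt("Jane Doe", "Berlin", "Data Analyst")
```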

