Has anyone developed a ChatGPT / AI-use policy? Or do you reference existing P&Ps?
CISO in Software, 10,001+ employees
I highly recommend checking out this paper that I and some other members of the community wrote to help businesses think about how to craft these policies: https://team8.vc/rethink/cyber/a-cisos-guide-generative-ai-and-chatgpt-enterprise-risks/
Chief Information Security Officer in Healthcare and Biotech, 1,001 - 5,000 employees
We have not yet developed a policy, but we are starting to take input from various sources.
Co-Founder in Services (non-Government), 2 - 10 employees
Yes, as part of the "acceptable use policy":

Generative AI Policy for Company-Related Blogging and Social Media
Employees may use generative AI platforms (ChatGPT, Google Bard, and other AI) to create content for blogging and social media only after obtaining prior approval from their supervisor or department head. All content created using generative AI platforms must comply with the guidelines and restrictions outlined in this policy, including the prohibitions on revealing confidential or proprietary information, making discriminatory or harassing comments, and attributing personal statements to the company. Additionally, employees must ensure that any content created using a generative AI platform is factually accurate and does not misrepresent or harm the image, reputation, or goodwill of the company or its employees.
CISO in Software, 201 - 500 employees
On top of the ethical and acceptable-use considerations, we also added specific guidelines as to what information employees can provide to various LLMs (ChatGPT and derivatives), as well as how to handle GitHub Copilot output. In short: when it comes to ChatGPT, our employees are instructed not to provide any company-confidential information in prompts, and whatever output the engine generates should be considered an input / starting point and never used externally without detailed review and, ideally, edits.
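A guideline like "no confidential information in prompts" can be backstopped with a lightweight technical check before a prompt leaves the company. The sketch below is a minimal, illustrative pre-prompt screen; the pattern list is an assumption for demonstration, and a real deployment would use a dedicated DLP product rather than a handful of regexes.

```python
import re

# Illustrative patterns only -- a real DLP ruleset would be far broader.
CONFIDENTIAL_PATTERNS = {
    "classification marker": re.compile(r"\b(confidential|internal only|proprietary)\b", re.I),
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api-key-like token": re.compile(r"\b[A-Za-z0-9_-]{32,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any confidential-data patterns found in the prompt.

    An empty list means the prompt passed the (illustrative) screen.
    """
    return [name for name, pattern in CONFIDENTIAL_PATTERNS.items()
            if pattern.search(prompt)]
```

The idea is that a proxy or browser extension could call `screen_prompt` and block or warn before the text reaches the public service; the enforcement point is the design choice, not the regexes themselves.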
CIO in Finance (non-banking), 51 - 200 employees
The principal points of our policy are:
1. ChatGPT is a learning algorithm. Anything you input can be learned.
2. Don't share sensitive information (PII or company intellectual property).
3. You are ultimately responsible for any and all content that is produced regardless of whether you use an AI to help construct it.
Senior VP & CISO, 1,001 - 5,000 employees
We reference existing policies.
President in Software, 51 - 200 employees
Treat it exactly as you would any other posting or sharing on the open Internet. Assume nothing you feed AI systems is subject to enforceable NDAs unless you very specifically have a contract with the vendor that states their privacy policies, limitations of liability, etc. Note that ChatGPT by default is not private/secure - we see it as no different than posting to a public forum on Reddit and expect employees to treat it with the same care and thoughtfulness.
Associate Director, Engineering and Technology in Education, 501 - 1,000 employees
This was exactly our response. Like others, it easily fits under existing policies and procedures. We sent out a clarifying message to staff regarding the use of generative AI and pointed to the specific policy regarding the use of 3rd-party tools.
IT Manager, Self-employed
We have not developed a new policy. After discussions between our Security, Technology Innovation, Data Privacy/Protection, and Compliance teams, we concluded that our existing policies already cover (prohibit) entry of sensitive company information into the public internet, and our existing CI/CD pipelines already automate scanning/testing to mitigate concerns about code originating from somewhere else. We did release a company-wide public service announcement regarding the risks of ChatGPT, reminding employees of their obligations to protect company data. We have also deployed an internal/company-only instance of ChatGPT (GPT-3.5) for people to "play around in."
Senior Engineering Manager in Finance (non-banking), 5,001 - 10,000 employees
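A pipeline check for "code originating from somewhere else" can be sketched as a simple pre-merge gate. The snippet below is only illustrative: the marker list and the company name "Acme" are hypothetical, and a real pipeline would rely on a dedicated license/SCA scanner rather than hand-rolled patterns.

```python
import re

# Hypothetical markers of externally sourced code; a real SCA tool
# maintains far richer license and snippet databases.
FOREIGN_CODE_MARKERS = [
    re.compile(r"GNU General Public License", re.I),
    # "Acme" stands in for the company's own name: its own copyright
    # headers are fine, anyone else's are flagged.
    re.compile(r"Copyright \(c\) (?!Acme)", re.I),
]

def flag_foreign_code(source: str) -> bool:
    """Return True if the file content looks like it came from outside."""
    return any(marker.search(source) for marker in FOREIGN_CODE_MARKERS)
```

In CI, a failing result would block the merge and route the file for manual license review; the heuristic only decides *when* a human looks, not whether the code is acceptable.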
We launched an internal Slack plugin and a self-hosted version of ChatGPT to preserve data privacy, and we encourage people to use that instead of the public one.
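The core of such a plugin is just forwarding the user's message to the internal endpoint instead of the public API. The sketch below assumes an OpenAI-compatible chat-completions API on an internal host; the URL and model name are illustrative placeholders, not real infrastructure.

```python
import json
import urllib.request

# Assumed internal endpoint -- replace with the real self-hosted URL.
INTERNAL_LLM_URL = "http://llm.internal.example/v1/chat/completions"

def build_payload(prompt: str) -> dict:
    """Build an OpenAI-style chat request for the internal model."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_internal_llm(prompt: str, timeout: float = 30.0) -> str:
    """Send the prompt to the self-hosted instance and return the reply text."""
    req = urllib.request.Request(
        INTERNAL_LLM_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the request never leaves company infrastructure, the "don't paste sensitive data into public tools" concern is addressed at the network level rather than by policy alone.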
1. Ethical Use: We are committed to upholding ethical standards in the use of AI technologies, ensuring that our AI systems operate within legal frameworks and respect the rights and dignity of individuals.
2. Transparency: We strive to provide clear and understandable explanations regarding the capabilities and limitations of our AI systems, ensuring that users are aware when they are interacting with an AI-powered chatbot.
3. Privacy and Data Protection: We prioritize the protection of user data and privacy, adhering to applicable data protection laws and regulations. We take measures to ensure that user information is handled securely and responsibly, and we are transparent about our data collection, storage, and usage practices.
4. Bias Mitigation: We are committed to addressing and minimizing biases in our AI systems to provide fair and unbiased interactions. We continuously monitor and evaluate our algorithms to mitigate any unintended biases and discriminatory outcomes.
5. User Safety and Well-being: The safety and well-being of our users are paramount. We implement safeguards to prevent the misuse of our AI systems, protect against harmful content, and ensure that users are not subjected to inappropriate or abusive interactions.