Have you published corporate guidance establishing guardrails for use of commercial generative AI services?
Yes. The organization developed the guidelines through a collaborative process, and top leadership has just disseminated them to everyone in the organization via the web and email:
1. Don't enter confidential information, legally restricted data, or any data the organization's data classification policy identifies as moderate or high risk into an AI tool.
2. Assume all information shared with an AI tool will be made public.
3. Follow academic integrity guidelines (e.g., on attribution, copyright, and intellectual property) and institutional standards of conduct (e.g., for use of AI in research and academic work as well as in students' coursework).
4. Check for bias and inaccuracy -- review and verify the output from AI tools (particularly if it will be included in works for publication).
5. Protect yourself and your credentials from the use of AI tools for fraud, such as phishing and similar schemes.
6. Seek support if you are considering the procurement of AI tools (e.g., for teaching and learning, one of our centers has created an excellent guide and webpage on AI for teaching and learning).
We are also developing new guidelines and running pilots with several teams right now. In the interim, our current guidelines ask teams to consult with us and evaluate each solution and use case individually until the general guidelines are finished.