We are on the cusp of conducting a pilot of AI tooling - we intend to use Gemini and Copilot, providing training, guidance, and policy to our user group. Before we start, we should conduct a risk assessment. My question is: can you recommend a tool or template?
While all of the tools mentioned thus far address bias, you may want to pay particular attention to Section 1557 of the ACA and make sure you are addressing this topic in accordance with its requirements. I am seeing interpretations of this section's expectations that differ wildly.
Hi Nick,
The California state government uses this GenAI Risk Assessment Form (SIMM 5305-F): https://cdt.ca.gov/wp-content/uploads/2025/08/SIMM-5305-F-Generative-Artificial-Intelligence-Risk-Assessment-20250822FINAL.pdf
Great points, Brian and Girish — NIST AI RMF is the right foundation.
Nick, to operationalize it, the Cloud Security Alliance AI Controls Matrix (AICM) might be useful — it's a vendor-neutral control checklist mapped to NIST AI RMF and ISO/IEC 42001, helpful for validating AI risks before deploying Gemini or Copilot. https://cloudsecurityalliance.org/artifacts/ai-controls-matrix
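As one way to operationalize that kind of matrix before go-live, here is a minimal Python sketch of tracking assessment coverage across mapped frameworks. The control IDs, mappings, and statuses below are purely illustrative placeholders, not drawn from the actual AICM:

```python
# Hypothetical excerpt of a control-coverage tracker. Each entry records which
# framework clauses an internal control maps to and whether it has been
# assessed ahead of the pilot. IDs and mappings are invented for illustration.
controls = {
    "CTRL-01": {"maps_to": ["NIST AI RMF: GOVERN", "ISO/IEC 42001: 5.2"], "assessed": True},
    "CTRL-02": {"maps_to": ["NIST AI RMF: MAP"], "assessed": False},
    "CTRL-03": {"maps_to": ["NIST AI RMF: MANAGE", "ISO/IEC 42001: 8.1"], "assessed": False},
}

# Surface any controls that have not been assessed before deployment.
gaps = [cid for cid, c in controls.items() if not c["assessed"]]
print(f"Unassessed controls before go-live: {gaps}")
```

Even a simple tracker like this makes it easy to show auditors which controls were reviewed before Gemini or Copilot went live.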
If your scope is risk, then the NIST AI RMF is a good start. Copilot is generally regarded as the better fit for enterprise use cases, given its integration with the Microsoft 365 tenant and its enterprise data protection commitments.

We work closely with our Risk Management Department to determine the compliance checklist:
• Governing Law Compliance - Ensure all activities adhere to the laws of jurisdictions with strict personal data protection and privacy rights, such as the EU, UK, and US.
• Document Security - Maintain the integrity and confidentiality of all uploaded files throughout their lifecycle via our DLP tool.
• No Model Training Usage - Confirm that uploaded documents are not used for training any models.
• Retention and Deletion - Verify that documents are deleted promptly once the specified retention period has expired (a minimal sketch of this check follows below).
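For the retention and deletion item, here is a minimal Python sketch of the kind of check involved. The UploadedDoc record, document IDs, and retention periods are all hypothetical placeholders for whatever metadata your document store and DLP tool actually expose:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical record of an uploaded document; in practice this metadata
# would come from your document store or DLP tool.
@dataclass
class UploadedDoc:
    doc_id: str
    uploaded_at: datetime
    retention_days: int

def expired_docs(docs: list[UploadedDoc], now: datetime | None = None) -> list[UploadedDoc]:
    """Return documents whose retention period has elapsed."""
    now = now or datetime.now(timezone.utc)
    return [d for d in docs if now - d.uploaded_at > timedelta(days=d.retention_days)]

if __name__ == "__main__":
    docs = [
        UploadedDoc("contract-001", datetime(2025, 1, 1, tzinfo=timezone.utc), 90),
        UploadedDoc("memo-042", datetime(2025, 9, 1, tzinfo=timezone.utc), 365),
    ]
    for d in expired_docs(docs):
        # In practice, call the document store's delete API here and
        # write an audit log entry rather than just printing.
        print(f"RETENTION EXPIRED - delete {d.doc_id}")
```

Running a check like this on a schedule gives you evidence for auditors that the retention policy is actually enforced, not just documented.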
Hope this helps :)