Government regulations and frameworks around AI are starting to emerge, so be aware of specific regulations in relevant jurisdictions. As AI usage continues to trigger questions about ethics and responsibility, new regulation may come in response to shifting public sentiments about AI use. In general, though, prepare for major types of risks, including:
Regulatory. AI poses legal risks by potentially opening organizations up to lawsuits over copyrighted or otherwise protected content, information and data. Regulations are changing quickly, so stay aware of local and jurisdictional AI rules to remain compliant with governing policy. Also watch for industry-specific regulations, such as those in life sciences and financial services.
Reputational. AI can amplify bias and create a “black box” — an AI system that gives users no visibility into its inputs and operations. Vendors that do not provide transparency into their training datasets risk producing harmful outputs. Untested AI services can also pose risks through poor decision making or poor execution of tasks. Whether building or buying generative AI services, organizations need robust guardrails to prevent loss of intellectual property or customer data.
Competencies. AI requires a unique set of skills that must be intentionally sourced, whether by upskilling existing talent or by recruiting from academia or startups. Skills in areas such as prompt engineering and responsible AI will be in growing demand in the near term.
AI threats and compromises (malicious or benign) are continuous and constantly evolving, so set principles and policies for AI governance, trustworthiness, fairness, reliability, robustness, efficacy and privacy. Organizations that don’t are far more likely to experience negative AI outcomes and breaches: models that don’t perform as intended, security and privacy failures, financial and reputational loss, and harm to individuals.
The Gartner AI TRiSM (trust, risk and security management) framework includes the solutions, techniques and processes enterprises need for model interpretability and explainability, privacy, model operations and adversarial attack resistance. To get the best results from every AI initiative, we advocate standing up a dedicated, cross-functional team or task force that includes legal, compliance, security, IT and data analytics staff, along with business representatives.