New applications of artificial intelligence (AI) are emerging at a rapid pace, particularly in healthcare. The industry is full of technology vendors, data science companies, researchers and innovators focused on creating predictive and prescriptive algorithms that improve diagnosis and treatment recommendations.
Gartner predicts that by 2021, 75% of healthcare delivery organizations (HDOs) will have invested in an AI capability that explicitly improves either operational performance or clinical outcomes. The more activity there is around AI in healthcare, the greater the need for HDOs to establish AI governance.
“AI governance is necessary, especially for clinical applications of the technology,” says Laura Craft, VP Analyst at Gartner. “However, because AI techniques are largely new territory for most HDOs, there is a lack of common rules, processes and guidelines for eager entrepreneurs to follow as they design their pilots.”
Most HDOs have not developed an enterprise strategy for how AI will be introduced, invested in and managed. The result is a lack of trust in AI-powered solutions and a new problem that healthcare provider CIOs are uniquely equipped to address. CIOs must take a lead role in ensuring discipline and accountability around the use of AI in HDOs.
Craft shares three actions that healthcare provider CIOs should take to ensure that any implementation of AI is safe, secure and realizes its potential.
Establish an AI governance council
AI governance need not sit apart from an existing leadership body. If a strategic leadership council for a data and analytics program already exists, it is the most obvious fit, as AI is a natural extension of an analytics program. However, other strategic leadership councils may lack the purpose and setup needed to govern the investment, value and use of strategic, high-risk AI capabilities.
Whether through an existing or separate council, successful AI governance includes four pillars:
- Legal, regulatory and compliance review to decide what happens and who is held accountable when an AI output causes harm.
- Clinical and scientific verification and validation to confirm that the AI algorithm has been tested on a valid data set.
- Ethical evaluation and usage guidelines to determine whether or to what extent patients are informed about the role AI is playing in their diagnosis and treatment.
- Organizational deployment and change management for training staff on what is expected and the correct actions to take when using AI.
Establish common definitions and strategic value of AI
Organizations must share a common definition of AI to have productive conversations about its value and investment. This means involving clinicians, scientists, technologists and end users in the conversation to reach universal agreement across all stakeholders. CIOs should use the AI governance council to facilitate discussion and formally adopt an enterprise-wide perspective. Consistency and thoroughness should also be established around AI opportunity identification and selection.
Anticipate any data challenges
Data is often the biggest obstacle to the smooth and successful implementation and use of AI. When attempting to curate a clean, complete and accurate dataset, HDOs are challenged by poor data quality, missing or incomplete data, and issues of data consent, privacy and security. To address this, successful CIOs scrutinize current data governance practices and data acquisition methods, provide guidance on the types of data that will be needed, and upgrade data curation tools and services.
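To make the data-curation point concrete, the sketch below is a minimal, hypothetical illustration in Python of how a data team might audit records for completeness, plausibility and consent before admitting them to an AI training set. The field names ("patient_id", "age", "consent") and the age threshold are assumptions for illustration only, not a standard or a recommended schema.

```python
# Hypothetical data-quality audit an HDO team might run during dataset
# curation. Field names and thresholds are illustrative assumptions.

REQUIRED_FIELDS = {"patient_id", "age", "consent"}

def audit_record(record):
    """Return a list of data-quality issues found in one record."""
    issues = []
    # Completeness: every required field must be present and non-null.
    present = {k for k, v in record.items() if v is not None}
    missing = REQUIRED_FIELDS - present
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    # Plausibility: flag values outside a sane range.
    age = record.get("age")
    if age is not None and not (0 <= age <= 120):
        issues.append(f"implausible age: {age}")
    # Consent: records without data-use consent cannot be used.
    if record.get("consent") is False:
        issues.append("no data-use consent on file")
    return issues

def audit_dataset(records):
    """Map each problematic record's index to its list of issues."""
    report = {}
    for i, rec in enumerate(records):
        problems = audit_record(rec)
        if problems:
            report[i] = problems
    return report

records = [
    {"patient_id": "A1", "age": 42, "consent": True},
    {"patient_id": None, "age": 200, "consent": False},
]
print(audit_dataset(records))
```

In this sketch only the second record is flagged (missing identifier, implausible age, no consent), which is the kind of exclusion report that data curation tooling would feed back into the governance process.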
“Overall, AI governance must be implemented as a formal set of guidelines with enterprise-level authority,” says Craft. “This sends a clear signal to the organization that AI is considered strategic and has the attention and interest of senior executives.”