What concerns you most about AI? | Gartner Peer Community

20.9k views · 8 Upvotes · 63 Comments
Chief Information Officer (CIO) in Healthcare and Biotech · 2 years ago

The lack of a functional implementation of a responsible AI framework that supports all key organisational stakeholders. A comprehensive governance model is required to bridge the gap between principles and the practice of AI practitioners within organisations. I propose a layered approach, with practical recommendations organisations should consider to integrate the layers of principles with the execution of AI processes. This also naturally spotlights the responsibilities of organisational stakeholders and the collaboration required, by giving a practical view of the different actions, the leverage each stakeholder has over AI systems, and their decision-making horizons. The ultimate outcome of this design is a concretely governed socio-technical system consisting of people, organisational processes, work systems, data, and related ecosystem institutions. I propose four layers: societal, industry, organisational, and internal AI system.

This must all be measured via a Responsible AI scorecard covering the following elements:

1. AI system and design
2. Algorithms
3. Data Management & Operations
4. Risk & Impact
5. Transparency, Explainability & Contestability
6. Accountability & Ownership
7. Development & Technology Operations
8. Compliance

This then dovetails into an elevated report for the various board subcommittees, split by element: typically the IT & Data Committee for the technical elements, and the Social & Ethics Committee and Audit & Risk Committee for the risk & impact, accountability, transparency, and compliance issues.
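To make that routing concrete, here is a minimal sketch of how the scorecard elements could be mapped to those subcommittees. The element names come verbatim from the list above; the exact committee assignments, the 1-5 maturity scale, and every identifier are assumptions for illustration, not a prescribed implementation.

```python
# Minimal sketch of scorecard-to-committee routing. Element names are from
# the list above; the routing, scale, and identifiers are assumptions.

ELEMENTS = [
    "AI system and design",
    "Algorithms",
    "Data Management & Operations",
    "Risk & Impact",
    "Transparency, Explainability & Contestability",
    "Accountability & Ownership",
    "Development & Technology Operations",
    "Compliance",
]

# Assumed routing: technical elements to IT & Data, the remainder split
# between the Social & Ethics and Audit & Risk committees, as suggested above.
COMMITTEE_ROUTING = {
    "IT & Data Committee": [
        "AI system and design",
        "Algorithms",
        "Data Management & Operations",
        "Development & Technology Operations",
    ],
    "Social & Ethics Committee": [
        "Risk & Impact",
        "Transparency, Explainability & Contestability",
    ],
    "Audit & Risk Committee": [
        "Accountability & Ownership",
        "Compliance",
    ],
}

def committee_reports(scores):
    """Roll per-element scores up into one report section per subcommittee."""
    return {
        committee: {element: scores[element] for element in elements}
        for committee, elements in COMMITTEE_ROUTING.items()
    }

if __name__ == "__main__":
    # Placeholder scores on an assumed 1-5 maturity scale.
    example_scores = {element: 3 for element in ELEMENTS}
    for committee, section in committee_reports(example_scores).items():
        print(committee, "->", section)
```

The point of the structure is simply that each scorecard element has exactly one committee owner, so the elevated report can be generated mechanically rather than reassembled by hand for each board cycle.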

1 Lightbulb
Chief Information Officer (CIO) in Healthcare and Biotech · 2 years ago

Increased complexity in architectural ecosystem design and management: efficiently adapting to new data sources, evolving data management requirements, and the data engineering needed for model monitoring and continuous optimisation.

Executive Director of Technology in Healthcare and Biotech · 2 years ago

AI technologies, particularly advanced models such as GPT, present two significant concerns. First, there is the issue of user overreliance on AI. Despite its capabilities, GPT is not infallible and may not always provide accurate information, yet users often accept its outputs uncritically, without the necessary verification or due diligence. This blind trust in potentially inaccurate AI outputs is a matter of concern.

Second, the upcoming American election presents another area of concern for AI's use. As in the 2016 election, where social media was manipulated through extensive use of bot farms and human intervention, AI technologies like GPT could be employed to influence public opinion. The potential impact here is significantly greater, however: AI's advanced capabilities could enable more effective and widespread manipulation of perceptions, which highlights the need for vigilance and appropriate regulatory measures.

1 Lightbulb
CEO in Services (non-Government) · 2 years ago

Lack of standards; not even the term "AI" is consistently defined.

2 Lightbulbs
AI LegalTech Counsel & Legal Ops Innovation Leader | Digital Transformation Expert | Strategic Advisor in Services (non-Government) · 2 years ago

My primary concern about AI lies in the skills gap and the lack of comprehensive understanding needed to harness its full capabilities. While AI offers a wide range of functionalities, inadequate knowledge and training could keep individuals and businesses from effectively leveraging these advantages, fostering fear and apprehension instead. This cautious stance, combined with insufficient proficiency, threatens to slow adoption, raise the risk of misuse, and stifle AI's potential to solve complex problems and create value across numerous sectors.

1 Lightbulb
