What concerns you most about AI?
Increased complexity in designing and managing the architectural ecosystem: efficiently adapting to new sources, data management requirements, and the data engineering needed for model monitoring and continuous optimisation.
AI technologies, particularly advanced models such as GPT, present two significant concerns. First, there is the issue of user overreliance on AI. Despite its capabilities, GPT is not infallible and may not always provide accurate information, yet users often accept its outputs uncritically, without the verification or due diligence they would apply elsewhere. This blind trust in potentially inaccurate outputs is a genuine risk.
Second, the upcoming American election presents another area of concern for AI's use. As in the 2016 election, where social media was manipulated through extensive use of bot farms and human intervention, AI technologies such as GPT could be employed to influence public opinion. The potential impact is significantly greater this time, because the technology's advanced capabilities enable more effective and widespread manipulation of perceptions. This increased capacity for influencing public opinion highlights the need for vigilance and appropriate regulatory measures.
Lack of standards; not even the term "AI" is consistently defined.
My primary concern about AI lies in the skills gap and lack of comprehensive understanding necessary to harness its full capabilities. While AI offers a wide range of functionalities, inadequate knowledge and training could hinder individuals and businesses from effectively leveraging these advantages, fostering fear and apprehension instead. This cautious stance, combined with insufficient proficiency, threatens to slow down adoption, raise the risk of misuse, and potentially stifle AI's potential to solve complex problems and create value across numerous sectors.
Lack of functional implementation of a responsible AI framework in a way that supports all key organisational stakeholders. A comprehensive governance model is required to bridge the gap between principles and practice by AI practitioners within organisations. I propose a layered approach, with recommendations that organisations should consider in order to integrate the layers of principles with the practical execution of AI processes. This also naturally spotlights the responsibility of organisational stakeholders and the collaboration required, by giving a practical view of the different requirements for action, leverage over AI systems, and decision-making horizons. The ultimate outcome of this design is a concretely governed socio-technical system consisting of people, organisational processes, work systems, data, and the related ecosystem institutions. I propose four layers: societal, industry, organisational, and internal AI system.
This must all be measured via a Responsible AI scorecard covering the following elements:
1. AI system and design
2. Algorithms
3. Data Management & Operations
4. Risk & Impact
5. Transparency, Explainability & Contestability
6. Accountability & Ownership
7. Development & Technology Operations
8. Compliance
This then dovetails into an elevated report for the various board subcommittees, based on these elements: the relevant committees are often the IT & Data Committee for the technical elements, and the Social & Ethics Committee and Audit & Risk Committee for the risk & impact, accountability, transparency, and compliance issues.
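The element-to-committee routing described above can be sketched as a simple data structure. To be clear, this is an illustrative assumption of one possible mapping, not a prescribed standard; the committee names follow the comment, and the per-element assignments are my own guesses for demonstration:

```python
# Hypothetical routing of Responsible AI scorecard elements to board
# subcommittees. The mapping below is illustrative, not authoritative.
SCORECARD_ROUTING = {
    "AI system and design": ["IT & Data Committee"],
    "Algorithms": ["IT & Data Committee"],
    "Data Management & Operations": ["IT & Data Committee"],
    "Development & Technology Operations": ["IT & Data Committee"],
    "Risk & Impact": ["Audit & Risk Committee", "Social & Ethics Committee"],
    "Transparency, Explainability & Contestability": ["Social & Ethics Committee"],
    "Accountability & Ownership": ["Audit & Risk Committee"],
    "Compliance": ["Audit & Risk Committee"],
}


def committee_report(routing):
    """Invert the element -> committees map into a per-committee agenda,
    i.e. the list of scorecard elements each subcommittee reviews."""
    report = {}
    for element, committees in routing.items():
        for committee in committees:
            report.setdefault(committee, []).append(element)
    return report
```

Inverting the map gives each subcommittee its elevated report: for example, `committee_report(SCORECARD_ROUTING)["Audit & Risk Committee"]` would list Risk & Impact, Accountability & Ownership, and Compliance.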