AI coverage seems to waffle between endless opportunity and world-ending disaster. How do we evaluate the potential risks of AI in a measured way? How do you manage strong opinions about the technology on your team or in leadership?
Chief Information Security Officer in Healthcare and Biotech, 2 years ago
Evaluating the potential risks of AI in a measured way involves understanding the technology, considering the context, engaging diverse perspectives, conducting risk assessments, staying informed, fostering transparency, developing policies, and seeking external input. Managing strong opinions about AI on your team or in leadership requires open dialogue, respect for different viewpoints, and actively addressing concerns. By fostering an inclusive and informed environment, organizations can navigate the opportunities and risks of AI more effectively.
Instead of focusing so much on how we can incorporate generative AI and whether it is a game changer or a world-ending disaster, we should think about scenario planning.
What are the different scenarios around AI, and how do we prepare for each of them?
I think we're heading toward an era where we lean too heavily on predictive correlation, on whatever happens to be predictive at a given moment in time. When something novel happens, those predictive models become obsolete and you have to start from scratch. You end up with too much reliance on computer intelligence without people truly understanding what's going on behind the scenes and what's actually causal.
If you understand cause and effect, getting back to research principles and the discipline of the scientific method, you can build better models. You can probably use those models differently, or at least you have a better yardstick, a guiding light, to navigate the waters.
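To make the point about predictive models going obsolete concrete, here is a minimal, hypothetical sketch in Python using scikit-learn. The data and the "regime shift" are invented purely for illustration; it simply shows how a model built on a correlation that held yesterday can fall apart when the underlying relationship changes, even though nothing in the code itself changed.

```python
# Illustrative sketch only: a purely correlational model fit in one regime
# degrades badly when the underlying relationship shifts.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Regime A: the feature correlates strongly and positively with the outcome.
x_train = rng.normal(size=(1000, 1))
y_train = 3.0 * x_train[:, 0] + rng.normal(scale=0.5, size=1000)

model = LinearRegression().fit(x_train, y_train)

# Regime B: something novel happens and the relationship flips sign.
x_new = rng.normal(size=(1000, 1))
y_new = -3.0 * x_new[:, 0] + rng.normal(scale=0.5, size=1000)

print("error in the regime it was trained on:",
      mean_absolute_error(y_train, model.predict(x_train)))
print("error after the regime shift:",
      mean_absolute_error(y_new, model.predict(x_new)))
# The second error is far larger: the learned correlation no longer holds,
# which is exactly what monitoring and scenario planning are meant to catch.
```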