Did you see the Center for AI Safety's latest statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Should we really be as concerned about AI-induced human extinction as we are about pandemics and nuclear war?

CIO in Telecommunication3 years ago

It's the humans we need to worry about. Nuclear war is a human invention. "Gain of function" research on "enhancing" viruses - we did that.

Director of IT in Healthcare and Biotech3 years ago

TLDR: Not yet, but we need to keep an eye on AI and on what authority we transfer to it.

Relevant data:
1) AI in the Ukraine war used to identify individual Russian soldiers
https://www.nationaldefensemagazine.org/articles/2023/3/24/ukraine-a-living-lab-for-ai-warfare

2) Swarm technologies for drones and sixth-generation fighters. Even older Gen-4 platforms can be retrofitted.
https://en.wikipedia.org/wiki/Sixth-generation_fighter

https://www.forbes.com/sites/davidhambling/2020/12/11/new-project-will-give-us-mq-9-reaper-drones-artificial-intelligence/

3) AI in radiology, already outperforming humans on some diagnostic tasks
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6268174/

4) We don't fully understand how the AI models work, even though we built them.
https://www.bbc.com/future/article/20230405-why-ai-is-becoming-impossible-for-humans-to-understand

https://www.standard.co.uk/tech/google-ceo-sundar-pichai-understand-ai-chatbot-bard-b1074589.html

5) Would a computer have violated orders and disregarded high-priority data, the way Stanislav Petrov did when he judged a missile alert to be a false alarm?
https://en.wikipedia.org/wiki/Stanislav_Petrov

6) Skynet. The scenario that keeps philosophers awake. 

https://en.wikipedia.org/wiki/Skynet_(Terminator)

Founder & Chief AI Strategist in Software3 years ago

I believe this is a really tough subject, with roughly a 50/50 chance of getting it right. Few people have the level of insight of those working on the underlying technology; the concern might be completely unfounded or absolutely spot on. However, those working on the underlying technology also have manifold interests of their own.

I believe the more practical and actionable questions anyone working on AI should ask (and answer) are: What are the implications of our work? How far are we willing to take it? ...and then do a case-by-case risk assessment.

