Did you see the Center for AI Safety's latest statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Should we really be as concerned about AI-induced human extinction as we are about pandemics and nuclear war?

1.1k views · 1 Upvote · 3 Comments
CIO in Telecommunication, 2 years ago

It's the humans we need to worry about. Nuclear war is a human invention. "Gain of function" research on "enhancing" viruses: we did that.

Director of IT in Healthcare and Biotech, 2 years ago

TLDR: Not yet, but we need to keep an eye on AI and on what authority we transfer to it.

Relevant data:
1) AI used in the Ukraine war to identify individual Russian soldiers
https://www.nationaldefensemagazine.org/articles/2023/3/24/ukraine-a-living-lab-for-ai-warfare

2) Swarm technologies for drones and sixth-generation fighters. Even older Gen-4 platforms can be retrofitted.
https://en.wikipedia.org/wiki/Sixth-generation_fighter

https://www.forbes.com/sites/davidhambling/2020/12/11/new-project-will-give-us-mq-9-reaper-drones-artificial-intelligence/

3) AI in radiology. Better than humans!
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6268174/

4) We don't know how the AI models work, even though we made them. 
https://www.bbc.com/future/article/20230405-why-ai-is-becoming-impossible-for-humans-to-understand

https://www.standard.co.uk/tech/google-ceo-sundar-pichai-understand-ai-chatbot-bard-b1074589.html

5) Would a computer violate orders and disregard high-priority data, as Stanislav Petrov did?
https://en.wikipedia.org/wiki/Stanislav_Petrov

6) Skynet. The scenario that keeps philosophers awake. 

https://en.wikipedia.org/wiki/Skynet_(Terminator)

Founder & Chief AI Strategist in Software, 2 years ago

I believe this is a really tough subject, with roughly a 50/50 chance of getting it right. Few people have the level of insight of those working on the underlying technology. The concern might be completely unfounded or absolutely spot on. However, those working on the underlying technology also have manifold interests of their own.

I believe the more practical and actionable questions anyone working on AI should ask (and answer) are: What are the implications of our work? How far are we willing to take it? …and then do a case-by-case risk assessment.

