Did you see the Center for AI Safety's latest statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Should we really be as concerned about AI-induced human extinction as we are about pandemics and nuclear war?


VP of Marketing & Solutions — Artificial Intelligence in Software, 10,001+ employees
I believe this is a really tough subject, with roughly even odds of getting it right. Few people have the level of insight of those working on the underlying technology; the concern might be completely unfounded or absolutely spot on. However, those working on that technology also have manifold interests of their own.

I believe the more practical and actionable questions anyone working on AI should ask (and answer) are: What are the implications of our work? How far are we willing to take it? …and then do a case-by-case risk assessment.
Director of IT in Healthcare and Biotech, 501 - 1,000 employees
TL;DR: Not yet, but we need to keep an eye on AI and on what authority we transfer to it.

Relevant data:
1) AI in the Ukraine war, used to identify individual Russian soldiers.

2) Swarm technologies for drones and sixth-generation fighters; even older Gen 4 platforms can be retrofitted.

3) AI in radiology, already matching or beating human readers on some tasks.

4) We don't know how these AI models work internally, even though we built them.

5) Would a computer violate orders or disregard high-priority data?

6) Skynet: the scenario that keeps philosophers awake.

CIO in Telecommunication, 1,001 - 5,000 employees
It's the humans we need to worry about. Nuclear war is a human invention. "Gain of function" research on "enhancing" viruses? We did that too.
