AI Risks: Global Priority or Overblown Concern?

Navigating the complexities of AI leadership often involves distilling broad media discussions so others can understand them. Recently, the media homed in on a statement about AI risks released by leading AI researchers and philosophers:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

This powerful statement, while grabbing media attention, also sparked skepticism among other experts in the AI field, such as Andrew Ng and Jeremy Howard. Let’s delve into their counter-arguments and propose ways to address these concerns with your team and peers.

The AI risk statement, both concise and clear, is backed by some of the most brilliant minds in the field. Unlike previous “pause on AI” calls — which seemed overly idealistic — this statement steers clear of unrealistic solutions. It is hosted by the Center for AI Safety and accompanied by a list of potential AI risks, including weaponization, misinformation, and proxy gaming — all valid points of concern. The key question is: Do AI risks warrant classification as global priorities?

Jeremy Howard and his colleagues question the urgency of averting potential AI-caused extinction. They contend that while AI certainly poses risks requiring serious attention, these threats do not rise to the level of pressing global priorities such as pandemics and nuclear war. It’s a compelling argument: the potential for AI to cause extinction is, at this stage, speculative, while pandemics and nuclear warfare are immediate, tangible threats. As the world still recovers from a global pandemic that disrupted lives for years and claimed millions, and witnesses a proxy war in Eastern Europe involving the two nuclear superpowers, should AI really be categorized as a comparable global priority?

The reality is, there’s no definitive answer — we’re each guided by our own perspectives. My stance is one of understanding and respect for both sides of the argument. I value the public debate led by the brightest minds, as it deepens our understanding of the potential risks. When colleagues, leaders, or friends seek your perspective on the issue, I advise approaching these questions with humility. Without definitive answers, our best approach is to encourage open, respectful dialogue and collectively work toward the most beneficial outcomes.

Subscribe to our YouTube channel where we post daily videos on the ever-evolving world of AI and large language models.


Prolego is an elite consulting team of AI engineers, strategists, and creative professionals guiding the world’s largest companies through the AI transformation. Founded in 2017 by technology veterans Kevin Dewalt and Russ Rands, Prolego has helped dozens of Fortune 1000 companies develop AI strategies, transform their workforce, and build state-of-the-art AI solutions.

Let’s Future Proof Your Business.