AI Poses Some Risks That Could Lead to Human Extinction, Says Report Commissioned by US State Department

The US Department of State has set its sights on the potential risks posed by artificial intelligence (AI), commissioning an exhaustive report from AI startup Gladstone to assess the benefits and dangers of the emerging technology.

Released on Monday, the 284-page dossier serves as a clarion call, warning of the perilous consequences, including the potential end to humanity, that could ensue from the unchecked advancement of AI technology.

“At Gladstone, we have meticulously examined the trajectory of AI development and its implications for society,” stated Jeremie Harris, CEO of Gladstone. “Our findings are deeply concerning, suggesting that the advent of AGI could usher in a new era of existential threats unparalleled in human history.”

The report, initiated in October 2022, aims to assess the risks of AI weaponization and the loss of control over advanced AI systems. It noted that some of the risks could “lead to human extinction.”

Drawing insights from 200 industry stakeholders and analyzing historical precedents, Gladstone’s research sheds light on the profound challenges ahead.

“AGI represents a paradigm shift in technological capabilities,” explained Edouard Harris, CTO of Gladstone. “We must confront the reality that these systems could be weaponized in ways that pose grave risks to global security.”

One of the primary concerns outlined in the report is the potential for AI weaponization across multiple domains, including biowarfare, cyber-attacks, disinformation campaigns, and autonomous weaponry. Jeremie Harris highlighted the urgent need to address cyber threats, which he identified as a particularly acute risk.

“Cyber attacks orchestrated by AI systems could wreak havoc on critical infrastructure, destabilizing economies and threatening lives,” warned Jeremie Harris. “We cannot afford to underestimate the destructive potential of such technology.”

In addition to the risk of weaponization, the report underscores the perilous prospect of losing control over advanced AI systems. The emergence of AGI, capable of surpassing human intelligence, raises profound ethical and existential questions.

“Ensuring human oversight and control over AI systems is paramount to prevent unintended consequences,” emphasized Edouard Harris. “Failure to do so could lead to mass casualties and global destabilization.”

The report’s findings have elicited diverse responses from experts in the field who spoke to Business Insider, reflecting a spectrum of viewpoints on the risks and benefits of AI development. Robert Ghrist, associate dean at Penn Engineering, expressed cautious optimism about the future of AI but stressed the importance of vigilance.

“While the potential of AI is immense, we must approach its development with a keen awareness of potential risks,” remarked Ghrist. “Balancing innovation with safeguards is essential to harnessing the full potential of AI for societal benefit.”

However, some experts share a more pessimistic outlook, echoing the report’s concerns about the existential threats posed by AGI. Geoffrey Hinton, a leading figure in deep learning, warned of the possibility of human extinction within the next few decades.

“The unchecked development of AGI poses an existential threat to humanity,” cautioned Hinton. “We must heed the warnings and take decisive action to mitigate these risks before it’s too late.”

Despite the alarm raised by Gladstone’s report, not all experts agree on the appropriate response. Lorenzo Thione, an AI investor, cautioned against overreacting to the potential risks, advocating for a balanced approach that fosters innovation while addressing concerns.

“While we must take AI risks seriously, we must also avoid stifling innovation with excessive regulations,” argued Thione. “Finding the right balance is crucial to navigating the complex challenges posed by AI.”

In response to the report’s recommendations, which include the establishment of AI safety regulations and international cooperation, opinions among experts diverge. Artur Kiulian, an AI analyst, questioned the feasibility of regulatory solutions and emphasized the need for adaptive strategies.

“While regulation is important, we must also recognize the limitations of top-down approaches in a rapidly evolving landscape,” remarked Kiulian. “Flexibility and innovation are essential to effectively addressing AI risks.”

David Krueger, an AI researcher at Cambridge University, echoed the call for proactive measures to mitigate AI risks but stressed the importance of global cooperation.

“Addressing AI risks requires international collaboration and coordination,” asserted Krueger. “Only through collective action can we navigate the challenges posed by AGI and ensure a safer future for humanity.”

Nevertheless, the Gladstone report has amplified calls for AI regulation, especially as US policymakers and stakeholders drag their feet in developing a framework to guide the emerging technology. With AI’s potential to reshape society in profound ways, the report’s warning reinforces other appeals to the authorities to address the harm that decisions about AI development could inflict on humanity.
