AI Could Pose Risk of Human Extinction – Tech Experts Warn

Tech experts and top artificial intelligence (AI) company CEOs have warned that the rise of AI could pose a risk of human extinction, likening its potential effect to that of a nuclear war.

The experts have called on policymakers to implement measures to mitigate the technology's risks, noting that this should be a top priority.

In an open letter released by the Center for AI Safety, signed by more than 350 AI leaders, experts, and engineers, including chief executives from leading AI companies such as OpenAI CEO Sam Altman, Dario Amodei of Anthropic, and Demis Hassabis of Google DeepMind, the signatories highlighted several negative impacts of AI and noted that they must be addressed.


Part of the statement accompanying the letter reads,

“The increasing concern about the potential impacts of AI is reminiscent of early discussions about atomic energy. “We knew the world would not be the same,” J. Robert Oppenheimer once recounted. He later called for international coordination to avoid nuclear war. “We need to be having the conversations that nuclear scientists were having before the creation of the atomic bomb,” said Dan Hendrycks, Director of the Center for AI Safety.

“It’s crucial that the negative impacts of AI that are already being felt across the world are addressed. We must also have the foresight to anticipate the risks posed by more advanced AI systems. “Pandemics were not on the public’s radar before COVID-19. It’s not too early to put guardrails in place and set up institutions so that AI risks don’t catch us off guard,” Hendrycks said.

“As we grapple with immediate AI risks like malicious use, misinformation, and disempowerment, the AI industry and governments around the world need to also seriously confront the risk that future AIs could pose a threat to human existence. Mitigating the risk of extinction from AI will require global action. The world has successfully cooperated to mitigate risks related to nuclear war. The same level of effort is needed to address the dangers posed by future AI systems.”

The letter, signed by several tech and AI experts, comes after Tesla CEO Elon Musk and other notable tech figures earlier urged artificial intelligence labs to pause development of the most advanced systems, warning that AI tools present profound risks to society and humanity.

Recent developments in AI have been mind-blowing, producing tools used for medical diagnostics, legal briefs, and article writing, among others. However, they have also raised fears that the technology could lead to privacy violations, power misinformation campaigns, and create problems with smart machines thinking for themselves.

AI pioneer Geoffrey Hinton had earlier said that AI could pose a more urgent threat to humanity than climate change. He quit his job at Google to warn about the growing dangers from developments in the field, especially following the rollout of the company's chatbot, Bard.

“Right now, what we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning. And given the rate of progress, we expect things to get better quite fast. So we need to worry about that,” he said.

To mitigate the risks of AI, the European Commission is proposing the first-ever legal framework on AI, which addresses the risks of the technology and aims to position Europe to play a leading role globally.

Recently added provisions to the EU’s AI Act would require makers of “foundation” AI models to disclose copyrighted material used to train their systems.

Following the EU’s decision to draft a law governing AI, big tech companies developing AI systems and European national ministries looking to deploy them are seeking to limit the reach of regulators, while civil society groups are pushing for more accountability.

Meanwhile, reports reveal that even after receiving final approval, expected by the end of the year or early 2024 at the latest, the AI Act won’t take immediate effect, as there will be a grace period for companies and organizations to work out how to comply with the new rules.
