Coinbase CEO Brian Armstrong Proposes That AI Should Not be Regulated But Decentralized


Coinbase CEO Brian Armstrong has recently proposed that Artificial Intelligence (AI) should not be regulated but rather decentralized.

Armstrong, who posted on the X platform (formerly Twitter), stated that regulating AI would hinder its progress, arguing that regulation has unintended consequences and can kill competition and innovation, despite the best intentions.

He further noted that just as other innovations have enjoyed freedom from regulation, which spurred unhindered progress, the same should apply to AI so that innovations can emerge.


In his words,

“Count me as someone who believes AI should not be regulated. We need to make progress on it as fast as possible for many reasons (including national security).

“And the track record on regulation is that it has unintended consequences and kills competition/innovation, despite best intentions. We’ve enjoyed a golden age of innovation on software and the internet largely due to it not being regulated. AI should do the same. The best protection is to decentralize it and open source it to let the cat out of the bag”.

Armstrong’s proposition comes against widespread calls across the globe for Artificial Intelligence to be regulated to mitigate its negative effects.

Calls for AI regulation have continued to intensify as AI becomes more integrated into new technologies and various aspects of society. The need for AI regulation is driven by concerns related to ethics, safety, accountability, and the potential societal impact of AI systems.

Tech leaders such as Tesla and X CEO Elon Musk, Google CEO Sundar Pichai, and OpenAI CEO Sam Altman, among other international figures, have all called for the regulation of AI.

Also, earlier this month, the President of the European Commission, Ursula von der Leyen, called on the European Union to take the lead in developing a global regulatory regime for AI similar to the Intergovernmental Panel on Climate Change.

The goal is to foster safe and responsible AI development by pulling together the best minds in government, commerce, science, and other circles.

This was followed by U.S. President Joe Biden’s address at the United Nations, in which he pledged to work with world leaders to harness the power of Artificial Intelligence for good while protecting citizens from its most profound risks.

Also, the United Nations Educational, Scientific and Cultural Organization (UNESCO) has called on all countries of the world to fully implement its Recommendation on the Ethics of Artificial Intelligence, which was approved unanimously by all member states in 2021.

The framework lists a broad range of values and principles to help guide the development and implementation of AI, and it also provides a readiness assessment tool to help regulators determine whether users have the skills and competence to properly utilize the AI-driven resources at their disposal.

It also calls for periodic reporting by regulatory authorities, detailing progress in their states’ governance of AI.

These leaders and institutions have noted that while AI has enormous potential to transform society and industries across the globe, there are also considerable dangers to be wary of.

Check Out Some of the Dangers That Have Intensified the Call for AI Regulation

1. Ethical Concerns: AI can raise ethical questions, especially when it comes to decision-making processes and potential biases in AI algorithms. There are concerns about AI systems making unfair or discriminatory decisions, which has prompted calls for regulations to ensure fairness, transparency, and accountability.

2. Algorithmic Bias: AI systems can inadvertently perpetuate biases present in their training data. This can result in discriminatory outcomes in areas like hiring, lending, and criminal justice. Regulation is seen as a way to address and mitigate algorithmic bias.

3. Privacy: The use of AI in data analysis and surveillance raises concerns about privacy violations. Regulations, such as data protection laws (e.g., GDPR in Europe), aim to safeguard individuals’ personal information and limit the misuse of AI for surveillance purposes.

4. Safety: AI systems, particularly in sectors like autonomous vehicles and healthcare, have the potential to impact human safety. Regulations are needed to establish safety standards and requirements for AI systems to prevent accidents and harm.

5. Accountability: Determining liability and responsibility in cases where AI systems make decisions that result in harm can be complex. Regulation may clarify the legal framework for holding individuals or organizations accountable for AI-related actions.

6. Bias in AI Research: Some researchers and policymakers argue that biases in AI research funding and development priorities need to be addressed to ensure that AI benefits all of humanity.

While Coinbase CEO Brian Armstrong is against the regulation of AI, which he believes would hinder innovation, it is widely believed that the goal of AI regulation is to strike a balance between promoting innovation and protecting the interests and well-being of individuals and society globally.
