
Pausing AI Development for 6 Months Makes No Sense – Do Not Sign That Open Letter

It was a noble vision: save the world from an AI apocalypse by pausing advanced AI development for months, until the world has figured out how to make AI safer. In an open letter, technology and business luminaries like Elon Musk and Apple co-founder Steve Wozniak put their names and signatures to a clear declaration: “We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

I do not like the call because it makes no practical sense. How do you pause the development of AI for 6 months just to figure out how to regulate it? Is that a call for US companies or Chinese companies – and how do you make sure no one keeps a small server running during the freeze?

Unmistakably, the world needs action on AI regulation. The UK has drawn the first sword; I expect the EU to ramp up the high voltage and get that done. But doing that must not stop AI development. The world did not pause the development of automobiles just to regulate them. The world did not pause the development of nuclear reactors just to regulate them. What needs to happen is for regulators to move fast and do their jobs, even as activists and all stakeholders push companies to be responsible in how they develop and deploy AI systems.


Companies understand the stakes: if they mess up badly, their missions would be lost. More importantly, the AI systems like ChatGPT and Bard which are triggering these calls are not going to pose existential risks to this world. While they have the capacity to misinform and disinform us, and also cause us to waste money on checkout pages, they are not as harmful as nuclear reactors, at least for now. And if the world did not freeze nuclear science just to write regulations for reactors, we can have consumer AI development and regulation running in parallel.

A better call would have been: regulators, give us a strong regulation within 6 months!

A group of high-profile signatories, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, are asking companies to pump the brakes on “giant AI experiments” until there’s confidence that any risks are manageable and the effects are largely positive. A letter published by the nonprofit Future of Life Institute on Wednesday states that large-scale AI projects “can pose profound risks to society and humanity” if not properly managed. The signatories, who also include well-known AI researchers, are asking for a six-month pause on anything more powerful than GPT-4, adding that the race to develop machine learning is outpacing necessary guardrails. The statement also calls for AI developers to work with policymakers to accelerate effective governance systems, including a new regulatory authority dedicated to AI. (LinkedIn)

Meanwhile, in a new report, Goldman Sachs has noted that AI could displace millions of jobs. The report’s authors, who include Chief Economist Jan Hatzius, said roughly two-thirds of jobs in the U.S. and Europe are exposed to some degree of AI automation, while generative AI could replace up to 25% of current employment – some 300 million full-time jobs.

“Despite significant uncertainty around the potential of generative AI, its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects,” the report reads.

Comment on Feed

Comment 1: I feel Elon and Wozniak have a higher, more futuristic & insightful view of this thing, Prof. It’s not today Elon started talking about responsible AI. AI innovation leadership – when the nucleus of it finds itself in the hands of one man/company (who first makes the most giant leap) – potentially puts the power of the entire world’s population in the hands of one man, one company or a group of tech cartels. We saw it in 2020 when ubiquitous social media tech companies were (unethically) used to subvert the wills, rights & freedoms of people on one political divide, with no consequences. What Facebook, Twitter & YouTube did in 2020 (with the pandemic & the 2020 elections) was nothing short of tyranny & autocracy.

As we (the technology & socioeconomic world) gradually evolve towards singularity, and as AI machines fast become humanized with greater capacity to even “influence” human thoughts & reason, it’s important these machines get into the hands of (really ethical, less power-drunk & controlling) people or companies who will still appreciate the place for human liberties in the future. I may not have many (technical) words for this position, but the future really should be watched closely with these machines (in the hands of some people as industry leaders), and Elon has been alluding to this for years now.

My Response: There is no comparison; we are nobody in the world of Elon Musk. But do not take yours truly for granted. In our space, we hold our own very well (my book in tech won a “Book of the Year” award). And what that means: even in his limitless knowledge, he can be faulted. What he is asking makes no sense. You pause for 6 months in the US while China continues development.

Comment 1 Reply: Was never faulting you, Prof. Not quite. Just was trying to state it from another angle, Elon’s perspective perhaps. I’m not in your industry, and thus don’t have the technical depth to talk much about these things, but as a futurist and researcher, I’ve tried to follow this AI trend & learn what the future could look like and be like (so I like to wade in a little to learn more). If the US has made the first & most inroads in AI innovations until now, does China still have the (first-mover) advantage to surpass the US in this field however they pace themselves?

Was thinking that those (countries/entities) that make the first leap in this strong AI innovation, and lead the industry in it, have a dominant strategic advantage over others and can’t really be outpaced by another country? And yes, I too believe banning strong AI never would work, but was just thinking about the “ethics” of this and hoping that those who lead it would remain (favorable to humanity) – and thus the need for proactive government watch now?

My Response: “If the US has made the first & most inroads in AI innovations until now, does China still have the (first-mover) advantage to surpass the US in this field however they pace themselves?” – it is not the first mover that matters but the first scaler. You can be first and still fail (iPod/Walkman, iPhone/BlackBerry, etc.). Being first is good but that is not the issue; it is the first to scale that matters. If you pause AI in the US for 6 months, China will own everything. In the consumer business, the best AI systems are created by the Chinese, from TikTok to WeChat. The US understands that. So, Musk’s position has no practical application unless he wants the US to lose. That is my point.

Comment 2: I agree with the need for swift regulation for AI; however, pausing development is impractical. Companies need to understand the repercussions of AI and take responsibility, while regulators create strong regulations.

Comment 3: The boom in generative AI and its influence in the near future will be astronomical. I don’t think regulators can predict the future impact of AI well enough to make laws that can contain its actions in the foreseeable future. The best action is to make regulations and policies as new AI capabilities emerge. Therefore, there is no need for any pause. What we are seeing today was predicted decades ago. Even COVID-19 was predicted decades before it struck. If we aren’t ready for AI now, then we should blame ourselves.

My Response: “The best action is to make regulations and policies as new AI capabilities emerge. Therefore no need for any pause.” – nuanced and practical; nice call.


---


1 thought on “Pausing AI Development for 6 Months Makes No Sense – Do Not Sign That Open Letter”

  1. Well, nuclear reactors are not consumer goods, so the comparison cannot fly; automobiles cannot cause mass casualties at scale, which means their risks can more easily be mitigated.

    As for AI, pausing development for some months may not solve anything; rather, we need to come to a sort of agreement on areas AI should never venture into. This is very important.

    Not every tech is good for mass consumption, irrespective of the excitement and promise we think it holds. There are things that are much more sacred than the ephemeral comfort and excitement that only end up violating reason and diminishing humanity.

    What scenario planning has been done on AI, and what safeguards are in place to ensure that we are not headed for a global mental pandemic? It is not so much about what AI can do or help us accomplish, but what happens to our mental health, our purpose, and our essence of existence. Once our psyches are messed up at scale, it’s game over for human greatness.

    The regulators would like to do their job, but it’s not yet clear if they understand the job and its scope; you cannot perform optimally when you are incapacitated. Human essence remains supreme, so if we allow a mere technology to dominate and control us, then we are done.
