American entrepreneur and OpenAI CEO Sam Altman has warned about the danger of Artificial Intelligence (AI), stating that it comes with real risks that will reshape society.
Altman, whose company developed ChatGPT, the AI chatbot currently dominating public attention, emphasized the need for regulators and society to be actively involved with the technology to guard against potentially negative consequences for humanity.
He expressed concern that as AI technology advances rapidly, it might be used for large-scale disinformation.
In his words, “We’ve got to be careful here. I think people should be happy that we are a little bit scared of this. I’m particularly worried that these models could be used for large-scale disinformation. Now that they’re getting better at writing computer code, they could be used for offensive cyber-attacks.”
He noted, however, that despite the dangers AI technology might pose, it could be the greatest technology humanity has yet developed. Altman's warning comes after his company OpenAI released GPT-4, the latest version of its AI language model, less than four months after the original version was released and became the fastest-growing consumer application in history.
Speaking in an interview, he stated that although the new version is not perfect, it scored in the 90th percentile on the US bar exam and achieved a near-perfect score on the high school SAT math test. It can also write computer code in most programming languages. He added that the large multimodal model can solve difficult problems with greater accuracy, thanks to its broader general knowledge and problem-solving abilities.
On the dangers of artificial intelligence, Tesla and Twitter CEO Elon Musk has also repeatedly issued warnings. Speaking at a tech conference in 2018, Musk said that AI, or artificial general intelligence (AGI), is more dangerous than a nuclear weapon, and called for a regulatory body to oversee the development of superintelligence.
Musk worries that AI’s development will outpace humans’ ability to manage it safely. “There is no regulatory oversight of AI, which is a major problem. I’ve been calling for AI safety regulation for over a decade!” Musk tweeted in December last year. He also voiced concern that Microsoft, which hosts ChatGPT on its Bing search engine, had disbanded its ethics oversight division.
“Compared to AI, progress with Neuralink will be slow and easy to assess, as there is a large regulatory apparatus approving medical devices. There is no regulatory oversight of AI, which is a *major* problem. I’ve been calling for AI safety regulation for over a decade!” — Elon Musk (@elonmusk), December 1, 2022
As AI becomes more powerful and widespread, the voices warning against its potential dangers have continued to grow louder. The tech community has long debated the threats posed by artificial intelligence. The automation of jobs, the spread of fake news, and a dangerous arms race in AI-powered weaponry have been cited as some of the biggest dangers posed by AI.
ChatGPT Will Eliminate Jobs
Sam Altman also admitted that ChatGPT could take many jobs off the market. He said this in an interview with ABC News on Thursday, where he also admitted that he is “a little bit scared” of the AI-powered chatbot.
ChatGPT has become a darling of the corporate world since its launch late last year, with companies such as Microsoft incorporating the AI language model into some of their services. This is because of the AI’s efficacy in providing human-like responses to queries.
With its capability to solve complex tests and write code and essays, ChatGPT has seen wide adoption, racking up more than 100 million users in less than three months after its launch.
Earlier this week, OpenAI announced the release of GPT-4, which it said exhibits human-level performance on various benchmarks. The company said the improved version can solve difficult problems with greater accuracy – a claim many who have used it have attested to.
“GPT-4 is more creative and collaborative than ever before. It can generate, edit, and iterate with users on creative and technical writing tasks, such as composing songs, writing screenplays, or learning a user’s writing style,” the company said.
The improved performance of GPT-4 has fueled concern that ChatGPT will eliminate many jobs. But Altman said that although the chatbot could replace many jobs, it could also lead to “much better ones”.
“The reason to develop AI at all, in terms of impact on our lives and improving our lives and upside, this will be the greatest technology humanity has yet developed,” he said.
GPT-4 outperforms ChatGPT by scoring in higher approximate percentiles among test-takers, according to OpenAI.
Altman said on Tuesday that it can pass the bar exam for lawyers and is capable of scoring “a 5 on several AP exams”.
The OpenAI executive is not the only one to have expressed fear of the capabilities of artificial intelligence. Tesla and SpaceX CEO Elon Musk, who also co-founded OpenAI, has warned that it is one of the biggest threats to civilization, asking the government to step in with regulation.
Altman told ABC that he is in regular contact with government officials, adding that regulators and society should be involved with ChatGPT’s rollout. It is hoped that the government’s involvement will help address concerns arising from its use.
In several tweets last month, the 37-year-old called for regulation, saying society needs time to adjust to something so big and warning that the world may not be “that far from potentially scary” artificial intelligence.