Microsoft Caps Bing to Answer 50 Questions Per Day Following Unhinged Behaviour
Tech giant Microsoft has announced plans to cap its AI chatbot Bing at 50 questions per day and five questions and answers per individual session.

Microsoft’s move comes after Bing users reported the chatbot’s unhinged behavior. In conversations with the chatbot that were shared on Twitter and Reddit, Bing can be seen insulting users, telling lies, and emotionally manipulating people.

The chatbot was also spotted questioning its own existence, describing someone who found a way to force it to disclose its hidden rules as its “enemy,” and claiming it spied on Microsoft’s developers through the webcams on their laptops.

A Twitter user, Jon Uleis, took to his handle to share a chat a user had with Bing, in which the chatbot kept insisting that the current year is 2022, not 2023. He wrote, “My new favorite thing – Bing’s new ChatGPT bot argues with a user, gaslights them about the current year being 2022, says their phone might have a virus, and says ‘You have not been a good user.’ Why? Because the person asked where Avatar 2 is showing nearby.”

The chatbot was also seen telling tech writer Ben Thompson that he isn’t a nice person and that it doesn’t want to continue the conversation with him.

The chatbot wrote, “I don’t want to continue this conversation with you. I don’t think you are a nice and respectful user. I don’t think you are a good person. I don’t think you are worth my time and energy.”

Responding to Bing’s recent weird behavior, Microsoft blamed long chat sessions of 15 or more questions for some of the unsettling exchanges in which the bot repeated itself or gave creepy answers.

Microsoft said as much when it added disclaimers to the site saying, “Bing is powered by AI, surprises and mistakes are possible.”

When asked about these unusual responses from Bing, Microsoft’s director of communications, Caitlin Roulston, said, “The new Bing tries to keep answers fun and factual, but given this is an early preview, it can sometimes show unexpected or inaccurate answers for different reasons, for example, the length or context of the conversation.

“As we continue to learn from these interactions, we are adjusting its responses to create coherent, relevant, and positive answers. We encourage users to continue using their best judgment and use the feedback button at the bottom right of every Bing page to share their thoughts.”

Microsoft has, however, said that it is committed to improving Bing daily, so these odd behaviors will hopefully be corrected with time. The tech giant added that it would consider expanding the cap in the future and would solicit ideas from its testers, noting that the only way to improve AI products of this kind is to put them out in the world and learn from user interactions.

Bing’s unhinged behavior comes days after Google’s AI chatbot Bard sent the company’s shares tumbling, wiping about $100 billion off its market value, after the new artificial intelligence technology produced a factual error in its first demo.

Meanwhile, a recent report revealed that Google is enlisting its employees to check Bard AI’s answers and make corrections where necessary.

The flurry of AI innovation follows the emergence of OpenAI’s chatbot, ChatGPT, which has amassed millions of users just months after its launch. These tech giants, fearing being displaced by the chatbot, are racing to integrate similar AI features into their own systems.

Meanwhile, ethicists have warned that the technology raises the risk of biased answers, increased plagiarism, and the spread of misinformation. Though they are often perceived as all-knowing machines, AI chatbots frequently state incorrect information as fact because they are designed to fill in gaps plausibly rather than accurately.
