
Samsung Challenges Staff to Use AI Within Company Guidelines
Tech and electronics giant Samsung has restricted employees' use of artificial intelligence tools after discovering that they were being misused.

In a company-wide memo, it expressed concern about vital data being shared on AI platforms and ending up in the hands of other users. The new policy comes after reports revealed that some of the company's employees had uploaded sensitive internal source code to ChatGPT last month. Samsung is therefore restricting the use of such AI-powered tools to prevent further sensitive information from being shared on these platforms.

Samsung wrote,


“Interest in generative AI platforms such as ChatGPT has been growing internally and externally. While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI.

“We ask that you diligently adhere to our security guidelines and failure to do so may result in a breach or compromise of company information resulting in disciplinary action up to and including termination of employment”.

Samsung further warned its employees to take precautions when using AI tools like ChatGPT, Bard, or other AI-powered tools, advising them to desist from entering any personal or company-related information into such services.

In a company-wide survey conducted last month, 65% of respondents said they were concerned about security risks when using generative AI services. This is a growing concern, since data shared with AI chatbots like ChatGPT is usually stored on servers owned by the companies that operate the services, such as OpenAI, Microsoft, and Google, making it difficult to retrieve the data or delete it permanently.

The company is therefore reviewing its security measures to create a secure environment in which employees can use AI. It is also worth noting that the electronics giant is developing its own internal AI tools for translation, document summarization, and software development, and is exploring ways to block the uploading of sensitive information to external services.

Samsung joins other top companies such as Citigroup, Goldman Sachs, and JPMorgan Chase in restricting employees from using ChatGPT over concerns about third-party software accessing sensitive information.

Meanwhile, ChatGPT maker OpenAI last month previewed a business plan for ChatGPT and introduced new privacy controls. The company announced that users can now turn off chat history in ChatGPT, a privacy feature the tool had not offered before. To monitor for abuse, the company said unsaved chats will be retained for 30 days before being permanently deleted.

OpenAI also revealed plans to introduce a new subscription tier for ChatGPT tailored to the needs of enterprise customers. Called "ChatGPT Business", the forthcoming offering is described by OpenAI as being for professionals who need more control over their data, as well as enterprises seeking to manage their end users. It is an interesting decision from the company, and one likely to appeal to users who prioritize security over the best possible user experience.
