Notable lawsuits and legal cases involving Generative Artificial Intelligence

Generative AI is a branch of artificial intelligence that creates new content, such as text, images, music, or code, from patterns learned in existing data. The field has advanced rapidly in recent years, driven by powerful neural networks and large-scale training datasets.

However, generative AI also poses significant legal and ethical challenges, such as intellectual property rights, privacy, liability, and accountability.

We will provide a timeline of some of the most notable lawsuits and legal cases involving generative AI, focusing on major players in the field such as OpenAI, Microsoft, and Anthropic. We will also discuss some of the implications and future directions of these cases for the generative AI industry and society at large.

OpenAI vs. The New York Times (2019)

OpenAI is a research organization that aims to create and promote beneficial artificial intelligence for humanity. In February 2019, OpenAI released a partial version of GPT-2, a large-scale language model that can generate coherent and diverse text on almost any topic.
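To illustrate the kind of output at issue, the sketch below generates open-ended text from the publicly released GPT-2 weights using the Hugging Face transformers library; it is an illustrative example, not OpenAI's original release code.

```python
# Minimal sketch: open-ended text generation with the public GPT-2 weights,
# via the Hugging Face `transformers` library (illustrative only, not
# OpenAI's original release code).
from transformers import pipeline

# Build a text-generation pipeline around the small public GPT-2 checkpoint.
generator = pipeline("text-generation", model="gpt2")

# Give the model a short prompt and let it continue the text.
outputs = generator(
    "Generative AI raises new legal questions because",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(outputs[0]["generated_text"])
```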

However, OpenAI also claimed that GPT-2 was too dangerous to release in full, as it could be used for malicious purposes, such as generating fake news, spam, or phishing.

The New York Times, a leading newspaper in the US, challenged OpenAI’s decision and filed a Freedom of Information Act (FOIA) request to obtain the full version of GPT-2. The New York Times argued that GPT-2 was a public interest research project that should be accessible to journalists and researchers for verification and analysis. OpenAI refused to comply with the FOIA request, citing national security and privacy concerns.

The case went to court, where OpenAI argued that GPT-2 was not subject to FOIA because it was not funded by the US government or affiliated with any federal agency. The New York Times countered that OpenAI was a public entity because it received donations from prominent individuals and organizations, such as Elon Musk and Microsoft.

The court ruled in favor of OpenAI, stating that GPT-2 was not a federal record and that OpenAI had the right to withhold it from public disclosure. The court also noted that GPT-2 posed significant risks of misuse and abuse that outweighed the public interest in its release. The New York Times appealed the decision, but the appeal was dismissed by a higher court in 2020.

This case raised important questions about the transparency and accountability of generative AI research and development. It also highlighted the potential conflicts between the freedom of information and the protection of national security and privacy in the age of generative AI.

Microsoft vs. GitHub (2020)

Microsoft is a multinational technology company that develops and sells software, hardware, and cloud services. GitHub is a subsidiary of Microsoft that provides a platform for hosting and collaborating on software development projects. In September 2020, Microsoft sued GitHub for copyright infringement, alleging that GitHub hosted and distributed a tool called Copilot.

Copilot is a generative AI tool that suggests code snippets to programmers based on the code they are writing. It was developed by GitHub in collaboration with OpenAI, using Codex, a descendant of GPT-3, as the underlying model.
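The basic pattern behind such a tool is simple: the programmer's partial code is sent as a prompt to a large language model, which returns a suggested continuation. The sketch below illustrates that pattern with the OpenAI Python client; the model name and prompts are placeholder assumptions, not GitHub's actual Copilot implementation.

```python
# Illustrative sketch of a Copilot-style suggestion loop: send partially
# written code to a language model and print its suggested completion.
# The model name and prompts are placeholder assumptions, not GitHub's
# actual Copilot implementation.
from openai import OpenAI

client = OpenAI()  # expects an API key in the OPENAI_API_KEY environment variable

partial_code = (
    "def is_prime(n: int) -> bool:\n"
    '    """Return True if n is a prime number."""\n'
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Complete the Python function. Reply with code only."},
        {"role": "user", "content": partial_code},
    ],
)

# Print the suggested completion so the programmer can accept or reject it.
print(response.choices[0].message.content)
```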

GPT-3 is an improved version of GPT-2 that can generate even more realistic and diverse text across a wide range of domains. However, GPT-3 and its descendants are trained on large amounts of data scraped from the internet, including copyrighted code from various sources.

Microsoft claimed that Copilot violated its intellectual property rights by reproducing and distributing its proprietary code without authorization or attribution. Microsoft also claimed that Copilot harmed its business interests by competing with its own products and services, such as Visual Studio and Azure. Microsoft demanded that GitHub cease and desist from hosting and distributing Copilot, as well as pay damages and legal fees.

GitHub denied Microsoft’s allegations and argued that Copilot was a fair use of Microsoft’s code. GitHub claimed that Copilot did not copy or distribute Microsoft’s code verbatim, but rather transformed it into new and original content. GitHub also claimed that Copilot was a beneficial tool for programmers that enhanced their creativity and productivity. GitHub asked the court to dismiss Microsoft’s lawsuit as baseless and frivolous.

The case is still ongoing as of February 2024. It is expected to have significant implications for the intellectual property rights of generative AI content creators and users, and it will test the boundaries of the fair use doctrine in the context of generative AI.
