OpenAI is Reportedly Developing Music Tool Capable of Composing Songs from Text and Audio Prompts

OpenAI is reportedly developing a new generative music tool capable of composing songs and instrumentals from text and audio prompts — another sign of the company’s accelerating expansion beyond conversational AI and into broader creative and consumer markets.

The report, published by The Information, said the model could generate music for videos or add instrumental accompaniment, such as guitar or piano, to existing vocals.

Many believe the project reflects OpenAI’s deepening push into multimedia creation, as it seeks to position itself as an “everything app” — one that integrates, among other things, text, voice, video, and now music — all within a unified ecosystem. The move follows OpenAI’s recent rollout of Sora, its text-to-video model, and its continued development of voice and image generation tools, suggesting a deliberate effort to dominate every creative medium through artificial intelligence.

According to The Information, OpenAI has been collaborating with students from the Juilliard School, one of the world’s most renowned music conservatories, to annotate musical scores. These annotations are being used to train the model to understand rhythm, harmony, and composition structures — a sign that the company is prioritizing accuracy and musical authenticity over mere novelty.

While OpenAI has experimented with generative music models in the past, those earlier prototypes came before the launch of ChatGPT and never reached public release. The current project, however, appears to be part of a broader monetization strategy, as the company grapples with rising operational costs and mounting pressure to generate sustainable revenue.

OpenAI, which has received billions in backing from Microsoft, is spending heavily on computing infrastructure, data acquisition, and AI model training — costs that have ballooned since the introduction of GPT-4 and the ChatGPT Plus subscription service. Despite its soaring valuation, analysts say the company has yet to achieve profitability, and its latest ventures into new industries may reflect a strategic bid to diversify income streams and reduce dependence on corporate licensing deals.

The music tool, once launched, could integrate directly with ChatGPT or Sora, enabling users to generate songs, soundtracks, or musical accompaniment while simultaneously producing lyrics and visuals. Such integration would make OpenAI the first major AI company to offer a seamless, cross-media creative workflow — a potentially transformative move for content creators, filmmakers, and digital artists.

But OpenAI is entering a space already occupied by formidable rivals. Google’s MusicLM and Suno’s AI platform both allow users to generate music from text prompts, while startups like Udio are experimenting with collaborative songwriting tools powered by machine learning. What may set OpenAI apart, analysts say, is its ability to unify multiple creative capabilities under one AI system — a feat no other company has achieved at scale.

Still, the company’s rapid expansion raises familiar concerns about copyright and data ethics. Generative music tools rely heavily on training datasets that often include copyrighted material, drawing criticism from musicians and record labels who argue that such practices amount to unlicensed use of creative works. OpenAI’s collaboration with Juilliard students appears to be an attempt to preempt such criticism by grounding the model’s training data in properly annotated, licensed compositions.

OpenAI has not commented publicly on the project, nor has it confirmed a release date or product format. However, sources believe the tool could be unveiled as early as 2026 as part of OpenAI’s broader roadmap to merge language, vision, and sound into a unified AI interface.

If successful, the new tool could redefine how music is composed, produced, and consumed — and potentially cement OpenAI’s role not just as a leading AI company, but as a foundational platform for the future of digital creativity.
