Moonshot K2.5 AI Raises the Stakes as China’s Model Makers Brace for DeepSeek’s Next Move

China’s artificial intelligence race is entering a sharper, more consequential phase, and Alibaba-backed Moonshot AI is signaling that it intends to remain among the front-runners.

With the release of an upgraded version of its flagship Kimi model, the company is not just shipping new features. It is making a statement about where it believes the next phase of competition will be fought: multimodality, developer tools, capital scale, and the ability to survive an increasingly unforgiving consolidation cycle.

Moonshot said its latest model, K2.5, can process text, images, and video simultaneously from a single prompt, placing it firmly in the category of so-called omni models. These systems, designed to reason across multiple data types rather than treating them as separate tasks, are becoming the industry standard globally. OpenAI and Google have already moved in this direction, and Chinese developers are now racing to ensure they are not structurally behind as applications shift from chatbots toward agents, copilots, and real-world automation.

The upgrade lands at a strategically sensitive moment. Over the past several weeks, China’s leading AI firms have rolled out a wave of product announcements, research papers, and funding news. The timing is widely interpreted as pre-emptive positioning ahead of an expected major release from DeepSeek, the research-driven startup whose R1 model shook the domestic market earlier this year and reignited investor enthusiasm for Chinese large language models.

DeepSeek has kept details of its next release tightly controlled, but its signals have been deliberate. Its research arm has published technical papers authored by senior staff, including co-founder Liang Wenfeng, and released code on GitHub, a move often used to showcase confidence in the underlying architecture.

Moonshot’s K2.5 is designed to show technical momentum. The company says the model outperforms open-source peers across several benchmarks and has narrowed the gap with top-tier proprietary models in coding tasks, an area that has become a key differentiator for enterprise adoption. Coding performance is no longer a niche metric. As companies experiment with AI agents that write, debug, and deploy software, models that fall short here risk being sidelined.

To reinforce that point, Moonshot is rolling out an automated coding tool intended to compete with Anthropic’s Claude Code, one of the most widely used AI-assisted programming tools in global markets. This move reflects a broader shift among Chinese model makers, who are increasingly targeting developers rather than focusing solely on consumer-facing chatbots. Developer ecosystems create lock-in, recurring revenue, and data feedback loops, all of which are crucial for long-term viability.

The technical push is closely intertwined with capital strategy. Moonshot raised $500 million last month from investors including Alibaba and IDG Capital, at a post-money valuation of $4.3 billion, according to people familiar with the matter. Those same sources said the company has since initiated additional fundraising discussions, seeking a valuation of up to $5 billion to meet strong investor demand.

That appetite has been fueled by a noticeable change in market sentiment. After a period of caution driven by compute costs, regulatory uncertainty, and fierce competition, investors are again backing the idea that a smaller number of Chinese AI champions will emerge with defensible scale. Recent initial public offerings by rivals Zhipu and MiniMax Group in Hong Kong, which together raised more than $1 billion, have helped reopen exit pathways and provided valuation benchmarks for late-stage private firms.

Moonshot, Zhipu, and MiniMax now form a de facto top tier among China’s independent large model developers, operating alongside technology giants such as Alibaba and Tencent. This is a far cry from the earlier phase of the market, once described as the “War of One Hundred Models,” when dozens of teams competed for attention. DeepSeek’s breakout success earlier this year accelerated a shakeout, leaving many smaller players unable to fund the compute, talent, and data required to keep pace.

The arms race is not limited to model releases. It increasingly spans the entire AI stack, from chips and infrastructure to applications. Zhipu’s recent launch of GLM-Image, which it says is the first domestic image generation model fully trained on Chinese chips, speaks directly to concerns about U.S. export controls and long-term supply security. Alibaba has moved aggressively as well, unveiling a reasoning-focused version of Qwen3-Max and, through its fintech affiliate Ant Group, a spatial perception model for robotics developed by subsidiary Robbyant.

Against this backdrop, Moonshot’s positioning is both ambitious and precarious. Founded by Yang Zhilin, a former Tsinghua University professor with prior experience at Meta and Google, the company has earned respect for research quality. However, it trails some peers in commercialization. While Moonshot offers tiered subscriptions for its chatbot and licenses its technology to enterprise customers, analysts note that rivals have been quicker to translate technical capability into revenue.

That tension reflects a broader reality now confronting China’s AI sector. The era when rapid model iteration alone could justify sky-high valuations is fading. With capital markets reopening and competition intensifying, companies are under pressure to demonstrate credible paths to sustainable business models. Software subscriptions, enterprise deployments, developer platforms, and industry-specific applications are all becoming critical proof points.

Moonshot’s K2.5 release, coupled with its fundraising push, suggests the company understands this shift. By emphasizing multimodal capability, coding performance, and developer tools, it is aligning itself with where demand is likely to concentrate as AI moves deeper into production environments rather than remaining a novelty.

As anticipation builds around DeepSeek’s next release, the competitive dynamics are likely to tighten further. The coming months may determine which Chinese AI firms can combine technical excellence, financial backing, and commercial execution at sufficient scale. Moonshot’s latest move ensures it remains in the race, but the pace of escalation suggests that survival, not just leadership, is now at stake.

PayPal’s Return to Nigeria: Will Restitution of Frozen Funds Be on the Horizon?

After years of absence from the Nigerian market, PayPal has officially returned through a strategic partnership with Paga, a leading Nigerian fintech.

This partnership is expected to streamline cross-border payments, enabling Nigerians to send and receive money globally with greater ease. Users can now link their PayPal Nigeria accounts directly within the Paga app.

Once linked, the PayPal account functions as usual, but with the added ability to receive payments from more than 200 countries. Funds received through PayPal can be withdrawn at any time directly from within the Paga app, providing seamless access to international payments for individuals and businesses alike.

While the announcement marks a major milestone for digital payments in the country, reactions among Nigerians have been mixed. For some, the partnership is a welcome development for Nigeria’s expanding digital economy.

However, for several others, the excitement of PayPal’s return is tempered by frustration. Some users recall being unable to access funds previously held in their accounts before PayPal exited the market.

Recall that PayPal effectively reduced Nigeria to a limited service around 2004, placing Nigerian accounts on a “send only” status that prevented users from receiving funds or withdrawing locally, citing fraud and compliance concerns, a restriction that lasted for nearly two decades.

These affected individuals are now seeking restitution, hoping that renewed operations in Nigeria will allow them to reclaim funds that had been effectively frozen for years.

Several Nigerians took to X (formerly Twitter) to argue that PayPal’s return does not undo past grievances, especially concerning withheld funds.

@Luciduche wrote,

“PayPal cannot seize people’s funds, forcefully deactivate accounts, effect NO refunds, and swoop in decades later to access the same Nigerian market they treated like garbage.”

@OlamideElegbe wrote,

“PayPal has to release the funds they have seized for no reason back to the customers. It’s that simple.”

@odomstanley25 wrote,

“Nigerians should avoid PayPal in extension paga, the funds they seized from Africans in extension Nigerians should be returned, and an explanation on why they existed the African market”.

@Josylad wrote,

“We have outgrown paypal though. My funds are still locked in one account I created back then, me and PayPal have no business.”

Amidst the demands for withheld funds, the lingering question remains: will PayPal address the backlog of inaccessible funds and ensure that affected users are reimbursed?

The question extends beyond simple reimbursement; it reflects a deeper issue of trust and credibility. Nigerians want to know whether PayPal acknowledges its previous shortcomings, and whether it has implemented mechanisms to prevent similar disruptions in the future.

Several analysts suggest that while the PayPal-Paga partnership opens significant opportunities for Nigeria’s digital economy, the company’s ability to resolve past financial bottlenecks will be critical in winning back trust among its users. Recent observations reveal that users are now scrutinizing whether PayPal provides a reliable, transparent, and supportive experience.

For PayPal, the road ahead is clear: successful reintegration into Nigeria depends on delivering dependable service, clear communication, and proactive resolution of issues. Only then can the company rebuild trust and secure a meaningful presence in one of Africa’s fastest-growing digital economies.

Corning Belongs to the Same Tech Species as Nvidia

In the semiconductor universe, one law governs the progression of modern technology: before software can advance, hardware must evolve. Put simply, before software “eats” the world, someone in the hardware domain must first cook the meal. That is the physics of the digital economy.

Nvidia understood that principle early and went to work, building advanced computation systems capable of crunching the massive datasets of the AI age. Those systems, GPUs, begin life as wafers, thin slabs of purified silicon. From that “sand”, nations and companies build the engines of the modern world. Nvidia took care of its layer in the stack.

But after the number-crunching comes the piping. Data must move. Intelligence must flow. And to achieve that, the world needs the invisible arteries of the digital universe: fiber-optic cables. That is where Corning, a 175-year-old glassmaker, enters the stage. In fact, Meta just committed up to $6 billion through 2030 to buy Corning fiber to wire its next generation AI data centers. As Corning’s CEO noted, the AI arms race is not only about chips or algorithms; it is also about the plumbing, the glass, the fiber, the conduits that transport light at the speed of imagination.

Yet, even with chips and fiber, a decisive factor remains: energy. And that is where Africa faces its biggest challenge. If a nation struggles to power an electric iron, it certainly cannot energize hyperscale AI data centers. Like the massive requirements for clean rooms in chip fabrication, running modern data centers demands colossal resources. Without deliberate strategy and public-sector participation, the continent will remain a consumer, not a producer, in this new acceleration age.

That is why I argue that African governments must begin to see data centers as national platforms, like highways, airports, and teaching hospitals. Without them, the promise of AI will remain distant. With them, Africa can participate, not in the periphery, but in the upstream productive layers of this era.

Corning may not become the next Nvidia for investors, especially with Chinese competitors closing in, but make no mistake: in the great AI race, it sits at the heart of the infrastructure that will power the future. It belongs to the same tech species as Nvidia: catalytic hardware systems upon which the acceleration age will operate.

Meta To Pay Corning Up To $6bn For Fiber-Optic Cables In AI Data Centers, Showing How The AI Boom Is Rewiring The Data Center Economy

Meta has found a new and unlikely partner as it races to build out the vast data center footprint needed to compete in artificial intelligence: Corning, a 175-year-old American glassmaker whose technology now sits at the heart of the AI infrastructure boom.

Meta has committed to paying Corning as much as $6 billion through 2030 for fiber-optic cable to wire its next generation of AI data centers, Corning CEO Wendell Weeks told CNBC. The agreement underscores how the AI arms race is no longer just about advanced chips and software models, but about the physical plumbing that connects, powers, and cools them.

The deal comes as Meta accelerates one of the most aggressive infrastructure expansions in corporate history. After unsettling investors in 2025 with soaring AI spending and limited clarity on near-term returns, the company pledged to invest as much as $600 billion in the United States by 2028 on data centers and related infrastructure. Of the roughly 30 facilities Meta plans to build, 26 will be located in the U.S., reflecting a broader push by Big Tech to secure domestic supply chains for strategically sensitive technologies.

“We want to have a domestic supply chain that’s available to support that,” said Joel Kaplan, Meta’s chief global affairs officer, pointing to rising geopolitical concerns and fears that China could outpace the U.S. in AI if investment and policy decisions fall short.

Two of Meta’s largest projects—the one-gigawatt Prometheus data center in New Albany, Ohio, and the massive five-gigawatt Hyperion facility in Richland Parish, Louisiana—will both rely on Corning’s fiber under the new agreement. The Louisiana site alone is expected to require around 8 million miles of optical fiber, highlighting the staggering scale of modern AI infrastructure.

The Meta deal is emblematic of a transformation at Corning that has been years in the making. Once synonymous with boom-and-bust cycles during the dot-com era, the company has emerged as one of the quiet beneficiaries of the AI buildout. Its optical communications unit is now its largest and fastest-growing business, with revenue in the segment jumping 33% in the third quarter to $1.65 billion. Enterprise optical sales surged 58%, driven by demand tied directly to generative AI deployments.

Shares of Corning have risen more than 75% over the past year, a sharp contrast to its painful history two decades ago. During the late-1990s internet boom, demand for fiber propelled Corning’s stock nearly eightfold, only for it to lose more than 90% of its value after the bubble burst. Weeks says that experience continues to shape the company’s approach.

“What we learned then was that it wasn’t enough to do great innovations,” he said, emphasizing the need for diversified, cash-generating businesses that can absorb cyclical shocks. Corning still sells glass for smartphones, automobiles, televisions, and pharmaceutical vials, providing a buffer if AI-driven demand eventually cools.

Skepticism about the current AI spending wave is growing. Industry announcements in 2025 alone tallied more than $1 trillion in planned compute investments, prompting warnings from some analysts that a new bubble could be forming. Weeks, however, argues that fiber demand has historically grown at about 7% annually and that excess capacity, if it emerges, will ultimately find productive use.

He also voiced confidence in Meta’s long-term prospects, saying that technical execution and a willingness to commit capital remain decisive advantages in AI.

“Compute matters,” he said.

The technological case for fiber is central to Corning’s optimism. Unlike copper cables, which transmit data as electrical signals, fiber-optic cables send information as pulses of light through strands of glass, using far less energy and achieving much higher speeds. As power constraints become one of the defining bottlenecks for AI data centers, that efficiency gap is becoming harder to ignore.

“Moving photons is between five and 20 times lower power usage than moving electrons,” Weeks said.

As a result, fiber is being pushed closer and closer to the compute layer inside data centers.
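
Weeks’s five-to-twenty-times figure lends itself to a rough back-of-envelope sketch. The per-link wattage and link count below are purely illustrative assumptions, not numbers from the article; only the 5x and 20x factors come from the quote:

```python
# Back-of-envelope estimate of power saved by replacing copper
# interconnects with fiber, using the 5x-20x efficiency range quoted
# by Corning's CEO. The 10 W copper figure and 1,000-link rack are
# illustrative assumptions, not figures from the article.
COPPER_LINK_WATTS = 10.0  # assumed draw of one copper interconnect
LINKS_PER_RACK = 1_000    # assumed interconnect count in a dense AI rack

def fiber_savings_watts(copper_watts: float, links: int, factor: float) -> float:
    """Watts saved per rack if each fiber link uses `factor`x less power."""
    copper_total = copper_watts * links
    fiber_total = copper_total / factor
    return copper_total - fiber_total

# At the conservative (5x) and aggressive (20x) ends of the quoted range:
low = fiber_savings_watts(COPPER_LINK_WATTS, LINKS_PER_RACK, 5)    # 8,000 W saved
high = fiber_savings_watts(COPPER_LINK_WATTS, LINKS_PER_RACK, 20)  # 9,500 W saved
print(f"Savings per rack: {low:,.0f} W to {high:,.0f} W")
```

Even at the conservative end, the savings compound across thousands of racks, which is why the efficiency gap becomes decisive as power emerges as the binding constraint.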

AI workloads are also structurally different from traditional cloud computing. Training and running large language models require dense, high-bandwidth connections between thousands of chips, creating what Weeks described as a “whole separate network” that mirrors neural connections in the human brain.

To meet that need, Corning developed a new fiber product called Contour, designed specifically for AI. The cable doubles the number of fiber strands that can fit in a standard conduit and dramatically reduces connector complexity.

Development of these AI-specific products began more than five years ago, well before ChatGPT’s public debut, after conversations with early leaders in generative AI who warned that compute requirements would scale far beyond existing assumptions. Today, Corning says it has manufactured more than 1.3 billion miles of optical fiber, yet demand continues to outpace supply.

The next frontier may be inside the servers themselves. Copper still dominates connections within server racks, including those housing Nvidia’s graphics processors. But as racks begin to hold hundreds of GPUs, Weeks says the shift to fiber is unavoidable on cost and energy grounds.

At that scale, he said, “fiber optics become much more economical and much more power efficient.”

Beyond the headline figure, the Corning partnership highlights how Meta’s AI ambitions extend deep into the industrial supply chain, tying Silicon Valley’s future to factories in places like North Carolina, Ohio, and Louisiana. For Corning, it marks another reinvention, one that positions a 19th-century glassmaker as a foundational supplier to the AI economy of the 21st century.

Micron’s $24 Billion Singapore Bet Signals a Prolonged AI-Driven Memory Crunch

Micron Technology’s decision to pour $24 billion into a new memory chip manufacturing plant in Singapore is more than a routine capacity expansion. It is seen as a statement about how deeply artificial intelligence has reshaped the semiconductor landscape and how long the resulting supply pressures are likely to last.

The U.S. memory chipmaker said it plans to build an advanced wafer fabrication facility in Singapore over the next decade, with wafer output expected to begin in the second half of 2028. The move, first reported by Reuters, comes as chipmakers scramble to respond to an acute global shortage of memory chips, a shortage increasingly defined not by consumer electronics cycles but by the relentless growth of AI infrastructure.

For years, memory demand rose and fell with smartphones, PCs, and data storage upgrades. That pattern has shifted. Today, AI models, particularly those deployed for inference and emerging autonomous “AI agent” systems, consume enormous volumes of high-performance memory. From cloud data centers to enterprise servers, memory has become a binding constraint, often determining how fast new AI services can scale.

Micron said the Singapore investment will focus on NAND flash memory, which is seeing renewed demand as AI workloads generate and process massive amounts of data. The planned facility will include a cleanroom space of more than 700,000 square feet, underscoring the scale and long-term nature of the project.

“The new investment will help us meet growing market demand for NAND memory chips, fueled by the rise of AI and data-centric applications,” the company said.

Singapore already plays a central role in Micron’s manufacturing network. About 98% of the company’s flash memory chips are produced there, making the city-state one of its most strategically important locations globally. Beyond NAND, Micron is also building a separate $7 billion advanced packaging plant in Singapore for high-bandwidth memory, or HBM, which is used in AI accelerators. That facility is due to start production in 2027 and is expected to feed directly into the fast-growing market for AI chips.

HBM has become a critical bottleneck in the AI supply chain. While much public attention has focused on graphics processing units, the performance of AI systems increasingly depends on how quickly data can move between processors and memory. As a result, memory packaging and integration are now just as strategically important as chip design itself.

Industry analysts say the supply imbalance in memory markets is unlikely to ease quickly. Some expect tight conditions to persist through late 2027, even as Micron and its rivals accelerate expansion plans. South Korea’s Samsung Electronics and SK Hynix are both advancing timelines for new production lines, but demand continues to outrun supply.

TrendForce analyst Bryan Ao said pricing pressure is already building across enterprise storage markets. He expects contract prices for enterprise solid-state drives to rise by 55% to 60% as customers rush to secure supply. According to Ao, demand for high-performance storage equipment has grown much faster than anticipated, driven by the expansion of AI inference applications. He added that major North American cloud service providers have been pulling forward orders since the end of last year to position themselves for opportunities emerging in the AI agent market.
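
The projected 55% to 60% increase is easy to translate into contract terms. The base price below is an arbitrary illustrative unit, not a figure from the article; only the percentage range comes from the analyst’s forecast:

```python
# Projecting enterprise SSD contract prices under the 55%-60% rise
# forecast by TrendForce's analyst. The base price of 100 is an
# illustrative unit, not a figure from the article.
def projected_price(base: float, pct_rise: float) -> float:
    """Contract price after a percentage increase."""
    return base * (1 + pct_rise / 100)

base_price = 100.0  # assumed current contract price, arbitrary units
low = projected_price(base_price, 55)
high = projected_price(base_price, 60)
print(f"Projected contract price range: {low:.1f} to {high:.1f}")
```

In other words, a buyer locking in supply today would expect to pay more than half again the current contract price within the forecast window.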

This behavior highlights a broader shift in how cloud and enterprise customers think about infrastructure. Rather than buying capacity reactively, many are now locking in long-term supply, anticipating sustained growth in AI workloads and fearing future shortages.

Micron’s expansion also reflects competitive pressures within the memory industry. TrendForce data shows the company ranked as the fourth-largest flash memory chip supplier in the third quarter of 2025, with a 13% market share. While Samsung and SK Hynix remain dominant players, Micron has been working to strengthen its position in higher-margin segments tied to data centers and AI systems.

The Singapore project is part of a wider push. Last week, Micron said it was in talks to acquire a fabrication site from Taiwan’s Powerchip for $1.8 billion, a move that would increase its DRAM wafer output. DRAM, like NAND, is facing strong demand from AI servers and advanced computing platforms. At the same time, SK Hynix has said it plans to bring forward the opening of a new factory by three months and begin operating another new plant as early as February.

What sets this cycle apart is the structural nature of demand. AI-driven memory consumption is not tied to seasonal consumer upgrades and does not pause during economic slowdowns in the same way as discretionary electronics spending can. Instead, it is embedded in cloud services, enterprise software, and automation tools that are becoming core to business operations.

By committing $24 billion with a production horizon stretching to 2028, Micron is effectively betting that AI-driven demand will remain strong well into the next decade. The investment suggests the company sees today’s shortage not as a temporary dislocation but as part of a longer-term reordering of the semiconductor market, one in which memory sits at the center of the AI economy.

The implications for customers are clear: supply constraints are likely to persist, prices are rising, and long-term contracts are becoming the norm rather than the exception. The race is now on for Micron and its rivals to build capacity fast enough to keep up, knowing that the next phase of AI growth may be limited not by computing power, but by memory.