“How Can We Trust You?” China Demands Proof of Security From Nvidia Over AI Chips


China has asked U.S. chipmaker Nvidia to “comply with requests and provide convincing proof of security,” in a dramatic development that threatens to upend recent progress in U.S.-China tech trade.

The Cyberspace Administration of China (CAC), the country’s powerful internet regulator, raised what it called “serious security issues” tied to Nvidia’s H20 chips, just days after the United States approved their export to China.

The chip in question, the H20, is a redesigned version of Nvidia’s high-end AI hardware, built specifically to comply with Washington’s tightened export restrictions aimed at limiting China’s access to advanced semiconductor technology. After months of regulatory hurdles, the U.S. recently gave the green light, allowing Nvidia to proceed with plans to ship the chip to Chinese clients.

The approval was viewed as a rare opening in an increasingly closed-off market and a strategic win for Nvidia, which has been caught in the crossfire of U.S.-China tech rivalry.

But in a move that has raised eyebrows across the industry, Beijing suddenly flipped the script. On Thursday, the CAC said U.S. artificial intelligence experts had discovered hidden “back doors” in Nvidia’s chips, including built-in location tracking and remote shutdown functionalities. Without citing the source of these claims or providing any Chinese-led verification, the CAC said the alleged capabilities posed a threat to national security.

“The safety of computing hardware relates to the core of national security,” the CAC said in a statement. The regulator demanded that Nvidia provide detailed explanations and technical documentation to address these “serious concerns.”

The intervention has taken on a sharply political tone. China’s state mouthpiece, the People’s Daily, followed up with an editorial titled “How can we trust you, Nvidia?” warning that so-called “sick chips” had no place in China’s technology systems.

It likened digital infrastructure to sovereign territory and urged Nvidia to meet government demands swiftly. The article also pointed to past disruptions — including malfunctions in Russia’s cyber and satellite systems — as justification for the heightened scrutiny of foreign-made hardware.

In its response, Nvidia denied the accusations unequivocally. “Nvidia does not have ‘back doors’ in our chips that would give anyone a remote way to access or control them,” a company spokesperson told the South China Morning Post.

However, the timing of China’s move has fueled speculation that it is retaliatory. The decision to summon Nvidia and publicize alleged vulnerabilities comes on the heels of a modest diplomatic opening — the U.S. allowing Nvidia to resume some shipments to China. Many observers see Beijing’s shift as part of a broader tit-for-tat strategy, particularly as Chinese companies like TikTok face intensifying pressure in the United States over national security concerns. The parallels between the TikTok saga and this latest action against Nvidia are difficult to ignore.

While China continues to assert it welcomes foreign investment and is committed to further market liberalization, its actions underscore growing skepticism toward American tech giants. Nvidia, one of the few remaining U.S. semiconductor firms permitted to do business in China under strict conditions, now finds itself under attack on both sides — from Washington’s restrictions and Beijing’s mistrust.

This week’s development is especially significant given China’s reliance on Nvidia’s chips for powering artificial intelligence systems and training large language models. The H20 was meant to serve as a workaround — compliant enough to pass U.S. regulations but still advanced enough to remain competitive in China’s AI race.

Now, with the CAC casting doubt over the chip’s integrity, the future of that workaround appears fragile. Nvidia’s position in China is more uncertain than ever, and with Chinese regulators turning increasingly nationalist in tone, the pressure on multinational tech firms to prove their loyalty — or face exclusion — is intensifying.

While many Chinese companies need Nvidia’s graphics processing units to help power computing infrastructure used in artificial intelligence projects, Beijing remains committed to the long-term goal of tech self-sufficiency and reducing the country’s reliance on U.S. and other foreign technologies.

Shares of Nvidia dipped nearly 3% in early trading on Friday, reflecting investor concern over the company’s prospects in a market that once accounted for a quarter of its revenue.

The geopolitical fallout could extend far beyond Nvidia. The episode reinforces the deepening fault lines between the U.S. and China over national security and control of next-generation technologies — from AI to semiconductors.

Palantir Bags $10bn Army Software and Data Contract


Palantir Technologies has secured a landmark contract with the U.S. Army worth up to $10 billion, in what is shaping up to be one of the most significant government software deals of the decade.

The agreement is aimed at consolidating and modernizing the military’s digital operations and will span the next ten years.

Under the terms of the arrangement, Palantir will replace 75 existing contracts with a single enterprise deal designed to streamline software procurement, reduce bureaucratic friction, and increase agility across the Army’s data infrastructure. The deal forms what the Army calls a “comprehensive framework for its future software and data needs,” providing not just a roadmap for digital modernization, but also reducing contract-related fees and shortening procurement timelines.

According to the U.S. Army, the new framework will allow for a modular procurement structure, enabling military agencies to buy mission software on demand—an arrangement that eliminates rigid procurement cycles and offers greater financial and operational flexibility. The Army says the contract is part of a broader strategy to modernize operations by relying on artificial intelligence and integrated data systems to anticipate and respond to threats more effectively.

The deal marks another milestone in Palantir’s growing influence in U.S. national security circles, particularly under President Donald Trump’s administration, where cost-cutting and technological modernization have been key priorities. Trump’s Department of Government Efficiency has slashed funding for outdated programs while pivoting toward AI-driven platforms, a shift that has strongly benefited private-sector partners like Palantir.

CEO Alex Karp, a longstanding advocate of U.S. national interests and public-private collaboration on AI, said the contract highlights the increasing role of software in warfare. “Software is now as essential as bullets,” Karp said earlier this year, when the company delivered the first two AI-powered systems under its $178 million defense contract.

Palantir’s role in military transformation is part of a larger pattern. Defense contracts are emerging as a key revenue stream for AI firms, with competition intensifying as governments ramp up spending to keep up with rival nations. The U.S. Department of Defense has already expanded its Maven Smart System program—an initiative to infuse AI into battlefield intelligence—by an additional $795 million, with Palantir as a lead contractor.

Other tech giants are also benefiting from this surge. Anduril Industries recently won a contract with the U.S. Special Operations Command worth up to $1 billion to provide AI-driven surveillance and autonomous systems. Anthropic has partnered with Palantir and AWS to embed its Claude AI models within classified environments for intelligence and defense analysis. Some of these models operate in environments accredited at the Pentagon’s Impact Level 6, which handles information classified up to “secret.”

The U.S. Department of Defense’s Chief Digital & Artificial Intelligence Office (CDAO) has awarded contracts worth up to $200 million each to OpenAI, Anthropic, Google, and Elon Musk’s xAI to prototype “agentic AI” capabilities tailored for warfighting, logistics, intelligence, and enterprise operations.

Meanwhile, Microsoft and Amazon are participating in the Pentagon’s Joint Warfighting Cloud Capability (JWCC), a $9 billion program to upgrade cloud computing and AI capabilities across all branches of the military. In May, Scale AI secured a $250 million contract to provide data labeling services and AI model testing for the Department of Defense.

Palantir’s new contract reflects the increasing dependence of modern militaries on commercial AI and software solutions. As warfare shifts toward digital and data-heavy strategies, AI companies are rapidly becoming central players in defense planning, turning battlefield intelligence, logistics, and operations into a race for computing superiority.

Shares of Palantir have more than doubled this year, buoyed by growing investor confidence in its government portfolio and expanded AI footprint.

MoMo PSB Hits 2.7 Million Wallets as Growth and Strategy Rebound in H1 2025


MoMo Payment Service Bank (MoMo PSB), the fintech arm of MTN Nigeria, delivered strong indicators of growth and strategic repositioning in H1 2025, supported by MTN’s accelerated investments and renewed focus on financial inclusion.

After refining its strategy earlier in the year, MoMo PSB entered H1 2025 with renewed purpose, and the results reflect this shift. Active wallets climbed to 2.7 million, driven by the addition of over 562,000 new customers in Q2 alone.

The surge reflects a recovery from a strategic recalibration in 2024 that saw active wallets drop by 47% to 2.8 million by year-end, down from 5.3 million in 2023. Despite the decline in active users, transaction volumes rose by 4.3%, indicating stronger engagement among remaining users.

Customer deposits surged nearly fivefold between December 2024 and June 2025, highlighting growing trust in MoMo’s secure, accessible services and an expanding base of high-value users.

Expanding Partnerships and Strengthening the Ecosystem

By leveraging an expanded partner network, MoMo PSB focused on attracting premium customers and boosting deposit performance. This ecosystem-driven strategy unlocked opportunities for integrated services, improved wallet functionality, and deeper engagement across Nigeria’s digital payment space.

“MoMo’s resurgence is not just about growth, it’s about strategic refinement and ecosystem empowerment,” said Karl Toriola, CEO of MTN Nigeria. “We’re building a fintech platform that’s resilient, user-centric, and transformative for millions.”

Driving Digital and Financial Inclusion

MTN’s MoMo PSB is playing a crucial role in bridging Nigeria’s financial inclusion gap by providing access to financial services, especially in underserved areas. It has enabled millions of people to carry out everyday transactions without the burden of traditional banking delays.

The company offers services that enable more Nigerians to easily transact via the USSD channel, save money, connect with MoMo Agents for deposits and withdrawals, and conduct transactions on the MoMo App.

As MTN Nigeria channels investment into infrastructure and innovation, MoMo PSB has become a key pillar of financial inclusion. Its initiatives, including a N3 billion commitment to the 3MTT Programme and a N100 million startup accelerator, are bridging gaps in access, opportunity, and entrepreneurship, creating tangible value for customers and Nigeria’s digital economy.

While MTN’s MoMo platform is gaining ground, it faces numerous challenges common to the Nigerian fintech space. Infrastructure remains a significant bottleneck. Unreliable power supply makes it difficult for both agents and customers to consistently access the services MoMo promises.

Despite this, MTN’s reach across the country, combined with its robust network of agents, places it in a unique position to overcome these obstacles with innovative solutions.

With easing macroeconomic headwinds and rising digital adoption, MoMo PSB is well-positioned to scale further in H2 2025. As MTN Nigeria optimizes capital expenditure and boosts free cash flow, the fintech segment is expected to play an increasingly vital role in driving profitability and innovation.

The telecommunication company has set a target to build the largest fintech platform in the country, aiming to reach 30 to 40 million active MTN MoMo wallets by 2025.

Notably, MoMo PSB’s strategic revival underscores its resilience, readiness, and growing relevance in shaping Nigeria’s digital financial future.

AI Titans at Odds: Nvidia’s and Anthropic’s CEOs Trade Barbs Over Safety, Control, and Future of AI


What began as subtle disagreements between two of the most influential figures in artificial intelligence—Nvidia’s Jensen Huang and Anthropic’s Dario Amodei—has now escalated into a full-blown ideological clash, with both CEOs publicly accusing each other of distortion, bad faith, and pushing narratives that could reshape how AI is governed and developed.

Their feud, which surfaced at the VivaTech Conference in June, has since deepened following a tense podcast interview and statements released to the press. At the center of the rift are two divergent visions of how AI should evolve: one that prizes openness and innovation at speed, and another that emphasizes caution, national oversight, and long-term safety.

The Spark: Huang’s Accusation at VivaTech

Speaking at VivaTech in Paris, Nvidia CEO Jensen Huang delivered a scathing critique of Anthropic’s approach to AI safety, specifically targeting Amodei’s suggestion that the AI boom may pose existential economic threats. Huang summarized Amodei’s position as one that paints AI as “so scary that only they should do it,” suggesting that Anthropic is using fear to justify monopolistic control over development.

“AI is so incredibly powerful that everyone will lose their jobs,” Huang said, paraphrasing what he claimed to be Anthropic’s logic. “Which explains why they should be the only company building it.”

Huang was responding in part to comments Amodei had made in May, where the Anthropic CEO warned that up to 50% of entry-level white-collar jobs could be lost to AI within five years, potentially pushing U.S. unemployment to 10% or even 20%. At VivaTech, Huang dismissed these claims as exaggerated and damaging, suggesting that AI, like past technological waves, would “lift all boats” through productivity gains and job creation.

Amodei Fires Back: “A Bad Faith Distortion”

On the Big Technology podcast released August 1, Dario Amodei responded to Huang’s accusations. When host Alex Kantrowitz referenced Huang’s suggestion that Amodei wanted to control the entire AI industry because he alone thought he could build it safely, Amodei was visibly frustrated.

“I’ve never said anything like that,” he said. “That’s the most outrageous lie I’ve ever heard.”

Amodei rejected any implication that Anthropic is aiming for exclusivity. “I’ve said nothing that anywhere near resembles the idea that this company should be the only one to build the technology,” he continued. “It’s just an incredible and bad faith distortion.”

Amodei emphasized that Anthropic’s philosophy centers on a “race to the top”—an approach that prioritizes safety, transparency, and shared best practices among AI developers, rather than racing to release features without proper testing.

“In a race to the bottom, everybody loses,” Amodei said. “But in a race to the top, everyone wins because the safest, most ethical company sets the industry standard.”

He pointed to Anthropic’s responsible scaling policies, open-sourced interpretability research, and efforts to formalize government testing of foreign and domestic AI models as proof that the company is not trying to hoard development but rather raise industry standards.

The Policy Context: Safety vs. Open-Source

This clash comes amid growing political and regulatory pressure in Washington over how to govern AI. In June, Amodei published an op-ed in The New York Times, criticizing a Republican-led bill proposing a 10-year ban on state-level AI regulations. He described it as “too blunt a tool”, arguing instead for a federal transparency standard—a move that would force companies to disclose how their models are trained, tested, and secured against misuse.

Amodei also proposed a national testing infrastructure for vetting large AI models, especially those developed abroad, citing potential national security threats. His stance has aligned Anthropic with voices in government pushing for stricter oversight, especially as AI’s capabilities grow in sophistication and reach.

Nvidia, by contrast, has positioned itself as a champion of open innovation. In a statement to Business Insider, a company spokesperson pushed back against calls for regulatory guardrails that limit open-source access.

“Lobbying for regulatory capture against open source will only stifle innovation, make AI less safe and secure, and less democratic,” the Nvidia spokesperson said. “That’s not a ‘race to the top’ or the way for America to win.”

The company said it supports “safe, responsible, and transparent AI,” but warned that overregulation and exclusionary policies could put startups and the open-source ecosystem at a disadvantage.

A Deeper Rift: Competing Models for AI’s Future

While the back-and-forth may sound like corporate sniping, the disagreement runs much deeper. Huang and Amodei are promoting competing models for AI’s trajectory:

  • Jensen Huang envisions a world where AI innovation flourishes through mass collaboration and accelerated development cycles. His faith in the crowd-driven model is rooted in Nvidia’s ecosystem of startups and researchers who build on its hardware and open software platforms.
  • Dario Amodei, on the other hand, is calling for measured growth. He warns that AI could spiral out of control if profit motives and speed trump safety. His vision—though not one of monopoly, he insists—requires strong public oversight, slow releases, and responsible practices backed by evidence and transparency.

That tension is now playing out in public—and could shape the regulatory framework for years to come.

What This Means Going Forward

The Huang-Amodei feud may just be the beginning of broader divisions inside the AI industry as policymakers, developers, and the public wrestle with how to balance innovation with caution.

Both men are respected leaders, but their public disagreements signal a turning point: as AI systems inch closer to shaping critical infrastructure, jobs, and national security, the questions of “who builds” and “who governs” AI are no longer academic.

With Amodei pushing for government testing and federal oversight, and Huang defending a more open, market-led approach, stakeholders may soon be forced to choose a side—or find a middle path before the technology runs ahead of consensus.

AI Model Wars Intensify as Google Launches Gemini 2.5 Deep Think, Escalating Race with ChatGPT, Grok


The race among leading artificial intelligence labs to dominate the next phase of AI reasoning has entered a new gear, with Google DeepMind rolling out Gemini 2.5 Deep Think, its most advanced AI model yet.

The company claims the new model is capable of answering complex questions by generating and weighing multiple independent thoughts before selecting the most accurate answer — a major step up from conventional single-agent AI models.

Starting Friday, Gemini 2.5 Deep Think will be available through Google’s $250-a-month Ultra subscription plan, giving high-end users access to what the company calls its “first publicly available multi-agent system.” This system works by spawning multiple AI agents to approach a question from different angles simultaneously, combining those threads into a coherent and refined response.

While the method is significantly more computationally intensive, Google says it results in vastly better reasoning and accuracy.

The rollout follows DeepMind’s presentation of the system at its I/O 2025 conference in May, but the company now says the released version incorporates newer reinforcement learning techniques that allow the model to reason more creatively and effectively.

“Deep Think can help people tackle problems that require creativity, strategic planning and making improvements step-by-step,” the company said in a statement.

In benchmarking tests, Gemini 2.5 Deep Think scored 34.8% on Humanity’s Last Exam (HLE) — a rigorous measure of AI understanding across math, humanities, and science — outperforming its competitors. xAI’s Grok 4 scored 25.4%, while OpenAI’s o3 achieved 20.3%. On LiveCodeBench 6, which tests performance on competitive coding challenges, Google’s model also led with 87.6%, outpacing Grok 4 (79%) and o3 (72%).

These gains add to the intensifying arms race in the AI sector. Over the past few months, xAI, OpenAI, and Anthropic have all pushed out new models, each touting breakthroughs in performance and reasoning.

OpenAI, for instance, has been refining its GPT-4 and o3 model lines, and recently hinted at more powerful iterations under internal testing, including a multi-agent system similar to Google’s and xAI’s. OpenAI’s Noam Brown confirmed that the company used a multi-agent setup for its own gold-medal performance at this year’s International Math Olympiad (IMO), though the model hasn’t yet been released to the public.

xAI’s Grok 4 Heavy, meanwhile, has been marketed as a direct rival to both ChatGPT and Gemini, leveraging a multi-agent system that Musk says delivers superior performance across coding, logic, and problem-solving tasks. While Grok 4 models are increasingly being integrated into the X platform, access remains limited and premium-tiered, much like Google’s Deep Think.

Anthropic is also in the mix with its Claude family of AI models. Its latest offering, Claude Research Agent, is similarly powered by multi-agent systems and designed to generate highly detailed and structured research outputs.

These moves collectively underscore a critical industry trend: the convergence around multi-agent reasoning. While traditional large language models (LLMs) typically process queries as single-threaded thought streams, multi-agent systems divide and parallelize reasoning, often using internal debate-like mechanisms. The result is not just more accurate answers, but also responses that show better reasoning steps, especially in complex tasks like mathematics, programming, and scientific discovery.
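The general pattern can be sketched in a few lines of Python. This is a minimal, illustrative sketch only — not Google’s, OpenAI’s, or xAI’s actual implementation. The query_model function is a hypothetical stand-in for whatever LLM API a real system would call, and the majority-vote step stands in for the more sophisticated debate or judge mechanisms described above.

```python
# Illustrative sketch of the multi-agent reasoning pattern described above.
# query_model() is a hypothetical placeholder for a real LLM API call; the
# majority vote is a simplified stand-in for debate/judge-style aggregation.
from concurrent.futures import ThreadPoolExecutor
from collections import Counter


def query_model(question: str, persona: str) -> str:
    """One agent reasons about the question from one angle (placeholder)."""
    # A real system would prompt an LLM with a persona-specific instruction here.
    return f"answer from the '{persona}' agent for: {question}"


def deep_think(question: str, personas: list[str]) -> str:
    """Spawn several agents in parallel, then pick the most common answer."""
    with ThreadPoolExecutor(max_workers=len(personas)) as pool:
        candidates = list(pool.map(lambda p: query_model(question, p), personas))
    best_answer, _count = Counter(candidates).most_common(1)[0]
    return best_answer


if __name__ == "__main__":
    print(deep_think("What is 17 * 24?", ["algebraic", "step-by-step", "estimation"]))
```

The trade-off the article describes falls directly out of this structure: running several agents per query multiplies compute cost roughly by the number of agents, which is why vendors gate these systems behind premium tiers.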

However, the progress comes with a cost. Multi-agent models require significantly more computing power, making them expensive to run and maintain. As a result, the leading tech companies have opted to restrict these models to their highest-paying subscribers. Google’s $250/month Ultra plan for Gemini 2.5 Deep Think mirrors similar premium-tier strategies from both OpenAI and xAI.

Despite the price wall, Google is also making some of the model’s capabilities available to select mathematicians and academic researchers, particularly the variation of the system that secured a gold medal at the IMO. This version, Google says, takes hours to generate responses — unlike consumer-facing models that operate in seconds — but offers the kind of deep, methodical reasoning researchers crave.

In the coming weeks, Google plans to open the Gemini API for developers and enterprise testers, aiming to observe how the multi-agent system performs in real-world environments outside Google’s sandbox.

The AI model war is clearly far from over. With every major lab now aligning behind multi-agent architecture and pushing boundaries on creativity, strategic reasoning, and deep cognition, the race is no longer just about answering questions — it’s about thinking more like humans.
