
Jensen Huang Pushes Back Hard Against AI ‘Doomerism,’ Warning Fear Is Undermining Innovation and Safety


ChatGPT became the fastest-growing consumer app in history when it launched in 2022, spearheading generative artificial intelligence on its way to becoming the world’s most valuable chatbot. However, the meteoric rise of AI has been shadowed by controversies, including a surge of public anxiety.

Fears about mass job losses, misinformation, and even civilizational collapse have dominated debates around AI’s future. Nvidia CEO Jensen Huang says that narrative has become excessive — and counterproductive.

Speaking on the No Priors podcast, Huang framed what he called the “doomer narrative” as one of the most damaging forces shaping the AI conversation today. For Huang, whose company supplies the chips that power much of the global AI ecosystem, the issue is no longer just about technical capability, but about how fear is influencing policy, investment, and public trust.

“One of my biggest takeaways from 2025 is the battle of the narratives,” Huang said, describing a widening divide between those who believe AI can broadly benefit society and those who argue it will erode economic stability or even threaten human survival.

He acknowledged that both optimism and caution have a place, but warned that repeated end-of-the-world framing has distorted the debate.

“I think we’ve done a lot of damage with very well-respected people who have painted a doomer narrative — end-of-the-world, science fiction narratives,” Huang said.

While conceding that science fiction has long shaped cultural imagination, he argued that leaning too heavily on those tropes is “not helpful to people, not helpful to the industry, not helpful to society, and not helpful to governments.”

A not-so-subtle rebuke of AI rivals

Although Huang did not name specific individuals, his comments echo earlier public clashes with leaders of other major AI firms — most notably Anthropic CEO Dario Amodei. In June last year, Amodei warned that AI could eliminate roughly half of all entry-level white-collar jobs within five years, potentially pushing unemployment toward 20%. Huang responded at the time that he “pretty much disagree[d] with almost everything” Amodei had said.

That disagreement appears to go beyond economics and into philosophy and policy. On the podcast, Huang argued that AI companies should not be urging governments to impose heavier regulation, saying corporate advocacy for stricter rules often masks competitive self-interest.

“No company should be asking governments for more AI regulation,” Huang said. “Their intentions are clearly deeply conflicted. They’re CEOs, they’re companies, and they’re advocating for themselves.”

The subtext is clear: Huang sees some calls for regulation as an attempt by early AI leaders to lock in advantages, slow rivals, or shape rules in their own favor, rather than a neutral effort to protect society.

Regulation, geopolitics, and the China fault line

The rift between Nvidia and Anthropic has also played out in geopolitics. In May 2025, both companies took opposing stances on U.S. AI Diffusion Rules that restrict exports of advanced AI technologies to countries including China. Anthropic has supported tighter controls and stronger enforcement, highlighting cases of alleged chip smuggling.

Nvidia pushed back sharply, dismissing claims that its hardware had been trafficked into China through elaborate schemes. Huang has repeatedly argued that overly restrictive export controls risk weakening U.S. competitiveness without meaningfully slowing global AI development.

For Huang, this feeds into a broader concern: that fear-driven policymaking, fueled by apocalyptic rhetoric, could end up doing more harm than good.

One of Huang’s most pointed warnings was that relentless pessimism about AI could actually increase risks rather than reduce them. He argued that fear discourages investment in the very research and infrastructure needed to make AI systems safer, more reliable, and more socially useful.

“When 90% of the messaging is all around the end of the world and pessimism,” Huang said, “we’re scaring people from making the investments in AI that make it safer, more functional, more productive, and more useful to society.”

In Huang’s view, safety does not come from paralysis or blanket restriction, but from sustained development, testing, and deployment — all of which require capital, talent, and public confidence.

An industry divided from within

Huang is not alone among tech leaders expressing frustration with the tone of the AI debate. Microsoft CEO Satya Nadella has criticized what he sees as dismissive conversations that reduce AI output to “slop,” while Mustafa Suleyman, head of Microsoft’s AI division, described widespread public criticism of AI as “mind-blowing” late last year.

Yet the backlash is rooted in tangible outcomes, not just abstract fear. Estimates suggest that more than 20% of YouTube content now consists of low-quality or spam-like AI-generated material, while layoffs tied to automation and AI adoption continue to ripple through media, tech, and customer service roles. For many workers, skepticism reflects lived experience rather than science fiction.

Based on Huang’s remarks, the disagreement is no longer simply about how fast AI should advance, but about who gets to define its risks, who shapes regulation, and whether caution is a necessary brake or an overreaction that could blunt progress. The Nvidia CEO believes the danger lies in allowing fear to dominate the conversation. He argues that excessive pessimism risks slowing innovation, weakening competitiveness, and ironically making AI less safe in the long run.

Tekedia Institute Unveils Nigerian Capital Market Masterclass, Register Today!


Tekedia Nigerian Capital Market Masterclass is a practitioner-led, intensive program designed to deepen the human capabilities needed to power Nigeria’s modern capital market. The Masterclass blends applied knowledge, real-market processes, regulatory frameworks, technology infrastructure, and hands-on case studies covering the entire capital market value chain.

The program will run for eight weeks, with assignments, simulations, and industry projects. Some participants who complete the program successfully will be offered internship opportunities within capital-market institutions in Nigeria.

Minimum entry requirement: Secondary school education.

Location and Mode: The program is completely online, with no physical component. It includes eight weeks of recorded courseware and written materials, plus four live Zoom sessions led by four different faculty members on four Saturdays, each lasting two hours.

Cost: $500 or N350,000

Program Date: from June 15 to Aug 8, 2026

How To Pay

Brief Structure and Curriculum 

For more on the program and its full curriculum, click here.

Week 1

  • Module 1: Introduction to Nigeria’s Capital Market – Foundations & Architecture
  • Module 2: SEC Nigeria – Registration, Regulations & Market Oversight

Week 2

  • Module 3: Market Operators – Roles, Responsibilities & Interdependencies
  • Module 4: Capital-Raising Instruments – IPOs, Bonds, Commercial Papers & Private Markets

Week 3

  • Module 5: Listing Processes, Documentation & Regulatory Compliance
  • Module 6: Capital-Market Operations – Trading, Settlement & Surveillance

Week 4

Projects

Week 5

  • Module 7: Derivatives, Structured Products & Hedging Instruments
  • Module 8: Technology & Financial Market Infrastructure (FMI)

Week 6

  • Module 9: Digital Assets, Tokenization & ISA 2025 Framework
  • Module 10: Compliance, Risk Management & Ethics in Capital Markets

Week 7

  • Module 11: Careers, Business Opportunities & Promising Regulated Sole Proprietorships
  • Module 12: Business Development, Market Strategy & Capital-Market Innovation

Week 8

  • Program Capstone

German Minister Suggests Minimum Price Option for Rare Earths


German Finance Minister Lars Klingbeil has indicated that establishing price floors, a form of minimum price, for rare earth elements is under consideration as part of efforts to reduce global dependence on China, which dominates the supply chain.

This came up during a meeting of finance ministers from the G7 (US, Germany, Japan, Britain, France, Italy, and Canada) plus partners like Australia, Mexico, South Korea, and India, held in Washington on January 12, 2026. The discussions, convened by US Treasury Secretary Scott Bessent, focused on securing and diversifying supplies of critical minerals, including the rare earths essential for technologies like EVs, renewables, semiconductors, batteries, and defense systems.

Klingbeil described the price floor as a mechanism to ensure producers outside China receive at least a certain price level, even if market prices drop due to China’s ability to flood the market with low-cost supply.

He highlighted its advantage in providing market predictability and reducing the influence of dominant players trying to manipulate prices. However, he stressed that discussions are preliminary, with many issues unresolved, and emphasized cooperation over confrontation: Germany’s approach to China remains “de-risking, not decoupling.”

A follow-up meeting of foreign ministers on rare earths strategy is planned soon, and critical minerals are expected to be a key focus under France’s G7 presidency in 2026. This reflects broader Western concerns over China’s control of rare earth refining (roughly 90% of global capacity) and its recent export restrictions, such as those affecting Japan.

The idea aims to support non-Chinese production through policy tools like subsidies, incentives, or guarantees, though specifics such as exact price levels and implementation remain open. No immediate decisions were announced, and Klingbeil noted the need to carefully weigh potential consequences.

A price floor is a government-imposed minimum price below which a good, service, or commodity (such as labor, agricultural products, or, in this case, rare earth elements) cannot legally be sold. It acts as a “floor” to prevent prices from falling too low, typically to protect producers from market forces that might drive prices down to unprofitable levels.

How Price Floors Work in Standard Economics

In a free market, prices are determined by the interaction of supply and demand: The equilibrium price is where the quantity supplied equals the quantity demanded.

If the government sets a price floor above this equilibrium price, it becomes binding, and the market cannot clear naturally. Effects include:

  • Higher price for sellers: producers benefit from a guaranteed minimum revenue.
  • Lower quantity demanded: buyers purchase less because the price is artificially high.
  • Higher quantity supplied: more producers are willing to sell at the higher price.
  • Surplus: quantity supplied exceeds quantity demanded, leading to unsold goods, potential stockpiles, or wasted resources.

A minimum wage is a familiar example: employers must pay at least a set hourly rate, which can lead to higher wages for employed workers but reduced hiring if set above the market equilibrium for low-skilled jobs.
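The binding-floor mechanics above can be sketched numerically. This is a minimal illustration with hypothetical linear supply and demand curves (the coefficients are invented for the example, not drawn from any real market):

```python
# Illustrative model of a binding price floor with hypothetical linear curves.
def demand(p):
    """Quantity buyers want at price p (assumed curve)."""
    return 100 - 2 * p

def supply(p):
    """Quantity producers offer at price p (assumed curve)."""
    return 10 + p

# Free-market equilibrium: demand(p) == supply(p) -> 100 - 2p = 10 + p -> p = 30
eq_price = 30
assert demand(eq_price) == supply(eq_price) == 40

# A floor set above equilibrium is binding: only the demanded quantity trades.
floor = 40
traded = min(demand(floor), supply(floor))  # 20 units actually sold
surplus = supply(floor) - demand(floor)     # 30 units of unsold excess supply
print(traded, surplus)                      # 20 30
```

Setting `floor = 25` (below the equilibrium of 30) would leave the market clearing at 30 on its own, matching the non-binding case described below.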

Agricultural supports (e.g., historical EU or US farm price floors) work the same way: governments set minimum prices for crops to ensure farmers’ income stability. If market prices drop, the government may buy excess supply, provide subsidies to cover the gap, or destroy surplus to maintain the floor.

If the price floor is set below the equilibrium price, it is non-binding and has no real effect; the market price stays higher naturally. The recent G7 discussions involving Germany’s Lars Klingbeil on January 12, 2026 proposed price floors for rare earth elements as a tool to counter China’s dominance (roughly 90% of global refining) and its ability to flood markets with low-cost supply, which can crash prices and make non-Chinese production unviable.

Here, the mechanism would likely work differently from traditional floors: it sets a guaranteed minimum price that non-Chinese producers can expect to receive. If market prices fall below this floor (for example, due to Chinese oversupply), governments or alliances could intervene by:

  • Directly purchasing excess material and adding it to strategic stockpiles.
  • Providing subsidy payments or contracts for difference (CfDs) to bridge the gap between the market price and the floor price.
  • Offering offtake guarantees or forward-buying commitments to give producers revenue certainty.

This provides predictability and reduces investment risk for Western, Australian, and allied producers, encouraging new mines, processing facilities, and diversification away from China.
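The contract-for-difference variant described above reduces to simple arithmetic: the payout covers the gap between the floor and the market price, and nothing when the market is above the floor. A minimal sketch, with all prices and quantities invented for illustration (no actual G7 figures have been proposed):

```python
# Sketch of a CfD-style top-up under an assumed price floor.
# All numbers are illustrative assumptions, not proposed policy figures.
def cfd_topup(market_price, floor_price, tonnes):
    """Subsidy paying producers the per-unit gap up to the floor, if any."""
    return max(0.0, floor_price - market_price) * tonnes

# Market price crashes below the floor: the gap (15.0/tonne) is paid out.
print(cfd_topup(market_price=45.0, floor_price=60.0, tonnes=100))  # 1500.0

# Market price above the floor: the floor is non-binding, no payout.
print(cfd_topup(market_price=70.0, floor_price=60.0, tonnes=100))  # 0.0
```

The producer’s effective revenue per tonne is thus never below the floor, which is the revenue-certainty property the text attributes to the proposal.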

Klingbeil emphasized that it minimizes the influence of dominant players trying to manipulate prices, while stressing careful evaluation of consequences (e.g., potentially higher costs for buyers, trade tensions, or inefficiencies).

This approach is more about strategic de-risking than pure market intervention, similar to recent US Defense Department deals (e.g., price floors for specific rare earth products via contracts with producers like MP Materials).

It’s still preliminary, with no fixed levels or full implementation details yet, and focuses on cooperation among G7+ partners rather than confrontation. Price floors can stabilize markets and support key industries but often create surpluses or deadweight losses if not managed carefully via government buying or targeted subsidies.

In volatile commodity markets like rare earths, they aim to level the playing field against non-market behaviors.

U.S. Senators Demand Apple and Google Remove X and Grok Amid Escalating Global Backlash Over AI-Generated Sexual Deepfakes


Three Democratic senators have intensified pressure on Apple and Google to remove Elon Musk’s social media platform X and its associated AI app Grok from their app stores, citing the rampant generation of sexualized, nonconsensual images through xAI’s chatbot.

In an open letter to Apple CEO Tim Cook and Google CEO Sundar Pichai, Senators Ron Wyden (Oregon), Ed Markey (Massachusetts), and Ben Ray Luján (New Mexico) urged the tech giants to enforce app store rules that prohibit apps enabling sexualized images of real people without consent. They argued that Grok has allowed users to flood X with thousands of sexualized images an hour, depicting women and, in some cases, children, including portrayals of abuse, humiliation, and even death.

Within hours of the letter, X adjusted the Grok reply bot’s functionality on the platform, restricting image generation to paying premium subscribers and narrowing the types of images that could be generated on X itself. However, the standalone Grok app and website still permit users to generate sexualized deepfakes, raising questions about the platform’s ability or willingness to fully address the problem.

Senator Wyden criticized the partial changes, calling them insufficient: “All X’s changes do is make some of its users pay for the privilege of producing horrific images on the X app, while Musk profits from the abuse of children,” he said.

Global Consequences and Rising Tensions

The backlash has not been confined to the U.S. Governments in Asia have begun to take matters into their own hands. Indonesia temporarily blocked Grok and X after the AI tool generated explicit content depicting real people, including minors, without consent. Malaysia also restricted access to X in response to similar concerns. Both governments cited violations of citizens’ privacy and the potential for harm as justification for the actions.

Despite the mounting criticism, Elon Musk has yet to propose a concrete solution to curb Grok’s abuse. Musk continues to frame regulatory pressure as a free speech issue. On X, he accused the U.K. government of seeking “any excuse for censorship” and of attempting to suppress free expression, echoing his long-standing opposition to content moderation. Musk and xAI have reiterated that producing illegal content will lead to expulsion from X, but much of the sexualized imagery at the center of the controversy falls into a legal gray area, leaving enforcement uneven and limited.

Debate Over Free Speech and Regulatory Authority

While the senators argue that Apple and Google are uniquely positioned to stop the distribution of apps that enable sexualized deepfakes, some observers contend that the demand amounts to overreach. Critics say compelling app stores to remove X and Grok could set a precedent that stifles free expression and expands governmental influence over digital platforms and AI innovation.

“All major AIs have documented instances of going off the rails; all major AI companies make their best efforts to combat this; none are perfect,” Epic Games CEO Tim Sweeney said.

He went further, accusing politicians of using app store gatekeepers to selectively target companies they oppose, calling it “basic crony capitalism.”

“I defend open platforms, free speech, and consistent application of the rule of law,” he said.

Platform Responsibility and Enforcement Challenges

Apple’s App Store guidelines prohibit apps that include “overtly sexual or pornographic material” or content likely to humiliate, intimidate, or harm individuals. Google’s Play Store similarly bans apps promoting sexually predatory behavior or distributing nonconsensual sexual content. Both companies have previously removed apps that allowed AI-based “nudifying” of images.

However, Grok and X remain highly ranked on both platforms, with Grok placed in the top 10 apps on Friday, illustrating the difficulty of enforcing policy consistently across powerful AI tools.

Many have noted that the distributed nature of AI and generative tools complicates enforcement. Even if Apple or Google delists Grok, the app can still be accessed through web portals or sideloaded, and international users may continue to generate abusive content. Meanwhile, Musk’s framing of moderation as censorship allows him to rally supporters who oppose any perceived limitation on content generation, further polarizing the discussion.

Timeline: Grok/X Controversy and Global Backlash

December 2025 – Musk Unveils Grok’s AI Image Manipulation

  • Elon Musk, owner of X and xAI, introduces a version of the Grok AI chatbot capable of manipulating images of real people.
  • The AI tool can generate sexualized content, including deepfakes of women and minors.
  • Musk frames moderation as censorship and opposes heavy content restrictions, emphasizing free speech.

Early January 2026 – Rampant Deepfake Generation on X

  • X users exploit Grok to generate thousands of sexualized images per hour, largely depicting women but also including minors.
  • Many images involve nonconsensual scenarios, sexual abuse, humiliation, or death.
  • The platform’s moderation struggles to contain the content, and harmful imagery spreads widely.

January 9, 2026 – X Adjusts Grok Functionality

  • X limits Grok’s image generation to paying premium subscribers.
  • Certain image categories are restricted on X itself, but the Grok tab on X, the standalone app, and website continue to allow sexualized deepfakes.
  • Musk reiterates that illegal content will result in expulsion from X, but much of the content does not legally qualify as illegal.

January 10–12, 2026 – Southeast Asia Governments Take Action

  • Indonesia: Temporarily blocks X and Grok following reports of sexualized AI-generated imagery depicting real people, including minors.
  • Malaysia: Restricts access to X, citing privacy and citizen protection concerns.
  • Governments emphasize that AI-fueled sexualized imagery violates human rights and digital safety.

Google Integrates Gemini Into Gmail in a Bid to Leverage Inbox in AI Push


Google is pushing its Gemini artificial intelligence more firmly into Gmail, turning the world’s most widely used email service into another frontline in the race to dominate generative AI.

The company said Thursday that it is rolling out a new set of Gemini-powered features that will automatically summarize long email threads, suggest context-aware replies, and surface AI-generated overviews inside inboxes — with some tools switched on by default.

The upgrades mark a notable shift in how Google is positioning AI inside everyday digital habits. Rather than framing Gemini as an optional assistant, Google is increasingly embedding it as a core layer of the user experience, even if that means some users will have to actively opt out.

“When you open an email with dozens of replies, Gmail synthesizes the entire conversation into a concise summary of key points,” Google said in a blog post announcing the changes.

For users drowning in long threads — office debates, family group emails, or sprawling customer-service chains — the company is pitching Gemini as a way to reclaim time and attention.

At the center of the update are AI-generated thread summaries, designed to distill lengthy back-and-forths into short, readable digests. Alongside that, Google is bringing its controversial “AI Overviews” — already familiar to search users — into Gmail, signaling its confidence that AI-generated context belongs not just in search results, but directly inside personal communications.

Google is also expanding “Suggested Replies,” an evolution of its earlier “Smart Replies” feature. The new version draws more deeply on the context of previous messages, allowing users to respond with a single tap to emails that might otherwise require careful reading and drafting. The company is simultaneously upgrading its proofreading tools, promising tighter grammar checks and suggestions that make emails more concise.

Taken together, the changes reflect Google’s broader strategy to use scale as leverage. Gmail has more than 3 billion users, according to the company, giving Google a built-in distribution advantage over rivals such as OpenAI and Anthropic, which largely rely on standalone apps or integrations.

This strategy comes as competition in the AI sector intensifies. OpenAI, whose ChatGPT helped ignite the generative AI boom, reached a private market valuation of $500 billion late last year. Anthropic said it is now valued at $350 billion following a new funding round. Google, meanwhile, is betting that embedding Gemini across products people already use daily — Gmail, Search, Docs, and beyond — will lock in relevance before competitors can fully catch up.

That bet appears to be resonating with investors. Alphabet, Google’s parent company, briefly overtook Apple by market capitalization on Wednesday for the first time since 2019, capping a rally that made Alphabet the best-performing stock among tech megacaps last year. The surge has been driven in part by confidence that Google is finally converting its AI research muscle into consumer-facing products at scale.

Still, the decision to turn some Gemini features on by default raises questions about user choice and trust. Gmail is a deeply personal product, and automatic AI summaries and suggestions could reshape how people read, interpret, and respond to messages — sometimes without realizing it. Google has said users who do not want the features can opt out, but the default-on approach underscores how aggressively the company is moving to normalize AI assistance.

Email remains one of the most entrenched digital habits in the world, and whoever controls how information is summarized, prioritized, and acted upon inside the inbox gains a powerful advantage. With Gemini, Google is no longer just helping users write emails faster. It is positioning AI as the silent editor, reader, and gatekeeper of everyday communication.