
Tether freezes $4.2bn in USDT tied to illicit activity as global scrutiny intensifies

El Salvador-based stablecoin issuer Tether said it has frozen about $4.2 billion worth of its dollar-pegged crypto tokens over links to illicit activity, the bulk of it within the past three years.

Tether, which issues the world’s largest stablecoin, USDT, has more than $180 billion of its tokens in circulation, up sharply from roughly $70 billion three years ago. The company has the technical ability to remotely freeze tokens held in users’ crypto wallets when requested by authorities — a power that differentiates centrally issued stablecoins from decentralized cryptocurrencies such as Bitcoin.
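The freeze power described above is possible only because USDT is centrally issued. A minimal, purely illustrative Python sketch of the idea (not Tether's actual contract, which runs as smart-contract code on chains such as Ethereum and Tron; all names here are hypothetical):

```python
# Illustrative sketch of a centrally issued token: a balance ledger plus
# an issuer-controlled freeze list, so transfers involving flagged
# addresses can be blocked on request from authorities.
class CentralizedStablecoin:
    def __init__(self, issuer: str):
        self.issuer = issuer
        self.balances: dict[str, int] = {}
        self.frozen: set[str] = set()

    def freeze(self, caller: str, address: str) -> None:
        # Only the issuer can freeze -- the capability that distinguishes
        # USDT-style tokens from decentralized assets like Bitcoin.
        if caller != self.issuer:
            raise PermissionError("only the issuer can freeze addresses")
        self.frozen.add(address)

    def transfer(self, sender: str, recipient: str, amount: int) -> None:
        if sender in self.frozen or recipient in self.frozen:
            raise RuntimeError("address is frozen")
        if self.balances.get(sender, 0) < amount:
            raise RuntimeError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
```

A decentralized asset has no equivalent of the `freeze` method: no single party holds the authority that the `caller != self.issuer` check encodes.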

This week, Tether said it assisted the U.S. Department of Justice in freezing nearly $61 million in USDT linked to so-called “pig-butchering” scams, a form of fraud in which perpetrators cultivate personal relationships with victims before persuading them to invest in fraudulent crypto schemes. The action lifted Tether’s cumulative frozen assets tied to illicit activity to $4.2 billion, according to a company spokesperson. Of that amount, $3.5 billion has been frozen since 2023.

The scale of the freezes is a reflection of both the rapid expansion of stablecoins and the increasing use of crypto in cross-border fraud and laundering schemes. Tether has previously said it blocked wallets connected to human trafficking networks as well as activities linked to “terrorism and warfare” in Israel and Ukraine. Sanctioned Russian crypto exchange Garantex also said last year that Tether had blocked funds on its platform.

Stablecoins at the center of crypto enforcement

Stablecoins like USDT are primarily used as settlement and liquidity tools within the crypto ecosystem, allowing traders to move quickly between digital assets without exiting into traditional banking rails. Their transaction volumes have surged in recent years alongside broader crypto market growth.

That ubiquity, however, has drawn heightened scrutiny from regulators and enforcement agencies. Authorities worldwide have repeatedly raised concerns that crypto markets, which are generally less regulated than traditional financial systems, can be exploited for illicit finance.

The Financial Action Task Force (FATF), the global anti-money laundering watchdog, last year urged countries to intensify efforts to combat illicit finance in crypto markets. The group warned that inconsistent implementation of anti-money laundering standards across jurisdictions was creating vulnerabilities.

Blockchain researchers reported in January that money launderers received at least $82 billion in cryptocurrencies last year, a sharp increase from about $10 billion in 2020. The growth was attributed in part to the expansion of Chinese-speaking criminal networks and the rising sophistication of scam operations.

Tether occupies a unique position in this landscape. As the dominant stablecoin issuer, it serves as a critical liquidity backbone for global crypto trading. At the same time, its centralized control over USDT issuance allows it to intervene directly in suspicious transactions — a feature that some crypto purists view as antithetical to decentralization, but which regulators often regard as a compliance advantage.

The company’s ability to freeze funds has become an increasingly visible tool in enforcement actions. By cooperating with authorities and blocking flagged wallets, Tether is attempting to position itself as a responsible intermediary rather than a passive conduit for illicit flows.

Yet the magnitude of frozen funds — $4.2 billion to date — highlights the scale of activity passing through stablecoin networks that later becomes subject to enforcement.

The figures reinforce concerns that rapid crypto adoption has outpaced regulatory frameworks in many jurisdictions. Against that backdrop, the pressure to strengthen compliance, enhance transaction monitoring, and cooperate with cross-border investigations is likely to intensify as volumes grow.

With more than $180 billion of USDT in circulation, Tether’s actions indicate that stablecoins are no longer peripheral instruments in digital finance. They sit at the core of the ecosystem — and increasingly at the center of the global effort to curb crypto-related crime.

Morgan Stanley Counters AI Job Apocalypse Narrative: History Shows Tech Transforms Work, Doesn’t Eliminate It — New Roles Will Emerge, Not Mass Unemployment

Amid widespread warnings from tech leaders that artificial intelligence will render millions of white-collar jobs obsolete and potentially make traditional employment unnecessary, a comprehensive new cross-asset research report from Morgan Stanley delivers a starkly different message.

According to the report, most workers won’t face permanent unemployment; they will simply move into new jobs, many of which do not yet exist.

The report, authored by a large team of Morgan Stanley analysts, directly addresses investor and employee anxieties that AI will “replace millions of jobs and increase unemployment by an equivalent amount.” Rather than a mass extinction event for knowledge workers, the bank argues AI will follow the historical pattern of every major technological shift over the past 150 years — fundamentally altering the labor force without reducing overall employment.

From electrification and the tractor to the computer and the internet, each wave of innovation eliminated certain roles while creating others — often in greater numbers and with higher value. The spreadsheet revolution of the 1980s, for example, automated tedious financial modeling and reduced demand for some bookkeeping clerks, but simultaneously freed analysts to perform more complex work and gave rise to entirely new financial professions.

Morgan Stanley sees AI following the same trajectory: changing “job types, occupations, and needed skills” rather than eliminating labor itself.

“While some roles may be automated, others will see enhancement through AI augmentation, and other, entirely new roles will be created,” the report concludes.

The bank emphasizes that the corporate landscape is simply preparing for an evolution — not a collapse — of work.

Emerging Jobs and Professions on the Horizon

Morgan Stanley identifies several categories of roles likely to become corporate staples as AI integrates deeper into business operations:

  • Executive-level oversight — Companies will increasingly hire “chief AI officers” to guide technology adoption, strategy, and governance across departments.
  • Governance and compliance — A surge in AI governance specialists focused on data privacy, policy enforcement, regulatory compliance, and information security — especially critical in regulated sectors like healthcare and finance.
  • Blended technical roles — Product manager/engineer hybrids will become common, with product managers empowered by natural language coding tools to engage in “vibe coding” — rapidly prototyping and iterating concepts themselves before final engineering deployment.
  • Industry-specific specialists — Consumer sectors will see “AI personalization strategists” and “AI supply-chain analysts” blending data science with customer experience. Industrials will demand “predictive maintenance engineers” and “smart grid analysts.” Healthcare will require “computational geneticists” and experts dedicated to overseeing AI-driven diagnostics.

These roles point to a broader shift: AI will augment human capabilities in strategic, creative, and oversight functions while automating routine, repetitive tasks. The bank argues that historical precedent strongly supports this outcome — technological revolutions have consistently expanded the overall labor market rather than contracting it.

AI Disruption Fears Overblown for Broad Market

Morgan Stanley directly challenges the recent sell-off in software and services stocks, where multiples have pulled back roughly 33% since late 2025 on AI disruption worries. The bank notes that the services and cyclical industries most vulnerable to near-term automation fears constitute only about 13% of the S&P 500’s market cap — suggesting the broad equity market’s reaction may be disproportionate to the actual risk.

While acknowledging that some roles will face displacement, the report emphasizes that AI’s net effect is likely to be job transformation and creation rather than elimination. This view contrasts sharply with dire predictions from tech executives like Elon Musk (who forecast work becoming “optional” in 10–20 years due to AI and humanoid robots), OpenAI’s Sam Altman (superintelligence outperforming top executives soon), Microsoft AI chief Mustafa Suleyman, and Anthropic CEO Dario Amodei (sweeping white-collar automation in 1–5 years).

Economists have generally been more skeptical of these timelines, viewing the apocalyptic narrative as partly a tool to justify sky-high tech valuations. Morgan Stanley’s analysis aligns with this skepticism, arguing that fears of permanent mass unemployment overlook historical patterns of adaptation and new job creation.

The report arrives amid intense debate over AI’s societal impact. Tech leaders have issued stark warnings about human obsolescence, while labor economists point to past technological shifts (agricultural mechanization, computers, and the internet) that ultimately expanded employment despite initial disruption. The bank positions itself in the latter camp: AI will reshape occupations and skill requirements, but not destroy the need for human labor.

The analysis suggests the recent software and services sell-off may represent an overreaction. Companies that successfully integrate AI to enhance productivity — rather than face outright replacement — could emerge stronger. The 13% S&P 500 exposure to the most vulnerable sectors implies limited systemic risk to broader equity markets.

The key question shifts from “will jobs disappear?” to “which new jobs will be created, and who will fill them?” as the AI adoption wave accelerates. Morgan Stanley’s report offers a grounding perspective: history suggests workers and companies adapt, and the economy ultimately expands — even if the transition is uneven, uncomfortable, and politically charged.

OpenAI and Google Employees Unite in Petition Against Unrestricted Military Use of AI, Citing Mass Surveillance and Autonomous Weapons Risks

A growing number of current and former employees at OpenAI and Google have signed a joint petition opposing the unrestricted deployment of their companies’ AI technologies for mass surveillance or fully autonomous weapons that can kill without human oversight.

Titled “We Will Not Be Divided,” the online petition — launched in early February 2026 — invites verified employees from both firms to publicly declare their stance, with the option to remain anonymous.

As of Friday, more than 220 individuals had signed: 176 from Google and 47 from OpenAI. Google employs approximately 187,000 people globally (mid-2025 figures), while OpenAI’s headcount runs into the thousands. The petition’s relatively modest numbers belie its significance as a rare public act of dissent from within two of the world’s most influential AI labs.

The petition explicitly references pressure from the Department of Defense (referred to as the “Department of War” in the text) to provide military access to AI models. It claims the Pentagon has threatened to invoke the Defense Production Act (DPA) to force Anthropic — another major AI developer — to tailor its technology to military needs, warning of labeling the company a “supply chain risk” if it refuses.

“The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused,” the petition states.

The document accuses the Defense Department of a “divide and conquer” strategy: pitting companies against each other by implying competitors will comply if one refuses.

“That strategy only works if none of us know where the others stand,” it reads. “This letter serves to create shared understanding and solidarity in the face of this pressure.”

The signers call on OpenAI and Google leadership to “put aside their differences and stand together to continue to refuse the Department of War’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.”

Context: Pentagon Pressure and Anthropic’s Stance

The petition follows Axios’ Tuesday report that Defense Secretary Pete Hegseth set a deadline for Anthropic CEO Dario Amodei to grant the military sweeping access to Claude models, threatening contract cancellation or further action if refused.

A Defense official told Axios: “The only reason we’re still talking to these people is we need them and we need them now. The problem for these guys is they are that good.”

Anthropic responded Thursday with a firm public refusal, saying it "cannot in good conscience accede" to demands allowing unrestricted military use, particularly for mass surveillance of Americans or autonomous weapons lacking human oversight.

Amodei noted new contract language from the Pentagon “made virtually no progress” on these red lines.

Pentagon spokesman Sean Parnell countered on social media, saying “The military has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.”

He insisted the department seeks only “lawful purposes” but offered no specifics.

Defense Undersecretary for Research and Engineering Emil Michael escalated the rhetoric, posting on X that Amodei “has a God-complex” and is “ok putting our nation’s safety at risk.”

Experts view the Pentagon’s approach as unprecedented. Dean Ball, former senior policy advisor in the White House Office of Science and Technology Policy and fellow at the Foundation for American Innovation, told Business Insider: “We’re absolutely in uncharted territory. What are the stakes for Anthropic? I mean, Anthropic could be quasi-nationalized, or they could be driven out of business. The stakes are huge for them.”

Ball warned the episode sends a chilling signal to the tech industry that “doing business with the government is extremely dangerous.”

Sen. Thom Tillis (R-NC) criticized the Pentagon’s handling as unprofessional.

“Why in the hell are we having this discussion in public? This is not the way you deal with a strategic vendor that has contracts,” he said, urging a closed-door resolution.

Sen. Mark Warner (D-VA), ranking member on the Senate Intelligence Committee, expressed deep disturbance.

“Unfortunately, this is further indication that the Department of Defense seeks to completely ignore AI governance. It further underscores the need for Congress to enact strong, binding AI governance mechanisms for national security contexts,” he said.

OpenAI, Google, Anthropic, and xAI all maintain contracts or discussions with the Pentagon for military AI applications. Anthropic has been the most public in resisting unrestricted use, citing policies against mass surveillance and autonomous weapons. The petition underlines a rare cross-company alliance among employees concerned about ethical boundaries.

Hegseth has pushed for faster AI deployment, describing it as a “wartime arms race” during a January visit to Elon Musk’s SpaceX. He has also criticized military legal advisors as potential “roadblocks,” leading to high-profile dismissals of top Army and Air Force lawyers in early 2026.

The clash highlights tensions between rapid military adoption of frontier AI and calls for governance, transparency, and human oversight — especially in sensitive areas like surveillance and lethal autonomy.

The petition and Anthropic’s stance are expected to embolden further employee activism across AI labs. Congressional attention — from Tillis and Warner — suggests growing bipartisan interest in formal AI governance for national security applications. For OpenAI and Google, the petition adds internal pressure at a time when both companies face scrutiny over military ties. Anthropic’s refusal has positioned it as a leader in setting red lines, potentially influencing industry norms.

The coming weeks will test whether the Pentagon softens its demands or escalates pressure through contract leverage or DPA invocation.

Pump.fun’s Build in Public Hackathon Boosts its Ecosystem Activity

The Pump.fun Build in Public Hackathon, also called the BiP Hackathon, launched in January 2026 as part of Pump.fun’s $3 million initiative and recently concluded its application phase.

The program funds 12 projects with $250,000 each, totaling $3 million, with each investment made at a $10 million valuation. Unlike traditional hackathons, there is no judging panel and there are no VC pitches: winners are determined by market traction, meaning token performance after launching on Pump.fun, community engagement, and public building progress.

Participants must launch a token on Pump.fun, retain a meaningful portion of supply (20-50% recommended, 10% minimum for eligibility), and build transparently in public. With applications now closed, the remaining winners are to be selected and announced via Pumpspotlight over the coming weeks.

Some winners have already been picked: at least two, Zauth and Opal, were announced earlier in February. The full set of 12 is not yet complete, however; selection is an ongoing process rather than a single concluding event. Funding is described as a $250,000 investment per project, likely in SOL, USD equivalent, or token purchases at set valuations, plus mentorship and incubation, not airdrops or distributions of $PUMP itself.

$PUMP was launched in mid-2025 via a rapid ICO that raised hundreds of millions of dollars. While the hackathon boosts ecosystem activity, potentially benefiting $PUMP indirectly through more launches and platform usage, no direct token distribution is tied to the hackathon.

The program continues to emphasize market-driven validation: token performance, community engagement, and public progress determine who gets funded, bypassing traditional VC judging.
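A back-of-envelope check of the terms described above (assuming, since this is not stated, that the $10 million figure is a post-money valuation):

```python
# Sanity check of the BiP Hackathon terms: 12 projects funded with
# $250,000 each at a $10M valuation. Treating the valuation as
# post-money is an assumption; the program materials may differ.
investment = 250_000
valuation = 10_000_000
projects = 12

total_fund = investment * projects      # the $3 million initiative
implied_stake = investment / valuation  # equity implied per project

print(total_fund)              # 3000000
print(f"{implied_stake:.1%}")  # 2.5%
```

In other words, each $250,000 check implies only a 2.5% stake per project, leaving founders with the large retained-supply positions the eligibility rules require.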

This hackathon represents a shift in Pump.fun’s evolution from pure memecoin launchpad to supporting sustainable on-chain builders and startups. It is positive for ecosystem growth: it democratizes funding by letting the market “judge” ideas, reducing gatekeeping and enabling faster validation.

Successful projects gain real runway, potentially spawning useful tools, AI agents, prediction markets, or infrastructure on Solana. The format encourages “build in public” transparency (frequent updates, community interaction, livestreams), community alignment, and long-term thinking over pure speculation. It could also lower barriers for non-crypto-native founders and create a pipeline of quality projects, countering memecoin fatigue.

Some view it as risky for traders: the odds of picking a winner among many launches are low, and while dev-held supply rules aim to prevent dumps, they are not foolproof. The program may also dilute focus if it pivots too far from Pump.fun’s core “attention trading” appeal, and there is no guaranteed $PUMP upside for holders beyond indirect platform growth.

For the platform itself, more token launches, trading volume, and on-chain engagement as participants build and promote projects should boost fees and revenue for Pump.fun and Solana overall. The $250,000 plus advisory support helps winners scale prototypes into mature products, such as tools, AI agents, and prediction markets like the recently highlighted PumpMarket integration. Advisors like Polymarket, Delphi Digital, and Pantera add credibility, and successfully funded projects could become foundational layers (better launch tooling, agents, or DeFi primitives) that attract more developers and users to Solana.

$PUMP’s value could rise on heightened activity and volume around winner announcements and project launches, but that remains speculative. More broadly, the hackathon is a milestone in Pump.fun’s maturation from hype machine to builder accelerator, with market traction rather than VCs deciding funding, rewarding genuine progress and transparency over polished pitches.

OpenAI Raises $110bn at $730bn Pre-Money Valuation, Signaling New Phase of Global AI Scale-Up

OpenAI has secured $110 billion in fresh capital at a pre-money valuation of $730 billion, in what ranks as one of the largest private funding rounds in technology history and a defining moment in the commercialization of artificial intelligence.

The round was led by Amazon, with participation from Nvidia and SoftBank. The deal brings OpenAI’s post-money valuation to approximately $840 billion, positioning it as the most highly valued AI startup globally.

“We are entering a new phase where frontier AI moves from research into daily use at a global scale. This funding and these partnerships let us do both and move faster on our mission to ensure AGI benefits all of humanity,” OpenAI said in a blog post.

Structure of the Deal and Implications

Amazon committed the largest portion of the round at $50 billion, including $15 billion upfront and a second $35 billion tranche tied to undisclosed conditions. Those conditions reportedly could include progress toward artificial general intelligence (AGI) or a potential initial public offering.

SoftBank and Nvidia each invested $30 billion. It remains unclear whether Nvidia’s participation is separate from, or connected to, its previously announced $100 billion commitment to the Sam Altman-led company in September.

Despite Amazon’s sizable stake, OpenAI said Microsoft will remain a longstanding strategic partner and major shareholder. Microsoft did not participate in the latest round, a notable development given its earlier multibillion-dollar backing and deep integration of OpenAI models into Azure and its enterprise software ecosystem.

The funding signals an intensifying arms race among technology giants. Amazon’s leadership in the round strengthens its position in generative AI infrastructure and cloud computing, areas where it competes directly with Microsoft and Google. Nvidia’s involvement reinforces its central role as the primary supplier of advanced AI chips that power model training and inference at scale.

OpenAI reportedly expects to spend $665 billion through 2030 on training and operating its models — more than double prior projections. The figure highlights the extraordinary capital intensity of frontier AI development, where compute infrastructure, data center expansion, and model optimization require sustained multiyear investment.

The company’s flagship product, ChatGPT, has grown rapidly. It now serves more than 900 million weekly users, including nearly 50 million paying subscribers. Over nine million enterprises are reportedly using the platform, embedding AI tools into workflows ranging from coding and research to customer service and marketing.

To support revenue expansion, OpenAI has begun introducing advertising for non-premium users, signaling a diversification of its monetization strategy beyond subscriptions and enterprise licensing.

The $840 billion post-money valuation eclipses other major AI players. Anthropic was valued at $380 billion following its Series G round led by GIC and Coatue. Meanwhile, xAI was acquired by SpaceX at a $250 billion valuation.

The scale of OpenAI’s raise underscores investor conviction that AI will underpin the next generation of digital infrastructure, enterprise productivity, and consumer applications. At the same time, it reflects the extraordinary costs required to sustain leadership in frontier model development, where training runs increasingly demand clusters of advanced GPUs and vast energy resources.

Under the new valuation, the stake held by the OpenAI Foundation — the company’s nonprofit arm — has appreciated significantly. As of October 2025, the Foundation held a 26% equity stake in the group. At current valuation levels, that position would be worth more than $180 billion, dramatically expanding its capacity to fund initiatives in global health and AI resilience.
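The round figures reported above are internally consistent, as a quick back-of-envelope check shows (assuming the Foundation’s 26% pre-round stake is diluted pro rata by the new issuance; the exact cap-table mechanics are not public):

```python
# Figures in billions of dollars, as reported above.
pre_money = 730
tranches = [15, 35, 30, 30]  # Amazon upfront, Amazon conditional, SoftBank, Nvidia
raised = sum(tranches)
post_money = pre_money + raised
print(post_money)            # 840, the ~$840bn post-money valuation

# The Foundation held 26% before the round. New issuance dilutes the
# percentage, but the position's dollar value is set by the pre-money price.
foundation_stake_pre = 0.26
diluted_stake = foundation_stake_pre * pre_money / post_money
stake_value = foundation_stake_pre * pre_money
print(f"{diluted_stake:.1%}")   # 22.6%
print(f"${stake_value:.1f}bn")  # $189.8bn, i.e. "more than $180 billion"
```

The tranche list also confirms the headline: $15bn + $35bn + $30bn + $30bn sums to the $110 billion raised.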

The funding round marks a pivotal juncture. OpenAI is transitioning from a research-driven lab into a global infrastructure provider whose models are embedded across consumer and enterprise ecosystems. The size of the capital infusion suggests that backers are not merely financing incremental improvements but underwriting a multiyear buildout of AI systems at planetary scale — with commercial opportunity and execution risk rising in equal measure.