
Washington Eyes 72-Hour Cyber Defense Rule as AI Compresses Hacking Timelines From Weeks to Hours

U.S. cybersecurity officials are considering one of the most aggressive overhauls of federal cyber defense policy in years, as fears grow that a new generation of artificial-intelligence systems could dramatically accelerate the speed and scale of cyberattacks against government networks.

According to people familiar with the discussions cited by Reuters, officials are weighing plans to slash the time federal civilian agencies have to fix actively exploited software vulnerabilities from the current two-to-three-week average to just three days.

The proposal reflects mounting anxiety inside Washington that advanced AI models are rapidly transforming cyber operations from a largely human-driven process into one increasingly automated, scalable, and capable of operating at machine speed.

Those concerns are centered on sophisticated AI systems such as Anthropic’s Mythos and OpenAI’s GPT-5.4-Cyber, which security researchers and policymakers fear could significantly reduce the technical expertise and time traditionally required to conduct advanced hacking campaigns.

For years, cybercriminals have used automation and machine learning to improve phishing schemes, malware generation, and reconnaissance. But cybersecurity officials say the latest frontier models appear capable of going much further: rapidly identifying previously unknown vulnerabilities, analyzing newly disclosed flaws within minutes, generating exploit code, and coordinating multi-stage intrusion campaigns with limited human involvement.

That shift is fundamentally altering how governments think about defense. Until recently, organizations often had weeks or even months between the public disclosure of a software flaw and the appearance of large-scale exploitation campaigns. Officials now worry that AI-assisted attackers may compress that timeline to mere hours.

“If you’re going to protect civil agencies, you’re going to have to move faster,” said Stephen Boyer, founder of cybersecurity firm Bitsight, which has previously assisted the Cybersecurity and Infrastructure Security Agency in cataloguing vulnerabilities. “We don’t have as much of a window as we used to have.”

The discussions are reportedly being led by acting CISA director Nick Andersen and U.S. national cyber director Sean Cairncross, according to sources familiar with the matter.

The proposal centers on CISA’s Known Exploited Vulnerabilities database, commonly known as the KEV catalog. The list tracks software flaws already being actively exploited by criminal organizations or state-backed hacking groups and serves as a mandatory remediation guide for federal agencies.

Historically, agencies were generally given around three weeks to patch vulnerabilities once they were added to the KEV list, according to cybersecurity researcher Glenn Thorpe. That timeline has gradually shortened in recent years, but a universal three-day standard would represent a dramatic escalation in urgency.
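The shift described above can be made concrete with a minimal sketch. The field names below mirror those in CISA's public KEV JSON feed (`cveID`, `dateAdded`), but the sample entries and the `remediation_deadline` helper are invented for illustration and are not part of any official tooling:

```python
from datetime import date, timedelta

# Hypothetical sample entries mirroring two fields from CISA's KEV JSON feed.
# (The real feed also carries an explicit dueDate for each entry.)
kev_entries = [
    {"cveID": "CVE-2026-0001", "dateAdded": "2026-03-02"},
    {"cveID": "CVE-2026-0002", "dateAdded": "2026-03-10"},
]

def remediation_deadline(entry, window_days):
    """Deadline = date the flaw was added to the KEV list + patch window."""
    added = date.fromisoformat(entry["dateAdded"])
    return added + timedelta(days=window_days)

for entry in kev_entries:
    current = remediation_deadline(entry, 21)   # roughly the three-week status quo
    proposed = remediation_deadline(entry, 3)   # the proposed 72-hour rule
    print(entry["cveID"], "current:", current, "proposed:", proposed)
```

For a flaw added on March 2, the deadline moves from March 23 to March 5, which is the entire compression the proposal is debating.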

The move underscores how seriously U.S. officials are beginning to view the intersection between AI and offensive cyber capabilities. Some security analysts compare the current moment to the arrival of industrial automation in manufacturing: cyberattacks that once required teams of highly skilled operators may increasingly become partially automated workflows assisted by AI reasoning systems.

That prospect is especially alarming for governments because it could allow smaller criminal groups or less sophisticated state actors to conduct operations previously reserved for elite hacking units.

The concern extends well beyond federal agencies. Industry executives expect any tighter CISA standards to quickly influence state governments, contractors, hospitals, utilities, banks, and other critical infrastructure operators.

“This is a signal to others that says, ‘Hey you need to do this more quickly,’” said Nitin Natarajan, former deputy director of CISA under President Joe Biden and now head of NN Global.

Natarajan said accelerating patch timelines makes strategic sense given the speed of emerging threats, but warned the federal government may lack the resources necessary to sustain such an aggressive posture.

“We’ve seen a reduction in their resources, both in funding and expertise,” he said.

That concern reflects broader strain across the U.S. cyber apparatus.

CISA has faced repeated budget pressures, staffing reductions, and operational disruptions tied to government shutdown fights under President Donald Trump. Former officials and private-sector analysts warn that compressing deadlines without significantly increasing staffing, automation, and coordination could overwhelm already stretched cybersecurity teams.

The challenge is particularly acute in large enterprise environments, where applying patches is rarely straightforward. Major organizations often operate thousands of interconnected systems that involve legacy software, third-party vendors, industrial controls, and sensitive operational technology. Security updates typically require testing, compatibility reviews, and staged deployment processes to avoid outages or operational failures.

“Realistically, three days is simply impossible for some environments,” said Kecia Hoyt, vice president at threat intelligence firm Flashpoint.

John Hammond, senior principal security researcher at Huntress, said the proposed timeline would represent “quite a change” for the industry.

While Hammond said he was cautiously optimistic about the push for faster remediation, he added that “only time will tell how well the industry keeps up.”

The discussions are unfolding amid broader concerns that the global AI race is beginning to outpace the development of security guardrails and governance frameworks.

In recent months, frontier AI developers have faced increasing scrutiny over whether advanced models could assist cyber intrusions, biological research, or other high-risk activities. Several governments have quietly expanded national-security reviews of AI systems capable of advanced reasoning, coding, and autonomous task execution.

The banking industry has become particularly sensitive to the issue. Financial regulators in the United States, Europe, and Asia have reportedly intensified reviews of AI-related cyber risks amid fears that automated attacks could target payment systems, trading infrastructure, and customer data on an unprecedented scale.

At the core of Washington’s concern is a growing realization that cybersecurity doctrines built for the pre-AI era may no longer be sufficient. For decades, defenders largely relied on the assumption that discovering, weaponizing, and operationalizing vulnerabilities required time, expertise, and coordination. AI may now be eroding all three barriers simultaneously.

If that proves true, cybersecurity could shift from a contest measured in weeks and days to one increasingly measured in hours and minutes — forcing governments and corporations alike into a far more reactive and relentless security posture.

Oscars Restricts AI-Generated Content from Major Film Awards

The decision to bar AI-generated content from major film awards like the Oscars is less a simple rejection of technology than a sign of deeper anxiety within the creative industries. It raises a fundamental question: if artificial intelligence cannot compete on equal footing, is that because it lacks something essential, often described as soul, or because it threatens to redefine what that very concept means?

Cinema has always been understood as a profoundly human art form. Films are not merely sequences of images but expressions of lived experience—of memory, emotion, struggle, and imagination shaped by consciousness. When audiences speak of a film having soul, they are often pointing to an intangible authenticity: the sense that a story emerges from human vulnerability and intention.

AI, by contrast, operates through pattern recognition, probabilistic modeling, and training data derived from existing works. It does not experience grief, joy, or desire; it simulates their expression based on what it has learned. From this perspective, the argument that AI lacks soul is compelling. It produces outputs without inner life, without stakes, and without the existential grounding that defines human creativity.

However, this explanation alone is insufficient. After all, many tools used in filmmaking—from CGI to editing software—do not possess soul, yet they are widely accepted. The difference lies not in the absence of humanity within the tool, but in the degree of authorship it assumes. AI systems are increasingly capable of generating scripts, performances, and even directorial decisions with minimal human intervention. This shifts them from being instruments of creativity to potential creators themselves.

The discomfort arises not because AI cannot create meaningful work, but because it might. This is where the notion of threat becomes more salient. AI challenges long-standing assumptions about originality, ownership, and labor in the arts. If a machine can generate a screenplay indistinguishable from one written by a human, what happens to the value we assign to human effort? If performances can be synthesized, what becomes of actors?

The resistance from institutions like the Oscars may therefore be less about preserving artistic purity and more about safeguarding the economic and cultural structures built around human creators. There is also a philosophical dimension to this tension. Art has historically been one of the last domains where human uniqueness seemed unquestionable.

The rise of AI erodes that boundary, forcing a reconsideration of what creativity actually entails. If creativity is defined as recombination and reinterpretation of existing ideas, then AI is already participating in it. But if it is defined by intention, consciousness, and subjective experience, then AI remains fundamentally outside it. The debate over AI in the Oscars is, in many ways, a proxy for this unresolved question.

The exclusion of AI-generated content is not a definitive judgment on its capabilities but a reflection of a transitional moment. It signals an industry grappling with rapid technological change and attempting to draw lines before those lines become impossible to enforce. Whether AI lacks soul or threatens it depends largely on how one defines both terms. What is clear, however, is that the conversation is far from settled, and the boundaries between human and machine creativity will continue to blur in the years ahead.

A Look at the MoonAgents Card by MoonPay

The convergence of artificial intelligence and decentralized finance has taken a tangible leap forward with the introduction of the MoonAgents Card by MoonPay. This development enables autonomous agents to spend USDC on the Solana network anywhere Mastercard is accepted.

What once sounded like a speculative vision—machines participating directly in economic activity—has now entered a practical phase, reshaping how value moves in a digitally native economy. Stablecoins like USDC have long promised frictionless, borderless payments, but their usability has largely remained confined within crypto-native environments.

By bridging Solana-based USDC with Mastercard’s global merchant network, MoonPay effectively dissolves one of the biggest barriers in crypto adoption: the gap between on-chain assets and off-chain commerce. The significance is not merely technical—it is structural. It allows digital capital to flow seamlessly into everyday transactions, from retail purchases to service payments, without requiring manual conversion or intermediaries.

What makes the MoonAgents Card particularly compelling is its focus on autonomous agents. These are not just passive wallets or payment tools; they are programmable entities capable of executing predefined tasks, making decisions, and now, conducting financial transactions. This introduces a new paradigm where AI-driven agents can operate as economic participants.

For instance, an agent could manage subscription services, pay for APIs, execute trading strategies, or even handle logistics payments—all in real time and without human intervention. By granting agents the ability to spend, we are effectively embedding financial agency into software. This transforms how businesses and individuals might interact with digital systems. Instead of manually approving every transaction, users can delegate spending authority to intelligent agents governed by rules, budgets, and objectives.
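The delegation model described above, spending authority bounded by rules and budgets, can be sketched as a toy policy check. Everything here (`SpendPolicy`, `authorize`, the limits and categories) is invented for illustration and does not correspond to any MoonPay or MoonAgents API:

```python
from dataclasses import dataclass

@dataclass
class SpendPolicy:
    """Toy spending rules a user might delegate to an autonomous agent."""
    daily_limit_usdc: float
    allowed_categories: set
    spent_today: float = 0.0

    def authorize(self, amount: float, category: str) -> bool:
        """Approve a payment only if it fits both the category and budget rules."""
        if category not in self.allowed_categories:
            return False
        if self.spent_today + amount > self.daily_limit_usdc:
            return False
        self.spent_today += amount
        return True

policy = SpendPolicy(daily_limit_usdc=100.0,
                     allowed_categories={"api", "subscription"})
print(policy.authorize(40.0, "api"))        # True: within budget and category
print(policy.authorize(70.0, "api"))        # False: would exceed the daily limit
print(policy.authorize(10.0, "gambling"))   # False: disallowed category
```

In practice such guardrails would need to be enforced cryptographically or at the card-issuer level rather than in agent-side code, which is exactly the governance question raised later in the piece.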

The result is a more dynamic and responsive financial layer, where transactions occur at machine speed and scale. Solana’s role in this ecosystem is also critical. Known for its high throughput and low transaction costs, it provides the infrastructure necessary for frequent, micro-scale transactions that autonomous agents are likely to generate.

Traditional payment rails would struggle to support such volume efficiently, but Solana’s architecture makes it viable. When paired with USDC’s price stability, the combination becomes particularly suited for real-world commerce, where predictability and speed are essential.

Mastercard’s involvement adds another layer of legitimacy and reach. With millions of merchants globally, its network ensures that this innovation is not limited to niche use cases. Instead, it plugs directly into the existing financial system, allowing crypto-native value to be spent in familiar environments. This hybridization of decentralized and centralized systems may well define the next phase of financial evolution.

However, this shift also raises important questions. Granting spending power to autonomous agents introduces new dimensions of risk, particularly around security, governance, and accountability. Who is responsible if an agent misbehaves or is exploited? How are spending limits enforced, and what safeguards exist against malicious code? These concerns highlight the need for robust frameworks that combine cryptographic security with intelligent oversight.

Ultimately, the MoonAgents Card represents more than just a payment tool—it is a signal of where the digital economy is heading. As AI agents become more capable and crypto infrastructure more integrated, the line between human and machine participation in markets will continue to blur. Financial autonomy will no longer be exclusive to individuals and institutions; it will extend to software entities operating with precision, speed, and independence.

In this emerging landscape, the ability for agents to spend USDC anywhere Mastercard is accepted is not just a feature—it is a foundational shift. It marks the beginning of an economy where machines are not just tools, but active participants, transacting value in a system designed for both humans and algorithms alike.

The Disconnect between NFT Floor Prices and Holder Growth

The recent divergence between rising NFT floor prices and relatively stagnant holder counts reveals a subtle but important shift in the structure of the digital asset market. At first glance, increasing floor prices—the lowest listed price for an NFT in a collection—signal renewed demand and market confidence.

However, when this upward movement is not matched by growth in unique holders, it suggests that the rally may be driven less by broad adoption and more by capital concentration among existing participants. This dynamic often points to a market dominated by whales or high-net-worth collectors who are accumulating larger positions within established collections.

Instead of new entrants expanding the base of ownership, existing holders are consolidating supply. By sweeping floors or strategically acquiring underpriced assets, these actors can artificially tighten available liquidity, pushing prices upward. While this can create the appearance of a healthy bull phase, it lacks the organic growth that typically sustains long-term market expansion.

Another factor contributing to this pattern is the maturation of the NFT market itself. Early cycles were characterized by explosive user growth, driven by novelty, speculation, and cultural hype. In contrast, the current phase appears more selective. Capital is flowing into perceived blue-chip collections—projects with established brand equity, historical significance, or strong communities—rather than dispersing across a wide array of new entrants.

This concentration reinforces price increases at the top while leaving broader participation relatively flat. Liquidity dynamics also play a critical role. NFTs are inherently illiquid compared to fungible tokens; each asset is unique, and transaction volumes can be thin. When fewer sellers are willing to part with their assets at lower prices, even modest buying pressure can lift floors significantly. If this buying pressure comes from a small group of committed investors rather than a large influx of new users, holder counts will naturally lag behind price action.
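A toy example, with invented listing prices, shows how thin liquidity lets a single existing buyer move the floor without adding any new holders:

```python
# Invented example: a collection's active listings (prices in ETH).
listings = [1.0, 1.1, 1.2, 1.5, 2.0]
holders_before = 5000  # unique holders before the sweep

floor_before = min(listings)

# One existing whale sweeps the three cheapest listings.
swept = sorted(listings)[:3]
remaining = sorted(listings)[3:]
floor_after = min(remaining)

print(f"floor: {floor_before} -> {floor_after} ETH")  # 50% floor jump
print(f"cost of the sweep: {sum(swept):.1f} ETH")
print(f"holders: {holders_before} (unchanged; the buyer already held)")
```

For about 3.3 ETH of buying, the floor jumps from 1.0 to 1.5 ETH while the holder count stays flat, which is precisely the price-up, holders-flat signature the article describes.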

Financialization mechanisms within the NFT ecosystem—such as lending, fractionalization, and derivatives—allow existing holders to extract more value from their assets without selling them. This reduces the need for distribution to new participants. For instance, an investor can leverage an NFT as collateral, gain liquidity, and reinvest within the ecosystem, all while maintaining ownership. Such mechanisms deepen capital efficiency but do little to expand the user base.

From a behavioral perspective, this divergence may also reflect lingering caution among retail participants. After the volatility and drawdowns of previous cycles, new users may be hesitant to enter the market despite rising prices. Meanwhile, experienced participants, armed with better information and stronger conviction, are more willing to accumulate during periods of relative undervaluation.

On one hand, rising floor prices indicate that certain NFT assets are retaining or even increasing their perceived value, which can strengthen market credibility. On the other hand, a lack of growth in holder count raises concerns about sustainability. Markets driven by concentrated ownership are more vulnerable to sharp corrections if a few large holders decide to exit positions.

The disconnect between floor prices and holder growth suggests that the NFT market is transitioning from a phase of rapid expansion to one of consolidation. For the ecosystem to achieve long-term resilience, price appreciation will need to be accompanied by renewed user growth, broader accessibility, and compelling use cases that extend beyond speculation. Until then, rising floors without expanding ownership remain a signal worth scrutinizing rather than celebrating unconditionally.

‘The cost of compute is far beyond the costs of the employees’: Nvidia executive admits AI is more expensive than human workers

The tech industry’s aggressive push into artificial intelligence is creating a paradox that few saw coming: massive capital spending on AI infrastructure is coinciding with widespread layoffs, even as many companies admit that human labor remains cheaper than AI in most real-world applications today.

Meta’s announcement last week that it would cut roughly 10% of its workforce, about 8,000 jobs, and scrap plans to fill 6,000 open positions was framed internally as a necessary efficiency move. In the memo, the company said the reductions would help “run the company more efficiently and to allow us to offset the other investments we’re making,” a thinly veiled reference to its enormous AI outlays.

Microsoft has offered thousands of employees a voluntary buyout, the largest in the company's history. Across the sector, Layoffs.fyi data shows more than 92,000 tech jobs have already been eliminated in 2026, a pace on track to outstrip last year's total of around 120,000 cuts.

At first glance, the numbers suggest the long-predicted shift from human workers to AI is already underway. But conversations with executives and analysts reveal a more complicated picture: AI is not yet delivering clear cost savings. In many cases, it is costing companies more than the humans it might eventually replace.

Nvidia vice president of applied deep learning Bryan Catanzaro put it plainly in a recent Axios interview. He said: “For my team, the cost of compute is far beyond the costs of the employees.”

An MIT study from 2024 reached a similar conclusion. After analyzing the technical requirements for AI to match human performance, researchers found that automation would be economically viable in only 23% of roles where vision is a primary component. In the other 77%, it was still cheaper to keep humans in the job.

According to Fortune, Keith Lee, an AI and finance professor at the Swiss Institute of Artificial Intelligence’s Gordon School of Business, described the situation as a classic short-term mismatch.

“What we’re seeing is a short-term mismatch,” Lee told Fortune.

AI companies are often losing money on flat subscription models that fail to cover the high operating costs for heavy users. As a result, some firms are starting to view AI more as a complementary tool rather than an immediate labor substitute.
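A back-of-envelope sketch, with invented fees and per-token costs, illustrates why flat-rate pricing breaks down: under a fixed fee, margin per seat falls linearly with usage and goes negative for heavy users.

```python
# Invented numbers for illustration only; no vendor's actual pricing.
flat_fee = 20.00        # USD per seat per month (assumed)
cost_per_mtok = 2.00    # provider's inference cost per million tokens (assumed)

def monthly_margin(tokens_millions: float) -> float:
    """Revenue minus inference cost for one seat at a given usage level."""
    return flat_fee - tokens_millions * cost_per_mtok

print(monthly_margin(5))    # light user: 20 - 10 = profitable
print(monthly_margin(25))   # heavy user: 20 - 50 = a loss on every renewal
```

Usage-based pricing, the shift the article anticipates later, simply replaces the fixed `flat_fee` with revenue that scales alongside `tokens_millions`, so margin no longer collapses at the heavy tail.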

The scale of the spending is staggering. The four major U.S. tech giants that reported earnings this week, Alphabet, Meta, Amazon, and Microsoft, have collectively signaled AI-related capital expenditures that are now projected to top $700 billion this year, up from around $600 billion previously. Alphabet raised its annual capex forecast by $5 billion to between $180 billion and $190 billion, with plans for another big increase in 2027. Microsoft expects $190 billion in 2026 spending, with roughly $25 billion tied to rising component costs. Meta lifted its ceiling to as much as $145 billion.

Uber chief technology officer Praveen Neppalli Naga recently told The Information that the company’s pivot to AI coding tools had blown up its budget.

“I’m back to the drawing board because the budget I thought I would need is blown away already,” he said.

According to McKinsey projections, AI expenditures could reach $5.2 trillion globally by 2030 in a base case, or as high as $7.9 trillion at an accelerated pace. AI software fees have already risen 20% to 37% over the past year, according to Tropic.

Despite the spending spree, widespread productivity gains or large-scale job displacement have not yet materialized. The Yale Budget Lab has pointed to a lack of robust data supporting the idea of AI broadly replacing workers. Federal Reserve figures show that only about 18% of companies had adopted AI tools by the end of 2025, a 68% increase since September, but adoption remains early-stage and uneven.

Lee sees a clear path toward AI becoming economically superior, but it will take time and several breakthroughs. Inference costs for large language models with 1 trillion parameters are expected to drop more than 90% over the next four years, according to Gartner. Improvements in infrastructure, model efficiency, and hardware supply will help, and pricing models are likely to shift from flat subscriptions to usage-based structures that better align costs with actual value delivered.

But viability will ultimately depend on reliability.

“It’s not just about AI becoming cheaper than humans,” Lee said. “It’s about becoming both cheaper and more predictable at scale.”

For now, companies are making a high-stakes bet on that future. Google Cloud’s 63% revenue surge in the March quarter, far above estimates, was driven primarily by AI tools for enterprises for the first time, vindicating Alphabet’s heavy investment in turning research into commercial products. CEO Sundar Pichai noted that capacity constraints limited even stronger growth, a problem echoed across the industry.

Analyst Lee Sustar of Forrester observed that Google is capturing new workloads, sometimes from companies new to the cloud or seeking to diversify.

“It is capturing new workloads for the most part — sometimes from companies new to cloud, often additional workloads from customers of other clouds who want to be less dependent on a single cloud provider or who like Google data, analytics and AI offerings,” he said.

The current wave of heavy spending and selective layoffs reflects a painful transition period. Companies are investing aggressively in AI while trimming costs elsewhere to protect margins and reassure investors. Human labor remains cheaper and more reliable for many tasks today, but the scale of the capital bets suggests executives believe the economics will eventually flip as the technology matures.

The discrepancy between soaring AI costs and continued reliance on human workers underscores that the great labor shift to AI is not happening overnight. It is a multi-year, high-risk wager on future efficiency gains that have yet to fully materialize. Currently, the most visible impact is not mass replacement of workers, but a costly arms race to build the infrastructure that might one day make AI the cheaper, more scalable option.