
Meta Quietly Ends AI Training Partnership with Sama After Claims Contractors Reviewed Intimate Smart-Glasses Footage


Meta has quietly terminated its relationship with outsourcing firm Sama after reports emerged that contractors reviewing footage from Ray-Ban smart glasses were exposed to highly sensitive recordings, including private conversations, financial information, and explicit scenes involving individuals who allegedly did not know they were being filmed.

The decision is drawing renewed scrutiny to the hidden labor systems powering the generative AI race, while reigniting broader concerns about privacy, surveillance, and worker treatment in the global AI supply chain.

The controversy underlines a growing tension at the center of the artificial intelligence industry: while technology firms are aggressively marketing AI-powered wearable devices as the next computing platform, the systems behind them still rely heavily on armies of low-paid human reviewers tasked with sorting through deeply personal content to improve machine-learning models.

Sama, a California-headquartered outsourcing company with major operations in Nairobi, disclosed the termination of 1,108 employees after Meta ended its contract. Some workers alleged they faced retaliation after raising concerns internally about the nature of the material they were required to examine.

The dispute traces back to investigations published earlier this year by Swedish media outlets, which cited workers who said they were asked to label footage recorded through Meta’s Ray-Ban smart glasses. According to those accounts, the recordings included private conversations, banking details, nudity, and intimate encounters involving people who often appeared unaware they were being captured.

The revelations strike at one of the most sensitive questions surrounding AI wearables: whether consumers and bystanders fully understand how much data these devices collect and how that information is ultimately used.

Meta maintains that users must explicitly enable the glasses’ AI features and that its terms of service disclose that some recordings may be used to improve AI systems. Yet privacy advocates argue that disclosure buried in user agreements does little to address concerns about uninformed third parties who may appear in recordings without consent.

The incident also exposes the uncomfortable reality that the rapid progress of generative AI still depends heavily on human labor. While AI companies market their systems as increasingly autonomous, the technology often relies on thousands of contractors who manually categorize images, review audio, and flag problematic content.

Industry analysts say the demand for this kind of data annotation work has surged as companies race to develop multimodal AI systems capable of understanding video, audio, and real-world environments. Smart glasses intensify that challenge because they capture unfiltered moments from everyday life rather than curated online datasets.

The fallout has revived scrutiny of Sama itself, which has repeatedly surfaced at the center of debates about AI labor ethics.

The company previously worked with OpenAI to help filter toxic material for ChatGPT before its public launch in 2022. Investigations at the time revealed that Kenya-based workers were paid less than $2 an hour to review graphic and disturbing content, with several workers reporting psychological trauma from the assignments.

Sama and Meta also faced allegations in previous years tied to anti-union practices and misleading job descriptions. Labor groups argued that workers recruited for content moderation and AI labeling tasks were often not fully informed about the emotional intensity or sensitivity of the material they would encounter.

The latest controversy could intensify regulatory pressure on Meta at a time when governments across Europe and parts of the United States are already examining AI transparency, biometric surveillance, and workplace protections tied to generative AI systems.

It also lands as Meta pushes aggressively into AI-powered hardware under CEO Mark Zuckerberg’s broader strategy of building what the company sees as the next major computing ecosystem beyond smartphones. The Ray-Ban smart glasses have become one of Meta’s most commercially successful hardware experiments in years, helped by improvements in AI assistants and real-time multimodal capabilities.

But the privacy concerns surrounding wearable cameras are not new. More than a decade ago, Google Glass faced public backlash over fears that users could secretly record others in public spaces. That resistance helped doom the product commercially.

Meta’s newer generation of smart glasses has avoided some of that stigma by using more discreet designs and integrating them into mainstream fashion branding. Even so, reports have increasingly emerged of users wearing the devices in classrooms, courtrooms, police encounters, and other sensitive environments.

Privacy researchers warn that as AI wearables become more capable, the volume of sensitive real-world data collected by technology firms could increase exponentially. Unlike smartphones, which users intentionally point toward subjects, smart glasses continuously capture the user’s surroundings from a first-person perspective, raising the likelihood of incidental surveillance.

The controversy may also deepen questions about how AI companies balance automation with accountability. While Meta blamed Sama for failing to meet standards, critics argue the dispute highlights systemic issues across the AI industry, where companies outsource difficult moderation and training tasks to contractors operating far from Silicon Valley headquarters.

As AI systems move beyond text generation into always-on wearable devices embedded in daily life, the debate over who controls the data, who reviews it, and who bears the consequences when safeguards fail is becoming increasingly difficult for the industry to avoid.

Washington Eyes 72-Hour Cyber Defense Rule as AI Compresses Hacking Timelines From Weeks to Hours


U.S. cybersecurity officials are considering one of the most aggressive overhauls of federal cyber defense policy in years, as fears grow that a new generation of artificial-intelligence systems could dramatically accelerate the speed and scale of cyberattacks against government networks.

According to people familiar with the discussions cited by Reuters, officials are weighing plans to slash the time federal civilian agencies have to fix actively exploited software vulnerabilities from the current two-to-three-week average to just three days.

The proposal reflects mounting anxiety inside Washington that advanced AI models are rapidly transforming cyber operations from a largely human-driven process into one increasingly automated, scalable, and capable of operating at machine speed.

Those concerns are centered on sophisticated AI systems such as Anthropic’s Mythos and OpenAI’s GPT-5.4-Cyber, which security researchers and policymakers fear could significantly reduce the technical expertise and time traditionally required to conduct advanced hacking campaigns.

For years, cybercriminals have used automation and machine learning to improve phishing schemes, malware generation, and reconnaissance. But cybersecurity officials say the latest frontier models appear capable of going much further: rapidly identifying previously unknown vulnerabilities, analyzing newly disclosed flaws within minutes, generating exploit code, and coordinating multi-stage intrusion campaigns with limited human involvement.

That shift is fundamentally altering how governments think about defense. Until recently, organizations often had weeks or even months between the public disclosure of a software flaw and the appearance of large-scale exploitation campaigns. Officials now worry that AI-assisted attackers may compress that timeline to mere hours.

“If you’re going to protect civil agencies, you’re going to have to move faster,” said Stephen Boyer, founder of cybersecurity firm Bitsight, which has previously assisted the Cybersecurity and Infrastructure Security Agency in cataloguing vulnerabilities. “We don’t have as much of a window as we used to have.”

The discussions are reportedly being led by acting CISA director Nick Andersen and U.S. national cyber director Sean Cairncross, according to sources familiar with the matter.

The proposal centers on CISA’s Known Exploited Vulnerabilities database, commonly known as the KEV catalog. The list tracks software flaws already being actively exploited by criminal organizations or state-backed hacking groups and serves as a mandatory remediation guide for federal agencies.

Historically, agencies were generally given around three weeks to patch vulnerabilities once they were added to the KEV list, according to cybersecurity researcher Glenn Thorpe. That timeline has gradually shortened in recent years, but a universal three-day standard would represent a dramatic escalation in urgency.
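The scale of the proposed change can be sketched with a few lines of code. This is a minimal illustration with hypothetical entries and field layout, not the real KEV catalog format (CISA publishes the catalog as JSON with its own schema); it simply shows how a remediation deadline shifts under a three-day window versus the roughly three-week historical norm described above:

```python
from datetime import date, timedelta

# Hypothetical KEV-style entries: (CVE identifier, date added to the catalog).
KEV_ENTRIES = [
    ("CVE-2024-0001", date(2024, 3, 1)),
    ("CVE-2024-0002", date(2024, 3, 10)),
]

def remediation_deadline(added: date, window_days: int) -> date:
    """Deadline = date the flaw was added to the catalog plus the patching window."""
    return added + timedelta(days=window_days)

for cve, added in KEV_ENTRIES:
    old = remediation_deadline(added, 21)  # ~three-week historical norm
    new = remediation_deadline(added, 3)   # proposed three-day rule
    print(f"{cve}: patch by {old} instead of {old}; new deadline {new}, "
          f"{(old - new).days} days sooner")
```

The arithmetic is trivial, but it makes the operational point concrete: every flaw added to the catalog would demand a response roughly eighteen days faster than agencies are accustomed to.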

The move underscores how seriously U.S. officials are beginning to view the intersection between AI and offensive cyber capabilities. Some security analysts compare the current moment to the arrival of industrial automation in manufacturing: cyberattacks that once required teams of highly skilled operators may increasingly become partially automated workflows assisted by AI reasoning systems.

That prospect is especially alarming for governments because it could allow smaller criminal groups or less sophisticated state actors to conduct operations previously reserved for elite hacking units.

The concern extends well beyond federal agencies. Industry executives expect any tighter CISA standards to quickly influence state governments, contractors, hospitals, utilities, banks, and other critical infrastructure operators.

“This is a signal to others that says, ‘Hey you need to do this more quickly,’” said Nitin Natarajan, former deputy director of CISA under President Joe Biden and now head of NN Global.

Natarajan said accelerating patch timelines makes strategic sense given the speed of emerging threats, but warned the federal government may lack the resources necessary to sustain such an aggressive posture.

“We’ve seen a reduction in their resources, both in funding and expertise,” he said.

That concern reflects broader strain across the U.S. cyber apparatus.

CISA has faced repeated budget pressures, staffing reductions, and operational disruptions tied to government shutdown fights under President Donald Trump. Former officials and private-sector analysts warn that compressing deadlines without significantly increasing staffing, automation, and coordination could overwhelm already stretched cybersecurity teams.

The challenge is particularly acute in large enterprise environments, where applying patches is rarely straightforward. Major organizations often operate thousands of interconnected systems that involve legacy software, third-party vendors, industrial controls, and sensitive operational technology. Security updates typically require testing, compatibility reviews, and staged deployment processes to avoid outages or operational failures.

“Realistically, three days is simply impossible for some environments,” said Kecia Hoyt, vice president at threat intelligence firm Flashpoint.

John Hammond, senior principal security researcher at Huntress, said the proposed timeline would represent “quite a change” for the industry.

While Hammond said he was cautiously optimistic about the push for faster remediation, he added that “only time will tell how well the industry keeps up.”

The discussions are unfolding amid broader concerns that the global AI race is beginning to outpace the development of security guardrails and governance frameworks.

In recent months, frontier AI developers have faced increasing scrutiny over whether advanced models could assist cyber intrusions, biological research, or other high-risk activities. Several governments have quietly expanded national-security reviews of AI systems capable of advanced reasoning, coding, and autonomous task execution.

The banking industry has become particularly sensitive to the issue. Financial regulators in the United States, Europe, and Asia have reportedly intensified reviews of AI-related cyber risks amid fears that automated attacks could target payment systems, trading infrastructure, and customer data on an unprecedented scale.

At the core of Washington’s concern is a growing realization that cybersecurity doctrines built for the pre-AI era may no longer be sufficient. For decades, defenders largely relied on the assumption that discovering, weaponizing, and operationalizing vulnerabilities required time, expertise, and coordination. AI may now be eroding all three barriers simultaneously.

If that proves true, cybersecurity could shift from a contest measured in weeks and days to one increasingly measured in hours and minutes — forcing governments and corporations alike into a far more reactive and relentless security posture.

Oscars Restrict AI-Generated Content From Major Film Awards


The decision to restrict AI-generated content from major film awards like the Oscars is less a simple rejection of technology than a symptom of deeper anxiety within the creative industries. It raises a fundamental question: if artificial intelligence cannot compete on equal footing, is it because it lacks something essential—often described as soul—or because it threatens to redefine what that very concept means?

Cinema has always been understood as a profoundly human art form. Films are not merely sequences of images but expressions of lived experience—of memory, emotion, struggle, and imagination shaped by consciousness. When audiences speak of a film having soul, they are often pointing to an intangible authenticity: the sense that a story emerges from human vulnerability and intention.

AI, by contrast, operates through pattern recognition, probabilistic modeling, and training data derived from existing works. It does not experience grief, joy, or desire; it simulates their expression based on what it has learned. From this perspective, the argument that AI lacks soul is compelling. It produces outputs without inner life, without stakes, and without the existential grounding that defines human creativity.

However, this explanation alone is insufficient. After all, many tools used in filmmaking—from CGI to editing software—do not possess soul, yet they are widely accepted. The difference lies not in the absence of humanity within the tool, but in the degree of authorship it assumes. AI systems are increasingly capable of generating scripts, performances, and even directorial decisions with minimal human intervention. This shifts them from being instruments of creativity to potential creators themselves.

The discomfort arises not because AI cannot create meaningful work, but because it might. This is where the notion of threat becomes more salient. AI challenges long-standing assumptions about originality, ownership, and labor in the arts. If a machine can generate a screenplay indistinguishable from one written by a human, what happens to the value we assign to human effort? If performances can be synthesized, what becomes of actors?

The resistance from institutions like the Oscars may therefore be less about preserving artistic purity and more about safeguarding the economic and cultural structures built around human creators. There is also a philosophical dimension to this tension. Art has historically been one of the last domains where human uniqueness seemed unquestionable.

The rise of AI erodes that boundary, forcing a reconsideration of what creativity actually entails. If creativity is defined as recombination and reinterpretation of existing ideas, then AI is already participating in it. But if it is defined by intention, consciousness, and subjective experience, then AI remains fundamentally outside it. The debate over AI in the Oscars is, in many ways, a proxy for this unresolved question.

The exclusion of AI-generated content is not a definitive judgment on its capabilities but a reflection of a transitional moment. It signals an industry grappling with rapid technological change and attempting to draw lines before those lines become impossible to enforce. Whether AI lacks soul or threatens it depends largely on how one defines both terms. What is clear, however, is that the conversation is far from settled, and the boundaries between human and machine creativity will continue to blur in the years ahead.

A Look into MoonAgents Card by MoonPay


The convergence of artificial intelligence and decentralized finance has taken a tangible leap forward with the introduction of the MoonAgents Card by MoonPay. This development enables autonomous agents to spend USDC on the Solana network anywhere Mastercard is accepted.

What once sounded like a speculative vision—machines participating directly in economic activity—has now entered a practical phase, reshaping how value moves in a digitally native economy. Stablecoins like USDC have long promised frictionless, borderless payments, but their usability has largely remained confined within crypto-native environments.

By bridging Solana-based USDC with Mastercard’s global merchant network, MoonPay effectively dissolves one of the biggest barriers in crypto adoption: the gap between on-chain assets and off-chain commerce. The significance is not merely technical—it is structural. It allows digital capital to flow seamlessly into everyday transactions, from retail purchases to service payments, without requiring manual conversion or intermediaries.

What makes the MoonAgents Card particularly compelling is its focus on autonomous agents. These are not just passive wallets or payment tools; they are programmable entities capable of executing predefined tasks, making decisions, and now, conducting financial transactions. This introduces a new paradigm where AI-driven agents can operate as economic participants.

For instance, an agent could manage subscription services, pay for APIs, execute trading strategies, or even handle logistics payments—all in real time and without human intervention. By granting agents the ability to spend, we are effectively embedding financial agency into software. This transforms how businesses and individuals might interact with digital systems. Instead of manually approving every transaction, users can delegate spending authority to intelligent agents governed by rules, budgets, and objectives.
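The idea of delegated spending authority governed by rules, budgets, and objectives can be made concrete with a small sketch. Everything here is hypothetical—the class, field names, and limits are illustrative and are not MoonPay's API—but it shows the kind of guardrail logic a user might place between an autonomous agent and its card:

```python
from dataclasses import dataclass

@dataclass
class SpendingPolicy:
    """Rules a delegating user sets before granting an agent spend authority."""
    per_tx_limit: float          # maximum USDC per transaction
    daily_budget: float          # maximum USDC per day
    allowed_merchants: set       # merchant allowlist
    spent_today: float = 0.0

    def authorize(self, merchant: str, amount: float) -> bool:
        """Approve a transaction only if it satisfies every rule."""
        if merchant not in self.allowed_merchants:
            return False                       # unknown merchant
        if amount > self.per_tx_limit:
            return False                       # single payment too large
        if self.spent_today + amount > self.daily_budget:
            return False                       # would blow the daily budget
        self.spent_today += amount             # record approved spend
        return True

policy = SpendingPolicy(per_tx_limit=50.0, daily_budget=100.0,
                        allowed_merchants={"api.example.com"})
print(policy.authorize("api.example.com", 30.0))  # True: within all limits
print(policy.authorize("unknown.shop", 10.0))     # False: not on the allowlist
```

In practice such checks would need to be enforced server-side or on-chain rather than in the agent's own code, since a compromised agent cannot be trusted to police itself—a point the governance questions later in this piece return to.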

The result is a more dynamic and responsive financial layer, where transactions occur at machine speed and scale. Solana’s role in this ecosystem is also critical. Known for its high throughput and low transaction costs, it provides the infrastructure necessary for frequent, micro-scale transactions that autonomous agents are likely to generate.

Traditional payment rails would struggle to support such volume efficiently, but Solana’s architecture makes it viable. When paired with USDC’s price stability, the combination becomes particularly suited for real-world commerce, where predictability and speed are essential.

Mastercard’s involvement adds another layer of legitimacy and reach. With millions of merchants globally, its network ensures that this innovation is not limited to niche use cases. Instead, it plugs directly into the existing financial system, allowing crypto-native value to be spent in familiar environments. This hybridization of decentralized and centralized systems may well define the next phase of financial evolution.

However, this shift also raises important questions. Granting spending power to autonomous agents introduces new dimensions of risk, particularly around security, governance, and accountability. Who is responsible if an agent misbehaves or is exploited? How are spending limits enforced, and what safeguards exist against malicious code? These concerns highlight the need for robust frameworks that combine cryptographic security with intelligent oversight.

Ultimately, the MoonAgents Card represents more than just a payment tool—it is a signal of where the digital economy is heading. As AI agents become more capable and crypto infrastructure more integrated, the line between human and machine participation in markets will continue to blur. Financial autonomy will no longer be exclusive to individuals and institutions; it will extend to software entities operating with precision, speed, and independence.

In this emerging landscape, the ability for agents to spend USDC anywhere Mastercard is accepted is not just a feature—it is a foundational shift. It marks the beginning of an economy where machines are not just tools, but active participants, transacting value in a system designed for both humans and algorithms alike.

The Disconnect Between NFT Floor Prices and Holder Growth


The recent divergence between rising NFT floor prices and relatively stagnant holder counts reveals a subtle but important shift in the structure of the digital asset market. At first glance, an increasing floor price—the lowest listed price for an NFT in a collection—signals renewed demand and market confidence.

However, when this upward movement is not matched by growth in unique holders, it suggests that the rally may be driven less by broad adoption and more by capital concentration among existing participants. This dynamic often points to a market dominated by whales or high-net-worth collectors who are accumulating larger positions within established collections.

Instead of new entrants expanding the base of ownership, existing holders are consolidating supply. By sweeping floors or strategically acquiring underpriced assets, these actors can artificially tighten available liquidity, pushing prices upward. While this can create the appearance of a healthy bull phase, it lacks the organic growth that typically sustains long-term market expansion.
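The divergence described above reduces to comparing two simple metrics across snapshots of a collection. The sketch below uses hypothetical numbers and a made-up helper function purely to illustrate the signal: a rising floor without a rising holder count reads as concentration rather than adoption.

```python
def market_signal(prev: dict, curr: dict) -> str:
    """Classify a collection's move between two snapshots.

    Each snapshot is a dict with 'floor' (lowest listed price)
    and 'holders' (count of unique owner addresses).
    """
    floor_up = curr["floor"] > prev["floor"]
    holders_up = curr["holders"] > prev["holders"]
    if floor_up and holders_up:
        return "broad-based rally"           # price and ownership both expanding
    if floor_up and not holders_up:
        return "concentration-driven rally"  # the pattern this piece describes
    return "no rally"

# Hypothetical snapshots: floor up 35%, holder count essentially flat.
prev = {"floor": 10.0, "holders": 5200}
curr = {"floor": 13.5, "holders": 5180}
print(market_signal(prev, curr))  # concentration-driven rally
```

Real analyses would pull listings and ownership data from a marketplace or chain indexer and control for factors like delisted supply, but the core comparison is exactly this one.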

Another factor contributing to this pattern is the maturation of the NFT market itself. Early cycles were characterized by explosive user growth, driven by novelty, speculation, and cultural hype. In contrast, the current phase appears more selective. Capital is flowing into perceived blue-chip collections—projects with established brand equity, historical significance, or strong communities—rather than dispersing across a wide array of new entrants.

This concentration reinforces price increases at the top while leaving broader participation relatively flat. Liquidity dynamics also play a critical role. NFTs are inherently illiquid compared to fungible tokens; each asset is unique, and transaction volumes can be thin. When fewer sellers are willing to part with their assets at lower prices, even modest buying pressure can lift floors significantly. If this buying pressure comes from a small group of committed investors rather than a large influx of new users, holder counts will naturally lag behind price action.

Financialization mechanisms within the NFT ecosystem—such as lending, fractionalization, and derivatives—allow existing holders to extract more value from their assets without selling them. This reduces the need for distribution to new participants. For instance, an investor can leverage an NFT as collateral, gain liquidity, and reinvest within the ecosystem, all while maintaining ownership. Such mechanisms deepen capital efficiency but do little to expand the user base.

From a behavioral perspective, this divergence may also reflect lingering caution among retail participants. After the volatility and drawdowns of previous cycles, new users may be hesitant to enter the market despite rising prices. Meanwhile, experienced participants, armed with better information and stronger conviction, are more willing to accumulate during periods of relative undervaluation.

On one hand, rising floor prices indicate that certain NFT assets are retaining or even increasing their perceived value, which can strengthen market credibility. On the other hand, a lack of growth in holder count raises concerns about sustainability. Markets driven by concentrated ownership are more vulnerable to sharp corrections if a few large holders decide to exit positions.

The disconnect between floor prices and holder growth suggests that the NFT market is transitioning from a phase of rapid expansion to one of consolidation. For the ecosystem to achieve long-term resilience, price appreciation will need to be accompanied by renewed user growth, broader accessibility, and compelling use cases that extend beyond speculation. Until then, rising floors without expanding ownership remain a signal worth scrutinizing rather than celebrating unconditionally.