
OPEC+ Reportedly Pushes Ahead With Output Increase amid Millions of Barrels Trapped by Hormuz Closure


OPEC+ is preparing another oil production increase even as the Iran war continues to choke off exports from the Persian Gulf.

Sources familiar with the group’s internal discussions told Reuters that seven core OPEC+ producers have agreed in principle to raise June output targets by roughly 188,000 barrels per day. The decision, expected to be formalized during Sunday’s policy meeting, would mark the third consecutive monthly increase in quotas.

Yet the additional barrels are likely to remain largely theoretical for now. The continuing closure of the Strait of Hormuz, following the outbreak of war between the United States and Iran on February 28, has sharply constrained the ability of Gulf producers to move crude onto global markets. The disruption has affected exports from Saudi Arabia, Iraq, Kuwait, and the United Arab Emirates, the very countries that previously held most of OPEC+’s spare production capacity.

As a result, analysts and traders say the cartel is attempting to project stability and long-term confidence even while actual shipments remain severely impaired.

The seven countries participating in Sunday’s production discussions are Saudi Arabia, Iraq, Kuwait, Algeria, Kazakhstan, Russia, and Oman. The meeting comes only days after the United Arab Emirates announced its exit from OPEC, a major geopolitical shock that further complicates the cartel’s long-term cohesion.

Although the UAE formally left the organization on May 1, the country had been one of the few members capable of meaningfully increasing production in recent years. Its departure weakens OPEC’s collective spare-capacity profile at a moment when energy markets are already under extraordinary strain.

The ongoing war has effectively split the global oil market into two competing realities. On paper, OPEC+ continues to signal that additional supply can be restored gradually to stabilize prices and reassure consuming nations. In practice, however, much of that supply remains stranded due to the security crisis surrounding Hormuz, one of the world’s most critical maritime energy chokepoints. Roughly a fifth of global oil consumption normally passes through the Strait.

Oil executives and commodity traders say even if the waterway reopens in the near term, normalization will not happen quickly. Tankers displaced during the conflict must be repositioned, shipping insurance markets must stabilize, naval security risks must ease, and export backlogs must be worked through before flows return to normal levels.

Some analysts estimate that the process could take months. That lag is becoming increasingly important for financial markets, airlines, manufacturers, and central banks already bracing for the inflationary consequences of prolonged energy disruption.

Crude prices surged above $125 per barrel this week, reaching their highest levels in four years as concerns intensified over tightening supplies of diesel, jet fuel, and petrochemical feedstocks. Several analysts are now warning that global aviation markets could face meaningful fuel shortages within weeks if Gulf exports remain constrained.

The latest OPEC+ increase is slightly smaller than May’s output adjustment because it excludes the UAE’s share following its withdrawal from the group. Still, the symbolic value of the decision appears to matter almost as much as the barrels themselves.

By continuing with scheduled production hikes during wartime conditions, OPEC+ is attempting to present itself as functional, disciplined, and prepared to restore supply once trade routes reopen. The move also helps counter fears that the organization is losing control of the market following the UAE’s departure and escalating geopolitical fragmentation in the Middle East.

Internally, however, the cartel faces mounting pressure. Several producers are already struggling with falling exports and damaged infrastructure. Russia, another key member of the alliance, has reduced output after repeated Ukrainian drone attacks targeted parts of its energy network. Iran’s own oil exports have also come under renewed pressure following a U.S. blockade imposed in April.

According to OPEC’s most recent monthly report, combined output from the broader alliance averaged 35.06 million barrels per day in March, down sharply from February levels. Saudi Arabia and Iraq accounted for some of the largest declines as Gulf export routes became increasingly restricted.

The current crisis is also reshaping the calculations of major consuming economies. Countries in Europe and Asia are accelerating efforts to secure alternative crude supplies from the United States, West Africa, Brazil, and Guyana. Governments are also weighing additional strategic petroleum reserve releases to offset potential shortages if the conflict drags deeper into the summer.

At the same time, the war is reviving long-standing fears about the global economy’s dependence on Middle Eastern energy infrastructure. For years, energy analysts warned that prolonged instability around Hormuz represented one of the largest systemic risks to world markets. The current conflict is now testing that vulnerability in real time.

The production increase expected from OPEC+, therefore, serves less as an immediate supply solution and more as a signal of intent. The cartel is effectively telling markets that additional crude exists and can eventually return. But until shipping lanes reopen and regional security stabilizes, much of the world’s oil capacity will remain trapped behind a geopolitical blockade that no quota adjustment alone can solve.

How Telemedicine Quietly Became the Global Standard for Medical Cannabis Access


The major shift in medical cannabis over the past three years has not been which countries legalized it. It has been how patients actually reach their cannabis prescriber. In the United States, Germany, the United Kingdom, and Australia, the same pattern is playing out: virtual consultations are replacing the in-person medical visit, and digital certifications and prescriptions rarely require one. Telemedicine has quietly become the default access route for medical cannabis worldwide, and the data from the last two years makes the trend impossible to ignore.

The US Built the Blueprint Out of Necessity

The United States pioneered the telehealth-first model, but not by design. The Covid pandemic expanded telemedicine usage at a remarkable pace, and the US, to the surprise of many observers, deemed cannabis dispensaries essential businesses. With cannabis still federally illegal but legal for medical use in 40 states (as of mid-2025), operators faced a fragmented compliance environment from day one. Each state runs its own medical cannabis program, with its own qualifying-conditions list, its own physician licensing rules, and its own patient registration requirements. Building a brick-and-mortar clinic network across that patchwork made no economic sense; telehealth did.

US platforms like MMJ.com now operate across 21 states, handling state-level compliance behind a single patient-facing experience. MMJ patients schedule telemedicine appointments with cannabis doctors, and the platform routes them to the right MMJ physician, documentation flow, and state registry behind the scenes. What started as a workaround for a fragmented regulatory environment turned out to be the structural foundation of the entire medical cannabis industry, accelerated by the pandemic-era expansion of telehealth flexibilities in 2020.

The telemedicine experience hides the complexity from patients. Someone applying for a medical marijuana card in Arkansas goes through a different qualifying pathway than someone applying in Texas, Pennsylvania, or Ohio, but neither patient has to learn those differences. The MMJ platform absorbs the regulatory friction and streamlines the process, leaving the patient to simply wait for their medical card to be issued or arrive in the mail.

Telemedicine has become a dominant onboarding channel for new medical cannabis patients in many US state programs, and total active US medical cannabis patient enrollment has grown from roughly 678,000 in 2016 to around 3 million by 2020 (per a 2022 Annals of Internal Medicine analysis), with continued growth since.

Germany Ran the Experiment at Speed

Then Germany compressed the same evolution into about a year and a half.

On April 1, 2024, Germany’s new Medical Cannabis Act (MedCanG) removed cannabis from the country’s narcotics list. Medical doctors could suddenly prescribe it like any other medication, with no special permits and no quotas. The result was one of the fastest patient-access expansions in modern European healthcare. Many believe other nations treated the US as a test case, learning from what worked well and where abuse was avoided.

Between March 2024 and December 2025, German medical cannabis prescriptions surged roughly 3,300%, according to data published by Bloomwell, the country’s largest digital cannabis platform. Patient counts climbed from about 250,000 in April 2024 to nearly 900,000 by mid-2025. By the end of 2025, the German medical cannabis market was valued at around $997 million, up 155% year over year, large enough to make Germany the biggest patient market outside North America.
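The percentage figures above are easy to sanity-check. A minimal Python sketch, using only the rounded numbers quoted in this article (the rounding is mine):

```python
def pct_growth(old: float, new: float) -> float:
    """Percentage growth from `old` to `new`."""
    return (new - old) / old * 100

# Patient counts: ~250,000 (April 2024) to ~900,000 (mid-2025).
print(round(pct_growth(250_000, 900_000)))  # 260

# A market worth ~$997M after growing 155% year over year implies a
# prior-year value of roughly 997 / 2.55, i.e. about $391M.
print(round(997 / 2.55))  # 391
```

Note that the roughly 260% rise in patient counts and the 3,300% surge in prescriptions are different metrics (enrolled patients versus individual scripts), which is presumably why they differ so widely.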

The driver was almost entirely telemedicine, with its ease of scheduling an appointment and speaking with a cannabis doctor within minutes. In some German states, more than 60% of rural patients relied solely on digital prescriptions. Telemedicine was the only access channel that could scale fast enough to absorb the demand the new law unlocked.

The UK Followed the Same Curve

The United Kingdom legalized medical cannabis in November 2018 but moved much more slowly. Restrictive NHS prescribing rules and a requirement that only specialist consultants could initiate prescriptions kept the patient count small for years. Private telehealth clinics eventually changed the trajectory of the UK’s growth.

Between 2022 and 2024, the volume of medical cannabis flower prescribed to UK patients rose 262%, from approximately 2,700 kilograms to over 10,000 kilograms, according to Home Office import records. Prescription counts more than doubled in a single year. By the end of 2025, the UK had an estimated 80,000 active medical cannabis patients accessing care through roughly 20 to 25 specialist clinics, the bulk of them telehealth-first operators (Releaf, Mamedica, Alternaleaf, Curaleaf Clinic, and others).

A vast majority of the United Kingdom’s medical cannabis prescriptions now flow through private channels, and the majority of those private prescriptions happen in telemedicine consultations.

Why the Same Model Won in Four Different Healthcare Systems

What stands out is how convergent the model is across very different countries. The US runs a fragmented private-insurance and state-by-state framework. Germany operates a public statutory health insurance system. The UK has the NHS alongside a parallel private market. Australia, where roughly one million individuals now use medicinal cannabis under the Therapeutic Goods Administration’s Special Access Scheme, has seen its own telehealth-driven boom (imports rose nearly tenfold between 2021 and 2024). Four different systems, four different regulatory histories, and the access pattern that emerged in each is essentially identical.

Three forces explain the convergence:

  1. Patient density. Medical cannabis patients are a relatively small, geographically scattered population. Building physical specialty MMJ clinics close enough to serve every patient is uneconomic, and patients will not drive to a clinic and wait in line when they can schedule an appointment and speak with an MMJ doctor by video or telephone. Telehealth solved the distribution problem before traditional healthcare even attempted it.
  2. Stigma. Patient surveys in both Germany and the UK consistently show a preference for private telemedicine visits over walking into a visible specialty clinic. The privacy of telehealth is itself a feature. Even where medical cannabis is legal, a lingering stigma persists.
  3. Specialty concentration. In every market, a small number of physicians prescribe a disproportionate share of medical cannabis. UK Freedom of Information data showed that just 10 doctors wrote 52% of all medical cannabis prescriptions issued between 2019 and early 2025. Telehealth is truly the only way that a small pool of specialists can reach a national patient base.

The First Regulatory Backlash Has Already Begun

The same model is now drawing its first serious pushback. In October 2025, Germany’s Federal Cabinet approved draft amendments to the MedCanG that would require an in-person consultation before any first cannabis prescription, restrict mail-order dispensing, and limit follow-up telemedicine to patients who have had an in-person visit within the previous four calendar quarters. The German Health Minister cited a roughly 400% surge in cannabis imports as evidence of potential misuse. Industry groups responded that the dependency profile of medical cannabis is far lower than that of opioids or Z-drugs already routinely prescribed for the same medical conditions.

The amendments are scheduled for second and third Bundestag readings in spring 2026 (the first reading took place on December 18, 2025). If they pass in their current form, Germany will set a precedent that other EU member states are likely to study closely. Australia’s Therapeutic Goods Administration is preparing similar reforms after a public consultation on tightening telehealth-driven cannabis prescribing closed in late 2025. The US, where pandemic-era telehealth flexibilities for controlled-substance prescribing have been repeatedly extended, is heading toward its own version of the same debate.

What This Means for Markets That Have Not Legalized Yet

For African markets watching this story unfold (Lesotho became the first African nation to license medical cannabis cultivation in 2017 and is now an export player in the global supply chain; South Africa is building out its medical cannabis framework; and legalization debates continue in Ghana, Zimbabwe, and elsewhere), the lesson seems structural rather than political. Any country that develops a regulated medical cannabis market from this point forward will do so in an environment where telemedicine is already the primary access channel.

The relevant policy question has shifted. It is no longer whether to permit virtual prescribing for cannabis, but what guardrails to place around it to prevent abuse. Countries that build those guardrails into their first-generation legislation will avoid the kind of disruptive mid-cycle restrictions Germany is now attempting to impose retroactively.

The Verdict From Four Major Markets

Medical cannabis has turned out to be one of the largest real-world stress tests of specialty telemedicine at a national scale. Four different healthcare systems, on four different regulatory tracks, all converged on the same access model. Patients are showing up, prescription volumes followed, and the infrastructure has scaled dramatically.

Whatever the regulatory adjustments of the next eighteen months look like, the directional shift is unlikely to reverse. Once patients experience specialty care delivered through a screen at home, the appetite for mandatory in-person evaluations largely disappears. In a world where nearly everyone has a smartphone or tablet, the digital-first specialty consultation has become the default option for patients.

Meta Quietly Ends AI Training Partnership with Sama After Claims Contractors Reviewed Intimate Smart-Glasses Footage


Meta has quietly terminated its relationship with outsourcing firm Sama after reports emerged that contractors reviewing footage from Ray-Ban smart glasses were exposed to highly sensitive recordings, including private conversations, financial information, and explicit scenes involving individuals who allegedly did not know they were being filmed.

The decision is drawing renewed scrutiny to the hidden labor systems powering the generative AI race, while reigniting broader concerns about privacy, surveillance, and worker treatment in the global AI supply chain.

The controversy underlines a growing tension at the center of the artificial intelligence industry: while technology firms are aggressively marketing AI-powered wearable devices as the next computing platform, the systems behind them still rely heavily on armies of low-paid human reviewers tasked with sorting through deeply personal content to improve machine-learning models.

Sama, a California-headquartered outsourcing company with major operations in Nairobi, disclosed the termination of 1,108 employees after Meta ended its contract. Some workers alleged they faced retaliation after raising concerns internally about the nature of the material they were required to examine.

The dispute traces back to investigations published earlier this year by Swedish media outlets, which cited workers who said they were asked to label footage recorded through Meta’s Ray-Ban smart glasses. According to those accounts, the recordings included private conversations, banking details, nudity, and intimate encounters involving people who often appeared unaware they were being captured.

The revelations strike at one of the most sensitive questions surrounding AI wearables: whether consumers and bystanders fully understand how much data these devices collect and how that information is ultimately used.

Meta maintains that users must explicitly enable the glasses’ AI features and that its terms of service disclose that some recordings may be used to improve AI systems. Yet privacy advocates argue that disclosure buried in user agreements does little to address concerns about uninformed third parties who may appear in recordings without consent.

The incident also exposes the uncomfortable reality that the rapid progress of generative AI still depends heavily on human labor. While AI companies market their systems as increasingly autonomous, the technology often relies on thousands of contractors who manually categorize images, review audio, and flag problematic content.

Industry analysts say the demand for this kind of data annotation work has surged as companies race to develop multimodal AI systems capable of understanding video, audio, and real-world environments. Smart glasses intensify that challenge because they capture unfiltered moments from everyday life rather than curated online datasets.

The fallout has revived scrutiny of Sama itself, which has repeatedly surfaced at the center of debates about AI labor ethics.

The company previously worked with OpenAI to help filter toxic material for ChatGPT before its public launch in 2022. Investigations at the time revealed that Kenya-based workers were paid less than $2 an hour to review graphic and disturbing content, with several workers reporting psychological trauma from the assignments.

Sama and Meta also faced allegations in previous years tied to anti-union practices and misleading job descriptions. Labor groups argued that workers recruited for content moderation and AI labeling tasks were often not fully informed about the emotional intensity or sensitivity of the material they would encounter.

The latest controversy could intensify regulatory pressure on Meta at a time when governments across Europe and parts of the United States are already examining AI transparency, biometric surveillance, and workplace protections tied to generative AI systems.

It also lands as Meta pushes aggressively into AI-powered hardware under CEO Mark Zuckerberg’s broader strategy of building what the company sees as the next major computing ecosystem beyond smartphones. The Ray-Ban smart glasses have become one of Meta’s most commercially successful hardware experiments in years, helped by improvements in AI assistants and real-time multimodal capabilities.

But the privacy concerns surrounding wearable cameras are not new. More than a decade ago, Google Glass faced public backlash over fears that users could secretly record others in public spaces. That resistance helped doom the product commercially.

Meta’s newer generation of smart glasses has avoided some of that stigma by using more discreet designs and integrating them into mainstream fashion branding. Even so, reports have increasingly emerged of users wearing the devices in classrooms, courtrooms, police encounters, and other sensitive environments.

Privacy researchers warn that as AI wearables become more capable, the volume of sensitive real-world data collected by technology firms could increase exponentially. Unlike smartphones, which users intentionally point toward subjects, smart glasses continuously capture the user’s surroundings from a first-person perspective, raising the likelihood of incidental surveillance.

The controversy may also deepen questions about how AI companies balance automation with accountability. While Meta blamed Sama for failing to meet standards, critics argue the dispute highlights systemic issues across the AI industry, where companies outsource difficult moderation and training tasks to contractors operating far from Silicon Valley headquarters.

As AI systems move beyond text generation into always-on wearable devices embedded in daily life, the debate over who controls the data, who reviews it, and who bears the consequences when safeguards fail is becoming increasingly difficult for the industry to avoid.

Washington Eyes 72-Hour Cyber Defense Rule as AI Compresses Hacking Timelines From Weeks to Hours


U.S. cybersecurity officials are considering one of the most aggressive overhauls of federal cyber defense policy in years, as fears grow that a new generation of artificial-intelligence systems could dramatically accelerate the speed and scale of cyberattacks against government networks.

According to people familiar with the discussions cited by Reuters, officials are weighing plans to slash the time federal civilian agencies have to fix actively exploited software vulnerabilities from the current two-to-three-week average to just three days.

The proposal reflects mounting anxiety inside Washington that advanced AI models are rapidly transforming cyber operations from a largely human-driven process into one increasingly automated, scalable, and capable of operating at machine speed.

Those concerns are centered on sophisticated AI systems such as Anthropic’s Mythos and OpenAI’s GPT-5.4-Cyber, which security researchers and policymakers fear could significantly reduce the technical expertise and time traditionally required to conduct advanced hacking campaigns.

For years, cybercriminals have used automation and machine learning to improve phishing schemes, malware generation, and reconnaissance. But cybersecurity officials say the latest frontier models appear capable of going much further: rapidly identifying previously unknown vulnerabilities, analyzing newly disclosed flaws within minutes, generating exploit code, and coordinating multi-stage intrusion campaigns with limited human involvement.

That shift is fundamentally altering how governments think about defense. Until recently, organizations often had weeks or even months between the public disclosure of a software flaw and the appearance of large-scale exploitation campaigns. Officials now worry that AI-assisted attackers may compress that timeline to mere hours.

“If you’re going to protect civil agencies, you’re going to have to move faster,” said Stephen Boyer, founder of cybersecurity firm Bitsight, which has previously assisted the Cybersecurity and Infrastructure Security Agency in cataloguing vulnerabilities. “We don’t have as much of a window as we used to have.”

The discussions are reportedly being led by acting CISA director Nick Andersen and U.S. national cyber director Sean Cairncross, according to sources familiar with the matter.

The proposal centers on CISA’s Known Exploited Vulnerabilities database, commonly known as the KEV catalog. The list tracks software flaws already being actively exploited by criminal organizations or state-backed hacking groups and serves as a mandatory remediation guide for federal agencies.

Historically, agencies were generally given around three weeks to patch vulnerabilities once they were added to the KEV list, according to cybersecurity researcher Glenn Thorpe. That timeline has gradually shortened in recent years, but a universal three-day standard would represent a dramatic escalation in urgency.
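The practical effect of the proposed change is straightforward deadline arithmetic. A minimal sketch, assuming the roughly three-week legacy window described above (illustrative only: real KEV entries carry an explicit per-entry due date set by CISA, and the proposed rule has not been finalized):

```python
from datetime import date, timedelta

# Remediation windows discussed above: the historical ~3-week default
# versus the proposed 3-day rule. Illustrative values only.
LEGACY_WINDOW = timedelta(days=21)
PROPOSED_WINDOW = timedelta(days=3)

def due_dates(date_added: date) -> dict:
    """Patch deadline under each policy for a flaw added on `date_added`."""
    return {
        "legacy": date_added + LEGACY_WINDOW,
        "proposed": date_added + PROPOSED_WINDOW,
    }

# Example: a vulnerability added to the KEV catalog on March 2.
deadlines = due_dates(date(2026, 3, 2))
print(deadlines["legacy"])    # 2026-03-23
print(deadlines["proposed"])  # 2026-03-05
```

Compressing the window from 21 days to 3 leaves agencies roughly a seventh of the time for the testing, compatibility review, and staged deployment that the following paragraphs describe.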

The move underscores how seriously U.S. officials are beginning to view the intersection between AI and offensive cyber capabilities. Some security analysts compare the current moment to the arrival of industrial automation in manufacturing: cyberattacks that once required teams of highly skilled operators may increasingly become partially automated workflows assisted by AI reasoning systems.

That prospect is especially alarming for governments because it could allow smaller criminal groups or less sophisticated state actors to conduct operations previously reserved for elite hacking units.

The concern extends well beyond federal agencies. Industry executives expect any tighter CISA standards to quickly influence state governments, contractors, hospitals, utilities, banks, and other critical infrastructure operators.

“This is a signal to others that says, ‘Hey you need to do this more quickly,’” said Nitin Natarajan, former deputy director of CISA under President Joe Biden and now head of NN Global.

Natarajan said accelerating patch timelines makes strategic sense given the speed of emerging threats, but warned the federal government may lack the resources necessary to sustain such an aggressive posture.

“We’ve seen a reduction in their resources, both in funding and expertise,” he said.

That concern reflects broader strain across the U.S. cyber apparatus.

CISA has faced repeated budget pressures, staffing reductions, and operational disruptions tied to government shutdown fights under President Donald Trump. Former officials and private-sector analysts warn that compressing deadlines without significantly increasing staffing, automation, and coordination could overwhelm already stretched cybersecurity teams.

The challenge is particularly acute in large enterprise environments, where applying patches is rarely straightforward. Major organizations often operate thousands of interconnected systems that involve legacy software, third-party vendors, industrial controls, and sensitive operational technology. Security updates typically require testing, compatibility reviews, and staged deployment processes to avoid outages or operational failures.

“Realistically, three days is simply impossible for some environments,” said Kecia Hoyt, vice president at threat intelligence firm Flashpoint.

John Hammond, senior principal security researcher at Huntress, said the proposed timeline would represent “quite a change” for the industry.

While Hammond said he was cautiously optimistic about the push for faster remediation, he added that “only time will tell how well the industry keeps up.”

The discussions are unfolding amid broader concerns that the global AI race is beginning to outpace the development of security guardrails and governance frameworks.

In recent months, frontier AI developers have faced increasing scrutiny over whether advanced models could assist cyber intrusions, biological research, or other high-risk activities. Several governments have quietly expanded national-security reviews of AI systems capable of advanced reasoning, coding, and autonomous task execution.

The banking industry has become particularly sensitive to the issue. Financial regulators in the United States, Europe, and Asia have reportedly intensified reviews of AI-related cyber risks amid fears that automated attacks could target payment systems, trading infrastructure, and customer data on an unprecedented scale.

At the core of Washington’s concern is a growing realization that cybersecurity doctrines built for the pre-AI era may no longer be sufficient. For decades, defenders largely relied on the assumption that discovering, weaponizing, and operationalizing vulnerabilities required time, expertise, and coordination. AI may now be eroding all three barriers simultaneously.

If that proves true, cybersecurity could shift from a contest measured in weeks and days to one increasingly measured in hours and minutes — forcing governments and corporations alike into a far more reactive and relentless security posture.

Oscars Restrict AI-Generated Content from Major Film Awards


The decision to restrict and ban AI-generated content from major film awards like the Oscars is less a simple rejection of technology and more a symptom of a deeper anxiety within the creative industries. It raises a fundamental question: if artificial intelligence cannot compete on equal footing, is it because it lacks something essential, often described as soul, or because it threatens to redefine what that very concept means?

Cinema has always been understood as a profoundly human art form. Films are not merely sequences of images but expressions of lived experience—of memory, emotion, struggle, and imagination shaped by consciousness. When audiences speak of a film having soul, they are often pointing to an intangible authenticity: the sense that a story emerges from human vulnerability and intention.

AI, by contrast, operates through pattern recognition, probabilistic modeling, and training data derived from existing works. It does not experience grief, joy, or desire; it simulates their expression based on what it has learned. From this perspective, the argument that AI lacks soul is compelling. It produces outputs without inner life, without stakes, and without the existential grounding that defines human creativity.

However, this explanation alone is insufficient. After all, many tools used in filmmaking—from CGI to editing software—do not possess soul, yet they are widely accepted. The difference lies not in the absence of humanity within the tool, but in the degree of authorship it assumes. AI systems are increasingly capable of generating scripts, performances, and even directorial decisions with minimal human intervention. This shifts them from being instruments of creativity to potential creators themselves.

The discomfort arises not because AI cannot create meaningful work, but because it might. This is where the notion of threat becomes more salient. AI challenges long-standing assumptions about originality, ownership, and labor in the arts. If a machine can generate a screenplay indistinguishable from one written by a human, what happens to the value we assign to human effort? If performances can be synthesized, what becomes of actors?

The resistance from institutions like the Oscars may therefore be less about preserving artistic purity and more about safeguarding the economic and cultural structures built around human creators. There is also a philosophical dimension to this tension. Art has historically been one of the last domains where human uniqueness seemed unquestionable.

The rise of AI erodes that boundary, forcing a reconsideration of what creativity actually entails. If creativity is defined as recombination and reinterpretation of existing ideas, then AI is already participating in it. But if it is defined by intention, consciousness, and subjective experience, then AI remains fundamentally outside it. The debate over AI in the Oscars is, in many ways, a proxy for this unresolved question.

The exclusion of AI-generated content is not a definitive judgment on its capabilities but a reflection of a transitional moment. It signals an industry grappling with rapid technological change and attempting to draw lines before those lines become impossible to enforce. Whether AI lacks soul or threatens it depends largely on how one defines both terms. What is clear, however, is that the conversation is far from settled, and the boundaries between human and machine creativity will continue to blur in the years ahead.