How Telemedicine Quietly Became the Global Standard for Medical Cannabis Access

The major shift in medical cannabis over the past three years has not been which countries legalized it. It has been how patients actually reach their cannabis prescriber. In the United States, Germany, the United Kingdom, and Australia, the same pattern is playing out: virtual consultations are replacing the in-person medical visit, and digital certifications and prescriptions rarely require an office visit at all. Telemedicine has quietly become the default access route for medical cannabis worldwide, and the data from the last two years makes the trend impossible to ignore.

The US Built the Blueprint Out of Necessity

The United States pioneered the telehealth-first model, but not by design. The COVID-19 pandemic drove telemedicine usage up at an unprecedented pace, and the US deemed cannabis dispensaries essential businesses, to the surprise of many. With cannabis still federally illegal but legal for medical use in 40 states (as of mid-2025), operators faced a fragmented compliance environment from day one. Each state runs its own medical cannabis program, with its own qualifying conditions list, its own physician licensing rules, and its own patient registration requirements. Building a brick-and-mortar clinic network across that patchwork made no economic sense; telehealth did.

US platforms like MMJ.com now operate across 21 states, handling state-level compliance behind a single patient-facing experience. MMJ patients schedule appointments with cannabis doctors via telemedicine, and the platform routes them to the right MMJ physician, documentation flow, and state registry behind the scenes. What started as a workaround for a fragmented regulatory environment, accelerated by the pandemic-era expansion of telehealth flexibilities in 2020, turned out to be the structural foundation of the entire medical cannabis industry.

The telemedicine experience hides that complexity from patients. Someone applying for a medical marijuana card in Arkansas goes through a different qualifying pathway than someone applying in Texas, Pennsylvania, or Ohio, but neither patient has to learn those differences. The MMJ platform absorbs the regulatory friction, leaving the patient to wait for their medical card to be issued or arrive in the mail.
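To make that structural point concrete, here is a minimal, hypothetical Python sketch of the pattern: one intake flow consulting a per-state rule table behind the scenes. The states, condition names, and registry identifiers are invented for illustration and do not reflect MMJ.com's actual system or any real state's requirements.

```python
from dataclasses import dataclass

@dataclass
class StateProgram:
    """Per-state medical cannabis program rules (illustrative values only)."""
    name: str
    qualifying_conditions: set[str]
    telehealth_allowed: bool
    registry: str  # state registry the certification is filed with

# Hypothetical rule table; real programs differ and change frequently.
PROGRAMS = {
    "AR": StateProgram("Arkansas", {"ptsd", "chronic_pain", "cancer"}, True, "ar-registry"),
    "PA": StateProgram("Pennsylvania", {"anxiety", "chronic_pain", "cancer"}, True, "pa-registry"),
    "TX": StateProgram("Texas", {"epilepsy", "ptsd", "cancer"}, True, "tx-registry"),
}

def route_patient(state_code: str, condition: str) -> dict:
    """Resolve one intake against the patient's state rules; the patient never sees this logic."""
    program = PROGRAMS.get(state_code)
    if program is None:
        raise ValueError(f"No supported program for state {state_code}")
    return {
        "eligible": condition in program.qualifying_conditions,
        "visit_type": "telehealth" if program.telehealth_allowed else "in_person",
        "file_with": program.registry,
    }

print(route_patient("AR", "chronic_pain"))
# {'eligible': True, 'visit_type': 'telehealth', 'file_with': 'ar-registry'}
```

The design point is simply that the per-state variation lives in one rule table on the platform side, so the patient-facing flow stays identical everywhere.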

Telemedicine has become a dominant onboarding channel for new medical cannabis patients in many US state programs, and total active US medical cannabis patient enrollment has grown from roughly 678,000 in 2016 to around 3 million by 2020 (per a 2022 Annals of Internal Medicine analysis), with continued growth since.

Germany Ran the Experiment at Speed

Then Germany compressed the same evolution into about a year and a half.

On April 1, 2024, Germany’s new Medical Cannabis Act (MedCanG) removed cannabis from the country’s narcotics list. Medical doctors could suddenly prescribe it like any other medication, with no special permits and no quotas. The result was one of the fastest patient-access expansions in modern European healthcare. Many believe other nations watched the US as a test case, learning what worked well and how abuse was kept in check.

Between March 2024 and December 2025, German medical cannabis prescriptions surged roughly 3,300%, according to data published by Bloomwell, the country’s largest digital cannabis platform. Patient counts climbed from about 250,000 in April 2024 to nearly 900,000 by mid-2025. By the end of 2025, the German medical cannabis market was valued at around $997 million, up 155% year over year, large enough to make Germany the biggest patient market outside North America.

The driver was almost entirely telemedicine: the ease of scheduling an appointment and speaking with a cannabis doctor within minutes. In some German states, more than 60% of rural patients relied solely on digital prescriptions. Telemedicine was the only access channel that could scale fast enough to absorb the demand the new law unlocked.

The UK Followed the Same Curve

The United Kingdom legalized medical cannabis in November 2018 but moved much more slowly. Restrictive NHS prescribing rules and a requirement that only specialist consultants could initiate prescriptions kept the patient count small for years. Private telehealth clinics eventually changed the trajectory of the UK’s growth.

Between 2022 and 2024, the volume of medical cannabis flower prescribed to UK patients rose 262%, from approximately 2,700 kilograms to over 10,000 kilograms, according to Home Office import records. Prescription counts more than doubled in a single year. By the end of 2025, the UK had an estimated 80,000 active medical cannabis patients accessing care through roughly 20 to 25 specialist clinics, the bulk of them telehealth-first operators (Releaf, Mamedica, Alternaleaf, Curaleaf Clinic, and others).

The vast majority of the United Kingdom’s medical cannabis prescriptions now flow through private channels, and most of those private prescriptions happen in telemedicine consultations.

Why the Same Model Won in Four Different Healthcare Systems

What stands out is how convergent the model is across very different countries. The US runs a fragmented private-insurance and state-by-state framework. Germany operates a public statutory health insurance system. The UK has the NHS alongside a parallel private market. Australia, where roughly one million individuals now use medicinal cannabis under the Therapeutic Goods Administration’s Special Access Scheme, has seen its own telehealth-driven boom (imports rose nearly tenfold between 2021 and 2024). Four different systems, four different regulatory histories, and the access pattern that emerged in each is essentially identical.

Three forces explain the convergence:

  1. Patient density. Medical cannabis patients are a relatively small, geographically scattered population. Building physical specialty MMJ clinics close enough to serve every patient is uneconomic, and patients will not drive to a clinic and wait in line when they can schedule an appointment and speak with an MMJ doctor by video or telephone. Telehealth solved the distribution problem before traditional healthcare even attempted it.
  2. Stigma. Patient surveys in both Germany and the UK consistently show a preference for private telemedicine visits over walking into a visible specialty clinic. The privacy of telehealth is itself a feature. Even where medical cannabis is legal, a lingering stigma remains.
  3. Specialty concentration. In every market, a small number of physicians prescribe a disproportionate share of medical cannabis. UK Freedom of Information data showed that just 10 doctors wrote 52% of all medical cannabis prescriptions issued between 2019 and early 2025. Telehealth is the only way a small pool of specialists can reach a national patient base.

The First Regulatory Backlash Has Already Begun

The same model is now drawing its first serious pushback. In October 2025, Germany’s Federal Cabinet approved draft amendments to the MedCanG that would require an in-person consultation before any first cannabis prescription, restrict mail-order dispensing, and limit follow-up telemedicine to patients who have had an in-person visit within the previous four calendar quarters. The German Health Minister cited a roughly 400% surge in cannabis imports as evidence of potential misuse. Industry groups responded that the dependency profile of medical cannabis is far lower than that of opioids or Z-drugs already routinely prescribed for the same medical conditions.

The amendments are scheduled for second and third Bundestag readings in spring 2026 (the first reading took place on December 18, 2025). If they pass in their current form, Germany will set a precedent that other EU member states are likely to study closely. Australia’s Therapeutic Goods Administration is preparing similar reforms after a public consultation on tightening telehealth-driven cannabis prescribing closed in late 2025. The US, where pandemic-era telehealth flexibilities for controlled-substance prescribing have been repeatedly extended, is heading toward its own version of the same debate.

What This Means for Markets That Have Not Legalized Yet

For African markets watching this story unfold (Lesotho became the first African nation to license medical cannabis cultivation in 2017 and is now an export player in the global supply chain; South Africa is building out its medical cannabis framework; and legalization debates continue in Ghana, Zimbabwe, and elsewhere), the lesson is structural rather than political. Any country that develops a regulated medical cannabis market from this point forward will do so in an environment where telemedicine is already the primary access channel.

The relevant policy question has shifted. It is no longer whether to permit virtual prescribing for cannabis, but what guardrails to place around it to prevent abuse. Countries that build those guardrails into their first-generation legislation will avoid the kind of mid-cycle restrictions Germany is now attempting to impose retroactively, with all the friction that entails.

The Verdict From Four Major Markets

Medical cannabis has turned out to be one of the largest real-world stress tests of specialty telemedicine at a national scale. Four different healthcare systems, on four different regulatory tracks, all converged on the same access model. Patients showed up, prescription volumes followed, and the infrastructure scaled dramatically.

Whatever the regulatory adjustments of the next eighteen months look like, the directional shift is unlikely to reverse. Once patients experience specialty care delivered through a screen at home, the expectation of an in-person evaluation rarely returns. In a world where nearly everyone has a smartphone or tablet, the digital-first specialty consultation has become the default option for patients.

Meta Quietly Ends AI Training Partnership with Sama After Claims Contractors Reviewed Intimate Smart-Glasses Footage

Meta has quietly terminated its relationship with outsourcing firm Sama after reports emerged that contractors reviewing footage from Ray-Ban smart glasses were exposed to highly sensitive recordings, including private conversations, financial information, and explicit scenes involving individuals who allegedly did not know they were being filmed.

The decision is drawing renewed scrutiny to the hidden labor systems powering the generative AI race, while reigniting broader concerns about privacy, surveillance, and worker treatment in the global AI supply chain.

The controversy underlines a growing tension at the center of the artificial intelligence industry: while technology firms are aggressively marketing AI-powered wearable devices as the next computing platform, the systems behind them still rely heavily on armies of low-paid human reviewers tasked with sorting through deeply personal content to improve machine-learning models.

Sama, a California-headquartered outsourcing company with major operations in Nairobi, disclosed the termination of 1,108 employees after Meta ended its contract. Some workers alleged they faced retaliation after raising concerns internally about the nature of the material they were required to examine.

The dispute traces back to investigations published earlier this year by Swedish media outlets, which cited workers who said they were asked to label footage recorded through Meta’s Ray-Ban smart glasses. According to those accounts, the recordings included private conversations, banking details, nudity, and intimate encounters involving people who often appeared unaware they were being captured.

The revelations strike at one of the most sensitive questions surrounding AI wearables: whether consumers and bystanders fully understand how much data these devices collect and how that information is ultimately used.

Meta maintains that users must explicitly enable the glasses’ AI features and that its terms of service disclose that some recordings may be used to improve AI systems. Yet privacy advocates argue that disclosure buried in user agreements does little to address concerns about uninformed third parties who may appear in recordings without consent.

The incident also exposes the uncomfortable reality that the rapid progress of generative AI still depends heavily on human labor. While AI companies market their systems as increasingly autonomous, the technology often relies on thousands of contractors who manually categorize images, review audio, and flag problematic content.

Industry analysts say the demand for this kind of data annotation work has surged as companies race to develop multimodal AI systems capable of understanding video, audio, and real-world environments. Smart glasses intensify that challenge because they capture unfiltered moments from everyday life rather than curated online datasets.

The fallout has revived scrutiny of Sama itself, which has repeatedly surfaced at the center of debates about AI labor ethics.

The company previously worked with OpenAI to help filter toxic material for ChatGPT before its public launch in 2022. Investigations at the time revealed that Kenya-based workers were paid less than $2 an hour to review graphic and disturbing content, with several workers reporting psychological trauma from the assignments.

Sama and Meta also faced allegations in previous years tied to anti-union practices and misleading job descriptions. Labor groups argued that workers recruited for content moderation and AI labeling tasks were often not fully informed about the emotional intensity or sensitivity of the material they would encounter.

The latest controversy could intensify regulatory pressure on Meta at a time when governments across Europe and parts of the United States are already examining AI transparency, biometric surveillance, and workplace protections tied to generative AI systems.

It also lands as Meta pushes aggressively into AI-powered hardware under CEO Mark Zuckerberg’s broader strategy of building what the company sees as the next major computing ecosystem beyond smartphones. The Ray-Ban smart glasses have become one of Meta’s most commercially successful hardware experiments in years, helped by improvements in AI assistants and real-time multimodal capabilities.

But the privacy concerns surrounding wearable cameras are not new. More than a decade ago, Google Glass faced public backlash over fears that users could secretly record others in public spaces. That resistance helped doom the product commercially.

Meta’s newer generation of smart glasses has avoided some of that stigma by using more discreet designs and integrating them into mainstream fashion branding. Even so, reports have increasingly emerged of users wearing the devices in classrooms, courtrooms, police encounters, and other sensitive environments.

Privacy researchers warn that as AI wearables become more capable, the volume of sensitive real-world data collected by technology firms could increase exponentially. Unlike smartphones, which users intentionally point toward subjects, smart glasses continuously capture the user’s surroundings from a first-person perspective, raising the likelihood of incidental surveillance.

The controversy may also deepen questions about how AI companies balance automation with accountability. While Meta blamed Sama for failing to meet standards, critics argue the dispute highlights systemic issues across the AI industry, where companies outsource difficult moderation and training tasks to contractors operating far from Silicon Valley headquarters.

As AI systems move beyond text generation into always-on wearable devices embedded in daily life, the debate over who controls the data, who reviews it, and who bears the consequences when safeguards fail is becoming increasingly difficult for the industry to avoid.

Washington Eyes 72-Hour Cyber Defense Rule as AI Compresses Hacking Timelines From Weeks to Hours

U.S. cybersecurity officials are considering one of the most aggressive overhauls of federal cyber defense policy in years, as fears grow that a new generation of artificial-intelligence systems could dramatically accelerate the speed and scale of cyberattacks against government networks.

According to people familiar with the discussions cited by Reuters, officials are weighing plans to slash the time federal civilian agencies have to fix actively exploited software vulnerabilities from the current two-to-three-week average to just three days.

The proposal reflects mounting anxiety inside Washington that advanced AI models are rapidly transforming cyber operations from a largely human-driven process into one increasingly automated, scalable, and capable of operating at machine speed.

Those concerns are centered on sophisticated AI systems such as Anthropic’s Mythos and OpenAI’s GPT-5.4-Cyber, which security researchers and policymakers fear could significantly reduce the technical expertise and time traditionally required to conduct advanced hacking campaigns.

For years, cybercriminals have used automation and machine learning to improve phishing schemes, malware generation, and reconnaissance. But cybersecurity officials say the latest frontier models appear capable of going much further: rapidly identifying previously unknown vulnerabilities, analyzing newly disclosed flaws within minutes, generating exploit code, and coordinating multi-stage intrusion campaigns with limited human involvement.

That shift is fundamentally altering how governments think about defense. Until recently, organizations often had weeks or even months between the public disclosure of a software flaw and the appearance of large-scale exploitation campaigns. Officials now worry that AI-assisted attackers may compress that timeline to mere hours.

“If you’re going to protect civil agencies, you’re going to have to move faster,” said Stephen Boyer, founder of cybersecurity firm Bitsight, which has previously assisted the Cybersecurity and Infrastructure Security Agency in cataloguing vulnerabilities. “We don’t have as much of a window as we used to have.”

The discussions are reportedly being led by acting CISA director Nick Andersen and U.S. national cyber director Sean Cairncross, according to sources familiar with the matter.

The proposal centers on CISA’s Known Exploited Vulnerabilities database, commonly known as the KEV catalog. The list tracks software flaws already being actively exploited by criminal organizations or state-backed hacking groups and serves as a mandatory remediation guide for federal agencies.

Historically, agencies were generally given around three weeks to patch vulnerabilities once they were added to the KEV list, according to cybersecurity researcher Glenn Thorpe. That timeline has gradually shortened in recent years, but a universal three-day standard would represent a dramatic escalation in urgency.
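For context, here is a minimal sketch of how a defender might measure the gap between today's KEV deadlines and a 72-hour rule. It assumes CISA's public KEV JSON feed and its dateAdded/dueDate fields; verify the URL and schema against cisa.gov before relying on them.

```python
import json
import urllib.request
from datetime import date, timedelta

# Public KEV JSON feed; URL and field names reflect CISA's published format
# at the time of writing -- verify against cisa.gov before relying on them.
KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def kev_entries() -> list[dict]:
    """Download the catalog and return its list of vulnerability records."""
    with urllib.request.urlopen(KEV_URL) as resp:
        return json.load(resp)["vulnerabilities"]

def due_under_72h_rule(entry: dict) -> date:
    """What the remediation deadline would be if agencies had three days from listing."""
    return date.fromisoformat(entry["dateAdded"]) + timedelta(days=3)

for vuln in kev_entries()[:10]:  # sample a handful of entries
    published_due = date.fromisoformat(vuln["dueDate"])
    compressed_due = due_under_72h_rule(vuln)
    print(f"{vuln['cveID']}: published due date {published_due}, under a 72-hour rule {compressed_due}")
```

Run against the live feed, a comparison like this makes the scale of the proposed compression visible entry by entry rather than as an abstract policy number.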

The move underscores how seriously U.S. officials are beginning to view the intersection between AI and offensive cyber capabilities. Some security analysts compare the current moment to the arrival of industrial automation in manufacturing: cyberattacks that once required teams of highly skilled operators may increasingly become partially automated workflows assisted by AI reasoning systems.

That prospect is especially alarming for governments because it could allow smaller criminal groups or less sophisticated state actors to conduct operations previously reserved for elite hacking units.

The concern extends well beyond federal agencies. Industry executives expect any tighter CISA standards to quickly influence state governments, contractors, hospitals, utilities, banks, and other critical infrastructure operators.

“This is a signal to others that says, ‘Hey you need to do this more quickly,’” said Nitin Natarajan, former deputy director of CISA under President Joe Biden and now head of NN Global.

Natarajan said accelerating patch timelines makes strategic sense given the speed of emerging threats, but warned the federal government may lack the resources necessary to sustain such an aggressive posture.

“We’ve seen a reduction in their resources, both in funding and expertise,” he said.

That concern reflects broader strain across the U.S. cyber apparatus.

CISA has faced repeated budget pressures, staffing reductions, and operational disruptions tied to government shutdown fights under President Donald Trump. Former officials and private-sector analysts warn that compressing deadlines without significantly increasing staffing, automation, and coordination could overwhelm already stretched cybersecurity teams.

The challenge is particularly acute in large enterprise environments, where applying patches is rarely straightforward. Major organizations often operate thousands of interconnected systems that involve legacy software, third-party vendors, industrial controls, and sensitive operational technology. Security updates typically require testing, compatibility reviews, and staged deployment processes to avoid outages or operational failures.

“Realistically, three days is simply impossible for some environments,” said Kecia Hoyt, vice president at threat intelligence firm Flashpoint.

John Hammond, senior principal security researcher at Huntress, said the proposed timeline would represent “quite a change” for the industry.

While Hammond said he was cautiously optimistic about the push for faster remediation, he added that “only time will tell how well the industry keeps up.”

The discussions are unfolding amid broader concerns that the global AI race is beginning to outpace the development of security guardrails and governance frameworks.

In recent months, frontier AI developers have faced increasing scrutiny over whether advanced models could assist cyber intrusions, biological research, or other high-risk activities. Several governments have quietly expanded national-security reviews of AI systems capable of advanced reasoning, coding, and autonomous task execution.

The banking industry has become particularly sensitive to the issue. Financial regulators in the United States, Europe, and Asia have reportedly intensified reviews of AI-related cyber risks amid fears that automated attacks could target payment systems, trading infrastructure, and customer data on an unprecedented scale.

At the core of Washington’s concern is a growing realization that cybersecurity doctrines built for the pre-AI era may no longer be sufficient. For decades, defenders largely relied on the assumption that discovering, weaponizing, and operationalizing vulnerabilities required time, expertise, and coordination. AI may now be eroding all three barriers simultaneously.

If that proves true, cybersecurity could shift from a contest measured in weeks and days to one increasingly measured in hours and minutes — forcing governments and corporations alike into a far more reactive and relentless security posture.

Oscars Restrict AI-Generated Content from Major Film Awards

The decision to restrict AI-generated content from major film awards like the Oscars is less about a simple rejection of technology and more about a deeper anxiety within the creative industries. It raises a fundamental question: if artificial intelligence cannot compete on equal footing, is it because it lacks something essential—often described as soul—or because it threatens to redefine what that very concept means?

Cinema has always been understood as a profoundly human art form. Films are not merely sequences of images but expressions of lived experience—of memory, emotion, struggle, and imagination shaped by consciousness. When audiences speak of a film having soul, they are often pointing to an intangible authenticity: the sense that a story emerges from human vulnerability and intention.

AI, by contrast, operates through pattern recognition, probabilistic modeling, and training data derived from existing works. It does not experience grief, joy, or desire; it simulates their expression based on what it has learned. From this perspective, the argument that AI lacks soul is compelling. It produces outputs without inner life, without stakes, and without the existential grounding that defines human creativity.

However, this explanation alone is insufficient. After all, many tools used in filmmaking—from CGI to editing software—do not possess soul, yet they are widely accepted. The difference lies not in the absence of humanity within the tool, but in the degree of authorship it assumes. AI systems are increasingly capable of generating scripts, performances, and even directorial decisions with minimal human intervention. This shifts them from being instruments of creativity to potential creators themselves.

The discomfort arises not because AI cannot create meaningful work, but because it might. This is where the notion of threat becomes more salient. AI challenges long-standing assumptions about originality, ownership, and labor in the arts. If a machine can generate a screenplay indistinguishable from one written by a human, what happens to the value we assign to human effort? If performances can be synthesized, what becomes of actors?

The resistance from institutions like the Oscars may therefore be less about preserving artistic purity and more about safeguarding the economic and cultural structures built around human creators. There is also a philosophical dimension to this tension. Art has historically been one of the last domains where human uniqueness seemed unquestionable.

The rise of AI erodes that boundary, forcing a reconsideration of what creativity actually entails. If creativity is defined as recombination and reinterpretation of existing ideas, then AI is already participating in it. But if it is defined by intention, consciousness, and subjective experience, then AI remains fundamentally outside it. The debate over AI in the Oscars is, in many ways, a proxy for this unresolved question.

The exclusion of AI-generated content is not a definitive judgment on its capabilities but a reflection of a transitional moment. It signals an industry grappling with rapid technological change and attempting to draw lines before those lines become impossible to enforce. Whether AI lacks soul or threatens it depends largely on how one defines both terms. What is clear, however, is that the conversation is far from settled, and the boundaries between human and machine creativity will continue to blur in the years ahead.

A Look into MoonAgents Card by MoonPay

The convergence of artificial intelligence and decentralized finance has taken a tangible leap forward with the introduction of the MoonAgents Card by MoonPay. This development enables autonomous agents to spend USDC on the Solana network anywhere Mastercard is accepted.

What once sounded like a speculative vision—machines participating directly in economic activity—has now entered a practical phase, reshaping how value moves in a digitally native economy. Stablecoins like USDC have long promised frictionless, borderless payments, but their usability has largely remained confined within crypto-native environments.

By bridging Solana-based USDC with Mastercard’s global merchant network, MoonPay effectively dissolves one of the biggest barriers in crypto adoption: the gap between on-chain assets and off-chain commerce. The significance is not merely technical—it is structural. It allows digital capital to flow seamlessly into everyday transactions, from retail purchases to service payments, without requiring manual conversion or intermediaries.

What makes the MoonAgents Card particularly compelling is its focus on autonomous agents. These are not just passive wallets or payment tools; they are programmable entities capable of executing predefined tasks, making decisions, and now, conducting financial transactions. This introduces a new paradigm where AI-driven agents can operate as economic participants.

For instance, an agent could manage subscription services, pay for APIs, execute trading strategies, or even handle logistics payments—all in real time and without human intervention. By granting agents the ability to spend, we are effectively embedding financial agency into software. This transforms how businesses and individuals might interact with digital systems. Instead of manually approving every transaction, users can delegate spending authority to intelligent agents governed by rules, budgets, and objectives.
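To illustrate what rule- and budget-governed delegation could look like, here is a hypothetical client-side sketch in Python. It is not MoonPay's API; the policy fields, limits, and merchant categories are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class SpendingPolicy:
    """Rules a human sets once; every agent-initiated charge is checked against them."""
    daily_budget_usdc: float
    per_tx_limit_usdc: float
    allowed_categories: set[str]

class AgentWallet:
    """Client-side guard an agent consults before submitting a payment."""
    def __init__(self, policy: SpendingPolicy):
        self.policy = policy
        self.spent_today = 0.0

    def authorize(self, amount: float, category: str) -> bool:
        """Approve the charge only if it fits the delegated policy."""
        if category not in self.policy.allowed_categories:
            return False
        if amount > self.policy.per_tx_limit_usdc:
            return False
        if self.spent_today + amount > self.policy.daily_budget_usdc:
            return False
        self.spent_today += amount
        return True

# A human delegates: at most 50 USDC per day, 20 USDC per charge, API/cloud services only.
wallet = AgentWallet(SpendingPolicy(50.0, 20.0, {"api_fees", "cloud_services"}))
print(wallet.authorize(12.5, "api_fees"))    # True
print(wallet.authorize(30.0, "api_fees"))    # False: exceeds per-transaction limit
print(wallet.authorize(5.0, "electronics"))  # False: category not delegated
```

The point of the sketch is the shape of the delegation: the human defines the envelope once, and the agent transacts freely only inside it.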

The result is a more dynamic and responsive financial layer, where transactions occur at machine speed and scale. Solana’s role in this ecosystem is also critical. Known for its high throughput and low transaction costs, it provides the infrastructure necessary for frequent, micro-scale transactions that autonomous agents are likely to generate.

Traditional payment rails would struggle to support such volume efficiently, but Solana’s architecture makes it viable. When paired with USDC’s price stability, the combination becomes particularly suited for real-world commerce, where predictability and speed are essential.

Mastercard’s involvement adds another layer of legitimacy and reach. With millions of merchants globally, its network ensures that this innovation is not limited to niche use cases. Instead, it plugs directly into the existing financial system, allowing crypto-native value to be spent in familiar environments. This hybridization of decentralized and centralized systems may well define the next phase of financial evolution.

However, this shift also raises important questions. Granting spending power to autonomous agents introduces new dimensions of risk, particularly around security, governance, and accountability. Who is responsible if an agent misbehaves or is exploited? How are spending limits enforced, and what safeguards exist against malicious code? These concerns highlight the need for robust frameworks that combine cryptographic security with intelligent oversight.

Ultimately, the MoonAgents Card represents more than just a payment tool—it is a signal of where the digital economy is heading. As AI agents become more capable and crypto infrastructure more integrated, the line between human and machine participation in markets will continue to blur. Financial autonomy will no longer be exclusive to individuals and institutions; it will extend to software entities operating with precision, speed, and independence.

In this emerging landscape, the ability for agents to spend USDC anywhere Mastercard is accepted is not just a feature—it is a foundational shift. It marks the beginning of an economy where machines are not just tools, but active participants, transacting value in a system designed for both humans and algorithms alike.