
A Foray into Ghana’s Sandbox for Virtual Assets 

Ghana has officially launched a regulatory sandbox for virtual assets including cryptocurrency-related services, with 11 companies selected to participate in a year-long pilot program.

This marks a significant step in formalizing and regulating the country’s crypto and digital asset sector following the passage of the Virtual Asset Service Providers Act (2025) in December 2025. The Securities and Exchange Commission (SEC) of Ghana announced the participants on Tuesday (March 10, 2026). The sandbox allows these firms to test their products and services in a controlled, supervised environment.

The goal is to foster responsible innovation while ensuring investor protection, market integrity, and compliance with anti-money laundering (AML) and counter-terrorism financing standards. The 11 participating companies are: Africoin (tokenized gold), Blu Penguin (tokenized payment systems), Goldbod (custodian for gold-backed securities), Hanypay (exchange), Hyro Exchange GH Ltd (exchange), HSB Global (exchange), Koinkoin (exchange), Whitebits, Vaulta (global asset tokenization), Xchain (global asset tokenization), and Bsystem Ltd (global asset tokenization).

Companies that perform well and meet requirements after six months may qualify for full licenses, accelerating the path to regulated operations. This initiative builds on Ghana’s broader shift toward regulated digital assets, amid growing crypto adoption in the region (Ghana saw significant transaction volumes in recent years).

Note that this is separate from the Bank of Ghana’s ongoing work on the eCedi (its central bank digital currency pilot), which focuses on a retail CBDC rather than private cryptocurrencies. The move positions Ghana as a leader in West Africa for structured crypto regulation, balancing innovation with oversight.

Nigeria’s cryptocurrency regulations have evolved significantly in recent years, shifting from a restrictive stance to a structured, licensed framework. Cryptocurrency is legal in Nigeria, but it operates under tight regulation focused on investor protection, anti-money laundering (AML), counter-terrorism financing (CFT), taxation, and financial stability.

In 2021, the Central Bank of Nigeria (CBN) imposed a ban prohibiting banks from facilitating cryptocurrency transactions, pushing much activity to peer-to-peer (P2P) trading. This ban was lifted in December 2023, when the CBN issued guidelines allowing banks to open accounts for licensed Virtual Asset Service Providers (VASPs), acknowledging global trends and the need for oversight rather than prohibition.

The Securities and Exchange Commission (SEC) has led much of the formal regulation since 2022, with rules on digital assets. The cornerstone is the Investments and Securities Act (ISA) 2025, signed into law in March 2025 (sometimes referenced with 2026 effective dates in reports). The act classifies digital assets, including cryptocurrencies like Bitcoin, as securities; places them under the primary oversight of the SEC; and requires all crypto-related businesses to register and obtain licenses as VASPs or in related categories.

The SEC’s framework, building on the 2022 rules with updates and amendments in 2024–2025, covers issuance of digital assets (e.g., ICOs/token offerings), offering platforms (DAOPs), custodians (DACs), exchanges (DAXs), and other VASPs (brokers, intermediaries, etc.). In January 2026, the SEC raised minimum capital requirements significantly (e.g., ₦2 billion for digital asset exchanges and custodians, up from ₦500 million in some cases) to enhance resilience and investor protection.

Additional developments include the launch of the Virtual Asset Regulatory Council (VARC) by President Bola Tinubu, creating a coordinated body. The CBN and Nigeria Revenue Service (NRS) oversee non-security virtual assets under a Virtual Asset Regulatory Authority (VARA), while the SEC handles those classified as securities.

Under the Nigeria Tax Administration Act (NTAA) 2025 (effective January 1, 2026): Crypto profits are treated as chargeable gains, taxed up to 25% for individuals (progressive personal income tax bands, replacing the prior 10% flat capital gains tax). VASPs face 30% corporate tax on profits from fees.
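As a purely hypothetical arithmetic sketch of those headline rates (the article does not spell out the progressive income tax bands, so a flat 25% top rate is assumed here, and the amounts are invented):

```python
# Hypothetical sketch of the NTAA 2025 headline rates described above.
# Individual taxation is actually progressive; a flat 25% top rate is
# assumed here purely to illustrate the arithmetic.
INDIVIDUAL_TOP_RATE = 0.25   # up to 25% on individual chargeable gains
VASP_CORPORATE_RATE = 0.30   # 30% corporate tax on VASP fee profits

def gain_tax_at_top_rate(chargeable_gain: float) -> float:
    """Tax on a crypto gain at the assumed flat top rate."""
    return chargeable_gain * INDIVIDUAL_TOP_RATE

def vasp_fee_tax(fee_profit: float) -> float:
    """Corporate tax on a VASP's profit from fees."""
    return fee_profit * VASP_CORPORATE_RATE

# A hypothetical ₦2,000,000 gain and an equal VASP fee profit.
print(gain_tax_at_top_rate(2_000_000))  # 500000.0
print(vasp_fee_tax(2_000_000))
```

For comparison, the prior 10% flat capital gains tax would have put only ₦200,000 on the same hypothetical gain, which is why the change matters for active traders.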

All transactions must link to verified identities: National Identification Number (NIN) and Tax Identification Number (TIN). VASPs (including international platforms like Binance) must implement strict KYC, report monthly transactions, retain records for 7+ years, and flag suspicious/large activities to the Nigerian Financial Intelligence Unit (NFIU).

Exchanges submit user transaction data to tax authorities. This aims to curb evasion, integrate crypto into the formal economy, and enable traceability. Users can buy, sell, hold, and trade crypto legally via SEC-licensed platforms. P2P trading persists but faces increasing scrutiny.

Banks can serve licensed VASPs but are restricted from direct crypto dealing. International platforms have limited naira services due to prior pressures; users must comply with NIN/TIN linking for compliance. Non-compliant or unlicensed operations risk enforcement, including freezes or shutdowns.

Focus areas: AML/CFT compliance, investor disclosures, market integrity, and preventing manipulation/volatility. Nigeria positions itself as a leader in regulated crypto adoption in Africa, balancing innovation with oversight—similar to Ghana’s recent sandbox but with a stronger securities classification and tax integration.

Fintech Evolution Notes a “Centralization through Democratization”

In many fintech platforms, especially super-apps, embedded finance providers, neobanks, or centralized digital banking ecosystems, customer journeys and revenue streams become highly integrated.

A single app or platform might handle payments, lending, investments, insurance, subscriptions, rewards, and more. This creates centralization of user data, identity, transactions, and interactions under one roof (or one data layer). While this centralization drives better user experience, faster innovation, and higher retention, it often distorts or skews revenue attribution in several ways.

First, multi-product or bundled revenue makes clean source attribution difficult.

A user might sign up via a marketing campaign for free P2P transfers, but later activates high-margin products like buy-now-pay-later (BNPL), crypto trading, or premium subscriptions. The initial acquisition channel gets credit for the sign-up, but the real revenue often comes from downstream cross-sells or usage-based fees that happen months later. Last-click or simple models massively under- or over-credit channels.

Platform-level economics obscure channel / partner contribution. In centralized fintechs (think Revolut, Nubank, Chime, or super-apps like WeChat Pay / Alipay), revenue frequently comes from interchange, float, lending spreads, premium tiers, or data-driven upsell — not always directly tied to a specific paid ad click, affiliate referral, or organic search.

When everything flows through one centralized ledger and identity system, it’s hard to trace incremental revenue back to fragmented marketing or partnership efforts.

There is also a data-silos-versus-over-centralization paradox: ironically, extreme centralization without strong governance can create new attribution problems. When all data lives in one place but definitions of “customer,” “conversion,” “lifetime value,” or “attributable revenue” aren’t consistently governed across teams (marketing, product, finance), models still disagree.

Recent discussions highlight how fragmented governance, even inside a centralized system, leads to conflicting attribution results, as teams interpret the same unified data differently.

Winner-takes-all dynamics also amplify the skew. Research on fintech evolution notes a “centralization through democratization” pattern: digital tools lower barriers, but scale advantages and network effects lead to concentrated market power among a few large platforms.

These giants capture disproportionate revenue, but attributing that revenue to specific digital channels, features, or partners becomes opaque because so much value accrues at the platform level rather than at individual touchpoints. In practice, this means marketing and growth teams struggle to prove ROI as budgets get cut or misallocated.

Product teams over-invest in features that look high-engagement but drive low incremental revenue. Finance / investor reporting shows strong top-line growth but unclear unit economics or channel profitability.

Many fintechs are countering this by moving toward more sophisticated multi-touch attribution, incrementality testing, unified customer data platforms with strong governance, and ML-driven behavioral attribution that credits downstream revenue events more intelligently — rather than relying on simplistic digital-first models.
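Of these remedies, incrementality testing is the most mechanical to illustrate: compare a treated population against a randomized holdout, and credit a channel only with the lift it creates. A minimal sketch, with all counts hypothetical:

```python
def incremental_lift(treated_conv: int, treated_n: int,
                     holdout_conv: int, holdout_n: int) -> float:
    """Estimate conversions attributable to a campaign as the
    treatment conversion rate minus the holdout (control) rate,
    scaled back to the treated population."""
    rate_treated = treated_conv / treated_n
    rate_holdout = holdout_conv / holdout_n
    return (rate_treated - rate_holdout) * treated_n

# Hypothetical: 5% of targeted users convert vs. 3% of the holdout,
# so only ~2 points of the 5% are truly incremental to the campaign.
lift = incremental_lift(500, 10_000, 150, 5_000)
print(round(lift))  # 200
```

The same holdout logic underpins the ML-driven behavioral models mentioned above; they just estimate the counterfactual conversion rate from data instead of from an explicit control group.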

The centralization that makes digital fintech so powerful is exactly what makes precise revenue attribution harder than in traditional, siloed banking or pure e-commerce. It’s a feature, not a bug — but one that requires mature analytics and governance to manage.

Multi-touch attribution (MTA) is a marketing analytics approach that assigns credit to multiple touchpoints (interactions) a customer has with your brand across their entire journey toward a conversion — such as a sign-up, deposit, purchase, subscription activation, or revenue-generating event.

Unlike single-touch models (e.g., first-click or last-click attribution), which give 100% of the credit to just one interaction, MTA recognizes that modern customer journeys — especially in digital fintech — involve many steps across channels like paid ads, organic search, email, social media, referrals, app notifications, content, and in-app features.

MTA distributes credit fractionally across these touchpoints to show which ones truly drive value. This is particularly relevant in centralized digital fintech platforms, where users often enter via low-intent channels (e.g., a free transfer promo) but generate most revenue later through high-margin products (lending, premium tiers, investments).

MTA helps avoid over-crediting the “last click” while revealing the full role of earlier nurturing touchpoints.
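As a minimal sketch of the idea (channel names and revenue figures are hypothetical, not from any particular platform), a linear MTA model splits each conversion’s revenue equally across every touchpoint in the journey:

```python
from collections import defaultdict

def linear_mta(journeys):
    """Linear multi-touch attribution: each conversion's revenue is
    divided equally among all touchpoints in that customer's journey."""
    credit = defaultdict(float)
    for touchpoints, revenue in journeys:
        share = revenue / len(touchpoints)
        for channel in touchpoints:
            credit[channel] += share
    return dict(credit)

# Hypothetical journeys: (ordered touchpoints, revenue at conversion).
journeys = [
    (["paid_ad", "email", "in_app_promo"], 90.0),  # BNPL activation
    (["organic_search", "referral"], 40.0),        # premium tier upgrade
    (["paid_ad"], 10.0),                           # transfer fee
]

credits = linear_mta(journeys)
# Last-click would hand in_app_promo the full 90.0; linear MTA instead
# credits paid_ad 30.0 + 10.0 = 40.0, email 30.0, in_app_promo 30.0, etc.
print(credits)
```

Position-based or time-decay models replace the equal share with weighted shares, and ML-driven approaches learn those weights from observed downstream revenue; the bookkeeping structure stays the same.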

Upcoming Tekedia Programs: AI, Business, and Investment Learning Opportunities

Greetings! We are pleased to announce the upcoming start dates for several Tekedia Institute programs:

Tekedia AI Technical Lab – Begins Saturday, March 14, 2026. Register here.

Tekedia Mini-MBA – Registration has opened for the next edition starting in June 2026. Take advantage of the early bird discounts by registering here.

Python Coding with AI for Agentic AI Development – If you have completed Tekedia AI Lab, this program will help you conceptualize solutions and build Python-based AI agents to solve real-world problems. The course begins on April 11, 2026. Register here.

In addition, we offer several other programs including Tekedia AI in Business, Tekedia Startup Masterclass, Tekedia Investment and Portfolio Management, and more. You can explore the full list of Tekedia programs through this link.

Memory Chip Crisis: Executives Say A Decades-Old Boom-And-Bust Cycle May Finally Be Breaking Down

The global race to build artificial intelligence infrastructure is reshaping the economics of the memory chip industry, driving record share gains for manufacturers and prompting executives to say a decades-old boom-and-bust cycle may finally be breaking down.

Shares of Micron Technology have surged more than 370% over the past year as demand for AI-related memory accelerates. Meanwhile, SanDisk — which returned to the public market in February last year after being spun out of Western Digital — has soared more than 1,100%, highlighting investor enthusiasm for companies tied to the AI supply chain.

For much of the past three decades, memory manufacturers operated under one of the semiconductor industry’s most volatile cycles. Prices for DRAM and NAND storage would spike during supply shortages, prompting producers to rapidly expand manufacturing capacity. The resulting oversupply would then drive prices down, triggering sharp downturns before the next recovery.

Executives across the technology sector now say artificial intelligence is fundamentally altering that dynamic.

“We will continue to raise prices because the industry will continue to raise prices,” said Antonio Neri, chief executive of Hewlett Packard Enterprise. “There is not enough supply for demand.”

The shift is being driven by the unprecedented computing requirements of modern AI systems. Training large language models and generative AI platforms requires vast clusters of processors working simultaneously, supported by massive pools of ultra-fast memory. That architecture is dramatically more memory-intensive than traditional computing environments used for enterprise software, personal computers, or smartphones.

High-bandwidth memory, commonly known as HBM, has emerged as one of the most critical components in AI hardware. The technology allows chips to access data far faster than conventional DRAM, making it essential for training large AI models and running advanced inference workloads.

Demand for HBM has surged so rapidly that technology companies are rushing to secure long-term supply contracts.

SK Hynix, one of the world’s largest memory manufacturers and a major supplier of HBM, said the industry is undergoing structural changes as customers increasingly prefer multi-year supply agreements.

“The company’s customers, including hyperscalers, have increasingly preferred long-term contracts over the one-year agreements that were more common in the past,” an SK Hynix spokesperson said.

Micron Technology has reported a similar shift, telling CNBC that customers are now willing to sign long-term agreements to lock in supply as competition intensifies for AI hardware components.

Those customers include some of the world’s largest technology companies, often referred to as hyperscalers because of the massive scale of their cloud computing infrastructure.

Executives say these companies are reserving memory capacity years in advance.

On the latest earnings call for Broadcom, Chief Executive Hock Tan said the company has already secured supply commitments for key components through 2028 as demand for AI chips and systems accelerates.

Technology giants building their own AI hardware are also confronting supply constraints. Meta Platforms on Wednesday unveiled a new internally designed AI chip as part of its push to expand computing capacity for artificial intelligence workloads.

But even as the company ramps up hardware development, it remains concerned about securing sufficient memory.

“We’re absolutely worried about HBM supply,” said Yee Jiun Song, vice president of engineering at Meta. “But we think that we have secured our supply for what we’re planning to build out.”

The pressure on memory supply is being amplified by a massive wave of capital spending across the technology industry. Major cloud providers such as Amazon, Microsoft, Alphabet, and Meta are investing hundreds of billions of dollars in AI data centers to support the growing demand for generative AI services.

Each of those facilities requires enormous volumes of memory chips to feed data to powerful processors and graphics chips that train and run AI models.

As hyperscalers absorb increasing amounts of available supply, analysts say the balance of the memory market is shifting away from consumer electronics. Manufacturers of smartphones, PCs, and other consumer devices are finding themselves competing with data center operators for the same components, often at higher prices.

An executive at Seagate Technology told the South China Morning Post that memory price increases could become “the new normal” for the next several years. The long lead times required to expand semiconductor manufacturing capacity are reinforcing those expectations.

Building advanced memory fabrication plants costs tens of billions of dollars and can take several years to complete, meaning supply cannot quickly adjust to sudden surges in demand. As a result, industry executives believe meaningful relief from supply constraints may not arrive until at least 2027, when new facilities currently under construction begin operating at full scale.

The emergence of long-term contracts is also changing how the memory industry manages supply and pricing. Historically, most memory was sold on short-term contracts or even spot markets, leaving prices highly sensitive to shifts in demand.

Multi-year agreements with hyperscalers, by contrast, provide greater revenue visibility for manufacturers while ensuring customers receive priority access to scarce components.

That change could smooth the dramatic price swings that once defined the sector. For investors, the sharp rally in memory stocks is an indication that markets are increasingly convinced the industry is entering a new phase — one powered by sustained demand from artificial intelligence rather than the cyclical consumer electronics markets that dominated the past.

Nvidia invests $2bn in AI cloud firm Nebius as chip giant deepens control of the AI infrastructure boom

Nvidia said on Wednesday it will invest $2 billion in artificial intelligence cloud provider Nebius, extending the chipmaker’s aggressive push to shape the infrastructure powering the global AI boom.

A filing with the U.S. Securities and Exchange Commission showed Nvidia agreed to purchase shares representing roughly an 8.3% stake in the Amsterdam-based company at $94.94 per share. Nebius, which is listed on the Nasdaq, surged nearly 14% following the disclosure, trading around $109.72 in afternoon dealings.

The deal underscores how Nvidia, now widely viewed as the central supplier of hardware behind artificial intelligence, is increasingly investing directly in companies that build and operate AI computing infrastructure.

Nvidia has been embedding itself across the rapidly expanding AI ecosystem, from semiconductor manufacturing to data centers and cloud platforms that deploy its chips, and the Nebius investment is believed to reflect that strategy.

Demand for AI computing power has surged since the explosion of generative AI tools, forcing cloud providers and startups to build massive data-center networks capable of running advanced machine-learning models.

Nvidia’s graphics processing units (GPUs) remain the dominant chips used to train and run those systems, giving the company extraordinary leverage over the industry’s supply chain.

By investing in infrastructure providers such as Nebius, Nvidia can both accelerate the deployment of its hardware and ensure the continued expansion of AI computing capacity globally.

Nebius’ Massive Data-Center Expansion Plans

Nebius said it plans to deploy more than 5 gigawatts of data-center capacity by 2030, a scale that illustrates the enormous electricity demands associated with modern AI workloads. That level of capacity is roughly equivalent to the electricity consumption of more than four million U.S. households, highlighting the rapidly growing energy footprint of artificial intelligence infrastructure.

To support the build-out, Nebius has significantly increased its spending on infrastructure. The company reported capital expenditure of $2.1 billion in the December quarter, up sharply from $416 million a year earlier, as it accelerates expansion of its computing capacity.

Rise Of The “Neocloud” Sector

Nebius belongs to a new category of AI infrastructure providers sometimes described as “neocloud” companies. Unlike traditional cloud giants that serve a broad mix of industries and computing workloads, these firms focus almost entirely on high-performance infrastructure optimized for artificial intelligence.

Other players in the segment include CoreWeave, which has rapidly gained prominence through multibillion-dollar deals supplying AI computing power to large technology companies.

Nebius and its peers are positioning themselves as specialized providers capable of delivering massive GPU clusters to train advanced models.

That model has already attracted major clients. Nebius has secured large contracts with U.S. technology companies, including a $17 billion infrastructure agreement with Microsoft and a $3 billion deal with Meta Platforms, underscoring the scale of investment flowing into AI computing.

Nvidia’s Expanding Investment Portfolio

The Nebius investment adds to a growing list of high-profile deals through which Nvidia is helping finance the infrastructure underpinning artificial intelligence.

Last year, the company agreed to deploy at least 10 gigawatts of AI systems for OpenAI and later announced a $30 billion investment in the startup, deepening its role in the development of advanced AI models.

Such moves illustrate how Nvidia is evolving beyond a chip supplier into a key financial and strategic backer of companies building the next generation of AI platforms.

However, Nvidia’s growing web of investments has also raised concerns among some analysts and investors.

Many of the companies receiving funding from Nvidia are also major buyers of its chips, creating what critics describe as a form of circular financing in which Nvidia effectively helps fund the infrastructure that purchases its hardware.

Supporters argue the strategy accelerates the rollout of global AI capacity and ensures that computing supply keeps pace with surging demand.

But skeptics warn it could concentrate too much influence over the AI ecosystem in the hands of a single supplier.

Nvidia’s chief executive Jensen Huang framed the Nebius partnership as part of the next phase of artificial intelligence development.

“Nebius is building an AI cloud designed for the agentic era,” Huang said in a statement, referring to a new generation of AI systems capable of performing autonomous tasks rather than simply responding to prompts.

The partnership, he added, will help scale the company’s infrastructure to meet “surging global demand for intelligence.”

The deal highlights the massive financial and energy investments now required to support the AI revolution. Global spending on AI data centers is expected to reach hundreds of billions of dollars in the coming years as technology companies race to build the computing power required for increasingly sophisticated models.

For Nvidia, whose chips sit at the center of that expansion, strategic investments like the Nebius deal help reinforce its position not only as the industry’s dominant hardware supplier but also as a key architect of the infrastructure shaping the future of artificial intelligence.