
NALA Secures Bank of Ghana Approval to Launch Compliant Remittance Services in Partnership With BigPay

NALA, a global fintech company focused on cross-border payments and remittances, has received formal regulatory approval from the Bank of Ghana, marking a major milestone in its expansion across Africa.

The Bank of Ghana has issued a Letter of No Objection (LONO) to NALA and its licensed local partner, BigPay, authorizing both companies to operationalise remittance services within Ghana’s regulated financial system.

This approval strengthens NALA’s mission to “Build Payments for the Next Billion” by expanding secure, affordable, and seamless cross-border payment solutions across Africa and Asia.

The Bank of Ghana’s endorsement of the partnership with BigPay reinforces NALA’s focus on building trusted, compliant, and resilient financial infrastructure, enabling the company to officially operate and scale remittance services for individuals and businesses in Ghana.

Through this collaboration, BigPay’s globally recognised payments network, bank-grade APIs, and advanced settlement capabilities will support secure, fast, and reliable payouts to local banks and mobile wallets nationwide.

Cross-border remittances remain a critical financial lifeline for millions of people across Africa, yet high transaction fees continue to drain billions of dollars from the continent’s economy. NALA is addressing this challenge by delivering transparent, secure, and cost-efficient services designed to reduce losses associated with international money transfers.

Commenting on this, Benjamin Fernandes, Founder and CEO of NALA, said,

“We are thrilled to receive official approval from the Bank of Ghana to operationalise our remittance flows in partnership with BigPay. This milestone reflects our deep commitment to global regulatory compliance, strong local partnerships, and delivering exceptional value to the Ghanaian market. At NALA, we believe financial infrastructure must be built alongside trusted licensed institutions. BigPay’s capabilities and reputation make them a natural partner for our mission. With this approval, we’re not just expanding access; we’re strengthening the resilience, transparency, and inclusivity of Ghana’s financial ecosystem.”

Also commenting, Isaac Tetteh, Managing Director of BigPay, said,

“We are delighted to partner with NALA to bring enhanced, customer-centric remittance solutions to the Ghanaian market. This approval from the Bank of Ghana opens a new chapter of opportunity, not just for our two organizations but for the millions of individuals and businesses who rely on secure, affordable, and efficient financial services. At BigPay, we are committed to powering innovation through strong partnerships, and NALA’s vision aligns perfectly with our mission to deepen financial inclusion and expand digital payment capabilities across Ghana. We are excited about the growth possibilities ahead and look forward to delivering real value to customers through this collaboration.”

As global mobility increases and families remain connected across borders, NALA positions itself as a reliable and user-friendly platform for international money transfers. The company caters to diaspora communities supporting loved ones at home, as well as individuals and businesses managing cross-border financial needs, by simplifying the remittance process and making it more affordable and accessible.

Founded by Benjamin Fernandes, NALA represents a new generation of fintech companies challenging the high costs and slow speeds of traditional remittance systems, particularly between developed economies and Africa. Through mobile-first digital solutions and direct payment integrations, NALA aims to make cross-border money transfers faster, cheaper, and more dependable.

With licensed operations and regulatory approvals spanning Africa, Europe, the UK, and the US, and with expanding infrastructure across Asia, NALA continues to build a trusted global payments network designed for the next billion users.

Survey Finds More Organizations Willing to Pay for AI in 2026, Strengthening Hope for Long-Term Returns

For most of its evolution, the AI industry has been mired in questions about profitability, as concerns grow over whether companies are genuinely prepared to pay for artificial intelligence at scale.

Nearly all major AI companies are still operating at a loss, given the volume of investment flowing into infrastructure and how little revenue there is to show for it.

Now, a new survey of chief information officers by RBC Capital Markets suggests that the inflection point may have arrived, bringing with it a powerful signal for investors worried about an AI bubble.

RBC polled 117 IT leaders across companies with annual revenues ranging from under $250 million to more than $25 billion. Fully 90% of respondents said their organizations plan to increase AI spending in 2026. More importantly, the survey indicates that this spending is no longer speculative. Institutions are moving from experiments to paid, production-level deployments that carry recurring costs and measurable business expectations.

“Overall, we came away increasingly optimistic of macro and budget stabilization taking shape in 2026 and encouraged by the pace of early GenAI adoption,” RBC analysts wrote in a research note.

One of the strongest signals comes from how AI is being funded. Ninety percent of CIOs said their organizations are now creating new, dedicated budgets specifically for generative AI and large language model projects, up from 85% last year. That shift suggests AI spending is additive rather than cannibalizing other IT investments — a critical distinction for assessing whether current infrastructure buildouts will eventually pay off.

This matters because markets have spent much of 2024 and 2025 debating whether hyperscaler spending on data centers, custom chips, and networking was getting ahead of enterprise demand. The RBC data suggests that lag is now closing. As more institutions formally allocate budgets and sign contracts, heavy upfront investment by AI vendors and cloud providers is increasingly likely to translate into durable revenue streams from 2026 onward.

The pace of operational rollout reinforces that view. Sixty percent of respondents said their organizations already have AI initiatives running in production, up sharply from 39% a year earlier. Another 32% expect to reach production within six months. In effect, more than nine in ten companies surveyed are either actively paying for AI systems today or preparing to do so imminently.

This transition undercuts the core argument behind AI bubble concerns — that enterprise customers would remain stuck in pilot mode, unwilling to commit real money once experimentation gave way to cost scrutiny. Instead, CIOs now describe AI as the single largest driver of incremental software spending next year, ahead of cybersecurity and IT service management. In open-ended responses, executives repeatedly cited AI as their top investment priority for 2026, often paired with spending on infrastructure, automation, and data modernization needed to support deployment at scale.

The use cases are also maturing. Seventy-six percent of CIOs said their AI strategies are now aimed at both cost reduction and revenue generation, signaling a shift from efficiency-only narratives toward competitive and growth-oriented applications. That evolution strengthens the case that AI is becoming embedded in core business models rather than remaining a discretionary technology.

For the AI industry, this shift carries broader implications. As more institutions commit to paid AI services, the likelihood that today’s heavy spending will deliver returns improves materially. It suggests that revenue growth may lag infrastructure investment by a year or two, but not indefinitely — a dynamic consistent with previous technology cycles such as cloud computing.

The findings are particularly significant for OpenAI, which sits at the center of the generative AI ecosystem. The company carries a reported valuation of around $500 billion and has faced persistent scrutiny over its path to profitability amid enormous computing and infrastructure costs. A rising share of enterprises willing to pay for AI tools, models, and APIs strengthens the revenue side of that equation, helping to narrow the gap between growth and break-even.

While concerns around data privacy and governance remain the most cited risks among CIOs, those issues are no longer acting as adoption blockers. Instead, organizations appear to be absorbing them as part of the broader cost of doing business in an AI-driven environment.

Taken together, the RBC survey paints a picture of an industry moving past its most speculative phase. As institutional buyers open their wallets and embed AI into production systems, the narrative begins to shift from hype toward monetization. For investors, that evolution offers a clearer answer to the revenue question that has dominated the past year.

Broadcom Selloff Highlights Rising AI Jitters as Investors Question How Long the Boom Can Last

Broadcom delivered exactly what Wall Street typically rewards: strong earnings, robust guidance, and eye-catching growth tied to artificial intelligence. Yet the market response was brutal.

Shares of the chipmaker plunged 11% on Friday, marking their worst single-day drop since January, as investors abruptly pulled back from some of the most crowded names in the AI trade. The selloff spilled across the sector. Oracle fell another 4% a day after tumbling 10% following its own earnings report, while AI-focused infrastructure players such as CoreWeave also came under heavy pressure.

The broader market felt the impact. The Nasdaq slid about 1.4%, while the S&P 500 dropped close to 1%, underscoring how deeply AI-linked stocks have become intertwined with overall market sentiment in 2025.

At the heart of the move is a growing unease that the AI infrastructure boom, which has powered both corporate profits and stock prices for the past two years, may be entering a more complicated phase. Hyperscalers are still pouring money into data centers and custom chips, but investors are increasingly asking how sustainable the pace of spending is, and at what cost to margins and balance sheets.

Broadcom sits squarely in the middle of that debate. The company has been one of the biggest beneficiaries of AI’s rise, supplying custom chips and networking hardware to some of the world’s largest technology firms. Its market capitalization roughly doubled in each of the past two years before extending gains again in 2025, leaving the stock up about 75–80% year to date before Friday’s drop.

“This stock is up 75–80% year to date. You’re seeing a little bit of a pullback,” Vijay Rakesh, an analyst at Mizuho, said on CNBC.

He added that the firm would be a buyer on the weakness and raised its price target to $450 from $435. Broadcom was trading below $364 by Friday afternoon.

The numbers themselves were hard to fault. Broadcom reported quarterly revenue of $18.02 billion, beating the $17.49 billion consensus estimate compiled by LSEG. Revenue grew 28% year on year, driven largely by a 74% surge in AI chip sales. Adjusted earnings per share came in at $1.95, comfortably ahead of expectations of $1.86.

Chief executive Hock Tan said the momentum is set to continue. Broadcom expects AI chip sales in the current quarter to double from a year earlier to $8.2 billion, supported by demand for both custom AI accelerators and networking semiconductors used in large-scale data centers.

The company also disclosed a massive $73 billion backlog of AI-related orders over the next 18 months, highlighting the depth of commitments from customers racing to secure compute capacity. That backlog includes $21 billion in orders from Anthropic, which Broadcom named as a key customer.

Still, investors zeroed in on the fine print. One major concern is margin pressure, at least in the near term. Chief financial officer Kirsten Spears warned on the earnings call that “gross margins will be lower” for certain AI chip systems because Broadcom must purchase more components upfront to build complete server racks. In a market where expectations are already sky-high, even temporary margin compression can be enough to trigger a selloff.

There was also disappointment around OpenAI. While Broadcom has touted a multibillion-dollar agreement announced in October, Tan tempered expectations, telling investors that “we do not expect much in ’26,” cooling hopes that the deal would materially boost revenue in the near term.

Bernstein analyst Stacy Rasgon described the market reaction as driven by “AI angst” rather than fundamentals.

“Frankly we aren’t sure what else one could desire as the company’s AI story continues to not only overdeliver but is doing it at an accelerating rate,” Rasgon wrote, reiterating a buy rating and lifting his price target.

The skepticism has been even sharper for Oracle. Despite beating earnings expectations, the company missed on revenue and failed to provide enough clarity on how it plans to finance its aggressive AI-driven infrastructure expansion. The stock is now down more than 40% from its September record, as investors grow wary of the heavy debt load required to keep pace in the AI arms race.

CoreWeave offers another cautionary tale. The data-center operator, which rents out AI-focused cloud infrastructure, fell 9% on Friday and has lost more than half its value since peaking in June, reflecting concerns about capital intensity and long-term returns.

Taken together, the moves point to a market that is no longer willing to reward AI exposure at any price. The theme that has dominated equities and corporate strategy is still intact, but investors are becoming more selective, focusing less on headline growth and more on margins, cash flow, and balance-sheet risk.

However, some analysts believe the selloff may prove temporary for Broadcom if demand continues to surge as forecast.

Trump Moves to Override State AI Laws, Triggering Fierce Federalism Clash and Backlash From Both Parties

President Trump on Thursday signed an executive order that seeks to sharply curtail the power of U.S. states to regulate artificial intelligence, marking one of the most aggressive federal interventions yet in the rapidly expanding AI sector.

The order authorizes the U.S. attorney general to challenge and potentially overturn state laws deemed inconsistent with what the administration calls “the United States’ global A.I. dominance,” placing dozens of existing safety, consumer protection, and transparency measures in legal jeopardy.

Under the order, states that refuse to roll back targeted AI laws could also face financial pressure. Trump directed federal agencies to withhold funds tied to broadband expansion and other infrastructure programs from states that maintain regulations viewed as obstructive. The threat adds a fiscal lever to what is already shaping up as a major constitutional confrontation between federal authority and state police powers.

Trump framed the move as a necessary step to eliminate what he described as a confusing and burdensome regulatory landscape. Speaking in the Oval Office alongside senior officials, including David Sacks, the administration’s AI and crypto czar, Trump argued that innovation could not thrive under a fragmented system of state rules.

“It’s got to be one source,” he said. “You can’t go to 50 different sources.”

He also tied the order directly to geopolitical competition, repeatedly citing the need for the United States to stay ahead of China in artificial intelligence.

The executive action reflects Trump’s broader realignment toward Silicon Valley and the AI industry. Over the past year, his administration has issued multiple orders designed to ease regulatory scrutiny, expand private-sector access to federal data, streamline permitting for data centers and power infrastructure, and loosen restrictions on exporting advanced AI chips. Trump has also publicly praised leading technology executives and elevated Sacks — a venture capitalist with deep ties to the tech sector — into a central policy role with significant influence over AI governance.

However, the order has already sparked widespread bipartisan resistance, with legal experts warning that it may exceed the president’s constitutional authority. States and consumer advocacy groups are expected to challenge the measure in court, arguing that only Congress can preempt state laws on this scale.

Several legal scholars have noted that while federal agencies can set standards in specific domains, a blanket attempt to invalidate state statutes through executive action is likely to face serious judicial scrutiny.

Even some voices aligned with Trump’s ideological camp expressed concern. Wes Hodges, acting director of the Center for Technology and the Human Person at the Heritage Foundation, said that if the administration succeeds in undermining state rules, it has a responsibility to replace them with a robust national framework.

“Doing so before establishing commensurate national protections is a carve-out for Big Tech,” Hodges said, underscoring fears that the order prioritizes speed and scale over public safeguards.

The stakes are high because generative AI systems have moved rapidly from experimental tools to mass-market products. Technologies capable of generating realistic text, voices, images, and video are now embedded across finance, education, healthcare, marketing, and social media. At the same time, documented harms have multiplied, including deepfake political content, financial scams, data misuse, and cases in which chatbots have provided harmful advice to minors.

In the absence of comprehensive federal legislation, states have stepped in aggressively. According to the National Conference of State Legislatures, all 50 states and U.S. territories introduced AI-related bills this year, and 38 states enacted roughly 100 new laws. These measures vary widely but generally aim to impose transparency requirements, restrict certain uses of AI, and hold companies accountable for foreseeable harms.

California adopted one of the most consequential laws, requiring developers of the largest AI models — including OpenAI’s ChatGPT and Google’s Gemini — to conduct safety testing and disclose the results. South Dakota moved to curb election-related manipulation by banning AI-generated deepfake videos in political ads within months of an election. Utah, Illinois, and Nevada passed laws governing AI chatbots used in mental health contexts, mandating user disclosures and limiting how sensitive data can be collected and used.

Child safety has emerged as a particularly active area of state regulation. Several states have passed laws aimed at protecting minors from AI-powered chatbots and algorithm-driven platforms, especially where AI tools simulate emotional support or companionship. Trump’s executive order states that it will not pre-empt child-safety laws, but it does not define how that exemption will be applied, leaving advocates concerned that protections could still be weakened through litigation or narrow interpretations.

“Blocking state laws regulating A.I. is an unacceptable nightmare for parents and anyone who cares about protecting children online,” said Sarah Gardner, chief executive of Heat Initiative, a child-safety advocacy group.

She warned that states have become the primary line of defense as federal action has lagged.

The AI industry, for its part, has mounted an intense lobbying campaign against state-level regulation. Companies argue that complying with dozens of different legal regimes raises costs, slows product development, and discourages startups. Earlier this year, lawmakers attempted to include a ten-year moratorium on state AI laws in a major domestic policy bill, but the proposal was abandoned after strong bipartisan opposition. Venture capitalist Marc Andreessen captured industry sentiment in a social media post last month, calling the state-by-state approach “a startup killer.”

Trump’s order effectively revives that fight through executive authority, raising the prospect of prolonged legal battles that could inject uncertainty into the AI market. While the administration argues that centralization will accelerate innovation and strengthen U.S. competitiveness, critics counter that the absence of binding federal standards leaves consumers, workers, and children exposed at a time when AI systems are becoming more powerful and less transparent.

Beyond domestic policy, the order also signals how Trump views AI as a strategic asset. By tying deregulation to competition with China, the administration is framing AI governance not as a consumer-protection issue but as a national power contest. That framing may resonate with parts of Congress, but it also heightens tensions with states that see immediate local risks from unchecked AI deployment.

As the order moves toward inevitable court challenges, the outcome could reshape the balance of power over technology regulation in the United States. If Trump prevails, states may find their ability to respond quickly to emerging AI harms sharply reduced. If the courts strike the order down, pressure will mount on Congress to finally craft a national AI framework that balances innovation with enforceable safeguards.

Either way, the executive order marks a turning point. It crystallizes a growing divide between federal ambitions to dominate AI globally and state-level efforts to manage its risks locally.

Predictive Oncology Becomes Axe Compute, Expanding Into High-Performance AI Infrastructure

Predictive Oncology (NASDAQ: POAI) announced that it has changed its name to Axe Compute Inc., with its common stock to begin trading on Nasdaq under the ticker symbol AGPU on December 12, 2025.

Axe Compute will continue to operate its AI-driven drug discovery business and expand into high-performance enterprise AI infrastructure, addressing rising global demand for predictable, scalable compute capacity across enterprise AI workloads.

The decision reflects Axe Compute’s fundamental observation about the current AI landscape: the bottleneck to AI progress is increasingly infrastructure, not algorithms. While attention concentrates on model capabilities and benchmark performance, Axe Compute believes the enterprises building AI applications face a more immediate problem: access to the compute required to train and run those models at all.

The Infrastructure Gap

Axe Compute believes it will be able to utilize its ATH strategic compute reserve and its agreement with Aethir to secure GPU capacity and services on the Aethir network to support compute demand.

GPU procurement timelines have extended to 40-52 weeks for high-end hardware as centralized cloud providers face capacity constraints that create multi-month deployment queues. Meanwhile, global enterprise spending on AI cloud services is projected to exceed $400 billion in 2025, with demand continuing to outpace supply.

Axe Compute will operate as an active infrastructure company rather than a passive treasury. The distinction matters: Axe Compute will acquire digital assets tied to AI infrastructure, beginning with capacity on the Aethir network, and deploy those assets to serve enterprise clients under service contracts.

Axe Compute believes it will be able to derive revenue from token rewards and the margin captured between infrastructure acquisition cost and enterprise billing rates. Axe Compute is not Aethir; Aethir operates the underlying network.

Axe Compute will monetize access to that network for enterprise buyers who require guaranteed capacity, service-level agreements, and a counterparty that operates within traditional corporate and regulatory structures.
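
To make the revenue model described above concrete, the following is a minimal, purely illustrative sketch in Python. Every figure and name in it is a hypothetical placeholder invented for the example, not Axe Compute’s or Aethir’s actual pricing, reward rates, or contract terms.

# Illustrative sketch of the two revenue streams described above:
# margin captured on resold capacity, plus network token rewards.
# All numbers are hypothetical placeholders, not actual Axe Compute or Aethir figures.

def estimated_revenue(gpu_hours, acquisition_cost_per_hour,
                      enterprise_rate_per_hour, token_rewards_usd):
    """Rough breakdown: spread on resold GPU capacity plus token rewards."""
    margin_capture = gpu_hours * (enterprise_rate_per_hour - acquisition_cost_per_hour)
    return {
        "margin_capture_usd": margin_capture,
        "token_rewards_usd": token_rewards_usd,
        "total_usd": margin_capture + token_rewards_usd,
    }

# Hypothetical example: 10,000 GPU-hours acquired at $1.25/hr and billed at $2.00/hr,
# with $2,500 in token rewards earned over the same period.
print(estimated_revenue(10_000, 1.25, 2.00, 2_500))
# -> {'margin_capture_usd': 7500.0, 'token_rewards_usd': 2500, 'total_usd': 10000.0}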

Infrastructure as the Enabling Layer

Axe Compute’s thesis rests on a structural view of AI development: breakthroughs in models depend on the infrastructure that makes experimentation possible.

Transformers require the compute to train them. Scaling laws require the hardware to test them. Production AI requires the capacity to run it. This positions infrastructure operators differently than the hyperscalers or the model developers.

Axe Compute does not compete with AWS on breadth of services or with OpenAI on model capabilities. It operates in the space between: utilizing the Aethir network to provide the specific, dedicated GPU capacity that AI-native companies require when cloud queues are too long and building internal infrastructure is too slow.

Axe Compute believes it will be able to offer the flexibility of a resource pool that can be allocated to varied project requirements as they arise. Collectively, the company believes that early workloads of this kind will show it can function as a stable backbone for a wide range of production AI systems, reinforcing the case for decentralized compute as a foundational layer of enterprise AI infrastructure.

Axe Compute anticipates sourcing infrastructure at competitive rates and providing reliable, predictable access to high-capacity compute through the Aethir network.

Axe Compute believes it is positioned to demonstrate the scalability and effectiveness of its model as initial deployments come online and its enterprise client base expands.

Axe Compute will continue to operate its AI-driven drug discovery business and may explore potential expansion into other digital asset categories beyond compute infrastructure as its operating model matures.

Axe Compute (NASDAQ: AGPU) plans to make world-class AI compute accessible to all through its access to the Aethir network.

By delivering Aethir-provided decentralized global infrastructure, Axe Compute endeavors to offer instant access to bare-metal GPUs at scale to innovators and established businesses alike. Axe Compute is where decentralized choice meets enterprise trust.