Federal Reserve Bank of Kansas City Approves Limited Master Account for Kraken Financial

The Federal Reserve Bank of Kansas City approved a limited-purpose master account for Kraken Financial, the Wyoming-chartered special purpose depository institution affiliated with the crypto exchange Kraken. This makes it the first digital asset bank in U.S. history to gain direct access to the Federal Reserve’s core payment systems, including Fedwire.

Kraken Financial can now connect directly to U.S. payment rails like Fedwire without relying on intermediary and correspondent banks. This should allow faster, cheaper, and more efficient fiat (USD) settlements, especially for institutional clients, reducing operational complexity and costs.

The account is approved for an initial one-year term, with tailored restrictions based on the company’s risk profile and business model; Kraken Financial operates as a non-lending, fully asset-backed depository institution. It has no access to broader Fed services, such as the discount window (emergency lending) or interest on reserves.

It is not a full banking charter with FDIC insurance. The approval followed years of regulatory engagement; Kraken first applied around 2020, and the decision comes amid ongoing Fed discussions about access policies for non-traditional institutions. It has been described by some as a pilot or experiment in integrating digital asset firms into the traditional payments system.

Kraken and supporters including Sen. Cynthia Lummis hailed it as a historic milestone for crypto’s integration into mainstream finance, potentially improving on and off-ramps and institutional adoption. Traditional banking groups expressed worries about risk and the precedent before final Fed guidelines on such accounts. Some lawmakers have questioned the transparency of the decision.

This is a significant plumbing development for crypto infrastructure—it strengthens Kraken’s institutional offerings but remains narrowly scoped for now and does not extend full traditional banking privileges. It reflects the evolving regulatory landscape as digital assets seek deeper ties to the U.S. financial system.

Faster, cheaper, more reliable USD settlements — Direct access to Fedwire eliminates reliance on intermediary and correspondent banks, reducing costs, delays, counterparty risk, and operational friction for institutional clients such as hedge funds and trading firms.

Stronger institutional offering — Improves on/off-ramps, liquidity management, and integration of fiat with digital assets; seen as a step toward potential atomic settlement and programmable products in the future. Kraken Financial is the first digital asset bank with direct Fed payment system access, and the approval acts as a one-year pilot and experiment for non-traditional institutions.

The approval boosts credibility and could pave the way for other crypto firms while remaining limited: no interest on reserves, no discount window, no FDIC insurance. It positions Kraken better for institutional growth and potential IPO-related appeal by embedding crypto infrastructure deeper into U.S. financial plumbing. Crypto-native firms can now handle fiat movements more efficiently, potentially eroding some correspondent banking revenue and leveling the playing field.

Groups such as the Bank Policy Institute and ICBA criticize the move for proceeding before the Fed has finalized guidelines on such accounts, lacking transparency, and introducing risks from uninsured, lightly supervised entities like the Wyoming SPDI model. Concerns include systemic risk, AML compliance, and possible deposit shifts away from traditional banks. The decision signals gradual convergence of crypto and traditional payments under a crypto-friendly regulatory tilt, but with safeguards and ongoing scrutiny, such as questions from Rep. Maxine Waters on its legal basis and risk controls.

The limited scope (a Tier 3 review, tailored restrictions, and the one-year term) aims to mitigate concerns, but the pilot’s success or problems could influence future Fed policy on skinny accounts for fintech and crypto entities. This is a pragmatic but constrained step toward mainstreaming digital asset infrastructure—beneficial for efficiency and adoption in crypto, while raising caution flags in traditional banking circles. Long-term impact depends on how the pilot performs and whether restrictions evolve.

Global Banks are Increasingly Moving On-chain, Integrating Core Blockchain Tech into Traditional Payment Rails

Global banks are increasingly moving on-chain, integrating blockchain technology into core operations for payments, settlement, custody, and asset management.

This shift, accelerating in 2025–2026, represents a pragmatic evolution rather than a full embrace of decentralized crypto ideals. Banks aim to modernize legacy systems while maintaining regulatory control, often using permissioned or hybrid blockchains alongside public ones.

Traditional finance relies on slow, multi-day settlement processes (e.g., T+2 for securities or correspondent banking for cross-border payments). Blockchain enables near-instant, 24/7 atomic settlement, reducing counterparty risk, operational friction, and the need for intermediaries. Tokenization of real-world assets (RWAs)—such as money market funds, bonds, deposits, or private credit—allows programmable, composable assets that can move seamlessly.
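To make the contrast with T+2 settlement concrete, here is a minimal, purely illustrative sketch of atomic delivery-versus-payment: the tokenized security and the tokenized cash either both move in a single step or neither does, which is what removes settlement lag and counterparty exposure. The class, account names, and amounts are hypothetical and do not represent any bank’s actual system.

```python
# Minimal sketch of atomic delivery-versus-payment (DvP) settlement.
# All names (Ledger, USD_TOKEN, BOND_TOKEN, account IDs) are illustrative.

class Ledger:
    def __init__(self):
        # balances[asset][account] -> amount
        self.balances = {"USD_TOKEN": {}, "BOND_TOKEN": {}}

    def credit(self, asset, account, amount):
        book = self.balances[asset]
        book[account] = book.get(account, 0) + amount

    def settle_dvp(self, buyer, seller, cash_amount, bond_amount):
        """Move cash and the security in one atomic step: both legs
        succeed together, or the whole trade is rejected."""
        cash = self.balances["USD_TOKEN"]
        bonds = self.balances["BOND_TOKEN"]
        if cash.get(buyer, 0) < cash_amount or bonds.get(seller, 0) < bond_amount:
            raise ValueError("insufficient balance: trade rejected, nothing moves")
        cash[buyer] -= cash_amount
        cash[seller] = cash.get(seller, 0) + cash_amount
        bonds[seller] -= bond_amount
        bonds[buyer] = bonds.get(buyer, 0) + bond_amount


ledger = Ledger()
ledger.credit("USD_TOKEN", "fund_a", 1_000_000)   # buyer's tokenized cash
ledger.credit("BOND_TOKEN", "bank_b", 1_000)      # seller's tokenized bond
ledger.settle_dvp(buyer="fund_a", seller="bank_b",
                  cash_amount=990_000, bond_amount=1_000)
```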

For example, JPMorgan’s Onyx/Kinexys platform handles tokenized deposits and collateral, with pilots showing real-time FX settlement and intraday liquidity management. SWIFT, the backbone of global messaging, is actively building blockchain-based ledgers and interoperability layers with dozens of banks including JPMorgan, HSBC, and BNP Paribas to support tokenized money and instant cross-border flows.

Stablecoins and Tokenized Deposits as On-Chain Cash

Stablecoins have proven the most practical on-chain use case, offering programmable digital dollars for payments and treasury. Large banks are issuing or piloting their own tokenized bank deposits to enable always-on liquidity, collateral management, and settlement without leaving the regulated banking system. This complements and sometimes competes with public stablecoins while giving banks control over compliance and risk.

Research shows the largest banks leading in stablecoin issuance and on-chain tech adoption. Retail and corporate clients increasingly expect crypto exposure, digital asset custody, and yield opportunities. With millions holding crypto and institutions seeking better liquidity and yields, banks risk losing business to fintechs or pure DeFi platforms. Offering tokenized products, custody, and trading services creates new revenue streams amid pressure on traditional margins.

High-net-worth and corporate clients want seamless integration—e.g., using tokenized Treasuries or funds as collateral. Improved rules (the rescission of restrictive U.S. accounting guidance such as SAB 121, the EU’s MiCA, and clearer U.S. stablecoin frameworks) have reduced uncertainty. Regulators now view tokenization as a tool for innovation and competitiveness. Central banks and bodies like the BIS are exploring related projects.

This regulatory momentum makes on-chain infrastructure bankable, shifting it from pilots to production. Tokenization unlocks fractional ownership, broader access to illiquid assets, and programmable finance features like automated compliance or conditional payments. Banks see opportunities in custody, tokenization platforms, and bridging TradFi with DeFi-like efficiency—while keeping rails compliant. Examples include BlackRock’s BUIDL tokenized fund, Goldman Sachs’ digital asset platform, and consortia exploring multi-bank stablecoins.
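As a purely hypothetical illustration of what “programmable compliance” on a tokenized transfer can mean, the sketch below embeds a KYC allowlist check directly in the transfer logic, so a non-approved counterparty cannot receive funds at all. The names and amounts are invented; real deployments would use audited smart contracts rather than this toy model.

```python
# Illustrative sketch of programmable compliance on a token transfer:
# the transfer function itself enforces a KYC allowlist before value moves.
# Hypothetical identifiers; not based on any bank's or vendor's actual system.

KYC_ALLOWLIST = {"acct_hedge_fund", "acct_corporate_treasury"}
balances = {"acct_hedge_fund": 500_000, "acct_corporate_treasury": 0}

def compliant_transfer(sender: str, receiver: str, amount: int) -> None:
    if sender not in KYC_ALLOWLIST or receiver not in KYC_ALLOWLIST:
        raise PermissionError("transfer blocked: counterparty not KYC-approved")
    if balances.get(sender, 0) < amount:
        raise ValueError("insufficient funds")
    balances[sender] -= amount
    balances[receiver] = balances.get(receiver, 0) + amount

compliant_transfer("acct_hedge_fund", "acct_corporate_treasury", 250_000)
```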

Broader market projections show tokenized RWAs growing rapidly, with banks positioning to capture value in a multi-trillion-dollar opportunity. JPMorgan leads with Onyx/Kinexys for tokenized deposits, collateral networks (TCN), and institutional settlement—processing billions daily. Citi, BNY Mellon, and Goldman Sachs are active in tokenized deposits, money market funds, custody on exchanges like SDX, and cross-chain pilots.

Other efforts include SWIFT’s blockchain ledger work, initiatives at European and Asian banks, and projects like Partior for multi-currency tokenized money. This is not a wholesale migration to public, permissionless blockchains. Many initiatives use private and permissioned ledgers or hybrid setups for control, compliance (KYC/AML), and scalability. Interoperability across chains remains a technical hurdle, driving demand for solutions like Chainlink.

Risks include regulatory fragmentation, operational integration with legacy core banking systems, and ensuring bank-grade security. In essence, global banks are going on-chain to defend their role in finance: reducing costs, meeting demand, unlocking efficiency in payments, settlement, and tokenization, and generating new income—all while shaping rather than being disrupted by the technology. The result is a convergence where blockchain becomes infrastructure, not revolution. As 2026 progresses, expect more production deployments, especially around tokenized deposits and RWAs.

Anthropic in Early Stages of Exploring the Possibility of Designing Its Own AI Chips

Anthropic is in the very early stages of exploring the possibility of designing its own AI chips. The company hasn’t committed to the idea, formed a dedicated team, or settled on any specific architecture. It could still decide to continue solely buying chips from existing suppliers.

Sources described the discussions as preliminary, driven by the chronic shortage of high-end AI accelerators needed to train and run ever-larger models. Anthropic currently relies on a diversified mix of hardware: NVIDIA GPUs, including recent use of Blackwell for at least one major model such as Mythos; Google’s TPUs, via a major expansion on Google Cloud potentially reaching ~1 million TPUs in partnership with Broadcom; and Amazon’s Trainium and Inferentia chips, through its primary cloud and training partnership on AWS, including the massive Project Rainier cluster.

This multi-vendor strategy provides resilience, but surging demand for Claude, with Anthropic’s annualized revenue reportedly tripling to a $30B+ run rate, is straining supply and driving up costs.

Designing in-house silicon could give Anthropic more control over performance, power efficiency, and long-term economics—reducing what some call the “Nvidia tax” on margins and availability. This isn’t an isolated move. Other frontier labs and hyperscalers are pursuing similar paths: Meta and OpenAI already have custom chip projects underway.

Google (TPUs), Amazon (Trainium/Inferentia), and Microsoft (with Maia) have long invested in custom AI silicon. Partnerships like Anthropic’s with Broadcom for custom TPUs show the company is already leaning into semi-custom designs before going fully in-house. Designing a competitive AI chip from scratch is extremely expensive (hundreds of millions of dollars) and technically demanding. Success isn’t guaranteed—NVIDIA still dominates due to its CUDA software ecosystem, scale, and iterative hardware improvements.

Many attempts at custom AI accelerators have underperformed or been abandoned. If Anthropic moves forward, it could lower long-term compute costs, optimize hardware specifically for Claude’s architecture and safety-focused training methods, and further diversify away from any single supplier. However, execution risks are high, and it would take years to reach production scale.

For now, the report signals strategic caution amid explosive AI growth rather than an imminent break from NVIDIA or its cloud partners. This fits the ongoing vertical integration push in AI: labs realizing that software model performance is increasingly bottlenecked by hardware access and cost. The compute race is shifting from who has the most GPUs toward who can build or control the best silicon stack.

We’ll likely see more such explorations as inference and training demands continue to outpace supply. Custom chips could reduce long-term dependence on expensive Nvidia GPUs and ease shortages. Optimization for Claude’s architecture might improve training and inference efficiency, power usage, and performance-per-watt, lowering the massive compute bills that frontier labs face.

Greater control over hardware tailored to safety-focused or specific model needs could potentially accelerate development cycles. However, success is far from guaranteed—designing a competitive AI accelerator can cost ~$500 million upfront, plus years of engineering, manufacturing (likely via TSMC or a similar foundry), and software ecosystem building.

Execution risk is high, and failure could waste capital. Near-term, Anthropic continues diversifying via deals such as the expanded Google TPU arrangement with Broadcom, scaling to multi-gigawatt capacity, and CoreWeave for Nvidia-based cloud capacity.

Tether’s QVAC SDK Is a Pivotal Step Making AI More Private, Resilient and Accessible

Tether, the company behind the USDT stablecoin, has just released QVAC SDK, an open-source, cross-platform toolkit for building and running local, decentralized AI directly on devices.

QVAC from Tether’s dedicated AI team is positioned as a universal building block for the Stable Intelligence Era — a future with billions of devices, autonomous machines, and AI agents running intelligence privately and without relying on centralized cloud providers. AI models run offline on the device itself for privacy, speed, and resilience — no API keys, no cloud dependency, and no Big Tech oversight.

The SDK is cross-platform from a single codebase, supporting iOS, Android, Windows, macOS, and Linux with no code changes. It is built on a llama.cpp fork called QVAC Fabric, enabling text generation, speech processing, visual recognition, translation, and more, and it uses the Holepunch protocol stack for peer-to-peer model distribution and delegated inference. The roadmap includes decentralized training and fine-tuning, plus specialized toolkits for robotics and brain-computer interfaces, and the SDK also ships Fabric LLM, a LoRA fine-tuning framework for edge devices.
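Because QVAC Fabric is a llama.cpp fork, the general shape of on-device inference it targets can be illustrated with the widely used llama-cpp-python bindings. The sketch below is not the QVAC SDK’s own API (which is not documented here); it simply shows loading a local GGUF model and generating text fully offline, and the model path is a placeholder.

```python
# Generic on-device inference with a local GGUF model via llama-cpp-python.
# Illustrates the llama.cpp-style workflow QVAC Fabric builds on; this is
# NOT the QVAC SDK API, and the model path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/example-3b-instruct.Q4_K_M.gguf",  # any local GGUF file
    n_ctx=2048,    # context window
    n_threads=4,   # CPU threads; no network access or API key required
)

output = llm(
    "Explain in one sentence why on-device inference helps privacy:",
    max_tokens=64,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```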

Tether’s CEO Paolo Ardoino has called centralized AI a dead end. This move expands Tether beyond stablecoins into open infrastructure for on-device AI, aligning with growing demands for privacy, decentralization (echoing Web3 and DePIN narratives), and resilience against cloud outages or censorship. It builds on Tether’s earlier QVAC efforts, like releasing large open synthetic educational datasets for AI training.

By offering a single codebase that runs seamlessly across iOS, Android, Windows, macOS, and Linux, QVAC lowers barriers for building offline-capable AI apps. It builds on a llama.cpp fork with unified support for text generation, speech, vision, translation and more. Integration with the Holepunch protocol enables peer-to-peer model distribution, delegated inference, and future decentralized training and fine-tuning via swarms.

This reduces reliance on centralized servers and improves resilience. It extends llama.cpp’s strengths (broad model compatibility via GGUF) with cross-platform abstractions and planned LoRA fine-tuning on edge devices. It competes with platform-specific solutions like Apple’s MLX (strong on Apple Silicon performance and fine-tuning) but aims for true hardware-agnostic, privacy-first deployment.
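Fabric LLM’s actual interface is not shown in the announcement, but the underlying LoRA idea (training small low-rank adapter matrices on top of a frozen base model, which is what makes fine-tuning feasible on edge hardware) can be sketched with Hugging Face’s peft library. The model name and hyperparameters below are illustrative only.

```python
# Generic LoRA setup with Hugging Face peft, shown only to illustrate the
# technique: small trainable low-rank adapters on a frozen base model.
# This is not Fabric LLM's API; the model and hyperparameters are examples.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("sshleifer/tiny-gpt2")  # tiny demo model

lora_cfg = LoraConfig(
    r=8,                        # rank of the adapter matrices
    lora_alpha=16,              # scaling factor applied to the adapter update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection in GPT-2-style models
    fan_in_fan_out=True,        # GPT-2 uses Conv1D layers with transposed weights
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the small adapter weights are trainable
```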

Early feedback notes it simplifies integration but will need optimization to match cloud-scale performance on smaller models. Planned expansions into robotics and brain-computer interfaces could accelerate specialized on-device AI in hardware-heavy fields. Faster prototyping of private AI features without API costs or vendor lock-in. Open-source nature invites community contributions and audits.

AI runs locally without sending sensitive inputs to remote servers. This directly addresses concerns over surveillance, data breaches, and censorship by Big Tech. Users get instant, always-available AI for everyday tasks like writing, finance planning, or translation—even without internet. Lower latency, no subscription fees for inference, and reduced exposure to cloud outages or policy changes.

Challenge to cloud AI dominance: Accelerates the shift from hyperscale cloud models to edge computing. It could pressure centralized providers by highlighting privacy and cost drawbacks. Fosters transparency and innovation, contrasting with closed models from major labs. Analysts see it as part of a broader trend toward responsible, decentralized AI adoption.

Tether’s diversification: Signals the stablecoin giant’s evolution into infrastructure beyond finance. It builds on prior QVAC efforts and could create synergies with DeFi, agents, or autonomous systems. Aligns strongly with decentralized physical infrastructure networks (DePIN), Web3, and autonomous AI agents. Local AI could enable trustless, on-device agents that interact with blockchains/dApps without central intermediaries—potentially powering smarter wallets, DeFi tools, or P2P economies.

Tether is already exploring related areas. Success here could enhance USDT’s utility in AI-powered crypto applications and attract developer talent to the broader crypto space. This requires sustained R&D investment. Some commentary notes potential capital strain if USDT market cap faces continued pressure, as resources are diverted from core stablecoin operations. Returns are uncertain and hinge on ecosystem growth.

On-device models may lag cloud performance on complex tasks; security and optimization responsibility shifts to developers and users; achieving critical mass for P2P swarms will take time. QVAC SDK is seen as a pivotal step in making AI more private, resilient, and accessible—potentially influencing how future intelligent systems are built amid rising concerns over centralization.

It’s not a complete replacement for cloud AI yet but adds a powerful local/decentralized alternative. If adoption grows, it could reshape developer tools, user expectations, and even regulatory conversations around data control. Short-term impact is mostly in crypto and tech circles; longer-term effects will depend on community contributions and practical use cases.

OpenAI’s Chief Scientist Says AI Is Close to Reaching Human-Level Intelligence

Fresh comments from chief scientist Jakub Pachocki suggest OpenAI believes it is moving materially closer to one of its most ambitious internal milestones: building systems capable of functioning at the level of a human research intern, a development that could reshape not only AI research itself but the future economics of science and technical work.

OpenAI is moving closer to one of the most consequential milestones it has publicly outlined in the race toward advanced artificial intelligence: the creation of systems that can operate at the level of a human research intern.

Speaking on the Unsupervised Learning podcast, chief scientist Jakub Pachocki said recent progress across coding, mathematical reasoning, and physics-related problem solving suggests the company’s internal roadmap remains on track.

“I definitely see this as a signal that something here is on track,” Pachocki said, pointing to recent technical breakthroughs as evidence that models are beginning to handle increasingly complex, multi-step work with less direct human intervention.

The significance of that remark lies not in the headline ambition alone, but in what OpenAI now sees as the core metric of progress. Rather than focusing purely on benchmark scores or isolated task performance, Pachocki framed autonomy in terms of time horizon.

“The way I would distinguish a research intern from a full automated researcher is the span of time that we would have it work mostly autonomously,” he said.

That is an important shift in how frontier labs are increasingly defining intelligence. The question is no longer whether a model can solve a single problem correctly. It is whether it can sustain coherent work over hours, days, or potentially weeks without constant human correction.

This concept, often described in the industry as “long-horizon autonomy,” is fast becoming one of the most important frontiers in AI development.
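One way to picture the time-horizon framing is as an evaluation loop that simply records how long an agent works before a human has to step in. The sketch below is a hypothetical illustration of that metric, not an actual OpenAI or industry benchmark; every name in it is invented.

```python
# Illustrative sketch of measuring "long-horizon autonomy" as the span of
# time an agent works before needing human correction. All names are
# hypothetical; this is not an actual evaluation used by any lab.
import time

def measure_autonomy_horizon(agent_step, needs_human_fix, max_seconds=8 * 3600):
    """Run agent steps until a human correction is required (or the time cap
    is hit), returning the span of unsupervised work in seconds."""
    start = time.monotonic()
    while time.monotonic() - start < max_seconds:
        result = agent_step()        # one unit of autonomous work
        if needs_human_fix(result):  # a reviewer flags an unrecoverable error
            break
    return time.monotonic() - start

# Toy example with stand-in callables: an "agent" that fails on its 4th step.
steps = iter([False, False, False, True])
horizon = measure_autonomy_horizon(lambda: next(steps), needs_human_fix=lambda bad: bad)
print(f"worked autonomously for {horizon:.4f} seconds before intervention")
```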

At an internal livestream last October, Pachocki laid out a two-stage roadmap: an “AI research intern” by September 2026, followed by a fully autonomous AI researcher by March 2028. Sam Altman later acknowledged the uncertainty around the target, writing that OpenAI “may totally fail” at the goal, but said transparency was necessary given the scale of its implications.

Pachocki pointed to the “explosive growth of coding tools,” particularly agents such as Codex, which he said are already handling much of the company’s internal programming work.

“We’ve seen this explosive growth of coding tools,” he said. “For most people, the act of programming has changed quite a bit.”

This is one of the most revealing parts of the interview. OpenAI is effectively describing a feedback loop in which AI tools are increasingly being used to improve the very systems that produce them. If coding agents are already automating substantial portions of internal software work, the logical next step is the automation of research workflows themselves: experiment design, evaluation pipelines, model comparisons, literature synthesis, and iterative testing.

Pachocki made this progression explicit.

“For more specific technical ideas, like I have this particular idea how to improve the models, how to run this evaluation differently, I think we have the pieces that we mostly just need to put together,” he said.

That phrase, “put the pieces together,” may sound modest, but it points to a major industry inflection point. Many of the component capabilities already exist in fragmented form: coding agents, reasoning systems, verification tools, web-enabled research agents, and increasingly capable math solvers.

The challenge now is orchestration, which raises an open question: can these systems chain together tasks reliably enough to mimic the workflow of a junior researcher?

Pachocki was careful not to overstate where the technology currently stands.

“I don’t expect we’ll have systems where you just tell them, ‘go improve your model capability, go solve alignment,’ and they will do it, not this year,” he said.

That caveat is important because it sharply distinguishes between intern-level assistance and true scientific autonomy. A research intern, in this framing, is not an independent scientist. It is a system capable of executing bounded, technically sophisticated tasks over longer durations with minimal supervision.

Junior-level technical work across AI labs, universities, biotech firms, and enterprise R&D units could increasingly be augmented or partially automated. This could compress experimentation cycles from weeks to days, allowing frontier labs to iterate faster than smaller competitors. It may also widen the competitive moat around firms with the compute, data, and engineering infrastructure to deploy such systems at scale.

The “AI research intern” is seen as an indication of a move from AI as a tool for users to AI as an active participant in the research process itself. If realized, it would mark a transition from copilots to semi-autonomous scientific agents.

However, the most important insight from Pachocki’s remarks is that OpenAI is increasingly measuring progress by sustained autonomy rather than isolated intelligence. That is regarded as a more difficult benchmark, but also a more meaningful one.