
As 2026 Approaches, Is Your Business Model Still Relevant?


As 2025 draws to a close and the promises of 2026 come into view, a critical question confronts every enterprise: how must we redesign our businesses to win? At Tekedia Institute, we have long emphasized that business models are supreme. If the underlying logic through which a firm fixes market frictions and captures value is broken, no amount of effort, talent, or technology will change the outcome.

A robust business model is paramount for a company's success, even more so than strong leadership or execution alone. The business model, encompassing how a company creates, delivers, and captures value, is "supreme" because it dictates the fundamental logic and operations of the business.

Essentially, even with the same products or services, the business model adopted can drastically impact a company's performance. A freemium or a subscription model on the same product? Whichever you choose will realign how the factors of production within that firm are used.

In this AI era, the central issue is no longer whether you are using AI operationally in your firm, but how AI is reshaping your business model. Beyond running operations with AI, is AI transforming the enterprise itself, moving from artificial intelligence to enterprise intelligence?

Many business models that worked well in the past are now stale. Companies that once hired thousands of young people in Asia, Africa, and Latin America to provide entry-level engineering services to firms in Europe and America are now under severe pressure. Many have folded; others are shadows of their former selves. The reason is simple and uncomfortable: AI has disintermediated the entry-level roles that sustained those business playbooks.

When AI arrived, we paid attention to the signals. At Tekedia Institute, we recognized early that the change was not incremental; it was structural. As the supply of courses became abundant, value shifted away from merely offering courses toward improving learner outcomes. In a world where AI can generate unlimited content, the advantage no longer lies in static online materials. It lies in helping learners make sense of abundance, turn information into knowledge, and acquire actionable insights.

That realization forced a redesign. We pivoted toward more live, interactive programs, focusing on guidance, interpretation, and execution rather than content alone. The business model evolved because the environment changed. That is why the Tekedia AI Lab program runs live.

The question now is this: what signals are you seeing in your own business? As 2026 arrives, are you rethinking the logic of how you create and capture value, or are you hoping yesterday's model will survive tomorrow's realities? Rethink your business model for 2026.

 

Why data centers strive to reduce energy consumption and conserve resources by 2030


Today, data centers are becoming some of the most energy-hungry consumers of electricity in the world. Their role in the economy and technology is growing along with the spread of artificial intelligence, which places ever-increasing demands on computing power. However, behind this progress lies a pressing question: will the industry and society be able to support the development of digital services without data centers’ energy costs doubling by the end of the decade?

Growth of energy consumption amid AI development

The International Energy Agency (IEA) reports that in 2024, data centers worldwide consumed about 415 terawatt-hours (TWh) of electricity. According to expert forecasts, by 2030 this figure could reach 945 TWh per year—slightly more than the annual consumption of all of Japan. The main driver of this growth is the implementation of artificial intelligence. Training a single large neural network comparable to ChatGPT can produce emissions comparable to the annual emissions of 121 average American households. And although training AI models is the most energy-intensive stage, their subsequent application (inference) also requires considerable energy resources. Some estimates indicate that this stage can account for up to 90% of a model's lifecycle energy use.

The growth of data center energy consumption is not only related to AI and corporate computing. Mass digital services aimed at users around the world also make a significant contribution to the load on infrastructure.

Online entertainment, streaming, cloud platforms, and the iGaming industry require constant server availability and high fault tolerance. Even seemingly niche segments, such as websites with collections of promotions for online casinos, rely on distributed data centers and 24/7 data processing.

The authors of one of Canada's popular sites, casinosbonusca.com, which collects information about no deposit bonuses, report that as competition grows and the number of such services increases, the total load on power systems grows with it. Since 2021, they have faced an annual increase in the cost of renting dedicated space in commercial data centers, which they attribute to rising energy tariffs and a shortage of available capacity.

Such factors explain why the growth of data center energy consumption is systemic in nature. This directly affects infrastructure and local communities, especially in regions with a high concentration of server capacity.

The energy consumption of data centers is already comparable to that of entire countries, which underscores the strategic importance of energy efficiency for the industry and society. Against the backdrop of increasing load, energy efficiency is becoming not just a task, but a matter of the future of digital infrastructure.

Impact on infrastructure and communities

If we look at the situation in individual countries, the scale of the problem becomes even more tangible. According to a McKinsey study, by 2030 data centers in the United States could consume up to 12% of all the country's electricity. In some regions, this share already sets records: in Loudoun County (Virginia), one of the largest data center hubs, data centers in 2023 consumed 21% of the region's electricity—more than all residential homes (18%). Local power grids are experiencing increasing load, creating risks of outages and forcing operators to build backup capacity. In 2024, a minor failure in a neighboring county led to 60 data centers switching to backup power, and the energy system barely coped with the sudden loss of capacity equivalent to the consumption of all of Boston.

A similar situation is observed in Europe, where the concentration of data centers in certain hubs causes power grid overloads. Industrial electrification is accelerating—will local infrastructure be able to cope with this challenge?

Problems of water consumption and material shortages

Data centers actively use not only electricity but also water, which is needed to cool equipment and keep systems running stably. According to NPR, the average data center consumes about 300,000 gallons of water per day—and as energy consumption rises, water consumption rises with it.

Materials are becoming another pressure point. The transition to green energy, actively used by data centers, is causing a sharp increase in demand for lithium and rare earth metals. The IEA forecasts that by 2030, demand for lithium will increase 40-fold. At the same time, the volume of electronic waste worldwide has reached 62 million tons per year—much of which contains valuable and rare components.

What path will allow data centers to reduce resource consumption while ensuring stable operation?

Key directions for sustainable development of data centers

Physical efficiency

Physical efficiency means minimizing the energy spent on infrastructure, cooling, and the placement of computing systems. Modern technologies make it possible to raise this indicator significantly. For example, the Lenovo Neptune water cooling system reduces energy consumption by up to 40% and delivers thermal efficiency 3.5 times higher than air cooling. Assessing efficiency with the PUE (Power Usage Effectiveness) metric clearly demonstrates the progress: advanced centers operate with a PUE close to 1.1, about 84% more efficient than outdated facilities with a PUE of about 2.0. Innovations in the design and operation of physical infrastructure lay the foundation for sustainable development.
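PUE is simply total facility power divided by IT equipment power. A minimal sketch (the load figures are illustrative, not measurements from any real facility) shows how the jump from a PUE of 2.0 to 1.1 translates into overhead:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power.

    A PUE of 1.0 would mean every watt goes to computing; anything
    above that is cooling, power conversion, and other overhead.
    """
    return total_facility_kw / it_equipment_kw

# Legacy facility: 2000 kW drawn from the grid for 1000 kW of IT load
legacy = pue(2000, 1000)   # 2.0
# Modern facility: 1100 kW for the same 1000 kW of IT load
modern = pue(1100, 1000)   # 1.1

# Non-IT overhead falls from 1000 kW to just 100 kW
overhead_cut = 1 - (1100 - 1000) / (2000 - 1000)  # 0.9, i.e. 90% less
```

In other words, the same computing work is done while nine-tenths of the non-IT overhead disappears.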

Workload efficiency

Another important element is the optimization of computing and competent distribution of tasks between equipment. Virtualization allows several applications to run on a single server at once, which maximizes the use of available resources. Among the optimization methods are:

  • intelligent distribution of tasks between equipment;
  • updating and upgrading servers;
  • adapting workflows to new architectures.
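The "intelligent distribution of tasks" above is, at its core, a packing problem: place virtualized workloads on as few servers as possible so that idle machines can be powered down. A minimal sketch, using hypothetical VM loads expressed as a percentage of one server's capacity:

```python
def consolidate(vm_loads, server_capacity):
    """First-fit-decreasing bin packing: place each VM on the first
    active server with spare capacity, powering on a new server only
    when nothing else fits. Returns the load on each active server."""
    servers = []  # running total of load per powered-on server
    for load in sorted(vm_loads, reverse=True):
        for i, used in enumerate(servers):
            if used + load <= server_capacity:
                servers[i] += load
                break
        else:
            servers.append(load)  # no room anywhere: power on one more
    return servers

# Ten VMs that would otherwise idle ten dedicated servers
# consolidate onto just three shared machines.
loads = [50, 40, 40, 30, 30, 20, 20, 20, 10, 10]
active = consolidate(loads, server_capacity=100)
print(len(active))  # 3
```

Real schedulers weigh far more than raw load, but the principle is the same: fewer, fuller servers waste less energy than many half-idle ones.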

Workload efficiency becomes especially important amid the spread of AI—up to 90% of the model lifecycle is associated with the application stage, which requires significant capacity. Even small improvements in each individual process over time add up to large-scale systemic changes.

Circular economy

The circular economy approach is based on the reuse and recycling of equipment and materials. Currently, a significant portion of data center components does not return to the recycling cycle, which exacerbates the shortage of rare metals and increases the burden on the environment. Manufacturers such as Lenovo are introducing services to extend the lifespan of equipment and return rare materials for reuse. The advantages of the circular economy are obvious:

  • resource savings;
  • reduction of waste volume;
  • preservation of valuable metals for future production.

Operating on closed-loop principles reinforces the other two areas of sustainability, making data center infrastructure more balanced.

The importance of measurement and a comprehensive approach

A data center is a complex system where any decision affects a multitude of interconnected processes. Therefore, not only individual measures matter, but also a systemic, evolutionary approach. Constant measurement, monitoring, and adaptation of strategies make it possible to respond in a timely manner to new threats and opportunities. As industry experts note, sustainability is not a one-time initiative but a continuously evolving strategy that requires attention to the entire "organism" of the data center ecosystem.

Why banks and fintech companies are accelerating the adoption of stablecoins in the financial system


In recent years, there has been a rapid growth of interest in stablecoins worldwide. Banks, fintech companies, and leading payment platforms are quickly adopting digital currencies to keep up with technological changes. Why are such diverse players in the financial market so actively betting on stablecoins, and how can this change the familiar architecture of the global financial system?

Explosive growth of stablecoins: new trends and statistics

What are stablecoins? These are cryptocurrencies whose value is pegged to stable assets such as the US dollar or gold. The most notable representatives of the market are USDT by Tether and USDC by Circle. Unlike traditional cryptocurrencies, whose value is subject to high volatility, stablecoins provide predictable rates thanks to reserve backing.

Statistics from recent years demonstrate impressive growth in the popularity of these digital instruments. According to Morgan Stanley Investment Management, by September 2025, the total market capitalization of stablecoins reached $300 billion, 75% higher than the previous year. This large-scale leap indicates a global trend toward the digitalization of financial flows.

Major investment banks and analytical centers predict further acceleration of this growth. Citi claims that the actual pace of stablecoin adoption has exceeded even the most optimistic expectations: according to the bank’s latest forecast, by 2030, the volume of stablecoin issuance may grow to $1.9 trillion in the base scenario and up to $4 trillion in the most favorable conditions. These figures are impressive, but what is behind them? The number of users and companies integrating stablecoins into their infrastructure is constantly increasing. For example, payment services like Stripe have started supporting stablecoin payouts, and cross-border transfers using them are becoming commonplace.

Why banks and fintech companies are betting on stablecoins

What are the advantages of stablecoins for traditional market participants? First of all, it is the ability to make payments around the clock and instantly settle transactions. In the classic banking system, transfers are often accompanied by delays, fees, and complex procedures, while stablecoins allow value to be transferred in digital format as easily as sending an email.

According to Joe Lau, co-founder and president of the Alchemy platform, stablecoins and deposit tokens are becoming the foundation for programmable and transparent next-generation cash flows. He notes: “On this basis, money can move with the security of the banking system and the speed of the internet” (source — CoinDesk).

The advantages for financial companies include:

  • Lower fees for money transfers
  • Lightning-fast settlements, especially in international operations
  • The ability to integrate digital money into corporate services and payroll projects

Real examples are already noticeable. For instance, companies in the payroll services sector are considering stablecoins as an element of infrastructure for international payments, and payment giants such as Stripe are developing solutions based on them.

It is important to note here that online platforms outside the classic financial sector are also accelerating the adoption of stablecoins, as instant settlements and the predictability of digital assets are critically important to them. This is especially evident in highly competitive segments where companies are testing new models of user interaction.

For example, in the industry of gamified entertainment services, stablecoins are already widely used for topping up balances and withdrawing funds.

Gambling services and online casinos were among the first to adopt such systems, as data from top-ranking industry websites confirms. One such site, featuring online casinos where you can get an Ontario no deposit bonus, reports that almost all major and lesser-known online casinos have long and successfully accepted various cryptocurrencies, including stablecoins.

The iGaming sector experimented with the technology earlier than traditional financial institutions precisely because of its high transaction frequency and the need to reduce operating costs.

So what is the key difference from classic banking and payment channels? Stablecoins are not tied to a specific country or regulator, and the technological base allows for flexible scaling of services and integration of new functions.

Tokenized deposits: alternative or addition?

However, development is not limited to stablecoins alone. In recent months, banks have been more actively implementing tokenized deposits—digital analogues of classic deposits, issued as tokens on the blockchain. A classic example is JPM Coin, a project by JPMorgan, and HSBC has announced its own initiatives in this direction.

How do tokenized deposits differ from stablecoins? It is important to understand: a stablecoin is backed on a 1:1 principle, meaning that for every digital dollar issued, there is one real dollar in the issuer’s account. Tokenized deposits more often use a fractional reserve model, which is familiar to banks: part of the funds is kept in reserve, and the rest is used in the bank’s investment activities.
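The difference is easy to see in numbers. A minimal sketch, using illustrative figures rather than the actual reserve policy of any issuer or bank:

```python
def backing_ratio(reserves: float, tokens_issued: float) -> float:
    """Fraction of issued digital dollars covered by held reserves."""
    return reserves / tokens_issued

# Stablecoin on the 1:1 model: $1,000,000 of tokens issued,
# $1,000,000 of real dollars held in reserve.
stablecoin = backing_ratio(1_000_000, 1_000_000)   # 1.0

# Tokenized deposit on a 10% fractional reserve: the bank holds
# $100,000 against $1,000,000 of tokens and deploys the remaining
# $900,000 in its lending and investment business.
deposit_token = backing_ratio(100_000, 1_000_000)  # 0.1
```

The fractional model frees up capital for the bank but relies on the same confidence mechanisms as ordinary deposits, which is why it stays within the banking perimeter.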

Who are these instruments intended for? Tokenized deposits are most often available only to clients of a specific bank and are integrated into its infrastructure. Stablecoins, on the contrary, are universal and can be used by any participant—from individuals to large corporations.

Banks use tokenized deposits to improve internal settlements, speed up payments, and test new digital services without going beyond established regulatory norms.

Competition and convergence: how financial infrastructure is changing

The question arises: are banks competing with stablecoin issuers, or is a convergence of models occurring? Here, a two-way process is observed. Banks are readily adopting technological solutions that have appeared in the digital currency market and are creating their own tokenized products. In turn, stablecoin issuers strive to integrate with banking structures and comply with capital and reserve requirements.

According to Joe Lau, “tokenized deposits turn the banking system into programmable infrastructure, and stablecoins modernize the dollar for global markets.” This leads to the emergence of hybrid platforms and a search for new regulatory standards in the digital environment.

How is the financial landscape changing? Competition between banks, fintech companies, and new players is increasing. The number of innovations in services and ways of storing funds is growing, and the adoption of automation and programmable payments is accelerating.

Senator Sanders Calls for AI Data Center Freeze, Warning of Environmental Strain, Job Losses, and a Power Grab by Big Tech


Senator Bernie Sanders has escalated the debate over artificial intelligence infrastructure by calling for a nationwide moratorium on the construction of new AI data centers in the United States, arguing that the technology’s rapid expansion is racing ahead of democratic oversight while concentrating wealth and power in the hands of a few technology giants.

In a post on X this week, Sanders said a pause would give lawmakers time to determine how AI should serve the public interest rather than “just the 1 percent.” The Vermont senator framed the current AI boom not as a neutral wave of innovation, but as an economic and political shift driven by ultra-wealthy executives who stand to gain enormously while communities bear the environmental, social, and fiscal costs.

At the heart of Sanders’ argument is the scale and intensity of modern AI infrastructure. Data centers built to train and run large language models consume vast amounts of electricity and water, often far exceeding the demands of traditional cloud computing facilities. In some regions, proposed AI campuses are projected to use as much power as entire cities, forcing utilities to expand grids, build new power plants, or delay the retirement of fossil fuel facilities. Sanders argues those costs ultimately fall on taxpayers and ratepayers, not the companies profiting from AI.

Water use has become an equally contentious issue. Many AI data centers rely on water-intensive cooling systems, raising alarms in drought-prone states and rural communities already facing scarcity. Local opposition has intensified in parts of the Midwest, Southwest, and Southeast, where residents say they are being asked to trade long-term environmental sustainability for short-term economic promises that may never fully materialize.

Sanders has also tied the data center buildout to fears about automation and job displacement. He has repeatedly warned that AI and robotics could eliminate millions of jobs across sectors ranging from customer service and administration to logistics and manufacturing. While companies often argue that AI will create new roles and boost productivity, Sanders contends that history shows productivity gains rarely translate into higher wages or job security without strong labor protections.

Those concerns have been reinforced by warnings from within the AI industry itself. Anthropic CEO Dario Amodei has said AI could wipe out as many as half of all entry-level white-collar jobs within five years. Sanders has cited such statements as evidence that policymakers are unprepared for the speed and scale of disruption AI could bring if deployment continues unchecked.

The senator’s call for a moratorium comes as AI infrastructure spending accelerates at an extraordinary pace. Major technology firms are investing tens of billions of dollars annually in new data centers, advanced chips, and long-term energy contracts in a bid to outspend rivals and lock in dominance. That arms race, Sanders argues, is less about meeting current demand and more about consolidating control over the digital economy.

He has also questioned why public resources are being marshaled to support private AI ambitions. State and local governments frequently offer tax breaks, subsidized land, and infrastructure upgrades to attract data center projects, even as schools, housing, and public services remain underfunded. Sanders says this model mirrors earlier periods of industrial consolidation, where public support helped create private monopolies with limited accountability.

It is not certain that Sanders’ proposal will gain traction in Congress. A nationwide moratorium would face stiff resistance from the tech industry and lawmakers who see AI as central to U.S. economic competitiveness. Still, his stance reflects a broader shift in the conversation around artificial intelligence.

The debate is no longer confined to innovation and growth, but increasingly focused on who controls the infrastructure, who pays the hidden costs, and who benefits from the transformation. The question is no longer whether AI will reshape society, but whether governments will assert enough oversight to ensure that its benefits are shared broadly rather than captured by a narrow elite.

Nvidia’s Reported RTX 50 Supply Cuts Signal a Memory Squeeze, and a Strategic Pivot with Consequences for Gamers


Recent reports from Asia are sharpening concerns that Nvidia may significantly scale back production of its GeForce RTX 50 series graphics cards in the first half of 2026, not because of weak demand alone, but due to tightening memory supply across the board.

According to industry sources cited by China-based outlet BoBantang and amplified by hardware site Benchlife, Nvidia is preparing to reduce GeForce GPU output by as much as 30–40% compared with the same period in 2025.

At the center of the issue is memory. While early speculation focused narrowly on shortages of GDDR7, the newer and faster memory standard used in Nvidia’s latest GPUs, the reports suggest a broader crunch affecting multiple memory types. That points to a more systemic constraint tied to rising DRAM and NAND prices, which are already climbing sharply and feeding through the wider PC supply chain.

If accurate, the scale of the reported cuts is striking. A reduction of up to 40% in GeForce production would mark a major shift for Nvidia’s consumer graphics business, particularly given that there is no indication, at least so far, of similar reductions affecting the company’s non-GeForce or professional RTX PRO lineup.

That omission has fueled speculation that Nvidia may be reallocating scarce memory supplies toward its higher-margin, workstation- and enterprise-focused products, where profit per unit is substantially higher than in the mass-market gaming segment.

Benchlife reports that Nvidia could begin the adjustment by trimming output of specific models, notably the GeForce RTX 5060 Ti 16GB and the RTX 5070 Ti. From a commercial standpoint, those choices are telling. Both cards carry relatively large memory configurations, comparable in capacity to more expensive products like the RTX 5080. In a constrained environment, the same GDDR7 memory used in a mid-range GeForce card could instead be deployed in a premium SKU that delivers far greater margins.

This logic aligns with Nvidia’s broader business trajectory. Over the past several years, the company has increasingly prioritized segments that generate outsized returns, from data center accelerators to professional visualization and AI workloads. Even within gaming, Nvidia has shown a willingness to segment aggressively, offering lower-memory variants that are cheaper to produce while reserving higher VRAM capacities for pricier models.

For consumers, especially gamers, the implications are less encouraging. The RTX 5060 Ti 16GB has been widely viewed as a more future-proof option than its 8GB counterpart, offering enough video memory to handle modern games without heavy compromises in texture quality or performance. If production of the 16GB variant is curtailed, buyers may be nudged toward lower-memory cards that struggle with newer titles, or pushed up the pricing ladder to more expensive GPUs.

Industry sources cited by Benchlife say that Nvidia’s add-in card partners and component suppliers have also been briefed that the RTX 5070 Ti and RTX 5060 Ti 16GB will be among the first models affected. That suggests the reported strategy is not merely speculative but is already being communicated along the supply chain, even if Nvidia itself has not publicly confirmed any such plans.

This comes as DDR5 memory prices have already surged, and analysts expect those increases to ripple into GPU pricing as manufacturers grapple with higher input costs. In such an environment, prioritizing lower-memory cards and premium, high-margin products becomes an obvious defensive move for suppliers, even if it leaves mainstream buyers worse off.

The broader market impact is harder to pin down. A deliberate reduction in GeForce output raises the risk of tighter supply, particularly if demand remains steady or rebounds in 2026. That, in turn, could put upward pressure on GPU prices, reviving dynamics that consumers experienced painfully during previous shortages, when limited availability and strong demand combined to push prices far above suggested retail levels.

At the same time, the reported cuts could also reflect Nvidia tempering its expectations for PC and gaming demand next year. Higher memory costs, coupled with broader inflationary pressures, may weigh on consumer spending and slow upgrade cycles. In that scenario, trimming production would be as much about avoiding excess inventory as it is about managing scarce components.

What is clear is that memory has emerged as a strategic bottleneck, not just a technical one. If GDDR7 and other memory supplies remain tight, Nvidia’s decisions about where to deploy those resources will shape the graphics market in 2026, determining which products are plentiful, which are scarce, and how much consumers ultimately pay.

However, the reports are pointing to a familiar pattern: when constraints bite, Nvidia appears inclined to protect margins first. Some analysts believe that whether this leads to another period of elevated prices and limited choice for gamers will depend on how severe the memory crunch becomes—and how quickly the supply chain can respond.