
Deutsche Bahn to Modernize its Stations to Address Long-standing Infrastructure Issues 


Deutsche Bahn (DB), Germany’s state-owned rail operator, has announced a major push to modernize its stations as part of ongoing efforts to address long-standing infrastructure issues. According to DB Chairwoman Evelyn Palla, the company plans to invest €4 billion per year in station renovations through 2030.

This amounts to more than €20 billion over the next five years, targeting a clear backlog in maintenance and upgrades. In 2026 alone, modernization work will cover more than 1,000 stations across Germany, and by 2030 DB plans fundamental renovations of 710 stations nationwide, 130 of which are already scheduled for 2026. The work primarily involves upgrading reception buildings and platforms, improving accessibility, and enhancing the overall passenger experience.

DB has also announced a separate €50 million immediate-action program for enhanced cleanliness and security at stations. This includes more cleaning staff, security personnel, modern camera and video technology in cooperation with federal police, and mobile repair teams for quick fixes. The announcement highlights clear catch-up needs after years of underinvestment, delays, and complaints about the condition of many German stations.

It forms part of broader DB infrastructure efforts, which saw around €19 billion invested in 2025 covering tracks, switches, signaling, and stations, with plans for a record €23 billion in 2026 across the entire network. Germany’s rail system has faced chronic challenges, including aging infrastructure, frequent disruptions, and punctuality issues. DB and the federal government have been ramping up funding, with ambitions for a multi-year overhaul that could require up to €150 billion overall for network restructuring, expansion, and digitalization.

Station upgrades are a visible part of making rail more attractive to passengers amid competition from cars and other transport modes. Travelers can expect more construction sites and potential disruptions in the coming years, but completed projects, such as certain corridor modernizations, have already shown improvements in reliability.

This station-specific program emphasizes not just structural repairs but also making stations more welcoming, safer, and cleaner—addressing common passenger frustrations. Germany’s rail system, operated primarily by Deutsche Bahn (DB), has faced persistent punctuality challenges for years. These issues have worsened recently, turning “the train is delayed” into a common national refrain.

DB defines a train as on time if it arrives less than six minutes late. By that measure, only 60.1% of long-distance trains were on time in 2025, down from 62.5% in 2024 and far below the 74.4% recorded in 2015. This marked the worst annual result on record for long-distance services. Monthly figures hit record lows as well, dropping to around 51.5% in October 2025.
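DB's six-minute rule translates directly into a simple calculation. A minimal sketch in Python, with invented delay values purely for illustration:

```python
# DB counts a train as on time if it arrives less than 6 minutes late.
ON_TIME_THRESHOLD_MIN = 6

def punctuality_share(delays_min):
    """Fraction of trains arriving under the on-time threshold."""
    on_time = sum(1 for d in delays_min if d < ON_TIME_THRESHOLD_MIN)
    return on_time / len(delays_min)

# Hypothetical arrival delays, in minutes, for ten trains.
delays = [0, 2, 5, 6, 12, 3, 25, 1, 8, 4]
print(f"{punctuality_share(delays):.1%}")  # 60.0%; a 6-minute delay misses the cutoff
```

Note that a train arriving exactly six minutes late already counts as delayed under this definition.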

Early 2026 brought no relief: January saw just 52.1% of long-distance trains on time. Figures improved slightly toward the end of 2025, when some construction paused for the holidays, but remain volatile. Regional services perform better, typically around 90% punctuality, though they too have seen slight declines. Across all DB rail services in Germany, punctuality hovers around 89%, but long-distance services drag down both the perception and the reliability of intercity travel.

In European comparisons, Germany ranks near the bottom for long-distance rail punctuality, with massive cumulative delay times. Several interconnected factors contribute to the problems: Aging and overloaded infrastructure — Decades of underinvestment have left tracks, switches, signals, and bridges in poor condition. Many sections operate at or beyond capacity, causing cascading delays from even minor incidents.

DB is ramping up investments, including the station overhaul described above, plus broader network upgrades. However, thousands of simultaneous construction sites disrupt operations, and major projects have been extended to 2036, prolonging the pain before benefits appear. In 2026, a record number of construction sites (around 28,000) is expected.

Weather events (storms, cold snaps), technical failures on old equipment, and occasional strikes add pressure. High train frequency on a dense but strained network means one delay often triggers missed connections and further knock-on effects. DB reported a €2.3 billion net loss for 2025, partly linked to punctuality issues affecting revenue and operations.

Passenger frustration is high, fueled by missed connections, unreliable journey planning, and competition from cars and other transport modes. Some international partners have raised concerns about DB trains affecting their networks, and officials have called the situation a serious problem for mobility and even broader societal trust. In response, DB and the federal government are investing heavily, with record infrastructure spending planned for 2026.


Cerebras Systems Files for IPO, Taking Direct Aim at Nvidia with Massive Wafer-Scale AI Chips


Cerebras Systems has officially filed to go public, positioning the startup as one of the most ambitious challengers yet to Nvidia’s near-monopoly in high-performance AI hardware.

CEO Andrew Feldman has long described the company’s technology as “the fastest AI hardware for training and inference,” and the IPO filing marks the latest step in Cerebras’ push to prove that claim in the public markets.

The move comes after an earlier 2024 IPO attempt was delayed by a federal review of an investment from Abu Dhabi-based G42 and ultimately withdrawn. Since then, Cerebras has moved aggressively to strengthen its balance sheet and customer roster.

It closed a $1.1 billion Series G last year and followed that with a $1 billion Series H in February that valued the company at $23 billion, according to the Wall Street Journal. Those back-to-back mega-rounds have given it the resources to compete at the highest levels of the AI infrastructure race.

Two recent deals underscore the momentum. Cerebras reached an agreement with Amazon Web Services to deploy its chips inside Amazon data centers, giving it a foothold with one of the world’s largest cloud providers. Even more striking is its reported pact with OpenAI, said to be worth more than $10 billion.

In a recent interview with the Wall Street Journal, Feldman was characteristically direct about what that win meant. He said: “Obviously, [Nvidia] didn’t want to lose the fast inference business at OpenAI, and we took that from them.”

The financial picture in the filing shows real traction. Cerebras generated $510 million in revenue for 2025. On a GAAP basis, it reported net income of $237.8 million, though on a non-GAAP basis, excluding certain one-time items, it posted a net loss of $75.7 million. The numbers reflect the classic pattern of a high-growth hardware company: heavy investment in research, manufacturing scale-up, and customer deployments today in exchange for what it hopes will be dominant economics tomorrow.

At the heart of Cerebras’ pitch is its Wafer-Scale Engine, a single silicon wafer the size of a dinner plate that packs hundreds of thousands of AI cores. Unlike traditional systems that link dozens or hundreds of smaller GPUs together, with all the attendant latency, power, and software complexity, Cerebras’ approach keeps the entire workload on one massive chip. That design delivers the extreme speed and memory bandwidth required for the largest AI models, a niche where even Nvidia’s powerful clusters can struggle.

The IPO comes at a time when demand for AI compute remains insatiable, and the biggest players are actively hunting for alternatives that can deliver more performance per dollar or per watt. OpenAI’s decision to hand a reported $10 billion-plus contract to a startup rather than stick exclusively with Nvidia sends a powerful signal about the market’s willingness to embrace new architectures.

The AWS partnership further validates that Cerebras is moving beyond lab demonstrations into real production environments.

Still, uncertainties surround the deal. Nvidia’s ecosystem advantage (its CUDA software platform, vast developer community, and decades of optimization) is formidable. Cerebras will need to continue proving that its wafer-scale chips are not only faster but also easier to program and more reliable at scale. Manufacturing such enormous chips at volume also carries technical and supply-chain risks, even with strong foundry partners.

The company has not yet disclosed how much it hopes to raise or the exact timing beyond a target of mid-May. But the filing itself is already a milestone. After navigating regulatory hurdles, raising more than $2 billion in the past year, and landing blue-chip customers, Cerebras is stepping onto the public stage at a moment when investors remain hungry for pure-play AI infrastructure stories.

If the offering succeeds, it could provide the capital needed to accelerate manufacturing scale, expand the software stack, and push deeper into both training and inference workloads.

A successful Cerebras IPO is expected to be more than just another hardware listing for the broader AI ecosystem. It would demonstrate that meaningful competition to Nvidia is not only possible but already winning major contracts from the industry’s most prominent customers.

Kelp DAO, a Liquid Restaking Protocol on EigenLayer, Hacked for over $280M


Kelp DAO, a liquid restaking protocol on EigenLayer, suffered a major exploit on April 18, 2026. Attackers drained approximately $280M–$293M worth of rsETH, its liquid restaking token. The vulnerability was in Kelp DAO’s rsETH cross-chain bridge, powered by LayerZero.

The attacker drained ~116,500 rsETH, roughly 18% of the token’s circulating supply. They then used the forged, unbacked rsETH as collateral on lending protocols such as Aave V3 on Ethereum and Arbitrum to borrow large amounts of ETH and WETH.

Funds were routed through Tornado Cash to obscure the trail. This created bad debt on Aave and other platforms, as the rsETH collateral turned out to be worthless or unbacked once the exploit was discovered. The incident is now considered the largest single DeFi exploit of 2026 so far.

Immediate Aftermath

Kelp DAO paused its rsETH contracts and bridge across Ethereum mainnet and multiple L2s. Aave, SparkLend, Fluid, and other protocols froze related markets to prevent further losses. Aave’s WETH suppliers faced potential losses from bad debt; Aave’s Umbrella safety module is expected to help cover some of it.

The AAVE token price dropped sharply, with reports of 10–15% declines within hours, on contagion fears. Wrapped ETH became stranded or frozen across ~20 chains due to the omnichain nature of the bridge. This highlights ongoing risks with cross-chain bridges and omnichain fungible tokens (OFTs), especially those relying on default LayerZero configurations.

Some analysts are warning that similar setups on other protocols could be at risk if the root cause involves compromised signers or misconfigurations. It also follows other big exploits in April 2026 like the Drift Protocol’s ~$280M incident earlier in the month, adding to DeFi’s rough start to the year.

~18% of rsETH supply (116,500 tokens) was drained via the LayerZero-powered cross-chain bridge and adapter. This created unbacked or fake rsETH on multiple chains. Kelp paused rsETH contracts, minting and burning, and bridges across Ethereum mainnet and several L2s to contain further damage.
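The broken invariant here is the one any canonical bridge must preserve: tokens minted across chains should never exceed the backing locked on the home chain. A toy sketch of that check follows; all figures and chain names are illustrative, not actual Kelp balances.

```python
# Toy invariant check for an omnichain token: total minted supply across
# chains must not exceed the backing locked on the home chain.
def unbacked_supply(locked_backing: float, minted_per_chain: dict) -> float:
    """Amount of circulating supply with no backing (0.0 if fully backed)."""
    total_minted = sum(minted_per_chain.values())
    return max(0.0, total_minted - locked_backing)

# Illustrative balances before and after forged mints on one chain.
minted = {"ethereum": 400_000.0, "arbitrum": 150_000.0, "base": 100_000.0}
print(unbacked_supply(650_000.0, minted))  # 0.0, invariant holds

minted["arbitrum"] += 116_500.0            # forged mints from an exploit
print(unbacked_supply(650_000.0, minted))  # 116500.0 unbacked tokens
```

In practice this check requires trustworthy cross-chain state; a compromised or misconfigured verifier lets the minted side drift without the home chain ever noticing.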

Holders of rsETH especially on non-mainnet chains now face uncertainty: their tokens may lack full backing, leading to redemption pressures, depegging risks, or forced unwinding of underlying restaked positions in EigenLayer.

Kelp’s TVL, previously over $1B in ETH LRTs, will likely drop sharply. The protocol is investigating with LayerZero and security experts; recovery is uncertain, as funds were routed through Tornado Cash. The attacker used the stolen and unbacked rsETH as collateral on Aave V3/V4, borrowing large amounts of WETH/ETH. This left ~$290M in bad debt on Aave’s WETH pools, as the collateral is now effectively worthless or unliquidatable.

Aave froze rsETH markets immediately to stop new exposure. WETH suppliers are being urged to withdraw positions, as partial haircuts or delays may occur while Aave’s Umbrella safety module handles the deficit. This is a major real-world stress test for Umbrella. Other protocols affected: SparkLend, Fluid, and at least 7–9 more froze rsETH-related markets or positions. Wrapped ETH became stranded across ~20 chains due to the omnichain setup.

AAVE token dropped ~10–13% amid fears of losses and broader contagion. No direct compromise of Aave’s contracts, but the event shows how external collateral failures can cascade. Cross-chain bridge vulnerabilities are back in focus. The exploit reportedly involved a misconfiguration or single-verifier issue in LayerZero’s OFT. This highlights catastrophic failure modes in default bridge configurations and composability risks.

Liquid restaking tokens (LRTs) like rsETH face renewed scrutiny. The assumption that these tokens are blue-chip collateral, widely used on Aave for yield loops, has been challenged. Protocols will likely tighten risk parameters for restaked assets, potentially reducing TVL and yields across EigenLayer participants. This is the largest single DeFi exploit of 2026 so far, surpassing or rivaling Drift Protocol’s $285M incident earlier in April.

Combined with other hacks, Q1/Q2 2026 has seen heavy losses, with $600M+ drained in recent weeks, eroding confidence. ETH dipped ~3–4%, and Polymarket odds on ETH price targets shifted lower as traders reassess DeFi exposure. Restaking sector sentiment has been hit hard. Expect audits and reviews of LayerZero integrations, multi-verifier requirements for bridges, and stricter collateral onboarding in lending protocols. AI tools are also noted as lowering the barriers to sophisticated attacks.

This is a painful reminder of interconnected risks in DeFi — one bridge flaw can ripple through lending, restaking, and multiple chains. Short-term: volatility, frozen positions, and potential small losses for some suppliers. Long-term: likely leads to more conservative risk management and improved bridge standards.

If you hold rsETH, have exposure to Aave WETH pools, or use any Kelp-related bridges, check your positions and follow official updates from Kelp DAO and the affected protocols. On-chain sleuths like ZachXBT were among the first to flag it.

Nigeria’s Stock Market Extending its Equities Trading Hours from 9:00 a.m. to 4:00 p.m. WAT


The Nigerian Exchange Limited (NGX) is extending its equities trading hours to 9:00 a.m. – 4:00 p.m. WAT (West Africa Time), effective Monday, April 27, 2026. Previously, trading ran from 9:30 a.m. to 2:30 p.m. This change adds two hours to the daily session, with a 30-minute earlier open and a 1.5-hour later close, and was approved by the Securities and Exchange Commission (SEC) Nigeria.
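The arithmetic of the change can be verified in a few lines of Python (the function name is illustrative; the times are the published NGX windows):

```python
from datetime import datetime

def session_hours(open_t: str, close_t: str) -> float:
    """Length of a trading session in hours, given HH:MM strings."""
    fmt = "%H:%M"
    delta = datetime.strptime(close_t, fmt) - datetime.strptime(open_t, fmt)
    return delta.total_seconds() / 3600

old = session_hours("09:30", "14:30")  # previous NGX window
new = session_hours("09:00", "16:00")  # window effective April 27, 2026
print(old, new, new - old)             # 5.0 7.0 2.0
```

The session grows from five hours to seven: the open moves up by 30 minutes and the close extends by 1.5 hours.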

The move follows FTSE Russell’s announcement in early April 2026 that Nigeria will return to its Frontier Markets index effective September 2026. Nigeria had been reclassified as Unclassified (standalone) for over two years due to issues like foreign exchange liquidity and capital repatriation challenges. The re-inclusion reflects improvements in market infrastructure, liquidity, and accessibility.

NGX described the extension as building on this momentum to deepen market liquidity, enhance price discovery, broaden investor access for domestic, retail, institutional, and international participants, give investors more time to react to news and execute trades, and align Nigeria’s market with global standards, making it more competitive.

The Nigerian stock market has performed strongly recently, with notable gains in 2025 and early 2026, and the FTSE Russell decision triggered rallies in some dual-listed stocks and broadly positive sentiment. The extension also offers more trading flexibility, which is especially helpful for investors in different time zones or with daytime commitments.

Longer sessions often support better liquidity and narrower spreads over time, creating potential for higher volumes. The move also signals Nigeria’s efforts to attract more foreign portfolio investment as it rejoins global benchmarks, which can bring passive inflows from index-tracking funds.

Note that pre-trading or post-trading sessions (if any) and exact order types and boards may have additional details in NGX’s market structure rules; the core continuous trading window is shifting to 9:00 a.m.–4:00 p.m. This development is part of broader reforms at NGX to position the market as more accessible and liquid within Africa’s frontier space.

This change, approved by the SEC and timed with Nigeria’s upcoming return to FTSE Russell’s Frontier Markets index in September 2026, aims to modernize the market and capitalize on improving investor sentiment. Deeper liquidity and tighter spreads: Longer sessions typically allow more buyers and sellers to interact throughout the day, reducing the concentration of activity in a short window.

This can narrow bid-ask spreads, lower transaction costs, and make it easier to execute larger orders without significant price impact. NGX explicitly cited this as a core goal. Investors gain more time to digest news, earnings, economic data, or global events like oil prices, FX movements, or international developments and react in real time rather than rushing or carrying positions overnight.

This should lead to more efficient and accurate pricing over time. Domestic retail and institutional investors benefit from greater flexibility, especially those with daytime jobs or in different parts of Nigeria. International investors find the schedule more compatible with global time zones, potentially encouraging more cross-border flows as Nigeria rejoins benchmark indices.

Passive inflows from frontier-tracking funds and ETFs could increase, particularly into liquid large-cap and banking stocks. The move aligns Nigeria’s market infrastructure closer to global standards, reinforcing the positive momentum from FTSE Russell’s reclassification. It positions NGX as more attractive for capital formation and could support higher overall market volumes and activity in the medium term.

Traders get a wider window to trade without feeling rushed, with potential for better entry and exit prices and reduced overnight risk for some positions, though it requires adjusting routines. Institutional and international investors gain greater ability to manage portfolios across time zones, respond to news, and integrate Nigerian equities into broader strategies.

Combined with index re-inclusion, this could gradually attract more foreign portfolio investment, though actual inflows will depend on macroeconomic stability. Expect operational adjustments for staffing, systems, and risk management during the extended window. Initially, liquidity may be thinner at the new open and close edges, but it should normalize as participants adapt.

The first weeks and months may see uneven liquidity distribution, with possible wider spreads or higher volatility during less active parts of the new session. Historical examples from other markets show that extended hours can start thin before building depth. Brokers, clearing systems, and surveillance must handle the longer day smoothly. While NGX consulted stakeholders, minor teething issues could arise.

No guarantee of immediate volume surge: Extended hours support liquidity but do not create it alone. Sustained benefits will hinge on underlying fundamentals—economic reforms, corporate earnings, and continued improvements in market accessibility. More trading time can amplify intraday moves if major news hits, though it may also dampen overnight gaps in the long run.

The change is structurally positive for accessibility and efficiency, but markets remain driven by fundamentals. Expect gradual benefits in liquidity and participation rather than an overnight transformation. Monitor early trading data post-April 27 for volume trends, spreads, and volatility patterns.

AI Bias Stems from Patterns of Datasets Created by Humans


AI bias refers to systematic and repeatable errors in AI systems that produce unfair, prejudiced, or skewed outcomes. These arise because AI models, especially machine learning systems and large language models, learn patterns from data created or curated by humans, who are inherently imperfect and influenced by societal, historical, and cognitive factors.

Bias is not always intentional—it often reflects real-world inequalities baked into training data, design choices, or deployment contexts.
Understanding the different types of AI biases is crucial for developers, users, and policymakers, as unchecked bias can lead to discriminatory hiring, flawed medical diagnoses, unfair lending, or amplified stereotypes.

Biases are often grouped into three broad buckets: input and data bias, system and algorithmic bias, and application and interaction bias. Input and data biases stem from the training data itself, which is rarely perfectly representative of the real world. Data can reflect past societal prejudices: historical hiring data favoring men, for example, leads AI recruiters to downrank women.

Data can also underrepresent or overrepresent groups, as with facial recognition datasets dominated by lighter-skinned faces, which cause higher error rates for darker skin tones. The way features are measured or labeled can be flawed, such as using zip code as a proxy for socioeconomic status, which correlates with race. And certain events may be under- or over-reported in the data itself.
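Disparities like these are typically surfaced by computing error metrics per group rather than in aggregate. A minimal sketch with synthetic data and hypothetical group labels:

```python
# Compute false negative rates per group; aggregate metrics can hide
# the per-group gaps that constitute representation bias.
from collections import defaultdict

def false_negative_rates(records):
    """records: (group, actual, predicted) tuples, with 1 = positive class."""
    fn = defaultdict(int)   # missed positives per group
    pos = defaultdict(int)  # total positives per group
    for group, actual, predicted in records:
        if actual == 1:
            pos[group] += 1
            if predicted == 0:
                fn[group] += 1
    return {g: fn[g] / pos[g] for g in pos}

# Synthetic data: group B's positives are missed twice as often as group A's.
data = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 1, 1), ("B", 0, 0),
]
print(false_negative_rates(data))
```

Fairness audit tooling generalizes exactly this idea: disaggregate every metric by the groups of interest before declaring a model acceptable.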

Amazon’s 2018 hiring tool, still cited in 2025 discussions, was scrapped because it penalized resumes containing women’s terms, having been trained on male-dominated tech hiring history. System and algorithmic biases emerge from the model’s design, architecture, or optimization choices, even with clean data: the math or rules can favor certain outcomes, as when optimization prioritizes speed over fairness.

Other forms include treating all groups as homogeneous when subgroups differ (a health model averaging across demographics ignores the unique needs of subgroups), testing against metrics or benchmarks that don’t match real-world use, and biases that appear only after combining datasets or in complex models. Application and interaction biases arise in deployment, as when AI reinforces users’ or developers’ preconceptions.

Over-reliance on AI outputs, ignoring their errors, is common in high-stakes decisions like healthcare or policing. Developers can unconsciously embed their own views in labeling, feature selection, or prompts, and AI can amplify cultural stereotypes. Broader societal manifestations cut across these categories.

Racial, gender, age, socioeconomic, cultural, or political biases often appear as downstream effects, e.g., LLMs favoring certain languages or ideologies due to English-heavy web data. Many sources map bias as a cycle: real-world inequalities shape the data, the data shapes model design and deployment, and deployment amplifies the injustices.

Even with advances such as better debiasing techniques (adversarial training, more diverse datasets), biases remain because data is historical and web-scraped, mirroring internet inequalities; models optimize for accuracy on average, not fairness across groups; and biased outputs generate more biased data. Recent examples (2025–2026) include healthcare AI exacerbating treatment gaps, generative tools producing culturally skewed content, and recruitment systems still showing gender and racial skews despite fixes.
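The point that models optimize for accuracy on average, not fairness across groups, shows up in a toy calculation: a model can report high overall accuracy while a smaller group fares much worse. A minimal sketch with synthetic labels:

```python
# Toy illustration: overall accuracy hides a large per-group gap.
def accuracy(pairs):
    """pairs: (actual, predicted) tuples; returns fraction correct."""
    return sum(a == p for a, p in pairs) / len(pairs)

# Synthetic results: the majority group dominates the average.
group_a = [(1, 1)] * 90 + [(1, 0)] * 10   # 90% correct, 100 samples
group_b = [(1, 1)] * 6 + [(1, 0)] * 4     # 60% correct, 10 samples

print(accuracy(group_a), accuracy(group_b))   # 0.9 0.6
print(round(accuracy(group_a + group_b), 3))  # 0.873 overall
```

A loss that averages over all samples happily accepts the 0.873 figure, which is why fairness interventions weight or constrain per-group performance explicitly.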

AI tools have downgraded resumes with women’s terms, favored male candidates, or rejected applicants based on proxies for age, race, or disability. Examples include Amazon’s scrapped tool and the ongoing lawsuits against Workday’s AI screening, certified as class actions in 2025 for disparate impact on older, Black, or disabled candidates.

Qualified individuals from marginalized groups face systematic exclusion, leading to immediate rejections and long-term career setbacks, while companies face lawsuits, settlements, PR damage, and regulatory scrutiny, such as the NYC and California rules on AI hiring tools. In healthcare, algorithms have underestimated care needs for Black patients by using spending as a proxy, and downplayed women’s symptoms in summaries (e.g., 2025 studies on LLMs like Gemma showing softer language for female patients).

Psychiatric treatment plans have varied by race, and misjudgments in imaging or risk scoring have led to delayed or inadequate care. The consequences include worsened health outcomes, higher malpractice risks, settlements of up to $17M, and deepened inequities for marginalized groups. In criminal justice, tools like COMPAS falsely flagged Black defendants as higher recidivism risks at nearly twice the rate of white defendants, influencing sentencing and bail.

Facial recognition systems show higher misidentification rates for darker skin tones, contributing to wrongful arrests and surveillance harms. AI bias isn’t a bug—it’s a mirror of human data and decision-making. Exploring it reveals opportunities for more robust, transparent systems via fairness audits, diverse teams, and ongoing monitoring. True progress comes from acknowledging these patterns without oversimplifying them as purely societal or fixable by one method.