Slack Rival Slashwork Raises $3.5M as Former Meta Engineers Bet on an AI-First Rethink of Workplace Messaging

As artificial intelligence reshapes how work gets done, a group of former Meta engineers is arguing that enterprise communication tools have fallen behind the curve. Their answer is a new startup, Slashwork, which is positioning itself as a post-Slack, post-Teams collaboration platform designed from the ground up for an AI-driven workplace.

Slashwork on Wednesday announced it has raised $3.5 million in seed funding, drawing support from some of the most influential figures behind the very tools it now hopes to challenge. Backers include Slack co-founder Cal Henderson and Sandberg Bernthal Venture Partners, the investment firm of former Meta chief operating officer Sheryl Sandberg.

The London-based startup was founded by Jackson Gabbard, David Miller, and Josh Watzman, all former Facebook engineers. Their pitch is straightforward but ambitious: most enterprise communication platforms in use today were built for a pre-generative AI world and are struggling to adapt.

“We didn’t want to start with, ‘How do we add AI to Slack-like software,’” Gabbard said. “We wanted to ask what enterprise communication should look like if you assume AI is native, not an afterthought.”

A post-Workplace moment

Slashwork’s launch comes against the backdrop of Meta’s decision to shut down Facebook Workplace, the enterprise collaboration tool it introduced in 2016 and formally wound down in 2024. Workplace, which mimicked Facebook’s social feed for corporate use, once reached millions of paid users globally but ultimately failed to become a core strategic priority for Meta as the company pivoted toward AI and immersive technologies.

That closure left a vacuum — both in the market and among the engineers who had built and maintained the product. Several of them are now behind Slashwork, carrying forward lessons from Workplace’s rise and decline.

Julien Codorniou, who led Facebook Workplace from launch to roughly 11 million paid subscribers, now sits on Slashwork’s board and oversaw its incubation. He said the fundamental limitation of today’s dominant tools is that they were designed for conversations between people, not between people and intelligent systems.

“Slack, Teams, Zoom — they’re all more than a decade old,” Codorniou said. “They were optimized for human-to-human communication. With AI, you unlock people talking to systems, and that changes everything.”

AI embedded, not bolted on

Unlike incumbents that are layering AI features onto existing products, Slashwork is embedding large language models into every piece of content from the outset. Messages, posts, images, and shared files all carry LLM embeddings, enabling more sophisticated search and retrieval than traditional keyword-based systems.

Users can instruct AI agents to surface forgotten conversations, locate images that never gained traction in a channel, or summarize threads that span weeks or months. Gabbard said this approach is designed to reduce what he describes as the “invisible tax” of modern work — time lost searching across chats, emails, and documents.
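For illustration, the embedding-first retrieval described above — vectorizing every message at write time and ranking by similarity rather than keyword matching — can be sketched roughly as follows. This is a hypothetical outline, not Slashwork’s actual implementation; the trigram-hashing `embed()` is a deterministic stand-in for a real LLM embedding model, and all names are invented for the example.

```python
import math
import zlib

DIM = 256  # embedding dimensionality for the toy model

def embed(text: str) -> list[float]:
    """Stand-in for an LLM embedding model: hashes character
    trigrams of each token into a fixed-size, L2-normalized vector.
    A production system would call a real embedding model here."""
    vec = [0.0] * DIM
    for tok in text.lower().split():
        for i in range(len(tok) - 2):
            vec[zlib.crc32(tok[i:i + 3].encode()) % DIM] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

class MessageIndex:
    """Embed each message as it is posted; retrieve by similarity."""

    def __init__(self) -> None:
        self._items: list[tuple[str, list[float]]] = []

    def post(self, text: str) -> None:
        self._items.append((text, embed(text)))

    def search(self, query: str, top_k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self._items,
                        key=lambda item: cosine(q, item[1]),
                        reverse=True)
        return [text for text, _ in ranked[:top_k]]

index = MessageIndex()
index.post("Deploy is scheduled for Friday after the design review")
index.post("Lunch options near the office?")
index.post("Quarterly budget numbers are ready for review")

print(index.search("deployment schedule", top_k=1))
# -> ['Deploy is scheduled for Friday after the design review']
```

In a real system each message’s vector would be computed once at write time and stored in a vector index, so a query like “where was that deploy discussion?” ranks results by meaning rather than by exact keywords — the property that keyword-based search in older tools lacks.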

“People don’t remember where something was posted, or even exactly how it was phrased,” he said. “AI can fill in those gaps in a way older systems simply can’t.”

That vision has resonated with early investors who have seen multiple generations of workplace software cycles.

AJ Tennant, a former Facebook sales executive and one of Slack’s earliest sales leaders, said AI-native communication could address long-standing inefficiencies in enterprise collaboration.

“Communication tools today are great at moving messages around,” Tennant said. “What they don’t do well is help you actually get work done. AI agents embedded into communication can close that gap.”

Other early backers include former Meta executives David Fischer and Carolyn Everson, underscoring the extent to which Slashwork is drawing on Meta’s alumni network.

Entering a crowded battlefield

Slashwork is launching into a fiercely competitive market dominated by Salesforce’s Slack and Microsoft Teams, both of which are deeply integrated into corporate IT environments and are rapidly rolling out their own AI features. Microsoft, in particular, has tied Teams closely to its Copilot AI strategy, while Salesforce is positioning Slack as a hub for AI-powered workflows.

The challenge for Slashwork will be convincing companies to adopt a new platform at a time when collaboration fatigue is already high. Gabbard acknowledged the difficulty but said the startup is deliberately starting small.

Slashwork is initially rolling out to smaller, tech-focused companies, with a broader launch planned later in the year. The company plans to keep headcount lean, using the funding primarily for product development, design refinement, and rapid iteration rather than aggressive sales expansion.

A bet on the next decade of work

Beyond features, Slashwork’s broader argument is that enterprise communication is entering a structural shift. As AI systems become active participants in work — drafting content, retrieving information, and coordinating tasks — tools designed purely around chat threads and channels may struggle to keep up.

For Codorniou, the opportunity lies in reimagining communication not just as conversation, but as an interface between humans and intelligent systems.

“The next generation of tools won’t just help people talk,” he said. “They’ll help people think, decide, and execute — together with AI.”

With backing from veterans who helped build the last era of workplace software, the startup is making a clear bet: that the AI era demands not incremental upgrades, but a clean break from how enterprise communication has worked for the past decade.

Uber Beats Revenue Expectations in Q4 2025 but Shares Dip as Investors Eye AV Transition and Margin Pressures

Uber Technologies Inc. reported stronger-than-expected revenue for the fourth quarter ended December 31, 2025, driven by robust growth in both its mobility and delivery segments, yet shares slipped in premarket trading on Wednesday, as investors parsed the results through the lens of rising autonomous vehicle (AV) ambitions and ongoing profitability concerns.

Adjusted earnings per share came in at 71 cents, while revenue reached $14.37 billion—surpassing the LSEG consensus estimate of $14.32 billion and climbing 20% from $12 billion a year earlier. Gross bookings hit $54.1 billion, topping StreetAccount’s $53.1 billion forecast.

The mobility segment, Uber’s core ride-hailing business, generated $8.2 billion in revenue, up 19% year-over-year, though it fell slightly short of StreetAccount’s $8.3 billion expectation. Delivery, encompassing Uber Eats and expanding grocery and retail services, posted the strongest growth at 30% to $4.9 billion, beating estimates of $4.72 billion.

Net income for the quarter was $296 million, a sharp decline from $6.88 billion in the prior-year period, largely due to a $1.6 billion net pre-tax headwind from revaluations of equity investments.

Excluding such non-operating items, profitability trends remained solid, supported by disciplined cost management and scale efficiencies. CEO Dara Khosrowshahi struck an optimistic tone in prepared remarks ahead of the earnings call, highlighting delivery as the company’s fastest-growing segment in 2025, particularly in Europe, the Middle East, and Africa.

Strategic partnerships with OpenTable (for restaurant reservations), Shopify (for e-commerce integration), and major grocers such as Loblaws (Canada), Biedronka (Poland), Seiyu (Japan), and Coles (Australia) fueled the expansion, broadening Uber Eats beyond food into everyday retail. The company also underscored progress in its Uber One subscription program, noting that members book more rides and purchase more items across the platform after subscribing.

Advertising revenue continued to gain traction, with Khosrowshahi pointing to integrations with generative AI tools like ChatGPT to enhance discovery—enabling users to explore services and restaurants before completing checkouts. A significant portion of the shareholder deck and call focused on Uber’s accelerating shift toward autonomous vehicles.

Khosrowshahi reiterated his conviction—first voiced a year earlier—that AVs represent a “multi-trillion dollar opportunity” and “fundamentally amplify the strengths of our existing platform.” In Atlanta and Austin, Texas, where Uber offered autonomous rides in 2025, overall trip growth “significantly accelerated” even for human-driven vehicles. In San Francisco, where Waymo operates driverless services via its own app and partners with Uber, Khosrowshahi noted that AV supply has grown the total ride-hailing category rather than cannibalizing it.

Uber currently facilitates AV trips in select markets through partnerships with Waymo (Alphabet), Waabi, Lucid Motors, and others. Khosrowshahi projected expansion to up to 15 cities globally by the end of 2026, including Houston, Los Angeles, San Francisco, London, Munich, Hong Kong, Zurich, and Madrid—split between the U.S. and international markets. By 2029, he said, Uber intends to become “the largest facilitator of AV trips in the world.”

He tempered expectations, however, cautioning that autonomous vehicles “are likely to remain a very small portion of the rideshare category for many years to come” due to technological, regulatory, safety, and scaling hurdles. This tempered outlook may have contributed to the premarket share decline, as some investors had hoped for more aggressive near-term AV revenue guidance.

Guidance for the first quarter of 2026 called for gross bookings growth of at least 17% year-over-year, targeting a range of $52 billion to $53.5 billion. The company reiterated its long-term target of achieving profitability on an adjusted EBITDA basis and generating strong free cash flow, though near-term margin pressures from AV investments and competitive dynamics remain in focus.

Uber’s results arrive during a pivotal transition for the ride-hailing industry. The rapid rollout of driverless services in urban markets—led by Waymo’s fully autonomous operations in San Francisco since 2024—has reshaped competitive landscapes. Uber’s strategy of partnering rather than owning AV fleets positions it as a platform aggregator, potentially mitigating capital intensity while capturing network effects.

Despite the revenue beat, premarket shares softened as investors weighed the AV narrative against near-term profitability concerns and broader market rotation away from growth stocks. The company’s ability to balance aggressive AV expansion with sustained delivery momentum and margin improvement will likely remain a key focus for analysts and shareholders in the months ahead.

SpaceX-xAI Merger Positions the Combined Company as a Powerhouse at the Intersection of AI and Space

Elon Musk’s xAI has recently begun hiring “crypto experts” specifically in roles like “Finance Expert – Crypto” to help train and enhance its AI models’ understanding of cryptocurrency markets and trading dynamics.

The job postings appear on xAI’s official careers page via Greenhouse, and the news spread quickly on X (formerly Twitter) with headlines like “Elon Musk’s xAI begins hiring ‘crypto experts’ to teach AI models how to trade.”

The positions are remote and focus on data annotation, evaluation, and expert reasoning to improve frontier AI models including Grok. Rather than having the AI directly execute live trades, the experts will teach models to understand real-world crypto trading behavior; analyze on-chain activity using tools like Dune Analytics, Glassnode, Nansen, or DefiLlama; handle DeFi protocols, derivatives, arbitrage opportunities, MEV (Maximal Extractable Value), quantitative strategies, and risk management in volatile 24/7 markets; and reason like professional traders instead of just predicting prices.

Requirements typically include a Master’s or PhD in quantitative fields or equivalent professional experience as a trader or analyst, deep knowledge of blockchain data platforms, and expertise in areas like on-chain analysis and quantitative finance.

Compensation is listed in the range of $45–$100 per hour for U.S. applicants, with some state restrictions noted. This aligns with xAI’s broader push to build more capable, truth-seeking AI systems by incorporating specialized domain knowledge—similar to their ongoing hiring of experts in finance, quant trading, and other areas like STEM fields.

The timing comes amid other Musk-related developments, such as reports of potential mergers or integrations involving xAI with SpaceX, and growing institutional interest in AI-crypto intersections.

While speculative chatter on X ties this to potential AI-native trading systems or on-chain execution, the job descriptions emphasize training and refinement rather than deploying autonomous trading bots. This move signals deepening integration between advanced AI and the complexities of digital asset markets.

Elon Musk’s xAI has been fully integrated into SpaceX through a major acquisition/merger announced on February 2, 2026. This marks a significant consolidation of Musk’s companies, creating what he describes as “the most ambitious, vertically-integrated innovation engine on (and off) Earth.”

SpaceX acquired xAI in an all-stock deal (xAI shares exchanged for SpaceX shares). The combined entity is valued at approximately $1.25 trillion, with SpaceX valued at around $1 trillion and xAI at $250 billion. This makes it the largest merger in history by valuation.

The deal was confirmed by SpaceX on its website and via Musk’s posts on X. In a company memo and statement, Musk emphasized uniting AI, rockets, space-based internet (Starlink), direct-to-mobile communications, and real-time information platforms including X, which had previously merged into xAI.

The merger aims to address AI’s massive energy and compute demands by enabling space-based data centers and orbital AI infrastructure. Musk has argued that solar-powered AI satellites in space (leveraging Starlink’s constellation and Starship launches) are essential for scaling AI beyond Earth’s energy limits.

This combines xAI’s Grok models and AI expertise with SpaceX’s launch, satellite, and orbital capabilities. Talks were reported in late January 2026, with the deal closing quickly. It precedes SpaceX’s anticipated blockbuster IPO later in 2026 (potentially mid-year, targeting tens of billions raised), which could now include the full integrated entity.

The merger boosted Musk’s net worth significantly; estimates put him over $850 billion shortly after, driven by his stakes in the combined company. On X (the social platform), users noted playful touches like a new “like” animation featuring a rocket launch on merger-related posts, symbolizing the SpaceX tie-in.

This builds on prior moves, like X merging into xAI earlier and potential synergies with Tesla, and speculation continues about further integrations across Musk’s companies. Musk framed the deal as advancing humanity’s multi-planetary future and understanding the universe, with phrases like “scaling to make a sentient sun” and extending consciousness to the stars.

This integration positions the combined SpaceX-xAI as a powerhouse at the intersection of AI, space exploration, and global connectivity—potentially disrupting both sectors. The move has generated massive buzz, with some viewing it as Musk unifying his empire ahead of major public market events.

Nvidia’s H200 Sales to China Stall as U.S. Security Review Exposes Deepening Tech Tensions

Nvidia’s H200 AI chip remains caught in a familiar but increasingly consequential bind: approved in principle, constrained in practice, and emblematic of how U.S.-China technology relations now operate in slow motion rather than absolutes.

Nearly two months after U.S. President Donald Trump gave the green light for exports, sales of the H200 to China have yet to resume in any meaningful way. The delay, as reported by the Financial Times, is neither a technical issue nor a question of demand. It is the product of a layered national security review that has exposed fault lines within the U.S. government itself and reinforced uncertainty for Chinese buyers already wary of sudden policy reversals.

At the center of the impasse is the licensing process imposed in January, when the Commerce Department eased export curbs on the H200 but required applications to be reviewed not just internally, but also by the departments of State, Defense, and Energy. That structure reflects how AI chips are no longer treated as ordinary commercial goods, but as strategic assets with implications for military capability, intelligence gathering, and long-term economic power.

According to people familiar with the discussions, Commerce has completed its assessment, suggesting that from a technical export-control standpoint, the H200 can be sold under defined conditions. The sticking point appears to be the State Department, which has argued for tighter restrictions to prevent China from deploying the chips in ways that could undermine U.S. national security. That includes concerns around large-scale AI model training, dual-use applications, and the potential for civilian infrastructure to be repurposed for state or military ends.

This internal pushback matters because it signals that even when export rules are formally relaxed, enforcement and interpretation can remain fluid. For Nvidia, that creates a grey zone where executive assurances do not immediately translate into purchase orders.

Chinese customers, according to the FT, are holding off on placing H200 orders until it becomes clear not only whether licenses will be granted, but also what strings may be attached. Those conditions could include limits on volumes, end uses, data center configurations, or post-sale compliance obligations.

Nvidia CEO Jensen Huang has publicly expressed hope that sales will proceed, saying last week that the license is being finalized. His comments point to confidence that a pathway exists, but they also underscore how dependent the company has become on regulatory discretion rather than straightforward market access.

Over the past two years, Nvidia has repeatedly redesigned and repositioned chips to fit within U.S. rules, turning export compliance into a core part of product strategy.

Reuters reported last month that China approved its first batch of H200 chips for import, a move seen as a pragmatic shift by Beijing. China’s leadership faces its own balancing act: sustaining rapid AI development in the near term while accelerating domestic chip capabilities to reduce reliance on U.S. suppliers. Allowing limited imports of the H200 fits that approach, buying time for local players even as Washington seeks to cap how much advanced capacity China can access.

The uncertainty reinforces a trend already underway for Chinese firms. Cloud providers, AI startups, and research institutions are diversifying supply chains, testing domestic alternatives, and adapting software stacks to work across multiple hardware platforms. Even if Nvidia ultimately secures licenses, the stop-start nature of access weakens its long-term position by encouraging customers to plan for a future where U.S. chips cannot be assumed to be available.

From Washington’s perspective, the drawn-out review reflects a deeper debate about the effectiveness of export controls. Tight restrictions risk pushing China to innovate faster at home, potentially eroding U.S. leverage over time. Looser controls, meanwhile, raise fears of enabling technological advances that could narrow the strategic gap in areas Washington considers sensitive.

The H200 sits squarely in that tension: powerful enough to matter, but already a step behind Nvidia’s most advanced offerings reserved for unrestricted markets.

The episode also illustrates how inter-agency dynamics now shape global tech markets. Decisions are no longer binary approvals or bans, but negotiated outcomes influenced by competing priorities across departments. That means longer timelines, higher compliance costs, and greater revenue volatility for companies like Nvidia.

Until the national security review concludes and license conditions are spelled out, Nvidia’s H200 is expected to remain in a holding pattern. The delay may eventually be resolved, but the broader message has already been sent: access to advanced AI hardware is now a matter of statecraft as much as commerce.

AMD Shares Slide as AI-Fueled Optimism Collides With Cautious First-Quarter Outlook

Shares of Advanced Micro Devices fell sharply in premarket trading on Wednesday after the chipmaker’s first-quarter revenue outlook failed to live up to the market’s loftiest expectations.

The pullback underscored a growing reality in the AI-driven chip rally: strong earnings are no longer sufficient when expectations are stretched to extremes.

The stock slid about 9% as investors who had been positioned for an even more aggressive forecast, given the scale of global spending on artificial intelligence infrastructure, were left unsatisfied. The selloff came despite AMD delivering a solid fourth quarter that beat Wall Street estimates and reinforced its status as one of the most important challengers to Nvidia in the AI chip market.

AMD reported fourth-quarter revenue of $10.27 billion, topping LSEG consensus estimates of $9.67 billion. The result capped a year in which the company benefited from surging demand for data-center processors, particularly graphics and accelerator chips used in AI training and inference.

For the first quarter, AMD projected revenue of $9.8 billion, plus or minus $300 million. While the midpoint was still above the broader market estimate of around $9.38 billion, some analysts had expected a more forceful signal that AI-related demand would drive a steeper sequential ramp.

That disconnect between expectations and guidance proved costly for the shares.

“Expectations were pretty sky high,” said Chris Rolland, a semiconductor analyst at Susquehanna, in comments on CNBC.

He added that AMD’s disclosure of China-related revenue shipments in the quarter, which were not fully reflected in analysts’ models, made the headline beat appear stronger than it otherwise would have been.

“When you account for that, the beat was far less substantial than we would’ve thought,” Rolland said.

China exposure and regulatory risk

The reference to China highlights a sensitive area for U.S. chipmakers. Export controls imposed by Washington have limited the types of advanced AI chips that can be sold into the Chinese market, forcing companies like AMD and Nvidia to redesign products to comply with restrictions.

Any revenue tied to China is closely watched by investors, both for sustainability and for the risk of further regulatory tightening. AMD did not provide extensive detail on how much of its recent growth was linked to China-specific products, but the mere presence of that revenue added complexity to the market’s assessment of underlying demand.

Despite the near-term disappointment, there was little indication that AMD’s longer-term AI story has weakened. Demand for its data-center products remains strong, and analysts say the company continues to signal large-scale deployments ahead.

Rolland noted that AMD has hinted at multi-gigawatt AI contracts, a scale that underscores how rapidly computing requirements are expanding as companies race to deploy and monetize AI systems.

That trajectory is reinforced by AMD’s recent strategic partnerships. In October, the company announced a landmark agreement with OpenAI, under which the ChatGPT developer could take up to a 10% equity stake in AMD. As part of the deal, OpenAI plans to deploy 6 gigawatts of AMD Instinct GPUs over several years, starting with an initial 1-gigawatt rollout in the second half of 2026.

The partnership positions AMD as a core supplier in one of the world’s most visible AI ecosystems and marks a significant endorsement of its hardware roadmap.

In addition, Oracle said it will deploy 50,000 AMD AI chips beginning later this year as it expands cloud capacity to meet rising demand for AI workloads from enterprise customers.

Valuation pressure in an AI-fueled market

AMD’s stock has more than doubled over the past year, fueled by optimism that it can capture meaningful share in a market long dominated by Nvidia. That rally, however, has also raised the bar for performance.

Investors are increasingly demanding not just growth, but clear evidence that AI-related revenue will scale rapidly enough to justify current valuations. Any hint of moderation — even in the context of a beat-and-raise quarter — risks triggering sharp reactions.

The response to AMD’s guidance mirrors a broader pattern across AI-linked stocks, where earnings season has become less about whether companies are benefiting from AI, and more about how quickly that benefit is accelerating.

Looking ahead, the focus will be on how quickly AMD can convert its growing list of AI partnerships into sustained revenue growth, particularly in its data-center segment. Investors will also watch for clearer disclosure around the mix of training versus inference workloads, competition with Nvidia’s next-generation chips, and the impact of export controls on international sales.

For now, Wednesday’s selloff suggests that the AI boom has entered a more demanding phase. For AMD, the long-term opportunity remains intact, but the market is signaling that optimism alone is no longer enough.