BlockDAG’s 1,566% ROI Potential Captures Global Interest; Shiba Inu & Ethereum Face Near-Term Hurdles


As January 2026 progresses, three specific assets are defining the market conversation. The Shiba Inu coin price is stabilizing near $0.0000085 after a period of volatility, while the Ethereum price today remains close to $3,113 as institutional funds navigate mixed ETF flows. Both tokens are maintaining their positions, yet they currently lack the “ground-floor” entry and massive projected markup that growth-focused buyers typically seek.

This is where BlockDAG (BDAG) stands out. Its funding phase is entering its final days, featuring a structured 1,566% price increase between the current round and its upcoming launch. With over $443 million secured and the $0.003 entry point set to close, participants are moving quickly. For those identifying the next big crypto before it hits major exchanges, the opportunity is reaching its final countdown.

Shiba Inu: Technical Recovery and Privacy Upgrades Underway

The Shiba Inu coin price is currently valued at approximately $0.0000084, following a 30% rally earlier this month. After addressing a $4 million exploit on the Shibarium network from late 2025, the development team is now integrating Zama’s Fully Homomorphic Encryption (FHE). This upgrade, expected by Q2 2026, aims to provide full on-chain privacy to prevent future security vulnerabilities.

On the fundamental side, over 40% of the SHIB supply has been permanently removed from circulation, with the burn rate accelerating by 50% recently. The network has also passed the 1.5 billion transaction milestone and established a partnership with TokenPlay AI to expand its gaming and app ecosystem.

However, the Shiba Inu coin price faces immediate resistance near $0.0000108. Technical data suggests a neutral-to-overbought RSI, indicating that a period of consolidation is likely if SHIB cannot decisively break through this ceiling. Analysts are currently cautious, watching to see if community-driven recovery can overcome shallow exchange liquidity.

Ethereum: Institutional Support Grows Amid Supply Constraints

The Ethereum price today is holding near $3,113 as the network experiences record activity. While spot ETH ETFs saw nearly $100 million in outflows last week, broader institutional interest remains firm. Notably, firms like BitMine Immersion have continued to accumulate, with their total holdings now exceeding 4.14 million ETH, or roughly 3.43% of the circulating supply.

Technically, the Ethereum price today is navigating a resistance zone near $3,200. Reclaiming this level is essential for a move toward the $3,500 target. On the support side, the $2,900 to $3,000 range has proven resilient over the last two weeks.

While the token price has been slow to react, Ethereum’s fundamentals are strengthening. The recent Fusaka upgrade and the introduction of PeerDAS have doubled data availability for Layer-2 networks, significantly lowering transaction costs. For long-term investors, the bet is that this massive network usage will eventually translate into higher pricing power for ETH holders.

BlockDAG: Final Window to Secure Positions Before the 1,566% Launch Adjustment

For those searching for the next big crypto, BlockDAG is presenting a time-sensitive opportunity. The project is in its final presale stretch, with tokens priced at just $0.003 in Batch 34. When the network officially launches on February 16th, the price is set to shift to $0.05. This represents a 16.67x multiplier, or a 1,566% gain, built directly into the project’s launch structure.
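The quoted multiplier follows directly from the two prices. As a minimal sanity check on the arithmetic (both figures taken from the article itself):

```python
# Sanity check on the presale figures quoted above (prices from the article).
entry_price = 0.003    # Batch 34 presale price in USD
listing_price = 0.05   # planned launch price in USD

multiplier = listing_price / entry_price   # roughly 16.67x
percent_gain = (multiplier - 1) * 100      # roughly 1,566.7%, quoted as 1,566%

print(f"{multiplier:.2f}x multiplier, {percent_gain:.1f}% gain over entry")
```

The article's "1,566%" figure is the gain over the entry price, i.e. the 16.67x multiplier minus the original stake.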

This trajectory is based on a defined economic roadmap rather than speculative sentiment. Entering at $0.003 allows participants to position themselves before the market resets to the $0.05 listing price. The presale is scheduled to end on January 26th, after which this specific entry rate will no longer be available. No extensions or additional batches are planned.

The project’s momentum is supported by over $443 million in funding and a community of 312,000 holders. BlockDAG has also established a functional mining ecosystem, with 21,000 X Series rigs sold and 3.5 million users active on the X1 mobile app. This allows everyone, from smartphone users to professional miners, to earn daily rewards ranging from 20 to 2,000 BDAG.

For anyone identifying the next big crypto for the 2026 cycle, BlockDAG offers a rare mix of a fixed entry price, a guaranteed launch markup, and an active user base. With only days remaining, the window for this entry is closing rapidly.

Identifying the Next Big Crypto for 2026

The Shiba Inu coin price shows potential for growth if it clears key resistance, though current volatility adds a layer of risk. Similarly, the Ethereum price today is backed by strong institutional infrastructure, but gains may be gradual as the network prioritizes long-term scalability over immediate fee revenue.

BlockDAG (BDAG) offers a different dynamic. Unlike established tokens that move with market sentiment, its 1,566% upside to the launch price is a structured part of its roadmap. With the presale ending January 26th and only 3.2 billion coins remaining at $0.003, the rush to secure early-stage pricing is intensifying. For those hunting the next big crypto, BlockDAG is a primary contender, but the opportunity to enter at this level is nearly over.

Presale: https://purchase.blockdag.network

Website: https://blockdag.network

Telegram: https://t.me/blockDAGnetworkOfficial

Discord: https://discord.gg/Q7BxghMVyu

Slack’s CTO on Fighting the Attention Economy: Why Turning Everything Off Is Sometimes the Only Way to Think


In a technology industry defined by constant alerts, instant replies, and the expectation of permanent availability, Parker Harris is making a blunt assessment about the cost. He notes that sustained thinking is becoming harder to protect, even for the people building the tools that power modern work.

Harris, chief technology officer of Slack and a cofounder of Salesforce, says the challenge of concentration has intensified as digital collaboration platforms embed themselves deeper into daily routines. Meetings stack up. Messages pile in. Calendars and inboxes begin to dictate the rhythm of the day, often crowding out time for deliberate, high-value work.

“Sometimes my day is driven by my calendar or driven by my inbox or what’s coming at me,” Harris told Business Insider.

His response is not rooted in elaborate productivity systems or rigid scheduling frameworks. Instead, it is about restraint. When Harris needs to focus, he removes stimuli altogether. Notification sounds are switched off. His phone is turned face down and placed out of view.

“I hate that knock sound or any other sound,” he said, explaining that even passive visual cues can pull attention away from deep work. “I don’t want to see my phone.”

The approach reflects a broader reality Harris acknowledges: Slack, by design, operates in what he describes as an “interrupt-driven” environment. Messages are meant to be seen. Collaboration happens in real time. But he draws a clear line between receiving information and being compelled to respond immediately.

While Harris typically keeps notifications enabled, he does not feel obligated to address every message the moment it arrives. That discipline, he suggests, is increasingly essential in workplaces where responsiveness is often mistaken for productivity.

“We all need to find a way to concentrate,” he said.

For Harris, deep work requires intentional isolation. When he needs to think through complex problems, he enters a specific Slack channel, internal planning document, or long-form strategy file and immerses himself fully. The shift is deliberate: one task, one context, one outcome.

There are exceptions. If Slack itself faces a critical issue, or if Salesforce CEO Marc Benioff reaches out, Harris says he will immediately change course. Those moments, however, are defined by urgency rather than habit.

“Unless there’s some fire coming at me, like Slack has some issue, or Marc Benioff wants to talk to me,” he said, “I’ll drop what I’m doing.”

This tension between constant connectivity and meaningful focus sits at the center of Slack’s evolving product strategy, particularly as the company rolls out a new AI-powered version of Slackbot. The upgraded tool, launched Tuesday, is designed to help users navigate the very overload that collaboration platforms have helped create.

Slackbot is being positioned as a personal work agent embedded directly into the platform. According to Slack, it will surface relevant context across conversations, help users track notifications more intelligently, and assist in prioritizing which messages and tasks actually require attention. The aim is to reduce cognitive load, not amplify it.

Harris says the broader objective is to shift employees away from a reactive posture — constantly responding to whatever appears next — toward a more proactive way of working.

“Slack is where work gets done,” he said. “We’re going to continue to tackle the productivity challenge. We want to make employees more productive.”

The rollout comes at a time when companies across the tech sector are betting that AI can tame workplace complexity rather than worsen it. Harris is careful, however, to frame Slackbot as an assistant rather than an authority. He advises users to be explicit about context, audience, and desired outcomes when interacting with the tool, to cross-reference its outputs with other data sources, and to review responses carefully before sharing them.

Those guardrails mirror Slack’s own internal philosophy. AI, Harris suggests, should help workers decide what matters — not demand more attention simply because it can.

The paradox is difficult to ignore because Slack, a platform synonymous with workplace chatter, is now positioning AI as a filter against distraction. Harris does not see that as a contradiction. Instead, he frames it as an evolution driven by necessity.

As digital tools multiply and work becomes increasingly fragmented, focus itself has become a scarce resource. Harris’s personal habits — silencing notifications, hiding his phone, carving out protected time for thinking — underscore a central lesson emerging across the tech industry: productivity is no longer about doing more, faster. It is about choosing when not to engage.

In that sense, Slack’s CTO is not arguing against collaboration or connectivity. He is making a narrower, more pointed case — that in an always-on workplace, the ability to disconnect, even briefly, is now a core professional skill.

Salesforce Pushes Deeper Into Workplace AI With Updated Slackbot Powered by Anthropic’s AI Model


Salesforce is making a renewed and more assertive push to embed generative artificial intelligence into the daily rhythms of office work, betting that Slackbot — once a modest helper — can evolve into a central intelligence layer for modern enterprises struggling with information overload.

The company said its AI-powered Slackbot is now rolling out to Business+ and Enterprise+ subscribers, marking one of the most consequential upgrades since Salesforce acquired Slack for $27.1 billion in 2021. The move is not just about adding another chatbot to the workplace, but about redefining how employees search for information, prepare for meetings, and navigate sprawling corporate systems.

Salesforce argues that Slackbot’s advantage lies in where it lives. Unlike standalone tools such as ChatGPT, the assistant is embedded directly into Slack, giving it contextual awareness of conversations, files, channels, and team structures. According to the company, it only accesses information a user is permitted to see, addressing one of the biggest enterprise concerns around AI adoption: data security and access control.

Slackbot can surface information not only from Slack itself, but also from Salesforce applications, Google Drive, Box, Atlassian’s Confluence, and other connected services. In effect, Salesforce is positioning Slack as a single interface for enterprise knowledge, with AI acting as the connective tissue across previously siloed systems.

Under the hood, the assistant is powered by Anthropic’s Claude model, though Salesforce co-founder and chief technology officer Parker Harris said the company is actively testing alternatives. That flexibility signals a broader strategy: Salesforce wants to remain model-agnostic while controlling the interface and data layer where AI is applied.

While the AI boom has supercharged companies such as Nvidia, Broadcom, and Google, Wall Street has been more cautious about enterprise software firms. Salesforce shares are down 18% over the past year, trailing the Nasdaq’s 24% gain, as investors question whether large language models and autonomous coding agents could eventually weaken demand for traditional cloud software.

Harris rejects that premise outright. He argues that generic AI tools are fundamentally disconnected from the complex permissions, workflows, and compliance requirements that define large organizations.

“People who say, ‘oh I could vibe code up Slack and Salesforce now, and my AI is just going to do it all for me’ are crazy,” he said, underscoring the company’s view that enterprise software remains indispensable, even as AI becomes more capable.

This belief underpins Salesforce’s broader AI strategy. Beyond Slackbot, the company has been rolling out Agentforce services, designed to automate customer service, sales, and other business functions. Rather than replacing human workers, Salesforce frames these tools as productivity amplifiers that reduce friction and manual effort.

Slackbot itself has existed since Slack’s 2014 launch, initially handling simple automated messages and third-party notifications. But in the three years since ChatGPT’s release in late 2022, Slack was widely seen as slow to respond to the generative AI wave, especially as Microsoft and Google rapidly integrated AI assistants into Teams, Office, and Workspace.

The latest upgrade is Salesforce’s attempt to close that gap — and possibly leapfrog competitors by focusing on depth of integration rather than surface-level AI features.

Internally, the idea gained momentum after Slack engineers questioned why executives, including CEO Marc Benioff, were turning to external AI tools and uploading sensitive internal documents. The goal was to create an in-house alternative that matched the convenience of ChatGPT while remaining securely embedded within Salesforce’s ecosystem.

According to Harris, the result has been a behavioral shift at the top of the company.

“Now he’s doing everything with Slackbot,” Harris said of Benioff, suggesting the tool has crossed from experiment to habit.

Salesforce is highlighting adoption metrics to bolster its case. Slack is used by millions of people across thousands of organizations and remains one of Salesforce’s fastest-growing cloud offerings. Harris said Slackbot is already the most rapidly adopted feature in Salesforce’s 27-year history, a claim that underscores how receptive enterprise users appear to be to AI tools that slot naturally into existing workflows.

He also suggested the upgrade could have competitive ripple effects. Salesforce expects some companies to reconsider paying for separate ChatGPT subscriptions if Slackbot proves capable enough inside their core work environment. Harris even predicted that AI-enhanced Slack could accelerate customer migration away from Microsoft Teams.

At the same time, recent executive moves highlight the intensity of the AI arms race. In December, OpenAI hired Slack CEO Denise Dresser as its chief revenue officer, while Slack’s former product chief Rob Seaman became interim CEO. The shift underlines how aggressively AI-native companies are recruiting leaders with deep enterprise experience.

Early customer feedback suggests Slackbot’s value is less about dramatic automation and more about incremental time savings. Demetri Salvaggio, vice president of customer experience and operations at business travel software firm Engine, said he uses the tool at the end of the day to check whether he has missed important messages or unresolved conversations.

Engine also licenses Google’s Gemini assistant and Anthropic’s Claude, but Salvaggio noted those tools sit outside Slack, limiting their usefulness. By contrast, Slackbot’s native integration makes it easier to trust and adopt.

He estimates the assistant saves him between 45 minutes and an hour each week — a modest figure on its own, but one that becomes meaningful when multiplied across large organizations.

The math is central to the strategy for Salesforce. Rather than selling AI as a radical break from existing systems, the company is pitching it as a way to make those systems more intuitive, more valuable, and harder to replace. In a workplace increasingly defined by notification fatigue and fragmented tools, the company is betting that AI inside Slack can turn chaos into context.

Nvidia Clarifies H200 AI Chip Payment Terms Amid China Export Uncertainty


Nvidia has sought to quell concerns about its sales practices for the H200 artificial intelligence chips, confirming that it does not require full upfront payment from customers, particularly in China, where regulatory approval for imports remains uncertain.

The clarification, provided to Reuters on Tuesday, comes after a January 8 report suggested that Nvidia was imposing unusually stringent payment terms that could have forced Chinese buyers to assume significant financial risk before receiving the chips.

Background

The H200 is Nvidia’s latest generation of high-performance AI chips, designed to power advanced workloads including large language models, generative AI, and other compute-intensive applications. Demand for these chips has surged as AI adoption accelerates worldwide, making Nvidia one of the most influential suppliers in the global AI hardware market.

Amid the U.S. export controls targeting AI technology to China, companies like Nvidia face a complex environment. Chinese regulators have yet to confirm approval for many high-end AI chip imports, creating uncertainty over whether shipments can legally enter the country. Reports had indicated that Nvidia might require full upfront payment for H200 chips, effectively transferring financial risk to Chinese buyers who would be committing capital without a guarantee of delivery.

In response, Nvidia stressed that it “would never require customers to pay for products they do not receive.” A company source clarified that while prior transactions with Chinese clients sometimes included advance payment provisions, these were typically partial deposits rather than full payments.

For the H200, however, Nvidia has applied stricter enforcement of terms due to the regulatory ambiguities, ensuring the company itself is not exposed to compliance risk if shipments are blocked or delayed by Chinese authorities.

The situation underscores the tightrope U.S. chipmakers walk between meeting global AI demand and complying with increasingly complex geopolitical restrictions. China represents a major market for AI hardware, but export controls issued by the U.S. government—including limits on high-end AI chips and related technology—have complicated transactions. Companies must carefully navigate licensing approvals, customer risk, and commercial commitments, particularly for high-value products like the H200.

Analysts note that the H200 is a strategic product for Nvidia, as its next-generation architecture supports high-bandwidth memory configurations and multi-GPU setups crucial for generative AI models. Any disruption in supply to a key market like China could have ripple effects on global AI deployments, cloud providers, and research institutions relying on Nvidia hardware.

By clarifying payment policies, Nvidia seeks to reassure buyers that they will not be financially overexposed, even if regulatory approvals are delayed. The company’s stance also signals its effort to maintain trust with international partners while adhering to U.S. export regulations. Observers see this episode as illustrative of broader tensions in the AI semiconductor industry, where innovation, market demand, and geopolitics intersect in unprecedented ways.

Some analysts warn that as global AI adoption grows, U.S. firms like Nvidia may face increasing scrutiny from governments on both sides of the Pacific, balancing compliance, commercial strategy, and shareholder expectations. However, Nvidia appears committed to mitigating risk for its customers while ensuring that regulatory constraints do not impede its dominant position in the high-performance AI chip market.

Apple Taps Google’s Gemini for Siri, Cementing the Duo’s Alliance in the AI Industry


Apple has announced a decision to use Google’s Gemini models to power its long-awaited Siri overhaul, marking a strategic pivot that tightens one of Silicon Valley’s most lucrative partnerships and reshuffles competitive hierarchies in artificial intelligence.

Announced on Monday, the multi-year deal will see Google’s Gemini models form the backbone of Siri and other forthcoming Apple Intelligence features slated for release later this year. The agreement also raises fresh questions about market concentration, privacy, and the future role of OpenAI inside Apple’s ecosystem.

While neither company disclosed financial terms, the implications are expected to be enormous. Apple brings more than two billion active devices into the equation. Google brings frontier AI models it believes are now mature enough to operate at Apple’s scale.

“After careful evaluation, Apple determined Google’s AI technology provides the most capable foundation for Apple Foundation Models,” Google said, framing the agreement as a technical endorsement rather than a commercial compromise.

The wording matters because Apple has spent years signaling it wants to control core technologies internally. Turning to Gemini suggests that, at least for now, speed and capability have taken precedence over full independence.

For Google, the deal is a decisive competitive win. Gemini already underpins much of Samsung’s “Galaxy AI,” but Siri offers something Samsung cannot: habitual, daily use across a tightly controlled ecosystem. Siri is embedded not just in phones, but in laptops, watches, tablets, cars, and smart homes. Each interaction becomes a distribution channel for Gemini at a scale few AI companies can match.

The agreement also sharpens the contrast with OpenAI. Apple introduced ChatGPT integration in late 2024, allowing Siri to hand off complex queries to the chatbot. That relationship remains intact, but its boundaries are now clearer. ChatGPT stays as an opt-in assistant for advanced questions, while Gemini becomes the default intelligence layer.

“Apple’s decision to use Google’s Gemini models for Siri shifts OpenAI into a more supporting role,” said Parth Talsania, CEO of Equisights Research. “ChatGPT remains relevant, but no longer sits at the center of Apple’s AI strategy.”

That repositioning comes at a sensitive moment for OpenAI. After Google unveiled Gemini 3 late last year, OpenAI CEO Sam Altman reportedly issued a “code red,” urging teams to accelerate development. Apple’s decision to choose Gemini over a deeper OpenAI integration underscores how fluid alliances remain in a market still defining its long-term winners.

The deal also highlights Apple’s uneven path in AI. While rivals raced ahead with chatbots and image generators, Apple moved cautiously, emphasizing privacy, on-device processing, and reliability. That caution, however, translated into delays. Siri’s revamp slipped, top-level executives were reassigned, and early Apple Intelligence features drew muted reactions. Some analysts believe that partnering with Google allows Apple to close the capability gap without restarting the clock.

Still, the move has drawn criticism. Tesla CEO Elon Musk warned that the partnership concentrates too much power in Google’s hands, given its control over Android and Chrome.

“This seems like an unreasonable concentration of power for Google, given that the[y] also have Android and Chrome,” Musk said.

His comments echo a broader concern among policymakers and competitors that Google is embedding itself across every major digital gateway: search, mobile operating systems, browsers, and now AI assistants inside Apple devices.

Those concerns are amplified by history. Apple and Google already share a controversial arrangement that makes Google the default search engine on Apple devices, a deal that reportedly generates tens of billions of dollars annually for Apple while reinforcing Google’s dominance in search. Adding Gemini to Siri deepens that interdependence at a time when antitrust scrutiny of Big Tech is intensifying in the United States and abroad.

Privacy, a core part of Apple’s brand, is another pressure point. Both companies sought to pre-empt criticism by stressing safeguards. Google said Apple Intelligence will continue to run on Apple devices and through Apple’s Private Cloud Compute, maintaining what it called Apple’s “industry-leading privacy standards.” The reassurance reflects lingering user anxiety about how much personal data AI assistants can access and where that data is processed.

Markets reacted swiftly to the deal. Alphabet’s valuation climbed above $4 trillion following the announcement, extending a rally fueled by growing confidence in Google’s AI push. The stock surged 65% last year as investors warmed to Google’s aggressive investment in frontier models, image and video generation, and massive computing infrastructure. Apple’s shares were steadier, reflecting investor awareness that the company is playing catch-up rather than setting the pace.

Beyond Wall Street, the deal redraws strategic lines. Developers building for Apple platforms will now optimize experiences around Gemini-powered intelligence. Competitors are expected to reassess how much room remains outside ecosystems dominated by a handful of model providers. Additionally, regulators may have to scrutinize whether default AI integrations mirror the anticompetitive dynamics long debated in search and mobile software.

In the short term, consumers may see a more capable Siri, finally able to compete with newer assistants on reasoning, context, and responsiveness. In the longer term, Apple’s bet on Gemini aligns with a growing industry trend that has seen major companies team up to secure a place in the AI arms race.