ZachXBT Shares Leaked Data Exposing a North Korean-linked IT Worker Network 

ZachXBT recently shared leaked data exposing a North Korean-linked IT worker network that generates roughly $1 million per month, about $3.5 million since late November 2025, by using fake identities to work remote developer jobs, often in crypto projects.

The on-chain investigator posted about documents obtained after an unnamed hacker compromised one of the group’s devices. The leaks reportedly include internal payment records showing a team of about 140 members, with one individual (“Jerry”) tied to the operation. Funds are paid in crypto and converted to fiat, often routed through services like Payoneer, using forged documents and stolen or fake identities to secure remote IT/development roles.

This fits a broader, well-documented pattern of North Korean (DPRK) actors—sometimes linked to state-sponsored groups like Lazarus—sending IT workers overseas or having them operate remotely under false pretenses. They earn legitimate salaries from tech and crypto companies while potentially gathering intelligence, inserting backdoors, or committing direct thefts.

Previous ZachXBT investigations have highlighted similar clusters infiltrating dozens of projects, with one earlier example noting $300K–$500K monthly flows to a single entity via fake identities. DPRK IT workers have reportedly embedded themselves in DeFi and crypto firms for extended periods; the recent $270–285M Drift Protocol exploit involved a 6+ month social engineering operation with in-person meetings and a large deposit used as a Trojan horse.

North Korea-linked actors have been tied to a significant portion of major crypto heists in recent years, including high-profile incidents totaling billions. However, not every hack is automatically Lazarus; ZachXBT has pushed back against over-attribution in some cases. Crypto enables salary payments and fund movement that bypass traditional sanctions, with flows often routed through mixers, exchanges, or intermediaries before conversion.

ZachXBT’s thread and the underlying leaked data provide an insider view into the network’s internal payment server, which is rare and valuable for understanding its operations. He noted the research performed well initially but was somewhat overshadowed by other posts. This highlights ongoing risks in remote hiring for crypto and DeFi teams: weak KYC and verification, especially for contractors.

Projects should use robust identity checks, code audits, and monitoring for suspicious commits or access patterns. These operations continue to evolve, blending legitimate remote work with espionage and theft to fund the regime while evading sanctions.
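The commit-monitoring practice mentioned above can be sketched in a few lines. This is a minimal, illustrative example: the metadata fields, the allowed email domain, and the "suspicious hours" threshold are all assumptions for demonstration, not a standard detection rule. In practice, the input would be parsed from `git log` output and tuned to a team's real working patterns.

```python
from datetime import datetime, timezone

# Hypothetical commit metadata, e.g. parsed from `git log`.
# The allowlist and hour window below are illustrative assumptions.
ALLOWED_DOMAINS = {"example-dao.org"}   # assumed org email domains
SUSPICIOUS_HOURS = range(1, 6)          # 01:00-05:59 UTC, unusual for this team

def flag_suspicious(commits):
    """Return (sha, reasons) pairs for commits with anomalous metadata."""
    flagged = []
    for c in commits:
        reasons = []
        domain = c["email"].split("@")[-1]
        hour = datetime.fromtimestamp(c["ts"], tz=timezone.utc).hour
        if domain not in ALLOWED_DOMAINS:
            reasons.append("external email")
        if hour in SUSPICIOUS_HOURS:
            reasons.append(f"off-hours commit ({hour:02d}:00 UTC)")
        if reasons:
            flagged.append((c["sha"], reasons))
    return flagged

commits = [
    {"sha": "a1b2c3", "email": "dev@example-dao.org", "ts": 1_700_000_000},
    {"sha": "d4e5f6", "email": "jerry@freemail.example", "ts": 1_700_010_000},
]
print(flag_suspicious(commits))
```

A real deployment would combine signals like this with code review and access logs rather than treating any single flag as proof of infiltration.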

The exposure by ZachXBT of this North Korean IT worker network, which processes roughly $1M monthly and has earned about $3.5M since late 2025, has several layered impacts across security, regulatory, economic, and industry levels. While the leak itself is recent, it builds on years of documented DPRK infiltration tactics.

The leak provides rare internal visibility: payment records, chat logs via IPMsg, fake identity documents, remittance hubs and conversion flows through crypto exchanges, Payoneer, and Chinese banks. This gives investigators, companies, and law enforcement concrete data to identify patterns, freeze addresses and trace funds. Teams that discover they’ve hired linked individuals may terminate contracts quickly.

It may force the network to adapt tactics, such as better VPNs or new identities, but the breach of their internal payment server exposes operational weaknesses like poor security hygiene. ZachXBT noted the research gained less traction than expected compared to other posts, but it still circulates in crypto security circles. DPRK-linked workers have reportedly embedded in 40+ DeFi protocols since DeFi summer, sometimes contributing actual code to well-known projects.

Not all were purely fraudulent—some delivered work—but this creates persistent risks of backdoors, malicious commits, data exfiltration, or future exploits. ZachXBT has previously tied similar networks to 25+ incidents involving code insertion leading to treasury drains or team extortion. The recent high-profile $270–285M Drift Protocol exploit involved 6+ months of social engineering, in-person meetings at conferences, and a Trojan horse deposit—showing how trust-building escalates to massive losses.

Estimates suggest hundreds of such operatives may hold crypto-related jobs, generating hundreds of millions annually for the regime. This funds WMD and missile programs, violating sanctions. The U.S. Treasury has sanctioned individuals and entities facilitating these schemes, targeting fake identity networks that convert salaries to fiat/crypto for the DPRK. The exposure adds fresh evidence for further designations and enforcement. Companies unknowingly paying sanctioned actors risk penalties.

Increased scrutiny on remote hiring, especially in crypto: Projects face pressure to implement stronger KYC, background checks, video interviews, code contribution audits, and sanctions screening. Fintech platforms and exchanges involved in conversions may tighten compliance. Funds support North Korea’s regime, linking cybercrime directly to national security threats. This amplifies calls for better public-private cooperation in tracking these flows.

Recommended defenses include peer code reviews and sandboxing for contractors, monitoring for unusual access or commit patterns, and avoiding over-reliance on remote freelancers without robust vetting. Some projects are highlighted for stronger skepticism toward contributors, serving as models. Others, like Solana-related teams, have faced public calls to address past hires.
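As one hedged illustration of the sanctions-screening step in contractor vetting, a payout address could be checked against a denylist before any payment goes out. The denylist entry below is a placeholder; real screening should pull the official OFAC SDN data, which now includes specific crypto addresses.

```python
# Minimal sketch of sanctions screening for contractor payouts.
# The denylist is a placeholder; production systems should sync it
# from the official OFAC SDN list rather than hard-coding entries.
SANCTIONED_ADDRESSES = {
    "0xdeadbeef00000000000000000000000000000001",  # hypothetical entry
}

def screen_payout(address: str) -> bool:
    """Return True if the address is clear to pay, False if it is listed."""
    return address.lower() not in SANCTIONED_ADDRESSES

print(screen_payout("0xDEADBEEF00000000000000000000000000000001"))  # False
print(screen_payout("0x1111111111111111111111111111111111111111"))  # True
```

Screening is only one layer; identity verification and code review still matter, since fresh addresses will not appear on any list.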

Legitimate developers from certain regions may face extra hurdles, creating hiring friction in an already competitive space. The $1M/month figure here is one slice of a larger ecosystem. Repeated stories of infiltration erode trust in decentralized hiring and remote work models popular in Web3. It underscores why trust-minimized systems still require human vigilance. No single massive drain tied directly to this leak yet, but cumulative losses from DPRK-linked activity contribute to overall sector volatility and insurance costs.

ZachXBT’s work acts as a deterrent and intelligence booster, pushing the industry toward harder defenses while complicating DPRK operations. However, these networks are resilient and evolve—expect continued adaptations like deeper social engineering. Crypto teams should treat hiring as a high-risk vector alongside smart contract audits.

YouTube Deletes Bitcoin.com’s Official YouTube Channel

YouTube has deleted Bitcoin.com’s official YouTube channel, citing violations of its harmful and dangerous content policy. The channel, active since 2015 with around 104K subscribers and over 3,000 videos focused on Bitcoin education, wallet tutorials, news, and related topics, was taken down without prior strikes or detailed warnings.

Bitcoin.com confirmed the deletion on X, noting an immediate appeal rejection. This isn’t an isolated incident: YouTube has a recurring history of flagging or removing crypto-related content under the same vague “harmful or dangerous” and “sale of regulated goods” policies. In 2020, Bitcoin.com’s channel was briefly suspended and then reinstated, with YouTube admitting it was an error.

Similar takedowns have hit channels like Anthony Pompliano’s (temporarily removed after an interview), Cointelegraph’s live streams, and dozens of smaller crypto creators during past crypto purges. Many were later restored after public backlash or appeals. Bitcoin educators and news outlets have faced sudden suspensions, often reversed with apologies for moderation mistakes.

The policy is broadly worded to cover content that could encourage illegal activities, scams, or risky behavior. Critics argue it’s overly broad and inconsistently applied—especially since YouTube continues to host and profit from obvious crypto scam ads with minimal intervention. Automated AI moderation likely plays a big role, sometimes misclassifying legitimate educational material as promotional or misleading.

Bitcoin.com has called out the irony: a decade of straightforward Bitcoin content labeled dangerous while the platform struggles with actual fraud. YouTube, owned by Google, controls vast reach, but its moderation can feel arbitrary, especially around finance, crypto, or controversial topics.

This pushes creators toward alternatives like Rumble, X, or decentralized video platforms. Crypto communities often view these events as soft censorship or bias against permissionless money, though YouTube frames it as protecting users from harm and scams. No official detailed explanation from YouTube has surfaced yet for this specific case. Past reversals suggest it could be reinstated if enough noise is made or if it’s another glitch.

This highlights why many in crypto advocate for censorship-resistant distribution methods. Alternative platforms attract crypto creators frustrated with centralized moderation, demonetization, or sudden takedowns like Bitcoin.com’s recent channel deletion. They fall into two broad groups: mainstream-friendly options with growing crypto communities, and decentralized Web3 platforms that emphasize censorship resistance and often offer crypto-native rewards.

Rumble — A popular video platform with relaxed content policies and a strong emphasis on free speech. Many crypto educators and commentators have migrated here during past YouTube purges. It offers good monetization tools and has become one of the faster-growing alternatives. Crypto content like news, market updates, and tutorials performs well, though the overall audience skews broader than pure crypto.

X (formerly Twitter) — Not a full YouTube replacement, but increasingly used for short-to-medium crypto videos, clips, and live discussions. Many creators post full videos or teasers here and drive traffic to their sites. Live Spaces are great for real-time market commentary.

Other mainstream platforms host educational crypto videos without the same level of aggressive financial-content flagging, but monetization and discoverability are generally weaker than on YouTube. Decentralized platforms, by contrast, are built on blockchain or peer-to-peer tech, making content much harder to remove arbitrarily; many reward creators and viewers with cryptocurrency.

Odysee (powered by the LBRY blockchain) — One of the top recommendations for crypto creators. It’s decentralized, censorship-resistant, and lets creators earn LBRY Credits (LBC) based on views, engagement, and tips. Viewers can also earn tokens. It has a clean, YouTube-like interface and hosts plenty of Bitcoin, altcoin, and blockchain education content. Many creators recommend it as a primary backup or mirror for YouTube videos.

DTube — Fully decentralized video platform built on blockchain (Avalon + IPFS). Ad-free, censorship-resistant, and rewards users with crypto tokens for engagement. It appeals to those wanting pure decentralization without big-tech oversight. Good for tutorials and long-form crypto explainers.

BitChute — Focuses on free speech and uses P2P technology. It has hosted crypto content that faced issues elsewhere, with a community that supports independent voices.

PeerTube — Open-source, federated; decentralized across independent servers and instances. No single company controls it—you can even host your own instance. Highly resistant to takedowns; great for tech-savvy crypto communities, though discoverability depends on the instance.

Theta.tv / Theta Network — Decentralized video streaming that rewards users with THETA tokens for watching and bandwidth sharing. It’s geared toward high-quality streaming and has crypto and Web3 integration built in. Other niche decentralized options include 3Speak (on Hive blockchain), Livepeer/Tape (Ethereum-based streaming), and DLive.

Since no single platform fully matches YouTube’s scale yet, creators can combine video alternatives with other channels: Bitcoin.com’s own site (the company has been backing up content and directing users there), creator websites, newsletters, and podcasts via Spotify, Apple Podcasts, or decentralized options like Fountain for Bitcoin payments.

Many creators now mirror content across Odysee and Rumble immediately and cross-post for maximum reach, using their own websites and X for direct distribution. Decentralized platforms give true ownership via blockchain. Start with Odysee and Rumble searches for specific channels or topics; many popular crypto YouTubers already have mirrors there.

BlockDAG Becomes a Trending Giant by Shaking Off Market Dips as Buyers Rush to Grab 95x ROI Before the Window Shuts

Digital coin markets have remained shaky, with a lot of projects still finding it hard to create steady growth. Price shifts have stayed very sudden, trust has been hard to find, and people trading have become much faster at moving away from stories that do not show a clear near-term win. In this kind of setting, BlockDAG (BDAG) has moved into the center of attention for a very specific reason.

Trading has already begun, and this is the last chance to buy BDAG at $0.0000061, a limited-time offer. This early entry rate is 95 times lower than the live BDAG market cost on several exchanges, creating a massive gap, and a potential 95x ROI, for those who join while the door is still open.

This is exactly why BlockDAG is starting to pull ahead while the rest of the industry still looks uncertain. People buying are not just looking at another low-cost entry. They are looking at an early rate that is 95 times under the live market price, and that makes this current stage much different from the usual late-stage sale for a coin. It is a standout pick for anyone looking for the top crypto to buy now.

The Early Rate Has Made a Very Rare Opening

At $0.0000061, the BlockDAG early rate is 95 times lower than the current BDAG market cost on many exchanges, giving those who join now a potential 95x ROI gap at the start. That single point explains why there is so much hurry around the project right now. In this space, price gaps matter most when they can be measured, are active, and stay open for only a short time. BlockDAG currently fits all three points.

The entry is easy to measure because the data is already right in front of the market. The early rate is easy to see, and the exchange cost is easy to see. The gap is active because the entry window is still open for a few hours. And the gap is time-sensitive because the project has made it clear that this cost period is finishing. That mix is rare enough to pull in focus even when the market is strong. When the market is weak, it stands out even more.
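The arithmetic behind the claim is simple to check. In the sketch below, the market price is derived from the article's stated 95x multiple rather than quoted from any exchange, so treat it as illustrative only, not as a verified live price.

```python
# Illustrative check of the article's 95x figure. The "market price" here
# is back-computed from the stated multiple, not quoted from an exchange.
entry_rate = 0.0000061   # stated presale entry price (USD)
multiple = 95            # stated gap vs. live market cost

implied_market_price = entry_rate * multiple
roi_multiple = implied_market_price / entry_rate

print(f"implied market price: ${implied_market_price:.7f}")
print(f"ROI multiple at entry: {roi_multiple:.0f}x")
```

Note that such a multiple only materializes if a buyer can actually sell at the quoted market price, which depends on liquidity and vesting terms not covered here.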

This is also why this final phase has become the main point of talk rather than just a small detail. Usually, late-stage windows find it hard to stay important because the market thinks the biggest chance has already passed. Here, the opposite is occurring. This final phase has become the main opportunity because it provides a direct entry at a cost that stays far under where BDAG is already being traded by the public. That changes how everyone thinks about timing. Instead of asking if the project might one day show growth, they are comparing a live discount against a live market.

For many people buying, that is a much simpler choice to make. They are not trying to guess if a coin might hit a certain price later. They are looking at a coin that already has a market cost on several exchanges and comparing that to what this final window still lets them pay. In real terms, that gives the current BlockDAG opening more weight than a standard early sale.

Why BlockDAG is Pulling Focus Away from the General Market Drop

The general market setting makes this situation much more obvious. When the digital coin market is moving up fast, many different projects can pull in money at once. When the market is weak or not certain, that usually changes. Focus gets smaller. People look for situations with easier math, better timing, and less room for guessing. That is where BlockDAG has taken the lead.

The project is providing a very simple comparison that the market can see right away. One side is the 0.0000061 early rate. The other side is the current BDAG market cost on many exchanges. The gap between them is 95x ROI. That kind of clarity is vital because it moves past the general worry affecting much of the industry right now.

It also helps to show why people are moving toward BlockDAG even while many other assets stay under pressure. In shaky times, buyers often pick openings where the entry is defined in very clear terms. A project with a long plan and a general story might still interest people, but a project with a live market cost and a much lower remaining entry rate often gets focus faster. That is what is happening here today. This identifies it as the top crypto to buy now.

Final Remarks

In many digital coin stories, big numbers are tied to future ideas that may or may not ever happen. Here, the 95x ROI figure is tied right to the gap between the early rate and the current BDAG cost across many exchanges. That makes it feel much more real.

A person joining through this final window is not just hoping that BDAG one day hits a much higher level from a low start. The coin is already trading at a much higher level than this early rate. That is why this opening is being talked about in terms of quick value growth. The market cost already exists. The early rate stays priced far under it. For those who take a spot before the window ends, that creates an immediate math win at the start.

This is very important in the current market mood, where being unsure has become common. Projects that need a lot of patience and guessing are finding it harder to keep moving. Projects that show a real cost imbalance are finding it easier. BlockDAG is in that second group right now, and that is a major reason it is being seen as a top crypto to buy now rather than just another name waiting for things to get better.

Presale: https://purchase.blockdag.network

Website: https://blockdag.network

Telegram: https://t.me/blockDAGnetworkOfficial

Discord: https://discord.gg/Q7BxghMVyu

Half of Americans Used AI in Past Week as One in Five Workers Say It Has Taken Over Parts of Their Job, Survey Finds

Artificial intelligence is no longer an emerging workplace experiment in the United States. It is already reshaping how millions of Americans work, according to a new survey that suggests the technology’s impact on jobs is moving from theory to lived reality.

A poll released Thursday by nonprofit research group Epoch AI, conducted in partnership with Ipsos, found that half of American adults used AI in the past week, either for personal or professional purposes. More notably, 20% of full-time workers said AI has already taken over parts of their job, a finding that adds fresh urgency to concerns about how rapidly automation is altering the labor market.

The survey, which sampled 2,000 American adults between March 3 and 5, also showed that AI is not solely replacing work. Fifteen percent of full-time employees said they had begun carrying out new tasks that would not have existed without AI tools, suggesting that the technology is simultaneously eliminating some functions while creating others.

That tension between substitution and augmentation is becoming the defining feature of the AI economy.

Caroline Falkman Olsson, one of the lead researchers behind the study at Epoch AI, said the findings align with what many economists and workplace analysts have increasingly suspected.

“We do see augmentation and automation effects,” Olsson told NBC News. “But we need to figure out how people’s actual workplaces and work tasks are changing.”

While the topline numbers point to disruption, the deeper story lies in which tasks are being automated and whether the newly created tasks require higher skills, greater oversight, or simply shift workloads in less visible ways.

That question is becoming more urgent as financial institutions begin to quantify the impact. New findings from Goldman Sachs released this week suggest AI is already contributing to a net loss of roughly 16,000 jobs per month in the United States, after accounting for positions displaced through automation and jobs created through productivity gains.

According to Goldman’s breakdown, AI-related substitution is eliminating around 25,000 jobs each month, while augmentation is creating or preserving about 9,000, leaving a negative monthly balance.

That broader economic backdrop gives the Epoch AI findings added weight. Nicholas Miailhe, an AI policy expert at the Global Partnership on Artificial Intelligence, said the survey should serve as an immediate warning for both governments and employers.

“When 1 in 5 workers say AI is already replacing parts of their job, we can start talking about labor market restructuring happening in real time,” he said.

“The fact that replacement seems to be outpacing augmentation should draw our attention: the policy window to shape how AI transforms work is probably closing faster than most governments realize.”

His remarks cut to the heart of the policy debate. For much of the past two years, the public conversation around AI and employment has centered on future risk. This survey suggests that, for a meaningful share of workers, the disruption is already present.

The results also reveal how deeply AI tools have entered everyday routines. Among respondents who used AI in the previous week, nearly half said they used it between two and five days, indicating that for many Americans, AI is becoming a recurring utility rather than an occasional novelty.

The survey also found that 62.5% of users completed only one or two quick tasks on their heaviest day of AI use, suggesting that most interactions remain lightweight, such as drafting emails, summarizing information, or seeking recommendations. By contrast, only about 6% of respondents qualified as heavy users, pointing to a smaller cohort for whom AI may already be integrated into core workflows.

This unevenness mirrors broader labor-market trends. Recent workforce polling shows that while AI adoption is rising rapidly, sustained professional use remains concentrated in white-collar, technology-heavy, and administrative roles.

Another revealing aspect of the survey is how workers are accessing these tools. Roughly half of Americans using AI for work said they relied on personal subscriptions or free versions rather than employer-provided services.

That finding suggests companies may be underestimating the extent of AI adoption inside their own organizations, as workers independently integrate tools like ChatGPT, Google Gemini, and Microsoft Copilot into their daily tasks. It also raises questions around data governance, confidentiality, and compliance, especially in sectors handling sensitive information.

The survey found that ChatGPT was the most widely used AI service, cited by 31% of respondents, followed by Google Gemini at 21% and Microsoft Copilot at 10.5%. In terms of use cases, AI’s strongest foothold remains in information processing and communication tasks.

Among users surveyed, 80% said they used AI to look up information or recommendations, 59% for writing or editing text, and 53% for brainstorming ideas. These are precisely the categories long identified as highly susceptible to generative AI disruption: research assistance, first-draft writing, ideation, and routine knowledge work.

Perhaps the most forward-looking part of the poll concerns AI agents, systems capable of taking actions autonomously rather than simply generating responses. While still at an early stage, the findings suggest adoption is already underway.

Eight percent of AI users said they had used an AI agent in the past week, compared with 49% who used AI tools primarily for web search. Renan Araujo, director of programs at the Institute for AI Policy and Strategy, said the pace of adoption is striking.

“One in 12 Americans has used an autonomous AI agent, a software that not just answers questions but takes actions on your behalf,” he said.

“This capability was not available two years ago, and it’s striking to see its usage grow so quickly.”

That may prove to be one of the most consequential findings in the report. Traditional generative AI tools assist with tasks. Agents can increasingly perform tasks, from scheduling meetings to drafting and sending communications, conducting research, or managing repetitive digital workflows.

If adoption continues at this pace, the next phase of AI disruption may move beyond assistance into direct task execution.

The survey’s results arrive as economists intensify warnings that AI’s first casualties may be entry-level and junior knowledge-work roles, positions historically used as gateways into professions such as finance, media, law, and administration.

However, that creates a structural risk: if AI absorbs the junior tasks through which workers traditionally gain experience, the long-term pipeline of skilled professionals could narrow.

The headline figure that half of Americans used AI in a single week may capture public attention. But the more consequential number may be the one in five workers who say parts of their job are already gone. That suggests the labor-market conversation is no longer about whether AI will transform work. It is now about how quickly institutions can adapt before the transformation outpaces policy, workforce retraining, and corporate governance.

Altman Admits ChatGPT Still Can’t Keep Time, Says It May Take Another Year to Fix

OpenAI CEO Sam Altman has admitted that ChatGPT still cannot reliably keep time, reopening a deeper debate over what today’s AI systems actually understand and whether the industry’s most powerful tools are being marketed ahead of their real-world reliability.

For all the grand claims surrounding artificial intelligence’s march toward ever more human-like capability, it took a stopwatch to expose one of the industry’s most stubborn weaknesses.

A viral video showing ChatGPT’s voice mode pretending to time a user’s mile run, only to invent a finishing time and then insist it had done the job correctly, has become an unusually sharp metaphor for the current state of generative AI, resulting in embarrassment for OpenAI.

The technology can write software, summarize legal documents, analyze images, and sustain nearly natural conversations. Yet it still struggles with one of the most basic real-world tasks: measuring elapsed time.

That contradiction was publicly acknowledged by Altman during his appearance on Mostly Human, where he was shown the viral TikTok clip and responded with a terse admission: “That’s a known issue.” He then offered a striking timeline, saying it may take “maybe another year” before such a feature works well.

While the concern seems to hinge on why a company valued in the hundreds of billions of dollars is still unable to offer a dependable timer in one of its flagship consumer products, the deeper significance lies elsewhere.

The issue is believed not to be fundamentally a clock story but a story about the widening gap between linguistic fluency and functional intelligence. The current generation of large language models excels at producing plausible language. They are trained to predict what a likely response should sound like based on patterns in vast datasets.

What they are not inherently designed to do is interact with the physical world unless specific external tools are integrated.

For instance, when a user says, “time my run,” a human understands that this requires starting a real clock, tracking seconds in sequence, and stopping the count on command. But a language model, in the absence of tool access, is instead predicting what an answer to that request should look like.

In other words, it is simulating competence. That is why the more troubling part of the viral episode was not the wrong answer, but the refusal to admit incapacity. Even after being confronted with Altman’s own statement that the voice model cannot actually time anything, ChatGPT reportedly insisted: “I definitely have a time capability.” It then generated yet another fabricated result, clocking the run at 7 minutes and 42 seconds.

Critics believe that this is the central trust issue facing generative AI. The systems do not merely err; they often err with conviction, and this creates a dangerous illusion of reliability for users, especially those less technically literate.

The timer example is relatively benign, but in other domains the implications are more serious. If a model confidently invents a running time, it may also confidently invent a legal citation, a medical recommendation, or a financial calculation.

That is why this seemingly trivial glitch has resonated so widely. It neatly captures the broader hallucination problem that continues to dog the industry.

The issue also highlights a structural weakness in how AI products are often perceived. Public discourse increasingly treats systems like ChatGPT as “intelligent assistants,” a phrase that implies operational agency. Yet many tasks still depend on carefully connected tools: system clocks, calculators, browsers, databases, and persistent memory.

Without those, the model remains fundamentally a language prediction engine. This is where Altman’s comments are particularly revealing. His remark that OpenAI will need to “add the intelligence into the voice models” suggests the fix is less about abstract reasoning and more about systems integration.

Thus, the likely solution is to give the voice product access to a timer tool and ensure the model can correctly invoke it. But the broader challenge is philosophical as much as technical. Experts point out that the system must know when not to answer.
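What that systems integration could look like can be sketched in miniature. The tool registry, intent names, and refusal message below are all hypothetical, and none of this reflects OpenAI's actual implementation; the point is only that elapsed time comes from a real clock the runtime controls, and that unsupported requests get an explicit refusal instead of a plausible-sounding number.

```python
import time

TOOLS = {}

class Stopwatch:
    """A real elapsed-time measurement, which the model itself cannot perform."""
    def __init__(self):
        self._start = None
    def start(self):
        self._start = time.monotonic()
    def stop(self):
        if self._start is None:
            raise RuntimeError("stopwatch was never started")
        return time.monotonic() - self._start

TOOLS["stopwatch"] = Stopwatch()

def handle_request(intent: str) -> str:
    # A grounded runtime answers from tool state, or admits incapacity,
    # rather than letting the model generate a fabricated result.
    if intent == "start_timer":
        TOOLS["stopwatch"].start()
        return "Timer started."
    if intent == "stop_timer":
        elapsed = TOOLS["stopwatch"].stop()
        return f"Elapsed: {elapsed:.1f} seconds."
    return "I can't do that: no tool is available for this request."

print(handle_request("start_timer"))
time.sleep(0.2)
print(handle_request("stop_timer"))
print(handle_request("time travel"))
```

The hard remaining problem, as the experts quoted above note, is the mapping from free-form speech to the right intent, and the discipline to fall through to the refusal branch when no tool fits.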

Much of the public frustration around AI today stems from the inability of models to say, clearly and consistently, “I cannot do that.” Instead, they often generate a plausible fiction. This has become one of the defining limitations of the current AI wave.

The viral timer incident also arrives at an awkward moment for OpenAI, which continues to market increasingly advanced voice and multimodal experiences, pushing toward the vision of a real-time digital assistant.

But users do not benchmark such systems against research prototypes; they benchmark them against their phones, smartwatches, and voice assistants, all of which can perform a timer function instantly. Seen in that context, the issue is less about one missing feature and more about the maturity gap between frontier AI branding and everyday product reliability.