
Epic Games CEO Tim Sweeney Defends X and Grok as Political Pressure Mounts Over Nudity


As governments, regulators, and lawmakers close in on X over the abuses linked to its AI chatbot Grok, one technology chief executive has emerged as a rare and conspicuous defender of the platform.

Epic Games CEO Tim Sweeney now appears to be the only major industry leader to publicly back X and its owner, Elon Musk, even as condemnation of Grok intensifies across the globe.

X has been under sustained fire after Grok was used to generate non-consensual sexual imagery of women and sexual images involving children, a finding confirmed by the Internet Watch Foundation. The revelations triggered an unusually fast-moving political response. In the UK, ministers are openly discussing whether X could be blocked under the Online Safety Act. In the US, senators are pressuring Apple and Google to remove the app from their stores, a move that would severely restrict its reach.

While most technology executives and AI companies have remained silent or distanced themselves from the controversy, Sweeney chose a different path. In a series of public comments, he framed the backlash against Grok not as a necessary safety intervention but as a politically motivated attempt to weaken a rival platform.

“All major AIs have documented instances of going off the rails; all major AI companies make their best efforts to combat this; none are perfect,” Sweeney said.

He went further, accusing politicians of using app store gatekeepers to selectively target companies they oppose, calling it “basic crony capitalism.”

That position has placed Sweeney sharply out of step with the broader industry mood. Governments have focused on harm prevention, platform accountability, and the need for stronger safeguards in generative AI systems. By contrast, Sweeney’s argument centers on structural power and precedent. He has warned that compelling Apple and Google to remove X would shift enormous regulatory authority to a handful of private companies, effectively allowing them to decide which platforms are allowed to exist.

His defense also goes beyond X as a company. Sweeney has repeatedly emphasized that his concern is about open platforms and the consistent application of law, rather than any endorsement of illegal content.

“I defend open platforms, free speech, and consistent application of the rule of law,” he said, adding that he does not defend the misuse of AI tools but opposes collective punishment that reshapes digital freedoms.

Elon Musk has echoed similar arguments, dismissing the outrage over Grok-generated images as an attempt to justify censorship. Musk has argued that generative abuse is not a new phenomenon; only the tools have changed.

Other AI firms facing safety controversies have typically responded with conciliatory language, promises of tighter controls, or quiet cooperation with regulators. Sweeney’s approach is confrontational and ideological, rooted in long-standing battles Epic Games has fought against platform gatekeepers over app store dominance and content control.

The Grok episode has therefore become a proxy fight for larger issues Sweeney has spent years contesting: who controls access to digital markets, how much power governments should wield over online speech, and whether app stores should function as neutral distributors or moral arbiters.

In the UK, those questions are becoming urgent. Technology secretary Liz Kendall has warned that X must act quickly to address the imagery generated through Grok. Ofcom has launched an expedited assessment, with ministers signaling they would support blocking access if regulators recommend it. X has responded by locking Grok’s image generation behind a paywall and pledging to remove illegal content and suspend offending accounts, steps that critics say fail to address the underlying capability of the system.

For Sweeney, that distinction matters less than the precedent being set. From his perspective, allowing political pressure to dictate platform access risks normalizing a model where governments bypass courts and due process by leaning on private intermediaries.

Whether that argument gains traction remains to be seen. Public anger over AI-generated sexual imagery is intense, and regulatory momentum is building. Yet Sweeney’s intervention has ensured that the debate is no longer only about Grok’s failures, but also about the future balance of power between governments, platforms, and the gatekeepers that sit between them.

From Stealth to Scale: Terra Industries Secures $11.7M to Protect Africa’s Critical Infrastructure


Terra Industries, Africa’s first defense prime focused on autonomous security systems, has officially emerged from stealth with $11.7 million in new funding, a round led by 8VC, the venture firm founded by Palantir co-founder Joe Lonsdale.

The round also saw participation from Valor Equity Partners, Lux Capital, SV Angel, Silent Ventures, Leblon Capital, and angel investors including Micky Malka.

The company, which builds autonomous defense systems to protect Africa’s critical infrastructure such as mines, refineries, power plants, and pipelines, announced the milestone in a LinkedIn post.

According to Terra, the new funding will accelerate its mission to give Africa a technological edge in resource protection and counterterrorism.

“Our renewed mission is to give Africa the technological edge needed for resource protection and counterterrorism,” the company stated. “Today, we’re building Africa’s first defense prime with $11 billion in assets under protection. Over the next few months, we will ramp up defense production across Africa and scale our surveillance software.”

Co-founder and CEO Nathan Nwachuku said Terra is positioning itself to meet the growing security demands across the energy, mining, and national infrastructure sectors. Meanwhile, co-founder and CTO Maxwell Maduka emphasized the company’s commitment to building African-owned and African-built technology.

“This is African technology, built by African engineers, for African infrastructure,” Maduka said. “We are creating skilled jobs, building advanced manufacturing capacity, and ensuring the intellectual property behind Africa’s security stays on the continent.”

Founded in 2024, Terra was launched to fill a critical gap in global defense innovation. While companies like Anduril and Helsing focus on Western defense needs, Terra aims to build comparable capabilities tailored specifically to Africa’s unique security challenges.

The company estimates that Africa loses over $300 billion annually due to infrastructure damage and security threats across air, land, and sea. At the core of Terra’s ecosystem is ArtemisOS, an AI-powered open operating system that brings real-time data intelligence and autonomy to infrastructure security.

Artemis Cloud enables real-time storage and analysis of surveillance data, while Artemis Autonomy provides advanced command-and-control capabilities.

Terra’s growing portfolio of autonomous systems includes:

Archer VTOL: A long-range vertical takeoff and landing surveillance drone designed for monitoring critical assets such as mines and oil pipelines.

Iroko UAV: A modular, mass-producible quadcopter built for first-response missions and data collection.

Duma UGV: A flexible autonomous ground vehicle designed for surveillance and cargo operations.

Kallon Sentry Tower: A solar-powered autonomous security tower capable of detecting and tracking threats up to 3km away, aimed at protecting borders, military bases, and energy infrastructure.

In January 2026, the defense tech startup commissioned Africa’s largest drone manufacturing facility in Abuja, Nigeria, a major step toward building a domestic industrial base for advanced autonomous systems. The facility was designed, constructed, and brought online in just 11 months, underscoring Terra’s rapid execution capability.

The factory currently supports the production of up to 20 Iroko drones per day, with approximately 80% of components sourced and manufactured locally. This move aligns with Terra’s broader goal of developing local talent, strengthening supply chains, and reducing reliance on foreign manufacturing.

Outlook

With fresh capital, a rapidly expanding product line, and a growing manufacturing footprint, Terra Industries is positioning itself as a foundational player in Africa’s defense and security ecosystem. As infrastructure investments across the continent increase, so will the need for intelligent, scalable, and autonomous security solutions.

In a world where geopolitical and infrastructure risks are rising, Terra’s bet is clear: Africa should not depend on imported solutions for its security; it should build its own.

How to Fix a Blank Black Screen on Kali Linux (VirtualBox)


Introduction
Running Kali Linux in VirtualBox can sometimes result in a black screen after boot. This issue is common among new Linux users and is usually caused by virtualization conflicts, BIOS/UEFI settings, or display initialization problems. This guide will help you quickly resolve the issue and get your virtual machine running.

Prerequisites
Before following this guide, make sure you have the following:

    VirtualBox is installed on your Windows host machine.
    Kali Linux ISO or a preconfigured VM image is ready.
    Basic familiarity with VirtualBox and Linux commands (login, terminal commands).
    System requirements: at least 2–4 GB of RAM and 128 MB of video memory allocated to the VM.
    VT-x / AMD-V is enabled in your system BIOS for virtualization support.

Note: Ensuring these prerequisites are met will help avoid errors while applying the solutions in this guide.
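
If you prefer to verify these settings from the host before booting the VM, VirtualBox ships with the VBoxManage command-line tool. The snippet below is a minimal sketch, not part of the original fix: the VM name "Kali-Linux" is an assumption, so replace it with the name shown by the list command, and run the commands from the VirtualBox installation folder if VBoxManage is not on your PATH.

    :: List all registered VMs and their names ("Kali-Linux" below is an assumed name)
    VBoxManage list vms

    :: Show the VM's configuration; look for the memory size, VRAM size, and graphics controller entries
    VBoxManage showvminfo "Kali-Linux"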

Outline:

  1. Overview
  2. Common Causes
  3. Solution 1: Disable Hyper-V
  4. Solution 2: Start the Graphical Interface Manually
  5. Additional Recommendations
  6. Conclusion
  7. Glossary

Overview
Running Kali Linux in VirtualBox can sometimes result in a blank or black screen after boot, often displaying messages like “data leak mitigation” before freezing. This issue is common among new Linux users and can be frustrating, but it is usually caused by virtualization conflicts, BIOS/UEFI settings, or display initialization problems.

This guide explains the most common causes of the black screen issue and provides two proven solutions to get Kali Linux running again.

Common Causes of the Black Screen Issue
Before jumping into fixes, it’s important to understand what may be causing the problem:

Virtualization conflict between VirtualBox and Windows Hyper-V

BIOS/UEFI virtualization misconfiguration

Display manager or X-server not starting properly

Low system resources (RAM, disk space, or graphics memory)

In most cases, the issue is not permanent and does not mean your Kali installation is corrupted.


Solution 1: Disable Hyper-V from Windows (Recommended)
Windows Hyper-V can conflict with VirtualBox and prevent Kali Linux from booting correctly, resulting in a black screen.

Steps

  1. Open the Start Menu and search for Command Prompt

  2. Right-click it and select Run as Administrator

  3. Enter the following command:
    bcdedit /set hypervisorlaunchtype off

  4. Press Enter

  5. Restart your computer

After rebooting, open VirtualBox, start your Kali Linux VM, and check if the issue is resolved.
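
If you want to confirm the change, or restore Hyper-V later (it is needed by features such as WSL 2 and Docker Desktop), bcdedit can show and reset the setting. This is a minimal sketch; both commands must be run from an elevated Command Prompt.

    :: Display the current boot configuration; the hypervisorlaunchtype entry should now read "Off"
    bcdedit /enum {current}

    :: To re-enable Hyper-V later, restore the default and reboot
    bcdedit /set hypervisorlaunchtype auto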

If the black screen persists, proceed to the second solution.

Solution 2: Start the Graphical Interface Manually
Sometimes Kali Linux boots successfully but fails to start the graphical desktop environment. In this case, you can manually launch the X server.

Steps

  1. Start your Kali Linux virtual machine in VirtualBox

  2. When the black screen appears, press:
    Ctrl + Alt + F1

  3. Log in using your Kali username and password

  4. Once logged in, run the following command:
    sudo startx


This command manually starts the graphical desktop.

If the desktop loads successfully, restart Kali Linux to confirm that it now boots into the graphical environment on its own.
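
If startx brings up the desktop but Kali still drops to a black screen on every boot, the display manager service may be disabled or failing. The commands below are a rough sketch that assumes Kali’s default LightDM display manager; if your installation uses a different display manager (for example GDM or SDDM), substitute its service name.

    # Check the display manager service (LightDM is assumed here; yours may differ)
    systemctl status lightdm

    # Ensure the system boots into the graphical target and starts the display manager automatically
    sudo systemctl set-default graphical.target
    sudo systemctl enable lightdm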

Additional Recommendations

    Ensure VT-x / AMD-V is enabled in your system BIOS.
    Allocate sufficient resources to Kali Linux: at least 2–4 GB of RAM and 128 MB of video memory.
    Use VMSVGA as the graphics controller in VirtualBox (see the sketch after this list).
    Keep VirtualBox and the Extension Pack versions matched.
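
These VM settings can also be applied from the host with VBoxManage while the virtual machine is powered off. The snippet below is a sketch rather than an official recipe; it assumes a VM named "Kali-Linux", so substitute your own VM name.

    :: Assumes the VM is named "Kali-Linux" and is currently powered off
    VBoxManage modifyvm "Kali-Linux" --memory 4096 --vram 128 --graphicscontroller vmsvga

    :: 3D acceleration sometimes causes display glitches in guests, so turning it off can help
    VBoxManage modifyvm "Kali-Linux" --accelerate3d off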

Conclusion
A blank black screen in Kali Linux running on VirtualBox is a common issue, especially for beginners. In most cases, it is caused by virtualization conflicts or display initialization failures, not by a broken installation.

By disabling Hyper-V and manually starting the graphical interface, you can resolve the issue quickly and get back to learning and practicing cybersecurity.

Glossary
Terms & Definitions:

    VirtualBox: A free virtualization software that allows you to run virtual machines on your computer.
    Kali Linux: A Linux distribution designed for penetration testing and cybersecurity tasks.
    VM (Virtual Machine): A software-based emulation of a computer that runs an operating system in an isolated environment.
    Hyper-V: A Windows feature that enables virtualization and can conflict with VirtualBox.
    VT-x / AMD-V: CPU virtualization technologies that allow virtual machines to run efficiently.
    X server/Display manager: Software that handles the graphical desktop environment in Linux.
    ISO file: A disk image file containing the operating system installation.
    Black screen issue: When a virtual machine boots, but the display does not load properly, showing a blank or black screen.

MicroStrategy Expands Bitcoin Holdings With $1.25 Billion Mega Purchase, Now Controls Over 3% of Total Supply


MicroStrategy has once again doubled down on its bold Bitcoin-first strategy, announcing a massive $1.25 billion purchase that added 13,627 BTC to its growing treasury.

The latest acquisition pushes the company’s total holdings to 687,410 Bitcoin, more than 3% of the cryptocurrency’s total supply, cementing its position as the world’s largest corporate holder of the digital asset.

Led by executive chairman Michael Saylor, the firm’s aggressive accumulation reflects a long-term conviction that Bitcoin is a superior store of value in an era of rising inflation, currency debasement, and growing distrust in traditional financial systems.

This purchase continues the aggressive Bitcoin treasury strategy Saylor has pursued since 2020, in which the firm has raised debt and equity to amass its position as the largest corporate holder.

At current prices near $91,500, the unrealized gains on its holdings exceed $30 billion, underscoring Saylor’s conviction in Bitcoin as a superior store of value amid fiat inflation concerns.

In a post that ignited heated debate across X on January 11, 2026, Saylor declared what he sees as the top-performing assets of the current decade: Digital Intelligence (NVIDIA, $NVDA), Digital Credit (Strategy, $MSTR), and Digital Capital (Bitcoin, $BTC).

He accompanied his statement with a bar chart highlighting annualized returns since August 2020, the precise moment MicroStrategy launched its pioneering Bitcoin treasury strategy.

According to the chart:

– NVDA led with ~68% annualized returns, fueled by the explosive growth of AI computing demand.

– MSTR followed closely at ~60%, benefiting from leveraged Bitcoin exposure through debt, equity raises, and aggressive accumulation.

– BTC itself delivered a strong ~45% annualized return, outpacing most traditional assets like Tesla (~33%), the broader market, and especially bonds (negative returns in that period).

Saylor framed these three as foundational pillars of a new financial era: AI-driven processing power, Bitcoin-leveraged corporate financing, and Bitcoin as superior “digital capital” that preserves value better than fiat in an inflationary world.

Notably, he has repeatedly hinted that accumulation will not slow, especially during periods of price weakness, reinforcing his belief that volatility is a feature, not a flaw, of Bitcoin’s monetization phase.

Since August 2020, MicroStrategy (now rebranded as Strategy) has transformed from a business intelligence software firm into the world’s largest corporate Bitcoin holder. The company has repeatedly raised capital via convertible notes, at-the-market equity offerings, and other instruments to purchase more Bitcoin, creating what Saylor calls “Bitcoin yield” for shareholders.

With unrealized gains now exceeding $30 billion, MicroStrategy’s Bitcoin bet is no longer just symbolic; it is reshaping how corporations think about treasury management in the digital age.

Outlook

Looking ahead, MicroStrategy’s Bitcoin-centric strategy is likely to remain both highly influential and highly polarizing. If Bitcoin continues its long-term appreciation trajectory, the company could further entrench itself as a hybrid entity, part operating business and part Bitcoin investment vehicle, potentially inspiring more corporations to rethink traditional treasury models.

However, the approach is not without risks. MicroStrategy’s heavy reliance on debt and equity raises exposes it to macroeconomic shifts, interest rate pressures, regulatory uncertainty, and prolonged crypto bear markets. A sustained downturn in Bitcoin prices could strain its balance sheet and test investor patience. Yet for Saylor, this risk is calculated—he views Bitcoin as a generational asset, not a cyclical trade.

On the flip side, if Bitcoin fulfills its narrative as global digital capital, Strategy’s Bitcoin bet will prove to be a successful corporate treasury playbook, one where balance sheets are built not on cash, but on decentralized monetary assets.

Google Scales Back AI Overviews on Health Searches After Questions Over Accuracy and Clinical Risk


Google appears to have quietly rolled back its AI-generated “Overviews” for certain health-related search queries following scrutiny over misleading medical information.

The move, which highlights growing tensions between rapid AI deployment and patient safety concerns, follows an investigation by the Guardian, which found that Google’s AI Overviews were producing oversimplified and potentially misleading responses to sensitive medical questions. In one example, users searching for “what is the normal range for liver blood tests” were shown numerical reference ranges that failed to account for key variables such as age, sex, ethnicity, nationality, or underlying medical conditions.

Medical experts warned that such omissions could give users a false sense of reassurance, particularly in cases where liver enzyme levels may fall within one population’s “normal” range but signal disease risk in another. Liver blood tests are commonly used to detect conditions such as hepatitis, fatty liver disease, and cirrhosis, where delayed diagnosis can have serious consequences.

After the Guardian published its findings, the outlet reported that AI Overviews no longer appeared for searches including “what is the normal range for liver blood tests” and “what is the normal range for liver function tests.” However, the removal appeared uneven. Variations on those queries, such as “lft reference range” or “lft test reference range,” were still capable of triggering AI-generated summaries, suggesting that Google’s safeguards were applied selectively rather than comprehensively.

Subsequent checks later in the day indicated further tightening. Several similar health-related queries no longer produced AI Overviews at all, though Google continued to prompt users to submit the same questions through its separate “AI Mode,” which remains available across Search. In multiple instances, the Guardian’s investigation itself surfaced as a top-ranked result, replacing the AI-generated summary with traditional reporting.

Google declined to comment on the specific removals. A spokesperson told the Guardian that the company does not “comment on individual removals within Search,” emphasizing instead that it works to “make broad improvements” to its systems. The spokesperson added that Google had asked an internal team of clinicians to review the queries cited in the investigation and concluded that “in many instances, the information was not inaccurate and was also supported by high quality websites.”

That response points to a central issue facing AI-generated health summaries: even when underlying sources are credible, the act of compressing complex medical guidance into a short, generalized overview can strip away essential context. Unlike traditional search results, which present multiple sources and viewpoints, AI Overviews synthesize information into a single authoritative-sounding answer placed prominently at the top of the page.

Google has spent the past year expanding AI Overviews as part of a broader effort to reimagine Search around generative AI. In 2024, the company unveiled health-focused AI models and pledged improvements aimed at making medical searches more reliable, stressing that its tools are not intended to replace professional advice. Still, critics argue that the format itself encourages users to treat AI summaries as definitive guidance.

Patient advocacy groups say the episode exposes a deeper structural problem. Vanessa Hebditch, director of communications and policy at the British Liver Trust, welcomed the apparent removal of AI Overviews for liver test queries but said the change does not address the underlying risk.

“This is excellent news,” Hebditch told the Guardian. “Our bigger concern with all this is that it is nit-picking a single search result and Google can just shut off the AI Overviews for that but it’s not tackling the bigger issue of AI Overviews for health.”

Her comments echo broader concerns among clinicians and regulators that platform-level fixes triggered by media attention are insufficient. Health information is one of the most heavily regulated areas of communication, and mistakes can carry real-world consequences, yet generative AI tools are often deployed with fewer safeguards than traditional medical publications.

The episode comes as governments worldwide intensify scrutiny of AI systems used in sensitive domains. In Europe, regulators have signaled that health-related AI applications will face higher compliance standards under the EU’s AI framework, while in the United Kingdom, policymakers have stressed that platforms must demonstrate a duty of care when distributing medical information.

For Google, the partial withdrawal of AI Overviews appears to reflect a balancing act rather than a retreat. The company continues to promote AI-powered search experiences while making quiet adjustments to avoid reputational and regulatory fallout. It is not clear whether those adjustments will lead the tech giant to a more systemic rethink of how AI is used for health searches.