Elon Musk’s artificial intelligence startup xAI is accelerating its infrastructure push, acquiring a third building to expand its data center footprint as it seeks to lift training capacity to nearly 2 gigawatts of compute power.
The move underscores how the race to build ever more powerful AI models is increasingly being decided not just by algorithms, but by access to electricity, land, and specialized chips.
Musk disclosed the purchase on Tuesday in a post on X, saying xAI had bought a third facility called “MACROHARDRR,” without revealing its precise location. The name appears to be a deliberate play on Microsoft, a key backer of OpenAI. Earlier, The Information reported, citing property records and a person familiar with the project, that the building for the third supersized data center is planned outside Memphis, Tennessee, where xAI is already operating its flagship supercomputer cluster, Colossus.
Colossus, based in Memphis, has been billed by xAI as the largest AI supercomputer in the world. The system is central to Musk’s ambition of turning xAI into a credible challenger to OpenAI’s ChatGPT and Anthropic’s Claude. According to people familiar with the plans, xAI intends to expand Colossus to house at least one million graphics processing units, a scale that would put it among the most compute-dense AI installations globally.
The newly acquired warehouse is expected to begin conversion into a data center in 2026, The Information reported. It would complement both the existing Colossus cluster and a planned Colossus 2 facility. Crucially, both sites are located near a natural gas power plant that xAI is building in the area, alongside other power sources, highlighting how energy access has become one of the most significant bottlenecks in advanced AI development.
The near-2 GW compute target is striking. For context, data center campuses operating at that level rival the power consumption of small cities. As AI models grow larger and more complex, training runs can take weeks and require an enormous, continuous energy supply. This has pushed leading AI companies to secure long-term power arrangements, invest directly in generation assets, and, in some cases, rethink where data centers are located.
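A rough back-of-envelope calculation shows how these figures hang together. The per-GPU power draw and run length below are illustrative assumptions, not figures from xAI; they are only meant to show the scale implied by a 2 GW target and a million-GPU cluster.

```python
# Sanity check of the reported scale, under stated assumptions:
# ~1.4 kW all-in draw per GPU (chip + cooling + networking overhead)
# and a hypothetical 30-day training run at full load.

target_power_w = 2e9        # ~2 GW compute target
per_gpu_w = 1_400           # assumed all-in watts per GPU (illustrative)

gpus_supported = target_power_w / per_gpu_w
print(f"GPUs supportable at 2 GW: ~{gpus_supported / 1e6:.1f} million")

hours = 30 * 24
energy_twh = target_power_w * hours / 1e12   # watt-hours -> terawatt-hours
print(f"Energy for a 30-day full-load run: ~{energy_twh:.2f} TWh")
```

Under these assumptions, 2 GW supports on the order of 1.4 million GPUs, broadly consistent with the reported one-million-GPU ambition for Colossus, and a month at full load consumes roughly 1.4 TWh, comparable to the annual electricity use of a small city.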
Vertical integration appears to be a strategic choice for xAI. Unlike OpenAI and Anthropic, which rely heavily on cloud partners such as Microsoft and Amazon, Musk is pursuing a more self-contained model, combining proprietary data centers, power generation, and in-house model development. Supporters say this could give xAI greater control over costs and scaling, while reducing reliance on third-party infrastructure that may be constrained by competing demands.
The expansion also reflects the broader surge in capital spending across the AI sector. Tech companies have been pouring hundreds of billions of dollars into data centers, chips, and networking equipment to meet surging global demand for AI services. Nvidia’s dominance in AI chips has made access to GPUs a strategic priority, while power availability has emerged as a limiting factor even for the largest cloud providers.
However, the rapid buildout has not gone unchallenged. Environmental groups and local activists have raised concerns about the impact of large data centers, pointing to their heavy electricity consumption, water use for cooling, and increased strain on local grids. xAI’s proximity to a natural gas plant has intensified scrutiny, as critics question the climate implications of fossil-fuel-powered AI infrastructure at a time when governments and companies are pledging to cut emissions.
Musk has previously argued that AI progress requires massive compute and that reliable baseload power is essential to support it. Yet the tension between AI’s growth and sustainability is becoming harder to ignore, particularly as projects scale into the gigawatt range.
Strategically, the expansion signals Musk’s determination to keep xAI in the top tier of AI developers as competition intensifies. With OpenAI reportedly working on increasingly powerful models and Anthropic attracting significant enterprise adoption, the ability to train faster, larger, and more capable systems is becoming a decisive advantage.
xAI’s latest move reinforces a central reality of the sector as the AI arms race deepens: leadership is no longer just about who has the smartest models, but who can marshal the infrastructure, energy, and capital required to run them at unprecedented scale.