ByteDance Builds Major Nvidia Blackwell AI Cluster in Malaysia Through Aolani Cloud Partnership, Bypassing China-Based Deployment Constraints

ByteDance, the Chinese parent company of TikTok, is assembling one of the largest private AI computing clusters outside mainland China by partnering with Malaysian cloud provider Aolani Cloud to deploy approximately 500 Nvidia Blackwell systems, according to a Wall Street Journal report citing people familiar with the matter.

The build-out, roughly equivalent to 36,000 B200 GPUs, is expected to cost more than $2.5 billion and represents a massive expansion for Aolani, which currently operates infrastructure valued at around $100 million. The systems are intended for AI research and development conducted outside China, as well as to serve growing global customer demand for ByteDance’s AI services and tools.

An Aolani spokesperson told Reuters the company “adheres fully to all applicable export control regulations” and aims to provide cloud-computing services to multiple companies across Asia and globally.


The deployment comes amid continued U.S. export restrictions on advanced AI chips to China. Last month, Reuters reported that the United States had signaled willingness to allow ByteDance to purchase Nvidia’s H200 chips, but Nvidia has not agreed to the proposed conditions governing their use. The Blackwell-based cluster in Malaysia offers ByteDance a way to access cutting-edge Nvidia hardware while conducting sensitive AI work beyond the reach of current China-specific controls.

ByteDance’s move doesn’t come as a surprise. It is part of a broader trend among Chinese tech giants to diversify AI compute capacity outside mainland China in response to U.S. restrictions on advanced semiconductors. Alibaba, Tencent, and Baidu have pursued similar strategies, establishing or expanding cloud infrastructure in Southeast Asia, the Middle East, and other regions less constrained by U.S. export rules.

Malaysia has emerged as an attractive hub for such investments due to its relatively permissive regulatory environment, reliable power supply in certain regions, favorable tax incentives for data centers, and strategic location for serving both Asian and global customers. The country has actively courted hyperscale and AI-related investments, with several large-scale projects announced in recent years.

The reported 500 Blackwell systems would represent one of the largest single deployments of Nvidia’s newest-generation AI accelerators outside the U.S. and allied markets. Each Blackwell B200 GPU offers significantly higher performance than previous Hopper H100/H200 series chips for both training and inference workloads, making the cluster potentially capable of supporting frontier-scale model development and massive inference demand.

Cost and Scale Implications

At current pricing, a single Blackwell system (typically containing multiple B200 GPUs) costs several million dollars. The reported 500-system deployment would place the total hardware investment well above $2.5 billion — before accounting for networking, cooling, power infrastructure, and facility costs. For context, Nvidia’s latest quarterly data-center revenue exceeded $22 billion, with Blackwell ramp-up expected to drive further acceleration in 2026.
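The reported figures can be sanity-checked with simple division. A rough back-of-envelope sketch, treating the article's numbers (500 systems, ~36,000 GPUs, $2.5 billion) as given; the per-system and per-GPU values are derived estimates, not independently confirmed:

```python
# Back-of-envelope check of the reported deployment figures.
systems = 500            # reported Blackwell systems
total_gpus = 36_000      # reported B200-equivalent GPU count
total_cost_usd = 2.5e9   # reported minimum hardware cost

# 72 GPUs per system, consistent with a GB200 NVL72-class rack
gpus_per_system = total_gpus / systems

# implied ~$5 million per system, ~$69k per GPU (hardware only,
# excluding networking, cooling, power, and facility costs)
cost_per_system = total_cost_usd / systems
cost_per_gpu = total_cost_usd / total_gpus

print(f"GPUs per system: {gpus_per_system:.0f}")
print(f"Implied cost per system: ${cost_per_system / 1e6:.1f}M")
print(f"Implied cost per GPU: ${cost_per_gpu:,.0f}")
```

The implied 72 GPUs per system matches the rack-scale configurations Nvidia ships for Blackwell, which supports the "several million dollars per system" characterization above.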

The investment underscores ByteDance’s determination to maintain competitiveness in the global AI race despite U.S. chip restrictions. The company has aggressively expanded its AI research footprint, releasing open-source models and tools while investing heavily in compute capacity both domestically (under export-control-compliant configurations) and internationally.

While Malaysia offers fewer immediate restrictions than China, any large-scale deployment of U.S.-origin advanced AI hardware remains subject to U.S. export controls, end-use monitoring, and potential future tightening. The U.S. government has continued to expand entity-list designations and tighten licensing requirements for AI-related technologies destined for certain Chinese entities, including ByteDance affiliates.

The timing of the WSJ report has drawn attention, coming just days after Nvidia CEO Jensen Huang’s comments at the Morgan Stanley TMT conference signaling limited further equity investments in OpenAI and Anthropic.

ByteDance’s offshore compute build-out mirrors actions by other Chinese tech leaders. Alibaba Cloud, Tencent Cloud, and Huawei Cloud have all expanded aggressively in Southeast Asia, the Middle East, and Latin America to serve both local and global customers while navigating U.S. restrictions. These moves reflect a bifurcated global AI landscape: U.S. leadership in frontier capabilities and chip design, but increasing Chinese self-sufficiency and offshore capacity to mitigate supply-chain vulnerabilities.

The Malaysian deployment, if completed at the reported scale, would rank among the largest non-U.S./allied AI clusters using Nvidia’s latest hardware. Besides its significance for ByteDance, it underscores Southeast Asia’s growing role as a neutral hub for AI infrastructure — a trend accelerated by U.S.-China tensions and the global race to secure compute resources for next-generation models.
