A quiet policy shift inside Goldman Sachs, which has removed access to Anthropic's Claude for staff in Hong Kong, is drawing attention to a broader recalibration underway across global finance, where the rapid adoption of artificial intelligence is colliding with tightening data controls and geopolitical friction.
The decision marks a deeper change in how global banks approach artificial intelligence: treating it less as a productivity tool and more as regulated infrastructure shaped by jurisdictional risk, contractual boundaries, and geopolitical pressure.
According to a source familiar with the matter, cited by Reuters, Goldman employees in Hong Kong previously accessed Claude via an internal AI platform but have been cut off in recent weeks. Other models, including ChatGPT from OpenAI and Gemini from Google, remain available, indicating the move is targeted rather than a broader pullback from AI adoption.
The immediate rationale appears rooted in compliance interpretation. Anthropic does not officially support Hong Kong as a market for its API or direct product access, and a spokesperson has said Claude models were never formally “supported” in the territory. Goldman’s restriction suggests the bank has opted for a stricter reading of usage rights, likely after internal or external review, rather than risking exposure in a legally ambiguous environment.
That caution is increasingly typical across the financial sector. AI systems process sensitive internal data, client information, and market insights, making questions around data residency, cross-border transfer, and third-party access central to deployment decisions. In jurisdictions like Hong Kong, where regulatory oversight intersects with both Western and Chinese frameworks, those questions carry additional weight.
The timing is particularly notable. Tensions between the United States and China over artificial intelligence have intensified, with Washington raising concerns about intellectual property risks and tightening controls on advanced technology flows. These issues are expected to feature prominently in discussions between Donald Trump and Xi Jinping at an upcoming summit in Beijing. In that context, corporate decisions on AI access are increasingly shaped by geopolitical considerations rather than purely commercial ones.
For banks, the implications are operational as well as strategic. Hong Kong has historically served as a critical hub for Asia-Pacific operations, offering access to global markets alongside proximity to mainland China. However, as AI models become more tightly controlled by their developers, the city is emerging as a grey zone where access cannot be assumed. Goldman’s move signals that institutions may begin to segment their AI capabilities by region, creating uneven deployment across global teams.
Regulatory scrutiny is adding another layer. The Hong Kong Monetary Authority said it has contacted major banks to assess developments around Anthropic’s newer models, including Mythos, and to ensure risk frameworks are updated. This reflects growing concern that advanced AI, particularly systems capable of autonomous or semi-autonomous decision-making, could introduce systemic risks if not properly governed.
Those concerns extend beyond data security. AI models embedded in banking workflows could influence trading strategies, compliance checks, or client advisory processes, and any lack of transparency in how those models operate, or uncertainty about where data is processed, raises the risk of regulatory breaches and reputational damage. For institutions like Goldman, the cost of misalignment can far outweigh the productivity gains from broader access.
At the same time, the selective nature of the restriction points to a more nuanced trend. Banks are not retreating from AI; they are diversifying and hedging. By maintaining access to multiple providers, Goldman reduces dependency on any single model while preserving flexibility to adapt as regulatory conditions evolve. This multi-model approach is becoming standard among large enterprises navigating a fragmented AI landscape.
For Anthropic, however, the development highlights a constraint that extends beyond technology. While the company has gained traction with its emphasis on safety and enterprise use, limited geographic availability could slow adoption among multinational clients that require consistent global access. Competitors with broader deployment frameworks may gain an advantage, even if their models are not uniformly superior.
The broader takeaway is that AI adoption in finance is entering a more disciplined phase. Early experimentation is giving way to structured integration, governed by the same risk frameworks that apply to capital allocation, cybersecurity, and cross-border operations. Decisions like Goldman’s are less about stepping back from innovation and more about aligning it with regulatory reality.
In that sense, the removal of Claude in Hong Kong is a localized action with wider implications. It signals that the global rollout of AI will not be seamless, but will instead be shaped by a patchwork of legal, political, and institutional constraints.