
US SEC Chairman Warns AI Technology could Contribute to Future Financial Crises

WASHINGTON, DC - OCTOBER 03: Securities and Exchange Commission (SEC) Chair Gary Gensler listens during a meeting with the Treasury Department's Financial Stability Oversight Council at the U.S. Treasury Department on October 03, 2022 in Washington, DC. The council held the meeting to discuss a range of topics including climate-related financial risk and the recent Treasury report on the adoption of cloud services in the financial sector. (Photo by Anna Moneymaker/Getty Images)

In a recent speech at the Brookings Institution, the chairman of the US Securities and Exchange Commission (SEC), Gary Gensler, warned that artificial intelligence (AI) technology could pose significant risks to the stability and fairness of the financial system. He argued that AI could amplify existing market inefficiencies, create new sources of systemic risk and undermine investor protection and market integrity.

Gensler highlighted three main areas of concern regarding AI in finance: data quality and governance, algorithmic bias and discrimination, and accountability and transparency. He said that data is the fuel of AI, but also its Achilles’ heel. He stressed the need for robust data quality standards and governance mechanisms to ensure that AI models are fed with accurate, reliable, and representative data. He also cautioned that AI could inherit or exacerbate human biases and prejudices, leading to unfair or discriminatory outcomes for investors and consumers. He called for rigorous testing and monitoring of AI systems to detect and mitigate potential harms.

AI is a powerful tool that can enhance human capabilities and improve efficiency across many domains. However, it also poses significant risks and challenges, especially in the financial sector. One of the main ways AI could cause financial instability is by creating feedback loops that amplify market volatility.


For example, AI algorithms used for trading, risk management, or credit scoring could react to market signals or data inputs in similar or correlated ways, leading to herd behavior, contagion, or systemic risk. Moreover, AI could be used to generate false or misleading content, such as fake news or deepfakes, or to mount cyberattacks that manipulate market participants or disrupt financial infrastructure.
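To make the feedback-loop concern concrete, here is a minimal toy simulation in plain Python (hypothetical parameters, not a model of any real trading system): a crowd of agents all trading the same momentum signal turns small shocks into larger price swings, while agents with diverse strategies largely cancel each other out.

```python
# Illustrative sketch only (hypothetical parameters, synthetic price path):
# when many trading agents act on the same momentum signal, each shock feeds
# back into further selling or buying; heterogeneous agents mostly cancel out.
import random
import statistics

def simulate(num_agents=100, steps=500, correlated=True, seed=42):
    """Toy price path where each agent's order reacts to the previous price move."""
    random.seed(seed)
    prices = [100.0, 100.0]
    for _ in range(steps):
        shock = random.gauss(0, 0.2)              # small exogenous news shock
        last_move = prices[-1] - prices[-2]
        aggregate_order = 0.0
        for _ in range(num_agents):
            if correlated:
                signal = last_move                # every agent trades the same signal
            else:
                signal = last_move * random.uniform(-1, 1)  # diverse strategies
            aggregate_order += 0.008 * signal     # per-agent price impact
        prices.append(prices[-1] + shock + aggregate_order)
    returns = [b - a for a, b in zip(prices, prices[1:])]
    return statistics.stdev(returns)              # realized volatility of the path

print("volatility, correlated agents:", round(simulate(correlated=True), 3))
print("volatility, diverse agents:   ", round(simulate(correlated=False), 3))
```

Running the two scenarios typically shows noticeably higher realized volatility when the agents are correlated, which is the essence of the herding concern Gensler describes.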

Another potential source of financial instability from AI is the lack of transparency and accountability of AI systems. AI models are often complex, opaque, and dynamic, making it difficult to understand how they work, why they make certain decisions, and who is responsible for their outcomes. This could create information asymmetry, moral hazard, or adverse selection problems in the financial market, as well as undermine trust and confidence in financial institutions and regulators. Furthermore, AI systems could also be subject to biases, errors, or failures that could result in unfair or inaccurate outcomes for consumers, investors, or borrowers.

To address these risks and challenges, it is essential for the SEC to develop and implement appropriate governance frameworks and ethical principles for AI in the financial sector. These could include:

Establishing clear and consistent standards and regulations for AI development, deployment, and oversight across jurisdictions and sectors.

Enhancing the transparency and explainability of AI systems and their decisions, as well as the accountability and liability of their developers, users, and supervisors.

Ensuring the robustness and reliability of AI systems and their data sources, as well as the resilience and security of financial infrastructure against AI-related threats.

Promoting the fairness and inclusiveness of AI systems and their outcomes, as well as the protection of privacy and human rights of individuals and groups affected by AI.

AI technology has the potential to bring great benefits to the financial sector and society at large. However, it also entails significant risks and challenges that need to be carefully managed and monitored. By adopting a proactive and collaborative approach to AI governance and ethics, we can harness the power of AI while minimizing its pitfalls.

Moreover, Gensler emphasized the challenge of ensuring accountability and transparency in the use of AI. He noted that AI systems are often complex, opaque, and dynamic, making it difficult to understand how they work and why they make certain decisions. He warned that this could create a “black box” problem, where investors and regulators are unable to assess the risks and performance of AI-driven products and services. He urged more disclosure and explainability of AI models, as well as clear allocation of responsibilities and liabilities among the various actors involved in the design, development, deployment, and oversight of AI.
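One way to read “explainability” in practice is a global surrogate model: a simple, auditable model fitted to reproduce an opaque model’s decisions so its rules can be inspected and disclosed. The sketch below is purely illustrative and not anything the SEC has prescribed; it assumes scikit-learn is installed and uses synthetic data in place of real credit-scoring features.

```python
# Hypothetical sketch of a global surrogate model for explainability.
# Synthetic data only; assumes scikit-learn is available.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic "credit scoring" data standing in for real applicant features.
X, y = make_classification(n_samples=2000, n_features=6, n_informative=4, random_state=0)

# The opaque model: an ensemble of boosted trees that is hard to inspect directly.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# The surrogate: a depth-3 decision tree trained to mimic the black box's decisions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Report how faithfully the readable rules track the opaque model, then print the rules.
fidelity = surrogate.score(X, black_box.predict(X))
print(f"surrogate fidelity to black-box decisions: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
```

The printed tree is the kind of artifact that could, in principle, be disclosed or audited, while the fidelity score shows how much of the opaque model’s behavior the simplified explanation actually captures.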

Gensler concluded his speech by stating that the SEC is committed to fostering innovation in the financial sector, but also to protecting investors and markets from potential harms caused by AI. He said that the SEC is actively engaging with industry stakeholders, academic experts, and other regulators to develop a comprehensive and balanced regulatory framework for AI in finance. He also encouraged public input and feedback on this important issue.
