Artificial intelligence is transforming financial markets at an unprecedented pace. Powered by specialized machine-learning processors from companies such as NVIDIA, AI is driving a technological revolution in the financial sector. But this rapid growth may conceal systemic risks beneath the surface. Even as AI promises transformative benefits in trading and risk management, it presents a paradox: the same technology that makes financial systems more robust in some ways could also render them more vulnerable to failure.
Wall Street is in the grip of a wave of AI enthusiasm, with investments running into the tens of billions of dollars; every major investment bank is integrating the technology. That optimism is being challenged by critics such as Jim Rickards, author of 'Money GPT,' who argues that widespread AI adoption could amplify market crashes beyond anything previously experienced. He warns of the 'fallacy of composition': strategies that benefit individual investors can, when adopted by everyone at once, produce large-scale market disruption.
Rickards uses a simple analogy to explain the phenomenon: in a football stadium, one fan who stands up gets a better view, but when everyone follows suit the advantage disappears and no one can see. Likewise, if the AI systems controlling substantial volumes of capital all respond to a downturn by executing similar sell strategies, the result can be a catastrophic, cliff-like drop. The danger lies in stripping human judgement out of the system.
Historically, human specialists at the New York Stock Exchange helped keep markets orderly by stepping in to buy when sell orders overwhelmed the floor. Today's AI systems lack that discretion, raising the risk of unchecked automated responses. Rickards argues that as AI-driven trading becomes more prevalent, its speed and synchronization magnify volatility, accelerating price moves and feeding self-reinforcing feedback loops.
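To make the feedback loop concrete, here is a minimal toy simulation, my own illustrative sketch rather than a model from the book: a one-off price shock triggers stop-loss selling, and each forced sale pushes the price down far enough to trigger more. The agent count, thresholds, and price-impact parameter are arbitrary assumptions chosen only to expose the mechanism.

```python
import numpy as np

def cascade(stop_losses, shock=0.06, impact=1e-4, rounds=50):
    """Toy model of stop-loss selling with price impact (illustrative only).

    A one-off `shock` knocks the price down; agents whose personal drawdown
    limit is breached sell, each sale pushes the price lower, and the loop
    repeats until no new sellers are triggered.
    """
    peak = 100.0
    price = peak * (1.0 - shock)            # price after the initial bad news
    holding = np.ones(len(stop_losses), dtype=bool)
    for _ in range(rounds):
        drawdown = 1.0 - price / peak
        triggered = holding & (drawdown > stop_losses)   # limits breached this round
        if not triggered.any():
            break
        price -= impact * price * triggered.sum()        # impact of forced selling
        holding &= ~triggered
    return price

n = 1_000
same = cascade(np.full(n, 0.05))              # every agent sells at a 5% drawdown
mixed = cascade(np.linspace(0.05, 0.25, n))   # thresholds spread across agents
print(f"identical rules: price falls to {same:.1f}")
print(f"diverse rules:   price falls to {mixed:.1f}")
```

In this toy setting, identical sell rules dump everything in a single round while dispersed thresholds let the cascade damp out: the stadium analogy, in code.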
The concern extends beyond market crashes to the banking sector. Rickards points to the 2023 collapse of Silicon Valley Bank, where digital banking compressed a run into days rather than the weeks or months such runs once took, and suggests that AI could drive similar events even faster, amplifying the dangers in the banking system.
Despite the alarming potential for systemic disruption, Rickards does not advocate abandoning AI. Instead, he proposes stronger circuit breakers and regulatory safeguards: 'cybernetic' mechanisms that slow market activity gradually during a crisis rather than relying on abrupt halts.
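One way to read the 'cybernetic' idea is as a throttle rather than a switch: instead of trading at full speed until a fixed threshold halts it entirely, order flow is slowed progressively as stress builds. The sketch below is a hypothetical illustration of that shape with arbitrary thresholds; it is not a mechanism described in the book or in any exchange rulebook.

```python
def order_throttle(drawdown: float, soft_limit: float = 0.03, hard_limit: float = 0.10) -> float:
    """Fraction of normal order flow allowed, given the intraday drawdown.

    A hypothetical 'graduated' circuit breaker: full speed below the soft
    limit, a linear slowdown between the limits, and a trickle (not a hard
    stop) beyond the hard limit. All thresholds are illustrative only.
    """
    if drawdown <= soft_limit:
        return 1.0                                    # normal trading
    if drawdown >= hard_limit:
        return 0.05                                   # trickle, not a total halt
    # linear ramp-down between the soft and hard limits
    span = hard_limit - soft_limit
    return 1.0 - 0.95 * (drawdown - soft_limit) / span

for dd in (0.01, 0.04, 0.07, 0.12):
    print(f"drawdown {dd:.0%}: allow {order_throttle(dd):.0%} of normal order flow")
```

The design choice is simply that the response is continuous: the brake is applied harder as the decline deepens, rather than all at once at a single trigger point.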
As financial institutions race to adopt AI, the author calls for a balanced approach to innovation. AI offers powerful tools for evaluating markets and managing risk, but those tools must not inadvertently expose financial systems to new vulnerabilities. The central challenge is to harness AI's potential while building fail-safes against its risks; as markets undergo rapid technological change, striking that balance is essential for global economic stability.