You can also listen to this podcast on iono.fm here.
More and more crypto traders are turning to artificial intelligence (AI) to give them a jump on the market. But what happens when AI goes rogue and gives you completely wrong signals?
Statistics from online brokers show nearly 80% of traders lose money, and even those using quant strategies are prone to losses. Many believed AI would help them over this hump, but the research here is less than encouraging.
It turns out that AI hallucinations are all too common.
Abhishek Saxena, head of strategy and growth at Sentient, delves into the complexities of AI in trading.
“AI has been used across different knowledge work to improve productivity,” he explains. However, the stakes are high in trading, where “a small mistake can cost you a lot of money”.
Traders use AI to build strategies, leveraging it for research and backtesting before deployment. Yet fully autonomous trading remains fraught with issues like hallucinations.
“Hallucination is not the only issue,” Saxena notes. AI engines often steer traders in the wrong direction because of data inaccuracies.
“When a large language model (LLM) is building a strategy, it is reasoning a lot,” he says, which can lead to “context rot”, where the AI loses track of its initial context. This, combined with a lack of real-time data access, results in flawed strategies.
One of the great underreported skeletons of fund management is the huge cost of developing quantitative systems that don’t work.
Read: Making the case for simply following the trend
A case in point is the 1998 collapse of Long-Term Capital Management (LTCM), a hedge fund whose partners included Nobel laureates Robert Merton and Myron Scholes.
Another is Knight Capital’s 2012 loss of $440 million in just 30 minutes, caused by a faulty trading algorithm. To be fair, some algorithms perform as expected, but that success has often come at a huge cost. AI is supposed to fix these problems, but we’re still a long way from that, says Saxena.
Read: How to create high probability trades with AI
Read: It turns out AI trading bots behave just like humans …
Another little-known feature of AI is the shift to “engagement-optimised black boxes” wired into financial infrastructure.
These systems aim to keep users happy, sometimes at the expense of accuracy.
“They’ll tell you whatever it is you want to hear, even if you’re going to lose money,” Saxena warns. This bias towards user satisfaction can lead to poor trading outcomes.
Despite these challenges, Saxena remains optimistic about AI’s potential. “It’s not 100% solved,” he admits, but improvements are ongoing. The key lies in enhancing AI’s reasoning capabilities and ensuring access to accurate data.
“As AI reasoning becomes stronger, more accurate, with very low hallucinations,” the potential for successful trading increases.
Saxena advises traders to use AI as a tool rather than relying on it entirely.
“I would caution you to not give all your money to an AI LLM and trade for them,” he suggests. Instead, traders should focus on research and strategy development, with a human in the loop to verify AI-generated insights.
Sentient is at the forefront of this research, developing frameworks to improve AI reasoning. Its goal is to create systems that provide accurate, unbiased outputs.
“We are optimising for reasoning,” says Saxena, aiming to solve issues related to data analysis and prompt understanding.
Read: Artificial Intelligence: The future of finance? [2017]
In conclusion, while AI offers promising tools for traders, it is not yet a foolproof solution.
Human oversight remains crucial to mitigate risks and ensure successful trading outcomes.
As Saxena puts it: “Alpha (outperformance against a benchmark) is still what humans would generate. LLMs can help you get there.”
For previous Moneyweb Crypto Pod episodes, click here.
You can also sign up for our crypto newsletter.