Widening The Net To Devise Sophisticated Trading Algorithms



By Stuart Baden Powell, Head of Asia Electronic Product, Macquarie, and Professor Dan Li of Hong Kong University

The only constant in algorithmic trading is change, and continual improvement is necessary to evolve with innovation in both technology and market dynamics.

There is a major change underway within the trading industry as the focus shifts towards a more sophisticated and advanced quantitative and scientific execution logic. At Macquarie we have embraced this move, and increasingly, a similar approach is evident among several of our buy-side counterparties.

This shift has parallels in other industries. For instance, the airline industry offers a historical similarity: over the years, human pilots have obtained new skills and adapted to shifting cultures in response to technological change. Most importantly, the value of human agency has diminished as automated processes have superseded an individual’s experience, skill and intuition for functions such as landing an aircraft. Yet humans retain a role that is skilled in a different way, namely an ability to understand and interpret complex technology. This developmental “human in/out of the loop” process is what we are seeing on the buy- and sell-side trading desks.

Sourcing algorithm complexity
With skillsets drawn from diverse academic and industry backgrounds and wider computational improvements built in, Macquarie has pre-positioned for this change and sourced algorithmic logic not only from within finance, but also from more sophisticated industries. Mapping across from aeronautics and astronautics, or from Silicon Valley firms, provides tried-and-tested logic from similar non-linear environments.

A crucial advantage of a multi-disciplinary approach is the ability to develop more complex algorithms and strategies that enhance trade execution performance and differentiate a firm’s capability from the overcrowded and commonplace.

Indeed, in a recent paper co-written by Dan Li, entitled “The Competitive Landscape of High-Frequency Trading Firms”*, the authors found that the majority of computerised trading in the Canadian market, even in recent years, is still concentrated in fairly simplistic algorithmic trading strategies. Just three basic strategies generate a large number of trades, and even more orders. These algorithms respond to market conditions and trading signals in a similar fashion, and pursue near-identical profit and cost-reduction opportunities.

More importantly, this similarity leads to heightened competition within each strategy space. The market is crowded and it is becoming increasingly difficult for any generic algorithm to stand out. One particular strategy investigated concerns posting or supplying liquidity. It was found that strategies primarily providing liquidity generate lower trading revenues, regardless of whether the market is going up, down or staying flat.

Limitations of simple strategies
A natural question to ask is how the increased competition affects the market in general. For the most part, the research focuses on volatility over various intraday horizons, because volatility management is central to trading performance. In a marketplace where algorithmic traders tend to employ similar strategies, short-term volatilities are dampened. A further investigation suggests that the fall in market volatility is driven by the portions of short-horizon volatility related to both the permanent and temporary price impacts of trades.

On one hand, competition among traders could lead to faster revelation of hard-information signals and a reduction of adverse selection costs. On the other hand, competition in liquidity-providing orders might also lower the compensation that posting algorithms earn, which in turn explains the reduction in volatility that stems from the temporary price impact.

At Macquarie, we have uncovered similar usage patterns in Asia. Although it varies by market and by client, the majority use a small handful of algorithmic strategies. Away from the people side and onto the product side, our research suggests that we are only at base camp for algorithmic practices, so many current segments of underlying logic could quickly be deemed legacy.

For example, we have done extensive work around prediction. If we aim to predict short-term direction and magnitude yet the majority of the sample is VWAP algorithms, results are rarely profound; this is amplified if we work off a fixed time constraint. In fact, predictive logic using a fixed “finish time” is largely superfluous. A far superior logic uses time-flexible start and finish intervals that lean on specific market conditions.
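To make the contrast concrete, here is a minimal sketch of a time-flexible execution window, where trading pauses or resumes on market conditions rather than racing a fixed finish time. This is an illustration only, not Macquarie’s actual logic; the feature names (`spread_bps`, `short_vol`) and thresholds are hypothetical assumptions.

```python
# Illustrative sketch only: a condition-driven execution window.
# Feature names and thresholds are hypothetical, not a real product's logic.

def should_trade(spread_bps, short_vol, max_spread_bps=5.0, max_vol=0.02):
    """Trade only while the spread is tight and short-term volatility is calm."""
    return spread_bps <= max_spread_bps and short_vol <= max_vol

def flexible_schedule(ticks, target_qty, clip=100):
    """Work a target quantity across ticks, standing aside in poor conditions.

    ticks: iterable of (spread_bps, short_vol) pairs.
    Returns the executed quantity at each tick; no fixed deadline forces a fill.
    """
    remaining = target_qty
    fills = []
    for spread_bps, short_vol in ticks:
        if remaining > 0 and should_trade(spread_bps, short_vol):
            qty = min(clip, remaining)
            remaining -= qty
            fills.append(qty)
        else:
            fills.append(0)  # stand aside rather than cross a wide spread
    return fills
```

A fixed-finish-time scheduler would be obliged to trade through the wide-spread and high-volatility ticks; the flexible version simply waits.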

We also dig into machine learning, a crucial element given that our dynamic market is prone to high-impact, fat-tail exogenous events. How does “a decision process” loosen itself and adapt when the statistical and sensory feeds are far from static? Similar to computational map-making using LIDAR and a feature space, it could learn by creating a feature vector or some form of training set. Alternatively, an algorithm can use “deep reinforcement learning” or a materially simplified one- or two-layer approach, more correctly termed “shallow learning”. In fact, we see this shallow learning in place today.
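The “shallow learning” the paragraph refers to can be sketched as a single-layer logistic classifier trained over hand-built feature vectors. The toy samples and learning rate below are illustrative assumptions; real feature engineering is where the work lies.

```python
# A minimal "shallow learning" sketch: one logistic layer, trained by
# stochastic gradient descent. Data and hyperparameters are toy assumptions.
import math

def predict(weights, bias, x):
    """Sigmoid of a single linear layer: probability of, say, an up-tick."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, lr=0.5, epochs=200):
    """samples: list of (feature_vector, label) pairs with label in {0, 1}."""
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = predict(weights, bias, x) - y       # gradient of log-loss
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
            bias -= lr * err
    return weights, bias
```

A “deep” variant would stack further layers between the features and the output; the one-layer form is exactly the simplified approach the text describes.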

The next stage in algorithmic modelling
Our work tells us that many problems can be formulated using supervised or reinforcement learning, but several considerations exist in reaching optimal solutions. If you start with a tabula rasa, you need a substantial number of repeatable examples to guide the algorithm, and this is where reinforcement learning can struggle. Alternatively, the widely known “greedy algorithm” can lock into a suboptimal action loop, often referred to as the “exploration versus exploitation” dilemma, which can lead to best-execution challenges. At Macquarie, we think algorithmic logic has moved beyond these processes.

These difficulties can be acute where some market participants use simple algorithms and others use more advanced models, often manually switching between them in high- or low-volatility conditions. The ability to adapt automatically to what may be bid/ask bounce, flutter, volatility, momentum or reversion reinforces the need for more sophisticated algorithmic modelling.
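The manual switching the paragraph describes can be replaced by a simple automatic rule: estimate volatility on a rolling window and let the estimate select the parameter set. The sketch below is a hedged illustration; the window length, threshold and parameter names are assumptions, not a production model.

```python
# Hedged sketch of automatic regime switching: a rolling volatility
# estimate selects the parameter set instead of a human toggling modes.
# Window, threshold and parameter names are illustrative assumptions.
from collections import deque

def rolling_vol(returns, window=20):
    """Trailing sample standard deviation of returns, one value per tick."""
    buf, vols = deque(maxlen=window), []
    for r in returns:
        buf.append(r)
        mean = sum(buf) / len(buf)
        var = sum((x - mean) ** 2 for x in buf) / max(len(buf) - 1, 1)
        vols.append(var ** 0.5)
    return vols

def select_regime(vol, threshold=0.02):
    """Map the current volatility estimate to a trading parameter set."""
    if vol > threshold:
        return {"mode": "passive", "clip": 50}   # high vol: smaller clips
    return {"mode": "active", "clip": 200}       # calm market: work harder
```

A richer version would condition on momentum, reversion or bounce signals as well, but the structural point is the same: the switch is part of the model, not a manual override.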

Overall, these findings call for more sophisticated buy-side algorithms that build on recent, wider developments in machine learning and strategic design.

* Boehmer, Ekkehart, Dan Li and Gideon Saar, “The Competitive Landscape of High-Frequency Trading Firms” in The Review of Financial Studies, 2017.
