Can you describe the SFC’s recent regulatory initiative on electronic trading? There’s a huge amount of work and thought being put into the regulatory approach to electronic trading internationally, and this effort has been underway for some time.
In Hong Kong, we published our new rules in March after a public consultation.
The initiatives are intended to provide much needed clarity to intermediaries and traders and, in common with much post-financial crisis regulation, are about safety, soundness and transparency. The rules are broadly in line with regulations across other major international markets and the principles published by the International Organization of Securities Commissions (IOSCO).
In essence, the rules apply to internet trading, Direct Market Access (DMA) and algorithmic trading, and are aimed at ensuring that undue risks are not borne by investors.
How has the industry responded to the SFC's new electronic trading regime? Feedback was pretty open and honest. There was no significant resistance to the proposals; it is pretty evident that sensible regulation is necessarily about system safety, testing, internal controls and the risks of DMA.
Of course some comments focused on the ever-present tension between the extent of safety measures required to reduce risk to an acceptable level and the cost of those measures to the industry – and to end users.
For example, smaller firms were concerned about the extent to which they would have to commit resources to checking out an electronic system bought off-the-shelf. The answer is that you absolutely need to check it out – because if you don't, the risks you are taking on are unknowable; you would be flying blind.
Although the new requirements will inevitably increase operating costs, we believe that the framework will actually facilitate the long-term growth of electronic trading in our market; electronic trading is here to stay and the regime ensures that investors are informed and can be confident. One thing we are very conscious of in Hong Kong is that we deal with a vast range of financial institutions from the very big to the very small. The impact of regulation on them, including electronic trading, can therefore vary, and that’s something we have to be sensitive to. Clearly, large firms may be better able to absorb additional costs than smaller firms.
With that in mind, the new regime will become effective on 1 January 2014 to allow sufficient time for all firms to implement internal control policies and procedures, as well as to make changes to their electronic trading and record keeping systems.
How are you examining dark liquidity? Fundamentally, with dark pools and dark liquidity, we are talking about trading off-exchange on platforms that do not offer pre-trade price transparency. Since the imposition of mandatory flagging of reported dark pool transactions by the Hong Kong stock exchange last year, the reported volume of trades executed in dark pools in Hong Kong has increased steadily, accounting for 2.2% to 2.5% of monthly turnover. This, of course, is very small compared to markets that have actively embraced alternative venues – and are now struggling with how to regulate them and find an optimal balance between the roles of “lit” and “dark” trading platforms.
We have identified a set of key issues concerning dark liquidity – clarity to users as to how a dark pool operates; involvement of retail investors; who within a financial institution can see what’s occurring in a dark pool; what ‘best execution’ means within dark pools; and proprietary orders within dark pools – e.g. the priority of proprietary orders versus genuine client orders.
So, unlike the new electronic trading rules, which govern firms operating between a trading platform and a client, this is a separate topic about the platforms themselves.
We’ve already come across some problems with existing dark pools. They have different configurations and different target clients, and of course they were originally developed to facilitate large trades by large institutions – but have moved on from this to deal with smaller trades. Those banks or brokers who operate their own “internal” dark pools tend to say that they are simply a benign electronic overlay to traditional brokerage operations. Exchanges counter this by saying that all trading needs to have pre-trade price and order book transparency, and that what the dark pool operators are doing is operating alternative exchanges, free riding on lit market pricing. To address these issues, we have actively discussed the situation with existing dark pool operators with a view to imposing carefully calibrated licensing conditions.
We will also consult the market later this year about codifying our stance to ensure a consistent, level playing field for all operators.
Fidessa’s Group Strategy Director, Steve Grob, puts some of the major myths around HFT under the microscope.
High frequency trading (HFT) has been the hottest topic in the financial world for at least two years now, and this debate has now reached Australia, which has been busy introducing its own multi-market structure over the past couple of years. Nothing, it seems, raises as many hackles and divides as many opinions as those three words – “high frequency trading” – and this is as true in Australia as anywhere else.
But what is the truth about HFT? Is it the devil, a scourge to markets? Or is it simply the evolution of trading – computer driven trading replacing human trading in the way computers are replacing so many other aspects of our business and personal lives?
Whatever the answer to these questions, there can be little doubt that HFT activity has taken hold and accelerated wherever multi-market trading structures have been introduced. Shrinking average trade size can be seen as a proxy for HFT. Take the FTSE 100, for example. As chart 1 shows, average trade size has reduced significantly since 2008 and a similar trend looks set to impact Australia’s main index too (chart 2).
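As a rough illustration of that proxy, the sketch below computes average trade size per month from a stream of trade records. The record layout, field names and sample values are assumptions for illustration, not data from the charts.

```python
# Hypothetical sketch: shrinking average trade size as a rough HFT proxy.
# Assumes trade records of (ISO timestamp, price, volume); the layout
# and sample values are illustrative, not taken from the article.
from collections import defaultdict

def average_trade_size_by_month(trades):
    """Return {month: average volume per trade}."""
    totals = defaultdict(lambda: [0.0, 0])  # month -> [volume_sum, trade_count]
    for timestamp, _price, volume in trades:
        month = timestamp[:7]               # e.g. "2008-03" from ISO dates
        totals[month][0] += volume
        totals[month][1] += 1
    return {m: vol / n for m, (vol, n) in totals.items()}

# A falling series here mirrors the trend described in the text:
# more, smaller orders as HFT activity grows.
trades = [("2008-03-05T10:01:00", 412.0, 5000),
          ("2012-03-05T10:01:00", 590.0, 400)]
print(average_trade_size_by_month(trades))
```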
So, to unpick the problem, let’s look at some commonly held opinions about HFT.
HFTs see market data before other participants, giving them an unfair advantage. This myth has been making its way around the market in Australia for a while now, but it simply isn’t true. The ASX and Chi-X both have co-location centres where firms can pay to have their computers close to the source of market data. While this does advantage those firms within the co-location environment, or ‘colo’, it’s a level playing field – any firm can enter the colo and all the computer racks are connected to the market data distribution engine such that they all receive it at exactly the same time.
It’s also worth considering what other kinds of firms are in the colo. Many, or most, fund managers (who are aggregators of ‘mum and dad’ retail money) execute their trades through a third-party broker, usually an investment bank or an agency broker like Instinet. It’s these firms that are first in line to buy rack space in colos, and their proximity puts them – and their end investors – on the same playing field as the HFTs. Where the waters become a little murkier is in the US, where it’s claimed that exotic order types such as DAY ISO and “hide and light” orders can be used to almost pre-empt market data and push HFT orders to the front of the queue. Protests from both sides are vociferous and the jury is probably still out as to the truth of the claims.
HFTs don’t follow the rules. HFTs have to follow the rules just like every other market participant. Those who don’t are breaking the law – pure and simple. Where regulators are struggling is in keeping pace with the rapid-fire trading taking place on their exchanges, and this goes for standard algo trading as well as HFT. ASIC, for example, has been very conscientious in looking at best-practice around the world and is procuring its own fast technology to ensure it can keep pace with its participants. Other regulators would do well to follow their example.
Weng Cheah, Managing Director of Xinfin, continues his discussion with prop and quant traders, looking at what the future holds for high frequency trading.
It was inevitable that the world of brokerage, in particular execution, would become faster and more automated. Competition drove the world of physics into brokerage and shaped many new services. However, the challenge today has switched from achieving nanosecond execution to maintaining profitability, with volumes that halved in 2011 and continue to decline in 2012.
Even from traders there is clarity that the race to faster execution has all but ended. An Asia-based proprietary trader shared his thoughts on execution latency, saying “we are several fold past ridiculous,” but, more importantly, adding that “speed is not where innovation needs to occur.”
It’s not about the speed of execution…
Magazines, professional literature and business plans are all littered with thoughts and opinions about the future, which vary only in how quickly, and how badly, they prove wrong. However, it may be worthwhile to frame the discussion with the following observations:
· The last three years have seen significant investment in low latency services from technology vendors and some brokers, to the extent that a sub-millisecond average round trip is no longer an achievement; sub-microsecond is normal.
· In a world with a sub-microsecond norm, there are few participants unhappy with this performance. It is rational that fewer resources will be dedicated to even faster gateway solutions.
· Although it is difficult to foresee the total exclusion of research into hardware acceleration, the cold, harsh reality that is the economics of this business will halt even the most technically promising research project.
· Data has resumed importance. However, it is interesting that there is a hierarchy of criticality, in which historical data volumes, in particular for new markets or contracts, will be more valuable than low latency market data.
· Excluding pre-trade risk controls is no longer a valid route to speeding an order to market; this regulatory arbitrage is no longer a selling point for brokers looking to differentiate their services (a minimal sketch of such a control follows this list).
· Intraday risk assessment and management are the least developed links in the chain. This is true for the technology, but also for the organizational structures and methodologies employed.
· Productivity and back testing tools for quantitative analysts were specialized and usually built bespoke for a strategy. However, there is an increasing number of generalized frameworks and back testing products from software vendors.
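By way of illustration, here is a minimal sketch of the kind of pre-trade risk check referred to above. The order fields, limit values and function names are assumptions for illustration, not a description of any broker's actual controls.

```python
# Illustrative pre-trade risk controls: size cap, notional cap and a
# price band around a reference price. All limits are assumed values.
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    side: str      # "buy" or "sell"
    quantity: int
    price: float

MAX_ORDER_QTY = 10_000          # per-order size cap (assumed)
MAX_NOTIONAL = 1_000_000.0      # per-order value cap (assumed)
PRICE_BAND = 0.05               # max 5% deviation from reference price

def pre_trade_check(order, reference_price):
    """Return a list of violations; an empty list means the order may pass."""
    violations = []
    if order.quantity > MAX_ORDER_QTY:
        violations.append("order size exceeds per-order cap")
    if order.quantity * order.price > MAX_NOTIONAL:
        violations.append("notional value exceeds cap")
    if abs(order.price - reference_price) / reference_price > PRICE_BAND:
        violations.append("price outside allowed band")
    return violations

print(pre_trade_check(Order("XYZ", "buy", 500, 10.60), reference_price=10.00))
# -> ['price outside allowed band']
```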
The key takeaway from these observations is that it is not about speed, and hasn’t really ever been about speed, but about the quality of the decision making processes. The world needs to forget about faster, and start getting a lot more original in idea generation.
Hector Casavantes of Finamex assesses improvements in the Mexican markets, like technology upgrades, high frequency trading and regional partnerships, as well as the work yet to be done.
How has Bolsa Mexicana de Valores’ (BMV’s) exchange upgrade improved trading conditions in Mexico? The technology upgrade by the BMV placed us on a path toward a more standardized market, and it definitely helped raise international investors’ awareness of a modern, transparent and easy-to-trade market. While the benefits of the upgrade are clearly apparent, there have also been collateral effects. For example, after the upgrade, new and more demanding players have moved into the picture, increasing the demand for system reliability, pricing models and additional features common on other markets, such as liquidity rebates.
There is still work to do, however, as the exchange platform, upgraded though it is, reserves certain access privileges for more established domestic participants. For example, purely electronic foreign brokers cannot obtain or see the Market on Close (MOC) book, the hidden midpoint peg bookmarks for IOIs, the regular full-depth order book, etc. The exchange needs to work on leveling access, and they have recently demonstrated that they are both aware of this issue and examining how to address it.
What advantages in terms of liquidity will the Mercado Integrado Latino Americano (MILA) bring to Mexican markets? MILA is expected to provide access to new natural liquidity sources in both directions. From what we can see, Mexico may initially provide new asset classes besides equities, including global stocks, ETFs and eventually derivatives, for South American MILA countries. The domestic buy-side investors of any of those countries, such as pension funds, insurance companies and corporate treasuries, may find Mexican-listed names appealing within their risk strategy objectives. For Mexican institutional investors, MILA may provide investment options independent of fluctuations driven by local macroeconomic, sector-related or seasonal forces.
For all the countries involved, MILA will provide a good opportunity for sharing technology and best practices, and perhaps for adding productive competition, thereby encouraging increasingly cost-effective services. On the downside, there are a number of legal and regulatory issues that need to be resolved before the promise of the MILA integration becomes a reality. In the medium term, the expectations for MILA’s role in Mexico are quite high.
What is the role of High Frequency Trading (HFT) in Mexican markets? In Mexico, HFT is mostly focused on statistical and spread arbitrage strategies, with other market making strategies playing a lesser role. These HFT strategies have contributed both liquidity and efficiency to the overall market structure, as well as facilitating the sharing of experience and best practice around HFT strategies and technology.
Rudolf Siebel, Managing Director of BVI Bundesverband Investment und Asset Management, shares the perspectives of German asset managers and their needs and goals for the coming year.
Technology and Trading Costs BVI represents the German investment fund and asset management industry, which manages €1.7 trillion in assets such as bonds, equities and derivatives. Trading is an issue dear to our hearts. In particular, we welcome the improvements in electronic trading over the past decade, especially those based on standards such as the FIX Protocol, which enable automation based on standardization. That is one of the reasons why we became part of the FIX community in September 2011. Costs of trading have certainly fallen over the past few years, particularly with regard to the costs charged by brokers and venues. Trading costs have also been implicitly lowered through reduced market impact. Our members sense that with electronic trading they can be much closer to the market and limit the loss of market value caused by latency in trading. Our members, however, have seen that the cost of support and analytics has not fallen. Some also believe that buy-side trading volume has declined while sell-side volume is on the increase.
Value through Innovation Having discussed issues of electronic trading within our industry, I think the increased ability to analyze market impact and trading costs has provided value. Over the past few years, our membership has seen value shift very quickly to better market access, especially through smarter routing technology. According to various studies, only about 65% of DAX turnover now takes place on the Deutsche Boerse, and for the FTSE 100, only 50% on the LSE. It is absolutely vital for our members to be able to access different liquidity pools, whether lit or dark. Smart algorithms have become a main issue, but not necessarily with a view to improving low latency. Our members are asset managers who base their decisions on the selection of securities and asset classes, not on squeezing out every nanosecond of latency. As a result, low latency trading is a secondary priority for BVI’s members, but smart order routing is obviously important given the large number of venues in the European market. At my latest count, there are more than 70 different trading venues, be they exchanges or other trading platforms.
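To illustrate why access to many venues matters once liquidity fragments like this, here is a toy smart order routing decision. The venue names, prices and the greedy best-price logic are assumptions for illustration, not how any member firm actually routes.

```python
# Toy smart order routing: fill a buy order by greedily taking the best
# ask across venues. Ignores fees, latency and queue position; all
# venue data below is invented for illustration.
def route_buy(order_qty, books):
    """books: {venue: (best_ask, ask_size)}. Returns a list of
    (venue, price, quantity) fills in best-price order."""
    fills = []
    remaining = order_qty
    for venue, (ask, size) in sorted(books.items(), key=lambda kv: kv[1][0]):
        if remaining <= 0:
            break
        take = min(remaining, size)
        fills.append((venue, ask, take))
        remaining -= take
    return fills

books = {"Xetra": (100.02, 800), "MTF-A": (100.01, 300), "MTF-B": (100.03, 2000)}
print(route_buy(1000, books))  # fills the best-priced venues first
```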
Volatility and Connectivity We are now in a market where there are no longer any safe havens among asset classes, and in times of high market volatility it is absolutely necessary to link your internal systems to outside trading platforms in order to be flexible and quick to market. German asset managers have yet to establish connections across all asset classes, and the FIX Protocol is very important as a basis for discussing the connectivity issue. Going forward, with Dodd-Frank and new regulation on the European side, connectivity with Central Counterparties (CCPs) will also be a big issue for 2013 and 2014. Connecting to all markets and asset classes electronically, as far as possible, and connecting to more CCPs will be the challenge for the next few years.
Raymond Russell, of the FIX Inter-Party Latency (FIXIPL) Working Group and Corvil, lays out the use cases for the FIX Inter-Party Latency standard and the functionality of Version 1.0.
Goals for FIXIPL
The principal goal of the Inter-Party Latency Working Group is to ensure interoperability between different latency monitoring vendors. Interoperability is essential because latency monitoring is vital to running a low-latency service; the people building systems need confidence that they can start with one vendor and still migrate to another. What we have seen through the proliferation of latency monitoring systems across the trading world, whether DMA providers, market data providers or trading desks, is that the problems in managing latency often fall between the cracks. Most firms have a good handle on latency in their own environment because they have engineered it well, but when they connect to a counterparty, it gets tricky.
A trader who sees a slowdown in response time will want to understand why they have missed trades or why their fill rates are low, but there are multiple places where that latency could have occurred. One place is in the exchange matching engine, which in some respects is unavoidable. If there is considerable interest and activity in a symbol at the same time, those orders will have to queue in the matching engine, purely as a result of market activity. The latency might also have occurred in the exchange gateway. It is common practice for exchanges to load balance across multiple gateways to accommodate high volumes, and you might have hit a slow gateway. Or the service provider you connect through may have oversubscribed their network, leaving you caught in cross traffic unrelated to trading. We have seen all these things happen, so the ability to see where the latency is occurring requires a consistent set of time stamps across the architecture.
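To make the point concrete, here is a minimal sketch of how a consistent set of time stamps lets you attribute latency to each segment of the path. The hop names and values are invented for illustration; real deployments would also need synchronized clocks (e.g. via PTP) at each measurement point.

```python
# Attribute end-to-end latency to path segments from per-hop time stamps.
def segment_latencies(stamps):
    """stamps: list of (hop_name, time_in_microseconds), in path order.
    Returns the latency contributed by each segment of the path."""
    return [(f"{a} -> {b}", t2 - t1)
            for (a, t1), (b, t2) in zip(stamps, stamps[1:])]

order_path = [
    ("trader_gateway_out",  1_000_000),   # microsecond timestamps (assumed)
    ("provider_network_in", 1_000_180),
    ("exchange_gateway_in", 1_000_430),
    ("matching_engine_ack", 1_002_430),   # queuing delay shows up here
]
for segment, latency in segment_latencies(order_path):
    print(f"{segment}: {latency} us")
```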
Most exchanges already employ latency monitoring in their own environment; inter-party latency and the sharing of time stamps, while less important within the exchange, enable them to work with their members to identify areas of latency. The benefits unlocked through inter-party latency are somewhat biased towards the end traders, but they also extend to brokers and market data providers, who receive better quality execution feeds and faster market data, respectively.
For exchanges, the need for latency transparency is becoming a standard requirement as latency has become a competitive differentiator. To the extent that exchanges are comfortable with their own infrastructure and are ready to compete on their latency, they will want to share their latency measurements with members. In my experience, venues and brokers are no longer as reticent to share their latency figures as they were before.
Version 1.0 Rollout
Much of the work on Version 1.0 involved deciding how to produce a standard that, on one hand, is simple enough to be easily implemented while, on the other, still covering all the basic use cases. Version 1.0, due out in December 2011, is clean and simple and emphasizes the core capability of publishing time stamps. We have agreed on the technical scope and it is now going through the formal review procedures required for standardization by FPL, including a public review. The other important step before the standard becomes real is to produce two independent implementations. A number of capabilities will be ready in a few months’ time, such as distribution through multicast and the ability to automatically group several measurements together across a trade; we will include these in the next version later next year.
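As a rough illustration of that core capability of publishing time stamps, the sketch below emits one timestamp observation per measurement point. The field names and JSON transport are invented for illustration and do not reproduce the actual FIXIPL 1.0 message layout.

```python
# Illustrative timestamp publication: each party emits an observation
# tagged with where it was measured and a correlation id, so either
# side can compute the latency between measurement points. This is a
# sketch, not the FIXIPL wire format.
import json
import time

def publish_timestamp(measurement_point, correlation_id):
    """Emit one timestamp observation as a JSON string."""
    record = {
        "point": measurement_point,   # where the measurement was taken
        "corr_id": correlation_id,    # ties observations to one order
        "ts_ns": time.time_ns(),      # wall-clock time in nanoseconds
    }
    return json.dumps(record)

# Two parties publishing stamps for the same order let either side
# measure the inter-party segment between their measurement points.
print(publish_timestamp("broker_gateway_out", "ORD-42"))
print(publish_timestamp("exchange_gateway_in", "ORD-42"))
```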
The Capital Markets Cooperative Research Centre (CMCRC)’s Alex Frino talks about his research over the past 18 months and his conclusions about the truth of high-frequency trading.
What inspired you to focus your research on High Frequency Trading (HFT)?
There is a very poor understanding of the impact of HFT on the marketplace. There is a lot of ill-informed opinion in circulation about the impact of HFT on price volatility and its contribution to liquidity. I wanted to provide some hard data to help markets move forward and inform sensible, evidence-based policy decisions.
There was also considerable interest in the idea of conducting HFT research from our regulator partners, including the FSA and ASIC.
What were your views on HFT at the outset of your research program?
When we first set about doing the research 18 months ago, I began by speaking to the investment management community to gather their views and insights into HFT and its impact on their trading. The feedback I got was overwhelmingly negative. One comment sums it up best – an investment manager said to me that “liquidity provided by the HFT community is like fog – you can see it, but when you reach out to grab it, it is not there.” So I began the program expecting to confirm these dominant views. To my surprise, we discovered that the realities of HFT are almost exactly the opposite of what the investment managers were telling me.
HFT liquidity has been described as ephemeral by many on the buy-side. What does your research suggest about the ability of the buy-side to interact with HFT liquidity?
We have done research with data from the LSE, ASX, SGX, NASDAQ and NYSE Euronext on exactly this subject. The exchanges furnished us with data that identifies when HFTs are present in the market place. We then looked at the make-take decision. HFTs make liquidity when they put up a quote that gets hit by someone on the other side of the trade. They take liquidity when they hit someone else’s quote. The data clearly showed that HFTs are net makers of liquidity.
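As an illustration of the make-take classification described here, the sketch below tallies a firm's maker and taker volume from flagged trade records. The record format and firm identifiers are assumptions for illustration, not the exchanges' actual data.

```python
# Net liquidity provision from make/take classification: a firm "makes"
# when its resting quote is hit, and "takes" when it hits another's quote.
def net_liquidity(trades, firm_id):
    """Each trade: (passive_firm, aggressive_firm, volume).
    Returns maker volume minus taker volume for firm_id."""
    made = taken = 0
    for passive_firm, aggressive_firm, volume in trades:
        if passive_firm == firm_id:       # firm's quote was hit: liquidity made
            made += volume
        elif aggressive_firm == firm_id:  # firm hit someone's quote: liquidity taken
            taken += volume
    return made - taken                   # positive means net maker of liquidity

trades = [("HFT1", "FUND", 1000), ("HFT1", "BANK", 500), ("BANK", "HFT1", 300)]
print(net_liquidity(trades, "HFT1"))  # 1500 made - 300 taken = 1200 net made
```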
Interestingly some of our data also included information about when firms are trading through co-located servers within the exchanges. This data too showed that co-lo HFT activity was also a net provider of liquidity in those markets.
Co-location is described by some as an ‘unfair advantage’. What is your take on that given your research into the area?
My view is that if the advantage is being put to good use in providing liquidity, then it is not being misused. That pool of co-located flow is providing liquidity that would not be there otherwise, so I cannot see how that is a negative for markets.
Many market participants – including recent widely-quoted comments by Andrew Haldane of the Bank of England – are critical of the speed and sophistication of markets generally, using HFT as their example. They argue the playing field is not level and that markets should be slowed to take away perceived unfair advantages. What is your view?
I was frankly amazed by Haldane’s suggestion that markets should be slowed [by introducing speed limits and resting periods]. What he is in effect suggesting is that we should take markets backwards by a decade. That is astonishing to me because I just do not see the arguments. Market participants who do not have the technology to compete with other players can easily access brokers with algorithmic trading engines to help them execute their trades. If you cannot or do not want to build the technology yourself, you can outsource it fairly cheaply and very efficiently.
From an HFT perspective, our research demonstrates emphatically that the liquidity they provide is real and other participants interact with it constantly, so I cannot see a problem there either.