Mizuho Securities’ Spyridon Mentzas discusses the status of the Japanese exchange merger and offers thoughts on how well the two systems will merge and the benefits investors can expect.
Compatibility

The merger of the Tokyo Stock Exchange (TSE) and the Osaka Securities Exchange (OSE) is not yet finalized, but it appears they will merge at the beginning of 2013, with the details still to be specified. The first impression is that they have nearly identical trading rules with some minor differences, such as the OSE trading until 3:10 while the TSE closes at 3:00. When the TSE decided to shorten the lunch break in November, the OSE did the same. When one of the exchanges (usually the TSE) changes its rules, the other moves in tandem: changing tick sizes, for example. If the merger does go ahead, it is likely that they will use the TSE’s cash system, arrowhead, and the OSE’s J-GATE for derivatives. Running these existing systems side by side will reduce costs, because they will not have to maintain two systems for each asset class.
Further Industry Consolidation

The ECNs in the US enjoyed technological superiority over the classic exchanges: NYSE’s latency was significantly higher than Arca’s. That alone would have been reason enough for the TSE to consider buying a PTS, but with arrowhead’s current latency of less than 2 milliseconds (and another upgrade in the next few months targeting less than a millisecond), simply buying a PTS would not give them a noticeable advantage, because the TSE and OSE are on par with the PTSs. The reason the PTSs are increasing their market share is that, unlike in the US and UK, where Reg NMS and MiFID push trading towards the venue with the best price, in Japan the PTSs draw volume through decimal pricing and smaller tick sizes than the incumbents.
For example, Mizuho Financial Group might trade on the TSE at 105 yen bid, 106 yen offer. That one-yen spread is close to 100 basis points, almost one percent, whereas a PTS can quote in 0.1 yen increments. This is a major incentive for investors to buy and sell on the PTSs, whose smaller increments reduce market impact and trading costs. From the beginning, the regulators have not been overly concerned with the PTSs deciding to trade in decimals with 0.1 yen ticks. It was always up to the PTSs to decide, and the TSE could do the same. If anything, I think the new exchange would rather reduce its tick sizes than merge again.
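To make the arithmetic concrete, here is a minimal sketch (illustrative only; the quotes are the ones cited above, not live data, and the helper name is invented) of the minimum quoted spread cost under a 1 yen tick versus a 0.1 yen tick:

```python
def spread_bps(bid: float, ask: float) -> float:
    """Quoted spread in basis points, measured against the midpoint."""
    mid = (bid + ask) / 2
    return (ask - bid) / mid * 10_000

# TSE-style 1 yen tick: 105 bid / 106 offer
print(f"1.0 yen tick: {spread_bps(105.0, 106.0):.1f} bps")  # ~94.8 bps

# PTS-style 0.1 yen tick: 105.0 bid / 105.1 offer
print(f"0.1 yen tick: {spread_bps(105.0, 105.1):.1f} bps")  # ~9.5 bps
```

A tenfold reduction in the minimum increment cuts the quoted spread cost by roughly a factor of ten, which is the incentive described above.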
However, not all participants would be happy to see new tick sizes; some of the proprietary houses, or small firms that trade with retail, would find it costly to alter their downstream systems to handle decimal prices.
Smaller ticks would also fragment liquidity across price levels. The bids and offers on the TSE are often thick, with something like 50 billion shares sitting on the bid side, so with 0.1 yen ticks the average order size might fall to 3 million or 1 million shares. Traders who want to buy a large lot will have to scroll up and down the book to work out how far prices must move to absorb the available liquidity. For traditional long-only traders, I think this might mean an increased scattering of liquidity. There is sufficient liquidity in the market at present, even for stocks trading at a low price; there are market makers trying to make 1% during the day. If smaller tick sizes are introduced, that liquidity will likely be scattered or disappear.
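The “walking the book” problem can be illustrated with a short sketch (the order books and the sweep_cost helper below are hypothetical, chosen only to show the effect of the tick change):

```python
def sweep_cost(levels: list[tuple[float, int]], target_qty: int) -> tuple[float, float]:
    """Return (worst price reached, average fill price) for a marketable buy.

    `levels` is the ask side as (price, size) pairs, best offer first.
    """
    filled, cost = 0, 0.0
    for price, size in levels:
        take = min(size, target_qty - filled)
        filled += take
        cost += take * price
        if filled == target_qty:
            return price, cost / filled
    raise ValueError("not enough displayed liquidity to fill the order")

# With 1 yen ticks, depth concentrates on a few levels...
coarse = [(106.0, 5_000_000), (107.0, 5_000_000)]
# ...with 0.1 yen ticks, the same depth scatters across many levels.
fine = [(105.0 + 0.1 * i, 500_000) for i in range(1, 21)]

print(sweep_cost(coarse, 6_000_000))  # worst price 107.0, average ~106.17
print(sweep_cost(fine, 6_000_000))    # sweeps twelve 0.1 yen levels, worst ~106.2
```

The same quantity gets done in both books, but with fine ticks the trader must scan many more levels to see where the order will finish.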
NYSE Euronext’s Joe Mecane responds to calls for pre-trade certification of algos and describes NYSE’s efforts to manage risk.
What is the exchanges’ burden in terms of regulating High Frequency Trading (HFT)?
There are multiple components to that answer. There is not necessarily any regulation specific to High Frequency Traders as a separate type of participant, but clearly a lot of regulation applies to HFT because of the nature of the business. Sponsored access, for example, is an area that impacts high frequency traders who might not be members of exchanges. Recently, there has been a lot of press and discussion around regulating algorithms, and that has broad application not only to HFT but also to customer-type algorithms, developed by firms that deploy them to their customers or use them to execute customer orders.
At the same time, high frequency traders develop proprietary algorithms of their own. In all of these cases, a general supervisory responsibility falls on the participant to ensure that their algorithms are tested and working properly before they are actually deployed into the market. There has been discussion around whether that should become a more formal, stringent rule, but it is complicated because everyone has some level of responsibility and oversight with regard to developing and deploying algorithms. One thing that we have talked about is creating a ‘best practices’ standard for people to follow, as there have been cases of firms being fined for releasing algorithms that had damaging effects on the market.
When does the exchange’s supervisory onus come into play? Should exchanges evaluate algorithms in real time as they are trading, or is it something that should be evaluated beforehand?
The problem with evaluating algorithms pre-trade is that it is not practical to create an infrastructure that would certify an algorithm before it is deployed. While it sounds good in theory, the reality is that the regulators do not currently have the resources and skill sets to sit down and review lines of code. What it means instead is that firms have policies and procedures around how they develop, test and deploy algorithms, and those policies and procedures can then be reviewed by the regulators. Demanding regulatory sign-off on an algorithm before it is deployed is simply not workable.
What would you recommend for firms as best practice for testing algorithms before deploying them?
It is up to each individual firm to outline and deploy the practices it thinks are most prudent. One thing the industry could do is develop best practices or standards for algo development that firms could adhere to. Some of the trader groups and technology trade groups may have those types of standards already, or could put them together quite easily. It is not our place as an exchange to try to define those standards, but certainly most firms have them, and the industry as a whole could assemble them from a technology development and deployment perspective.
What are some actions NYSE takes to supervise algorithms as they are deployed?
NYSE has a number of elements in place to help oversee algorithms and mitigate their risks. First, on our markets we have a number of circuit breaker-type mechanisms to detect what might be unintended behavior, ranging from Liquidity Replenishment Points to the circuit breakers the SEC has mandated; we also have market order collars and limit order collars on Arca. These limits are designed to mitigate big price moves when an algorithm may be acting in an unintended manner. Second, we have safeguards that monitor message traffic, so we have the ability to moderate or throttle traffic if we start to see excessive volumes coming down a particular connection. Third, and this is market-wide, there are clearly defined rules in place to deal with erroneous trades that could result from an algorithm gone bad. Finally, we have outsourced the market regulation component of our responsibilities to the Financial Industry Regulatory Authority (FINRA). As part of FINRA’s reviews, they look at the supervisory procedures and oversight that firms apply to the development, testing and deployment of their algorithms.
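As an illustration of the throttling idea only (this is a generic sketch, not NYSE’s implementation; the class name and parameters are invented for the example), a per-connection message budget is often modeled as a token bucket:

```python
import time

class MessageThrottle:
    """Token-bucket limit of the kind a gateway might apply per connection."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec      # sustained messages allowed per second
        self.capacity = burst         # short bursts above the sustained rate
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Admit one message if the connection is within its budget."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # excessive traffic: queue, slow, or reject the message

throttle = MessageThrottle(rate_per_sec=1000, burst=200)
accepted = sum(throttle.allow() for _ in range(1_000))
print(f"{accepted} of 1000 back-to-back messages admitted")  # roughly the burst allowance
```

A gateway holding one such throttle per connection passes normal traffic through unchanged while slowing a runaway algorithm’s message storm.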
In this article, Nomura’s Ben Springett provides a brief overview of some of the key issues currently impacting European market structure, and shares his own thoughts on some of the changes likely to occur in Europe this year.
European market structure has been, is, and will continue to be in a state of change for the foreseeable future. Whilst European Commission regulation has been a significant catalyst in this, the industry itself is now looking to progress issues at a faster rate than the expected regulatory change. As such, we are seeing increased interest in self-regulation within the community, particularly in the areas of post-trade reporting and efforts to provide a consolidated tape. All market participants are active in this, but it is not unreasonable to assume that it will be down to the broker-dealers to drive any change, as they are typically the ones with the resources to invest in the process.
Market share amongst trading venues can be measured in many different ways, and people can be forgiven for choosing the one that paints their own venue in the best light. The accompanying two charts (Charts 1 and 2) show the steady decline of market share amongst the key primary exchanges, to the benefit of the MTF venues, although total volume levels remain significantly lower than in the pre-credit-crunch days. When comparing primary exchange volumes with MTFs, it is necessary to bear in mind that the primaries are only just starting to compete in each other’s markets, and as such the pan-European MTFs have had more blue-chip names with which to capture their market share. This is set to change in 2010: Euronext launched ARCA last year, Xetra have launched their International Market (XIM), and the London Stock Exchange (LSE) have just completed the acquisition of a majority (51%) stake in Turquoise.
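To see how the choice of measure matters, consider a minimal sketch (the venues and figures below are invented purely for illustration): ranking by shares traded and ranking by value traded can put different venues on top.

```python
venues = {
    # venue: (shares traded, average trade price)
    "Primary": (400_000_000, 12.0),
    "MTF": (500_000_000, 6.0),
}

total_shares = sum(s for s, _ in venues.values())
total_value = sum(s * p for s, p in venues.values())

for name, (shares, price) in venues.items():
    by_shares = shares / total_shares * 100
    by_value = shares * price / total_value * 100
    print(f"{name}: {by_shares:.0f}% by shares, {by_value:.0f}% by value")

# Primary: 44% by shares, 62% by value
# MTF: 56% by shares, 38% by value
```

The venue trading more shares in cheaper names leads on one measure, while the venue trading fewer but more valuable shares leads on the other; each can truthfully claim the larger share.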
MiFID did not mandate a market-wide consolidated tape, as opposed to the NBBO (National Best Bid and Offer) provided under Reg NMS, and the lack thereof is one of the key concerns raised by the buy-side in a range of forums. There is, however, no significant issue with data aggregation, which is offered by a number of key providers such as Bloomberg and Reuters, in addition to some strong fragmentation-analysis products available to the market (Fidessa Fragulator, BATS Europe).
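What a consolidated view would compute is straightforward in principle. Here is a minimal sketch (the venues and quotes are hypothetical, and consolidated_bbo is an invented helper) of a consolidated best bid and offer across fragmented European venues, analogous to the NBBO:

```python
from dataclasses import dataclass

@dataclass
class Quote:
    venue: str
    bid: float
    ask: float

def consolidated_bbo(quotes: list[Quote]) -> tuple[Quote, Quote]:
    """Best bid and best offer across all venues' quotes."""
    best_bid = max(quotes, key=lambda q: q.bid)
    best_ask = min(quotes, key=lambda q: q.ask)
    return best_bid, best_ask

quotes = [
    Quote("LSE", 10.41, 10.44),
    Quote("Chi-X", 10.42, 10.45),
    Quote("Turquoise", 10.40, 10.43),
]
bid, ask = consolidated_bbo(quotes)
print(f"Consolidated BBO: {bid.bid} ({bid.venue}) / {ask.ask} ({ask.venue})")
# Consolidated BBO: 10.42 (Chi-X) / 10.43 (Turquoise)
```

The difficulty the buy-side raises is not the computation but the commercial and governance question of who aggregates, normalizes and distributes the data, and at what cost.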
At a time when cost bases are under increasing pressure, attention has been drawn to the closed conditions inherent in market data (the LSE has sole distribution rights over LSE data, Deutsche Boerse over Deutsche Boerse data, and so on), and as the number of venues from which data is required increases, so will the scrutiny placed on the associated charges. Even in an environment with considerable focus on competition, competitive forces cannot work to reduce these fees, because each venue is the sole source of its own data; that leaves regulation as the only option, an issue that was again addressed under Reg NMS in the US.