Capital Group’s Brian Lees is driving efforts to ask more questions of brokers, and for more data on where an order is shown before it executes, but can the buy-side handle the resulting deluge?
The current work you are doing on venue reporting analysis
Our first push was simply to collect information about ‘where’ we were executing and a little about ‘how’ we were executing, namely, did we post or did we take liquidity. Having done that, the question was where do we go from there? The topic of requesting more data on where we didn’t execute, and what order types were used, started to be raised by some representatives on the FPL Americas Buy-Side Working Group. Some participants had already started down this road with brokers, asking post-trade for information about where the algorithms sprayed their orders, what types of orders were placed, and which exchanges they were placed on. So that’s where the conversation began, and that’s why we reached out to Jeff Alexander and Linda Giordano, because Barclays had already spearheaded this conversation.
What we are looking to achieve, either in real time or post-trade, is a standardised format for brokers to tell us how our order interacted with the market: when the order was placed, what order types were used, where it was placed in the markets, and whether or not we got hits. The concern is not so much whether we can get the data, because if we sign enough non-disclosure agreements we can get the information from the brokers. Some brokers worry about that information getting out and somebody reverse-engineering their algorithms, but from the buy-side perspective, I think the biggest concern is whether we can manage the volume of data we would get.
The resources to store and analyse data and make good use of it
With the original data we were getting, on where the execution took place, we talked a lot about this with smaller firms who were using TCA vendors to help them analyse the information. If we went a step further, the brokers would not want us sending that data out to TCA firms, because it reveals how their algorithms behave. I was in New York several weeks ago and took the opportunity to meet up with Jeff and Linda. We invited Jeff to join one of our conference calls for the buy-side committee, which he did, and he talked about what they have been proposing. He showed proposals both for the real-time collection of data via FIX messages, actually proposing a whole new FIX message to be created for this purpose, and, alternatively, for a standardised format for collecting the information post-trade which, as a spreadsheet, would tell us what we want to see. We’re trying to standardise how you ask for the data and what format it will be in, by creating best practices for how to get the data from the brokers. That way the brokers don’t have to keep coming up with a different format for every client that asks. The best practices do specify that ISO MIC codes would be the standard for identifying the exchange you executed on, but we said nothing about what you should do with the data once you get it.
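To make the idea concrete, the sketch below shows what one row of such a standardised post-trade venue report might look like. The field names and example values are illustrative assumptions, not the working group’s actual specification; only the use of ISO 10383 MIC codes for venue identification comes from the discussion above.

```python
from dataclasses import dataclass

@dataclass
class VenueReportRow:
    """One child-order event in a hypothetical post-trade venue report."""
    parent_order_id: str   # buy-side order the child order belongs to
    venue_mic: str         # ISO 10383 MIC of the venue the order was sent to
    order_type: str        # e.g. "LIMIT", "PEGGED" (illustrative values)
    posted_or_taken: str   # "POSTED" (added liquidity) or "TOOK"
    routed_qty: int        # shares routed to the venue
    filled_qty: int        # shares actually executed (0 = shown but not hit)
    timestamp: str         # event time, ISO 8601

# Example row: 500 shares posted on a venue, of which only 200 were hit.
row = VenueReportRow("ORD-123", "XNAS", "LIMIT", "POSTED",
                     500, 200, "2011-06-01T14:30:05.123Z")
print(row)
```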
Exchange involvement in the conversation
We did talk to some exchanges when we were first trying to standardise how to identify them. When we first standardised on MIC codes, the codes did not cover all the exchanges, because not all of them had registered with the ISO organisation, and we wanted them to.
We had a little trouble differentiating the dark order books from the lit order books on exchanges that operate both. These exchanges consider themselves a hybrid book, and they didn’t want to be known as two different things, and we didn’t have a way to differentiate the dark and the lit flow without introducing yet another FIX tag. That back and forth fed into the registration authority’s decision to introduce the market segment concept, under which an exchange can be defined with child MIC codes that differentiate the different segments of its market. We’re beginning conversations with exchanges about this topic, but that’s the extent of our discussions with them.
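As an illustration of the market segment concept, the snippet below models a hypothetical hybrid exchange with one operating MIC and two child segment MICs, one lit and one dark. The MIC values "XHYB", "XHYL" and "XHYD" are invented for the example and are not registered ISO 10383 codes.

```python
# Hypothetical operating MIC mapped to its segment MICs (invented codes).
OPERATING_MIC = "XHYB"          # the hybrid exchange itself
SEGMENT_MICS = {
    "XHYL": "lit order book",   # displayed liquidity
    "XHYD": "dark order book",  # non-displayed liquidity
}

def describe(mic: str) -> str:
    """Resolve a fill's MIC to a human-readable venue description."""
    if mic == OPERATING_MIC:
        return "hybrid exchange (segment unknown)"
    if mic in SEGMENT_MICS:
        return f"{OPERATING_MIC} {SEGMENT_MICS[mic]}"
    return "unknown venue"

print(describe("XHYD"))  # -> "XHYB dark order book"
```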
Broker willingness to participate in the process
For the first half of this, just getting the information about where you executed, the brokers didn’t have any problem, because it’s public record once it executes. When we started talking about the more detailed reporting, they did raise concerns about the information being sent out, and asked for NDAs so that you, as a client, will not send the data out to a third party. But because other firms had already started down this road, we talked about the purpose of this, which is just to have someone looking over their shoulder to make sure they are acting in the best interest of the client and not potentially favouring rebates over best execution; they can’t really argue with that logic. Somebody should have some oversight as to whether or not the right decisions are being made.
CLSA’s Global Head of Trading and Execution, Andrew Maynard, and COO of Trading and Execution, Joakim Axelsson, delve into the nuts and bolts of setting up and running Commission Sharing Agreements.
Commission Sharing Agreements (CSAs) between the buy- and sell-side are, in concept, an invaluable tool as they facilitate the unbundling process, thereby freeing clients’ trading desks to seek best execution.
While the regulatory frameworks across Asia have not developed to the same extent with respect to unbundling and CSAs as they have in Europe and the USA, CSAs are becoming increasingly common in the Asia-Pacific markets, including Australia and Japan. Asian funds seeking to attract international money for management in Asia need to demonstrate to their clients that they are implementing best practices, and both CSAs and unbundling are integral to that process. So what we are now seeing is a broad acceptance of CSAs in markets which do not necessarily regulate unbundling and best execution. With money becoming more mobile globally, the growth of CSAs in Asia is inevitable, as they are a mainstay of the global investment process. The necessary regulatory frameworks will follow – and the challenges to both the buy- and sell-side of administering CSAs will continue.
In Asia, we have observed first-hand the evolution of the CSA business, and over time as the penetration has grown, both the benefits and the challenges of implementing and managing CSAs have become more apparent.
The trading processes within the industry are evolving rapidly. There is far more focus on execution quality from both buy-side and sell-side traders. Combining this increased focus on execution quality with the technology advances happening in parallel allows far greater transparency about the quality of a trade in real time. The result is a very different level of engagement between buy- and sell-side traders about their trades, which is very healthy. In this respect, CSAs have achieved the objective of ensuring that the ultimate end investor receives an enhanced quality of trading, i.e., best execution.
Clients have implemented their best execution processes quite differently. At one extreme are those clients who do not use CSAs and have completely detached their trading desks from their investment management teams, to the point where the trading desks are not allowed to know how the various fund managers rank each broker’s research. To these clients, best execution is all that matters. While this model certainly ensures the traders have freedom to seek best execution, it can have unintended consequences, as brokers who do not receive payment for their research and advisory services over a period of time will naturally have to reduce their service levels to these clients. Taken to the extreme, an argument can be made that brokers should not provide any research and advisory services to such clients at all.
More common are those clients who implement systems and processes to value each of the services they receive from their brokers. These clients manage their payment process very carefully to ensure that they pay the correct amount for services received and any commission left over is ‘jump ball’ based upon pure execution quality at a lower commission rate.
Finally, there are those clients who have yet to formalise their broker ranking, valuation, or execution processes. These clients, while more informal, can be equally demanding on best execution; theirs is simply another way to achieve a similar result.
From a broker’s perspective each of these client processes has to be catered for and of course that means greater administration, more operational complexity and therefore more costs for brokers. While global commissions are falling, brokerage expenses are indirectly being increased by greater regulation. In some instances another party is being introduced to the chain as CSA aggregators have spotted an opportunity to inject themselves into the process flow – again at a cost.
One of the issues facing the sell-side is that in receiving a CSA cheque, the broker is not actually doing the trading, and no broker likes to receive CSA payments instead of the client’s trading flow. The sell-side wants to be involved in the trade: not only does it deepen the relationship with clients, but the natural liquidity also provides opportunities for further flow and crossing. When an account pays a CSA cheque in lieu of commission, we question the reason. Does it mean that the buy-side trader doesn’t think we offer best execution in that country or in that sector? Either way, it can be read as a signal that we need to improve our execution capabilities. This dynamic also ensures that brokers offer all the various avenues of execution: every different pool of liquidity, every different connectivity vendor, and so on.
Brokers that are not in the top tier of execution, those that cannot afford the regional infrastructure and technology platform necessary to compete, or those brokers that are relying on CSA cheques alone, are operating in a very dangerous space – particularly in these times of lower liquidity.
The buy-side trader has to focus more on the implementation and metrics of the trade than the research payment and therefore allocates trades accordingly to ensure best execution. This is why brokerages which have historically been known for their research product are now forced to make a decision. Do they become a ‘research only’ house receiving cheques? Or do they compete in the execution space?
Major brokerages on the sell-side with both execution and research offerings have had to make a range of necessary investments over recent years to remain at the top of the execution brokerage list. On the other side of the equation, being a CSA broker administering regular payments for clients requires a significant infrastructure spend to maintain a professional service; however these costs are somewhat offset by the additional flow.
J.P. Morgan’s Frank Troise sat down with FIXGlobal to chart the expansion of electronic trading tools available to the buy-side and to point out which new tools will make the difference in the months to come.
In what way has the trader’s desktop improved?
Over the last few years, the biggest improvements have been the inclusion of more multi-asset class execution capabilities and of additional analytics. Desktop trading platforms that support equities, options, futures and FX trading, with the ability to track all of those orders in the market and give aggregated profit and loss, are much more prevalent. More trader desktops incorporate pre-trade analytics, such as market impact estimates, as well as post-trade execution information.
What do your clients say they want most from their analytics?
Clients want a combination of real-time and post-trade analytics. Prior to starting the trade, clients want tools that help their investment decision process. Once an investment decision is made, pre-trade market impact and trade scheduling tools can help traders develop an implementation game plan. Through the course of the trade, clients like to see real-time analytics that can help them improve the performance of their trade; for example, abnormalities in volatilities and volumes. Post-trade, clients want performance reports measuring actual execution costs against various benchmarks on a daily, monthly, and quarterly basis.
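As a flavour of what a pre-trade market impact estimate involves, the sketch below implements the widely used square-root impact model. This is a generic illustration of the technique, not J.P. Morgan’s actual model; the coefficient and the example inputs are assumptions.

```python
import math

def sqrt_impact_bps(order_shares: float, adv_shares: float,
                    daily_vol_bps: float, coeff: float = 1.0) -> float:
    """Square-root market impact model: expected cost scales with daily
    volatility and the square root of order size as a fraction of average
    daily volume. `coeff` is an empirically fitted constant (assumed 1.0)."""
    participation = order_shares / adv_shares
    return coeff * daily_vol_bps * math.sqrt(participation)

# Example: a 500k-share order in a name trading 10m shares/day
# with 150 bps daily volatility (illustrative numbers).
print(f"{sqrt_impact_bps(500_000, 10_000_000, 150):.1f} bps")  # ~33.5 bps
```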
How does putting so much technology in the hands of the trader change the role of the broker? How does the broker add value in addition to the electronic tools?
In the electronic broker business, our value added comes through our role as execution consultant and our ability to educate clients on the use of pre- and post-trade analytics and execution tools. I look at the roles and responsibilities of the people on our electronic client trading desk as helping clients implement their investment ideas. When a client has a trade to execute, it is up to our team to educate that client on the tools they can use to put together a plan, present them with the tools to execute the trade and, while they are executing, provide information that can be used to improve the plan throughout the execution period.
After the trade is executed, we work with clients to evaluate how well they did against their plan and to help them improve their trading process in the future. We focus on continuously creating and enhancing client products. Our goal is to make it easy for clients to use analytics and execution tools to achieve best execution; the better we understand a client’s goals and objectives, the more we can collaborate on custom solutions and training. Electronic trading products are very different from traditional equities execution capabilities. A key differentiating characteristic is that the products reside at the client site and are used there by the client. In the traditional model, virtually no broker technology-oriented product existed at the client site.
The communication mechanism for order delivery was the telephone, and execution occurred in the broker/dealer environment. Electronic brokering, by contrast, is a very intrusive business: our products exist in the client’s technology infrastructure. This has changed the core competencies of brokerage firms. We now have to be experts at delivering products into the client site, which has implications for training and technology integration.
How does the electronic broker assist clients in locating liquidity, either through tools or the consulting process?
Liquidity has been and continues to be a top priority for clients. They have always come to brokers to find liquidity in as ‘quiet’ a way as possible. In today’s landscape, much of that liquidity exists in electronic form and is fragmented. The result has been a proliferation of tools (e.g., algos, routers) that help clients navigate liquidity pools to logically consolidate the fragmented liquidity. To assist in that process we have created a pool to concentrate order flow across various trading desks, retail segments of the broader J.P. Morgan Chase organization, transition management flow, and third party broker dealer flow. I refer to it as a centralized electronic merchandise hub.
FIX has grown rapidly from its historic base of cash equity products and pre-trade and trade business process support, to a point where it now supports a broad range of product types: Fixed Income, Foreign Exchange, Equities and Derivatives. This organic growth has been driven by the business benefits of FIX, and by a dynamic user and vendor community. JP Morgan’s Andrew Parry explores the technical aspects that will take FIX to the next level, in particular in relation to global derivatives.
The success of FIX to date rests on a simple premise: it is useful, provides valued business outcomes, and has an active user, vendor, and consultancy community, rather than trying to be the most beautiful possible technical solution.
To expand FIX further we need to continue the lines of work opened up in FIX 5.0 so that we can continue to provide valued business outcomes with what is, by now, a far larger model in terms of data and function support than when FIX began.
These lines of work, such as correctness, machine-readable business rules and process rules – discussed below – should improve the core of the FIX model and increase its ease of use, both for software tool makers and for our end users.
Companies such as Google and Apple provide a good example of this approach. The end user of Google maps does not have to understand the technology behind it, whereas application developers are provided with a Google Maps API. We should approach the FIX model in the same spirit.
An analysis: Service Packs
FIX 5.0 onwards supports a service pack model, which allows minor changes to be added within a matter of months. This approach has particular value for the derivatives industry, which has a high rate of business-driven change and increasing regulatory requirements.
By adopting the service pack model, we promote standardised additions instead of the customised user extensions that are commonly required in the absence of a timely way to make contributions. We will go on to look at how the service pack model has been used to add a wide range of features to support derivatives in FIX 5.0 onwards.
Building Blocks
The service pack approach has been used to provide business-correct building blocks which can be re-used. Take an example from FIX 5.0 SP2 (Service Pack 2), which provided timely support for the business requirements and regulatory demands of credit derivative contract specification standardisation and central clearing in America and Europe.
EP83 – Enhancements for Credit Default Swaps Clearing. “The following new fields were added to the Instrument Block … AttachmentPoint (1457), DetachmentPoint (1458)”.
These fields support CDS index tranches, which give investors the opportunity to take on exposure to specific segments of the CDS index default loss distribution. For example, the “0% to 3%” tranche, with attachment point 0% and detachment point 3%, is the lowest tranche, known as the equity tranche, and absorbs the first 3% of losses on the index due to defaults.
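To make the tranche mechanics concrete, the sketch below computes the fraction of a tranche’s notional wiped out by index losses, given its attachment and detachment points. This is standard tranche arithmetic rather than anything specified by EP83 itself.

```python
def tranche_loss_fraction(index_loss: float, attachment: float,
                          detachment: float) -> float:
    """Fraction of the tranche notional lost when the index has lost
    `index_loss` (all values as fractions, e.g. 0.03 for 3%)."""
    width = detachment - attachment
    absorbed = min(max(index_loss - attachment, 0.0), width)
    return absorbed / width

# Equity tranche (0% to 3%): a 2% index loss wipes out 2/3 of the tranche.
print(tranche_loss_fraction(0.02, 0.00, 0.03))  # 0.666...
# The same 2% loss leaves a 3%-to-6% tranche untouched.
print(tranche_loss_fraction(0.02, 0.03, 0.06))  # 0.0
```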
Correctness
The percentage data type used for AttachmentPoint and DetachmentPoint should only allow values between 0% and 100% (inclusive) to be business correct: the attachment and detachment points are always between 0% and 100% of the notional amount. This is data type correctness.
Where a tranche is being modelled, both AttachmentPoint and DetachmentPoint should be present, otherwise the tranche is unbounded. This is model structure correctness.
AttachmentPoint should always be less than DetachmentPoint, otherwise the tranche would have zero or negative width. This is business rule correctness.
Where we have standardised tranches, such as “0% to 3%”, we should have a way of tying back to a reference data source to confirm that this is indeed one of the standard tranches. This is reference data correctness.
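A minimal sketch of how these four correctness layers might be enforced by a message validator. The function and the reference data set are hypothetical; FIX itself does not prescribe this code, only the field semantics it checks.

```python
# Hypothetical reference data: standardised index tranches (as fractions).
STANDARD_TRANCHES = {(0.00, 0.03), (0.03, 0.06), (0.06, 0.09)}

def validate_tranche(attachment, detachment):
    """Apply the four correctness layers to AttachmentPoint/DetachmentPoint."""
    # Model structure correctness: both points must be present.
    if attachment is None or detachment is None:
        raise ValueError("tranche is unbounded: both points required")
    # Data type correctness: values must lie in [0%, 100%].
    for point in (attachment, detachment):
        if not 0.0 <= point <= 1.0:
            raise ValueError(f"point {point} outside [0, 1]")
    # Business rule correctness: attachment strictly below detachment.
    if attachment >= detachment:
        raise ValueError("tranche has zero or negative width")
    # Reference data correctness: must be a known standard tranche.
    if (attachment, detachment) not in STANDARD_TRANCHES:
        raise ValueError("not a standardised tranche")

validate_tranche(0.00, 0.03)  # passes: the equity tranche
```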
Bank of America Merrill Lynch’s James Wardle takes a look at adverse selection in public dark pools.
Dark pools provide sources of non-displayed liquidity, facilitating anonymous matching between counterparties, which helps reduce market impact costs and minimise information leakage. Market share of pan-European dark volume has risen dramatically over the past few years, hitting a year high of 2.5% in May 2010 (see Figure 1). As executed volumes in the dark continue to rise and high-frequency volumes increase, uncertainty over fill quality is at the front of everyone’s mind: at what cost does accessing dark liquidity come?
High-frequency (HF) participants (e.g. hedge funds, market makers, etc.) have an investment horizon that is typically much shorter than that of the traditional long-only institution. It is the coming together of these two distinct flows that exacerbates the occurrence of adverse selection. HF traders and other market participants use short-term alpha forecast models to try to execute opportunistically at temporary lows for buys and highs for sells, which means the opposing counterparty may be liable to early execution at local price maxima (minima) for buys (sells) and thus becomes the victim of adverse selection – see Figure 2.
Adverse selection can also arise from gaming. This is where the presence of a large block is detected in the dark by a number of ping orders (small orders looking for size), after which the informed trader waits for a temporary adverse price spike before they send a large order to consume the liquidity found in the dark. This results in the informed trader obtaining size at a favourable price to them at the expense of the uninformed trader who again becomes the victim of adverse selection. Across large orders the cost of adverse selection can add up and seriously damage returns.
Post-trade, adverse selection can be identified in two main ways: measuring the performance of the fill against short-term price movements, and looking for reversion patterns post-fill. Short-term price movements are well captured by the Time-Weighted Average Mid-price (TWAM) measure, which takes the average mid-price T seconds before the fill and T seconds after the fill. Varying the TWAM time-frame helps identify any executions that may have occurred at temporary price spikes. A positive return of the fill price against TWAM implies we filled at a price better than short-term price movements (positive selection), and a negative return implies we filled at a worse price (negative, or adverse, selection).
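A minimal sketch of the TWAM calculation described above. The sampling scheme (one mid-price observation per second) and the sign convention are assumptions for illustration.

```python
def twam(mid_prices, fill_index, T):
    """Average mid-price over the window from T seconds before the fill
    to T seconds after it; `mid_prices` is sampled once per second."""
    window = mid_prices[max(0, fill_index - T): fill_index + T + 1]
    return sum(window) / len(window)

def selection_bps(fill_price, side, twam_price):
    """Fill performance vs TWAM in basis points; positive = positive
    selection, negative = adverse selection."""
    sign = 1 if side == "buy" else -1
    return sign * (twam_price - fill_price) / twam_price * 1e4

# A buy filled at 100.05 against a 30-second TWAM of 100.00 is
# ~5 bps adversely selected (filled at a temporary spike).
mids = [100.00] * 61
print(selection_bps(100.05, "buy", twam(mids, 30, 30)))  # ~-5.0 bps
```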
Quod Financial CEO Ali Pichvai advocates a re-examination of speed relative to risk.
The oversimplified debate on latency, which states that ‘trading is all about speed’, does not represent the true situation. Latency is primarily a consequence of the market participant’s business model and goals: a liquidity provider sees latency competitiveness as vital, whilst a price taker considers it of less importance in the overall list of success factors. This article focuses on processing efficiency, given that distance and co-location have long been debated.
The processing efficiency is determined by:
*Number of processes:
The number of processes, and the time an instruction spends in each process, gives a good measure of latency. As a general rule of thumb, the fewer the processes, the lower the latency. An arbitrage system will most likely consist of as few processes as possible, with a limited objective. For instance, a single-instrument arbitrage between two exchanges can be built around three processes: two market gateways and one arbitrage calculator/order generator. An agency broker system will host more processes, with pre-trade risk management, order management and routing, intelligence for dealing with multi-listing, and the gateway as the minimum. The drive for latency reduction has sometimes come at the expense of critical processing; for instance, in pursuit of attracting HFT houses, some brokerage houses provide naked direct market access, which removes pre-trade risk management from the processing chain. An initial conclusion is that it is very hard to reconcile a simple, limited-in-scope liquidity taker system with more onerous price taker systems, as the sketch below illustrates.
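The sketch below illustrates the rule of thumb by summing per-process latencies for the two example systems. The stage names follow the article; the microsecond figures are invented purely for illustration.

```python
# Per-process latencies in microseconds (illustrative figures only).
ARBITRAGE_SYSTEM = {
    "market_gateway_a": 5,
    "arbitrage_calculator": 3,
    "market_gateway_b": 5,
}
AGENCY_BROKER_SYSTEM = {
    "pre_trade_risk": 20,
    "order_management_and_routing": 25,
    "multi_listing_intelligence": 15,
    "market_gateway": 5,
}

def tick_to_trade_us(stages: dict) -> int:
    """Naive tick-to-trade latency: the sum of each process's latency,
    ignoring queuing and network hops between processes."""
    return sum(stages.values())

print(tick_to_trade_us(ARBITRAGE_SYSTEM))      # 13 us across 3 processes
print(tick_to_trade_us(AGENCY_BROKER_SYSTEM))  # 65 us across 4 processes
```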
*Process flow efficiency:
This is where the flow between different processing points is as efficient as possible, with minimal loops between processes, waiting time and bottlenecks. It also takes a comprehensive view of the architecture, spanning the network and the application.
*Single process efficiency:
Two important areas must be reviewed:
There is an on-going debate about the best language for trading applications. On one side are the Java/.NET proponents, who cite the ease of coding in, and maintaining, a high-level development language (at the expense of needing to re-engineer large parts of the Java JVMs). On the other side are the C++ evangelists, who point to better control of resources, such as persistence, I/O and physical access to the different hardware devices, as delivering better performance. The migration of major exchanges and trading applications away from Java to C++ seems to indicate that the second camp is in the ascendancy. Beyond the coding language, building good parallelism into the processing of information within the same component, also called multithreading, has been a critical element in increasing capacity and reducing overall system latency (but not unit latency), as the sketch below illustrates.
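The distinction between overall system latency (throughput) and unit latency can be seen in this small sketch: handling messages across several threads cuts the wall-clock time for a batch, while each individual message still takes the same time to process. The 1 ms per-message cost is an assumption simulating work that releases the interpreter, such as I/O waits.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle(msg):
    """Process one message; the sleep stands in for ~1 ms of I/O-bound
    work, so the unit latency of any single message is fixed."""
    time.sleep(0.001)
    return msg

messages = list(range(200))

start = time.perf_counter()
for m in messages:
    handle(m)
serial_s = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(handle, messages))
parallel_s = time.perf_counter() - start

# Throughput improves roughly 8x, but each message still takes ~1 ms.
print(f"serial: {serial_s:.2f}s, 8 threads: {parallel_s:.2f}s")
```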
Finally, there are attempts to put trading applications, or components of them, onto hardware, often referred to as hardware acceleration. The current technology can be very useful for latency-sensitive firms in accelerating the most commoditised components of the overall architecture. For instance, vendors provide specific solutions for market data feed handlers (in single-digit microseconds), which can bring market-data-to-arbitrage-signal detection down to tens of microseconds. Yet trading is not standard enough to be easily implemented on such architecture across the board. Another approach is to accelerate some of the messaging flow, through faster middleware and network-level content management. This goes hand in hand with attempts by leading network providers to move more application programming lower into the network stack.