Collaboration with the buy-side

By Jeffrey Alexander and Linda Giordano of Barclays

Jeffrey Alexander
What are your key concerns with the proliferation of order types and venues?
There is a tremendous amount of technology behind the creation and maintenance of every new venue and every innovation those venues introduce. It is difficult, if not impossible, for traders to keep track of the myriad changes that occur, much less assess the potential impact of new order types and system changes. Add in the fact that every broker uses different routing logic, and the process becomes unmanageable without a system for monitoring activity and measuring impact.

What would you like to change in the way trades are reported back to the buy-side?
The buy-side needs to be able to access sub-route level information (including unfilled routes) so that they can begin to understand the impact of using different technologies. Along with these sub-routes, traders also need to construct an accurate picture of what the market looked like at the time of route and execution.

This data resides with each broker, as the broker makes the decisions about where, when, and how much to route, and has the ability to accurately snap market data that is synced with routes and fills.

Ideally, this information should be available in real-time via FIX. As this is a long-term process, the buy-side should collect data from their brokers and analyse this data.

What metrics could change to increase transparency?
Obviously, liquidity taker/provider flags need to be cleaned up, but knowing the type of contra client would also make it much easier to assess the efficacy of certain order types. For example, if a buy-side trader knew that a hide-and-slide order on DirectEdge typically matches up against HFT players and results in a larger footprint, they could work with their brokers to de-prioritise that particular venue.
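As a rough illustration of the de-prioritisation idea (venue names, scores, and the penalty factor here are all hypothetical, not part of any broker's actual router), a routing table could simply down-weight a venue once footprint analysis flags it:

```python
# Hypothetical sketch: de-prioritising a venue after analysis shows a
# particular order type there tends to match against HFT contras and
# leave a larger footprint. Scores and venue names are illustrative.

venue_scores = {"EDGX": 1.0, "XNAS": 1.0, "BATS": 1.0}

def deprioritise(venue, penalty=0.5):
    """Scale down a venue's routing score after footprint analysis."""
    venue_scores[venue] *= penalty

deprioritise("EDGX")  # e.g. hide-and-slide there showed a large footprint

# Rank venues by remaining routing score; the penalised venue drops last.
ranked = sorted(venue_scores, key=venue_scores.get, reverse=True)
```

The point is not the mechanism (a real smart order router is far more involved) but that the decision requires the contra-type data the buy-side currently cannot see.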

Linda Giordano
What is the FPL Americas Buy-side Working Group doing to standardise these efforts?
We at Barclays have been working with a group of 20 large institutions to put together a standard for collecting the required sub-route and market data from every broker. We assign liquidity provide/take flags to every route, use MIC codes to report the executing venue, and snap venue-specific and consolidated market data at the time of the route, the fill, and several very short intervals post-trade (100 ms out to 1 minute).
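To make the shape of such a record concrete, here is a minimal sketch of what one sub-route entry might hold. The field names are hypothetical and do not reflect the working group's actual file specification; the structure simply mirrors the elements described above (a provide/take flag per route, an ISO 10383 MIC code for the venue, and market-data snaps at the route, the fill, and short post-trade offsets):

```python
from dataclasses import dataclass, field

@dataclass
class MarketSnap:
    offset_ms: int   # 0 = at route/fill; 100 .. 60000 = post-trade offsets
    bid: float
    ask: float

@dataclass
class SubRoute:
    order_id: str
    venue_mic: str        # ISO 10383 MIC code, e.g. "XNAS"
    liquidity_flag: str   # "P" = provided, "T" = taken
    routed_qty: int
    filled_qty: int       # 0 for unfilled routes, which are reported too
    snaps: list = field(default_factory=list)  # MarketSnap entries

# An unfilled route (filled_qty = 0) still carries its market snapshots,
# so the trader can see what the market looked like when it was sent.
route = SubRoute("ORD-1", "XNAS", "P", routed_qty=500, filled_qty=0)
route.snaps.append(MarketSnap(0, 10.01, 10.02))
route.snaps.append(MarketSnap(100, 10.01, 10.03))
```

One file layout of this kind per client, rather than a bespoke format per buy-side firm, is exactly the scalability argument made later in the interview.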

We were asked to collaborate with the FPL Americas Buy-side Execution Venue subgroup to communicate this standard to the members, to discuss incorporating this information into existing FIX messages, and potentially to create new ones.

Are there limits to what can be standardised?
No. Brokers should be transparent with their clients and should work to make data legible. Brokers should expect that their clients have a fiduciary requirement to understand how they are accessing the market and should work to that end. It is the right thing to do.

To what extent is it about transparency and accountability, and to what extent is it about minimising the cost of transactions?
While there is typically a cost event when trading leaves a large footprint, and savings will be had by minimising that footprint, the endeavour to understand how your brokers are accessing liquidity is about transparency and accountability. There have been major mistakes made by venue providers, but these errors have been disregarded because the financial impact was small. The point is that although the impact is small, the risk introduced when the market is acting in an anomalous way is increased by these "minor" errors. It isn't enough for the venues to self-police; traders also need to make sure venues and order types are behaving as expected.

Is there a balance between broker freedom to seek rebates and buy-side need to minimise trading cost?
All things being equal, it makes sense for brokers to access the least costly venue. The question, however, is what "all things being equal" means. Metrics that assess footprint need to be incorporated into the model that defines equality. Execution quality should never be sacrificed to achieve rebates.

What work is there to be done on where you didn’t execute as well, and is this real-time or sometime later?
The buy-side needs to have a better understanding of what is hurting and what is helping. The broker controls the technology that directly accesses the market and buy-siders need to be armed with data so they can have more informed conversations about routing technology.

Real-time analysis will allow traders to change course and mitigate implementations gone wrong, but there is a learning curve to reach that point, and starting with longer-term analysis will help traders climb it.

Is there a danger all this will just be for the largest firms alone, due to data management issues?
This is why a standard is so important. If every buy-side firm requests different flavours of sub-route data, the development burden on the sell-side will be unmanageable and the focus will be on satisfying larger clients. However, if brokers only have to develop to one file specification for every client, it becomes much more scalable.

In addition, the cost of market data redistribution fees needs to be borne by the sell-side, as buy-side attempts to match market data to route and fill timestamps are highly inaccurate, especially at microsecond intervals. We propose to include key market prices in the broker-supplied data.

This will also make it more tenable for the buy-side to implement an analysis program.

Via our Execution Consulting team, Barclays has developed technology to help the buy-side analyse venue footprint both for trades that occur via Barclays and for those that are traded elsewhere. We have created a research team that has built a multi-broker analysis system to help the buy-side analyse venues, as well as all phases of implementation. This team sits behind a Chinese wall that ensures data is secure and not accessible by sales/trading, and the solutions developed can be housed at Barclays or implemented using local databases.

What is the time horizon for these reforms – how long should/will it take to standardise this data?
The sooner, the better, but adoption will be an organic process that stems from demand. We have a core group of firms who will use the standard we collectively put together when they finally do take the plunge to adopt a program to capture sub-route data from their brokers. The more firms that go out with the same file spec, the faster the standard will take hold. There are already a handful of firms that have sent the standard around to other brokers, so adoption is already in progress. I do think, however, that firms are moving slowly with this because a) they are waiting to see what FIX will offer and when it will be available and/or b) they are trying to find resources to dedicate to doing the required legwork.
