With Jacqueline Loh, Head of Trading, AIA Investment Management
Hopefully in a structured, consistent way! Data tells a story: in the first chapter we write about what we want to achieve, and the rest of the story just writes itself. The ending might not be one we like, but we can draw valuable insights from it to make our next story a better one.
To make gathering and organizing data as seamless as possible, there should be connectivity and consistency between the order management system (OMS), the execution management system (EMS), and the pre/post trade analytics module. Storing the data in one location is ideal but not always possible. Data should be organized in such a way that it is easy to run several iterations from different perspectives. However, the most
difficult part is probably deciding what to store and how much.
It would be appropriate to address the issue with a top-down approach, thinking of the end game for data collection/analytics. Data collection and analysis should not be ends in themselves, but should provide the basis for better decision making.
The main challenge then is to define a best execution process -- how to evidence it and how to create a positive feedback loop using data analytics such that trading performance is improving all the time. After the main objective has been clearly defined, it’s then much easier to build an
end-to-end process involving data collection, data organization, and analytics. This is a more coherent approach than collating a few reports with the view of demonstrating best execution.
One related challenge is the selection of benchmarks. There are always pros and cons associated with each standard benchmark, e.g. volume-weighted average price (VWAP) vs implementation shortfall for equities. But they should always be chosen to align with the asset manager's management style and to measure the metrics important to them.
The success of an end-to-end best execution process, if defined properly, can be measured by a quantitative improvement of the benchmarks.
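To illustrate how the two standard equity benchmarks mentioned above can differ on the same order, here is a minimal sketch; all fills, prices and field names are hypothetical:

```python
# Minimal sketch comparing two standard equity benchmarks on one buy order.
# All numbers are hypothetical illustrations, not real trade data.

fills = [  # (price, quantity) of our executions
    (10.02, 4_000),
    (10.05, 3_000),
    (10.08, 3_000),
]
arrival_price = 10.00   # mid price when the order reached the desk
interval_vwap = 10.04   # market VWAP over the trading interval

qty = sum(q for _, q in fills)
avg_exec = sum(p * q for p, q in fills) / qty

# VWAP slippage: execution versus the market's average price (bps, buy order).
vwap_slippage_bps = (avg_exec - interval_vwap) / interval_vwap * 10_000

# Implementation shortfall: cost versus the decision/arrival price (bps).
shortfall_bps = (avg_exec - arrival_price) / arrival_price * 10_000

print(f"avg execution price: {avg_exec:.4f}")
print(f"VWAP slippage: {vwap_slippage_bps:.1f} bps")
print(f"implementation shortfall: {shortfall_bps:.1f} bps")
```

Note how the same order can look good against VWAP (about 7 bps) while carrying a much larger shortfall against arrival (47 bps), which is why the choice of benchmark has to match the management style.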
As market environments change ever more rapidly and trading
platforms grow in sophistication, buy-side trading desks need
to use data to reduce liquidity costs and improve efficiency in a journey of continued evolution, or else suffer a deterioration in trading performance.
However, for trading teams who use data in a determined manner with the end goal in mind, the prize is that they can gain a lot of technical experience in a shorter time than previously possible. Constant analysis will also make it easier to discern anomalies in counterparty or algorithm behaviours.
On my team, every trader analyses his/her own trading performance for high- and low-cost trades on a daily basis. What were the reasons for the
high-cost trades and what do we learn from them? For the low-cost trades, what went right and how do we replicate them? These reviews are not construed negatively, but rather viewed as opportunities to pick up changes in trends.
I have the privilege of working in a company that encourages the use of data in decision making. The theme on my trading desk is data-driven execution. We had a clean slate to work with. So once we decided upon our best execution objectives, we were able to proceed with collecting the
relevant data in the right format as soon as possible. In addition, volumes were much lower in our first year, making it easier for us to verify the data, clean it and lay the foundation for data collection and data analytics. From then on, as volumes increased year on year, we could run the analyses with full confidence in our data sets.
I think the main advantage in building a data infrastructure from the ground up is being able to build in flexibility such that customized reports can be run easily and at short notice. This allows us to quickly drill down on
specific issues, e.g. determining the efficacy of a certain algorithm.
Data challenges are related to the different classes of stock in some Asian markets, and having to map the orders to the right types of stock, e.g. NVDRs and foreign and local stock in Thailand. IPOs and placements are
sometimes not tagged correctly. Accuracy of order timestamps is another important factor.
More than one benchmark should be used, and benchmarks should be related to the type of flow to yield meaningful results. For example, for orders in illiquid stocks that take several days to complete, a static benchmark involving the estimation of market impact costs might be more appropriate than VWAP.
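One common way to build such a static benchmark is a square-root impact model; the sketch below uses a hypothetical coefficient and order, since real desks calibrate these constants to their own flow:

```python
# Sketch of a static pre-trade benchmark for a multi-day illiquid order:
# estimated market impact via a simple square-root model.
# The coefficient and all numbers are hypothetical.
import math

order_qty = 2_000_000   # shares to trade
adv = 500_000           # average daily volume
daily_vol_bps = 150.0   # daily price volatility in bps
impact_coeff = 0.8      # hypothetical calibrated constant

# Square-root market-impact estimate, in bps of arrival price.
est_impact_bps = impact_coeff * daily_vol_bps * math.sqrt(order_qty / adv)
print(f"estimated impact: {est_impact_bps:.0f} bps")
```

The realised cost of the multi-day order can then be judged against this static estimate rather than against a VWAP that the order itself dominates.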
The big advantage in equities is that the boundaries of normally accepted trading performance have been established for the standard benchmarks.
There are huge opportunities in the area of data usage for algorithm differentiation. This would involve identifying best-in-class algorithms for various categories and which algorithms work best under particular scenarios, e.g. liquidity-seeking algorithms for illiquid stocks.
Trading-performance measurement is not as established as in equities, and therefore the choice of benchmarks is still up for debate. Norms of performance are also not yet established, so perhaps outperformance/underperformance should be measured as a percentage of spread rather than in absolute numbers. The size of spreads should be a function of the portfolio of bonds traded.
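A minimal sketch of what spread-relative measurement could look like for a single bond trade, with hypothetical quotes and execution:

```python
# Express fixed-income trading performance as a percentage of the quoted
# spread rather than as an absolute price difference. Hypothetical numbers.

bid, ask = 99.50, 100.00    # composite quotes at time of trade
mid = (bid + ask) / 2
half_spread = (ask - bid) / 2

exec_price = 99.85          # our buy execution
cost = exec_price - mid     # absolute cost versus mid (buy order)

# Cost as a fraction of the half-spread: 100% means we paid the full
# quoted half-spread; below 100% means we did better than that.
cost_pct_of_spread = cost / half_spread * 100

print(f"cost vs mid: {cost:.3f}")
print(f"cost as % of half-spread: {cost_pct_of_spread:.0f}%")
```

Scaling by the spread makes a 10-cent cost on a wide, illiquid credit comparable to a 2-cent cost on a tight, liquid one.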
Accurate data collection is more difficult as fixed income is an OTC market. There may not be sufficient data for comparison in some of the more illiquid credits.
Here, there are opportunities for counterparty differentiation by sector, country and duration. We have just started to use transaction cost analysis (TCA) in our broker reviews, in the hope that the information will be useful to the sell side too.
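A sketch of how such counterparty differentiation might be aggregated for a broker review, bucketing trades by sector, country and duration; broker names, buckets and costs are all hypothetical:

```python
# Sketch of counterparty differentiation: average cost (as % of spread)
# per broker within sector/country/duration buckets.
# All names and numbers are hypothetical.
from collections import defaultdict

trades = [
    # (broker, sector, country, duration_bucket, cost_pct_of_spread)
    ("BrokerA", "Financials", "SG", "5-10y", 35.0),
    ("BrokerA", "Financials", "SG", "5-10y", 55.0),
    ("BrokerB", "Financials", "SG", "5-10y", 80.0),
    ("BrokerB", "Utilities",  "TH", "0-5y",  20.0),
]

buckets = defaultdict(list)
for broker, sector, country, dur, cost in trades:
    buckets[(broker, sector, country, dur)].append(cost)

for key, costs in sorted(buckets.items()):
    avg = sum(costs) / len(costs)
    print(key, f"avg cost: {avg:.0f}% of spread over {len(costs)} trades")
```

Even a simple grouping like this lets the desk see which counterparties are consistently competitive in which segments, and gives the sell side concrete feedback in reviews.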
Traders will have to be more quantitatively inclined. This is
because a large part of their jobs will center around data collection,
interpretation and application of the results to gain basis-point
improvements in liquidity costs and overall trade performance
measurement. Automation will be another important development,
as traders have to learn to work alongside machine learning algorithms to achieve better results.
Data optimization will involve a lot of work for the industry, but our clients will benefit from the effort.