Big Data technology offers tremendous potential for all areas of financial services, including the ability to help solve business challenges and analyze market data on the trading side of the house.
Developing and backtesting new trading strategies is one area where Big Data is having a big impact, especially as today’s markets place a premium on the quality of trading strategies and the speed with which firms can develop cutting-edge algorithms. However, the infrastructure used for backtesting is under increasing pressure to scale as the density and diversity of relevant data grow ever bigger.
To be successful in the current “wild west” Big Data environment, the industry needs rigorous Big Data benchmark standards, especially as cost, quality and security requirements continue to grow. Enter the Securities Technology Analysis Center (STAC) and the STAC Benchmark Council, whose purpose is “to explore technical challenges and solutions in financial services and to develop technology benchmark standards that are useful to financial organizations.” STAC is now adding Big Data benchmarks to its portfolio of low-latency and big compute benchmarks.
STAC Benchmarks are specified by architects and engineers at trading firms with deep experience in the relevant workloads. Over the past several months, firms on the STAC-A3 Working Group have been specifying benchmarks to measure the performance, scaling, and cost efficiency of backtesting architectures.
Given my subject-matter expertise, STAC asked me to serve as principal architect for the benchmarking software that would satisfy the requirements from this group. Backtesting has been near and dear to me for decades, so I feel honored to be involved in this important project.
The trick in this project was writing simple but representative algorithms and generating simulated order book data that together would emulate real backtesting workloads. This meant we first had to develop a realistic and scalable data generator. The data had to be repeatable, but not predictable, so that vendors could not tune their systems to known characteristics of the simulated data set.
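One common way to get data that is repeatable but not predictable is to drive the generator with a keyed pseudo-random stream: the same secret seed always reproduces the identical data, but a vendor without the seed cannot anticipate its characteristics. The sketch below is purely illustrative, not the actual STAC-A3 generator; the `tick_stream` function, its parameters, and the random-walk model are all hypothetical.

```python
import hashlib
import random

def tick_stream(symbol: str, day: str, secret_seed: int, n_ticks: int = 5):
    """Deterministically generate simulated order book ticks.

    The (secret seed, symbol, day) triple fully determines the stream,
    so any slice of the data set can be regenerated on demand, yet
    without the seed the values are not predictable.
    """
    # Derive a per-(symbol, day) key so streams are independent.
    key = hashlib.sha256(f"{secret_seed}:{symbol}:{day}".encode()).digest()
    rng = random.Random(key)
    price = 100.0
    for _ in range(n_ticks):
        price += rng.gauss(0, 0.05)           # random-walk mid-price
        spread = abs(rng.gauss(0.02, 0.005))  # bid/ask half-spread
        yield (round(price - spread, 4),      # bid
               round(price + spread, 4),      # ask
               rng.randint(1, 1000))          # size

# Repeatable: identical inputs always yield the identical stream.
a = list(tick_stream("IBM", "2015-06-01", secret_seed=42))
b = list(tick_stream("IBM", "2015-06-01", secret_seed=42))
assert a == b
```

Because every stream is a pure function of its key, a generator like this scales naturally: each (symbol, day) partition can be produced independently and in parallel.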
Think Big implemented the data generator as Python modules. Vendors can bring the modules onto their platform and transform and organize the output from the data generator in a way that suits their platform. The condition is that they must be able to regenerate the original input data from their internal version.
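The regeneration condition can be checked mechanically: hash the generator's output in its canonical order, reconstruct the same rows from the vendor's internal layout, and compare digests. This is a hypothetical sketch of such a check, assuming a columnar internal format; the function name and the stand-in data are mine, not part of the benchmark materials.

```python
import hashlib

def canonical_digest(rows):
    """Hash a tick stream in its canonical (generator-output) order."""
    h = hashlib.sha256()
    for row in rows:
        h.update(repr(row).encode())
    return h.hexdigest()

# Reference rows as they came from the data generator (stand-in values).
reference = [(99.98, 100.02, 500), (100.01, 100.05, 250)]

# A vendor may store the data in any internal layout, e.g. columnar,
# as long as the original rows can be reconstructed from it.
columnar = {
    "bid":  [r[0] for r in reference],
    "ask":  [r[1] for r in reference],
    "size": [r[2] for r in reference],
}
regenerated = list(zip(columnar["bid"], columnar["ask"], columnar["size"]))

# The round trip must be lossless.
assert canonical_digest(regenerated) == canonical_digest(reference)
```

The point of the rule is fairness: vendors are free to optimize layout and compression for their platform, but not to discard or approximate the input data.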
Intel and Cloudera stepped up to create the first implementation of STAC-A3 using the preliminary benchmark materials. The Cloudera/Intel team wrapped the Python library in a Scala driver under Spark to generate a large sample data set. They also implemented one of the benchmark algorithms in Spark with Scala. This proved to be fairly smooth even for engineers who had limited experience using Spark or Scala. These preliminary results were presented at the June 2015 STAC Summits in London, New York and Chicago.
The next step for the STAC-A3 benchmark project is beta testing, followed by a round of feedback from the user group. By the fall of 2015, we expect to have the reference work completed and at least one vendor implementation finished, if not more.
The STAC-A3 benchmark project will help financial services firms understand the possible impact of Big Data technologies on the trading side of the house. As STAC Benchmarks have done in areas such as low latency data processing and enterprise tick databases, these benchmarks will play a critical role in helping the industry evolve from the current frothy environment, with its extremely rapid innovation and numerous competing technologies, to a smaller set of mainstream technologies.
For example, there has been a lot of buzz around Spark, but until now it has not been publicly put to the test to prove that it can scale to extremely large data sets and deliver real results in the trading sector. Since developing new skills is often the biggest hurdle to adopting new technologies, it is exciting to note that the developers were able to learn Scala and Spark and implement the benchmark with them, all in a couple of weeks! We can now talk about data volumes and processing times backed by real results. It will be interesting to see how both newer and more mature technologies fare as the STAC-A3 benchmark standards gain momentum.
To learn more about STAC, visit https://stacresearch.com/