The Man Who Solved the Market – Notes

When it comes to the world’s most secretive hedge fund, any content is worth reading. I finished the book in 3 days and re-read a couple of chapters to make sure I fully absorbed the nuggets in there. I would recommend this book to everyone!

How Simons discovered the “truth” is shrouded in mystery. Even googling what they traded doesn’t yield many answers. This new book by Gregory Zuckerman was an eye-opener. It revealed how Renaissance came to be, including Simons’ early struggles.

One of the surprising things I learned was that Simons was actually the money guy. Though he did trade and built up the business in the early years, he wasn’t the one leading the research breakthroughs. Instead, Simons seems to have been running side gigs like investing in start-ups back in the day. People like Ax, Berlekamp, Carmona, Laufer, Mercer, and Brown were the main brains behind the models.

Now to the trading models. Even though the author isn’t trained in finance, I really think he did a decent job explaining some of the broad concepts Renaissance utilized in the early days. Before 1988 (right before Carmona joined), Renaissance was a typical CTA / point-and-click trading firm. They utilized breakout models and linear regression (page 83). What changed around that time was that both Carmona and Laufer started to data mine for trading patterns as opposed to hand-crafting them. This especially stood out for me personally, as about a year ago I started to conduct research via data mining. As Renaissance thrived through the 90s, more than 50% of their models were data mined (page 203). Their reasoning resonated with me a lot: “Recurring patterns without apparent logic to explain them had an added bonus: They were less likely to be discovered and adopted by rivals…”

Additional interesting tidbits:

  • “Laufer’s work also showed that, if markets moved higher late in a day, it often paid to buy futures contracts just before the close of trading and dump them at the market’s opening the next day.” Isn’t this the overnight premium in equity futures? Sure sounds like it… (page 144)
  • On the subject of managing models, Laufer insisted on a single model as opposed to multiple models. (page 142) Presented with many different signals, they built a trade selection algorithm that further determined which trades to take. Strategies that did well were automatically allocated more money, without human intervention. (page 144)
  • They started out trading end of day, slowly breaking the day down into two sessions. Simons then suggested going down to 5-minute bars. (page 143)
  • “Did the 188th five-minute bar in the cocoa futures market regularly fall on days investors got nervous, while bar 199 rebounded?” (page 143) They looked at intraday seasonality and conditional signals. Edge layered over edge to increase the probability of being right.
  • Mercer and Brown took over Kepler’s stat arb operation. Soon stock trading PnL was greater than futures trading PnL.

While I am sure today’s Renaissance is far from what the book described, broad concepts like data mining, alternative data collection, and stat arb all play a role in their continued success in some form or fashion. Please let me know if I’ve missed anything interesting. Of course, I may have interpreted it entirely wrong. Please leave a comment below!

 

Bibliography:

Zuckerman, Gregory. The Man Who Solved the Market. Portfolio/Penguin, 2019.

Queue Position Simulation

First off, Happy Thanksgiving! If time permits in the coming months, I’d like to explore more of how I look at high frequency (HF) data. Hopefully along the way I can spark some new discussion and improve my thought process.

HFT strategy “simulation” is no easy task. I refer to it as a simulation because it is purely an approximation of how a strategy would have performed given a set of execution assumptions the researcher made beforehand. Should the assumptions change, the results would also change (significantly).

In my line of work, the edge we are seeking is generally less than a tick (futures). To make this even worthwhile, the constraints are that costs must be low AND we need to trade a lot. This may sound foreign to most of my readers, as their time frames are generally much longer (days, weeks, even months). But at the end of the day, how much money we make is a simple function of our alpha times the number of times we trade.

In HFT, execution is king. You can be right about where the market moves the next tick, but if you can’t get a fill, you are not making any money. Therefore it is paramount that when we conduct HF simulations, we make accurate execution assumptions.

Queue position is worth a lot. Being first in line and getting a fill is like owning a call option in my world (where the premium is the exchange fee per contract). The worst that can happen is you scratch, assuming you are not the slowest one and there are people behind you. The image below is an analysis of the expected edge you’d get N events out (x-axis), assuming you sit at various spots within the FIFO queue (QP_0 = first in line, QP_0.1 = 10th in line if there were 100 qty). As you can see, the further behind in line you are, the more you are exposed to toxic flow, a fancy term for informed traders.

 

How does one take this into account when simulating a strategy? When you place a limit order on the bid, how do you know when you will be filled? This depends on two factors: your place in line and trade flow. As time progresses, there will be people who add orders to the FIFO queue, people who cancel orders, and people who take liquidity (trade). These actions are something one needs to track tick by tick (or packet by packet) during a simulation. While most people assume tick data is the most fine-grained dataset available for such simulations, there actually exists packet data. Tick data simply gives you an aggregated snapshot of what an order book looks like: best bid, best offer, bid qty, ask qty (this is known as market-by-price). Packet data, on the other hand, contains all the actions taken by all the market participants, including trade matches and order submissions. This feed is also known as market-by-order, and it’s up to the market participant to build and maintain their own order book. Using packet data for simulation is optimal, as you will know exactly where you are in line.
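To make the market-by-order idea concrete, here is a minimal sketch of maintaining your own book from such a feed. The message layout (`action`, `order_id`, `side`, `price`, `qty`) is my own simplification; real exchange feeds differ in their exact schema.

```python
# Toy market-by-order book builder. Message fields are illustrative,
# not any specific exchange's wire format.

class OrderBook:
    def __init__(self):
        # order_id -> (side, price, qty); dict preserves FIFO arrival order.
        self.orders = {}

    def on_message(self, action, order_id, side=None, price=None, qty=None):
        if action == "add":
            self.orders[order_id] = (side, price, qty)
        elif action == "cancel":
            self.orders.pop(order_id, None)
        elif action == "trade":
            s, p, q = self.orders[order_id]
            if qty >= q:
                del self.orders[order_id]          # resting order fully filled
            else:
                self.orders[order_id] = (s, p, q - qty)  # partial fill

    def depth(self, side, price):
        # Total resting quantity at a price level.
        return sum(q for s, p, q in self.orders.values()
                   if s == side and p == price)

book = OrderBook()
book.on_message("add", 1, "bid", 100.0, 5)
book.on_message("add", 2, "bid", 100.0, 3)
book.on_message("trade", 1, qty=2)   # 2 lots trade against order 1
```

Because every resting order is tracked individually, your own simulated order slots into the dict at its true arrival position, so queue position falls out of the bookkeeping for free.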

When you only have tick data, the only way to conduct this type of simulation is to make assumptions. Here is a simple example. When you place a limit buy on the bid, you are going to be last in line. You keep track of two variables, qty_in_front and qty_behind. Additions are straightforward: just add them to qty_behind. Cancels are trickier because you don’t know whether they come from people in front of you or people behind. A workaround is to have something I call a reduce ratio. It can take a value between 0 and 1, and it controls the percentage of cancels assumed to happen in front of you. For example, in ES simulations I would set this to around 0.1, i.e. when there is a total of 100 qty cancelled, I’d assume 10 happens in front of me and 90 happens behind me. There are edge cases, but I’ll leave those for the reader to figure out. This is just one way, not the only way, of simulating a FIFO queue. More complicated approaches include dynamically adjusting the reduce ratio as you approach the front of the queue.
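The bookkeeping above can be sketched in a few lines. This is a minimal illustration of the reduce-ratio idea for a one-lot resting order, under the stated assumptions (all names, and the fill rule itself, are mine, not a production model):

```python
# Tick-data queue-position approximation using a "reduce ratio":
# the assumed fraction of observed cancels that occur in front of us.

class QueueSim:
    def __init__(self, resting_qty_ahead, reduce_ratio=0.1):
        # We join at the back: everything resting is in front of us.
        self.qty_in_front = float(resting_qty_ahead)
        self.qty_behind = 0.0
        self.reduce_ratio = reduce_ratio

    def on_add(self, qty):
        # New passive orders queue up behind us.
        self.qty_behind += qty

    def on_cancel(self, qty):
        # Split cancels between front and back per the assumed ratio.
        front = min(self.qty_in_front, qty * self.reduce_ratio)
        self.qty_in_front -= front
        self.qty_behind = max(0.0, self.qty_behind - (qty - front))

    def on_trade(self, qty):
        # Aggressive volume eats the front of the queue first.
        # Returns 1 when our one-lot order would be filled, else 0.
        filled = 1 if qty > self.qty_in_front else 0
        self.qty_in_front = max(0.0, self.qty_in_front - qty)
        return filled

sim = QueueSim(resting_qty_ahead=100, reduce_ratio=0.1)
sim.on_cancel(100)   # assume 10 of these cancels were ahead of us
sim.on_trade(50)     # not filled yet: 90 - 50 = 40 lots still ahead
```

The edge cases mentioned above (e.g. cancels arriving when qty_behind is already zero, or trades that exactly exhaust the front) are where most of the real modelling judgment goes.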

How do you guys go about this? I’d love to hear.

 

Constant Maturity Data

I’ve been asked multiple times why and when I use constant maturity data for research and modelling. I thought I’d cover it here on my blog since it’s been a while. I hope to post more in the coming months, as writing has been a good way for me to organize my thoughts and share what I’ve been working on.

Constant maturity (CM) data is a way of stitching together non-continuous time series, just like the back-adjusted method. It is used heavily in derivatives modelling due to the short time span a derivative (options, futures, etc.) is listed and traded.

What is it and how is it used?

The CM methodology essentially holds time to expiration constant. Various derivative contracts behave differently as time approaches expiration, so researchers developed this method to account for that and study statistical properties through time.

I’ll provide a couple of use cases.

In options trading, we know that time is one of the major factors affecting the price of an option as it approaches expiry. Options that expire further out in time are more expensive than options that expire closer to today. The reason for this is implied volatility (IV). Researchers who want to study IV across time without the expiration effect need to hold time constant: for example, the study of how IV changes as a stock option approaches an earnings announcement.

In futures, the CM methodology can be used to model covariance matrices for risk analysis. For example, if you are trading futures under the same root (crude) across various expirations, this method has proven rather useful in managing portfolio-level risk.

For cash products, the standout examples are the recent proliferation of volatility ETPs. Most of these products are structured to maintain constant exposure to a given DTE. They buy/sell calendar spreads daily to rebalance their existing position.

How do you calculate it?

I’ve come across multiple ways of doing this. I will show the most basic way, and readers can test out which suits them best. The method I’ve used in the past is simple linear interpolation between two points. So assuming you want the 30-day IV but you only have IV for 20 and 40 DTE ATM options, the equation is:

cm.pt = ( (dte.back – target.dte) * price.front + (target.dte – dte.front) * price.back ) / (dte.back – dte.front)

Here target.dte is the expiration you want to calculate, price.front is the value (price or IV) at the nearer expiration, and price.back the value at the further one. dte.front should be < dte.back, as the front expires before the back. Each point is weighted by its distance from the opposite expiration, so the nearer contract gets more weight as the target approaches it. This is not the only way; there are others, such as non-linear interpolation. Carol Alexander’s books provide more examples and much better explanations than I ever can!
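The interpolation above is a one-liner in code. A minimal sketch (function and variable names are mine):

```python
# Constant-maturity point by linear interpolation between two expirations.

def constant_maturity(target_dte, dte_front, price_front, dte_back, price_back):
    """Interpolate the value at target_dte from the surrounding contracts."""
    assert dte_front < target_dte < dte_back, "target must lie between expirations"
    w_back = (target_dte - dte_front) / (dte_back - dte_front)
    return (1.0 - w_back) * price_front + w_back * price_back

# Example: 30-day IV from a 20 DTE option at 18% IV and a 40 DTE option at 22% IV.
iv_30 = constant_maturity(30, 20, 0.18, 40, 0.22)  # midway between the two -> 0.20
```

Run daily, this produces a stitched series whose time to expiration never changes, which is exactly what you want before computing statistics through time.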

Hope this helps!

Mike

Vertical Skew IV

Vertical skew is the shape of implied volatility (IV) across strikes for a single option expiration. There is also something called horizontal skew, which is IV across maturities (the term structure). The movement of the vertical skew has been of interest to me recently when analyzing some of my option positions.

I had a long put butterfly position on with the center short strikes 20 points below the market. At trade initiation my greeks were as follows: Delta: -56, Gamma: -1.04, Theta: 117.9, Vega: -343.8. This is generally what you’d expect from a short vol position. Delta is slightly negative because the fly is bearishly positioned. My expectation was that on a market decline the short delta would only partially offset the vega loss (IV rises as price falls), reducing but not eliminating the PnL decline. As it turns out, that was not the case: my position actually gained money on a price decline, the opposite of what my greeks were telling me. To understand why, we must look at IV skew.

Below are the IVs for RUT September 2015 expiration put options. I specifically picked this period to illustrate the transition from a high-vol to a low-vol environment. If you look closely, you will notice that the green line is steeper than the red line. The second graph shows the difference between the two lines: it’s increasing. As we go from OTM to ITM options, the rate of change of IV increases. In other words, in our graph, ITM options (right side) decline more in value (the skew steepens) than OTM and ATM options when IV drops (and vice versa for increases in IV, when it flattens). This phenomenon is not captured by the Black-Scholes model, as it assumes fixed volatility over the lifetime of the option.

(Figure: RUT September 2015 put IVs by strike, high-vol vs. low-vol day)

(Figure: difference between the two IV curves)

 

Now how does this help me understand what happened to my position? Well, the right wing (highest strike) put option within my fly is an ITM option. When I was analyzing my position, I assumed that the long put wings of my fly had equal PnL contributions, but that’s not the case. My right wing benefited the most from a given IV increase, which means the overall negative effect of vega was over-estimated. In fact, it may be (and I am not 100% sure) that both delta and vega were working in my favor, assuming the PnL contribution of the long wings was greater than the losses incurred from the short center strikes of my fly. Armed with this information, I think people can incorporate it into their adjustments and maybe create some ways to exploit this. Open to any ideas!

Pretty cool eh?

Automated Trading System – Internal Order Matching

Most automated trading systems (ATS) are built such that there is little to no interaction between component models. This is limiting. Here I am referring to a trading system as the overarching architecture that houses multiple individual models.

Without interactions, each model operates only within the environment it was conceived in. For example, mean reversion can happen at different time/event frequencies. A model that is parameterized to take advantage of one frequency will have no knowledge of the others.

One component within an ATS that is rather complicated to architect is the order management system (OMS). The OMS is the component that handles all order requests generated by the prediction models. It must always be aware of outstanding orders (limit/market, etc.), partial fills, and the proper handling of rejects. The complexity increases when a portfolio of prediction models all generate an order on a given tick. (Which should be processed first?)

The general rule of thumb is to aggregate all orders by asset to reduce transaction costs. If there is a mix of longs and shorts, the net is the final order quantity. When it is filled, simply disaggregate it back into component fills for the respective models (internal matching). The annoying part, in my opinion, is when you introduce multiple order types, for example limit and market orders. How would one architect the OMS to handle both? This goes back to the debate on the degree of coupling between the strategy and the OMS itself…
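The netting-and-disaggregation step can be sketched as follows. This toy version assumes a single order type with full fills; the names (`net_orders`, the model IDs) are illustrative only:

```python
# Per-asset order netting with a per-model breakdown kept for
# internal matching once the net order is filled.
from collections import defaultdict

def net_orders(model_orders):
    """model_orders: iterable of (model_id, asset, signed_qty).
    Returns (net qty to send per asset, per-model breakdown per asset)."""
    net = defaultdict(int)
    breakdown = defaultdict(list)
    for model_id, asset, qty in model_orders:
        net[asset] += qty
        breakdown[asset].append((model_id, qty))
    return dict(net), dict(breakdown)

orders = [
    ("mean_rev", "ES", +3),   # model wants to buy 3 ES
    ("trend",    "ES", -1),   # model wants to sell 1 ES
    ("carry",    "CL", +2),
]
net, per_model = net_orders(orders)
# Only the net 2-lot ES buy goes to the exchange; the offsetting
# 1-lot long/short pair is matched internally, and both models book
# their fills at the execution price via the per-model breakdown.
```

With limit and market orders mixed in, this simple picture breaks down, since a netted limit order may fill only partially while the market-order legs must fill immediately, which is exactly the coupling problem mentioned above.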

CRSM Code

The code below implements the CRSM algorithm, adapted to the SIT framework. I have refactored the code so that it is easier to use and understand.

I would like to once again thank David Varadi for his tireless effort in guiding me along this past year. My gratitude also goes out to Adam Butler and BPG Associates for their support all along the way.

Download Code: here

Thanks,

Mike

Shiny Market Dashboard

I’ve been asked multiple times for the code behind the dashboard, so I thought I’d release it. I coded the whole thing in one night last year, so it’s not the best or most efficient, but it’s a good framework to get your own stuff up and running. It’s long, north of 700 lines of code.

https://www.dropbox.com/sh/rxc8l4xnct5bcci/AABvTD2iJjC6wicLed3q9Qr3a

On another note, I’ve recently graduated from university and am excited to be moving to Chicago in a month for work. I am looking forward to the exciting opportunity. I will also be releasing my graduating thesis on RSO optimization in the coming weeks. David Varadi has been my thesis advisor and mentor, and I want to thank him for that!

Mike