Risk Control

Constant Maturity Data

I’ve been asked multiple times why and when I use constant maturity data for research and modelling. I thought I’d cover it here on my blog since it’s been a while. I hope to post more in the coming months, as writing has been a good way for me to organize my thoughts and share what I’ve been working on.

Constant maturity (CM) data is a way of stitching together non-continuous time series, much like the back-adjusted method used for futures. It is used heavily in derivative modelling because any individual derivative contract (option, future, etc.) is only listed and traded for a short span of time.

What is it and how is it used?

The CM methodology essentially holds time to expiration constant. Derivative contracts behave differently as time approaches expiration, so researchers developed this method to account for that and to study statistical properties through time.

I’ll provide a couple of examples of its usage.

In options trading, we know that time is one of the major factors affecting the price of an option as it approaches expiry. Options that expire further out in time are more expensive than options that expire closer to today, and the difference is driven by implied volatility (IV). Researchers who want to study IV across time without the expiration effect contaminating the series need to hold time constant. One example is the study of how IV changes as a stock option approaches an earnings announcement.

In futures, the CM methodology can be used to model covariance matrices for risk analysis. For example, if you are trading futures under the same root (e.g. crude oil) across various expirations, this method has proven rather useful in managing portfolio-level risk.

For cash products, the standout examples are the recently proliferating volatility ETPs. Most of these products are structured to maintain constant exposure to a given number of days to expiration (DTE). They buy and sell calendar spreads in the futures daily to rebalance their existing position.

How do you calculate it?

I’ve come across multiple ways of doing this. I will show the most basic one, and readers can test which suits them best. The method I’ve used in the past is simple linear interpolation between two known points. Assuming you want a 30-day IV but you only have IVs for 20 and 40 DTE ATM options, the equation is:

cm.pt = ( (target.dte – dte.front) * price.back + (dte.back – target.dte) * price.front ) / (dte.back – dte.front)

Here target.dte is the expiration you want to synthesize, and dte.front should be < dte.back, as the front contract expires before the back. Note that each price is weighted by the distance of the target from the other expiration, so the nearer contract gets the larger weight. This is not the only way; there are others, such as non-linear interpolation. Carol Alexander’s books provide more examples and much better explanations than I ever could!
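To make the formula concrete, here is a small Python sketch of the interpolation (the function name and example IV numbers are my own, just for illustration):

```python
def constant_maturity_point(target_dte, dte_front, dte_back, price_front, price_back):
    """Linearly interpolate a constant-maturity value between two listed expirations.

    Each price is weighted by the target's distance from the *other* expiry,
    so the closer expiration receives the larger weight.
    """
    if not dte_front < target_dte < dte_back:
        raise ValueError("target DTE must lie between the two listed expirations")
    w_back = (target_dte - dte_front) / (dte_back - dte_front)   # weight on back contract
    w_front = (dte_back - target_dte) / (dte_back - dte_front)   # weight on front contract
    return w_front * price_front + w_back * price_back

# 30-day IV from a 20 DTE option at 25% IV and a 40 DTE option at 30% IV:
iv_30 = constant_maturity_point(30, 20, 40, 0.25, 0.30)  # 0.275, halfway between
```

Since 30 days sits exactly halfway between 20 and 40, the result is the simple average of the two IVs; a target closer to the front expiry would lean toward the front IV.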

Hope this helps!


Engineering Risks and Returns

In this post, I want to present a framework for formulating portfolios with a targeted risk or return. The basic idea was inspired by approaching risk control from a different point of view. The traditional way of controlling portfolio risk is to apply a given set of weights to historical data to estimate historical risk; if estimated portfolio risk exceeds a threshold, we peel off allocation percentages for each asset. In this framework, I instead focus on constructing portfolios that target a given risk or return on an efficient risk-return frontier.

First, let’s get some data so we can visualize the risk-return characteristics of traditional portfolio optimization. I will be using an 8-asset ETF universe.

# load the Systematic Investor Toolbox (SIT)
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
source(con)
close(con)

tickers = spl('EEM,EFA,GLD,IWM,IYR,QQQ,SPY,TLT')
data <- new.env()
getSymbols(tickers, src = 'yahoo', from = '1980-01-01', env = data, auto.assign = T)
for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)
bt.prep(data, align='keep.all', dates='2000:12::')

Here are the return streams we are working with:


The optimization algorithms I will employ are the following:

  • Minimum Variance Portfolio
  • Risk Parity Portfolio
  • Equal Risk Contribution Portfolio
  • Maximum Diversification Portfolio
  • Max Sharpe Portfolio

To construct the risk-return plane, I put together the necessary input assumptions (correlation, return, covariance, etc.). This can be done with the create.historical.ia function in the SIT toolbox.

# input assumptions
prices = data$prices
n = ncol(prices)
ret = prices / mlag(prices) - 1
ia = create.historical.ia(ret, 252)

# 0 <= x.i <= 1
constraints = new.constraints(n, lb = 0, ub = 1)
constraints = add.constraints(diag(n), type='>=', b=0, constraints)
constraints = add.constraints(diag(n), type='<=', b=1, constraints)

# SUM x.i = 1
constraints = add.constraints(rep(1, n), 1, type = '=', constraints)

With the above, we can feed both ‘ia’ and ‘constraints’ into the optimization algorithms to get weights. From the weights, we can derive portfolio risk and portfolio return, which can then be plotted visually on the risk-return plane.

# create efficient frontier
ef = portopt(ia, constraints, 50, 'Efficient Frontier')
plot.ef(ia, list(ef), transition.map=F)
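The mapping from a weight vector to a coordinate on this plane is simple enough to sketch outside of SIT. Here is a minimal Python/numpy version (the function name and the toy two-asset numbers are mine, not SIT code):

```python
import numpy as np

def portfolio_point(weights, expected_returns, cov):
    """Return the (risk, return) coordinates of a portfolio on the risk-return plane."""
    w = np.asarray(weights, dtype=float)
    port_return = float(w @ np.asarray(expected_returns, dtype=float))
    # portfolio risk is the standard deviation: sqrt(w' * Sigma * w)
    port_risk = float(np.sqrt(w @ np.asarray(cov, dtype=float) @ w))
    return port_risk, port_return

# toy example: two assets held 50/50
mu = [0.08, 0.04]
cov = [[0.04, 0.01],
       [0.01, 0.02]]
risk, ret = portfolio_point([0.5, 0.5], mu, cov)
```

Every candidate allocation algorithm produces a weight vector, and this mapping is what places each of them as a dot relative to the frontier.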


The risk-return plane in the above image shows the entire space in which a portfolio’s risk and return characteristics can reside. Anything to the left of the frontier does not exist (unless leverage is used, in which case the frontier itself also shifts leftward). Since I am more of a visual guy, I tend to construct this risk-return plane whenever I am working on new allocation algorithms. It allows me to compare the expected risk and return against other portfolios.

As you can see, each portfolio algorithm has its own set of characteristics, and these characteristics fluctuate along the frontier were we to frame this rolling through time. A logical extension of these risk-return concepts is to construct a portfolio that targets either a given risk or a given return on the frontier. To formulate the return-targeting problem in SIT, simply add a constraint as follows:

constraints = add.constraints(ia$expected.return,type='>=', b=target.return, constraints)

Note that target.return is simply a variable storing the desired target return. After adding the constraint, run a minimum variance optimization and you will get the target return portfolio. Targeting risk, on the other hand, is a bit more complicated. If you look at the efficient frontier, you will find that for a given level of risk there are two portfolios that lie on it (the sub-optimal lower half of the frontier is usually hidden). I solved for the weights using a multi-step optimization framework that employs both linear and quadratic (dual) optimization.


max.w = max.return.portfolio(ia, constraints)
min.w = min.var.portfolio(ia, constraints)
max.r = sum(max.w * ia$expected.return)
min.r = sum(min.w * ia$expected.return)
max.risk = portfolio.risk(max.w, ia)
min.risk = portfolio.risk(min.w, ia)

# if the target risk exists as an efficient portfolio, solve for it;
# otherwise return weights of 0
if(target.risk >= min.risk & target.risk <= max.risk) {
    out <- optimize(f = target.return.risk.helper,
        interval = c(min.r, max.r),
        target.risk = target.risk,
        ia = ia,
        constraints = constraints)$minimum
} else {
    out <- rep(0, ia$n)
}


Below is a simple backtest that takes the above assets and optimizes for the target return or target risk component. Each will run with a target of 8%.

Now the model itself requires us to specify a return or risk component. What if we instead made that component dynamic, extracting either the risk or the return of an alternative sizing algorithm? Below is the performance of the dynamic risk or return component extracted from naive risk parity.



Not surprisingly, whenever we target risk, the strategy tends to become more risky. This confirms that risk-based allocations are superior if investors aim to achieve low long-term volatility.


Thanks for reading,


Equity Bond Exposure Management

I did a post last October (here) looking at varying the allocation between stocks and bonds, and at the end I hinted at a tactical overlay between the two asset classes. Six months later, I have finally found a decent overlay that I feel may hold value.

In a paper called “Principal Components as a Measure of Systemic Risk” (SSRN), Kritzman et al. presented a method for identifying “fragile” market states. To do this, they constructed the Absorption Ratio. Here is the equation:

AR = sum( var(E.i), i = 1..n ) / sum( var(A.j), j = 1..N )

The numerator sums the variances of the first n eigenvectors, while the denominator sums the variances of the N individual assets. In the paper, n = 1/5 of the total number of assets N. The interpretation is simple: the higher the ratio, the more “fragile” the market state. The intuition is that when the ratio is high, risk is very concentrated; when it is low, risk is dispersed and spread out. Think weak and strong. Following is the raw AR through time for the DJ 30 components.
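The computation itself is just an eigendecomposition of the sample covariance matrix. Here is a standalone Python sketch (the post’s own charts were produced in R; absorption_ratio is my name, and per the paper n defaults to N/5):

```python
import numpy as np

def absorption_ratio(returns, n_vectors=None):
    """Fraction of total asset variance absorbed by the first n eigenvectors.

    returns: T x N matrix of asset returns. Each eigenvalue of the covariance
    matrix is the variance along its eigenvector, so the ratio is the sum of
    the top n eigenvalues over the trace (the summed asset variances).
    """
    r = np.asarray(returns, dtype=float)
    N = r.shape[1]
    if n_vectors is None:
        n_vectors = max(1, N // 5)          # paper's choice: n = N/5
    cov = np.cov(r, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # largest first
    return eigvals[:n_vectors].sum() / np.trace(cov)

# four perfectly correlated assets: one factor absorbs all the risk -> AR = 1
common = np.array([0.01, -0.02, 0.015, 0.0, -0.005, 0.02])
ar_concentrated = absorption_ratio(np.column_stack([common] * 4), 1)
```

A fully concentrated market gives AR = 1, while fully independent assets push it toward n/N, which matches the weak/strong reading above.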


As you can see, the ratio spikes during the tech bubble and the recent financial crisis. What would it look like when used as a filter? Below are two pictures comparing the signals generated by the 200-day SMA and the standardized AR.

SMA ARatio

Pretty good timing in my opinion. In line with the paper, I reconstructed the strategy that switches between stocks (DIA) and bonds (VBMFX). When the standardized AR is between 1 and -1, we split 50/50. When it’s above 1, we are in love with bonds, and when it’s below -1, we are in love with stocks. Simple. Results:


And here is the code (I know it’s messy, didn’t have a lot of time! :)

Note: there is survivorship bias, as I used the current-day DJ 30 components.
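Since the messy code isn’t reproduced here, the switching rule itself can be sketched in a few lines (a Python sketch of my rule, not the original R; the function name is mine):

```python
def ar_allocation(standardized_ar):
    """Map a standardized Absorption Ratio to (stock weight, bond weight).

    Above +1: risk is concentrated / fragile, so go all bonds.
    Below -1: risk is dispersed, so go all stocks.
    In between: split 50/50.
    """
    if standardized_ar > 1:
        return 0.0, 1.0     # all bonds (VBMFX)
    elif standardized_ar < -1:
        return 1.0, 0.0     # all stocks (DIA)
    return 0.5, 0.5         # neutral split
```

Applied daily to the standardized AR series, this produces the switching backtest shown above.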

Thanks for reading

Diversification through Equity Blending

In a sound asset allocation framework, it is never a good idea to overweight the risky portion of the portfolio. One example is the traditional 60/40 portfolio, whereby an investor allocates 60% to equities and 40% to bonds. Such an allocation may intuitively make sense because you “feel” diversified, but when extraordinary events happen, you will be less protected. Below is the performance of the 60/40 allocation rebalanced monthly since 2003. Note I used SPY and IEF for the mix.

In this post, I would like to show some ideas that reduce risk and increase return by bringing in a different type of return stream. Traditional asset allocation focuses mainly on optimal diversification across assets; here I will focus on allocating across strategies. From my own research, there are only so many asset classes an individual can mix to form portfolios, not to mention the less-than-reliable cross correlations between asset classes in market turmoil (2008). To bring stability to the core portfolio, I will incorporate Harry Browne’s Permanent Portfolio, a return stream composed of equal-weight allocations to equities, gold, bonds, and cash. For the more aggressive part, I will use daily equity mean reversion (RSI2). Note that even a basic strategy overlaid on an asset can produce a return stream with diversification benefits of its own. Below are three equity curves: black, green, and red represent mean reversion, an equal-weight blend of the two strategies, and the Permanent Portfolio, respectively.

To summarize, I have taken two return streams derived from strategies traded on assets and combined them to form a portfolio. The allocation is 40% to the risky stream (mean reversion, MR) and 60% to the conservative stream (Permanent Portfolio, PP). Here are the performance metrics.

Traditional represents the traditional 60/40 allocation to equities and bonds, while B-H represents buy-and-hold on the S&P 500. This superficial analysis is only meant to illustrate the powerful idea of blending assets and trading strategies. When the traditional search for diversification becomes fruitless, incorporating different strategies can have a huge impact on your underlying performance.

I will come back later with the R code, as it’s pretty late and I have class tomorrow!

Volatility Parity

When I first started out in systems research a year ago, I was told that in this business, if you can achieve return with lower volatility, you will definitely attract a lot of attention. Since then, I’ve found myself leaning towards strategies with lower volatility, usually achieved through proper volatility management.

In this post, I’d like to take a look at portfolio volatility using some tools from portfolio theory. I’d like to show that by peeling into volatility, one can better manage a portfolio.

There exists a fine line between academic finance and practitioners of finance; the opposing camps disagree over whether markets are efficient. I am not going to dive into that discussion, but I stand by the view that there are no fixed rules or equations for the markets. They are ever changing; therefore, I believe one should treat every concept as a tool.

A bit of math: portfolio variance is defined by the following equation. I will only use a two-asset example to avoid bringing in the full covariance matrix.

var.p = w1^2 * var1 + w2^2 * var2 + 2 * w1 * w2 * sd1 * sd2 * rho

Here w1 and w2 are the asset weights, var1 and var2 (sd1 and sd2) are the asset variances (standard deviations), and rho is the correlation between the two assets.

The variance contribution of each asset is thus:

VC1 = ( w1^2 * var1 + w1 * w2 * sd1 * sd2 * rho ) / var.p
VC2 = ( w2^2 * var2 + w1 * w2 * sd1 * sd2 * rho ) / var.p

In my opinion, the above equations capture a lot of information that can be used to manage volatility. At any given time, a multi-market strategy will have more than one position. If you can size each position so that each one contributes equally to overall portfolio volatility, you will have a much smoother, more balanced, and more diversified portfolio.
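The contribution calculation generalizes cleanly to any number of assets via the covariance matrix. Here is a Python sketch (my own helper, not the code behind the post’s charts); the toy covariance below assumes a 4% variance stock, a 0.4% variance bond, and a negative stock-bond covariance:

```python
import numpy as np

def variance_contributions(weights, cov):
    """Fraction of portfolio variance contributed by each asset.

    Contribution of asset i = w_i * (Sigma w)_i / (w' Sigma w).
    The fractions always sum to 1 (Euler decomposition of variance),
    but individual entries can exceed 1 or go negative.
    """
    w = np.asarray(weights, dtype=float)
    sigma = np.asarray(cov, dtype=float)
    marginal = sigma @ w              # marginal variance of each asset
    total_var = w @ marginal          # portfolio variance w' Sigma w
    return w * marginal / total_var

# 60/40 stock/bond mix with negatively correlated bonds:
cov = np.array([[ 0.0400, -0.0050],
                [-0.0050,  0.0040]])
vc = variance_contributions([0.6, 0.4], cov)
```

With these numbers, the stock contribution comes out above 100% and the bond contribution negative, which is exactly the pattern discussed below for the rolling SPY/IEF graph.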

In the following graphs, I used the above equations to calculate how bonds and stocks contribute to aggregate portfolio variance. For stocks I used SPY and for bonds I used IEF, both exchange-traded funds. This is a rolling 252-day graph with a traditional 60/40 allocation.

From the above graph, one can infer that the volatility contributions are not equal; at times stocks contribute more than 100% while bonds contribute negatively.

The above graph also gives a pretty good market-timing signal. When bonds contribute negatively, the market seems to be in turmoil, and vice versa for stocks when they contribute more than 100%.

I hope through this, readers will understand volatility better and look at just how it affects their portfolios.

A Few Words on Risk

The measurement of risk in a lot of financial models is inappropriate to some extent, and in my own arsenal of tools I have continually tried to steer away from the traditional assumptions made popular by the titans of portfolio theory. (I am not going to bash MPT here, and the view that the EMH is wrong is hardly a minority view anymore…)

In the convenient and accepted norm, risk = whether you can sleep at night, or put another way, the “volatility” of your portfolio. With such an objective, quantitative measure came a whole assortment of metrics (Sharpe ratio, Black-Scholes formula, etc.). But does it do a good job of modelling the real world? If you look at the equation for standard deviation, you will see that it doesn’t differentiate between upside and downside volatility. Ask yourself: do you really care about upside volatility? Evidence from multiple bull markets shows the euphoria that takes place when the stock market rises as a whole. I am not afraid to deduce that people welcome upside volatility.

Then it must be intuitive to accept that downside volatility is a better measure of risk. What metrics can one use to measure it? Below are a few of my favorites.

Sortino Ratio

Sortino = ( r – r(f) ) / sd(downside)

where r is the return, r(f) is the risk-free rate, and the denominator is the downside standard deviation.

Ulcer Performance Index (UPI)

UI = sqrt( mean( ( (p(i) – p(max)) / p(max) )^2 ) )

where p(i) is the price at time i and p(max) is the maximum price reached during the period, so the Ulcer Index (UI) is the root-mean-square percentage drawdown.

UPI = ( r – r(f) ) / UI

where r is the return and r(f) is the risk-free rate.

MAR Ratio

MAR = CAGR / max drawdown
The above ratios are modified versions that I use a lot in strategy testing as measures of goodness. I once concentrated only on the MAR, but have since found the other measures to add value. I personally use the above and a few others.

*A note on the Sortino ratio: there are variations in how this ratio is calculated. Some calculations include the zeros for days with upside returns, and there is disagreement as to which is correct. Readers should be aware of which one they are using.
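To pin the definitions down, here is a minimal Python sketch of the three measures. These are my own implementations under stated conventions (the Sortino variant keeps the zero terms for upside days in the denominator, which is one of the disputed choices), not the exact modified versions used in my testing:

```python
import numpy as np

def sortino_ratio(returns, rf=0.0):
    """(mean excess return) / downside deviation.

    Downside deviation here is the RMS of the negative excess returns,
    keeping zeros for positive days -- conventions vary, see the note above.
    """
    excess = np.asarray(returns, dtype=float) - rf
    downside = np.minimum(excess, 0.0)
    return excess.mean() / np.sqrt(np.mean(downside ** 2))

def ulcer_index(prices):
    """Root-mean-square percentage drawdown from the running maximum price."""
    p = np.asarray(prices, dtype=float)
    running_max = np.maximum.accumulate(p)
    drawdown = (p - running_max) / running_max
    return np.sqrt(np.mean(drawdown ** 2))

def mar_ratio(cagr, max_drawdown):
    """CAGR divided by the magnitude of the worst drawdown."""
    return cagr / abs(max_drawdown)
```

A strategy whose equity curve never pulls back has an Ulcer Index of zero, and a 20% CAGR with a worst drawdown of 10% gives a MAR of 2.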

Dynamic Portfolio Selection

When I first got into systematic trading, I never sat easy with the fact that, after the entire system was built on sound, objective principles, the selection of the portfolio to trade was subjective. How can the most important part of the whole process be left to discretion? Just as our stock forecasts never seem to pan out the way we want, I don’t think selecting a portfolio on the assumption that individual markets will continue to behave the same way will lead to anything accurate.

I went to look at Markowitz’s work and found it fundamentally thought-provoking but technically counter-productive. I can’t really say it’s a waste of time when millions (billions?) are managed this way. Let’s just say their measure of risk is inherently flawed and leave it at that.

The Markowitz material led me to think further about minimizing correlation among the assets I propose to trade. This made a lot of sense: if my system gave a signal on DJIA futures and S&P futures on the same day, it would be unwise to take both. The trouble is that correlation is ever changing, since it is computed from historical data, and the results are not very reliable. (How many of you thought you were diversified enough during 2008?)

Over at the TB forum the other day, I landed on an idea that seems quite promising. It also takes into consideration the many traders who have little capital. The idea is that a starting trader with little capital to risk inherently accumulates more risk than one who starts out with a couple of million dollars in the futures market, because the small trader cannot use diversification to his or her advantage.

Solution: predetermine a fixed number of slots/positions you would like to hold. The number of slots is directly proportional to the total portfolio heat you would carry if a position were open in every slot. The only thing that changes is which markets occupy the slots; they can be replaced by new markets based on criteria like liquidity, relative strength, or trend strength. This way you only take the best of the markets according to your selection criteria. Instead of monitoring a static bunch of markets, you can monitor as many markets as you want and only take signals from the ones your criteria deem best.
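The slot mechanism boils down to a ranking and a cutoff. Here is a toy Python sketch (the helper name, the example symbols, and the scores are all made up for illustration; the ranking criterion could be liquidity, relative strength, trend strength, etc.):

```python
def fill_slots(candidate_scores, n_slots):
    """Keep a fixed number of slots, filled by the best-ranked markets.

    candidate_scores: dict mapping market symbol -> ranking score
    (higher is better). Returns the markets currently occupying the slots;
    signals are only taken in these markets.
    """
    ranked = sorted(candidate_scores, key=candidate_scores.get, reverse=True)
    return ranked[:n_slots]

# monitor many markets, but only ever hold positions in the top 3 slots:
scores = {'CL': 0.8, 'GC': 0.5, 'ES': 0.9, 'ZB': 0.2, 'NG': 0.7}
active = fill_slots(scores, 3)  # ['ES', 'CL', 'NG']
```

Re-running the ranking periodically rotates markets in and out of the slots while total portfolio heat stays capped by the fixed slot count.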

Systematic Edge

Position Sizing Filters

These days, almost all trading systems have entry filters that hope to reduce sour trades and whipsaws, enhancing performance. Most entry filters for trend-following (TF) systems require that the signal be in line with the long-term trend.

Today, I tried searching on the subject of position sizing filters but found nothing. Not only is there limited research on position sizing, there is little to no research on position sizing filters.

Position sizing filters are different from position sizing models. Position sizing models tell you how “much” you should buy or sell (i.e. the number of contracts); these range from martingale to anti-martingale models. Position sizing filters, on the other hand, take it a step further, increasing or decreasing leverage based on external factors. These factors can include volatility, overbought/oversold conditions, or any other criterion in your imagination. Will they improve results? Well, that’s where you should do more testing.


Smoothing Equity Curve- Reducing Drawdown

This subject is by far the most important one in fund management. A lot of fund managers and traders alike fixate on developing new ways to produce an equity curve that is as close to a 45-degree line as possible. I was once told by an experienced fund manager that “success in our business = smooth equity curve.” That’s when you will find hot money flowing into your management. You can have the highest return, but if it comes with a really high drawdown (i.e. 50-80%), not many investors will invest with you.

Although my ultimate goal is to manage a hedge fund with quantitative strategies that pursue absolute return, I want to achieve those results with minimal drawdown/risk. There are different ways to do that.

You can continue searching for better entry techniques, and waste your time there. The other, more useful, way is to apply proper money management.

Money Management

Portfolio Heat: cap the percentage of open risk so that if all positions go sour you only lose x% of your portfolio

Sector Exposure: allocate a maximum percentage of your capital to each sector, preventing highly correlated assets from concentrating your risk

Position Sizing: how much you buy or sell whenever you receive a signal from your system

Volatility Control: high volatility can cause wide equity swings; finding a way to limit it helps lower portfolio volatility

Portfolio Selection and Diversification: trading as many markets as possible and forming a portfolio of the least-correlated assets increases the odds of catching trends and making up for losses

The above are a few pointers i have come across that have shown value in reducing drawdowns.

Money Management Techniques

There are a lot of ways to implement money management (MM) to reduce risk. Some even believe that finding a system that is right 100% of the time means they can forget about MM altogether.

From my own little research I have come up with the following ways to reduce risk…

1. exit strategies (hard stops, trailing stops, etc)
2. trade filtering (trade in direction of trend)
3. position sizing (fixed fractional, optimal f, kelly criterion, etc)
4. diversifying across markets (modern portfolio theory, or ranking on relative strength)
5. diversifying across systems (uncorrelated systems)

Opinions differ on the importance of the above techniques, but each, paired correctly with a system, can reduce risk significantly.

Lately I have realized that I have been focusing too much on the entry side of the market, my excuse being that I am in search of a trading system to complement my TF system. I have decided to take a step back and improve my existing TF system instead. The current system is a breakout model paired with a common trend filter. It is good in that it is not curve-fitted, but its MAR ratio (CAGR/MAX DD), currently 1 on the dot, could be improved if I focused more on the risk management side.

I have been fortunate to be able to talk to Mr. Bob Spear about trading and achieving success in system development. He has many years of experience in the subject and is currently a professional money manager. He has suggested time after time that position sizing is really what I should be gunning for to improve my systems.

Giving more thought to the subject, I feel that position sizing alone really is more important than many of the risk control measures numbered above, because it is a holistic approach to controlling risk system-wide: bet less on losers and more on winners; bet more aggressively during strong equity growth than during periods of minimal growth; stop adding when strategy exposure exceeds x% of equity; and so on. By holistic, I mean that it achieves risk control by incorporating what the other numbered techniques above also try to do.

Having thought about it for a few weeks, I find that position sizing is most commonly a function of risk. While most traders implement a common percent-of-capital risk for each trade, I went on to ask whether position sizing could be a function of more variables, e.g. the equity growth rate.

Idea: bet more aggressively when account equity gets bigger and less when its small.

The idea is not new; I came across it in an archive called “purebytes” (I strongly suggest traders read the material there; it’s amazing). As stated, though, the idea is troubling because bet size does not adjust downwards during equity drawdowns, which cut deeply into your hard-earned profits.

Here’s my solution: use an equity-growth lookback period of, say, 6 months. If the 6-month growth is greater than some percentage, increase your bet size by half the mean average return of the last 12 months. If the last 6-month return is negative, adjust the bet size downwards by the mean average return of the last 12 months. With this bet-sizing algorithm, you bet more while equity growth is trending up and less when it is trending down. The algorithm can be layered on top of volatility-based sizing or others like percent-of-capital; it all depends on your creativity.
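Here is a Python sketch of that rule as I read it. The thresholds are simplified (positive/negative 6-month growth rather than “greater than some percentage”), and the downward adjustment by the 12-month mean is interpreted as shrinking the fraction by its magnitude, so treat all the specifics as placeholders:

```python
import numpy as np

def adjust_bet_size(base_fraction, monthly_returns):
    """Adjust a base bet fraction using trailing equity growth.

    Positive trailing 6-month equity growth: grow the bet by half the mean
    monthly return of the last 12 months. Negative 6-month growth: shrink
    the bet by the magnitude of that 12-month mean.
    """
    r = np.asarray(monthly_returns, dtype=float)
    growth_6m = np.prod(1 + r[-6:]) - 1        # compounded 6-month equity growth
    mean_12m = r[-12:].mean()                  # mean monthly return, last 12 months
    if growth_6m > 0:
        return base_fraction * (1 + 0.5 * mean_12m)
    return base_fraction * (1 - abs(mean_12m))

# steady +1%/month equity growth nudges a 2% bet up; steady losses shrink it:
up_size = adjust_bet_size(0.02, [0.01] * 12)    # 0.02 * 1.005
down_size = adjust_bet_size(0.02, [-0.01] * 12) # 0.02 * 0.99
```

The point is only the shape of the rule: bet size ratchets up with equity momentum and backs off in drawdowns, and it composes with whatever base sizing scheme produces base_fraction.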

This is a new idea, I think? Enjoy…