Portfolio Selection

Optimal Stock Bond Allocation

It’s been more than a month since I last posted. Time flies when you are busy working on the things you enjoy.

After reading a piece on the lacklustre performance of hedge funds versus a standard 60/40 portfolio mix, I started thinking more deeply about stock-bond allocation. In this post I am going to dissect the internal workings of the equity-bond allocation and see whether any tactical overlays can improve a static allocation mix.

Data: I will be using monthly data from Datastream and Bloomberg; the S&P 500 and 10-year Treasuries, all total return, from January 1988 to May 2012.

Here is a backtest helper function wrapped around SIT:

require(TTR)
require(quantmod)

setInternet2(TRUE) # Windows-only; skip this line on other platforms
con = gzcon(url('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', 'rb'))
source(con)
close(con)

btest<-function(data1,allocation,rebalancing){
  # set up the SIT data list: prices plus empty weight and execution.price matrices
  data <- list(prices=data1[,1:2])
  data$weight<-data1[,1:2]
  data$weight[!is.na(data$weight)]<- NA
  data$execution.price<-data1[,1:2]
  data$execution.price[!is.na(data$execution.price)]<-NA
  data$dates<-index(data1[,1:2])
  prices = data$prices   
  nperiods = nrow(prices)
  # apply the target allocation on every rebalancing date
  data$weight[] = NA  
  data$weight[1,] = allocation
  period.ends = seq(1,nrow(data$prices),rebalancing)-1 
  period.ends<-period.ends[period.ends>0]
  data$weight[period.ends,]<-repmat(allocation, len(period.ends), 1)
  # convert weights to share counts and run the backtest
  capital = 100000
  data$weight[] = (capital / prices) * data$weight
  model = bt.run(data, type='share', capital=capital)
  return(model)
}

This simply runs the backtest for the provided allocation and rebalancing period for two assets. To check the performance of every equity allocation from 0 to 1 in increments of n%, I will be using the following wrapper function:

sensitivity<-function(data1,rebalancing,allocation.increments){
  equity.allocation<-seq(0,1,allocation.increments) #allocation grid (Y-axis)
  eq = matrix(NA, nrow=nrow(data1), ncol=1)

  #run the backtest for every allocation mix and collect the equity curves
  for(i in equity.allocation) {
    allocation <- matrix(c((1-i),i), nrow=1)
    temp<-btest(data1,allocation,rebalancing)
    eq<-cbind(eq,temp$equity)
  }
  eq<-eq[,-1] #drop the NA seed column
  colnames(eq) = 1-equity.allocation

  #CAGR for each allocation
  cagr<-matrix(NA,nrow=ncol(eq),ncol=1)
  for(i in 1:ncol(eq)){
    cagr[i]<-compute.cagr(eq[,i])
  }
  cagr<-as.data.frame(cbind(1-equity.allocation,cagr))
  colnames(cagr)<-c('Equity Allocation','CAGR')

  #Sharpe ratio for each allocation
  sharpe<-matrix(NA,nrow=ncol(eq),ncol=1)
  eq.ret<-ROC(eq)
  eq.ret[is.na(eq.ret)]<-0
  for(i in 1:ncol(eq)){
    sharpe[i]<-compute.sharpe(eq.ret[,i])
  }
  sharpe<-as.data.frame(cbind(1-equity.allocation,sharpe))
  colnames(sharpe)<-c('Equity Allocation','Sharpe')
  return(list(eq=eq,cagr=cagr,sharpe=sharpe))
} 

Running the sensitivity function in increments of 5% provides:


As you increase the equity allocation, the portfolio becomes more aggressive, which shows clearly in the chart above. So what is the optimal allocation based on the highest CAGR or Sharpe ratio? The sensitivity function also returns a list of the performance of each equity allocation, and the chart:

In the above chart, I’ve graphed two lines, each with its own axis. From the chart, the equity allocation that provided the highest Sharpe ratio is roughly 0.25. This is close to a risk parity allocation; historical data show that such a mix is very near optimal risk parity.

Diving deeper, I checked the highest-Sharpe equity allocation over each successive 12-month period from 1988 to 2012. In other words, this takes us back in time!
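The rolling search can be sketched in a few lines of base R. The monthly return series below are synthetic stand-ins (not the actual Datastream/Bloomberg data), and the window logic mirrors the successive 12-month periods described above:

```r
# Synthetic stand-ins for S&P 500 and 10y Treasury monthly total returns
set.seed(1)
eq.ret   <- rnorm(288, 0.008, 0.045)
bond.ret <- rnorm(288, 0.005, 0.020)
alloc <- seq(0, 1, 0.05)   # equity allocation grid

# For one 12-month window, return the equity weight with the highest Sharpe
max.sharpe.alloc <- function(e, b) {
  sharpe <- sapply(alloc, function(w) {
    r <- w * e + (1 - w) * b
    sqrt(12) * mean(r) / sd(r)   # annualized Sharpe, zero risk-free rate
  })
  alloc[which.max(sharpe)]
}

# Successive non-overlapping 12-month windows
starts <- seq(1, length(eq.ret) - 11, 12)
best <- sapply(starts, function(s)
  max.sharpe.alloc(eq.ret[s:(s + 11)], bond.ret[s:(s + 11)]))
```

On the real data, plotting `best` against the window start dates reproduces the year-by-year max-Sharpe allocation chart.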


From this chart, the maximum-Sharpe allocation varied significantly from year to year. Whenever a crisis hit, the allocation to bonds dominated that to equities, and vice versa in bull markets. This makes intuitive sense, as you would want to be in risk-off mode during bear markets.

The last chart shows the rolling 12-month performance of each equity allocation from 0 to 1 in increments of 5%.


In another post, I will follow up on whether there are any tactical overlays that can improve performance.



Diversification through Equity Blending

In a sound asset allocation framework, it is never a good idea to overweight the risky portion of the portfolio. One example is the traditional 60/40 portfolio, whereby an investor allocates 60% to equities and 40% to bonds. Such an allocation may intuitively make sense because you “feel” diversified, but when extraordinary events happen, you will be less protected. Below is the performance of the 60/40 allocation rebalanced monthly since 2003. Note I used SPY and IEF for the mix.

In this post, I would like to show some ideas that reduce risk and increase return by bringing in a different type of return stream. Traditional asset allocation focuses mainly on optimal diversification across assets; here I will focus on allocation across strategies. From my own research, there are only so many asset classes an individual can mix to form portfolios, not to mention the less-than-reliable cross-correlation between asset classes in market turmoil (2008). To bring stability to the core portfolio, I will incorporate Harry Browne’s Permanent Portfolio, a return stream composed of equal-weight allocations to equities, gold, bonds, and cash. For the more aggressive part, I will use daily equity mean reversion (RSI2). Note that a basic strategy overlay on an asset can produce a return stream with diversification benefits. Below are three equity curves: black, green, and red represent mean reversion, the 60/40 blend of the two strategies, and the Permanent Portfolio, respectively.

To summarize, I have taken two return streams derived from strategies traded over assets and combined them into a portfolio. The allocation is 40% to the risky asset (mean reversion/MR) and 60% to the conservative asset (Permanent Portfolio/PP). And here are the performance metrics.
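The 40/60 blend can be sketched in a few lines of R. Here `mr.ret` and `pp.ret` are synthetic stand-ins for the two strategies' daily returns, and the constant-mix weighting approximates a frequently rebalanced blend:

```r
# Hypothetical daily return streams standing in for MR and PP
set.seed(2)
mr.ret <- rnorm(500, 0.0006, 0.010)   # aggressive: mean reversion
pp.ret <- rnorm(500, 0.0003, 0.004)   # conservative: Permanent Portfolio

# 40% MR / 60% PP constant-mix blend and its equity curve
blend.ret <- 0.4 * mr.ret + 0.6 * pp.ret
blend.equity <- cumprod(1 + blend.ret)
```

With the actual strategy return streams in place of the synthetic ones, `blend.equity` is the combined curve shown in the chart.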

Traditional represents the traditional 60/40 allocation to equities and bonds, while B-H represents buy-and-hold of the S&P 500. This superficial analysis is only meant to illustrate the powerful idea of blending the equity curves of assets and trading strategies. When the traditional search for diversification becomes fruitless, incorporating different strategies can have a huge impact on your underlying performance.

I will come back later with the R code, as it’s pretty late and I have class tomorrow!

Parameter Insensitive Models

In my opinion there are two enemies of successful system development. One is the “exploitability” of the anomaly you are trying to extract profit from. The other is the parameters you choose to exploit the anomaly with. The “exploitability” aspect is something you can’t control much, as the profitability of any anomaly is in constant flux. One example is the profitability of trend following in general: when markets are choppy, it’s tough for any trend follower to extract sizeable profits.

The other area, which you have absolute control over, is the parameters you choose to trade with. The more varied the parameter selection, the more robust you are, as the increased diversification reduces the probability of loss if any one parameter set suffers a lack of performance. Parameters here can literally be the lookback days of an MA crossover strategy, or the idea can extend to similar models like breakouts.

In the following experiment, I will test the performance of 5 different models. They are all mean reverting in nature.

Model1 (rsi1): RSI(2) 50/50

Model2 (rsi2): RSI(2) Buy: <30 Short: >70

Model3 (rsi3): RSI(2) Buy: <30 Sell: >50 Short: >70 Cover: <50

Model4 (no.reb): no rebalance but equal weight

Model5 (reb): equal weight rebalance weekly
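The three RSI variants above can be sketched as follows. The price series is synthetic, and the signal conventions (1 = long, -1 = short, 0 = flat) are my own shorthand for the rules listed above:

```r
require(TTR)

# Synthetic price series standing in for the traded asset
set.seed(3)
price <- 100 * cumprod(1 + rnorm(300, 5e-4, 0.01))
rsi <- RSI(price, n = 2)

# Model 1 (rsi1): long below 50, short above 50
sig1 <- ifelse(rsi < 50, 1, -1)

# Model 2 (rsi2): long below 30, short above 70, flat otherwise
sig2 <- ifelse(rsi < 30, 1, ifelse(rsi > 70, -1, 0))

# Model 3 (rsi3): enter at the extremes, exit on a cross back through 50
sig3 <- numeric(length(rsi))
pos <- 0
for (t in seq_along(rsi)) {
  if (is.na(rsi[t])) { sig3[t] <- 0; next }
  if (pos ==  1 && rsi[t] > 50) pos <- 0   # sell longs above 50
  if (pos == -1 && rsi[t] < 50) pos <- 0   # cover shorts below 50
  if (rsi[t] < 30) pos <-  1
  if (rsi[t] > 70) pos <- -1
  sig3[t] <- pos
}
```

Models 4 and 5 then hold the three resulting return streams at equal weight, either left to drift or rebalanced back to one-third each week.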

Parameter insensitive models rest on the idea that no one knows what the future holds or how each parameter will perform. Instead of relying on past data to select something that “was” consistent, parameter insensitive models avoid putting all eggs in one basket. The following is the equity curve of the strategy.

The focus should be on the bold equity curve, which rebalances weekly. From the graph, it is highly correlated with the other equity curves, but it is smoother than the individual strategies’ curves. What I am trying to convey is that the return to any strategy is attributable to the underlying health of the anomaly (something you cannot control) plus the efficiency of the parameters used to extract profit (something you can control). The next chart is the drawdown.

If we had unfortunately chosen to trade rsi2 (blue), our drawdowns would have been markedly different. Next, a stacked horizon plot of rolling 252-day returns.

The first three panels are the models rsi1, rsi2, and rsi3, and the fourth and fifth are no-rebalance and rebalance. As you can see, the overall performance is reduced, but when certain individual models underperform, the aggregate rebalancing model mitigates it quite successfully. And finally, the numbers…

One little experiment cannot prove anything. I am still trying the idea out in many different ways and hope that through further research, I will arrive at some more concrete conclusions.

# RSI parameter insensitive model
# test rebalancing to equal weight versus holding a constant weight
# set the working directory correctly and import your own equity curves
# for blending, I passed in three equity curves

require(zoo)
require(xts)
require(TTR)

data<-read.csv("rsi.csv") #set your own input files

#conversion to zoo object
data$date<-as.Date(data$date,"%Y-%m-%d")
rsi2.50.50<-zoo(data$rsi2.50,data$date)
rsi2.extreme<-zoo(data$rsi2.extreme,data$date)
rsi2.semi<-zoo(data$rsi2.semi,data$date)
data<-merge(rsi2.50.50,rsi2.extreme)
data<-merge(data,rsi2.semi)

names(data)<-c("rsi1",'rsi2','rsi3')
ret<-ROC(data)
ret[is.na(ret)]<-0

#normalize equity curves
ret$rsi1.equity<-cumprod(1+ret$rsi1) #simulated equity
ret$rsi2.equity<-cumprod(1+ret$rsi2)
ret$rsi3.equity<-cumprod(1+ret$rsi3)
ret$equity<-ret$rsi1.equity+ret$rsi2.equity+ret$rsi3.equity #add them together

ret$equity<-ROC(ret$equity)
ret$equity[is.na(ret$equity)]<-0
ret$equity<-cumprod(1+ret$equity)

rsi.equity1<-ret[,-(1:3)] #same allocation through time
rsi.equity2<-as.xts(rsi.equity1[,-4])

###############################
#Rebalancing of equity
###############################
# Load Systematic Investor Toolbox (SIT)
setInternet2(TRUE)
con = gzcon(url('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', 'rb'))
source(con)
close(con)

#*****************************************************************
# prep input
#******************************************************************
data <- list(prices=rsi.equity2,
 rsi1=rsi.equity2$rsi1,
 rsi2=rsi.equity2$rsi2,
 rsi3=rsi.equity2$rsi3) #need to create new list to store stuff
#weight with n column as input data
data$weight<-rsi.equity2
data$weight[!is.na(data$weight)]<- NA
#execution price
data$execution.price<-rsi.equity2
data$execution.price[!is.na(data$execution.price)]<-NA
#dates
data$dates<-index(rsi.equity2)

#*****************************************************************
# Rebalancing Algo
#******************************************************************
prices = data$prices
nperiods = nrow(prices)
target.allocation = matrix(c(0.33,0.33,0.33), nrow=1)

# Rebalance periodically
models = list()

period<-'weeks' #change to whatever rebalancing period you want
data$weight[] = NA
data$weight[1,] = target.allocation

period.ends = endpoints(prices, period)
period.ends = period.ends[period.ends > 0]
data$weight[period.ends,] = repmat(target.allocation, len(period.ends), 1)

capital = 100000
data$weight[] = (capital / prices) * data$weight

#this works only when your input prices are an xts object rather than zoo
models[[period]] = bt.run(data, type='share', capital=capital)

#*****************************************************************
# Create Report
#******************************************************************
#all 5 equity curves
equity1<-merge(rsi.equity1,models$weeks$equity)
names(equity1)<-c('rsi1','rsi2','rsi3','no.reb','reb')
equity1<-as.xts(equity1)

#print out each strategy's return statistics
for(i in 1:ncol(equity1)) {
 ret<-ROC(equity1[,i])
 ret[is.na(ret)]<-0
 ret<-as.xts(ret)
 dd<-Drawdowns(ret,geometric=T) #per-strategy drawdown series
 print(compute.cagr(equity1[,i]))
 print(maxDrawdown(ret))
 print(compute.cagr(equity1[,i])/maxDrawdown(ret)) #CAGR over max drawdown
 print(compute.sharpe(ret))
}

#rolling 252-day returns; horizonplot comes from latticeExtra
require(latticeExtra)
ret<-ROC(equity1,252)
horizonplot(ret,origin=0,scales = list(y = list(relation = "same")),colorkey=T)

require(ggplot2)
df<-equity1
df <- data.frame(time=index(df), coredata(df))
#plot equity curves

ggplot(df, aes(x=time)) +
 geom_line(aes(y=rsi1,colour="rsi1")) +
 geom_line(aes(y=rsi2,colour="rsi2")) +
 geom_line(aes(y=rsi3,colour="rsi3")) +
 geom_line(aes(y=no.reb,colour="no.reb")) +
 geom_line(aes(y=reb,colour="reb"),size=1.2)
#plot drawdowns, computed column-wise over all five equity curves
dd.ret<-ROC(equity1)
dd.ret[is.na(dd.ret)]<-0
df1<-Drawdowns(dd.ret,geometric=T)
df1<-data.frame(time=index(df1), coredata(df1))

ggplot(df1, aes(x=time)) +
 geom_line(aes(y=rsi1,colour="rsi1")) +
 geom_line(aes(y=rsi2,colour="rsi2")) +
 geom_line(aes(y=rsi3,colour="rsi3")) +
 geom_line(aes(y=no.reb,colour="no.reb")) +
 geom_line(aes(y=reb,colour="reb"),size=1.2)

On the Theory of Asset Allocation Part 2

In this post, I would like to show some research I have done on utilizing portfolio theory to efficiently and optimally allocate capital to a pair of systems.

Traditional theory applied outright can be problematic. As I mentioned in the previous post, the assumptions that go in are not stable in the long run (e.g. expected returns), so the optimized portfolio's performance will deviate significantly from backtest results. This is similar to developing a system on data that no longer reflects the current market state, which will ultimately bankrupt you.

In my opinion, there are ideas within the traditional framework that are useful. Expected returns of individual assets may not be reliable, but the expected return of a well designed system should be, in a probabilistic sense.

For the past year, I have started to view everything as return streams. By this I mean that rather than differentiating between assets and trading systems applied to assets, one should look at them equally. Although this may sound obvious, I will come back to this subject later and expand on it. This way of thinking has helped me go against traditional methods of system design and build more robust systems that are model free and parameter insensitive.

In portfolio theory, the lower the correlation between instruments the better. In this experiment, I will be referring to two trading systems that are different in nature: mean reversion and trend following. Both will be applied to the same asset, SPY. System details:

Their daily return correlation is 0.04. The following is their equity curve.

Both of them are profitable and aren’t optimized at all. The test period is 1995-2012. Daily data from Yahoo Finance were used, and no commissions or slippage were taken into account. Next is their risk-reward chart as popularized by traditional theory.

MR = mean reversion, TF = trend following, SPY = the ETF. From an asset allocation point of view, MR seems the most desirable of the three. Up next, I show the efficient frontier of the two systems from 2000-2004; then, based on the minimum variance (MV) allocation in that period, I will forward test it. More concretely, I will compare the portfolio-level equity curves from trading the two systems together.

The MV allocation is the leftmost point on the curve; it is the allocation that minimizes portfolio variance. The following are the equity curves from 2005-2012 with different allocations.
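For two return streams, the minimum variance weight has a closed form: w1 = (s2^2 - rho*s1*s2) / (s1^2 + s2^2 - 2*rho*s1*s2). A base-R sketch with illustrative inputs (the volatilities below are made up, not the actual system statistics; only the 0.04 correlation comes from the text above):

```r
# Closed-form minimum-variance weights for a two-asset portfolio
# s1, s2: return volatilities of the two systems; rho: their correlation
min.var.weight <- function(s1, s2, rho) {
  cov12 <- rho * s1 * s2
  w1 <- (s2^2 - cov12) / (s1^2 + s2^2 - 2 * cov12)
  c(w1 = w1, w2 = 1 - w1)
}

# Illustrative call: a volatile TF system vs a quiet MR system, rho = 0.04
w <- min.var.weight(0.15, 0.05, 0.04)
```

As expected, the weight tilts heavily toward the lower-volatility stream, in the same spirit as the ~19% TF / ~81% MR result below.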

Apologies for not plotting the legend! The red curve is buy-and-hold of SPY, the blue is an equal-weight allocation between the two systems, and the orange curve is the MV allocation (~19% TF and ~81% MR). From a pure return perspective, the MV allocation produced the most return, but from a risk-reward standpoint, the equal-weight allocation is better. In my optimization process, I found that the allocation maximizing the Sharpe ratio would be 100% to the MR system. Now the numbers…

If I had to choose, I would go with the equal-weight allocation, as in my opinion it has the features that will let me sleep at night. I am not going to discuss the results in more depth as it’s well past midnight; maybe another time I will come back and do something different.

Note: the portfolio level testing was simulated using tradingblox while the optimization and plotting were done in R. If you have any questions or comments, please leave a comment below! Email me if you want the TB system files.

Dynamic Portfolio Selection

When I first started to get into systematic trading, I was never at ease with the fact that, after the entire system was built on sound, objective principles, the selection of the portfolio to trade was subjective. How can the most important part of the whole process be left to discretion? Just as our stock forecasts never seem to pan out the way we want, I don’t think selecting the portfolio on the assumption that individual markets will continue to behave the same way will lead to anything accurate.

I went to look at Markowitz’s work and found it fundamentally thought-provoking but technically counter-productive. I can’t really say it’s a waste of time when millions (billions?) are managed this way. Let’s just say their measure of risk is inherently flawed and leave it at that.

The Markowitz material led me to think further about minimizing correlation among the assets I propose to trade. This made a lot of sense to me: if my system gave a signal on DJIA futures and S&P futures on the same day, it would be unwise to take both. But correlation is ever-changing, and since it is computed from historical data, the results are not very reliable. (How many of you thought you were diversified enough during 2008?)
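The instability is easy to see even on synthetic data: a rolling-window correlation between two series with a fixed true relationship still swings around a lot. A quick base-R sketch (all numbers here are made up):

```r
# Two synthetic return series with a fixed underlying correlation
set.seed(5)
x <- rnorm(1000)
y <- 0.3 * x + rnorm(1000)

# Rolling 60-observation sample correlation
window <- 60
roll.cor <- sapply(window:length(x), function(t)
  cor(x[(t - window + 1):t], y[(t - window + 1):t]))
```

Plotting `roll.cor` shows the sample correlation wandering well away from its true value, which is exactly the problem with relying on historical correlation for diversification.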

Over at the TB forum the other day, I landed on an idea that seems quite promising. It also takes into consideration the many traders who have little capital: a starting trader with little capital to risk inherently accumulates greater risk than one who starts with a couple of million dollars in the futures market, because the small trader cannot use diversification to his or her advantage.

Solution: predetermine a fixed number of slots/positions you would like to have. The number of slots is directly proportional to the amount of portfolio heat you can take on if a position is open in every slot. The only thing that can change is the markets that occupy each slot; they can be replaced by new markets based on criteria like liquidity, relative strength, trend strength, etc. This way you take only the best markets according to your selection criteria. Instead of monitoring a static bunch of markets, you can monitor as many markets as you want and take signals only from the ones your criteria deem best.
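The slot-filling step can be sketched in base R. The market names and "trend strength" scores below are hypothetical placeholders for whatever ranking criterion you choose:

```r
# Fixed-slot portfolio selection: rank all candidate markets on a
# criterion and fill the available slots from the top of the ranking
set.seed(4)
n.slots <- 5
markets <- paste0("MKT", 1:20)       # candidate universe
trend.strength <- runif(20)          # hypothetical ranking criterion
names(trend.strength) <- markets

# take signals only from the top-ranked markets until the slots are full
selected <- names(sort(trend.strength, decreasing = TRUE))[1:n.slots]
```

Rerunning the ranking periodically swaps markets in and out of the slots while the total portfolio heat stays capped by `n.slots`.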

Systematic Edge