I’ve spent quite some time with PCA over the last couple of months and I thought I would share some of my notes on the subject. I must caution that this is one of the longest posts I’ve done. But I find the subject very interesting, as it has a lot of potential in the asset allocation process.

PCA stands for principal component analysis and it is a dimensionality reduction procedure to simplify your dataset. According to Wikipedia, “PCA is a mathematical procedure that uses orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components.” It is designed such that each successive principal component explains a decreasing amount of the variation in your dataset. Below is a scree plot of the percentage of variance explained by each factor on a four asset class universe.

PCA is done via an eigen decomposition on a square matrix, which in finance is either a covariance matrix or a correlation matrix of asset returns. Through eigen decomposition, we get eigenvectors (loadings) and eigenvalues. Each eigenvalue, which is the variance of that factor, is associated with an eigenvector. The following equation relates the eigenvectors and eigenvalues.

A square matrix A times an eigenvector E equals the eigenvalue (lambda) times that same eigenvector: $AE = \lambda E$. While I must confess I was bewildered by the potential applications of such values and vectors at school, they make much more sense when placed in a financial context.
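To make the relationship concrete, here is a quick numerical check. It is a Python/NumPy sketch (rather than the R used in this post) on a small made-up symmetric matrix, purely for illustration:

```python
import numpy as np

# A small symmetric matrix, standing in for a covariance matrix
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# eigh is the decomposition for symmetric matrices; eigenvalues come back ascending
evals, evecs = np.linalg.eigh(A)

# Check A E = lambda E for every eigenpair (eigenvectors are the columns of evecs)
for lam, e in zip(evals, evecs.T):
    assert np.allclose(A @ e, lam * e)
```

The eigenvalues also sum to the trace of A, which is a handy sanity check when debugging a decomposition.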

In asset allocation, PCA can be used to decompose a matrix of returns into statistical factors. These latent factors usually represent unobservable risk factors that are embedded inside asset classes. Allocating across them may therefore improve one’s portfolio diversification.

The loadings (eigenvectors) of a PCA decomposition can be treated as principal factor weights. In other words, they represent asset weights towards each principal component portfolio. The total number of principal portfolios equals the number of principal components. Not surprisingly, the variance of each principal portfolio is its corresponding eigenvalue. Note that the loadings are designed to have values ranging from +1 to -1, meaning short sales are entirely possible.
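To see that each principal portfolio’s variance really is its eigenvalue, here is a small simulated check. It is a Python/NumPy sketch with made-up random returns, not the post’s actual fund data:

```python
import numpy as np

# Simulated daily returns for 4 hypothetical assets (not the fund data in this post)
rng = np.random.default_rng(0)
ret = rng.normal(0.0, 0.01, size=(500, 4))

cov = np.cov(ret, rowvar=False)
evals, evecs = np.linalg.eigh(cov)       # factor variances and loadings

# Each column of evecs is a weight vector defining one principal portfolio
pc_ret = ret @ evecs

# The variance of each principal portfolio equals its eigenvalue
assert np.allclose(np.var(pc_ret, axis=0, ddof=1), evals)

# Loadings lie in [-1, 1] (unit-length eigenvectors), and short positions do occur
assert np.all(np.abs(evecs) <= 1.0)
```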

Since I am not a math major, I don’t really understand math equations until I actually see them programmed out. Let’s get our hands dirty with some data:


rm(list=ls())
require(RCurl)
require(quantmod) #getSymbols below needs quantmod
sit = getURLContent('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', binary=TRUE, followlocation = TRUE, ssl.verifypeer = FALSE)

con = gzcon(rawConnection(sit, 'rb'))
source(con)
close(con)

data <- new.env()

tickers<-spl("VBMFX,VTSMX,VGTSX,VGSIX")
getSymbols(tickers, src = 'yahoo', from = '1980-01-01', env = data, auto.assign = T)

bt.prep(data, align='remove.na', dates='1990::2013')

prices<-data$prices
ret<-na.omit(prices/mlag(prices) - 1)

To calculate the portfolio return, we simply multiply the weight vector by the transposed return matrix. In R:

weight<-matrix(1/ncol(ret),nrow=1,ncol=ncol(ret))
p.ret<-weight %*% t(ret)

Note that if the weight input is a multi-row matrix, you will actually get back one portfolio return series per row of weights. This is simply the portfolio return series, assuming daily rebalancing.

There are about three different ways of doing a PCA in R. I will show a foundational way and a functional way of doing it. The two-step foundational process helps show what’s going on under the hood.

#PCA Foundational
demean<-scale(coredata(ret), center=TRUE, scale=FALSE)
covm<-cov(demean)
evec<-eigen(covm, symmetric=TRUE)$vectors #eigenvectors
eval<-eigen(covm, symmetric=TRUE)$values #eigenvalues

#PCA Functional
pca<-prcomp(ret)
evec<-pca$rotation #eigenvectors
eval<-pca$sdev^2 #eigenvalues

The foundational way uses the built-in “eigen” function to extract the eigenvectors and eigenvalues. It requires that you demean the data (the “scale” function) before calculating the covariance matrix, and the results come back in an R list. The functional way, “prcomp”, just takes in the return matrix and does the rest.

After calculating the eigenvectors, eigenvalues, and the covariance matrix, the following identity will hold: $E^{\top}\Sigma E = \Lambda$. Here E and Lambda are, again, the eigenvector matrix and the diagonal matrix of eigenvalues respectively, and Sigma is the covariance matrix of the demeaned returns. In R, this can be checked with:

diag(t(evec) %*% covm %*% evec) #recover the eigenvalues

Now we have all the components to calculate the principal portfolios. Their return series are simply the latent factors that are embedded inside asset classes. Each asset class is exposed to each factor fairly consistently through time, which may help in understanding the inherent risk structure. To calculate the principal component portfolios, we use $r_{PC} = E^{-1}r^{\top}$. This is very intuitive: just as earlier, the eigenvectors are “weights,” so applying their inverse mapping to the returns yields the N different principal portfolio return streams. In R:

inv.evec<-solve(evec) #inverse of the eigenvector matrix
pc.port<-inv.evec %*% t(ret)

In Meucci’s paper “Managing Diversification”, he showed that the same mapping, $w_{p} = E^{-1}w$, converts a vector of weights from the asset space to the principal component space. More specifically, given a vector of asset weights, one can now show the exposure to each principal component. Next I would like to show how, from the above equation, one can calculate the exposure of traditional portfolio optimization weights to each principal component factor.
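For readers who want a self-contained numerical check of the asset-space/PC-space round trip, here is a Python/NumPy sketch on simulated returns. Variable names mirror the R code, but the data is random, not the fund universe above:

```python
import numpy as np

rng = np.random.default_rng(1)
ret = rng.normal(0.0, 0.01, size=(500, 4))   # simulated returns, 4 assets

cov = np.cov(ret, rowvar=False)
evals, evecs = np.linalg.eigh(cov)

inv_evec = np.linalg.inv(evecs)              # solve(evec) in the R code
pc_port = inv_evec @ ret.T                   # principal portfolio returns

w = np.full(4, 0.25)                         # equal asset weights
factor_exposure = inv_evec @ w               # the same weights in PC space

# The portfolio return is identical whether computed in the asset space
# or reconstructed from the principal portfolios
p_ret = w @ ret.T
p_ret1 = factor_exposure @ pc_port
assert np.allclose(p_ret, p_ret1)

# The principal portfolios are uncorrelated, with variances equal to the eigenvalues
assert np.allclose(np.cov(pc_port), np.diag(evals))
```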
I will be using the same universe of assets and applying Minimum Variance, Risk Parity, Max Diversification, Equal Risk Contribution, Equal Weight, and David Varadi et al.’s Minimum Correlation Algorithm (mincorr2). As you can see, risk-based portfolio optimization in the traditional asset space leads to excessive exposure towards the interest rate factor (bonds). The general equity risk factor (factor 1) accounts for the second largest concentration. Prudent investors looking at this chart will immediately notice the diversification potential of allocating to factors 2 and 3. In R, the principal component exposure equation can be calculated as follows:

factor.exposure<-inv.evec %*% t(weight)

Through simple algebra, one can convert back and forth between the asset space and the principal space easily. In the following example, I am reconstructing the portfolio return using equal weights.

pc.port<-as.xts(t(pc.port)) #correct dimension of principal portfolio returns
p.ret1<-t(factor.exposure) %*% t(pc.port)

If you are following along, you will notice that the equity curves derived from the variables “p.ret” and “p.ret1” are identical. This confirms that we have correctly converted back to the asset space. Next, to calculate the variance contribution of the n-th principal component we can simply use:

$V_{n} = w_{p,n}^{2}\lambda_{n}$

A neat aspect is that from the above equation we can arrive at the portfolio variance in the asset space (we can sum because the principal components are uncorrelated):

$\sigma^{2} = \sum_{n} V_{n}$

The following R code confirms that the variance is the same in the PC space and the asset space.

sum((factor.exposure^2)*eval) #variance from the PC space
var(as.vector(p.ret)) #portfolio variance from equal weight

With this as proof, one can actually reformulate the minimum variance portfolio in the principal space.
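The variance decomposition can be verified numerically as well. Here is a Python/NumPy sketch on simulated data (not the R code or universe above):

```python
import numpy as np

rng = np.random.default_rng(2)
ret = rng.normal(0.0, 0.01, size=(500, 4))   # simulated returns, 4 assets
cov = np.cov(ret, rowvar=False)
evals, evecs = np.linalg.eigh(cov)

w = np.full(4, 0.25)                         # equal weight, asset space
w_p = np.linalg.inv(evecs) @ w               # exposure to each principal component

# V_n = w_p[n]^2 * lambda[n]: variance contributed by the n-th PC
V = w_p**2 * evals

# Because the PCs are uncorrelated, the contributions add up
# to the usual asset-space portfolio variance w' Sigma w
assert np.allclose(V.sum(), w @ cov @ w)
```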
We know that portfolio variance can be calculated according to the following formula:

$\sigma^{2} = w^{\top}\Sigma w$

Just to repeat, given a set of asset weights we can calculate our exposure to each principal factor using $w_{p} = E^{-1}w$. Isolating the asset weights, substituting, and then simplifying:

$w = Ew_{p}$

$\sigma^{2} = (Ew_{p})^{\top}\Sigma Ew_{p} = w_{p}^{\top}E^{\top}\Sigma Ew_{p}$

Since we know that $E^{\top}\Sigma E = \Lambda$, portfolio variance is simply

$\sigma^{2} = w_{p}^{\top}\Lambda w_{p}$

This is intuitive: since the correlations between principal portfolios are zero, the covariances are all zero too, and what is left behind are the variances of the eigen portfolios. Minimizing the above and converting the weights back to the asset space should give you weights equivalent to an MVO. While I understand there is little practicality in the above steps, as they add complexity and computing time, I just wanted to illustrate that these two spaces are tied together in many respects.

This post is going to get out of hand if I don’t stop. As you can see, PCA is a very interesting technique: it gives you access to latent factors and shows how they interact with the asset space. I hope curious readers will go on exploring, and please post below if you stumble across anything interesting. Thanks for reading, Mike

It’s been almost two months since I posted. Finishing the school year off with exams and moving twice forced me to put the blog on hold. I hope to post more in the future! Today I humbly attempt to formulate in R the maximum decorrelation algorithm for constructing portfolios. This method was formulated by Peter Christoffersen et al. (a fellow Canadian at the Rotman School of Management) and presented by EDHEC in a paper called “Scientific Beta Maximum Decorrelation Indices”. For those interested in asset allocation and risk management, EDHEC has a treasure trove of papers and research.
In traditional mean variance optimization, we minimize portfolio risk given an estimate of the covariance matrix. More specifically, we need to estimate both volatilities and correlations, which together construct the covariance. The objective function to minimize is:

$min$ $w'\Sigma w$

The problem with portfolio optimization models is that we are making forecasts about future covariance structures. As these are unlikely to hold in the future, what is optimal today may not be optimal in the next period. This is what most practitioners term “estimation error”. Over the years there have been different ways to overcome this. Methods ranging from covariance shrinkage to re-sampled efficient frontiers are the most widely known. Some have instead scrapped the entire optimization process and focused on simple heuristic algorithms for estimating portfolio weights. The Maximum Decorrelation portfolio attempts to reduce the number of inputs and uses solely the correlation matrix as its main input assumption. Instead of focusing on volatility, the strategy assumes that individual asset volatilities are identical. The objective function to maximize is therefore:

$max$ $1 - w'\rho w$

The idea is that with less to estimate, estimation error should be lower. In R, the objective function becomes:

max.decorr<-function(weight, correl){
weight <- weight / sum(weight)
obj<-1-(t(weight) %*% correl %*% weight)
return(-obj)
}

I am using R’s optim function. This is my first time formulating an objective function from scratch. While I am 90% sure I am correct, I am but a student and am all ears if there are any mistakes or errors (or a more efficient way of implementing it). Please leave comments below :). I took the algorithm for a test drive and below are the results for the standard 10 asset classes. For benchmark purposes, I have used minimum variance and equal weight portfolios.
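As a sanity check on the objective, here is a Python/NumPy sketch (simulated, not the R implementation above) showing that for a constant-correlation matrix, where every pair of assets correlates identically, the maximum decorrelation solution is equal weight, as intuition suggests:

```python
import numpy as np

def max_decorr_obj(weight, correl):
    # Mirrors the R function: normalize to fully invested, negate for a minimizer
    w = weight / weight.sum()
    return -(1.0 - w @ correl @ w)

# Constant-correlation matrix: every pair of assets correlates at 0.5
correl = np.full((4, 4), 0.5)
np.fill_diagonal(correl, 1.0)

# With identical pairwise correlations, equal weight should be optimal
w_eq = np.full(4, 0.25)
best = max_decorr_obj(w_eq, correl)

rng = np.random.default_rng(3)
for _ in range(1000):
    w = rng.dirichlet(np.ones(4))    # random long-only, fully invested weights
    assert max_decorr_obj(w, correl) >= best - 1e-12
```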
The Max Decorr strategy earned higher returns but with higher volatility, hence the lower Sharpe ratio compared to Min Var. Code can be found here: Dropbox. Thanks for reading, Mike

I did a post last October (here) looking at varying the allocation between stocks and bonds, and at the end I hinted at a tactical overlay between the two asset classes. Six months later, I finally found a decent overlay I feel may hold value. In a paper called “Principal Components as a Measure of Systemic Risk” (SSRN), Kritzman et al. presented a method for identifying “fragile” market states. To do this, they constructed the Absorption Ratio:

$AR = \frac{\sum _{i=1}^{n}\sigma _{Ei}^{2}}{\sum _{j=1}^{N}\sigma _{Aj}^{2}}$

The sigma in the numerator represents the variance of the ith eigenvector, while the one in the denominator is the variance of the jth asset. In the paper, n = 1/5 of the total number of assets (N). The interpretation is simple: the higher the ratio, the more “fragile” the market state. The intuition behind this ratio is that when it is high, risk is very concentrated; when it is low, risk is dispersed and spread out. Think weak and strong. Following is the raw AR through time of the DJ 30 components. As you can see, the ratio spikes during the tech bubble and the recent financial crisis. What would it look like when used as a filter? Below are two pictures comparing the signals generated by the 200-day SMA and the standardized AR. Pretty good timing, in my opinion. In line with the paper, I reconstructed the strategy that switches between stocks (DIA) and bonds (VBMFX). When the standardized AR is between 1 and -1, we split 50/50. When it’s above 1 we are in love with bonds, and when it’s below -1 we are in love with stocks. Simple. Results: And here is the code. (I know it’s messy, I didn’t have a lot of time! :) Note: there is survivorship bias, as I used the current-day DJ30 components.
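The Absorption Ratio itself is easy to compute from the PCA machinery discussed earlier. Here is a Python/NumPy sketch on simulated data (hypothetical return streams, not the DJ 30) showing that a one-factor "fragile" market produces a much higher AR than a dispersed one:

```python
import numpy as np

def absorption_ratio(returns, frac=1/5):
    # AR = variance captured by the top n = frac*N eigenvectors,
    # divided by total variance (the sum of all eigenvalues = sum of asset variances)
    n = max(1, int(frac * returns.shape[1]))
    evals = np.linalg.eigvalsh(np.cov(returns, rowvar=False))
    evals = np.sort(evals)[::-1]                 # descending
    return evals[:n].sum() / evals.sum()

rng = np.random.default_rng(4)
# Dispersed market: 10 independent return streams, risk spread out
diffuse = rng.normal(0.0, 0.01, size=(500, 10))
# Fragile market: one common factor drives all 10 assets, risk concentrated
factor = rng.normal(0.0, 0.02, size=(500, 1))
fragile = factor + rng.normal(0.0, 0.002, size=(500, 10))

assert absorption_ratio(fragile) > absorption_ratio(diffuse)
```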
Thanks for reading

In the spirit of wrapping up the FAA model investigation, I thought I would extend the backtest back to 1926. The data are all monthly total return series from proprietary databases, and they are the best estimates I have to work with. Looking back this far offers a LOT of insight: one can stress test how a specific strategy performed in very different environments. I employed 7 different asset classes: commodities, emerging market equities, US equities, US 10 year bonds, US 30 year bonds, short term treasuries, and European equities. For benchmarking purposes, I constructed a simple momentum portfolio that holds the top 3 assets, an equal weight portfolio, and a traditional sixty-forty portfolio. The momentum lookback is 4 months, in line with what Keller and Putten used. One very interesting aspect of this extended backtest is seeing how the strategies performed during the Great Depression. While equal weight and sixty-forty suffered large drawdowns, FAA and relative momentum did comparatively well. Below is a deeper analysis of the Great Depression period. As you can see, momentum strategies in general provided a great buffer against drawdown, mainly because during the drawdown period the FAA strategy was loaded with bonds.

When I am researching trading systems, I really like to take the components apart and analyse them as much as possible. It is only by understanding how they fit together that you will be able to judge a system's future viability: when it will work and when it won't. And since TAA strategies have become so pervasive these days, it begs the question whether we are taking appropriate precautions regarding their future performance. In my last post, I broke out the individual components to look at the performance of each factor. Although by themselves the correlation and volatility factors weren't that attractive, combined together it's a different story.
I’ve always been a proponent of simplistic approaches in system design, as adding too many nuts and bolts in the name of sophistication only brings overfitting. In my opinion, when you are designing the alpha portion of your portfolio, you should look to design multiple simple strategies that are different in nature (uncorrelated). Take these return streams, overlay a portfolio allocation strategy, and you will find yourself with a decent alpha generator with a risk-return ratio above 1. Ok, back to FAA. Keller and Putten in their FAA system combined the signals of each factor with a simple meta rank function of the following form:

$MetaRank_{i} = w_{m}R_{m} + w_{c}R_{c} + w_{v}R_{v}$

where m, c, and v represent the factor ranks of momentum, correlation, and volatility respectively, and each factor is given a weight. The meta ranking function is then ranked again and filtered based on absolute momentum to arrive at the assets to invest in. Note that any assets that don’t pass the absolute momentum filter are replaced with cash (VFISX). When coding the meta ranking function, I found that there are times when some assets share the same final meta rank. This caused problems for some rebalance periods, when the number of assets to hold would exceed the top N. I consulted the authors and they revealed that “with rank ties, we select more than 3 funds.” Below is a replication of the strategy; it is tested with daily data as opposed to the monthly data used by the authors. The model results are pretty decent. One aspect I might change is the use of the cash proxy in the volatility ranking factor: including the theoretical risk free rate, which is supposed to have a volatility of zero, will skew the results to favour cash. A reader commented on a little coding error I made in the last post. Don’t sweat, it doesn’t change the performance one bit. I’ve modified the code and placed everything, including the current code, into the FAA Dropbox folder.
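The meta rank calculation is simple enough to sketch in a few lines. Here is a Python illustration with hypothetical made-up ranks for 7 funds (the numbers and the factor weights are mine, purely for demonstration):

```python
# Hypothetical ranks for 7 funds on each factor (1 = best), plus
# illustrative factor weights w_m, w_c, w_v from the formula above
mom_rank = [1, 4, 2, 7, 3, 6, 5]
cor_rank = [5, 1, 3, 2, 7, 4, 6]
vol_rank = [6, 2, 1, 4, 5, 3, 7]
w_m, w_c, w_v = 1.0, 0.5, 0.5

# Meta rank: weighted sum of the three factor ranks for each fund
meta = [w_m*m + w_c*c + w_v*v
        for m, c, v in zip(mom_rank, cor_rank, vol_rank)]

# Rank the meta scores again (lowest score = best) and select the top 3;
# ties would enlarge the selection, as the authors note
order = sorted(range(len(meta)), key=lambda i: meta[i])
top3 = order[:3]
print(top3)  # -> [2, 1, 0]
```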
Should you have any questions please leave a comment below. Thanks for reading, Mike

Keller and Putten in their 2012 paper, “Generalized Momentum and FAA”, went on to combine multiple momentum ranking factors to form portfolios rebalanced monthly. I won’t go into detail about their strategy as you can find a good commentary at Turnkey Analyst. Here I took each ranking factor apart and constructed portfolios to see their individual performance. I thought this may be a good way to visualize the performance of each factor alone. There are four portfolios, rebalanced monthly:

1. Relative Momentum - holds the top n performing funds
2. Absolute Momentum - holds funds with positive momentum
3. Volatility Momentum - holds the n lowest volatility funds
4. Correlation Momentum - holds the n lowest average correlation funds (average of all pairwise correlations)

Performance:

############################################################
#Flexible Asset Allocation (Keller & Putten, 2012)         #
############################################################
rm(list=ls())
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
source(con)
close(con)
load.packages("TTR,PerformanceAnalytics,quantmod,lattice")

#######################################################
#Get and Prep Data
#######################################################
setwd("C:/Users/michaelguan326/Dropbox/Code Space/R/blog research/FAA")
data <- new.env()
#tickers<-spl("VTI,IEF,TLT,DBC,VNQ,GLD")
tickers<-spl("VTSMX,FDIVX,VEIEX,VFISX,VBMFX,QRAAX,VGSIX")
getSymbols(tickers, src = 'yahoo', from = '1980-01-01', env = data, auto.assign = T)
for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)
bt.prep(data, align='remove.na', dates='1990::2013')

#Rank Helper Function
rank.mom<-function(x){
if(ncol(x) == 1){
r<-x
r[1,1] <- 1
}else{
r <- as.xts(t(apply(-x, 1, rank, na.last = "keep")))
}
return(r)
}

#######################################################
#Run Strategies
#######################################################
source("C:/Users/michaelguan326/Dropbox/Code Space/R/blog research/FAA/FAA-mom.R")
source("C:/Users/michaelguan326/Dropbox/Code Space/R/blog research/FAA/FAA-abs-mom.R")
source("C:/Users/michaelguan326/Dropbox/Code Space/R/blog research/FAA/FAA-vol.R")
source("C:/Users/michaelguan326/Dropbox/Code Space/R/blog research/FAA/FAA-cor.R")
source("C:/Users/michaelguan326/Dropbox/Code Space/R/blog research/FAA/FAA-bench.R")

models<-list()
top<-3
lookback<-80

#run models
models$mom<-mom.bt(data,top,lookback) #relative momentum factor
models$abs.mom<-abs.mom.bt(data,lookback) #absolute momentum factor
models$vol<-vol.bt(data,top,lookback) #volatility factor
models$cor<-cor.bt(data,top,lookback) #correlation factor
models$faber<-timing.strategy.local(data,'months',ma.len=200) #faber
models$ew<-equal.weight.bt(data) #equal weight benchmark

#report
plotbt.custom.report.part1(models)
plotbt.transition.map(models)
plotbt.strategy.sidebyside(models)

The source code can be downloaded in my Dropbox folder; I can’t guarantee it is error free. Please leave a comment or email me if you find any mistakes. Thanks for reading, Mike

What a coincidence: Zerohedge just posted a piece where Bridgewater identifies the origin of their All Weather framework (here). It’s interesting to read about their thought process, and here are a few quotes I found interesting:

“Any return stream can be broken down into its component parts and analysed more accurately by first examining the drivers of those individual parts.”

“Return = Cash + Beta + Alpha”

“Betas are few in number and cheap to obtain. Alphas (ie trading strategy) are unlimited and expensive. … Betas in aggregate and over time outperform cash. There are sure things in investing. That betas rise over time relative to cash is one of them. Once one strip out the return of cash and betas, alpha is a zero sum game.”

“there is a way of looking at things that overly complicates things in a desire to be overly precise and easily lose sight of the important basic ingredients that are making those things up”

Separately managing the beta and alpha portions of the portfolio seems like a reasonable long term framework: for example, build a stable portfolio (beta) for the majority of your wealth and then overlay your desired amount of alpha to spice up the return. But it is important to make sure you understand how the two return streams (beta and alpha) interact fundamentally; for example, the factors that drive the return of the beta portion should be different from those of the alpha portion. Only then can the uncorrelated return streams diversify away your risk.
In a recent refresher of Dalio’s interviews, I came across a term he mentioned: “Structural Beta.” What is it and what insights can one gain from the concept? I went on to do some research and reading on the subject, and here are a few things I found. Beta as defined by the CAPM is the slope of the linear regression between the market return and the security’s return. The measure takes into account both the covariance (correlation) and the standard deviations. Mathematically,

$\beta_{a} = \frac{Cov(a,m)}{\sigma _{m}^{2}} = \rho _{a,m}\frac{\sigma _{a}}{\sigma _{m}}$

where subscript ‘a’ represents an asset and ‘m’ represents the market. From the above equation, we can see that there are two determinants of the value of beta:

1. market volatility
2. correlation between market and asset

With these determinants, it is intuitive to note that although an asset may have low correlation, offering potential diversification benefits, it may still possess a substantial beta due to the volatility of its underlying returns. Two things matter when constructing a portfolio: return and risk. Return can be improved and risk reduced when a historically lowly correlated asset is added to the portfolio. But there are times, like 2008, when things don’t follow their historic averages. What I mean is that there can be assets that have low correlation but also high volatility. As an alternative, beta can be used to gauge both of these characteristics at once. At the portfolio level, beta may serve as an alternative measure of risk, as it carries more information (correlation and volatility) than volatility alone in the traditional sense. There are numerous ways to measure portfolio risk, and these metrics are used daily as ingredients in portfolio optimizations that yield allocation weights.
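The two forms of the beta equation are easy to verify numerically. Here is a Python/NumPy sketch on simulated market and asset returns (a made-up low-correlation, high-volatility asset, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(5)
m = rng.normal(0.0005, 0.01, size=1000)              # simulated market returns
# Low-correlation, high-volatility asset: mostly idiosyncratic noise
a = 0.3 * m + rng.normal(0.0, 0.03, size=1000)

# Form 1: beta = Cov(a, m) / Var(m)
beta = np.cov(a, m)[0, 1] / np.var(m, ddof=1)

# Form 2: beta = corr(a, m) * sigma_a / sigma_m
beta2 = np.corrcoef(a, m)[0, 1] * np.std(a, ddof=1) / np.std(m, ddof=1)

# The two forms of the equation agree exactly
assert np.allclose(beta, beta2)
```

Note how the high idiosyncratic volatility keeps the correlation low while the asset still carries a meaningful beta, which is exactly the point made above.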
But these simplistic measures, i.e. volatility, may mislead, as they can hide the true risk inherent in the portfolio. The chart below is a traditional standard deviation based risk-return graph. The expected returns are probably not representative, as I only have 24 years of total return data, but I am confident the concepts are preserved. The next chart looks through the beta lens, where risk is measured by beta rather than standard deviation. The blue line in both charts is called the cash-equity line, while the horizontal line represents the risk free rate (proxied by SHY). If an asset is above the cash-equity line, then the area between the asset and the line represents what is called structural alpha. This type of alpha is not the typical alpha generated by skill; rather it is the portion of return attributable to the asset itself, and it offers great diversification benefit to a portfolio. The beta based return is the portion above the risk free line and below the cash-equity line; this portion of return is theoretically replicable by a mix of cash and equity. All in all, this view of portfolio risk and return may warrant more research; for example, what happens when we go long a portfolio of assets that show structural alpha? It is also worth noting that over the past two decades, the assets that have shown diversification benefits all evidently lie above the cash-equity line in the beta risk-return chart. For example, the success of the permanent portfolio was attributed to holding such assets. Below is code for generating the risk-return data given an xts object.
Packages: PerformanceAnalytics, SIT

gen.risk.ret<-function(data1){
data1<-as.xts(data1) #convert to xts
ret<-get.roc(data1,1)
returns<-compute.cagr(data1)
risk<-apply(ret,2,sd)
risk.ret<-cbind(risk,returns) #standard risk-return matrix
return(risk.ret)
}

gen.beta.ret<-function(data1,bm){
data1<-as.xts(data1) #convert to xts
ret<-get.roc(data1,1)
returns<-compute.cagr(data1)
bench<-ret[,which(colnames(ret) == bm)]
risk<-matrix(NA,nrow=1,ncol=ncol(ret))
for(i in 1:ncol(ret)){
risk[,i]<-CAPM.beta(ret[,i],bench,Rf=0)
}
risk.return<-cbind(as.vector(risk),returns)
rownames(risk.return)<-colnames(ret)
colnames(risk.return)<-c("beta","returns")
return(risk.return)
}

This year’s equity performance has ended with a downward movement from the year’s earlier upward push. How have hedge fund styles from different categories performed? Below are a few charts I constructed from my school’s indices. The performance data are fairly representative of the strategies employed by hedge funds. Below are the historic equity curves of all the strategies back to 1994. Although each index aggregates the performance of many different hedge funds, I feel that hedge fund performance is affected by stress factors similar to those affecting equities. Some research I finally have time to do relates to correlation tightening. This effect, as seen in 2008, is effectively the enemy of diversification. Some questions I have been ruminating on:

- If asset class returns share high correlation during stress periods, what measures can be taken to reduce such risk?
- Which asset classes provide the most diversification during such periods, and how do their returns relate to equity-like assets during normal times?
- Which asset classes, on the other hand, offer no diversification benefits in bad times?

In normal times, we are all hedge fund superstars, as returns come easily from the upward drift. It is for times of market shock that we should build our portfolios.
Cheers

It’s been more than a month since I last posted. Time flies when you are busy working on things you enjoy. After reading a piece on the lacklustre performance of hedge funds versus a standard 60/40 portfolio mix, I got thinking more deeply about stock-bond allocation. In this post I am going to dissect the internal workings of the equity-bond allocation and see if there is any tactical overlay that can improve a static allocation mix. Data: I will be using monthly data from Datastream and Bloomberg; the SP500 and 10 year treasuries, all total return, from January 1988 to May 2012. Here is a backtest helper function wrapped around SIT:

require(TTR)
require(quantmod)
setInternet2(TRUE)
con = gzcon(url('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', 'rb'))
source(con)
close(con)

btest<-function(data1,allocation,rebalancing){
data <- list(prices=data1[,1:2])
data$weight<-data1[,1:2]
data$weight[!is.na(data$weight)]<-NA
data$execution.price<-data1[,1:2]
data$execution.price[!is.na(data$execution.price)]<-NA
data$dates<-index(data1[,1:2])
prices = data$prices
nperiods = nrow(prices)
data$weight[] = NA
data$weight[1,] = allocation
period.ends = seq(1,nrow(data$prices),rebalancing)-1
period.ends<-period.ends[period.ends>0]
data$weight[period.ends,]<-repmat(allocation, len(period.ends), 1)
capital = 100000
data$weight[] = (capital / prices) * data$weight
model = bt.run(data, type='share', capital=capital)
return(model)
}

This simply runs the backtest for the provided allocation and rebalancing period for two assets. To check the performance of every equity allocation from 0 to 1 in increments of n%, I will be using the following wrapper function:

sensitivity<-function(data1,rebalancing,allocation.increments){
equity.allocation<-seq(0,1,allocation.increments)
eq = matrix(NA, nrow=nrow(data1), ncol=1)
for(i in equity.allocation) {
allocation <- matrix(c((1-i),i), nrow=1)
temp<-btest(data1,allocation,rebalancing)
eq<-cbind(eq,temp$equity)
}
eq<-eq[,-1]
colnames(eq) = 1-equity.allocation

cagr<-matrix(NA,nrow=ncol(eq),ncol=1)
for(i in 1:ncol(eq)){
cagr[i]<-compute.cagr(eq[,i])
}
cagr<-as.data.frame(cbind(1-equity.allocation,cagr))
colnames(cagr)<-c('Equity Allocation','CAGR')

sharpe<-matrix(NA,nrow=ncol(eq),ncol=1)
eq.ret<-ROC(eq)
eq.ret[is.na(eq.ret)]<-0
for(i in 1:ncol(eq)){
sharpe[i]<-compute.sharpe(eq.ret[,i])
}
sharpe<-as.data.frame(cbind(1-equity.allocation,sharpe))
colnames(sharpe)<-c('Equity Allocation','Sharpe')
return(list(eq=eq,cagr=cagr,sharpe=sharpe))
}


Running the sensitivity function in increments of 5% provides:

As you increase the equity allocation, you become more aggressive, which the chart above clearly shows. What is the optimal allocation based on the highest CAGR or Sharpe? The sensitivity function also returns a list with the performance of each equity allocation, charted below:

In the above chart, I’ve graphed two lines, each with its own respective axis. From the chart, it seems that the equity allocation that provided the highest Sharpe ratio is ~0.25. This is close to a risk parity allocation, as historical data shows such an allocation is very near optimal risk parity.

Diving deeper, I went on to check each successive 12 month period’s highest-Sharpe equity allocation from 1988 to 2012. In other words, this takes us back in time!
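The per-window search is just a grid scan over allocations. Here is a Python sketch of the idea on simulated monthly data (random equity-like and bond-like returns, not the Datastream/Bloomberg series used in this post):

```python
import numpy as np

rng = np.random.default_rng(6)
months = 12 * 24                                  # 24 years of monthly data
eq = rng.normal(0.008, 0.045, size=months)        # simulated equity-like returns
bd = rng.normal(0.005, 0.015, size=months)        # simulated bond-like returns

allocs = np.arange(0.0, 1.0001, 0.05)             # equity allocation grid, 5% steps

def max_sharpe_alloc(eq_ret, bd_ret):
    # Brute-force the grid and keep the allocation with the best Sharpe ratio
    best_s, best_a = -np.inf, None
    for a in allocs:
        p = a * eq_ret + (1.0 - a) * bd_ret       # monthly rebalanced mix
        s = p.mean() / p.std(ddof=1)
        if s > best_s:
            best_s, best_a = s, a
    return best_a

# Highest-Sharpe allocation within each successive 12-month window
per_year = [max_sharpe_alloc(eq[i:i+12], bd[i:i+12])
            for i in range(0, months, 12)]
assert len(per_year) == 24
```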

From this chart, the max Sharpe allocation varied significantly from year to year. Whenever a crisis hit, the allocation to bonds dominated that of equities, and vice versa in bull markets. This intuitively makes sense, as you would want to be in risk-off mode during bear markets.

The last chart shows the rolling 12 month performance of each equity allocation from 0 to 1 in increments of 5%.

In another post, I will follow up on whether there are any tactical overlays that can improve performance.