Beyond Pairs

I've been writing rather prolifically this past week. Last week marked the end of my midterms (for now!), along with the continuation of my job search and preparation for my final five weeks of university. I hope my readers are finding my posts interesting and enlightening.

As mentioned in an earlier post on statistical arbitrage, the interesting aspect of it comes when we consider multi-leg portfolios. The traditional way to construct a multi-leg portfolio is to employ a multivariate linear regression (factor model). The intuition is that we are trying to estimate a fair value for an asset using various predictors or independent variables. For example, we know that the S&P 500 is composed of stocks from various sectors. Therefore, an intuitive approach is to derive a fair value for the S&P 500 from the 9 sector SPDRs via the following equation:

$$\text{SPY}_t \;=\; \beta_0 \;+\; \sum_{i=1}^{9} \beta_i \,\text{Sector}_{i,t} \;+\; \epsilon_t$$

The residual return that is left over, the "alpha", is considered to be neutral (uncorrelated) to the industry sectors. With this framework, we can essentially make ourselves neutral to any factors we want. For example, we have access to a wide variety of ETFs that mimic underlying asset class movements. If we want to be neutral to interest rates, credit risk, and volatility, we can employ the ETFs TLT, HYG, and VXX respectively. Below is a chart demonstrating this, showing the estimated fair value of SPY relative to the actual ETF:

[Figure: estimated fair value of SPY vs. the actual ETF]

Below is the spread that can be traded by going long/short each leg:

[Figure: spread between SPY and its estimated fair value]

The concept of being able to control the factors we are exposed to is very appealing, as it allows us to potentially shy away from turbulent events that transpire in specific assets. Not only that, these uncorrelated return streams, when combined into a portfolio, allow significant risk reduction. As Dalio said, the ability to combine 15 uncorrelated return streams allows us to effectively reduce 80% of risk. (Chart below.) Interestingly, from my understanding of what Bridgewater does, I am pretty confident they are employing spread trading too, but purely in a fundamental way. For example, how does a set of asset classes react to the movements of economic indicators? From there they construct synthetic spreads to trade off of these relationships.

[Figure: risk reduction from combining uncorrelated return streams]
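As a back-of-the-envelope check on the chart above, here is a minimal sketch of the diversification math, under the idealized assumption of equally weighted, mutually uncorrelated return streams with identical volatility (the exact figures in Dalio's chart also depend on the assumed correlations):

# Portfolio volatility of n equally weighted, uncorrelated streams with common
# volatility sigma falls as sigma / sqrt(n).
sigma = 0.10
n = 1:20
port.vol = sigma / sqrt(n)
risk.reduction = 1 - port.vol / sigma        # fraction of stand-alone risk removed
round(risk.reduction[c(5, 10, 15, 20)], 2)   # roughly 0.55, 0.68, 0.74, 0.78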

Below is the code that generated the data for this post:


spread.analysis <- function(data, y.symbol, x.symbol, lookback = 250){
    # Rolling multivariate regression: estimate a fair value for y.symbol from
    # one or more x.symbol legs using the past `lookback` bars.
    y = data$prices[, y.symbol]
    x = data$prices[, x.symbol]
    lm.holder <- list()
    fv = NA * data$prices[, 1]
    colnames(fv) = c('FairValue')

    for( i in (lookback + 1):nrow(data$prices) ){
        cat(i, '\n')
        hist.y = y[(i - lookback):i, y.symbol]
        hist.x = x[(i - lookback):i, x.symbol]
        lm.r = lm(hist.y ~ hist.x)
        lm.holder[[i]] = lm.r
        # fair value = intercept + sum of betas times the current leg prices
        fv[i, ] = lm.r$coefficients[1] + sum(lm.r$coefficients[-1] * x[i, ])
    }
    mat = merge( x, y, fv )
    return( list( mat = mat, fv = fv, reg.model = lm.holder ) )
}
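As a quick, hypothetical usage sketch of the function above for the SPY example discussed earlier (it assumes `data` is a SIT-style environment prepared with bt.prep and that the symbols below have been loaded into it):

res = spread.analysis(data, y.symbol = 'SPY',
                      x.symbol = c('TLT', 'HYG', 'VXX'), lookback = 250)
spread = data$prices[, 'SPY'] - res$fv    # actual price minus estimated fair value
plot(spread, main = 'SPY minus estimated fair value')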

Also, here are some links I've found to be very informative.

The paper on high frequency statistical arbitrage is particularly relevant as it relates to my previous blog posts on energy-related pairs trading. Essentially, the author constructs a meta-algorithm for ranking pairs to trade. This meta-algorithm is composed of the correlation coefficient, the minimum squared distance of the normalized price series, and a co-integration test value. I don't have intraday equities data (the paper used 15-minute bars), nor do I have the infrastructure to test it, but the idea resonates with my research in top-N momentum systems. There are a lot of ways to improve.
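For illustration only, here is a rough R sketch (not the paper's exact method) of a pair-ranking score built from the three ingredients mentioned above; it assumes `py` and `px` are equal-length numeric price vectors and uses adf.test from the tseries package for the co-integration check:

library(tseries)

rank.pair <- function(py, px) {
    ny = py / py[1]                                   # normalize both series to start at 1
    nx = px / px[1]
    correlation = cor(diff(log(py)), diff(log(px)))   # correlation of log returns
    ssd = sum((ny - nx)^2)                            # minimum-distance criterion
    resid = residuals(lm(py ~ px))                    # Engle-Granger step 1: hedge regression
    adf.p = adf.test(resid)$p.value                   # step 2: ADF test on the residuals
    c(correlation = correlation, ssd = ssd, adf.p = adf.p)
}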


Energy Stat Arb Part 2

In my previous post, I rushed through a lot of the technical details of how I implemented the strategy. For that I apologize! I am here to make up for it by providing more detail on how I approached it and, hopefully, making my analysis more understandable.

In this post, I want to revisit energy pair trading (XLE vs. OIL), but with the traditional spread construction approach via regression analysis. My data comes from QuantQuote, adjusted for dividends and splits. To read in the data, I used the following code:


from matplotlib.pylab import *
import pandas as pd
import numpy as np
import datetime as dt

# QuantQuote minute files have no header; columns 0 and 1 (date, time) are
# merged into a single 'Timestamp' column by parse_dates=[[0,1]].
xle = pd.read_csv('/Users/mg326/xle.csv', header=None, parse_dates=[[0,1]])
xle.columns = ['Timestamp', 'open', 'high', 'low', 'close', 'volume', 'Split Factor', 'Earnings', 'Dividends']
xle['Timestamp'] = xle['Timestamp'].apply(lambda x: dt.datetime.strptime(x, '%Y%m%d %H%M'))
xle = xle.set_index('Timestamp')    # index by timestamp and keep only the OHLC columns
xle = xle[["open", "high", "low", "close"]]

For minute data, there are approximately 391 rows of data per day. Taking into account OHLC, there are a total of 391 * 4 = 1564 data observations per day. Here's an image displaying May 9th, 2013:

[Figure: XLE minute bars, May 9th, 2013]

If you look into the data, you may see a price for 9:45 AM while the next data point only comes in at 9:50 AM. This means there was a five-minute gap in which no shares were traded. To fix and align this, the following function will align the two data sets.

def align_data(data_leg1, data_leg2, symbols):
    # outer-join the two legs on their timestamps, then forward-fill the gaps
    # left by minutes in which one leg did not trade
    combined_df = pd.concat([data_leg1, data_leg2], axis=1)
    combined_df = combined_df.fillna(method='pad')
    # split back into a dict of DataFrames keyed by symbol (leg 1 = first 4 columns, leg 2 = the rest)
    data_panel = pd.Panel({symbols[0]: combined_df.ix[:,0:4], symbols[1]: combined_df.ix[:,4:9]})
    return(data_panel)

To construct the spread, we will run a rolling regression on the prices to extract the hedge ratio. This is then piped into the following equation:

$$\text{Spread}_t \;=\; Y_t \;-\; \beta_t X_t$$

where $\beta_t$ is the rolling hedge ratio.

Given two series of prices, the following helper function will return a dictionary containing the model and the spread. The resulting spread is displayed below.

def construct_spread(priceY, priceX, ols_lookback):
    # rolling (windowed) OLS of Y on X with no intercept; the rolling beta is
    # the hedge ratio used to form the spread Y - beta * X
    data = pd.DataFrame({'x': priceX, 'y': priceY})
    model = {}
    model['model_ols'] = pd.ols(y=data.y, x=data.x, window=ols_lookback, intercept=False)
    model['spread'] = data.y - (model['model_ols'].beta['x'] * data.x)
    return model

[Figure: XLE–OIL spread]

To normalize it, simply subtract a rolling mean and divide by the rolling standard deviation. An image of the normalized spread follows.

# rolling z-score: distance of the latest value from the window mean, in units
# of the window standard deviation (zs_window is the rolling window length)
zscore = lambda x: (x[-1] - x.mean()) / x.std(ddof=1)
sprd['zs'] = pd.rolling_apply(sprd, zs_window, zscore)  # zscore

[Figure: normalized (z-scored) spread]

Without changing our parameters, ±2 standard deviations will be our trigger point. At this threshold, there are a total of 16 trades. Here is the performance if we took all the trades for the day, frictionless:

[Figure: intraday equity curve, May 9th, 2013, frictionless]

Pretty ugly in my opinion, but it's only one day. Let's display the daily equity performance for the whole of 2013.

[Figure: distribution of daily equity curves for 2013]

The initial flat line in each curve is the 60-bar lookback window at the start of each day; unrealistic, but it does give a rough picture of the returns. The average final portfolio gain is 0.06 with a standard deviation of 0.13. The performance is pretty stellar when you look at 2013 as a whole. Comparing this to the other spread construction in my last post, it seems to reduce the variance of returns when incorporating a longer lookback period.

Coming up in the next instalment, I want to investigate whether incorporating GARCH models for volatility forecasting will help improve the performance of spread trading.

Thanks for reading,

Mike

Energy Stat Arb

Back to my roots. I haven't tested outright entry/exit trading systems for a while now, since the Mechanica and Tblox days, but I aim to post more about them in the future.

I've been reading about market-neutral strategies lately to expand my knowledge. Long-only strategies are great, but constant outright directional exposure may leave your portfolio unprotected on the downside when all assets are moving in the same direction. A good reminder is May of last year, when gold took a nosedive.

Below are some tests I conducted on trading related energy pairs. Note that I haven't done any elaborate testing of whether the spread is mean-reverting, etc.; I just went with my instincts. No transaction costs. Spread construction is based on a stochastic differential, with a 10-day lookback, ±2/0 normalized z-score entry/exit, and one-bar delayed execution.
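Before the results, here is a rough R sketch (not the actual backtest code) of the entry/exit logic just described: a 10-day rolling z-score of the spread, entries at ±2, an exit as the z-score returns to zero, and a one-bar execution delay. It assumes `spread` is a plain numeric vector, e.g. coredata() of an xts spread series.

library(zoo)

lookback = 10
mu = rollapplyr(spread, lookback, mean, fill = NA)
sg = rollapplyr(spread, lookback, sd, fill = NA)
zs = (spread - mu) / sg                           # normalized (z-scored) spread

pos = rep(NA_real_, length(zs))
pos[which(zs >  2)] = -1                          # short the spread when stretched upward
pos[which(zs < -2)] =  1                          # long the spread when stretched downward
pos[which(abs(zs) < 0.1)] = 0                     # exit when the z-score is back near zero
pos = na.locf(pos, na.rm = FALSE)                 # hold the position until an exit or flip
pos[is.na(pos)] = 0
pos = c(0, head(pos, -1))                         # one-bar execution delay
pnl = cumsum(pos * c(0, diff(spread)))            # frictionless spread P&L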

Crude Oil and Natural Gas Futures (Daily) (daily data doesn't seem to work that well anymore):

[Figure: CL–NG spread equity curve]

OIL and UNG ETF (1 Min Bar)

[Figure: OIL–UNG spread equity curve]

XLE and OIL ETF (1 Min Bar)

[Figure: XLE–OIL spread equity curve]

Pair trading is the simplest form of statistical arbitrage, but what gets interesting is when you start dealing with a basket of assets. For example, XLE tracks both crude oil and natural gas companies, so a potential three-legged trade would be to trade XLE against both OIL and UNG. Another well-known trade is to derive a value for SPY against TLT (rates), HYG (corporate spreads), and VXX (volatility).

The intuition behind relative value strategies is to derive a fair value of an asset "relative" to another. In basic pair trading, we use one leg to derive the value of the other, or vice versa. Any deviations are considered opportunities for arbitrage. In the case of a multi-legged portfolio, a set of assets is combined in some way (optimization, factor analysis, PCA) to measure the value. See Avellaneda for details.
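As an illustrative sketch of the three-legged XLE example (assuming `xle`, `oil`, and `ung` are aligned numeric price vectors, e.g. coredata() of xts objects), the residual of a two-regressor fit plays the role of the spread:

fit = lm(xle ~ oil + ung)                      # basket-implied fair value of XLE
spread = residuals(fit)                        # deviation from fair value
zs = (spread - mean(spread)) / sd(spread)      # normalized spread to trade around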

While the equity lines above look nice, please remember that they don’t account for transaction costs and are modelled purely on adjusted last trade price. A more realistic simulation would be to test the sensitivity of entry and order fills given level 1 bid-ask spreads. For that, a more structured backtesting framework should be employed.

(Special thanks to QF for tremendous insight)

Thanks for reading,

Mike

Random Subspace Optimization: Max Sharpe

I was reading David's post on the idea of Random Subspace Optimization and thought I'd provide some code to contribute to the discussion. I've always loved ensemble methods, since combining multiple streams of estimates produces more robust estimates.

In this post, I will show how an RSO overlay performs within a max Sharpe framework. To make things more comparable, I will employ the same assets as David for the backtest. One additional universe I would like to incorporate is the current-day S&P 100 (note the survivorship bias).

The random subspace method is a generalization of the feature-sampling idea behind the random forest algorithm. Instead of generating random decision trees, the method can employ any desired classifier. Applied to portfolio management, given N different asset classes and return streams, we randomly select `k` assets `s` times. For each of the `s` random asset combinations, we apply a user-defined sizing algorithm. The last step is to combine the resulting weights through averaging to get the final weights. In R, the problem can easily be formulated via `lapply` or `for` loops as the base iterative procedure. For the random draws, the function `sample` will be employed. Note that my RSO function uses functions from the Systematic Investor Toolbox.


rso.optimization <- function(ia, k, s, list.param){
    # Random subspace optimization: draw s random subsets of k assets, size each
    # subset with the user-supplied weight function, then average the weights.
    size.fn = match.fun(list.param$weight.function)
    if(k > ia$n) stop("k is greater than the number of assets.")
    space = seq(1:ia$n)
    index.samples = t(replicate(s, sample(space, size = k)))
    weight.holder = matrix(NA, nrow = s, ncol = ia$n)
    colnames(weight.holder) = ia$symbol.names

    hist = coredata(ia$hist.returns)

    # constraints for each k-asset sub-problem: 0 <= x.i <= 1
    constraints = new.constraints(k, lb = 0, ub = 1)
    constraints = add.constraints(diag(k), type='>=', b=0, constraints)
    constraints = add.constraints(diag(k), type='<=', b=1, constraints)

    # SUM x.i = 1
    constraints = add.constraints(rep(1, k), 1, type = '=', constraints)

    for(i in 1:s){
        ia.temp = create.historical.ia(hist[, index.samples[i, ]], 252)
        weight.holder[i, index.samples[i, ]] = size.fn(ia.temp, constraints)
    }
    # average across samples; assets never sampled are ignored via na.rm
    final.weight = colMeans(weight.holder, na.rm = T)

    return(final.weight)
}

The above function takes in an `ia` object, short for input assumptions, which contains all the necessary statistics for most sizing algorithms. Also, I've opted to focus on long only.
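Here is a hypothetical usage sketch (assuming SIT has been sourced and `data` prepared with bt.prep as in my other posts). It uses min.var.portfolio as the sizing function since it takes the (ia, constraints) signature the wrapper expects; substitute your preferred max Sharpe routine in the same way:

ret = data$prices / mlag(data$prices) - 1
ia = create.historical.ia(ret, 252)
rso.weights = rso.optimization(ia, k = 4, s = 100,
    list.param = list(weight.function = 'min.var.portfolio'))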

The following are the results for the 8-asset-class universe. All backtests hereafter keep `s` equal to 100 while varying `k` from 2 to N-1, where N is the total number of assets. The base comparisons are the simple max Sharpe and equal weight portfolios.

[Figure: RSO vs. max Sharpe and equal weight, 8 asset classes, s = 100]

The following is for 9 sector asset classes.

[Figure: RSO results, 9 sector asset classes, s = 100]

Last but not least is the performance for the current-day S&P 100 stocks.

[Figure: RSO results, current-day S&P 100 stocks, s = 100]

The RSO method seems to improve every universe that I've thrown at it. For a pure stock universe, it is able to reduce volatility by 300-700 basis points, depending on the selection of `k`. In a series of tests across different universes, I have found that the biggest improvements from RSO come from applying it to a universe of instruments that belong to the same asset class. Also, I've found that for a highly similar universe (stocks), a lower `k` is better than a higher `k`. One explanation: since the max Sharpe portfolio of X identical assets is equal to that of an equal weight portfolio, we can postulate that when the asset universe is highly similar or approaching equivalence, resampling with a lower `k` Y times, where Y approaches infinity, in a sense approaches the limit of an equally weighted portfolio. This is in line with the idea behind the curse of dimensionality: for better estimates, the data required grows exponentially as the number of assets increases. In this case, with limited data, a simple equal weight portfolio will do better, which is consistent with the better performance of lower `k`.
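To see why identical assets push max Sharpe toward equal weight, here is a quick sketch of the argument for the unconstrained tangency portfolio of N assets sharing the same mean $\mu$, volatility, and pairwise correlation (so the expected return vector is $\mu\mathbf{1}$):

$$w^{*} \;\propto\; \Sigma^{-1}(\mu\mathbf{1}) \;=\; \mu\,\Sigma^{-1}\mathbf{1} \;\propto\; \mathbf{1} \quad\Longrightarrow\quad w^{*}_i = \tfrac{1}{N}$$

since the equicorrelation covariance matrix $\Sigma$ treats every asset symmetrically, $\Sigma^{-1}\mathbf{1}$ is itself proportional to the vector of ones.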

For a well-specified universe of assets, RSO with a higher `k` yields better results than a lower `k`. This is most likely because simple random sampling of such a universe with a small `k` will yield samples that represent a highly mis-specified universe. The problem is magnified when the number of diversifying assets, like bonds, is significantly outnumbered by assets like equities, as the probability of sampling an asset with diversification benefits is far lower than the probability of sampling one without. In other words, with a lower `k`, one will most likely end up with a portfolio that contains a lot of risky assets relative to lower-risk assets.

A possible future direction would be to figure out ways of avoiding having to specify `k` and `s` in RSO: for example, randomly selecting `k`, selecting a `k` that targets a certain risk/return, or selecting a `k` that maximizes a user-defined performance metric.

Thanks for reading,

Mike

Engineering Risks and Returns

In this post, I want to present a framework for formulating portfolios with a targeted risk or return. The basic idea was inspired by looking at risk control from a different point of view. The traditional way of controlling portfolio risk is to apply a given set of weights to historical data to calculate historical risk; if the estimated portfolio risk exceeds a threshold, we peel off allocation percentages for each asset. In this framework, I instead focus on constructing portfolios that target a given risk or return on the efficient risk-return frontier.

First, let's get some data so we can visualize the risk-return characteristics of traditional portfolio optimization. I will be using an 8-asset ETF universe.

rm(list=ls())
setInternet2(TRUE)
# load Systematic Investor Toolbox
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
source(con)
close(con)

load.packages('quantmod')   # needed for getSymbols / adjustOHLC
tickers = spl('EEM,EFA,GLD,IWM,IYR,QQQ,SPY,TLT')
data <- new.env()
getSymbols(tickers, src = 'yahoo', from = '1980-01-01', env = data, auto.assign = T)
for(i in ls(data)) data[[i]] = adjustOHLC(data[[i]], use.Adjusted=T)
bt.prep(data, align='keep.all', dates='2000:12::')

Here are the return streams we are working with:

[Figure: return streams of the 8 ETFs]

The optimization algorithms I will employ are the following:

  • Minimum Variance Portfolio
  • Risk Parity Portfolio
  • Equal Risk Contribution Portfolio
  • Maximum Diversification Portfolio
  • Max Sharpe Portfolio

To construct the risk-return plane, I will put together the necessary input assumptions (correlations, returns, covariances, etc.). This can be done with the create.historical.ia function in the SIT toolbox.

# Input assumptions
prices = data$prices
n = ncol(prices)
ret = prices / mlag(prices) - 1
ia = create.historical.ia(ret, 252)

# 0 <= x.i <= 1
constraints = new.constraints(n, lb = 0, ub = 1)
constraints = add.constraints(diag(n), type='>=', b=0, constraints)
constraints = add.constraints(diag(n), type='<=', b=1, constraints)

# SUM x.i = 1
constraints = add.constraints(rep(1, n), 1, type = '=', constraints)

With the above, we can feed both 'ia' and 'constraints' into the optimization algorithms listed earlier to get weights. With the weights, we can derive the portfolio risk and portfolio return, which can then be plotted on the risk-return plane.
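As a minimal sketch of that step (assuming `ia` and `constraints` from the code above, and using SIT's min.var.portfolio and portfolio.risk; the other algorithms listed earlier plug in the same way):

weight.mv = min.var.portfolio(ia, constraints)     # weights from one of the algorithms
risk.mv   = portfolio.risk(weight.mv, ia)          # portfolio risk for the frontier plot
ret.mv    = sum(weight.mv * ia$expected.return)    # portfolio expected return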

# create efficient frontier
ef = portopt(ia, constraints, 50, 'Efficient Frontier')
plot.ef(ia, list(ef), transition.map=F)

[Figure: efficient frontier for the 8-ETF universe]

The risk-return plane in the above image shows the entire space in which a portfolio's risk and return characteristics can reside. Anything beyond the left side of the frontier does not exist (unless leverage is used, in which case the frontier itself also shifts leftward). Since I am more of a visual guy, I tend to construct this risk-return plane whenever I am working on new allocation algorithms. This allows me to compare the expected risk and return against other portfolios.

As you can see, each portfolio algorithm has its own set of characteristics. Note that these characteristics fluctuate across the frontier if we frame this rolling through time. A logical extension of these risk-return concepts is to construct a portfolio that targets either a given risk or a given return on the frontier. To formulate this problem in SIT for the return component, simply modify the constraints as follows:

constraints = add.constraints(ia$expected.return,type='>=', b=target.return, constraints)

Note that target.return is simply a variable storing the desired target return. After adding the constraint, simply run a minimum variance portfolio and you will get a target return portfolio. Targeting risk, on the other hand, is a bit more complicated. If you look at the efficient frontier, you will find that for a given level of risk there are two portfolios that lie on it (the sub-optimal portion of the frontier is hidden). I solved for the weights using a multi-step optimization framework that employed both linear and quadratic (dual) optimization.
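As a concrete sketch of the target return case (target.return is assumed to hold the desired level, e.g. the 8% used in the backtest below):

target.return = 0.08
constraints.tr = add.constraints(ia$expected.return, type = '>=', b = target.return, constraints)
weight.tr = min.var.portfolio(ia, constraints.tr)      # minimum variance portfolio meeting the return target
sum(weight.tr * ia$expected.return)                    # check: should be at least target.return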

target.risk.obj <- function(ia, constraints, target.risk){

    max.w = max.return.portfolio(ia, constraints)
    min.w = min.var.portfolio(ia, constraints)
    max.r = sum(max.w * ia$expected.return)
    min.r = sum(min.w * ia$expected.return)
    max.risk = portfolio.risk(max.w, ia)
    min.risk = portfolio.risk(min.w, ia)

    # If the target risk lies on the efficient frontier, solve for the target
    # return whose minimum variance portfolio has that risk; otherwise return
    # weights of 0. (target.return.risk.helper and target.return.portfolio are
    # helpers not defined in this post.)
    if(target.risk >= min.risk & target.risk <= max.risk){
        out <- optimize(f = target.return.risk.helper,
                        interval = c(0, max.r),
                        target.risk = target.risk,
                        ia = ia,
                        constraints = constraints)$minimum
        weight = target.return.portfolio(out)(ia, constraints)
    }else{
        weight = rep(0, ia$n)
    }

    return(weight)
}

Below is a simple backtest that takes the above assets and optimizes for the target return or target risk component. Each will run with a target of 8%.

[Figure: backtest targeting an 8% return vs. an 8% risk]

Now the model itself requires us to specify a return or risk target. What if instead we make that target dynamic, extracting either the risk or the return component from an alternative sizing algorithm? Below is the performance when the dynamic risk or return target is extracted from naive risk parity.

[Figure: backtest with dynamic risk/return target extracted from naive risk parity]

Not surprisingly, whenever we target risk, the strategy tends to become riskier. This confirms that risk-based allocations are superior if investors are aiming to achieve low long-term volatility.

Thanks for reading,

Mike

Some Shiny Stuff

At the beginning of the summer, I knew Shiny was going to be an indispensable tool for my work and for connecting with my readers. For those of you who don't yet know what Shiny is, it is a web application package developed by RStudio. It integrates flawlessly with R and has been nothing but fun to play with.

In terms of difficulty, Shiny differs in its structure. While it may be intimidating initially, even to R veterans, I urge you to be patient with it. I learned it by example, and Systematic Investor's Michael Kapler has numerous posts on how the basic framework comes together (here).
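For readers who have never seen a Shiny app, here is a minimal, self-contained skeleton (not taken from the dashboard itself) showing how the ui and server pieces fit together:

library(shiny)

ui <- fluidPage(
    selectInput("symbol", "Asset:", choices = c("SPY", "TLT", "GLD")),
    plotOutput("prices")
)

server <- function(input, output) {
    output$prices <- renderPlot({
        # placeholder random walk; a real app would download data for input$symbol
        plot(cumsum(rnorm(250)), type = "l", main = input$symbol, ylab = "Price")
    })
}

shinyApp(ui = ui, server = server)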

I thought I'd share my own Market Dashboard. To access the dashboard, simply go to the following link.

http://spark.rstudio.com/systematiced/MarketDashboard/

The Market Dashboard is divided into three tabs. The first tab is called "Main." This tab is built entirely for single-asset and cross-asset comparisons. The charts in the entire application are interactive, which is useful when you want to rank assets based on the table at the bottom of the page. I created 6 charts (no particular reason for any one of them, they just came to mind). These include:

  • Normalized equity prices given a lookback
  • 12 Month Performance
  • Annualized CAGR
  • Percent Volatility Rank
  • Financial Turbulence Index
  • Efficiency Ratio
  • Table of Statistics (Can be ranked when you click on the title)

[Screenshot: Main tab]

The second tab is a broad-based Asset Analytics tab. It aims to put all asset classes (with ETFs as proxies) together in a coherent fashion for easy digestion. There are three main sections: the first is an all-asset comparison (a) (this is my attempt to replicate this: here 😉 ), the second is an individual asset class comparison (b), and the last is a cluster chart comparison (Varadi & Kapler).

[Screenshot (a): all-asset comparison]

[Screenshot (b): individual asset class comparison]

The last tab is the "Macro Analytics" tab. Here I aim to bring together US macro fundamentals on a single page. The fundamentals include:

  • Real GDP
  • Inflation
  • Yield Curve
  • Inflation Expectation
  • Industrial Production

[Screenshots: Macro Analytics tab]

This app is something I pulled together quickly. There will be design issues, but I just wanted to get the idea across: Shiny can be very powerful! I hope you will have fun with this and find it useful.

Thanks for reading,

Mike

Note: The application is highly un-optimized (slow). It downloads all data over the internet (Yahoo and Quandl). This is entirely for educational purposes. Please do not make financial decisions based on the applications output. I do not guarantee the correctness of the code.