Rsampling Fama French

These are my notes from learning Rsampling Fama French. The article introduces how to conduct k-fold cross-validation in R using the rsample and yardstick packages. For more details, you can read the original article.

Our goal today is to explore k-fold cross-validation via the rsample package, and a bit of model evaluation via the yardstick package. In this post, we will use the rmse() function from yardstick, but our main focus will be the vfold_cv() function from rsample. We are going to explore these tools in the context of linear regression and Fama French, which might seem weird since these tools are typically employed in the realms of machine learning, classification, and the like. We’ll stay in the world of explanatory models via linear regression for two reasons.

First, and this is a personal preference, when getting to know a new package or methodology, I prefer to do so in a context that’s already familiar. I don’t want to learn rsample whilst also getting to know a new data set and learning the complexities of some crazy machine learning model. Since Fama French is familiar from our previous work, we can focus on the new tools in rsample and yardstick. Second, factor models are important in finance, despite relying on good old linear models, and we might even find creative new ways to deploy or visualize them.

Next, we will import daily prices for five ETFs, convert them to returns, then import the daily Fama French five-factor data and join it to our ETF returns. Here’s the code to make that happen:

```r
library(tidyverse)
library(broom)
library(tidyquant)
library(timetk)

symbols <- c("SPY", "EFA", "IJS", "EEM", "AGG")

prices <- getSymbols(symbols, src = "yahoo",
                     from = "2012-12-31",
                     to = "2018-12-31",
                     auto.assign = TRUE,
                     warnings = FALSE) %>%
  map(~Ad(get(.))) %>%
  reduce(merge) %>%
  `colnames<-`(symbols)

asset_returns_long <- prices %>%
  tk_tbl(preserve_index = TRUE, rename_index = "date") %>%
  gather(asset, returns, -date) %>%
  group_by(asset) %>%
  mutate(returns = (log(returns) - log(dplyr::lag(returns)))) %>%
  na.omit()

factors_data_address <- "http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp/Global_5_Factors_Daily_CSV.zip"
factors_csv_name <- "Global_5_Factors_Daily.csv"

temp <- tempfile()
download.file(factors_data_address,
              temp,
              quiet = TRUE)

Global_5_Factors <- read_csv(unz(temp, factors_csv_name), skip = 6) %>%
  rename(date = X1, MKT = `Mkt-RF`) %>%
  mutate(date = ymd(parse_date_time(date, "%Y%m%d"))) %>%
  mutate_if(is.numeric, funs(. / 100)) %>%
  select(-RF)

data_joined_tidy <- asset_returns_long %>%
  left_join(Global_5_Factors) %>%
  na.omit()
```

After running that code, we have an object called data_joined_tidy. It holds the daily returns for the five ETFs and the Fama French factors. Here’s a look at the first row for each ETF.

```r
data_joined_tidy %>% slice(1)
#> # A tibble: 5 x 8
#> # Groups:   asset [5]
#>   date       asset  returns    MKT     SMB    HML     RMW
#>   <date>     <chr>    <dbl>  <dbl>   <dbl>  <dbl>   <dbl>
#> 1 2013-01-02 AGG   -0.00117 0.0198 -0.0043 0.0028 -0.0025
#> 2 2013-01-02 EEM    0.0194  0.0198 -0.0043 0.0028 -0.0025
#> 3 2013-01-02 EFA    0.0154  0.0198 -0.0043 0.0028 -0.0025
#> 4 2013-01-02 IJS    0.0271  0.0198 -0.0043 0.0028 -0.0025
#> 5 2013-01-02 SPY    0.0253  0.0198 -0.0043 0.0028 -0.0025
#> # … with 1 more variable: CMA <dbl>
```

Let’s work with just one ETF for today and use filter(asset == "AGG") to shrink our data to just that ETF:

```r
agg_ff_data <- data_joined_tidy %>% 
  dplyr::filter(asset == "AGG")
```

Okay, we’re going to regress the daily returns of AGG on one factor, then three factors, then five factors, and we want to evaluate how well each model explains AGG’s returns. That means we need a way to test the model. Last time, we looked at the adjusted $R^2$ values when the model was run on the entirety of AGG returns. Today, we’ll evaluate the model using k-fold cross-validation. That’s a pretty jargon-heavy phrase that isn’t part of the typical finance lexicon. Let’s start with the second part, cross-validation. Instead of running our model on the entire data set - all daily returns of AGG - we will run it on just part of the data set, then test the results on the part that we did not use. Those different subsets of our original data are often called the training and testing sets, though rsample calls them the analysis and assessment sets. We validate the model results by applying them to the assessment data and seeing how the model performed.

The k-fold bit refers to the fact that we’re not just dividing our data into one training and one testing subset; we’re actually going to divide it into a bunch of groups, a k number of groups, or a k number of folds. One of those folds is used as the validation set; the model is fit on the other k-1 folds, and then tested on the validation set. We’re doing this within a linear model to see how well it explains the data; it’s typically used in machine learning to see how well a model predicts data.
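
Before jumping to k folds, it might help to see the simplest version of this idea: one split into a training set and a testing set. Here is a minimal sketch using rsample's initial_split() (my own illustration, not from the original article; single_split, train_data and test_data are hypothetical names):

```r
library(rsample)

# One 80/20 split of the AGG data, for intuition only.
# vfold_cv() below generalizes this to k such splits.
set.seed(123)
single_split <- initial_split(agg_ff_data, prop = 0.8)

train_data <- training(single_split) # ~80% of the rows, used to fit the model
test_data  <- testing(single_split)  # the held-out ~20%, used to evaluate it

nrow(train_data)
nrow(test_data)
```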

If you’re like me, it will take a bit of tinkering to really grasp k-fold cross-validation, but rsample has a great function for dividing our data into k folds. If we wish to use five folds (the state of the art seems to be either five or ten folds), we call the vfold_cv() function, pass it our data object agg_ff_data, and set v = 5.

```r
library(rsample)
library(yardstick)

set.seed(752)

cved_ff <- vfold_cv(agg_ff_data, v = 5)

cved_ff
#> # 5-fold cross-validation
#> # A tibble: 5 x 2
#>   splits             id   
#>   <named list>       <chr>
#> 1 <split [1.2K/302]> Fold1
#> 2 <split [1.2K/302]> Fold2
#> 3 <split [1.2K/302]> Fold3
#> 4 <split [1.2K/302]> Fold4
#> 5 <split [1.2K/301]> Fold5
```

We have an object called cved_ff, with a column called splits and a column called id. Let’s peek at the first split.

```r
cved_ff$splits[[1]]
#> <1207/302/1509>
```

Three numbers. The first, 1207, tells us how many observations are in the analysis set. Since we have five folds, we should have roughly 80% of our data in the analysis set. The second number, 302, tells us how many observations are in the assessment set, which is about 20% of our original data. The third number, 1509, is the total number of observations in our original data.
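
If those three numbers feel abstract, we can confirm them by pulling out the analysis and assessment sets of that first split with analysis() and assessment() and counting rows (a quick check of my own, not part of the original article):

```r
first_split <- cved_ff$splits[[1]]

nrow(analysis(first_split))   # 1207: the observations the model will be fit on
nrow(assessment(first_split)) # 302: the held-out observations
nrow(agg_ff_data)             # 1509: all observations in our AGG data
```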

Next, we want to apply a model to the analysis set of the k-folded data and test the results on the assessment set. Let’s start with one factor and run a simple linear model, lm(returns ~ MKT).

We want to run it on analysis(cved_ff$splits[[1]]) - the analysis set of our first split.

```r
ff_model_test <- lm(returns ~ MKT, 
                    data = analysis(cved_ff$splits[[1]]))

ff_model_test
#> Call:
#> lm(formula = returns ~ MKT, data = analysis(cved_ff$splits[[1]]))
#> 
#> Coefficients:
#> (Intercept)          MKT  
#>   2.687e-05   -2.420e-02
```

Nothing too crazy so far. Now we want to test on our assessment data. The first step is to add our fitted values to the assessment data. We’ll use augment() from broom for that task, and pass it assessment(cved_ff$splits[[1]]).

```r
ff_model_test %>% 
  augment(newdata = assessment(cved_ff$splits[[1]])) %>%
  head() %>%
  select(returns, .fitted)
#> # A tibble: 6 x 2
#>     returns    .fitted
#>       <dbl>      <dbl>
#> 1 -0.00253   0.0000777
#> 2  0.000901  0.0000390
#> 3 -0.000541 -0.0000506
#> 4  0.00282   0.000286 
#> 5 -0.000999 -0.0000143
#> 6 -0.000723 -0.0000191
```

We just added our fitted values to the assessment data, the subset of the data on which the model was not fit. How well did our model do when we compare the fitted values to the actual values in the held-out set?

We can use the rmse() function from yardstick to measure our model. RMSE stands for root mean squared error: the square root of the average squared difference between our fitted values and the actual values in the assessment data. A lower RMSE is better!
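
Written out, with $y_i$ the actual returns in the assessment set, $\hat{y}_i$ the fitted values, and $n$ the number of assessment observations:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}$$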

```r
ff_model_test %>% 
  augment(newdata = assessment(cved_ff$splits[[1]])) %>%
  rmse(returns, .fitted)
#> # A tibble: 1 x 3
#>   .metric .estimator .estimate
#>   <chr>   <chr>          <dbl>
#> 1 rmse    standard     0.00197
```
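
As a quick sanity check (my own addition, not in the original article), we can compute the same quantity by hand from the formula above and confirm it matches yardstick's answer:

```r
# Manual RMSE: square the errors, average them, take the square root.
ff_model_test %>% 
  augment(newdata = assessment(cved_ff$splits[[1]])) %>%
  summarise(rmse_by_hand = sqrt(mean((returns - .fitted)^2)))
```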

Now that we’ve done that piece by piece, let’s wrap the whole operation into one function. This function takes one argument, a split, and we’re going to use pull() so we can extract the raw number instead of the entire tibble result.

```r
model_one <- function(split) {
  split_for_model <- analysis(split)
  ff_model <- lm(returns ~ MKT, data = split_for_model)
  holdout <- assessment(split)
  rmse <- ff_model %>%
    augment(newdata = holdout) %>%
    rmse(returns, .fitted) %>%
    pull(.estimate)
}

model_one(cved_ff$splits[[1]]) %>%
  head()
#> [1] 0.001972325
```

Now we want to apply that function to each of our five folds stored in cved_ff. We do that with a combination of mutate() and map_dbl(). We use map_dbl() instead of map() because we are returning a number, and there’s no good reason to store that number in a list column.

```r
cved_ff %>% 
  mutate(
    rmse = map_dbl(splits, model_one)
  )
#> # A tibble: 5 x 3
#>   splits             id       rmse
#> * <named list>       <chr>   <dbl>
#> 1 <split [1.2K/302]> Fold1 0.00197
#> 2 <split [1.2K/302]> Fold2 0.00202
#> 3 <split [1.2K/302]> Fold3 0.00190
#> 4 <split [1.2K/302]> Fold4 0.00205
#> 5 <split [1.2K/301]> Fold5 0.00186
```

OK, we have five RMSEs, since we ran the model on five separate analysis sets and tested it on five separate assessment sets. Let’s find the average RMSE by taking the mean() of the rmse column. That helps smooth out the noise introduced by our random assignment of observations to the five folds.

```r
cved_ff %>% 
  mutate(
    rmse = map_dbl(splits, model_one)
  ) %>%
  summarise(mean_rmse = mean(rmse))
```

Next, let's suppose we wish to find the mean RMSE for two other models:

```r
# Three-factor model:
model_two <- function(split) {
  split_for_model <- analysis(split)
  ff_model <- lm(returns ~ MKT + SMB + HML,
                 data = split_for_model)
  holdout <- assessment(split)
  rmse <- ff_model %>%
    augment(newdata = holdout) %>%
    rmse(returns, .fitted) %>%
    pull(.estimate)
}

# Five-factor model:
model_three <- function(split) {
  split_for_model <- analysis(split)
  ff_model <- lm(returns ~ MKT + SMB + HML + RMW + CMA,
                 data = split_for_model)
  holdout <- assessment(split)
  rmse <- ff_model %>%
    augment(newdata = holdout) %>%
    rmse(returns, .fitted) %>%
    pull(.estimate)
}

cved_ff %>%
  mutate(
    rmse_model_1 = map_dbl(splits, model_one),
    rmse_model_2 = map_dbl(splits, model_two),
    rmse_model_3 = map_dbl(splits, model_three)
  ) %>%
  summarise(
    mean_rmse_model_1 = mean(rmse_model_1),
    mean_rmse_model_2 = mean(rmse_model_2),
    mean_rmse_model_3 = mean(rmse_model_3)
  )
#> # A tibble: 1 x 3
#>   mean_rmse_model_1 mean_rmse_model_2 mean_rmse_model_3
#>               <dbl>             <dbl>             <dbl>
#> 1           0.00196           0.00192           0.00188
```
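
Before moving on, it can be worth eyeballing the per-fold results rather than only the means. Here is a small sketch of my own (not from the original article) that plots each model's RMSE on each fold; ggplot2 and gather() are already available via tidyverse:

```r
# Per-fold RMSE for the three models, plotted side by side.
cved_ff %>%
  mutate(
    rmse_model_1 = map_dbl(splits, model_one),
    rmse_model_2 = map_dbl(splits, model_two),
    rmse_model_3 = map_dbl(splits, model_three)
  ) %>%
  select(id, starts_with("rmse")) %>%
  gather(model, rmse, -id) %>%
  ggplot(aes(x = id, y = rmse, color = model, group = model)) +
  geom_point() +
  geom_line() +
  labs(x = "fold", y = "RMSE", title = "RMSE by fold and model")
```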

That code flow worked just fine, but we had to repeat ourselves when creating the functions for each model. Let’s toggle to a flow where we define three model formulas - the ones we wish to test via cross-validation and RMSE - then pass those formulas to one flexible function.

```r
mod_form_1 <- as.formula(returns ~ MKT)
mod_form_2 <- as.formula(returns ~ MKT + SMB + HML)
mod_form_3 <- as.formula(returns ~ MKT + SMB + HML + RMW + CMA)

ff_rmse_models_flex <- function(split, formula) {
  split_for_data <- analysis(split)
  ff_model <- lm(formula, data = split_for_data)
  holdout <- assessment(split)
  rmse <- ff_model %>%
    augment(newdata = holdout) %>%
    rmse(returns, .fitted) %>%
    pull(.estimate)
}

cved_ff %>%
  mutate(rmse_model_1 = map_dbl(cved_ff$splits,
                                ff_rmse_models_flex,
                                formula = mod_form_1),
         rmse_model_2 = map_dbl(cved_ff$splits,
                                ff_rmse_models_flex,
                                formula = mod_form_2),
         rmse_model_3 = map_dbl(cved_ff$splits,
                                ff_rmse_models_flex,
                                formula = mod_form_3)) %>%
  summarise(mean_rmse_model_1 = mean(rmse_model_1),
            mean_rmse_model_2 = mean(rmse_model_2),
            mean_rmse_model_3 = mean(rmse_model_3))
#> # A tibble: 1 x 3
#>   mean_rmse_model_1 mean_rmse_model_2 mean_rmse_model_3
#>               <dbl>             <dbl>             <dbl>
#> 1           0.00196           0.00192           0.00188
```
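
One further step, sketched here as my own extension rather than part of the original post: since the only thing changing is the formula, we can hold the three formulas in a named list and let purrr build the summary for us, so adding a fourth candidate model means adding one list entry rather than another mutate() line. The model_formulas name is hypothetical.

```r
# Hypothetical extension: loop over a named list of formulas instead of
# writing one mutate() line per model.
model_formulas <- list(
  model_1 = mod_form_1,
  model_2 = mod_form_2,
  model_3 = mod_form_3
)

# map_dfr() row-binds the one-row summaries and records the list names
# in a "model" column.
map_dfr(model_formulas, function(form) {
  cved_ff %>%
    mutate(rmse = map_dbl(splits, ff_rmse_models_flex, formula = form)) %>%
    summarise(mean_rmse = mean(rmse))
}, .id = "model")
```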
