A more formalised implementation of regularisation
A short while ago I published a post based on an exercise from Andrew Ng’s Machine Learning course on Coursera. In that post I implemented regularisation for a vectorised implementation of linear regression. In reality, when approaching a machine learning problem, I wouldn’t want to rely on functions I have written myself when there are fully formed packages supporting these techniques. So in this post I’m going to reproduce the analysis from my previous post, but using the R package glmnet. Glmnet is a package of functions for generalised linear models that has been implemented in R by Trevor Hastie and Junyang Qian.
In my previous post, the code I implemented solved the minimisation problem:

$$\min_{\theta}\ \frac{1}{2m}\left[\sum^{m}_{i=1}\left(h_{\theta}(x^{(i)}) - y^{(i)}\right)^{2} + \lambda\sum^{n}_{j=1}\theta^{2}_{j}\right]$$

where the first summation is essentially the usual sum of the squared differences between the predicted and the observed values, and the second term is the regularisation penalty.
The `glmnet` function solves the problem^{1}:

$$\min_{\theta_{0},\theta}\ \frac{1}{m}\sum^{m}_{i=1} w_{i}\, l\left(y_{i},\ \theta_{0} + \theta^{T}x_{i}\right) + \lambda\left[(1-\alpha)\frac{\lVert\theta\rVert^{2}_{2}}{2} + \alpha\lVert\theta\rVert_{1}\right]$$
There are a few additional parameters here, but fundamentally the problem is the same. The function $l(y,\eta)$ for the Gaussian case (i.e. a simple linear model with a normal error structure) is equivalent to $\frac{1}{2}(y-\eta)^2$, and we can make the regularisation term look a lot more familiar by writing it out elementwise, as in Friedman et al. (2010)^{2}:

$$\lambda\sum^{n}_{j=1}\left[\frac{1}{2}(1-\alpha)\theta^{2}_{j} + \alpha\lvert\theta_{j}\rvert\right]$$
So this is much closer to the problem I was working with before. $\vec{w}$ is simply a vector of weights of length $m$, which default to 1. Then there is the parameter $\alpha$, which controls the rather enigmatically named elasticnet penalty ($\alpha\in[0,1]$). We see the original summation over the $j$ elements of $\theta$ ($\sum^n_{j=1} \theta^{2}_j$), but each element is multiplied by $\frac{1}{2}(1-\alpha)$ (with $(1-\alpha)\in[0,1]$ too), to which $\alpha$ times the absolute value of $\theta_j$ is added.
$\alpha$ is important: setting $\alpha = 1$ will result in lasso regularisation, whilst $\alpha = 0$ gives you ridge regularisation. The practical significance of this is that ridge regression will not totally remove features from the regression, but just reduces their importance. Lasso, on the other hand, will set parameters to zero, thus removing features from the regression^{3} entirely. $\alpha$ can thus be set anywhere between the two extremes; according to the documentation, setting $\alpha=0.5$ is effective at selecting groups of correlated features and including or excluding them from the model together.
So with a little digging, this equation is not quite as scary as it first looked, and is not so different from the regularisation that I have already been using.
glmnet in action
For this example, I again use the `mtcars` dataset, and make the same split of the data as before. So in the training set $m=19$ and $n = 3$: `cyl`, `disp`, `hp`.
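The preparation itself isn’t reproduced here; a minimal sketch might look like the following (the seed and the particular rows sampled are my assumptions, and the original post’s split may well differ):

```r
library(glmnet)

data(mtcars)

# Assumed: a random 19/13 split of the 32 rows; the original post's
# exact split (and seed) is not shown here.
set.seed(42)
train <- sample(seq_len(nrow(mtcars)), 19)

X_train <- as.matrix(mtcars[train, c("cyl", "disp", "hp")])
y_train <- mtcars$mpg[train]
X_test  <- as.matrix(mtcars[-train, c("cyl", "disp", "hp")])
y_test  <- mtcars$mpg[-train]
```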
Here’s what it looks like:
From here on I follow the quick start guide in the `glmnet` documentation. To start with, I’ll stick to just the original features, without adding any polynomial terms as I did before.
The simple use case is as follows:
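The call itself isn’t shown above; a sketch, assuming a design matrix `X` of the three features and a response `y` of `mpg` values (the names and the rows used are mine, purely for illustration):

```r
library(glmnet)

# Assumed training data: 19 rows of mtcars, features cyl, disp and hp
X <- as.matrix(mtcars[1:19, c("cyl", "disp", "hp")])
y <- mtcars$mpg[1:19]

# Gaussian (linear) model fitted over a decreasing sequence of lambdas
fit <- glmnet(X, y)
print(fit)  # one row per lambda: Df, %Dev and Lambda
```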
When printed, this gives a summary of each step towards minimisation, with the number of coefficients not set to zero (`Df`), the percentage of null deviance the model explains, and the value of $\lambda$ that was tried. In the present case, the process stops early, after 55 values of $\lambda$ (the default maximum is 100), because the deviance explained had remained fairly constant.
We can use the `coef` method to extract the coefficients for a given $\lambda$. Setting $\lambda=0$ (i.e. no regularisation) returns coefficients very similar to those from the simple multiple linear regression in my previous post.
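A sketch of that extraction, assuming a `fit` object like the one above:

```r
library(glmnet)
X <- as.matrix(mtcars[1:19, c("cyl", "disp", "hp")])  # assumed training rows
y <- mtcars$mpg[1:19]
fit <- glmnet(X, y)

# s = 0 asks for the unregularised end of the path; glmnet interpolates
# between the lambda values it actually fitted unless exact = TRUE
coef(fit, s = 0)
```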
And it looks pretty linear (in this and all the plots that follow, the red symbols are the values predicted by the model, whilst blue are the training/test set values):
Note that if you don’t specify a $\lambda$ with `s = 0`, then `predict` will return fitted values for all of the models evaluated: in this case, 55.
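This behaviour is easy to see from the shape of the output (again assuming the illustrative `fit` from above):

```r
library(glmnet)
X <- as.matrix(mtcars[1:19, c("cyl", "disp", "hp")])  # assumed training rows
y <- mtcars$mpg[1:19]
fit <- glmnet(X, y)

# One column of fitted values when s is given...
dim(predict(fit, newx = X, s = 0))  # 19 x 1
# ...but one column per fitted lambda when it is not
dim(predict(fit, newx = X))
```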
To choose a model, we can use cross-validation, which is nicely implemented in the `cv.glmnet()` function and specified in the same way. For this example I’m going to add third-order polynomials to $X$ to try to capture some of the curvature evident in the plots.
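A sketch of the expansion and the cross-validated fit (the use of `poly()` with raw monomials is my assumption about how the 19 features were built; the original code isn’t shown here):

```r
library(glmnet)

# Assumed training rows; the original split is not shown here
X <- mtcars[1:19, c("cyl", "disp", "hp")]
y <- mtcars$mpg[1:19]

# All monomials of the three features up to degree 3:
# choose(3 + 3, 3) - 1 = 19 columns
X_poly <- as.matrix(poly(X$cyl, X$disp, X$hp, degree = 3, raw = TRUE))

# 10-fold cross-validation over the lambda path
cv.fit <- cv.glmnet(X_poly, y)
```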
From the original $n=3$ features, this results in $n=19$ features in total after creation of the polynomial terms, on just $m=19$ training examples. So this model could overfit badly, but it is a great chance to test out regularisation.
Again, printing this object will give a list of all the models tried with varying $\lambda$, but the fitted object also has two additional slots: `$lambda.min` and `$lambda.1se`. These are, respectively, the value of $\lambda$ that gives the minimum mean cross-validated error, and the value of $\lambda$ that gives the most heavily regularised model whose error is still within one standard error of `$lambda.min`.
We can extract the coefficients for these with:
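For example (assuming the `cv.fit` object sketched above):

```r
library(glmnet)
X <- mtcars[1:19, c("cyl", "disp", "hp")]  # assumed training rows
y <- mtcars$mpg[1:19]
X_poly <- as.matrix(poly(X$cyl, X$disp, X$hp, degree = 3, raw = TRUE))
cv.fit <- cv.glmnet(X_poly, y)

# Coefficients at the two special lambda values; a "." marks a
# parameter that has been removed from the model
coef(cv.fit, s = "lambda.min")
coef(cv.fit, s = "lambda.1se")
```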
Note that a full stop indicates that a parameter has been set to zero. Interestingly, after cross-validation only the original features have been retained. A plot of the object `cv.fit` is informative, and shows that the best mean squared errors are obtained with only three parameters.
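The plot is produced with the object’s own `plot` method (setup repeated here for completeness; the rows used are my assumption):

```r
library(glmnet)
X <- mtcars[1:19, c("cyl", "disp", "hp")]  # assumed training rows
y <- mtcars$mpg[1:19]
X_poly <- as.matrix(poly(X$cyl, X$disp, X$hp, degree = 3, raw = TRUE))
cv.fit <- cv.glmnet(X_poly, y)

# Cross-validated MSE against log(lambda); the numbers along the top
# are the counts of nonzero coefficients at each lambda
plot(cv.fit)
```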
This may suggest that to retain more parameters in the model, I need to adjust the $\alpha$ parameter closer to 0 (i.e. towards ridge regression), which will not remove parameters from the model entirely.
Comparing the result here (using the $J_{train}$ function I defined previously) with the training errors obtained in the previous post, neither of the two models (`lambda.min` and `lambda.1se`) matches the performance of the regularised models which retained all 19 parameters ($\approx 4$).
So let’s try again, this time adjusting the parameter $\alpha$.
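A sketch of the ridge-style refit (again, the data setup is my assumption):

```r
library(glmnet)
X <- mtcars[1:19, c("cyl", "disp", "hp")]  # assumed training rows
y <- mtcars$mpg[1:19]
X_poly <- as.matrix(poly(X$cyl, X$disp, X$hp, degree = 3, raw = TRUE))

# alpha = 0 gives pure ridge regularisation: coefficients are shrunk
# towards zero but never removed from the model entirely
cv.fit.ridge <- cv.glmnet(X_poly, y, alpha = 0)
coef(cv.fit.ridge, s = "lambda.min")
```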
OK so this time we have retained all of the parameters, but the coefficients are very small, and unlikely to wield much influence on the model.
So what about the errors? The $\alpha=0$ ridge model fares less well than the $\alpha=0.5$ elasticnet model.
Of course this is training set error. What about the test set?
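A sketch of computing the test-set error (the split, the half-MSE cost form, and the use of raw monomials are all my assumptions, carried over from the earlier sketches):

```r
library(glmnet)

set.seed(42)  # assumed; the original post's split may differ
train <- sample(seq_len(nrow(mtcars)), 19)
y_test <- mtcars$mpg[-train]

# raw = TRUE monomials depend only on the values passed in, so the
# train and test features can be expanded with the same call
X_poly  <- as.matrix(poly(mtcars[train, "cyl"], mtcars[train, "disp"],
                          mtcars[train, "hp"], degree = 3, raw = TRUE))
X_ptest <- as.matrix(poly(mtcars[-train, "cyl"], mtcars[-train, "disp"],
                          mtcars[-train, "hp"], degree = 3, raw = TRUE))

cv.fit <- cv.glmnet(X_poly, mtcars$mpg[train])
pred   <- predict(cv.fit, newx = X_ptest, s = "lambda.min")

# Half mean squared error on the test set
sum((pred - y_test)^2) / (2 * length(y_test))
```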
How does this look plotted? First $\alpha=0.5$:
And for $\alpha = 0$:
Setting $\alpha=0$ definitely makes things worse, but neither model captures the curvature inherent in the data.
A simpler model
OK, so $m=n$ is a pretty extreme example which is a bit out of the ordinary. I’ll repeat the above using only a second-degree polynomial, which gives $n=9$ features.
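The degree-2 variant of the earlier sketch (same assumptions about the data setup):

```r
library(glmnet)
X <- mtcars[1:19, c("cyl", "disp", "hp")]  # assumed training rows
y <- mtcars$mpg[1:19]

# Degree-2 expansion: choose(3 + 2, 2) - 1 = 9 columns
X_poly2 <- as.matrix(poly(X$cyl, X$disp, X$hp, degree = 2, raw = TRUE))

cv.fit2 <- cv.glmnet(X_poly2, y, alpha = 0.5)
coef(cv.fit2, s = "lambda.min")
```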
With just second-degree polynomials and $\alpha = 0.5$, most of the parameters have been retained in the model, and the error is looking a lot smaller:
But how does it plot?
Now this is looking much more promising! How does it fare on the test set?
Also looking like a pretty strong fit. And the error?
Interestingly, this is slightly higher than the errors of 1.00 and 1.03 achieved using the simpler code I implemented in my earlier post. This is surprising, as intuitively from the plots I would have said that the `glmnet` model had performed better. I may have to check this…
It’s also interesting to see that `glmnet` did not perform well when I included the extreme case of one feature per training example, yet the simpler algorithm I implemented produced almost identical results when including second- or third-order polynomials (albeit with a different $\lambda$). All that said, `glmnet` is incredibly simple to use (and parallel-computing ready), and I will certainly be using it from now on when dealing with these kinds of problems.

Friedman, J., Hastie, T. and Tibshirani, R. (2010) ‘Regularization Paths for Generalized Linear Models via Coordinate Descent’, Journal of Statistical Software, 33(1). ↩

https://en.wikipedia.org/wiki/Least_squares#Lasso_method and http://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html. ↩