
# Monster Hunting with XGBoost

Kaggle’s bread and butter is its Machine Learning, or Predictive Analytics, competitions. These scenarios exercise only a small fraction of the full Data Science process. However, this gamified version of Data Science can be engaging and may be an interesting hook for newcomers, as it side-steps some of the more time-consuming data-wrangling work. I introduce it here to help those looking for a Data Science related resolution or goal.

In order to boost one’s position up the Kaggle leaderboard, one should employ algorithms that have consistently scored highly or won competitions (through fine-tuning of parameters). Broadly, Kaggle competitions are won using Deep Learning or Extreme Gradient Boosting (XGBoost) methods (Chen & Guestrin, 2016).

We introduce a gradient boosted tree model using the award-winning xgboost package, which comes with plenty of tunable parameters for optimising a model. Following the Unix philosophy, xgboost does one thing well and integrates easily with other data science packages and processes. To demonstrate, we also introduce caret for creating simple dummy (binary) variables prior to model building.

There are lots of excellent tutorials for using xgboost in R; however, I found a dearth of blogs explaining how to prepare one’s data for use with xgboost, particularly when it is already split into training and test sets, as is the norm for Kaggle competitions.

# Data

The data can be downloaded from the Kaggle Halloween competition page. We use this classification problem because the competition has now closed, avoiding any spoilers. We read in the training data to train our learner, or model, in order to predict the class of the test data.
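The loading code is not reproduced here; the file names train.csv and test.csv are an assumption about how the Kaggle download is named. Since the real files live on the competition page, the sketch below builds a tiny synthetic stand-in with the same column layout (colour levels and Monster types assumed from the competition data):

```r
# With the real files in the working directory, loading is simply:
#   train <- read.csv("train.csv")
#   test  <- read.csv("test.csv")   # same columns, minus type

# Synthetic stand-in for illustration only:
set.seed(1)
n <- 12
train <- data.frame(
  id            = 1:n,
  bone_length   = runif(n),
  rotting_flesh = runif(n),
  hair_length   = runif(n),
  has_soul      = runif(n),
  color         = sample(c("black", "clear", "white"), n, replace = TRUE),
  type          = sample(c("Ghost", "Ghoul", "Goblin"), n, replace = TRUE)
)
str(train)
```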

Inspecting the data reveals some of the characteristics of the Monsters and how they relate. Perhaps there is enough of a difference to predict the class of a Monster by its characteristics alone? Some ids are missing; presumably they make up the set of Monsters we are tasked with classifying in the test set. We also notice that there are three types of Monster, making this a multi-class classification problem.

## Graphical Data Analysis

Let’s visualise these relationships and try to spot any patterns. Here I draw just one graphic which conveys a lot of information. Plotting it took minimal effort, don’t waste time making it pretty! Spend time making various plots and looking at the data from many perspectives to work out which variables are likely to be useful in classifying the Monsters correctly.

First look at the density plots on the diagonal, which are “sorta” histograms for the associated variable (read off the column or row; they are the same). Helpfully, the variables already lie between zero and one, so there is no need to normalise; thanks, Kaggle! The lower panel visualises all pairwise interactions and may indicate which are useful for distinguishing between Monster classes (read off the y axis and the x axis to identify the pair being plotted). We can even quantify these correlations, as shown in the top panel, for those of us who prefer numbers to graphics (“Boo!” said the Ghost). The corporeal monsters appear more similar to one another than to their ethereal counterpart. Eyeballing this, it looks like linear discriminant analysis (LDA) might perform well (spoiler alert: a default mlr LDA outperforms default XGBoost).
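The plotting code is not shown; a panel plot with density diagonals, pairwise scatters and a correlation panel is the sort of thing GGally::ggpairs produces. As a base-R approximation on synthetic stand-in data:

```r
# Base-R sketch; with the real training set a richer version would be
# GGally::ggpairs(train, columns = 2:5, ggplot2::aes(colour = type)).
set.seed(1)
train <- data.frame(
  bone_length   = runif(20),
  rotting_flesh = runif(20),
  hair_length   = runif(20),
  has_soul      = runif(20),
  type = factor(sample(c("Ghost", "Ghoul", "Goblin"), 20, replace = TRUE))
)

# One colour per Monster type; look for clouds that separate by colour.
pairs(train[, 1:4], col = as.integer(train$type),
      main = "Monster characteristics by type")
```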

## Dummy variables

The algorithm can’t deal with multi-level factors, so we express our colour variable as a series of dummy (binary) variables.
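As a sketch, one explicit (and deliberately repetitive) way to build the dummies by hand, with colour levels assumed from the competition data:

```r
# Colour levels assumed from the competition data dictionary.
colors <- c("black", "blood", "blue", "clear", "green", "white")
set.seed(1)
train <- data.frame(color = sample(colors, 10, replace = TRUE),
                    stringsAsFactors = FALSE)

# One binary column per colour level; 1 if the Monster is that colour.
train$color_black <- ifelse(train$color == "black", 1, 0)
train$color_blood <- ifelse(train$color == "blood", 1, 0)
train$color_blue  <- ifelse(train$color == "blue",  1, 0)
train$color_clear <- ifelse(train$color == "clear", 1, 0)
train$color_green <- ifelse(train$color == "green", 1, 0)
train$color_white <- ifelse(train$color == "white", 1, 0)

# Sanity check: each row should have exactly one dummy set to 1.
rowSums(train[, -1])
```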

Hadley would not approve; I repeated myself! Instead, write your own function or, better yet, rely on others’ tested code (see caret::dummyVars, which creates a full set of dummy variables). I use the above method as it’s more explicit for the reader.

## Create additional variables using expert knowledge

We don’t know much about the biology of these Monster species; if we did, we could create new variables that might help us train a better model. An excellent example is the use of xgboost to classify success or failure of Higgs Boson production in the Large Hadron Collider, where this second-place entry describes standard normalisation (transformation of skewed variables) as well as feature engineering using physics knowledge.
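Mechanically, engineering a feature is a one-liner. The hair_soul interaction below is a hypothetical example of my own, not a feature from the original post; whether it helps is an empirical question:

```r
set.seed(1)
train <- data.frame(hair_length = runif(10), has_soul = runif(10))

# Hypothetical engineered feature: do hairier Monsters with more soul
# cluster by type? Domain knowledge would suggest better candidates.
train$hair_soul <- train$hair_length * train$has_soul
head(train)
```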

## XGBoost compatible

xgboost likes numbers, so we convert the Monster type, or label, to a number using the following conversion. After dropping the variables we no longer need, we put the data into a matrix for handling by this fast and efficient algorithm, then construct an xgb.DMatrix object from the matrix.
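A minimal sketch of those three steps on synthetic stand-in data, assuming the xgboost package is installed; the column names match the competition data:

```r
library(xgboost)

set.seed(1)
train <- data.frame(
  bone_length   = runif(10),
  rotting_flesh = runif(10),
  hair_length   = runif(10),
  has_soul      = runif(10),
  type = sample(c("Ghost", "Ghoul", "Goblin"), 10, replace = TRUE)
)

# Step 1: map the three labels to 0, 1, 2 -- xgboost wants numeric classes.
type_levels <- sort(unique(train$type))
train$label <- as.integer(factor(train$type, levels = type_levels)) - 1

# Step 2: drop what the learner doesn't need and coerce to a numeric matrix.
features <- as.matrix(train[, c("bone_length", "rotting_flesh",
                                "hair_length", "has_soul")])

# Step 3: wrap it in the xgb.DMatrix container the algorithm expects.
dtrain <- xgb.DMatrix(data = features, label = train$label)
```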

## Training the model

We train the model using a specified objective. The nice thing about this function is that we can specify our own objective if we need to. The key point here is that we set the objective argument to multi:softmax to handle the multi-class nature of the problem; there be three Monster types. The most critical step in using this algorithm is understanding, or at least being aware of, the customisable parameters available; this allows fine-tuning of one’s model and helps you avoid issues such as overfitting (for example, by adjusting the eta argument).
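The training call is not reproduced above; as a sketch on synthetic stand-in data, using xgb.train (the workhorse underneath the xgboost() wrapper, and stable across package versions):

```r
library(xgboost)

set.seed(1)
n <- 60
features <- matrix(runif(n * 4), ncol = 4,
                   dimnames = list(NULL, c("bone_length", "rotting_flesh",
                                           "hair_length", "has_soul")))
labels <- sample(0:2, n, replace = TRUE)   # three Monster classes, coded 0-2
dtrain <- xgb.DMatrix(data = features, label = labels)

# multi:softmax handles the multi-class objective; num_class must match the
# number of classes, and eta (default 0.3) shrinks each boosting step to
# guard against overfitting.
bst <- xgb.train(params = list(objective = "multi:softmax", num_class = 3,
                               eta = 0.3, max_depth = 3),
                 data = dtrain, nrounds = 10, verbose = 0)
```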

Try increasing the maximum number of iterations (nrounds); this reduces the training error at the risk of overfitting. You can add and adjust the eta argument to mitigate this (default = 0.3). Turn all the knobs and see what happens (taking heed of the help in R, of course: ?xgboost).

## Variable or feature importance

Let’s inspect which features are worth including in our model.

All of our features seem to be fairly relevant. You could try removing the least important feature, rerunning the model and seeing whether the error drops.
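The importance calculation is not shown above; a self-contained sketch on synthetic stand-in data (xgboost assumed installed), using xgb.importance:

```r
library(xgboost)

set.seed(1)
n <- 60
feats <- c("bone_length", "rotting_flesh", "hair_length", "has_soul")
features <- matrix(runif(n * 4), ncol = 4, dimnames = list(NULL, feats))
labels <- sample(0:2, n, replace = TRUE)

bst <- xgb.train(params = list(objective = "multi:softmax", num_class = 3),
                 data = xgb.DMatrix(data = features, label = labels),
                 nrounds = 10, verbose = 0)

# Gain-based importance per feature; xgb.plot.importance(imp) draws it.
imp <- xgb.importance(feature_names = feats, model = bst)
print(imp)
```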

## Testing the model

We prepare the test data from Kaggle in a similar way to our training set, except that, obviously, there is no Monster type column to remove.
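A sketch of that step on synthetic stand-in data: the test matrix goes through the same dummy-variable and matrix steps as training, minus the label, then straight into predict():

```r
library(xgboost)

set.seed(1)
feats <- c("bone_length", "rotting_flesh", "hair_length", "has_soul")
train_x <- matrix(runif(240), ncol = 4, dimnames = list(NULL, feats))
train_y <- sample(0:2, 60, replace = TRUE)
test_x  <- matrix(runif(40),  ncol = 4, dimnames = list(NULL, feats))  # no labels

bst <- xgb.train(params = list(objective = "multi:softmax", num_class = 3),
                 data = xgb.DMatrix(data = train_x, label = train_y),
                 nrounds = 10, verbose = 0)

# With multi:softmax the predictions come back as numeric class codes 0-2.
pred <- predict(bst, xgb.DMatrix(data = test_x))
pred
```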

## Convert for Kaggle Submission

Remember how we changed the Monster labels to numeric for training? We now need to convert them back again to make our predictions interpretable.
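A base-R sketch of the round trip, with stand-in predictions and ids; the level order must match whatever order was used when encoding the labels:

```r
# Numeric codes back to Monster names: index into the original level order.
type_levels <- c("Ghost", "Ghoul", "Goblin")   # order used when encoding
pred <- c(0, 2, 1, 1)                          # stand-in predictions
test_ids <- c(3, 6, 9, 10)                     # stand-in test ids

submission <- data.frame(id = test_ids,
                         type = type_levels[pred + 1],
                         stringsAsFactors = FALSE)
submission
# write.csv(submission, "submission.csv", row.names = FALSE) for Kaggle
```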

## Conclusion

There are plenty of good XGBoost posts around, but there was a dearth of posts dealing with the Kaggle situation: when the data is pre-split into training and test sets with the test classes hidden. This post demonstrates how to implement the famous XGBoost algorithm in R using data from an old Kaggle learning competition. Hopefully this will XGBoost your position on the Kaggle leaderboards! To extend this code, try creating new features from the interactions of the variables and training your model using these.