Genetic Programming (GP) is a collection of techniques from evolutionary computing (EC) for the automatic generation of computer programs that perform a user-defined task. Starting with a high-level problem definition, GP creates a population of random programs that are progressively refined through variation and selection until a satisfactory solution is found. A lack of foresight does not prevent this blind watchmaker from arriving at a sensible design solution for modelling the data as demonstrated in this post.
My attention was first drawn to this approach through the MarIO video, where a program evolves through selection to get better and better at playing a computer game. A great advantage of GP is that the user need not specify (or know) much about the form of the eventual solution before starting.
I studied Zoology to doctorate level and thus spotted parallels between some of these processes and thinking tools and models for natural selection and evolution. This is all inherently interesting, warranting a blog post.
There are several packages that provide genetic programming tools for R; we will investigate a few of these packages and look at the scenarios in which they are useful compared with more typical machine-learning methods. Essentially, GP is an evolutionary search heuristic for arbitrary symbolic expressions, i.e. mathematical or logical formulas.
Modelling the orbit of a celestial body
I’ve picked a topic I don’t know too much about so that the GP approach can help me with this lack of expert domain knowledge. This scenario is receiving a lot of interest as there is a shortage of data scientists; it would be nice if feature extraction based on expert knowledge did not limit the training and accuracy of our models (data smashing is a relevant concept here; or consider the Higgs boson problem). We investigate a problem tackled by Kepler in the 17th century (note: that’s why Pluto is included). We investigate the relationship between the planets’ distance from the Sun and their orbital periods (see here for an alternative take on this problem).
With the data defined, we can now set about looking for a relationship by dedicating some computing time. Prior to analysis we must convert everything to Earth-relative units, making the Earth a relative centre of the universe: the units of distance become astronomical units (AU), and the units of period become Earth years.
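After conversion, the dataset might look something like this (a minimal sketch; the values are standard published figures, and the data-frame name `planets` is my own choice rather than necessarily the one used in the original code):

```r
# Distances in astronomical units (AU), orbital periods in Earth years
planets <- data.frame(
  body     = c("Mercury", "Venus", "Earth", "Mars", "Jupiter",
               "Saturn", "Uranus", "Neptune", "Pluto"),
  distance = c(0.387, 0.723, 1.000, 1.524, 5.203,
               9.537, 19.19, 30.07, 39.48),
  period   = c(0.241, 0.615, 1.000, 1.881, 11.86,
               29.46, 84.01, 164.8, 248.1)
)
```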
As the planets move further from the Sun, their orbital period seems to increase faster than linearly. However, I’m not really sure if the relationship is circular or elliptical or in-between, so wouldn’t like to guess. We can look at the data to confirm this suspicion.
Or plot it.
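A quick base-R plot does the job, assuming a data frame `planets` with columns `distance` (AU) and `period` (Earth years):

```r
# Orbital period against distance from the Sun; the curve is clearly non-linear
plot(period ~ distance, data = planets,
     xlab = "Distance from the Sun (AU)",
     ylab = "Orbital period (Earth years)")
text(planets$distance, planets$period, labels = planets$body,
     pos = 2, cex = 0.7)  # label each body next to its point
```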
This tutorial shows how to use the symbolicRegression function to solve symbolic modelling and regression tasks with minimal configuration work. The theme of this tutorial is the discovery of a mathematical formula describing the behaviour of a physical system based on measurement data, i.e. symbolic regression.
Symbolic regression is a type of regression analysis that searches the space of mathematical expressions to find the model that best fits a given dataset, both in terms of accuracy and simplicity. No particular model is provided as a starting point to the algorithm. Instead, initial expressions are formed by randomly combining mathematical building blocks such as mathematical operators, analytic functions, constants, and state variables. New equations are then formed by recombining previous equations, using genetic programming.
We are ready to start a symbolic regression run. We choose a time budget of one minute:
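The run might be set up along these lines (a sketch using the rgp package; the formula, seed, and data-frame name are my assumptions):

```r
library(rgp)  # provides symbolicRegression() and the stop-condition helpers

set.seed(42)  # GP is stochastic; fix the seed so the run is reproducible
result <- symbolicRegression(distance ~ period, data = planets,
                             stopCondition = makeTimeStopCondition(60))  # 60-second budget
```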
Selection and plotting of the model with best fitness can be performed as follows:
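With rgp, that step can be sketched as follows (assuming `result` is the object returned by the one-minute run; evolved individuals in the population are plain R functions, so the best one can be called directly):

```r
# Pick the individual with the lowest (best) fitness value
bestModel <- result$population[[which.min(result$fitnessValues)]]

# Overlay the evolved model on the data
plot(distance ~ period, data = planets,
     xlab = "Orbital period (Earth years)",
     ylab = "Distance from the Sun (AU)")
curve(bestModel(x), from = min(planets$period), to = max(planets$period),
      add = TRUE, col = "red")
```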
Hmm, not too bad; let’s give it a bit more time! Most of the data we’re training on is near the lower end of the scale, so we might expect our best-fit line to wiggle towards those points given more time, perhaps at the expense of the accuracy of the distance-from-the-Sun prediction in the regions of space with lower celestial-object density (the gas giants hoover everything up?).
While sat twiddling your thumbs waiting for this to compute, consider Kepler, who said of the lengthy arguments within his epic tome that sparked the Enlightenment:
If thou art bored with this wearisome method of calculation, take pity on me, who had to go through with at least seventy repetitions of it, at a very great loss of time.
Not much better? Is it the lack of data or do we need to tune our parameters?
Sequential Parameter Optimization for Genetic Programming
Finding good algorithm parameter settings for Genetic Programming applications is a complex task. Sequential parameter optimization (SPO) provides a framework for applying modern statistical methods to solve this task. Fortunately a package exists to help us! We provide the same starting conditions for our original experiment to allow comparison.
We don’t even have to specify anything, we just load the package and it does it for us.
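As the post describes it, the comparison run is simply the same call with SPOT loaded first (a sketch; whether your version of rgp really picks up SPOT automatically when it is on the search path is worth verifying):

```r
library(SPOT)  # sequential parameter optimization toolbox
library(rgp)

set.seed(42)  # same seed as the original experiment, for a fair comparison
resultSpot <- symbolicRegression(distance ~ period, data = planets,
                                 stopCondition = makeTimeStopCondition(60))
```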
I ran all this code several times for comparison by adjusting the set.seed at the start of this post. You’ll notice that adjusting this gives you different solutions, or best-fitted models.
This can be considered analogous to a phenomenon in Evolutionary Biology.
The great evolutionist Stephen Jay Gould often employed the metaphor “replaying life’s tape” to emphasize the preeminent role of contingency in the evolutionary process. In Gould’s view, the outcome of this Gedanken experiment would have been dramatically different from the actually observed course of events, because evolution is essentially a stochastic phenomenon whereby trajectories that start infinitely close to each other soon diverge exponentially. The actual evolutionary trajectory is fundamentally unpredictable because the selection of the fittest could occur along a great number of forking paths. Here, a different seed provides a different starting point.
But does it provide any more information? Do we get an understanding of the relationship that Kepler was after?
In this simple example we see that we have produced identical solutions given the same starting position, even with the SPOT package loaded to automatically tune our parameters for us. Perhaps with a different example we would have seen a greater impact.
We have a function that takes five times the square root of the input.
So the distance from the Sun is sort-of proportional to five times the square root of the orbit in years of the celestial body of interest.
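Written out as an R function, the evolved expression is simply (the name is my own):

```r
# Approximate distance from the Sun (AU) as five times the
# square root of the orbital period (Earth years)
gpModel <- function(period) 5 * sqrt(period)
```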
We can compare this to the modern solution and other historical approximations of the truth here.
This demonstrates how GP could be useful for feature selection despite being quite slow, as it can help non-experts determine sensible relationships or transformations (other options exist, e.g. the Box-Cox transformation).
This functionality is provided in the caret package: caret::gafs conducts a supervised binary search of the predictor space using a genetic algorithm.
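A minimal sketch of such a call is below; `predictors` (a data frame of candidate features) and `outcome` (the response vector) are illustrative placeholders, not objects from this post:

```r
library(caret)

# Genetic-algorithm feature selection with random-forest fitness,
# evaluated under cross-validation
ga_ctrl <- gafsControl(functions = rfGA, method = "cv", number = 5)
ga_fit  <- gafs(x = predictors, y = outcome,
                iters = 10,          # generations of the GA
                gafsControl = ga_ctrl)
```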
Altogether a good reminder that:
nanos gigantum humeris insidentes (dwarfs standing on the shoulders of giants)
The “third law”
Kepler discovered his “third law” a decade after the publication of the Astronomia nova, as a result of his investigations in the 1619 Harmonices Mundi. He found that the ratio of the cube of the length of the semi-major axis of each planet’s orbit to the square of its orbital period is the same for all planets.
We rearrange the formula and create a function using Kepler’s knowledge.
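In Earth years and AU the constant of proportionality is 1, so T² = d³ rearranges to d = T^(2/3); as a one-line R function (the name is my own):

```r
# Kepler's third law: distance^3 = period^2, so distance = period^(2/3)
keplerDistance <- function(period) period^(2/3)

keplerDistance(11.86)  # Jupiter: ~5.2 AU
```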
This is an improvement, demonstrating the value of expert domain knowledge provided by careful observation and experimentation to inform model building. Kepler built upon Copernicus’ heliocentric theory by explaining how the planets’ speeds varied using elliptical orbits. A consequence of this was the third law.
The square of the orbital period of a planet is directly proportional to the cube of the semi-major axis of its orbit. This captures the relationship between the distance of planets from the Sun, and their orbital periods.
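The law is easy to check numerically, assuming a data frame `planets` with columns `distance` (AU) and `period` (Earth years):

```r
# period^2 / distance^3 should be approximately 1 for every body
with(planets, round(period^2 / distance^3, 3))
```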