Introduction¶
One of the most basic problems in data science goes something like this:
I want to fit a curve to data that is flexible enough to capture non-linear relationships (i.e. go through known data points without any error) but I also want confidence intervals for predictions at unknown points.
Most methods give you one or the other. You can go with linear or polynomial regressions and get confidence intervals, but they won’t fit your data perfectly. Or you can go with splines, LOESS and other non-parametric methods, but then you won’t get confidence intervals out of the box (you can still estimate them with bootstrapping, but that’s not always straightforward).
It was in this context that I first learned about Gaussian Processes (GPs). GPs give you a function that perfectly interpolates known data points while also providing variance estimates at unknown points. Here’s a very basic (and recognizable) illustration of a GP in action.
Once I learned about Gaussian Processes, I started seeing references to them everywhere. Want to optimize hyperparameters? GPs can help. Reading an academic paper doing some cool stuff? Oh look, GPs are involved, too. That’s how I decided to spend some more time understanding them (a bit better), and they did not disappoint.
Cool stuff is meant to be shared, and that is how this post/notebook was born. In it, you will find:
- Brief intro into how GPs work
- GP regression as a way to interpolate in multi-dimensional space
- Importance of kernels in GPs
- GPs for time-series forecasting
- GP regressions with multiple output variables 🤯
- GP regressions as surrogate functions and Bayesian optimization
Be forewarned: it’s a loooong one!
Data used in this notebook¶
One of the earliest applications of Gaussian Processes is known as kriging, named after Danie G. Krige, a mining engineer from South Africa. Staying close to the roots, this notebook uses weather forecasts in Lithuania to illustrate GPs (and it makes for pretty charts, too!). An extract of the dataset is displayed below. This notebook uses the GPy library. If you are reading it in post format, some of the code cells are excluded for brevity. You can find the entire notebook and associated data on GitHub.
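For reference, here is roughly the set of imports used throughout the post. The original import cell is omitted in the post format, so treat this as a reconstruction rather than the exact original.

#reconstructed import cell (the original is omitted in the post format)
import numpy as np
import pandas as pd
import geopandas as gpd
import altair as alt
import GPy
from scipy.stats import norm
from shapely.geometry import Point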
#read in weather forecasts and Lithuania's shape file
lt_forecasts = gpd.read_file("lt_forecasts.geojson")
lithuania = gpd.read_file('lt-shapefile/lt.shp')
lt_forecasts.head(n=5)
|   | forecastTimeUtc | airTemperature | windSpeed | cloudCover | totalPrecipitation | location | geometry |
|---|---|---|---|---|---|---|---|
| 0 | 2022-02-01T06:00:00 | -2.8 | 4.0 | 18.0 | 0.0 | Senoji Radiškė | POINT (23.25380 54.29488) |
| 1 | 2022-02-04T21:00:00 | 1.7 | 5.0 | 100.0 | 1.2 | Senoji Radiškė | POINT (23.25380 54.29488) |
| 2 | 2022-02-06T18:00:00 | 2.4 | 7.0 | 97.0 | 3.5 | Senoji Radiškė | POINT (23.25380 54.29488) |
| 3 | 2022-02-06T12:00:00 | 2.1 | 5.0 | 100.0 | 1.0 | Senoji Radiškė | POINT (23.25380 54.29488) |
| 4 | 2022-02-06T06:00:00 | 0.3 | 7.0 | 100.0 | 0.9 | Senoji Radiškė | POINT (23.25380 54.29488) |
How do GPs work?¶
Parametric algorithms typically fit a fixed number of parameters to data (think linear regression and its coefficients). Non-parametric methods, on the other hand, typically use data points themselves as “parameters” (think K-nearest neighbours where classification depends solely on existing data points).
Gaussian Processes fall somewhere in between. Instead of fitting a fixed number of parameters, they fit a distribution of functions to data, where these functions are drawn from a multivariate normal distribution (which is what enables confidence intervals out of the box), while the functions themselves are additionally parameterized by the existing data points. That’s not an easy-to-digest sentence, I know… One of the best definitions I found is from Julia’s GP package:
Gaussian processes are a family of stochastic processes which provide a flexible nonparametric tool for modelling data. A Gaussian Process places a prior over functions, and can be described as an infinite dimensional generalisation of a multivariate Normal distribution. Moreover, the joint distribution of any finite collection of points is a multivariate Normal. This process can be fully characterised by its mean and covariance functions, where the mean of any point in the process is described by the mean function and the covariance between any two observations is specified by the kernel. Given a set of observed real-valued points over a space, the Gaussian Process is used to make inference on the values at the remaining points in the space.
In some ways, GPs thus incorporate both the parametric and the non-parametric worlds – they have fixed parameters (the mean and covariance functions of the prior), but they also use the data as parameters (the posterior over functions is effectively drawn from an n-dimensional Gaussian distribution, where n is the number of sample points).
If you are interested in understanding GPs more deeply (and I definitely suggest that), I highly recommend a tutorial on YouTube by Richard Turner and a visual exploration of Gaussian processes by three researchers from the University of Konstanz.
Finally, I cannot leave out two wonderful memes by Richard McElreath…
[Embedded tweet: meme by Richard McElreath (@rlmcelreath), 21 February 2022]
If you are wondering how one can fit an “infinite-dimensional” normal distribution, the answer lies in the fact that a marginal distribution of a multivariate normal is itself normal, just as the joint and conditional distributions of jointly normal variables are. Richard Turner does a really great job of illustrating these dynamics in his tutorial. If you don’t watch it now, make sure at least to put it in your bookmarks/open-tabs list – it’s a really good one!
For all their coolness, the GPs’ Achilles heel is computational complexity. Like other non-parametric methods, they scale with the amount of data, and most implementations are $O(n^3)$ in the number of observations. Improving that is an active line of research; in the meantime, data scientists concerned with speed can choose between the GPU-enabled GPyTorch and the efficiency-focused celerite libraries. Just keep in mind that GPs are not really an option if you have millions of observations.
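To make the conditioning idea concrete, here is a bare-bones sketch of the posterior formulas that libraries like GPy implement under the hood. The RBF kernel, the tiny noise term and the plain matrix inverse are my own illustrative choices (real implementations use Cholesky factorizations), and the $n \times n$ inverse is also where the $O(n^3)$ cost comes from.

#from-scratch GP conditioning: given observed (x_train, y_train), predict at x_new
def rbf_cov(a, b, lengthscale=1.0, variance=1.0):
    #squared-exponential covariance between every pair of rows in a and b
    sq_dists = np.sum(a**2, 1).reshape(-1, 1) + np.sum(b**2, 1) - 2 * a @ b.T
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def gp_posterior(x_train, y_train, x_new, noise=1e-6):
    K = rbf_cov(x_train, x_train) + noise * np.eye(len(x_train))  #covariance of observed points
    K_s = rbf_cov(x_train, x_new)                                 #observed vs. new points
    K_ss = rbf_cov(x_new, x_new)                                  #covariance of new points
    K_inv = np.linalg.inv(K)                                      #the O(n^3) step
    post_mean = K_s.T @ K_inv @ y_train                           #posterior mean at x_new
    post_cov = K_ss - K_s.T @ K_inv @ K_s                         #posterior covariance at x_new
    return post_mean, post_cov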
1. Gaussian Process Regression – interpolations¶
Let’s start with the classical application of Gaussian Process in geostatistics. Imagine you are building an application that requires knowing air temperature at any given location in Lithuania. What you have, though, is just the measurements at weather stations. Here’s how it looks visually.
Let’s use Gaussian Process regression to interpolate. We will fit a GP using coordinates as features and air temperature as the dependent variable.
#organize data into predictors (coordinates) and the dependent variable (temperature)
#note: morning_forecast is a subset of lt_forecasts for a single forecast time, prepared in a cell omitted from the post
coordinates = np.zeros((len(morning_forecast),2))
coordinates[:,0] = morning_forecast.geometry.x.values
coordinates[:,1] = morning_forecast.geometry.y.values
actual_temperatures = morning_forecast['airTemperature'].values.reshape(-1, 1)
#fit the simplest GP model with default parameters
interpolation_model = GPy.models.GPRegression(X=coordinates, Y=actual_temperatures)
interpolation_model.optimize_restarts(num_restarts = 1, verbose=False)
#define the grid covering Lithuania
x_space = np.linspace(coordinates[:,0].min(), coordinates[:,0].max(), 100)
y_space = np.linspace(coordinates[:,1].min(), coordinates[:,1].max(), 100)
xx, yy = np.meshgrid(x_space, y_space)
grid = np.concatenate((xx.reshape(-1,1), yy.reshape(-1, 1)), axis=1)
#obtain predictions over the grid - GPy returns both the mean and the variance
p_means, p_vars = interpolation_model.predict(grid)
And, here it is – now we have predictions over the entire country. Looks pretty, doesn’t it?
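The chart itself comes from an excluded plotting cell; if you want to reproduce something similar, a rough matplotlib sketch along these lines should work (the colour map and styling are my own choices, and it assumes the shapefile shares the forecasts’ coordinate reference system).

#rough alternative visualization of the interpolated surface
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(8, 6))
mesh = ax.pcolormesh(xx, yy, p_means.reshape(xx.shape), cmap="coolwarm")
lithuania.boundary.plot(ax=ax, color="black", linewidth=0.5)   #overlay the country outline (assumes matching CRS)
fig.colorbar(mesh, ax=ax, label="Predicted air temperature, °C")
ax.set_title("GP-interpolated temperature across Lithuania")
plt.show()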
But how good is it?¶
Pretty, sure, but are these predictions any good? Let’s test it out. I will split the dataset into a train and a test set and measure the prediction accuracy by mean absolute error (MAE).
#split the dataset
from sklearn.model_selection import train_test_split as tts
from sklearn.metrics import mean_absolute_error
train_coords, test_coords, train_temps, test_temps = tts(coordinates, actual_temperatures, random_state=42)
#small helper function to get training and test set errors
def get_errors(models, X_train, X_test, Y_train, Y_test):
    results = []
    for label, model in models.items():
        #GPy's predict() returns (mean, variance); we only need the mean here
        training_error = mean_absolute_error(Y_train, model.predict(X_train)[0])
        test_error = mean_absolute_error(Y_test, model.predict(X_test)[0])
        results.append((label, training_error, test_error))
    return pd.DataFrame(results, columns=['model', 'Training MAE', 'Test MAE'])
#let's create an initial model and test its accuracy
models = {
    'basic-GP': GPy.models.GPRegression(X=train_coords, Y=train_temps)
}
models['basic-GP'].optimize_restarts(num_restarts=1, verbose=False)
get_errors(models, train_coords, test_coords, train_temps, test_temps)
|   | model | Training MAE | Test MAE |
|---|---|---|---|
| 0 | basic-GP | 0.095522 | 0.122836 |
Looks pretty good to me! It’s off by 0.1 degree (Celsius) on average in a dataset where temperature ranges from $-7.9$ to $-0.3$ degrees.
What may be somewhat confusing is that the training error is not zero, even though GPs are often described as interpolating functions, and there are no duplicate input points in this dataset (in which case a GP would end up predicting the mean of the duplicated observations). The main reason is that GPy’s GPRegression includes a Gaussian noise (nugget) term whose variance is estimated alongside the kernel parameters, so the posterior mean is not forced to pass exactly through the training points; numerical artifacts in the fitting process can contribute, too. If exact interpolation is what you are after, you can fix the noise variance to a tiny value, as sketched below.
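A minimal sketch of that fix in GPy (I haven’t rerun the error table with it, so treat it as illustrative; the variable name is mine):

#pin the likelihood noise to a tiny value so the GP (almost) exactly interpolates the training points
noise_free_model = GPy.models.GPRegression(X=train_coords, Y=train_temps)
noise_free_model.Gaussian_noise.variance = 1e-8   #tiny fixed nugget instead of an estimated one
noise_free_model.Gaussian_noise.variance.fix()    #exclude it from optimization
noise_free_model.optimize_restarts(num_restarts=1, verbose=False)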
So far, we looked only at mean predictions. Remember that one of the unique aspects of a Gaussian Process is that we also get confidence (variance) over the predictions. Let’s see how accurate the confidence intervals are.
#get predictions for the test set
test_means, test_vars = models['basic-GP'].predict(test_coords)
#list of confidence intervals of interest
conf_ints = pd.DataFrame([0.5, 0.7, 0.9, 0.95, 0.99], columns=['confidence level'])
conf_ints['total observations'] = len(test_means)
#helper to find # of observations falling into a confidence interval
def count_in_range(cint):
    #convert a two-sided confidence level into a number of standard deviations
    factor = -norm.ppf((1 - cint) / 2)
    side_range = np.sqrt(test_vars) * factor
    return np.sum((test_temps <= (test_means + side_range)) & (test_temps >= (test_means - side_range)))
#calculate them all
conf_ints['observations in interval'] = conf_ints['confidence level'].apply(count_in_range)
conf_ints['% observations in interval'] = \
conf_ints['observations in interval'] / conf_ints['total observations']
conf_ints
|   | confidence level | total observations | observations in interval | % observations in interval |
|---|---|---|---|---|
| 0 | 0.50 | 513 | 290 | 0.565302 |
| 1 | 0.70 | 513 | 393 | 0.766082 |
| 2 | 0.90 | 513 | 479 | 0.933723 |
| 3 | 0.95 | 513 | 495 | 0.964912 |
| 4 | 0.99 | 513 | 509 | 0.992203 |
It seems that, in our case, the confidence intervals produced by the Gaussian Process are, if anything, slightly conservative – the empirical coverage is a bit higher than the nominal confidence level.
2. Kernels in Gaussian Processes¶
You may have noticed that I used all the defaults of the GPy regression module, and you may be curious whether better performance could be achieved by tuning some of the hyperparameters. By far the most important hyperparameter in Gaussian Processes is the kernel function that defines the correlation between two data points (the default is the radial basis function, $R(h) = e^{-\theta h^2}$). GPy has a number of kernels implemented, and even allows defining custom kernels. They have an entire notebook dedicated to exploring kernels, which is definitely worth skimming through.
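To make “the kernel defines the correlation between two data points” a bit more tangible, here is a tiny sketch using GPy’s kernel API. The two coordinates and the hyperparameter values are made up purely for illustration.

#a kernel maps pairs of points to covariances; k.K(X) returns the implied covariance matrix
rbf_kernel = GPy.kern.RBF(input_dim=2, variance=1.0, lengthscale=0.5)
two_locations = np.array([[23.9, 54.9],
                          [25.3, 54.7]])
print(rbf_kernel.K(two_locations))   #2x2 matrix: the closer the points, the higher the covariance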
Let’s see what we can achieve in our interpolation task by using other kernels. Specifically, let’s try out:
- Matern52 kernel (Matern kernels are a generalization of RBF kernels with an additional smoothing parameter)
- A combination of Matern52 and a linear kernel. I chose this particular combination reasoning that there may be a linear temperature trend across latitude/longitude worth trying to capture.
matern_kernel = GPy.kern.Matern52(input_dim=2)
linear_kernel = GPy.kern.Linear(input_dim=2)
models['Matern52 kernel'] = GPy.models.GPRegression(X=train_coords, Y=train_temps, kernel = matern_kernel)
models['Matern52 kernel'].optimize_restarts(num_restarts=1, verbose=False)
models['Matern52 + linear kernel'] = GPy.models.GPRegression(
    X=train_coords, Y=train_temps, kernel=matern_kernel + linear_kernel
)
models['Matern52 + linear kernel'].optimize_restarts(num_restarts=1, verbose=False)
get_errors(models, train_coords, test_coords, train_temps, test_temps)
|   | model | Training MAE | Test MAE |
|---|---|---|---|
| 0 | basic-GP | 0.095522 | 0.122836 |
| 1 | Matern52 kernel | 0.055417 | 0.099285 |
| 2 | Matern52 + linear kernel | 0.044996 | 0.097688 |
And, indeed – the performance is better: switching kernels reduced the test MAE by roughly 20%. So, if there is one thing to remember, it is to test different kernels (and see if you can reason about which kernel combinations are most appropriate for your data).
3. Forecasting with GPs¶
The flexibility of kernel functions is what makes GPs useful for time-series forecasting, too. However, you won’t get far with the default RBF kernel, as its predictions converge to zero (or to the mean, if a bias kernel is added) very quickly once you step outside the domain observed during training. Here’s an illustration using the 1D example from earlier.
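If you want to reproduce that behaviour yourself, a toy 1D example along these lines should do (the sine data is my own stand-in, not the actual data behind the chart):

#toy illustration: an RBF-kernel GP reverts towards the (zero) prior mean outside the training range
x_train = np.linspace(0, 10, 20).reshape(-1, 1)
y_train = np.sin(x_train)
rbf_model = GPy.models.GPRegression(X=x_train, Y=y_train, kernel=GPy.kern.RBF(input_dim=1))
rbf_model.optimize_restarts(num_restarts=1, verbose=False)
x_future = np.linspace(10, 20, 20).reshape(-1, 1)
future_means, future_vars = rbf_model.predict(x_future)
#future_means quickly collapse towards zero while future_vars grow towards the prior variance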
I will illustrate a better way to do it using one of the city temperature forecasts. We will take the first 48 hours of predictions in one of the Lithuanian cities and see if we can generate a reasonable forecast for it.
#get the relevant data for Kaunas forecast
point_forecast = lt_forecasts[lt_forecasts['location'] == "Kaunas"].copy()
point_forecast['forecastTimeUtc'] = pd.to_datetime(point_forecast['forecastTimeUtc'])
#hours since the first forecast timestamp (dividing by a Timedelta avoids version-specific astype quirks)
point_forecast['hours_offset'] = (point_forecast['forecastTimeUtc'] - point_forecast['forecastTimeUtc'].min()) / pd.Timedelta(hours=1)
point_forecast = point_forecast.sort_values(['forecastTimeUtc'])
first_two_days = point_forecast.head(48)
alt.Chart(first_two_days).mark_line().encode(
    x=alt.X("yearmonthdatehours(forecastTimeUtc):O", title="Time"),
    y="airTemperature"
).properties(
    title="48-hour temperature forecast in Kaunas",
    width=600
)
Given the cyclical nature of temperature during a day, let’s use a periodic exponential kernel combined with a linear kernel that should capture the day-over-day trend.
Here’s the initial result.
#get the variables
hours_offset = first_two_days['hours_offset'].values.reshape(-1, 1)
temperature = first_two_days['airTemperature'].values.reshape(-1, 1)
#run the prediction
forecast_kernel = (GPy.kern.PeriodicExponential(input_dim=1) + GPy.kern.Linear(input_dim=1) + GPy.kern.Bias(input_dim=1))
forecast_model = GPy.models.GPRegression(X=hours_offset, Y=temperature, kernel = forecast_kernel, normalizer=True)
forecast_model.optimize_restarts(num_restarts=1, verbose=False)
plot_temp_forecast(forecast_model)
The predictions appear quite reasonable – the GP definitely picked up the cyclical pattern, and there seems to be a linear downward trend, too. But there’s something fishy. We see that the GP (again!) does not perfectly interpolate the existing data. Furthermore, what is up with the variances? They seem to be constant over time, which is weird, too. What is going on?
In this case, the answer is that the GP optimization process is stochastic, and a single attempt did not yield good results. If we rerun the optimization process multiple (say, 20) times, we will see quite different results (see below). This is another takeaway about GPs – in most cases, you may need to re-initialize the fitting process multiple times to get what you are looking for.
At the same time, looking below you may reasonably ask: is this not overfitting? I would tend to agree, actually! How do you go about reducing overfitting if you have that problem? Recall that GPs are defined over a prior of functions – it is possible to provide stronger priors as well as to introduce constraints on some of the fitted parameters (a rough sketch of what that can look like follows the next code cell). I haven’t tried it out myself, but if you are interested, you can find more detail in one of the GPy tutorials.
#rerun optimization 20 times
forecast_model.optimize_restarts(num_restarts=20, verbose=False)
plot_temp_forecast(forecast_model)
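As promised, here is a hedged sketch of constraining and putting priors on kernel parameters before optimization. The specific bounds, the Gamma prior and the variable names are my own illustrative choices (not tuned for this dataset), and parameter names may differ slightly between GPy versions.

#constrain the period to roughly a daily cycle and put a weak prior on the periodic variance
periodic = GPy.kern.PeriodicExponential(input_dim=1)
trend = GPy.kern.Linear(input_dim=1) + GPy.kern.Bias(input_dim=1)
periodic.period.constrain_bounded(20., 28.)                     #we expect a roughly 24-hour cycle
periodic.variance.set_prior(GPy.priors.Gamma.from_EV(1., 10.))  #weakly-informative prior
constrained_model = GPy.models.GPRegression(X=hours_offset, Y=temperature, kernel=periodic + trend, normalizer=True)
constrained_model.optimize_restarts(num_restarts=5, verbose=False)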
GPs and multiple dependent variables 🤯¶
Another cool aspect of Gaussian Processes is that they can model relationships between multiple output variables. I have to admit that the maths behind exactly how this works is beyond me, though I suspect it builds on the fact that the joint distribution of jointly normal variables is itself normal, so the outputs can be modelled together and marginalized as needed – in GPy this is done via “coregionalized” kernels, which we will use below.
Continuing with the weather data examples, here’s an application to consider: suppose that, due to historical reasons, not all stations can measure temperature and wind speed simultaneously. The ones on the east side of the country tend to measure only temperature, and the ones to the west – only wind speed. If we wanted to build models to predict temperature and wind speed at other locations, we could simply build two independent models, using the relevant station data for each. But we may have reason to believe that temperature and wind speed are related, and capturing that relationship in the model may be beneficial. Let’s see if that is the case.
Below is a visual illustration – I’ll be using a subset of the weather stations in this example.
#let's use the 20% and 80% longitude quantiles as the east/west centers
mid_lt = morning_forecast[(morning_forecast.geometry.y > 55) & (morning_forecast.geometry.y < 56)]
east_center, west_center = np.quantile(mid_lt.geometry.x.values, q=[0.2, 0.8])
sd = np.std(morning_forecast.geometry.x.values)
#use two normal distributions to generate probability densities
east_dens = norm.pdf(mid_lt.geometry.x.values, loc=east_center, scale=sd)
west_dens = norm.pdf(mid_lt.geometry.x.values, loc=west_center, scale=sd)
#normalize densities to probabilities
prob_of_east = east_dens / (east_dens + west_dens)
#sample stations
temp_stations = mid_lt.sample(n=40, weights = prob_of_east, random_state=42)
temp_stations['type'] = 'Temperature station'
speed_stations = mid_lt.sample(n=40, weights = (1 - prob_of_east), random_state=42)
speed_stations['type'] = 'Wind speed station'
#organize data into predictors and independent variables
y_variables = ['airTemperature', 'windSpeed']
no_y_variables = len(y_variables)
#prepare lists of coordinates/observations (kept as lists of 2, one entry per output variable)
coords = [
    np.vstack([temp_stations.geometry.x.values, temp_stations.geometry.y.values]).T,
    np.vstack([speed_stations.geometry.x.values, speed_stations.geometry.y.values]).T
]
obs = [
    temp_stations[y_variables[0]].values.reshape(-1, 1),
    speed_stations[y_variables[1]].values.reshape(-1, 1),
]
#held-out measurements: the variable each station group does NOT report (used only for evaluation)
test = [
    temp_stations[y_variables[1]].values.reshape(-1, 1),
    speed_stations[y_variables[0]].values.reshape(-1, 1),
]
To establish a baseline, let’s first train separate GP regressions for the two datasets (temperature-measuring stations and wind-measuring stations) and see how well each model predicts its measurement at the locations where that measurement is not available to it (i.e. how well a model trained on wind-measuring stations fares in predicting wind speed at temperature-measuring locations, and vice versa).
#kernels
K1 = GPy.kern.Bias(2)
K2 = GPy.kern.Linear(2)
K3 = GPy.kern.Matern32(2)
#train independent models
result_bag = []
kernel = K1 + K3 * K2
for i, name in enumerate(y_variables):
    j = 0 if i == 1 else 1
    train_x = coords[i]
    train_y = obs[i]
    test_x = coords[j]
    test_y = test[j]
    singleOutput_model = GPy.models.GPRegression(X=train_x, Y=train_y, kernel=kernel)
    singleOutput_model.optimize_restarts(num_restarts=1, verbose=False)
    preds = singleOutput_model.predict(test_x)
    test_error = mean_absolute_error(test_y, preds[0])
    error_var = np.var(test_y - preds[0])
    result_bag.append((name, 'single-output', test_error, error_var))
pd.DataFrame(result_bag, columns=['Variable', 'Model type', 'Mean absolute error', 'Error variance'])
|   | Variable | Model type | Mean absolute error | Error variance |
|---|---|---|---|---|
| 0 | airTemperature | single-output | 0.958267 | 1.321167 |
| 1 | windSpeed | single-output | 0.539025 | 0.498869 |
We can see that the errors are not too bad, actually. Can a multi-output GP regression do better?
The GPy model type to be used for such scenarios is GPCoregionalizedRegression, and it has a somewhat different API than most other GPy models (you need to pass the inputs as a list with one entry per output variable, and predictions require an extra index column in the input array indicating which output to predict). I based the code below on the dedicated GPy tutorial for coregionalized regressions.
#Define a multi-output kernel
icm = GPy.util.multioutput.LCM(input_dim=2, num_outputs=2, kernels_list=[K1, K2, K3])
#Setup the model and optimize
multiOutput_model = GPy.models.GPCoregionalizedRegression(X_list=coords, Y_list=obs, kernel = icm)
multiOutput_model.optimize_restarts(num_restarts=5, verbose=False)
#Setup the array used for predictions; it uses an extra column to indicate output index to predict
inds = np.ones((coords[0].shape[0], 1))
predSpace = np.vstack([np.hstack([coords[0 if i == 1 else 1], inds * i]) for i in range(no_y_variables)])
noise_dict = {'output_index': predSpace[:, 2].astype(int)}
#get all predictions and evaluate
obs_preds = multiOutput_model.predict(predSpace, Y_metadata=noise_dict)
result_bag = []
for i, name in enumerate(y_variables):
    j = 0 if i == 1 else 1
    test_y = test[j]
    pred_y = obs_preds[0][predSpace[:, 2] == i]
    test_error = mean_absolute_error(test_y, pred_y)
    error_var = np.var(test_y - pred_y)
    result_bag.append((name, 'multi-output', test_error, error_var))
pd.DataFrame(result_bag, columns=['Variable', 'Model type', 'Mean absolute error', 'Error variance'])
|   | Variable | Model type | Mean absolute error | Error variance |
|---|---|---|---|---|
| 0 | airTemperature | multi-output | 0.868705 | 0.935795 |
| 1 | windSpeed | multi-output | 0.559070 | 0.537014 |
The results for this dataset are… interesting. For temperature, the multi-output model performs clearly better, but its performance is actually slightly worse for wind speed. The thing about wind speeds, however, is that the forecasts are only reported as integers, so these results may simply be an artifact of the underlying data and shouldn’t necessarily be interpreted as a “worse fit”.
And, yes, I know – one really shouldn’t fit a continuous function when the output variable is count-like… But hey, you know what – it is possible to model Poisson-like variables with GPs, too! Maybe another time, though a bare-bones sketch of the ingredients is below.
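A hedged, untested sketch of how that could look in GPy (the kernel choice is arbitrary, and the variable names are mine): GPy supports non-Gaussian likelihoods through approximate inference.

#model counts with a Poisson likelihood; the posterior is no longer Gaussian,
#so GPy needs an approximate inference method such as the Laplace approximation
poisson_likelihood = GPy.likelihoods.Poisson()
laplace_inference = GPy.inference.latent_function_inference.Laplace()
count_model = GPy.core.GP(
    X=coords[1],                              #wind-speed station coordinates
    Y=obs[1],                                 #integer-valued wind speeds
    kernel=GPy.kern.Matern32(input_dim=2),
    likelihood=poisson_likelihood,
    inference_method=laplace_inference
)
count_model.optimize()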
GPs as surrogate functions and Bayesian optimization¶
So far, I have explored use cases where the GP is the main model we fit. The flexibility of GPs, however, makes them very popular candidates for use as surrogate functions, too.
Imagine you have a model that takes a long time to run – perhaps it is something as simple as an XGBoost classifier on a relatively large dataset, or maybe it is a complex physics simulation of the universe that takes a day to run on a supercomputer cluster. In both cases, you can only make a few runs of them with different (hyper-) parameters, and you would like to find the optimal setting of those hyperparameters (e.g. tune a learning rate for XGBoost) or use the limited outputs to interpolate in the rest of input space.
In such scenarios, you can do the following:
- Run the expensive model a few times with carefully spaced out parameter values and obtain initial observations of the dependent variable (or a loss function, in case of an ML model)
- Then, fit a Gaussian Process over the obtained data points and treat it as an approximate (surrogate) function of the underlying model
- Use GP predictions to infer expected values of the true underlying model at unknown locations
- If optimization is of interest (e.g. finding XGBoost hyperparameters that lead to the best model performance), use the uncertainty of the predictions (variance) to determine the next combination of parameters that is most promising to evaluate the true underlying model on.
- Rinse and repeat the steps above iteratively.
The iterative process of looking for an optimal value is also known as the “exploration vs. exploitation” trade-off. You either explore (using the variance information to pick locations with the highest uncertainty) or exploit (evaluating the model at locations close to the best currently known point). There are multiple ways to determine the next point for evaluation (i.e. how to make the exploration vs. exploitation trade-off). One of them is the Expected Improvement maximization approach. I won’t delve into it in detail here – I highly recommend a post by Martin Krasser on Bayesian Optimization if you are curious.
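For a flavour of what Expected Improvement computes, here is the standard closed-form expression in code. This is the textbook formula, not the BayesianOptimization package’s internal implementation, and the function name is mine.

#Expected Improvement for maximization: given a GP prediction N(mean, std^2) at a candidate
#point and the best value observed so far, how much improvement do we expect on average?
def expected_improvement(mean, std, best_so_far, xi=0.0):
    improvement = mean - best_so_far - xi   #xi nudges the trade-off towards exploration
    z = improvement / std
    return improvement * norm.cdf(z) + std * norm.pdf(z)
#the candidate with the highest EI is the one we evaluate the expensive model on next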
In this notebook, I won’t fit XGBoost models. Instead, I will illustrate the concept with a simple task. Let’s pretend our task is to find the coldest place in Lithuania as measured by wind chill factor (which happens to be a village called Kirmėliai, with a wind chill factor of -11.58). Here’s how the chill factor is distributed geographically; you can see the surface is not convex, so gradient-based approaches are unlikely to get us to the right place.
Instead, let’s use Bayesian optimization with Gaussian Processes. There are a few Python packages that support Bayesian optimization, most notably hyperopt and pyGPGO. Hyperopt, however, uses a tree of Parzen estimators (TPE) as the surrogate function. For this illustration, I chose to go with the aptly named BayesianOptimization package, which uses Gaussian Processes and supports the Expected Improvement criterion for the iterative choices.
Let’s start by randomly choosing 4 points in Lithuania and evaluating the chill factor there (we’ll use the nearest station’s measurement for any given point). Then, let’s explore an additional 15 locations iteratively, where each subsequent location is chosen by the Expected Improvement criterion using a Gaussian Process fit on all observations collected so far.
from bayes_opt import BayesianOptimization
#set the bounds to be the bounding box of Lithuania
#(bounds and the get_chill_factor objective are defined in cells omitted from the post)
param_bounds = {
    'lat': (bounds[0], bounds[2]),
    'lon': (bounds[1], bounds[3])
}
#setup the optimizer
optimizer = BayesianOptimization(f=get_chill_factor, pbounds=param_bounds, random_state=42, verbose=False)
#let's go exploring!
optimizer.maximize(init_points=4, n_iter=15, acq = 'ei', xi=0)
What are the results? Turns out we were able to find the coldest place!
explorations = gpd.GeoDataFrame([{'target': d['target'],
                                  'lat': d['params']['lat'],
                                  'lon': d['params']['lon']} for d in optimizer.res])
#convert the lat/lon columns into a geometry column
points = [Point(lon, lat) for i, lat, lon in explorations[['lon', 'lat']].itertuples()]
explorations = explorations.assign(**{'geometry': gpd.GeoSeries(points).set_crs('EPSG:4326')})
explorations = explorations.drop(['lat', 'lon'], axis=1)
print("Lowest chill factor found: {:.5f}".format(-explorations['target'].max()))
Lowest chill factor found: -11.58668
To get some intuition for what happened, here’s a graphical illustration of the algorithm’s path. You can see how the trade-off between exploring unknown areas and exploiting the information collected so far shifts as more and more data points are gathered. Much better than any grid search!
The End¶
If you got here – thank you for reading and I hope this journey was interesting!
To be fair, this is just scratching the surface of what one can do with Gaussian Processes, and there are all kinds of crazy interesting things out there (for example, deep Gaussian Processes, where GPs are “stacked” on top of each other, eliminating the need to manually specify kernels…). Most of them are beyond my ability to comprehend, but perhaps they won’t be for you! At the same time, having a basic understanding of GPs and their value as flexible fitting functions that provide you with confidence estimates already gets you pretty far. And they are used in industry, too! Here’s an example from Lyft where they use Gaussian Processes and Bayesian optimization to run experiments.
On a final note, if you happen to be Lithuanian, here’s an interesting fact: it turns out that a Lithuanian mathematician, Jonas Mockus, was a very influential figure in the field of Bayesian optimization. I was pleasantly surprised to see a familiar name while researching for this post!