In this post I will be looking at building an autoencoder to compress the MNIST dataset.
See part 1 here.
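As a preview of the approach, a small dense autoencoder for flattened 28×28 MNIST images might look roughly like this in PyTorch (the layer sizes and 32-dim code are illustrative choices, and random tensors stand in for real MNIST data):

```python
import torch
from torch import nn

# Minimal dense autoencoder: 784 pixels -> 32-dim code -> 784 pixels.
class AutoEncoder(nn.Module):
    def __init__(self, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, 784), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
batch = torch.rand(16, 784)   # stand-in for a batch of flattened MNIST images
recon = model(batch)          # reconstructions, same shape as the input
```

The Sigmoid on the output keeps reconstructions in [0, 1], matching normalised pixel intensities.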
In this post I will be briefly looking at PCA as a means to compress images.
Images are a matrix of pixels, where each value corresponds to some brightness.
Image compression typically involves representing those pixels in fewer dimensions than the original.
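As a sketch of the idea, a rank-k approximation via the SVD (the decomposition underlying PCA) keeps only the k largest singular components of the pixel matrix; a random matrix stands in for a real image here:

```python
import numpy as np

def compress_image(img, k):
    # Rank-k approximation via SVD: keep only the top-k singular
    # values and their corresponding singular vectors.
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
img = rng.random((28, 28))        # stand-in for a 28x28 greyscale image
approx = compress_image(img, 10)  # same shape, but only rank 10
```

Storing `U[:, :k]`, `s[:k]` and `Vt[:k, :]` needs k(2n + 1) numbers instead of n², which is the compression.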
I’ve built a new page for forecasting indoor humidity:
For the next 100 days, we will have 1 hour to play video games.
We have 20 games, but have no idea which one we will enjoy the most.
How do we decide what to play each day?
We assume that the enjoyment we get from a single hour is random and comes from a beta distribution.
Each game has a different distribution.
Each hour we play of a game gives us an enjoyment value and helps build our knowledge of that game.
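As a sketch of this setup, here is a small epsilon-greedy simulation: play every game once, then usually pick the game with the best average enjoyment so far, occasionally exploring at random. The Beta parameters and the epsilon value are made up for illustration, and the post itself may use a different strategy:

```python
import numpy as np

rng = np.random.default_rng(42)
n_games, n_days, eps = 20, 100, 0.1

# Hidden enjoyment distributions: game i yields Beta(a[i], b[i]) draws.
a = rng.uniform(0.5, 5.0, n_games)
b = rng.uniform(0.5, 5.0, n_games)

totals = np.zeros(n_games)  # cumulative enjoyment per game
counts = np.zeros(n_games)  # hours played per game

for day in range(n_days):
    if day < n_games:                 # play each game once first
        choice = day
    elif rng.random() < eps:          # explore: random game
        choice = int(rng.integers(n_games))
    else:                             # exploit: best average so far
        choice = int(np.argmax(totals / counts))
    enjoyment = rng.beta(a[choice], b[choice])  # one hour of play
    totals[choice] += enjoyment
    counts[choice] += 1
```

Each simulated hour both yields enjoyment and sharpens the running estimate for that game, which is exactly the explore/exploit tension described above.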
Short article - I was thinking you could apply convolution to implement the Game of Life update logic.
So I quickly built a class to do just that:
Dask is commonly used for parallel data processing.
However, I wanted to quickly explore using Dask to run generic Python functions in parallel.
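A minimal sketch of what that looks like with `dask.delayed`, which wraps an ordinary function into a lazy task; `slow_square` is a made-up stand-in for real work:

```python
import time
import dask

@dask.delayed
def slow_square(x):
    time.sleep(0.1)  # stand-in for an expensive computation
    return x * x

# Calling the decorated function builds a lazy task graph;
# dask.compute executes all the tasks, potentially in parallel.
results = dask.compute(*[slow_square(i) for i in range(8)])
```

With the default threaded scheduler the eight 0.1 s tasks can overlap rather than running for 0.8 s back to back.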
This post will explore building elastic net models using the PyTorch library.
I will compare various scenarios with the implementations in scikit-learn to validate them.
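As a rough sketch of the approach (not the post's actual implementation), an elastic net penalty can be added to a plain least-squares loss and minimised with PyTorch's autograd; the data and hyperparameters below are invented, and the penalty follows scikit-learn's `alpha`/`l1_ratio` parameterisation:

```python
import torch

torch.manual_seed(0)
X = torch.randn(200, 5)
true_w = torch.tensor([1.5, 0.0, -2.0, 0.0, 0.5])
y = X @ true_w + 0.1 * torch.randn(200)

w = torch.zeros(5, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
opt = torch.optim.SGD([w, b], lr=0.05)

alpha, l1_ratio = 0.1, 0.5  # sklearn-style elastic net parameters
for _ in range(500):
    opt.zero_grad()
    mse = ((X @ w + b - y) ** 2).mean()
    # Elastic net: L1 term encourages sparsity, L2 term shrinks.
    penalty = alpha * (l1_ratio * w.abs().sum()
                       + 0.5 * (1 - l1_ratio) * (w ** 2).sum())
    loss = mse + penalty
    loss.backward()
    opt.step()
```

Matching this loss term-by-term against scikit-learn's `ElasticNet` objective is what makes the coefficient comparison meaningful.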
This post follows on from my look at Bayesian Linear Regression.
Here we look at how well that method tracks non-stationary problems, where the regression coefficients vary with time.
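One common way to let a conjugate Gaussian update track drifting coefficients is to discount old evidence with a forgetting factor; a minimal sketch, where the factor 0.98 and the simulated coefficient switch are made up for illustration:

```python
import numpy as np

def forgetful_update(P, Pm, x, y, lam=0.98, noise_var=1.0):
    # Discount accumulated precision by lam before adding new evidence,
    # so recent observations dominate and the posterior can drift.
    P = lam * P + np.outer(x, x) / noise_var
    Pm = lam * Pm + x * y / noise_var
    return P, Pm

rng = np.random.default_rng(0)
P, Pm = np.eye(1) * 0.1, np.zeros(1)
w = 1.0
for t in range(500):
    if t == 250:
        w = -1.0                      # true coefficient switches mid-stream
    x = rng.normal(size=1)
    y = w * x[0] + 0.1 * rng.normal()
    P, Pm = forgetful_update(P, Pm, x, y)
```

With `lam=0.98` the effective memory is roughly 1/(1-lam) = 50 observations, so after the switch the posterior mean `solve(P, Pm)` settles near the new coefficient instead of averaging both regimes.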
In this post I talk about reformulating linear regression in a Bayesian framework.
This gives us the notion of epistemic uncertainty which allows us to generate probabilistic model predictions.
I formulate a model class which can perform linear regression via Bayes rule updates.
I show that the results match those from the statsmodels library.
I will also show some of the benefits of the sequential Bayesian approach.
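A minimal sketch of what such a class might look like, assuming a zero-mean Gaussian prior and known noise variance (the class, method, and attribute names here are my own):

```python
import numpy as np

class BayesianLinReg:
    """Sequential Bayesian linear regression with a Gaussian prior
    and known noise variance (standard conjugate updates)."""

    def __init__(self, n_features, prior_var=10.0, noise_var=1.0):
        self.P = np.eye(n_features) / prior_var  # posterior precision
        self.Pm = np.zeros(n_features)           # precision @ posterior mean
        self.noise_var = noise_var

    def update(self, X, y):
        # Bayes rule in natural parameters: add the data's precision
        # contribution to the prior/posterior so far.
        self.P += X.T @ X / self.noise_var
        self.Pm += X.T @ y / self.noise_var

    @property
    def mean(self):
        return np.linalg.solve(self.P, self.Pm)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=100)

model = BayesianLinReg(2)
for i in range(0, 100, 10):      # feed the data in sequential batches
    model.update(X[i:i+10], y[i:i+10])
```

Because the update is additive in the precision parameters, feeding the data in ten batches gives exactly the same posterior as one batch of all 100 rows, which is the sequential benefit in miniature.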
Kedro is a Python data science library that helps with:
creating reproducible, maintainable and modular data science code