We compare the performance of adding a batch normalisation layer to a convolutional neural network (CNN). For this we use the results from a previous post on creating a CNN for the fashion MNIST data:
In this post we will build a Convolutional Neural Network (CNN) in order to classify images from the fashion MNIST dataset. CNNs are commonly used with image data to efficiently exploit spatial relationships between nearby pixels.
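As a rough illustration of the comparison, a minimal sketch of a CNN block with and without a batch normalisation layer (PyTorch here purely for illustration; the layer sizes and the `make_cnn` helper are assumptions, not the post's actual model):

```python
import torch
import torch.nn as nn

def make_cnn(batch_norm: bool) -> nn.Sequential:
    # A single conv block; fashion MNIST images are 1x28x28.
    layers = [nn.Conv2d(1, 16, kernel_size=3, padding=1)]
    if batch_norm:
        # Normalise each channel over the batch before the activation.
        layers.append(nn.BatchNorm2d(16))
    layers += [nn.ReLU(), nn.Flatten(), nn.Linear(16 * 28 * 28, 10)]
    return nn.Sequential(*layers)

x = torch.randn(8, 1, 28, 28)  # a fake batch of 28x28 grayscale images
logits = make_cnn(batch_norm=True)(x)
```

The two variants can then be trained identically and their validation curves compared.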
Here we explore the theoretical coefficient distributions from a linear regression model. When fitting a regression model we get estimates for the standard deviation of the coefficients. We use bootstrapping to get an empirical distribution of the regression coefficients to compare against those theoretical distributions.
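The bootstrap side of that comparison can be sketched as follows (numpy only; the data, sample sizes and `fit_slope` helper are illustrative assumptions, not the post's actual setup):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=n)

def fit_slope(x, y):
    # Ordinary least squares slope via least squares on [1, x].
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta[1]

# Resample (x, y) pairs with replacement and refit each time.
slopes = np.array([
    fit_slope(x[idx], y[idx])
    for idx in (rng.integers(0, n, size=n) for _ in range(1000))
])

boot_mean, boot_se = slopes.mean(), slopes.std(ddof=1)
```

The bootstrap standard error `boot_se` is what we compare against the standard deviation reported by the fitted regression model.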
Parquet files are a columnar data format we can use to store dataframes. They can be stored in partitions, which allows us to load only a subset of the data. This is useful if we are filtering the data, as we can do that without loading it all into memory.
This follows on from the previous post on fitting a Gaussian distribution with Pyro:
Fitting a Distribution with Pyro
In this simple example we will fit a Gaussian distribution to random data drawn from a Gaussian with some known mean and standard deviation.
We want to estimate a distribution that best fits the data using variational inference with Pyro.
The aim of this project is to plot interactive scores of NBA games over the course of each match:
Here we will look through TfL cycle data and explore how the number of journeys varies.
Having recently seen them again at the Hammersmith Apollo I took some time to write out the bass part for Lingus.
Hello to everyone from Brazil!