Lesson 10: Generative Adversarial Networks

In previous lessons we have seen different types of neural networks, all of which solve classification problems. We started with plain vanilla neural networks, which take a vector as input and pass it through some hidden layers to produce...
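As a rough sketch of what that looks like in code, here is a minimal feedforward pass written with NumPy: an input vector goes through one hidden layer and an output layer to produce class probabilities. The layer sizes, activation choices, and the use of NumPy are assumptions made here for illustration, not the lesson's actual implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)      # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, params):
    """Pass an input vector through one hidden layer to get class probabilities."""
    W1, b1, W2, b2 = params
    h = relu(x @ W1 + b1)          # hidden layer
    return softmax(h @ W2 + b2)    # output layer: one probability per class

# Toy example: a 4-dimensional input, 8 hidden units, 3 classes.
rng = np.random.default_rng(0)
params = (rng.normal(size=(4, 8)), np.zeros(8),
          rng.normal(size=(8, 3)), np.zeros(3))
x = rng.normal(size=4)
print(forward(x, params))          # three probabilities that sum to 1
```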

k-Nearest Neighbors

k-Nearest Neighbors (k-NN) is one of the simplest machine learning algorithms. It is commonly used for regression and classification problems, both of which are types of supervised learning. In regression, the output is continuous (e.g. numerical values, such...
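As a quick illustration of the idea (not the lesson's own code), here is a minimal NumPy sketch of k-NN that covers both settings: it averages the k nearest labels for regression and takes a majority vote for classification.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3, regression=False):
    """Predict the label of x from its k nearest training points (Euclidean distance)."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dists)[:k]]
    if regression:
        return nearest.mean()                        # continuous output: average the neighbors
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]                 # classification: majority vote

# Toy example: two clusters of 2-D points labelled 0 and 1.
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.95, 0.9]), k=3))  # -> 1
```

Note that the only tunable parameter here is k; the whole training set is kept around and consulted at prediction time, which is why k-NN is often described as a lazy learner.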

Naive Bayes

Table of Contents: Introduction, Bayes’ Theorem, Naive Bayes Classifier, Additive Smoothing, Example: Text Classification, Conclusion.

In a classification problem, we are interested in assigning a discrete class to a sample based on certain features or attributes. Naive Bayes Classifiers...
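To give a concrete flavour of the topics listed above, here is a minimal sketch of a multinomial Naive Bayes classifier with additive (Laplace) smoothing on a toy text-classification task. The toy documents, labels, and the alpha value are illustrative assumptions, not anything taken from the lesson itself.

```python
import numpy as np
from collections import Counter

def train_nb(docs, labels, alpha=1.0):
    """Fit a multinomial Naive Bayes model with additive (Laplace) smoothing."""
    vocab = sorted({w for doc in docs for w in doc})
    classes = sorted(set(labels))
    priors, likelihoods = {}, {}
    for c in classes:
        class_docs = [doc for doc, y in zip(docs, labels) if y == c]
        counts = Counter(w for doc in class_docs for w in doc)
        total = sum(counts.values())
        priors[c] = np.log(len(class_docs) / len(docs))
        # P(word | class): alpha is added to every count so unseen words keep nonzero probability
        likelihoods[c] = {w: np.log((counts[w] + alpha) / (total + alpha * len(vocab)))
                          for w in vocab}
    return priors, likelihoods, vocab

def predict_nb(doc, priors, likelihoods, vocab):
    """Pick the class with the highest log posterior, ignoring out-of-vocabulary words."""
    scores = {c: priors[c] + sum(likelihoods[c][w] for w in doc if w in vocab)
              for c in priors}
    return max(scores, key=scores.get)

docs = [["free", "prize", "now"], ["meeting", "schedule"],
        ["free", "offer"], ["project", "schedule"]]
labels = ["spam", "ham", "spam", "ham"]
model = train_nb(docs, labels)
print(predict_nb(["free", "schedule", "prize"], *model))   # -> "spam"
```

Working in log space keeps the products of many small probabilities from underflowing, which is the usual trick for Naive Bayes on text.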

Backprop By Hand

After going through our workshop on neural networks and getting some feedback from our members, it became evident that some additional theory review would be valuable to help the concepts behind neural networks really sink in. In order to...

Environment Setup

Before getting into the actual programming lessons, we first need to set up our coding environment. Python is the most popular tool for machine learning, and it is what we will be using throughout this curriculum. Now you might be...
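As a small sanity check once everything is installed, a snippet along these lines can confirm the Python version and that the packages we rely on import cleanly; the exact package list below is an assumption, not a requirement taken from the lesson.

```python
import importlib
import sys

# The package list is an assumption; swap in whatever the curriculum actually uses.
REQUIRED = ["numpy", "matplotlib"]

print(f"Python {sys.version.split()[0]}")
for name in REQUIRED:
    try:
        module = importlib.import_module(name)
        print(f"{name} {getattr(module, '__version__', 'installed')}")
    except ImportError:
        print(f"{name} is missing - install it with: pip install {name}")
```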