Master Unsupervised Learning with k-Means Clustering

What will we cover?

In this lesson we will learn about Unsupervised Learning.

  • Understand how Unsupervised Learning differs from Supervised Learning
  • See how it can organize data into groups without labeled answers
  • Understand how k-Means Clustering works
  • Train a k-Means Clustering model

Step 1: What is Unsupervised Learning?

Machine Learning is often divided into 3 main categories.

  • Supervised: you tell the algorithm which category each data item belongs to. Every item in the training set is tagged with the right answer.
  • Unsupervised: the algorithm is given untagged data and has to find the structure by itself.
  • Reinforcement: the machine learns by itself from the rewards of past actions.

Unsupervised Learning is thus one of the main groups.

Unsupervised learning is a type of algorithm that learns patterns from untagged data. The hope is that through mimicry, which is an important mode of learning in people, the machine is forced to build a compact internal representation of its world and then generate imaginative content from it. In contrast to supervised learning where data is tagged by an expert, e.g. as a “ball” or “fish”, unsupervised methods exhibit self-organization that captures patterns as probability densities…

https://en.wikipedia.org/wiki/Unsupervised_learning
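
To make the difference concrete, here is a minimal scikit-learn sketch (the tiny dataset and the choice of models are just for illustration): a supervised model is fitted with both the data items and their tags, while an unsupervised model only gets the data items.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = np.array([[1.0, 2.0], [1.5, 1.8], [8.0, 8.0], [8.5, 7.5]])  # data items
y = np.array([0, 0, 1, 1])                                      # tags (the "right answers")

# Supervised: the algorithm is told the right answer for each data item
LogisticRegression().fit(X, y)

# Unsupervised: the algorithm only gets the data and must find the structure itself
KMeans(n_clusters=2, n_init=10, random_state=42).fit(X)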

Step 2: k-Means Clustering

What is clustering?

Organize a set of objects into groups in such a way that similar objects tend to be in the same group.

What is k-Means Clustering?

An algorithm that clusters data by repeatedly assigning points to clusters and updating the cluster centers.

Example of how it works, step by step:
  • First we choose random cluster centroids (hollow points), then assign each point to the nearest centroid.
  • Then we update each centroid to be the mean of the points assigned to it.
  • Repeat.

This can be repeated a fixed number of times or until the centroids only move by a small amount. A minimal sketch of the algorithm is shown below.
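
To make those steps concrete, here is a bare-bones sketch of the assign-and-update loop written with NumPy (the function and variable names are my own, and it simply runs a fixed number of iterations):

import numpy as np

def kmeans(points, k, n_iter=10, seed=42):
    rng = np.random.default_rng(seed)
    # Start with k randomly chosen data points as the initial centroids
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: label each point with the index of its nearest centroid
        distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Update step: move each centroid to the mean of its assigned points
        # (assumes no cluster ends up empty)
        centroids = np.array([points[labels == i].mean(axis=0) for i in range(k)])
    return labels, centroids

With the random data generated in Step 3 below, kmeans(data, 4) should end up with roughly the same four centers as scikit-learn's KMeans.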

Step 3: Create an Example

Let’s create some random data to demonstrate it.

import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

# Generate 400 random 2D points and shift them into 4 clusters
data = np.random.randn(400, 2)
data[:100] += 5, 5       # cluster around (5, 5)
data[100:200] += 10, 10  # cluster around (10, 10)
data[200:300] += 10, 5   # cluster around (10, 5)
data[300:] += 5, 10      # cluster around (5, 10)

fig, ax = plt.subplots()

ax.scatter(x=data[:,0], y=data[:,1])
plt.show()

This shows some random data in 4 clusters.

Then the following code demonstrates how it works. You can change max_iter to control the number of iterations – try it with 1, 2, 3, etc. (a small loop that does this is sketched below the plots).

# Cluster the data into 4 groups, starting from random centroids
model = KMeans(n_clusters=4, init='random', random_state=42, max_iter=10, n_init=1)

model.fit(data)

y_pred = model.predict(data)

# Plot the points colored by their predicted cluster and the cluster centers in red
fig, ax = plt.subplots()
ax.scatter(x=data[:,0], y=data[:,1], c=y_pred)
ax.scatter(x=model.cluster_centers_[:,0], y=model.cluster_centers_[:,1], c='r')
plt.show()
After the 1st iteration the cluster centers are not yet optimal.
After 10 iterations the centers are all in place.
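
If you want to watch the convergence without editing max_iter by hand, here is a small sketch (just my own loop around the code above) that refits the model with an increasing iteration budget and plots each stage:

# Refit with a growing number of iterations to watch the centroids converge
for n in (1, 2, 3, 10):
    model = KMeans(n_clusters=4, init='random', random_state=42, max_iter=n, n_init=1)
    y_pred = model.fit_predict(data)

    fig, ax = plt.subplots()
    ax.scatter(x=data[:,0], y=data[:,1], c=y_pred)
    ax.scatter(x=model.cluster_centers_[:,0], y=model.cluster_centers_[:,1], c='r')
    ax.set_title(f'max_iter={n}')
    plt.show()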

Want to learn more?

This is part of a FREE 10h Machine Learning course with Python.

  • 15 video lessons – which explain Machine Learning concepts, demonstrate models on real data, introduce projects and show a solution (YouTube playlist).
  • 30 Jupyter Notebooks – with the full code and explanations from the lectures and projects (GitHub).
  • 15 projects – with step-by-step guides to help you structure your solutions, and the solutions explained at the end of the video lessons (GitHub).
