Supervised learning (SL) is the machine learning task of learning a function that maps an input to an output based on example input-output pairs (wikipedia.org).
Said differently: if you have some items you need to classify, say books you want to put into categories such as fiction and non-fiction, and you are given a pile of books that already have the right categories assigned, how can you build a function (the machine learning model) that guesses the right category for other books without labels?
Supervised learning simply means that in the learning phase, the algorithm (the one creating the model) is given examples with correct labels.
Notice that supervised learning is not restricted to classification problems; it can be used to predict any kind of output.
If you are new to Machine Learning, I advise you to start with this tutorial.
The classification problem is the supervised learning task of finding a function that maps an input point to a discrete category.
There is binary classification and multiclass classification: binary classification maps into two classes, while multiclass classification maps into three or more classes.
I find it easiest to understand with examples.
Assume we want to predict whether it will rain or not tomorrow. This is a binary classification problem, because we map into two classes: rain or no rain.
To train the model we need already labelled historical data.
Hence, the task is: given rows of historical data with correct labels, train a machine learning model (a Linear Classifier in this case) on this data. After that, see how well it can predict future data (without the right class label).
Some like the math behind an algorithm. If you are not one of them, focus on the visual part – it will give you the understanding you need.
Mathematically, the task of Supervised Learning with the example data above is to find a function f(humidity, pressure) that predicts rain or no rain.
Examples
The goal of Supervised Learning is to approximate the function f – the approximation function is often denoted h.
Why not identify f precisely? Well, because that is not ideal: it would be an overfitted function that predicts the historical data with 100% accuracy but fails to predict future values well.
As we work with Linear Classifiers, we want the function to be linear.
That is, we want the approximation function h to be of the form

h(humidity, pressure) = w_0 + w_1 × humidity + w_2 × pressure

where we predict rain if h(humidity, pressure) ≥ 0 and no rain otherwise.
Hence, the goal is to optimize the values w_0, w_1, and w_2 to find the best classifier.
What does all this math mean?
Well, it means the classifier makes decisions based on the value of a linear combination of the characteristics (features).
The above diagram shows how it would classify rain or no rain with a line. On the left side is the historical data with labels, and the line shows the optimized boundary found by the machine learning algorithm.
On the right side, we have a new input point (without a label); with this line, it would be classified as rain (assuming blue means rain).
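To make that concrete, here is a minimal sketch of such a decision function in plain Python. The weight values are made up purely for illustration; real values come from training.

# Made-up weights for illustration only
w0, w1, w2 = 60.0, 0.6, -0.11

def h(humidity, pressure):
    # The linear combination of the features plus a bias term
    return w0 + w1 * humidity + w2 * pressure

def predict(humidity, pressure):
    # Classify as rain if the linear combination is non-negative
    return 'rain' if h(humidity, pressure) >= 0 else 'no rain'

print(predict(95, 1000.0))  # high humidity, low pressure -> 'rain'
print(predict(30, 1020.0))  # low humidity, high pressure -> 'no rain'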
The Perceptron Classifier is a linear algorithm that can be applied to binary classification.
It learns iteratively by adding new knowledge to an already existing line.
The learning rate is given by alpha, and the learning rule is as follows (don’t worry if you don’t understand it – it is not important).

w_i ← w_i + α × (actual value − predicted value) × x_i

The rule can also be stated as follows: if the prediction is correct, the weights stay unchanged; if the model predicts no rain but the actual label is rain, each weight is increased by alpha times the corresponding feature value; in the opposite case, each weight is decreased by the same amount.
Said in words, it adjusts the weights according to the actual values. Every time a new value comes in, it adjusts the weights to fit the data better.
Once the line has been adjusted to all the training data, it is ready to predict.
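As a side note, here is a minimal NumPy sketch of that update rule. This is a simplified perceptron written for illustration, not scikit-learn’s implementation.

import numpy as np

def perceptron_train(X, y, alpha=0.1, epochs=10):
    # weights[0] is the bias term, the rest match the features
    weights = np.zeros(X.shape[1] + 1)
    for _ in range(epochs):
        for features, label in zip(X, y):
            x = np.concatenate(([1.0], features))  # prepend 1 for the bias
            prediction = 1 if weights @ x >= 0 else 0
            # The learning rule: adjust weights by the prediction error
            weights += alpha * (label - prediction) * x
    return weights

For example, perceptron_train(np.array([[95, 1000], [30, 1020]]), np.array([1, 0])) returns the fitted weights for two made-up rows of (humidity, pressure) data.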
Let’s try this on real data.
You can get all the code in a Jupyter Notebook with the csv file here.
This can be downloaded from GitHub as a zip file by clicking here.
First let’s just import all the libraries used.
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.linear_model import Perceptron
import matplotlib.pyplot as plt
Notice that the Notebook has an added line, %matplotlib inline, which you should keep if you run the code in a Notebook. The code here is aligned with PyCharm or a similar IDE.
Then let’s read the data.
data = pd.read_csv('files/weather.csv', parse_dates=True, index_col=0)
print(data.head())
If you want to read the data directly from GitHub and not download the weather.csv file, you can do that as follows.
data = pd.read_csv('https://raw.githubusercontent.com/LearnPythonWithRune/MachineLearningWithPython/main/files/weather.csv', parse_dates=True, index_col=0)
print(data.head())
This will result in an output similar to this.
MinTemp MaxTemp Rainfall ... RainToday RISK_MM RainTomorrow
Date ...
2008-02-01 19.5 22.4 15.6 ... Yes 6.0 Yes
2008-02-02 19.5 25.6 6.0 ... Yes 6.6 Yes
2008-02-03 21.6 24.5 6.6 ... Yes 18.8 Yes
2008-02-04 20.2 22.8 18.8 ... Yes 77.4 Yes
2008-02-05 19.7 25.7 77.4 ... Yes 1.6 Yes
We want to investigate the data and figure out how much missing data there is.
A great way to do that is to use isnull().
print(data.isnull().sum())
This results in the following output.
MinTemp 3
MaxTemp 2
Rainfall 6
Evaporation 51
Sunshine 16
WindGustDir 1036
WindGustSpeed 1036
WindDir9am 56
WindDir3pm 33
WindSpeed9am 26
WindSpeed3pm 25
Humidity9am 14
Humidity3pm 13
Pressure9am 20
Pressure3pm 19
Cloud9am 566
Cloud3pm 561
Temp9am 4
Temp3pm 4
RainToday 6
RISK_MM 0
RainTomorrow 0
dtype: int64
This shows how many rows in each column have null values (missing values). We want to work with only two features (columns) to keep our classification simple. Obviously, we need to keep RainTomorrow, as that column carries the class label.
We select the features we want and drop the rows with null-values as follows.
dataset = data[['Humidity3pm', 'Pressure3pm', 'RainTomorrow']].dropna()
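If you want to confirm that no missing values remain after the selection, a quick check could look like this.

print(dataset.isnull().sum())
print(len(dataset))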
The next step is to split the dataset into features and labels.
But we also want to convert the labels from No and Yes to numeric values.
X = dataset[['Humidity3pm', 'Pressure3pm']]
y = dataset['RainTomorrow']
y = np.array([0 if value == 'No' else 1 for value in y])
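As a side note, an equivalent and slightly more compact way to encode the labels is a boolean comparison cast to integers (same result, just an alternative).

y = (dataset['RainTomorrow'] == 'Yes').astype(int).to_numpy()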
Then we do the splitting as follows, where we set a random_state in order to be able to reproduce the split. This is often a great idea: if you use randomness and encounter a problem, you can reproduce it.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
This has divided the features into a train and test set (X_train, X_test), and the labels into a train and test (y_train, y_test) dataset.
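If you want to verify the split, you can inspect the shapes; by default train_test_split keeps 75% of the rows for training and 25% for testing.

print(X_train.shape, X_test.shape)
print(y_train.shape, y_test.shape)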
Finally we want to create the model, fit it (train it), predict on the test data, and print the accuracy score.
clf = Perceptron(random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(accuracy_score(y_test, y_pred))
This gives an accuracy of 0.773, or 77.3%.
Is that good?
Well, what if it rains only 22.7% of the time and the model always predicts no rain?
Then it would be correct 77.3% of the time.
Let’s just check for that.

print(sum(y == 0)/len(y))

It turns out it is not raining 74.1% of the time, so a model that always predicts no rain would score 74.1% – only slightly worse than our classifier.
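As a side note, scikit-learn ships a DummyClassifier that formalizes this kind of baseline check. A minimal sketch:

from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score

# Baseline that always predicts the most frequent class in the training data
baseline = DummyClassifier(strategy='most_frequent')
baseline.fit(X_train, y_train)
print(accuracy_score(y_test, baseline.predict(X_test)))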
Is that a good model? Well, I find binary classifiers a bit tricky because of this problem. The best way to get an idea is to visualize the predictions.
To visualize the data we can do the following.
fig, ax = plt.subplots()
X_data = X.to_numpy()
y_all = clf.predict(X_data)
ax.scatter(x=X_data[:,0], y=X_data[:,1], c=y_all, alpha=.25)
plt.show()
This results in the following output.
Finally, let’s visualize the actual data to compare.
fig, ax = plt.subplots()
ax.scatter(x=X_data[:,0], y=X_data[:,1], c=y, alpha=.25)
plt.show()
This results in the following output.
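To see the actual line the Perceptron learned, you can also plot its decision boundary from the fitted coefficients. The coef_ and intercept_ attributes are part of scikit-learn’s Perceptron; the plotting itself is a sketch, assuming the second coefficient is non-zero.

# The decision boundary is where w0 + w1*humidity + w2*pressure = 0
w1, w2 = clf.coef_[0]
w0 = clf.intercept_[0]

fig, ax = plt.subplots()
ax.scatter(x=X_data[:,0], y=X_data[:,1], c=y, alpha=.25)
x_line = np.linspace(X_data[:,0].min(), X_data[:,0].max(), 100)
# Solve for pressure as a function of humidity along the boundary
ax.plot(x_line, -(w0 + w1 * x_line) / w2, color='red')
plt.show()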
Here is the full code.
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.linear_model import Perceptron
import matplotlib.pyplot as plt
data = pd.read_csv('https://raw.githubusercontent.com/LearnPythonWithRune/MachineLearningWithPython/main/files/weather.csv', parse_dates=True, index_col=0)
print(data.head())
print(data.isnull().sum())
dataset = data[['Humidity3pm', 'Pressure3pm', 'RainTomorrow']].dropna()
X = dataset[['Humidity3pm', 'Pressure3pm']]
y = dataset['RainTomorrow']
y = np.array([0 if value == 'No' else 1 for value in y])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = Perceptron(random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(accuracy_score(y_test, y_pred))
print(sum(y == 0)/len(y))
fig, ax = plt.subplots()
X_data = X.to_numpy()
y_all = clf.predict(X_data)
ax.scatter(x=X_data[:,0], y=X_data[:,1], c=y_all, alpha=.25)
plt.show()
fig, ax = plt.subplots()
ax.scatter(x=X_data[:,0], y=X_data[:,1], c=y, alpha=.25)
plt.show()
In the next lesson you will learn to use a Support-Vector Machine to classify using Sklearn.
This is part of a FREE 10h Machine Learning course with Python.