## What will we cover?

The goal is to learn about Supervised Learning and explore how to use it for classification.

This includes learning:

• What Supervised Learning is
• Understanding the classification problem
• What the Perceptron classifier is
• How to use the Perceptron classifier as a linear classifier

## Step 1: What is Supervised Learning?

Supervised learning (SL) is the machine learning task of learning a function that maps an input to an output based on example input-output pairs (wikipedia.org).

Said differently, suppose you have some items you need to classify. It could be books you want to put into categories, say fiction, non-fiction, etc.

Then, given a pile of books already labelled with the right categories, how can you make a function (the machine learning model) that can guess the right category for other books without labels?

Supervised learning simply means that in the learning phase, the algorithm (the one creating the model) is given examples with correct labels.

Notice that supervised learning is not restricted to classification problems; it can predict continuous values as well.

If you are new to Machine Learning, I advise you to start with this tutorial.

## Step 2: What is the classification problem?

The classification problem is a supervised learning task of getting a function mapping an input point to a discrete category.

There is binary classification and multiclass classification: binary maps into two classes, and multiclass maps into 3 or more classes.

I find it easiest to understand with examples.

Assume we want to predict if it will rain or not rain tomorrow. This is a binary classification problem, because we map into two classes: rain or no rain.

To train the model we need already labelled historic data.

Hence, the task is: given rows of historic data with correct labels, train a machine learning model (a Linear Classifier in this case) with this data. Then see how well it can predict future data (without the right class label).

## Step 3: Linear Classification explained mathematically and visually

Some like the math behind an algorithm. If you are not one of them, focus on the visual part – it will give you the understanding you need.

Mathematically, the Supervised Learning task with the example data above is to find a function f(humidity, pressure) that predicts rain or no rain.

Examples

• f(93, 999.7) = rain
• f(49, 1015.5) = no rain
• f(79, 1031.1) = no rain

The goal of Supervised Learning is to approximate the function f – the approximation function is often denoted h.

Why not identify f precisely? Because that would be an overfitted function, one that predicts the historic data with 100% accuracy but fails to predict future values well.

As we work with Linear Classifiers, we want the function to be linear.

That is, we want the approximation function h to be of the form:

• x_1: Humidity
• x_2: Pressure
• h(x_1, x_2) = w_0 + w_1*x_1 + w_2*x_2

Hence, the goal is to optimize values w_0, w_1, w_2, to find the best classifier.

What does all this math mean?

Well, that it is a linear classifier that makes decisions based on the value of a linear combination of the characteristics.

The above diagram shows how it would classify with a line whether it predicts rain or not. On the left side is the historic data with its classes, and the line is the one optimized by the machine learning algorithm.

On the right side, we have new input data (without a label); with this line, it would be classified as rain (assuming blue means rain).
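To make the linear form concrete, here is a tiny sketch with hand-picked weights (purely illustrative, not learned from data) that happen to separate the three example points above:

```python
# Hand-picked weights for illustration only; a real classifier learns these from data.
w0, w1, w2 = 430.0, 1.0, -0.5

def h(humidity, pressure):
    # The linear combination w_0 + w_1*x_1 + w_2*x_2
    return w0 + w1 * humidity + w2 * pressure

def predict(humidity, pressure):
    # Classify by which side of the line the point falls on
    return 'rain' if h(humidity, pressure) >= 0 else 'no rain'

print(predict(93, 999.7))   # rain
print(predict(49, 1015.5))  # no rain
print(predict(79, 1031.1))  # no rain
```

Optimizing w_0, w_1, w_2 automatically is exactly what the Perceptron in the next step does.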

## Step 4: What is the Perceptron Classifier?

The Perceptron Classifier is a linear algorithm that can be applied to binary classification.

It learns iteratively by adding new knowledge to an already existing line.

The learning rate is given by alpha, and the learning rule is as follows (don’t worry if you don’t understand it – it is not important).

• Given data point (x, y), update each weight according to this.
• w_i = w_i + alpha*(y - h_w(x))*x_i

The rule can also be stated as follows.

• w_i = w_i + alpha*(actual value - estimated value)*x_i

Said in words, it adjusts the weights according to the actual values: every time a new value comes in, the weights are adjusted to fit it better.

Once the line has been adjusted to all the training data, it is ready to predict.
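To illustrate, here is a minimal sketch of that update rule (illustrative only; it is not how sklearn implements the Perceptron internally):

```python
def step(z):
    # Threshold activation: predict 1 (rain) if the linear combination is non-negative
    return 1 if z >= 0 else 0

def perceptron_update(weights, x, y, alpha=0.1):
    # weights[0] is the bias w_0 (with implicit x_0 = 1); x is the feature vector
    z = weights[0] + sum(w * xi for w, xi in zip(weights[1:], x))
    error = y - step(z)  # (actual value - estimated value)
    weights[0] += alpha * error
    for i, xi in enumerate(x, start=1):
        weights[i] += alpha * error * xi  # w_i = w_i + alpha*(y - h_w(x))*x_i
    return weights

# One misclassified 'no rain' example (y=0) pushes all the weights down
w = perceptron_update([0.0, 0.0, 0.0], x=(49, 1015.5), y=0)
print(w)
```

Applying this update repeatedly over the training data is the iterative learning described above.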

Let’s try this on real data.

## Step 5: Get the Weather data we will use to train a Perceptron model with

You can get all the code in a Jupyter Notebook with the csv file here.

This can be downloaded from GitHub in a zip file by clicking here.

First let’s just import all the libraries used.

```
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.linear_model import Perceptron
import matplotlib.pyplot as plt
```

Notice that the Notebook has an added line %matplotlib inline, which you should include if you run this in a Notebook. The code here is aligned with PyCharm or a similar IDE.

Then let’s read the data.

```
data = pd.read_csv('files/weather.csv', parse_dates=True, index_col=0)
print(data.head())
```

If you want to read the data directly from GitHub and not download the weather.csv file, you can do that as follows.

```
data = pd.read_csv('https://raw.githubusercontent.com/LearnPythonWithRune/MachineLearningWithPython/main/files/weather.csv', parse_dates=True, index_col=0)
print(data.head())
```

This will result in an output similar to this.

```
            MinTemp  MaxTemp  Rainfall  ...  RainToday  RISK_MM RainTomorrow
Date                                    ...
2008-02-01     19.5     22.4      15.6  ...        Yes      6.0          Yes
2008-02-02     19.5     25.6       6.0  ...        Yes      6.6          Yes
2008-02-03     21.6     24.5       6.6  ...        Yes     18.8          Yes
2008-02-04     20.2     22.8      18.8  ...        Yes     77.4          Yes
2008-02-05     19.7     25.7      77.4  ...        Yes      1.6          Yes
```

## Step 6: Select features and Clean the Weather data

We want to investigate the data and figure out how much missing data there is.

A great way to do that is to use isnull().

```
print(data.isnull().sum())
```

This results in the following output.

```
MinTemp             3
MaxTemp             2
Rainfall            6
Evaporation        51
Sunshine           16
WindGustDir      1036
WindGustSpeed    1036
WindDir9am         56
WindDir3pm         33
WindSpeed9am       26
WindSpeed3pm       25
Humidity9am        14
Humidity3pm        13
Pressure9am        20
Pressure3pm        19
Cloud9am          566
Cloud3pm          561
Temp9am             4
Temp3pm             4
RainToday           6
RISK_MM             0
RainTomorrow        0
dtype: int64
```

This shows how many rows in each column have null values (missing values). We want to work with only two features (columns), to keep our classification simple. Obviously, we need to keep RainTomorrow, as it carries the class label.

We select the features we want and drop the rows with null-values as follows.

```
dataset = data[['Humidity3pm', 'Pressure3pm', 'RainTomorrow']].dropna()
```

## Step 7: Split into training and test data

The next step is to split the dataset into features and labels.

But we also want to rename the labels from No and Yes to be numeric.

```
X = dataset[['Humidity3pm', 'Pressure3pm']]
y = dataset['RainTomorrow']
y = np.array([0 if value == 'No' else 1 for value in y])
```

Then we do the splitting as follows, where we set a random_state in order to be able to reproduce the result. This is often a great idea: if you use randomness and encounter a problem, you can reproduce it.

```
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
```

This has divided the features into a train and test set (X_train, X_test), and the labels into a train and test (y_train, y_test) dataset.

## Step 8: Train the Perceptron model and measure accuracy

Finally we want to create the model, fit it (train it), predict on the training data, and print the accuracy score.

```
clf = Perceptron(random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(accuracy_score(y_test, y_pred))
```

This gives an accuracy of 0.773, or 77.3%.

Is that good?

Well, what if it rains 22.7% of the time and the model always predicts no rain?

Then it would be correct 77.3% of the time.

Let’s check that baseline.

```
print(sum(y == 0)/len(y))
```

It turns out it is not raining 74.1% of the time, so the model is only slightly better than always predicting no rain.

Is that a good model? Well, I find binary classifiers a bit tricky because of this problem. The best way to get an idea is to visualize the predictions.

## Step 9: Visualize the model predictions

To visualize the data we can do the following.

```
fig, ax = plt.subplots()
X_data = X.to_numpy()
y_all = clf.predict(X_data)
ax.scatter(x=X_data[:,0], y=X_data[:,1], c=y_all, alpha=.25)
plt.show()
```

This results in the following output.

Finally, let’s visualize the actual data to compare.

```
fig, ax = plt.subplots()
ax.scatter(x=X_data[:,0], y=X_data[:,1], c=y, alpha=.25)
plt.show()
```

Resulting in.

Here is the full code.

```
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.linear_model import Perceptron
import matplotlib.pyplot as plt
data = pd.read_csv('https://raw.githubusercontent.com/LearnPythonWithRune/MachineLearningWithPython/main/files/weather.csv', parse_dates=True, index_col=0)
print(data.head())
print(data.isnull().sum())
dataset = data[['Humidity3pm', 'Pressure3pm', 'RainTomorrow']].dropna()
X = dataset[['Humidity3pm', 'Pressure3pm']]
y = dataset['RainTomorrow']
y = np.array([0 if value == 'No' else 1 for value in y])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = Perceptron(random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(accuracy_score(y_test, y_pred))
print(sum(y == 0)/len(y))
fig, ax = plt.subplots()
X_data = X.to_numpy()
y_all = clf.predict(X_data)
ax.scatter(x=X_data[:,0], y=X_data[:,1], c=y_all, alpha=.25)
plt.show()
fig, ax = plt.subplots()
ax.scatter(x=X_data[:,0], y=X_data[:,1], c=y, alpha=.25)
plt.show()
```

## Want to learn more?

This is part of a FREE 10h Machine Learning course with Python.

• 15 video lessons – which explain Machine Learning concepts, demonstrate models on real data, introduce projects and show a solution (YouTube playlist).
• 30 Jupyter Notebooks – with the full code and explanation from the lectures and projects (GitHub).
• 15 projects – with step guides to help you structure your solutions, and solutions explained at the end of the video lessons (GitHub).

## What will we cover?

This tutorial will explain what Machine Learning is by comparing it to classical programming. Then we cover how Machine Learning works and the three main categories of Machine Learning: Supervised, Unsupervised, and Reinforcement Learning.

Finally, we will explore a Supervised Machine Learning model called k-Nearest-Neighbors (KNN) classifier to get an understanding through practical application.

### Goal of Lesson

• Understand the difference between Classical Computing and Machine Learning
• Know the 3 main categories of Machine Learning
• Dive into Supervised Learning
• Classification with 𝑘-Nearest-Neighbors Classifier (KNN)
• How to classify data
• What are the challenges with cleaning data
• Create a project on real data with 𝑘-Nearest-Neighbor Classifier

## Step 1: What is Machine Learning?

• In the classical computing model, everything is programmed into the algorithms.
• This has the limitation that all decision logic needs to be understood before usage.
• And if things change, we need to modify the program.
• With the modern computing model (Machine Learning), this paradigm is changed.
• We feed the algorithms (models) with data.
• Based on that data, the algorithms (models) make decisions in the program.

Imagine you needed to teach your child how to ride a bicycle.

In the classical computing sense, you would instruct your child how to use each specific muscle in every situation. That is, if you lose balance to the right, then activate the third muscle in your right leg. You need instructions for all muscles in all situations.

That is a lot of instructions, and chances are you will forget some specific situations.

Machine Learning feeds the child data: it will fall, it will fail, but eventually it will figure it out by itself, without instructions on how to use the specific muscles in the body.

Well, that is actually how most of us learn to ride a bike.

## Step 2: How Machine Learning Works

On a high level, Machine Learning is divided into two phases.

• Learning phase: Where the algorithm (model) learns in a training environment. Like when you support your child learning to ride the bike, catching the child while falling so it does not hit the ground too hard.
• Prediction phase: Where the algorithm (model) is applied on real data. This is when the child can bike on its own.

The Learning Phase is often divided into a few steps.

• Get Data: Identify relevant data for the problem you want to solve. This data set should represent the type of data that the Machine Learning model will use to predict from in Phase 2 (prediction).
• Pre-processing: This step is about cleaning up the data. While Machine Learning is awesome, it cannot figure out what good data looks like. You need to do the cleaning as well as transform the data into the desired format.
• Train model: This is where the magic happens, the learning step (Train model). There are three main paradigms in machine learning.
• Supervised: where you tell the algorithm what category each data item is in. Each data item from the training set is tagged with the right answer.
• Unsupervised: where the algorithm is not given labels and must find the structure in the data itself.
• Reinforcement: teaches the machine to think for itself based on past action rewards.
• Test model: Finally, the testing is done to see if the model is good. The training data was divided into a test set and a training set. The test set is used to see if the model can predict from it. If not, a new model might be necessary.

The Prediction Phase can be illustrated as follows.

## Step 3: Supervised Learning explained with Example

Supervised learning can be explained as follows.

Given a dataset of input-output pairs, learn a function to map inputs to outputs.

There are different tasks, but we start by focusing on Classification, where supervised classification is the task of learning a function mapping an input point to a discrete category.

Now the best way to understand new things is to relate it to something we already understand.

Consider the following data.

Given the Humidity and Pressure for a given day, can we predict if it will rain or not?

How will a Supervised Classification algorithm work?

Learning Phase: Given a set of historical data to train the model – like the data above, given rows of Humidity and Pressure and the label Rain or No Rain. Let the algorithm work with the data and figure it out.

Note: we leave out pre-processing and testing the model here.

Prediction Phase: Give the algorithm new data. Like in the morning, you read the Humidity and Pressure and let the algorithm predict whether it will rain that given day.

Written mathematically, it is the task of finding a function f as follows.

Ideally, we want a function f(humidity, pressure) that predicts Rain or No Rain.

Examples:

• f(93, 999.7) = Rain
• f(49, 1015.5) = No Rain
• f(79, 1031.1) = No Rain

Goal: Approximate the function f – the approximation function is often denoted h.

## Step 4: Visualize the data we want to fit

We will use pandas to work with data, which is a fast, powerful, flexible and easy to use open source data analysis and manipulation tool, built on top of the Python programming language.

The data we want to work with can be downloaded from here and stored locally. Or you can access it directly as follows.

```
import pandas as pd

file_dest = 'https://raw.githubusercontent.com/LearnPythonWithRune/MachineLearningWithPython/main/files/weather.csv'
data = pd.read_csv(file_dest, parse_dates=True, index_col=0)
```

First, let’s visualize the data we want to work with.

```
import matplotlib.pyplot as plt
import pandas as pd

file_dest = 'https://raw.githubusercontent.com/LearnPythonWithRune/MachineLearningWithPython/main/files/weather.csv'
data = pd.read_csv(file_dest, parse_dates=True, index_col=0)

dataset = data[['Humidity3pm', 'Pressure3pm', 'RainTomorrow']]

fig, ax = plt.subplots()

dataset[dataset['RainTomorrow'] == 'No'].plot.scatter(x='Humidity3pm', y='Pressure3pm', c='b', alpha=.25, ax=ax)
dataset[dataset['RainTomorrow'] == 'Yes'].plot.scatter(x='Humidity3pm', y='Pressure3pm', c='r', alpha=.25, ax=ax)

plt.show()
```

Resulting in.

The goal is to make a model that can predict the Blue or Red dots.

## Step 5: The k-Nearest-Neighbors Classifier

Given an input, choose the class of the nearest data point.

### 𝑘-Nearest-Neighbors Classification

• Given an input, choose the most common class out of the 𝑘 nearest data points
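Before reaching for sklearn, the idea can be sketched from scratch in a few lines (the data points and labels here are made up for illustration):

```python
from collections import Counter
import math

def knn_predict(X_train, y_train, point, k=3):
    # Distance from the new point to every training point
    distances = [math.dist(point, x) for x in X_train]
    # Indices of the k nearest training points
    nearest = sorted(range(len(distances)), key=distances.__getitem__)[:k]
    # Choose the most common class among the k nearest neighbours
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

# Toy (humidity, pressure) points with made-up labels
X_train = [(93, 999.7), (95, 998.0), (49, 1015.5), (79, 1031.1)]
y_train = ['Rain', 'Rain', 'No Rain', 'No Rain']
print(knn_predict(X_train, y_train, (90, 1000.0)))  # Rain
```

Note that the features here have very different scales, which is one reason real k-NN pipelines often normalize the data first.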

Let’s try to implement a model. We will use sklearn for that.

```
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

dataset_clean = dataset.dropna()

X = dataset_clean[['Humidity3pm', 'Pressure3pm']]
y = dataset_clean['RainTomorrow']
y = np.array([0 if value == 'No' else 1 for value in y])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

neigh = KNeighborsClassifier()
neigh.fit(X_train, y_train)
y_pred = neigh.predict(X_test)
print(accuracy_score(y_test, y_pred))
```

This actually covers what you need. Make sure to have the dataset from the previous step available here.

To visualize the predictions you can run the following, where X_map holds the feature points we want to color by the model’s prediction (here we simply reuse the cleaned data points).

```
fig, ax = plt.subplots()

X_map = X.to_numpy()
y_map = neigh.predict(X_map)

ax.scatter(x=X_map[:,0], y=X_map[:,1], c=y_map, alpha=.25)
plt.show()
```

## Want more help?

Check out this video explaining all steps in more depth. Also, it includes a guideline for making your first project with Machine Learning along with a solution for it.


## What will we cover?

We will learn what Reinforcement Learning is and how it works. Then, by using Object-Oriented Programming techniques (more about Object-Oriented Programming), we implement a Reinforcement Learning model to solve the problem of figuring out where to pick up and drop off an item on a field.

## Step 1: What is Reinforcement Learning?

Reinforcement Learning is one of the 3 main categories of Machine Learning (get started with Machine Learning here) and is concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward.

### How Reinforcement Learning works

Reinforcement Learning teaches the machine to think for itself based on past action rewards.

• Basically, the Reinforcement Learning algorithm tries to predict actions that gives rewards and avoids punishment.
• It is like training a dog. You and the dog do not talk the same language, but the dog learns how to act based on rewards (and punishment, which I do not advise or advocate).
• Hence, if a dog is rewarded for a certain action in a given situation, then next time it is exposed to a similar situation it will act the same.
• Translate that to Reinforcement Learning.
• The agent is the dog that is exposed to the environment.
• Then the agent encounters a state.
• The agent performs an action to transition to a new state.
• Then after the transition the agent receives a reward or penalty (punishment).
• This forms a policy, i.e. a strategy for choosing actions in a given state.

### What algorithms are used for Reinforcement Learning?

• The most common algorithms for Reinforcement Learning are:
• Q-Learning: is a model-free reinforcement learning algorithm to learn a policy telling an agent what action to take under what circumstances.
• Temporal Difference: refers to a class of model-free reinforcement learning methods which learn by bootstrapping from the current estimate of the value function.
• Deep Adversarial Network: is a technique employed in the field of machine learning which attempts to fool models through malicious input.
• We will focus on the Q-learning algorithm as it is easy to understand as well as powerful.

### How does the Q-learning algorithm work?

• As already noted, I just love this algorithm. It is “easy” to understand and powerful as you will see.
• The Q-Learning algorithm has a Q-table (a Matrix of dimension state x actions – don’t worry if you do not understand what a Matrix is, you will not need the mathematical aspects of it – it is just an indexed “container” with numbers).
• The agent (or Q-Learning algorithm) will be in a state.
• Then in each iteration the agent needs to take an action.
• The agent will continuously update the reward in the Q-table.
• The learning can come from either exploiting or exploring.
• This translates into the following pseudo algorithm for the Q-Learning.
• The agent is in a given state and needs to choose an action.

#### Algorithm

• Initialise the Q-table to all zeros
• Iterate
• Agent is in state state.
• With probability epsilon choose to explore, else exploit.
• If explore, then choose a random action.
• If exploit, then choose the best action based on the current Q-table.
• Update the Q-table from the new reward to the previous state.
• Q[state, action] = (1 - alpha) * Q[state, action] + alpha * (reward + gamma * max(Q[new_state]) - Q[state, action])

#### Variables

As you can see, we have introduced the following variables.

• epsilon: the probability to take a random action, which is done to explore new territory.
• alpha: is the learning rate that the algorithm should make in each iteration and should be in the interval from 0 to 1.
• gamma: is the discount factor used to balance the immediate and future reward. This value is usually between 0.8 and 0.99
• reward: is the feedback on the action and can be any number. Negative is penalty (or punishment) and positive is a reward.
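The update rule with these variables can be written as a small function (a sketch matching the formula above; state and action are simply indices into the Q-table):

```python
import numpy as np

def q_update(q_table, state, action, reward, new_state, alpha=0.1, gamma=0.6):
    # Mirrors: Q[s, a] = (1 - alpha)*Q[s, a] + alpha*(reward + gamma*max(Q[s']) - Q[s, a])
    old_value = q_table[state, action]
    future = np.max(q_table[new_state])
    q_table[state, action] = (1 - alpha) * old_value + alpha * (reward + gamma * future - old_value)
    return q_table

# Toy Q-table with 3 states and 2 actions; a reward of 20 on an all-zero
# table nudges Q[0, 1] up by alpha * 20.
q = q_update(np.zeros((3, 2)), state=0, action=1, reward=20, new_state=1)
print(q[0, 1])  # 2.0
```

We will use exactly this update inside the training loop later in the post.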

## Step 2: The problem we want to solve

Here we have a description of the task we want to solve.

• To keep it simple, we create a field of size 10×10 positions. In that field there is an item that needs to be picked up and moved to a drop-off point.
• At each position there are 6 different actions that can be taken.
• Action 0: Go South if on field.
• Action 1: Go North if on field.
• Action 2: Go East if on field (Please notice, I mixed up East and West (East is Left here)).
• Action 3: Go West if on field (Please notice, I mixed up East and West (West is right here)).
• Action 4: Pickup item (it can try even if it is not there)
• Action 5: Drop-off item (it can try even if it does not have it)
• Based on these actions we will make a reward system.
• If the agent tries to go off the field, punish with -10 in reward.
• If the agent makes a (legal) move, punish with -1 in reward, as we do not want to encourage endless walking around.
• If the agent tries to pick up the item, but it is not there or it already has it, punish with -10 in reward.
• If the agent picks up the item in the correct place, reward with 20.
• If the agent tries to drop off the item in the wrong place or does not have the item, punish with -10 in reward.
• If the agent drops off the item in the correct place, reward with 20.
• That translates into the following code. I prefer to implement this code myself, as I think the standard libraries that provide similar frameworks hide some important details. As an example, shown later: how do you map this into a state in the Q-table?

## Step 3: Implementing the field

First we need a way to represent the field, representing the environment our model lives in. This is defined in Step 2 and could be implemented as follows.

```
class Field:
    def __init__(self, size, item_pickup, item_drop_off, start_position):
        self.size = size
        self.item_pickup = item_pickup
        self.item_drop_off = item_drop_off
        self.position = start_position
        self.item_in_car = False

    def get_number_of_states(self):
        return self.size*self.size*self.size*self.size*2

    def get_state(self):
        state = self.position[0]*self.size*self.size*self.size*2
        state = state + self.position[1]*self.size*self.size*2
        state = state + self.item_pickup[0]*self.size*2
        state = state + self.item_pickup[1]*2
        if self.item_in_car:
            state = state + 1
        return state

    def make_action(self, action):
        (x, y) = self.position
        if action == 0:  # Go South
            if y == self.size - 1:
                return -10, False
            else:
                self.position = (x, y + 1)
                return -1, False
        elif action == 1:  # Go North
            if y == 0:
                return -10, False
            else:
                self.position = (x, y - 1)
                return -1, False
        elif action == 2:  # Go East
            if x == 0:
                return -10, False
            else:
                self.position = (x - 1, y)
                return -1, False
        elif action == 3:  # Go West
            if x == self.size - 1:
                return -10, False
            else:
                self.position = (x + 1, y)
                return -1, False
        elif action == 4:  # Pickup item
            if self.item_in_car:
                return -10, False
            elif self.item_pickup != (x, y):
                return -10, False
            else:
                self.item_in_car = True
                return 20, False
        elif action == 5:  # Drop off item
            if not self.item_in_car:
                return -10, False
            elif self.item_drop_off != (x, y):
                self.item_pickup = (x, y)
                self.item_in_car = False
                return -10, False
            else:
                return 20, True
```
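To see why get_number_of_states returns size*size*size*size*2, here is the encoding from get_state pulled out as a standalone function (a hypothetical helper mirroring the class above): each combination of agent position, item position, and the item-in-car flag gets its own unique index.

```python
def encode_state(position, item_pickup, item_in_car, size=10):
    # Mirrors Field.get_state: mixed-radix encoding of (x, y, pickup_x, pickup_y, item_in_car)
    x, y = position
    px, py = item_pickup
    state = x * size * size * size * 2
    state += y * size * size * 2
    state += px * size * 2
    state += py * 2
    if item_in_car:
        state += 1
    return state

# Agent at (0, 9), item at (0, 0), item not in car
print(encode_state((0, 9), (0, 0), False))  # 1800
# The largest possible state index, so there are 20000 states in total
print(encode_state((9, 9), (9, 9), True))   # 19999
```

This index is what we later use as the row number in the Q-table.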

## Step 4: A Naive approach to solve it (NON-Machine Learning)

A naive approach would be to just take random actions and hope for the best. This is obviously not optimal, but it is nice to have as a baseline to compare with.

```
import random

def naive_solution():
    size = 10
    item_start = (0, 0)
    item_drop_off = (9, 9)
    start_position = (0, 9)

    field = Field(size, item_start, item_drop_off, start_position)
    done = False
    steps = 0

    while not done:
        action = random.randint(0, 5)
        reward, done = field.make_action(action)
        steps = steps + 1

    return steps
```

To make an estimate on how many steps it takes you can run this code.

```
runs = [naive_solution() for _ in range(100)]
print(sum(runs)/len(runs))
```

Here we use a List Comprehension (learn more about list comprehension). This gave 143579.21. Notice, you will most likely get something different, as there is a high level of randomness involved.

## Step 5: Implementing our Reinforcement Learning Model

Here we give the algorithm for what we need to implement.

#### Algorithm

• Initialise the Q-table to all zeros
• Iterate
• Agent is in state state.
• With probability epsilon choose to explore, else exploit.
• If explore, then choose a random action.
• If exploit, then choose the best action based on the current Q-table.
• Update the Q-table from the new reward to the previous state.
• Q[state, action] = (1 - alpha) * Q[state, action] + alpha * (reward + gamma * max(Q[new_state]) - Q[state, action])

Then we end up with the following code to train our Q-table.

```
import numpy as np
import random

size = 10
item_start = (0, 0)
item_drop_off = (9, 9)
start_position = (0, 9)

field = Field(size, item_start, item_drop_off, start_position)

number_of_states = field.get_number_of_states()
number_of_actions = 6

q_table = np.zeros((number_of_states, number_of_actions))

epsilon = 0.1
alpha = 0.1
gamma = 0.6

for _ in range(10000):
    field = Field(size, item_start, item_drop_off, start_position)
    done = False

    while not done:
        state = field.get_state()
        if random.uniform(0, 1) < epsilon:
            action = random.randint(0, 5)
        else:
            action = np.argmax(q_table[state])

        reward, done = field.make_action(action)
        # Q[state, action] = (1 - alpha) * Q[state, action] + alpha * (reward + gamma * max(Q[new_state]) - Q[state, action])

        new_state = field.get_state()
        new_state_max = np.max(q_table[new_state])

        q_table[state, action] = (1 - alpha)*q_table[state, action] + alpha*(reward + gamma*new_state_max - q_table[state, action])
```

Then we can apply our model as follows.

```
def reinforcement_learning():
    epsilon = 0.1
    alpha = 0.1
    gamma = 0.6

    field = Field(size, item_start, item_drop_off, start_position)
    done = False
    steps = 0

    while not done:
        state = field.get_state()
        if random.uniform(0, 1) < epsilon:
            action = random.randint(0, 5)
        else:
            action = np.argmax(q_table[state])

        reward, done = field.make_action(action)
        # Q[state, action] = (1 - alpha) * Q[state, action] + alpha * (reward + gamma * max(Q[new_state]) - Q[state, action])

        new_state = field.get_state()
        new_state_max = np.max(q_table[new_state])

        q_table[state, action] = (1 - alpha)*q_table[state, action] + alpha*(reward + gamma*new_state_max - q_table[state, action])

        steps = steps + 1

    return steps
```

And evaluate it as follows.

```
runs_rl = [reinforcement_learning() for _ in range(100)]
print(sum(runs_rl)/len(runs_rl))
```

This resulted in 47.45. Again, you should get something different.

But compared to taking random moves (Step 4), it is a factor of about 3,000 better.

## Want more?

Want to learn more Python? Then this is part of an 8-hour FREE video course with full explanations, projects on each level, and guided solutions.

The course is structured with the following resources to improve your learning experience.

• 17 video lessons teaching you everything you need to know to get started with Python.
• 34 Jupyter Notebooks with lesson code and projects.
• A FREE 70+ pages eBook with all the learnings from the lessons.

See the full FREE course page here.

If you instead want to learn more about Machine Learning, do not worry.

Then check out my Machine Learning with Python course.

• 15 video lessons teaching you all aspects of Machine Learning
• 30 Jupyter Notebooks with lesson code and projects
• 10 hours FREE video content to support your learning journey.

Go to the course page for details.

## What will we cover?

We will demonstrate how to read CSV data from GitHub, how to group the data by unique values in a column and sum it, then how to group and sum data on a monthly basis, and finally how to export this into a multiple-sheet Excel document with charts.

## Step 1: Get and inspect the data

We can use pandas to read the CSV data (see more about CSV files here).

```
import pandas as pd

url = 'https://raw.githubusercontent.com/LearnPythonWithRune/LearnPython/main/files/SalesData.csv'
data = pd.read_csv(url, delimiter=';', parse_dates=True, index_col='Date')

print(data.head())
```

This will read our data directly from GitHub and show the first few lines.

```
            Sales rep        Item  Price  Quantity  Sale
Date
2020-05-31        Mia     Markers      4         1     4
2020-02-01        Mia  Desk chair    199         2   398
2020-09-21     Oliver       Table   1099         2  2198
2020-07-15  Charlotte    Desk pad      9         2    18
2020-05-27       Emma        Book     12         1    12
```

This data shows different sales representatives and a list of their sales in 2020.

## Step 2: Use GroupBy to get sales of each representative and monthly sales

It is easy to group data by columns. The code below will first group by the Sales rep column and sum the sales. Second, it will group the data by month and sum it.

```
repr_sales = data.groupby("Sales rep").sum()['Sale']
print(repr_sales)

monthly_sale = data.groupby(pd.Grouper(freq='M')).sum()['Sale']
monthly_sale.index = monthly_sale.index.month_name()
print(monthly_sale)
```

This gives.

```
Sales rep
Charlotte     74599
Emma          65867
Ethan         40970
Liam          66989
Mia           88199
Noah          78575
Oliver        89355
Sophia       103480
William       80400
Name: Sale, dtype: int64
```
```
Date
January      69990
February     51847
March        67500
April        58401
May          40319
June         59397
July         64251
August       51571
September    55666
October      50093
November     57458
December     61941
Name: Sale, dtype: int64
```

## Step 3: Create a multiple sheet Excel document with charts

Now for the export magic.

```
workbook = pd.ExcelWriter("SalesReport.xlsx", engine='xlsxwriter')
repr_sales.to_excel(workbook, sheet_name='Sales per rep')
monthly_sale.to_excel(workbook, sheet_name='Monthly')

chart1 = workbook.book.add_chart({'type': 'column'})

# Configure the series for the sales-per-rep chart.
chart1.add_series({
    'name':       'Sales per rep',
    'categories': "='Sales per rep'!$A$2:$A$10",
    'values':     "='Sales per rep'!$B$2:$B$10",
})

workbook.sheets['Sales per rep'].insert_chart('D2', chart1)

chart2 = workbook.book.add_chart({'type': 'column'})

# Configure the series for the monthly chart.
chart2.add_series({
    'name':       'Monthly sales',
    'categories': '=Monthly!$A$2:$A$13',
    'values':     '=Monthly!$B$2:$B$13',
})

workbook.sheets['Monthly'].insert_chart('D2', chart2)

workbook.close()
```

This will create an Excel document called SalesReport.xlsx in your working directory.

To get a detailed explanation see the video at the top of the post.

## Want to learn more?

Want to learn more Python? Then this is part of an 8-hour FREE video course with full explanations, projects at each level, and guided solutions.

The course is structured with the following resources to improve your learning experience.

• 17 video lessons teaching you everything you need to know to get started with Python.
• 34 Jupyter Notebooks with lesson code and projects.
• A FREE 70+ page eBook with all the learnings from the lessons.

See the full FREE course page here.

If you instead want to learn more about Machine Learning, do not worry.

Check out my Machine Learning with Python course.

• 15 video lessons teaching you all aspects of Machine Learning
• 30 Jupyter Notebooks with lesson code and projects
• 10 hours FREE video content to support your learning journey.

Go to the course page for details.

## What will we cover?

In this tutorial you will learn some basic NumPy. The best way to learn something new is to combine it with something useful. Therefore you will use NumPy while creating your first Machine Learning project.

## Step 1: What is NumPy?

NumPy is the fundamental package for scientific computing in Python.

NumPy.org

Well, that is how it is stated on the official NumPy page.

Maybe a better question is, what do you use NumPy for and why?

Well, the main tool you use from NumPy is the NumPy array. Arrays are quite similar to Python lists, just with a few restrictions.

1. It can only contain one data type. That is, if a NumPy array holds integers, then all entries can only be integers.
2. The size is fixed. That is, you cannot add or remove entries, as you can in a Python list.
3. If it is a multi-dimensional array, all sub-arrays must have the same shape. That is, you cannot have something like a Python list of lists where the first sub-list has length 3, the second length 7, and so on. They must all have the same length (or shape).
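
The single-type restriction can be seen directly (a quick sketch; np is the conventional NumPy import):

```python
import numpy as np

# Mixing an int and a float upcasts the whole array to float64.
mixed = np.array([1, 2.5, 3])
print(mixed.dtype)  # float64

# Assigning a float into an integer array silently truncates it.
ints = np.array([1, 2, 3])
ints[0] = 9.7
print(ints[0])  # 9
```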

Why would anyone use them, you might ask, when they are more restrictive than Python lists?

Funnily enough, making a data structure more restrictive, as NumPy arrays are, can make it more efficient (faster).

Why?

Well, think about it. You know more about the data structure, and hence, do not need to make many additional checks.

## Step 2: A little NumPy array basics we will use for our Machine Learning project

A NumPy array can be created from a list.

```import numpy as np

a1 = np.array([1, 2, 3, 4])
print(a1)
```

Which will print.

```
[1 2 3 4]
```

The data type of a NumPy array can be inspected as follows.

```print(a1.dtype)
```

It will print int64. That is, the full array has only one type, int64 (64-bit integers). This is different from Python integers, where you cannot specify the size. Here you can have int8, int16, int32, int64, and more. Again, restrictions that make it more efficient.

```print(a1.shape)
```

The above gives the shape, here (4,). Notice that the number of entries is fixed: you can reshape the array, but you cannot add or remove entries.

Let’s create another NumPy array and try a few things.

```a1 = np.array([1, 2, 3, 4])
a2 = np.array([5, 6, 7, 8])

print(a1*2)
print(a1*a2)
print(a1 + a2)
```

Which results in.

```
[2 4 6 8]
[ 5 12 21 32]
[ 6  8 10 12]
```

With a little inspection you will realize that the first (a1*2) multiplies each entry by 2, the second (a1*a2) multiplies the entries pairwise, and the third (a1 + a2) adds the entries pairwise.

## Step 3: What is Machine Learning?

• In the classical computing model, everything is programmed into the algorithms. This has the limitation that all decision logic needs to be understood before usage. And if things change, we need to modify the program.
• With the modern computing model (Machine Learning) this paradigm changes. We feed the algorithms with data, and based on that data, the program makes the decisions.

### How Machine Learning Works

• On a high level you can divide Machine Learning into two phases.
• Phase 1: Learning
• Phase 2: Prediction
• The learning phase (Phase 1) can be divided into substeps.
• It all starts with a training set (training data). This data set should represent the type of data that the Machine Learning model will predict from in Phase 2 (prediction).
• The pre-processing step is about cleaning up data. While Machine Learning is powerful, it cannot figure out what good data looks like. You need to do the cleaning, as well as transform the data into a desired format.
• Then for the magic: the learning step. There are three main paradigms in machine learning.
• Supervised: you tell the algorithm what category each data item is in. Each data item from the training set is tagged with the right answer.
• Unsupervised: the learning algorithm is not told what to do with the data and has to find the structure itself.
• Reinforcement: teaches the machine to think for itself based on rewards for past actions.
• Finally, testing is done to see if the model is good. The data was divided into a training set and a test set. The test set is used to see if the model can predict from it. If not, a new model might be necessary.

Then the prediction begins.

## Step 4: A Linear Regression Model

Let’s try to use a Machine Learning model. One of the first models you will meet is the Linear Regression model.

Simply said, this model tries to fit data to a straight line. The best way to understand that is to see it visually with one explanatory variable. That is, given a value (the explanatory variable), can you predict the scalar response (the value you want to predict)?

Say, given the temperature (explanatory variable), can you predict the sale of ice cream? Assuming there is a linear relationship, can you determine it? A guess is that the hotter it is, the more ice cream is sold. But whether a linear model is a good predictor is beyond the scope here.

Let’s try with some simple data.

But first we need to import a few libraries.

```
import numpy as np
from sklearn.linear_model import LinearRegression
```

Then we generate some simple data.

```x = [i for i in range(10)]
y = [i for i in range(10)]
```

In this case the data is fully correlated, but it only serves to demonstrate the workflow. This part is equivalent to the Get data step.

Here x is the explanatory variable and y the scalar response we want to predict.

When you train the model, you give it input pairs of explanatory and scalar response. This is needed, as the model needs to learn.

After the learning you can predict data. But let’s prepare the data for the learning. This is the Pre-processing.

```X = np.array(x).reshape((-1, 1))
Y = np.array(y).reshape((-1, 1))
```

Notice, this is a very simple step; we only need to convert the data into the correct format.

Then we can train the model (train model).

```lin_regressor = LinearRegression()
lin_regressor.fit(X, Y)
```

Here we will skip the test model step, as the data is simple.

To predict data we can call the model.

```Y_pred = lin_regressor.predict(X)
```

The full code together here.

```
import numpy as np
from sklearn.linear_model import LinearRegression

x = [i for i in range(10)]
y = [i for i in range(10)]

X = np.array(x).reshape((-1, 1))
Y = np.array(y).reshape((-1, 1))

lin_regressor = LinearRegression()
lin_regressor.fit(X, Y)

Y_pred = lin_regressor.predict(X)
```

## Step 5: Visualize the result

You can visualize the data and the prediction as follows (see more about matplotlib here).

```import matplotlib.pyplot as plt

alpha = str(round(lin_regressor.intercept_[0], 5))
beta = str(round(lin_regressor.coef_[0][0], 5))

fig, ax = plt.subplots()

ax.set_title(f"Alpha {alpha}, Beta {beta}")
ax.scatter(X, Y)
ax.plot(X, Y_pred, c='r')
```

Alpha is called the constant or intercept and measures where the regression line crosses the y-axis.

Beta is called the coefficient or slope and measures the steepness of the regression line.
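
With the perfectly correlated toy data above, the fitted line should be close to y = x, i.e. alpha near 0 and beta near 1. As a cross-check sketch, the same fit can be done with NumPy's np.polyfit (no scikit-learn needed):

```python
import numpy as np

x = np.arange(10)
y = np.arange(10)

# Degree-1 least-squares fit returns (slope, intercept) = (beta, alpha).
beta, alpha = np.polyfit(x, y, 1)
print(round(beta, 5))   # 1.0
print(alpha)            # close to 0.0 (up to floating-point noise)
```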

## Next step

If you want a real project with Linear Regression, then check out the video in the top of the post, which is part of a full course.

The project will look at car specs to see if there is a connection between them.


## What will we cover?

In this tutorial we will get started with Matplotlib visualization. We will use the object-oriented approach with Matplotlib, which makes it less confusing at the cost of only one more line of code.

## Plot a list of numbers with Matplotlib

Given a list of numbers, how can you plot a connected line?

```import matplotlib.pyplot as plt

fig, ax = plt.subplots()

ax.plot([1, 2, 3, 4])
```

Which results in the following output.

The numbers do not need to lie on a straight line, but the points will be connected.

## Make a Colored Scatter Plot with Matplotlib

Now you need three lists.

```import matplotlib.pyplot as plt

x = [1, 2, 3, 4, 5, 6, 4]
y = [2, 3, 2, 1, 6, 10, 3]
c = [1, 1, 2, 2, 3, 4, 4]

fig, ax = plt.subplots()
ax.scatter(x, y, c=c)
ax.set_title("Title")
ax.set_xlabel("X label")
ax.set_ylabel("Y label")
```

This results in the following plot.

Notice that we also added a title and labels to the axes.

This could also be done in the connected line plot above.
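
For completeness, here is a sketch of the connected line plot with a title and axis labels added (the Agg backend line is only needed when running without a display, e.g. outside a notebook):

```python
import matplotlib
matplotlib.use('Agg')  # off-screen rendering; not needed in a notebook
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3, 4])
ax.set_title("Title")
ax.set_xlabel("X label")
ax.set_ylabel("Y label")
fig.savefig('line_plot.png')  # hypothetical output file
```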

## Make a Histogram with Matplotlib

You can make a histogram as follows.

```import matplotlib.pyplot as plt

data = [1, 1, 2, 2, 1, 2, 3, 3, 2, 3, 1, 3, 2]

fig, ax = plt.subplots()
ax.hist(data, bins=4)
ax.set_title("Title")
ax.set_xlabel("X label")
ax.set_ylabel("Y label")
```

This results in the following plot.


## What will we cover?

The best way to learn Object-Oriented Programming is by creating something object-oriented. In this tutorial we will create a simple card game.

## Step 1: What is Object-Oriented Programming?

At its core, Object-Oriented Programming helps you structure your program to resemble reality.

That is, you declare objects with parameters and methods (functions) you need on them.

The best way is to learn it by creating an easy example you can relate to.

Consider the following.

This diagram represents three objects we want to model. The first is a Card, second a Deck, and finally a Hand.

There are many things to notice, but first that Hand is actually a sub-class of Deck. What does that mean? Don’t worry, we’ll get there.

## Step 2: Implement the Card class

What does it all mean?

Well, first of all, there are many ways to represent a class, and the above is just one possible option. But if we look at Card, we have two groups to look at: first, suit and rank; second, __str__() and __lt__(other).

The suit and rank are instance variables, while __str__() and __lt__(other) are methods.

Instance variables are variables only available to a specific object instance. Hence, different instances of the same class can have different values.

Methods are functions you can call on an object instance.

The function __str__() is a special method, which will give the string representation of the object instance. This is how the object will be represented if printed.

The function __lt__(other) is also a special method; it returns whether the object is less than the other object. Hence, it returns a truth value (boolean).

One way to implement is as follows.

```
class Card:
    suits = ['\u2666', '\u2665', '\u2663', '\u2660']
    ranks = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"]

    def __init__(self, suit, rank):
        self.suit = suit
        self.rank = rank

    def __str__(self):
        return f"{Card.ranks[self.rank]}{Card.suits[self.suit]}"

    def __lt__(self, other):
        if self.rank == other.rank:
            return self.suit < other.suit
        else:
            return self.rank < other.rank
```

Notice we also have class variables suits and ranks (with s). They are used to give a representation in the __str__() method.

Class variables are available and the same across all objects.

Also, notice the __init__(self, suit, rank) method, which is called at creation of the object; it assigns the arguments to the instance variables (the ones with self).
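
To see instance variables, class variables, and the special methods working together, here is a small usage sketch of the Card class above:

```python
class Card:
    # Class variables: shared by all Card instances.
    suits = ['\u2666', '\u2665', '\u2663', '\u2660']
    ranks = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"]

    def __init__(self, suit, rank):
        # Instance variables: specific to each Card.
        self.suit = suit
        self.rank = rank

    def __str__(self):
        return f"{Card.ranks[self.rank]}{Card.suits[self.suit]}"

    def __lt__(self, other):
        if self.rank == other.rank:
            return self.suit < other.suit
        return self.rank < other.rank

c1, c2 = Card(0, 0), Card(1, 12)
print(c1, c2)   # 2♦ A♥ (from __str__)
print(c1 < c2)  # True (from __lt__: rank 0 < rank 12)
```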

## Step 3: Implement the Deck class

A Deck should represent a pile of cards.

Here we want it to create a new shuffled deck of cards when you create a new instance of the Deck object.

That can be accomplished as follows.

```
import random

class Deck:
    def __init__(self):
        self.deck = []
        for suit in range(4):
            for rank in range(13):
                self.deck.append(Card(suit, rank))
        self.shuffle()

    def __len__(self):
        return len(self.deck)

    def add_card(self, card):
        self.deck.append(card)

    def pop_card(self):
        return self.deck.pop()

    def shuffle(self):
        random.shuffle(self.deck)
```

Notice that the __len__() method is also special; it returns the length of the object. This is handy if you want to use len(…) on an object instance of Deck.

The rest of the methods are simple and straightforward.

## Step 4: Implement the Hand class

The Hand class is a sub-class of Deck. How does that make sense?

Well, it will share the same instance variables and methods, with some additional ones.

Think about it: a Hand is like a Deck, as it is a collection of cards.

How do we implement that?

```
class Hand(Deck):
    def __init__(self, label):
        self.deck = []
        self.label = label
        self.win_count = 0

    def __str__(self):
        return self.label + ': ' + ' '.join([str(card) for card in self.deck])

    def get_label(self):
        return self.label

    def get_win_count(self):
        return self.win_count

    def round_winner(self):
        self.win_count = self.win_count + 1
```

Notice that we overwrite the __init__(…) method, as we do not want to create a full deck of cards. Here we start with empty hands.

## Step 5: A simple game

• Create a Deck of cards.
• Create 4 players (P1, P2, P3, P4)
• Divide all cards among the 4 players.
• Assume you are P1; print the hand of P1.
• The game has 13 rounds:
• Each player plays 1 card.
• The player with the highest card wins.
• Update the score for the winning hand.
• Print the cards played in the round and the winner (with the winning card).
• After the 13 rounds, print the score for all players (P1, P2, P3, P4).

How to do that?

```
deck = Deck()

hands = []
for i in range(1, 5):
    hands.append(Hand(f'P{i}'))

while len(deck) > 0:
    for hand in hands:
        hand.add_card(deck.pop_card())

print(hands[0])

for i in range(13):
    input()  # wait for Enter before playing the next round
    played_cards = []
    for hand in hands:
        played_cards.append(hand.pop_card())

    winner_card = max(played_cards)
    winner_hand = hands[played_cards.index(winner_card)]
    winner_hand.round_winner()

    print(f"R{i}: " + ' '.join([str(card) for card in played_cards]) + f' Winner: {winner_hand.get_label()} {str(winner_card)}')

for hand in hands:
    print(f"Score for {hand.get_label()}: {hand.get_win_count()}")
```

Amazing, right?


## What will we cover?

What is List Comprehension in Python? We will demonstrate how to write List Comprehensions and show how the same construct works with dictionaries, called Dict Comprehension. Then we will show how Dict Comprehension can be used for frequency counting.

## Step 1: What is List Comprehension?

List Comprehension is a syntactic construct available in some programming languages for creating a list based on existing lists.

Wikipedia.org

It is easiest to demonstrate how to create one in Python (see more about lists and about for-loops).

```my_list = [i for i in range(10)]

print(my_list)
```

Will result in.

```[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Here you use the range(10) construct, which gives a sequence from 0 to 9 that can be used in the Comprehension construct.

A more direct example is given here.

```my_new_list = [i*i for i in my_list]

print(my_new_list)
```

Which results in the following.

```[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

## Step 2: List Comprehension with if statements

Let’s get straight to it (see more about if-statements).

```my_list = [i for i in range(10) if i % 2 == 0]

print(my_list)
```

This will result in.

```[0, 2, 4, 6, 8]
```

Also, you can make List Comprehension with if-else-statements.

```my_list = [i if i % 2 else -i for i in range(10)]

print(my_list)
```

This results in.

```[0, 1, -2, 3, -4, 5, -6, 7, -8, 9]
```

## Step 3: Dict Comprehension

Let’s dive straight into it (see more about dict).

```my_list = [i for i in range(10)]

my_dict = {i: i*i for i in my_list}
print(my_dict)
```

Which results in.

```{0: 0, 1: 1, 2: 4, 3: 9, 4: 16, 5: 25, 6: 36, 7: 49, 8: 64, 9: 81}
```

Notice that the lists do not need to contain integers or floats; the construct also works with any other values.
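
For example, a Dict Comprehension over strings (the names list is made up for the demonstration):

```python
names = ['Alice', 'Bob', 'Charlie']

# Map each name to its length.
lengths = {name: len(name) for name in names}
print(lengths)  # {'Alice': 5, 'Bob': 3, 'Charlie': 7}
```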

## Step 4: Frequency Count using Dict Comprehension

This is amazing.

```text = 'aabbccccc'

freq = {c: text.count(c) for c in text}
print(freq)
```

Which results in.

```{'a': 2, 'b': 2, 'c': 5}
```

Notice that here we recount for each occurrence of a letter. To avoid that, you can do as follows.

```text = 'aabbccccc'

freq = {c: text.count(c) for c in set(text)}
print(freq)
```

Which results in the same output. The difference is that each letter is only counted once.
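
As an aside, the standard library's collections.Counter performs the same frequency count in one call:

```python
from collections import Counter

text = 'aabbccccc'

# Counter counts each element once per occurrence, in a single pass.
freq = dict(Counter(text))
print(freq)  # {'a': 2, 'b': 2, 'c': 5}
```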


## What will we cover?

We will learn what recursion is and why it can help simplify your code.

## Step 1: What is recursion?

In computer science, recursion is a method of solving a problem where the solution depends on solutions to smaller instances of the same problem.

It is often used in computer science to take one complex problem and break it down to smaller sub-problems, which eventually will have a simple base case.

How could a real life problem be?

Imagine you are standing in a long line and you do not see the beginning of it. You want to figure out how many people are in front of you.

How can you solve this?

Ask the person in front of you how many people are in front of them. If that person does not know, they should ask the person in front of them as well. When you get the answer, return the answer plus one (to count the person you asked).

Imagine this goes on until it reaches the first person in the line. That person will answer 0 (there is no one in front).

Then the next person will answer 1. The next 2 and so forth.

When the answer reaches you, you get the answer of how many are in front of you.

What to learn from this?

Well, the problem of how many people are in front of you is complex, and you cannot solve it yourself. So you break it down into a smaller problem and send that problem to the next person. When you get the answer, you update it with your own part.

This is done all the way down to the base case, when it reaches the first person.

This is the essence of recursion.

## Step 2: The first recursion problem you solve: Fibonacci numbers

Given the Fibonacci numbers defined as

• F_0 = 0
• F_1 = 1
• F_n = F_(n-1) + F_(n-2)

The sequence begins as follows

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …

Compute the n-th Fibonacci number.

First take a moment and think about how you would solve this. It seems a bit complex.

Then look at this recursive solution.

```
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
```

It is important to notice the base case (n <= 1). Why? Because without the base case it would never terminate.

Then notice how it makes calls to smaller instances of the same function – these are the recursive calls.

Now that is beautiful.
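
To reproduce the sequence listed above, call fib for each index (this also shows why the naive recursion is expensive: each call recomputes the smaller cases):

```python
def fib(n):
    # Base cases: fib(0) = 0, fib(1) = 1.
    if n <= 1:
        return n
    # Recursive case: the sum of the two previous Fibonacci numbers.
    return fib(n - 1) + fib(n - 2)

print([fib(i) for i in range(13)])
# [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
```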

## Step 3: Tower of Hanoi

The Tower of Hanoi is an amazing mathematical problem to solve.

### Tower of Hanoi Explained

Before we set the rules, let’s see what our universe looks like.

• There are 3 towers (rods) and a number of disks, all of different sizes.
• The goal is to move all the disks from one tower (rod) to another one, following these 3 rules.
1. You can only move one disk at a time.
2. You can only take the top disk and place it on top of another tower (rod).
3. You cannot place a bigger disk on top of a smaller disk.
• The first two rules combined mean that you can only take one top disk and move it.
• The third rule says that we cannot move, say, disk 2 on top of disk 1.
• The game: starting with all disks on the first tower, get them all to the last tower, following the 3 rules.

### How to Solve Tower of Hanoi Recursively

• Assume you can solve the smaller problem of moving 2 disks.
• Then move those 2 disks to the auxiliary tower.
• Then move disk 3 into place on the destination tower.
• Finally, move the 2-disk subproblem on top of disk 3.

### The Implemented Solution of Tower of Hanoi in Python

We need to represent the towers. This can be done by using a list of lists (see more about lists).

```towers = [[3, 2, 1], [], []]
```

Then we need a way to move disks.

```
def move(towers, from_tower, dest_tower):
    disk = towers[from_tower].pop()
    towers[dest_tower].append(disk)
    return towers
```

As a helper function we want to print it on the way (see more about for-loops).

```
def print_towers(towers):
    for i in range(3, 0, -1):
        for tower in towers:
            if len(tower) >= i:
                print(tower[i - 1], end='  ')
            else:
                print('|', end='  ')
        print()
    print('-------')
```

Then print_towers(towers) would print

```1  |  |
2  |  |
3  |  |
-------
```

Finally, the algorithm we want to implement is as follows.

• Step 1: Represent the towers as [[3, 2, 1], [], []]
• Step 2: Create a move function, which takes the towers and can move a disk from one tower to another.
• HINT: Use .pop() and .append(.)
• Step 3: Make a helper function to print the towers
• HINT: Assume that we have 3 towers and 3 disks
• Step 4: The recursive function
• solve_tower_of_hanoi(towers, n, start_tower, dest_tower, aux_tower)
• n is the number of disks we move, starting with 3, then we call recursive down with 2, 1, and 0.
• The base case is n = 0, just return in that case
• Move subproblem of n – 1 disks from start_tower to aux_tower.
• Move disk n to dest_tower. (you can print the tower here if you like)
• Move subproblem of n – 1 disk from aux_tower to dest_tower.
```
def solve_tower_of_hanoi(towers, n, start_tower, dest_tower, aux_tower):
    if n == 0:
        return
    # Move subproblem of n - 1 disks from start_tower to aux_tower.
    solve_tower_of_hanoi(towers, n - 1, start_tower, aux_tower, dest_tower)

    # Move disk n to dest_tower. (You can print the towers here if you like.)
    move(towers, start_tower, dest_tower)
    print_towers(towers)

    # Move subproblem of n - 1 disks from aux_tower to dest_tower.
    solve_tower_of_hanoi(towers, n - 1, aux_tower, dest_tower, start_tower)
```

Try it out.

```towers = [[3, 2, 1], [], []]
print_towers(towers)
solve_tower_of_hanoi(towers, 3, 0, 2, 1)
```
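
Solving n disks takes 2^n - 1 moves. Here is a sketch that counts the moves instead of printing (the same recursion, with a moves list added for the check):

```python
def solve(towers, n, start, dest, aux, moves):
    # Same recursion as solve_tower_of_hanoi, but records each move.
    if n == 0:
        return
    solve(towers, n - 1, start, aux, dest, moves)
    towers[dest].append(towers[start].pop())
    moves.append((start, dest))
    solve(towers, n - 1, aux, dest, start, moves)

towers = [[3, 2, 1], [], []]
moves = []
solve(towers, 3, 0, 2, 1, moves)
print(len(moves))  # 7 = 2**3 - 1
print(towers[2])   # [3, 2, 1]: all disks moved, largest at the bottom
```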


## What will we cover?

First we will look at the classical way to read a CSV file into Python and understand why we want to read it into a list of dictionaries.

Then we will demonstrate the two most convenient ways to read CSV files into a list of dictionaries in Python.

See this more general introduction to CSV files.

## The classical way to read CSV files in Python

To make the demonstration we need a CSV file.

Save the following content in NameRecords.csv

```First name,Last name,Age
Connar,Ward,15
Rose,Peterson,18
Paul,Cox,12
Hanna,Hicks,10
```

Then we will read the content with the default CSV reader in Python.

```
import csv

with open('NameRecords.csv') as csvfile:
    csv_reader = csv.reader(csvfile)
    rows = list(csv_reader)
```

The content will then be available as a list of lists stored in rows.

```
for row in rows:
    print(row)
```

Resulting in the following output.

```['First name', 'Last name', 'Age']
['Connar', 'Ward', '15']
['Rose', 'Peterson', '18']
['Paul', 'Cox', '12']
['Hanna', 'Hicks', '10']
```

Which illustrates two problems.

1. The column names are given in the first item of the list.
2. The row is a list of the items, and you need to keep track of what they represent.

Hence, if we could read the content directly into a list of dictionaries, it would make your life as a programmer easier.

What do I mean? See the two methods below.

## Method 1: Using DictReader

This is possibly the classical way to do it, using the standard Python csv library.

We will use the NameRecords.csv file from above.

Then the following will read the content into a list of dictionaries.

```
import csv

with open("NameRecords.csv", "r") as f:
    csv_reader = csv.DictReader(f)
    name_records = list(csv_reader)
```

Access to the content can be achieved as follows.

```print(name_records[0])
print(name_records[0]['First name'])
```
• Advantage of approach: No additional library needs installation.
• Disadvantage of approach: You cannot access remote files.
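
The whole DictReader flow can also be sketched without a file on disk by feeding the CSV text through io.StringIO (used here purely for demonstration):

```python
import csv
import io

csv_text = """First name,Last name,Age
Connar,Ward,15
Rose,Peterson,18
"""

csv_reader = csv.DictReader(io.StringIO(csv_text))
name_records = list(csv_reader)

print(name_records[0]['First name'])  # Connar
print(name_records[1]['Age'])         # 18 (note: DictReader reads values as strings)
```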

## Method 2: Using pandas read_csv()

This approach can read remote data.

```import pandas as pd

url = "https://raw.githubusercontent.com/LearnPythonWithRune/LearnPython/main/files/NameRecords.csv"
name_records = pd.read_csv(url)

name_records = name_records.to_dict('records')
```

Again, the data is structured as a list of dictionaries (records).

• Advantage of approach: Can read remote files directly (like from GitHub)
• Disadvantage of approach: Need to install pandas library.
