How to use Multiple Linear Regression to Predict House Prices

What will we cover?

  • Learn about Multiple Linear Regression
  • Understand the difference from a discrete classifier
  • Understand that it is a Supervised Learning task
  • Get insight into how similar a linear classifier is to a discrete classifier
  • Get hands-on experience with multiple linear regression

Step 1: What is Multiple Linear Regression?

Multiple Linear Regression is a Supervised Learning task of learning a mapping from an input point to a continuous value.

Wow. What does that mean?

This might not help everyone, but it is simply the case of Linear Regression where there are multiple explanatory variables.

Let’s start simple – Simple Linear Regression is the case most tutorials show first. It takes one input variable (explanatory variable) and one output value (response value).

An example could be – if the temperature is X degrees, we expect to sell Y ice creams. That is, we are trying to predict how many ice creams we sell if we are given a temperature.

Now we know that there are other factors than the temperature that might have a high impact on ice cream sales. Say, whether it is rainy or sunny, or what time of year it is – it might be tourist season or not.

Hence, a simple model like that might not give a very accurate estimate.

Hence, we would like to model with more input variables (explanatory variables). When we have more than one, it is called Multiple Linear Regression.
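With n explanatory variables, the model takes the form below – the weights w_0, w_1, …, w_n are what the learning algorithm finds.

  • h(x_1, …, x_n) = w_0 + w_1*x_1 + … + w_n*x_n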

Step 2: Get Example Data

Let’s take a look at some house price data.

import pandas as pd

data = pd.read_csv('https://raw.githubusercontent.com/LearnPythonWithRune/MachineLearningWithPython/main/files/house_prices.csv')
print(data.head())

Notice – you can also download the file locally from GitHub. This will make it faster to run every time.

The output should give the following data.

The goal is given a row of data we want to predict the House Unit Price. That is, given all but the last column in a row, can we predict the House Unit Price (the last column).

Step 3: Plot the data

Just for fun – let’s make a scatter plot of all the houses with Latitude and Longitude.

import matplotlib.pyplot as plt

fig, ax = plt.subplots()

ax.scatter(x=data['Longitude'], y=data['Latitude'])
plt.show()

This gives the following plot.

This shows you where the houses are located, which can be interesting because house prices can depend on location.

Intuitively, the longitude and latitude should not be linearly correlated to the house price – at least not in the bigger picture.

Step 4: Correlation of the features

Before we make the Multiple Linear Regression, let’s see how the features (the columns) correlate.

data.corr()

Which gives.

This is interesting. Look at the lowest row for the correlations with House Unit Price. It shows that Distance to MRT station is negatively correlated – that is, the farther from an MRT station, the lower the price. This might not be surprising.

More surprising is that Latitude and Longitude are actually comparatively highly correlated with the House Unit Price.

This might be the case for this particular dataset.
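If you prefer a visual overview, a heatmap of the correlation matrix can help. Here is a minimal sketch using only Matplotlib (assuming data is the DataFrame from Step 2):

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
cax = ax.matshow(data.corr(), cmap='coolwarm')  # render the correlation matrix as colors
fig.colorbar(cax)
# Label both axes with the column names
ax.set_xticks(range(len(data.columns)))
ax.set_yticks(range(len(data.columns)))
ax.set_xticklabels(data.columns, rotation=90)
ax.set_yticklabels(data.columns)
plt.show()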

Step 5: Check the Quality of the dataset

For the Linear Regression model to perform well, you need to check that the data quality is good. If the input data is of poor quality (missing data, outliers, wrong values, duplicates, etc.) then the model will not be very reliable.

Here we will only check for missing values.

data.isnull().sum()

Which gives.

Transaction                     0
House age                       0
Distance to MRT station         0
Number of convenience stores    0
Latitude                        0
Longitude                       0
House unit price                0
dtype: int64

This tells us that there are no missing values.

If you want to learn more about Data Quality, then check out the free course on Data Science. In that course you will learn more about Data Quality and how it impacts the accuracy of your model.

Step 6: Create a Multiple Linear Regression Model

First we need to divide the data into input variables X (explanatory variables) and output values y (response values).

Then we split it into a training and a test dataset. We create the model, fit it, use it to predict on the test dataset, and get a score.

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

X = data.iloc[:,:-1]
y = data.iloc[:,-1]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, test_size=.15)

lin = LinearRegression()
lin.fit(X_train, y_train)

y_pred = lin.predict(X_test)

print(r2_score(y_test, y_pred))

For this run it gave 0.68.

Is that good or bad? Well, good question. A perfect match would be 1, but that should not be expected. The worst score you can get is minus infinity – so we are far from that.

In order to get an idea about it, we need to compare it with variations.
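One simple variation to compare against is a baseline that ignores the features entirely. Here is a minimal sketch using scikit-learn’s DummyRegressor (reusing X_train, y_train, X_test, and y_test from above):

from sklearn.dummy import DummyRegressor

# A baseline that always predicts the mean of the training targets
dummy = DummyRegressor(strategy='mean')
dummy.fit(X_train, y_train)
print(r2_score(y_test, dummy.predict(X_test)))  # should be close to 0

If the Linear Regression model scores well above this baseline, the features carry real signal.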

In the free Data Science course we explore how to select features and evaluate models. It is a great idea to look into that.

Want to learn more?

This is part of a FREE 10h Machine Learning course with Python.

  • 15 video lessons – which explain Machine Learning concepts, demonstrate models on real data, introduce projects and show a solution (YouTube playlist).
  • 30 Jupyter Notebooks – with the full code and explanation from the lectures and projects (GitHub).
  • 15 projects – with step guides to help you structure your solutions, and the solutions explained at the end of the video lessons (GitHub).

Linear Classifier From Scratch Explained on Real Project

What will we cover?

The goal is to learn about Supervised Learning and explore how to use it for classification.

This includes learning

  • What is Supervised Learning
  • Understand the classification problem
  • What is the Perceptron classifier
  • How to use the Perceptron classifier as a linear classifier

Step 1: What is Supervised Learning?

Supervised learning (SL) is the machine learning task of learning a function that maps an input to an output based on example input-output pairs

wikipedia.org

Said differently, if you have some items you need to classify – it could be books you want to put in categories, say fiction, non-fiction, etc.

Then, given a pile of books with the right categories assigned to them, how can you make a function (the machine learning model) that can guess the right category for other books without labels?

Supervised learning simply means, that in the learning phase, the algorithm (the one creating the model) is given examples with correct labels.

Notice that supervised learning is not restricted to classification problems – it can predict anything.

If you are new to Machine Learning, I advise you start with this tutorial.

Step 2: What is the classification problem?

The classification problem is a supervised learning task of getting a function mapping an input point to a discrete category.

There is binary classification and multiclass classification, where binary maps into two classes, and multiclass maps into 3 or more classes.

I find it easiest to understand with examples.

Assume we want to predict if it will rain or not rain tomorrow. This is a binary classification problem, because we map into two classes: rain or no rain.

To train the model we need already labelled historic data.

Hence, the task is: given rows of historic data with correct labels, train a machine learning model (a Linear Classifier in this case) with this data. Then see how well it can predict future data (without the right class label).

Step 3: Linear Classification explained mathematically and visually

Some like the math behind an algorithm. If you are not one of them, focus on the visual part – it will give you the understanding you need.

The task of Supervised Learning can be explained mathematically with the example data above: find a function f(humidity, pressure) to predict rain or no rain.

Examples

  • f(93, 1000.7) = rain
  • f(49, 1015.5) = no rain
  • f(79, 1031.1) = no rain

The goal of Supervised Learning is to approximate the function f – the approximation function is often denoted h.

Why not identify f precisely? Well, because that is not ideal – it would be an overfitted function that predicts the historic data 100% accurately but fails to predict future values well.

As we work with Linear Classifiers, we want the function to be linear.

That is, we want the approximation function h to be of the form.

  • x_1: Humidity
  • x_2: Pressure
  • h(x_1, x_2) = w_0 + w_1*x_1 + w_2*x_2

Hence, the goal is to optimize values w_0, w_1, w_2, to find the best classifier.

What does all this math mean?

Well, that it is a linear classifier that makes decisions based on the value of a linear combination of the characteristics.

The above diagram shows how it would use a line to classify whether to predict rain or not. On the left side is the classified historic data, and the line is the optimized line found by the machine learning algorithm.

On the right side, we have new input data (without a label); with this line, it would be classified as rain (assuming blue means rain).

Step 4: What is the Perceptron Classifier?

The Perceptron Classifier is a linear algorithm that can be applied to binary classification.

It learns iteratively by adding new knowledge to an already existing line.

The learning rate is given by alpha, and the learning rule is as follows (don’t worry if you don’t understand it – it is not important).

  • Given a data point x with label y, update each weight as follows.
    • w_i = w_i + alpha*(y – h_w(x)) * x_i

The rule can also be stated as follows.

  • w_i = w_i + alpha*(actual value – estimated value) * x_i

Said in words, it adjusts the weights according to the actual values. Every time a new value comes, it adjusts the weights to fit better.

Once the line has been adjusted to all the training data, it is ready to predict.
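To make the rule concrete, here is a minimal NumPy sketch of a single update step. It is illustrative only – the names are made up, and the scikit-learn Perceptron used below handles all of this internally.

import numpy as np

alpha = 0.01     # learning rate
w = np.zeros(3)  # weights w_0 (bias), w_1, w_2

def h(w, x):
    # Threshold on the linear combination w_0 + w_1*x_1 + w_2*x_2
    return 1 if w[0] + w[1]*x[0] + w[2]*x[1] >= 0 else 0

def update(w, x, y):
    # w_i = w_i + alpha*(y - h_w(x)) * x_i, with x_0 = 1 for the bias weight
    return w + alpha * (y - h(w, x)) * np.array([1.0, x[0], x[1]])

w = update(w, (93, 1000.7), 1)  # one labelled example: rain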

Let’s try this on real data.

Step 5: Get the Weather data we will use to train a Perceptron model with

You can get all the code in a Jupyter Notebook with the csv file here.

This can be downloaded from GitHub in a zip file by clicking here.

First let’s just import all the libraries used.

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.linear_model import Perceptron
import matplotlib.pyplot as plt

Notice that the Notebook has an added line %matplotlib inline, which you should include if you run the code in a Notebook. The code here is aligned with PyCharm or a similar IDE.

Then let’s read the data.

data = pd.read_csv('files/weather.csv', parse_dates=True, index_col=0)
print(data.head())

If you want to read the data directly from GitHub and not download the weather.csv file, you can do that as follows.

data = pd.read_csv('https://raw.githubusercontent.com/LearnPythonWithRune/MachineLearningWithPython/main/files/weather.csv', parse_dates=True, index_col=0)
print(data.head())

This will result in an output similar to this.

            MinTemp  MaxTemp  Rainfall  ...  RainToday  RISK_MM RainTomorrow
Date                                    ...                                 
2008-02-01     19.5     22.4      15.6  ...        Yes      6.0          Yes
2008-02-02     19.5     25.6       6.0  ...        Yes      6.6          Yes
2008-02-03     21.6     24.5       6.6  ...        Yes     18.8          Yes
2008-02-04     20.2     22.8      18.8  ...        Yes     77.4          Yes
2008-02-05     19.7     25.7      77.4  ...        Yes      1.6          Yes

Step 6: Select features and Clean the Weather data

We want to investigate the data and figure out how much missing data there is.

A great way to do that is to use isnull().

print(data.isnull().sum())

This results in the following output.

MinTemp             3
MaxTemp             2
Rainfall            6
Evaporation        51
Sunshine           16
WindGustDir      1036
WindGustSpeed    1036
WindDir9am         56
WindDir3pm         33
WindSpeed9am       26
WindSpeed3pm       25
Humidity9am        14
Humidity3pm        13
Pressure9am        20
Pressure3pm        19
Cloud9am          566
Cloud3pm          561
Temp9am             4
Temp3pm             4
RainToday           6
RISK_MM             0
RainTomorrow        0
dtype: int64

This shows how many rows in each column have null values (missing values). We want to work with only two features (columns) to keep our classification simple. Obviously, we need to keep RainTomorrow, as it carries the class label.

We select the features we want and drop the rows with null-values as follows.

dataset = data[['Humidity3pm', 'Pressure3pm', 'RainTomorrow']].dropna()

Step 7: Split into training and test data

The next step is to split the dataset into features and labels.

We also want to map the labels from No and Yes to numeric values.

X = dataset[['Humidity3pm', 'Pressure3pm']]
y = dataset['RainTomorrow']
y = np.array([0 if value == 'No' else 1 for value in y])
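The list comprehension works fine here; an equivalent vectorized alternative (purely a stylistic choice) would be:

y = (dataset['RainTomorrow'] == 'Yes').astype(int).to_numpy()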

Then we do the splitting as follows, where we set a random_state in order to be able to reproduce the run. This is often a great idea: if you use randomness and encounter a problem, you can then reproduce it.

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

This has divided the features into a train and test set (X_train, X_test), and the labels into a train and test (y_train, y_test) dataset.

Step 8: Train the Perceptron model and measure accuracy

Finally, we want to create the model, fit it (train it), predict on the test data, and print the accuracy score.

clf = Perceptron(random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(accuracy_score(y_test, y_pred))

This gives an accuracy of 0.773, or 77.3%.

Is that good?

Well, what if it rains 22.7% of the time and the model always predicts no rain?

Then it would be correct 77.3% of the time.

Let’s just check how often it actually does not rain.

print(sum(y == 0)/len(y))

It turns out it is not raining 74.1% of the time.

Is that a good model? Well, I find binary classifiers a bit tricky because of this problem. The best way to get an idea is to visualize the predictions.

Step 9: Visualize the model predictions

To visualize the data we can do the following.

fig, ax = plt.subplots()
X_data = X.to_numpy()
y_all = clf.predict(X_data)
ax.scatter(x=X_data[:,0], y=X_data[:,1], c=y_all, alpha=.25)
plt.show()

This results in the following output.

Finally, let’s visualize the actual data to compare. Notice that we create a new figure; otherwise the points would be drawn on top of the previous chart.

fig, ax = plt.subplots()
ax.scatter(x=X_data[:,0], y=X_data[:,1], c=y, alpha=.25)
plt.show()

Resulting in.

Here is the full code.

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.linear_model import Perceptron
import matplotlib.pyplot as plt
data = pd.read_csv('https://raw.githubusercontent.com/LearnPythonWithRune/MachineLearningWithPython/main/files/weather.csv', parse_dates=True, index_col=0)
print(data.head())
print(data.isnull().sum())
dataset = data[['Humidity3pm', 'Pressure3pm', 'RainTomorrow']].dropna()
X = dataset[['Humidity3pm', 'Pressure3pm']]
y = dataset['RainTomorrow']
y = np.array([0 if value == 'No' else 1 for value in y])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = Perceptron(random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(accuracy_score(y_test, y_pred))
print(sum(y == 0)/len(y))
fig, ax = plt.subplots()
X_data = X.to_numpy()
y_all = clf.predict(X_data)
ax.scatter(x=X_data[:,0], y=X_data[:,1], c=y_all, alpha=.25)
plt.show()
fig, ax = plt.subplots()
ax.scatter(x=X_data[:,0], y=X_data[:,1], c=y, alpha=.25)
plt.show()

Want to learn more?

This is part of a FREE 10h Machine Learning course with Python.

  • 15 video lessons – which explain Machine Learning concepts, demonstrate models on real data, introduce projects and show a solution (YouTube playlist).
  • 30 Jupyter Notebooks – with the full code and explanation from the lectures and projects (GitHub).
  • 15 projects – with step guides to help you structure your solutions, and the solutions explained at the end of the video lessons (GitHub).

Learn NumPy Basics with your first Machine Learning Project

What will we cover?

In this tutorial you will learn some basic NumPy. The best way to learn something new is to combine it with something useful. Therefore you will use NumPy while creating your first Machine Learning project.

Step 1: What is NumPy?

NumPy is the fundamental package for scientific computing in Python.

NumPy.org

Well, that is how it is stated on the official NumPy page.

Maybe a better question is, what do you use NumPy for and why?

Well, the main tool you use from NumPy is the NumPy array. Arrays are quite similar to Python lists, just with a few restrictions.

  1. It can only contain one data type. That is, if a NumPy array has integers, then all entries can only be integers.
  2. The size cannot change. That is, you cannot add or remove entries like in a Python list.
  3. If it is a multi-dimensional array, all sub-arrays must be of the same shape. That is, you cannot have something similar to a Python list of lists, where the first sub-list has length 3, the second length 7, and so on. They must all have the same length (or shape).

Why would anyone use them, you might ask? They are more restrictive than Python lists.

Actually, and funnily enough, making data structures more restrictive, like NumPy arrays, can make them more efficient (faster).

Why?

Well, think about it. You know more about the data structure, and hence you do not need to make many additional checks.
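A small sketch of the one-data-type restriction in practice – assigning a float into an integer array silently truncates it:

import numpy as np

a = np.array([1, 2, 3, 4])  # an integer array
a[0] = 9.7                  # the float is truncated to fit the integer dtype
print(a)                    # [9 2 3 4]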

Step 2: A little NumPy array basics we will use for our Machine Learning project

A NumPy array can be created from a list.

import numpy as np

a1 = np.array([1, 2, 3, 4])
print(a1)

Which will print.

[1 2 3 4]

The data type of a NumPy array can be given as follows.

print(a1.dtype)

It will print int64. That is, the full array has only one type, int64, which is 64-bit integers. That is also different from Python integers, where you cannot specify the size. Here you can have int8, int16, int32, int64, and more. Again, restrictions that make it more efficient.

print(a1.shape)

The above gives the shape, here (4,). Notice that the size cannot be changed – you cannot add or remove entries, as stated above.

Let’s create another NumPy array and try a few things.

a1 = np.array([1, 2, 3, 4])
a2 = np.array([5, 6, 7, 8])

print(a1*2)
print(a1*a2)
print(a1 + a2)

Which results in.

[2 4 6 8]
[ 5 12 21 32]
[ 6  8 10 12]

With a little inspection you will realize that the first (a1*2) multiplies each entry by 2. The second (a1*a2) multiplies the entries pairwise. The third (a1 + a2) adds the entries pairwise.
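Compare that with plain Python lists, where * means repetition and + means concatenation:

print([1, 2, 3, 4] * 2)             # [1, 2, 3, 4, 1, 2, 3, 4]
print([1, 2, 3, 4] + [5, 6, 7, 8])  # [1, 2, 3, 4, 5, 6, 7, 8]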

Step 3: What is Machine Learning?

  • In the classical computing model everything is programmed into the algorithms. This has the limitation that all decision logic needs to be understood before usage. And if things change, we need to modify the program.
  • With the modern computing model (Machine Learning) this paradigm changes. We feed the algorithms with data, and based on that data, the program makes the decisions.

How Machine Learning Works

  • On a high level you can divide Machine Learning into two phases.
    • Phase 1: Learning
    • Phase 2: Prediction
  • The learning phase (Phase 1) can be divided into substeps.
  • It all starts with a training set (training data). This data set should represent the type of data that the Machine Learning model will be used to predict from in Phase 2 (prediction).
  • The pre-processing step is about cleaning up the data. While Machine Learning is awesome, it cannot figure out what good data looks like. You need to do the cleaning, as well as transform the data into a desired format.
  • Then for the magic, the learning step. There are three main paradigms in machine learning.
    • Supervised: where you tell the algorithm what categories each data item is in. Each data item from the training set is tagged with the right answer.
    • Unsupervised: the learning algorithm is not told what the categories are and must find the structure itself.
    • Reinforcement: teaches the machine to think for itself based on past action rewards.
  • Finally, testing is done to see if the model is good. The training data is divided into a test set and a training set. The test set is used to see if the model can predict from it. If not, a new model might be necessary.

Then the prediction begins.

Step 4: A Linear Regression Model

Let’s try to use a Machine Learning model. One of the first models you will meet is the Linear Regression model.

Simply said, this model tries to fit data to a straight line. The best way to understand that is to see it visually with one explanatory variable. That is, given a value (explanatory variable), can you predict the scalar response (the value you want to predict)?

Say, given the temperature (explanatory variable), can you predict the sale of ice cream? Assuming there is a linear relationship, can you determine that? A guess is: the hotter it is, the more ice cream is sold. But whether a linear model is a good predictor is beyond the scope here.

Let’s try with some simple data.

But first we need to import a few libraries.

import numpy as np
from sklearn.linear_model import LinearRegression

Then we generate some simple data.

x = [i for i in range(10)]
y = [i for i in range(10)]

In this case, the data will be fully correlated, but it is only meant to demonstrate the steps. This part is equivalent to the Get data step.

Here x is the explanatory variable and y the scalar response we want to predict.

When you train the model, you give it pairs of explanatory variables and scalar responses. This is needed, as the model needs to learn.

After the learning you can predict data. But first let’s prepare the data for the learning. This is the Pre-processing step.

X = np.array(x).reshape((-1, 1))
Y = np.array(y).reshape((-1, 1))

Notice, this is a very simple step; we only need to convert the data into the correct format.

Then we can train the model (train model).

lin_regressor = LinearRegression()
lin_regressor.fit(X, Y)

Here we will skip the test model step, as the data is simple.

To predict data we can call the model.

Y_pred = lin_regressor.predict(X)

Here is the full code.

import numpy as np
from sklearn.linear_model import LinearRegression

x = [i for i in range(10)]
y = [i for i in range(10)]

X = np.array(x).reshape((-1, 1))
Y = np.array(y).reshape((-1, 1))

lin_regressor = LinearRegression()
lin_regressor.fit(X, Y)

Y_pred = lin_regressor.predict(X)
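To see the model in action, you can predict the response for an unseen explanatory value. Since the toy data lies exactly on the line y = x, the prediction should be close to the input:

print(lin_regressor.predict(np.array([[11]])))  # approximately [[11.]]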

Step 5: Visualize the result

You can visualize the data and the prediction as follows (see more about matplotlib here).

import matplotlib.pyplot as plt

alpha = str(round(lin_regressor.intercept_[0], 5))
beta = str(round(lin_regressor.coef_[0][0], 5))

fig, ax = plt.subplots()

ax.set_title(f"Alpha {alpha}, Beta {beta}")
ax.scatter(X, Y)
ax.plot(X, Y_pred, c='r')
plt.show()

Alpha is called the constant or intercept and measures the value where the regression line crosses the y-axis.

Beta is called the coefficient or slope and measures the steepness of the regression line.

Next step

If you want a real project with Linear Regression, then check out the video at the top of the post, which is part of a full course.

The project will look at car specs to see if there is a connection.

Want to learn more Python? Then check out this 8-hour FREE video course with full explanations, projects at each level, and guided solutions.

The course is structured with the following resources to improve your learning experience.

  • 17 video lessons teaching you everything you need to know to get started with Python.
  • 34 Jupyter Notebooks with lesson code and projects.
  • A FREE 70+ page eBook with all the learnings from the lessons.

See the full FREE course page here.

If you instead want to learn more about Machine Learning, do not worry.

Check out my Machine Learning with Python course.

  • 15 video lessons teaching you all aspects of Machine Learning
  • 30 Jupyter Notebooks with lesson code and projects
  • 10 hours FREE video content to support your learning journey.

Go to the course page for details.

How to Learn Python for Data Science

What will we cover?

  • Is Python the correct language to learn for a Data Scientist?
  • How much Python do you need to learn as a Data Scientist?
  • How to learn Python fast?
  • How long does it take to become good at Python?
  • How to get started with Python?

Is Python the correct language to learn for a Data Scientist?

That is a good question to ask yourself. You want to become a Data Scientist; maybe you have some experience but feel weak in the programming aspect, or maybe you are starting from scratch.

If I were to start my journey as a Data Scientist, one of the questions I would ask myself is: do I have the tools for it?

R often ranks high among programming languages and environments for Data Science. The language R is designed for effective data handling and operations on arrays and matrices, and it has data analysis tools, graphical facilities, and a well-established environment.

That sounds like all we need, so why bother looking further?

At the top there is a battle between two candidates: Python vs R.

Actually, Python is a general purpose language that has a wide range of uses, not only Data Science. Also web services, game development, and big data backend systems processing high volumes of data, just to mention a few.

With this description, it looks like R is tailored for Data Science, while Python is used for everything. The choice seems easy – do you want a tool made for the purpose, or something for general purpose?

Funnily enough, as it might seem at first, Python has become more popular than R. Why is that?

A few reasons why Python is more popular than R.

  • Python is easy to use and learn.
  • Python has powerful, fast libraries.
  • Python has a huge community, and it is easy to get help.
  • Python has easy data handling tools for reading and generating spreadsheets, parquet files, CSV files, web scraping, SQL databases, and much more.
  • Python has great Machine Learning libraries developed by giants like Google (TensorFlow) and Facebook (PyTorch).
  • Python supports graphical data representation with libraries like Matplotlib.
  • Python has scikit-learn for predictive data analysis.
  • Python has easy-to-use data representations with NumPy and pandas.

…and the list could go on.

Python is also a great fit when you want to build a tailor-made system that integrates with any other platform or service, for example automatically getting data from various sources.

Do I need a Computer Science degree to use Python?

Python is programming, and programmers have computer science degrees. Do you need one to become a good Data Scientist?

The short answer is: No.

A Computer Science degree will enable you to build anything. Let’s try to think of it differently.

Think of transportation – cars, buses, bikes, trains – which can move you from A to B. People without a driving license can use buses and trains. All they need to know is how to buy a ticket and read a schedule to get from A to B. If you get a driver’s license, then you can drive your own car. Finally, if you are a car mechanic, you can repair and possibly build your own car.

Similarly, a computer science degree will enable you to build the cars, buses, trains, and more that other people use. A Data Scientist is like a person with a driver’s license – you don’t need to be able to repair a car to drive it. That is, you only need to understand and navigate the dashboard of the car.

Data Science is the same: you need to understand the things you use, but you do not need to be able to build them yourself.

But wait! You might object: it is still programming when I use these things.

Yes, but the level of programming is simple, and you use the complicated things like you use a car without being a car mechanic.

Feel more comfortable?

How to Learn Python Fast?

Now you are ready and know what you want – how to get there fastest without wasting time.

Maybe one question before that.

Can everybody learn Python? Do you need special skills?

I have so far never met anyone who could not learn Python to the level needed for Data Science – and honestly, also to the level of a Computer Scientist. It is just a question of dedication and interest to get through the last steps.

But becoming a Data Scientist using Python is not a problem.

The question is more how to learn it fast. The best way to answer that is to look at some of the most common pitfalls that make people learn more slowly, and make some give up along the way.

Pitfall 1: I understand the solution when I see it, but why couldn’t I figure it out – am I stupid?

Did you ever learn a new language – a spoken one, like English? If you are not a native English speaker, then you once started learning English. Remember that?

First you understood a few words. Then you started to understand full sentences when people were speaking English, but you could barely express yourself in English. It took time to get there.

Programming is the same – at first you can read and understand the solutions to your problem, but it takes time before you can express yourself in a programming language.

The feeling of trying to solve a programming problem for a long time without succeeding can be devastating. Then you see the solution, and it looks simple – and you start to feel stupid.

But stop there – this is normal. You first learn to understand code before you can express yourself in code. Just like learning a new spoken language.

We have all been there – and we still get there, just with different, more complex problems. It will never end; you will just become comfortable with it, and the challenges you face will become more and more complex.

Pitfall 2: Get distracted when it gets tough

When something gets difficult, the easy exit is to quit and start something new and easier.

Maybe you think: this is too difficult for me – I am not smart enough. This other thing is more fun, so I will start on that now.

The truth is that every talented programmer on planet earth has been stuck on a problem for days, multiple times – not being able to solve it. Whether it was a bug or just a difficult problem does not matter; they have all struggled with a problem for a long time.

This can be quite difficult to deal with as a beginner. You sit with a problem which does not seem hard, and you feel like everyone else could solve it – the logical conclusion is that you are not smart enough, right?

Then you might change to another programming project – and think that is fine, you will still be learning programming.

But the truth is that solving hard problems or finding bugs is not easy. It takes time, and you will learn a lot from it. Escaping to another project will not teach you as much as the difficult ones do.

The best programmers are the ones that never give up when it gets tough. This is what highly paid consultants are paid for: solving problems where others give up.

Pitfall 3: Different sources of learning

This is often difficult to understand in the beginning, but there are many styles in programming.

When you know people and have worked professionally with them in a development environment for a long time, you can actually see who coded what. Their style shines through.

Why does that matter?

In the beginning it does. Because what most also fail to understand in the beginning is that you can solve problems in endless ways. There is often no perfect solution to a problem, only different solutions with different tradeoffs.

As a beginner, you want to learn programming, and you will not see the differences in styles. But if you start learning from one person, then another one, then yet another one, it becomes difficult.

This has never been more relevant in the age where so many people share online learning.

Again, it is like learning English with a specific dialect and a particular vocabulary. It is difficult in the beginning to distinguish between them, and difficult to see that it matters. But in the long run you will speak the English optimized for your environment.

Keep focused on learning from one source. Do not change from one place to another all the time. Master the basics from one place until you are comfortable with them.

Pitfall 4: Comparing yourself to others

We often compare our learning journeys to others’. You need to know if you are doing well or badly, and whether you need to adjust your approach.

This sounds good, right?

You need to keep in touch with reality and not waste time.

This is a major pitfall. You will see solutions to your problems that are more elegant. There will be people who ‘just started’ and are already typing in code you would never dream of.

This is devastating. Do you not have what it takes?

As hard as it is to accept that you are not the fastest learner and need to work harder than others to reach the same level, it is just as hard to realize that the people you compare yourself with are often the top-of-the-top.

We all have our own journey. Mine is different from yours. I was good at one thing in the beginning, but you are awesome at something I never understood.

Accept that we all have our own journey – there will be times when you feel like the only one not understanding something simple (at least I felt that many times), but also times when you actually understand something extremely complex.

We often miss these aspects because we always compare ourselves to the brightest person around us at any given moment. That might be a different person from time to time.

Further, in the age of the internet, the environment you compare yourself to is huge.

As you see, this comparison is not fair and will do you no good.

Accept that your journey is yours alone. Comparisons with others do not help you.

How long does it take to become a good Python programmer?

I wish there was a simple answer to that. Unfortunately, it is not that easy.

First of all, what are your initial expectations, and how will they evolve over time? Often people are fine with just some simple skills, but as they learn more they want to master more – and it never stops.

It is natural. The problem is, your expectations of feeling successful move along the way.

Secondly, there is the dedication. You need to spend time solving problems.

Experience shows that either you need to burn for learning programming, or you need it to solve your daily challenges.

It sounds like you need to stay motivated. And yes, you do. But the good news is, programming is very rewarding and fulfilling. You are building something, you are creating something – you are the creator of something amazing. That feeling is awesome.

Does that mean it is just fun all the way from beginning to end? Not at all. Did you read the pitfalls above? Well, if you didn’t, go read them.

What I am saying is, it is a journey that will never end. The journey will sometimes feel bumpy, but the results are rewarding.

The more time you spend, the faster and better results you will get.

But how to keep motivation?

  • Remind yourself daily that there are pitfalls, and that all the best in the world have been there.
  • Keep it playful – the majority of the time it is joyful to program.
  • Accept it as a learning journey that will never end.

How to get started with Python for Data Science?

On this page there are a lot of resources available to get started with both Python and Data Science.

To help you further there are structured free courses you can follow with everything prepared.

Start Python for FREE

There is a full 8-hour video course for Python.

  • 17 video lessons teaching you everything you need to know to get started with Python.
  • 34 Jupyter Notebooks with lesson code and projects.
  • A FREE eBook with all the learnings from the lessons.
Get started today with Python for FREE

Start Machine Learning for FREE

Another great free resource is the 10 hours free Machine Learning course.

  • 15 video lessons – which explain Machine Learning concepts, demonstrate models on real data, introduce projects and show a solution (YouTube playlist).
  • 30 Jupyter Notebooks – with the full code and explanation from the lectures and projects (GitHub).
  • 15 projects – with step guides to help you structure your solutions, and the solutions explained at the end of the video lessons (GitHub).
Get started with Machine Learning with Python for FREE

Visualize Why Long-term Investing is Less Risky – Pandas and Matplotlib

What will we cover in this tutorial?

We will look at how you can use Pandas Datareader and Matplotlib to create a visualization of why long-term investing is less risky.

Here risk simply means the risk of losing money.

Specifically, we will investigate how likely it is to lose money (and how much) if you invest with a 1-year perspective vs a 10-year perspective.

Step 1: Establish the data for the investigation

One of the most widely used indices is the S&P 500. This index lists 500 large companies on US stock exchanges and is one of the most commonly followed equity indices.

We will use this index and retrieve data from 1970 up until today.

This can be done as follows.

import pandas_datareader as pdr
from datetime import datetime

data = pdr.get_data_yahoo('^GSPC', datetime(1970, 1, 1))

Then the DataFrame data will contain all data from 1970 up until today. The ^GSPC is the ticker for the S&P 500 index.

Step 2: Calculate the annual return from 1970 and forward using Pandas

The annual return for a year is calculated by taking the last trading value of the year divided by the first, subtracting 1, and then multiplying by 100 to get it in percentage.

Calculating it for all years, you can then visualize it with a histogram as follows.

import pandas as pd
import pandas_datareader as pdr
from datetime import datetime
import matplotlib.pyplot as plt


data = pdr.get_data_yahoo('^GSPC', datetime(1970, 1, 1))

years = []
annual_return = []

for year in range(1970, 2021):
    years.append(year)
    data_year = data.loc[f'{year}']['Adj Close']
    annual_return.append((data_year.iloc[-1] / data_year.iloc[0] - 1) * 100)

df = pd.DataFrame(annual_return, index=years)
bins = [i for i in range(-40, 45, 5)]
df.plot.hist(bins=bins, title='1 year')
plt.show()

Notice that we create a new DataFrame with all the annual returns for each of the years and use it to make a histogram.

The result is as follows.

What you see is a histogram indicating how many years a given annual return was occurring.

Hence, a -40 to -35% (negative) return occurred once, while a 0-5% return happened 6 times in the span of years from 1970 to 2020 (inclusive).

What does this tell us?

Well, you can lose up to 40%, but you can also gain up to 35% in one year. It also shows that a gain (positive return) is more likely than a loss.

But what if we invested the money for 10 years?

Step 3: Calculate the average annual return in 10 years spans starting from 1970 using Pandas

This is actually quite similar, but with a few changes.

First of all, the average return is calculated using the CAGR (Compound Annual Growth Rate) formula.
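For reference, with V_end the last value, V_start the first value, and n the number of years, the formula is:

  • CAGR = (V_end / V_start)**(1/n) - 1

This is exactly what the code below computes with n = 10.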

This results in the following code.

import pandas as pd
import pandas_datareader as pdr
from datetime import datetime
import matplotlib.pyplot as plt


data = pdr.get_data_yahoo('^GSPC', datetime(1970, 1, 1))

years = []
avg_annual_return = []
for year in range(1970, 2011):
    years.append(year)
    data_year = data.loc[f'{year}':f'{year + 9}']['Adj Close']
    avg_annual_return.append(((data_year.iloc[-1] / data_year.iloc[0]) ** (1 / 10) - 1) * 100)

df = pd.DataFrame(avg_annual_return, index=years)
bins = [i for i in range(-40, 45, 5)]
df.plot.hist(bins=bins, title='10 years')
plt.show()

There are a few changes. One is the formula for the average annual return (as stated above), and the other is that we use 10 years of data. Notice that we only add 9 to the year; this is because both end years are inclusive.

This results in this histogram.

As you see, only in 3 cases was there a negative return over a 10-year span. Also, the loss was only in the range -5 to 0%. Otherwise, the return was positive.

Now is that nice?

Matplotlib Visualization for DataFrame Time Series Data

What will we cover in this tutorial?

We will learn how to visualize time series data in a DataFrame with Matplotlib.

This tutorial will show you.

  • How to use Matplotlib with DataFrames.
  • Use Matplotlib with subplots (the object-oriented way).
  • How to make multiple plots in one figure.
  • How to create bar-plots

Want to access the code directly in Jupyter Notebook?

You can get the Jupyter Notebooks from the GitHub here, where there are also direct links to Colab for an interactive experience.

Step 1: Read time series data into a DataFrame

A DataFrame is a two-dimensional tabular data structure. It is the primary data structure of Pandas and contains labeled axes (rows and columns).

To get access to a DataFrame data structure, you need to import the Pandas library.

import pandas as pd

Then we need some time series data. You can download your own CSV file from financial pages like Yahoo! Finance.

For this tutorial we will use a dataset available from GitHub.

remote_file = "https://raw.githubusercontent.com/LearnPythonWithRune/FinancialDataAnalysisWithPython/main/AAPL.csv"
data = pd.read_csv(remote_file, index_col=0, parse_dates=True)

The pd.read_csv(…) does all the magic. We set index_col=0, which makes the first column of the CSV data file the index – this contains the dates.

Then we set parse_dates=True to ensure that dates are actually parsed as dates and not as strings. This is necessary to take advantage of the time series features, like indexing with time intervals.
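To verify the parsing worked, a quick check is to inspect the index and try a date-based slice:

print(data.index)                  # should show a DatetimeIndex
print(data.loc['2020-01'].head())  # slicing by month works on a DatetimeIndex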

Step 2: Import Matplotlib in Jupyter Notebook

When you import Matplotlib in Jupyter Notebook, you need to set a rendering mode.

import matplotlib.pyplot as plt
%matplotlib notebook

We will use the notebook mode, which is interactive. This enables you to zoom in on an interval, move around, and save the figure.

It is common to use inline mode for rendering in Jupyter Notebook. The inline mode creates a static image, which is not interactive.

Step 3: Use Matplotlib the Object-Oriented way

Matplotlib can be used in a functional way and in an object-oriented way. Most use it in the functional way, which often creates more confusion, as it is not always intuitive how it works.

The object-oriented way leads to less confusion, at the cost of one extra line of code and passing one argument. Hence, the price is low for the gain.

fig, ax = plt.subplots()
data['Close'].plot(ax=ax)
ax.set_ylabel("Price")
ax.set_title("AAPL")

The first line returns a figure and an axis (fig and ax). The figure is where we put the axis, and the axis is the chart.

The actual plot is made by calling plot on the DataFrame – here we access the column Close, which is the Series of historic Close prices.

Confused? Don’t worry about the details.

Notice that we pass ax=ax to the plot. This ensures that we render the chart on the returned axis ax.

Finally, we add a y-label and a title to our axis.

Step 4: Creating multiple charts in one Matplotlib figure

How can we create multiple charts (or axes) in one Matplotlib figure?

Luckily, this is quite easy.

fig, ax = plt.subplots(2, 2)
data['Open'].plot(ax=ax[0, 0], title="Open")
data['High'].plot(ax=ax[0, 1], title="High")
data['Low'].plot(ax=ax[1, 0], title="Low")
data['Close'].plot(ax=ax[1, 1], title="Close")
plt.tight_layout()

Here we see a few differences. First, notice plt.subplots(2, 2), which returns a figure fig and a 2-by-2 array of axes. Hence, ax is a two-dimensional array of axes.

We can access the first axis with ax[0, 0] and pass it as an argument to plot.

This continues for all the 4 plots we make, as you see.

Finally, we use plt.tight_layout(), which ensures that the layout of the axes does not overlap. You can try without it to see the difference.

Step 5: Create a bar-chart with Matplotlib

Finally, we will make a bar-chart with Matplotlib.

Actually, we will render a horizontal bar-chart.

fig, ax = plt.subplots()
data['Volume'].loc['2020-07-01':'2020-08-15'].plot.barh(ax=ax)

We do it for the volume and only for a limited interval of time. This shows you how to take advantage of the time series aspect of the DataFrame.
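Any date-based slice works the same way. For example, a small sketch plotting the closing prices for a single month:

fig, ax = plt.subplots()
data['Close'].loc['2020-07'].plot(ax=ax)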

Next step

The above is part of the FREE 2h Video course.

Excel Automation with Simple Moving Average from Python

What will we cover in this tutorial?

We will retrieve historic stock prices and calculate the moving averages. Then we will export the data to Excel and insert a chart – all done from Python.

See the in-depth explanation in the YouTube video. It also gives advice on how to interpret Simple Moving Averages (SMA).

Step 1: Read historic stock prices

We will use pandas-datareader to get the historic prices of NFLX (the ticker for Netflix).

import pandas_datareader as pdr
import datetime as dt

ticker = "NFLX"
start = dt.datetime(2019, 1, 1)

data = pdr.get_data_yahoo(ticker, start)
print(data.head())

And you will get the historic data for Netflix from January 1st, 2019.

	High	Low	Open	Close	Volume	Adj Close
Date						
2019-01-02	269.750000	256.579987	259.279999	267.660004	11679500	267.660004
2019-01-03	275.790009	264.429993	270.200012	271.200012	14969600	271.200012
2019-01-04	297.799988	278.540009	281.880005	297.570007	19330100	297.570007
2019-01-07	316.799988	301.649994	302.100006	315.339996	18620100	315.339996
2019-01-08	320.589996	308.010010	319.980011	320.269989	15359200	320.269989

Step 2: Understand Moving Average

We will calculate the Simple Moving Average as defined on Investopedia.

Simple Moving Average

The Simple Moving Average (from now on just referred to as Moving Average or MA) is defined by a period of days.

That is, an MA with a period of 10 (MA10) takes the average value of the last 10 close prices. This is done in a rolling way; hence, we get an MA10 for every trading day in our historic data, except the first 9 days in our dataset.

We can similarly calculate an MA50 and MA200, which are Moving Averages of the last 50 and 200 days, respectively.
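To see how rolling works, here is a tiny sketch on a toy Series – note that the first window-minus-one entries are NaN, matching that we get no MA10 for the first 9 days:

import pandas as pd

s = pd.Series([1, 2, 3, 4, 5])
print(s.rolling(3).mean())
# 0    NaN
# 1    NaN
# 2    2.0
# 3    3.0
# 4    4.0
# dtype: float64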

Step 3: Calculating the Moving Averages

We can do that by using rolling and mean.

And it is magic.

data['MA10'] = data['Close'].rolling(10).mean()
data['MA50'] = data['Close'].rolling(50).mean()
data['MA200'] = data['Close'].rolling(200).mean()

print(data.tail())

That was easy, right?

	High	Low	Open	Close	Volume	Adj Close	MA10	MA50	MA200
Date									
2021-01-12	501.089996	485.670013	500.000000	494.250000	5990400	494.250000	515.297998	502.918599	477.08175
2021-01-13	512.349976	493.010010	495.500000	507.790009	5032100	507.790009	512.989999	503.559600	477.76590
2021-01-14	514.500000	499.579987	507.350006	500.859985	4177400	500.859985	510.616995	503.894399	478.39270
2021-01-15	506.320007	495.100006	500.000000	497.980011	5890200	497.980011	506.341998	504.109600	479.06220
2021-01-19	509.250000	493.540009	501.000000	501.769989	11996900	501.769989	504.232999	504.205999	479.72065

Step 4: Visualize it with Matplotlib

We can see the data with Matplotlib.

import matplotlib.pyplot as plt

data[['Close', 'MA10', 'MA50']].loc['2020-01-01':].plot()
plt.show()

Resulting in the following plot.

The output

Where you can see how the MA10 and MA50 move according to the price.

Step 5: Export to Excel

Now we will export the data to Excel.

For this we need to import Pandas and use the XlsxWriter engine – you can find the details in the code below.

The code can be found here.

import pandas as pd

data = data.loc['2020-01-01':]  # keep only data from 2020 onwards
data = data.iloc[::-1]  # reverse the rows so the newest date comes first
writer = pd.ExcelWriter("technical.xlsx", 
                        engine='xlsxwriter', 
                        date_format = 'yyyy-mm-dd', 
                        datetime_format='yyyy-mm-dd')

sheet_name = 'Moving Average'
data[['Close', 'MA10', 'MA50']].to_excel(writer, sheet_name=sheet_name)


worksheet = writer.sheets[sheet_name]
workbook = writer.book

# Create a format for a green cell
green_cell = workbook.add_format({
    'bg_color': '#C6EFCE',
    'font_color': '#006100'
})

# Create a format for a red cell
red_cell = workbook.add_format({
    'bg_color': '#FFC7CE',                            
    'font_color': '#9C0006'
})


# Set column width of Date
worksheet.set_column(0, 0, 15)


for col in range(1, 4):
    # Create a conditional formatted of type formula
    worksheet.conditional_format(1, col, len(data), col, {
        'type': 'formula',                                    
        'criteria': '=C2>=D2',
        'format': green_cell
    })

    # Create a conditional formatted of type formula
    worksheet.conditional_format(1, col, len(data), col, {
        'type': 'formula',                                    
        'criteria': '=C2<D2',
        'format': red_cell
    })

# Create a new chart object.
chart1 = workbook.add_chart({'type': 'line'})

# Add a series to the chart.
chart1.add_series({
        'name': "MA10",
        'categories': [sheet_name, 1, 0, len(data), 0],
        'values': [sheet_name, 1, 2, len(data), 2],
})

# Create a new chart object.
chart2 = workbook.add_chart({'type': 'line'})

# Add a series to the chart.
chart2.add_series({
        'name': 'MA50',
        'categories': [sheet_name, 1, 0, len(data), 0],
        'values': [sheet_name, 1, 3, len(data), 3],
})

# Combine and insert title, axis names
chart1.combine(chart2)
chart1.set_title({'name': sheet_name + " " + ticker})
chart1.set_x_axis({'name': 'Date'})
chart1.set_y_axis({'name': 'Price'})

# Insert the chart into the worksheet.
worksheet.insert_chart('F2', chart1)

writer.close()

Where the output will be something similar to this.

Generated Excel sheet

How to Plot Time Series with Matplotlib

What will we cover in this tutorial?

In this tutorial we will show how to visualize time series with Matplotlib. We will do that using a Jupyter notebook, and you can download the resources (the notebook and data used) from here.

Step 1: What is a time series?

I am happy you asked.

The easiest way to understand it is to show it. If you downloaded the resources and started the Jupyter notebook, execute the following lines.

import pandas as pd

data = pd.read_csv("stock_data.csv", index_col=0, parse_dates=True)

data.head()

This will produce the following output.

	High	Low	Open	Close	Volume	Adj Close
Date						
2020-01-02	86.139999	84.342003	84.900002	86.052002	47660500.0	86.052002
2020-01-03	90.800003	87.384003	88.099998	88.601997	88892500.0	88.601997
2020-01-06	90.311996	88.000000	88.094002	90.307999	50665000.0	90.307999
2020-01-07	94.325996	90.671997	92.279999	93.811996	89410500.0	93.811996
2020-01-08	99.697998	93.646004	94.739998	98.428001	155721500.0	98.428001

You notice that the far-left column is called Date and that it is the index. This index has a time value – in this case, a date.

Time series data is data “stamped” by a time. In this case, it is time indexed by dates.

The data you see is historic stock prices.

Step 2: How to visualize data with Matplotlib

The above data is kept in a DataFrame (a Pandas data object), which makes it straightforward to visualize.

import matplotlib.pyplot as plt
%matplotlib notebook

data.plot()

Which will result in a chart similar to this one.

Result

This is not impressive. It seems like something is wrong.

Actually, there is not. It just does what you ask for: it plots all 6 columns together in one chart. Because the Volume is such a high number, all the other columns are squeezed into what looks like a single straight line.

Step 3: Matplotlib has a functional and object oriented interface

This is often a bit confusing at first.

Matplotlib has a functional and an object-oriented interface. We used the functional one.

If you try to execute the following in your Jupyter notebook.

data['My col'] = data['Volume']*0.5
data['My col'].plot()

It would seem like nothing happened.

But then investigate your previous plot.

Previous plot

It got updated with a new line. Hence, instead of creating a new chart (or figure), it just added the line to the existing one.

If you want to learn more about the functional and object-oriented ways of using Matplotlib, we recommend this tutorial.

Step 4: How to make a new figure

What to do?

Well, you need to use the object oriented interface of Matplotlib.

You can do that as follows.

fig1, ax1 = plt.subplots()
data['My col'].plot(ax=ax1)

Which will produce what you are looking for. A new figure.

The new figure

Step 5: Make multiple plots in one figure

This is getting fun.

How can you create multiple plots in one figure?

You actually do that on creation.

fig2, ax2 = plt.subplots(2, 2)

data['Open'].plot(ax=ax2[0, 0])
data['High'].plot(ax=ax2[0, 1])
data['Low'].plot(ax=ax2[1, 0])
data['Close'].plot(ax=ax2[1, 1])
plt.tight_layout()

Notice that subplots(2, 2) creates a 2-by-2 array of axes you can use to create plots.

This should result in this chart.

Result

Step 6: Make a bar chart

This can be done as follows.

fig3, ax3 = plt.subplots()

data.loc[:'2020-01-31', 'Volume'].plot.bar(ax=ax3)

Notice that we only take the first month of the Volume data here (data.loc[:'2020-01-31', 'Volume']).

This should result in this figure.

Step 7: Save the figures

This is straightforward.

fig1.savefig("figure-1.png")
fig2.savefig("figure-2.png")
fig3.savefig("figure-3.png")

And the above figures should be available in the same location you are running your Jupyter notebook.

Next step

If you want to learn more about the functional and object-oriented ways of using Matplotlib, we recommend this tutorial.

How To use Matplotlib Object Oriented with NumPy and Pandas

What will we cover in this tutorial?

If you like data visualization with NumPy and Pandas, then you must have encountered Matplotlib.

And if you also like to program in an object-oriented fashion, then most tutorials will leave you wondering if no one loves the art of beautiful code.

Let me elaborate. The integration and interaction with Matplotlib is usually done in a functional way with a lot of side effects. Not nice.

Not sure what I am talking about? We will cover that too.

Step 1: How plots are usually made with Matplotlib and NumPy, and what is wrong with it

Let’s make a simple example.

import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 5, 11)
y = x ** 2
plt.plot(x, y)
plt.xlabel("X Label")
plt.ylabel("Y Label")
plt.title("Title")
plt.show()

This will result in the following chart.

That is nice and easy! So what is wrong with it?

Side effects!

What is a side effect in programming?

…that is to say has an observable effect besides returning a value (the main effect) to the invoker of the operation.

https://en.wikipedia.org/wiki/Side_effect_(computer_science)

What does that mean?

Well, let’s examine the above example.

We call plt.plot(x, y) and what happens? Actually, we don’t know. We do not get anything in return.

We continue to call plt.xlabel(…), plt.ylabel(…), and plt.title(…). Then we call plt.show() to see the result. Hence, we change the state of the plt library we imported. See, we did not create an object – we call the library directly.

This is difficult for a programmer to understand without deep knowledge of the library used.

So how do we do it in a more understandable way?

Step 2: How to create a chart with Matplotlib with NumPy in an object oriented way and why it is better

Let’s look at this code and examine it.

import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 5, 11)
y = x ** 2

fig, ax = plt.subplots()
ax.plot(x, y)
ax.set_xlabel("X Label")
ax.set_ylabel("Y Label")
ax.set_title("Title")
fig.show()
plt.waitforbuttonpress()

Here we do it differently but get the same result. It is more understandable that when we call a method on the object ax, the state of ax changes – and not something in the library, hidden in some side effect.

You can also show the figure fig by calling show() on it instead of on the library. This requires that we add waitforbuttonpress() on plt; otherwise the window is destroyed immediately.

Note that you do not have these challenges in a Jupyter notebook – the plots are shown without the call to show().

You could keep the plt.show() instead of fig.show() and plt.waitforbuttonpress(). But the above code is more intuitive and easier to understand.

How to create a chart with Matplotlib of a Pandas DataFrame in an object oriented way

This is straightforward, as Matplotlib is well integrated with Pandas.

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

x = np.linspace(0, 5, 11)
y = x ** 2

df = pd.DataFrame(data=y, index=x)

fig, ax = plt.subplots()
ax.plot(df)
ax.set_xlabel("X Label")
ax.set_ylabel("Y Label")
ax.set_title("Title")
fig.show()
plt.waitforbuttonpress()

Notice that the DataFrame is created from the NumPy arrays. Hence, here we do not gain anything from using it. This is just to exemplify how easy it is to use Matplotlib in an object-oriented way with Pandas.

Final thoughts

I have found that programmers either hate or love Matplotlib. I do not always know why, but I have discovered that this non-object-oriented way of using Matplotlib annoys some programmers.

That is a fair reason to dislike it, but I would say there are no good alternatives to Matplotlib – or at least, they are built upon Matplotlib.

I like the power and ease of using Matplotlib. And I do like the option of using it in an object-oriented way, which makes the code more intuitive and easier for other programmers to understand.

How To Extract Numbers From Strings in HTML Table and Export to Excel from Python

What will we cover in this tutorial?

How to import an HTML table into Excel.

But that is easy? You can do that directly from Excel.

Yes, but what if the entries contain numbers and strings together? Then the import will convert them to strings and make it difficult to extract the numbers.

Luckily, we will cover how to do that easily with Python.

Step 1: Get the dataset

Find your favorite HTML table online. For the purpose of this tutorial I will use this one from Wikipedia with List of Metro Systems.

View of HTML table of interest

Say we wanted to sum how many stations are in this table (please notice that the table contains more rows than shown in the above picture).

If you import that directly into Excel with the import functionality, you will realize that the column of stations is interpreted as strings. The problem is that an entry will look like 19[13], while we are only interested in the number 19.

There is no built-in functionality to do that directly in Excel.

But let’s try to import this into Python. We will use Pandas to do that. If you are new to Pandas, please see this tutorial.

import pandas as pd


url = "https://en.wikipedia.org/wiki/List_of_metro_systems"
tables = pd.read_html(url)

print(tables[0].head())

Which will result in the following output.

           City    Country  ...          System length Annual ridership(millions)
0       Algiers    Algeria  ...  18.5 km (11.5 mi)[14]           45.3 (2019)[R 1]
1  Buenos Aires  Argentina  ...  56.7 km (35.2 mi)[16]          337.7 (2018)[R 2]
2       Yerevan    Armenia  ...   13.4 km (8.3 mi)[17]           20.2 (2019)[R 3]
3        Sydney  Australia  ...  36 km (22 mi)[19][20]  14.2 (2019) [R 4][R Nb 1]
4        Vienna    Austria  ...  83.3 km (51.8 mi)[21]          459.8 (2019)[R 6]

Where we have the same problem. If we inspect the type of the columns we get the following.

City                          object
Country                       object
Name                          object
Yearopened                    object
Year of lastexpansion         object
Stations                      object
System length                 object
Annual ridership(millions)    object
dtype: object

Where actually all columns are of type object, which here is equivalent to a string.

Step 2: Extract the numbers from Stations and System length column

Each table in tables is a DataFrame, which is Pandas’ main data structure.

As the strings we want to convert to integers contain more information than just the numbers, we cannot simply use to_numeric().

We want to convert something of the form 19[13] to 19.
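The string method split does the heavy lifting. A quick sketch of the idea on a single value:

s = '19[13]'
print(int(s.split('[')[0]))  # 19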

To do that easily, we will use the apply(…) method on the DataFrame.

The apply method takes a function as an argument and applies it to each row (when called with axis=1).

We will use a lambda function as argument. If you are not familiar with lambda functions, please read this tutorial.

import pandas as pd


url = "https://en.wikipedia.org/wiki/List_of_metro_systems"
tables = pd.read_html(url)
table = tables[0]

table['Stations'] = table.apply(lambda row: int(row['Stations'].split('[')[0]), axis=1)
table['System length'] = table.apply(lambda row: float(row['System length'].split()[0]), axis=1)

print(table[['Stations', 'System length']].head())

Which will result in the following output.

   Stations  System length
0        19           18.5
1        90           56.7
2        10           13.4
3        13           36.0
4        98           83.3

This is what we want.

Step 3: Export to Excel

Wow. This needs an entire step?

Well, of course it does.

Here we need to unleash the power of Pandas and use the to_excel(…) method.

import pandas as pd


url = "https://en.wikipedia.org/wiki/List_of_metro_systems"
tables = pd.read_html(url)
table = tables[0]

table['Stations'] = table.apply(lambda row: int(row['Stations'].split('[')[0]), axis=1)
table['System length'] = table.apply(lambda row: float(row['System length'].split()[0]), axis=1)

table.to_excel('output.xlsx')

This will result in an Excel file looking similar to this, where the Stations and System length columns are numeric and not strings.

Excel file now with Stations and System length as numbers and not strings

What’s next?

Want to learn more about Python and Excel?

Check out my online guide.