Insert a Live Graph into a Webcam Stream with OpenCV

What will we cover in this tutorial?

How to make a simple live graph that updates in real time and overlay it on a live webcam stream using OpenCV.

The result can be seen in the video below.

Step 1: A basic webcam flow with OpenCV

If you need to install OpenCV for the first time, we suggest you read this tutorial.

A normal webcam flow in Python looks like the following code.

import cv2
 
# Setup webcam camera
cap = cv2.VideoCapture(0)
# Set a smaller resolution
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
 
while True:
    # Capture frame-by-frame
    _, frame = cap.read()
    frame = cv2.flip(frame, 1)
 
    cv2.imshow("Webcam", frame)
 
    if cv2.waitKey(1) == ord('q'):
        break
 
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()

This will show a live stream from your webcam in a window. That is too easy not to enjoy.

Step 2: Create an object to represent the graph

There are many ways to create a graph. Here we will make an object that holds an image representation of the graph and provides a function to push a new value and update the graph image.

import numpy as np

class Graph:
    def __init__(self, width, height):
        self.height = height
        self.width = width
        self.graph = np.zeros((height, width, 3), np.uint8)
    def update_frame(self, value):
        # Clamp the value so it fits inside the graph
        if value < 0:
            value = 0
        elif value >= self.height:
            value = self.height - 1
        # Shift the old graph one column to the left
        new_graph = np.zeros((self.height, self.width, 3), np.uint8)
        new_graph[:,:-1,:] = self.graph[:,1:,:]
        # Draw the new value as a white bar from the bottom of the last column
        new_graph[self.height - value:,-1,:] = 255
        self.graph = new_graph
    def get_graph(self):
        return self.graph

This is a simple object that keeps the graph as an OpenCV image (a NumPy array).

The update function first clamps the value so it fits inside the graph's height.

Then it creates a new graph (new_graph), copies the values from the previous graph shifted one column to the left, and draws the new value as a white bar in the last column.
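
To see the Graph object in isolation before hooking it up to the webcam, you can feed it synthetic values. This is a minimal sketch of such a test (our own addition, assuming the Graph class above is defined in the same file):

import cv2
import numpy as np

# Assumes the Graph class from above is defined in this file
graph = Graph(100, 60)

# Feed the graph a synthetic sine wave and show it in a window
for t in range(300):
    value = int(30 + 25 * np.sin(t / 10))
    graph.update_frame(value)
    cv2.imshow("Graph test", graph.get_graph())
    if cv2.waitKey(30) == ord('q'):
        break

cv2.destroyAllWindows()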

Step 3: Putting it all together

The Graph object created in the last step needs a value to plot. This value can be anything. Here we use a simple measure of how much movement there is in the frame.

This is done by comparing the current frame with the previous one. It could be done directly on the color frames, but to minimize noise we convert to grayscale and apply a Gaussian blur. Then we take the absolute difference from the previous frame and sum it up.

The divisor used to scale that sum down depends heavily on your webcam settings, and using another resolution will affect it as well. So if the graph stays flat (all zeros) or saturated (above the graph height), adjust the integer in the division in graph.update_frame(int(difference/42111)).
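
As a rough, self-contained sketch of the motion measure itself (our own illustration, not part of the program below), you could normalize the summed difference by the number of pixels, which makes the value independent of the resolution:

import cv2
import numpy as np

def motion_value(prev_gray, gray):
    # Average absolute per-pixel difference between two grayscale frames,
    # roughly in the range 0-255 regardless of the frame resolution
    diff = cv2.absdiff(prev_gray, gray)
    return np.sum(diff) / diff.size

You would still need to scale this into the graph height, but the scale factor would no longer depend on the resolution.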

import cv2
import numpy as np

class Graph:
    def __init__(self, width, height):
        self.height = height
        self.width = width
        self.graph = np.zeros((height, width, 3), np.uint8)
    def update_frame(self, value):
        if value < 0:
            value = 0
        elif value >= self.height:
            value = self.height - 1
        new_graph = np.zeros((self.height, self.width, 3), np.uint8)
        new_graph[:,:-1,:] = self.graph[:,1:,:]
        new_graph[self.height - value:,-1,:] = 255
        self.graph = new_graph
    def get_graph(self):
        return self.graph

# Setup camera
cap = cv2.VideoCapture(0)
# Set a smaller resolution
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
graph = Graph(100, 60)
prev_frame = np.zeros((480, 640), np.uint8)
while True:
    # Capture frame-by-frame
    _, frame = cap.read()
    frame = cv2.flip(frame, 1)
    frame = cv2.resize(frame, (640, 480))
    # Convert to grayscale and blur to reduce noise
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (25, 25), None)
    # Measure motion as the summed absolute difference to the previous frame
    diff = cv2.absdiff(prev_frame, gray)
    difference = np.sum(diff)
    prev_frame = gray
    # Scale the motion value down and push it to the graph (tune the divisor)
    graph.update_frame(int(difference/42111))
    # Overlay the graph in the lower-right corner of the frame
    roi = frame[-70:-10, -110:-10,:]
    roi[:] = graph.get_graph()
    cv2.putText(frame, "...wanted a live graph", (20, 430), cv2.FONT_HERSHEY_PLAIN, 1.8, (200, 200, 200), 2)
    cv2.putText(frame, "...measures motion in frame", (20, 460), cv2.FONT_HERSHEY_PLAIN, 1.8, (200, 200, 200), 2)
    cv2.imshow("Webcam", frame)
    if cv2.waitKey(1) == ord('q'):
        break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()

ASCII Art of Live Webcam Stream with OpenCV

What will we cover in this tutorial?

Create ASCII Art on a live webcam stream using OpenCV with Python. To improve performance we will use Numba.

The result can look like the video below.

Step 1: A webcam flow with OpenCV in Python

If you need to install OpenCV for the first time, we suggest you read this tutorial.

A normal webcam flow in Python looks like the following code.

import cv2
# Setup webcam camera
cap = cv2.VideoCapture(0)
# Set a smaller resolution
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
while True:
    # Capture frame-by-frame
    _, frame = cap.read()
    frame = cv2.flip(frame, 1)
    cv2.imshow("Webcam", frame)
    if cv2.waitKey(1) == ord('q'):
        break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()

This will show a live stream from your webcam in a window. That is too easy not to enjoy.

Step 2: Prepare the letters to be used for ASCII art

There are many ways to achieve ASCII art. For simplicity, we will render each letter into a small grayscale (black and white only) image. You could print the letters directly in the terminal, but that turns out to be slower than mapping the small letter images into a big image representing the ASCII art.

We use OpenCV to create all the letters.

import cv2
import numpy as np
def generate_ascii_letters():
    images = []
    #letters = "# $%&\\'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[]^_`abcdefghijklmnopqrstuvwxyz{|}~"
    letters = " \\ '(),-./:;[]_`{|}~"
    for letter in letters:
        img = np.zeros((12, 16), np.uint8)
        img = cv2.putText(img, letter, (0, 11), cv2.FONT_HERSHEY_SIMPLEX, 0.5, 255)
        images.append(img)
    return np.stack(images)

The list images collects all the letter images we create. At the end (in the return statement) we stack them into a single NumPy array of images. This is done for speed, since Numba does not work with Python lists and needs the objects to be NumPy arrays.

If you like, you can use the full character set by switching to the commented-out letters string instead of the smaller one containing only special characters. We found the result looks better with the limited set of letters.

Each image is created simply as a black NumPy array of size 12×16 (that is, width 16 and height 12). Then we draw the letter on the image using cv2.putText(…).
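
If you want to sanity-check the generated letters, a quick way (our own addition, not part of the tutorial code) is to tile them into one image and show it with OpenCV:

import cv2
import numpy as np

# Assumes generate_ascii_letters() from above is defined in this file
images = generate_ascii_letters()
# Place the 12x16 letter images side by side in one preview image
preview = np.hstack(list(images))
cv2.imshow("Letters", preview)
cv2.waitKey(0)
cv2.destroyAllWindows()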

Step 3: Transforming the webcam frame to only outline the objects

To get a decent result, we found it works best to reduce each frame to an outline of the objects in it. This can be achieved with Canny edge detection (cv2.Canny(…)). On a live webcam stream it is advisable to apply a Gaussian blur first.

import cv2
# Setup camera
cap = cv2.VideoCapture(0)
# Set a smaller resolution
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
while True:
    # Capture frame-by-frame
    _, frame = cap.read()
    frame = cv2.flip(frame, 1)
    gb = cv2.GaussianBlur(frame, (5, 5), 0)
    can = cv2.Canny(gb, 127, 31)
    cv2.imshow('Canny edge detection', can)
    cv2.imshow("Webcam", frame)
    if cv2.waitKey(1) == ord('q'):
        break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()

This would result in something like this.

Step 4: Converting the Canny edge detection to ASCII art

This is where all the magic happens. We will take the Canny edge detected image and convert it to ASCII art.

First, remember that we have a NumPy array of all the letters we want to use.

def to_ascii_art(frame, images, box_height=12, box_width=16):
    height, width = frame.shape
    for i in range(0, height, box_height):
        for j in range(0, width, box_width):
            roi = frame[i:i + box_height, j:j + box_width]
            best_match = np.inf
            best_match_index = 0
            for k in range(1, images.shape[0]):
                total_sum = np.sum(np.absolute(np.subtract(roi, images[k])))
                if total_sum < best_match:
                    best_match = total_sum
                    best_match_index = k
            roi[:,:] = images[best_match_index]
    return frame

The height and the width of the frame are taken, and then we iterate over the frame in small boxes the size of the letter images.

Each box is captured as a region of interest (roi). Then we loop over all possible letters and find the best match. This is not done with an exact calculation, as that would be quite expensive. Instead we use the approximate calculation stored in total_sum.

The exact calculation would be the following.

total_sum = np.sum(np.where(roi > images[k], np.subtract(roi, images[k]), np.subtract(images[k], roi)))

Alternatively, you could convert the arrays to np.int16 instead of working with np.uint8, which is what causes the wraparound problem here. Finally, notice that cv2.norm(…) would also solve the problem, but since we need to optimize the code with Numba, that is not an option, as it is not supported by Numba.
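
As a sketch, the np.int16 variant could replace the total_sum line in the inner loop like this (our own illustration; it avoids the uint8 wraparound at the cost of a conversion):

# Cast to a signed type before subtracting so negative differences
# do not wrap around as they do with uint8
diff = roi.astype(np.int16) - images[k].astype(np.int16)
total_sum = np.sum(np.abs(diff))

Both astype and np.abs are supported by Numba, so this should still work with the @jit decorator used below.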

Step 5: Putting it all together and using Numba

Now we can put all the code together and try it out. We will also apply Numba to the to_ascii_art function to speed it up. If you are new to Numba, we can recommend this tutorial.

import cv2
import numpy as np
from numba import jit

@jit(nopython=True)
def to_ascii_art(frame, images, box_height=12, box_width=16):
    height, width = frame.shape
    for i in range(0, height, box_height):
        for j in range(0, width, box_width):
            roi = frame[i:i + box_height, j:j + box_width]
            best_match = np.inf
            best_match_index = 0
            for k in range(1, images.shape[0]):
                total_sum = np.sum(np.absolute(np.subtract(roi, images[k])))
                if total_sum < best_match:
                    best_match = total_sum
                    best_match_index = k
            roi[:,:] = images[best_match_index]
    return frame

def generate_ascii_letters():
    images = []
    #letters = "# $%&\\'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[]^_`abcdefghijklmnopqrstuvwxyz{|}~"
    letters = " \\ '(),-./:;[]_`{|}~"
    for letter in letters:
        img = np.zeros((12, 16), np.uint8)
        img = cv2.putText(img, letter, (0, 11), cv2.FONT_HERSHEY_SIMPLEX, 0.5, 255)
        images.append(img)
    return np.stack(images)

# Setup camera
cap = cv2.VideoCapture(0)
# Set a smaller resolution
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
images = generate_ascii_letters()
while True:
    # Capture frame-by-frame
    _, frame = cap.read()
    frame = cv2.flip(frame, 1)
    gb = cv2.GaussianBlur(frame, (5, 5), 0)
    can = cv2.Canny(gb, 127, 31)
    ascii_art = to_ascii_art(can, images)
    cv2.imshow('ASCII ART', ascii_art)
    cv2.imshow("Webcam", frame)
    if cv2.waitKey(1) == ord('q'):
        break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()

This will give the following result (if you put me in front of the camera).

Also, try using a different character set, for example the full one given in the commented-out line in the code above.

Pandas + GeoPandas + OpenCV: Create a Video of a COVID-19 World Map

What will we cover?

How to create a video like the one below using Pandas + GeoPandas + OpenCV in Python.

  1. Collect the newest COVID-19 data in Python using Pandas.
  2. Prepare the data and calculate the values needed for a Choropleth map.
  3. Get the Choropleth map from GeoPandas and prepare it for combining.
  4. Render the data frame by frame into images.
  5. Combine it all into a video using OpenCV.

Step 1: Get the daily reported COVID-19 data worldwide

This data is available from the European Centre for Disease Prevention and Control and can be found here.

All we need to do is download the CSV file, which contains all the historic data from all reporting countries.

This can be done as follows.

import pandas as pd

# Just to get more rows, columns and display width
pd.set_option('display.max_rows', 300)
pd.set_option('display.max_columns', 300)
pd.set_option('display.width', 1000)
# Get the updated data
table = pd.read_csv("https://opendata.ecdc.europa.eu/covid19/casedistribution/csv")
print(table)

This will give us an idea of how the data is structured.

          dateRep  day  month  year  cases  deaths countriesAndTerritories geoId countryterritoryCode  popData2019 continentExp  Cumulative_number_for_14_days_of_COVID-19_cases_per_100000
0      01/10/2020    1     10  2020     14       0             Afghanistan    AF                  AFG   38041757.0         Asia                                           1.040961         
1      30/09/2020   30      9  2020     15       2             Afghanistan    AF                  AFG   38041757.0         Asia                                           1.048847         
2      29/09/2020   29      9  2020     12       3             Afghanistan    AF                  AFG   38041757.0         Asia                                           1.114565         
3      28/09/2020   28      9  2020      0       0             Afghanistan    AF                  AFG   38041757.0         Asia                                           1.343261         
4      27/09/2020   27      9  2020     35       0             Afghanistan    AF                  AFG   38041757.0         Asia                                           1.540413         
...           ...  ...    ...   ...    ...     ...                     ...   ...                  ...          ...          ...                                                ...         
46221  25/03/2020   25      3  2020      0       0                Zimbabwe    ZW                  ZWE   14645473.0       Africa                                                NaN         
46222  24/03/2020   24      3  2020      0       1                Zimbabwe    ZW                  ZWE   14645473.0       Africa                                                NaN         
46223  23/03/2020   23      3  2020      0       0                Zimbabwe    ZW                  ZWE   14645473.0       Africa                                                NaN         
46224  22/03/2020   22      3  2020      1       0                Zimbabwe    ZW                  ZWE   14645473.0       Africa                                                NaN         
46225  21/03/2020   21      3  2020      1       0                Zimbabwe    ZW                  ZWE   14645473.0       Africa                                                NaN         
[46226 rows x 12 columns]

First we want to convert dateRep to a date object (you cannot see it above, but the dates are represented as strings). Then we use it as the index for easier access later.

import pandas as pd

# Just to get more rows, columns and display width
pd.set_option('display.max_rows', 300)
pd.set_option('display.max_columns', 300)
pd.set_option('display.width', 1000)
# Get the updated data
table = pd.read_csv("https://opendata.ecdc.europa.eu/covid19/casedistribution/csv")
# Convert dateRep to date object
table['date'] = pd.to_datetime(table['dateRep'], format='%d/%m/%Y')
# Use date for index
table = table.set_index('date')

Step 2: Prepare data and compute values needed for plot

What makes sense to plot?

Good question. In a Choropleth map each country is colored according to a value. Here, the higher a country's value is, the darker red it will be colored.

If we plotted the raw number of new COVID-19 cases, the values would always be high for countries with large populations. Hence, the number of COVID-19 cases per 100,000 people is used.

New COVID-19 cases per 100,000 people can still be volatile and change drastically from day to day. To even that out, a 7-day rolling sum can be used. That is, for each day you take the sum of the last 7 days, and you continue that process through the data.

To make it even less volatile, the average over the last 14 days of the 7-day rolling sum is used.

And no, this is not just something I invented. It is the measure used by the authorities in my home country to decide which countries are open for travel.

With the data above, this can be calculated as follows.

def get_stat(country_code, table):
    data = table.loc[table['countryterritoryCode'] == country_code]
    data = data.reindex(index=data.index[::-1])
    data['7 days sum'] = data['cases'].rolling(7).sum()
    data['7ds/100000'] = data['7 days sum'] * 100000 / data['popData2019']
    data['14 mean'] = data['7ds/100000'].rolling(14).mean()
    return data

The function above takes the table from Step 1 and extracts the rows for a given country code. Then it reverses the data so the dates are in chronological order.

After that, it computes the 7-day rolling sum, then scales it to cases per 100,000 people using the country's population. Finally, it computes the 14-day average (mean) of that.
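
To make the rolling calculations concrete, here is a tiny standalone example on made-up numbers (not COVID data), showing how rolling(7).sum() and rolling(14).mean() behave:

import pandas as pd
import numpy as np

# 30 days of made-up daily case counts
dates = pd.date_range('2020-01-01', periods=30)
cases = pd.Series(np.arange(30), index=dates)

seven_day_sum = cases.rolling(7).sum()       # NaN for the first 6 days
smoothed = seven_day_sum.rolling(14).mean()  # NaN for the first 19 days
print(smoothed.tail())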

Step 3: Get the Choropleth map data and prepare it

GeoPandas is an amazing library for creating Choropleth maps, but it does need your attention when you combine it with other data.

Here we want to combine it using the country codes (ISO_A3). If you inspect the data, you will see that some countries are missing that code.

Other than that, the code is straightforward.

import pandas as pd
import geopandas

# Just to get more rows, columns and display width
pd.set_option('display.max_rows', 300)
pd.set_option('display.max_columns', 300)
pd.set_option('display.width', 1000)
# Get the updated data
table = pd.read_csv("https://opendata.ecdc.europa.eu/covid19/casedistribution/csv")
# Convert dateRep to date object
table['date'] = pd.to_datetime(table['dateRep'], format='%d/%m/%Y')
# Use date for index
table = table.set_index('date')

def get_stat(country_code, table):
    data = table.loc[table['countryterritoryCode'] == country_code]
    data = data.reindex(index=data.index[::-1])
    data['7 days sum'] = data['cases'].rolling(7).sum()
    data['7ds/100000'] = data['7 days sum'] * 100000 / data['popData2019']
    data['14 mean'] = data['7ds/100000'].rolling(14).mean()
    return data

# Read the data to make a choropleth map
world = geopandas.read_file(geopandas.datasets.get_path('naturalearth_lowres'))
world = world[(world.pop_est > 0) & (world.name != "Antarctica")]
# Store data per country to make it easier
data_by_country = {}
for index, row in world.iterrows():
    # The world data is not fully updated with ISO_A3 names
    if row['iso_a3'] == '-99':
        country = row['name']
        if country == "Norway":
            world.at[index, 'iso_a3'] = 'NOR'
            row['iso_a3'] = "NOR"
        elif country == "France":
            world.at[index, 'iso_a3'] = 'FRA'
            row['iso_a3'] = "FRA"
        elif country == 'Kosovo':
            world.at[index, 'iso_a3'] = 'XKX'
            row['iso_a3'] = "XKX"
        elif country == "Somaliland":
            world.at[index, 'iso_a3'] = '---'
            row['iso_a3'] = "---"
        elif country == "N. Cyprus":
            world.at[index, 'iso_a3'] = '---'
            row['iso_a3'] = "---"
    # Add the data for the country
    data_by_country[row['iso_a3']] = get_stat(row['iso_a3'], table)

This creates a dictionary (data_by_country) with the needed data for each country. Notice that we do it this way because not all countries have the same number of data points.
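
Once data_by_country is built, you can look up the smoothed value for a single country and date like this (the country code and date are just examples, and we guard against missing dates):

# Look up the 14-day mean for Germany ('DEU') on an example date
day = pd.Timestamp('2020-09-01')
germany = data_by_country['DEU']
if day in germany.index:
    print(germany.loc[day]['14 mean'])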

Step 4: Create a Choropleth map for each date and save it as an image

This can be achieved by using matplotlib.

The idea is to go through all dates and, for each country, check whether it has data for that date and use it if it does.

import pandas as pd
import geopandas
import matplotlib.pyplot as plt

# Just to get more rows, columns and display width
pd.set_option('display.max_rows', 300)
pd.set_option('display.max_columns', 300)
pd.set_option('display.width', 1000)
# Get the updated data
table = pd.read_csv("https://opendata.ecdc.europa.eu/covid19/casedistribution/csv")
# Convert dateRep to date object
table['date'] = pd.to_datetime(table['dateRep'], format='%d/%m/%Y')
# Use date for index
table = table.set_index('date')

def get_stat(country_code, table):
    data = table.loc[table['countryterritoryCode'] == country_code]
    data = data.reindex(index=data.index[::-1])
    data['7 days sum'] = data['cases'].rolling(7).sum()
    data['7ds/100000'] = data['7 days sum'] * 100000 / data['popData2019']
    data['14 mean'] = data['7ds/100000'].rolling(14).mean()
    return data

# Read the data to make a choropleth map
world = geopandas.read_file(geopandas.datasets.get_path('naturalearth_lowres'))
world = world[(world.pop_est > 0) & (world.name != "Antarctica")]
# Store data per country to make it easier
data_by_country = {}
for index, row in world.iterrows():
    # The world data is not fully updated with ISO_A3 names
    if row['iso_a3'] == '-99':
        country = row['name']
        if country == "Norway":
            world.at[index, 'iso_a3'] = 'NOR'
            row['iso_a3'] = "NOR"
        elif country == "France":
            world.at[index, 'iso_a3'] = 'FRA'
            row['iso_a3'] = "FRA"
        elif country == 'Kosovo':
            world.at[index, 'iso_a3'] = 'XKX'
            row['iso_a3'] = "XKX"
        elif country == "Somaliland":
            world.at[index, 'iso_a3'] = '---'
            row['iso_a3'] = "---"
        elif country == "N. Cyprus":
            world.at[index, 'iso_a3'] = '---'
            row['iso_a3'] = "---"
    # Add the data for the country
    data_by_country[row['iso_a3']] = get_stat(row['iso_a3'], table)
# Create an image per date
for day in pd.date_range('12-31-2019', '10-01-2020'):
    print(day)
    world['number'] = 0.0
    for index, row in world.iterrows():
        if day in data_by_country[row['iso_a3']].index:
            world.at[index, 'number'] = data_by_country[row['iso_a3']].loc[day]['14 mean']
    world.plot(column='number', legend=True, cmap='OrRd', figsize=(15, 5))
    plt.title(day.strftime("%Y-%m-%d"))
    plt.savefig(f'image-{day.strftime("%Y-%m-%d")}.png')
    plt.close()

This will create an image for each day. In the final step these images will be combined into a video.

Step 5: Create a video from images with OpenCV

Using OpenCV to create a video from a sequence of images is quite easy. The only thing you need to ensure is that the images are read in the correct order.

import cv2
import glob
img_array = []
filenames = glob.glob('image-*.png')
filenames.sort()
for filename in filenames:
    print(filename)
    img = cv2.imread(filename)
    height, width, layers = img.shape
    size = (width, height)
    img_array.append(img)
out = cv2.VideoWriter('covid.avi', cv2.VideoWriter_fourcc(*'DIVX'), 15, size)
for i in range(len(img_array)):
    out.write(img_array[i])
out.release()

Here we use the VideoWriter from OpenCV.

This results in this video.
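
If you prefer an MP4 file over AVI, a small variation should work, assuming the mp4v codec is available in your OpenCV build (reusing img_array and size from the script above):

out = cv2.VideoWriter('covid.mp4', cv2.VideoWriter_fourcc(*'mp4v'), 15, size)
for img in img_array:
    out.write(img)
out.release()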