Pandas: How to Sum Groups from HTML Tables

What will we cover in this tutorial?

  • How to collect data from an HTML table into a Pandas DataFrame.
  • The cleaning process and how to convert the data into the correct type.
  • Also, how to deal with data points that are not in the correct representation.
  • Finally, how to sum up by countries.

Step 1: Collect the data from the table

Pandas is an amazing library with a lot of useful data analysis functionality right out of the box. The first step in any data analysis is to collect the data. In this tutorial we will collect the data from Wikipedia's page List of metro systems.

If you are new to the pandas library, we recommend you read this tutorial.

The objective will be to find the sums of Stations, System length, and Annual ridership for each country.

From wikipedia.org

At first glance this looks simple, but looking further down we see that some countries have multiple rows.

From wikipedia.org

Also, some rows do not have all the values needed.

First challenge first: read the data from the table into a DataFrame, which is the main data structure of the pandas library. The read_html call from pandas returns a list of DataFrames.

If you use read_html for the first time, we recommend you read this tutorial.

import pandas as pd
url = 'https://en.wikipedia.org/wiki/List_of_metro_systems'
tables = pd.read_html(url)
table = tables[0]
print(table)

Which results in the following output (or the top of it).

                 City               Country                                Name        Yearopened Year of lastexpansion             Stations                       System length             Annual ridership(millions)
0             Algiers               Algeria                       Algiers Metro          2011[13]              2018[14]               19[14]               18.5 km (11.5 mi)[15]                       45.3 (2019)[R 1]
1        Buenos Aires             Argentina            Buenos Aires Underground        1926[Nb 1]              2019[16]               90[17]               56.7 km (35.2 mi)[17]                      337.7 (2018)[R 2]
2             Yerevan               Armenia                       Yerevan Metro          1981[18]              1996[19]               10[18]                13.4 km (8.3 mi)[18]                       18.7 (2018)[R 3]
3              Sydney             Australia                        Sydney Metro          2019[20]                     –               13[20]               36 km (22 mi)[20][21]              14.2 (2019) [R 4][R Nb 1]
4              Vienna               Austria                       Vienna U-Bahn    1976[22][Nb 2]              2017[23]               98[24]               83.3 km (51.8 mi)[22]                      463.1 (2018)[R 6]
5                Baku            Azerbaijan                          Baku Metro          1967[25]              2016[25]               25[25]               36.6 km (22.7 mi)[25]                      231.0 (2018)[R 3]

We now have the data in a DataFrame.
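
Before cleaning, it is worth checking how read_html typed the columns. A quick sketch, assuming the table DataFrame from the code above:

# The columns we want to sum arrive as text (dtype object) because of the
# footnote markers, which is why they need cleaning before we can sum them.
print(table.dtypes)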

Step 2: Clean and convert the data

At first glance, we see that we do not need the columns City, Name, Yearopened, and Year of lastexpansion. To make it easier to work with the data, let's remove them and inspect the data again.

import pandas as pd
url = 'https://en.wikipedia.org/wiki/List_of_metro_systems'
tables = pd.read_html(url)
table = tables[0]
table = table.drop(['City', 'Name', 'Yearopened', 'Year of lastexpansion'], axis=1)
print(table)

Which results in the following output.

                  Country             Stations                       System length             Annual ridership(millions)
0                 Algeria               19[14]               18.5 km (11.5 mi)[15]                       45.3 (2019)[R 1]
1               Argentina               90[17]               56.7 km (35.2 mi)[17]                      337.7 (2018)[R 2]
2                 Armenia               10[18]                13.4 km (8.3 mi)[18]                       18.7 (2018)[R 3]
3               Australia               13[20]               36 km (22 mi)[20][21]              14.2 (2019) [R 4][R Nb 1]
4                 Austria               98[24]               83.3 km (51.8 mi)[22]                      463.1 (2018)[R 6]
5              Azerbaijan               25[25]               36.6 km (22.7 mi)[25]                      231.0 (2018)[R 3]
6                 Belarus               29[27]               37.3 km (23.2 mi)[27]                      283.4 (2018)[R 3]
7                 Belgium         59[28][Nb 5]               39.9 km (24.8 mi)[29]                      165.3 (2019)[R 7]

This makes it easier to see the next steps.

Let's take them one by one. For Stations we need to remove the footnote reference after the '[' symbol and convert the number to an integer. This can be done by applying a lambda function to each row.

table['Stations'] = table.apply(lambda row: int(row['Stations'].split('[')[0]), axis=1)

If you are new to lambda functions we recommend you read this tutorial.

The next thing we need to do is to convert the System length to floats. The length will be in km (I live in Denmark, where we use km and not mi). This can also be done by using a lambda function.

table['System length'] = table.apply(lambda row: float(row['System length'].split()[0]), axis=1)
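
As a side note, both conversions can also be written without apply, using the vectorized string accessor. A minimal sketch, assuming the same column names as above:

# Split off the footnote/unit part and convert the remaining text to numbers.
table['Stations'] = table['Stations'].str.split('[').str[0].astype(int)
table['System length'] = table['System length'].str.split().str[0].astype(float)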

Finally, and a bit trickier, we need to convert the Annual ridership column. The challenge is that some rows contain n/a, which is converted to np.nan, while other rows contain input that is not easy to convert, as the images show.

From wikipedia.org
From wikipedia.org

These lines can be dealt with by using a helper function.

import numpy as np
def to_float(obj):
    try:
        return float(obj)
    except:
        return np.nan
index = 'Annual ridership(millions)'
table[index] = table.apply(lambda row: to_float(row[index].split()[0]) if row[index] is not np.nan else np.nan, axis=1)

Adding this all together we get the following code.

import pandas as pd
import numpy as np
def to_float(obj):
    try:
        return float(obj)
    except:
        return np.nan
url = 'https://en.wikipedia.org/wiki/List_of_metro_systems'
tables = pd.read_html(url)
table = tables[0]
table = table.drop(['City', 'Name', 'Yearopened', 'Year of lastexpansion'], axis=1)
table['Stations'] = table.apply(lambda row: int(row['Stations'].split('[')[0]), axis=1)
table['System length'] = table.apply(lambda row: float(row['System length'].split()[0]), axis=1)
index = 'Annual ridership(millions)'
table[index] = table.apply(lambda row: to_float(row[index].split()[0]) if row[index] is not np.nan else np.nan, axis=1)
print(table)

Which results in the following output (or the first few lines).

                  Country  Stations  System length  Annual ridership(millions)
0                 Algeria        19          18.50                       45.30
1               Argentina        90          56.70                      337.70
2                 Armenia        10          13.40                       18.70
3               Australia        13          36.00                       14.20
4                 Austria        98          83.30                      463.10
5              Azerbaijan        25          36.60                      231.00
6                 Belarus        29          37.30                      283.40
7                 Belgium        59          39.90                      165.30
8                  Brazil        19          28.10                       58.40
9                  Brazil        25          42.40                       42.80
10                 Brazil        22          43.80                       51.70

Step 3: Sum rows by country

Say we now want to find the country with the most metro stations. This can be achieved by using the groupby and sum functions of the pandas DataFrame.

import pandas as pd
import numpy as np
def to_float(obj):
    try:
        return float(obj)
    except:
        return np.nan
url = 'https://en.wikipedia.org/wiki/List_of_metro_systems'
tables = pd.read_html(url)
table = tables[0]
table = table.drop(['City', 'Name', 'Yearopened', 'Year of lastexpansion'], axis=1)
table['Stations'] = table.apply(lambda row: int(row['Stations'].split('[')[0]), axis=1)
table['System length'] = table.apply(lambda row: float(row['System length'].split()[0]), axis=1)
index = 'Annual ridership(millions)'
table[index] = table.apply(lambda row: to_float(row[index].split()[0]) if row[index] is not np.nan else np.nan, axis=1)
# Sum up
table_sum = table.groupby(['Country']).sum()
print(table_sum.sort_values(['Stations'], ascending=False))

Where the result will be China.

                      Stations  System length  Annual ridership(millions)
Country                                                                  
China                     3738        6312.16                    25519.23
United States             1005        1325.90                     2771.50
South Korea                714         839.90                     4054.90
Japan[Nb 34]               669         791.20                     6489.60
India                      499         675.97                     1377.00
France                     483         350.90                     2113.50
Spain                      438         474.40                     1197.90

If we want to sort by System length (in km), we only need to change the last line to the following.

print(table_sum.sort_values(['System length'], ascending=False))

Resulting in the following.

                      Stations  System length  Annual ridership(millions)
Country                                                                  
China                     3738        6312.16                    25519.23
United States             1005        1325.90                     2771.50
South Korea                714         839.90                     4054.90
Japan[Nb 34]               669         791.20                     6489.60
India                      499         675.97                     1377.00
Russia                     368         611.50                     3507.60
United Kingdom             390         523.90                     1555.30

Finally, if you want to sort by Annual ridership, you will need to change the last line to the following.

print(table_sum.sort_values([index], ascending=False))

Remember, we assigned that to index. You should get the following output.

                      Stations  System length  Annual ridership(millions)
Country                                                                  
China                     3738        6312.16                    25519.23
Japan[Nb 34]               669         791.20                     6489.60
South Korea                714         839.90                     4054.90
Russia                     368         611.50                     3507.60
United States             1005        1325.90                     2771.50
France                     483         350.90                     2113.50
Brazil                     243         345.40                     2106.20
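
As a side note, if you only need the single top country rather than the whole sorted table, idxmax or nlargest can give it directly. A minimal sketch, assuming the table_sum DataFrame from the code above:

# Country with the most stations, and the top three by annual ridership.
print(table_sum['Stations'].idxmax())
print(table_sum.nlargest(3, 'Annual ridership(millions)'))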

Master Data Correlation with Pandas DataFrame in 3 Easy Steps

What will we cover in this tutorial?

  • How to get data using Pandas DataFrames.
  • Clean the data and merge it together.
  • Finally, how to see if there is any correlation between data columns.

Step 1: Get the data you want to correlate

As an example, let's assume you get the idea that there might be a correlation between GDP per capita, Social Progress Index (SPI), and Human Development Index (HDI), but you are not sure whether SPI or HDI is more closely correlated with GDP per capita.

Luckily, you have pandas to the rescue.

As the data is spread over three pages, you need to collect it separately and merge it later. First, let us collect the data and inspect it.

The GDP per capita data is located in the table on Wikipedia shown in the picture below.

From wikipedia.org

Which is actually three tables. We will use the World Bank table in our example. It can be collected by using a call to pandas read_html. If you are new to read_html we recommend you read this tutorial.

import pandas as pd
# The URL we will read our data from
url = 'https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(nominal)_per_capita'
# read_html returns a list of tables from the URL
tables = pd.read_html(url)
table = tables[3]
print(table)

Which will give an output similar to this.

    Rank                       Country/Territory     US$
0      1                           Monaco (2018)  185741
1      2                    Liechtenstein (2017)  173356
2      3                              Luxembourg  114705
3      —                                   Macau   84096
4      4                             Switzerland   81994
5      5                                 Ireland   78661
6      6                                  Norway   75420
7      7                                 Iceland   66945

The next table we need to get is the Social Progress Index (SPI), shown in the picture below.

From wikipedia.org

This can be collected to a DataFrame with the following code.

import pandas as pd
url = 'https://en.wikipedia.org/wiki/Social_Progress_Index'
tables = pd.read_html(url)
print(tables[1])

Which will print the following to the screen (or the top of it).

                          Country 2019[9]               
                          Country    Rank  Score Score.1
0                          Norway       1  90.95     NaN
1                         Denmark       2  90.09     NaN
2                     Switzerland       3  89.89     NaN
3                         Finland       4  89.56     NaN
4                          Sweden       5  89.45     NaN

Finally, we need to read the Human Development Index (HDI), shown on Wikipedia in the following picture.

From wikipedia.org

And can be collected with the following code.

import pandas as pd
url = 'https://en.wikipedia.org/wiki/List_of_countries_by_Human_Development_Index'
tables = pd.read_html(url)
print(tables[1])

Resulting in the following output.

                           Rank                                             Country or Territory                         HDI                                                    Unnamed: 5_level_0
    2018 data (2019 report)​[5] Change from previous year​[5]               Country or Territory 2018 data (2019 report)​[5] Average annual HDI growth (2010–2018)​[6]          Unnamed: 5_level_1
    Very high human development   Very high human development        Very high human development Very high human development               Very high human development Very high human development
0                             1                           NaN                             Norway                       0.954                                     0.16%                         NaN
1                             2                           NaN                        Switzerland                       0.946                                     0.18%                         NaN
2                             3                           NaN                            Ireland                       0.942                                     0.71%                         NaN
3                             4                           NaN                            Germany                       0.939                                     0.25%                         NaN
4                             4                           (2)                          Hong Kong                       0.939                                     0.51%                         NaN
5                             6                           (1)                          Australia                       0.938                                     0.17%                         NaN

This is a somewhat messier data table.

Now that we have gathered all the data, we need to clean it up and merge it together.

Step 2: Clean and merge the data into one DataFrame

Let us first inspect the GDP per capita data.

    Rank                       Country/Territory     US$
0      1                           Monaco (2018)  185741
1      2                    Liechtenstein (2017)  173356
2      3                              Luxembourg  114705
3      —                                   Macau   84096
4      4                             Switzerland   81994
5      5                                 Ireland   78661
6      6                                  Norway   75420
7      7                                 Iceland   66945

Notice that Country/Territory can have a year in parentheses, which will make it hard to merge. We need to clean that up. Also, we do not need the Rank column.

Next, let us inspect the SPI data.

                          Country 2019[9]               
                          Country    Rank  Score Score.1
0                          Norway       1  90.95     NaN
1                         Denmark       2  90.09     NaN
2                     Switzerland       3  89.89     NaN
3                         Finland       4  89.56     NaN
4                          Sweden       5  89.45     NaN

Here we notice that the column header has an extra top level, which we can remove. Further, we do not need the Rank and Score.1 columns.

Let’s try to merge it together. Notice that we use a lambda function to clean up the Country/Territory names. If you are new to lambda functions, we recommend you read this tutorial.

import pandas as pd
# The URL we will read our data from
url = 'https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(nominal)_per_capita'
# read_html returns a list of tables from the URL
tables = pd.read_html(url)
# The data is in table 3
table = tables[3]
# We need to clean the years in parenthesis from the country/territory field
table['Country'] = table.apply(lambda row: row['Country/Territory'].split(' (')[0], axis=1)
# We do not need the Rank and Country/Territory for more data
table = table.drop(['Rank', 'Country/Territory'], axis=1)
url = 'https://en.wikipedia.org/wiki/Social_Progress_Index'
tables = pd.read_html(url)
merge_table = tables[1]
# The first level of the table can be dropped
merge_table.columns = merge_table.columns.droplevel(0)
# We do not need the Rank and Score.1 columns
merge_table = merge_table.drop(['Rank', 'Score.1'], axis=1)
# Need to rename the second column to SPI = Social Progress Index
merge_table.columns = ['Country', 'SPI']
# Ready to merge the tables
table = table.merge(merge_table, how="left", left_on=['Country'], right_on=['Country'])
print(table)

Which will result in an output like this.

        US$                            Country    SPI
0    185741                             Monaco    NaN
1    173356                      Liechtenstein    NaN
2    114705                         Luxembourg  87.66
3     84096                              Macau    NaN
4     81994                        Switzerland  89.89
5     78661                            Ireland  87.97
6     75420                             Norway  90.95

First validate that Monaco, Liechtenstein, and Macau do not have any SPI value. That seems to be correct.
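
That validation can also be done directly in pandas. A small sketch, assuming the merged table from above:

# Show the countries that did not get an SPI value from the merge.
# Besides genuinely missing values, this also catches countries whose
# names are spelled differently in the two tables.
print(table[table['SPI'].isna()]['Country'])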

Then we can proceed to the next table of HDI. Let us first inspect the data.

                           Rank                                             Country or Territory                         HDI                                                    Unnamed: 5_level_0
    2018 data (2019 report)​[5] Change from previous year​[5]               Country or Territory 2018 data (2019 report)​[5] Average annual HDI growth (2010–2018)​[6]          Unnamed: 5_level_1
    Very high human development   Very high human development        Very high human development Very high human development               Very high human development Very high human development
0                             1                           NaN                             Norway                       0.954                                     0.16%                         NaN
1                             2                           NaN                        Switzerland                       0.946                                     0.18%                         NaN
2                             3                           NaN                            Ireland                       0.942                                     0.71%                         NaN
3                             4                           NaN                            Germany                       0.939                                     0.25%                         NaN
4                             4                           (2)                          Hong Kong                       0.939                                     0.51%                         NaN
5                             6                           (1)                          Australia                       0.938                                     0.17%                         NaN

It has quite messy column naming with three header levels. Simply dropping the extra levels would leave some columns with identical names. To deal with that, we rename the columns and delete those we do not need.

import pandas as pd
# The URL we will read our data from
url = 'https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(nominal)_per_capita'
# read_html returns a list of tables from the URL
tables = pd.read_html(url)
# The data is in table 3
table = tables[3]
# We need to clean the years in parenthesis from the country/territory field
table['Country'] = table.apply(lambda row: row['Country/Territory'].split(' (')[0], axis=1)
# We do not need the Rank and Country/Territory for more data
table = table.drop(['Rank', 'Country/Territory'], axis=1)
url = 'https://en.wikipedia.org/wiki/Social_Progress_Index'
tables = pd.read_html(url)
merge_table = tables[1]
# The first level of the table can be dropped
merge_table.columns = merge_table.columns.droplevel(0)
# We do not need the Rank and Score.1 columns
merge_table = merge_table.drop(['Rank', 'Score.1'], axis=1)
# Need to rename the second column to SPI = Social Progress Index
merge_table.columns = ['Country', 'SPI']
# Ready to merge the tables
table = table.merge(merge_table, how="left", left_on=['Country'], right_on=['Country'])
url = 'https://en.wikipedia.org/wiki/List_of_countries_by_Human_Development_Index'
tables = pd.read_html(url)

merge_table = tables[1]
# Delete the additional column levels
merge_table.columns = merge_table.columns.droplevel(1)
merge_table.columns = merge_table.columns.droplevel(1)
# Rename the columns
merge_table.columns = ['Rank1', 'Rank2', 'Country', 'HDI', 'HDI-1', 'None']
# Delete the columns we do not need
merge_table = merge_table.drop(['Rank1', 'Rank2', 'HDI-1', 'None'], axis=1)
# Merge the tables
table = table.merge(merge_table, how="left", left_on=['Country'], right_on=['Country'])
print(table)

Which will result in the following output (or the top of it).

        US$                            Country    SPI    HDI
0    185741                             Monaco    NaN    NaN
1    173356                      Liechtenstein    NaN  0.917
2    114705                         Luxembourg  87.66  0.909
3     84096                              Macau    NaN    NaN
4     81994                        Switzerland  89.89  0.946
5     78661                            Ireland  87.97  0.942
6     75420                             Norway  90.95  0.954

Notice that here Liechtenstein has HDI data, while Monaco and Macau do not have any. While it is not visible, the HDI column does not contain floats. This can be seen with a call to table.dtypes, which will output the following.

US$          int64
Country     object
SPI        float64
HDI         object
dtype: object

Which states that HDI is of type object, which in this case means it is a string. That means we need to convert it to float for our final correlation computations. This can be done by using a lambda function.

table['HDI'] = table.apply(lambda row: float(row['HDI']) if row['HDI'] is not np.nan else np.nan, axis=1) # HDI = Human Development Index
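
An alternative with the same effect is pd.to_numeric, which converts strings to numbers and turns anything it cannot convert into NaN. A minimal sketch, assuming the same table:

# errors='coerce' maps unconvertible values (and missing ones) to NaN.
table['HDI'] = pd.to_numeric(table['HDI'], errors='coerce')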

This makes the data ready for checking whether there is any correlation between GDP per capita and SPI and/or HDI.

Step 3: Calculate the correlations

This is where pandas DataFrames really shine. They can do all the work for you with a single call to corr().

The full code is given below.

import pandas as pd
import numpy as np

pd.set_option('display.max_rows', 300)
pd.set_option('display.max_columns', 10)
pd.set_option('display.width', 1000)

# The URL we will read our data from
url = 'https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(nominal)_per_capita'
# read_html returns a list of tables from the URL
tables = pd.read_html(url)
# The data is in table 3
table = tables[3]
# We need to clean the years in parenthesis from the country/territory field
table['Country'] = table.apply(lambda row: row['Country/Territory'].split(' (')[0], axis=1)
# We do not need the Rank and Country/Territory for more data
table = table.drop(['Rank', 'Country/Territory'], axis=1)
url = 'https://en.wikipedia.org/wiki/Social_Progress_Index'
tables = pd.read_html(url)
merge_table = tables[1]
# The first level of the table can be dropped
merge_table.columns = merge_table.columns.droplevel(0)
# We do not need the Rank and Score.1 columns
merge_table = merge_table.drop(['Rank', 'Score.1'], axis=1)
# Need to rename the second column to SPI = Social Progress Index
merge_table.columns = ['Country', 'SPI']
# Ready to merge the tables
table = table.merge(merge_table, how="left", left_on=['Country'], right_on=['Country'])
url = 'https://en.wikipedia.org/wiki/List_of_countries_by_Human_Development_Index'
tables = pd.read_html(url)

merge_table = tables[1]
# Delete the additional column levels
merge_table.columns = merge_table.columns.droplevel(1)
merge_table.columns = merge_table.columns.droplevel(1)
# Rename the columns
merge_table.columns = ['Rank1', 'Rank2', 'Country', 'HDI', 'HDI-1', 'None']
# Delete the columns we do not need
merge_table = merge_table.drop(['Rank1', 'Rank2', 'HDI-1', 'None'], axis=1)
# Merge the tables
table = table.merge(merge_table, how="left", left_on=['Country'], right_on=['Country'])
# Convert to floats
table['HDI'] = table.apply(lambda row: float(row['HDI']) if row['HDI'] is not np.nan else np.nan, axis=1) # HDI = Human Development Index
# Calculate the correlation
table_corr = table.corr()
# Print the correlation to GDP per capita (stored in US$).
print(table_corr['US$'].sort_values(ascending=False))

Which will result in the following output.

US$    1.000000
SPI    0.713946
HDI    0.663183
Name: US$, dtype: float64

Hence, it seems that GDP per capita correlates more strongly with SPI than with HDI.

Notice that the calculation ignores all Not a Number (np.nan) values.
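
Since corr() works on pairwise complete observations, it can be worth checking how many countries actually contribute to each pair. A small sketch, assuming the table from above (the min_periods value is an arbitrary choice):

# Count the non-missing values per column, and require a minimum number
# of observation pairs before accepting a correlation.
print(table[['US$', 'SPI', 'HDI']].notna().sum())
print(table[['US$', 'SPI', 'HDI']].corr(min_periods=30))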

How to Create Choropleth Maps with Evenly Distributed Colors in 3 Easy Steps

What will we cover in this tutorial?

  • You create your default map using Choropleth from folium (or any other library) and the color distribution is poor: most countries fall within only two colors, which makes the map less informative.
Example of poor color distribution.
  • Understand the issue
  • How to solve it
  • …and putting it all together.

Step 1: Understand the issue

In this example we use the divorce rates reported for various countries on Wikipedia's page on Divorce demography.

At first inspection of the Wikipedia page you get an idea of what the problem is.

From wikipedia.org

The divorce rate in Denmark is too high (just kidding, I am from Denmark and not proud that Denmark is ranking number 6 based on percentage of marriages that end in divorce).

The issue is that the distribution is not even. The highest is Tunisia with 97.14 percent, and Portugal is second with 70.97 percent. Compare this to the color coding: the last two colors by default cover 67 to 82 and 82 to 97, and each of those bins contains only one country.

To inspect this further, we need to retrieve the data and inspect it.

To retrieve the data we can use pandas – read this tutorial for details or see the code below.

import pandas as pd
import numpy as np
# The URL we will read our data from
url = 'https://en.wikipedia.org/wiki/Divorce_demography'
# read_html returns a list of tables from the URL
tables = pd.read_html(url)
# The data is in the first table
table = tables[0]
# We need to remove level 0 of columns as they are disturbing the data
table.columns = table.columns.droplevel(0)
def is_float(str):
    try:
        float(str)
        return True
    except:
        return False
# We need to convert the data to floats
index = 'Divorce_float'
table[index] = table.apply(lambda row: float(row['Percent']) if is_float(row['Percent']) else np.nan, axis=1)
print(pd.cut(table[index], 6).value_counts(sort=False))

If we inspect the output, we see that our suspicion was right.

(6.93, 22.04]     25
(22.04, 37.06]    26
(37.06, 52.08]    22
(52.08, 67.1]      9
(67.1, 82.12]      1
(82.12, 97.14]     1
Name: Divorce_float, dtype: int64

The last two color codes are only used by one country each, while the first 3 are used by 20+ countries.

Step 2: Distribute the countries into evenly distributed bins

This requires understanding the difference between cut and qcut in the pandas library.

  • cut will by default return bins of equal width.
  • qcut will by default try to return bins with the same number of items in each.

See this example to understand it better.

import pandas as pd
import numpy as np
# The URL we will read our data from
url = 'https://en.wikipedia.org/wiki/Divorce_demography'
# read_html returns a list of tables from the URL
tables = pd.read_html(url)
# The data is in the first table
table = tables[0]
# We need to remove level 0 of columns as they are disturbing the data
table.columns = table.columns.droplevel(0)
def is_float(str):
    try:
        float(str)
        return True
    except:
        return False
# We need to convert the data to floats
index = 'Divorce_float'
table[index] = table.apply(lambda row: float(row['Percent']) if is_float(row['Percent']) else np.nan, axis=1)
print(pd.qcut(table[index], 6).value_counts(sort=False))

Where the only difference is that we changed cut to qcut on the last line. This will result in the following output.

(7.018999999999999, 17.303]    14
(17.303, 23.957]               14
(23.957, 31.965]               14
(31.965, 40.0]                 15
(40.0, 47.078]                 13
(47.078, 97.14]                14
Name: Divorce_float, dtype: int64

Where we see that each bucket now contains approximately the same number of countries.
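
If the difference is still not obvious, here is a minimal sketch on made-up numbers (the values are arbitrary and only meant to show the behaviour):

import pandas as pd
data = pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9, 100])
# cut makes equal-width bins: 9 values land in the first bin, 1 in the second.
print(pd.cut(data, 2).value_counts(sort=False))
# qcut makes equal-sized bins: 5 values in each.
print(pd.qcut(data, 2).value_counts(sort=False))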

Hence, we should use qcut to distribute the colors evenly on our map.

Step 3: Putting it all together on the map

If you are new to folium and how to make awesome leaflet maps easily, I recommend reading this tutorial, or inspecting the code below.

import pandas as pd
import folium
import geopandas
import numpy as np
# The URL we will read our data from
url = 'https://en.wikipedia.org/wiki/Divorce_demography'
# read_html returns a list of tables from the URL
tables = pd.read_html(url)
# The data is in the first table
table = tables[0]
# We need to remove level 0 of columns as they are disturbing the data
table.columns = table.columns.droplevel(0)
# We should clean the data
table['Country'] = table.apply(lambda row: row['Country/region'].split(' (')[0] if type(row['Country/region']) == str else row['Country/region'], axis=1)
# Read the geopandas dataset
world = geopandas.read_file(geopandas.datasets.get_path('naturalearth_lowres'))
# Replace United States of America to United States to fit the naming in the table
world = world.replace('United States of America', 'United States')
# Merge the two DataFrames together
table = world.merge(table, how="left", left_on=['name'], right_on=['Country'])
def is_float(str):
    try:
        float(str)
        return True
    except:
        return False
# We need to convert the data to floats
index = 'Divorce_float'
table[index] = table.apply(lambda row: float(row['Percent']) if is_float(row['Percent']) else np.nan, axis=1)
# Clean data: remove rows with no data
table = table.dropna(subset=[index])
# We have 10 colors available, resulting in 9 cuts.
bins_data = pd.qcut(table[index], 9).value_counts(sort=False)
print(bins_data)
bins = [0]
for i in range(9):
    bins.append(int(round(bins_data.index.values[i].right)))
bins[9] = 100
# Create a map
my_map = folium.Map()
# Add the data
folium.Choropleth(
    geo_data=table,
    name='choropleth',
    data=table,
    columns=['Country', index],
    key_on='feature.properties.name',
    fill_color='OrRd',
    fill_opacity=0.7,
    line_opacity=0.2,
    legend_name=index,
    threshold_scale=bins
).add_to(my_map)
my_map.save('divorce_rates.html')

Here we combine the two DataFrames and take advantage of the fact that we have 10 colors available.
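
One thing worth checking before handing the thresholds to folium is that the bin edges end up unique and increasing, since rounding the quantile edges to integers can in principle produce duplicates. A small sketch, assuming the bins list built above:

# folium expects the threshold scale to be sorted and without duplicate edges.
print(bins)
assert bins == sorted(set(bins)), 'bin edges must be unique and increasing'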

It should result in a map like this one.

Final output