Matplotlib Visualization for DataFrame Time Series Data

What will we cover in this tutorial?

We will learn how to visualize time series data in a DataFrame with Matplotlib.

This tutorial will show you:

  • How to use Matplotlib with DataFrames.
  • Use Matplotlib with subplots (the object-oriented way).
  • How to make multiple plots in one figure.
  • How to create bar-plots.

Want to access the code directly in Jupyter Notebook?

You can get the Jupyter Notebooks from GitHub here, where there are also direct links to Colab for an interactive experience.

Step 1: Read time series data into a DataFrame

A DataFrame is a two-dimensional tabular data structure. It is the primary data structure of Pandas. The data structure contains labeled axes (rows and columns).

To get access to a DataFrame data structure, you need to import the Pandas library.

import pandas as pd

Then we need some time series data. You can download your own CSV file from financial pages like Yahoo! Finance.

For this tutorial we will use a dataset available from GitHub.

remote_file = "https://raw.githubusercontent.com/LearnPythonWithRune/FinancialDataAnalysisWithPython/main/AAPL.csv"
data = pd.read_csv(remote_file, index_col=0, parse_dates=True)

The pd.read_csv(…) call does all the magic. We set index_col=0, which makes the first column of the CSV file the index. These are the dates.

Then we set parse_dates=True, to ensure that dates are actually parsed as dates and not as strings. This is necessary to take advantage of the data being a time series and to index with time intervals.
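For example (a small sketch using the DataFrame read above), a parsed DatetimeIndex lets you slice directly by time intervals:

july = data.loc['2020-07']                   # all rows from July 2020
span = data.loc['2020-07-01':'2020-08-15']   # an explicit date interval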

Step 2: Import Matplotlib in Jupyter Notebook

When you import Matplotlib in Jupyter Notebook, you need to set a rendering mode.

import matplotlib.pyplot as plt
%matplotlib notebook

We will use the notebook mode, which is interactive. This enables you to zoom in on an interval, move around, and save the figure.

It is common to use inline mode for rendering in Jupyter Notebook. The inline mode creates a static image, which is not interactive.

Step 3: Use Matplotlib the Object-Oriented way

Matplotlib can be used in a functional way and an object-oriented way. Most use it in a functional way, which often creates more confusion, as it is not always intuitive how it works.

The object-oriented way leads to less confusion at the cost of one extra line of code and passing one argument. Hence, the price is low for the gain.

fig, ax = plt.subplots()
data['Close'].plot(ax=ax)
ax.set_ylabel("Price")
ax.set_title("AAPL")

The first line returns a figure and axis (fig and ax). The figure is where we put the axis, and the axis is the chart.

The actual plot is made by calling plot on the DataFrame. Actually, we access the column Close in this case, which is the Series holding the time series of historic Close prices.

Confused? Don’t worry about the details.

Notice that we pass ax=ax to plot. This ensures that we render the chart on the returned axis ax.

Finally, we add a y-label and a title to our axis.
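If you also want the chart as a file outside the notebook (a small optional addition, with a hypothetical filename), the figure object can be saved directly:

fig.savefig("aapl.png", dpi=150)  # write the figure to a PNG file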

Step 4: Creating multiple charts in one Matplotlib figure

How can we create multiple charts (or axes) in one Matplotlib figure?

Luckily, this is quite easy.

fig, ax = plt.subplots(2, 2)
data['Open'].plot(ax=ax[0, 0], title="Open")
data['High'].plot(ax=ax[0, 1], title="High")
data['Low'].plot(ax=ax[1, 0], title="Low")
data['Close'].plot(ax=ax[1, 1], title="Close")
plt.tight_layout()

Here we see a few differences. First, notice plt.subplots(2, 2), which returns a figure fig and a 2-by-2 grid of axes. Hence, ax is a two-dimensional array of axes.

We can access the first axis with ax[0, 0] and pass it as an argument to plot.

This continues for all the 4 plots we make, as you see.

Finally, we use plt.tight_layout(), which ensures that the axes in the layout do not overlap. You can try without it to see the difference.

Step 5: Create a bar-chart with Matplotlib

Finally, we will make a bar-chart with Matplotlib.

Actually, we will render a horizontal bar-chart.

fig, ax = plt.subplots()
data['Volume'].loc['2020-07-01':'2020-08-15'].plot.barh(ax=ax)

We do it for the volume and only on a limited time interval. This shows you how to take advantage of the time series aspect of the DataFrame.
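Another way to exploit the time series index (a small sketch, assuming the same data DataFrame as above) is to resample the volume to monthly sums before plotting:

fig, ax = plt.subplots()
data['Volume'].resample('M').sum().plot.bar(ax=ax)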

Next step

The above is part of the FREE 2h Video course.

Backtesting a Trading Strategy with Pandas and Python

What will we cover?

In this tutorial we will cover:

  • What is a backtesting strategy?
  • How to measure the performance of a backtesting strategy?
  • How to implement a backtesting strategy with Pandas?

What is a backtesting strategy?

In trading, backtesting seeks to estimate the performance of a strategy or model if it had been employed during a past period (source).

The way to analyze the performance of a strategy is to evaluate its return, volatility, and maximum drawdown. Other metrics can also be used, but for this tutorial we will use these.
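The tutorial below computes the return and volatility; as a small preview (a sketch, assuming a DataFrame data with an 'Adj Close' column like the one used below), the maximum drawdown can be computed with a cumulative maximum:

running_max = data['Adj Close'].cummax()
drawdown = data['Adj Close']/running_max - 1
max_drawdown = drawdown.min()  # most negative value, e.g. -0.35 for a 35% drop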

Step 1: Read data from Yahoo! Finance API with Pandas Datareader

Let’s get started by importing a few libraries and retrieving some data from the Yahoo! Finance API with Pandas Datareader.

import pandas as pd
import pandas_datareader as pdr
import datetime as dt
import numpy as np

start = dt.datetime(2010, 1, 1)

data = pdr.get_data_yahoo("AAPL", start)

Which will read data for the Apple ticker (AAPL) since 2010.

Below, the head of the data is shown.

                High       Low      Open     Close       Volume  Adj Close
Date                                                                      
2010-01-04  7.660714  7.585000  7.622500  7.643214  493729600.0   6.583586
2010-01-05  7.699643  7.616071  7.664286  7.656429  601904800.0   6.594968
2010-01-06  7.686786  7.526786  7.656429  7.534643  552160000.0   6.490066
2010-01-07  7.571429  7.466071  7.562500  7.520714  477131200.0   6.478067
2010-01-08  7.571429  7.466429  7.510714  7.570714  447610800.0   6.521136

Step 2: Calculate signals for a simple strategy

The simple strategy we will use is based on two moving averages, of periods 5 and 20.

When the 5-day moving average of the Adj Close price is above the 20-day moving average, we go long (buy and hold), otherwise short (sell).

This can be calculated as follows.

data['Signal'] = data['Adj Close'].rolling(5).mean() - data['Adj Close'].rolling(20).mean()
data['Position'] = (data['Signal'].apply(np.sign) + 1)/2

This results in a Signal line, which is the difference of the two moving averages. When the signal line is positive our position is 1 (buy and hold), otherwise 0 (sell).

                  High         Low        Open  ...   Adj Close    Signal  Position
Date                                            ...                                
2021-02-26  124.849998  121.199997  122.589996  ...  121.260002 -7.610835       0.0
2021-03-01  127.930000  122.790001  123.750000  ...  127.790001 -7.054179       0.0
2021-03-02  128.720001  125.010002  128.410004  ...  125.120003 -6.761187       0.0
2021-03-03  125.709999  121.839996  124.809998  ...  122.059998 -6.782757       0.0
2021-03-04  123.599998  118.620003  121.750000  ...  120.129997 -6.274249       0.0

The reason why we want long to be 1 and short to be 0 is computational, as will become clear soon.

Step 3: Remove unnecessary data columns and rows

To get a cleaner dataset, we remove the columns and rows we do not need.

data.drop(['High', 'Low', 'Open', 'Volume', 'Close'], axis=1, inplace=True)
data.dropna(inplace=True)

Where drop removes columns not needed and dropna removes rows with NaN.

The inplace=True simply applies the operation to the DataFrame itself.
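If you prefer not to mutate the DataFrame in place, an equivalent method-chaining style (a matter of taste, not a change of behavior) is:

data = data.drop(['High', 'Low', 'Open', 'Volume', 'Close'], axis=1).dropna()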

            Adj Close    Signal  Position
Date                                     
2010-02-01   5.990476 -0.217986       0.0
2010-02-02   6.025239 -0.252087       0.0
2010-02-03   6.128909 -0.282004       0.0
2010-02-04   5.908031 -0.297447       0.0
2010-02-05   6.012933 -0.253271       0.0

Step 4: Calculate the return of the strategy

To calculate the return we will use log returns, which, as we will see, has an advantage.

Then we use the Position, but we shift it by 1, as we assume we can first act on a signal the day after it occurs.

data['Log return'] = np.log(data['Adj Close']/data['Adj Close'].shift())
data['Return'] = data['Position'].shift(1)*data['Log return']

This results in the following.

             Adj Close    Signal  Position  Log return  Return
Date                                                          
2021-02-26  121.260002 -7.610835       0.0    0.002229     0.0
2021-03-01  127.790001 -7.054179       0.0    0.052451     0.0
2021-03-02  125.120003 -6.761187       0.0   -0.021115    -0.0
2021-03-03  122.059998 -6.782757       0.0   -0.024761    -0.0
2021-03-04  120.129997 -6.274249       0.0   -0.015938    -0.0

Now the additive advantage of log returns comes in handy.

Remember that we can add up log returns to calculate the final return. For details I refer to this.

Hence, the return can be calculated as follows.

data[['Log return', 'Return']].cumsum().apply(np.exp)

Resulting in the following.

            Log return   Return
Date                           
2021-02-26   20.242133  7.29214
2021-03-01   21.332196  7.29214
2021-03-02   20.886489  7.29214
2021-03-03   20.375677  7.29214
2021-03-04   20.053499  7.29214

Using a bit of calculation.

np.exp(data[['Log return', 'Return']].mean()*252)

We get.

Log return    1.310917
Return        1.196485
dtype: float64

This tells us that the annualized return of our strategy is 19.6485%, while a buy-and-hold strategy would give 31.0917%.

The natural question is: What did we gain with our strategy?

Step 5: Evaluating our strategy

Now we can compare the volatility of the buy-and-hold strategy with ours. The volatility of a stock can be calculated in many ways. Here we will use the standard deviation. For other measures refer to Investopedia.

data[['Log return', 'Return']].std()*252**.5

Which gives the annualized standard deviation.

Log return    0.283467
Return        0.188044
dtype: float64

Hence, the gain from our strategy is lower volatility.

Calculate the Average True Range (ATR) Easy with Pandas DataFrames

What will we cover in this tutorial?

In this tutorial we will cover how to read historic stock prices, calculate the Average True Range (ATR), and visualize it together with the stock price.

Step 1: Read historic stock prices from Yahoo! Finance API

To read data from Yahoo! Finance API we use Pandas-Datareader, which has a direct method.

This requires that we give a start date for how far back we want to retrieve data.

import pandas_datareader as pdr
import datetime as dt

start = dt.datetime(2020, 1, 1)
data = pdr.get_data_yahoo("NFLX", start)

print(data.tail())

This will result in output similar to the following.

                  High         Low        Open       Close   Volume   Adj Close
Date
2021-02-12  561.250000  550.849976  556.940002  556.520020  2195900  556.520020
2021-02-16  563.630005  552.729980  557.289978  557.280029  2622400  557.280029
2021-02-17  555.250000  543.030029  550.989990  551.340027  2069600  551.340027
2021-02-18  550.000000  538.229980  549.000000  548.219971  2456200  548.219971
2021-02-19  548.989990  538.809998  548.000000  540.219971  2838600  540.219971

Step 2: Calculate the Average True Range (ATR)

The Average True Range (ATR), as investopedia.org defines it, is based on the True Range: the greatest of the current High minus the current Low, the absolute value of the current High minus the previous Close, and the absolute value of the current Low minus the previous Close. The ATR is then typically a 14-day average of the True Range.

This can be calculated as follows.

import numpy as np
import pandas as pd
import pandas_datareader as pdr
import datetime as dt

start = dt.datetime(2020, 1, 1)
data = pdr.get_data_yahoo("NFLX", start)

high_low = data['High'] - data['Low']
high_close = np.abs(data['High'] - data['Close'].shift())
low_close = np.abs(data['Low'] - data['Close'].shift())

ranges = pd.concat([high_low, high_close, low_close], axis=1)
true_range = np.max(ranges, axis=1)

atr = true_range.rolling(14).sum()/14

Where we use the standard of 14 days.
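Note that dividing the rolling sum by 14 is the same as taking a rolling mean, so an equivalent formulation is:

atr = true_range.rolling(14).mean()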

Step 3: Visualize the ATR and the stock price

We will use Matplotlib to visualize it as it integrates well with DataFrames from Pandas.

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
atr.plot(ax=ax)
data['Close'].plot(ax=ax, secondary_y=True, alpha=0.3)
plt.show()

This will result in a chart similar to this one.

Create Financial Analysis with Multiple Sheets and Charts in Excel from Python

What will we cover in this tutorial?

We will get financial data from FMP Cloud (a free stock API) for the last years and generate an Excel workbook with 3 sheets and charts, similar to this. All done from Python.

Step 1: Collect the data

In this tutorial we are only going to use the example data on Apple, which is freely available without registering on FMP Cloud. If you want to do it for another stock you will need to register on their site.

What we need is the income statement and cash flow statement. They are available as JSON on their page (income statement and cash flow statement).

As our main library we will use Pandas.

The data can be collected with the following code (NOTICE: the code syntax highlighter may change & to &amp; in the URLs; if so, remove the amp;).

import pandas as pd

income_statement = pd.read_json("https://fmpcloud.io/api/v3/income-statement/AAPL?limit=120&apikey=demo")
income_statement.set_index('date', inplace=True)
cash_flow = pd.read_json("https://fmpcloud.io/api/v3/cash-flow-statement/AAPL?limit=120&apikey=demo")
cash_flow.set_index('date', inplace=True)

Notice that we set the index to be the date column. This makes the further work easier.

Step 2: Prepare the data

The next step is to make the necessary calculations and prepare the data.

We are only interested in Revenue, Earnings Per Share (EPS), and Free Cash Flow (FCF). So let's take that data and keep it in a DataFrame (the main Pandas data structure).

data = income_statement[['revenue', 'eps']].copy()
data['fcf'] = cash_flow['freeCashFlow']

Now the data should look something like this.

                 revenue     eps           fcf
date
2020-09-26  274515000000  3.3600  7.336500e+10
2019-09-28  260174000000  2.9925  5.889600e+10
2018-09-29  265595000000  3.0025  6.412100e+10
2017-09-30  229234000000  2.3175  5.080300e+10
2016-09-24  215639000000  2.0875  5.349700e+10

Step 3: Calculate Percentage change

We are actually not interested in the actual values. The only thing we care about is the percentage growth from year to year.

As the data is in reverse order and we need to calculate from the beginning to the end, we first need to order the dates from oldest to newest.

Then it is straightforward to calculate using the pct_change() method.

data = data.sort_index()

data['rev %-chg'] = data['revenue'].pct_change()
data['eps %-chg'] = data['eps'].pct_change()
data['fcf %-chg'] = data['fcf'].pct_change()

Now the data should look similar to this (or the tail of it).

                 revenue     eps           fcf  rev %-chg  eps %-chg  fcf %-chg
date
2016-09-24  215639000000  2.0875  5.349700e+10  -0.077342  -0.100216  -0.233326
2017-09-30  229234000000  2.3175  5.080300e+10   0.063045   0.110180  -0.050358
2018-09-29  265595000000  3.0025  6.412100e+10   0.158620   0.295577   0.262150
2019-09-28  260174000000  2.9925  5.889600e+10  -0.020411  -0.003331  -0.081487
2020-09-26  274515000000  3.3600  7.336500e+10   0.055121   0.122807   0.245670

Step 4: Preparing data used in each Excel sheet

The next step is to prepare the data for each sheet in the Excel document.

We only need the last 10 years of data and will use loc to get that. Then we create a DataFrame for each sheet.

data_set = data.loc['2010-01-01':]
data_set.index = data_set.index.year

rev = data_set[['revenue', 'rev %-chg']].copy()
eps = data_set[['eps', 'eps %-chg']].copy()
fcf = data_set[['fcf', 'fcf %-chg']].copy()

Notice that we set the index only to be the year.

Step 5: Creating the Excel sheet

To create the Excel sheet we will use Pandas with the XlsxWriter.

This can be done as follows.

def create_sheet(writer, df, sheet_name):
    df.to_excel(writer, sheet_name=sheet_name)

writer = pd.ExcelWriter("financials.xlsx", engine='xlsxwriter')
workbook = writer.book

create_sheet(writer, rev, 'Revenue')
create_sheet(writer, eps, 'EPS')
create_sheet(writer, fcf, 'FCF')

workbook.close()

This will create a 3-sheet Excel document. One sheet for each DataFrame (rev, eps, fcf).

The function create_sheet(…) is practical, as each DataFrame is similar and we need to create the same sheet for each DataFrame.

The result will be in the Excel document financials.xlsx.

Step 6: Creating the charts

Now it pays to have the function create_sheet(…).

The full Excel document can be created by the following code.

def create_sheet(writer, df, sheet_name):
    df.to_excel(writer, sheet_name=sheet_name)
    worksheet = writer.sheets[sheet_name]
    workbook = writer.book
      
    chart = workbook.add_chart({'type': 'line'})
    
    chart.add_series({
        'name': sheet_name,
        'categories': [sheet_name, 1, 0, 11, 0],
        'values': [sheet_name, 1, 1, 11, 1],
    })
    
    column_chart = workbook.add_chart({'type': 'column'})
    
    column_chart.add_series({
        'name': sheet_name,
        'categories': [sheet_name, 1, 0, 11, 0],
        'values': [sheet_name, 1, 2, 11, 2],
        'y2_axis': True,
    })
    
    chart.combine(column_chart)
    chart.set_title({'name': sheet_name})
    chart.set_x_axis({'name': 'Date'})
    chart.set_y_axis({'name': 'Value'})
    column_chart.set_y2_axis({'name': '%-chg'})
    
    worksheet.insert_chart("E1", chart)

writer = pd.ExcelWriter("financials.xlsx", engine='xlsxwriter')
workbook = writer.book

create_sheet(writer, rev, 'Revenue')
create_sheet(writer, eps, 'EPS')
create_sheet(writer, fcf, 'FCF')

workbook.close()

The details of creating the charts in Excel can be found in the XlsxWriter documentation. Basically, you send information to the XlsxWriter engine through dictionaries. You need to send the values that you would set in Excel if you were working directly in there.
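For example, the range lists used in add_series follow XlsxWriter's [sheet_name, first_row, first_col, last_row, last_col] convention, zero-indexed. XlsxWriter also ships a utility to translate such coordinates into the familiar A1-notation, which can help when double-checking the ranges:

from xlsxwriter.utility import xl_range

# [first_row, first_col, last_row, last_col] = [1, 0, 11, 0]
print(xl_range(1, 0, 11, 0))  # 'A2:A12' - the date column below the header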

Again, the result will be in the Excel document financials.xlsx.

Create Excel Sheet with Stock Prices and Moving Average with Chart all from Python

What will we cover in this tutorial?

In this tutorial we will read historic stock prices, calculate the moving average, export it all to an Excel sheet, and insert a chart with prices and the moving average. All done from Python using Pandas and Pandas-datareader.

Step 1: Get the historic stock prices

A great place to get historic stock prices is Pandas-datareader. It has a great interface to various places to get the data. In this tutorial we will use the Yahoo! Finance API through Pandas-datareader.

It doesn’t require any registration to use the API. It works straight out of the box.

To get stock prices in time series you need to find the ticker of your favorite stock. In this tutorial we will use Apple, which has ticker AAPL.

import pandas_datareader as pdr
import datetime as dt

start = dt.datetime(2020, 1, 1)
ticker = pdr.get_data_yahoo("AAPL", start)

print(ticker.head())

You also need to set a start date for how far back you want historic stock prices. By default, you will get data up to the most recent date, with one row for each day the market is open. You can use more arguments to configure whether you want weekly or monthly prices, and you can also set an end date if you like.
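For example (a small sketch; the interval argument is used the same way later in this collection), weekly prices for a fixed date range could be fetched like this:

import pandas_datareader as pdr
import datetime as dt

start = dt.datetime(2020, 1, 1)
end = dt.datetime(2020, 12, 31)
# 'wk' gives weekly rows; 'mo' would give monthly rows
weekly = pdr.get_data_yahoo("AAPL", start, end, interval='wk')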

The above code should give output similar to the following.

                 High        Low       Open      Close       Volume  Adj Close
Date                                                                          
2020-01-02  75.150002  73.797501  74.059998  75.087502  135480400.0  73.840042
2020-01-03  75.144997  74.125000  74.287498  74.357498  146322800.0  73.122154
2020-01-06  74.989998  73.187500  73.447502  74.949997  118387200.0  73.704819
2020-01-07  75.224998  74.370003  74.959999  74.597504  108872000.0  73.358185
2020-01-08  76.110001  74.290001  74.290001  75.797501  132079200.0  74.538239

Step 2: Calculate the Moving Average

To calculate the moving average (also called the simple moving average), we can use the rolling method on a DataFrame.

The rolling method takes one argument, which is the window size. This indicates how big a window we want to apply a function on. In this case we want to apply the mean function on a window of size 50.

import pandas_datareader as pdr
import pandas as pd
import datetime as dt

start = dt.datetime(2020, 1, 1)
ticker = pdr.get_data_yahoo("AAPL", start)

ticker['MA'] = ticker['Close'].rolling(50).mean()

print(ticker[50:].head())

This calculates the simple moving average with window size 50, which gives the same result as the default 50-day moving average in Yahoo! Finance.

The output will be as follows.

                 High        Low       Open  ...       Volume  Adj Close        MA
Date                                         ...                                  
2020-03-16  64.769997  60.000000  60.487499  ...  322423600.0  59.687832  76.16100
2020-03-17  64.402496  59.599998  61.877499  ...  324056000.0  62.312309  75.93815
2020-03-18  62.500000  59.279999  59.942501  ...  300233600.0  60.786911  75.67250
2020-03-19  63.209999  60.652500  61.847500  ...  271857200.0  60.321156  75.40445
2020-03-20  62.957500  57.000000  61.794998  ...  401693200.0  56.491634  75.03470

Notice we removed the first 50 rows. This is because the MA (moving average) column has no values until the window of 50 rows is full (the first value appears at row index 49, as we index from 0).
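An alternative (equivalent here up to one row, assuming the other columns contain no NaN values) is simply to drop the rows with missing values:

ticker = ticker.dropna()  # drops the rows where MA is still NaN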

Step 3: Export data to Excel and create a chart with close prices and moving average

Now this is where it all gets a bit more complicated. It takes some reading in the manual to figure all this out.

The code is commented to explain what happens.

import pandas_datareader as pdr
import pandas as pd
import datetime as dt

# Read the stock prices from Yahoo! Finance
start = dt.datetime(2020, 1, 1)
ticker = pdr.get_data_yahoo("AAPL", start)

# Calculate the moving average with window size 50
ticker['MA'] = ticker['Close'].rolling(50).mean()

# Create a Pandas Excel writer using XlsxWriter as the engine.
writer = pd.ExcelWriter('apple.xlsx', engine='xlsxwriter')

# Name the sheet
sheet_name = "Apple"

# We convert the index from datetime to date
# This makes the data in Excel only have the date and
# not the date with time: 00:00:00:0000
ticker.index = ticker.index.date

# Skip the first 50 rows (to be pedantic, skipping 49 would suffice)
ticker = ticker[50:]

# Convert the dataframe to an XlsxWriter Excel object.
ticker.to_excel(writer, sheet_name=sheet_name)

# Get the xlsxwriter workbook and worksheet objects.
workbook = writer.book
worksheet = writer.sheets[sheet_name]

# Widen the first column to display the dates.
worksheet.set_column('A:A', 12)

# Get the number of rows and column index
max_row = len(ticker)
col_ma = ticker.columns.get_loc('MA') + 1
col_close = ticker.columns.get_loc('Close') + 1

# Create a chart object of type line
chart = workbook.add_chart({'type': 'line'})

# Insert the first dataset into chart
chart.add_series({
    'name': "MA",
    'categories': [sheet_name, 1, 0, max_row, 0],
    'values': [sheet_name, 1, col_ma, max_row, col_ma],
})
# Insert the second dataset in the same chart
chart.add_series({
    'name': "Close",
    'values': [sheet_name, 1, col_close, max_row, col_close],
})

# Configure axis
chart.set_x_axis({
    'name': 'Date',
    'date_axis': True,
})
chart.set_y_axis({
    'name': '$',
    'major_gridlines': {'visible': False}
})

# Insert the chart into the worksheet.
worksheet.insert_chart('I2', chart)

# Close the Pandas Excel writer and output the Excel file.
writer.save()

The above code will create an Excel sheet looking like this.

Result

Master Markowitz Portfolio Optimization (Efficient Frontier) in Python using Pandas

What is Markowitz Portfolio Optimization (Efficient Frontier)?

The Efficient Frontier takes a portfolio of investments and optimizes the expected return with regard to the risk. That is, it finds the optimal return for a given risk.

According to investopedia.org the return is based on the expected Compound Annual Growth Rate (CAGR), and the risk metric is the standard deviation of the return.

But what does all that mean? We will learn that in this tutorial.

Step 1: Get the time series of your stock portfolio

We will use the following portfolio of 4 stocks of Apple (AAPL), Microsoft (MSFT), IBM (IBM) and Nvidia (NVDA).

To get the time series we will use the Yahoo! Finance API through the Pandas-datareader.

We will look 5 years back.

import pandas_datareader as pdr
import pandas as pd
import datetime as dt
from dateutil.relativedelta import relativedelta

years = 5
end_date = dt.datetime.now()
start_date = end_date - relativedelta(years=years)
close_price = pd.DataFrame()
tickers = ['AAPL','MSFT','IBM','NVDA']
for ticker in tickers:
  tmp = pdr.get_data_yahoo(ticker, start_date, end_date)
  close_price[ticker] = tmp['Close']

print(close_price)

Resulting in the following output (the first few lines).

                  AAPL        MSFT         IBM        NVDA
Date                                                      
2015-08-25  103.739998   40.470001  140.960007   20.280001
2015-08-26  109.690002   42.709999  146.699997   21.809999
2015-08-27  112.919998   43.900002  148.539993   22.629999
2015-08-28  113.290001   43.930000  147.979996   22.730000
2015-08-31  112.760002   43.520000  147.889999   22.480000

It contains the daily time series for the last 5 years from the current date.

Step 2: Calculate the CAGR, returns, and covariance

To calculate the expected return, we use the Compound Annual Growth Rate (CAGR) based on the last 5 years. The CAGR is used as investopedia suggests. An alternative that is also used is the mean of the returns. The key thing is to have some common measure of the return.

The CAGR is calculated as follows.

CAGR = (end price / start price)^(1/years) - 1
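For example, a stock that goes from 100 to 200 over 5 years has CAGR = (200/100)^(1/5) - 1 ≈ 0.149, i.e. roughly 14.9% per year.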

We will also calculate the covariance, as we will use that to calculate the variance of a weighted portfolio. Remember that the standard deviation is given by the following.

sigma = sqrt(variance)

A portfolio is a vector w with the balances of each stock. For example, w = [0.2, 0.3, 0.4, 0.1] says that we have 20% in the first stock, 30% in the second, 40% in the third, and 10% in the final stock. It all sums up to 100%.

Given a weight w of the portfolio, you can calculate the variance of the stocks by using the covariance matrix.

variance = w^T Cov w

Where Cov is the covariance matrix.

This results in the following pre-computations.

returns = close_price/close_price.shift(1)
cagr = (close_price.iloc[-1]/close_price.iloc[0])**(1/years) - 1
cov = returns.cov()

print(cagr)
print(cov)

Where you can see the output here.

# CAGR:
AAPL    0.371509
MSFT    0.394859
IBM    -0.022686
NVDA    0.905011
dtype: float64

# Covariance
          AAPL      MSFT       IBM      NVDA
AAPL  0.000340  0.000227  0.000152  0.000297
MSFT  0.000227  0.000303  0.000164  0.000306
IBM   0.000152  0.000164  0.000260  0.000210
NVDA  0.000297  0.000306  0.000210  0.000879

Step 3: Plot the return and risk

This is where the power of computing comes into the picture. The idea is to just try a random portfolio and see how it rates with regard to expected return and risk.

It is that simple. Make a random weighted distribution of your portfolio and plot the point of expected return (based on our CAGR) and the risk based on the standard deviation calculated by the covariance.

import matplotlib.pyplot as plt
import numpy as np

def random_weights(n):
    k = np.random.rand(n)
    return k / sum(k)

exp_return = []
sigma = []
for _ in range(20000):
  w = random_weights(len(tickers))
  exp_return.append(np.dot(w, cagr.T))
  sigma.append(np.sqrt(np.dot(np.dot(w.T, cov), w)))

plt.plot(sigma, exp_return, 'ro', alpha=0.1) 
plt.show()

We introduce a helper function random_weights, which returns a weighted portfolio. That is, it returns a vector with entries that sum up to one. This will give a way to distribute our portfolio of stocks.

Then we iterate 20,000 times (it could be any value; we just want enough points to plot our graph). In each iteration we make a random weight vector w, then calculate the expected return by the dot-product of w and cagr transposed, using NumPy's dot-product function.

What the dot-product np.dot(w, cagr.T) does is take elements pairwise from w and cagr, multiply them, and sum up. The transpose is only about the orientation to make it work.

The standard deviation (assigned to sigma) is calculated similarly using the formula from the last step: variance = w^T Cov w (which consists of dot-products).
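As a small numeric illustration (with hypothetical CAGR values close to the ones above):

import numpy as np

w = np.array([0.2, 0.3, 0.4, 0.1])
r = np.array([0.37, 0.39, -0.02, 0.91])  # hypothetical CAGR values
# 0.2*0.37 + 0.3*0.39 + 0.4*(-0.02) + 0.1*0.91 = 0.274
print(np.dot(w, r))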

This results in the following graph.

Returns vs risks

This shows a graph which outlines a parabola. The optimal values lie along the upper half of the parabola line. Hence, given a risk, the optimal portfolio is one corresponding to a point on the upper border of the filled parabola.
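As a small follow-up sketch (using the sigma and exp_return lists from the loop above), we can pick out the sampled portfolio with the lowest risk; to recover its weights you would also have to store each w inside the loop:

import numpy as np

sigma_arr = np.array(sigma)
return_arr = np.array(exp_return)
idx = np.argmin(sigma_arr)  # index of the sampled minimum-risk portfolio
print("risk:", sigma_arr[idx], "return:", return_arr[idx])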

Considerations

The Efficient Frontier gives you a way to balance your portfolio. The above code can, by trial and error, find such a portfolio, but it still leaves out some considerations.

How often should you re-balance? It has a cost to do that.

The theory behind it has some assumptions that may not match reality. As investopedia points out, it assumes that asset returns follow a normal distribution, but in reality returns can be more than 3 standard deviations away. Also, the theory builds on the assumption that investors are rational in their investments, which is considered flawed by most, as more factors play into the investments.

The full source code

Below here you find the full source code from the tutorial.

import pandas_datareader as pdr
import datetime as dt
import pandas as pd
from dateutil.relativedelta import relativedelta
import matplotlib.pyplot as plt
import numpy as np


years = 5
end_date = dt.datetime.now()
start_date = end_date - relativedelta(years=years)
close_price = pd.DataFrame()
tickers = ['AAPL', 'MSFT', 'IBM', 'NVDA']
for ticker in tickers:
    tmp = pdr.get_data_yahoo(ticker, start_date, end_date)
    close_price[ticker] = tmp['Close']

returns = close_price / close_price.shift(1)
cagr = (close_price.iloc[-1] / close_price.iloc[0]) ** (1 / years) - 1
cov = returns.cov()

def random_weights(n):
    k = np.random.rand(n)
    return k / sum(k)

exp_return = []
sigma = []
for _ in range(20000):
    w = random_weights(len(tickers))
    exp_return.append(np.dot(w, cagr.T))
    sigma.append(np.sqrt(np.dot(np.dot(w.T, cov), w)))

plt.plot(sigma, exp_return, 'ro', alpha=0.1)
plt.show()

RandomForestClassifier: Predict Stock Market Direction

What will we cover in this tutorial?

A Forest Classifier is an approach to minimize the overfitting a single Decision Tree is prone to. A forest classifier simply contains a set of decision trees and uses majority voting to make the prediction.

In this tutorial we will try to use that on the stock market by creating a few indicators. This tutorial will give a framework to explore whether it can predict the direction of a stock: given a set of indicators, will the stock go up or down the next trading day?

This is a simplified version of the problem of predicting the actual stock value the next day.

Step 1: Getting data and calculate some indicators

If you are new to stock indicators, we highly recommend reading about the MACD, RSI, and Stochastic Oscillator, where the MACD article also covers how to calculate the EMA. Here we assume familiarity with those indicators, and that you are familiar with Pandas DataFrames and Pandas-datareader.

import pandas_datareader as pdr
import datetime as dt
import numpy as np

ticker = "^GSPC" # The S&P 500 index
data = pdr.get_data_yahoo(ticker, dt.datetime(2010,1,1), dt.datetime.now(), interval='d')

# Calculate the EMA10 > EMA30 signal
ema10 = data['Close'].ewm(span=10).mean()
ema30 = data['Close'].ewm(span=30).mean()
data['EMA10gtEMA30'] = np.where(ema10 > ema30, 1, -1)

# Calculate where Close is > EMA10
data['ClGtEMA10'] = np.where(data['Close'] > ema10, 1, -1)

# Calculate the MACD signal
exp1 = data['Close'].ewm(span=12).mean()
exp2 = data['Close'].ewm(span=26).mean()
macd = exp1 - exp2
macd_signal = macd.ewm(span=9).mean()
data['MACD'] = macd_signal - macd

# Calculate RSI
delta = data['Close'].diff()
up = delta.clip(lower=0)
down = -1*delta.clip(upper=0)
ema_up = up.ewm(com=13, adjust=False).mean()
ema_down = down.ewm(com=13, adjust=False).mean()
rs = ema_up/ema_down
data['RSI'] = 100 - (100/(1 + rs))

# Stochastic Oscillator
high14 = data['High'].rolling(14).max()
low14 = data['Low'].rolling(14).min()
data['%K'] = (data['Close'] - low14)*100/(high14 - low14)

# Williams Percentage Range
data['%R'] = -100*(high14 - data['Close'])/(high14 - low14)

days = 6

# Price Rate of Change
ct_n = data['Close'].shift(days)
data['PROC'] = (data['Close'] - ct_n)/ct_n

print(data)

The choice of indicators is arbitrary but among some popular ones. It should be up to you to change them to other indicators and experiment with them.

                  High         Low        Open       Close       Volume   Adj Close  EMA10gtEMA30  ClGtEMA10      MACD         RSI         %K        %R      PROC
Date                                                                                                                                                             

2020-08-17  3387.590088  3379.219971  3380.860107  3381.989990  3671290000  3381.989990             1          1 -2.498718   68.294286  96.789344  -3.210656  0.009164
2020-08-18  3395.060059  3370.149902  3387.040039  3389.780029  3881310000  3389.780029             1          1 -1.925573   69.176468  97.234576  -2.765424  0.008722
2020-08-19  3399.540039  3369.659912  3392.510010  3374.850098  3884480000  3374.850098             1          1 -0.034842   65.419555  86.228281 -13.771719  0.012347
2020-08-20  3390.800049  3354.689941  3360.479980  3385.510010  3642850000  3385.510010             1          1  0.949607   66.805725  87.801036 -12.198964  0.001526
2020-08-21  3399.959961  3379.310059  3386.010010  3397.159912  3705420000  3397.159912             1          1  1.249066   68.301209  97.534948  -2.465052  0.007034

Step 2: Understand how the Decision Tree works

Trees are the foundation of the Forest; or rather, Decision Trees are the foundation of a Forest Classifier. Hence, it is a good starting point to understand how a Decision Tree works. Luckily, they are quite easy to understand.

Let’s try to investigate a Decision Tree that is based on two of the indicators above. We take the RSI (Relative Strength Index) and %K (Stochastic Oscillator). A Decision Tree could look like this (depending on the training data).

Decision Tree for %K and RSI

When we get a new data row with %K and RSI indicators, it will start at the top of the Decision Tree.

  • At the first node it will check if %K <= 4.615; if so, take the left child, otherwise the right child.
  • The gini tells us how often a randomly chosen element would be incorrectly labeled. Hence, a low value close to 0 is good (see the small sketch after this list).
  • Samples tells us how many of the samples in the training set reached this node.
  • Finally, value tells us how the values are distributed. In the final decision nodes, the category with the most elements is the prediction.
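To make the gini value concrete, here is a minimal sketch of the standard Gini impurity formula, 1 minus the sum of squared class proportions (my own illustration, not from the article):

def gini(counts):
    # counts: number of samples per class in a node, e.g. [52, 48]
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

print(gini([50, 50]))  # 0.5 - maximally mixed node
print(gini([100, 0]))  # 0.0 - pure node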

Looking at the above Decision Tree, it does not seem to be very good. The majority of samples end up in the fifth node with a gini of 0.498, close to random, right? And it will label it 1, growth.

But this is the idea with Forest Classifiers: take a bunch of Decision Trees, which individually might not be good, and use the majority vote of them to classify.

Step 3: Create the Forest Classifier

Now that we understand how the Decision Tree and the Forest Classifier work, we just need to run the magic, as it is all done by calling a library function.

import pandas_datareader as pdr
import datetime as dt
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, accuracy_score
from sklearn.ensemble import RandomForestClassifier


ticker = "^GSPC"
data = pdr.get_data_yahoo(ticker, dt.datetime(2010,1,1), dt.datetime.now(), interval='d')

# Calculate the EMA10 > EMA30 signal
ema10 = data['Close'].ewm(span=10).mean()
ema30 = data['Close'].ewm(span=30).mean()
data['EMA10gtEMA30'] = np.where(ema10 > ema30, 1, -1)

# Calculate where Close is > EMA10
data['ClGtEMA10'] = np.where(data['Close'] > ema10, 1, -1)

# Calculate the MACD signal
exp1 = data['Close'].ewm(span=12).mean()
exp2 = data['Close'].ewm(span=26).mean()
macd = exp1 - exp2
macd_signal = macd.ewm(span=9).mean()
data['MACD'] = macd_signal - macd

# Calculate RSI
delta = data['Close'].diff()
up = delta.clip(lower=0)
down = -1*delta.clip(upper=0)
ema_up = up.ewm(com=13, adjust=False).mean()
ema_down = down.ewm(com=13, adjust=False).mean()
rs = ema_up/ema_down
data['RSI'] = 100 - (100/(1 + rs))

# Stochastic Oscillator
high14 = data['High'].rolling(14).max()
low14 = data['Low'].rolling(14).min()
data['%K'] = (data['Close'] - low14)*100/(high14 - low14)

# Williams Percentage Range
data['%R'] = -100*(high14 - data['Close'])/(high14 - low14)

days = 6

# Price Rate of Change
ct_n = data['Close'].shift(days)
data['PROC'] = (data['Close'] - ct_n)/ct_n

# Set class labels to classify
data['Return'] = data['Close'].pct_change(1).shift(-1)
data['class'] = np.where(data['Return'] > 0, 1, 0)

# Clean for NAN rows
data = data.dropna()
# Minimize dataset
data = data.iloc[-200:]


# Data to predict
predictors = ['EMA10gtEMA30', 'ClGtEMA10', 'MACD', 'RSI', '%K', '%R', 'PROC']
X = data[predictors]
y = data['class']

# Split data into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30)

# Train the model
rfc = RandomForestClassifier(random_state=0)
rfc = rfc.fit(X_train, y_train)

# Test the model by doing some predictions
y_pred = rfc.predict(X_test)

# See how accurate the predictions are
report = classification_report(y_test, y_pred)
print('Model accuracy', accuracy_score(y_test, y_pred, normalize=True))
print(report)

First some notes on a few lines. train_test_split divides the data into a training set and a test set, in a randomized way; the test set is set to be 30% of the data.

Next we create a RandomForestClassifier and fit it.

Then we use our newly created classifier (rfc) to predict on the test set (X_test).

Finally, we calculate the accuracy and the report.

Model accuracy 0.6333333333333333
              precision    recall  f1-score   support

           0       0.56      0.38      0.45        24
           1       0.66      0.81      0.73        36

    accuracy                           0.63        60
   macro avg       0.61      0.59      0.59        60
weighted avg       0.62      0.63      0.62        60

The model accuracy is 0.63, which seems quite good. It is better than random, at least. You can also see that the precision of 1 (growth) is higher than 0 (loss, or negative growth), with 0.66 and 0.56, respectively.

Does that mean it is all good and we can beat the market?

No, far from it. Also, notice I chose to only use the last 200 trading days in my experiment out of the 2,500+ available.

Running a few experiments showed that the prediction accuracy was close to 50% if all days were used. That means, basically, it was not possible to predict.
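One simple sanity check (a small sketch, not part of the original code) is to compare the model accuracy against always predicting the majority class of the test set; on an imbalanced split this baseline can already be above 0.50:

# Baseline: always predict the majority class of the test set
majority_baseline = max(y_test.mean(), 1 - y_test.mean())
print('Majority-class baseline', majority_baseline)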

Step 4: A few more tests on stocks

I have run a few experiments on different stocks and also varying the number of days used.

Stock     100 days  200 days  400 days
S&P 500       0.53      0.63      0.52
AAPL          0.53      0.62      0.54
F             0.67      0.57      0.54
KO            0.47      0.52      0.53
IBM           0.57      0.52      0.57
MSFT          0.50      0.50      0.48
AMZN          0.57      0.47      0.58
TSLA          0.50      0.60      0.53
NVDA          0.57      0.53      0.54

The accuracy

Looking at the above table I am not convinced about my hypotheses. First, that 200 days is better might be specific to the stock. Also, if you re-run the tests you get new numbers, as the training and test datasets differ from run to run.

I did try a few with the full dataset, and I still think it performed worse (all close to 0.50).

The above looks fine, as the model can mostly predict better than just guessing. But there are still a few cases where that does not hold.

Next steps

A few things to remember here.

Firstly, the indicators are chosen somewhat arbitrarily from among the common ones. A further investigation of this could be an idea; an indicator that does not help the prediction can highly bias the results.

Secondly, I might have falsely hypothesized that the model was more accurate when we limited the data to a smaller set than the original.

Thirdly, the stocks may also have a bias in one direction. If we limit ourselves to a smaller period, a bull market will primarily have growth days; hence a biased guess on growth will do better than 0.50. This factor should be investigated further, to see whether it favors the predictions.

From HTML Tables to Excel with Pandas: Free Cash Flow and Revenue of Microsoft

What will we cover in this tutorial?

Yes, you can do it manually: copy from an HTML table and paste into an Excel spreadsheet. Or you can dive into how to pull data directly from the internet into Excel. Sometimes that is not convenient, as some data needs to be transformed and you need to do it often.

In this tutorial we will show how this can be easily automated with Python using Pandas.

That is, we go from data that needs to be transformed, like $102,000 into 102000. We also show how to join (or merge) different data sources before we create an Excel spreadsheet.

Step 1: The first data source: Revenue of Microsoft

There are many sources where you can get this data, but Macrotrends has it nicely in a table, with more than 10 years of data.

First things first, let's take a look at the data. You can use Pandas read_html to get the data from the tables on a page, given a URL.

import pandas as pd


url = "https://www.macrotrends.net/stocks/charts/MSFT/microsoft/revenue"
tables = pd.read_html(url)

revenue = tables[0]
print(revenue)

Where we know it is in the first table on the page. The first few lines of the output are given here.

    Microsoft Annual Revenue(Millions of US $) Microsoft Annual Revenue(Millions of US $).1
0                                         2020                                     $143,015
1                                         2019                                     $125,843
2                                         2018                                     $110,360
3                                         2017                                      $96,571
4                                         2016                                      $91,154

The first things to manage are the column names and setting the year as the index.

import pandas as pd


url = "https://www.macrotrends.net/stocks/charts/MSFT/microsoft/revenue"
tables = pd.read_html(url)

revenue = tables[0]
revenue.columns = ['Year', 'Revenue']
revenue = revenue.set_index('Year')
print(revenue)

The first few lines.

      Revenue
Year          
2020  $143,015
2019  $125,843
2018  $110,360
2017   $96,571
2016   $91,154

That helped. But then we need to convert the Revenue column to integers. This is a bit tricky and can be done in various ways. We first need to remove the $-sign, then the comma-sign, before we convert it.

revenue['Revenue'] = pd.to_numeric(revenue['Revenue'].str[1:].str.replace(',',''), errors='coerce')

And that covers it.

Step 2: Getting another data source: Free Cash Flow for Microsoft

We want to combine this data with the Free Cash Flow (FCF) of Microsoft.

The data can be gathered the same way, and the column name and index can be set similarly.

import pandas as pd


url = "https://www.macrotrends.net/stocks/charts/MSFT/microsoft/free-cash-flow"
tables = pd.read_html(url)
fcf = tables[0]
fcf.columns = ['Year', 'FCF']
fcf = fcf.set_index('Year')
print(fcf)

The first few lines are as follows.

          FCF
Year
2020  45234.0
2019  38260.0
2018  32252.0
2017  31378.0
2016  24982.0

All ready to be joined with the other data.

import pandas as pd


url = "https://www.macrotrends.net/stocks/charts/MSFT/microsoft/revenue"
tables = pd.read_html(url)

revenue = tables[0]
revenue.columns = ['Year', 'Revenue']
revenue = revenue.set_index('Year')
revenue['Revenue'] = pd.to_numeric(revenue['Revenue'].str[1:].str.replace(',',''), errors='coerce')

# print(revenue)

url = "https://www.macrotrends.net/stocks/charts/MSFT/microsoft/free-cash-flow"
tables = pd.read_html(url)
fcf = tables[0]
fcf.columns = ['Year', 'FCF']
fcf = fcf.set_index('Year')

data = revenue.join(fcf)

# Let's reorder it
data = data.iloc[::-1].copy()

Where we also reorder it, to have the early years at the top. Notice the copy(), which is not strictly necessary, but makes a hard copy of the data and not just a view.

      Revenue      FCF
Year                  
2005    39788  15793.0
2006    44282  12826.0
2007    51122  15532.0
2008    60420  18430.0
2009    58437  15918.0

Wow. Ready to export.

Step 3: Exporting it to Excel

This is too easy to have an entire section for it.

data.to_excel('Output.xlsx')

Isn't it beautiful? Of course, you need to execute this after all the lines above.

In total.

import pandas as pd


url = "https://www.macrotrends.net/stocks/charts/MSFT/microsoft/revenue"
tables = pd.read_html(url)

revenue = tables[0]
revenue.columns = ['Year', 'Revenue']
revenue = revenue.set_index('Year')
revenue['Revenue'] = pd.to_numeric(revenue['Revenue'].str[1:].str.replace(',',''), errors='coerce')

# print(revenue)

url = "https://www.macrotrends.net/stocks/charts/MSFT/microsoft/free-cash-flow"
tables = pd.read_html(url)
fcf = tables[0]
fcf.columns = ['Year', 'FCF']
fcf = fcf.set_index('Year')

data = revenue.join(fcf)

# Let's reorder it
data = data.iloc[::-1].copy()

# Export to Excel
data.to_excel('Output.xlsx')

Which will result in an Excel spread sheet called Output.xlsx.

The Excel spreadsheet. I added the graph.

There are many things you might find easier in Excel, like playing around with different types of visualization. On the other hand, there might be many aspects you find easier in Python. I know I do. Almost all of them. Not kidding. Still, Excel is a powerful tool which is utilized by many specialists, and Python skills seem to be in demand in connection with Excel.

Multiple Time Frame Analysis on a Stock using Pandas

What will we investigate in this tutorial?

A key element to success in trading is to understand the market and the trend of the stock before you buy it. In this tutorial we will not cover how to read the market, but take a top-down analysis approach to stock prices. We will use what is called Multiple Time Frame Analysis on a stock starting with a 1-month, 1-week, and 1-day perspective. Finally, we will compare that with a Simple Moving Average with a monthly view.

Step 1: Gather the data with different time frames

We will use the Pandas-datareader library to collect the time series of a stock. The library has an endpoint to read data from Yahoo! Finance, which we will use as it does not require registration and can deliver the data we need.

import pandas_datareader as pdr
import datetime as dt


ticker = "MSFT"
start = dt.datetime(2019, 1, 1)
end = dt.datetime.now()
day = pdr.get_data_yahoo(ticker, start, end, interval='d')
week = pdr.get_data_yahoo(ticker, start, end, interval='wk')
month = pdr.get_data_yahoo(ticker, start, end, interval='mo')

Where the key is to set the interval to ‘d’ (Day), ‘wk’ (Week), and ‘mo’ (Month).

This will give us 3 DataFrames, each indexed with different intervals.

Daily.

                  High         Low  ...      Volume   Adj Close
Date                                ...                        
2019-01-02  101.750000   98.940002  ...  35329300.0   98.860214
2019-01-03  100.190002   97.199997  ...  42579100.0   95.223351
2019-01-04  102.510002   98.930000  ...  44060600.0   99.652115
2019-01-07  103.269997  100.980003  ...  35656100.0   99.779205
2019-01-08  103.970001  101.709999  ...  31514400.0  100.502670

Weekly.

                  High         Low  ...       Volume   Adj Close
Date                                ...                         
2019-01-01  103.269997   97.199997  ...  157625100.0   99.779205
2019-01-08  104.879997  101.260002  ...  150614100.0   99.769432
2019-01-15  107.900002  101.879997  ...  127262100.0  105.302940
2019-01-22  107.879997  104.660004  ...  142112700.0  102.731720
2019-01-29  106.379997  102.169998  ...  203449600.0  103.376968

Monthly.

                  High         Low  ...        Volume   Adj Close
Date                                ...                          
2019-01-01  107.900002   97.199997  ...  7.142128e+08  102.096245
2019-02-01  113.239998  102.349998  ...  4.690959e+08  109.526405
2019-03-01  120.820000  108.800003  ...  5.890958e+08  115.796768
2019-04-01  131.369995  118.099998  ...  4.331577e+08  128.226700
2019-05-01  130.649994  123.040001  ...  5.472188e+08  121.432449
2019-06-01  138.399994  119.010002  ...  5.083165e+08  132.012497

Step 2: Combine data and interpolate missing points

The challenge in connecting the DataFrames is that they have different index entries. If we combine the Daily data points with the Weekly ones, there will be a lot of entries that Daily has but Weekly does not.

                   day        week
Date                              
2019-01-02  101.120003         NaN
2019-01-03   97.400002         NaN
2019-01-04  101.930000         NaN
2019-01-07  102.059998         NaN
2019-01-08  102.800003  102.050003
...                ...         ...
2020-08-13  208.699997         NaN
2020-08-14  208.899994         NaN
2020-08-17  210.279999         NaN
2020-08-18  211.490005  209.699997
2020-08-19  209.699997  209.699997

To deal with that we can choose to interpolate by using the DataFrame interpolate function.

import pandas_datareader as pdr
import datetime as dt
import pandas as pd


ticker = "MSFT"
start = dt.datetime(2019, 1, 1)
end = dt.datetime.now()
day = pdr.get_data_yahoo(ticker, start, end, interval='d')
week = pdr.get_data_yahoo(ticker, start, end, interval='wk')
month = pdr.get_data_yahoo(ticker, start, end, interval='mo')

data = pd.DataFrame()
data['day'] = day['Close']
data['week'] = week['Close']
data['week'] = data['week'].interpolate(method='linear')
print(data)

Which results in the following output.

                   day        week
Date                              
2019-01-02  101.120003         NaN
2019-01-03   97.400002         NaN
2019-01-04  101.930000         NaN
2019-01-07  102.059998         NaN
2019-01-08  102.800003  102.050003
...                ...         ...
2020-08-13  208.699997  210.047998
2020-08-14  208.899994  209.931998
2020-08-17  210.279999  209.815997
2020-08-18  211.490005  209.699997
2020-08-19  209.699997  209.699997

Where the missing points (except the first entry) will be linearly interpolated in between. This can be done for months as well, but we need to be more careful because of three things.

  • First, some dates (the 1st of the month) do not exist in the data DataFrame. To solve that we use an outer join, which will include them.
  • Second, this introduces some extra dates which are not trading dates. Hence, we need to delete them afterwards, which we can do by deleting the column (drop) and removing rows with NA values (dropna).
  • Third, we also need to understand that the monthly view looks backwards. Hence, the value for the 1st of January is first finalized on the last day of January. Therefore we shift it back in the join.

import pandas_datareader as pdr
import datetime as dt
import pandas as pd


ticker = "MSFT"
start = dt.datetime(2019, 1, 1)
end = dt.datetime.now()
day = pdr.get_data_yahoo(ticker, start, end, interval='d')
week = pdr.get_data_yahoo(ticker, start, end, interval='wk')
month = pdr.get_data_yahoo(ticker, start, end, interval='mo')


data = pd.DataFrame()
data['day'] = day['Close']
data['week'] = week['Close']
data['week'] = data['week'].interpolate(method='index')
data = data.join(month['Close'].shift(), how='outer')
data['month'] = data['Close'].interpolate(method='index')
data = data.drop(columns=['Close']).dropna()
data['SMA20'] = data['day'].rolling(20).mean()

Step 3: Visualize the output and take a look at it

Visualizing it is straightforward using Matplotlib.

import pandas_datareader as pdr
import datetime as dt
import matplotlib.pyplot as plt
import pandas as pd


ticker = "MSFT"
start = dt.datetime(2019, 1, 1)
end = dt.datetime.now()
day = pdr.get_data_yahoo(ticker, start, end, interval='d')
week = pdr.get_data_yahoo(ticker, start, end, interval='wk')
month = pdr.get_data_yahoo(ticker, start, end, interval='mo')


data = pd.DataFrame()
data['day'] = day['Close']
data['week'] = week['Close']
data['week'] = data['week'].interpolate(method='index')
data = data.join(month['Close'].shift(), how='outer')
data['month'] = data['Close'].interpolate(method='index')
data = data.drop(columns=['Close']).dropna()

data.plot()
plt.show()

Which results in the following graph.

As expected, the monthly price is adjusted to be the closing day-price of the day before. Hence, it looks like the monthly curve crosses the day curve on the 1st of every month (which is almost true).

To really appreciate Multiple Time Frame Analysis, it is better to keep the graphs separate and interpret each of them in isolation.

Step 4: How to use these different Multiple Time Frame Analysis

Given the picture, it is a good idea to start top-down. First look at the monthly picture, which shows the overall trend.

Month view of MSFT.

In the case of MSFT it is a clear growing trend, with the exception of two declines. But the overall impression is a company in growth that does not seem to slow down. Even the Dow theory (see this tutorial on it) suggests that there will be secondary movements in a general bull trend.

Secondly, we will look at the weekly view.

Weekly view of MSFT

Here your impression is a bit more volatile. It shows many smaller ups and downs, with a big one in March 2020. It could also indicate a small decline in the growth right at the end. Also, the Dow theory could suggest that it will turn, though it is not certain.

Finally, the daily view gives a more volatile picture, which can be used to decide when to enter the market.

Day view of MSFT

Here you could also be a bit worried. Is this the start of a smaller bear market?

To sum up: in the month-view we have concluded growth. The week-view shows signs of a possible change. Finally, the day-view also shows signs of a possible decline.

As an investor, and based on the above, I would not enter the market right now. If both the month-view and week-view showed growth while the day-view showed a decline, that would be a good indicator. You want the top level to show growth, while a day-view might show a small decline.

Finally, remember that you should not use just one way of interpretation to decide whether to enter the market.

Step 5: Is monthly the same as a Simple Moving Average?

Good question, I am glad you asked. The Simple Moving Average (SMA) can be calculated easily with DataFrames using the rolling and mean functions.

The best way is to just try it.

import pandas_datareader as pdr
import datetime as dt
import matplotlib.pyplot as plt
import pandas as pd


ticker = "MSFT"
start = dt.datetime(2019, 1, 1)
end = dt.datetime.now()
day = pdr.get_data_yahoo(ticker, start, end, interval='d')
week = pdr.get_data_yahoo(ticker, start, end, interval='wk')
month = pdr.get_data_yahoo(ticker, start, end, interval='mo')


data = pd.DataFrame()
data['day'] = day['Close']
data['week'] = week['Close']
data['week'] = data['week'].interpolate(method='index')
data = data.join(month['Close'].shift(), how='outer')
data['month'] = data['Close'].interpolate(method='index')
data = data.drop(columns=['Close']).dropna()
data['SMA20'] = data['day'].rolling(20).mean()

data.plot()
plt.show()

As you see, the SMA is not as reactive to the crisis in March 2020 as the monthly view is. This shows a difference between them. It does not exclude the one from the other, but shows a difference in how they react.

Comparing the month-view with a Simple Moving Average of a month (20 trade days)

Please remember that the monthly view is first updated at the end of a month, while the SMA is updated on a daily basis.

Another difference is that the SMA is an average of the last 20 days, while the monthly value is the actual value of the last day of a month (as we look at Close). This implies that the monthly view can be much more volatile than the SMA.

Conclusion

It is advised to start the analysis from bigger time frames and zoom in. This way you first look at the overall trends and get a bigger picture of the market. This should keep you from getting focused on a small detail in the market, and instead help you understand it on a higher level.

Master Dow Theory with Python Pandas

What will we cover in this tutorial?

Dow theory was proposed by Charles H. Dow and is not an exact science. It is more about how to identify trends in the market. In this tutorial we investigate the approach by testing it on data. Notice that there are various ways to interpret it, and often it is done by visual approximation, while we in this tutorial will make some rough assumptions to see if it beats the buy-and-hold approach for a stock.

First we will state our assumptions on how to implement the Dow theory approach to generate buy and sell indicators, which we will then use as buy and sell markers in the market.

Step 1: Understand the Dow theory to make buy and sell indicators

The essence of Dow theory is that there are 3 types of trend in the market. The primary trend lasts a year or more, like a bull market. On top of that, in a secondary trend the market can move in the opposite direction for 3 weeks to 3 months. This can result in a pullback that can seem like a bear market within the bull market. Finally, there are micro trends (less than 3 weeks), which can be considered noise.

According to Dow theory each market has 3 phases. Our objective as an investor is to identify when a bear market turns into a bull market.

A few visual examples will help to understand the above. A general bull market with primary and secondary trends could look like this.

Primary bull market trend with secondary bear market trends.

Where you should notice that the temporary lows are all increasing along the way.

A similar picture for a bear market could look like this.

Primary bear market trend with secondary bull market trends.

Here you should notice how the peaks of the secondary bull markets are also in a decreasing trend.

Step 2: Identify when a primary market trend changes

The key here is to identify when a primary stock trend goes from bull to bear or vice versa.

Please also notice that Dow theory talks about the market, while we here are looking at a single stock. Hence, we assume that the market and the stock are correlated strongly enough to apply the same theory.

Going from a primary bear to a primary bull market could look as follows.

From bear to bull market

We have added some markers in the diagram.

  • LL : Lower Low – meaning that the low is lower than the previous low.
  • LH : Lower High – meaning that the high is lower than the previous high.
  • HH : Higher High – meaning that the high is higher than the previous high.
  • HL : Higher Low – meaning that the low is higher than the previous low.

As you can see, the bear market consists of consecutive LL and LH, while a bull market consists of consecutive HH and HL. The market changes from bear to bull when we can confidently say that we will get a HH, which we can do when the price crosses from the last LL up above the last LH (before we actually reach the HH).

Hence, a buy signal can be set when the stock price rises above the last LH.

Similarly, we can investigate when a primary trend goes from a bull to a bear market.

From bull to a bear trend.

Where we have the same types of markers.

We see that the trend changes from bull to bear when we go from HL to LL. Hence, a sell signal is when we are sure we will reach a LL, that is, when the price falls below the last HL (before the LL is confirmed).
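To make the two rules concrete before we test them, here is a minimal sketch (the function and parameter names are illustrative; the actual implementation follows in Step 4).

def dow_signal(close, last_high, last_low, has_stock):
    """Return 'buy', 'sell' or None based on the simplified Dow rules.

    close      -- today's closing price
    last_high  -- the most recent local high (the last LH in a bear market)
    last_low   -- the most recent local low (the last HL in a bull market)
    has_stock  -- whether we currently hold the stock
    """
    if not has_stock and close > last_high:
        return "buy"   # a HH is coming: the bear trend may be turning bullish
    if has_stock and close < last_low:
        return "sell"  # a LL is coming: the bull trend may be turning bearish
    return None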

Again, this is not an exact science and is just a way to interpret it. We will try it out on real stock data to see how it performs.

Step 3: Get some data and calculate points of lows and highs

We will use Pandas-datareader to get the time series data from Yahoo! Finance.

import pandas_datareader as pdr
import datetime as dt


ticker = pdr.get_data_yahoo("TWTR", dt.datetime(2020,1,1), dt.datetime.now())

print(ticker)

Resulting in a time series for Twitter, which has the ticker TWTR. You can find other tickers for other companies by using the Yahoo! Finance ticker lookup.

                 High        Low       Open      Close    Volume  Adj Close
Date                                                                       
2020-01-02  32.500000  31.959999  32.310001  32.299999  10721100  32.299999
2020-01-03  32.099998  31.260000  31.709999  31.520000  14429500  31.520000
2020-01-06  31.709999  31.160000  31.230000  31.639999  12582500  31.639999
2020-01-07  32.700001  31.719999  31.799999  32.540001  13712900  32.540001
2020-01-08  33.400002  32.349998  32.349998  33.049999  14632400  33.049999
...               ...        ...        ...        ...       ...        ...
2020-08-12  38.000000  36.820000  37.500000  37.439999  11013300  37.439999
2020-08-13  38.270000  37.369999  37.430000  37.820000  13259400  37.820000
2020-08-14  37.959999  37.279999  37.740002  37.900002  10377300  37.900002
2020-08-17  38.090000  37.270000  37.950001  37.970001  10188500  37.970001
2020-08-18  38.459999  37.740002  38.279999  38.009998   8548300  38.009998

The first thing we need is to find the lows and highs. The first challenge is that the stock price moves up and down during the day. To simplify our investigation, we will only use the Close price.

This decision might limit us and not give fully correct results, but it surely simplifies our work.

Next up, we need to identify the highs and lows. This can be done by looking at when the daily difference changes from positive to negative or vice versa.

import pandas_datareader as pdr
import datetime as dt


ticker = pdr.get_data_yahoo("TWTR", dt.datetime(2020,1,1), dt.datetime.now())

# The day-to-day change of the Close price
ticker['delta'] = ticker['Close'].diff()
# True on days where the price went up
growth = ticker['delta'] > 0
# True where the direction changed; shift(-1) moves the marker onto the turning day
ticker['markers'] = growth.diff().shift(-1)

print(ticker)

Please notice the shift(-1), as it moves the indicator onto the day of the change.

                 High        Low       Open      Close     Volume  Adj Close     delta markers
Date
...               ...        ...        ...        ...        ...        ...       ...     ...
2020-08-05  37.340000  36.410000  36.560001  36.790001   10052100  36.790001  0.440002   False
2020-08-06  37.810001  36.490002  36.849998  37.689999   10478900  37.689999  0.899998    True
2020-08-07  38.029999  36.730000  37.419998  37.139999   11335100  37.139999 -0.549999    True
2020-08-10  39.169998  37.310001  38.360001  37.439999   29298400  37.439999  0.299999    True
2020-08-11  39.000000  36.709999  37.590000  37.279999   20486000  37.279999 -0.160000    True
2020-08-12  38.000000  36.820000  37.500000  37.439999   11013300  37.439999  0.160000   False
2020-08-13  38.270000  37.369999  37.430000  37.820000   13259400  37.820000  0.380001   False
2020-08-14  37.959999  37.279999  37.740002  37.900002   10377300  37.900002  0.080002   False
2020-08-17  38.090000  37.270000  37.950001  37.970001   10188500  37.970001  0.070000   False
2020-08-18  38.459999  37.740002  38.279999  38.009998    8548300  38.009998  0.039997     NaN

In the output above, the True values mark the days where we reach highs or lows.
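To see why growth.diff().shift(-1) places the marker on the turning day, here is a tiny synthetic example (the prices are made up for illustration).

import pandas as pd

s = pd.Series([1, 2, 3, 2, 1, 2])   # a peak at 3 (index 2) and a trough at 1 (index 4)
growth = s.diff() > 0               # True on up-days
markers = growth.diff().shift(-1)   # True where the direction changes
# markers is True at the peak (index 2) and the trough (index 4);
# the True at index 0 is an artifact of the initial NaN in diff()
print(pd.DataFrame({'price': s, 'growth': growth, 'marker': markers}))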

Now we have identified all the potential HH, HL, LH, and LL points.
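If you want to inspect those turning points, a minimal sketch (reusing the ticker DataFrame from above) could be:

# Rows where the direction changed (the markers column computed above)
turning_points = ticker[ticker['markers'] == True]
# A positive delta on the turning day means a local high, otherwise a local low
highs = turning_points[turning_points['delta'] > 0]
lows = turning_points[turning_points['delta'] <= 0]
print(highs['Close'].head())
print(lows['Close'].head())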

Step 4: Implement a simple trial of sell and buy

We continue our example with Twitter and see how we perform.

Our strategy will be as follows.

  • We either have all our money invested in the stock or none of it. That is, either we hold stocks or we do not.
  • If we do not hold stocks, we buy if the stock price rises above the last high, meaning that a HH is coming.
  • If we do hold stocks, we sell if the stock price falls below the last low, meaning that a LL is coming.

This can mean that we enter the market at the end of a bull market. If you were to follow the theory completely, it suggests waiting until a bear market changes into a bull market.

import pandas_datareader as pdr
import datetime as dt


ticker = pdr.get_data_yahoo("TWTR", dt.datetime(2020,1,1), dt.datetime.now())

ticker['delta'] = ticker['Close'].diff()
growth = ticker['delta'] > 0
ticker['markers'] = growth.diff().shift(-1)

# We want to remember the last_high and last_low
# last_high starts at the max Close so we do not trigger a false buy
last_high = ticker['Close'].max()
last_low = 0.0
# Then set up our account; we either hold stocks or we do not
# We have a starting balance of $100,000
has_stock = False
balance = 100000
stocks = 0
for index, row in ticker.iterrows():
  # Buy and sell orders
  if not has_stock and row['Close'] > last_high:
    has_stock = True
    stocks = balance//row['Close']
    balance -= row['Close']*stocks
  elif has_stock and row['Close'] < last_low:
    has_stock = False
    balance += row['Close']*stocks
    stocks = 0

  # Update the last_high and last_low
  if row['markers'] == True:  # the last row's marker is NaN and must not count
    if row['delta'] > 0:
      last_high = row['Close']
    else:
      last_low = row['Close']


print("Dow returns", balance + stocks*ticker['Close'].iloc[-1])

# Compare this with a simple buy and hold approach.
buy_hold_stocks = 100000//ticker['Close'].iloc[0]
buy_hold = 100000 - buy_hold_stocks*ticker['Close'].iloc[0] + buy_hold_stocks*ticker['Close'].iloc[-1]
print("Buy-and-hold return", buy_hold)

Which gives the following results.

Dow returns 120302.0469455719
Buy-and-hold return 117672.44716644287

That looks promising, but it might just be luck. Hence, we want to validate it with other examples. The results show a return on investment of 20.3% using our Dow theory approach, while a simple buy-and-hold strategy gave 17.7%. This is over a span of less than 8 months.
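For reference, the percentages are simply the final balances relative to the starting balance of 100000:

initial = 100000
dow_final = 120302.0469455719
buy_hold_final = 117672.44716644287
print(round((dow_final / initial - 1) * 100, 1))       # 20.3
print(round((buy_hold_final / initial - 1) * 100, 1))  # 17.7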

What you would like to achieve with a strategy is to avoid big losses without losing out on gains. The above test alone does not tell us whether we achieve that.

Step 5: Try out some other tickers to test it

A first investigation is to check how the algorithm performs on other stocks. We make one small adjustment, as comparing against buying on day 1 might be quite unfair. If the price is low, it is an advantage, while if the price is high, it is a big disadvantage. The code below runs on multiple stocks and compares a Dow approach (as outlined in this tutorial), starting from the first buy, with a buy-and-hold approach entering on the same day. The exit of the market might also be unfair.

import pandas_datareader as pdr
import datetime as dt

def dow_vs_hold_and_buy(ticker_name):
  ticker = pdr.get_data_yahoo(ticker_name, dt.datetime(2020,1,1), dt.datetime.now())

  ticker['delta'] = ticker['Close'].diff()
  growth = ticker['delta'] > 0
  ticker['markers'] = growth.diff().shift(-1)

  # We want to remember the last_high and last_low
  # last_high starts at the max Close so we do not trigger a false buy
  last_high = ticker['Close'].max()
  last_low = 0.0
  # Then set up our account; we either hold stocks or we do not
  # We have a starting balance of $100,000
  has_stock = False
  balance = 100000
  stocks = 0
  first_buy = None
  for index, row in ticker.iterrows():
    # Buy and sell orders
    if not has_stock and row['Close'] > last_high:
      has_stock = True
      stocks = balance//row['Close']
      balance -= row['Close']*stocks
      if first_buy is None:
        first_buy = index
    elif has_stock and row['Close'] < last_low:
      has_stock = False
      balance += row['Close']*stocks
      stocks = 0

    # Update the last_high and last_low
    if row['markers'] == True:  # the last row's marker is NaN and must not count
      if row['delta'] > 0:
        last_high = row['Close']
      else:
        last_low = row['Close']

  dow_returns = balance + stocks*ticker['Close'].iloc[-1]

  # Guard: if no buy signal was ever triggered, there is nothing to compare
  if first_buy is None:
    print(ticker_name, "no buy signal triggered")
    return

  # Compare this with a simple buy and hold approach (entering on the first buy day).
  buy_hold_stocks = 100000//ticker['Close'].loc[first_buy]
  buy_hold_returns = 100000 - buy_hold_stocks*ticker['Close'].loc[first_buy] + buy_hold_stocks*ticker['Close'].iloc[-1]

  print(ticker_name, dow_returns > buy_hold_returns, round(dow_returns/1000 - 100, 1), round(buy_hold_returns/1000 - 100, 1))


tickers = ["TWTR", "AAPL", "TSLA", "BAC", "KO", "GM", "MSFT", "AMZN", "GOOG", "FB", "INTC", "T"]
for ticker in tickers:
  dow_vs_hold_and_buy(ticker)

Resulting in the following output: the ticker, whether the Dow approach beats buy-and-hold, the Dow return in percent, and the buy-and-hold return in percent.

TWTR   True  20.3  14.4
AAPL  False  26.4  52.3
TSLA   True 317.6 258.8
BAC    True -16.3 -27.2
KO     True  -8.2 -14.6
GM     True   8.9 -15.1
MSFT  False  26.2  32.1
AMZN  False  32.8  73.9
GOOG  False   7.1  11.0
FB     True  18.3  18.2
INTC  False -34.9 -18.4
T     False -25.3 -20.8

This paints a different picture. First, it seems more or less random whether it outperforms the buy-and-hold approach.

The one where it performs best is General Motors Company (GM), but that might be due to an unlucky market entry for buy-and-hold. The stock was high at the beginning of the year and then fell a lot. Hence, here the Dow approach helped to exit and re-enter the market correctly.

Intel Corporation (INTC) works a lot against us. There is a big loss (-18.4%), and it is not saved by our Dow theory algorithm. There was a big drop in stock value on the 24th of July, falling 20% from the previous day's close to the open. The Dow approach cannot save you from situations like that and will sell near the very bottom.
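To spot such overnight gaps yourself, a small sketch (using the same pandas-datareader call as above) could be:

import pandas_datareader as pdr
import datetime as dt

# Days where INTC opened more than 10% below the previous day's close
intc = pdr.get_data_yahoo("INTC", dt.datetime(2020, 1, 1), dt.datetime.now())
gap = intc['Open'] / intc['Close'].shift() - 1
print(gap[gap < -0.10])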

Apple (AAPL) is also missing out on a lot of gain. The stock grew strongly in 2020, with some challenges in March and after (when Corona hit). But looking at the buy and sell signals, it sells and then buys back in at a higher price, losing out on the gain.

Amazon (AMZN) seems to be the same story: general growth, but buying back in at prices higher than the previous sell, and losing out on profit.

Next steps and considerations

We have made some broad simplifications in our algorithm.

  • We only consider the Close value, while a normal way to find the markers is on an OHLC candlestick chart.
  • If we used the full span of the day's price, we might limit our losses with an earlier stop-loss order (see the sketch below).
  • This is not an exact science, and the trends might need a different way to identify them.

Hence, the above suggests the approach can be adjusted further to fit real life.
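As a rough illustration of the stop-loss idea from the list above, here is a minimal sketch (the function and the 5% threshold are hypothetical, not part of the algorithm in this tutorial).

def hit_stop_loss(day_low, buy_price, stop_loss_pct=0.05):
    """Return True if the intraday low breaches a stop-loss placed stop_loss_pct below the buy price."""
    return day_low <= buy_price * (1 - stop_loss_pct)

# Example: bought at 100, today's low was 94, so a 5% stop-loss would have triggered
print(hit_stop_loss(94.0, 100.0))  # True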

Another thing to keep in mind is that you should never base your investment decisions on only one indicator or algorithm.