## Python for Financial Analysis with Pandas

Learn **Python** for financial data analysis with **Pandas** (a Python library) in this free two-hour, eight-lesson online course.

The 8 lessons will get you started with **technical analysis** using **Python** and **Pandas**.

**The 8 lessons**

- Get to know **Pandas** with Python – how to get historical stock price data.
- Learn about **Series** from **Pandas** – how to make calculations with the data.
- Learn about **DataFrames** from **Pandas** – add, remove and enrich the data.
- Start visualizing data with **Matplotlib** – the best way to understand price data.
- Read data from **API**s – read data directly from pages like **Yahoo! Finance** the right way.
- Calculate the **Volatility** and **Moving Average** of a stock.
- Technical indicators: **MACD** and **Stochastic Oscillator** – easy with **Pandas**.
- Export it all into **Excel** – in multiple sheets with color-formatted cells and charts.

Sign up for my newsletter so you do not miss the next **FREE** online course.

## How to get the most out of this online course?

To get the most out of this course you should do the following.

- Download the **Jupyter Notebook** used – it also includes the CSV file – *(link below)*.
- Start your **Jupyter Notebook** *(it is free – see how to get it below)*.
- Watch one lesson.
- Try it yourself in **Jupyter Notebook**.

## New to Python + Jupyter Notebook + Anaconda?

If you are new to Python and Jupyter Notebook and want to get started, do the following.

- Go to **Anaconda** and download the Individual Edition.
- It will install **Python** and **Jupyter Notebook**. That is all you need to get started, and it is all free.
- Launch Anaconda.
- In Anaconda, launch Jupyter Notebook.
- Navigate in Jupyter Notebook to the Notebooks downloaded from the link (button) above.
- Alternatively, you can import them.

## Lesson 1: Get to know Pandas

In this part we will get familiar with working with **DataFrames** – the primary data structure in **Pandas**.

### In this lesson you will learn

We will learn how to read historical stock price data from **Yahoo! Finance** and load it into a **DataFrame**. This will be done by exporting a **CSV** file from **Yahoo! Finance** and loading the data. Later we will learn how to read the data directly from the **Yahoo! Finance API**.

A **DataFrame** is similar to an **Excel** sheet: it holds tabular data in rows and columns, as we will see in this lesson.

Then we will learn how to use the date index. This will be necessary when we make calculations later on.

The first part of the tutorial will give the foundation of what you need to know about **DataFrames** for financial analysis.

See the Notebook from the lesson here *(for an interactive version use the Download the Notebook link at the top of page)*.

### Lesson 1

In this first lesson we will download historical stock prices from **Yahoo! Finance** as **CSV** file and import them into our **Jupyter** notebook environment in a **DataFrame**.

If you are new to **CSV** files and **DataFrames**, don’t worry – that is what we will cover here.

Let’s start by going to **Yahoo! Finance** and downloading the **CSV** file. In this course we have used Apple, but feel free to make similar calculations on a stock of your choice.

Go to **Yahoo! Finance**, search for **AAPL** (the ticker for Apple), open **Historical Data** and download the **CSV** data file.

The **CSV** data file contains comma-separated values similar to this.

```
Date,Open,High,Low,Close,Adj Close,Volume
2020-03-02,70.570000,75.360001,69.430000,74.702499,74.127892,341397200
2020-03-03,75.917503,76.000000,71.449997,72.330002,71.773636,319475600
2020-03-04,74.110001,75.849998,73.282501,75.684998,75.102829,219178400
2020-03-05,73.879997,74.887497,72.852501,73.230003,72.666725,187572800
2020-03-06,70.500000,72.705002,70.307503,72.257500,71.701706,226176800
```

The first line shows the column names *(Date, Open, High, Low, Close, Adj Close, Volume)*. Then each line contains a data entry for a given day.

In Jupyter Notebook start by importing the **Pandas** library. This is needed in order to load the data into a DataFrame.

```python
import pandas as pd

data = pd.read_csv("AAPL.csv", index_col=0, parse_dates=True)
data.head()
```

The **read_csv(…)** function does all the magic for us. It will read the **CSV** file **AAPL.csv**. The **AAPL.csv** file is the one you downloaded from **Yahoo! Finance** (or from the zip-file downloaded above) and needs to be located in the folder you are working from in your **Jupyter Notebook**.

The arguments in **read_csv(…)** are the following.

- **index_col=0** sets the first column of the **CSV** file to be the index. In this case, it is the Date column.
- **parse_dates=True** ensures that dates in the CSV file are interpreted as dates. This is important if you want to take advantage of the index being a time.

In the video lesson we explore the type of columns and index.

```python
data.dtypes
data.index
```

We can use **loc** to lookup an index with a date.

```python
data.loc['2020-01-27']
```

This will show the data for that specific date. If you get an error it might be because your dataset does not contain the above date. Choose another one to see something similar to this.

```
Open         7.751500e+01
High         7.794250e+01
Low          7.622000e+01
Close        7.723750e+01
Adj Close    7.657619e+01
Volume       1.619400e+08
Name: 2020-01-27 00:00:00, dtype: float64
```

A more advanced option is to use an interval (or slice, as it is called). Slicing with **loc** on a DataFrame is done by using a starting and ending index, **.loc[start:end]**, or an open-ended index, **.loc[start:]**, which takes everything from start through the last row.

```python
data.loc['2021-01-01':]
```

This will give all the data starting from 2021-01-01. Notice that there is no data on January 1st, but since the index is interpreted as a **datetime**, Pandas can figure out the first date after it.

```
                  Open        High         Low       Close   Adj Close     Volume
Date
2021-01-04  133.520004  133.610001  126.760002  129.410004  129.410004  143301900
2021-01-05  128.889999  131.740005  128.429993  131.009995  131.009995   97664900
2021-01-06  127.720001  131.050003  126.379997  126.599998  126.599998  155088000
2021-01-07  128.360001  131.630005  127.860001  130.919998  130.919998  109578200
2021-01-08  132.429993  132.630005  130.229996  132.050003  132.050003  105158200
2021-01-11  129.190002  130.169998  128.500000  128.979996  128.979996  100620900
```

Similarly, you can create slicing with an open-ended start.

```python
data.loc[:'2020-07-01']
```
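Both forms can be combined into a bounded slice. A minimal sketch with made-up prices and dates (not from the AAPL file) shows that date slicing with **loc** includes both endpoints:

```python
import pandas as pd

# Small synthetic price series with a date index (values are made up for illustration)
idx = pd.date_range("2020-07-01", periods=5, freq="D")
data = pd.DataFrame({"Close": [10.0, 11.0, 12.0, 13.0, 14.0]}, index=idx)

# .loc slicing with date labels includes BOTH the start and the end date
window = data.loc["2020-07-02":"2020-07-04"]
print(len(window))  # 3 rows: July 2, 3 and 4
```

This label-inclusive behavior differs from ordinary Python slicing, which excludes the end.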

Another important way to index into DataFrames is **iloc[]**, which indexes by integer position.

```python
data.iloc[0]
data.iloc[-1]
```

You can index from the start with **0, 1, 2, 3, …** or from the end with **-1, -2, -3, -4, …**

See more details in the video lesson above.

## Lesson 2: Learn about columns and Series

In the first lesson we learned how to load data into a DataFrame. This part will show how to work with each column in the DataFrame. The columns are represented by a different data type, called **Series**.

### In this lesson you will learn

In this lesson we will learn how to make calculations on the columns. The columns are represented by a data type called **Series**.

Each column in a **DataFrame** is a **Series** and can be easily accessed. Also, it is easy to calculate new **Series** of data. This is similar to calculating new columns of data in an **Excel** sheet.

We will explore that and more in this lesson.

See the Notebook from the lesson here *(for an interactive version use the Download the Notebook link at the top of page)*.

### Lesson 2

We will start by importing the data like in lesson 1.

```python
import pandas as pd

data = pd.read_csv("AAPL.csv", index_col=0, parse_dates=True)
```

In the video we explore the data to ensure it is correct. You can do that by using **data.head()**.

Then we investigate the data type of the columns of the **DataFrame** **data**.

```python
data.dtypes
```

Which results in the following.

```
Open         float64
High         float64
Low          float64
Close        float64
Adj Close    float64
Volume         int64
dtype: object
```

This shows that each column has one data type. Here **Open** is **float64**. This is one difference from Excel sheets, where each individual cell can have its own data type. The advantage of restricting each column to a single data type is speed.
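To see why one type per column matters, here is a small sketch with made-up data: a column that mixes numbers and text falls back to the generic **object** dtype, which Pandas has to process element by element.

```python
import pandas as pd

# A clean float column versus a column with mixed types (illustrative toy data)
df = pd.DataFrame({"clean": [1.0, 2.0], "mixed": [1.0, "two"]})

print(df.dtypes["clean"])  # float64 – fast, vectorized storage
print(df.dtypes["mixed"])  # object – stored element by element, much slower
```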

The data type of **data** is **DataFrame**.

```python
type(data)
```

The built-in function **type(…)** gives you the type. It is handy to use when exploring data.

```
pandas.core.frame.DataFrame
```

Notice that it is given by a long string, **pandas.core.frame.DataFrame**; this reflects the module structure of the **Pandas** library.

The data type of a column in a DataFrame can be found by.

```python
type(data['Close'])
```

Where **data['Close']** gives access to the column **Close** in the **DataFrame** **data**.

```
pandas.core.series.Series
```

Here we see that a column is represented as a **Series**. A **Series** is similar to a **DataFrame** in that it has an index. E.g. the **Series** **data['Close']** has the same index as the **DataFrame** **data**. This is handy when you need to work with the data, as you will see in a moment.

To keep it simple, we will start with the daily difference between the opening and closing price.

```python
daily_chg = data['Open'] - data['Close']
```

This calculates a **Series daily_chg** with the opening price minus the closing price.

Please compare the values in **daily_chg** with the data in **data**.

A more advanced calculation is this one.

```python
daily_pct_chg = (data['Close'] - data['Open'])/data['Open']*100
```

Here we calculate the daily percentage change. In the calculation above we have limited ourselves to data on the same rows (same dates). Later we will learn how to use data from the previous day (the row above).
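As a preview of that previous-day idea, here is a minimal sketch with made-up closing prices using **shift()**, which the course returns to in a later lesson:

```python
import pandas as pd

# Made-up closing prices, just to illustrate the idea
data = pd.DataFrame({"Close": [100.0, 102.0, 101.0]})

# shift() moves every value one row down, so each row "sees" the previous day's Close
prev_close = data["Close"].shift()
daily_pct = (data["Close"] - prev_close) / prev_close * 100

# The first row has no previous day, so its value is NaN
```

The second row gives (102 − 100) / 100 × 100 = 2.0 percent.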

Now we will normalize the data by using the **iloc** we learned about in lesson 1.

```python
norm = data['Close']/data['Close'].iloc[0]
```

The above statement calculates a **Series** **norm** where the **Close** price is normalized by dividing by the first available **Close** price, accessed with **iloc[0]**.

As a result, **norm.iloc[0]** will be **1.0000**, while **norm.iloc[-1]** shows the return of this particular stock if bought on day 1 (index 0) and sold on the day of the last index (index -1) – in the case of the video: **1.839521**.
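To turn a normalized value into a percentage return, subtract 1 and multiply by 100. A tiny sketch with made-up prices:

```python
import pandas as pd

close = pd.Series([50.0, 55.0, 60.0])  # made-up prices
norm = close / close.iloc[0]

# The last normalized value minus 1 is the total return over the period
total_return_pct = round((norm.iloc[-1] - 1) * 100, 2)
print(total_return_pct)  # 20.0 percent
```

A final normalized value of 1.839521, as in the video, would correspond to a return of roughly 84 percent.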

## Lesson 3: Add, Remove and Calculate with columns in DataFrames

In the last lesson we learned about Series and how to make simple calculations using them. Also, how to normalize data.

### In this lesson you will learn

In this lesson we will learn how to add new columns calculated from values in other columns in our **DataFrame**. This is similar to calculating with data from different columns in Excel.

Then we will demonstrate some useful functions when working with financial data.

Finally, we will show how to remove (or drop) columns from our **DataFrame**.

See the Notebook from the lesson here *(for an interactive version use the Download the Notebook link at the top of page)*.

### Lesson 3

As usual we need to load the data into our **DataFrame**.

```python
import pandas as pd

data = pd.read_csv("AAPL.csv", index_col=0, parse_dates=True)
```

It is always a good habit to inspect the data with **data.head()** (see the video lesson or the Notebook link below the video for the expected output).

To create a new column in our data set simply write as follows.

```python
data['Daily chg'] = data['Close'] - data['Open']
```

The above statement will create a new column named **Daily chg** with the difference between column **Close** and **Open**.

Similarly, you can create a column with the normalized data as follows.

```python
data['Normalized'] = data['Close'] / data['Close'].iloc[0]
```

This is how easy it is to work with.

To find the minimum of a column.

```python
data['Close'].min()
```

This will find the minimal value of the column **Close**.

To find the position of the minimum value, use **argmin()** (in recent versions of **Pandas** it returns the integer position; use **idxmin()** if you want the date label instead).

```python
data['Close'].argmin()
```
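The difference between the two lookups can be seen in a small sketch with made-up values and dates:

```python
import pandas as pd

s = pd.Series([3.0, 1.0, 2.0],
              index=pd.to_datetime(["2020-01-01", "2020-01-02", "2020-01-03"]))

print(s.argmin())  # 1 – the integer position of the minimum
print(s.idxmin())  # 2020-01-02 00:00:00 – the index label (the date)
```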

You can do similar things as the following shows.

```python
data['Normalized'].min()
data['Normalized'].argmin()
data['Close'].max()
data['Close'].argmax()
```

To get the mean value of a column, simply use **mean()**.

```python
data['Close'].mean()
```

It is always good practice to remove the columns of data we do not intend to use anymore. This can be done by using **drop**().

```python
data.drop(labels=['High', 'Low', 'Adj Close', 'Volume'], axis=1, inplace=True)
```

Where we use the following arguments.

- **labels=['High', 'Low', 'Adj Close', 'Volume']** sets the labels of the columns we want to remove.
- **axis=1** sets the axis of the labels. The default is 0, which looks for the labels on the index, while axis 1 looks at the column names.
- **inplace=True** says it should actually remove the columns from the DataFrame we work on. Otherwise it returns a new DataFrame without the columns.
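The **inplace** point is worth seeing in action. A minimal sketch with a one-row toy DataFrame (made-up numbers) shows that without **inplace=True**, **drop()** returns a new DataFrame and leaves the original untouched:

```python
import pandas as pd

data = pd.DataFrame({
    "Open": [1.0], "High": [2.0], "Low": [0.5],
    "Close": [1.5], "Adj Close": [1.5], "Volume": [100],
})

# Without inplace=True, drop() returns a NEW DataFrame
trimmed = data.drop(labels=["High", "Low", "Adj Close", "Volume"], axis=1)

print(list(trimmed.columns))  # ['Open', 'Close']
print(len(data.columns))      # 6 – the original still has all columns
```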

## Lesson 4: Use Matplotlib on historical stock prices to Visualize

In the last lesson we learned how to create new columns calculated by the values of the existing data. Also, how to remove columns we do not need.

### In this lesson you will learn

In this lesson we will learn how to use **Matplotlib** (visualization) with **DataFrames** on time series data (stock price data).

We will explore the object-oriented and the functional way of using **Matplotlib**, and how to create multiple charts in one figure. Also, we will see how to make bar charts, among other things.

*(for an interactive version use the Download the Notebook link at the top of page)*.

### Lesson 4

We start by reading the data into a DataFrame.

```python
import pandas as pd

data = pd.read_csv("AAPL.csv", index_col=0, parse_dates=True)
```

In the next lesson we will learn how to read data directly from an API.

You can visualize data directly from the DataFrame by using **data.plot()**, but as you can see in the video or the Notebook link below the video, the result is not satisfactory.

To get a better experience we need to import **Matplotlib**.

```python
import matplotlib.pyplot as plt
%matplotlib notebook
```

The **%matplotlib notebook** is important to tell Jupyter notebook how to visualize the results from our plots.

A simple plot can now be done as follows.

```python
data.plot()
```

Which will look like this.

Ignore the obvious challenge here: there is too much data, and the scales are not comparable (Volume is much larger than the other columns).
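One simple way to sidestep the scale problem is the **subplots=True** argument of **plot()**, which draws each column on its own axis. A minimal sketch with made-up numbers:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Made-up data: price-sized values next to volume-sized values
data = pd.DataFrame({"Close": [10.0, 11.0, 12.0],
                     "Volume": [1_000_000, 1_200_000, 900_000]})

# subplots=True gives each column its own axis, so the scales no longer clash
axes = data.plot(subplots=True)
```

Lesson 7 shows another option, **secondary_y**, which puts one series on a right-hand scale instead.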

The problem comes if we try to plot another thing in this way.

```python
data['Close'].plot()
```

What happened? At first, you might think nothing. But look again: the line for column **Close** was added to the original chart.

Well, this is the functional way of using **Matplotlib**, which can make it hard to predict when it creates a new figure and when it draws on an existing one (as it did here).

Another way to use **Matplotlib** is the object-oriented way.

```python
fig, ax = plt.subplots()
data['Close'].plot(ax=ax)
ax.set_ylabel("Price")
ax.set_title("AAPL")
```

This looks more involved at first. But let’s break it down.

- **fig, ax = plt.subplots()** creates a new figure and axis to draw on. By default there will be one figure and one axis. The figure is the “picture”, the axis is the chart area.
- **data['Close'].plot(ax=ax)** – the argument **ax=ax** tells **Pandas** to use the axis created by the statement above.
- **ax.set_ylabel("Price")** is not strictly needed, but creates a label for the y-axis.
- **ax.set_title("AAPL")** does the same for the title.

The result would be something similar to this.

You can have multiple axes in one figure as follows.

```python
fig, ax = plt.subplots(2, 2)
data['Open'].plot(ax=ax[0, 0], title="Open")
data['High'].plot(ax=ax[0, 1], title="High")
data['Low'].plot(ax=ax[1, 0], title="Low")
data['Close'].plot(ax=ax[1, 1], title="Close")
plt.tight_layout()
```

The **plt.subplots(2, 2)** call creates a 2-by-2 grid of axes. You access the axes in **ax** by **ax[0, 0]** and similarly for the others.

It will result in the following output (or something similar).

A bar plot can be created as follows, where we also make a time interval of the data we want by using **.loc[]**.

```python
fig, ax = plt.subplots()
data['Volume'].loc['2020-07-01':'2020-08-15'].plot.barh(ax=ax)
```

Which results in something similar to this.

## Lesson 5: Use Pandas Datareader to read data directly from API

In the last lesson we learned about how to visualize our financial data.

### In this lesson you will learn

In this lesson we will cover the following.

How to use Pandas Datareader to read historical stock prices from an API. Then how to list all Nasdaq ticker symbols, and how to explore World Bank data and more.

*(for an interactive version use the Download the Notebook link at the top of page)*.

### Lesson 5

We will start differently this time: we will read directly from an API. We will use **Pandas Datareader** to read data directly from the free open **Yahoo! Finance API**.

This can be done as follows.

```python
import pandas_datareader as pdr
import datetime as dt

ticker = "AAPL"
start = dt.datetime(2019, 1, 1)
end = dt.datetime(2020, 12, 31)
data = pdr.get_data_yahoo(ticker, start, end)
```

Where we first import two libraries.

- **pandas_datareader** is the Pandas Datareader. If you do not have it installed already in your Jupyter Notebook, you can install it by entering **!pip install pandas_datareader** in a cell and executing it.
- **datetime** is a standard library module representing dates and times. We only use the date aspects.

Then the following lines.

- **ticker = "AAPL"** is the ticker we want data for. You can use any ticker you want. In this course we have used the ticker for **Apple** (**AAPL**).
- **start = dt.datetime(2019, 1, 1)** is the first day we want historical stock price data for.
- **end = dt.datetime(2020, 12, 31)** is the last day.
- **data = pdr.get_data_yahoo(ticker, start, end)** is the magic that uses **Pandas Datareader** (**pdr**) to get data from the **Yahoo! Finance API**. It returns a DataFrame as we know it from previous lessons.

You can explore the advantage of using Pandas Datareader.

```python
data.index
```

Will show that the index is already a **DatetimeIndex**.

```python
data.dtypes
```

Shows that the data types of all columns are **float64**.

Another open source of historical stock prices is **Stooq**.

```python
data2 = pdr.get_data_stooq(ticker, start)
```

Most other sources require registration for their services.

A nice feature is to collect all the **Nasdaq** symbols.

```python
nasdaq_sym = pdr.get_nasdaq_symbols()
```

There are 9666 symbols at the time of writing (check with **len(nasdaq_sym)**).

It contains information like the following example shows.

```python
nasdaq_sym.loc['AAPL']
```

The output should be similar to this.

```
Nasdaq Traded                            True
Security Name       Apple Inc. - Common Stock
Listing Exchange                            Q
Market Category                             Q
ETF                                     False
Round Lot Size                            100
Test Issue                              False
Financial Status                            N
CQS Symbol                                NaN
NASDAQ Symbol                            AAPL
NextShares                              False
Name: AAPL, dtype: object
```

This is a great source to explore a lot of tickers in your research.

## Lesson 6: Calculate Volatility and Moving Average

In the last lesson we learned how to read directly from **Yahoo! Finance API** and other **API**s with **Pandas Datareader**.

### In this lesson you will learn

In this lesson we will look at how to calculate and visualize the volatility of a stock. Then we will calculate and visualize the Simple Moving Average and the Exponential Moving Average.

*(for an interactive version use the Download the Notebook link at the top of page)*.

### Lesson 6

In this lesson we will use the **CSV**-file approach to get historic stock prices. Feel free to use the **Pandas Datareader** we explored in the last lesson.

```python
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib notebook

data = pd.read_csv("AAPL.csv", index_col=0, parse_dates=True)
```

Notice that we also import Matplotlib as we will use it to visualize our findings.

In the video lesson we explore the difference between daily simple returns and daily log returns. Briefly explained, log returns have the advantage that you can add them together, while this is not the case for simple returns. Therefore log returns are used in most financial analysis.

To calculate the daily log returns we need the **NumPy** library. For the purpose here, we will not explore the depths of **NumPy**, all we need is to apply the **log**-function on a full column in our **DataFrame** (see my other FREE course for more details on **NumPy**).

```python
import numpy as np

data['Log returns'] = np.log(data['Close']/data['Close'].shift())
```

This creates a column called **Log returns** with the daily log return of the **Close** price.
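The additivity claim above is easy to verify with a tiny sketch on made-up prices: the daily log returns sum exactly to the log return over the whole period.

```python
import numpy as np
import pandas as pd

close = pd.Series([100.0, 110.0, 99.0, 120.0])  # made-up prices
log_ret = np.log(close / close.shift())

# The daily log returns sum to the log return over the whole period
total = np.log(close.iloc[-1] / close.iloc[0])
print(np.isclose(log_ret.sum(), total))  # True
```

Simple returns do not have this property, which is why log returns are preferred for aggregating over time.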

We need the standard deviation for the volatility of the stock.

This can be calculated from our **Log returns** as follows.

```python
data['Log returns'].std()
```

The above gives the daily standard deviation. Volatility is defined as the annualized standard deviation. With 252 trading days in a year, it can be calculated as follows.

```python
volatility = data['Log returns'].std()*252**.5
```

Notice that taking the square root is the same as raising to the power **.5** (the power of 1/2).
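The same annualization can be sketched end to end on made-up prices, confirming that multiplying by **252\*\*.5** matches multiplying by the square root of 252:

```python
import numpy as np
import pandas as pd

close = pd.Series([100.0, 101.0, 99.5, 102.0, 103.0])  # made-up prices
log_ret = np.log(close / close.shift())

# Annualize the daily standard deviation with the square root of 252 trading days
volatility = log_ret.std() * 252 ** 0.5
```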

To visualize it.

```python
str_vol = str(round(volatility, 4)*100)

fig, ax = plt.subplots()
data['Log returns'].hist(ax=ax, bins=50, alpha=0.6, color='b')
ax.set_xlabel("Log return")
ax.set_ylabel("Freq of log return")
ax.set_title("AAPL volatility: " + str_vol + "%")
```

Which results in the following output.

The moving average can be calculated by using a method **rolling(…)**.

```python
data['MA10'] = data['Close'].rolling(10).mean()
```

Here **rolling(10).mean()** takes the last 10 entries (rows) and applies **mean()** on them. This calculates the simple moving average with period 10.
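The window semantics are easiest to see on a tiny made-up series: until a full window is available, the result is **NaN**.

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
ma3 = s.rolling(3).mean()

# The first two entries are NaN: a full 3-row window is not available yet
print(ma3.tolist())  # [nan, nan, 2.0, 3.0, 4.0]
```

The same happens with **rolling(10)** on the stock data: the first 9 rows of **MA10** are **NaN**.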

An exponential moving average can be calculated as follows.

```python
data['EMA10'] = data['Close'].ewm(span=10, adjust=False).mean()
```

This can be visualized as follows.

```python
fig, ax = plt.subplots()
data[['MA10', 'EMA10']].loc['2020-12-01':].plot(ax=ax)
data['Close'].loc['2020-12-01':].plot(ax=ax, alpha=0.25)
```

Resulting in the following.

## Lesson 7: Calculate MACD and Stochastic Oscillator

In the last lesson we learned how to calculate the volatility of a stock. Also, the Simple Moving Average (MA) and Exponential Moving Average (EMA).

### In this lesson you will learn

In this lesson we will calculate and visualize the MACD and the Stochastic Oscillator.

*(for an interactive version use the Download the Notebook link at the top of page)*.

### Lesson 7

Let’s get started.

```python
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib notebook

data = pd.read_csv("AAPL.csv", index_col=0, parse_dates=True)
```

First we want to calculate the **MACD**.

The calculation (the default 12-26-9 MACD) is defined as follows.

- MACD = 12-period EMA − 26-period EMA
- Signal line = 9-period EMA of the MACD

Where EMA is the Exponential Moving Average we learned about in the last lesson.

```python
exp1 = data['Close'].ewm(span=12, adjust=False).mean()
exp2 = data['Close'].ewm(span=26, adjust=False).mean()
data['MACD'] = exp1 - exp2
data['Signal line'] = data['MACD'].ewm(span=9, adjust=False).mean()
```

Now that was simple, right?

We are basically applying what we already learned.

To visualize it you can use the following with **Matplotlib**.

```python
fig, ax = plt.subplots()
data[['MACD', 'Signal line']].plot(ax=ax)
data['Close'].plot(ax=ax, alpha=0.25, secondary_y=True)
```

Resulting in an output similar to this one.

The **Stochastic Oscillator** is defined as follows.

- **14-high**: maximum of the last 14 trading days
- **14-low**: minimum of the last 14 trading days
- **%K**: (Last Close − 14-low) × 100 / (14-high − 14-low)
- **%D**: Simple Moving Average of **%K**

That can be done as follows.

```python
high14 = data['High'].rolling(14).max()
low14 = data['Low'].rolling(14).min()
data['%K'] = (data['Close'] - low14)*100/(high14 - low14)
data['%D'] = data['%K'].rolling(3).mean()
```

Notice, we only keep the **%K** and **%D**. The **high14** and **low14** are temporary variables to make our calculations easier to read.

To visualize it.

```python
fig, ax = plt.subplots()
data[['%K', '%D']].loc['2020-11-01':].plot(ax=ax)
ax.axhline(80, c='r', alpha=0.3)
ax.axhline(20, c='r', alpha=0.3)
data['Close'].loc['2020-11-01':].plot(ax=ax, alpha=0.3, secondary_y=True)
```

Resulting in the following.

## Lesson 8: Export it all to an Excel sheet automatically with Python

In the last lesson we learned how to calculate and visualize the **MACD** and **Stochastic Oscillator**.

### In this lesson you will learn

In this lesson we will learn how to export all the data from Python into an Excel sheet. It will be in multiple sheets with colored rows and charts. And of course, all automated from Python.

*(for an interactive version use the Download the Notebook link at the top of page)*.

### Lesson 8

First we need some data. As usual we will use the data from our **CSV** file. Feel free to use any other data.

```python
import pandas as pd

data = pd.read_csv("AAPL.csv", index_col=0, parse_dates=True)
```

Then we need to make all the calculations.

Moving Average as done previously.

```python
data['MA10'] = data['Close'].rolling(10).mean()
```

The MACD as calculated in the last lesson.

```python
exp1 = data['Close'].ewm(span=12, adjust=False).mean()
exp2 = data['Close'].ewm(span=26, adjust=False).mean()
data['MACD'] = macd = exp1 - exp2
data['Signal line'] = exp3 = macd.ewm(span=9, adjust=False).mean()
```

The Stochastic Oscillator, also in the last lesson.

```python
high14 = data['High'].rolling(14).max()
low14 = data['Low'].rolling(14).min()
data['%K'] = pct_k = (data['Close'] - low14)*100/(high14 - low14)
data['%D'] = pct_d = data['%K'].rolling(3).mean()
```

Then we adjust the time period we need. This is necessary because the first rows of the calculations are either not available (**NaN**) or based on too little data.

```python
data = data.loc['2020-01-01':]
data = data.iloc[::-1]
```

Also notice that we reverse the data with **.iloc[::-1]**. This is just to have the most recent data at the top of our Excel sheet.
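The **[::-1]** step slice reverses row order, as a minimal sketch with made-up values shows:

```python
import pandas as pd

s = pd.Series([1, 2, 3], index=["a", "b", "c"])

# A step of -1 walks the rows from last to first
print(s.iloc[::-1].tolist())  # [3, 2, 1]
```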

Then, for generating our Excel sheet, we need **XlsxWriter**.

If you don’t have it installed already, you can install it by running this in a cell: **!pip install XlsxWriter**

The code that generates the Excel sheet.

```python
writer = pd.ExcelWriter("technical.xlsx",
                        engine='xlsxwriter',
                        date_format='yyyy-mm-dd',
                        datetime_format='yyyy-mm-dd')
workbook = writer.book

# Create a format for a green cell
green_cell = workbook.add_format({
    'bg_color': '#C6EFCE',
    'font_color': '#006100'
})

# Create a format for a red cell
red_cell = workbook.add_format({
    'bg_color': '#FFC7CE',
    'font_color': '#9C0006'
})

# **
# ** MA
# **
sheet_name = 'MA10'
data[['Close', 'MA10']].to_excel(writer, sheet_name=sheet_name)
worksheet = writer.sheets[sheet_name]

# Set column width of Date
worksheet.set_column(0, 0, 15)

for col in range(1, 3):
    # Create a conditional format of type formula
    worksheet.conditional_format(1, col, len(data), col, {
        'type': 'formula',
        'criteria': '=B2>=C2',
        'format': green_cell
    })

    # Create a conditional format of type formula
    worksheet.conditional_format(1, col, len(data), col, {
        'type': 'formula',
        'criteria': '=B2<C2',
        'format': red_cell
    })

# Create a new chart object.
chart1 = workbook.add_chart({'type': 'line'})

# Add a series to the chart.
chart1.add_series({
    'name': 'AAPL',
    'categories': [sheet_name, 1, 0, len(data), 0],
    'values': [sheet_name, 1, 1, len(data), 1],
})

# Create a new chart object.
chart2 = workbook.add_chart({'type': 'line'})

# Add a series to the chart.
chart2.add_series({
    'name': sheet_name,
    'categories': [sheet_name, 1, 0, len(data), 0],
    'values': [sheet_name, 1, 2, len(data), 2],
})

# Combine the charts and set title and axis names
chart1.combine(chart2)
chart1.set_title({'name': sheet_name + " AAPL"})
chart1.set_x_axis({'name': 'Date'})
chart1.set_y_axis({'name': 'Price'})

# Insert the chart into the worksheet.
worksheet.insert_chart('E2', chart1)

# **
# ** MACD
# **
sheet_name = 'MACD'
data[['Close', 'MACD', 'Signal line']].to_excel(writer, sheet_name=sheet_name)
worksheet = writer.sheets[sheet_name]

# Set column width of Date
worksheet.set_column(0, 0, 15)

for col in range(1, 4):
    # Create a conditional format of type formula
    worksheet.conditional_format(1, col, len(data), col, {
        'type': 'formula',
        'criteria': '=C2>=D2',
        'format': green_cell
    })

    # Create a conditional format of type formula
    worksheet.conditional_format(1, col, len(data), col, {
        'type': 'formula',
        'criteria': '=C2<D2',
        'format': red_cell
    })

# Create a new chart object.
chart1 = workbook.add_chart({'type': 'line'})

# Add a series to the chart.
chart1.add_series({
    'name': 'MACD',
    'categories': [sheet_name, 1, 0, len(data), 0],
    'values': [sheet_name, 1, 2, len(data), 2],
})

# Create a new chart object.
chart2 = workbook.add_chart({'type': 'line'})

# Add a series to the chart.
chart2.add_series({
    'name': 'Signal line',
    'categories': [sheet_name, 1, 0, len(data), 0],
    'values': [sheet_name, 1, 3, len(data), 3],
})

# Combine the charts and set title and axis names
chart1.combine(chart2)
chart1.set_title({'name': sheet_name + " AAPL"})
chart1.set_x_axis({'name': 'Date'})
chart1.set_y_axis({'name': 'Value'})

# Put the x-axis labels at the bottom, not at 0
chart1.set_x_axis({
    'label_position': 'low',
    'num_font': {'rotation': 45}
})

# Insert the chart into the worksheet.
worksheet.insert_chart('F2', chart1)

# **
# ** Stochastic
# **
sheet_name = 'Stochastic'
data[['Close', '%K', '%D']].to_excel(writer, sheet_name=sheet_name)
worksheet = writer.sheets[sheet_name]

# Set column width of Date
worksheet.set_column(0, 0, 15)

for col in range(1, 4):
    # Create a conditional format of type formula
    worksheet.conditional_format(1, col, len(data), col, {
        'type': 'formula',
        'criteria': '=C2>=D2',
        'format': green_cell
    })

    # Create a conditional format of type formula
    worksheet.conditional_format(1, col, len(data), col, {
        'type': 'formula',
        'criteria': '=C2<D2',
        'format': red_cell
    })

# Create a new chart object.
chart1 = workbook.add_chart({'type': 'line'})

# Add a series to the chart.
chart1.add_series({
    'name': '%K',
    'categories': [sheet_name, 1, 0, len(data), 0],
    'values': [sheet_name, 1, 2, len(data), 2],
})

# Create a new chart object.
chart2 = workbook.add_chart({'type': 'line'})

# Add a series to the chart.
chart2.add_series({
    'name': '%D',
    'categories': [sheet_name, 1, 0, len(data), 0],
    'values': [sheet_name, 1, 3, len(data), 3],
})

# Combine the charts and set title and axis names
chart1.combine(chart2)
chart1.set_title({'name': sheet_name + " AAPL"})
chart1.set_x_axis({'name': 'Date'})
chart1.set_y_axis({'name': 'Value'})

# Put the x-axis labels at the bottom, not at 0
chart1.set_x_axis({
    'label_position': 'low',
    'num_font': {'rotation': 45}
})

# Insert the chart into the worksheet.
worksheet.insert_chart('F2', chart1)

# End of sheets

# Close
writer.close()
```

For a walkthrough of the code, please see the video to this lesson.

This will generate an Excel workbook called **technical.xlsx**. It will contain 3 sheets (MA10, MACD, Stochastic).

A sheet will look similar to this.

## What’s next?

Congratulations and thank you for taking the time to take this course.

How can you help me?

This is free content and has taken me time to create. A great way to support me for free is to share this with friends and colleagues that could benefit from it. Also, share it on social media with your recommendation.

If you would like to learn more about investing and financial analysis, check out my course on Udemy.