## What will we cover?

- Understand Deep Neural Networks (DNN)
- How algorithms calculate weights in a DNN
- Explore tools to visually understand what a DNN can solve

## Step 1: What is a Deep Neural Network?

Be sure to read the Artificial Neural Network Guide.

The adjective “deep” in deep learning refers to the use of multiple layers in the network (Wiki).

Usually, having two or more hidden layers counts as deep.

**Deep learning** (also known as **deep structured learning**) is part of a broader family of machine learning methods based on artificial neural networks with representation learning.

## Step 2: How to train a DNN and the difficulties in training it

Training an Artificial Neural Network amounts to finding the weights that map input to output nodes. In a Deep Neural Network (DNN) this becomes a bit more complex and requires additional techniques.

To do that we need backpropagation, the standard algorithm for training Neural Networks with hidden layers (DNNs).

**Algorithm**

- Start with a random choice of weights
- Repeat:
    - Calculate the error for the output layer
    - For each layer, starting with the output layer:
        - Propagate the error back one layer
        - Update the weights
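The steps above can be sketched in a few lines of NumPy on the XOR problem. This is only an illustrative sketch: the 2-3-1 layer sizes, sigmoid activations, learning rate and epoch count are arbitrary choices, not values from the course.

```
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Start with a random choice of weights
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)  # hidden layer
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)  # output layer

lr = 0.5
losses = []
for epoch in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(((out - y) ** 2).mean())

    # Calculate the error for the output layer (MSE through the sigmoid)
    delta_out = (out - y) * out * (1 - out)
    # Propagate the error back one layer
    delta_h = (delta_out @ W2.T) * h * (1 - h)

    # Update the weights with gradient descent
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_h
    b1 -= lr * delta_h.sum(axis=0)

print(losses[0], losses[-1])  # the loss should drop substantially
```

In practice you never write this by hand, since Keras (used in Step 4) does the backpropagation for you, but the loop mirrors the algorithm steps one to one.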

A problem you will encounter is **overfitting**, which means fitting too closely to the training data and not generalizing well.

That is, you fit the model to the training data, but the model will not predict well on data that does not come from your training data.

To deal with that, dropout is a common technique.

- Temporarily remove units, selected at random, from the neural network to prevent over-reliance on certain units
- A dropout rate of 20%-50% is a common starting point
- Dropout tends to give better performance on larger networks
- Applying dropout at each layer of the network has shown good results
- Original Paper
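To see what dropout actually does to a layer's activations, here is a minimal NumPy sketch of "inverted dropout" as commonly applied at training time: randomly zero a fraction of the units and rescale the survivors so the expected activation is unchanged. The function name and values are illustrative, not Keras internals.

```
import numpy as np

rng = np.random.default_rng(42)

def dropout(activations, rate=0.2):
    """Zero out `rate` of the units at random and rescale the rest."""
    keep_prob = 1.0 - rate
    mask = rng.random(activations.shape) < keep_prob  # True for kept units
    return activations * mask / keep_prob             # rescale survivors

h = np.ones((4, 10))           # fake layer activations, all 1.0
dropped = dropout(h, rate=0.5)
print((dropped == 0).mean())   # roughly half the units are zeroed
```

In Keras you get this behavior simply by adding a `Dropout(0.5)` layer, which is active during training and disabled at prediction time.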

## Step 3: Play around with it

To learn more about fitting, check out the TensorFlow Playground.

Ideas to try:

- If you have no hidden layers, you can only fit with straight lines.
- If you add hidden layers, you can model the XOR function.
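Both points above can be checked in plain NumPy. The brute-force grid of weights below is only a sketch to make the idea concrete: no straight line `w·x + b = 0` classifies all four XOR points, while two hand-picked hidden units (an OR and an AND detector) solve it exactly.

```
import numpy as np
from itertools import product

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # XOR

# No hidden layers: a single unit computes step(w.x + b), a straight line.
# Brute-force a grid of weights; the best any line manages is 3 of 4 points.
grid = np.arange(-2, 2.1, 0.5)
best = 0.0
for w1, w2, b in product(grid, grid, grid):
    pred = (X @ np.array([w1, w2]) + b > 0).astype(int)
    best = max(best, (pred == y).mean())
print(best)  # 0.75

# One hidden layer with hand-picked weights models XOR exactly.
def step(z):
    return (z > 0).astype(int)

h1 = step(X.sum(axis=1) - 0.5)  # fires when at least one input is 1 (OR)
h2 = step(X.sum(axis=1) - 1.5)  # fires only when both inputs are 1 (AND)
out = step(h1 - h2 - 0.5)       # OR and not AND = XOR
print(out)  # [0 1 1 0]
```

In Step 4 we will let a DNN learn such hidden units from data instead of picking them by hand.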

## Step 4: A DNN model of XOR

Let’s go crazy and fit an XOR dataset with a DNN model.

```
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.models import Sequential
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
data = pd.read_csv('https://raw.githubusercontent.com/LearnPythonWithRune/MachineLearningWithPython/main/files/xor.csv')
fig, ax = plt.subplots()
ax.scatter(x=data['x'], y=data['y'], c=data['class id'])
plt.show()
```

This is the data we want to fit.

Then let’s create and train the model.

Remember to insert the dropout and play around with it.

```
X_train, X_test, y_train, y_test = train_test_split(data[['x', 'y']], data['class id'], random_state=42)

accuracies = []
for i in range(5):
    tf.random.set_seed(i)
    model = Sequential()
    model.add(Dense(6, input_dim=2, activation='relu'))
    # model.add(Dropout(.2))
    model.add(Dense(4, activation='relu'))
    # model.add(Dropout(.2))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.fit(X_train, y_train, epochs=100, batch_size=32, verbose=0)
    _, accuracy = model.evaluate(X_test, y_test)
    accuracies.append(accuracy*100)

sum(accuracies)/len(accuracies)
```

This results in an average accuracy of around 98%.

## Want to learn more?

**This is part of a FREE 10h Machine Learning course with Python.**

- **15 video lessons** – which explain Machine Learning concepts, demonstrate models on real data, introduce projects and show a solution (YouTube playlist).
- **30 Jupyter Notebooks** – with the full code and explanation from the lectures and projects (GitHub).
- **15 projects** – with step guides to help you structure your solutions, and the solution explained at the end of the video lessons (GitHub).