## What will we cover?

- Understand Neural Networks
- How Neural Networks can model other machine learning techniques
- Activation functions
- How to model the simple OR function
- Different ways to calculate weights
- What batch sizes and epochs are

## Step 1: What is an Artificial Neural Network?

An **Artificial Neural Network** is a computing system inspired by the biological neural networks that constitute animal brains.

Often it is just called a **Neural Network**.

The first Neural Network is the following simple network.

Here **w1** and **w2** are weights, the nodes on the left represent the **input nodes**, and the node on the right is the **output node**.

It can also be represented as a function: **h(x1, x2) = w0 + w1·x1 + w2·x2**

This is a simple calculation, and the goal of the network is to find optimal weights. But we are still missing something: an **activation function**, which tells us how to interpret the output.

Here are some possible activation functions.

- **Step function**: g(x) = 1 if x ≥ 0, else 0
- **Rectified linear unit** (ReLU): g(x) = max(0, x)
- **Sigmoid**: sigmoid(x) = 1/(1 + exp(−x))
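These three activation functions can be written directly in NumPy:

```
import numpy as np

def step(x):
    # Step function: 1 if x >= 0, else 0
    return np.where(x >= 0, 1, 0)

def relu(x):
    # Rectified linear unit: max(0, x)
    return np.maximum(0, x)

def sigmoid(x):
    # Sigmoid: squashes any input into (0, 1)
    return 1 / (1 + np.exp(-x))

print(step(np.array([-1, 0, 2])))        # [0 1 1]
print(relu(np.array([-1.0, 0.0, 2.0])))  # [0. 0. 2.]
print(sigmoid(0))                        # 0.5
```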

## Step 2: How to model the OR function

With the weights set to **w1 = w2 = 1** and the bias to **w0 = -1**, let's analyse the network with the step function as activation function g.

- x1 = 0 and x2 = 0: g(-1 + x1 + x2) = g(-1 + 0 + 0) = g(-1) = 0
- x1 = 1 and x2 = 0: g(-1 + x1 + x2) = g(-1 + 1 + 0) = g(0) = 1
- x1 = 0 and x2 = 1: g(-1 + x1 + x2) = g(-1 + 0 + 1) = g(0) = 1
- x1 = 1 and x2 = 1: g(-1 + x1 + x2) = g(-1 + 1 + 1) = g(1) = 1

Exactly like the OR function.
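The table above can be verified with a few lines of Python, using the step function as g and the weights from above:

```
def g(x):
    # Step activation: 1 if x >= 0, else 0
    return 1 if x >= 0 else 0

def or_network(x1, x2, w0=-1, w1=1, w2=1):
    # h(x1, x2) = w0 + w1*x1 + w2*x2, passed through the step function
    return g(w0 + w1 * x1 + w2 * x2)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, '->', or_network(x1, x2))
```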

## Step 3: Neural Network in the General Case and how to Calculate Weights

In general, a Neural Network can have any number of input and output nodes, where each input node is connected to each output node.

We will later learn about Deep Neural Networks – where we can have any number of layers – but for now, let's focus only on Neural Networks with an input and an output layer.

To calculate weights there are several options.

### Gradient Descent

- Iteratively calculates the weights
- An algorithm for minimizing the loss when training neural networks

**Pseudo algorithm**

- Start with a random choice of weights
- Repeat:
    - Calculate the gradient based on all data points in the direction that will lead to decreasing loss
    - Update weights according to the gradient

**Tradeoff**

- Expensive to calculate the gradient for all data points
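The steps above can be sketched in plain NumPy for a simple linear model. The data, learning rate, and iteration count here are made-up values for illustration:

```
import numpy as np

# Hypothetical toy data: y = 2*x1 + 3*x2 plus a little noise
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, 3.0]) + rng.normal(scale=0.1, size=100)

w = rng.normal(size=2)  # start with a random choice of weights
lr = 0.1                # learning rate (step size)

for _ in range(200):
    # Gradient of the mean squared error over ALL data points
    grad = 2 / len(X) * X.T @ (X @ w - y)
    w -= lr * grad      # step in the direction that decreases loss

print(w)  # close to [2, 3]
```

Note that every update touches the whole dataset, which is the expensive part the tradeoff above refers to.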

### Stochastic Gradient Descent

**Pseudo algorithm**

- Start with a random choice of weights
- Repeat:
    - Calculate the gradient based on one data point in the direction that will lead to decreasing loss
    - Update weights according to the gradient

### Mini-Batch Gradient Descent

**Pseudo algorithm**

- Start with a random choice of weights
- Repeat:
    - Calculate the gradient based on one small batch of data points in the direction that will lead to decreasing loss
    - Update weights according to the gradient
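A sketch of the mini-batch variant on the same kind of toy linear-model data (batch size, learning rate, and epoch count are illustrative; a batch size of 1 recovers Stochastic Gradient Descent):

```
import numpy as np

# Hypothetical toy data: y = 2*x1 + 3*x2 plus a little noise
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, 3.0]) + rng.normal(scale=0.1, size=100)

w = rng.normal(size=2)       # random starting weights
lr, batch_size = 0.1, 16     # batch_size=1 would give Stochastic Gradient Descent

for epoch in range(50):
    idx = rng.permutation(len(X))  # shuffle the data each epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        Xb, yb = X[batch], y[batch]
        # Gradient of the mean squared error over one small batch only
        grad = 2 / len(Xb) * Xb.T @ (Xb @ w - yb)
        w -= lr * grad

print(w)  # close to [2, 3]
```

Each update is cheap because it only touches one batch, at the cost of noisier gradient estimates.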

## Step 4: Perceptron

The **perceptron** is an algorithm for supervised learning of binary classifiers. A binary classifier is a function which can decide whether or not an input, represented by a vector of numbers, belongs to some specific class.

- It is only capable of learning a linearly separable decision boundary.
- It cannot model the XOR function (for that we need multi-layer perceptrons, i.e. multi-layer neural networks).
- It can take multiple inputs and map them linearly to one output with an activation function.
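As a minimal sketch of the classic perceptron learning rule (written from scratch here, not the Keras model we build below), we can train it on the OR function from Step 2, which is linearly separable:

```
import numpy as np

def perceptron_train(X, y, epochs=20, lr=1.0):
    # Classic perceptron rule: w += lr * (target - prediction) * x
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b >= 0 else 0  # step activation
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

# The OR function is linearly separable, so the perceptron can learn it
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w, b = perceptron_train(X, y)
print([1 if xi @ w + b >= 0 else 0 for xi in X])  # [0, 1, 1, 1]
```

Swapping `y` for the XOR labels `[0, 1, 1, 0]` makes this loop fail to converge, since no single line separates those classes.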

Let’s try an example to show it.

```
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
import matplotlib.pyplot as plt

# 200 random points in 2D plus a label column
data = np.random.randn(200, 3)
data[:100, :2] += (10, 10)   # shift the first 100 points away from the rest
data[:100, 2] = 0            # class 0
data[100:, 2] = 1            # class 1

fig, ax = plt.subplots()
ax.scatter(x=data[:, 0], y=data[:, 1], c=data[:, 2])
plt.show()
```

This dataset should make it simple to validate whether we can create a Neural Network model that separates the two classes.

## Step 5: Creating a Neural Network

First let’s create a train and test set.

```
X = data[:,:2]
y = data[:,2]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
```

Then we need to create the model and set batch size and epochs.

- **Batch size**: a set of N samples.
- **Epoch**: an arbitrary cutoff, generally defined as “one pass over the entire dataset”.
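A quick way to see how the two relate: with the 150 training samples left after the 25% test split above, a batch size of 32 gives this many weight updates per epoch:

```
import math

n_samples = 150   # training set size: 75% of the 200 points
batch_size = 32

# One epoch = one pass over the data; each batch gives one weight update
steps_per_epoch = math.ceil(n_samples / batch_size)
print(steps_per_epoch)  # 5
```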

```
model = Sequential()
model.add(Dense(1, input_dim=2, activation='sigmoid'))  # one output node: a perceptron
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=1000, batch_size=32, verbose=0)
model.evaluate(X_test, y_test)
```

This should give 1.000 (100%) accuracy, since the two classes are clearly separated.

This can be visualized as follows.

```
y_pred = model.predict(X)
y_pred = np.where(y_pred < .5, 0, 1)  # threshold the sigmoid output at 0.5
fig, ax = plt.subplots()
ax.scatter(x=X[:,0], y=X[:,1], c=y_pred)
plt.show()
```

In the video we also show how to visualize the prediction in a different way.

## Want to learn more?

**This is part of a FREE 10h Machine Learning course with Python.**

- **15 video lessons** – which explain Machine Learning concepts, demonstrate models on real data, introduce projects and show a solution (YouTube playlist).
- **30 Jupyter Notebooks** – with the full code and explanation from the lectures and projects (GitHub).
- **15 projects** – with step guides to help you structure your solutions and solution explained in the end of video lessons (GitHub).