# Artificial Neural Network: The Ultimate Machine Learning Technique

- Understand Neural Networks
- How you can model other machine learning techniques
- Activation functions
- How to make a simple OR function
- Different ways to calculate weights
- What Batch sizes and Epochs are

**Artificial Neural Networks** are computing systems inspired by the biological neural networks that constitute animal brains.

Often just called **Neural Networks**.

The simplest Neural Network consists of two input nodes, each connected to a single output node.

Where **w1** and **w2** are weights, the nodes on the left represent the **input nodes**, and the node on the right is the **output node**.

It can also be represented with a function: **h(x1, x2) = w0 + w1\*x1 + w2\*x2**, where **w0** is a bias weight.

This is a simple calculation, and the goal of the network is to find optimal weights. But we are still missing something. We need an activation function. That is, how to interpret the output.

Here are some possible activation functions.

- **Step function**: g(x) = 1 if x ≥ 0, else 0
- **Rectified linear unit** (ReLU): g(x) = max(0, x)
- **Sigmoid** activation function: sigmoid(x) = 1/(1 + exp(−x))
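As a quick sketch, these three activation functions can be written in NumPy (the function names here are our own):

```python
import numpy as np

def step(x):
    # Step function: 1 if x >= 0, else 0
    return np.where(x >= 0, 1, 0)

def relu(x):
    # Rectified linear unit: max(0, x)
    return np.maximum(0, x)

def sigmoid(x):
    # Sigmoid: squashes any input into (0, 1)
    return 1 / (1 + np.exp(-x))

print(step(np.array([-2.0, 0.0, 2.0])))  # [0 1 1]
print(relu(np.array([-2.0, 0.0, 2.0])))  # [0. 0. 2.]
print(sigmoid(0.0))                      # 0.5
```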

Assume the weights are **w0 = -1** and **w1 = w2 = 1**. Then let’s analyse it with the activation function g given by the Step function.

- x1 = 0 and x2=0 then we have g(-1 + x1 + x2) = g(-1 + 0 + 0) = g(-1) = 0
- x1 = 1 and x2=0 then we have g(-1 + x1 + x2) = g(-1 + 1 + 0) = g(0) = 1
- x1 = 0 and x2=1 then we have g(-1 + x1 + x2) = g(-1 + 0 + 1) = g(0) = 1
- x1 = 1 and x2=1 then we have g(-1 + x1 + x2) = g(-1 + 1 + 1) = g(1) = 1

Exactly like the OR function.
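The truth table above can be verified in a few lines of Python, a minimal sketch of the network with the weights w0 = -1 and w1 = w2 = 1:

```python
def step(x):
    # Step activation: 1 if x >= 0, else 0
    return 1 if x >= 0 else 0

def h(x1, x2, w0=-1, w1=1, w2=1):
    # h(x1, x2) = w0 + w1*x1 + w2*x2, passed through the step function
    return step(w0 + w1 * x1 + w2 * x2)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, h(x1, x2))  # matches the OR truth table
```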

In general, a Neural Network can have any number of input and output nodes, where each input node is connected to each output node.

We will later learn about Deep Neural Networks – where we can have any number of layers – but for now, let’s only focus on Neural Networks with an input and an output layer.

To calculate the weights there are several options, all variations of **Gradient Descent** (wiki): an algorithm for minimizing the loss when training neural networks.

**Pseudo algorithm (Gradient Descent)**

- Start with a random choice of weights
- Repeat:
    - Calculate the gradient based on all data points: the direction that will lead to decreasing loss
    - Update the weights according to the gradient

Trade-off

- Expensive to calculate for all data points

**Pseudo algorithm (Stochastic Gradient Descent)**

- Start with a random choice of weights
- Repeat:
    - Calculate the gradient based on one data point: the direction that will lead to decreasing loss
    - Update the weights according to the gradient

**Pseudo algorithm (Mini-Batch Gradient Descent)**

- Start with a random choice of weights
- Repeat:
    - Calculate the gradient based on one small batch of data points: the direction that will lead to decreasing loss
    - Update the weights according to the gradient
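The mini-batch version can be sketched in NumPy on a toy problem. Note that the linear model, the squared loss, the learning rate and the generated data are all assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: y = 3*x + 2 plus a little noise (chosen for illustration)
X = rng.standard_normal((200, 1))
y = 3 * X[:, 0] + 2 + 0.1 * rng.standard_normal(200)

# Start with a random choice of weights
w, b = rng.standard_normal(), rng.standard_normal()
lr, batch_size = 0.1, 32

for epoch in range(100):
    idx = rng.permutation(len(X))          # shuffle once per epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        xb, yb = X[batch, 0], y[batch]
        # Gradient of the mean squared error on one small batch
        err = (w * xb + b) - yb
        grad_w = 2 * np.mean(err * xb)
        grad_b = 2 * np.mean(err)
        # Update the weights in the direction that decreases loss
        w -= lr * grad_w
        b -= lr * grad_b

print(w, b)  # close to the true values 3 and 2
```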

The **perceptron** is an algorithm for supervised learning of binary classifiers. A binary classifier is a function which can decide whether or not an input, represented by a vector of numbers, belongs to some specific class.

- It is only capable of learning a linearly separable decision boundary.
- It cannot model the XOR function (for that we need a multi-layer perceptron, i.e. a multi-layer neural network).
- It can take multiple inputs and map them linearly to one output with an activation function.
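Instead of fixing the weights by hand as in the OR example above, a perceptron can learn them with the classic perceptron learning rule. A minimal sketch (the learning rate and number of passes are arbitrary choices):

```python
import numpy as np

# OR truth table: inputs and targets
X = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
y = np.array([0, 1, 1, 1])

w = np.zeros(2)
b = 0.0
lr = 0.1

# Perceptron learning rule: nudge the weights whenever a point is misclassified
for _ in range(10):
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b >= 0 else 0
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

preds = [1 if xi @ w + b >= 0 else 0 for xi in X]
print(preds)  # [0, 1, 1, 1] -- the OR function
```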

Let’s try an example to show it.

```
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Sequential
import matplotlib.pyplot as plt

# 200 random points with 3 columns: x, y, and class label
data = np.random.randn(200, 3)
# Shift the first 100 points away from the rest and label the two classes
data[:100, :2] += (10, 10)
data[:100, 2] = 0
data[100:, 2] = 1

fig, ax = plt.subplots()
ax.scatter(x=data[:, 0], y=data[:, 1], c=data[:, 2])
plt.show()
```

The two classes are clearly separated, so this should make it simple to validate whether a Neural Network model can separate them.

First let’s create a train and test set.

```
# Split features and labels, then hold out 25% of the points for testing
X = data[:, :2]
y = data[:, 2]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
```

Then we need to create the model and set batch size and epochs.

**Batch size**: a set of N samples.

**Epoch**: an arbitrary cutoff, generally defined as “one pass over the entire dataset”.
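To make the two terms concrete: with the 150 training samples from the 25% split above and the batch size and epoch count used below, one epoch is only a handful of weight updates (a small arithmetic check; the variable names are our own):

```python
import math

n_samples = 150   # 75% of the 200 generated points
batch_size = 32
epochs = 1000

# Each epoch walks over all samples once; the last batch is smaller
updates_per_epoch = math.ceil(n_samples / batch_size)
total_updates = updates_per_epoch * epochs
print(updates_per_epoch, total_updates)  # 5 5000
```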

```
# A single output neuron with sigmoid activation: a perceptron-style model
model = Sequential()
model.add(Dense(1, input_dim=2, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=1000, batch_size=32, verbose=0)
model.evaluate(X_test, y_test)
```

This should give 1.000 (100%) accuracy.

This can be visualized as follows.

```
# Predict on all points and threshold the sigmoid output at 0.5
y_pred = model.predict(X)
y_pred = np.where(y_pred < .5, 0, 1)
fig, ax = plt.subplots()
ax.scatter(x=X[:, 0], y=X[:, 1], c=y_pred)
plt.show()
```

In the video we also show how to visualize the prediction in a different way.

**This is part of a FREE 10h Machine Learning course with Python.**

**15 video lessons** – which explain Machine Learning concepts, demonstrate models on real data, introduce projects and show a solution (YouTube playlist).

**30 Jupyter Notebooks** – with the full code and explanation from the lectures and projects (GitHub).

**15 projects** – with step guides to help you structure your solutions and solution explained in the end of video lessons (GitHub).
