Perceptron: The building block of deep learning

In our previous post we introduced deep learning and discussed why it matters more now than ever before. We also defined Artificial Intelligence, Machine Learning, and Deep Learning.

In this post we will delve deeper into the building blocks of deep learning, which are:

  • Perceptron
  • Activation functions
  • Fully connected (dense) layer
  • Deep neural network

The Perceptron

Imagine you have a bunch of apples and oranges, and you want a computer program to decide whether a fruit is an apple or an orange based on its weight and size.

The program takes in information about the fruit’s weight and size, and uses some math to combine that information into a single number. If that number is positive, the program says the fruit is an apple. If the number is negative, it says the fruit is an orange.

The program makes mistakes at first, but it learns from those mistakes and gets better over time. For example, if it says an apple is an orange, it adjusts the math so that it’s less likely to make that mistake again in the future.

This program is the perceptron in its simplest form. As you can see, it learns to make better decisions based on its previous mistakes. While the perceptron is a simple program, it laid the foundation for more complex programs that can recognize patterns in data and make far more sophisticated decisions.

The perceptron is the building block of more complex neural network architectures. It is a simple computational unit that takes in one or more inputs, multiplies each by a weight, and adds them together with a bias term. This weighted sum is then passed through an activation function, which produces an output. The output is often used as an input to the next layer of neurons in the network.
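To make this concrete, here is a minimal sketch of a single perceptron in Python with NumPy. The weights, bias, and the weight/size features of the fruit are made-up values for illustration, not a trained model.

```python
import numpy as np

def perceptron(x, w, b):
    """A single perceptron: weighted sum of the inputs plus a bias,
    passed through a step activation that outputs +1 or -1."""
    z = np.dot(w, x) + b          # weighted sum + bias
    return 1 if z > 0 else -1     # step activation

# Illustrative values: weights, bias, and a fruit described by [weight, size]
w = np.array([0.4, -0.7])
b = 0.1
fruit = np.array([150.0, 7.5])    # hypothetical weight (g) and size (cm)

label = perceptron(fruit, w, b)   # +1 -> "apple", -1 -> "orange" in this toy setup
print("apple" if label == 1 else "orange")
```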

While a single perceptron is limited in its ability to solve complex problems, it can be combined with many other perceptrons to create deep neural networks. Deep neural networks consist of many layers of interconnected neurons, each layer learning increasingly complex representations of the input data.

The training of deep neural networks involves adjusting the weights and biases of the perceptrons through a process called backpropagation. During backpropagation, the error in the network’s output is propagated backwards through the network, and the weights and biases are adjusted to minimize the error.
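As a rough illustration of that idea, the sketch below trains a single sigmoid neuron by gradient descent: the error at the output is used to compute gradients, and the weights and bias are nudged in the direction that reduces the error. Full backpropagation in a multi-layer network applies the same pattern layer by layer; the toy data and learning rate here are arbitrary.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 4 examples with 2 features each, and binary targets (made up for illustration)
X = np.array([[0.5, 1.2], [1.0, 0.3], [0.2, 0.8], [1.5, 1.1]])
y = np.array([1.0, 0.0, 1.0, 0.0])

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # weights, initialized randomly
b = 0.0                  # bias
lr = 0.5                 # learning rate (arbitrary)

for epoch in range(200):
    p = sigmoid(X @ w + b)          # forward pass: predictions in (0, 1)
    error = p - y                   # output error, propagated back to the parameters
    grad_w = X.T @ error / len(y)   # gradient of the cross-entropy loss w.r.t. the weights
    grad_b = error.mean()           # gradient w.r.t. the bias
    w -= lr * grad_w                # adjust the weights to reduce the error
    b -= lr * grad_b                # adjust the bias to reduce the error

print(np.round(sigmoid(X @ w + b), 2), y)  # predictions should move toward the targets
```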

Overall, the perceptron is an important building block in deep learning, and it has helped make it possible to solve complex tasks such as image recognition, natural language processing, and autonomous driving.

The Activation Function

The activation function is a mathematical function applied to the output of a neuron or a layer of neurons in a neural network. The activation function introduces non-linearity into the network, allowing it to model complex relationships between the input and output data.

The activation function takes the weighted sum of the inputs and the bias, and applies a non-linear transformation to produce the output of the neuron or layer. There are several commonly used activation functions in deep learning, including the following (each is sketched in code after this list):

  • Sigmoid function: The sigmoid function maps any input to a value between 0 and 1. It is often used as an activation function in binary classification problems.

  • ReLU (Rectified Linear Unit) function: The ReLU function returns 0 if the input is negative, and the input itself if it is positive. It is a popular activation function in deep learning due to its simplicity and computational efficiency.

  • Tanh (hyperbolic tangent) function: The Tanh function maps any input to a value between -1 and 1. It is similar to the sigmoid function, but its range is symmetric around zero.

  • Softmax function: The softmax function is often used in the output layer of a neural network that is used for multiclass classification. It transforms the outputs of the previous layer into a probability distribution over the possible classes.
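Here is a minimal sketch of these four functions in Python with NumPy; deep learning frameworks ship their own, typically more numerically robust, implementations.

```python
import numpy as np

def sigmoid(z):
    """Maps any input to a value between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    """Returns 0 for negative inputs, and the input itself otherwise."""
    return np.maximum(0.0, z)

def tanh(z):
    """Maps any input to a value between -1 and 1, symmetric around zero."""
    return np.tanh(z)

def softmax(z):
    """Turns a vector of scores into a probability distribution."""
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([-1.0, 0.5, 2.0])
print(sigmoid(scores), relu(scores), tanh(scores), softmax(scores))
```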

The choice of activation function depends on the task at hand and the structure of the neural network. The activation function can greatly affect the performance of the network, and choosing an appropriate function is an important part of the model design process.

Fully connected layer

A fully connected layer (also known as a dense layer) is a layer of neurons in a neural network where each neuron is connected to every neuron in the previous layer. In other words, the input to each neuron in a fully connected layer is a vector of the outputs of all the neurons in the previous layer.

In a fully connected layer, the output of each neuron is calculated by taking a weighted sum of the inputs, adding a bias term, and then passing the result through an activation function. The weights and biases in the layer are learned during the training process using backpropagation.
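A minimal sketch of that computation for a whole layer at once, using a weight matrix and the ReLU activation from the previous section (the shapes and random values are purely illustrative, and ReLU is repeated here so the snippet runs on its own):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def dense_layer(x, W, b, activation):
    """Fully connected layer: every output neuron sees every input value."""
    return activation(x @ W + b)   # weighted sums + biases, then the activation

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))        # a batch of 4 examples with 3 features each
W = rng.normal(size=(3, 5))        # 3 inputs fully connected to 5 neurons
b = np.zeros(5)                    # one bias per neuron

out = dense_layer(x, W, b, relu)
print(out.shape)                   # (4, 5): 5 outputs per example
```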

Fully connected layers are used in many types of neural networks, including feedforward neural networks, convolutional neural networks (CNNs), and recurrent neural networks (RNNs). In feedforward neural networks, fully connected layers are often used for the final classification or regression task, where the output of the network is generated. In CNNs, fully connected layers are typically used at the end of the network to convert the features learned by the convolutional layers into a final output. In RNNs, fully connected layers are used to generate the final output based on the previous hidden states.

Fully connected layers are powerful and flexible, but they also require a large number of parameters, which can make them computationally expensive and prone to overfitting. As a result, more complex neural network architectures have been developed that use different types of layers, such as pooling layers and dropout layers, in combination with fully connected layers to improve performance and reduce overfitting.

Deep Neural Network

A deep neural network (DNN) is a type of artificial neural network that contains multiple layers of interconnected neurons, allowing it to learn increasingly complex representations of the input data. A DNN typically consists of an input layer, one or more hidden layers, and an output layer.

The input layer of a DNN receives the raw input data, such as images, audio, or text, and passes it on to the first hidden layer. Each hidden layer in the network consists of many neurons that are fully connected to the neurons in the previous layer. The output of each neuron in a hidden layer is calculated by taking a weighted sum of the inputs, adding a bias term, and passing the result through an activation function. The output of the last hidden layer is then passed to the output layer, where the final output of the network is produced.
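Putting the pieces together, the sketch below chains a few fully connected layers into a small DNN and runs a forward pass. The layer sizes and random weights are made up for illustration; in practice the weights would be learned with backpropagation rather than left random.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def init_layer(n_in, n_out, rng):
    """Random weights and zero biases for one fully connected layer."""
    return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

rng = np.random.default_rng(0)
layers = [init_layer(8, 16, rng),   # input features -> first hidden layer
          init_layer(16, 16, rng),  # second hidden layer
          init_layer(16, 3, rng)]   # output layer with 3 classes

x = rng.normal(size=(2, 8))         # a batch of 2 examples with 8 raw input features
for i, (W, b) in enumerate(layers):
    z = x @ W + b
    # hidden layers use ReLU; the output layer uses softmax to produce class probabilities
    x = softmax(z) if i == len(layers) - 1 else relu(z)

print(x)               # shape (2, 3): a probability distribution per example
print(x.sum(axis=1))   # each row sums to 1
```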

The key advantage of DNNs is their ability to automatically learn hierarchical representations of the input data. Each hidden layer in the network learns to represent increasingly abstract features of the input data, with later layers learning to represent more complex concepts that are built on the simpler features learned by earlier layers.

DNNs are used in a wide range of applications, including computer vision, speech recognition, natural language processing, and recommendation systems. However, training DNNs can be challenging due to the large number of parameters in the network and the risk of overfitting. Various techniques have been developed to address these challenges, such as regularization, dropout, and batch normalization.
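As one example of these techniques, here is a small sketch of "inverted" dropout, which randomly zeroes a fraction of a layer's activations during training to reduce overfitting; the keep probability used here is arbitrary.

```python
import numpy as np

def dropout(activations, keep_prob, rng):
    """Inverted dropout: zero out activations at random during training,
    and scale the survivors so their expected value stays the same."""
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

rng = np.random.default_rng(0)
h = rng.normal(size=(2, 6))                    # activations of some hidden layer
h_train = dropout(h, keep_prob=0.8, rng=rng)   # applied during training only
# At test time the layer is used as-is, with no dropout applied.
print(h_train)
```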