Like the human brain, a basic deep-learning neural network consists of the following:
- An input layer
- Multiple hidden layers
- An output layer
The input layer receives the data, which is then transformed as it passes through the hidden layers; the result is produced at the output layer.
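The flow described above can be sketched in plain Python. This is a minimal illustration, not a production implementation: the weights below are made-up values chosen only to show data moving from the input layer, through one hidden layer with a non-linear activation, to a single output.

```python
import math

def sigmoid(x):
    # Standard logistic activation: squashes any real value into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    # Hidden layer: each unit takes a weighted sum of the inputs,
    # then applies the non-linear activation.
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in hidden_weights]
    # Output layer: weighted sum of the hidden activations.
    return sum(w * h for w, h in zip(output_weights, hidden))

# Illustrative weights only, not trained values.
hidden_weights = [[0.5, -0.2], [0.3, 0.8]]
output_weights = [1.0, -1.0]
result = forward([1.0, 2.0], hidden_weights, output_weights)
```

In a real network the weights would be learned during training rather than set by hand.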
Networks are either feedforward, where data passes through in one direction only, or recurrent, where feedback connections give the network a short-term memory of previous inputs.
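The recurrent case can be sketched as a single update step that mixes the current input with the previous hidden state. The weights here are arbitrary illustrative values; the point is only that the hidden state carries information forward in time.

```python
import math

def rnn_step(x, h_prev, w_x=0.5, w_h=0.9):
    # The new hidden state combines the current input with the
    # previous state, giving the network a short-term memory.
    return math.tanh(w_x * x + w_h * h_prev)

h = 0.0
for x in [1.0, 0.0, 0.0]:
    h = rnn_step(x, h)
# Even after two zero inputs, h remains non-zero:
# the first input still echoes through the recurrent state.
```

A feedforward network, by contrast, has no such state: each input is processed independently of the ones before it.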
All neural networks are trained before they are deployed in real-world applications.
Deep neural networks contain many layers of non-linear hidden units, often with a very large output layer. This depth has proved instrumental in recent natural-language and speech-processing systems, which must cope with complex data such as text and audio.
The training of deep-learning neural networks is commonly classified into three groups:
- Supervised learning: the machine is given labeled example inputs and outputs, from which it learns the rule that maps inputs to outputs.
- Unsupervised learning: the learning algorithm receives no data labels and must find the structure of its input on its own.
- Reinforcement learning: a computer program interacts with a dynamic environment in which it must achieve a goal. The environment provides feedback in the form of rewards and punishments, which the program uses to navigate the problem space.
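The supervised case above can be shown with a toy sketch. Here the labeled examples follow the hypothetical rule y = 2x, and gradient descent on a single weight recovers that rule from the data; the learning rate is an illustrative choice, not a recommended value.

```python
# Labeled (input, output) pairs generated by the rule y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0    # the parameter the machine must learn
lr = 0.05  # learning rate (illustrative value)

for _ in range(200):
    for x, y in data:
        pred = w * x
        # Move w against the gradient of the squared error (pred - y)^2.
        w -= lr * 2 * (pred - y) * x

# After training, w has converged close to the true rule's slope of 2.
```

Unsupervised and reinforcement learning differ in what drives the update: structure in unlabeled data in the first case, and rewards from the environment in the second.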