An Introduction to Deep Learning and Neural Networks

It seems as if not a week goes by without the artificial intelligence concepts of deep learning and neural networks making media headlines, whether for an exciting new use case or in an opinion piece speculating whether such rapid advances in AI will eventually replace the majority of human labor. Deep learning has improved speech recognition, genomic sequencing, and visual object recognition, among many other areas.

The availability of exceptionally powerful computer systems at a reasonable cost, the influx of the large swaths of data that define the so-called Age of Big Data, and the talents of data scientists have together provided the foundation for the accelerated growth and use of deep learning and neural networks.

Companies are now beginning to adopt AI frameworks and libraries, such as MXNet, a deep learning framework that lets users train deep learning models in a variety of languages. There are also dedicated AI platforms aimed at supporting data scientists in building and training deep learning models, which professionals can integrate into their workflows.

It’s important, though, to note that deep learning, neural networks, and machine learning are not interchangeable terms. This article clarifies the definitions with an introduction to deep learning and neural networks.

Deep Learning and Neural Networks Defined

Neural Network

An artificial neural network, shortened to neural network for simplicity, is a computer system that has the ability to learn how to perform tasks without any task-specific programming. For example, a simple neural network might learn how to recognize images that contain elephants using data alone.

The term neural network comes from the inspiration behind the architectural design of these systems, which was to mimic the basic structure of a biological brain’s own neural network so that computers could perform specific tasks.

The neural network has a layered design, with an input layer, an output layer, and one or more hidden layers between them. Mathematical functions, termed neurons, operate at every layer. Each neuron receives inputs and produces an output. Initially, random weights are associated with the inputs, making the output of each neuron random as well. However, by using an algorithm that feeds errors back through the network, known as backpropagation, the system adjusts the weights at each neuron and gradually becomes better at producing accurate output.
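To make this concrete, here is a minimal sketch in pure Python (not from the article) of a network with two inputs, one hidden layer of two neurons, and one output neuron, trained on the classic XOR task. The architecture, dataset, and learning rate here are illustrative choices: the point is only that the weights start random and that repeatedly feeding the error back through the network drives the loss down.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy dataset: XOR, a task that needs at least one hidden layer.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# One hidden layer of two neurons; each weight row is [w1, w2, bias].
# Weights start random, so the network's initial outputs are random too.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    """Pass an input through the hidden layer, then the output neuron."""
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    y = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    return h, y

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

loss_before = mse()

lr = 0.5
for _ in range(5000):
    for x, target in data:
        h, y = forward(x)
        # Feed the error back through the network: the output error
        # adjusts the output weights, and a share of it, scaled by each
        # connection's weight, propagates back to the hidden neurons.
        err_out = (y - target) * y * (1 - y)
        err_hid = [err_out * w_out[i] * h[i] * (1 - h[i]) for i in range(2)]
        for i in range(2):
            w_out[i] -= lr * err_out * h[i]
        w_out[2] -= lr * err_out
        for i in range(2):
            for j in range(2):
                w_hidden[i][j] -= lr * err_hid[i] * x[j]
            w_hidden[i][2] -= lr * err_hid[i]

loss_after = mse()
print(f"mean squared error before: {loss_before:.3f}, after: {loss_after:.3f}")
```

Frameworks such as MXNet automate exactly this weight-adjustment loop, at the scale of millions of neurons and with far more efficient numerical code.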

