Deep Learning — An Introduction

FACE_ASEB
4 min read · Jan 1, 2021

In recent years of technological advances, deep learning has become quite a buzzword. Some of you probably know it well, some of you have heard about it and never bothered, while some are completely clueless about it. This article aims to shine a light on the what, where, why and how of Deep Learning.

The strength of any deep learning algorithm lies in its inherent ability to chew through almost any kind of data — images, voice or even text. Yes, you see the word algorithm there. It isn’t a simple 5–10 lines of code that one can write and expect an output from. What runs behind a few lines of Python/R/Julia code is a complex overlay of mathematical functions. Linear algebra and optimization set the foundation for the algorithm to process data of a specific kind. In layman’s terms, we can call this whole algorithm a Neural Network, which does all the grunt work while you sit and sip that coffee ( well, duh, you just coded like 10 lines xD ).
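To make that concrete, here is a rough sketch of those “few lines of Python”, using the Keras API. The layer sizes and the training arrays X and y are assumptions made purely for illustration:

```python
# A sketch of the "few lines of Python" idea, using the Keras API.
# Assumes TensorFlow is installed; X and y are hypothetical training arrays.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(100,)),  # hidden layer
    keras.layers.Dense(64, activation="relu"),                      # hidden layer
    keras.layers.Dense(1),                                          # output layer
])
model.compile(optimizer="adam", loss="mse")
# model.fit(X, y, epochs=10)  # the linear algebra and optimization run under the hood
```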

So, a neural network is a network of nodes ( loosely modelled on the neurons in our brain ). These little guys are programmed with a certain mathematical approach to deal with certain forms of data ( which we shall cover in later articles ). A network of these tiny guys is responsible for the intuition behind self-driving cars and most image-processing tools. Even the camera in most flagship phones is tuned with some kind of neural network that harnesses the power of the processor to enhance the quality of images.

Let’s now understand the how and what of all that was mentioned before, and try to get a gist of how these technologies lead the way forward.

To start anything, be it a project, an idea or even the will to do an assignment, there’s always an intuition. Similarly, every deep learning framework has a foundation of neural networks, which comes from some such intuition. Let’s assume a simple neural network — it takes an input X, processes it, learns and gives an output Y. Contrary to a simple I/O program, a neural network keeps track of how much the predicted and actual values of some parameter differ, and adjusts itself so that they differ as little as possible. This functionality helps the network pick up associations/patterns in the data.

For instance, in the above example, if the output comes out as a cat (predicted value) as opposed to a dog (actual value), the algorithm will then focus on the defining features of a dog, such as the facial features and the shape of the teeth.
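Stripped of all detail, that cycle of predicting and comparing against the actual value can be sketched as follows. The single-node “network”, its weights and the numbers are all made up purely for illustration:

```python
import numpy as np

# A toy "network": one node that maps an input x to a prediction,
# plus a measure of how far the prediction is from the actual label.
def predict(x, w, b):
    return x @ w + b  # weighted sum of the inputs, plus a bias

x = np.array([0.5, 0.2])   # input features
w = np.array([0.1, -0.3])  # weights (what the network learns)
b = 0.0                    # bias
actual = 1.0               # the true label, e.g. "dog" encoded as 1.0

predicted = predict(x, w, b)
error = actual - predicted  # the gap the network tries to shrink
print(predicted, error)
```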

How does an NN learn though?

We’ll come back to the simple structure of the NN we used before. The initial layer on the left is termed the input layer. As the name suggests, it deals with the task of taking in inputs from the user. The output layer gives the final result of the algorithm. The layers in between are called hidden layers, and they perform the bulk of the computation on the data.
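A bare-bones forward pass through such layers might look like the sketch below. The layer sizes and the random weights are assumptions for illustration:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)  # a common activation function

x = np.random.rand(3)                        # input layer: 3 features
W1, b1 = np.random.rand(4, 3), np.zeros(4)   # hidden layer: 4 nodes
W2, b2 = np.random.rand(1, 4), np.zeros(1)   # output layer: 1 node

h = relu(W1 @ x + b1)  # hidden layer does the heavy computation
y = W2 @ h + b2        # output layer gives the final result
print(y)
```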

Given our current level of understanding, I definitely won’t cover the complexity behind it. But one needs to keep in mind that each node is programmed with a mathematical function coupled with a weight and a bias. After every pass, the mean squared error between the actual and predicted values is calculated, and the weights and biases are updated accordingly. Why so?
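For reference, the mean squared error is just the average of the squared gaps between the actual and predicted values, as in this small sketch (the numbers are made up):

```python
import numpy as np

# Mean squared error: the quantity the network tries to drive down.
actual = np.array([1.0, 0.0, 1.0])
predicted = np.array([0.8, 0.3, 0.6])

mse = np.mean((actual - predicted) ** 2)
print(mse)  # (0.04 + 0.09 + 0.16) / 3 ≈ 0.0967
```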

As I mentioned before, the main purpose is to reduce that error. Hence the adjustments to the weights and biases are made in whichever direction reduces it. The computation is traced backwards through the network to make these changes and bring the predicted values as close to the actual values as one can get. This process is called Back-Propagation, which we will cover later on. This continuous updating makes the model/network a good approximator, capable of modelling real-life scenarios.
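As a small taste of what that backward pass does, here is a gradient-descent update loop for a single linear node trained with mean squared error. The learning rate, data and one-node setup are illustrative assumptions; real back-propagation applies the same idea layer by layer:

```python
import numpy as np

x, actual = np.array([0.5, 0.2]), 1.0  # one training example
w, b = np.array([0.1, -0.3]), 0.0      # weights and bias to be learned
lr = 0.1                               # learning rate

for step in range(100):
    predicted = x @ w + b
    error = predicted - actual
    # Gradients of the squared error with respect to w and b:
    grad_w = 2 * error * x
    grad_b = 2 * error
    # Update weights and bias in the direction that reduces the error.
    w -= lr * grad_w
    b -= lr * grad_b

print(x @ w + b)  # prediction is now very close to the actual value 1.0
```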

What are the takeaways?

  • Deep Learning is, at its core, learning with Neural Networks.
  • Video synthesis, self-driving cars, human-level game AI, and more — all of these are brain-children of Deep Learning.
  • Back-Propagation is one of the ways of updating weights and biases so that predicted values come as close as possible to actual values.

The applications are very large-scale, and what we’ve seen is just the tip of the iceberg. There’s a lot more to explore and tinker around with. Stay tuned for more on FACE_ASEB’s Medium page.

Cheers and Peace,
Team FACE.


FACE_ASEB

We are the departmental forum for Computer Science & Engineering at Amrita School of Engineering, Bengaluru.