1.1 - Intro to Deep Learning

Intro to Deep Learning: Course 1 - Week 1

This is very introductory material.

A Neural Network (NN) is, at its core, just taking a dataset and fitting it with an equation: given input features X1, X2, ..., Xn and an output Y, we try to fit a complex equation Y = F(X1, X2, ..., Xn). Once we find this best-fit equation F, we use it to predict Y given X1, X2, ..., Xn.
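To make this concrete, below is a minimal sketch (assuming NumPy; the synthetic data, network size, and learning rate are made up for illustration) of fitting Y = F(X1, X2) with a tiny one-hidden-layer network trained by gradient descent:

```python
import numpy as np

# Synthetic data: two input features, one output (made-up target function).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))            # rows = examples, cols = X1, X2
Y = np.sin(3 * X[:, :1]) + 0.5 * X[:, 1:] ** 2   # the "true" F we try to recover

# Tiny network: 2 inputs -> 8 hidden units (tanh) -> 1 output.
W1 = rng.normal(0, 0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.1

for step in range(2000):
    # Forward pass: compute the network's prediction Y_hat = F(X1, X2).
    H = np.tanh(X @ W1 + b1)
    Y_hat = H @ W2 + b2
    err = Y_hat - Y                              # error used by the squared loss

    # Backward pass: gradients of the loss with respect to each parameter.
    dW2 = H.T @ err / len(X); db2 = err.mean(axis=0)
    dH = err @ W2.T * (1 - H ** 2)
    dW1 = X.T @ dH / len(X); db1 = dH.mean(axis=0)

    # Gradient-descent update: "training" = nudging the fit toward the data.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final mean squared error:", float((err ** 2).mean()))
```

The learned W1, b1, W2, b2 together make up the fitted F; predicting Y for new inputs is just one more forward pass.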

The process of finding this equation is called training the network. The term "neural network" came about because the resulting equation resembles a chain of neurons passing information from one to the next until we reach the output stage. From statistics, we already know how to find a best fit, but those equations have a fixed, hand-chosen form (e.g. Y = A*X + B*X^2 + C*X^3 + ...). Such fits work well on structured data (e.g. predicting the price of a house from its age, size, location, etc.), but they were never good with unstructured data, whereas neural-network-based fits handle unstructured data well too (e.g. identifying a cat in a picture).
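For contrast, here is a minimal sketch (again assuming NumPy, with made-up one-dimensional data) of the classical statistics approach: we choose the functional form ourselves, a cubic polynomial here, and the fit only finds its coefficients, whereas the neural network above learns the shape of F itself.

```python
import numpy as np

# One input feature and a noisy target (made-up data for illustration).
rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 50)
y = np.sin(3 * x) + 0.1 * rng.normal(size=x.shape)

# Classical best fit: the form Y = A*x^3 + B*x^2 + C*x + D is fixed up front,
# and the fit only determines the coefficients A, B, C, D.
coeffs = np.polyfit(x, y, deg=3)
y_fit = np.polyval(coeffs, x)

print("cubic fit coefficients:", coeffs)
print("mean squared error:", float(((y_fit - y) ** 2).mean()))
```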

Different kinds of NN:

1. Standard NN

2. Convolutional NN

3. Recurrent NN

Deep Learning (DL): NNs are called deep when they have a lot of layers. The reason DL is getting so popular is that it works amazingly well. The reason it works so well is that deep neural networks keep improving their prediction accuracy with more and more data, while earlier methodologies saturated: their prediction accuracy stopped improving even when they were given more data.

DL is very compute intensive, since training has to push lots of data through a large number of layers.