What we get down is a number which is a categorical variable. So from a simple neuron we have parameters connected to one parameter, and it can even go down by successive stages of parametric connections, until finally it enters into what we call a neural network. What this does is that we define something as a neural network and then say that this neural network is able to classify images. Now, one important fact which you will have to take care of over here is that we are learning: going by that definition, it means that somehow, with the experience we are gaining, we will be able to do that task much better. So that necessarily means that as we show it more samples of features and their associated categorical variables, this whole network, which I call a neural network, would be able to associate any unknown sample to its category. For this learning you will have to go down the way of gradient computations and optimization.
So we have the mathematical equation, but let's get down into what it looks like. Say I have three different variables x1, x2 and x3, and these can be three different features of the image; say one of them over here is the average entropy of the image. So we can have three different features for one single image. Now, once we have these three, each is a scalar: x1 is a scalar feature of the image, and x3 is also a scalar. Together, what we can do is associate them with a certain vector of weights and then sum them up. So x1 will be multiplied by a weight w1, x2 will be multiplied by a weight w2, and x3 will be multiplied by a weight w3; then my output y takes the form y = w0 + w1 x1 + w2 x2 + w3 x3. In general, for n features this will be an (n + 1)-term summation. From there, a sort of matrix representation is given: W is basically a vector of all the weights, whose first element is w0 itself, and X is another vector of all the scalars, with a leading 1 against w0, arranged so that the whole sum is y = W^T X. This is the matrix form of representation.
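As a minimal sketch of that sum in NumPy (all the feature and weight values here are invented for illustration):

```python
import numpy as np

# three scalar features of one image (values invented for illustration)
x = np.array([1.0, 0.5, 2.3])        # x1, x2, x3
w = np.array([0.2, -0.4, 0.7])       # w1, w2, w3
w0 = 0.1                             # bias weight

# y = w0 + w1*x1 + w2*x2 + w3*x3
y = w0 + np.dot(w, x)

# the same thing in matrix form: y = W^T X, with a leading 1 in X for w0
W = np.concatenate(([w0], w))
X = np.concatenate(([1.0], x))
assert np.isclose(y, W @ X)
print(y)
```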
Now, once I have that, what I can do is relate this y to my predicted class through a non-linear function f_NL. This f_NL can have two different forms; some common forms are like this. First, note that over my x's I don't have any sort of control, so these can be anything from minus infinity to plus infinity; for the purpose of simplicity we keep to the fact that these are real-valued numbers and not complex-valued, and each still ranges from minus infinity to plus infinity. Now that these are open-ranged, what I get down as y can also be anything from minus infinity to plus infinity, which would typically make it hard to read off a class. I could just put a threshold over there and say that if the value is greater than zero, make it one, and otherwise make it zero. Or I can use one of a family of squashing functions, say a sigmoid non-linearity: what it would do is that as my y tends to minus infinity the output tends to zero, and as y tends to plus infinity the output tends to one. Now, in the same way, I go to my second non-linearity, which is called the tan hyperbolic: you can put down your values of y varying from minus infinity to plus infinity, and you can very intuitively see that as the value of y goes down to minus infinity this value tends to minus one, and as the value goes to plus infinity this value tends to plus one. Then you can again give the argument that if it is greater than zero make it one, and if less than zero make it minus one.
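A short sketch of these squashing functions and the hard threshold, with the helper names being my own:

```python
import numpy as np

def sigmoid(y):
    # squashes (-inf, +inf) into (0, 1); tends to 0 / 1 at the extremes
    return 1.0 / (1.0 + np.exp(-y))

def threshold(y):
    # hard rule: greater than zero becomes 1, otherwise 0
    return (y > 0).astype(float)

ys = np.array([-10.0, 0.0, 10.0])
print(sigmoid(ys))     # ~[0.0, 0.5, 1.0]
print(np.tanh(ys))     # ~[-1.0, 0.0, 1.0], the tan hyperbolic form
print(threshold(ys))   # [0.0, 0.0, 1.0]
```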
I have three scalar values over here, so what I can do is associate them to some different number of patterns, a different number of predictors, which I want. So with, say, x2 being the contrast and x3 being the average entropy of the image, I want to classify whether there is a ball in the image, yes or no, and whether the image is a photograph or a sketched image. The weights now get subscripted dually, and with this dual weight subscription what happens technically is that one subscript tells you which target I want to classify, and the other connects down which feature is being connected to which target neuron over here.

This is done for y1 and p1 in terms of an equation. Now, if I get down another parameter p2 (and that's what I was saying, that you have another different thing to predict; that may be whether it is a natural image or a sketched image), then for that you will have a similar set of equations. You can clearly see how multiple outputs can be designed over here.
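Written out, with the convention (my choice, since the lecture's ordering is ambiguous) that in w_kj the subscript k picks the target neuron and j picks the feature, the two predictors look like:

```latex
\begin{aligned}
y_1 &= w_{10} + w_{11}x_1 + w_{12}x_2 + w_{13}x_3, & \hat{p}_1 &= f_{NL}(y_1)\\
y_2 &= w_{20} + w_{21}x_1 + w_{22}x_2 + w_{23}x_3, & \hat{p}_2 &= f_{NL}(y_2)
\end{aligned}
```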
Now, similarly, I can extend it to some K number of neurons over here. Once you stack all the row vectors, you get down a rectangular weight matrix in the equation, and accordingly your predicted neurons will also be stacked into one single vector. The non-linearity so far was applied on a scalar, and that is why it can be extended onto a matrix-valued form of the function, which will give you the same sort of vector output. Essentially, what this helps you with is that you can relate some J number of input neurons to some K number of output neurons, in straight simple terms.
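As a small NumPy sketch of that stacking, with shapes and values chosen only for illustration; the one matrix product replaces the K separate scalar equations:

```python
import numpy as np

rng = np.random.default_rng(0)

J, K = 3, 2                      # J input features, K output neurons
W = rng.normal(size=(K, J + 1))  # row k holds [w_k0, w_k1, ..., w_kJ]

x = np.array([1.0, 0.5, 2.3])    # one sample's features (invented)
X = np.concatenate(([1.0], x))   # leading 1 multiplies the bias column

Y = W @ X                        # all K linear outputs at once
P_hat = 1.0 / (1.0 + np.exp(-Y)) # non-linearity applied elementwise
print(P_hat)                     # K predictions in one vector
```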
From there, once you are able to relate it down, what comes next is that I will have a certain sort of error. When I am able to relate it, it means that using these three features and some combination of the weights which are present over here, I will be able to predict, and there is a value which comes down from this neuron itself. Now take the difference between the true value and this predicted value: say in the first case there was a ball, but it predicted there wasn't a ball; that is a clear case of an error, and what I would get down is that the error value is one. The other way round, where say there wasn't a ball predicted and the ground truth also says that there isn't a ball, it means that it is a correct case. Similarly, we will have it for the second predictor as well. Now, if you see, all of these predictors are independent of each other, so it means that the errors are also independent, and we can take the Euclidean distance between the predicted vector p hat (over the whole data set this becomes a matrix) and the actual ground truth p; so the error will be measured between your p hat and your p.
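A tiny sketch of that per-sample distance, with invented prediction and ground-truth vectors:

```python
import numpy as np

p_hat = np.array([0.1, 0.9])   # predicted: "no ball", "is a photograph"
p     = np.array([1.0, 1.0])   # ground truth: ball present, photograph

# Euclidean distance between prediction and ground truth for one sample
print(np.linalg.norm(p_hat - p))
```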
Learning over here is in the sense that I should somehow be able to get down a network such that this error comes down. What essentially happens in that case is that we use a method called error back propagation. So, say I have a set of observations x1; this x1 over here is no more related to one particular feature, but what we are putting down in this relationship is actually one whole sample, for which I have my ground truth and I have my predicted value. Similarly, I keep on going such that I have N number of images in something which is called a training set. What happens in a training set, in this kind of a problem, which is a supervised learning problem, is that you have a set of images, some N number of images, and for each image you also know what class label is given to it. So here we were asking two questions: whether there is a ball in the image, and whether it is a natural or a sketched image, that kind of thing.
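A minimal sketch of what such a supervised training set could look like in code; every number here is made up:

```python
import numpy as np

# N = 3 images, each with 3 features and 2 known yes/no labels
X_train = np.array([[1.0, 0.5, 2.3],
                    [0.2, 1.1, 0.9],
                    [0.7, 0.3, 1.5]])   # shape (N, 3): the features
P_train = np.array([[1.0, 0.0],        # ball?  natural-vs-sketch?
                    [0.0, 1.0],
                    [1.0, 1.0]])        # shape (N, 2): the ground truth
```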
So there are two parameters over here, a two-dimensional vector, which I want to predict down to class labels, so that should also be known to me; it is there in my training set. Now, on the other side, I am going to predict all of this with a certain given form of my weights. Initially, what I would do is start with a neural network whose weights are some guess, and then see what difference comes down for each sample: I get the Euclidean distance for each sample, so for x1, x2, x3 and similarly up to xN, I get down this difference. Now look carefully into this one.
My p hats are what depend on all my x's over here, but these x values don't change over the whole data set, right? The only thing which changes within the network is the weights. So the whole objective is that I want to get down a particular value of W, which is the only variable and adjustable thing within my neural network, such that my cost function J(W) over here is minimum. And this is minimum when you have the minimum error over there; as you get down your minimum error, in the ideal case this has to be zero, and so does your J(W).
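Put as one equation, the way I read the objective from the description, the cost sums the squared distances over all N training samples, and only W is free to vary:

```latex
J(W) = \sum_{i=1}^{N} \left\lVert \hat{p}_i(W) - p_i \right\rVert^2,
\qquad W^{*} = \arg\min_{W} J(W)
```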
I start with a random guess of W within a k-th iteration scheme; my k at the start of it will be k = 0. So at k = 0 I start with some W over here. Now, with that W, I will be able to get down my predictions p1 hat to pN hat; from there I would update the weight by what is the partial derivative of the cost function with respect to my weight at that particular iteration. With the updated weight I again get all of these predictions, from there I get down the J(W) for W(k+1), and then I would iterate it over such that at some point of time I would reach down the value of W with this minimum cost, and then just stop over here.
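Finally, a compact sketch of that whole loop; the single sigmoid layer, the analytic gradient and the learning rate eta are my assumptions for illustration, not something the lecture fixes:

```python
import numpy as np

def predict(W, X):
    # single layer: sigmoid of the linear outputs, for all samples at once
    return 1.0 / (1.0 + np.exp(-(X @ W.T)))

def cost(W, X, P):
    # J(W): summed squared Euclidean distance between p hat and p
    return np.sum((predict(W, X) - P) ** 2)

rng = np.random.default_rng(0)
X = np.hstack([np.ones((3, 1)), rng.normal(size=(3, 3))])  # N=3, bias column
P = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])         # ground truth

W = rng.normal(size=(2, 4))          # k = 0: random guess of W
eta = 0.1                            # learning rate (my choice)
for k in range(1000):
    P_hat = predict(W, X)
    # dJ/dW for this sigmoid + squared-error model
    grad = 2.0 * ((P_hat - P) * P_hat * (1.0 - P_hat)).T @ X
    W = W - eta * grad               # W(k+1) = W(k) - eta * dJ/dW
print(cost(W, X, P))                 # should have come down near its minimum
```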