Statistics, Statistical Analysis, Random Variable, Discrete Random Variable, Mathematics.

The expected value can be thought of as the "average" value attained by a random variable. Assume X is a discrete random variable with range Tₓ and PMF fₓ. The expected value of X, denoted by E[X], is

E[X] = ∑ t·fₓ(t), where the sum is over all t in Tₓ.

E[X] may or may not belong to the range of X, and E[X] has the same units as X.
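For illustration, here is a quick numpy sketch of this definition, using a fair six-sided die as the example (the die is my own choice, not from the article):

import numpy as np

values = np.array([1, 2, 3, 4, 5, 6])   # range of X: the faces of a fair die
pmf = np.full(6, 1/6)                    # fₓ(t) = 1/6 for each face
print(np.sum(values * pmf))              # E[X] = ∑ t·fₓ(t) = 3.5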
I will explain the expected values of various distributions in the next article.

Properties of Expected Values

1. Constant and non-negative random variables: If X is a constant random variable, i.e. P(X = c) = 1, then E[X] = c. If X takes only non-negative values, i.e. P(X ≥ 0) = 1, then E[X] ≥ 0.

2. Expected value of a function of random variables: Suppose X₁, …, Xₙ have joint PMF fₓ₁…ₓₙ and let Y = g(X₁, …, Xₙ). Then

E[Y] = ∑ t·fᵧ(t) = ∑ g(t₁, …, tₙ)·fₓ₁…ₓₙ(t₁, …, tₙ).

From this we observe that to find E[Y] we do not need fᵧ; the joint PMF of X₁, …, Xₙ can be used directly.
3. Linearity of expected value: E[cX] = c·E[X], where X is a random variable and c is a constant.

Proof: E[cX] = ∑ c·t·fₓ(t) = c·∑ t·fₓ(t) = c·E[X].

Next, E[X + Y] = E[X] + E[Y] for any two random variables X and Y.

Proof: E[X + Y] = ∑ (t₁ + t₂)·fₓᵧ(t₁, t₂) = ∑ t₁·fₓᵧ(t₁, t₂) + ∑ t₂·fₓᵧ(t₁, t₂) = E[X] + E[Y].

Combining these results gives E[aX + bY] = a·E[X] + b·E[Y].

Zero-Mean Random Variable

A random variable X with E[X] = 0 is said to be a zero-mean random variable. For example, score a fair coin toss as Heads = 1 and Tails = −1. Then E[X] = 1·(1/2) + (−1)·(1/2) = 0, so X is zero-mean.
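A quick simulation check of linearity (the dice and the constants a, b are my own example values):

import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(1, 7, size=100_000)   # rolls of one die
y = rng.integers(1, 7, size=100_000)   # rolls of another die
a, b = 2, 3
print(np.mean(a * x + b * y))          # ≈ a·E[X] + b·E[Y] = 2·3.5 + 3·3.5 = 17.5
print(a * np.mean(x) + b * np.mean(y))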
Variance and Standard Deviation

The variance of a random variable X, denoted by Var(X), is defined as Var(X) = E[(X − E[X])²]. The standard deviation, denoted by SD(X), is the square root of the variance. The variance is always non-negative, so SD(X) is a real number, and SD(X) has the same units as X. Also note that the more spread out the range of X is, the larger Var(X) will be.

Properties of Var and SD

Var(aX) = a²·Var(X) and SD(aX) = |a|·SD(X)
Var(X + a) = Var(X) and SD(X + a) = SD(X)

An alternative way of writing the variance is Var(X) = E[X²] − E[X]².
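A quick numpy check of the alternative form (the die values are again my own example):

import numpy as np

x = np.array([1, 2, 3, 4, 5, 6])        # equally likely outcomes of a fair die
print(np.var(x))                         # population variance (ddof=0) ≈ 2.9167
print(np.mean(x**2) - np.mean(x)**2)     # E[X²] − E[X]², gives the same value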
Standardised Random Variable

A random variable X is said to be standardised if E[X] = 0 and Var(X) = 1. For any random variable X, Y = (X − E[X])/SD(X) is a standardised random variable.

Covariance

Suppose X and Y are random variables on the same probability space. The covariance of X and Y, denoted Cov(X, Y), is defined as Cov(X, Y) = E[(X − E[X])·(Y − E[Y])].

Properties of Covariance

1. Cov(X, X) = Var(X)
2. Cov(X, Y) = E[XY] − E[X]·E[Y]
3. Covariance is symmetric: Cov(X, Y) = Cov(Y, X)
4. Covariance is a "linear" quantity: Cov(X, aY + bZ) = a·Cov(X, Y) + b·Cov(X, Z), and Cov(aX + bY, Z) = a·Cov(X, Z) + b·Cov(Y, Z)
5. Independence: if X and Y are independent, then X and Y are uncorrelated, i.e. Cov(X, Y) = 0

Correlation Coefficient

The correlation coefficient (or correlation) of two random variables X and Y, denoted ρ(X, Y), is defined as ρ(X, Y) = Cov(X, Y) / (SD(X)·SD(Y)). The correlation coefficient summarizes the trend between the two random variables.

Properties of the Correlation Coefficient

−1 ≤ ρ(X, Y) ≤ 1, and ρ(X, Y) is a dimensionless quantity. If ρ(X, Y) is close to zero, there is no clear linear trend between X and Y. If ρ(X, Y) = 1 or ρ(X, Y) = −1, then Y is a linear function of X, i.e. Y = aX + b with a ≠ 0; values of ρ close to ±1 mean X and Y are strongly correlated.
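A quick numpy check of these properties (the variables here are my own examples):

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=10_000)
y = 2 * x + 5                          # exact linear function of x (a = 2 ≠ 0)
noise = rng.normal(size=10_000)        # generated independently of x

print(np.corrcoef(x, y)[0, 1])         # ≈ 1.0: perfect positive linear trend
print(np.corrcoef(x, noise)[0, 1])     # ≈ 0.0: no linear trend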
If you remember the formulas for Markov's and Chebyshev's inequalities, you will notice that the expected value, mean and variance are used in them. Through Markov's inequality, the mean bounds the probability that a non-negative random variable takes a value much larger than its mean, and through Chebyshev's inequality, the variance bounds the probability that a random variable deviates far from its mean.
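For reference, the standard statements of the two inequalities (only alluded to in the text above) are:

Markov: for a non-negative random variable X and any a > 0, P(X ≥ a) ≤ E[X]/a.
Chebyshev: for any random variable X with finite variance and any k > 0, P(|X − E[X]| ≥ k·SD(X)) ≤ 1/k².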
Machine Learning, Probability, Statistics, Data Science, Deep Learning.

A practical approach to get you up to speed quickly

Photo by Josh Appel on Unsplash

This article is intended for beginners in deep learning who wish to gain knowledge about probability and statistics, and also as a reference for practitioners. In my previous article, I wrote about the concepts of linear algebra for deep learning in a top-down approach (link for the article). (If you do not have enough of an idea about linear algebra, please read that first.) The same top-down approach is used here: the use cases are described first, and then the concepts. All the example code uses Python and numpy. Formulas are provided as images for reuse.

Table of contents:

Introduction
Foundations of probability
Measures of central tendency and variability
Discrete probability distributions, binomial distribution
Continuous probability distributions, uniform and normal distributions
Model accuracy measurement tools
Random process and Markov chains
Probabilistic programming
External resources
Introduction:

Probability is the science of quantifying uncertain things. Most machine learning and deep learning systems utilize a lot of data to learn about patterns in the data. Whenever data is utilized in a system rather than sole logic, uncertainty grows, and whenever uncertainty grows, probability becomes relevant.

By introducing probability to a deep learning system, we introduce common sense to the system. Otherwise the system would be very brittle and would not be useful. In deep learning, several models such as Bayesian models, probabilistic graphical models and hidden Markov models are used, and they depend entirely on probability concepts.

Real-world data is chaotic. Since deep learning systems utilize real-world data, they require a tool to handle the chaoticness. It is always practical to use a simple and uncertain system rather than a complex but certain and brittle one.

The versions of probability and statistics presented here are highly simplified versions of the actual subjects. Both are huge, individual research subjects, but the concepts written here are enough for a deep learning aspirant. I have left links to some great resources on these subjects at the end of this article.
Foundations of probability:

If you start deep learning, the very first example (probably) that the tutor provides you is the MNIST handwritten digit recognition task. It is like the hello world of deep learning.

(mnist dataset)

The task is to classify handwritten digits and label them. As I mentioned earlier, the machine learning system you create to do this task is not accurate or certain. The images are 28×28-pixel images. For example, consider the neural network below for this task.

The input layer is a flattened vector of the size of the input image (28×28 = 784). It is passed to a layer where the input vector is multiplied by the weights and added to the bias vector. This layer has 10 neurons, implying that there are 10 digits. Then they go through a softmax activation function. After this step the network does not output an exact digit but a vector of length 10, with each element being a probability value for each digit. We use argmax to get the index of the highest probability in the output vector, which is the prediction.
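Here is a minimal numpy sketch of that forward pass (the weight values and the input are random placeholders, not a trained model):

import numpy as np

rng = np.random.default_rng(0)
x = rng.random(784)                          # flattened 28x28 input image (placeholder values)
W = rng.standard_normal((10, 784)) * 0.01    # weights of the 10-neuron layer
b = np.zeros(10)                             # bias vector

logits = W @ x + b                           # multiply by the weights, add the bias
probs = np.exp(logits - logits.max())
probs /= probs.sum()                         # softmax: probabilities for digits 0-9
print(probs.sum(), np.argmax(probs))         # the probabilities sum to 1; argmax is the prediction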
Having said this, we shall revisit softmax in detail later. The point here is that to understand this neural network, we have to understand some basics of probability.

vector y = [y0, y1, y2, y3, y4, y5, y6, y7, y8, y9]

Where is the probability in this?
Sample space: the set of all possible values in an experiment. In the above example, the input can be any image from a set of images, so that set is the sample space for the input; similarly, the output prediction can take any value from the digits 0 to 9, so the digits are the sample space for the output prediction.

Random variable: a variable that can take different values of the sample space randomly. In the above neural network, the input vector x is a random variable, the output 'prediction' is a random variable, and the weights of the neural network are also random variables (because they are initialized randomly using a probability distribution).

Probability distribution: a description of how likely a random variable is to take on the different values of its sample space. In the neural network, the weights are initialized from a probability distribution. The output vector y follows a softmax distribution, which is also a probability distribution, showing the probability of the prediction taking each of the digit values. (In general, softmax provides the probabilities of categorical values.) In this example, the probability distribution y is discrete (it has 10 discrete values), whereas in other cases it may be continuous (the sample space is then also continuous).
In a discrete distribution, the probability distribution is given by a probability mass function (PMF), denoted P(X = x). In the above example, the softmax function is the PMF of the random variable X.

Consider an instance of the output vector: y = [0.03, 0.5, 0.07, 0.04, 0.06, 0.05, 0.05, 0.06, 0.04, 0.1]. What is so special about this? If you look closely, the values all add up to 1.0, and argmax shows that index 1 has the maximum value of 0.5, indicating the prediction should be 1. This property of adding up to 1.0 is called normalization. The values must also lie between 0 and 1: an impossible event is denoted by 0 and a sure event is denoted by 1. The same conditions hold true for continuous variables (we'll see that in a moment).
Three basic definitions: in any probability book or class, you will always learn these three basics in the very beginning: joint probability, conditional probability and marginal probability.

Joint probability: the probability of two events occurring simultaneously, denoted by P(y = y, x = x) or P(y and x). Example: the probability of seeing the sun and the moon at the same time is very low.

Conditional probability: the probability of some event y happening, given that another event x has happened, denoted by P(y = y | x = x). Since the event x has occurred, its probability cannot be zero. Example: the probability of drinking water after eating is very high.

Marginal probability: the probability of a subset of random variables from a superset of them. Example: the probability of people having long hair is the sum of the probability of men having long hair and the probability of women having long hair (here the long-hair random variable is kept fixed and the gender random variable is summed over).
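As a concrete illustration (the numbers are made up for this sketch, not taken from the article), here is how the three quantities relate for two discrete variables:

import numpy as np

# Joint distribution P(gender, hair) as a table:
# rows = gender (0 = man, 1 = woman), columns = hair (0 = short, 1 = long)
joint = np.array([[0.45, 0.05],
                  [0.30, 0.20]])

p_long_hair = joint[:, 1].sum()               # marginal: P(hair = long) = 0.05 + 0.20 = 0.25
p_woman = joint[1, :].sum()                   # marginal: P(gender = woman) = 0.50
p_long_given_woman = joint[1, 1] / p_woman    # conditional: P(long | woman) = 0.20 / 0.50 = 0.4
print(p_long_hair, p_woman, p_long_given_woman)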
Bayes' theorem: it describes the probability of an event based on prior knowledge of other events related to that event. Bayes' theorem exploits the concept of belief in probability: "I am 40% sure that this event will happen" is not the same as "the dice has a 16% chance of showing 6". The former relies on belief and is called Bayesian probability, while the latter relies on previous data and is called frequentist probability. (Read more.)

Bayes' theorem is also used in one of the simplest machine learning algorithms, the naive Bayes algorithm; see the sklearn docs.
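For reference, the theorem states:

P(A | B) = P(B | A) · P(A) / P(B)

where P(A) is the prior belief about event A and P(A | B) is the updated (posterior) probability of A after observing B.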
Measures of central tendency and variation:

Mean: the arithmetic average value of the data. (numpy docs)

import numpy as np
a = np.array([1, 2, 3, 4, 5, 6])
np.mean(a)

Median: the middle value of the data. (numpy docs)

np.median(a)

Mode: the most frequently occurring value in the data. (scipy docs)

from scipy import stats
stats.mode(a)

Expected value: the expected value of some variable X with respect to some distribution P(X = x) is the mean value of X when x is drawn from P. The expectation is equal to the statistical mean of the dataset (look why).

Variance: the measure of variability of the data around its mean value. (numpy docs)

np.var(a)

For a random variable, the variance is given by Var(X) = E[(X − E[X])²]; this formula has the same meaning as the one above.

Standard deviation: the square root of the variance. (numpy docs)

np.std(a)

There are also some other measures of variation, such as the range and the interquartile distance (look here).

Covariance: it shows how two variables are linearly related to each other. Numpy outputs a covariance matrix in which Cᵢⱼ denotes the covariance between xᵢ and xⱼ. (numpy docs)

np.cov(a)
Probability distributions: as I mentioned in the beginning, several components of a neural network are random variables, and the values of the random variables are drawn from a probability distribution. In many cases we use only certain types of probability distributions. Some of them are:

Binomial distribution: a binomial random variable is the number of successes in n trials of a random experiment. A random variable X is said to follow a binomial distribution when each trial has only two outcomes (success and failure). Naturally, the binomial distribution is for discrete random variables. (numpy docs)

import numpy as np
n = 10    # number of trials
p = 0.5   # probability of success
s = 1000  # size
np.random.binomial(n, p, s)
Continuous distributions: these are defined for continuous random variables. For a continuous distribution, we describe the distribution using a probability density function (pdf), denoted p(x), whose integral is equal to 1. (If you are not comfortable with integral or differential calculus, look here.)

Uniform distribution: the simplest form of continuous distribution, with every element of the sample space being equally likely. (numpy docs)

import numpy as np
np.random.uniform(low=1, high=10, size=100)

Normal distribution: "Order from Chaos." It is the most important of all distributions, also known as the Gaussian distribution. In the absence of prior knowledge about what form a distribution over the real numbers should take, the normal distribution is a good choice because it has high entropy, and the central limit theorem suggests that the sum of several independent random variables is normally distributed. (numpy docs)

import numpy as np
mu = 0
sigma = 1
np.random.normal(mu, sigma, size=100)
If a normal distribution has mean 0 and standard deviation 1, it is called the standard normal distribution (the famous bell curve).

In machine learning you often encounter the words 'normalization' and 'standardization'. The process we did above to obtain a standard normal distribution is called standardization, whereas the process of restricting the range of dataset values to between 0.0 and 1.0 is called normalization. However, these terms are often interchanged.

from sklearn.preprocessing import StandardScaler
import numpy as np
data = np.array([1, 2, 3, 4, 5]).reshape(-1, 1)  # sklearn scalers expect a 2-D array
scaler = StandardScaler()
scaler.fit_transform(data)

At this stage we have come across several formulae and definitions. It is very useful if you memorize all of these (or use this article as a reference!). There are also other important distributions, such as the exponential and Poisson distributions; refer here for a quick glance.
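For contrast, here is a minimal sketch of normalization (min-max scaling to [0, 1]) using sklearn's MinMaxScaler; this snippet is my own addition, not from the original article:

from sklearn.preprocessing import MinMaxScaler
import numpy as np

data = np.array([1, 2, 3, 4, 5]).reshape(-1, 1)
scaler = MinMaxScaler()                # rescales each feature to the range [0, 1]
print(scaler.fit_transform(data))      # [[0.], [0.25], [0.5], [0.75], [1.]]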
Softmax distribution: at the beginning of this article I mentioned softmax. It is a kind of probability distribution. It is best used to represent a 1-of-N categorical distribution, it is one of the most commonly used distributions in deep learning, and it is very convenient to differentiate.

import numpy as np

def softmax(x):
    e_x = np.exp(x - np.max(x))   # subtracting np.max(x) is only for numerical stability; it is not part of the formula
    return e_x / e_x.sum(axis=0)
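A quick usage check (my own example values): the outputs are non-negative and sum to 1, as the normalization property requires.

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
print(probs, probs.sum())   # ≈ [0.659 0.242 0.099], sum = 1.0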
Model accuracy measurement tools: in order to measure the performance of a deep learning model, we use several concepts, called metrics, and knowing them is very important. In the above MNIST neural network, if the network predicted 95 of the 100 input images correctly, then its accuracy is said to be 95%, and so on. (This part uses the sklearn Python library for examples.)

You can understand accuracy intuitively, but the formal definition is: the proportion of correct results among all obtained results. Accuracy is a very simple measurement, and it may provide wrong insights sometimes; in some cases, higher accuracy doesn't mean our model is working correctly. To clarify this, first look at the following definitions:

True Positives (TP): number of positive examples, labeled as such.
False Positives (FP): number of negative examples, labeled as positive.
True Negatives (TN): number of negative examples, labeled as such.
False Negatives (FN): number of positive examples, labeled as negative.

Accuracy = (TP + TN) / (TP + FP + TN + FN)

Confusion matrix: a matrix containing the TP, FP, TN and FN values.

from sklearn.metrics import confusion_matrix
y_true = [2, 0, 2, 2, 0, 1]
y_pred = [0, 0, 2, 2, 0, 2]
confusion_matrix(y_true, y_pred)
Now imagine a binary classifier that outputs 1 or 0. If everything were proper and the model were not biased, the accuracy would reflect its real performance. But if we tweak the model to say 0 all the time (or 1 all the time), so that its predictive power is none, we can still get a high accuracy! Consider the table:

                      Classified positive    Classified negative
Positive class        0 (TP)                 25 (FN)
Negative class        0 (FP)                 125 (TN)

It is obvious from this table that the model is very bad (all the positive-class examples are incorrectly classified), yet the accuracy is 83%!

Precision and recall: so we use two other metrics, precision and recall. Precision tells you how many of the selected objects were correct; recall tells you how many of the correct objects were selected. In the above example, both precision and recall are 0.0, which indicates that the model is extremely poor.

F1 score: the harmonic average of precision and recall. An F1 score of 0 means worst and 1 means best. By using it, we can resolve the chaotic behaviour of the accuracy metric.
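For reference, the standard definitions are:

Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 · Precision · Recall / (Precision + Recall)

With TP = 0, as in the table above, recall is 0/25 = 0 and precision is 0/0, conventionally reported as 0.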
Sklearn has a classification_report function that you can invoke to get the precision, recall and F1 score:

>>> from sklearn.metrics import classification_report
>>> y_true = [0, 1, 2, 2, 2]
>>> y_pred = [0, 0, 2, 2, 1]
>>> target_names = ['class 0', 'class 1', 'class 2']
>>> print(classification_report(y_true, y_pred, target_names=target_names))
              precision    recall  f1-score   support

     class 0       0.50      1.00      0.67         1
     class 1       0.00      0.00      0.00         1
     class 2       1.00      0.67      0.80         3

   micro avg       0.60      0.60      0.60         5
   macro avg       0.50      0.56      0.49         5
weighted avg       0.70      0.60      0.61         5
Mean absolute error: the average of the absolute differences between the original and predicted values.

Mean squared error: the average of the squared differences between the original and predicted values.

from sklearn.metrics import mean_squared_error
y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]
mean_squared_error(y_true, y_pred)

Mean squared error is widely used because it makes it easier to compute the gradients.
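For reference, with n samples, true values yᵢ and predictions ŷᵢ:

MAE = (1/n) · ∑ |yᵢ − ŷᵢ|
MSE = (1/n) · ∑ (yᵢ − ŷᵢ)²

For the example above, MSE = (0.5² + 0.5² + 0² + 1²)/4 = 0.375.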
Receiver operating characteristic (ROC) curve: the ROC curve is a graph showing the performance of classification models like our digit-recognizer example. It has two parameters: the true positive rate (TPR) and the false positive rate (FPR). TPR is the same as recall and is also called sensitivity; FPR is 1 − specificity. These two are plotted against each other to obtain the curve (the points of the plot are obtained by changing the classification threshold and predicting the results again repeatedly). The area under this ROC curve is a measure of the accuracy.

Interpretation of the area under the curve (AUC): when AUC = 1.0 the model is best; when AUC = 0.5 the model has no discriminative power (it is no better than random guessing); and if AUC = 0.0 the model is reciprocating the results (classifying 1's as 0's and 0's as 1's).

import numpy as np
from sklearn.metrics import roc_auc_score
y_true = np.array([0, 0, 1, 1])
y_scores = np.array([0.1, 0.4, 0.35, 0.8])
roc_auc_score(y_true, y_scores)

More information: look here and here. Calculation of the AUC using the trapezoidal rule (sklearn uses this rule): look here.
Random process, Markov chains and graphical models:

A random process is a collection of random variables indexed by some set of values. Intuitively, a random process (or stochastic process) is a mathematical model for a phenomenon that proceeds in a manner that is unpredictable to the observer. In some random processes, such as a series of coin tosses, the outcome of the next event does not depend on the outcome of the current event.

If the index set of the random variables in a random process consists of discrete natural numbers, the process is called a discrete-time random process, or a random sequence. If the index set lies on the real number line, the process is a continuous-time random process. If the index set lies in the Cartesian plane or some higher-dimensional Euclidean space, the process is said to be a random field.
Random processes are a really interesting part of probability. They are used to model time-related things such as weather forecasts, the stock market and natural phenomena. There are several kinds of random processes; here we focus on Markov chains. For more elaborate material, refer to Wikipedia.

Markov chains: a Markov chain is a probabilistic automaton. It has states, and it describes a sequence of events in which the probability of transitioning from one state to another depends only on the previous event. Here is an excellent visual explanation of Markov chains.

(A Markov chain describing the weather condition; the values represent the probability of transitioning from one state to another.)
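As an illustration, here is a tiny two-state weather chain in numpy (the transition probabilities are my own placeholders, not the figure's actual values):

import numpy as np

states = ["sunny", "rainy"]
# transition[i, j] = probability of moving from state i to state j
transition = np.array([[0.8, 0.2],
                       [0.4, 0.6]])

rng = np.random.default_rng(0)
state = 0                                       # start in the "sunny" state
for _ in range(10):
    state = rng.choice(2, p=transition[state])  # the next state depends only on the current one
    print(states[state])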
Markov chains are used for simple systems such as next-word prediction, language generation, sound generation and many other systems. An extension of Markov chains known as hidden Markov models is used in speech recognition systems. I will stop random processes here; I have planned an extensive article on them because of the excessive length of the topic.
Probabilistic programming: a new paradigm of programming has evolved, known as probabilistic programming. These languages and libraries help to model Bayesian-style machine learning. It is an exciting research field, supported by both the AI community and the software engineering community. These languages readily support probabilistic functions and models such as Gaussian models, Markov models, etc. One such library for writing probabilistic programs, created by Uber last year, is Pyro, which supports Python with PyTorch (a deep learning library) as the backend.

If you liked this article about probability and statistics for deep learning, leave claps for the article. The content provided here is intended for beginners in deep learning and can also be used as reference material by deep learning practitioners. For beginners, I would also suggest several other awesome external resources to reinforce their knowledge of the interesting field of probability (though the knowledge you gained through this article is enough for proceeding in deep learning).

External Resources:
Awesome free course on deep learning and machine learning: fast.ai
Intuitive explanation of calculus: 3blue1brown
Best book on deep learning: the Deep Learning book
Random processes: Sheldon M. Ross
Statistics: All of Statistics by Larry Wasserman
Probability theory: William Feller
Machine Learning, Artificial Intelligence, Pharmaceutical, Chemistry.

There's a new paper about a metric for generative models of molecules. If you have followed my blog (like here, here and here), you already know that deep learning generative models can produce new molecules for chemistry and drug discovery, but that their evaluation is difficult, especially their diversity. The paper is by Kristina Preuer, Philipp Renz, Thomas Unterthiner, Sepp Hochreiter (famous for his co-invention of the LSTM) and Günter Klambauer, from JKU Linz, Austria:

[1803.09518] Fréchet ChemblNet Distance: A metric for generative models for molecules (arxiv.org)

The Fréchet ChemblNet Distance

The formula seems a little complicated, but after some thinking it becomes quite simple: the FCD is the distance between the means and the covariance matrices of two distributions, the real one r and the generated one g.
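The formula appears only as an image in the original post; by analogy with the Fréchet Inception Distance, it can be written as

FCD(r, g) = |μ_r − μ_g|² + Tr(Σ_r + Σ_g − 2·(Σ_r·Σ_g)^(1/2))

where μ_r, μ_g are the means and Σ_r, Σ_g the covariance matrices of the ChemblNet representations of the real and generated molecules.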
The second term has that form because a² + b² − 2ab = (a − b)², and because the trace (the sum of the eigenvalues) is the common way to measure the size of this type of matrix. So just think of the FCD as a statistical distance between two distributions. These two distributions are taken from the representations in the penultimate layer of a "ChemblNet", a neural network trained to predict various biological activities from the ChEMBL chemical database. ChemblNet is a kind of Inception neural network for chemistry, and in fact the same lab previously introduced the Fréchet Inception Distance to evaluate generators in computer vision. So the concept is not new, but the application to chemistry is.

Which use cases for this metric? 'Me-too' or 'first-in-class' molecules?
In the paper, my favorite quote is: "A generative model should produce a) diverse molecules (SMILES) which possess similar b) chemical and c) biological properties as already known molecules."

There are indeed use cases where data scientists want to generate new molecules with properties similar to known molecules, for example when a pharma company is looking for 'me-too' molecules to bypass patents from competitors. However, there is a more challenging case: suppose we want 'first-in-class' molecules, having an unseen combination of different properties. For example, we have one dataset of molecules with property A, another dataset of molecules with property B, and we want to generate molecules combining both properties A and B (say, A = active on the disease and B = soluble in water). In this case, the FCD of a good generative model should deviate from both A and B. So how do we generalize the FCD?

There is a similar question about image generation in computer vision: for example, how does the Fréchet Inception Distance deal with a model generating images of women with glasses when we have no such image in the training set? (By the way, has this question already been raised in the computer vision community?)

(DCGAN: generates women with glasses, with none in the training set.)
I have no idea about the answer: if you have anything, please share it in the comments, on Telegram, or on the DiversityNet draft. That is why I think it still helps to use measures of internal diversity like variance or entropy: their definitions are 'intrinsic' and do not require any comparison with a real-world reference distribution. As a result, they are more general (details here).

Which advantages over previous metrics?

Quite surprisingly, the paper claims: "The FCD's advantage over previous metrics is that it can detect if generated molecules are a) diverse and have similar b) chemical and c) biological properties as real molecules."

I don't think this advantage is significant. It is easy to build a single score combining all the old-fashioned metrics, for example by taking the product: internal diversity × nearest-neighbor diversity × solubility × activity × … I still see one advantage for metrics like the FCD (or the Earth Mover's Distance, see here) over variance/entropy: they seem harder to be 'gamed' by the AI. With reinforcement learning, people just stuff the reward function with anything that improves their metrics, as in this …
…reviewed include the z-test, t-test, chi-square test, and F-test. We believe that it is important to know how to construct and interpret a confidence interval, and when and how to apply each of the test statistics discussed when conducting a hypothesis test.

2. Point Estimates

Point estimates are single …
…didn't go to the Lakers game and the Lakers won. So my loss is that I lost a great game.

8. Significance and Confidence Levels

The significance level can be interpreted as the probability that a test statistic will reject the null hypothesis by chance when the null hypothesis is actually true (i.e., …
…Israel). Mr. Polanitzer develops and teaches business valuation professional trainings and courses for the Israel Association of Valuators and Financial Actuaries, and frequently speaks on business valuation at professional meetings and conferences in Israel. He also developed IAVFA's certification …
Robotics, Robots, Engineering, Technology.

There are many modular robots that are designed to self-assemble in some form or another to complete a given task; MIT's Robotic Molecule, the EU's Symbrion and VNIT's ReBiS are great examples of how these types of robots work. All of them have one thing in common and one thing they lack: they can all rearrange themselves to overcome obstacles that impede the task at hand, but they cannot alter their surrounding environment to get the job done.

(The SMORES-EP is capable of using objects in its surrounding environment to complete tasks. 📷: ModLab)
That is where ModLab's (University of Pennsylvania) SMORES-EP (Self-Assembling Modular Robot for Extreme Shapeshifting, Electro-Permanent magnets) robots shine, as they are capable of doing both. Each square robot features magnetic wheels, giving the modules the ability to assemble into many different configurations and to handle jobs regular bipedal or multi-pedal robots can't. Each face of the square modules features electro-permanent magnets that can form bonds with other modules or even with different metal objects. These magnets can be turned on or off by sending a pulse of current through the coil of each wheel, allowing for those connections, and they maintain a force of 89 newtons even without power.

Each module has its own onboard battery and communicates with the others through inductive coupling of the magnets, essentially turning them into short-range radios.
They can even converse with a central computer over an 802.11 Wi-Fi connection, which provides them with a level of autonomous ability through a custom algorithm that features configuration discovery, root-module search, and matching and mapping. To interact with and alter their environment, the robots are equipped with an imaging system that utilizes an RGB camera, giving them the ability to see objects, distances, heights and obstacles. The information is analyzed and processed by the centralized system, which then provides the robots with instructions on how to complete any given task.

(Placement of electro-permanent magnets housed inside each wheel.)