Columns: query (string, 1-13.4k chars); pos (string, 1-61k chars); neg (string, 1-63.9k chars); query_lang (147 language classes); __index_level_0__ (int64, 0-3.11M).
If Kerckhoffs's principle is correct, why does the NSA not publish their Suite A ciphers? The linked page offers little insight. To quote the page: "the decision to keep them secret is in keeping with a layered security posture." How should we (the cryptographic community) interpret the NSA's decision to keep their best ciphers private? It seems that if obfuscation works, then Kerckhoffs's principle represents an extreme viewpoint and should not be applied generally.
Why are cryptography algorithms not exported to certain countries? There have been strict laws about the export of crypto software to certain countries. I can understand the intent, but I never understood the point, given that there is nothing stopping these countries from developing their own crypto software. So why limit the export? And in light of the recent NSA revelations of backdoors being engineered into crypto algorithms, wouldn't the export of such compromised algorithms have helped to unravel secrets?
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and linked questions. The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you could solve it using gradient descent if needed. However, my mind was blown when I read that the optimization problem is solved using linear algebra, by finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: how can PCA turn from a geometric optimization problem into a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why the eigenvectors come out to be the principal components and why the eigenvalues come out to be the variance of the data projected onto them. I am a software engineer and not a mathematician, by the way.
eng_Latn
1,100
Compute the maximum number of black rectangles in a custom black-and-white board. ATTENTION: this is not the maximal rectangle problem; it is about the number of rectangles. Look at this picture: I am looking for an algorithm to compute the total number of blue rectangles (not the maximal rectangle). The algorithm must be O(n^2), and we have the constraint that the board is square (n*n), or in this picture m=n. Thanks in advance.
Puzzle: Find largest rectangle (maximal rectangle problem) What's the most efficient algorithm to find the rectangle with the largest area which will fit in the empty space? Let's say the screen looks like this ('#' represents filled area): .................... ..............###### ##.................. .................### .................### #####............... #####............... #####............... A probable solution is: .................... ..............###### ##...++++++++++++... .....++++++++++++### .....++++++++++++### #####++++++++++++... #####++++++++++++... #####++++++++++++... Normally I'd enjoy figuring out a solution. Although this time I'd like to avoid wasting time fumbling around on my own since this has a practical use for a project I'm working on. Is there a well-known solution? Shog9 wrote: Is your input an array (as implied by the other responses), or a list of occlusions in the form of arbitrarily sized, positioned rectangles (as might be the case in a windowing system when dealing with window positions)? Yes, I have a structure which keeps track of a set of windows placed on the screen. I also have a grid which keeps track of all the areas between each edge, whether they are empty or filled, and the pixel position of their left or top edge. I think there is some modified form which would take advantage of this property. Do you know of any?
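Neither post pins down an implementation, so here is an illustrative sketch (not taken from either post) of the standard way to solve the second question: build, row by row, a histogram of consecutive empty cells per column and run the stack-based largest-rectangle-in-histogram routine, which is O(rows * cols) overall. Names are my own.

    def largest_empty_rectangle(grid):
        # grid: list of strings, '.' = empty, '#' = filled
        rows, cols = len(grid), len(grid[0])
        heights = [0] * cols              # consecutive empty cells ending at this row
        best = 0
        for r in range(rows):
            for c in range(cols):
                heights[c] = heights[c] + 1 if grid[r][c] == '.' else 0
            # largest rectangle in this histogram, via a monotonic stack
            stack = []                    # indices with increasing heights
            for c, h in enumerate(heights + [0]):     # trailing 0 flushes the stack
                while stack and heights[stack[-1]] >= h:
                    top = stack.pop()
                    width = c if not stack else c - stack[-1] - 1
                    best = max(best, heights[top] * width)
                stack.append(c)
        return best

The counting variant asked in the first question usually starts from the same per-column height array but needs a different accumulation step; the sketch above covers only the "largest rectangle" case.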
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and linked questions. The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you could solve it using gradient descent if needed. However, my mind was blown when I read that the optimization problem is solved using linear algebra, by finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: how can PCA turn from a geometric optimization problem into a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why the eigenvectors come out to be the principal components and why the eigenvalues come out to be the variance of the data projected onto them. I am a software engineer and not a mathematician, by the way.
eng_Latn
1,101
Inverse eigenvalue of a linear transformation. $T$ is an invertible linear transformation, and $\lambda$ is an eigenvalue of $T$. How do I prove that $\lambda^{-1}$ is an eigenvalue of $T^{-1}$? I know that for a matrix I can use the fact that $Av=\lambda v$, but how does it work for a linear transformation?
If A is invertible, prove that $\lambda \neq 0$, and $\vec{v}$ is also an eigenvector for $A^{-1}$, what is the corresponding eigenvalue? If A is invertible, prove that $\lambda \neq 0$, and $\vec{v}$ is also an eigenvector for $A^{-1}$, what is the corresponding eigenvalue? I don't really know where to start with this one. I know that $p(0)=\det(0\cdot I_{n}-A)=\det(-A)=(-1)^{n}\det(A)$, thus $p(0)=0$ exactly when $\det(A)=0$, in which case $0$ is an eigenvalue of $A$ and $A$ is not invertible. If neither is $0$, then $0$ is not an eigenvalue of $A$ and thus $A$ is invertible. I'm unsure of how to use this information to prove $\vec{v}$ is also an eigenvector for $A^{-1}$ and how to find a corresponding eigenvalue.
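For reference, the one-line argument both questions are after (standard, not quoted from either post): if $Av=\lambda v$ with $A$ invertible and $v\neq 0$, then
$$Av=\lambda v \;\Longrightarrow\; v = A^{-1}(\lambda v) = \lambda\, A^{-1}v \;\Longrightarrow\; A^{-1}v = \tfrac{1}{\lambda}\, v,$$
where $\lambda \neq 0$ because $\lambda = 0$ would give $Av = 0$ for a nonzero $v$, making $A$ singular. The same manipulation works verbatim for an invertible linear transformation $T$, since no coordinates are used.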
Question on how to normalize a regression coefficient. Not sure if "normalize" is the correct word to use here, but I will try my best to illustrate what I am trying to ask. The estimator used here is least squares. Suppose you have $y=\beta_0+\beta_1x_1$; you can center it around the mean via $y=\beta_0'+\beta_1x_1'$ where $\beta_0'=\beta_0+\beta_1\bar x_1$ and $x_1'=x_1-\bar x_1$, so that $\beta_0'$ no longer has any influence on estimating $\beta_1$. By this I mean $\hat\beta_1$ in $y=\beta_1x_1'$ is equivalent to $\hat\beta_1$ in $y=\beta_0+\beta_1x_1$. We have a reduced equation for an easier least-squares calculation. How do you apply this method in general? Now I have the model $y=\beta_1e^{x_1t}+\beta_2e^{x_2t}$, and I am trying to reduce it to $y=\beta_1x'$.
eng_Latn
1,102
Finding the area of a parallelogram. If (0,0), (-1,3), (4,-5), (3,-2) are the vertices of a parallelogram, how do I find its area? I know it's supposed to involve a 2x2 matrix and I need to find its determinant, but I don't know which columns to use. Am I supposed to draw it out? Am I supposed to shift one of the vertices to the origin?
Area of a parallelogram, vertices $(-1,-1), (4,1), (5,3), (10,5)$. I need to find the area of a parallelogram with vertices $(-1,-1), (4,1), (5,3), (10,5)$. If I denote $A=(-1,-1)$, $B=(4,1)$, $C=(5,3)$, $D=(10,5)$, then I see that $\overrightarrow{AB}=(5,2)=\overrightarrow{CD}$. Similarly $\overrightarrow{AC}=\overrightarrow{BD}$. So I see that these points indeed form a parallelogram. It is an assignment from a linear algebra class. I wasn't sure if I had to use a matrix or something.
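A worked version of the determinant route for the second set of vertices (my own computation, using the two edge vectors out of $A$ as the rows of the matrix):
$$\overrightarrow{AB}=(5,2),\quad \overrightarrow{AC}=(6,4),\qquad \text{Area}=\left|\det\begin{pmatrix}5 & 2\\ 6 & 4\end{pmatrix}\right|=|5\cdot 4-2\cdot 6|=8 .$$
For the first question's vertices, shifting everything so that one vertex sits at the origin and taking the two adjacent edge vectors out of it gives the same recipe.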
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and linked questions. The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you could solve it using gradient descent if needed. However, my mind was blown when I read that the optimization problem is solved using linear algebra, by finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: how can PCA turn from a geometric optimization problem into a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why the eigenvectors come out to be the principal components and why the eigenvalues come out to be the variance of the data projected onto them. I am a software engineer and not a mathematician, by the way.
eng_Latn
1,103
Python list comprehension with lambdas. I'm running Python 3.4.2, and I'm confused by the behavior of my code. I'm trying to create a list of callable polynomial functions of increasing degree: bases = [lambda x: x**i for i in range(3)] But for some reason it does this: print([b(5) for b in bases]) # [25, 25, 25] Why does bases seemingly contain the last lambda expression from the list comprehension, repeated?
How do I create a list of Python lambdas (in a list comprehension/for loop)? I want to create a list of lambda objects from a list of constants in Python; for instance: listOfNumbers = [1,2,3,4,5] square = lambda x: x * x listOfLambdas = [lambda: square(i) for i in listOfNumbers] This will create a list of lambda objects, however, when I run them: for f in listOfLambdas: print f(), I would expect that it would print 1 4 9 16 25 Instead, it prints: 25 25 25 25 25 It seems as though the lambdas have all been given the wrong parameter. Have I done something wrong, and is there a way to fix it? I'm in Python 2.4 I think. EDIT: a bit more of trying things and such came up with this: listOfLambdas = [] for num in listOfNumbers: action = lambda: square(num) listOfLambdas.append(action) print action() Prints the expected squares from 1 to 25, but then using the earlier print statement: for f in listOfLambdas: print f(), still gives me all 25s. How did the existing lambda objects change between those two print calls? Related question:
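Both posts run into Python's late binding of closure variables: each lambda looks up i (or num) when it is called, not when it is defined, so they all see the loop variable's final value. A minimal sketch of the usual fix, binding the current value through a default argument (illustrative, not taken from either post):

    # Late binding: every lambda reads i at call time, so all of them see i == 2.
    broken = [lambda x: x**i for i in range(3)]
    print([f(5) for f in broken])        # [25, 25, 25]

    # Fix: default-argument values are evaluated at definition time,
    # so each lambda captures its own copy of i.
    fixed = [lambda x, i=i: x**i for i in range(3)]
    print([f(5) for f in fixed])         # [1, 5, 25]

functools.partial or a small factory function that takes i as a parameter achieves the same effect.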
Local polynomial regression: Why does the variance increase monotonically in the degree? How can I show that the variance of local polynomial regression is increasing with the degree of the polynomial (Exercise 6.3 in Elements of Statistical Learning, second edition)? This question has been asked before, but the answer just states that it follows easily. More precisely, we consider $y_{i}=f(x_{i})+\epsilon_{i}$ with the $\epsilon_{i}$ being independent with standard deviation $\sigma.$ The estimator is given by $$ \hat{f}(x_{0})=\left(\begin{array}{ccccc} 1 & x_{0} & x_{0}^{2} & \dots & x_{0}^{d}\end{array}\right)\left(\begin{array}{c} \alpha\\ \beta_{1}\\ \vdots\\ \beta_{d} \end{array}\right) $$ for $\alpha,\beta_{1},\dots,\beta_{d}$ solving the following weighted least squares problem $$ \min\left(y-\underbrace{\left(\begin{array}{ccccc} 1 & x_{1} & x_{1}^{2} & \dots & x_{1}^{d}\\ \vdots\\ 1 & & & & x_{n}^{d} \end{array}\right)}_{X}\left(\begin{array}{c} \alpha\\ \beta_{1}\\ \vdots\\ \beta_{d} \end{array}\right)\right)^{t}W\left(y-\left(\begin{array}{ccccc} 1 & x_{1} & x_{1}^{2} & \dots & x_{1}^{d}\\ \vdots\\ 1 & & & & x_{n}^{d} \end{array}\right)\left(\begin{array}{c} \alpha\\ \beta_{1}\\ \vdots\\ \beta_{d} \end{array}\right)\right) $$ for $W=\text{diag}\left(K(x_{0},x_{i})\right)_{i=1\dots n}$ with $K$ being the regression kernel. The solution to the weighted least squares problem can be written as $$ \left(\begin{array}{c} \alpha\\ \beta_{1}\\ \vdots\\ \beta_{d} \end{array}\right)=\left(X^{t}WX\right)^{-1}X^{t}WY. $$ Thus, for $l(x_{0})=\left(\begin{array}{ccccc} 1 & x_{0} & x_{0}^{2} & \dots & x_{0}^{d}\end{array}\right)\left(X^{t}WX\right)^{-1}X^{t}W$ we obtain $$ \hat{f}(x_{0})=l(x_{0})Y $$ implying that $$ \text{Var }\hat{f}(x_{0})=\sigma^{2}\left\Vert l(x_{0})\right\Vert ^{2}=\sigma^{2}\left(\begin{array}{ccccc} 1 & x_{0} & x_{0}^{2} & \dots & x_{0}^{d}\end{array}\right)\left(X^{t}WX\right)^{-1}X^{t}W^{2}X\left(X^{t}WX\right)^{-1}\left(\begin{array}{ccccc} 1 & x_{0} & x_{0}^{2} & \dots & x_{0}^{d}\end{array}\right)^{t}. $$ My approach: an induction using the formula for the inverse of a block matrix, but I did not succeed. The paper by D. Ruppert and M. P. Wand derives an asymptotic expression for the variance for $n\rightarrow\infty$ in Theorem 4.1, but it is not clear that it is increasing in the degree.
eng_Latn
1,104
How does the AUC of the ROC equal the concordance probability? If the linked explanation is correct, then I think I understand what concordance probability is. However, I have also found a formula for the concordance probability that is a little different from the one on Quora: it includes the counts of tied pairs with weight $0.5$. Which one should I trust? I also understand that AUC stands for Area Under a Curve and how to compute it... visually. What I do not understand is how the AUC equals the concordance probability. I have just found that my question is the same as an earlier one.
Why is ROC AUC equivalent to the probability that two randomly-selected samples are correctly ranked? I found there are two ways to understand what AUC stands for but I couldn't get why these two interpretations are equivalent mathematically. In the first interpretation, AUC is the area under the ROC curve. Picking points from 0 to 1 as threshold and calculate sensitivity and specificity accordingly. When we plot them against each other, we get ROC curve. The second one is that the AUC of a classifier is equal to the probability that the classifier will rank a randomly chosen positive example higher than a randomly chosen negative example, i.e. P(score(x+)>score(x−)). (from )
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and linked questions. The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you could solve it using gradient descent if needed. However, my mind was blown when I read that the optimization problem is solved using linear algebra, by finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: how can PCA turn from a geometric optimization problem into a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why the eigenvectors come out to be the principal components and why the eigenvalues come out to be the variance of the data projected onto them. I am a software engineer and not a mathematician, by the way.
eng_Latn
1,105
Parametric vs non-parametric machine learning methods. I looked up many references and websites and researched how to determine whether a method is parametric or non-parametric. I came up with the definitions below. A parametric algorithm has a fixed number of parameters. In contrast, a non-parametric algorithm uses a flexible number of parameters, and the number of parameters often grows as it learns from more data. Moreover, I found: in a parametric model we have a finite number of parameters, and in non-parametric models the number of parameters is (potentially) infinite. And many other methods of determination, though the problem is that none of them helps one determine whether a certain hypothetical method is parametric or not. (For instance, why is k-means' number of parameters constant but KNN's variable? Or, basically, what do we call a parameter and what do we not?)
What exactly is the difference between a parametric and non-parametric model? I am confused with the definition of non-parametric model after reading this link and . Originally I thought "parametric vs non-parametric" means if we have distribution assumptions on the model (similar to parametric or non-parametric hypothesis testing). But both of the resources claim "parametric vs non-parametric" can be determined by if number of parameters in the model is depending on number of rows in the data matrix. For kernel density estimation (non-parametric) such a definition can be applied. But under this definition how can a neural network be a non-parametric model, as the number of parameters in the model is depending on the neural network structure and not on the number of rows in the data matrix? What exactly is the difference between parametric and a non-parametric model?
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and linked questions. The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you could solve it using gradient descent if needed. However, my mind was blown when I read that the optimization problem is solved using linear algebra, by finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: how can PCA turn from a geometric optimization problem into a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why the eigenvectors come out to be the principal components and why the eigenvalues come out to be the variance of the data projected onto them. I am a software engineer and not a mathematician, by the way.
eng_Latn
1,106
Creating a list of matrices I am new to programming and I was wondering if my question has a simple implementation. I have a bunch of matrices and I want a way to be able to store them, or be able to easily call them and do operations on them. For example, if I have 100 matrices, called, M1,M2,...M100; is there a way I can rename them so that if I want to call the nth matrix, I can just write M(nth)? EDIT: For example, if I want to add M1+M1, M1+M2, ...,M1+M100; I want to be able to write a loop something kind of like, for i=1:100 AM(i)=M(1)+M(i) end Is this possible?
Array of Matrices in MATLAB I am looking for a way to store a large variable number of matrixes in an array in MATLAB. Are there any ways to achieve this? Example: for i: 1:unknown myArray(i) = zeros(500,800); end Where unknown is the varied length of the array, I can revise with additional info if needed. Update: Performance is the main reason I am trying to accomplish this. I had it before where it would grab the data as a single matrix, show it in real time and then proceed to process the next set of data. I attempted it using multidimensional arrays as suggested below by Rocco, however my data is so large that I ran out of Memory, I might have to look into another alternative for my case. Will update as I attempt other suggestions. Update 2: Thank you all for suggestions, however I should have specified beforehand, precision AND speed are both an integral factor here, I may have to look into going back to my original method before trying 3-d arrays and re-evaluate the method for importing the data.
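Both posts are MATLAB-specific (cell arrays versus a preallocated multidimensional array are the usual answers there). Purely as an illustrative analogue, the same storage trade-off looks like this in Python/NumPy; sizes and names are made up:

    import numpy as np

    # Option 1: a list of matrices: flexible, and the matrices may even differ in shape.
    mats = [np.zeros((500, 800)) for _ in range(100)]
    sums = [mats[0] + m for m in mats]          # e.g. M1 + Mi for every i

    # Option 2: one preallocated 3-D array: contiguous and fast to index,
    # but every slice must share a shape and the whole block must fit in memory.
    stack = np.zeros((100, 500, 800))
    sums2 = stack[0] + stack                    # broadcasting adds the first matrix to each slice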
Superscript outside math mode What is the easiest way to superscript text outside of math mode? For example, let's say I want to write the $n^{th}$ element, but without the math mode's automatic italicization of the th. And what if I still want the n to be in math mode, but the th outside?
eng_Latn
1,107
How do I determine the number of pixels required to interface any VGA display? I have a guide that tells me that for interfacing a 640x480 screen you need 800 pixels per row and 521 lines, along with all of the front and back porch stuff. I want to know: how do they determine those numbers? If I have a screen of a different resolution, how do I determine the pixels and lines? Also, how are the non-displayable areas, i.e. the porches and retrace time, determined?
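For what it's worth, here is the arithmetic behind those totals, using the commonly published timings for 640x480 at 60 Hz (the guide's 521 lines differs slightly from the 525 used below; other resolutions substitute their own porch and sync widths, usually taken from published timing tables):

    # 640x480@60 timings: visible + front porch + sync + back porch, per axis
    h_visible, h_front, h_sync, h_back = 640, 16, 96, 48
    v_visible, v_front, v_sync, v_back = 480, 10, 2, 33

    h_total = h_visible + h_front + h_sync + h_back   # 800 pixels per line
    v_total = v_visible + v_front + v_sync + v_back   # 525 lines per frame
    pixel_clock = h_total * v_total * 60              # ~25.2 MHz (nominal 25.175 MHz)

    print(h_total, v_total, pixel_clock / 1e6)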
Programming pattern to generate a VGA signal with a micro-controller? I want to generate a VGA signal with a micro-controller (like a TI Tiva ARM, which runs at 90/120 MHz). I'm not sure how to get accurate timings with a micro-controller. Which programming pattern do I need to use? Do I need any inline assembler code? How do I use interrupts wisely? It would be great if anybody could show some pseudo code for generating a VGA signal. I successfully generated a VGA signal with an FPGA, but I just can't figure out how to do it with an MCU.
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and linked questions. The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you could solve it using gradient descent if needed. However, my mind was blown when I read that the optimization problem is solved using linear algebra, by finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: how can PCA turn from a geometric optimization problem into a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why the eigenvectors come out to be the principal components and why the eigenvalues come out to be the variance of the data projected onto them. I am a software engineer and not a mathematician, by the way.
eng_Latn
1,108
What is the right projection for area calculation in Kenya? I want to calculate the area of counties (1st admin level) in Kenya in ArcGIS. What is the right projection for that? I know I should use an equal-area projection, but when I use one (I tried six different ones) the results differ heavily from the county areas given in Wikipedia. I could also use UTM 37, but by definition it's not an equal-area projection.
Coordinate system for accurately calculating areas of polygons that cross UTM Zones? I have multiple polygons, all located offshore around the UK. I am trying to calculate the areas of these polygons in square kilometres, without splitting them by UTM zone. Is it possible to do this accurately? I have been using World Mercator, but there are big distortions at this scale. If the polygons were limited to the east coast, say, I would use UTM31N. However, as they are all over the UK waters, I am unsure what to use. Is there a coordinate system that covers this area? British National Grid does not extend far enough out to sea to be of use.
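One projection-free alternative, not mentioned in either post and shown here only as an illustrative sketch, is to compute ellipsoidal (geodesic) areas directly from geographic coordinates, which sidesteps the choice of equal-area projection and UTM zone; pyproj's Geod exposes this. The coordinates below are made up:

    from pyproj import Geod

    geod = Geod(ellps="WGS84")

    # One polygon ring as lon/lat sequences (illustrative values only).
    lons = [36.8, 37.2, 37.1, 36.7, 36.8]
    lats = [-1.3, -1.2, -0.8, -0.9, -1.3]

    area_m2, perimeter_m = geod.polygon_area_perimeter(lons, lats)
    print(abs(area_m2) / 1e6, "km^2")   # area is signed by ring orientation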
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and linked questions. The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you could solve it using gradient descent if needed. However, my mind was blown when I read that the optimization problem is solved using linear algebra, by finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: how can PCA turn from a geometric optimization problem into a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why the eigenvectors come out to be the principal components and why the eigenvalues come out to be the variance of the data projected onto them. I am a software engineer and not a mathematician, by the way.
eng_Latn
1,109
The expected value of days a miner will be stuck A miner is stranded and there are three paths that he can take. Path A loops back to itself and takes him 1 day to walk it. Path B loops back to itself and takes him 2 days to walk it. Path C is the exit and it takes him 3 days to walk it. Each path has an equal probability of being chosen and once a wrong path is chosen, he gets disorientated and cannot remember which path it was and the probabilities remain the same. What is the expected value of the amount of days he will spend before he exits the mine? I was thinking that maybe it used the Geometric distribution but I don't think that accounts for the varying number of days. If someone could help clear this up it would be greatly appreciated!
Probability brain teaser with infinite loop I found this problem and I've been stuck on how to solve it. A miner is trapped in a mine containing 3 doors. The first door leads to a tunnel that will take him to safety after 3 hours of travel. The second door leads to a tunnel that will return him to the mine after 5 hours of travel. The third door leads to a tunnel that will return him to the mine after 7 hours. If we assume that the miner is at all times equally likely to choose any one of doors, what is the expected length of time until he reaches safety? The fact that the miner could be stuck in an infinite loop has confused me. Any help is greatly appreciated.
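For reference, both versions fall to the same first-step conditioning argument (spelled out here, not quoted from either post). Writing $E$ for the expected time and using the second question's numbers:
$$E=\tfrac{1}{3}(3)+\tfrac{1}{3}(5+E)+\tfrac{1}{3}(7+E)=5+\tfrac{2}{3}E \;\Longrightarrow\; E=15 \text{ hours}.$$
The first question's setup (loops of 1 and 2 days, exit of 3 days) gives $E=\tfrac{1}{3}(1+E)+\tfrac{1}{3}(2+E)+\tfrac{1}{3}(3)=2+\tfrac{2}{3}E$, i.e. $E=6$ days; no geometric-distribution bookkeeping is needed.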
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and linked questions. The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you could solve it using gradient descent if needed. However, my mind was blown when I read that the optimization problem is solved using linear algebra, by finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: how can PCA turn from a geometric optimization problem into a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why the eigenvectors come out to be the principal components and why the eigenvalues come out to be the variance of the data projected onto them. I am a software engineer and not a mathematician, by the way.
eng_Latn
1,110
Prove that if $A$ is skew-symmetric, then $X^TAX = 0$ for all $X = [x_1\ x_2 \cdots x_n]^T$. Recall that a matrix $A$ is skew-symmetric if and only if $A^T = -A$. Prove that if $A$ is skew-symmetric, then $X^TAX = 0$ for all $X = [x_1\ x_2 \cdots x_n]^T$.
$x^TAx=0$ for all $x$ when $A$ is a skew symmetric matrix Let $A$ be an $n\times n$ skew symmetric matrix. Show that $x^TAx =0 \ \forall x \in \mathbb R^n$. How to prove this?
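The standard one-line argument, included here for completeness rather than quoted from either post: a $1\times 1$ matrix equals its own transpose, so
$$x^{T}Ax=\left(x^{T}Ax\right)^{T}=x^{T}A^{T}x=-\,x^{T}Ax \;\Longrightarrow\; 2\,x^{T}Ax=0 \;\Longrightarrow\; x^{T}Ax=0 .$$
(Over a field of characteristic 2 the last step fails, but for real matrices, as in both questions, it is fine.)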
Why are symmetric positive definite (SPD) matrices so important? I know the definition of a symmetric positive definite (SPD) matrix, but I want to understand more. Why are they so important, intuitively? Here is what I know. What else? For given data, the covariance matrix is SPD. The covariance matrix is an important metric; see this for an intuitive explanation. The quadratic form $\frac 1 2 x^\top Ax-b^\top x +c$ is convex if $A$ is SPD. Convexity is a nice property for a function that can make sure a local solution is a global solution. For convex problems there are many good algorithms, but not for non-convex problems. When $A$ is SPD, the optimization solution for the quadratic form $$\text{minimize}~~~ \frac 1 2 x^\top Ax-b^\top x +c$$ and the solution of the linear system $$Ax=b$$ are the same. So we can convert between two classical problems. This is important because it enables us to use tricks discovered in one domain in the other. For example, we can use the conjugate gradient method to solve a linear system. There are many good algorithms (fast, numerically stable) that work better for an SPD matrix, such as Cholesky decomposition. EDIT: I am not asking for identities for SPD matrices, but for the intuition behind the properties that shows their importance. For example, as mentioned by @Matthew Drury, if a matrix is SPD, the eigenvalues are all positive real numbers, but why does all-positive matter? @Matthew Drury had a great answer about flow, and that is what I was looking for.
eng_Latn
1,111
Derivative with respect to the variance of a multivariate Gaussian. I am working with a multivariate Gaussian distribution with mean $\mu \in \mathbb{R}^d$ and variance-covariance matrix $\Sigma \in S^{d\times d}$, where $S^{d \times d}$ denotes the set of symmetric matrices. When I try to maximize the log-likelihood of some observations with respect to $\Sigma$, I basically take the first-order conditions, and I need the following derivative: $$\nabla_{\Sigma} \left( \mu^\top \Sigma^{-1}\mu \right)$$ This derivative is too complicated for me since $\Sigma$ is a matrix. How can I compute it?
Prove that $\nabla_{\mathrm X} \mbox{tr} (\mathrm A \mathrm X^{-1} \mathrm B) = - \mathrm X^{-\top} \mathrm A^\top \mathrm B^\top \mathrm X^{-\top}$ Prove that $$\nabla_{\mathrm X} \mbox{tr} (\mathrm A \mathrm X^{-1} \mathrm B) = - \mathrm X^{-\top} \mathrm A^\top \mathrm B^\top \mathrm X^{-\top}$$ My proof is below. I am interested in other proofs. My proof Let $$f (\mathrm X) := \mbox{tr} (\mathrm A \mathrm X^{-1} \mathrm B)$$ Hence, $$\begin{array}{rl} f (\mathrm X + h \mathrm V) &= \mbox{tr} (\mathrm A (\mathrm X + h \mathrm V)^{-1} \mathrm B)\\ &= \mbox{tr} (\mathrm A (\mathrm X ( \mathrm I + h \mathrm X^{-1} \mathrm V))^{-1} \mathrm B)\\ &= \mbox{tr} (\mathrm A ( \mathrm I + h \mathrm X^{-1} \mathrm V)^{-1} \mathrm X^{-1} \mathrm B)\\ &= \mbox{tr} (\mathrm A ( \mathrm I - h \mathrm X^{-1} \mathrm V + O (h^2)) \mathrm X^{-1} \mathrm B)\\ &= \mbox{tr} (\mathrm A \mathrm X^{-1} \mathrm B) - h \, \mbox{tr} ( \mathrm A \mathrm X^{-1} \mathrm V \mathrm X^{-1} \mathrm B) + O (h^2)\\ &= f (\mathrm X) - h \, \mbox{tr} ( \mathrm X^{-1} \mathrm B \mathrm A \mathrm X^{-1} \mathrm V ) + O (h^2)\end{array}$$ Thus, the directional derivative of $f$ in the direction of $\mathrm V$ at $\mathrm X$ is $$D_{\mathrm V} f (\mathrm X) = - \mbox{tr} ( \mathrm X^{-1} \mathrm B \mathrm A \mathrm X^{-1} \mathrm V ) = - \mbox{tr} ( (\mathrm X^{-\top} \mathrm A^\top \mathrm B^\top \mathrm X^{-\top})^\top \mathrm V ) = - \langle \mathrm X^{-\top} \mathrm A^\top \mathrm B^\top \mathrm X^{-\top}, \mathrm V \rangle$$ and, lastly, $$\nabla_{\mathrm X} \mbox{tr} (\mathrm A \mathrm X^{-1} \mathrm B) = - \mathrm X^{-\top} \mathrm A^\top \mathrm B^\top \mathrm X^{-\top}$$
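Connecting the two posts (my own step, combining the identity proved above with the cyclic property of the trace): since $\mu^{\top}\Sigma^{-1}\mu$ is a scalar,
$$\mu^{\top}\Sigma^{-1}\mu=\operatorname{tr}\!\left(\mu^{\top}\Sigma^{-1}\mu\right),$$
so taking $\mathrm A=\mu^{\top}$ and $\mathrm B=\mu$ in the identity gives
$$\nabla_{\Sigma}\left(\mu^{\top}\Sigma^{-1}\mu\right)=-\,\Sigma^{-\top}\mu\,\mu^{\top}\Sigma^{-\top}=-\,\Sigma^{-1}\mu\,\mu^{\top}\Sigma^{-1}\quad\text{for symmetric }\Sigma .$$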
Extreme learning machine: what's it all about? I've been thinking about, implementing and using the Extreme Learning Machine (ELM) paradigm for more than a year now, and the longer I do, the more I doubt that it is really a good thing. My opinion, however, seems to be in contrast with the scientific community where -- when using citations and new publications as a measure -- it seems to be a hot topic. The ELM was introduced around 2003. The underlying idea is rather simple: start with a 2-layer artificial neural network and randomly assign the coefficients in the first layer. This way, one transforms the non-linear optimization problem, which is usually handled via backpropagation, into a simple linear regression problem. In more detail, for $\mathbf x \in \mathbb R^D$, the model is $$ f(\mathbf x) = \sum_{i=1}^{N_\text{hidden}} w_i \, \sigma\left(v_{i0} + \sum_{k=1}^{D} v_{ik} x_k \right)\,.$$ Now, only the $w_i$ are adjusted (in order to minimize squared-error loss), whereas the $v_{ik}$'s are all chosen randomly. As a compensation for the loss in degrees of freedom, the usual suggestion is to use a rather large number of hidden nodes (i.e. free parameters $w_i$). From another perspective (not the one usually promoted in the literature, which comes from the neural network side), the whole procedure is simply linear regression, but one where you choose your basis functions $\phi$ randomly, for example $$ \phi_i(\mathbf x) = \sigma\left(v_{i0} + \sum_{k=1}^{D} v_{ik} x_k \right)\,.$$ (Many other choices besides the sigmoid are possible for the random functions. For instance, the same principle has also been applied using radial basis functions.) From this viewpoint, the whole method becomes almost too simplistic, and this is also the point where I start to doubt that the method is really a good one (... whereas its scientific marketing certainly is). So, here are my questions: The idea to raster the input space using random basis functions is, in my opinion, good for low dimensions. In high dimensions, I think it is just not possible to find a good choice using random selection with a reasonable number of basis functions. Therefore, does the ELM degrade in high dimensions (due to the curse of dimensionality)? Do you know of experimental results supporting/contradicting this opinion? In the linked paper there is only one 27-dimensional regression data set (PYRIM) where the method performs similarly to SVMs (whereas I would rather like to see a comparison to a backpropagation ANN). More generally, I would like to hear your comments about the ELM method.
eng_Latn
1,112
Forecasting daily online visits in R. We have three years of data for online visits at a daily level. We want to forecast the daily visits for the next 90 days. What would be the best method to capture weekday seasonality, holiday seasons, and also the drift? Can this be successfully done in R? We are currently using R. We have considered ARIMA, but it does not capture seasonality. While converting the data to a time series in R, what should the "frequency" be? Should we use ARIMA with regressors?
Daily forecasting. We have three years of data for online visits at a daily level. We want to forecast the daily visits for the next 90 days. What would be the best method to capture weekday seasonality, holiday seasons, and also the drift? Can this be successfully done in R? We are currently using R. We have considered ARIMA, but it does not capture seasonality. While converting the data to a time series in R, what should the "frequency" be? Should we use ARIMA with regressors?
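The questions are R-oriented (there, a weekly pattern usually means frequency = 7, with holidays added as regressors). Purely as an illustrative sketch of the same idea in Python, a seasonal ARIMA with an external holiday regressor might look like the following; the file name, column names, and model orders are made up, and a drift term could be added via the trend argument:

    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    df = pd.read_csv("visits.csv", parse_dates=["date"], index_col="date")
    y = df["visits"].asfreq("D")
    holidays = df[["is_holiday"]].asfreq("D")      # 0/1 dummy for holiday periods

    # Weekly (period-7) seasonality; orders are placeholders to be tuned.
    model = SARIMAX(y, exog=holidays, order=(1, 1, 1), seasonal_order=(1, 0, 1, 7))
    fit = model.fit(disp=False)

    future_holidays = holidays.iloc[-90:].to_numpy()  # placeholder future regressor values
    forecast = fit.forecast(steps=90, exog=future_holidays)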
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and linked questions. The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you could solve it using gradient descent if needed. However, my mind was blown when I read that the optimization problem is solved using linear algebra, by finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: how can PCA turn from a geometric optimization problem into a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why the eigenvectors come out to be the principal components and why the eigenvalues come out to be the variance of the data projected onto them. I am a software engineer and not a mathematician, by the way.
eng_Latn
1,113
Recursive time series forecasting In a recursive forecasting model, let's say you are trying to predict sales for the next month and you will append that prediction to your input and predict the month after. Basically, your target is Sales quantity, which you lagged. But let's say you have other continuous numerical features like Price which you will lag as well. As your model is making predictions on sales_qty, it will feed it back to sales_qty as input so you will never run out of values for that one. So how do you deal with your other features (other than the target) in order to generate them as next months inputs in a recursive forecasting model (because eventually you will run out of them)? Do you create a sub-model and try to predict them as well?
Recursive time series forecasting model. In a recursive forecasting model, let's say you are trying to predict sales of Target for the next month, and you will append that prediction to your input and predict the month after. Basically, your target is sales quantity, which you lagged. But let's say you have other continuous numerical features, like price, which you will lag as well. As your model makes predictions on Sales_QTY, it will feed them back to sales_qty as input, so you will never run out of values for that one. So how do you deal with your other features (other than the target) in order to generate them as next month's inputs in a recursive forecasting model (because eventually you will run out of them)? Do you create a sub-model and try to predict them as well?
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and linked questions. The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you could solve it using gradient descent if needed. However, my mind was blown when I read that the optimization problem is solved using linear algebra, by finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: how can PCA turn from a geometric optimization problem into a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why the eigenvectors come out to be the principal components and why the eigenvalues come out to be the variance of the data projected onto them. I am a software engineer and not a mathematician, by the way.
eng_Latn
1,114
One-hot encoding and logistic regression. When using logistic regression, and the categorical variables are one-hot encoded, do we always have to drop a variable to avoid the dummy variable trap? If I recall correctly, I have seen somewhere that if you use regularization you don't need to drop one variable, but I can't seem to find the article again. I'm a newbie. Many thanks.
Dropping one of the columns when using one-hot encoding My understanding is that in machine learning it can be a problem if your dataset has highly correlated features, as they effectively encode the same information. Recently someone pointed out that when you do one-hot encoding on a categorical variable you end up with correlated features, so you should drop one of them as a "reference". For example, encoding gender as two variables, is_male and is_female, produces two features which are perfectly negatively correlated, so they suggested just using one of them, effectively setting the baseline to say male, and then seeing if the is_female column is important in the predictive algorithm. That made sense to me but I haven't found anything online to suggest this may be the case, so is this wrong or am I missing something? Possible (unanswered) duplicate:
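As a concrete illustration of the "drop one level" convention discussed above (a sketch only; whether to drop at all depends on the model and regularization, as both posts note), pandas exposes it directly:

    import pandas as pd

    df = pd.DataFrame({"gender": ["male", "female", "female", "male"]})

    full = pd.get_dummies(df["gender"])                       # columns 'female' and 'male'
    reduced = pd.get_dummies(df["gender"], drop_first=True)   # drops 'female'; the dropped
                                                              # level becomes the baseline
    print(full.columns.tolist(), reduced.columns.tolist())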
L1 & L2 regularization in LightGBM. This question pertains to the L1 & L2 regularization parameters in LightGBM. As per the official documentation: reg_alpha (float, optional (default=0.)) – L1 regularization term on weights. reg_lambda (float, optional (default=0.)) – L2 regularization term on weights. I have seen data scientists using both of these parameters at the same time; ideally you use either L1 or L2, not both together. While reading about tuning LGBM parameters I came across one such case: the Kaggle official GBDT Specification and Optimization Workshop in Paris, where the instructors are ML experts. And these experts have used positive values of both the L1 & L2 params in an LGBM model. Link below (Ctrl+F 'search_spaces' to directly reach the parameter grid in this long kernel). I have seen the same in XGBoost implementations. My question is: why use both at the same time in LGBM/XGBoost? Thanks.
eng_Latn
1,115
I am trying to understand a paper. It uses PCA and then LDA for dimensionality reduction. I have read about eigenfaces and the PCA approach, but I am not familiar with LDA, and I am unable to understand the mathematics behind the scatter matrices: why we minimize the determinant of the projected scatter matrices and how we perform it. Can anyone explain the maths behind LDA and how the generalized eigenvalues come into the picture, or point to some related tutorial where I can understand the approach?
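For reference, the standard textbook formulation (not taken from the paper in question): with class means $\mu_c$, overall mean $\mu$, and $n_c$ samples per class, LDA builds within- and between-class scatter matrices and solves a generalized eigenproblem,
$$S_W=\sum_{c}\sum_{x\in c}(x-\mu_c)(x-\mu_c)^{\top},\qquad S_B=\sum_{c} n_c\,(\mu_c-\mu)(\mu_c-\mu)^{\top},$$
$$\max_{w}\;\frac{w^{\top}S_B\,w}{w^{\top}S_W\,w}\quad\Longleftrightarrow\quad S_B\,w=\lambda\,S_W\,w ,$$
so the projection directions are the top eigenvectors of $S_W^{-1}S_B$ (at most $C-1$ of them for $C$ classes), and the determinant- or trace-ratio criteria over the projected scatter matrices lead to the same generalized eigenvectors.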
Apparently, the Fisher analysis aims at simultaneously maximising the between-class separation, while minimising the within-class dispersion. A useful measure of the discrimination power of a variable is hence given by the diagonal quantity: $B_{ii}/W_{ii}$. I understand that the size (p x p) of the Between (B) and Within-Class (W) matrices are given by the number of input variables, p. Given this, how can $B_{ii}/W_{ii}$ be a "useful measure of the discrimination power" of a single variable? At least two variables are required to construct the matrices B and W, so the respective traces would represent more than one variable. Update: Am I right in thinking that $B_{ii}/W_{ii}$ is not a trace over a trace, where the sum is implied, but the matrix element $B_{ii}$ divided by $W_{ii}$? Currently that is the only way I can reconcile the expression with the concept.
Assume I have a dataset for a supervised statistical classification task, e.g., via a Bayes' classifier. This dataset consists of 20 features and I want to boil it down to 2 features via dimensionality reduction techniques such as Principal Component Analysis (PCA) and/or Linear Discriminant Analysis (LDA). Both techniques are projecting the data onto a smaller feature subspace: with PCA, I would find the directions (components) that maximize the variance in the dataset (without considering the class labels), and with LDA I would have the components that maximize the between-class separation. Now, I am wondering if, how, and why these techniques can be combined and if it makes sense. For example: transforming the dataset via PCA and projecting it onto a new 2D subspace transforming (the already PCA-transformed) dataset via LDA for max. in-class separation or skipping the PCA step and using the top 2 components from a LDA. or any other combination that makes sense.
eng_Latn
1,116
I ran a principal components analysis in R on my data. All my regressors are continuous, non-categorical variables, except gender, which I excluded. I will add it and compare model 1 = PCA to model 2 = PCA + gender and see if it is significant. I determined that PC1/2/3 explain 95% of the variance, so I only consider the first 3 columns. All the variables have non-zero values and come up in the order I inputted them into the matrix of regressors. I used a scree plot to determine that I need only 3 regressors to explain most of the variance (which is rather amazing given I have 30+ variables!). I am confused about how I determine which variables I should choose. Do I order the variables per PC column from highest to lowest and see which variables contribute the most? I saw in the Freeway R guide that they actually plot the graphs and compare: for similar-looking graphs they take one variable, and for different-looking graphs they subtract the variables from each other. This was rather confusing to me. Questions: Is the ANOVA approach correct in this case for this variable? Scree plot: do I look for the first 'kink' or the point where the curve first reaches 0 on the x-axis (number of variables)? Mine jumps dramatically down after 3 variables (to about 0.5) and then curves down to 0 at variable 15. Different websites say different things, hence I'm confused. Is the approach above correct for writing a linear model? Do I write it as $y={\rm variable}_1 + {\rm variable}_2 + {\rm variable}_3$, or do I multiply them, or both?
I'm new to feature selection and I was wondering how you would use PCA to perform feature selection. Does PCA compute a relative score for each input variable that you can use to filter out noninformative input variables? Basically, I want to be able to order the original features in the data by variance or amount of information contained.
The entire site is blank right now. The header and footer are shown, but no questions.
eng_Latn
1,117
I am trying to get a basic understanding of the SVM algorithm, but I have a problem with the basic mathematics. I am following a lecture. Suppose the two classes can be separated by a hyperplane: $(w \cdot x) + b = 0$. A hyperplane is defined as $n \cdot (r-r_0)=0$; does it mean that $b=-w \cdot r_0$? I tried to consider the 2-dimensional case, where $w \cdot x +b =0$ is a line, but it doesn't make sense to me: $b=-w \cdot x$, where $b$ should be a constant; how can I generalize this to the 2-dimensional case? In addition, why is $w$ actually orthogonal to the plus and minus planes?
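For reference, the two pieces of algebra the question is circling around (standard facts, spelled out here): take any fixed point $x_0$ on the hyperplane; then
$$w\cdot x_0+b=0\;\Longrightarrow\;b=-\,w\cdot x_0 ,$$
so $b$ is a constant because $x_0$ is one fixed point of the plane, not a free variable. And for any two points $x_1, x_2$ on the plane,
$$w\cdot x_1+b=0=w\cdot x_2+b\;\Longrightarrow\;w\cdot(x_1-x_2)=0 ,$$
so $w$ is orthogonal to every direction lying in the hyperplane, and likewise to the parallel "plus" and "minus" margin planes $w\cdot x+b=\pm 1$.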
How does a support vector machine (SVM) work, and what differentiates it from other linear classifiers? (I'm thinking in terms of the underlying motivations for the algorithm, optimisation strategies, generalisation capabilities, and run-time complexity.)
Given $a + b = 1$, Prove that $a^ab^b + a^bb^a \le 1$; $a$ and $b$ are positive real numbers.
eng_Latn
1,118
When working with a general matrix $A=$ \begin{bmatrix}a & b\\c & d\end{bmatrix}, I find that the eigenvalues are $\lambda = \frac{d+a \pm \sqrt{(d-a)^2+4bc}}{2}$. I then find that the eigenvectors are $e=$ \begin{bmatrix}-b\\a-\lambda\end{bmatrix}. I also know that under iteration with initial vector $x_0$ = \begin{bmatrix}x_0\\y_0\end{bmatrix}, we have $x_{n+1} = Ax_n$. The iterates converge to a line with slope $m$. I am told that the slope of this line is in the same direction as the eigenvector associated with the largest eigenvalue. How do I prove this?
Suppose I am given a $2$x$2$ matrix $A=$ \begin{pmatrix} a & b \\ c & d \end{pmatrix} And an initial vector $x_n$ = \begin{pmatrix} x_0 \\ y_0\end{pmatrix}. Under repeated iteration $x_{n+1} = Ax_n$, we find that $x_{n+1}$ converges to a line in the direction of the $eigenvector$ of $A$ of the form $X$= $[u,mu]$ where $m$ is the slope of the line. I know that this process somehow also involves the largest $eigenvalue, \lambda,$ of $A$, though I am unsure how it connects. My question is: How does the largest eigenvalue connect to this repeated iteration, how do I find the slope of the line of convergence $m$, and what is meant by the "direction of the eigenvector"?
Suppose I am given a $2$x$2$ matrix $A=$ \begin{pmatrix} a & b \\ c & d \end{pmatrix} And an initial vector $x_n$ = \begin{pmatrix} x_0 \\ y_0\end{pmatrix}. Under repeated iteration $x_{n+1} = Ax_n$, we find that $x_{n+1}$ converges to a line in the direction of the $eigenvector$ of $A$ of the form $X$= $[u,mu]$ where $m$ is the slope of the line. I know that this process somehow also involves the largest $eigenvalue, \lambda,$ of $A$, though I am unsure how it connects. My question is: How does the largest eigenvalue connect to this repeated iteration, how do I find the slope of the line of convergence $m$, and what is meant by the "direction of the eigenvector"?
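A compact way to see the connection (the standard power-iteration argument, added here for reference): write $x_0$ in the eigenbasis, $x_0=c_1v_1+c_2v_2$ with $|\lambda_1|>|\lambda_2|$; then
$$x_n=A^{n}x_0=c_1\lambda_1^{\,n}v_1+c_2\lambda_2^{\,n}v_2=\lambda_1^{\,n}\!\left(c_1v_1+c_2\Big(\tfrac{\lambda_2}{\lambda_1}\Big)^{\!n}v_2\right),$$
and the second term dies off as $n\to\infty$, so (for $c_1\neq 0$) the iterates line up with the dominant eigenvector $v_1=[u,\;mu]^{T}$. The slope of the limiting line is $m=v_{1,2}/v_{1,1}$, the ratio of the second to the first component of that eigenvector.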
eng_Latn
1,119
I know that linear discriminant analysis (LDA) is used for classification and multiple linear regression (MLR) for regression. Let's say I have a matrix X (independent variables) and Y (dependent variable). If Y is continuous and I run a regression, it is MLR; if Y is categorical (like 1111122222) and I do MLR to model the relation, does it become LDA? Can someone explain this in simple words and illustrations?
Is there a relationship between regression and linear discriminant analysis (LDA)? What are their similarities and differences? Does it make any difference if there are two classes or more than two classes?
I am working with dimensionality reduction algorithms. Linear Discriminant Analysis (LDA) is a supervised algorithm that takes into account the class label (which is not the case of PCA for example). I am using Python to do a comparative study between some algorithms. Why with two classes (k = 2), regardless of the data dimensionality, LDA gives one dimension, i.e. the new subspace is composed of only one dimension? For example if I try to project my dataset (of 100 attributes and 2 classes) to 10-dimensional space by using LDA(n_components=10) I only obtain 1 dimension? For k>2 I can obtain the number of discriminant vectors that I specified as n_components. How can I get n_components dimensions for k=2?
eng_Latn
1,120
What are the mathematical steps to get the loadings and scores matrices of a 3x3 matrix based on PCA, and what is the relationship between the eigenvalues/eigenvectors and the loadings and scores?
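A small sketch of those steps on made-up data (note that "loadings" is used with different conventions; here the eigenvectors themselves are taken as the loadings, while some texts scale them by the square roots of the eigenvalues):

    import numpy as np

    X = np.array([[2.5, 0.5, 1.2],
                  [1.9, 1.1, 0.7],
                  [3.1, 0.3, 1.8]])          # illustrative 3x3 data matrix

    Xc = X - X.mean(axis=0)                  # 1. centre each column
    C = Xc.T @ Xc / (Xc.shape[0] - 1)        # 2. covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)     # 3. eigendecomposition (ascending order)
    order = np.argsort(eigvals)[::-1]
    eigvals, loadings = eigvals[order], eigvecs[:, order]
    scores = Xc @ loadings                   # 4. scores = centred data projected on the loadings
    # The eigenvalues equal the variances (ddof=1) of the score columns.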
In today's pattern recognition class my professor talked about PCA, eigenvectors and eigenvalues. I understood the mathematics of it. If I'm asked to find eigenvalues etc. I'll do it correctly like a machine. But I didn't understand it. I didn't get the purpose of it. I didn't get the feel of it. I strongly believe in the following quote: You do not really understand something unless you can explain it to your grandmother. -- Albert Einstein Well, I can't explain these concepts to a layman or grandma. Why PCA, eigenvectors & eigenvalues? What was the need for these concepts? How would you explain these to a layman?
The scores for the 4 questions are 111, 1218, 261 and 5 respectively, but they display 2 digits per line. This is an obvious bug in the design.
eng_Latn
1,121
I have too many environmental variables to use in a multiple regression analysis. If I use all the variables, the models are just too complex. The use of the PCA axes in the regression analysis was impossible to interpret (since there wasn't a clear correlation with the environmental variables), so we chose to select a limited number of variables, namely those with the highest explanatory power in the PCA. A PCA was run for each set of environmental variables separately, namely the variables related to the structure of the stream, the evolving vegetation, the climatic variables, and the physical-chemical characteristics of the water from the summer period and from the winter period. The PCA was performed using the correlation matrix option in the software PC-ORD, v. 4.21 (McCune & Mefford 1999). For each set of variables, only the variables with coordinates higher than 0.20 on the first two axes of the PCA were selected to be used in the multiple regression analysis. I could not find literature confirming that it's OK to do this, but I think it is not wrong.
I'm new to feature selection and I was wondering how you would use PCA to perform feature selection. Does PCA compute a relative score for each input variable that you can use to filter out noninformative input variables? Basically, I want to be able to order the original features in the data by variance or amount of information contained.
For a given data matrix $A$ (with variables in columns and data points in rows), it seems like $A^TA$ plays an important role in statistics. For example, it is an important part of the analytical solution of ordinary least squares. Or, for PCA, its eigenvectors are the principal components of the data. I understand how to calculate $A^TA$, but I was wondering if there's an intuitive interpretation of what this matrix represents, which leads to its important role?
eng_Latn
1,122
I am doing a multiple linear regression in order to see whether the predictors are positively (positive $\beta$s) or negatively (negative $\beta$s) correlated with the response. Since I have a lot of predictors, I use the lasso to select variables. With cross-validation I choose my optimal $\lambda$, but I am left with coefficients ($\beta$s) that have no standard errors. I read that significance testing for penalized regression is still under research. So my questions are: 1) How do I interpret positive, negative, and 0 coefficients from the lasso? 2) In addition, can I compare the magnitudes of the predictors (all have the same unit/dimension) to say which one is more important than another for the response?
I'm currently working on building a predictive model for a binary outcome on a dataset with ~300 variables and 800 observations. I've read much on this site about the problems associated with stepwise regression and why not to use it. I've been reading into LASSO regression and its ability for feature selection and have been successful in implementing it with the use of the "caret" package and "glmnet". I am able to extract the coefficient of the model with the optimal lambda and alpha from "caret"; however, I'm unfamiliar with how to interpret the coefficients. Are the LASSO coefficients interpreted in the same method as logistic regression? Would it be appropriate to use the features selected from LASSO in logistic regression? EDIT Interpretation of the coefficients, as in the exponentiated coefficients from the LASSO regression as the log odds for a 1 unit change in the coefficient while holding all other coefficients constant.
Take a sponge ball and compress it. The net force acting on the body is zero and the body isn't displaced. So can we conclude that there is no work done on the ball?
eng_Latn
1,123
Extracting sub-matrices from a matrix
How to divide an image into blocks in MATLAB?
Subspace of Noetherian space still Noetherian
eng_Latn
1,124
LaTeX problem with matrix left alignment on the margin
Align equation left
Every partial order can be extended to a linear ordering
eng_Latn
1,125
l1 and l2 regularizations
Why L1 norm for sparse models
When will L1 regularization work better than L2 and vice versa?
eng_Latn
1,126
Matrix library for large matrices?
What are the most widely used C++ vector/matrix math/linear algebra libraries, and their cost and benefit tradeoffs?
A fiber bundle over Euclidean space is trivial.
eng_Latn
1,127
What is pca.components_ in sk-learn? I've been reading some documentation about PCA and trying to use scikit-learn to implement it. But I struggle to understand what the attributes returned by sklearn.decomposition.PCA are. From what I read and the name of this attribute, my first guess would be that the attribute .components_ is the matrix of principal components, meaning that if we have a data set X which can be decomposed using SVD as X = USV^T, then I would expect the attribute .components_ to be equal to XV = US. To clarify this I took the first example of the wikipedia page of Singular Value Decomposition (), and tried to implement it to see if I obtain what is expected. But I get something different. To be sure I didn't make a mistake I used scipy.linalg.svd to do the Singular Value Decomposition on my matrix X, and I obtained the result described on wikipedia: X = np.array([[1, 0, 0, 0, 2], [0, 0, 3, 0, 0], [0, 0, 0, 0, 0], [0, 2, 0, 0, 0]]) U, s, Vh = svd(X) print('U = %s'% U) print('Vh = %s'% Vh) print('s = %s'% s) output: U = [[ 0. 1. 0. 0.] [ 1. 0. 0. 0.] [ 0. 0. 0. -1.] [ 0. 0. 1. 0.]] Vh = [[-0. 0. 1. 0. 0. ] [ 0.4472136 0. 0. 0. 0.89442719] [-0. 1. 0. 0. 0. ] [ 0. 0. 0. 1. 0. ] [-0.89442719 0. 0. 0. 0.4472136 ]] s = [ 3. 2.23606798 2. 0. ] But with sk-learn I obtain this: pca = PCA(svd_solver='auto', whiten=True) pca.fit(X) print(pca.components_) print(pca.singular_values_) and the output is [[ -1.47295237e-01 -2.15005028e-01 9.19398392e-01 -0.00000000e+00 -2.94590475e-01] [ 3.31294578e-01 -6.62589156e-01 1.10431526e-01 0.00000000e+00 6.62589156e-01] [ -2.61816759e-01 -7.17459719e-01 -3.77506920e-01 0.00000000e+00 -5.23633519e-01] [ 8.94427191e-01 -2.92048264e-16 -7.93318415e-17 0.00000000e+00 -4.47213595e-01]] [ 2.77516885e+00 2.12132034e+00 1.13949018e+00 1.69395499e-16] which is not equal to SV^T (I spare you the matrix multiplication, since anyway you can see that the singular values are different from the ones obtained above). I tried to see what happened if I set the parameter whiten to False or the parameter svd_solver to 'full', but it doesn't change the result. Do you see a mistake somewhere, or do you have an explanation?
Relationship between SVD and PCA. How to use SVD to perform PCA? Principal component analysis (PCA) is usually explained via an eigen-decomposition of the covariance matrix. However, it can also be performed via singular value decomposition (SVD) of the data matrix $\mathbf X$. How does it work? What is the connection between these two approaches? What is the relationship between SVD and PCA? Or in other words, how to use SVD of the data matrix to perform dimensionality reduction?
Multicollinearity when individual regressions are significant, but VIFs are low I have 6 variables ($x_{1}...x_{6}$) that I am using to predict $y$. When performing my data analysis, I first tried a multiple linear regression. From this, only two variables were significant. However, when I ran a linear regression comparing each variable individually to $y$, all but one were significant ($p$ anywhere from less than 0.01 to less than 0.001). It was suggested that this was due to multicollinearity. My initial research on this suggests checking for multicollinearity by using . I downloaded the appropriate package from R, and ended up with the resulting VIFs: 3.35, 3.59, 2.64, 2.24, and 5.56. According to various sources online, the point you should be worried about multicollinearity with your VIFs is either at 4 or 5. I am now stumped about what this means for my data. Do I or do I not have a multicollinearity problem? If I do, then how should I proceed? (I cannot collect more data, and the variables are parts of a model that aren't obviously related) If I don't have this problem, then what should I be taking from my data, particularly the fact that these variables are highly significant individually, but not significant at all when combined. Edit: Some questions have been asked regarding the dataset, and so I would like to expand... In this particular case, we're looking to understand how specific social cues (gesture, gaze, etc) affect the likelihood of someone producing some other cue. We would like our model to include all significant attributes, so I am uncomfortable removing some that seem redundant. There are not any hypotheses with this right now. Rather, the problem is unstudied, and we are looking to gain a better understanding of what attributes are important. As far as I can tell, these attributes should be relatively independent of one another (you can't just say gaze and gestures are the same, or one the subset of another). It would be nice to be able to report p values for everything, since we would like other researchers to understand what has been looked at. Edit 2: Since it came up somewhere below, my $n$ is 24.
eng_Latn
1,128
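For the pca.components_ question above, the discrepancy appears to come from centering: scikit-learn subtracts the column means before taking the SVD, and components_ holds the rows of $V^T$ of that centered matrix (the axes), not $US$ (the scores). A small check on the asker's matrix; note the parameter is spelled whiten.
```python
import numpy as np
from scipy.linalg import svd
from sklearn.decomposition import PCA

X = np.array([[1, 0, 0, 0, 2],
              [0, 0, 3, 0, 0],
              [0, 0, 0, 0, 0],
              [0, 2, 0, 0, 0]], dtype=float)

Xc = X - X.mean(axis=0)                    # sklearn centers the columns first
U, s, Vh = svd(Xc, full_matrices=False)

pca = PCA().fit(X)
k = 3                                      # compare only the clearly nonzero singular values
print(np.allclose(pca.singular_values_[:k], s[:k]))              # expect True
print(np.allclose(np.abs(pca.components_[:k]), np.abs(Vh[:k])))  # expect True (rows match up to sign)
```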
What is the gradient of $x^T A\, x$ with respect to the matrix $A$? I have seen many times that the gradient of $x^TA\,x$ with respect to $x$ is $2A\,x$. But how do you find its gradient with respect to $A$?
Gradient of $a^T X b$ with respect to $X$ How can I find the gradient of the term $a^TXb$ where $X$ is a $n \times m$ matrix, and $a$ and $b$ are column vectors. Since the gradient is with respect to a matrix, it should be a matrix. But I do not have a clue on how to derive this gradient. Any help ?
How does Natural Selection shape Genetic Variation? Background Importance of the additive genetic variance As stated , the fundamental theorem of Natural Selection (NS) by Fisher says: The rate of increase in the mean fitness of any organism at any time ascribable to NS acting through changes in gene frequencies is exactly equal to its genetic variance in fitness at that time. NS reduces the additive genetic variance On the other hand, NS reduces additive genetic variance (discussion about the origin of this knowledge can be found ). The genetic variance of a population for multiple traits is best described by the G-matrix ( is a post on the subject). What is a G-matrix A G-matrix is a matrix where additive genetic variance of trait $i$ can be found at position $m_{ii}$. In other words, the diagonal contains the additive genetic variance for all traits. The other positions $m_{ij}$, where $i≠j$ contains the additive genetic covariance between the traits $i$ and $j$. Question How can one model how the G-matrix changes over time because of selection (assuming no mutation)?
eng_Latn
1,129
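For the two gradient questions above, a componentwise derivation (with the gradient shaped like the matrix it is taken with respect to) settles both: $$\frac{\partial}{\partial A_{ij}}\, x^\top A x = \frac{\partial}{\partial A_{ij}} \sum_{k,l} x_k A_{kl} x_l = x_i x_j \;\Longrightarrow\; \nabla_A\, x^\top A x = x x^\top, \qquad \frac{\partial}{\partial X_{ij}}\, a^\top X b = a_i b_j \;\Longrightarrow\; \nabla_X\, a^\top X b = a b^\top.$$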
Framework for Genetic Algorithms in Python I'm trying to use a framework implemented in Python for GA (genetic algorithms) and other related algorithms. But I'm not sure which framework to use. I've found two interesting options, Pymoo and JmetalPy: the former seems well documented, while the latter doesn't look like it (it's under construction), although the Java version is very popular and well known. I need to choose a framework to make a comparison with an algorithm of my own, so a reliable framework with a small learning curve is needed. Do you have any good or bad experiences using them? Do you know of another framework? Pymoo: JmetalPy:
Python metaheuristic packages I need to use a metaheuristic algorithm to solve an optimization problem on a Python codebase. Metaheuristics usually need to be written in C++ or Java as they involve a lot of iterations, while Python is weak from this point of view. Questions: do any Python metaheuristic packages which wrap faster languages as C++/Java exist? do any Python metaheuristic packages based on maybe cython on numba exist? other solutions?
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and questions (such as , , , and ). The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you can solve it using gradient descent if needed. However, then my mind was blown when I read that the optimization problem is solved by using linear algebra and finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: How can PCA turn from a geometric optimization problem to a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer like that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why eigenvectors come out to be the principal components and why the eigenvalues come out to be variance of the data projected onto them I am a software engineer and not a mathematician, by the way. Note: the figure above was taken and modified from .
eng_Latn
1,130
Maximum likelihood estimator compared to least squares The maximum likelihood estimator is often compared to the least squares method. I'm struggling to see how these things are related at all. I would like to understand what the maximum likelihood estimator means in practice. Can you adapt the maximum likelihood estimator to the example below and use the example to explain it? Let's say we are trying to predict house prices (target) from the size of the house (as the only feature). We are using Linear Regression, so we are trying to learn (optimize) parameters B0 and B1 in predicted price = B0 + B1 * size. If we are using Ordinary Least Squares, we want to minimize the sum of squared differences between prediction and real target. For example, if we predict target 1000 for some house and the actual target is 900, the difference is 100. The squared difference is 100^2 = 10000. So this house would increase the sum by 10000. We want to choose B0 and B1 in a way that minimizes the sum of these squares. Simple. Now substitute Maximum Likelihood Estimator in place of Ordinary Least Squares. Wikipedia tells us that it's "finding the parameter values that maximize the likelihood of making the observations given the parameters". How does this relate to the example with parameters B0, B1 and a difference of 100 between predicted target and real target?
Equivalence between least squares and MLE in Gaussian model I am new to Machine Learning, and am trying to learn it on my own. Recently I was reading through and had a basic question. Slide 13 says that "Least Square Estimate is same as Maximum Likelihood Estimate under a Gaussian model". It seems like it is something simple, but I am unable to see this. Can someone please explain what is going on here? I am interested in seeing the Math. I will later try to see the probabilistic viewpoint of Ridge and Lasso regression also, so if there are any suggestions that will help me, that will be much appreciated also.
Representative point of a cluster with L1 distance The representative point of a cluster or cluster center for the k-means algorithm is the component-wise mean of the points in its cluster. The mean is chosen because it helps to minimize the within cluster variances (which is to say that it is minimizing within cluster squared Euclidean distance, since its the same). Now, when I want to use a different distance metric, like L1 (Manhattan distance), I would be minimizing the within cluster absolute deviations (if I choose to). I was looking at what the centers calculation would be like, because the mean won't reduce the new objective function, and I found this on the Matlab documentation of their kmeans function for the distance parameter cityblock - Sum of absolute differences, i.e., the L1 distance. Each centroid is the component-wise median of the points in that cluster. Now, how did they come up with the component-wise median? How does a median minimize absolute deviations?
eng_Latn
1,131
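For the house-price MLE-versus-least-squares example above, the link is direct once Gaussian noise is assumed (an assumption, not something the asker stated): with $y_i = B_0 + B_1 x_i + \varepsilon_i$ and $\varepsilon_i \sim N(0,\sigma^2)$ i.i.d., the log-likelihood is $$\ell(B_0,B_1,\sigma^2) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\big(y_i - B_0 - B_1 x_i\big)^2,$$ so for any fixed $\sigma^2$, choosing $B_0, B_1$ to maximize the likelihood is exactly choosing them to minimize the sum of squared differences; under this noise model the MLE and the OLS estimates coincide.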
Is standardization required by PCA? I have two questions related to PCA: Is a higher weight assigned to a variable that does not necessarily have higher variation but is more representative of the variation of the other variables (i.e. does the PCA aim to catch the common variation)? Since the input of PCA is just the correlation matrix, which is not affected by the scale, is standardization required by PCA? Below is a toy dataset I've created to help explain my doubts. Please correct me if there is anything incorrect. Thanks so much! set.seed(99) a = sample(1:1000,300,replace=TRUE) b = sample(1:1000,300) #standardize a and b a_t = (a-mean(a))/sd(a) b_t = (b-mean(b))/sd(b) sd(a) sd(b) #the variation of the two variables is different x=as.matrix(cbind(a,b)) xcor = cor(x) xcor x_t=as.matrix(cbind(a_t,b_t)) xcor_t = cor(x_t) xcor_t #the correlation matrix is not affected by the standardization # Eigen decomposition out = eigen(xcor) va = out$values ve = out$vectors ve #the two variables are assigned the same weight (absolute value) # Eigen decomposition out_t = eigen(xcor_t) va_t = out_t$values ve_t = out_t$vectors ve_t #the standardized variables share the same eigenvectors as the unstandardized variables
PCA on correlation or covariance? What are the main differences between performing principal component analysis (PCA) on the correlation matrix and on the covariance matrix? Do they give the same results?
Theory behind partial least squares regression Can anyone recommend a good exposition of the theory behind partial least squares regression (available online) for someone who understands SVD and PCA? I have looked at many sources online and have not found anything that had the right combination of rigor and accessibility. I have looked into The Elements of Statistical Learning, which was suggested in a comment on a question asked on Cross Validated, , but I don't think that this reference does the topic justice (it's too brief to do so, and doesn't provide much theory on the subject). From what I've read, PLS exploits linear combinations of the predictor variables, $z_i=X \varphi_i$ that maximize the covariance $ y^Tz_i $ subject to the constraints $\|\varphi_i\|=1$ and $z_i^Tz_j=0$ if $i \neq j$, where the $\varphi_i$ are chosen iteratively, in the order in which they maximize the covariance. But even after all I've read, I'm still uncertain whether that is true, and if so, how the method is executed.
eng_Latn
1,132
Do we always assume a cross-entropy cost function for a logistic regression solution unless stated otherwise? I am using MATLAB's glmfit for logistic regression. Now, I know that usually people use cross entropy to evaluate the error in predictions against the true labels (which is different from linear regression, where they use Mean Square Error (MSE) or Sum of Squares (SS)). My questions: 1) Is it true that whenever I use a built-in logistic regression function (such as glmfit in MATLAB) I should assume it uses cross entropy as the cost function? I read through the documentation of glmfit here but nothing is mentioned about that. 2) And in general, if someone handed me a logistic regression solution (or I read a paper that says they implemented a logistic regression), is it fair to assume that they used cross entropy as the cost function unless stated otherwise? Note that my question is not mathematical; rather, it is about the implementation in MATLAB and the norm in this field.
Which loss function is correct for logistic regression? I read about two versions of the loss function for logistic regression, which of them is correct and why? From (in Chinese), with $\beta = (w, b)\text{ and }\beta^Tx=w^Tx +b$: $$l(\beta) = \sum\limits_{i=1}^{m}\Big(-y_i\beta^Tx_i+\ln(1+e^{\beta^Tx_i})\Big) \tag 1$$ From my college course, with $z_i = y_if(x_i)=y_i(w^Tx_i + b)$: $$L(z_i)=\log(1+e^{-z_i}) \tag 2$$ I know that the first one is an accumulation of all samples and the second one is for a single sample, but I am more curious about the difference in the form of two loss functions. Somehow I have a feeling that they are equivalent.
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and questions (such as , , , and ). The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you can solve it using gradient descent if needed. However, then my mind was blown when I read that the optimization problem is solved by using linear algebra and finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: How can PCA turn from a geometric optimization problem to a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer like that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why eigenvectors come out to be the principal components and why the eigenvalues come out to be variance of the data projected onto them I am a software engineer and not a mathematician, by the way. Note: the figure above was taken and modified from .
eng_Latn
1,133
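A minimal sketch of the quantity under discussion in the glmfit question above: the cross entropy is just the average negative Bernoulli log-likelihood, so any logistic regression fitted by maximum likelihood (which, as far as I know, is what glmfit's binomial/logit fit does via IRLS) is implicitly minimizing it, up to a constant factor.
```python
import numpy as np

def cross_entropy(y, p, eps=1e-12):
    """Average negative Bernoulli log-likelihood of 0/1 labels y under predicted probabilities p."""
    p = np.clip(p, eps, 1 - eps)  # guard against log(0)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
```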
Discovering interactions and transformations I am teaching myself regression using Regression Modeling Strategies by Harrell, and the author goes to quite some length to showcase the importance of modeling interactions and transformations of the initial variables. I can't help but wonder how to approach this in a more structured/automated way when dealing with a lot of potential variables. Can we use recursive partitioning, for example, to somehow do the work for us and then use the output as variables, shrink the estimates with LASSO to deal with collinearity, and do a final step where we use some sort of filtering for feature importance? In my mind this will leave us with a well-specified model which can be manually inspected and improved if need be, but is this reasonable? Are there other ways to approach this? Are there some resources that deal with problems like this?
Discovering transformations and interactions I am teaching myself regression using Regression Modeling Strategies by Harrell, and the author goes to quite some length to showcase the importance of modeling interactions and transformations of the initial variables. I can't help but wonder how to approach this in a more structured/automated way when dealing with a lot of potential variables. Can we use recursive partitioning, for example, to somehow do the work for us and then use the output as variables, shrink the estimates with LASSO to deal with collinearity, and do a final step where we use some sort of filtering for feature importance? In my mind this will leave us with a well-specified model which can be manually inspected and improved if need be, but is this reasonable? Are there other ways to approach this? Are there some resources that deal with problems like this?
How to reverse PCA and reconstruct original variables from several principal components? Principal component analysis (PCA) can be used for dimensionality reduction. After such dimensionality reduction is performed, how can one approximately reconstruct the original variables/features from a small number of principal components? Alternatively, how can one remove or discard several principal components from the data? In other words, how to reverse PCA? Given that PCA is closely related to singular value decomposition (SVD), the same question can be asked as follows: how to reverse SVD?
eng_Latn
1,134
78L05 questions What are the capacitors on the input and output of a 78L05 for? I saw a lot of different schematics with a lot of different input and output capacitor values. So now I am really confused, and I'm just wondering what their purpose is :D This is the one I made at first, but as I got deeper into it, I just got more confused by a bunch of answers :( (Sorry for the upside-down schematic :P) C1, C4 = 100uF C2, C3 = 0.1uF Help please :(
What's the purpose of two capacitors in parallel? What's the purpose of the two capacitors in parallel on each side of the regulator in I've seen similar setups in other similar circuits and can guess that it's related to one being polarized an one not, but I don't really understand what's going on there.
Principal component analysis "backwards": how much variance of the data is explained by a given linear combination of the variables? I have carried out a principal components analysis of six variables $A$, $B$, $C$, $D$, $E$ and $F$. If I understand correctly, unrotated PC1 tells me what linear combination of these variables describes/explains the most variance in the data and PC2 tells me what linear combination of these variables describes the next most variance in the data and so on. I'm just curious -- is there any way of doing this "backwards"? Let's say I choose some linear combination of these variables -- e.g. $A+2B+5C$, could I work out how much variance in the data this describes?
eng_Latn
1,135
How do we know from representation theory that a massless spin-1 particle has only two polarizations? In chapter 8.2.3 of Schwartz' textbook "Quantum Field Theory and the Standard Model", the author states the following, Finally, we expect from representation theory that there should only be two polarizations for a massless spin-1 particle, so the spin-0 and the longitudinal mode should somehow decouple from the physical system. How does "representation theory" tells us about the polarizations of a massless spin-1 particle? Could someone please explain this statement?
Counting degrees of freedom of gauge bosons Gauge bosons are represented by $A_{\mu}$, where $\mu = 0,1,2,3$. So in general there are 4 degrees of freedom. But in reality, a photon (gauge boson) has two degrees of freedom (two polarization states). So, when someone asks about on-shell and off-shell degrees of freedom, I thought they are 2 and 4. But I read that the off-shell d.of. are 3. And my question is how to see this?
How can top principal components retain the predictive power on a dependent variable (or even lead to better predictions)? Suppose I am running a regression $Y \sim X$. Why by selecting top $k$ principle components of $X$, does the model retain its predictive power on $Y$? I understand that from dimensionality-reduction/feature-selection point of view, if $v_1, v_2, ... v_k$ are the eigenvectors of covariance matrix of $X$ with top $k$ eigenvalues, then $Xv_1, Xv_2 ... Xv_k$ are top $k$ principal components with maximum variances. We can thereby reduce the number of features to $k$ and retain most of the predictive power, as I understand it. But why do top $k$ components retain the predictive power on $Y$? If we talk about a general OLS $Y \sim Z$, there is no reason to suggest that if feature $Z_i$ has maximum variance, then $Z_i$ has the most predictive power on $Y$. Update after seeing comments: I guess I have seen tons of examples of using PCA for dimensionality reduction. I have been assuming that means the dimensions we are left with have the most predictive power. Otherwise what's the point of dimensionality reduction?
eng_Latn
1,136
Regarding the ellipsoid computed by using the “lifting step” I have asked a question at Mathematics Stack Exchange but I have not gotten an answer yet, so I am referencing the question here so that maybe one of you can help in understanding the problem: Please do help, and thanks in advance. (And sorry for the inconvenience.)
Regarding the ellipsoid computed by using the "lifting step" I need your expertise in understanding and hopefully solving the following problem: Let $C \subset \mathbb{R}^d$, and let's assume that $C$ is not centrally symmetric; thus, when computing the $MVEE(C)$ (i.e. the minimum volume enclosing ellipsoid), they use the following "lifting step": $$ C^\prime = \left\lbrace \pm \begin{bmatrix} x\\ 1 \end{bmatrix} \middle| x \in C \right\rbrace,$$ where $C^\prime$ is centrally symmetric. It is proven that for any $\varepsilon > 0$, $MVEE(C^\prime) \cap \Pi$ is a $(1 + \varepsilon)$-approximation to $MVEE(C) \times \left\lbrace 1 \right\rbrace$, where $\Pi = \left\lbrace x \in \mathbb{R}^{d+1} \middle| x_{d+1} = 1 \right\rbrace$. Let $G \in \mathbb{R}^{d \times d}$ be an invertible matrix, and let $v \in \mathbb{R}^d$ be such that the pair $(G,v)$ represents the matrix and the center of $MVEE(C)$ respectively. Let $\hat{G} \in \mathbb{R}^{(d + 1) \times (d + 1)}$ be an invertible matrix such that the pair $(\hat{G},\overset{\to}{0})$ represents the matrix and center of $MVEE(C^\prime)$ respectively. Then, $\forall x \in MVEE(C)$ $$ \left| \left| \left[ x , 1 \right] \hat{G} \begin{bmatrix} x\\1 \end{bmatrix} \right| \right|_2 = (1 + \varepsilon) \cdot \left| \left| (x - v) G (x - v)^T \right| \right|_2 .$$ For $x \not\in MVEE(C)$, it is easy to see that the above equality does not hold, and moreover the right-hand term is greater than or equal to the left-hand term. What I need to know is whether it is possible to bound the right term by the left term (maybe because $MVEE(C) \times \left\lbrace 1 \right\rbrace \subseteq MVEE(C^\prime)$), i.e. $$ poly(d) \cdot \left| \left| \left[ x , 1 \right] \hat{G} \begin{bmatrix} x\\1 \end{bmatrix} \right| \right|_2 \geq (1 + \varepsilon) \cdot \left| \left| (x - v) G (x - v)^T \right| \right|_2 \ ?$$ Please do correct me if I am wrong, and thanks in advance.
How do comments work? Across the Stack Exchange network you may leave comments on a question or answer. How do comments work? What are comments for, and when shouldn't I comment? Who can post comments? Who can edit comments? How can I format and link in comments? My comment doesn't contain some of the text I typed in it; why? My comment appears to be long enough, but I can't post it because it's too short; why? Who can delete comments? When should comments be deleted? Why can't I comment on specific posts even if I have enough reputation to comment? When can comments be undeleted? What are automatic comments? How can I link to comments? Anything else I should know about comments? Also see the FAQ articles on and .
eng_Latn
1,137
Do I need to perform variable selection before running a ridge regression? I am currently constructing a model that uses last year's departmental information to predict employee churn for the current year. I have 55 features and 318 departments in my data set. A good portion of my independent variables are correlated, and because of this, I believe that performing a ridge regression on my data will lead to optimal predictions when I bring the model into production. I have studied ridge regression and understand that the lambda coefficient computed for a given predictor can minimize the effect that predictor has on the model to next to nothing. Does this mean that performing a ridge regression means I don't have to bother with variable selection? If I do need to perform a variable selection technique, would implementing a stepwise regression and then using those selected variables in my ridge regression be a valid approach at variable selection?
Before running a ridge regression model, do I need to perform variable selection? I am currently constructing a model that uses last year's departmental information to predict employee churn for the current year. I have 55 features and 318 departments in my data set. A good portion of my independent variables are correlated, and because of this, I believe that performing a ridge regression on my data will lead to optimal predictions when I bring the model into production. I have studied ridge regression and understand that the lambda coefficient computed for a given predictor can minimize the effect that predictor has on the model to next to nothing. Does this mean that performing a ridge regression means I don't have to bother with variable selection? If I do need to perform a variable selection technique, would implementing a stepwise regression and then using those selected variables in my ridge regression be a valid approach to variable selection? I already posted this question on stack exchange but was informed that stack exchange was the better platform to ask statistical questions. I am sorry for the confusion.
Including Interaction Terms in Random Forest Suppose we have a response Y and predictors X1,....,Xn. If we were to try to fit Y via a linear model of X1,....,Xn, and it just so happened that the true relationship between Y and X1,...,Xn wasn't linear, we might be able to fix the model by transforming the X's somehow and then fitting the model. Moreover, if it just so happened that the X1,...,XN didn't affect y independent of the other features, we also might be able to improve the model by including interaction terms, x1*x3 or x1*x4*x7 or something of the like. So in the linear case, interaction terms might bring value by fixing non-linearity or independence violations between the response and the features. However, Random Forests don't really make these assumptions. Is including interaction terms important when fitting a Random Forest? Or will just including the individual terms and choosing appropriate parameters allow Random Forests to capture these relationships?
eng_Latn
1,138
How do you test multi-step time series forecasting? Suppose you have n observations of a time series dataset. You split it into n-k (train data) and k (test data) observations. You train a model using the train data, and you can now make predictions. From what I understand, multi-step forecasting means you want to predict the next q>1 observations. So you predict the next q observations with your model and use a metric to compare them with their respective values from the test data. Now, for the next step, will you need to train a new model which also includes the first q values from the test data, and then repeat the same process?
Using k-fold cross-validation for time-series model selection Question: I want to be sure of something, is the use of k-fold cross-validation with time series is straightforward, or does one need to pay special attention before using it? Background: I'm modeling a time series of 6 year (with semi-markov chain), with a data sample every 5 min. To compare several models, I'm using a 6-fold cross-validation by separating the data in 6 year, so my training sets (to calculate the parameters) have a length of 5 years, and the test sets have a length of 1 year. I'm not taking into account the time order, so my different sets are : fold 1 : training [1 2 3 4 5], test [6] fold 2 : training [1 2 3 4 6], test [5] fold 3 : training [1 2 3 5 6], test [4] fold 4 : training [1 2 4 5 6], test [3] fold 5 : training [1 3 4 5 6], test [2] fold 6 : training [2 3 4 5 6], test [1]. I'm making the hypothesis that each year are independent from each other. How can I verify that? Is there any reference showing the applicability of k-fold cross-validation with time series.
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and questions (such as , , , and ). The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you can solve it using gradient descent if needed. However, then my mind was blown when I read that the optimization problem is solved by using linear algebra and finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: How can PCA turn from a geometric optimization problem to a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer like that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why eigenvectors come out to be the principal components and why the eigenvalues come out to be variance of the data projected onto them I am a software engineer and not a mathematician, by the way. Note: the figure above was taken and modified from .
eng_Latn
1,139
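A minimal sketch of the retraining loop described in the multi-step forecasting question above (often called rolling-origin or walk-forward evaluation). The fit, forecast, and metric callables are placeholders, not part of any particular library.
```python
import numpy as np

def walk_forward(y, initial_train, horizon, fit, forecast, metric):
    """fit(train) -> model; forecast(model, horizon) -> array of length horizon."""
    scores = []
    origin = initial_train
    while origin + horizon <= len(y):
        model = fit(y[:origin])                        # refit on everything seen so far
        preds = forecast(model, horizon)               # predict the next q points
        scores.append(metric(y[origin:origin + horizon], preds))
        origin += horizon                              # move the origin past the scored block
    return np.array(scores)
```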
Is a $1\times 1 $ matrix a scalar? Intuitively I used to think that a $1\times 1$ matrix is simply a scalar number, I also saw this statement in books. However when I think about it now it doesn't make sense to me because of one problem. Let $I$ be the $2\times 2 $ identity matrix, now $\lambda I$ is well defined if $\lambda $ is a scalar or if $\lambda $ is a $1 \times 2$ matrix. However if we want to make the statement that scalars and $1 \times 1$ matrices are the same then the $\lambda I$ would have to be defined for a matrix $\lambda$ which is not $1 \times 2$. More generally, let $M \in \Bbb R^{n \times m}$, now linear algebra says that if $N$ is another matrix, the product $NM$ only exists if $N \in \Bbb R^{m \times n}$. However if $n$ and $m$ are not zero then $NM$ is not defined for a $1\times 1$ matrix $N$, while $NM$ is defined if $N$ is simply a scalar... So in conclusion we can simply state that $1\times 1$ matrices and scalars are not the same thing right? Is there any interesting theory behind this?
Is a one-by-one matrix just a number (scalar)? I was wondering. Clearly, we cannot multiply a (1x1)-matrix with a (4x3)-matrix; however, we can multiply a scalar with a matrix. This suggests a difference. On the other hand, I was, for example, in an econometrics lecture today, where for a (Tx1)-vector $\underline{û}=\left( \begin{array}{c} û_1\\ \vdots\\ û_T\end{array}\right)$ the quantity $S_{ûû}:= \sum_{i=1}^T û_i^2$ shall be minimized. We see that $S_{ûû}=\underline{û}^T\underline{û}$. Well, formally, shouldn't it be $(S_{ûû})=\underline{û}^T\underline{û}$ or $S_{ûû}=\det(\underline{û}^T\underline{û})$, to ensure that we stay in the space of matrices and do not suddenly go to the space of scalars? So here, the professor (a physicist) not only treats $\underline{û}^T\underline{û}$ like a scalar, but also calls it a scalar. Is this formally legitimate or a wrong simplification (though it does not seem to have any impact, and surely makes life easier)?
$\sum_i x_i^n = 0$ for all $n$ implies $x_i = 0$ Here is a statement that seems prima facie obvious, but when I try to prove it, I am lost. Let $x_1 , x_2, \dots, x_k$ be complex numbers satisfying: $$x_1 + x_2+ \dots + x_k = 0$$ $$x_1^2 + x_2^2+ \dots + x_k^2 = 0$$ $$x_1^3 + x_2^3+ \dots + x_k^3 = 0$$ $$\dots$$ Then $x_1 = x_2 = \dots = x_k = 0$. The statement seems obvious because we have more than $k$ constraints (constraints that are in some sense, "independent") on $k$ variables, so they should determine the variables uniquely. But my attempts so far of formalizing this intuition have failed. So, how do you prove this statement? Is there a generalization of my intuition?
eng_Latn
1,140
When is the Bayes estimator the same as the maximum likelihood estimator? Suppose we have a random variable $X$, which is normally distributed as $N(\theta,1)$, and we estimate $\theta$ using the squared error loss function $L(\theta, d) = (\theta - d)^2$. To find the Bayes estimator, we find the $d$ which minimizes the expected loss, and we obtain that the Bayes estimator is the posterior mean. Isn't this the same procedure we would use to find the maximum likelihood estimator of $\theta$? Wouldn't we also aim to minimize the expected loss in that case? I struggle to see what the difference is between the two concepts, at least in this context.
What is the difference in Bayesian estimate and maximum likelihood estimate? Please explain to me the difference in Bayesian estimate and Maximum likelihood estimate?
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and questions (such as , , , and ). The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you can solve it using gradient descent if needed. However, then my mind was blown when I read that the optimization problem is solved by using linear algebra and finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: How can PCA turn from a geometric optimization problem to a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer like that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why eigenvectors come out to be the principal components and why the eigenvalues come out to be variance of the data projected onto them I am a software engineer and not a mathematician, by the way. Note: the figure above was taken and modified from .
eng_Latn
1,141
Does variable separable method give the complete set of solutions to a PDE? Several of the common partial differential equations encountered in physics are mentioned as easy to solve using the variable separable method. However I do not understand how does one guarantee that the solutions generated using this method is the complete set of solutions. I am finding it quite hard to believe that such a method can produce the entire set of solutions. Can anyone explain if it indeed does generate the complete set of solutions to a PDE. If not, a counterexample would work. Also, I would like to know if we use this because it is consistent and easy to work with?
Does separation of variables in PDEs give a general solution? When a partial differential equation is solved using the separation of variables method, is the produced solution the most general one that satisfies the equation or have we lost some forms of the solution because of the assumption that it is in the form of separated variables?
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and questions (such as , , , and ). The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you can solve it using gradient descent if needed. However, then my mind was blown when I read that the optimization problem is solved by using linear algebra and finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: How can PCA turn from a geometric optimization problem to a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer like that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why eigenvectors come out to be the principal components and why the eigenvalues come out to be variance of the data projected onto them I am a software engineer and not a mathematician, by the way. Note: the figure above was taken and modified from .
eng_Latn
1,142
What would be the word for deriving from experience or past performance? Empirical means deriving from experiment and observation rather than theory. What would be the word for deriving from experience or past performance?
What's a word for knowing something from experience? I'm trying to figure out a word that describes subconsciously knowing something from experience. My initial attempt was "instinct", but that has more gifted and primal connotations. The use case I had in mind was as follows: "If we assume that the model is applicable, it tells us what we've ________ known for a while [...]" Update: The word that fits best for my use case seems to be intuitively. Another good one was empirically, but it tends to imply there were some experiments carried out to glean the knowledge, which wasn't the case.
Bottom to top explanation of the Mahalanobis distance? I'm studying pattern recognition and statistics and almost every book I open on the subject I bump into the concept of Mahalanobis distance. The books give sort of intuitive explanations, but still not good enough ones for me to actually really understand what is going on. If someone would ask me "What is the Mahalanobis distance?" I could only answer: "It's this nice thing, which measures distance of some kind" :) The definitions usually also contain eigenvectors and eigenvalues, which I have a little trouble connecting to the Mahalanobis distance. I understand the definition of eigenvectors and eigenvalues, but how are they related to the Mahalanobis distance? Does it have something to do with changing the base in Linear Algebra etc.? I have also read these former questions on the subject: I have also read . The answers are good and pictures nice, but still I don't really get it...I have an idea but it's still in the dark. Can someone give a "How would you explain it to your grandma"-explanation so that I could finally wrap this up and never again wonder what the heck is a Mahalanobis distance? :) Where does it come from, what, why? UPDATE: Here is something which helps understanding the Mahalanobis formula:
eng_Latn
1,143
How does the solution for polynomial regression depend on the number of data points and the error distribution? Let the set of instances be generated by the function , where $\varepsilon$ is random, uniformly distributed noise. Also, assume that you are using a fifth-degree polynomial for the regression.
derivative of cost function for Logistic Regression I am going over the lectures on Machine Learning at Coursera. I am struggling with the following. How can the partial derivative of $$J(\theta)=-\frac{1}{m}\sum_{i=1}^{m}y^{i}\log(h_\theta(x^{i}))+(1-y^{i})\log(1-h_\theta(x^{i}))$$ where $h_{\theta}(x)$ is defined as follows $$h_{\theta}(x)=g(\theta^{T}x)$$ $$g(z)=\frac{1}{1+e^{-z}}$$ be $$ \frac{\partial}{\partial\theta_{j}}J(\theta) =\sum_{i=1}^{m}(h_\theta(x^{i})-y^i)x_j^i$$ In other words, how would we go about calculating the partial derivative with respect to $\theta$ of the cost function (the logs are natural logarithms): $$J(\theta)=-\frac{1}{m}\sum_{i=1}^{m}y^{i}\log(h_\theta(x^{i}))+(1-y^{i})\log(1-h_\theta(x^{i}))$$
Before running a ridge regression model, do I need to perform variable selection? I am currently constructing a model that uses last year's departmental information to predict employee churn for the current year. I have 55 features and 318 departments in my data set. A good portion of my independent variables are correlated, and because of this, I believe that performing a ridge regression on my data will lead to optimal predictions when I bring the model into production. I have studied ridge regression and understand that the lambda coefficient computed for a given predictor can minimize the effect that predictor has on the model to next to nothing. Does this mean that performing a ridge regression means I don't have to bother with variable selection? If I do need to perform a variable selection technique, would implementing a stepwise regression and then using those selected variables in my ridge regression be a valid approach to variable selection? I already posted this question on stack exchange but was informed that stack exchange was the better platform to ask statistical questions. I am sorry for the confusion.
eng_Latn
1,144
Sampling from multivariate normal using a pseudo-inverse Say I have an improper multivariate normal distribution with singular precision matrix $Q$. I can compute the pseudo-inverse of $Q$ and use this as a covariance matrix, i.e. $\Sigma = Q^{+}$. I seem to be able to sample from this distribution using the standard approach of transforming a sample from a standard normal using some decomposition of $\Sigma$. But what am I actually sampling from here? Since this is improper, there must be some effective constraint on the kernel of $Q$. What is that constraint? Is it a consequence of taking the pseudo-inverse, or is it just the usual limit of what my computer can actually sample from as an approximation of something improper? More generally, how can I think about using a pseudo-inverse parameterisation beyond just thinking to myself, "the inverse doesn't exist so I use the pseudo-inverse instead"? This is basically what I do when I use a pseudo-inverse.
Generating samples from a singular Gaussian distribution Let the random vector $x = (x_1,...,x_n)$ follow a multivariate normal distribution with mean $m$ and covariance matrix $S$. If $S$ is symmetric and positive definite (which is the usual case), then one can generate random samples of $x$ by first sampling $r_1,...,r_n$ independently from a standard normal and then using the formula $m + Lr$, where $L$ is the lower Cholesky factor so that $S=LL^T$ and $r = (r_1,...,r_n)^T$. What if one wants samples from a singular Gaussian, i.e. $S$ is still symmetric but no longer positive definite (only positive semi-definite)? We can also assume that the variances (diagonal elements of $S$) are strictly positive. Then some elements of $x$ must have a linear relationship, and the distribution actually lies in a lower-dimensional space with dimension $<n$, right? It is obvious that if e.g. $n=2, m = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, S = \begin{bmatrix} 1 & 1 \\ 1 & 1\end{bmatrix}$ then one can generate $x_1 \sim N(0,1)$ and set $x_2=x_1$ since they are fully correlated. However, are there any good methods for generating samples in the general case $n>2$? I guess one needs first to identify the lower-dimensional subspace, then move to that space where one will have a valid covariance matrix, then sample from it, and finally deduce the values of the linearly dependent variables from this lower-dimensional sample. But what is the best way to do that in practice? Can someone point me to books or articles that deal with the topic? I could not find one.
How do I appropriately examine the dimensionality of binary data? I have 72 binary variables and, at a theoretical level, I am trying to identify groups of variables that seem to vary together. In practice, I am struggling with how to analyze this data properly. I am using R and the psych package. Here is what I have tried and what the outcomes were: Estimated a tetrachoric correlation matrix (ran without problem) Estimated MSA (relatively strong values, ran without problem) Ran parallel analysis to identify how many factors to extract (ran without problem) EFA using fa() to estimate factors (non-positive definite matrix and strangely high RMSEA) Tried different factor counts, smoothing/unsmoothing, estimation methods, rotations methods, and removing variables (problems persisted) Tried to focus instead on eBIC (always returned as NULL in output) EFA using (inappropriately) Pearson correlations (RMSEA behaved) From looking at other questions and answers on here, I am now aware that tetrachoric matrices can be problematic like this. However, I don't know what to do with that information. Accept the problems as a limitation and move forward? Conduct a different type of analysis? I considered IRT as is recommended, but am unaware of an exploratory way to conduct IRT analyses. I understand that most of my options are imperfect. My goal is to find an analysis that is at least arguably appropriate for binary variables and that will inform the ways in which I group these variables in my theorizing. Given this, my questions are: Can I trust EFA on a tetrachoric matrix despite problematic output? Can I somehow recover the lost eBIC value and use that? Is exploratory IRT possible/desirable? Are there other analyses, in R, that can be used to examine the dimensionality of binary data? I appreciate all feedback! I love learning about data analysis and hearing about how I can improve.
eng_Latn
1,145
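A minimal sketch of what such sampling actually does, via the eigendecomposition route discussed in the two sampling questions above: directions in the null space of $Q$ (equivalently, zero-variance directions of $\Sigma = Q^{+}$) receive no noise at all, so every draw lies in the affine subspace $m + \operatorname{range}(\Sigma)$; that is the implicit constraint.
```python
import numpy as np

def sample_degenerate_gaussian(mean, cov, size, tol=1e-10, seed=0):
    # eigendecompose the symmetric PSD covariance; keep only nonzero-variance directions
    w, V = np.linalg.eigh(cov)
    keep = w > tol
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((size, int(keep.sum())))
    # x = mean + V_k diag(sqrt(lambda_k)) z recovers cov up to the discarded near-zero eigenvalues
    return mean + (z * np.sqrt(w[keep])) @ V[:, keep].T
```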
Can I use random forest for feature selection and then use Poisson regression for model fitting? Variables that are important in a random forest don't necessarily have any sort of relationship with the outcome. So would it be wise to use a random forest to gather the most important features and then plug those features into a Poisson regression model? If not, what other approach can be used for feature selection? (The independent variables are categorical and the dependent variable is a count.)
Can a random forest be used for feature selection in multiple linear regression? Since RF can handle non-linearity but can't provide coefficients, would it be wise to use random forest to gather the most important features and then plug those features into a multiple linear regression model in order to obtain their coefficients?
What's the relationship between initial eigenvalues and sums of squared loadings in factor analysis? On the one hand I read in a comment that: You can't speak of "eigenvalues" after rotation, even orthogonal rotation. Perhaps you mean sum of squared loadings for a principal component, after rotation. When rotation is oblique, this sum of squares tells nothing about the amount of variance explained, because components aren't orthogonal anymore. So, you shouldn't report any percentage of variance explained. On the other hand, I sometimes read in books people saying things like: The eigenvalues associated with each factor represent the variance explained by that particular factor; SPSS also displays the eigenvalue in terms of the percentage of variance explained (so factor 1 explains 31.696% of total variance). The first few factors explain relatively large amounts of variance (especially factor 1), whereas subsequent factors explain only small amounts of variance. SPSS then extracts all factors with eigenvalues greater than 1, which leaves us with four factors. The eigenvalues associated with these factors are again displayed (and the percentage of variance explained) in the columns labelled Extraction Sums of Squared Loadings. That text is from Field (2013) Discovering statistics using IBM SPSS, and this diagram accompanies it. I'm wondering Who is correct about whether it's possible to speak of eigenvalues after rotation? Would it matter if it was an oblique or orthogonal rotation? Why are the "initial eigenvalues" different from the "extraction sums of squared loadings"? Which is a better measure of total variation explained by the factors (or principal components or whatever method is used)? Should I say that the first four factors explain 50.317% of variation, or 40.477%?
eng_Latn
1,146
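A minimal sketch of the two-stage pipeline asked about in the random-forest-then-Poisson question above, with hypothetical column names and an arbitrary top-k cutoff; note that p-values from the second stage do not account for the selection step, so they should be read as descriptive only.
```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor

def rf_then_poisson(X: pd.DataFrame, y, top_k=10):
    # X is assumed to be already one-hot encoded; y is a count outcome
    rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
    ranked = X.columns[np.argsort(rf.feature_importances_)[::-1]]
    selected = list(ranked[:top_k])
    glm = sm.GLM(y, sm.add_constant(X[selected]), family=sm.families.Poisson()).fit()
    return selected, glm
```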
When is an orthogonal projection compact? Let $M$ be a closed subspace of a Hilbert space $H$, and let $P$ be the orthogonal projection onto $M$. I was told to find the eigenvalues and eigenvectors of $P$ and, moreover, to say when it is compact. Since $M$ is closed we have $H\cong M\oplus M^\perp$, and this is an eigenspace decomposition, so $P$ has eigenvalues $1,0$ with eigenvectors the elements of $M,M^\perp$ respectively. Is there any formal argument I should emphasize in the infinite-dimensional case? If $M$ is finite dimensional, $P$ has finite rank and is therefore compact. I don't see anything beyond that...
Showing that the orthogonal projection in a Hilbert space is compact iff the subspace is finite dimensional Suppose that we have a Hilbert Space $H$ and $M$ is a closed subspace of $H$. Let $T\colon H\rightarrow M$ be the orthogonal projection onto $M$. I have to show that $T$ is compact iff $M$ is finite dimensional. So if we assume that $M$ is finite dimensional then $\overline{T(B(0,1))}$ is a closed bounded set in a finite dim vector normed space and so it is compact. Which gives that $T$ is compact. But I am unsure how to prove that if $T$ is compact then $M$ is finite dimensional? Thanks for any help
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and questions (such as , , , and ). The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you can solve it using gradient descent if needed. However, then my mind was blown when I read that the optimization problem is solved by using linear algebra and finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: How can PCA turn from a geometric optimization problem to a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer like that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why eigenvectors come out to be the principal components and why the eigenvalues come out to be variance of the data projected onto them I am a software engineer and not a mathematician, by the way. Note: the figure above was taken and modified from .
eng_Latn
1,147
The first principal component line minimizes the sum of the squared perpendicular distances between each point and the line I am currently studying An Introduction to Statistical Learning, corrected 7th printing, by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani. Chapter 6.3.1 Principal Components Regression says the following: For instance, in Figure 6.14, the first principal component line minimizes the sum of the squared perpendicular distances between each point and the line. What is specifically meant by "the first principal component line minimizes the sum of the squared perpendicular distances between each point and the line"? These are just words, so the concept is not clear to me; what is the mathematics that represents this? I would greatly appreciate it if people would please take the time to clarify this.
Making sense of principal component analysis, eigenvectors & eigenvalues In today's pattern recognition class my professor talked about PCA, eigenvectors and eigenvalues. I understood the mathematics of it. If I'm asked to find eigenvalues etc. I'll do it correctly like a machine. But I didn't understand it. I didn't get the purpose of it. I didn't get the feel of it. I strongly believe in the following quote: You do not really understand something unless you can explain it to your grandmother. -- Albert Einstein Well, I can't explain these concepts to a layman or grandma. Why PCA, eigenvectors & eigenvalues? What was the need for these concepts? How would you explain these to a layman?
How should I interpret this residual plot? I am unable to interpret this graph. My dependent variable is the total number of movie tickets that will be sold for a show. The independent variables are the number of days left before the show, seasonality dummy variables (day of week, month of year, holiday), price, tickets sold to date, movie rating, and movie type (thriller, comedy, etc., as dummies). Also, please note that the movie hall's capacity is fixed; that is, it can host a maximum of x people only. I am creating a linear regression solution and it's not fitting my test data, so I thought of starting with regression diagnostics. The data are from a single movie hall for which I want to predict demand. This is a multivariate dataset. For every date, there are 90 duplicate rows, representing days before the show. So, for 1 Jan 2016 there are 90 records. There is a 'lead_time' variable which gives me the number of days before the show. So for 1 Jan 2016, if lead_time has value 5, it means the row has the tickets sold up until 5 days before the show date. In the dependent variable, total tickets sold, I will have the same value 90 times. Also, as a side remark, is there any book that explains how to interpret residual plots and improve the model afterwards?
eng_Latn
1,148
Minimal polynomial of a matrix whose elements have a certain form Find the minimal polynomial of the $n$-dimensional matrix $(a_{ij})$ when the matrix elements $a_{ij}$ have the form $a_{ij} = u_i v_j.$ Let $A=uv^T$ where $u,v$ are column vectors. Then rank$(A)\leq$rank$(u)\leq1.$ So kernal$(A)\geq n-1.$ That is, the geometric multiplicity $\geq n-1.$ According to Jordan decomposition theorem, the number of Jordan blocks w.r.t. $0\ \geq n-1.$ Therefore, the algebraic multiplicity of $0$ $\geq n-1.$ Suppose rank$(A)=1,$ how do I find the other eigenvalue?
Determining possible minimal polynomials for a rank one linear operator I have come across a question about determining possible minimal polynomials for a rank one linear operator and I am wondering if I am using the correct proof method. I think that the facts needed to solve this problem come from the section on Nilpotent operators from Hoffman and Kunze's "Linear Algebra". Question: Let $V$ be a vector space of dimension $n$ over the field $F$ and consider a linear operator $T$ on $V$ such that $\mathrm{rank}(T) = 1$. List all possible minimal polynomials for $T$. Sketch of Proof: If $T$ is nilpotent then the minimal polynomial of $T$ is $x^k$ for some $k\leq n$. So suppose $T$ is not nilpotent, then we can argue that $T$ is diagonalizable based on the fact that $T$ must have one nonzero eigenvalue otherwise it would be nilpotent (I am leaving details of the proof of diagonalization but it is the observation that the characteristic space of the nonzero eigenvalue is the range of T and has dimension $1$). Thus the minimal polynomial of $T$ is just a linear term $x-c$. Did I make a mistake assuming that $T$ can have only one nonzero eigenvalue? Thanks for your help
Local polynomial regression: Why does the variance increase monotonically in the degree? How can I show that the variance of local polynomial regression is increasing with the degree of the polynomial (Exercise 6.3 in Elements of Statistical Learning, second edition)? This question has been asked but the answer just states it follows easliy. More precisely, we consider $y_{i}=f(x_{i})+\epsilon_{i}$ with $\epsilon_{i}$ being independent with standard deviation $\sigma.$ The estimator is given by $$ \hat{f}(x_{0})=\left(\begin{array}{ccccc} 1 & x_{0} & x_{0}^{2} & \dots & x_{0}^{d}\end{array}\right)\left(\begin{array}{c} \alpha\\ \beta_{1}\\ \vdots\\ \beta_{d} \end{array}\right) $$ for $\alpha,\beta_{1},\dots,\beta_{d}$ solving the following weighted least squares problem $$ \min\left(y_{d}-\underbrace{\left(\begin{array}{ccccc} 1 & x_{1} & x_{1}^{2} & \dots & x_{1}^{d}\\ \vdots\\ 1 & & & & x_{n}^{d} \end{array}\right)}_{X}\left(\begin{array}{c} \alpha\\ \beta_{1}\\ \vdots\\ \beta_{d} \end{array}\right)\right)^{t}W\left(y-\left(\begin{array}{ccccc} 1 & x_{1} & x_{1}^{2} & \dots & x_{1}^{d}\\ \vdots\\ 1 & & & & x_{n}^{d} \end{array}\right)\left(\begin{array}{c} \alpha\\ \beta_{1}\\ \vdots\\ \beta_{d} \end{array}\right)\right) $$ for $W=\text{diag}\left(K(x_{0},x_{i})\right)_{i=1\dots n}$ with $K$ being the regression kernel. The solution to the weighted least squares problem can be written as $$ \left(\begin{array}{cccc} \alpha & \beta_{1} & \dots & \beta_{d}\end{array}\right)=\left(X^{t}WX\right)^{-1}X^{t}WY. $$ Thus, for $l(x_{0})=\left(\begin{array}{ccccc} 1 & x_{0} & x_{0}^{2} & \dots & x_{0}^{d}\end{array}\right)\left(X^{t}WX\right)^{-1}X^{t}W$ we obtain $$ \hat{f}(x_{0})=l(x_{0})Y $$ implying that $$ \text{Var }\hat{f}(x_{0})=\sigma^{2}\left\Vert l(x_{0})\right\Vert ^{2}=\left(\begin{array}{ccccc} 1 & x_{0} & x_{0}^{2} & \dots & x_{0}^{d}\end{array}\right)\left(X^{t}WX\right)^{-1}X^{t}W^{2}X\left(X^{t}WX\right)^{-1}\left(\begin{array}{ccccc} 1 & x_{0} & x_{0}^{2} & \dots & x_{0}^{d}\end{array}\right)^{t}. $$ My approach: An induction using the formula for the inverse of a block matrix but I did not succeed. The paper by D. Ruppert and M. P. Wand derives an asymptotic expression for the variance for $n\rightarrow\infty$ in Theorem 4.1 but it is not clear that is increasing in the degree.
eng_Latn
1,149
Find Least‐Squares Solutions Using Linear Algebra
In data analysis, it is often a goal to find correlations for observed data, called trendlines. However, real life observations almost always yield inconsistent solutions to the matrix equation $X\boldsymbol{\beta} = \mathbf{y}$, where $\mathbf{y}$ is called the observation vector, $X$ is called the $m \times n$ design matrix, and we are looking for values of $\boldsymbol{\beta}$, the parameter vector.
There are many factors that have to be addressed before this question can be properly answered. Is the machine a vertical or horizontal machine? How many axes will be required?
eng_Latn
1,150
Matrix of regression coefficients
Algorithm for minimization of sum of squares in regression packages
Regression without intercept: deriving $\hat{\beta}_1$ in least squares (no matrices)
eng_Latn
1,151
How can logistic regression maximize AUC?
Does a logistic regression maximizing likelihood necessarily also maximize AUC over linear models?
Does a logistic regression maximizing likelihood necessarily also maximize AUC over linear models?
eng_Latn
1,152
Solution for inverse square potential in $d=3$ spatial dimensions in quantum mechanics Can a particle in an inverse square potential $$V(r)=-1/r^{2}$$ in $d=3$ spatial dimensions be solved exactly? Also please explain me the physical significance of this potential in comparison with Coulomb potential? That problem was talking about positive repulsive potential and what I am looking for is an attractive potential.
Radial Schrodinger equation with inverse power law potential Recently I read a about solving radial Schrodinger equation with inverse power law potential. Consider the radial Schrodinger equation(simply set $\mu=\hbar=1$): $$\left(-\frac{1}{2}\Delta+V(\mathbf{r})\right)\psi(\mathbf{r})=E\psi(\mathbf{r}).$$ A gives a one-dimension equation: $$-\frac{1}{2}D^2\phi(r)+\left(V(r)+\frac{1}{2}\frac{l(l+1)}{r^2}\right)\phi(r)=E\phi(r),$$ where $D=\dfrac{d}{dr}$, and $l$ is the azimuthal quantum number. If we only consider the ground state, then $l=0$, so $$-\frac{1}{2}D^2 \phi(r)+V(r)\phi(r)=E\phi(r).$$ We want to find eigenvalue $E$ such that $\phi(0)=\phi(+\infty)=0$. The central potential discussed in the paper is of this form: $$V(r)=\alpha r^{-\beta}.$$ It states (see page 4) that if $\beta>2$ then the potential is repulsive (i.e. $\alpha>0$). My questions are: Is this conclusion (i.e. If $\beta>2$ then we must have $\alpha>0$) generally valid in physics? What would happen if $\alpha<0$ and $\beta>2$? Is there 'ground state' in this condition? What about the condition $\alpha>0$ and $\beta=1,2$? I have tried to solve the equation numerically with $\alpha=1,\beta=1,2$, and the ground state energy in this two conditions seem to be $0$, is my result correct? P.S. When I try to find ground state when $\alpha=-1,\beta=2$, the energy seem to be $-\infty$, which is qualitatively different from $\alpha=-1,\beta=1$.
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and questions (such as , , , and ). The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you can solve it using gradient descent if needed. However, then my mind was blown when I read that the optimization problem is solved by using linear algebra and finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: How can PCA turn from a geometric optimization problem to a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer like that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why eigenvectors come out to be the principal components and why the eigenvalues come out to be variance of the data projected onto them I am a software engineer and not a mathematician, by the way. Note: the figure above was taken and modified from .
eng_Latn
1,153
Single particle passing through the double slit at a time
Slit screen and wave-particle duality
Why bother with the dual problem when fitting SVM?
eng_Latn
1,154
Proving that the induced L1 and L-infinity matrix norms are duals of one another
Show that $(l_1)^* \cong l_{\infty}$
Why L1 norm for sparse models
eng_Latn
1,155
Reference for this claim: important features in data can be "hidden" in the higher PCA axes that are typically thrown out
Examples of PCA where PCs with low variance are "useful"
Using principal component analysis (PCA) for feature selection
eng_Latn
1,156
Finding the associated matrix of a linear operator Let $V$ be a complex vector space of dimension $n$ with a scalar product, and let $u$ be an unitary vector in $V$. Let $H_u: V \to V$ be defined as $$H_u(v) = v - 2 \langle v,u \rangle u$$ for all $v \in V$. I need to find the minimal polynomial and the characteristic polynomial of this linear operator, but the only way I know to find the charactestic polynomial is using the associated matrix of the operator. I don't know how to find this matrix because I don't know how to deal with the scalar product. Is there some other way to find the characteristic polynomial? If not, how can I find the associated matrix of this linear operator? Thanks in advance.
What's the associated matrix of this linear operator? Let $V$ be a complex vector space of dimension $n$ with a scalar product, and let $u$ be an unitary vector in $V$. Let $H_u: V \to V$ be defined as $$H_u(v) = v - 2 \langle v,u \rangle u$$ for all $v \in V$. I need to find the characteristic polynomial of this linear operator, but the only way to find it that I know of is using the associated matrix of the operator. I don't know how to find this matrix because I don't know how to deal with the scalar product. Is there some other way to find the characteristic polynomial? If not, how can I find the associated matrix of this linear operator? Thanks in advance.
What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? I've read a lot about PCA, including various tutorials and questions (such as , , , and ). The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you can solve it using gradient descent if needed. However, then my mind was blown when I read that the optimization problem is solved by using linear algebra and finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: How can PCA turn from a geometric optimization problem to a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer like that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why eigenvectors come out to be the principal components and why the eigenvalues come out to be variance of the data projected onto them I am a software engineer and not a mathematician, by the way. Note: the figure above was taken and modified from .
eng_Latn
1,157
With data.table, is SD[which.max(Var1)] the fastest way to find the max of a group?
finding the index of a max value in R
Why is.vector on a data-frame doesn't return TRUE?
eng_Latn
1,158
How to get statistics of all samples through statistics of subsamples?
Combining two covariance matrices
Subsample of a random sample: random sample?
eng_Latn
1,159
How to get statistics of all samples through statistics of subsamples?
Combining two covariance matrices
Uploading an image from the web can leave paste broken in editor
eng_Latn
1,160
Gradient of a matrix expression
How to take the gradient of the quadratic form?
Gradient Boosting for Linear Regression - why does it not work?
eng_Latn
1,161
Decomposition rate of intact and injured piglet cadavers
Using Accumulated Degree-Days to Estimate the Postmortem Interval from Decomposed Human Remains
Scaling Factorization Machines with Parameter Server
eng_Latn
1,162
R - is it ok to treat autocorrelation by randomly changing the order of variable I'm working on a multivariate linear regression. I performed a Durbin-Watson test to test for auto-correlation and found it in my model (p-value <0.05 of Durbin Watson test). I then randomly changed the order of the row of the dataframe with the following R-command: df=df[sample(1:nrow(df))]. I re-ran the auto-correlation test and found p-value of around 0.5. It sounds too easy to be true but did I remove the auto-correlation? If not: 1. why? and 2. how should I proceed? Thank you very much in advance!
durbinWatsonTest() in R for non time-series data I am trying to understand whether the Durbin-Watson test is meaningful at all when applied to regression data that have no temporal order (eg. blood pressure ~ bodyMassIndex + exercise). I would say no, as the autocorrelation should obviously vary with the order of the subjects (which is random), but I wonder whether the durbinWatsonTest in R involves a bootstrapping of the data, where order of the bootstrapping data is shuffled each time, and then perhaps an average autocorrelation is computed over the bootstrapping samples (but then again, shouldn't this average autocorrelation be zero for randomly sampled data?). On the other hand, the value of the statistics seems to be quite stable over multiple runs of the function, so I am even more confused... I came across it in Andy Field's book "Discovering Statistics with R", where it is used is an example with non-time series data, although in another, more recent book of the same author ("Adventures in Statistics") it does say that the test is applicable only to time series data.
How to compute PCA scores from eigendecomposition of the covariance matrix? Given a data matrix $\mathbf X$ of $12 \times 7$ size with samples in rows and variables in columns, I have calculated centered data $\mathbf X_c$ by subtracting column means, and then computed covariance matrix as $\frac{1}{N-1} \mathbf X_c^\top \mathbf X_c$. The dimensions of this covariance matrix are $7 \times 7$. After that I have calculated the eigenvalue decomposition, obtaining eigenvector matrix $\mathbf V$ with eigenvectors in columns. Now I want to know about projections (i.e. principal component scores in PCA): is it $\mathbf{V}^\top \mathbf{X}_c$ or $\mathbf{X}_c \mathbf{V}$? mean_matrix = X - repmat(mean(X),size(X,1),1); covariance = (transpose(mean_matrix) * mean_matrix)/(12-1); [V,D] = eig(covariance); [e,i] = sort(diag(D), 'descend'); V_sorted = V(:,i);
eng_Latn
1,163
The graphical intuiton of the LASSO in case p=2
Graphical interpretation of LASSO
Prove that the expression cannot be a power of 2
eng_Latn
1,164
I am trying to learn filters. In the following circuit, the author says that negative and positive feedback are used. I couldn't understand what the circuit parts do. Actually I got the negative feedback highpass parts but the low pass part is a little different from normal filters because C2 doesn't bind with ground.
I have rigged up a notch filter from . See below: - It's the first circuit. Now I am trying to calculate its transfer function. I have calculated two equations, but I need the third one: (Vin-V2)/(1/SC1) = V2/R3 and (Vin-V1)/R1 = (V1-Vo)/(R2+(1/SC2)). V1 and V2 are the potentials at the inputs of the op-Amp.
I am wondering why matrix factorization techniques in the machine learning domain almost always expect the provided matrix to be non-negative. What is the advantage of this constraint? Background: I want to use matrix factorization algorithms for a sparse user-item matrix containing positive and negative implicit feedback. Is there any another possibility to set interactions with negative indications apart from fields that denote that no interaction happened between the user and the item?
eng_Latn
1,165
Non static method cannot be referenced from a static context Binary Search
What is the reason behind "non-static method cannot be referenced from a static context"?
Positive semidefinite cone is generated by all rank-$1$ matrices.
eng_Latn
1,166
COFI RANK Maximum Margin Matrix Factorization for Collaborative Ranking
Large Margin Methods for Structured and Interdependent Output Variables
Circulate shifted OFDM chirp waveform diversity design with digital beamforming for MIMO SAR
eng_Latn
1,167
Given the Linear system $$Ax = b$$ where $A$ is an $s$-sparse ($s$ is the maximum number of non-zero entries in $A$), $k$-conditioned ($k$ is the ratio between the highest and the smallest eigenvalue) matrix of size $N$, how can I express the time complexity of CG method based on those three parameters? I have found out different questions in Stack exchange (,), but none of them considers all three parameters.
I have been trying to figure out the time complexity of the conjugate gradient method. I have to solve a system of linear equations given by $$Ax=b$$ where $A$ is a sparse, symmetric, positive definite matrix. What would be the time complexity of the conjugate gradient method?
So, I know that $\max(-X_{(1)},X_{(n)})$ is a sufficient statistic for the parameter $\theta$. But can I also say that $(X_{(1)},X_{(n)})$ are jointly sufficient for the parameter $\theta$ ? In other words, can a single parameter have jointly sufficient statistics?
eng_Latn
1,168
According to QFT books, we measure the pole mass (physical mass) in experiments. From the Lagrangian point of view, the renormalized mass is a parameter (in MS bar or some similar renormalization scheme that has an explicit renormalization scale). We fix the renormalized parameters (renormalized mass, renormalized couplings and ...) at a specific energy scale by experiments. But the problem is: how do we measure the renormalized mass? Because in any experiment we measure the pole mass, which is scale independent. For the other renormalized parameters, I have no problem.
I am reading Schwarz QFT and I reached the mass renormalization part. So he introduces, after renormalization, a physical mass, defined as the pole of the renormalized propagator, and a renormalized mass which can be the physical mass, but can also take other values, depending on the subtraction schemes used. Are these masses, other than the physical one, observable in any way experimentally, or are they just ways to do the math easier (using minimal subtraction instead of on-shell scheme, for example)? Also, in the case of charge renormalization, the explanation was that due to the vacuum polarization, the closer you are to the charge, the more you see of it, so the charge of the particle increases with the momentum you are testing the particle with. However, I am not sure I understand, from a physical point of view, why do you need to renormalize the mass. Is this physical mass (defined as the pole of the renormalized propagator) the same no matter how close you get, or it also changes with energy? And if it changes, what is shielding it at big distances?
To prevent overfitting, people add a regularization term (proportional to the squared sum of the parameters of the model) with a regularization parameter $\lambda$ to the cost function of linear regression. Is this parameter $\lambda$ the same as a Lagrange multiplier? So is regularization the same as the method of Lagrange multipliers? Or how are these methods connected?
eng_Latn
1,169
Prove spectral radius of a primitive matrix is 1
Proof that the largest eigenvalue of a stochastic matrix is $1$
Rigid body simulation not interacting with objects that have an array modifier
eng_Latn
1,170
MODELING AND SIMULATION OF MICRO-HYDRO POWER PLANTS FOR APPLICATIONS IN DISTRIBUTED GENERATION
Basic Modeling and Simulation Tool for Analysis of Hydraulic Transients in Hydroelectric Power Plants
Efficient Structured Multifrontal Factorization for General Large Sparse Matrices
yue_Hant
1,171
Meaning of "$\exp[ \cdot ]$" in mathematical equations I am reading book "Fuzzy Logic With Engineering Applications, Wiley" written by Timothy J. Ross. I am reading chapter 7 and in this chapter, "Batch Least Squares Algoritm" has been defined. It illustrates the development of a nonlinear fuzzy model for the data in Table 7.1 using the Batch Least Squares algorithm. At the page 218 there is a mathematical equation: I have two questions: 1- As you can see, there is a "exp" phrase (in the red rectangle). What is this? Is it the exponential function? () At the link, "" it was stated at the link that it is an exponential function, but I noticed that ordinary paranthesis has been used. In my equation, square brackets is used. 2- What is the purpose of the equation? Thanks in advance.
What is the meaning of $\exp(\,\cdot\,)$? What is the meaning of the notation $\exp(\text{expression})$ ? I think that it's something of the form $a^\text{expression}$ but does it mean that the base $a=e$ or can it be any base?
Interpretation of LASSO regression coefficients I'm currently working on building a predictive model for a binary outcome on a dataset with ~300 variables and 800 observations. I've read much on this site about the problems associated with stepwise regression and why not to use it. I've been reading into LASSO regression and its ability for feature selection and have been successful in implementing it with the use of the "caret" package and "glmnet". I am able to extract the coefficient of the model with the optimal lambda and alpha from "caret"; however, I'm unfamiliar with how to interpret the coefficients. Are the LASSO coefficients interpreted in the same method as logistic regression? Would it be appropriate to use the features selected from LASSO in logistic regression? EDIT Interpretation of the coefficients, as in the exponentiated coefficients from the LASSO regression as the log odds for a 1 unit change in the coefficient while holding all other coefficients constant.
eng_Latn
1,172
Does L1 regularization (Lasso) always leads to feature reduction?
Why does the Lasso provide Variable Selection?
When will L1 regularization work better than L2 and vice versa?
eng_Latn
1,173
Time Complexity line by line
Time complexity of nested for-loop
Collinearity diagnostics problematic only when the interaction term is included
eng_Latn
1,174
Maximum size square sub-matrix with all 1s
Puzzle: Find largest rectangle (maximal rectangle problem)
Subsetting matrices
eng_Latn
1,175
PCA scores in my own implementation have different sign from the ones computed in R
Is the sign of principal components meaningless?
$\mathbb R$ has the same cardinality of any interval
eng_Latn
1,176
Gradient of an $n$-variate quadratic form
How to take the gradient of the quadratic form?
Gradient Boosting for Linear Regression - why does it not work?
eng_Latn
1,177
Principal component analysis- covariance or correlation matrix
PCA on correlation or covariance?
Why is the covariant derivative of the metric tensor zero?
eng_Latn
1,178
Accepted answer not ranked first
Accepted answer appearing below top voted
Positive semidefinite cone is generated by all rank-$1$ matrices.
eng_Latn
1,179
How to transform symmetric matrix to diagonal?
$\mathbf L\mathbf D\mathbf L^\top$ Cholesky decomposition
Eigenvectors of real symmetric matrices are orthogonal
eng_Latn
1,180
How to compute F-statistics for each features of regression models in glmnet?
Lasso and statistical signficance of selected variables
Latexmk can't see a dependency on a .fmt format file
eng_Latn
1,181
Why is L2 regression good for handling multicollinearity?
Why is ridge regression called "ridge", why is it needed, and what happens when $\lambda$ goes to infinity?
L1 regression estimates median whereas L2 regression estimates mean?
eng_Latn
1,182
Solving System of equations and L-U decomposition of matrices
When does a Square Matrix have an LU Decomposition?
When does a Square Matrix have an LU Decomposition?
eng_Latn
1,183
Eigenvectors computed by Matlab's princomp() and eig() have different signs
Does the sign of scores or of loadings in PCA or FA have a meaning? May I reverse the sign?
Every principal ideal domain satisfies ACCP.
eng_Latn
1,184
How does $\vec{\beta}=(H^TH)^{-1}H^T\vec{y}$ equivalent to least squares criteria for evaluating splines?
What algorithm is used in linear regression?
"strlen(s1) - strlen(s2)" is never less than zero
eng_Latn
1,185
Extract matrix from a system of linear equations
How to read off coefficients of tensor-like expression in a speedy way?
Regression without intercept: deriving $\hat{\beta}_1$ in least squares (no matrices)
eng_Latn
1,186
Problems with SEM: Non-positive definite matrix
"matrix is not positive definite" - even when highly correlated variables are removed
Positive semidefinite cone is generated by all rank-$1$ matrices.
eng_Latn
1,187
When using principal components as predictors in linear regression, PC1 comes out not significant
How can top principal components retain the predictive power on a dependent variable (or even lead to better predictions)?
Every principal ideal domain satisfies ACCP.
eng_Latn
1,188
Why do prcomp() and eigen(cov()) in R return different signs of PCA eigenvectors?
PCA: Eigenvectors of opposite sign and not being able to compute eigenvectors with `solve` in R
Every principal ideal domain satisfies ACCP.
eng_Latn
1,189
Derivation of linear discriminant analysis (LDA) decision boundary
Linear Discriminant Analysis for $p=1$
Why with two classes, LDA gives only one dimension?
eng_Latn
1,190
Why avoid stepwise regression?
Algorithms for automatic model selection
Why does the degree of freedom of SSR equals 1 in simple linear regression
eng_Latn
1,191
Why is there only one discriminant axis when doing Linear Discriminant Analysis on two classes?
Why with two classes, LDA gives only one dimension?
Linear Discriminant Analysis for $p=1$
eng_Latn
1,192
Variability of K SVD components
Relationship between SVD and PCA. How to use SVD to perform PCA?
Recovering eigenvectors from SVD
eng_Latn
1,193
Why maximum likelihood estimation is same with minimizing cross-entropy?
the relationship between maximizing the likelihood and minimizing the cross-entropy
I'm not able to get access to $wpdb
eng_Latn
1,194
what is lanczos method
1 Introduction. The Lanczos method [15] is widely used for finding a small number of extreme eigenvalues and their associated eigenvectors of a symmetric matrix (or Hermitian matrix in the complex case). (6.9) This bound, too, could be much bigger than the one of (5.1) because of one or more $\zeta_j$ with $k = i \le j \le \ell$ are much bigger than $\zeta_i$. 7 Numerical examples. In this section, we shall numerically test the effectiveness of our upper bounds on the convergence of the block Lanczos method in the case of a cluster.
Here is how to make small images larger in Gimp without losing quality. Open the image you want to resize in Gimp. Simply go to Image » Scale Image. Enter your desired dimensions. Under the Quality section choose Sinc (Lanczos3) as Interpolation method and click on the Scale Image button.
eng_Latn
1,195
what is the decomposition
Decomposition is the natural process of dead animal or plant tissue being rotted or broken down. This process is carried out by invertebrates, fungi and bacteria. The result of decomposition is that the building blocks required for life can be recycled. Left: The body of a dead rabbit after several weeks of decomposition. Most of the flesh has been eaten by beetles, beetle larvae, fly maggots, carnivorous slugs and bacteria. The outline of the skeleton is starting to appear. All living organisms on earth will eventually die.
Linear Algebra: How can a Cholesky decomposition of (I+S) be efficiently found given that of S=R'R? What is the intuition for a minor of a matrix? What is the role of the Cholesky decomposition in finding multivariate normal PDF?
eng_Latn
1,196
Cropping sf object in R?
Crop simple features object in R
Matrix doesn't shrink when put in fraction.
eng_Latn
1,197
Why the "Sum" function becomes extremely slow at a specific size of matrix? How to AVOID it?
Sudden increase in timing when summing over 250 entries
Sudden increase in timing when summing over 250 entries
eng_Latn
1,198
Subsetting while reading in R
how to read huge csv file into R by row condition?
Subsetting matrices
eng_Latn
1,199