Computation of 10th power of a matrix
The eigenvectors for this matrix come out to be $$\left( \begin{array}{c} -1\\1\\0\end{array} \right),\left( \begin{array}{c} -1\\0\\1\end{array} \right), \left( \begin{array}{c} -1\\-2\\1\end{array} \right)$$ To diagonalize the matrix, if $A = SDS^{-1}$, then $$S = \left( \begin{array}{c c c} -1&-1&-1\\1&0&-2\\0&1&1\end{array} \right), \quad D = \left( \begin{array}{ccc} 2&0&0\\0&2&0\\0&0&4\end{array} \right)$$ Then $A^{10} = SD^{10}S^{-1}$, and finding the 10th power of a diagonal matrix is trivial. Can you take it from here?
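As a quick numerical sanity check (the matrix $A$ from the question is not reproduced here, so it is reconstructed as $SDS^{-1}$; this is only an illustration, not part of the answer):

```python
import numpy as np

# Reconstruct A from the S and D given above, then compare S D^10 S^{-1}
# with a direct matrix power.
S = np.array([[-1, -1, -1],
              [ 1,  0, -2],
              [ 0,  1,  1]], dtype=float)
D = np.diag([2.0, 2.0, 4.0])
A = S @ D @ np.linalg.inv(S)

A10 = S @ D**10 @ np.linalg.inv(S)          # elementwise power works since D is diagonal
print(np.allclose(A10, np.linalg.matrix_power(A, 10)))   # True
```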
Connection between complementarity problem and optimization problem?
Complementarity problems are simply optimization problems with a special kind of constraint: essentially orthogonality constraints between two non-negative vectors, $x\geq 0, y\geq 0, x^Ty = 0$. These arise, for example, in purely geometric applications (orthogonality constraints), or in situations where you want to encode either-or conditions ($x_i$ is zero or $y_i$ is zero, such as when you want to encode optimality conditions where either the dual variable is zero, or the slack is zero, or both).
How to find the radius of convergence of $\sum^{\infty}_{n=1}\frac{n!}{n^n}z^n$?
By the ratio test, $\lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right|=\lim_{n\to\infty} \frac{n^n}{(n+1)^n}=\lim_{n\to\infty}\left(\frac{n}{1+n}\right)^n=\lim_{n\to\infty}\left(\frac{n+1-1}{n+1}\right)^n=\lim_{n\to\infty}\left(1-\frac{1}{n+1}\right)^n=\frac1e$, where $a_n=\frac{n!}{n^n}$; hence the radius of convergence is the reciprocal of this limit, namely $e$.
Median of points on a circle
To "define the median $m$ as the point such that there are an equal number of points clockwise a distance no greater than $1/2$ versus points counter-clockwise a distance no greater than $1/2$", you'd first have to show that there is only one such point. This will generally not be the case. This is not just, as for the "normal" median, because of degenerate situations of which there is only a set of measure zero; there can be several such points even if the points are in general position. To see this, imagine an odd number $n$ of points equally spaced (i.e. they form a regular $n$-gon), and then move each point by some arbitrary small amount less than a quarter of the distance between points. Then it will still be the case that each of the points has as many points clockwise as counter-clockwise at a distance no greater than $1/2$. [Edit in response to the edit in the question:] You don't have to check each point individually (which would be a double loop); a single loop suffices. Cyclically sort the points according to their angles (taking $O(N\log N)$ time), start at some point, and move along the circle clockwise with a second point until you reach a point that's counter-clockwise from the first point. Until you find a median point, move the first and second point in lockstep, alternatingly advancing the second point until it's counter-clockwise from the first and advancing the first until the second is clockwise from it again. Since the two halves of the circle switch sides during the process, there must be some point along the way where they contain the same number of points, and that's the "median" you're looking for.
Is there any other special numbers?
The conjecture is that there are infinitely many positive integers $n$ such that all three numbers $n-1,n,n+1$ are products of two distinct primes. Thus such numbers might not be so special after all. A slightly more general notion is that of a semiprime, which is a natural number that is the product of two (not necessarily distinct) prime numbers. For the corresponding conjecture see this question.
Showing $\mu_f$ is a measure on the $\sigma-$algebra $\mathcal{B}$ of Borel subsets of $\mathbb{R}.$
Lots of unnecessary jargon in your writing. If $B$ is a Borel set in $\mathbb R$ then $f^{-1}(B) \in \mathcal M$, so $\mu_f$ is well defined. All you have to do is verify that $\mu_f(\emptyset)=0$ and $\mu_f(\bigcup B_n)=\sum \mu_f(B_n)$ for any disjoint sequence of Borel sets $(B_n)$. For this, just observe that $(f^{-1}(B_n))$ is a disjoint sequence in $\mathcal M$ and use the fact that $\mu$ is a measure.
If $S$ is homeomorphic to a torus $\Rightarrow$ $S$ has a differentiable vector field without singular points
The torus is $\mathbb{T}^2 = \mathbb{S}^1\times \mathbb{S}^1$. Let $X\in \mathcal{T}(\mathbb{S}^1)$ be a never vanishing vector field (it is very simple to construct one using the embedding $\mathbb{S}^1\subset \mathbb{C}$). Then $X\times X\in \mathcal{T}(\mathbb{S}^1\times \mathbb{S}^1)$ is a non-vanishing vector field on the torus. If you have another smooth surface $S$ such that there is a diffeomorphism $\psi:\mathbb{T}^2\to S$, then pushing forward the non-vanishing vector field you obtain $\psi_*(X\times X)$, which is a never vanishing vector field on $S$. If $S$ is a smooth closed 2-manifold that is homeomorphic to $\mathbb{T}^2$, then $\chi(S)=\chi(\mathbb{T}^2)$ (since the Euler characteristic is a topological invariant); therefore, by the classification theorem for smooth surfaces, $S$ is also diffeomorphic to the torus, and the previous argument applies. The Poincaré-Hopf theorem (and thus the Gauss-Bonnet theorem) a priori gives a necessary condition for the existence of a never vanishing vector field ($\chi(M^2) = 0$), which a posteriori, thanks to the classification theorem of surfaces, is also sufficient. But you first have to show that a surface with Euler characteristic $0$ has a never vanishing vector field.
Monty Hall/Bayes' Theorem conflict?
The calculation of the probability of the car being behind door C under the new rules is correct, and there is no contradiction: the rule that Monty has to open door B if the car is behind door A causes door B to have a higher unconditional probability of being opened than door C. From the contestant's perspective, this means that if Monty opens door B under the new rule, there is a higher posterior probability that the car is behind door A than under the old rule, in which the equal likelihood of opening door B or C when the car is behind door A is not informative about whether the car is behind A.
Øksendal's SDEs Exercise 2.17
I arrived at the following solution using @saz's suggestion. Write $S=\sum_{t_k\leq t} (\Delta B_k)^2$. Since $E((\Delta B_k)^2)=\Delta t_k$ and $\sum_{t_k\leq t}\Delta t_k$ telescopes to $t$, we have $E(S)=t$, and since the increments are independent, $$E\big[(S-t)^2\big]=\operatorname{Var}(S)=\sum_{t_k\leq t}\operatorname{Var}\big((\Delta B_k)^2\big).$$ We have that $$\operatorname{Var}((\Delta B_k)^2)=E((\Delta B_k)^4)-E((\Delta B_k)^2)^2=E((\Delta B_k)^4)-(\Delta t_k)^2$$ And using the fact that $(\Delta B_k)^2/\Delta t_k$ has a $\chi^2$ distribution with 1 degree of freedom (whose variance is $2$), we get $\operatorname{Var}((\Delta B_k)^2)=2(\Delta t_k)^2$, equivalently $E((\Delta B_k)^4)=3(\Delta t_k)^2$. So now we have $$E\Big[\Big(\sum_{t_k\leq t} (\Delta B_k)^2 -t\Big)^2\Big]=2\sum_{t_k\leq t} (\Delta t_k)^2.$$ (The factor $2$ comes from the variance of the $\chi^2_1$ distribution.)
Kinks in the eigenvalue spectrum of short range lattices
The eigenvectors of such a circulant matrix are $(v_j)_m=\exp(2\pi \mathrm i jm/N)$. The corresponding eigenvalue spectra are the linear combinations of exponentials corresponding to a row of the matrix. For your $A_k$, these combinations are $$\lambda_{kl}=\sum_{j=1}^k \cos(2\pi jl/N)\;.$$ Here are plots of the spectra for $A_2$ and for $A_3$. If you reorder them in descending order, as it seems that you have, then once your sort hits a maximum, you suddenly have three branches instead of just one, and the eigenvalues are also more dense around the maxima; that accounts for the kinks. The second, "inverse" kink for $A_3$ is caused by the minimum in the spectrum.
Does the prime ideal $(p)$ of $\mathbb{Z}[\sqrt{-5}]$ split completely in the extension of $\mathbb{Q}(\sqrt{-5}, i)/\mathbb{Q}(\sqrt{-5})$?
Yes this is true, and you already have most of the ideas. For example, when $p\equiv 11,19\pmod{20}$, since $\mathbb Q(\sqrt {-5},i)$ is the compositum of $\mathbb Q(\sqrt5)$ and $\mathbb Q(\sqrt{-5})$ and $p$ splits in $\mathbb Q(\sqrt 5)$, $p$ must split in $\mathbb Q(\sqrt {-5},i)/\mathbb Q$. But $p$ is inert in $\mathbb Q(\sqrt{-5})$, so the splitting must take place in the extension $\mathbb Q(\sqrt {-5},i)/\mathbb Q(\sqrt {-5})$.
Is there any obvious way to enforce a minimum amount of "positive definiteness" on a matrix?
I'm responding first to your background comment, but it will lead to an approach to your original question. A quasi-Newton method minimizes a smooth function $f:\mathbb R^n \to \mathbb R$ using the iteration $$ \tag{1} x_{k+1} = \arg \min_x f(x_k) + \nabla f(x_k)^T(x - x_k) + \frac12 (x - x_k)^T B_k (x - x_k). $$ Quasi-Newton methods differ in the choice of the matrix $B_k$. (If $B_k = \nabla^2 f(x_k)$, then the above iteration is Newton's method. In quasi-Newton methods, $B_k$ is an approximation to $\nabla^2 f(x_k)$ that can be computed inexpensively.) The approximation in (1) is good when $x$ is close to $x_k$. It would be natural to add a penalty term to the objective function in (1) to discourage $x$ from straying too far from $x_k$: $$ \tag{2} x_{k+1} = \arg \min_x f(x_k) + \nabla f(x_k)^T(x - x_k) + \frac12 (x - x_k)^T B_k (x - x_k) + \frac1{2t} \|x - x_k \|_2^2. $$ The parameter $t > 0$ can be thought of as a "step size" that controls how severely we are penalized for moving away from $x_k$. Including such a penalty term is a common trick in optimization; for example, the proximal gradient method and the Levenberg-Marquardt algorithm can both be interpreted as using this trick. I'll assume that $B_k$ is symmetric and positive semidefinite, which is typical in quasi-Newton methods. Setting the gradient of the objective function in (2) with respect to $x$ equal to $0$, we obtain $$ \nabla f(x_k) + (B_k + \frac{1}{t} I)(x - x_k) = 0. $$ Here $I$ is the identity matrix. The coefficient matrix $B_k + \frac{1}{t} I$ is guaranteed to be positive definite. The solution to this equation is $$ \tag{3} x_{k+1} = x_k - (B_k + \frac{1}{t} I)^{-1} \nabla f(x_k). $$ If $t$ is very small, then $(B_k + \frac{1}{t}I)^{-1} \approx t I$, and the update (3) is approximately a gradient descent update with step size $t$. On the other hand, if $t$ is large, then $(B_k + \frac{1}{t}I)^{-1} \approx B_k^{-1}$, and the update (3) is approximately a quasi-Newton update. So the iteration (3) is like a compromise between a quasi-Newton method and gradient descent. The Levenberg-Marquardt algorithm chooses the parameter $t$ adaptively, as follows. If $f(x_{k+1}) < f(x_k)$, then $x_{k+1}$ is accepted and $t$ is increased by a factor of 10. Otherwise, $x_{k+1}$ is rejected and $t$ is reduced by a factor of $10$, and then $x_{k+1}$ is recomputed. We only accept $x_{k+1}$ once a reduction in the value of $f$ has been achieved. (We don't have to use a factor of 10, but that is a typical choice.) Note: Here is an important question about the above proposed algorithm. Quasi-Newton methods rely on the fact that the inverse of $B_k$ can be computed efficiently. Otherwise, we might as well just use Newton's method. In the algorithm I proposed, can the inverse of $B_k + \frac{1}{t} I$ be computed efficiently? If not, then we might as well just take $B_k = \nabla^2 f(x_k)$. Can the quasi-Newton strategies to update $B_{k}^{-1}$ efficiently be adapted to update $(B_k + \frac{1}{t} I)^{-1}$ efficiently? That is a question I will need to ponder...
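For what it's worth, here is a minimal Python sketch of the damped update (3) with the adaptive choice of $t$ described above. $B_k$ is taken to be the exact Hessian purely for illustration (a quasi-Newton approximation could be substituted), and the function names and test problem are my own choices, not part of the answer:

```python
import numpy as np

def damped_newton_minimize(f, grad, hess, x0, t0=1.0, tol=1e-8, max_iter=200):
    """Sketch of update (3): x+ = x - (B + I/t)^{-1} grad f(x), with the
    Levenberg-Marquardt-style adaptation of t described above."""
    x, t = np.asarray(x0, dtype=float), float(t0)
    n = len(x)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        B = hess(x)
        while t > 1e-12:
            x_new = x - np.linalg.solve(B + np.eye(n) / t, g)
            if f(x_new) < f(x):        # accept the step, trust the model more
                x, t = x_new, 10.0 * t
                break
            t /= 10.0                   # reject, shrink toward gradient descent
    return x

# Example: minimize the Rosenbrock function (values chosen arbitrarily).
f    = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400*x[1] + 1200*x[0]**2, -400*x[0]],
                           [-400*x[0], 200.0]])
print(damped_newton_minimize(f, grad, hess, [-1.2, 1.0]))  # should approach [1., 1.]
```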
set of Kolmogorov-random strings is co-re
There is a procedure to enumerate $\{ x : C(x) \lt \vert x \vert \}$, the set of all strings with descriptions shorter than themselves: evaluate every description (in parallel) to see what string it describes, and output that string if it is longer than its description. One way to do this is: on time step $t = 2^k \cdot (2 \cdot x + 1)$, spend a unit of time evaluating the $k^{th}$ description. So this set is recursively enumerable and therefore its complement $R_c$ is in co-RE.
Ways to squeeze $e$ by hand
The "tail" when we truncate the usual series for $e$ just after the term $\frac{1}{9!}$ is $$\frac{1}{10!}+\frac{1}{11!}+\frac{1}{12!}+\cdots.$$ This sum is less than the sum of the infinite geometric series $$\frac{1}{10!}\left(1+\frac{1}{11}+\frac{1}{11^2}+\cdots\right),$$ which is $\frac{11}{10\cdot 10!}$. This is less than our target error. So we can take $a=\sum_0^9 \frac{1}{k!}$ and $b=a+\frac{11}{10\cdot 10!}$.
Prove the number of arrangements is equal to $\frac{n!}{(n-j)!}$
Note that for the first choice we have $n$ options, for the second $n-1$, and so on. Eventually the number of arrangements is: $$n \cdot (n-1) \cdots (n-j+1) = \frac{n(n-1)\cdots 1}{(n-j)(n-j-1)\cdots 1} = \frac{n!}{(n-j)!}$$
Trigonometry substitution issue with sign
When you make the substitution $x=2\tan \theta$, you have to be careful to specify the domain of $\theta$: the substitution is only valid if $\theta$ has a small enough domain for $\tan \theta$ to be continuous. The simplest possible choice of domain is probably $-\frac{\pi}{2} < \theta < \frac{\pi}{2}$. Note that the range of $2\tan \theta$ on this domain is the entire real line, so taking $\theta$ in this domain doesn't lose any generality. But when $-\frac{\pi}{2} < \theta < \frac{\pi}{2}$, we always have $\sec \theta > 0$. So in fact, if you make this choice of domain, it is always true that $2 \sec \theta=\sqrt{x^2+4}$, without any sign issues. It's instructive to think about what happens if you choose a different domain for $\theta$. If $\sec \theta > 0$ on that domain, nothing will change. If $\sec \theta < 0$ on that domain, then $$\int \frac{dx}{\sqrt{x^2+4}}=\int \frac{\sec^2 \theta \,d\theta}{-\sec \theta}=-\int \sec \theta \,d\theta $$ because $\sqrt{x^2+4}$ is still positive. So the integral in terms of $\theta$ evaluates to $-\ln|\sec \theta +\tan \theta|+C$. Then, when we rewrite in terms of $x$, we again have $2\sec \theta=-\sqrt{x^2+4}$, so (absorbing a constant into $C$) the integral in terms of $x$ is $$-\ln\left|-\sqrt{x^2+4}+x\right|+C=-\ln\left(\sqrt{x^2+4}-x\right) +C\, ,$$ because $\sqrt{x^2+4} > x$ for all $x$. But then \begin{align*} -\ln\left(\sqrt{x^2+4}-x\right)&=\ln\left(\frac{1}{\sqrt{x^2+4}-x}\right)\\ &=\ln\left(\frac{\sqrt{x^2+4}+x}{(\sqrt{x^2+4})^2-x^2}\right)&\text{(multiplying by the conjugate)}\\ &=\ln\frac{\sqrt{x^2+4}+x}{4}\\ &=\ln(\sqrt{x^2+4}+x)-\ln 4 \, . \end{align*} So we get nearly the same result whatever domain we choose, but the constant term may be different.
Proving that the the sequence $x_n = \frac{n^{100}}{2^n}$ converges using only elementary methods
I think this is the most elementary we can get. Look at the ratio between two consecutive terms: $$ \frac{x_{n+1}}{x_n}=\frac{(n+1)^{100}/2^{n+1}}{n^{100}/2^n}\\ =\frac{(n+1)^{100}}{2n^{100}}\\ =\frac12\left(1+\frac1n\right)^{100} $$ Now note that from some point on, $1+\frac1n<\sqrt[100]2$. This means that from that point on, we get $\frac{x_{n+1}}{x_n}<1$, which means that the sequence is monotonically decreasing. At the same time, it is trivial that $x_n>0$ for all $n$. Thus $x_n$ must converge.
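Just to illustrate numerically (not needed for the proof), the first index at which the ratio drops below $1$ can be found directly:

```python
# Find the first n with (1/2)(1 + 1/n)^100 < 1.
n = 1
while 0.5 * (1 + 1 / n) ** 100 >= 1:
    n += 1
print(n)   # 144, so x_n is strictly decreasing from that index on
```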
Challenge: Demonstrate a Contradiction in Leibniz' differential notation
The discussion here has been quite interesting! I wrote about Leibniz's notation in my Bachelor's Thesis in 2010, reading through major parts of Bos's 1974 PhD on higher order differentials in the Leibnizian calculus. I believe Bos is wrong at one point: assuming that one variable is in what Bos calls arithmetic progression is never necessary - only convenient! I will come back to that below.

Leibniz's differentials

Leibniz developed his differentials, at first, from a geometrical intuition - although he reconsidered the actuality of this idea time and again. In my words, this idea can be very briefly summarized as: A curve can be thought of as a polygon with infinitely many infinitely small sides $ds$. Each $ds$ is an infinitesimally small straight line segment being a part of the curve and (paradoxically) tangent to it at the same time. Gathering the $ds$ into one straight line segment $s=\int ds$, this will constitute the length of the curve. Expressing such a curve by a geometrical relation between coordinate line segments $x$ and $y$, one could consider each $ds$ as the hypotenuse of a right triangle with legs $dx$ and $dy$ so that $dx^2+dy^2=ds^2$. This is only to say that $dx,dy$ and $ds$ were thought of as geometrical and mutually dependent entities - never considered just numbers like we allow functions to be today. Just to stress how geometrical: the function nowadays expressed by the formula $f(x)=x^2$ would be something like $a\cdot y=x\cdot x$ where $a,y$ and $x$ were all considered line segments, so that both sides of the equation would constitute an area in Leibniz's time.

The level curve example

In the fractions $\frac{\partial f}{dx}$ and $\frac{\partial f}{dy}$ the $\partial f$'s in the two fractions are unrelated because: We do not have $\partial f,\partial x$ and $\partial y$ as mutually dependent geometrical entities, for the reason you already gave: the first $\partial f$ is the change in $f$ when you move in the $x$-direction by the vector $(dx,0)$ whereas the second $\partial f$ corresponds to moving by the vector $(0,dy)$. So they are unequal although infinitesimally small ... Even if we had some $df$ mutually dependent on $dx$ and $dy$, this would naturally have to be the change in $f$ when you travel along the vector $(dx,dy)$ and thus different from the $\partial f$'s described before.

The chain rule example

Since we consider higher order differentials, the work of Bos is relevant here: Had there been such a thing as a derivative $z=\frac{dy}{dv}$ in Leibniz's time, the differential of that should read $$ dz=d\frac{dy}{dv}=\frac{dy+ddy}{dv+ddv}-\frac{dy}{dv}=\frac{dv\ ddy-dy\ ddv}{dv(dv+ddv)} $$ Now, since $ddv$ is infinitesimally small compared to $dv$ we may skip $ddv$ in the bracket and simply write $dv$ instead of $(dv+ddv)$. Therefore we have $$ \frac{dz}{dv}=\frac{dv\ ddy-dy\ ddv}{dv^3}=\frac{ddy}{dv^2}-\frac{dy\ ddv}{dv^3} $$ Note that $ddy$ can also be written as $d^2 y$. So the second order derivative of $y$ with respect to $v$ equals $\frac{d^2 y}{dv^2}$ minus some weird fraction $\frac{dy\ d^2 v}{dv^3}$ which can only be disregarded if it is zero. This only happens if either $dy=0$ or $d^2 v=0$. Choosing $d^2 v$ identically zero does the trick and renders $dv$ constant. Suppose now that $d^2 v\equiv 0$. Then for the example $y=u=v^2$ we see that $du=2v\ dv$ and furthermore $ddu=2v\ ddv+2\ dv^2=2\ dv^2$ where the last equality is due to our choice that $ddv$ is identically zero.
Therefore we see that the derivative of $w=\frac{dy}{du}$ will be given as $$ \frac{dw}{du}=\frac{d^2 y}{du^2}-\frac{dy\ ddu}{du^3} $$ where the last fraction is far from being zero as it may be rewritten - noting that $y=u\implies dy=du$ and that $\frac{dv}{du}=\frac{1}{2v}$ - to obtain $$ \require{cancel} \frac{\cancel{dy}\ ddu}{\cancel{du}\cdot du^2}=\frac{2\ dv^2}{du^2}=\frac{1}{2v^2} $$ This shows that assuming $\frac{d^2 y}{dv^2}$ to be the second order derivative of $y=v^2$ with respect to $v$ in the modern sense makes $\frac{d^2 y}{du^2}$ differ by $\frac{1}{2v^2}$ from being the second order derivative of $y=u$ with respect to $u$. Now since we know that $y=u$ we have $w=\frac{dy}{du}=1$ and thus $\frac{dw}{du}=0$. Therefore we must have $$ \frac{d^2 y}{du^2}-\frac{1}{2v^2}=0 $$ in this case, showing that $\frac{d^2 y}{du^2}=\frac{1}{2v^2}$. So with the choice $y=u=v^2$ and $ddv\equiv 0$ the equation $$ \frac{d^2 y}{du^2}\cdot\left(\frac{du}{dv}\right)^2=\frac{d^2 y}{dv^2} $$ may be successfully checked by applying that $\frac{du}{dv}=2v$, since we then have $$ \frac{1}{2v^2}\cdot(2v)^2=2 $$ which is actually true. This is NOT a coincidence!

Conclusion

The above calculations show that Julian Rosen's very appealing example of failure in the method of the Leibnizian calculus seems to be a misunderstanding about what is meant by the notions of $d^2 y$ and the hidden, but important, additional variables $ddv$ and $ddu$. This provides specific details regarding the comments given by user72694 below the answer from Julian. However, proving that Leibniz's notation will never produce false conclusions when handled correctly is a whole different story. This is supposedly what Robinson managed to do, but I must admit that I have not read and understood that theory myself. My Bachelor's thesis focused mainly on understanding how the method was applied by Leibniz and his contemporaries. I have oftentimes thought about the foundations, but mainly from a 17th century perspective.

Comment on Bos's work

On page 31 in his thesis, Bos argues that the limit $$ \lim_{h_1,h_2\rightarrow 0}\frac{[f(x+h_1+h_2)-f(x+h_1)]-[f(x+h_1)-f(x)]}{h_1 h_2} $$ only exists if $h_1=h_2$ which then makes this limit equal $f''(x)$. But that is in fact not entirely true. The $x$-differences $h_1$ and $h_2$ need not be equal. It suffices for them to converge to being equal, which is a subtle, but important, variation of the setup. We must demand that $h_1$ and $h_2$ converge to zero in a mutually dependent fashion so that $$ \lim_{h_1,h_2\rightarrow 0}\frac{h_2}{h_1}=1 $$ With this setup the limit of the large fraction from before may still exist, but need not equal $f''(x)$. Since $h_1,h_2$ play the role of $dx$'s this is equivalent to allowing $dx_1\neq dx_2$ so that $ddx=dx_2-dx_1\neq 0$ although being infinitely smaller than the $dx$'s. This means that it is in fact possible to imitate the historical notion of $dx$ being constant (and thereby $x$ in arithmetic progression) directly by modern limits.

Extras regarding the OP's answer

You are quite right that the differentials can be successfully manipulated into the equation $$ \frac{d^2}{dv^2}\big(y(u(v))\big)=y''(u(v))\cdot u'(v)^2+y'(u(v))\cdot u''(v) $$ under the assumption that $ddv\equiv 0$. There is, however, a more obvious and even less restrictive choice to leave the progressions of all three variables $u,v$ and $y$ unspecified, and yet to connect the notation in a meaningful way to modern standards: Introduce a fourth variable $t$ in arithmetic progression (i.e. 
$ddt\equiv 0$). One could think of it as a time variable so that $u(t),v(t)$ and $y(t)$ are coordinate functions of some vector-valued function. Then Julian Rosen's equation can be directly transformed to $$ \frac{\left(\frac{d^2 y}{dt^2}\right)}{\left(\frac{du^2}{dt^2}\right)}\cdot\left(\frac{\left(\frac{du}{dt}\right)}{\left(\frac{dv}{dt}\right)}\right)^2=\frac{\left(\frac{d^2 y}{dt^2}\right)}{\left(\frac{dv^2}{dt^2}\right)} $$ and since $dt$ is in arithmetic progression $y''(t)=\frac{d^2 y}{dt^2}$ so that this may be written in modern notation as $$ \frac{y''(t)}{u'(t)^2}\cdot\left(\frac{u'(t)}{v'(t)}\right)^2=\frac{y''(t)}{v'(t)^2} $$ which is easily verified to be correct. This is probably the simplest account, but it does not give a very clear example of the necessity of choosing the progression of the variables. I think my first account did that better.
Expectation of constraint random walk
Consider the symmetric random walk $(S_n)$ on the integers, defined by $S_0=0$ and $S_n=X_1+\cdots+X_n$ for every $n\geqslant1$, where $(X_n)$ is i.i.d. with $P(X_n=1)=P(X_n=-1)=\frac12$. Introduce $M_n=\max\{S_0,S_1,\ldots,S_n\}$ and $Y_n=M_n-S_n$. Then: The dynamics of the process $(Y_n)$ is the dynamics described in the question.$^*$ Now, $E(S_n)=0$ hence $E(Y_n)=E(M_n)$ and it is known that $E(M_n)\sim\sqrt{2n/\pi}$ hence $E(Y_n)\to\infty$. $^*$ To see why, consider the dynamics of $(Y_n)$. If $Y_n\geqslant1$, then $M_n\geqslant S_n+1$ hence the fact that $S_{n+1}\leqslant S_n+1$ almost surely implies that $M_{n+1}=M_n$ almost surely, thus $Y_{n+1}=Y_n-X_{n+1}$, that is, $Y_{n+1}=Y_n+1$ or $Y_{n+1}=Y_n-1$ with equal probabilities. If $Y_n=0$, then $M_n=S_n$ hence $M_{n+1}=S_{n+1}$ if $X_{n+1}=+1$ while $M_{n+1}=M_n$ and $S_{n+1}=S_n-1$ if $X_{n+1}=-1$, that is, $Y_{n+1}=0$ or $Y_{n+1}=+1$ with equal probabilities. These are the desired transition probabilities. To sum up, $$Y_{n+1}=\max(Y_n-X_{n+1},0).$$
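A quick Monte Carlo illustration of the identity $Y_{n+1}=\max(Y_n-X_{n+1},0)$ and of the growth $E(Y_n)\sim\sqrt{2n/\pi}$ (purely a sanity check; the parameters are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_paths = 10_000, 1_000
X = rng.choice([-1, 1], size=(n_paths, n_steps))   # i.i.d. +-1 steps
Y = np.zeros(n_paths)
for k in range(n_steps):
    Y = np.maximum(Y - X[:, k], 0)                 # reflected dynamics from the answer
print(Y.mean(), np.sqrt(2 * n_steps / np.pi))      # both around 80
```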
Kan extensions and double application of Yoneda lemma
By the Yoneda lemma, maps $$\mathcal{C} (G B, C) \to [\mathcal{A}^\mathrm{op}, \mathbf{Set}] (\mathcal{B} (F -, B), \mathcal{C} (H -, C))$$ that are natural in $C$ correspond to elements of $[\mathcal{A}^\mathrm{op}, \mathbf{Set}] (\mathcal{B} (F -, B), \mathcal{C} (H -, G B))$, i.e. maps $$\mathcal{B} (F A, B) \to \mathcal{C} (H A, G B)$$ that are natural in $A$. On the other hand, naturality in $B$ means that such maps correspond to elements of $\mathcal{C} (H A, G F A)$, i.e. morphisms $$H A \to G F A$$ as claimed.
Equivalent definitions of Lebesgue measure
Hint: If you can prove it for $K$ closed, you can then extend it to the compact case. Let's say you have some sequence of sets $\{C_n\}$ such that $C_n$ is a closed subset of $A$ for all $n$ and $\lambda(C_n) \rightarrow \lambda(A)$. Let $K_n = C_n \bigcap [-n,n]$.
Show both relaxations of boolean LP give equal lower bounds
$g(x, \lambda, v)$ as a function of $x$ is convex only if $v\leq 0$ (in that case the Hessian is diagonal and positive semidefinite); otherwise it has at least one negative eigenvalue. Hence for $v\nleqslant0$, $\inf_{x}g(x,\lambda,v)=-\infty$, and thus the maximum of $g(\lambda, v)$ must be attained at a point where $v\leq0$. Therefore the Lagrange relaxation and the dual of the LP relaxation give equivalent lower bounds, and since, by Slater's condition, the LP relaxation and its dual exhibit strong duality, we can conclude that both relaxations give the same lower bound.
A question about Homotopy (Michael Harris's recent book)
The basic things we're looking at are paths, that is maps from the interval $I = [0,1]$ to some space $X$. If $X$ and $Y$ are nice enough spaces (I don't want to get bogged down in technicalities), then a homotopy between two maps $f, g : X \to Y$ is the same thing as a path $h : I \to Y^X$ such that $h(0) = f$ and $h(1) = g$, so this whole discussion applies in particular to homotopies. When you've got two paths $\alpha, \beta : I \to X$ such that the end of $\alpha$ coincides with the beginning of $\beta$ (IOW, $\alpha(1) = \beta(0)$), then you can concatenate the paths $\alpha$ and $\beta$, and obtain a new path $\beta \alpha$ (written like function composition). The traditional way of doing this is to define: $$(\beta\alpha)(t) = \begin{cases} \alpha(2t) & 0 \le t \le \frac{1}{2} \\ \beta(2t-1) & \frac{1}{2} \le t \le 1 \end{cases}$$ Since $\alpha(1) = \beta(0)$, this is well defined and continuous at $t = 1/2$. Basically, you go through the path $\alpha$ at double speed, then through the path $\beta$ at double speed. From now on, for simplicity, I will consider loops. Fix some base point $x_0 \in X$, and define $$\Omega X = \{ \gamma : I \to X \mid \gamma(0) = \gamma(1) = x_0 \},$$ in other words paths that start and end at $x_0$. This avoids the discussion about "$\alpha(1) = \beta(0)$", because you can concatenate all loops (they all start and end at the same point), and it's enough to have a general idea of the theory. But there is a problem now. Suppose you have three loops $\alpha, \beta, \gamma \in \Omega X$. You want to concatenate them. You have basically two ways of doing that: first concatenate $\alpha$ and $\beta$, then concatenate the result with $\gamma$, to get $\gamma(\beta\alpha)$; or first concatenate $\beta$ and $\gamma$, and then concatenate $\alpha$ with the result to obtain $(\gamma\beta)\alpha$. Then, unless all three paths are constant (if your space is Hausdorff), $$\gamma(\beta\alpha) \neq (\gamma\beta)\alpha \quad(!)$$ Concatenation isn't, in general, associative. The two loops $\gamma(\beta\alpha)$ and $(\gamma\beta)\alpha$ are homotopic, but not equal. There is also no "unit loop" $e$, such that if you concatenate it with another loop you get the original loop (i.e. $e \gamma = \gamma$ or $\gamma e = \gamma$). So how do we solve this? One possible solution is Moore loops. Instead of requiring all paths to take "one second" ($I = [0,1]$) to complete, you consider more general paths of the form $\alpha : [0,T] \to X$ where $T \ge 0$ (s.t. $\alpha(0) = \alpha(T) = x_0$). Then given $\alpha : [0,T] \to X$ and $\beta : [0,T'] \to X$ that start and end at $x_0$, you can obtain a new path $\beta\alpha : [0,T+T'] \to X$ defined by $$(\beta\alpha)(t) = \begin{cases} \alpha(t) & 0 \le t \le T \\ \beta(t-T) & T \le t \le T+T' \end{cases}$$ Then you can verify that this defines an associative law, and the constant loop $\mathrm{cst}_{x_0} : [0,0] \to X$ is a unit. Moore loops are good enough for some purposes, but they're also unsatisfying for some others: two different loops can have a different domain, and the concatenation of two loops has again a different domain. We're really interested in understanding $\Omega X$, so what can we do? The answer lies in the little intervals operad $\mathtt{D}_1$. 
I refer you to your other question where I've written an explanation of what it is, and I will use the same notation; when $n=1$, $\mathtt{D}_1$ is called the little intervals operad because $D^1 = [-1,1] \cong I$ is just an interval; in what follows I will identify $D^1$ with $I$, it makes no difference. A fundamental feature of $\Omega X$ is that it is an algebra over the little intervals operad. What does this mean? Take an operation of arity $r$: $$c = (c_1, \dots, c_r) \in \mathtt{D}_1(r)$$ Recall that this means $c_i : I \to I$ is an affine embedding ($c_i(x) = t_i + \lambda_i x$, $0 < \lambda_i < 1$), and $c_i((0,1)) \cap c_j((0,1)) = \varnothing$ for $i \neq j$. Take also loops $\alpha_1, \dots, \alpha_r \in \Omega X$. Then you can define $c(\alpha_1, \dots, \alpha_r) \in \Omega X$ by $$c(\alpha_1, \dots, \alpha_r) : t \mapsto \begin{cases} \alpha_i(u) & t = c_i(u) \text{ for some } i, u \\ x_0 & \text{otherwise} \end{cases}$$ What does this look like? An element, say, $c \in \mathtt{D}_1(3)$ looks like this: If you have three loops $\alpha, \beta, \gamma \in \Omega X$, then $c(\alpha,\beta,\gamma)$ is the loop that's equal to $\alpha$ (sped up to match the length) on the red part "1", $\beta$ on the blue part "2", and $\gamma$ on the green part "3". On the rest (the black part), it's equal to $x_0$. Since the endpoints of $\alpha$, $\beta$, and $\gamma$ are all equal to $x_0$, this defines a continuous loop $c(\alpha, \beta, \gamma) \in \Omega X$. Then one can check that this defines an algebra over $\mathtt{D}_1$. So what does this have to do with associativity? Consider the element $m \in \mathtt{D}_1(2)$ given by embedding the first interval as the first half $[0,1/2]$ (so $c_1(t) = t/2$) and the second interval as the second half $[1/2,1]$ ($c_2(t) = (t+1)/2$). Then for $\alpha,\beta \in \Omega X$, $m(\alpha,\beta) = \beta \alpha$ is the concatenation of the two loops. You have two different operations, $u = m(m, \operatorname{id})$ and $v = m(\operatorname{id}, m)$ in $\mathtt{D}_1(3)$, and $u(\alpha,\beta,\gamma) = \gamma(\beta\alpha)$ while $v(\alpha,\beta,\gamma) = (\gamma\beta)\alpha$. But now, there is a homotopy, given by the structure of $\mathtt{D}_1$-algebra on $\Omega X$, between $u(\alpha,\beta,\gamma)$ and $v(\alpha,\beta,\gamma)$! It's in fact a path in $\mathtt{D}_1(3)$, given by rescaling the intervals. And more generally if you have more loops, any parenthesization will be homotopic to any other (so say $((\alpha_1 \alpha_2) \alpha_3) \alpha_4 \sim (\alpha_1 \alpha_2) (\alpha_3 \alpha_4)$ through a homotopy that comes from the $\mathtt{D}_1$-algebra structure). Such a structure is called a strongly homotopy associative algebra, or $A_\infty$-algebra, because you don't just know that $a(bc) \sim (ab)c$: you have a specific homotopy between the two (for all $a$, $b$, $c$), and these homotopies are all compatible with one another (meaning, for example, that if you have two homotopies $((ab)c)d \leadsto a(b(cd))$, then they are equal, similar to the coherence axioms for a monoidal category), and so on in higher arity. The associahedra of Stasheff are combinatorial models for how these homotopies are compatible. The recognition principle of Boardman–Vogt and May also tells you that under technical conditions, if a space $Y$ can be endowed with the structure of a $\mathtt{D}_1$-algebra, then there is another space $X$ such that $Y \sim \Omega X$. So in that sense, the little intervals operad exactly captures what it means to be a loop space. 
This is the beginning of a very long story that's still being developed today. For example, an algebra over the little disks operad $\mathtt{D}_2$ is (by the recognition principle) essentially the same thing as a two-fold loop space $\Omega^2 X = \Omega(\Omega X)$. It's basically a strongly homotopy associative algebra with "one level" of commutativity, meaning there's a homotopy $ab \leadsto ba$ (but it is not strongly homotopy commutative). Equivalently, it's the same thing as two strongly homotopy associative structures on the same space that are compatible with each other in the sense of the Eckmann–Hilton argument. There are also applications to the $\infty$-categories that you mention in your question, where the associativity axiom of a standard category is relaxed and instead only holds up to (coherent) homotopy.
Recommend good books for a beginner to learn about Support Vector Machines (SVM)
I've found that Christopher Bishop's book on pattern recognition and machine learning has been a very nice text. Andrew Ng also has some great video lectures on machine learning and SVMs if you have a Google.
Non-isomorphic modules generated by one element
The irreducible (i.e. simple) submodules are nonisomorphic because they have distinct annihilators. $V_1$ is annihilated by $0\oplus V_2\oplus V_3\oplus\cdots$ and $V_2$ is annihilated by $V_1\oplus 0 \oplus V_3\oplus\cdots$ and so on.
Pulling balls from a box
If you are referring to the number of possible sequences of draws, then yes, it will be $10^3$, but if you want to count the number of possible resulting multi-sets, then it will be the number of ways to put three identical balls into 10 distinct boxes, which is the number of non-negative integer solutions to $x_1+x_2+\cdots+x_{10}=3$, which is ${9+3}\choose{3}$ (to arrange 9 pluses and 3 ones).
Find the optimal solution without going through the ERO's
HINT: With these values of $x_1,x_2,x_3,x_4$, it is clear that the basic variables are $x_2,x_4$ and $x_6$ (the slack variable for the second constraint). Therefore, it is easy to find the optimal tableau, from which you can directly read the values of the dual variables (do you see where?).
proving subspace of matrix
To prove that $W$ is a subspace you need to show that the set is non-empty and that it is closed under addition and scalar multiplication, or shortly that $a A_1 + b A_2\in W$ for any $A_1,A_2\in W$ and scalars $a,b$. The set isn't empty since the zero matrix is in it. Let $A_1,A_2\in W$; then $\operatorname{Tr}(a A_1 + b A_2) = a \operatorname{Tr}(A_1)+b \operatorname{Tr}(A_2) = 0+0=0$, therefore $a A_1 + b A_2\in W$ and you are done. https://en.wikipedia.org/wiki/Trace_%28linear_algebra%29
Which variables to use in regression
The correlations with $y$ are not enough information for answering the question. Any linear model that includes one of the $\pm 0.9$ variables has $R^2$ of at least $0.81$. Without knowing the intercorrelations of the $x$ variables it's not possible to say anything more.
How to prove that $f(x,y)=y-x$ is continuous?
Hint: Take a point $(x_0,y_0)\in\mathbb{R}^2$. For any point $(x,y)\in\mathbb{R}^2$, we can write $$ (x,y)=(x_0,y_0)+(\Delta x,\Delta y),\qquad \Delta x:=x-x_0,\qquad \Delta y:=y-y_0. $$ Then $$ f(x,y)-f(x_0,y_0)=(x-y)-(x_0-y_0)=(x-x_0)-(y-y_0)=\Delta x-\Delta y. $$ Given $\epsilon>0$, you want to find $\delta>0$ such that whenever $d((x,y),(x_0,y_0))<\delta$, you have $\lvert f(x,y)-f(x_0,y_0)\rvert<\epsilon$. Note, however, that $$ d((x,y),(x_0,y_0))=\sqrt{\Delta x^2+\Delta y^2}, $$ and by the above $$ \lvert f(x,y)-f(x_0,y_0)\rvert=\lvert \Delta x-\Delta y\rvert\leq\lvert\Delta x\rvert+\lvert \Delta y\rvert. $$ Can you see how to make $\lvert\Delta x\rvert$ and $\lvert \Delta y\rvert$ small by choosing $\delta$ small? Of course, alternatively, you can prove that the functions $(x,y)\mapsto x$ and $(x,y)\mapsto y$ are continuous, and the (very direct) theorem that if $g$ and $h$ are continuous, then $g(x,y)-h(x,y)$ is also continuous.
Normal subgroup, and cosets.
I'm having a hard time following the argument presented in the text of the question, and I don't think it is completely correct. For example, since $nc \in N$, if $a \in G - N$ (that is, $a \in G$, $a \notin N$), then $a(nc) \notin N$. If it were, say $a(nc) = m \in N$, then $a = m(nc)^{-1} \in N$, a contradiction. In fact, $a(nc) \in aN$, a left coset of $N$, and $N \cap aN = \varnothing$ since left cosets are either disjoint or identical, as are right cosets. The easiest way I know to show that left cosets are right cosets (and vice versa!) for normal $N$ is to use $aNa^{-1} = N$; then $aN = Na$ so the left and right cosets represented by $a$ (that is, containing $a$) are identical. It's as simple as that. Hope this helps. Cheerio, and as always, Fiat Lux!!!
Is it possible to integrate something that isn't a function?
Absolutely, this is called a curvilinear integral. It works when the curve is given by parametric equations. If the curve is closed, you can obtain its area by integrating one of $x\,dy$ or $-y\,dx$. E.g. with a full circle, $$x=r\cos t,y=r\sin t$$ and $$A=\int_0^{2\pi}r\cos t\,d(r\sin t)=r^2\int_0^{2\pi}\cos^2t\,dt=\pi r^2.$$
Is there a time-domain proof of Nyquist sampling theorem?
The key assumption is that the signal is band-limited. This is a frequency-domain assumption. Any sensible proof must go through the frequency domain. Same goes for the proof being an approximation argument. Any real proof must be an approximation argument, including the one you alluded to, or its rigorous version. Here is another Hilbert space argument that, in the end, gives us approximation in the topology of uniform convergence (much better than $L^1$ or $L^2$): Let $H$ be the set of band-limited elements of $L^2(\mathbb{R})$ (no extra assumptions). By unitarity of the Fourier transform $\mathcal{F}$, $H$ is a Hilbert subspace of $L^2$. Since $L^2$ elements of compact support lie in $L^1$, the Fourier inversion theorem implies that elements of $H$ are in fact equal almost everywhere to continuous functions. So the band-limited assumption implies (for now) continuity, and therefore sampling makes sense. $\mathcal{F}(H)$ has orthonormal basis $\{ e^{- 2 \pi i k\frac{\xi}{2T}} \}_{k \in \mathbb{Z}}$. So now it's natural to compute the Hilbert space expansion of $\hat{f}$ in this basis and then apply $\mathcal{F}^{-1}$. By unitarity $$ \langle \hat{f}, e^{- 2 \pi i k\frac{\xi}{2T}} \rangle = \langle f, \delta_{\frac{k}{2T}}\rangle = f(\frac{k}{2T}). $$ Strictly speaking, one needs a rigged Hilbert space that includes distributions to make sense of inner products with delta functions, but everything works out. On the other hand, the inverse Fourier transforms of the basis elements $\{ e^{- 2 \pi i k\frac{\xi}{2T}} \}_{k \in \mathbb{Z}}$ are just shifts of the $\mbox{sinc}$ function. So we have that Shannon's sampling formula holds in the $L^2$-sense. To strengthen the convergence, notice that $L^2$-convergence (in the frequency domain) implies $L^1$-convergence by the band-limited assumption. By the mapping properties of $\mathcal{F}^{-1}$, back in the time domain we have uniform convergence. Since the $\mbox{sinc}$ function and its shifts are all smooth, we can actually conclude that a band-limited $L^2$ function is in fact equal almost everywhere to a smooth function.
Does $\int_{0}^{\infty} \sin(x) \cdot\sin(x^2)\,dx$ converge?
I think I found an answer. After the substitution $t=x^2$ the integral becomes (up to a constant factor) $\int^{\infty} \sin t \,\frac{\sin\sqrt{t}}{\sqrt{t}}\,dt$, and we can prove that this converges using Dirichlet's test. Write the integrand as $\big(\sin t \sin\sqrt{t}\big)\cdot\frac{1}{\sqrt{t}}$. The factor $\frac{1}{\sqrt{t}}$ is positive, decreasing and tends to $0$. For the other factor, $$\sin t\sin\sqrt{t}=\tfrac12\big[\cos(t-\sqrt{t})-\cos(t+\sqrt{t})\big],$$ and for $t\geq 1$ the phases $t\pm\sqrt{t}$ are increasing with derivative $1\pm\frac{1}{2\sqrt{t}}$ bounded between $\frac12$ and $\frac32$; substituting $u=t\pm\sqrt{t}$ and applying the second mean value theorem to the monotone, bounded factor $\frac{1}{1\pm 1/(2\sqrt{t})}$ shows that $\int_1^X \sin t\sin\sqrt{t}\,dt$ is bounded uniformly in $X$. By Dirichlet's test, $\int_1^{\infty}\sin t\,\frac{\sin\sqrt{t}}{\sqrt{t}}\,dt$ converges, and hence so does the original integral.
Dominated convergence theorem and uniformly convergence
Fix an $\varepsilon > 0$. By uniform convergence, we know that there exists $N \in \mathbb{N}$ such that for $n \geq N$, \begin{equation*} |f_n| = |f_n - f + f| \leq |f_n - f| + |f| < |f| + \varepsilon \end{equation*} Then define the function $g : \Omega \to \mathbb{R}$ by $g(\omega) = |f(\omega)| + \varepsilon$. Then $g$ is integrable, since we work on a finite measure space. If it were an infinite measure space, the $"+ \varepsilon"$ part would give some difficulties. Next, define $h_n = f_{N + n}$, with $\lim\limits_{n \to \infty} h_n = f$, for which it holds that $|h_n| \leq g$. Then, all conditions of the dominated convergence theorem are satisfied, hence we can conclude \begin{equation*} \lim_{n \to \infty} \int_\Omega h_n \mathrm{d}\mu= \int_\Omega f \mathrm{d}\mu \tag{$\ast$} \end{equation*} Lastly, observe that the difference between $\{f_n\}$ and $\{ h_n \}$ is only a shift in indices, hence it immediately follows from ($\ast$) that \begin{equation*} \lim_{n \to \infty} \int_\Omega f_n \mathrm{d}\mu = \lim_{n \to \infty} \int_\Omega h_n \mathrm{d}\mu= \int_\Omega f \mathrm{d}\mu \end{equation*}
Intersection between three events
Notice that if $i\ne j$, then $S_i \cap S_j = \emptyset$. Hence $$P\left(\bigcup_{i=1}^3 S_i\right)=\sum_{i=1}^3 P(S_i)$$ We have \begin{align}P(S_1)&= P(E \cap \bar{F}\cap \bar{G}) \\ &= P(E)-P(E \cap (F \cup G)) \tag{1} \\&=P(E)-P(E \cap F)-P(E\cap G)+P(E \cap F\cap G) \tag{2} \end{align} To go from $(1)$ to $(2)$, we use $$P(E \cap (F \cup G))=P((E\cap F) \cup (E\cap G))$$ and the inclusion-exclusion principle. Do the same thing for $P(S_2)$ and $P(S_3)$, or just use a symmetry argument, and add them up.
Unique measure on positive Borel sets with following properties
To expand on @PhoemueX's comment, one way to go about doing this proof is to use the two following facts: Fact 1. Let $\mu$ and $\nu$ be measures on $\mathcal B_{(0,\infty)}$ and $T:(0,\infty)\to\mathbb R$ be measurable and bijective (with respect to $\mathcal B_\mathbb R$). Then, $\mu=\nu$ if and only if $\mu\circ T^{-1}=\nu\circ T^{-1}$. Fact 2. The Lebesgue measure $\lambda$ is the only measure on $\mathcal B_\mathbb R$ such that $\lambda\big((0,1]\big)=1$ and $\lambda(c+A)=\lambda(A)$ for every Borel set $A$ and $c\in\mathbb R$. Fact 1 is fairly easy to prove. Fact 2 is one of the most fundamental properties of the Lebesgue measure. Its proof can easily be found in most measure theory textbooks or online (see for instance this math.stackexchange question). Suppose we define the map $T$ as $T(x)=\ln(x)$ for every $x\in(0,\infty)$ (and thus, $T^{-1}(y)=e^y$ for all $y\in\mathbb R$). Clearly, this is bijective and measurable (since it is continuous). At this point, it shouldn't be too hard to show that if $\nu$ satisfies conditions 1. and 2. in the statement of your question, then $\mu=\nu\circ T^{-1}$ is such that $\mu\big((0,1]\big)=1$ and that $\mu(c+A)=\mu(A)$ for all $A$ Borel. Then use Facts 1 and 2. As also mentioned by @PhoemueX, this is a special case of the uniqueness of the Haar measure: Let $G$ be a locally compact group equipped with a topology making the operations of group product and inverse continuous (for example, the set $(0,\infty)$ with the usual multiplication is a group, and we can equip this group with the standard Euclidean topology, making it locally compact, and the multiplication and inverse are continuous). Let $\mathscr B(G)$ be the Borel $\sigma$-algebra on $G$. A measure $\mu$ on $\mathscr B(G)$ is said to be a left Haar measure if $\mu(cA)=\mu(A)$ for every $c\in G$ and $A\in\mathscr B(G)$. The general proof of the existence and uniqueness of the Haar measure is quite sophisticated, but the proof for $\mathbb R$ equipped with the usual addition (which yields the Lebesgue measure) is much easier and was known for a very long time. In this context, it is interesting to note that the map $T(x)=\ln(x)$ is actually both a group homomorphism and a homeomorphism between $\big((0,\infty),\cdot\big)$ and $(\mathbb R,+)$.
Derivative of matrix-valued function with respect to matrix input
Let function $\mathrm F : \mathbb R^{n \times p} \to \mathbb R^{m \times p}$ be defined as follows $$\rm F (X) := A X$$ where $\mathrm A \in \mathbb R^{m \times n}$ is given. The $(i,j)$-th entry of the output is $$f_{ij} (\mathrm X) = \mathrm e_i^\top \mathrm A \, \mathrm X \, \mathrm e_j = \mbox{tr} \left( \mathrm e_j \mathrm e_i^\top \mathrm A \, \mathrm X \right) = \langle \mathrm A^\top \mathrm e_i \mathrm e_j^\top, \mathrm X \rangle$$ Hence, $$\partial_{\mathrm X} \, f_{ij} (\mathrm X) = \color{blue}{\mathrm A^\top \mathrm e_i \mathrm e_j^\top}$$
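A finite-difference sanity check of this formula (the dimensions and indices below are arbitrary choices, not from the question):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p, i, j = 3, 4, 5, 1, 2           # 0-based indices for illustration
A, X = rng.standard_normal((m, n)), rng.standard_normal((n, p))
eps = 1e-6

# Finite-difference gradient of f_ij(X) = (A X)[i, j] with respect to X.
grad_fd = np.zeros((n, p))
for a in range(n):
    for b in range(p):
        dX = np.zeros((n, p)); dX[a, b] = eps
        grad_fd[a, b] = ((A @ (X + dX))[i, j] - (A @ X)[i, j]) / eps

grad_formula = A.T @ np.outer(np.eye(m)[i], np.eye(p)[j])   # A^T e_i e_j^T
print(np.allclose(grad_fd, grad_formula, atol=1e-4))         # True
```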
Arclength comparison of two convex functions
Yes, there is a geometric proof of that. Since $f$ is convex, its epigraph $$E_f = \{(x,y): x\in [a,b], y\ge f(x)\}$$ is a convex set. Nearest-point projection onto a convex set is a contraction: see here or here. So it does not increase the length of curves. Consider the projection of the graph of $g$ onto $E_f$; its image is precisely the graph of $f$. The conclusion follows.
Sign of diagonal product in determinant
The product of the antidiagonal elements of the $n\times n$ matrix $A=[a_{ij}]$ is given by $$ a_{1,n}a_{2,n-1}\ldots a_{n-1,2}a_{n,1} $$ and to determine the sign with which it appears in the determinant of $A$, it is necessary to examine the permutation $$ \left(\begin{array}{ccccccc} 1 & 2 & 3 & \ldots & n-2 & n-1 & n \\ n & n-1 & n-2 & \ldots & 3 & 2 & 1\end{array}\right), $$ which we simply write $$ (\begin{array}{ccccccc} n & n-1 & n-2 & \ldots & 3 & 2 & 1 \end{array}), $$ and count how many transpositions must be applied to it to obtain the natural order. To do this, we note that to bring $n$ to its natural position, $n-1$ exchanges with the elements on its right are necessary, thus obtaining the permutation $$ (\begin{array}{ccccccc} n-1 & n-2 & \ldots & 2 & 1 & n \end{array}). $$ In the same manner, $n-2$ exchanges are necessary to do the same with $n-1$, obtaining $$ (\begin{array}{ccccccc} n-2 & n-3 & \ldots & 2 & 1 & n-1 & n\end{array}) $$ etc. Ultimately it is clear that $$ (n-1)+(n-2)+\ldots+2+1 = \frac{n(n-1)}{2} $$ transpositions are needed in total to restore the natural order of the indices. It follows that the sign we are looking for is given by $\;{(-1)^{\displaystyle n(n-1)/2}},\;$ thus getting the following table for all $n\geq 2$: $$ \begin{array}{c} 2 && 3 && 4 && 5 && 6 && 7 && 8 && 9\\[1ex] - && - && + && + && - && - && + && +\end{array} \;\;\;\cdots $$
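A quick computational check of the inversion count (the helper name is mine):

```python
# Count inversions of the reversed permutation (n, n-1, ..., 1) and compare
# the resulting sign with (-1)^{n(n-1)/2}.
def sign_of_reversal(n):
    perm = list(range(n, 0, -1))
    inversions = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
    return -1 if inversions % 2 else 1

for n in range(2, 10):
    print(n, sign_of_reversal(n), (-1) ** (n * (n - 1) // 2))   # columns agree
```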
Applying Chapman - Kolmogorov to a probability
We have \begin{align} \mathbb P(X_2=2,X_4=5) &= \mathbb P(X_4=5\mid X_2=2)\,\mathbb P(X_2=2)\\ &= P^2_{2,5}\,\big(\mathbb P(X_2=2\mid X_0=1)\mathbb P(X_0=1) + \mathbb P(X_2=2\mid X_0=5)\mathbb P(X_0=5)\big)\\ &= P_{2,5}^2\left( P_{1,2}^2\cdot\frac 12 + P^2_{5,2}\cdot\frac12\right). \end{align} Computing $P^2$, we have $$ P^2 = \left( \begin{array}{ccccc} \frac{29}{100} & \frac{3}{25} & \frac{7}{50} & \frac{3}{10} & \frac{3}{20} \\ 0 & \frac{13}{25} & 0 & \frac{11}{25} & \frac{1}{25} \\ \frac{13}{50} & \frac{4}{25} & \frac{4}{25} & \frac{3}{10} & \frac{3}{25} \\ \frac{3}{50} & \frac{11}{25} & \frac{3}{100} & \frac{21}{50} & \frac{1}{20} \\ \frac{3}{25} & \frac{1}{25} & \frac{3}{25} & \frac{7}{20} & \frac{37}{100} \\ \end{array} \right) $$ and thus $$ \mathbb P(X_2=2,X_4=5) = \frac1{25}\left(\frac3{50} + \frac1{50}\right) =\frac{2}{625}. $$
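As a sanity check, the same computation with exact fractions, using the matrix $P^2$ above and assuming $\mathbb P(X_0=1)=\mathbb P(X_0=5)=\tfrac12$ as in the computation:

```python
from fractions import Fraction as F

# The matrix P^2 as given above (rows/columns are states 1..5, 0-indexed here).
P2 = [[F(29,100), F(3,25),  F(7,50),   F(3,10),  F(3,20)],
      [F(0),      F(13,25), F(0),      F(11,25), F(1,25)],
      [F(13,50),  F(4,25),  F(4,25),   F(3,10),  F(3,25)],
      [F(3,50),   F(11,25), F(3,100),  F(21,50), F(1,20)],
      [F(3,25),   F(1,25),  F(3,25),   F(7,20),  F(37,100)]]

p_X2_eq_2 = P2[0][1] * F(1, 2) + P2[4][1] * F(1, 2)   # condition on X_0 in {1, 5}
joint = P2[1][4] * p_X2_eq_2                           # times P(X_4=5 | X_2=2)
print(joint)   # 2/625
```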
Number of regular tournaments
This is the sequence OEIS A007079, save that $a_n$ is defined there to be the number of labelled regular tournaments on $2n+1$ nodes (rather than $2n-1$). (I’m assuming that you want the players to be individually identifiable, so that you’re interested in labelled tournaments; if not, you want OEIS A096368.) The OEIS entry has very little information; it does give a formula, $$a_n=\left[(x_1x_2\ldots x_n)^{(n-1)/2}\right]\prod_{1\le j<k\le n}(x_j+x_k)\;,$$ where the square brackets are the ‘coefficient of ... in ...’ operator.
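Here is a small brute-force sketch of the coefficient-extraction formula (assuming sympy is available; the function name is mine and the expansion is only feasible for very small $n$):

```python
from itertools import combinations
from math import prod
import sympy as sp

def regular_tournaments(n):
    """Labelled regular tournaments on n (odd) nodes, via the coefficient of
    (x_1 ... x_n)^((n-1)/2) in the product of (x_j + x_k) over all pairs."""
    x = sp.symbols(f"x1:{n + 1}")
    expanded = sp.expand(prod(x[j] + x[k] for j, k in combinations(range(n), 2)))
    monomial = prod(xi ** ((n - 1) // 2) for xi in x)
    return sp.Poly(expanded, *x).coeff_monomial(monomial)

print([regular_tournaments(n) for n in (3, 5)])   # [2, 24], matching A007079
```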
Sequences of Riemann Integrable Functions
Hint: You have to show that for any $\epsilon>0$, there is an $N$ so that for all $n\ge N$ you have $ \Vert f_n\Vert_{L^1}<\epsilon. $ Towards this end, you can use the following facts: Since $(f_n)$ converges to $0$ uniformly on $[0,1]$, for any $\epsilon>0$, there is an $N$ such that $|f_n(x)|<\epsilon$ for all $n\ge N$ and all $x\in [0,1]$. If $-M\le f(x)\le M$ for all $x\in [0,1]$ and if $f$ is Riemann integrable over $[0,1]$, then $$ -M\le \int_0^1 |f(x)|\,dx\le M. $$
How to prove that the Reiter property implies the Folner property?
Have a look at Theorem 1 of these notes: https://terrytao.wordpress.com/2009/04/14/some-notes-on-amenability/. (Terry normalizes $\varphi$ so that $\Vert \varphi \Vert_1 = 1$, but otherwise condition (ii) in Theorem 1 is the same as your version of the Reiter property.)
Use mathematical induction to prove that for all positive integers n, $ \sum_{r=1}^n (2r-1)\cdot 2^{-r} = 3- \frac{2n+3}{2^n} $.
If $$s_n=\sum_{r=1}^n (2r-1) \cdot 2^{-r}$$ then $$s_{n+1} = s_n+{2n+1\over 2^{n+1}}$$ Does this help?
Prove that a regular language $L$ exists satisfying $L_1 \subseteq L \subseteq L_2$ and $L - L_1$ and $L_2 - L$ are both infinite.
What happens if you use $x(yy)^iz$?
Prove $\sum\limits_{k=1}^n \binom{n}{k}k^2=2^{n-2}n(n+1)$ by combinatorial argument.
In how many ways can you form a subcommittee among $n$ people and appoint two roles within it, possibly to the same person? What if you choose who gets the roles first, and then finish the subcommittee? For the latter, the two terms of $n2^{n-1}+n(n-1)2^{n-2}$ separately count the cases where the roles do and don't have the same recipient; simplifying this sum gives the right-hand side.
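A quick numerical check of the identity for small $n$ (not a substitute for the combinatorial argument):

```python
from math import comb

for n in range(2, 10):
    assert sum(comb(n, k) * k * k for k in range(n + 1)) == n * (n + 1) * 2 ** (n - 2)
print("identity holds for n = 2..9")
```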
Prove that prime ideals of a finite ring are maximal
Let $\mathfrak{p}$ be a prime ideal in $R$. Then $R/\mathfrak{p}$ is a finite integral domain, thus it is a field, hence $\mathfrak{p}$ is maximal.
How do I compute x^(-2) mod m?
$\rm\: a\equiv b^{-1} \Rightarrow\ a^2 \equiv b^{-2}\: $ via congruences are preserved by multiplication (so too by squaring). If you don't yet know that property then you can instead prove the sought result as follows: $$\rm a\equiv b^{-1}\ \:(mod\ m)\ \Rightarrow\ m\:|\: ab-1\ \Rightarrow\ m|(ab-1)(ab+1)\ =\ a^2 b^2 - 1\ \Rightarrow\ a^2 \equiv b^{-2}\ \:(mod\ m)$$ For completeness, here is the proof that congruences are preserved by multiplication: LEMMA $\rm\ \ A\equiv a,\ B\equiv b\ \Rightarrow\ AB\equiv ab\ \:(mod\ m)$ Proof $\rm\ \ m\: |\: A-a,\:\:\ B-b\ \Rightarrow\ m\ |\ (A-a)\ B + a\ (B-b)\ =\ AB - ab $ This congruence product rule is at the heart of many other similar product rules, for example Leibniz's product rule for derivatives in calculus, e.g. see my post here.
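Incidentally, if you just want to compute the value: in recent Python versions (3.8+) the built-in pow accepts negative exponents together with a modulus for invertible bases, which makes the consistency above easy to check numerically (the numbers below are arbitrary):

```python
b, m = 7, 31
# square of the inverse == inverse of the square == x^(-2) mod m
assert pow(pow(b, -1, m), 2, m) == pow(pow(b, 2, m), -1, m) == pow(b, -2, m)
print(pow(b, -2, m))   # 19
```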
Meta-Pythagorean Triple
This is a lot easier than it looks like. It turns out that the answer is "all of them". Actually, every single number $n$ greater than $2$ appears as the leg of some pythagorean triangle. If $n$ is not a power of two, the following way works: First, assume $n$ is odd. Then $$\left( n,\frac{n^2 - 1}{2}, \frac{n^2 + 1}{2}\right)\quad \text{or} \quad \left(\frac{n^2 - 1}{2}, n, \frac{n^2 + 1}{2}\right) $$ is a Pythagorean triple. If $n$ is even, say $n = 2^km$ with $m\geq 3$ odd, then use the above on $m$, and scale the pythagorean triple you get by $2^k$. If $n > 2$ is a power of $2$, use $(3, 4, 5)$ with appropriate scaling.
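A small sketch of this construction (the function name is mine):

```python
def triple_containing(n):
    """A Pythagorean triple having n (> 2) as one of its legs, built as above."""
    if n <= 2:
        raise ValueError("1 and 2 are not legs of any Pythagorean triple")
    m, scale = n, 1
    while m % 2 == 0:            # write n = 2^k * m with m odd
        m //= 2
        scale *= 2
    if m == 1:                    # n is a power of two: scale (3, 4, 5) by n/4
        return (3 * scale // 4, scale, 5 * scale // 4)
    # odd part m >= 3: take (m, (m^2-1)/2, (m^2+1)/2) and scale by 2^k
    return (m * scale, (m * m - 1) // 2 * scale, (m * m + 1) // 2 * scale)

print(triple_containing(9), triple_containing(12), triple_containing(8))
# (9, 40, 41) (12, 16, 20) (6, 8, 10)
```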
Arbitrary continuous function expressed by purely discontinuous composition
You're so close! We can define $$ f(x) = \begin{cases} h(x)-1, & \text{if } h(x) \in \mathbb{Q}, \\ h(x), & \text{if } h(x) \notin \mathbb{Q} \end{cases} \quad\text{and}\quad g(x) = \begin{cases} x+1, & \text{if }x \in \mathbb{Q}, \\ x, & \text{if }x \notin \mathbb{Q} \end{cases} $$ and then check that $h=g\circ f$ as opposed to $f\circ g$. [PS: Check the LaTeX source for this answer for the {cases} environment—simpler than {array}.]
How would I compute this integral with a ceiling function?
$$ \begin{align} \int\lceil x+2\rceil\log(x)\,\mathrm{d}x &=\int\lceil x+2\rceil\,\mathrm{d}(x\log(x)-x)\\ &=\lceil x+2\rceil(x\log(x)-x)-\int(x\log(x)-x)\,\mathrm{d}\lceil x+2\rceil\\ &=\lceil x+2\rceil(x\log(x)-x)-\sum_{k=1}^{\lceil x-1\rceil}(k\log(k)-k)+C \end{align} $$ $$ \begin{align} \int\lceil x\rceil\,\mathrm{d}x &=x\lceil x\rceil-\int x\,\mathrm{d}\lceil x\rceil\\ &=x\lceil x\rceil-\sum_{k=0}^{\lceil x-1\rceil}k+C\\ &=x\lceil x\rceil-\frac{\lceil x\rceil^2-\lceil x\rceil}2+C \end{align} $$
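A rough numerical check of the second antiderivative, comparing it with a midpoint Riemann sum starting at $0$ (where the formula vanishes); the sample point $x=2.5$ is an arbitrary choice:

```python
import math

x, N = 2.5, 10**5
formula = x * math.ceil(x) - (math.ceil(x) ** 2 - math.ceil(x)) / 2
riemann = sum(math.ceil((k + 0.5) * x / N) * x / N for k in range(N))
print(formula, riemann)   # both 4.5 (up to floating-point error)
```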
Second derivative of convex function
Here is another approach using mollifiers, adapted from Evans and Gariepy "Measure theory and fine properties of functions." Let $\eta_\epsilon$ be a mollifier (smooth approximation to the identity) and set $f_\epsilon=f*\eta_\epsilon$. Then $f_\epsilon$ is smooth and convex (convexity is reasonably trivial to check). Thus for all $\varphi\in\mathcal{D}(0,\infty)$ with $\varphi\geq 0$, $$ \langle f^{\prime\prime}_\epsilon,\varphi\rangle\geq 0 $$ Integrate by parts: $$ \langle f^{\prime\prime}_\epsilon,\varphi\rangle=\langle f_\epsilon,\varphi^{\prime\prime}\rangle\geq 0 $$ Now let $\epsilon\searrow 0$; $f_\epsilon(x)\rightarrow f(x)$, and hence $\langle f,\varphi^{\prime\prime}\rangle\geq 0$ Hopefully I didn't overlook any tricky details here.
Does proportionality include addition? ie. if A + B = C are A & B inversely proportional?
$A$ and $B$ are proportional if $kA = B$ for some constant $k$. $A$ and $B$ are inversely proportional if $AB = k$ for some constant $k$.
$R$ is an equivalence relation on $\mathbb{R}$ and that the quotient space $\mathbb{R}/R$ is indiscrete.
Hint: Let $O \subset \mathbb{R}$. The image of $O$ in $\mathbb{R}/R$ is open iff $O+ \mathbb{Q}$ is open in $\mathbb{R}$. Notice that $\mathbb{Q}$ is dense in $\mathbb{R}$.
Simple proof for sampling without replacement concentration.
The following is a theorem proved by Hoeffding in Probability Inequalities for Sums of Bounded Random Variables, published in 1963: $$ \mathbb E f\left( \sum_{i = 1}^n X_i \right) \le \mathbb E f\left( \sum_{i = 1}^n Y_i \right) $$ where the $X_i$ are sampled uniformly at random without replacement, the $Y_i$ are sampled uniformly at random with replacement, and $f$ is convex and continuous. This implies that concentration results for sampling with replacement obtained using Chernoff-bound-type methods (bounding the moment generating function + Markov's inequality) can be transferred to the case of sampling without replacement.
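To spell out the transfer (this step is only implicit above): for any $\lambda\in\mathbb R$ the function $s\mapsto e^{\lambda s}$ is convex and continuous, so Hoeffding's theorem gives $$\mathbb E\, e^{\lambda \sum_{i=1}^n X_i}\ \le\ \mathbb E\, e^{\lambda \sum_{i=1}^n Y_i},$$ and therefore any Chernoff bound derived from the moment generating function of $\sum_i Y_i$ via Markov's inequality applies verbatim to $\sum_i X_i$.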
Does retracting commute with taking quotients?
Let's follow what happens when we take a deformation retraction of $D^2$ to an internal point $p$, and attempt to use it to induce a deformation retraction of the quotient space $P^2(\mathbb R)$. Let's consider the quotient map $f : D^2 \to P^2(\mathbb R)$. Under that quotient map, for each $q \in \partial D^2$ its opposite boundary point $-q \in \partial D^2$ has the same image $f(q)=f(-q)$, a point of $P^2(\mathbb R)$ that I will denote $[q]$. Thus, the quotient map $f$ is two-to-one on $\partial D^2$, and is otherwise one-to-one. Now, the function $f : D^2 \to P^2(\mathbb R)$ is indeed homotopic to, let's say, the constant map with value $p = f(0,0)$: simply use the homotopy $h((x_1,x_2),t) = f((1-t)x_1,(1-t)x_2)$. But if you attempted to use this formula to define a homotopy from $P^2(\mathbb R)$, then for each $q \in \partial D^2$ you would be faced with an agonizing choice: Does the point $[q]=[-q] \in P^2(\mathbb R)$ follow the homotopy path $f((1-t)q_1,(1-t)q_2)$? Or does it instead follow the homotopy path $f((1-t)(-q_1),(1-t)(-q_2))$? If these homotopy paths were the same, then there would be no trouble (and that's most likely what is going on in the examples you allude to in your opening sentence). BUT, in this example those two paths are not the same. So our attempt to use the given information, i.e. to use the function $f$ and the homotopy $h$, in a well-defined and continuous fashion to build a deformation retraction from $P^2(\mathbb R)$ to a point, has broken down.
Given $n$ cards placed on a round table in upside down fashion, find the minimum operations to make them face upside up?
$1$ is placed between $n$ and $2$, so I assume there are at least 3 cards. I think $n$ operations are needed when $n \equiv 1 \mod 3$ or $n \equiv 2 \mod 3$. It is never a good choice to pick the same card twice: you would just change the orientation of the card and the two adjacent cards two more times, and nothing happens. From every three subsequent cards at least one must be picked, otherwise the middle one will definitely stay face down. There are $n$ such triples of consecutive cards. If exactly one card were picked from each triple, then counting over all $n$ triples we would count $n$ picks; but every card lies in three triples, so the number of picked cards would be $n/3$, which is not an integer - a contradiction! So there must be a triple in which more than one card was picked. The middle card of that triple must be flipped an odd number of times, hence all three cards of the triple are picked. We move the triple along by one card at a time. Only one or three cards of a triple can be picked (each pick within the triple flips its middle card, which must be flipped an odd number of times), so by induction we find that all cards on the table must be picked exactly once.
How do you write the following statements in predicate form?
There are many ways of writing these statements using first order logic. As said by @JavaMan and @AndréNicolas, you can use the logical equivalence between the quantifiers. There is no greater real number. I assume that, as you're using "greater", there must be some number, let's say $n$, that no real number can be greater than. $G(x,y)$ means $x$ is greater than $y$, and assume $x,y,n \in \mathbb{R}$: $\forall x ~\neg G(x,n)$, or equivalently $\neg \exists x ~G(x,n)$. There is no positive integer that is greater than any other positive integer. $P(x)$ means $x$ is positive; in this case $x,y \in \mathbb{Z}$: $\neg \exists x ~\forall y~\big(P(x) \wedge G(x,y)\big)$. I'll let you do the other ones. Hope it was helpful.
How to profile people using clustering
It seems to me that you are in danger of misusing cluster analysis, which is a technique to discover structure in data, not to describe a structure that one merely assumes to exist. To see what I mean, ask yourself the following questions, for example: Why are you clustering into 3 groups, and not 4 or 9? When plotting the data (luckily there are only 3 attributes), do you see any groups? How many? How can these groups roughly be described using the values of the attributes? KMeans uses the Euclidean distance as a measure of similarity of samples; is this the appropriate measure in your case?
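If it helps, here is a minimal sketch (my addition) of how one might probe those questions in practice, assuming scikit-learn is available. The array `X` below is a random placeholder; everything about it (its shape, the standardisation, the range of $k$ tried) is an assumption made purely for illustration.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X = np.random.default_rng(1).normal(size=(300, 3))   # placeholder for the 3-attribute data
Xs = StandardScaler().fit_transform(X)                # so Euclidean distance treats attributes comparably

for k in range(2, 10):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xs)
    print(k, round(silhouette_score(Xs, labels), 3))
# A clear maximum of the silhouette score is weak evidence for that k;
# uniformly poor scores suggest there may be no cluster structure at all.
```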
Is the natural exponential function defined as being its own derivative?
Grégoire de Saint-Vincent and Alphonse Antonio de Sarasa, around 1649, studied the question of how to compute areas under the hyperbola $xy=1$, that is, under the curve $y=1/x$. The case $y=1/x^n$, $n > 1$, was simpler and had been solved by Cavalieri earlier, but a new function had to be defined for the case $n=1$: they introduced this notion of a "hyperbolic logarithm". Euler later introduced $e$ as the constant giving area $1$, in a letter to Goldbach (1731). The limit expression $\lim_{n\to\infty} (1+\frac{1}{n})^n$ was introduced by Bernoulli even earlier than this, and I'm not entirely sure when the notions were found to coincide. Source: https://en.wikipedia.org/wiki/E_(mathematical_constant)#History
Algebraic Extensions and Separability
Ok let us see how to prove one direction, which I did not post in my question in that link. We would like to prove that if $\alpha$ is not separable over $k$, in other words if the minimal polynomial of $\alpha$ over $k$ is not separable, then $k(\alpha^p) \subsetneqq k(\alpha)$. This is tantamount to proving that $\Big[k(\alpha) : k(\alpha^p)\Big] > 1$. Now $$\Big[k(\alpha) : k(\alpha^p)\Big] = \frac{\Big[k(\alpha):k\Big]}{\Big[k(\alpha^p):k \Big]}. $$ The numerator on the right hand side is the degree of $f$, the minimal polynomial of $\alpha$ over $k$, while the denominator is the degree of $g$, the minimal polynomial of $\alpha^p$ over $k$. Since $f$ is not separable, it is a polynomial in $x^p$, say $f(x) = h(x^p)$. Then $h(\alpha^p) = f(\alpha) = 0$, so $g$ must divide $h$, and hence $\deg g \le \deg h = \tfrac1p \deg f < \deg f$. It follows from here that $$\Big[k(\alpha):k(\alpha^p)\Big] > 1.$$ Can you prove the other direction? Now for the last part of the problem, proving that every finite extension of a finite field is separable: we want that given any $\alpha$ in our extension $K$, the minimal polynomial of $\alpha$ over $k$ is separable. By your problem above, if we can show that $$k(\alpha) = k(\alpha^p)$$ then we are done. Now one inclusion is already clear. For the other, we want to show that $k(\alpha) \subseteq k(\alpha^p)$. Suppose that $[K:\Bbb{Z}/p\Bbb{Z}] = n$. This is finite in the first place because $K/k$ is a finite extension and $k/\Big(\Bbb{Z}/p\Bbb{Z}\Big)$ is a finite extension too. Then $K$ has $p^n$ elements, so its multiplicative group has $p^n-1$ elements. It follows by Lagrange's Theorem that $$\alpha^{p^n - 1} = 1$$ for every nonzero $\alpha$, so that for all $\alpha \in K$ we have $\alpha^{p^n} = \alpha$ (the case $\alpha=0$ being trivial). But then the left hand side can be written as $(\alpha^p)^{p^{n-1}}$, showing that $\alpha \in k(\alpha^p)$. Hence $k(\alpha) = k(\alpha^p)$ and the result follows.
Series from $1,2,3,4$ that sum to $n$
HINT: Consider a bijection: a sum $a_1+a_2+\ldots +a_s=n$ corresponds to choosing (and then multiplying together) the term $x^{a_k}$ from the $k$th parenthesis of the $s$th term on the right-hand side, $$(x+x^2+x^3+x^4)\cdot (x+x^2+x^3+x^4)\cdot\ldots\cdot(x+x^2+x^3+x^4).$$ How can you now argue that the desired equality holds?
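As a numerical sanity check of the hint (my addition, not needed for the proof), one can compare a direct enumeration of the sums with the coefficient of $x^n$ extracted from the powers of $x+x^2+x^3+x^4$:

```python
from itertools import product

def count_direct(n):
    # number of sequences (a_1, ..., a_s) with entries in {1,2,3,4} summing to n
    return sum(sum(1 for seq in product((1, 2, 3, 4), repeat=s) if sum(seq) == n)
               for s in range(1, n + 1))

def count_by_coeff(n):
    # coefficient of x^n in sum over s of (x + x^2 + x^3 + x^4)^s
    total = 0
    poly = [1] + [0] * n                     # the 0-th power
    for _ in range(n):                       # each factor has degree >= 1
        new = [0] * (n + 1)
        for k, c in enumerate(poly):
            for part in (1, 2, 3, 4):
                if k + part <= n:
                    new[k + part] += c
        poly = new
        total += poly[n]
    return total

print(all(count_direct(n) == count_by_coeff(n) for n in range(1, 10)))   # True
```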
How do I reverse the smooth-step equation?
Notice that you cannot always do that,in this case you can;t. as it is not well defined. Assume that you can do that, and you can write $x=f(y)$. For $y=0$, correspond the values $x=0$ and $x=\frac{3}{2}$, so it's not well defined!
The vanishing of the $\bf B$s
Let $n > 1$ be an odd number. Then $$\sum_{k=0}^m\binom {m+1}k B_k=0 \qquad (m=1,\dots,n)$$ gives us $n$ linear equations: $$\begin{align} \sum\limits_{k = 1}^n \binom{n+1}k{B_k} &= - 1 \cr \sum\limits_{k = 1}^{n - 1} \binom nk {B_k} &= - 1 \cr \sum\limits_{k = 1}^{n - 2} \binom{n-1}k {B_k} &= - 1 \cr \cdots &= \cdots \cr \sum\limits_{k = 1}^2 \binom 3k {B_k} &= - 1\cr \sum\limits_{k = 1}^1 \binom 2k {B_k} &= - 1 \end{align} $$ (Courtesy of robjohn for the matrix formulation.) In matrix form this system of linear equations reads $$\textstyle \begin{bmatrix} \binom{n+1}{n}&\cdots&\binom{n+1}{3}&\binom{n+1}{2}&\binom{n+1}{1}\\ \vdots&\ddots&\vdots&\vdots&\vdots\\ 0&\cdots&\binom43&\binom42&\binom41\\ 0&\cdots&0&\binom32&\binom31\\ 0&\cdots&0&0&\binom21 \end{bmatrix} \begin{bmatrix} \vphantom{\binom11}B_n\\ \vdots\\ \vphantom{\binom11}B_3\\ \vphantom{\binom11}B_2\\ \vphantom{\binom11}B_1 \end{bmatrix} = \begin{bmatrix} \vphantom{\binom11}-1\\ \vdots\\ \vphantom{\binom11}-1\\ \vphantom{\binom11}-1\\ \vphantom{\binom11}-1 \end{bmatrix}$$ Now if one applies Cramer's rule to solve for $B_n$, one gets $B_n=\frac{\det A_n}{\det D_n}$, where $D_n$ is the matrix above and $A_n$ is the same as $D_n$ but with its first column replaced by $(-1, -1,\dots,-1)^t$. Elementary row operations then give $\det A_n =0$. I think $(1)$ would be the easier route to calculate $B_n$ as a determinant; however, $(2)$ also gives a matrix of the same type. If you just check, you will see that after eliminating the $-1$'s in the first column, $A_n$ automatically becomes an upper triangular matrix with one diagonal entry equal to $0$, so the determinant vanishes.
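A quick way to convince yourself of the statement (my addition) is to compute the $B_k$ exactly from the recurrence $\sum_{k=0}^{n}\binom{n+1}{k}B_k=0$ with exact rational arithmetic and observe that every odd index $\ge 3$ gives $0$:

```python
from fractions import Fraction
from math import comb

B = [Fraction(1)]                               # B_0 = 1
for n in range(1, 20):
    s = sum(comb(n + 1, k) * B[k] for k in range(n))
    B.append(-s / (n + 1))                      # solve for B_n, using C(n+1, n) = n + 1
print([(n, B[n]) for n in range(3, 20, 2)])     # every odd index >= 3 is 0
```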
Probability of Combinations which have different probabilities.
This is $1$ minus the probability that after $N$ trials, one or more of the colours is missing. So let us find the probability that some colour is missing, using the Method of Inclusion/Exclusion. The probability red is missing is $\left(\frac{14}{15}\right)^N$. We can write down similar expressions for the probability green is missing, blue is missing, white is missing. If we add up these $4$ probabilities, we will have double-counted the situations in which $2$ of the colours are missing. So we must subtract the probability that red and green are both missing, together with the $5$ other similar expressions. The probability that red and green are both missing is, for example, $\left(\frac{12}{15}\right)^N$. But now we have subtracted too much, for we have subtracted once too often the situations in which $3$ colours are missing. So we must add back the $4$ terms that represent these probabilities.
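If you want the whole Inclusion/Exclusion sum written out, here is a small sketch (my addition). The colour composition used below (1 red, 2 green, 4 blue, 8 white out of 15) is only an assumption chosen to match the sample probabilities quoted above; substitute the real counts.

```python
from itertools import combinations

counts = {"red": 1, "green": 2, "blue": 4, "white": 8}   # assumed composition
total = sum(counts.values())

def p_all_colours(N):
    # P(every colour appears in N draws) = 1 - P(some colour is missing)
    p_missing = 0.0
    colours = list(counts)
    for r in range(1, len(colours) + 1):
        for subset in combinations(colours, r):
            remaining = total - sum(counts[c] for c in subset)
            p_missing += (-1) ** (r + 1) * (remaining / total) ** N
    return 1 - p_missing

print(p_all_colours(10))
```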
Rank of a product of matrices compared to the individual matrices.
$\operatorname{rank}(A)$ is the dimension of the column space of $A$. The product $Ab$, where $b$ is any column vector, is a column vector that lies in the column space of $A$. Therefore all columns of $AB$ lie in the column space of $A$, and hence $\operatorname{rank}(AB) \le \operatorname{rank}(A)$.
Is the functional $F(u) = \int_{\Omega} \langle (A_1(x)\chi_{\{u>0\}}+A_2(x)\chi_{\{u\le0\}}) \nabla u, \nabla u \rangle$ convex?
No. Here is a one-dimensional counterexample, but you can adapt the idea to higher dimensions if you want. Let $\Omega=(0,20)$, $A_1=10$ and $A_2=1$. Define functions $u$ and $v$ by $$u'=\chi_{[0,1]}-\chi_{[19,20]}\quad \text{ and }\quad v'=\sum_{k=1}^9 (-\chi_{[2k-1,2k]}+\chi_{[2k,2k+1]})$$ Both vanish on the boundary of $\Omega$. Since $u\ge 0$ and $v\le 0$, we have $F(u)=20$ and $F(v)=18$. The average $w=(u+v)/2$ is nonnegative because $v\ge -1$ everywhere and $u=1$ on the support of $v$. Since $|w'|\equiv 1/2$, it follows that $F(w)=\int_0^{20}10\cdot\frac14=50$, which is greater than $\frac{F(u)+F(v)}{2}=19$, so $F$ is not convex.
Uniform convergence in $L^p$-spaces
For the integral $\int_0^1 f(x)\frac{\sin xy}{x}\,dx$, use $|\sin xy|\le xy$, canceling $x$ in the denominator. For the integral $\int_1^\infty f(x)\frac{\sin xy}{x}\,dx$, use $|\sin xy|\le 1$ and then Hölder's inequality: $$ \int_1^\infty f(x)\frac{1}{x}\,dx \le \|f\|_{L^p} \left(\int_1^\infty \frac{1}{x^q}\,dx\right)^{1/q} $$ where $q$ is the conjugate exponent. For the last part, estimate $\left|\sin [x(y+t)]-\sin xy\right|$ by $\min(tx,2)$. Using Hölder's inequality again, you end up estimating the $L^q$ norm of $\min(tx,2)/x$. This is of order $t^{1/p}$.
Trouble Understanding the Basis for TpM
The $\partial / \partial x^j$ form a basis for the vector space $T_pM$. Declaring that they form an orthonormal basis is a choice of metric on the manifold, turning a smooth manifold into a Riemannian manifold. So you can't prove this from the definition; it does not need to be true. If you only have a smooth manifold you can't compute the inner product of two tangent vectors at a point. You need the additional structure of a Riemannian manifold for that, which is precisely what defines the inner product on each tangent space.
What is the purpose of introducing the notion of points of value?
Of course I can't read Bosch's mind but his notation might be preparation for a more general situation, namely: Let $I\subset R[T_1,\cdots,T_n]$ be an ideal and define for any $R$-algebra $R'$ the subset $V_I(R')\subset \mathbb A^n_{R}(R')=(R')^n$ by the requirement $$V_I(R')=\{(r'_1,\cdots,r'_n)\in (R')^n\vert (\forall f\in I) \; f(r'_1,\cdots,r'_n)=0 \}\subset \mathbb A^n_R(R') $$ Bosch has thus introduced a functor $$ \mathbb A^n_R :\operatorname {\mathcal {Alg}}_R\to \operatorname {\mathcal {Sets}}:R'\mapsto (R')^n$$ and the above is a subfunctor $$ V_I :\operatorname {\mathcal {Alg}}_R\to \operatorname {\mathcal {Sets}}:R'\mapsto V_I(R')$$ for which we write $V_I \subset \mathbb A^n_R$. A fundamental point is that $V_I$ is representable by the $R-$algebra $A:=R[T_1,\cdots,T_n]/I$, which means that we have functorial bijections $$Hom_{R-Alg}(A,R')\to V_I(R'):\varphi\mapsto (\varphi(\overline {T_1}),\cdots, \varphi(\overline {T_n})) $$ This opens the way to an even more general notion of points $ X(T)=Hom_{Schemes}(T,X)$ of an arbitrary, non-affine, scheme $X$ with values in another arbitrary scheme $T$. All this is admirably explained in much detail in these notes (in French) by Ducros.
Functions of random variables $Y=3X^{4}$
Since it is over an interval on which the transformation is one-to-one, what you want to do is $$f_Y(y) = \frac{f_X(\sqrt[4]{y/3})}{\left|\left.\frac{dy}{dx}\right|_{\sqrt[4]{y/3}}\right|} = f_X\left(\sqrt[4]{y/3}\right)\left|\left.\frac{dx}{dy}\right|_{\sqrt[4]{y/3}}\right|.$$ It looks strange, but I wrote it like that for completeness. In other words, yes, you should be plugging in $x = \sqrt[4]{y/3}$, since your density should be in terms of $y$, not $x$. So, $$f_Y(y) = \frac{f_X(\sqrt[4]{y/3})}{\left|\left.\frac{dy}{dx}\right|_{\sqrt[4]{y/3}}\right|} = \frac{\frac{1}{8}\left(1+\sqrt[4]{y/3}\right)}{\left|12\,(y/3)^{3/4}\right|}.$$
What can I say about the function $f$?
The issue of whether $f$ is onto does not depend on the metric assigned. Clearly $f$ maps onto $[0,1)$, because any real number in $[0,1)$ can be represented as a (possibly unending) sequence of bits, and that sequence is a member of $X$. So the onto issue comes down to whether there is any $x \in X$ such that $f(x) = 1$. And indeed the member $(x_i)_{i \geq 1}$ with $x_i = 1$ for all $i$ has the property that $f(x) = \sum_{i \geq 1} 2^{-i} = 1$. Therefore, $f$ is onto. The issue of whether $f$ is open becomes the question of whether the image under $f$ of every set that is open in the peculiar metric $d$ is an open set in the familiar metric. The latter open sets are the ordinary familiar open sets within $(0,1)$.
How Do I Parameterize This Line Segment?
You have to do three separate integrals and add them up. If you want to start at $(-1,2,-2)$ and move firstly parallel to the $z$ axis, you will need $$(x,y,z)=(-1,2,t)\quad\hbox{with $t$ going from $-2$ to $2$.}$$ I'm sure you can do the others yourself.
Probability that the sum of three (or any number) of independent but not identical exponential RV is greater than a number?
If anyone is interested, I found out how to do it: you need to use a convolution of exponential random variables. Formula
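For completeness, here is a hedged sketch of the result in the special case where all the rates are distinct (the general case needs a slightly different formula): for independent $X_i \sim \text{Exp}(\lambda_i)$ with pairwise distinct $\lambda_i$, the convolution gives $P(X_1+\dots+X_n > t) = \sum_i \big(\prod_{j\ne i} \frac{\lambda_j}{\lambda_j-\lambda_i}\big) e^{-\lambda_i t}$. The rates and threshold below are arbitrary, and a Monte Carlo estimate is included as a check.

```python
import numpy as np

def tail(rates, t):
    # P(sum of independent exponentials with these (distinct) rates exceeds t)
    total = 0.0
    for i, ri in enumerate(rates):
        coef = 1.0
        for j, rj in enumerate(rates):
            if j != i:
                coef *= rj / (rj - ri)
        total += coef * np.exp(-ri * t)
    return total

rates, t = [0.5, 1.0, 2.0], 3.0
rng = np.random.default_rng(0)
samples = sum(rng.exponential(1.0 / r, size=1_000_000) for r in rates)
print(tail(rates, t), (samples > t).mean())   # the two numbers should agree closely
```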
Understanding why a binary operator is associative. ( On a property of "fractional part of a sum" operation)
We have that $(x +_1 y) +_1 z = (x+y+z) - (p+q)$ and that $x +_1 (y +_1 z) = (x+y+z) - (r+s)$. If we take the floor of both of these expressions both of the left hand sides will be equal to zero so we get. $$0 = \left \lfloor (x+y+z) - (p+q) \right \rfloor = \left \lfloor (x+y+z) - (r+s) \right \rfloor$$ Because $p+q$ and $r+s$ are integers, they can be taken out of the floor function to give $$0 = \left \lfloor x+y+z \right \rfloor - (p+q) = \left \lfloor x+y+z \right \rfloor - (r+s).$$ Ignoring the zero at the start and just focusing on the second part of the equality we can subtract $\left \lfloor x+y+z \right \rfloor$. This gives us $$ -(p+q) = -(r+s) $$ from which the result easily follows.
CDF of the product of $n$ i.i.d random variables given by $f(x)=\frac1{a+1} x^a$
Perhaps the easiest way of doing this is to show that $Y_i = \log X_i$ is an exponential random variable, or rather the negative of an exponential random variable. One way to get this is to calculate the density given by $$ \begin{align*} f_{Y}(y) &= (a+1)\left( e^y \right)^a \left| \frac{d e^{y}}{d y} \right| \\ &= (a+1) e^{(a+1)y}, \end{align*} $$ and noting that $y \leq 0$. You can now use the fact that the sum of $n$ independent exponential random variables with common rate parameter $\lambda$ is a Gamma$(n, \lambda^{-1})$ random variable (using the shape/scale parameterisation). In particular let $$ S = \sum_{i=1}^{n} \log X_i, \qquad -S \sim \mbox{Gamma}(n,1/(a+1)) $$ and so, for $0 < z \le 1$, we get $$ \begin{align*} \mathbb{P}\left[ \prod_{i=1}^n X_i \leq z \right] &= \mathbb{P}\left[\sum \log X_i \leq \log z \right] \\ &= 1 - \frac{1}{\Gamma(n)}\gamma\left(n,-(a+1)\log z\right), \end{align*} $$ where $\gamma$ is the usual lower incomplete Gamma function.
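As a numerical sanity check of the displayed formula (my addition), one can compare it with a quick simulation. SciPy's `gammainc` is the regularized lower incomplete gamma $\gamma(n,x)/\Gamma(n)$, and $X_i$ can be drawn by inverse CDF, since $U^{1/(a+1)}$ has density $(a+1)x^a$ on $[0,1]$; the parameter values below are arbitrary.

```python
import numpy as np
from scipy.special import gammainc       # regularized lower incomplete gamma

a, n, z = 2.0, 4, 0.3
rng = np.random.default_rng(0)
X = rng.random((1_000_000, n)) ** (1.0 / (a + 1))   # inverse-CDF sampling
mc = (X.prod(axis=1) <= z).mean()
exact = 1.0 - gammainc(n, -(a + 1) * np.log(z))
print(mc, exact)                          # should agree to a few decimal places
```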
If $c$ is equal to $\binom{99}{0}-\binom{99}{2}+\binom{99}{4}-\binom{99}{6}+\cdots+\binom{99}{96}-\binom{99}{98},$ then find $\log_2{(-c)}$.
$$\sum_{99 \geq k \geq 0, \text{even}} i^k {99 \choose k}$$ $$=\sum_{99 \geq k \geq 0} \frac{(-1)^k+1^k}{2}i ^k{99 \choose k}$$ $$=\frac{1}{2} \sum_{k=0}^{99} i^k {99 \choose k}+\frac{1}{2} \sum_{k=0}^{99} (-i)^k {99 \choose k}$$ $$=\frac{1}{2}(1+i)^{99}+\frac{1}{2}(1-i)^{99}$$ $$=\frac{1}{2}(\sqrt{2})^{99}e^{99 \frac{\pi}{4}i }+\frac{1}{2}(\sqrt{2})^{99} e^{-99 \frac{\pi}{4}i}$$ $$=(\sqrt{2})^{99} \frac{e^{99 \frac{\pi}{4}i} +e^{-99\frac{\pi}{4}i}}{2}$$ $$=\left((\sqrt{2})^{99}\right)\left(\cos (\frac{99}{4}\pi) \right)$$ $$=(2^{49})(\sqrt{2})(\frac{-1}{\sqrt{2}})$$ $$=-2^{49}$$
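A one-line numerical check (my addition) that the alternating sum really equals $-2^{49}$:

```python
from math import comb

c = sum((-1) ** (k // 2) * comb(99, k) for k in range(0, 99, 2))
print(c, c == -2 ** 49)   # confirms that log_2(-c) = 49
```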
Dense subset in one topology but not in another topology
Yes. Hint: There is a simple metric in which no proper subset can be dense, because every singleton is an open set.
Showing dyadic intervals are either disjoint or contained in one another
HINT: There’s no need to get fancy. Show that if $m=k$ and $n\ne j$, then $I_{n,m}\cap I_{j,k}=\varnothing$. Then assume without loss of generality that $m<k$. You’re now looking at the intervals $$I_{n,m}=\left[\frac{n}{2^m},\frac{n+1}{2^m}\right)$$ and $$I_{j,k}=\left[\frac{j}{2^k},\frac{j+1}{2^k}\right)\;,$$ where $m<k$. Let $\ell=k-m$; then $$I_{n,m}=\left[\frac{n}{2^m},\frac{n+1}{2^m}\right)=\left[\frac{n2^\ell}{2^k},\frac{n2^\ell+2^\ell}{2^k}\right)\;.$$ Investigate what happens when $j<n2^\ell$, $n2^\ell\le j<n2^\ell+2^\ell$, and $n2^\ell+2^\ell\le j$.
find a topology where the sequence $\left(\frac1n\right)$ converges to $1$
Paint a gigantic "$0$" symbol on $1$, and paint a gigantic "$1$" symbol on $0$. Now use the standard topology, except use the symbols you painted on $0$ and $1$ instead of the usual meanings of $0$ and $1$. To put this another way, let $f : \mathbb{R} \to \mathbb{R}$ be the function defined by $f(0)=1$, $f(1)=0$, and $f(x)=x$ if $x \ne 0,1$. Define $U \subset \mathbb{R}$ to be open in the topology $T$ if and only if $f(U)$ is open in the standard topology.
Questions on homothetical equilateral triangles
The first part is true as $\angle CDZ = 90^{\circ} = \angle ZGH$ and also $\angle CZD = \angle GZH$. For the second part note that the length of a median of an equilateral triangle of side $a$ is $\frac{\sqrt{3}}{2}a$. Therefore we have that $AG = \frac{\sqrt{3}}{2} \cdot 2\sqrt{3} = 3$. Now the centroid divides the median in ratio $2:1$, so therefore we have that $AZ = 2$. Finally: $$DZ = AZ - AD = 2 - \sqrt{3}$$ For the final part we have, using the first and second parts, that: $$\frac{GH}{GZ} = \frac{CD}{DZ} \implies GH = \frac1{DZ} = \frac1{2 - \sqrt{3}} = 2 + \sqrt{3}$$ Now $EH = GH - GE = 2 + \sqrt{3} - \sqrt{3} = 2 = AB$
Proof of consecutive integers
Let both the $a$'s and $b$'s be in increasing order. Then for all $i, a_i-b_i=a_1-b_1$ and the difference between the sums is $k(a_1-b_1)$ which cannot be $0$.
Problem about an ellipse and its eccentricity
This is not the answer you want, but it must be C. Suppose $a$ is the semi-major axis length. For $\theta=\pi/2$, the two points of intersection with the circle are $(\pm\sqrt{a^2-b^2},b)$. Your condition then says the dot product of these two points is $0$, i.e. $-(a^2-b^2)+b^2=0$, so $a^2=2b^2$. From this it follows easily that the eccentricity is $1/\sqrt{2}$. Conversely, for this eccentricity we have $a^2=2b^2$, so the dot product of those two intersection points is $0$ and the condition is satisfied at $\theta=\pi/2$.
Understanding Borel sets
To try and motivate the technical answers, I'm ploughing through this stuff myself, so, people, do correct me: Imagine Arnold Schwarzenegger's height was recorded to infinite precision. Would you prefer to try and guess Arnie's exact height, or some interval containing it? But what if there was a website for this game, which provided some pre-defined intervals? That could be quite annoying, if say, the bands offered were $[0,1m)$ and $[1m,\infty)$. I suspect most of us could improve on those. Wouldn't it be better to be able to choose an arbitrary interval? That's what the Borel $\sigma$-algebra offers: a choice of all the possible intervals you might need or want. It would make for a seriously (infinitely) long drop down menu, but it's conceptually equivalent: all the members are predefined. But you still get the convenience of choosing an arbitrary interval. The Borel sets just function as the building blocks for the menu that is the Borel $\sigma$-algebra.
Prove that $f$ is continuous, $f'$ is bounded...
Yes, you're basically right with a. and b., though perhaps you should note that b. works by the definition of the derivative at $0$ rather than some ad hoc reason, and that the condition you need is just $x^{\alpha-1}\to \text{const}$, not $0$. For the next two parts, you will want to actually compute the derivative away from $x=0$ and consider what happens as you approach the origin. The first part just requires the use of the product rule and so on to get the most singular behaviour; then you need to check whether your derivative at the origin fits in smoothly.
Find a second suitable matrix for equation
Hint - $Q = A^{-1}D$. You already found $D$; now find the inverse of $A$ and multiply them. You will get $Q$.
51 Dalmatians grouping
Given $12$ dalmatians, write their numbers of dots as $x_1,x_2,\cdots,x_{12}$. Then two of $0,x_1,x_1+x_2,x_1+x_2+x_3,\cdots, x_1+x_2+\cdots + x_{11}$ are congruent modulo $11$. But that means $x_{n+1}+x_{n+2}+\cdots +x_{m}$ must be divisible by $11$ for some $n<m$. We need $12$ so that the "other" group always contains at least one dalmatian, namely $x_{12}$. But that depends on what you mean by a "grouping."
Why is the Cantor set uncountable
The error in your reasoning is the assumption that a countable set of numbers can be ordered. For example, consider the set of rational numbers, countable, but can't be ordered ('ordering' here means enumerating in a sequence such that $\alpha_1<\alpha_2<\dots$). A simple way to see that the cantor set is uncountable is to observe that all numbers between $0$ and $1$ with ternary expansion consisting of only $0$ and $2$ are part of cantor set. Since there are uncountably many such sequences, so cantor set is uncountable.
How many ways are there to form a sequence of 10 letters from 4 a's, 4 b's, 4 c's, and 4 d's if each letter must appear twice?
Your point that you will have aabbccdd__ is right. However, note that the two blanks can also be filled by the same letter; thus you can have $4$ a's and two each of the rest. The number of permutations in this case is $\frac{10!}{4!\,2!\,2!\,2!}$, and since any of the $4$ letters can be the one repeated $4$ times, this case gives $\frac{10!}{4!\,2!\,2!\,2!}\cdot 4$ permutations. Next, the two blank spaces may be filled by two distinct letters, in which case the count you put forth, $\frac{10!}{3!\,3!\,2!\,2!}$, is correct for each choice of the pair; since the two blanks are unordered, the pair of letters appearing $3$ times can be chosen in $\binom42=6$ ways, giving $\frac{10!}{3!\,3!\,2!\,2!}\cdot 6$. Add up the results of these two cases to get the required answer. Note that your answer alone would be incomplete, since the distribution of letters changes when one letter appears $4$ times.
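To double-check the two cases (my addition), one can evaluate the two multinomial counts and also brute-force all $4^{10}$ sequences, keeping those that use every letter at least twice (reading "appear twice" as "appear at least twice", as the answer above does); both approaches give the same total.

```python
from itertools import product
from collections import Counter
from math import factorial as f

brute = sum(1 for seq in product("abcd", repeat=10)
            if len(set(seq)) == 4 and min(Counter(seq).values()) >= 2)

case_4222 = 4 * f(10) // (f(4) * f(2) ** 3)         # one letter appears 4 times
case_3322 = 6 * f(10) // (f(3) ** 2 * f(2) ** 2)    # two letters appear 3 times
print(brute, case_4222 + case_3322)                 # the two numbers coincide
```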
Show that the quotient ring R/N has no non-zero nilpotent elements.
You should explain what you tried for answering both these questions. Namely, the first one (a) is quite easy once you go back to the definition of an ideal. The second one should not take much longer: assume that $x \in R/N$ is nilpotent, let $\widehat{x} \in R$ be an antecedent of $x$, and look at what the assertion “$x$ is nilpotent” means for $\widehat{x}$.
A bijection between certain sequences and functions
Pick your $k$-element set $K\subseteq[n]$ that is to be sent to $1$. Let $f:[n-k]\to[n-k]$ be arbitrary. There is an order-preserving bijection $\varphi:[n]\setminus K\to[n-k]$, since $|[n]\setminus K|=n-k$. Define $$\hat f:[n]\to[n]:i\mapsto\begin{cases} 1,&\text{if }i\in K\\ f(\varphi(i))+1,&\text{if }i\in[n]\setminus K\;; \end{cases}$$ then $\hat f$ has the desired properties, and you need only verify that every function with the desired properties that sends precisely the set $K$ to $1$ can be obtained uniquely in this way.
Roots of a composition of functions where the inner function has a quotient space as its range
(I don't think I understand what your function $u$ is, but perhaps I don't need to to answer this question.) The codomain of $u$ is $S^1$, and you're applying $\cos$ to $u(x,t)\in S^1$, so the domain of $\cos$ must contain $S^1$. There's no actual mathematics happening here - this is more or less just required by the "grammar" of how the question's been written. You're right that $[0, 2\pi)$ is a set of representative elements in $\mathbb{R}$ of $S^1$, but the elements of $S^1$ are equivalence classes. So the zeros of $\cos$ are given by $u = [\pi/2], [3\pi/2]$, as in your answer (1). (But answer (3) is also a perfectly good way of counting the number of zeros and finding representatives of their classes, and it's probably what you'd do in practice.) Having said which... you've been asked to find the zeros of $\cos\circ\, u$, not $\cos$. That is, you need to find all $(x,t)\in\mathbb{R}\times\mathbb{R}^+$ such that $\cos(u(x,t)) = 0$. You've reduced this to the problem of finding all $(x,t)\in\mathbb{R}\times\mathbb{R}^+$ such that $u(x,t) = [\pi/2]$ or $[3\pi/2]$, but you haven't solved it yet!
Computing orbits of conjugacy classes in GAP.
Assuming that you have the group $G$ given concretely with a normal subgroup $N$, you could do so by defining your own function for the action (such functions always take an element $\omega$ of the domain and a group element $g$ and return $\omega^g$): OnConjugacyClasses:=function(class,g) return ConjugacyClass(ActingDomain(class),Representative(class)^g); end; With this, you can then calculate orbits as usual. In your example: gap> G:=SymmetricGroup(5);; gap> N:=DerivedSubgroup(G);; gap> cl:=ConjugacyClasses(N); [ ()^G, (1,2)(3,4)^G, (1,2,3)^G, (1,2,3,4,5)^G, (1,2,3,5,4)^G ] gap> OrbitsDomain(G,cl,OnConjugacyClasses); [ [ ()^G ], [ (1,2)(3,4)^G ], [ (1,2,3)^G ], [ (1,2,3,4,5)^G, (1,2,3,5,4)^G ] ] If you try this for larger groups, it might be faster to also transfer information about the centralizer of the representative, if known: OnConjugacyClasses:=function(class,g) local cl; cl:=ConjugacyClass(ActingDomain(class),Representative(class)^g); if HasStabilizerOfExternalSet(class) then SetStabilizerOfExternalSet(cl,StabilizerOfExternalSet(class)^g); fi; return cl; end;
Computational Homology used to verify that two spaces are homeomorphic
You can find proofs of both facts in Ch.1 section 16 of Bredon's book Topology and Geometry. Here are the two relevant statements. 16.3 Proposition. Let $C\subset \mathbb R^n$ be a compact convex body with $0\in int(C)$. Then the function $f\colon\partial C\to S^{n-1}$ given by $f(x)=x/||x||$ is a homeomorphism. This is easy to verify now that you know what the map is! 16.4 Theorem. A compact convex body $C$ in $\mathbb R^n$ with nonempty interior is homeomorphic to the closed $n$ ball, and $\partial C\cong S^{n-1}$. To prove this, assume by translating if necessary that $0\in int(C)$. Define $k\colon D^n\to C$ by $k(x)=||x||f^{-1}(x/||x||)$ for $x\neq 0$ and $k(0)=0$, where $f$ is as above. Now check this is a homeomorphism.
Let $k \in \mathbb{Z}^+$. Use the Euclidean Algorithm to compute $gcd(7k+14; 3k+6)$.
Hint: Just use the form $\gcd(a,b)=\gcd(a-b,b)$. The Euclidean Algorithm is just an abbreviation of this process.
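A tiny illustration (my addition) of the subtractive form for a few values of $k$: repeatedly replace the larger number by the difference until the two are equal.

```python
def gcd_sub(a, b):
    # gcd via gcd(a, b) = gcd(a - b, b), for positive integers a, b
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
    return a

for k in range(1, 6):
    print(k, gcd_sub(7 * k + 14, 3 * k + 6))   # prints k + 2 each time
```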
Corollary from lemma in Number Theory
Let $p|ab$. If $p|a$, it is done. Otherwise $(p,a)=1$ and by the lemma, $p|b$.
What is the precise definition of 'uniformly differentiable'?
Let $f:[a,b] \to \mathbb{R}$. Differentiability means that the limit $$ \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} $$ (with the obvious modifications for $x = a,b$) exists, in which case we denote the limit as $f'(x)$. This definition can be rephrased as saying that there is a function $f':[a,b] \to \mathbb{R}$ which satisfies $$ \lim_{h \to 0} \left |\frac{f(x+h) - f(x) - hf'(x)}{h} \right| = 0. $$ The uniformity here means that we can approximate uniformly in $x$. More precisely, given an $\epsilon > 0$ we may find a $\delta > 0$ so that whenever $0 < |h| < \delta$, then $$ \left|\frac{f(x+h) - f(x) - hf'(x)}{h}\right| < \epsilon. $$ It's easy to show that a differentiable function is uniformly differentiable if and only if its derivative is continuous. I believe this is what Rudin has you prove. Outside of Rudin's book, I don't know if I've ever heard the term "uniformly differentiable" used exactly, and a quick Google search seems to suggest that the term is primarily connected with that problem.
product of sums: $R_{XX}(t_1, t_2) = \cos(t_1 - t_2) + \cos t_1 \cos t_2$
HINT Use the trigonometric identity $$\cos(t_1-t_2)=\cos t_1\cos t_2+\sin t_1\sin t_2$$
Show that $f$ is convex if and only if $f\left( \sum_{i=1}^m\lambda_ix_i \right) \leq \sum_{i=1}^m\lambda_if(x_i)$
Hint: Suppose that $f(c_1x_1+...+c_nx_n)\leq \sum c_if(x_i)$ whenever $\sum_ic_i=1$ (with all $c_i\ge 0$). Consider $c_1,...,c_{n+1}$ with $\sum_ic_i=1$ and $c_{n+1}\neq 0$, so that $c_n+c_{n+1}>0$. Then $$f(c_1x_1+...+c_{n+1}x_{n+1})=f\Big(c_1x_1+\dots+c_{n-1}x_{n-1}+(c_n+c_{n+1})\frac{c_nx_n+c_{n+1}x_{n+1}}{c_n+c_{n+1}}\Big)\leq c_1f(x_1)+\dots+c_{n-1}f(x_{n-1})+(c_n+c_{n+1})f\Big(\frac{c_nx_n+c_{n+1}x_{n+1}}{c_n+c_{n+1}}\Big).$$ We have $$f\Big(\frac{c_nx_n+c_{n+1}x_{n+1}}{c_n+c_{n+1}}\Big)\leq \frac{c_n}{c_n+c_{n+1}}f(x_n)+\frac{c_{n+1}}{c_n+c_{n+1}}f(x_{n+1})$$ since $\frac{c_n}{c_n+c_{n+1}}+\frac{c_{n+1}}{c_n+c_{n+1}}=1$. This implies the result.