What is the probability that the sequence obtained is increasing (i.e. nondecreasing)?
Let $F(k,n)$ be the number of $k$-tuples such that $X_1 \le \ldots \le X_k$. Conditioning on $X_k$, we get $$F(k,n) = \sum_{m=1}^n F(k-1, m)$$ Thus $F(k,n) - F(k,n-1) = F(k-1,n)$. Boundary conditions are $F(k,1) = 1$ and $F(1,n) = n$. It is easy to show that $$ F(k,n) = {n+k-1 \choose k} $$ (check the boundary conditions: $F(1,n)=\binom{n}{1}=n$ and $F(k,1)=\binom{k}{k}=1$). Then the probability is $F(k,n)/n^k$.
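As a sanity check, here is a short brute-force sketch in Python (assuming the $X_i$ range over $\{1,\dots,n\}$, which is how the count above is set up; the small ranges are arbitrary):

```python
from itertools import product
from math import comb

def brute_force(k, n):
    # count k-tuples over {1,...,n} with X_1 <= ... <= X_k
    return sum(1 for t in product(range(1, n + 1), repeat=k)
               if all(a <= b for a, b in zip(t, t[1:])))

for k in range(1, 5):
    for n in range(1, 6):
        assert brute_force(k, n) == comb(n + k - 1, k)
print("F(k,n) = C(n+k-1, k) checked for k <= 4, n <= 5")
```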
Exercise: Ordering functions using big-O notation
First you forget about all multiplicative constants. We are left with $$ \log (n^n) \quad n^{\sum_{i=1}^{\log_2\log_2 n} i} \quad 4^{\log_2 n} $$ Now you simplify all the terms: $$ n\log n \quad n^{\Theta(\log_2\log_2 n)^2} \quad n^2 $$ In principle you can first calculate the exact sum in $f_2$, but here it really doesn't matter. Now let's look at all the exponents of $n$: $$ 1+o(1) \quad \Theta(\log_2\log_2 n)^2 \quad 2 $$ The $o(1)$ term is all that remains of the logarithm. Some other exercise might require comparison of such terms, but here we need not be so exact. The ordering is now clear: $$1+o(1) < 2 < \Theta(\log_2\log_2 n)^2.$$ So $f_1 \ll f_3 \ll f_2$.
"Trivial" part of the Portmanteau-Theorem
This statement is not true. The case $\mu_n = 0$ and $\mu$ any non-zero measure gives you a counterexample. If you assume all $\mu_n, \mu$ have the same finite total measure, then the statement is true. Assume all measures have total measure $M$, for example. Let $G$ be any open subset of $E$. Take $F = E-G$. It is a closed subset of $E$, and $E=F\sqcup G$. Then, since the measures are finite, $\mu_n(G) = \mu_n(E) - \mu_n(F) = M - \mu_n(F)$, and the same statement holds for $\mu$ instead of $\mu_n$. Then $\limsup_{n\rightarrow +\infty}\mu_n(G) = M - \liminf_{n\rightarrow +\infty}\mu_n(F) \geq M - \mu(F) = M - (M - \mu(G)) = \mu(G)$. The proof is similar for the other direction of the equivalence.
Given $x_{0}$, let $f(x) = \|x-x_{0}\|$. Show that $f$ has a minimum on any closed, nonempty set $A \subset \mathbb{R}^{n}$
Did you try using the following fact? Every bounded sequence in a closed set has a subsequence converging to an element of that set. Try to construct a sequence in $A$ whose distances to $x_0$ are decreasing (or at least non-increasing).
How to solve a series that produces $i$
The sum is a real number, as any sum with all real terms is real. Notice that $10^{-13}$ is really, really small: that coefficient shows up only because of the numerical approximation methods used to evaluate sums like this.
Simple Integration Identity
Note that the integration region is a triangular region with vertices at $(a,a)$, $(a,x)$, and $(x,x)$ in the $(s,\xi)$-plane. Thus, if the inner integral is on $s$, we see that for any fixed $\xi$, $s$ begins at $a$ and ends at $\xi$. If the inner integral is on $\xi$, we see that for any fixed $s$, $\xi$ begins at $s$ and ends at $x$. Therefore, we can write $$\begin{align} \int_a^x \int_a^\xi f(s)\,ds\,d\xi&=\int_a^x \int_s^x f(s)\,d\xi\,ds\\\\ &=\int_a^x f(s)\left(\int_s^x (1)\,d\xi\right)\,ds\\\\ &=\int_a^x f(s)(x-s)\,ds \end{align}$$ as was to be shown!
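A quick numerical check of the identity, as a Python sketch (the choice $f=\cos$ and the endpoints are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

f = np.cos              # an arbitrary continuous test function
a, x = 0.5, 2.0         # arbitrary endpoints with a < x

# left-hand side: the iterated double integral
lhs, _ = quad(lambda xi: quad(f, a, xi)[0], a, x)
# right-hand side: the single integral from the identity
rhs, _ = quad(lambda s: f(s) * (x - s), a, x)
print(lhs, rhs)         # the two values agree to quadrature tolerance
```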
Where do I go wrong with the product rule with three variables?
You are making three mistakes. First, you assume that $yz\frac{\partial}{\partial z}\frac{\partial}{\partial x}$ is the same as $y\frac{\partial}{\partial z}z\frac{\partial}{\partial x}$. Let's put a simple function $w(x,y,z)=x$ into the mix to see why this is wrong: $yz\frac{\partial}{\partial z}(\frac{\partial w}{\partial x}) = yz\frac{\partial}{\partial z}(1)=0$ $y\frac{\partial}{\partial z}(z\frac{\partial w}{\partial x})=y\frac{\partial}{\partial z}(z)=y$ In fact, by the product rule, $y\frac{\partial}{\partial z}z\frac{\partial}{\partial x}$ is equal to $y(\frac{\partial z}{\partial z}\frac{\partial}{\partial x}+z\frac{\partial}{\partial z}\frac{\partial}{\partial x})=y(\frac{\partial}{\partial x}+z\frac{\partial^2}{\partial z\partial x})$ Second, $\frac{\partial}{\partial z}\frac{\partial}{\partial x}$ is not equal to $\frac{\partial}{\partial z}+\frac{\partial}{\partial x}$. Why would it be? Third: $\frac{d}{dz}$ is not the same as $\frac{\partial}{\partial z}$!
Is $\pi=\sqrt[p-1]{p}$ a uniformizer of $\mathbb{Z}_p[\zeta_p]$?
This will work as long as $\sqrt[p-1]{-1}\in K=\Bbb Q_p(\zeta_p)$. The trouble is, it isn't. The roots of unity of order coprime to $p$ in $K$ are the same as those in $\Bbb Q_p$ and all satisfy $X^{p-1}=1$ (as long as $p$ is odd). So there are no solutions of $X^{p-1}=-1$ in $K$.
Probability of dice with cumulative successes
For a single die this is pretty straightforward. To get $N$ successes you have only two options: 1. $N-1$ consecutive 9-10's followed by an 8. 2. $N$ consecutive 9-10's followed by a 1-7, which I think of as $N-1$ consecutive 9-10's followed by a 9-10 and then a 1-7. So this leads to: $P(n=N) = (\frac{2}{10})^{N-1}(0.1+0.2\times 0.7)$, and $P(n=0)=0.7$ of course. For a larger set of dice, the total count is the sum of the independent single-die counts, so you convolve such single-die distributions; each of the original dice can be analyzed independently of the others. If you are only interested in this for practical reasons, then a simulation script gives pretty good estimates.
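The script itself isn't shown above; here is a minimal Monte Carlo sketch (Python) of the single-die rule as described: 1-7 stops the die, 8 scores a success and stops, 9-10 scores a success and rerolls.

```python
import random
from collections import Counter

def one_die(rng):
    # 1-7: stop; 8: one success and stop; 9-10: one success and reroll
    successes = 0
    while True:
        r = rng.randint(1, 10)
        if r <= 7:
            return successes
        successes += 1
        if r == 8:
            return successes

rng = random.Random(0)
trials = 10**6
counts = Counter(one_die(rng) for _ in range(trials))
for N in range(4):
    theory = 0.7 if N == 0 else 0.2**(N - 1) * (0.1 + 0.2 * 0.7)
    print(N, counts[N] / trials, theory)
```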
Faithful representations and the adjoint representation of a Lie algebra
Indeed, in Lie algebras exact sequences do not necessarily split. Take, for instance, the Heisenberg Lie algebra:$$\mathfrak h=\left\{\begin{bmatrix}0&a&c\\0&0&b\\0&0&0\end{bmatrix}\,\middle|\,a,b,c\in\Bbb C\right\}.$$Then,$$Z(\mathfrak h)=\left\{\begin{bmatrix}0&0&c\\0&0&0\\0&0&0\end{bmatrix}\,\middle|\,c\in\Bbb C\right\}.$$Furthermore, $[\mathfrak h,\mathfrak h]\subset Z(\mathfrak h)$. So, you can't have $\mathfrak h=Z(\mathfrak h)\oplus\operatorname{ad}(\mathfrak h)$, since $Z(\mathfrak h)$ is in the center of $Z(\mathfrak h)\oplus\operatorname{ad}(\mathfrak h)$, and $\bigl[Z(\mathfrak h)\oplus\operatorname{ad}(\mathfrak h),Z(\mathfrak h)\oplus\operatorname{ad}(\mathfrak h)\bigr]\not\subset Z(\mathfrak h)$.
Why does the number of regions in a circle cut by chords joining $n+1$ points equal the number of regions in $\mathbb{R}^4$ cut by $n$ hyperplanes?
Put one marble inside each region and allow the marbles to fall and roll downwards, where we pick “downwards” as some direction that’s not parallel to anything of interest. In the case of the circle: One marble rolls to the bottom of the circle. For every set of two points, one marble rolls to the lower of the two, resting on the chord between them. (Each such marble actually rests between two chords, or between one chord and the upper part of the circle; we assign it to the chord whose angle is farther from the upper part of the circle.) For every set of four points, one marble rolls to the intersection of the diagonals of the quadrilateral formed by them. This gives a bijection between the regions and the sets of 0, 2, or 4 of the $n + 1$ points. In the case of 4-dimensional space, for convenience, draw an extra slightly slanted “ground hyperplane” below all of the existing intersection points that catches all the falling marbles (but does not create additional regions). One marble rolls forever down the ground hyperplane without hitting any other hyperplanes. For every set of two hyperplanes (possibly including the ground hyperplane), one marble rolls forever down the line or plane defined by the intersection of those hyperplanes with the ground hyperplane. For every set of four hyperplanes (possibly including the ground hyperplane), one marble rolls to their intersection. This gives a bijection between the regions and the sets of 0, 2, or 4 of the $n + 1$ hyperplanes (possibly including the ground hyperplane). Composing these bijections gives a bijection between the regions of the circle and the regions of 4-dimensional space.
Need an element not in the image of a linear transformation
The second derivative of $f(x)=2x+1$ certainly exists. It's $0$. Your second approach is exactly right. You want a polynomial of degree at most $20$ that is the second derivative of some polynomial not of degree at most $20$.
Can we directly prove $\sqrt {2}$ is irrational?
The problem with trying to prove such a thing directly is in the definition of an irrational number. We define an irrational number as some number that can't be written in the form $\frac{p}{q}$, where $p$ and $q$ are both integers. So to prove that $\sqrt{2}$ is not rational, we have to prove that it cannot be written in such a form, which means we have to derive a contradiction somewhere.
Write $\cos(x)$ in terms of $\sin(x)$
As written, it's not quite correct. Let's start with the identity $$\sin^2{x} + \cos^2{x} = 1 \implies \cos^2{x} = 1 - \sin^2{x}$$ In quadrant IV, $\cos{x}$ is positive, so taking a square root gives us $$\cos{x} = \sqrt{1 - \sin^2{x}}$$
Proof of Idempotency for Matrices
$(I-Y)^2=I-Y$ $\Rightarrow$ $I-2Y+Y^2=I-Y$ $\Rightarrow$ $Y^2-2Y=-Y$ $\Rightarrow$ $Y^2=Y$, hence $Y$ is idempotent.
Removing the root squares from this expression?
It's all about rationalization: \begin{align} \frac 1{\sqrt{2} + \sqrt{3} + \sqrt{5}} &= \frac 1{\sqrt{2} + \sqrt{3} + \sqrt{5}} \cdot \frac{\sqrt{2} + \sqrt{3} - \sqrt{5}}{\sqrt{2} + \sqrt{3} - \sqrt{5}} \\[10pt] &=\frac{\sqrt{2} + \sqrt{3} - \sqrt{5}}{(\sqrt{2}+\sqrt{3})^2-(\sqrt{5})^2} \\[10pt] &=\frac{\sqrt{2} + \sqrt{3} - \sqrt{5}}{(2+3+2\sqrt{6})-(5)} \\[10pt] &=\frac{\sqrt{2} + \sqrt{3} - \sqrt{5}}{2\sqrt{6}}\cdot\frac{\sqrt6}{\sqrt6} \\[10pt]&=\frac{\sqrt{12}+ \sqrt{18} - \sqrt{30}}{12} \\[10pt]&=\frac{2\sqrt{3} + 3\sqrt{2} - \sqrt{30}}{12} \end{align}
How to find all solutions to equations like $3x+2y = 380$ using matrices/linear algebra?
I can't imagine using linear algebra to find the integer solutions to your equation, when it's so simple to note that $x$ must be even, say, $x=2z$, so the equation becomes $3z+y=190$, and the solution is $z$ is arbitrary, $y=190-3z$. EDIT: For linear diophantine equations in several variables, a good starting place is the Wikipedia piece on Bezout's identity, http://en.wikipedia.org/wiki/Bezout%27s_identity MORE EDIT: There's a nice discussion by Gilbert and Pathria of systems of linear diophantine equations here; now linear algebra comes into it.
Prove that $\int_{- \infty}^{\infty} f(x) dx = \int_{- \infty}^{\infty} f(-x) dx.$ (Analysis)
You did not define $M_i$; my guess is that $M_i=\sup f|_{[x_{i-1},x_i]}$. But then\begin{align}\sum_{i=1}^nN_i(y_i-y_{i-1})&=\sum_{i=1}^nN_i(-x_{n-i}+x_{n+1-i})\\&=\sum_{j=1}^nN_{n+1-j}(x_j-x_{j-1})\\&=\sum_{j=1}^nM_j(x_j-x_{j-1})\\&=U(P,f).\end{align}
Fourier transform parameter
Suppose $a<0$. Then, \begin{align*} \hat{f}(ak)&=\int_{-\infty}^{\infty}f(ax)e^{-ik x}\, dx= \{ ax=s \} = \int_{\color{red}\infty}^{\color{red}-\color{red}\infty}\frac{f(s)}{a}e^{-ik \frac{s}{a}}\, ds= \\ &= -\int_{-\infty}^{\infty}\frac{f(s)}{a}e^{-i \frac{k}{a}s}\, ds=\int_{-\infty}^{\infty}\frac{f(s)}{|a|}e^{-i \frac{k}{a}s}\, ds=\frac{1}{|a|}\hat{f}\left( \frac{k}{a} \right) \end{align*} The trick is, as WimC already said, that since $a$ is negative, the substitution reverses the limits of integration.
negating a probability limit
The contrapositive is the following: if there exists $k$ such that $P(X_n=k)$ does not tend to $P(X=k)$, then there exists $b$ such that $P(X_n\leq b)$ does not tend to $P(X\leq b)$. However, this is not the best way to prove the result. Since your random variables are integer valued, $X_n=k$ iff $X_n \leq k$ and it is not true that $X_n \leq k-1$. Hence $P(X_n=k)=P(X_n\leq k)-P(X_n \leq k-1) \to P(X\leq k)-P(X \leq k-1)=P(X=k)$
Finding functions with integer coefficients that have imaginary zeros
The following polynomial satisfies the given conditions: $$ (x-2)(x-1+3i)(x-1-3i) = x^3 - 4 x^2 + 14 x - 20. $$
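A one-line numerical check (Python/NumPy; `np.poly` returns the coefficients of the monic polynomial with the given roots):

```python
import numpy as np

# coefficients of (x-2)(x-(1-3i))(x-(1+3i)); conjugate roots give a real result
print(np.poly([2, 1 - 3j, 1 + 3j]).real)   # [  1.  -4.  14. -20.]
```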
How to prove any ball with $L_p$ metric can contain some ball with $L_q$ metric where $p<q$?
First notice that you've already proved that $p>q\implies M_p<M_q$ (including $q=\infty$) so if you prove $\forall \epsilon >0,\exists \delta>0,B_{\infty}(x,\delta)\subset B_1(x,\epsilon)$ you're done. Let $x\in\mathbb{R}^n$ and $\epsilon>0$. Take $\delta=\frac{\epsilon}{n}$ so we have that $$\sup|x_i-y_i| =d_{\infty}(x,y)<\delta\implies d_1(x,y)=\sum|x_i-y_i|< \sum \delta=\epsilon$$
Nonconstructible sets of integers
No. It need not be the case. Suppose we start with $V=L$ and we add a new subset of $\omega_1$ by using functions from countable subsets of $\omega_1$ into $2$. Every countable subset of $\omega_1$ (and in particular every subset of $\omega$) is in the ground model; however, the generic extension of the model is not $L$.
What is the finite eigenvalue of a controlled system as $\epsilon \rightarrow 0$?
Since you are dealing with LQR and they ask about the closed-loop eigenvalues, the first thing that comes to mind is to use the associated Hamiltonian matrix $$ H = \begin{bmatrix} A & -\epsilon^{-2} B\,B^\top \\ -Q & -A^\top \end{bmatrix}, $$ which should have negated eigenvalue pairs, and the two negative eigenvalues should match the closed-loop eigenvalues. Solving directly for the eigenvalues of $H$ as a function of $a_1$, $a_2$ and $\epsilon$ would require solving a quartic equation. However, it is easier to make use of the stated property of $H$, namely from that it follows that its characteristic polynomial would be of the form $$ \det(\lambda\,I - H) = (\lambda - p_1)\,(\lambda + p_1)\,(\lambda - p_2)\,(\lambda + p_2) = p_1^2\,p_2^2 - (p_1^2 + p_2^2)\,\lambda^2 + \lambda^4. $$ Matching the actual characteristic polynomial with this form allows one to find expressions for $p_1^2$ and $p_2^2$ relatively easily. Now, when one applies the limit of $\epsilon \to 0$ to these solutions, one will remain bounded while the other does not.
drawing chips from a bag without replacement
An easier approach might be to use inclusion-exclusion. What is the probability of not getting any black? $$\frac{{7 \choose 0}\cdot {17 \choose 5}}{24 \choose 5}$$ Then the probability of not getting orange is similar, and likewise red. If you add those up, you have double counted the cases where only $1$ color was picked, and so you should subtract the extras. Then $1-$ that is your answer.
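A sketch of the full inclusion-exclusion computation in Python. Only the $7$ black chips are pinned down by the term above; the $8/9$ split of the remaining $17$ chips between orange and red is an assumed example:

```python
from math import comb
from itertools import combinations

counts = {"black": 7, "orange": 8, "red": 9}   # orange/red split is assumed
n, draw = sum(counts.values()), 5
total = comb(n, draw)

# inclusion-exclusion over the subsets of colors that are entirely missed;
# the final value is the "1 - that" answer: all three colors appear
p_all = 0.0
for size in range(len(counts) + 1):
    for missed in combinations(counts, size):
        rest = n - sum(counts[c] for c in missed)
        if rest >= draw:
            p_all += (-1) ** size * comb(rest, draw) / total
print(p_all)
```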
Help understand a proof of convergence in probability implying convergence in $r$th mean
No, it definitely does not; though the proof you copied from the book does not assert that. The subsequence chosen was a special one: it was chosen as a subsequence that is supposed to stay away from $X$ in $r$th mean. So, the contradiction arises when you analyze some subsubsequence of this special subsequence, and showed that it is indeed close to $X$ in $r$th mean.
A rigorous computation of the mean time to empty a stack of given size
One wants to compute every $E(T_M)$, where $T_M$ denotes the number of moves necessary to empty a stack of size $M$. Until the stack becomes empty, its size performs a random walk on the set of nonnegative integers $M$, whose steps from every positive $M$ are $M\to M+1$, $M\to M$ and $M\to M-1$, with respective probabilities $p_1$, $p_2$ and $p_3=1-p_1-p_2$. Thus, if $p_1\geqslant p_3$, every $E(T_M)$ is infinite. If $p_1<p_3$, that is, if $2p_1+p_2<1$, we first make use of the elementary remark that, to empty a stack of size $M$, one must first reach a stack of size $M-1$, then reach a stack of size $M-2$, and so on until a stack of size $0$. By homogeneity, this shows that $$E(T_M)=ME(T_1)$$ for every positive $M$. On the other hand, starting from size $1$, the usual one-step decomposition yields $$E(T_1)=1+p_1E(T_2)+p_2E(T_1)+p_3\,0$$ thus, $$E(T_1)=\frac1{1-2p_1-p_2}$$ and, finally, for every $M$, $$E(T_M)=\frac M{1-2p_1-p_2}$$
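A quick simulation sketch (Python; the particular values of $p_1$, $p_2$ and $M$ are arbitrary) confirming $E(T_M)=M/(1-2p_1-p_2)$:

```python
import random

def moves_to_empty(M, p1, p2, rng):
    # one run of the walk: size +1 w.p. p1, unchanged w.p. p2, -1 otherwise
    moves = 0
    while M > 0:
        u = rng.random()
        if u < p1:
            M += 1
        elif u >= p1 + p2:
            M -= 1
        moves += 1
    return moves

rng = random.Random(1)
p1, p2, M, runs = 0.2, 0.3, 4, 200_000
est = sum(moves_to_empty(M, p1, p2, rng) for _ in range(runs)) / runs
print(est, M / (1 - 2*p1 - p2))   # simulation vs. M/(1-2p1-p2) = 13.33...
```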
How to tell if series terminates (Legendre ODE)
If the terms of an infinite series $\,s_1 + s_2 + ... + s_n + ...\,$ are such that they are equal to zero after $\,s_n\,$, then it is said to terminate and its sum is $\,s_1 + s_2 + ... + s_n\,$ which is a finite sum and the series converges to it. In the common case of a power series $\,a_0 + a_1 x + a_2 x^2 + ... + a_n x^n + ...\,$ the same thing applies and a terminating power series is $\,a_0 + a_1 x + a_2 x^2 + ... + a_nx^n\,$ which is a polynomial and which has infinite radius of convergence. You can look at MSE question 2573694 "Validity of terminating series solution of differential equation" for a similar situation. Note that I think that the terminology is not a good one, but it is commonly used -- likely because of a lack of a better one.
Double integral over complicated region
It is worthwhile to calculate the integral regarding the ellipse as a normal domain, from which: $$\begin{eqnarray*} I &=& \int_{-1}^{3}\int_{-2-3\sqrt{1-\left(\frac{x-1}{2}\right)^2}}^{-2+3\sqrt{1-\left(\frac{x-1}{2}\right)^2}}\frac{1}{1+x^2+y^2}\,dy\,dx.\end{eqnarray*}$$ Now exploiting the fact that $\int\frac{dy}{1+x^2+y^2}=\frac{1}{\sqrt{1+x^2}}\arctan\left(\frac{y}{\sqrt{1+x^2}}\right)$ and the identity: $$\arctan(a)+\arctan(b)=\arctan\left(\frac{a+b}{1-ab}\right)$$ we have: $$I = \int_{-1}^{3}\frac{1}{\sqrt{1+x^2}}\arctan\left(\frac{12\sqrt{(1+x^2)(3-x)(x+1)}}{13x^2-18x-7}\right)\,dx$$ and by setting $x=\sinh t$ we arrive at: $$ I = \int_{\log(\sqrt{2}-1)}^{\log(3+\sqrt{10})}\arctan\left(\frac{12\cosh t\sqrt{(3-\sinh t)(1+\sinh t)}}{13\sinh^2 t-18\sinh t-7}\right)dt$$ that is hard to evaluate in terms of elementary functions but quite easy to calculate numerically. As an alternative, by setting $x=1+2\rho\cos\theta,y=-2+3\rho\sin\theta$ we have: $$ I = \int_{0}^{1}\int_{-\pi}^{\pi}\frac{\rho}{6+4\rho\cos\theta-12\rho\sin\theta+4\rho^2\cos^2\theta+9\sin^2\theta}\,d\theta\,d\rho$$ and by setting $\theta=2\arctan t$ we have: $$ I = \int_{0}^{1}\int_{-\infty}^{+\infty}\frac{2\rho(1+t^2)}{6(1+t^2)^2+4\rho(1-t^4)-24\rho t(1+t^2)+4\rho^2(1-t^2)^2+36t^2}\,dt\,d\rho,$$ $$ I = \int_{0}^{1}\int_{-\infty}^{+\infty}\frac{2\rho(1+t^2)}{(6-4\rho+4\rho^2)t^4-24\rho t^3+(48-8\rho^2)t^2-24\rho t+(6+4\rho+4\rho^2)}\,dt\,d\rho,$$ $$ I = \int_{0}^{1}\int_{-\infty}^{+\infty}\frac{\rho(1+t^2)}{2(1-t^2)^2\rho^2+(2-12t-12t^3-2t^4)\rho+(3+24t^2+3t^4)}\,dt\,d\rho.$$
$f(x) = x^{2015}+2x^{2014}+3x^{2013}+...+2015x+2016$
First, we note that $f(x)-f(x)/x=\frac{-2016}{x}+\sum_{k=0}^{2015} x^k$ Dividing by $x$ and rearranging, we get $$\sum_{k=0}^{2014} x^k=f(x)/x-f(x)/x^2+2016/x^2 -1/x$$ Evaluating at the roots of $f(x)$ and summing yields $$\sum_i\sum_{k=0}^{2014} x_i^k=\sum_i 2016/x_i^2 -1/x_i.$$ This reduces the problem to finding the sum of the reciprocals of the roots, and finding the sum of the squares of the reciprocals of the roots. This is an exercise in rewriting symmetric functions in terms of elementary symmetric polynomials to rewrite the expression in terms of the coefficients of $f(x)$.
How to form probability density function using Kronecker symbol?
Your interpretation of $\delta_t$ is correct. It is fine with the computation of the expectation. $$\mathbb E(X)=\sum_{x} x\times f(x)$$ $$=\sum_{x} x\mathbb P(X=x)$$ where sum is over the values taken by random variable $X$. $$\Longrightarrow \mathbb E(X)=0\times \Big(1-\frac{1}{a}\Big)+a\times \frac{1}{a}=1$$
Question about the limits
Try $f:\mathbb R^2\to\mathbb R$ defined by $f(x,y)=\color{red}{5}x+\color{blue}{3}y$, with $q_{2n}=(1/n,0)$ and $q_{2n-1}=(0,1/n)$, hence $q_n\to q=(0,0)$, and use the shorthand $$ g_n=\frac{f(q_{n})-f(q)}{\|q_{n}-q\|}. $$ Then $f$ is $C^\infty$ everywhere but $f(q)=0$, $f(q_{2n})=\color{red}{5}/n$, $f(q_{2n-1})=\color{blue}{3}/n$ and $\|q_n-q\|\sim2/n$ hence $g_{2n}\to\color{red}{5}$ while $g_{2n-1}\to\color{blue}{3}$. Thus the sequence $(g_n)_{n\geqslant1}$ diverges.
Integral $\int_0^{\pi/4}\frac{x^2\tan x}{\cos^2 x}dx=\frac{\log 2}{2}-\frac{\pi}{4}+\frac{\pi^2}{16}$
$$I=\int_0^{\pi/4} \frac{x^2\tan x}{\cos^2 x}\,dx=\int_0^{\pi/4} x^2\tan x\sec^2 x\,dx$$ Now use integration by parts: $$I=\left(x^2 \frac{\tan^2x}{2}\right|_0^{\pi/4}-\int_0^{\pi/4} x\tan^2 x\,dx=\frac{\pi^2}{32}-\int_0^{\pi/4} x\tan^2 x\,dx$$ Use integration by parts again to evaluate the last integral: $$I=\frac{\pi^2}{32}-\left(x(\tan x-x)\right|_0^{\pi/4}+\int_0^{\pi/4}(\tan x-x)\,dx$$ $$\Rightarrow I=\frac{\pi^2}{32}-\frac{\pi}{4}+\frac{\pi^2}{16}+\frac{\ln 2}{2}-\frac{\pi^2}{32}=\boxed{\dfrac{\ln 2}{2}-\dfrac{\pi}{4}+\dfrac{\pi^2}{16}}$$ $\blacksquare$
Issue with elementary exercise on martingales
With the corrections from the author's errata for the text: Since $\mathbb E[|X_0|] = \mathbb E[|X_1|] = 0$ and for $n\geqslant 2$ $$ \mathbb E[|X_n|] \leqslant \mathbb E[|X_{n-1}|] + \mathbb E[|2Z_n-1|]\sum_{j=1}^{n-1}\mathbb E[|Z_j|] = \mathbb E[|X_{n-1}|], $$ it follows by induction that $\{X_n\}$ is integrable. Moreover, $$\mathbb E[X_1\mid \mathcal F_0] = 2\mathbb E[Z_1] - 1 = 0 = X_0$$ and for $n\geqslant 2$ \begin{align} \mathbb E[X_{n+1}\mid\mathcal F_n] &= \mathbb E\left[X_n + (2Z_{n+1}-1)\sum_{j=1}^n Z_j\ \bigl\vert\ \mathcal F_n\right]\\ &= \mathbb E[X_n\mid\mathcal F_n] + \mathbb E\left[(2Z_{n+1}-1)\sum_{j=1}^n Z_j\ \bigl\vert\ \mathcal F_n\right]\\ &= X_n + (2\mathbb E[Z_{n+1}] - 1)\sum_{j=1}^n Z_j\\ &= X_n, \end{align} so $\{X_n\}$ is a martingale. $\{X_n\}$ is not a Markov chain because $X_{n+1}$ conditioned on $X_n$ is not independent of $(Z_1,\ldots,Z_{n-1})$.
How to decompose this sum?
Let $$f(x)\,g(y)=\frac1{x+y}.$$ Taking the logarithm, $$\log f(x)+\log g(y)=-\log\left(x+y\right).$$ Now if we take the partial derivative on $x$, $$\frac{f'(x)}{f(x)}=-\frac1{x+y}$$ and the dependency on $y$ cannot be avoided. So even for a single $y$ what you are trying to obtain is impossible.
Proof of convexity of $f(x)=x^2$
You made a mistake in your rearranging. The following are equivalent: $$\lambda x_1^2+(1-\lambda)x_2^2\ge\bigl(\lambda x_1+(1-\lambda)x_2\bigr)^2\\\lambda x_1^2+(1-\lambda)x_2^2\ge\lambda^2 x_1^2+2\lambda(1-\lambda)x_1x_2+(1-\lambda)^2x_2^2\\\lambda x_1^2+x_2^2-\lambda x_2^2\ge\lambda^2 x_1^2+2\lambda(1-\lambda)x_1x_2+x_2^2-2\lambda x_2^2+\lambda^2x_2^2\\0\ge(\lambda^2-\lambda)x_1^2+2\lambda(1-\lambda)x_1x_2+(\lambda^2-\lambda)x_2^2\\0\ge(\lambda^2-\lambda)x_1^2-2(\lambda^2-\lambda)x_1x_2+(\lambda^2-\lambda)x_2^2\\0\ge(\lambda^2-\lambda)(x_1-x_2)^2$$ The final inequality is true for all $\lambda$ if $x_1=x_2,$ and if $x_1\ne x_2,$ then the final inequality holds exactly when $\lambda\in[0,1].$
pigeonhole with assumptions?
It does seem rather sketchy ... in fact, I can't follow it at all. Groups of $4$? Red and blue? Pair of $6$ balls? Also, the question itself is weird: balls in a jar ... 4 terms apart? OK, I assume that the question basically is: If you pick $60$ numbers from $1$ through $115$, then there must be two numbers with a difference of $4$. Here's why: Pick $60$ numbers. Divide them into $4$ groups, depending on whether the number $\equiv 0,1,2$, or $3 \bmod 4$. Note that in each group, the numbers are a multiple of $4$ apart. So, if you have a group with at least $16$ numbers, then for no two of them to be exactly $4$ apart, they need to be at least $8$ apart, but that means the lowest and highest number are at least $8 \cdot 15 =120$ apart, which is impossible. So, all $4$ groups have exactly $15$ members. But now you have a problem with the group of numbers that are $\equiv 0 \bmod 4$, for the lowest possible number is $4$, and the highest needs to be at least $8 \cdot 14=112$ higher, which is $116$, and so there must be at least two that are exactly $4$ apart.
Diagonalization proof - Do eigenvectors of an eigenvalue always span the corresponding eigenspace?
That the $n_i$ vectors span the eigenspace $E_{\lambda_i}$ does require a proof: After all, they aren't all the eigenvectors corresponding to $\lambda_i$, only a small subset of these eigenvectors. We might throw a bit of light on the question by generalizing the setting a bit: Assume $A$ is not diagonalizable, and let $\beta$ be a maximal linearly independent set of eigenvectors of $A$. If $\beta_i$ are those vectors in $\beta$ corresponding to the eigenvalue $\lambda_i$, it is still true that $\beta_i$ spans $E_{\lambda_i}$. (Otherwise consider a vector in $E_{\lambda_i}$ not in the span of $\beta_i$, and show that this could be added to $\beta$, contradicting the maximality of $\beta$.) What goes wrong with the proof, if $A$ is not diagonalizable, is that $\beta$ will have fewer than $n$ members. But that is a different issue.
Solve $y'=y$, $y(0)=1$ using method of successive approximations, obtaining the power series expansions of the solution
Your work is correct (assuming that your proof of contractiveness is correct). This is part of a general theory which includes the Picard-Lindelöf theorem, and using fixed-point theorems is a very standard way of proving existence results.
$\frac{7}{n}$ as a sum of three unit fractions
This is only a partial answer, but perhaps it will inspire someone else to give a full solution. We are seeking a general identity of the form $$\frac{7}{7k+2}=\frac1a+\frac1b+\frac1c,$$ where $a,b,c\in\mathbb Z_{>0}$ depend on $k$. The obvious thing I tried was to take the guess $a=k+1$, which gives $$\frac1b+\frac1c=\frac{7}{7k+2}-\frac1{k+1}=\frac{5}{(2+7k)(1+k)}.$$ Now, we can write $$\frac{5}{(2+7k)(1+k)}=\frac{3}{(2+7k)(1+k)}+\frac{2}{(2+7k)(1+k)}.$$ Observe that $(2+7k)$ and $(1+k)$ are always of opposite parity, so the second term can always be simplified to a unit fraction. You can verify that $3\mid(2+7k)(1+k)$ if $3\nmid k$, so the first term is a unit fraction in this case as well. We have therefore reduced the problem to expressing $\dfrac7{21k+2}$ as a sum of unit fractions. Unfortunately, trying $a=nk+c$ for any other choices of $n,c$ don't seem to give useful results. For instance we have $$\frac{7}{21k+2}=\frac1{3k+2}+\frac{12}{(2+3k)(2+21k)},$$ but I don't see an obvious way to decompose the second term into a sum of two unit fractions. The best case scenario is to end up with an expression in the denominator which the numerator is guaranteed to divide (like the case for the second term above where we are guaranteed a unit fraction), but since the denominator is a quadratic polynomial in general this is not possible unless the numerator is a power of $2$. But this is insufficient; writing $12=2^3+2^2$ doesn't give anything useful above, for instance.
How to Solve a Divisibility Problem : Prove or disprove (Continued)
Your question is no different from the one you mentioned, because your question is a particular case of it, with $b=d$.
Prove that $-3/2\leq\cos a + \cos b + \cos c\leq 3$?
The picture is basically dividing a circle into 3 sectors and $a,b,c$ are the angles of each sector. Since $c=2\pi-a-b$, $\cos c= \cos (a+b)$ and $$\cos a + \cos b + \cos c = \cos a +\cos b + \cos a \cos b - \sin a \sin b $$ writing $x= \cos a,\; y= \cos b$, and $\sin a =\sqrt{1-x^2}$, etc. (we can always choose $+\sqrt{\ }$ by choosing the angles to be $<\pi$). We have $$ x+y+xy -\sqrt{1-x^2}\sqrt{1-y^2}, \quad x,y\in[-1,1] $$ By AM-GM, $$-\sqrt{1-x^2}\sqrt{1-y^2} \ge -\frac{2-x^2-y^2}{2} =-1 +\frac{x^2+y^2}{2}$$ Thus, \begin{align} x+y+xy -\sqrt{1-x^2}\sqrt{1-y^2} & \ge x+y+xy-1+ \frac{x^2+y^2}{2}\\ &= \frac{1}{2}(x+y+1)^2 -\frac{3}{2} \ge -\frac{3}{2}. \end{align} And equality holds if and only if $x=y=-1/2$, which means the angles are all $\frac{2\pi}{3}$.
Solving a matrix equation involving transpose conjugates
When $a=0$, the general solution is given by $X=\frac{S}{2}+iH$, where $H$ is an arbitrary Hermitian matrix. When $a>0$, the equation can be rewritten as $\left(\sqrt{a}X+\frac{I}{\sqrt{a}}\right)^H\left(\sqrt{a}X+\frac{I}{\sqrt{a}}\right)=S+\frac{I}{a}$. Therefore the general solution is given by $X=\frac{1}{\sqrt{a}}\left[U\left(S+\frac{I}{a}\right)^{1/2}-\frac{I}{\sqrt{a}}\right]$ where $U$ is an arbitrary unitary matrix. When $a<0$, the equation can be rewritten as $\left(\sqrt{-a}X-\frac{I}{\sqrt{-a}}\right)^H\left(\sqrt{-a}X-\frac{I}{\sqrt{-a}}\right)=-S-\frac{I}{a}$. Therefore the equation is solvable if and only if $I+aS$ is positive semidefinite. If this is the case, the general solution is given by $X=\frac{1}{\sqrt{-a}}\left[U\left(-S-\frac{I}{a}\right)^{1/2}+\frac{I}{\sqrt{-a}}\right]$, where $U$ is an arbitrary unitary matrix.
The proof of theorem 3.19 from baby Rudin
It is not in your best interest to phrase every proof in terms of contradictions. Using the definitions you should be able to give a direct proof. By definition $s=\limsup\limits_{n\to\infty} s_n$ is the largest value of a convergent subsequence of $(s_n)$. Now for a fixed convergent subsequence $(s_{n_k})$ we get a subsequence $(t_{n_k})$ that might or might not converge. Pick a convergent subsequence $(t_{m_j})$, then $(s_{m_j})$ still converges to the same limit $s^*$ as $s_{n_k}$ but now $$s^*\leqslant \lim_{j\to \infty} t_{m_j}\leqslant t=\limsup_{n\to\infty} t_n$$ Since $s^*$ was an arbitrary subsequential limit, $s\leqslant t$.
Seeing that $\lim_{x \to \infty} \sum (-x)^n/n! = 0$
Just working with series and the Cauchy product we have $$\sum_{n=0}^\infty \frac{(-x)^n}{n!}\sum_{n=0}^\infty \frac{x^n}{n!} = \sum_{n=0}^\infty\sum_{k=0}^n \frac{x^k}{k!}\frac{(-x)^{n-k}}{(n-k)!} = \sum_{n=0}^\infty\frac{(x+(-x))^n}{n!} = 1$$ and it is easy to show that as $x \to \infty$ $$\sum_{n=0}^\infty \frac{x^n}{n!} \to +\infty$$ Hence, ...
Understanding exact tests for clinical trial data
Let's analyse this $2\times 2$ contingency table. 172 of the 200 treated patients got cured; that means $\frac{172}{200}=86\%$. 151 of the 200 untreated patients got cured; that means $\frac{151}{200}=75.5\%$. Since $86>75.5$, the treatment appears to work. Now the question is: is 86 really greater than 75.5, or is the difference due to the random variability of the phenomenon? To get an answer we can do the $\chi^2$ test. The first table is your contingency table; the second one is the expected table, under the hypothesis that there is no difference between treatment group and placebo group (every expected value is calculated under the independence hypothesis, i.e. $161.5=\frac{323\times 200}{400}$). The third table is the test: every cell is calculated as $\frac{[\text{Observed}-\text{Expected}]^2}{\text{Expected}}$. The total test statistic is 7.09, which means a p-value of $0.8\%$, using a chi-squared distribution with $(2-1)\times (2-1)=1$ degree of freedom. CONCLUDING: the test is highly statistically significant. The data are enough to reject the hypothesis of OR $=1$ (so the treatment does help patients get cured).
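For reference, the same test can be reproduced in Python with SciPy (`correction=False` gives the uncorrected statistic computed above):

```python
from scipy.stats import chi2_contingency

observed = [[172, 28],   # treatment: cured, not cured
            [151, 49]]   # placebo:   cured, not cured
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(chi2, p, dof)      # ~7.09, p ~ 0.008, 1 degree of freedom
print(expected)          # [[161.5, 38.5], [161.5, 38.5]]
```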
Simplicity of the roots of a minimal polynomial
liaombro has ably answered the question; I write to add that you have a hidden assumption in your argument: that the characteristic of $K$ does not divide $\deg(\mu_{\alpha,K})$. If $K$ is a field and $L$ is an extension, then an algebraic element $\alpha\in L$ is separable over $K$ if the minimal polynomial $\mu_{\alpha,K}$ does not have repeated roots. In characteristic zero, all algebraic elements are always separable. However, in positive characteristic, there are algebraic elements that are not separable. However, if $K$ is a finite field, then $\alpha$ is always separable, so you may not have seen examples where the situation occurs. Your argument breaks down if $\mu_{\alpha,K}’ = 0$. Since the minimal polynomial is monic, this will occur only if the characteristic of $K$ divides the degree of $\mu_{\alpha,K}$. In fact, one can show that if $\mathrm{char}(K)=p\gt 0$, then an algebraic element $\alpha$ is inseparable (not separable) if and only if the minimal polynomial $\mu_{\alpha,K}$ is of the form $g(x^p)$ for some polynomial $g(x)$; that is, it’s a polynomial in $x^p$. For an explicit example, let $K=\mathbb{F}_p(x)$, the field of rational functions with coefficients in the field of $p$ elements. Let $g(Y) = Y^p-x\in K[Y]$. Let $L$ be the extension obtained by adjoining a root of $g(Y)$ to $K$; call this root $\alpha$ (that is, $\alpha$ is a $p$th root of $x$). It is not hard to verify that $g(Y)$ is irreducible (e.g., use Eisenstein’s Criterion in $\mathbb{F}_p[x][Y]$), so that $g(Y)$ is the monic irreducible polynomial of $\alpha$ over $K$. However, in $L$ we have $g(Y) = Y^p-x =(Y-\alpha)^p$, so that $g(Y)$ has a single root, repeated $p$ times. You can see that $g’(Y)=0$, so that’s where your argument breaks down.
Is this a basis for the set S?
Put $$a\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} + b\begin{bmatrix} 0 & 1 \\ -1 & 0 \\ \end{bmatrix} =0,$$ the zero matrix. If you show that $a=b=0$, then the matrices are linearly independent. Also, the fact that they span the set is obvious.
Characterisation of primitive elements
For any $x\in L$, $\sigma\in\mathrm{Gal}(L/K)$, $\sigma(x)=x$ if and only if $\sigma$ fixes $K(x)$ pointwise. So, $K(x)=L$ if and only if the only automorphism fixing $K(x)$ is 1, if and only if the only automorphism fixing $x$ is 1.
Relations and Combinatorics exercise
For two classes, instead of summing over binomial coefficients, just note that there are $2^{|A|}$ different subsets of $A$. Two of those ($\varnothing$ and $A$ itself) cannot be used as equivalence classes, but for the rest of them, each $B\subset A$ gives rise to an ordered pair of equivalence classes $(B,A\setminus B)$. Each equivalence relation is described by such ordered pairs in exactly two ways, so the total number is $\frac{2^{|A|}-2}{2}$. This almost matches what you've already got, except that you've forgotten to divide by two. For three classes the same strategy can be used, though it is a bit more involved to subtract the choices that don't work. First, there are $3^{|A|}$ ways to assign the labels a, b, and c to each of the elements of $A$. We have to subtract the assignments where a is never used -- there are $2^{|A|}$ of those -- the ones where b is never used ($2^{|A|}$ of those), and the ones where c is never used (yet another $2^{|A|}$). But now the three assignments where everything has the same label have been subtracted twice each, so those have to be added back in. (This is just an instance of the inclusion-exclusion principle.) Finally divide by the $3!$ ways to express each equivalence relation, because the order of the labels doesn't matter. We get $$ \frac{3^{|A|} - 3\cdot 2^{|A|} + 3}{3!} $$ equivalence relations with 3 classes.
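Both formulas are easy to cross-check by brute force; here is a small Python sketch that counts surjective labelings and divides by the number of label orderings:

```python
from itertools import product
from math import factorial

def partitions_into(n, k):
    # count labelings of n elements using exactly k labels (all used),
    # then divide by k! since the order of the labels doesn't matter
    surjections = sum(1 for labels in product(range(k), repeat=n)
                      if len(set(labels)) == k)
    return surjections // factorial(k)

for n in range(3, 8):
    print(n, partitions_into(n, 2), (2**n - 2) // 2,
             partitions_into(n, 3), (3**n - 3 * 2**n + 3) // 6)
```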
Given a p.d.f. of $X$, find the p.d.f. of $Y=4-3X$
You're given $y = 4-3x$. Let this be $y=h(x)=4-3x$. Then $x=h^{-1}(y)=\dfrac{4-y}{3}$. So by the given formula, $$f_Y(y)=\frac{f_X(h^{-1}(y))}{|h'(h^{-1}(y))|}=\frac{\dfrac{5}{\left(\frac{4-y}{3}\right)^2}}{|-3|}=\frac{15}{(4-y)^2}.$$ Note that $h'$ is constant, since $h$ is affine: $$h'(\color{blue}{x})=-3\,\,\forall x\implies h'(\color{blue}{h^{-1}(y)})=-3.$$ On the other hand, if you've noticed, a shorthand for this when $Y=\alpha X+\beta$ is $$f_Y(y)=\dfrac{f_X\left(\dfrac{y-\beta}{\alpha}\right)}{|\alpha|}$$ for $\alpha\ne0.$
If $|f|$ is differentiable then is $f$ differentiable
The function $f$ defined to be $1$ on $\mathbb{Q}$ and $-1$ on $\mathbb{R}-\mathbb{Q}$ is such that $|f|$ is differentiable but not $f$.
Convergence of $z_n = i+\left(\frac{2+3i}{5}\right)^n$
$$\left| z_n -i \right| = \left| \frac{2+3i}{5}\right|^n = \left(\sqrt{(4+9)/25}\right)^n \rightarrow 0.$$
When is it allowed to "take apart" a limit (multiplication of limits)?
If each one of the individual limits exists. Here, you're just pulling out a constant $e-1$, which you can do so long as the constant is not $0$. For example: $$\lim_{n\to\infty} (-1)^n \frac{1}{n}\not= \lim_{n\to\infty} (-1)^n\cdot\lim_{n\to\infty}\frac{1}{n}$$ because the right side doesn't exist. Or: $$\lim_{n\to\infty}0\cdot n\not=0\cdot\lim_{n\to\infty}n$$
How to approach this series using comparison test: $\sum\limits_{n=1}^{\infty} \frac1{\sqrt[3]{n^2-\frac12}}$?
For $n\ge1$, the AM-GM inequality gives $$\sqrt[3]{\frac 1{(n-\frac 1{\sqrt{2}})(n+\frac 1{\sqrt{2}})}} > \sqrt[3]{\frac{1}{n^2}}$$ (which surprisingly is the same as using common sense; never mind), so $$ \sum_{n=1}^{\infty} \frac{1}{\sqrt[3]{n^2-\frac{1}{2}}} > \sum_1^\infty \frac{1}{n^{2/3}}$$ If $n>1$ then $n^{2/3}<n$, which is a consequence of the trivial inequality*, so $$ \sum_{n=1}^{\infty} \frac{1}{\sqrt[3]{n^2-\frac{1}{2}}} > \sum_1^\infty \frac{1}{n}$$ which diverges. There you go. If you need more comfort with handling comparisons I suggest going through inequalities like the trivial inequality, the RMS-AM-GM-HM inequality and the Cauchy-Schwarz inequality, although these might not be that useful here. (*) $n^2>n \rightarrow n^3>n^2 \rightarrow n>n^{2/3}$, since we are taking principal roots and real $n>1$.
Invertibility of a Matrix. Completion of a basis
You are asking basically whether an independent set of vectors in a finite-dimensional vector space (in this case, the rows of $(N_1, N_2)$) can be completed to a basis. The answer is always yes. The side condition that $\operatorname{rank}(M_1) = n-m$ is easy to ensure by taking $M_1$ to be a large multiple of the identity.
(f o g) o f composition function
The brackets make no difference: composition of functions is associative, meaning that $(f\circ g)\circ h=f\circ(g\circ h)$.
Constrained Optimization Problem with system of differential equations
If I understand you correctly, you want to solve $$\min_u g(x) \quad\text{such that}\quad dx/dt = f(x,u,t).$$ Is that correct? In that case, check out an optimization technique called the adjoint method, which is nothing more than an application of the implicit function theorem to the solution of the differential equation.
Expanding a know result about Lindelöf spaces to, say, compact or locally compact spaces?
Compactness has a straightforward analog: If $X$ is a finite union of compact subspaces, then $X$ is compact. For local compactness, countable unions are definitely out, as $\Bbb Q$ shows. For finite unions, consider $A=\{(x,\sin(\frac1x)): x > 0\}$, which is locally compact (homeomorphic to $\Bbb R$) and $B = \{(0,0)\}$, which is even compact, but $A \cup B$ is not locally compact, as the origin has no compact neighbourhoods. So little seems possible there as well.
Three Proper Ladies on a train
This is a variant of the blue eyes puzzle, where "100 people on an island" has been replaced by "three ladies on a train", "blue eyes" has been replaced by "dirt on their face" and "leaving the island" by "turning red". It can be solved the exact same way.
Is an open and path-connected set in $\mathbb{R}^n$ still path-connected after removing points?
Suppose $B = A \setminus \{x_1,\dots,x_k\}$. For each point $x_n$, there is an open ball $B_n$ such that $x_n \in B_n\subset A$. Given any two points in $B$, we can connect them with a path that stays in $A$. If the path passes through some $x_n$, then we can use the boundary of $B_n$ to detour around the point, shrinking $B_n$ if necessary.
What is the result of multiplying two disjunctive probabilities
The term $p(1-p)$ often appears in the context of Bernoulli trials with success probability $p$. Specifically, it is the probability that the first of two Bernoulli trials is successful and the second is not. It is also the probability that the second of two Bernoulli trials is successful and the first is not. Note that by the above observations, the probability of exactly one success in two trials is $2p(1-p)$ since there are two ways we can choose the successful trial (either as the first or as the second). This is a special case of the binomial distribution.
Geometric intuition of composition of hyperbolic and inverse hyperbolic trig functions
Solved it myself. We can mirror the argument above, but using the hyperbolic identity $\cosh^2 - \sinh^2 = 1$ in place of Pythagoras. Write $y = \cosh(\mathrm{arcsinh}\ x) = \cosh(\theta)$. Then $y = \cosh \theta, x = \sinh \theta$, so $y^2 - x^2 = 1 \implies y = \sqrt{1 + x^2}$.
Counting ordered pair $(m,n)$ in natural numbers satisfying an inequality
Assuming your natural numbers start at $0$, an exact expression would be $$\sum_{m=0}^{\left\lfloor\dfrac N{\log 2}\right\rfloor}\left\lfloor1+\dfrac {N-m\log 2}{\log 3}\right\rfloor.$$ We're basically letting $m$ assume all possible values, then for each $m$ calculating the range of possible $n$ values. This approach can be extended to multiple primes, e.g. $$\sum_{x_2=0}^{\left\lfloor\dfrac N{\log 2}\right\rfloor}\sum_{x_3=0}^{\left\lfloor\dfrac {N-x_2\log2}{\log 3}\right\rfloor}\sum_{x_5=0}^{\left\lfloor\dfrac {N-x_2\log2-x_3\log3}{\log 5}\right\rfloor}\left\lfloor1+\dfrac {N-x_2\log 2-x_3\log3-x_5\log5}{\log 7}\right\rfloor.$$ This is all a bit messy. If only $\log 2$ and $\log 3$ were integers, then we could use Pick's Theorem! A fairly accurate approximation is $\dfrac12 \left(\left\lfloor\dfrac N{\log 2}\right\rfloor+1.5\right)\left(\left\lfloor\dfrac N {\log 3}\right\rfloor+1.5\right)$. This approximates the number of lattice points (points with integer coordinates) under the line $m \log 2 + n \log 3 = N$ by halving an (almost) average of what we'd get if we rounded down the intercepts vs. rounding up.
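A small Python sketch comparing the exact sum with the approximation (the test values of $N$ are arbitrary):

```python
from math import floor, log

def exact(N):
    # sum over m of the number of admissible n values
    return sum(1 + floor((N - m * log(2)) / log(3))
               for m in range(floor(N / log(2)) + 1))

def approx(N):
    return 0.5 * (floor(N / log(2)) + 1.5) * (floor(N / log(3)) + 1.5)

for N in (5, 10, 20, 40):
    print(N, exact(N), round(approx(N), 1))
```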
At a Critical Point the Hessian Equals the First Nonconstant Term in the Taylor Series of $f$?
Well, the first term in the Taylor series is $f(x_0)$. This doesn't depend on $h$ and so we are calling it "constant". Usually the next term contains $\nabla f(x_0)$, but at a critical point this is zero. Therefore the first term in the Taylor series that depends on $h$ is the term involving the Hessian.
Cash flows using bond investment to balance liabilities
$\begin{array}{cccc}\text{Bond}&\text {Flow in year 1}&\text {Flow in year 2}&\text {Flow in year 3}\\A&1.07\\B&0&1\\C&0.05&0.05&1.05\end{array}$ These are the flows for each dollar of par value of the bonds purchased. The yield will affect the price of the bonds, but you haven't been asked anything about that. So, we will ignore that part of the question. If you buy par value of $x,y,z$ of bonds A,B,C respectively, $$1.05 z = 100\\ y + 0.05z = 102\\ 1.07x + 0.05z = 99$$ Now you have a system of three equations in three unknowns. Solve for $x,y,z$.
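A sketch of the solve in Python with NumPy (the matrix rows restate the flow table above):

```python
import numpy as np

# rows = bonds A, B, C; columns = flow per dollar of par in years 1, 2, 3
flows = np.array([[1.07, 0.00, 0.00],
                  [0.00, 1.00, 0.00],
                  [0.05, 0.05, 1.05]])
liabilities = np.array([99.0, 102.0, 100.0])   # years 1, 2, 3

# year-t liability = sum over bonds of (par bought) * (flow in year t)
x, y, z = np.linalg.solve(flows.T, liabilities)
print(x, y, z)   # par amounts of A, B, C
```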
How to prove that the result of multiplying two number of the same set is in the same set?
In general, when we define in mathematics a structure $S$, this is (at least) a "couple" $\langle D, * \rangle$ made of a domain $D$ of "objects" and an operation ("$*$", e.g. a binary one) defined on them, and we write $S = \langle D , * \rangle$. In this case, "by definition" the domain is "closed under" the operation, i.e. for all $a, b \in D$, we have that $a*b \in D$. The simplest example is $\langle \mathbb N, + \rangle$. But after having defined the structure, we can introduce new ("derived") operations, like "$-$" in $\mathbb N$. In this case, it is not true in general that the structure is still closed under the new operation. If we define $a - b$ as the $x$ such that $a = b+x$ (defined only when such an $x$ exists), we have that, for example, $2 - 3$ is not defined in $\mathbb N$.
Evaluate $\displaystyle\int\frac{\arcsin e^x}{e^x}\,dx$.
$$ I=\int -\arcsin e^x\,de^{-x}=-\arcsin e^x\cdot e^{-x}+\int\frac{dx}{\sqrt{1-e^{2x}}} $$ Then, let $t=e^{-x}$ to integrate, $$\int\frac{dx}{\sqrt{1-e^{2x}}}=-\int\frac{dt}{\sqrt{t^2-1}}=-\operatorname{arccosh} t +C$$ Thus, $$ I=-e^{-x}\arcsin e^x-\operatorname{arccosh} e^{-x}+C$$
Is $f(g)$ homogeneous? If so, of what degree?
Work from the inside outwards. So $f(g(tx))=f(t^kg(x))=(t^k)^kf(g(x))$.
Lipschitz modulus of quadratic loss function
A quadratic function with positive definite matrix is strongly convex and, for that reason, has unbounded Lipschitz modulus. First, notice that $Q\in\mathbb{S}_{++}^n\Rightarrow \lambda_{\min}(Q)\geq m>0$, hence your quadratic function is strongly convex. Since it is $\sigma$-strongly convex with $\sigma=\lambda_{\min}(Q)$, it implies that $$ f(\mathbf{y})- f(\mathbf{x})\geq \nabla f(\mathbf{x})^T(\mathbf{y}-\mathbf{x})+\frac{\sigma}{2}\|\mathbf{y}-\mathbf{x}\|^2\overset{\textbf{C.S.}}{\geq}\|\mathbf{y}-\mathbf{x}\|\left(\frac{\sigma}{2}\|\mathbf{y}-\mathbf{x}\|-\|\nabla f(\mathbf{x})\|\right) $$ where the RHS goes to infinity as $\|\mathbf{y}-\mathbf{x}\|\to\infty$. This means that $|f(\mathbf{x})- f(\mathbf{y})|$ is unbounded, as you correctly suspected.
How to determine a coplaner perpendicular vector?
Take the cross product of the given vector and the normal vector of the plane. The resulting vector is perpendicular to both vectors, so it is perpendicular to the given vector and lies on the plane as required.
Nonlinear global optimization: sum of minima vs minimum of sum
Yes, it is correct. You can prove it easily by contradiction. If the two expressions are not equal, the smaller side provides a better solution for the larger side, contradicting minimality of the larger side.
Find $x$-coordinate of $Q$ from intersection of curve $y=ax^3$ and its tangent line
Now, write $$x^3-3t^2x+2t^3=x^3-2tx^2+t^2x+2tx^2-4t^2x+2t^3=(x-t)^2(x+2t).$$ Can you end it now?
Minimum moves needed for everyone to meet
We can create a Steiner system $S(2,n,n^2)$ for any $n$ where an affine plane exists. By the definition of a Steiner system, each unique pair of elements will appear exactly once. Each person needs to meet $n^2-1$ others. They will meet $n-1$ other people after each move. $n^2-1=(n-1)(n+1)$ so the Steiner system will consist of $n+1$ sets of $n$ blocks, each block containing $n$ elements. This corresponds to $n$ moves from the starting position. Here is the system for $n=2$: $$\{AB\}\{CD\}$$ $$\{AC\}\{BD\}$$ $$\{AD\}\{BC\}$$ However, such systems do not exist for all $n$. An affine plane exists iff a projective plane of the same order exists. It is known that a projective plane exists for all orders that are a power of a prime: $2, 3, 4, 5, 7, 8, 9, 11,\dots$ (OEIS A000961 without the first term). It is conjectured but not proven that these are the only orders for which a projective plane exists. It has been proven that projective planes do not exist for orders $6$ and $10$. I don't know the best configurations for the values of $n$ not covered above. As a further illustration, here's a solution for $n=3$: $$\{ABC\}\{DEF\}\{GHI\}$$ $$\{ADG\}\{BEH\}\{CFI\}$$ $$\{AEI\}\{BFG\}\{CDH\}$$ $$\{AFH\}\{BDI\}\{CEG\}$$
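For prime $n$, the rounds are exactly the parallel classes of lines of the affine plane $AG(2,n)$; here is a sketch in Python (arithmetic mod $n$, so it only works for $n$ prime; prime powers would need finite-field arithmetic). For $n=3$ it reproduces the four rounds listed above:

```python
def rounds(n):
    # people sit at the grid points of AG(2, n); each round is a parallel
    # class of lines: one class per slope a, plus the vertical lines
    name = {(x, y): chr(65 + x * n + y) for x in range(n) for y in range(n)}
    classes = []
    for a in range(n):
        classes.append([{name[(x, (a * x + b) % n)] for x in range(n)}
                        for b in range(n)])
    classes.append([{name[(x, y)] for y in range(n)} for x in range(n)])
    return classes

for cls in rounds(3):
    print(["".join(sorted(block)) for block in cls])
```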
Do I have to sum twice the symmetric Christoffel symbols in the geodesic equation?
Yes, the summation here is over the $n^2$-dimensional array $1 \leq \nu \leq n, 1 \leq \rho \leq n$. You can also write the second term as $$ \sum_{\nu = 1}^n \Gamma^{\mu}_{\nu \nu} \frac{dx^{\nu}}{dt} \frac{dx^{\nu}}{dt} + 2 \sum_{1 \leq \nu &lt; \rho \leq n} \Gamma^{\mu}_{\nu \rho}\frac{dx^{\nu}}{dt} \frac{dx^{\rho}}{dt}. $$
Partial linear relaxation yields an integer solution
This doesn't seem to be true. Take $|I| = |J| = 3$, and pick three neighbourhoods, namely $\{1,2\}, \{1, 3\},$ and $\{2,3\}$. Take $c_i = 50$ for all $i$ and $f_i = 1$ for all $i$. The optimal solution if we let $x$ be fractional is $x = (\frac12, \frac12, \frac12)$, $y = (0, 0, 0)$, which has objective value $1.5$. You can check that, if $x$ and $y$ must both be integral, the optimal solution has objective value $2$. EDIT: However, for a fixed integral $x$, it is quite easy to find an optimal $y$; set $y_i$ to 1 whenever $c_i$ is negative or the corresponding constraint would otherwise be violated; set $y_i$ to zero otherwise.
Uniqueness in $C([0,T])$ of solution found by Picard-Lindelof Theorem
Your work so far has shown that there exists a solution $u$ which is unique in the subset $B_K(\alpha)$ (the $K$-ball around the constant function $\alpha$ in $C([0,T])$). We want to show this function is in fact unique in the whole space $C([0,T])$. Suppose $v \in C([0,T])$ is also a solution to the IVP, and let $\tau = \inf\left(\{T\} \cup \{t: t < T, |v(t) - \alpha| > K\}\right)$. Note that, since $v$ is continuous and $v(0) = \alpha$, we must have that $\tau > 0$. Observe that $$u, v \in Y := \{w \in C([0,\tau]) : \sup_{[0,\tau]} |w(t) -\alpha| \leq K\}$$ and as a corollary of the Picard-Lindelof contraction argument you performed, you can conclude that $u \equiv v$ on $[0,\tau]$ (intuitively, we still get uniqueness if we look over a shorter time-scale). If $\tau = T$, then $u \equiv v$ in $C([0,T])$. Since this is the result we want, our goal now is to show that $\tau < T$ gives a contradiction. Suppose $\tau < T$, and let $\beta = u(\tau)$. By the definition of $\tau$, there are $s$ with $\tau < s < T$ arbitrarily close to $\tau$ for which $|v(s) - \alpha| > K \geq |u(s) - \alpha|$, so in particular $v(s) \neq u(s)$. This means that the IVP $$\left\{\begin{array}{cc} \partial_s w(s) = f(w(s)) &\qquad \tau < t < T\\ w(\tau) = u(\tau)& \end{array}\right.$$ has two solutions in any set $Z = \{w \in C([\tau, T^*]) : \lVert w - \beta \rVert < M\}$ for any $M > 0$ and $T^* > \tau$, since $u$ and $v$ instantly separate after time $\tau$. But, you proved earlier that we can always find a 'locally unique' solution to this ODE, so this is a contradiction.
A Lemma about Sylow Subgroups
Hint 1: Let a given Sylow subgroup act by conjugation on the set of all Sylow subgroups. What are the sizes of the orbits? Hint 2: Here's a similar question. It's not quite exactly the same question, but it's close. See how you can use it with your situation. Answer: Using the second hint, if $p^a = \text{min}([P:P\cap Q] \ | \ P\neq Q\in\text{Syl}_p(G))$, then $n_p\equiv 1\bmod p^a$. Now $a=1$ if and only if $[P: P\cap Q]=p$ for some $Q\in\text{Syl}_p(G)$, and if $a>1$ then $p^2$ divides $p^a$ and so $n_p\equiv 1\bmod p^2$. This gives your desired result.
Proving $\operatorname{Tr}((AB)^m)=\operatorname{Tr}((BA)^m)$
Use $\operatorname{Tr}UV=\operatorname{Tr}VU$ with $U:=A,\,V:=(BA)^{m-1}B$.
If $x=(9+4\sqrt{5})^{48}=[x]+f$, find $x(1-f)$.
Consider $$P=(9+4\sqrt 5)^{48}+(9-4\sqrt 5)^{48}.$$ Note that $P$ is an integer. Now we have $0\lt 9-4\sqrt 5\lt 1$. Hence we have $$0\lt (9-4\sqrt 5)^{48}\lt 1.$$ Hence, we have $$x=(9+4\sqrt 5)^{48}=P-1+1-(9-4\sqrt 5)^{48}.$$ This implies that $f=1-(9-4\sqrt 5)^{48}$. Thus, we have $$x(1-f)=(9+4\sqrt 5)^{48}(9-4\sqrt 5)^{48}=1.$$
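The identity can be checked exactly with integer arithmetic in $\mathbb Z[\sqrt 5]$, representing $a+b\sqrt 5$ as the pair $(a,b)$; a Python sketch:

```python
def mul(u, v):
    # multiply a + b*sqrt(5) by c + d*sqrt(5), as integer pairs (a, b)
    a, b = u
    c, d = v
    return (a * c + 5 * b * d, a * d + b * c)

x = (1, 0)
for _ in range(48):
    x = mul(x, (9, 4))       # now x represents (9 + 4*sqrt(5))**48
a, b = x
print(mul(x, (a, -b)))       # (1, 0): the product with the conjugate is 1
print(2 * a)                 # P = x + conjugate = 2a, an integer
```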
How to prove that a set is not totally ordered?
You need to find two elements of the order which are incomparable. That is what it means for the order not to be total. In the case of a power set ordered by inclusion, this means $A$ and $B$ such that $A\nsubseteq B$ and $B\nsubseteq A$. Do note that this requires that $X$ has at least two elements.
Differentiability in $\mathbb R^3$
If $f$ is differentiable, then there is a linear map $L\in \mathcal L(\Bbb R^3,\Bbb R)$ such that $$\lim_{h\to 0}\frac{f(a+h_1,b+h_2,c+h_3)-f(a,b,c)-L(h_1,h_2,h_3)}{\|h\|}=0,$$ where $h=(h_1,h_2,h_3)$. In particular $$\lim_{h_1\to 0}\frac{f(a+h_1,b,c)-f(a,b,c)-L(h_1,0,0)}{h_1}=0$$ with $h_1\mapsto L(h_1,0,0)\in \mathcal L(\mathbb R,\mathbb R)$. Therefore $x\mapsto f(x,b,c)$ is differentiable at $x=a$ and thus $\frac{\partial f}{\partial x_1}(a,b,c)$ exists. Do the same for $\frac{\partial f}{\partial x_2}$ and $\frac{\partial f}{\partial x_3}$ and you'll get the result.
I have $n$ points on a unit sphere; how can I show using set-builder notation that they are as far away from each other as possible?
It's difficult to show this, but easy to express it! You could characterize an extremal configuration ${\bf a}$ of $n$ points on $S^2$ by writing $${\bf a}\in {\rm argmax}_{\,{\bf x}\in (S^2)^n}\bigl(\min\nolimits_{1\leq i<j\leq n} \|x_i-x_j\|\bigr)\ .$$ I don't know whether this is more to the point than describing the idea in words.
Number of ordered pairs of $A,B$ in Probability
The condition should be $1\le y\lt x$, not $\le$. Since $z=xy/6$, $x$ or $y$ must be divisible by $2$ and by $3$, and $xy\le6$. With $1\le y\lt x$, that allows only two solutions: $x=6$, $y=1$ and $x=3$, $y=2$. In the first case, we have $6$ choices for the element in $y$. In the second case we have $z=3\cdot2/6=1$, so there must be exactly one common element. That gives us $6$ choices for the common element, then $5$ choices for the other element in $y$ and then $\binom42=6$ choices for the other two elements in $x$. Thus the total number of ordered pairs is $6+6\cdot5\cdot6=186$.
Evaluate the following definite integral $\int_{0}^t\cos(\sin s)\, ds$?
As proposed in the comments, we can use the Jacobi-Anger expansion: $$\cos(\sin(s)) = J_0(1)+2\sum_{n=1}^\infty J_{2n}(1)\cos(2ns).$$ Integrating with respect to $s$, we obtain: $$I = \int_0^t\cos(\sin(s))ds = t\left(J_0(1)+2\sum_{n=1}^\infty J_{2n}(1)\frac{\sin(2nt)}{2nt}\right).$$ Maybe the only cases with closed form solution are for $t = k\pi$, $k\in\mathbb{Z}^+$, when $\sum_{n=1}^\infty J_{2n}(1)\frac{\sin(2\pi nk)}{2\pi nk}=0$, in which case $I=k\pi J_0(1)\approx 2.40393943k$. Somewhat interestingly, since $\frac{\sin(2nt)}{2nt}\leq \frac{1}{2nt}$, and $\sum_{n=1}^\infty J_{2n}(1) = \frac{1-J_0(1)}{2}$, we can upper bound the integral as $$I = \color{red}{\int_0^t\cos(\sin(s))ds} = tJ_0(1)+2t\sum_{n=1}^\infty J_{2n}(1)\frac{\sin(2nt)}{2nt}\leq tJ_0(1)+2\sum_{n=1}^\infty\frac{J_{2n}(1)}{2n}<\color{blue}{(t-1)J_0(1)+1},$$ thus the dominant term is $tJ_0(1)$ (see the picture).
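A numerical sanity check of the expansion in Python with SciPy (the test value $t=1.3$ is arbitrary, and the series is truncated at $20$ terms, more than enough given how fast $J_{2n}(1)$ decays):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

t = 1.3   # an arbitrary test value
lhs, _ = quad(lambda s: np.cos(np.sin(s)), 0, t)
rhs = t * jv(0, 1) + sum(2 * jv(2 * n, 1) * np.sin(2 * n * t) / (2 * n)
                         for n in range(1, 21))
print(lhs, rhs)   # the two values agree to high precision
```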
Checking my understanding of Cauchy's functional equation $f(x+y)=f(x)+f(y)$
In the context that the Wikipedia article claims the solutions have this form, they are considering the case where $f$ is a function from $\mathbb{Q}$ to $\mathbb{Q}$ (though this assumption is not written down explicitly). So by assumption, $f(1)$ must be in $\mathbb{Q}$. If instead you consider (say) functions $f:\mathbb{Q}\to\mathbb{C}$, then you are correct that $f(x)=cx$ is a valid solution for any $c\in\mathbb{C}$. When the domain is not restricted to $\mathbb{Q}$, I'm not sure I follow what you're asking--what "induction step" in particular are you talking about? The point is that you have given a proof which shows that $f(x)=xf(1)$ for all $x\in\mathbb{Q}$, and this proof makes essential use of the fact that $x=\pm m/n$ for some $m,n\in\mathbb{N}$ (and indeed at various points uses induction on $m$). So it is obvious that the proof does not apply to irrational numbers, at least not without some major modification.
Figuring out a sequence/pattern of numbers
If you notice carefully, this is what is going on apparently: In each case, we shift the entries $e_n, \cdots, e_8$ one place to the right and put $e_9$ in the $n$-th position. We do this with $n$ starting from $8$ and going down all the way to $1$, and repeat the loop till we get the sequence $9, 8, 7, \cdots, 1$.
A sufficient condition for almost everywhere equality
Note that for any $a \in \mathbb{R}$ the sets $\{ x : f(x) > a \}$ and $\{ x : g(x) > a \}$ are of either one of the following forms: $\varnothing$ $(0, \alpha)$ $(0, \alpha]$ $(0, \infty)$ for some $\alpha > 0$, thus one is always contained in the other. Hence if $m \big( \{ x : f(x) > a \} \big) = m \big( \{ x : g(x) > a \} \big)$, then $m \big( \{ x : f(x) > a \geqslant g(x) \} \big) = 0.$ Moreover, for every $x \in (0, \infty)$ $$f(x) > g(x) \implies (\exists q \in \mathbb{Q}) \ f(x) > q \geqslant g(x),$$ so $$\{ x : f(x) > g(x) \} \subseteq \bigcup_{q \in \mathbb{Q}} \{ x : f(x) > q \geqslant g(x) \}$$ but $$m \left( \bigcup_{q \in \mathbb{Q}} \{ x : f(x) > q \geqslant g(x) \} \right) = 0.$$
How would I go about proving discontinuity at multiple points for this function?
In order to prove that $f$ isn't continuous at $1$, for example, it suffices to show that there exists one sequence of real numbers $x_n \in [0,1]$ with $x_n \to 1$ such that $$\lim_{n\to \infty} f(x_n) \neq f(1)=1$$ Consider for example the sequence $x_n := 1-\frac{1}{n}\in [0,1]$. Note that $$\frac{1}{x_n} = \frac{1}{1-\frac{1}{n}} = \frac{1}{\frac{n-1}{n}} = \frac{n}{n-1} = 1 +\frac{1}{n-1}$$ is not a natural number for any $n\geq 3$. In a similar way, you can show that $f$ isn't continuous at $\frac{1}{n}$, $n\in \mathbb{N}$.
Equation for distance from a point outside a sphere to any point on its surface
The center of the sphere is at $\mathbf{o}$ and its radius is $r$. So, any general point on the surface of the sphere is given by $\mathbf{p} = \mathbf{o} +r \mathbf{\hat{e}}$, where $\mathbf{\hat{e}}$ is the radial unit vector in spherical coordinates. In Cartesian coordinates, $$\mathbf{\hat{e}} =\sin{\theta}\cos{\phi}\, \mathbf{\hat{i}} + \sin{\theta}\sin{\phi}\,\mathbf{\hat{j}} + \cos{\theta}\, \mathbf{\hat{k}}$$ where $\mathbf{\hat{i}},\mathbf{\hat{j}},\mathbf{\hat{k}}$ are unit vectors along the X, Y, Z directions respectively. So, what you are looking for is $dist(\mathbf{m,p})$. If you already know the point $\mathbf{p}$, just find this distance. In order to plot this function, just vary $\theta$ from $0$ to $180$ degrees and $\phi$ from $0$ to $360$ degrees to cover the whole sphere and find $dist(\mathbf{m,p})$ for all the points. Store the values in an array and plot them. Let me know if you need code in MATLAB or some other language.
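Here is a sketch in Python rather than MATLAB (the centre, radius, and external point are placeholder values):

```python
import numpy as np

o = np.array([0.0, 0.0, 0.0])    # sphere centre (placeholder)
r = 2.0                          # sphere radius (placeholder)
m = np.array([5.0, 1.0, -2.0])   # the external point (placeholder)

theta = np.linspace(0.0, np.pi, 181)        # polar angle
phi = np.linspace(0.0, 2.0 * np.pi, 361)    # azimuthal angle
T, P = np.meshgrid(theta, phi)

# points on the sphere: p = o + r * e_hat(theta, phi)
px = o[0] + r * np.sin(T) * np.cos(P)
py = o[1] + r * np.sin(T) * np.sin(P)
pz = o[2] + r * np.cos(T)
dist = np.sqrt((px - m[0])**2 + (py - m[1])**2 + (pz - m[2])**2)

# sanity check: the extremes should be |m - o| - r and |m - o| + r
print(dist.min(), dist.max(),
      np.linalg.norm(m - o) - r, np.linalg.norm(m - o) + r)
```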
Why isn't every vector bundle morphism an isomorphism?
First of all, the two vector bundles could have different rank (in which case they could not possibly be isomorphic, but there could still be non-trivial maps between them), i.e., the $n$ of $p^{-1}(b) \cong \{b\} \times \mathbb{R}^n$ does not have to be the same as the $n'$ of $p'^{-1}(b) \cong \{b\} \times \mathbb{R}^{n'}$. And secondly, even if $n=n'$, not every map from $\mathbb{R}^n$ to $\mathbb{R}^n$ is an isomorphism. You are not just requiring that the fibers are isomorphic in some abstract way, but that the unique map induced by the morphism of vector bundles is an isomorphism. And this is far from being automatic. To make it more explicit, you have the following commutative diagram: $$\require{AMScd} \begin{CD} p^{-1}(\{b\}) @>{F}>> p'^{-1}(\{b\})\\ @V{\varphi}VV @V{\varphi'}VV \\ \{b\} \times \mathbb{R}^n @>{\varphi' \circ F \circ \varphi^{-1}}>> \{b\} \times \mathbb{R}^{n'} \end{CD}$$ and $F$ is an isomorphism if and only if the bottom map is an isomorphism, but this is not automatic from the fact that $\varphi$ and $\varphi'$ are isomorphisms (where $\varphi:= \varphi_\alpha\vert_{p^{-1}(\{b\})}$ for some chart $U_\alpha$ containing $p^{-1}(\{b\})$, and similarly for $\varphi'$).
Complex integral constant in real integration
Sure, it can be complex: The indefinite integral $$\int f(x) dx=g(x)+C$$ is the antiderivative of the integrand $f(x)$. What this implies is that if you differentiate $g(x)+C$, you should get back to $f(x)$, and you will do this for any $C$, complex or not, as long as it isn't a function of $x$.
Solving $(Ax + B)\sin x = C$ for $x$
As was pointed out in the comments, if $A=0$ the equation reduces to the trivial $\sin x = C/B$, so let's assume $A \neq 0$. One obvious simplification: WLOG, $C\in \{0,1\}$; otherwise, divide both sides by $C$ and rename $A/C \to A$ and $B/C \to B$. Now if $C=0$ we get $(Ax+B)\sin x = 0$, which either implies $x = -B/A$ or $x \in \{n\pi\}_{n \in \mathbb{Z}}$. The final case is indeed $(Ax+B)\sin x = 1$, which does not have easily expressible solutions. I would recommend either: if you have a computer available, for the interesting set of $(A,B)$ compute it using Newton's Method (or Wolfram Alpha) to get numerical results; if no computer is available, you can eyeball it by drawing a graph of $\sin x$ and $\frac{1}{Ax+B} = \frac{1}{A(x+B/A)}$ (which is a scaled and translated hyperbola) on the same scale and then doing 1-2 iterations of Newton's Method from your eyeball point to get reasonable precision...
There is a coordinate ball in $\mathbb{S}^n$ whose closure is equall to all of $\mathbb{S}^n$.
The map $q : \overline{\mathbb B^n}\to \mathbb S^n$ collapses $\mathbb S^{n-1}$ to the north pole $n = (0,\ldots,0,1) \in \mathbb S^n$. It establishes a homeomorphism $q' : \mathbb B^n \to U = \mathbb S^n \setminus \{ n \}$. Hence $U$ is a coordinate ball in $ \mathbb S^n$ whose closure is $\mathbb S^n$.
How to solve a differential equation problem that contains an integral term?
From $y(x) = \int_{0}^{x} (y(t))^{\frac{1}{2}} dt + 1$ you get, by the FTC, the initial value problem $y'(x)=(y(x))^{\frac{1}{2}},$ $y(0)=1$. Can you proceed?
why $H^\mathrm{o}=K^\mathrm{o}=\emptyset$
Suppose $x \in H^\mathrm{o}$. Then there is an open interval $(a,b)$ such that $a < x < b$ and $(a,b) \subset H$. But you can show that each open interval contains irrational numbers: for example, take $\sqrt{2}$ and translate it by some fraction $q$ so that $a < q + \sqrt{2} < b$. So no open interval is a subset of $\mathbb{Q}$. (Another way to see this is that every open interval is uncountable while $\mathbb{Q}$ is countable; so it cannot possibly contain an interval.) The argument for $K$ is similar, but it uses that $\mathbb{Q}$ is dense in $\mathbb{R}$. In other words, for any irrational and any open interval surrounding it, you can find a rational close to it. So every open interval contains a rational, which implies $K^\mathrm{o}$ is empty.
Reverse map for an equation .
Yes, you can solve for $a$ by using the inverse hyperbolic tangent: $$x = \tanh ab + c$$ $$x-c = \tanh ab$$ $$\tanh^{-1}(x-c) = ab$$ $$a = \boxed{\dfrac1b \tanh^{-1}(x-c)}$$ Incidentally, the inverse hyperbolic tangent can be written in terms of perhaps more familiar functions as $$\tanh^{-1} y = \log\sqrt{\frac{1+y}{1-y}}.$$
Conflicting sign of line intergrals
It seems that the author aims to make the reader aware of the risk of proceeding by coordinates. In that case, assuming $\vec A=(A_x,A_y,A_z)$ with $A_x>0$, clearly along path $(iv)$ $$\vec A \cdot d\vec r=-A_xdx$$ and it leads to the wrong result $$\int_{(a,a)}^{(0,0)} \vec A \cdot d\vec r=\int_{a}^{0} -A_xdx=\int_{0}^{a} A_xdx$$ The mistake here is in the limits for the integral; indeed, as the parametrization shows, we have that for $x\in [0,a]$ $$r_x=a-x \implies dr_x=-dx$$ and $x=a \iff r_x=0$, $x=0 \iff r_x=a$, therefore the correct step should be $$\int_{(a,a)}^{(0,0)} \vec A \cdot d\vec r=\int_{a}^{0} A_x \cdot dr_x=\int_{0}^{a} -A_xdx=-\int_{0}^{a} A_xdx$$
Solving the following integral (rational function, cubic over linear)
$$\frac{x^3}{x+2}=x^2-2x+4-\frac8{x+2}$$ Thus... in some situations, integration by parts is not the solution. :-)
Contest problem involving primes and factorization
HINT: If $x=5^{5^n}$, then $5^{5^{n+1}}=(5^{5^n})^5=x^5$. Use the factorization of the polynomial $x^5+x+1$.