title | upvoted_answer |
---|---|
Laplace’s equation on the circular zone | HINT: Write the solution as
$$
u(r,\theta) = R(r)\Theta(\theta)
$$
and replace it in the equation
$$
\frac{1}{r}\frac{\partial}{\partial r}\left(r \frac{\partial u}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 u}{\partial\theta^2} = 0
$$
Separate the $R$s on one side, and the $\Theta$s on the other. Solve each equation separately and apply the boundary conditions.
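For reference, here is where that leads (a sketch spelled out as an illustration, with separation constant $\lambda$; it is not part of the original hint):
$$
\frac{r}{R}\frac{d}{dr}\left(r\frac{dR}{dr}\right)=-\frac{\Theta''}{\Theta}=\lambda
\quad\Longrightarrow\quad
r^2R''+rR'-\lambda R=0,\qquad \Theta''+\lambda\Theta=0
$$ |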
Showing that $M_{\tau}$ and $M_{\tau \land n}$ are integrable | Answer for first part: $|M_{\tau \wedge n}| \leq |M_1|+|M_2|+\cdots +|M_n|$ so $E|M_{\tau \wedge n}| \leq E|M_1|+E|M_2|+\cdots +E|M_n| <\infty$.
(There is no reason why $E|M_{\tau}|< \infty$ should hold in general; a counterexample can be found in many books dealing with stopping times.) |
Mental $n-$th root of $N$ | Let $f(x) = x^{1 \over n}$.
Then $f(N+h) \approx f(N) + hf'(N)$ becomes
$(N+h)^{1 \over n} \approx N^{1 \over n} + \dfrac{hN^{1 \over n}}{nN}$
So, if you want to approximate $600^{1 \over 4}$:
$4^4 = 256 \lt 600 \lt 625 = 5^4$
Choose $N = 5^4 = 625$ and $h = 600-625 = -25$.
$600^{1 \over 4} \approx 5 - \dfrac{25 \cdot 5}{4 \cdot 625}
= 5.00 - 0.05 = 4.95$
To the nearest thousandth, $600^{1 \over 4} = 4.949$
Of course, the smaller $h$ is, relative to $N$, the better the approximation will be.
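A quick numerical check of the worked example (a sketch added here; it just redoes the arithmetic in code):
```python
# First-order (tangent line) approximation of the n-th root, as in the example above.
def nth_root_approx(x, n, N):
    """Approximate x**(1/n) via f(N + h) ~ f(N) + h*f'(N), with h = x - N."""
    h = x - N
    root_N = N ** (1.0 / n)            # n-th root of the chosen anchor N
    return root_N + h * root_N / (n * N)

approx = nth_root_approx(600, 4, 625)  # anchor N = 5**4 = 625, so h = -25
exact = 600 ** 0.25
print(approx, exact, abs(approx - exact) / exact)  # ~4.95 vs ~4.9492, relative error of the size estimated below
```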
Approximation of error
\begin{align}
(N+h)^{1 \over n}
&= N^{1 \over n} \left( 1 + \dfrac hN \right)^{1 \over n} \\
&= N^{1 \over n}
\sum_{k=0}^\infty \binom{1 \over n}{k}\left( \dfrac hN \right)^k \\
&\approx N^{1 \over n}
\left(
1
+ \left( \dfrac 1n\right)\left( \dfrac hN\right)
- \left( \dfrac{n-1}{2n^2}\right)\left( \dfrac{h^2}{N^2}\right)
\right)\\
&\approx N^{1 \over n}
\left(
1
+ \dfrac{h}{nN}
- \dfrac{(n-1)h^2}{2n^2 N^2}
\right)
\end{align}
This implies that the relative error is roughly
$100\dfrac{(n-1)h^2}{2n^2 N^2}
\approx
\dfrac{50}n \left( \dfrac hN \right)^2\%$
For the above example, this is
$\dfrac{50}4 \left( \dfrac{25}{625} \right)^2\% = 0.02\%$ |
Motivation for the $3\times 3$ matrix inversion formula | Let's consider the $n \times n$ case.
Let the $n\times n$ matrix $A^{-1} = \begin{pmatrix} \vec{v_1} & \vec{v_2} & \vec{v_3} \dots \vec{v_n} \end{pmatrix} $ where $\vec{v_1}, \vec{v_2}, \vec{v_3}, \dots , \vec{v_n}$ are the $n \times 1$ column vectors representing the columns of $A^{-1}$.
$AA^{-1} = I_n$. This means $A\vec{v_j} = \vec{e_j}$ for all $1 \le j \le n$ where $\vec{e_j}$ is the $n \times 1$ column vector whose $j$th row is a $1$ and every other row is a $0$.
According to Cramer's rule the solution to the equation $A\vec{v_j} = \vec{e_j}$ for $\vec{v_j}$ is as follows:
The number in the $i$th row of the column vector $\vec{v_j}$ is given by $A^{-1}_{ij} = v_{ij} = \frac{\det(D_{ij})}{\det(A)}$ where $\det(D_{ij})$ is the determinant of the matrix $D_{ij}$ formed by replacing the $i$th column of $A$ with the column vector $\vec{e_j}$.
Now another way to find $\det(D_{ij})$ is to expand this determinant along the $i$th column of $D_{ij}$. This column is just the basis vector $\vec{e_j}$. Only the $j$th position down this column is non-zero.
So $\det(D_{ij}) = (-1)^{i+j}\det(M_{ji}) = C_{ji}$ where $M_{ji}$ is the matrix formed by removing the $j$th row and $i$th column of $D_{ij}$.
But since $D_{ij}$ is just $A$ with column $i$ replaced, $M_{ji}$ is the same as the matrix formed by removing/crossing out the $j$th row and $i$th column of $A$, since the column in $A$ that was replaced by $\vec{e_j}$ is being removed/crossed out anyway.
Thus, $M_{ji}$ is the matrix formed by removing/crossing out the $j$th row and $i$th column of $A$. It is the $j, i$ minor matrix of $A$. We also see then that $C_{ji}$ is the $j, i$ cofactor of $A$.
So we have $A^{-1}_{ij} = v_{ij} = \frac{(-1)^{i+j}\det(M_{ji})}{\det(A)} = \frac{C_{ji}}{\det(A)}$.
That is, $A^{-1}_{ij} = \frac{C_{ji}}{\det(A)}$: the $(i,j)$ entry of $A^{-1}$ uses the $(j,i)$ cofactor (pay attention to the location of the indices $i$ and $j$).
In words, the $i$th row and $j$th column of $A^{-1}$ is the $j$th row and $i$th column of the cofactor matrix $C$ divided by the determinant of $A$. By swapping rows and columns in the cofactor matrix (transposing) we get the final statement "the $i$th row and $j$th column of $A^{-1}$ is the $i$th row and $j$th column of the transpose of the cofactor matrix $C$ of $A$".
The transpose of the cofactor matrix of $A$ is the adjugate of $A$ which is written as Adj$(A)$.
So in the end we have arrived at the formula $A^{-1} = \frac{1}{\det(A)}$Adj$(A)$.
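As a quick numerical sanity check of this formula (a sketch using NumPy; the particular $3\times 3$ matrix below is an arbitrary invertible example, not one taken from the question):
```python
import numpy as np

# Build the cofactor matrix, transpose it to get the adjugate, and compare with the inverse.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
n = A.shape[0]

C = np.zeros_like(A)                                            # cofactor matrix of A
for i in range(n):
    for j in range(n):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)   # delete row i and column j
        C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)

adj = C.T                                                       # adjugate = transpose of cofactor matrix
print(np.allclose(adj / np.linalg.det(A), np.linalg.inv(A)))    # True
```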
The crux of this proof was the part where we invoked Cramer's rule so if you can understand an argument for Cramer's rule you can understand the inverse matrix formula. The first two sections in the Wikipedia article on Cramer's Rule could help you with that.
See https://en.wikipedia.org/wiki/Cramer%27s_rule .
Also see here https://people.math.carleton.ca/~kcheung/math/notes/MATH1107/wk07/07_cofactor_expansion.html to learn more about expanding a determinant along a row or column. |
Need help on calculating path integral | Since
$$c(t)=\left(t,\,\frac23t^{3/2},\,t\right)\implies c'(t)=\left(1,\,t^{1/2},\,1\right)\implies \left\|c'(t)\right\|=\sqrt{1+t+1}=\sqrt{t+2}$$
and our integral is
$$\int_0^1\frac{t+\frac23t^{3/2}}{\frac23t^{3/2}+t}\cdot\sqrt{t+2}\,dt=\int_0^1\sqrt{t+2}\,dt=\left.\frac23(t+2)^{3/2}\right|_0^1=$$
$$=\frac23\left(3^{3/2}-2^{3/2}\right)=2\sqrt3-\frac{4\sqrt2}3$$ |
reference recommendation on mathematical logics | The deductive reasoning "commonly used in math proofs on a daily base" is regimented by classical first-order logic. So that's where to start.
If you asked me to suggest just one book for beginners at logic with some mathematical background, then I think I would nowadays recommend the particularly accessible and well-written Ian Chiswell and Wilfrid Hodges, Mathematical Logic (OUP 2007) as the best starting point.
But different people with different backgrounds will like different books. As an introductory guide to just some of what is on offer, you'll find an annotated Teach Yourself Logic reading list/study guide at http://www.logicmatters.net/tyl |
Confused about a proof involving concave functions | $A:= \begin{bmatrix}
\mathbf a_1^T\\
\vdots \\
\mathbf a_m^T\\
\end{bmatrix}$
Assuming a maximum exists, I'll give the uniqueness proof. It is prudent to exploit linearity and consider a simpler function via a change of variables:
in essence, define $g = f -\mathbf c^T\mathbf x$ and assume $\mu := 1$ for now, then simplify notation with
$\mathbf z:=A\mathbf x $ and
$ \mathbf z':=A\mathbf x'$
$g\big(\mathbf z\big) := \sum_{i=1}^m \log\big(b_i - z_i\big)$
$g$ is a strictly concave function (you can show this e.g. by differentiating twice and seeing that the Hessian is a diagonal matrix with strictly negative entries on the diagonal; a sketch of this computation is given below). By the definition of strict concavity,
for any $p\in (0,1)$
$p\cdot g\big(\mathbf z\big)+(1-p) \cdot g\big(\mathbf z'\big)\leq g\big(p\mathbf z+(1-p)\mathbf z'\big)$
with equality iff $\mathbf z = \mathbf z'$
in other words, if two distinct points $\mathbf z$ and $\mathbf z'$ attain a maximum under $g$, then there is an even bigger value in their convex hull-- a contradiction. (It is useful to recall that a convex hull of finitely many points is a polytope, so considering convex hulls is quite natural.) This is essentially the proof. Everything that follows is just book-keeping to show that the above implies the result for the original problem statement.
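For completeness, here is the Hessian computation mentioned above (a quick sketch, valid on the domain where $b_i - z_i > 0$ for all $i$):
$$
\frac{\partial g}{\partial z_i} = -\frac{1}{b_i - z_i},
\qquad
\frac{\partial^2 g}{\partial z_i\,\partial z_j} = -\frac{\delta_{ij}}{(b_i - z_i)^2},
$$
so $\nabla^2 g(\mathbf z)$ is diagonal with strictly negative diagonal entries, hence negative definite, which gives the strict concavity of $g$.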
This is still true for any $\mu \gt 0$, i.e. rescale each side by $\mu$
$\mu\cdot p\cdot g\big(\mathbf z\big)+\mu\cdot (1-p)\cdot g\big(\mathbf z'\big)\leq \mu\cdot g\big(p\mathbf z+(1-p)\mathbf z'\big)$
this implies the function given by
$h_\mu\big(\mathbf x\big) := f_\mu\big(\mathbf x\big) - \mathbf c^T\mathbf x$
cannot have two distinct maximizing x's, $\mathbf x$ and $\mathbf x'$ unless $\mathbf z=\mathbf z'$ i.e. unless $\mathbf x$ and $\mathbf x'$ are equal under the image of $A$, so putting it all together
$p\cdot f_\mu\big(\mathbf x\big) +(1-p) \cdot f_\mu\big(\mathbf x'\big) -\mathbf c^T \big(p\mathbf x+(1-p)\mathbf x'\big) $
$=p\cdot \big\{f_\mu\big(\mathbf x\big) - \mathbf c^T (\mathbf x)\big\}+(1-p)\big\{f_\mu\big(\mathbf x'\big) - \mathbf c^T (\mathbf x')\big\}$
$=p\cdot h_\mu\big(\mathbf x\big)+(1-p)\cdot h_\mu\big(\mathbf x'\big)$
$\leq h_\mu\big(p\mathbf x+(1-p)\mathbf x'\big)$
$= f_\mu\big(p\mathbf x+(1-p)\mathbf x'\big) - \mathbf c^T \big(p\mathbf x+(1-p)\mathbf x'\big) $
with equality iff $A\mathbf x = A\mathbf x'$
this of course simplifies to
$p\cdot f_\mu\big(\mathbf x\big) +(1-p) \cdot f_\mu\big(\mathbf x'\big) \leq f_\mu\big(p\mathbf x+(1-p)\mathbf x'\big) $
one awkward point:
$A \in \mathbb R^{m\times n}$ with rank $m$. If $n > m$ it is short and fat and hence surjective, but also has linearly dependent columns. Thus the optimizing $\mathbf x$ shouldn't be unique, since multiple other choices exist for $\mathbf x'$ which take the same value under the image of $A$. |
Problem 2.9 in Lee's Smooth Manifolds: finding a map between complex projective spaces that makes the diagram commutative | We can get by with simpler notation by using only pairs of complex numbers, say $[Z : W]$ with $(Z, W) \neq (0, 0)$, and writing $z = Z/W$ if $W \neq 0$ an $w = W/Z$ if $Z \neq 0$.
Clearly we're obliged to define $\widetilde{p}\bigl([z : 1]\bigr) = [p(z) : 1]$ for complex $z$. The issue is then to show this mapping extends to the point $[1 : 0]$ at infinity, as you surmise. For that, one standard idiom is to write $z = 1/w$ and multiply through by a suitable power of $w$ to get $\widetilde{p}\bigl([1 : w]\bigr) = [1 : q(w)]$ for some polynomial $q$. |
Elimination of $'t'$ in a locus problem | Hint:
Use $$(1-t^2)^2+(2t)^2=(1+t^2)^2\iff\left(\dfrac{1-t^2}{1+t^2}\right)^2+\left(\dfrac{2t}{1+t^2}\right)^2=1$$
or put $t=\tan u$ |
Analytic continuation on the unit disc | A simple counterexample is $f(z)=\sqrt{1-z}$, where the square root is understood as the principal value. This function has a continuous extension to the closed disk, but it is not Lipschitz continuous on it. If it were holomorphic on a domain containing $\overline{D}$, it would be Lipschitz continuous on $\overline{D}$, due to the derivative being bounded. |
Finding Bounds, Proof by Induction | We shall prove that $5<a_n<6$ for all $n\in \{3,4,5,\cdots\}$. We have that
$$a_2=5+\frac{2}{a_1}=5+2=7$$
$$a_3=5+\frac{2}{7}=5.28571$$
So the proposition holds for $n=3$. Now suppose it holds for some $n\geq 3$ and consider $a_{n+1}$: Obviously
$$a_{n+1}=5+\frac{2}{a_n}>5$$
However, we also know that
$$a_{n+1}=5+\frac{2}{a_n}<5+\frac{2}{5}=5.4<6$$
Thus, $5<a_n<6$ for all $n\in\{3,4,5,\cdots\}$. Additionally, $\frac{1}{2}<a_1,a_2<12$ which satisfy your bounds. |
Finitely generated module with free submodule $S$ and $M/S$ torsion-free implies $M$ is free | If consider $S$ a non-zero cyclic submodule, from $M/S$ finitely generated you get $M$ finitely generated, so $M$ is free. Now you can use this answer.
For an abelian group $G$ and $S$ a non-zero subgroup, having finite index means that $G/S$ is finite, and a finite abelian group is finitely generated and torsion. |
mapping / projection onto axis | Note that the linear map
$$
\begin{bmatrix} 0&0\\0&1
\end{bmatrix}
$$
takes vectors from $\mathbb{R}^2$ to $\mathbb{R}^2$. What vector in $\mathbb{R}^2$ gets mapped to $\begin{bmatrix} 1\\0\end{bmatrix}$?
Taking $b=\begin{bmatrix} 1\\0\end{bmatrix}$ does the mapping satisfy the definition of being onto? Indeed, it should be clear from the geometric interpretation that any vector of the form
$$
\begin{bmatrix} a\\0\end{bmatrix}
$$
with $a \neq 0$ won't be mapped to by the projection. |
Find the density function of X given the joint density distribution X and Y | I suspect the joint density should have been $$
f(x, y) =
\begin{cases}
xe^{-(x + y)} &\quad x>0, y>0 \\
0, &\quad \text{otherwise}
\end{cases}
$$
since this integrates to $1$, i.e. the author probably forgot the parentheses.
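With the corrected density, the marginal of $X$ works out as follows (a short check added here for reference):
$$
f_X(x)=\int_0^\infty x e^{-(x+y)}\,dy = x e^{-x}\int_0^\infty e^{-y}\,dy = x e^{-x},\qquad x>0.
$$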
Your approach to computing the marginal density w.r.t. $X$ is fine. |
In a cyclic group, if order of two different elements are equal then is it true that the group generated by them are equal? | Yes, it is. Denote $d=$ order$(a^r)=$ order$(a^s)$. Since the group $G$ is cyclic and finite, then it has exactly one subgroup of order $d$ for every divisor $d$ of $n$ (see here). Since, the order of an element of a finite group divides the order of the group, i.e. $d\mid n$, this is the only case. |
Find the total number of ways to select at least one red ball from a bag containing $4$ red balls and $3$ green balls. | Your answer is right. If the selections are distinguished according to the number of red balls and the number of green balls they contain, and all selections except those with $0$ red balls are to be counted, there are $4\times4=16$ different selections. |
Find the supremum of $h(z)$. | Hint.
Making $z = r e^{i\phi}$ we have
$$
d(r)={\frac{\pi}{\pi^2-1}\sinh(\pi r)\;+\frac{1}{\pi^2-1}(\cosh(\pi r)-1) -e^{r}+1} = C_0
$$
for $r^2=x^2+y^2$
and
$$
n(r,\phi) = \left|\frac{e}{e-1}(e^{r e^{i\phi}}-1)\right|\;-\frac{3}{2}e^{r}+1 = \frac{e \sqrt{e^{2 r \cos (\phi )}-2 e^{r \cos (\phi )} \cos (r \sin (\phi ))+1}}{e-1}-\frac{3 e^r}{2}+1
$$
has a maximum for $\phi=0$ and also is an increasing function for $\phi = 0$. In fact
$$
n(r,0) = \frac{(e-3) e^r+2}{2(1-e)}\\
d(r) =\frac{\pi \sinh (\pi r)}{\pi ^2-1}+\frac{\cosh (\pi r)-1}{\pi ^2-1} -e^r+1
$$
The conclusion is that the maximum for $\frac{n(r,\phi)}{d(r)}$ should be located along the $x > 0$ axis. We also know that $\frac{n(r,\phi)}{d(r)}\approx e^{-(\pi-1)r}\to 0$ as $r\to\infty$. Also we have that
$$
n(r,0) \lt 0\ \ \ \ \text{for}\ \ \ 0\le r \lt \ln \left(\frac{2}{3-e}\right),\ \ \ \text{and}\ \ \ n(r,0) \gt 0\ \ \ \text{for}\ \ \ r > \ln \left(\frac{2}{3-e}\right)
$$
(Plots of $\frac{n(r,0)}{d(r)}$ and of the corresponding surface with its level curves in black are omitted here.) |
Applications of functional calculus? | The functional calculus for selfadjoint and normal operators requires projections and integrals.
The holomorphic functional calculus deals with general linear operators, but less general functions of those operators. For example, if $C$ is a positively-oriented simple closed rectifiable curve enclosing the spectrum of a bounded operator $A$ on a complex Banach space, then
$$
F(A) = \frac{1}{2\pi i}\oint_{C}F(z)\frac{1}{zI-A}dz,
$$
where $\frac{1}{zI-A}$ stands suggestively for $(zI-A)^{-1}$. The function $F$ need only be holomorphic on a neighborhood of the spectrum, which may require using a system of contours enclosing the components of $\sigma(A)$. Even for matrices, this is not simple. For a matrix $A$ on $\mathbb{C}^N$, the operator $(zI-A)^{-1}$ has poles at the eigenvalues of $A$, and the order of the pole at an eigenvalue $\lambda$ is the size of the largest Jordan block associated with $\lambda$. Nilpotent operators play an important role in the functional calculus for $A$: the series expansion of $\frac{1}{zI-A}$ around $\lambda$ has the form
$$
\frac{1}{zI-A} = \sum_{n=-N}^{\infty}A_n(z-\lambda)^{n},
$$
where
\begin{align}
A_{-n} & = \frac{1}{2\pi i}\oint_{|z-\lambda| = r} (z-\lambda)^{n-1}(zI-A)^{-1}dz \\
& = (A-\lambda I)^{n-1}\frac{1}{2\pi i}\oint_{|z-\lambda|=r}(zI-A)^{-1}dz \\
& = (A-\lambda I)^{n-1}P,
\end{align}
where $P = \frac{1}{2\pi i}\oint_{|z-\lambda|=r}(zI-A)^{-1}dz$ is a projection. Only the singular terms contribute to the integral for $F(A)$, and these give
$$
F(A) = F(\lambda)P + \frac{F'(\lambda)}{1!}(A-\lambda I)P+\cdots+\frac{F^{(N-1)}(\lambda)}{(N-1)!}(A-\lambda I)^{N-1}P + \mbox{ terms at other eigenvalues }.
$$
The absence of higher order terms happens because $(A-\lambda I)^{N}P=0$, where $N$ is the size of the largest Jordan block with eigenvalue $\lambda$. The operator $(A-\lambda I)P$ is nilpotent of order $N$: because $(A-\lambda I)P=P(A-\lambda I)$ and $P^2=P$, then
$$
[(A-\lambda I)P]^{n} = (A-\lambda I)^{n}P,\;\;\; n \ge 1 \\
[(A-\lambda I)P]^{N} = 0.
$$
So projections are not enough, even for the functional calculus of a finite square matrix. |
Convergence of $\sum_{n=1}^\infty\frac{\log(n)^k}{n^a}$ | Hint: $\frac {(\ln n)^{k}} {n^{(a-1)/2}}=\frac {(\ln n)^{k}} {n^{a}} n^{(a+1)/2}\to 0$ since $\frac {\ln n} {n^{t}}\to 0$ for any $t>0$. Now compare the given series with $\sum \frac 1 {n^{(1+a)/2}}$ |
Analysis of bisection search | The stopping condition used in the video is a bit unusual for bisection, but would be very common for other techniques. He is stopping once $|g^2-x|<\epsilon$ where $g$ is the current guess. He is not stopping once $|g-\sqrt{x}|<\epsilon$, even though in bisection one can proceed this way. Here $|g^2-x|$ is called the forward error and $|g-\sqrt{x}|$ is called the backward error (I'll use these terms in what follows).
To determine the number of bisection steps required to have the forward error be less than $\epsilon$, one should determine or at least estimate the length of the interval around $\sqrt{x}$ where $|g^2-x|<\epsilon$. If $\epsilon$ is small enough, the half-length of this interval is approximately $\frac{\epsilon}{2\sqrt{x}}$. (I can think of two ways using calculus and one way using algebra to see this. Ask me if you don't understand.) In the example you have $x=12345$, and so you know that $\sqrt{x}$ is less than, say, $200$. So the half-length of the interval is at least $\frac{\epsilon}{400}$. So it is enough to use bisection to make the backward error be less than $\frac{\epsilon}{400}$. Beginning with an interval of length $x$ and taking the midpoint of the last interval generated to be the estimate, this takes at most $\log_2(x/(\epsilon/400))-1$ steps. This is less than 29. If I replace the $200$ with a better estimate like $112$, I can shave it down to 28 steps.
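For concreteness, here is a minimal bisection sketch using that stopping rule (the tolerance $\epsilon=0.01$ is an assumption for illustration; the video's actual value may differ):
```python
def bisect_sqrt(x, eps):
    """Bisection for sqrt(x), stopping when the forward error |g**2 - x| is below eps."""
    lo, hi = 0.0, max(1.0, x)          # sqrt(x) lies in [0, max(1, x)]
    g = (lo + hi) / 2.0
    steps = 0
    while abs(g * g - x) >= eps:
        if g * g < x:
            lo = g
        else:
            hi = g
        g = (lo + hi) / 2.0
        steps += 1
    return g, steps

print(bisect_sqrt(12345.0, 0.01))      # guess near sqrt(12345) ~ 111.108, plus the step count
```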
I do not really see how making the backward error less than $\epsilon^2$ ensures that the forward error is less than $\epsilon$; indeed, $(\sqrt{12345}+10^{-4})^2-12345>2 \cdot 10^{-2}$. Presumably this is either an error in the video or a misunderstanding on your part. |
How find this inequality$\sqrt{\left(\frac{x}{y}+\frac{y}{z}+\frac{z}{x}\right)\left(\frac{y}{x}+\frac{z}{y}+\frac{x}{z}\right)}+1$ | As the inequality is homogeneous, we can normalise by $xyz=1$. Then we have:
$$\sqrt{\left(\dfrac{x}{y}+\dfrac{y}{z}+\dfrac{z}{x}\right)\left(\dfrac{y}{x}+\dfrac{z}{y}+\dfrac{x}{z}\right)}+1\ge 2\sqrt[3]{\left(x^2+\frac{1}{x}\right)\left(y^2+\frac{1}{y}\right)\left(z^2+\frac{1}{z}\right)}$$
$$\sqrt{3+\sum_{cyc} \left(x^3 + \frac{1}{x^3}\right)}+1\ge 2\sqrt[3]{2 + \sum_{cyc} \left(x^3 + \frac{1}{x^3}\right)}$$
Let $\displaystyle a = \sum_{cyc} \left(x^3 + \frac{1}{x^3}\right)\ge 6$. Then the inequality is reduced to
$$f(a) = \sqrt{3+a} +1 - 2 \sqrt[3]{2+a} \ge 0$$
which is easy to do as $f(6)=0$ and $f'(a) > 0 $ for $a > 6$.
Addendum: alternate way to show $\sqrt{3+a} +1 \ge 2 \sqrt[3]{2+a}$ would be to cube, group terms and then square, to get the equivalent $(a+2)(a-6)^2 \ge 0$, which is obvious for $a \ge 6$. |
Burgers' equation with triangular initial data | Here is a basic methodology that you can apply and work out the details for specific initial data.
You can use the method of characteristics to find an implicit solution.
The characteristics are determined by the IVP
$$\frac{dX}{dt}= u(X(t),t),\\X(0) = x_0.$$
If $u$ is differentiable, we have
$$\frac{d}{dt}u[X(t),t]= u_t(X(t),t) + u_x(X(t),t)\frac{dX}{dt}=u_t(X(t),t) + u(X(t),t)u_x(X(t),t)=0.$$
Hence, along a characteristic curve $u(X(t),t)$ is constant
$$u(X(t),t)= u(X(0),0)=f(x_0).$$
Solving for $X(t)$ we obtain
$$X(t) = x_0 + f(x_0)t.$$
Therefore, $u(x,t)=f(x_0)$ at a specific point $(x,t) $with $t > 0$ -- where $x_0$ is the solution to $x_0 = x - f(x_0)t$.
If two characteristics cross, then the solution becomes multi-valued. This can be interpreted as the formation of a discontinuity or shock. The first time this happens is determined as follows.
Consider two characteristics initiated at points $(y,0)$ and $(y + \delta,0)$, respectively. If they intersect at time $t$ then
$$y + f(y)t = y + \delta + f(y + \delta)t$$
and
$$t = \frac{\delta}{f(y)-f(y+\delta)}$$
The earliest possible time for the formation of a shock is then given by
$$t_s = \inf_{y} \lim_{\delta \rightarrow 0} \frac{\delta}{f(y)-f(y+\delta)}=\inf_{y}\frac{-1}{f'(y)}.$$
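As a rough numerical illustration of this formula (a minimal sketch; the triangular profile below is an assumed example, not necessarily the initial data from the question):
```python
import numpy as np

# Estimate the shock formation time t_s = inf_y (-1 / f'(y)) for a sample
# triangular profile f: zero outside [0, 1], with peak value 1 at y = 1/2.
def f(y):
    return np.where((y > 0) & (y < 1), 1.0 - 2.0 * np.abs(y - 0.5), 0.0)

y = np.linspace(-1.0, 2.0, 30001)
df = np.gradient(f(y), y)            # numerical derivative f'(y)
decreasing = df < -1e-8              # shocks come from regions where f' < 0
t_s = np.min(-1.0 / df[decreasing])
print(t_s)                           # ~0.5 here, since min f' = -2 on the downslope
```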
Suppose a shock is located at position $x_S(t)$ with $u(x,t) = u_L$ if $-\delta < x < x_S(t)$ and $u(x,t) = u_R$ if $\delta >x > x_S(t).$
Then,
$$\frac{d}{dt}\int_{-\delta}^{\delta} u(x,t) \, dx = \int_{-\delta}^{\delta} u_t \, dx = -\int_{-\delta}^{\delta} uu_x \, dx \\= -\int_{-\delta}^{\delta} \frac{\partial}{\partial x}\left(\frac1{2}u^2\right) \, dx = \frac1{2}(u_L^2-u_R^2).$$
Also,
$$\frac{d}{dt}\int_{-\delta}^{\delta} u(x,t) \, dx =\frac{d}{dt}\left[u_R(\delta-x_s)+u_L(x_s+\delta)\right]=\frac{dx_S}{dt}(u_L-u_R).$$
Hence,
$$\frac{dx_S}{dt}(u_L-u_R)=\frac1{2}(u_L^2-u_R^2),$$
and the shock propagates at a speed given by
$$\frac{dx_S}{dt} = \frac1{2}(u_L + u_R).$$ |
Is $ mA \otimes B = m(A\otimes B)$? | By flatness, if $A'$ is a submodule of $A$ then one can identify
$A'\otimes_R B$ with a submodule of $A\otimes_R B$, namely as that generated by
all $a'\otimes b$ with $a'\in A'$.
With this observation it's surely obvious that $mA\otimes_R B$
is the submodule of $A\otimes_R B$ generated by the elements $(ma)\otimes b=m(a\otimes b)$
and that is $m(A\otimes_R B)$. |
Weak convergence of $\frac{1}{n} \sum_{i = 1}^{n} \delta_{2 \cos(2 \pi k / n)}$ to $\frac{1}{\pi \sqrt{4 - x^2}} {\large\chi}_{|x|\leq 2} dx,$ | Let $X_n$ be the discrete random variable that attains the value $k/n$ with probability $1/n$, for $k=1,\dotsc,n$. The sequence $(X_n)_n$ converges in distribution to $U\sim\mathrm{Uniform}(0,1)$.
Consider the continuous function $g(x)=2\cos(2\pi x)$. By the properties of convergence in distribution, namely the continuous mapping theorem, the variables $g(X_n)$ converge in distribution to $g(U)$. |
Algorithm: Integer vectors with equal inproduct with a constant vector | This is Integer Programming, which is NP-Hard. So no, there isn't an efficient way to solve this. The best you can do is apply relaxation techniques to get some good approximations. A common approach is to treat it as a linear program (look at $x \in \mathbb{R}^{n}$) and then fine-tune the solution.
The Wikipedia page is a good starting point: http://en.wikipedia.org/wiki/Integer_programming |
Improper Integral from minus infinity to $0$ of $xe^{2x}$ | Because
$$\lim_{t\to-\infty}\left(-\frac{1}{4}-te^{2t}+\frac{e^{2t}}{4}\right)=-\frac{1}{4}$$
where
$$\lim_{t\to-\infty}\left(te^{2t}+\frac{e^{2t}}{4}\right)=\lim_{u\to\infty}\left(\frac{-u}{e^{2u}}+\frac{1}{4e^{2u}}\right)=$$
$$=\lim_{u\to\infty}\frac{-u}{e^{2u}}+\lim_{u\to\infty}\frac{1}{4e^{2u}}=0$$
using L'Hopital for the first limit
$$\lim_{u\to\infty}\frac{-u}{e^{2u}}=\lim_{u\to\infty}\frac{-1}{2e^{2u}}=0$$
for the second it is clear that it is $0$. |
In general, are functions of this form convex? | In general, if $f$ is convex on a convex domain $D$, and $h$ is convex and increasing on a set containing $f(D)$, then $h \circ f$ is convex on $D$. That is,
$$ f(t x + (1-t) y) \le t f(x) + (1-t) f(y) $$
implies
$$ h(f(t x + (1-t) y)) \le h(t f(x) + (1-t) f(y)) \le t h(f(x)) + (1-t) h(f(y))$$
Since $h(t) = t^2$ is convex and increasing on $[0,\infty)$,
$f^2$ will be convex when $f$ is convex and nonnegative.
So what you need in your example is $\frac12 x^T A x + b^T x + c \ge 0$. |
Functor of section over U is left-exact | Method 1: Use Exercise II.1.4b to relate the image presheaf and its sheafification. Then by exactness of the original sequence, the isomorphism follows on sections of $U$.
Method 2: Proceed just as in proof of Proposition II.1.1.
Let $\varphi: \mathscr{F} \to \mathscr{F''}$ and $\psi : \mathscr{F'} \to \mathscr{F}$. We want to show that the kernel and images are equal after applying $\Gamma (U, \cdot)$. This can be checked on the stalks.
In one direction, we have that
$$(\varphi _U \circ \psi _U (s))_P = \varphi _P \circ \psi _ P (s_P) = 0$$
By the sheaf condition, this shows that $\varphi _U \circ \psi _ U = 0$.
Conversely, let's suppose $t \in \textrm{ker } \varphi _U$, i.e. $\varphi _U (t) = 0$. Again we know that the stalks is an exact sequence so for each $P \in U$, there is a $s_P$ such that $\psi _P (s_P) = t_P$. Let's represent each $s_P = (V_P , s(P))$ where $s(P) \in \mathscr{F'} (V_P)$. Now, $\psi (s(P))$ and $t \mid _{V_P}$ are elements of $\mathscr{F} (V_P)$ whose stalks at $P$ are the same. Thus, WLOG, assume $\psi (s(P)) = t \mid _{V_P}$ in $\mathscr{F} (V_P)$. $U$ is covered by the $V_P$ and there is a corresponding $s(P)$ on each $V_P$ which, on intersections, are both sent by $\psi$ to the corresponding $t$ on the intersection. Here, we apply injection (exactness at left place, which you showed in your OP) which allows us to glue via sheaf condition to a section $s \in \mathscr{F'} (U)$ such that $s \mid _ {V_P} = s(P)$ for each $P$. Verify that $\psi (s) = t$ and we're done by applying the sheaf property and the construction to $\psi (s) - t$. |
Sequence spaces are isomorphic | Maybe you can try to construst a linear map $\ell^\infty\to\ell^\infty$ with kernel $\bigoplus\limits_{k=1}^m\Bbb{C},$ then the isomorphism got. An intuitive idea is to throw out the first $m$ components, which is a linear map. (Mind it's not the identity map.) |
Approximate recursively defined error in fixed point iteration | Simple Hint: What can you say about $|x_n -x^*|$ and $|x_{n+1} - x^*|$ ?
(Mean Value Theorem!) |
How can I prove big-oh relation between $\log_2(\log_2 n)$ and $\sqrt{\log_2 n}$ | The derivative of the usual logarithm function is less than $1$ on $(1,+\infty)$ hence $\ln x\leqslant x-1$ on $x\geqslant1$. This implies $\log_2x\leqslant2x$ on $x\geqslant1$. Since $\log_2x=2\log_2\sqrt{x}$, $\log_2x\leqslant4\sqrt{x}$ on $x\geqslant1$.
Applying this to $x=\log_2n$, one sees that $f(n)\leqslant4g(n)$ for every $n\geqslant2$. |
Largest root as exponent goes to $+\infty$ | If we set
$$ p_a(x) = x^{a+2}-x^{a+1}-1 $$
we may easily see that $p_a(x)$ is negative on $[0,1]$, increasing and convex on $[1,+\infty)$, so the largest real root is in a right neighbourhood of $x=1$. We may also notice that:
$$ p_a\left(1+\frac{\log(a+1)}{a+1}\right) = \frac{\log(a+1)}{a+1}\left(1+\frac{\log(a+1)}{a+1}\right)^{a+1}-1>0 $$
by Bernoulli's inequality, hence the largest root of $p_a$ is between $1$ and $1+\frac{\log(a+1)}{a+1}$.
A more effective localization can be achieved by performing one step of Newton's method with starting point $x=1+\frac{\log(a+1)}{a+1}$. |
Prove $(x+1)e^x = \sum_{k=0}^{\infty}\frac{(k+1)x^k}{k!}$ using Taylor Series. | Note that
$$xe^x=\sum_{k=0}^\infty \frac{x^{k+1}}{k!}.$$
Differentiate both sides with respect to $x$. |
True/false questions on image kernel and basis of vector spaces and subspaces. | To a)
It follows from the answer by @MattS that it is false.
To b)
Consider $T=0.$ Then $\mathrm{im}(T)=\{0\},$ so $\mathrm{ker}(T)\cap\mathrm{im}(T)=\{0\}.$ Now, $\mathrm{ker}(T)=V.$ Thus, if $V\ne \{0\}$, then $T$ is not injective.
To c)
It is $\mathrm{rank}(A)=\mathrm{rank}(A|B)=p.$ So using the Rouche-Capelli theorem we get that there is at least one solution. |
Prove that chromatic number $\geq \frac{V^2}{V^2-2E}$ | Case 1. $V \geq 2E$.
Observation. If $G$ is simple, $E \leq \binom{V}{2}$, i.e., $V^2-2E \geq V$.
For $\chi=1$, $E=0$ and $G$ consists of isolated points.
Now, we may assume $\chi\geq 2$ (so $E>0$).
Then, by the above observations, $1 \leq (\chi-1)\cdot\frac{V}{2E} \leq (\chi-1)\cdot\frac{V^2-2E}{2E}$.
Thus, $(\chi-1)\cdot(V^2-2E)\geq 2E$
$\Leftrightarrow$ $(\chi-1)V^2 \geq 2E\cdot\chi$
$\Leftrightarrow$ $V^2\geq 2E\cdot\frac{\chi}{\chi-1}$.
Case 2. $V<2E$.
Unfortunately, I have no idea for this case.
----Edit.----
Rather than above incomplete proof, I suggest other proof.
Now, add one more condition that $G$ is connected.
Case 1. $\chi\leq 2$. $G$ is bipartite. The largest possible number of edges is $E_\max
= \lfloor \frac{V}{2} \rfloor \lceil \frac{V}{2} \rceil$. Then,
$\frac{V^2}{V^2-2E} \leq \frac{V^2}{V^2-2E_\max} \leq 2$.
Case 2. $\chi\geq 3$. Since $G$ is connected and $\chi \geq 3$, $V\geq E$. Thus, we can observe that
$1 \leq (\chi-1)\cdot\frac{V}{2E} \leq (\chi-1)\cdot\frac{V^2-2E}{2E}$.
Now, follow my original incomplete proof.
I think that the same can be done for $G$ not connected. |
Trigonometry: Proving Question involving Sum to Product | $$\frac{\cos 3x}{\cos x} = \frac{4 \cos^3 x-2\cos x}{\cos x} = 4\cos^2 x-3= 2(2\cos^2 x-1)-1= 2\cos 2x-1$$ |
Showing that this family of vectors generates $\mathbb{R}^n$ | This is only a partial answer.
Let $v_1, …, v_r ∈ ℝ^n$, $c_1, …, c_r ∈ ℝ$ and $A = \{x ∈ ℝ^n;\; \text{$v_i^T x ≤ c_i$, for $i = 1, …, r$}\}$. Observe that $0 ∈ A$ if and only if $\min \{c_1, …, c_r\} ≥ 0$.
Let’s assume $0 ∈ A$. Let $L$ be a line through the origin in $ℝ^n$. Then $L ⊂ A$ if and only if $L ⊂ 〈v_1, …, v_r〉^{\perp}$. So $v_1, …, v_r$ span $ℝ^n$ if and only if there is no line in $A$.
Therefore, in the case the null vector is contained in your sets, this already proves the assertion, as your sets are designed just like $A$. |
Why does the discriminant in the Quadratic Formula reveal the number of real solutions? | Think about it geometrically $-$ then compute.
Everyone knows $x^2$ describes a parabola with its apex at $(0,0)$. By adding a parameter $\alpha$, we can move the parabola up and down: $x^2+\alpha$ has its apex at $(0,\alpha)$. Looking at the graph as it moves up and down you immediately see how the number of zeros depends on $\alpha$:
for $\alpha>0$ we have no zeros.
for $\alpha=0$ we have a single zero.
for $\alpha<0$ we have two zeros.
Now we can introduce a second parameter $\beta$ to move the parabola left and right: $(x-\beta)^2+\alpha$ has its apex at $(\beta,\alpha)$.
Note: we used the fact that given a function $f(x)$, the graph of the function $f(x-\beta)$ looks exactly like the one of $f$ but shifted to the right by an amount $\beta$.
But of course, shifting a function left and right does not alter the number of zeros. So it still only depends on $\alpha$. We expand the term a bit:
$$(x-\beta)^2+\alpha=x^2-2\beta x+\beta^2+\alpha.$$
If your quadratic equation were given in this form, you would immediately see the number of zeros as described above. Unfortunately it is usually given as
$$ x^2+\color{red}px+\color{blue}q=0$$
So instead, you have to look at what parts of the $\alpha$-$\beta$-form above corresponds to these new parameters $p$ and $q$:
$$ x^2\color{red}{-2\beta} x+\color{blue}{\beta^2+\alpha} = 0.$$
So we have $p=-2\beta$ and $q=\beta^2+\alpha$. If we only could extract $\alpha$ from these new parameters, we would immediately see the amount of zeros. But wait! We can!
$$\alpha=q-\beta^2=q-\left(\frac p2\right)^2.$$
This is exactly what you know as (the negative of) the discriminant.
I used the form $x^2+px+q=0$ and you used $ax^2+bx+c=0$. I hope this is not confusing you. Just divide by $a$ (if $a$ is non-zero):
$$x^2+ \color{red}{\frac ba}x+\color{blue}{\frac ca}=0$$
If you set $p=b/a$ and $q=c/a$ and plug this into my discriminant from above you obtain the one you know:
$$\left( \frac {\color{red}p}2 \right )^2-\color{blue}q = \frac{(\color{red}{b/a})^2}4-\color{blue}{\frac ca}=\frac{b^2}{4a^2}-\frac{4ac}{4a^2} = \frac{b^2-4ac}{4a^2}.$$
Because $4a^2$ is always positive it suffices to look at $b^2-4ac$ as you did in your question. |
Should one always perform logical reasoning intuitively and contentually? | Actually you've a good skill that you can use to your advantage. I always think of mathematical theorems as having two parts, the logical part and the structural part. Your ability to systematically perform symbol-pushing (using valid inference rules of course) means that you'll have no trouble with the logical part, which is often half the problem solved.
But you need to make sure you fully understand the reason behind every single logical inference rule that you use, otherwise you will not have a complete grasp of the logical part of theorems. You mentioned as examples the logical fact that can be expressed prettily as "$\neg \forall \equiv \exists \neg$", as well as the identity "$A \to B \equiv \neg B \to \neg A$". You must know exactly why these are valid, and never merely remember them as black-box rules. Along with the usual ones, you should also be familiar with the disjunctive normal form and the Skolem normal form, because these logical forms are the most useful.
The logical part crops up more frequently than you might be aware of. For a simple example, you ought to know the fact that switching quantifiers in one direction, namely $\exists \forall \to \forall \exists$, yields a valid inference, but not necessarily the other way around. Notice how this is essentially the core of the difference between continuity and uniform continuity, convergence and uniform convergence, and so on. One can in many cases grasp the meaning of the word "uniform" applied to some concept despite never having learnt it, simply by looking at the quantifier structure and seeing where it makes sense to pull an existential quantifier outward past a universal quantifier!
Another simple example is that anything you can say about a structure in terms of its operations and its size is preserved under isomorphism. This is because isomorphism is a relabeling which commutes with the operations and the size of the structure. So one knows without proving anything all kinds of results of this form, such as: (1) The order of an element in a group (including characteristic of a ring) is preserved under isomorphism. (2) A normal subgroup remains a normal subgroup under isomorphism.
Of course, there is still the structural part. In real analysis for example it is highly advantageous to be able to visualize limiting processes such as getting arbitrarily close to a point or going to infinity, because if you just try to push symbols you'll most probably get nowhere in attempting to prove any non-trivial theorem from scratch, such as the extreme value theorem for continuous real functions on a closed bounded set (or even just a closed interval) in Euclidean space.
An interesting case is (Euclidean) geometry, where it's even more obvious how useful a spatial intuition is. The catch is that its intuitiveness is a snare for the logically careless. If you attempt to write down formal proofs of many geometric theorems, you'll quickly find that even simple ones like "angle-at-centre is twice angle-at-circumference" is not so easy, due to issues in handling which side of a line a point is on. Historically it was common to make hand-wavy arguments that are actually invalid. Euclid was not as systematic as he could have been, and his proofs have many gaps, such as for Prop.1.16. And he missed out an axiom as noted by Pasch. Even Hilbert who later attempted a rigorous axiomatization of Euclidean geometry made intuition-driven logical jumps.
Ultimately, different mathematical objects have different mathematical structure, and you may need to study special aspects of their structures independently, even though there may be common aspects that help you to easily visualize new structures using common tools. For instance many mathematical constructions have a nice interpretation when you impose additional structure such as a partial order or a topology. But this also means that a lot of arguments about partial orders or topological spaces carry over without change, and often these arguments have a logical flavour to them (such as a Galois connection). |
Permutations: Discrete Math | Almost right - refine your answer to "$7$ distinct elements".
More generally, given a collection (aka multi-set) of:
$m_1$ identical elements
$m_2$ identical elements
$\ldots$
$m_n$ identical elements
The number of permutations of the elements in it is:
$$\frac{(m_1+m_2+\cdots+m_n)!}{{m_1!}\cdot{m_2!}\cdot\ldots\cdot{m_n!}}$$
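For example (an illustration added here, not from the original answer): the letters of MISSISSIPPI form a multiset with $m_{\text{M}}=1$, $m_{\text{I}}=4$, $m_{\text{S}}=4$, $m_{\text{P}}=2$, so the number of distinct arrangements is
$$\frac{11!}{1!\cdot 4!\cdot 4!\cdot 2!}=34650.$$ |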
Inverses in subspace? | By the most logical definition, a subspace of a vector space $V$ is a subset $U$ of that is a vector space under the same addition and scalar multiplication (restricted, of course). So in principle you have to prove all the axioms of a vector space. But by the very fact that $V$ is already known to be a space, a lot can be left out, for example, associativity of addition need not be checkd in $U$ because iot holds in $V$. Similarly, we need not check that $-u \in U$ for all $u\in U$, provided we already know that $cu\in U$ for all $u\in U$, $c\in F$. This is so because we are allowed to let $c=-1$ and already know (in $V$) that $(-1)\cdot u=-u$. So what we have is a simplified test whether $U$ is a subset of $V$, or a subspace criterion: We need only check that $U$ is not empty, that it is closed unde addition, and closed under scalar multiplication. Essentially, we have just shown most of the claim that any subset with these properties is a subspace.
There is a similar situation with groups: A subgroup is a subset of a group that happens to be a group under the given (restricted) group operation. Again, we do not have to check all properties, e.g., associativity is - again - for free. The lack of scalar multiplication seems to require that we always have to show the closure under taking inverses, but in fact this is not always the case. There is a subgroup criterion for the special case of finite groups which only requires one to check that the presumed subgroup is non-empty and closed under the group operation! |
Direct evaluation of a series from Euler's identity. | $$\sum_{k=0}^{\infty} (-1)^k \dfrac{\pi^{2k}}{(2k)!}=1-\frac{\pi^2}{2!}+\frac{\pi^4}{4!}-\frac{\pi^6}{6!}\cdots$$
$$\cos(\pi)=1-\frac{\pi^2}{2!}+\frac{\pi^4}{4!}-\frac{\pi^6}{6!}\cdots=-1$$ |
Find integer values that when multiplied together equal a given value | This is a problem of integer factorization. From the factorization of an integer, it is mechanical to write every possible pair of two integers whose product is the given integer.
The method you mention in comments, "a quicker way than individually testing all integer values of $b$ and $c$", is similar to, but less efficient than, trial division: trying each integer in $[1,\lfloor \sqrt{a} \rfloor ]$ as a candidate for $b$ and determining whether each choice of $b$ makes $a/b$ an integer or not.
There are much faster methods than trial division. There is not a method that one would consider "fast" for integers of unlimited size.
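For small inputs, trial division looks like the following minimal sketch (utterly impractical for numbers the size of your example):
```python
def factor_pairs(a):
    """Return all pairs (b, c) with 1 <= b <= c and b * c == a, found by trial division."""
    pairs = []
    b = 1
    while b * b <= a:
        if a % b == 0:
            pairs.append((b, a // b))
        b += 1
    return pairs

print(factor_pairs(91))    # [(1, 91), (7, 13)]
```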
For your particular example,
$$ 194920496263521028482429080527 \\
= 289673451203483 \cdot 672897345109469 $$
is the only product giving that number that does not have $b=1$ or $c = 1$. |
Why doesn't the Homotopy group satisfy excision? | For both Homology and Homotopy functors we get a long exact sequence of pairs
$$\dots \to F_{i+1}(X,A) \to F_i(A) \to F_i(X) \to F_i(X,A) \to F_{i-1}(A) \to \dots $$
If the map $A\to X$ is a cofibration (or more generally if $(X, A)$ is a "good pair" in the sense of Hatcher) then $H_i(X, A) \cong \tilde{H}_i(X/A)$, and the proof uses excision (Proposition 2.22 in Hatcher). In fact by carefully looking at the proof you see that if $\pi_i$ also satisfied excision then we would have $\pi_i(X, A) \cong \pi_i(X/A)$ as well.
However, there cannot be a long exact sequence in homotopy groups for a general cofibre sequence of spaces. Take $X = D^{n+1}$ and $A=\partial X = S^n$, so that $X/A\cong S^{n+1}$. If we had a long exact sequence of the form
$$\dots \to \pi_{i+1}(S^{n+1}) \to \pi_i(S^{n}) \to \pi_i(D^{n+1}) \to \pi_i(S^{n+1}) \to \pi_{i-1}(S^{n}) \to \dots $$
then we would have $\pi_{i+1}S^{n+1} \cong \pi_i S^n$ for all $i> 0$, which is not true (for example, $\pi_2(S^1) = 0$ but $\pi_3(S^2) = \mathbb{Z}$). More generally we would be able to prove a suspension isomorphism theorem, which we know does not hold for homotopy groups. |
Prove $\lim_{x\to 0^+}(e^x-1+x)^\frac 2x=0$ | Whenever you see a limit with an exponent like that, you should strongly consider taking the logarithm of it (and usually applying L'Hopital's rule when you can, but not here).
Let $y = \lim_{x \to 0} (e^x - 1 + x)^{2/x}$. Then
$$\ln(y) = \ln(\lim_{x \to 0} (e^x - 1 + x)^{2/x}) = \lim_{x \to 0} \ln\left((e^x - 1 + x)^{2/x}\right)$$
If you're concerned about the rigorous details of why we can pass the limit through, know that it is sufficient for a function to be continuous for us to pass a limit through it.
Now,
$$\ln(y) = \lim_{x \to 0} \frac{2}{x} \cdot \ln(e^x - 1 + x)$$
Notice now that as $x \to 0^+$ (i.e., from the right), $\ln(e^x - 1 + x)$ tends to $-\infty$, and since $x > 0$ when we approach from the right, we have $\frac{2}{x} \cdot \ln(e^x - 1 + x) \to -\infty \Rightarrow y \to 0$. Similarly, when we approach from the left side, we will have $\frac{2}{x} \cdot \ln(e^x - 1 + x) \to \infty$ so that $y \to \infty$.
In summary, the limit as we approach from the right is $0$, whereas the limit diverges when you approach from the left. |
Solve $\log_4 ( 16^{100})$ | Remember we can take the power out and put it in front of the logarithm, so $\log_4 16^{100}=100\,\log_4 16$.
Now $\log_4 16$ is: "to what power do I have to raise $4$ to get $16$. That is $2$, so $\log_4 16=2$.
Hence, $\log_4 16^{100}=100\cdot2=200$. |
Clarification of proof involving characteristic function in $L^1$ | If $I$ has end points $a$ and $b$ then $\int|\chi_I(x+h)-\chi_I(x)|dx=\int_{a-h}^a dx+\int_b^{b+h}dx=|h|+|h|=2|h|$ provided $h >0$ and $h<b-a$. For $a-b <h<0$ a similar argument works. For $|h| >b-a$ this is false. |
Unintentional Negative Sign in Limit Evaluation | $$ \bigg(\frac{\cos(x) - \cos(2x+x)}{\sin(x^2)}\bigg)=\bigg(\frac{\cos(x) - \cos(2x)\cos(x) + \sin(2x)\sin(x)}{\sin(x^2)}\bigg)$$ |
Confusion in least upper bound axiom | In my opinion, your question is: what is $\mathbb{R}$?
If you follow an axiomatic approach, you define $\mathbb{R}$ as an ordered field (being an ordered field means to fulfill some axioms) that is Dedekind-complete i.e. that satisfies the least upper bound axiom: every non-empty subset of $\mathbb{R}$ with an upper bound in $\mathbb{R}$ has a least upper bound in $\mathbb{R}$ (see here). Therefore, $\mathbb{R}$ is any object that fulfill these axioms. It can be proved that such an object is unique, up to isomorphism. If you change the axioms (in particular, if you replace the least upper bound axiom with something else), likely you are talking of another mathematical object, which is not $\mathbb{R}$.
It turns out that the set of axioms I sketched above is not the only way to define $\mathbb{R}$: there are several equivalent axiomatic definitions of $\mathbb{R}$, which means that there are several sets of axioms that describe the same object $\mathbb{R}$. For instance, you can define $\mathbb{R}$ as an ordered field such that every Cauchy sequence converges (Cauchy axiom). Then you can prove that, in a ordered field, the least upper bound axiom follows from the Cauchy axiom, and vice-versa.
This corroborates the idea that the least upper bound axiom (or other equivalent axioms such as the Cauchy one) formalizes correctly our intuition about $\mathbb{R}$. |
Visualizing $3^3+4^3+5^3=6^3$ | $$ 5^3 + 3^3 + 4^3 = 5^3 + 3 \cdot (3^2 + 4^2) + 4^2 = 5^3 + 3 \cdot 5^2 + 3 \cdot 5 + 1 = (5 + 1)^3 $$
So a cube of side $6$ units can be made by taking a $\color{green}{\text{cube of side 5 units} }$, adding $ \color{blue}{\text{3 5-by-5 squares}}$ at the top, right and left, then adding $\color{red}{\text{3 lines of length 5}}$ along $3$ edges and finally adding $\color{black}{ \text{1 cube} }$ at the corner.
(Picture of the cube omitted; see the colour-coded description above.) |
Prove that $X^{\top} K X$ is invertible | No. Consider $X=[1\quad 0]^T,\,K=\begin{bmatrix}0 & 1 \\ 1 & 0 \end{bmatrix}$. |
Solving a cumulative recurrence relation | Despite my comment above, Maple gives a closed-form expression for your product in terms of the Gamma function.
$$ \prod_{i=2}^{k-1}\left(1-\frac1{i^2-1}\right)=
2\,{\frac {\Gamma \left( k-\sqrt {2} \right) \Gamma \left( k+\sqrt {
2} \right) }{\Gamma \left( k-1 \right) \Gamma \left( k+1 \right)
\Gamma \left( 2-\sqrt {2} \right) \Gamma \left( 2+\sqrt {2} \right)
}}.
$$ |
What does analytic at a point means? | A MacLaurin series is nothing more than a Taylor series around $z=0$ instead of around $z=a$. So you can stop thinking about the term "MacLaurin".
A negative radius of convergence doesn't make sense: the radius of convergence is defined as a limit of positive real numbers, and thus must be non-negative or infinite (thanks to Andres for making clear what I meant). Also, the radius of convergence is the biggest radius for which the Taylor expansion of your function converges, so geometrically it doesn't make sense to say that this radius is negative.
Hope that helps, |
There exist two distinct elements of $G\setminus H$ which commute? | Take any element $g\neq 1$ of G\H. Since $G$ has odd order, this element has odd order. So $g$ generates a cyclic subgroup of odd order. Thus, $g^{2}\neq g$ and we get $g g^{2} = g^{2} g = g^{3}$. |
Experiment: Roll three 6-sided dice. | Most of your work is correct.
There are two exceptions.
If the three dice are sequential, then they must assume one of the following sets of values $\{1, 2, 3\}$, $\{2, 3, 4\}$, $\{3, 4, 5\}$, or $\{4, 5, 6\}$. For each of the four sets, there are $3! = 6$ permutations which result in the same set of values. Thus, the probability that the dice are sequential is
$$\frac{4 \cdot 3!}{6^3} = \frac{24}{216} = \frac{1}{9}$$
If events $A$ and $B$ are independent, then $P(A \cap B) = P(A)P(B)$. If we let $A$ be the event that the sum of the dice is $8$ and $B$ be the event that none of the dice rolled show a 2, then
\begin{align*}
P(A) & = \frac{21}{216} = \frac{7}{72}\\
P(B) & = \frac{125}{216}\\
P(A \cap B) & = \frac{9}{216} = \frac{3}{72}
\end{align*}
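These counts are easy to confirm by brute force (a short enumeration sketch added here as a check):
```python
from itertools import product

rolls = list(product(range(1, 7), repeat=3))           # all 216 outcomes of three dice
A = [r for r in rolls if sum(r) == 8]                  # sum of the dice is 8
B = [r for r in rolls if 2 not in r]                   # no die shows a 2
AB = [r for r in rolls if sum(r) == 8 and 2 not in r]
print(len(A), len(B), len(AB))                         # 21 125 9
```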
As you can check, $P(A \cap B) \neq P(A)P(B)$. The events are dependent because if none of the dice rolled show a 2, you are less likely to get a sum of $8$ since it becomes more likely that the sum of the numbers shown on the dice will be too large. |
Sigma summation notation formalization | For all the finitary examples, we can consider the sum of a finite multiset. Why a multiset? One way to think about what's going on is to think of $\sum_{x\in M}f(x)$ where $M$ is a multiset as being a term $f(x_1)+f(x_2)+\cdots+f(x_n)$ as is usually suggested. For this to be well-defined, we need $+$ to be associative (so we don't need to worry about how to place the parentheses), commutative (so we don't need to specify an order), and have a unit (so we know what to do when $m$ is empty).
If we consider all terms given a set "variables" in a language with one formal binary operator, $+$, and one constant symbol $0$ satisfying $0+x=x=x+0$, $(x+y)+z=x+(y+z)$, and $x+y=y+x$, then we get the free commutative monoid on that set which corresponds to finite multiset built on that set. (If we additionally required the operator to be idempotent, i.e. $x+x=x$, then the terms would correspond to finite sets, but, obviously addition on real numbers, say, is not idempotent.)
"Summation" is then the unique commutative monoid homomorphism from the free commutative monoid (i.e. from multisets) on some set into any other commutative monoid given a function from that set into the latter commutative monoid that coincides on "variables". In symbols, let $F(X)$ be the free commutative monoid generated from the set $X$. Given a function, $f:X\to C$ where $C$ is (the carrier set of) a commutative monoid, we get a (commutative) monoid homomorphism $\sum_f:F(X)\to C$ such that $\sum_f \{\!\!\{x\}\!\!\} = f(x)$ for $x \in X$. Here, I'm using $\{\!\!\{x\}\!\!\}$ to represent the singleton multiset. The restriction to finite multisets (though $X$ is not required to be finite) is because we consider algebraic terms to be inductively defined which forces any term to be finitely deep, and since each operator has a finite arity this means there can be at most finitely many "variable" occurrences in a term. To be clear, whether the "summation" is actually a sum depends on the target commutative monoid, i.e. we are "summing" with respect to that monoid's operation. If it is the commutative monoid of reals equipped with multiplication, the "summation" would be $\prod$.
For summing over an infinite set, on the other hand, I strongly recommend thinking of it as just a completely different operator. You could view it as an inifintary algebraic operation but it is probably better to view it as a function defined on functions. And usually not defined on arbitrary function, but "convergent" ones. For example, if we wanted to talk about a summation like $\sum_{n\in\mathbb N}f(n)$ where $f$ is $\mathbb R$-valued say, we'd be talking about an operator $\sum:\mathbb R^\mathbb N\to\mathbb R$, but as $\sum(n\mapsto 1)$ would not produce a real number, we usually mean something like $\sum:\ell^1\to\mathbb R$ where $\ell^1$ is a sequence space. Defining such an operation typically involves analytic concerns. Note, as functions on functions the "order" does matter. If $f(0)=g(1)$ and $f(1)=g(0)$ and otherwise $f=g$, then $f$ and $g$ are different functions and need not be mapped to the same number by summation. (In practice, the summation operation is insensitive to minor changes like this, but the point is that it is not a priori clear that you can "reorder" any terms and it is not the case that you can "reorder" infinitely many of them, i.e. that $f\circ\pi=g$ for some bijection $\pi:\mathbb N\cong\mathbb N$.)
I'll briefly mention formal series. When we talk about a formal power series, say, such as $\sum_{n=0}^\infty a_n x^n$, we're "really" just talking about the sequence $\{a_n\}_{n\in\mathbb N}$ itself. We can define the operations on formal power series directly on the sequences. There are no questions of convergence. That would only come up if we wanted to "evaluate" these formal series. |
Proving that the product of two numbers (in $\mathbb{R}$ or $\mathbb{C}$) is a continuous function. | For the first highlighted step, note that
\begin{align*}
\| x - a \| &\le \max(\|x - a\|, \|y - b\|) = \|(x,y) - (a,b)\| \\
\| y - b \| &\le \max(\|x - a\|, \|y - b\|) = \|(x,y) - (a,b)\| \\
\| a \| &\le \max(\|a\|, \|b\|) = \|(a,b)\| \\
\| b \| &\le \max(\|a\|, \|b\|) = \|(a,b)\|.
\end{align*}
Therefore,
\begin{align*}
\| B \| \| x - a \| \| b \| &\le \| B \| \| (x, y) - (a,b) \| \|(a,b) \| \\
\| B \| \| a \| \| y - b \| &\le \| B \| \|(a,b) \| \| (x, y) - (a,b) \| \\
\| B \| \| x - a \| \| y - b \| &\le \| B \| \| (x,y) - (a,b) \|^2
\end{align*}
So, writing $B(x,y)-B(a,b) = B(x-a,\,b) + B(a,\,y-b) + B(x-a,\,y-b)$ (by bilinearity), applying the triangle inequality, and summing the above three inequalities, they conclude that
$$
\|B(x,y) - B(a,b) \| \le \| B\| \|(x,y) - (a,b) \| \Big[ 2 \|(a,b)\| + \|(x,y) - (a,b) \| \Big]. \tag{1}
$$
Now, fix $\epsilon > 0$. Choose $\delta$ such that $0 < \delta < 1$ and $2 (\|(a,b)\| + 1) \delta < \epsilon$. If you like, you can instead define $\delta$ explicitly:
$$
\delta := \tfrac12 \min \left( 1, \frac{\epsilon}{\|(a,b)\| + 1} \right)
$$
Note, that this is OK since $\delta$ is allowed to depend on the point in question, $(a,b)$. We don't need to show $B$ is uniformly continuous; just continuous at $(a,b)$.
Anyway, take $(x,y)$ such that $\|(x,y) - (a,b)\| \le \delta$. From (1), we then get
\begin{align*}
\|B(x,y) - B(a,b) \|
&< \| B \| \delta \Big[2 \|(a,b)\| + \delta\Big] \\
&< \| B \| \delta \Big[2 \|(a,b)\| + 1\Big] \\
&< \| B \| \epsilon.
\end{align*}
We require one more step--that $\|B\|$ exists and is finite--to then conclude continuity of $B$.
In fact, $\|B\| = 1$.
This is left as an excercise.
Given this, we finally conclude
$$
\|B(x,y) - B(a,b) \|
< \| B \| \epsilon
= \epsilon,
$$
as required. |
Asymptotes of a General Rational Function | So, you've got $a,b,c,e$ and $d$, so I'll take it from here.
Instead of writing $$f(x) = g(x)$$
I will write $$f(x) = \frac{4}{f(x)}$$
$$f(x)^2 = 4 $$
Then $$f(x) = ± 2$$
Taking $f(x) = 2$ gives: $2x^2 - 6x + 7 = 0$, which has no real solutions.
Taking $f(x) = -2$ gives: $2x^2 + 2x - 9 = 0$, which has the two aforementioned solutions.
Hope this helps! |
Self-complete set in square | Several different thoughts. I take it that a set $A \subseteq [0,1]$ is self-complete if $\bigcup_{x \in A} S(x) \subseteq A$, where $S \colon [0,1] \to P([0,1])$ is some function defined as above in terms of a single fixed Lipschitz $\phi\colon [0,1]^2 \to \mathbb{R}_{\geq 0}$.
(1) In some cases the set will be $[0,1]$ itself, for example if $\phi(x,y) = 1$ then $[0,1]$ is the only (nonempty) self-complete set.
(2) Regardless what $S$ is, we can make a self-complete set by just creating a set that is "closed under $S$" in an appropriate sense. You simply use transfinite induction to work towards the goal that whenever $x \in A$, $S(x) \subseteq A$. It goes like this:
Pick any point $x_0$ and let $A_0 = \{ x_0\}$. Now, by transfinite induction, for $\lambda > 0$ let
$$A_\lambda = \left ( \bigcup_{\kappa < \lambda} A_\kappa \right ) \cup \left ( \bigcup_{\kappa < \lambda}\, \bigcup_{x \in A_\kappa} S(x) \right ) $$
This has the property that $A_\kappa \subseteq A_\lambda$ whenever $\kappa < \lambda$. We can't keep adding new points forever, because there are only as many points as the cardinality of [0,1]. So eventually we will have $A_\kappa = A_{\kappa+1}$, and this will be a self-complete set.
(3) Normally, you could replace this transfinite induction with a "top-down" argument. In fact the intersection of any family of self-complete sets would be self complete except for the requirement you added that self-complete sets must be nonempty. Without that requirement, each $\phi$ would be associated with a particular minimal self-complete set. The argument in part (2) above shows that for any $x \in [0,1]$ there is a nonempty self-complete set containing $x$. The intersection of all such sets will still be a self-complete set containing $x$, and so will be a minimal self-complete set among the ones that contain $x$. In fact, this is the set that was constructed in part (2).
(4) You cannot prove that every self-complete set is of positive measure. If $\phi$ is identically 0 then $S(x) = \varnothing$ for every $x$, and so every (nonempty) set is self-complete. |
application of intermediate value theorem in root finding | $f'(x) = 2x+10\cos x$ is clearly positive if $x\geq 6$, hence our function is increasing over the interval $[6,33]$ and since $1000$ is between $f(6)$ and $f(33)$, there is some (only one) $c\in(6,33)$ such that $f(c)=1000$. |
Empirical formula of coverage probability | First: Prediction interval $\neq$ confidence interval. With forecasting, you are generally intersted in prediction intervals. The formula you cite only makes sense if the indicator variable is 1 if the observation is outside of the prediction interval, and 0 otherwise. |
Operator on $L^2 (0,1)$ defined by convolution with $|x-y|^{-\alpha}$ | HINT:
The Cauchy-Schwarz Inequality reveals that
$$\begin{align}
\left|Af(x)\right|^2 &= \left|\int_0^1 f(y) \frac{1}{|x-y|^\alpha} dy \right|^2\\\\
&\le \int_0^1\left|f(y)\right|^2\,dy\,\int_0^1\frac{1}{|x-y|^{2\alpha}} dy
\end{align}$$
And thus, the square of the operator norm is
$$\begin{align}
||A||_2^2&=\sup_{f\in \mathscr{L}^2} \left(\frac{\int_0^1\left|\int_0^1 f(y) \frac{1}{|x-y|^\alpha} dy \right|^2dx}{\int_0^1\left|f(y)\right|^2\,dy}\right)\\\\
&\le \int_0^1\int_0^1\frac{1}{|x-y|^{2\alpha}}dydx
\end{align}$$ |
How to prove $\frac{n}{2} \sum_{i=1}^n a_i\,b_i \leqslant \sum_{i=1}^n a_i \sum_{i=1}^n b_i$ for $a_i,b_i\geq 0$? | With $n=3$, $a_1=b_1=1$, and $a_2=a_3=b_2=b_3=0$, we have
$$\frac{n}{2} \sum_{i=1}^n a_i b_i = \frac{3}{2} > 1 = \sum_{i=1}^n a_i \sum_{i=1}^n b_i$$ |
List of ways to tell if degree sequence is impossible for a simple graph | The Erdős–Gallai theorem completely characterizes the possible degree sequences for simple graphs.
It is stated by Wikipedia as:
A sequence of non-negative integers $d_1\geq\cdots\geq d_n$ can be represented as the degree sequence of a finite simple graph on $n$ vertices if and only if $d_1+\cdots+d_n$ is even and
$$ \sum^{k}_{i=1}d_i~\leq~ k(k-1)+ \sum^n_{i=k+1} \min(d_i,k)$$
holds for $1\leq k\leq n$. |
$\cos^8x.\sec^6y,\frac12,\sin^8x.\csc^6y$ in AP if $\cos^4x.\sec^2y,\frac12,\sin^4x.\csc^2y$ in A.P | Substitute $\sin^2y=1-\cos^2y$ and $\sin^2x=1-\cos^2x$, and factor the result. |
Under what conditions, the rank of a matrix reduces, if it is subtracted by a vector | The condition corresponds to an $n-1$ dimensional affine subspace. As $A$ is invertible, write $Q=A^{-1}$ and then e.g.
$$ B = (I-u e^T Q) A$$
The rank of $B$ equals $n-1$ precisely when non-invertible, i.e. when
$$ \det B = \det (I - u e^T Q) \det A = (1 - (e^T Q u)) \det A=0$$
(where I used that for a rank 1 operator $T$ we have $\det(I-T) = 1 - {\rm tr}\; T$). This defines the wanted affine subspace: $$e^T Q u = 1.$$
Edit: Writing $u=A \lambda$ with $\lambda\in {\Bbb R}^n$ the condition reads: $e^T Q A \lambda = e^T \lambda = 1$ or simply $\lambda_1+\cdots + \lambda_n=1$ which seemingly provides a positive answer to your question (with the right interpretation). |
Are there any tricks for simultaneous equations I should be aware of? | You can see $5y-2y^2=2$ by subtracting the 2 equations. Then solve the quadratic. |
What is the multiplication "$\times$" for the ordinary vector space $(\Bbb R^n, +, \times)$ | You seem to be confusing two separate but related operations: (1) multiplication in the field, and (2) scalar multiplication in the vector space. The standard vector space $\mathbb R^n$ is defined over the field $\mathbb R$. Multiplication in $\mathbb R$ works exactly the way you expect. We can't properly speak of multiplication in $\mathbb R^n$, however. You don't multiply vectors with each other; rather, you multiply them by scalars. So if $\bf x \in \mathbb R^n$ and $c \in \mathbb R$, we can multiply them to get a new vector $c \bf x$. In this case, each co-ordinate of $\bf x$ is simply scaled by $c$. |
If $g(x)=2f(x/2)+f(2-x)$ and $f''(x)<0$ for all lying in $(0,2)$ how to find the interval where $g(x)$ increases? | We know that $g(x)$ is increasing on intervals where $g'(x) > 0$.
You correctly applied the chain rule to get $g'(x) = f'(\tfrac{x}{2})-f'(2-x)$.
So you need to determine for what values of $x$ is $f'(\tfrac{x}{2}) > f'(2-x)$ true?
Since $f''(x) < 0$ for all $x \in (0,2)$, $f'(x)$ is decreasing. Thus, $f'(a) > f'(b)$ iff $a < b$.
Now, can you figure out when $f'(\tfrac{x}{2}) > f'(2-x)$? |
Show that the characteristic function of $\mathbb{Q}$ is Lebesgue integrable. | You have that $\mathbb{Q}$ is countable, so you can write $\mathbb{Q}=\bigcup_{n\geq 0}\{x_n\}.$ As a countable union of Borel sets, $\mathbb{Q}$ is a Borel set and so $\mathbb{1}_\mathbb{Q}$ is measurable: by definition $$\int_\mathbb{R}\mathbb{1}_\mathbb{Q}\,d\mu=\mu(\mathbb{Q})=\sum_{n\geq 0}\mu(\{x_n\})=\sum_{n\geq0}0=0.$$
Interior product of inner product | Claim: We have $$
i_\xi(\star\omega) = \star(\omega\wedge \xi^\flat),
$$
where $\xi^\flat$ is the 1-form $g(\xi,-)$.
Proof: Both sides are linear in $\xi,\omega$. So it suffices to verify it for $\xi=e_1$ and $\omega=e^{i_1\dots i_p}$, $1\leq i_1<i_2<\dots<i_p\leq n$.
If $i_1=1$ then $e^1$ does not appear in $\star\omega$, hence LHS is $0$. RHS is also $0$ because $\omega\wedge \xi^\flat=e^{1i_2\dots i_p}\wedge e^1=0$.
If $i_1\neq 1$, then $\star\omega=\varepsilon e^{j_1j_2\dots j_{n-p}}$, $1=j_1<j_2<\dots<j_{n-p}\leq n$ with $(i_1,\dots,i_p,\underbrace{j_1}_{=1},j_2,\dots,j_{n-p})$ a sign $\varepsilon$ permutation of $(1,\dots,n)$. Then LHS is $\varepsilon e^{j_2\dots j_{n-p}}$. For the RHS, $e^{i_1\dots i_p}\wedge e^1$ has Hodge star $\varepsilon e^{j_2j_3\dots j_{n-p}}$. So both sides agree.
You can now apply this to $i_\xi(\alpha\wedge\star\beta)$.
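Explicitly, if $\alpha$ and $\beta$ are both $p$-forms, then since $i_\xi$ is an antiderivation of degree $-1$,
$$
i_\xi(\alpha\wedge\star\beta)=(i_\xi\alpha)\wedge\star\beta+(-1)^p\,\alpha\wedge i_\xi(\star\beta)=(i_\xi\alpha)\wedge\star\beta+(-1)^p\,\alpha\wedge\star(\beta\wedge\xi^\flat).
$$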
There is a special simplification available with $\alpha=\beta$: $\alpha\wedge\star\alpha=\lvert\alpha\rvert^2\,(\star1)$
so
$$
i_\xi(\alpha\wedge\star\alpha)=\lvert\alpha\rvert^2\,i_\xi(\star1)=\lvert\alpha\rvert^2\,\star(\xi^\flat).
$$ |
Riesz's Lemma for $l^\infty$ and $\alpha = 1$ | I think the answer is no, because $l^{\infty}$ is not reflexive. A related discussion can be found here. The credit should be given to the answerer in the other post.
Numerical approximation of Levy Flight | To transform a uniformly distributed random variable into another distribution, you need to use the inverse cumulative distribution function. That is, if $F$ is a cumulative distribution function corresponding to the probability density $f$, and $u$ is a uniform random variable in [0,1] then
$$x = F^{-1}(u)$$
is distributed according to $F$. The cumulative distribution function for a pure power law distribution is
$$F(x) = 1 - \left( \frac{x}{x_{\mathrm{min}}} \right)^{-\alpha}$$
where $x_{\mathrm{min}}$ is the minimum value that your random variable can take, and therefore the inverse distribution function is
$$F^{-1}(u) = x_{\mathrm{min}} (1-u)^{-1/\alpha}$$
If you don't want to introduce an artificial minimum, you can consider two-sided power law distributions. These are not pure power laws (you have to fudge the distribution around zero so that the density is not infinite) but they are asymptotically power laws in the tails.
One such distribution is the Student's t distribution with $\nu$ degrees of freedom, which behaves like a power law with exponent $\nu$ as its values tend to $\pm\infty$. |
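A minimal sampling sketch (the function name is mine; the two-sided variant uses NumPy's Student's t generator):

```python
import numpy as np

rng = np.random.default_rng(42)

def power_law_steps(alpha, x_min, size):
    """Inverse-transform sampling from F(x) = 1 - (x/x_min)^(-alpha), x >= x_min."""
    u = rng.random(size)
    return x_min * (1.0 - u) ** (-1.0 / alpha)

# One-sided step lengths with power-law tail exponent alpha
steps = power_law_steps(alpha=1.5, x_min=1.0, size=10_000)

# Two-sided heavy-tailed alternative: Student's t with nu degrees of freedom
nu = 1.5
two_sided = rng.standard_t(nu, size=10_000)
```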
Expressing forces in Pendulum swing in component form | There are two forces acting on the bob, Tension acting in the direction of the string and Weight $W=mg$ acting downwards.
Assuming the string stays taut and does not change in length, you can infer that the Tension must exactly balance the component of the weight in the direction of the string.
So
$$ T = mg\cos\theta $$
The net force on the bob must be perpendicular to the direction of the string, so
$$F_{net}=-mg\sin\theta $$ |
Proving that $N(R)\times N(S)= N(R\times S)$ | Let me try to provide you with a full answer. First, in order to fix our terminology let us agree to ad-hoc call these radicals you are interested in nilpotency radicals (this being a term I just coined up for usage here; in general ring theory there are lower and upper nilradicals, together with some other interesting radicals, so one should strive to be specific about the objects handled). Let us also agree to write $\mathrm{Np}(A)$ for the nilpotency radical of given ring $A$.
You make a statement that could be generalized to arbitrary finite direct products of rings, but it suffices to treat the elementary case of just two rings:
Proposition. For any two rings $A, B$ one has $\mathrm{Np}(A \times B)=\mathrm{Np}(A) \times \mathrm{Np}(B)$.
Proof: The inclusion from left to right is the easier one, as you noticed: consider a nilpotent bilateral (I do not like the English terminology ''two-sided'' so I will instead adopt a more latin one) ideal $J \subseteq A \times B$. By the general structure theorem describing ideals in finite direct products, we know there must exist bilateral ideals $I \subseteq A, I' \subseteq B$ such that $J=I \times I'$.
Since in general it is the case that
$$(I \times J).(I' \times J')=(I.I') \times (J.J')$$
for any pairs of ideals $I, I' \subseteq A, J, J' \subseteq B$ (where by $I.I'$ I am expressly referring to the ideal-product of the two, calculated in the multiplicative monoid of bilateral ideals, rather than their subset-product $II'=\{xy\}_{x \in I\\ y \in I'}$ calculated in the multiplicative monoid of subsets of $A$) and thus consequently that
$$(I \times J)^{\underline{n}}=I^{\underline{n}} \times J^{\underline{n}} \tag{npot}$$
for any ideals $I \subseteq A, J \subseteq B$ and any $n \in \mathbb{N}$ (the underlining of the exponent $n$ is syntax I expressly use to distinguish the calculation of $n$-th powers in the multiplicative monoid of bilateral ideals from mere cartesian products), the fact that $J$ considered in the above paragraph is nilpotent tells you that $I, I'$ must also be, to the effect that $I \subseteq \mathrm{Np}(A), I' \subseteq \mathrm{Np}(B), J \subseteq \mathrm{Np}(A) \times \mathrm{Np}(B)$. This establishes the inclusion ''$\subseteq$''.
As for the reverse one, let us now consider an arbitrary element $u=(x,y) \in \mathrm{Np}(A) \times \mathrm{Np}(B)$. It is essential to understand the following ''finitary'' characterization of the nilpotency radical. Allow me first of all to write $\mathscr{Np}(A)$ for the set of all nilpotent bilateral ideals of a given ring $A$; for arbitary set $M$, let us agree to denote by $\mathscr{F}(M)$ the set of all finite subsets of $M$. With these conventions in place we have the description:
$$\mathrm{Np}(A)=\bigcup_{\mathscr{H} \in \mathscr{F}(\mathscr{Np}(A))} \sum_{I \in \mathscr{H}}I \tag{undir}$$
in other words the nilpotency radical is the union over the (upward-directed) family of all finite sums of nilpotent bilateral ideals of $A$.
With this in mind, we realize that there must exist finite subsets $\mathscr{K} \subseteq \mathscr{Np}(A), \mathscr{H} \subseteq \mathscr{Np}(B)$ such that
$$x \in \sum_{I \in \mathscr{K}}I\\ y \in \sum_{J \in \mathscr{H}}J$$
The relation (npot) mentioned above ensures that $\mathscr{Np}(A \times B)=\mathscr{Np}(A) \times \mathscr{Np}(B)$. Thus, it follows easily that
$$K=\sum_{I \in \mathscr{K}}(I \times \{0_B\})+\sum_{J \in \mathscr{H}}(\{0_A\} \times J) \subseteq \mathrm{Np}(A \times B)$$
since all the individual products $I \times \{0_B\}, \{0_A\} \times J$ are nilpotent ideals in the direct product; furthermore, since we also have
$$K=\left(\sum_{I \in \mathscr{K}}I\right) \times \{0_B\}+\{0_A\} \times \left(\sum_{J \in \mathscr{H}}J\right)$$
it is clear that $u=(x, 0_B)+(0_A, y) \in K \subseteq \mathrm{Np}(A \times B)$, settling the problem of the reverse inclusion. $\Box$
Justification of relation (undir)
For given ring $A$ and arbitrary subset $X \subseteq A$ let us write $(X)_{\mathrm{b}}$ for the bilateral ideal generated by $X$ and let $\mathscr{Id}_{\mathrm{b}}(A)$ denote the set of all bilateral ideals of $A$.
Let us also recall that for any finite subset $\mathscr{M} \subseteq \mathscr{Id}_{\mathrm{b}}(A)$ (equivalently, for $\mathscr{M} \in \mathscr{F}(\mathscr{Id}_{\mathrm{b}}(A))$) one has the equivalent descriptions:
$$\left(\bigcup \mathscr{M}\right)_{\mathrm{b}}=\sum_{I \in \mathscr{M}}I \tag{*}$$
where the right-hand side term in the above is understood as a finite sum calculated in the commutative monoid of all subsets of $A$, namely $(\mathscr{P}(A), +)$, the addition $+: \mathscr{P}(A) \times \mathscr{P}(A) \to \mathscr{P}(A)$ being given by $X+Y=\{x+y\}_{x \in X\\ y \in Y}$ (the canonical extension of the addition on $A$ to the powerset $\mathscr{P}(A)$).
By definition we have
$$\mathrm{Np}(A)=\left(\bigcup \mathscr{Np}(A)\right)_{\mathrm{b}}$$
Let us denote the right-hand side term in relation (undir) by $R$; by the previous observation (*) it follows that for any $\mathscr{H} \in \mathscr{F}(\mathscr{Np}(A)) \subseteq \mathscr{F}(\mathscr{Id}_{\mathrm{b}}(A))$ (in general $M \subseteq N$ immediately entails $\mathscr{F}(M) \subseteq \mathscr{F}(N)$) we have:
$$\sum_{I \in \mathscr{H}}I=\left(\bigcup \mathscr{H}\right)_{\mathrm{b}} \subseteq \left(\bigcup \mathscr{Np}(A)\right)_{\mathrm{b}}=\mathrm{Np}(A)$$
so clearly
$$R \subseteq \mathrm{Np}(A) \tag{1}$$
We want to argue that at this point it suffices to show that $R$ defined precisely as the right-hand side of (undir) is itself a bilateral ideal. Let us postpone the task of proving this for a moment and notice what happens once that fact is established: for any $I \in \mathscr{Np}(A)$ one automatically has $\{I\} \in \mathscr{F}(\mathscr{Np}(A))$ and trivially $I=(I)_{\mathrm{b}}=(\bigcup \{I\})_{\mathrm{b}}$, so $I \subseteq R$. Since $R$ thus includes all nilpotent ideals, it follows that it also includes their union
$$R \supseteq \bigcup \mathscr{Np}(A)$$
and as $R$ is established to be a bilateral ideal it follows that
$$R \supseteq \left(\bigcup \mathscr{Np}(A)\right)_{\mathrm{b}}=\mathrm{Np}(A) \tag{2}$$
From relations (1) and (2), by virtue of the axiom of extensionality (and its immediate corollary) we can infer that $R=\mathrm{Np}(A)$.
Now on to the reason why $R$ is a bilateral ideal. In order to prove this, I think it to your benefit (when dealing with several other areas of mathematics even apart from ''pure algebra'', these are considerations fundamental to many areas of order theory, topology and analysis) to introduce some essential notions.
An ordered set is a pair $(A, R)$ where $R$ is an order relation on $A$; we will write equivalently:
$$(x,y) \in R \Leftrightarrow xRy \Leftrightarrow x \leqslant_R y$$
in the case of a fixed order relation $R$ (the first two manners of notation are more generally applied to any binary relation; the symbol $\leqslant_R$ refers strictly to the case of order relations).
An upward directed set is an ordered set $(A, R)$ such that for any two elements $x, y \in A$ there must exist a third one $z \in A$ such that $x, y \leqslant_R z$. The typical example for such objects is precisely the set $\mathscr{F}(A)$ of all finite subsets of a given set $A$, ordered by inclusion.
We proceed with a series of lemmas:
Lemma 1. Consider a ring $A$ and nonempty subset $\varnothing \neq \mathscr{M} \subseteq \mathscr{Id}_{\mathrm{b}}(A)$ which is upward directed when ordered by inclusion. Then the union
$$\bigcup \mathscr{M} \in \mathscr{Id}_{\mathrm{b}}(A)$$
is itself a bilateral ideal (upward directed unions of ideals are ideals).
Proof: Since $\mathscr{M}$ is nonempty (and any ideal is a nonempty subset) the union
$$J:=\bigcup \mathscr{M}$$
is nonempty. Let $x, y \in J$ be arbitrary: there will exist $I, I' \in \mathscr{M}$ such that $x \in I, y \in I'$; since $\mathscr{M}$ is upward directed under inclusion, there must exist $I'' \in \mathscr{M}$ such that $I, I' \subseteq I''$ and hence $x, y \in I''$; $I''$ being in particular an additive subgroup of $A$ it follows that $x-y \in I'' \subseteq J$. So far we have thus ascertained that $J$ is itself an additive subgroup of $A$.
As to $J$ being closed with respect to multiplication on the left, we have
$$AJ=A\left(\bigcup_{I \in \mathscr{M}}I\right)=\bigcup_{I \in \mathscr{M}}(AI) \subseteq \bigcup_{I \in \mathscr{M}}I=J$$
since all members $I \in \mathscr{M}$ are in particular left ideals which amongst others means that $AI \subseteq I$. Closure with respect to multiplication on the right is treated analogously, and we conclude that $J \in \mathscr{Id}_{\mathrm{b}}(A)$. $\Box$
Before presenting the next lemma, a few more preliminary notions. Let $X$ be a family of objects (in the axiomatic system I prefer and which I use to formalise mathematics, ultimately any ''object'' is a set, the two terms being synonymous) indexed by $I$ and let $R$ be an order relation on $I$; we say that $X$ is increasing with respect to $R$ if
$$(\forall i, j)(i \leqslant_R j \Rightarrow X_i \subseteq X_j)$$
With this in place we state
Lemma 2. Let $(I, R)$ be an upward directed set and $X$ be a family indexed by $I$ increasing with respect to $R$. Then the set of all its terms $\{X_i\}_{i \in I}$ is itself upward directed when ordered by inclusion.
Proof: Let us abbreviate
$$\mathscr{A}:=\{X_i\}_{i \in I}$$
Conisdering two arbitrary elements $M, N \in \mathscr{A}$ there must exist $i, j \in I$ such that $M=X_i, N=X_j$; by upward directedness there must also exist $k \in I$ with $i, j \leqslant_R k$ and since the family $X$ is increasing we have $M, N \subseteq X_k \in \mathscr{A}$. $\Box$
In our particular case, we can apply lemmas 1 and 2 by realising that:
$\mathscr{F}(\mathscr{Np}(A))$ is indeed upward directed under inclusion (this we already pointed out)
the family defined by
$$\left(\left(\bigcup \mathscr{H}\right)_{\mathrm{b}}\right)_{\mathscr{H} \in \mathscr{F}(\mathscr{Np}(A))}$$
is indeed increasing with respect to inclusion
$R$ is by definition the union of the nonempty collection of ideals (nonempty because $\varnothing \in \mathscr{F}(M)$ for any set $M$, so the indexing family below is never empty)
$$\left\{\left(\bigcup \mathscr{H}\right)_{\mathrm{b}}\right\}_{\mathscr{H} \in \mathscr{F}(\mathscr{Np}(A))}$$
collection which is thus upward directed under inclusion ensuring that the union $R$ itself is an ideal. |
Seemingly simple algebra | It may very well be 1500 * 1/150, as that explains how A = 2. Everyone, including myself, ended up with 6.25 based on the math you presented us. It would make the most sense for this to be a typo. If this is school work, inform your teacher. |
$ZFC^- + AFA$ and infinite cardinals | If you assume choice, nothing changes. The standard argument to prove that every set is well-orderable, and therefore in bijection with an ordinal, still applies. It follows that we can still define cardinals as before: A cardinal $\kappa$ is an initial ordinal, that is, an ordinal not in bijection with any of its elements.
(The one caution you need is that ordinals need to be defined as transitive sets well-ordered by $\in$. With foundation, we can relax the well-orderability condition and simply require that they are linearly ordered by $\in$.)
Without choice things become more interesting. This question, my answer, and the comments there, address what happens. The point is that without choice not every set is in bijection with an ordinal, so cardinals must be defined differently. The standard approach is to simply say that two sets have the same cardinal iff they are in bijection with one another. We could then say that the cardinality of $A$ is the collection of all sets $B$ in bijection with $A$, and that a cardinal is the cardinality of a set. The problem is that cardinalities are proper classes (unless $A=\emptyset$). We fix this issue by invoking Scott's trick, that replaces a proper class with its elements of smallest rank. This gives us a set but, of course, uses foundation. The question I linked asks what happens if we have no foundation. Can we still do a sort of Scott's trick? As I point out there, this is not possible if on top of no choice and no foundation we allow a proper class of atoms.
If we assume Aczel's $\mathsf{AFA}$ axiom instead of foundation, then we can still do Scott's trick, that is, replace each class with a subset, so we have a (more or less) canonical way of representing cardinals as sets. Details are in the link.
In the comments, a different problem is addressed, namely (still assuming foundation, but no choice), we now require that the cardinal of a set $A$ is a set in bijection with $A$. Pincus showed that it is consistent that this is possible, and it is also consistent that it is not possible.
Finally, as you can see, there is still some room there for improvements, and any additional remarks that apply to that question (without foundation or choice, under such-and-such assumption, we have a version of Scott's trick) could be immediately applied to a version of your question as well (without foundation or choice, under such-and-such assumption, we can define cardinals as sets via the new version of Scott's trick). |
Transform $\sum_i c_i (B_i + \alpha)^{t_i}$ to $\sum_i \beta_i(1 + r)^{t_i}$ to compute Excess IRR | I didn't figure out how to prove it rigorously but I have a strong suspicion it isn't possible. I tried to find the solution for the simplest case: only one period with an investment at the beginning and a single payoff at the end. In this case the problem would be
Find $\alpha$ such that
$$ -I+ \frac{P}{1+b_{1}+\alpha} = 0 $$
where $I$=investment, $P$=payoff, $b_{1}$=index return, and $\alpha$=excess return.
There's a simple solution by setting $r = b_{1} + \alpha$, but this is not valid because $r$ is dependent on $b_{1}$. I can't see a way to transform it as you want.
You can simplify the problem by using continuous compounding because the problem is reduced to
Find $\alpha$ such that
$$ -I+ Pe^{-(b_{1}+\alpha)} = -I+ Pe^{-b_{1}}e^{-\alpha} = 0 $$
There you have it factorized as you wanted; the only problem is that XIRR doesn't calculate IRRs with continuous compounding, but you can use the equivalent rate of return $(b_{i}^{'} = e^{b_{i}}-1 \enspace \forall i)$ using the cashflows $P_{i}e^{-b_{i}} \enspace \forall i$.
This won't give you the exact result as the original problem, because the original problem is the degenerate version of the problem
Find $\alpha$ such that
$$ -I+ P\left(\frac{1}{1+b_{1}}\right)\left(\frac{1}{1+\alpha}\right) = 0 $$
$$ -I+ P\left(\frac{1}{1+b_{1}+\alpha+\color{red}{b_{1}\alpha}}\right) = 0 $$
which is the equivalent problem of the continuous case. The good news is that if the numbers are small, the cross term will be even smaller and it would serve as a pretty good approximation. If you want to be safe better use an iterative root finding algorithm. |
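For the iterative route, here is a minimal sketch — it assumes cashflows $c_i$ at times $t_i$ discounted as $c_i/(1+b_i+\alpha)^{t_i}$, which you should adapt to your exact compounding convention:

```python
from scipy.optimize import brentq

def excess_return(cashflows, times, benchmark, lo=-0.99, hi=10.0):
    """Find alpha with sum_i c_i / (1 + b_i + alpha)^t_i = 0 by bracketing."""
    def npv(alpha):
        return sum(c / (1.0 + b + alpha) ** t
                   for c, t, b in zip(cashflows, times, benchmark))
    return brentq(npv, lo, hi)  # assumes npv changes sign on [lo, hi]

# One period: invest 100, receive 115 a year later while the index returned 5%
print(excess_return([-100.0, 115.0], [0.0, 1.0], [0.0, 0.05]))  # ≈ 0.10
```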
Help with identity proof using sum manipulations | The derivation should be revised. But first, some general remarks which might be helpful. We should keep in mind that the distributive, commutative and associative laws listed in (2.15) to (2.17)
\begin{align*}
\sum_{k\in K}ca_k&=c\sum_{k\in K}a_k\tag{dist.}\\
\sum_{k\in K}(a_k+b_k)&=\sum_{k\in K}a_k+\sum_{k\in K}b_k\tag{ass.}\\
\sum_{k\in K}a_k&=\sum_{k\in P(K)}a_k\tag{comm.}\\
\end{align*}
are not laws of the sigma symbol but just the laws for manipulating the numbers $a_k$ according to the rules of algebra. In order to solve the problem we are free to use one or more of these laws any number of times we consider it to be convenient. We are also allowed to use other admissible transformations like adding zero and represent it as the sum of an element and its additive inverse ($0=a_n-a_n$).
In the following I give a rather detailed derivation applying one rule per step. We obtain
\begin{align*}
&\color{blue}{\sum_{0\leq k<n}}\color{blue}{(a_{k+1}-a_k)b_k}\\
&\quad=\sum_{0\leq k<n}(a_{k+1}b_k-a_kb_k)\tag{dist.}\\
&\quad=\sum_{0\leq k<n}a_{k+1}b_k+\sum_{0\leq k<n}\left(-a_kb_k\right)\tag{comm.}\\
&\quad=\sum_{0\leq k<n}a_{k+1}b_k-\sum_{0\leq k<n}a_kb_k\tag{dist.}\\
&\quad=\sum_{0\leq k<n}a_{k+1}b_k-\sum_{-1\leq k<n-1}a_{k+1}b_{k+1}\tag{index}\\
&\quad=\sum_{0\leq k<n}a_{k+1}b_k-\left(a_0b_0+\sum_{0\leq k<n-1}a_{k+1}b_{k+1}\right)\tag{ass.}\\
&\quad=\sum_{0\leq k<n}a_{k+1}b_k-\left(a_0b_0+\sum_{0\leq k<n-1}a_{k+1}b_{k+1}\right)+a_nb_n-a_nb_n\tag{+0}\\
&\quad=\sum_{0\leq k<n}a_{k+1}b_k-a_0b_0-\sum_{0\leq k<n-1}a_{k+1}b_{k+1}+a_nb_n-a_nb_n\tag{dist.}\\
&\quad=a_nb_n-a_0b_0-\sum_{0\leq k<n-1}a_{k+1}b_{k+1}-a_nb_n+\sum_{0\leq k<n}a_{k+1}b_k\tag{comm.}\\
&\quad=a_nb_n-a_0b_0-\sum_{0\leq k<n}a_{k+1}b_{k+1}+\sum_{0\leq k<n}a_{k+1}b_k\tag{ass.}\\
&\quad=a_nb_n-a_0b_0-\left(\sum_{0\leq k<n}a_{k+1}b_{k+1}-\sum_{0\leq k<n}a_{k+1}b_k\right)\tag{dist.}\\
&\quad=a_nb_n-a_0b_0-\sum_{0\leq k<n}\left(a_{k+1}b_{k+1}-a_{k+1}b_k\right)\tag{comm.}\\
&\quad\color{blue}{=a_nb_n-a_0b_0-\sum_{0\leq k<n}a_{k+1}\left(b_{k+1}-b_k\right)}\tag{dist.}\\
\end{align*}
and the claim follows.
Note:
It might be useful to go through this problem with a small special case and without sigma notation, for instance with $n=2$ to better see some details.
The line with (index)-transformation does not apply any algebraic rule, but instead uses the power of the sigma notation to conveniently manipulate summands. If we don't use the sigma notation, then this line and the previous line are identical. |
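For instance, with $n=2$ the identity reads
$$(a_1-a_0)b_0+(a_2-a_1)b_1=a_2b_2-a_0b_0-\bigl(a_1(b_1-b_0)+a_2(b_2-b_1)\bigr),$$
and both sides expand to $a_1b_0-a_0b_0+a_2b_1-a_1b_1$.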
${I}=\Delta(G,G')$ is the smallest ideal of the group ring $\mathbb{Z}{G} $ such that $\mathbb{Z}{G}/{I}$ is a commutative ring | I suppose you have found it by yourself by now, but for completeness' sake I will write an answer here.
Let $R $ be any commutative ring. Let $J$ be an ideal of $ RG $. Then we can look at the images $\overline{g}$ of the group elements $g \in G $ in the quotient ring $ RG / J $. They should commute with each other if the quotient ring is commutative. Hence, for any $g, h \in G $ it follows that $$ \overline {g^{-1}h^{-1} gh} = \overline {1}. $$ Or if we rewrite this, we find that $$ g^{-1}h^{-1} gh - 1 \in J. $$ As $J $ is an ideal, this shows that $$\Delta ( G, G') \subseteq J. $$ So any ideal whose quotient ring is commutative contains this partial augmentation ideal $\Delta ( G,G') $. As you stated yourself, $I:=\Delta (G,G') $ is an ideal. Moreover, $RG/I \cong R (G/G') $. Hence, $I$ is the smallest ideal in $RG $ whose quotient ring is commutative.
Derivations of a local algebra over a field | Yes.
Injectivity: let $v$ be a derivation such that $v(f)=0$ for all $f\in\mathfrak m$. As $v(k)=0$ (because $v(1)=v(1)1+1v(1)$ implies that $v(1)=0$), we have $v=0$.
Surjectivity: let $\theta : \mathfrak m/\mathfrak m^2\to k$ be a linear form. Define $v: A\to k$ by
$$v(f)=\theta(\overline{f-f(0)}).$$
Then $v$ is clearly $k$-linear. If $f, g\in A$, we write
$f=c_1+f_1$, $g=c_2+g_1$ with $c_i\in k$ and $f_1, g_1\in \mathfrak m$. Then
$fg=c_1c_2+c_1g_1+c_2f_1+f_1g_1$ and
$$v(fg)=\theta(c_1\bar{g}_1+c_2\bar{f}_1)=c_1v(g)+c_2v(f). $$
This shows that $v$ is a $k$-derivation of $A$. Finally, by construction, $\psi(v)=\theta$. |
$A[x]$ is a commutative domain if and only if $A$ is a commutative domain | If you can show that $\deg(fg) = \deg(f) + \deg(g)$ then this follows since if $f$ and $g$ are non-zero then $\deg(f), \deg(g) \ge 0$ and hence $\deg(fg) \ge 0$. By convention $\deg(0) = -\infty$ so this shows that $fg \ne 0$.
And if you don't like relying on a convention then you can try to synthesize the relevant part of the above proof sketch, namely looking at the leading terms. |
Double integral $\iint \cos(x - \ln x) dxdy$ | $$\int_1^e\int_{1\over e}^{1\over y} \cos(x-\ln x) dxdy{=\int_{1\over e}^1\int_{1}^{1\over x} \cos(x-\ln x) dydx\\=\int_{1\over e}^1\left({1\over x}-1\right) \cos(x-\ln x) dx\\=-\sin(x-\ln x)\Bigg|_{1\over e}^{1}\\=\sin\left({1\over e}+1\right)-\sin 1\\\approx0.13801}$$ |
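A numerical sanity check of the closed form and the value (a sketch with SciPy; note that `dblquad` takes the inner variable as the first argument of the integrand):

```python
from math import cos, log, sin, e
from scipy.integrate import dblquad

# outer variable y runs over [1, e]; inner variable x runs over [1/e, 1/y]
val, err = dblquad(lambda x, y: cos(x - log(x)), 1, e, lambda y: 1/e, lambda y: 1/y)
print(val)                    # ≈ 0.13801
print(sin(1/e + 1) - sin(1))  # same value, from the closed form above
```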
mgf of an infinite sum of independent random variables | This is vague, because the details will differ with the problem. For starters you'd want the sum of the rvs to converge almost surely, which could follow from a moment condition. Then you need the mgf of the infinte sum to be the pointwise limit of the finite sum mgfs, in some neighborhood of 0 (to be of any use) which sounds like a dominated convergence theorem application. Along the way you can write down the characteristic function, for which the dominated convergence application is trivial. You might well read off the functional form of the mgf from that, and reassure yourself about the absence of nearby singularities.
May I suggest you try to do this in the case where $X_i = \pm 2^{-i},$ with random signs, as an exercise? |
Compute in practice a channel capacity | Here I try to derive only the mutual information. As I discuss later, the problem seems to be difficult in general.
To obtain capacity, the important point is to characterize the conditional probability $P_{Y|X}$ of the channel. After that $H(Y)$ and $H(Y|X)$ can be computed as a function of the input distribution $P_X$ which should be optimized later to maximize $I(X;Y)$ and obtain the capacity.
Consider a fixed $n$ and let $X$ be a column vector of dimension $n$. Suppose that the entries of $X$ are discrete, so we use discrete entropy. The channel can be written as:
$$
Y=AX,
$$
where $A$ is randomly chosen from the following subset of permutation matrices:
$$
\mathcal A=\left\{A\in\mathbb R^{n\times n}: A=\begin{pmatrix}I_1&0&0\\0&J_2&0\\0&0&I_3\end{pmatrix}\text{ for some } I_1,I_3,J_2\right\},
$$
with $I_1$ and $I_3$ identity matrices and
$$
J_2=\begin{pmatrix}0&\dots&0&1\\0&\dots&1&0\\1&\dots&0&0\end{pmatrix}.
$$
Note that $J_2$ acts on the block that is reversed. The matrix $A$ is randomly generated by randomly picking a block and associating $J_2$ to that block.
If $Y=y$ is a block-reversed version of $X=x$, then the matrix $A$ can be determined from $x$ and $y$. The matrix is not unique however. Denote the set of these matrices by $A(x,y)$. If $y$ is a block reversed version of $x$, we have:
$$
P(Y=y|X=x)=P(AX=y|X=x)=P(A\in A(x,y))
$$
Otherwise $P(Y=y|X=x)=0$. Define the set $T$ and $T_y$ as follows:
$$
T=\left\{(x,y):Ax=y\text{ for some } A\in\mathcal A\right\}\\
T_y=\left\{x: Ax=y\text{ for some } A\in\mathcal A\right\}.
$$
Hence if $(x,y)\in T$, then $y$ is a block-reversed version of $x$.
Since no particular assumption is given here, we examine uniform distribution as an example.
Assume that we have a uniform distribution on $A$ with $P(A=A_0)=\frac 1{\binom{n}{2}}$. Then, this implies that:
$$
P(Y=y|X=x)=\frac {|A(x,y)|}{\binom{n}{2}}\\
P(Y=y)=\sum_{x}P(Y=y|X=x)P(X=x)=\sum_{x\in T_y}\frac{|A(x,y)|}{\binom{n}{2}}P(X=x)=\frac{E(|A(X,y)|)}{\binom{n}{2}}.
$$
Therefore:
\begin{align}
H(Y|X)&=E(\log\frac{1}{P(Y|X)})\\
&=\sum_{(x,y)\in T}P(X=x)\frac {|A(x,y)|}{\binom{n}{2}}\log{\frac {\binom{n}{2}}{|A(x,y)|}}\\
&=\frac{\log{\binom{n}{2}}}{\binom{n}2}\sum_{y}E(|A(X,y)|)+\sum_{(x,y)\in T}P(X=x)\frac {|A(x,y)|}{\binom{n}{2}}\log{\frac {1}{|A(x,y)|}}.
\end{align}
and
\begin{align}
H(Y)&=E(\log\frac{1}{P(Y)})\\
&=\sum_{y}E(|A(X,y)|)\frac{1}{\binom n2}\log\frac{{\binom{n}{2}}}{E(|A(X,y)|)}
\end{align}
So the mutual information is obtained as:
$$
I(X;Y)=\frac{1}{\binom n2}\sum_{(x,y)\in T}P(X=x)|A(x,y)|\log\frac{|A(x,y)|}{E(|A(X,y)|)}.
$$
I am not sure how useful this expression is. From now on, we require to know more about the support of $X$. Even with that, e.g. assume vectors on $\mathbb F_2^n$, I do not see any straightforward way to obtain the optimal distribution. |
Simple Random Walk, Generating Function and Markov Property | Maybe it's easier to see what Will M. said if you rephrase the problem a little.
Let's define
$$T_i^j = \min \{n \ge 0 : X_0=i, X_n=j\}$$
Note that $E(s^{T_i^{i-1}}) = E(s^{T_1^0}) = E(s^T) \text{, for every $i\in \mathbb{Z}$}$, which is the generating function of the first passage time for $1$ step to the left.
Now for the exercise, it's easy to prove that:
$$E(s^T | X_0=1) = s (1-p) + psE(s^T|X_0=2)$$
What you then have to note is that $E(s^T|X_0=1)^2 = E(s^T|X_0=2)$ since the first passage time for $2$ steps to the left means that you first have to step $1$ to the left, then another. More precisely:
$$T_i^{i-2} = T_{i}^{i-1} + T_{i-1}^{i-2}$$
$T_{i}^{i-1}$ and $T_{i-1}^{i-2}$ are independent, so
$$E(s^{T_i^{i-2}}) = E(s^{T_{i}^{i-1} + T_{i-1}^{i-2}}) = E(s^{T_{i}^{i-1}}) E(s^{T_{i-1}^{i-2}}) = {E(s^T)}^2$$
You can substitute it to the above equation, and solve it, and you get:
$$ E(s^T) = \frac{1 - \sqrt{1-4p(1-p)s^2}}{2ps} $$ |
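For the record, the substitution turns the equation into the quadratic $ps\,E(s^T)^2-E(s^T)+s(1-p)=0$, whose roots are $\frac{1\pm\sqrt{1-4p(1-p)s^2}}{2ps}$; the minus sign is the right choice because $E(s^T)\to 0$ as $s\to 0$, while the other root blows up there.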
Prove the convergence of $\int_1^{\infty} x^{-x}\,dx$ | You can completely skip breaking up the integral by using the integral test, which says that
$\int_1^{\infty} x^{-x}\;dx$ converges or diverges with $\sum_{n=1}^{\infty} n^{-n}$
Given that $n!\leq n^n$,
$$S_N = \sum_{n=1}^{N}\frac{1}{n^n} \leq \sum_{n=1}^N\frac{1}{n!} \leq \sum_{n=1}^{\infty}\frac{1}{n!} = e-1$$
$S_N$ is monotonically increasing and bounded above, hence it converges. So $\int_1^{\infty} x^{-x}\;dx$ converges. |
a question about integral | Using $s=tX$ and computing the integral on the RHS, the goal is to prove that
$$
\mathbf 1_{s\gt2}\lt2-2\frac{\sin(s)}s.
$$
Since $\frac{\sin(s)}s\leqslant1$ for every $s$, it remains to show that, for every $s\gt2$,
$$
\frac{\sin(s)}s\leqslant\frac12.
$$
Since $\sin(s)\leqslant1$ and $s\gt2$, the result follows.
Likewise, for every $a\gt0$, $t\gt0$ and $x\geqslant0$,
$$
\mathbf 1_{x\gt(a+1)/t}\leqslant\frac{a+1}{at}\int_0^t(1-\cos(ux))\mathrm du.
$$ |
Use the difference quotient to compute a formula in terms of h | Well, first of all, you shouldn't start with a specific value for $x$. What you need to do is calculate the difference quotient, as given by the formula.
A good place to start would be to calculate $f(x+h),$ which would be $2(x+h)^2+3,$ in your example. You may need to work it out one piece at a time, expanding $(x+h)^2,$ then distributing the $2,$ then adding the $3,$ for example.
Next, you'll want to subtract $f(x)$ from your result. If everything has been done correctly up to that point, everything left after this subtraction will have a factor of $h$ that we can pull out.
Finally, divide the result of the subtraction by $h$, which should be fairly easy, if we've already pulled out the $h$ factor.
At that point, you're done! You now have a formula into which we can substitute specific values for $x$ and $h$.
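Concretely, for the example $f(x)=2x^2+3$ the steps above give
$$\frac{f(x+h)-f(x)}{h}=\frac{2(x+h)^2+3-(2x^2+3)}{h}=\frac{4xh+2h^2}{h}=4x+2h.$$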
For a general approach that applies to a family of fairly familiar functions (though not to all functions), you can adapt the method I describe in this answer, using $x+h$ instead of $x,$ and using $x$ instead of $x_0.$ |
What is the expression of the vector orthogonal to all linearly independent vectors but one? | Let $G$ be the Gram matrix of $v_2,\dots,v_n$, that is $g_{ij}=\langle v_i,v_j\rangle$, and let $\tilde v$ be the vector with entries $\tilde v_k=\langle v_1,v_k\rangle$. Then $G^{-1}\tilde v$ is the coordinate vector, with respect to $v_2,\dots,v_n$, of the orthogonal projection of $v_1$ onto their span. Hence $v_1-\sum_{k=2}^n\bigl(G^{-1}\tilde v\bigr)_k v_k$ is the desired vector. |
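A numerical sketch of this construction (names are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
V = rng.standard_normal((n, n))       # columns are v1, ..., vn (generically independent)
v1, rest = V[:, 0], V[:, 1:]

G = rest.T @ rest                     # Gram matrix g_ij = <v_i, v_j> for i, j >= 2
v_tilde = rest.T @ v1                 # entries <v1, v_k>, k >= 2
coeffs = np.linalg.solve(G, v_tilde)  # coordinates of the projection of v1

w = v1 - rest @ coeffs                # v1 minus its projection onto span(v2, ..., vn)
print(rest.T @ w)                     # ≈ [0 0 0]: orthogonal to v2, ..., vn
print(v1 @ w)                         # = |w|^2 > 0: not orthogonal to v1
```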
Prove that this type of alternating series admits this supremum. | Let $b_i = a_i - a_{i+1}$ for all natural numbers $i$.
Then $b_i >0$, and
$$a_n = \sum\limits_{i=n}^\infty b_i.$$
$$
\begin{split}
|a_{n+1}-a_{n+2}+a_{n+3}-a_{n+4}+\ldots| & = |b_{n+1} + b_{n+3} + \ldots| \\
& = b_{n+1} + b_{n+3} + \ldots \\
& < b_n + b_{n+1} + b_{n+2}+ b_{n+3} + \ldots = a_n
\end{split}
$$ |
Modification on Epsilon-Delta Definition of Continuity - Seeking a Discontinuous Function | Let $f(x)=x$ if $x\leq 0$ and $f(x)=x+1$ if $x>0$. Then $f$ is discontinuous at $0$ but satisfies your $\epsilon$-$\delta$ condition with $\delta=\epsilon$. |
Volume of region in the first octant bounded by coordinate planes and a parabolic cylinder? | The solution appears to be correct. Personally, I used a different construction of the integral, which is
$\int_0^2dy\int_0^{2-y}dz\int_0^{4-y^2}dx = \frac{20}{3}$.
Hope this helps. |
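A quick symbolic check of the same iterated integral (a sketch using SymPy):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# integrate 1 over 0 <= x <= 4 - y^2, 0 <= z <= 2 - y, 0 <= y <= 2 (inner integral first)
vol = sp.integrate(1, (x, 0, 4 - y**2), (z, 0, 2 - y), (y, 0, 2))
print(vol)  # 20/3
```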
Exercise in "Analysis on Manifolds" - can this be done in a more elegant way? | The last part surely implies some calculations, but one can take a systematic approach to make them comprehensible. I will use a short notation ($F_i = D_iF$, $F_{ij}=D_iD_jF$) and omit the proofs of differentiability.
In $B$ we have:
$$
G = 0,
$$
$$
D_1[G(\dots)] = G_1+G_3g_1=0,
$$
$$
D_2[G_1(\dots)+G_3(\dots)g_1] = (G_{21}+G_{31}g_2)+\big(G_{23}+G_{33}g_2\big)g_1+G_3g_{21}=0
$$
In matrix form:
$$
-G_3g_{21} = \begin{pmatrix}1\\g_1\end{pmatrix}^T
\begin{pmatrix}G_{21}&G_{31}\\G_{23}&G_{33}\end{pmatrix}
\begin{pmatrix}1\\g_2\end{pmatrix}
=
\frac{1}{36}\begin{pmatrix}6\\-19\end{pmatrix}^T
\begin{pmatrix}G_{21}&G_{31}\\G_{23}&G_{33}\end{pmatrix}
\begin{pmatrix}6\\-11\end{pmatrix}
$$
Now the trick is how to calculate $G_{ij}$ relatively easily. Let's denote $u^1=x+2y+3z$ and $u^2 = x^3+y^2-z^2$
Since $G_i = \sum_{\alpha=1,2}F_\alpha u^\alpha_i$,
$$
G_{ij} = \sum_{\alpha=1,2}F_\alpha u^\alpha_{ij} + \sum_{\alpha=1,2}\sum_{\beta=1,2}F_{\alpha\beta} u^\alpha_{i}u^\beta_{j}.
$$
Function $u^1$ is linear, so all second derivatives $u^1_{ij}=0$, and the first term is just $F_2u^2_{ij}$, which is a diagonal matrix and so, of the entries needed here, contributes only to $G_{33}$.
For the second term $S_{ij}$ not much can be done, but we can rewrite it as a bilinear form again:
$$
S_{ij} = v_i^T \begin{pmatrix}3&-1\\-1&5\end{pmatrix}v_j,
$$
where $v_i = [u^1_i, u^2_i]$: $v_1 = [1, 12]$, $v_2 = [2, 6]$, $v_3=[3, 2]$.
We have:
$$
\begin{pmatrix}G_{21}&G_{31}\\G_{23}&G_{33}\end{pmatrix} =
\begin{pmatrix}S_{21}&S_{31}\\S_{23}&S_{33}-2F_2\end{pmatrix}=
\begin{pmatrix}336&91\\56&29\end{pmatrix}
$$
Finally, $G_3=3F_1+2F_2=12$:
$$
-12g_{21}=\frac1{36}\times 5767
$$ |
Show that intersection of a polyhedron and affine set is a polyhedron. | Hint: first express your affine space as a system of linear equations $Cx = d$. Then append the rows of $C$ and $-C$ to $A$ to form a taller matrix $A'$, and append the entries of $d$ and $-d$ to $b$ to form a taller column vector $b'$. Then the inequality $A'x \le b'$ describes the intersection.
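Concretely, the enlarged system from the hint is
$$A'=\begin{pmatrix}A\\ C\\ -C\end{pmatrix},\qquad b'=\begin{pmatrix}b\\ d\\ -d\end{pmatrix},$$
since $A'x\le b'$ holds exactly when $Ax\le b$, $Cx\le d$ and $-Cx\le -d$, i.e. when $Ax\le b$ and $Cx=d$.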
Does $\sum_{n=3}^\infty \frac {1}{(\log n)^{\log(\log(n)}}$ converge? | Note that $(\log n)^{-\operatorname{loglog} n}=e^{-(\operatorname{loglog} n)^2}$, since $\log n=e^{\operatorname{loglog} n}$.
For large $n$, we have $(\operatorname{loglog} n)^2\lt \log n$, so for large $n$ the $n$-th term is greater than $\frac{1}{n}$.
The fact that $(\operatorname{loglog} n)^2$ is eventually dominated by $\log n$ is just the familiar fact that $e^x\gt x^2$ for large enough $x$.
Remark: In dealing with convergence of series, it is often better to ask oneself first: How fast are the terms approaching $0$? Looking instead for a test to use tends to distance us from the concrete reality of the series. |
Projection of c onto the plane OAB | Just write $c=\alpha a+\beta b+\gamma n$ and calculate $\alpha$ and $\beta$.
$\alpha a+\beta b$ would be the answer. |
Reference for normalization of propositional logic natural deduction. | In Prawitz's Natural Deduction (1965) you can find a proof of normalization in classical natural deduction for the fragment $\bot, \land, \to, \forall$ of first-order logic. The presence of the universal quantifier $\forall$ does not complicate the proof of normalization, so you can read his proof and just forget about the case of $\forall$, so that you get a proof of normalization for the fragment $\bot, \land, \to$ of propositional logic.
As far as I know, the first proof of strong normalization for the full propositional classical natural deduction (including the disjunction $\lor$) is due to Stålmarck's Normalization theorems for full first order classical natural deduction. Actually, in this paper there is a proof of normalization for full first-order classical natural deduction, including also the existential quantifier $\exists$. The kinds of problems that arise in handling normalization with $\exists$ are analogous to those with $\lor$, so as before, you can read his proof and just forget about the cases with $\forall $ and $\exists$ to have a proof of strong normalization for full propositional classical natural deduction.
Easy and recent proofs of strong normalization for full propositional classical natural deduction are due to David and Nour and to Nakazawa and Tatsuta, both essentially based on some variant of Parigot's $\lambda\mu$-calculus. The introductions and bibliographies of these papers are also a source for other references.
Prove that $\sum_{X=0}^N u(X) {N \choose X} p^X (1-p)^{N-X}=0 \iff u(X)=0, \space \forall X\in\{ 0,1,...,N \}$ | This is an answer that summarizes the question and the comments.
The goal is to show that
$$E[u(X)]=0 \iff u(X)=0$$
and the given equation is equivalent to
$$\sum_{X=0}^N u(X) {N \choose X} p^X (1-p)^{N-X} = \sum_{j=0}^N \sum_{i=0}^j u(i){N \choose j} {j \choose i}(-1)^{j-i}p^j = 0$$
We are assuming that $N$ is fixed and $p \in [0,1]$
The functions $p\mapsto {N \choose X} p^X (1-p)^{N-X}$, $X=0,\dots,N$, are the Bernstein basis polynomials of degree $N$; they are linearly independent, so if the sum vanishes for all $p$ then every coefficient $u(X)$ must be $0$.
This shows that the $\mathrm{Bin}(N,p)$ family is complete.
If $A\vec{x}=\vec{b}$ and $B\vec{x}=\vec{b}$ inconsistent, then $(A+B)\vec{x}=\vec{b}$ inconsistent? | From the comments, it seems that your first problem is supposed to read something like:
Suppose $A$ and $B$ are matrices, and $\vec{b}$ is such that the systems of linear equations $A\vec{x}=\vec{b}$ and $B\vec{x}=\vec{b}$ have infinitely many solutions in common. Is it true that $(A+B)\vec{x}=\vec{b}$ has infinitely many solutions?
If that's the case, then your argument works: for each common solution $\vec{x}$ to the original two systems, $\frac{1}{2}\vec x$ is a solution to $(A+B)\vec{x}=\vec{b}$; since $\frac{1}{2}\vec{x} = \frac{1}{2}\vec{x'}$ if and only if $\vec{x}=\vec{x'}$, it follows that the latter system has infinitely many solutions as well.
But you need the solutions to be common solutions, as Yuval's example shows.
The second question is false in two interpretations:
Suppose $A$ and $B$ are matrices, and $\vec{b}$ is such that the systems of linear equations $A\vec{x}=\vec{b}$ and $B\vec{x}=\vec{b}$ have no solutions in common. Is it true that $(A+B)\vec{x}=\vec{b}$ has no solutions?
and
Suppose that $A$ and $B$ are matrices, and $\vec{b}$ is such that the systems of linear equations $A\vec{x}=\vec{b}$ and $B\vec{x}=\vec{b}$ each has no solutions. Is it true that $(A+B)\vec{x}=\vec{b}$ has no solutions?
This is false; note that any pair of matrices $A$ and $B$ and vector $\vec{b}$ that satisfy the second statement will also satisfy the first, so it suffices to find a counterexample to the second statement. The following works:
$$A = \left(\begin{array}{cc}1&0\\0&0\end{array}\right),\quad B=\left(\begin{array}{cc}0&0\\0&1\end{array}\right),\quad \vec{b}=\left(\begin{array}{c}1\\1\end{array}\right).$$
Suppose now we tweak it a bit, perhaps; how about the following?
Suppose $A$ and $B$ are matrices, and $\vec{b}$ is such that $A\vec{x}=\vec{b}$ and $B\vec{x}=\vec{b}$ each has solutions, but there are no solutions in common to both systems. Is it true that $(A+B)\vec{x}=\vec{b}$ has no solutions?
This is still not true. Take
$$A=\left(\begin{array}{crc}
1&-1&0\\
0&0&1
\end{array}\right),\quad
B=\left(\begin{array}{ccr}
1&0&0\\
0&1&-1
\end{array}\right),\quad
\vec{b}=\left(\begin{array}{c}1\\1\end{array}\right).$$
Then $A\vec{x}=\vec{b}$ has solutions: $\vec{x}=(1,0,1)^T$ is a solution. $B\vec{x}=\vec{b}$ also has solutions: $\vec{x}=(1,1,0)^T$ is a solution.
But $A\vec{x}=\vec{b}$ and $B\vec{x}=\vec{b}$ have no solutions in common: if $\vec{x}=(x,y,z)^T$ were a solution, then you would need $x-y=1$, $z=1$, $x=1$, and $y-z=1$. But from $x=1$ and $x-y=1$ we get $y=0$; and from $z=1$ and $y-z=1$ we get $y=2$.
However,
$$(A+B)\vec{x} = \left(\begin{array}{rrr}2 & -1 & 0\\
0 & 1 & 0
\end{array}\right)\vec{x} = \left(\begin{array}{c}1\\1\end{array}\right)$$
does have solutions: $(1,1,z)^T$ is a solution for all $z$. |
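A quick numerical check of this counterexample (a sketch):

```python
import numpy as np

A = np.array([[1., -1., 0.], [0., 0., 1.]])
B = np.array([[1., 0., 0.], [0., 1., -1.]])

print(A @ np.array([1., 0., 1.]))        # [1. 1.]  -> A x = b is consistent
print(B @ np.array([1., 1., 0.]))        # [1. 1.]  -> B x = b is consistent
print((A + B) @ np.array([1., 1., 7.]))  # [1. 1.]  -> (A+B) x = b is consistent too
```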