Show that $g$ is Riemann integrable on $I$ | Almost, but it is recommended to write out the full details. I will use the Riemann sum criterion; the argument for other criteria is similar. Denote $L=\displaystyle\int_{a}^{b}f(x)dx$; by guessing, the integral of $g$ should be $L/\alpha$. Given $\epsilon>0$, there is some $\delta>0$ such that
\begin{align*}
\left|\sum f(c_{i})\Delta x_{i}-L\right|<\epsilon,
\end{align*}
for all partitions $P=\{a=x_{0}<\cdots<x_{n}=b\}$ with $\|P\|<\delta$ and $c_{i}\in[x_{i-1},x_{i}]$. For any partition $Q=\{\frac{a-\beta}{\alpha}=y_{0}<\cdots<y_{n}=\frac{b-\beta}{\alpha}\}$ of $I$ such that $\|Q\|<\delta/\alpha$, consider the partition $P=\{a=x_{0}<\cdots<x_{n}=b\}$, where $x_{i}=\alpha y_{i}+\beta$; then $\|P\|<\delta$, and we get
\begin{align*}
\left|\sum g(d_{i})\Delta y_{i}-\dfrac{L}{\alpha}\right|&=\left|\sum f\left(\alpha d_{i}+\beta\right)\dfrac{\Delta x_{i}}{\alpha}-\dfrac{L}{\alpha}\right|\\
&=\alpha^{-1}\left|\sum f(\alpha d_{i}+\beta)\Delta x_{i}-L\right|\\
&<\alpha^{-1}\epsilon.
\end{align*}
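As a quick numerical sanity check of this scaling, here is a small sketch with concrete choices of mine: $f(x)=x^2$ on $[0,1]$, $\alpha=2$, $\beta=0$, so $g(y)=f(\alpha y+\beta)$ and $L=1/3$.

import numpy as np

f = lambda x: x**2
a, b, alpha, beta = 0.0, 1.0, 2.0, 0.0
g = lambda y: f(alpha*y + beta)

# midpoint Riemann sum of g over [(a-beta)/alpha, (b-beta)/alpha]
y = np.linspace((a - beta)/alpha, (b - beta)/alpha, 10001)
mid = (y[:-1] + y[1:]) / 2
print(np.sum(g(mid) * np.diff(y)), (1/3) / alpha)  # both ≈ 0.166667 = L/alpha

Both numbers agree to the expected accuracy. |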
For which values of $a$ does this system of equations $\mathbf{not}$ have a unique solution? | Let's rearrange the augmented matrix representing the system of linear equations, by swapping $R_2$ and $R_3$ (to avoid division by $a-2$ during row reduction), to get
$\left[\begin{array}{ccc|c}1&2&2&1\\1&11&a&0\\1&a&3&3\end{array}\right]$ and a subsequent row-reduction will give you
$\left[\begin{array}{ccc|c}1&2&2&1\\0&9&a-2&-1\\0&0&1-\frac{(a-2)^2}{9}&2+\frac{a-2}{9}\end{array}\right]$
Unique solution:
In order for the system to have a unique solution, the coefficient matrix and the augmented matrix must both be of full rank, i.e., both must have rank $n=3$. To make that happen, we must have $1\neq\frac{(a-2)^2}{9} \implies a \notin \{-1,5\}$.
No solution:
For this to happen, the rank of the augmented matrix must exceed the rank of the coefficient matrix. This happens when the coefficient matrix has rank $<3$, i.e., $1=\frac{(a-2)^2}{9} \implies a \in \{-1,5\}$, while the augmented matrix simultaneously has rank $n=3$, which means $2+\frac{a-2}{9} \neq 0 \implies a \neq -16$. Since neither $-1$ nor $5$ equals $-16$, the system has no solution for $a \in \{-1,5\}$.
Infinitely many solutions:
It can happen iff both the coefficient and the augmented matrix are rank-deficient, i.e. both have rank $<3$. For this the last row of the augmented matrix needs to be zero, which requires both $1-\frac{(a-2)^2}{9}=0 \implies a \in \{-1,5\}$ and $2+\frac{a-2}{9}=0 \implies a=-16$ simultaneously, which is clearly impossible.
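These three cases are easy to confirm with a computer algebra system; here is a minimal sketch (the matrix below is the augmented matrix from above):

from sympy import Matrix

for val in (-1, 5, 0):
    M = Matrix([[1, 2, 2, 1], [1, 11, val, 0], [1, val, 3, 3]])
    print(val, M[:, :3].rank(), M.rank())
# a = -1, 5: coefficient rank 2 < augmented rank 3 (no solution)
# any other a (e.g. 0): both ranks equal 3 (unique solution)

This matches the case analysis above. |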
Find $\liminf a_n$ and $\limsup a_n $ | $a_{2k+1}=(-1+1)(2k+1)^{2}=0$ for all $k$, so $\liminf_{n}a_{n}\leq 0$.
But $(-1)^{n}+1\geq 0$, so $a_{n}\geq 0$ and hence $\liminf_{n}a_{n}\geq 0$; therefore $\liminf_{n}a_{n}=0$. |
Prove $\sum_1^{\infty} a_i^2$ convergent | You just need to apply to $b_j=\frac{a_j}{\sqrt{j}}$ the result that if $b_j$ is positive and decreasing and if $\sum b_j$ converges, then $\lim\limits_{j \to \infty} jb_j = 0$. (This has been proved many times on this site. See for example here.)
Since $\frac{a_j}{\sqrt{j}}$ is decreasing (because $a_j$ is positive and decreasing) and $\sum \frac{a_j}{\sqrt{j}}$ converges we must have $\sqrt{j} a_j = j \frac{a_j}{\sqrt{j}} \to 0$ as $j \to \infty.$
Hence, for sufficiently large $j$, we have $\sqrt{j} a_j < 1$ and
$$a_j^2 = \sqrt{j}a_j \frac{a_j}{\sqrt{j}} \leqslant \frac{a_j}{\sqrt{j}}. $$
Thus, $\sum a_j^2$ must converge by the comparison test. |
Derivative of Elliptic function is elliptic function of order m+1 $\leq n \leq$ 2m | The order of an elliptic function is the sum of the orders of the poles
in a fundamental region.
Differentiating adds one to the order of each pole. So the order
of the elliptic function increases by the number of distinct poles
in a fundamental region. An elliptic function of order $m$ has between
$1$ and $m$ distinct poles, so the derivative has order between $m+1$ and $2m$. |
Does this integral converge? $\int_2^\infty \frac{x \sin^2 x}{x^3-1}\ \mathsf dx$ | Edit: The integral used to be from $1$ to $\infty$, now it is from $2$ to $\infty$. So the badness of our function has become irrelevant.
If $x\ge 2$, then the top is positive and $\le x$, and the bottom is greater than $x^3/2$. So our function is less than $\frac{2}{x^2}$. Since $\int_2^\infty \frac{2}{x^2}\,dx$ converges, so does our integral.
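A numerical check is consistent with this comparison (a sketch using scipy):

import numpy as np
from scipy.integrate import quad

f = lambda x: x * np.sin(x)**2 / (x**3 - 1)
val, _ = quad(f, 2, np.inf, limit=500)
bound, _ = quad(lambda x: 2/x**2, 2, np.inf)
print(val, bound)  # a finite value, comfortably below the bound 1.0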
Old answer: Note that $x^3-1=(x-1)(x^2+x+1)$. If $x$ is reasonably near to $1$ but greater than $1$, our function is greater than
$$\frac{\sin^2(1)}{4(x-1)}.$$
But $\int_1^{1+\delta}\frac{dx}{x-1}$ diverges for any positive $\delta$, so our integral diverges. |
Why did we call a row operation "elementary"? | The sense of "elementary" here is that all the operations that preserve the row-space of a matrix can be produced by combining various elementary row operations.
Thus these are the elementary steps that can be taken to calculate a (reduced) row echelon form of a matrix, a basic tool for solving several kinds of problems involving the row-space of a matrix and the solutions (if any) of a linear system of equations.
With regard to what $R_1 - R_2$ ought to be called, something is missing from the description. It would indeed be an elementary row operation if this "new row" immediately replaces $R_1$. If you wanted it to replace $R_2$, you would have to perform that row operation by combining two "elementary row operation" steps (first replace $R_2$ with $R_2 - R_1$, then multiply the resulting new second row by non-zero scalar $-1$).
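To make the two-step composition concrete, here is a small numpy sketch (the matrix is an arbitrary example of mine):

import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])
B = A.copy()
B[1] = B[1] - B[0]  # step 1: replace R2 with R2 - R1
B[1] = -B[1]        # step 2: multiply the new R2 by the non-zero scalar -1
print(B[1], A[0] - A[1])  # the net effect is R2 -> R1 - R2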
If you wanted to do something completely different with $R_1 - R_2$, then it would possibly either not be a row operation or not a row operation that preserved the row space of the matrix. For example, if you replaced $R_3$ with $R_1 - R_2$ (leaving $R_1,R_2$ as they are), you might well be making the row space of the matrix smaller. |
Trouble with replacing a Cartesian equation with polar equation. | You are totally correct: you have done what one does to find the polar form of any equation, i.e.
assume the polar coordinates of a point on the curve to be $\big(r(\theta)\cos \theta,r(\theta)\sin \theta \big)$, substitute into the Cartesian equation of the curve, and then solve for $r(\theta)$.
For an ellipse :
$$\frac{x^2}{a^2} + \frac{y^2}{b^2}=1$$
The polar coordinates of point $\text{P}$ are given by :
$$\text{P} \equiv\Big(r(\theta)\cos \theta,r(\theta)\sin \theta\Big) ; ~\text{where}~r(\theta)=\frac{ab}{\sqrt{a^2 \sin^2 \theta+b^2 \cos^2 \theta}} $$
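A quick symbolic check that this $r(\theta)$ indeed satisfies the Cartesian equation (a sketch):

from sympy import symbols, sin, cos, sqrt, simplify

a, b, theta = symbols('a b theta', positive=True)
r = a*b / sqrt(a**2*sin(theta)**2 + b**2*cos(theta)**2)
x, y = r*cos(theta), r*sin(theta)
print(simplify(x**2/a**2 + y**2/b**2))  # prints 1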
You can confirm yourself here on Wikipedia. |
Please find the quadratic form solution | I just want to share my attempt.
I guess we can find the minimum of this function by differentiation. Let's find our critical points, that is, ask whether
$$\exists\, x_{0} \in \mathbb{R}^3 \ \text{such that} \ \nabla q(x_{0})=0.$$
Note that $q: \mathbb{R}^3 \rightarrow \mathbb{R}$ and
$$\nabla q(x)=(2x-2z+1,\;2y+2z,\;6z-2x+2y+1)$$
Now we should solve the following system
$2x-2z+1=0$
$2y+2z=0$
$6z-2x+2y+1=0$
it's easy to find $x_{0}=(-3/2, 1, -1)$ as its solution. Thus we just need to check whether the Hessian is positive definite (because that implies this point is a minimum). If you differentiate again, you'll find:
$$Hq(x_{0})=
\begin{pmatrix}
2 & 0 & -2 \\
0 & 2 & 2 \\
-2 & 2 & 6
\end{pmatrix}$$
As far as I remember there is a theorem which relates the Hessian to the quadratic form; check your notes, because it will help. However, in our case $Hq(x_{0})=2Q$, so we just need to check whether $Q$ is positive definite.
The leading principal minors satisfy $A_{1} >0, A_{2}>0, A_{3}>0$, so $Q$ is positive definite and thus $x_{0}$ is the minimum of $q(x)$ $\forall b$.
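For what it's worth, a quick numerical check of the critical point and the leading principal minors (a sketch; the gradient above is $Hx + (1,0,1)^T$ with $H$ as computed):

import numpy as np

H = np.array([[2., 0., -2.],
              [0., 2., 2.],
              [-2., 2., 6.]])
x0 = np.linalg.solve(H, -np.array([1., 0., 1.]))  # solve grad q = H x + (1,0,1) = 0
print(x0)  # [-1.5  1.  -1. ]

Q = H / 2
print([float(np.linalg.det(Q[:k, :k])) for k in (1, 2, 3)])  # [1.0, 1.0, 1.0], all > 0

This agrees with the computation above. |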
Properties of a relation on $\mathbb Z\times\mathbb Z$ and $\mathbb Q \times\mathbb Q$. | Concerning $R\subset\mathbb Z\times\mathbb Z$. I would rewrite the relation by $xRy\iff x=|y|$.
Correct, the relation is not reflexive since $-5\neq|-5|$.
The relation is not symmetric. We have e.g. $5=|-5|$ but not $-5=|5|$.
The relation is transitive: $x=|y|\wedge y=|z|\implies x=||z||$ and we have $||z||=|z|$.
Concerning $S\subset\mathbb Q\times\mathbb Q$.
The relation is reflexive since $x\leq x\leq x$ is true for every $x\in\mathbb Q$. Your reasoning makes no sense because it is not requested that $z\neq3$
Correct, not symmetric.
Correct, transitive. |
Invertible operator such that its characteristic and minimal polynomials coincide | Assume without loss of generality that $\ m_T\ $ and $\ m_{T^{-1}}\ $ are both monic, $\ \deg m_T=n\ $, $\ \deg m_{T^{-1}}=r\ $, and
$$
m_T(x)= \sum_{k=0}^n a_k x^k\ .
$$
Then multiplying the equation
$$
T^n+\sum_{k=0}^{n-1}a_k T^k=0
$$
by $\ T^{-n}\ $ gives
$$
a_n+\sum_{k=0}^{n-1} a_k \left(T^{-1}\right)^{n-k}=0\ ,
$$
so $\ m_{T^{-1}}(x)\ $ must divide $\ x^nm_T\left(\frac{1}{x}\right)\ $. Conversely $\ m_T(x)\ $ must also divide $\ x^rm_{T^{-1}} \left(\frac{1}{x}\right)\ $. It follows that $\ r=n\ $, and $\ m_{T^{-1}}(x)=a_0^{-1} x^n m_T\left(\frac{1}{x}\right)\ $.
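A numerical illustration with numpy (a sketch: a random matrix has coinciding characteristic and minimal polynomials with probability one, so we can test the coefficient reversal on the characteristic polynomial):

import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4))
p = np.poly(T)                          # char. polynomial of T, leading coefficient first
q = np.poly(np.linalg.inv(T))           # char. polynomial of T^{-1}
print(np.allclose(q, p[::-1] / p[-1]))  # True: q(x) = a_0^{-1} x^n p(1/x)

(For the minimal polynomial itself the same reversal holds by the argument above.) |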
In what sense is the kernel of a group homomorphism a universal arrow in a comma category? | The arrow $i$ is universal among all arrows $j$ into $G$ which equalize $f$ and $0 : G \to H$. So it is universal in the subcategory of the slice over $G$ on arrows that equalize $f$ and $0$. The answer seems to be that $i$ is universal in a full subcategory of the slice over $G$ (which is a comma category of course), but not in a straight comma category, unless there's some trick to present a full subcategory of a slice as a comma. |
question about wave equation | As an example, suppose we want to solve the wave equation $u_{t t}=u_{x x}$ subject to $u(x,0)=0$ and $u_t(x,0)=2x \left/\left(1+x^4\right)\right.$. We can interpret this as follows. We impart an initial velocity to a very long string at rest. The precise initial velocity is given by $g(x)=2x\left/\left(1+x^4\right)\right.$. We might do this by striking the string in two locations with equal, but opposite vertical forces. Here is the graph of this function:
We can use d'Alembert's formula to find the solution analytically.
$$u(x,t) = \frac{1}{2}\int_{x-t}^{x+t} \frac{2s}{1+\left(s^2\right)^2} \, ds=\frac{1}{2}\arctan \left(s^2\right)|_{x-t}^{x+t} = \frac{1}{2}\left(\arctan
\left((x+t)^2\right)-\arctan \left((x-t)^2\right)\right)$$
Here's the graph of $\arctan \left(x^2\right)$.
The solution is the superposition of two waves half this size; one is shifted to the left and the other is shifted to the right and reflected. At time $t=0$, the two waves cancel each other out and we get zero - the initial condition. A short time later, a downward hump appears to the left and an upward hump appears to the right. These two waves travel in their respective direction as time progresses. Here's an animation of this process.
I generated the animation with the following Mathematica code.
g[x_] := 2 x/(1 + x^4);
u[x_, t_] = 1/2 Integrate[g[s], {s, x - t, x + t},
Assumptions -> {Element[x, Reals], t > 0}];
Animate[Plot[u[x, t], {x, -5, 5},
PlotRange -> {{-5, 5}, {-1, 1}},
AspectRatio -> 1/3], {t, 0, 4}] |
Advanced differential equations solutions manual | You could always email one of the authors, considering this is a new text - often they are happy to provide a manual so long as you do not distribute it online. But do bear in mind that many texts don't have any solutions whatsoever.
If not, a great text on ordinary and partial differential equations which covers the theory and applications is *Elementary Differential Equations - Boyce & DiPrima. It is at undergraduate level, if that is what you wanted.
Another, more advanced book with a more qualitative outlook on ODEs is *Differential Equations, Dynamical Systems and Linear Algebra - Hirsch & Smale.
*A pdf may or may not exist online, which I would not condone using, of course. |
Show that the sequence $\sqrt{2}, \ \sqrt{2+\sqrt{2}}, \ \sqrt{2+\sqrt{2+\sqrt{2+}}}...$ converges and find its limit. | I'm not sure why solving $f'(x)=0$ should be significant.
You can instead think like this. Let's do a little exploring.
If $a_n$ converges to a limit $a$, then $f(a_n)$ should converge to $f(a)$ (by continuity of $f$).
But then $f(a_n)=a_{n+1}$ should also converge to $a$.
So $f(a)=a$, which you can solve, for a solution that is $> a_1$.
Having the actual solution for $a$ in your hands, you can then attempt to prove by induction that $a_n \le a$ for all $n$.
From that, since the sequence is increasing and bounded above, you conclude that $a_n$ converges. And from that, together with continuity of $f$ and uniqueness of the solution $> a_1$, you can conclude that $a_n$ converges to $a$.
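A few iterations of $f(x)=\sqrt{2+x}$ illustrate the convergence (a sketch; the fixed point $>a_1$ is $a=2$):

import math

a_n = math.sqrt(2)
for _ in range(8):
    a_n = math.sqrt(2 + a_n)
    print(a_n)  # increases monotonically toward 2

Consistent with the limit found above. |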
Unbounded subset of $\mathbb{R}$ with positive Lebesgue outer measure | I guess you mean with positive and finite outer measure. An easy example would be something like $[0,1]\cup\mathbb{Q}$. But perhaps you also want to have nonzero measure outside of each bounded interval? In that case, consider $[0,1/2]\cup[1,1+1/4]\cup[2,2+1/8]\cup[3,3+1/16]\cup\cdots$. If you want the set to have positive measure in each subinterval of $\mathbb{R}$, you could let $x_1,x_2,x_3,\ldots$ be a dense sequence (like the rationals) and take a union of open intervals $I_n$ such that $I_n$ contains $x_n$ and has length $1/2^n$.
On the other hand, it is often useful to keep in mind that every set of finite measure is "nearly bounded". That is, if $m(E)<\infty$ and $\epsilon>0$, then there is an $M\gt0$ such that $m(E\setminus[-M,M])<\epsilon$. One way to see this is by proving that the monotone sequence $(m(E\cap[-n,n]))_{n=1}^\infty$ converges to $m(E)$. |
Why $\mathcal C^\infty (\Omega )\cap W^{k,p}(\Omega )$ dense in $W^{k,p}(\Omega )$ instead of $\mathcal C^\infty (\Omega )$. | We do not have $C^\infty(\Omega) \subset W^{k,p}(\Omega)$. For example, if $\Omega = (0,1)$, the function $$u(x) = \frac{1}x, \,\,\,\,\,\, x \in \Omega$$ is in $C^\infty(\Omega)$ but is not even in $L^p(\Omega)$ for any $p \ge 1$.
EDIT: As a previous (now deleted) answer pointed out, sets of infinite measure also pose an issue since constant maps are $C^\infty$ but not $W^{k,p}$. I believe that we can only conclude $C^\infty(\Omega) \subset W^{k,p}(\Omega)$ when $\Omega$ is compact. |
Find the subgroups of the groups $\mathbb{Z}/5\mathbb{Z}$ and $\mathbb{Z}/10\mathbb{Z}$. | You are right on both counts, and your answers are correct. Can you see how to generalize them to say when $\mathbb{Z}/k\mathbb{Z}$ is (isomorphic to) a subgroup of $\mathbb{Z}/n\mathbb{Z}$?
Another good problem to try is to show that every subgroup of $\mathbb{Z}/n\mathbb{Z}$ is (isomorphic to) $\mathbb{Z}/k\mathbb{Z}$ for some $k$. This is a bit harder though. |
Calculating the probability to win with martingale in roulette | If you have from $100$ to $126$ dollars you can bet up to $6$ times in your martingale system, while if you have from $127$ to $199$ dollars you can bet up to $7$ times, since $1+2+2^2+\cdots+2^6=2^7-1=127$.
The probability of not losing $n$ times in a row is $1-\left(1-\dfrac{18}{37}\right)^n$. So the probability of progressing from $100$ dollars to $200$ dollars is $$ \left(1-\left(1-\dfrac{18}{37}\right)^6\right)^{27} \left(1-\left(1-\dfrac{18}{37}\right)^7\right)^{73} \approx 0.3041318$$ which is what your Python program predicted.
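For reference, a minimal Monte-Carlo sketch of this strategy (the doubling, the give-up rule, and the $18/37$ win probability are as described above; all names are mine):

import random

def session(bankroll=100, target=200, p=18/37):
    while 0 < bankroll < target:
        bet = 1
        while True:
            if bet > bankroll:
                return bankroll      # cannot afford the next doubled bet: give up
            if random.random() < p:
                bankroll += bet      # each completed doubling run nets +1
                break
            bankroll -= bet
            bet *= 2
    return bankroll

results = [session() for _ in range(20000)]
print(sum(r >= 200 for r in results) / len(results))  # ≈ 0.304
print(sum(results) / len(results))                    # ≈ 89.7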
Note that if your system fails, you will usually still end up with a positive amount (unless you lose seven times in a row starting with $127$ dollars): for example if you start at $100$ dollars and immediately lose six times you will end up with $37$ dollars when your system requires you to bet $64$ so you have to give up. If your system fails then your expected final amount is about $41.468$ dollars. So, after taking account of the possibility of reaching $200$ dollars, your expected overall finishing amount is about $89.683$ dollars. |
Getting wrong answer in a rate of change problem | After 6 minutes, the volume of water is actually:
$$\frac{4}{3} \pi (8)^3 - 6(60) \pi - \frac{4}{3}\pi(4)^3= \frac{712 \pi}{3} \lt \frac{1}{2}V$$
because you have forgotten to subtract the volume of the steel ball.
Then repeating your steps (which are correct), $h_0 = \frac{\sqrt{534}}{3}$, and so $\frac{dh}{dt} = -\frac{1}{8h_0} = -0.0162 \ \text{cm}/\text{s}$. |
Calculating probability of dice rolls with conditional rerolls for specific target numbers | Here is one way to think about it. For each individual die, the trial is to roll it repeatedly until the first time that the result is $<10$. The result of this trial is therefore a number from $1$ to $9$, and each of those possibilities is equally likely. So, this trial is exactly the same as rolling a nine-sided die, and your problem can be simplified: Rule (1) $n$ nine-sided dice are rolled; Rule (2) any rolls $\ge 8$ are a success. Rules (3) and (4) are no longer relevant.
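A quick simulation supports this reduction (a sketch):

import random

def roll_until_below_10():
    while True:
        r = random.randint(1, 10)
        if r < 10:
            return r  # uniform on 1..9, exactly like a nine-sided die

trials = 200000
p1 = sum(roll_until_below_10() >= 8 for _ in range(trials)) / trials
p2 = sum(random.randint(1, 9) >= 8 for _ in range(trials)) / trials
print(p1, p2)

Both estimates agree with the exact value $2/9 \approx 0.222$. |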
Prove that lines passing through the midpoints of sides of a triangle and the midpoints of cevians are also concurrent | Let $P$, $Q$ and $R$ be the midpoints of $AD$, $BE$ and $CF$ respectively. As OP pointed out, $P$ lies on $E'F'$, $Q$ lies on $F'D'$ and $R$ lies on $D'E'$.
By Ceva's Theorem,
$$\frac{BF}{FA}\cdot\frac{AE}{EC}\cdot\frac{CD}{DB}=1$$
By the midpoint theorem, $\displaystyle D'R=\frac{1}{2}BF$, $\displaystyle RE'=\frac{1}{2}FA$, $\displaystyle E'P=\frac{1}{2}CD$, $\displaystyle PF'=\frac{1}{2}BD$, $\displaystyle F'Q=\frac{1}{2}AE$ and $\displaystyle QD'=\frac{1}{2}EC$.
So, we have
$$\frac{D'R}{RE'}\cdot\frac{E'P}{PF'}\cdot\frac{F'Q}{QD'}=1$$
By the converse of Ceva's Theorem, $D'P$, $E'Q$ and $F'R$ are concurrent. |
Find the singular solution of $ y = px + \frac{1}{p}$ using Clairaut's method | Yes, you have done well so far. The general solution comes from $\frac{dp}{dx}=0 \implies p=m ~(\text{constant})$,
and the general solution of your ODE is $$y=mx+\frac{1}{m}.~~~(1)$$
Next, the singular solution (particular, fixed, essential and constant-free) comes from your equation $$p^2=\frac{1}{x} \implies p=\pm\frac{1}{\sqrt{x}}\implies \frac{dy}{dx}=\pm \frac{1}{\sqrt{x}} \implies y=\pm 2\sqrt{x}\implies y^2=4x. ~~~~(2)$$
The end result is a fixed parabola and the general solution (1) is a family of lines
which are tangent to this parabola. |
Proving the binomial coefficients by induction (half-done, but need help) | Hint
\begin{align*}
\binom{n}{k}=\binom{n-1}{k-1}+\binom{n-1}{k} & =\frac{\left(n-1\right)!}{\left(n-k\right)!\left(k-1\right)!}+\frac{\left(n-1\right)!}{\left(n-1-k\right)!k!}\\
& =\frac{\left(n-1\right)!k}{\left(n-k\right)!k!}+\frac{\left(n-1\right)!\left(n-k\right)}{\left(n-k\right)!k!}\\
& =\frac{\left(n-1\right)!}{\left(n-k\right)!k!}\left(k+n-k\right)\\
& =\frac{n!}{\left(n-k\right)!k!}
\end{align*} |
Prove that $\frac{1}{\sqrt{1-z}}=\sum_{n=0}^{\infty}\frac{1}{4^{n}}\binom{2n}{n}z^{n}$ using Cauchy product | Prove the series converges (absolutely) for $|z|<1$, then the Cauchy product formula says
$$\left(\sum_{n=0}^\infty\frac1{4^n}\binom{2n}nz^n\right)^2=\sum_{n=0}^\infty\sum_{k=0}^n\frac1{4^n}\binom{2(n-k)}{n-k}\binom{2k}kz^n$$
Now prove
$$\sum_{k=0}^n\binom{2(n-k)}{n-k}\binom{2k}k=4^n$$
So the square of the sum is
$$\sum_{n=0}^\infty z^n=\frac1{1-z}$$
$\dfrac1{\sqrt{1-z}}$ is analytic on the disk $|z|<1$ and matches the series (which is also analytic) for $z\in[0,1)$ by the previous calculations, as both are clearly the positive square roots of $\dfrac1{1-z}$. Therefore they have to be the same for all $|z|<1$.
Details:
The ratio test tells us the radius of convergence is
$$\lim_{n\to\infty}\frac{4^{n+1}}{4^n}\frac{\binom{2n}n}{\binom{2(n+1)}{n+1}}=4\lim_{n\to\infty}\frac{(2n)!}{(n!)^2}\frac{((n+1)!)^2}{(2(n+1))!}=4\lim_{n\to\infty}\frac{(n+1)^2}{(2n+1)(2n+2)}=1$$
The combinatorial identity is harder than I expected, for a bijective proof see this answer: Identity for convolution of central binomial coefficients: $\sum\limits_{k=0}^n \binom{2k}{k}\binom{2(n-k)}{n-k}=2^{2n}$
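The identity itself is easy to check numerically (a sketch):

from math import comb

for n in range(8):
    s = sum(comb(2*k, k) * comb(2*(n-k), n-k) for k in range(n + 1))
    print(n, s, 4**n)  # the last two columns agree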
I suspect it should also be possible to find a bijection to the subsets of $2n$ as there's $4^n$ of them, but I don't see how. |
error in theta method | By Taylor we know that
$$
y(x+θh)=y(x)+y'(x)θh+\tfrac12y''(x)(θh)^2+O(h^3)
$$
and
$$
y(x+θh)=y(x+h)-y'(x+h)(1-θ)h+\tfrac12y''(x+h)((1-θ)h)^2+O(h^3)
$$
so that in the difference
\begin{align}
y(x+h)-y(x) &= \begin{aligned}[t]
&\bigl[θy'(x)+(1-θ)y'(x+h)\bigr]h \\&- \tfrac12\bigl[(1-θ)^2y''(x+h)-θ^2y''(x)\bigr]h^2\\&+O(h^3)
\end{aligned}
\\
&=\begin{aligned}[t]
&\bigl[θf(x,y(x))+(1-θ)f(x+h,y(x+h))\bigr]h \\&- \tfrac12(1-2θ)y''(x+\tilde θh)h^2\\&+O(h^3)
\end{aligned}
\end{align}
Now compare this to
$$
y_{n+1}-y_n=\bigl[θf(x_n,y_n)+(1-θ)f(x_n+h,y_{n+1})\bigr]h
$$
to get
$$
e_{n+1}-e_n=\begin{aligned}[t]
&[θ\partial_yf(x_n,y(x_n))e_n+(1-θ)\partial_yf(x_{n+1},y(x_{n+1}))e_{n+1}]h\\& - (\tfrac12-θ)y''(x+\tilde θh)h^2\\&+O(h^3,he_n^2,he_{n+1}^2)
\end{aligned}
$$
Now solve this recursion. If $L$ is a bound for $ \partial_yf$ and $M_2$ a bound of $y''$, then
$$
(1-(1-θ)Lh)|e_{n+1}|\le (1+θLh)|e_n|+|\tfrac12-θ|M_2h^2+M_3h^3
\\
|e_{n+1}|\le e^{Lh+(\tfrac12-θ)h^2+ h^3 }|e_n|+e^{(1-θ)Lh+h^2}(|\tfrac12-θ|M_2h^2+M_3h^3)
$$
which indeed has a result like
$$
e_n\le Ch\frac{e^{L(nh)}-1}{L}(|\tfrac12-θ|+h)
$$
where the higher order terms are absorbed into the constant $C$, which is about $C=\max(M_2,M_3)+1$.
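The predicted orders are easy to observe numerically (a sketch: for the linear test problem $y'=\lambda y$ the implicit step solves in closed form; the setup below is my own):

import numpy as np

lam, y0, T = -1.0, 1.0, 1.0

def error(theta, N):
    # y_{n+1} - y_n = [theta f(y_n) + (1-theta) f(y_{n+1})] h,  with f(y) = lam y
    h, y = T/N, y0
    for _ in range(N):
        y *= (1 + theta*lam*h) / (1 - (1 - theta)*lam*h)
    return abs(y - y0*np.exp(lam*T))

for theta in (0.3, 0.5):
    print(theta, np.log2(error(theta, 100) / error(theta, 200)))
# ≈ 1 for theta = 0.3 (first order), ≈ 2 for theta = 0.5 (second order)

This matches the $O(h)$ versus $O(h^2)$ behaviour derived above. |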
Determining whether a given set is a vector space | You know it's a subset of $\Bbb R^4$. So it will be a vector space if and only if it is a subspace. You can use the subspace criterion: closure under addition and scalar multiplication.
This can be done "by hand", or, in this case, you can use the "trick" of noting that $W$ is the kernel of the transformation from $\Bbb R^4$ to $\Bbb R^2$ whose matrix rel the standard basis is $\begin{pmatrix} -3&2&0&1\\1&0&-3&1\end{pmatrix}$.
It is easy to see that the kernel of a linear transformation is always a subspace of the domain (use the subspace criterion again), and hence a vector space in its own right. |
Using the Rank Function | $\newcommand{\rank}{\operatorname{rank}}$You have $\rank(\Bbb R)=\omega+1$. To get $\rank(\mu)$, work up to it.
If $\varnothing\ne A\subseteq\Bbb R$, what is $\rank(A)$? What is $\rank\big(\{A\}\big)$? What about $\rank\big(\{A,x\}\big)$ if $x\in\Bbb R$?
What is $\rank\big(\langle A,x\rangle\big)$ if $\varnothing\ne A\subseteq\Bbb R$ and $x\in\Bbb R$?
What is $\rank(\mu)$? |
Will we miss any cent if 2-decimal currency numbers are added to each other? | This will depend on how you are representing the numbers. If you're performing the calculations on a computer and using floating point to represent each value, there is a possibility of precision errors, as numbers like 0.1 cannot be represented exactly using floating point. Computing 0.1 + 0.2 will give an answer like 0.30000000000000004 instead of exactly 0.3.
Using something like Java's BigDecimal class, you can get exact answers:
BigDecimal result = new BigDecimal("0.1").add(new BigDecimal("0.2"));
System.out.println(result.toString()); // prints "0.3"
...which you can then round off, truncate, etc. as desired. There may also be a "money" datatype (mostly used in SQL) which can deal with the messy details for you. |
Show that if five points are placed in a Equilateral triangle with sides of 2, two points will always be closer than 1 | Try to divide an equilateral triangle into four equal parts, each of which is an equilateral triangle of side $1$.
Now, if there are five points, then two of these must lie within the same equilateral triangle, since we have five points, which is more than four triangles.
Prove that any two points within an equilateral triangle of side $1$ cannot be separated by more than $1$ unit.
Then you would be done. |
Calculating residues of function with branch cut | By using a keyhole contour and defining the argument of $z$ along $CD$ to be equal to $2 \pi$, the argument of the pole at $z=-1$ must be equal to $\pi$. No way can it be $-\pi$.
Let's look at what you have done - which largely looks good. Let $I_1$ be the integral you want; then by considering
$$\oint_C dz \frac{\log{z}}{z^{3/4} (1+z)} $$
over the keyhole contour, then, letting the outer radius $\to \infty$ and the inner radius $\to 0$, we get by the residue theorem,
$$I_1 - \int_0^{\infty} dx \frac{\log{x} + i 2 \pi}{e^{i 3 \pi/2} x^{3/4} (1+x)} = i 2 \pi R_1$$
where
$$R_1 = \frac{\log{e^{i \pi}}}{e^{i 3 \pi/4}} = \pi \, e^{-i \pi/4}$$
is the residue of the integrand of the contour integral at the pole $z=e^{i \pi}$. Rewriting the above, we get
$$(1-i) I_1 + 2 \pi I_0 = i 2 \pi R_1$$
where
$$I_0 = \int_0^{\infty}\frac{dx}{x^{3/4} (1+x)} $$
We may evaluate $I_0$ using the same keyhole contour and by considering the integral
$$\oint_C \frac{dz}{z^{3/4} (1+z)}$$
about that contour. Thus,
$$(1-i) I_0 = i 2 \pi \, R_0 $$
where $R_0 = e^{-i 3 \pi/4}$ is the residue of the integrand of the latter integral at $z=e^{i \pi}$. Note that $1-i = \sqrt{2} e^{-i \pi/4}$, so that if we multiply the equation for $I_1$ by $1-i$ on both sides, we get
$$-i 2 I_1 + 2 \pi (i 2 \pi) e^{-i 3 \pi/4} = i 2 \pi (\pi) e^{-i \pi/4} \sqrt{2} e^{-i \pi/4}$$
Doing out the algebra, I indeed find that $I_1=-\sqrt{2} \pi^2$ as expected.
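A direct numerical evaluation agrees (a sketch; $-\sqrt{2}\,\pi^2 \approx -13.958$):

import numpy as np
from scipy.integrate import quad

f = lambda x: np.log(x) / (x**0.75 * (1 + x))
v1, _ = quad(f, 0, 1, limit=200)
v2, _ = quad(f, 1, np.inf, limit=200)
print(v1 + v2, -np.sqrt(2)*np.pi**2)  # both ≈ -13.958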
Again, if you use $e^{-i \pi}$ for the pole at $z=-1$, or some other odd multiple of $i \pi$, then you will indeed get a different answer - but such answers will be wrong. You committed to a branch of the log when you took the argument of $z$ along $CD$ to be $2 \pi$, that is $\arg{z} \in [0, 2 \pi]$. Thus, it is not possible for $z$ to take on any argument outside this interval by your definition of the keyhole contour, and the correct expression for the pole is $-1=e^{i \pi}$. |
Applications of the Mecke formula | In Percolation Theory, Mecke's equation is needed to derive Russo's formula in the Poisson-Boolean model. If you are not into Percolation Theory, I will try to clear things up with some details.
The idea is to take random balls in $\mathbb{R}^d$ and to investigate whether you can go to infinity without ever leaving those balls or not. More formally: Take a PPP $\eta$ on $\mathbb{R}^d\times\mathbb{R}_+$ with intensity $\lambda\mathrm{d}z\otimes\nu$, where $\mathrm{d}z$ is the Lebesgue measure on $\mathbb{R}^d$ and $\nu$ is some probability measure on $\mathbb{R}_+$. If $(x,r)\in\eta$, we will add a ball with center $x$ and radius $r$. For a fixed radius distribution $\nu$, we will vary $\lambda$ and see what happens. If $\nu$ has $d$ moments, the model becomes interesting: indeed, there will be a threshold $\lambda_c$ below which it is impossible to go to infinity and above which you will almost surely find a way. Percolation Theory wants to study this transition and a very useful tool is the so-called Russo's formula which describes the derivative of the probability of some event. More precisely, take an event $A$ which depends only on balls touching a compact $K$ ($A$ is said to be local) and which is increasing, i.e. it becomes more probable when we add balls. Then Russo's formula tells us that
$$\dfrac{\mathrm{d}\mathbb{P}_\lambda[A]}{\mathrm{d}\lambda} = \mathbb{E}_\lambda\left[1_{\eta\not\in A}\int_{\mathbb{R}^d}\int_0^\infty 1_{\eta\cup\{(z,r)\}\in A}\mathrm{d}\nu(r)\mathrm{d}z\right].$$
It is immediate that there is a link to Mecke's formula, and I have not seen any proof not using it. If you want some more information on Percolation Theory, I recommend the course notes (and videos on YouTube) from Duminil-Copin. If you want to see Russo's formula in action, you can read a very recent paper by him, Raoufi and Tassion, called Subcritical phase of $d$-dimensional Poisson-Boolean percolation and its vacant set, in which they prove new results on $\lambda_c$. |
Real integral using a contour integral | By the ML-lemma
$$\left|\int_{C_1}\frac{ze^{i\pi z}}{z^2+2z+5}dz\right|\le \pi R\frac{Re^{-\pi\,\text{Im}\,z}}{R^2-2R-5}\xrightarrow[R\to\infty]{}0$$
since $\;\text{Im}\,z\to\infty\;$ when $\;R\to\infty\;$
Please observe that
$$\left|e^{i\pi z}\right|=\left|e^{i\pi\left(\text{Re}\,z+i\,\text{Im}\,z\right)}\right|=e^{-\pi\,\text{Im}\,z}\cdot\left|e^{\pi i\,\text{Re}\,z}\right|=e^{-\pi\,\text{Im}\,z}$$
As commented, when dealing with $\;\sin ax,\,\cos ax\;$ in real integrals, you better take the exponential $\;e^{iaz}\;$ . |
surface integral odd surface | You are almost correct.
Check your $\theta$ interval. It should be
$$
-\frac{\pi}{2}\le \theta \le \frac{\pi}{2},
$$
as $x^2+y^2=x$ lies in the half-plane $x\ge 0$. |
Gamma identity confusion | It makes sense, because both sides are undefined for those values, so claiming that they are equal doesn't give any problems, and it's easier to say that the formula holds on $\mathbb C$ than on $\mathbb C\setminus\{-n\mid n\in\mathbb N\}$, but it means the same. |
Quesiton about this epsilon-delta proof | I think the last part:
Here |λf(x)−λf(a)|<ϵ is not equivalent to |λf(x)−λf(a)|<ϵ/λ but rather to |λf(x)−λf(a)|<ϵ/|λ|
should read
Here |λf(x)−λf(a)|<ϵ is not equivalent to |f(x)−f(a)|<ϵ/λ but rather to |f(x)−f(a)|<ϵ/|λ|.
It is just saying that when you divide each side of the inequality through by λ, you can only do so by taking the modulus, |λ|. Otherwise, if λ<0, the inequality would give a modulus (strictly positive) value on the LHS and a negative value on the RHS, which would clearly be wrong. |
Find the orthogonal projection of $x^4$. | No. $\{1,x,x^2\}$ does not span $L^2([0,1])$, because $L^2([0,1])$ is infinite dimensional, so it cannot be spanned by only three vectors.
The main claim of this answer is that $\{1,x,x^2\}$ does not span $L^2([0,1])$, and not the contrary, as some comments may suggest. |
Transformation of spherical coordinates -- where is my the error? | There are two ways to look at transforming a coordinate system. I prefer to pick an arbitrary point and ask what happens to its coordinates if we transform the coordinate system. If we do that with the transformation
$\theta'_1=\theta_1-\phi_1$ and $\theta'_n=\theta_n$ for $n>1,$
an arbitrary point ends up with the same coordinates it had before
except for the "latitude."
If (for example) $\phi_1=\frac\pi3,$
the transformed latitude has the range
$-\frac\pi3\leq\theta_1'\leq\frac{2\pi}3$
instead of $0\leq\theta_1\leq\pi.$
In that case, look what we're doing to the following sets of coordinates:
The coordinates $(\theta_1,\theta_2,\ldots) = \left(\frac\pi6,0,\ldots\right)$ become
$(\theta'_1,\theta'_2,\ldots) = \left(-\frac\pi6,0,\ldots\right).$
Does that make sense?
The coordinates $(\theta_1,\theta_2,\ldots) = \left(\pi,0,\ldots\right)$ become
$(\theta'_1,\theta'_2,\ldots) = \left(\frac{2\pi}3,0,\ldots\right).$
What coordinates $(\theta_1,\theta_2,\ldots)$ must you have in order to get
$(\theta'_1,\theta'_2,\ldots) = \left(\pi,0,\ldots\right)$?
Algebraically, we have $\theta_1 = \theta'_1+\phi_1 = \frac{4\pi}3.$
But what point has coordinates
$(\theta_1,\theta_2,\ldots) = \left(\frac{4\pi}3,0,\ldots\right)$?
Wouldn't such a point usually be described by coordinates
$(\theta_1,\theta_2,\ldots) = \left(\frac{2\pi}3,\pi,\ldots\right),$
that is, coordinates such that $0\leq \theta_1\leq \pi$?
But then the point would get mapped to
$(\theta'_1,\theta'_2,\ldots) = \left(\frac\pi3,\pi,\ldots\right),$
not $\left(\pi,0,\ldots\right).$
Another way to do a coordinate transformation is to transform the coordinates of every point, which moves points around in space, and then alter the coordinate system itself in order to return every point back to where it came from. This works fine for translations in Cartesian coordinates, and also works well for rotations in polar coordinates in $\mathbb R^2$: just subtract $\phi$ from $\theta,$ which sends every point (except the origin) clockwise, and then rotate the coordinate system counterclockwise by $\phi$ to bring everything back.
But consider spherical coordinates in $\mathbb R^3$ as an example; specifically, consider what a transformation in spherical coordinates would do to the surface of the Earth if we add $\frac\pi3$ to the colatitude.
Recalling that in mathematical spherical coordinates, the first angular coordinate is measured from the positive $z$ axis downward, and that we tend to assume the positive $z$ axis goes through the north pole, adding $\frac\pi3$ to this coordinate moves things $30$ degrees (about $3333$ kilometers) to the south.
Now, since Antarctica is all within less than $3333$ km from the south pole, what happens to it?
Does it just disappear, or do all its points go through the pole and start traveling up the other side of the Earth?
Note that if it does that, the continent ends up "inside out" (the points that were originally northernmost go through the pole last and end up closer to the south pole than other points do), and moreover East Antarctica will partially overlap with South America.
Or we could say that everything that goes into the south pole comes immediately back out at the north pole;
this fills in the region within $30$ degrees of the north pole, which otherwise would get nothing (not even ocean), but still has Antarctica inside out and moreover puts it very close to Greenland.
It should be clear that there is no rotation of the globe that will put things back where they came from.
Everything is stretched, squashed, inverted, deleted, and/or overlaid on something else.
What you can do is to add $30$ degrees west longitude to every point on the globe and then rotate it all $30$ degrees east to restore things to where they came from.
More generally, in $\mathbb R^n$ you can subtract an angle $\phi$ from the last angular coordinate, the coordinate that ranges from $0$ to $2\pi,$
and this will represent a rotation of the coordinates.
But you cannot do this to any other coordinate in spherical coordinates
(including the radial coordinate) and expect the result to be a rotation. |
Partitioning $G$ into subsets such that each is a set of edges of a spanning tree of $G$ | This appears to be a generalized Johnson graph:
We have a $n$-element set (in this case $n=7$).
We have $k$-element subsets (in this case $k=4$), which form the vertex set.
We have edges whenever two $k$-element subsets have an intersection of size $r$ (in this case, $r=2$).
I draw the graph below:
We observe:
It has $\binom{7}{4}=35$ vertices, one vertex for each $4$-element subset of $\{1,\ldots,7\}$;
every vertex $\{a,b,c,d\}$ has degree $\binom{4}{2}\binom{3}{2}=18$, which are precisely those with $2$ elements in common (i.e., from $\{a,b,c,d\}$) and $2$ elements not in common (i.e., from $\{1,\ldots,7\} \setminus \{a,b,c,d\}$), and
it has $315$ edges, by the Handshaking Lemma.
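These counts are easy to confirm programmatically (a sketch):

from itertools import combinations

V = list(combinations(range(7), 4))
E = [(u, v) for u, v in combinations(V, 2) if len(set(u) & set(v)) == 2]
print(len(V), len(E))  # 35 vertices, 315 edges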
The question asks if this graph decomposes into spanning trees. At this point, we can mostly throw away the graph. We'll get the same answer to the question:
Q: Does any 35-vertex 315-edge graph decompose into spanning trees? |
Use the definition of compact set to prove the uniform continuity on compact sets. | This is a well known theorem, whose proof you can find in any analysis textbook, so I'll just give a sketch.
Suppose $f : A \to \mathbb{R}$ is a continuous function and that $A \subset \mathbb{R}$ is compact.
To use compactness, we need an open cover. Fix $\epsilon > 0$. For each $x \in A$, choose $\delta(x) > 0$ such that $y \in A$ and $|x-y| < \delta(x)$ implies $|f(x) - f(y)|< \epsilon$.
Define $N(x) = \{y \in A : |x - y| < \delta(x)/2\}$. These $N(x)$ form an open cover of $A$. Use compactness to choose a finite subcover $N(x_1),\ldots,N(x_k)$. Take $\delta = \frac{1}{2} \min_i \delta(x_i)$. Suppose two points $x,y$ are $\delta$-close. Argue that $x$ is $\delta(x_i)/2$-close to some $x_i$, and that $y$ then has to be $\delta(x_i)$-close to that same $x_i$. What can you conclude about the distance between $f(x), f(y)$ and $f(x_i)$? |
Summation of fractional parts of $x/n$ | We show $K_1(a,b) = -\frac{b^2-a^2}{24}-\frac{1}{2}\log(b/a)$. $K_2(a,b)$ can be found using the same methods used below.
The starting point is the identity $$\{\theta\}-\frac{1}{2} = \frac{-1}{\pi}\sum_{n=1}^\infty \frac{\sin(2\pi n\theta)}{n},$$ valid for $\theta \not \in \mathbb{Z}$ [Thanks to metamorphy for pointing out the invalidity for $\theta \in \mathbb{Z}$, and Anatoly for pointing it out again]. We show later that $$\lim_{N \to \infty} \frac{1}{N}\sum_{x=1}^N \sum_{\substack{m = a\sqrt{x} \\ m \mid x}}^{b\sqrt{x}} (\{\frac{x}{m}\}-\frac{1}{2}) = -\frac{1}{2}\log(b/a).$$
Using the Fourier identity above, we obtain $$\frac{1}{N}\sum_{x=1}^N \sum_{\substack{m=a\sqrt{x} \\ m \not \mid x}}^{b\sqrt{x}} \left(\{\frac{x}{m}\}-\frac{1}{2}\right) = \frac{1}{N}\sum_{x=1}^N\sum_{m=a\sqrt{x}}^{b\sqrt{x}} \frac{-1}{\pi}\sum_{n=1}^\infty \frac{\sin(2\pi n\frac{x}{m})}{n}.$$ For ease, we dropped the condition "$m \not \mid x$", which is allowed, since $\sin(2\pi n\frac{x}{m}) = 0$ if $m \mid x$. Since the outer two sums are finite, we may interchange to get $$\frac{-1}{\pi}\sum_{n=1}^\infty \frac{1}{n}\sum_{m=1}^{b\sqrt{N}}\frac{1}{N}\sum_{x=m^2/b^2}^{\min(m^2/a^2,N)} \sin(2\pi \frac{n}{m}x).$$ Suppose that $\frac{1}{a^2}$ and $\frac{1}{b^2}$ are integers, for ease. Using the identity $$\sin(\theta)+\dots+\sin(k\theta) = \frac{\cos(\frac{\theta}{2})-\cos((k+\frac{1}{2})\theta)}{2\sin(\frac{\theta}{2})},$$ we see that $$\sum_{x=m^2/b^2}^{m^2/a^2} \sin(2\pi\frac{n}{m}x) = \frac{1}{2\sin(\pi\frac{n}{m})}\left[\cos\left((\frac{m^2}{b^2}-1+\frac{1}{2})2\pi \frac{n}{m}\right)-\cos\left((\frac{m^2}{a^2}+\frac{1}{2})2\pi\frac{n}{m}\right)\right]$$ and $$\sum_{x=m^2/b^2}^N \sin(2\pi \frac{n}{m}x) = \frac{1}{2\sin(\pi\frac{n}{m})} \left[\cos\left((\frac{m^2}{b^2}-1+\frac{1}{2})2\pi\frac{n}{m}\right)-\cos\left((N+\frac{1}{2})2\pi\frac{n}{m}\right)\right].$$ Using again that $\frac{1}{a^2}$ and $\frac{1}{b^2}$ are integers, $$\cos\left((\frac{m^2}{b^2}-1+\frac{1}{2})2\pi \frac{n}{m}\right)-\cos\left((\frac{m^2}{a^2}+\frac{1}{2})2\pi\frac{n}{m}\right) = \cos\left(\frac{\pi n}{m}\right)-\cos\left(\frac{\pi n}{m}\right) = 0$$ and $$\cos\left((\frac{m^2}{b^2}-1+\frac{1}{2})2\pi\frac{n}{m}\right)-\cos\left((N+\frac{1}{2})2\pi\frac{n}{m}\right) = \cos\left(\frac{\pi n}{m}\right)-\cos\left(\frac{\pi n}{m}+2\pi\frac{N n}{m}\right).$$ Note that $$\cos(\frac{\pi n}{m}+2\pi\frac{N n}{m}) = \cos(\frac{\pi n}{m})\cos(2\pi\frac{N n}{m})-\sin(2\pi \frac{N n}{m})\sin(\frac{\pi n}{m}).$$ Putting everything together, $$\frac{1}{N}\sum_{x=1}^N \sum_{m=a\sqrt{x}}^{b\sqrt{x}} \left(\{\frac{x}{m}\}-\frac{1}{2}\right) = \frac{-1}{\pi}\sum_{n=1}^\infty \frac{1}{n}\frac{1}{N}\sum_{m=a\sqrt{N}}^{b\sqrt{N}} \frac{\cos(\frac{\pi n}{m})[1-\cos(2\pi \frac{N n}{m})]+\sin(\frac{\pi n}{m})\sin(2\pi \frac{N n}{m})}{2\sin(\frac{\pi n}{m})}.$$ We handle the term $$\frac{-1}{\pi}\sum_{n=1}^\infty \frac{1}{n}\frac{1}{2N}\sum_{m=a\sqrt{N}}^{b\sqrt{N}} \sin(2\pi \frac{N n}{m}) = \frac{1}{2N}\sum_{m=a\sqrt{N}}^{b\sqrt{N}} \left(\{\frac{N}{m}\}-\frac{1}{2}\right) \to 0$$ as $N \to \infty$. Now, using $1-\cos(2\theta) = 2\sin^2(\theta)$, we are left with $$\frac{1}{N}\sum_{x=1}^N \sum_{m=a\sqrt{x}}^{b\sqrt{x}} \left(\{\frac{x}{m}\}-\frac{1}{2}\right) = \frac{-1}{\pi}\sum_{n=1}^\infty \frac{1}{n}\frac{1}{N}\sum_{m=a\sqrt{N}}^{b\sqrt{N}} \frac{\sin^2(\pi \frac{N n}{m})\cos(\pi \frac{n}{m})}{\sin(\pi \frac{n}{m})}.$$ Let $$c_{n,N} = \frac{1}{N}\sum_{m=a\sqrt{N}}^{b\sqrt{N}} \frac{\sin^2(\pi\frac{Nn}{m})\cos(\frac{\pi n}{m})}{\sin(\frac{\pi n}{m})}.$$ We prove afterwards that, for any fixed $n$, $$\lim_{N \to \infty} c_{n,N} = \frac{1}{n}\frac{b^2-a^2}{4\pi}.$$ Using $$\lim_{N \to \infty} \frac{-1}{\pi}\sum_{n=1}^\infty c_{n,N} = \frac{-1}{\pi}\sum_{n=1}^\infty \lim_{N \to \infty} c_{n,N},$$ which we justify later, we finally obtain $$\frac{1}{N}\sum_{x=1}^N \sum_{\substack{m=a\sqrt{x} \\ m \not \mid x}}^{b\sqrt{x}} \left(\{\frac{x}{m}\}-\frac{1}{2}\right) = \frac{-1}{\pi}\sum_{n=1}^\infty \frac{1}{n^2}\frac{b^2-a^2}{4\pi} = -\frac{b^2-a^2}{24}.$$
We first show $$\lim_{N \to \infty} \frac{1}{N}\sum_{x=1}^N \sum_{\substack{m = a\sqrt{x} \\ m \mid x}}^{b\sqrt{x}} (\{\frac{x}{m}\}-\frac{1}{2}) = -\frac{1}{2}\log(b/a).$$ Of course, $\{\frac{x}{m}\} = 0$ if $m \mid x$. Interchanging summations, $$\frac{1}{N}\sum_{x=1}^N \sum_{\substack{m=a\sqrt{x} \\ m \mid x}}^{b\sqrt{x}} 1 = \frac{1}{N}\sum_{m=1}^{a\sqrt{N}} \sum_{\substack{m^2/b^2 \le x \le m^2/a^2 \\ m \mid x}} 1 + \frac{1}{N}\sum_{m=a\sqrt{N}}^{b\sqrt{N}} \sum_{\substack{m^2/b^2 \le x \le N \\ m \mid x}} 1.$$ $$ = \frac{1}{N}\sum_{m=1}^{a\sqrt{N}} [\frac{m}{a^2}-\frac{m}{b^2}+O(1)]+\frac{1}{N}\sum_{m=a\sqrt{N}}^{b\sqrt{N}} [\lfloor \frac{N}{m}\rfloor -\frac{m}{b^2}+O(1)] = \log(b/a)+O(\frac{1}{\sqrt{N}}).$$
We now prove that, for any fixed $n \ge 1$, $$\lim_{N \to \infty} \frac{1}{N}\sum_{m=a\sqrt{N}}^{b\sqrt{N}} \frac{\sin^2(\pi\frac{N n}{m})\cos(\frac{\pi n}{m})}{\sin(\frac{\pi n}{m})} = \frac{1}{n}\frac{b^2-a^2}{4\pi}.$$ As $\cos(\frac{\pi n}{m}) = 1+O(\frac{1}{m^2})$, and $\sin^2(\pi \frac{N n}{m}) \le 1$ and $\sin(\frac{\pi n}{m}) \gtrsim \frac{1}{m}$, we may replace $\cos(\frac{\pi n}{m})$ by $1$. Similarly, since $\sin(\frac{\pi n}{m}) \ge \frac{\pi n}{m}-c\frac{\pi^3 n^3}{m^3}$, we may replace $\sin(\frac{\pi n}{m})$ by $\frac{\pi n}{m}$. We are left with $$\frac{1}{\pi n}\frac{1}{N}\sum_{m=a\sqrt{N}}^{b\sqrt{N}} m\sin^2\left(\pi \frac{N n}{m}\right).$$ Using again that $\sin^2(\theta) = \frac{1-\cos(2\theta)}{2}$, it suffices to show $$\frac{1}{N}\sum_{m=a\sqrt{N}}^{b\sqrt{N}} m\cos(2\pi\frac{N n}{m}) \to 0.$$ It clearly then suffices to show $$\frac{1}{N}\sum_{m=1}^{c\sqrt{N}} m\cos(2\pi \frac{N n}{m}) \to 0$$ for any fixed $c \in (0,1)$. By summation by parts, $$\frac{1}{N}\sum_{m=1}^{c\sqrt{N}} m\cos(2\pi \frac{N n}{m}) = \frac{1}{N}(c\sqrt{N})\sum_{m=1}^{c\sqrt{N}} \cos(2\pi \frac{N n}{m}) - \frac{1}{N}\int_1^{c\sqrt{N}} \left[\sum_{m \le t} \cos(2\pi \frac{N n}{m})\right]dt,$$ so it suffices to show some nontrivial amount of equidistribution of $\{\frac{N n}{m}\}$ for $m \le c\sqrt{N}$, which shouldn't be too bad.
Now we handle $K_2(a,b)$. We show that $K_2(a,b) = -\frac{b^2-a^2}{12}-\frac{1}{4}\log(b/a)$ if $\frac{1}{a^2},\frac{1}{b^2}$ are integers of the same parity. This can be extended to the case of $\frac{1}{a^2},\frac{1}{b^2}$ any integers. Take, for example, $a = \frac{1}{2}, b = 1$. For any $k \ge 1$, we have $$K_2(\frac{1}{2},1) = K_2(\frac{1}{3},1)-K_2(\frac{1}{4},\frac{1}{2})+K_2(\frac{1}{5},\frac{1}{3})-K_2(\frac{1}{6},\frac{1}{4})+\dots+K_2(\frac{1}{2k+1},\frac{1}{2k-1})-K_2(\frac{1}{2k+1},\frac{1}{2k}).$$ Since $K_2(\frac{1}{2k+1},\frac{1}{2k}) \le K_2(\frac{1}{2k+1},\frac{1}{2k-1})$ can be made arbitrarily small, we get, after a brief computation, $K_2(\frac{1}{2},1) = -\frac{1-.5^2}{12}-\frac{1}{4}\log(1/.5)$.
Similar to before, $$\frac{1}{N}\sum_{x=1}^N \sum_{\substack{m \ge a\sqrt{x} \\ 2m \mid x}}^{b\sqrt{x}} 1 = \frac{1}{N}\sum_{m=1}^{a\sqrt{N}}\sum_{\substack{m^2/b^2 \le x \le m^2/a^2 \\ 2m \mid x}} 1 + \frac{1}{N}\sum_{m=a\sqrt{N}}^{b\sqrt{N}} \sum_{\substack{m^2/b^2 \le x \le N \\ 2m \mid x}} 1$$ $$= \frac{1}{N}\sum_{m=1}^{a\sqrt{N}} \frac{m}{2}(\frac{1}{a^2}-\frac{1}{b^2})+\frac{1}{N}\sum_{m=a\sqrt{N}}^{b\sqrt{N}} \frac{N-\frac{m^2}{b^2}}{2m} = \frac{1}{2}\log(b/a)+O(\frac{1}{N}).$$ Therefore, $$\frac{1}{N}\sum_{x=1}^N \sum_{\substack{m \ge a\sqrt{x} \\ 2m \mid x}} \left(\{\frac{x}{2m}\}-\frac{1}{2}\right)$$ contributes a $\frac{-1}{4}\log(b/a)$. As before, we do $$\frac{1}{N}\sum_{x=1}^N \sum_{\substack{m \ge a\sqrt{x} \\ 2m \not \mid x}}^{b\sqrt{x}} \frac{-1}{\pi}\sum_{n=1}^\infty \frac{\sin(2\pi n \frac{x}{2m})}{n}.$$ Dropping the condition "$2m \not \mid x$" (which is permissible), interchanging sums, and using the sum-of-sines formula again, and noting that $$\cos\left((\frac{m^2}{b^2}-\frac{1}{2})\pi \frac{n}{m}\right)-\cos\left((\frac{m^2}{a^2}+\frac{1}{2})\pi\frac{n}{m}\right) = 0$$ since $\frac{1}{b^2},\frac{1}{a^2}$ are even, we obtain $$\frac{-1}{\pi}\sum_{n=1}^\infty \frac{1}{n} \frac{1}{N}\sum_{m=a\sqrt{N}}^{b\sqrt{N}} \frac{\cos(\frac{\pi n}{2m})-\cos(N \pi \frac{n}{m})\cos(\frac{\pi n}{2m})+\sin(N \pi \frac{n}{m})\sin(\frac{\pi n}{2m})}{2\sin(\frac{\pi n}{2m})}.$$ As before, $$\frac{1}{2}\frac{1}{N}\sum_{m=a\sqrt{N}}^{b\sqrt{N}} \frac{-1}{\pi}\sum_{n=1}^\infty \frac{\sin(N\pi \frac{n}{m})}{n} = \frac{1}{2}\frac{1}{N}\sum_{m=a\sqrt{N}}^{b\sqrt{N}} \left(\{\frac{N}{2m}\}-\frac{1}{2}\right) \to 0.$$ As before, we should have $$\frac{1}{N}\sum_{m=a\sqrt{N}}^{b\sqrt{N}} \frac{\cos(N\pi \frac{n}{m})\cos(\frac{\pi n}{2m})}{\sin(\frac{\pi n}{2m})} \to 0$$ as $N \to \infty$ for any fixed $n \ge 1$. And, for any fixed $n \ge 1$, as $N \to \infty$, we have $$\frac{1}{N}\sum_{m=a\sqrt{N}}^{b\sqrt{N}} \frac{\cos(\frac{\pi n}{2m})}{2\sin(\frac{\pi n}{2m})} \to \frac{1}{n}\frac{b^2-a^2}{2\pi},$$ so $$\frac{1}{N}\sum_{x=1}^N\sum_{\substack{m = a\sqrt{x} \\ 2m \not \mid x}}^{b\sqrt{x}} \left(\{\frac{x}{2m}\}-\frac{1}{2}\right) = -\frac{b^2-a^2}{12}.$$ |
Closed form solution to $\int_0^{\infty} x\left(1+{\frac{x}{\lambda}}\right)^{-\alpha-1} dx$ | The integral in the denominator looks like the expected value of a Lomax distribution (some constants missing). I read that you need $\alpha >1$; otherwise it is undefined. Looking at the Wikipedia material, it appears the solution is
$$\int_0^{\infty} x\left(1+{\frac{x}{\lambda}}\right)^{(-\alpha-1)}dx = \frac{\lambda^2}{\alpha(\alpha -1)}$$
but verify.
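A quick numerical check with concrete values (a sketch; I take $\lambda=2$, $\alpha=3$, for which $\frac{\lambda^2}{\alpha(\alpha-1)}=\frac{2}{3}$):

import numpy as np
from scipy.integrate import quad

lam, alpha = 2.0, 3.0
val, _ = quad(lambda x: x * (1 + x/lam)**(-alpha - 1), 0, np.inf)
print(val, lam**2 / (alpha*(alpha - 1)))  # both ≈ 0.6667

This agrees with the closed form. |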
Does this proof of the Schröder–Bernstein theorem work? | To find such a set $A_1$, we do not need to find the intersection of all sets $\{ A_0 \mid A-g[B] \subset A_0 \, \, \& \, \, g[f[A_0]] \subset A_0 \}$; in fact, each of such sets can be considered as a good candidate for the set $A_1$. In the discussion below, we construct one of such sets and see that it satisfies the requirement of the proof.
Let us review the idea of the proof of the theorem. Let $A_1$ be a subset of $A$, $f[A_1]=B_1 \subset B$, $B_2=B-B_1$, and $g[B_2]=A_2 \subset A$. If we have some $A_1$ such that $A_1 \cup A_2 =A$, then we have the following bijective mapping between the sets $A$ and $B$:$$h(x)=\begin{cases}f(x) & \text{if }x \in A_1; \\ g^{-1}(x) & \text{if } x \in A_2 \end{cases}.$$Thus, our original problem is reduced to finding the set $A_1$. So, let us find it.
According to the description of the set $A_1$, it must contain the elements of the set $A$ which are not in the range of the function $g$, that is, $A-g[B] \subset A_1$. So one may conclude that the set $C_0=A-g[B]$ is a good candidate for the set $A_1$. But, the set $C_0$ does not satisfy our required condition because$$B_1=f[C_0], \quad B_1 \cap B_2=\varnothing \\ \Rightarrow \quad g[f[C_0]] \cap g[B_2]= \varnothing \quad \Rightarrow \quad C_0 \cup A_2 = A - g[f[C_0]]$$(Please note that in the above argument we have used the fact that $g$ is one-to-one and $g[B_2]=A_2$).
So, we need to add the elements of the set $C_1=g[f[C_0]]$ to the set $C_0$. So one may conclude that the set $C_0 \cup C_1$ is a good candidate for the set $A_1$. But, the set $C_0 \cup C_1$ does not satisfy our required condition because$$B_1=f[C_0 \cup C_1], \quad B_1 \cap B_2=\varnothing , \quad C_1 \cap g[f[C_1]]= \varnothing \\ \Rightarrow \quad g[f[C_0 \cup C_1]] \cap g[B_2]= \varnothing \\ \Rightarrow \quad (C_0 \cup C_1) \cup A_2 = A - g[f[C_1]]$$(Please note that in the above argument we have used the fact that $g$ and $f$ are one-to-one and $g[B_2]=A_2$).
$$\vdots \qquad \vdots \qquad \vdots$$
So, we need to add the elements of the set $C_n=g[f[C_{n-1}]]$ to the set $\bigcup_{i=0}^{n-1}C_i$. So one may conclude that the set $\bigcup_{i=0}^n C_i$ is a good candidate for the set $A_1$. But, the set $\bigcup_{i=0}^n C_i$ does not satisfy our required condition because$$B_1=f \left [\bigcup_{i=0}^n C_i \right ], \quad B_1 \cap B_2=\varnothing , \quad C_n \cap g[f[C_n]]= \varnothing \\ \Rightarrow \quad g \left [f \left [\bigcup_{i=0}^n C_i \right ] \right ] \cap g[B_2]= \varnothing \\ \Rightarrow \quad \left (\bigcup_{i=0}^n C_i \right ) \cup A_2 = A - g[f[C_n]]$$(Please note that in the above argument we have used the fact that $g$ and $f$ are one-to-one and $g[B_2]=A_2$).
$$\vdots \qquad \vdots \qquad \vdots$$
Hence, we can conclude that the set $\bigcup_{n=0}^{\infty } C_n$ satisfies our required condition because$$g \left [f \left [ \bigcup_{n=0}^{\infty } C_n \right ] \right ] \subset \bigcup_{n=0}^{\infty } C_n \\ \Rightarrow \quad \left ( \bigcup_{n=0}^{\infty }C_n \right ) \cup g \left [B- f \left [ \bigcup_{n=0}^{\infty } C_n \right ] \right ]=A.$$Thus,$$A_1=\bigcup_{n=0}^{\infty }C_n, \quad \text{ where } \, C_0=A-g[B], \quad C_n=g[f[C_{n-1}]].$$
Addendum
Please note that the idea of the foregoing proof is nothing but the one we see in König's proof of the Cantor-Schroeder-Bernstein Theorem. To see that, let us first review König's proof.
König's Proof
Let $a \in A$, then $a \in A-g[B]$ or $a \in g[B]$. In the first case $a$ has no inverse image under $g$ while in the second case it has an inverse image $g^{-1}(a)$. In the second case, either $g^{-1}(a) \in B- f[A]$, in which case $g^{-1}(a)$ has no inverse image under $f$, or $g^{-1}(a) \in f[A]$. Again, in the latter case there exists a unique inverse image $f^{-1}(g^{-1}(a))$, ... . We call the elements $g^{-1}(a)$, $f^{-1}(g^{-1}(a))$, ... the ancestors of $a$. By similar analogy, such a description also holds for an arbitrary element $b \in B$.
Now, we can partition $A$ into three subsets:
$A_e$: the set of all elements of $A$ having an even number of ancestors,
$A_o$: the set of all elements of $A$ having an odd number of ancestors,
$A_i$: the set of all elements of $A$ having an infinite number of ancestors.
Similarly, we can partition $B$ into such subsets. It is clear that if $x\in A_e$ then $f(x) \in B_o$, if $x\in A_o$ then $g^{-1}(x) \in B_e$, and if $x\in A_i$ then $f(x) \in B_i$. So, we can easily conclude that the restricted functions $f:A_e \to B_o$, $g^{-1}: A_o \to B_e$, and $f:A_i \to B_i$ are bijective.
Here is the idea of the proof. Since $f$ and $g$ are injective functions into $B$ and $A$, respectively, one may want to map any element of $A$ into $B$ by $f$ and cover the remaining elements of $B$, i.e., $B-f[A]$ by $g^{-1}$ so that the following injective mapping is obtained:$$h(x)=\begin{cases}f(x) & \text{if } x \in A-g[B-f[A]]; \\ g^{-1}(x) & \text{if } x \in g[B-f[A]] \end{cases}$$(Please note that we have to remove the set $g[B-f[A]]$ from the set $A$ in the domain of the first piece so that $h$ becomes a function).
But, the obtained mapping is not onto $B$ because its range lacks the subset $f[g[B-f[A]]]$ (that is, the values of $f$ of the $1$-ancestor elements of $A$). So, one may want to cover such a subset of $B$ by $g^{-1}$ so that the following injective mapping is obtained:$$h(x)= \begin{cases}f(x) & \text{if }x \in A-(g[B-f[A]] \cup g[f[g[B-f[A]]]]); \\ g^{-1}(x) & \text{if }x \in g[B-f[A]]; \\ g^{-1}(x) & \text{if } x \in g[f[g[B-f[A]]]] \end{cases}$$(Please note that we have to remove the set $g[B-f[A]] \cup g[f[g[B-f[A]]]]$ from the set $A$ in the domain of the first piece so that $h$ becomes a function).
But, the obtained mapping is not onto $B$ because its range lacks the subset $f[g[f[g[B-f[A]]]]]$ (that is, the values of $f$ of the $3$-ancestor elements of $A$).
$$\vdots \qquad \vdots \qquad \vdots$$
But, the obtained mapping is not onto $B$ because its range lacks the subset $(f \circ g)^n[B-f[A]]$ (that is, the values of $f$ of the $2n-1$-ancestor elements of $A$). So, one may want to cover such a subset of $B$ by $g^{-1}$ so that the following injective mapping is obtained:$$h(x)= \begin{cases}f(x) & \text{if }x \in A-(g[B-f[A]] \cup g[f[g[B-f[A]]]] \cup \cdots \cup (g \circ f)^n[g[B-f[A]]] ); \\ g^{-1}(x) & \text{if }x \in g[B-f[A]]; \\ g^{-1}(x) & \text{if } x \in g[f[g[B-f[A]]]]; \\ \vdots & \vdots \\ g^{-1}(x) & \text{if } x \in (g \circ f)^n[g[B-f[A]]] \end{cases}$$(Please note that we have to remove the set $g[B-f[A]] \cup g[f[g[B-f[A]]]] \cup \cdots \cup (g \circ f)^n[g[B-f[A]]]$ from the set $A$ in the domain of the first piece so that $h$ becomes a function).
But, the obtained mapping is not onto $B$ because its range lacks the subset $(f \circ g)^{n+1}[B-f[A]]$ (that is, the values of $f$ of the $2n+1$-ancestor elements of $A$).
$$\vdots \qquad \vdots \qquad \vdots$$
This pattern motivates us to define mapping $h$ as follows.$$h(x)=\begin{cases}f(x) & \text{if } x \in A_e \cup A_i; \\ g^{-1}(x) & \text{if } x \in A_o \end{cases}$$(Please note that $\bigcup_{n=0}^{\infty }(g \circ f)^n[g[B-f[A]]]=A_o$ and $A-\bigcup_{n=0}^{\infty }(g \circ f)^n[g[B-f[A]]]=A_e \cup A_i$).
Since $h$ maps the set $A=A_e \cup A_i \cup A_o$ onto the set $B=B_o \cup B_i \cup B_e$ (as already explained) and the sets $A_e \cup A_i$ and $A_o$ are disjoint, we conclude that the mapping $h$ is bijective. $\square$
Now, comparing the mentioned proofs with each other, we can easily see that the $C_n$'s constructed in our proof are nothing but $A_{2n}$'s in the König's proof, that is, the sets of elements of $A$ having $2n$ ancestors. So, it is clear that$$\bigcup_{n=0}^{\infty }C_n=A_e \cup A_i.$$
You can also find other approaches to prove the Cantor-Schroeder-Bernstein theorem in this post. |
Is the adjacency matrix of a given graph (OR any graphs isomorphic to a given graph) a Kronecker product, and if so what are the factors? | Note that in the Kronecker product of graphs (also called the Tensor product), the sequence of valencies (also called degrees) of the vertices result from all of the different products of the valencies of the two graph factors.
Thus, if your graph was factorisable into a 5-vertex and 2-vertex part then it must have an even number of vertices of each valency (because all 2-vertex graphs are regular). Since your graph's valency sequence is (6,4,4,4,4,4,4,2,2,2) it cannot be factored in this way.
This idea should also give you a better idea of how to factor graphs in general, partitioning your graph into equal sized vertex sets of valencies which are in the same ratio. For instance, in this graph the valency sequence is (6,4,4,3,3,2,2,2,2,2,1,1) which can be split into (6,3,3), (4,2,2), (4,2,2) and (2,1,1), which indicates that it could come from the product of two graphs with valency sequences (3,2,2,1) and (2,1,1), which is indeed how I generated it.
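Here is a numpy sketch of that factorisation check (the two factor graphs below are arbitrary examples of mine with valency sequences $(3,2,2,1)$ and $(2,1,1)$):

import numpy as np

A = np.array([[0,1,1,1],   # a 4-vertex graph with valencies (3,2,2,1)
              [1,0,1,0],
              [1,1,0,0],
              [1,0,0,0]])
B = np.array([[0,1,1],     # a 3-vertex graph with valencies (2,1,1)
              [1,0,0],
              [1,0,0]])
K = np.kron(A, B)          # adjacency matrix of the Kronecker (tensor) product
print(sorted(K.sum(axis=1).tolist(), reverse=True))
# [6, 4, 4, 3, 3, 2, 2, 2, 2, 2, 1, 1]: every product of a valency of A with one of B

The degree sequence is exactly the one analysed above. |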
Maclaurin's series for $\log(1+ \tan x)$ | As mentioned in the comments, you did not account for $\tan^6(x)$ and $\tan^7(x)$, which both contribute to the terms you are looking at.
For a much easier approach, observe that the derivative is given by
\begin{align}\frac{\mathrm d}{\mathrm dx}\ln(1+\tan(x))&=\frac{\sec^2(x)}{1+\tan(x)}\\&=\frac{1+\tan^2(x)}{1+\tan(x)}\\&=\frac{1+\left(x+\frac13x^3+\frac2{15}x^5\right)^2}{1+x+\frac13x^3+\frac2{15}x^5}+\mathcal O(x^7)\\&=\frac{1+x^2+\frac23x^4+\frac{17}{45}x^6}{1+x+\frac13x^3+\frac2{15}x^5}+\mathcal O(x^7)\\&=1-x+2x^2-\frac73x^3+\frac{10}3x^4-\frac{62}{15}x^5+\frac{244}{45}x^6+\mathcal O(x^7)\end{align}
which can easily be integrated to give the desired result.
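Alternatively, a CAS confirms the expansion directly (a sketch):

from sympy import symbols, log, tan, series

x = symbols('x')
print(series(log(1 + tan(x)), x, 0, 8))
# x - x**2/2 + 2*x**3/3 - 7*x**4/12 + 2*x**5/3 - 31*x**6/45 + 244*x**7/315 + O(x**8)

This agrees with the result obtained by integrating the series above. |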
Find the volume of a solid bounded by $x^2+ y^2 - 2y=0$, $z = x^2 + y^2$, $z=0$ - correctness of solution | If you consider the transformation
$$
x= r \cos \theta, \quad y = 1+r \sin \theta
$$
the volume can be computed as
$$
\int_0^{2 \pi} \int_0^1 \int_0^{1+r^2+2 r \sin \theta} r \,\,dz dr d \theta= \int_0^{2 \pi} \int_0^1 r (1+r^2+2 r \sin \theta) dr d\theta
$$ |
Solving the differential equation $y' - \frac{1}{x} y = x^2\sqrt{y} $ | HINT
Divide by $\sqrt{y}$.
Think of the chain rule and make a substitution... |
convergence or Divergence of complex Alternating Series | The series diverges because$$\lim_{k\to\infty}\left|\frac{(-1)^k3^k}{k2^k}\right|=\infty\ne0.$$ |
Probability that total weight of coffee in three 10-ounce jars is greater than the weight in one 30-ounce jar. | Let $X_1$, $X_2$ and $X_3$ be weights of the three 10-ounce jars. Assume them to be independent random variables. The question asks to evaluate:
$$
\mathbb{P}\left(X_1+X_2+X_3 > Y\right) = \mathbb{P}\left(X_1+X_2+X_3 -Y > 0\right)
$$
Let $Z = X_1+X_2+X_3 -Y$. Observe, that since $Z$ is a linear combination of normal random variables, $Z$ is also a normal random variable. The normal random variable is determined by its mean and variance:
$$
\mu_Z = \mathbb{E}\left(Z\right) = \mathbb{E}\left(X_1+X_2+X_3 -Y\right) = 3 \mu_X - \mu_Y
$$
$$
\sigma_Z^2 = \operatorname{Var}\left(Z\right) = \operatorname{Var}\left(X_1+X_2+X_3 -Y\right) = 3 \sigma_X^2 + \sigma_Y^2
$$
where the linearity of the expectation was used to find the mean, and the independence of the random variables (so that the variances add) was used to find $\sigma_Z^2$. It now remains to find
$$
\mathbb{P}(Z > 0) = \Phi\left(\frac{\mu_Z}{\sigma_Z}\right)
$$
where $\Phi$ is the cumulative distribution function of the standard normal random variable.
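To illustrate with concrete numbers (the parameter values here are assumptions of mine, not from the problem statement), a quick Monte-Carlo check of the formula:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu_x, s_x, mu_y, s_y = 10.2, 0.1, 30.5, 0.2  # assumed means/SDs for the jars

X = rng.normal(mu_x, s_x, size=(3, 1000000))
Y = rng.normal(mu_y, s_y, size=1000000)
print(np.mean(X.sum(axis=0) - Y > 0))        # empirical probability

mu_z = 3*mu_x - mu_y
s_z = np.sqrt(3*s_x**2 + s_y**2)
print(norm.cdf(mu_z / s_z))                  # the formula above

The empirical frequency and the formula agree to three decimals or so. |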
How to convert the matrix completion problem to the standard SDP form? | As indicated in the comments, this is the standard nuclear-norm optimization problem.
With a modelling layer such as YALMIP or CVX, you would simply do something along the lines of (here YALMIP in MATLAB)
X = sdpvar(m,n,'full');
E = Z - X;
optimize([],E(:)'*E(:) + lambda*norm(X,'nuclear'));
If you manually want to get the quadratic expression into a form supported by standard SDP solvers, you would, e.g., write it using a second-order cone constraint by replacing the quadratic term with a new variable $t$ and add an SOCP model of $||vec(E)||^2\leq t$. With $e = vec(E)$, this is equivalent to $\left|\left|\begin{bmatrix}2e\\1-t\end{bmatrix} \right|\right|\leq 1+t$
A horrible way to model it (from a performance view) would be as the SDP constraint $\begin{bmatrix}t& e^T \\e &I \end{bmatrix} \succeq 0.$
Your second expression is the nuclear norm, which can be written as the minimal value of $\text{tr}(A) + \text{tr}(B)$ where $\begin{bmatrix}A& X\\X^T &B\end{bmatrix} \succeq 0$. Hence, simply introduce the two symmetric matrices $A$ and $B$ with the associated constraint, and use the sum of traces instead of your square-root expression in the objective.
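For comparison, here is a hedged sketch of the same model in CVXPY (Python), where cp.norm(X, "nuc") is the nuclear norm and lam stands in for the regularization weight:

import numpy as np
import cvxpy as cp
m, n, lam = 20, 30, 0.5
Z = np.random.randn(m, n)           # placeholder data
X = cp.Variable((m, n))
prob = cp.Problem(cp.Minimize(cp.sum_squares(Z - X) + lam*cp.norm(X, "nuc")))
prob.solve()

The modelling layer performs reformulations along the lines described above automatically. |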
The spectral norm of real matrices with positive entries is increasing in its entries? | First note that $$||A||_2=\sup_{||x||_2=1}||Ax||_2.$$ If we denote the entries of $A$ by $a_{ij}$ we conclude that $$||A||_2=\sup_{||x||_2=1}\sqrt{\sum_{i=1}^{n}(a_{i1}x_1+a_{i2}x_2+\cdots +a_{in}x_n)^2}.$$ Since all the entries of $A$ are non-negative, the supremum is attained at some $x$ with $x_{i}\ge 0$ for all $i$ (or $x_{i}\le 0$ for all $i$; by symmetry it doesn't matter which). To see this, let $I\subseteq [n]$ be the set of indices $i$ for which $x_i<0$. Then $$a_{i1}x_1+a_{i2}x_2+\cdots +a_{in}x_n\le\sum_{k\in I}a_{ik}|x_k|+\sum_{k\notin I}a_{ik}|x_k|=a_{i1}|x_1|+a_{i2}|x_2|+\cdots +a_{in}|x_n|,$$ so replacing $x$ by $(|x_1|,\dots,|x_n|)$ can only increase $||Ax||_2$. On the other hand, $B=A+X$ where $X$ is a matrix with all entries non-negative. Let the supremum for $A$ be attained at $x^*\ge0$ and that for $B$ at $y^*$, i.e. $$||A||_2=||Ax^*||_2,\qquad ||B||_2=||By^*||_2.$$ Therefore $$||By^*||_2\ge ||Bx^*||_2=||Ax^*+Xx^*||_2\ge||Ax^*||_2,$$ where the last inequality holds because of the following lemma (note that $Ax^*$ and $Xx^*$ have non-negative entries):
For $r_1,r_2\in \left(\Bbb R^{\ge0}\right)^n$ we have $$||r_1+r_2||_2\ge||r_1||_2$$with equality iff $r_2=0$.
proof: use the definition.
Therefore our proof is complete.
P.S. Equality $||A||_2=||B||_2$ can occur even when $A\neq B$: for instance $A=\operatorname{diag}(1,0)$ and $B=\operatorname{diag}(1,1)=A+X$ with $X=\operatorname{diag}(0,1)$ give $||A||_2=||B||_2=1$. Tracing through the chain above, equality forces $Xx^*=0$ for a maximizer $x^*$ of $A$, which is exactly what happens here with $x^*=e_1$.
Conclusion: your theorem is right, $||A||_2\le||B||_2$, but the inequality need not be strict when $A\neq B$.
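A quick numeric illustration of the monotonicity (a NumPy sketch):

import numpy as np
rng = np.random.default_rng(0)
A = rng.random((5, 5))   # entrywise positive
X = rng.random((5, 5))   # entrywise nonnegative increment
B = A + X
print(np.linalg.norm(A, 2), np.linalg.norm(B, 2))   # the first never exceeds the second

Rerunning with other seeds should never show a violation, as the proof guarantees. |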
Prove that if $M$ is a simple $k[x_1,...,x_m]$-module, then the dimension of $M$ over $k$ is finite. | Since $M$ is simple we have $M=R/\mathfrak m$ where $\mathfrak m$ is a maximal ideal of $R$. But $R/\mathfrak m$ is a finitely generated $k$-algebra, and by Zariski's Lemma it is a finite field extension of $k$. |
Exercise on conformal metrics | One way you can do this problem is with the Koszul Formula. Choose an arbitrary vector field $Z$, and use that formula to get
\begin{align}
2 \langle \nabla^2_X Y, Z \rangle_2 &= X \langle Y, Z \rangle_2 + Y \langle Z, X \rangle_2 - Z \langle X, Y \rangle_2\\
&+ \langle [X, Y], Z \rangle_2 - \langle [Y, Z], X \rangle_2 - \langle [X, Z] , Y \rangle_2. \tag{*}\label{*}
\end{align}
Then
\begin{align}
X \langle Y, Z \rangle_2 = X(e^{2\rho}\langle Y, Z \rangle_1 ) &= e^{2\rho} X\langle Y, Z \rangle_1 + \langle Y, Z \rangle_1 X(e^{2\rho} )\\
&= e^{2\rho} X\langle Y, Z \rangle_1 + \langle Y, Z \rangle_1 e^{2\rho} \cdot 2 d\rho(X).
\end{align}
We also have
$$\langle [X, Y], Z \rangle_2 = e^{2\rho} \langle [X, Y], Z \rangle_1.$$
Applying identities of this form to each term of the right-hand side in \eqref{*}, we get
\begin{align}
2 \langle \nabla^2_X Y, Z \rangle_2 = e^{2\rho} \cdot 2 \langle \nabla_X Y, Z \rangle_1 + 2 d \rho(X) e^{2\rho}\langle Y, Z \rangle_1
+ 2 d \rho(Y) e^{2\rho}\langle Z, X \rangle_1 - 2 e^{2\rho} \langle X, Y \rangle_1 \langle \text{grad} \rho, Z \rangle_1
\end{align}
But $\langle \nabla^2_X Y, Z \rangle_2 = e^{2\rho} \langle \nabla^2_X Y, Z \rangle_1$, so we can cancel $2e^{2\rho}$ from both sides and get
\begin{align}
\langle \nabla^2_X Y, Z \rangle_1 &= \langle \nabla_X Y, Z \rangle_1 + d \rho(X) \langle Y, Z \rangle_1
+ d \rho(Y) \langle Z, X \rangle_1 - \langle X, Y \rangle_1 \langle \text{grad} \rho, Z \rangle_1 \\
&= \langle \left(\nabla_X Y + d \rho(X) Y + d\rho(Y)X - \langle X, Y \rangle_1 \text{grad} \rho \right), Z \rangle_1.
\end{align}
Since this last equation holds for all $Z$, the two vector fields are equal. |
integral of $f$ over curves | The reason why you haven't been able to prove this is because you always chose your line segments to be parametrized by $[0,1]$. It is more convenient to parametrize $[a,b]$ by $\phi : [0,1] \to [a,b]$ and let $s \in [0,1]$ be the point corresponding to $c$, i.e $c = \phi (s)$. Then
$$\int \limits _{[a,b]} f \ \Bbb d z = \int \limits _0 ^1 f(\phi(t)) \phi' (t) \ \Bbb d t = \int \limits _0 ^s f(\phi(t)) \phi' (t) \ \Bbb d t + \int \limits _s ^1 f(\phi(t)) \phi' (t) \ \Bbb d t = \int \limits _{[a,c]} f \ \Bbb d z + \int \limits _{[c,b]} f \ \Bbb d z .$$ |
2 problems in real analysis about supremum norm metric | The compactness portion of the problem can be handled if you apply the Arzela-Ascoli theorem correctly. |
Does a function $f$ exist such that $(y+z)/q =x$, $(z+q)/y = f(x)$ where $f$ is independent of $y,z,q$? | If all the variables are scalars, $f$ can be made "independent" of exactly one of $y$, $z$, and $q$.
Solve the first equation for the variable you want $f$ to be independent of, and plug it into the left-hand side of the second equation:
Independent of $y = qx - z$: $f(x; q, z) = (z + q)/(qx - z)$.
Independent of $z = qx - y$: $f(x; q, y) = (qx - y + q) / y = q(1+x)/y - 1$.
Independent of $q = (y+z)/x$: $f(x; y, z) = (z + (y+z)/x)/y = (y + z + xz)/xy$.
Note that these answers are not truly independent; they are just different forms of writing the same function on the constrained surface where the first equation holds. In particular, none of them agree if you're not on that surface, and they have questionable validity outside it. |
generating $\sigma$-field of a set | Let $t \ge 0$, $n \in \mathbb N$, $f_k\colon \mathbb R \to \mathbb R$ measurable and bounded, $t_k \le t$ for $k \le n$. Then, as multiplication is measurable, and the $X_{t_k}$ are $\mathcal F_t$-measurable, $\prod_k f_k \circ X_{t_k}$ is $\mathcal F_t$-measurable. As $\prod_k f_k \circ X_{t_k} \in \mathcal C$ was arbitrary, and $\sigma(\mathcal C)$ is the smallest $\sigma$-algebra which makes these functions measurable, we have $\sigma(\mathcal C)\subseteq \mathcal F_t$. |
$\{s(. , C_n)\}$ is equicontinuous on $X^*$. | Your idea is right!
It is not clear to me that
$$s(x^*-y^*,C)=s(y^*-x^*,C),$$
so one can not assume "without loss of generality that $|s(x^*,C)-s(y^*,C)|\leq s(x^*-y^*,C)$".
Fortunately, this can be adapted: we have
$$s(x^*,C)-s(y^*,C)\leq s(x^*-y^*,C)\leq\|x^*-y^*\| \|C\|$$
and
$$s(y^*,C)-s(x^*,C)\leq s(y^*-x^*,C)\leq\|y^*-x^*\| \|C\|$$
so that
$$|s(x^*,C)-s(y^*,C)|\leq\|x^*-y^*\|\|C\| \, ,$$
and then you have the desired equicontinuity. |
Let $v\in V-0$, then $\varphi _{v}: k[x]\rightarrow V : f \mapsto f.v$ is a surjective $A$-module homomorphism. | This follows from the more general fact that if $V$ is a simple $A$-module (I am used to using the term "simple" for modules and "irreducible" for representations), $W$ is any $A$-module and $\varphi: W\to V$ is a non-zero homomorphism, then $\varphi$ is surjective (note that $A$ can be any ring here).
To see this, we simply recall that the image of a homomorphism is a submodule, and since $\varphi$ is assumed to be non-zero, it cannot be the trivial submodule, so by simplicity of $V$, it must be $V$ itself. |
How to prove that $||M\vec{x}|| \leq c||\vec{x}||$? | Let $c' := \max_{i=1,\dotsc,n} \|M e_i\|$. Then for every $x \in \mathbb{R}^n$
$$
\|M x\|
= \left\| \sum_{i=1}^n x_i M e_i \right\|
\leq \sum_{i=1}^n |x_i| \|M e_i\|
\leq c' \sum_{i=1}^n \underbrace{|x_i|}_{\leq \|x\|}
\leq n c' \|x\|,
$$
so we can take $c := nc' = n\max_{i=1,\dotsc,n} \|M e_i\|$. |
A matrix $A$ represents the linear application $L_A \colon \mathbb{R}^n \to\mathbb{R}^m , L_A (x)=Ax$ with respect to the standard basis | Go by the definition:
Let $\;E=\{e_1,...,e_n\}\;$ be the standard basis of $\;\Bbb R^n\;$ and suppose $\;A=(a_{ij})\;$, then
$$L_Ae_i=Ae_i=\begin{pmatrix}a_{1i}\\a_{2i}\\\vdots\\a_{mi}\end{pmatrix}=a_{1i}\tilde e_1+a_{2i}\tilde e_2+\ldots+a_{mi}\tilde e_m$$
where $\;\tilde e_1,\ldots,\tilde e_m\;$ is the standard basis of $\;\Bbb R^m\;$, and thus the matrix representation of $\;L_A\;$ wrt the standard bases has its $\;i\,$-th column equal to $\;\begin{pmatrix}a_{1i}\\a_{2i}\\\vdots\\a_{mi}\end{pmatrix}\;$ ...and so it is $\;A\;$ itself! |
Do digits have a name based on position in a number, front or rear? | I would explain this using place value. These digits can be labeled as "units," "tens," "hundreds," "thousands," and so on from right to left. These labels are derived from the base-10 expansion of each number, as each place value is a number between 0-9 that designates how many of a certain power of 10 a number contains.
For example, the number $3284$ can be expressed as $3(10^3)+2(10^2)+8(10^1)+4(10^0)$, and this can demonstrate the relationship between each place value in the number $3284$.
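The decomposition is easy to carry out mechanically; here is a small Python sketch that peels off the place values of $3284$ from right to left:

n = 3284
for name in ["units", "tens", "hundreds", "thousands"]:
    n, digit = divmod(n, 10)
    print(name, digit)   # units 4, tens 8, hundreds 2, thousands 3

Each printed digit is the coefficient of the corresponding power of $10$. |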
Linear combination of basis function in logarithm space. Is it possible? | There is no reason for that to be true generally speaking; it highly depends on the finite-dimensional functional space $E$ you're talking about.
It would be true if $E'=\{\log f,\, f\in E\}$ were a linear subspace of $E$. Then you could decompose any element of $E'$ on a basis of $E$. |
Dedekind infinite set definition | It is common in English literature to use one-to-one as synonymous with injective. Anyways, even if it meant that $f:X\to Y$ is bijective, the hypothesis $Y\subsetneq X$ would make it equivalent to $f:X\to X$ being injective and not surjective nevertheless.
Since $f(X)\subsetneq X$ is a hypothesis, bijectivity of $f$ is impossible. |
Max/Min principles of Helmholtz-like equations | Certainly the first one doesn't have a min/max principle. Consider that equation in one dimension with domain $[0, 2\pi]$. With suitable boundary conditions, $u(x) = \sin(x)$ is a solution, but its max and min don't occur on the boundary. Likewise, (3) and (5) won't have min/max principles. As far as the other ones, I'm fairly certain you are correct. |
Representation of groups via isomophism and the Chinese remainder theorem | There are two main canonical ways of writing finite abelian groups. One is as product of primes powers, which is this case would be your original $C_2 \times C_2 \times C_3 \times C_{3^2} \times C_7$. The other way is take the largest cyclic, then write the next largest to the right, and so on. In this case, that gives $C_6 \times C_{126}$.
$C_{42} \times C_{18}$ is a valid representation, but not canonical. See https://en.wikipedia.org/wiki/Finitely_generated_abelian_group#Classification |
Compute norm of linear operator and prove that it cannot be obtained. | To show it can't be attained, suppose we had some $x=\{x(n)\}_{n\ge 0}$ satisfying $\|x‖ = 1$ and attaining $‖ \Phi x‖ = \pi /2$. Suppose without loss that there is a positive component at position $k$, say $x(k)>0$; it is not hard to see that all components of $x$ must have the same sign by a similar argument to the below.
If there was any component $j>k$ with $x(k) > x(j)$, then we can maintain $\|x‖=1$ while switching the values at $k$ and $j$, while giving us a strictly bigger value of $\|\Phi x\|$. Since we assumed $x$ attains the sup, this is impossible and thus $x$ is a non-decreasing sequence of positive numbers. It is absurd that such a sequence is in $\ell^1$, which proves the result. |
Statement about the implicit function theorem which I can't understand | Proofs of the implicit function theorem are available everywhere on the internet, so I won't repeat a proof here, but I'll give you an idea of what it's talking about.
The essence of the statement of the theorem is that any nicely behaved subset is locally the graph of a function. (The idea of 'nicely behaved' is made precise in the theorem's statement).
So for example if you plot the unit circle $S^1 \subseteq \mathbb{R}^2$, at any point on the circle which doesn't lie on the $x$-axis, you can find a neighbourhood on which the circle is the graph of a function. So for example we can view the portion of $S^1$ lying in the upper-half-plane as the graph of $\sqrt{1-x^2}$, and the portion in the lower-half-plane as the graph of $-\sqrt{1-x^2}$. There is a nice illustration of this in the Wikipedia article (not reproduced here): the unit circle, with a point $A$ on the upper arc and a point $B$ where the circle meets the $x$-axis.
In this illustration, the point $A$ has a neighbourhood which projects onto the $x$-axis; and we can recover the neighbourhood of $A$ on the graph by taking the set of points $(x,\sqrt{1-x^2})$ for $x$ in this projection onto the $x$-axis. On the other hand, there is no neighbourhood of the point $B$ which is locally the graph of a function, because it lies on the $x$-axis; this is the significance of the stipulation $F_{x_n}(a) \ne 0$ in your statement of the theorem.
The implicit function theorem takes this idea and places it in the arbitrary finite-dimensional case of functions $\mathbb{R}^n \to \mathbb{R}^m$ and subsets of $\mathbb{R}^{n+m}$, rather than just the case of functions $\mathbb{R} \to \mathbb{R}$ and subsets of $\mathbb{R}^2$. (In fact, your case is slightly less general: it considers functions $\mathbb{R}^{n-1} \to \mathbb{R}$ and subsets of $\mathbb{R}^n$.) |
Operator norm of the matrix polynomial of a self adjoint matrix. | Number (2). The operator norm of a selfadjoint (or normal) matrix $A$ is $\|A\|=\max_j|\lambda_j|$. For any polynomial $p$, the eigenvalues of $p(A)$ are $\{ p(\lambda_j) \}_{j=1}^{n}$. So $\|p(A)\|=\max_{j}|p(\lambda_j)|$, if $A$ is a selfadjoint (or normal) matrix.
To prove this directly for this $A$, let $\{ e_n \}$ be an orthonormal basis of eigenvectors of $A$ with eigenvalues $\lambda_n$ (allow for repeated eigenvalues.) Then
\begin{align}
p(A)x & = p(A)\sum_{j=1}^{n}\langle x,e_j\rangle e_j\\
& = \sum_{j=1}^{n} p(\lambda_j)\langle x,e_j\rangle e_j. \\
\|p(A)x\|^2 & = \sum_{j=1}^{n}|p(\lambda_j)|^2 |\langle x,e_j\rangle|^2 \\
& \le \left(\max_{j}|p(\lambda_j)|\right)^2\sum_{j=1}^{n}|\langle x,e_j\rangle|^2 \\
& = \left(\max_{j}|p(\lambda_j)|\|x\|\right)^2
\end{align}
From that $\|p(A)\| \le \max_j|p(\lambda_j)|$. The reverse inequality is obtained by choosing $x=e_j$ where $j$ is chosen to maximize $|p(\lambda_j)|$. So $\|p(A)\|=\max_j|p(\lambda_j)|$.
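A numeric check of $\|p(A)\|=\max_j|p(\lambda_j)|$ on a random selfadjoint matrix (a NumPy sketch, with an arbitrary polynomial $p(t)=t^3-2t+1$):

import numpy as np
rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
A = (M + M.T)/2                       # selfadjoint
pA = A @ A @ A - 2*A + np.eye(6)      # p(A) for p(t) = t^3 - 2t + 1
lam = np.linalg.eigvalsh(A)
print(np.linalg.norm(pA, 2), np.abs(lam**3 - 2*lam + 1).max())

The two printed numbers should agree up to floating-point error. |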
Show that $C_{0}(X,A)$ is a sub space of $B(H)$ for some Hilbert space $H$ | If $\mu$ is a Borel measure with full support on $X$ and $A\subset B(K)$, you can take $H=L^2(X,\mu;K)$ with the action given by multiplication as usual. Of course, if you want to prove that $C_0(X;A)$ is a $C^\ast$-algebra, it may be easier to just check the abstract axioms.
Here is a more explicit description of the representation on $H$. For $f\in C_0(X;A)$ let $\pi(f)$ be operator acting on $H$ by $\pi(f)g(x)=f(x)g(x)$. Here $g(x)\in K$ and $f(x)\in A\subset B(K)$. It is easy to check that $\pi$ is an injective $\ast$-homomorphism. |
Maximizing the area of a triangle with its vertices on a parabola. | Assuming $A=\left(-\frac{3}{2},\frac{9}{4}\right),B=(3,9),C=(x,x^2)$, the area of $ABC$ is maximized when the distance between $C$ and the line $AB$ is maximized, i.e. when the tangent to the parabola at $C$ has the same slope of the $AB$ line. Since the slope of the $AB$ line is $m=\frac{9-9/4}{3+3/2}=\frac{3}{2}$ and the slope of the tangent through $C$ is just $2x$, the area is maximized by taking:
$$ C=\left(\frac{3}{4},\frac{9}{16}\right) $$
and the area of $ABC$ can be computed through the shoelace formula:
$$ [ABC] = \frac{729}{64}. $$
This area is just three fourths of the area of the parabolic segment cut by $A$ and $B$, as was already known to Archimedes.
Also notice that in a parabola the midpoints of parallel chords always lie on a line that is parallel to the axis of symmetry. That immediately gives that $C$ and the midpoint $M=\left(\frac{3}{4},\frac{45}{8}\right)$ of $AB$ have the same $x$-coordinate. Moreover, it gives that the area of $ABC$ is the length of $CM$ times the difference between the $x$-coordinate of $B$ and the $x$-coordinate of $C$, hence $\frac{729}{64}$ as already stated.
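All of this is easy to confirm symbolically (a SymPy sketch):

import sympy as sp
A = sp.Point(sp.Rational(-3, 2), sp.Rational(9, 4))
B = sp.Point(3, 9)
C = sp.Point(sp.Rational(3, 4), sp.Rational(9, 16))
print(sp.Triangle(A, B, C).area)

This returns $\frac{729}{64}$, up to the sign convention for the orientation of the vertices. |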
Express $w$ and $1/w$ for $w=\frac {\sqrt2+\sqrt3}{\sqrt5-\sqrt3}$ in the simplest form with a rational denominator | If $w = \frac {\sqrt 2 + \sqrt 3}{\sqrt 5-\sqrt 3}$, then $\frac 1w=\frac {\sqrt 5-\sqrt 3}{\sqrt 2 + \sqrt 3}$. If you can do i, you should be able to do ii. The process is the same. |
Improper integral $\int_{-\infty}^\infty \frac{1}{x^n + 1}$ for $n$ integer, especially $n$ odd | For the case of odd $n$,
$$I_n= \int_{-\infty}^\infty \frac{dx}{x^n + 1}=\int_0^\infty \frac{2\,dt}{1-t^{2n}}= \frac2n \int_0^\infty \frac{t^{\frac1n-1}}{1-t^{2}}\,dt
$$
It is known that $PV \int_0^\infty \frac{t^a\,dt}{1-t^2} = \frac\pi2 \cot\frac{\pi(1+a)}2$ (see here, for example). With $a=\frac1n-1$ this gives $\frac\pi2\cot\frac{\pi}{2n}$, and thus the principal value for odd $n$ is
$$I_n = \frac\pi n \cot\frac{\pi}{2n}, \>\>\>\>\> n: \>odd$$
For completeness,
$$I_n = \frac{2\pi}n \csc\frac{\pi}{n}, \>\>\>\>\> n: \>even$$
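Both formulas can be checked numerically. For odd $n$ the pole of $\frac{2}{1-t^{2n}}$ at $t=1$ can be handled with QUADPACK's Cauchy principal-value weight (a SciPy sketch):

import numpy as np
from scipy.integrate import quad

def I_odd(n):
    # 2/(1 - t^(2n)) = f(t)/(t - 1) with f(t) = -2/(1 + t + ... + t^(2n-1))
    f = lambda t: -2.0/sum(t**k for k in range(2*n))
    head, _ = quad(f, 0, 2, weight='cauchy', wvar=1.0)        # principal value on (0, 2)
    tail, _ = quad(lambda t: 2.0/(1 - t**(2*n)), 2, np.inf)   # regular tail
    return head + tail

for n in (3, 5, 7):
    print(I_odd(n), np.pi/n/np.tan(np.pi/(2*n)))

The two columns should agree to quadrature accuracy. |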
Interior of a compact manifold with boundary is compact | HINT: Can an open set be compact? (I am assuming that you do not have in mind a compact manifold minus its boundary, because there are many affirmative examples, in particular closed balls). |
Probability measure domain | The domain of $P$ is definitely $F$, the $\sigma$-field on $\Omega$ (it is in some cases the power set of $\Omega$, e.g. for discrete measures on at most countable $\Omega$).
$P$ is a function from $F$ to $[0,1]$ obeying certain axioms. I don't see a relation to independence. |
QR decomposition and the fundamental subspaces | The four fundamental subspaces are laid out completely by the two factorizations $A^T=[Q_1 Q_2]R_1$ and $A=[Q_3 Q_4]R_2$ (two different triangular factors). I am supposing that you know this yourself since you actually split the $Q = [Q_1 Q_2]$ in the QR factorization.
The $R$ is the column mix of $Q$. $R$ shows exactly what columns from $Q$ form $A$ (and $A^T$ for the other factorization), and which columns don't, ie the null space of $A$. If $R$ is full rank, then $A$ is full rank.
$R$ will be triangular of course, and if $A$ has a null space, $R$ will have rows that are zero. These rows, if non-zero, would select the columns from $Q$ in the formation of $A$; since they are zero, those respective columns from $Q$ span the null space of $A$. The row space comes from looking at $A^T$ in the same way. |
infinitude of primes with primitive pythagorean triple variables | Indeed, using the well-known parametrization of primitive Pythagorean triples
$$
x=2rs,\, y=r^2-s^2,\, z=r^2+s^2
$$
where $r>s>0$ are coprime and of opposite parity, we obtain the simplification
$$
\frac{x^3+y^3+z^3}{(x+z) (y+z)}-\frac{z-y}{2} = (r-s)^2+s^2.
$$
And every prime congruent to $1$ (mod $4$), as well as the prime $2$, can be written in this form by a theorem of Fermat.
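The simplification is easily verified symbolically (a SymPy sketch):

import sympy as sp
r, s = sp.symbols('r s', positive=True)
x, y, z = 2*r*s, r**2 - s**2, r**2 + s**2
lhs = (x**3 + y**3 + z**3)/((x + z)*(y + z)) - (z - y)/2
print(sp.simplify(lhs - ((r - s)**2 + s**2)))

This should print $0$, confirming the identity. |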
stuck trying to find a matrix using lamda? | If you are looking for eigenvalues, we have the matrix:
$$A = \begin{bmatrix}3 & 6\\5 & 4\end{bmatrix}$$
To find the eigenvalues, we setup and solve $|A-\lambda I| = 0$
This gives us:
$$\begin{vmatrix}3-\lambda & 6\\5 & 4-\lambda\end{vmatrix}\ = 0 \rightarrow \lambda^2 - 7 \lambda - 18 = 0 \rightarrow \lambda_{1,2} = -2,9$$
If you are just looking to solve the system, we can use substitution: from the first equation ($3x + 6y = 0$), $x = -2y$, so
$5(-2y) + 4 y = -6y = 0 \rightarrow y = 0 \rightarrow x = 0$
If you are looking for something else, you are going to have to clarify.
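For the eigenvalue interpretation, a one-line numeric check (a NumPy sketch):

import numpy as np
print(np.linalg.eigvals(np.array([[3., 6.], [5., 4.]])))   # -2 and 9, in some order

NumPy confirms the roots of the characteristic polynomial above. |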
Integrating $x^n {(1-x) }^n dx$ | for the second part:
Let's start with this: $$x(1-x)\leq \frac 14\\4x(1-x) \leq 1\\-4x^2+4x-1\leq0\\-(2x-1)^2\leq0 \space \checkmark$$ so
$$\int_0^1 x^n(1-x)^n\,dx \leq \int_0^1 \max_{0\le x\le1}\left\{x^n(1-x)^n\right\}dx = \int_0^1 \left(\frac14\right)^n dx=\left(\frac 14\right)^n(1-0)$$
The Beta function is $B(x,y)=\int_0^1 t^{x-1}(1-t)^{y-1}\,dt=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$; in this case
$$A=\int_0^1 x^n(1-x)^n\,dx =B(n+1,n+1)=\frac{n!\,n!}{(2n+1)!} \quad\text{so}\quad \frac 1A=\frac{(2n+1)!}{n!\,n!} \in \mathbb{N}$$
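A quick symbolic spot-check for, say, $n=4$ (a SymPy sketch):

import sympy as sp
x = sp.symbols('x')
A = sp.integrate(x**4*(1 - x)**4, (x, 0, 1))
print(A, 1/A)

This prints $\frac{1}{630}$ and $630$, matching $\frac{4!\,4!}{9!}=\frac{1}{630}$. |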
fundamental group of suspension of wedge product of Klein bottle and 1-torus | The two spaces $K$ and $T$ are both path connected, and so is $K\vee T$, since any two points $a\in K, b\in T$ can be connected by a composite path $f*g$ from $a$ to $b$ where $f(0)=a,f(1)=g(0)=x_0,g(1)=b$ ($x_0$ is the basepoint and also the common point).
Denote $K\vee T$ by $X$.
The suspension of $X$ can be obtained by a quotient map $q:X\times I\to SX$. The basepoint $x_0$ is sent to $x'_0=q(x_0,\frac{1}{2})$. Now consider two open path-connected subspaces of $I$, namely $(m,1]$ and $[0,n)$ where $m\in(0,1/2),n\in(1/2,1)$; then $x'_0\in q(X\times (m,1])\cap q(X\times [0,n))$.
Now, let $A=q(X\times (m,1])$ and $B=q(X\times [0,n))$; both of them are contractible because we can slide each point along the suspension coordinate to the cone points $q(X\times\{1\})$ and $q(X\times \{0\})$, respectively. Taking $A$ as an example, it can be contracted by
$$
G_A((x,s),t)=(x,(1-s)t+s)
$$
A similar construction works for $B$.
So, $\pi_1(A,x'_0)\approx\pi_1(B,x'_0)=0$, and by the Seifert-Van Kampen theorem (applicable since $A\cap B=q(X\times(m,n))$ is path-connected, $X$ being path-connected), $\pi_1(SX,x'_0)$ is trivial. |
Let $S_n$ be the sum of the first n terms of a geometric series, then what does it mean if n isn't a whole number? | In these cases we look for an explicit form of the expression and then relax the nature of the number in question. Examples:
Factorial
$$ n \in \mathbb{N}, \ n!\triangleq \prod_{k=1}^nk$$
But here the product cannot be transformed into a continuous form directly, so let's write it as follows:
$$n\in\mathbb{N}, \ n! \triangleq \int_0^\infty t^ne^{-t}\,dt$$
In these terms we can define the factorial for real and even for complex numbers: this is the so-called Gamma function, defined as follows:
$$ \forall z \in\mathbb{C},\ \Re(z)>-1,\\ z!\triangleq\Gamma(z+1)\triangleq\int_0^\infty t^{z}e^{-t}\,dt$$
Note that this notation is rather informal.
Geometric sum
We can see this in two ways:
The first one is to define it for $a$ a complex number, keeping in mind that the integral can then be complex-valued:
$$ x \in \mathbb{R}, S_x(a)\triangleq \int_0^x(a)^tdt $$
The second is to use the geometric sum formula and declare it valid for real, or even complex, numbers $a \neq 1$:
-For a sum starting at $0$:
$$ x \in \mathbb{R}, S_x(a)=\dfrac{1-a^{x+1}}{1-a}$$
-For a sum starting at $1$:
$$ x \in \mathbb{R}, S_x(a)=a\dfrac{1-a^{x}}{1-a}$$
Note that you can do this with any object defined for whole numbers that you want to extend, but in those cases you always have to find something convenient (and perhaps restrict the setting in order to generalize).
Also see Fractional derivative. |
proof that the three interior angles of a triangle are congruent to a straight line (without measurements) | Look at the three angles at the vertex $B$: draw the line through $B$ parallel to the opposite side $AC$. The two outer angles at $B$ are equal to the interior angles of the triangle at $A$ and $C$ (alternate angles between the parallel lines), and together with the triangle's own angle at $B$ they make up a straight line.
I lack the English terms, but I hope it's clear enough. |
There are red and blue balls in the urn, only put red ball back. What is the expected value? | I cannot see a simple closed form, but there will be a recurrence.
Suppose the expectation is $f(r,b,n)$. Then $$f(r,b,n)= \frac{r}{r+b}(1+f(r,b,n-1))
+ \frac{b}{r+b}f(r,b-1,n-1)$$ starting with $f(r,b,0)=0$.
For example, if $r=10,b=5,n=3$ then you get $f(10,5,3)=\frac{35216}{17199} \approx 2.0475609$ as your expected number of red balls drawn, slightly more than the expectation of $2$ red balls you would get with the binomial or hypergeometric approach.
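The recurrence is easy to run exactly with rational arithmetic (a Python sketch):

from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def f(r, b, n):
    if n == 0:
        return Fraction(0)
    res = Fraction(r, r + b)*(1 + f(r, b, n - 1))
    if b > 0:
        res += Fraction(b, r + b)*f(r, b - 1, n - 1)
    return res

print(f(10, 5, 3))

This reproduces $\frac{35216}{17199}$ exactly. |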
Distance and speed question on minimum time | Why do you think you're doing something wrong? The man doesn't have time to cross the road before the bus mows him down -- after just one second the bus will be 8 m to the right of his starting position, whereas he can't possibly have reached the other side yet, nor outrun the bus.
He should walk backwards along the sidewalk until the bus has passed, and then cross the street perpendicularly. |
Finding $ \lim\limits_{x \to 0} \frac{f(x) - f(0) - x f^\prime(x)}{x^2} $ | For $x$ near $0$, we have $$\frac{f(x) -f(0) -x f^{\prime}(x)}{x^{2}} = \frac{f(x) -f(0) -x f^{\prime}(0)}{x^{2}} -\frac{f^{\prime}(x) -f^{\prime}(0)}{x} \, \text{.}$$ As $f$ is twice differentiable at $0$, we have $$f(x) = f(0) +x f^{\prime}(0) +\frac{f^{\prime \prime}(0)}{2} x^{2} +o_{0}\left (x^{2} \right)$$ and $$f^{\prime}(x) = f^{\prime}(0) +x f^{\prime \prime}(0) +o_{0}(x) \, \text{.}$$ Therefore, we have $$\frac{f(x) -f(0) -x f^{\prime}(x)}{x^{2}} \underset{x \rightarrow 0}{\longrightarrow} \frac{f^{\prime \prime}(0)}{2} -f^{\prime \prime}(0) = -\frac{f^{\prime \prime}(0)}{2} \, \text{.}$$ |
antiderivative around inverse trigonometric function | That's because
$$
\arcsin(x)+\arccos(x) = \frac{\pi }{2}
$$
and $\pi/2$ is a constant.
When integrating, always remember that there's a constant $C$ in the result. |
Solve this tautology | $1)$ $\lnot q$ as premise
$2)$ $p$ or $\lnot s$ as premise
$3)$ $p \rightarrow$ ($d\land q$) as premise
$4)$ $e \rightarrow s$ as premise
$5)$ $\quad p\quad$ Assumption
$6)$ $\quad d \land q\;$ (3, 5) by modus ponens
$7)$ $\quad q\;$ by simplification (6)
$8)$ $\quad q \land \lnot q\;$ (1, 7) (And-introduction)
$9)$ $\lnot p$, since assumption $p$ leads to a contradiction (5-8)
$10)$ $ \lnot s,$ from 2, 9 Disjunctive syllogism
$11)$ $\lnot e$ from 4, 10 by modus tollens. |
Convex cone of nonnegative functions in L2 has empty interior | As pointed out in comments, you can't just set one value of $g$ to be negative and set $g = f$ everywhere else, because in order to define $L^p$ you make equivalence classes of functions that agree everywhere except a measure-zero set. So then you would essentially have $g = f$ under this paradigm, so this would not show the interior of your set is empty. Try the following: For a given $\epsilon > 0$, take one value of your function $f$ and make it some negative value to start to define $g$, and then make the function $g$ return to the original values of $f$ nearby on both sides in a continuous fashion, say with piecewise linear interpolators. Then as long as you get $g$ to return back to $f$ "fast enough" the $L^2$ norm of $|f - g|$ will be less than $\epsilon$. |
Fourier series with respect to orthonormal sequence | Suppose that $(x_n),n\in\mathbb N$ is an orthonormal sequence in a Hilbert space $V$ over a field $F$. Then for every vector $x \in V$, the number $\langle x,x_n\rangle\in F$ is called the $n$-th Fourier coefficient of $x$ with respect to the orthonormal sequence $(x_n)$, and the series
$$x\sim\sum_{n\in\mathbb N}\langle x,x_n\rangle x_n$$
is called the Fourier series of $x$ with respect to the orthonormal sequence. |
Introduction to cluster algebras | There is a book in progress available by Fomin, Zelevinsky, and Williams. You can find the first 6 chapters here, here, and here. This gives enough material for an introduction, I think.
You can also look at a particular motivating example: total positivity and its tests. A good survey of very slick results can be found here. I strongly recommend trying a few of the examples, and thinking through these proofs. They helped me a lot.
Finally, there are a collection of lecture notes and old courses here. I have not reviewed them myself, so the quality of links and material may vary wildly. |
Geometric distibution with parameter | I don't know how to treat this question because the geometric distribution does not have any parameters besides the parameter $Y$. Could someone please clarify?
No, $Y$, the count for drops until breakage, is the random variable. It is not a parameter.
If $\theta$ is the parameter for this (one-shifted) Geometric Distribution, we would write $Y\sim\mathcal {Geo}_1(\theta)$.
$$\mathsf P(Y=k)~=~ (1-\theta)^{k-1}\theta~\mathbf 1_{k\in\{1,2,\ldots\}}$$
So, what does this parameter represent in this particular situation? |
Geometric Intuition of Group Structure on Elliptic Curve | Geometrically, adding two points on the elliptic curve involves the line through those two points. Algebraically, we want a line such that the curve and line agree on those two points, equivalently the difference between the curve and line have those two points as roots.
Thus adding a point to itself geometrically involves the tangent to the curve at that point, and algebraically involves the line such that the difference between the curve and the line has that point as a double root. |
Two Examples of Cyclic Modules | Question (a) Let $M=\mathbb{R}^3$, and for each $(a,b,c)\in M$, let $x\cdot(a,b,c)=(0,a,b)$. We can extend this definition to get a map $\mathbb{R}[x]\times M\to M$ that will make $M$ into an $\mathbb{R}[x]$ module. The module will be cyclic, since it will be generated by $(1,0,0)$.
Question (b) Let $M=\mathbb{Z}[x]$. $M$ is a cyclic $\mathbb{Z}[x]$ module. Let $I=(2,x)$, i.e. $I$ is the ideal of $\mathbb{Z}[x]$ that is generated by $2$ and $x$. Then $I$ is a submodule of $M$, but it is not cyclic. |
the ring of dual numbers over a field $k$ | Well, one interesting fact about the dual numbers of $\mathbb{R}$: consider a polynomial $f(x) = \sum_{i=0}^n a_ix^i$ with real coefficients $a_i \in \mathbb{R}$, evaluated over $\mathbb{R}[\epsilon]/\epsilon^2$. Evaluating $f(a + b\epsilon)$ for $a,b \in \mathbb{R}$ yields $f(a) + bf'(a)\epsilon$ (hint: binomial theorem), which allows for automatic differentiation and an interesting approach to non-standard analysis.
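Here is a minimal Python sketch of that automatic-differentiation idea (only addition and multiplication are implemented, which is enough for polynomials):

class Dual:                     # represents a + b*eps with eps**2 = 0
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a*o.a, self.a*o.b + self.b*o.a)   # the eps**2 term is dropped
    __rmul__ = __mul__

f = lambda x: 3*x*x + 2*x + 1   # f'(x) = 6x + 2
r = f(Dual(2.0, 1.0))
print(r.a, r.b)                 # 17.0 and f'(2) = 14.0

Evaluating $f$ at $2+\epsilon$ returns $f(2)=17$ together with $f'(2)=14$, exactly as the identity above predicts.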
Working in a more general $k[\epsilon]/\epsilon^2$, since $(a + b\epsilon)(a^{-1} - ba^{-2}\epsilon) = 1,$ we see that for all nonzero $a$, $a + b\epsilon$ is a unit. So our ring of dual numbers over $k$ has a unique maximal ideal $(\epsilon)$ and the ring is local.
On a note more relating to Hartshorne: let $f: X \rightarrow S$ be a morphism of schemes. Using the ring of dual numbers, one can construct the pointed tangent space of $X$ over $S$, but I'm in no means qualified to talk about that. |
Show that $\displaystyle\lim_{x\rightarrow 0}\frac{5^x-4^x}{x}=\log_e\left({\frac{5}{4}}\right)$ | $\displaystyle\lim_{x\rightarrow 0}\frac{5^x-4^x}{x}$
$=\displaystyle\lim_{x\rightarrow 0}\frac{5^x-1-(4^x-1)}{x}$
$=\displaystyle\lim_{x\rightarrow 0}\frac{5^x-1}{x}$ -$\displaystyle\lim_{x\rightarrow 0}\frac{4^x-1}{x}$
$=\log_e5-\log_e4 ~~~~~~$ $[\because\displaystyle\lim_{x\rightarrow 0}\frac{a^x-1}{x}=\log_e a\ \ (a>0)]$
$=\log_e(\frac{5}{4})$ |
Rationals as a direct sum of two proper subgroups | You assert $1 \in B$, but there's no reason to think this is true. |
Determinant of a matrix and its basis | Yes, it's a consequence of Binet's theorem: $\det(AB)=\det(A)\det (B)$. Therefore \begin{align}\det(PAP^{-1})&=\det (P)\det(A)\det(P^{-1})=\det(P)\det(P^{-1})\det(A)=\\&=\det(I_n)\det(A)=\det (A)\end{align} |
Evaluate: $\; \displaystyle \int e^{-3 x}\cos 6 x \, dx$ Integration by parts | Clever complex analysis solves this easily, see if you follow:
$\cos x=\Re(e^{ix})$
We'll use this fact and change the integrand to $e^{3x}e^{ix}$.
We'll get,
$$\int e^{(3+i)x}dx=\frac{e^{(3+i)x}}{3+i}=(\frac{3}{10}-\frac{i}{10})e^{3x}e^{ix}=(\frac{3}{10}-\frac{i}{10})e^{3x}(\cos x+ i \sin x)=\frac{3}{10}e^{3x}(\cos x +i \sin x)-\frac{i}{10}e^{3x}(\cos x + i \sin x)+C$$
Taking real parts yields:
$$\frac{3}{10}e^{3x} \cos x+\frac{1}{10} e^{3x} \sin x +C$$
The same method applied to the integral actually asked about gives $\int e^{-3 x}\cos 6 x \, dx=\Re\int e^{(-3+6i)x}\,dx=\Re\,\frac{e^{(-3+6i)x}}{-3+6i}=\frac{e^{-3x}\left(2\sin 6x-\cos 6x\right)}{15}+C$.
Wolfram Alpha verifies my answer: http://www.wolframalpha.com/input/?i=integrate+e%5E%283x%29+cos%28x%29
Hope this helps. I know it isn't the way you wanted but it is still pretty neat. |
What is the value of xyz? | $$a=c^z=(b^y)^z=b^{yz}=(a^x)^{yz}=a^{xyz}$$
$$xyz=1$$ |
Finding a convex decomposition of a point in a polytope | If we are given a set of $n$ (distinct) vertices $\{\mathrm{v}_i\}_{i=1}^n \subset \mathbb R^d$ of a convex polytope and a point $\mathrm{p} \in \mathbb R^d$, then $\mathrm{p}$ is a linear combination of the vertices if the following linear system has at least one solution
$$\begin{bmatrix} | & | & & |\\ \mathrm{v}_1 & \mathrm{v}_2 & \dots & \mathrm{v}_n\\ | & | & & |\end{bmatrix} \begin{bmatrix} c_1\\ c_2\\ \vdots\\ c_n\end{bmatrix} = \begin{bmatrix} | \\ \mathrm{p}\\ |\end{bmatrix}$$
We write this linear system more succinctly as $\mathrm{V} \mathrm{c} = \mathrm{p}$.
If we impose the equality constraint $1_n^T \mathrm{c} = 1$, we have an affine combination of the vertices. Hence, we have an augmented linear system of $d+1$ equations in $n$ unknowns
$$\begin{bmatrix} \mathrm{V}\\ 1_n^T\end{bmatrix} \mathrm{c} = \begin{bmatrix} \mathrm{p}\\ 1\end{bmatrix}$$
which has a unique solution if $n = d+1$. This system is underdetermined if $n > d+1$.
If we also impose nonnegativity constraints, $\mathrm{c} \geq 0_n$, then we have a convex combination of the vertices. If we find a nonnegative solution $\mathrm{c} \geq 0_n$ to the augmented linear system, then the affine combination is also a convex combination and $\mathrm{p}$ is in the convex hull of the given vertices.
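Computationally, deciding whether such a nonnegative solution exists is a linear-programming feasibility problem; here is a sketch with SciPy (the unit-square vertices are chosen just for illustration):

import numpy as np
from scipy.optimize import linprog

V = np.array([[0., 1., 0., 1.],
              [0., 0., 1., 1.]])           # columns are the vertices; d = 2, n = 4
p = np.array([0.25, 0.5])
A_eq = np.vstack([V, np.ones(V.shape[1])]) # augmented system: V c = p, 1^T c = 1
b_eq = np.append(p, 1.0)
res = linprog(np.zeros(V.shape[1]), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.status == 0, res.x)              # feasible, with convex weights res.x

If the LP is infeasible, $\mathrm{p}$ lies outside the convex hull of the vertices. |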