How do I calculate the derivative of $x|x|$? I know that $$f(x)=x\cdot|x|$$ has no derivative at $$x=0$$ but how do I calculate its derivative at the rest of the points?
When I calculate for $$x>0$$ I get that $$f'(x) = 2x $$
but for $$ x < 0 $$ I can't seem to find a way to solve the limit. As this is homework, please don't state the answer outright.
|
When $x<0$ replace $|x|$ by $-x$ (since that is what it is equal to) in the formula for the function and proceed.
Please note as well that the function $f(x)=x\cdot |x|$ does have a derivative at $0$.
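For completeness, a hedged sketch of the computation the hint points at (spoiler for the $x<0$ part): for $x<0$ one has $f(x)=x\cdot(-x)=-x^2$, hence $f'(x)=-2x$, and at $0$,
$$f'(0)=\lim_{h\to 0}\frac{f(h)-f(0)}{h}=\lim_{h\to 0}\frac{h|h|}{h}=\lim_{h\to 0}|h|=0.$$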
|
Multiples of an irrational number forming a dense subset Say you picked your favorite irrational number $q$ and, looking at $S = \{nq: n\in \mathbb{Z} \}$ in $\mathbb{R}$, you chopped off everything but the fractional part of $nq$, leaving you with a number in $[0,1]$. Is this new set dense in $[0,1]$? If so, why? (Basically looking at the $\mathbb{Z}$-orbit of a fixed irrational number in $\mathbb{R}/\mathbb{Z}$, where we mean the quotient by the group action of $\mathbb{Z}$.)
Thanks!
|
A bit of a late comer to this question, but here's another proof:
Lemma: The set of points $\{x\}$ where $x\in S$, (here $\{\cdot\}$ denotes the fractional part function), has $0$ as a limit point.
Proof: Given $x\in S$, select $n$ so that $\frac{1}{n+1}\lt\{x\}\lt\frac{1}{n}$. We'll show that by selecting an appropriate $m$, we get $\{mx\}\lt\frac{1}{n+1}$, and that concludes the lemma's proof.
Select $k$ so that $\frac{1}{n}-\{x\}\gt\frac{1}{n(n+1)^k}$. Then:
$$
\begin{array}{ccccc}
\frac{1}{n+1} &\lt& \{x\} &\lt& \frac{1}{n} - \frac{1}{n(n+1)^k} \\
1 &\lt& (n+1)\{x\} &\lt& 1+\frac{1}{n} - \frac{1}{n(n+1)^{k-1}} \\
& & \{(n+1)x\} &\lt&\frac{1}{n} - \frac{1}{n(n+1)^{k-1}}
\end{array}
$$
If $\{(n+1)x\}\lt\frac{1}{n+1}$, we are done. Otherwise, we repeat the above procedure, replacing $x$ and $k$ with $(n+1)x$ and $k-1$ respectively. The procedure would be repeated at most $k-1$ times, at which point we'll get:
$$
\{(n+1)^{k-1}x\}\lt\frac{1}{n} - \frac{1}{n(n+1)}=\frac{1}{n+1}.
$$
Proposition: The set described in the lemma is dense in $[0,1]$.
Proof: Let $y\in[0,1]$, and let $\epsilon\gt0$. Then by selecting $x\in S$ such that $\{x\}\lt\epsilon$, and $N$ such that $N\cdot\{x\}\le y\lt (N+1)\cdot\{x\}$, we get: $\left|\,y-\{Nx\}\,\right|\lt\epsilon$.
|
Modules over local ring and completion I'm stuck again at a commutative algebra question. Would love some help with this completion business...
We have a local ring $R$ and $M$ is a $R$-module with unique assassin/associated prime the maximal ideal $m$ of $R$.
i) prove that $M$ is also naturally a module over the $m$-adic completion $\hat{R}$, and $M$ has the same $R$ and $\hat{R}$-submodules.
ii) if $M$ and $N$ are two modules as above, show that $Hom_{R}(M,N) = Hom_{\hat{R}}(M,N)$.
Best wishes and a happy new year!
|
Proposition. Let $R$ be a noetherian ring and $M$ an $R$-module, $M\neq 0$. Then $\operatorname{Ass}(M)=\{\mathfrak m\}$ iff for every $x\in M$ there exists a positive integer $k$ such that $\mathfrak m^kx=0$.
Proof. "$\Rightarrow$" Let $x\in M$, $x\neq 0$. Then $\operatorname{Ann}(x)$ is an ideal of $R$. Let $\mathfrak p$ be a minimal prime ideal over $\operatorname{Ann}(x)$. Then $\mathfrak p\in\operatorname{Ass}(M)$, so $\mathfrak p=\mathfrak m$. This shows that $\operatorname{Ann}(x)$ is an $\mathfrak m$-primary ideal, hence there exists a positive integer $k$ such that $\mathfrak m^k\subseteq\operatorname{Ann}(x)$.
"$\Leftarrow$" Let $\mathfrak p\in\operatorname{Ass}(M)$. The there is $x\in M$, $x\neq 0$, such that $\mathfrak p=\operatorname{Ann}(x)$. On the other side, on knows that there exists $k\ge 1$ such that $\mathfrak m^kx=0$. It follows that $\mathfrak m^k\subseteq \mathfrak p$, hence $\mathfrak p=\mathfrak m$.
Now take $x\in M$ and $a\in\hat R$. One knows that $\mathfrak m^kx=0$ for some $k\ge 1$. Since $R/\mathfrak m^k\simeq \hat R/\hat{\mathfrak m^k}$ and $\hat{\mathfrak m^k}=\mathfrak m^k\hat R$, there exists $\alpha\in R$ such that $a-\alpha\in \mathfrak m^k\hat R$. (Maybe Ted's answer is more illuminating at this point.) Now define $ax=\alpha x$. (This is well defined since $\mathfrak m^kx=0$.) Now both properties are clear.
|
Inequality involving closure and intersection Let the closure of a set $A$ be $\bar A$. On page 62 of Introduction to Boolean Algebras, Steven Givant and Paul Halmos (2000), an exercise goes like this:
Show that $P \cap \bar Q \subseteq \overline{(P \cap Q)}$, whenever $P$ is open.
I feel muddled when faced with this sort of exercise. Is there some way to deal with these problems and be assured about the result?
|
Let $x$ be in $P\cap\bar Q$. Since $x$ is in $\bar Q$, there exists a sequence $(x_n)_n$ in $Q$ such that $x_n\to x$. Since $x$ is in $P$ and $P$ is open, $x_n$ is in $P$ for every $n$ large enough, say, $n\geqslant n_0$. Hence, for every $n\geqslant n_0$, $x_n$ is in $P\cap Q$. Thus, $x$ is in $\overline{P\cap Q}$ as the limit of $(x_n)_{n\geqslant n_0}$. (In a general topological space, replace the sequence argument with neighborhoods: every open set $V$ containing $x$ meets $Q$ inside the open set $V\cap P$, hence meets $P\cap Q$.)
|
Minimizing a multivariable function given restraint I want to minimize the following function:
$$J(x, y, z) = x^a + y^b + z^c$$
I know I can easily determine the minimum value of $J$ using partial derivatives. But I also have the following condition:
$$ x + y + z = D$$
How can I approach it now?
|
This is an easy example of using Lagrange multipliers.
If you reformulate your constraint as $C(x,y,z) = x+y+z-D=0$, you can define
$L(x,y,z,\lambda) := J(x,y,z)-\lambda \cdot C(x,y,z)$
If you now take the condition $\nabla L=0$ as necessary for your minimum you will fulfill
$$\frac{\partial L}{\partial x}=0 \\
\frac{\partial L}{\partial y}=0 \\
\frac{\partial L}{\partial z}=0 \\
$$
Which are required for your minimum and
$$
\frac{\partial L}{\partial \lambda}=C(x,y,z)=0 \\
$$
as your constraint.
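As a concrete illustration, here is a minimal SymPy sketch of this recipe for the hypothetical special case $a=b=c=2$ (the exponent values are assumptions chosen only to keep the algebra tidy):

import sympy as sp

x, y, z, lam, D = sp.symbols('x y z lambda D', positive=True)
J = x**2 + y**2 + z**2            # objective with a = b = c = 2 (assumed)
C = x + y + z - D                 # constraint C(x, y, z) = 0
L = J - lam * C

# Stationarity: all four partial derivatives of L must vanish.
eqs = [sp.diff(L, v) for v in (x, y, z, lam)]
print(sp.solve(eqs, (x, y, z, lam), dict=True))
# [{x: D/3, y: D/3, z: D/3, lambda: 2*D/3}]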
|
Solving a system of differential equations I would like to get some help by solving the following problem:
$$
p'_1= \frac 1 x p_1 - p_2 + x$$
$$ p'_2=\frac 1{x^2}p_1+\frac 2 x p_2 - x^2 $$
with initial conditions $$p_1(1)=p_2(1)=0, x \gt0 $$
EDIT:
If I use Wolframalpha, I get
Where $u$ and $v$ are obviously $p_1$ and $p_2$.
Can anybody explain what's going on?
|
One approach is to express $p_2=-p_1'+\frac1xp_1+x$ from the first equation and substitute into the second to get:
$$-p_1''+\frac1xp_1'-\frac1{x^2}p_1+1=p_2'=\frac1{x^2}p_1+\frac2x\left(-p_1'+\frac1xp_1+x\right)-x^2$$
$$p_1''-\frac3xp_1'+\frac4{x^2}p_1=x^2-1$$
Multiplying by $x^2$ we get $x^2p_1''-3xp_1'+4p_1=x^4-x^2$, which is a Cauchy–Euler equation.
Solve the homogeneous equation $x^2p_1''-3xp_1'+4p_1=0$ first: $p_1=x^r$, hence
$$x^r(r(r-1)-3r+4)=0\hspace{5pt}\Rightarrow\hspace{5pt} r^2-4r+4=(r-2)^2=0 \hspace{5pt}\Rightarrow\hspace{5pt} r=2$$
So the solution to the homogeneous equation is $C_1x^2\ln x+C_2x^2$.
Now we can use Green's function of the equation to find $p_1$: ($y_1(x)=x^2\ln x,\hspace{3pt} y_2(x)=x^2$)
$$\begin{align*}k(x,t)&=\frac{y_1(t)y_2(x)-y_1(x)y_2(t)}{y_1(t)y_2'(t)-y_2(t)y_1'(t)}=
\frac{t^2\ln t\cdot x^2-x^2\ln x\cdot t^2}{t^2\ln t\cdot 2t-t^2\cdot(2t\ln t+t)}=\frac{t^2\ln t\cdot x^2-x^2\ln x\cdot t^2}{-t^3}\\
&=\frac{x^2\ln x-x^2\ln t}{t}\end{align*}$$
Then ($b(x)$ is the inhomogeneous part of the equation in standard form, i.e. of $p_1''-\frac3xp_1'+\frac4{x^2}p_1=x^2-1$, so $b(x)=x^2-1$)
$$\begin{align*}p_1(x)&=\int k(x,t)b(t)dt=\int \frac{x^2\ln x-x^2\ln t}{t}(t^2-1)dt\\
&=x^2\ln x\int \frac{t^2-1}{t}dt-x^2\int \frac{(t^2-1)\ln t}{t}dt\end{align*}$$
Compute the integrals, find $p_1$ using your initial values, and then substitute back to find $p_2$.
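A quick sanity check of the Cauchy–Euler step (a hedged sketch using SymPy's ODE solver, not part of the original derivation):

import sympy as sp

x = sp.symbols('x', positive=True)
p1 = sp.Function('p1')
ode = sp.Eq(x**2*p1(x).diff(x, 2) - 3*x*p1(x).diff(x) + 4*p1(x), x**4 - x**2)
print(sp.dsolve(ode))
# Expect something equivalent to
# p1(x) = C1*x**2 + C2*x**2*log(x) + x**4/4 - x**2*log(x)**2/2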
|
How to do this interesting integration?
$$\lim_{\Delta x\rightarrow0}\sum_{k=1}^{n-1}\int_{k+\Delta x}^{k+1-\Delta x}x^m dx$$
How does one evaluate the above limit of integrals?
Edit1:
$$\lim_{\Delta x\rightarrow0}\int_{2-\Delta x}^{2+\Delta x}x^m dx$$
Does this integral give $2^m$ as the output?
Edit2:
Are my following steps correct?
$\lim_{\Delta x\rightarrow0}\sum_{k=1}^{n-1}\int_{k+\Delta x}^{k+1-\Delta x}x^m dx$ =
$\lim_{\Delta x\rightarrow0}\sum_{k=1}^{n-1}\int_{k+\Delta x}^{k+1-\Delta x}x^m dx$ $+$ $\lim_{\Delta x\rightarrow0}\sum_{k=1}^{n-1}\int_{k+1-\Delta x}^{k+1+\Delta x}x^m dx$ $-$ $\lim_{\Delta x\rightarrow0}\sum_{k=1}^{n-1}\int_{k+1-\Delta x}^{k+1 +\Delta x}x^m dx$ =
$\lim_{\Delta x\rightarrow0}\int_{1+\Delta x}^{n+\Delta x}x^m dx$ $-$ $\lim_{\Delta x\rightarrow0}\sum_{k=1}^{n-1}\int_{k+1-\Delta x}^{k+1 +\Delta x}x^m dx$ =
$\lim_{\Delta x\rightarrow0}\int_{1}^{n}x^m dx$ $+$$\lim_{\Delta x\rightarrow0}\int_{n}^{n+\Delta x}x^m dx$ $-$ $\lim_{\Delta x\rightarrow0}\int_{1}^{1+\Delta x}x^m dx$ $-$
$\lim_{\Delta x\rightarrow0}\sum_{k=1}^{n-1}\int_{k+1-\Delta x}^{k+1 +\Delta x}x^m dx$ =
$\lim_{\Delta x\rightarrow0}\int_{1}^{n}x^m dx$ + 0 - 0 - 0 = $\lim_{\Delta x\rightarrow0}\int_{1}^{n}x^m dx$
= $\int_{1}^{n}x^m dx$
|
For the first question:
$$
\begin{align}
\lim_{\Delta x\to0}\,\left|\,\int_k^{k+1}x^m\,\mathrm{d}x-\int_{k+\Delta x}^{k+1-\Delta x}x^m\,\mathrm{d}x\,\right|
&=\lim_{\Delta x\to0}\,\left|\,\int_k^{k+\Delta x}x^m\,\mathrm{d}x+\int_{k+1-\Delta x}^{k+1}x^m\,\mathrm{d}x\,\right|\\
&\le\lim_{\Delta x\to0}2\Delta x(k+1)^m\\
&=0
\end{align}
$$
we get
$$
\begin{align}
\lim_{\Delta x\to0}\sum_{k=1}^{n-1}\int_{k+\Delta x}^{k+1-\Delta x}x^m\,\mathrm{d}x
&=\sum_{k=1}^{n-1}\int_k^{k+1}x^m\,\mathrm{d}x\\
&=\int_1^nx^m\,\mathrm{d}x
\end{align}
$$
For the Edit:
For $\Delta x<1$,
$$
\left|\,\int_{2-\Delta x}^{2+\Delta x}x^m\,\mathrm{d}x\,\right|\le2\cdot3^m\Delta x
$$
Therefore,
$$
\lim_{\Delta x\to0}\,\int_{2-\Delta x}^{2+\Delta x}x^m\,\mathrm{d}x=0
$$
For your steps:
If you would give the justification for each step, it would help us in commenting on what is correct and what might be wrong and help you in seeing what is right.
|
Fixed: Is this set empty? $ S = \{ x \in \mathbb{Z} \mid \sqrt{x} \in \mathbb{Q}, \sqrt{x} \notin \mathbb{Z}, x \notin \mathbb{P}$ } This question has been "fixed" to reflect the question that I intended to ask
Is this set empty? $ S = \{ x \in \mathbb{Z} \mid \sqrt{x} \in \mathbb{Q}, \sqrt{x} \notin \mathbb{Z}, x \notin \mathbb{P}$ }
Is there an integer $x$ that is not prime and whose square root is rational but not an integer?
|
To answer according to the last edit: Yes.
Let $a\in\mathbb{Z}$ and consider the polynomial $x^{2}-a$.
By the rational root theorem, if there is a rational root $\frac{r}{s}$ (in lowest terms)
then $s|1$ hence $s=\pm1$ and the root is an integer. So $\sqrt{a}\in\mathbb{Q}\iff\sqrt{a}\in\mathbb{Z}$ .
Since you assumed that the root is not an integer but is a rational number, this cannot be.
|
How to randomly construct a square full-ranked matrix with low determinant? How to randomly construct a square (1000*1000) full-ranked matrix with low determinant?
I have tried the following method, but it failed.
In MATLAB, I just use:
n=100;
A=randi([0 1], n, n);
while rank(A)~=n
A=randi([0 1], n, n);
end
The above code generates a random binary matrix, with the hope that the corresponding determinant can be small.
However, the determinant is usually about 10^49, a huge number.
Not to mention that when n>200, the determinant usually overflows in MATLAB.
Could anyone comment on how I can generate a matrix (it could be non-binary) with a very low determinant (e.g. <10^3)?
|
The determinant of $e^B$ is $e^{\textrm{tr}(B)}$ (wiki) and $e^B$ is always invertible, since $e^B e^{-B}=\textrm{Id}$. So, if you have a matrix $B$ with negative trace then $\det e^B$ is positive and smaller than $1$. Using this idea I wrote the following matlab script which generates matrices with "small" determinant:
n = 100;
for i = 1:10
    B = randn(n);   % entries i.i.d. N(0,1), so the trace of B has mean 0
    A = expm(B);    % matrix exponential: det(A) = exp(trace(B)) > 0
    det(A)
end
I generate the coefficients using a normal distribution with mean $0$, so that the expected trace of B is $0$ (I think) and therefore the expected determinant of $A$ is $1$.
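A hedged refinement of the same idea, sketched with NumPy/SciPy: subtracting the mean of the diagonal makes $\operatorname{tr}(B)$ exactly zero, which pins $\det(e^B)$ to exactly $1$ while keeping the matrix random and invertible.

import numpy as np
from scipy.linalg import expm

n = 100
rng = np.random.default_rng(0)
B = rng.standard_normal((n, n))
B -= (np.trace(B) / n) * np.eye(n)    # now trace(B) == 0 (up to rounding)
A = expm(B)                           # det(A) = exp(trace(B)) = 1 exactly
sign, logdet = np.linalg.slogdet(A)   # evaluate det stably via its log
print(sign * np.exp(logdet))          # ~ 1.0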
|
Plot of x^(1/3) has range of 0-inf in Mathematica and R Just doing a quick plot of the cube root of x, but both Mathematica 9 and R 2.15.32 are not plotting it in the negative space. However, they both plot x cubed just fine:
Plot[{x^(1/3), x^3},
{x, -2, 2}, PlotRange -> {-2, 2}, AspectRatio -> Automatic]
http://www.wolframalpha.com/input/?i=x%5E%281%2F3%29%2Cx%5E3
plot(function(x){x^(1/3)} , xlim=c(-2,2), ylim=c(-2,2))
Is this a bug in both software packages, or is there something about the cube root that I don't understand?
In[19]:= {1^3, 1^(1/3), -1^3, -1^(1/3), 42^3, -42^3, 42^(1/3) // N, -42^(1/3) // N}
Out[19]= {1, 1, -1, -1, 74088, -74088, 3.47603, -3.47603}
Interestingly when passing -42 into the R function I get NaN, but when I multiply it directly I get -3.476027.
> f = function(x){x^(1/3)}
> f(c(42, -42))
[1] 3.476027 NaN
> -42^(1/3)
[1] -3.476027
|
Really funny you'd mention that... my Calc professor talked about that last semester. ;)
Many software packages plot the principal root, rather than the real root. http://mathworld.wolfram.com/PrincipalRootofUnity.html
For example, $\sqrt[3]{3}$ has three values: W|A
Mathematica uses the principal root when plotting the cube root; for negative $x$ this root lies in the upper half-plane (e.g. $(-42)^{1/3}\approx 1.738+3.010i$). Thus, it thinks the value is complex, and therefore doesn't graph it.
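The same convention is easy to see outside Mathematica and R; a hedged illustration in Python with NumPy (assumed environment):

import numpy as np

print((-42) ** (1 / 3))    # principal cube root: complex, approx 1.738+3.010j
print(np.cbrt(-42.0))      # real cube root: approx -3.4760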
|
Generating function for the divisor function Earlier today on MathWorld (see eq. 17) I ran across the following expression, which gives a generating function for the divisor function $\sigma_k(n)$: $$\sum_{n=1}^{\infty} \sigma_k (n) x^n = \sum_{n=1}^{\infty} \frac{n^k x^n}{1-x^n}. \tag{1}$$
(The divisor function $\sigma_k(n)$ is defined by $\sigma_k(n) = \sum_{d|n} d^k$.)
How would one go about proving Equation (1)?
(For reference, I ran across this while thinking about this question asked earlier today. The sum in that question is a special case of the sum on the right side of Equation (1) above.)
|
Switching the order of summation, we have that
$$\sum_{n=1}^{\infty}\sigma_{k}(n)x^{n}=\sum_{n=1}^{\infty}x^{n}\sum_{d|n}d^{k}=\sum_{d=1}^{\infty}d^{k}\sum_{n:\ d|n}^{\infty}x^{n}.$$ From here, applying the formula for the geometric series, we find that the above equals $$\sum_{d=1}^{\infty}d^{k}\sum_{n=1}^{\infty}x^{nd}=\sum_{d=1}^{\infty}d^{k}\frac{x^{d}}{1-x^{d}}.$$
Such a generating series is known as a Lambert Series. The same argument above proves that for a multiplicative function $f$ with $f=1*g$ where $*$ represents Dirichlet convolution, we have $$\sum_{n=1}^\infty f(n)x^n =\sum_{k=1}^\infty \frac{g(k)x^k}{1-x^k}.$$
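A hedged numeric sanity check of the identity, truncating both sides at a finite order (the illustrative values $k=1$, $N=12$ are assumptions):

import sympy as sp

x = sp.symbols('x')
k, N = 1, 12
lhs = sum(sp.divisor_sigma(n, k) * x**n for n in range(1, N))
rhs = sum(n**k * x**n / (1 - x**n) for n in range(1, N))
# The difference should vanish through order x^(N-1).
print(sp.expand(sp.series(lhs - rhs, x, 0, N).removeO()))   # 0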
|
Linearization of Gross-Pitaevskii-Equation Consider a PDE of the form
$\partial_t \phi = A(\partial_\xi) \phi + c\partial_\xi \phi +N(\phi)$
where $N$ is some non-linearity defined via pointwise evaluation of $\phi$.
If you want to check for stability of travelling wave solutions of PDEs you linearize the PDE at some travelling wave solution $Q$:
$\partial_t \phi = A(\partial_\xi) \phi + c\partial_\xi \phi + \partial_\phi N(Q) \phi$
My problem is to do exactly this for the Gross-Pitaevskii equation.
The Gross-Pitaevskii equation (in appropriate coordinates) has the form
$\partial_t \phi = -i \triangle \phi + c \partial_\xi \phi-i\phi(1-\vert \phi \vert^2)$
so that
$N(\phi)=-i\phi (1 -\vert \phi \vert^2).$
Can anyone help me to linearize that at some travelling wave solution $Q$? I'm not even sure how to start...
|
Well.... first you need to specify a traveling wave solution $Q$.
Then you just take, since $N(\phi) = -i \phi(1 - |\phi|^2)$,
$$ (\partial_\phi N)(\phi) = -i(1-|\phi|^2) + 2i\phi \bar{\phi} $$
by the product rule of differential calculus. Here note $|\phi|^2 = \phi \bar\phi$; since $N$ also depends on $\bar\phi$, the formula above corresponds to letting $\bar\phi$ vary along with $\phi$ (e.g. for real perturbations), the fully complex linearisation being $\delta N=(-i+2i|\phi|^2)\,\delta\phi+i\phi^2\,\delta\bar\phi$.
So simplifying and evaluating it at $Q$ we have
$$ (\partial_\phi N)(Q) = -i + 3i |Q|^2 $$
which you can plug into the general form you quoted to get the linearised equation.
|
For which $p$ does the series converge? $$\sum_{n=0}^{\infty}\left(\frac{1}{n!}\right)^{p}$$
Please verify answer below
|
Ratio test:
$$\lim_{n\rightarrow\infty}\frac{(1/(n+1)!)^{p}}{(1/n!)^{p}}=\lim_{n\rightarrow\infty}\left(\frac{n!}{(n+1)!}\right)^{p}=\lim_{n\rightarrow\infty}\frac{1}{(n+1)^{p}}=\begin{cases}
\infty & \Leftrightarrow p<0\\
1 & \Leftrightarrow p=0\\
0 & \Leftrightarrow p>0
\end{cases}$$
So:
*
*for $p>0$ the ratio tends to $0<1$, hence the series converges;
*for $p\le 0$ the terms $(1/n!)^{p}$ are $\ge 1$ and do not tend to $0$, hence the series diverges.
|
How to prove that $\lVert \Delta u \rVert_{L^2} + \lVert u \rVert_{L^2}$ and $\lVert u \rVert_{H^2}$ are equivalent norms? How to prove that $\lVert \Delta u \rVert_{L^2} + \lVert u \rVert_{L^2}$ and $\lVert u \rVert_{H^2}$ are equivalent norms on a bounded domain? I hear there is a way to do it by RRT but any other way is fine. Thanks.
|
As user53153 wrote, this is true for bounded smooth domains and can be obtained directly from the boundary regularity theory exposed in Evans.
BUT: consider the domain $\Omega=\{ (r \cos\phi,r \sin\phi); 0<r<1, 0<\phi<\omega\}$ for some $\pi<\omega<2\pi$. Then the function $u(r,\phi)=r^{\pi/\omega}\sin(\phi\pi/\omega)$ (in polar coordinates) satisfies $\Delta u=0$ in $\Omega$ and it is clearly bounded. On the other hand the second derivatives blow up as $r\rightarrow 0$, more specifically $u\not\in H^2$!
|
Term for a group where every element is its own inverse? Several groups have the property that every element is its own inverse. For example, the numbers $0$ and $1$ and the XOR operator form a group of this sort, and more generally the set of all bitstrings of length $n$ and XOR form a group with this property.
These groups have the interesting property that they have to be commutative.
Is there a special name associated with groups with this property? Or are they just "abelian groups where every element has order two?"
Thanks!
|
Another term is "group of exponent $2$".
|
Set of points reachable by the tip of a swinging sticks kinetic energy structure This is an interesting problem that I thought of myself but I'm racking my brain on it. I recently saw this kinetic energy knick knack in a scene in Iron Man 2:
http://www.youtube.com/watch?v=uBxUoxn46A0
And it got me thinking: it looks like either tip of the shorter rod can reach all points within the maximum radius of the device.
The axis of either rod is off center, so for simplicity I decided to first simplify the problem by modeling it as having both rods centered. So the radius of the device is $r_1 + r_2$. So I decided to first model the space of points reachable by the tip of the shorter rod as a function of a vector in $\mathbb{R}^2$, consisting of $\theta_1$, the angle of the longer rod, and $\theta_2$, the angle of the shorter rod.
Where I'm getting lost is how to transform this angle vector to its position in coordinate space. How would you describe this mapping? And how would you express the space of all points reachable by the tip of the shorter rod, as the domain encompasses all of $\mathbb{R}^2$?
|
One way to see the reach is to notice that if the configuration $\theta = (\theta_1,\theta_2)$ reaches some spot $x \in \mathbb{R}^2$, and $y$ is a spot obtained by rotating $x$ by $\alpha \in \mathbb{R}$, then the configuration $\theta+(\alpha, \alpha)$ will reach $y$. So, we only need to see what the minimum and maximum radius can be.
For a particular configuration, the radius squared is
\begin{eqnarray}
(r_1 \cos \theta_1 + r_2 \cos \theta_2)^2 + (r_1 \sin \theta_1 + r_2 \sin \theta_2)^2 &=& r_1^2+r_2^2+ 2r_1r_2 ( \cos \theta_1 \cos \theta_2 + \sin \theta_1 \sin \theta_2)\\
& = & r_1^2+r_2^2+ 2r_1r_2 \cos(\theta_1-\theta_2)
\end{eqnarray}
Hence the radius lies in $[\,|r_1-r_2|,\,r_1+r_2]$, and since $\cos(\theta_1-\theta_2)$ attains every value in $[-1,1]$, every radius in this interval occurs: the reachable set is exactly that annulus.
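To make the mapping the question asks about explicit, here is a hedged numerical sketch (the radii $r_1, r_2$ are arbitrary assumed values):

import numpy as np

r1, r2 = 1.0, 0.4   # assumed rod lengths

def tip(theta1, theta2):
    """Position of the short rod's tip for configuration (theta1, theta2)."""
    return np.array([r1 * np.cos(theta1) + r2 * np.cos(theta2),
                     r1 * np.sin(theta1) + r2 * np.sin(theta2)])

grid = np.linspace(0, 2 * np.pi, 60)
pts = np.array([tip(a, b) for a in grid for b in grid])
radii = np.linalg.norm(pts, axis=1)
print(radii.min(), radii.max())   # ~ |r1 - r2| = 0.6 and r1 + r2 = 1.4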
|
Dual norm and distance Let $Z$ be a subspace of a normed linear space $X$ and let $x\in X$ have distance $d=\inf\{||x-z||:z\in Z\}$ to $Z$.
I would like to find a functional $f\in X^*$ that satisfies
$||f||\le1$, $f(x)=d$ and $f(z)=0$ for all $z\in Z$.
Is it correct that $||f||:=\sup\{|f(x)| :x\in X, ||x||\le 1\}$? I cannot conclude from this definition that $||f||\le1$.
Maybe you could help me with that, thank you very much.
|
I'll list the ingredients and leave the cooking to you:
*
*The function $d:X\to [0,\infty)$ is sublinear in the sense used in the Hahn-Banach theorem
*There is a linear functional $\phi$ on the one-dimensional space $V=\{t x:t\in\mathbb R\}$ such that $\phi(x)=d(x)$ and $|\phi(y)|\le d(y)$ for all $y\in V$. If you get stuck here, mouse over the hidden text.
*From 1, 2, and Hahn-Banach you will get $f$.
$\phi(tx)=td(x)$
|
Ambiguous Curve: can you follow the bicycle? Let $\alpha:[0,1]\to \mathbb R^2$ be a smooth closed curve parameterized by the arc length. We will think of $\alpha$ like a back track of the wheel of a bicycle. If we suppose that the distance between the two wheels is $1$ then we can describe the front track by
$$\tau(t)=\alpha(t)+\alpha'(t)\;.$$
Suppose we know the two (back and front) traces of a bicycle. Can you determine the orientation of the curves? For example, if $\alpha$ is a circle the answer is no.
More precisely the question is:
Is there a smooth closed curve parameterized by the arc length $\alpha$ such that
$$\tau([0,1])=\gamma([0,1])$$
where $\gamma(t)=\alpha(1-t)-\alpha'(1-t)$?
If the trace of $\alpha$ is a circle we have $\tau([0,1])=\gamma([0,1])$. Is there another?
|
After the Which Way Did the Bicycle Go? book, there has been some systematic development of theory related to the bicycle problem. Much of that is either done or cited in papers by Tabachnikov and his coauthors, available online:
http://arxiv.org/find/all/1/all:+AND+bicycle+tracks/0/1/0/all/0/1
http://arxiv.org/abs/math/0405445
|
$e^{i\theta_n}\to e^{i\theta}\implies \theta_n\to\theta$ How to show $e^{i\theta_n}\to e^{i\theta}\implies \theta_n\to\theta$ for $-\pi<\theta_n,\theta<\pi.$ I'm completely stuck in it. Please help.
|
Suppose $(\theta_n)$ does not converge to $\theta$, then there is an $\epsilon > 0$ and a subsequence $( \theta_{n_k} )$ such that $| \theta_{n_k} - \theta | \geq \epsilon $ for all $k$. $(\theta_{n_k})$ is bounded so it has a further subsequence $(\theta_{m_k})$ which converges to $\theta_0 \in [-\pi,\pi]$ (say) with $| \theta - \theta_0 | \geq \epsilon $, and hence $ \theta_0 \neq \theta $. Next $( \exp i\theta_{m_k} )$ being a subsequence of $( \exp i\theta_n ) $ must converge to $\exp i\theta $, hence $ \exp i\theta = \exp i\theta_0 $. So $ \theta_0 = 2n \pi + \theta $ for some integer $n$, however $ | \theta_0 - \theta | < 2\pi $ as $ \theta \in ( -\pi , \pi )$ and $ \theta_0 \in [-\pi,\pi]$, this implies $ \theta = \theta_0$ and contradicts $ \theta \neq \theta_0 $ .
|
It's in my hands to have a surjective function Let $f$ be any function $A \to B$.
By definition $f$ is a surjective function if $\space \forall y \in B \space \exists \space x \in A \space( \space f(x)=y \space)$.
So, for any function I only have to ensure that there doesn't "remain" any element "alone" in the set $B$. In other words, the range set of the function has to be equal to the codomain set.
The range depends on the function, but the codomain can be chosen by me. So if I choose a codomain equal to the range, I get a surjective function, regardless of the function that is given.
Am I right?
|
There are intrinsic properties and extrinsic properties. Being surjective is an extrinsic property. If you are not given a particular codomain you cannot conclude whether or not a function is surjective.
Being injective, on the other hand, is an intrinsic property. It depends only on the function as a set of ordered pairs.
|
In every interval there is a rational and an irrational number. When the interval is between two rational numbers it is easy. But things get complicated when the interval is between two irrational numbers. I couldn't prove that.
|
Supposing you mean an interval $(x,y)$ of length $y-x=l>0$ (it doesn't matter whether $l$ is rational or irrational), you can simply choose any integer $n>\frac1l$, and then the interval will contain a rational number of the form $\frac an$ with $a\in\mathbf Z$. Indeed if $a'$ is the largest integer such that $\frac{a'}n\leq x$ (which is well defined) then $a=a'+1$ will do. By choosing $n>\frac2l$ you even get two rationals of this form, and an irrational number between those two by an argument you claim to already have.
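A hedged computational sketch of exactly this construction (floating point stands in for the reals here, so it is only illustrative):

import math

def rational_in(x, y):
    """Return (a, n) with x < a/n < y, following the argument above."""
    n = math.floor(1 / (y - x)) + 1   # any integer n > 1/(y - x)
    a = math.floor(n * x) + 1         # smallest a with a/n > x
    return a, n

print(rational_in(math.sqrt(2), math.sqrt(3)))   # (6, 4): 6/4 = 1.5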
|
Check convergence $\sum\limits_{n=1}^\infty\left(\sqrt{1+\frac{7}{n^{2}}}-\sqrt[3]{1-\frac{8}{n^{2}}+\frac{1}{n^{3}}}\right)$ Please help me to check convergence of $$\sum_{n=1}^{\infty}\left(\sqrt{1+\frac{7}{n^{2}}}-\sqrt[3]{1-\frac{8}{n^{2}}+\frac{1}{n^{3}}}\right)$$
|
(Presumably with tools from Calc I...) Using the conjugate identities
$$
\sqrt{a}-1=\frac{a-1}{1+\sqrt{a}},\qquad1-\sqrt[3]{b}=\frac{1-b}{1+\sqrt[3]{b}+\sqrt[3]{b^2}},
$$
one gets
$$
\sqrt{1+\frac{7}{n^{2}}}-\sqrt[3]{1-\frac{8}{n^{2}}+\frac{1}{n^{3}}}=x_n+y_n,
$$
with
$$
x_n=\sqrt{1+a_n}-1=\frac{a_n}{1+\sqrt{1+a_n}},\qquad a_n=\frac{7}{n^2},
$$
and
$$
y_n=1-\sqrt[3]{1-b_n}=\frac{b_n}{1+\sqrt[3]{1-b_n}+(\sqrt[3]{1-b_n})^2},\qquad
b_n=\frac{8}{n^{2}}-\frac{1}{n^{3}}.
$$
The rest is easy: when $n\to\infty$, the denominator of $x_n$ is at least $2$ hence $x_n\leqslant\frac12a_n$, likewise the denominator of $y_n$ is at least $1$ hence $y_n\leqslant b_n$. Thus,
$$
0\leqslant x_n+y_n\leqslant\tfrac12a_n+b_n\leqslant\frac{12}{n^2},
$$
since $a_n=\frac7{n^2}$ and $b_n\leqslant\frac8{n^2}$. The series $\sum\limits_n\frac1{n^2}$ converges, hence all this proves that the series $\sum\limits_n(x_n+y_n)$ converges (absolutely).
|
If x,y,z are positive reals, then the minimum value of $x^2+8y^2+27z^2$ where $\frac{1}{x}+\frac{1}{y}+\frac{1}{z}=1$ is what If $x,y, z$ are positive reals, then the minimum value of $x^2+8y^2+27z^2$ where $\frac{1}{x}+\frac{1}{y}+\frac{1}{z}=1$ is what?
$108$ , $216$ , $405$ , $1048$
|
As $x,y,z$ are positive reals, we can set $\frac 1x=\sin^2A,\frac1y+\frac1z=\cos^2A$
again, $\frac1{y\cos^2A}+\frac1{z\cos^2A}=1$
we can put $\frac1{y\cos^2A}=\cos^2B, \frac1{z\cos^2A}=\sin^2B\implies y=\cos^{-2}A\cos^{-2}B,z=\cos^{-2}A\sin^{-2}B$
So, $$x^2+8y^2+27z^2=\sin^{-4}A+\cos^{-4}A(8\cos^{-4}B+27\sin^{-4}B)$$
We need $8\cos^{-4}B+27\sin^{-4}B$ to be minimum for the minimum value of $x^2+8y^2+27z^2$
Let $F(B)=p^3\cos^{-4}B+q^3\sin^{-4}B$ where $p,q$ are positive real numbers.
then $F'(B)=p^3(-4)\cos^{-5}B(-\sin B)+q^3(-4)\sin^{-5}B(\cos B)$
for the extreme values of $F(B),F'(B)=0\implies (p\sin^2B)^3=(q\cos^2B)^3\implies \frac{\sin^2B}q=\frac{\cos^2B}p=\frac1{p+q}$
Observe that $F(B)$ can not have any finite maximum value.
So, $F(B)_{min}=\frac{p^3}{\left(\frac p{p+q}\right)^2}+\frac{q^3}{\left(\frac q{p+q}\right)^2}=(p+q)^3$
So, the minimum value of $8\cos^{-4}B+27\sin^{-4}B$ is $(2+3)^3=125$ (Putting $p=2,q=3$)
So, $$x^2+8y^2+27z^2=\sin^{-4}A+\cos^{-4}A(8\cos^{-4}B+27\sin^{-4}B)\ge \sin^{-4}A+125\cos^{-4}A\ge (5+1)^3=216$$ (putting $p=5,q=1$)
|
How can I solve $ \cos x = 2x$? I would like to get an approximate solution to the equation $ \cos x = 2x$. I don't need an exact solution, just some approximation, and I need a solution using elementary mathematics (without derivatives etc).
|
Take a pocket calculator, start with $0$ and repeatedly type [cos], [$\div$], [2], [=]. This will more or less quickly converge to a value $x$ such that $\frac{\cos x}2=x$, just what you want.
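The same button-pressing loop, as a hedged Python sketch:

import math

x = 0.0
for _ in range(25):      # iterate x <- cos(x)/2; contraction since |sin(x)/2| <= 1/2
    x = math.cos(x) / 2
print(x, math.cos(x) - 2 * x)   # approx 0.45018, residual ~ 0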
|
Spivak problem on orientations. (A comprehensive introduction to differential geometry) I am having problems with exercise 16 of chapter 3 (p. 98 in my edition) of Spivak's book.
The problem is very simple. Let $M$ be a manifold with boundary, and choose a point $p\in\partial M$. Now consider an element $v\in T_p M$ which is not spanned by the vectors in $T_p\partial M$, that is, its last coordinate is non-zero (after suitable identifications). We say that $v$ is inward pointing if there is a chart $\phi: U\rightarrow \mathbb{H}^n$ ($p\in U$) such that $d_p\phi(v)=(v_1,\dots,v_n)$ where $v_n>0$.
It is asked to show that this is independent on the choice of coordinates (on the chart).
I think that Spivak's idea is to realize first that the subspace of vectors in $T_p\partial M$ is independent of the chart, which can be seen by noticing that if $i:\partial M\rightarrow M$ is the inclusion then $d_pi(v_1,\dots,v_{n-1})=(v_1,\dots,v_{n-1},0)\in T_p \mathbb{H}^n$.
|
A change of coordinates between charts for the manifold with boundary $M$ has the form $x=(x_1, \cdots,x_n) \mapsto (\phi_1(x), \cdots,\phi_n(x))$, with $x_n, \phi_n(x)\geq0$ since $x,\phi(x)\in \mathbb H^n$.
The last line of the Jacobian $Jac_a(\phi)$ at a point $a\in \partial \mathbb H^n$ has the form $(0,\cdots , 0,\frac { \partial \phi_n}{\partial x_n}(a))$:
Indeed, for $1\leq i\leq n-1$ we have $\frac { \partial \phi_n}{\partial x_i}(a)=0$ by the definition of partial derivatives, since $\phi(\partial \mathbb H^n)\subset \partial \mathbb H^n$ and thus $\:\frac {\phi_n(a+he_i)-\phi_n(a)}{h}=\frac {0-0}{h}=0$.
Similarly $\frac { \partial \phi_n}{\partial x_n}(a)\geq 0$ because, for $h>0$, $\:\frac {\phi_n(a+he_n)-\phi_n(a)}{h}=\frac {\phi_n(a+he_n)-0}{h}\geq 0$.
Actually, we must have $\frac { \partial \phi_n}{\partial x_n}(a)\gt 0$ because the Jacobian is invertible.
The above proves that, given a tangent vector $v\in T_a(\mathbb H^n)$, its image $w=Jac_a(\phi)(v)$ satisfies $w_n=\frac { \partial \phi_n}{\partial x_n}(a)\cdot v_n$ with $\frac { \partial \phi_n}{\partial x_n}(a)\gt 0$, which shows that inward (and likewise outward) pointing vectors are preserved by the Jacobian of a change of coordinates for $M$, and thus that the notion of inward pointing vector is well-defined at a boundary point of a manifold with boundary.
|
Calculate $\lim_{x\to 0}\frac{\ln(\cos(2x))}{x\sin x}$ Problems with calculating
$$\lim_{x\rightarrow0}\frac{\ln(\cos(2x))}{x\sin x}$$
$$\lim_{x\rightarrow0}\frac{\ln(\cos(2x))}{x\sin x}=\lim_{x\rightarrow0}\frac{\ln(2\cos^{2}(x)-1)}{(2\cos^{2}(x)-1)}\cdot \left(\frac{\sin x}{x}\right)^{-1}\cdot\frac{(2\cos^{2}(x)-1)}{x^{2}}=0$$
The correct answer is $-2$. Please show me where my error is this time. Thanks in advance!
|
The known limits you might wanna use are, for $x\to 0$
$$\frac{\log(1+x)}x\to 1$$
$$\frac{\sin x }x\to 1$$
With them, you get
$$\begin{align}\lim\limits_{x\to 0}\frac{\log(\cos 2x)}{x\sin x}&=\lim\limits_{x\to 0}\frac{\log(1-2\sin ^2 x)}{-2\sin ^2 x}\frac{-2\sin ^2 x}{x\sin x}\\&=-2\lim\limits_{x\to 0}\frac{\log(1-2\sin ^2 x)}{-2\sin ^2 x}\lim\limits_{x\to 0}\frac{\sin x}{x}\\&=-2\lim\limits_{u\to 0}\frac{\log(1+u)}{u}\lim\limits_{x\to 0}\frac{\sin x}{x}\\&=-2\cdot 1 \cdot 1 \\&=-2\end{align}$$
|
Ultrafilters and measurability Consider a compact metric space $X$, the sigma-algebra of the boreleans of $X$, a sequence of measurable maps $f_n: X \to\Bbb R$ and an ultrafilter $U$. Take, for each $x \in X$, the $U$-limit, say $f^*(x)$, of the sequence $(f_n(x))_{n \in\Bbb N}$.
(Under what conditions on $U$) Is $f^*$ measurable?
|
Let me first get rid of a silly case that you probably didn't intend to include. If $U$ is a principal ultrafilter, generated by $\{k\}$, then $f^*$ is just $f_k$, so it's measurable.
Now for the non-silly cases, where $U$ isn't principal. Here's an example of a sequence of measurable (in fact low-level Borel) functions whose $U$-limit isn't measurable. I'll use a very nice $X$, the unit interval with the usual topology and with Lebesgue measure. My functions $f_n$ will take only the values 0 and 1, and they're defined by $f_n(x)=$ the $n$-th bit in the binary expansion of $x$. (The binary expansion is ambiguous when $x$ is a rational number whose denominator is a power of 2, but that's only countably many $x$'s so they won't affect measurability; resolve the ambiguity any arbitrary way you want.) Then the $U$-limit $f^*$ of these functions sends $x$ to 1 iff the set $\{n:\text{the }n\text{-th bit in the binary expansion of }x\text{ is }1\}$ is in $U$. In other words, if we identify $x$ via its binary expansion with a sequence of 0's and 1's and if we then regard that sequence as the characteristic function of a subset of $\mathbb N$, then $f^*$, now viewed as mapping subsets of $\mathbb N$ to $\{0,1\}$, is just the characteristic function of $U$. An old theorem of Sierpiński says that this is never Lebesgue measurable when $U$ is a non-principal ultrafilter.
|
$\int_{\mathbb{R}^n} dx_1 \dots dx_n \exp(−\frac{1}{2}\sum_{i,j=1}^{n}x_iA_{ij}x_j)$? Let $A$ be a symmetric positive-definite $n\times n$ matrix and $b_i$ be some real numbers. How can one evaluate the following integrals?
*
*$\int_{\mathbb{R}^n} dx_1 \dots dx_n \exp(−\frac{1}{2}\sum_{i,j=1}^{n}x_iA_{ij}x_j)$
*$\int_{\mathbb{R}^n} dx_1 \dots dx_n \exp(−\frac{1}{2}\sum_{i,j=1}^{n}x_iA_{ij}x_{j}-b_i x_i)$
|
Let $x=(x_1,\ldots,x_n)\in\mathbb{R}^n$. Since $A$ is symmetric positive-definite we can write $A=U^TU$ (e.g. a Cholesky factorization), so that $\langle x,Ax\rangle=\langle Ux,Ux\rangle$ and $|\det U|=\sqrt{\det A}$. Substituting $y=Ux$, $dy=|\det U|\,dx$, we have
\begin{align}
\int_{\mathbb{R}^n}e^{-\frac{1}{2}\langle x,Ax \rangle}\, dx
&=
\int_{\mathbb{R}^n}e^{-\frac{1}{2}\langle Ux,Ux \rangle}\, dx
=
\frac{1}{|\det U|}\int_{\mathbb{R}^n}e^{-\frac{1}{2}\langle y,y \rangle}\, dy
\\
&=
\frac{1}{\sqrt{\det A}}\cdot \left(\int_{\mathbb{R}} e^{-\frac{1}{2}t^2}\, dt \right)^n
=
\frac{(2\pi)^{n/2}}{\sqrt{\det A}}.
\end{align}
For the second integral, write the exponent as $-\frac12\left(x^TAx+2b^Tx\right)$ and complete the square with the change of variable $x=y+s$, choosing $s$ such that $2s^TA+2b^T=0$, i.e. $s=-A^{-1}b$:
\begin{align}
(y+s)^TA(y+s)+2b^T(y+s)
&=
y^TAy+(2s^TA+2b^T)y+s^T(As+2b)
\\
&=
y^TAy+s^T(As+2b)=y^TAy-b^TA^{-1}b.
\end{align}
Then
$$
\int_{\mathbb{R}^n}e^{-\frac{1}{2}x^TAx-b^Tx}\, dx
=e^{\frac{1}{2}b^TA^{-1}b}\int_{\mathbb{R}^n}e^{-\frac{1}{2}y^TAy}\, dy
=
e^{\frac{1}{2}b^TA^{-1}b}\cdot\frac{(2\pi)^{n/2}}{\sqrt{\det A}}.
$$
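A hedged numeric check of both closed forms for a small example ($n=2$; the matrix and vector are arbitrary assumed values), using quadrature over a box large enough to capture essentially all the mass:

import numpy as np
from scipy.integrate import dblquad

A = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric positive-definite
b = np.array([0.3, -0.7])

def f(y, x):
    v = np.array([x, y])
    return np.exp(-0.5 * v @ A @ v - b @ v)

num, _ = dblquad(f, -10, 10, lambda x: -10, lambda x: 10)
closed = np.exp(0.5 * b @ np.linalg.solve(A, b)) * (2 * np.pi) / np.sqrt(np.linalg.det(A))
print(num, closed)   # the two values should agree to quadrature accuracy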
|
Is there a quick way to solve $3^8 \equiv x \mod 17$?
Is there a quick way to solve $3^8 \equiv x \mod 17$?
Like the above says really, is there a quick way to solve for $x$? Right now, what I started doing was $3^8 = 6561$, and then I was going to keep subtracting $17$ until I got my answer.
|
When dealing with powers, squaring is a good trick to reduce computations (your computer does this too!) What this means is:
$ \begin{array}{l l l l l}
3 & &\equiv 3 &\pmod{17}\\
3^2 &\equiv 3^2 & \equiv 9 &\pmod{17}\\
3^4 & \equiv 9^2 & \equiv 81 \equiv 13 & \pmod{17}\\
3^8 & \equiv 13^2 & \equiv 169 \equiv 16 & \pmod{17}\\
\end{array} $
Slightly irrelevant note: By Euler's theorem, we know that $3^{16} \equiv 1 \pmod{17}$. Thus this implies that $3^{8} \equiv \pm 1 \pmod{17}$. If you know more about quadratic reciprocity, read Thomas Andrew's comment below.
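The same repeated squaring is what Python's three-argument pow does, so one can confirm the table with a one-liner:

print(pow(3, 8, 17))   # 16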
|
Order of nontrivial elements is 2 implies Abelian group If the order of all nontrivial elements in a group is 2, then the group is Abelian. I know of a proof that is just from calculations (see below). I'm wondering if there is any theory or motivation behind this fact. Perhaps to do with commutators?
Proof: $a \cdot b = (b \cdot b) \cdot (a \cdot b) \cdot (a \cdot a) = b \cdot (b \cdot a) \cdot (b\cdot a) \cdot a = b \cdot a$.
|
The idea of this approach is to work with a class of very small, finite, subgroups $H$ of $G$ in which we can prove commutativity. The reason for this is to be able to use the results like Cauchy's theorem and Lagrange's theorem.
Consider the subgroup $H$ generated by two distinct, nonidentity elements $a,b$ in the given group. The group $H$ consists of strings of instances of $a$ and $b$. By induction on the length of a string, one can show that any string of length 4 or longer is equal to a string of length 3 or shorter.
Using this fact we can list the seven possible elements of $H$:
$$1,a,b,ab,ba,aba,bab.$$ By (the contrapositive of) Cauchy's Theorem, the only prime divisor of $|H|$ is 2. This implies the order of $H$ is either $1$, $2$, or $4$.
If $|H|=1$ or $2$, then either $a$ or $b$ is the identity, a contradiction.
Hence $H$ has four elements. The subgroup generated by $a$ has order 2; its index in $H$ is 2, so it is a normal subgroup. Thus, the left coset $\{b,ba\}$ is the same as the right coset $\{b,ab\}$, and as a result $ab=ba$.
|
Definition of $f+g$ and $f \cdot g$ in the context of polynomial rings? I've been asked to prove that given $f$ and $g$ are polynomials in $R[x]$, where $R$ is a commutative ring with identity,
$(f+g)(k) = f(k) + g(k)$, and $(f \cdot g)(k) = f(k) \cdot g(k)$. However, I always took these things as definition. What exactly is there to prove? How exactly are $f+g$ and $f \cdot g$ defined here?
|
One construction (I do not like using constructions as definitions) of $R[x]$ is as the set of formal sums $\sum r_i x^i$ of the symbols $1, x, x^2, x^3, ...$ with coefficients in $R$. Addition here is defined pointwise and multiplication is defined using the Cauchy product rule. For every $k \in R$ there is an evaluation map $\text{eval}_k : R[x] \to R$ which sends $\sum r_i x^i$ to $\sum r_i k^i$, and the question is asking you to show that this map is a ring homomorphism.
To emphasize that this doesn't follow immediately from the construction, note that this statement is false if $R$ is noncommutative: for example, with $f=x$ and $g=a$ a constant, $fg=ax$ in $R[x]$, so $(fg)(k)=ak$, while $f(k)g(k)=ka$.
The reason I don't want to use the word "definition" above is that "polynomial ring" should more or less mean "the ring such that the above thing is true, and if it isn't, we messed up and need a different construction."
|
Limit with parameter, $(e^{-1/x^2})/x^a$ How do I explore for which values of the parameter $a$
$$ \lim_{x\to 0} \frac{1}{x^a} e^{-1/x^2} = 0?
$$
For $a=0$ it is true, but I don't know about other values.
|
It's true for all $a$ (note that for $a\leqslant 0$ it's trivial), and you can prove it easily using L'Hopital's rule (after substituting $t=1/x$) or the Taylor series of the exponential: from $e^{1/x^2}\geqslant \frac{1}{k!\,x^{2k}}$ one gets $\left|\frac{e^{-1/x^2}}{x^a}\right|\leqslant k!\,|x|^{2k-a}\to 0$ whenever $2k>a$.
|
Need help in describing the set of complex numbers Let the set $$C=\{z\in \mathbb{C}:\sqrt{2}|z|=(i-1)z\}$$ I think $C$ is empty, because you could rewrite it in this way: $$|z|=-\frac{(1-i)z}{\sqrt{2}}$$ but I would like a second opinion.
|
$$C=\{z\in \mathbb{C}:\sqrt{2}|z|=(i-1)z\}$$
let $z=a+bi,a,b\in\mathbb R$ then $$\sqrt{2}|a+bi|=(i-1)(a+bi)$$
$$\sqrt{2}|a+bi|=ai+bi^2-a-bi$$
$$\sqrt{2}|a+bi|=ai-b-a-bi$$
$$\sqrt{2}|a+bi|=-b-a+(a-b)i$$ Since the left-hand side is real, it follows that
$a-b=0$ and $\sqrt{2}|a+bi|=-b-a$, i.e. $a=b$ and $\sqrt{2}|a+ai|=-2a\geq 0$, so $a=b\leq 0$. Hence $z=a+ai=a(1+i)$ with $a\leq 0$, and finally $$C=\{a(1+i):a\leq 0\}\neq\emptyset$$
|
A sequence in $C([-1,1])$ and $C^1([-1,1])$ with star-weak convergence w.r.t. to one space, but not the other The functionals
$$
\phi_n(x) = \int_{\frac{1}{n} \le |t| \le 1} \frac{x(t)}{t} \mathrm{d} t
$$
define a sequence of functionals on $C([-1,1])$ and $C^1([-1,1])$.
a) Show that $(\phi_n)$ converges *-weakly in $C^1([-1,1])'$.
b) Does $(\phi_n)$ converges *-weakly in $C([-1,1])'$?
For me the limit functional
$$
\int_{0 \le |t| \le 1} \frac{x(t)}{t} \mathrm{d} t
$$
is not well defined, so I have trouble evaluating the convergence condition. Do you have any hints?
|
@Davide Giraudo: Regarding your derivation, I came to another result:
\begin{align*}
\int_{1/n \le |t| \le 1} \frac{x(t)}{t} \mathrm{d} t & = \int_{-1}^{-1/n} \frac{x(t)}{t} \mathrm{d} t + \int_{1/n}^1 \frac{x(t)}{t} \mathrm{d} t \\
& = \int_{1}^{1/n} \frac{x(-t)}{t} \mathrm{d} t + \int_{1/n}^1 \frac{x(t)}{t} \mathrm{d} t \qquad (t \mapsto -t) \\
& = - \int_{1/n}^{1} \frac{x(-t)}{t} \mathrm{d} t + \int_{1/n}^1 \frac{x(t)}{t} \mathrm{d} t \\
& = \int_{1/n}^{1} \frac{x(t) - x(-t)}{t} \, \mathrm{d} t
\end{align*}
Note the signs: after the substitution the numerator is the difference $x(t)-x(-t)$, not the sum. For $x\in C^1$ this quotient stays bounded as $t\to 0$ (it tends to $2x'(0)$), which is what makes the limit exist on $C^1([-1,1])$.
|
Computing $999,999\cdot 222,222 + 333,333\cdot 333,334$ by hand. I got this question from last year's olympiad paper.
Compute $999,999\cdot 222,222 + 333,333\cdot 333,334$.
Is there an approach to this by using pen-and-paper?
EDIT Working through on paper made me figure out the answer. Posted below.
I'd now like to see other methods. Thank you.
|
$$999,999\cdot 222,222 + 333,333\cdot 333,334=333,333\cdot 666,666 + 333,333\cdot 333,334$$
$$=333,333 \cdot 1,000,000 = 333,333,000,000$$
|
How to solve $|x-5|=|2x+6|-1$? $|x-5|=|2x+6|-1$.
The answer is $0$ or $-12$, but how would I solve it by algebraically solving it as opposed to sketching a graph?
$|x-5|=|2x+6|-1\\
(|x-5|)^2=(|2x+6|-1)^2\\
...\\
9x^4+204x^3+1212x^2+720x=0?$
|
Consider different cases:
Case 1: $x>5$
In this case, both $x-5$ and $2x+6$ are positive, and you can resolve the absolute values positively. hence
$$
x-5=2x+6-1 \Rightarrow x = -10,
$$
which is not compatible with the assumption that $x>5$, hence no solution so far.
Case 2: $-3<x\leq5$
In this case, $x-5$ is negative, while $2x+6$ is still positive, so you get
$$
-(x-5)=2x+6-1\Rightarrow x=0;
$$
Since $0\in(-3,5]$, this is our first solution.
Case 3: $x\leq-3$
In this final case, the arguments of both absolute values are negative and the equation simplifies to
$$
-(x-5) = -(2x+6)-1 \Rightarrow x = -12,
$$
in agreement with your solution by inspection of the graph.
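A hedged check with SymPy (declaring $x$ real lets it resolve the absolute values piecewise, as in the case analysis above):

import sympy as sp

x = sp.Symbol('x', real=True)
print(sp.solve(sp.Eq(sp.Abs(x - 5), sp.Abs(2 * x + 6) - 1), x))   # [-12, 0]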
|
Calculate $\underset{x\rightarrow7}{\lim}\frac{\sqrt{x+2}-\sqrt[3]{x+20}}{\sqrt[4]{x+9}-2}$ Please help me calculate this:
$$\underset{x\rightarrow7}{\lim}\frac{\sqrt{x+2}-\sqrt[3]{x+20}}{\sqrt[4]{x+9}-2}$$
Here I've tried multiplying by $\sqrt[4]{x+9}+2$ and a few other methods.
Thanks in advance for solution / hints using simple methods.
Edit
Please don't use l'Hospital's rule. We are before derivatives and don't know how to use it correctly yet. Thanks!
|
$\frac{112}{27}$, which is roughly $4.14815$.
The derivative of the top at $x=7$ is $\frac{7}{54}$
The derivative of the bottom at $x=7$ is $\frac{1}{32}$
$\frac{(\frac{7}{54})}{(\frac{1}{32})}$ is $\frac{112}{27}$
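Since the question asks for a method without derivatives, here is a hedged sketch using the same conjugate identities as in the series question earlier on this page: each radical difference is rewritten so that the common factor $x-7$ cancels.
$$\sqrt{x+2}-3=\frac{x-7}{\sqrt{x+2}+3},\qquad \sqrt[3]{x+20}-3=\frac{x-7}{\sqrt[3]{(x+20)^2}+3\sqrt[3]{x+20}+9},$$
$$\sqrt[4]{x+9}-2=\frac{x-7}{(x+9)^{3/4}+2\sqrt{x+9}+4\sqrt[4]{x+9}+8}.$$
Writing the numerator as $(\sqrt{x+2}-3)-(\sqrt[3]{x+20}-3)$, cancelling $x-7$ and letting $x\to 7$ gives
$$\left(\frac{1}{6}-\frac{1}{27}\right)\cdot 32=\frac{7}{54}\cdot 32=\frac{112}{27}.$$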
|
Behavior of differential equation as argument goes to zero I'm trying to solve a coupled set of ODEs, but before attempting the full numerical solution, I would like to get an idea of what the solution looks like around the origin.
The equation at hand is:
$$ y''_l - (f'+g')y'_l + \biggr[ \frac{2-l^2-l}{x^2}e^{2f} - \frac{2}{x}(f'+g') - \frac{2}{x^2} \biggr]y_l = \frac{4}{x}(f'+g')z_l$$
$y,f,g,z$ are all functions of $x$, which has the domain ($0, X_0$). If I specifically take the $l=2$ case I of course have
$$ y''_2 - (f'+g')y'_2 + \biggr[ \frac{-4}{x^2}e^{2f} - \frac{2}{x}(f'+g') - \frac{2}{x^2} \biggr]y_2 = \frac{4}{x}(f'+g')z_2$$
To avoid issues with singularities, I multiply both sides of the equation by $x^2$, to get, lets call it EQ1.
$$ x^2 y''_2 - x^2(f'+g')y'_2 + \biggr[ -4e^{2f} - x(f'+g') - 2 \biggr]y_2 = 4 x(f'+g')z_2$$
Now suppose that by some magic I know that as $x \rightarrow 0$, $y_l = x^{l+1}$, so that $y_2 = x^3$. How would I determine what the function $z_2$ is around the origin?
Actually I already know the answer: $z_2$ should also go as $x^3$, but I have not been able to show it.
Any help is much appreciated.
My attempt at a solution:
I have tried to keep things general so I expand $f[x] = 1 + f_1 x + f_2 x^2 +f_3 x^3$, and similarly $g[x] = 1 + g_1 x + g_2 x^2 +g_3 x^3$ where $f_1,f_2,f_3, g_1,g_2,g_3$ are constants. I have kept up to third order because I want to substitute $y_2 = x^3$, so I figured I should take the other functions to that order as well.
As for the $e^{2f}$ term, I use a truncated Taylor series for the exponential, $1+x+\frac{x^2}{2!} + \frac{x^3}{3!}$, into which I substitute my expansion of $f[x]$. After expanding everything out and eliminating terms of higher order than $x^3$ from the left hand side of EQ1, I get something like $\text{constant}\cdot x^3$, while the right hand side is a third order polynomial times $z_2$.
I really just don't know how to proceed. Should I have left higher order terms in the RHS, so that I could divide by a third order polynomial and still end up with $z_2 \propto x^3$? I don't know how this would work because on the RHS I had terms as high as $x^{12}$.
|
Your equation EQ1 forces $z_2$ to have a quadratic term. Indeed, the left-hand side of EQ1 is
$$4(1-e^2)x^3-4(f_1+2e^2f_1+g_1)x^4+O(x^5) \tag{1}$$
This is equated to $4x(f'+g')z_2$. Clearly, $z_2$ must include the term $\frac{1-e^2}{f_1+g_1}x^2$.
I used Maple to check this, including many more terms than was necessary.
n:=7;                                                   # keep more orders than strictly needed
F:=1+sum(f[k]*x^k,k=1..n); G:=1+sum(g[k]*x^k,k=1..n);   # general expansions of f and g
y:=x^3;                                                 # assumed behaviour of y_2 near 0
# z_2 solved from EQ1:
z:=(x^2*diff(y,x$2)-x^2*(diff(F,x)+diff(G,x))*diff(y,x)-(4*exp(2*F)+x*diff(F+G,x)+2)*y)/(4*x*diff(F+G,x));
series(z,x=0,4);
is the series expansion for $z_2$.
By the way, you can number displayed equations with commands like \tag{1}, as I did above.
|
if $f$ is entire then show that $f(z)f(1/z)$ is also entire This is again for an old exam.
Let $f$ be an entire function; show that $f(z)f(1/z)$ is entire.
How do I go about showing the above.
Do I use the definition of analyticity?
Call $g(z) = f(z)f(1/z)$ and show that it is complex differentiable everywhere?
Edit: Well the original question was.
Let $f$ be entire and suppose $f(z)f(1/z)$ is bounded on $\mathbb C$, then
$f(z)=az^n$ for some $a\in \mathbb C$.
I was trying to show that $f(z)f(1/z)$ is entire and then use Liouville's theorem.
:). I hope this makes sense.
|
As Pavel already mentioned, this is not true. In fact, the only entire functions that satisfy the stated conclusion are $f(z) = cz^n$, where $c\neq 0$.
First of all, $f$ must be a polynomial, otherwise $f(1/z)$ has an essential singularity at $z=0$. If $\deg f = n$, then $f(1/z)$ has a pole of order $n$ at the origin, so to cancel this, $f$ itself must have a zero of order $n$ at $z=0$, which forces $f(z)=cz^n$.
Edit: Incidentally, the above should help you with the edited question too.
|
Inequality problem algebra? How would I solve the following inequality problem?
$s+1<2s+1<4$
My book answer says $s\in (0, \frac32)$ as the final answer but I cannot seem to get that answer.
|
We have $$s+1<2s+1<4.$$ This means $2s+1<4$, and in particular, $2s<3$. Dividing by the $2$ gives $s<3/2$. Now, observing on the other hand that we have $s+1<2s+1$, we subtract $s+1$ from both sides and have $0<s$. This gives us a bound on both sides of $s$, i.e., $$0<s<\frac{3}{2}$$ as desired.
|
How to solve simple systems of differential equations Say we are given a system of differential equations
$$
\left[ \begin{array}{c} x' \\ y' \end{array} \right] = A\begin{bmatrix} x \\ y \end{bmatrix}
$$
Where $A$ is a $2\times 2$ matrix.
How can I in general solve the system, and secondly sketch a solution $\left(x(t), y(t) \right)$, in the $(x,y)$-plane?
For example, let's say
$$\left[ \begin{array}{c} x' \\ y' \end{array} \right] = \begin{bmatrix} 2 & -4 \\ -1 & 2 \end{bmatrix} \left[ \begin{array}{c} x \\ y \end{array} \right]$$
Secondly, I would like to know how you can draw a phase plane. I can imagine something like setting $c_1 = 0$ or $c_2=0$, but I'm not sure how to proceed.
|
If you don't want to change variables, there is a simple way to calculate $e^A$ (in all cases).
Let me explain. Let $A$ be a $2\times 2$ matrix and $p(\lambda)=\lambda^2-\operatorname{tr}(A)\lambda+\det(A)$ the characteristic polynomial. We have 2 cases:
$1$) $p$ has two distinct roots
$2$) $p$ has one root with multiplicity 2
The case 2 is more simple:
In this case we have $p(\lambda)=(\lambda-a)^2$. By Cayley-Hamilton it follows that $p(A)=(A-aI)^2=0$. Now expand $e^x$ in a Taylor series around $a$:
$$e^x=e^a+e^a(x-a)+e^a\frac{(x-a)^2}{2!}+...$$
Therefore $$e^A=e^aI+e^a(A-aI)$$
Note that $(A-aI)^2=0$ $\implies$ $(A-aI)^n=0$ for all $n>2$
Case $1$:
Let $A$ be your example. The eigenvalues are $0$ and $4$. Now we choose a polynomial $f$ of degree $\le1$ such that $e^0=f(0)$ and $e^4=f(4)$ (there is exactly one). In other words, what we want is a function $f(x)=cx+d$ such that
$$1=d$$
$$e^4=c4+d$$
Solving this system we have $c=\dfrac{e^4-1}{4}$ and $d=1$.
I say that
$$e^A=f(A)=cA+dI=\dfrac{e^4-1}{4}A+I$$
In general if $\lambda_1$ and $\lambda_2$ are the distinct eigenvalues, and $f(x)=cx+d$ satisfies $f(\lambda_1)=e^{\lambda_1}$ and $f(\lambda_2)=e^{\lambda_2}$, then
$$e^A=f(A)$$
If you are interested, I can explain more (it is not hard to see why this is true).
Now I will solve your equation using above.
What we need is $e^{tA}$
The eigenvalues of $tA$ are $0$ and $4t$.
Taking $f(x)=cx+d$ with $f(0)=1$ and $f(4t)=e^{4t}$ gives $c=\dfrac{e^{4t}-1}{4t}$, hence $e^{tA}=c\,(tA)+I=\dfrac{e^{4t}-1}{4}A+I$ (a formula that also holds at $t=0$).
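A hedged numeric confirmation of this closed form with SciPy (the value $t=0.7$ is an arbitrary assumption):

import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, -4.0], [-1.0, 2.0]])
t = 0.7
lhs = expm(t * A)
rhs = (np.exp(4 * t) - 1) / 4 * A + np.eye(2)
print(np.allclose(lhs, rhs))   # True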
|
Real tree and hyperbolicity I seek a proof of the following result due to Tits:
Theorem: A path-connected $0$-hyperbolic metric space is a real tree.
Do you know any proof or reference?
|
I finally found the result as Théorème 4.1 in Coornaert, Delzant and Papadopoulos' book Géométrie et théorie des groupes, les groupes hyperboliques de Gromov, where path-connected is replaced with geodesic; and in a document written by Steven N. Evans: Probability and Real Trees (theorem 3.40), where path-connected is replaced with connected.
|
Common tangent to two circles with Ruler and Compass Given two circles (centers are given) -- one is not contained within the other, the two do not intersect -- how to construct a line that is tangent to both of them? There are four such lines.
|
[I will assume you know how to do basic constructions, and not explain (say) how to draw perpendicular from a point to a line.]
If you're not given the center of the circles, draw 2 chords and take their perpendicular bisector to find the centers $O_1, O_2$.
Draw the line connecting the centers. Through each center, draw the perpendicular to $O_1O_2$, giving you the diameters of the circles that are parallel to each other. Connect up the endpoints of these diameters, and find the point of intersection. There are 2 ways to connect them up, 1 gives you the exterior center of expansion (homothety), the other gives you the interior center of expansion (homothety).
Each tangent must pass through one of these centers of expansion. From each center of expansion, draw the two tangents to one circle. Extended, these lines are also tangential to the other circle.
|
Rayleigh-Ritz method for an extremum problem I am trying to use the Rayleigh-Ritz method to calculate an approximate solution to the extremum problem with the following functional:
$$ L[u]=\int\int_D (u_x^2+u_y^2+u^2-2xyu)\,dx\,dy, $$
$D$ is the unit square i.e. $0 \leq x \leq 1, 0 \leq y \leq 1.$
Also $u=0$ on the boundary of $D$.
I have chosen to use the trial function:
$$ \phi(x,y)=cxy(1-x)(1-y) $$
Where $c$ is a constant that I need to find.
I am familiar with using the Rayleigh-Ritz method most of the time; however, I am not sure about this question. Is it possible to convert the problem to a Sturm-Liouville ratio type?
Thanks for your help.
|
Your integral is in the form of
$$L(x,y,u)=\int\int_D (u_x^2+u_y^2+u^2-2xyu)\,dx\,dy$$
$$0 \leq x \leq 1, 0 \leq y \leq 1$$
Due to the homogeneous boundary conditions it is possible to use your approximation function
$$u(x,y)=cxy(1-x)(1-y)$$
When substituted into integral equation
$$L(x,y,u)=\int_0^1\int_0^1 (u_x^2+u_y^2+u^2-2xyu)\,dx\,dy=\frac{7}{300}c^2-\frac{1}{72}c$$
and imposing the first-derivative condition and solving for $c$
$$\frac{d\,L}{d\,c}=\frac{7}{150}c-\frac{1}{72}=0\Rightarrow c=\frac{25}{84}$$
and
$$u(x,y)=\frac{25}{84}xy(1-x)(1-y)$$
Since the second derivative is positive it is a minimum.
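For the record, a hedged SymPy sketch reproducing the computation above:

import sympy as sp

x, y, c = sp.symbols('x y c')
u = c * x * y * (1 - x) * (1 - y)
integrand = sp.diff(u, x)**2 + sp.diff(u, y)**2 + u**2 - 2*x*y*u
L = sp.integrate(integrand, (x, 0, 1), (y, 0, 1))
print(sp.simplify(L))               # 7*c**2/300 - c/72
print(sp.solve(sp.diff(L, c), c))   # [25/84]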
|
$x^4 + y^4 = z^2$ $x, y, z \in \mathbb{N}$,
$\gcd(x, y) = 1$
prove that $x^4 + y^4 = z^2$ has no solutions.
It is true even without $\gcd(x, y) = 1$, but it is easy to see that $\gcd(x, y)$ must be $1$
|
This has been completely revised to match the intended question. The proof is by showing that there is no minimal positive solution, i.e., by infinite descent. It’s from some old notes; I’ve no idea where I cribbed it from in the first place.
Suppose that $x^4+y^4=z^2$, where $z$ is the smallest positive integer for which there is a solution in positive integers. Then $(x^2,y^2,z)$ is a primitive Pythagorean triple, so there are relatively prime integers $m,n$ with $m>n$ such that $x^2=m^2-n^2,y^2=2mn$, and $z=m^2+n^2$.
Since $2mn=y^2$, one of $m$ and $n$ is an odd square, and the other is twice a square. In particular, one is odd, and one is even. Now $x^2+n^2=m^2$, and $\gcd(x,n)=1$ (since $m$ and $n$ are relatively prime), so $(x,n,m)$ is a primitive Pythagorean triple, and it must be $n$ that is even: there must be integers $a$ and $b$ such that $a>b$, $a$ and $b$ are relatively prime, $x=a^2-b^2$, $n=2ab$, and $m=a^2+b^2$. It must be $m$ that is the odd square, so there are integers $r$ and $s$ such that $m=r^2$ and $n=2s^2$.
Now $2s^2=n=2ab$, so $s^2=ab$, and we must have $a=c^2$ and $b=d^2$ for some integers $c$ and $d$, since $\gcd(a,b)=1$. The equation $m=a^2+b^2$ can then be written $r^2=c^4+d^4$. Since $r\le r^2=m\le m^2<m^2+n^2=z$, this is a solution with a strictly smaller third coordinate, contradicting the minimality of $z$.
|
Dirac Delta Function of a Function I'm trying to show that
$$\delta\big(f(x)\big) = \sum_{i}\frac{\delta(x-a_{i})}{\left|{\frac{df}{dx}(a_{i})}\right|}$$
Where $a_{i}$ are the roots of the function $f(x)$. I've tried to proceed by using a dummy function $g(x)$ and carrying out:
$$\int_{-\infty}^{\infty}dx\,\delta\big(f(x)\big)g(x)$$
Then making the coordinate substitution $u$ = $f(x)$ and integrating over $u$. This seems to be on the right track, but I'm unsure where the absolute value comes in in the denominator, and also why it becomes a sum.
$$\int_{-\infty}^{\infty}\frac{du}{\frac{df}{dx}}\delta(u)g\big(f^{-1}(u)\big)
= \frac{g\big(f^{-1}(0)\big)}{\frac{df}{dx}\big(f^{-1}(0)\big)}$$
Can any one shed some light? Wikipedia just states the formula and doesn't actually show where it comes from.
|
Split the integral into regions around $a_i$, the zeros of $f$ (as integration of a delta function only gives nonzero results in regions where its arg is zero)
$$
\int_{-\infty}^{\infty}\delta\big(f(x)\big)g(x)\,\mathrm{d}x = \sum_{i}\int_{a_i-\epsilon}^{a_i+\epsilon}\delta(f(x))g(x)\,\mathrm{d}x
$$
write out the Taylor expansion of $f$ for $x$ near some $a_i$ (ie. different for each term in the summation)
$$
f(a_i+x) =f(a_i) + f'(a_i)x + \mathcal{O}(x^2) = f'(a_i)x + \mathcal{O}(x^2)
$$
Now, for each term, you can show that the following hold:
$$
\int_{-\infty}^\infty\delta(kx)g(x)\,\mathrm{d}x = \frac{1}{|k|}g(0) = \int_{-\infty}^\infty\frac{1}{|k|}\delta(x)g(x)\,\mathrm{d}x
$$
(making a transformation $y=kx$, and looking at $k<0,k>0$ separately; note: the trick is in the limits of integration)
and
$$
\int_{-\infty}^\infty\delta(x+\mathcal{O}(x^2))g(x)\,\mathrm{d}x = g(0) = \int_{-\infty}^\infty\delta(x)g(x)\,\mathrm{d}x
$$
(making use of the fact that we can take an interval around 0 as small as we like)
Combine these with shifting to each of the desired roots, and you can obtain the equality you're looking for.
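Incidentally, SymPy implements exactly this identity; a hedged illustration:

import sympy as sp

x = sp.symbols('x', real=True)
expr = sp.DiracDelta(x**2 - 1)      # roots at x = ±1, |f'(±1)| = 2
print(expr.expand(diracdelta=True, wrt=x))
# DiracDelta(x - 1)/2 + DiracDelta(x + 1)/2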
|
Is my understanding of product sigma algebra (or topology) correct? Let $(E_i, \mathcal{B}_i)$ be measurable (or topological) spaces, where $i \in I$ is an index set, possibly infinite. Their product sigma algebra (or product topology) $\mathcal{B}$ on $E= \prod_{i \in I} E_i$ is defined to be the coarsest one that can make the projections $\pi_i: E \to E_i$ measurable (or continuous).
Many sources said the following is an equivalent definition:
$$\mathcal{B}=\sigma \text{ or }\tau\left(\left\{\prod_{i \in I}B_i : B_i \in \mathcal{B}_i,\ B_i=E_i \text{ for all but a finite number of } i \in I\right\}\right),$$
where $\sigma \text{ and }\tau$ mean taking the smallest sigma algebra and taking the smallest topology. Honestly I don't quite understand why this is the coarsest sigma algebra (or topology) that makes the projections measurable (or continuous).
Following is what I think is the coarsest one that can make the projections measurable
$$\mathcal{B}=\sigma \text{ or }\tau\left(\left\{\prod_{i \in I}B_i : B_i \in \mathcal{B}_i,\ B_i=E_i \text{ for all but at most one } i \in I\right\}\right),$$ because $\pi^{-1}_k (B_k) = \prod_{i \in I}A_i$ where $A_k=B_k$ and $A_i=E_i$ for all $i \neq k$. So I was wondering if the two equations for $\mathcal{B}$ are the same?
Thanks and regards!
|
For the comments: I retract my error for the definition of measurability. Sorry.
For the two things generating the same sigma algebra (or topology, which is similar):
We use $\langle - \rangle$ to denote the smallest sigma algebra containing the thing in the middle. We want to show that
$$(1) \hspace{5mm}\langle \prod_{i} B_i \rangle$$
where $B_i \in \mathcal{B}_i$, and $B_i = E_i$ for all but finitely many $i$s, is the same as
$$(2) \hspace{5mm}\langle \prod_{i} B_i \rangle$$
where $B_i \in \mathcal{B}_i$, and $B_i = E_i$ for all but one $i$.
It is clear that $(2) \subset (1)$, since the generating collection in (2) is a subset of that of (1).
On the other hand, $(2)$ contains $\prod_{i} B_i$ where $B_i \in \mathcal{B}_i$, and $B_i = E_i$ for all but finitely many $i$s, since it is the (finite) intersection of the generators. For example,
$$B_1 \times B_2 = (B_1 \times E_2) \cap (E_1 \times B_2)$$
So $(2) \supset (1)$. Therefore $(2) = (1)$.
|
Does a graph with $0$ vertices count as simple? Does a graph with $0$ vertices count as a simple graph?
Or does a simple graph need to have a non-empty vertex set?
Thanks!
|
It is typical to refer to a graph with no vertices as the null graph. Since it has no loops and no parallel edges (indeed, it has no edges at all), it is simple.
That said, if your present work finds you writing "Such and such is true for all simple graphs except the null graph", then it could be a good idea to announce at the beginning of your document that you will not consider the null graph to be simple.
|
Minimal polynominal: geometric meaning I am currently studying Chapter 6 of Hoffman & Kunze's Linear Algebra which deals with characteristic values and triangulation and diagonalization theorems.
The chapter makes heavy use of the concept of the minimal polynomial which it defines as the monic polynomial of the smallest degree that annihilates a linear transformation.
I am currently finding the proofs in the book that use this definition of the minimal polynomial to be very opaque: just chains of calculations with polynomials that turn out to give the right answer in the end.
I was therefore wondering if there is a more geometric view of the minimal polynomial? What information about the linear transformation is it carrying and why? I guess the answer would be simpler in the case of an algebraically closed field but is there an answer which also works for a general field?
|
Consider the following matrices:
$$
A = \left(\begin{array}{cc}2&0\\0&2\end{array}\right) \ \ \text{ and } \ \
B = \left(\begin{array}{cc}2&1\\0&2\end{array}\right).
$$
The first matrix has minimal polynomial $X - 2$ and the second has minimal polynomial $(X-2)^2$. If we subtract $2I_2$ from these matrices then we get
$$
\left(\begin{array}{cc}0&0\\0&0\end{array}\right) \ \ \text{ and } \ \
\left(\begin{array}{cc}0&1\\0&0\end{array}\right),
$$
where the first has minimal polynomial $X$ and the second has minimal polynomial $X^2$.
The different exponents here reflect the fact that a matrix can have a power that is $O$ without being $O$ itself (this doesn't happen with ordinary numbers, like in ${\mathbf R}$ or ${\mathbf C}$). A matrix has a power equal to $O$ precisely when its minimal polynomial is some power of $X$, and the exponent you need on $X$ to achieve that can vary. As another example, compare
$$
M =
\left(\begin{array}{ccc}
0 & 1 & 1 \\
0 & 0 & 1 \\
0 & 0 & 0
\end{array}
\right)
\ \ \text{ and } \ \
N =
\left(\begin{array}{ccc}
0 & 0 & 1 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{array}
\right).
$$
These are not the zero matrix, but $N^2 = O$ while $M^3 = O$ and $M^2 \not= O$. So $M$ has minimal polynomial $X^3$ and $N$ has minimal polynomial $X^2$.
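If you want to verify such computations quickly, here is a minimal sketch assuming NumPy is available:

    import numpy as np

    M = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [0, 0, 0]])
    N = np.array([[0, 0, 1],
                  [0, 0, 0],
                  [0, 0, 0]])

    print(np.linalg.matrix_power(M, 2))  # nonzero, so X^2 does not annihilate M
    print(np.linalg.matrix_power(M, 3))  # zero matrix: M^3 = O
    print(np.linalg.matrix_power(N, 2))  # zero matrix: N^2 = O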
To describe a (monic) polynomial we could provide its roots and the multiplicities for those roots. For a square matrix, the roots of its minimal polynomial are easy to connect with the matrix: they are the eigenvalues of the matrix. The subtle part is their multiplicities, which are more algebraic than geometric.
It might be natural to hope that the multiplicity of an eigenvalue $\lambda$ as a root of the minimal polynomial is the dimension of the $\lambda$-eigenspace, but this is false in general, as we can see with the matrices $A$ and $B$ above (e.g., $B$ has minimal polynomial $(X-2)^2$ but its 2-eigenspace is 1-dimensional). In the case when the matrix has all distinct eigenvalues, the minimal polynomial is the characteristic polynomial, so you could think of the distinction between the minimal and characteristic polynomials as reflecting the presence of repeated eigenvalues.
|
Integrate $\int_{C} \frac{1}{r-\bar{z}}dz$ - conflicting answers In a homework exercise, we're asked to integrate $\int_{C} \frac{1}{k-\bar{z}}dz$ where $C$ is some circle that doesn't pass through $k$.
I tried solving this question through two different approaches, but have arrived at different answers.
Idea: use the fact that on $C$, $z\bar{z}=r^2$, where $r$ is the radius of $C$ (this assumes $C$ is centered at the origin), to get
$$\int_{C} \frac{1}{k-\frac{r^2}{z}}dz$$
We then apply Cauchy to get that the answer is $2\pi i\, r^2/k^2$ when $C$ contains $r^2/k$, and $0$ otherwise.
Another idea: Intuitively, since C is a circle we get ${\int_{C} \frac{1}{k-\bar{z}}dz} = \int_{C} \frac{1}{k-z}dz$ (since $\bar{z}$ belongs to C iff $z$ belongs to C) and we can then use Cauchy's theorem and Cauchy's formula (depending on what C contains) to arrive at a different answer.
Which answer (if any) is correct?
|
The original function is not holomorphic since $\frac{d}{d\overline{z}}\frac{1}{k-\overline{z}}=\frac{-1}{(k-\overline{z})^{2}}$. So you cannot apply Cauchy's integral formula.
Let $C$ be centered at $c$ with radius $r$; then we have $z=re^{i\theta}+c=c+r\cos(\theta)+ri\sin(\theta)$, and its conjugate becomes $c+r\cos(\theta)-ri\sin(\theta)=c+re^{-i\theta}$. Therefore we have $k-\overline{z}=k-c-re^{-i\theta}$. To normalize it we multiply by $(k-c-r\cos(\theta))-ri\sin(\theta)=k-c-re^{i\theta}$. The result is $(k-c-r\cos(\theta))^{2}+r^{2}\sin^{2}\theta$. Rearranging gives $$k^{2}+c^{2}+r^{2}-2kc-2(k-c)r\cos(\theta)=A-2Br\cos(\theta),\qquad A=k^{2}+c^{2}+r^{2}-2kc,\quad B=k-c.$$
So we have $$\int_{C}\frac{1}{k-\overline{z}}\,dz=\int^{2\pi}_{0}\frac{k-c-re^{i\theta}}{(k-c-re^{-i\theta})(k-c-re^{i\theta})}\,d\!\left(c+re^{i\theta}\right)=ri\int^{2\pi}_{0}\frac{Be^{i\theta}-re^{2i\theta}}{A-2Br\cos(\theta)}\,d\theta$$
So it suffices to integrate $$\frac{e^{i\theta}}{1-C\cos(\theta)}\,d\theta,\qquad C\text{ a constant},$$ since the above integral is a linear combination of integrals of this type. But this should be possible in general, as we can make trigonometric substitutions.
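As a sanity check on the two conflicting approaches, one can evaluate the integral numerically; a minimal sketch assuming NumPy, with the arbitrary test values $k=2$, $r=1$ and the circle centered at the origin (where the first idea's identity $z\overline{z}=r^2$ actually holds):

    import numpy as np

    k, r = 2.0, 1.0                       # circle |z| = r centered at 0; r^2/k = 0.5 lies inside
    theta = np.linspace(0.0, 2.0*np.pi, 200001)
    z = r*np.exp(1j*theta)
    integrand = (1.0/(k - np.conj(z))) * (1j*r*np.exp(1j*theta))  # f(z(t)) z'(t)
    integral = np.sum(integrand[:-1]) * (theta[1] - theta[0])     # Riemann sum over one period

    print(integral)                 # ~ 1.5708j
    print(2j*np.pi*r**2/k**2)       # the first idea predicts 2*pi*i*r^2/k^2 = pi*i/2 ~ 1.5708j
    # the second idea would predict 0 here (k = 2 lies outside C), so it is the wrong one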
|
Sum of the form $r+r^2+r^4+\dots+r^{2^k} = \sum_{i=1}^k r^{2^i}$ I am wondering if there exists any formula for the following power series :
$$S = r + r^2 + r^4 + r^8 + r^{16} + r^{32} + ...... + r^{2^k}$$
Is there any way to calculate the sum of above series (if $k$ is given) ?
|
I haven’t been able to obtain a closed form expression for the sum, but maybe you or someone else can do something with what follows.
In Blackburn's paper (reference below) there are some manipulations involving the geometric series
$$1 \; + \; r^{2^n} \; + \; r^{2 \cdot 2^n} \; + \; r^{3 \cdot 2^n} \; + \; \ldots \; + \; r^{(m-1) \cdot 2^n}$$
that might give some useful ideas to someone. However, thus far I haven't found the identities or the manipulations in Blackburn's paper to be of any help.
Charles Blackburn, Analytical theorems relating to geometrical series, Philosophical Magazine (3) 6 #33 (March 1835), 196-201.
As for your series, I tried exploiting the factorization of $r^m - 1$ as the product of $r-1$ and $1 + r + r^2 + \ldots + r^{m-1}:$
First, replace each of the terms $r^m$ with $\left(r^{m} - 1 \right) + 1.$
$$S \;\; = \;\; \left(r - 1 \right) + 1 + \left(r^2 - 1 \right) + 1 + \left(r^4 - 1 \right) + 1 + \left(r^8 - 1 \right) + 1 + \ldots + \left(r^{2^k} - 1 \right) + 1$$
Next, replace the $(k+1)$-many additions of $1$ with a single addition of $k+1.$
$$S \;\; = \;\; (k+1) + \left(r - 1 \right) + \left(r^2 - 1 \right) + \left(r^4 - 1 \right) + \left(r^8 - 1 \right) + \ldots + \left(r^{2^k} - 1 \right)$$
Now use the fact that for each $m$ we have $r^m - 1 \; = \; \left(r-1\right) \left(1 + r + r^2 + \ldots + r^{m-1}\right).$
$$S \;\; = \;\; (k+1) + \left(r - 1 \right)\left[1 + \left(1 + r \right) + \left(1 + r + r^2 + r^3 \right) + \ldots + \left(1 + r + \ldots + r^{2^{k} - 1} \right) \right]$$
At this point, let's focus on the expression in square brackets. This expression is equal to
$$\left(k+1\right) \cdot 1 + kr + \left(k-1\right)r^2 + \left(k-1\right)r^3 + \left(k-2\right)r^4 + \ldots + \left(k-2\right)r^7 + \left(k-3\right)r^8 + \ldots + \left(k-3\right)r^{15} + \ldots + \left(k-n\right)r^{2^n} + \dots + \left(k-n\right)r^{2^{n+1}-1} + \ldots + \left(1\right) r^{2^{k-1}} + \ldots + \left(1\right) r^{2^{k} - 1}$$
I'm now at a loss. We can slightly compress this by factoring out common factors for groups of terms such as $(k-2)r^4 + \ldots + (k-2)r^7$ to get $(k-2)r^4\left(1 + r + r^2 + r^3\right).$ Doing this gives the following for the expression in square brackets.
$$\left(k+1\right) + \left(k\right)r + \left(k-1\right)r^2 \left(1+r\right) + \left(k-2\right)r^4\left(1+r+r^2+r^3\right) + \ldots + \;\; \left(1\right)r^{2^{k-1}} \left(1 + r + \ldots + r^{2^{k-1} -1} \right)$$
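Even without a closed form, note that the sum is cheap to evaluate for a given $k$: repeated squaring produces each term from the previous one. A minimal sketch in Python:

    def s(r, k):
        """Return r + r^2 + r^4 + ... + r^(2^k) by repeated squaring."""
        total, term = 0, r
        for _ in range(k + 1):
            total += term
            term = term * term          # r^(2^i) -> r^(2^(i+1))
        return total

    print(s(0.5, 4))                            # 0.5 + 0.25 + 0.0625 + ...
    print(sum(0.5**(2**i) for i in range(5)))   # same value, direct check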
|
Matrices with columns which are eigenvectors In this question, the OP asks about finding the matrix exponential of the matrix $$M=\begin{bmatrix}
1 & 1 & 1\\
1 & 1 & 1\\
1 & 1 & 1
\end{bmatrix}.$$ It works out quite nicely because $M^2 = 3M$ so $M^n = 3^{n-1}M$. The reason this occurs is that the vector $$v = \begin{bmatrix} 1\\ 1\\ 1 \end{bmatrix}$$ is an eigenvector for $M$ (with eigenvalue $3$). The same is true of the $n\times n$ matrix consisting of all ones and the corresponding vector. With this in mind, I ask the following:
*
*Can we find a standard form for an $n\times n$ matrix with the property that each of its columns are eigenvectors? (Added later: This may be easier to answer if we allow columns to be zero vectors. Thanks Adam W for the comment.)
*What about the case when we require all the eigenvectors to correspond to the same eigenvalue?
The matrices in the second question are precisely the ones for which the calculation of the matrix exponential would be analogous to that for $M$ as above.
Added later: In Hurkyl's answer, he/she shows that an invertible matrix satisfying 1 is diagonal. For the case $n=2$, it is fairly easy to see that any non-invertible matrix satisfies 1 (which is generalised by the situation in Seirios's answer).
However, for $n > 2$, not every non-invertible matrix satisfies this property (as one may expect). For example,
$$M = \begin{bmatrix}
1 & 0 & 0\\
0 & 0 & 0\\
1 & 0 & 1
\end{bmatrix}.$$
|
Another partial answer: One can notice that matrices of rank 1 are such examples.
Indeed, if $M$ is of rank 1, its columns are of the form $a_1C,a_2C,...,a_nC$ for a given vector $C \in \mathbb{R}^n$ ; if $L=(a_1 \ a_2 \ ... \ a_n)$, then $M=CL$.
So $MC=pC$ with the inner product $p=LC$. Also, $M^n=p^{n-1}M$.
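A quick numerical illustration of this, a sketch assuming NumPy (the vectors $C$ and $L$ below are arbitrary):

    import numpy as np

    C = np.array([[1.0], [2.0], [3.0]])   # column vector C
    L = np.array([[4.0, 5.0, 6.0]])       # row vector L
    M = C @ L                             # rank-1 matrix M = CL
    p = (L @ C).item()                    # p = LC = 32

    print(np.allclose(M @ C, p*C))        # True: MC = pC, so C is an eigenvector
    print(np.allclose(M @ M, p*M))        # True: M^2 = pM, hence M^n = p^(n-1) M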
|
Free group n contains subgroup of index 2 My problem is to show that any free group $F_{n}$ has a normal subgroup of index 2. I know that any subgroup of index 2 is normal. But how do I find a subgroup of index 2?
The subgroup needs to have 2 cosets. My first guess is to construct a subgroup $H<G$ as $H = \langle x_{1}^{2}, x_{2}^{2}, \ldots , x_{n}^{2} \rangle$ but this wouldn't be correct because $x_{1}H \ne x_{2}H \ne H$.
What is a way to construct such a subgroup?
|
For any subgroup $H$ of $G$ and elements $a$ and $b$ of $G$ the following statements hold.
*
*If $a \in H$ and $b \in H$, then $ab \in H$
*If $a \in H$ and $b \not\in H$, then $ab \not\in H$
*If $a \not\in H$ and $b \in H$, then $ab \not\in H$
Hence it is natural to ask when $a \not\in H$ and $b \not\in H$ implies $ab \in H$, i.e. when a subgroup is a "parity subgroup", as in $G = \mathbb{Z}$ and $H = 2\mathbb{Z}$ or in $G = S_n$ and $H = A_n$. Suppose that $H$ is a proper subgroup since the case $H = G$ is not interesting. Then the following statements are equivalent:
*
*$[G:H] = 2$
*For all elements $a \not\in H$ and $b \not\in H$ of $G$, we have $ab \in H$.
*There exists a homomorphism $\phi: G \rightarrow \{1, -1\}$ with $\operatorname{Ker}(\phi) = H$.
So these parity subgroups are precisely all the subgroups of index $2$. I think you could use (2) in PatrickR's answer and (3) in DonAntonio's answer. Of course, they are both good and complete answers in their own, this is just one way I like to think about subgroups of index $2$.
|
how to show $f(1/n)$ is convergent? Let $f:(0,\infty)\rightarrow \mathbb{R}$ be differentiable, $\lvert f'(x)\rvert<1 \forall x$. We need to show that
$a_n=f(1/n)$ is convergent. Well, it just converges to $f(0)$ as $\lim_{n\rightarrow \infty}f(1/n)=f(0)$ am I right? But $f$ is not defined at $0$ and I am not able to apply the fact $\lvert f'\rvert < 1$. Please give me hints.
|
The condition $|f'(x)|<1$ implies, by the mean value theorem, that $f$ is Lipschitz:
$$|f(x)-f(y)|\le |x-y| $$
Then $$|f(\frac{1}{n})-f(\frac{1}{m})|\le|\frac{1}{n}-\frac{1}{m}|$$
Since $x_n=\frac{1}{n}$ is Cauchy, $f(\frac{1}{n})$ is also Cauchy, and Cauchy sequences in $\mathbb{R}$ converge.
|
Nilpotent Lie Group that is not simply connect nor product of Lie Groups? I have been trying to find for days a non-abelian nilpotent Lie Group that is not simply connected nor product of Lie Groups, but haven't been able to succeed.
Is there an example of this, or hints to this group, or is it fundamentally impossible?
Cheers and thanks.
|
The typical answer is a sort of Heisenberg group, presented as a quotient (by a normal subgroup)
$$
H \;=\; \{\pmatrix{1 & a & b \cr 0 & 1 & c\cr 0 & 0& 1}:a,b,c\in \mathbb R\}
\;\bigg/\;
\{\pmatrix{1 & 0 & b \cr 0 & 1 & 0\cr 0 & 0& 1}:b\in \mathbb Z\}
$$
Edit: To certify the non-simple-connectedness, note that the group of upper-triangular unipotent matrices is simply connected, and that the indicated subgroup is discrete and central, so that simply connected group is the universal covering group of this Heisenberg group, and $\pi_1$ of the quotient (the Heisenberg group) is isomorphic to that discrete subgroup $\mathbb{Z}$.
|
Arithmetic Progressions in Complex Variables From Stein and Shakarchi's Complex Analysis book, Chapter 1 Exercise 22 asks the following:
Let $\Bbb N=\{1,2,\ldots\}$ denote the set of positive integers. A subset $S\subseteq \Bbb N$ is said to be in arithmetic progression if $$S=\{a,a+d,a+2d,\ldots\}$$ where $a,d\in\Bbb N$. Here $d$ is called the step of $S$. We are asked to show that $\Bbb N$ cannot be partitioned into a finite number of subsets that are in arithmetic progression with distinct steps (except for the case $a=d=1$).
He gives a hint to write $$\sum_{n\in\Bbb N}z^n$$ as a sum of terms of the type $$\frac{z^a}{1-z^d}.$$ How do I apply the hint? I know that $$\sum_{n\in\Bbb N}z^n=\frac{z}{1-z}$$ but that doesn't have anything to do with the $a$ or $d$. Thanks for any help!
|
Suppose $\mathbf{N}=S_1\cup\cdots\cup S_k$ is a partition of $\bf N$ and $S_r=\{a_r+d_rm:m\ge0\}$. Then
$$\begin{array}{cl}\frac{z}{1-z} & =\sum_{n\in\bf N}z^n \\
& =\sum_{r=1}^k\left(\sum_{n\in S_r}z^n\right) \\
& = \sum_{r=1}^k\left(\sum_{m\ge0}z^{a_r+d_rm}\right) \\
& = \sum_{r=1}^k\frac{z^{a_r}}{1-z^{d_r}}.\end{array}\tag{$\circ$}$$
Suppose each $d_r$ is distinct. Do both sides of $(\circ)$ have the same poles in the complex plane?
|
If $X$ is normal and $A$ is a $F_{\sigma}$-set in $X$, then $A$ is normal. How could I prove this theorem? A topological space $X$ is a normal space if, given any disjoint closed sets $E$ and $F$, there are open neighbourhoods $U$ of $E$ and $V$ of $F$ that are also disjoint. (Or more intuitively, this condition says that $E$ and $F$ can be separated by neighbourhoods.)
And an $F_{\sigma}$-set is a countable union of closed sets.
So I should be able to show that the $F_{\sigma}$-set has the necessary conditions for a $T_4$ space? But how could I for instance select two disjoint closed sets from $F_{\sigma}$?
|
Let us begin with a lemma ( see it in the Engelking's "General Topology" book, lemma 1.5.15):
If $X$ is a $T_1$ space and for every closed $F$ and every open $W$ that contains $F$ there exists a sequence $W_1$, $W_2$, ... of open subsets of $X$ such that $F\subset \cup_{i}W_i$ and $cl(W_i)\subset W$ for $i=$ 1, 2, ..., then the space $X$ is normal.
Suppose $X$ is normal and $A = \cup_n F_n \subset X$ is an $F_\sigma$ in $X$, where all the $F_n$ are closed subsets of $X$. Then $A$ is normal (in the subspace topology).
To apply the lemma, let $F$ be closed in $A$ and $W$ be an open superset of it (open in $A$). Let $O$ be open in $X$ such that $O \cap A = W$, and note that each $F \cap F_n$ is closed in $X$ and by normality of $X$ there are open subsets $O_n$ in $X$, for $n \in \mathbb{N}$ such that
$$ F \cap F_n \subset O_n \subset \overline{O_n} \subset O $$ and define $W_n = O_n \cap A$, which are open in $A$ and satisfy that the $W_n$ cover $F$ (as each $W_n$ covers $F_n$ and
$F = F \cap A = \cup_n (F \cap F_n)$) and the closure of $W_n$ in $A$ equals $$\overline{W_n} \cap A = \overline{O_n \cap A} \cap A \subset O \cap A = W$$
which is what is needed for the lemma.
|
Is high school contest math useful after high school? I've been prepping for a lot of high school math competitions this year. Will all the math I learn would actually mean something in college? There is a chance that all of it will be for naught, and I just wanted to know if any of you people found the math useful after high school.
I do like what I'm learning, so it's not like I'm only prepping by forcing myself to.
EDIT: I'm not really sure what the appropriate procedure is to thank everyone for the answers. So... thanks! All of your answers were really helpful, and they motivated me a little more. However, I would like to make it clear that I wasn't looking for a reason to continue but rather just asking a question out of curiosity.
|
High school math competitions require you to learn how to solve problems, especially when there is no "method" you can look up telling you how to solve these problems. Problem solving is a very desirable skill for many jobs you might someday wish to have.
|
How to show that $\frac{x^2}{x-1}$ simplifies to $x + \frac{1}{x-1} +1$ How does $\frac{x^2}{(x-1)}$ simplify to $x + \frac{1}{x-1} +1$?
The second expression would be much easier to work with, but I cant figure out how to get there.
Thanks
|
Very clever trick: If you have to show that two expressions are equivalent, you work backwards. $$\begin{align} & x +\frac{1}{x-1} + 1 \\ \\ =& \frac{x^2 - x}{x-1} +\frac{1}{x - 1} + \frac{x-1}{x-1} \\ \\ =& \frac{x^2 - x + 1 + x - 1}{x - 1} \\ \\ =&\frac{x^2 }{x - 1}\end{align}$$ Now, write the steps backwards (if you're going to your grandmommy's place, you turn backwards and then you again turn backwards, you're on the right way!) and act like a know-it-all.
$$\begin{align} &\frac{x^2 }{x - 1} \\ \\ =& \frac{x^2 - x}{x-1} +\frac{1}{x - 1} + \frac{x-1}{x-1} \\ \\ =& x +\frac{1}{x-1} + 1 \end{align}$$
Q.E.Doodly dee!
This trick works and you can impress your friends with such elegant proofs produced by this trick.
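If you'd rather derive the decomposition mechanically, polynomial long division does it; a minimal sketch with SymPy (assumed available):

    from sympy import symbols, div

    x = symbols('x')
    q, r = div(x**2, x - 1, x)   # x^2 = (x - 1)*q + r
    print(q, r)                  # x + 1, 1  =>  x^2/(x-1) = x + 1 + 1/(x-1)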
|
Prove that if for every $x \in \mathbb{R}^N$ $Ax=Bx$ then $A=B$ How can I quickly prove that if for every $x \in \mathbb{R}^N$
$$Ax=Bx$$ then $A=B$ ? Where $A,B\in \mathbb{R}^{N\times N}$.
Normally I would multiply both sides by inverse of $x$, however vectors have no inverse, so I am not sure how to prove it.
|
If you want to invert a matrix but all you have are vectors, put the vectors into a matrix! For example,
$$A \left( e_1 \mid e_2 \mid \cdots \mid e_n \right) = \left( A e_1 \mid A e_2 \mid \cdots \mid A e_n \right) $$
where $e_i$ is the $i$-th standard basis (column) vector. I could have chosen any vectors, but the standard basis vectors are the simplest, and I chose a linearly independent set so as to get the most information out of doing this. Also, the matrix wouldn't be invertible if I had chosen a linearly dependent set.
It turns out we don't even need to bother with inverting in this case, since we've pooled the vectors into an identity matrix:
$$ \begin{align}
A &= A(e_1 \mid e_2 \mid \cdots \mid e_n)
\\&= (Ae_1 \mid Ae_2 \mid \cdots \mid Ae_n)
\\&= (Be_1 \mid Be_2 \mid \cdots \mid Be_n)
\\ &= B(e_1 \mid e_2 \mid \cdots \mid e_n)
\\ &= B\end{align} $$
|
Trying to find $\sum\limits_{k=0}^n k \binom{n}{k}$
Possible Duplicate:
How to prove this binomial identity $\sum_{r=0}^n {r {n \choose r}} = n2^{n-1}$?
$$\begin{align}
&\sum_{k=0}^n k \binom{n}{k} =\\
&\sum_{k=0}^n k \frac{n!}{k!(n-k)!} =\\
&\sum_{k=0}^n k \frac{n(n-1)!}{(k-1)!((n-1)-(k-1))!} = \\
&n\sum_{k=0}^n \binom{n-1}{k-1} =\\
&n\sum_{k=0}^{n-1} \binom{n-1}{k} + n \binom{n-1}{-1} =\\
&n2^{n-1} + n \binom{n-1}{-1}
\end{align}$$
*
*Do I have any mistake?
*How can I handle the last term?
(Presumptive) Source: Theoretical Exercise 1.12(a), P18, A First Course in Pr, 8th Ed, by S Ross
|
By convention $\binom{n}k=0$ if $k$ is a negative integer, so your last line is simply $$n\sum_{k=0}^n\binom{n-1}{k-1}=n\sum_{k=0}^{n-1}\binom{n-1}k=n2^{n-1}\;.$$ Everything else is fine.
By the way, there is also a combinatorial way to see that $k\binom{n}k=n\binom{n-1}{k-1}$: the lefthand side counts the ways to choose a $k$-person committee from a group of $n$ people and then choose one of the $k$ to be chairman; the righthand side counts the number of ways to select a chairman ($n$) and then the other $k-1$ members of the committee.
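A quick numerical confirmation of the identity, using only Python's standard library:

    from math import comb

    for n in range(1, 10):
        lhs = sum(k * comb(n, k) for k in range(n + 1))
        rhs = n * 2**(n - 1)
        assert lhs == rhs
    print("identity holds for n = 1..9")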
|
A continuous function $f : \mathbb{R} → \mathbb{R}$ is uniformly continuous if it maps Cauchy sequences into Cauchy sequences. A continuous function $f : \mathbb{R} → \mathbb{R}$ is uniformly continuous if it maps
Cauchy sequences into Cauchy sequences.
Is the above statement true?
I guess it is not true but can't find any counterexample.
|
The answer is no, as explained by Jonas Meyer, and every continuous function $f:\mathbb R\longrightarrow \mathbb R$ has this property:
If $(x_n)_{n\in\mathbb N}$ is a Cauchy sequence then $m\leq x_n \leq M, \ \ \forall \ n\in\mathbb N$ for some $m<M$. Since $f$ is uniformly continuous on $[m,M]$ the result follows.
So $f(x)=x^2$ (or any continuous not uniformly continuous function $\mathbb R\to \mathbb R$) is a counterexample.
|
How can I interpret $\max(X,Y)$? My textbook says:
Let $X$ and $Y$ be two stochastically independent, equally distributed
random variables with distribution function F. Define $Z = \max (X, Y)$.
I don't understand what is meant by this. I hope I translated it correctly.
From this I would conclude that $X=Y$, and therefore $Z=X=Y$.
How can I interpret $\max(X,Y)$?
|
What's the problem? Max is the usual maximum of two real numbers (or two real-valued random variables), so that we can define, more explicitly, that
$$
Z = \begin{cases} X & \text{if $X \ge Y$} \\
Y & \text{if $Y \ge X$} \\ \end{cases}
$$
So your conclusion is most surely wrong! There is no basis for concluding that $Z=X=Y$: independent variables with a common distribution need not be equal. In fact, by independence, $P(Z\le z)=P(X\le z)\,P(Y\le z)=F(z)^2$.
|
Some weird equations In our theoretical class the professor stated that from this equation $(C = \text{constant})$
$$
x^2 + 4Cx - 2Cy = 0
$$
we can first get:
$$
x = \frac{-4C + \sqrt{16 C^2 - 4(-2Cy)}}{2}
$$
and than this one:
$$
x = 2C \left[\sqrt{1 + \frac{y}{2C}} -1\right]
$$
How is this even possible?
|
Here's the algebra:
$$x^2 + 4Cx - 2Cy = (x+2C)^2-4C^2 - 2Cy = 0 $$
Thus:
$$
(x+2C)^2 = 4C^2 + 2Cy = 2C(2C+y).
$$
Take square roots:
$$
x_1 = -2C + \sqrt{2C(2C+y)} =\frac{-4C + \sqrt{16C^2 +8Cy}}{2}$$
and
$$
x_2 = -2C - \sqrt{2C(2C+y)}.
$$
Finally, assuming $C>0$, factor $4C^2$ out of the square root: $\sqrt{2C(2C+y)}=2C\sqrt{1+\frac{y}{2C}}$, so
$$
x_1 = -2C + 2C\sqrt{1+\frac{y}{2C}} = 2C\left[\sqrt{1 + \frac{y}{2C}} -1\right].
$$
|
Prove that any function can be represented as a sum of two injections How to prove that for any function $f: \mathbb{R} \rightarrow \mathbb{R}$ there exist two injections $g,h \in \mathbb{R}^{\mathbb{R}} \ : \ g+h=f$.
Could you help me?
|
Note that it suffices to find an injection $g$ such that $f+g$ is also an injection, as $f$ can then be written as the sum $(f+g)+(-g)$. Such a $g$ can be constructed by transfinite recursion.
Let $\{x_\xi:\xi<2^\omega\}$ be an enumeration of $\Bbb R$. Suppose that $\eta<2^\omega$, and we’ve defined $g(x_\xi)$ for all $\xi<\eta$ in such a way that $g(x_\xi)\ne g(x_\zeta)$ and $(f+g)(x_\xi)\ne(f+g)(x_\zeta)$ whenever $\xi<\zeta<\eta$. Let
$$S_\eta=\{g(x_\xi):\xi<\eta\}$$
and
$$T_\eta=\{(f+g)(x_\xi)-f(x_\eta):\xi<\eta\}\;.$$
Then $|S_\eta\cup T_\eta|<2^\omega$, so we may choose $g(x_\eta)\in\Bbb R\setminus(S_\eta\cup T_\eta)$. Clearly $g(x_\eta)\ne g(x_\xi)$ for $\xi<\eta$, since $g(x_\eta)\notin S_\eta$. Moreover, $g(x_\eta)\notin T_\eta$, so for each $\xi<\eta$ we have $g(x_\eta)\ne(f+g)(x_\xi)-f(x_\eta)$ and hence $(f+g)(x_\eta)\ne(f+g)(x_\xi)$. Thus, the recursion goes through to define $g(x_\xi)$ for all $\xi<2^\omega$, and it’s clear from the construction that both $g$ and $f+g$ are injective.
|
Proving convergence of sequence with induction I have a sequence defined as $a_{1}=\sqrt{a}$ and $a_{n}=\sqrt{1+a_{n-1}}$ and I need to prove that it has an upper bound and therefore is convergent. So i have assumed that the sequence has a limit and by squaring I got that the limit is $\frac{1+\sqrt{5}}{2}$ only $ \mathbf{if}$ it converges.
What methods are there for proving convergence? I am trying to show that $a_{n}<\frac{1+\sqrt{5}}{2}$ by induction but could use some help since I have never done induction proof before.
Progress:
Step 1(Basis): Check if it holds for lowest possible integer: Since $a_{0}$ is not defined, lowest possible value is $2$.
$a_{2}=\sqrt{1+a_{1}}=\sqrt{1+\sqrt{a}}=\sqrt{1+\sqrt{\frac{1+\sqrt{5}}{2}}}< \frac{1+\sqrt{5}}{2}$.
Step 2: Assume it holds for $k\in \mathbb{N},k\geq 3$. If we can prove that it holds for $n=k+1$ we are done and therefore it holds for all $k$.
This is were i am stuck: $a_{k+1}=\sqrt{1+a_{k}}$. I don't know how to proceed because I don't know where I am supposed to end.
|
Let $f(x) = \sqrt{1+x}-x$. We find that $f'(x) = \frac 1 {2\sqrt{1+x}} -1 <0$. This means that $f$ is a strictly decreasing function. Set $\phi = \frac{1+\sqrt 5}{2}$.
We know that $f(\phi)=0$. We must then have that $f(x)>0$ if $x<\phi$ and $f(x)<0$ if $x>\phi$. So $a_{n+1}>a_n$ if $a_n< \phi$ and $a_{n+1} < a_n$ if $a_n > \phi$.
I claim that if $a_1<\phi$ then $a_n < \phi$ for all $n$. This is proven by induction. Assume that $a_k < \phi$. Then $a_{k+1}^2 = a_k +1 < 1+\frac{1+\sqrt 5}{2}= \frac{6+2\sqrt{5}}{4} =\phi^2$. So $a_{k+1} < \phi$ and by induction we get that $a_n < \phi$ for all $n$.
If $a_1<\phi$ we thus know that we get a bounded increasing sequence. All bounded increasing sequences converge. To deal with the case when $\sqrt {a_1}>\phi$ is left as an exercise.
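One can watch the convergence numerically; a minimal sketch trying starting values on both sides of $\phi$:

    from math import sqrt

    phi = (1 + sqrt(5))/2

    for a in (0.1, 10.0):          # one start below phi, one above
        x = sqrt(a)                # a_1 = sqrt(a)
        for _ in range(40):
            x = sqrt(1 + x)        # a_{n+1} = sqrt(1 + a_n)
        print(a, x, phi)           # both runs approach phi ~ 1.618...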
|
Finding a topology on $ \mathbb{R}^2 $ such that the $x$-axis is dense The problem is the following
Put a topology on $ \mathbb{R}^2$ with the property that the line $\{(x,0):x\in \mathbb{R}\}$ is dense in $\mathbb{R}^2$
My attempt
If $(a,b)$ is in $\mathbb{R}^2$, then define the open sets containing $(a,b)$ as the strips between $d$ and $c$, inclusive, where $c$ has the opposite sign of $b$, and $d>b$ if $b$ is positive, otherwise $d<b$. Clearly this always contains the set $\mathbb{R}\times\{0\}$. It also obeys the topological laws, i.e., the intersection of two open sets is open, and the union of any number of open sets is open.
Thanks for your help
|
The following generalizes all solutions (EDIT: not all solutions, just those which give a topology on $\mathbb{R}^2$ homeomorphic to the standard topology). It doesn't have much topological content, but it serves to show how basic set theory can often be used to trivialize problems in other fields. A famous example of this phenomenon is Cantor's proof of the existence of (infinitely many) transcendental numbers.
So, let $X$ be a dense subset of $\mathbb{R}^2$ be such that
$$\left|X\right| = \left|\mathbb{R}^2\setminus X\right| = \left|\mathbb{R}^2\right|$$
For instance, $X$ could be the set of points with irrational first coordinate. Now let $f$ and $g$ be bijections:
$$f : \mathbb{R}\times\{0\} \to X$$
$$g : \mathbb{R}\times (\mathbb{R}\setminus\{0\}) \to \mathbb{R}^2\setminus X$$
Then let $F = f \cup g$ is a bijection from the plane to itself which will map the $x$-axis onto $X$. The topology we want consists of those sets $A$ for which $F[A]$ is open in the standard topology.
|
Four times the distance from the $x$-axis plus 9 times the distance from $y$-axis equals $10$. What geometric figure is formed by the locus of points such that the sum of four times the distance from the $x$-axis and nine times its distance from $y$-axis is equal to $10$?
I get $4x+9y=10$. So it is a straight line, but the given answer is parallelogram. Can anyone tell me where my mistake is?
|
If $P$ has coordinates $(x,y)$, then $d(P,x\text{ axis})=|y|$ and $d(P,y\text{ axis})=|x|$. So your stated condition requires $4|y|+9|x|=10$, whose graph is the rhombus (in particular, a parallelogram) with vertices $\left(\pm\frac{10}{9},0\right)$ and $\left(0,\pm\frac{5}{2}\right)$.
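For reference, a minimal plotting sketch of the locus, assuming NumPy and Matplotlib are available:

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(-10/9, 10/9, 400)
    y = (10 - 9*np.abs(x))/4           # upper half of 4|y| + 9|x| = 10

    plt.plot(x, y, 'b', x, -y, 'b')    # rhombus: vertices (+-10/9, 0) and (0, +-5/2)
    plt.gca().set_aspect('equal')
    plt.show()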
|
Neighborhoods of half plane Define $H^n = \{(x_1, \dots, x_n)\in \mathbb R^n : x_n \ge 0\}$, $\partial H^n = \{(x_1, \dots, x_{n-1},0) : x_i \in \mathbb R\}$.
$\partial H^n$ is a manifold of dimension $n-1$: As a subspace of $H^n$ it is Hausdorff and second-countable. If $U \subseteq \partial H^n$ is open in $H^n$ with the subspace topology of $\mathbb R^n$ then $f: (x_1, \dots, x_{n-1},0) \mapsto (x_1, \dots, x_{n-1}), \partial H^n \to \mathbb R^{n-1} $ is injective and continous. Then its restriction to $U$ is a homeomorphism by invariance of domain.
I am asked to show that a nbhd $U'$ of $x \in \partial H^n$ is not homeomorphic to an open set $U \subseteq \mathbb R^n$. My try:
$H^n$ is with the subspace topology. Then a set $U' \subseteq H^n$ is open iff it is $U \cap H^n$ for some open set $U \subseteq \mathbb R^n$. $H^n$ is closed in $\mathbb R^n$. How to prove that $U \cap H^n$ can't be open? Thank you for correcting me.
|
Suppose $U'$ is open in $\mathbb R^n$ and $U' \ni x \in \partial H^n$. Then $U'$ must contain an open ball $B$ around $x$ and so there must exist a point $y \in B$ with $y_n < 0$ and therefore $U' \not \subset H^n$.
For the more general argument, suppose $\phi: U' \to \mathbb R^n$ is a homeomorphism. Then there is an open ball $B \subset \mathbb R^n$ containing $\phi(x)$ and so there is also a ball $B' \subset \phi^{-1}(B)$ that contains $x$. So we are done by the first paragraph.
|
Do we have such a direct product decomposition of Galois groups? Let $L = \Bbb{Q}(\zeta_m)$ where we write $m = p^k n$ with $(p,n) = 1$. Let $p$ be a prime of $\Bbb{Z}$ and $P$ any prime of $\mathcal{O}_L$ lying over $p$.
Notation: We write $I = I(P|p)$ to denote the inertia group and $D = D(P|p)$ the decomposition group.
Now we have a tower of fields
$$\begin{array}{c} L \\ |\\ L^E \\| \\ L^D \\| \\ \Bbb{Q}\end{array}$$
and it is clear that $L^E = \Bbb{Q}(\zeta_n)$ so that $E \cong (\Bbb{Z}/p^k\Bbb{Z})^\times$.
My question is: I want to identify the decomposition group $D$. So this got me thinking: Do we have a decomposition into direct products $$D \cong E \times D/E?$$
This would be very convenient because $D/E$ is already known to be finite cyclic of order $f$ while $E$ I have already stated above. Note it is not necessarily given that $(e,f) = 1$ so we can't invoke Schur - Zassenhaus or anything like that.
|
You can write your $L$ as the compositum of $\mathbb{Q}(\zeta_{p^k})$ and $\mathbb{Q}(\zeta_{n})$. Since $(p,n)=1$, the two are disjoint over $\mathbb{Q}$, and so the Galois group of $L$ is isomorphic to the direct product of the two Galois groups, one of which is $E$. Let's call the other subgroup $H$. Now, every element $g$ of $G$ is uniquely a product of an element $\epsilon$ of $E$ and an element $h$ of $H$. $E$ is contained in $D$, so $\epsilon h$ fixes $P$ if and only if $h$ does. In other words, the decomposition group $D$ is generated by $E$ and the decomposition group of $P$ in ${\rm Gal}(\mathbb{Q}(\zeta_{p^kn})/\mathbb{Q}(\zeta_{p^k}))=H$, so is indeed a direct product.
|
Confusion related to integral of a Gaussian I am a bit confused about calculating the integral of a Gaussian
$$\int_{-\infty}^{\infty}e^{-x^{2}+bx+c}\:dx=\sqrt{\pi}e^{\frac{b^{2}}{4}+c}$$
Given above is the integral of a Gaussian. The integral of a Gaussian is Gaussian itself. But what is the mean and variance of this Gaussian obtained after integration?
|
The question is only meaningful if $\Im{b} \ne 0$. Let's say that, rather, $\Re{b} = 0$ and $b = i B$. Now you can assign a mean/variance to the resulting Gaussian: the formula gives $\sqrt{\pi}\,e^{c}\,e^{-B^{2}/4}$, which as a function of $B$ is (up to normalization) a Gaussian with mean $0$ and variance $2$. This, BTW, is related to the well-known fact that a Fourier transform of a Gaussian is a Gaussian.
|
Compute $\lim_{n\to\infty} \left(\sum_{k=1}^n \frac{H_k}{k}-\frac{1}{2}(\ln n+\gamma)^2\right) $ Compute
$$\lim_{n\to\infty} \left(\sum_{k=1}^n \frac{H_k}{k}-\frac{1}{2}(\ln n+\gamma)^2\right) $$
where $\gamma$ - Euler's constant.
|
We have
\begin{align}
2\sum_{k=1}^n \frac{H_k}{k} &= 2\sum_{k=1}^n \sum_{j=1}^k \frac{1}{jk} \\
&= \sum_{k=1}^n \sum_{j=1}^k \frac{1}{jk} + \sum_{k=1}^n \sum_{j=1}^k \frac{1}{jk} \\
&= \sum_{k=1}^n \sum_{j=1}^k \frac{1}{jk} + \sum_{j=1}^n \sum_{k=j}^n \frac{1}{jk}, \text{ swapping the order of summation on the second sum}\\
&= \sum_{k=1}^n \sum_{j=1}^k \frac{1}{jk} + \sum_{k=1}^n \sum_{j=k}^n \frac{1}{jk}, \text{ changing variables on the second sum}\\
&= \sum_{k=1}^n \sum_{j=1}^n \frac{1}{jk} + \sum_{k=1}^n \frac{1}{k^2} \\
&= \left(\sum_{k=1}^n \frac{1}{k} \right)^2 + \sum_{k=1}^n \frac{1}{k^2} \\
&= H_n^2+ H^{(2)}_n. \\
\end{align}
Thus
\begin{align*}
\lim_{n\to\infty} \left(\sum_{k=1}^n \frac{H_k}{k}-\frac{1}{2}(\log n+\gamma)^2\right) &= \lim_{n\to\infty} \frac{1}{2}\left(H_n^2+ H^{(2)}_n-(\log n+\gamma)^2\right) \\
&= \lim_{n\to\infty} \frac{1}{2}\left((\log n + \gamma)^2 + O(\log n/n) + H^{(2)}_n-(\log n+\gamma)^2\right) \\
&= \frac{1}{2}\lim_{n\to\infty} \left( H^{(2)}_n + O(\log n/n) \right) \\
&= \frac{1}{2}\lim_{n\to\infty} \sum_{k=1}^n \frac{1}{k^2}\\
&= \frac{\pi^2}{12}.
\end{align*}
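A numerical check of the limit; a minimal sketch with the value of Euler's constant hard-coded:

    from math import log, pi

    gamma = 0.5772156649015329           # Euler-Mascheroni constant (hard-coded)
    n = 10**6
    H = 0.0                              # running harmonic number H_k
    s = 0.0                              # running sum of H_k / k
    for k in range(1, n + 1):
        H += 1.0/k
        s += H/k

    print(s - 0.5*(log(n) + gamma)**2)   # ~ 0.8224670...
    print(pi**2/12)                      # 0.8224670334...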
|
A numerical inequality. If $a_1\ge a_2\ge \cdots \ge a_n\ge 0$, $b_1\ge b_2\ge \cdots \ge b_n\ge 0$ with $\sum_{j=1}^kb_j=1$ for some $1\le k\le n$. Is it true that $2\sum_{j=1}^na_jb_j\le a_1+\frac{1}{k}\sum_{j=1}^na_j$?
The above question is denied.
Give a simple proof to a weaker version?
Under the same condtion, then $\sum_{j=1}^na_jb_j\le \max\{a_1,\frac{1}{k}\sum_{j=1}^na_j\}$.
|
No, it isn't true. Let $a_j=b_j=1$ for $1\leq j\leq n$. Then $\sum_{j=1}^1b_j=1$, hence $k=1$, thus we've satisfied the hypotheses. However, $$2\sum_{j=1}^na_jb_j=2n\geq1+n$$ with equality if and only if $n=1$.
EDIT: For the second case, let $a_j=1/10$ for all $j$, $b_1=1$ and $b_j=10$ for $j\geq 2$. Then we end up with $$n-0.9\geq n/10$$ with equality if and only if $n=1$ again.
|
Evaluating: $\int_0^{\infty} \frac{1}{t^2} dt$ I try to evaluate the following integral $$\int_0^{\infty} \frac{1}{t^2} dt$$
At first it seems easy to me. I rewrite it as follows.$$\lim_{b \to \infty} \int_0^{b} \frac{1}{t^2} dt$$ and integrate the $\frac{1}{t^2}$. I proceed as follows:
$$\lim_{b\to\infty} \left[
-t^{-1}\right]_0^b$$ Then it results in,$$\lim_{b\to\infty} -b^{-1}$$ and it converges to zero. However, wolframalpha says the integral is divergent.
I do not understand what is wrong with this reasoning in particular and what is the right solution. Many thanks in advance!
|
You have done something strange with the lower limit. In fact, your integral is generalized at both endpoints, since the integrand is unbounded near $0$.
You get
$$\int_0^\infty \frac{1}{t^2}\,dt = \lim_{\varepsilon \to 0^+} \int_\varepsilon^1 \frac{1}{t^2}\,dt + \lim_{b \to\infty} \int_1^b \frac{1}{t^2}\,dt$$
and while the second limit exists, the first does not.
|
How to evaluate $\lim_{n\rightarrow\infty}\int_{0}^{1}x^n f(x)dx$, How to evaluate $\lim\limits_{n\rightarrow\infty}\int_{0}^{1}x^n f(x)dx$? Well, I did one problem from Rudin's book: if $\int_{0}^{1}x^n f(x)dx=0$ for all $n\in\mathbb{N}$, then $f\equiv 0$, by the Stone-Weierstrass theorem. Please help me here.
|
Assuming $f$ is integrable you can use dominated convergence: $x^n f(x)\to 0$ pointwise on $[0,1)$ and $|x^n f(x)|\le |f(x)|$. If $f$ is positive, monotone convergence works too. In either case, the limit is $0$.
|
Linear transformation satisfies $T^n=T$; has eigenvalues? I have a linear transformation $T:V\to V$ over a (finite, if needed) field $F$, which satisfies $T^n=T$.
Prove that $T$ has an eigenvalue, or give a counterexample.
Thanks
|
This is false. The matrix $$A = \left(\begin{matrix}0 & -1\\1 & 0 \end{matrix}\right)$$ satisfies $A^4 = 1$ (thus $A^5 = A$) but its characteristic polynomial $X^2+1$ has no real roots, so $A$ has no real eigenvalues.
For an example over a finite field, consider
$$A = \left(\begin{matrix}0 & 1\\1 & 1 \end{matrix}\right)$$
over $\mathbb F_2$. It satisfies $A^3 = 1$ (thus $A^4 = A$) but the characteristic polynomial $X^2+X+1$ has no root in $\mathbb F_2$.
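A quick check of the $\mathbb F_2$ example, a sketch with NumPy, reducing mod $2$ at each step:

    import numpy as np

    A = np.array([[0, 1],
                  [1, 1]])

    A2 = (A @ A) % 2
    A3 = (A2 @ A) % 2
    print(A3)                                      # identity, so A^4 = A
    # the characteristic polynomial X^2 + X + 1 has no root in F_2:
    print([(x*x + x + 1) % 2 for x in (0, 1)])     # [1, 1] -- never 0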
|
Applications of Residue Theorem in complex analysis? Does anyone know the applications of the Residue Theorem in complex analysis? I would like to do a quick paper on the matter, but am not sure where to start.
The residue theorem
The residue theorem, sometimes called Cauchy's residue theorem (one of many things named after Augustin-Louis Cauchy), is a powerful tool to evaluate line integrals of analytic functions over closed curves; it can often be used to compute real integrals as well. It generalizes the Cauchy integral theorem and Cauchy's integral formula. From a geometrical perspective, it is a special case of the generalized Stokes' theorem.
Illustration of the setting
The statement is as follows:
Suppose $U$ is a simply connected open subset of the complex plane, and $a_1,\ldots,a_n$ are finitely many points of $U$ and $f$ is a function which is defined and holomorphic on $U\setminus\{a_1,\ldots,a_n\}$. If $\gamma$ is a rectifiable curve in $U$ which does not meet any of the $a_k$, and whose start point equals its endpoint, then
$$\oint_\gamma f(z)\,dz=2\pi i\sum_{k=1}^n I(\gamma,a_k)\mathrm{Res}(f,a_k)$$
I'm sure many complex analysis experts are very familiar with this theorem. I was just hoping someone could enlighten me on its many applications for my paper. Thank you!
|
Other than being a fantastic tool to evaluate some difficult real integrals, complex integrals have many purposes.
Firstly, contour integrals are used in Laurent Series, generalizing real power series.
The argument principle can tell us the difference between the number of roots and the number of poles of a function inside the closed contour $C$, counted with multiplicity:
$$\oint_{C} {f'(z) \over f(z)}\, dz=2\pi i (\text{Number of Roots}-\text{Number of Poles})$$
and this has been used to prove many important theorems, especially relating to the zeros of the Riemann zeta function.
Noting that the residue of $\pi \cot (\pi z)f(z)$ at each integer $n$ is $f(n)$ (provided $f$ is analytic there), and using a square contour offset from the integers by $\frac{1}{2}$, we note the contour integral disappears as the contour gets large, and thus
$$\sum_{n=-\infty}^\infty f(n) = -\pi \sum \operatorname{Res}\, \cot (\pi z)f(z)$$
where the residues are at poles of $f$.
While I have only mentioned a few, basic uses, many, many others exist.
|
Proving $\gcd(a, c) = \gcd(b, c)$ for $a + b = c^2$ I am trying to prove that, given positive integers $a, b, c$ such that $a + b = c^2$, $\gcd(a, c) = \gcd(b, c)$. I am getting a bit stuck.
I have written down that $(a, c) = ra + sc$ and $(b, c) = xb + yc$ for some integers $r, s, x, y$. I am now trying to see how I can manipulate these expressions considering that $a + b = c^2$ in order to work towards $ra + sc = xb + yc$ which means $(a, c) = (b, c)$. Am I starting off correctly, or am I missing something important? Any advice would help.
|
I don't see a way to proceed using the approach you suggest - this doesn't mean that there isn't one (it's often a good method to work through). But I don't see it yet. But you can directly show that the same primes to the same powers divide each:
Consider a prime $p$. Suppose that $p^\beta \mid \mid \gcd(a,c)$, so that in particular $p^\beta \mid c^2 - a = b$. And so $p^\beta \mid \gcd(b,c)$.
Suppose now that $p^\alpha \mid \mid \gcd(b,c)$. Then we know that $\alpha \geq \beta$ from this argument. Reversing the roles of $a$ and $b$ in the same argument shows that $\beta \geq \alpha$. Thus $\alpha = \beta$, and so the same primes to the same powers maximally divide $\gcd(a,c)$ and $\gcd(b,c)$.
Further, it seems this argument generalizes to cases that look like $a + b = c^n$ for any $n \geq 1$.
|
Question about limit of a product
Is it possible that both $\displaystyle\lim_{x\to a}f(x)$ and $\displaystyle\lim_{x\to a}g(x)$ do not exist but $\displaystyle\lim_{x\to a}f(x)g(x)$ does exist?
The reason I ask is that I was able to show that if $\displaystyle\lim_{x\to a}f(x)$ does not exist but both $\displaystyle\lim_{x\to a}g(x)$ and $\displaystyle\lim_{x\to a}f(x)g(x)$ do exist, then $\displaystyle\lim_{x\to a}g(x)=0$. However, I'm not sure whether the assumption on the existence of $\displaystyle\lim_{x\to a}g(x)$ is necessary.
|
Yes. As a simple example, I'll work with sequences. Let $f(n) = (-1)^n$ and $g(n) = (-1)^{n}$. Then $f(n)g(n) = 1$, so $1 = \lim_{n \to \infty} f(n)g(n)$, but neither of the individual limits exist.
|
Prove the divergence of the Cauchy product of the convergent series $a_{n}:=b_{n}:=\dfrac{(-1)^n}{\sqrt{n+1}}$ I am given these series, which converge: $a_{n}:=b_{n}:=\dfrac{(-1)^n}{\sqrt{n+1}}$. I solved this with the ratio test and came to $-1$, which is obviously wrong, because it must be $0<\theta<1$ so that the series converges. My steps:
$\dfrac{(-1)^{n+1}}{\sqrt{n+2}}\cdot \dfrac{\sqrt{n+1}}{(-1)^{n}} = - \dfrac{\sqrt{n+1}}{\sqrt{n+2}} = - \dfrac{\sqrt{n+1}}{\sqrt{n+2}}\cdot \dfrac{\sqrt{n+2}}{\sqrt{n+2}} = - \dfrac{(n+1)\cdot (n+2)}{(n+2)\cdot (n+2)} = - \dfrac{n^2+3n+2}{n^2+4n+4} = -1 $
did i do something wrong somewhere?
and I tried to show that the Cauchy product diverges, as the task says:
$\sum_{k=0}^{n}\dfrac{(-1)^{n-k}}{\sqrt{n-k+1}}\cdot \dfrac{(-1)^{k}}{\sqrt{k+1}} = \sum_{k=0}^{n}\dfrac{(-1)^n}{\sqrt{nk+n-k^2+1}} = ..help.. = diverging$
I am stuck here on how to show that the product diverges. Thanks for any help!
|
$\sum_{n=0}^\infty\dfrac{(-1)^n}{\sqrt{n+1}}$ is convergent by Leibniz's test, but it is not absolutely convergent (i.e. it is conditionally convergent).
To show that the Cauchy product does not converge use the inequality
$$
x\,y\le\frac{x^2+y^2}{2}\quad x,y\in\mathbb{R}.
$$
Then
$$
\sqrt{n-k+1}\,\sqrt{k+1}\le\frac{n+2}{2}
$$
and
$$
\sum_{k=0}^n\frac{1}{\sqrt{n-k+1}\,\sqrt{k+1}}\ge\frac{2(n+1)}{n+2}.
$$
This shows that the terms of the Cauchy product do not converge to $0$, and the series diverges.
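Numerically, the absolute values of the terms of the Cauchy product indeed stay above the bound $\frac{2(n+1)}{n+2}$; a minimal sketch:

    from math import sqrt

    for n in (10, 100, 1000):
        c_abs = sum(1.0/(sqrt(n - k + 1)*sqrt(k + 1)) for k in range(n + 1))
        print(n, c_abs, 2*(n + 1)/(n + 2))   # |c_n| >= 2(n+1)/(n+2), so c_n does not tend to 0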
|
What is the limit of $\displaystyle\lim_{x\to 0}\frac{\tan x - \sin x}{x}$? I want to find the limit of this trigonometric function:
$$\displaystyle\lim_{x\to 0}\frac{\tan x - \sin x}{x^n}$$
Note: $n \geq 1$
|
Checking separately the cases for $\,n=1,2,3\,$, we find:
$$n=1:\;\;\;\;\;\;\frac{\tan x-\sin x}{x}=\frac{1}{\cos x}\frac{\sin x}{x}(1-\cos x)\xrightarrow [x\to 0]{}1\cdot 1\cdot 0=0$$
$$n=2:\;\;\frac{\tan x-\sin x}{x^2}=\frac{1}{\cos x}\frac{\sin x}{x}\frac{1-\cos x}{x}\xrightarrow [x\to 0]{}1\cdot 1\cdot 0=0\;\;(\text{Applying L'Hospital})$$
$$n=3:\;\;\;\frac{\tan x-\sin x}{x^3}=\frac{1}{\cos x}\frac{\sin x}{x}\frac{1-\cos x}{x^2}\xrightarrow[x\to 0]{}1\cdot 1\cdot \frac{1}{2}=\frac{1}{2}\;\;(\text{Again L'H})$$
$$n\geq 4:\;\;\;\;\frac{\tan x-\sin x}{x^n}=\frac{1}{\cos x}\frac{\sin x}{x}\frac{1-\cos x}{x^2}\frac{1}{x^{n-3}}\xrightarrow[x\to 0]{}1\cdot 1\cdot \frac{1}{2}\cdot\frac{1}{\pm 0}=$$
and the above either doesn't exists (if $\,n-3\,$ is odd), or it is $\,\infty\,$ , so in any case $\,n\geq 4\,$ the limit doesn't exist in a finite form.
|
extension of a non-finite measure For a finite measure on a field $\mathcal{F_0}$ there always exists its extension to $\sigma(\mathcal{F_0})$.
Can somebody give me an example of a non-finite measure on a field which cannot be extended to $\sigma(\mathcal{F_0})$?
It would be better if somebody could point out where in the proof (of the existence of the extension of a finite measure) the finiteness property is used.
|
Every measure can be extended from a field to the generated $\sigma$-algebra. The classical proof by Caratheodory does not rely on the measure being finite, so there is no such example. As Ilya mentioned in a comment, the extension may not be unique. Here is an explicit example:
Let $\mathcal{F}$ be the field of subsets of $\mathbb{Q}$ generated by sets of the form $(a,b]\cap\mathbb{Q}$ with $a,b\in\mathbb{Q}$. Let $\mu(A)=\infty$ for $A\in\mathcal{F}\backslash\{\emptyset\}$. It is easy to see that $\sigma(\mathcal{F})=P(\mathbb{Q})$, the powerset. Now let $r>0$ be a real number. Then there is a unique measure $\mu_r$ such that $\mu_r\big(\{q\}\big)=r$ for all $q\in\mathbb{Q}$. So for each $r>0$, $\mu_r$ is a different extension of $\mu$. So there is a continuum of possible extensions of $\mu$ to $\sigma(\mathcal{F})$. The example is based on one in Counterexamples in Probability and Real Analysis by Wise and Hall.
|
Why is there always a basis for the Cartan orthonormal relative to the Killing form? I'm trying to understand a step in a proof:
Let $\mathfrak{g}$ be semi-simple (finite dimensional) Lie-algebra over $\mathbb{C}$, $\mathfrak{h}\subset\mathfrak{g}$ a Cartan subalgebra and let $\kappa:\mathfrak{g}\times\mathfrak{g}\to\mathbb{C}$ be the Killing form.
In this setting, the author of the proof chooses an orthonormal basis $h_1,\dots,h_n$ of $\mathfrak{h}$ relative to the Killing form, which is - to my understanding - a basis satisfying $\kappa(h_i,h_j)=\delta_{ij}$.
Why is it always possible to find such an orthonormal basis?
Thank you for your help!
|
The Killing form is symmetric and non-degenerate (Cartan's criterion), and its restriction to a Cartan subalgebra $\mathfrak{h}$ is again non-degenerate. Such a bilinear form can always be diagonalized via a suitable basis. So in particular over $\mathbb{C}$, where every scalar has a square root, you can rescale the basis vectors to make it orthonormal.
|
Number Theory and Congruency I have the following problem:
$$2x+7=3 \pmod{17}$$
I know HOW to do this problem. It's as follows:
$$2x=3-7\\
x=-2\equiv 15\pmod{17}$$
But I have no idea WHY I'm doing that. I don't really even understand what the problem is asking, I'm just doing what the book says to do. Can someone explain what this problem is asking and what I'm finding? Thanks
|
The $\bmod{17}$ congruence classes are represented by $0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16$. You're trying to find out which of those classes $x$ belongs to. That's why you're doing that.
|
Lower bounds on numbers of arrangements of a Rubik's cube Last night, a friend of mine informed me that there were forty-three quintillion positions that a Rubik's Cube could be in and asked me how many there were for my Professor's Cube (5x5x5).
So I gave him an upper bound:
$$8!~3^8\cdot24!~2^{24}/2^{12}\cdot12!~2^{12}\cdot24!/4!^6\cdot24!/4!^6,$$
did some rough approximation, and said it was around 10^79.
Then I decided I'd try to give a lower bound, and came up dry.
In the Wikipedia article, we read that:
*
*The orientation of the eighth vertex depends on the other seven.
*The 24 edge pieces that are next to vertices can't be flipped.
*The twelfth central edge's orientation depends on the other eleven.
*The parity of a permutation of the vertices and of the edge pieces are linked.
The second point is easy to see by imagining the route that such an edge piece takes under all the various moves. (The Wikipedia article opts to give a mechanical reason instead, which is a questionable practice.) The other points are not too hard to see either.
So we divide by $3\cdot2^{24}\cdot2\cdot2$ and get the answer, around $2.83\cdot10^{74}.$
My friend does not know group theory, and the only proof I know of the independence of the rest of the stuff uses group theory to a considerably greater extent than that which proves the dependent parts dependent. Can anyone think of a simpler proof of a reasonable lower bound? One that, say, at least puts the number greater than that for Rubik's Revenge (4x4x4), which is about $7.40\cdot10^{45}$?
|
Chris Hardwick has come up with a closed-form solution for the number of permutations for an $n\times n\times n$ cube: http://speedcubing.com/chris/cubecombos.html
|
Proving $\sum\limits_{k=1}^{n}{\frac{1}{\sqrt[k+1]{k}}} \geq \frac{n^2+3n}{2n+2}$ How to prove the following inequalities without using Bernoulli's inequality?
*
*$$\prod_{k=1}^{n}{\sqrt[k+1]{k}} \leq \frac{2^n}{n+1},$$
*$$\sum_{k=1}^{n}{\frac{1}{\sqrt[k+1]{k}}} \geq \frac{n^2+3n}{2n+2}.$$
My proof:
*
*
\begin{align*}
\prod_{k=1}^{n}{\sqrt[k+1]{k}} &= \prod_{k=1}^{n}{\sqrt[k+1]{k\cdot 1 \cdot 1 \cdots 1}} \leq \prod^{n}_{k=1}{\frac{k+1+1+\cdots +1}{k+1}}\\
&=\prod^{n}_{k=1}{\frac{2k}{k+1}}=2^n \cdot \prod^{n}_{k=1}{\frac{k}{k+1}}=\frac{2^n}{n+1}.\end{align*}
*
$$\sum_{k=1}^{n}{\frac{1}{\sqrt[k+1]{k}}} \geq n \cdot \sqrt[n]{\prod_{k=1}^{n}{\frac{1}{\sqrt[k+1]{k}}}} \geq n \cdot \sqrt[n]{\frac{n+1}{2^n}}=\frac{n}{2}\cdot \sqrt[n]{n+1}.$$
It remains to prove that
$$\frac{n}{2}\cdot \sqrt[n]{n+1} \geq \frac{n^2+3n}{2n+2}=\frac{n(n+3)}{2(n+1)},$$
or
$$\sqrt[n]{n+1} \geq \frac{n+3}{n+1},$$
or
$$(n+1) \cdot (n+1)^{\frac{1}{n}} \geq n+3.$$
We apply Bernoulli's Inequality and we have:
$$(n+1)\cdot (1+n)^{\frac{1}{n}}\geq (n+1) \cdot \left(1+n\cdot \frac{1}{n}\right)=(n+1)\cdot 2 \geq n+3,$$
which is equivalent with:
$$2n+2 \geq n+3,$$ or
$$n\geq 1,$$ and this is true becaue $n \neq 0$, $n$ is a natural number.
Can you give another solution without using Bernoulli's inequality?
Thanks :-)
|
The inequality to be shown is
$$(n+1)^{n+1}\geqslant(n+3)^n,
$$
for every positive integer $n$.
For $n = 1$ it is easy. For $n \ge 2$, apply the AM-GM inequality to the $n+1$ numbers consisting of $n-2$ copies of $n+3$, two copies of $\frac{n+3}{2}$, and one $4$; we get
$$(n+3)^n < \left(\frac{(n-2)(n+3) + \frac{n+3}{2} + \frac{n+3}{2} + 4}{n+1}\right)^{n+1} = \left(n+1\right)^{n+1}$$
|
Finding the general solution of a quasilinear PDE This is a homework that I'm having a bit of trouble with:
Find a general solution of:
$(x^2+3y^2+3u^2)u_x-2xyuu_y+2xu=0~.$
Of course this should be done using the method of characteristics but I'm having trouble solving the characteristic equations since none of the equations decouple:
$\dfrac{dx}{x^2+3y^2+3u^2}=-\dfrac{dy}{2xyu}=-\dfrac{du}{2xu}$
Any suggestions?
|
Follow the method in http://en.wikipedia.org/wiki/Characteristic_equations#Example:
$(x^2+3y^2+3u^2)u_x-2xyuu_y+2xu=0$
$2xyuu_y-(x^2+3y^2+3u^2)u_x=2xu$
$2yu_y-\left(\dfrac{x}{u}+\dfrac{3y^2}{xu}+\dfrac{3u}{x}\right)u_x=2$
$\dfrac{du}{dt}=2$ , letting $u(0)=0$ , we have $u=2t$
$\dfrac{dy}{dt}=2y$ , letting $y(0)=y_0$ , we have $y=y_0e^{2t}=y_0e^u$
$\dfrac{dx}{dt}=-\left(\dfrac{x}{u}+\dfrac{3y^2}{xu}+\dfrac{3u}{x}\right)=-\dfrac{x}{2t}-\dfrac{3y_0^2e^{4t}}{2xt}-\dfrac{6t}{x}$
$\dfrac{dx}{dt}+\dfrac{x}{2t}=-\left(\dfrac{3y_0^2e^{4t}}{2t}+6t\right)\dfrac{1}{x}$
Let $w=x^2$ ,
Then $\dfrac{dw}{dt}=2x\dfrac{dx}{dt}$
$\therefore\dfrac{1}{2x}\dfrac{dw}{dt}+\dfrac{x}{2t}=-\left(\dfrac{3y_0^2e^{4t}}{2t}+6t\right)\dfrac{1}{x}$
$\dfrac{dw}{dt}+\dfrac{x^2}{t}=-\dfrac{3y_0^2e^{4t}}{t}-12t$
$\dfrac{dw}{dt}+\dfrac{w}{t}=-\dfrac{3y_0^2e^{4t}}{t}-12t$
I.F. $=e^{\int\frac{1}{t}dt}=e^{\ln t}=t$
$\therefore\dfrac{d}{dt}(tw)=-3y_0^2e^{4t}-12t^2$
$tw=\int(-3y_0^2e^{4t}-12t^2)~dt$
$tx^2=-\dfrac{3y_0^2e^{4t}}{4}-4t^3+f(y_0)$
$x^2=-\dfrac{3y_0^2e^{4t}}{4t}-4t^2+\dfrac{f(y_0)}{t}$
$x=\pm\sqrt{-\dfrac{3y_0^2e^{4t}}{4t}-4t^2+\dfrac{f(y_0)}{t}}$
$x=\pm\sqrt{-\dfrac{3y^2}{2u}-u^2+\dfrac{2f(ye^{-u})}{u}}$
|
how to find inverse of a matrix in $\Bbb Z_5$ how to find inverse of a matrix in $\Bbb Z_5$
Please help me explicitly: how do I find the inverse of the matrix below? What I was thinking was to find the inverse of each entry separately in $\Bbb Z_5$ and then form the matrix?
$$\begin{pmatrix}1&2&0\\0&2&4\\0&0&3\end{pmatrix}$$
Thank you.
|
Hint: Use the adjugate matrix.
Answer: The cofactor matrix of $A$ comes
$\color{grey}{C_A=
\begin{pmatrix}
+\begin{vmatrix} 2 & 4 \\ 0 & 3 \end{vmatrix} & -\begin{vmatrix} 0 & 4 \\ 0 & 3 \end{vmatrix} & +\begin{vmatrix} 0 & 2 \\ 0 & 0 \end{vmatrix} \\
-\begin{vmatrix} 2 & 0 \\ 0 & 3 \end{vmatrix} & +\begin{vmatrix} 1 & 0 \\ 0 & 3 \end{vmatrix} & -\begin{vmatrix} 1 & 2 \\ 0 & 0 \end{vmatrix} \\
+\begin{vmatrix} 2 & 0 \\ 2 & 4 \end{vmatrix} & -\begin{vmatrix} 1 & 0 \\ 0 & 4 \end{vmatrix} & +\begin{vmatrix} 1 & 2 \\ 0 & 2 \end{vmatrix}
\end{pmatrix}=
\begin{pmatrix}
6 & 0 & 0 \\
-6 & 3 & 0 \\
8 & -4 & 2
\end{pmatrix}=}
\begin{pmatrix}
1 & 0 & 0 \\
-1 & 3 & 0 \\
3 & 1 & 2
\end{pmatrix}.$
Therefore the adjugate matrix of $A$ is
$\color{grey}{\text{adj}(A)=C_A^T=
\begin{pmatrix}
1 & 0 & 0 \\
-1 & 3 & 0 \\
3 & 1 & 2
\end{pmatrix}^T=}
\begin{pmatrix}
1 & -1 & 3 \\
0 & 3 & 1 \\
0 & 0 & 2
\end{pmatrix}$.
Since $\det{(A)}=1$, it follows that $A^{-1}=\text{adj}(A)=
\begin{pmatrix}
1 & -1 & 3 \\
0 & 3 & 1 \\
0 & 0 & 2
\end{pmatrix}$.
And we confirm this by multiplying the matrices:
$\begin{pmatrix} 1 & -1 & 3 \\ 0 & 3 & 1 \\ 0 & 0 & 2\end{pmatrix}
\begin{pmatrix} 1 & 2 & 0 \\ 0 & 2 & 4 \\ 0 & 0 & 3\end{pmatrix}=
\begin{pmatrix} 1 & 0 & 5 \\ 0 & 6 & 15 \\ 0 & 0 & 6\end{pmatrix}=
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{pmatrix}$.
|
Can you provide me historical examples of pure mathematics becoming "useful"? I am trying to think/know about something, but I don't know if my base premise is plausible. Here we go.
Sometimes when I'm talking with people about pure mathematics, they usually dismiss it because it has no practical utility, but I guess that according to the history of mathematics, the math that is useful today was once pure mathematics (I'm not so sure, but I guess that when calculus was invented, it didn't have a practical application).
Also, I guess that the development of pure mathematics is important because it allows us to think about non-intuitive objects before encountering some phenomena that is similar to these mathematical non-intuitive objects, with this in mind can you provide me historical examples of pure mathematics becoming "useful"?
|
Group theory is commonplace in quantum mechanics to represent families of operators that possess particular symmetries. You can find some info here.
|
Non-Abelian simple group of order $120$ Let's assume that there exists a simple non-Abelian group $G$ of order $120$. How can I show that $G$ is isomorphic to some subgroup of $A_6$?
|
A group of order 120 cannot be simple. Let's assume that there exists a simple non-abelian group $G$ of order 120. Then the number of Sylow 5-subgroups of $G$ is 6 (it is $\equiv 1 \pmod 5$, divides $24$, and cannot be $1$ by simplicity). Hence, the index of $N_{G}(P)$ in $G$ is 6 ($P$ is a Sylow 5-subgroup of $G$). Now the action of $G$ on the six cosets of $N_G(P)$ gives a monomorphism $\phi$ of $G$ to $S_{6}$ (injective because its kernel is a proper normal subgroup, hence trivial). We claim that $\operatorname{Im\phi}\leq A_{6}$. Otherwise $\operatorname{Im\phi}$ has an odd permutation and so $G$ has a normal subgroup of index 2, a contradiction. Hence, $G\cong \operatorname{Im(\phi)}\leq A_{6}$.
|
How do I apply multinomial laws in this question? The question is: assume I have 15 students in a class.
The probability of obtaining grade A is $0.3$,
the probability of obtaining grade B is $0.4$,
and the probability of obtaining grade C is $0.3$,
and I have this question:
What is the probability that at least 2 students obtain an A?
Do I need to apply this law?
$$P=\frac{n!}{a!\,b!\,c!}\,p_1^{\,a}\,p_2^{\,b}\,p_3^{\,c}$$
|
We reword the question, perhaps incorrectly.
If a student is chosen at random, the probabilities she obtains an A, B, and C are, respectively, $0.3$, $0.4$, and $0.3$.
If $15$ students are chosen at random, what is the probability that at least $2$ of the students obtain an A?
The probability that $a$ students get an A, $b$ get a B, and $c$ get a C, where $a+b+c=15$, is indeed given by the formula quoted in the post. It is
$$\frac{15!}{a!b!c!}(0.3)^a(0.4)^b (0.3)^c.$$
However, the above formula is not the best tool for solving the problem.
The probability a student gets an A is $0.3$, and the probability she doesn't is $0.7$. We are only interested in the number of A's, so we are in a binomial situation.
We want the probability there are $2$ or more A's. It is easier to first find the probability of fewer than $2$ A's. This is the probability of $0$ A's plus the probability of $1$ A.
If $X$ is the number of A's,
$$\Pr(X\lt 2)=\binom{15}{0}(0.3)^0(0.7)^{15}+\binom{15}{1}(0.3)^1(0.7)^{14}.$$
It follows that the probability of $2$ or more A's is
$$1-\left((0.7)^{15}+15(0.3)(0.7)^{14}\right).$$
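Numerically, using only the standard library:

    from math import comb

    p = 0.3
    prob_lt_2 = comb(15, 0)*p**0*(1 - p)**15 + comb(15, 1)*p**1*(1 - p)**14
    print(1 - prob_lt_2)    # ~ 0.9647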
|
Examples of familiar, easy-to-visualize manifolds that admit Lie group structures I have trouble learning Lie groups --- I have no canonical example to imagine while thinking of a Lie group. When I imagine a manifold it is usually some kind of a $2$D blanket or a circle/curve or a sphere, a torus etc.
However I have a problem visualizing a Lie group. The best one I thought is $SO(2)$ which as far as I understand just a circle. But a circle apparently lacks distinguished points so I guess there is no way to canonically prescribe a neutral element to turn a circle into a group $SO(2)$.
Examples I saw so far start from a group, describe it as a group of matrices to show that the group is endowed with the structure of a manifold. I would appreciate the other way --- given a manifold show that it is naturally a group. And such a manifold should be easily imaginable.
|
Think of $SO_2$ as the group of $2\times 2$ rotation matrices:
$$ \left[\begin{array}{cc} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{array} \right]$$
or the group of complex numbers of unit length $e^{i\theta}$.
You can convince yourself directly from definitions that either of these objects is a group under the appropriate multiplication, and that they are isomorphic to each other.
This group (which is presented in two ways) is a 1-manifold because it admits smooth parametrization in one variable ($\theta$ here).
Why does this group represent the circle $S^1$? Well, the matrices of the above form are symmetries of circles about the origin in $\mathbb{R}^2$, and the image of $\theta\mapsto e^{i\theta}$ is the unit circle in the complex plane. I think it is best conceptually to think of Lie groups first as groups, and then to develop geometric intuition to "flavor" your algebraic construction.
|
An inequality about sequences in a $\sigma$-algebra Let $(X,\mathbb X,\mu)$ be a measure space and let $(E_n)$ be a sequence in $\mathbb X$. Show that $$\mu(\lim\inf E_n)\leq\lim\inf\mu(E_n).$$
I am quite sure I need to use the following lemma.
Lemma. Let $\mu$ be a measure defined on a $\sigma$-algebra $\mathbb X$.
*
*If $(E_n)$ is an increasing sequence in $\mathbb X$, then
$$\mu\left(\bigcup_{n=1}^\infty E_n\right)=\lim\mu(E_n).$$
*If $(F_n)$ is a decreasing sequence in $\mathbb X$ and if $\mu(F_1)<+\infty$, then
$$\mu\left(\bigcap_{n=1}^\infty F_n\right)=\lim\mu(F_n).$$
I know that
$$\mu(\liminf_n E_n)=\mu\left(\bigcup_{i=1}^\infty\bigcap_{n=i}^\infty E_n\right)=
\lim_i\mu\left(\bigcap_{n=i}^\infty E_n\right).$$
The first equality follows from the definition of $\lim\inf$ and the second from point 1 of the lemma above. Here is where I am stuck.
|
For every $i\geq 1$ we have that
$$
\bigcap_{n=i}^\infty E_n\subseteq E_i
$$
and so for all $i\geq 1$
$$
\mu\left(\bigcap_{n=i}^\infty E_n\right)\leq \mu(E_i).
$$
Then
$$
\mu(\liminf_n E_n)=\lim_{i\to\infty}\mu\left(\bigcap_{n=i}^\infty E_n\right)\leq \liminf \mu(E_i),
$$
using the fact that if $(x_n)_{n\geq 1}$ and $(y_n)_{n\geq 1}$ are sequences with $x_n\leq y_n$ for all $n$, then $\liminf x_n\leq \liminf y_n$.
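To see that the inequality can be strict, take counting measure on $\{0,1\}$ with $E_n$ alternating between $\{0\}$ and $\{1\}$; the short Python truncation below (my own illustration, with the tails cut off at a finite $N$) mimics the limits:

```python
# Counting measure on {0, 1}; E_n alternates between {0} and {1}.
E = lambda n: {0} if n % 2 == 0 else {1}

# Truncate the tails at N as a finite stand-in for n -> infinity.
N = 100
tail = lambda i: set.intersection(*(E(n) for n in range(i, N)))
liminf_E = set().union(*(tail(i) for i in range(N - 1)))

print(len(liminf_E))                     # mu(liminf E_n) = 0
print(min(len(E(n)) for n in range(N)))  # liminf mu(E_n) = 1, so 0 < 1
```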
|
When do regular values form an open set? Let $f:M\to N$ be a $C^\infty$ map between manifolds.
When is the set of regular values of $N$ an open set in $N$?
There is a case which I sort of figured out:
*
*If $\operatorname{dim} M = \operatorname{dim} N$ and $M$ is compact, it is open by the following argument (fixed thanks to user7530 in the comments):
Let $y\in N$. Suppose $f^{-1}(y)\neq \emptyset$. The stack of records theorem applies: $f^{-1}(y)=\{x_1,\dots,x_n\}$ and there is an open neighborhood $U$ of $y$ such that $f^{-1}(U)=\bigcup_{i=1}^n U_i$, with $U_i$ an open neighborhood of $x_i$ such that the $U_i$ are pairwise disjoint and $f$ maps $U_i$ diffeomorphically onto $U$.
Now every point in $f^{-1}(U)$ is regular: if $x_i'\in U_i$, then $f|_{U_i}:U_i\to U$ is a diffeomorphism, so $df_{x_i'}$ is an isomorphism (thanks to user 7530 for simplifying the argument).
Now suppose $f^{-1}(y)=\emptyset$. Then there is an open neighborhood $V$ of $y$ such that every value in $V$ has no preimages. Indeed, the set $N\setminus f(M)$ is open, since $M$ compact $\Rightarrow$ $f(M)$ compact, hence closed. Therefore $V$ is an open neighborhood of $y$ where all values are regular, and we are done.
Can we remove the compactness/dimension assumptions in some way?
|
The set of critical points is closed. You want the image of this set under $f$ to be closed, for then the set of regular values, being its complement in $N$, is open. What about demanding that $f$ be a closed map? A condition that implies that $f$ is closed is that $f$ be proper (i.e. preimages of compact sets are compact): a proper map between manifolds is closed.
|
Studying $ u_n = \int_0^1 (\arctan x)^n \mathrm dx$ I would like to find an asymptotic equivalent of:
$$ u_n = \int_0^1 (\arctan x)^n \mathrm dx$$
which might be: $$ u_n \sim \frac{\pi}{2n} \left(\frac{\pi}{4} \right)^n$$
$$ 0\le u_n\le \left( \frac{\pi}{4} \right)^n$$
So $$ u_n \rightarrow 0$$
In order to get rid of $\arctan x$ I used the substitution
$$x=\tan \left(\frac{\pi t}{4n} \right) $$
which gives:
$$ u_n= \left(\frac{\pi}{4n} \right)^{n+1} \int_0^n t^n\left(1+\tan\left(\frac{\pi t}{4n} \right)^2 \right) \mathrm dt$$
But this integral is not easier to study!
Or: $$ t=(\arctan x)^n $$
$$ u_n = \frac{1}{n} \int_0^{(\pi/4)^n} t^{1/n}(1+\tan( t^{1/n})^2 ) \mathrm dt $$
How could I deal with this one?
|
Another (simpler) approach is to substitute $x = \tan{y}$ and get
$$u_n = \int_0^{\frac{\pi}{4}} dy \: y^n \, \sec^2{y}$$
Now we perform an analysis not unlike Laplace's Method: as $n \rightarrow \infty$, the contribution to the integral is dominated by the value of the integrand at $y=\pi/4$. We may then say that
$$u_n \sim \sec^2{\frac{\pi}{4}} \int_0^{\frac{\pi}{4}} dy \: y^n = \frac{2}{n+1} \left ( \frac{\pi}{4} \right )^{n+1} (n \rightarrow \infty) $$
The stated result follows, since $\frac{2}{n+1} \left ( \frac{\pi}{4} \right )^{n+1} = \frac{\pi}{2(n+1)} \left ( \frac{\pi}{4} \right )^{n} \sim \frac{\pi}{2n} \left ( \frac{\pi}{4} \right )^{n}$.
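A quick numerical check of the equivalent (a sketch of mine using SciPy's quadrature; the choice of $n$ values is arbitrary) shows the ratio approaching $1$:

```python
import numpy as np
from scipy.integrate import quad

for n in [5, 10, 20, 40]:
    u_n, _ = quad(lambda x: np.arctan(x)**n, 0, 1)
    approx = (np.pi / (2 * n)) * (np.pi / 4)**n
    print(n, u_n / approx)  # ratio tends to 1 (slowly: the error is O(1/n))
```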
|
Computing conditional probability combining events I know $P(E|A)$ and $P(E|B)$, how do I know $P(E|A,B)$? Assuming $A,B$ independent.
|
Consider this example:
*
*$A$ = first coin lands Head,
*$B$ = second coin lands Head,
*$E_1$ = odd number of Heads,
*$E_2$ = even number of Heads.
For each of $E_1$ and $E_2$, the first two conditional probabilities are both $1/2$; yet $P(E_1|A,B)=0$ while $P(E_2|A,B)=1$. So $P(E|A,B)$ is not determined by $P(E|A)$ and $P(E|B)$, even when $A$ and $B$ are independent.
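One can verify the counterexample by brute-force enumeration of the four equally likely outcomes; here is a small Python sketch of mine (the helper `pr` is my own naming):

```python
from itertools import product

outcomes = list(product('HT', repeat=2))  # two fair coins, 4 equally likely outcomes

def pr(event, given):
    """Conditional probability of `event` given `given`, by counting."""
    sample = [w for w in outcomes if given(w)]
    return sum(event(w) for w in sample) / len(sample)

A = lambda w: w[0] == 'H'              # first coin lands Head
B = lambda w: w[1] == 'H'              # second coin lands Head
E1 = lambda w: w.count('H') % 2 == 1   # odd number of Heads

print(pr(E1, A))                        # P(E1 | A)   = 0.5
print(pr(E1, B))                        # P(E1 | B)   = 0.5
print(pr(E1, lambda w: A(w) and B(w)))  # P(E1 | A,B) = 0.0
```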
(Also: Welcome to Math.SE, and please do not try to make your questions as short as possible. Adding your thoughts, even tentative, is always a plus.)
|
Book Recommendation for Integer partitions and $q$-series I have been studying number theory for a little while now, and I would like to learn about integer partitions and $q$-series, but I have never studied anything in the field of combinatorics, so are there any prerequisites or things I should be familiar with before I try to study the latter?
|
George Andrews has contributed greatly to the study of integer partitions. (The link with his name will take you to his webpage listing publications, some of which are accessible as pdf documents.) Also see, e.g., his classic text The Theory of Partitions and the more recent Integer Partitions.
You can pretty much "jump right in" with the following, though their breadth of coverage may be more than you care to explore (in which case, they each have fine sections on the topics of interest to you, with ample references for more in depth study of each topic):
Two books I highly recommend are
Concrete Mathematics by Graham, Knuth, and Patashnik.
Combinatorics: Topics, Techniques, and Algorithms by Peter J. Cameron. See his associated site for the text.
|
Equation of the Plane I have been working through all the problems in my textbook and I have finally got to a difficult one.
The problem asks:
Find the equation of the plane: the plane that passes through the point $(-1,2,1)$ and contains the line of intersection of the planes $x+y-z =2$ and $2x-y+3z=1$.
So far I found the direction vector of the line of intersection to be $\langle 2,-5,-3\rangle$ (the cross product of the two normal vectors) and, setting $x=0$, I have identified the point $(0,7/2,3/2)$ on this line.
I do not know how to find the desired plane from here.
Any assistance would be appreciated.
|
The following determinant equation is another form of what Scott noted: the point $(x,y,z)$ lies on the desired plane exactly when its displacement from the given point $(-1,2,1)$ is coplanar with the line's direction vector $\langle 2,-5,-3\rangle$ and the displacement from $(-1,2,1)$ to the point $(0,7/2,3/2)$ on the line: $$\begin{vmatrix}
x+1& y-2& z-1\\2& -5& -3\\1& 3/2& 1/2
\end{vmatrix}=0$$
Expanding gives $2x-4y+8z+2=0$, i.e. $x-2y+4z+1=0$.
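If you want to double-check the computation symbolically, here is a SymPy sketch of mine (the variable names and the sanity check are my own additions):

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

n1, n2 = sp.Matrix([1, 1, -1]), sp.Matrix([2, -1, 3])
d = n1.cross(n2)                                          # line direction (2, -5, -3)
Q = sp.Matrix([0, sp.Rational(7, 2), sp.Rational(3, 2)])  # point on the line (x = 0)
P = sp.Matrix([-1, 2, 1])                                 # the given point

# Plane through P spanned by d and Q - P:
M = sp.Matrix.hstack(sp.Matrix([x, y, z]) - P, d, Q - P)
plane = sp.expand(M.det())
print(sp.factor(plane))  # 2*(x - 2*y + 4*z + 1)

# Sanity check: every point Q + t*d of the line satisfies the equation.
on_line = {x: Q[0] + t*d[0], y: Q[1] + t*d[1], z: Q[2] + t*d[2]}
print(sp.simplify(plane.subs(on_line)))  # 0
```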
|
On a special normal subgroup of a group Let $G$ be a group such that $H$ is a normal subgroup of $G$, $Z(H)=1$ and $Inn(H)=Aut(H)$. Then prove there exists a normal subgroup $K$ of $G$ such that $G=H\times K$.
|
First, for $g \in G$ define $\psi_g(x) = gxg^{-1}$. Since $H$ is a normal subgroup, each $\psi_g$ restricts to an automorphism of $H$, so we get a homomorphism $\psi: G \to Aut(H) = Inn(H)$. From $Z(H) = 1$, we know that $Inn(H) \cong H$. Composing this isomorphism with $\psi$ gives a homomorphism $\varphi: G \to H$ whose restriction to $H$ is the identity (for $h \in H$, $\psi_h$ is conjugation by $h$, which corresponds to $h$ itself); in particular $\varphi$ is surjective. Let $K = \ker \varphi$, which is none other than the centralizer $C_G(H)$. Then $H \cap K = 1$ since $\varphi$ fixes $H$ pointwise, and every $g \in G$ factors as $g = \varphi(g)\cdot(\varphi(g)^{-1}g)$ with $\varphi(g)^{-1}g \in K$, so $G = HK$. Since $K$ centralizes $H$, this gives $G = H \times K$.
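As a concrete illustration (my own addition, not part of the proof): $H = S_3$ has trivial center and $Aut(S_3) = Inn(S_3)$, so any group containing $S_3$ as a normal subgroup splits in this way. A small check with SymPy for $G = S_3 \times C_2$:

```python
from sympy.combinatorics import Permutation, PermutationGroup

# G = S3 x C2, with S3 acting on {0,1,2} and C2 swapping {3,4}.
s3_gens = [Permutation(0, 1, size=5), Permutation(0, 1, 2, size=5)]
G = PermutationGroup(s3_gens + [Permutation(3, 4, size=5)])
H = PermutationGroup(s3_gens)

# K = C_G(H) is exactly the kernel of the map G -> Aut(H) = Inn(H).
K = G.centralizer(H)

print(H.is_normal(G))                   # True
print(H.order(), K.order(), G.order())  # 6 2 12, i.e. G = H x K
```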
|
Connectedness of $\beta \mathbb{R}$ I am not very familiar with Stone-Čech compactification, but I would like to understand why the remainder $\beta\mathbb{R}\backslash\mathbb{R}$ has exactly two connected components.
|
I finally found something:
For convenience, let $X= \mathbb{R} \backslash (-1,1)$.
First, we show that $(\beta \mathbb{R})\backslash (-1,1)= \beta (\mathbb{R} \backslash (-1,1))$. Let $f_0 : X \to [0,1]$ be a continuous function. Obviously, $f_0$ can be extended to a continuous function $f : \mathbb{R} \to [0,1]$; then we extend $f$ to a continuous function $\tilde{f} : \beta \mathbb{R} \to [0,1]$ and we get by restriction a continuous function $\tilde{f}_0 : (\beta \mathbb{R}) \backslash (-1,1) \to [0,1]$. By the universal property of Stone-Čech compactification, we deduce that $\beta X= (\beta \mathbb{R}) \backslash (-1,1)$.
With the same argument, we show that for a closed interval $I \subset X$, $\beta I = \text{cl}_{\beta X}(I)$.
Let $F_1= \text{cl}_{\beta X} [1,+ \infty)$ and $F_2= \text{cl}_{\beta X} (-\infty,-1]$. Then, $F_1 \cup F_2 = \beta X$, $F_1 \cap F_2= \emptyset$ and $F_1$, $F_2$ are connected.
Indeed, there exists a continuous function $h : X \to \mathbb{R}$ such that $h_{|[1,+ \infty)} \equiv 0$ and $h_{|(-\infty,-1]} \equiv 1$, so when we extend $h$ to $\beta X$ we get a continuous function $\tilde{h}$ such that $\tilde{h} \equiv 0$ on $F_1$ and $\tilde{h} \equiv 1$ on $F_2$; therefore $F_1 \cap F_2= \emptyset$.
Finally, $F_1$ and $F_2$ are connected as closures of connected sets.
So $(\beta \mathbb{R} ) \backslash (-1,1)$ has exactly two connected components; in fact, it is the same thing for $(\beta \mathbb{R}) \backslash (-n,n)$. Because $(\beta \mathbb{R}) \backslash \mathbb{R}= \bigcap\limits_{ n\geq 1 } (\beta \mathbb{R}) \backslash (-n,n)$, we deduce that $(\beta \mathbb{R}) \backslash \mathbb{R}$ has exactly two connected components (the intersection of a non-increasing sequence of connected compact sets is connected).
|