Finding the norm of operators How do I find the norms of the following operators, i.e., how do I find $\lVert T_z\rVert$ and $\lVert l\rVert$?
1) Let $z\in \ell^\infty$ and $T_z\colon \ell^p\to\ell^p$ with $$(T_zx)(n)=z(n)\cdot x(n).$$
My thought was to use the Banach-Steinhaus theorem, but the problem seems straightforward and I don't know if I am right.
$\lVert T_z\rVert _p \leqslant\lVert z\rVert \cdot n\cdot\lVert x\rVert_p n=n^2\lVert x\rVert _p$ so if I choose $x=1$ then I get $\lVert T_z\rVert =n^2$.
2) Let $0\leqslant t_1\leqslant\cdots\leqslant t_n=1$ and $\alpha_1,\dots,\alpha_n \in K$ , $l\colon C([0,1])\to K$ with $l(x)=\sum_{i=1}^n \alpha_i x(t_i)$.
How do I find the operator norm in this case as well?
I am quite sure I am not right. I would be glad if I could get some help.
Definitely some hints would be great! Thanks in advance.
|
* As for each $n$, $|z(n)x(n)|^p\leqslant \lVert z\rVert_{\infty}^p|x(n)|^p$, we certainly have $\lVert T_z\rVert\leqslant \lVert z\rVert_{\infty}$. To get the other inequality, fix $\delta>0$ and pick $k$ such that $|z(k)|\geqslant \lVert z\rVert_{\infty}-\delta$ (the case $z=0$ is obvious).
* We assume the $t_j$ are distinct. Let $f_j$ be a continuous map such that $f_j(t_j)=e^{i\theta_j}$, where $e^{i\theta_j}\alpha_j=|\alpha_j|$, and $f_j(t_k)=0$ if $k\neq j$. We can choose the $f_j$'s such that $\lVert \sum_{j=1}^nf_j\rVert=1$.
|
Summation over exponent $\sum_{i=0}^k 4^i= \frac{4^{k+1}-1}3$ Why does $$\sum_{i=0}^k 4^i= \frac{4^{k+1}-1}{3}$$ hold? Where does that 3 come from?
Ok, from your answers I looked it up in the Wikipedia article on geometric progressions, but to derive the formula it says to multiply by $(1-r)$, not $(r-1)$; why is this case different?
|
Here is an easy mnemonic. If you have a geometric sum, then
$$\sum {\rm geometric} = {{\rm first} - {\rm blast}\over {1 - {\rm commonRatio}}}.$$
In this case, first is the first term, blast is the one beyond the last, and commonRatio is the common ratio of the terms. If the sum is finite and ${\rm commonRatio} > 1$, reverse the subtractions in the numerator and denominator for greater prettiness.
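For the specific sum in the question, the usual telescoping derivation makes the 3 explicit:
$$(4-1)\sum_{i=0}^k 4^i=\sum_{i=0}^k 4^{i+1}-\sum_{i=0}^k 4^i=4^{k+1}-4^0=4^{k+1}-1,$$
and dividing by $4-1=3$ gives the formula. Multiplying by $(1-r)$ instead of $(r-1)$ only flips the sign of both numerator and denominator, so the two derivations give the same result.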
|
Example of a general random variable with finite mean but infinite variance Given the probability triple $(\Omega, \mathcal{F}, \mu)$ with $\Omega=[0,1]$ and $\mu$ Lebesgue measure, find a random variable $X : \Omega \to \mathbb{R}$ such that the expected value $E(X)$ is finite and positive, but $E(X^2)$ diverges.
|
An example is a random variable $X$ having a Student's $t$-distribution with $\nu = 2$ degrees of freedom.
Its mean is $E[X] = 0$ (the mean exists for $\nu > 1$), but its second moment $E[X^2] = \operatorname{Var}[X] = \infty$ for $1 < \nu \le 2$.
Edit: Oh wait, finite positive. Well, $X+1$, I guess.
|
Question on a proof of a sequence
I have some questions
1) In the forward direction of the proof, it employs the inequality $|x_{k,i} - a_i| \leq (\sum_{j=1}^{n} |x_{k,j} - a_j|^2)^{\frac{1}{2}}$. What exactly is this inequality?
2) In the backwards direction they claim to use the inequality $\epsilon/n$. I thought that when we choose $\epsilon$ in our proofs, it shouldn't depend on $n$ because $n$ is always changing?
|
$\def\abs#1{\left|#1\right|}$(1) We have that
$$ \abs{x_{k,i} - a_i}^2 \le \sum_{j=1}^n \abs{x_{k,j} - a_j}^2 $$
since we are only adding nonnegative numbers to the left-hand side. Now, exploiting the monotonicity of $\sqrt{\cdot}$, we have
$$ \abs{x_{k,i} - a_i} = \left(\abs{x_{k,i} - a_i}^2\right)^{1/2} \le \left(\sum_{j=1}^n \abs{x_{k,j} - a_j}^2\right)^{1/2} $$
(2) When you talk about sequences $(x_n)$, where you use $n$ to index the sequence's elements, your $\epsilon > 0$ must indeed not depend on $n$. But in this case, $n$ denotes the (fixed, not changing for different elements $x_k$) dimension of $\mathbb R^n$, as you are talking about a sequence $(x_k)$ in $\mathbb R^n$.
|
Are there five complex numbers satisfying the following equalities? Can anyone help on the following question?
Are there five complex numbers $z_{1}$, $z_{2}$ , $z_{3}$ , $z_{4}$
and $z_{5}$ with $\left|z_{1}\right|+\left|z_{2}\right|+\left|z_{3}\right|+\left|z_{4}\right|+\left|z_{5}\right|=1$
such that the smallest among $\left|z_{1}\right|+\left|z_{2}\right|-\left|z_{1}+z_{2}\right|$,
$\left|z_{1}\right|+\left|z_{3}\right|-\left|z_{1}+z_{3}\right|$,
$\left|z_{1}\right|+\left|z_{4}\right|-\left|z_{1}+z_{4}\right|$,
$\left|z_{1}\right|+\left|z_{5}\right|-\left|z_{1}+z_{5}\right|$,
$\left|z_{2}\right|+\left|z_{3}\right|-\left|z_{2}+z_{3}\right|$,
$\left|z_{2}\right|+\left|z_{4}\right|-\left|z_{2}+z_{4}\right|$,
$\left|z_{2}\right|+\left|z_{5}\right|-\left|z_{2}+z_{5}\right|$,
$\left|z_{3}\right|+\left|z_{4}\right|-\left|z_{3}+z_{4}\right|$,
$\left|z_{3}\right|+\left|z_{5}\right|-\left|z_{3}+z_{5}\right|$
and $\left|z_{4}\right|+\left|z_{5}\right|-\left|z_{4}+z_{5}\right|$ is
greater than $8/25$?
Thanks!
|
Suppose you have solutions and express $z_i$ as $r_i e^{i\theta_i}$.
(I use $s_i = \sin( \theta_i )$ and $c_i = \cos( \theta_i )$ to make the notation shorter.)
Then $$\begin{align*}
|z_i| + |z_j| - |z_i + z_j| & = |r_i e^{i\theta_i}| + |r_j e^{i\theta_j}| - |r_i e^{i\theta_i} + r_j e^{i\theta_j}| \\
& = r_i + r_j - |r_i(c_i +is_i) + r_j(c_j +is_j) | \\
& = r_i + r_j - |(r_ic_i+r_jc_j) + i(r_is_i+r_js_j)| \\
& = r_i + r_j - \sqrt{(r_ic_i+r_jc_j)^2 + (r_is_i+r_js_j)^2} \\
& = r_i + r_j - \sqrt{ ( r_i^2c_i^2 + 2r_ir_jc_ic_j + r_j^2c_j^2 ) + ( r_i^2s_i^2 + 2r_ir_js_is_j + r_j^2s_j^2 ) } \\
& = r_i + r_j - \sqrt{ r_i^2( c_i^2 + s_i ^2 ) + r_j^2(c_j^2 + s_j^2) + 2r_ir_j(c_ic_j+s_is_j) } \\
|z_i| + |z_j| - |z_i + z_j| & = r_i + r_j - \sqrt{r_i^2 + r_j^2+2r_ir_j\cos(\theta_i-\theta_j) }
\end{align*} $$
Then
$$\begin{align*}
|z_i| + |z_j| - |z_i + z_j| > \frac{8}{25} & \Leftrightarrow r_i + r_j - \sqrt{r_i^2 + r_j^2+2r_ir_j\cos(\theta_i-\theta_j) } > \frac{8}{25} \\
& \Leftrightarrow r_i + r_j - \frac{8}{25} > \sqrt{r_i^2 + r_j^2+2r_ir_j\cos(\theta_i-\theta_j) } \\
& \Leftrightarrow ( r_i + r_j - \frac{8}{25} ) ^ 2 > r_i^2 + r_j^2+2r_ir_j\cos(\theta_i-\theta_j) \\
& \Leftrightarrow r_i^2 + r_j^2 + (\frac{8}{25})^2 + 2r_ir_j - 2\frac{8}{25} r_i - 2\frac{8}{25} r_j > r_i^2 + r_j^2+2r_ir_j\cos(\theta_i-\theta_j) \\
& \Leftrightarrow (\frac{8}{25})^2 + 2r_ir_j - 2\frac{8}{25} r_i - 2\frac{8}{25} r_j > 2r_ir_j\cos(\theta_i-\theta_j) \\
& \Leftrightarrow (\frac{8}{25})^2 + 2r_ir_j( 1 - \cos(\theta_i-\theta_j) ) - 2\frac{8}{25} r_i - 2\frac{8}{25} r_j > 0 \\
\end{align*} $$
I might update later but that's all I have for now :/
But I would try to express $r_i$ as a function of $r_{\sigma(i)}$ with $\sigma$ a permutation. And by doing that, you would probably get an ugly way of calculating any $r_j$ from all the $\theta_k$ where $j,k \in \{\sigma(i)^n, n\in \mathbb{N}\}$.
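A quick numerical check of the closed form derived above (a sketch; the radii and angles are arbitrary test values):

import cmath, math

r1, r2, t1, t2 = 0.3, 0.7, 1.1, 2.5
z1, z2 = r1 * cmath.exp(1j*t1), r2 * cmath.exp(1j*t2)

direct = abs(z1) + abs(z2) - abs(z1 + z2)
closed = r1 + r2 - math.sqrt(r1**2 + r2**2 + 2*r1*r2*math.cos(t1 - t2))
print(abs(direct - closed) < 1e-12)  # True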
|
Coin game - applying Kelly criterion I'm looking at a simple coin game where I have \$100, variable betting allowed, and 100 flips of a fair coin where H=2x stake+original stake, T=lose stake.
* If I'm asked to maximise the expected final net worth $N$, am I meant to simply bet a fraction of $\frac{1}{4}$ (according to the Wikipedia article on the Kelly criterion)?
* What if I'm asked to maximise the expectation of $\ln(100+N)$? Does this change my answer?
Thanks for any help.
|
The Wikipedia essay says bet $p-(q/b)$, where $p$ is the probability of winning, $q=1-p$ of losing, and $b$ is the payment (not counting the dollar you bet) on a one dollar bet. For your game, $p=q=1/2$ and $b=2$ so, yes, bet one-fourth of your current bankroll.
Sorry, I'm not up to thinking about the logarithmic question.
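For the first part, a quick numerical check of the Kelly fraction (a sketch assuming the stated game: win $b=2$ times the stake with probability $1/2$, lose the stake otherwise):

import math

p, b = 0.5, 2  # win probability; net payout per dollar staked

def expected_log_growth(f):
    # expected log of the per-round wealth multiplier when betting fraction f
    return p * math.log(1 + b*f) + (1 - p) * math.log(1 - f)

best = max((f/1000 for f in range(999)), key=expected_log_growth)
print(best)  # 0.25, matching p - q/b = 1/2 - (1/2)/2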
|
How many ways can the letters in ARRANGEMENT be arranged? Using all the letters of the word ARRANGEMENT, how many different words using all the letters at a time can be made such that both A's, both E's, both R's and both N's occur together?
|
"ARRANGEMENT" is an eleven-letter word.
If there were no repeating letters, the answer would simply be $11!=39916800$.
However, since there are repeating letters, we have to divide to remove the duplicates accordingly.
There are 2 A's, 2 R's, 2 N's and 2 E's.
Therefore, there are $\frac{11!}{2!\cdot2!\cdot2!\cdot2!}=2494800$ ways of arranging it.
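A one-line check of the arithmetic (a sketch in Python):

from math import factorial
print(factorial(11) // factorial(2)**4)  # 2494800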
|
What do I use to find the image and kernel of a given matrix? I had a couple of questions about a matrix problem. What I'm given is:
Consider a linear transformation $T: \mathbb R^5 \to \mathbb R^4$ defined by $T( \vec{x} )=A\vec{x}$, where
$$A = \left(\begin{array}{rrrrr}
1 & 2 & 2 & -5 & 6\\
-1 & -2 & -1 & 1 & -1\\
4 & 8 & 5 & -8 & 9\\
3 & 6 & 1 & 5 & -7
\end{array}\right)$$
*
*Find $\mathrm{im}(T)$
*Find $\ker(T)$
My questions are:
What do they mean by the transformation?
What do I use to actually find the image and kernel, and how do I do that?
|
I could give an explanation for the most appreciated answer of why the image is calculated this way. The image of a matrix is basically all the vectors you can obtain after this linear transformation. Let's say $A$ is a $2 \times 2$ matrix
$$A=\pmatrix {a_1 & b_1\\ a_2 & b_2}.$$
If we apply $A$ as a linear transformation to the standard basis, aka the identity matrix, we get $A$ itself. So we can view the transformation as sending the basis vectors to the columns of $A$: $(1, 0)$ to $(a_1, a_2)$ and $(0, 1)$ to $(b_1, b_2)$. Therefore, the image of $A$ is just the span of the images of the basis vectors, in this case $\operatorname{span}((a_1, a_2), (b_1, b_2))$. This is the reason why we take the rref of the transpose of $A$: we are simply extracting linearly independent basis vectors from among these images. If there's anything unclear, I really recommend you watch this video made by 3Blue1Brown; it shows this in a visual way. Here's the link: https://www.youtube.com/watch?v=uQhTuRlWMxw
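To actually compute both subspaces for the matrix in the question, a sketch using sympy (assuming sympy is available):

from sympy import Matrix

A = Matrix([
    [ 1,  2,  2, -5,  6],
    [-1, -2, -1,  1, -1],
    [ 4,  8,  5, -8,  9],
    [ 3,  6,  1,  5, -7],
])

print(A.columnspace())  # basis of im(T): the pivot columns of A
print(A.nullspace())    # basis of ker(T): solutions of A*x = 0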
|
What does brace below the equation mean? An example of what I am trying to understand is found on this page, at Eq. 3.
There are two braces under the equation... What is the definition of the brace(s) and how does it relate to Sp(t) and S[k]? This is what 4 years of calculus gets you 20+ years later...
http://en.wikipedia.org/wiki/Poisson_summation_formula
Thanks
|
It is a shortcut letting you know that the expression above the brace is equal to the label under it (either by definition or by calculation).
|
No Nonzero multiplication operator is compact Let $f,g \in L^2[0,1]$, multiplication operator $M_g:L^2[0,1] \rightarrow L^2[0,1]$ is defined by $M_g(f(x))=g(x)f(x)$. Would you help me to prove that no nonzero multiplication operator on $L^2[0,1]$ is compact. Thanks.
|
We show that if $g$ is not the equivalence class of the null function, then $M_g$ is not compact. Let $c>0$ be such that $\lambda(\{x,|g(x)|>c\})>0$ (such a $c$ exists by assumption). Let $S:=\{x,|g(x)|>c\}$, $H_1:=L^2[0,1]$, $H_2:=\{f\in H_1, f=f\chi_S\}$. Then $T\colon H_2\to H_2$ given by $T(f)=M_g(f)$ is onto. Indeed, if $h\in H_2$, then $T(h\cdot \chi_S \cdot g^{-1})=h\cdot\chi_S=h$.
As $H_2$ is a closed subspace of $H_1$, it's a Banach space. By the open mapping theorem, this gives that $T$ is open. It's also compact, so $T(B(0,1))$ is open and has compact closure. By Riesz's theorem, $H_2$ is finite-dimensional.
But for each $N$, we can find $N+1$ disjoint subsets of $S$ which have positive measure, and their characteristic functions will be linearly independent, which gives a contradiction.
|
Convergence of $\sum_{n=1}^\infty \frac{a_n}{n}$ with $\lim(a_n)=0$. Is it true that if $(a_n)_{n=1}^\infty$ is any sequence of positive real numbers such that $$\lim_{n\to\infty}(a_n)=0$$
then,
$$\sum_{n=1}^\infty \frac{a_n}{n}$$
converges?
If yes, how to prove it?
|
It is false. For $n\gt 1$, let $a_n=\dfrac{1}{\log n}$.
The divergence can be shown by noting that $\int_2^\infty \frac{dx}{x\log x}$ diverges. (An antiderivative is $\log\log x$.)
|
How to show that two equivalence classes are either equal or have an empty intersection? For $x \in X$, let $[x]$ be the set $[x] = \{a \in X | \ x \sim a\}$.
Show that given two elements $x,y \in X$, either
a) $[x]=[y]$ or
b) $[x] \cap [y] = \varnothing$.
How I started it is, if $[x] \cap [y]$ is not empty, then $[x]=[y]$, but then I am kind of lost.
|
The problem with your "start" is that you are assuming exactly what you want to prove.
You need to apply what you know about the properties of an equivalence relation, in this case denoted by $\;\sim\;$. You'll need to use the definitions of $[x], [y]$: $$[x] = \{a \in X | \ x \sim a\} \text{ and}\;\;[y] = \{a \in X | y \sim a\}.\tag{1}$$
Note that $[x]$ and $[y]$ are defined to be sets, which happen also to be equivalence classes. To prove that two sets are equal, show that each is the subset of the other.
$$\text{Now, suppose that}\;\; [x] \cap [y] \neq \varnothing.\tag{2}$$
Then there must be at least one element $a\in X$ that is in both equivalence classes.
So we have $a \in [x]$ and $a\in [y]$. Here's where the definitions given by $(1)$ come in to play; together with the definition of an equivalence relation (the fact that $\sim$ is reflexive, symmetric, transitive), you can show that:
* $a \in [x]$ and $a \in [y]$ means $x \sim a$ and $y \sim a$; by symmetry $a \sim y$, and then transitivity gives $x \sim y$ (and $y \sim x$).
Now for any $b\in[x]$ we have $x\sim b$, so $y\sim x$ and transitivity give $y\sim b$, that is, $b\in[y]$; the same argument with $x$ and $y$ exchanged gives the reverse inclusion. And so we have $$[x]\subset [y] \;\;\text{and}\;\; [y]\subset [x]\iff [x]=[y].$$
Therefore, having assumed $(2)$, it follows that $[x] = [y]$.
The only other option is that $(2)$ is false, in which case we have $[x] \cap [y] = \varnothing$.
$\therefore$ either $[x] = [y]$ OR $[x] \cap [y] = \varnothing$.
|
RSA solving for primes p and q knowing n = pq and p - q I was also given these:
$p+q=n-\phi(n)+1$
$p-q=\sqrt{(p+q)^2-4n}$
$\phi(n)=(p-1)(q-1)$
$p>q$
I've been trying to manipulate this as a system of equations, but it's just not working out for me. I found a similar problem on this site, but instead of $pq$ and $p-q$ being known, it had $pq$ and $\phi(pq)$, so that didn't help. The $\phi(n)$ function has never been mentioned in this class before, so I can't use any definitions of it (aside from the one given above) without proving them. Could someone please point me in the right direction here?
|
We have
$$(p+q)^2=(p-q)^2+4pq.$$
Calculating the right-hand side is very cheap. Then calculating $p+q$ is cheap, a mild variant of Newton's Method. Once we know $p+q$ and $p-q$, calculating $p$ and $q$ is very cheap.
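A short sketch of the whole recovery in Python (math.isqrt plays the role of the cheap integer square root):

from math import isqrt

def recover_primes(n, d):
    """Recover p and q from n = p*q and d = p - q (with p > q)."""
    s = isqrt(d*d + 4*n)       # s = p + q, since (p+q)^2 = (p-q)^2 + 4pq
    assert s*s == d*d + 4*n    # sanity check: the square root is exact
    return (s + d) // 2, (s - d) // 2

print(recover_primes(35, 2))   # (7, 5)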
|
Cancellation laws for function composition Okay I was asked to make a conjecture about cancellation laws for function composition. I figured it would go something like "For all sets $A$ and functions $g: A \rightarrow B$ and $h: A \rightarrow B$, $f \circ g = f \circ h$ implies that $g=h$."
I'm pretty sure $g=h$ isn't always true, but is there a property of $f$ that makes this true?
|
Notice that if there are distinct $b_1,b_2\in B$ such that $f(b_1)=f(b_2)$, you won’t necessarily be able to cancel $f$: there might be some $a\in A$ such that $g(a)=b_1$ and $h(a)=b_2$, but you’d still have $(f\circ g)(a)=(f\circ h)(a)$. Thus, you want $f$ to be injective (one-to-one). Can you prove that that’s sufficient?
|
Calculate a point on the line at a specific distance. I have two points which make a line $l$, say $(x_1,y_1)$ and $(x_2,y_2)$. I want a new point $(x_3,y_3)$ on the line $l$ at a distance $d$ from $(x_2,y_2)$, in the direction away from $(x_1,y_1)$. How should I do this in one or two equations?
|
A point $(x,y)$ is on the line through $(x_1,y_1)$ and $(x_2,y_2)$ if and only if, for some $t\in\mathbb{R}$,
$$(x,y)=t(x_1,y_1)+(1-t)(x_2,y_2)=(tx_1+(1-t)x_2,ty_1+(1-t)y_2)$$
You need to solve
$$\begin{align*}d&=\|(x_2,y_2)-(tx_1+(1-t)x_2,ty_1+(1-t)y_2)\|=\sqrt{(tx_2-tx_1)^2+(ty_2-ty_1)^2}\\
&=\sqrt{t^2}\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}\hspace{5pt}\Rightarrow\hspace{5pt} |t|=\frac{d}{\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}}\end{align*}$$
You will have two values of $t$. For $t>0$ this point will be in the direction to $(x_1,y_1)$ and for $t<0$ it will be in the direction away from $(x_1,y_1)$.
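Putting the answer into code, a sketch using the $t<0$ branch (the point beyond $(x_2,y_2)$):

import math

def point_beyond(x1, y1, x2, y2, d):
    """Point on the line at distance d from (x2, y2), away from (x1, y1)."""
    t = -d / math.hypot(x2 - x1, y2 - y1)   # the t < 0 solution from above
    return (t*x1 + (1 - t)*x2, t*y1 + (1 - t)*y2)

print(point_beyond(0, 0, 3, 4, 5))  # (6.0, 8.0): 5 units past (3, 4)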
|
Existence of two solutions I am having a problem with the following exercise.
I need to show the $x^2 = \cos x $ has two solutions.
Thank you in advance.
|
Let $f(x)=x^2-\cos x$. Note that the curve $y=f(x)$ is symmetric about the $y$-axis. It will thus be enough to show that $f(x)=0$ has a unique positive solution. That there is a unique negative solution follows by symmetry.
There is a positive solution, since $f(0)\lt 0$ and $f(100)\gt 0$. (Then use the Intermediate Value Theorem.)
For uniqueness of the positive solution, note that $f'(x)=2x+\sin x$. In the interval $(0,\pi/2)$, $f'(x)$ is positive because both terms are positive. And for $x\ge \pi/2$, we have $f'(x)\ge \pi-1$.
|
How do I get the residue of the given function? I'm reading the solution of the integral:
$$\int\limits_{-\infty}^{\infty} dx\frac{e^{ax}}{1+e^x}$$
by the residue method. And I understood everything except how to get the residue of $\frac{e^{az}}{1+e^z}$ (the book just states that the residue is $-e^{i\pi a}$).
I know there is a simple pole at $z=i\pi$ and that is the point where I want the residue.
Since it is a simple pole I tried using the formula $a_{-1}=\lim_{z\to z_0}(z-z_0)f(z)$ by using the series expansion of the exponential function, and I got to this formula
$$a_{-1}=-e^{i\pi a}\left[\frac{\left(1+\sum\limits_{n=1}^{\infty}\frac{(z-i\pi)^{n-1}}{n!}(z-i\pi)\right)^a}{\sum\limits_{n=1}^{\infty}\frac{(z-i\pi)^{n-1}}{n!}}\right]_{z=i\pi}$$but I believe that's wrong and I couldn't find my mistake or another way of solving it.
|
$$\lim_{z\to\pi i}(z-\pi i)\frac{e^{az}}{1+e^z}\stackrel{\text{L'Hôpital}}{=} \lim_{z\to\pi i}\frac{e^{az}}{e^z} = \frac{e^{a\pi i}}{e^{\pi i}} = -e^{a\pi i}.$$
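A numerical spot check of that limit (a sketch; the value $a=0.3$ is an arbitrary choice):

import cmath, math

a = 0.3
z0 = 1j * math.pi
eps = 1e-7
approx = eps * cmath.exp(a*(z0 + eps)) / (1 + cmath.exp(z0 + eps))
print(approx)                # close to the claimed residue
print(-cmath.exp(a * z0))    # -e^{a*pi*i}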
|
Why is the set of all real numbers uncountable? I understand Cantor's diagonal argument, but it just doesn't seem to jive for some reason.
Lets say I assign the following numbers ad infinitum...
* $1\to 0.1$
* $2\to 0.2$
* $3\to 0.3$
...
* $10\to 0.10$
* $11\to 0.11$
and so on... How come there's supposedly at least one more real number than you can map to a member of $\mathbb{N}$?
|
How come there's supposedly at least one more real number than you can map to a member of $\mathbb N$?
Well, suppose there isn't - that Cantor's conclusion, his theorem, is wrong, because our enumeration covers all real numbers. Wonderful, but let us see what happens when we take our enumeration and apply Cantor's diagonal technique to obtain a real number that can't be in this sequence. But that contradicts our supposition! Hence our supposition - that we can have an enumeration of all reals - is false. That the argument can be applied to any enumeration is what it takes for Cantor's theorem to be true.
Wilfrid Hodges wrote an excellent survey of wrong refutations of Cantor's argument, his An Editor Recalls Some Hopeless Papers (Postscript). Section 7, dealing with how the counterfactual assumption confuses, might be of particular interest.
|
Given this transformation matrix, how do I decompose it into translation, rotation and scale matrices? I have this problem from my Graphics course. Given this transformation matrix:
$$\begin{pmatrix}
-2 &-1& 2\\
-2 &1& -1\\
0 &0& 1\\
\end{pmatrix}$$
I need to extract translation, rotation and scale matrices.
I also have the answer (which is $TRS$):
$$T=\begin{pmatrix}
1&0&2\\
0&1&-1\\
0&0&1\end{pmatrix}\\
R=\begin{pmatrix}
1/\sqrt2 & -1/\sqrt2 &0 \\
1/\sqrt2 & 1/\sqrt2 &0 \\
0&0&1
\end{pmatrix}\\
S=\begin{pmatrix}
-2/\sqrt2 & 0 & 0 \\
0 & \sqrt2 & 0 \\
0& 0& 1
\end{pmatrix}
$$
I just have no idea (except for the Translation matrix) how I would get to this solution.
|
It appears you are working with Affine Transformation Matrices, which is also the case in the other answer you referenced, which is standard for working with 2D computer graphics. The only difference between the matrices here and those in the other answer is that yours use the square form, rather than a rectangular augmented form.
So, using the labels from the other answer, you would have
$$
\left[
\begin{array}{ccc}
a & b & t_x\\
c & d & t_y\\
0 & 0 & 1\end{array}\right]=\left[\begin{array}{ccc}
s_{x}\cos\psi & -s_{x}\sin\psi & t_x\\
s_{y}\sin\psi & s_{y}\cos\psi & t_y\\
0 & 0 & 1\end{array}\right]
$$
The matrices you seek then take the form:
$$
T=\begin{pmatrix}
1 & 0 & t_x \\
0 & 1 & t_y \\
0 & 0 & 1 \end{pmatrix}\\
R=\begin{pmatrix}
\cos{\psi} & -\sin{\psi} &0 \\
\sin{\psi} & \cos{\psi} &0 \\
0 & 0 & 1 \end{pmatrix}\\
S=\begin{pmatrix}
s_x & 0 & 0 \\
0 & s_y & 0 \\
0 & 0 & 1 \end{pmatrix}
$$
If you need help with extracting those values, the other answer has explicit formulae.
|
Limit: $\lim\limits_{n\rightarrow\infty}\left ( n\bigl(1-\sqrt[n]{\ln(n)} \bigr) \right )$ I find it difficult to evaluate $$\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right ).$$ I tried to use the fact that $$\frac{1}{1-n} \geqslant \ln(n)\geqslant 1+n$$
which gives $$\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right ) \geqslant \lim_{n\rightarrow\infty} n(1-\sqrt[n]{1+n}) =\lim_{n\rightarrow\infty}n \cdot\lim_{n\rightarrow\infty}(1-\sqrt[n]{1+n})$$ $$(1-\sqrt[n]{1+n})\rightarrow -1\Rightarrow\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right )\rightarrow-\infty$$ Is this correct? If not, what did I do wrong?
|
Use Taylor!
$$n(1-\sqrt[n]{\log n}) = n (1-e^{\frac{\log\log n}{n}}) \approx n\left(1-\left(1+\frac{\log\log n}{n}\right)\right) = - \log\log n,$$
which clearly tends to $-\infty$. The approximation is legitimate because $u_n=\frac{\log\log n}{n}\to 0$ and $e^{u}=1+u+O(u^2)$, so the neglected part contributes only $n\cdot O(u_n^2)=O\!\left(\frac{(\log\log n)^2}{n}\right)\to 0$.
|
Prove sum is bounded I have the following sum:
$$
\sum\limits_{i=1}^n \binom{i}{i/2}p^\frac{i}{2}(1-p)^\frac{i}{2}
$$
where $p<\frac{1}{2}$
I need to prove that this sum is bounded. i.e. it doesn't go to infinity as n goes to infinity.
|
Instead of an explicit bound, you may use Stirling's formula, which yields
$\displaystyle {n \choose n/2} \sim \sqrt{2 / \pi} \cdot n^{-1/2} 2^n$ as $n \to \infty$. The $i$-th term is then of order $i^{-1/2}\left(4p(1-p)\right)^{i/2}$, and since $4p(1-p)<1$ for $p<\frac12$, the terms decay geometrically, so the sum is bounded.
|
is $0.\overline{99}$ the same as $\lim_{x \to 1} x$? So we had an interesting discussion the other day about 0.999... repeated to infinity, actually being equal to one. I understand the proof, but I'm wondering then if you had the function...
$$
f(x) = x \cdot \frac{(x-1)}{(x-1)}
$$
so
$$
f(1) = NaN
$$
and
$$
\lim_{x \to 1} f(x) = 1
$$
what would the following be equal to?
$$
f(0.\overline{999}) = ?
$$
|
If $0.\overline9=1$ then $f(0.\overline9)$ is as undefined as $f(1)$ is. However indeed $\lim_{x\to 1}f(x)=1$ as you said.
The reason for the above is simple. If $a$ and $b$ are two terms, and $a=b$ then $f(a)=f(b)$, regardless to what $f$ is or what are the actual terms. Once you agreed that $0.\overline9=1$ we have to have $f(0.\overline9)=f(1)$.
|
4 Points on Circumference of Circle and center This is actually a computer science question in that I need to create a program that will determine the center of a circle given $4$ points on its circumference.
Does anyone know the algorithm, theorem or method? I think it has something to do with cyclic quadrilaterals.
Thanks.
|
The perpendicular bisector of a chord of a circle goes through the center of the circle. Therefore, if you have two chords, then the perpendicular bisectors intersect at exactly the center of the circle.
So, given four points on the circle, draw chords between pairs of them, draw the perpendicular bisectors of the chords, and find the point of intersection.
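Since this is for a program: a sketch that intersects the two perpendicular bisectors by solving a $2\times 2$ linear system (three of your four points suffice; the fourth can serve as a consistency check):

def circumcenter(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # |P - p1|^2 = |P - p2|^2 and |P - p2|^2 = |P - p3|^2 expand to two
    # linear equations in the center P = (x, y)
    a1, b1, c1 = 2*(x2 - x1), 2*(y2 - y1), x2**2 - x1**2 + y2**2 - y1**2
    a2, b2, c2 = 2*(x3 - x2), 2*(y3 - y2), x3**2 - x2**2 + y3**2 - y2**2
    det = a1*b2 - a2*b1
    if det == 0:
        raise ValueError("the points are collinear")
    return ((c1*b2 - c2*b1) / det, (a1*c2 - a2*c1) / det)

print(circumcenter((1, 0), (0, 1), (-1, 0)))  # (0.0, 0.0)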
|
Generating function for binomial coefficients $\binom{2n+k}{n}$ with fixed $k$ Prove that
$$
\frac{1}{\sqrt{1-4t}} \left(\frac{1-\sqrt{1-4t}}{2t}\right)^k = \sum\limits_{n=0}^{\infty}\binom{2n+k}{n}t^n,
\quad
\forall k\in\mathbb{N}.
$$
I tried already by induction over $k$, but I have problems showing the statement holds for $k=0$ or $k=1$.
|
Due to a recent comment on my other answer, I took a second look at this question and tried to apply a double generating function.
$$
\begin{align}
&\sum_{n=0}^\infty\sum_{k=-n}^\infty\binom{2n+k}{n}x^ny^k\\
&=\sum_{n=0}^\infty\sum_{k=n}^\infty\binom{k}{n}\frac{x^n}{y^{2n}}y^k\\
&=\sum_{n=0}^\infty\frac{x^n}{y^{2n}}\frac{y^n}{(1-y)^{n+1}}\\
&=\frac1{1-y}\frac1{1-\frac{x}{y(1-y)}}\\
&=\frac{y}{y(1-y)-x}\\
&=\frac1{\sqrt{1-4x}}\left(\frac{1+\sqrt{1-4x}}{1+\sqrt{1-4x}-2y}-\frac{1-\sqrt{1-4x}}{1-\sqrt{1-4x}-2y}\right)\\
&=\frac1{\sqrt{1-4x}}\left(\frac{1+\sqrt{1-4x}}{1+\sqrt{1-4x}-2y}+\color{#C00000}{\frac{2x/y}{1+\sqrt{1-4x}-2x/y}}\right)\tag{1}
\end{align}
$$
The term in red contains those terms with negative powers of $y$. Eliminating those terms yields
$$
\begin{align}
\sum_{n=0}^\infty\sum_{k=0}^\infty\binom{2n+k}{n}x^ny^k
&=\frac1{\sqrt{1-4x}}\frac{1+\sqrt{1-4x}}{1+\sqrt{1-4x}-2y}\\
&=\frac1{\sqrt{1-4x}}\sum_{k=0}^\infty\left(\frac{2y}{1+\sqrt{1-4x}}\right)^k\\
&=\frac1{\sqrt{1-4x}}\sum_{k=0}^\infty\left(\frac{1-\sqrt{1-4x}}{2x}\right)^ky^k\tag{2}
\end{align}
$$
Equating identical powers of $y$ in $(2)$ shows that
$$
\sum_{n=0}^\infty\binom{2n+k}{n}x^n=\frac1{\sqrt{1-4x}}\left(\frac{1-\sqrt{1-4x}}{2x}\right)^k\tag{3}
$$
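A quick sanity check of $(3)$ with sympy (a sketch; $k=2$ is an arbitrary spot check):

from sympy import symbols, sqrt, binomial, series

x = symbols('x')
k = 2
lhs = 1/sqrt(1 - 4*x) * ((1 - sqrt(1 - 4*x))/(2*x))**k
print(series(lhs, x, 0, 5))                      # 1 + 4*x + 15*x**2 + 56*x**3 + 210*x**4 + O(x**5)
print([binomial(2*n + k, n) for n in range(5)])  # [1, 4, 15, 56, 210]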
|
Rolle's Theorem Let $f$ be a continuous function on $[a,b]$ and differentiable on $(a,b)$, where $a<b$. Suppose $f(a)=f(b)$. Prove that there exists number $c_{1},c_{2},...,c_{2012}$ $\in$ $(a,b)$ satisfying $c_{1} < c_{2} <...< c_{2012}$ and $f'(c_{1})+f'(c_{2})+...+f'(c_{2012})=0$.
I believe it has something to do with Rolle's Theorem, judging by the hypotheses. However, I can't seem to find a way to tackle this problem. Any help is appreciated, thanks!
|
Hint: Given $n\in\mathbb{N}$, consider the function $g_n(x)=\sum_{k=0}^{n-1}f(a+\frac{(b-a)(x+k)}{n})$, $x\in[0,1]$.
|
Mathematical induction on an inequality: $2^n \ge 3n^2 +5$ for $n\ge8$ I want to prove
$2^n \ge 3n^2 +5$--call this statement $S(n)$--for $n\ge8$
Basis step with $n = 8$: $\text{LHS} = 256 \geq 197 = \text{RHS}$, so $S(8)$ is true. Then I proceed to the inductive step by assuming $S(k)$ is true, so
$2^k \ge 3k^2 +5 $
Then $S(k+1)$ is
$2^{k+1} \ge 3(k+1)^2 + 5$
I need to prove it so I continue by
$2^{k+1} \ge 3k^2 +5$
$2(2^k) \ge 2(3k^2 +5)$
I'm stuck here and don't know how to continue; please explain it to me step by step. I have searched for a whole day already, and everything I found gives the answer without explanation. I can't understand it, sorry for the trouble.
|
The missing step (because there is indeed a missing step) is that $2\cdot(3k^2+5)\geqslant3(k+1)^2+5$. This inequality is equivalent to $3k^2-6k+2\geqslant0$, which obviously holds for every $k\geqslant8$ since $3k^2-6k+2=3k(k-2)+2$, hence you are done.
The structure of the proof that $S(k)$ implies $S(k+1)$ for every $k\geqslant8$ is as follows:
* Assume that $S(k)$ holds, that is, $2^k\geqslant3k^2+5$, and that $k\geqslant8$.
* Then $2^{k+1}=2\cdot2^k$ hence $2^{k+1}\geqslant2\cdot(3k^2+5)$.
* Furthermore, $2\cdot(3k^2+5)\geqslant3(k+1)^2+5$ (the so-called missing step).
* Hence $2^{k+1}\geqslant3(k+1)^2+5$, which is $S(k+1)$.
|
Help solving $\frac{1}{{2\pi}}\int_{-\infty}^{+\infty}{{e^{-{{\left({\frac{t}{2}} \right)}^2}}}{e^{-i\omega t}}dt}$ I need help with what seems like a pretty simple integral for a Fourier Transformation. I need to transform $\psi \left( {0,t} \right) = {\exp^{ - {{\left( {\frac{t}{2}} \right)}^2}}}$ into $\psi(0,\omega)$ by solving:
$$
\frac{1}{{2\pi }}\int_{ - \infty }^{ + \infty } {\psi \left( {0,t} \right){e^{ - i\omega t}}dt}
$$
So far I've written (using Euler's formula):
$$\psi \left( {0,\omega } \right) = \frac{1}{{2\pi }}\int_{ - \infty }^{ + \infty } {\psi \left( {0,t} \right){e^{ - i\omega t}}dt} = \frac{1}{{2\pi }}\left( {\int_{ - \infty }^{ + \infty } {{e^{ - {{\left( {\frac{t}{2}} \right)}^2}}}\cos \omega tdt - i\int_{ - \infty }^{ + \infty } {{e^{ - {{\left( {\frac{t}{2}} \right)}^2}}}\sin \omega tdt} } } \right)$$
$$
\begin{array}{l}
= \frac{1}{{2\pi }}\left( {{I_1} - i{I_2}} \right)\\ \end{array}$$
I just don't recall a way to solve these integrals by hand. Wolfram Alpha tells me that the result of the first integral is ${I_1} = 2\sqrt \pi {e^{ - {\omega ^2}}}$ and for the second $I_{2}=0$. But in my notes I have ${I_1} = 2\sqrt \pi {e^{ - {{\left( {{\omega ^2}/2} \right)}^2}}}$.
Can anybody tell me how one can solve this type of integrals and if the result from Wolfram Alpha is accurate? Any help will be appreciated.
|
If you complete the square in the argument of the exponentials, $$ -\frac{1}{4}(t^2 + 4i \omega t) \to -\frac{1}{4}(t^2+4i\omega t -4\omega^2) -\omega^2 = -\frac{1}{4}(t+2i\omega)^2-\omega^2. $$
After the change of variables $u=\frac{t}{2}+i\omega$, the integral becomes $$2e^{-\omega^2}\int_{-\infty -i\omega}^{\infty -i\omega} e^{-u^2}\ du,$$ which, apart from the pesky factors of $-i\omega$ in the bounds, is a standard Gaussian integral equal to $\sqrt{\pi}$.
As a physicist, I'm inclined to just sweep these factors under the rug, but we can do better: if we form a contour in the complex plane along the paths $(-\infty,i0)\to(\infty,i0)\to(\infty,-i\omega)\to(-\infty,-i\omega)\to(-\infty,i0)$, the integrand kills the contributions moving in the imaginary direction, and the overall integral is zero since $e^{-u^2}$ has no poles. So our integral is equal to the one along the real axis, and we can discard the $-i\omega$ terms.
|
Is there a large-time behaviour of (compound) Poisson processes similar to the law of the iterated logarithm for Brownian motion? Can someone please point me to a reference/answer me this question?
From the law of iterated logarithm, we see that Brownian motion with drift converge to $\infty$ or $-\infty$.
For a Poisson process $N_t$ with rate $\lambda$, is there a similar result?
As an exercise, I am considering what happens if the Brownian motion exponential martingale (plus an additional drift $r$) is replaced by a Poisson one. More explicitly, that is
$\exp(aN_t-(e^a-1)\lambda t + rt)$
how would this behave as $t$ tend to $\infty$?
|
Since $N_t/t\to\lambda$ almost surely, $X_t=\exp(aN_t-(e^a-1)\lambda t + rt)=\exp(\mu t+o(t))$ almost surely, with $\mu=(1+a-\mathrm e^a)\lambda+r$. If $\mu\ne0$, this yields that $X_t\to0$ or that $X_t\to+\infty$ almost surely, according to the sign of $\mu$.
If $\mu=0$, the central limit theorem indicates that $N_t=\lambda t+\sqrt{\lambda t}Z_t$ where $Z_t$ converges in distribution to a standard normal random variable $Z$, hence $X_t=\exp(a\sqrt{\lambda t}Z_t)$ and $X_t$ diverges in distribution (except in the degenerate case $a=r=0$) in the sense that, for every positive $x\leqslant y$, $\mathbb P(X_t\leqslant x)\to\frac12$ and $\mathbb P(X_t\geqslant y)\to\frac12$, hence $\mathbb P(x\leqslant X_t\leqslant y)\to0$.
Note: The LIL is a much finer result than all those above.
|
Prove that if $g^2=e$ for all $g$ in $G$ then $G$ is Abelian.
Prove that if $g^2=e$ for all $g$ in $G$ then $G$ is Abelian.
This question is from group theory in Abstract Algebra and no matter how many times my lecturer teaches it for some reason I can't seem to crack it.
(Please note that $e$ in the question is the group's identity.)
Here's my attempt though...
First I understand Abelian means that if $g_1$ and $g_2$ are elements of a group $G$ then they are Abelian if $g_1g_2=g_2g_1$...
So, I begin by trying to play around with the elements of the group based on their definition...
$$(g_2g_1)^r=e$$
$$(g_2g_1g_2g_2^{-1})^r=e$$
$$(g_2g_1g_2g_2^{-1}g_2g_1g_2g_2^{-1}...g_2g_1g_2g_2^{-1})=e$$
I assume that the $g_2^{-1}$'s and the $g_2$'s cancel out so that we end up with something like,
$$g_2(g_1g_2)^rg_2^{-1}=e$$
$$g_2^{-1}g_2(g_1g_2)^r=g_2^{-1}g_2$$
Then ultimately...
$$g_1g_2=e$$
I figure this is the answer. But I'm not totally sure. I always feel like I do too much in the pursuit of an answer when there's a simpler way.
Reference: Fraleigh p. 49 Question 4.38 in A First Course in Abstract Algebra.
|
Given $g^2=e$ for all $g\in G$, we have $g=g^{-1}$ for all $g\in G$.
Let $a,b\in G$. Now $$ab=a^{-1}b^{-1}=(ba)^{-1}=ba.$$
So $ab=ba$ for all $a,b\in G$; hence $G$ is an abelian group.
|
Basis for this $\mathbb{P}_3$ subspace. Just had an exam where the last question was:
Find a basis for the subset of $\mathbb{P}_3$ where $p(1) = 0$ for all $p$.
I answered $\{t,t^2-1,t^3-1\}$, but I'm not entirely confident in the answer. Did I think about the question in the wrong way?
|
Another way to get to the answer:
$P_3=\{ax^3+bx^2+cx+d:a,b,c,d{\rm\ in\ }{\bf R}\}$. For $p(x)=ax^3+bx^2+cx+d$ in $P_3$, $p(1)=0$ is $$a+b+c+d=0$$ So, you have a "system" of one linear equation in 4 unknowns. Presumably, you have learned how to find a basis for the vector space of all solutions to such a system, or, to put it another way, a basis for the nullspace of the matrix $$\pmatrix{1&1&1&1\cr}$$ One such basis is $$\{(1,-1,0,0),(1,0,-1,0),(1,0,0,-1)\}$$ which corresponds to the answer $$\{x^3-x^2,x^3-x,x^3-1\}$$ one of an infinity of correct answers to the question.
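A sketch of that nullspace computation with sympy (its basis differs from the one above by signs, which is fine, since any basis works):

from sympy import Matrix

basis = Matrix([[1, 1, 1, 1]]).nullspace()
print(basis)  # three vectors, e.g. (-1,1,0,0), (-1,0,1,0), (-1,0,0,1)
# reading (a, b, c, d) as coefficients of x^3, x^2, x, 1 turns these
# into the polynomials x^2 - x^3, x - x^3, 1 - x^3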
|
What is the Cumulative Distribution Function of the following random variable? Suppose that we have $2n$ iid random variables $X_1,\dots,X_n,Y_1,\dots,Y_n$ where $n$ is a large number.
I want to find $P\left(\left(k\sum_i X_iY_i+\left(\sum_i X_i\right)\left(\sum_j Y_j\right)\right)<c\right)$ for any integer $c$.
Since $n$ is a large number and all the random variables are iid, using the central limit theorem we can say that $k\sum_i X_iY_i$, $\sum_i X_i$ and $\sum_j Y_j$ are approximately normal random variables, and $\left(\sum_i X_i\right)\left(\sum_j Y_j\right)$ is the product of two normal random variables, which would have the normal product distribution.
So $k\sum_i X_iY_i+\left(\sum_i X_i\right)\left(\sum_j Y_j\right)$ is the sum of one normal and one normal-product random variable, which are dependent.
Now the question is: how can we find $P\left(\left(k\sum_i X_iY_i+\left(\sum_i X_i\right)\left(\sum_j Y_j\right)\right) \le c\right)$ for any integer $c$?
|
$$Z = \sum_{i=1}^n \sum_{j=1}^n X_i Y_j = \left(\sum_{i=1}^n X_i\right)\left(\sum_{j=1}^n Y_j\right)$$
If $n$ is large, $S_X = \sum_i X_i$ and $S_Y = \sum_j Y_j$ are approximately normal. They have means $n\mu$ and standard deviations $\sqrt{n} \sigma$ where each $X_i$ and $Y_j$ have mean $\mu$ and standard deviation $\sigma$. Of course they are independent. Thus
$E[Z] = E[S_X] E[S_Y] = n^2 \mu^2$ and $E[Z^2] = E[S_X^2] E[S_Y^2] = (n^2 \mu^2 + n \sigma^2)^2$, so the variance of $Z$ is $\text{Var}(Z) = E[Z^2] - E[Z]^2 = n^2 \sigma^4 + 2 n^3 \sigma^2 \mu^2$.
The product of independent normal random variables with means $n\mu$ and standard deviations $\sqrt{n}\,\sigma$ has, according to Maple, moment generating function
$$ M_Z(t) = E[e^{tZ}] = \frac{1}{\sqrt{1 - n^2 \sigma^4 t^2}} \exp\left(\frac{n^2 \mu^2 t}{1 - n \sigma^2 t}\right)$$
for $t < 1/(n \sigma^2)$.
EDIT: If $\mu \ne 0$, it would be better to separate out the effect of the mean. So let $X_i = \mu + \sigma U_i$ and $Y_i = \mu + \sigma V_i$, where $U_i$ and $V_i$ have mean $0$ and standard deviation $1$. Then $$Z = n^2 \mu^2 + n \mu \sigma \sum_{i=1}^n (U_i + V_i) + \sigma^2 \sum_{i=1}^n \sum_{j=1}^n U_i V_j$$
Now $n \mu \sigma \sum_{i=1}^n (U_i + V_i)$ is approximately normal with mean $0$ and
standard deviation $\sqrt{2} n^{3/2} \mu \sigma$, while $\sigma^2 \sum_{i=1}^n \sum_{j=1}^n U_i V_j$ has mean $0$ and standard deviation $n \sigma^2$. For large
$n$ this term is negligible compared to the $n^{3/2}$ term. So a good
approximation to the distribution of $Z$ is normal with mean $n^2 \mu^2$ and standard deviation $\sqrt{2} n^{3/2} \mu \sigma$.
You asked about $(k-1) \sum_i X_i Y_i+ Z$: call this $(k-1) T + Z$. If we separate out the effect of the mean,
$$T = n \mu^2 + \mu \sigma \sum_{i=1}^n (U_i + V_i) + \sigma^2\sum_{i=1}^n U_i V_i$$
where $\mu \sigma \sum_{i=1}^n (U_i + V_i)$ has mean $0$ and standard deviation $\sqrt{2n} \mu \sigma$ and $\sigma^2 \sum_{i=1}^n U_i V_i$ has mean $0$ and standard deviation $\sqrt{n} \sigma^2$. Again, these terms are negligible compared to the $n^2$ and $n^{3/2}$ terms.
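A Monte Carlo check of the normal approximation for $Z$ (a sketch; for concreteness it draws the $X_i$, $Y_j$ themselves as normals, which the argument above does not require):

import numpy as np

rng = np.random.default_rng(0)
n, mu, sigma, trials = 200, 1.0, 0.5, 20000
X = rng.normal(mu, sigma, (trials, n))
Y = rng.normal(mu, sigma, (trials, n))
Z = X.sum(axis=1) * Y.sum(axis=1)

print(Z.mean(), n**2 * mu**2)                     # empirical vs n^2 mu^2
print(Z.std(), np.sqrt(2) * n**1.5 * mu * sigma)  # empirical vs sqrt(2) n^(3/2) mu sigma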
|
In the ring $\mathbb Z[i]$ explain why our four units $\pm1$ and $\pm i$ divide every $u\in\mathbb Z[i]$. In the ring $\mathbb Z[i]$ explain why our four units $\pm1$ and $\pm i$ divide every $u\in\mathbb Z[i]$.
This is obviously an elementary question, but Gaussian integers are relatively new to me. I found this exercise in the textbook, and my professor overlooked it, but I'm curious.
Is this basically the same thing as, for a lack of a better term, "normal" integers? As in, $\pm1$ divides everything?
|
As important as it is to understand why the units divide everything, it's also important to understand why those are the only units (as they are probably being referred to as "the units" in class).
Let $z=a+bi$. Then we can define the norm of $z$ to be $N(z)=|z|^2=z\overline z = a^2+b^2$. Note, some people call $|z|$ the norm, and I'm not entirely sure which is more standard. Hopefully no confusion will arise here.
The norm satisfies two important properties. First, $N(xy)=N(x)N(y)$, and second, if $z$ is a Gaussian integer, then $N(z)$ is an integer.
Because of the first property, if $z$ has an inverse in the Gaussian integers, then $1=N(1)=N(z z^{-1})=N(z)N(z^{-1})$, and so $N(z^{-1})=N(z)^{-1}$. The only numbers such that $x$ and $x^{-1}$ are both integers are $\pm 1$, and since the norm is always non-negative (being the sum of square of real numbers), we just have to solve the equation
$$ N(z)=a^2+b^2=1. $$
Since every non-zero square of an integer is at least $1$, this has no solutions other than $z=\pm 1, \pm i$.
We are not done yet because we have only shown that if $z$ has an inverse then $N(z)=1$. We must check that each of these solutions is actually invertible. However, for any nonzero complex number, we have $z^{-1}=\frac{\overline z}{N(z)}$, so
$$(1)(1)=(-1)(-1)=(i)(-i)=1,$$
and so the inverses of these Gaussian integers are still integers.
By definition, an element $r$ of a ring is called a unit if there exists an $s$ such that $rs=sr=1$. In this case, $s$ is called the inverse of $r$, and is unique when it exists, so it is often written as $r^{-1}$. What we have shown so far is that the only units in $\mathbb Z[i]$ are $\pm 1, \pm i$.
As Gerry Myerson wrote, if $u$ is a unit, then for any $r$ in the ring, we can write $r=1r=(uu^{-1})r=u(u^{-1}r)$. Therefore, $r$ is divisible by $u$.
|
When is a factorial of a number equal to its triangular number? Consider the set of all natural numbers $n$ for which the following proposition is true.
$$\sum_{k=1}^{n} k = \prod_{k=1}^{n} k$$
Here's an example:
$$\sum_{k=1}^{3}k = 1+2+3 = 6 = 1\cdot 2\cdot 3=\prod_{k=1}^{3}k$$
Therefore, $3$ is in this set.
Does this set include more than just $3$? If so, is this set finite or infinite? Furthermore, can this set be described by a rule or formula?
[Just a tidbit: This question indicates the triangular number $1+2+3+\cdots+n$ is called the termial of $n$ and is denoted $n?$. I'm all for it; let's see if it catches on.]
[Another tidbit: the factorial of $n$, written $n!$ and called "$n$-factorial," is abbreviated "$n$-bang" in spoken word.]
|
The other answers are fine but it shouldn't be necessary to actually carry out the induction to see that the solution set is finite: the triangular numbers grow quadratically and factorials grow super-exponentially. A more interesting problem would be: how many solutions $(m,n)$ are there to $\sum_{k=1}^{m}{k} = \prod_{k=1}^{n}{k}$? In addition to $(1,1)$ and $(3,3)$ we also have $(15,5)$.
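A quick search confirming those are the only small solutions (a sketch; it solves the quadratic $m(m+1)/2=n!$ for $m$ and tests integrality):

from math import factorial, isqrt

for n in range(1, 15):
    disc = 1 + 8 * factorial(n)       # m = (-1 + sqrt(1 + 8*n!)) / 2
    r = isqrt(disc)
    if r * r == disc and (r - 1) % 2 == 0:
        print(((r - 1) // 2, n))      # prints (1, 1), (3, 3), (15, 5)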
|
numbers' pattern It is known that
$$\begin{array}{ccc}1+2&=&3 \\ 4+5+6 &=& 7+8 \\
9+10+11+12 &=& 13+14+15 \\
16+17+18+19+20 &=& 21+22+23+24 \\
25+26+27+28+29+30 &=& 31+32+33+34+35 \\ \ldots&=&\ldots
\end{array}$$
There is something similar for square numbers:
$$\begin{array}{ccc}3^2+4^2&=&5^2 \\ 10^2+11^2+12^2 &=& 13^2+14^2 \\ 21^2+22^2+23^2+24^2 &=& 25^2+26^2+27^2 \\ \ldots&=&\ldots
\end{array}$$
As such, I wonder if there are similar 'consecutive numbers' for cubic or higher powers.
Of course, we know it is impossible for the following to hold (by Fermat's Last Theorem):
$$k^3+(k+1)^3=(k+2)^3 $$
|
Let's start by proving the basic sequences and then see where and why trying to step it up to cubes fails. I don't prove anything here, just reduce the problem to a two-variable quartic Diophantine equation.
Lemma $1 + 2 + 3 + 4 + \ldots + n = T_1(n) = \frac{n(n+1)}{2}$.
Corollary $(k+1) + (k+2) + \ldots + (k+n) = -T_1(k) + T_1(k+n)$
The first sequence of identities is $$-T_1(s(n)) + T_1(s(n)+n+1) = -T_1(s(n)+n+1) + T_1(s(n)+2n+1)$$ so computing
? f(x) = (x*(x+1))/2
? (-f(s)+f(s+n+1))-(-f(s+n+1)+f(s+2*n+1))
% = -n^2 + (s + 1)
we find $s(n) = n^2-1$ and prove it.
Lemma $1^2 + 2^2 + 3^2 + 4^2 + \ldots + n^2 = T_2(n) = \frac{n(n+1)(2n+1)}{6}$.
The second sequence of identities is $$-T_2(s(n)) + T_2(s(n)+n+1) = -T_2(s(n)+n+1) + T_2(s(n)+2n+1)$$ so computing
? f(x) = (x*(x+1)*(2*x+1))/6
? (-f(s-1)+f(s-1+n+1))-(-f(s-1+n+1)+f(s-1+2*n+1))
% = -2*n^3 + (-2*s - 1)*n^2 + s^2
this is a weird quadratic equation in two integers with some solutions (n,s) = (1,3), (2,10), (3,21), (4,36), (5,55), (6,76), ...
the discriminant of the polynomial (as a polynomial in $s$) is $2^2 n^2 (n+1)^2$
so actually we can solve it and that explains where there's one solution for each $n$.
Now let's try cubes... but at this point we know it's not going to work.
? f(x) = ((x^2+x)/2)^2
? (-f(s-1)+f(s-1+n+1))-(-f(s-1+n+1)+f(s-1+2*n+1))
% = -7/2*n^4 + (-6*s - 3)*n^3 + (-3*s^2 - 3*s - 1/2)*n^2 + s^3
so this is too complicated to actually solve but if anyone proves this doesn't have solutions for positive $n$ that will show there are no such cubic sequences.
For reference $$7n^4 + (12s + 6)n^3 + (6s^2 + 6s + 1)n^2 - 2s^3 = 0$$ is the Diophantine equation that obstructs a cubic sequence from existing.
Maybe you could conclude by the Mordell Conjecture that there's no infinite family of sequences of identities for cubic and higher power sums, if you can show these polynomials are always irreducible.
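A finite brute-force search over the quartic (a sketch, and of course not a proof):

# look for positive solutions of 7n^4 + (12s+6)n^3 + (6s^2+6s+1)n^2 = 2s^3
sols = [(n, s)
        for n in range(1, 100)
        for s in range(1, 3000)
        if 7*n**4 + (12*s + 6)*n**3 + (6*s**2 + 6*s + 1)*n**2 == 2*s**3]
print(sols)  # expected: [] in this range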
|
About the asymptotic behaviour of $\sum_{n\in\mathbb{N}}\frac{x^{a_n}}{a_n!}$ Let $\{a_n\}_{n\in\mathbb{N}}$ be an increasing sequence of natural numbers, and
$$ f_A(x)=\sum_{n\in\mathbb{N}}\frac{x^{a_n}}{a_n!}. $$
There are some cases in which the limit
$$ l_A=\lim_{x\to+\infty} \frac{1}{x}\,\log(f_A(x)) $$
does not exist. However, if $\{a_n\}_{n\in\mathbb{N}}$ is an arithmetic progression, we have $l_A=1$ (it follows from a straightforward application of the discrete Fourier transform). Consider now the case $a_n=n^2.$
* Is it true that there exists a positive constant $c$ for which
$$\forall x>0,\quad e^{-x}f_A(x)=\sum_{k\in\mathbb{N}}x^k\left(\sum_{0\leq j\leq\sqrt{k}}\frac{(-1)^{k-j^2}}{(j^2)!\,(k-j^2)!}\right)\geq c\;?$$
* Is it true that $l_A=1$?
|
It is true that $l_A=1$. The logic is similar to my answer to $\lim\limits_{x\to\infty}f(x)^{1/x}$ where $f(x)=\sum\limits_{k=0}^{\infty}\cfrac{x^{a_k}}{a_k!}$. Firstly, the terms after $n=3x$ don't matter:
$$\sum_{n=3x}^\infty x^n/n! <\sum_{n=3x}^\infty x^n /(3x/e)^n=C$$ by Stirling approximation.
But for $n<3x$ there is going to be a perfect square $s$ between $n$ and $n+2\sqrt{3x}$ (this is just $(k+1)^2-k^2=2k+1$ and $k< \sqrt{3x}$). Then the values $x^s/s!$ and $x^n/n!$ differ by a factor of at most $(3x)^{2\sqrt{3x}}$,
So if I multiply each term $x^s/s!$ by that ratio and take at least $2\sqrt{3x}$ copies, I will have for each $x^n/n!$ (with $n<3x$) at least one term at least as big. This means that $$f_A(x)\,(3x)^{2\sqrt{3x}}\cdot 2\sqrt{3x} +C > e^x$$ and so $l_A=1$.
(I think the ratio of terms can actually be made $3^{2\sqrt{3x}}$, but it works as is too.)
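A numerical sanity check with mpmath (a sketch; the truncation point N is an assumption justified by the factorial decay of the terms):

import mpmath as mp

mp.mp.dps = 50

def log_fA_over_x(x):
    # terms with n^2 well past e*x are negligible, so truncate there
    N = int(mp.sqrt(10 * x)) + 10
    s = mp.fsum(mp.mpf(x)**(n*n) / mp.factorial(n*n) for n in range(N))
    return mp.log(s) / x

for x in (10, 100, 1000):
    print(x, log_fA_over_x(x))  # the ratio creeps up toward 1 as x grows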
|
Plotting an integral of a function in Octave I try to integrate a function and plot it in Octave.
Integration itself works, i.e. I can evaluate the function g like g(1.5) but plotting fails.
f = @(x) ( (1) .* and((0 < x),(x <= 1)) + (-1) .* and((1 <x),(x<=2)));
g = @(x) (quadcc(f,0,x));
x = -1.0:0.01:3.0;
plot(x,g(x));
But receive the following error:
quadcc: upper limit of integration (B) must be a single real scalar
As far as I can tell this is because the plot passes a vector (namely x) to g which passes it down to quadcc which cannot handle vector arguments for the third argument.
So I understand what's the reason for the error but have no clue how to get the desired result instead.
N.B.
This is just a simplified version of the real function I use, but the real function is also constant on a finite set of intervals ( number of intervals is less than ten if that matters).
I need to integrate the real function 3 times in succession (f represents a jerk and I need to determine functions for acceleration, velocity and distance). So I cannot compute the integrals by hand like I could in this simple case.
|
You could use cumtrapz instead of quadcc: it returns the cumulative integral over the whole sampled vector in one vectorized call, so something like plot(x, cumtrapz(x, f(x))) gives the desired graph directly.
|
$M : K$ need not be radical Show that $M : K$ need not be radical, where $L : K$ is a radical extension in $\Bbb C$ and $M$ is an intermediate field.
|
Let $K = \Bbb Q$ and $M$ be the splitting field of $X^3 - 3X + 1 \in \Bbb Q[X]$.
$M$ can be embedded into $\Bbb R$, so it is not a radical extension by casus irreducibilis.
However, $X^3 - 3X + 1$ has a solvable Galois group $C_3$ (the cyclic group of order $3$), so $M$ can be embedded into some field $L$ that is radical over $K = \Bbb Q$.
|
Tangent Spaces and Morphisms of Affine Varieties In page 205 of "Algebraic Curves, Algebraic Manifolds and Schemes" by Shokurov and Danilov, the tangent space $T_x X$ of an affine variety $X$ at a point $x \in X$ is defined as the subspace of $K^n$, where $K$ is the underlying field, such that $\xi \in T_x X$ if $(d_x g)(\xi)=0$ for any $g \in I$, where $I$ is the ideal of $K[T_1,\cdots,T_n]$ that defines $X$ and by definition $(d_x g)(\xi)=\sum_{i=1}^n \frac{\partial g}{\partial T_i}(x) \xi_i$, where partial derivatives are formal. So far so good. Next, it is mentioned, that if $f:X \rightarrow Y$ is a morphism of affine varieties, then we obtain a well-defined map $d_x f : T_x X \rightarrow T_{f(x)} Y$. How is this mapped defined and why is it well-defined?
|
If $X\subset K^n$ and $Y\subset K^m$ are affine subvarieties , the map $f:X\to Y$ is the restriction of some polynomial map $F:K^n\to K^m: x\mapsto (F_1(x),...,F_m(x))$, where the $F_i$'s are polynomials $F_i\in K[T_1,...,T_n]$.
The map $d_x f : T_x X \rightarrow T_{f(x)} Y$ is the restriction to $T_x(X)$ of the linear map given by the Jacobian $$d_xF=Jac(F)(x)=(\frac {\partial F_i}{\partial X_j}(x)):K^n\to K^m$$ [The subspace $T_xX \subset K^n$ is the set of solutions of the humongous (but extremely redundant!) system of linear equations $\Sigma \frac {\partial g}{\partial X_j}(x)\xi_j=0$ where $g$ runs through $I(X)$]
The only thing to check is that we have in $K^m$ : $$(d_xf)(T_xX)\subset T_y(Y) $$
This means that we must show that $$(d_yh)(d_xf(v))=0 \quad (?)$$ for alll $v\in T_xX$ and all $h\in I(Y)$.
This follows from the following two facts:
a) For all $h\in I(Y)$ we have $h\circ F\in I(X)$, since $F$ maps $X$ into $Y$.
b) Functoriality of the differential: $d_x(h\circ F)=d_yh\circ d_xF$ .
And now if $v\in T_xX$ we can write $$(d_yh)(d_xf(v))=(d_yh)(d_xF(v))=d_x(h\circ F)(v)=0$$ since $h\circ F\in I(X)$ . We have thus proved $(?)$
|
proof: set countable iff there is a bijection In class we had the following definiton of a countable set:
A set $M$ is countable if there is a bijection between $\mathbb N$ and $M$.
In our exam today, we were given the following statement: If $A$ is a countable set, then there is a bijection $\mathbb N\rightarrow A$.
So I am really not sure whether the statement, and therefore the equivalence in the definition, is right. Is it correct? And how do you prove it? Thanks a lot!
|
Suppose that $A$ is countable by your definition; then there is a bijection $f:\Bbb N\to A$. Because $f$ is a bijection, $f^{-1}$ is also a bijection, so it’s the desired bijection from $A$ to $\Bbb N$.
|
Exponents in Odd and Even Functions I was hoping someone could show or explain why it is that a function of the form $f(x) = ax^d + bx^e + cx^g+\cdots $ going on for some arbitrary length will be an odd function assuming $d, e, g$ and so on are all odd numbers, and likewise why it will be even if $d, e, g$ and so on are all even numbers. Furthermore, why is it if say, $d$ and $e$ are even but $g$ is odd that $f(x)$ will then become neither even nor odd?
Thanks.
|
If the exponents are all odd, then $f(x)$ is the sum of odd functions, and hence is odd.
If the exponents are all even, then $f(x)$ is the sum of even functions, and hence is even.
As far as your last question, the sum of an odd function and even function is neither even nor odd.
Proof: Sum of Odd Functions is Odd:
Given two odd functions $f$ and $g$. Since they are odd functions $f(-x) = -f(x)$, and $g(-x) = -g(x)$.
Hence:
\begin{align*}
f(-x) + g(-x) &= -f(x) - g(x) \\
&= -(f+g)(x) \\
\implies (f+g) & \text{ is odd if $f$ and $g$ are odd.}
\end{align*}
Proof: Sum of Even Functions is Even:
Given two even functions $f$ and $g$, then $f(-x) = f(x)$, and $g(-x) = g(x)$.
Hence:
\begin{align*}
f(-x) + g(-x) &= f(x) + g(x) \\
\implies (f+g) &\text{ is even if $f$ and $g$ are even.}
\end{align*}
Proof: Sum of a nonzero odd function and a nonzero even function is neither odd nor even.
If $f$ is odd, and $g$ is even:
\begin{align*}
(f+g)(-x) = f(-x) + g(-x) &= -f(x) + g(x).
\end{align*}
This equals $(f+g)(x)$ for all $x$ only if $f$ is identically zero, and equals $-(f+g)(x)$ for all $x$ only if $g$ is identically zero; so when neither vanishes identically, $f+g$ is neither odd nor even.
|
Construction of an integrable function with a function in $L^2$ I have this really simple question, but I cannot figure out the answer. Suppose that $f\in L^2([0,1])$. Is it true that $f/x^5$ will be in $L^1([0,1])$?
Thanks!
Edit: I was interested in $f/x^{1/5}$.
|
The statement
$$f \in L^2([0,1])\Rightarrow fx^{-\frac{1}{5}} \in L^1([0,1])$$
is true. It is an easy application of the Hölder inequality. In fact, by hypothesis $f\in L^2([0,1])$; on the other hand, $g(x)=x^{-1/5}\in L^2([0,1])$, since $\int_0^1 x^{-2/5}\,dx=\frac53<\infty$. So by Hölder (this is the Cauchy-Schwarz case)
$$
\Vert fg \Vert_1 \le \Vert f\Vert_2 \Vert g \Vert_2
$$
hence $fg\in L^1$ (and you get also an upper bound for its $L^1$-norm).
|
The dimension of the component of a variety Mumford claimed the following result:
If $X$ is an $r$-dimensional variety and $f_{1},...,f_{k}$ are polynomial functions on $X$. Then every component of $X\cap V(f_{1},...,f_{k})$ has dimension $\ge r-k$.
He suggested this is a simple corollary of the following result:
Assume $\phi:X^{r}\rightarrow Y^{s}$ is a dominating regular map of affine varieties. Then for all $y\in Y$, the dimension of components of $\phi^{-1}(y)$ is at least $r-s$.
However it is not clear to me how the two statements are connected. Let $Y=X- V(f_{1},...,f_{k})$ where the dominating regular map is the evaluating map by $f_{i}$. Then $Y$ should have dimension at least $r-k$ by the definition. So in particular the components of $o$'s preimage should have dimension at most $r-(r-k)=k$, and least $r-r=0$. I feel something is wrong in my reasoning and so hopefully someone can correct me.
I realized something may be wrong in my reasoning: Notice two extremes $\dim(Y)$ can be by letting $f_{i}$ be one of the generating polynomials of $\mathscr{U},X=V(\mathscr{U})$, in this case $\dim(Y)=0$. And on the other hand if $f_{i}$ does not vanish at all on $X$, then $\dim(Y)=r$. So $Y$ seems not a good choice as it is insensitive to the value to $k$.
However, a reverse way of reasoning might be possible by using $Y=X\cap V(f_{1},..f_{i},f_{k})$ while claiming the component can only have dimension at most $k$. Then by inequality we have the desired result. But I do not see how $X\rightarrow Y$ by quotient map to be a regular map.
|
To deduce the corollary, let $Y$ be $k$-dimensional affine space, and let $\phi$ be the map sending a point $x \in X$ to $(f_1(x), f_2(x), \ldots, f_k(x))$. Then $X \cap V(f_1, \ldots, f_k)$ is the preimage $\phi^{-1}(0,0,\ldots,0)$, hence, by the result, its dimension is at least $r-k$.
(The fact that $\phi$ might not be dominating is unimportant: We can always replace $Y$ with the image of $\phi$.)
|
Showing that $|||A-B|||\geq \frac{1}{|||A^{-1}|||}$? $A,B\in M_n$, $A$ is non-singular and $B$ is singular. $|||\cdot|||$ is any matrix norm on $M_n$, how to show that $|||A-B||| \geq \frac{1}{|||A^{-1}|||}$?
The hint is let $B=A[I-A^{-1}(A-B)]$, but I don't know how to use it.
Appreciate any help!
Update: it is $\geq$, not $\leq$. Sorry!
|
Let's sharpen the hint to $A^{-1}B = I - A^{-1}(A-B)$. First you should check that this identity is correct.
Now pick any $v$ such that $Bv = 0$ and $\|v \| = 1$. By assumption, such a $v$ exists. Apply both sides of the identity, play around with it, take norms, see if you can get something that resembles the statement that you want to prove.
Remember the definition of a matrix norm: $|||C||| = \sup_{\|x\| = 1} \|Cx\|$. There is also a formula that relates $|||CD|||$ to $|||C|||$ and $|||D|||$. Check your notes and try to use it.
|
Basis, dense subset and an inequality Let $V \subset H$, where $V$ is separable in the Hilbert space $H$.
So there is a basis $w_i$ in $V$ such that, for each $m$, $w_1, ..., w_m$ are linearly independent and the finite linear combinations are dense in $V$.
Let $y \in H$, and define $y_m = \sum_{i=1}^m a_{im}w_i$ such that $y_m \to y$ in $H$ as $m \to \infty$.
Then, why is it true that $\lVert y_m \rVert_H \leq C\lVert y \rVert_H$?
I think if the $w_i$ were orthonormal this is true, but they're not. So how to prove this statement?
|
It is not true. Choose $y=0$ and $a_{1,m} = \frac{1}{m}$. Then $y_m \to y$, but it is never the case that $\|y_m\| \leq C \|y\|$.
Elaboration:
This is because $\|y_m\| = \frac{1}{m} \|w_1\|$, and the $w_i$ are linearly independent, hence non-zero. Hence $\|y_m\| = \frac{1}{m} \|w_1\| > 0$ for all $m$, while $\|y\| = 0$, so no choice of $C$ can satisfy the inequality $\|y_m\| \leq C \|y\|$.
The above is true even if $w_i$ are orthonormal.
(I think you need to be more explicit about your choice of $a_{im}$. A 'nice' choice would be to let $y_m$ be the closest point to $y$ in $\text{sp}\{w_i\}_{i=1}^m$. This is what the first part of the answer by Berci does below.)
|
Tate $p$-nilpotent theorem Tate $p$-nilpotent Theorem. If $P$ is a Sylow $p$-subgroup of $G$ and $N$ is a normal subgroup of $G$ such that $P \cap N \leq \Phi (P)$, then $N$ is $p$-nilpotent.
My question is the following:
If $P \cap N \leq \Phi (P)$ for only one Sylow p-subgroup of $G$, is $N$ $p$-nilpotent?
Remark: $G$ may have more than one Sylow for the prime $p$.
|
That situation is not possible. Let $P$ be a Sylow $p$-subgroup such that $P \cap N \leqslant \Phi(P)$ and consider $Q\cap N$ for another Sylow $p$-subgroup $Q$. We have that there is a $g$ so that $P^g=Q$, and since $N$ is normal, $$(P\cap N)^g=P^g\cap N^g=P^g \cap N=Q\cap N\leq \Phi(P)^g=\Phi(P^g)=\Phi(Q).$$
|
Solve the recurrence relation:$ T(n) = \sqrt{n} T \left(\sqrt n \right) + n$ $$T(n) = \sqrt{n} T \left(\sqrt n \right) + n$$
Master method does not apply here. Recursion tree goes a long way. Iteration method would be preferable.
The answer is $\Theta(n \log \log n)$.
Can anyone arrive at the solution?
|
Let $n = m^{2^k}$. We then get that
$$T(m^{2^k}) = m^{2^{k-1}} T (m^{2^{k-1}}) + m^{2^{k}}$$
Writing $f_m(k) = T\left(m^{2^k}\right)$, we get
\begin{align}
f_m(k) & = m^{2^{k-1}} f_m(k-1) + m^{2^k} = m^{2^{k-1}}(m^{2^{k-2}} f_m(k-2) + m^{2^{k-1}}) + m^{2^k}\\
& = 2 m^{2^k} + m^{3 \cdot 2^{k-2}} f_m(k-2)
\end{align}
$$m^{3 \cdot 2^{k-2}} f_m(k-2) = m^{3 \cdot 2^{k-2}} (m^{2^{k-3}} f_m(k-3) + m^{2^{k-2}}) = m^{2^k} + m^{7 \cdot 2^{k-3}} f_m(k-3)$$
Hence,
$$f_m(k) = 2 m^{2^k} + m^{3 \cdot 2^{k-2}} f_m(k-2) = 3m^{2^k} + m^{7 \cdot 2^{k-3}} f_m(k-3)$$
In general, it is not hard to see that
$$f_m(k) = \ell m^{2^k} + m^{(2^{\ell}-1)2^{k-\ell}} f_m(k-\ell)$$
$\ell$ can go up to $k$, to give us
$$f_m(k) = km^{2^k} + m^{(2^{k}-1)} f_m(0) = km^{2^k} + m^{(2^{k}-1)} m^{2^0} = (k+1) m^{2^k}$$
This gives us $$f_m(k) = (k+1) m^{2^k} = n \left(\log_2(\log_m n) + 1\right) = \mathcal{O}(n \log_2(\log_2 n))$$ since $$n=m^{2^k} \implies \log_m(n) = 2^k \implies \log_2(\log_m(n)) = k$$
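A quick numerical check of the growth rate (a sketch; the recursion treats $n$ as a real number and bottoms out at $n \le 2$):

from math import log2, sqrt

def T(n):
    if n <= 2:
        return n
    return sqrt(n) * T(sqrt(n)) + n

for k in range(3, 8):
    n = 2.0 ** (2**k)                     # so that log2(log2(n)) = k
    print(k, T(n) / (n * log2(log2(n))))  # prints (k+1)/k, tending to 1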
|
Straightening the boundary in concrete examples Let $\Omega \subset \mathbb{R}^d$ be open and with $C^1$ boundary $\Gamma$.
For any given point $x_0 \in \Gamma$ we know there's a neighborhood where
$\Gamma$ is the graph of some $C^1$ function $\gamma : \mathbb{R}^{d - 1}
\longrightarrow \mathbb{R}, x' \longmapsto \gamma ( x') = x_d$. We can use
it to straighten the boundary with the local diffeomorphism
$$ T ( x', x_d) = ( x', x_d - \gamma ( x')), $$
and its differential $D T$ has a nice $( d - 1) \times ( d - 1)$ identity
matrix as first block and a bottom row $\nabla T_d = ( - \nabla \gamma, 1)$
which is proportional to the vector $\vec{n}$ normal to $\Gamma$ at each
point, say $c ( x) \vec{n} ( x) = \nabla T_d ( x)$, where $c ( x) = - \|
\nabla T_d ( x) \|$.
For my calculations in concrete examples with parametrized domains, etc., I
want $\nabla T_d$ to actually be the outward pointing normal: I need this $c (
x)$ to be $- 1$. If I try to impose the condition after constructing $T$, then
I have to integrate expressions which I'm just not capable of. I can try to
throw it at some symbolic integration software, but there has to be some other
way, right? In almost every book on PDEs it's stated that this $T$ may be
normalized so as to have the property I mention. But how?
|
If $\phi(x')$ denotes the $d$-th component of the normal vector at $(x',\gamma(x'))$, then first of all it is immediate from the graph structure that $\phi(x') \ne 0$. Let $S(x',y_d) = (x',\phi(x')y_d)$, and write $\tilde{T} = S \circ T$. Then $\tilde{T}$ is a $\mathcal{C}^1$ diffeomorphism which straightens the boundary, and it is normalized, as can be checked easily with the chain rule:
$$
DS (x',y_d) =
\left[
\begin{array}{c|c}
\mathrm{Id} & 0 \\ \hline
\nabla \phi(x') y_d & \phi(x')
\end{array}
\right]
$$
In particular
$$
DS (x',0) =
\left[
\begin{array}{c|c}
\mathrm{Id} & 0 \\ \hline
0 & \phi (x')
\end{array}
\right]
$$
So on the boundary $\Gamma$ you get
$$
D\tilde{T}(x',\gamma(x')) = DS(x',0) DT(x',\gamma(x'))
=\left[
\begin{array}{c|c}
\mathrm{Id} & 0 \\ \hline
-\phi(x')\nabla \gamma(x') & \phi (x')
\end{array}
\right]
$$
I.e., the last row is a multiple of the outer normal, and since the $d$-th entry is the same, it is equal to the outer normal.
|
Subset of a finite set is finite We define $A$ to be a finite set if there is a bijection between $A$ and a set of the form $\{0,\ldots,n-1\}$ for some $n\in\mathbb N$.
How can we prove that a subset of a finite set is finite? It is of course sufficient to show that for a subset of $\{0,\ldots,n-1\}$. But how do I do that?
|
I have run into this old question and was surprised that no one seemed to have said the following.
The actual definition of finite set is the following: A set $A$ is finite if every injection $A\rightarrow A$ is a bijection.
(Note: this definition does not require the set $\mathbb{N}$)
Now let $B$ a finite set and $A\subset B$.
Suppose that $A$ is not finite. Then, by definition there exists a function
$f:A\rightarrow A$ that is injective but not surjective. Now define $F:B\rightarrow B$ as follows.
$$
F(x)=\begin{cases}
f(x) & \text{if $x\in A$}\\
x & \text{if $x\in B\setminus A$}
\end{cases}
$$
Clearly $F$ is injective but not surjective, contradicting the finiteness of $B$.
|
Why is $\operatorname{Var}(X+Y) = \operatorname{Cov}(X,X) + \operatorname{Cov}(X,Y) + \operatorname{Cov}(Y,X) + \operatorname{Cov}(Y,Y)$ I know $\operatorname{Cov}(X,Y) = E[(X-\mu_X)(Y-\mu_Y)]$ and
$$
\operatorname{Cov}(X+Y, Z+W) = \operatorname{Cov}(X,Z) + \operatorname{Cov}(X,W) + \operatorname{Cov}(Y,Z) + \operatorname{Cov}(Y,W),
$$
but how does one get $$
\operatorname{Var}(X+Y) = \operatorname{Cov}(X,X) + \operatorname{Cov}(X,Y) + \operatorname{Cov}(Y,X) + \operatorname{Cov}(Y,Y)?
$$
|
A quick way: Note from the definition of variance that $\text{Var}(T)=\text{Cov}(T,T)$. Now in your formula for $\text{Cov}(X+Y, Z+W)$, set $Z=X$ and $Y=W$. You will get exactly the formula you want to derive.
A slow way: We can work with just your basic defining formula for covariance. Note that
$$\text{Var}(X+Y)=E(((X+Y)-(\mu_X+\mu_Y))^2).$$
Rearranging a bit, we find that this is
$$E(((X-\mu_X)+(Y-\mu_Y))^2).$$
Expand the square, and use the linearity of expectation. We get
$$E((X-\mu_X)^2) +E((Y-\mu_Y)^2)+2E((X-\mu_X)(Y-\mu_Y)).$$
The first term is $\text{Var}(X)$, which is the same as $\text{Cov}(X,X)$. A similar remark can be made about the second term. And $\text{Cov}(X,Y)=\text{Cov}(Y,X)=E((X-\mu_X)(Y-\mu_Y))$.
Remark: There is a variant of the formula for covariance, and variance, which is very useful in computations.
Suppose we want the covariance of $X$ and $Y$. This is $E((X-\mu_X)(Y-\mu_Y))$.
Expand the product, and use the linearity of expectation. We get
$$E(XY)-E(\mu_XY)-E(\mu_Y X)+E(\mu_X\mu_Y).$$
But $\mu_X$ and $\mu_Y$ are constants. So for example $E(\mu_X Y)=\mu_XE(Y)=\mu_X\mu_Y$. So we conclude that
$$\text{Cov}(X,Y)=E(XY)-\mu_X\mu_Y.$$
A special case of this is the important
$$\text{Var}(X)=E(X^2)-\mu_X^2=E(X^2)-(E(X))^2.$$
The above formulas for covariance would have made it easier to derive the formula of your problem, or at least to type the answer. For
$$\text{Var}(X+Y)=E((X+Y)^2)-(\mu_X+\mu_Y)^2.$$
Expand each square, use the linearity of expectation, and rearrange. We get
$$(E(X^2)-\mu_X^2)+(E(Y^2)-\mu_Y^2)+2(E(XY)-\mu_X\mu_Y),$$
which is exactly what we want.
|
$G=\langle a,b\mid aba=b^2,bab=a^2\rangle$ is not metabelian of order $24$ This is my self-study exercise:
Let $G=\langle a,b\mid aba=b^2,bab=a^2\rangle$. Show that $G$ is not metabelian.
I know; I have to show that $G'$ is not an abelian subgroup. The index of $G'$ in $G$ is 3 and doing Todd-Coxeter Algorithm for finding any presentation of $G'$ is a long and tedious technique (honestly, I did it but not to end). Moreover GAP tells me that $|G|=24$. May I ask you if there is an emergency exit for this problem. Thanks for any hint. :)
|
$abab=a^3=b^3$, so $Z := \langle a^3 \rangle$ is central. Modulo $Z$, we get the standard presentation $\langle a,b \mid a^3, b^3, (ab)^3 \rangle$ of $A_4$. Also, modulo $G'$, we have $a^2=b$, $b^2=a$, so $a^3=1$, and hence $Z \le G'$. Also, $ab,ba \in G'$ and $abba = a^2ba^2=bab^3ab=baabb^3$, so $G'$ is not abelian provided that $Z$ is nontrivial.
So to prove the group is not metabelian we need to prove that $Z$ is nontrivial, and the only sensible way of doing that, other than by coset enumeration, which is very tedious to do by hand, is to find an explicit homomorphic image of the group in which $Z$ is nontrivial. Knowing that $G$ is a nonsplit central extension of $Z$ by $A_4$, we might suspect at this stage that $G \cong {\rm SL}_2(3)$, which might help us find an explicit map, like the one described by Jack Schmidt in his comment.
|
A Book for abstract Algebra I am self learning abstract algebra. I am using the book Algebra by Serge Lang. The book has different definitions for some algebraic structures. (For example, according to that book rings are defined to have multiplicative identities. Also modules are defined slightly differently....etc) Given that I like the book, is it OK to keep reading this book or should I get another one?
Thank you
|
There is a less famous but very nice book Abstract Algebra by Paul B. Garrett
and then there is the old book "A survey of modern algebra" by Birkhoff
|
$\int 2^x \ln(x)\, \mathrm{d}x$ I found this problem by a typo. My homework problem was $\int 2^x \ln(2) \, \mathrm{d}x$ which is $2^x + C$ by the Fundamental Thm of Calculus. I want to be able to solve what I wrote down incorrectly in my homework.
What I wrote for my homework is $\int 2^x \ln(x)\, \mathrm{d}x$, and that is what I want to solve; plus I got it wrong. :(
I used integration by parts.
$$\int u \, \mathrm{d}v = uv - \int v\, \mathrm{d}u$$
$$\begin{array}{l l}
u = \ln(x) & du = \frac{1}{x}\mathrm{d}x \\
\mathrm{d}v = 2^x\mathrm{d}x & v = \frac{2^x}{\ln (2)} \\
\end{array}$$
I got this integral:
$$\frac{\ln(x)2^x}{\ln 2} - \int \frac{2^x}{x\ln 2}\, \mathrm{d}x$$
Another round of integration of parts:
$$\begin{array}{l l}
u = \frac{2^x}{\ln 2} & du = 2^x\mathrm{d}x \\
\mathrm{d}v = \frac{1}{x}\mathrm{d}x & v = \ln(x)
\end{array} $$
$$\int 2^x \ln(x)\, \mathrm{d}x = \frac{\ln(x)2^x}{\ln 2} - \left[ \frac{2^x \ln x}{\ln 2} - \int \ln(x) 2^x\, \mathrm{d}x \right]$$
My final answer is
$$ \frac{\ln(x)2^x}{\ln 2} -\frac{2^x \ln x}{\ln 2}= 0$$
What did I do wrong?
|
First, you made a mistake here:
$$\int 2^x \ln(x)\, \mathrm{d}x = \frac{\ln(x)2^x}{\ln 2} - \left[ \frac{2^x \ln x}{\ln 2} - \int \ln(x) 2^x\, \mathrm{d}x \right]\Rightarrow \frac{\ln(x)2^x}{\ln 2} -\frac{2^x \ln x}{\ln 2}= 0$$
You can't just cancel the integrals, as you will lose the constant of integration. For example
$$\int \frac{1}{x}\, \mathrm{d}x = \int x^{\prime}\frac{1}{x}\, \mathrm{d}x=
x\frac{1}{x} - \int x\frac{-1}{x^2}\, \mathrm{d}x =1+\int \frac{1}{x}\, \mathrm{d}x$$
If you cancel the integrals then $1=0$ which is impossible. When canceling integrals one must never forget the constant of integration $c$.
In our case $c=0$. To see this,
$$\int 2^x \ln(x)\, \mathrm{d}x = \frac{\ln(x)2^x}{\ln 2} - \frac{2^x \ln x}{\ln 2} + \int \ln(x) 2^x\, \mathrm{d}x \Rightarrow 0=\frac{\ln(x)2^x}{\ln 2} - \frac{2^x \ln x}{\ln 2}+c=c$$
which leads to $0=0$. Why did this come up? You integrated by parts once and then did the reverse and got back to your starting point. Now how can $\int 2^x \ln(x)\, \mathrm{d}x $ be evaluated? It can't be written as a combination of elementary functions (polynomial,exponential,logarithmic,trigonometric and hyperbolic functions and their inverses). I will show this for $\int e^x \ln(x)\, \mathrm{d}x $.
$$\int e^x \ln(x)\, \mathrm{d}x = \int (e^x)^{\prime} \ln(x)\, \mathrm{d}x=e^x\ln x-
\int e^x (\ln(x))^{\prime}\, \mathrm{d}x=e^x\ln x-\int \frac{e^x}{x}\, \mathrm{d}x
$$
The last integral is not elementary, as shown by the Risch algorithm. For more information look here: Exponential Integral. And no, I don't think there is any book covering this topic
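If you want to see the non-elementary piece appear explicitly, here is a minimal CAS sketch (assuming SymPy is available; the exact output form may vary by version):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
# Expect something like exp(x)*log(x) - Ei(x); the exponential-integral
# term Ei(x) is precisely the non-elementary piece discussed above.
print(sp.integrate(sp.exp(x) * sp.log(x), x))
```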
|
Find the sum of the first $n$ terms of $\sum^n_{k=1}k^3$ The question:
Find the sum of the first $n$ terms of $$\sum^n_{k=1}k^3$$
[Hint: consider $(k+1)^4-k^4$]
[Answer: $\frac{1}{4}n^2(n+1)^2$]
My solution:
$$\begin{align}
\sum^n_{k=1}k^3&=1^3+2^3+3^3+4^3+\cdots+(n-1)^3+n^3\\
&=\frac{n}{2}[\text{first term} + \text{last term}]\\
&=\frac{n(1^3+n^3)}{2}
\end{align}$$
What am I doing wrong?
|
For a geometric solution, you can see theorem 3 on the last page of this PDF. Sorry I did not have time to type it here. This solution was published by Abu Bekr Mohammad ibn Alhusain Alkarachi in about A.D. 1010 (Exercise 40 of appendix E, page A38 of Stewart Calculus 5th edition).
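For what it's worth, the hint in the question leads to the same formula: summing $(k+1)^4-k^4=4k^3+6k^2+4k+1$ over $k=1,\dots,n$ telescopes to $(n+1)^4-1$, and solving for $\sum k^3$ (using the standard formulas for $\sum k^2$ and $\sum k$) gives $\frac{1}{4}n^2(n+1)^2$. A tiny check, not a proof:

```python
# Verify sum of cubes against n^2 (n+1)^2 / 4 for small n.
for n in range(1, 20):
    assert sum(k**3 for k in range(1, n + 1)) == n**2 * (n + 1)**2 // 4
print("formula checks out for n = 1..19")
```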
|
Dual of $\ell_\infty(X)$ Given a Banach space $X$. Consider the space $\ell_\infty(X)$ which is the $\ell_\infty$-sum of countably many copies of $X$. Is there any accessible respresentation of the dual space $\ell_\infty(X)^*$? In particular, is this dual space isomorphic to the space of finitely additive $X^*$-valued measures on the powerset of $\mathbb N$ equipped with the semivariation norm?
Any references will be appreciated.
|
There is no good description of the dual of $\ell_\infty(X)$ as far as I know.
If $X$ is finite dimensional, then the answer to your second question is yes.
Otherwise, it is no, for there is no way to define an action of a finitely additive $X^*$-valued measure on $\ell_\infty(X)$ if the ball of $X$ is not compact.
|
quadratic and bilinear forms Does a quadratic form always come from symmetric bilinear form ?
We know when $q(x)=b(x,x)$ where $q$ is a quadratic form and $b$ is a symmetric bilinear form.
But when we just take a bilinear form and $b(x,y)$ and write $x$ instead of $y$,does it give us a quadratic form ?
|
If we have a symmetric bilinear form $b$, we can get a quadratic form
$q\colon V \to \mathbb{R}$
by setting $q(v)=b(v,v)$.
Conversely, if $q\colon V \to \mathbb{R}$ is a quadratic form, we can define
$$b(v,w):=\frac 12\big(q(v+w)-q(v)-q(w)\big).$$
The vital point: if you start from an arbitrary bilinear form $b$ and set $q(v)=b(v,v)$, this polarization does not recover $b$ itself but only its symmetric part,
because the definition $\frac 12\big(q(v+w)-q(v)-q(w)\big)$ leads us to $\frac 12\big(b(v,w)+b(w,v)\big)$, which is always symmetric.
|
$L_1$ norm of Gaussian random variable Ok. This is a bit confusing. Let $g$ be a Gaussian random variable (normalized, i.e. with mean 0 and standard deviation 1). Then, in the expression $$\|g\|_{L_1}=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}|x|\exp(\frac{-x^2}{2})dx =\sqrt{\frac{2}{\pi}},$$ shouldn't the term $|x|$ not appear in the integral?
|
If $X$ is a random variable with density $f$, and $\phi$ is a measurable function, then $E[\phi(X)]=\int_{\Bbb R}\phi(t)f(t)dt$. As the $L^1$ norm of a random variable $X$ is $E[|X|]$, we have, when $X$ is normally distributed, the announced result.
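A one-line Monte Carlo check of the value $\sqrt{2/\pi}\approx 0.7979$ (a sketch, assuming NumPy is available):

```python
import numpy as np

x = np.random.default_rng(0).standard_normal(10**6)
print(np.abs(x).mean(), np.sqrt(2 / np.pi))  # both should be ~0.7979
```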
|
Method of Undetermined Coefficients - how to assume the form of a third-degree equation. An example differential equations question asks me to solve:
$$y''' - 2y'' -4y'+8y = 6xe^{2x}$$
I begun by solving the homogeneous equation with $m^3 - 2m^2 -4m+8 =0$ and getting the answer
$$y(x) = c_1e^{2x} + c_2xe^{2x}+c_3e^{-2x} $$
The second part of the solution involves assuming a form for the solution. Because $g(x)$ is $6xe^{2x}$, I assumed the solution would be of the form $(Ax+B)e^{2x}$, however it turns out that after differentiating three times it gets extremely complicated. Is there a better way?
Also, the textbook solutions manual uses the form of $(Ax^3 + Bx^2)e^{2x}$. How did it arrive at that? (there's no accompanying explanation)
|
Your equation is $y'''-2y''-4y'+8y=6xe^{2x}$. Now change the $y'$ to $Dy$ form as follows. So $$y'''\to D^3y,\\ y''\to D^2y, \; \; \text{and} \;\;y'\to Dy,$$ So by rearranging with respect to the $D$ operator we get our equation as: $$D^3y-2D^2y-4Dy+8y=6xe^{2x}$$ or, by factoring, $$(D^3-2D^2-4D+8)y=(D-2)^2(D+2)y=6xe^{2x}$$ which you got before. Note that considering the corresponding homogeneous equation $$(D-2)^2(D+2)y=0$$ we get $(D-2)^2=0,\;\; (D+2)=0$ which leads us to write the general solution as $$y_c(x) = c_1e^{2x} + c_2xe^{2x}+c_3e^{-2x}$$ as you did above.
*
*If $y=\text{constant}$, then $y'=0$ or $Dy=0$. Here, the operator $D$ annihilates $y$, which is just a constant ($Dc=0$).
*If $y=cx$ in which $c$ is a constant, then $y''=0$ or $D^2y=0$. It means that the operator polynomial $P(D)=D^2$ annihilates $y=cx$ $(\text{or} \; P(D)=D^2(cx)=cD^2x=c(x'')=0)$. Generally, $D^{n+1}$ annihilates not only the function $y=cx^{n}$ but also every polynomial of degree at most $n$, $$y=c_0+c_1x+c_2x^2+...+c_nx^n.$$ It means that $$P(D)y=D^{n+1}y=0.$$
*In the same way, the differential operator $(D-\alpha)^n$ annihilates each of the following functions and every linear combination of them: $$e^{\alpha x},xe^{\alpha x},x^2e^{\alpha x},...,x^{n-1}e^{\alpha x}$$ Now look at the RHS of your original equation, I mean $6xe^{2x}$. Can we guess which proper differential polynomial annihilates it? As above, it would be $(D-2)^2$. It means that $(D-2)^2 \left(6xe^{2x}\right)=0$. Do not worry about numeric coefficients like $6$ here at all.
Now we consider what we have achieved so far: $$(D-2)^2(D+2)y=6xe^{2x}$$ Apply the operator $(D-2)^2$ to both sides of the above converted equation: $$(D-2)^2\left((D-2)^2(D+2)y\right)=(D-2)^2\,6xe^{2x}=0$$ or $$(D-2)^4(D+2)y=0$$ In fact, we have found a proper differential operator $P(D)=(D-2)^4(D+2)$ which, applied to $y$, annihilates it.
Now, for a while, forget our equation; look at $(D-2)^4(D+2)y=0$ and suppose someone gave this to us, asking us to guess which functions $y$ satisfy the equality above. We reply:
*
*Since we have $(D-2)$, we have forms like $e^{2x}$ in $y$.
*Since $(D-2)$ has power $4$, we have the forms $Ae^{2x},\; Bxe^{2x}, \;Cx^2e^{2x}, \text{and} \; Ex^3e^{2x}$ in $y$. Note that you multiply $e^{2x}$, from the previous line, by $A,\; Bx,\; Cx^2,\; Ex^3$ (exactly until the power of $x$ reaches $4-1=3$).
*And, since we have $(D+2)$, then $y$ has the term $Fe^{-2x}$.
So we are done. Our candidate function which satisfies the original equation is $$y=Ae^{2x}+ Bxe^{2x}+Cx^2e^{2x}+Ex^3e^{2x}+Fe^{-2x}.$$ Now put the terms which generate $y_c(x)$ aside and take the rest as what we have been looking for. It is $$y_p=Cx^2e^{2x}+Ex^3e^{2x}$$ where $C,E$ are unknown constants.
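Here is a quick symbolic sanity check of this ansatz (a SymPy sketch; the exact values of $C$ and $E$ are whatever coefficient matching produces):

```python
import sympy as sp

x, C, E = sp.symbols('x C E')
yp = (C * x**2 + E * x**3) * sp.exp(2 * x)

# Substitute y_p into y''' - 2y'' - 4y' + 8y and compare with 6x e^{2x};
# after multiplying by e^{-2x}, the residual is a polynomial in x.
residual = sp.expand(
    (sp.diff(yp, x, 3) - 2 * sp.diff(yp, x, 2) - 4 * sp.diff(yp, x) + 8 * yp)
    * sp.exp(-2 * x) - 6 * x
)
# Force each coefficient of the residual polynomial to vanish.
sol = sp.solve([residual.coeff(x, 1), residual.coeff(x, 0)], [C, E])
print(sol)  # should give exact values, C = -3/16 and E = 1/4
```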
|
Line integral vs Arc Length I am trying to understand when do to line integral and when to do arc length. So I know the formula for arc length varies based on $dx$ or $dy$ like so:
$s=\int_a^b \sqrt{1+[f'(x)]^2} \, \mathrm{d} x$ for the arc length
and here's a line integral equation:
$\int_C f\,ds=\int_a^b f(r(t))\,\lVert r'(t)\rVert\,dt$
I just don't understand how the two are different/similar. Don't they both compute the same thing?
|
Suppose you have a curve $C$ parametrized as $\mathbf{g}(t)$ for $0\le t\le 1$. Then the arc length of $C$ is defined as
$$\int_0^1\|\mathbf{g}'(t)\|\ dt$$
An intuitive way to think of the above integral is to interpret the derivative $\mathbf{g}'$ as velocity. Then the above integral is basically the statement that the net distance traveled is equal to the speed times time.
The line integral itself is also concerned with arc length. More specifically, the scalar line integral is concerned with the arc length of a curve along with a weight $f$ at each small segment of the curve.
To give a simplified example, suppose that you have an ideal elastic $C$. Further suppose that you have a function $f$ which assigns a value for each point of the elastic. Think of this $f$ as a stretch factor. If a point $p$ of the elastic is assigned a number $f(p)$, then we stretch the elastic locally around $p$ by a factor of $f(p)$. If we now add up the stretched lengths, what we have is a line integral
$$\int_C f\ ds = \int_0^1 f(\mathbf{g}(t))\,\|\mathbf{g}'(t)\|\ dt$$
This line integral will be larger or smaller than the actual length of the elastic depending on how the elastic is stretched or compressed as a whole.
But if our function is $f(p) = 1$, then that corresponds to stretching the elastic at each point by a factor of $1$, i.e. leaving the elastic alone. If we add up the lengths of the untouched segments of the elastic, all we do is recover the actual arc length of the elastic. This is why arc length is given by
$$\int_C 1\ ds = \int_0^1\|\mathbf{g}'(t)\|\ dt$$
an unweighted line integral.
|
Show that if m/n is a good approximation of $\sqrt{2}$ then $(m+2n)/(m+n)$ is better
Claim: If $m/n$ is a good approximation of $\sqrt{2}$ then $(m+2n)/(m+n)$ is better.
My attempt at the proof:
Let $d$ be the distance between $\sqrt{2}$ and some estimate, $s$.
So we have $d=s-\sqrt{2}$
Define $d'=m/n-\sqrt{2}$ and $d''=(m+2n)/(m+n)-\sqrt{2}$
To prove the claim, show $d''<d'$
Substituting in for $d'$ and $d''$ yields:
$\sqrt{2}<m/n$
This result doesn't make sense to me, and I was wondering whether there is another way I could approach the proof or if I am missing something.
|
Assume $\dfrac mn\ne\sqrt2;$ otherwise $\dfrac mn$ is $\sqrt2$, not an approximation.
Then $d'\ne0$ so we can compute $\dfrac {d''}{d'}=\dfrac{\dfrac{m+2n}{m+n}-\sqrt2}{\dfrac mn-\sqrt2}=
\dfrac n{m+n}\dfrac{m+2n-\sqrt2(m+n)}{m-\sqrt2n}$
$=\dfrac n{m+n}\dfrac{m-\sqrt2n-\sqrt2(m-\sqrt2n)}{m-\sqrt2n}=\dfrac {1}{1+\dfrac mn}\left(1-\sqrt2\right).$
We could assume $\dfrac mn\ge0$ (otherwise $\dfrac mn$ is not "a good approximation of $\sqrt2$"),
and $-1<1-\sqrt2<0$ since $1<\sqrt2<2$.
From here it is easy to see that
$|d''|<|d'|,$ and $d''$ and $d'$ have opposite signs.
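If it helps to see the contraction at work numerically, here is a small sketch iterating the map:

```python
from fractions import Fraction
from math import sqrt

m, n = 1, 1  # start from the crude approximation 1/1
for _ in range(6):
    print(Fraction(m, n), m / n - sqrt(2))  # errors shrink and alternate in sign
    m, n = m + 2 * n, m + n                 # the map m/n -> (m+2n)/(m+n)
```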
|
Radius, diameter and center of graph
The eccentricity $ecc(v)$ of $v$ in $G$ is the greatest distance from $v$ to any other node.
The radius $rad(G)$ of $G$ is the value of the smallest eccentricity.
The diameter $diam(G)$ of $G$ is the value of the greatest eccentricity.
The center of $G$ is the set of nodes $v$ such that $ecc(v) = rad(G)$
Find the radius, diameter and center of the graph
Appreciate as much help as possible.
I tried following an example and still didn't get it. When you count the distance from one node to another, do you count the starting node too, or do you count the ending node instead? And when you count, do you count the ones above and below, or how do you count? :)
|
I think that for the path graph $P_n$ the diameter is $n-1$, but the radius is $\frac{n-1}{2}$ rounded up to the nearest integer.
For example, $P_3$ has radius $1$, not $2$.
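If you would rather experiment than count by hand, here is a sketch using the third-party networkx library (assumed available):

```python
import networkx as nx

G = nx.path_graph(5)  # P_5: vertices 0-1-2-3-4
print(nx.diameter(G), nx.radius(G), nx.center(G))  # 4, 2, [2]
```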
|
How to finish proof that $T$ has an infinite model? I'm trying to prove the following: If $T$ is a first-order theory with the property that for every natural number $n$ there is a natural number $m>n$ such that $T$ has an $m$-element model then $T$ has an infinite model.
My thoughts: If $M$ is an $n$-element model then $\varphi_n = \exists v_1 \dots \exists v_n \bigwedge_{1\le i<j\le n} (v_i \neq v_j)$ is true in $M$. Can I use this to show that $T$ has an infinite model? How? Perhaps combine it with the compactness theorem somehow? Thanks for your help.
|
This is a standard fact. The result you are looking for is exactly the compactness theorem: by hypothesis $T\cup\{\varphi_n : n\in\mathbb{N}\}$ is finitely satisfiable, so it has a model, and any such model is infinite. But you can also do it directly. Just take an ultraproduct of a sequence $M_i$ of larger and larger finite models. Since every one of these models $T$, so does the ultraproduct, by Łoś's theorem.
|
Recursive Integration over Piecewise Polynomials: Closed form? Is there a closed form to the following recursive integration?
$$
f_0(x) =
\begin{cases}
1/2 & |x|<1 \\
0 & |x|\geq1
\end{cases}
\\
f_n(x) = 2\int_{-1}^x(f_{n-1}(2t+1)-f_{n-1}(2t-1))\mathrm{d}t
$$
It's very clear that this converges against some function and that quite rapidly, as seen in this image, showing the first 8 terms:
Furthermore, the derivatives of it have some very special properties.
Note how the (renormalized) derivatives consist of repeated and rescaled functions of the previous degree which is obviously a result of the definition of the recursive integral:
EDIT
I found the following likely Fourier transform of the expression above. I do not have a formal proof but it holds for all terms I tried it with (first 11).
$$ \mathcal{F}_x\left[f_n(x)\right](t)=\frac{1}{\sqrt{2\pi}}\frac{2^n \sin \left(2^{-n} t\right)}{t} \prod _{k=1}^n \frac{2^{k} \sin \left(2^{-k} t\right)}{t} $$
Here is an image of what that looks like (first 10 terms on the interval $[-8\pi,8\pi]$):
With this, my question alternatively becomes:
What, if there is one, is the closed form inverse fourier transform of
$\mathcal{F}_x\left[f_n(x)\right](t)=\frac{1}{\sqrt{2\pi}}\frac{2^n \sin \left(2^{-n} t\right)}{t} \prod _{k=1}^n \frac{2^{k} \sin \left(2^{-k} t\right)}{t}$,
especially for the case $n\rightarrow\infty$?
As a side note, it turns out that this particular product is a particular Borwein integral (Wikipedia), using the sequence $a_k = 2^{-k}$, which sums exactly to 1. The extra term in the front makes this true for the finite sequence as well. In the limit $k \to \infty$, that term just becomes $1$, not changing the product at all. It is therefore just a finite-depth correction.
|
Suppose $f$ is a fixed point of the iterations. Then
$$f(x) = 2\int_{-1}^x\big(f(2t+1)-f(2t-1)\big)\,\mathrm{d}t,$$
which, upon differentiating both sides by $x$, implies that
$$f'(x) = 2\big(f(2x+1)-f(2x-1)\big).$$
I'll assume that $f$ vanishes outside $[-1,1]$, which you can presumably prove from the initial conditions. Then we get
$$f'(x) = \begin{cases}
2f(2x+1) & \text{if }x\le0, \\
-2f(2x-1) & \text{if }x>0.
\end{cases}$$
This is pretty close to the definition of the Fabius function. In fact, your function would be $\frac{\text{Fb}'(\frac{x}{2}+1)}{2}$
The Fabius function is smooth but nowhere analytic, so there isn't going to be a nice closed form for your function.
|
Sufficient condition for differentiability at endpoints. Let $f:[a,b]\to \mathbb{R}$ be differentiable on $(a,b)$ with derivative $g=f^{\prime}$ there.
Assertion: If $\lim_{x\to b^{-}}g(x)$ exists and is a real number $\ell$ then $f$ is differentiable at $b$ and $f^{\prime}(b)=\ell$?
Is this assertion correct? If so, provide hints for a formal $\epsilon$-$\delta$ argument. If not, can it be made true if we strengthen some conditions on $g$ (continuity in $(a,b)$ etc.)? Provide counter-examples.
I personally think that addition of the continuity of $g$ in the hypothesis won't change anything as for example $x\sin \frac{1}{x}$ has a continuous derivative in $(0,1)$ but its derivative oscillates near $0$. I also know that the converse of this is not true.
Also, if that limit is infinite, then $f$ is not differentiable at $b$, right?
|
Since
$$
f(b+h)-f(b)=f'(\xi)h
$$
for some $\xi \in (b+h,b)$ (here $h<0$), you can let $h \to 0^{-}$ and conclude that $f'(b)=\lim_{x \to b^-}f'(x)$.
On the other hand, consider $f(x)=x^2 \sin \frac{1}{x}$. It is easy to check that $\lim_{x \to 0} f'(x)$ does not exist, and yet $f'(0)=0$.
Edit: this answer assumes tacitly that $f$ is continuous at $b$. The question does not contain this assumption, although it should be clear that a function discontinuous at $b$ can't be differentiable there.
|
what's the name of the theorem: median of right-triangle hypotenuse is always half of it This question is related to one of my previous questions.
The answer to that question included a theorem: "The median on the hypotenuse of a right triangle equals one-half the hypotenuse".
When I wrote the answer out and showed it to a friend of mine, he basically asked me how I knew that the theorem was true, and if the theorem had a name.
So, my question:
-Does this theorem have a name?
-If not, what would be the best way to describe it during a math test? Or is it better to write out the full prove every time?
|
Here is a proof without words:
|
Combinatorics problem with negative possibilities I know how to solve the basic number-of-solutions equations, such as "find the number of positive integer solutions to $x_1 + x_2 + x_3 = 12$". But I have no clue how to do this problem:
Find the number of solutions of $x_1+x_2-x_3-x_4 = 0$ in integers between -4 and 4, inclusive.
If I try and solve it like the basic equations, I get $C(n+r-1,r) = C(0+9-1,9) = C(8,9)$, which is obviously improper. Can someone point me in the right direction on how to solve this type of problem?
|
Put $x_i+4=:y_i$. Then we have to solve
$$y_1+y_2=y_3+y_4$$
in integers $y_i$ between $0$ and $8$ inclusive. For given $p\geq0$ the equation $y_1+y_2=p$ has $p+1$ solutions if $0\leq p\leq 8$, and $17-p$ solutions if $9\leq p\leq 16$. It follows that the total number $N$ of solutions is given by
$$N=\sum_{p=0}^8(p+1)^2+\sum_{p=9}^{16}(17-p)^2=2\sum_{k=1}^8 k^2+ 9^2=489\ .$$
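A brute-force cross-check of the count (not needed for the argument, just reassurance):

```python
from itertools import product

count = sum(1 for x1, x2, x3, x4 in product(range(-4, 5), repeat=4)
            if x1 + x2 - x3 - x4 == 0)
print(count)  # 489
```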
|
How fill in this multiplication table? The following multiplication table was given to me as a class exercise. I should have all the necessary information to fill it completely in. However, I'm not sure how to take advantage of the relations I am given to fill it in?
The Question
A group has four elements $a,b,c$ and $d$, subject to the rules $ca = a$ and $d^2 = a$. Fill in the entire multiplication table.
\begin{array}{c|cccc}
\cdot & a & b & c & d \\ \hline
a& & & & \\
b& & & & \\
c& a & & & \\
d& & & & a
\end{array}
I imagine I might proceed like this:
To find $ab$, write $a = d^2$ and thus $ab = d^2b = d(db)$... but my chain of reasoning always stops around here.
|
Since each element has an inverse, each row and each column of the table must contain all four elements.
Note first that $ca=a$ forces $c$ to be the identity element (multiply by $a^{-1}$ on the right). After filling in the row and the column of the identity element $c$, we have three $a$'s in the table, and it follows that the only place for the last $a$ is given by $b^2=a$. Trying to distribute four $d$'s in the table, we are led to $ab=ba=d$. The only place for the remaining $b$'s is now given by $ad=da=b$, and the remaining products are then $=c$.
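For the skeptical, a sketch that checks the completed table really is a group table (it turns out to be the cyclic group of order $4$, generated by $d$):

```python
els = 'abcd'
table = {  # completed multiplication table; c is the identity
    'a': {'a': 'c', 'b': 'd', 'c': 'a', 'd': 'b'},
    'b': {'a': 'd', 'b': 'a', 'c': 'b', 'd': 'c'},
    'c': {'a': 'a', 'b': 'b', 'c': 'c', 'd': 'd'},
    'd': {'a': 'b', 'b': 'c', 'c': 'd', 'd': 'a'},
}
assert table['c']['a'] == 'a' and table['d']['d'] == 'a'  # given relations
assert all(table[x][table[y][z]] == table[table[x][y]][z]
           for x in els for y in els for z in els)  # associativity
print("valid group table")
```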
|
Existence and uniqueness of solution for a seemingly trivial 1D non-autonomous ODE So I was trying to do some existence and uniqueness results beyond the trivial setting. So consider the 1D non-autonomous ODE given by
$\dot{y} = f(t) - g(t) y $ where $f,g \geq 0$ are integrable and $f(t),g(t) \rightarrow 0$ for $t \rightarrow \infty$. How would I go about proving the existence and uniqueness for the solution of such an ODE for $t \rightarrow \infty$?
Just by a contraction argument?
|
Assuming you meant $\dot{y} = f(t) - g(t) y$, this is a linear differential equation and has the explicit solutions $y(t) = \mu(t)^{-1} \int \mu(t) f(t)\ dt$ where
$\mu(t) = \exp(\int g(t)\ dt)$, from which it is clear that you have global existence of
solutions. Uniqueness follows from the standard existence and uniqueness for ODE.
|
Coherent sheaves on a non-singular algebraic variety Grothendieck wrote in his letter to Serre(Nov. 12,1957) that every coherent algebraic sheaf on a non-singular algebraic variety(not necessarily quasi-projective) is a quotient of a direct sum of sheaves defined by divisors.
I think "sheaves defined by divisors" means locally free sheaves of rank one(i.e. invertible sheaves). How do you prove this?
|
This is proved for any noetherian separated regular schemes in SGA 6, exposé II, Corollaire 2.2.7.1 (I learn this result from a comment here: such schemes are "divisorial".) To see that this answers your question, look at op. cit. Définition 2.2.3(ii).
|
Help me understand the (continuous) uniform distribution I think I didn't pay attention to uniform distributions because they're too easy.
So I have this problem
*
*It takes a professor a random time between 20 and 27 minutes to walk from his home to school every day. If he has a class at 9.00 a.m. and he leaves home at 8.37 a.m., find the probability that he reaches his class on time.
I am not sure I know how to do it. I think I would use $F(x)$, and I tried to look up how to figure it out but could only find the answer $F(x)=(x-a)/(b-a)$.
So I input the numbers and got $(23-20)/(27-20)$, which is $3/7$, but I am not sure that is the correct answer, though it seems right to me.
I'm not here for homework help (I am not being graded on this problem or anything), but I do want to understand the concepts. Too often I just learn how to do math and don't "really" understand it.
So I would like to know how to properly do uniform distribution problems (of continuous variable) and maybe how to find the $F(x)$. I thought it was the integral but I didn't get the same answer.
Remember I want to understand this. Thanks so much for your time.
|
Your walking time is uniform on a constant interval $[a,b]$, and the uniform distribution gives
$\int_{a}^{x} \frac{dt}{27-20}.$
Your starting point, 8:37, is your time 0, and you want to make it to your class by 9:00. Your minimum walking time is 20 minutes, which would still leave you on time. But for a walking time of more than 23 minutes you will be late. Hence we want our walking time in $[20,23]$, and these are the bounds of our integral.
Hence the desired probability is $3/7$.
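And a quick simulation to confirm the answer (a sketch; $3/7\approx0.4286$):

```python
import random

trials = 10**6
on_time = sum(random.uniform(20, 27) <= 23 for _ in range(trials))
print(on_time / trials, 3 / 7)  # both should be about 0.4286
```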
|
Find all complex numbers $z$ satisfying the equation I need some help on this question. How do I approach this question?
Find all complex numbers $z$ satisfying the equation
$$
(2z - 1)^4 = -16.
$$
Should I remove the power of $4$ of $(2z-1)$ and also do the same for $-16$?
|
The answer to this problem lies in the roots of a polynomial.
From an ocular inspection we know we will have complex roots, and they always come in pairs! The degree of our equation is 4, so we know we will have pairs of complex conjugates. This is a consequence of the Fundamental Theorem of Algebra.
Addendum
A key observation is how to represent $-1$ in its general form using Euler's identity, together with the behaviour of odd powers in Euler's formula.
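To make the hint concrete: writing $-16 = 16e^{i\pi(2k+1)}$, the fourth roots of $-16$ are $w_k=2e^{i\pi(2k+1)/4}$ for $k=0,1,2,3$, and then $z=\frac{1+w_k}{2}$. A numerical sketch:

```python
import cmath

for k in range(4):
    w = 2 * cmath.exp(1j * cmath.pi * (2 * k + 1) / 4)  # fourth roots of -16
    z = (1 + w) / 2
    print(z, (2 * z - 1)**4)  # second column should be ~ -16
```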
|
The difference between m and n in calculating a Fourier series I am studying for an exam in Differential Equations, and one of the topics I should know about is Fourier series. Now, I am using Boyce 9e, and in there I found the general equation for a Fourier series:
$$\frac{a_0}{2} + \sum_{m=1}^{\infty} \left(a_m \cos\frac{m \pi x}{L} + b_m \sin\frac{m \pi x}{L}\right)$$
I also found the equations to calculate the coefficients in terms of $n$, where $n$ is a nonnegative integer:
$$a_n = \frac{1}{L} \int_{-L}^{L} f(x) cos\frac{n \pi x}{L}dx$$
$$b_n = \frac{1}{L} \int_{-L}^{L} f(x) sin\frac{n \pi x}{L}dx$$
I noticed that the coefficients are calculated in terms of n, but are used in the general equation in terms of m. I also noticed that at the end of some exercises in my book, they convert from n to m. So my question is: what is the difference between n and m, and why can't I calculate my coefficients in terms of m directly? Why do I have to calculate them in terms of n, and then convert them? I hope that some of you can help me out!
|
You should know by now that $n$ and $m$ are just dummy indices. You can interchange them as long as they represent the same thing, namely an arbitrary natural number.
If
$$f(x) = \sum\limits_{n = 1}^\infty {{b_n}\sin \frac{{n\pi x}}{L}}$$
we can multiply both sides by ${\sin \frac{{m\pi x}}{L}}$ and integrate from $x = 0$ to $x = L$, for example, as follows:
$$\int_0^L {f(x)\sin \frac{{m\pi x}}{L}dx = \sum\limits_{n = 1}^\infty {{b_n}\int_0^L {\sin \frac{{n\pi x}}{L}\sin \frac{{m\pi x}}{L}dx}}}$$
but the righthand side is
$$\sum\limits_{n = 1}^\infty {{b_n}\frac{L}{2}{\delta _{nm}} = \left\{ {\begin{array}{*{20}{c}}
0&{n \ne m} \\
{{b_m}\frac{L}{2}}&{n = m}
\end{array}} \right.}$$
where ${{\delta _{nm}}}$ is the Kronecker delta. It is just a compact way of stating that sines are orthogonal, i.e.
$$\int_0^L {\sin \frac{{n\pi x}}{L}\sin \frac{{m\pi x}}{L}dx = \frac{L}{2}}$$
if $n = m$, and $0$ otherwise. So why did we use ${b_m}$? We used ${b_m}$ because the integral evaluates to $0$ when $n \ne m$, and the only term that "survives" is ${b_m}$ because it corresponds to the case $n = m$. Therefore, we can write
$$\int_0^L {f(x)\sin \frac{{m\pi x}}{L}} dx = {b_m}\frac{L}{2}$$
and solve for ${b_m}$:
$${b_m} = \frac{2}{L}\int_0^L {f(x)\sin \frac{{m\pi x}}{L}}dx.$$ We can solve for ${a_m}$ in exactly the same way because cosines are also orthogonal. At the end we can always change $m$ to $n$. This method for finding the Fourier coefficients works in general and is often referred to as the "Fourier trick" in physics. Overall, we can use
$$\int_{ - L}^L {\sin \frac{{n\pi x}}{L}\sin \frac{{m\pi x}}{L}dx = \left\{ {\begin{array}{*{20}{c}}
0&{n \ne m} \\
L&{n = m \ne 0}
\end{array}} \right.}$$
$$\int_{ - L}^L {\cos \frac{{n\pi x}}{L}\cos \frac{{m\pi x}}{L}dx = \left\{ {\begin{array}{*{20}{c}}
0&{n \ne m} \\
L&{n = m \ne 0} \\
{2L}&{n = m = 0}
\end{array}} \right.}$$
$$\int_{ - L}^L {\sin \frac{{n\pi x}}{L}\cos \frac{{m\pi x}}{L}dx = 0}$$
to derive the famous Fourier coefficients.
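If you want to convince yourself of these orthogonality relations numerically, here is a minimal sketch (assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad

L = 2.0  # an arbitrary half-period, just for illustration
for n in range(4):
    for m in range(4):
        val, _ = quad(lambda t: np.sin(n*np.pi*t/L) * np.sin(m*np.pi*t/L), -L, L)
        expected = L if (n == m and n != 0) else 0.0
        assert abs(val - expected) < 1e-8
print("sine orthogonality relations verified")
```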
|
What is the meaning of a.s.? What is the meaning of a.s. behind a limit formula (I found this in a paper about stochastic processes), or sometimes P-a.s.?
|
It means almost surely. P-a.s. means almost surely with respect to the probability measure P. For more details, look up "almost sure convergence".
Let me give some insights: When working with convergence of sequences of random variables(in general stochastic processes), it is not necessary for convergence to happen for all $w \in \Omega$, where $\Omega$ is the sample space. Instead it is fine if the set where it doesn't converge happens over a set with measure 0, since most of the results go through. If you take a measure theory course, you will be able to appreciate this even more.
|
Solve for variable inside multiple power in terms of the powers. I'm a programmer working on test software. Currently it estimates the values it needs by testing with a brute-force algorithm. I'm trying to improve the math behind the software so that I can calculate the solution(s) instead. I seem to have come across an equation that is beyond my ability.
$$(Bx_1 + 1)^{y_2}=(Bx_2 + 1)^{y_1}$$
My goal is to have $B$ as a function of everything else, or to have an algorithm solve for it. I feel like it should be possible, but I'm not even sure how to begin.
|
Okay, so I came up with an approach that allows me to approximate the answer if $\frac {y_2}{y_1}$ is rational (which it will be in my case because I have limited precision).
I can re-express the original equation as $$(Bx_1+1)^{\frac {y_2}{y_1}}=Bx_2+1$$
If $\frac {y_2}{y_1}$ is rational, I can write it as $\frac ND$ where $N$ and $D$ are integers. Substituting this back in and redistributing the fraction, I get $$(Bx_1+1)^N=(Bx_2+1)^D$$
Here is the fun part: I can do a binomial expansion on each side. This gives me one high-order polynomial whose root I can approximate. It's very messy but should get the job done.
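Since any standard root-finder also does the job once $h(B)=(Bx_1+1)^{y_2}-(Bx_2+1)^{y_1}$ is written down, here is a sketch with made-up sample values (the real $x_i$, $y_i$ come from your tests; note that $B=0$ is always a trivial root, so bracket away from it):

```python
from scipy.optimize import brentq

x1, x2, y1, y2 = 1.0, 2.0, 3.0, 5.0  # hypothetical inputs

def h(B):
    return (B * x1 + 1)**y2 - (B * x2 + 1)**y1

B = brentq(h, 1e-9, 100.0)  # bracket must straddle a sign change
print(B, h(B))              # h(B) should be ~0
```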
|
Counterexample to show that the class of Moscow spaces is not closed hereditary. What is a counterexample to show that the class of Moscow spaces is not closed hereditary?
(A space $X$ is called Moscow if the closure of every open $U \subseteq X$ is the union of a family of G$_{\delta}$-subsets of $X$.)
|
This is essentially copied from A. V. Arhangel'skii, Moscow spaces and topological groups, Top. Proc., 25; pp.383-416:
Let $D ( \tau )$ be an uncountable discrete space, and $\alpha D ( \tau )$ the one point compactification of $D ( \tau )$. Then $D ( \tau )$ is a Moscow space, and $D ( \tau )$ is G$_\delta$-dense in $\alpha D ( \tau )$, while $\alpha D ( \tau )$ is not a Moscow space.
Indeed, let $U$ be any infinite countable subset of $D ( \tau )$.
Then $U$ is open in $\alpha D ( \tau )$, and $\overline{U} = U \cup \{ \alpha \}$, where $\alpha$ is the only non-isolated point in $\alpha D ( \tau )$.
Every G$_\delta$-subset of $\alpha D ( \tau )$ containing the point $\alpha$ is easily seen to be uncountable; therefore, $\overline{U}$ is not the union of any family of G$_\delta$-subsets of $\alpha D ( \tau )$.
Since $\alpha D ( \tau )$ is a closed subspace of a Tychonoff cube, we conclude that the class of Moscow spaces is not closed hereditary.
|
Getting the Total value from the Nett I can't figure out this formula; I need some help to write it out for a php script.
I have a value of $\$80$.
$\$80$ is the profit from a total sale of $\$100$; $ 20\%$ is the percentage margin for the respective product.
Now I just have the $\$80$ and want to get the total figure of $\$100$. How can I get the total figure and what would be the formula?
Your help is much appreciated. Thanks in advance.
|
If you mean that you get $80$ from $100$ after a $20$ per cent discount and want to reverse the process, multiply by $\frac{100}{100-20}=\frac{5}{4}$. The formula you need is $y=\frac{5x}{4}$, so if $x=80$, then $y=100$.
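In code form (a sketch of the same formula, written in Python rather than PHP):

```python
def total_from_net(net, margin):
    # margin = profit as a fraction of the total sale, e.g. 0.20 for 20%
    return net / (1 - margin)

print(total_from_net(80, 0.20))  # 100.0
```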
|
Finding a High Bound on Probability of Random Set first time user here. English not my native language so I apologize in advance. Taking a final in a few weeks for a graph theory class and one of the sample problems is exactly the same as the $k$-edge problem.
We need to prove that if each vertex is selected with probability $\frac{1}{2}$, then the probability that the selected set is an
independent set is $\geq (\frac{2}{3})^k$ (instead of $\frac{1}{2^k}$, as in the original problem).
Here is what I am trying: I am looking into calculating the number of independent sets for each possible number of vertices $i$, for $i=0$ to $i=n$. Once I calculate the probability of there being an independent set for the number of vertices $n$, I can take the expectation and union bound.
|
I was the one who asked the original question that you linked to. Just for fun I tried to see if I could prove a better bound. I was able to prove a bound even better than that. Here's a hint: you can use indicator variables and the second moment method.
|
Convergence of alternating series based on prime numbers I've been experimenting with some infinite series, and I've been looking at this one,
$$\sum_{k=1}^\infty (-1)^{k+1} {1\over p_k}$$
where $p_k$ is the k-th prime. I've summed up the first 35 terms myself and got a value of about 0.27935, and this doesn't seem close to a relation of any 'special' constants, except maybe $\frac12\gamma $.
My question is, has the sum of this series been proven to have a particular closed form? If so, what is this value?
|
As mentioned, this series has an expansion given by the OEIS. This series is mentioned in many sources, such as Mathworld, Wells, Robinson & Potter and Weisstein.
These sources all seem to imply that, though the series converges, no known "closed form" for this sum exists.
|
Basic theory about divisibility and modular arithmetic I am awfully bad with number theory, so if someone can provide a quick solution to this, it will be very much appreciated!
Prove that if $p$ is a prime with $p \equiv 1 \pmod 4$ then there is an integer $m$ such that $p$ divides $m^2 +1$
|
I will assume that you know Wilson's Theorem, which says that if $p$ is prime, then $(p-1)!\equiv -1\pmod{p}$.
Let $m=\left(\frac{p-1}{2}\right)!$. We show that if $p\equiv 1\pmod{4}$, then $m^2\equiv -1\pmod{p}$. This implies that $p$ divides $m^2+1$.
The idea is to pair $1$ with $p-1$, $2$ with $p-2$, $3$ with $p-3$, and so on until at the end we pair $\frac{p-1}{2}$ with $\frac{p+1}{2}$. To follow the argument, you may want to work with a specific prime, such as $p=13$. So we pair $1$ with $12$, $2$ with $11$, $3$ with $10$, $4$ with $9$, $5$ with $8$, and finally $6$ with $7$.
Thus for any $a$ from $1$ to $\frac{p-1}{2}$, we pair $a$ with $p-a$. Note that $a(p-a)\equiv -a^2\pmod{p}$.
So the product of all the numbers from $1$ to $p-1$ is congruent modulo $p$ to the product
$$(-1^2)(-2^2)(-3^2)\cdots\left(-\left(\frac{p-1}{2}\right)^2\right).$$
This is congruent to
$$(-1)^{\frac{p-1}{2}}m^2.$$
But $p-1$ is divisible by $4$, so $\frac{p-1}{2}$ is even, and therefore our product is congruent to $m^2$.
But our product is congruent to $(p-1)!$, and therefore, by Wilson's Theorem, it is congruent to $-1$.
We conclude that $m^2\equiv -1\pmod{p}$, which is what we wanted to show.
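A quick empirical confirmation for a few primes $p\equiv 1\pmod 4$ (just a sanity check, not part of the proof):

```python
from math import factorial

for p in [5, 13, 17, 29, 37, 41]:
    m = factorial((p - 1) // 2)
    assert (m * m + 1) % p == 0
print("p divides m^2 + 1 in every case")
```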
|
Computing $\operatorname{Ext}^1_R(R/x,M)$ How to compute $\operatorname{Ext}^1_R(R/(x),M)$ where $R$ is a commutative ring with unit, $x$ is a nonzerodivisor and $M$ an $R$-module?
Thanks.
|
There is an alternative way of doing this problem, rather than taking a projective resolution. Consider the SES of $R$-modules
$$0 \to R \stackrel{x}{\to} R \to R/(x) \to 0$$
where the multiplication-by-$x$ map is injective because $x$ is not a zero divisor in $R$. Now we recall a general fact from homological algebra that says any SES of $R$-modules gives rise to an LES in Ext. We need only care about the part
$$0 \to \textrm{Hom}_R(R/(x),M) \to \textrm{Hom}_R(R,M) \stackrel{f}{\to} \textrm{Hom}_R(R,M) \to \textrm{Ext}^1_R(R/(x),M) \to 0 \to 0\ldots $$
where the zeros appear because $R$ as a module over itself is free (and hence projective) so that $\textrm{Ext}^1_R(R,M) = 0$. Now we recall that $\textrm{Hom}(R,M) \cong M$ because any homomorphism from $R$ to $M$ is completely determined by the image of $1$. It is easily seen now that under this identification, $\textrm{im} f \cong xM$ so that
$$\textrm{Ext}^1_R(R/(x),M) \cong M/xM.$$
|
prove sum of divisors of a square number is odd Don't know how to prove that sum of all divisors of a square number is always odd.
Example: the divisors of $27$ are $1,3,9,27$; the divisors of $27^2 = 729$ are $1,3,9,27,81,243,729$; and $\sigma_1$ (the divisor function) $= 1 + 3 + 9 + 27 + 81 + 243 + 729 = 1093$ is odd.
I think it is somehow connected to the fact that every odd divisor gets some kind of a pair when a number is squared and $1$ doesn't get one, but I can't formalize it. Need help.
|
The divisors $1=d_1<d_2<\cdots <d_k=n^2$ can be partitioned into pairs $(d, \frac {n^2}d)$, except that there is no partner for $n$ itself.
Therefore, the number of divisors is odd.
Thus if $n$ itself is odd (and so are all its divisors), we have the sum of an odd number of odd numbers, hence the result is odd.
But if $n$ is even, we can write it as $n=m2^k$ with $m$ odd. The odd divisors of $n^2$ are precisely the divisors of $m^2$. Their sum is odd, and all the other (even) divisors do not affect the parity of the sum.
|
$p$ an odd prime, $p \equiv 3 \pmod 8$. Show that $2^{(\frac{p-1}{2})}*(p-1)! \equiv 1 \pmod p$ $p$ an odd prime, $p \equiv 3 \pmod 8$. Show that $2^{\left(\frac{p-1}{2}\right)}\cdot(p-1)! \equiv 1 \pmod p$
From Wilson's theorem: $(p-1)!\equiv -1 \pmod p$.
hence, need to show that $2^{\left(\frac{p-1}{2}\right)} \equiv -1 \pmod p. $
we know that $2^{p-1} \equiv 1 \pmod p.$
Hence: $2^{\left(\frac{p-1}{2}\right)} \equiv \pm 1 \pmod p. $
How do I show that this must be the negative option?
|
All the equalities below are in the ring $\mathbb{Z}/p\mathbb{Z}$.
Note that $-1 = (p-1)! = \prod_{i=1}^{p-1}i = \prod_{i=1}^{\frac{p-1}{2}}(2i-1)\prod_{i=1}^{\frac{p-1}{2}} 2i = 2^{\frac{p-1}{2}} \prod_{i=1}^{\frac{p-1}{2}}(2i-1)\prod_{i=1}^{\frac{p-1}{2}} i$
Now let $S_1, S_2$ be the set of respectively all odd and even numbers in $\left \{ 1, \cdots, \frac{p-1}{2} \right \}$ and $S_3$ be the set of all even numbers in $\left \{ \frac{p+1}{2}, \ldots, p-1 \right \}$.
Note that $\prod_{i=1}^{\frac{p-1}{2}} i = \prod _{j \in S_1}j \prod _{k \in S_2}k = (-1)^{|S_1|} \prod _{j \in S_1}(-j)\prod _{k \in S_2}k $ $= (-1)^{|S_1|} \prod _{t \in S_3}t \prod _{k \in S_2}k =(-1)^{|S_1|} \prod_{i=1}^{\frac{p-1}{2}}(2i)$
So $\prod_{i=1}^{\frac{p-1}{2}}(2i-1)\prod_{i=1}^{\frac{p-1}{2}} i = (-1)^{|S_1|}\prod_{i=1}^{\frac{p-1}{2}}(2i-1) \prod_{i=1}^{\frac{p-1}{2}} 2i = (-1)^{|S_1|} (p-1)! = (-1)^{|S_1| +1} $
Now we have $-1 = 2^{\frac{p-1}{2}} \prod_{i=1}^{\frac{p-1}{2}}(2i-1)\prod_{i=1}^{\frac{p-1}{2}} i = (-1)^{|S_1| + 1} \cdot 2^{\frac{p-1}{2}} $
i.e $\boxed{2^{\frac{p-1}{2}} = (-1)^{|S_1|}} $
Now $|S_1| = \frac{p+1}{4}$ if $\frac{p-1}{2}$ is odd and $|S_1| = \frac{p-1}{4}$ if $\frac{p-1}{2}$ is even.
So if $p \equiv 3, 5 \pmod 8$ we have $2^{\frac{p-1}{2}} = -1$,
and if $p \equiv 1,7 \pmod 8$ we have $2^{\frac{p-1}{2}} = 1$.
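The boxed formula is easy to test numerically with fast modular exponentiation (a sketch):

```python
for p in [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]:  # odd primes
    r = pow(2, (p - 1) // 2, p)                   # 2^((p-1)/2) mod p
    print(p, p % 8, -1 if r == p - 1 else 1)
```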
|
Why the principle of counting does not match with our common sense Principle of counting says that
"the number of odd integers, which is the same as the number of even integers, is also the same as the number of integers overall."
This does not match with my common sense (I am not a mathematician, but a CS student).
Can someone here help me to reach a mathematician's level of thinking about this problem? I have searched the net a lot (including Wikipedia).
|
Because sets of numbers can be infinitely divisible. See this Reddit comment.
I think his intuition comes from the fact that the world is discrete in practice. You have 2x more atoms in [0, 2cm] than in [0, 1cm]. If you are not looking at something made of atoms, let's say you have 2x more Planck lengths in [0, 2cm] than in [0, 1cm]. See what I mean?
OP's intuition can be correct for physical things in our world, but mathematics go beyond that, with rational numbers being infinitely divisible. As soon as there is a limit to how much you can divide things, even if it's one million digits after the decimal point, OP's intuition is valid.
|
Showing that $\langle p\rangle=\int\limits_{-\infty}^{+\infty}p |a(p)|^2 dp$ How do I show that $$\int \limits_{-\infty}^{+\infty} \Psi^* \left(-i\hbar\frac{\partial \Psi}{\partial x} \right)dx=\int \limits_{-\infty}^{+\infty} p \left|a(p)\right|^2dp\tag1$$
given that $$\Psi(x)=\frac{1}{\sqrt{2 \pi \hbar}}\int \limits_{-\infty}^{+\infty} a(p) \exp\left(\frac{i}{\hbar} px\right)dp\tag2$$
My attempt: $$\frac {\partial \Psi(x)}{\partial x} = \frac{1}{\sqrt{2\pi \hbar}} \int\limits_{-\infty}^{+\infty} \frac{\partial}{\partial x} \left(a(p)\exp\left(\frac{i}{\hbar} px\right)\right)dp\tag3$$
$$=\frac{1}{\sqrt{2\pi \hbar}} \int\limits_{-\infty}^{+\infty} a(p) \cdot \exp\left(\frac{i}{\hbar} px\right)\frac{i}{\hbar}p \cdot dp\tag4$$
Multiplying by $-i\hbar$:
$$-i\hbar \frac {\partial \Psi}{\partial x}=\frac{1}{\sqrt{2\pi \hbar}} \int\limits_{-\infty}^{+\infty} a(p) \cdot \exp\left(\frac{i}{\hbar} px\right)p \cdot dp\tag5$$
At this point I'm stuck because I don't know how to evaluate the integral without knowing $a(p)$. And yet, the right hand side of equation (1) doesn't have $a(p)$ substituted in.
|
The conclusion follows from the Fourier inversion formula (in the distribution sense):
$$\begin{align*}
&\int_{-\infty}^{\infty} \Psi^{*} \left( -i\hbar \frac{\partial \Psi}{\partial x} \right) \, dx \\
&= \int_{-\infty}^{\infty} \left( \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} a(p)^{*}e^{-ipx/\hbar} \, dp \right) \left( \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} p' a(p')e^{ip'x/\hbar} \, dp' \right) \, dx \\
&= \frac{1}{2\pi\hbar} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} p' a(p)^{*}a(p') e^{i(p'-p)x/\hbar} \, dp'dp dx \\
&= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} p' a(p)^{*}a(p') \left( \frac{1}{2\pi\hbar} \int_{-\infty}^{\infty} e^{i(p'-p)x/\hbar} \, dx \right) \, dp' dp \\
&= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} p' a(p)^{*}a(p') \delta(p-p') \, dp' dp \\
&= \int_{-\infty}^{\infty} p a(p)^{*}a(p) \, dp
= \int_{-\infty}^{\infty} p \left| a(p) \right|^2 \, dp.
\end{align*}$$
|
Dihedral group $D_{8}$ as a semidirect product $V\rtimes C_2$? How do I show that the dihedral group $D_{8}$ (order $8$) is a semidirect product $V\rtimes \left\langle \alpha \right\rangle $, where $V$ is the Klein group and $\alpha$ is an automorphism of order two?
|
I like to think about $D_8$ as the group of symmetries of a square.
So our group $V$ is given by the identity, the two reflections that fix no vertex, and their product, which is the 180°-rotation.
Semidirect products can be characterized as split short exact sequences of groups. This is just a fancy way to say the following:
Given a group $G$ and a normal subgroup $N\subset G$. Denote by $\pi:G\to G/N$ the projection map. Then $G\cong N\rtimes G/N$ iff there exists a homomorphism $\phi: G/N\to G$, such that $\pi\circ\phi=\mathrm{id}_{G/N}$. This $\phi$ is called a splitting homomorphism.
Back to our dihedral group: $D_8/V\cong C_2$. In order to define a splitting homomorphism $C_2\to D_8$ we need to find an element of order 2 that is not contained in $V$. Such an element is given by a reflection through one of the diagonals of our square. It is clear that $\pi$ doesn't map this element to $0\in C_2$ because it does not lie in $V$. So we constructed a homomorphism $\phi: C_2\to D_8$ such that $\pi\circ\phi=\mathrm{id}_{C_2}$.
|
How to explain that division by $0$ yields infinity to a 2nd grader How do we explain that dividing a positive number by $0$ yields positive infinity to a 2nd grader? The way I intuitively understand this is $\lim_{x \to 0}{a/x}$ but that's asking too much of a child. There's got to be an easier way.
In response to the comments about it being undefined, granted, it is undefined, but it's undefined because of flipping around $0$ in positive or negative values and is in any case either positive or negative infinity.
Yet, $|\frac{2}{0}|$ equals positive infinity in my book. How do you convey this idea?
|
Let us consider that any number divided by zero is undefined.
You can let the kid know in this way:
division is actually splitting things. For example, suppose you have 4 chocolates and you have to distribute those 4 chocolates among 2 of your friends: you would divide 4 by 2 (i.e. $4/2$) and get 2.
Now consider this: you have 4 chocolates and you don't want to distribute them among any of your friends (that is like distributing to 0 friends). Division does not even come into the picture in such cases, and the division $4/0$ makes no sense. Hence in such cases it is called UNDEFINED.
|
example of irreducible transient Markov chain Can anyone give me a simple example of an irreducible (all elements communicate) and transient Markov chain?
I can't think of any such chain, yet one exists (but it has to have an infinite number of states).
thanks
|
A standard example is asymmetric random walk on the integers: consider a Markov chain with state space $\mathbb{Z}$ and transition probability $p(x,x+1)=3/4$, $p(x,x-1)=1/4$. There are a number of ways to see this is transient; one is to note that it can be realized as $X_n = X_0 + \xi_1 + \dots + \xi_n$ where the $\xi_i$ are iid biased coin flips; then the strong law of large numbers says that $X_n/n \to E[\xi_i] = 1/2$ almost surely, so that in particular $X_n \to +\infty$ almost surely. This means it cannot revisit any state infinitely often.
Another example is simple random walk on $\mathbb{Z}^d$ for $d \ge 3$. Proving this is transient is a little more complicated, but the proof can be found in most graduate probability texts.
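A short simulation of the first example, showing the drift that makes the chain transient (a sketch):

```python
import random

n = 10**5
x = sum(1 if random.random() < 0.75 else -1 for _ in range(n))
print(x / n)  # close to E[xi] = 1/2, so X_n -> +infinity almost surely
```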
|
How do you compute the number of reflexive relation? Given a set with $n$ elements
I know that there are $2^{n^2}$ relations, because there are $n$ rows and $n$ columns and each entry is either $1$ or $0$, but I don't know how to compute the number of reflexive relations. I am very dumb. Can someone help me go through the thought process?
|
A relation $R$ on $A$ is a subset of $A\times A$.
If $R$ is reflexive, each of the $n$ ordered pairs $(a,a)$ for $a\in A$ must be in $R$.
The remaining $n^2-n$ ordered pairs of the type $(a,b)$ where $a\neq b$ may or may not be in $R$.
So each such ordered pair has 2 choices: to be in $R$ or not to be in $R$.
Hence the number of reflexive relations is $2^{n^2-n}$.
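A brute-force check of the count for small $n$ (a sketch; it enumerates all $2^{n^2}$ relations, so keep $n$ tiny):

```python
from itertools import product

def count_reflexive(n):
    cells = [(a, b) for a in range(n) for b in range(n)]
    total = 0
    for bits in product([0, 1], repeat=n * n):
        rel = {cells[i] for i, bit in enumerate(bits) if bit}
        if all((a, a) in rel for a in range(n)):
            total += 1
    return total

for n in range(1, 4):
    print(n, count_reflexive(n), 2**(n * n - n))  # counts agree
```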
|
Prove that $S_n$ is doubly transitive on $\{1, 2,..., n\}$ for all $n \geqslant 2$.
Prove that $S_n$ is doubly transitive on $\{1, 2,\ldots, n\}$ for all $n \geqslant 2$.
I understand that transitive implies only one orbit, but...
|
In fact, $S_n$ acts $n$-fold transitively on $\{1,\ldots,n\}$ (hence the claim follows from $n\ge 2$), i.e. for $n$ different elements $i_1,\ldots ,i_n$ (which must be all of $\{1,\ldots,n\}$) you can prescribe any $n$ different elements $j_1,\ldots,j_n$ and then find (exactly) one element $\sigma\in S_n$ such that $\sigma(i_k)=j_k$ for all $k$.
|
using contour integration I am trying to understand how to use contour integration to evaluate definite integrals. I still don't understand how it works for rational functions in $x$. So can anyone please elaborate this method using a particular function, say $\int_0^{\infty} \frac {1}{1+x^3} \ dx$? I'd appreciate that.
I meant that I know what I should be doing but am having problems applying it. So, basically, I am interested in how to proceed rather than "take this contour and so on".
I'd appreciate any help you people can give me. Thanks.
|
What we really need for contour integration by residues to work is a closed contour. An endpoint of $\infty$ doesn't matter so much because we can treat it as a limit as $R \to \infty$, but an endpoint of $0$ is a problem. Fortunately, this integrand is symmetric under rotation by $2 \pi/3$ radians. So we consider a wedge-shaped contour $\Gamma = \Gamma_1 + \Gamma_2 + \Gamma_3$ going in a straight line $\Gamma_1$ from $0$ to $R$ on the real axis, then a circular arc $\Gamma_2$ on $|z|=R$ to $R e^{2\pi i/3}$, then a straight line $\Gamma_3$ back to $0$.
We have
$$\eqalign{\int_{\Gamma_1} \dfrac{dz}{1+z^3} &= \int_0^R \dfrac{dx}{1+x^3} \cr
\int_{\Gamma_3} \dfrac{dz}{1+z^3} &= -e^{2\pi i/3} \int_0^R \dfrac{dx}{1+x^3}
= \dfrac{1 - \sqrt{3}i}{2} \int_0^R \dfrac{dx}{1+x^3}
\cr \left|\int_{\Gamma_2} \dfrac{dz}{1+z^3} \right| &\le \dfrac{CR}{R^3 - 1} \to 0\ \text{as $R \to \infty $}}$$ for some constant $C$. Now $f(z) = \dfrac{1}{1+z^3} = \dfrac{1}{(z+1)(z-e^{\pi i/3})(z-e^{-\pi i/3})}$ has one singularity inside $\Gamma$, namely at $z = e^{\pi i/3}$ (if $R > 1$), a simple pole with residue $$\dfrac{1}{(e^{\pi i/3}+1)(e^{\pi i/3} - e^{-\pi i/3})} = -\dfrac{1}{6} - \dfrac{\sqrt{3}}{6} i $$
Thus
$$ \dfrac{3 - \sqrt{3}i}{2} \int_0^\infty f(x)\ dx = \lim_{R \to \infty} \oint_\Gamma f(z)\ dz = 2 \pi i \left(-\dfrac{1}{6} - \dfrac{\sqrt{3}}{6} i\right)$$
$$ \int_0^\infty f(x)\ dx = \dfrac{4 \pi i}{3 - \sqrt{3} i} \left(-\dfrac{1}{6} - \dfrac{\sqrt{3}}{6} i\right) = \dfrac{2 \sqrt{3} \pi}{9} $$
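A numerical cross-check of the final value (a sketch; $\frac{2\sqrt{3}\pi}{9}\approx 1.2092$):

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: 1.0 / (1.0 + x**3), 0, np.inf)
print(val, 2 * np.sqrt(3) * np.pi / 9)  # both ~ 1.2092
```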
|
Two curious "identities" on $x^x$, $e$, and $\pi$ A numerical calculation on Mathematica shows that
$$I_1=\int_0^1 x^x(1-x)^{1-x}\sin\pi x\,\mathrm dx\approx0.355822$$
and
$$I_2=\int_0^1 x^{-x}(1-x)^{x-1}\sin\pi x\,\mathrm dx\approx1.15573$$
A further investigation on OEIS (A019632 and A061382) suggests that $I_1=\frac{\pi e}{24}$ and $I_2=\frac\pi e$ (i.e., $\left\vert I_1-\frac{\pi e}{24}\right\vert<10^{-100}$ and $\left\vert I_2-\frac\pi e\right\vert<10^{-100}$).
I think it is very possible that $I_1=\frac{\pi e}{24}$ and $I_2=\frac\pi e$, but I cannot figure them out. Is there any possible way to prove these identities?
|
You made a very nice observation! Often it is important to make a good guess than just to solve a prescribed problem. So it is surprising that you made a correct guess, especially considering the complexity of the formula.
I found a solution to the second integral here, and you can also find a solution to the first integral at the linked site.
|
An inequality about the gradient of a harmonic function Let $G$ be an open and connected set. Consider a function $z=2R^{-\alpha}v-v^2$, with $R$ to be chosen suitably small, where $v$ is a harmonic function in $G$ and satisfies
$$|x|^\alpha\leqslant v(x)\leqslant C_0|x|^\alpha. \ \ (*)$$
Then,
$$\Delta z+f(z)=-2|\nabla v|^2+f(z)\leqslant-C|x|^{2\alpha-2}f(0)+Kz,$$
where $K$ is the Lipschitz constant of the function $f$. ($f$ is a Lipschitz-continuous function).
I have no idea how to obtain this second inequality and I don't know if $(*)$ is really necessary.
|
This is an answer to a question raised in comment to another answer. Since it is of independent interest (perhaps of more interest than the original answer), I post it as a separate answer.
Question: How do we construct homogeneous harmonic functions that are positive on a cone?
Answer. Say, we want a positive homogeneous harmonic function $v$ on the cone $K=\{x\in\mathbb R^n:x_n>\kappa|x|\}$ where $\kappa\in (-1,1)$. Let $\alpha$ denote the degree of homogeneity of $v$: that is, $v(rx)=r^\alpha v(x)$ for all $x\in K$. The function $v$ is determined by the number $\alpha$ and by the restriction of $v$ to the unit spherical cap $S\cap K$. The Laplacian of $v$ can be decomposed into radial and tangential terms: $$\Delta v = v_{rr}+\frac{n-1}{r}v_r+\frac{1}{r^2}\Delta_S v$$ where $\Delta_S$ is the Laplace-Beltrami operator on the unit sphere $S$. Using the homogeneity of $v$, we find $v_r = \frac{\alpha }{r}v$ and $v_{rr}=\frac{\alpha(\alpha-1)}{r^2}v$. On the unit sphere $r=1$, and the radial term of $\Delta v$ simplifies to $\alpha(n+\alpha-2)v$.
Therefore, the restriction of $v$ to $S\cap K$ must be an eigenfunction of $\Delta_S$:
$$\Delta_S v + \mu v =0, \ \ \ \ \mu = \alpha(n+\alpha-2) $$
Where do we get such a thing from?
Well, there is a well-developed theory of eigenfunctions and eigenvalues for $\Delta_S$ with the Dirichlet boundary conditions. In particular, it is known that the eigenfunction corresponding to the lowest eigenvalue $\mu_1$ has constant sign. So this is what we use for $v$. There are two drawbacks:
*
*(a) we cannot choose $\alpha$ ourselves, because $\alpha(n+\alpha-2)=\mu_1$ and $\mu_1$ is determined by the domain $S\cap K$.
*(b) the function $v$ vanishes on the boundary of spherical cap, and therefore is not bounded away from $0$.
Concerning (a), we know that $\mu_1$ is monotone with respect to domain: larger domains (in the sense of inclusion) have lower value of $\mu_1$. (Physical interpretation: a bigger drum emits lower frequencies.) Therefore, the value of $\alpha$ decreases when the domain is enlarged. Also, when $K$ is exactly half-space $\{x_n>0\}$, we know our positive harmonic function directly: it's $v(x)=x_n$, which is homogeneous of degree $\alpha=1$. Therefore, in the cones that are smaller than half-space we have $\alpha>1$, and in the cones that are larger than half-space we have $\alpha<1$.
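To make (a) concrete, here is a worked example in the plane ($n=2$), chosen as the simplest possible cone: let $K$ be the sector $\{0<\theta<\beta\}$ of opening angle $\beta$. Then $S\cap K$ is an arc, the first Dirichlet eigenfunction on it is $\sin(\pi\theta/\beta)$ with $\mu_1=(\pi/\beta)^2$, and the equation $\alpha(n+\alpha-2)=\alpha^2=\mu_1$ gives $\alpha=\pi/\beta$. The resulting positive harmonic function is
$$v(r,\theta)=r^{\pi/\beta}\sin(\pi\theta/\beta),$$
the imaginary part of $z^{\pi/\beta}$. As predicted, $\alpha>1$ when $\beta<\pi$ (a cone smaller than the half-plane) and $\alpha<1$ when $\beta>\pi$.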
Concerning (b), we do the following: carry out the above construction in a slightly larger cone $K'$ and then restrict $v$ to $K$. Since the closure of $K\cap S$ is a compact subset of $K'\cap S$, the function $v$ attains a positive minimum there. We can multiply $v$ by a constant and achieve $1\le v\le C$ on $K\cap S$, hence $|x|^\alpha\le v(x)\le C|x|^\alpha$ on all of $K$.
|
Can a cubic that crosses the x axis at three points have imaginary roots? I have a cubic polynomial, $x^3-12x+2$, and when I try to find its roots by hand, I get two complex roots and one real one; the same happens if I use Mathematica. But when I plot the graph, it crosses the x-axis at three points. So if a cubic crosses the x-axis at three points, can it have imaginary roots? I think not, but I might be wrong.
|
The three roots are, approximately, $z = -3.545$, $0.167$ or $3.378$.
A corollary of the fundamental theorem of algebra is that a cubic has, when counted with multiplicity, exactly three roots over the complex numbers. If your cubic has three real roots then it will not have any other roots.
I've checked the plot, and you're right: there are three real roots. All I can think is that maybe you have made a mistake when substituting your complex "root" into the equation. Perhaps you might like to post the root and we can check it for you.
Another thing to convince you there's an error. Let $z= r_1,r_2,r_3$ be the three real roots. If $z=c$ is your complex root then the conjugate $z = \overline{c}$ must also be a root. Thus:
$$z^3-12z+2 = (z-r_1)(z-r_2)(z-r_3)(z-c)(z-\overline{c}) \, . $$
Hang on! That means your cubic equation starts $z^5 + \cdots$ and isn't a cubic after all. Contradiction! Either it doesn't have three real roots, or your complex "root" is not a root after all.
Beware that computer programs use numerical methods, so rounding errors can creep in. Also, the exact cubic formula can express real roots in terms of complex radicals (the so-called casus irreducibilis), so a symbolic "root" may be printed with an imaginary part while actually being real.
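As an illustration, here is a quick numerical check (a sketch using numpy's companion-matrix root finder):

```python
# Numerically confirm that x^3 - 12x + 2 has three real roots.
import numpy as np

roots = np.roots([1, 0, -12, 2])     # coefficients of x^3 + 0x^2 - 12x + 2
print(roots)                          # approximately -3.545, 3.378, 0.167 (in some order)
print(np.max(np.abs(roots.imag)))     # ~0: the imaginary parts vanish
```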
|
coordinate system, nonzero vector field I'm interested in the following result (chapter 5, theorem 7 in volume 1 of Spivak's Differential Geometry):
Let $X$ be a smooth vector field on an $n$-dimensional manifold M with $X(p)\neq0$ for some point $p\in M$. Then there exists a coordinate system $x^1,\ldots,x^n$ for $U$ (an open subset of $M$ containing $p$) in which $X=\frac{\partial}{\partial x^1}$.
Could someone please explain, in words, how to prove this (or, if you have the book, how Spivak proves this)?
I've read Spivak's proof, and have a few questions about it:
1) How is he using the assumption $X(p)\neq0$?
2) Why can we assume $X(0)=\frac{\partial}{\partial t^1}|_0$ (where $t^1,\ldots,t^n$ is the standard coordinate system for $\mathbb{R}^n$ and WLOG $p=0\in\mathbb{R}^n$)?
3) How do we know that in a neighborhood of the origin in $\mathbb{R}^n$, there's a unique integral curve through each point $(0,a^2,\ldots,a^n)$?
|
As a partial answer: uniqueness of integral curves boils down to the existence and uniqueness theorem for ordinary differential equations (Picard–Lindelöf).
Furthermore, any coordinate chart can be translated so that 0 maps to the point $p$ on the manifold, whatever point you're interested in. You're just doing a composition. If you have $\varphi:\mathbb{R}^n\rightarrow M$ and $\varphi(x_0) = p$ then $\phi(x) = \varphi(x + x_0)$ is the new map. It's easily seen to be smooth and satisfy all the properties you need. Because you can always do this, we often just assume it is done and take $\varphi$ to be the map taking 0 to $p$ without further comment.
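A concrete example may help with (1) and (2). Take the rotation field $X=-y\,\frac{\partial}{\partial x}+x\,\frac{\partial}{\partial y}$ on $\mathbb R^2$ and $p=(1,0)$, so $X(p)=\frac{\partial}{\partial y}\neq 0$. In polar coordinates $(\theta,r)$ near $p$ one computes $X=\frac{\partial}{\partial\theta}$, which is exactly the chart the theorem promises. At the origin, where $X$ vanishes, no such chart can exist, because $\frac{\partial}{\partial x^1}$ is nowhere zero; this is where the hypothesis $X(p)\neq0$ is used. Roughly speaking, it also enters the proof in (2): since $X(0)\neq0$, a linear change of coordinates on $\mathbb R^n$ sends $X(0)$ to $\frac{\partial}{\partial t^1}\big|_0$, and this makes the derivative of the flow map invertible at $0$, so the inverse function theorem applies.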
|
$S_n$ acting transitively on $\{1, 2, \dots, n\}$ I am reading Dummit and Foote, and in Section 4.1: Group Actions and Permutation Representations they give the following example of a group action:
The symmetric group $G = S_n$ acts transitively in its usual action as permutations on $A = \{1, 2, \dots, n\}$. Note that the stabilizer in $G$ of any point $i$ has index $n = | A |$ in $S_n$ (My italics)
I am having trouble seeing why the stabilizer has index $n$ in $S_n$. Is this due to the fact that we have $n$ elements of $A$ and thus $n$ distinct (left) cosets of the stabilizer $G_i$?
And how would I see this if I were to work with the permutation representation of this action? The action is not very clear to me.
|
$\sigma \in \operatorname{Stab}(i)$ for $1\le i \le n$ iff $\sigma $ fixes $i.$ You just have to count the number of permutations that fix $i$: working through the remaining elements in the usual order, there are $n-1$ choices for the image of the first one, $n-2$ for the second, and so on, so that $|\operatorname{Stab}(i)| = (n-1)!.$ Now apply Lagrange's Theorem: the index is $[S_n : \operatorname{Stab}(i)] = n!/(n-1)! = n.$
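If you want to see this concretely, here is a small brute-force check (an illustrative sketch for $n=4$):

```python
# Count the permutations of {0, ..., n-1} fixing the point 0.
from itertools import permutations
from math import factorial

n = 4
stab = [p for p in permutations(range(n)) if p[0] == 0]
print(len(stab), factorial(n - 1))   # 6 6, i.e. |Stab| = (n-1)!
print(factorial(n) // len(stab))     # index = n!/(n-1)! = n = 4
```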
|
What is the difference between Tautochrone curve and Brachistochrone curve as both are cycloid? What exactly distinguishes the tautochrone curve from the brachistochrone curve, given that both are cycloids?
If possible, could you point me to some references?
|
Mathematically, they are the same curve (a cycloid), but they arise from slightly different, related problems.
While the brachistochrone is the path between two points that takes the shortest time to traverse under constant gravity alone, the tautochrone is the curve along which a mass reaches the lowest point in the same time no matter at what height it starts, again under constant gravity.
These origins can be seen in the names:
Greek:
ταὐτό (tauto) the same
βράχιστος (brachistos) the shortest
χρόνος (chronos) time
Both problems are solved via Variational Calculus.
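For reference, the common solution curve in both cases is an arc of a cycloid, traced by a point on a circle of radius $r$ rolling along a line; measuring $y$ downward from the starting point, it can be parametrized as
$$x(\theta)=r(\theta-\sin\theta),\qquad y(\theta)=r(1-\cos\theta),$$
with $r$ and the parameter range determined by the endpoints.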
Wikipedia has illustrations of both: one of the tautochrone property (by Claudio Rocchini), and one of the brachistochrone problem (by Maxim Razin).
|
Are there infinitely many integers $n\ge 0$ such that $10^{2^n}+1$ is prime? It is clear that 11 and 101 are primes whose digits sum to 2. I wonder whether there are more, or even infinitely many, such primes.
At first I thought about the numbers $10^n+1$. Soon I realized that $n$ cannot equal $km$ for odd $k>1$, since otherwise $10^m+1$ is a factor; so $n$ must be a power of $2$.
So, here is my question:
Are there infinitely many integers $n\ge 0$ such that $10^{2^n}+1$ is prime?
After a few minutes, I found that if $n=2$, then $10^{2^n}+1=10001=73\times137$ is not prime;
if $n=3$, then $10^{2^n}+1=17\times5882353$ is not prime; and if $n=4$, then $10^{2^n}+1=353\times449\times641\times1409\times69857$ is not prime.
Now I wonder if 11 and 101 are the only two primes with this property.
|
Many people wonder the same thing you do. Wilfrid Keller keeps track of what they find out. So far: prime for $n=0$ and $n=1$ only; known to be composite for all other $n$, $2\le n\le23$, and many other values of $n$. The first value for which primality status is unknown is $n=24$.
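For completeness, the small cases are easy to reproduce with sympy (a sketch; factoring $10^{2^n}+1$ becomes expensive very quickly, so this only covers the first few values):

```python
# Check primality and factor 10^(2^n) + 1 for small n.
from sympy import factorint, isprime

for n in range(5):
    N = 10**(2**n) + 1
    print(n, isprime(N), factorint(N))
# n=0,1 give the primes 11 and 101; n=2 gives {73: 1, 137: 1};
# n=3 gives {17: 1, 5882353: 1}; n=4 gives the five factors listed above.
```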
|
Mutiple root of a polynomial modulo $p$ In my lecture notes of algebraic number theory they are dealing with the polynomial $$f=X^3+X+1, $$ and they say that
If f has multiple factors modulo a prime $p > 3$, then $f$ and $f' = 3X^2+1$ have a common factor modulo this prime $p$, and this is the linear factor $f − (X/3)f'$.
Please could you help me to see why this works? And moreover, how far can this be generalized?
|
*
*If $f$ has a multiple factor, say $h$ (in any field containing the current base field), then with appropriate $g$, we have
$$f(x)=h(x)^2\cdot g(x)$$
If you take its derivative, $f'(x)=2h(x)h'(x)g(x)+h(x)^2g'(x)$, which is still a multiple of $h(x)$, so $h$ is a common factor of $f$ and $f'$.
If polynomials $u$ and $v$ have common factors, then all of their linear combinations will have that as common factor.
Now in your particular example, note that the written $f-(X/3)f'$ is already linear (hence surely irreducible), so, if there is a common factor, it must be this one.
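Explicitly, since $p>3$ makes both $2$ and $3$ invertible modulo $p$, one can compute the linear combination mentioned in the notes:
$$f-\frac X3 f' = X^3+X+1-\frac X3\left(3X^2+1\right)=\frac23X+1.$$
Any common factor of $f$ and $f'$ divides this polynomial, hence is (up to a unit) the linear factor $X+\frac32$. The hypothesis $p>3$ is needed precisely so that this division by $3$ makes sense.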
|
can open sets be covered with another open set not much bigger? Is it correct that, for any open set $S \subset \mathbb{R}^n$, there exists an open set $D$ such that $S \subset D$ and $D \setminus S$ has measure zero?
I think it is correct, and I believe I have seen the proof somewhere before, but I cannot find it in any of my books. If it is wrong, please give me a counter-example.
Also, is the same true for a closed set whose interior is not of measure zero?
|
Assume that $S\subset D$ with $S,D$ open and $\mu(D\setminus S)=0$.
If $D$ is not contained in $\overline S$, then the nonempty open set $D\setminus \overline S$ has positive measure. Therefore, $D\subseteq \overline S$. Consequently, any open set $S$ with the property that $$\tag1\partial S\subseteq \partial(\mathbb R^n\setminus \overline S)$$ is a counterexample to your conjecture: any open ball around a point in $\partial S$ then intersects $\mathbb R^n\setminus \overline S$ in a nonempty open set of positive measure.
For example, an open ball or virtually any open set with a smooth boundary has property (1).
|
Need someone to show me the solution Can someone show me the solution to the following identity and explain how to get it?
$$(P÷N) × (N×(N+1)÷2) + N×(1-P) = N×(1-(P÷2)) + (P÷2)$$
|
\begin{align}
\dfrac{P}{N} \times \dfrac{N(N+1)}2 + N\times (1-P) & = \underbrace{P \times \dfrac{N+1}{2} + N \times (1-P)}_{\text{Cancelling out the $N$ from the first term}}\\
& = \underbrace{\dfrac{PN + P}2 + N - NP}_{\text{$P \times (N+1) = PN + P$ and $N \times (1-P) = N - NP$}}\\
& = \underbrace{\dfrac{PN + P +2N - 2NP}2}_{\text{Take the lcm $2$.}}\\
& = \underbrace{\dfrac{P + 2N -NP}2}_{PN - 2NP = -NP}\\
& = \dfrac{P}2 + \dfrac{2N - NP}2\\
& = \underbrace{N\dfrac{2-P}2 + \dfrac{P}2}_{\text{Factor out $N$ from $2N-NP$}}\\
& = \underbrace{N \left(1 - \dfrac{P}2\right) + \dfrac{P}2}_{\text{Making use of the fact that $\dfrac{2-P}2 = \dfrac22 - \dfrac{P}2 = 1 - \dfrac{P}2$}}
\end{align}
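As a quick spot-check, both sides agree numerically for arbitrary $P$ and $N$ (an illustrative sketch):

```python
# Evaluate both sides of the identity for a few sample values.
lhs = lambda P, N: (P / N) * (N * (N + 1) / 2) + N * (1 - P)
rhs = lambda P, N: N * (1 - P / 2) + P / 2

for P, N in [(0.3, 7), (1.5, 4), (-2.0, 10)]:
    print(lhs(P, N), rhs(P, N))  # each pair of printed values matches
```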
|
Question about Fatou's lemma in Rick Durrett's book. In Probability Theory and Examples, Theorem $1.5.4$, Fatou's Lemma, says
If $f_n \ge 0$ then
$$\liminf_{n \to \infty} \int f_n d\mu \ge \int \left(\liminf_{n \to \infty} f_n \right) d\mu. $$
In the proof, the author says
Let $E_m \uparrow \Omega$ be sets of finite measure.
I'm confused: without any information on the measure $\mu$, how can we guarantee that such a sequence of sets exists? Has the author omitted some additional condition on $\mu$?
|
At the beginning of Section 1.4, where Durrett defines the integral, he assumes that the measure $\mu$ is $\sigma$-finite, so I take it to be a standing assumption in this book that the underlying measure of every integral is $\sigma$-finite; by definition this provides sets $E_m \uparrow \Omega$ of finite measure (for Lebesgue measure on $\mathbb R$, for instance, take $E_m = (-m,m)$). Remember that it is a probability book, so his main interest is finite measures.
|