A problem related to basic martingale theory In our probability theory class, we are supposed to solve the following problem:
Let $X_n$, $n \geq 1 $ be a sequence of independent random variables such that
$ \mathbb{E}[X_n] = 0, \operatorname{Var}(X_n) = \sigma_n^2 < + \infty $ and
$ | X_n | \leq K, $ for some constant $ 0 \leq K < + \infty, \ \forall n \geq 1$.
Use martingale methods to show that
$$ \sum \limits_{n = 1}^{\infty} \ X_n \ \mbox{ converges } \mathbb P-\mbox{a.s.} \ \Longrightarrow \ \sum\limits_{n = 1}^{\infty} \ \sigma_n^2 < + \infty .$$
Could anybody give me a hint?
Thanks a lot for your help!
Regards, Si
| Set $S_n:=\sum_{i=1}^n X_i$ and $S_0=0$. $S$ is a martingale (wrt the natural filtration), so $S^2$ is a sub-martingale.
Using Doob's decomposition we can write $S^2=M+A$, where $M$ is a martingale and $A$ is a predictable increasing process, both null at $0$.
It turns out that $A_n=\sum_{i=1}^n \sigma_i^2$: indeed $A_n-A_{n-1}=E[S_n^2-S_{n-1}^2\mid\mathcal F_{n-1}]=E[X_n^2]=\sigma_n^2$, by independence.
Define the stopping time $T_\alpha$ as $T_\alpha=\inf\{n \geq0 \ : \ |S_n|>\alpha\}$.
By the optional stopping theorem we have $$E[S_{T_\alpha \wedge n}^2]-E[A_{T_\alpha \wedge n}]=0;\qquad \forall n.$$
Also $|S_{T_\alpha \wedge n}| \leq \alpha+K$, since $|X_i| \leq K $; then
$$E[A_{T_\alpha \wedge n}] \leq (\alpha+K)^2;\qquad \forall n.$$
Since $\sum X_i$ converges a.s., the partial sums $S_n$ are a.s. bounded, so for some $\alpha$ the event $\{T_{\alpha}=\infty\}$ has positive probability, say $p>0$. On that event $A_{T_\alpha\wedge n}=A_n$, and $A_n$ is deterministic and increasing, so $p\,A_n\le E[A_{T_\alpha \wedge n}] \leq (\alpha+K)^2$ for every $n$; letting $n\to\infty$ gives $A_{\infty}=\sum\sigma_n^2 < \infty$.
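As a sanity check on the identity $E[S_n^2]=\sum_{i=1}^n\sigma_i^2$ used above, here is a small Monte Carlo sketch in Python; the particular bounded increments (uniform on $[-1/i,1/i]$, so $K=1$) are just an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 40, 200_000
a = 1.0 / np.arange(1, n + 1)                 # X_i uniform on [-a_i, a_i], so K = 1
X = rng.uniform(-1.0, 1.0, size=(reps, n)) * a
S = X.sum(axis=1)
print((S**2).mean(), (a**2 / 3).sum())        # Var of U(-a, a) is a^2 / 3
```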
P.S.: I hope it is clear; I am a new user and I want to say hi to everyone.
Also I noticed that I cannot comment on some posts; is there any restriction?
Thanks.
|
The expected value of magnitude of winning and losing when playing a game Suppose we play a game at a casino. There is a \$5 stake and three possible outcomes: with probability $1/3$ you lose your stake, with probability $1/3$ the bank returns your stake plus \$5, and with probability $1/3$ the bank simply returns your stake.
Let $X$ denote your winnings in one play, and suppose you play 1000 times. What would you expect the magnitude of your win or loss to be, approximately?
Is this question asking the same as to find $Var(S)$ where $S=X_1+\cdots+X_{1000}$, so the final answer should be $1000\cdot Var(X)$?
| If I understand correctly, you are interested in the expected value of the magnitude of the win or loss (and not the magnitude of the expected value of the win or loss). Hence, you are interested in computing $\mathbb{E}(S_n)$, where $$S_n = \left| X_1 + X_2 + \cdots + X_n \right|.$$
Note that $S_n$ can take values in the set $\{0,5,10,\ldots,5n\}$.
Now let us find out the probability that $S_n = 5m$ for some $m \in \{0,1,2,\ldots,n\}$.
This means we are interested in the event $X_1 + X_2 + \cdots + X_n = \pm 5m$.
Let us first evaluate the probability of the event $X_1 + X_2 + \cdots + X_n = 5m$.
For this event to occur, if you lose $k$ times, you need to win $m+k$ times and get back your stake the remaining $n-m-2k$ times where $k \in \left \{0,1,2,\ldots, \left \lfloor \frac{n-m}{2} \right \rfloor \right\}$.
Hence, the desired probability of the event $X_1 + X_2 + \cdots + X_n = 5m$ is given by $$\sum_{k=0}^{\left \lfloor \frac{n-m}{2} \right \rfloor} \frac{n!}{k!(m+k)!(n-m-2k)!} \frac1{3^n}$$
Hence, the desired probability of the event $S_n = 5m$, for $m \geq 1$, is given by $$P_n(m) = \sum_{k=0}^{\left \lfloor \frac{n-m}{2} \right \rfloor} \frac{n!}{k!(m+k)!(n-m-2k)!} \frac2{3^n}$$ (for $m=0$ the events $\pm 5m$ coincide, so the factor $2$ should be dropped; this does not affect the expectation below, since that term is multiplied by $m=0$).
Hence, the expected value of $S_n$ is given by $$\sum_{m=0}^{n} 5m P_n(m) = \sum_{m=0}^{n} \sum_{k=0}^{\left \lfloor \frac{n-m}{2} \right \rfloor} \frac{n!}{k!(m+k)!(n-m-2k)!} \frac{10m}{3^n}$$
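For whatever reassurance it is worth, the double-sum formula can be compared against brute-force enumeration of all $3^n$ equally likely outcomes for small $n$; this is only a sketch, and the helper names are mine:

```python
from math import factorial
from itertools import product

def expected_magnitude(n):
    # the double sum derived above
    total = 0.0
    for m in range(n + 1):
        for k in range((n - m) // 2 + 1):
            total += (factorial(n)
                      / (factorial(k) * factorial(m + k) * factorial(n - m - 2*k))
                      * 10 * m / 3**n)
    return total

def brute_force(n):
    # all 3^n equally likely outcomes with steps -5, 0, +5
    return sum(abs(sum(w)) for w in product((-5, 0, 5), repeat=n)) / 3**n

for n in (1, 2, 5, 8):
    print(n, expected_magnitude(n), brute_force(n))   # the two columns agree
```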
|
Properties about a certain martingale I asked this question here. Unfortunately there was not a satisfying answer. So I hope here is someone who could help me.
I'm solving some exercises and I have a question about this one:
Let $(X_i)$ be a sequence of random variables in $ L^2 $ and a filtration $ (\mathcal{F}_i)$ such that $X_i$ is $\mathcal{F}_i$ measurable. Define
$$ M_n := \sum_{i=1}^n \left(X_i-E(X_i|\mathcal{F}_{i-1})\right) $$
I should show the following:
1. $M_n$ is a martingale.
2. $M_n$ is square integrable.
3. $M_n$ converges a.s. to $M^*$ if $M_\infty := \sum_{i=1}^\infty E\left((X_i-E(X_i|\mathcal{F}_{i-1}))^2|\mathcal{F}_{i-1}\right)<\infty$.
4. If $\sum_{i=1}^\infty E(X_i^2) <\infty$, then 3. holds.
I was able to show 1., and with Davide Giraudo's comment 2. is clear too. But I got stuck at 3. and 4., so I'm very thankful for any help!
hulik
| For 3, compute $E[M_n^2]$. Having done this conclude that $E[M_n^2]\le M_\infty$ for all $n$. This means that $\{M_n\}$ is an $L^2$-bounded martingale, to which the martingale convergence theorem may be applied.
For 4, the $i$th term in the sum defining $M_\infty$ is equal to
$E[(X_i-E[X_i|\mathcal{F}_{i-1}])^2]$, which in turn is equal to
$E[X_i^2] -E[(E[X_i|\mathcal{F}_{i-1}])^2]\le E[X_i^2]$.
|
A primitive and irreducible matrix is positive for some power $k.$ Prove it's positive for any power $k+i,$ where $i=1,\dots.$
Let $P = [p_{ij}]_{1 \leqslant i,j \leqslant m} \geqslant 0$ be a primitive and irreducible matrix, and suppose $P^k > 0$ for some $k.$ Prove that $ P^{k+i} > 0$ for $i =1,2, \dots$
I have used a hint suggested by N.S below and wrote this proof; what do you think?
Recall that a matrix $P \geqslant 0$ is reducible when there exists a permutation matrix $M$ (an identity matrix whose rows and columns have been reordered) such that $MPM^{T}$ has the block form $ \begin{pmatrix} P_{11} & 0 \\ P_{21} & P_{22} \end{pmatrix},$ where $P_{11}$ and $P_{22}$ are square matrices. Since $P$ is irreducible, no such form exists; in particular $P$ cannot have a zero row, so $P$ has at least one non-zero (in fact positive) element in each row, and we use this fact in the proof below.
We have $P^k >0.$ For $i=1$ we have $P^{k+1} = P P^k > 0,$ since every row of $P$ contains a positive entry. Assume that $P^{k+i} >0$; then $ P^{k+i+1} = PP^{k+i} > 0$ for the same reason. This holds for all $i = 1,2, \dots$
| Hint: Prove first that every row of $P$ has a non-zero element. Then
$P^{k+i+1}=PP^{k+i}$
|
Need help finding limit $\lim \limits_{x\to \infty}\left(\frac{x}{x-1}\right)^{2x+1}$ Facing difficulty finding limit
$$\lim \limits_{x\to \infty}\left(\frac{x}{x-1}\right)^{2x+1}$$
For starters I have trouble simplifying it
Which method would help in finding this limit?
| $$
\begin{eqnarray}
\lim \limits_{x\to \infty}\left(\frac{x}{x-1}\right)^{2x+1}=\lim \limits_{x\to \infty}\left(\frac{x-1+1}{x-1}\right)^{2x+1}
=\lim \limits_{x\to \infty}\left(1+\frac{1}{x-1}\right)^{2x+1}\\= \lim \limits_{x\to \infty}\left(1+\frac{1}{x-1}\right)^{(x-1)\cdot\frac{2x+1}{x-1}}
=e^{\lim \limits_{x\to \infty}\frac{2x+1}{x-1}}=e^2
\end{eqnarray}
$$
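A quick numerical check of the result (a Python sketch):

```python
import math

for x in (1e2, 1e4, 1e6):
    print(x, (x / (x - 1)) ** (2 * x + 1))   # approaches e^2
print("e^2 =", math.e ** 2)                  # 7.389056...
```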
|
Simple Combinations with Repetition Question
In how many ways can we select five coins from a collection of 10 consisting of one penny, one nickel, one dime, one quarter, one half-dollar and 5 (identical) dollars?
For my answer, I used the logic: how many dollars are there in the 5 we choose?
I added the cases for 5 dollars, 4 dollars, 3 dollars, 2 dollars, 1 dollar and 0 dollars:
$$C(5,5) + C(5,4) + C(5,3) + C(5,2) + C(5,1) + 1 = 32,$$
which is the right answer, but there has to be a shorter, simpler way. I tried using the repetition formula but that didn't pan out.
If you could introduce me to a shorter way with explanation I appreciate it.
| Decide for each small coin whether you select that or not. Then top up with dollars until you have selected 5 coins in total.
The topping-up step does not involve any choice, so you have 5 choices to make, each with 2 options, giving $2^5=32$ combinations in all.
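A brute-force enumeration confirms the count (a sketch; the coin names are only illustrative):

```python
from itertools import combinations

small = ("penny", "nickel", "dime", "quarter", "half-dollar")
count = 0
for r in range(len(small) + 1):          # choose any subset of the distinct coins
    for subset in combinations(small, r):
        dollars = 5 - r                  # top up with identical dollars (no choice)
        count += 1
print(count)                             # 32 = 2^5
```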
|
Interesting Property of Numbers in English I was playing with the letters in numbers written in English and I found something quite funny. I found that if you count the number of letters in the number and write this as a number and then count the number of letters in this new number and keep repeating the process, you will arrive at the number 4.
I've confirmed this (using a computer program) for all numbers up to 999999 and was wondering if there's a way to prove this or to find a counter example for which it does not hold.
Just to give an example of the above statement, let's start with thirty seven (I chose this randomly)
Thirty seven has 11 letters in it; eleven has 6 letters in it; six has 3 letters in it; three has 5 letters in it; five has 4 letters in it.
It may look like I just picked this number, so let me show this for another random number, say 999.
Nine hundred and ninety nine has 24 letters in it; twenty four has 10 letters in it; ten has 3 letters in it; three has 5 letters in it; five has 4 letters in it.
What are your thoughts on how to prove this?
(Just a note: I only confirmed this for numbers written in the standard British way of writing numbers - for example 101 is one hundred and one)
| Define $f: \mathbb{N} \to \mathbb{N}$ as the number of letters in a given natural number spelled out.
Four is the only fixed point under $f$, and it's not too difficult to see that $f(n) < n$ almost always, the only exceptions being one, two, three and four. So the $n^{th}$ iterate of $f$ must eventually become smaller than $5$, which doesn't leave very many cases to verify.
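A sketch of the verification described in the question, using a small British-style speller written for this post (assumption: only letters are counted, ignoring spaces and hyphens):

```python
units = ["zero", "one", "two", "three", "four", "five", "six", "seven",
         "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
         "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
tens = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def spell(n):
    # British style, letters only: 101 -> "onehundredandone"
    if n < 20:
        return units[n]
    if n < 100:
        return tens[n // 10] + (units[n % 10] if n % 10 else "")
    return units[n // 100] + "hundred" + ("and" + spell(n % 100) if n % 100 else "")

def f(n):
    return len(spell(n))

for n in range(1, 1000):
    m = n
    while m != 4:
        m = f(m)       # terminates since f(m) < m for every m >= 5
print("every n below 1000 reaches 4")
```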
|
Isometries of $\mathbb{R}^3$ So I'm attempting a proof that isometries of $\mathbb{R}^3$ are the product of at most 4 reflections. Preliminarily, I needed to prove that any point in $\mathbb{R}^3$ is uniquely determined by its distances from 4 non-coplanar points, and then that an isometry sends non-coplanar points to non-coplanar points in $\mathbb{R}^3$. I've done the first preliminary step, and finished the proof assuming the second, but I can't find a simple way to prove the second...
Intuitively it makes a lot of sense that non-coplanar points be sent to non-coplanar points, but every method I've stumbled upon to prove such has been quite heavy computationally...
I know for example that any triangle chosen among the four points, A, B, C, D must be congruent to the triangles of their respective images, but what extra bit of information would allow me to say that the image of the whole configuration can't be contained in a single plane...
| Would this work? Construct the altitude from $D$ to the plane containing $A$, $B$, and $C$. Call the foot of this altitude $E$ (the point where the altitude meets the plane). Triangles $ADE$, $BDE$, and $CDE$ all have right angles at $E$, and you know that the isometry preserves angles, so triangles $A'D'E'$, $B'D'E'$, and $C'D'E'$ all have right angles at $E'$, which makes $A'$, $B'$, $C'$, and $E'$ coplanar; and since $D'E'\ne 0$ and $\overline{D'E'}$ is perpendicular to that plane, $D'$ cannot be coplanar with the other points.
|
Why is the Kendall tau distance a metric? So I am trying to see how the Kendall $\tau$ distance is considered a metric; i.e. that it satisfies the triangle inequality.
The Kendall $\tau$ distance is defined as follows:
$$K(\tau_1,\tau_2) = \left|\left\{(i,j): i < j,\ ( \tau_1(i) < \tau_1(j) \land \tau_2(i) > \tau_2(j) ) \lor ( \tau_1(i) > \tau_1(j) \land \tau_2(i) < \tau_2(j) )\right\}\right|$$
Thank you in advance.
| The Kendall tau rank distance is a metric only if you compare rankings of the elements.
If you apply the Kendall function to the raw elements, you will find cases where the triangle inequality does not hold.
Example:
0 0 0 10 10 10
and
5 5 5 0 0 0
scores 9 (applying Kendall to the raw elements).
While
0 0 0 10 10 10
and
5 3 5 7 5 2
scores 3
and
5 3 5 7 5 2
and
5 5 5 0 0 0
scores 4
so $9 > 3 + 4$, and the triangle inequality fails here.
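These counts are easy to reproduce (a sketch; the function counts strictly discordant pairs on the raw values):

```python
def kendall(x, y):
    # number of pairs (i, j), i < j, ordered strictly oppositely in x and y
    n = len(x)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if (x[i] - x[j]) * (y[i] - y[j]) < 0)

a = [0, 0, 0, 10, 10, 10]
b = [5, 5, 5, 0, 0, 0]
c = [5, 3, 5, 7, 5, 2]
print(kendall(a, b), kendall(a, c), kendall(c, b))   # 9 3 4, and 9 > 3 + 4
```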
But if you operate with the sorting position of each element in its vector (i.e., the ranking), the triangle inequality is guaranteed.
This happens because of repetitions of elements within the vectors.
We should call one of the algorithms the "Kendall tau ranking distance" and the other the "Kendall tau distance".
Best,
Mijael
|
Can this be simplified any further I've been working on a formula, which I have managed to simplify to the following expression, but I wonder if anyone can spot a way to simplify it further?
$$2^{1 -\frac{1}{2}\sum_i \log_2 \frac{(a_i + c_i)^{(a_i + c_i)}}{a_i^{a_i}c_i^{c_i}}}$$
| Does this look simpler to you?
$$
2 \left( \prod_i \frac{a_i^{a_i}c_i^{c_i}}{(a_i + c_i)^{(a_i + c_i)}} \right)^\frac{1}{2}
$$
|
Probability of getting different results when tossing a coin Here's a question I got for homework:
In every single time unit, Jack and John are tossing two different
coins with $p_1$ and $p_2$ chances for heads. They
keep doing so until they get different results. Let X be the number of
tosses. Find the pmf of X (in discrete time units). What kind of
distribution is it?
Here's what I have so far:
In every round (time unit) the possible results are
HH - $p_1p_2$
TT - $q_1q_2$
TH - $q_1p_2$
HT - $q_2p_1$
and so $P(X=k) = (p_1p_2 + q_1q_2)^{k-1}(q_1p_2+q_2p_1)$,
which means we're dealing with a geometric distribution.
What doesn't feel right is that the question mentions 'discrete time units'. That makes me think about a Poisson distribution, BUT Poisson is all about the number of successes in a time unit, while here we only have one round in every time unit.
If I'm not too clear it's only because I'm a little confused myself. Any hint would be perfect. Thanks in advance.
| You can work out the probability that they get different results on the first toss, namely $p_1 (1-p_2)+ (1-p_1)p_2 = p_1+p_2 - 2p_1 p_2$.
If they have not had different results up to the $n$th toss, then the conditional probability they get different results on the next toss is the same; this is the memoryless property and so (since the number of tosses is a positive integer, i.e. discrete) you have a geometric distribution, as you spotted.
|
Prove that there exists $z_0 \in \mathbb C$ satisfying $f(z_0)=0$. I would be glad to get some help with this question:
Let $f(z)$ be an entire function. Assume that there exists a
monotonically increasing, unbounded sequence $\{r_n\}$ such that $\lim\limits_{n \to \infty} \min\limits_{|z|=r_n} |f(z)|=\infty$. I want to show that there exists a $z_0 \in \mathbb C$ that satisfies $f(z_0)=0$.
I'd especially like to know how to use that fact about the sequence.
Thanks.
| Assume $f(z)\ne0$ for all $z\in\mathbb{C}$. Then $h(z)=1/f(z)$ is also an entire function. Apply the maximum modulus principle to $h$.
|
What function does $\sum \limits_{n=1}^{\infty}\frac{1}{n3^n}$ represent, evaluated at some number $x$? I need to know what the function $$\sum \limits_{n=1}^{\infty}\frac{1}{n3^n}$$ represents evaluated at a particular point.
For example if the series given was $$\sum \limits_{n=0}^{\infty}\frac{3^n}{n!}$$ the answer would be $e^x$ evaluated at $3$.
Yes, this is homework, but I'm not looking for any handouts, any help would be greatly appreciated.
| Take $f(x) = \displaystyle \sum_{n=1}^\infty \frac{x^n}{n}$.
Then,
$f^\prime (x) = \displaystyle \sum_{n=1}^\infty \frac{n x^{n-1}}{n} = \sum_{n=0}^\infty x^n$.
The last expression is a geometric series and, as long as $|x| < 1$, it can be expressed as
$f^\prime (x) = \displaystyle \frac{1}{1-x}$.
Therefore,
$f(x) = - \ln | 1 - x | + \kappa$
where $\kappa$ is a constant. But if you take the original expression for $f(x)$, you can see that $f(0) = 0$ and, therefore, $\kappa = 0$.
So $f(x) = -\ln | 1 - x |$.
The answer to your question is just $f \left(\frac{1}{3} \right)$.
You can also obtain this result by Taylor expanding $\ln ( 1 - x )$.
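Numerically, the partial sums indeed approach $f(1/3)=-\ln(2/3)=\ln(3/2)$ (a quick sketch):

```python
import math

partial = sum(1 / (n * 3**n) for n in range(1, 40))
print(partial, math.log(3 / 2))   # both approximately 0.4054651
```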
|
A group of order $195$ has an element of order $5$ in its center Let $G$ be a group of order $195=3\cdot5\cdot13$. Show that the center of $G$ has an element of order $5$.
There are a few theorems we can use here, but I don't seem to be able to put them together quite right. I want to show that the center of $G$ is divisible by the prime number $5$. If this is the case, then we can apply Cauchy's theorem and we are done.
By Sylow's theorems we get that there are unique $3$-Sylow, $5$-Sylow, and $13$-Sylow subgroups in $G$. Since they are of prime order, they are abelian. Furthermore, their intersection is trivial (by a theorem, I believe). Does this then guarantee that $G=ABC$ and that $G$ is abelian?
| Hint: There are unique, hence normal, $5$- and $13$-Sylows. Their internal direct product is thus normal and has complementary subgroup equal to one of the $3$-Sylows, so $G$ is a semidirect product of $H_5 H_{13}$ and $H_3$, where $H_p$ denotes a $p$-Sylow (not necessarily unique). What can you say about $\varphi: H_3 \to \text{Aut}(H_5 H_{13})$?
|
Maxima of bivariate function [1] Is there an easy way to formally prove that,
$$
2xy^{2} +2x^{2} y-2x^{2} y^{2} -4xy+x+y\ge -x^{4} -y^{4} +2x^{3} +2y^{3} -2x^{2} -2y^{2} +x+y$$
$${0<x,y<1}$$
without resorting to checking partial derivatives of the quotient formed by the two sides, and finding local maxima?
[2] Similarly, is there an easy way for finding $$\max_{0<x,y<1} [f(x,y)]$$
where,
$$f(x,y)=2x(1+x)+2y(1+y)-8xy-4(2xy^{2} +2x^{2} y-2x^{2} y^{2} -4xy+x+y)^{2}$$
| Your first question:
With a little manipulation you get that it is equivalent to
$$x^2((1-x)^2+1)+y^2((1-y)^2+1) \ge 2xy[(1-x)(1-y)+1].$$
This can be obtained from addition of two inequalities
$$x^2(1-x)^2+y^2(1-y)^2 \ge 2xy(1-x)(1-y)$$
$$x^2+y^2\ge 2xy.$$
Both of them are special cases of $a^2+b^2\ge 2ab$, which follows from $(a-b)^2\ge 0$. (Or, if you prefer, you can consider it as a special case of AM-GM inequality.)
Note: To check the algebraic manipulations, you can simply compare the results for
2xy^2 +2x^2 y-2x^2 y^2 -4xy+x+y - ( -x^4 -y^4 +2x^3 +2y^3 -2x^2 -2y^2 +x+y)
expand x^2((1-x)^2+1)+y^2((1-y)^2+1) -2xy[(1-x)(1-y)+1]
Or simply subtract the two expressions:
2xy^2 +2x^2 y-2x^2 y^2 -4xy+x+y - ( -x^4 -y^4 +2x^3 +2y^3 -2x^2 -2y^2 +x+y) - [x^2((1-x)^2+1)+y^2((1-y)^2+1) -2xy[(1-x)(1-y)+1]]
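Equivalently, if SymPy is available, the same check can be done symbolically (a sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')
lhs = 2*x*y**2 + 2*x**2*y - 2*x**2*y**2 - 4*x*y + x + y
rhs = -x**4 - y**4 + 2*x**3 + 2*y**3 - 2*x**2 - 2*y**2 + x + y
rewritten = (x**2*((1 - x)**2 + 1) + y**2*((1 - y)**2 + 1)
             - 2*x*y*((1 - x)*(1 - y) + 1))
print(sp.expand((lhs - rhs) - rewritten))   # 0
```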
I did not succeed in finding similar type of solution for your second problem.
|
formal proof challenge I am desperately trying to figure out the formal proof for this argument.
$$\begin{array}{r}
A\lor B\\
A\lor C\\
\hline
A\lor (B \land C)
\end{array}$$
I am trying to apply the backwards method here. I am trying to infer A, in order to use vIntro in the last step and introduce the final disjunction. But I got stuck finding sufficient proof for A.
Any hint will be greatly appreciated. Thank you!
|
I am trying to apply the backwards method here. I am trying to infer A, in order to use vIntro in the last step and introduce the final disjunction. But I got stuck finding sufficient proof for A.
You don't prove it; you assume it -- To be precise: you assume both cases aiming to derive the same conclusion from each. The disjunction elimination rule then discharges the assumptions. This is also known as the Proof By Cases argument. $$\dfrac{\dfrac{[A]^\star\\~~\vdots}{A\lor(B\land C)}\quad\dfrac{[B]^\star\\~~\vdots}{A\lor(B\land C)}\quad\lower{1.5ex}{A\lor B}~}{A\lor(B\land C)}{\small\lor\mathsf E^\star}$$
So, in this proof you need two $\lor$ eliminations and therefore four assumptions must be made and discharged. The trick is how you combine the assumptions of $B$ and $C$.
$$\dfrac{\dfrac{[A]^1}{A\lor(B\land C)}{\small\lor\mathsf I}\quad\dfrac{\dfrac{[A]^2}{A\lor(B\land C)}{\small\lor\mathsf I}\quad\dfrac{\dfrac{[B]^1\quad[C]^2}{B\land C}{\small\land\mathsf I}}{A\lor(B\land C)}{\small\lor\mathsf I}\quad\lower{1.5ex}{A\lor C}}{A\lor (B\land C)}{\small\lor\mathsf E^2}\quad\lower{1.5ex}{A\lor B}}{A\lor (B\land C)}{\small\lor\mathsf E^1}$$
|
Is my trig result unique? I recently determined that for all integers $a$ and $b$ such that $a\neq b$ and $b\neq 0$,
$$
\arctan\left(\frac{a}{b}\right) + \frac{\pi}{4} = \arctan\left(\frac{b+a}{b-a}\right)
$$
This implies that 45 degrees away from any angle with a rational value for tangent lies another angle with a rational value for tangent. The tangent values are related.
If anyone can let me know if this has been done/shown/proven before, please let me know. Thanks!
| If you differentiate the function $$f(t)=\arctan t - \arctan\frac{1 + t}{1 - t},$$ you get zero, so the function is constant in each of the two intervals $(-\infty,1)$ and $(1,+\infty)$ on which it is defined.
* Its value at zero is $-\pi/4$, so $f(t)=-\pi/4$ for all $t<1$; that is,
$$ \arctan t + \frac\pi4 = \arctan\frac{1 + t}{1 - t},\qquad\forall t<1.$$
* On the other hand, one easily shows that $\lim_{t\to+\infty}f(t)=\frac{3\pi}{4}$, so
$$ \arctan t - \frac{3\pi}4 = \arctan\frac{1 + t}{1 - t},\qquad\forall t>1.$$
If $t=a/b$ is a rational number smaller than $1$, then the first point is your identity. If it is larger than $1$, we see that you have to change things a bit.
|
Prove that $\mathbb{Z}_p^{\times}/(\mathbb{Z}_p^{\times})^2$ is isomorphic to $\{\pm1\}$. Prove that $\mathbb{Z}_p^{\times}/(\mathbb{Z}_p^{\times})^2$ is isomorphic to $\{\pm1\}$, where $p$ is a prime integer.
| I take it that you mean to prove that $\mathbb{F}_p^\times/(\mathbb{F}_p^\times)^2 \cong \{\pm 1\}$, where $\mathbb{F}_p = \mathbb{Z}/p\mathbb{Z}$.
If so, use the fact that the map $(\mathbb{Z}/p\mathbb{Z})^\times \to \{\pm 1\}$ given by $a\bmod p\mapsto (\frac{a}{p})$ is a surjective homomorphism of groups for odd $p$, where $(\frac{a}{p})$ is the Legendre symbol of $a$ over $p$. (For $p=2$ the group $\mathbb{F}_2^\times$ is trivial, so the claim fails there.)
|
Proving the Cantor Pairing Function Bijective How would you prove the Cantor Pairing Function bijective? I only know how to prove a bijection by showing (1) If $f(x) = f(y)$, then $x=y$ and (2) There exists an $x$ such that $f(x) = y$
How would you show that for a function like the Cantor pairing function?
| I will denote the pairing function by $f$. We will show that the pairs $(x,y)$ with a particular value of the sum $x+y$ are mapped bijectively to a certain interval, then that the intervals for different values of the sum do not overlap, and that their union is everything.
Let $m$ be a natural number and suppose $m=x+y$. The least value that $f(x,y)$ can take is $\frac{m(m+1)}{2}$ (if $x=m$) and the largest value it can take is $\frac{m(m+1)}{2}+m$ (if $y=m$). It can also take all values in between. It is thus easy to see that the $m+1$ pairs $(x,y)$ with sum $m$ are mapped bijectively to an interval.
If $x+y=m+1$ then the least possible value of $f(x,y)$ is $\frac{(m+1)(m+2)}{2}$. We can check that $\frac{(m+1)(m+2)}{2} - (\frac{m(m+1)}{2}+m)=1$, so the intervals for different values of the sum do not overlap, and it is easy to see that their union is $\mathbb{N}$.
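A sketch of this argument in code, using the explicit formula $f(x,y)=\frac{(x+y)(x+y+1)}{2}+y$ (the standard Cantor pairing, consistent with the bounds above):

```python
def pair(x, y):
    s = x + y
    return s * (s + 1) // 2 + y

N = 60
# injectivity on an N-by-N grid
assert len({pair(x, y) for x in range(N) for y in range(N)}) == N * N
# pairs with x + y <= N - 1 hit exactly 0 .. N(N+1)/2 - 1, giving surjectivity
covered = {pair(x, s - x) for s in range(N) for x in range(s + 1)}
assert covered == set(range(N * (N + 1) // 2))
print("bijective on the tested range")
```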
|
Characteristic equation of a recurrence relation Yesterday there was a question here based on solving a recurrence relation and then when I tried to google for methods of solving recurrence relations, I found this, which gave different ways of solving simple recurrence relations.
My question is how do you justify writing the recurrence relation in its characteristic equation form and then solving for its roots to get the required answer.
For example, Fibonacci relation has a characteristic equation $s^2-s-1=0$.
How can we write it as that polynomial?
| The characteristic equation is the one that a number $\lambda$ should satisfy in order for the geometric series $(\lambda^n)_{n\in\mathbf N}$ to be a solution of the recurrence relation. Another interpretation is that if you interpret the indeterminate $s$ as a left-shift of the sequence (dropping the initial term and renumbering the remaining terms one index lower), then the characteristic equation gives the lowest degree monic polynomial that when applied to this shift operation kills all sequences satisfying the recurrence. In the case of the Fibonacci recurrence, applying $s^2-s-1$ to a sequence $A=(a_i)_{i\in\mathbf N}$ gives the sequence $(a_{i+2}-a_{i+1}-a_i)_{i\in\mathbf N}$, which is by definition identically zero if (and only if) $A$ satisfies the recurrence.
A different but related polynomial that is of interest is obtained by reversing the order of the monomials (giving a polynomial starting with constant term $1$), so for Fibonacci it would be $1-X-X^2$. This polynomial $P$ has the property that the formal power series $F=\sum_{i\in\mathbf N}a_iX^i$ associated to a sequence satisfying the recurrence, when multiplied by $P$ gives a polynomial $R$ (with $\deg R<\deg P$), in other words all terms from the index $\deg P$ on are killed. This is basically the same observation as for the shift operation, but the polynomial $R$ permits describing the power series of the sequence, including its initial values, formally as $F=\frac RP$. For the Fibonacci sequence one finds $R=X$ so its formal power series is $F=\frac X{1-X-X^2}$.
|
How to find all rational points on the elliptic curves like $y^2=x^3-2$ Reading the book by Diophantus, one may be led to consider the curves like:
$y^2=x^3+1$, $y^2=x^3-1$, $y^2=x^3-2$,
the first two of which are easy to solve (after calculating some eight curves to be solved under certain conditions, one can directly derive the ranks), while the last, although simple enough to be solved by elementary considerations about factorization of algebraic integers, is at present beyond my ability, as my knowledge of the topic is so far limited to some reading of the book Rational Points on Elliptic Curves by Silverman and Tate, which does not investigate the case where the polynomial has no visible rational points.
By the theorem of Mordell, one can determine its structure of rational points, if the rank is at hand. So, according to my imagination, if some hints about how to compute ranks of elliptic curves of this kind were offered, it would certainly be appreciated.
Thanks in advance.
| Given your interest in Mordell's equation, you really ought to buy or borrow Diophantine Equations by Mordell, and then the second edition of A Course in Number Theory by H. E. Rose.
Rose discusses the equation starting on page 286, then gives a table of $k$ with
$ -50 \leq k \leq 50$ for which there are integral solutions, and a second table for which there are rational solutions. The tables are copied from J. W. S. Cassels, The rational solutions of the diophantine equation $y^2 = x^3 - D$, Acta Mathematica, volume 82 (1950), pages 243-273.
Other than that, you are going to need to study Silverman and Tate far more carefully than you have done so far. From what I can see, all necessary machinery is present. Still, check the four pages in the Bibliography, maybe you will prefer something else.
|
Generalization of manifold Is there a generalization of the concept of manifold that captures the following idea:
Consider a sphere that instead of being made of a smooth material is actually made up of a mesh of thin wire. Now for certain beings living on the sphere the world appears flat and 2D, unaware that they are actually living on a mesh, but for certain other smaller beings, the world appears to be 1D most of the time (because of the wire mesh).
| One thing to look at is foliations (and laminations), which are decompositions of manifolds into lower-dimension manifolds. While there is no "mesh" because each lower-dimension manifold has another lower-dimension manifold in any neighborhood, there is still a lower-dimensionality that is something like what you seek. (When you're looking at surfaces in a $3$-manifold, you can also look at the one-dimensional transversals.) See, e.g., H. B. Lawson, Foliations, Bulletin of the AMS 80:3 (1974), 369–418, MR 0343289 (49 #8031).
|
Find the ordinary generating function $h(z)$ for a Gambler's Ruin variation. Assume we have a random walk starting at 1 with probability of moving left one space $q$, moving right one space $p$, and staying in the same place $r=1-p-q$. Let $T$ be the number of steps to reach 0. Find $h(z)$, the ordinary generating function.
My idea was to break $T$ into two variables $Y_1,Y_2$ where $Y_1$ describes the number of times you stay in place and $Y_2$ the number of times we move forward or backward one. Then try to find a formula for $P(T=n)=P(Y_1+Y_2=n)=r_n$, but I'm getting really confused since there are multiple probabilities possible for each $T=n$ for $n\geq 3$. Once I have $r_n$ I can then use $h_T(z)=\sum_{n=1}^\infty r_n z^n$, but I'm not sure where to go from here.
| A classical way to determine $h(z)$ is to compute $h_n(z)=\mathrm E_n(z^T)$ for every $n\geqslant0$, where $\mathrm E_n$ denotes the expectation starting from $n$, hence $h(z)=h_1(z)$.
Then $h_0(z)=1$ and, considering the first step of the random walk, one gets, for every $n\geqslant1$,
$$
h_n(z)=rzh_n(z)+pzh_{n+1}(z)+qzh_{n-1}(z),
$$
with $r=1-p-q$. Fix $z$ in $(0,1)$. Then the sequence $(x_n)_n=(h_n(z))_n$ satisfies the relations $x_0=1$, $x_n\to0$ when $n\to\infty$, and $ax_{n+1}-bx_n+cx_{n-1}=0$ for every $n\geqslant1$, for some positive $(a,b,c)$.
Hence $x_n=\alpha s_z^n+(1-\alpha)t_z^n$, where $s_z$ and $t_z$ are the roots of the polynomial $Q_z(t)=at^2-bt+c$. Since $a=pz$, $b=1-rz$ and $c=qz$, one sees that $Q_z(0)=qz\gt0$, $Q_z'(0)=-(1-rz)\lt0$ and $Q_z(1)=-(1-z)\lt0$. Thus, $0\lt s_z\lt1\lt t_z$. If $\alpha\ne1$, $|x_n|\to\infty$ when $n\to\infty$. But $(x_n)_n$ should stay bounded hence this shows that $\alpha=1$. Finally, $x_n=s_z^n$ for every $n\geqslant0$, where $Q_z(s_z)=0$ and $0\lt s_z\lt 1$.
In particular, for every $z$ in $(0,1)$, $h(z)=s_z$, that is,
$$
h(z)=\frac{1-rz-\sqrt{(1-rz)^2-4pqz^2}}{2pz}.
$$
Note that the limit of $h(z)$ when $z\to1^-$ is the probability $\mathrm P_1(T\lt\infty)$, which is $1$ if $p\leqslant q$ and $q/p\lt1$ if $p\gt q$.
The technique above is flexible enough to be valid for any random walk. If the steps are $i$ with probability $p_i$, the recursion becomes
$$
h_n(z)=z\sum\limits_ip_ih_{n+i}(z).
$$
The case at hand is $p_{-1}=q$, $p_0=r$ and $p_1=p$. When $p_{-1}$ is the only nonzero $p_i$ with $i$ negative, a shortcut is to note that one can only go from $n\geqslant1$ to $0$ by first reaching $n-1$, then reaching $n-2$ from $n-1$, and so on until one reaches $0$ starting from $1$. These $n$ hitting times are i.i.d. hence $h_n(z)=h(z)^n$ for every $n\geqslant1$, and one is left with
$$
h(z)=zp_{-1}+z\sum\limits_{i\geqslant0}p_ih(z)^{i+1}.
$$
In the present case, this reads
$$
h(z)=qz+rzh(z)+pzh(z)^2,
$$
hence the expression of $h(z)=s_z$ as solving a quadratic equation should not be surprising.
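A Monte Carlo sanity check of the closed form (a sketch; the values of $p$, $q$ and $z$ are arbitrary choices, and walks are truncated at a large cap, which is harmless since $z^T$ is then negligible):

```python
import math
import random

p, q, z = 0.3, 0.5, 0.7
r = 1 - p - q

def z_to_T(cap=10_000):
    # one walk from 1; returns z**T, with z**T ~ 0 if T exceeds the cap
    pos, t = 1, 0
    while pos > 0 and t < cap:
        u = random.random()
        pos += 1 if u < p else (-1 if u < p + q else 0)
        t += 1
    return z**t if pos == 0 else 0.0

est = sum(z_to_T() for _ in range(200_000)) / 200_000
closed = (1 - r*z - math.sqrt((1 - r*z)**2 - 4*p*q*z**2)) / (2*p*z)
print(est, closed)   # the two values should agree to a few decimals
```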
|
Free products of cyclic groups Suppose $G$, $H$, $G'$, and $H'$ are cyclic groups of orders $m$, $n$, $m'$, and $n'$ respectively.
If $G*H$ is isomorphic to $G'* H'$, I would like to show that either $m = m'$ and $n = n'$, or else $m = n'$ and $n = m'$ holds, where $*$ denotes the free product.
My approach:
$G*H$ has an element of order $n$, thus $G' * H'$ has one too.
But already the next step is not clear to me: should I show that there is an element of length $> 1$ which has infinite order, or what would be the right approach here?
Thank you.
| Let me give you an alternative argument for the claim $m=m'$ in Arturo Magidin's answer.
Take the abelianizations of the groups $G*H$ and $G'*H'$; since $G,G',H,H'$ are abelian, their abelianizations are $G\oplus H$ and $G'\oplus H'$ respectively. Then you get $G\oplus H\cong G'\oplus H'$ and in particular, their orders $mn=m'n'$ are equal. Since we already had $n=n'$, we conclude that $m=m'$.
|
Proving $\mathbb{N}^k$ is countable Prove that $\mathbb{N}^k$ is countable for every $k \in \mathbb{N}$.
I am told that we can go about this inductively.
Let $P(n)$ be the statement: “$\mathbb{N}^n$ is countable” $\forall n \in \mathbb{N}$.
Base Case: $\mathbb{N}^1 = \mathbb{N}$ is countable by definition, so $\checkmark$
Inductive Step: $\mathbb{N}^{k+1}$ $“=”$ $\mathbb{N}^k \times \mathbb{N}$
We know that if $A$ and $B$ are countable, then $A \times B$ is countable. I am stuck on the part where I have to prove the rest, but I know that, for example, $(1,2,7) \in \mathbb{N}^3$ is not literally an element of $\mathbb{N}^2 \times \mathbb{N}$; instead $((1,2),7) \in \mathbb{N}^2 \times \mathbb{N}$. So how would I go about proving the statement?
| The function $f:\mathbb{N}^k\to \mathbb{N}^k\times \{m\}$ defined by $$f(a_1,a_2,\cdots, a_k)=(a_1,a_2,\cdots, a_k,m)$$ is clearly a bijection for fixed $m\in \mathbb{N}$, and we can write $\mathbb{N}^{k+1}$ as $$\mathbb{N}^{k+1}=\bigcup_{m=1}^{\infty}\mathbb{N}^k\times \{m\},$$ which, being a countable union of countable sets, is countable.
|
Define when $y$ is a function of $x$ Hello guys, I want to be sure about how to determine when $y$ is a function of $x$, so let us consider the following question. If the equation of a circle is given by
$$x^2+y^2=25$$
and the question is to find the equation of the tangent to the circle at the point $(3,4)$, then it is clear that we need to calculate the derivative, and also that we can express $y$ as
$$y=\sqrt{25-x^2}.$$
But let us consider the following situation:
$$x^3+y^3=6\cdot x \cdot y$$
Its name is the Folium of Descartes, and my question is the same: calculate the equation of the tangent to this folium at the point $(3,4)$.
I know that somehow expressing $y$ as a function of $x$ is difficult here, but couldn't we do it by some (even long) mathematical manipulation? So what is the strict explanation of when $y$ is a function of $x$ and when it is not? I need to be educated on this topic, so please explain the ways of it; thanks a lot.
| The equation $$
\tag{1}x^3+y^3=6xy
$$does define $y$ as a function of $x$ locally (or, rather, it defines $y$ as a function of $x$ implicitly). Here, it is difficult to write the defining equation as $y$ in terms of $x$. But, you don't have to do that to evaluate the value of the derivative of $y$.
[edit] The point $(3,4)$ does not satisfy equation (1); so there is no tangent line at this point.
Let's, instead, consider the point $(3,3)$, which does satisfy equation (1):
To find the slope of the tangent line at $(3,3)$, you need to find $y'(3)$. To find this, first
implicitly differentiate both sides of the defining equation for $y$
(equation (1)). This gives
$$
{d\over dx} (x^3+y^3)={d\over dx} 6xy
$$
So, using the chain and product rules:
$$
3x^2+3y^2 y' =6y+6x y'.
$$
When $x=3$ and $y=3$, you have
$$
3\cdot 3^2+3\cdot 3^2 y'(3)=6\cdot 3+6\cdot 3\cdot y'(3).
$$
Solve this for $y'(3)={3\cdot 3^2-6\cdot 3\over6\cdot3 -3\cdot3^2}=-1.$
Now you can find the equation of the tangent line since you know the slope and that the point $(3,3)$ is on the line.
Generally any "nice" equation in the variables $x$ and $y$ will define $y$ as a function of $x$ in some neighborhood of a given point. Given the $x$ value, the corresponding $y$ value is the solution to the equation ("the" solution in a, perhaps small, neighborhood of the point).
Of course, sometimes it is extremely difficult (if not impossible) to find an explicit form of the function; that is, of the form $y=\Phi(x)$ for some expression $\Phi(x)$. In these cases, to find the derivative of $y$, you have to use the approach above.
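If SymPy is available, the slope can be checked via $y'=-F_x/F_y$ with $F(x,y)=x^3+y^3-6xy$ (a sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')
F = x**3 + y**3 - 6*x*y
print(F.subs({x: 3, y: 4}))                 # 19, so (3,4) is not on the curve
dydx = -sp.diff(F, x) / sp.diff(F, y)       # implicit differentiation: y' = -F_x / F_y
print(dydx.subs({x: 3, y: 3}))              # -1, the slope at (3,3)
```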
|
Sum of irrational numbers Well, in this question it is said that $\sqrt[100]{\sqrt3 + \sqrt2} + \sqrt[100]{\sqrt3 - \sqrt2}$ is irrational, and the owner asks for "alternative proofs" which do not use the rational root theorem. I wrote an answer, but I only proved $\sqrt[100]{\sqrt3 + \sqrt2} \notin \mathbb{Q}$ and $\sqrt[100]{\sqrt3 - \sqrt2} \notin \mathbb{Q}$, not the sum of them. I got (fairly) downvoted, because I didn't notice that the sum of two irrationals can be either rational or irrational, and I deleted my (incorrect) answer. So, I want help in proving things like $\sqrt5 + \sqrt7 \notin \mathbb{Q}$, and $(1 + \pi) - \pi \in \mathbb{Q}$, if there is any "trick" or rule for these cases of summing two (or more) known irrational numbers (without the rational root theorem).
Thanks.
| Here is a useful trick, though it requires a tiny bit of field theory to understand: If $\alpha + \beta$ is a rational number, then $\mathbb{Q}(\alpha) = \mathbb{Q}(\beta)$ as fields. In particular, if $\alpha$ and $\beta$ are algebraic, then the degrees of their minimal polynomials are equal.
So, for example, we can see at a glance that $\sqrt{5} + \sqrt[3]{7}$ is irrational, because $\sqrt{5}$ and $\sqrt[3]{7}$ have algebraic degree $2$ and $3$ respectively.
Note that this trick doesn't work on your original example, because $\alpha=\sqrt[100]{\sqrt{3} + \sqrt{2}}$ and $\beta=\sqrt[100]{\sqrt{3} - \sqrt{2}}$ do have the same degree. But we can also use field theory: since $\alpha \beta = 1$, if $\alpha+\beta$ is rational then $\alpha$ and $\beta$ satisfy a rational quadratic. However, $\alpha^{100}= \sqrt{3}+\sqrt{2}$ already has degree $4$ over $\mathbb{Q}$, so $\alpha$ certainly has degree bigger than $2$.
|
Heads or tails probability I'm working on a maths exercise and came across this question.
The probability of a "heads" when throwing a coin twice is 2 / 3. This could be explained by the following:
• The first time is "heads". The second throw is unnecessary. The result is H;
• The first time is "tails" and twice "heads". The result is TH;
• The first time is "tails" and twice "tails". The result is TT;
The outcomes: {H, TH, TT}. Two of the three results include a "heads", so it follows that the probability of a "heads" is 2/3.
What's wrong with this reasoning?
I think the answer is 1/2, is that right?
PS: my first language isn't English.
Thanks, Jef
| The reason your result is wrong, as Shitikanth has already pointed out, is that you've applied the principle of indifference where it doesn't apply. You can only assume that events are equally likely if they're all qualitatively the same, with nothing (other than names and labels) to distinguish them from each other. The prototypical examples are the two sides of a coin and the six sides of a six-sided die.
In your case, on the other hand, the event H is qualitatively different from the two events TH and TT, so there's no reason to expect that these three will be equiprobable, and the principle of indifference doesn't apply. To apply it, you need to look at qualitatively similar events. In this case, that would be HH, HT, TH and TT. Of these four, three contain a heads, so, as Shitikanth has already stated, the probability is $3/4$.
|
Finding angles in a parallelogram without trigonometry
I'm wondering whether it's possible to solve for $x^{\circ}$ in terms of $a^{\circ}$ and $b^{\circ}$ given that $ABCD$ is a parallelogram. In particular, I'm wondering if it's possible to solve it using only "elementary geometry". I'm not sure what "elementary geometry" would usually imply, but I'm trying to solve this problem without trigonometry.
Is it possible? Or if it's not, is there a way to show that it isn't solvable with only "elementary geometry" techniques?
| The example by alex.jordan does finish the matter, and similar ones may be constructed. We have an angle
$$ \theta = \arctan \left( \frac{1}{\sqrt{12}} \right) $$
and we wish to know whether $ x = \frac{\theta}{\pi} $ is the root of an equation with rational coefficients.
Well,
$$ e^{i \theta} = \sqrt{\frac{12}{13}} + i \sqrt{\frac{1}{13}} $$
Next, $\cos 2 \theta = 2 \cos^2 \theta - 1 = \frac{11}{13}.$ So, by Corollary 3.12 on page 41 of NIVEN we know that $2 \theta$ is not a rational multiple of $\pi.$ So, neither is $\theta,$ and
$$ x = \frac{\theta}{\pi} $$
is irrational.
Now, the logarithm is multivalued in the complex plane. We may choose
$$ \log(-1) = \pi i. $$ With real $x,$ we have chosen
$$ (-1)^x = \exp(x \log(-1)) = \exp(x\pi i) = \cos \pi x + i \sin \pi x. $$
With our $ x = \frac{\theta}{\pi}, $ we have
$$ (-1)^x = e^{i \pi x} = e^{i \theta} = \sqrt{\frac{12}{13}} + i \sqrt{\frac{1}{13}} $$
The right hand side is algebraic.
The Gelfond-Schneider Theorem, Niven page 134, says that if $\alpha,\beta$ are nonzero algebraic numbers, with $\alpha \neq 1$ and $\beta$ not a real rational number, then any value of $\alpha^\beta$ is transcendental.
Taking $\alpha = -1$ and $\beta = x,$ which is real but irrational, ASSUME that $x$ is algebraic over $\mathbb Q.$ This assumption, together with Gelfond-Schneider, says that $ (-1)^x$ is transcendental. However, we already know that $ (-1)^x = \sqrt{\frac{12}{13}} + i \sqrt{\frac{1}{13}} $ is algebraic. This contradicts the assumption. So $x = \theta / \pi$ is transcendental, with $ \theta = \arctan \left( \frac{1}{\sqrt{12}} \right). $
|
Adding a different constant to numerator and denominator Suppose that $a$ is less than $b$ , $c$ is less than $d$.
What is the relation between $\dfrac{a}{b}$ and $\dfrac{a+c}{b+d}$? Is $\dfrac{a}{b}$ less than, greater than or equal to $\dfrac{a+c}{b+d}$?
| One nice thing to notice is that
$$
\frac{a}{b}=\frac{c}{d} \Leftrightarrow \frac{a}{b}=\frac{a+c}{b+d}
$$
no matter the values of $a$, $b$, $c$ and $d$. The $(\Rightarrow)$ is because $c=xa, d=xb$ for some $x$, so $\frac{a+c}{b+d}=\frac{a+xa}{b+xb}=\frac{a(1+x)}{b(1+x)}=\frac{a}{b}$. The other direction is similar.
The above is pretty easy to remember, and with that intuition in mind it is not hard to imagine that, for $b,d>0$,
$$
\frac{a}{b}<\frac{c}{d} \Leftrightarrow \frac{a}{b}<\frac{a+c}{b+d}<\frac{c}{d}
$$
and similar results.
|
Tricky Factorization How do I factor this expression: $$ 0.09e^{2t} + 0.24e^{-t} + 0.34 + 0.24e^t + 0.09e^{-2t} ? $$
By trial and error I got $$ \left(0.3e^t + 0.4 + 0.3 e^{-t}\right)^2$$ but I'd like to know how to formally arrive at it.
Thanks.
| The most striking thing about the given expression is the symmetry. For anything with that kind of symmetric structure, there is a systematic approach which is definitely not trial and error.
Let
$$z=e^t+e^{-t}.$$
Square. We obtain
$$z^2=e^{2t}+2+e^{-2t},$$
and therefore $e^{2t}+e^{-2t}=z^2-2$.
Substitute in our expression. We get
$$0.09(z^2-2)+0.24 z+0.34.\qquad\qquad(\ast)$$
This simplifies to
$$0.09z^2 +0.24z +0.16.$$
The factorization is now obvious. We recognize that we have simply $(0.3z+0.4)^2$. Now replace $z$ by $e^t+e^{-t}$.
If the numbers had been a little different (but still with the basic $e^t$, $e^{-t}$ symmetry) we would at the stage $(\ast)$ obtain some other quadratic. In general, quadratics with real roots can be factored as a product of linear terms. It is just a matter of completing the square. For example, replace the constant term $0.34$ by, say, $0.5$. We get a quadratic in $z$ that does not factor as prettily, but it does factor.
Comment: For fun we could instead make the closely related substitution $2y=e^t+e^{-t}$, that is, $y=\cosh t$. If we analyze the substitution process further, we get to useful pieces of mathematics, such as the Chebyshev polynomials.
The same idea is the standard approach to finding the roots of palindromic polynomials. For example, suppose that we want to solve the equation $x^4 +3x^3-10x^2+3x+1=0$. Divide through by $x^2$. We get the equation
$$x^2+3x-10+\frac{3}{x}+\frac{1}{x^2}=0.$$
Make the substitution $z=x+\frac{1}{x}$. Then $x^2+\frac{1}{x^2}=z^2-2$. Substitute. We get a quadratic in $z$.
|
$f$ uniformly continuous and $\int_a^\infty f(x)\,dx$ converges imply $\lim_{x \to \infty} f(x) = 0$ Trying to solve
$f(x)$ is uniformly continuous on $[0, +\infty)$ and $\int_a^\infty f(x)dx $ converges.
I need to prove that:
$$\lim \limits_{x \to \infty} f(x) = 0$$
Would appreciate your help!
| Suppose $$\tag{1}\lim\limits_{x\rightarrow\infty}f(x)\ne 0.$$
Then we may, and do, select an $\alpha>0$ and a sequence $\{x_n\}$ so that for any $n$, $$\tag{2}x_n\ge x_{n-1}+1$$
and
$$\tag{3}|f(x_n)|>\alpha.$$
Now, since $f$ is uniformly continuous, there is a $1>\delta>0$ so that
$$\tag{4}|f(x)-f(y)|<\alpha/2,\quad\text{ whenever }\quad |x-y|<\delta.$$
Consider the contribution to the integral of the intervals $I_n=[x_n-\delta/2,x_n+\delta/2]$:
We have, by (3), and (4) that
$$\biggl|\,\int_{I_n} f(x)\, dx\,\biggr|\ge {\alpha\over2}\cdot \delta$$ for each positive integer $n$.
But, by (2), the $x_n$ tend to infinity. This implies that $\int_a^\infty f(x)\,dx$ diverges, a contradiction.
Having obtained a contradiction, our initial assumption, (1), must be incorrect. Thus, we must have $\lim\limits_{x\rightarrow\infty}f(x)= 0 $.
Take this with a grain of salt, but,
informally, the idea used above is based on the following:
For clarity, assume $f>0$, here.
If the integral $\int_a^\infty f(x)\,dx$ is convergent, then for large $x$, the graph of $f$ is close to the $x$-axis "most of the time" and, in fact, the positive $x$-axis is an asymptote of "most of" the graph of $f$.
I say "most of the time" and "most of" because is not necessarily so that a function $f$ which is merely continuous must tend to 0 when $\int_a^\infty f(x)\,dx$ converges. There may be spikes in the graph of $f$ as you go out in the positive $x$ direction. Though the height of the spikes can be large, the width of the spikes would be small enough so that the integral
converges (so, the sum of the areas of the spikes is finite).
But the graph of a uniformly continuous function that is "mostly asymptotic to the $x$-axis" does not have very tall spikes of very short widths arbitrarily far out in the $x$-axis.
|
Simplifying trig expression I was working through some trig exercises when I stumbled upon the following problem:
Prove that: $ \cos(A+B) \cdot \cos(A-B)=\cos^2A- \sin^2B$.
I started out by expanding it such that
$$ \cos(A+B) \cdot \cos(A-B)=(\cos A \cos B-\sin A \sin B) \cdot (\cos A \cos B+ \sin A \sin B),$$
which simplifies to:
$$ \cos^2 A \cos^2 B- \sin^2 A \sin^2 B .$$
However, I don't know how to proceed from here. Does anyone have any suggestions on how to continue?
| The identities
$$\cos(\theta) = \frac{e^{i \theta}+e^{- i \theta}}{2}$$
$$\sin(\theta) = \frac{e^{i \theta}-e^{- i \theta}}{2i}$$
can reduce a trigonometric identity to an identity of polynomials. Let's see how this works in your example:
$$\cos(A+B) \cos(A-B)=\cos(A)^2-\sin(B)^2$$
is rewritten into:
$$\frac{e^{i (A+B)}+e^{- i (A+B)}}{2} \frac{e^{i (A-B)}+e^{- i (A-B)}}{2}=\left(\frac{e^{i A}+e^{- i A}}{2}\right)^2-\left(\frac{e^{i B}-e^{- i B}}{2i}\right)^2$$
now we change it into a rational function of $X = e^{i A}$ and $Y = e^{i B}$:
$$\frac{(X Y + \frac{1}{X Y})(\frac{X}{Y} + \frac{Y}{X})}{4} = \frac{(X + \frac{1}{X})^2 + (Y - \frac{1}{Y})^2}{4}$$
and you can simply multiply out both sides to see that they are both $\frac{1}{4}\left(X^2 + \frac{1}{X^2} + Y^2 + \frac{1}{Y^2}\right)$ which proves the trigonometric equality.
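If SymPy is available, the identity can also be confirmed mechanically (a sketch):

```python
import sympy as sp

A, B = sp.symbols('A B')
expr = sp.cos(A + B) * sp.cos(A - B) - (sp.cos(A)**2 - sp.sin(B)**2)
print(sp.simplify(sp.expand_trig(expr)))   # 0
```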
|
Quadratic forms and prime numbers in the sieve of Atkin I'm studying the theorems used in the paper which explains how the sieve of Atkin works, but I cannot understand a point.
For example, in the paper linked above, theorem 6.2 on page 1028 says that if $n$ is prime then the cardinality of the set which contains all the norm-$n$ ideals in $\mathbf Z[(-1+\sqrt{-3})/2]$ is 2. I don't understand why, and I am not able to relate this result to the quadratic form $3x^2+y^2=n$ used in the proof.
| The main thing is that the norm of $s + t \omega$ is $s^2 - s t + t^2$ (equivalently $s^2 + s t + t^2$ after replacing $t$ by $-t$), which is a binary form that represents exactly the same numbers as $3x^2 + y^2.$
It is always true that, for an integer $k,$ the form $s^2 + s t + k t^2$ represents a superset of the numbers represented by $x^2 + (4k-1)y^2.$ For instance, with $k=2,$ the form $x^2 + 7 y^2$ does not represent any numbers $\equiv 2 \pmod 4;$ otherwise it and $s^2 + s t + 2 t^2$ agree.
With $k=-1,$ it turns out that $x^2 - 5 y^2$ and $s^2 + s t - t^2$ represent exactly the same integers.
Take $s^2 + s t + k t^2$ with $s = x - y, \; t = 2 y.$ You get
$$ (x-y)^2 + (x-y)(2y) + k (2y)^2 = x^2 - 2 x y + y^2 + 2 x y - 2 y^2 + 4 k y^2 = x^2 + (4k-1) y^2.$$
|
Finding the limit of a sequence $\lim _{n\to \infty} \sqrt [3]{n^2} \left( \sqrt [3]{n+1}- \sqrt [3]{n} \right)$ If there were a regular square root I would multiply by its conjugate and divide, but I've tried that with this problem and it doesn't work. Not sure what else to do; I have been stuck on it.
$$ \lim _{n\to \infty } \sqrt [3]{n^2} \left( \sqrt [3]{n+1}-
\sqrt [3]{n} \right) .$$
| $$
\begin{align*}
\lim _{n\to \infty } \sqrt [3]{n^2} \left( \sqrt [3]{n+1}-
\sqrt [3]{n} \right)
&= \lim _{n\to \infty } \sqrt [3]{n^2} \cdot \sqrt[3]{n} \left( \sqrt [3]{1+ \frac{1}{n}}-
1 \right)
\\ &= \lim _{n\to \infty } n \left( \sqrt [3]{1+ \frac{1}{n}}-
1 \right)
\\ &= \lim _{n\to \infty } \frac{\sqrt [3]{1+ \frac{1}{n}}-
1 }{\frac{1}{n}}
\\ &= \lim _{h \to 0} \frac{\sqrt [3]{1+ h}-
1 }{h}
\\ &= \left. \frac{d}{du} \sqrt[3]{u} \ \right|_{u=1}
\\ &= \cdots
\end{align*}
$$
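Numerically the value approaches $\frac{d}{du}\sqrt[3]{u}\,\big|_{u=1}=\frac13$ (a quick sketch):

```python
for n in (10**3, 10**6, 10**9):
    print(n, n**(2/3) * ((n + 1)**(1/3) - n**(1/3)))   # tends to 1/3
```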
|
Expected value of $XYZ$, $E(XYZ)$, is not always $E(X)E(Y)E(Z)$, even if $X$, $Y$, $Z$ are not correlated in pairs Could you tell me, please, whether this is true:
the expected value $E(XYZ)$ is not always $E(X)E(Y)E(Z)$, even if $X$, $Y$, $Z$ are not correlated in pairs, because pairwise uncorrelatedness does not entail that they are uncorrelated in aggregate (this is my idea)?
| Suppose
$$
(X,Y,Z) = \begin{cases}
(1,1,0) & \text{with probability }1/4 \\
(1,0,1) & \text{with probability }1/4 \\
(0,1,1) & \text{with probability }1/4 \\
(0,0,0) & \text{with probability }1/4
\end{cases}
$$
Then $X,Y,Z$ are pairwise independent, and $E(X)E(Y)E(Z)=1/8\ne 0 = E(XYZ)$.
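A direct check over the four equally likely outcomes (a sketch):

```python
from itertools import combinations

outcomes = [(1, 1, 0), (1, 0, 1), (0, 1, 1), (0, 0, 0)]   # each with probability 1/4

def E(g):
    return sum(g(w) for w in outcomes) / 4

print([E(lambda w, i=i: w[i]) for i in range(3)])          # E[X]=E[Y]=E[Z]=1/2
for i, j in combinations(range(3), 2):
    print(E(lambda w: w[i] * w[j]))                        # each 0.25 = (1/2)(1/2)
print(E(lambda w: w[0] * w[1] * w[2]))                     # 0.0, not 1/8
```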
|
Proofs for an equality I was working on a little problem and came up with a nice little equality which I am not sure if it is well-known (or) easy to prove (It might end up to be a very trivial one!). I am curious about other ways to prove the equality and hence I thought I would ask here to see if anybody knows any or can think of any. I shall hold off from posting my own answer for a couple of days to invite different possible solutions.
Consider the sequence of functions:
$$
\begin{align}
g_{n+2}(x) & = g_{n}(x) - \left \lfloor \frac{g_n(x)}{g_{n+1}(x)} \right \rfloor g_{n+1}(x)
\end{align}
$$
where $x \in [0,1]$ and $g_0(x) = 1, g_1(x) = x$. Then the claim is:
$$x = \sum_{n=0}^{\infty} \left \lfloor \frac{g_n}{g_{n+1}} \right \rfloor g_{n+1}^2$$
| For whatever it is worth, below is an explanation on why I was interested in this equality. Consider a rectangle of size $x \times 1$, where $x < 1$. I was interested in covering this rectangle with squares of maximum size whenever possible (i.e. in a greedy sense).
To start off, we can have $\displaystyle \left \lfloor \frac{1}{x} \right \rfloor$ squares of size $x \times x$. Area covered by these squares is $\displaystyle \left \lfloor \frac{1}{x} \right \rfloor x^2$.
We will then be left with a rectangle of size $\left(1 - \left \lfloor \frac1x \right \rfloor x \right) \times x$.
We can now cover this rectangle with squares of size $\left(1 - \left \lfloor \frac1x \right \rfloor x \right) \times \left(1 - \left \lfloor \frac1x \right \rfloor x \right)$.
The number of such squares possible is $\displaystyle \left \lfloor \frac{x}{\left(1 - \left \lfloor \frac1x \right \rfloor x \right)} \right \rfloor$.
The area covered by these squares is now $\displaystyle \left \lfloor \frac{x}{\left(1 - \left \lfloor \frac1x \right \rfloor x \right)} \right \rfloor \left(1 - \left \lfloor \frac1x \right \rfloor x \right)^2$.
And so on.
Hence, at the $n^{th}$ stage, if the sides are given by $g_{n-1}(x)$ and $g_n(x)$ with $g_n(x) \leq g_{n-1}(x)$, the number of squares with side $g_{n}(x)$ which can be placed in the rectangle of size $g_{n-1}(x) \times g_n(x)$ is given by $\displaystyle \left \lfloor \frac{g_{n-1}(x)}{g_{n}(x)} \right \rfloor$.
These squares cover an area of $\displaystyle \left \lfloor \frac{g_{n-1}(x)}{g_{n}(x)} \right \rfloor g^2_{n}(x)$.
The rectangle at the $(n+1)^{th}$ stage is then given by $g_{n}(x) \times g_{n+1}(x)$ where $g_{n+1}(x)$ is given by $g_{n-1}(x) - \left \lfloor \frac{g_{n-1}(x)}{g_n(x)} \right \rfloor g_n(x)$.
These squares end up covering the entire rectangle and hence the area of all these squares equals the area of the rectangle.
This hence gives us $$x = x \times 1 = \sum_{n=1}^{\infty} \left \lfloor \frac{g_{n-1}(x)}{g_{n}(x)} \right \rfloor g^2_{n}(x)$$
When I posted this question, I failed to see the simple proof which Srivatsan had.
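For rational $x$ the greedy process terminates, so the identity can be checked in exact arithmetic (a sketch):

```python
from fractions import Fraction

def greedy_square_area(x):
    # total area of maximal squares greedily packed into an x-by-1 rectangle
    g_prev, g = Fraction(1), Fraction(x)
    total = Fraction(0)
    while g != 0:
        q = g_prev // g            # how many g-by-g squares fit
        total += q * g * g
        g_prev, g = g, g_prev - q * g
    return total

x = Fraction(5, 13)
print(greedy_square_area(x), "==", x)    # 5/13 == 5/13
```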
|
Deriving SDE(s) and Expectation from Given PDE We want to solve the PDE $u_t + \left( \frac{x^2 + y^2}{2}\right)u_{xx} + (x-y^2)u_y + ryu = 0 $ where $r$ is some constant and $u(x,y,T) = V(x,y)$ is given. Write an SDE and express $u(x,y,0)$ as the expectation of some function of the path $X_t, Y_t$.
Attempt: I tried to use the multivariate backward equation (2 dimensional) to recover the original SDE's and ended up with $dX_t= \sqrt{x^2 + y^2} dW_t$ and $dY_t = (x-y^2)dt + \sqrt{x^2 + y^2} dW_t$.
The problem I have is recovering the expectation. I'm not too familiar with the multidimensional Feynman-Kac formula, but judging by the $ryu$ term and extrapolating from the one-dimensional case, the desired expectation should have the form $E\left[\exp\left(\text{Riemann integral of } Y_t\right)\cdots\right]$. Can anyone shed some light on this? Thank you.
EDIT: Oops, wrote the forward equation incorrectly and made a typo, the SDE's have changed
| What do you think about the system of SDEs :
$$dX_t=\sqrt{X_t^2+Y_t^2}dW_t$$
$$dY_t=(X_t-Y_t^2)dt$$
And finally :
$$u(X_t,Y_t,t)=\mathbb{E}\left[V(X_T,Y_T)\,e^{\int_t^T rY_s\,ds}\,\Big|\,X_t,Y_t\right]$$
(Note the sign of the exponent: the PDE has $+ryu$, so the factor is $e^{+\int_t^T rY_s\,ds}$; a $-ryu$ term would give the usual discount $e^{-\int_t^T rY_s\,ds}$.)
You can check that $u$ satisfies your PDE, but as always check my calculations, as I am used to making errors.
The way I found this is the following :
I set $r=0$, then looked for $u$ as an expectation of $V(X_T,Y_T)$, derived its SDE via Itô's lemma while looking for a null drift, and then identified the terms of the original PDE with those coming from the drift of $dV$, with $dX_t=a_1(X,Y,t)dt+b_1(X,Y,t)dW_t$ and $dY_t=a_2(X,Y,t)dt+b_2(X,Y,t)dB_t$. This gives the solution for $a_1,a_2,b_1,b_2$ when $r=0$ ($B$ and $W$ are independent Brownian motions, which comes from the intuitive fact that there is no $u_{xy}$ term in the PDE).
Then two minutes of reflection gives that $F(X_T,Y_T,T)=V(X_T,Y_T)\,e^{\int_T^T rY_s\,ds}$ respects the final condition and acts on the drift part of $dF$ by only multiplying the PDE terms by $e^{\int_t^T rY_s\,ds}$ and adding the $ryu$ term which was missing in the solution with $r=0$.
Best regards
|
Given a matrix, is there always another matrix which commutes with it? Given a matrix $A$ over a field $F$, does there always exist a matrix $B$ such that $AB = BA$? (except the trivial case and the polynomial ring?)
| Another example is the adjoint of $A$:
$$
A \operatorname{adj}(A)= \operatorname{adj}(A) A = \det(A)I
$$
(but for invertible matrices $\operatorname{adj}(A)$ equals the scalar $\det(A)$ multiplying the inverse of $A$, so it is trivial that it commutes with $A$).
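If SymPy is available, the adjugate identity is easy to check on a concrete matrix (a sketch; the matrix is an arbitrary choice):

```python
import sympy as sp

A = sp.Matrix([[2, 1, 0], [0, 3, 1], [1, 0, 1]])
adjA = A.adjugate()
print(A * adjA == adjA * A == A.det() * sp.eye(3))   # True
```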
|
A solvable Lie algebra of derived length 2 and nilpotency class $n$ Given a natural number $n>2$, I want to show that there exists a Lie algebra $\mathfrak g$ which is solvable of derived length 2, but nilpotent of class $n$.
I have seen a parallel idea for groups, but I can't see how to implement it for Lie algebras.
Thanks!
| The so called standard graded filiform nilpotent Lie algebra $\mathfrak{f}_{n+1}$ of dimension $n+1$ has nilpotency class $n$, and derived length $2$.
The non-trivial brackets are $[e_1,e_i]= e_{i+1}$ for $i=2,\ldots ,n$. We have
$[\mathfrak{f}_{n+1}, \mathfrak{f}_{n+1}]=\langle e_3,\ldots ,e_{n+1}\rangle$ and
$[[\mathfrak{f}_{n+1}, \mathfrak{f}_{n+1}], [\mathfrak{f}_{n+1}, \mathfrak{f}_{n+1}]]=0$.
|
For which value(s) of parameter m is there a solution for this system Imagine a system with one parameter $m$:
\begin{cases}
mx + y = m\\
mx + 2y = 1\\
2x + my = m + 1
\end{cases}
Now the question is: when does this system of equations have a solution?
I know how to do it with the Gaussian method, but how can I do this without the Gaussian method, let's say with Cramer's rule?
| Compute the values of $x$ and $y$ in terms of $m$ for the following system, then solve $2x + my = m + 1$ (the last equation) to find the admissible values of the parameter $m$:
\begin{cases}
mx + y = m\\
mx + 2y = 1\\
\end{cases}
So,
\begin{cases}
2mx + 2y =2 m\\
mx + 2y = 1\\
\end{cases}
Subtracting the two equations, we have:
$$mx=2m-1$$
* If $m \neq 0$, we may divide by $m$ and get $x = (2m-1)/m$ and $y = 1-m$.
* If $m = 0$, the system has no solution.
Putting $x$ and $y$ in the last equation ($m\neq 0$), we'll have:
$$m^3-3m+2=0 $$
$$(m^3-1)-3m+3=0$$
$$(m-1)(m^2+m+1)-3(m-1)=0$$
$$(m-1)(m^2+m-2)=0$$
Thus the values of parameter $m$ are $m=1$ or $m=-2$.
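If SymPy is available, the whole computation can be checked in a few lines (a sketch; note that $m=0$ is excluded, as in the first bullet above):

```python
import sympy as sp

m, x, y = sp.symbols('m x y')
sol = sp.solve([m*x + y - m, m*x + 2*y - 1], [x, y], dict=True)[0]   # assumes m != 0
third = 2*sol[x] + m*sol[y] - (m + 1)
print(sp.solve(sp.numer(sp.together(third)), m))   # [-2, 1], i.e. m = 1 or m = -2
```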
|
A problem about stochastic convergence (I think) I am trying to prove the convergence of the functions $f_n = I_{[n,n+1]}$ to $f=0$, but first of all I don't know in which way they converge: in $\mathcal{L}_p$, in measure, stochastically, or maybe some other form of convergence often used in measure theory.
For now I'm assuming it's stochastic convergence, as in the following:
$$ \lim_{n \rightarrow \infty} \, \mu(\{x \in \mathbb R: |f_n(x)-f(x)| \geq \alpha\}\cap A )=0$$
must hold for all $\alpha \in \mathbb R_{>0}$ and all $A \in \mathcal{B}(\mathbb R)$ of finite measure.
I know it must be true since there is no finite $A$ for which this holds. Could someone give me a hint how to start off this proof?
| The sequence $\{f_n\}$ doesn't converge in $\mathcal L^p$ norm, since for all $n$ $$\lVert f_{n+1}-f_n\rVert_{L^p}^p=\int_{\mathbb R}|\mathbf 1_{[n+1,n+2]}-\mathbf 1_{[n,n+1]}|^p =\int_{[n,n+2]}1d\mu =2.$$
This sequence cannot converge in measure since $\mu(\{|f_{n+1}-f_n|\geq \frac 12\})\geq \mu([n,n+1))=1$, but converges pointwise to $0$.
It also converges stochastically to $0$, since if $\alpha> 1$, we have $\{x\in\mathbb R\mid |f_n|\geq \alpha\}=\emptyset$. For $\alpha\leq 1$, and $A\in\mathcal B(\mathbb R)$ with $\mu(A)<\infty$, use the fact that
$$\mu(A)\geq \mu(A\cap \mathbb R_+)=\sum_{n=0}^{+\infty}\mu(A\cap[n,n+1]).$$
Since this series converges, its terms tend to $0$; as $\{x\in\mathbb R\mid |f_n(x)|\geq\alpha\}\subseteq[n,n+1]$, this gives $\mu(\{|f_n|\geq \alpha\}\cap A)\leq \mu(A\cap[n,n+1])\to 0$.
|
Representing the $q$-binomial coefficient as a polynomial with coefficients in $\mathbb{Q}(q)$? Trying a bit of combinatorics this winter break, and I don't understand a certain claim.
The claim is that for each $k$ there is a unique polynomial $P_k(x)$ of degree $k$ whose coefficients are in $\mathbb{Q}(q)$, the field of rational functions, such that $P_k(q^n)=\binom{n}{k}_q$ for all $n$.
Here $\binom{n}{k}_q$ is the $q$-binomial coefficient. I guess what is mostly troubling me is that $P_k(q^n)$ is a polynomial in $q^n$. I'm sure it's obvious, but why is the claim true? Thanks.
| The $q$-binomial coefficient satisfies the recurrence
$$
\binom{n}{k}_q = q^k \binom{n-1}{k}_q + \binom{n-1}{k-1}_q,
$$
which follows easily from the definition. We can assume inductively that each term on the right is a polynomial and therefore the LHS is a polynomial.
Edit: Unfortunately this does not seem to yield the uniqueness required in the question.
|
Showing $\tau(n)/\phi(n)\to 0$ as $n\to \infty$ I was wondering how to show that $\tau(n)/\phi(n)\to 0$, as $n\to \infty$. Here $\tau(n)$ denotes the number of positive divisors of n, and $\phi(n)$ is Euler's phi function.
| Here's a hint: let $Q(n)$ denote the largest prime power that divides $n$. Then prove:
*
*$\displaystyle \frac{\tau(n)}{\phi(n)} \le 2 \frac{\tau(Q(n))}{\phi(Q(n))} \le \frac4{\log2} \frac{\log Q(n)}{Q(n)}$;
*$Q(n) \to \infty$ as $n\to \infty$.
For #1, you'll want to use the fact that $\tau(n)/\phi(n)$ is multiplicative, as well as the explicit evaluations of them on prime powers.
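A quick numerical illustration of the decay (a minimal sketch; the sample points are arbitrary and the convergence is of course not monotone):

    from sympy import factorint

    def tau(n):                      # number of positive divisors
        out = 1
        for e in factorint(n).values():
            out *= e + 1
        return out

    def phi(n):                      # Euler's totient
        out = n
        for p in factorint(n):
            out = out // p * (p - 1)
        return out

    for n in [12, 120, 5040, 720720, 735134400]:
        print(n, tau(n) / phi(n))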
|
Find limit of polynomials Suppose we want to find limit of the following polynomial
$$\lim_{x\to-\infty}(x^4+x^5).$$
If we directly put $-\infty$ here, we get "$-\infty +\infty$", which is definitely an undefined form; but otherwise, if we factor out $x^5$, our polynomial will be of the form $x^5(1/x+1)$.
$\lim_{x\to-\infty}\frac 1x=0$, so our result will be $-\infty\cdot(0+1)$, which equals $-\infty$. I have an exam in 3 days and am interested whether my last procedure is correct. Directly putting in $x$ values gives me an undefined form, but factorization, on the other hand, gives negative infinity; which one is correct?
| Your factoring method is fine.
In general
given a polynomial, $$P(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots +a_1 x+a_0,\quad a_n\ne0,$$
you can factor out the leading term when $x\ne0$:
$$
P(x)= x^n\Bigl(\,a_n+{ a_{n-1}\over x}+ \cdots +{a_1\over x^{n-1}} +{a_0\over x^n} \,\Bigr),\quad x\ne0.
$$
When taking the limit as $x$ tends to an infinity, the parenthetical term above will tend towards $a_n$.
From this,
$$
\lim_{x\rightarrow-\infty} P(x) = \cases{\phantom{-}{\rm sign}(a_n) \infty, & n\text{ even}\cr -{\rm sign}(a_n) \infty, & n\text{ odd}}
$$
and
$$
\lim_{x\rightarrow \infty} P(x) = {\rm sign}(a_n) \infty.
$$
Informally, for large $x$, a polynomial behaves as its leading term. So, to compute a limit "at infinity", you could just drop all but the leading term of the polynomial and take the limit of just the leading term.
|
Software to display 3D surfaces What are some examples of software or online services that can display surfaces that are defined implicitly (for example, the sphere $x^2 + y^2 + z^2 = 1$)? Please add an example of usage (if not obvious).
Also, I'm looking for the following (if any):
*
*a possibility to draw many surfaces on the same sheet
*to show cross-sections
| Try these for algebraic surfaces:
*
*surf generates excellent images.
*surfer
*surfex
from http://www.algebraicsurface.net/.
|
Why is $\lim\limits_{x \space \to \infty}\space{\arctan(x)} = \frac{\pi}{2}$? As part of this problem, after substitution I need to calculate the new limits.
However, I do not understand why this is so:
$$\lim_{x \to \infty}\space{\arctan(x)} = \frac{\pi}{2}$$
I tried drawing the unit circle to see what happens with $\arctan$ when $x \to \infty$, but I don't know how to draw $\arctan$. It is the inverse of $\tan$, but how do you even draw $\tan$?
I would appreciate any help.
| Here's a slightly different way of seeing that $\lim\limits_{\theta\rightarrow {\infty}}\arctan\theta={\pi\over2}$.
Thinking of the unit circle, $\tan \theta ={y\over x}$, where $(x,y)$ are the coordinates of the point on the unit circle with reference angle $\theta$, what happens as $\theta\rightarrow\pi/2$? In particular, what happens to $\tan\theta$ as $\theta\nearrow{\pi\over2}$?
Well, the $x$ coordinate heads towards 0 and the $y$ coordinate heads towards 1.
So in the quotient
$$
y\over x,
$$
the numerator heads to 1 and the denominator becomes arbitrarily small; so the quotient heads to infinity.
Thus, $\lim\limits_{\theta\rightarrow {\pi\over2}}\tan\theta=\infty$ and consequently
$\lim\limits_{\theta\rightarrow {\infty}}\arctan\theta={\pi\over2}$.
|
How to prove that geometric distributions converge to an exponential distribution?
How to prove that geometric distributions converge to an exponential distribution?
To solve this, I am trying to define an indexing $n$/$m$ and to send $m$ to infinity, but I get zero, not some relevant distribution. What is the technique or approach one must use here?
| The waiting time $T$ until the first success in a sequence of independent Bernoulli trials with probability $p$ of success in each one has a geometric distribution with parameter $p$: its probability mass function is $P(x) = p (1-p)^{x-1}$
and cumulative distribution function $F(x) = 1 - (1-p)^x$ for positive integers $x$ (note that some authors use a different convention where the random variable is $T-1$ rather than $T$, but that won't make a difference in the limit). The scaled version $p T$ converges in distribution as $p \to 0+$ to an exponential random variable with rate $1$, as for $x \ge 0$
$$ P(p T \le x) = F(x/p) = 1 - (1-p)^{\lfloor x/p\rfloor} \to 1 - e^{-x} $$
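A minimal numerical sketch of this convergence, comparing $P(pT\le x)$ with $1-e^{-x}$ at an arbitrary point $x$ as $p\to 0$:

    import math

    x = 1.5
    for p in [0.1, 0.01, 0.001, 0.0001]:
        geom_cdf = 1 - (1 - p) ** math.floor(x / p)   # P(pT <= x)
        print(p, geom_cdf, 1 - math.exp(-x))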
|
Evaluating a definite integral by changing variables. How can I evalute this integral?
$$\psi(z)=\int\limits_{-\infty}^\infty\int\limits_{-\infty}^\infty [(x-a)^2+(y-b)^2+z^2]^{-3\over 2}f(x,y)\,\,\,dxdy\;.$$
I think we can treat $z$ as a constant and take it out of the integral or something. Maybe changing variables like taking $u={1\over z}[(x-a)^2+(y-b)^2]$? But then how do I change $f(x,y)$ which is some arbitrary function, etc?
Thanks.
| There's no general way to evaluate this integral, for if there were, you could integrate any function $g(x,y)$ by calculating this integral for $f(x,y)=g(x,y)[(x-a)^2+(y-b)^2+z^2]^{\frac32}$.
|
Is $22/7$ equal to the $\pi$ constant?
Possible Duplicate:
Simple numerical methods for calculating the digits of Pi
How the letter 'pi' came in mathematics?
When I calculate the value of $22/7$ on a calculator, I get a number that is different from the constant $\pi$.
Question: How is the $\pi$ constant calculated?
(The simple answer, not the Wikipedia calculus answer.)
| In answer to your second question, NOVA has an interactive exhibit that uses something like Archimedes' method for approximating $\pi$. Archimedes' method predates calculus, but uses many of its concepts.
Note that "simple" and "calculus" are not disjoint concepts.
|
A question on transcendental numbers Transcendental numbers are numbers that are not the solution to any algebraic equation.
But what about $x-\pi=0$? I am guessing that it's not algebraic but I don't know why not. Polynomials are over a field, so I am guessing that $\mathbb{R}$ is implied when not specified. And since $\pi \in \mathbb{R}$, what is the problem?
| To quote Wikipedia "In mathematics, a transcendental number is a number (possibly a complex number) that is not algebraic—that is, it is not a root of a non-constant polynomial equation with rational coefficients." so the field is $\mathbb{Q}$ and $\pi$ is not included.
|
Evaluating Integral $\int e^{x}(1-e^x)(1+e^x)^{10} dx$ I have this integral to evaluate: $$\int e^{x}(1-e^x)(1+e^x)^{10} dx$$
I figured to use u substitution for the part that is raised to the tenth power. After doing this the $e^x$ is canceled out.
I am not sure where to go from here however due to the $(1-e^x)$.
Is it possible to move it to the outside like this and continue from here with evaluating the integral?
$$(1-e^x)\int u^{10} du$$
| Let $x=\ln(u)$, so that $dx=du/u$ and $e^x=u$. Then
$$I=\int e^{x}(1-e^x)(1+e^x)^{10}\,dx = \int \frac{u(1-u)(1+u)^{10}}{u}\,du=\int (1-u)(1+u)^{10}\,du.$$
You may want to take it from here...
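If you just want to check the final answer, a minimal SymPy sketch (it verifies an antiderivative by differentiating it back):

    from sympy import symbols, exp, integrate, diff, simplify

    x = symbols('x')
    integrand = exp(x) * (1 - exp(x)) * (1 + exp(x))**10
    F = integrate(integrand, x)
    print(simplify(diff(F, x) - integrand))   # 0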
|
A question on Taylor Series and polynomial Suppose $ f(x)$ that is infinitely differentiable in $[a,b]$.
For every $c\in[a,b] $ the series $\sum\limits_{n=0}^\infty \cfrac{f^{(n)}(c)}{n!}(x-c)^n $ is a polynomial.
Is true that $f(x)$ is a polynomial?
I can show it is true if for every $c\in [a,b]$, there exists a neighborhood $U_c$ of $c$, such that
$$f(x)=\sum\limits_{n=0}^\infty \cfrac{f^{(n)}(c)}{n!}(x-c)^n\quad\text{for every }x\in U_c,$$
but, this equality is not always true.
What can I do when $f(x)\not=\sum\limits_{n=0}^\infty \cfrac{f^{(n)}(c)}{n!}(x-c)^n$?
| As I confirmed here, if for every $c\in[a,b] $, the series $\sum\limits_{n=0}^\infty \cfrac{f^{(n)}(c)}{n!}(x-c)^n $ is a polynomial, then for every $c\in[a,b]$ there exists a $k_c$ such that $f^{(n)}(c)=0$ for $n>k_c$.
If $\max(k_c)$ is finite, we're done: $f(x)$ is a polynomial of degree $\le\max(k_c)$.
If $\max(k_c)=\infty$, it means there are infinitely many unbounded $k_c$'s. But $f$ is infinitely differentiable, so (hand waving) those $c$'s can't have a limit point: although $\max(k_c)=\infty$, we can't have $\lim_{c\to c_\infty}k_c=\infty$ for some $c_\infty\in[a,b]$, because that would mean $k_{c_\infty}=\infty$, i.e. the series at $c_\infty$ is not a polynomial.
So the infinite number of unbounded $k_c$'s need to be spread apart, e.g. like a Cantor set.
Does this suggest a counterexample or can a Cantor-like distribution of $k_c$'s never be infinitely differentiable?
|
For which $n\in\mathbf{N}$ do we have $\mathbf{Q}(z_{5},z_{7}) = \mathbf{Q}(z_{n})$? Put $z_{n} = e^{2\pi i /n}$. I am searching for $n \in \mathbf{N}$ so that $\mathbf{Q}(z_{5},z_{7}) = \mathbf{Q}(z_{n})$.
I know that : $z_{5} = \cos(\frac{2\pi}{5})+i\sin(\frac{2\pi}{5}) $ and $z_{7} =\cos(\frac{2\pi}{7})+i\sin(\frac{2\pi}{7})$.
Can you give me a hint how to continue my search? Thank you.
| That's one too many hints in the comments, but the OP still seems in doubt, so here is a proof that $\mathbb{Q}(\zeta_5,\zeta_7)=\mathbb{Q}(\zeta_{35})$, where $\zeta_n=e^{2\pi i/n}$ is a primitive $n$th root of unity.
First, let us show that $\mathbb{Q}(\zeta_5,\zeta_7)\subseteq\mathbb{Q}(\zeta_{35})$. Notice that
$$\zeta_{35}^7=(e^{2\pi i/35})^7 = e^{2\pi i/5}=\zeta_5.$$
Thus, $\zeta_5\in \mathbb{Q}(\zeta_{35})$. Similarly, $\zeta_7 = \zeta_{35}^5 \in \mathbb{Q}(\zeta_{35})$. Hence, $\mathbb{Q}(\zeta_5,\zeta_7)\subseteq\mathbb{Q}(\zeta_{35})$.
Next, let us show that $\mathbb{Q}(\zeta_{35})\subseteq \mathbb{Q}(\zeta_5,\zeta_7)$. Indeed, consider
$$(\zeta_5\cdot\zeta_7)^3 = (e^{2\pi i/5}\cdot e^{2\pi i/7})^3 = (e^{2\pi i\cdot 12/35})^3 = e^{2\pi i\cdot 36/35} = e^{2\pi i}\cdot e^{2\pi i/35} = 1 \cdot e^{2\pi i/35} = \zeta_{35}.$$
Thus, $\zeta_{35}=(\zeta_5\cdot\zeta_7)^3\in \mathbb{Q}(\zeta_5,\zeta_7)$, and this shows the inclusion $\mathbb{Q}(\zeta_{35})\subseteq \mathbb{Q}(\zeta_5,\zeta_7)$. Therefore, we must have an equality of fields.
Now, suppose that $\mathbb{Q}(\zeta_5,\zeta_7)=\mathbb{Q}(\zeta_n)$ for some $n\geq 1$. We have just shown that $n=35$ works. Are there any other possible values of $n$ that work? Well, if $n$ is odd, then $\mathbb{Q}(\zeta_n) = \mathbb{Q}(\zeta_{2n})$, so $n=70$ also works.
Finally, one can show that if $\mathbb{Q}(\zeta_m)\subseteq \mathbb{Q}(\zeta_n)$, then $m$ divides $n$ (here neither $m$ nor $n$ should be twice an odd number). In particular, since we know that $\mathbb{Q}(\zeta_5,\zeta_7)=\mathbb{Q}(\zeta_{35})=\mathbb{Q}(\zeta_n)$, then $n$ is divisible by $35$, and therefore $\varphi(n)$ is divisible by $24$. If $n>70$ and divisible by $35$, then $\varphi(n)$ would be strictly larger than $24$, and that would be a contradiction, because $\varphi(n)$ is the degree of the extension $\mathbb{Q}(\zeta_n)/\mathbb{Q}$. Hence, $n=35$ or $70$.
|
Fourier transform (logarithm) question Can we think, at least in the sense of distributions, about the Fourier transform of $\log(s+x^{2})$? Here $s$ is a real and positive parameter.
However, $\int_{-\infty}^{\infty}dx\,\log(s+x^{2})\exp(iux)$ is not well defined.
Can the Fourier transform of the logarithm be evaluated?
| Throughout, it is assumed that $s>0$ and $u \in \mathbb{R}$.
Define:
$$
\mathcal{I}_\nu(u) = \int_{-\infty}^\infty \left(s+x^2\right)^{-\nu} \mathrm{e}^{i u x} \,\,\mathrm{d} x = \int_{-\infty}^\infty \left(s+x^2\right)^{-\nu} \cos\left(u x\right) \,\,\mathrm{d} x
$$
The integral above converges for $\nu > 0$. We are interested in computing the (distributional) value of $\lim_{\nu \downarrow 0} \left( -\partial_\nu \mathcal{I}_\nu(u)\right)$. Let $\mathcal{J}_\nu(u) = -\partial_\nu \mathcal{I}_\nu(u)$.
Notice that
$$ \begin{eqnarray}
\mathcal{I}_{\nu-1}(u) &=& s \cdot \mathcal{I}_\nu(u) - \partial_u^2 \mathcal{I}_\nu(u) \\
\mathcal{J}_{\nu-1}(u) &=& s \cdot \mathcal{J}_\nu(u) - \partial_u^2 \mathcal{J}_\nu(u)
\end{eqnarray}
$$
whenever integrals are defined.
It's not hard to compute $\mathcal{I}_\nu(u)$ explicitly:
$$
\mathcal{I}_\nu(u) = \sqrt{\pi} \cdot \frac{ 2^{\frac{3}{2}-\nu } s^{\frac{1}{4}-\frac{\nu }{2}} }{\Gamma (\nu )} \cdot |u|^{\nu -\frac{1}{2}} K_{\frac{1}{2}-\nu }\left(\sqrt{s} |u|\right)
$$
One can also compute $\mathcal{J}_1(u)$ by using known expressions for index derivatives of Bessel functions at half-integer order:
$$
\mathcal{J}_1\left(u\right) = \pi \, \frac{\mathrm{e}^{\sqrt{s} |u|} }{\sqrt{s}} \cdot \operatorname{Ei}\left(-2 \sqrt{s} |u|\right) -\pi\, \frac{ \mathrm{e}^{-\sqrt{s} |u|} }{\sqrt{s}} \left(\frac{1}{2} \log \left(\frac{u^2}{4 s}\right)+\gamma \right)
$$
Hence $\mathcal{J}_1(u)$ is a continuous function of a real argument $u$, and has the following series expansions:
$$
\begin{eqnarray}
\mathcal{J}_1(u) &=& \frac{\pi \log \left(16 s^2\right)}{2 \sqrt{s}}+\pi |u| (\log (u^2)+2 \gamma -2)+\mathcal{o}\left(u\right) \\
\mathcal{J}_1(u) &=& -\frac{\pi}{2} \mathrm{e}^{-\sqrt{s} |u| } \left(\frac{ \log \left(\frac{u^2}{4s}\right) + 2 \gamma }{\sqrt{s}} + \frac{1}{s|u|}+\mathcal{o}\left(|u|^{-1}\right)\right)
\end{eqnarray}
$$
They show that $\mathcal{J}_1^\prime(u)$ is discontinuous.
In order to express $\mathcal{J}_0(u)$ in terms of distributions we use
$$
\begin{eqnarray}
\int \mathcal{J}_0(u) f(u) \, \mathrm{d} u &=& \int \left( s \mathcal{J}_1(u) - \mathcal{J}_1^{\prime\prime}(u) \right) f(u) \, \mathrm{d} u \\ &=& \int \left( s f(u) - f^{\prime\prime}(u) \right) \mathcal{J}_1(u) \, \mathrm{d} u
\end{eqnarray}
$$
|
Determinant of a special kind of block matrix I have a $2\times2$ block matrix $M$ defined as follows:
$$\begin{pmatrix}X+|X| & X-|X| \\ Y-|Y| & Y+|Y|\end{pmatrix}$$
where $X$ and $Y$ are $n\times n$ matrices and $|X|$ denotes the entrywise modulus of $X$, i.e. the matrix of absolute values of the individual elements of $X$.
How may I find the determinant of the matrix $M$ in terms of $X$ and $Y$? I am looking for a simplified solution.
| I shall assume that $X+|X|$ is invertible, although a similar solution exists under the assumption that $Y+|Y|$ is. I shall use $A,B,C,D$ to denote the respective block matrices in your problem to avoid giant equations.
The decomposition $$M = \begin{pmatrix}A & B \\ C & D\end{pmatrix} = \begin{pmatrix}A & 0\\ C & I\end{pmatrix} \begin{pmatrix}I & A^{-1}B\\ 0 & D-CA^{-1}B\end{pmatrix} = ST$$ can be verified by simple matrix multiplication (noting that matrix multiplication for block matrices works like multiplying matrices over any other noncommutative ring). The key fact is that $\det(S) = \det(A)$ and $\det(T) = \det(D-CA^{-1}B)$.
I shall only prove the first equality (or rather a stronger statement where $I$ is not necessarily the same size as $A$ but the block matrix is still square), as the second can be proved similarly. If $A$ is $1\times 1$, the equation follows from the fact that $S$ is triangular and the product along its diagonal is $\det(A)$. If we assume that it holds for any $n\times n$ matrices $A,C$ then we can apply the Laplace formula to get $$\det(S) = \sum\limits_{j=1}^{n+1} (-1)^{j+1}a_{1j}\det(N_{1j})$$ where $N_{1j}$ is the matrix that results from deleting the first row and $j^{th}$ column of $S$. These matrices satisfy the inductive hypothesis (the identity matrix has not been touched), and so $\det(N_{1j}) = \det(M_{1j})$ where $M_{1j}$ is the matrix that results from deleting the first row and $j^{th}$ column of $A$. Using the Laplace formula again gives $$\sum\limits_{j=1}^{n+1} (-1)^{j+1}a_{1j}\det(N_{1j}) = \sum\limits_{j=1}^{n+1} (-1)^{j+1}a_{1j}\det(M_{1j}) = \det(A)$$ completing the proof.
Since $\det(M) = \det(S)\det(T)$, this gives us $$\det(M) = \det(A)\det(D-CA^{-1}B)$$
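A numerical spot-check of the resulting formula with NumPy (random blocks, so $A$ is invertible with probability $1$; the seed is arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 4
    A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))
    M = np.block([[A, B], [C, D]])
    lhs = np.linalg.det(M)
    rhs = np.linalg.det(A) * np.linalg.det(D - C @ np.linalg.inv(A) @ B)
    print(lhs, rhs)   # agree up to floating-point rounding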
|
Software for Galois Theory Background: While studying Group Theory ( Open University M208 ) I had a lot of benefit from the Mathematica Add-on package AbstractAlgebra and later from the GAP software. I am currently self-studying Galois Theory ( using Ian Stewart's Galois Theory ).
Question: Is there a program that calculates the Field Extensions / Galois Group for a ( simple ) polynomial ?
| Canonical answers are Sage, Pari, Magma. The first two are open source, the last one costs money but has an online calculator. Type for example
P<x>:=PolynomialRing(Rationals());
GaloisGroup(x^6+3);
in the online calculator and hit submit. See the online manual on how to interpret the result.
|
Non-algebraically closed field in which every polynomial of degree $\leq n$ has a root My problem is to build, for every prime $p$, a field of characteristic $p$ in which every polynomial of degree $\leq n$ ($n$ a fixed natural number) has a root, but such that the field is not algebraically closed.
If I'm not wrong (please correct me if I am) such a field cannot be finite, by counting arguments. But on the other hand, the union of all finite fields (or of any ascending chain of finite fields) of characteristic $p$, which is what I get if I start with $F_p$ and add a root to each polynomial of degree $\leq n$ in each step, is the algebraic closure of $F_p$, hence algebraically closed. I don't see how I can control this process so that in the end I get a field that is not algebraically closed.
Any hint will be welcome. Thanks in advance.
| Let $k$ be a field, $\bar k$ an algebraic closure of $k$. Fix $n>1$ natural. Consider the family $\mathcal{K}_n$ of fields $K$, $k\subset K\subset \bar k$ with the property: there exists a family of intermediate fields
$$k = K_0 \subset K_1 \subset \ldots \subset K_s= K$$
so that $[K_{i+1}\colon K_i]< n$ for all $0\le i \le s-1$. It is easy to check the following:
*
*$K \in \mathcal{K}_n$, $K\subset L \subset \bar k$, $[L\colon K]< n$ implies $L \in \mathcal{K}_n$
*$K$, $K'\in \mathcal{K}_n$ implies $K K'\in \mathcal{K}_n$.
*$K \in \mathcal{K}_n$, $k \subset K'\subset K $ implies $K'\in \mathcal{K}_n$.
It is easy to see now that the union of the subfields in $\mathcal{K}_n$ is a subfield $k^{(n)}$ and every polynomial of degree $<n$ with coefficients in $k^{(n)}$ splits completely in $k^{(n)}$.
Note that the degree over $k$ of every element in $k^{(n)}$ has all its prime factors $<n$. Therefore, if $k$ is such that there exist elements in $\bar k$ whose degree over $k$ has a prime factor $>n$ (many examples here) then $k^{(n)}\ne \bar k$, that is, $k^{(n)}$ is not algebraically closed.
Note: for $n=3$ we get $k^{(n)}$ are the constructible elements of $\bar k/k$.
|
What are the interesting applications of hyperbolic geometry? I am aware that, historically, hyperbolic geometry was useful in showing that there can be consistent geometries that satisfy the first 4 axioms of Euclid's elements but not the fifth, the infamous parallel lines postulate, putting an end to centuries of unsuccesfull attempts to deduce the last axiom from the first ones.
It seems to be, apart from this fact, of genuine interest since it was part of the usual curriculum of all mathematicians at the begining of the century and also because there are so many books on the subject.
However, I have not found mention of applications of hyperbolic geometry to other branches of mathematics in the few books I have sampled. Do you know any or where I could find them?
| It is my understanding that the principal application of hyperbolic geometry is in physics, specifically in special relativity. The Lorentz group of Lorentz transformations is a non-compact hyperbolic manifold. But it also shows up in general relativity and astrophysics, as the space surrounding black holes is hyperbolic (negative curvature).
|
Understanding bounds on factorions I am trying to understand the upper bound on factorions (in base $10$). The Wikipedia page says:
"If $n$ is a natural number of $d$ digits that is a factorion, then
$10^{d - 1} \le n \le 9!\,d$. This fails to hold for $d \ge 8$, thus $n$ has at most $7$ digits, and the first upper bound is $9,999,999$. But the maximum sum of factorials of digits for a $7$-digit number is $9!\cdot 7 = 2,540,160$, establishing the second upper bound."
Please explain this to me in simple terms, precisely the first part $10^{d − 1} \le n \le 9!d$.
| By definition, $n$ is the sum of the factorials of its digits. Since each digit of $n$ is at most 9, this can be at most $9!\cdot d$, where $d$ is the number of digits of $n$:
If $n$ is a factorion:
$$
n=d_1d_2\cdots d_d\quad\Rightarrow\quad n= d_1!+\,d_2!\,+\cdots+ \,d_d!\le \underbrace{9!+\,9!+\,\cdots+\, 9!}_{d -\text{terms}}\le9!\cdot d
$$
The lower bound is trivial, since $n$ has $d$ digits, it must be at least $10^{d-1}$.
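Since the bounds make an exhaustive search feasible, here is a minimal Python sketch that finds every base-$10$ factorion below the second upper bound:

    from math import factorial

    fact = [factorial(d) for d in range(10)]
    bound = 7 * factorial(9)              # 2,540,160
    hits = [n for n in range(1, bound + 1)
            if n == sum(fact[int(c)] for c in str(n))]
    print(hits)                           # [1, 2, 145, 40585]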
|
Trying to figure out how an approximation of a logarithmic equation works The physics books I'm reading gives $$\triangle\tau=\frac{2}{c}\left(1-\frac{2m}{r_{1}}\right)^{1/2}\left(r_{1}-r_{2}+2m\ln\frac{r_{1}-2m}{r_{2}-2m}\right).$$
We are then told $2m/r$
is small for $r_{2}<r<r_{1}$
which gives the approximation$$\triangle\tau\approx\frac{2}{c}\left(r_{1}-r_{2}-\frac{m\left(r_{1}-r_{2}\right)}{r_{1}}+2m\ln\left(\frac{r_{1}}{r_{2}}\right)\right).$$
I can see how $$\frac{2}{c}\left(1-\frac{2m}{r_{1}}\right)^{1/2}\approx\frac{2}{c}$$
but can't see how the rest of it appears. It seems to be saying that$$2\ln\frac{r_{1}-2m}{r_{2}-2m}\approx\left(-\frac{\left(r_{1}-r_{2}\right)}{r_{1}}+2\ln\left(\frac{r_{1}}{r_{2}}\right)\right)$$
I've tried getting all the lns on one side, and also expanding $\ln\frac{r_{1}-2m}{r_{2}-2m}$
to $\ln\left(r_{1}-2m\right)-\ln\left(r_{2}-2m\right)$
and generally juggling it all about but with no luck. Any suggestions or hints from anyone?
It's to do with the gravitational time delay effect. It seems a bit more maths than physics which is why I'm asking it here.
Many thanks
| It actually seems to me they use
$$\frac{2}{c}\left(1-\frac{2m}{r_{1}}\right)^{1/2}\approx\frac{2}{c}\left(1-\frac{m}{r_{1}}\right)$$
and
$$2m\ln\frac{r_{1}-2m}{r_{2}-2m}\approx 2m\ln\left(\frac{r_{1}}{r_{2}}\right) \; .$$
EDIT: Just realized the following:
$$2m\ln\frac{r_{1}-2m}{r_{2}-2m}\approx 2m\ln\left(\frac{r_{1}}{r_{2}}\right) + 2m\left(\frac{2m}{r_2}-\frac{2m}{r_1}\right) \; .$$
Now the last term can be rewritten as
$$2m\left(\frac{2m}{r_2}-\frac{2m}{r_1}\right) = \frac{(2m)^2}{r_1 r_2}(r_1-r_2) = \left(\frac{2m}{r_1}\right)\left(\frac{2m}{r_2}\right)(r_1-r_2) $$
which is negligible.
|
Why is $0$ excluded in the definition of the projective space for a vector space?
For a vector space $V$, $P(V)$ is defined to be $(V \setminus \{0 \}) / \sim$, where two non-zero vectors $v_1, v_2$ in $V$ are equivalent if they differ
by a non-zero scalar $λ$, i.e., $v_1 = \lambda v_2$.
I wonder why vector $0$ is excluded when considering the equivalent classes, since $\{0\}$ can be an equivalent class too? Thanks!
| Projective space is supposed to parametrize lines through the origin. A line is determined by two points, so a line through the origin is determined by any nonzero vector.
As Nate explains, you can certainly include $0$, but you will get a different space. Is there a reason to care about it?
One reason we care about the space of lines through the origin is that it is a rich arena for discovering interesting theorems and examples.
In general, projective space is a more natural setting for algebraic geometry than affine space. For instance, theorems have fewer special cases - the most natural one being that two lines always intersect in the projective plane. Others include: Bezout's theorem, the classification of plane conics, 27 lines on a cubic, etc.
There are other reasons it is nice, which have to do with it being compact. Along those lines, we can think of projective space as a natural compactification of affine space, which is designed to catch points that wander off to infinity by assigning to their limit the direction they wandered off in. This is related to how we can use projective space to resolve singularities via blow-ups, by remembering the tangent line along which a point enters the singularity. All of these are natural situations where we care about the space of lines as a geometric object.
Maybe there are natural situations where it also makes sense to include a separate $0$ point, which is the limit of the other points. That doesn't sound too far fetched to me, especially thinking about the blow-up example.
|
The Dimension of the Symmetric $k$-tensors I want to compute the dimension of the symmetric $k$-tensors. I know that a covariant $k$-tensor $T$ is called symmetric if it is unchanged under permutation of arguments. Also, I know that the dimension of covariant $k$-tensors is $n^k$ but how can I eliminate non-symmetric the cases? I found this but I could not get the intution. Also, this blog post answers my question but I don't see why we put | between different indices. Any concrete example would also help such as the symmetric covariant 2-tensors in $\mathbb{R^3}$, as I asked in this thread.
| A basis for symmetric tensors, say $\text{Sym}_r(V)$ with $\{v_1,...,v_n\}$ a basis for $V$, is given by the symmetrizations of $\{v_{i_1}\otimes ... \otimes v_{i_r} \ | \ 1\leq i_1\leq...\leq i_r\leq n\}$. You must count the number of non-decreasing sequences (repetitions allowed) of length $r$ with entries in $[1,n]$. I've always heard the method of counting these referred to as stars and bars, i.e. counting the number of multisets of size $r$ with entries from $[1,n]$, and the answer you get is ${n+r-1\choose r}$.
You line up $r$ stars and insert $n-1$ bars, the first bar separating indices 1 and 2, the second bar separating indices 2 and 3, ..., the $(n-1)$st bar separating indices $n-1$ and $n$.
For example, say $r=5$ and $n=3$. Here are some example of non-decreasing sequences of length $r=5$ with entries from $\{1,2,3\}$:
\begin{align*}
11223 \ &: \ **|**|*\\
22333 \ &: \ |**|***\\
11122 \ &: \ ***|**|\\
22222 \ &: \ |*****|\\
\end{align*}
So there are $r+n-1$ "things" (stars and bars) and you're choosing $r$ of them to be stars (or $n-1$ of them to be bars).
As for why this determines a basis for symmetric tensors: any pure tensor on the chosen basis determines a symmetric tensor via
$$
S(v_{i_1}\otimes ... \otimes v_{i_r})=\sum_{\pi\in S_r}v_{i_{\pi(1)}}\otimes ... \otimes v_{i_{\pi(r)}}
$$
and two pure tensors have the same symmetrization if their indices determine the same multiset (i.e. non-decreasing sequence as described above). I'll leave it to the reader to show that these are independent and that they span the space of symmetric tensors. (On a technical note, the symmetrization needs to be modified in non-zero characteristic and some sources might divide by $r!$.)
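A small Python check that the stars-and-bars count matches a direct enumeration of the non-decreasing index sequences (here for the $n=3$, $r=5$ example above):

    from itertools import combinations_with_replacement
    from math import comb

    n, r = 3, 5
    sequences = list(combinations_with_replacement(range(1, n + 1), r))
    print(len(sequences), comb(n + r - 1, r))   # both are 21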
|
Altitudes of a triangle are concurrent (using co-ordinate geometry)
I need to prove that the altitudes of a triangle intersect at a given point using co-ordinate geometry.
I am thinking of assuming that point to be $(x,y)$ and then using slope equations to prove that the point exists. I can also think of another way: taking two equations (altitudes) forming a family of lines which must equal the third equation, satisfying the condition given in the question. But thinking is all I am able to do; I am unable to put it on paper. A hint towards the solution would be great.
| I'll prove this proposition by Vector algebra, (not to solve OP's problem, but essentially for other users who might find it useful):
Let $\Delta ABC$ be a triangle whose altitudes $AD$, $BE$ intersect at $O$. In order to prove that the altitudes are concurrent, we'll have to prove that $CO$ is perpendicular to $AB$.
Taking $O$ as the origin, let the position vectors of $A$, $B$, $C$ be $\vec{a}$, $\vec{b}$, $\vec{c}$ respectively. Then $\vec{OA}=\vec{a}$, $\vec{OB}=\vec{b}$ and $\vec{OC}=\vec{c}$.
Now, as $AD \perp BC$, we have $\vec{OA} \perp \vec{BC}$. This means $\vec{OA} \cdot \vec{BC}=0$. This means, $$\vec{a} \cdot (\vec{c}-\vec{b})=0$$
Similarly $\vec{OB} \perp \vec{CA}$ and that gives you, $$ \vec{b} \cdot (\vec{a} -\vec{c})=0$$
Adding these you'll have, $$ (\vec{a}-\vec{b}) \cdot \vec{c} =0$$ This reads off immediately that $\vec{OC} \perp \vec{BA}$. This proves the proposition.
|
Proving that $G/N$ is an abelian group
Let $G$ be the group of all $2 \times 2$ matrices of the form $\begin{pmatrix} a & b \\ 0 & d\end{pmatrix}$ where $ad \neq 0$ under matrix multiplication. Let $N=\left\{A \in G \; \colon \; A = \begin{pmatrix}1 & b \\ 0 & 1\end{pmatrix} \right\}$ be a subset of the group $G$. Prove that $N$ is a normal subgroup of $G$ and prove that $G/N$ is abelian group.
Here is my attempt!
To prove $N$ is normal I consider the group homomorphism $f \colon G \to \mathbb R^*$ given by $f(B) = \det(B)$ for all $B$ in $G$, and I see that $f(N)$ is the singleton $\{1\}$; since $\{1\}$ as a subgroup of $\mathbb R^*$ is normal, it follows that $N$ is also normal. Is this proof helpful here? Then how to prove that $G/N$ is abelian? I know $G/N$ is a collection of left cosets.
Thank you.
| One way is using the first isomorphism theorem.
To do this you should find a group homomorphism such that $\operatorname{Ker} \varphi=N$.
Let us try $\varphi: G\to \mathbb R^*\times \mathbb R^*$ given by
$$\begin{pmatrix} a & b \\ 0 & d\end{pmatrix} \mapsto (a,d).$$
(By $\mathbb R^*$ I denote the group $\mathbb R^*=\mathbb R\setminus\{0\}$ with multiplication. By $G\times H$ I denote the direct product of two groups, maybe your book uses notation $G\oplus H$ for this.)
It is relatively easy to verify that $\varphi$ is a surjective homomorphism. It is clear that $\operatorname{Ker} \varphi=N$. Hence, by the first isomorphism theorem,
$$G/N \cong \mathbb R^*\times\mathbb R^*$$
This is a commutative group.
If you prefer, for any reason, not using the first isomorphism theorem, you could also try to verify one of equivalent definitions of normal subgroup and then describe the cosets and their multiplication.
In this case you have
$$\begin{pmatrix} a & b \\ 0 & d \end{pmatrix}
\begin{pmatrix} 1 & b' \\ 0 & 1 \end{pmatrix}
\frac1{ad}
\begin{pmatrix} d & -b \\ 0 & a \end{pmatrix}=
\begin{pmatrix} 1 & \frac{ab'}d \\ 0 & 1 \end{pmatrix}$$
(I have omitted the computations), which shows that $xNx^{-1}\subseteq N$ for any $x\in G$.
You can find out easily that cosets are the sets of the form
$$\{\begin{pmatrix} x & y \\ 0 & z \end{pmatrix}; y\in\mathbb R\}$$
for $x,z\in\mathbb R\setminus\{0\}$ and that the multiplication of cosets representatives $\begin{pmatrix} x & 0 \\ 0 & z \end{pmatrix}$ is coordinate-wise.
|
Generalize the equality $\frac{1}{1\cdot2}+\frac{1}{2\cdot3}+\cdots+\frac{1}{n\cdot(n+1)}=\frac{n}{n+1}$ I'm reading a book The Art and Craft of Problem Solving. I've tried to conjecture a more general formula for sums where denominators have products of three terms. I've "got my hands dirty", but don't see any regularity in numerators.
Please, write down your ideas.
| I have the following recipe in mind; see how far it helps (leave pointers in this regard as comments):
Let $a_1, a_2, a_3, \cdots, a_n, \cdots$ be the terms of an $A.P$. Let $d$ be the common difference of the given $A.P$. We are interested in finding, for some $r \in \mathbb{N}$, the sum $$\sum_{k=1}^n \dfrac{1}{a_k a_{k+1} \cdots a_{k+r-1}}$$
Let us denote the sum of the series as $S_n$ and the n-th term of the series, $T_n$. Consider the following definition of the new entity that I'll call $V_n$. (This is not at all a mystery: $V_n$ is obtained by dropping the first of the $r$ entries in the denominator of $T_n$)
$$V_n:=\dfrac{1}{a_{n+1} \cdots a_{n+r-2} a_{n+r-1}}$$
Therefore, $$V_{n-1}:=\dfrac{1}{a_n \cdots a_{n+r-3} a_{n+r-2}}$$
Now (I'll leave the computation that goes here!), you'll have $$V_n-V_{n-1}=T_n(a_n-a_{n+r-1})$$ $$T_n=\dfrac{1}{d(r-1)} \cdot (V_{n-1}-V_n)$$Substituting various values for $n$, you have equations for $T_1, T_2, \cdots, T_n$. Adding these, and noting that common difference is the difference between two consecutive terms taken in an (appropriate!) order, you have, $$S_n=\dfrac{1}{(r-1)(a_2-a_1)}(\dfrac{1}{a_1\cdots a_{r-1}}-\dfrac{1}{a_{n+1} \cdots a_{n+r-1}})$$
For the problem at hand, set $a_1=1,~d=1,~r=3 $. You'll get $$\sum_{k=1}^n{\dfrac{1}{k(k+1)(k+2)}}=\dfrac{1}{4}-\dfrac{1}{2(n+1)(n+2)}$$
Have fun doing for $r=4, \cdots$. Hope this helps.
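A quick check of the $r=3$ closed form with exact rational arithmetic (a minimal sketch):

    from fractions import Fraction

    for n in [1, 2, 10, 100]:
        s = sum(Fraction(1, k * (k + 1) * (k + 2)) for k in range(1, n + 1))
        closed = Fraction(1, 4) - Fraction(1, 2 * (n + 1) * (n + 2))
        assert s == closed
    print("closed form verified")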
|
Purpose Of Adding A Constant After Integrating A Function I would like to know the whole purpose of adding a constant, termed the constant of integration, every time we integrate an indefinite integral $\int f(x)\,dx$. I am aware that this constant "goes away" when evaluating the definite integral $\int_{a}^{b}f(x)\,dx$. What does that constant have to do with anything? Why is it termed the constant of integration? Where does it come from?
The motivation for asking this question actually comes from solving a differential equation $$x \frac{dy}{dx} = 5x^3 + 4$$ By separation of $dy$ and $dx$ and integrating both sides, $$\int dy = \int\left(5x^2 + \frac{4}{x}\right)dx$$ yields $$y = \frac{5x^3}{3} + 4 \ln(x) + C .$$
I've understood that $\int dy$ represents adding infinitesimal quantities $dy$, yielding $y$, but I'm doubtful about the arbitrary constant $C$.
| There are many great answers here, but I just wanted to chime in with my favorite example of how things can go awry if one forgets about the constant of integration.
Consider
$$\int \sin(2x) dx.$$
We will find antiderivatives in two ways. First, a substitution $u=2x$ yields:
$$\int \frac{\sin(u)}{2}du = -\frac{\cos(u)}{2} = -\frac{\cos(2x)}{2}.$$
Second, we use the identity $\sin(2x)=2\sin(x)\cos(x)$ and a substitution $v=\sin(x)$:
$$\int \sin(2x)dx = \int 2\sin(x)\cos(x)dx = \int 2vdv =v^2 = \sin^2(x).$$
Thus, we have found two antiderivatives of $\sin(2x)$ that are completely different! Namely
$$F(x)=\sin^2(x) \quad \text{and} \quad G(x)=-\frac{\cos(2x)}{2}.$$
Notice that $F(x)=\sin^2(x)\neq -\cos(2x)/2=G(x)$. For instance, $F(0)=\sin^2(0) = 0$ but $G(0)=-\cos(2\cdot 0)/2=-1/2$. So, what happened?
We forgot about the constant of integration, that's what happened. The theory of integration tells us that all antiderivatives differ by a constant. So, if $F(x)$ is an antiderivative, then any other antiderivative $G(x)$ can be expressed as $G(x)=F(x)+C$ for some constant $C$. In particular, our antiderivatives above must differ by a constant. Indeed, the constant $C$ in this case is exactly $C=-\frac{1}{2}$:
$$F(x)+C = F(x) - \frac{1}{2} = \sin^2(x)-\frac{1}{2} = \frac{(1-\cos(2x))}{2}-\frac{1}{2} = -\frac{\cos(2x)}{2} = G(x),$$
where we have used the trigonometric identity $\sin^2(x) = (1-\cos(2x))/2.$
|
Simple trigonometry question (angles) I am starting again with trigonometry just for fun and remember the old days. I was not bad at maths, but however I remember nothing about trigonometry...
And I'm missing something in this simple question, and I hope you can tell me what.
One corner of a triangle has a 60º angle, and the length of the two
adjacent sides are in ratio 1:3. Calculate the angles of the other
triangle corners.
So what we have is the main angle, $60^\circ$, and the adjacent sides, which are $20$ meters (meters for instance). We can calculate the hypotenuse just using $a^2 + b^2 = h^2$. But how to calculate the other angles?
Thank you very much and sorry for this very basic question...
| Since we are only interested in the angles, the actual lengths of the two sides do not matter, as long as we get their ratio right. So we can take the lengths of the adjacent sides to be $1$ and $3$, in whatever units you prefer. If you want the shorter of the two adjacent sides to be $20$ metres, then the other adjacent side will need to be $60$ metres. But we might as well work with the simpler numbers $1$ and $3$.
To compute the length of the third side, we use a generalization of the Pythagorean Theorem called the Cosine Law. Let the vertices of a triangle be $A$, $B$, and $C$, and let the sides opposite to these vertices be $a$, $b$, and $c$. For brevity, let the angle at $A$ be called $A$, the angle at $B$ be called $B$, and the angle at $C$ be called $C$. The Cosine Law says that
$$c^2=a^2+b^2-2ab\cos C.$$
Take $C=60^\circ$, and $a=1$, $b=3$.
Since $\cos(60^\circ)=1/2$, we get
$$c^2=1^2+3^2-2(1)(3)(1/2),$$
so $c^2=7$ and therefore $c=\sqrt{7}$. We now know all the sides.
To find angles $A$ and $B$, we could use the Cosine Law again. We illustrate the procedure by finding $\cos A$.
By the Cosine Law,
$$a^2=b^2+c^2-2bc\cos A.$$
But $a=1$, $b=3$, and by our previous work $c=\sqrt{7}$. It follows that
$$1=9+7-2(3)(\sqrt{7})\cos A,$$
and therefore
$$\cos A= \frac{5}{2\sqrt{7}}.$$
The angle in the interval from $0$ to $180^\circ$ whose cosine is $5/(2\sqrt{7})$ is not a "nice" angle. The calculator (we press the $\cos^{-1}$ button) says that this angle is about $19.1066$ degrees.
Another way to proceed, once we have found $c$, is to use the Sine Law
$$\frac{\sin A}{a}=\frac{\sin B}{b}=\frac{\sin C}{c}.$$
From this we obtain that
$$\frac{\sin A}{1}=\frac{\sqrt{3}/2}{\sqrt{7}}.$$
The calculator now says that $\sin A$ is approximately $0.3273268$, and then the calculator gives that $A$ is approximately $19.1066$ degrees. In the old days, the Cosine Law was not liked very much, and the Sine Law was preferred, because the Sine Law involves only multiplication and division, which can be done easily using tables or a slide rule. A Cosine Law calculation with ugly numbers is usually more tedious.
The third angle of the triangle (angle $B$) can be found in the same way. But it is easier to use the fact that the angles of a triangle add up to $180^\circ$. So angle $B$ is about $100.8934$ degrees.
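The whole computation in a few lines of Python, for checking the arithmetic:

    import math

    a, b, C = 1.0, 3.0, math.radians(60)
    c = math.sqrt(a*a + b*b - 2*a*b*math.cos(C))              # Cosine Law
    A = math.degrees(math.acos((b*b + c*c - a*a) / (2*b*c)))  # Cosine Law again
    B = 180 - 60 - A
    print(c, A, B)   # sqrt(7) = 2.6457..., 19.1066..., 100.8934...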
|
Need help deriving recurrence relation for even-valued Fibonacci numbers. That would be every third Fibonacci number, e.g. $0, 2, 8, 34, 144, 610, 2584, 10946,...$
Empirically one can check that:
$a(n) = 4a(n-1) + a(n-2)$ where $a(-1) = 2, a(0) = 0$.
If $f(n)$ is $\operatorname{Fibonacci}(n)$ (to make it short), then it must be true that $f(3n) = 4f(3n - 3) + f(3n - 6)$.
I have tried the obvious expansion:
$f(3n) = f(3n - 1) + f(3n - 2) = f(3n - 3) + 2f(3n - 2) = 3f(3n - 3) + 2f(3n - 4)$
$ = 3f(3n - 3) + 2f(3n - 5) + 2f(3n - 6) = 3f(3n - 3) + 4f(3n - 6) + 2f(3n - 7)$
... and now I am stuck with the term I did not want. If I do add and subtract another $f(3n-3)$, and expand the $-f(3n-3)$ part, then everything would magically work out ... but how should I know to do that? I can prove the formula by induction, but how would one systematically derive it in the first place?
I suppose one could write a program that tries to find the coefficients x and y such that $a(n) = xa(n-1) + ya(n-2)$ is true for a bunch of consecutive values of the sequence (then prove the formula by induction), and this is not hard to do, but is there a way that does not involve some sort of "Reverse Engineering" or "Magic Trick"?
| The definition of $F_n$ is given:
*
*$F_0 = 0$
*$F_1 = 1$
*$F_{n+1} = F_{n-1} + F_{n}$ (for $n \ge 1$)
Now we define $G_n = F_{3n}$ and wish to find a recurrence relation for it.
Clearly
*
*$G_0 = F_0 = 0$
*$G_1 = F_3 = 2$
Now we can repeatedly use the definition of $F_{n+1}$ to try to find an expression for $G_{n+1}$ in terms of $G_n$ and $G_{n-1}$.
$$\begin{align*}
G_{n+1}&= F_{3n+3}\\
&= F_{3n+1} + F_{3n+2}\\
&= F_{3n-1} + F_{3n} + F_{3n} + F_{3n+1}\\
&= F_{3n-3} + F_{3n-2} + F_{3n} + F_{3n} + F_{3n-1} + F_{3n}\\
&= G_{n-1} + F_{3n-2} + F_{3n-1} + 3 G_{n}\\
&= G_{n-1} + 4 G_{n}
\end{align*}$$
which establishes the recurrence relation for $G$.
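A minimal Python sketch confirming the recurrence against the Fibonacci numbers themselves:

    F = [0, 1]
    for _ in range(60):
        F.append(F[-1] + F[-2])

    G = [F[3 * n] for n in range(20)]     # 0, 2, 8, 34, 144, ...
    assert all(G[n + 1] == 4 * G[n] + G[n - 1] for n in range(1, 19))
    print(G[:8])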
|
Solving quadratic equation $$\frac{1}{x^2} - 1 = \frac{1}{x} -1$$
Rearranging it I get: $1-x^2=x-x^2$, and so $x=1$. But the question Im doing says to find 2 solutions. How would I find the 2nd solution?
Thanks.
| I think it should be emphasised what the salient point is here:
Given the equation
$$
\Phi =\Psi
$$
you may multiply both sides by the same non-zero number $a$ to obtain the equivalent equation
$$
a\Phi =a\Psi.
$$
Multiplying both sides of an equation by 0 may give an equation that's not equivalent to the original equation.
With your equation, eventually you'll get to the point where you have
$$
\tag{1}{1\over x^2}= {1\over x}.
$$
At this point, if you want to "cancel the $x$'s", you could multiply both sides by
$x^2$ as long as $x^2\ne0$. You need to consider what happens when $x=0$ separately.
$x=0$ is not a solution of (1) in this case, so the solutions of (1) are the non-zero solutions of
$$
1=x.
$$
If you multiplied both sides of (1) by $x^3$, the solutions would be the non-zero solutions of
$$
x=x^2.
$$
Your text made an error, most probably, at this stage...
|
Conditional probability of a general Markov process given by its running process I have a question as follow:
"Let $X$ be a general Markov process, $M$ is a running maximum process of $X$ and $T$ be an exponential distribution, independent of $X$.
I learned that there is the following result:
Probability: $P_x(X_T\in dz \mid M_T=y)$ is independent of the starting point $x$ of the process $X$, where $y, z \in \mathbb{R}$."
Is there anyone who knows some references which mentioned the result above? I heard that this result was found around the seventies but I haven't found any good reference yet.
Thanks a lot!
| For real-valued diffusion processes, this is essentially a local form of David Williams' path decomposition, and can be deduced from
Theorem A in a paper "On the joint distribution of the maximum and its location for a linear diffusion" by Csaki, Foldes and Salminen
[Ann. Inst. H. Poincare Probab. Statist., vol. 23 (1987) pp. 179--194].
For more general Markov processes, you will need to look into the theory of "last-exit times". Although these are not stopping times, many Markov processes possess a sort of strong Markov property at such times. This theory can be applied to the last time before $T$ that
the process is at level $y$. One place to start might be the paper of Meyer, Smythe and Walsh "Birth and death of Markov processes" in vol. III (pp. 295-305) of the Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability (1972).
See also the work of P.W. Millar from roughly the same time period.
|
Proof that the set of odd positive integers greater than 3 is countable I found one problem which asks the following:
Show that the set of odd positive integers greater than $3$ is countable.
At the beginning I was thinking that such numbers could be represented by $2k+1$, where $k>1$; but in the answer paper it was written as $2n+3$, i.e. the general function is
$$f(n)=2n+3$$
and when I was thinking how to prove that such a set is countable, the answer paper said this function is a one-to-one correspondence from the set of positive integers to the set of positive odd integers greater than $3$.
My question is: is it enough to prove a one-to-one correspondence between two sets, one of which is countable? If yes: my lecturer once asked me to prove that the rational numbers are countable, so in this case could I represent the rational numbers by the following function from the set of positive integers:
$$f(n)=\frac{n+1}{n}$$
or maybe $f(n)=\frac{n}{n+1}$? They are both one-to-one correspondences from the set of positive integers to the set of rational numbers (positive ones, sure). Please help me: is my logic correct or not?
| If you know that a set $A$ is countable and you demonstrate a bijection $f:A\to B$ then you have also shown that the set $B$ is countable; when $A=\mathbb{Z}^+$ this is the very definition of countable. Both of the functions $2k+1$ and $2n+3$ can be used to show that the set of odds greater than $3$ are countable but the former uses the countable domain of $\{2,3,\dots\}$ instead of $\mathbb{Z}^+=\{1,2,3,\dots\}$.
However, neither of the functions $f(n)=(n+1)/n$ or $n/(n+1)$ is a bijection. Try to express the positive rational number $1/3$ in either of these forms and you will find there is no integer $n$ that works. In order for a function to be a bijection it must be both injective and surjective; your function here is not surjective (it does not attain every value in the codomain at least once: some values, like $1/3$, are left untouched).
|
Matrix/Vector Derivative I am trying to compute the derivative:$$\frac{d}{d\boldsymbol{\mu}}\left( (\mathbf{x} - \boldsymbol{\mu})^\top\boldsymbol{\Sigma} (\mathbf{x} - \boldsymbol{\mu})\right)$$where the size of all vectors ($\mathbf{x},\boldsymbol{\mu}$) is $n\times 1$ and the size of the matrix ($\boldsymbol{\Sigma}$) is $n\times n$.
I tried to break this down as $$\frac{d}{d\boldsymbol{\mu}}\left( \mathbf{x}^\top\boldsymbol{\Sigma} \mathbf{x} - \mathbf{x}^\top\boldsymbol{\Sigma} \boldsymbol{\mu} - \boldsymbol{\mu}^\top\boldsymbol{\Sigma} \mathbf{x} + \boldsymbol{\mu}^\top\boldsymbol{\Sigma} \boldsymbol{\mu} \right) $$
yielding $$(\mathbf{x} + \boldsymbol{\mu})^\top\boldsymbol{\Sigma} + \boldsymbol{\Sigma}(\boldsymbol{\mu} - \mathbf{x})$$
but the dimensions don't work: $1\times n + n\times 1$. Any help would be greatly appreciated.
-C
| There is a very short and quick way to calculate it correctly. The object $(x-\mu)^T\Sigma(x-\mu)$ is called a quadratic form. It is well known that the derivative of such a form is (see e.g. here),
$$\frac{\partial x^TAx }{\partial x}=(A+A^T)x$$
This works even if $A$ is not symmetric. In your particular example, you use the chain rule as,
$$\frac{\partial (x-\mu)^T\Sigma(x-\mu) }{\partial \mu}=\frac{\partial (x-\mu)^T\Sigma(x-\mu) }{\partial (x-\mu)}\frac{\partial (x-\mu)}{\partial \mu}$$
Thus,
$$\frac{\partial (x-\mu)^T\Sigma(x-\mu) }{\partial (x-\mu)}=(\Sigma +\Sigma^T)(x-\mu)$$
and
$$\frac{\partial (x-\mu)}{\partial \mu}=-I$$
(the negative of the identity matrix).
Combining equations you get the final answer,
$$\frac{\partial (x-\mu)^T\Sigma(x-\mu) }{\partial \mu}=(\Sigma +\Sigma^T)(\mu-x)$$
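A finite-difference spot-check of this gradient in NumPy (random data; $\Sigma$ deliberately not symmetric; the seed is arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5
    S = rng.standard_normal((n, n))              # Sigma, not assumed symmetric
    x, mu = rng.standard_normal(n), rng.standard_normal(n)

    def q(m):
        return (x - m) @ S @ (x - m)

    analytic = (S + S.T) @ (mu - x)
    eps = 1e-6
    numeric = np.array([(q(mu + eps * e) - q(mu - eps * e)) / (2 * eps)
                        for e in np.eye(n)])
    print(np.max(np.abs(analytic - numeric)))    # tiny, roughly 1e-9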
|
How many smooth functions are non-analytic? We know from example that not all smooth (infinitely differentiable) functions are analytic (equal to their Taylor expansion at all points). However, the examples on the linked page seem rather contrived, and most smooth functions that I've encountered in math and physics are analytic.
How many smooth functions are not analytic (in terms of measure or cardinality)? In what situations are such functions encountered? Are they ever encountered outside of real analysis (e.g. in physics)?
| In terms of cardinality, there are the same number of smooth and analytic functions, $2^{\aleph_0}$. The constant functions are enough to see that there are at least $2^{\aleph_0}$ analytic functions. The fact that a continuous function is determined by its values on a dense subspace, along with my presumption that you are referring to smooth functions on a separable space, imply that there are at most $(2^{\aleph_0})^{\aleph_0}=2^{\aleph_0}$ smooth functions.
Added: In light of the question edit, I should mention that the cardinality of the set of smooth nonanalytic functions is also $2^{\aleph_0}$. This can be seen by taking the constant multiples of some bump function.
I don't know about measures, but analytic functions are a very special subclass of smooth functions (something which I'm sorry to leave vague at the moment, but hopefully someone will give a better answer here (Added: Now Dave L. Renfro has)). They are also important, useful, and relatively easy to work with, which is part of why they are so prevalent in the math and physics you have seen.
Where are they encountered? Bump functions are important in differential equations and manifolds, so I would guess they're important in physics. Bump functions are smooth and not analytic.
|
Why is the pullback completely determined by $d f^\ast = f^\ast d$ in de Rham cohomology? Fix a smooth map $f : \mathbb{R}^m \rightarrow \mathbb{R}^n$. Clearly this induces a pullback $f^\ast : C^\infty(\mathbb{R}^n) \rightarrow C^\infty(\mathbb{R}^m)$. Since $C^\infty(\mathbb{R}^n) = \Omega^0(\mathbb{R}^n)$ (the space of zero-forms) by definition, we consider this as a map $f^\ast : \Omega^0(\mathbb{R}^n) \rightarrow \Omega^0(\mathbb{R}^m)$. We want to extend $f^\ast$ to the rest of the de Rham complex in such a way that $d f^\ast = f^\ast d$.
Bott and Tu claim (Section I.2, right before Prop 2.1), without elaboration, that this is enough to determine $f^\ast$ . I can see why this forces e.g.
$\displaystyle\sum_{i=1}^n f^\ast \left[ \frac{\partial g}{\partial y_i} d y_i \right] = \sum_{i=1}^n f^* \left[ \frac{\partial g}{\partial y_i}\right] d(y_i \circ f)$,
but I don't see why this forces each term of the LHS to agree with each term of the RHS -- it's not like you can just pick some $g$ where $\partial g/\partial y_i$ is some given function and the other partials are zero.
| $\newcommand\RR{\mathbb{R}}$I don't have the book here, but it seems you are asking why there is a unique extension of $f^*:\Omega^0(\RR^n)\to\Omega^0(\RR^m)$ to an appropriate $\overline f^*:\Omega^\bullet(\RR^n)\to\Omega^\bullet(\RR^m)$ such that $f^*d=df^*$. Here appropriate should probably mean that the map $\overline f^*$ be a morphism of graded algebras.
Now notice that since $f^*$ is fixed on $\Omega^0(\RR^n)$, the commutation relation with $d$ tells us that it is also fixed on the subspace $d(\Omega^0(\RR^n))\subseteq\Omega^1(\RR^n)$. The uniqueness follows from the fact that the subspace $\Omega^0(\RR^n)\oplus d(\Omega^0(\RR^n))$ of $\Omega^\bullet(\RR^n)$ generates the latter as an algebra.
|
$p(x)$ divided by $x-c$ has remainder $p(c)$? [Polynomial Remainder Theorem] This is from Pinter, A Book of Abstract Algebra, p.265.
Given $p(x) \in F[x]$ where $F$ is a field, I would like to show that $p(x)$ divided by $x-c$ has remainder $p(c)$.
This is easy if $c$ is a root of $p$, but I don't see how to prove it if $c$ is not a root.
| By the division algorithm, if $a(x)$ and $b(x)$ are any polynomials, and $a(x)\neq 0$, then there exist unique $q(x)$ and $r(x)$ such that
$$b(x) = q(x)a(x) + r(x),\qquad r(x)=0\text{ or }\deg(r)\lt \deg(a).$$
Let $b(x) = p(x)$, and $a(x)=x-c$. Then $r(x)$ must be constant (since it is either zero or of degree strictly smaller than one), so
$$b(x) = q(x)(x-c) + r.$$
Now evaluate at $x=c$.
Note. I find it strange that you say that this is "easy if $c$ is a root of $p(x)$". The Factor Theorem (that $x-c$ divides $p(x)$ when $c$ is a root of $p(x)$) is a corollary of this result. How exactly do you prove it without this?
|
Why do we reverse inequality sign when dividing by negative number? We all learned in our early years that when dividing both sides by a negative number, we reverse the inequality sign.
Take
$-3x < 9$
To solve for $x$, we divide both sides by $-3$ and get
$$x > -3.$$
Why is the reversal of inequality? What is going in terms of number line that will help me understand the concept better?
| Multiplying or dividing an inequality by $-1$ is exactly the same thing as moving each term to the other side. But then, if you switch sides for all terms, each term faces the opposite side of the inequality sign...
For example:
$2x < -3$
Moving them on the other side yields:
$3 < -2x$ which is the same as $-2x > 3$...
|
What is the result of $\lim\limits_{x \to 0}(1/x - 1/\sin x)$? Find the limit:
$$\lim_{x \rightarrow 0}\left(\frac1x - \frac1{\sin x}\right)$$
I am not able to find it because I don't know how to prove or disprove $0$ is the answer.
| Since everybody was 'clever', I thought I'd add a method that doesn't really require much thinking if you're used to asymptotics.
The power series for $\sin x$
$$\sin x = x + O(x^3)$$
We can compute the inverse of this power series without trouble. In great detail:
$$\begin{align}\frac{1}{\sin x} &= \frac{1}{x + O(x^3)}
\\ &= \frac{1}{x} \left( \frac{1}{1 - O(x^2)} \right)
\\ &= \frac{1}{x} \left(1 + O(x^2) \right)
\\ &= \frac{1}{x} + O(x)
\end{align}$$
going from the second line to the third line is just the geometric series formula. Anyways, now we can finish up:
$$\frac{1}{x} - \frac{1}{\sin x} = O(x)$$
$$ \lim_{x \to 0} \frac{1}{x} - \frac{1}{\sin x} = 0$$
If we wanted, we could get more precision: it's not hard to use the same method to show
$$ \frac{1}{\sin x} = \frac{1}{x} + \frac{x}{6} + O(x^3) $$
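Both expansions can be checked directly with SymPy (a minimal sketch):

    from sympy import symbols, sin, series, limit

    x = symbols('x')
    print(series(1/sin(x), x, 0, 4))     # 1/x + x/6 + 7*x**3/360 + O(x**4)
    print(limit(1/x - 1/sin(x), x, 0))   # 0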
|
How to show if $ \lambda$ is an eigenvalue of $AB^{-1}$, then $ \lambda$ is an eigenvalue of $ B^{-1}A$? Statement: If $ \lambda$ is an eigenvalue of $AB^{-1}$, then $ \lambda$ is an eigenvalue of $ B^{-1}A$ and vice versa.
One way of the proof.
We have $B(B^{-1}A ) B^{-1} = AB^{-1}. $ Assuming $ \lambda$ is an eigenvalue of $AB^{-1}$ then we have,
$$\begin{align*}
\det(\lambda I - AB^{-1}) &= \det( \lambda I - B( B^{-1}A ) B^{-1} )\\
&= \det( B(\lambda I - B^{-1}A ) B^{-1})\\
&= \det(B) \det\bigl( \lambda I - B^{-1}A \bigr) \det(B^{-1})\\
&= \det(B) \det\bigl( \lambda I - (B^{-1}A )\bigr) \frac{1}{ \det(B) }\\ \
&= \det( \lambda I - B^{-1}A ).
\end{align*}$$
It follows that $ \lambda$ is an eigenvalue of $ B^{-1}A.$ The other side of the lemma can also be proved similarly.
Is there another way how to prove the statement?
| A shorter way of seeing this would be to observe that if
$$
(AB^{-1})x=\lambda x
$$
for some non-zero vector $x$, then by multiplying that equation by $B^{-1}$ (from the left) we get that
$$
(B^{-1}A)(B^{-1}x)=\lambda (B^{-1}x).
$$
In other words $(B^{-1}A)y=\lambda y$ for the non-zero vector $y=B^{-1}x$. This process is clearly reversible.
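A numerical confirmation with NumPy (random matrices, so $B$ is invertible with probability $1$; the seed is arbitrary):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 4
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))
    Binv = np.linalg.inv(B)
    ev1 = np.sort_complex(np.linalg.eigvals(A @ Binv))
    ev2 = np.sort_complex(np.linalg.eigvals(Binv @ A))
    print(np.max(np.abs(ev1 - ev2)))   # close to machine precision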
|
Serving customers algorithm Well I have a problem with a Christmas assignment and my teacher is not responding(maybe he is skiing somewhere now) so I will need some help.
The algorithm is about an office and the waiting time of the customers. We have one office that has to serve $n$ customers $a_1, a_2,\cdots ,a_n$. We assume that the serving time $t(a_j)$ for each customer $a_j$ is known. Let $a_j$ be served after $k$ customers $a_{i_{1}},a_{i_{2}},\cdots ,a_{i_{k}}$. His waiting time $T(a_j)$ is equal to
$$T(a_j) = t(a_{i_{1}})+t(a_{i_{2}})+\cdots +t(a_{i_{k}})+t(a_{j})$$
I want an efficient algorithm that will compute the best serving order, minimizing the total waiting time
$$\sum_{j=1}^nT(a_j)$$
My first thought is that the customers with the smallest serving time have to go first, and the obvious solution is to apply a sorting algorithm.
Am I wrong?
| Problems of this kind belong to the area of operations research known as scheduling problems (scheduling theory). Here is a short bibliography of books that deal with this topic: http://www.york.cuny.edu/~malk/biblio/scheduling2-biblio.html There is a lot of nice mathematics involved.
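For what it's worth, the asker's intuition (serve the smallest jobs first, i.e. sort) is exactly the classical shortest-processing-time-first rule from this literature; a minimal Python sketch with made-up serving times:

    def total_waiting_time(times):
        wait, total = 0, 0
        for t in times:
            wait += t        # this customer waits for everyone before, plus himself
            total += wait
        return total

    times = [5, 1, 4, 2, 8]                    # hypothetical serving times
    print(total_waiting_time(times))           # 53
    print(total_waiting_time(sorted(times)))   # 43, the minimum over all orders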
|
When is $[0,1]^K$ submetrizable or even metrizable? Let $I=[0,1]$ and let $K$ be a compact space. Then could the function space $I^K$ be submetrizable, or even metrizable? In other words, in general, if $I^A$ can be submetrizable (metrizable) for some space $A$, what condition should $A$ satisfy?
| If $A$ is compact, $I^A$ is metrizable with the metric being the uniform norm. That is, $d(f,g):=\sup_{a\in A} d(f(a),g(a))$.
|
Proof that $\binom{2\phi(r)}{\phi(r)+1} \geq 2^{\phi(r)}$ I try to prove the following
$$\binom{2\phi(r)}{\phi(r)+1} \geq 2^{\phi(r)}$$
with $r \geq 3$ and $r \in \mathbb{P}$. Do I have to do induction over $r$, or are there any better ideas?
Any help is appreciated.
| Combinatorial proof of ${2n \choose n+1} \geq 2^n$ where $n \geq 2$:
Let's take the set $\{x_1,y_1,\dots,x_{n-2},y_{n-2},a,b,c,d\}$, which has $2n$ elements; select three elements out of $\{a,b,c,d\}$ and, for each $i$, a single element of $\{x_i,y_i\}$; you'll select $n+1$ elements in total. So
${2n \choose n+1} \geq {4 \choose 3} 2^{n-2}=2^n$
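A quick numeric sanity check of the inequality for small $n$ (here $n=\phi(r)$), just as reassurance:
```python
from math import comb

for n in range(2, 15):
    assert comb(2*n, n + 1) >= 2**n
print("holds for n = 2..14")
```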
|
partial sum involving factorials Here is an interesting series I ran across.
It is a binomial-type identity.
$\displaystyle \sum_{k=0}^{n}\frac{(2n-k)!\cdot 2^{k}}{(n-k)!}=4^{n}\cdot n!$
I tried all sorts of playing around, but could not get it to work out.
This works out the same as $\displaystyle 2^{n}\prod_{k=1}^{n}2k=2^{n}\cdot 2^{n}\cdot n!=4^{n}\cdot n!$
I tried equating these somehow, but I could not get it. I even wrote out the series.
There were cancellations, but it did not look like the product of the even numbers.
$\displaystyle \frac{(2n)!}{n!}+\frac{(2n-1)!\cdot 2}{(n-1)!}+\frac{(2n-2)!2^{2}}{(n-2)!}+\cdot\cdot\cdot +n!\cdot 2^{n}=4^{n}\cdot n!$.
How can the closed form be derived from this? I bet I am just being thick. I see the last term is nearly the result except for being multiplied by $2^{n}$. I see that if the factorials are written out, $2n(2n-1)(2n-2)(2n-3)\dots$ for example, then 2's factor out of $2n,\ 2n-2$ (the even terms) in the numerator.
There is even a general form I ran through Maple. It actually gave a closed form for it as well, but I would have no idea how to derive it.
$\displaystyle \sum_{k=0}^{n}\frac{(2n-k)!\cdot 2^{k}\cdot (k+m)!}{(n-k)!\cdot k!}$.
In the above case, m=0. But, apparently there is a closed form for $m\in \mathbb{N}$ as well.
Maple gave the solution in terms of Gamma: $\displaystyle \frac{\Gamma(1+m)4^{n}\Gamma(n+1+\frac{m}{2})}{\Gamma(1+\frac{m}{2})}$
Would anyone have an idea how to proceed with this? Perhaps writing it in terms of Gamma and using some identities? Thanks very much.
| This identity can be re-written as
$$\sum_{k=0}^n {2n-k \choose n-k} 2^k = 4^n.$$
Start from
$${2n-k \choose n-k} =
\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+z)^{2n-k}}{z^{n-k+1}} \; dz.$$
This yields for the sum
$$\frac{1}{2\pi i} \int_{|z|=\epsilon}
\sum_{k=0}^n \frac{(1+z)^{2n-k}}{z^{n-k+1}} 2^k \; dz
\\ = \frac{1}{2\pi i} \int_{|z|=\epsilon}
\frac{(1+z)^{2n}}{z^{n+1}}
\sum_{k=0}^n \frac{(2z)^{k}}{(1+z)^k} \; dz.$$
We can extend the sum to infinity because when $n-k+1 \le 0$ or $k \ge n+1$ the integrand of the defining integral of the binomial coefficient is an entire function and the integral is zero. This yields
$$\frac{1}{2\pi i} \int_{|z|=\epsilon}
\frac{(1+z)^{2n}}{z^{n+1}}
\sum_{k=0}^\infty \frac{(2z)^{k}}{(1+z)^k} \; dz
\\ = \frac{1}{2\pi i} \int_{|z|=\epsilon}
\frac{(1+z)^{2n}}{z^{n+1}} \frac{1}{1-2z/(1+z)} \; dz
\\ = \frac{1}{2\pi i} \int_{|z|=\epsilon}
\frac{(1+z)^{2n+1}}{z^{n+1}} \frac{1}{1-z} \; dz.$$
Thus the value of the integral is given by
$$[z^n] \frac{1}{1-z} (1+z)^{2n+1}
= \sum_{q=0}^n {2n+1\choose q} = \frac{1}{2} 2^{2n+1} = 4^n.$$
A trace as to when this method appeared on MSE and by whom starts at this
MSE link.
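A quick numeric check of the original identity, in case anyone wants to verify (a throwaway Python snippet):
```python
from math import factorial

for n in range(8):
    lhs = sum(factorial(2*n - k) * 2**k // factorial(n - k) for k in range(n + 1))
    assert lhs == 4**n * factorial(n)
print("identity verified for n = 0..7")
```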
|
Side-stepping contradiction in the proof of: if $ab = 0$ then $a$ or $b$ is $0$. Suppose we need to show a field has no zero divisors - that is, prove the title - then we proceed exactly as in the one common argument for the reals (unsurprisingly, as they are themselves a field).
What I want to know is; how do we prove this not by contradiction?
I was talking to some philosophers about - again not so surprising - logic and they seemed to have an issue with argument by contradiction. I admit I'm not a huge fan of it myself, though the gist was that classical logic (where proof by contradiction / the law of excluded middle is valid) is a really, really, really strong form of logic; a much weaker type is intuitionistic logic (I only caught the name), in which they said this argument does not hold.
Now, if we take something like the field axioms - or the reals (e.g. order in the bag...) - how can we prove in this new logic that there are no zero divisors. Or, more precisely how can we avoid contradiction?
| Let $m,n\in\mathbb{N}$ such that $m,n>0$ (I subscribe to $0\in\mathbb{N}$ but it really doesn't matter here). It can be shown by induction that $mn\neq 0$. That is, that $mn>0$.
Now, let $a,b\in\mathbb{Z}$. If $ab=0$, then $|ab|=|0|=0$. Therefore $|ab|>0$ implies $ab\neq 0$.
Suppose $a,b\neq 0$. Then $|a|=m>0$ and $|b|=n>0$. By the above, $|a|\,|b|=|ab|=mn>0$. Hence $ab\neq 0$. Contrapositively, if $ab=0$, we have $a=0$ or $b=0$.
|
Clarifying the definition of "unstable" I would appreciate a definition clarification.
If a numerical method is "unstable", does it mean that if we introduce a small random error in one of the steps, the error would be magnified greatly after further steps? Is this true for all unstable algorithms, or are there some where the random error is never made significant, say w.r.t. the error of the method itself?
| We say a method is stable when it is capable of controlling errors introduced in each computation. Stability allows the method to converge to a certain solution. Here's a simple example:
\begin{equation}
u_t = -u_x
\end{equation}
Suppose a set of equally spaced nodes on the $x$-axis in 1D. Let $U_i$ denote the approximate value of the function $u(x)$ at the $i$th node. We use Forward Euler in time and Centered Differences in space:
\begin{equation}
U_i^{n+1} = U_i^n - \frac{\Delta t}{2\,\Delta x} (U^n_{i+1} - U^n_{i-1})
\end{equation}
Now assume that at some iteration (maybe even the initial condition) the numerical solution has an oscillatory form. For simplicity, assume $U_i = (-1)^i x_i$. Now you can clearly see that no matter what values of $\Delta t, \Delta x$ you choose, negative values of the function will get smaller and positive values will grow larger, which eventually leads to unacceptable results.
This is only a simple explanation. For further reading I advise you to take a look at Computational Fluid Dynamics Vol. I, by Hoffman and Chiang. As you can see, this doesn't necessarily happen only with random errors introduced intentionally; it can happen whenever there are errors or oscillations in the solution.
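To see the blow-up concretely, here is a minimal NumPy sketch of that very scheme (periodic boundary; the grid size and step counts are arbitrary choices of mine):
```python
import numpy as np

# Forward Euler in time, centered differences in space, for u_t = -u_x.
# This FTCS scheme is unconditionally unstable for pure advection.
nx = 100
dx = 1.0 / nx
dt = 0.2 * dx                       # even a "safe-looking" small step blows up
x = np.arange(nx) * dx
u = np.exp(-100 * (x - 0.5) ** 2)   # smooth initial bump
for step in range(1, 2001):
    u = u - dt / (2 * dx) * (np.roll(u, -1) - np.roll(u, 1))
    if step % 500 == 0:
        print(step, np.max(np.abs(u)))  # the amplitude keeps growing
```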
|
An inequality about maximal function Consider the function on $\mathbb R$ defined by
$$f(x)=\begin{cases}\frac{1}{|x|\left(\log\frac{1}{|x|}\right)^2} & |x|\le \frac{1}{2}\\
0 & \text{otherwise}\end{cases}$$
Now suppose $f^*$ is the maximal function of $f$, then I want to show the inequality $f^*(x)\ge \frac{c}{|x|\left(\log\frac{1}{|x|}\right)}$ holds for some $c>0$ and all $|x|\le \frac{1}{2}$.
But I don't know how to prove it. Can anyone give me some hints?
Thanks very much.
| By definition
$$
f^*(x)=\sup_{B\in \text{Balls}(x)}\frac{1}{\mu(B)}\int\limits_B |f(y)|d\mu(y)\qquad(1)
$$
where $\text{Balls}(x)$ is the set of all closed balls containing $x$. We can express $f^*$ in another form:
$$
f^*(x)=\sup_{\alpha\leq x\leq\beta}\frac{1}{\beta-\alpha}\int\limits_{\alpha}^{\beta} |f(y)|d\mu(y)
$$
Consider $0< x\leq1/2$. Obviously
$$
f^*(x)=\sup_{\alpha\leq x\leq\beta}\frac{1}{\beta-\alpha}\int\limits_{\alpha}^{\beta} |f(y)|d\mu(y)\geq\frac{1}{x-0}\int\limits_{0}^{x}|f(y)|d\mu(y)=-\frac{1}{x\log x}
$$
Thus $f^*(x)\geq\frac{1}{x\log\left(\frac{1}{x}\right)}$ for $0<x\leq 1/2$. Since $f$ is even, so is $f^*$; hence the inequality
$$
f^*(x)\geq\frac{1}{|x|\log \left(\frac{1}{|x|}\right)}
$$
holds for all $-1/2\leq x\leq 1/2$ (so one may take $c=1$).
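For skeptics, the antiderivative used in the computation above can be checked symbolically; a quick SymPy sketch:
```python
import sympy as sp

y = sp.symbols('y', positive=True)
F = 1 / sp.log(1 / y)                  # candidate antiderivative on (0, 1/2]
f = 1 / (y * sp.log(1 / y) ** 2)
print(sp.simplify(sp.diff(F, y) - f))  # 0
```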
|
Meaning of $f:[1,5]\to\mathbb R$ I know $f:[1,5]\to\mathbb R$ means $f$ is a function from $[1,5]$ to $\mathbb R$. I am just a bit unclear now on the exact interpretation of "to $\mathbb R$". Is $1\le x\le 5$ the domain? And is $\mathbb R$ the codomain (or image)?
Is my interpretation in words ---$f$ is a function which takes a number $1\le x\le 5$, and maps it onto the real numbers $\mathbb R$, correct?
Suppose we take $f=x^2$ and $x=2$, does $f:[1,5]\to\mathbb R$ hold? So thus the function gives us $4$, which is $\in \mathbb R$.
| The notation
$$
f: [1,5] \rightarrow \mathbb{R}
$$
means that $f$ is a function whose domain is taken to be the interval $[1,5]$ and whose codomain is $\mathbb{R}$ (i.e. all the outputs of $f$ fall into $\mathbb{R}$). It makes no claims about surjectivity or injectivity; you must analyze the function itself to decide that.
To address your example, you define
$$
f: [1,5] \rightarrow ?
$$
$$
f(x) = x^2,
$$
where the question mark means we aren't sure what to put there yet. Since the square of any number in the interval $[1,5]$ is a real number, $\mathbb{R}$ is indeed an acceptable codomain for $f$, so we could write
$$
f: [1,5] \rightarrow \mathbb{R}.
$$
Notice that, as we have defined it, $f$ is not surjective, since some numbers in $\mathbb{R}$ cannot be reached by $f$ (like 36).
Looking a little closer, we might notice that the square of any number in the interval $[1,5]$ falls in the interval $[1,25]$. Thus, we would also be correct in writing
$$
f: [1,5] \rightarrow [1,25].
$$
After this change in codomain, $f$ becomes surjective, since every number in $[1,25]$ can be reached by $f$.
|
Proof of $f \in C_C(X)$ where $X$ is a metric space implies $f$ is uniformly continuous Can you tell me if the following proof is correct?
Claim:
If $f$ is a continuous and compactly supported function from a metric space $X$ into $\mathbb{R}$ then $f$ is uniformly continuous.
Proof:
The proof is in two parts.
First we want to show that $f$ is uniformly continuous on $K := \operatorname{supp}{f}$:
Let $\varepsilon > 0$.
Because $f$ is continuous we have that for each $x$ in $K$ there is a $\delta_x$ such that for all $y$ with $d(x,y) < 2 \delta_x$ we have $|f(x) - f(y)| < \varepsilon$ and because $\{ B(x, \frac{\delta_x}{2}) \}_{x \in K}$ is an open cover of $K$ there is a finite subcover which we denote $\{ B(x_i, \frac{\delta_i}{2}) \}_{i=1}^n$.
Define $\delta := \min_i \frac{\delta_i}{2}$ and let $x$ and $y$ be any two points in $K$ with $d(x,y) < \delta$. $\{ B(x_i, \frac{\delta_i}{2}) \}_{i=1}^n$ is a cover so there exists an $i$ such that $x$ is in $B(x_i, \frac{\delta_i}{2})$ which means that $d(x,x_i) < \frac{\delta_i}{2}$. Then $d(x_i ,y) \leq d(x_i ,x) + d(x,y) < \frac{\delta_i}{2} + \delta \leq \delta_i$ hence $y$ is also in $B(x_i, \delta_i)$.
Since $d(x_i,y) < \delta_i$ and $d(x, x_i) < \delta_i$ we have $|f(x) - f(y)| \leq |f(x) - f(x_i)| + |f(x_i) - f(y)| < 2 \varepsilon$.
Next we want to show that if $f$ is uniformly continuous on $K$ then it is uniformly continuous on all of $X$:
Let $\varepsilon > 0$. For any two points $x$ and $y$ we're done if either both are in $K$ or both are outside $K$ so let $x \in X \setminus K$ and $y \in K$ with $d(x,y) < \delta$. Then there is an $i$ such that $y$ is in $B(x_i, \frac{\delta_i}{2})$. Then $d(x,x_i) \leq d(x,y) + d(y,x_i) < \delta_i$ and hence $|f(x) - f(y)| \leq |f(x) - f(x_i)| + |f(x_i) - f(y)| < 2 \varepsilon$.
Is it necessary to prove this in two parts, or is the second part "obvious" and can it be left out?
Thanks for your help.
| That looks good except for the correction that t.b. pointed out. In the spirit of Henning Makholm's comment, here is a "canned theorem" approach.
A continuous function on a compact metric space is uniformly continuous, so $f|_K$ is uniformly continuous. Let $\varepsilon>0$ be given. Then $K_\varepsilon:=\{x:|f(x)|\geq \varepsilon\}$ is a closed subset of $K$, hence compact, and $\{x:f(x)=0\}$ is a closed set disjoint from $K_\varepsilon$, so there is a positive distance $\delta_1$ between $K_\varepsilon$ and $\{x:f(x)=0\}$. Let $\delta_2$ be such that if $x$ and $y$ are in $K$ and $d(x,y)<\delta_2$, then $|f(x)-f(y)|<\varepsilon$. Let $\delta=\min\{\delta_1,\delta_2\}$.
If $x$ and $y$ are in $X$ and $d(x,y)<\delta$, then:
*
*$x$ and $y$ are both in $K$, and since $d(x,y)<\delta_2$, we have $|f(x)-f(y)|<\varepsilon$; or
*one of $x$ or $y$ is not in $K$. WLOG suppose $x$ is not in $K$. Then $f(x)=0$, and since $d(x,y)<\delta_1$, $y$ is not in $K_\varepsilon$, meaning $|f(x)-f(y)|=|f(y)|<\varepsilon$. $\square$
|
A question on the sylow subgroups of a normal subgroup Let $H$ be a normal subgroup of a finite group $G$, and let $p$ be a prime number dividing $|H|$. If $P$ is a $p$-Sylow subgroup of $H$, how can I prove that $G=HN_G(P)$, where $N_G(P)$ is the normalizer of $P$ in $G$?
| If $g\in G$, then $gHg^{-1}=H$, and so $gPg^{-1}\subseteq H$. Since $gPg^{-1}$ is a $p$-Sylow subgroup of $H$, by Sylow's Theorems we know that $gPg^{-1}$ is conjugate to $P$ in $H$. That is, there exists $h\in H$ such that $hPh^{-1} = gPg^{-1}$. Therefore, $g^{-1}hPh^{-1}g = P$, so $h^{-1}g\in N_G(P)$.
|
Dedekind domain with a finite number of prime ideals is principal I am reading a proof of this result that uses the Chinese Remainder Theorem on (the finite number of) prime ideals $P_i$. In order to apply CRT we should assume that the prime ideals are coprime, i.e. the ring is equal to $P_h + P_k$ for $h \neq k$, but I can't see it. How does it follow?
| Hint $\ $ Nonzero prime ideals are maximal, hence comaximal $\, P + Q\ =\ 1\, $ if $\, P\ne Q.$
Another (perhaps more natural) way to deduce that semi-local Dedekind domains are PIDs is to exploit the local characterization of invertibility of ideals. This yields a simpler yet more general result, see the theorem below from Kaplansky, Commutative Rings. A couple theorems later is the fundamental result that a finitely generated ideal in a domain is invertible iff it is locally principal. Therefore, in Noetherian domains, invertible ideals are global generalizations of principal ideals. To best conceptually comprehend such results it is essential to understand the local-global perspective.
|
software for algebraic simplifying expressions I have many huge algebraic expressions such as:
$$\frac{8Y}{1+x}-\frac{(1-Y)}{x}+\frac{Kx(1+5x)^{3/5}}{2}$$
where $\ Y=\dfrac{Kx(1+x)^{n+2}}{(n+4)(1+5x)^{2/5}}+\dfrac{7-10x-x^2}{7(1+x)^2}+\dfrac{Ax}{(1+5x)^{2/5}(1+x)^2}\ $ and $A,n$ are constants.
To simplify these expressions by hand is taking me a lot of time and there is also the danger of making a mistake. I am looking for free software on the internet with which I can simplify these expressions. Does anyone have any recommendations?
| Note that if you set $\rm\ z = (5x+1)^{1/5}\ $ then your computations reduce to rational function arithmetic combined with the rewrite rule $\rm\: z^5\ \to\ 5x+1\ $ with the following expressions
$$\frac{8Y}{1+x}-\frac{(1-Y)}{x}+\frac{Kxz^3}{2}$$
where $\ Y\ =\ \dfrac{Kx(1+x)^{n+2}}{(n+4)z^2}+\dfrac{7-10x-x^2}{7(1+x)^2}+\dfrac{Ax}{(z(1+x))^2}\ $ and $A,n$ are constants.
This is so simple that it can be done by hand. When using computer algebra systems you need to be sure that they can effectively compute with algebraic functions, or that they can effectively handle said rewrite rule implementing this simple special case. For example, in Macsyma (or Maxima, e.g. in Sage) one may use $\rm\:radcan\:$ (RADical CANonicalize) or, alternatively, set $\rm\:algebraic:true\:$ and do $\rm\:tellrat(\:z^5 =\: 5*x+1)\ $ and then employ the $\rm\:rat\:$ function to normalize such "rational" expressions.
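If Maxima isn't at hand, SymPy (free, Python) can carry out the same rewrite, though it may not match radcan's output exactly; a sketch using the expressions above:
```python
import sympy as sp

x, A, K, n = sp.symbols('x A K n', positive=True)
z = (1 + 5*x) ** sp.Rational(1, 5)   # plays the role of the rewrite z^5 -> 5x+1
Y = (K*x*(1 + x)**(n + 2) / ((n + 4)*z**2)
     + (7 - 10*x - x**2) / (7*(1 + x)**2)
     + A*x / (z**2 * (1 + x)**2))
expr = 8*Y/(1 + x) - (1 - Y)/x + K*x*z**3/2
print(sp.simplify(sp.together(expr)))
```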
|
Isomorphism of quotient modules implies isomorphism of submodules? Let $A$ be a commutative ring, $M$ an $A$-module and $N_1, N_2$ two submodules of $M$.
If we have $M/N_1 \cong M/N_2$, does this imply $N_1 \cong N_2$?
This seems so trivial, but I just don't see a proof... Thanks!
| The implication is false for all commutative non-zero rings $A$.
Indeed, just take $M=\bigoplus_{i=0}^{\infty} A$, $N_1=A\oplus 0\oplus 0\oplus\cdots$ and $N_2=A\oplus A\oplus 0\oplus 0\oplus\cdots$.
Since $N_1$ is isomorphic to $A$ and $N_2$ is isomorphic to $A^2$, they are not isomorphic.
However $M/N_1$ and $M/N_2$ are isomorphic because they are both isomorphic to $M$.
[To see that $A$ and $A^2$ are not isomorphic as $A$-modules the standard trick is to reduce to the case where $A$ is a field by tensoring with $A/\mathfrak m$, where $\mathfrak m$ is some maximal ideal in $A$]
|
Odds of guessing suit from a deck of cards, with perfect memory While teaching my daughter why drawing to an inside straight is almost always a bad idea, we stumbled upon what I think is a far more difficult problem:
You have a standard 52-card deck with 4 suits and I ask you to guess the suit of the top card. The odds of guessing the correct suit are obviously 1 in 4. You then guess again, but the first card is not returned to the deck. You guess a suit other than the first drawn and the odds are 13/51, somewhat better than 1 in 4.
Continuing through the deck, your odds continually change (never worse than 1 in 4, and definitely 100% for the last card). What are your overall odds for any given draw over the course of 52 picks?
Can this be calculated? Or do you need to devise a strategy and write a computer program to determine the answer? Do these type of problems have a name?
Dad and to a much less extent daughter, await your thoughts!
|
overall odds for any given draw over the course of 52 picks
If I rephrased your question as "how much should you be willing to pay to play the game where I will show you $n$ cards out of 52 and if you guess the next remaining card then I give you a dollar" would an answer to this question be suitable? Just to be clear, in this game you would have to pay up front before you see any cards (although you know the number $n$ beforehand), and you should be willing to pay up to a quarter when $n=0$ and up to a dollar when $n=52-1$. This is not really an answer, I just wanted to understand the question.
Alternatively, would you allow the player to see the $n$ cards (rather than just the number $n$) before deciding how much to pay? Or on the other hand would you not even allow $n$ to be known, but you would require the player to decide how much to pay and then $n$ is chosen uniformly at random between 0 and 51 at the beginning of the game?
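In the meantime, here is a Monte Carlo sketch of one natural strategy - always guess whichever suit has the most cards remaining (perfect memory assumed); the game details are my reading of the question:
```python
import random
from collections import Counter

def play_once():
    deck = [s for s in "CDHS" for _ in range(13)]
    random.shuffle(deck)
    remaining = Counter({s: 13 for s in "CDHS"})
    correct = 0
    for card in deck:
        guess = max(remaining, key=remaining.get)  # most plentiful suit
        correct += (guess == card)
        remaining[card] -= 1
    return correct

trials = 20_000
print(sum(play_once() for _ in range(trials)) / trials)  # mean correct out of 52
```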
|
Probability of an odd number in 10/20 lotto Say you have a lotto game 10/20, which means that 10 balls are drawn from 20.
How can I calculate the odds that the lowest drawn number is odd (and likewise, the odds that it's even)?
So a detailed explanation:
we have numbers 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 and 20
and the drawn numbers were for example 3, 5, 8, 11, 12, 13, 14, 15, 18 and 19
So we see that the lowest number is 3, which is odd.
So, as stated above, can you help me in finding out how to calculate such probability?
| The total number of outcomes is ${20 \choose 10}$. Now count the total number of favorable outcomes:
*
*outcomes with lowest element 1 : ${19 \choose 9}$ ;
*outcomes with lowest element 3 : ${17 \choose 9}$ ;
*outcomes with lowest element 5 : ${15 \choose 9}$ ;
*outcomes with lowest element 7 : ${13 \choose 9}$ ;
*outcomes with lowest element 9 : ${11 \choose 9}$ ;
*outcomes with lowest element 11 : ${9 \choose 9} = 1$ ;
So the probability is $$\sum_{k\in \{9, 11, 13, 15, 17, 19 \}} { {k \choose 9} \over {20 \choose 10}} = {30616 \over 46189} \simeq 0.662842.$$
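A two-line check of the count and the final probability (using Python's math.comb):
```python
from math import comb

total = comb(20, 10)
favorable = sum(comb(k, 9) for k in (9, 11, 13, 15, 17, 19))
print(favorable, total, favorable / total)  # 122464 184756 0.66284...
```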
|
Is every Mersenne prime of the form : $x^2+3 \cdot y^2$? How to prove or disprove the following statement:
Conjecture :
Every Mersenne prime number can be uniquely written in the form : $x^2+3 \cdot y^2$ ,
where $\gcd(x,y)=1$ and $x,y \geq 0$
Since $M_p$ is an odd number it follows that : $M_p \equiv 1 \pmod 2$
According to Fermat little theorem we can write :
$2^p \equiv 2 \pmod p \Rightarrow 2^p-1 \equiv 1\pmod p \Rightarrow M_p \equiv 1 \pmod p$
We also know that :
$2 \equiv -1 \pmod 3 \Rightarrow 2^p \equiv (-1)^p \pmod 3 \Rightarrow 2^p-1 \equiv -1-1 \pmod 3 \Rightarrow$
$\Rightarrow M_p \equiv -2 \pmod 3 \Rightarrow M_p \equiv 1 \pmod 3$
So , we have following equivalences :
$M_p \equiv 1 \pmod 2$ , $M_p \equiv 1 \pmod 3$ and $M_p \equiv 1 \pmod p$ , therefore for $p>3$
we can conclude that : $ M_p \equiv 1 \pmod {6 \cdot p}$
On the other hand : If $x^2+3\cdot y^2$ is a prime number greater than $5$ then :
$x^2+3\cdot y^2 \equiv 1 \pmod 6$
Proof :
Since $x^2+3\cdot y^2$ is a prime number greater than $3$ it must be of the form $6k+1$ or $6k-1$ .
Let's suppose that $x^2+3\cdot y^2$ is of the form $6k-1$:
$x^2+3\cdot y^2=6k-1 \Rightarrow x^2+3 \cdot y^2+1 =6k \Rightarrow 3 \mid x^2+3 \cdot y^2+1 \Rightarrow 3 \mid x^2+1$ (since $3 \mid 3 \cdot y^2$) , but :
$x^2 \not\equiv -1 \pmod 3 \Rightarrow 3 \nmid x^2+1 \Rightarrow 3 \nmid x^2+3 \cdot y^2+1 \Rightarrow 6 \nmid x^2+3 \cdot y^2+1$ , a contradiction , therefore :
$x^2+3\cdot y^2$ is of the form $6k+1$ , so : $x^2+3\cdot y^2 \equiv 1 \pmod 6$
We have shown that : $M_p \equiv 1 \pmod {6 \cdot p}$, for $p>3$ and $x^2+3\cdot y^2 \equiv 1 \pmod 6$ if $x^2+3\cdot y^2$ is a prime number greater than $5$ .
This result is a necessary condition, but it seems that I am not much closer to the solution of the conjecture than I was at the beginning of my reasoning ...
| (Outline of proof that, for prime $p\equiv 1\pmod 6$, there is one positive solution to $x^2+3y^2=p$.)
It helps to recall the Gaussian integer proof that, for a prime $p\equiv 1\pmod 4$, $x^2+y^2=p$ has an integer solution. It starts with the fact that there is an $a$ such that $a^2+1$ is divisible by $p$, then uses unique factorization in the Gaussian integers to show that there must a common (Gaussian) prime factor of $p$ and $a+i$, and then that $p$ must be the Gaussian norm of that prime factor.
By quadratic reciprocity, we know that if $p\equiv 1\pmod 6$ is prime, then $a^2\equiv -3\pmod p$ has a solution.
This means that $a^2+3$ is divisible by $p$. If we had unique factorization in $\mathbb Z[\sqrt{-3}]$ we'd have our result, since there must be a common prime factor of $p$ and $a+\sqrt{-3}$, and it would have to have norm $p$, and we'd be done.
But we don't have unique factorization in $\mathbb Z[\sqrt{-3}]$, only in the ring $R$ of algebraic integers in $\mathbb Q[\sqrt{-3}]$, which are all of the form $\frac{a+b\sqrt{-3}}2$ where $a\equiv b\pmod 2$.
However, this isn't really a big problem, because for any element $r\in R$, there is a unit $u\in R$ and an element $z\in\mathbb Z[\sqrt{-3}]$ such that $r=uz$.
In particular, then, for any $r\in R$, the norm $N(r)=z_1^2+3z_2^2$ for some integers $z_1,z_2\in \mathbb Z$.
As with the Gaussian proof for $x^2+y^2$, we can then use this to show that there is a solution to $x^2+3y^2=p$. To show uniqueness, you need to use properties of the units in $R$.
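For concreteness, a brute-force search for the representation, run on the first few Mersenne primes (a throwaway sketch, not part of the proof):
```python
from math import isqrt

def rep(p):
    """Find (x, y) with x^2 + 3*y^2 == p, if one exists."""
    for y in range(isqrt(p // 3) + 1):
        r = p - 3 * y * y
        x = isqrt(r)
        if x * x == r:
            return x, y
    return None

for q in (3, 5, 7, 13, 17, 19):   # exponents of known Mersenne primes
    p = 2**q - 1
    print(p, rep(p))              # e.g. 7 -> (2, 1), 31 -> (2, 3), 127 -> (10, 3)
```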
|
Do "imaginary" and "complex" angles exist? During some experimentation with sines and cosines, its inverses, and complex numbers, I came across these results that I found quite interesting:
$ \sin ^ {-1} ( 2 ) \approx 1.57 - 1.32 i $
$ \sin ( 1 + i ) \approx 1.30 + 0.63 i $
Does this mean that there is such a thing as "imaginary" or "complex" angles, and if they do, what practical use do they serve?
| A fundamental equation of trigonometry is $x^2+y^2 = 1$, where $x$ is the "adjacent side" and $y$ the "opposite side".
If you plot the curve outside the real domain - for example at $x=1.5$, where $y$ is imaginary - you get a shape situated in a plane perpendicular to the $x,y$-plane and containing the $x$-axis. This shape is a hyperbola.
So you have two planes, one for the circle, and one for the hyperbola.
The "$z$-axis" (imaginary) where the hyperbola is plotted correspond to the "$\sinh$" and $x$ is the "$\cosh$" once the $R = 1$. Note that the $\sinh$ is situated in a plane $90$ degrees of the $x,y$-plane.
Observe that $$\sin iy = i \sinh y$$ is in accord with was explained above.
The geometric interpretation is easy.
It's valuable to remember that a circular angle can be measured by twice the area of its sector. Likewise, a hyperbolic angle can be measured by twice the area bounded by the radius and the arc of the hyperbola.
See Wikipedia.
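The question's numbers can be reproduced directly with Python's cmath, which also illustrates the $\sin(iy)=i\sinh y$ identity mentioned above:
```python
import cmath, math

print(cmath.asin(2))       # ~ (1.5708 - 1.3170j)
print(cmath.sin(1 + 1j))   # ~ (1.2985 + 0.6350j)

y = 0.7                    # sin(iy) = i*sinh(y)
print(cmath.sin(1j * y), 1j * math.sinh(y))
```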
|
question on fourier transform. I ask myself what
$$
{\mathscr F}^{-1}( e^{it\xi} ({\mathscr F} \phi)'(\xi) )(s)
$$
is. If it was just about
$$
{\mathscr F}^{-1}( e^{it\xi} ({\mathscr F} \phi)(\xi) )(s)
$$
it would be clear (a shift by $t$), the same is with
$$
{\mathscr F}^{-1}( ({\mathscr F} \phi)'(\xi) )(s),
$$
which gives a multiplication by $-is$.
But what is about the combination of exponential and derivative? Any hints? Thanks, Eric
| Why don't you just compute it?
$$
{\mathscr F}^{-1}( e^{it\xi} ({\mathscr F} \phi)'(\xi) )(s)={\mathscr F}^{-1}( ({\mathscr F} \phi)'(\xi) )(s+t)=-i(s+t)\phi(s+t)
$$
|
Question about implication and probability Let $A, B$ be two events. My question is as follows:
Does the following relation hold:
$$A \to B \Rightarrow \Pr(A) \le\Pr(B) $$
And why?
In terms of intuition, the fact that some event $A$ implies some event $B$ means that whenever $A$ happens, $B$ happens. But if the event $B$ happens, we might not have event $A$. So in other words, we have that $\mathbb P(A) \le \mathbb P(B)$ because the probability that $A$ happens is also "the probability that $B$ happens because of $A$", which is less than "the probability that $B$ happens" with no constraints on it. (Note that there are cases with equality, for instance when $A \Rightarrow B$ and $B \Rightarrow A$.)
From a theoretical point of view though (this is a little more advanced for a beginner course in probability), the expression "$A \Rightarrow B$" doesn't make much sense, since the way probabilities are defined, "$\mathbb P$" is actually a function from something we call a space of possible events to the interval of real numbers $[0,1]$. The possible events are sets, and the right way to say $A \Rightarrow B$ in this system is that the set $A$ is included in the set $B$, i.e. $A \subseteq B$. (You need to think about those sets as "a regrouping of possibilities"; for instance, if the event space is all subsets of $\{1,2,3,4,5,6\}$ in the context where we roll a fair die, an example of an event would be "the roll is even" or "the roll is a $1$ or a $4$".) Since in the construction of probabilities one of the most common axioms is that a probability function is countably additive, or in other words
$$
\mathbb P \left( \bigcup_{i=0}^{\infty} A_i \right) = \sum_{i=0}^{\infty} \,\mathbb P (A_i)
$$
when the sets $A_i$ are pairwise disjoint, and another axiom would be that $P(\varnothing) = 0$, we can deduce from that that
\begin{align}
\mathbb P(B) = \mathbb P( (A \cap B) \cup (A^c \cap B))
& = \mathbb P( (A \cap B) \cup (A^C \cap B) \cup \varnothing \cup \dots) \\
& = \mathbb P(A \cap B) + \mathbb P(A^c \cap B) + \mathbb P (\varnothing) + \mathbb P (\varnothing) + \dots \\
& = \mathbb P(A) + \mathbb P(A^c \cap B) + 0 + 0 + \dots \\
& \ge \mathbb P(A).
\end{align}
Note that the reason I added this is because I used the axiom "countably additive" and not "finitely additive". The way to show that countably additive implies finitely additive is by adding plenty of $\varnothing$'s after the finitely many sets.
There are many possible axioms you can add/choose that are equivalent though. I just took my favorite.
Hope that helps,
|
Motivation for a particular integration substitution In an old Italian calculus problem book, there is an example presented:
$$\int\frac{dx}{x\sqrt{2x-1}}$$
The solution given uses the strange substitution $$x=\frac{1}{1-u}$$
Some preliminary work in trying to determine the motivation as to why one would come up with such an odd substitution yielded a right triangle with hypotenuse $x$ and leg $x-1;$ determining the other leg gives $\sqrt{2x-1}.$ Conveniently, this triangle contains all of the "important" parts of our integrand, except in a non-convenient manner.
So, my question is two-fold:
(1) Does anyone see why one would be motivated to make such a substitution?
(2) Does anyone see how to extend the work involving the right triangle to get at the solution?
| This won't answer the question, but it takes the geometry a bit beyond where the question left it. Consider the circle of unit radius in the Cartesian plane centered at $O=(0,1)$. Let $A=(1,0)$ and $B=(x,0)$. Let $C=(1,\sqrt{2x-1})$. Your right triangle is $ABC$, with angle $\alpha$ at vertex $B$. Another right triangle is $OAC$, with angle $\beta$ at $O$. Then
$$
u=\frac{x-1}{x}= \cos\alpha=\sin(\pi/2-\alpha)=\sin\angle BCA$$
and
$$
\sqrt{2x-1}=\tan\beta=\cot(\pi/2-\beta)=\cot\angle OCA.
$$
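As a cross-check, the integral itself can be verified symbolically (a SymPy sketch; the antiderivative it returns may differ from the book's by a constant):
```python
import sympy as sp

x = sp.symbols('x', positive=True)
integrand = 1 / (x * sp.sqrt(2*x - 1))
F = sp.integrate(integrand, x)
print(F)                                       # e.g. 2*atan(sqrt(2*x - 1))
print(sp.simplify(sp.diff(F, x) - integrand))  # 0
```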
|
quadratic reciprocity Happy new year!
I have this statement:
"By quadratic reciprocity there are the integers $a$ and $b$ such that $(a,b)=1$, $(a-1,b)=2$, and all prime $p$ with $p\equiv a$ (mod $b$) splits in $K$ (where $K$ is a real quadratic field)".
I have tried with many properties of quadratic reciprocity but couldn't even get to the first conclusion.
Thank you very much in advance, for any idea or advice for approach the problem
| Edited to address some bizarrely horrible errors in the first version.
Here's a simple case from which it should not be too hard to generalize. Suppose that $K$ is a quadratic field of prime discriminant $q$. Since $q\equiv 1\pmod{4}$, note that a prime $p$ splits in $K$ if and only if $\left(\frac{p}{q}\right)=1$.
Let $b=2q$ and choose an integer $a\not\equiv 1\pmod{q}$ which is an odd quadratic residue mod $q$. Such a thing exists since there are $\frac{q-1}{2}\geq 2$ (since $q\geq 5$) residues mod $q$, and if $a'$ is any not-congruent-to-1 residue mod $q$, then one of $a=a'$ and $a=q+a'$ gives you an odd residue. For example, when $q=5$, take $a'=4$ and then $a=9$. (Actually, $q=5$ is the only example where you have to add $q$: for all other $q$ there is an odd quadratic residue $a$ in the range $2\leq a\leq q-1$.)
Now $a$ is odd and not divisible by $q$, so $(a,b)=(a,2q)=1$, and since $a\not\equiv 1\pmod{q}$, we also have $(a-1,b)=2$. Finally, if $p\equiv a\pmod{b}$, then $p\equiv a\pmod{q}$, so $\left(\frac{p}{q}\right)=\left(\frac{1}{q}\right)=1$, and $p$ splits in $K$.
Just some minor commentary on where this came from: your $\gcd$ conditions force $b$ to be even, and for a congruence-mod-$b$ condition to determine splitting in $K$, your $b$ needs to be a multiple of the discriminant (in this case, $q$), and preferably as small a multiple as possible to prevent extra congruence classes from slipping in. The value $b=2q$ satisfies all of these requirements, and since one must clearly choose $a$ to be an odd quadratic residue mod $q$, you're left with essentially the above construction.
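The construction is easy to carry out explicitly; a small sketch that finds $a$ and $b$ for a given prime $q\equiv 1\pmod 4$ and verifies the gcd conditions:
```python
from math import gcd

def find_a_b(q):
    """q an odd prime, q >= 5, q = 1 mod 4: return (a, b) as in the construction."""
    residues = sorted({pow(t, 2, q) for t in range(1, q)} - {1})
    a0 = residues[0]
    a = a0 if a0 % 2 == 1 else a0 + q   # make the residue odd
    b = 2 * q
    assert gcd(a, b) == 1 and gcd(a - 1, b) == 2
    return a, b

print(find_a_b(5))    # (9, 10)
print(find_a_b(13))   # (3, 26)
```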
|