Pythagorean Theorem Proof Without Words Your punishment for awarding me a "Nice Question" badge for my last question is that I'm going to post another one from Proofs without Words.
How does the attached figure prove the Pythagorean theorem?
P.S. No, I will not go through the entire book page-by-page asking for help.
P.P.S. No, I am not a shill for the book. Just a curious math student.
| If you have a diameter of a circle and a point on the circle, the length of the altitude from that point to the diameter is the geometric mean of the lengths of the two parts of the diameter. Perhaps drawing in the two chords from the top point on the circle to the endpoints of the diameter and looking for similar triangles would help.
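A worked version of the hint (my notation, not the book's): write $p$ and $q$ for the two parts of the diameter and $h$ for the altitude. By Thales' theorem the angle at the point on the circle is right, so the altitude splits the big right triangle into two similar triangles, giving
$$\frac{h}{p}=\frac{q}{h}\quad\Longrightarrow\quad h^2=pq,$$
which is the geometric-mean relation the hint refers to.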
|
Groups of symmetries What are the symmetries of a solid rectangular box whose length, width and height are all different?
I get a group of order 4 by rotation 180, flipping along a vertical and horizontal axis and itself.
| Your contention that you'd have a cyclic subgroup of order $4$ is quite natural, but it needs a little rethinking. You are considering the symmetries of a mattress-like object (where a symmetry means any rigid motion of 3-space that moves a copy of the mattress and places the copy back on the original), and a rotation of such a "scalene" object by $\frac{\pi}{2}$ will not keep the object in the same place. Rotation by $\pi$, however, is a valid symmetry.
Technically, the mattress group is called the Klein $4$-group.
$H$ stands for a horizontal flip about the axis parallel to edge $13$ through the midpoint of edge $12$; $V$ for a vertical flip about the axis parallel to edge $12$ and through the midpoint of edge $13$; and $R$ for the rotation by $\pi$ about the midpoint of the mattress.
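As a quick sanity check (my own sketch, realizing the two flips and the rotation as rotations by $\pi$ about the three perpendicular axes), the composition table is indeed the Klein four-group:

```python
import numpy as np
from itertools import product

I = np.diag([1, 1, 1])
H = np.diag([1, -1, -1])   # rotation by pi about the length axis
V = np.diag([-1, 1, -1])   # rotation by pi about the width axis
R = np.diag([-1, -1, 1])   # rotation by pi about the vertical axis

group = [I, H, V, R]
# Klein four-group: every element is its own inverse...
assert all(np.array_equal(g @ g, I) for g in group)
# ...and the set is closed under composition (e.g. H @ V = R).
for g, h in product(group, repeat=2):
    assert any(np.array_equal(g @ h, k) for k in group)
print("mattress group = Klein four-group")
```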
|
prove for all $n\geq 0$ that $3 \mid n^3+6n^2+11n+6$ I'm having some trouble with this question and can't really get how to prove this..
I have to prove $n^3+6n^2+11n+6$ is divisible by $3$ for all $n \geq 0$.
I have tried doing $\dfrac{m}{3}=n$ and then did $m=3n$
then I said $3n=n^3+6n^2+11n+6$ but now I am stuck.
| Here is a solution using induction:
Let $f(x)=x^3+6x^2+11x+6$
Since we want to show divisibility by $3$, assume as the induction hypothesis that $f(x)=3m$ for some integer $m$.
For the case where $x=0$, $f(0)=6$ which is divisible by 3.
Now that the base case is proved, let us consider $f(x+1)$:
$$f(x+1)=(x+1)^3+6(x+1)^2+11(x+1)+6$$
$$= x^3+3x^2+3x+1+6x^2+12x+6+11x+11+6$$
$$=(x^3+6x^2+11x+6)+3x^2+15x+18$$
And since $x^3+6x^2+11x+6=3m$
$$f(x+1)=3m+3x^2+15x+18=3(m+x^2+5x+6)$$
which is divisible by $3$, completing the induction.
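A quick numerical spot check of the claim (a sketch, independent of the induction):

```python
def f(n):
    return n**3 + 6*n**2 + 11*n + 6

# Every value in a sample range is divisible by 3.
assert all(f(n) % 3 == 0 for n in range(200))
print("3 | f(n) for n = 0..199")
```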
|
Rotations and the parallel postulate. If we take a full rotation to be $360^\circ$, then it seems that we can prove the following
Starting from the red point, we walk clockwise along the triangle. At each vertex, we must turn through the green angles marked to proceed down the adjacent sides of the triangle. When we return to the red point, we will have turned through one full rotation. This means that the sum of the exterior angles is $360^\circ$, implying that the interior angles of the triangle sum to $180^\circ$.
The fact that the angles of a triangle sum to $180^\circ$ is well known to be equivalent to the parallel postulate and this made me wonder whether if the fact that a full rotation being $360^\circ$ is also equivalent to the parallel postulate?
I avoided stating the question using "exterior angles of a triangle sum to $360^\circ$" and instead used the more ambiguous term "rotations" to emphasize the fact that rotations seem to be more general. We can for example show that the interior angles of a heptagram sum to $180^\circ$ by noting that three full rotations are made while "walking" the heptagram. This should generalize to arbitrary closed polygons and seems stronger than the fact that the exterior angles sum to $360^\circ$.
In summary, I would be interested in knowing the connections that this technique has to the parallel postulate as well as if this technique is a "rigorous" way of finding the internal angles of more complex shapes such as the heptagram.
| Your picture, and perhaps your assumptions, are lying in the Euclidean plane. Take the same idea and put it on the sphere, where the parallel postulate is false, and we get something like the following:
Notice that, in this case, the sum of the exterior angles is $270^\circ$, not $360^\circ$.
However, in answer to your question about the sum of the interior angles of a polygon: since an exterior angle and the corresponding interior angle sum to $180^\circ$, the sum of the exterior angles and the interior angles together is $180^\circ\times$ the number of sides. Since, as you have noted, in the Euclidean plane the sum of the exterior angles is $360^\circ$, we get that the sum of the interior angles of a polygon with $n$ sides is $(n-2)180^\circ$.
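The same bookkeeping handles the heptagram mentioned in the question: for a closed polygonal path with $n$ vertices traversed with winding number $w$ (that is, $w$ full rotations), the exterior angles sum to $w\cdot 360^\circ$, so
$$\text{interior sum} = n\cdot 180^\circ - w\cdot 360^\circ;$$
for the heptagram, $n=7$ and $w=3$ give $7\cdot 180^\circ - 3\cdot 360^\circ = 180^\circ$, matching the claim in the question.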
|
Hofstadter's TNT: b is a power of 2 - is my formula doing what it is supposed to? If you've read Hofstadter's Gödel, Escher, Bach, you must have come across the problem of expressing 'b is a power of 2' in Typographical Number Theory. An alternative way to say this is that every divisor of b is a multiple of 2 or equal to 1. Here's my solution:
b:~Ea:Ea':Ea'':( ((a.a')=b) AND ~(a=(a''.SS0) OR a=S0) )
It is intended to mean: $b$ has no divisor that is odd and different from $1$. E, AND and OR are to be replaced by the appropriate signs.
Is my formula OK? If not, could you tell me my mistake?
| Your idea is sound, but the particular formula you propose
$$\neg\exists a:\exists a':\exists a'':( ((a\cdot a')=b) \land \neg (a=(a''\cdot SS0) \lor a=S0) )$$
does not quite express it. The problem is that the quantifier for $a''$ has too large a scope -- what your formula says is that $b$ fails to be a power of two as soon as it has some factor, other than $1$, that differs from some even number. For example, your formula claims that $2$ itself is not a power of two, because you can make $((a\cdot a')=2) \land \neg (a=(a''\cdot SS0) \lor a=S0)$ true by setting $a=2$, $a'=1$, $a''=42$. The first part is true because $2\cdot 1$ is indeed $2$, and the second (negated) part is true because it is neither the case that $2=42\cdot SS0$ nor $2=S0$.
What you want is
$$\neg\exists a:\exists a':( ((a\cdot a')=b) \land \neg (\exists a'':(a=(a''\cdot SS0)) \lor a=S0) )$$
Moving the quantifier inside one negation switches the "burden of proof" -- now it says that there isn't any number that is half of $a$, rather than there is some number that isn't half of $a$.
Or perhaps more directly expressed:
$$\forall c:\Big(\exists d:( c\cdot d = b )\to \big(c=S0 \lor \exists a:(c=SS0\cdot a)\big)\Big)$$
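One way to see the difference concretely is to truncate the quantifiers to a finite range and test both formulas in code (a sketch; the function names and the search bound are mine):

```python
def original(b, bound=70):
    # ~Ea:Ea':Ea'':( a*a' = b  AND  ~(a = a''*SS0  OR  a = S0) )
    return not any(a * ap == b and not (a == 2 * app or a == 1)
                   for a in range(1, bound)
                   for ap in range(1, bound)
                   for app in range(bound))

def corrected(b, bound=70):
    # ~Ea:Ea':( a*a' = b  AND  ~(Ea'':(a = a''*SS0)  OR  a = S0) )
    # The inner Ea'' just says "a is even".
    return not any(a * ap == b and not (a % 2 == 0 or a == 1)
                   for a in range(1, bound)
                   for ap in range(1, bound))

print([b for b in range(1, 33) if original(b)])   # [1] -- even 2 is rejected
print([b for b in range(1, 33) if corrected(b)])  # [1, 2, 4, 8, 16, 32]
```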
|
Equivalence Class for Abstract Algebra Class Let
$$R_3= \{(a,b)\mid a,b \in \mathbb{Z}\text{ and there exists }k \in \mathbb{Z} \text{ such that }a-b=3k\}.$$
I know there is an equivalence relation but I'm not 100% on what it means to be an equivalence class for this problem. In class we got 3: $\{0,3,6,9,\ldots\}$ and $\{1,4,7,10,-2,-5,\ldots\}$ and $\{2, 5, 8, 11, -1, -4,\ldots\}$.
I don't understand where these cells came from. Help?
| I'll try to put it this way:
Define a relation $\sim$ on $\mathbb Z$, such that $a \sim b \iff \exists k \in \mathbb Z ~~ \text{such that}~~~~a-b=3k$
What does this say?
Integers $a$ and $b$ are related if and only if their difference is a multiple of $3$. Note that the remainder when $a-b$ is divided by $3$ is the difference of the remainders when $a$ and $b$ are divided by $3$ (all taken $\bmod 3$).
So, integers $a$ and $b$ are related if and only if they leave the same remainder when divided by $3$.
Now try to put all those numbers that are related to each other in the same "cell" and those that are not related in different "cells".
But, now notice that the number of distinct cells you'll need for the purpose is no more than $3$ and no less! (Why?)
Construct these "cells" to see how they coincide with what you have written down in your class.
And, now call these cells "equivalence classes".
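A tiny computation (a sketch) that reproduces the three cells from class:

```python
# Group a window of integers by remainder mod 3; each group is one cell.
cells = {r: [n for n in range(-6, 13) if n % 3 == r] for r in range(3)}
for r, cell in cells.items():
    print(r, cell)
# 0 [-6, -3, 0, 3, 6, 9, 12]
# 1 [-5, -2, 1, 4, 7, 10]
# 2 [-4, -1, 2, 5, 8, 11]
```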
|
Soccer and Probability MOTIVATION: I will quote Wikipedia's article on a soccer goalkeeper for the motivation:
Some goalkeepers have even scored goals. This most commonly occurs where a goalkeeper has rushed up to the opposite end of the pitch to give his team an attacking advantage in numbers. This rush is risky, as it leaves the goalkeeper's goal undefended. As such, it is normally only done late in a game at set-pieces where the consequences of scoring far outweigh those of conceding a further goal, such as for a team trailing in a knock-out tournament.
The mathematical question:
Consider the following game (simplified soccer):
A single player starts with a score of 0.5 and plays N turns.
In each turn, the player has to choose one of 2 strategies: $(p_{-1},p_0,p_1)$ or
$(q_{-1},q_0,q_1)$ (these are probability vectors) and then her score is increased by -1, 0 or 1 according to the probabilites dictated by the chosen strategy. The player wins if at the end of the game she has a positive score, and loses if she has a negative score (the player's objective is to win, the only thing that matters is whether the final score is positive or negative).
What is the optimal global strategy given $N$, $(p_{-1},p_0,p_1)$ and
$(q_{-1},q_0,q_1)$?
A global strategy is a function of the number of turns left, the current score and the 2 probability vectors (which are constant for all turns).
If this question is hard, it may still be interesting to approximate an optimal global strategy (in what sense?).
| In the case where you know the number of turns in advance, you can construct an optimal strategy in time $O(N^2)$ by reasoning backwards from the last round.
If, before the last turn, you find yourself with score $-0.5$, choose your strategy by comparing $p_1$ to $q_1$. On the other hand, if the score is $0.5$, compare $p_{-1}$ to $q_{-1}$. If the score is anything else, it's too late to make a difference: either you have already won, or already lost.
Now you know the "value" of the game (that is, the probability of eventually winning, given optimal play) after $N-1$ turns, as a function of your score at that time.
Then look at your options before the second-to-last turn. For each possible score you have the option of playing the $p$ vector or the $q$ vector, and each of these will give you a certain probability of eventually winning, which you can easily compute because you already have a table of the probabilities after the turn. The optimal play at each score is, of course, the one that yields the best probability of winning.
Continue this backwards until the first turn. What you end up with is a two-dimensional table that tells you, given the number of turns left and your instant score, what your chance of winning is, and whether you should play $p$ or $q$. This table constitutes an optimal strategy.
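A minimal implementation of this backward induction (a sketch; the names and example vectors are mine, and the score is tracked as an integer offset from the initial $0.5$):

```python
from functools import lru_cache

def win_probability(N, p, q):
    # p, q = (prob of -1, prob of 0, prob of +1); the score starts at 0.5
    # and the player wins iff the final score is positive.
    @lru_cache(maxsize=None)
    def value(turns_left, s):          # s = integer offset from 0.5
        if turns_left == 0:
            return 1.0 if s + 0.5 > 0 else 0.0
        ev_p = sum(p[d + 1] * value(turns_left - 1, s + d) for d in (-1, 0, 1))
        ev_q = sum(q[d + 1] * value(turns_left - 1, s + d) for d in (-1, 0, 1))
        return max(ev_p, ev_q)         # optimal choice at this state

    return value(N, 0)

# Example: a cautious vector against a high-variance one.
print(win_probability(10, (0.3, 0.4, 0.3), (0.45, 0.1, 0.45)))
```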
|
Rudin's assertion that if $t = x/(1 + x)$ then $0 \leq t < 1$ I'm having trouble understanding one step in the proof of Theorem 1.21 in Rudin's Principles of Mathematical Analysis.
Theorem 1.21 For every real $x > 0$ and every integer $n > 0$ there is one and only one positive real $y$ such that $y^{n} = x$.
In the proof he makes the following claim: Let $E$ be the set consisting of all positive real numbers $t$ such that $t^{n} < x$. If $t = \frac{x}{1 + x}$ then $0 \leq t < 1$.
I don't understand how he got that inequality. If $t = 0$ that implies that $x = 0$ which is a contradiction since every $x > 0$. And if $x \rightarrow \infty$, then $t = 1$.
| Notice that in this proof $x$ is a fixed positive real number, and that we are assuming $t=\frac{x}{x+1}$.
Since $x>0$, we have $x+1>0$, so $t=\frac{x}{x+1}>0$, and in particular $t\geq 0$. Furthermore, since $0<x<x+1$, we have $t=\frac{x}{x+1}<1$. Putting these together gives $0\leq t< 1$.
|
Number of critical points in a period of a periodic function I am interested in a relationship (if any) between the number of critical points of a periodic function $f$ of class $C^3([0,T])$ and the number of critical points of $f''$ in $[0,T]$.
| Consider a $C^2$ function $F\colon\mathbb{R}\to\mathbb{R}$ periodic of period $T>0$ and assume that $F$ has $N$ distinct zeroes $\{x_1,\dots,x_N\}\subset[0,T]$. By Rolle's theorem, $F'$ has at least $N-1$ zeroes in $(0,T)$, one in each interval $(x_i,x_{i+1})$, $1\le i\le N-1$.
* If $x_1=0$ (and hence $x_N=T$), then $F'$ may have exactly $N-1$ zeroes.
* If $x_1>0$ (and hence $x_N<T$), then $F'$ has at least one zero between $x_N$ and $x_1+T$; call it $\xi$. Then either $\xi\in(x_N,T]$ or $\xi-T\in(0,x_1)$. Conclude that $F'$ has at least $N$ zeroes. If $\xi=T$, then $F'$ has at least $N+1$ zeroes.
Applying the above argument to $F'$ shows that $F''$ has at least $N-1$ zeroes. For $F''$ to have exactly $N-1$ zeroes it must be that $F(0)=F(T)=0$.
Returning to your original question, $f''$ has at least as many critical points as $f$, except when $0$ and $T$ are critical points, in which case $f''$ may have one less critical point than $f$.
|
Solve $\theta''+g\sin(\theta)=0$ I encountered the following differential equation when I tried to derive the equation of motion of a simple pendulum:
$\frac{\mathrm d^2 \theta}{\mathrm dt^2}+g\sin\theta=0$
How can I solve the above equation?
| Replacing $\sin\theta$ by $\theta$ (physically, assuming small angle deflection) gives you a homogeneous second order linear differential equation with constant coefficients, whose general solution can be found in most introductory diff eq texts (or a Google search). This new equation represents a simple harmonic oscillator (acceleration proportional to displacement, like a spring force).
$$
\theta''+g\theta=0
$$
has solutions $A\cos(\sqrt{g}t)+B\sin(\sqrt{g}t)$.
So, for example, if the initial displacement is $\theta_0$ and the initial angular velocity is $0$, then the solution is
$$
\theta_0\cos(\sqrt{g}t)
$$
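A symbolic check of the stated solution (a sketch using sympy; $\theta_0$ and $g$ are kept as symbols):

```python
import sympy as sp

t, g, theta0 = sp.symbols('t g theta_0', positive=True)
theta = theta0 * sp.cos(sp.sqrt(g) * t)

print(sp.simplify(sp.diff(theta, t, 2) + g * theta))     # 0
print(theta.subs(t, 0), sp.diff(theta, t).subs(t, 0))    # theta_0, 0
```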
|
basic calculus proof - using theorems to prove stuff A function $f(x)$ is defined and continuous on the interval $[0,2]$ and $f(0)=f(2)$.
Prove that the numbers $x,y$ on $[0,2]$ exist such that $y-x=1$ and $f(x) = f(y)$.
I can already guess this is going to involve the intermediate value theorem.
So far I've defined things as such:
I'm looking to satisfy the following conditions for values x, y:
* $f(x) = f(x+1)$
* $f(x) = f(y)$
I've defined another function, $g(x)$ such that $g(x) = f(x+1) - f(x)$
If I can show that there exists an $x$ such that $g(x) = 0$ then I've also proven that $f(x) = f(x+1)$.
Since I'm given the interval $[0,2]$, I can show that:
$g(1) = f(2) - f(1)$,
$g(0) = f(1) - f(0)$
I'm told that $f(2) = f(0)$ so I can rearrange things to show that $g(1) = f(0) - f(1) = -g(0)$.
OK, so I've shown that $g(0) = -g(1)$.
How do I tie this up? I'm not able to close this proof. I know I need to incorporate the intermediate value theorem, which states that if $c$ is a value between $f(a)$ and $f(b)$ for a continuous $f$ on $[a,b]$, then there is some $a<k<b$ such that $f(k) = c$.
I thought maybe to use Rolle's theorem to state that since $f(0) = f(2)$ I know this function isn't monotonic. And if it's not monotonic it must have a "turning point" where $f'(x) = 0$ but it's not working out.
Anyway I need help with this proof in particular and perhaps some advice on solving proofs in general since this type of thing takes me hours.
Thanks.
| If $g(0)$ is positive, $g(1)$ will be negative and vice versa, so the IVT provides a root of $g$ in $[0,1]$. If both are zero, $g(0)=g(1)=0=f(1)-f(0)=f(2)-f(1)$ and you're done as well.
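A concrete instance of the argument (a sketch with a sample $f$ of my choosing that satisfies $f(0)=f(2)$), locating the promised $x$ by bisection:

```python
import math

f = lambda x: math.sin(math.pi * x) + 0.3 * x * (2 - x)   # f(0) = f(2) = 0
g = lambda x: f(x + 1) - f(x)

lo, hi = 0.0, 1.0
assert g(lo) * g(hi) < 0          # opposite signs, so the IVT applies on [0, 1]
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(lo) * g(mid) > 0 else (lo, mid)
print(lo, f(lo), f(lo + 1))       # f(x) = f(x+1) at the root
```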
|
Question regarding infinite Blaschke product According to Gamelin's $\textit{Complex Analysis}$, a finite Blaschke product is a rational function of the form $B(z)= e^{i \varphi} (\frac{z-a_1}{1-\bar{a_1} z} \cdots \frac{z-a_n}{1-\bar{a_n} z})$ where $a_1, ..., a_n \in \mathbb{D}$ and $0 \leq \varphi \leq 2\pi$. Similarly, I would guess that an infinite Blaschke product would be of the form $e^{i \varphi} \prod_{n=1}^\infty\frac{z-a_n}{1-\bar{a_n} z}$. I believe this is supposed to satisfy what is known as the Blaschke condition, i.e. $\sum_{n=1}^\infty (1-|a_n|) < \infty$, but how is that so? Can this be verified using the log function on the infinite product?
| Actually, the infinite Blaschke product, for $|a_n|<1$ and $|z|<1$, is defined as
$$
e^{i\varphi}\prod_{n=1}^\infty\frac{|a_n|}{a_n}\frac{z-a_n}{\overline{a}_n z-1}\tag{1}
$$
The factor of $\;{-}\dfrac{|a_n|}{a_n}$ simply rotates $\dfrac{z-a_n}{1-\overline{a}_n z}$, which, for finite products, is incorporated into $e^{i\varphi}$. However, for infinite products, it is needed for convergence.
First, note that
$$
\begin{align}
\frac{|a_n|}{a_n}\frac{z-a_n}{\overline{a}_n z-1}
&=|a_n|\frac{z-a_n}{|a_n|^2 z-a_n}\\
&=(1-(1-|a_n|))\left(1+\frac{z(1-|a_n|^2)}{|a_n|^2 z-a_n}\right)\\
&=(1-(1-|a_n|))\left(1+\frac{z(1+|a_n|)}{|a_n|^2\left(z-\frac{1}{\overline{a}_n}\right)}(1-|a_n|)\right)\tag{2}
\end{align}
$$
where
$$
\begin{align}
\left|\frac{z(1+|a_n|)}{|a_n|^2\left(z-\frac{1}{\overline{a}_n}\right)}\right|
&\le\frac{1+|a_n|}{|a_n|^2}\frac{|z|}{1-|z|}\\
&\le6\frac{|z|}{1-|z|}\tag{3}
\end{align}
$$
when $|a_n|\ge\frac12$.
Equations $(2)$ and $(3)$ say that the infinite product in $(1)$ converges absolutely when $|z|<1$ and
$$
\sum_{n=1}^\infty(1-|a_n|)\tag{4}
$$
converges. That is, the infinite product $\prod\limits_{n=1}^\infty(1+z_n)$ converges absolutely when $\sum\limits_{n=1}^\infty|z_n|$ converges.
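A numeric illustration (a sketch; the sequence $a_n = 1-\frac{1}{(n+1)^2}$ is my choice and satisfies the Blaschke condition):

```python
# Partial Blaschke products at a fixed |z| < 1 stabilize because
# sum(1 - |a_n|) = sum 1/(n+1)^2 converges (phi = 0 here).
z = 0.3 + 0.2j
prod = 1.0 + 0j
for n in range(1, 10001):
    a = 1 - 1 / (n + 1) ** 2          # real and positive, so |a_n|/a_n = 1
    prod *= (z - a) / (a * z - 1)
    if n in (10, 100, 1000, 10000):
        print(n, prod)
```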
|
The differences between $\mathbb{R}/ \mathbb{Z}$ and $\mathbb{R}$
The cosets of $\mathbb{Z}$ in $\mathbb{R}$ are all sets of the form $a+\mathbb{Z}$, with $0 ≤ a < 1$ a real number. Adding such cosets is done by adding the corresponding real numbers, and subtracting 1 if the result is greater than or equal to 1. -- Examples of Quotient Group, Wiki
I cannot figure out the differences between $\mathbb{R}/ \mathbb{Z}$ and $\mathbb{R}$. Besides, "subtracting 1 if the result is greater than or equal to 1", what does "the result" mean here? Why do we need to subtract 1? I was wondering what is the background of $\mathbb{R}/ \mathbb{Z}$.
| $(a+\mathbb Z)+(b+\mathbb Z)$ is found by adding $a$ and $b$, the result of which is $a+b$. If $a+b<1$, then $(a+\mathbb Z)+(b+\mathbb Z)=(a+b)+\mathbb Z$. If $a+b\geq 1$, then $(a+\mathbb Z)+(b+\mathbb Z)=(a+b-1)+\mathbb Z$.
But this is only if you follow the stated convention of only listing representatives from $[0,1)$. The fact is, $(a+b)+\mathbb Z$ and $(a+b-1)+\mathbb Z$ are different names for the exact same set, so you don't really need to subtract $1$.
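In code, the arithmetic of $\mathbb{R}/\mathbb{Z}$ with representatives in $[0,1)$ is simply addition mod $1$ (a sketch):

```python
def add_cosets(a, b):
    # a, b are representatives in [0, 1); subtract 1 exactly when a + b >= 1,
    # which is what "(a + b) mod 1" does.
    return (a + b) % 1.0

print(add_cosets(0.7, 0.6))   # 0.3 (up to floating point): 1.3 + Z = 0.3 + Z
```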
|
Existence of universal enveloping inverse semigroup (similar to "Grothendieck group") Context
In its simplest form, the Grothendieck group construction associates an abelian group to a commutative semigroup in a "universal way".
Now I'm interested in the following nilpotent commutative semigroup $N$ consisting of two elements $a$ and $b$ such that $a^2=b^2=ab=ba=a$. The corresponding Grothendieck group is the trivial group with just one element, which is a bit boring. So I asked myself whether it would be possible to construct a "universal enveloping inverse semigroup"(*) in a similar way as the Grothendieck group, and whether it would be more interesting.
I tried to compute the "universal enveloping inverse semigroup" $N_I$ for the semigroup $N=\{a,b\}$. I got $N_I = \{ a, b, b^{-1}, bb^{-1}, b^{-1}b \}$ with $(b^{-1})^2=ab^{-1}=b^{-1}a=a$. What I find surprising is that $N_I$ is not commutative, even though $N$ is commutative. So I tried to compute the "universal enveloping commutative inverse semigroup" $N_C$ instead and got $N_C = \{ a \}$.
(*)Note: In a semigroup $S$, we say that $y\in S$ is an inverse element of $x\in S$, if $xyx=x$ and $yxy=y$. A semigroup $S$ is called an inverse semigroup, if each $x\in S$ has a unique inverse element $x^{-1}\in S$. It's easy to see that $xx^{-1}$ and $x^{-1}x$ are idempotent, that all idempotent elements in an inverse semigroup commute, and that $(xy)^{-1}=y^{-1}x^{-1}$. So at least superficially, inverse semigroups seem to be nice generalization of groups and can have a zero element without being trivial.
Question
Does the "universal enveloping inverse semigroup" (and the "universal enveloping commutative inverse semigroup") of a semigroup $S$ always exist? I guess the answer is yes and this probably follows from some theorem of universal-algebra. Similarly, I guess that the "universal enveloping regular semigroup" doesn't always exist and wonder whether this also follows from some theorem of universal-algebra.
| I now found out how to prove that no "universal enveloping regular semigroup" exists for the example given in the question. (The existence of the other two cases has already been proved in the answer by Martin Wanvik.)
Let
$a=\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$,
$b=\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$,
$b'=\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}$,
$c=\begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}$,
$c'=\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}$,
$d=\begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix}$,
$d'=\begin{bmatrix} 0 & 0 \\ 1 & 1 \end{bmatrix}$,
$e=\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$, and
$f=\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$. Let $N := \{ a,b \}$ and $N_I := \{ a,b,b',e,f \}$ (as in the question, with $b'=b^{-1}$, $e=bb^{-1}$, $f=b^{-1}b$). It is easy to see that $S := \{ a,b,b',c,c',d,d',e,f \}$ is a regular semigroup, and that $N_I$ and $R := \{ a,b,c,d,e \}$ are regular sub-semigroups of $S$. Now $b^{-1}=b'$ in $N_I$ and $c$ is the unique inverse element of $b$ in $R$. So if there existed a regular semigroup $N_R$ containing $N$ as a sub-semigroup, for which it is possible to uniquely extend any homomorphism from $N$ to $N_I$ or $R$, then the extension to $N_R$ of a homomorphism $h$ from $N$ to $S$ (with $h(a)=a$ and $h(b)=b$) would not be unique.
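A quick matrix check (a sketch) that $b$ really has the two different inverse elements the argument uses, $b'$ in $N_I$ and $c$ in $R$:

```python
import numpy as np

b  = np.array([[0, 1], [0, 0]])
bp = np.array([[0, 0], [1, 0]])   # b', the inverse of b in N_I
c  = np.array([[1, 0], [1, 0]])   # the inverse of b in R

def is_inverse(x, y):
    # y is an inverse of x in the semigroup sense: xyx = x and yxy = y.
    return (x @ y @ x == x).all() and (y @ x @ y == y).all()

print(is_inverse(b, bp), is_inverse(b, c))   # True True
```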
|
Number of solutions of $x^2=1$ in $\mathbb{Z}/n\mathbb{Z}$ Next is what I have worked out to the moment.
$1$ and $-1$ are roots for all $n$.
$x \in \mathbb{Z}/n\mathbb{Z},\ $ $x^2\equiv1 \Leftrightarrow (x-1)(x+1)\equiv0 \Leftrightarrow \exists k \in \mathbb{Z}/n\mathbb{Z}: k(k+2)\equiv0 $.
But how can it be applied to find other roots?
| (I'd have to check the details in the following but it provides some rough ideas.)
Write $n = \prod_i p_i^{\nu_i}$ and use the Chinese remainder theorem to obtain a system of equations
$$ x^2 \equiv 1 \pmod{p_i^{\nu_i}}$$
Of course, for every $p_i$, $x=\pm 1$ provides a solution. Based on some quick calculations, I think that:
* If $p_i>2$, these are the only solutions.
* If $p_i=2$ and $\nu_2 \geq 3$, then there are 4 solutions.
* If $p_i=2$ and $\nu_2 = 2$, then $x=\pm 1$ are solutions.
* If $p_i=2$ and $\nu_2 = 1$, there is $1$ solution, because $+1 = -1$.
To confirm this: investigate when a prime power can divide two numbers that differ by $2$, i.e when $p_i^{\nu_i} \mid (x+1)(x-1)$.
We may then use the Chinese remainder theorem to reassemble the solutions to find solutions mod $n$: each combination of solutions modulo the prime-powers will uniquely determine a solution mod $n$.
So the number of solutions is (where $\omega(n)$ is the number of
distinct prime factors of $n$):
* $2^{\omega(n)}$ if $n$ is odd or $\nu_2 = 2$,
* $2^{\omega(n)+1}$ if $\nu_2 \geq 3$,
* $2^{\omega(n)-1}$ if $\nu_2 = 1$.
A proof of the above can be based on Theorems 4.19 and 4.20 in Basic Algebra vol. 1 by N. Jacobson, discussing the structure of $U_{p^\nu}$, the group of units in $\mathbb Z/p^\nu \mathbb Z$: it is cyclic if $p$ is odd, or if $p^\nu=2$ or $p^\nu=4$, and isomorphic to $C_2\times C_{2^{\nu-2}}$ if $p=2$ and $\nu\geq 3$.
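A brute-force check of the count against the formula (a sketch):

```python
from sympy import factorint

def count_roots(n):
    return sum(1 for x in range(n) if x * x % n == 1)

def predicted(n):
    fac = factorint(n)
    omega, nu2 = len(fac), fac.get(2, 0)
    if nu2 in (0, 2):
        return 2 ** omega
    if nu2 == 1:
        return 2 ** (omega - 1)
    return 2 ** (omega + 1)      # nu2 >= 3

assert all(count_roots(n) == predicted(n) for n in range(2, 500))
print("formula verified for 2 <= n < 500")
```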
|
show if $n=4k+3$ is a prime and ${a^2+b^2} \equiv 0 \pmod n$ , then $a \equiv b \equiv 0 \pmod n$ $n = 4k + 3 $
We start by letting $a \not\equiv 0\pmod n$ $\Rightarrow$ $a \equiv k\pmod n$ .
$\Rightarrow$ $a^{4k+2} \equiv 1\pmod n$
Now, I know that the contradiction will arrive from the fact that if we can show $a^2 \equiv 1 \pmod n $ then we can say $b^2 \equiv -1 \pmod n $ then it is not possible since solution exists only for $n=4k_2+1 $ so $a \equiv b\equiv 0 \pmod n $
So from the fact that $a^{2^{2k+1}} \equiv 1 \pmod n$ I have to conclude something.
| This is a solution from Prasolov V.V. Zadachi po algebre, arifmetike i analizu (В. В. Прасолов. Задачи по алгебре, арифметике и анализу.) This book (in Russian) is freely available at http://www.mccme.ru/free-books/
The problem appears there as Problem 31.2.
We want to show that if $p=4k+3$ is a prime number and $p\mid a^2+b^2$, then $p\mid a$ and $p\mid b$.
My translation of the solution given there:
Suppose that one of the numbers $a$, $b$ is not divisible by $p$. Then the other one is not divisible by $p$, either. Thus from Fermat's little theorem we get $a^{p-1} \equiv 1 \pmod p$ and $b^{p-1} \equiv 1 \pmod p$. This implies $a^{p-1}+b^{p-1} \equiv 2 \pmod p$.
On the other hand, the number $a^{p-1}+b^{p-1}=a^{4k+2}+b^{4k+2}=(a^2)^{2k+1}+(b^2)^{2k+1}$ is divisible by $a^2+b^2$ (since $x+y$ divides $x^m+y^m$ for odd $m$), and thus it is divisible by $p$. But $2\not\equiv 0 \pmod p$ for the odd prime $p$, so this contradicts the previous paragraph.
|
An ideal and all elements of the quotient ring
Definition 1 An ideal $I$ of $R$ is an additive subgroup such that $$a\in R , s\in I \Rightarrow as,sa \in I$$
The ring R/I is called the quotient ring.
Example 1 : $R=\mathbb{Z}[x], I=nR (n\in \mathbb{Z})$. Then $R/I "=" (\mathbb{Z}/n\mathbb{Z})[x]$. Or $I=xR$. What are the $A+I (A\in \mathbb{Z}[x])?$
$A=a+bx+\ldots$, so the cosets are equal to the $a+I$ ($a\in \mathbb{Z}$), because if $A=a+bx+\ldots$, then $A-a = bx+\ldots \in I$. So $R/I "=" \mathbb{Z}$. Or: $I=2R+xR = \{ 2A+xB : A,B \in R\}$. Then we can say that there are at most $2$ different side classes, $0+I$ and $1+I$, and they cannot be equal, because otherwise $1\in I$, i.e. $1=2A+xB$ for some $A,B\in \mathbb{Z}[x]$, which is impossible (compare constant terms). It is true that $R/I$ is identical to $\mathbb{F}_{2}=\{0,1\}$.
Example 2: R=$\mathbb{F}_{2}[x]$, I=xR, $R/I = \{0+I, 1+I\} "=" \mathbb{F}_{2}$. Or $I=x^{2}R, A+I = (a+bx+...)+I = (a+bx)+I (a,b \in \mathbb{F}_{2})$. So there are at most 4 side classes : $0+I,1+I,x+I,x+1+I$ and they are all different from eachother.
This is an excerpt from my lecture notes. I do not understand how in example 1 it is concluded that there are at most 2 different side classes, and 4 side classes with $(\mathbb{F}_{2}/x^{2}\mathbb{F}_{2})[x]$ in example 2. How does one find the side classes (cosets)? Are they elements of the ring? It seems to be easy to find them for simple rings (the elements of $\mathbb{F}_{5}$ are the equivalence classes $0,1,2,3,4$), but how does one find them (the elements) for quotient rings?
| "Example 1" seems to be three examples: (i) $R/I$ with $R=\mathbb{Z}[x]$, $I=nR$; (ii) $R/I$ with $I=xR$; and (iii) $R/I$ with $I=2R+xR$.
In the third case, $I$ consists of all polynomials with integer coefficients that have even constant term: all such polynomials can be written as a multiple of $2$ plus a multiple of $x$; and conversely, if a polynomial is a multiple of $2$ plus a multiple of $x$, then its constant term must be even.
When are $a_0 + a_1x+\cdots+a_nx^n$ and $b_0+b_1x+\cdots+b_mx^m$ in the same lateral class modulo this $I$? If and only if
$$(a_0+\cdots + a_nx^n) - (b_0 + \cdots + b_mx^m) = (a_0-b_0) + (a_1-b_1)x + \cdots \in I.$$
In order for the difference to be in $I$ what do we need? We need $a_0-b_0$ to be even; so the two polynomials are in the same coset if $a_0$ and $b_0$ have the same parity. So there are at most two lateral classes: one for polynomials with even constant term, one for polynomials with odd constant term.
(Conversely, polynomials with constant terms of opposite parity are not in the same lateral class, so there are in fact exactly two lateral classes).
In Example 2, you have $R=F_2[x]$ and $I=x^2R$. A polynomial is in $I$ if and only if it is a multiple of $x^2$. If $a_0+a_1x+\cdots + a_nx^n$ and $b_0+b_1x+\cdots+b_mx^m$ have $a_0=b_0$ and $a_1=b_1$, then their difference is a multiple of $x^2$; since there are two possible choices for $a_0$ and two possible choices for $a_1$, there are at most four possible lateral classes: one for each choice combination. Because every polynomial will have constant and linear terms equal to $0$, both equal to $1$, constant term $0$ and linear term $1$, or constant term $1$ and linear term $0$. So there are at most 4 possibilities that yield different lateral classes.
(Conversely, those four possibilities are pairwise distinct, since the difference of, say, a poynomial with constant term 1 and linear term 0, and one of linear and constant terms equal to $1$, would not be a multiple of $x^2$).
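A tiny enumeration of Example 2's four classes (a sketch, representing each class by the pair (constant term, linear term) over $\mathbb{F}_2$):

```python
from itertools import product

# Classes of F_2[x] mod x^2 R: representatives a0 + a1*x with a0, a1 in {0, 1}.
classes = list(product((0, 1), repeat=2))
print(classes)                 # (0,0), (0,1), (1,0), (1,1) -> 0, x, 1, 1+x

def mul(u, v):
    # (a0 + a1 x)(b0 + b1 x) = a0 b0 + (a0 b1 + a1 b0) x   (mod x^2, mod 2)
    return (u[0] * v[0] % 2, (u[0] * v[1] + u[1] * v[0]) % 2)

print(mul((0, 1), (0, 1)))     # (0, 0): x * x = x^2 lies in I
```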
|
roots of complex polynomial - tricks What tricks are there for calculating the roots of complex polynomials like
$$p(t) = (t+1)^6 - (t-1)^6$$
$t = 1$ is not a root. Therefore we can divide by $(t-1)^6$. We then get
$$\left( \frac{t+1}{t-1} \right)^6 = 1$$
Let $\omega = \frac{t+1}{t-1}$ then we get $\omega^6=1$ which brings us to
$$\omega_k = e^{i \cdot k \cdot \frac{2 \pi}{6}}$$
So now we need to get the values from t for $k = 0,...5$.
How to get the values of t from the following identity then?
$$
\begin{align}
\frac{t+1}{t-1} &= e^{i \cdot 2 \cdot \frac{2 \pi}{6}} \\
(t+1) &= t\cdot e^{i \cdot 2 \cdot \frac{2 \pi}{6}} - e^{i \cdot 2 \cdot \frac{2 \pi}{6}} \\
1+e^{i \cdot 2 \cdot \frac{2 \pi}{6}} &= t\cdot e^{i \cdot 2 \cdot \frac{2 \pi}{6}} - t \\
1+e^{i \cdot 2 \cdot \frac{2 \pi}{6}} &= t \cdot (e^{i \cdot 2 \cdot \frac{2 \pi}{6}}-1) \\
\end{align}
$$
And now?
$$
t = \frac{1+e^{i \cdot 2 \cdot \frac{2 \pi}{6}}}{e^{i \cdot 2 \cdot \frac{2 \pi}{6}}-1}
$$
So I've got six roots for $k = 0,...5$ as follows
$$
t = \frac{1+e^{i \cdot k \cdot \frac{2 \pi}{6}}}{e^{i \cdot k \cdot \frac{2 \pi}{6}}-1}
$$
Is this right? But how can it be that the bottom equals $0$ for $k=0$?
I don't exactly know how to simplify this:
$$\frac{ \frac{1}{ e^{i \cdot k \cdot \frac{2 \pi}{6}} } + 1 }{ 1 - \frac{1}{ e^{i \cdot k \cdot \frac{2 \pi}{6}} }}$$
| Notice that $t=1$ is not a root. Divide by $(t-1)^6$.
If $\omega$ is a root of $z^6 - 1$, then a root of the original equation is given by $\frac{t+1}{t-1} = \omega$.
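On the $k=0$ worry (an addition to the hint): the $t^6$ terms cancel, so $p$ has degree $5$ and only five roots; $k=0$ gives $\omega=1$, and $\frac{t+1}{t-1}=1$ has no solution at all. A numeric check (a sketch):

```python
import numpy as np

p = np.poly1d([1, 1]) ** 6 - np.poly1d([1, -1]) ** 6
print(p.order)                       # 5: the t^6 terms cancel
print(sorted(p.roots, key=abs))      # 0, +-i/sqrt(3), +-i*sqrt(3)

# Compare with t = (1 + w)/(w - 1) for the roots of unity w, k = 1..5.
ws = (np.exp(2j * np.pi * k / 6) for k in range(1, 6))
print(sorted(((1 + w) / (w - 1) for w in ws), key=abs))
```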
|
A better approximation of $H_n $ I'm convinced that
$$H_n \approx\log(n+\gamma) +\gamma$$ is a better approximation of the $n$-th harmonic number than the classical $$H_n \approx \log(n) +\gamma$$
Specially for small values of $n$. I leave some values and the error:
Just to make things clearer, I calculate the error between two numbers as follows.
Say $n$ is the optimal value and $a$ is the approximation; then $E = \frac{n-a}{n}$. $L_1$ stands for my approximation and $L_2$ for the classical one, and the errors $E_2$ and $E_1$ correspond to each of those (I mixed up the numbers).
It is clear that this gives an overestimate but tends to the real value for larger $n$.
So, is there a way to prove that the approximation is better?
NOTE: I tried using the \begin{tabular} environment but nothing seemed to work. Any links on table making in this site?
| The asymptotic expansion of the Harmonic numbers $H_n$ is given by
$$\log n+\gamma+\frac{1}{2n}+\mathcal{O}\left(\frac{1}{n^2}\right).$$
The Maclaurin series expansion of the natural logarithm tells us $\log(1+x)=x+\mathcal{O}(x^2)$, and we can use this in your formula by writing $\log(n+\epsilon)=\log n+\log(1+\epsilon/n)$ and expanding:
$$\log(n+\gamma)+\gamma=\log n+\gamma\;\;\;+\frac{\gamma}{n}+\mathcal{O}\left(\frac{1}{n^2}\right).$$
Your approximation is asymptotically better than the generic one because the $\gamma=0.577\dots$ in its expansion is closer to the true coefficient $\frac{1}{2}$ than the implicit coefficient $0$ in the generic formula given by the usual $H_n\sim \log n +\gamma+0/n$. This also explains why it is asymptotically an overestimate.
As marty said in his answer, the expansion comes from the Euler-Maclaurin formula:
$$\sum_{n=a}^b f(n)=\int_a^b f(x)dx+\frac{f(a)+f(b)}{2}+\sum_{k=1}^\infty \frac{B_{2k}}{(2k)!}\left(f^{(2k-1)}(b)-f^{(2k-1)}(a)\right).$$
Here we let $a=1,b=n$ (rewrite the index to a different letter) and $f(x)=1/x$.
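A quick numeric comparison of the two errors (a sketch):

```python
import math

gamma = 0.5772156649015329
H = 0.0
for n in range(1, 10001):
    H += 1 / n
    if n in (5, 10, 100, 1000, 10000):
        print(n, H - (math.log(n + gamma) + gamma), H - (math.log(n) + gamma))
# The new error ~ (1/2 - gamma)/n is smaller in size than the old ~ 1/(2n).
```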
|
Union of uncountably many subspaces Supposing, $\{V_t\}, t > 0$ are an uncountable number of linear subspaces of $\mathbb{R}^n$. If $\bigcup_{t>0} V_t = \mathbb{R}^n$ is it true that $V_T = \mathbb{R}^n$ for some $T>0$?
Any help is appreciated. Thanks.
EDIT: I forgot to add the condition that the $V_t$ are increasing.
| In general the answer is no. Consider the family of subspaces of the form
$$
V_t=\{x\in\mathbb{R}^n:x_1\cos\frac{2\pi}{t+1}+x_2\sin\frac{2\pi}{t+1}=0\}
$$
|
chromatic number and subgraph
Prove that any graph $G$ with $n$ vertices and $ \chi(G)=k$ has a subgraph $H$ such that $ H \simeq \overline{K_p}$ where $p=n/k$ and $K_p$ is the complete graph with $n/k$ vertices.
My attempt: Because $ \chi(G)=k$ it must be $G \subseteq K_{p_1 p_2 \cdots p_k} $ where $\displaystyle{\sum_{j=1}^{k} p_j =n}$.
Can I consider now that $ p_j =n/k$ for all j?
If no then the other cases is to have $ p_j >n/k$ for some $j \in \{1, \cdots ,k\}$.
But now how can I continue?
| If $\chi(G) = k$, it means we can color the graph with $k$ colors, $c_1, \ldots, c_k$. Each color class, $c_i$, consists of some vertices $V_i$. Necessarily, the vertices in $V_i$ are independent, or we could not color them all the same color, $c_i$.
Now, assume that every color class contains less than $n / k$ vertices. Then the total number of vertices in the graph is $|G| < k \cdot (n/k) = n$. This isn't possible since we assume $|G| = n$. Therefore, some color class contains at least $n / k$ vertices. Since the vertices in a color class are independent, we have an independent set of size at least $n / k$.
|
Integrable function Please help me prove this.
Suppose $f$ is defined on $\left[a,b\right]$, $f\geq 0$, $f$ is integrable on $[a,b]$, and $\displaystyle\int_{a}^{b}f\,dx=0$.
prove:
$\displaystyle\int_{a}^{b}f(x)^2dx=0$
Thanks a lot!
| My first inclination would be to say that since $f\geq0$ and $\int_a^bf\,dx=0$, $f(x)=0$ for $x$ such that $a\leq x\leq b$. Because of this, $\int_a^bf(x)^2\,dx=0$. You will probably need to fill in a fair amount of reasoning to make this hold up, though.
Also, suggestion - accept some answers to your questions. It makes people more likely to answer in the future.
|
Proving that the "real part" function is continuous I want to use the definition of a limit,
$|f(z) - w_0| < \varepsilon$ whenever $0 < |z - z_0| < \delta$
to prove
$$\lim_{z \to z_0} \mathop{\rm Re}(z) = \mathop{\rm Re}(z_0)$$
By intuition this is obvious but I dont know how to show it using the defn. of a limit. This is question 1(a) from the book Complex Variables and Applications.
Here's the basic manipulation I have made going by an example in the book, I dont know where to go from here...
$$|\mathop{\rm Re}(z)-\mathop{\rm Re}(z_0)| = |x - x_0| = |x| - |x_0| = x - x_0$$
| We have
$|z_0-z|^2=(\Re(z_0-z))^2+(\Im(z_0-z))^2\geq (\Re(z_0-z))^2$, so $|z_0-z|\geq |\Re(z_0)-\Re(z)|$. Now, check that $\delta=\varepsilon$ in the definition of the limit works.
|
eigenvalues and eigenvectors of $vv^T$ Given a column vector $v$ in $\mathbb{R}^n$, what are the eigenvalues of matrix $vv^T$ and associated eigenvectors?
PS: not homework even though it may look like so.
| The columns of the matrix are $v_1v,\ldots,v_nv$, so any two columns are linearly dependent, and $vv^T$ has rank at most $1$. The rank is $0$ if $v=0$; if $v\neq 0$, we have $\mathrm{Tr}(vv^T)=|v|^2$, so the eigenvalues are $0$ with multiplicity $n-1$ and $|v|^2$ with multiplicity $1$. As for eigenvectors: $(vv^T)v=|v|^2v$, so $v$ itself is an eigenvector for $|v|^2$, and the eigenspace for $0$ is the hyperplane of vectors orthogonal to $v$.
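A quick numerical confirmation (a sketch):

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0])
A = np.outer(v, v)                 # v v^T, here with |v|^2 = 9

eigvals = np.linalg.eigvalsh(A)    # A is symmetric
print(np.round(eigvals, 10))       # [0. 0. 9.]
print(A @ v, (v @ v) * v)          # v is an eigenvector for |v|^2
```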
|
proof that if $AB=BA$ matrix $A$ must be $\lambda E$ Let $A \in Mat(2\times 2, \mathbb{Q})$ be a matrix with $AB = BA$ for all matrices $B \in Mat(2\times 2, \mathbb{Q})$.
Show that there exists a $\lambda \in \mathbb{Q}$ so that $A = \lambda E_2$.
Let $E_{ij}$ be the matrix with all entries $0$ except $e_{ij} = 1$.
$$
\begin{align}
AE_{11} &= E_{11}A \\
\left( \begin{array}{cc}
a_{11} & a_{12} \\
a_{21} & a_{22} \\
\end{array} \right)
\left( \begin{array}{cc}
1 & 0 \\
0 & 0 \\
\end{array} \right)
&=
\left( \begin{array}{cc}
1 & 0 \\
0 & 0 \\
\end{array} \right)
\left( \begin{array}{cc}
a_{11} & a_{12} \\
a_{21} & a_{22} \\
\end{array} \right)
\\
\left( \begin{array}{cc}
a_{11} & 0 \\
a_{21} & 0 \\
\end{array} \right)
&=
\left( \begin{array}{cc}
a_{11} & a_{12} \\
0 & 0 \\
\end{array} \right) \\
\end{align}
$$
$\implies a_{12} = a_{21} = 0$
And then the same for the other three matrices $E_{12}, E_{21}, E_{22}$ …
I guess it's not the most efficient way of arguing or writing it down … ? Where's the trick to simplify this thingy?
| You wish to show that the center of the ring of matrices is the set of scalar matrices. Observe that a matrix commuting with all diagonal matrices must itself be diagonal. Hence the center must be contained in the set of diagonal matrices. Now a diagonal matrix that commutes with an arbitrary matrix must be a scalar matrix.
|
Ring theory notation question I was wondering what the notation $R[a]$ really stands for, if $a\in K$, where $K$ is a ring and $R$ is a subring of $K$.
In my book they define $\mathbb{Z}[\sqrt2]=\{a+b\sqrt2|a,b \in \mathbb{Z}\}$.
So, my guess is that $R[a] = \{P(a)\mid P \in R[X]\}$. Since for $\mathbb{Z}[\sqrt2]$ this is the case, or is this just a coincidence?
| By definition, $R[a]$ is the smallest subring of $K$ that contains both $R$ and $a$.
As you note, if $p(x)\in R[x]$, then $p(a)\in R[a]$ necessarily. Therefore,
$$\{p(a)\mid p(x)\in R[x]\} \subseteq R[a].$$
Conversely, note that
$$\{p(a)\mid p(x)\in R[x]\}$$
contains $a$ (as $p(a)$ where $p(x)=x$), contains $R$ (the constant polynomials); and is a subring of $K$: every element lies in $K$; it is nonempty; it is closed under differences (since $p(a)-q(a) = (p-q)(a)$); and it is closed under products (since $p(a)q(a) = (pq)(a)$). Thus, $R[a]\subseteq \{p(a)\mid p(x)\in R[x]\}$. Hence we have equality.
More generally, if $a$ is integral over $R$ (satisfies a monic polynomial with coefficients in $R$), then, letting $n$ be the smallest degree of a monic polynomial satisfied by $a$, we have $R[a] = \{r_0 + r_1a+\cdots + r_{n-1}a^{n-1}\mid r_i\in R\}$.
To see this, note that clearly the right hand side is contained in the left hand side. To prove the converse inclusion, let $p(x)$ be a monic polynomial of smallest degree such that $p(a)=0$. By doing induction on the degree, we can prove in the usual way that every element $f(x)$ of $R[x]$ can be written as $f(x)=q(x)p(x) + r(x)$, where $r(x)=0$ or $\deg(r)\lt \deg(p)$; the reason being that the leading coefficient of $p$ is $1$, so we can perform the long-division algorithm without problems. Thus, $f(a) = q(a)p(a)+r(a) = r(a)$, so every element of $R[a]$ can be expressed as in the right hand side.
In the case where $a=\sqrt{2}$, $R=\mathbb{Z}$, $K=\mathbb{R}$ (or $K=\mathbb{Q}(\sqrt{2})$), we have that $a$ satisfies $x^2-2$, so that is why we get
$$\mathbb{Z}[\sqrt{2}] = \{ p(\sqrt{2})\mid p(x)\in\mathbb{Z}[x]\} = \{r_0+r_1\sqrt{2}\mid r_0,r_1\in\mathbb{Z}\}.$$
|
The completion of a Boolean algebra is unique up to isomorphism Jech defines a completion of a Boolean algebra $B$ to be a complete Boolean algebra $C$ such that $B$ is a dense subalgebra of $C$.
I am trying to prove that given two completions $C$ and $D$ of $B$, then the mapping $\pi: C \rightarrow D$ given by $\pi(c) = \sum^D \{ u \in B \ \vert \ u \le c \} $ is an isomorphism. I understand that $c \not= 0 \implies \pi(c) \not= 0$, and that given any $d \in D$, I can write it as $d = \sum^D \{ u \in B \ \vert \ u \le d \} $, but I am unsure how to proceed. Any help would be appreciated.
| Edit: Ignore this answer. See Loronegro's.
Apparently I haven't done enough on Math.SE before, so I have to post this as an answer instead of a comment.
Hopefully this is helpful: first notice that $\pi c \le c$ for every $c\in C$ (the supremum of a bunch of things that are $\le c$ is also $\le c$). From this, you can deduce that, for every $u\in B$, $u\le c$ iff $u\le \pi c$. Therefore the sets $\{u\in B : u\le c\}$ and $\{u\in B : u\le \pi c\}$ are identical, so $\pi^{-1}\pi = 1$. By starting the argument with $\pi^{-1}c \le c$ instead of $\pi c \le c$, you can also conclude that $\pi\pi^{-1} = 1$.
You should feel cheated now, as I have avoided any mention of density. I guess proving that $\sum^C \{u\in B: u\le c\} = c$ requires the density of $B$.
|
What does integration do? I know that integrals are used to compute the area under a curve. Let's say I have $y = x^2$. It creates smaller rectangles and then add up the sum (assuming that rectangles are going infinitely in number and is like going to a limit).
But I recently encountered a problem in my mind. Suppose we have a function, $y = x^2$. If we integrate it, we simply get its antiderivative, which is $x^3/3$, assuming that the area is not of concern. What is the correlation of $x^3/3$ to $x^2$? I mean, integration simply transforms a function into another function, but I can't get a clearer picture. When we graph $x^2$ and $x^3/3$, there is no connection visually. They are simply different graphs.
Thanks and I hope your comments can clear up my mind.
|
The key word here is instantaneous. Although the two graphs are "different", they are linked to each other through the "instantaneous area": if you take a point on your function and calculate the area under the function from, say, $0$ to that point, that value of the area is the same as $g(x)$, where $g(x) = \int f(x)\,dx$.
Consider:
$x =3$: the area under the graph of $f$ from $0$ to $3$ is the same as the value $g(3)$; for $f(x)=x^2$ this is $g(3)=\frac{3^3}{3}=9$.
|
How does $\exp(x+y) = \exp(x)\exp(y)$ imply $\exp(x) = [\exp(1)]^x$? In Calculus by Spivak (1994), the author states in Chapter 18 p. 341 that
$$\exp(x+y) = \exp(x)\exp(y)$$ implies
$$\exp(x) = [\exp(1)]^x$$
He refers to the discussion in the beginning of the chapter where we define a function $f(x + y) = f(x)f(y)$; with $f(1) = 10$, it follows that $f(x) = [f(1)]^x$. But I don't get this either. Can anyone please explain this? Many thanks!
| I like (as I have done here before) to start with a functional equation
and derive properties of the function.
If $f(x+y) = f(x) f(y)$ and $f$ is differentiable (and non-zero somewhere),
$f(0) = 1$ and
$$f(x+h)-f(x) = f(x)f(h)-f(x) = f(x)(f(h)-1) = f(x)(f(h)-f(0)),$$
so
$$(f(x+h)-f(x))/h = f(x)(f(h)-f(0))/h.$$
Letting $h \to 0$,
$f'(x) = f'(0) f(x)$.
From this, $f(x) = \exp(f'(0)x)$.
This also works for $\ln$, $\arctan$, and $\sin$ and $\cos$.
Functional equations are fun!
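For the implication in the original question one can also argue without derivatives, at least for rational exponents (a sketch, not Spivak's exact argument): from $\exp(x+y)=\exp(x)\exp(y)$,
$$\exp(n)=\exp(\underbrace{1+\cdots+1}_{n\text{ times}})=\exp(1)^n,\qquad \exp\left(\tfrac{p}{q}\right)^{q}=\exp(p)=\exp(1)^p,$$
so $\exp(x)=\exp(1)^x$ for all rational $x$; continuity extends this to all real $x$ (indeed, for irrational $x$ this is essentially how $\exp(1)^x$ is defined).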
|
If $A^2 = I$ (Identity Matrix) then $A = \pm I$ So I'm studying linear algebra and one of the self-study exercises has a set of true or false questions. One of the questions is this:
If $A^2 = I$ (Identity Matrix), then $A = \pm I$ ?
I'm pretty sure it is true but the answer says it's false. How can this be false (maybe it's a typography error in the book)?
| I know $2·\mathbb C^2$ many counterexamples, namely
$$A=c_1\begin{pmatrix}
0&1\\
1&0
\end{pmatrix}+c_2\begin{pmatrix}
1&0\\
0&-1
\end{pmatrix}\pm\sqrt{c_1^2+c_2^2-1}\begin{pmatrix}
0&-1\\
1&0
\end{pmatrix},$$
see Pauli Matrices $\sigma_i$.
These are all such matrices, and they can be written as $A=\vec e\cdot \vec \sigma$, where $\vec e\cdot\vec e=1$.
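One member of this family, checked numerically (a sketch; the choice $c_1=2$, $c_2=1$, giving $d=2$, is mine):

```python
import numpy as np

c1, c2 = 2.0, 1.0
d = np.sqrt(c1**2 + c2**2 - 1)        # = 2
A = (c1 * np.array([[0, 1], [1, 0]])
     + c2 * np.array([[1, 0], [0, -1]])
     + d * np.array([[0, -1], [1, 0]]))
print(A)          # [[1. 0.] [4. -1.]] -- not +-I
print(A @ A)      # the 2x2 identity
```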
|
Prove that $n! > \sqrt{n^n}, n \geq 3$
Problem Prove that $n! > \sqrt{n^n}, n \geq 3$.
I'm currently have two ideas in mind, one is to use induction on $n$, two is to find $\displaystyle\lim_{n\to\infty}\dfrac{n!}{\sqrt{n^n}}$. However, both methods don't seem to get close to the answer. I wonder is there another method to prove this problem that I'm not aware of? Any suggestion would be greatly appreciated.
| $(n!)^2 = (n \times 1) \times ((n-1)\times 2) \times \cdots \times (1 \times n) \gt n^n$,
since each paired factor satisfies $k(n+1-k)\ge n$ (because $k(n+1-k)-n=(k-1)(n-k)\ge 0$), with strict inequality for $1<k<n$; for instance $(n-1)\times 2 = 2n-2 \gt n$ iff $n \gt 2$.
Then take the square root.
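A quick check of both the pairing inequality and the final claim (a sketch):

```python
import math

for n in range(3, 30):
    assert all(k * (n + 1 - k) >= n for k in range(1, n + 1))
    assert math.factorial(n) ** 2 > n ** n     # i.e. n! > sqrt(n^n)
print("verified for 3 <= n < 30")
```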
|
Finding limit of fraction with square roots: $\lim_{r\to 9} \frac {\sqrt{r}} {(r-9)^4}$ I have been looking at this for five minutes, no clue what to do.
$$\lim_{r\to 9} \frac {\sqrt{r}} {(r-9)^4}$$
| The limit is $+\infty$ because the numerator approaches a positive number and the denominator approaches $0$ from above. Sometimes one says the limit "doesn't exist" when one means there is no real number that is the limit, so you could put it that way.
|
Trying to figure out a complex equality An answer to a comlex equation I was working on was
$$z = \frac{1}{2} + \frac{i}{2}$$
My teacher further developed it to be
$$e^{\frac{i\pi}{4}-\frac{1}{2}\ln{2}}$$
And here's what I tried:
$$z = \frac{1}{2} + \frac{i}{2} = \frac{1}{\sqrt{2}}e^{\frac{i\pi}{4}}
= e^{\frac{1}{2}\ln{2}}e^{\frac{i\pi}{4}} = e^{\frac{1}{2}\ln{2}+\frac{i\pi}{4}}$$
I feel this is stupid, but I can't see why we have different answers. Anyone? Thanks!
| The mistake occurs here:
$$\frac{1}{\sqrt{2}}e^{\frac{i\pi}{4}}
= e^{\frac{1}{2}\ln{2}}e^{\frac{i\pi}{4}}.$$
In fact, we have
$$e^{\frac{1}{2}\ln{2}}=2^{\frac{1}{2}}=\sqrt{2}.$$
Therefore, we should have
$$\frac{1}{\sqrt{2}}=(\sqrt{2})^{-1}
= e^{-\frac{1}{2}\ln{2}}.$$
Fixing this, your answer matches your teacher's answer.
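A one-line numeric confirmation (a sketch):

```python
import cmath

print(cmath.exp(1j * cmath.pi / 4 - 0.5 * cmath.log(2)))   # (0.5+0.5j)
```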
|
Finding the n-th root of a complex number I am trying to solve $z^6 = 1$ where $z\in\mathbb{C}$.
So What I have so far is :
$$z^6 = 1 \rightarrow r^6\operatorname{cis}(6\theta) = 1\operatorname{cis}(0 + \pi k)$$
$$r = 1,\ \theta = \frac{\pi k}{6}$$
$$k=0: z=\operatorname{cis}(0)=1$$
$$k=1: z=\operatorname{cis}\left(\frac{\pi}{6}\right)=\frac{\sqrt{3} + i}{2}$$
$$k=2: z=\operatorname{cis}\left(\frac{\pi}{3}\right)=\frac{1 + \sqrt{3}i}{2}$$
$$k=3: z=\operatorname{cis}\left(\frac{\pi}{2}\right)=i$$
According to my book I have a mistake since non of the roots starts with $\frac{\sqrt{3}}{2}$, also even if I continue to $k=6$ I get different (new) results, but I thought that there should be (by the fundamental theorem) only 6 roots.
Can anyone please tell me where my mistake is? Thanks!
| Hint: Imagine that there is a unit circle on the Argand plane. The roots will be the 6 equidistant points on that circle, each having argument a multiple of $\frac{2\pi}{6}=\frac{\pi}{3}$.
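The hint in code (a sketch): the six roots have arguments at multiples of $\frac{2\pi}{6}$, not $\frac{\pi}{6}$:

```python
import cmath

for k in range(6):
    z = cmath.exp(2j * cmath.pi * k / 6)
    print(k, complex(round(z.real, 4), round(z.imag, 4)), abs(z**6 - 1) < 1e-9)
```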
|
Is it possible to generate a unique real number for each fixed length sequence of real numbers? Let A be the set of all sequences of real numbers of size $n$. Does there exist an injection from A to R?
I know this is possible if we are only considering integers instead of real numbers; But I am not sure if it is possible if we consider real numbers instead.
For integers, we can generate a unique integer using the following method:
Let S be a sequence of integers of size n. $S = s_1,s_2,\ldots,s_n$. Let $P = p_1,p_2,\ldots,p_n$ be the sequence of $n$ primes. Then $f(S) = (p_1^{s_1})(p_2^{s_2})\cdots(p_n^{s_n})$ creates a unique integer for each sequence $S$.
If each $s_i$ was a real number instead, would $f(S)$ still be an injection? If not, is there an alternative injective function from A to R?
edit:
I fixed some of my poor wording.
I am trying to find an injection function from A to R. Such a function does exist and the function I proposed clearly does not work (From the comments).
If possible, I would like to find an injective function that does not involve directly manipulating the decimal expansions.
| You can easily create an injection even for infinite sequences of reals (an injective mapping from sequences to real numbers). So your request is too weak.
See Boas Primer of Real Functions Exercise 3.13 and its solution in the back.
Also see my thread about this exercise.
|
what is Prime Gaps relationship with number 6? Out of the 78499 prime numbers under 1 million, there are 32821 prime gaps (difference between two consecutive prime numbers) that are a multiple of 6. A bar chart of differences and frequency of occurrence shows a local maximum at each multiple of 6. Why is 6 so special?
| Take any integer $n> 3$, and divide it by $6$. That is, write
$n = 6q + r$
where $q$ is a non-negative integer and the remainder $r$ is one of $0$, $1$, $2$, $3$, $4$, or $5$.
If the remainder is $0$, $2$ or $4$, then the number $n$ is divisible by $2$, and can not be prime.
If the remainder is $3$, then the number $n$ is divisible by $3$, and can not be prime.
So if $n$ is prime, then the remainder $r$ is either
* $1$ (and $n = 6q + 1$ is one more than a multiple of six), or
* $5$ (and $n = 6q + 5 = 6(q+1) - 1$ is one less than a multiple of six).
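So every prime above $3$ sits at $\pm1 \bmod 6$, and gaps between primes in the same residue class are multiples of $6$. A counting sketch (assuming sympy is available):

```python
from collections import Counter
from sympy import primerange

primes = list(primerange(2, 10**6))
gaps = [q - p for p, q in zip(primes, primes[1:])]
print(len(primes), Counter(g % 6 for g in gaps))
# Gaps divisible by 6 dominate -- compare the counts in the question.
```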
|
A question concerning maps of $G$-coverings I am having difficulties thinking about how an argument for the following exercise should proceed:
Let $p: Y \rightarrow X$ and $q: Z \rightarrow X$ be $G$-coverings (i.e., covering maps such that $X = Y /G = Z/G$ (quotient spaces)), with $X$ connected and locally path connected. Let $\phi: Y \rightarrow Z$ and $\psi: Y \rightarrow Z$ be maps of $G$-coverings (i.e., a covering homomorphism such that $\phi(g \cdot y) = g \cdot \phi(y)$ for all $g \in G$ and $y \in Y$, same with $\psi$). Assume that $\phi(y) = \psi(y)$ for some $y \in Y$. Show that $\phi(y) = \psi(y)$ for every point $y \in Y$.
If $Y$ is connected, this should be an immediate consequence of the unique lifting property for maps, but in the general case I am lost, and I am curious to know if anyone visiting would know how to proceed.
| (Sorry I can't comment)
Show that each connected component of $Y$ contains a preimage of every $z\in Z$. Fix a connected component $Y_0$ of $Y$. Take a $y\in Y$. Using the transitivity of $G$, translate $y$ to $Y_0$. Then use the fact that you know the statement is true for $Y_0$.
|
How to show a matrix is full rank? I have some discussion with my friend about matrix rank.
But we find that even though we know how to compute the rank, we don't know how to show that the matrix is full rank.
How to show this?
| If you are talking about square matrices, just compute the determinant.
If that is non-zero, the matrix is of full rank.
If the matrix $A$ is $n$ by $m$, assume wlog that $m\leq n$ and compute all determinants of $m$ by $m$ submatrices. If one of them is non-zero, the matrix has full rank.
Also, you can solve the linear equation $Ax=0$ and figure out what dimension the space of solutions has. If that space contains only the trivial solution (dimension $0$), then the matrix is of full rank.
Note that a matrix has the same rank as its transpose.
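Both checks in numpy (a sketch):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # 3 x 2

print(np.linalg.matrix_rank(A) == min(A.shape))       # True: full rank
# Null space dimension = (#columns) - rank; 0 confirms full rank.
print(A.shape[1] - np.linalg.matrix_rank(A))          # 0
```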
|
intermediate step in proving old Ramsey lower bound Let $r(n,n)=r(n)$ be the usual Ramsey number of a graph. It is known that $$\frac{1}{e\sqrt{2}}n2^{n/2}<r(n)$$ as a lower bound for $r(n).$
Now, in the proof given in the book Erdős on Graphs by Graham and Chung, as an intermediate step this is given:
$$2^{\binom{m}{2}}>\binom{m}{n}2^{\binom{m}{2}-\binom{n}{2}+1}\;,\tag{*}$$ and that this implies that $$m\ge\frac{1}{e\sqrt{2}}n2^{n/2}\;.\tag{**}$$
I cannot figure out how $(*)$ implies $(**)$. Can someone please explain this?
| In fact, the inequality $(**)$ should be the other way around.
As Austin Mohr noted, Stirling's formula comes in handy here. The form that I will use is
$$n! \sim \biggl(\dfrac{n}{e}\biggr)^n \sqrt{2\pi n}. \tag{1}$$
Also, I assume that $m \to \infty$ and that $m - n \to \infty$.
We start by observing that the inequality $(*)$ is equivalent to
$$\dbinom{m}{n} < 2^{\binom{n}{2} - 1}. \tag{2}$$
Since
$$\dbinom{m}{n} \geq \dfrac{(m - n)^n}{n!},$$
we have from $(2)$ that
$$(m - n)^n < n! 2^{\binom{n}{2} - 1}.$$
Plugging in Stirling's formula $(1)$ on the right-hand side gives
$$m^n \biggl(1 - \dfrac{n}{m}\biggr)^n < \biggl(\dfrac{n}{e}\biggr)^n \sqrt{\dfrac{\pi n}{2}} 2^{\binom{n}{2}}.$$
Taking $n$th roots, we get
$$m \biggl(1 - \dfrac{n}{m}\biggr) < \dfrac{n}{e} 2^{\frac{n - 1}{2}} \biggl(\dfrac{\pi n}{2}\biggr)^{1/(2n)}.$$
Finally, observing that $(\frac{\pi n}{2})^{1/(2n)} / (1 - \frac{n}{m}) \to 1$ as $m$, $n \to \infty$, we end up with
$$m < \dfrac{1}{e\sqrt{2}} n 2^{n/2},$$
as desired.
|
Two ants on a triangle puzzle Last Saturday's Guardian newspaper contained the following puzzle:
Two soldier ants start on different vertices of an equilateral triangle. With each move, each ant moves independently and randomly to one of the other two vertices. If they meet, they eliminate each other. Prove that mutual annihilation is eventually assured. What are the chances they survive... exactly N moves.
I understand that the probability of a collision on any one move is 1/4, but I don't understand the quoted proof of eventual annihilation:
If the chances of eventual mutual annihilation are P, then P = 1/4 + 3/4 P, so P = 1.
I scratched my head for a while but I still couldn't follow it. Do they mean the probability in the limit of an infinite number of moves? Or is there something crucial in that calculation of P that I'm not getting?
| Note that the fact that it's a triangle is irrelevant. The ants could move to random vertices on any $n$-gon and the result would be the same.
Put another way, if two people repeatedly choose random integers from $1$ to $n$, they will eventually choose the same number.
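Spelling out the "exactly $N$ moves" part (an addition, using only the $1/4$ collision probability per move): after every non-collision the ants are again on two distinct vertices, so the moves are independent trials and the time to annihilation is geometric:
$$P(\text{annihilation on move } N) = \left(\tfrac{3}{4}\right)^{N-1}\tfrac14,\qquad P(\text{both survive the first } N \text{ moves}) = \left(\tfrac{3}{4}\right)^{N},$$
and $\sum_{N\ge1}\left(\tfrac34\right)^{N-1}\tfrac14 = 1$ recovers the certainty of eventual annihilation.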
|
Does $\int_{1}^{\infty}\sin(x\log x)dx $ converge? I'm trying to find out whether $\int_{1}^{\infty}\sin(x\log x)dx $ converges. I know that $\int_{1}^{\infty}\sin(x)dx $ diverges but $\int_{1}^{\infty}\sin(x^2)dx $ converges; more than that, $\int_{1}^{\infty}\sin(x^p)dx $ converges for every $p>1$, so this integral should converge at infinity as well. I'd really love your help with this.
Thanks!
| Since $x\log(x)$ is monotonic on $[1,\infty)$, let $f(x)$ be its inverse. That is, for $x\in[0,\infty)$
$$
f(x)\log(f(x))=x\tag{1}
$$
Differentiating implicitly, we get
$$
f'(x)=\frac{1}{\log(f(x))+1}\tag{2}
$$
Then
$$
\begin{align}
\int_1^\infty\sin(x\log(x))\;\mathrm{d}x
&=\int_0^\infty\sin(x)\;\mathrm{d}f(x)\\
&=\int_0^\infty\frac{\sin(x)}{\log(f(x))+1}\mathrm{d}x\tag{3}
\end{align}
$$
Since $\left|\int_0^M\sin(x)\;\mathrm{d}x\right|\le2$ and $\frac{1}{\log(f(x))+1}$ monotonically decreases to $0$, Dirichlet's test (Theorem 17.5) says that $(3)$ converges.
|
An epimorphism from $S_{4}$ to $S_{3}$ having the kernel isomorphic to Klein four-group Exercise $7$, page 51 from Hungerford's book Algebra.
Show that $N=\{(1),(12)(34), (13)(24),(14)(23)\}$ is a normal subgroup
of $S_{4}$ contained in $A_{4}$ such that $S_{4}/N\cong S_{3}$ and
$A_{4}/N\cong \mathbb{Z}_{3}$.
I solved the question after many calculations. I would like to know if it is possible to define an epimorphism $\varphi$ from $S_{4}$ to $S_{3}$ such that $N=\ker(\varphi)$.
Thanks for your kindly help.
| Here is an approach:
Proof Idea: $S_4/N$ is a group with 6 elements. There are only two such groups, one is cyclic and the other is $S_3$, and $S_4/N$ cannot have elements of order $6$ thus must be $S_3$.
First it is easy to show that $N$ is normal in $S_4$. It follows that $S_4/N$ is a group with $6$ elements. Let us call this factor $G$.
Now, $G$ is a group of order $6$. As no element of $S_4$ has order $6$, it follows that $G$ has no element of order $6$.
Pick two elements $x,y \in G$ such that $\operatorname{ord}(x)=2$ and $\operatorname{ord}(y)=3$. Then, $e, y, y^2, x, xy, xy^2$ must be 6 distinct elements of $G$, and hence
$$G= \{ e, y, y^2, x, xy, xy^2 \}$$
Now, let us look at $yx$. This cannot be $xy$, as in this situation we would have $\operatorname{ord}(xy)=6$. This cannot be $e, x, y, y^2$ either. This means that
$$yx=xy^2$$
Now it is trivial to construct an isomorphism from $G$ to $S_3$.
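A brute-force confirmation (a sketch) that the quotient is a nonabelian group of order $6$, which pins it down as $S_3$:

```python
from itertools import permutations

def compose(p, q):                        # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

N = {(0, 1, 2, 3), (1, 0, 3, 2), (2, 3, 0, 1), (3, 2, 1, 0)}

def coset(g):
    return frozenset(compose(g, n) for n in N)

cosets = {coset(g) for g in permutations(range(4))}
print(len(cosets))                        # 6

# Nonabelian: the transpositions (01) and (12) do not commute mod N.
a, b = (1, 0, 2, 3), (0, 2, 1, 3)
print(coset(compose(a, b)) != coset(compose(b, a)))   # True
```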
|
Eigenvalues of a infinitesimal generator matrix Consider a Markov process on a finite state space $S$, whose dynamic is determined by a certain infinitesimal generator $Q$ (that is a matrix in this case) and an initial distribution $m$.
1) Is there anything general that can be said on the spectrum of the matrix $-Q$?
2) Is there any bounds on the eigenvalues of $-Q$? And on their ratios?
I am interested in linear algebra results as well as more probabilistic ones, maybe using information about the structure of the state space or on the process itself.
If can be of any help, for the problem I am addressing, I usually have a quite big state space and the Q-matrix is quite sparse, but unfortunately without any symmetry or nice structure.
For discrete-time Markov chains there is a huge literature linking properties of the spectrum of the transition matrix $P$ to various mixing times, geometric properties of the state space, etc. Of course one can move from the continuous setting to the discrete one via uniformization and thus "translate" most of these results, but I was wondering the following
3) Is there any bounds on the eigenvalues developed specifically for the continuous time setting? And generally speaking, does anyone know any references for such continuous-time theory?
| The Gershgorin Circle Theorem can be used to construct a set of closed balls (i.e., circles) in the complex plane that are guaranteed to contain the eigenvalues of the matrix.
This theorem is not specific to generator matrices, so there may be some other related result that takes advantage of these matrices' specific structure... I'm looking into this question myself. I'll post an update if I find something better.
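For a generator matrix (nonnegative off-diagonal entries, zero row sums) the Gershgorin discs are centered at $q_{ii}\le 0$ with radius $|q_{ii}|$, so every eigenvalue of $-Q$ has nonnegative real part and modulus at most $2\max_i |q_{ii}|$. A numerical illustration (a sketch with a random generator):

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.random((6, 6))
np.fill_diagonal(Q, 0)
np.fill_diagonal(Q, -Q.sum(axis=1))   # rows sum to 0: a valid generator

eig = np.linalg.eigvals(-Q)
print((eig.real >= -1e-12).all())                              # Re >= 0
print((abs(eig) <= 2 * abs(np.diag(Q)).max() + 1e-12).all())   # Gershgorin
```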
|
How to directly prove that $M$ is maximal ideal of $A$ iff $A/M$ is a field? An ideal $M$ of a commutative ring $A$ (with unity) is maximal iff $A/M$ is a field.
This is easy with the correspondence of ideals of $A/I$ with ideals of $A$ containing $I$, but how can you prove it directly? Take a nonzero $x + M \in A/M$. How can you construct $y + M \in A/M$ such that $xy - 1 \in M$? All I can deduce from the maximality of $M$ is that $(M,x) = A$.
| From $(M,x)=A$ you can infer that there are $m\in M$ and $y\in A$ such that $m+xy=1$. Then $xy-1=-m\in M$, i.e. $xy+M=1+M$, so $y+M$ is the desired inverse of $x+M$.
|
Why does $\sum_n\binom{n}{k}x^n=\frac{x^k}{(1-x)^{k+1}}$? I don't understand the identity $\sum_n\binom{n}{k}x^n=\frac{x^k}{(1-x)^{k+1}}$, where $k$ is fixed.
I first approached it by considering the identity
$$
\sum_{n,k\geq 0} \binom{n}{k} x^n y^k = \sum_{n=0}^\infty x^n \sum_{k=0}^n \binom{n}{k} y^k = \sum_{n=0}^\infty x^n (1+y)^n = \frac{1}{1-x(1+y)}.
$$
So setting $y=1$ shows $\sum_{n,k\geq 0}\binom{n}{k}x^n=\frac{1}{1-2x}$. What happens if I fix some $k$ and let the sum range over just $n$? Thank you.
| You can work directly with properties of the binomial coefficient. For $k\ge 0$ let $$f_k(x)=\sum_{n\ge 0}\binom{n}kx^n\;.$$ Then, for $k\ge 1$ (with the convention that $\binom{-1}{j}=0$),
$$\begin{align*}
f_k(x)&=\sum_{n\ge 0}\binom{n}{k}x^n\\
&=\sum_{n\ge 0}\left[\binom{n-1}{k-1}+\binom{n-1}{k}\right]x^n\\
&=\sum_{n\ge 0}\left[\binom{n}{k-1}+\binom{n}k\right]x^{n+1}\\
&=x\sum_{n\ge 0}\binom{n}{k-1}x^n+x\sum_{n\ge 0}\binom{n}kx^n\\
&=xf_{k-1}(x)+xf_k(x)\;,
\end{align*}$$
so $(1-x)f_k(x)=xf_{k-1}(x)$, and $$f_k(x)=\frac{x}{1-x}f_{k-1}(x)\;.\tag{1}$$
Since $$f_0(x)=\sum_{n\ge 0}\binom{n}0x^n=\sum_{n\ge 0}x^n=\frac1{1-x}\;,$$ an easy induction using $(1)$ yields the desired result, $f_k(x)=\dfrac{x^k}{(1-x)^{k+1}}$.
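A quick symbolic spot check of the closed form (a SymPy sketch; the choices $k=3$ and truncation order $10$ are arbitrary):

    from sympy import symbols, binomial, series

    x = symbols('x')
    k = 3                                  # an arbitrary fixed k
    f = x**k / (1 - x)**(k + 1)
    s = series(f, x, 0, 10).removeO()
    assert all(s.coeff(x, n) == binomial(n, k) for n in range(10))
    print("coefficients match binomial(n, 3) for n < 10")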
|
Rittner equation I would like to know if the Rittner equation:
$$\partial_{t}\varPhi(x,t)=k\,\partial_{xx}\varPhi(x,t)-\alpha\,\partial_{x}\varPhi(x,t)-\beta\,\varPhi(x,t)+g(x,t)$$
can be solved using the Lax pair method or the Fokas method.
Thanks
| Setting $\psi(x,t) = e^{(\alpha^2/4k + \beta)t - \alpha x/2k} \Phi(x,t)$ and plugging in I get
$$
\begin{align*} \psi_t(x,t) - k \psi_{xx}(x,t) &= e^{(\alpha^2/4k + \beta)t - \alpha x/2k}g(x,t) \\
&\equiv f(x,t).
\end{align*}
$$
If the forcing $f(x,t)$ is in $L^1(\mathbf{R}\times \mathbf{R})$ then we can use the heat kernel
$$
K(x,t)=\begin{cases} (4\pi k t)^{-1/2} e^{-x^2/(4kt)},& t>0 \\ 0,& t\leq 0 \end{cases}
$$
and a solution to the problem on the domain $\mathbf{R}\times\mathbf{R}$ is $\psi= K*f$. Unfortunately this puts very harsh restrictions on the original forcing term $g=g(x,t)$ because of the exponential growth of the factor $e^{(\alpha^2/4k + \beta)t - \alpha x/2k}$.
Similarly, if one is interested in the Cauchy problem with $\Phi(x,0)=\Phi_0(x)$, then the new problem has initial data $\psi_0(x) = e^{-\alpha x/2k} \Phi_0(x)$. Using heat kernel methods this can again lead to severe restrictions. Recall the solution to the homogeneous Cauchy problem would be
$$
\psi(x,t)=K_t * \psi_0(x)
$$
which is well defined in a classical sense if, for instance, $\psi_0 \in L^p(\mathbf{R})$ for $1\leq p\leq \infty$. This means the associated initial data $\Phi_0(x)$ would need exponential decay, which is a little artificial.
For the Cauchy problem for the Rittner equation on the full line (for simplicity consider the homogeneous problem $g=0$, the more general case is similar) one can use the Fourier transform. Setting $\psi = e^{\beta t} \Phi$ we have the Cauchy problem
$$
\begin{align*}\psi_t - k \psi_{xx} + \alpha \psi_x &=0, \\
\psi(x,0) &= \Phi_0(x), \end{align*}
$$
the solution to which is
$$
\psi(x,t) = \frac{1}{2\pi} \int_{-\infty}^\infty e^{\mathrm{i} \lambda x -\lambda (k\lambda + \mathrm{i}\alpha) t} \hat{\Phi}_0(\lambda)\, \mathrm{d}\lambda.
$$
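For what it's worth, this Fourier representation is straightforward to evaluate numerically on a large periodic box via the FFT (a sketch; all parameter values and the Gaussian initial datum are arbitrary choices):

    import numpy as np

    # psi_t = k psi_xx - alpha psi_x solved spectrally; Phi = exp(-beta t) psi.
    k, alpha, beta, t = 0.5, 1.0, 0.2, 2.0
    L, N = 50.0, 1024
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    Phi0 = np.exp(-x**2)                             # initial data

    lam = 2 * np.pi * np.fft.fftfreq(N, d=L / N)     # Fourier variable
    psi_hat = np.exp(-lam * (k * lam + 1j * alpha) * t) * np.fft.fft(Phi0)
    Phi = np.exp(-beta * t) * np.real(np.fft.ifft(psi_hat))
    print(Phi.max())                                 # decayed, advected Gaussian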
The problem on the half-line is more subtle. The Fokas method provides a solution in this case (for given boundary data at $x=0$, $\Phi(0,t)=\Phi_1(t)$ say) in terms of an integral along a smooth contour in the complex $\lambda$-plane. Now if the initial data $\Phi_0(x)$ has sufficient exponential decay (which is a big restriction), then one can show that this contour can be deformed onto the real $\lambda$-axis, giving a Fourier type integral as in the case of the Cauchy problem on the full line. However, in general, this contour cannot be shifted.
This problem is treated in Fokas' book, A unified approach to boundary value problems. He emphasizes that this inability to shift the contour back to the real axis (in the half-line problem) reflects the fact that there is no classical transform for the problem.
|
Reflect a complex number about an arbitrary axis This should be really obvious, but I can't quite get my head round it:
I have a complex number $z$. I want to reflect it about an axis specified by an angle $\theta$.
I thought, this should simply be, rotate $z$ by $\theta$, flip it (conj), then rotate by $-\theta$.
But this just gives $z^* (e^{-i\theta})^* e^{i\theta} $...
but this can't be right - as it's just $z^*$ rotated by angle $2\theta$, surely?
| Indeed, it's just $z^*$ rotated by $2\theta$... And it's almost the right answer! You know that a symmetry composed with a rotation is still a symmetry, and you know that an (orthogonal) symmetry is characterized by its fixed points. So you want to get a symmetry that fixes the axis spanned by $e^{i\theta}$ (as an $\mathbb{R}$ vector space); an easy way to do that is, as you've noticed, to rotate by $-\theta$ (and not $\theta$ actually), flip over the real line (conjugation) and then rotate by $\theta$. So you get $(ze^{-i\theta})^* e^{i\theta} = z^* e^{2i\theta}$. It is easily checked that this fixes $e^{i\theta}$, and it's an orthogonal symmetry, therefore it's the one you're looking for.
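A quick numeric sanity check of $z\mapsto z^{*}e^{2i\theta}$ (a sketch; the sample values of $\theta$ and $z$ are arbitrary):

    import cmath

    theta = 0.7
    axis = cmath.exp(1j * theta)

    def reflect(z):
        return z.conjugate() * cmath.exp(2j * theta)

    z = 1.3 - 0.4j
    assert abs(reflect(axis) - axis) < 1e-12        # the axis is fixed
    assert abs(reflect(reflect(z)) - z) < 1e-12     # an involution
    assert abs(abs(reflect(z)) - abs(z)) < 1e-12    # an isometry
    print(reflect(z))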
|
Finite dimensional vector space with subspaces
Possible Duplicate:
Could intersection of a subspace with its complement be non empty.
Is it possible for a finite dimensional vector space to have 2 disjoint subspaces of the same dimension?
Any help would be much appreciated.
| As David Giraudo points out, any subspace $U\subseteq V$ is going to contain the zero vector (it has to: the subspace is a vector space so is closed under multiplication by elements of the underlying scalar field, and zero is in the scalar field, so multiplying by it tells us the zero vector is in the subspace). In this way, no two vector subspaces are disjoint as sets. In fact, the smallest subspace is trivial, $\{0\}$.
There is a linear notion of "disjoint" here: orthogonality (with respect to an inner product). For example, both a line and a plane through the origin are subspaces, and if they intersect at a right angle they are orthogonal. More generally, two vector subspaces $U$ and $W$ are orthogonal if for every $u\in U, w\in W$, the vectors $u,w$ are perpendicular. The orthogonal complement of a subspace is the maximal orthogonal subspace to it (see Wikipedia). So saying $U,W$ are orthogonal is equivalent to both $U\subseteq W^\perp$ and $W\subseteq U^\perp$.
|
Essay about the art and applications of differential equations? I teach a high school calculus class. We've worked through the standard derivatives material, and I incorporated a discussion of antiderivatives throughout. I've introduced solving "area under a curve" problems as solving differential equations. Since it's easy to see the rate at which area is accumulating (the height of the function), we can write down a differential equation, take an antiderivative to find the area function, and solve for the constant.
Anyhow, I find myself wanting to share with my students what differential equations are all about. I don't know so much about them, but I have a sense that they are both a beautiful pure mathematics subject and a subject that has many, many applications.
I'm hoping to hear suggestions for an essay or a chapter from a book--something like 3 to 10 pages--that would discuss differential equations as a subject, give some interesting examples, and point to some applications. Any ideas?
| Differential equations is a rather immense subject. In spite of the risk of overwhelming you with the amount of information, I recommend looking in the Princeton Companion to Mathematics, from which the relevant sections are (page numbers are within parts)
*
*Section I.3.5.4 for an introductory overview
*Section I.4.1.5
*Section III.23 on differential equations describing fluids (including the Navier-Stokes equation which is the subject of one of the Millennium problems)
*Section III.36 especially on the heat equation and its relation to various topics in mathematical physics and finance
*Section III.51 on wave phenomena
*Section IV.12 on partial differential equations as a branch of mathematics
*Section IV.14 on dynamical systems and ordinary differential equations
*Section V.36 on the three body problem
*Section VII.2 on mathematical biology
Some of this material may be too advanced or too detailed for your purposes. But it may on the other hand provide keywords and phrases for you to improve your search.
|
Does $\sum_{n=0}^{\infty}e^{-|x-n|}$ converge and uniformly converge? For $ 0 \leq x < \infty$, I'd like your help with deciding whether the following series converges, and converges uniformly: $\sum_{n=0}^{\infty}e^{-|x-n|}$.
$$\sum_{n=0}^{\infty}\frac{1}{e^{|x-n|}}=\sum_{n=0}^{N}\frac{1}{e^{|x-n|}}+\sum_{i=0}^{\infty}\frac{1}{e^{i}}.$$
I divided the sum into two sums, where $N$ is the place where $x=n$; the first sum is finite and the second one converges, so the total sum converges. Am I allowed to say that the sum converges uniformly, since the second series, whose tail converges, does not depend on $x$?
Thanks.
| Looking at the given series we see that uniform convergence is endangered by the fact that for arbitrarily large $n$ we can find an $x$ where the $n$-th term is large, namely $\ =e^0=1$. Therefore we try to prove that the convergence is not uniform by exploiting this fact.
Put $s_n(x):=\sum_{k=0}^n e^{-|x-k|}$. By Cauchy's criterion uniform convergence means that for any $\epsilon>0$ there is an $n_0$, depending on $\epsilon$, such that
$$\bigl|s_m(x)-s_n(x)\bigr|<\epsilon\qquad\forall m>n_0,\ \forall n>n_0,\ \forall x\in{\mathbb R}\ .$$
Assume now there is such an $n_0$ for $\epsilon:={1\over2}$ and put
$$n:=n_0+1, \quad m:=n_0+2, \quad x:=m\ .$$
Then $s_n$ and $s_m$ differ by just one term (the $m$th), and one has
$$\bigl|s_m(x)-s_n(x)\bigr|=e^{-|x-m|}=e^0>{1\over2}=\epsilon\ ,$$
which is a contradiction.
|
Symmetries of the singular vectors of the line graph Consider the matrix $$ A = \left( \begin{matrix} 1/2 & 1/2 & 0 & 0 & 0 & 0 \\ 1/3 & 1/3 & 1/3 & 0 & 0 & 0 \\ 0 & 1/2 & 0 & 1/2 & 0 & 0 \\
\vdots & \ & \ddots & & \ddots & \\
0 & \cdots & 0 & 1/3 & 1/3 & 1/3 \\
0 & \cdots & 0 & 0 & 1/2 & 1/2 \end{matrix} \right) $$
This question is about the singular vectors of $A$, i.e., the eigenvectors of $A^T A$. I'll denote them by $v_1, \ldots, v_n$. Suppose $n$ is even, and let $P$ be the permutation matrix which flips the vector around the midpoint, i.e., $P$ swaps entries $1$ and $n$, entries $2$ and $n-1$, and so on down to entries $n/2$ and $n/2+1$.
Observation: A few numerical experiments in MATLAB suggest that exactly half of the $v_i$ satisfy $Pv=-v$ and the remainder satisfy $Pv=v$.
My question: Can someone provide an explanation for why this is the case?
Update: I suppose I should note that it isn't surprising that we can find eigenvectors which satisfy either $Pv=v$ or $Pv=-v$. Indeed, we can go through the list of $v_1, \ldots, v_n$ and if $Pv_i \neq v_i$, then because $Pv_i$ is also an eigenvector of $A^T A$ with the same eigenvalue, we can replace $v_i$ with $v_i'=v_i+Pv_i, v_i''=v_i-Pv_i$; then $Pv_i' = v_i', Pv_i'' = -v_i''$; finally we can throw out the redundant vectors. What is surprising to me is that the orthogonal basis returned by the MATLAB eig command has exactly half of the vectors which satisfy $Pv=v$ and exactly half which satisfy $Pv=-v$.
| The vectors (general vectors, not eigenvectors) with either kind of parity form $n/2$-dimensional subspaces. For instance, a basis for the even vectors is $(1,0,0,\dotsc,0,0,1)$, $(0,1,0,\dots,0,1,0)$, $\dotsc$
You've already explained that we can choose an eigenbasis for $A$ in which all vectors have definite parity. Since there can be at most $n/2$ of either kind, there must be exactly $n/2$ of either kind.
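A numeric version of the MATLAB experiment (a sketch; I spell out one reading of the dotted pattern for $n=6$, and note that if $A^TA$ had a repeated eigenvalue the returned basis need not split by parity):

    import numpy as np

    # One reading of the dotted pattern, written out explicitly for n = 6.
    A = np.array([
        [1/2, 1/2, 0,   0,   0,   0  ],
        [1/3, 1/3, 1/3, 0,   0,   0  ],
        [0,   1/2, 0,   1/2, 0,   0  ],
        [0,   0,   1/2, 0,   1/2, 0  ],
        [0,   0,   0,   1/3, 1/3, 1/3],
        [0,   0,   0,   0,   1/2, 1/2],
    ])
    n = A.shape[0]
    P = np.fliplr(np.eye(n))              # the flip permutation

    assert np.allclose(P @ A @ P, A)      # hence A^T A commutes with P
    w, V = np.linalg.eigh(A.T @ A)
    even = sum(np.allclose(P @ V[:, j], V[:, j]) for j in range(n))
    odd = sum(np.allclose(P @ V[:, j], -V[:, j]) for j in range(n))
    print(even, odd)                      # 3 3 when all eigenvalues are simple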
|
badly approximated numbers on the real line form a meagre set Let $S$ be the set of real numbers $x$ such that there exist infinitely many (reduced) rational numbers $p/q$ such that $$\left\vert x-\frac{p}{q}\right\vert <\frac{1}{q^8}.$$
I would like to prove that $\mathbb{R}\setminus S$ is a meagre set (i.e. union of countably many nowhere dense sets).
I have no idea about how to prove this, as I barely visualise the problem in my mind. I guess that the exponent $8$ is not just a random number, as it seems to me that with lower exponents (perhaps $2$?) the inequality holds for infinitely many rationals for every $x\in\mathbb{R}$.
Could you help me with that?
Thanks.
| The idea is to transform the quantifiers into unions/intersections.
For example, let $T$ be the same as $S$ but dropping the infinitely many assumption.
Consider $A_{\frac p q}=\left(\frac p q-\frac 1 {q^8},\ \frac p q +\frac 1{q^8}\right)$; then $T=\bigcup_{\frac p q\in\mathbb Q}A_{\frac p q}$. Thus, $T$ is a countable union of open sets ($\mathbb Q$ is countable). The same idea applies to $S$: the condition "infinitely many" translates into $$S=\bigcap_{N\ge 1}\ \bigcup_{q>N}\ \bigcup_{p}A_{\frac p q}\;.$$ Each inner union is open, and it is dense because it contains every rational with reduced denominator greater than $N$. Hence $$\mathbb R\setminus S=\bigcup_{N\ge 1}\Bigl(\mathbb R\setminus\bigcup_{q>N}\bigcup_p A_{\frac p q}\Bigr)$$ is a countable union of closed sets with empty interior, i.e. meagre.
|
Generalization of Pythagorean triples Is it known whether for any natural number $n$, I can find (infinitely many?) nontrivial integer tuples $$(x_0,\ldots,x_n)$$ such that $$x_0^n + \cdots + x_{n-1}^n = x_n^n?$$
Obviously this is true for $n = 2$.
Thanks.
| These Pythagorean triples can appear in the most unexpected places.
If: $a^2+b^2=c^2$
Then the identity $N_1^3+N_2^3+N_3^3+N_4^3+N_5^3=N_6^3$ holds, with:
$N_1=cp^2-3(a+b)ps+3cs^2$
$N_2=bp^2+3bps-3bs^2$
$N_3=ap^2+3aps-3as^2$
$N_4=-bp^2+3(2c-b)ps+3(3c-3a-2b)s^2$
$N_5=-ap^2+3(2c-a)ps+3(3c-2a-3b)s^2$
$N_6=cp^2+3(2c-a-b)ps+3(4c-3a-3b)s^2$
And more:
$N_1=cp^2-3(a+b)ps+3cs^2$
$N_2=bp^2+3bps-3bs^2$
$N_3=ap^2+3aps-3as^2$
$N_4=(3c+3a+2b)p^2-3(2c+b)ps+3bs^2$
$N_5=(3c+2a+3b)p^2-3(2c+a)ps+3as^2$
$N_6=(4c+3a+3b)p^2-3(2c+a+b)ps+3cs^2$
$a,b,c$ - can each have any sign we want.
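A quick numeric check of the first family (a Python sketch; the triple $(3,4,5)$ and the sampled $(p,s)$ are arbitrary choices):

    # Numeric check of the first family with the triple (3, 4, 5).
    a, b, c = 3, 4, 5
    for p, s in [(1, 1), (2, 1), (1, -1)]:
        N1 = c*p**2 - 3*(a + b)*p*s + 3*c*s**2
        N2 = b*p**2 + 3*b*p*s - 3*b*s**2
        N3 = a*p**2 + 3*a*p*s - 3*a*s**2
        N4 = -b*p**2 + 3*(2*c - b)*p*s + 3*(3*c - 3*a - 2*b)*s**2
        N5 = -a*p**2 + 3*(2*c - a)*p*s + 3*(3*c - 2*a - 3*b)*s**2
        N6 = c*p**2 + 3*(2*c - a - b)*p*s + 3*(4*c - 3*a - 3*b)*s**2
        assert N1**3 + N2**3 + N3**3 + N4**3 + N5**3 == N6**3
    print("first family verified at the sampled points")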
And I would like to tell you about this equation:
$X^5+Y^5+Z^5=R^5$
It turns out there are solutions among complex integers, where $j=\sqrt{-1}$.
We make the change:
$a=p^2-2ps-s^2$
$b=p^2+2ps-s^2$
$c=p^2+s^2$
Then the solutions are of the form:
$X=b+jc$
$Y=-b+jc$
$Z=a-jc$
$R=a+jc$
$p,s$ - any integers we want.
|
$k$-out-of-$n$ system probabilities
An engineering system consisting of $n$ components is said to be a $k$-out-of-$n$ system ($k \le n$) when the system functions if and only if at least $k$ out of the $n$ components function. Suppose that all components function independently of each other.
If the $i^{th}$ component functions with probability $p_i$, $i = 1, 2, 3, 4$, compute the probability that a 2-out-of-4 system functions.
This problem in itself does not seem very difficult to solve, but I suspect I am not doing it the way it was intended to be done, because the formulas that come out are very ugly. I calculated the probability by conditioning on whether or not the $1^{st}$ and $2^{nd}$ components worked, and it came out to be
$$
p_3 p_4 + p_2 (p_3 + p_4 - 2 p_3 p_4) + p_1 (p_3 + p_4 - 2 p_3 p_4 + p_2 (1 - 2 p_3 - 2 p_4 + 3 p_3 p_4))
$$
Even if this is right, there's no way it's what the answer is supposed to look like. Can someone give me a push in the right direction?
| If you want something "prettier" you could take all the possible cases and write them using products and sums, such as $$\prod_{i=1}^4 p_i \left(1+\sum_{j=1}^4\frac{1-p_j}{p_j} + \sum_{k=1}^3 \sum_{l=k+1}^4 \frac{(1-p_k)(1-p_l)}{p_k \; p_l} \right)$$
or you could work out the probability that one or none work and subtract that from $1$ to get $$1 - \prod_{i=1}^4(1-p_i)\left(1+\sum_{j=1}^4\frac{p_j}{1-p_j}\right)$$
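Either closed form is easy to check against brute-force enumeration of the $2^4$ component states (a sketch; the $p_i$ values are arbitrary):

    from itertools import product

    # Brute-force P(at least 2 of 4 components work); the p_i are arbitrary.
    p = [0.9, 0.8, 0.7, 0.6]
    total = 0.0
    for states in product((0, 1), repeat=4):
        if sum(states) >= 2:
            prob = 1.0
            for works, pi in zip(states, p):
                prob *= pi if works else 1 - pi
            total += prob

    # The complement form: 1 - P(none work) - P(exactly one works).
    none = (1 - p[0]) * (1 - p[1]) * (1 - p[2]) * (1 - p[3])
    one = sum(none / (1 - pi) * pi for pi in p)
    print(total, 1 - none - one)    # the two values agree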
|
Prove that a finite union of closed sets is also closed Let $X$ be a metric space. If $F_i \subset X$ is closed for $1 \leq i \leq n$, prove that $\bigcup_{i=1}^n F_i$ is also closed.
I'm looking for a direct proof of this theorem. (I already know a proof which first shows that a finite intersection of open sets is also open, and then applies De Morgan's law and the theorem "the complement of an open set is closed.") Note that the theorem is not necessarily true for an infinite collection of closed $\{F_\alpha\}$.
Here are the definitions I'm using:
Let $X$ be a metric space with distance function $d(p, q)$. For any $p \in X$, the neighborhood $N_r(p)$ is the set $\{x \in X \,|\, d(p, x) < r\}$. Any $p \in X$ is a limit point of $E$ if $\forall r > 0$, $N_r(p)$ contains a point of $E$ other than $p$, i.e. $(N_r(p)\setminus\{p\}) \cap E \neq \emptyset$. Any subset $E$ of $X$ is closed if it contains all of its limit points.
| It is sufficient to prove this for a pair of closed sets $F_1$ and $F_2$. Suppose $F_1 \cup F_2$ is not closed, even though $F_1$ and $F_2$ are closed. This means that some limit point $p$ of $F_1 \cup F_2$ is missing. So there is a sequence $\{ p_i\} \subset F_1 \cup F_2$ converging to $p$. By the pigeonhole principle, at least one of $F_1$ or $F_2$, say $F_1$, contains infinitely many points of $\{p_i\}$, hence contains a subsequence of $\{p_i\}$. But this subsequence must converge to the same limit, so $p \in F_1$, because $F_1$ is closed. Thus, $p \in F_1 \subset F_1 \cup F_2$.
Alternatively, if you do not wish to use sequences, then something like this should work. Again, it is sufficient to prove it for a pair of closed sets $F_1$ and $F_2$. Suppose $F_1 \cup F_2$ is not closed. That means that there is some point $p \notin F_1 \cup F_2$ every neighbourhood of which contains infinitely many points of $F_1 \cup F_2$ (in a metric space, every neighbourhood of a limit point contains infinitely many points of the set). By the pigeonhole principle again, every such neighbourhood contains infinitely many points of at least one of $F_1$ or $F_2$, say $F_1$. Then $p$ must be a limit point of $F_1$; so $p \in F_1 \subset F_1 \cup F_2$.
|
Matrix multiplication, equivalent to numeric multiplication, or just shares the name? Is matrix multiplication equivalent to numeric multiplication, or do they just share the same name?
While there are similarities between how they work, and one can be thought of as being derived from the other, I ask because they have different properties, such as not being commutative ($a \times b \neq b \times a$), and sometimes multiplication between matrices is referred to by the alternative name apply instead of multiply. For example, applying a transformation matrix, where this is the same as multiplying by it.
Additionally sometimes in programming operations can be defined between new types of things, allowing the language to expand with new concepts, however the link between the name and rules such as commutative are supposed to continue to hold true.
| "Multiplication" is often used to describe binary operations that share only some of the properties of ordinary multiplication.
The case of matrix multiplication is special. There, multiplication is in some sense a generalization of ordinary multiplication. Let $M_n(a)$ be the $n\times n$ matrix whose diagonal entries are all equal to $a$, and whose off-diagonal entries are $0$.
It is easy to verify that
$$M_n(x)M_n(y)=M_n(xy).$$
So we can think of the real numbers as, for example, special $7\times 7$ matrices. Then the multiplication of real numbers can be viewed as a special case of matrix multiplication.
More interestingly, define the $2\times 2$ matrix $M(a,b)$ by
$$M(a,b)=\pmatrix{a & -b\\ b & a}.$$
It is not hard to verify that
$$\pmatrix{a & -b\\ b & a}\pmatrix{c & -d\\ d & c}=\pmatrix{ac-bd & -(ad+bc)\\ ad+bc & ac-bd}.$$
Note that the product of the two complex numbers $a+ib$ and $c+id$ is equal to $ac-bd +i(ad+bc)$. So the special matrices $M(a,b)$ multiply like the complex numbers. They also add like the complex numbers, and can be identified with the complex numbers.
So the multiplication of $2\times 2$ matrices can be thought of as a generalization of the multiplication of complex numbers. This is of practical usefulness: rotations about the origin can be thought of as either multiplying by a special kind of complex number, or as multiplying by a special type of matrix.
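A small numeric illustration of this identification (a sketch; the sample numbers are arbitrary):

    import numpy as np

    def M(a, b):
        """The 2x2 matrix standing for the complex number a + ib."""
        return np.array([[a, -b], [b, a]])

    z, w = 2 + 3j, 1 - 4j
    prod = M(2, 3) @ M(1, -4)
    print(prod[0, 0], prod[1, 0])   # 14 -5: the real and imaginary parts
    print(z * w)                    # (14-5j)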
|
Prove or refute: If $f$ is Riemann integrable on $[0,1]$, then so is $\sin(f)$ I'd love your help with the following question: I need to prove or refute the claim that for a Riemann integrable function $f$ in $[0,1]$ also $\sin(f)$ is integrable on $[0,1]$.
My translation for this claim: if $\int_{0}^{1} f(x) dx < \infty$, then also $\int_{0}^{1} \sin(f(x))dx < \infty$. Am I right?
I tried to think of an elementary function that will fit the conditions, one that will blow up at $0$ or $1$ or both, but I didn't find any. Can I just use the fact that $\int_{0}^{1} \sin(f(x))dx \leq \int_{0}^{1} 1dx < \infty$ and that's it, or am I missing something?
Thanks!
| I do not think your translation is correct (unless you meant Lebesgue, and not Riemann integrable). The concept of Riemann integrable and Lebesgue integrable are not the same.
Riemann integrable: $f\colon[a,b]\to\mathbb{R}$, $-\infty<a<b<+\infty$, $f$ bounded and the upper integral equal to the lower integral.
Lebesgue integrable: $f\colon E\subset\mathbb{R}\to\mathbb{R}$, $E$ Lebesgue measurable set, $f$ Lebesgue measurable and $\int_E|f|<\infty$.
There are several ways of showing that if $f\colon[0,1]\to\mathbb{R}$ is Riemann integrable so is $\sin(f)$. The easiest way is to use the following fact: a bounded function $g\colon[a,b]\to\mathbb{R}$ is Riemann integrable if and only if the set of points where $g$ is discontinuous has measure $0$.
First of all it is clear that $\sin(f)$ is bounded. Since $\sin$ is a continuous function,
$$
\{x\in[0,1]:\sin(f)\text{ is discontinuous at }x\}\subset\{x\in[0,1]:f\text{ is discontinuous at }x\}.
$$
Since $f$ is Riemann integrable, the set on the right hand side is of measure $0$, and so are all its subsets.
|
$f \geq 0$ continuous, $\lim_{x \to \infty} f(x)$ exists, $\int_{0}^{\infty}f(x)dx< \infty$, Prove: $\int_{0}^{\infty}f^2(x)dx< \infty$ Something that bothers me with the following question: $f: [0, \infty) \to \mathbb{R}$, $f \geq 0$, $\lim_{x \to \infty} f(x)$ exists and is finite, and $\int_{0}^{\infty}f(x)dx$ converges. I need to show that $\int_{0}^{\infty}f^2(x)dx$ converges.
I separated the integral in the following way: $\int_{0}^{1}f^2(x)dx+\int_{1}^{\infty}f^2(x)dx$. For the second one we know that $\lim_{x \to \infty}\frac{f^2(x)}{f(x)}$ exists, so the two integrals converge together; but what happens on the first range — does my $\lim_{x \to 0}f(x)$ have to be finite? Do you have an example of an integral between $0$ and $\infty$ which converges while $\lim_{x \to 0}$ is not finite?
What should I do in my case?
Thanks!
| To answer your questions regarding the behaviour at $0$ first. Note that $f:[0;\infty[ \ \to \mathbb{R}$, hence $f$ is well-defined and continuous at $0$ - according to the assumptions you state. In particular this means that $\lim_{x\to 0}f(x) = f(0)$, so there is really no issue at $x=0$; your function is well-defined at this end of the interval. Had this not been the case then you would have had to consider what might go wrong here (the function $f(x) = x^{-1/2}$ could be an instructional case to consider when integrating).
I am not sure I follow your reasoning in the case when $x\to \infty$, but I think it might be a good idea to try and deduce what the possible values of $\lim_{x\to \infty}f(x)$ could be, if the integral $\int_0^{\infty}f(x)dx$ is supposed to converge, and then maybe use $0\leq f(x)^2\leq f(x)$ (valid wherever $f(x)\leq 1$) under the right circumstances...
|
If a series converges, then the sequence of terms converges to $0$. Following the guidelines suggested in this meta discussion, I am going to post a proposed proof as an answer to the theorem below. I believe the proof works, but would appreciate any needed corrections.
Theorem If a series $\sum_{n=1}^{\infty}a_n$ of real numbers converges then $\lim_{n \to \infty}a_n = 0$
| Another view of this may be useful.
First, recall a basic fact that if $a_n$ is a convergent sequence of numbers, then the sequence $b_n = a_{n+1} - a_n$ converges to $0$. This is easy to prove and does not require the notion of a Cauchy sequence.
Therefore, if the partial sums $s_n$ are convergent, then $b_n = s_{n+1} - s_n$ converges to $0$. But the terms of this sequence are easily seen to be $b_n = a_{n+1}$. Hence $a_n \rightarrow 0$.
|
Generalized "Duality" of Classical Propositional Logical Operations Duality in propositional logic between conjunction and disjunction, $K$ and $A$ means that for any "identity", such as $KpNp = 0$ (ignoring the detail of how to define this notion in propositional logic), if we replace all instances of $K$ by $A$, all instances of $A$ by $K$, all instances of 1 by 0, and all instances of 0 by 1, the resulting equation will also consists of an "identity", $ApNp = 1$. Suppose that instead of conjunction "$K$" and disjunction "$A$", we consider any pair of "dual" operations $\{Y, Z\}$ of the 16 logical operations such that they qualify as isomorphic via the negation operation $N$, where $Y$ does not equal $Z$. By "isomorphic" I mean that the sub-systems $(Y, \{0, 1\})$, $(Z, \{0, 1\})$ are isomorphic in the usual way via the negation operation $N$, for example $K$ and $A$ qualify as "isomorphic" in the sense I've used it here.
If we have an identity involving operations $A_1, \dots, A_x$, and replace each instance of each operation by its "dual" $A'_1$, ..., $A'_x$, replace each instance of 1 by 0, and each instance of 0 by 1, is the resulting equation also an identity? If so, how does one prove this? How does one show that the equation obtained via the "duality" transformation here is also an identity?
| The resulting equation is also an identity. This is because any of the $16$ operations can be put in canonical disjunctive normal form using only $\land$, $\lor$, and $\lnot$. Then the replacement procedure described in the post becomes the standard one, and we are dealing with ordinary duality.
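A brute-force truth-table check of this for a tiny example (a Python sketch; here the dual of an operation $Y$ is $Y'(p,q)=N\,Y(Np,Nq)$, and the sample identity is $KpNp=0$):

    from itertools import product

    # Binary truth functions as dicts (p, q) -> 0/1.
    def dual(op):
        """Dual of op: Y'(p, q) = N(Y(N p, N q))."""
        return {(p, q): 1 - op[(1 - p, 1 - q)] for p, q in product((0, 1), repeat=2)}

    AND = {(p, q): p & q for p, q in product((0, 1), repeat=2)}
    OR = {(p, q): p | q for p, q in product((0, 1), repeat=2)}

    # K and A are dual in the sense used in the post.
    assert dual(AND) == OR and dual(OR) == AND

    # Dualizing the identity K p N p = 0 yields the identity A p N p = 1.
    for p in (0, 1):
        assert AND[(p, 1 - p)] == 0   # the original identity
        assert OR[(p, 1 - p)] == 1    # its dual
    print("duality check passed")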
|
Combinatorial interpretation of this identity of Gauss? Gauss came up with some bizarre identities, namely
$$
\sum_{n\in\mathbb{Z}}(-1)^nq^{n^2}=\prod_{k\geq 1}\frac{1-q^k}{1+q^k}.
$$
How can this be interpreted combinatorially? It strikes me as being similar to many partition identities. Thanks.
| The typical analytic proof is not difficult and is an easy consequence of Jacobi's triple product $$\sum_{n=-\infty} ^{\infty} z^{n} q^{n^{2}}=\prod_{n=1}^{\infty}(1-q^{2n})(1+zq^{2n-1})(1+z^{-1}q^{2n-1})$$ for all $z, q$ with $z\neq 0,|q|<1$. Let's put $z=-1$ to get the sum in question. The corresponding product is equal to $$\prod(1-q^{2n})(1-q^{2n-1})^{2}=\prod(1-q^{n})(1-q^{2n-1})=\prod \frac{(1-q^{2n})(1-q^{2n-1})} {1+q^{n}}=\prod\frac{1-q^{n}} {1+q^{n}}$$ which completes the proof.
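A truncated numeric check of the identity (a sketch; $q=1/3$ and the truncation level $N=12$ are arbitrary):

    from fractions import Fraction

    # Truncated check of  sum (-1)^n q^(n^2) = prod (1-q^k)/(1+q^k)  at q = 1/3.
    q, N = Fraction(1, 3), 12
    lhs = sum((-1) ** (n % 2) * q ** (n * n) for n in range(-N, N + 1))
    rhs = Fraction(1)
    for k in range(1, N + 1):
        rhs *= (1 - q ** k) / (1 + q ** k)
    print(float(lhs), float(rhs))   # agree to roughly six decimal places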
The proof for Jacobi's triple product is non-trivial / non-obvious and you may have a look at the original proof by Jacobi in this blog post.
On the other hand Franklin obtained a nice and easy combinatorial proof of the Euler's Pentagonal theorem which is equivalent to Jacobi Triple Product.
|
For $\sum_{0}^{\infty}a_nx^n$ with an infinite radius of convergence, $a_n \neq0$ , the series does not converge uniformly in $(-\infty , \infty)$. I'd like your help with the following question:
Let $\sum_{0}^{\infty}a_nx^n$ be a power series with an infinite
radius of convergence, such that $a_n \neq 0 $ , for infinitely many values of
$n$. I need to show that the series does not converge uniformly in
$(-\infty , \infty)$.
I quite don't understand the question, since I know that within the radius of convergence for a power series, there's uniform convergence, and since I know that the radius is "infinite", it says that the uniform convergence is infinite, no?
Thanks!
| Hint: put $s_N(x):=\sum_{n=0}^Na_nx^n$. If the sequence $\{s_N\}$ is uniformly convergent on $\mathbb R$ then the sequence $\{s_{N+1}-s_N\}$ converges uniformly on $\mathbb R$ to $0$ so $\{a_{N+1}x^{N+1}\}$ converges uniformly to $0$. Do you see the contradiction?
|
Where should the exponent be written in numbers with units of measurement? If you are to calculate the hypotenuse of a triangle, the formula is:
$h = \sqrt{x^2 + y^2}$
If you don't have any units for the numbers, replacing x and y is pretty straightforward:
$h = \sqrt{4^2 + 6^2}$
But what if the numbers are in meters?
$h = \sqrt{4^2m + 6^2m}$ (wrong, would become $\sqrt{52m}$)
$h = \sqrt{4m^2 + 6m^2}$ (wrong, would become $\sqrt{10m^2}$)
$h = \sqrt{(4m)^2 + (6m)^2}$ (correct, would become $\sqrt{52m^2}$)
Or should I just ignore the unit of measurement in these cases?
| Suppose you have been given $x$ and $y$ in metres, and you'd like to know the quantity $z=\sqrt{x^2+y^2}$. Then, as you have predicted, this quantity will be in metres.
Two things have been involved:
Homogeneity of Dimension
Two quantities of different dimensions cannot be added. This is one of the axioms of numerical physics.
Example: It is clear that adding $5$ metres to $3$ seconds does not give a physically meaningful quantity that can be interpreted in real life.
Certain functions can only be applied to dimensionless quantities
For instance, $\sin (\sqrt{x^2+y^2})$ would not make sense even if $x$ and $y$ had the same dimensions. This is a bit subtler, but this is what it is!
*
*Coming to your question, the first quantity you list has dimension $m^{1/2}$, which is against your guess!
*The second quantity is dimensionally fine while numerically this is not what you want.
*The third quantity is fine in all ways.
My suggestion:
First manipulate the numbers and then the units separately. This is a good practice in Numerical Physics. The other answers have done it all at one go. But, I don't prefer it that way!
|
What is the equation of an ellipse that is not aligned with the axis? I have the an ellipse with its semi-minor axis length $x$, and semi major axis $4x$. However, it is oriented $45$ degrees from the axis (but is still centred at the origin). I want to do some work with such a shape, but don't know how to express it algebraically. What is the equation for this ellipse?
| Let the center of the ellipse be at $C = (x_c, y_c)$. Let the major axis be the line that passes through $C$ with a slope of $s$; points on that line are given by the zeros of $L(x,y) = y - y_c - s(x - x_c)$. Let the minor axis be the line perpendicular to $L$ (and also passing through $C$); points on that line are given by the zeros of $l(x,y) = s(y-y_c)+(x-x_c)$.
The ellipse is then defined by the zeros of
$$E(x,y) = L(x,y)^2/a + l(x,y)^2/b - 1$$
Requiring that the
distance between the intersections of $E$ and $L$ be $2M$ identifies $$b=M^2(1+s^2)$$
and similarly, requiring that the intersections between $E$ and $l$ be separated by $2m$ identifies
$$a=m^2(1 + s^2)$$
This is demonstrated in the following SymPy session:
>>> from sympy import *
>>> a, b, x, y, m, M, x_c, y_c, s = var('a,b,x,y,m,M,x_c,y_c,s')
>>> L = (y - y_c) - s*(x - x_c)
>>> l = s*(y - y_c) + (x - x_c)
>>> idiff(L, y, x) == -1/idiff(l, y, x) # confirm they are perpendicular
True
>>> E = L**2/a + l**2/b - 1
>>> xy = (x, y)
>>> sol = solve((E, L), *xy)
>>> pts = [Point(x, y).subs(zip(xy, p)) for p in sol]
>>> solve(pts[0].distance(pts[1]) - 2*M, b)
[M**2*(s**2 + 1)]
>>> sol = solve((E, l), *xy)
>>> pts = [Point(x,y).subs(zip(xy, p)) for p in sol]
>>> solve(pts[0].distance(pts[1]) - 2*m, a)
[m**2*(s**2 + 1)]
So the general equation of the ellipse centered at $(x_c, y_c)$ whose major axis (with radius of $M$) is on a line with slope $s$, and whose minor axis has radius of $m$, is given by the solutions of:
$$\frac{((y - y_c) - s(x - x_c))^2}{m^2(1 + s^2)} + \frac{(s(y - y_c) + (x - x_c))^2}{M^2(1 + s^2)} = 1$$
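For the specific ellipse in the question (centered at the origin, rotated $45°$ so that $s=1$, with semi-minor radius $m$ and semi-major radius $M=4m$), this specializes to
$$\frac{(y - x)^2}{2m^2} + \frac{(y + x)^2}{32m^2} = 1.$$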
|
An application of the General Lebesgue Dominated convergence theorem I came across the following problem in my self-study:
If $f_n, f$ are integrable and $f_n \rightarrow f$ a.e. on $E$, then $\int_E |f_n - f| \rightarrow 0$ iff $\int_E |f_n| \rightarrow \int_E |f|$.
I am trying to prove (1) and the book I am using suggests that it follows from the Generalized Lebesgue Dominated Convergence Theorem:
Let $\{f_n\}_{n=1}^\infty$ be a sequence of measurable functions on $E$ that converge pointwise a.e. on $E$ to $f$. Suppose there is a sequence $\{g_n\}$ of integrable functions on $E$ that converge pointwise a.e. on $E$ to $g$ such that $|f_n| \leq g_n$ for all $n \in \mathbb{N}$. If $\lim\limits_{n \rightarrow \infty}$ $\int_E$ $g_n$ = $\int_E$ $g$, then $\lim\limits_{n \rightarrow \infty}$ $\int_E$ $f_n$ = $\int_E$ $f$.
I suspect that I need the right inequalities to help satisfy the hypothesis of the GLDCT, but I am not certain about what these inequalities should be.
| Take $g_n = |f_n| + |f|$. Then $|f_n - f| \leq g_n$ by the triangle inequality, $g_n \rightarrow 2|f|$ a.e. on $E$, and $\int_E g_n \rightarrow \int_E 2|f|$ precisely when $\int_E |f_n| \rightarrow \int_E |f|$; the GLDCT applied to the sequence $\{|f_n - f|\}$ then gives $\int_E |f_n - f| \rightarrow \int_E 0 = 0$. For the converse direction, simply use $\left|\int_E |f_n| - \int_E |f|\right| \leq \int_E \big||f_n| - |f|\big| \leq \int_E |f_n - f|$.
|
continuous map of metric spaces and compactness
Let $f:X\rightarrow Y$ be a continuous map of metric spaces. Show that if $A\subseteq X$ is compact, then $f(A)\subseteq Y$ is compact.
I am using this theorem: If $A\subseteq X$ is sequentially compact, it is compact.
Also this definition: A set $A\subseteq X$ is sequentially compact if every sequence in $A$ has a convergent subsequence in $A$.
Attempt at a proof:
Let $\{y_n\}\subseteq f(A)$. Since each $y_n$ lies in $f(A)$, we can write $y_n=f(x_n)$ for some $x_n\in A$ (this uses only that $y_n\in f(A)$, not continuity). Since $A\subseteq X$ is compact, the sequence $\{x_n\}\subseteq A$ has a subsequence that converges to a point in $A$, say $\{x_{n_k}\}\rightarrow a\in A$. Since $f$ is continuous, $y_{n_k}=f(x_{n_k})\rightarrow f(a)\in f(A)$. Thus every sequence in $f(A)$ has a subsequence converging in $f(A)$, so $f(A)$ is sequentially compact, hence compact.
| Yet another formulation for topological spaces: if $f:X \to Y$ is continuous with $X$ compact and $f(x_\iota)$ is a net in $f(X)$, then $x_\iota$ has a convergent subnet, say $x_\tau \to x$. Then $f(x_\tau) \to f(x)$, hence $f(x_\iota)$ has a convergent subnet, so $f(X)$ is compact.
|
Determine a holomorphic function by means of its values on $ \mathbb{N} $ This is exercise 5, page 236 from Remmert, Theory of complex functions
For each of the following properties produce a function which is holomorphic in a neighborhood of $ 0 $ or prove that no such function exists:
i) $ f (\frac{1}{n}) = (-1)^{n}\frac{1}{n} \ $ for almost all $ n \in \mathbb{N}\ , n \neq 0 $
ii) $ f (\frac{1}{n}) = \frac{1}{n^{2} - 1 } $ for almost all $ n \in \mathbb{N}\ , n \neq 0, n \neq 1 $
iii) $|f^{(n)}(0)|\geq (n!)^{2} $ for almost all $ n \in \mathbb{N} $
| Your title is misleading, as you cannot in general determine a holomorphic function from its values on $\mathbb{N}$. However, in this case you can determine it, using the uniqueness theorem for analytic functions: if $f$ and $g$ are two analytic functions on a domain and there is a convergent sequence $a_n$, with limit in the domain, such that $f(a_n)=g(a_n)$ for all $n$, then $f=g$.
Thus, for example, in your (i), putting $g(z)=z$, we see that $f(1/2n)=g(1/2n)$, so if $f$ is analytic we must have $f=g$. But then $f(1/(2n+1)) = 1/(2n+1)$, contradicting the fact that $f(1/(2n+1)) = -1/(2n+1)$, so no such analytic function exists.
(ii) is similar.
As for (iii), try to think about the Taylor expansion of $f$ near $0$. What will its radius of convergence be?
|
Why are $\log$ and $\ln$ being used interchangeably? A definition for complex logarithm that I am looking at in a book is as follows -
$\log z = \ln r + i(\theta + 2n\pi)$
Why is it $\log z = \ldots$ and not $\ln z = \ldots$? Surely the base of the log will make a difference to the answer.
It also says a few lines later $e^{\log z} = z$.
Yet again I don't see how this makes sense. Why isn't $\ln$ used instead of $\log$?
| "$\log$" with no base generally means base the base is $e$, when the topic is mathematics, just as "$\exp$" with no base means the base is $e$. In computer programming languages, "$\log$" also generally means base-$e$ log.
On calculators, "$\log$" with no base means the base is $10$ because calculators are designed by engineers. Ironically, the reasons for frequently using base-$10$ logarithms were made obsolete by calculators, which became prevalent in the early 1970s.
|
Number of pairs $(a, b)$ with $gcd(a, b) = 1$ Given $n\geq1$, how can I count the number of pairs $(a,b)$, $0\leq a,b \leq n$ such that $gcd(a,b)=1$?
I think the answer is $2\sum_{i=1}^{n}\phi(i)$. Am I right?
| Perhaps it could be mentioned that if we consider the probability $P_{n}$ that two randomly chosen integers in $\{1, 2, \dots, n \}$ are coprime
$$
P_{n} = \frac{\lvert \{(a, b) : 1 \le a, b \le n, \gcd(a,b) =1 \}\rvert}{n^{2}},
$$
then
$$
\lim_{n \to \infty} P_{n} = \frac{6}{\pi^{2}}.
$$
See this Wikipedia article.
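A brute-force illustration of this limit (a sketch; the cutoff $n=1000$ is arbitrary):

    from math import gcd, pi

    # Exact count of coprime pairs with 1 <= a, b <= n against the 6/pi^2 limit.
    n = 1000
    count = sum(gcd(a, b) == 1 for a in range(1, n + 1) for b in range(1, n + 1))
    print(count / n**2, 6 / pi**2)  # both close to 0.608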
|
General solution using Euclidean Algorithm
I was able to come up with the integer solution that they also have in the textbook using the same method they used but I am really puzzled how they come up with a solution for all the possible integer combinations...how do they come up with that notation/equation that represents all the integer solutions ? I am talking about the very last line.
| A general rule in life: when you have a linear equation of the form $f(x_1,x_2,\dots, x_n)=c$, find one solution to it, and then find the general solution to the homogeneous equation $f(x_1,\dots,x_n)=0$; you obtain the general solution of the initial equation by adding the special solution you found to the general solution of the homogeneous equation.
In our case the homogeneous equation is $957x+609y=0$. Divide by the gcd $87$ to obtain $11x=-7y$. Since $\gcd(7,11)=1$, $x$ must be a multiple of $7$, and writing $x=-7n$ forces $y=11n$; so the general solution of the homogeneous equation is $(-7n,11n)$ for integer $n$.
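Since the right-hand side of the particular equation from the textbook is not shown here, here is a sketch that computes one solution of $957x+609y=\gcd(957,609)=87$ with the extended Euclidean algorithm and checks the general family:

    def extended_gcd(a, b):
        """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
        if b == 0:
            return a, 1, 0
        g, x, y = extended_gcd(b, a % b)
        return g, y, x - (a // b) * y

    g, x0, y0 = extended_gcd(957, 609)
    print(g, x0, y0)                       # 87 2 -3: one particular solution

    # Every solution of 957*x + 609*y = 87:
    for n in range(-3, 4):
        assert 957 * (x0 - 7 * n) + 609 * (y0 + 11 * n) == 87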
|
Pointwise limit of continuous functions not Riemann integrable The following is an exercise from Stein's Real Analysis (ex. 10 Chapter 1). I know it should be easy but I am somewhat confused at this point; it mostly consists of providing the Cantor-like construction for continuous functions on the interval $[0,1]$ whose pointwise limit is not Riemann integrable.
So, let $C'$ be a closed set so that at the $k$th stage of the construction one removes $2^{k-1}$ centrally situated open intervals each of length $l_{k}$ with $l_{1}+\ldots+2^{k-1}l_{k}<1$; in particular, we know that the measure of $C'$ is strictly positive. Now, let $F_{1}$ denote a piece-wise linear and continuous function on $[0,1]$ with $F_{1}=1$ in the complement of the first interval removed in the construction of $C'$, $F_{1}=0$ at the center of this interval, and $0 \leq F_{1}(x) \leq 1$ for all $x$. Similarly, construct $F_{2}=1$ in the complement of the intervals in stage two of the construction of $C'$, with $F_{2}=0$ at the center of these intervals, and $0 \leq F_{2} \leq 1$, and so on, and let $f_{n}=F_{1}\cdot \ldots F_{n}$.
Now, obviously $f_{n}(x)$ converges to a limit say $f(x)$ since it is decreasing and bounded and $f(x)=1$ if $x \in C'$; so in order to show that $f$ is discontinuous at every point of $C'$, one should show that there is a sequence of points $x_{n}$ so that $x_{n} \rightarrow x$ and $f(x_{n})=0$; I can't see this, so any help is welcomed, thanks a lot!
| Take a point $x\in C'$ and any open interval $I$ containing $x$.
Then there is an open interval $D\subseteq I $ that was removed in the construction of $C'$.
Indeed, since $C'$ has no isolated points, there is a point $y\in C'\cap I$ distinct from $x$. Between $x$ and $y$, there is an open interval removed from the construction of $C'$, which we take to be our $D$.
Now, by the definition of the $f_n$, there is a point $d\in D$ (namely the center of $D$) such that $f(d)=0$.
To recap: given $x\in C'$ and any open interval $I$ containing $x$, there is a point $d\in I$ with $f(d)=0$. As $f(x)=1$, this implies that $f$ is not continuous at $x$.
|
group of order 28 is not simple I have a proof from notes but I don't quite understand the bold part:
Abelian case: take $a \in G \setminus \{1\}$. If $\langle a\rangle \neq G$, then we are done. If $\langle a\rangle = G$, then $\langle a^4\rangle$ is a proper nontrivial normal subgroup of $G$. General case: WLOG we can assume $G \neq Z(G)$ (otherwise we are in the abelian case). If $\langle 1\rangle \neq Z(G)$, then $Z(G)$ is a proper nontrivial normal subgroup of $G$ and we are done. Otherwise $|Z(G)|= 1$ and the class equation reads
$$
28 = 1 + \sum_{**}\frac{|G|}{|C_G(x)|}
$$
There must be some $a\in G$ such that 7 does not divide
$$
\frac{|G|}{|C_G(a)|}
$$
It follows that
$\frac{|G|}{|C_G(a)|} = 2$ or $4$ $\Rightarrow [G:C_G(a)] = 2$ or $4$ $\Rightarrow 28 \mid 2!$ or $28 \mid 4!$ (if $G$ were simple, the action on the cosets of $C_G(a)$ would embed $G$ into $S_2$ or $S_4$, so $|G| = 28$ would divide $2!$ or $4!$).
Therefore group of order 28 is not simple.
Why are they true?
| Reading the class equation modulo 7 gives the existence of one $x$ such that $\frac{|G|}{|C_G(x)|}$ is NOT divisible by 7. Hence 7 divides $|C_G(x)|$. Now the divisors of $|G| = 28$ are $1, 2, 4, 7, 14, 28$. Since $\frac{|G|}{|C_G(x)|}$ cannot be $1$ (the element $x$ is not central) and cannot be divisible by $7$, the only possibilities are $2$ and $4$.
|
Number of 5 letter words over a 4 letter group using each letter at least once Given the set $\{a,b,c,d\}$ how many 5 letter words can be formed such that each letter is used at least once?
I tried solving this using inclusion - exclusion but got a ridiculous result:
$4^5 - \binom{4}{1}\cdot 3^5 + \binom{4}{2}\cdot 2^5 - \binom{4}{3}\cdot 1^5 = 2341$
It seems that the correct answer is:
$\frac{5!}{2!}\cdot 4 = 240$
Specifically, the sum of the number of permutations of aabcd, abbcd, abccd and abcdd.
I'm not sure where my mistake was in the inclusion - exclusion approach. My universal set was all possible 5 letter words over a set of 4 letters, minus the number of ways to exclude one letter times the number of 5 letter words over a set of 3 letters, and so on.
Where's my mistake?
| Your mistake is in the arithmetic. What you think comes out to 2341 really does come out to 240.
$4^5=1024$, $3^5=243$, $2^5=32$, $1024-(4)(243)+(6)(32)-4=1024-972+192-4=1216-976=240$
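A brute-force confirmation over all $4^5$ words (a Python sketch):

    from itertools import product

    # Count 5-letter words over {a, b, c, d} using every letter at least once.
    count = sum(set(w) == set('abcd') for w in product('abcd', repeat=5))
    print(count)  # 240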
|
Upper bounds on the size of $\operatorname{Aut}(G)$ Any automorphism of a group $G$ is a bijection that fixes the identity, so an easy upper bound for the size of $\operatorname{Aut}(G)$ for a finite group $G$ is given by
\begin{align*}\lvert\operatorname{Aut}(G)\rvert \leq (|G| - 1)! \end{align*}
This inequality is an equality for cyclic groups of orders $1$, $2$ and $3$ and also the Klein four-group $\mathbb{Z}_2 \times \mathbb{Z_2}$. I think it's reasonable to believe that they are the only groups with this property. The factorial $(|G| - 1)!$ is eventually huge. I searched through groups of order less than $100$ with GAP and found no other examples.
The problem can be reduced to the abelian case. We can check the groups of order $< 6$ by hand. Then if $|G| \geq 6$ and the equality holds, we have $\operatorname{Aut}(G) \cong S_{|G|-1}$. Now $\operatorname{Inn}(G)$ is a normal subgroup of $\operatorname{Aut(G)}$, and is thus isomorphic to $\{(1)\}$, $A_{|G|-1}$ or $S_{|G|-1}$. This is because $A_n$ is the only proper nontrivial normal subgroup of $S_n$ when $n \geq 5$. We can see that $(|G| - 1)!/2 > |G|$ and thus $\operatorname{Inn}(G) \cong G/Z(G)$ is trivial.
How to prove that there are no other groups for which the equality $\lvert\operatorname{Aut}(G)\rvert = (|G| - 1)!$ holds? Are any better upper bounds known for larger groups?
| I believe this is an exercise in Wielandt's permutation groups book.
$\newcommand{\Aut}{\operatorname{Aut}}\newcommand{\Sym}{\operatorname{Sym}}\Aut(G) \leq \Sym(G\setminus\{1\})$ and so if $|\Aut(G)|=(|G|-1)!$, then $\Aut(G) = \Sym(G\setminus\{1\})$ acts $|G|-1$-transitively on the non-identity elements of G. This means the elements of G are indistinguishable. Heck even subsets of the same size (not containing the identity) are indistinguishable. I finish it below:
In particular, every non-identity element of G has the same order, p, and G has no proper, non-identity characteristic subgroups, like $Z(G)$, so G is an elementary abelian p-group. However, the automorphism group is $\newcommand{\GL}{\operatorname{GL}}\GL(n,p)$ which, for $p \geq 3, n\geq 2$, only acts at most $n-1$-transitively since it cannot send a basis to a non-basis. The solutions of $p^n-1 \leq n-1, p \geq 3, n \geq 2$ are quite few: none. Obviously $\GL(1,p)$ has order $p-1$ which is very rarely equal to $(p-1)!$, when $p=2, 3$. $\GL(n,2)$ still can only act $n$-transitively if $2^n-1 > n+1$, since once a basis's image is specified, the other points are determined, and the solutions of $2^n-1 \leq n+1$ are also limited: $n=1,2$. Thus the cyclic groups of order 1,2,3 and the Klein four group are the only examples.
|
Convergence/Divergence of infinite series $\sum_{n=1}^{\infty} \frac{(\sin n+2)^n}{n3^n}$ $$ \sum_{n=1}^{\infty} \frac{(\sin n+2)^n}{n3^n}$$
Does it converge or diverge?
Can we have a rigorous proof that is not probabilistic?
For reference, this question is supposedly a mix of real analysis and calculus.
| The values for which $\sin(n)$ is close to $1$ (say in an interval $[1-\varepsilon ; 1]$) are somewhat regular :
$1 - \varepsilon \le \sin(n)$ implies that there exists an integer $k(n)$ such that
$n = 2k(n) \pi + \frac \pi 2 + a(n)$ where $|a(n)| \leq \arccos(1- \varepsilon)$.
As $\varepsilon \to 0$, $\arccos(1- \varepsilon) \sim \sqrt{2 \varepsilon}$, thus
we can safely say that for $\varepsilon$ small enough, $|n-2k(n) \pi - \frac{\pi}2| = |a(n)| \leq 2 \sqrt{ \varepsilon} $
If $m \gt n$ and $\sin(n)$ and $\sin(m)$ are both in $[1-\varepsilon ; 1]$,
then we have the inequality $|(m-n) - 2(k(m)-k(n)) \pi| \leq |m-2k(m)\pi - \frac{\pi}2| + |n-2k(n)\pi - \frac{\pi}2| \leq 4 \sqrt { \varepsilon} $ where $(k(m)-k(n))$ is some integer $k$.
Since $\pi$ has a finite irrationality measure, we know that there is a finite real constant $\mu \gt 2$ such that for any integers $n,k$ large enough,
$|n-k \pi| \ge k^{1- \mu} $.
By picking $\varepsilon$ small enough we can forget about the finite number of exceptions to the inequality, and we get $ 4\sqrt{\varepsilon} \ge (2k)^{1- \mu}$.
Thus $(m-n) \ge 2k\pi - 4\sqrt{\varepsilon} \ge \pi(4\sqrt{\varepsilon})^{\frac1{1- \mu}} - 4\sqrt{\varepsilon} \ge A_\varepsilon = A\sqrt{\varepsilon}^{\frac1{1- \mu}} $ for some constant $A$.
Therefore, we have a guarantee on the lengh of the gaps between equally problematic terms, and we know how this length grows as $\varepsilon$ gets smaller (as we look for more problematic terms)
We can get a lower bound for the first problematic term using the irrationality measure as well : from $|n-2k(n) \pi - \frac{\pi}2| \leq 2\sqrt {\varepsilon}$, we get that for $\varepsilon$ small enough, $(4k+1)^{1- \mu} \le |2n - (4k+1) \pi| \le 4\sqrt \varepsilon$, and then $n \ge B_\varepsilon = B\sqrt\varepsilon^{\frac1{1- \mu}}$ for some constant $B$.
Therefore, there exists a constant $C$ such that for all $\varepsilon$ small enough, the $k$-th integer $n$ such that $1-\varepsilon \le \sin n$ is greater than $C_\varepsilon k = C\sqrt\varepsilon^{\frac1{1- \mu}}k$.
Since $\varepsilon < 1$ and $\frac 1 {1- \mu} < 0$, this bound $C_ \varepsilon$ grows when $\varepsilon$ gets smaller.
And furthermore, the speed of this growth is greater if we can pick a smaller (better) value for $\mu$ (though all that matters is that $\mu$ is finite)
Now let us give an upper bound on the contribution of the terms where $n$ is an integer such that $\sin (n) \in [1-2\varepsilon ; 1-\varepsilon]$
$$S_\varepsilon = \sum \frac{(2+\sin(n))^n}{n3^n} \le \sum_{k\ge 1} \frac{(1- \varepsilon/3)^{kC_{2\varepsilon}}}{kC_{2\varepsilon}} = \frac{- \log (1- (1- \varepsilon/3)^{C_{2\varepsilon}})}{C_{2\varepsilon}} \\
\le \frac{- \log (1- (1- C_{2\varepsilon} \varepsilon/3))}{C_{2\varepsilon}}
= \frac{- \log (C_{2\varepsilon} \varepsilon/3))}{C_{2\varepsilon}}
$$
$C_{2\varepsilon} = C \sqrt{2\varepsilon}^\frac 1 {1- \mu} = C' \varepsilon^\nu$ with $ \nu = \frac 1 {2(1- \mu)} \in ] -1/2 ; 0[$, so :
$$ S_\varepsilon \le - \frac{ \log (C'/3) + (1+ \nu) \log \varepsilon}{C'\varepsilon^\nu}
$$
Finally, we have to check if the series $\sum S_{2^{-k}}$ converges or not :
$$ \sum S_{2^{-k}} \le \sum - \frac { \log (C'/3) - k(1+ \nu) \log 2}{C' 2^{-k\nu}}
= \sum (A+Bk)(2^ \nu)^k $$
Since $2^ \nu < 1$, the series converges.
|
Integration analog of automatic differentiation I was recently looking at automatic differentiation.
*
*Does something like automatic differentiation exist for integration?
*Would the integral be equivalent to something like Euler's method? (or am I thinking about it wrong?)
edit:
I am looking at some inherited code that includes https://projects.coin-or.org/ADOL-C as a black box.
| If I'm reading your question correctly: I don't believe there is an algorithm that, given the algorithm for your function to be integrated and appropriate initial conditions, will give an algorithm that corresponds to the integral of your original function.
However: you might wish to look into the Chebfun project by Trefethen, Battles, Driscoll, and others. What this system does is to internally represent a function given to it as a piecewise polynomial of possibly high degree, interpolated at appropriately shifted and scaled "Chebyshev points" (roots of the Chebyshev polynomial of the first kind). The resulting chebfun() object is then easily differentiated, integrated, or whatever other operation you might wish to do to the function. See the user guide for more details on this approach.
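For a feel of the mechanism, NumPy's Chebyshev module can imitate it in a few lines (a sketch; the function $\cos$ and degree $31$ are arbitrary choices, and this is of course a toy version of what Chebfun automates):

    import numpy as np
    from numpy.polynomial import chebyshev as C

    # Sample f at Chebyshev nodes on [-1, 1] and antidifferentiate the
    # interpolant at the coefficient level.
    n = 32
    k = np.arange(n)
    x = np.cos((2 * k + 1) * np.pi / (2 * n))   # Chebyshev points (first kind)
    f = np.cos(x)                                # the function to represent

    coeffs = C.chebfit(x, f, n - 1)              # interpolating coefficients
    antideriv = C.chebint(coeffs)                # antiderivative, coefficientwise

    # Definite integral of cos over [-1, 1] is 2 sin(1) = 1.68294...
    print(C.chebval(1.0, antideriv) - C.chebval(-1.0, antideriv), 2 * np.sin(1))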
|
Graph-Minor Theorem for Directed Graphs? Suppose that $\vec{G}$ is a directed graph and that $G$ is the undirected graph obtained from $\vec{G}$ by forgetting the direction on each edge. Define $\vec{H}$ to be a minor of $\vec{G}$ if $H$ is a minor of $G$ as undirected graphs and direction on the edges of $\vec{H}$ are the same as the corresponding edges in $\vec{G}$.
Does the Robertson-Seymour Theorem hold for directed graphs (where the above definition of minor is used and our graphs are allowed to have loops and multiple edges)?
| I think the answer is yes; see 10.5 in Neil Robertson and Paul D. Seymour, "Graph minors. XX. Wagner's conjecture", Journal of Combinatorial Theory, Series B, 92:325–357, 2004, and the preceding section:
As a corollary, we deduce the following form of Wagner’s conjecture for directed graphs (which immediately implies the standard form of the conjecture for undirected graphs). A directed graph is a minor of another if the first can be obtained from a subgraph of the second by contracting edges.
10.5 Let $G_i$ ($i = 1,2,\ldots$) be a countable sequence of directed graphs. Then there exist $j > i \geq 1$ such that $G_i$ is isomorphic to a minor of $G_j$.
I haven't tried to understand the proof and I don't plan to try anytime soon. But I'm pretty sure that the used definition of digraph minor is identical to your definition, and that the statement is exactly the theorem you asked for.
|
What is a Gauss sign? I am reading the paper
"A Method for Extraction of Bronchus Regions from 3D Chest X-ray
CT Images by Analyzing Structural Features of the Bronchus" by Takayuki KITASAKA, Kensaku MORI, Jun-ichi HASEGAWA and Jun-ichiro TORIWAKI
and I run into a term I do not understand:
In equation (2), when we say "[] expresses the Gauss sign", what does it mean?
| From the context (a change of scale using discrete units), this should certainly mean floor as on page 5 of Gauss's Werke 2
per signum $[x]$ exprimemus integrum ipsa $x$ proxime minorem, ita ut $x-[x]$ semper fiat quantitas positiva intra limites $0$ et $1$ sita
("by the symbol $[x]$ we express the integer next below $x$, so that $x-[x]$ always becomes a positive quantity lying within the limits $0$ and $1$")
i.e. the next lower integer.
|
Positive semi-definite matrix Suppose a square symmetric matrix $V$ is given
$V=\left(\begin{array}{ccccc}
\sum w_{1s} & & & & \\
& \ddots & & -w_{ij} \\
& & \ddots & & \\
& -w_{ij} & & \ddots & \\
& & & & \sum w_{ns}
\end{array}\right) \in\mathbb{R}^{n\times n},$
with values $w_{ij}> 0$, hence with only positive diagonal entries.
Since the above matrix is diagonally dominant, it is positive semi-definite. However, I wonder if it can be proved that
$a\cdot \operatorname{diag}(V)-V, \qquad a\in[1, 2]$
is also positive semi-definite. ($diag(V)$ denotes a diagonal matrix whose entries are those of $V$, hence all positive) In case of $a=2$, the resulting
$2\cdot diag(V)-V$
is also diagonally dominant (positive semi-definite), but is it possible to prove for $a\in[1,2]$?
.........................................
Note that the above proof would facilitate my actual problem; is it possible to prove
$tr[(X-Y)^T[a\cdot diag(V)-V](X-Y)]\geq 0$,
where $tr(\cdot)$ denotes matrix trace, for $X, Y\in\mathbb{R}^{n\times 2}$ and $a\in[1,2]$ ?
Also note that
$tr(Y^TVY)\geq tr(X^TVX)$ and $tr(Y^Tdiag(V)Y)\geq tr(X^Tdiag(V)X)$.
(if that facilitates the quest., assume $a=1$)
.....................................................
Since positive semi-definiteness cannot generally be guaranteed for $a<2$, the problem reduces to: for which restrictions on $a$ does the positive semi-definiteness of $a\cdot\operatorname{diag}(V)-V$ still hold?
Note the comment from DavideGiraudo, and his claim for the case $w_{ij}=1$ for all $i,j$. Could something similar be derived for general $w_{ij}\geq 0$?
| Claim: for a symmetric real matrix $A$, $\operatorname{tr}(X^TAX)\ge 0$ for all $X$ if and only if $A$ is positive semidefinite.
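Indeed, $\operatorname{tr}(X^TAX)=\sum_i x_i^TAx_i$ over the columns $x_i$ of $X$. To see that semi-definiteness of $a\cdot\operatorname{diag}(V)-V$ really can fail for $a<2$, here is a small numeric scan (a sketch; the random weights are an arbitrary example):

    import numpy as np

    rng = np.random.default_rng(0)

    # Random symmetric weights w_ij > 0 assembled into the Laplacian-type V.
    n = 6
    W = np.triu(rng.uniform(0.1, 1.0, (n, n)), 1)
    W = W + W.T
    V = np.diag(W.sum(axis=1)) - W

    # a = 2 gives a diagonally dominant matrix, hence min eigenvalue >= 0;
    # a = 1 gives diag(V) - V = W, which has trace 0 and so a negative eigenvalue.
    for a in (1.0, 1.5, 2.0):
        print(a, np.linalg.eigvalsh(a * np.diag(np.diag(V)) - V).min())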
|
The set of limit points of an unbounded set of ordinals is closed unbounded. Let $\kappa$ be a regular, uncountable cardinal. Let $A$ be an unbounded set, i.e. $\operatorname{sup}A=\kappa$. Let $C$ denote the set of limit points $< \kappa$ of $A$, i.e. the non-zero limit ordinals $\alpha < \kappa$ such that $\operatorname{sup}(A \cap \alpha) = \alpha$. How can I show that $C$ is unbounded? I cannot even show that $C$ has any points, let alone that it's unbounded.
(Jech page 92)
Thanks for any help.
| Fix $\xi\in \kappa$. Since $A$ is unbounded there is an $\alpha_0\in A$ such that $\xi<\alpha_0$. Now construct recursively a strictly increasing sequence $\langle \alpha_n: n\in \omega\rangle$ of elements of $A$, using the unboundedness of $A$ at each step to choose $\alpha_{n+1}\in A$ with $\alpha_{n+1}>\alpha_n$. Let $\alpha=\sup\{\alpha_n: n\in \omega\}$. Since $\kappa$ is regular and uncountable, we have $\alpha<\kappa$. It is also easy to see that $\sup(A\cap\alpha)=\alpha$, so $\alpha\in C$ and $\alpha>\xi$; as $\xi$ was arbitrary, $C$ is unbounded.
|
Solving modular equations Is there a procedure to solve this or is it strictly by trial and error?
$5^x \equiv 5^y \pmod {39}$ where $y > x$.
Thanks.
| Hint: since $39 = 3\cdot 13$ we can compute the order of $5\ ({\rm mod}\ 39)$ from its order mod $3$ and mod $13$.
First, mod $13\!:\ 5^2\equiv -1\ \Rightarrow\ 5^4\equiv 1;\ \ $ Second, mod $3\!:\ 5\equiv -1\ \Rightarrow\ 5^2\equiv 1\ \Rightarrow\ 5^4\equiv 1 $.
Thus $\:3,13\ |\ 5^4-1\ \Rightarrow\ {\rm lcm}(3,13)\ |\ 5^4-1,\:$ i.e. $\:39\ |\ 5^4-1,\:$ i.e. $\:5^4\equiv 1\pmod {39}$
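A brute-force confirmation of the order (a sketch):

    # Brute-force the multiplicative order of 5 modulo 39.
    x, n = 5, 1
    while x != 1:
        x = x * 5 % 39
        n += 1
    print(n)  # 4, so 5^x = 5^y (mod 39) exactly when x = y (mod 4)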
|
Proof that $\int_1^x \frac{1}{t} dt$ is $\ln(x)$ A logarithm of base $b$ for $x$ is defined as the number $u$ such that $b^u=x$. Thus, the logarithm with base $e$ gives us a $u$ such that $e^u=x$.
In the presentations that I have come across, the author starts with the fundamental property $f(xy) = f(x)+f(y)$ and goes on to construct the natural logarithm as $\ln(x) = \int_1^x \frac{1}{t} dt$.
It would be surprising if these two definitions ended up the same, as is the case. How do we know that they are? The best that I can think of is that they share the property $f(xy) = f(x)+f(y)$, and coincide at certain obvious values (x=0, x=1). This seems weak. Is there a proof?
| The following properties uniquely determine the natural log:
1) $f(1) = 0$.
2) $f$ is continuous and differentiable on $(0, \infty)$ with $f'(x) = \frac{1}{x}$.
3) $f(xy) = f(x) + f(y)$
We will show that the function $f(x) = \int_1^x \frac{1}{t} dt$ obeys properties
1,2, and 3, and is thus the natural log.
1) This is easy, since $f(1) = \int_1^1 \frac{1}{t} dt = 0$.
2) Defining $f(x) = \int_1^x \frac{1}{t} dt$, we note that since $\frac{1}{t}$ is continuous on any interval of the form $[a,b]$, where $0 < a \leq b$, then the Fundamental Theorem of Calculus tells us that $f(x)$ is (continuous and) differentiable with $f'(x) = \frac{1}{x}$ for all $x \in [a,b]$.
3)
$$\begin{align}
f(xy) = \int_1^{xy} \frac{1}{t}dt &= \int_1^x \frac{1}{t} dt + \int_x^{xy} \frac{1}{t} dt
\\
&= f(x) + \int_{1}^{y} \frac{1}{u} du
\\
&= f(x) + f(y)
\end{align}$$
where in the last step we perform the substitution $t = ux$ (viewing $x$ as constant).
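A numeric spot check of the agreement (a sketch using SciPy; the sample points are arbitrary):

    from math import log
    from scipy.integrate import quad

    # Spot check that the integral definition agrees with the logarithm.
    for x in (0.5, 2.0, 10.0):
        val, _ = quad(lambda t: 1 / t, 1, x)
        print(x, val, log(x))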
|
How to calculate this limit? Doesn't seem to be difficult, but still can't get it.
$\displaystyle\lim_{x\rightarrow 0}\frac{10^{-x}-1}{x}=-\ln10$
| It's worth noticing that we can define the natural logarithm as
$$\log x = \lim_{h \to 0} \frac{x^h-1}{h}$$
So in your case you have
$$ \lim_{h \to 0} \frac{10^{-h}-1}{h}= \lim_{h \to 0} \frac{\frac{1}{10}^{h}-1}{h}=-\log10$$
This result holds because the quotient $\frac{x^h-1}{h}$ has the indeterminate form $\frac{0}{0}$ as $h \to 0$, so we can apply L'Hôpital's rule, differentiating with respect to $h$ to get
$$ \lim_{h \to 0} \frac{x^h-1}{h} =\lim_{h \to 0} x^h \log x = \log x $$
Obviously this is done by knowing how to handle the derivative of an exponential function with arbitrary base $x$, so you could've also solved your problem by noticing the expression is a derivative, as other answers/comments suggest.
|
Rate of convergence of a sequence in $\mathbb{R}$ and Big O notation From Wikipedia
$f(x) = O(g(x))$ if and only if there exists a positive real number
$M$ and a real number $x_0$ such that $|f(x)| \le \; M |g(x)|\mbox{
for all }x>x_0$.
Also from Wikipedia
Suppose that the sequence $\{x_k\}$ converges to the number $L$. We
say that this sequence converges linearly to $L$, if there exists a
number $\mu \in (0, 1)$ such that $\lim_{k\to \infty}
\frac{|x_{k+1}-L|}{|x_k-L|} = \mu$.
If the sequence converges, and

*$\mu = 0$, then the sequence is said to converge superlinearly.
*$\mu = 1$, then the sequence is said to converge sublinearly.
I was wondering
*If $\{x_n\}$ converges linearly, superlinearly, or sublinearly to $L$, is it true that $|x_{n+1}-L| = O(|x_n-L|)$? This is based on what I have understood from their definitions, viewing $\{ x_{n+1}-L \}$ and $\{ x_n-L \}$ as functions of $n$. Note that the converse may fail, since $\mu$ may lie outside of $[0,1]$ and $\{x_n\}$ may not converge.
*Some Optimization book says that the steepest descent algorithm
has linear rate of convergence, and writes $|x_{n+1}-L| =
O(|x_n-L|)$. Is the usage of big O notation here expanding the meaning of linear rate of convergence?
Thanks and regards!
| To answer your added question,
from the definition,
$x_n$ converges to $L$ if and only if
$|x_n-L| \to 0$ as $n \to \infty$.
The existence of a positive c such that
$c < 1$ and $|x_{n+1}-L| \le c|x_n-L|$
is sufficient for convergence, but not necessary.
For example, if $x_n = 1/(\ln n)$,
then $x_n \to 0$, but there is no $c < 1$
such that $x_{n+1} < c x_n$ for all large enough $n$.
It can be shown that there is no slowest rate of convergence -
for any rate of convergence, a slower one can be constructed.
This is sort of the inverse of constructing
arbitrarily fast growing functions
and can lead to many interesting places.
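Here is a small sketch (my own addition) of the example above, showing that the ratio $x_{n+1}/x_n$ for $x_n = 1/\ln n$ creeps up to $1$, so no fixed $c<1$ works:
```python
# For x_n = 1/ln(n), the ratio x_{n+1}/x_n tends to 1 as n grows.
import math

for n in (10, 10**3, 10**6):
    ratio = math.log(n) / math.log(n + 1)  # equals x_{n+1} / x_n
    print(n, ratio)
```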
|
The sum of the coefficients of $x^3$ in $(1-\frac{x}{2}+\frac{1}{\sqrt x})^8$ I know how to solve such questions when it's like $(x+y)^n$ but I'm not sure about this one:
In $(1-\frac{x}{2}+\frac{1}{\sqrt x})^8$, What's the sum of the
coefficients of $x^3$?
| You can just multiply it out. Alternatively, you can reason the terms are of the form $1^a\left(-\frac x2\right)^b\left(\frac 1{\sqrt x}\right)^c$ (keeping track of the sign of the middle term) with $a+b+c=8, b-\frac c2=3$. Then $c=2b-6$, so $a+3b=14$ and $a$ needs to be $2 \text{ or } 5$. Then you need the multinomial coefficient as stated by Suresh.
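A hedged check with sympy (my own addition): the two admissible triples give $56\cdot\left(-\tfrac12\right)^3 + 420\cdot\left(\tfrac12\right)^4 = -7 + \tfrac{105}{4} = \tfrac{77}{4}$, and expanding confirms it:
```python
import sympy as sp

x = sp.symbols('x', positive=True)
expr = sp.expand((1 - x/2 + 1/sp.sqrt(x))**8)
print(expr.coeff(x, 3))  # -> 77/4
```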
|
How can we find the values that a (divergent!) series tends to? Suppose we are given a series that diverges. That's right, diverges. We may interest ourselves in the limiting function(s) of its behavior.
For instance, given the power series:$$\frac{1}{1+x} = 1 - x + x^2 - x^3 + \dots$$
I am interested in finding the sum of the coefficients for $x^n$ and $x^{n+1}$ as $n$ approaches infinity. This should be fairly obvious as to what it is, but is there some way that we could do this for a general alternating series that essentially converges to two values, or more importantly, for 3 or more values (i.e. not exactly an alternating series, but one that cycles through a set of values at infinity)?
MY IDEAS
I guess that this can somehow be accomplished similar to finding the limiting behavior for a convergent series. I thought that I knew how to do this, but I forgot what I thought was an easy way.
MY GOAL
I really would like to know if it's possible to apply a function to the values at the limit. If it is, of course, I'd like to know how to do this. This may allow us to determine what the values actually are by using multiple functions.
| Note that the usual definition of the infinite sum is a very different kind of thing from an ordinary finite sum. It introduces ideas from topology, analysis or metric spaces which aren't present in the original definition of sum. So when generalising from finite sums to infinite sums we have quite a bit of choice in how these concepts are introduced and it's something of a prejudice to call the standard method from analysis the sum.
There are quite a few different approaches to summing infinite series, many of which give finite values in places where the usual summation method fails. Examples are Abel summation, Borel summation and Cesàro summation. The mathematician Hardy wrote an entire book on techniques to sum divergent series.
These alternative summation methods aren't just a mathematical curiosity. They play a role in physics where they can help sum some of the series that arise from sums over Feynman diagrams. These results often agree with results computed by more orthodox methods.
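To make this concrete, here is a minimal sketch (my own illustration, not from Hardy's book) of Cesàro summation applied to Grandi's series $1 - 1 + 1 - \cdots$, which it sums to $1/2$:
```python
# Cesaro means: running averages of the partial sums of a series.
def cesaro_means(terms):
    partial, running, means = 0, 0.0, []
    for k, t in enumerate(terms, start=1):
        partial += t            # k-th partial sum
        running += partial      # running total of partial sums
        means.append(running / k)
    return means

print(cesaro_means([(-1)**k for k in range(1000)])[-3:])  # all ~ 0.5
```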
|
Showing $\gcd(n^3 + 1, n^2 + 2) = 1$, $3$, or $9$ Given that n is a positive integer show that $\gcd(n^3 + 1, n^2 + 2) = 1$, $3$, or $9$.
I'm thinking that I should be using the property of gcd that says if $a$ and $b$ are integers then $\gcd(a,b) = \gcd(a+cb,b)$. So I can do things like decide that $\gcd(n^3 + 1, n^2 + 2) = \gcd((n^3+1) - n(n^2+2),n^2+2) = \gcd(1-2n,n^2+2)$, and then using Bézout's theorem I can get $\gcd(1-2n,n^2+2)= r(1-2n) + s(n^2 +2)$, which expands to $r(1-2n) + s(n^2 +2) = r - 2rn + sn^2 + 2s$. However, after some time of chasing this path using various substitutions and factorings I've gotten nowhere.
Can anybody provide a hint as to how I should be looking at this problem?
| Let $\:\rm d = (n^3+1,\:n^2+2).\:$ Observe that $\rm \ d \in \{1,\:3,\:9\} \iff\ d\:|\:9\iff 9\equiv 0\pmod d\:.$
mod $\rm (n^3\!-a,n^2\!-b)\!:\ a^2 \equiv n^6 \equiv b^3\:$ so $\rm\:a=-1,\:b = -2\:\Rightarrow 1\equiv -8\:\Rightarrow\: 9\equiv 0\:. \ \ $ QED
Or, if you don't know congruence arithmetic, since $\rm\: x-y\:$ divides $\rm\: x^2-y^2$ and $\rm\: x^3-y^3$
$\rm n^3-a\ |\ n^6-a^2,\:\ n^2-b\ |\ n^6-b^3\ \Rightarrow\ (n^3-a,n^2-b)\ |\ n^6-b^3-(n^6-a^2) = a^2-b^3 $
Note how much simpler the proof is using congruences vs. divisibility relations on binomials. Similar congruential proofs arise when computing modulo ideals generated by binomials.
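An empirical sanity check (my own addition, not a proof) that only the values $1$, $3$, $9$ occur:
```python
# All three values actually occur; e.g. n = 1, 2, 5 give gcd 1, 3, 9.
from math import gcd

print({gcd(n**3 + 1, n**2 + 2) for n in range(1, 100000)})  # {1, 3, 9}
```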
|
Is the supremum of an ordinal the next ordinal? I apologize for this naive question.
Let $\eta$ be an ordinal. Isn't the supremum of $\eta$ just $\eta+1$? If this is true, the supremum is only necessary if you consider sets of ordinals.
| There are two notions of supremum in sets ordinals, let $A$ be a set of ordinals:
*$\sup^+(A)=\sup\{\alpha+1\mid\alpha\in A\}$,
*$\sup(A) =\sup\{\alpha\mid\alpha\in A\}$.
Since most of the time we care about suprema of sets with no maximal element (e.g. $A=\omega$), the notions coincide. If $A=\{\alpha\}$ then indeed $\sup(A)=\alpha$ and $\sup^+(A)=\alpha+1$.
The reason there are two notions is that $\sup^+(A)$ is defined as $\min\{\alpha\in\mathrm{Ord}\mid A\subseteq\alpha\}$ and $\sup(A)=\bigcup A$.
Both of these notions are useful, and it is easy to see that if $A$ has no maximal element then these indeed coincide. However the distinction can be useful in successor ordinals from time to time.
|
Sum of alternating reciprocals of logarithm of 2,3,4... How to determine convergence/divergence of this sum?
$$\sum_{n=2}^\infty \frac{(-1)^n}{\ln(n)}$$
Why cant we conclude that the sum $\sum_{k=2}^\infty (-1)^k\frac{k}{p_k}$, with $p_k$ the $k$-th prime, converges, since $p_k \sim k \cdot \ln(k)$ ?
| You are correct.
The alternating series test suffices; there is no need for the Dirichlet test.
Think about it: the terms $a_n$ are positive, decrease monotonically ($a_n>a_{n+1}$), and tend to $0$ as $n\to\infty$. Whatever one term adds, the next (smaller) term partially removes, and since the terms shrink to $0$ the partial sums converge, perhaps extremely slowly, but converge they do. If instead the terms tended to a value $c>0$, the partial sums would forever oscillate by about $\pm c$ around some middle value, hence not converge. That is not the case here: for $a_n=\frac{1}{\log n}$ clearly $a_n\to 0$, and for $a_n=\frac{n}{p_n}$ you are correct to use $\frac{p_n}{n}=\mathcal{O}\left(\log{n}\right)$, which gives $a_n=\frac{n}{p_n}=\mathcal{O}\left(\frac{1}{\log{n}}\right)\to 0$. More precisely, from the bound "the $k$th prime is greater than $k(\log k + \log\log k-1)$ for $k \geq 2$", the quantity $\frac{p_n}{n}=\frac{1}{a_n}$ is stuck between
$$\log n+\log\log n-1<\frac{p_n}{n}<\log{n}+\log\log n,$$
so
$$\frac{1}{\log n+\log\log n}<\frac{n}{p_n}<\frac{1}{\log{n}+\log\log n-1}.$$
From this inequality we see clearly that $a_n=\frac{n}{p_n}$ is squeezed to $0$ by bounds that vanish at infinity.
The convergence is so slow that you may think the partial sums alternate around a value $\pm c$, but this is not the case.
Similarly, one can show that $\sum_n \frac{(-1)^n n^{s}}{p_n}$ converges for $s\leq 1$ but diverges for $\operatorname{Re}(s)>1$, and that $\sum_n \frac{n^{s}}{p_n}$ converges only if $\operatorname{Re}(s)<0$. So modulating the sequence $a_n=\frac{n^{s}}{p_n}$ with $(-1)^n$ moves the region of convergence from $\operatorname{Re}(s)<0$ to $\operatorname{Re}(s)\leq 1$. The sum $\sum_n \frac{(-1)^n n^{s}}{p_n}$ is not differentiable for $\operatorname{Re}(s)=0$, so it should exhibit a fractal-like appearance on the imaginary line at the boundary. As with the Prime Zeta function, $\operatorname{Re}(s)=0$ is a natural boundary for the derivative of $\sum_n \frac{(-1)^n n^{s}}{p_n}$; for the sum itself, it is a "softer" natural boundary!
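As a numerical aside (my own addition), the glacial convergence of $\sum_{n\ge 2}(-1)^n/\log n$ is easy to observe:
```python
# Partial sums of sum_{n>=2} (-1)^n / ln(n); they oscillate with
# amplitude about 1/(2 ln N) around the limit, so convergence is slow.
import math

s = 0.0
for n in range(2, 10**6 + 1):
    s += (-1)**n / math.log(n)
    if n in (10**2, 10**4, 10**6):
        print(n, s)
```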
|
Inverse function that takes connected set to non-connected set I've been struggling with providing examples of the following:
1) A continuous function $f$ and a connected set $E$ such that $f^{-1}(E)$ is not connected
2) A continuous function $g$ and a compact set $K$ such that $g^{-1}(K)$ is not compact
| Take any space $X$ which is not connected and not compact. For example, you could think of $\mathbf R - \{0\}$. Map this to a topological space consisting of one point. [What properties does such a space have?]
|
CLT for arithmetic mean of centred exp. distributed RV $X_1, X_2,\ldots$ are independent, exponentially distributed random variables with $m_k:=EX_k=\sqrt{2k}$, $v_k:=\operatorname{Var} X_k=2k$.
I want to analyse the weak convergence of $Y_n=\dfrac{\sum_{i=1}^n (X_i-m_i)}{n}$.
CLT: When the Lindeberg condition
$$b^{-2}_n\sum_{i=1}^n E\left(|X_i-m_i|^2\cdot 1_{|X_i-m_i|>\epsilon b_n}\right) \longrightarrow 0 \quad\text{for } n\to\infty \text{ and all } \epsilon>0$$
is fulfilled, I can state
$\frac{\sum_{i=1}^n (X_i-m_i)}{b_n}\Rightarrow N(0,1)$, i.e. it converges weakly to a standard normally distributed r.v.
Here $b^2_n=\sum_{i=1}^n \operatorname{Var} X_i=\sum_{i=1}^n 2i=n(n+1)$.
So if the condition is fulfilled, then
$$Y_n=\frac{\sum_{i=1}^n (X_i-m_i)}{n}=\sqrt{\frac{n+1}{n}}\frac{\sum_{i=1}^n (X_i-m_i)}{b_n},$$
where
$$\frac{\sum_{i=1}^{n}(X_i-m_i)}{b_n}\Rightarrow N(0,1)\text{ and }\sqrt{\frac{n+1}{n}}=\sqrt{1+\frac{1}{n}}\to 1,$$
hence
$$Y_n\Rightarrow N(0,1).$$
Now the problem is to prove the Lindeberg condition. Using the density $\frac{1}{\sqrt{2i}}e^{-x/\sqrt{2i}}$ of $X_i$ on $x\geq 0$,
$$E\left(|X_i-m_i|^2\cdot 1_{|X_i-m_i|>\epsilon b_n}\right)=\int_{x\geq 0,\ |x-m_i|>\epsilon b_n}(x-m_i)^2\,\frac{1}{\sqrt{2i}}\,e^{-\frac{x}{\sqrt{2i}}}\,dx.$$
But that's where it ends. Am I on the right way to solving this? Can I switch $\sum$ and $\int$ in the condition?
| Let $x_{k,n}=\mathrm E((X_k-m_k)^2:|X_k-m_k|\geqslant\varepsilon b_n)$. Since $X_k/m_k$ follows the distribution of a standard exponential random variable $X$,
$$
x_{k,n}=m_k^2\mathrm E((X-1)^2:(X-1)^2\geqslant\varepsilon^2 b_n^2/m_k^2).
$$
In particular, $x_{k,n}\leqslant m_k^2x_{n,n}$ for every $k\leqslant n$ and $\sum\limits_{k=1}^nx_{k,n}\leqslant b_n^2x_{n,n}$ where $b_n^2=\sum\limits_{k=1}^nm_k^2$. If $x_{n,n}\to0$, this yields $\sum\limits_{k=1}^nx_{k,n}\ll b_n^2$, which is the Lindeberg condition.
But $\mathrm E((X-1)^2:X\geqslant t)=(t^2+1)\mathrm e^{-t}$ for every $t\geqslant0$, hence $x_{n,n}=O(b_n^2\mathrm e^{-\varepsilon b_n/m_n})$, which is enough to prove that $x_{n,n}\to0$.
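A Monte Carlo sanity check (my own sketch; the parameters are arbitrary) that $Y_n$ is approximately standard normal:
```python
# Simulate Y_n = sum(X_i - m_i)/n with X_i ~ Exp(mean sqrt(2i)).
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1000, 4000
means = np.sqrt(2.0 * np.arange(1, n + 1))
samples = rng.exponential(scale=means, size=(reps, n))
Y = (samples - means).sum(axis=1) / n
print(Y.mean(), Y.std())  # ~ 0 and ~ sqrt((n+1)/n) ~ 1
```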
|
Irrationality of "primes coded in binary" For fun, I have been considering the number
$$
\ell := \sum_{p} \frac{1}{2^p}
$$
It is clear that the sum converges and hence $\ell$ is finite. $\ell$ also has the binary expansion
$$
\ell = 0.01101010001\dots_2
$$
with a $1$ in the $p^{th}$ place and zeroes elsewhere. I have also computed a few terms (and with the help of Wolfram Alpha, Plouffe's Inverter, and this link from Plouffe's Inverter) I have found that $\ell$ has the decimal expansion
$$
\ell = .4146825098511116602481096221543077083657742381379169778682454144\dots.
$$
Based on the decimal expansion and the fact that $\ell$ can be well approximated by rationals, it seems exceedingly likely that $\ell$ is irrational. However, I have been unable to prove this.
Question: Can anyone provide a proof that $\ell$ is irrational?
| That $\ell$ is irrational is clear. There are arbitrarily large gaps between consecutive primes, so the binary expansion of $\ell$ cannot be periodic. Any rational has a periodic binary expansion.
The fact that there are arbitrarily large gaps between consecutive primes comes from observing that if $n>1$, then all of $n!+2, n!+3, \dots, n!+n$ are composite.
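For completeness, a short exact computation (my own addition) reproducing the decimal digits of $\ell$; the omitted tail over primes $\geq 300$ is smaller than $2^{-300}$:
```python
# Sum 2^(-p) over primes p < 300 exactly, then evaluate to 30 digits.
from sympy import N, Rational, primerange

ell = sum(Rational(1, 2**p) for p in primerange(2, 300))
print(N(ell, 30))  # 0.414682509851111660248109622154
```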
|
Derivation of asymptotic solution of $\tan(x) = x$. An equation that seems to come up everywhere is the transcendental $\tan(x) = x$. Normally when it comes up you content yourself with a numerical solution usually using Newton's method. However, browsing today I found an asymptotic formula for the positive roots $x$:
$x = q - q^{-1} - \frac23 q^{-3} + \cdots$
with $q = (n + 1/2) \pi$ for positive integers $n$. For instance here: http://mathworld.wolfram.com/TancFunction.html, and here: http://mathforum.org/kb/message.jspa?messageID=7014308 found from a comment here: Solution of tanx = x?.
The Mathworld article says that you can derive this formula using series reversion, however I'm having difficulty figuring out exactly how to do it.
Any help with a derivation would be much appreciated.
| You may be interested in N. G. de Bruijn's book Asymptotic Methods in Analysis, which treats the equation $\cot x = x$. What follows is essentially a minor modification of that section in the book.
The central tool we will use is the Lagrange inversion formula. The formula given in de Bruijn differs slightly from the one given on the wiki page so I'll reproduce it here.
Lagrange Inversion Formula.
Let the function $f(z)$ be analytic in some neighborhood of the point $z=0$ of the complex plane. Assuming that $f(0) \neq 0$, we consider the equation $$w = z/f(z),$$ where $z$ is the unknown. Then there exist positive numbers $a$ and $b$ such that for $|w| < a$ the equation has just one solution in the domain $|z| < b$, and this solution is an analytic function of $w$: $$z = \sum_{k=1}^{\infty} c_k w^k \hspace{1cm} (|w| < a),$$ where the coefficients $c_k$ are given by $$c_k = \frac{1}{k!} \left\{\left(\frac{d}{dz}\right)^{k-1} (f(z))^k\right\}_{z=0}.$$
Essentially what this says is that we can solve the equation $w = z/f(z)$ for $z$ as a power series in $w$ when $|w|$ and $|z|$ are small enough.
Okay, on to the problem. We wish to solve the equation $$\tan x = x.$$ As with many asymptotics problems, we need a foothold to get ourselves going. Take a look at the graphs of $\tan x$ and $x$:
We see that in each interval $\left(\pi n - \frac{\pi}{2}, \pi n + \frac{\pi}{2}\right)$ there is exactly one solution $x_n$ (i.e. $\tan x_n = x_n$), and, when $n$ is large, $x_n$ is approximately $\pi n + \frac{\pi}{2}$. But how do we show this second part?
Since $\tan$ is $\pi$-periodic we have
$$\begin{align}
\tan\left(\pi n + \frac{\pi}{2} - x_n\right) &= \tan\left(\frac{\pi}{2} - x_n\right) \\
&= \frac{1}{\tan x_n} \\
&= \frac{1}{x_n} \to 0
\end{align}$$
as $n \to \infty$, where the second-to-last equality follows from the identities $$\sin\left(\frac{\pi}{2} - \theta\right) = \cos \theta, \qquad \cos\left(\frac{\pi}{2} - \theta\right) = \sin \theta.$$
Since $-\frac{\pi}{2} < \pi n + \frac{\pi}{2} - x_n < \frac{\pi}{2}$ and since $\tan$ is continuous in this interval we have $\pi n + \frac{\pi}{2} - x_n \to 0$ as $n \to \infty$. Thus we have shown that $x_n$ is approximately $\pi n + \frac{\pi}{2}$ for large $n$.
Now we begin the process of putting the equation $\tan x = x$ into the form required by the Lagrange inversion formula. Set $$z = \pi n + \frac{\pi}{2} - x$$ and $$w = \left(\pi n + \frac{\pi}{2}\right)^{-1}.$$ Note that we do this because when $|w|$ is small (i.e. when $n$ is large) we may take $|z|$ small enough such that there will be only one $x$ (in the sense that $x = \pi n + \frac{\pi}{2} - z$) which satisfies $\tan x = x$. Plugging $x = w^{-1} - z$ into the equation $\tan x = x$ yields, after some simplifications along the lines of those already discussed, $$\cot z = w^{-1} - z,$$ which rearranges to $$w = \frac{\sin z}{\cos z + z\sin z} = z/f(z),$$ where $$f(z) = \frac{z(\cos z + z\sin z)}{\sin z}.$$ Here note that $f(0) = 1$ and that $f$ is analytic at $z = 0$. We have just satisfied the requirements of the inversion formula, so we may conclude that we can solve $w = z/f(z)$ for $z$ as a power series in $w$ in the form given earlier in the post.
We have $c_1 = 1$ and, since $f$ is even, it can be shown that $c_{2k} = 0$ for all $k$. Calculating the first few coefficients in Mathematica gives $$z = w + \frac{2}{3}w^3 + \frac{13}{15}w^5 + \frac{146}{105}w^7 + \frac{781}{315}w^9 + \frac{16328}{3465}w^{11} + \cdots.$$ Substituting this into $x = w^{-1} - z$ and using $w = \left(\pi n + \frac{\pi}{2}\right)^{-1}$ gives the desired series for $x_n$ when $n$ is large enough: $$x_n = \pi n + \frac{\pi}{2} - \left(\pi n + \frac{\pi}{2}\right)^{-1} - \frac{2}{3}\left(\pi n + \frac{\pi}{2}\right)^{-3} - \frac{13}{15}\left(\pi n + \frac{\pi}{2}\right)^{-5} - \frac{146}{105}\left(\pi n + \frac{\pi}{2}\right)^{-7} - \frac{781}{315}\left(\pi n + \frac{\pi}{2}\right)^{-9} - \frac{16328}{3465}\left(\pi n + \frac{\pi}{2}\right)^{-11} + \cdots$$
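The reversion can also be checked in sympy (a sketch of my own, iterating $z \mapsto w f(z)$ on truncated series rather than calling a built-in reversion):
```python
import sympy as sp

z, w = sp.symbols('z w')
N = 10
# Maclaurin polynomial of f(z) = z*(cos z + z sin z)/sin z
f_poly = sp.series(z * (sp.cos(z) + z * sp.sin(z)) / sp.sin(z), z, 0, N).removeO()
# Solve z = w*f(z) by fixed-point iteration, truncating at O(w^N)
zw = w
for _ in range(5):
    zw = (sp.expand(w * f_poly.subs(z, zw)) + sp.O(w**N)).removeO()
print(sp.Poly(zw, w).all_coeffs()[::-1])
# ascending coefficients: [0, 1, 0, 2/3, 0, 13/15, 0, 146/105, 0, 781/315]
```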
|
Do the algebraic and geometric multiplicities determine the minimal polynomial? Let $T$ denote some linear transformation of a finite-dimensional space $V$ (say, over $\mathbb{C}$).
Suppose we know the eigenvalues $\{\lambda_i\}_i$ and their associated algebraic multiplicities $\{d_i\}_i$ and geometric multiplicities $\{r_i\}_i$ of $T$, can we determine the minimal polynomial of $T$ via these informations?
If the answer is no, is there a nice way to produce different linear transformations with the same eigenvalues and associated algebraic and geometric multiplicities?
Some background: It is well-known that for a given linear transformation, the minimal polynomial divides the characteristic polynomial: $m_T|p_T$. And I found it proved in a paper that
$$m_T|\prod_i(x-\lambda_i)^{d_i-r_i+1}\ ,\ \ \ \ p_T|m_T\prod_i(x-\lambda_i)^{r_i}$$
And then I want to know if there are any better results.
| No, the algebraic and geometric multiplicities do not determine the minimal polynomial. Here is a counterexample: Consider the Jordan matrices $J_1, J_2$:
$$J_1 = \left(
\begin{array}{cccc}
1 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 \\
0 & 0 & 0 & 1
\end{array}
\right)
~~
J_2 =
\left(
\begin{array}{cccc}
1 & 1 & 0 & 0 \\
0 & 1 & 1 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{array}
\right)
$$
both have only one eigenvalue, namely 1, so they both have algebraic multiplicity 4. They also both have geometric multiplicity 2, since there are 2 Jordan blocks in both matrices (check the Wikipedia article on Jordan normal form for more information). However, they have different minimal polynomials:
$$\begin{align}
m_{J_1}(x) = (x - 1)^2 \\
m_{J_2}(x) = (x - 1)^3
\end{align}$$
so the algebraic and geometric multiplicities do not determine the minimal polynomial.
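A quick numeric confirmation (my own addition) that the minimal polynomials differ:
```python
# (J1 - I)^2 vanishes but (J2 - I)^2 does not, so m_{J1} = (x-1)^2
# while m_{J2} = (x-1)^3.
import numpy as np

J1 = np.array([[1,1,0,0],[0,1,0,0],[0,0,1,1],[0,0,0,1]])
J2 = np.array([[1,1,0,0],[0,1,1,0],[0,0,1,0],[0,0,0,1]])
I = np.eye(4, dtype=int)
print(np.linalg.matrix_power(J1 - I, 2))  # zero matrix
print(np.linalg.matrix_power(J2 - I, 2))  # nonzero
```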
|
Number of Solutions of $3\cos^2(x)+\cos(x)-2=0$ I'm trying to figure out how many solutions there are for
$$3\cos^2(x)+\cos(x)-2=0.$$
I can come up with at least two solutions I believe are correct, but I'm not sure if there is a third.
| Unfortunately, there are infinitely many solutions. Factoring the quadratic in $\cos x$ gives $(3\cos x - 2)(\cos x + 1)=0$, so $\cos x = \frac{2}{3}$ or $\cos x = -1$. Note, for example, that $\pi$ is a solution. Then we also have that $\pi + 2k \pi$ is a solution for all integers $k$. But between $0$ and $2\pi$, there are exactly 3 solutions: $x=\arccos\frac23$, $x=2\pi-\arccos\frac23$, and $x=\pi$.
|
Closed form for $ \int_0^\infty {\frac{{{x^n}}}{{1 + {x^m}}}dx }$ I've been looking at
$$\int\limits_0^\infty {\frac{{{x^n}}}{{1 + {x^m}}}dx }$$
It seems that it always evaluates in terms of $\sin X$ and $\pi$, where $X$ is to be determined. For example:
$$\displaystyle \int\limits_0^\infty {\frac{{{x^1}}}{{1 + {x^3}}}dx = } \frac{\pi }{3}\frac{1}{{\sin \frac{\pi }{3}}} = \frac{{2\pi }}{{3\sqrt 3 }}$$
$$\int\limits_0^\infty {\frac{{{x^1}}}{{1 + {x^4}}}dx = } \frac{\pi }{4}$$
$$\int\limits_0^\infty {\frac{{{x^2}}}{{1 + {x^5}}}dx = } \frac{\pi }{5}\frac{1}{{\sin \frac{{2\pi }}{5}}}$$
So I guess there must be a closed form - the use of $\Gamma(x)\Gamma(1-x)$ first comes to my mind because of the $\dfrac{\pi}{\sin \pi x}$ appearing. Note that the arguments are always the ratio of the exponents, like $\dfrac{1}{4}$, $\dfrac{1}{3}$ and $\dfrac{2}{5}$. Is there any way of finding it? I'll work on it and update with any ideas.
UPDATE:
The integral reduces to finding
$$\int\limits_{ - \infty }^\infty {\frac{{{e^{a t}}}}{{{e^t} + 1}}dt} $$
With $a =\dfrac{n+1}{m}$ which converges only if
$$0 < a < 1$$
Using series I find the solution is
$$\sum\limits_{k = - \infty }^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}}} $$
Can this be put in terms of the Digamma Function or something of the sort?
| The general formula (for $m > n+1$ and $n \ge 0$) is $\frac{\pi}{m} \csc\left(\frac{\pi (n+1)}{m}\right)$. IIRC the usual method involves a wedge-shaped contour of angle $2 \pi/m$.
EDIT: Consider $\oint_\Gamma f(z)\ dz$ where $f(z) = \frac{z^n}{1+z^m}$ (using the principal branch if $m$ or $n$ is a non-integer) and $\Gamma$ is the closed contour below:
$\Gamma_1$ goes to the right along the real axis from $\epsilon$ to $R$, so $\int_{\Gamma_1} f(z)\ dz = \int_\epsilon^R \frac{x^n\ dx}{1+x^m}$. $\Gamma_3$ comes in along the ray at angle $2 \pi/m$. Since $e^{(2 \pi i/m) m} = 1$, $\int_{\Gamma_3} f(z)\ dz = - e^{2 \pi i (n+1)/m} \int_{\Gamma_1} f(z)\ dz$. $\Gamma_2$ is a circular arc at distance $R$ from the origin. Since $m > n+1$, the integral over it goes to $0$ as $R \to \infty$. Similarly, the integral over the small circular arc at distance $\epsilon$ goes to $0$ as $\epsilon \to 0$. So we get
$$ \lim_{R \to \infty, \epsilon \to 0}
\int_\Gamma f(z)\ dz = (1 - e^{2 \pi i (n+1)/m}) \int_0^\infty \frac{x^n\ dx}{1+x^m}$$
The meromorphic function $f(z)$ has one singularity inside $\Gamma$, a pole at $z = e^{\pi i/m}$ where the residue is $- e^{\pi i (n+1)/m}/m$. So the residue theorem gives you
$$ \int_0^\infty \frac{x^n\ dx}{1+x^m} = \frac{- 2 \pi i e^{\pi i (n+1)/m}}{ m (1 - e^{2 \pi i (n+1)/m})} = \frac{\pi}{m} \csc\left(\frac{\pi(n+1)}{m}\right)$$
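Numerically (my own sketch), the closed form checks out against direct quadrature:
```python
# Compare quad(x^n/(1+x^m), 0, inf) with (pi/m) csc(pi (n+1)/m).
import math
from scipy.integrate import quad

for n, m in [(1, 3), (1, 4), (2, 5)]:
    val = quad(lambda x: x**n / (1 + x**m), 0, math.inf)[0]
    closed = (math.pi / m) / math.sin(math.pi * (n + 1) / m)
    print((n, m), val, closed)
```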
|
Calculating the percentage difference of two numbers The basic problem is this: "I have this number x and I ask you to give me another number y. If the number you give me is some percentage c different than my number then I do not want it." Given that you will know x and c, how do you calculate whether or not I should take y?
The naive approach I came up with is to just test y / x < c, but this fails for obvious reasons (try y bigger than x).
The next approach I tried is that the percentage difference is really just a ratio of the smaller number divided by the larger number. So therefore we could try min(x, y) / max(x, y) < c. However this does not work; here is an example:
x = 1.2129 y = 1.81935 c = 50%
If we do the above we get 1.2129 / 1.81935 = 0.67 which is greater than 0.50. The problem here is that I obtained y by multiplying 1.2129 by 1.5, therefore y is only 50% greater than x. Why? I still don't understand why the above formula doesn't work.
Eventually through some googling I stumbled across the percentage difference formula, but even this doesn't suit my needs. It is abs(x - y) / ((x + y) / 2). However, this does not yield the result I am looking for: abs(x - y) = abs(1.2129 - 1.81935) = 0.60645, (x + y) / 2 = 3.03225 / 2 = 1.516125, and 0.60645 / 1.516125 = 0.4.
Eventually I ended up writing some code to evaluate x * c < y < x * (1 + c). As the basic idea is that we don't want any y that is 50% less than my number, nor do we want any number that is 50% greater than my number.
Could someone please help me identify what I'm missing here? It seems like there ought to be another way that you can calculate the percentage difference of two arbitrary numbers and then compare it to c.
| What you're missing is what you want. The difference between your two numbers is clearly $|x-y|$, but the "percentage" depends on how you want to write $|x-y|/denominator$. You could choose for a denominator $|x|$, $|x+y|$, $\max \{x,y\}$, $\sqrt{x^2 + y^2}$, for all I care, it's just a question of choice. Personally I'd rather use $|x|$ as a denominator, but that's just because I think it'll fit for this problem ; if this is not the solution to your problem, then choose something else.
That is because when you say that you want the difference between $x$ and $y$ to be $c$% or less than your number $x$, for me it means that
$$
|x-y| < \frac c{100} |x| \qquad \Longleftrightarrow \qquad \frac{|x-y|}{|x|} < \frac{c}{100}
$$
so that choosing $|x|$ as a denominator makes most sense.
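In code, the acceptance test then becomes the one-liner below (a minimal sketch; the function name is my own):
```python
def within_percentage(x, y, c):
    """True if y differs from x by less than c percent of |x|."""
    return abs(x - y) < (c / 100.0) * abs(x)

print(within_percentage(1.2129, 1.81935, 50))  # False: y is exactly 50% above x
```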
Hope that helps,
|
Combinatorics-N boys and M girls are learning acting skills from a theatre in Mumbai. N boys and M girls are learning acting skills from a theatre in Mumbai. To perform a play on ‘Ramayana’ they need to form a group of P actors containing not less than 4 boys and not less than 1 girl. The theatre requires you to write a program that tells them the number of ways the group can be formed.
| Assume that $M \ge 1$, $N\ge 4$, and $P\ge 5$. In how many ways can we choose $P$ people, with no sex restrictions? We are choosing $P$ people from $M+N$. The number of choices is
$$\tbinom{M+N}{P}.$$
However, choosing $0$, $1$, $2$, or $3$ boys is forbidden.
In how many ways can we choose $0$ boys, and therefore $P$ girls? The answer is $\binom{M}{P}$. For reasons that will be apparent soon, we write that as
$\binom{N}{0}\binom{M}{P}$.
There are $\binom{N}{1}\binom{M}{P-1}$ ways to choose $1$ boy and $P-1\,$ girls.
There are $\binom{N}{2}\binom{M}{P-2}$ ways to choose $2$ boys and $P-2\,$ girls.
There are $\binom{N}{3}\binom{M}{P-3}$ ways to choose $3$ boys and $P-3\,$ girls.
We need to deal with a small technical complication. Suppose, for example, that we are choosing $0$ boys and therefore $P$ girls, but the number $M$ of available girls is less than $P$. The answer above remains correct if we use the common convention that $\binom{m}{n}=0$ if $m<n$.
Finally, at least $1$ girl must be chosen. So the $\binom{N}{P}\binom{M}{0}$ all-boy choices are forbidden.
The total number of forbidden choices is therefore
$$\tbinom{N}{0}\tbinom{M}{P}+\tbinom{N}{1}\tbinom{M}{P-1}+\tbinom{N}{2}\tbinom{M}{P-2} +\tbinom{N}{3}\tbinom{M}{P-3}+\tbinom{N}{P}\tbinom{M}{0}.$$
Finally, subtract the above number from the total number $\binom{M+N}{P}$ of choices with no sex restrictions.
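Putting it together as a program (a sketch under the assumptions above; the function name is my own), note that Python's math.comb conveniently returns $0$ when the lower index exceeds the upper one:
```python
from math import comb

def group_count(N, M, P):
    total = comb(M + N, P)
    forbidden = sum(comb(N, b) * comb(M, P - b) for b in range(4))
    forbidden += comb(N, P) * comb(M, 0)  # the all-boy groups
    return total - forbidden

print(group_count(5, 3, 5))  # -> 15, i.e. C(5,4)*C(3,1), the only legal split
```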
|
Evaluating $\int_0^\infty e^{-x^n}\,\mathrm{d}x$ Is there a general approach to evaluating definite integrals of the form $\int_0^\infty e^{-x^n}\,\mathrm{d}x$ for arbitrary $n\in\mathbb{Z}$? I imagine these lack analytic solutions, so some sort of approximation is presumably required. Any pointers are welcome.
| For $n=0$ the integrand is the constant $e^{-1}$, so the integral is divergent, and if $n<0$ then $\lim_{x\to\infty}e^{-x^n}=1$, so the integral is not convergent either.
For $n>0$ we make the substitution $t:=x^n$, then $$I_n:=\int_0^{+\infty}e^{-x^n}dx=\int_0^{+\infty}e^{-t}t^{\frac 1n-1}\frac 1ndt=\frac 1n\Gamma\left(\frac 1n\right),$$
where $\Gamma(\cdot)$ is the usual Gamma function.
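A numerical check (my own addition) of $I_n = \frac1n\Gamma\left(\frac1n\right)$ for a few values of $n$:
```python
import math
from scipy.integrate import quad

for n in (2, 3, 5):
    val = quad(lambda x: math.exp(-x**n), 0, math.inf)[0]
    print(n, val, math.gamma(1 / n) / n)  # e.g. n=2 gives sqrt(pi)/2
```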
|
Simple algebra. How can you factor out a common factor if it is $\frac{1}{\text{factor}}$ in one of the cases? I'm sure this is simple. I want to pull out a factor as follows...
I have the expression
$$\frac{a(\sqrt{x}) - (b + c)(\frac{1}{\sqrt{x}})c}{x}.$$
It would be useful for me to pull out the $\sqrt{x}$ from the numerator and try to simplify to remove the denominator, but how can I pull out the $\sqrt{x}$ from the right-most term $\frac{1}{\sqrt{x}}$?
Thanks for your help!
| $$ \frac{a(\sqrt{x}) - (b + c)({\frac{\sqrt{x}}{x}})c}{x}\;\tag{1}$$
$$=\frac{a(\color{red}{\sqrt{x}}) - (b + c)({\frac{\color{red}{\sqrt{x}}}{x}})c}{\color{red}{\sqrt x}\sqrt x}\;\tag{2}$$
$$=\frac{(\color{red}{\sqrt x})[a - (b + c)({\frac{1}{x}})c]}{\color{red}{\sqrt x}\sqrt x}\;\tag{3}$$
$$=\frac{[a - (b + c)({\frac{1}{x}})c]}{\sqrt{x}}$$
And finally:
$$=\frac{[ax - (b + c)c]}{x\sqrt{x}}$$
Hope it helps.
$(1)$: Rewriting ${1\over\sqrt x}$ as ${\sqrt x \over x}$
$(2)$: Rewriting $x$ as $\sqrt x \sqrt x$, now $\sqrt x$ is the common factor in both numerator and denominator
$(3)$: Pulling the common factor ($\sqrt x$) out in the numerator
|