finding a minorant to $(\sqrt{k+1} - \sqrt{k})$ Need help finding a minorant to $(\sqrt{k+1} - \sqrt{k})$ which allows me to show that the series $\sum_{k=1}^\infty (\sqrt{k+1} - \sqrt{k})$ is divergent.
You should observe that your series telescopes, i.e.: $$\sum_{k=1}^n (\sqrt{k+1} - \sqrt{k}) = (\sqrt{2} -\sqrt{1}) + (\sqrt{3} -\sqrt{2}) +\cdots + (\sqrt{n}-\sqrt{n-1}) +(\sqrt{n+1}-\sqrt{n}) = \sqrt{n+1}-1\; ,$$ and therefore: $$\sum_{k=1}^\infty (\sqrt{k+1} - \sqrt{k}) = \lim_{n\to \infty} \sum_{k=1}^n (\sqrt{k+1} - \sqrt{k}) = \lim_{n\to \infty} \left(\sqrt{n+1}-1\right) = \infty\; .$$ (If you really do want a minorant: $\sqrt{k+1}-\sqrt{k} = \frac{1}{\sqrt{k+1}+\sqrt{k}} \ge \frac{1}{2\sqrt{k+1}}$, and $\sum_k \frac{1}{2\sqrt{k+1}}$ diverges by comparison with the harmonic series.)
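A quick numerical sanity check of the telescoping identity (a minimal sketch, using only the standard library; the closed form is the one derived above):

```python
import math

# Partial sums of sum_{k=1}^{n} (sqrt(k+1) - sqrt(k)) versus the closed form sqrt(n+1) - 1.
for n in (10, 100, 10_000):
    partial = sum(math.sqrt(k + 1) - math.sqrt(k) for k in range(1, n + 1))
    print(n, partial, math.sqrt(n + 1) - 1)  # the two columns agree, and both grow without bound
```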
A question on pointwise convergence. The function $f_n(x):[-1,1] \to \mathbb{R}, \, \, \,f_n(x) = x^{2n-1}$ tends pointwise to the function $$f(x) = \left\{\begin{array}{l l}1&\textrm{if} \quad x=1\\0&\textrm{if} \quad -1<x<1\\-1&\textrm{if} \quad x=-1\end{array}\right.$$ but not uniformly (for obvious reasons as $f(x)$ isn't continuous). But then surely in this case $||f_n(x) - f(x)||_\infty \to 0$ as for any $x \in (-1,1)$ and any $\epsilon > 0$ you can make $n$ large enough so that the max distance between $f_n(x)$ and $f(x)$ at that particular $x$ is less than $\epsilon$? What am I doing wrong? Thanks!
Your argument, "for any $x \in [-1,1]$ and any $\epsilon > 0$ you can make $n$ large enough so that the max distance between $f_n(x)$ and $f(x)$ at that particular $x$ is less than $\epsilon$", is a great proof that for any $x$, the sequence $f_n(x)$ tends to $f(x)$. In other words, it's a proof that $f_n$ tends to $f$ pointwise. But the assertion $||f_n(x) - f(x)||_\infty \to 0$ is the assertion that $f_n$ tends to $f$ uniformly on $[-1,1]$, which as you've noted is false. The difference in the two statements (hence in how you'd need to prove them) is in the order of quantifiers. For pointwise convergence, given any $\epsilon>0$, it's "for all $x$, there exists $n$ such that..." - in other words, $n$ can depend on $x$ as well as $\epsilon$. For uniform convergence, given any $\epsilon>0$, it's "there exists $n$ such that for all $x$, ..." - in other words, $n$ cannot depend on $x$. If you try reorganizing your proof so that you have to choose $n$ before $x$ is given, then you'll see the proof break down.
Combinatorial Probability-Rolling 12 fair dice My text says, regarding combinatorial probability, "The number of outcomes associated with any problem involving the rolling of n six-sided dice is $6^n$." I know that in combinatorial probability $P(A)=m/n$ where $m$ is the number of ways $A$ can happen and $n$ is the number of ways to perform the operation in question. But since we must be consistent in numerator and denominator with respect to order, wouldn't the number in the denominator depend on whether the problem at hand respects order? I.e., in any problem involving rolling $n$ dice, should I expect $P=X/6^n$ regardless of context?
That is usually very good advice, at least until you simplify. So for example, the probability that rolling twelve dice gives twelve prime numbers (not necessarily distinct) is $\frac {3^{12}}{6^{12}}$, but that simplifies to $\frac {1}{2^{12}}$. But it is possible to devise a problem where this advice does not work. For example, roll a die; then roll a second and keep rolling the second until it is distinct from the first; and then roll a third until it is distinct from each of the first two. What is the probability the three dice show three prime numbers? And if you do this four times to use all $12$ dice? It is $\left(\frac 36 \times \frac 25 \times \frac 14\right)^4 = \frac{1}{20^4}$ and the advice turns out to be wrong in this artificial case. My guess is that given the advice in the text, you have a reasonable expectation that all questions will be of the first sort. But you should remain aware that the second kind is not impossible.
Are the eigenvalues of $AB$ equal to the eigenvalues of $BA$? First of all, am I being crazy in thinking that if $\lambda$ is an eigenvalue of $AB$, where $A$ and $B$ are both $N \times N$ matrices (not necessarily invertible), then $\lambda$ is also an eigenvalue of $BA$? If it's not true, then under what conditions is it true or not true? If it is true, can anyone point me to a citation? I couldn't find it in a quick perusal of Horn & Johnson. I have seen a couple proofs that the characteristic polynomial of $AB$ is equal to the characteristic polynomial of $BA$, but none with any citations. A trivial proof would be OK, but a citation is better.
If $v$ is an eigenvector of $AB$ for some nonzero $\lambda$, then $Bv\ne0$ and $$\lambda Bv=B(ABv)=(BA)Bv,$$ so $Bv$ is an eigenvector for $BA$ with the same eigenvalue. If $0$ is an eigenvalue of $AB$ then $0=\det(AB)=\det(A)\det(B)=\det(BA)$ so $0$ is also an eigenvalue of $BA$. More generally, Jacobson's lemma in operator theory states that for any two bounded operators $A$ and $B$ acting on a Hilbert space $H$ (or more generally, for any two elements of a Banach algebra), the non-zero points of the spectrum of $AB$ coincide with those of the spectrum of $BA$.
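A quick numerical illustration (not a proof; a sketch assuming NumPy) comparing the sorted spectra of $AB$ and $BA$ for random square matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))

# Sort eigenvalues (by real part, then imaginary part) so the two spectra can be compared.
eig_ab = np.sort_complex(np.linalg.eigvals(A @ B))
eig_ba = np.sort_complex(np.linalg.eigvals(B @ A))
print(np.allclose(eig_ab, eig_ba))  # True: AB and BA have the same eigenvalues
```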
Is this out-of-context theorem true? Can someone tell me if the following proposition is true? Theorem If $u=g + i h$ is a holomorphic function in $\Omega\subseteq \mathbb{C}$ and $\Omega$ is simply connected, then $v(z)=u(w)+ \int_\gamma \,g_x(z)-ih_y (z) \,dz$ is a primitive function of $u$ (where $w\in \Omega$ is fixed and $\gamma$ is some path from $w$ to $z$). (I have come across (the implicit use of) this proposition by reading about something not really related to complex analysis, and since I know very little about it, I wondered if it actually would be true taken out of context like this. I also wouldn't mind a proof, if it is true and someone would have the time.)
I shall assume that $g$ and $h$ are real-valued. Since $u:=g+ih$ is holomorphic it follows from the Cauchy-Riemann equations that $h_y=g_x$. Therefore for any curve $\gamma\subset\Omega$ connecting the point $z_0$ with a variable point $z$ one has $$\int_\gamma (g_x- i h_y)\ dz=(1-i)\int_\gamma g_x\ (dx+i dy) =(1-i)\int_\gamma(g_x\ dx + i h_y dy)=(1-i)\Bigl(g(z)-g(z_0)+i\bigl(h(z)-h(z_0)\bigr)\Bigr)=(1-i)\bigl(u(z)-u(z_0)\bigr)\ .$$ It follows that $$v(z):=u(z_0)+\int_{z_0}^z (g_x- i h_y)\ dz=i u(z_0)+(1-i) u(z)\ ,$$ which shows that your $v$ is more or less the given $u$ again, and not a primitive of $u$.
How to determine the limit of this sequence? I was wondering how to determine the limit of $ (n^p - (\frac{n^2}{n+1})^p)_{n\in \mathbb{N}}$ with $p>0$, as $n \to \infty$? For example, when $p=1$, the sequence is $ (\frac{n}{n+1})_{n\in \mathbb{N}}$, so its limit is $1$. But I am not sure how to decide it when $p \neq 1$. Thanks in advance!
Given the Binomial theorem, we have (see Landau notations) $$\begin{array}{} (n+1)^p-n^p=\Theta(n^{p-1}) & \implies 1-\left(\frac{n}{n+1}\right)^p=\Theta \left( \frac{1}{n} \right) \\ & \implies n^p - \left(\frac{n^2}{n+1}\right)^p=\Theta(n^{p-1}) \end{array}$$ Thus the limit is $0$ for $p<1$, is $1$ for $p=1$ (already computed), and blows up to $\infty$ for $p>1$.
If a graph of $2n$ vertices contains a Hamiltonian cycle, then can we reach every other vertex in $n$ steps? Problem: Given a graph $G$ with $2n$ vertices and at least one triangle, is it possible to show that you can reach every other vertex in $n$ steps if $G$ contains a Hamilton cycle (HC)? EDIT: Sorry, I forgot to mention that $G$ is planar and 3-connected. A complete proof for $3$-regular graphs would also be accepted/rewarded. Does the following work as proof? Choose a starting vertex $v_0$ and a direction.

* If you walk along the HC you'll reach a vertex $v_{n-1}$ with maximal distance from $v_0$ in $n$ steps.
* You'll reach $v_{n-2}$ by doing a round in the triangle and
* $v_{n-3}$ by stepping backwards at the last step.
* By combining these moves, you'll reach half of all $v_k$.
* By choosing the other direction at the beginning you'll reach the other half.
* $v_0$ is free to choose.

Showing or disproving the "only if"-part would also be nice!
The answer is no. Question: Let $G$ be a 3-connected, hamiltonian, planar graph with $2n$ vertices and at least one triangle. Is it true that for all vertex pairs $x,y$ there is a walk of exactly $n$ steps from $x$ to $y$? The following graph and vertex pair is a counterexample. It is clear that the graph is planar and has a triangle. It can be easily verified that the graph is 3-connected. To show that the graph is hamiltonian, I have highlighted a hamiltonian cycle here. Since the graph has 16 vertices, we need to verify that there is no walk of length 8 from $x$ to $y$. Since $n$ is even, we can not reach $y$ without using some of the four vertices on the right. Now it is easy to verify by hand that there is no walk from $x$ to $y$ of length exactly 8.
What are the minimal conditions on a topological space for its closure to equal the sequential closure? My question is: what are the minimal conditions on a topological space for it have the following property? $$x\in \bar{A}\iff \exists (x_n)\subset A | x_n \to x$$
In this paper there is the answer (section 2, on Fréchet spaces, also known as Fréchet-Urysohn spaces): Your property defines the notion of a Fréchet space and he shows that these spaces are the pseudo-open images of metric spaces. He also defines the weaker concept of a sequential space and in the follow up paper he shows that a sequential space is Fréchet iff all of its subspaces are sequential (hereditarily sequential). A sequential space has the cleaner characterization: a space $X$ is sequential iff there is a metric space $M$ and a surjective quotient map $f: M \rightarrow X$ ($X$ is a quotient image of a metric space). As said, Fréchet spaces can be similarly characterized, using not quotient maps but pseudo-open maps: $f: X \rightarrow Y$ is pseudo-open iff for every $y \in Y$ and every open neighborhood $U$ of $f^{-1}[\{y\}]$ we have that $y \in \operatorname{int}(f[U])$. Every open or closed surjective map is pseudo-open and all pseudo-open maps are quotient.
How to solve $\int_0^\pi{\frac{\cos{nx}}{5 + 4\cos{x}}}dx$? How can I solve the following integral? $$\int_0^\pi{\frac{\cos{nx}}{5 + 4\cos{x}}}dx, n \in \mathbb{N}$$
To elaborate on Pantelis Damianou's answer $$ \newcommand{\cis}{\operatorname{cis}} \begin{align} \int_0^\pi\frac{\cos(nx)}{5+4\cos(x)}\mathrm{d}x &=\frac12\int_{-\pi}^\pi\frac{\cos(nx)}{5+4\cos(x)}\mathrm{d}x\\ &=\frac12\int_{-\pi}^\pi\frac{\cis(nx)}{5+2(\cis(x)+\cis(-x))}\mathrm{d}x\\ &=\frac12\int_{-\pi}^\pi\frac{\cis(x)\cis(nx)}{2\cis^2(x)+5\cis(x)+2}\mathrm{d}x\\ &=\frac{1}{2i}\int_{-\pi}^\pi\frac{\cis(nx)}{2\cis^2(x)+5\cis(x)+2}\mathrm{d}\cis(x)\\ &=\frac{1}{2i}\oint\frac{z^n}{2z^2+5z+2}\mathrm{d}z \end{align} $$ where the integral is counterclockwise around the unit circle and $\cis(x)=e^{ix}$. Factor $2z^2+5z+2$ and use partial fractions. However, I only get a singularity at $z=-\frac12$ (and one at $z=-2$, but that is outside the unit circle, so of no consequence). Now that a complete solution has been posted, I will finish this using residues: $$ \begin{align} \frac{1}{2i}\oint\frac{z^n}{2z^2+5z+2}\mathrm{d}z &=\frac{1}{6i}\oint\left(\frac{2}{2z+1}-\frac{1}{z+2}\right)\,z^n\,\mathrm{d}z\\ &=\frac{1}{6i}\oint\frac{z^n}{z+1/2}\mathrm{d}z\\ &=\frac{\pi}{3}\left(-\frac12\right)^n \end{align} $$
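As a numerical cross-check of the residue computation (a sketch assuming SciPy's `quad`), compare the integral against $\frac{\pi}{3}\left(-\frac12\right)^n$ for a few values of $n$:

```python
import numpy as np
from scipy.integrate import quad

for n in range(5):
    val, _ = quad(lambda x: np.cos(n * x) / (5 + 4 * np.cos(x)), 0, np.pi)
    print(n, val, (np.pi / 3) * (-0.5) ** n)  # the two columns agree
```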
annihilator is the intersection of sets If $W$ is a subspace of a finite dimensional vector space $V$ and $\{g_{1},g_{2},\cdots, g_{r}\}$ is a basis of the annihilator $W^{\circ}=\{f \in V^{\ast}| f(a)=0, \forall a \in W\}$, then $W=\cap_{i=1}^{r} N_{g_{i}}$, where for $f \in V^{\ast}$, $N_{f}=\{a \in V| f(a)=0\}$ How shall I prove this?
We wish to prove that $$W = \bigcap_{i=1}^{r} N_{g_{i}}$$ Step $1$: Proving $W \subset \bigcap_{i=1}^{r} N_{g_{i}}$ Let $w \in W$. We know that the annihilator $W^{o}$ is the set of linear functionals that vanish on $W$. If $g_{i}$ is in the basis for $W^{o}$, it is certainly in $W^{o}$. Thus, the $g_{i}$ all vanish on $W$, so that $W \subset N_{g_i}$. We thus see that $w \in N_{g_{i}}$ for all $1 \leq i \leq r$. Hence $$w \in \bigcap_{i=1}^{r} N_{g_{i}}$$ But $w$ was arbitrary, so $$W \subset \bigcap_{i=1}^{r} N_{g_{i}}$$ Step $2$: Proving $W \supset \bigcap_{i=1}^{r} N_{g_{i}}$ Let $\{\alpha_{1}, \cdots, \alpha_{s}\}$ be a basis for $W$, and extend it to a basis for $V$, $\{ \alpha_{1} ,\cdots, \alpha_{n}\}$. Likewise, extend $\{g_{1}, \cdots, g_{r}\}$ to a basis for $V^{*}$, $\{g_{1}, \cdots, g_{n}\}$, noting that $\mbox{dim}(V) = \mbox{dim} (V^{*})$. It is easily seen that one can choose these two bases to be dual to each other, as the dual basis to $\{\alpha_{1}, \cdots, \alpha_{s}\}$ is not contained in $W^{o}$, and the dual basis for $V \setminus W$ must be contained in $W^{o}$, just by the nature of the dual basis. (To make it easier to choose the correct basis for $V \setminus W$, try looking at the double dual $V^{**}$, which is naturally isomorphic to $V$, and looking at the dual basis for $W^{o}$ in $V^{**}$.) We get $$ g_{i}(\alpha_{j}) = \left\{ \begin{array}{cc} 1 & i= j\\ 0 & i \neq j \end{array} \right.$$ Let $v \in \bigcap_{i=1}^{r} N_{g_{i}}$, so that $g_{i}(v) = 0$ for all $1 \leq i \leq r$, and write $$v = c_{1}\alpha_{1} + \cdots + c_{n}\alpha_{n}$$ Taking $g_{i}$ of both sides, where $i$ ranges from $1$ through $r$, $$g_{i}(v) = c_{1}g_{i}(\alpha_{1}) + \cdots + c_{n}g_{i}(\alpha_{n}) = 0$$ However, these $\{ g_{i}: 1 \leq i \leq r\}$ form a dual basis to $V \setminus W$. Hence, if $v$ had a nonzero component $c_{k}\alpha_{k}$ in $V \setminus W$, it would not vanish on $g_{k}$, $1 \leq k \leq r$. Then $g_{k}(v) \neq 0$, contrary to what we have shown. Thus $v$ must be contained in $W$.
If we define $\sin x$ as series, how can we obtain the geometric meaning of $\sin x$? In Terry Tao's textbook Analysis, he defines $\sin x$ as below:

* Define rational numbers
* Define Cauchy sequences of rational numbers, and equivalence of Cauchy sequences
* Define reals as the space of Cauchy sequences of rationals modulo equivalence
* Define limits (and other basic operations) in the reals
* Cover a lot of foundational material including: complex numbers, power series, differentiation, and the complex exponential
* Eventually (Chapter 15!) define the trigonometric functions via the complex exponential. Then show the equivalence to other definitions.

My question is how we can obtain the geometric interpretation of $\sin x$, that is, as the ratio of the opposite side and the hypotenuse.
In this hint I suggest showing from the power series that if $$ \sin(x)=\sum_{k=0}^\infty(-1)^k\frac{x^{2k+1}}{(2k+1)!}\tag{1} $$ and $$ \cos(x)=\frac{\mathrm{d}}{\mathrm{d}x}\sin(x)=\sum_{k=0}^\infty(-1)^k\frac{x^{2k}}{(2k)!}\tag{2} $$ that $\frac{\mathrm{d}}{\mathrm{d}x}\cos(x)=-\sin(x)$ and from there that $$ \sin^2(x)+\cos^2(x)=1\tag{3} $$ Therefore, $(\cos(x),\sin(x))$ lies on the unit circle. To see that $(\cos(x),\sin(x))$ moves around the unit circle at unit speed, note that $(3)$ implies $$ \left|\frac{\mathrm{d}}{\mathrm{d}x}(\cos(x),\sin(x))\right|=\left|(-\sin(x),\cos(x))\right|=1\tag{4} $$ Thus, $(3)$ and $(4)$ say that $(\cos(x),\sin(x))$ moves around the unit circle at unit speed. Note also that $(-\sin(x),\cos(x))$ is at a right angle counter-clockwise from $(\cos(x),\sin(x))$. Therefore, $(\cos(x),\sin(x))$ moves counter-clockwise around the unit circle at unit speed, starting at $(1,0)$. This should be sufficient to show that $\sin(x)$ and $\cos(x)$ are the standard trigonometric functions.
Continuous but not Hölder continuous function on $[0,1]$ Does there exist a continuous function $F$ on $[0,1]$ which is not Hölder continuous of order $\alpha$ at any point $x_{0}$ in $[0,1]$, for $0 < \alpha \le 1$? I am trying to prove that such a function does exist, but I couldn't find a good example.
($1$-dimensional) Brownian motion is almost surely continuous and nowhere Hölder continuous of order $\alpha$ if $\alpha > 1/2$. IIRC one can define random Fourier series that will be almost surely continuous but nowhere Hölder continuous for any $\alpha > 0$. EDIT: OK, here's a construction. Note that $f$ is not Hölder continuous of order $\alpha$ at any point of $I = [0,1]$ if for every $C$ and every $x \in I$ there are $s,t \in I$ with $s \le x \le t$ and $|f(t)-f(s)|>C(t-s)^\alpha$. I'll define $f(x) = \sum_{n=1}^\infty n^{-2} \sin(\pi g_n x)$, where $g_n$ is an increasing sequence of integers such that $2 g_n$ divides $g_{n+1}$. This series converges uniformly to a continuous function. Let $f_N(x)$ be the partial sum $\sum_{n=1}^N n^{-2} \sin(\pi g_n x)$. Note that $\max_{x \in [0,1]} |f_N'(x)| \le \sum_{n=1}^N n^{-2} \pi g_n \le B g_N$ for some constant $B$ (independent of $N$). Now suppose $s = k/g_N$ and $t = (k+1/2)/g_N$ where $k \in \{0,1,\ldots,g_N-1\}$. We have $f(s) = f_{N-1}(s)$ and $f(t) = f_{N-1}(t) \pm N^{-2}$. Now $|f_{N-1}(t) - f_{N-1}(s)| \le B g_{N-1} (t-s) = B g_{N-1}/(2 g_N)$, so $|f(t) - f(s)| \ge N^{-2} - B g_{N-1}/(2 g_N)$. The same holds for $s = (k+1/2)/g_N$ and $t = (k+1)/g_N$. So let $g_n$ grow rapidly enough that $g_{n-1}/g_n = o(n^{-2})$ ($g_n = (3n)!$ will do, and also satisfies the requirement that $2g_n$ divides $g_{n+1}$). Then for every $\alpha > 0$, $(t-s)^\alpha = (2g_N)^{-\alpha} = o(N^{-2}) = o(|f(t) - f(s)|)$. Since for each $N$ the intervals $[s,t]$ cover all of $I$, we are done.
Show $ I = \int_0^{\pi} \frac{\mathrm{d}x}{1+\cos^2 x} = \frac{\pi}{\sqrt 2}$ Show $$ I = \int_0^{\pi} \frac{\mathrm{d}x}{1+\cos^2 x} = \frac{\pi}{\sqrt 2}$$
In case KV's solution seems a bit magical, it may be reassuring to know that there's a systematic way to integrate rational functions of trigonometric functions, the Weierstraß substitution. With $\cos x=(1-t^2)/(1+t^2)$ and $\mathrm dx=2/(1+t^2)\mathrm dt$, $$ \begin{eqnarray} \int_0^\pi \frac{\mathrm dx}{1+\cos^2 x} &=& \int_0^\infty\frac2{1+t^2} \frac1{1+\left(\frac{1-t^2}{1+t^2}\right)^2}\mathrm dt \\ &=& \int_0^\infty \frac{2(1+t^2)}{(1+t^2)^2+(1-t^2)^2}\mathrm dt \\ &=& \int_0^\infty \frac{1+t^2}{1+t^4}\mathrm dt\;. \end{eqnarray} $$ Here's where it gets a bit tedious. The zeros of the denominator are the fourth roots of $-1$, and assembling the conjugate linear factors into quadratic factors yields $$ \begin{eqnarray} \int_0^\infty \frac{1+t^2}{1+t^4}\mathrm dt &=& \int_0^\infty \frac{1+t^2}{(t^2+\sqrt2t+1)(t^2-\sqrt2t+1)}\mathrm dt \\ &=& \frac12\int_0^\infty \frac1{(t^2+\sqrt2t+1)}+\frac1{(t^2-\sqrt2t+1)}\mathrm dt \\ &=& \frac12\left[\sqrt2\arctan(1+\sqrt2t)-\sqrt2\arctan(1-\sqrt2t)\right]_0^\infty \\ &=& \frac\pi{\sqrt2}\;. \end{eqnarray} $$
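A one-line numerical check of the final value (a sketch assuming SciPy):

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: 1 / (1 + np.cos(x) ** 2), 0, np.pi)
print(val, np.pi / np.sqrt(2))  # both ≈ 2.2214414690...
```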
Example to show the distance between two closed sets can be 0 even if the two sets are disjoint Let $A$ and $B$ be two sets of real numbers. Define the distance from $A$ to $B$ by $$\rho (A,B) = \inf \{ |a-b| : a \in A, b \in B\} \;.$$ Give an example to show that the distance between two closed sets can be $0$ even if the two sets are disjoint.
Consider the sets $\mathbb N$ and $\mathbb N\pi = \{n\pi : n\in\mathbb N\}$. Then $\mathbb N\cap \mathbb N\pi=\emptyset$ as $\pi$ is irrational, but we have points in $\mathbb N\pi$ which lie arbitrarily close to the integers: the fractional parts of $n\pi$ are dense in $[0,1]$ (by equidistribution, or already by the pigeonhole argument behind Dirichlet's approximation theorem), so $\inf\{|m-n\pi| : m,n\in\mathbb N\}=0$.
Counting words with subset restrictions I have an alphabet of N letters {A,B,C,D...N} and would like to count how many L-length words do not contain the pattern AA. I've been going at this all day, but continue to stumble on the same problem. My first approach was to count all possible combinations, (N^L) and subtract the words that contain the pattern. I tried to count the number of ways in which I can place 'AA' in L boxes, but I realized early on that I was double counting, since some words can contain the pattern more than once. I figured that if I had a defined length for the words and the set, I could do it by inclusion/exclusion, but I would like to arrive at a general answer to the problem. My gut feeling is that somehow I could overcount, and then find a common factor to weed out the duplicates, but I can't quite see how. Any help would be appreciated!
Call the answer $x_L$. Then $x_L=Nx_{L-1}-y_{L-1}$, where $y_L$ is the number of allowable words of length $L$ ending in $A$. And $y_L=x_{L-1}-y_{L-1}$. Putting these together we get $Nx_L-x_{L+1}=x_{L-1}-(Nx_{L-1}-x_L)$, which rearranges to $x_{L+1}=(N-1)x_L+(N-1)x_{L-1}$. Now: do you know how to solve homogeneous constant coefficient linear recurrences? EDIT. If all you want is to find the answer for some particular values of $L$ and $N$ then, as leonbloy notes in a comment to your answer, you can use the recurrence to do that. You start with $x_0=1$ (the "empty word") and $x_1=N$ and then you can calculate $x_2,x_3,\dots,x_L$ one at a time from the formula, $x_{L+1}=(N-1)x_L+(N-1)x_{L-1}$. On the other hand, if what you want is single formula for $x_L$ as a function of $L$ and $N$, it goes like this: First, consider the quadratic equation $z^2-(N-1)z-(N-1)=0$. Use the quadratic formula to find the two solutions; I will call them $r$ and $s$ because I'm too lazy to write them out. Now it is known that the formula for $x_L$ is $$x_L=Ar^L+Bs^L$$ for some numbers $A$ and $B$. If we let $L=0$ and then $L=1$ we get the system $$\eqalign{1&=A+B\cr N&=rA+sB\cr}$$ a system of two equations for the two unknowns $A$ and $B$. So you solve that system for $A$ and $B$, and then you have your formula for $x_L$.
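If you just want numbers, the recurrence is easy to run and to check against brute-force enumeration (a minimal sketch; letter `0` plays the role of 'A'):

```python
from itertools import product

def count_recurrence(N, L):
    # x_0 = 1 (empty word), x_1 = N, then x_{L+1} = (N-1) x_L + (N-1) x_{L-1}
    if L == 0:
        return 1
    x_prev, x_curr = 1, N
    for _ in range(L - 1):
        x_prev, x_curr = x_curr, (N - 1) * (x_curr + x_prev)
    return x_curr

def count_brute(N, L):
    # count words over an N-letter alphabet with no two consecutive 0s ("AA")
    return sum(
        all(not (w[i] == 0 and w[i + 1] == 0) for i in range(L - 1))
        for w in product(range(N), repeat=L)
    )

for N, L in [(2, 5), (3, 4), (4, 6)]:
    print(N, L, count_recurrence(N, L), count_brute(N, L))  # the two counts agree
```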
How to get rid of the integral in this equation $\int\limits_{x_0}^{x}{\sqrt{1+\left(\dfrac{d}{dx}f(x)\right)^2}dx}$? How to get rid of the integral $\int\limits_{x_0}^{x}{\sqrt{1+\left(\dfrac{d}{dx}f(x)\right)^2}dx}$ when $f(x)=x^2$?
Summarising the comments, you'll get $$ \int\limits_{x_0}^{x}{\sqrt{1+\left(\dfrac{d}{dt}f(t)\right)^2}dt} =\int\limits_{x_0}^{x}{\sqrt{1+\left(\dfrac{d}{dt}t^2\right)^2}dt} =\int\limits_{x_0}^{x}{\sqrt{1+4t^2}dt} $$ To solve the last one substitute $t=\tan(u)/2$ and $dt=\sec^2(u)/2\,du$. Then $\sqrt{1+4t^2}= \sqrt{\tan^2(u)+1}=\sec(u)$, so we get as antiderivative: $$ \begin{eqnarray} \frac{1}{2}\int \sec^3(u) du&=&\frac{1}{4}\tan(u)\sec(u)+\frac{1}{4}\int \sec(u)du+\text{const.}\\ &=&\frac{1}{4}\tan(u)\sec(u)+\frac{1}{4}\log(\tan(u)+\sec(u))+\text{const.} \\ &=& \frac{t}{2}\sqrt{1+4t^2}+\frac{1}{4}\log(2t+\sqrt{1+4t^2})+\text{const.}\\ &=& \frac{1}{4}\left(2t\sqrt{1+4t^2}+\sinh^{-1}(2t) \right)+\text{const.}. \end{eqnarray} $$ Put in your limits and you're done: $$ \int\limits_{x_0}^{x}{\sqrt{1+4t^2}dt}=\left[\frac{1}{4}\left(2t\sqrt{1+4t^2}+\sinh^{-1}(2t) \right) \right]_{x_0}^x $$
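A numeric check of the antiderivative against direct quadrature (a sketch assuming SciPy), say on $[0,1]$:

```python
import numpy as np
from scipy.integrate import quad

F = lambda t: 0.25 * (2 * t * np.sqrt(1 + 4 * t**2) + np.arcsinh(2 * t))
val, _ = quad(lambda t: np.sqrt(1 + 4 * t**2), 0, 1)
print(val, F(1) - F(0))  # both ≈ 1.47894...
```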
The $n^{th}$ root of the geometric mean of binomial coefficients. $\{{C_k^n}\}_{k=0}^n$ are binomial coefficients. $G_n$ is their geometrical mean. Prove $$\lim\limits_{n\to\infty}{G_n}^{1/n}=\sqrt{e}$$
In fact, we have $$ \lim_{n\to\infty}\left[\prod_{k=0}^{n}\binom{n}{k}\right]^{1/n^2} = \exp\left(1+2\int_{0}^{1}x\log x\; dx\right) = \sqrt{e}.$$ This follows from the identity $$\frac{1}{n^2}\log \left[\prod_{k=0}^{n}\binom{n}{k}\right] = 2\sum_{j=1}^{n}\frac{j}{n}\log\left(\frac{j}{n}\right)\frac{1}{n} + \left(1+\frac{1}{n}\right)\log n - \left(1+\frac{2}{n}\right)\frac{1}{n}\log (n!),$$ together with Stirling's formula. In fact, I tried to write down the detailed derivation of this identity, but soon gave up since it's painstakingly demanding to type $\LaTeX$ formulas on an iPad 2! But you may begin with the identity $$\log\binom{n}{k} = \log n! - \log k! - \log (n-k)!$$ and $$ \log k! = \sum_{j=1}^{k} \log j,$$ and then you can change the order of summation.
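The convergence can be observed numerically (a sketch using log-gamma to avoid overflow):

```python
from math import lgamma, exp, sqrt, e

def g(n):
    # (prod_{k=0}^{n} C(n,k))^(1/n^2), computed via log Gamma: log C(n,k) = lgamma(n+1) - lgamma(k+1) - lgamma(n-k+1)
    log_prod = sum(lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1) for k in range(n + 1))
    return exp(log_prod / n**2)

for n in (10, 100, 1000):
    print(n, g(n))  # slowly approaches sqrt(e)
print(sqrt(e))      # ≈ 1.64872
```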
Homotopy inverses need not induce inverse homomorphisms Let $f:X \rightarrow Y$ and $g : Y \rightarrow X$ be homotopy inverses, ie. $f \circ g$ and $g\circ f$ are homotopic to the identities on $X$ and $Y$. We know that $f_*$ and $g_*$ are isomorphisms on the fundamental groups of $X$ and $Y$. However, it is my understanding that they need not be inverse isomorphisms. Is there an explicit example where they are not?
If $f,g$ are pointed maps (which is necessary so that $f_*,g_*$ make sense): No, they induce inverse homomorphisms. Homotopic maps induce the same maps on homotopy groups, in particular fundamental groups. This means that we have a functor $\pi_1 : \mathrm{hTop}_* \to \mathrm{Grp}$. Every functor maps two inverse isomorphisms to the corresponding two inverse isomorphisms.
Distance between bounded and compact sets Let $(X,d)$ be a metric space and define for $B\subset X$ bounded, i.e. $$\operatorname{diam}(B)= \sup \{ d(x,y) \colon x,y\in B \} < \infty,$$ the measure $$\beta(B) = \inf\{r > 0\colon\text{there exist finitely many balls of radius r which cover } B\},$$ or equivalently, $$\beta(B)=\inf\big\lbrace r > 0|\exists N=N(r)\in{\bf N} \text{ and } x_1,\ldots x_N\in X\colon B\subset\bigcup_{k=1}^N B(x_k,r)\big\rbrace,$$ where $B(x,r)=\{y\in X\colon d(x,y)<r\}$ denotes the open ball of radius $r$ centered at $x\in X$. Let ${\bf K}(X)$ denote the collection of (non-empty) compact subsets in $X$. I would like to prove $$\beta(B)=d_H\big(B,{\bf K}(X)\big),$$ where $d_H$ is the Hausdorff distance. I proved $d_H\big(B,{\bf K}(X)\big)\le\beta(B)$. Is there someone that knows how to prove the other inequality?
Let $d_0=d_H(B,K(X))$. So for $d>d_0$ we can find compact $K$ with $d_H(B,K)<d$. In particular, $B \subseteq \cup_{k\in K} B(k,d)$. As $K$ is compact, for any $\epsilon>0$ we can find $k_1,\cdots,k_n\in K$ with $K\subseteq \cup_i B(k_i,\epsilon)$. For $b\in B$, we can find $k\in K$ with $d(b,k)<d$. Then we can find $i$ with $d(k,k_i)<\epsilon$, and so $d(b,k_i)<d+\epsilon$. Thus $\beta(B)\leq d+\epsilon$. As $\epsilon>0$ and $d>d_0$ were arbitrary, we find that $\beta(B) \leq d_0$ as required.
Divisor/multiple game Two players $A$ and $B$ play the following game: Start with the set $S$ of the first 25 natural numbers: $S=\{1,2,\ldots,25\}$. Player $A$ first picks an even number $x_0$ and removes it from $S$: We have $S:=S-\{x_0\}$. Then they take turns (starting with $B$) picking a number $x_n\in S$ which is either divisible by $x_{n-1}$ or divides $x_{n-1}$ and removing it from $S$. The player who can not find a number in $S$ which is a multiple or a divisor of the previous number loses. Is there a winning strategy?
Second player (B) wins. Consider the following pairing, in which each number divides its partner:

$2,14$
$3,15$
$4,16$
$5,25$
$6,12$
$7,21$
$8,24$
$9,18$
$10,20$
$11,22$

The numbers left out are $1,13,17,19,23$. Now whatever number player one (A) picks, the second player (B) picks the paired number from the above pairings. Ultimately, player one (A) will be out of numbers and will have to pick $1$, and then player two (B) picks $23$.
Inequality involving the regularized gamma function Prove that $$Q(x,\ln 2) := \frac{\int_{\ln 2}^{\infty} t^{x-1} e^{-t} dt}{\int_{0}^{\infty} t^{x-1} e^{-t} dt} \geqslant 1 - 2^{-x}$$ for all $x\geqslant 1$. ($Q$ is the regularized gamma function.)
We have $$ \frac{\int_{\ln 2}^{\infty} t^{x-1} e^{-t} \,dt}{\int_{0}^{\infty} t^{x-1} e^{-t} \,dt} = \frac{\int_{0}^{\infty} t^{x-1} e^{-t} \,dt - \int_{0}^{\log 2} t^{x-1} e^{-t} \,dt}{\int_{0}^{\infty} t^{x-1} e^{-t} \,dt} = 1 - \frac{\int_{0}^{\log 2} t^{x-1} e^{-t} dt}{\int_{0}^{\infty} t^{x-1} e^{-t} \,dt}, $$ so we need to show that $$ \frac{\int_{0}^{\log 2} t^{x-1} e^{-t} \,dt}{\int_{0}^{\infty} t^{x-1} e^{-t} \,dt} \leq 2^{-x}, $$ or, equivalently, $$ 2^x \int_{0}^{\log 2} t^{x-1} e^{-t} \,dt \leq \int_{0}^{\infty} t^{x-1} e^{-t} \,dt. $$ To do this we will show that $$ 2^x \int_{0}^{\log 2} t^{x-1} e^{-t} \,dt \leq \left(\frac{e^a}{e^a-1}\right)^x \int_{0}^{a} t^{x-1} e^{-t} \,dt \tag{1} $$ for all $a \geq \log 2$, then let $a \to \infty$. In fact, we will show that the quantity on the right-hand side of the above inequality is nondecreasing in $a$ when $a > 0$ for fixed $x \geq 1$ (and strictly increasing in $a$ when $a > 0$ for fixed $x > 1$). To start, define $$ f_x(a) = \left(\frac{e^a}{e^a-1}\right)^x \int_{0}^{a} t^{x-1} e^{-t} \,dt. $$ Then $$ \begin{align} f_x'(a) &= a^{x-1} e^{-a} \left(\frac{e^a}{e^a-1}\right)^x - x \left(\frac{e^a}{e^a-1}\right)^{x-1} \frac{e^a}{(e^a-1)^2} \int_{0}^{a} t^{x-1} e^{-t} \,dt \\ &= e^{ax} \left(e^a-1\right)^{-x-1} \left[a^{x-1} \left(1-e^{-a}\right) - x \int_{0}^{a} t^{x-1} e^{-t} \,dt\right]. \end{align} $$ Since we're only concerned with the sign of the above expression, define $$ \begin{align} g_x(a) &= e^{-ax}(e^a - 1)^{x+1} f_x'(a) \\ &= a^{x-1} \left(1-e^{-a}\right) - x \int_{0}^{a} t^{x-1} e^{-t} \,dt. \end{align} $$ If $g_x(a) \geq 0$ for all $a > 0$ then $f_x'(a) \geq 0$ for all $a > 0$, and hence $f_x(a) \geq f_x(\log 2)$ for all $a \geq \log 2$, which is $(1)$. Well, it will certainly be true that $g_x(a) \geq 0$ for all $a > 0$ if $$ g_x(0) \geq 0 \hspace{1cm} \text{and} \hspace{1cm} g_x'(a) \geq 0 \,\,\text{ for all }\,\, a \geq 0. \tag{2} $$ Indeed, $g_x(0) = 0$, and for $x \geq 1$ we have $$ g_x'(a) = a^{x-2} e^{-a} (x-1) (e^a - a - 1) \geq 0 $$ since the function $h(a) = e^a - a - 1$ is nondecreasing when $a \geq 0$ and $h(0) = 0$. By the remarks immediately before $(2)$ this is sufficient to prove $(1)$, from which the result follows.
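The inequality itself is easy to probe numerically (a sketch assuming SciPy, whose `gammaincc(x, a)` is exactly the regularized upper incomplete gamma $Q(x,a)$):

```python
import numpy as np
from scipy.special import gammaincc

x = np.linspace(1, 50, 200)
lhs = gammaincc(x, np.log(2))   # Q(x, ln 2)
rhs = 1 - 2.0 ** (-x)
print(np.all(lhs >= rhs))       # True on this grid
print(np.min(lhs - rhs))        # the gap stays nonnegative
```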
Calculate the slope of a line passing through the intersection of two lines Let's say I have this figure. I know slope $m_1$, slope $m_2$, $(x_1, y_1)$, $(x_2, y_2)$ and $(x_3, y_3)$. I need to calculate slope $m_3$. Note the line with slope $m_3$ will always bisect the angle between the line with slope $m_1$ and the line with slope $m_2$.
We understand that: $$m_1=\tan(\alpha)$$ $$m_2=\tan(\beta),$$ Then: $$ m_3=\tan\left(\frac{\alpha+\beta}2\right). $$
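In code (a minimal sketch; `math.atan` returns the line's inclination angle in $(-\pi/2,\pi/2)$, so this picks the bisector between the two principal inclination angles; a pair of lines also has a second, perpendicular bisector):

```python
import math

def bisector_slope(m1, m2):
    # slope of the line bisecting the angle between lines of slopes m1 and m2
    return math.tan((math.atan(m1) + math.atan(m2)) / 2)

m1, m2 = 0.5, 3.0
m3 = bisector_slope(m1, m2)
# check: the bisector makes equal angles with both lines
a1, a2, a3 = math.atan(m1), math.atan(m2), math.atan(m3)
print(m3, abs((a3 - a1) - (a2 - a3)) < 1e-12)
```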
Solving $217 x \equiv 1 \quad \text{(mod 221)}$ I am given the problem: Find an integer $x$ between $0$ and $221$ such that $$217 x \equiv 1 \quad \text{(mod 221)}$$ How do I solve this? Unfortunately I am lost.
In this special case, you can multiply the congruence by $-1$ and you'll get $$4x\equiv 220 \pmod{221}.$$ (Just notice that $-217 \equiv 4 \pmod{221}$ and $-1\equiv220\pmod{221}$.) This implies that $x\equiv 55 \pmod{221}$ is a solution. (And since $\gcd(4,221)=1$, there is only one solution modulo $221$.) In general, for questions of this type you can use extended Euclidean algorithm see Wikipedia. You can find some examples at this site, e.g. here.
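Here is a minimal sketch of the extended Euclidean algorithm the answer points to, applied to this instance:

```python
def ext_gcd(a, b):
    # returns (g, s, t) with g = gcd(a, b) = s*a + t*b
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

g, s, _ = ext_gcd(217, 221)
print(g, s % 221)  # gcd is 1, and the inverse of 217 mod 221 is 55
```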
Help me understand a 3d graph I've just seen this graph and while it isn't the first 3d graph I've seen, as a math "noob" I never thought about how these graphs are plotted. I can draw 2d graphs on paper by marking the input and output values of a function. It's also easy for me to visualize what the graph I'm seeing says about the function, but what about graphs for functions with 2 variables? How do I approach drawing and understanding the visualization?
Set your function equal to a given constant; this gives you an equation of the kind you are used to, and varying the height (i.e., what you set your function equal to) gives you the graph (2d) of the surface intersected with planes parallel to the xy-plane. It's essentially the same as a contour map of a mountain.
Is there an abelian category of topological groups? There are lots of reasons why the category of topological abelian groups (i.e. internal abelian groups in $\bf Top$) is not an abelian category. So I'm wondering: Is there a "suitably well behaved" subcategory of $\bf Top$, say $\bf T$, such that $\bf Ab(T)$ is an abelian category? My first guess was to look for well behaved topological spaces (locally compact Hausdorff, compactly generated Hausdorff, and so on...) Googling a little shows me that compactly generated topological groups are well known animals, but the web seems to lack of a more categorical point of view. Any clue? Thanks in advance.
This was alluded to in the comments and may not be what you're looking for, but it surely deserves mention that you can take $\mathbf{T}$ to be the category of compact Hausdorff spaces. The category $\mathbf{Ab}(\mathbf{T})$ is the category of compact abelian groups, which is equivalent to $\mathbf{Ab}^{op}$ and hence abelian by Pontryagin duality.
re-writing a $\min(X,Y)$ function linearly for LP problem I am trying to formulate an LP problem. In the problem I have a $\min(X,Y)$ that I would like to formulate linearly as a set of constraints. For example, replacing $\min(X,Y)$ with some variable $Z$, and having a set of constraints on $Z$. I believe that there are a minimum of two constraints: subto: $Z \le X$ subto: $Z \le Y$ That will make it take a value that is less than or equal to $\min(X,Y)$. But I want it to take the minimum value of $X$ or $Y$. I am missing one constraint, which seems to have the following logic: "$Z \ge X$ or $Z \ge Y$" ... so that it isn't just less than the minimum, it IS the minimum. I know I'm missing something basic. In addition to fabee's response, I also have found the following representation to work well, which uses an either-or constraint formulation. Note that M must be large, see 15.1.2.1 in this document.

```
param a := 0.8;
param b := 0.4;
param M := 100;
var z;
var y binary;
minimize goal: z;
subto min_za: z <= a;
subto min_zb: z <= b;
subto min_c1: -z <= -a + M*y;
subto min_c2: -z <= -b + M*(1-y);
```
You could use $\min(x,y) = \frac{1}{2}(x + y - |x - y|)$ where $|x - y|$ can be replaced by the variables $z_1 + z_2$ with constraints $z_i \ge 0$ for $i=1,2$ and $z_1 - z_2 = x - y$. $z_1$ and $z_2$ are, therefore, the positive or the negative part of $|x-y|$. Edit: For the reformulation to work, you must ensure that either $z_1=0$ or $z_2=0$ at the optimum, because we want $$z_1 = \begin{cases} x-y & \mbox{ if }x-y\ge0\\ 0 & \mbox{ otherwise} \end{cases}$$ and $$z_2 = \begin{cases} y-x & \mbox{ if }x-y\le0\\ 0 & \mbox{ otherwise} \end{cases}.$$ You can check that the constraints will be active if the objective function can always be increase/decreased by making one of the $z_i$ smaller. That is the reason why your maximization worked, because the objective function could be increased by making one of the $z_i$ smaller. You could fix your minimization example by requiring that $0\le z_i \le |x-y|$. In that case, the objective function will be smallest if $z_i$ are largest. However, since they still need to be $3$ apart, one of the $z_i$ will be three and the other one will be zero.
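A tiny check of the point about the objective keeping the constraints active (a sketch assuming SciPy's `linprog`): when we maximize $z$ subject only to $z\le a$ and $z\le b$, the optimum already lands exactly at $\min(a,b)$:

```python
from scipy.optimize import linprog

a, b = 0.8, 0.4
# maximize z  <=>  minimize -z, subject to z <= a and z <= b, z free
res = linprog(c=[-1.0], A_ub=[[1.0], [1.0]], b_ub=[a, b], bounds=[(None, None)])
print(res.x[0])  # 0.4 == min(a, b)
```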
finding final state of numbers after certain operations There are $N$ children sitting along a circle, numbered $1,2,\dots,n$ clockwise. The $i$-th child has a piece of paper with number $a_i$ written on it. They play the following game: In the first round, the child numbered $x$ adds to his number the sum of the numbers of his neighbors. In the second round, the child next in clockwise order adds to his number the sum of the numbers of his neighbors, and so on. The game ends after $M$ rounds have been played. Any idea about how to get the value of $j$-th element when the game is started at the $i$-th position after $M$ rounds. Is there any closed form equation?
In principle there is, but in practice I doubt that there's anything very useful. It really suffices to solve the problem when $i=1$, since for any other value of $i$ we can simply relabel the children. If we start at position $1$, we can define $a_{kn+i}$ to be child $i$'s number after $k$ rounds have been played. Then the rules ensure that $$a_{kn+i}=a_{(k-1)n+i}+a_{(k-1)n+i+1}+a_{kn+i-1}\;.\tag{1}$$ That is, child $i$'s number after $k$ rounds is his number after $k-1$ rounds plus child $(i+1)$'s number after $k-1$ rounds + child $(i-1)$'s number after $k$ rounds. (You can check that this works even when $i$ is $1$ or $n$.) We can simplify $(1)$ to the homogeneous $n$-th order linear recurrence $$a_m=a_{m-1}+a_{m-n+1}+a_{m-n}\;,\tag{2}$$ where the terms on the righthand side are in the opposite order from those in $(1)$. The general solution to this recurrence involves powers of the solutions of the auxiliary equation $$x^n-x^{n-1}-x-1=0\;,$$ and those solutions aren't going to be at all nice.
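If a closed form is not actually needed, the game is trivial to simulate directly (a minimal sketch):

```python
def play(a, m, start=1):
    # a: initial numbers of children 1..n (clockwise); start: the child who moves first
    a, n, i = list(a), len(a), start - 1
    for _ in range(m):
        a[i] += a[(i - 1) % n] + a[(i + 1) % n]  # add both neighbours' numbers
        i = (i + 1) % n                          # next child clockwise
    return a

print(play([1, 2, 3, 4, 5], m=7, start=1))
```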
Field extension, primitive element theorem I would like to know if it is true that $\mathbb{Q}(\sqrt{2}-i, \sqrt{3}+i) = \mathbb{Q}(\sqrt{2}-i+2(\sqrt{3}+i))$. I can prove, that $\mathbb{Q}(\sqrt{2}-i, \sqrt{3}+i) = \mathbb{Q}(\sqrt{2},\sqrt{3},i)$, so the degree of this extension is 8. Would it be enough to show that the minimal polynomial of $\sqrt{2}-i+2(\sqrt{3}+i)$ has also degree 8? It follows from the proof of the primitive element theorem that only finitely many numbers $\mu$ have the property that $\mathbb{Q}(\sqrt{2}-i, \sqrt{3}+i)\neq \mathbb{Q}(\sqrt{2}-i+\mu(\sqrt{3}+i))$. Obviously $\mu=1$ is one of them, but how to check, whether 2 also has this property? Thanks in advance,
Let $\alpha=\sqrt{2}-i+2(\sqrt{3}+i)$. Since $\alpha\in\mathbb{Q}(\sqrt{2}-i,\sqrt{3}+i)$, it follows that $\mathbb{Q}(\alpha)=\mathbb{Q}(\sqrt{2}-i,\sqrt{3}+i)$ if and only if their degrees over $\mathbb{Q}$ are equal. The degree $[\mathbb{Q}(\alpha):\mathbb{Q}]$ is equal to the degree of the monic irreducible of $\alpha$ over $\mathbb{Q}$, so you are correct that if you can show that the monic irreducible of $\alpha$ is of degree $8$, then it follows that $\mathbb{Q}(\alpha)=\mathbb{Q}(\sqrt{2}-i,\sqrt{3}+i)$. I will note, however, that your interpretation of the Primitive Element Theorem is incorrect. The Theorem itself doesn't really tell you what you claim it tells you. The argument in the proof relies on the fact that there are only finitely many fields between $\mathbb{Q}$ and $\mathbb{Q}(\sqrt{2}-i,\sqrt{3}+i)$, and so by the Pigeonhole Principle there are only finitely many rationals $\mu$ such that $\mathbb{Q}(\sqrt{2}-i,\sqrt{3}+i)\neq\mathbb{Q}(\sqrt{2}-i+\mu(\sqrt{3}+i))$. But this is not a consequence of the Primitive Element Theorem, but rather of the fact that there are only finitely many fields in between.
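For what it's worth, the degree of the monic irreducible can be checked by computer algebra (a sketch assuming SymPy):

```python
from sympy import sqrt, I, minimal_polynomial, Symbol

x = Symbol('x')
alpha = sqrt(2) - I + 2 * (sqrt(3) + I)
p = minimal_polynomial(alpha, x)
print(p.as_poly(x).degree())  # 8, so Q(alpha) = Q(sqrt(2)-i, sqrt(3)+i)
```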
Cancelling summands in a direct sum decomposition Let $M$ be a Noetherian and Artinian module. Suppose that: $$\bigoplus_{i=1}^{q} A_{i} \oplus \bigoplus_{i=1}^{t} B_{i} \cong \bigoplus_{i=1}^{q} A_{i} \oplus \bigoplus_{i=1}^{r} C_{i}$$ where all $A_{i},B_{i},C_{i}$ are indecomposable submodules of $M$. Can we always guarantee that $B_{i} \cong C_{i}$ for all $i \in \{1,2,...,t\}$? That is, can we "cancel" the term $\displaystyle\bigoplus_{i=1}^{q} A_{i}$?
Cancellation means that for modules $M,N,P$ over a ring $R$ (not assumed commutative) we have the implication $$M\oplus N\cong M\oplus P \implies N\cong P$$ Cancellation holds for modules that are only assumed artinian (which of course answers your question in the affirmative) thanks to a theorem by Camps and Dicks. This is quite astonishing, since Krull-Schmidt does not hold for modules that are just supposed artinian. And, again astonishingly, a counter-example was found only in 1995. Finally, let me point out that a very general Krull-Schmidt theorem was proved in a categorical setting by Atiyah. The main application of his results is to coherent sheaves in algebraic geometry.
Computing the best constant in classical Hardy's inequality Classical Hardy's inequality (cfr. Hardy-Littlewood-Polya Inequalities, Theorem 327) If $p>1$, $f(x) \ge 0$ and $F(x)=\int_0^xf(y)\, dy$ then $$\tag{H} \int_0^\infty \left(\frac{F(x)}{x}\right)^p\, dx < C\int_0^\infty (f(x))^p\, dx $$ unless $f \equiv 0$. The best possibile constant is $C=\left(\frac{p}{p-1}\right)^p$. I would like to prove the statement in italic regarding the best constant. As already noted by Will Jagy here, the book suggests stress-testing the inequality with $$f(x)=\begin{cases} 0 & 0\le x <1 \\ x^{-\alpha} & 1\le x \end{cases}$$ with $1/p< \alpha < 1$, then have $\alpha \to 1/p$. If I do so I get for $C$ the lower bound $$\operatorname{lim sup}_{\alpha \to 1/p}\frac{\alpha p -1}{(1-\alpha)^p}\int_1^\infty (x^{-\alpha}-x^{-1})^p\, dx\le C$$ but now I find myself in trouble in computing that lim sup. Can someone lend me a hand, please? UPDATE: A first attempt, based on an idea by Davide Giraudo, unfortunately failed. Davide pointed out that the claim would easily follow from $$\tag{!!} \left\lvert \int_1^\infty (x^{-\alpha}-x^{-1})^p\, dx - \int_1^\infty x^{-\alpha p }\, dx\right\rvert \to 0\quad \text{as}\ \alpha \to 1/p. $$ But this is false in general: for example if $p=2$ we get $$\int_1^\infty (x^{-2\alpha} -x^{-2\alpha} + 2x^{-\alpha-1}-x^{-2})\, dx \to \int_1^\infty(2x^{-3/2}-x^{-2})\, dx \ne 0.$$
We have the operator $T: L^p(\mathbb{R}^+) \to L^p(\mathbb{R}^+)$ with $p \in (1, \infty)$, defined by$$(Tf)(x) := {1\over x} \int_0^x f(t)\,dt.$$Calculate $\|T\|$. For the operator $T$ defined above, the operator norm is $p/(p - 1)$. We will also note that this is also a bounded operator for $p = \infty$, but not for $p = 1$. Assume $1 < p < \infty$, and let $q$ be the dual exponent, $1/p + 1/q = 1$. By the theorem often referred to as "converse Hölder,"$$\|Tf\|_p = \sup_{\|g\|_q = 1}\left|\int_0^\infty (Tf)(x)g(x)\,dx\right|.$$So, assume that$\|g\|_q = 1$,\begin{align*} \left| \int_0^\infty (Tf)(x)g(x)\,dx\right| & \le \int_0^\infty |Tf(x)||g(x)|\,dx \le \int_0^\infty \int_0^x {1\over x}|f(t)||g(x)|\,dt\,dx \\ & = \int_0^\infty \int_0^1 |f(ux)||g(x)|\,du\,dx = \int_0^1 \int_0^\infty |f(ux)||g(x)|\,dx\,du \\ & \le \int_0^1 \left(\int_0^\infty |f(ux)|^pdx\right)^{1\over p} \left(\int_0^\infty |g(x)|^q dx\right)^{1\over q}du \\ & = \int_0^1 u^{-{1\over p}}\|f\|_p \|g\|_q du = {p\over{p - 1}}\|f\|_p.\end{align*}So that gives us that the operator norm is at most $p/(p - 1)$. To show that this bound is sharp, test $T$ on the family $f_\alpha(x) = x^{-\alpha}$ for $x \ge 1$, $f_\alpha(x)=0$ otherwise, with $\alpha > 1/p$: then $\|f_\alpha\|_p^p = 1/(\alpha p - 1)$, $(Tf_\alpha)(x) = (x^{-\alpha}-x^{-1})/(1-\alpha)$ for $x\ge 1$, and a computation shows that $\|Tf_\alpha\|_p/\|f_\alpha\|_p \to p/(p-1)$ as $\alpha \downarrow 1/p$, so the supremum defining $\|T\|$ is approached but not attained. (For instance, if $f(x) = 1$ on $(0, 1]$ and zero otherwise, then $\|f\|_p = 1$ for all $p$, $$(Tf)(x) = \begin{cases} 1 & 0 < x \le 1 \\ {1\over x} & x > 1\end{cases}$$and by direct computation $\|Tf\|_p^p = p/(p - 1)$, i.e. $\|Tf\|_p = (p/(p-1))^{1/p} < p/(p-1)$ for $p > 1$.) This last example also shows that we can have $f \in L^1$ but $Tf \notin L^1$, so we must restrict to $p > 1$. However, it is straightforward to show that $T$ is bounded from $L^\infty \to L^\infty$ with norm $1$. Note that the range of the operator in that case is contained within the bounded continuous functions on $(0, \infty)$.
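The sharpness is visible numerically (a sketch assuming SciPy), testing $T$ on $f_\alpha(x)=x^{-\alpha}\mathbf 1_{[1,\infty)}$ and letting $\alpha\downarrow 1/p$:

```python
import numpy as np
from scipy.integrate import quad

p = 2.0
for alpha in (0.8, 0.65, 0.55):
    # ||f_alpha||_p^p = 1/(alpha*p - 1); (T f_alpha)(x) = (x^{-alpha} - x^{-1})/(1 - alpha) on [1, oo)
    norm_f = (1 / (alpha * p - 1)) ** (1 / p)
    val, _ = quad(lambda x: ((x**-alpha - 1 / x) / (1 - alpha)) ** p, 1, np.inf)
    print(alpha, val ** (1 / p) / norm_f)  # ratio climbs toward p/(p-1) = 2
```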
Simple Logic Question I've very little understanding in logic, how can I simply show that this is true: $$((X \wedge \neg Y)\Rightarrow \neg Z) \Leftrightarrow ((X\wedge Z)\Rightarrow Y)$$ Thanks a lot.
You want to show that $$((X \wedge \neg Y)\Rightarrow \neg Z) \Leftrightarrow ((X\wedge Z)\Rightarrow Y).$$ It is hard to know without context what "show" might mean. For example, we could be working with a specific set of axioms. Since an axiom system was not specified, I will assume we are looking for a precise but not axiom-based argument. Truth tables are nicely mechanical, so they are a very good way to verify the assertion. Below we give a "rhetorical" version of the truth table argument. It probably shows that truth tables would have been a better choice! However, it is important to be able to scan a sentence and understand under what conditions that sentence is true. We want to show that (a) if $(X \wedge \neg Y)\Rightarrow \neg Z$ is true then $(X\wedge Z)\Rightarrow Y$ is true and (b) if $(X\wedge Z)\Rightarrow Y$ is true then $(X \wedge \neg Y)\Rightarrow \neg Z$ is true. We deal with (a). There are two ways for $(X \wedge \neg Y)\Rightarrow \neg Z$ to be true: (i) if $\neg Z$ is true or (ii) if $X \wedge \neg Y$ is false. In case (i), $Z$ is false, which implies that $X\wedge Z$ is false, which implies that $(X\wedge Z)\Rightarrow Y$ is true. In case (ii), $X$ is false or $Y$ is true. If $X$ is false, then $X\land Z$ is false, and as in case (i), $(X\wedge Z)\Rightarrow Y$ is true. If $Y$ is true, then automatically $(X\wedge Z)\Rightarrow Y$ is true. We now have completed proving (a). The proof for the direction (b) is very similar. Another way: We can also use (Boolean) algebraic manipulation to show that each side is logically equivalent to $(\neg X \lor \neg Z)\lor Y$. Note that $(X \wedge \neg Y)\Rightarrow \neg Z$ is equivalent to $\neg(X\wedge \neg Y)\lor \neg Z$, which is equivalent to $(\neg X \lor Y)\lor \neg Z$, which is equivalent to $(\neg X \lor \neg Z)\lor Y$. Note also that $(X\wedge Z)\Rightarrow Y$ is equivalent to $\neg(X\wedge Z)\lor Y$, which is equivalent to $(\neg X\lor \neg Z)\lor Y$. This completes the argument.
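Since the statement is purely propositional, an exhaustive truth-table check also settles it, and is easy to mechanize (a minimal sketch):

```python
from itertools import product

def implies(a, b):
    # material implication: A => B is (not A) or B
    return (not a) or b

for X, Y, Z in product([False, True], repeat=3):
    lhs = implies(X and not Y, not Z)
    rhs = implies(X and Z, Y)
    assert lhs == rhs
print("equivalent in all 8 cases")
```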
The sum of an Irreducible Representation I was hoping someone could help me with the following question. Let $\rho$ be an irreducible representation of a finite group $G.$ Prove \begin{equation*} \sum_{g \in G} \rho(g) = 0 \end{equation*} unless $\rho$ is the trivial representation of degree $1$. I think I have to use Schur's Lemma which states the following. Let $\rho: G \longrightarrow GL(n,\mathbb{C})$ be a representation of G. Then $\rho$ is irreducible if and only if every $n \times n$ matrix $A$ which satisfies \begin{equation*} \rho(g)A = A\rho(g) \ \ \ \forall \ g \in G \end{equation*} has the form $A = \lambda I_n \, $ with $\lambda \in \mathbb{C}$. But I am really not sure how the lemma can be applied to this question?
Let $t=\sum_{g\in G}\rho(g)$, which is a linear endomorphism of $V$. The subset $t(V)$ of $V$ is a $G$-submodule of $V$, as you can easily check. Moreover, $G$ acts trivially on all elements of $t(V)$. If $V$ is irreducible, then either $t(V)=0$ or $t(V)=V$. In the first case, we have that in fact $t=0$. In the second one, we see that $G$ acts trivially on all of $V$, so $V$ must be of dimension $1$.
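A concrete numerical instance (a sketch assuming NumPy): for the cyclic group $C_n$, the one-dimensional representations are $\rho_j(g^k)=e^{2\pi i jk/n}$, and their group sums vanish exactly when the representation is nontrivial:

```python
import numpy as np

n = 6
for j in range(n):  # the n one-dimensional representations of C_n: g^k -> exp(2*pi*i*j*k/n)
    total = sum(np.exp(2 * np.pi * 1j * j * k / n) for k in range(n))
    print(j, np.round(total, 12))  # n for the trivial rep (j = 0), 0 otherwise
```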
Complex Analysis: Liouville's theorem Proof I'm being asked to find an alternate proof for the one commonly given for Liouville's Theorem in complex analysis by evaluating the following given an entire function $f$, and two distinct, arbitrary complex numbers $a$ and $b$: $$\lim_{R\to\infty}\oint_{|z|=R} {f(z)\over(z-a)(z-b)} dz $$ What I've done so far is I've tried to apply the cauchy integral formula, since there are two singularities in the integrand, which will fall in the contour for $R$ approaches infinity. So I got: $$2{\pi}i\biggl({f(a)\over a-b}+{f(b)\over b-a}\biggr)$$ Which equals $$2{\pi}i\biggl({f(a)-f(b)\over a-b}\biggr)$$ and I got stuck here I don't quite see how I can get from this, plus $f(z)$ being bounded and analytic, that can tell me that $f(z)$ is a constant function. Ugh, the more well known proof is so much simpler -.- Any suggestions/hints? Am I at least on the right track?
You can use the $ML$ inequality (with boundedness of $f$) to show $\displaystyle \lim_{R\rightarrow \infty} \oint_{|z|=R} \frac{f(z)}{(z-a)(z-b)}dz = 0$. Combining this with your formula using the Cauchy integral formula, you get $$ 0 = 2\pi i\bigg(\frac{f(b)-f(a)}{b-a}\bigg)$$ from which you immediately conclude $f(b) = f(a)$. Since $a$ and $b$ are arbitrary, this means $f$ is constant.
Integration Problem Proof ($\sin x$) Problem: Integration of $\displaystyle\int_{-1}^1 {\sin x\over 1+x^2} \; dx = 0 $ (according to WolframAlpha Definite Integral Calculator) But I don't understand how. I tried to prove using integration by parts. Here's the work: $$ \int_{-1}^1 {\sin x\over 1+x^2} \; dx = \int_{-1}^1 {\sin x}{1\over 1+x^2} \; dx $$ Let $u = \sin x,\quad du = \cos x\; dx\;$ and $v = \tan^{-1}x,\quad dv = {1\over 1+x^2}dx\;$. So $$ \int_{-1}^1 u dv = \left[uv\right]_{-1}^1 - \int_{-1}^1 v du =\left[ \sin x (\tan^{-1}x)\right]_{-1}^1 - \int_{-1}^1 \tan^{-1}x \cos x\; dx. $$ Next let $u = \tan^{-1}x, du = {1\over 1+x^2}$ and $dv = \cos x, v = \sin x$... I stopped here, because I feel like I'm going in a circle with this problem. What direction would I take to solve this because I don't know whether integration by parts is the way to go? Should I use trig substitution? Thanks.
You don’t have to do any actual integration. Let $$f(x)=\frac{\sin x}{1+x^2}\;$$ then $$f(-x)=\frac{\sin(-x)}{1+(-x)^2}=\frac{-\sin x}{1+x^2}=-f(x)\;,$$ so $f(x)$ is an odd function. The signed area between $x=-1$ and $x=0$ is therefore just the negative of the signed area from $x=0$ to $x=1$, and the whole thing cancels out. In more detail, let $$A=\int_0^1 f(x) dx=\int_0^1\frac{\sin x}{1+x^2} dx\;,$$ and let $$B=\int_{-1}^0 f(x) dx=\int_{-1}^0\frac{\sin x}{1+x^2} dx\;.\tag{1}$$ Now substitute $u=-x$ in $(1)$: $f(u)=f(-x)=-f(x)$, $du=-dx=(-1)dx$ so $dx=-du$, and $u$ runs from $1$ to $0$, so $$B=\int_1^0 -f(x)(-1)dx=\int_1^0f(x)dx=-\int_0^1f(x)dx=-A\;.$$ Thus, $$\int_{-1}^1 f(x)dx=A+B=A-A=0\;.$$ Note that the specific function $f$ didn’t matter: we used only the fact that $f$ is an odd function.
Proving an asymptotic lower bound for the integral $\int_{0}^{\infty} \exp\left( - \frac{x^2}{2y^{2r}} - \frac{y^2}{2}\right) \frac{dy}{y^s}$ This is a follow up to the great answer posted to https://math.stackexchange.com/a/125991/7980 Let $ 0 < r < \infty, 0 < s < \infty$ , fix $x > 1$ and consider the integral $$ I_{1}(x) = \int_{0}^{\infty} \exp\left( - \frac{x^2}{2y^{2r}} - \frac{y^2}{2}\right) \frac{dy}{y^s}$$ Fix a constant $c^* = r^{\frac{1}{2r+2}} $ and let $x^* = x^{\frac{1}{1+r}}$. Write $f(y) = \frac{x^2}{2y^{2r}} + \frac{y^2}{2}$ and note $c^* x^*$ is a local minimum of $f(y)$ so that it is a global max for $-f(y)$ on $[0, \infty)$. We are trying to determine if there exist upper and lower bounds of the same order for large x. The coefficients in our bounds can be composed of rational functions in x or even more complicated as long as they do not have exponential growth. The Laplace expansion presented in the answer to the question cited above gives upper bounds. In particular can we prove a specific lower bound: Does there exist a positive constant $c_1(r,s)$ and such that for x>1 we have $$I_1 (x) > \frac{c_1(r,s)}{x} \exp( - f(c^* x^*))$$ (it is ok in the answer if the function $\frac{1}{x}$ in the upper bound is replaced by any rational function or power of $x$)
I think that if you make the change of variables $y = \lambda z$ with $\lambda = x^{\frac 1 {r+1}}$ (so that $\frac {x^2} {\lambda^{2r}} = \lambda^2$) you convert it into $\lambda ^{1-s} \int e^{-\lambda^2 \frac 12(z^{-2r} + z^2)} \frac {dz}{z^s}$ which looks like a fairly normal Laplace-type expansion.
two subgroups of $S_{n}$ and $S_{m}$ If $H\subseteq S_{n}$ and $K\subseteq S_{m}$, how can I show that $H\times K$ can be thought of as a subgroup of $S_{m+n}$?
In hopes of getting this off the Unanswered list, here’s a hint expanding on Jyrki’s first comment. $K$ is a group of permutations of the set $\{1,\dots,m\}$, so each $k\in K$ is a bijection $$k:\{1,\dots,m\}\to\{1,\dots,m\}\;.$$ For each $k\in K$ let $$\hat k:\{n+1,\dots,n+m\}\to\{n+1,\dots,n+m\}:n+i\mapsto n+k(i)\;,$$ and let $\widehat K=\{\hat k:k\in K\}$. Show that $\widehat K$ is a group of permutations of $\{n+1,\dots,n+m\}$ and is isomorphic to $K$. For each $\langle h,k\rangle\in H\times K$ let $$g_{\langle h,k\rangle}:\{1,\dots,n+m\}\to\{1,\dots,n+m\}:i\mapsto\begin{cases} h(i),&\text{if }1\le i\le n\\ \hat k(i),&\text{if }n+1\le i\le n+m\;, \end{cases}$$ and let $G=\{g_{\langle h,k\rangle}:\langle h,k\rangle\in H\times K\}$. Show that $G$ is a subgroup of $S_{n+m}$ and is isomorphic to $H\times K$.
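If it helps to see the embedding concretely, here is a small computational sketch (0-indexed; a permutation is stored as the tuple of its images):

```python
def embed(h, k):
    # h: permutation of range(n) as a tuple; k: permutation of range(m) as a tuple
    n = len(h)
    return tuple(h[i] if i < n else n + k[i - n] for i in range(n + len(k)))

def compose(p, q):  # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

h1, h2 = (1, 0, 2), (2, 0, 1)  # elements of S_3
k1, k2 = (1, 0), (0, 1)        # elements of S_2
# the embedding is a homomorphism: embed(h1 o h2, k1 o k2) == embed(h1,k1) o embed(h2,k2)
print(embed(compose(h1, h2), compose(k1, k2)) == compose(embed(h1, k1), embed(h2, k2)))
```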
Root Calculation by Hand Is it possible to calculate and find the solution of $ \; \large{105^{1/5}} \; $ without using a calculator? Could someone show me how to do that, please? Well, when I use a Casio scientific calculator, I get this answer: $105^{1/5}\approx " 2.536517482 "$. With WolframAlpha, I can get an even more accurate result.
Another way of doing this would be to use logarithms, just like Euler did: $$ 105^{1/5} = \mathrm{e}^{\tfrac{1}{5} \log (105)} = \mathrm{e}^{\tfrac{1}{5} \log (3)} \cdot \mathrm{e}^{\tfrac{1}{5} \log (5)} \cdot \mathrm{e}^{\tfrac{1}{5} \log (7)} $$ Use $$\log(3) = \log\left(\frac{2+1}{2-1}\right) = \log\left(1+\frac{1}{2}\right)-\log\left(1-\frac{1}{2}\right) = \sum_{k=0}^\infty \frac{2}{2k+1} \cdot \frac{1}{2^{2k+1}} \approx 1 + \frac{1}{12} + \frac{1}{80} + \frac{1}{448} = 1 + 0.08333 + 0.0125 + 0.0022 = 1.09803$$ $$ \log(5) = \log\frac{4+1}{4-1} + \log(3) = \log(3) + \sum_{k=0}^\infty \frac{2}{2k+1} \cdot \frac{1}{4^{2k+1}} \approx \log(3) + \frac{1}{2} + \frac{1}{96} +\frac{1}{2560} $$ $$ \log(7) = \log\frac{8-1}{8+1} + 2 \log(3) = 2 \log(3) - \sum_{k=0}^\infty \frac{2}{2k+1} \cdot \frac{1}{8^{2k+1}} \approx 2 \cdot \log(3) - \frac{1}{4} - \frac{1}{768} $$ Thus $$ \frac{1}{5} \left( \log(3) + \log(5) + \log(7)\right) \approx \frac{4}{5} \log(3) + \frac{1}{5} \left( \frac{1}{2} - \frac{1}{4} + \frac{1}{96} - \frac{1}{768} + \frac{1}{2560} \right) = \frac{4}{5} \log(3) + \frac{1993}{38400} \approx 0.9303 = 1-0.0697 $$ Now $$ \exp(0.9303) = \mathrm{e}\cdot\mathrm{e}^{-0.0697} \approx \mathrm{e} \cdot \left( 1 - 0.0697 \right) = 2.71828 \cdot 0.9303 = 2.5288 $$
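The hand computation is easy to audit in code (a minimal sketch comparing the truncated series against the exact value):

```python
import math

log3 = 1 + 1/12 + 1/80 + 1/448            # truncated series for log 3
log5 = log3 + 1/2 + 1/96 + 1/2560         # log 5 = log(5/3) + log 3
log7 = 2 * log3 - 1/4 - 1/768             # log 7 = 2 log 3 - log(9/7)
estimate = math.exp((log3 + log5 + log7) / 5)
print(estimate, 105 ** (1 / 5))  # ≈ 2.5353 vs 2.5365: already close with so few terms
```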
Prove that $||x|-|y||\le |x-y|$ I've seen the full proof of the Triangle Inequality \begin{equation*} |x+y|\le|x|+|y|. \end{equation*} However, I haven't seen the proof of the reverse triangle inequality: \begin{equation*} ||x|-|y||\le|x-y|. \end{equation*} Would you please prove this using only the Triangle Inequality above? Thank you very much.
For all $x,y\in \mathbb{R}$, the triangle inequality gives \begin{equation} |x|=|x-y+y| \leq |x-y|+|y|, \end{equation} \begin{equation} |x|-|y|\leq |x-y| \tag{1}. \end{equation} Interchanging $x\leftrightarrow y$ gives \begin{equation} |y|-|x| \leq |y-x| \end{equation} which when rearranged gives \begin{equation} -\left(|x|-|y|\right)\leq |x-y|. \tag{2} \end{equation} Now combining $(2)$ with $(1)$, gives \begin{equation} -|x-y| \leq |x|-|y| \leq |x-y|. \end{equation} This gives the desired result \begin{equation} \left||x|-|y|\right| \leq |x-y|. \blacksquare \end{equation}
If a holomorphic function $f$ has modulus $1$ on the unit circle, why does $f(z_0)=0$ for some $z_0$ in the disk? I don't understand the final step of an argument I read. Suppose $f$ is holomorphic in a neighborhood containing the closed unit disk, nonconstant, and $|f(z)|=1$ when $|z|=1$. There is some point $z_0$ in the unit disk such that $f(z_0)=0$. By the maximum modulus principle, it follows that $|f(z)|<1$ in the open unit disk. Since the closed disk is compact, $f$ obtains a minimum on the closed disk, necessarily on the interior in this situation. But why does that imply that $f(z_0)=0$ for some $z_0$? I'm aware of the minimum modulus principle, that the modulus of a holomorphic, nonconstant, nonzero function on a domain does not obtain a minimum in the domain. But I'm not sure if that applies here.
If not, consider $g(z)=\frac 1{f(z)}$ on the closure of the unit disc. We have $|g(z)|=1$ if $|z|=1$ and $|g(z)|>1$ if $|z|<1$. Since $g$ is holomorphic on the unit disk, the maximum modulus principle yields a contradiction.
Is the solution to a driftless SDE with Lipschitz variation a martingale? If $\sigma$ is Lipschitz, with Lipschitz constant $K$, and $(X_t)_{t\geq 0}$ solves $$dX_t=\sigma(X_t)dB_t,$$ where $B$ is a Brownian motion, then is $X$ a martingale? I'm having difficulty getting past the self-reference here. I tried showing that, for $t\geq 0$, $\mathbb{E}[X]_t$ is finite. Perhaps Gronwall's lemma is needed? Thank you.
Yes. $$[X]_t = \int_0^t\sigma(X_u)^2du,$$ so, using the Lipschitz bound $|\sigma(X_u)| \le |\sigma(x_0)| + K|X_u - x_0|$, $$\begin{align} \mathbb{E}([X]_t) \le \int_0^t \mathbb{E}\left[(|\sigma(x_0)| + K|X_u-x_0|)^2\right]du.\\ \end{align} $$ $X$ is locally bounded in $L^2$. See, for example, Karatzas and Shreve equation 5.2.15 (p. 289). So it follows easily that $\mathbb{E}([X]_t)<\infty$, for each $t$. Hence $X$ is a martingale.
Find the radius of the circle? Two circles of equal radii are drawn, without any overlap, in a semicircle of radius 2 cm. If these are the largest possible circles that the semicircle can accommodate, then what is the radius of each of the circles? Thanks in advance.
Due to symmetry, two circles in a semicircle is the same problem as one in a quarter circle or four in a full circle. If we look at a quarter circle originating at the origin, with radius $r$ and completely contained in the first quadrant, then the circle has to be centered at a point $(c,c)$, touching the x axis, the y axis and the quarter circle at $(r/\sqrt{2},r/\sqrt{2})$. To find out how large the circle may be without crossing the x and y axes, consider the distance from the center of this circle to one axis being equal to the distance to the point $(r/\sqrt{2},r/\sqrt{2})$: $$ c=\sqrt{2\left(\frac{r}{\sqrt{2}}-c\right)^2}=\sqrt{2} \left(\frac{r}{\sqrt{2}}-c\right)$$ $$ \Rightarrow c = \frac{r}{1+\sqrt{2}} $$ For $r=2\,\text{cm}$, $c\approx 0.83\,\text{cm} $.
Proving that a set is countable by finding a bijection $Z$ is the set of non-negative integers including $0$. Show that $Z \times Z \times Z$ is countable by constructing the actual bijection $f: Z\times Z\times Z \to \mathbb{N}$ ($\mathbb{N}$ is the set of all natural numbers). There is no need to prove that it is a bijection. After searching for clues on how to solve this, I found $(x+y-1)(x+y-2)/2+y$ but that is only two dimensional and does not include $0$. Any help on how to solve this?
If you don't need an actual "formula", then you can write $$ \mathbb{Z}\times\mathbb{Z}\times\mathbb{Z} = \bigcup_{n=0}^\infty \{ (x,y,z)\in \mathbb{Z}^3 : |x|+|y| +|z| = n \} $$ and then rely on the fact that each term in this union is a finite set.
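Made explicit in code (a sketch for the non-negative version the question asks about): enumerating the shells $x+y+z=n$ in a fixed order yields the bijection, where each triple is sent to its position in the enumeration:

```python
from itertools import islice

def triples():
    # enumerate all (x, y, z) with x, y, z >= 0, shell by shell (x + y + z = n); each shell is finite
    n = 0
    while True:
        for x in range(n + 1):
            for y in range(n - x + 1):
                yield (x, y, n - x - y)
        n += 1

for i, t in enumerate(islice(triples(), 10)):
    print(i, t)  # the index i is the value of the bijection at the triple t
```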
Do countable unital rings with uncountably many distinct right ideals have uncountably many maximal right ideals? Suppose we are given a countable unital ring $R$ with uncountably many distinct right ideals. Does it follow from this that $R$ has uncountably many maximal right ideals?
No. Take $R= \mathbb Q[X_0,X_1,...,X_n,...]/\langle X_iX_j\mid i,j\in \mathbb N\rangle=\mathbb Q[x_0,x_1,...,x_n,...]$ The only maximal ideal (actually only prime ideal !) is $\langle x_0,x_1,...,x_n,...\rangle$ but $R$ has a family of distinct ideals indexed by the uncountably many subsets $P\subset \mathbb N$, namely $$ I_P=\langle x_i\mid i\in P\rangle=\operatorname {vect}_\mathbb Q (x_i\mid i\in P)$$ Edit I hadn't seen Mariano's answer when I posted mine a few minutes later, but our rings are actually isomorphic : if his $V$ has basis $(v_i)_{i\in \mathbb N}$ over $\mathbb Q$ we have an isomorphism (of $\mathbb Q$-algebras even) $$ \mathbb Q\oplus V \stackrel {\cong}{\to} \mathbb Q[x_0,x_1,...,x_n,...]:(q,\sum q_iv_i)\mapsto q+\sum q_ix_i$$
Use of determinants I have been teaching myself maths (primarily calculus) throughout this and last year, and was stumped with the use of determinants. In the math textbooks I have, they simply show how to compute a determinant and some properties about them (i.e. Cramer's Rule), but not why they are used and how they work. So my question is, how do they work and why/when would I know to use them to help solve something?
Here's another example of use of determinants: Let $F$ be a field, let $K$ be a field containing $F$, and finite-dimensional as a vector space over $F$. Let $\alpha$ be an element of $K$. The map $T:K\to K$ given by $T(x)=\alpha x$ is a linear transformation. Given a basis for $K$ as a vector space over $F$, one can find a matrix $A$ representing $T$. The matrix depends on the basis chosen, but its determinant does not; it only depends on $\alpha$, and it's called the norm of $\alpha$ (strictly speaking, the norm of $\alpha$ from $K$ to $F$). And the norm is a very important concept in Field Theory and Algebraic Number Theory.
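A quick worked example of this (my own illustration, not from the original answer): take $F=\mathbb{Q}$, $K=\mathbb{Q}(\sqrt2)$ and $\alpha=a+b\sqrt2$. In the basis $\{1,\sqrt2\}$, multiplication by $\alpha$ sends $1\mapsto a+b\sqrt2$ and $\sqrt2\mapsto 2b+a\sqrt2$, so
$$A=\begin{pmatrix}a&2b\\b&a\end{pmatrix},\qquad N_{K/\mathbb{Q}}(\alpha)=\det A=a^2-2b^2,$$
which is exactly the familiar norm on $\mathbb{Q}(\sqrt2)$.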
What is standard coordinates? What is meant by the word standard in "Euclidean space is special in having a standard set of global coordinates."? Then "A manifold in general does not have standard coordinates". This makes me think standard means something other than 'most commonly used'. Is $\mathbb{R}^n$ special in any sense, as a manifold? This is from Loring W. Tu - Introduction to Manifolds.
Usually when we write "$\mathbb{R}^n$" we are thinking of an explicit description of it as $n$-tuples of real numbers. This description "is" the standard set of global coordinates, namely the coordinate functions $x_i$. But this description isn't part of $\mathbb{R}^n$ "as a manifold", in that it contains more information than just the diffeomorphism type of this manifold. What I mean by this is that if I give you an abstract manifold which is diffeomorphic to $\mathbb{R}^n$ -- but I don't tell you an explicit diffeomorphism -- then there isn't a "standard" set of coordinates for it. And if you have a manifold which isn't diffeomorphic to any $\mathbb{R}^n$, then there is no set of global coordinates for it (otherwise you could use those coordinates to produce a diffeomorphism to $\mathbb{R}^n$).
What is the correct terminology for permutation & combination formulae that allow repeating elements? Let me explain by example. Q: Given four possible values, $\{1,2,3,4\}$, how many 2-value permutations are there?
In "permutations", the order matters. In "combinations", the order does not matter. The basic rules of counting are the Product Rule and the Sum Rule. See here, for example. * *Permutations with repetitions allowed: If you have $n$ objects, and you want to count how many permutations of length $m$ there are: there are $n$ possibilities for the first term, $n$ for the second term, $n$ for the third term, etc. So the total number is $n^m$. *Permutations without repetitions allowed: If you have $n$ objects, and you want to count permutations of length $m$ with no repetitions (sometimes called "no replacement"): there are $n$ possibilities for the first term, $n-1$ for the second (you've used up one), $n-2$ for the third, etc. So the total number is $$P^n_m = n(n-1)(n-2)\cdots(n-m+1) = \frac{n!}{(n-m)!}.$$ *Combinations without repetitions allowed: In combinations we don't care about the order. If you select $m$ objects from $n$ possibilities and they are all distinct, then there are $P^m_m = n!$ ways of ordering them. So if you count permutations instead, you will count each distinct choice of elements exactly $m!$ times. Therefore, $\binom{n}{m}$ (also denoted $nCm$, both read "$n$ choose $m$") is given by: $$\binom{n}{m} = \frac{1}{m!}P^n_m = \frac{n!}{m!(n-m)!}.$$ *Combinations with repetitions allowed: You have $n$ objects, and you want to select $m$ of them, allowing repetitions. To simplify things, let us assume our $n$ objects are precisely the numbers $1$, $2,\ldots,n$. Given a selection of $m$ objects with possible repetitions, $a_1,\ldots,a_m$, order them in ascending order, $a_1\leq a_2\leq\cdots\leq a_m$, and think of them as an $m$-tuple: $(a_1,a_2,\ldots,a_m)$. We want to count the number of such tuples with entries between $1$ and $n$, repetitions allowed. Let us associate to each such tuple an $m$-tuple with no repetitions and with entries between $1$ and $n+m-1$ as follows: given $(a_1,a_2,\ldots,a_m)$, with $1\leq a_1\leq a_2\leq\cdots\leq a_m\leq n$, we associate the tuple $(a_1,a_2+1,a_3+2,\ldots,a_m+m-1)$. Note that $$1\leq a_1\lt a_2+1\lt\cdots \lt a_m+m-1\leq n+m-1.$$ Conversely, given a tuple $(b_1,b_2,\ldots,b_m)$ with $1\leq b_1\lt b_2\lt\cdots\lt b_m\leq n+m-1$, we have $b_i\geq n+i$; so this tuple "comes from" the tuple $(b_1,b_2-1,b_3-2,\ldots,b_m-m+1)$, which is a nondecreasing tuple with values between $1$ and $n$; i.e., it is one of our original $(a_1,\ldots,a_n)$s. That means that counting the nondecreasing tuples $(a_1,\ldots,a_m)$ with $1\leq a_1\leq\cdots\leq a_m\leq n$ is equivalent to counting the number of strictly increasing tuples $(b_1,b_2,\ldots,b_m)$ with $1\leq b_1\lt b_2\lt\cdots\lt b_m\leq n+m-1$. This is, in turn, equivalent to counting the number of ways of selecting $m$ objects from among $\{1,2,\ldots,n+m-1\}$, with no repetitions allowed. This is just $\binom{n+m-1}{m}$.
Where is the highest point of $f(x)=\sqrt[x]{x}$ on the $x$-axis? I mean, the highest point of $f(x)=\sqrt[x]{x}$ occurs when $x=e$. I'm trying to figure out how to prove that, or how it can be calculated.
Well, write $$f(x) = e^{\frac{1}{x}\ln(x)}$$ and differentiate and set equal to 0 to get: $$\dfrac{d}{dx}(e^{\frac{1}{x}\ln(x)})=\bigg(-\frac{1}{x^2}\ln(x)+\frac{1}{x^2}\bigg)e^{\frac{1}{x}\ln(x)}=0$$ Which implies (after dividing by the exponential term) that $$\frac{1}{x^2}(1-\ln(x))=0$$ Whence $1=\ln(x)$ or $x=e$. Now you just need to check whether this gives you a local minimum or maximum via the second derivative.
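A small addition: one can avoid the second derivative by checking the sign of the first. Since
$$\frac{d}{dx}\,\frac{\ln x}{x}=\frac{1-\ln x}{x^2}\;>0\ \text{ for }0<x<e,\qquad <0\ \text{ for }x>e,$$
the exponent $\frac{\ln x}{x}$, and hence $f(x)=e^{\frac{1}{x}\ln(x)}$, increases up to $x=e$ and decreases afterwards, so $x=e$ is a global maximum.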
Prove that the Lie derivative of a vector field equals the Lie bracket: $\frac{d}{dt} ((\phi_{-t})_* Y)|_{t=0} = [X,Y]$ Let $X$ and $Y$ be vector fields on a smooth manifold $M$, and let $\phi_t$ be the flow of $X$, i.e. $\frac{d}{dt} \phi_t(p) = X_p$. I am trying to prove the following formula: $\frac{d}{dt} ((\phi_{-t})_* Y)|_{t=0} = [X,Y],$ where $[X,Y]$ is the commutator, defined by $[X,Y] = X\circ Y - Y\circ X$. This is a question from these online notes: http://www.math.ist.utl.pt/~jnatar/geometria_sem_exercicios.pdf .
Here is a simple proof which I found in the book "Differentiable Manifolds: A Theoretical Physics Approach" of G. F. T. del Castillo. Precisely, it is Proposition 2.20. We denote $(\mathcal{L}_XY)_x=\frac{d}{dt}(\phi_t^*Y)_x|_{t=0},$ where $(\phi^*_tY)_x=(\phi_{t}^{-1})_{*\phi_t(x)}Y_{\phi_t(x)}.$ Recall also that $(Xf)_x=\frac{d}{dt}(\phi_t^*f)_x|_{t=0}$ where $\phi_t^*f=f\circ\phi_t$ and use that $(\phi^*_tYf)_x=(\phi^*_tY)_x(\phi^*_tf).$ We claim that $(\mathcal{L}_XY)_x=[X,Y]_x.$ Proof:$$(X(Yf))_x=\lim_{t\to 0}\frac{(\phi_t^*Yf)_x-(Yf)_x}{t}=\lim_{t\to 0}\frac{(\phi^*_tY)_x(\phi^*_tf)-(Yf)_x}{t}=\star$$ Now we add and subtract $(\phi^*_tY)_xf.$ Hence $$\star=\lim_{t\to 0}\frac{(\phi^*_tY)_x(\phi^*_tf)-(\phi^*_tY)_xf+(\phi^*_tY)_xf-Y_xf}{t}=$$ $$=\lim_{t\to 0}(\phi^*_tY)_x\frac{(\phi^*_tf)-f}{t}+\lim_{t\to 0}\frac{(\phi^*_tY)_x-Y_x}{t}f=Y_xXf+(\mathcal{L}_XY)_xf.$$ So we get that $XY=YX+\mathcal{L}_XY.$
Non-principal Ideals in a Complete Lattice Given a complete lattice, is it possible to have order ideals which are not principal? Can one not always just join together every element of the ideal to get a maximal, generating element? What about for frames? Thanks!
The short answer is that this does not work. In order to get a counterexample, consider the Boolean algebra of subsets of the natural numbers $\mathcal P(\mathbb N)$ and let $FIN$ denote the ideal of finite subsets of $\mathbb N$. Observe that $\mathcal P(\mathbb N)$ is a complete lattice and $FIN$ is not a principal ideal. The point is that the join of all elements of an ideal always exists by completeness (here it is $\mathbb N$ itself), but it need not belong to the ideal: an order ideal is only required to be closed under finite joins.
Ideal not finitely generated Let $R=\{a_0+a_1 X+a_2 X^2 +\cdots + a_n X^n\}$, where $a_0$ is an integer and the rest of the coefficients are rational numbers. Let $I=\{a_1 X+a_2 X^2+\cdots +a_n X^n\}$ where all of the coefficients are rational numbers. Prove that $I$ is an ideal of $R$. Show further that $I$ is not finitely generated as an $R$-module. I have managed to prove that $I$ is an ideal of $R$, by showing that $I$ is the kernel of the evaluation map that sends a polynomial in $R$ to its constant term. Hence $I$ is an ideal of $R$. However, I am stuck at showing $I$ is not finitely generated as an $R$-module. Sincere thanks for any help.
This ring is an example of a Bézout domain that is not a unique factorization domain (since not all nonzero noninvertible elements decompose into irreducibles in the first place; for instance $X$ does not). The wikipedia page gives a proof of the Bézout property, namely that any finitely generated ideal is in fact a principal ideal. So if $I$ were finitely generated, it would have a single generator. But it cannot, since an element without constant term and with coefficient $c$ of $X$ only generates elements whose coefficient of $X$ is an integer multiple of $c$. (One also sees directly that a finite set of elements of $I$ only generates elements whose coefficient of $X$ is an integer multiple of the gcd of their coefficients of $X$, which cannot be all of $\mathbb Q$, as is indicated in the answers by Alexander Thumm and Alex Becker.)
Help Calculating a certain integral I study an article, and I got stuck on a problem of calculating an integral. Whatever I do, I do not get the result mentioned there. The notations are $u,\tilde u$ are functions defined on $\Omega \subset \Bbb{R}^N$ with values in $\Bbb{R}^n$: $$ \eta_\varepsilon = \int_\Omega \tilde u(x)dx -\int_\Omega u(x)dx \in \Bbb{R}^n $$ The idea is that we try to correct $\tilde u$ such that it has the same integral as $u$. So pick a ball $B_\varepsilon=B(x_0,\varepsilon^{1/N})$ contained in the interior of $\Omega$. We define the function $$ v(x)=\begin{cases} \tilde u(x) & x \notin B_\varepsilon \\ \tilde u(x)+h_\varepsilon(1-\varepsilon^{-1/N}|x-x_0|) & x \in B_\varepsilon \end{cases}$$ where $h_\varepsilon \in \Bbb{R}^n$ should be chosen such that $\int_\Omega v=\int_\Omega u$, i.e. $$ \int_{B_\varepsilon} h_\varepsilon(1-\varepsilon^{-1/N}|x-x_0|)dx=-\eta_\varepsilon $$ Since $h_\varepsilon$ is a constant, it should be enough to find the value of $$(I) \ \ \ \ \int_{B_\varepsilon} (1-\varepsilon^{-1/N}|x-x_0|)dx $$ which I calculated with the coarea formula and got $$ |B_\varepsilon|-\varepsilon^{-1/N}\int_0^{\varepsilon^{1/N}}r^N \cdot N \omega_N dr=\frac{ \varepsilon\omega_N}{N+1}$$ where $\omega_N$ is the volume of the unit ball in $\Bbb{R}^N$. This would lead to $$ h_\varepsilon=-\eta_\varepsilon \frac{N+1}{\varepsilon \omega_N}$$ However, in the article the answer is $$ h_\varepsilon = -\eta_\varepsilon \frac{N}{ \varepsilon^{\frac{N-1}{N}}\omega_{N-1}}$$ This seems wrong to me. The different constants are not a problem but some facts presented in the following of this rely crucially on the fact that the power of $\varepsilon$ in $h_\varepsilon$ is $\frac{1-N}{N}$ and not $-1$ as I got. Is my calculation of the integral $(I)$ correct, or I missed something?
Independent of the details of your calculation, the book's answer can't be right since $|B_\epsilon|$ clearly goes as $\epsilon\omega_N$ and not as $\epsilon^{(N-1)/N}\omega_{N-1}$. It looks as if they were calculating the integral over the sphere rather than the ball, but I don't see why they would do that. Are you sure that $\Omega\subset\mathbb R^n$? Sometimes $\Omega$ is used to denote solid angle.
How to determine the number of directed/undirected graphs? I'm kind of stuck on this homework problem, could anyone give me a springboard for it? If we have $n\in\mathbb{Z}^+$, and we let the set of vertices $V$ be a set of size $n$, how can we determine the number of directed graphs/undirected graphs/graphs with loops etc.? Is there a formula for this? I feel like it can be done using combinatorics but I can't quite figure it out. Any ideas? Thanks!
A start: We will show how to count labelled, loopless, undirected graphs. There are $\binom{n}{2}$ ways to choose a set $\{u,v\}$ of two vertices. For every such set, we say yes or no depending on whether we have decided to join $u$ and $v$ by an edge. Alternately, but somewhat less concretely, let $P$ be the set of all (unordered) pairs. This set $P$ has cardinality $\binom{n}{2}$. To specify a loopless undirected graph, we choose a subset of $P$ and connect any unordered pair in that subset by an edge. How many subsets does $P$ have? To extend to graphs with possible loops (but at most one per vertex) there is a similar yes/no process.
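A hedged sketch of mine to make the count concrete: for small $n$ one can enumerate all labelled loopless undirected graphs directly and compare with the closed form that the yes/no argument suggests.

```python
from itertools import combinations, product

def count_graphs(n):
    pairs = list(combinations(range(n), 2))        # the set P of unordered pairs
    # every yes/no assignment to the pairs is one loopless undirected graph
    return sum(1 for _ in product((False, True), repeat=len(pairs)))

for n in range(1, 6):
    assert count_graphs(n) == 2 ** (n * (n - 1) // 2)
print("number of labelled loopless undirected graphs is 2^C(n,2)")
```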
Prove Algebraic Identity Possible Duplicate: Value of $\sum\limits_n x^n$ Given $a\in\mathbb{R}$ and $0<a<1$ let $(X_n)$ be a sequence defined: $X_n=1+a+a^2+...+a^n$, $\forall n\in\mathbb{N}$. How do I show that $X_n=\frac{1-a^{n+1}}{1-a}$? Thanks.
$$\begin{align*} (1-a)(1+a+a^2+\dots+a^n)&=(1+a+\dots+a^n)-a(1+a+\dots+a^n)\\ &=(1+\color{red}{a+\dots+a^n})-(\color{red}{a+a^2+\dots+a^n}+a^{n+1})\\ &=1-a^{n+1} \end{align*}$$ Now divide both sides by $1-a$ (which is nonzero since $0<a<1$) to get $X_n=\frac{1-a^{n+1}}{1-a}$.
Limit and continuity For this question, should I use the differentiation method or the integration method? $\lim_{x\to \infty} (\frac{x}{x+2})^{x/8}$ This is what I got so far. Note: $\lim \limits_{n\to\infty} [1 + (a/n)]^n = e^{a}\ldots\ldots (1)$ $$ L = \lim \left[\frac{x}{x+2}\right]^{x/8} = \lim\left[\frac{1}{\frac{x+2}{x}}\right]^{x/8} =\frac{1}{\left(\lim [1 + (2/x)]^x\right)^{1/8}} $$ but I'm not sure where to go from there.
You are probably intended to use the fact that $$\lim_{t\to\infty}\left(1+\frac{1}{t}\right)^t=e.$$ A manipulation close to what you were doing gets us there. We have $$\left(1+\frac{2}{x}\right)^{x/8}=\left(\left(1+\frac{2}{x}\right)^{x/2}\right)^{1/4}.$$ Let $t=\frac{x}{2}$. Then $\frac{2}{x}=\frac{1}{t}$. Note that as $x\to\infty$, we have $t\to\infty$. It follows that $$\lim_{x\to \infty}\left(1+\frac{2}{x}\right)^{x/8}=\lim_{t\to\infty}\left(\left(1+\frac{1}{t}\right)^{t}\right)^{1/4}=e^{1/4}.$$ So our limit is $1/e^{1/4}$, or equivalently $e^{-1/4}$. If you are allowed to use the fact that in general $\lim_{x\to\infty}\left(1+\frac{a}{x}\right)^x=e^a$, then you can simply take $a=2$, and conclude that $$\lim \left[\frac{x}{x+2}\right]^{x/8} = \lim\left[\frac{1}{\frac{x+2}{x}}\right]^{x/8} =\frac{1}{\left(\lim [1 + (2/x)]^x\right)^{1/8}}=\frac{1}{(e^2)^{1/8}}=e^{-1/4}.$$
Does the group of Diffeomorphisms act transitively on the space of Riemannian metrics? Let $M$ be a smooth manifold (maybe compact, if that helps). Denote by $\operatorname{Diff}(M)$ the group of diffeomorphisms $M\to M$ and by $R(M)$ the space of Riemannian metrics on $M$. We obtain a canonical group action $$ R(M) \times \operatorname{Diff}(M) \to R(M), (g,F) \mapsto F^*g, $$ where $F^*g$ denotes the pullback of $g$ along $F$. Is this action transitive? In other words, is it possible for any two Riemannian metrics $g,h$ on $M$ to find a diffeomorphism $F$ such that $F^*g=h$? Do you know any references for this type of questions?
This action will not be transitive in general. For example, if $g$ is a metric and $\phi \in \operatorname{Diff}(M)$ then the curvature of $\phi^* g$ is the pullback of the curvature of $g$. So there's no way for a flat metric to be the pullback of a metric with non-zero curvature. Similarly, if $g$ is Einstein ($\mathrm{Ric} = \lambda g$) then so is $\phi^* g$. So there are many diffeomorphism invariants of a metric. Indeed, this should make sense because you can think of a diffeomorphism as passive, i.e. as just a change of coordinates. Then all of the natural things about the Riemannian geometry of a manifold should be coordinate ($\Leftrightarrow$ diffeomorphism) invariant.
What matrices preserve the $L_1$ norm for positive, unit norm vectors? It's easy to show that orthogonal/unitary matrices preserve the $L_2$ norm of a vector, but if I want a transformation that preserves the $L_1$ norm, what can I deduce about the matrices that do this? I feel like it should be something like the columns sum to 1, but I can't manage to prove it. EDIT: To be more explicit, I'm looking at stochastic transition matrices that act on vectors that represent probability distributions, i.e. vectors whose elements are positive and sum to 1. For instance, the matrix $$ M = \left(\begin{array}{ccc}1 & 1/4 & 0 \\0 & 1/2 & 0 \\0 & 1/4 & 1\end{array}\right) $$ acting on $$ x=\left(\begin{array}{c}0 \\1 \\0\end{array}\right) $$ gives $$ M \cdot x = \left(\begin{array}{c}1/4 \\1/2 \\1/4\end{array}\right)\:, $$ a vector whose elements also sum to 1. So I suppose the set of vectors whose isometries I care about is more restricted than the completely general case, which is why I was confused about people saying that permutation matrices were what I was after. Sooo... given the vectors are positive and have entries that sum to 1, can we say anything more exact about the matrices that preserve this property?
The matrices that preserve the set $P$ of probability vectors are those whose columns are members of $P$. This is obvious since if $x \in P$, $M x$ is a convex combination of the columns of $M$ with coefficients given by the entries of $x$. Each column of $M$ must be in $P$ (take $x$ to be a vector with a single $1$ and all else $0$), and $P$ is a convex set.
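A small numerical illustration of this convex-combination argument (my own sketch, using the example matrix from the question):

```python
import numpy as np

M = np.array([[1.0, 0.25, 0.0],
              [0.0, 0.50, 0.0],
              [0.0, 0.25, 1.0]])     # every column is itself a probability vector

x = np.array([0.0, 1.0, 0.0])        # a probability vector
y = M @ x
print(y, y.sum())                    # [0.25 0.5  0.25] 1.0 -- still in P
```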
Maximum area of rectangle with fixed perimeter. How can you, with polynomial functions, determine the maximum area of a rectangle with a fixed perimeter. Here's the exact problem— You have 28 feet of rabbit-proof fencing to install around your vegetable garden. What are the dimensions of the garden with the largest area? I've looked around this Stack Exchange and haven't found an answer to this sort of problem (I have, oddly, found a similar one for concave pentagons). If you can't give me the exact answer, any hints to get the correct answer would be much appreciated.
The result you need is that for a rectangle with a given perimeter the square has the largest area. So with a perimeter of 28 feet, you can form a square with sides of 7 feet and area of 49 square feet. This follows since given a positive number $A$ with $xy = A$ the sum $x + y$ is smallest when $x = y = \sqrt{A}$. You have $2x + 2y = P \implies x + y = P/2$, and you want to find the maximum of the area, $A = xy$. Since $x + y = P/2 \implies y = P/2 - x$, you substitute to get $A = x(P/2-x) = (P/2)x - x^2$. In your example $P = 28$, so you want to find the maximum of $A = 14x - x^2$.
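To finish this particular example without calculus (a small addition): complete the square,
$$A = 14x - x^2 = 49-(x-7)^2 \le 49,$$
with equality exactly when $x=7$. So the garden is the $7\times 7$ square, of area $49$ square feet, confirming the result quoted above.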
inequality $(a+c)(a+b+c)<0$, prove $(b-c)^2>4a(a+b+c)$ If $(a+c)(a+b+c)<0,$ prove $$(b-c)^2>4a(a+b+c)$$ I can do this by constructing an auxiliary quadratic; what I want to know is whether it can be proved directly.
Consider the quadratic $$ f(x) = ax^2 - (b-c)x + (a+b+c) $$ $$f(1)f(0) = 2(a+c)(a+b+c) \lt 0$$ Thus if $a \neq 0$, then this has a real root in $(0,1)$ and so $$(b-c)^2 \ge 4a(a+b+c)$$ If $(b-c)^2 = 4a(a+b+c)$, then we have a double root in $(0,1)$ in which case, $f(0)$ and $f(1)$ will have the same sign. Thus $$(b-c)^2 \gt 4a(a+b+c)$$ If $a = 0$, then $c(b+c) \lt 0$, and so we cannot have $b=c$ and thus $(b-c)^2 \gt 0 = 4a(a+b+c)$ And if you want a more "direct" approach, we show that $(p+q+r)r \lt 0 \implies q^2 \gt 4pr$ using the following identity: $$(p+q+r)r = \left(p\left(1 + \frac{q}{2p}\right)\frac{q}{2p}\right)^2 + \left(r - \frac{q^2}{4p}\right)^2 + p\left(r - \frac{q^2}{4p}\right)\left(\left(1 + \frac{q}{2p}\right)^2 + \left(\frac{q}{2p}\right)^2\right)$$ If $(p+q+r)r \lt 0$, then we must have $p\left(r - \frac{q^2}{4p}\right) \lt 0$, as all the other terms on the right side are non-negative. Of course, this was gotten by completing the square in $px^2 + qx + r$ and setting $x=0$ and $x=1$ and multiplying.
Summing numbers which increase by a fixed amount (arithmetic progression) An auditorium has 21 rows of seats. The first row has 18 seats, and each succeeding row has two more seats than the previous row. How many seats are there in the auditorium? Now I supposed you could use sigma notation since this kind of problem reminds me of it, but I have little experience using it so I'm not sure.
You could write the total using sigma notation as $$\sum_{k=0}^{20}(18+2k)\,$$ among many other ways, but I’m pretty sure that what’s wanted here is the actual total. You can add everything up by hand, which is a bit tedious, or you can use the standard formula for the sum of an arithmetic progression, if you know it, or you can be clever and arrange the sizes of the rows like this: $$\begin{array}{} 18&20&22&24&26&28&30&32&34&36&38\\ 58&56&54&52&50&48&46&44&42&40\\ \hline 76&76&76&76&76&76&76&76&76&76&38 \end{array}$$ The bottom row is the sum of the top two, so adding it up gives you the total number of seats. And that’s easy: it’s $10\cdot 76+38=760+38=798$. This calculation is actually an adaptation to this particular problem of the usual derivation of the formula for the sum of an arithmetic progression, which in this particular case looks like this: $$\begin{array}{} 18&20&22&24&26&\dots&50&52&54&56&58\\ 58&56&54&52&50&\dots&26&24&22&20&18\\ \hline 76&76&76&76&76&\dots&76&76&76&76&76 \end{array}$$ The top row is the original set of row sizes; the second consists of the same numbers in reverse order; and the bottom row is again the sum of the top two. That now counts each seat twice, so the total number of seats is $$\frac12(21\cdot 76)=798\;.$$
Evaluating $\int \dfrac {2x} {x^{2} + 6x + 13}dx$ I am having trouble understanding the first step of evaluating $$\int \dfrac {2x} {x^{2} + 6x + 13}dx$$ When faced with integrals such as the one above, how do you know to manipulate the integral into: $$\int \dfrac {2x+6} {x^{2} + 6x + 13}dx - 6 \int \dfrac {1} {x^{2} + 6x + 13}dx$$ After this first step, I am fully aware of how to complete the square and evaluate the integral, but I am having difficulties seeing the first step when faced with similar problems. Should you always look for what the $"b"$ term is in a given $ax^{2} + bx + c$ function to know what you need to manipulate the numerator with? Are there any other tips and tricks when dealing with inverse trig antiderivatives?
Just keep in mind which "templates" can be applied. The LHS in your second line is "prepped" for the $\int\frac{du}{u}$ template. Your choices for a rational function with a quadratic denominator are limited to polynomial division and then partial fractions for the remainder, if the denominator factors (which it always will over $\mathbb{C}$). The templates depend on the sign of $a$ and the number of roots. Here are some relevant "templates": $$ \eqalign{ \int\frac1{ax+b}dx &= \frac1{a}\ln\bigl|ax+b\bigr|+C \\ \int\frac{dx}{(ax+b)^2} &= -\frac1{a}\left(ax+b\right)^{-1}+C \\ \int\frac1{x^2+a^2}dx &= \frac1{a}\arctan\frac{x}{a}+C \\ \int\frac1{x^2-a^2}dx &= \frac1{2a}\ln\left|\frac{x-a}{x+a}\right|+C \\ } $$ So, in general, to tackle $$ I = \int\frac{Ax+B}{ax^2+bx+c}dx $$ you will want to write $Ax+B$ as $\frac{A}{2a}\left(2ax+b\right)+\left(B-\frac{Ab}{2a}\right)$ to obtain $$ \eqalign{ I & = \frac{A}{2a}\int\frac{2ax+b}{ax^2+bx+c}dx + \left(B-\frac{Ab}{2a}\right) \int\frac{dx}{ax^2+bx+c} \\& = \frac{A}{2a}\ln\left|ax^2+bx+c\right| + \left(\frac{B}{a}-\frac{Ab}{2a^2}\right) \int\frac{dx}{x^2+\frac{b}{a}x+\frac{c}{a}} } $$ and to tackle the remaining integral, you can find the roots from the quadratic equation or complete the squares using the monic version (which is easier to do substitution with). If $a=0$, use the first "template" above. If you complete the squares and it's a perfect square, or if you get one double root, then use the second. If the roots are complex or there are two distinct real roots, then (after substituting $u=x+\frac{b}{2a}$) use the third or fourth "template".
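Applied to the original integral, the recipe plays out as follows (a quick worked check of mine): with $A=2$, $B=0$, $a=1$, $b=6$, $c=13$,
$$\int \frac{2x}{x^2+6x+13}\,dx=\int\frac{2x+6}{x^2+6x+13}\,dx-6\int\frac{dx}{(x+3)^2+4}=\ln\left(x^2+6x+13\right)-3\arctan\frac{x+3}{2}+C,$$
where the last integral used the arctangent template with $u=x+3$ and the constant equal to $2$.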
Disprove Homeomorphism I have a problem that puzzles me. I need to show that the two sets $A = \{(x,y) \in \mathbb{R}^2 \, \, \vert \, \, |x| \leq 1 \}$ and $B = \{(x,y) \in \mathbb{R}^2 \, \, \vert \, \, x \geq 0 \}$ are not homeomorphic; but I'm not able to figure out how to start or what I need to arrive at.
What about this: one might hope to find a continuous bijection $ f: \, A \rightarrow B $ such that $ f^{-1} $ is not continuous, for example $f(x,y) = \left( \tan\left( \frac{(1+x) \pi}{4} \right), y \right)$. Problem at $ x = 1 $? Indeed: $\tan(\pi/2)$ is undefined, so this is not even a map defined on all of $A$. And even a genuine continuous bijection with discontinuous inverse would not settle anything, since some other map could still be a homeomorphism. To actually disprove it, compare a topological invariant: the set of points with no neighborhood homeomorphic to $\mathbb{R}^2$ consists of the two lines $x=\pm1$ for $A$ but the single line $x=0$ for $B$, so no homeomorphism can exist.
Question about primes in square-free numbers For any prime, what percentage of the square-free numbers has that prime as a prime factor?
Let $A(n)=\{\mathrm{squarefree~numbers~\le n}\}$ and $B_p(n)=\{x\in A(n); p\mid x\}$. Then the asymptotic density of $B_p$ in $A$ is $b_p = \lim_{n\rightarrow \infty} |B_p(n)|/|A(n)|$. (It seems from the comments that this is not what @RudyToody is looking for, but I thought it's worth writing up anyway.) Let the density of $A$ in $\mathbb{N}$ be $a = \lim_{n\rightarrow \infty} |A|/n$. Observe $B_p(pn) = \{px; x\in A(n),p\nmid x\}$, so for $N$ large $b_p$ must satisfy $$ \begin{align} b_p a (pN) & \simeq (1-b_p)aN \\ b_p &= \frac{1}{p+1} \end{align} $$ as @joriki already noted. To illustrate, here are some counts for squarefree numbers $<10^7$. $|A(10^7)|=6079291$. $$ \begin{array}{c|c|c|c} p & |B_p(10^7)| & |A(10^7)|/(p+1) & \Delta=(p+1)|B|/|A|-1 \\ \hline \\ 2 & 2026416 & 2026430.3 & -7\times 10^{-6} \\ 3 & 1519813 & 1519822.8 & -6\times 10^{-6} \\ 5 & 1013234 & 1013215.2 & 1.9\times 10^{-5} \\ 7 & 759911 & 759911.4 & -5\times 10^{-7} \\ 71 & 84438 & 84434.6 & 4\times 10^{-5} \\ 173 & 34938 & 34938.5 & -1.3 \times 10^{-5} \end{array} $$
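A direct way to reproduce such a table (a sketch of mine) is to sieve out the non-squarefree numbers and count:

```python
def squarefree_flags(n):
    """flags[k] is True iff k is squarefree, for 0 <= k <= n."""
    flags = [True] * (n + 1)
    d = 2
    while d * d <= n:
        for m in range(d * d, n + 1, d * d):
            flags[m] = False
        d += 1
    return flags

N = 10**6
flags = squarefree_flags(N)
A = sum(flags[1:])                       # number of squarefree numbers <= N
for p in (2, 3, 5, 7, 71, 173):
    B = sum(1 for k in range(p, N + 1, p) if flags[k])
    print(p, B, round(A / (p + 1), 1))   # B should track |A|/(p+1)
```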
Cross product in complex vector spaces When an inner product is defined on a complex vector space, conjugation is performed on one of the vectors. What, then, is the cross product of two complex 3D vectors? I suppose that one possible generalization is $A\otimes B \rightarrow \left ( A\times B \right )^*$ where $\times$ denotes the usual cross product. The conjugation here is to ensure that the result of the cross product is orthogonal to both vectors $A$ and $B$. Is that correct?
Yes, this is a correct definition. If $v$, $w$ are perpendicular vectors in $\Bbb C^3$ (with respect to the hermitian product) then $v,w,v\times w$ form a matrix in $SU_3$. We can define the complex cross product using octonion multiplication (and vice versa). Let's use the Cayley-Dickson formula twice: $$(a+b^\iota)(c+d^\iota)=ac-\bar db+(b\bar c+da)^\iota$$ for quaternions $a,b,c,d$. Next set $a=u\mathbf j, b=v+w\mathbf j, c=x\mathbf j, d=y+z\mathbf j$ for complex numbers $u,v,w,x,y,z$. Then we obtain from the above formula $$-u\bar x-v\bar y-\bar w z+(\bar vz-w\bar y)\mathbf j+[w\bar x-\bar uz+(-vx+uy)\mathbf j]^\iota$$ Applying complex conjugation to the third complex coordinate we obtain the formula for the cross product. The first term is the hermitian product of the vectors $(u,v,w)$, $(x,y,z)$. $$\begin {bmatrix}u\\ v \\ w \end{bmatrix}\times \begin {bmatrix}x\\ y \\ z \end{bmatrix}=\begin {bmatrix}\overline {vz}-\overline{wy}\\ \overline{wx}-\overline{uz} \\ \overline{uy}-\overline{vx} \end{bmatrix}$$ $SU_3$ is a subgroup of the octonion automorphism group $G_2$. Any automorphism of the octonions can be obtained by fixing a unit vector $\mathbf i$ on the imaginary sphere $S^6$; it defines a complex structure on the perpendicular space $\Bbb R^6$ via multiplication. Now in this complex structure any $SU_3$ element is an octonion automorphism, so $G_2$ is a fiber bundle over $S^6$ with fiber $SU_3$. Now for the "vice versa". Let's define the octonions as pairs $(a,\mathbf v)$ where $a$ is a complex number and $\mathbf v$ a vector in $\Bbb C^3$. Then octonion multiplication can be defined as $$(a,\mathbf v)(b,\mathbf w)=(ab-\mathbf {v\cdot w},a\mathbf w+b \mathbf v + \mathbf {v \times w})$$ I hope that the above argument with the double Cayley-Dickson formula can be used to prove this, although I have not done the calculation myself; the reader is urged to do it as an exercise :) We can extend the definition of the cross product to quaternions the same way. Extending it to octonions we need to be more careful; Freudenthal has done this using $3\times3$ matrices over the octonions - the so-called Jordan algebra. Some kind of "cross product" is present in all exceptional Lie groups $F_4$, $E_6$, $E_7$, $E_8$, as these groups are described by Rosenfeld as automorphism groups of 2-dimensional projective planes over $\Bbb {O,C\otimes O, H \otimes O, O \otimes O}$. Have I flown away too far from the original question?
What's the meaning of a set to the power of another set? ${ \mathbb{N} }^{ \left\{ 0,1 \right\} }$ and ${ \left\{ 0,1 \right\} }^{ \mathbb{N} }$ to be more specific, and is there a countable subset in each one of them? How do I find them?
The syntax $X^Y$ where $X$ and $Y$ are sets means the set of functions from $Y$ to $X$. Recall for the following that the von Neumann ordinal $2 = \left\{0,1\right\}$. We often identify the powerset $\mathcal{P}(X)$ with the set of functions $2^X$, since we can think of the latter set as the set of characteristic functions of subsets of $X$. Let $f \in 2^X$ be a function and let $Z \subseteq X$. We can stipulate that if $f(x) = 1$ then $x \in Z$, and if $f(x) = 0$ then $x \not\in Z$. The exponent notation makes sense because of the following identity from cardinal arithmetic: $$|X|^{|Y|} = |X^Y|.$$ That is, the cardinality of one cardinal raised to the power of another is just the cardinality of the functions from the latter to the former. The name 'powerset' also makes sense once thought of in these terms, since we can easily construct a bijection between $\mathcal{P}(X)$ and $2^X$ by identifying subsets with their characteristic functions from $X$. As Rahul Narain points out, the set $X^2$ is naturally identified with the cartesian product $X \times X$. Each function $f \in X^2$ will be of the form $\left\{(0, x), (1, y)\right\}$ for $x, y \in X$. By taking $f(0)$ as the first component and $f(1)$ as the second, we can construct a pair $(f(0) = x, f(1) = y)$. The collection of all such pairs will just be the cartesian product of $X$ with itself. Let $X$ be an infinite set. We seek to show that $X^2$ and $2^X$ are infinite, by showing that they both have infinite subsets. This is most easily done by constructing a bijection between those subsets and $X$. Firstly, let $$Y = \left\{ f : f(0) = f(1), f \in X^2 \right\}.$$ Clearly $Y \subseteq X^2$. Now let $G : Y \rightarrow X$, $G(f) = f(0)$. This is an injection since the uniqueness of $f(0)$ is guaranteed by the construction of $Y$. The inverse function $G^{-1} : X \rightarrow Y$ is easily defined as $$G^{-1}(x) = \left\{ (0, x), (1, x) \right\}$$ which again must be an injection. Note that the identity relation $1_X = \left\{ (x, x) : x \in X \right\}$ bears the same relation to $Y$ as the full cartesian product does to $X^2$. The proof that $2^X$ is infinite proceeds by the same basic method. Let $$Z = \left\{ f : \exists{x} ( f(x) = 1 \wedge \forall{y} ( y \in X \wedge y \neq x \rightarrow f(y) = 0 ) ), f \in 2^X \right\}.$$ We will construct a bijection between $Z$ and $X$. Let $H : Z \rightarrow X$, $H(f) = x$ where $f(x) = 1$. Then let $H^{-1} : X \rightarrow Z$, $H^{-1}(x) = f$ for the unique $f$ such that $f(x) = 1$ and $\forall{y} ( y \neq x \rightarrow f(y) = 0)$. Note that what we've essentially done here is set up a bijection between $X$ and all its subsets which are singletons. More precisely, the bijection is between $X$ and the characteristic functions of those singletons. Since $X$ is infinite, $Z$ must be infinite and thus so is $2^X$. However, something much stronger also holds: the cardinality of $2^X$ is strictly greater than the cardinality of $X$. This is Cantor's theorem. Lastly, since $\mathbb{N}$ is infinite, so are $\mathbb{N}^2$ and $2^\mathbb{N}$: just take $X = \mathbb{N}$ in the above proof. $Y$ and $Z$ will be countable since the proof constructs bijections between them and $X$, so they will have the same cardinality as $\mathbb{N}$, namely $\aleph_0$.
Dual of a finite dimensional algebra is a coalgebra (ex. from Sweedler) Let $(A, M, u)$ be a finite dimensional algebra where $M: A\otimes A \rightarrow A$ denotes multiplication and $u: k \rightarrow A$ denotes unit. I want to prove that $(A^*, \Delta, \varepsilon) $ is a colagebra where $\Delta: A^*\rightarrow A^* \otimes A^*$ is a composition: $$A^* \overset{M^*}{\rightarrow}(A\otimes A)^* \overset{\rho^{-1}}{\rightarrow}A^*\otimes A^*$$ And $\rho: V^*\otimes W^* \rightarrow (V\otimes W)^*$ is given by $<\rho(v^*, w^*), v\otimes w>=<v^*, v><w^*,w>$. I have proven that $\rho$ is injective and since $A$ is finite dimensional $\rho$ is also bijective and we can take the inverse $\rho^{-1}$. But I have problems understanding how does $\Delta$ work. By definition we have $<M^*(c^*), a\otimes b>=<c^*, M(a\otimes b)>=c^*(ab)$. But I can't understand what is $\rho^{-1}(M^*(c^*))$, or in other words which element of $A^*\otimes A^*$ can act like $M^*(c^*)$ via $\rho$? P.S. Please correct me if I have grammar mistakes. Thanks!
Given $M^*c^*=:d^* \in (A \otimes A)^*$, pick a basis $(e_i)$ of $A$ with dual basis $(e_i^*)$ of $A^*$ (this is where finite-dimensionality is used). Then $$\rho^{-1}(d^*)=\sum_{i,j} d^*(e_i\otimes e_j)\; e_i^* \otimes e_j^*,$$ since applying $\rho$ to the right-hand side and evaluating on $e_k\otimes e_l$ returns exactly $d^*(e_k\otimes e_l)$. In particular, $$\Delta(c^*)=\sum_{i,j} c^*(e_ie_j)\; e_i^* \otimes e_j^*,$$ so $\Delta(c^*)$ is in general a sum of simple tensors, not a single one: there is no single pair of functionals acting like $M^*(c^*)$ via $\rho$, only such a finite sum.
Square Root Of A Square Root Of A Square Root Is there some way to determine how many times one must take the square root of a number (and of its subsequent roots) until the result equals $\sqrt{2}$, or the square root of a number less than two? $\sqrt{16}=4$, $\sqrt{4}=2$, $\sqrt{2}$ ... 3 steps. $\sqrt{27}=5.19615...$, $\sqrt{5.19615...}=2.27950...$, $\sqrt{2.27950...}=1.509803...$, $\sqrt{1.509803...}$ ... 4 steps. Also, using the floor function: $\sqrt{27}=5.19615...$, floored to $5$; $\sqrt{5}=2.23606...$, floored to $2$; $\sqrt{2}$ ... 3 steps.
Let's do this by example, and I'll let you generalize. Say we want to know about $91$, one of my favorite numbers because it is the lowest number that I think most people might, at first thought, say is prime even though it isn't (another way of saying it isn't divisible by the 'easy-to-see' primes). Well, I note that $16 = 2^4 < 91 < 256 = 2^8$. So if we take 2 square roots, then our number will be bigger than 2, as it's greater than $2^{4/4} = 2$. But if we take 3 square roots, as our number is less than $2^{8}$, its 3rd iterated square root will be less than $2^{8/8} = 2$. So the third square root of $91$ will be between $\sqrt 2$ and $2$, and the 4th will place it below $\sqrt 2$.
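In general, $x^{1/2^k}\le\sqrt2$ exactly when $\log_2 x\le 2^{k-1}$, i.e. when $k\ge 1+\log_2\log_2 x$ (for $x\ge 2$). A small Python sketch of mine comparing the direct iteration with this closed form:

```python
from math import sqrt, log2, ceil

def root_count(x):
    """How many square roots until the value drops to sqrt(2) or below."""
    k = 0
    while x > sqrt(2):
        x = sqrt(x)
        k += 1
    return k

for x in (16, 27, 91):
    print(x, root_count(x), ceil(1 + log2(log2(x))))   # e.g. 27 -> 4, 91 -> 4
```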
I need help with this divisibility problem. I need help with the following divisibility problem. Find all prime numbers m and n such that $mn |12^{n+m}-1$ and $m= n+2$.
You want to solve $p(p+2)|(12^{p+1}-1)(12^{p+1}+1)$. Hint: First exclude $p=2,3$, so we have $$\eqalign{ 12^{p+1}-1 \equiv 143 &= 11 \cdot 13 &\pmod p,\\ 12^{p+1}+1 \equiv 145 &= 5 \cdot 29 &\pmod p, }$$ and deduce that $p$ must be one of $5,11,13,29$. Edit: I'll just add more details: We want that $p$ divides $(12^{p+1}-1)(12^{p+1}+1)$, so $p$ must divide one of the factors of this product. Suppose $p|12^{p+1}-1=k\cdot p+143$ (the congruence follows from Fermat's little theorem). This means $p|143$ and hence $p=11$ or $p=13$. If $p+2$ is prime then we automatically have $p+2|12^{p+1}-1$ again by Fermat's theorem, so $p=11$ is a solution. $p=13$ isn't, as $p+2$ is not prime. In the other case, $p|12^{p+1}+1$, we get 2 solutions, $p=5$ and $p=29$.
On solvable quintics and septics Here is a nice sufficient (but not necessary) condition on whether a quintic is solvable in radicals or not. Given, $x^5+10cx^3+10dx^2+5ex+f = 0\tag{1}$ If there is an ordering of its roots such that, $x_1 x_2 + x_2 x_3 + x_3 x_4 + x_4 x_5 + x_5 x_1 - (x_1 x_3 + x_3 x_5 + x_5 x_2 + x_2 x_4 + x_4 x_1) = 0\tag{2}$ or alternatively, its coefficients are related by the quadratic in f, $(c^3 + d^2 - c e) \big((5 c^2 - e)^2 + 16 c d^2\big) = (c^2 d + d e - c f)^2 \tag{3}$ then (1) is solvable. This also implies that if $c\neq0$, then it has a solvable twin, $x^5+10cx^3+10dx^2+5ex+f' = 0\tag{4}$ where $f'$ is the other root of (3). The Lagrange resolvent are the roots of, $z^4+fz^3+(2c^5-5c^3e-4d^2e+ce^2+2cdf)z^2-c^5fz+c^{10} = 0\tag{5}$ so, $x = z_1^{1/5}+z_2^{1/5}+z_3^{1/5}+z_4^{1/5}\tag{6}$ Two questions though: I. Does the septic (7th deg) analogue, $x_1 x_2 + x_2 x_3 + \dots + x_7 x_1 – (x_1 x_3 + x_3 x_5 + \dots + x_6 x_1) = 0\tag{7}$ imply such a septic is solvable? II. The septic has a $5! = 120$-deg resolvent. While this is next to impossible to explicitly construct, is it feasible to construct just the constant term? Equating it to zero would then imply a family of solvable septics, just like (3) above. More details and examples for (2) like the Emma Lehmer quintic in my blog.
This problem is old but quite interesting. I have an answer to (I) which depends on some calculations in $\textsf{GAP}$ and Mathematica. I haven't thought about (II). Suppose an irreducible septic has roots $x_1,\ldots,x_7$ that satisfy $$ x_1 x_2 + x_2 x_3 + \dots + x_7 x_1 – (x_1 x_3 + x_3 x_5 + \dots + x_6 x_1) = 0. $$ I claim that the Galois group is solvable. Since the polynomial is irreducible, the Galois group must act transitively on the roots. Up to conjugacy, there are only seven transitive permutation groups of degree 7, of which only three are non-solvable. These are: * *The symmetric group $S_7$. *The alternating group $A_7$. *The group $L(7) \cong \mathrm{PSL}(2,7)$ of symmetries of the Fano plane. Since $L(7) \subset A_7 \subset S_7$, we can suppose that the Galois group contains a copy of $L(7)$ and attempt to derive a contradiction. Now, the given identity involves a circular order on the seven roots of the septic. Up to symmetry, there are only three possible circular orders on the points of the Fano plane, corresponding to the three elements of $D_7\backslash S_7/L(7)$. (This result was computed in $\textsf{GAP}$.) Bijections of the Fano plane with $\{1,\ldots,7\}$ corresponding to these orders are shown below. Thus, we may assume that the Galois group contains the symmetries of one of these three Fano planes. Before tackling these cases individually, observe in general that the pointwise stabilizer of a line in the Fano plane is a Klein four-group, where every element is a product of two transpositions. For example, in the first plane, the symmetries that fix $1$, $2$, and $7$ are precisely $(3\;4)(5\;6)$, $(3\;5)(4\;6)$, and $(3\;6)(4\;5)$. These are the only sorts of elements of $L(7)$ that we will need for the argument. Cases 1 and 2: In each of the first two planes, $\{3,5,7\}$ is a line, and thus $(1\;2)(4\;6)$ is an element of $L(7)$. Applying this permutation to the roots in the equation $$ x_1 x_2 + x_2 x_3 + \dots + x_7 x_1 – (x_1 x_3 + x_3 x_5 + \dots + x_6 x_1) = 0 $$ and subtracting from the original yields the equation $$ (2x_1-2x_2-x_4+x_6)(x_7-x_3) \;=\; 0. $$ Since $x_3\ne x_7$, we conclude that $2x_1 + x_6 = 2x_2+x_4$. Now, no three of $1,2,4,6$ are collinear in either of the first two planes. It follows that there is a symmetry of each of the planes that fixes $1$ and $6$ but switches $2$ and $4$, namely $(2\;4)(3\;7)$ for the first plane and $(2\;4)(5\;7)$ for the second plane. Thus we have two equations $$ 2x_1 + x_6 = 2x_2+x_4,\qquad 2x_1 + x_6 = x_2+2x_4, $$ and subtracting gives $x_2=x_4$, a contradiction. Case 3: The argument for the last plane is similar but slightly more complicated. Observe that each of the following eight permutations is a symmetry of the third plane: $$ \text{the identity},\qquad (2\;7)(3\;4),\qquad (3\;6)(5\;7),\qquad (1\;3)(4\;5) $$ $$ (1\;2)(3\;6),\qquad (3\;7)(5\;6),\qquad (3\;5)(6\;7), \qquad (1\;2)(5\;7) $$ We apply each of these permutations to the equation $$ x_1 x_2 + x_2 x_3 + \dots + x_7 x_1 – (x_1 x_3 + x_3 x_5 + \dots + x_6 x_1) = 0, $$ adding together the first four results, and then subtracting the other four results. According to Mathematica, this gives the equation $$ 5(x_2-x_1)(x_3-x_5+x_6-x_7)=0. $$ Since $x_1\ne x_2$, it follows that $x_3+x_6=x_5+x_7$. As in the last case, observe that no three of $3,5,6,7$ are collinear, so there exists a symmetry of the plane that fixes $3$ and $5$ and switches $6$ and $7$, namely $(6\;7)(1\;4)$. 
This gives us two equations $$ x_3+x_6=x_5+x_7,\qquad x_3+x_7=x_5+x_6 $$ and subtracting yields $x_6=x_7$, a contradiction.
Can a non-proper variety contain a proper curve Let $f:X\to S$ be a finite type, separated but non-proper morphism of schemes. Can there be a projective curve $g:C\to S$ and a closed immersion $C\to X$ over $S$? Just to be clear: A projective curve is a smooth projective morphism $X\to S$ such that the geometric fibres are geometrically connected and of dimension 1. In simple layman's terms: Can a non-projective variety contain a projective curve? Feel free to replace "projective" by "proper". It probably won't change much.
Sure, $\mathbb{P}_k^2-\{pt\}$ is not proper over $\operatorname{Spec}(k)$ and contains a proper $\mathbb{P}_k^1$ (any line avoiding the removed point).
Show the existence of a complex differentiable function defined outside $|z|=4$ with derivative $\frac{z}{(z-1)(z-2)(z-3)}$ My attempt I wrote the given function as a sum of rational functions (via partial fraction decomposition), namely $$ \frac{z}{(z-1)(z-2)(z-3)} = \frac{1/2}{z-1} + \frac{-2}{z-2} + \frac{3/2}{z-3}. $$ This then allows me to formally integrate the function. In particular, I find that $$ F(z) = 1/2 \log(z-1) - 2 \log(z-2) + 3/2 \log(z-3) $$ is a complex differentiable function on the set $\Omega = \{z \in \mathbb{C}: |z| > 4\}$ with the derivative we want. So this seems to answer the question, as far as I can tell. The question then asks if there is a complex differentiable function on $\Omega$ whose derivative is $$ \frac{z^2}{(z-1)(z-2)(z-3)}. $$ Again, I can write this as a sum of rational functions and formally integrate to obtain the desired function on $\Omega$ with this particular derivative. Woo hoo. My question Is there more to this question that I'm not seeing? I was also able to write the first given derivative as a geometric series and show that this series converged for all $|z| > 3$, but I don't believe this helps me to say anything about the complex integral of this function. In the case that it does, perhaps this is an alternative avenue to head down? Any insight/confirmation that I'm not overlooking something significant would be much appreciated. Note that this an old question that often appears on study guides for complex analysis comps (one being my own), so that's in part why I'm thinking (hoping?) there may be something deeper here. For possible historical context, the question seems to date back to 1978 (see number 7 here): http://math.rice.edu/~idu/Sp05/cx_ucb.pdf Thanks for your time.
The key seems to be that the coefficients $1/2$, $-2$ and $3/2$ sum to 0. So if you choose branch cuts for the three logarithms such that the cuts coincide through the $|z|>4$ region, then the jumps at the cuts will cancel each other out (the amount each raw logarithm jumps is always $2\pi i$, so the total jump is $2\pi i\,(1/2-2+3/2)=0$) and leave a single continuous function on $\{z\in\mathbb C\mid |z|>4\}$.
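There is in fact a bit more behind the second part, and it is exactly this cancellation condition (an observation consistent with the hint above): a holomorphic primitive exists on $\Omega=\{|z|>4\}$ if and only if the integral over a large circle vanishes. For the first function,
$$\oint_{|z|=5}\frac{z\,dz}{(z-1)(z-2)(z-3)}=2\pi i\left(\tfrac12-2+\tfrac32\right)=0,$$
but for the second the partial-fraction coefficients are $\tfrac12,-4,\tfrac92$, and
$$\oint_{|z|=5}\frac{z^2\,dz}{(z-1)(z-2)(z-3)}=2\pi i\left(\tfrac12-4+\tfrac92\right)=2\pi i\neq0,$$
so the second function has no holomorphic primitive on $\Omega$: the logarithm jumps no longer cancel.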
Interpreting $F[x,y]$ for $F$ a field. First, is it appropriate to write $F[x,y] = (F[x])[y] $? In particular, if $F$ is a field, then we know $F[x]$ is a Euclidean domain. Are there necessary and sufficient conditions for when $F[x,y]$ is also a Euclidean domain?
In most constructions of polynomial rings (e.g., as almost null sequences with values in the ground ring), the rings $F[x,y]$, $(F[x])[y]$, and $(F[y])[x]$ are formally different objects: $F[x,y]$ is the set of all functions $m\colon \mathbb{N}\times\mathbb{N}\to F$ with $m(i,j)=0$ for almost all $(i,j)$; $(F[x])[y]$ is the set of all functions $f\colon\mathbb{N}\to F[x]$ with $f(n)=0$ for almost all $n$; and $(F[y])[x]$ is the set of all functions $g\colon\mathbb{N}\to F[y]$ with $g(m)=0$ for almost all $m$. However, there are natural isomorphisms between them, $$(F[x])[y]\cong F[x,y]\cong (F[y])[x].$$ Informally, this corresponds to the fact that we can write a polynomial in $x$ and $y$ by "putting $y$'s together" or by "putting $x$'s together". So, for instance, $1+x+2y + 3x^2y - 7xy^4 + 8x^2y^2$ can be written as $$\begin{align*} 1+x+2y + 3x^2y - 7xy^4 + 8x^2y^2 &= (1+2y) + (1-7y^4)x + (3y+8y^2)x^2\\ &= (1+x) + (2+3x^2)y + (8x^2)y^2 - (7x)y^4. \end{align*}$$ So, yes, you can say that $F[x,y]$ is "essentially" equal to $(F[x])[y]$. However, $F[x,y]$ is never a Euclidean domain, because it is never a PID: $\langle x,y\rangle$ is never principal. In fact: Theorem. Let $D$ be a domain. Then $D[x]$ is a PID if and only if $D$ is a field. Proof. If $D$ is a field, then $D[x]$ is a Euclidean domain, hence a PID. If $D$ is not a field, then let $a\in D$ be an element that is a nonzero nonunit. Then $\langle a,x\rangle$ is not a principal ideal: if $\langle a,x\rangle = \langle r\rangle$, then $r|a$, hence $r$ must be a polynomial of degree $0$; since $r|x$, then $r$ must be a unit or an associate of $x$; since it must be degree $0$, $r$ must be a unit. So $\langle a,x\rangle$ is principal if and only if it is equal to the entire ring. But $D[x]/\langle a,x\rangle \cong D/\langle a\rangle\neq 0$ (since $\langle a\rangle\neq D$), so $\langle a,x\rangle\neq D[x]$. Hence $\langle a,x\rangle$ is not principal. $\Box$
Spread out the zeros in a binary sequence Suppose I have a machine that processes units at a fixed rate. If I want to run it at a lower rate, I have to leave gaps in the infeed. For example, if you want to process 3/4, then you would feed in 3, leave a gap, then repeat. This could be encoded as a binary sequence: 1110. If you want to process 3/5, then a simple sequence would be 11100. This however leaves an unnecessarily large gap. Perhaps they want as constant a rate as possible down the line. A better sequence would be 11010. Where the gaps are spread as far as possible (remembering that the sequence repeats). Hopefully this has explained the problem. I shall now attempt to phrase it more mathematically. Given a fraction $a/b$, generate a binary sequence of length $b$, that contains exactly $a$ ones, and where the distance between zeros is maximised if the sequence were repeated. In my attempts so far, I've worked out the building blocks of the sequence, but not quite how to put it together. For instance, 77/100 should be composed of 8 lots of 11110 and 15 lots of 1110. Since $8\times5+15\times4=100$ and $8\times4+15\times3=77$. The next step of splicing together my group of 8 and my group of 15 eludes me (in general).
Consider the following algorithm: let $a_i$ be the digit emitted at the $i$-th step and $b_i = b_{i-1}+a_i$ (with $b_0 = 0$) the number of ones emitted up to and including step $i$ (so $i-b_i$ is the number of zeros so far). Define $a_{i+1}$ to be a zero iff $\frac{b_i}{i+1} \ge p$ (where $p$ is the given density of ones), and a one otherwise; in other words, emit a one exactly when the running fraction of ones is still below the target. This keeps the running fraction as close to $p$ as possible, so the zeros end up as far from each other as possible, and the rule also supports irrational densities. For rational $p=a/b$ the algorithm is periodic with period $b$ and exactly $a$ ones per period, e.g. $11010$ for $p=3/5$.
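A Python sketch of this rule (the function name is mine); exact rational arithmetic avoids floating-point edge cases at equality:

```python
from fractions import Fraction

def spread(a, b):
    """One period (length b, exactly a ones) of the evenly-spread sequence."""
    p = Fraction(a, b)
    ones, seq = 0, []
    for i in range(1, b + 1):
        if Fraction(ones, i) >= p:      # already at or above target density: emit 0
            seq.append(0)
        else:                           # below target density: emit 1
            seq.append(1)
            ones += 1
    return ''.join(map(str, seq))

print(spread(3, 4))     # 1110
print(spread(3, 5))     # 11010
print(spread(77, 100))  # one period; it should interleave the 11110 and 1110 blocks
```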
Specify the convergence of function series The task is to specify the convergence of infinite function series (pointwise, almost uniform and uniform convergence): a) $\displaystyle \sum_{n=1}^{+\infty}\frac{\sin(n^2x)}{n^2+x^2}, \ x\in\mathbb{R}$ b) $\displaystyle\sum_{n=1}^{+\infty}\frac{(-1)^n}{n+x}, \ x\in[0;+\infty)$ c) $\displaystyle\sum_{n=0}^{+\infty}x^2e^{-nx}, \ x\in(0;+\infty)$ I know basic facts about these types of convergence and the Weierstrass M-test, but I still have problems using them in practice.
First observe that each of the series converges pointwise on its given interval (using standard comparison tests and results on $p$-series, geometric series, and alternating series. Towards determining uniform convergence, let's first recall the Weierstrass $M$-test: Suppose $(f_n)$ is a sequence of real-valued functions on the set $I$ and $(M_n)$ is a sequence of positive real numbers such that $|f_n(x)|\le M_n$ for $x\in I$, $n\in\Bbb N$. If the series $\sum M_n$ is convergent then $\sum f_n$ is uniformly convergent on $I$. It is worthwhile to consider the heart of the proof of this theorem: Under the given hypotheses, if $m>n$, then for any $x\in I$ $$\tag{1} \bigl| f_{n+1}(x)+\cdots+f_m(x)\bigr| \le| f_{n+1}(x)|+\cdots+|f_m(x)\bigr| \le M_{n+1}+\cdots M_n. $$ So if $\sum M_n$ converges, we can make the right hand side of $(1)$ as small as we wish. Noting that the right hand side of $(1)$ is independent of $x$, we can conclude that $\sum f_n$ is uniformly Cauchy on $I$, and thus uniformly convergent on $I$. Now on to your problem: To apply the $M$-test, you have to find appropriate $M_n$ for the series under consideration. Keep in mind that the $M_n$ have to be positive, summable, and bound the $|f_n|$. Sometimes they are easy to find, as in the series in a). Here note that for any $n\ge 1$ and $x\in\Bbb R$, $$ \biggl| {\sin(n^2x)\over n^2+x^2}\biggr|\le {1\over n^2}. $$ So, take $M_n={1\over n^2}$ and apply the $M$-test. The series in a) converges uniformly on $\Bbb R$. Sometimes finding the $M_n$ is not so easy. This is the case in c). Crude approximations for $f_n(x)=x^2e^{-nx}$ will not help. However, we could try to find the maximum value of $f_n$ over $(0,\infty)$ and perhaps this will give us what we want. And indeed, doing this (using methods from differential calculus), we discover that the maximum value of $f_n(x)=x^2e^{-nx}$ over $(0,\infty)$ is ${4e^{-2}\over n^2}$. And now the road towards using the $M$-test is paved... Sometimes the $M$-test doesn't apply. This is the case for the series in b), the required $M_n$ can't be found (at least, I can't find them). However, here, the proof of the $M$-test gives us an idea. Since the series in b) is alternating (that is, for each $x\in[0,\infty)$, the series $\sum\limits_{n=1}^\infty{(-1)^n\over x+n}$ is alternating), perhaps we can show it is uniformly Cauchy on $[0,\infty)$. Indeed we can: For any $m\ge n$ and $x\ge0$ $$\tag{2} \Biggl|\,{(-1)^n\over n+x}+{(-1)^{n+1}\over (n+1)+x}+\cdots+{ (-1)^m\over m+x}\,\Biggl|\ \le\ {1\over n+x}\le {1\over n}. $$ The term on the right hand side of $(2)$ is independent of $x$ and can be made as small as desired. So, the series in b) is uniformly Cauchy on $[0,\infty)$, and thus uniformly convergent on $[0,\infty)$.
Combinations of lego bricks figures in an array of random bricks I have an assignment where a robot should assemble some lego figures of the simpsons. See the figures here: Simpsons figures! To start with we have some identical sized, different colored lego bricks on a conveyor belt. See image. My problem is to find out which combinations of figures to make out of the bricks on the conveyor belt. The bricks on the conveyor belt can vary. Here is an example: Conveyor bricks: 5x yellow, 2x red, 3x blue, 1x white, 4x green, 1x orange From these bricks I can make: * *Homer(Y,W,B), Marge(B,Y,G), Bart(Y,O,B), Lisa(Y,R,Y), Rest(Y,G,G,G), OR *Marge(B,Y,G), Marge (B,Y,G), Marge(B,Y,G), Lisa(Y,R,Y), Rest(R,W,G,O), OR *... Any way to automate this? Any suggestions to literature or theories? Algorithms I should check out? Thank you in advance for any help you can provide.
The problem of minimizing the number of unused blocks is an integer linear programming problem, equivalent to maximizing the number of blocks that you do use. Integer programming problems are in general hard, and I don’t know much about them or the methods used to solve them. In case it turns out to be at all useful to you, though, here’s a more formal description of the problem of minimizing the number of unused blocks. You have seven colors of blocks, say colors $1$ through $7$; the input consists of $c_k$ blocks of color $k$ for some constants $c_k\ge 0$. You have five types of output (Simpsons), which I will number $1$ through $5$ in the order in which they appear in this image. If the colors are numbered yellow $(1)$, white $(2)$, light blue $(3)$, dark blue $(4)$, green $(5)$, orange $(6)$, and red $(7)$, the five output types require colors $(1,2,3),(4,1,5),(1,6,4),(1,7,1)$, and $(1,3)$. To make $x_k$ Simpsons of type $k$ for $k=1,\dots,5$ requires $$\begin{align*} x_1+x_2+x_3+2x_4+x_5&\text{blocks of color }1,\\ x_1&\text{blocks of color }2,\\ x_1+x_5&\text{blocks of color }3,\\ x_2+x_3&\text{blocks of color }4,\\ x_2&\text{blocks of color }5,\\ x_3&\text{blocks of color }6,\text{ and}\\ x_4&\text{blocks of color }7\;. \end{align*}$$ This yields the following system of inequalities: $$\left\{\begin{align*} x_1+x_2+x_3+2x_4+x_5&\le c_1\\ x_1&\le c_2\\ x_1+x_5&\le c_3\\ x_2+x_3&\le c_4\\ x_2&\le c_5\\ x_3&\le c_6\\ x_4&\le c_7\;. \end{align*}\right.\tag{1}$$ Let $b=c_1+c_2+\dots+c_7$, the total number of blocks, and let $$f(x_1,x_2,\dots,x_5)=3x_1+3x_2+3x_3+3x_4+2x_5\;,$$ the number of blocks used (output types $1$ through $4$ use three blocks each, type $5$ uses two). Minimizing the number $b-f$ of unused blocks means maximizing $f(x_1,\dots,x_5)$ subject to the constraints in $(1)$ and the requirement that the $x_k$ be non-negative integers.
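Since the instances here are tiny (a handful of blocks), a brute-force search already solves the problem exactly; here is a sketch of mine. The color numbering follows the answer above, and the way the usage line splits the conveyor example's single "blue" into light/dark blue is my assumption, purely for illustration.

```python
from itertools import product

# Figure "recipes" as tuples of colors, numbered 1..7 as in the text above
recipes = [(1, 2, 3), (4, 1, 5), (1, 6, 4), (1, 7, 1), (1, 3)]

def best_mix(counts):
    """Brute-force search. counts[c] = available blocks of color c (c = 1..7).
    Returns the figure counts (x_1, ..., x_5) that use the most blocks."""
    # a loose upper bound on each x_k: limited by the scarcest color it needs
    cap = [min(counts.get(c, 0) for c in r) + 1 for r in recipes]
    best_used, best_x = -1, None
    for x in product(*(range(c) for c in cap)):
        need = {c: 0 for c in range(1, 8)}
        for xk, r in zip(x, recipes):
            for c in r:
                need[c] += xk
        if all(need[c] <= counts.get(c, 0) for c in need):
            used = sum(xk * len(r) for xk, r in zip(x, recipes))
            if used > best_used:
                best_used, best_x = used, x
    return best_x, best_used

# Conveyor example (16 blocks), with its 3 "blue" bricks split 2 light / 1 dark:
print(best_mix({1: 5, 2: 1, 3: 2, 4: 1, 5: 4, 6: 1, 7: 2}))
```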
If "multiples" is to "product", "_____" is to "sum" I know this might be a really simple question for those fluent in English, but I can't find the term that describes numbers that make up a sum. The numbers of a certain product are called "multiples" of that "product". Then what are the numbers of a certain sum called?
To approach the question from another direction: A "multiple" of 7 is a number that is the result of multiplying 7 with something else. If you try to generalize that to sums, you get something like: A "__" of 7 is a number that is the result of adding 7 to something. But that's everything -- at least as long as we allow negative numbers, and if we don't allow negative numbers, then it just means something that is larger than 7. Neither of these concepts feels useful enough in itself that it is worth deciding on a particular noun for it.
Calculating conditional probability for markov chain I have a Markov chain with state space $E = \{1,2,3,4,5\}$ and transition matrix below: $$ \begin{bmatrix} 1/2 & 0 & 1/2 & 0 & 0 \\\ 1/3 & 2/3 & 0 & 0 & 0 \\\ 0 & 1/4 & 1/4 & 1/4 & 1/4 \\\ 0 & 0 & 0 & 3/4 & 1/4 \\\ 0 & 0 & 0 & 1/5 & 4/5\ \end{bmatrix} $$ How would I find the conditional probabilities of $\mathbb{P}(X_2 = 5 | X_0 =1)$ and $\mathbb{P}(X_3 = 1 | X_0 =1)$? I am trying to use the formula (or any other formula, if anyone knows of any) $p_{ij}^{(n)} = \mathbb{P}(X_n = j | X_0 =i)$, the probability of going from state $i$ to state $j$ in $n$ steps. So $\mathbb{P}(X_2 = 5 | X_0 =1) = p_{15}^2$, so I read the entry in $p_{15}$, and get the answer is $0^2$, but the answer in my notes say it is $1/8$? Also, I get for $\mathbb{P}(X_3 = 1 | X_0 =1) = p_{11}^3 = (\frac{1}{2})^3 = 1/8$, but the answer says it is $1/6$?
The notation $p^{(2)}_{15}$ is not to be confused with the square of $p_{15}$ since it stands for the $(1,5)$ entry of the square of the transition matrix. Thus, $$ p^{(2)}_{15}=\sum_{i=1}^5p_{1i}p_{i5}. $$ Likewise for $p^{(3)}_{11}$, which is the $(1,1)$ entry of the cube of the transition matrix, that is, $$ p^{(3)}_{11}=\sum_{i=1}^5\sum_{j=1}^5p_{1i}p_{ij}p_{j1}. $$ In the present case, there are only two ways to start from $1$ and to be back at $1$ after three steps, either the path $1\to1\to1\to1$, or the path $1\to3\to2\to1$. The first path has probability $\left(\frac12\right)^3=\frac18$ and the second path has probability $\frac12\frac14\frac13=\frac1{24}$, hence $p^{(3)}_{11}=\frac18+\frac1{24}=\frac16$.
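A quick numerical check of both computations (my own sketch):

```python
import numpy as np

P = np.array([[1/2, 0,   1/2, 0,   0  ],
              [1/3, 2/3, 0,   0,   0  ],
              [0,   1/4, 1/4, 1/4, 1/4],
              [0,   0,   0,   3/4, 1/4],
              [0,   0,   0,   1/5, 4/5]])

P2 = P @ P
P3 = P2 @ P
print(P2[0, 4])   # P(X_2 = 5 | X_0 = 1) = 0.125 = 1/8
print(P3[0, 0])   # P(X_3 = 1 | X_0 = 1) = 0.1666... = 1/6
```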
Critical points of $f(x,y)=x^2+xy+y^2+\frac{1}{x}+\frac{1}{y}$ I would like some help finding the critical points of $f(x,y)=x^2+xy+y^2+\frac{1}{x}+\frac{1}{y}$. I tried solving $f_x=0, f_y=0$ (where $f_x, f_y$ are the partial derivatives) but the resulting equation is very complex. The exercise has a hint: think of $f_x-f_y$ and $f_x+f_y$. However, I can't see where to use it. Thanks!
After doing some computations I found the following (let's hope I didn't make any mistakes). You need to solve the equations $$f_x = 2x + y - \frac{1}{x^2} = 0 \quad f_y = 2y + x -\frac{1}{y^2} = 0$$ therefore after subtracting and adding them as in the hint we get $$\begin{align} f_x - f_y &= x - y - \frac{1}{x^2} + \frac{1}{y^2} = 0 \\ f_x + f_y &= 3x + 3y -\frac{1}{x^2} - \frac{1}{y^2} = 0 \end{align} $$ but you can factor them a little bit to get $$ \begin{align} f_x - f_y &= x - y + \frac{x^2 - y^2}{x^2 y^2} = (x - y) \left ( 1 + \frac{x+ y}{x^2 y^2}\right ) = 0\\ f_x + f_y &= 3(x + y) -\frac{x^2 + y^2}{x^2 y^2} = 0 \end{align} $$ Now from the first equation you get two conditions, either $x = y$ or $x+y = -x^2 y^2$. If $x = y$ you can go back to your first equation for $f_x$ and substitute to get $$2x + x - \frac{1}{x^2} = 0 \implies 3x = \frac{1}{x^2} \implies x = \frac{1}{\sqrt[3]{3}}$$ and then you get the critical point $\left ( \dfrac{1}{\sqrt[3]{3}}, \dfrac{1}{\sqrt[3]{3}} \right )$. Now if instead $x + y = -x^2 y^2$, then substituting into the equation $f_x + f_y = 0$ gives $$ 3(-x^2 y^2) - \frac{x^2 + y^2}{x^2 y^2} = 0 \implies 3x^4 y^4 + x^2 + y^2 = 0 \implies x = y = 0, $$ but the function and its partial derivatives are not defined when $x=0$ or $y=0$, so this case contributes no critical points. The only critical point is therefore $\left ( \dfrac{1}{\sqrt[3]{3}}, \dfrac{1}{\sqrt[3]{3}} \right )$.
Improving Gift Wrapping Algorithm I am trying to solve task 2 from exercise 3.4.1 of Computational Geometry in C by Joseph O'Rourke. The task asks to improve the Gift Wrapping Algorithm for building the convex hull of a set of points. Exercise: During the course of gift wrapping, it's sometimes possible to identify points that cannot be on the convex hull and to eliminate them from the set "on the fly". Work out rules to accomplish this. What is a worst-case set of points for your improved algorithm? The only thing I came up with is that we can eliminate from the candidate list all points that already form part of the convex hull boundary. This gives a slight improvement, $O(\frac{hn}{2})$ instead of $O(hn)$, which in terms of Big-O notation is actually the same. After a little bit of searching I found that an improvement can be made by ray shooting, but I don't understand how to apply ray shooting to our case (the points are not sorted and don't form a convex hull, so there is no efficient search for the vertex that a ray would hit). If you have any idea how to improve the gift wrapping algorithm, I'd appreciate you sharing it with us. Thanks!
Hint: You're already computing the angles of the lines from the current point to all other points. What does the relationship of these angles to the angle of the line to the starting point tell you? (This also doesn't improve the time complexity of the algorithm.)
Finding the minimal polynomials of trigonometric expressions quickly If in an exam I had to calculate the minimal polynomials of, say, $\sin(\frac{2 \pi}{5})$ or $\cos(\frac{2 \pi}{19})$, what would be the quickest way to do it? I can use the identity $e^{ \frac{2 i \pi}{n}} = \cos(\frac{2 \pi}{n}) + i\sin(\frac{ 2 \pi}{n})$ and raise to the power $n$, but this gets nasty in the $n = 19$ case... Thanks
$\cos(2\pi/19)$ has degree 9, and I doubt anyone would put it on an exam. $\sin(2\pi/5)$ is a bit more reasonable. Note that $\sin(4\pi/5)=2\sin(2\pi/5)\cos(2\pi/5)$, and also $\sin(4\pi/5)=\sin(\pi/5)$, and $\cos(2\pi/5)=1-2\sin^2(\pi/5)$, and with a few more identities like that you should be able to pull out a formula. Or use your idea in the form $2i\sin(2\pi/5)=z-z^{-1}$, where $z=e^{2\pi i/5}$, then, squaring, $-4\sin^2(2\pi/5)=z^2-2+z^{-2}$, $$-8i\sin^3(2\pi/5)=z^3-3z+3z^{-1}-z^{-3}=z^{-2}-3z+3z^{-1}-z^2$$ and a similar formula for the 4th power, and remember that $z^2+z+1+z^{-1}+z^{-2}=0$
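Outside exam conditions, a computer algebra system confirms both minimal polynomials; a sketch with sympy (the degree-9 computation may take a few seconds):

```python
from sympy import minimal_polynomial, sin, cos, pi, symbols

x = symbols('x')
print(minimal_polynomial(sin(2*pi/5), x))   # 16*x**4 - 20*x**2 + 5
print(minimal_polynomial(cos(2*pi/19), x))  # a degree-9 polynomial, slower
```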
Residue integral: $\int_{- \infty}^{+ \infty} \frac{e^{ax}}{1+e^x} dx$ with $0 \lt a \lt 1$. I'm self studying complex analysis. I've encountered the following integral: $$\int_{- \infty}^{+ \infty} \frac{e^{ax}}{1+e^x} dx \text{ with } a \in \mathbb{R},\ 0 \lt a \lt 1. $$ I've done the substitution $e^x = y$. What kind of contour can I use in this case ?
There is a nice problem related to this one, and we can evaluate it using only real analysis. Consider (as a principal value, since the integrand has a pole at $x=0$) $$I=\int_{-\infty }^{\infty }\frac{e^{ax}}{1-e^{x}}\,dx,\qquad 0<a<1. \tag{$*$}$$ Letting $x\to -x$ gives $$I=\int_{-\infty }^{\infty }\frac{e^{-ax}}{1-e^{-x}}\,dx. \tag{$**$}$$ Adding $(*)$ and $(**)$, and using $\frac{e^{ax}}{1-e^{x}}=-\frac{e^{-(1-a)x}}{1-e^{-x}}$, we get $$2I=\int_{-\infty }^{\infty }\frac{e^{-ax}-e^{-(1-a)x}}{1-e^{-x}}\,dx.$$ The integrand is even, so $$I=\int_{0}^{\infty }\left(\frac{e^{-x}}{x}-\frac{e^{-(1-a)x}}{1-e^{-x}}\right)dx-\int_{0}^{\infty }\left(\frac{e^{-x}}{x}-\frac{e^{-ax}}{1-e^{-x}}\right)dx =\Psi (1-a)-\Psi (a)=\frac{\pi }{\tan(\pi a)},$$ by Gauss's integral representation of the digamma function and the reflection formula. This problem was proposed by Cornel Ioan; it is very easy to solve by complex analysis, but splendid to evaluate by real analysis.
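The original integral (with $1+e^x$) evaluates to $\pi/\sin(\pi a)$, since the substitution $t=e^x$ turns it into $\int_0^\infty t^{a-1}/(1+t)\,dt = B(a,1-a)$. This is easy to verify numerically; a sketch with scipy, writing the integrand in log-sum-exp form to avoid overflow:

```python
import numpy as np
from scipy.integrate import quad

a = 0.3
# integrand e^{ax}/(1+e^x) rewritten as exp(a*x - log(1+e^x)) for stability
val, _ = quad(lambda x: np.exp(a*x - np.logaddexp(0.0, x)), -np.inf, np.inf)
print(val, np.pi / np.sin(np.pi * a))  # both ~3.8833
```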
Is there a computer program that does diagram chases? After having done many tedious, robotic proofs that various arrows in a given diagram have certain properties, I've begun to wonder whether or not someone has made this automatic. You should be able to tell a program where the objects and arrows are, and which arrows have certain properties, and it should be able to list all of the provable implications. Has this been done? Humans should no longer have to do diagram chases by hand!
I am developing a Mathematica package called WildCats which allows the computer algebra system Mathematica to perform category theory computations (even graphical computations). The posted version 0.36 is available at https://sites.google.com/site/wildcatsformma/ A much more powerful version 0.50 will be available at the beginning of May. The package is totally free (GPL licence), but requires the commercial Mathematica system available at www.wolfram.com. The educational, student or home editions are very moderately priced. One of the things you can already do in WildCats 0.36 is to apply a functor to a diagram and see the resulting diagram. In the new version 0.50, you will also be able to do diagram chasing. Needless to say, I greatly appreciate user feedback and comments.
Probability - Coin Toss - Find Formula The problem statement, all variables and given/known data: Suppose a fair coin is tossed $n$ times. Find simple formulae in terms of $n$ and $k$ for a) $P(k-1 \mbox{ heads} \mid k-1 \mbox{ or } k \mbox{ heads})$ b) $P(k \mbox{ heads} \mid k-1 \mbox{ or } k \mbox{ heads})$ Relevant equations: $P(k \mbox{ heads in } n \mbox{ fair tosses})=\binom{n}{k}2^{-n}\quad (0\leq k\leq n)$ The attempt at a solution: I'm stuck on the conditional probability. I've dabbled with it a little bit but I'm confused what $k-1$ intersect $k$ is. This is for review and not homework. The answer to a) is $k/(n+1)$. I tried $P(k-1 \mbox{ heads} \mid k \mbox{ heads})=P(k-1 \cap K)/P(K \mbox{ heads})=P(K-1)/P(K).$ I also was thinking about $$P(A\mid A,B)=P(A\cap (A\cup B))/P(A\cup B)=P(A\cup (A\cap B))/P(A\cup B)=P(A)/(P(A)+P(B)-P(AB))$$
By the usual formula for conditional probability, an ugly form of the answer is $$\frac{\binom{n}{k-1}(1/2)^n}{\binom{n}{k-1}(1/2)^n+\binom{n}{k}(1/2)^n}.$$ Cancel the $(1/2)^n$. Now the usual formula for $\binom{a}{b}$ plus a bit of algebra will give what you want. We can simplify the calculation somewhat by using the fact that $\binom{n}{k-1}+\binom{n}{k}=\binom{n+1}{k}$, which has a nice combinatorial proof, and is built into the "Pascal triangle" definition of the binomial coefficients. As for the algebra, $\binom{n}{k-1}=\frac{n!}{(k-1)!(n-k+1)!}$ and $\binom{n+1}{k}=\frac{(n+1)!}{k!(n-k+1)!}$. When you divide, there is a lot of cancellation.
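The resulting simplification, $\binom{n}{k-1}\big/\binom{n+1}{k}=k/(n+1)$, is easy to spot-check; a sketch in Python:

```python
from math import comb

n, k = 10, 4
lhs = comb(n, k-1) / (comb(n, k-1) + comb(n, k))
print(lhs, k / (n + 1))  # both 0.36363..., i.e. k/(n+1)
```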
All partial derivatives are 0. I know that for a function $f$ all of its partial derivatives are $0$. Thus, $\frac{\partial f_i}{\partial x_j} = 0$ for any $i = 1, \dots, m$ and any $j = 1, \dots, n$. Is there any easy way to prove that $f$ is constant? The results seems obvious but I'm having a hard time expressing it in words explicitly why it's true.
It's not hard to see that, given $\mathbf{p},\mathbf{q}\in\mathbb{R}^n$ (assuming the domain is path-connected, e.g. all of $\mathbb{R}^n$): $$\mathbf{f}(\mathbf{p})-\mathbf{f}(\mathbf{q})=\bigl(f_1(\mathbf{p})-f_1(\mathbf{q}),\, f_2(\mathbf{p})-f_2(\mathbf{q}),\, \dots,\, f_m(\mathbf{p})-f_m(\mathbf{q})\bigr)$$ $$=\Bigl(\int_{\gamma} \nabla f_1(\mathbf{r})\cdot d\mathbf{r},\ \int_{\gamma} \nabla f_2(\mathbf{r})\cdot d\mathbf{r},\ \dots,\ \int_{\gamma} \nabla f_m(\mathbf{r})\cdot d\mathbf{r}\Bigr)=\mathbf{0},$$ since, according to the Gradient Theorem, for any curve $\gamma$ in the domain with endpoints $\mathbf{q},\mathbf{p}$ we have $$ f_i\left(\mathbf{p}\right)-f_i\left(\mathbf{q}\right) = \int_{\gamma} \nabla f_i(\mathbf{r})\cdot d\mathbf{r}, $$ and $\nabla f_i = \frac{\partial f_i}{\partial x_1 }\mathbf{e}_1 + \cdots + \frac{\partial f_i}{\partial x_n }\mathbf{e}_n=\mathbf{0}$ for all $i$, by assumption.
Notation for modules. Given a module $A$ for a group $G$, and a subgroup $P\leq G$ with unipotent radical $U$, I have encountered the notation $[A,U]$ in a paper. Is this a standard module-theoretic notation, and if so, what does it mean? In the specific case I am looking at, it works out that $[A,U]$ is equal to the submodule of the restriction of $A$ to $P$ generated by the fixed-point space of $A$ with respect to $U$, but whether this is the case in general I do not know. If anyone could enlighten me on this notation, it would be greatly appreciated.
In group theory it is standard to view $G$-modules $A$ as embedded in the semi-direct product $G \ltimes A$. Inside the semidirect product, the commutator subgroup $[A,U]$ makes sense for any subgroup $U \leq G$ and since $A$ is normal in $G \ltimes A$, we get $[A,U] \leq A$ and by the end we need make no reference to the semi-direct product. If we let $A$ be a right $G$-module written multiplicatively so that the $G$ action is written as exponentiation, then $$[A,U] = \langle [a,u] : a \in A, u \in U \rangle = \langle a^{-1} a^u : a \in A, u \in U \rangle$$ If you have a left $A$-module written additively and $G$-action written as multiplication, then we get $$[A,U] = \langle a - u\cdot a : a \in A, u \in U \rangle = \sum_{u \in U} \operatorname{im}(1-u)$$ which is just the sum of the images of $A$ under the nilpotent operators $1-u$, which is probably a fairly interesting thing to consider. In some sense this is the dual of the centralizer: $A/[A,U]$ is the largest quotient of $A$ centralized by $U$.
Where's my mistake? This is partially an electrical engineering problem, but the actual issue apparently lies in my maths. I have the equation $\frac{V_T}{I_0} \cdot e^{-V_D\div V_T} = 100$ $V_T$ and $I_0$ are given and I am solving for $V_D$. These are my steps: Divide both sides by $\frac{V_T}{I_0}$: $$e^{-V_D\div V_T} = 100 \div \frac{V_T}{I_0}$$ Simplify: $$e^{-V_D\div V_T} = \frac{100\cdot I_0}{V_T}$$ Find the natural log of both sides: $$-V_D\div V_T = \ln(\frac{100\cdot I_0}{V_T})$$ Multiply by $V_T$: $$-V_D = V_T \cdot \ln(\frac{100\cdot I_0}{V_T})$$ Multiply by $-1$: $$V_D = -V_T \cdot \ln(\frac{100\cdot I_0}{V_T})$$ Now if I substitute in the numbers; $V_T \approx 0.026$, $I_0 \approx 8 \cdot 10^{-14}$ $$V_D = -0.026 \cdot \ln(\frac{8 \cdot 10^{-12}}{0.026})$$ Simplify a little more: $$ V_D = \frac{26}{1000} \cdot -\ln(\frac{4}{13} \cdot 10^{-9})$$ and you finally end up with $V_D \approx 0.569449942$. There is an extra step to the problem as well: the question calls not for $V_D$ but for $V_I$, which is a source voltage, and that can basically be determined by solving this: $$V_I \cdot \frac{1}{40} = V_D$$ I.e. $V_I = 40V_D$; which makes $V_I \approx 22.77799768$. However, this is off by quite a bit (the answer is apparently $1.5742888791$). Official Solution: We find $V_D$ to be $\approx$ 0.57. (Woo! I did get that portion right.) Since we know that $\frac{V_D}{i_D}$ is 100, we find $i_D$ to be 0.00026. Background: $\frac{V_D}{i_D}$ is the resistance that I was originally solving for. $V_D$ was the voltage across the element, and $i_D$ was the current through the element. However, unless I'm making some stupid mistake, if $i_D = \frac{V_D}{100}$ then how did they get 0.00026? Completing the rest of the solution's method (quite convoluted in comparison to mine; I checked mine through another method, and $V_I$ does in fact equal $40V_D$) with the correct value, 0.0057, I arrived at exactly the same final value as before. Would it be fair to say that it is likely that my logic is correct?
You are confusing resistance with incremental resistance, I think. The incremental resistance only matters for small signal analysis. The problem is to set the operating point so that the incremental resistance will be the required value. This involves computing $V_D, I_D$. However, you cannot use the incremental resistance to compute $I_D$ in terms of $V_D$. Also, you are forgetting to account for the $3.9k\Omega$ series resistance. You correctly computed $V_D = 0.57 V$ required so the incremental resistance is $100 \Omega$. However, the current through the diode at $V_D$ is given by the diode equation, which you haven't included above. The diode equation is $I_D = I_0 (e^{V_D/V_T}-1)$. Plugging in your numbers gives $I_D = 260 \mu A$ (basically $\frac{V_T}{100}$, since $\frac{V_D}{V_T}$ is large). The question was to figure the input voltage $V_I$ that will set the diode operating point at $V_D =0.57 V, I_D=260 \mu A$. In the first instance, there is a series resistance of $R_S = 3.9 k\Omega$, so to figure the required $V_I$ you need $$ V_I = R_S I_D+V_D$$ Plugging in the numbers gives $V_I = 1.58V$ or so.
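For completeness, here is the arithmetic as a small sketch (component values as quoted in the thread; the $-1$ in the diode equation is negligible at this operating point):

```python
from math import log

V_T = 0.026          # thermal voltage (V)
I_0 = 8e-14          # saturation current (A)
R_S = 3.9e3          # series resistance (ohms)

I_D = V_T / 100                  # incremental resistance V_T / I_D = 100 ohms
V_D = V_T * log(I_D / I_0 + 1)   # invert I_D = I_0 * (exp(V_D / V_T) - 1)
V_I = R_S * I_D + V_D
print(I_D, V_D, V_I)             # 2.6e-4 A, ~0.57 V, ~1.58 V
```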
Can there be a scalar function with a vector variable? Can I define a scalar function which has a vector-valued argument? For example, let $U$ be a potential function in 3-D, and its variable is $\vec{r}=x\hat{\mathrm{i}}+y\hat{\mathrm{j}}+z\hat{\mathrm{k}}$. Then $U$ will have the form of $U(\vec{r})$. Is there any problem?
That's a perfectly fine thing to do. The classic example of such a field is the temperature in a room: the temperature $T$ at each point $(x,y,z)$ is a function of a $3$-vector, but the output of the function is just a scalar ($T$). $\phi^4$ scalar fields are also an example, as are utility functions in economics.
Proof that $\sum\limits_{k=1}^\infty\frac{a_1a_2\cdots a_{k-1}}{(x+a_1)\cdots(x+a_k)}=\frac{1}{x}$ regarding $\zeta(3)$ and Apéry's proof I recently printed a paper that asks to prove the "amazing" claim that for all $a_1,a_2,\dots$ $$\sum_{k=1}^\infty\frac{a_1a_2\cdots a_{k-1}}{(x+a_1)\cdots(x+a_k)}=\frac{1}{x}$$ and thus (probably) that $$\zeta(3)=\frac{5}{2}\sum_{n=1}^\infty {2n\choose n}^{-1}\frac{(-1)^{n-1}}{n^3}$$ Since the paper gives no information on $a_n$, should it be possible to prove that the relation holds for any "context-reasonable" $a_1$? For example, letting $a_n=1$ gives $$\sum_{k=1}^\infty\frac{1}{(x+1)^k}=\frac{1}{x}$$ which is true. The article is "A Proof that Euler Missed..." An Informal Report - Alfred van der Poorten.
Formally, the first identity is repeated application of the rewriting rule $$\dfrac 1 x = \dfrac 1 {x+a} + \dfrac {a}{x(x+a)} $$ to its own rightmost term, first with $a = a_1$, then $a=a_2$, then $a=a_3, \ldots$ The only convergence condition on the $a_i$'s is that the $n$th term in the infinite sum go to zero. [i.e. that $a_1 a_2 \dots a_n / (x+a_1)(x+a_2) \dots (x+a_n)$ converges to zero for large $n$].
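One can watch the telescoping numerically: after $n$ steps the rewriting rule gives $\frac1x=\sum_{k=1}^n\frac{a_1\cdots a_{k-1}}{(x+a_1)\cdots(x+a_k)}+\frac{a_1\cdots a_n}{x\,(x+a_1)\cdots(x+a_n)}$, so each partial sum differs from $1/x$ by exactly the leftover product. A sketch in Python with exact rationals, taking $x=3$ and $a_k=k$ (both arbitrary choices):

```python
from fractions import Fraction

x = Fraction(3)
s, leftover = Fraction(0), Fraction(1)
for k in range(1, 30):
    a_k = Fraction(k)
    s += leftover / (x + a_k)         # adds a_1...a_{k-1}/((x+a_1)...(x+a_k))
    leftover *= a_k / (x + a_k)
    assert s + leftover / x == 1 / x  # the rewriting-rule invariant
print(float(s), float(1 / x))         # partial sums approach 1/x as leftover -> 0
```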
Finding all reals such that two field extensions are equal. So we want to find a $u$ such that $\mathbb{Q}(u)=\mathbb{Q}(\sqrt{2},\sqrt[3]{5})$. I obtained that this works if $u$ is of the following form: $$u=\sqrt[6]{2^a5^b}$$ where $a\equiv 1\pmod{2}$, $a\equiv 0\pmod{3}$, $b\equiv 0\pmod{2}$ and $b\equiv 1\pmod{3}$. This works since $$u^3=\sqrt{2^a5^b}=2^{\frac{a-1}{2}}5^{\frac{b}{2}}\sqrt{2}$$ and also $$u^2=\sqrt[3]{2^a5^b}=2^{\frac{a}{3}}5^{\frac{b-1}{3}}\sqrt[3]{5}.$$ Thus we have that $\mathbb{Q}(\sqrt{2},\sqrt[3]{5})\subseteq \mathbb{Q}(u)$. Note that $\sqrt{2}$ has degree $2$ (i.e., $[\mathbb{Q}(\sqrt{2}):\mathbb{Q}]=2$) and also that $\sqrt[3]{5}$ has degree $3$. As $\gcd(2,3)=1$, we have that $[\mathbb{Q}(\sqrt{2},\sqrt[3]{5}):\mathbb{Q}]=6$. Note that this is also the degree of the extension generated by $u$, since one can check that the set $\{1,u,...,u^5\}$ is $\mathbb{Q}$-independent. Ergo, we must have equality; that is, $\mathbb{Q}(u)=\mathbb{Q}(\sqrt{2},\sqrt[3]{5})$. My question is: How can I find all $w$ such that $\mathbb{Q}(w)=\mathbb{Q}(\sqrt{2},\sqrt[3]{5})$? This is homework, so I would rather have hints than a spoiler answer. I believe that they are all of the form described above, but a priori I do not know how to prove this. My idea was the following: since $\mathbb{Q}(\sqrt{2},\sqrt[3]{5})$ has degree $6$, if $w$ is such that the desired equality is satisfied, then $w$ is a root of an irreducible polynomial of degree $6$; moreover, we ought to be able to find rational numbers so that $$\sqrt{2}=\sum_{i=0}^5q_iw^i$$ and $$\sqrt[3]{5}=\sum_{i=0}^5p_iw^i.$$ But from here I do not know how to show that the $u$'s described above are the only ones with this property (it might be false; a priori I don't really know).
If we take $u = \sqrt{2} + \sqrt[3]{5}$, such a $u$ almost always turns out to work. In fact, let's check that a rational linear combination of $\sqrt{2}$ and $\sqrt[3]{5}$ will work. Write $u = a\sqrt{2} + b\sqrt[3]{5}$ for rationals $a$ and $b$; we may assume $a$ and $b$ are both nonzero, for otherwise $u$ generates only $\Bbb{Q}(\sqrt{2})$ or $\Bbb{Q}(\sqrt[3]{5})$. Clearly we have that $\Bbb{Q}(u)\subseteq \Bbb{Q}(\sqrt{2},\sqrt[3]{5})$. To show the other inclusion, we just need to show that, say, $\sqrt{2} \in \Bbb{Q}(u)$, for then $\sqrt[3]{5} = \frac{a\sqrt{2} + b\sqrt[3]{5} - a\sqrt{2}}{b}$ will be in $\Bbb{Q}(u)$. Here is a quick and easy way of doing this: from $u = a\sqrt{2} + b\sqrt[3]{5}$ we get $\left(\frac{u - a\sqrt{2}}{b}\right)^3 = 5$, that is, $(u - a\sqrt{2})^3 = 5b^3$. Expanding the left-hand side by the binomial theorem, we get that $$ u^3 - 3\sqrt{2}\,u^2a + 6ua^2 - 2a^3\sqrt{2} = 5b^3.$$ Rearranging, we get that $$\sqrt{2} = \frac{u^3 + 6ua^2 - 5b^3}{ 3u^2a + 2a^3 },$$ where the denominator $a(3u^2+2a^2)$ is nonzero because $u$ is real and $a \neq 0$. Since $\Bbb{Q}(u)$ is a field, the right-hand side is in $\Bbb{Q}(u)$, so that $\sqrt{2}$ is in here. Done!
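As a non-exam sanity check, sympy confirms that $\sqrt{2}+\sqrt[3]{5}$ has degree $6$ over $\Bbb{Q}$; a sketch:

```python
from sympy import sqrt, cbrt, minimal_polynomial, degree, symbols

x = symbols('x')
u = sqrt(2) + cbrt(5)
p = minimal_polynomial(u, x)
print(p)             # x**6 - 6*x**4 - 10*x**3 + 12*x**2 - 60*x + 17
print(degree(p, x))  # 6, so Q(u) = Q(sqrt(2), cbrt(5))
```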
A more general case of the Laurent series expansion? I was recently reading about Laurent series for complex functions. I'm curious about a seemingly similar situation that came up in my reading. Suppose $\Omega$ is a doubly connected region such that $\Omega^c$ (its complement) has two components $E_0$ and $E_1$. So if $f(z)$ is a complex, holomorphic function on $\Omega$, how can it be decomposed as $f=f_0(z)+f_1(z)$ where $f_0(z)$ is holomorphic outside $E_0$, and $f_1(z)$ is holomorphic outside $E_1$? Many thanks.
I'll suppose both $E_0$ and $E_1$ are bounded. Let $\Gamma_0$ and $\Gamma_1$ be disjoint positively-oriented simple closed contours in $\Omega$ enclosing $E_0$ and $E_1$ respectively, and $\Gamma_2$ a large positively-oriented circle enclosing both $\Gamma_0$ and $\Gamma_1$. Let $\Omega_1$ be the region inside $\Gamma_2$ but outside $\Gamma_0$ and $\Gamma_1$. Then for $z \in \Omega_1$ we have by Cauchy's integral formula, $$ f(z) = \frac{1}{2\pi i} \left( \int_{\Gamma_2} \frac{f(\zeta)\ d\zeta}{\zeta - z} - \int_{\Gamma_0} \frac{f(\zeta)\ d\zeta}{\zeta - z} - \int_{\Gamma_1} \frac{f(\zeta)\ d\zeta}{\zeta - z} \right)$$ If you're not familiar with this version of Cauchy's formula, you can draw thin "corridors" connecting $-\Gamma_0$, $-\Gamma_1$ and $\Gamma_2$ into a single closed contour enclosing $z$. If $$f_k(z) = \frac{1}{2\pi i} \int_{\Gamma_k} \frac{f(\zeta)\ d\zeta}{\zeta - z}$$ this says $f(z) = f_2(z) - f_0(z) - f_1(z)$, where $f_2(z)$ is analytic everywhere inside $\Gamma_2$, $f_0(z)$ is analytic everywhere outside $\Gamma_0$, and $f_1(z)$ is analytic everywhere outside $\Gamma_1$. Moreover, the values of $f_k(z)$ don't depend on the choice of contours, as long as $z$ is inside $\Gamma_2$ and outside $\Gamma_0$ and $\Gamma_1$. By making $\Gamma_2$ sufficiently large and $\Gamma_0$ and $\Gamma_1$ sufficiently close to $E_0$ and $E_1$, any point in $\Omega$ can be included. So we actually have $f(z) = f_2(z) - f_0(z) - f_1(z)$ everywhere in $\Omega$, with $f_2(z)$ entire, $f_0(z)$ analytic outside $E_0$ and $f_1(z)$ analytic outside $E_1$.
graph theory connectivity This cut induced confuses me... I don't really understand what it is saying. I am not understanding what connectivity is in graph theory. I thought connectivity is when you have a tree, because all the vertices are connected, but the above mentions something strange like components. Could someone please explain what components are and what connectivity really is?
Connectivity and components Intuitively, a graph is connected if you can't break it into pieces which have no edges in common. More formally, we define connectivity to mean that there is a path joining any two vertices - where a path is a sequence of vertices joined by edges. The example of $Q_3$ in your question is obviously not connected - none of the vertices in the bit on the left are connected to vertices in the bit on the right. Alternatively, there is no path from the vertex marked 000 to the vertex marked 001. (As an aside - all trees are connected - a tree is defined as a connected graph with no cycles. But there are many other connected graphs.) So if a graph is not connected, then we know it can be broken up into pieces which have no edges in common. These pieces are known as components. The components are themselves connected - they are called the maximal connected subgraphs because it is impossible to add another vertex to them and still have a connected graph. All connected graphs have only one component - the graph itself. Cut induced You can think of the cut induced as being the set of edges which connect some collection of vertices to the rest of the graph. In the diagram you give, the set called $A$ is the collection of vertices within the dotted line. The cut induced by $A$ is then the collection of edges which cross the dotted line - the edges which connect the vertices inside the dotted line to those outside it. Edges joining vertices inside the shaded area are not part of the cut induced, and neither are edges joining vertices on the outside of the dotted line. More formally, the complement of $A$ is exactly those vertices which are not in $A$. So the cut induced by $A$ is the collection of edges joining vertices in $A$ to vertices in the complement of $A$.
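If you want to experiment, components are easy to compute with a breadth-first search; a minimal sketch in Python (the adjacency dictionary is a made-up example: two 4-cycles with no edges between them, hence two components):

```python
from collections import deque

def components(adj):
    """Connected components of an undirected graph given as
    an adjacency dict {vertex: iterable of neighbours}."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        comps.append(comp)
    return comps

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2],
       4: [5, 6], 5: [4, 7], 6: [4, 7], 7: [5, 6]}
print(components(adj))  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```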
Probability theory. 100 items and 3 controllers 100 items are checked by 3 controllers. What is the probability that each of them will check more than 25 items? Here is the full quotation of the problem from the workbook: "A set of 100 articles is randomly allocated for testing between the three controllers. Find the probability that each controller gets at least 25 articles to test."
Let $N_i$ denote the number of items checked by controller $i$. One asks for $1-p$ where $p$ is the probability that some controller got less than $k=25$ items. Since $(N_1,N_2,N_3)$ is exchangeable and since at most two controllers can get less than $k$ items, $p=3u-3v$ where $u=\mathrm P(N_1\lt k)$ and $v=\mathrm P(N_1\lt k,N_2\lt k)$. Furthermore, $v\leqslant uw$ with $w=\mathrm P(M\lt k)$ where $M$ is binomial $(m,\frac12)$ with $m=75$ hence $w\ll1$. And $N_1$ is binomial $(n,\frac13)$. Numerically, $u\approx2.805\%$ and $w\approx0.1\%$ hence $1-p\approx1-3u\approx91.6\%$.
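If one wants the exact value, the complementary probability $\mathrm P(N_1\geq k,\,N_2\geq k,\,N_3\geq k)$ can be brute-forced from the multinomial distribution; a minimal sketch in Python with exact rational arithmetic (a few hundred terms):

```python
from math import comb
from fractions import Fraction

n, k = 100, 25
prob = Fraction(0)
for n1 in range(k, n - 2*k + 1):          # need n2, n3 >= k, so n1 <= n - 2k
    for n2 in range(k, n - n1 - k + 1):
        prob += Fraction(comb(n, n1) * comb(n - n1, n2), 3**n)
print(float(prob))  # ~0.916, matching 1 - p above
```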
Reduction of a $2$ dimensional quadratic form I'm given a matrix $$A = \begin{pmatrix}-2&2\\2&1\end{pmatrix}$$ and we're asked to sketch the curve $\underline{x}^T A \underline{x} = 2$ where I assume $x = \begin{pmatrix}x\\y \end{pmatrix}$. Multiplying this out gives $-2 x^2+4 x y+y^2 = 2$. Also, I diagonalised this matrix by creating a matrix, $P$, of normalised eigenvectors and computing $P^T\! AP = B$. This gives $$B = \begin{pmatrix}-3&0\\0&2\end{pmatrix}$$ and so now multiplying out $\underline{x}^T B \underline{x} = 2$ gives $-3x^2 + 2y^2 = 2$. Plugging these equations into Wolfram Alpha gives different graphs, can someone please explain what I'm doing wrong? Thanks!
A quadratic form is "equivalent" to its diagonalized form only in the sense that the two forms are related by an (orthogonal) change of coordinates: writing $\underline{x} = P\underline{x}'$, the equation $\underline{x}^T A \underline{x} = 2$ becomes $\underline{x}'^T B \underline{x}' = 2$. The polynomials $-2x^2+4xy+y^2$ and $-3x^2+2y^2$ are of course different, so plotting both equations in the same $(x,y)$-coordinates gives different pictures; the two curves are congruent, differing only by the rotation $P$. Otherwise we wouldn't need the equivalence classes!
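A numerical check makes the relationship concrete: $P^TAP$ is diagonal, and the rotation $P$ carries points of the diagonalized curve onto the original one; a sketch with numpy:

```python
import numpy as np

A = np.array([[-2., 2.], [2., 1.]])
vals, P = np.linalg.eigh(A)        # eigenvalues ascending: [-3., 2.]
print(np.round(P.T @ A @ P, 10))   # diag(-3, 2)

uv = np.array([0.0, 1.0])          # lies on -3u^2 + 2v^2 = 2
xy = P @ uv                        # the same point in the original coordinates
print(xy @ A @ xy)                 # 2.0: it lies on x^T A x = 2
```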
Is $2^{2n} = O(2^n)$? Is $2^{2n} = O(2^n)$? My solution is: $2^n 2^n \leq C_{1}2^n$ $2^n \leq C_{1}$, TRUE. Is this correct?
If $2^{2n}=O(2^n)$, then there is a constant $C$ and an integer $M$ such that for all $n\ge M$, the inequality $2^{2n}\le C 2^n$ holds. This would imply that $2^n\cdot 2^n\le C 2^n$ for all $n\ge M$, which in turn implies $$\tag{1} 2^n\le C \quad {\bf for\ all } \quad n\ge M. $$ Can such $C$ and $M$ exist? Note the right hand side of $(1)$ is fixed, and the left hand side...
The graph of Fourier Transform I am trying to grasp the Fourier transform. I have read a few websites about it, and I think I don't understand it very well. I know how to transform simple functions, but there are a few things that are puzzling to me. The Fourier transform takes a function from the time domain to the frequency domain, so now I have $\widehat{f}(\nu)$; this is a complex-valued function, so as I understand it, for every frequency I get a complex number. * What does this number represent? What is the interpretation of the real and imaginary parts of $\widehat{f}(\nu)$? * How can I graph $\widehat{f}(\nu)$? As I understand it, if the function is not even, $\widehat{f}(\nu)$ will take complex values with nonzero imaginary part. Do I need to plot it in 3D, or do I just plot $|\widehat{f}(\nu)|$? I am asking about plotting because, for example, on Wikipedia there is a plot of the sinc function, which is the Fourier transform of the square function. It is nice because the transform is real in their case. And I am wondering about other functions. I would also be very grateful for any useful links that can shed some light on the idea of the Fourier transform and the theory behind it, preferably done step by step.
You can refer to the following link, where you can find an intuitive explanation of the Fourier transform. The frequency-domain values after the Fourier transform represent the contribution of each frequency to the signal: the magnitude $|\widehat{f}(\nu)|$ gives the amplitude of that frequency component, and the argument gives its phase. That is why one usually plots $|\widehat{f}(\nu)|$ (and, if needed, the phase separately) rather than drawing a 3D graph.
Various types of TQFTs I am interested in topological quantum field theory (TQFT). It seems that there are many types of TQFTs. The first book I picked up is "Quantum invariants of knots and 3-manifolds" by Turaev, but it doesn't say which type of TQFT is dealt with in the book. I found at least two TQFTs which contain Turaev's name, namely the Turaev-Viro and Turaev-Reshetikhin TQFTs. I have searched for the definitions of various TQFTs for a few days but I couldn't find good resources. I would like to know * which type of TQFT is dealt with in Turaev's book; * good resources for definitions of various TQFTs (or, if it is not difficult to answer here, please give me the definitions); * whether they are essentially different objects or some are generalizations of the others.
Review of a recent Turaev book at http://www.ams.org/journals/bull/2012-49-02/S0273-0979-2011-01351-9/ also discussing earlier volumes to some extent. Here we go, the book you ask about: http://www.ams.org/journals/bull/1996-33-01/S0273-0979-96-00621-0/home.html
Positive series problem: $\sum\limits_{n\geq1}a_n=+\infty$ implies $\sum_{n\geq1}\frac{a_n}{1+a_n}=+\infty$ Let $\sum\limits_{n\geq1}a_n$ be a positive series, and $\sum\limits_{n\geq1}a_n=+\infty$, prove that: $$\sum_{n\geq1}\frac{a_n}{1+a_n}=+\infty.$$
We can divide into cases:

* If $a_n \to 0$: then $a_n < 1$ for all $n$ larger than some $n_0$, and for such $n$ we have $\frac{a_n}{1+a_n} > \frac{a_n}{2}$; the series diverges by comparison with $\sum \frac{a_n}{2}$.
* If $a_n$ has a nonzero limit $L$, then $\frac{a_n}{1+a_n} \to \frac{L}{1+L} \neq 0$, so the terms do not tend to zero and the series diverges.
* If $a_n$ is unbounded, it has a subsequence tending to $+\infty$, along which $\frac{a_n}{1+a_n} \to 1$; again the terms do not tend to zero, so the series diverges.
* If $a_n$ is bounded (with no limit), take a convergent subsequence. If some subsequence does not converge to zero, then neither does the corresponding subsequence of $\frac{a_n}{1+a_n}$, and the series diverges. If every convergent subsequence tends to zero, then $a_n \to 0$ and we are back in the first case.
Surface Element in Spherical Coordinates In spherical polars, $$x=r\cos(\phi)\sin(\theta)$$ $$y=r\sin(\phi)\sin(\theta)$$ $$z=r\cos(\theta)$$ I want to work out an integral over the surface of a sphere - ie $r$ constant. I'm able to derive through scale factors, ie $\delta(s)^2=h_1^2\delta(\theta)^2+h_2^2\delta(\phi)^2$ (note $\delta(r)=0$), that: $$h_1=r\sin(\theta),h_2=r$$ $$dA=h_1h_2=r^2\sin(\theta)$$ I'm just wondering is there an "easier" way to do this (eg. Jacobian determinant when I'm varying all 3 variables). I know you can supposedly visualize a change of area on the surface of the sphere, but I'm not particularly good at doing that sadly.
I've come across the picture you're looking for in physics textbooks before (say, in classical mechanics). A bit of googling and I found this one for you! Alternatively, we can use the first fundamental form to determine the surface area element. Recall that this is the metric tensor, whose components are obtained by taking the inner product of two tangent vectors on your space, i.e. $g_{i j}= X_i \cdot X_j$ for tangent vectors $X_i, X_j$. We make the following identification for the components of the metric tensor, $$ (g_{i j}) = \left(\begin{array}{cc} E & F \\ F & G \end{array} \right), $$ so that $E = \langle X_u, X_u\rangle, F=\langle X_u,X_v\rangle,$ and $G=\langle X_v,X_v\rangle.$ We can then make use of Lagrange's Identity, which tells us that the squared area of a parallelogram in space is equal to the sum of the squares of its projections onto the Cartesian plane: $$|X_u \times X_v|^2 = |X_u|^2 |X_v|^2 - (X_u \cdot X_v)^2.$$ Here's a picture in the case of the sphere: This means that our area element is given by $$ dA = | X_u \times X_v |\, du\, dv = \sqrt{|X_u|^2 |X_v|^2 - (X_u \cdot X_v)^2}\, du\, dv = \sqrt{EG - F^2}\, du\, dv. $$ So let's finish your sphere example. We'll find our tangent vectors via the usual parametrization which you gave, namely, $X(\phi,\theta) = (r \cos(\phi)\sin(\theta),r \sin(\phi)\sin(\theta),r \cos(\theta)),$ so that our tangent vectors are simply $$ X_{\phi} = (-r\sin(\phi)\sin(\theta),r\cos(\phi)\sin(\theta),0), \\ X_{\theta} = (r\cos(\phi)\cos(\theta),r\sin(\phi)\cos(\theta),-r\sin(\theta)). $$ Computing the elements of the first fundamental form, we find that $$ E = r^2 \sin^2(\theta), \hspace{3mm} F=0, \hspace{3mm} G= r^2. $$ Thus, we have $$ dA = \sqrt{r^4 \sin^2(\theta)}\,d\theta\, d\phi = r^2\sin(\theta)\, d\theta\, d\phi. $$
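If it helps, the computation of $E$, $F$, $G$ and $dA$ can be reproduced symbolically; a minimal sketch with sympy (sympy returns $r^2\lvert\sin\theta\rvert$, which equals $r^2\sin\theta$ on $0<\theta<\pi$):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
theta, phi = sp.symbols('theta phi')
X = sp.Matrix([r*sp.cos(phi)*sp.sin(theta),
               r*sp.sin(phi)*sp.sin(theta),
               r*sp.cos(theta)])
X_phi, X_theta = X.diff(phi), X.diff(theta)

E = sp.simplify(X_phi.dot(X_phi))      # r**2*sin(theta)**2
F = sp.simplify(X_phi.dot(X_theta))    # 0
G = sp.simplify(X_theta.dot(X_theta))  # r**2
print(E, F, G)
print(sp.simplify(sp.sqrt(E*G - F**2)))  # r**2*Abs(sin(theta))
```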
Deriving the exponential distribution from a shift property of its expectation (equivalent to memorylessness). Suppose $X$ is a continuous, nonnegative random variable with distribution function $F$ and probability density function $f$. If for $a>0,\ E(X|X>a)=a+E(X)$, find the distribution $F$ of $X$.
About the necessary hypotheses (and in relation to a discussion somewhat buried in the comments to @bgins's answer), here is a solution which does not assume that the distribution of $X$ has a density, but only that $X$ is integrable and unbounded (otherwise, the identity in the post makes no sense). A useful tool here is the complementary CDF (survival function) $G$ of $X$, defined by $G(a)=\mathrm P(X\gt a)$. Let $m=\mathrm E(X)$, and note that $m\gt0$ by hypothesis. For every $a\geqslant0$, the identity in the post is equivalent to $\mathrm E(X-a\mid X\gt a)=m$, which is itself equivalent to $\mathrm E((X-a)^+)=m\mathrm P(X\gt a)=mG(a)$. Now, for every $x$ and $a$, $$ (x-a)^+=\int_a^{+\infty}[x\gt z]\,\mathrm dz. $$ Integrating this with respect to the distribution of $X$ yields $$ \mathrm E((X-a)^+)=\int_a^{+\infty}\mathrm P(X\gt z)\,\mathrm dz, $$ hence, for every $a\gt0$, $$ mG(a)=\int_a^{+\infty}G(z)\,\mathrm dz. $$ This proves ${}^{(\ast)}$ that $G$ is infinitely differentiable on $(0,+\infty)$ and that $mG'(a)=-G(a)$ for every $a\gt0$. Since the derivative of the function $a\mapsto G(a)\mathrm e^{a/m}$ is zero on $a\gt0$ and $G$ is continuous from the right on $(0,+\infty)$, one gets $G(a)=G(0)\mathrm e^{-a/m}$ for every $a\geqslant0$. Two cases arise: either $G(0)=1$, in which case the distribution of $X$ is exponential with parameter $1/m$; or $G(0)\lt1$, in which case the distribution of $X$ is a barycenter of a Dirac mass at $0$ and an exponential distribution. If the distribution of $X$ is continuous, the former case occurs. ${}^{(\ast)}$ By the usual seesaw technique: the RHS converges hence the RHS is a continuous function of $a$, hence the LHS is also a continuous function of $a$, hence the RHS integrates a continuous function of $a$, hence the RHS is a $C^1$ function of $a$, hence the LHS is also a $C^1$ function of $a$... and so on.
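As an illustration (not a proof), a quick Monte Carlo check that the exponential distribution satisfies the shift property; a sketch with numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
m, a = 2.0, 1.5
X = rng.exponential(scale=m, size=10**7)  # exponential with mean m (rate 1/m)
print(X[X > a].mean(), a + m)             # both ~3.5: E(X | X > a) = a + E(X)
```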
Similarity between $I+N$ and $e^N$ when $N$ is nilpotent Let $$ N=\begin{pmatrix}0&1&&\\&\ddots&\ddots&\\&&0&1\\&&&0 \end{pmatrix}_{n\times n} $$ and $I$ is the identity matrix of order $n$. How to prove $I+N\sim e^N$? Clarification: this is the definition of similarity, which is not the same as equivalence. Update: I noticed a stronger relation, that $A\sim N$, if $$ A=\begin{pmatrix}0&1&*&*\\&\ddots&\ddots&*\\&&0&1\\&&&0 \end{pmatrix}_{n\times n} $$ and $*$'s are arbitrary numbers.
By subtracting $I$, this is equivalent to asking about the similarity class of a nilpotent square matrix of size $n$. The similarity type of a nilpotent matrix $N$ is determined by the dimensions of the kernels of its powers. In the upper triangular case, the list of dimensions of $\ker N^i$ is $1,2,3,4,...,n$ for both of the matrices you consider (that is, for $N$ itself and for $e^N-I = N + N^2/2! + \cdots$). Hence they are similar.
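For small $n$, a computer algebra system can confirm the similarity by comparing Jordan forms; a minimal sketch with sympy (here $n=5$):

```python
import sympy as sp

n = 5
N = sp.Matrix(n, n, lambda i, j: 1 if j == i + 1 else 0)
A = sp.eye(n) + N
B = N.exp()          # e^N; a finite sum here since N**n = 0

_, JA = A.jordan_form()
_, JB = B.jordan_form()
print(JA == JB)      # True: both a single Jordan block with eigenvalue 1
```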
Integrating a spherically symmetric function over an $n$-dimensional sphere I've become confused reading over some notes that I've received. The integral in question is $\int_{|x| < \sqrt{R}} |x|^2\, dx$ where $x \in \mathbb{R}^n$ and $R > 0$ is some positive constant. The notes state that because of the spherical symmetry of the integrand this integral is the same as $\omega_n \int_0^{\sqrt{R}} r^2 r^{n-1}\, dr$. Now neither $\omega_n$ nor $r$ are defined. Presumably $r = |x|$, but I am at a loss as to what $\omega_n$ is (is it related maybe to the volume or surface area of the sphere?). I am supposing that the factor $r^{n-1}$ comes from something like $n-1$ successive changes to polar coordinates, but I am unable to fill in the details and would greatly appreciate any help someone could offer in deciphering this explanation.
The $\omega_n$ is meant to be the surface area of the unit sphere $S^{n-1}$ in $\mathbb{R}^n$ (equivalently, $n$ times the volume of the unit ball). See http://en.wikipedia.org/wiki/N-sphere for notation. Also, you are correct that $r = |x|$, and the $r^{n-1}$ comes from the Jacobian of the transformation from rectangular to spherical coordinates. (The $\omega_n$ also comes from this transformation, by integrating the angular variables over the whole sphere.)