Which of the following groups is not cyclic? (a) $G_1 = \{2, 4,6,8 \}$ w.r.t. $\odot$ (b) $G_2 = \{0,1, 2,3 \}$ w.r.t. $\oplus$ (binary XOR) (c) $G_3 =$ Group of symmetries of a rectangle w.r.t. $\circ$ (composition) (d) $G_4 =$ $4$th roots of unity w.r.t. $\cdot$ (multiplication) Can anyone explain this question to me?
Hint: For a group to be cyclic, there must be an element $a$ so that all the elements can be expressed as $a^n$, each for a different $n$. The terminology comes from the fact that this is the structure of $\Bbb Z/n\Bbb Z$, where $a=1$ works (and often others). I can't see what the operator is in your first example; it is some sort of unicode. For (b), try each element $\oplus$ itself. What do you get? For (c), there are two different types of symmetry: those that turn the rectangle upside down and those that do not.
Need help in proving that $\frac{\sin\theta - \cos\theta + 1}{\sin\theta + \cos\theta - 1} = \frac 1{\sec\theta - \tan\theta}$ We need to prove that $$\dfrac{\sin\theta - \cos\theta + 1}{\sin\theta + \cos\theta - 1} = \frac 1{\sec\theta - \tan\theta}$$ I have tried and it gets confusing.
$$\frac{\sin\theta-\cos\theta+1}{\sin\theta+\cos\theta-1}$$ $$=\frac{\tan\theta-1+\sec\theta}{\tan\theta+1-\sec\theta}\quad(\text{dividing the numerator and the denominator by }\cos\theta)$$ $$=\frac{\tan\theta-1+\sec\theta}{\tan\theta-\sec\theta+(\sec^2\theta-\tan^2\theta)}\quad(\text{putting } 1=\sec^2\theta-\tan^2\theta)$$ $$=\frac{\tan\theta+\sec\theta-1}{\tan\theta-\sec\theta-(\tan\theta-\sec\theta)(\tan\theta+\sec\theta)}$$ $$=\frac{\tan\theta+\sec\theta-1}{-(\tan\theta-\sec\theta)(\tan\theta+\sec\theta-1)}$$ $$=\frac1{\sec\theta-\tan\theta}$$ Alternatively, using the half-angle (Weierstrass) substitution $\tan\frac\theta2=t$, $$\text{LHS}=\frac{\sin\theta-\cos\theta+1}{\sin\theta+\cos\theta-1}=\frac{\frac{2t}{1+t^2}-\frac{1-t^2}{1+t^2}+1}{\frac{2t}{1+t^2}+\frac{1-t^2}{1+t^2}-1}$$ $$=\frac{2t-(1-t^2)+1+t^2}{2t+(1-t^2)-(1+t^2)} =\frac{2t+2t^2}{2t-2t^2}=\frac{1+t}{1-t}\quad(\text{assuming }t\ne0)$$ $$\text{RHS}=\frac1{\sec\theta-\tan\theta}=\frac1{\frac{1+t^2}{1-t^2}-\frac{2t}{1-t^2}}=\frac{1-t^2}{(1-t)^2}=\frac{1+t}{1-t}\quad(\text{assuming }1-t\ne0)$$
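As a quick sanity check, here is a minimal sketch (assuming sympy is available) that verifies the identity symbolically and at a sample point:

```python
import sympy as sp

theta = sp.symbols('theta')
lhs = (sp.sin(theta) - sp.cos(theta) + 1) / (sp.sin(theta) + sp.cos(theta) - 1)
rhs = 1 / (1/sp.cos(theta) - sp.tan(theta))
print(sp.simplify(lhs - rhs))                 # expect 0
print((lhs - rhs).subs(theta, 0.7).evalf())   # expect ~0
```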
Drawing points on the Argand plane The points $5 + 5i$, $1-3i$, $-4+2i$ and $-2+6i$ in the Argand plane are: (a) Collinear (b) Concyclic (c) The vertices of a parallelogram (d) The vertices of a square So when I drew the diagram, I got a rectangle in the 1st and 2nd quadrants. So, are they vertices of a parallelogram? I am not sure!
It's not collinear, nor a square, nor a parallelogram. Therefore, it must be concyclic.
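To see the concyclicity concretely: all four points lie on the circle of radius $5$ centred at $1+2i$ (the centre can be found by intersecting the perpendicular bisectors of two chords). A minimal Python check:

```python
# All four points are at distance 5 from 1+2i, so they lie on one circle.
points = [5 + 5j, 1 - 3j, -4 + 2j, -2 + 6j]
center = 1 + 2j
print([abs(z - center) for z in points])  # [5.0, 5.0, 5.0, 5.0]
```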
Question on Wolstenholme's theorem In one of T. Apostol's student textbooks on analytic number theory (i.e., Introduction to Analytic Number Theory, T. Apostol, Springer, 1976), Wolstenholme's theorem is stated (Apostol, Chapt. 5, page 116) as follows (more or less): For any prime $p \geq 5$, \begin{equation} ((p - 1)!)(1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{p - 1}) \equiv 0 \pmod {p^2}. \end{equation} Suppose one were to multiply through both sides of the congruence with the inverse of $(p - 1)!$ modulo $p^{2}$. One gets \begin{equation} 1\cdot(1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{p - 1}) \equiv 0 \pmod {p^2}. \end{equation} Does this congruence "make sense," since one has a finite sum of fractions on the left hand side, not a finite sum of integers (cf. Apostol, Exercise 11, page 127)? By the expression "inverse of $(p - 1)!$ modulo $p^{2}$," I mean multiplying through by $t$, so that $$((p - 1)!)t \equiv 1 \pmod{p^2}.$$ Just asking. After all I do not know everything.... :-)
In modular arithmetic, you should interpret Egyptian fractions (of the form $\frac 1a$) as the modular inverse of $a \bmod p^2$, in which case this makes perfect sense.
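For instance, here is a minimal Python check of the congruence for $p=7$, interpreting each fraction as a modular inverse (three-argument pow with exponent $-1$ needs Python 3.8+):

```python
p = 7
m = p * p
# Read 1/a as the inverse of a modulo p^2.
harmonic = sum(pow(a, -1, m) for a in range(1, p)) % m
print(harmonic)  # 0, as Wolstenholme's theorem predicts
```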
prime factors of numbers formed by primorials Let $p,q$ be primes with $p \leq q$. The product $2\cdot3\cdot\dots\cdot p$ is denoted by $p\#$, the product $2\cdot3\cdot\dots\cdot q$ by $q\#$ (primorials). Now $z(p,q)$ is defined by $z(p,q) = p\#+\frac{q\#}{p\#}$. For example $z(11,17) = 2\cdot3\cdot5\cdot7\cdot11 + 13\cdot17$. What can be said about the prime factors of $z(p,q)$ besides the simple fact that they must be greater than $q$?
The number $z(p,q)$ is coprime to any of these primes. It is more likely to be prime, especially for small values, but not necessarily so. For example, $z(7,11) = 13\cdot17$ is the smallest composite example, but one fairly easily finds composites (like $z(11,13)$, $z(5,19)$, $z(3,11)$, $z(13,19)$, $z(13,23)$, and $z(13,37)$ to $z(13,59)$ inclusive). For $z(17,p)$, it is composite for all $p$ between $23$ and $113$ inclusive; only $p=19$ and $p=127$ yielded primes. There is nothing exciting, like proper powers, for values of $q<120$. There does not seem to be any particular pattern to the primes. Some are small, and some are fairly large.
Does $\sum_{n=1}^{\infty} \frac{\sin(n)}{n}$ converge conditionally? I think that the series $$\sum_{n=1}^{\infty} \dfrac{\sin(n)}{n}$$ converges conditionally, but I'm not able to prove it. Any suggestions?
Using Fourier series calculations it follows that $$ \sum_{n=1}^{\infty}\frac{\sin(n x)}{n}=\frac{\pi-x}{2} $$ for every $x\in(0,2\pi)$. Your sum is therefore $\frac{\pi-1}{2}$. The convergence is only conditional: $\sum_{n=1}^\infty\frac{|\sin n|}{n}$ diverges, since $|\sin n|\ge\sin^2 n=\frac{1-\cos(2n)}{2}$, where $\sum\frac{\cos(2n)}{2n}$ converges by Dirichlet's test while $\sum\frac1{2n}$ diverges.
Looking for an easy lightning introduction to Hilbert spaces and Banach spaces I'm co-organizing a reading seminar on Higson and Roe's Analytic K-homology. Most participants are graduate students and faculty, but there are a number of undergraduates who might like to participate, and who have never taken a course in functional analysis. They are strong students though, and they do have decent analysis, linear algebra, point-set topology, algebraic topology... Question: Could anyone here suggest a very soft, easy, hand-wavy reference I could recommend to these undergraduates, which covers and motivates basic definitions and results of Hilbert spaces, Banach spaces, Banach algebras, the Gelfand transform, and functional calculus? It doesn't need to be rigorous at all; it just needs to introduce and motivate the main definitions and results so that they can "black box" the prerequisites and get something out of the reading seminar. They can go back and do things properly when they take a functional analysis course next year or so.
I don't know how useful this will be, but I have some lecture notes that motivate the last three things on your list by first reinterpreting the finite dimensional spectral theorem in terms of the functional calculus. (There is also a section on the spectral theorem for compact operators, but this is just pulled from Zimmer's Essential Results of Functional Analysis.) I gave these lectures at the end of an undergraduate course on functional analysis, though, so they assume familiarity with Banach and Hilbert spaces.
Why is $ \lim_{n \to \infty} \left(\frac{n-1}{n+1} \right)^{2n+4} = \frac{1}{e^4}$ According to WolframAlpha, the limit is $$ \lim_{n \to \infty} \left(\frac{n-1}{n+1} \right)^{2n+4} = \frac{1}{e^4}$$ and I wonder how this result is obtained. My approach would be to divide both numerator and denominator by $n$, yielding $$ \lim_{n \to \infty} \left(\frac{1-\frac{1}{n}}{1 + \frac{1}{n}} \right)^{2n+4} $$ As $ \frac{1}{n} \to 0 $ as $ n \to \infty$, what remains is $$ \lim_{n \to \infty} \left(\frac{1-0}{1 + 0} \right)^{2n+4} = 1 $$ What's wrong with my approach?
$$ \lim_{n \to \infty} \left(\frac{n-1}{n+1} \right)^{2n+4} $$ $$ =\lim_{n \to \infty} \left(\left(1+\frac{(-2)}{n+1} \right)^{\frac{n+1}{-2}}\right)^{\frac{-2(2n+4)}{n+1}}$$ $$ = \left(\lim_{n \to \infty}\left(1+\frac{(-2)}{n+1} \right)^{\frac{n+1}{-2}}\right)^{\lim_{n \to \infty}\left(\frac{-4-\frac8n}{1+\frac1n}\right)}$$ $$=e^{-4},\text{ since as } n \to \infty,\ \frac{n+1}{-2}\to-\infty\text{ and } \lim_{m\to-\infty}\left(1+\frac1m\right)^m=e$$
Approximating the hypergeometric distribution with the Poisson I am currently trying to show that the hypergeometric distribution converges to the Poisson distribution. $$ \lim_{n,r,s \to \infty, \frac{n \cdot r}{r+s} \to \lambda} \frac{\binom{r}{k} \binom{s}{n-k}}{\binom{r+s}{n}} = \frac{\lambda^k}{k!}e^{-\lambda} $$ I know how to show, for specific values, that the hypergeometric distribution converges to the binomial distribution, and from there we proved in our script that the binomial distribution converges to the Poisson distribution for specific values. Now the question is: can I show the approximation directly via the limit above? Starting from the limit above I arrived at $$ \lim_{n,r,s \to \infty, \frac{n \cdot r}{r+s} \to \lambda} \frac{\binom{r}{k} \binom{s}{n-k}}{\binom{r+s}{n}} = \cdots = \frac{\lambda^k}{k!}\frac{\frac{(r-1)!}{(r-k)!}\binom{s}{n-k}}{\binom{r+s-1}{n-1}}\left(\frac{1}{\lambda}\right)^{k-1} $$ But how can I show that $$ \frac{\frac{(r-1)!}{(r-k)!}\binom{s}{n-k}}{\binom{r+s-1}{n-1}}\left(\frac{1}{\lambda}\right)^{k-1} = e^{-\lambda}? $$
This is the simplest proof I've been able to find. Just by rearranging factorials, we can rewrite the hypergeometric probability function as $$ \mathrm{Prob}(X=x) = \frac{\binom{M}{x}\binom{N-M}{K-x}}{\binom{N}{K}} = \frac{1}{x!} \cdot \dfrac{M^{(x)} \, K^{(x)}}{N^{(x)}} \cdot \dfrac{(N-K)^{(M-x)}}{(N-x)^{(M-x)}}, $$ where $a^{(b)}$ is the falling power $a(a-1)\cdots(a-b+1)$. Since $x$ is fixed, \begin{align*} \dfrac{M^{(x)} \, K^{(x)}}{N^{(x)}} &= \prod_{j=0}^{x-1} \dfrac{(M-j) \cdot (K-j)}{(N-j)} \\ &= \prod_{j=0}^{x-1} \left( \dfrac{MK}{N} \right) \cdot \dfrac{(1-j/M) \cdot (1-j/K)}{(1-j/N)} \\ &= \left( \dfrac{MK}{N} \right) ^x \; \prod_{j=0}^{x-1} \dfrac{(1-j/M) \cdot (1-j/K)}{(1-j/N)}, \end{align*} which $\to \lambda^x$ as $N$, $K$ and $M$ $\to \infty$ with $\frac{MK}{N} = \lambda$. Let's replace $N-x$, $K-x$ and $M-x$ by new variables $n$, $k$ and $m$ for simplicity. Since $x$ is fixed, as $N,K,M \to \infty$ with $KM/N \to \lambda$, so too $n,k,m \to \infty$ with $nk/m \to \lambda$. Next we write $$ A = \dfrac{(N-K)^{(M-x)}}{(N-x)^{(M-x)}} = \dfrac{(n-k)^{(m)} }{(n)^{(m)}} = \prod_{j=0}^{m-1} \left( \dfrac{n-j-k}{n-j} \right)= \prod_{j=0}^{m-1} \left( 1 - \dfrac{k}{n-j} \right)$$ and take logs: $$ \ln \, A = \sum_{j=0}^{m-1} \ln \left( 1 - \dfrac{k}{n-j} \right). $$ Since the bracketed quantity is an increasing function of $j$ we have $$ \sum_{j=0}^{m-1} \ln \left( 1 - \dfrac{k}{n} \right) \le \ln \, A \le \sum_{j=0}^{m-1} \ln \left( 1 - \dfrac{k}{n-m+1} \right), $$ or $$ m \, \ln \left( 1 - \dfrac{k}{n} \right) \le \ln \, A \le m \, \ln \left( 1 - \dfrac{k}{n-m+1} \right). $$ But $\ln (1-x) < -x$ for $0 < x < 1$, so $$ m \, \ln \left( 1 - \dfrac{k}{n} \right) \le \ln \, A < -m \, \left( \dfrac{k}{n-m+1} \right), $$ and dividing through by $km/n$ gives $$ \frac{n}{k} \, \ln \left( 1 - \dfrac{k}{n} \right) \le \dfrac{\ln \, A}{km/n} < - \, \left( \dfrac{n}{n-m+1} \right) = - \, \left( \dfrac{1}{1-m/n+1/n} \right). $$ Finally, we let $k$, $m$ and $n$ tend to infinity in such a way that $km/n \to \lambda$. Since both $k/n \to 0$ and $m/n \to 0$, both the left and right bounds $\to -1$. (The left bound follows from $\lim_{n \to \infty} (1-1/n)^n = e^{-1}$, which is a famous limit in calculus.) So by the Squeeze Theorem we have $\ln \, A \to -\lambda$, and thus $A \to e^{-\lambda}$. Putting all this together gives the result.
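A numerical illustration of this convergence (a sketch assuming scipy is available; note scipy's hypergeom takes the population size, the number of marked items, and the number of draws, in that order):

```python
from scipy.stats import hypergeom, poisson

N, M, K = 100_000, 200, 1_000   # population, marked items, draws; MK/N = 2
lam = M * K / N
for x in range(6):
    print(x, hypergeom.pmf(x, N, M, K), poisson.pmf(x, lam))
# The two columns agree closely for these parameter sizes.
```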
Cheating in multiple choice tests. What is the probability that in a multiple choice test exam session, where $k$ people took the test (which contains $n$ questions with 2 possible answers each, exactly one of which is correct), cheating has occurred, i.e., there exist at least two tests that are identical? My solution is $$1-\frac{\binom{2^{n}}{k}}{\binom{2^{n}+k-1}{k}},$$ since there are $2^n$ possible tests and the set of all multisets with $k$ elements from $2^n$ is the size of all possible "exam sessions". For the numerator I counted only those exam sessions where cheating didn't occur, i.e., all tests are different from each other. Since what I'm looking for is the complement of this set, I get the above.
It is not reasonable to consider two identical tests as evidence of cheating. And people do not choose answers at random. So let us reword the problem as follows. We have $k$ people who each toss a fair coin $n$ times. What is the probability that at least two of them will get identical sequences of heads and tails? The required probability is $1$ minus the probability that the sequences are all different. We go after that probability. There are $2^n$ possible sequences of length $n$ made up of H and/or T. In order to make typing easier, and also not to get confused, let's call this number $N$. So each of our $k$ people independently produces one of these $N$ sequences. Write down the sequences chosen by the various people, listed in order of student ID. There are $N^k$ equally likely possibilities. Now we count the number of ways to obtain a list of $k$ distinct sequences. This is $N(N-1)(N-2)\cdots (N-k+1)$. So the probability the sequences are all different is $$\frac{N(N-1)(N-2) \cdots (N-k+1)}{N^k}.\tag{A}$$ So the answer to the original problem is $1$ minus the expression in (A). The numerator can be written in various other ways, for example as $k!\dbinom{N}{k}$. Remark: I prefer to think of this in elementary Birthday Problem terms. The probability that the sequence obtained by Student $2$ is different from the one obtained by Student $1$ is $\frac{N-1}{N}$. Given that fact, the probability that the sequence obtained by Student $3$ is different from both of the first two sequences is $\frac{N-2}{N}$. And so on. So the probability in (A) can be thought of as $$\frac{N-1}{N}\cdot\frac{N-2}{N}\cdots \frac{N-k+1}{N}.$$
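The probability in (A) is easy to evaluate numerically; a minimal sketch:

```python
def prob_some_match(n, k):
    # 1 minus the probability that all k sequences of n coin tosses differ
    N = 2 ** n
    p_distinct = 1.0
    for j in range(k):
        p_distinct *= (N - j) / N
    return 1 - p_distinct

print(prob_some_match(10, 30))  # e.g. 30 students, 10 questions
```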
Show that the interior of a convex set is convex Question: Let $P\subseteq \mathbb{R}^n$ be a convex set. Show that $\text{int}(P)$ is a convex set. I know that a point $x$ is said to be an interior point of the set $P$ if there is an open ball centered at $x$ that is contained entirely in $P$. The set of all interior points of $P$ is denoted by $\text{int}(P)$. Also, to say that a set $P$ is convex means that if $x,y \in P$ then $tx+(1-t)y \in P$ for all $t \in (0,1)$. How to go about the above proof?
I'll give a proof based on a simple picture: the map $w \mapsto \theta w + (1 - \theta) y$ sends an open ball around $x$ onto an open ball around $z$. Suppose that $x$ and $y$ are interior points of a convex set $\Omega \subset \mathbb R^n$. Let $0 < \theta < 1$. We wish to show that the point $z = \theta x + (1 - \theta) y$ is in the interior of $\Omega$. There exists an open ball $A$ centered at $x$ such that $A \subset \Omega$. Let $r$ be the radius of $A$, and let $B$ be the open ball of radius $\theta r$ centered at $z$. Claim: $B \subset \Omega$. Proof: Let $\hat z \in B$. There exists a point $\hat x \in \mathbb R^n$ such that $$ \hat z = \theta \hat x + (1 - \theta) y. $$ I will show that $\hat x \in A$. It will follow, by the convexity of $\Omega$, that $\hat z \in \Omega$. Since $\hat z$ is an arbitrary point in $B$, this will show that $B \subset \Omega$. To complete the proof, note that \begin{align} \| \theta(\hat x - x)\| &= \| \theta \hat x - \theta x \| \\ &= \| \hat z - (1 - \theta) y - (z - (1 - \theta) y) \| \\ &= \| \hat z - z \| \\ &< \theta r. \end{align} It follows that $\| \hat x - x \| < r$, which shows that $\hat x \in A$.
Clues for $\lim_{x\to\infty}\sum_{k=1}^{\infty} \frac{(-1)^{k+1} (2^k-1)x^k}{k k!}$ Any clues for this question? $$\lim_{x\to\infty}\sum_{k=1}^{\infty} \frac{(-1)^{k+1} (2^k-1)x^k}{k k!}$$
Take the derivative and use the exponential series. Thus if the sum is $f(x)$, then $$x f'(x) = \sum_{n=1}^{\infty} (-1)^{n+1} \frac{2^n-1}{n!} x^n = e^{-x}-e^{-2 x}$$ Then $$f(x) = \int_0^x dt \frac{e^{-t}-e^{-2 t}}{t}$$ (because you know $f(0)=0$). Thus, using Fubini's theorem, one can show that $$\lim_{x \to\infty} f(x) = \int_0^{\infty} dt \frac{e^{-t}-e^{-2 t}}{t} = \log{2}$$
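A numerical confirmation of the limit (assuming mpmath is available):

```python
import mpmath as mp

# The Frullani-type integral from above; the integrand extends continuously
# to t = 0 with value 1.
val = mp.quad(lambda t: (mp.exp(-t) - mp.exp(-2*t)) / t, [0, mp.inf])
print(val, mp.log(2))  # both ~0.6931
```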
Finite union of compact sets is compact Let $(X,d)$ be a metric space and $Y_1,\ldots,Y_n \subseteq X$ compact subsets. Then I want to show that $Y:=\bigcup_{i=1}^n Y_i$ is compact using only the definition of a compact set. My attempt: Let $(y_m)$ be a sequence in $Y$. If $\exists 1 \leq i \leq n\; \exists N \in \mathbb N \; \forall j \geq N\; y_j \in Y_i$ then $(y_m)$ has a convergent subsequence because $Y_i$ is compact. Otherwise, $$ \forall 1 \leq i \leq n \; \forall N \in \mathbb N\; \exists j \geq N\; y_j \notin Y_i $$ Assuming for the moment that $n = 2$ and using induction later, we have that $$ \forall N \in \mathbb N \; \exists j \geq N \; y_j \in Y_1 \backslash Y_2 $$ With this we can make a subsequence $\bigl(y_{m_j}\bigr)_{j=0}^\infty$ in $Y_1 \backslash Y_2$. This sequence lies in $Y_1$ and thus has a convergent subsequence. This convergent subsequence of the subsequence will then also be a convergent subsequence of the original sequence. Now we may use induction on $n$.
Let $\mathcal{O}$ be an open cover of $Y$. Since $\mathcal{O}$ is an open cover of each $Y_i$, there exists a finite subcover $\mathcal{O}_i \subset \mathcal{O}$ that covers each $Y_i$. Then $\bigcup_{i=1}^n \mathcal{O}_i \subset \mathcal{O}$ is a finite subcover. That's it; no need to deal with sequences.
Determine whether $F(x)= 5x+10$ is $O(x^2)$ Please, can someone here help me to understand the Big-O notation in discrete mathematics? Determine whether $F(x)= 5x+10$ is $O(x^2)$
It is as $x \to \infty$. Actually, $5x+10 = o(x^2)$ as $x \to \infty$ (little-oh) since $\lim_{x \to \infty} \frac{5x+10}{x^2} = 0$. However, $5x+10$ is not $O(x^2)$ as $x \to 0$, and neither is $5x$, because there is no real $c$ such that $5x \le c x^2$ for all $x$ sufficiently close to $0$ (indeed $\frac{5x}{x^2}=\frac5x\to\infty$). Since $x \to 0$ and $x \to \infty$ are the two common limits for big-oh notation, it is important to state which one is meant.
The positive root of the transcendental equation $\ln x-\sqrt{x-1}+1=0$ I numerically solved the transcendental equation $$\ln x-\sqrt{x-1}+1=0$$ and obtained an approximate value of its positive real root $$x \approx 14.498719188878466465738532142574796767250306535...$$ I wonder if it is possible to express the exact solution in terms of known mathematical constants and elementary or special functions (I am especially interested in those implemented in Mathematica)?
Yes, it is possible to express this root in terms of special functions implemented in Mathematica. Start with your equation $$\ln x-\sqrt{x-1}+1=0,\tag1$$ then take exponents of both sides: $$x\ e^{1-\sqrt{x-1}}=1.\tag2$$ Change the variable $$z=\sqrt{x-1}-1,\tag3$$ then plug this into $(2)$ and divide both sides by $2$: $$\left(\frac{z^2}2+z+1\right)e^{-z}=\frac12.\tag4$$ Now the left-hand side looks very familiar. Indeed, as can be seen from DLMF 8.4.8 or the formulae $(2),(3)$ on this MathWorld page, it is a special case (for $a=3$) of the regularized gamma function $$Q(a,z)=\frac{\Gamma(a,z)}{\Gamma(a)},\tag5$$ implemented in Mathematica as GammaRegularized[a, z]. Its inverse with respect to $z$ is denoted $Q^{-1}(a,s)$ and implemented in Mathematica as InverseGammaRegularized[a, s]. We can use this function to express the positive real root of the equation $(4)$ in closed form: $$z=Q^{-1}\left(3,\ \frac12\right).\tag6$$ Finally, using $(3)$ we can express the positive real root of your equation $(1)$ as follows: $$x=\left(Q^{-1}\left(3,\ \frac12\right)+1\right)^2+1.\tag7$$ The corresponding Mathematica expression is (InverseGammaRegularized[3, 1/2] + 1)^2 + 1 We can numerically check that substituting this expression into the left-hand side of equation $(1)$ indeed yields $0$. I was not able to express the result in terms of simpler functions (like the Lambert W-function).
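The same closed form can be checked outside Mathematica; for instance, scipy exposes the inverse of the regularized upper incomplete gamma function as gammainccinv:

```python
import numpy as np
from scipy.special import gammainccinv  # inverts Q(a, z) with respect to z

x = (gammainccinv(3, 0.5) + 1) ** 2 + 1
print(x)                               # 14.498719188878466...
print(np.log(x) - np.sqrt(x - 1) + 1)  # ~0, so x satisfies equation (1)
```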
Boston Celtics vs. LA Lakers: expectation of a series of games? The Boston Celtics and LA Lakers play a series of games; the first team to win 4 games wins the whole series. The probability of winning or losing each game is equal ($1/2$). (a) What is the expectation of the number of games in the series? So I defined an indicator: $x_i=1$ if game $i$ was played. It is clear that $E[x_1]=E[x_2]=E[x_3]=E[x_4]=1$. For the 5th game, for each team we have the following scenarios (W=win, L=lose), and the probability is: W W W L $((\frac12)^3=\frac18)$, W W L L $((\frac12)^2=\frac14)$, W L L L $((\frac12)^1=\frac12)$, so $\frac12+\frac14+\frac18=\frac78$. We can use the complementary event: if only one game is L in a series of 4 games (so the 5th game will be played): $1-\frac18=\frac78$. The same calculation goes for the 6th and 7th games: 6th: W W W L L or W W L L L $\Rightarrow \frac18+\frac14=\frac38$; 7th: W W W L L L $\Rightarrow \frac18$. And the expectation is $E[x]=1+1+1+1+\frac78+\frac38+\frac18=\frac{43}8$. What am I missing here, and how can I fix it?
If we consider the family of all length-7 sequences composed of W and L, each of these sequences represents one of $2^7$ equally likely outcomes. The number of games played is pre-decided for each such given sequence (e.g., WWWLWLL and WWWLWWW both result in five games played, while WLWLWLW results in seven games). So we can find the probability of each event (number of games played) by counting how many sequences fall into each category. Note: O indicates that a game can be either W or L; this occurs when the outcome of the series is already decided and the remaining games become irrelevant to the total number of games played. Also, let us assume without loss of generality that the W team wins the series; by symmetry, conditioning on W winning does not change the distribution of the series length, so we may count only the $64$ sequences in which W wins. Four games: WWWWOOO, $2^3=8$ sequences. Five games: (arrangement of WWWL)WOO, $2^2{{4}\choose{3}} = 16$ sequences. Six games: (arrangement of WWWLL)WO, $2{{5}\choose{3}} = 20$ sequences. Seven games: (arrangement of WWWLLL)W, ${{6}\choose{3}} = 20$ sequences. So the expected number of games played is: $4(\frac{8}{64})+5(\frac{16}{64})+6(\frac{20}{64})+7(\frac{20}{64})=\frac{93}{16}=5.8125$ games.
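Since the sample space is tiny, the answer is easy to confirm by brute force; a minimal sketch:

```python
from itertools import product

# Enumerate all 2^7 equally likely win/loss strings and record when the
# series actually ends (first side to reach 4 wins).
total = 0
for seq in product('WL', repeat=7):
    w = l = 0
    for games, g in enumerate(seq, start=1):
        w += (g == 'W')
        l += (g == 'L')
        if w == 4 or l == 4:
            break
    total += games
print(total / 2**7)  # 5.8125 = 93/16
```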
The notations change as we grow up In school life we were taught that $<$ and $>$ are strict inequalities while $\ge$ and $\le$ aren't. We were also taught that $\subset$ was strict containment but $\subseteq$ wasn't. My question: Later on (from my M.Sc. onwards), I noticed that $\subset$ is used for general containment and $\subsetneq$ for strict. The symbol $\subseteq$ wasn't used any longer! We could have simply carried on with the old notations, which were analogous to the symbols for inequalities. Why didn't the earlier notations stick? There has to be a history behind this, I feel. (I could be wrong.) Notations are notations, I agree, and I am used to the current ones. But I can't reconcile the fact that the earlier notations for subsets (which were more straightforward) were scrapped while $\le$ and $\ge$ continue to be used with the same meaning. So I ask.
This is very field dependent (and probably depends on the university as well). In my M.Sc. thesis, and in fact anything I write today as a Ph.D. student, I still use $\subseteq$ for inclusion and $\subsetneq$ for proper inclusion. If anything, when teaching freshman intro courses I'll opt for $\subsetneqq$ when talking about proper inclusion. On the other hand, when I took a basic course in algebraic topology the professor said that we will write $X\setminus x$ when we mean $X\setminus\{x\}$, and promptly apologized to me (the set theory student in the crowd).
A vector field is a section of $T\mathcal{M}$. By definition, a vector field is a section of $T\mathcal{M}$. I am familiar with the concept of a vector field, as well as the tangent planes of a manifold. But this definition is not intuitive to me at all. Could someone give me some intuition? Thank you very much!
Remember that a point of the tangent bundle consists of a pair $(p,v)$, where $p \in M$ and $v \in T_pM$. We have the projection map $\pi: TM \to M$ which acts by $(p,v) \mapsto p$. A section of $\pi$ is a map $f$ so that $\pi \circ f$ is the identity. So for each $p \in M$, we have to choose an element of $TM$ that projects back down to $p$. So for each $p \in M$ we're forced to choose a pair $(p,v)$ with $v \in T_pM$. This is the same information as choosing a tangent vector in each tangent space, which is the same information as a vector field. If we insist that $f$ is smooth (as we usually do), then we get a smooth vector field.
Hölder-continuous function $f:I \rightarrow \mathbb R$ is said to be Hölder continuous if $\exists \alpha>0$ such that $|f(x)-f(y)| \leq M|x-y|^\alpha$, $ \forall x,y \in I$, $0<\alpha\leq1$. Prove that $f$ Hölder continuous $\Rightarrow$ $f$ uniformly continuous, and that if $\alpha>1$, then $f$ is constant. To prove that $f$ Hölder continuous $\Rightarrow$ $f$ uniformly continuous, it is enough to note that given $\varepsilon>0$, choosing $\delta=(\varepsilon/M)^{1/\alpha}$ yields $|f(x)-f(y)| \leq M |x-y|^\alpha < \varepsilon$ whenever $|x-y|<\delta$, so $f$ is uniformly continuous. But how can I prove that if $\alpha >1$, then $f$ is constant?
Hint: Writing $\alpha=1+\epsilon$ with $\epsilon>0$, for all $x\ne y$ you have $\Bigl|{f(x)-f(y)\over x-y}\Bigr|\le M|x-y|^\epsilon$. Why must $f'(x)$ exist? What is the value of $f'(x)$?
How can I measure the distance between two cities on a map? Well, I know that the distance between Moscow and London is about 2,519 km, and the distance between Moscow and London on my map is about 30.81 cm, so the scale of my map is 1 cm = 81.865 km. But when I tried to measure the distance between two other cities, for example London and Berlin, using my map's scale, the result was wrong, so I think that's because of the sphericity of the Earth? Now I want to know how I can measure the distance between two cities on a map, and also how I can find the scale of a map.
The calculation is somewhat complex. A simplification is to assume that the Earth is a sphere and to find the great-circle distance. A more complex calculation instead uses an oblate spheroid as a closer approximation.
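Under the spherical approximation, the great-circle distance is given by the haversine formula; a sketch (the city coordinates below are illustrative approximate values, not taken from the question):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    # r is the mean Earth radius in km
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi/2)**2 + cos(phi1)*cos(phi2)*sin(dlam/2)**2
    return 2 * r * asin(sqrt(a))

# Approximate coordinates for Moscow and London
print(haversine_km(55.75, 37.62, 51.51, -0.13))  # ~2500 km
```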
Convergence in $L_1$ and Convergence of the Integrals Am I right with the following argument? (I am a bit confused by all those types of convergence.) Let $f, f_n \in L_1(a,b)$ with $f_n$ converging to $f$ in $L_1$, meaning $$\lVert f_n-f \rVert_1 = \int_a^b |f_n(x)-f(x)|dx \rightarrow 0 \ , $$ Then the integral $\int_a^b f_n dx$ converges to $\int_a^b f dx$. To show this we look at$$\left| \int_a^b f_n(x) dx - \int_a^b f(x) dx \right | \leq \int_a^b | f_n(x) - f(x)| dx \rightarrow 0 \ .$$ If this is indeed true, is there something similar for the other $L_p(a,b)$ spaces, or is this something special to $L_1(a,b)$?
Let $f_n \to f$ in $L^p(\Omega)$. Then we also have that $\|f_n\|_p \to \|f\|_p$. So "something similar" holds. As for the convergence of $\int f_n$ to $\int f$, this is generally not guaranteed by $L^p$ convergence, unless the measure of the underlying space is finite (as it is in your example). In that case we have $$ \left| \int_{\Omega} f_n(x)\,dx - \int_{\Omega} f(x)\,dx \right| = \left| \int_{\Omega} (f_n(x) - f(x))\,dx \right| \leq \int_{\Omega} | f_n(x) - f(x)|\, dx \leq |\Omega|^{\frac1q} \|f_n-f\|_p \rightarrow 0 $$ by Hölder's inequality, where $q$ is the conjugate exponent of $p$. This is the same inequality that gives you that $L^p(\Omega) \subset L^1(\Omega)$.
Rees algebra of a monomial ideal User fbakhshi deleted the following question: Let $R=K[x_1,\ldots,x_n]$ be a polynomial ring over a field $K$ and $I=(f_1,\ldots,f_q)$ a monomial ideal of $R$. If $f_i$ is homogeneous of degree $d\geq 1$ for all $i$, then prove that $$ R[It]/\mathfrak m R[It]\simeq K[f_1t,\ldots, f_q t]\simeq K[f_1,\ldots,f_q] \text{ (as $K$-algebras).} $$ $R[It]$ denotes the Rees algebra of $I$ and $\mathfrak m=(x_1,\ldots,x_n)$.
$R[It]/\mathfrak mR[It]$ is $\oplus_{n\geq 0}{I^n/\mathfrak mI^n}$. For each $l\geq 0$, let $I_l$ be the $K$-vector space generated by all $\phi(f_1,\ldots,f_q)$ with $\phi$ a homogeneous polynomial of degree $l$. Then $K[f_1,\ldots,f_q]=\oplus_{l\geq 0}{I_l}$. Now $\dim_{K}{I_l}=\dim_{K}{I^l/\mathfrak mI^l}$. Hence $K[f_1,\ldots,f_q]\simeq {R[It]/\mathfrak mR[It]}$.
Proving monotone decreasing and finding the limit Let $a,b$ be positive real numbers. Set $x_0 =a$ and $x_{n+1}= \frac{1}{x_n^{-1}+b}$ for $n≥0$. (a) Prove that $x_n$ is monotone decreasing. (b) Prove that the limit exists and find it. Any help? I don't know where to start.
Note first that the recursion can be rewritten as $x_{n+1}=\frac{x_n}{1+bx_n}$, and since all terms are positive, $x_{n+1}<x_n$; this gives (a). To prove the limit exists, use the fact that every decreasing sequence that is bounded below (here, by $0$) is convergent. To find the limit, assume $ \lim_{n\to \infty} x_n = x = \lim_{n\to \infty} x_{n+1} $ and solve for $x$ the equation $$ x=\frac{x}{1+bx}, $$ which gives $bx^2=0$, i.e. $x=0$.
If there is a continuous function between linear continua, then this function has a fixed point? Let $f:X\to X$ be a continuous map and $X$ be a linear continuum. Is it true that $f$ has a fixed point? I think the answer is "yes" and here is my proof: Assume to the contrary that for any $x\in X$, either $f(x)<x$ or $f(x)>x$. Then, $A=\{x: f(x)<x\}$ and $B=\{x: f(x)>x\}$ are disjoint and their union gives $X$. Now if we can show that both $A$ and $B$ are open we obtain a contradiction because $X$ is connected. How can we show that $A$ and $B$ are open in $X$?
The function $f(x)=x+1$ on $X=\mathbb R$ is a counterexample. Here both sets $A$ and $B$ are open, but one of them is empty :-) The Brouwer fixed point theorem asserts that the closed ball has the property you are looking for: every continuous self-map will have a fixed point. But the proof requires tools well beyond the general topological arguments you outlined. The most straightforward proof passes via relative homology or homotopy, and exploits the nontriviality of certain homology (resp. homotopy) classes.
Prove that $d^n(x^n)/dx^n = n!$ by induction I need to prove that $d^n(x^n)/dx^n = n!$ by induction. Any help?
Hint: Are you familiar with proofs by induction? Well, the induction step could be written as $$d^{n+1}(x^{n+1}) / dx^{n+1} = d^n \left(\frac{d(x^{n+1})} {dx}\right) /dx^n $$
How To Calculate a Weighted Payout Table I am looking to see if there is some sort of formula I can use to calculate weighted payout tables. I am looking for something similar to the PGA payout distribution, but I want my payout table to be flexible enough to accommodate a variable or known number of participants. In golf, the payout distribution goes to 70 players, so that payout distribution, while weighted, is pretty much constant from tourney to tourney. With my calculation, I want the weighting to be flexible by having a variable as the denominator for the payout pool. In other words, I would like the formula to handle 10 participants, or 18 participants, or 31 or 92, etc. Let me know if there is some sort of weighted mathematical payout formula I could use. Thanks.
There are lots of them. You haven't given enough information to select just one. A simple one would be to pick $n$ as the number of players that will be paid and $p$ the fraction that the prize will reduce from one to the next. The winner gets $1$ (times the top prize), second place gets $p$, third $p^2$ and so on. The sum of all this is $\frac {1-p^n}{1-p}$, so if the winner gets $f$ the total purse is $f\frac {1-p^n}{1-p}$. Pick $p$, and your total purse, and you can determine each prize as $f, fp, fp^2 \ldots fp^{n-1}$
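A minimal sketch of this scheme in code (the function name and parameters are just illustrative):

```python
def payouts(purse, n, p):
    # top prize f chosen so the n geometrically decaying prizes sum to purse
    f = purse * (1 - p) / (1 - p ** n)
    return [f * p ** i for i in range(n)]

prizes = payouts(10_000, 10, 0.7)
print([round(x, 2) for x in prizes])
print(round(sum(prizes), 2))  # 10000.0
```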
If $x\not\leq y$, then is $x>y$, or $x\geq y$? I'm currently reading about surreal numbers from here. At multiple points in this paper, the author has stated that if $x\not\leq y$, then $x\geq y$. Shouldn't the relation be "if $x\not\leq y$, then $x>y$"? Hasn't the possibility of $x=y$ already been negated when we said $x\not\leq y$? Thanks in advance.
You are correct, if we were speaking of the $\leq/\geq$ relations we know and love, as standard ordering relations on the reals: the negation of $x \leq y$ is exactly $x > y$, and that would be the correct assertion if we were talking about a "trichotomous" ordering, where for any two real numbers one and only one of the following holds: $x\lt y$, $x = y$, or $x>y$. But your text is not wrong that $x \nleq y \implies x\geq y$ (that is, the right hand side is implied by the left hand side, and this would be a valid implication even in the standard real numbers). And it seems your text is using strictly $\leq$ and $\geq$, so that for any two numbers $x, y$ at least one of the following relations holds: $x \geq y$ or $x \leq y$, and these relations do not necessarily have the same properties we know and love with respect to their standard meanings on the reals.
Hartshorne III 9.3: why do we need irreducibility and equidimensionality? We are trying to prove: Corollary 9.6: Let $f\colon X \to Y$ be a flat morphism of schemes of finite type over a field $k$, and assume that $Y$ is irreducible. Then the following are equivalent: (i) every irreducible component of $X$ has dimension equal to $\dim Y + n$; (ii) for any point $y \in Y$ (closed or not), every irreducible component of the fibre $X_y$ has dimension $n$. (i) $\Rightarrow$ (ii) Hartshorne argues that since $Y$ is irreducible and $X$ is equidimensional and both are of finite type over $k$, we have $$\dim_x X = \dim X - \dim \operatorname{cl}(\{x\})$$ $$\dim_y Y = \dim Y - \dim \operatorname{cl}(\{y\})$$ Hartshorne makes a reference to II Ex 3.20, where one should prove several equalities for an integral scheme of finite type over a field $k$. We have that $Y$ is irreducible, so we only need it to be reduced, and then its corresponding equality will be justified. But how do we get reducedness then? And what about $X$?
I was confused by this as well. The desired equalities follow from the general statement Let $X$ be a scheme of finite type over a field $k$ and let $x\in X$. Then $$\dim \mathcal{O}_x+\dim \{x\}^-=\sup_{x\in V \text{ irreducible component}} \dim V$$ where the sup on the right is taken over all irreducible components of $X$ containing $x$. I posted a proof of this as an answer to my own question here: Dimension of local rings on scheme of finite type over a field. The proof uses the special case from exercise II 3.20 in Hartshorne.
Polynomials Question: Proving $a=b=c$. Question: Let $P_1(x)=ax^2-bx-c$, $P_2(x)=bx^2-cx-a$ and $P_3(x)=cx^2-ax-b$, where $a,b,c$ are non-zero reals. There exists a real $\alpha$ such that $P_1(\alpha)=P_2(\alpha)=P_3(\alpha)$. Prove that $a=b=c$. The question seems pretty easy for people who know some calculus. Since the question is from a contest, no calculus can be used. I have a solution which is a bit long (no calculus involved); I'm looking for the simplest way to solve this, using more direct facts like $a-b \mid P(a)-P(b)$.
Hint: if $a=b=c$ then all three polynomials are equal. A useful trick to show that polynomials are equal is the following: if a polynomial $Q$ of degree $n$ (like $P_1-P_2$) has $n+1$ distinct roots (points $\beta$ such that $Q(\beta)=0$) then $Q$ is the zero polynomial. In particular, if a quadratic has three zeroes, then it must be identically zero. It follows that any two quadratics which agree at three distinct points must be identical. (So you should try to construct a quadratic from $P_1,P_2,P_3$ that has three distinct zeroes and somehow conclude from that that $a=b=c$.) This result can be proved using the factor theorem, and requires no calculus (indeed, like the result of your question, it holds in polynomial rings where analysis can't be developed in a particularly meaningful sense, so any proof using calculus is rather unsatisfactory). Disclaimer: I haven't actually checked to see whether this approach works, but it's more or less the only fully general trick to show that two polynomials are the same.
A basic question on Type and Cotype theory I'm studying the basic theory of type and cotype of Banach spaces, and I have a simple question. I'm using the definition via averages. All Banach spaces have type 1; that was easy to prove using the triangle inequality. But I'm having a hard time trying to show that all Banach spaces have cotype $\infty$. What I'm trying to show is that there exists $C>0$ such that, for every $x_1, \dotsc, x_n$ in a Banach space $X$, $$\left( \frac {{\displaystyle \sum\limits_{\varepsilon_i = \pm 1}} \lVert \sum^n_{i=1} \varepsilon_i x_i\rVert} {2^n} \right) \ge C \max_{1\le i \le n} \lVert x_i \rVert $$ How is it done? This is supposed to be trivial, as the literature keeps telling me "it's easy to see". Thanks!
The argument is by induction: It is trivial for $n=1$. For the case $n=2$ note that we have, by the triangle inequality and the fact that $\|z\|=\|-z\|$, $$ \| x-y \| + \|x+y\| \geq 2\max\{ \| x\|, \| y\|\}, $$ so that the inequality in this case follows with $C=1$. For the general case consider a vector $\bar{\varepsilon}=(\varepsilon_2,\ldots,\varepsilon_n) \in \{-1,1\}^{n-1}$ and $\bar{x}=(x_2,\ldots,x_n)$ and the natural dot product $$ \bar{\varepsilon}\cdot \bar{x}= \sum_{j=2}^n \varepsilon_jx_j\in X. $$ Then the left hand side of the desired inequality (which we call $A$) can be rewritten as $$ A=\frac{\sum_{\bar{\varepsilon}} \sum_{\varepsilon_1=\pm1} \| \varepsilon_1x_1 + \bar{\varepsilon}\cdot \bar{x}\|}{2^n}. $$ Notice that, if $y=\bar{\varepsilon}\cdot \bar{x}$ then, by the argument for $n=2$, $$ \sum_{\varepsilon_1=\pm1} \| \varepsilon_1x_1+y\| \geq 2\max\{ \| x_1\|,\|y\|\}, $$ so that, plugging into the previous inequality, and recalling the obvious inequality $\max\{ \sum_j a_j, \sum_jb_j\} \leq \sum_j \max\{ a_j,b_j\}$, we obtain $$ A\geq \max\left\{ \| x_1\|, \frac{\sum_{\bar{\varepsilon}} \| \bar{\varepsilon}\cdot \bar{x}\|}{2^{n-1}} \right\} \geq \max_{1\leq i\leq n} \| x_i\|. $$ This is what you want, with $C=1$.
Show that the $\max{ \{ x,y \} }= \frac{x+y+|x-y|}{2}$. Show that the $\max{ \{ x,y \} }= \dfrac{x+y+|x-y|}{2}$. I do not understand how to go about completing this problem or even where to start.
Without loss of generality, let $y=x+k$ for some nonnegative number $k$. Then, $$ \frac{x+(x+k)+|x-(x+k)|}{2} = \frac{2x+2k}{2} = x+k = y $$ which is equal to $\max(x,y)$ by the assumption.
Probability of getting 'k' heads with 'n' coins This is an interview question (http://www.geeksforgeeks.org/directi-interview-set-1/). Given $n$ biased coins, with each coin giving heads with probability $P_i$, find the probability that on tossing the $n$ coins you will obtain exactly $k$ heads. You have to write the formula for this (i.e. the expression that would give $P (n, k)$). I can write a recurrence program for this, but how to write the general expression?
Consider the function $[ (1-P_1) + P_1x] \times [(1-P_2) + P_2 x ] \cdots [(1-P_n) + P_n x ]$. Then the coefficient of $x^k$ corresponds to the probability that there are exactly $k$ heads. Explicitly, the coefficient of $x^k$ in this polynomial is $$\sum_{\substack{S\subseteq\{1,\dots,n\}\\ |S|=k}} \prod_{i\in S} P_i \prod_{j \notin S} (1-P_j),$$ i.e., a sum over all $k$-element subsets $S$ of the coins.
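In code, the coefficient is most easily obtained by multiplying the factors one at a time rather than expanding symbolically; a sketch:

```python
def prob_k_heads(probs, k):
    coeffs = [1.0]  # coeffs[j] = P(j heads among the coins processed so far)
    for p in probs:
        new = [0.0] * (len(coeffs) + 1)
        for j, c in enumerate(coeffs):
            new[j] += c * (1 - p)   # this coin is tails
            new[j + 1] += c * p     # this coin is heads
        coeffs = new
    return coeffs[k]

print(prob_k_heads([0.5, 0.5, 0.5], 2))  # 0.375 = C(3,2)/2^3
```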
Help to compute the following coefficient in a Fourier series: $\int_{(2n-1)\pi}^{(2n+1)\pi}\left|x-2n\pi\right|\cos(k x)\mathrm dx$ $$\int_{(2n-1)\pi}^{(2n+1)\pi}\left|x-2n\pi\right|\cos(k x)\mathrm dx$$ where $k\geq 0$, $k\in\mathbb{N} $ and $n\in\mathbb{R}$. It is an $a_k$ coefficient in a Fourier series.
Here is the final answer from Maple: $$ 2\,{\frac {2\, \left( -1 \right) ^{k} \left( \cos \left( \pi \,kn \right) \right) ^{2}-2\, \left( \cos \left( \pi \,kn \right) \right) ^{2}+ \left( -1 \right) ^{k+1}+1}{{k}^{2}}} . $$ Added: More simplification leads to the more compact form $$ 2\,{\frac {\cos \left( 2\,\pi \,kn \right) \left( \left( -1 \right) ^{k}-1 \right) }{{k}^{2}}}.$$
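As a cross-check on the Maple output, the compact form also follows by hand for integer $k\ge1$. Substituting $u=x-2n\pi$ and expanding $\cos(ku+2\pi kn)$, the odd $\sin(ku)$ part integrates to zero, leaving $$\int_{(2n-1)\pi}^{(2n+1)\pi}\left|x-2n\pi\right|\cos(kx)\,\mathrm dx=\cos(2\pi kn)\int_{-\pi}^{\pi}|u|\cos(ku)\,\mathrm du,$$ and $$\int_{-\pi}^{\pi}|u|\cos(ku)\,\mathrm du=2\int_0^{\pi}u\cos(ku)\,\mathrm du=2\left[\frac{u\sin(ku)}{k}+\frac{\cos(ku)}{k^2}\right]_0^{\pi}=\frac{2\left((-1)^k-1\right)}{k^2},$$ which recovers $2\,\frac{\cos(2\pi kn)\left((-1)^k-1\right)}{k^2}$.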
A proposed proof by induction of $1+2+\ldots+n=\frac{n(n+1)}{2}$ Prove: $\displaystyle 1+2+\ldots+n=\frac{n(n+1)}{2}$. Proof When $n=1,1=\displaystyle \frac{1(1+1)}{2}$,equality holds. Suppose when $n=k$, we have $1+2+\dots+k=\frac{k(k+1)}{2}$ When $n = k + 1$: \begin{align} 1+2+\ldots+k+(k+1) &=\frac{k(k+1)}{2}+k+1 =\frac{k(k+1)+2k+2}{2}\\ &=\frac{k^2+3k+2}{2}\\ \text{[step]}&=\displaystyle\frac{(k+1)(k+2)}{2}=\displaystyle\frac{(k+1)((k+1)+1)}{2} \end{align} equality holds. So by induction, the original equality holds $\forall n\in \mathbb{N}$. Question 1: any problems in writing? Question 2: Why [step] happen to equal? i.e., why does $k^2+3k+2=(k+1)(k+2)$ hold?
Q1: No problems, that's the way induction works. Q2: go back one step: $$k(k+1)+2k+2=k(k+1)+2(k+1)=(k+1)(k+2)$$
Why does the «Massey cube» of an odd element lie in 3-torsion? The cup product is supercommutative, i.e. the supercommutator $[-,-]$ is trivial at the cohomology level, but not at the cochain level, which allows one to produce various cohomology operations. The simplest (in some sense) of such (integral) operations is the following «Massey cube». Suppose $a$ is an integral $k$-cocycle, $k$ odd; $[a,a]=0\in H^{2k}$, so $[a,a]=db$ (where $b$ is some cochain); define $\langle a\rangle^3:=[a,b]\in H^{3k-1}$ (clearly this is a cocycle; it doesn't depend on the choice of $b$, since by supercommutativity $[a,b'-b]=0\in H^{3k-1}$ whenever $d(b-b')=0$). The question is: why does $\langle a\rangle^3$ lie in 3-torsion? For $k=3$, for example, this is true since $H^8(K(\mathbb Z,3);\mathbb Z)=\mathbb Z/3$, but surely there should be a more direct proof? (Something like the Jacobi identity, maybe?)
Recall that $d(x\cup_1y)=[x,y]\pm dx\cup_1 y\pm x\cup_1dy$. In particular, in the definition from the question one can take $b=a\cup_1a$. So $\langle a\rangle^3=[a,a\cup_1a]$. Now $d((a\cup_1a)\cup_1a)=[a,a\cup_1a]+(d(a\cup_1 a))a=\langle a\rangle^3+[a,a]\cup_1 a$. Now by Hirsch formula $a^2\cup_1a=a(a\cup_1a)+(a\cup_1a)a=\langle a\rangle^3$. So $$3\langle a\rangle^3=d(a\cup_1a\cup_1a)=0\in H^{3k-1}.$$
Why does the inverse of the Hilbert matrix have integer entries? Let $A$ be the $n\times n$ matrix given by $$A_{ij}=\frac{1}{i + j - 1}$$ Show that $A$ is invertible and that the inverse has integer entries. I was able to show that $A$ is invertible. How do I show that $A^{-1}$ has integer entries? This matrix is called the Hilbert matrix. The problem appears as exercise 12 in section 1.6 of Hoffman and Kunze's Linear Algebra (2nd edition).
Be wise, generalize (c) I think the nicest way to answer this question is the direct computation of the inverse; however, for a more general matrix including the Hilbert matrix as a special case. The corresponding formulas have a very transparent structure and nontrivial further generalizations. The matrix $A$ is a particular case of the so-called Cauchy matrix with elements $$A_{ij}=\frac{1}{x_i-y_j},\qquad i,j=1,\ldots, N.$$ Namely, in the Hilbert case we can take $$x_i=i-\frac{1}{2},\qquad y_i=-i+\frac12.$$ The determinant of $A$ is given in the general case by $$\mathrm{det}\,A=\frac{\prod_{1\leq i<j\leq N}(x_i-x_j)(y_j-y_i)}{\prod_{1\leq i,j\leq N}(x_i-y_j)}.\tag{1}$$ Up to an easily computable constant prefactor, the structure of (1) follows from the observation that $\mathrm{det}\,A$ vanishes whenever there is a pair of coinciding $x$'s or $y$'s. (In the latter case $A$ contains a pair of coinciding rows/columns.) For our $x$'s and $y$'s the determinant is clearly non-zero, hence $A$ is invertible. One can also easily find the inverse $A^{-1}$, since the matrix obtained from a Cauchy matrix by deleting one row and one column is also of Cauchy type, with one $x$ and one $y$ less. Taking the ratio of the corresponding two determinants and using (1), most of the factors cancel out and one obtains \begin{align} A_{mn}^{-1}=\frac{1}{y_m-x_n}\frac{\prod_{1\leq i\leq N}(x_n-y_i)\cdot\prod_{1\leq i\leq N}(y_m-x_i)}{\prod_{i\neq n}(x_n-x_i)\cdot\prod_{i\neq m}(y_m-y_i)}.\tag{2} \end{align} For our particular $x$'s and $y$'s, the formula (2) reduces to \begin{align} A_{mn}^{-1}&=\frac{(-1)^{m+n}}{m+n-1}\frac{\frac{(n+N-1)!}{(n-1)!}\cdot \frac{(m+N-1)!}{(m-1)!}}{(n-1)!(N-n)!\cdot(m-1)!(N-m)!}=\\ &=(-1)^{m+n}(m+n-1){n+N-1 \choose N-m}{m+N-1 \choose N-n}{m+n-2\choose m-1}^2. \end{align} The last expression is clearly integer. $\blacksquare$
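For small $n$ the integrality is easy to confirm exactly (a sketch assuming sympy is available):

```python
import sympy as sp

n = 5
A = sp.Matrix(n, n, lambda i, j: sp.Rational(1, i + j + 1))  # Hilbert matrix
A_inv = A.inv()  # exact rational inverse
print(all(e.is_integer for e in A_inv))  # True: every entry is an integer
```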
Is there an inverse to Stirling's approximation? The factorial function cannot have an inverse, $0!$ and $1!$ having the same value. However, Stirling's approximation of the factorial $x! \sim x^xe^{-x}\sqrt{2\pi x}$ does not have this problem, and could provide a ballpark inverse to the factorial function. But can this actually be derived, and if so how? Here is my work: $$ \begin{align} y &= x^xe^{-x}\sqrt{2\pi x}\\ y^2 &= 2\pi x^{2x + 1}e^{-2x}\\ \frac{y^2}{2\pi} &= x^{2x + 1}e^{-2x}\\ \ln \frac{y^2}{2\pi} &= (2x + 1)\ln x - 2x\\ \ln \frac{y^2}{2\pi} &= 2x\ln x + \ln x - 2x\\ \ln \frac{y^2}{2\pi} &= 2x(\ln x - 1) + \ln x \end{align} $$ That is as far as I can go. I suspect the solution may require the Lambert W function. Edit: I have just realized that after step 3 above, one can divide both sides by e to get $$\left(\frac{x}{e}\right)^{2x + 1} = \frac{y^2}{2e\pi}$$ Can this be solved?
As $n$ increases to infinity we want to know roughly the size of the $x$ that satisfies the equation $x! = n$. By Stirling $$ x^x e^{-x} \sqrt{2\pi x} \sim n $$ Just focusing on $x^x$, a first approximation is $\log n / \log\log n$. Now writing $x = \log n / \log\log n + x_1$ and solving approximately for $x_1$, this time using $x^x e^{-x}$, we get $$ x = \frac{\log n}{\log\log n} + \frac{\log n \cdot ( \log\log\log n + 1)}{(\log\log n)^2} + x_2 $$ with a yet smaller $x_2$ which can also be determined by plugging this into $x^x e^{-x}$. You'll notice eventually that the $\sqrt{2\pi x}$ is too small to contribute. You can continue in this way, and this will give you an asymptotic (non-convergent) series in powers of $\log\log n$. For more I recommend looking at de Bruijn's book "Asymptotic methods in analysis". He specifically focuses on the $n!$ case in one of the chapters (I don't have the book with me to check).
Action of the state I have the following question: let $A$ be a C*-algebra and let $a$ be a self-adjoint element of $A$. Is it true that for any state $f$ acting on $A$, $$f(a) \in \mathbb{R}?$$ Let me recall that a state is a positive linear functional of norm $1$. I think it is due to the fact that every state has to satisfy $f(x^*)=\overline{f(x)}$ for all $x \in A$. Then we easily obtain $f(a)=f(a^*) = \overline{f(a)}$, thus $f(a) \in \mathbb{R}$, but I don't know how to show that it has this *-property.
Suppose that $a$ is a self-adjoint element in the C$^*$-algebra $A$. Then, by applying the continuous functional calculus, we can write $a$ as the difference of two positive elements $a=a_+ - a_-$ such that $a_+a_-=a_-a_+=0$. See, for example, Proposition VIII.3.4 in Conway's A Course in Functional Analysis, or (*) below. Once we have this fact in hand it is easy to show the desired property. As $f$ is positive, $f(a_+)$ and $f(a_-)$ are positive (and thus real) and so $f(a)=f(a_+)-f(a_-)$ is real. It is also now easy to show the self-adjointness property that you mentioned: each $a \in A$ (now not necessarily self-adjoint) can be written as $a=x+iy$, where $x$ and $y$ are self-adjoint. We can take $x=\frac{1}{2}(a+a^*)$ and $y=\frac{-i}{2}(a-a^*)$. Then $$f(a)=f(x+iy)=f(x)+if(y)$$ and $$f(a^*)=f(x-iy)=f(x)-if(y).$$ As $f(x)$ and $f(y)$ are real, we have $f(a^*)=\overline{f(a)}$. (*) How do we show that $a=a_+-a_-$ where $a_+a_-=a_-a_+=0$ and $a_+$ and $a_-$ are positive? As $a$ is self-adjoint, its spectrum is a subset of the real line and we also know that we can apply the continuous functional calculus. If $g(t)=\max (t, 0)$ and $h(t)=\max(-t,0)$, then $a_+=g(a)$ and $a_-=h(a)$ are the elements we need. Why does this work? Think about splitting a continuous function on $\mathbb R$ into its positive and negative parts.
How to prove these integral inequalities? a) $f(x)>0$ and $f(x)\in C[a,b]$. Prove $$\left(\int_a^bf(x)\sin x\,dx\right)^2 +\left(\int_a^bf(x)\cos x\,dx\right)^2 \le \left(\int_a^bf(x)\,dx\right)^2$$ I have tried the Cauchy-Schwarz inequality but failed to prove it. b) $f(x)$ is differentiable on $[0,1]$. Prove $$|f(0)|\le \int_0^1|f(x)|\,dx+\int_0^1|f'(x)|dx$$ Any help or tips? Thanks.
Hint: For part a), use Jensen's inequality with weighted measure $f(x)\,\mathrm{d}x$. Since $f(x)>0$, Jensen says that for a convex function $\phi$ $$ \phi\left(\frac1{\int_Xf(x)\mathrm{d}x}\int_Xg(x)\,f(x)\mathrm{d}x\right) \le\frac1{\int_Xf(x)\mathrm{d}x}\int_X\phi(g(x))\,f(x)\mathrm{d}x $$ Hint: For part b), note that for $x\in[0,1]$, $$ f(0)-f(x)\le\int_0^1|f'(t)|\,\mathrm{d}t $$ and integrate over $[0,1]$.
The continuity of measure Let $m$ be the Lebesgue Measure. If $\{A_k\}_{k=1}^{\infty}$ is an ascending collection of measurable sets, then $$m\left(\cup_{k=1}^\infty A_k\right)=\lim_{k\to\infty}m(A_k).$$ Can someone share a story as to why this is called one of the "continuity" properties of measure?
Since $\{A_k\}_{k=1}^\infty$ is an ascending family of sets we can vaguely write that $$ \lim\limits_{k\to\infty} A_k=\bigcup\limits_{k=1}^\infty A_k \qquad(\color{red}{\text{note: this is not rigorous!}}) $$ then this property can be written as $$ m\left(\lim\limits_{k\to\infty} A_k\right)=\lim\limits_{k\to\infty}m(A_k) $$ which looks very similar to the Heine definition of continuity.
Check solutions of vector differential equations I have solved the vector ODE: $x' = \begin{pmatrix}1& 1 \\ -1 &1 \end{pmatrix}x$ I found an eigenvalue $\lambda=1+i$ and deduced the corresponding eigenvector: \begin{align} (A-\lambda I)x =& 0 \\ \begin{pmatrix}1-1-i & 1 \\-1& 1-1-i \end{pmatrix}x =& 0 \\ \begin{pmatrix} -i&1\\-1&-i\end{pmatrix}x =&0 \end{align} which is similar to $\begin{pmatrix}i&-1\\0&0 \end{pmatrix}x = 0$ by row reduction. Take $x_2=1$, as $x_2$ is free. We then have the following equation: \begin{align} &ix_1 - x_2 = 0 \\ \iff& ix_1 = 1 \\ \iff& x_1 = \frac{1}{i} \end{align} Thus the corresponding eigenvector for $\lambda=1+i$ is: $\begin{pmatrix} \dfrac{1}{i} \\ 1\end{pmatrix}$. My solution should then be: \begin{align} x(t) =& e^{(1+i)t}\begin{pmatrix} \dfrac{1}{i} \\ 1\end{pmatrix} \\ =& e^t e^{it}\begin{pmatrix} \dfrac{1}{i} \\ 1\end{pmatrix} \\ =& e^t\left(\cos(t) + i\sin(t)\right)\begin{pmatrix} \dfrac{1}{i} \\ 1\end{pmatrix} \\ \end{align} By taking only the real parts we have the general solution: $\left(c_1e^t\cos(t) + c_2e^t\sin(t)\right)\begin{pmatrix} \dfrac{1}{i} \\ 1\end{pmatrix}$ How can I quickly check this is correct? Ideally I would like to use Sage to verify. I think this would be faster than differentiating my solution and checking whether I get the original equation.
Let me work through the other eigenvalue, and see if you can follow the approach. For $\lambda_2 = 1-i$, we have: $[A - \lambda_2 I]v_2 = \begin{bmatrix}1 -(1-i) & 1\\-1 & 1-(1-i)\end{bmatrix}v_2 = 0$ The RREF of this is: $\begin{bmatrix}1 & -i\\0 &0\end{bmatrix}v_2 = 0 \rightarrow v_2 = (i, 1)$ To write the solution, we have: $\displaystyle x[t] = \begin{bmatrix}x_1[t]\\ x_2[t]\end{bmatrix} = e^{\lambda_2 t}v_2 = e^{(1-i)t}\begin{bmatrix}i\\1\end{bmatrix} = e^te^{-it}\begin{bmatrix}i\\1\end{bmatrix} = e^t(\cos t - i \sin t) \begin{bmatrix}i\\1\end{bmatrix} = e^t\begin{bmatrix} \sin t + i \cos t\\ \cos t -i \sin t \end{bmatrix} = e^t\begin{bmatrix}c_1 \cos t + c_2 \sin t\\ -c_1 \sin t + c_2 \cos t\end{bmatrix}$ Note, I put $c_1$ with the imaginary terms, and $c_2$ with the other terms, but this is totally arbitrary since these are just some constants. For the validation: (1) take $x'[t]$ of the solution we just derived; (2) take the product $Ax$ and verify that it matches the $x'[t]$ expression from the previous calculation. I would recommend emulating this with the other eigenvalue/eigenvector and seeing if you can get a similar result. Lastly, note that $1/i = -i$ (just multiply by $i/i$).
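For the validation step, a symbolic check is only a few lines; here is a sketch with sympy (the analogous check also works in Sage):

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')
A = sp.Matrix([[1, 1], [-1, 1]])
x = sp.exp(t) * sp.Matrix([c1*sp.cos(t) + c2*sp.sin(t),
                           -c1*sp.sin(t) + c2*sp.cos(t)])
print(sp.simplify(x.diff(t) - A*x))  # Matrix([[0], [0]]), so x solves x' = Ax
```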
Show that $7 \mid( 1^{47} +2^{47}+3^{47}+4^{47}+5^{47}+6^{47})$ I am solving this one using Fermat's little theorem, but I got stuck with some manipulations, and there is no way I could tell that the residue of the sum of each term is still divisible by $7$. What could be a better approach, or am I on the right track? Thanks
$6^{47} \equiv (-1)^{47} \equiv -1^{47} \pmod 7$, $5^{47} \equiv (-2)^{47} \equiv -2^{47} \pmod 7$, $4^{47} \equiv (-3)^{47} \equiv -3^{47} \pmod 7$. The terms therefore cancel in pairs, and hence $1^{47} +2^{47}+3^{47}+4^{47}+5^{47}+6^{47} \equiv 0 \pmod 7$.
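A one-line numerical confirmation in Python:

```python
print(sum(pow(a, 47, 7) for a in range(1, 7)) % 7)  # 0
```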
Linear algebra - Coordinate Systems I'm preparing for an upcoming linear algebra exam, and I have come across a question that goes as follows: Let $U = \{(s, s-t, 2s+3t)\}$, where $s$ and $t$ are any real numbers. Find the coordinates of $x = (3, 4, 3)$ relative to the basis $B$ if $x$ is in $U$. Sketch the set $U$ in the $xyz$-coordinate system. It seems that in order to solve this problem, I'll have to find the basis $B$ first! How do I find that as well? The teacher has barely covered coordinate systems and said she is unlikely to include anything from that section on the exam, but I still want to be safe. The book isn't of much help: it explains the topic but doesn't give any examples. Another part of the question asks about proving that $U$ is a subspace of $\mathbb{R}^3$, but I was able to figure that one out on my own. I'd appreciate it if someone could show me how to go about solving the question above.
This might be an answer, depending on how one interprets the phrase "basis $B$", which is undefined in the question as stated: Note that $(s, s - t, 2s + 3t) = s(1, 1, 2) + t(0, -1, 3)$. Taking $s = 1$, $t = 0$ shows that $(1, 1, 2) \in U$. Likewise, taking $s = 0$, $t = 1$ shows $(0, -1, 3) \in U$ as well. Incidentally, the vectors $(1, 1, 2)$ and $(0, -1, 3)$ are clearly linearly independent; to see this in detail, note that if $(s, s - t, 2s + 3t) = s(1, 1, 2) + t(0, -1, 3) = (0,0,0)$, then we must obviously have $s = 0$, whence $s - t = -t = 0$ as well. Assuming $B$ refers to the basis $(1, 1, 2)$, $(0, -1, 3)$ of $U$, it is easy to work out the values of $s$ and $t$ corresponding to $x$: setting $(s, s - t, 2s + 3t) = (3, 4, 3)$, we see that we must have $s = 3$ whence $t = -1$ follows from $s - t = 4$. These check against $2s + 3t = 3$, as the reader may easily verify. The desired coordinates for $x$ in the basis $B$ are thus $(3, -1)$. Think that about covers it, if my assumption about $B$ is correct. Can't provide a graphic, but one is easily constructed noting that the vectors $(1, 1, 2)$, $(0, -1, 3)$ span $U$ in $R^3$ (the "$xyz$" coordinate system).
Calculating 7^7^7^7^7^7^7 mod 100 What is $$\large 7^{7^{7^{7^{7^{7^7}}}}} \pmod{100}$$ I'm not much of a number theorist and I saw this mentioned on the internet somewhere. Should be doable by hand.
Reading the other answers, I realize this is a longer way than necessary, but it gives a more general approach for when things are not as convenient as $7^4\equiv 1\bmod 100$. Note that, for any integer $a$ that is relatively prime to $100$, we have $$a^{40}\equiv 1\bmod 100$$ because $\varphi(100)=40$, and consequently $$a^m\equiv a^n\bmod 100$$ whenever $m\equiv n\bmod 40$. Thus, we need to find $7^{7^{7^{7^{7^{7}}}}}$ modulo $40$. By the Chinese remainder theorem, it is equivalent to know what it is modulo $8$ and modulo $5$. Modulo $8$, we have $7\equiv -1\bmod 8$, and $-1$ to an odd power is going to be $-1$, so we see that $$7^{7^{7^{7^{7^{7}}}}}\equiv (-1)^{7^{7^{7^{7^{7}}}}} \equiv -1\equiv 7\bmod 8.$$ Modulo $5$, we have $7^4\equiv 1\bmod 5$ (again by Euler's theorem), so we need to know $7^{7^{7^{7^{7}}}}\bmod 4$. But $7\equiv -1\bmod 4$, and $7^{7^{7^{7}}}$ is odd, so that $7^{7^{7^{7^{7}}}}\equiv -1\equiv 3\bmod 4$, so that $$7^{7^{7^{7^{7^{7}}}}}\equiv 7^3\equiv 343\equiv 3\bmod 5.$$ Applying the Chinese remainder theorem, we conclude that $$7^{7^{7^{7^{7^{7}}}}}\equiv 23\bmod 40,$$ and hence $$7^{7^{7^{7^{7^{7^{7}}}}}}\equiv 7^{23}\bmod 100.$$ This is tractable by again using the Chinese remainder theorem to find $7^{23}\bmod 4$ and $7^{23}\bmod 25$.
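The whole tower can also be checked mechanically by applying Euler's theorem at each level, which is legitimate here because $7$ is coprime to every modulus that appears ($100, 40, 16, \ldots$); a sketch assuming sympy is available for the totient:

```python
from sympy import totient

def tower_mod(height, m):
    # 7^(7^(...^7)) with `height` sevens, reduced mod m; the exponent can be
    # reduced mod phi(m) since gcd(7, m) = 1 at every level of the recursion.
    if height == 1:
        return 7 % m
    return pow(7, tower_mod(height - 1, int(totient(m))), m)

print(tower_mod(7, 100))  # 43
print(pow(7, 23, 100))    # 43, matching the reduction 7^23 mod 100
```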
Size of new box rotated and then rescaled I have a box of height $h$ and width $w$. I rotate it by $t$ degrees, then resize the result so that the original box fits inside it. What will be the size of the new box? Original box: (figure) Box after rotating some degrees: (figure) New box after rescaling: (figure) So my question is: what is the formula to calculate the new size (width, height) and the position relative to the old one? What I have: width $w$, height $h$, position $(x,y)$ and angle $t$.
Assuming the old rectangle is inscribed in the new one, the situation is as follows. Let $\theta$ ($0 \leq \theta \leq \frac{\pi}{2}$) be the rotation angle, $w'$ the new width and $h'$ the new height; then we have the following equations: $$w' = w \cos \theta + h \sin \theta$$ $$h' = w \sin \theta + h \cos \theta$$ The new rectangle is not similar to the old one, except for $h = w$, when both rectangles are in fact squares. Edit: Considering $O$ (the center of both rectangles) as the origin of the coordinate system, the points $E$, $F$, $G$, and $H$ can be calculated by the following equations: $$E=\left(\frac{w}{2}-w \cos^2 \theta,-\frac{h}{2}-w \sin \theta \cos \theta \right)$$ $$F=\left(\frac{w}{2}+h \sin \theta \cos \theta,-\frac{h}{2}+h \sin^2 \theta \right)$$ $$G=\left(-\frac{w}{2}+w \cos^2 \theta,\frac{h}{2}+w \sin \theta \cos \theta \right)$$ $$H=\left(-\frac{w}{2}-h \sin \theta \cos \theta,-\frac{h}{2}+h \cos^2 \theta \right)$$
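A sketch of these formulas in code, using absolute values so the same expressions cover any rotation angle (the original equations assume $0\le\theta\le\frac\pi2$):

```python
from math import sin, cos, radians

def rotated_bounds(w, h, angle_deg):
    t = radians(angle_deg)
    s, c = abs(sin(t)), abs(cos(t))
    return (w * c + h * s, w * s + h * c)  # (new width, new height)

print(rotated_bounds(4, 2, 30))  # (~4.46, ~3.73)
```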
Notation for intervals I have frequently encountered both $\langle a,b \rangle$ and $[a,b]$ as notation for closed intervals. I have mostly encountered $(a,b)$ for open intervals, but I have also seen $]a,b[$. I recall someone calling the notation with $[a,b]$ and $]a,b[$ French notation. * *What are the origins of the two notations? *Is the name French notation correct? Are they used frequently in France? Or were they perhaps prevalent in the French mathematical community at some point? (In this MO answer Bourbaki is mentioned in connection with the notation $]a,b[$.) Since several answerers have already mentioned that they have never seen $\langle a,b \rangle$ used for closed intervals, I have tried to look for some occurrences of this. The best I can come up with is the article on Czech Wikipedia, where these notations are called Czech notation and French notation. Using $(a,b)$ and $[a,b]$ is called English notation in that article. (I am from Central Europe, too, so it is perhaps not that surprising that I have seen this notation in lectures.) I also tried to google for interval langle or "closed interval" langle. Surprisingly, this led me to a question on MSE where this notation is used for an open interval.
As a French student, all my math teachers (as well as the physics/biology/etc. ones) always used the $[a,b]$ and $]a,b[$ (and the "hybrid" $[a,b[$ and $]a,b]$) notations. We also use, for integer intervals $\{a,a+1,...,b\}$, the \llbracket\rrbracket notation (in LaTeX, package {stmaryrd}): $[[ a,b ]]$. I have never seen the $\langle a,b \rangle$ notation used for intervals, though (only for inner products or more exotic binary operations).
antiderivative of $\sum _{ n=0 }^{ \infty }{ (n+1){ x }^{ 2n+2 } } $ I've proven that the radius of convergence of $\sum _{ n=0 }^{ \infty }{ (n+1){ x }^{ 2n+2 } } $ is $R=1$, and that it doesn't converge at the edges. Now, I was told that this is the derivative of a function $f(x)$, which holds $f(0)=0$. My next step is to find this function in simple terms, and here I get lost. My attempt: $f(x)=\sum _{ n=0 }^{ \infty }{ \frac { n+1 }{ 2n+3 } { x }^{ 2n+3 } } $ and this doesn't seem to help. I'd like to use the fact that $|x|<1$ so I'll get a nice sum based on the sum of a geometric series but I have those irritating coefficients. Any tips?
First, consider $$ g(w)=\sum_{n=0}^{\infty}(n+1)w^n. $$ Integrating term-by-term, we find that the antiderivative $G(w)$ for $g(w)$ is $$ G(w):=\int g(w)\,dw=C+\sum_{n=0}^{\infty}w^{n+1} $$ where $C$ is an arbitrary constant. To make $G(0)=0$, we take $C=0$; then $$ G(w)=\sum_{n=0}^{\infty}w^{n+1}=\sum_{n=1}^{\infty}w^n=\frac{w}{1-w}\qquad\text{if}\qquad\lvert w\rvert<1. $$ (Here, we've used that this last is a geometric series with first term $w$ and common ratio $w$.) So, we find $$ g(w)=G'(w)=\frac{1}{(1-w)^2},\qquad \lvert w\rvert<1. $$ Now, how does this relate to the problem at hand? Note that $$ \sum_{n=0}^{\infty}(n+1)x^{2n+2}=x^2\sum_{n=0}^{\infty}(n+1)(x^2)^n=x^2g(x^2)=\frac{x^2}{(1-x^2)^2} $$ as long as $\lvert x^2\rvert<1$, or equivalently $\lvert x\rvert <1$. From here, you can finish your exercise by integrating this last function with respect to $x$, and choosing the constant of integration that makes its graph pass through $(0,0)$.
What is the value of the series $\sum\limits_{n=1}^\infty \frac{(-1)^n}{n^2}$? I am given the following series: $$\sum_{n=1}^\infty \frac{(-1)^n}{n^2}$$ I have used the alternating series test to show that the series converges. However, how do I go about showing what it converges to?
Consider the Fourier series of $g(x)=x^2$ for $-\pi<x\le\pi$: $$g(x)=\frac{a_0}{2}+\sum_{n=1}^{\infty}a_n\cos(nx)+b_n\sin(nx)$$ note $b_n=0$ for an even function $g(t)=g(-t)$ and that: $$a_n=\frac {1}{\pi} \int _{-\pi }^{\pi }\!{x}^{2}\cos \left( nx \right) {dx} =4\,{\frac { \left( -1 \right) ^{n}}{{n}^{2}}},$$ $$\frac{a_0}{2}=\frac {1}{2\pi} \int _{-\pi }^{\pi }\!{x}^{2} {dx} =\frac{1}{3}\pi^2,$$ $$x^2=\frac{1}{3}\,{\pi }^{2}+\sum _{n=1}^{\infty }4\,{\frac { \left( -1 \right) ^{n }\cos \left( nx \right) }{{n}^{2}}},$$ $$x=0\rightarrow \frac{1}{3}\,{\pi }^{2}+\sum _{n=1}^{\infty }4\,{\frac { \left( -1 \right) ^{n }}{{n}^{2}}}=0,$$ $$\sum _{n=1}^{\infty }{\frac { \left( -1 \right) ^{n} }{{n}^{2}}}=-\frac{1}{12}\,{\pi }^{2}.$$
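As a quick numerical cross-check of this value (plain Python, nothing assumed beyond the standard library), the partial sums settle near $-\pi^2/12\approx-0.822467$:

```python
import math

# Partial sums of sum_{n>=1} (-1)^n / n^2 settle near -pi^2/12.
s = sum((-1)**n / n**2 for n in range(1, 200001))
print(s, -math.pi**2 / 12)   # both ~ -0.82246703...
```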
Vector-by-Vector derivative Could someone please help me out with this derivative? $$ \frac{d}{dx}(xx^T) $$ with both $x$ being vectors. Thanks EDIT: I should clarify that the actual expression I am taking the derivative of is $$ \frac{d}{dx}(xx^TPb) $$ where $Pb$ has the dimension of $x$ but is independent of $x$. So the whole expression $xx^TPb$ is a vector. EDIT2: Would it become by any chance the following? $$ \frac{d}{dx}(xx^TPb) = (Pbx^T)^T + x(Pb)^T = 2x(Pb)^T $$
You can always go back to the basics. Let $v$ be any vector and $h$ a real number. Substitute $x \leftarrow x + h v$ to get $$ (x + hv)(x+hv)^t = (x+hv)(x^t+hv^t) = x x^t + h(xv^t + vx^t) + h^2 vv^t. $$ The linear term in $h$ is your derivative at $x$ in the direction of $v$, so $xv^t + vx^t$ (which is linear in $v$ as expected).
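Applying the same substitution to the full expression in the question's edit, the linear-in-$h$ term of $(x+hv)(x+hv)^t Pb$ is $(xv^t+vx^t)Pb$, i.e. the Jacobian works out to $x(Pb)^t+(x^tPb)I$ rather than $2x(Pb)^t$. A finite-difference sketch with NumPy (random data, my variable names) to check this:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
x, v = rng.standard_normal(n), rng.standard_normal(n)
Pb = rng.standard_normal(n)        # plays the role of the fixed vector P b

# Directional derivative of f(x) = x x^T (Pb) in direction v,
# read off from the linear-in-h term: (x v^T + v x^T) Pb.
analytic = x * (v @ Pb) + v * (x @ Pb)

h = 1e-6
f = lambda z: z * (z @ Pb)         # z z^T (Pb) as a vector
numeric = (f(x + h * v) - f(x - h * v)) / (2 * h)
print(np.max(np.abs(analytic - numeric)))   # ~1e-10 or smaller
```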
A Covering Map $\mathbb{R}P^2\longrightarrow X$ is a homeomorphism I came across the following problem: Any covering map $\mathbb{R}P^2\longrightarrow X$ is a homeomorphism. To solve the problem you can look at the composition of covering maps $$ S^2\longrightarrow \mathbb{R}P^2\longrightarrow X $$ and examine the deck transformations to show that the covering $S^2\longrightarrow X$ only has the identity and antipodal maps as deck transformations. I've seen these types of problems solved by showing that the covering is one-sheeted. Is there a solution to the problem along those lines? EDIT: Even if there isn't a way to do it by showing it is one-sheeted, are there other ways?
1. Prove that $X$ has to be a compact topological surface;
2. Prove that such a covering has to be finite-sheeted;
3. Deduce from 2 and from $\pi_1(\mathbb{R}P^2)$ that $\pi_1(X)$ is finite;
4. Since the map induced by the covering projection on $\pi_1$ is injective, you get $\mathbb{Z}/2\mathbb{Z}< \pi_1(X)$;
5. Conclude using the classification of compact topological surfaces.
Induction and convergence of an inequality: $\frac{1\cdot3\cdot5\cdots(2n-1)}{2\cdot4\cdot6\cdots(2n)}\leq \frac{1}{\sqrt{2n+1}}$ Problem statement: Prove that $\frac{1*3*5*...*(2n-1)}{2*4*6*...(2n)}\leq \frac{1}{\sqrt{2n+1}}$ and that there exists a limit when $n \to \infty $. , $n\in \mathbb{N}$ My progress LHS is equivalent to $\frac{(2n-1)!}{(2n)!}=\frac{(2n-1)(2n-2)(2n-3)\cdot ....}{(2n)(2n-1)(2n-2)\cdot ....}=\frac{1}{2n}$ So we can rewrite our inequality as: $\frac{1}{2n}\leq \frac{1}{\sqrt{2n+1}}$ Let's use induction: For $n=1$ it is obviously true. Assume $n=k$ is correct and show that $n=k+1$ holds. $\frac{1}{2k+2}\leq \frac{1}{\sqrt{2k+3}}\Leftrightarrow 2k+2\geq\sqrt{2k+3}\Leftrightarrow 4(k+\frac{3}{4})^2-\frac{5}{4}$ after squaring and completing the square. And this does not hold for all $n$ About convergence: Is it not enough to check that $\lim_{n \to \infty}\frac{1}{2n}=\infty$ and conclude that it does not converge?
There's a direct proof to the inequality of $\frac{1}{\sqrt{2n+1}}$, though vadim has improved on the bound. Consider $A = \frac{1}{2} \times \frac{3}{4} \times \ldots \times \frac{2n-1} {2n}$ and $B = \frac{2}{3} \times \frac{4}{5} \times \ldots \times \frac{2n}{2n+1}$. Then $AB = \frac{1}{2n+1}$. Since each term of $A$ is smaller than the corresponding term in $B$, hence $A < B$. Thus $A^2 < AB = \frac{1}{2n+1}$, so $$A < \frac{1}{\sqrt{2n+1}}$$ Of course, the second part that a limit exists follows easily, and is clearly 0.
find the power representation of $x^2 \arctan(x^3)$ Wondering what I'm doing wrong in this problem. I'm asked to find the power series representation of $x^2 \arctan(x^3)$. Now I know that arctan's power series representation is this: $$\sum_{n=0}^\infty (-1)^n \frac{x^{2n+1}}{2n+1} $$ I could have sworn that I could just use that formula and then distribute the $x^2$, but I'm getting the wrong answer. Here are the steps I'm taking. 1- plug in $x^3$ for $x$ $$x^2\sum_{n=0}^\infty (-1)^n \frac{(x^3)^{2n+1}}{2n+1} $$ 2- distribute the $2n+1$ to $x^3$ $$x^2\sum_{n=0}^\infty (-1)^n \frac{x^{6n+3}}{2n+1} $$ 3- distribute the $x^2$ $$\sum_{n=0}^\infty (-1)^n \frac{x^{6n+5}}{2n+1} $$ However, this is wrong; according to my book the answer should be $$\sum_{n=0}^\infty (-1)^n \frac{x^{3n+2}}{n} $$ Can someone please point out my mistake? Please forgive me in advance for any mathematical blunders that I post. I'm truly sorry. Thanks Miguel
The book's answer would be right if it said $$ \sum_{\text{odd }n\ge 1} (-1)^{(n-1)/2} \frac{x^{3n+2}}{n}. $$ That would be the same as your answer, i.e. $$ \sum_{\text{odd }n\ge 1} (-1)^{(n-1)/2} \frac{x^{3n+2}}{n} = \sum_{n\ge0} (-1)^n\frac{x^{6n+5}}{2n+1}. $$
compute an integral with residue I have to find the value of $$\int_{-\infty}^{\infty}e^{-x^2}\cos({\lambda x})\,dx$$ using residue theorem. What is a suitable contour? Any help would be appreciate! Thanks...
Hint: By symmetry, we can let $\gamma$ be the path running along the real axis and get that our integral is just $$\int_\gamma e^{-z^2}e^{i\lambda z} dz.$$ Now what happens when you combine these terms and complete the square? Your answer should become a significantly simpler problem. But be careful with the resulting path. There's still a good bit to do with this approach. Edit, more steps: You obtain from this the integral $$\int_{\gamma'} e^{-z^2} dz,$$ where $\gamma'$ runs along the real axis, shifted by $-\frac{\lambda i}{2}$. Now, you can show that if you integrate along the real axis, your answer is $$\int_{-\infty}^\infty e^{-x^2} dx = \sqrt{\pi}.$$ This can be done by contour integration. I claim that the integral we are interested in can be obtained from this using contour integration. Relate the path we want to this path and do some estimates on the ends.
Number of Spanning Trees in a certain graph Let $k,n\in\mathbb N$ and define the simple graph $G_{k,n}=([n],E)$, where $ij\in E\Leftrightarrow 0 <|i-j|\leq k$ for $i\neq j\in [n]$. I need to calculate the number of different spanning trees. I am applying Kirchhoff's Matrix Tree theorem to solve this but I am not getting the answer. For example: $k=3$ and $n=5$, my matrix is \begin{pmatrix} 3 & -1 &-1& -1& 0\\ -1 & 4 &-1 &-1 &-1\\ -1 &-1 & 4 &-1 &-1\\ -1 &-1 &-1 & 4 &-1\\ 0 & -1 &-1 &-1 &3 \end{pmatrix} and the final answer as per Kirchhoff's theorem is the determinant of any cofactor of the matrix. Proceeding in this way I am getting something else, but the answer is 75. Is there another approach to solve this problem, or is my process wrong? Please help. Thank you
The answer seems correct. You can check with a different method in this case, because the graph you are considering is the complete graph minus one specific edge E. By Cayley's formula, there are $5^3=125$ spanning trees of the complete graph on 5 vertices. Each such tree has four edges, and there are 10 possible edges in the complete graph. By taking a sum over all edges in all spanning trees, you can show that $\frac{2}{5}$ of the spanning trees will contain the specific edge $E$. So the remaining number of spanning trees is $\frac{3}{5} \times 125 = 75$, which agrees with your answer.
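For completeness, a direct Kirchhoff computation with NumPy (a sketch; the rounding only suppresses floating-point noise in the determinant):

```python
import numpy as np

# Laplacian of G_{3,5}: the complete graph K_5 minus the edge {1,5}.
L = np.array([[ 3, -1, -1, -1,  0],
              [-1,  4, -1, -1, -1],
              [-1, -1,  4, -1, -1],
              [-1, -1, -1,  4, -1],
              [ 0, -1, -1, -1,  3]])

# Kirchhoff: the tree count is any cofactor, e.g. delete the first row/column.
print(round(np.linalg.det(L[1:, 1:])))   # -> 75
```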
convergence of a series $a_1 + a_1 a_2 + a_1 a_2 a_3 +\cdots$ Suppose all $a_n$ are real numbers and $\lim_{n\to\infty} a_n$ exists. What is the condition for the convergence( or divergence ) of the series $$ a_1 + a_1 a_2 + a_1 a_2 a_3 +\cdots $$ I can prove that $ \lim_{n\to\infty} |a_n| < 1 $ ( or > 1 ) guarantees absolute convergence ( or divergence ). What if $ \lim_{n\to\infty} a_n = 1 \text{ and } a_n < 1 \text{ for all } n $ ?
What if $\lim_{n\to\infty}a_n=1$ and $a_n<1$ for all $n$? Then the series may or may not converge. A necessary criterion for the convergence of the series is that the sequence of products $$p_n = \prod_{k = 1}^n a_k$$ converges to $0$. If the $a_n$ converge to $1$ fast enough, say $a_n = 1 - \frac{1}{2^n}$ ($\sum \lvert 1 - a_n\rvert < \infty$ is sufficient, if no $a_n = 0$), the product converges to a nonzero value, and hence the series diverges. If the convergence of $a_n \to 1$ is slow enough ($a_n = 1 - \frac{1}{\sqrt{n+1}}$ is slow enough), the product converges to $0$ fast enough for the series to converge. Let $a_n = 1 - u_n$, with $0 < u_n < 1$ and $u_n \to 0$. Without loss of generality, assume $u_n < \frac14$. Then $\log p_n = \sum\limits_{k = 1}^n \log (1 - u_k)$. Since for $0 < x < \frac14$, we have $-\frac32 x < \log (1-x) < -x$, we find $$ -\frac32 \sum_{k=1}^n u_k < \log p_n < -\sum_{k=1}^n u_k,$$ and thus can deduce that if $\sum u_k < \infty$, then $\lim\limits_{n\to\infty}p_n > 0$, so the series does not converge. On the other hand, if $\exists C > 1$ with $\sum\limits_{k = 1}^n u_k \geqslant C\cdot \log n$, then $p_n < \exp (-C \cdot\log n) = \frac{1}{n^C}$, and the series converges.
Show that $(x_n)$ converge to $l$. Let $(x_n)$ be a sequence of reals. Show that if every subsequence $(x_{n_k})$ of $(x_n)$ has a further subsequence $(x_{n_{k_r}})$ that converge to $l$, then $(x_n)$ converge to $l$. I know the fact that subsequence of $(x_n)$ converge to the limit same as $(x_n)$ does, but I'm not sure if I can apply this. Thank you.
You are correct with your doubts as that argument applies only if you know that the sequence converges in the first place. Now for a proof, assume the contrary, that is: there exists $\epsilon>0$ such that for all $N\in\mathbb N$ there exists $n>N$ with $|x_n-l|\ge\epsilon$. For $N\in\mathbb N$ let $f(N)$ denote one such $n$. Now consider the following subsequence $(x_{n_k})$ of $(x_n)$: Let $n_1=f(1)$ and then recursively $n_{k+1}=f(n_k)$. Since $(n_k)$ is a strictly increasing infinite subsequence of the naturals, this gives us a subsequence $(x_{n_k})$. By construction, $|x_{n_k}-l|\ge \epsilon$ for all $k$. On the other hand, by the given condition, there exists a subsubsequence converging to $l$, so especially some (in fact almost all) terms of the subsubsequence fulfill $|x_{n_{k_r}}-l|<\epsilon$ - which is absurd. Therefore, the assumption in the beginning was wrong. Instead, $x_n\to l$.
Proof of parallel lines The quadrilateral ABCD is inscribed in circle W. F is the intersection point of AC and BD. BA and CD meet at E. Let the projection of F on AB and CD be G and H, respectively. Let M and N be the midpoints of BC and EF, respectively. If the circumcircle of triangle MNH only meets segment CF at Q, and the circumcircle of triangle MNG only meets segment BF at P, prove that PQ is parallel to BC. I do not know where to begin.
Possibly the last steps of a proof This is no full proof, just some observations which might get you started, but which just as well might be leading in the completely wrong direction. You could start from the end, i.e. with the last step of your proof, and work backwards. $P$ is the midpoint of $FB$ and $Q$ is the midpoint of $FC$. Therefore the triangle $BCF$ is similar to $PQF$, and since they have the edges at $F$ in common, the edges opposite $F$ have to be parallel. So your next question is: why are these the midpoints? You can observe that $NP$ is parallel to $AB=AE$, and $NQ$ is parallel to $CD=CE$. Since $N$ is the midpoint of $FE$, the $\triangle FNP$ is the result of dilating $\triangle FEB$ by a factor of $\frac12$ with center $F$. Likewise for $\triangle FNQ$ and $\triangle FEC$. So this explains why $P$ and $Q$ are midpoints as observed, but leaves the question as to why these lines are parallel. Bits and pieces I don't have the answer to that question yet. But I have a few other observations which I have not proven either but which might be useful as pieces of this puzzle. * *$\measuredangle DBE = \measuredangle ECA = \measuredangle NMG = \measuredangle HMN$. The first equality is due to the cocircularity of $ABCD$, but the others are unexplained so far. *$\measuredangle MGN = \measuredangle NHM$, which implies that the circles $MGN$ and $MHN$ have equal radius, and the triangles formed by these three points each are congruent.
Implication with a there exists quantifier When I negate $ \forall x \in \mathbb R, T(x) \Rightarrow G(x) $ I get $ \exists x \in \mathbb R, T(x) \wedge \neg G(x) $ and NOT $ \exists x \in \mathbb R, T(x) \Rightarrow \neg G(x) $ right? What would it mean if I said $ \exists x \in \mathbb R, T(x) \Rightarrow \neg G(x) $ ? I know in symbolic logic a statement like $ \forall x \in \mathbb R, T(x) \Rightarrow G(x) $ means every T is a G, but what claim am I making between T & G with $ \exists x \in \mathbb R, T(x) \Rightarrow G(x) $ in simple everyday english if you can? Thanks,
You're correct that the negation of $\forall x (T(x) \rightarrow G(x))$ is $\exists x (T(x) \wedge \neg G(x))$. The short answer is that $\exists x (\varphi(x) \rightarrow \psi(x))$ doesn't really have a good English translation. You could try turning this into a disjunction, so that $\exists x (\varphi(x) \rightarrow \psi(x))$ becomes $\exists x (\neg \varphi(x) \vee \psi(x))$. But this is equivalent to $\exists x \neg\varphi(x) \vee \exists x \psi(x)$, which just says "There either exists something which is not $\varphi$ or there exists something which is $\psi$. That's the best you're going to get though. $\exists x (\varphi(x) \rightarrow \psi(x))$ is just something that doesn't have a good translation because it's a rather weak statement. Contrast this with the dual problem $\forall x (\varphi(x) \wedge \psi(x))$, which is a rather strong statement, saying "Everything is both a $\varphi$ and a $\psi$."
"uniquely written" definition I'm having troubles with this definition: My problem is with the uniquely part, for example the zero element: $0=0+0$, but $0=0+0+0$ or $0=0+0+0+0+0+0$. Another example, if $m \in \sum_{i=1}^{10} G_i$ and $m=g_1+g_2$, with $g_1\in G_1$ and $g_2\in G_2$, we have: $m=g_1+g_2$ or $m=g_1+g_2+0+0$. It seems they can't be unique! I really need help. Thanks a lot.
Well notice what the definition says. It says that for each $m \in M$, you need to be able to write $m= \sum\limits_{\lambda \in \Lambda} g_{\lambda}$ where this sum runs over all $\lambda \in \Lambda$ at once, with one summand $g_\lambda \in G_\lambda$ for each index. So the number of terms is fixed by the index set; writing $0+0$ versus $0+0+0$ does not give two different representations in this sense. For $m = 0$, the only possibility is to take $g_{\lambda} = 0$ for every $\lambda$.
Proving the normed linear space, $V, ||a-b||$ is a metric space (Symmetry) The following theorem is given in Metric Spaces by O'Searcoid Theorem: Suppose $V$ is a normed linear space. Then the function $d$ defined on $V \times V$ by $(a,b) \to ||a-b||$ is a metric on $V$ Three conditions of a metric are fairly straight-forward. By the definitions of a norm, I know that $||x|| \ge 0$ and only holds with equality if $x=0$. Thus $||a-b||$ is non-negative and zero if and only if $a=b$. The triangle inequality of a normed linear space requires: $||x+y|| \le ||x|| + ||y||$. Let $x = a - b$ and $y = b - c$. Then $||a - c|| \le || a - b || + || b - c||$ satisfying the triangle inequality for a metric space. What I am having trouble figuring out is symmetry. The definition of a linear space does not impose any condition of a symmetry. I know from the definition of a linear space that given two members of $V$, $u$ and $v$ they must be commutative, however, I do not see how that could extend here. Thus what I would like to request help with is demonstrating $||a - b|| = ||b - a||$.
$$\|a-b\|=\|(-1)(b-a)\|=|-1|\cdot\|b-a\|=\|b-a\|$$
How can I calculate a $4\times 4$ rotation matrix to match a 4d direction vector? I have two 4d vectors, and need to calculate a $4\times 4$ rotation matrix to point from one to the other. edit - I'm getting an idea of how to do it conceptually: find the plane in which the vectors lie, calculate the angle between the vectors using the dot product, then construct the rotation matrix based on the two. The trouble is I don't know how to mechanically do the first or last of those three steps. I'm trying to program objects in 4space, so an ideal solution would be computationally efficient too, but that is secondary.
Consider the plane $P\subset \mathbf{R}^4$ containing the two vectors given to you. Calculate the inner product and get the angle between them. Call the angle $x$. Now there is a 2-d rotation $A$ by angle $x$ inside $P$. And consider identity $I$ on the orthogonal complement of $P$ in $\mathbf{R}^4$. Now $A\oplus I$ is the required 4d matrix
Math Parlor Trick A magician asks a person in the audience to think of a number $\overline {abc}$. He then asks them to sum up $\overline{acb}, \overline{bac}, \overline{bca}, \overline{cab}, \overline{cba}$ and reveal the result. Suppose it is $3194$. What was the original number? The obvious approach was modular arithmetic. $(100a + 10c + b) + (100b + 10a + c) + (100b + 10c + a) + (100c + 10a + b) + (100c + 10b + a) = 3194$ $122a + 212b + 221c = 3194$ Since $122, 212, 221 \equiv 5 (mod\space9)$ and $3194 \equiv 8 (mod\space9)$ $5(a + b + c) \equiv 8 (mod\space9)$ So, $a + b + c = 7$ or $16$ or $26$ Hit and trial produces the result $358$. Any other, more elegant method?
Let $S$ be the sum, and write $$A=S \bmod 10, \qquad B=S \bmod 100.$$ Reading off the digits (so, because of carries, the first two relations hold modulo $10$): $$A \equiv 2a+2b+c \pmod{10}$$ $$\frac{B-A}{10} \equiv 2a+b+2c \pmod{10}$$ $$\frac{S-B}{100} = a+2b+2c \text{ (plus the carries from below)}$$ Now just solve this system for $a$, $b$ and $c$. The original number is $100a+10b+c$. Memorize these coefficients and the recovery can be done in your head.
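Since there are only $900$ three-digit numbers, the magician's inversion can also be checked by brute force (plain Python; the coefficients $122, 212, 221$ are the ones derived in the question):

```python
S = 3194
solutions = [(a, b, c)
             for a in range(1, 10) for b in range(10) for c in range(10)
             if 122*a + 212*b + 221*c == S]
print(solutions)   # [(3, 5, 8)]  ->  the original number was 358
```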
Two questions on topology and continous functions I have two questions: 1.) I have been thinking a while about the fact, that in general the union of closed sets will not be closed, but I could not find a counterexample, does anybody of you have one available? 2.) The other one is, that I thought that one could possibly say that a function f is continuous iff we have $f(\overline{M})=\overline{f(M)}$(In the second part this should mean the closure of $f(M)$). Is this true?
(1) Take the closed sets $$\left\{\;C_n:=\left[0\,,\,1-\frac1n\right]\;\right\}_{n\in\Bbb N}\implies \bigcup_{n\in\Bbb N}C_n=[0,1)$$
Infimum and supremum of the empty set Let $E$ be an empty set. Then, $\sup(E) = -\infty$ and $\inf(E)=+\infty$. I thought it is only meaningful to talk about $\inf(E)$ and $\sup(E)$ if $E$ is non-empty and bounded? Thank you.
There might be different preferences to how one should define this. I am not sure that I understand exactly what you are asking, but maybe the following can be helpful. If we consider subsets of the real numbers, then it is customary to define the infimum of the empty set as being $\infty$. This makes sense since the infimum is the greatest lower bound and every real number is a lower bound. So $\infty$ could be thought of as the greatest such. The supremum of the empty set is $-\infty$. Again this makes sense since the supremum is the least upper bound. Any real number is an upper bound, so $-\infty$ would be the least. Note that when talking about supremum and infimum, one has to start with a partially ordered set $(P, \leq)$. That $(P, \leq)$ is partially ordered means that $\leq$ is reflexive, antisymmetric, and transitive. So let $P = \mathbb{R} \cup \{-\infty, \infty\}$. Define $\leq$ the "obvious way", so that $a\leq \infty$ for all $a\in \mathbb{R}$ and $-\infty \leq a$ for all $a\in \mathbb{R}$. With this definition you have a partial order, and in this setup the infimum and the supremum are as mentioned above. So you don't need non-empty.
Bott periodicity and homotopy groups of spheres I studied the Bott periodicity theorem for the unitary group $U(n)$ and the orthogonal group $O(n)$ using Milnor's book "Morse Theory". Is there a method, using this theorem, to calculate $\pi_{k}(S^{n})$? (For example $U(1) \simeq S^1$, so $\pi_1(S^1)\simeq \mathbb{Z}$.)
In general, no. However there is a strong connection between Bott periodicity and the stable homotopy groups of spheres. It turns out that $\pi_{n+k}(S^{n})$ is independent of $n$ for all sufficiently large $n$ (specifically $n \geq k+2$). We call the groups $\pi_{k}^{S} = \lim \pi_{n+k}(S^{n})$ the stable homotopy groups of spheres. There is a homomorphism, called the stable $J$-homomorphism, $J: \pi_{k}(SO) \rightarrow \pi_{k}^{S}$. The Adams conjecture (now a theorem) implies that the image of $J$ is a direct summand of $\pi_{k}^{S}$, and it pins that image down. By Bott periodicity we know the homotopy groups $\pi_{k}(SO)$ and the definition of $J$, so the Bott periodicity theorem is an important step in computations of stable homotopy groups of spheres (a task which is by no means complete).
Integral $\int_0^\frac{\pi}{2} \sin^7x \cos^5x\, dx $ I'm asked to find the definite integral here but unfortunately I'm floundering; can someone please point me in the right direction? $$\int_0^\frac{\pi}{2} \sin^7x \cos^5x\, dx $$ step 1: break up sin and cos so that I can use substitution $$\int_0^\frac{\pi}{2} \sin^7(x) \cos^4(x) \cos(x) \, dx$$ step 2: apply a trig identity $$\int_0^\frac{\pi}{2} \sin^7x\ (1-\sin^2 x)^2 \cos(x)\, dx$$ step 3: use $u$-substitution $$ \text{let}\,\,\, u= \sin(x),\ du=\cos(x)\,dx $$ step 4: apply the substitution $$\int_0^\frac{\pi}{2} u^7 (1-u^2)^2 \,du $$ step 5: expand, distribute, and change the limits of integration $$\int_0^1 u^7-2u^9+u^{11}\ du $$ step 6: integrate $$(1^7-2(1)^9+1^{11})-0$$ I would just end up with $1$; however, the book answer is $$\frac {1}{120}$$ How can I be so far off?
$$ \begin{aligned} I=\int_0^{\frac{\pi}{2}} \sin ^7 x \cos ^5 x\,dx & \stackrel{x\mapsto\frac{\pi}{2}-x}{=} \int_0^{\frac{\pi}{2}} \cos ^7 x \sin ^5 x\,dx \end{aligned} $$ Averaging the two equal integrals: $$ \begin{aligned} I &=\frac{1}{2} \int_0^{\frac{\pi}{2}} \sin ^5 x\cos ^5 x\left(\sin ^2 x+\cos ^2 x\right) dx \\ &=\frac{1}{64} \int_0^{\frac{\pi}{2}} \sin ^5(2 x)\,dx \\ &=-\frac{1}{128} \int_0^{\frac{\pi}{2}}\left(1-\cos ^2 2 x\right)^2 d(\cos 2 x) \\ &=-\frac{1}{128}\left[\cos 2 x-\frac{2 \cos ^3 2 x}{3}+\frac{\cos ^5 2 x}{5}\right]_0^{\frac{\pi}{2}} \\ &=-\frac{1}{128}\left(-\frac{8}{15}-\frac{8}{15}\right)=\frac{1}{120} \end{aligned} $$
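A quick numerical confirmation, assuming SciPy is available:

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.sin(x)**7 * np.cos(x)**5, 0, np.pi/2)
print(val, 1/120)   # both ~0.008333...
```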
Finding the rotation angles of a plane I have the following question: First I have a global coordinate system x, y and z. Second I have a plane defined somewhere in this coordinate system; I also have the normal vector of this plane. Assuming that the plane originally was lying in the xy plane and the normal vector of the plane was pointing in the same direction as global Z: How can I calculate the three rotation angles around the x, y and z axes by which the plane was rotated to its current position? I have read this question but I do not understand the solution, since math is not directly my top skill. If anyone knows a good TCL library that can be used to program this task, please let me know. A numerical example for the normal vector $(8, -3, 2)$ would probably greatly increase my chance of understanding a solution. Thanks! What I have tried: I projected the normal vector on each of the global planes and calculated the angle between the x, y, z axes. If I use these angles in a rotation matrix I have (and I'm sure it is correct) and try to rotate the original vector by these three angles, I do not get the result I was hoping for....
First off, there is no simple rule (at least as far as I know) to represent a rotation as a cascade of single axis rotations. One mechanism is to compute a rotation matrix and then compute the single axis rotation angles. I should also mention that there is not a unique solution to the problem you are posing, since there are different cascades of rotations that will yield the same final position. Let's first build two reference frames, $A$ and $B$. $A$ will denote the standard Euclidean basis. $B$ will denote the reference frame which results from applying the desired rotation. I will use subscript notation when referring to a vector in a specific reference frame, so $v_A$ is a vector whose coordinates are defined with respect to the standard Euclidean basis, and $v_B$ is defined w.r.t. $B$. All we know so far about $B$ is that the vector $(0,0,1)_B = \frac{1}{\sqrt{77}}(8, -3, 2)_A$. We have one degree of freedom to select how $(0,1,0)_B$ is defined in reference frame $A$. We only require that it be orthogonal to $\frac{1}{\sqrt{77}}(8, -3, 2)_A$. So we would like to find a unit vector such that $$8x-3y+2z=0$$ One solution is $\frac{1}{\sqrt{6}}(1,2,-1)_A$, which you can verify. This forces our hand for how the vector $(1,0,0)_B$ is represented w.r.t. $A$, once we insist that $B$ be right-handed so that the matrix below is a proper rotation (determinant $+1$): taking $(1,0,0)_B = (0,1,0)_B \times (0,0,1)_B$ gives $$(1,0,0)_B=\frac{1}{\sqrt{462}}(1,-10,-19)_A$$ So the rotation matrix from $B$ to $A$ is $$R_B^A=\left( \begin{array}{ccc} \frac{1}{\sqrt{462}} & \frac{1}{\sqrt{6}} & \frac{8}{\sqrt{77}} \\ \frac{-10}{\sqrt{462}} & \frac{2}{\sqrt{6}} & \frac{-3}{\sqrt{77}} \\ \frac{-19}{\sqrt{462}} & \frac{-1}{\sqrt{6}} & \frac{2}{\sqrt{77}} \end{array} \right)$$ Given any vector with coordinates defined in the Euclidean basis (such as a point in the $xy$-plane), multiplying by the inverse of this matrix will yield the representation in our new reference frame. If you want to search for the three axis rotations, I refer you to Wolfram MathWorld's page on Euler Angles. Hope this helps! Let me know if anything is unclear or if you think you see any mistakes.
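Here is the same frame construction in NumPy (a sketch; the choice of $(0,1,0)_B$ is the same arbitrary one made above, and the cross product enforces the right-handed orientation):

```python
import numpy as np

n = np.array([8., -3., 2.])
z = n / np.linalg.norm(n)                  # (0,0,1)_B in A-coordinates
y = np.array([1., 2., -1.]) / np.sqrt(6)   # any unit vector with n . y = 0
x = np.cross(y, z)                         # right-handed: x = y cross z

R = np.column_stack([x, y, z])             # R_B^A
print(np.allclose(R.T @ R, np.eye(3)))     # orthogonal: True
print(np.isclose(np.linalg.det(R), 1.0))   # proper rotation: True
print(R @ np.array([0., 0., 1.]))          # -> the unit normal, as expected
```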
Triangle integral with vertices Evaluate $$I=\iint\limits_R \sin \left(\frac{x+y}{2}\right)\cos\left(\frac{x-y}{2}\right)\, dA,$$ where $R$ is the triangle with vertices $(0,0),(2,0)$ and $(1,1)$. Hint: use $u=\dfrac{x+y}{2},v=\dfrac{x-y}{2}$. Can anyone help me with this question I am very lost. Please help I know you can make the intergal $\sin(u)\cos(v)$, but then what to do?
$\sin\left(\frac{x+y}{2}\right)\cos\left(\frac{x-y}{2}\right)=\frac{1}{2}\left(\sin x+\sin y\right)$ The line joining $(0,0)$ and $(2,0)$ has equation $y=0$ with $0\leq x\leq 2$. The second line: $y=-x+2$. The third line: $y=x$. The integral becomes:
$$I=\frac{1}{2}\left(\int\limits_{0}^{1}\int\limits_{0}^{x}+\int\limits_{1}^{2}\int\limits_{0}^{2-x}\right)(\sin x+\sin y)\,dy\,dx$$
$$=\frac{1}{2}\left(\int\limits_0^1\left[y\sin x-\cos y\right]_0^x dx+\int\limits_1^2\left[y\sin x-\cos y\right]_0^{2-x}dx\right)$$
$$=\frac{1}{2}\left(\left[x-x\cos x\right]_0^1+\left[(x-2)\cos x-\sin x+x+\sin(2-x)\right]_1^2\right)$$
$$=\frac{1}{2}\Big((1-\cos 1)+(1-\sin 2+\cos 1)\Big)=1-\frac{\sin 2}{2}$$
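A numerical check of the two pieces with SciPy (note that dblquad takes the integrand as f(y, x), inner variable first):

```python
import numpy as np
from scipy.integrate import dblquad

f = lambda y, x: np.sin((x + y) / 2) * np.cos((x - y) / 2)
left, _  = dblquad(f, 0, 1, 0, lambda x: x)      # 0 <= x <= 1, 0 <= y <= x
right, _ = dblquad(f, 1, 2, 0, lambda x: 2 - x)  # 1 <= x <= 2, 0 <= y <= 2-x
print(left + right, 1 - np.sin(2) / 2)           # ~0.54535 both
```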
Favourite applications of the Nakayama Lemma Inspired by a recent question on the nilradical of an absolutely flat ring, what are some of your favourite applications of the Nakayama Lemma? It would be good if you outlined a proof for the result too. I am also interested to see the Nakayama Lemma prove some facts in Algebraic Geometry if possible. Here are some facts which one can use the Nakayama lemma to prove. * *A local ring that is absolutely flat is a field - proof given here. *Every set of $n$ - generators for a free module of rank $n$ is a basis - proof given here. *For any integral domain $R$ (that is not a field) with fraction field $F$, it is never the case that $F$ is a f.g. $R$ - module. Sketch proof: if $F$ is f.g. as a $R$ - module then certainly it is f.g. as a $R_{\mathfrak{m}}$ module for any maximal ideal $\mathfrak{m}$. Then $\mathfrak{m}_{\mathfrak{m}}F = F$ and so Nakayama's Lemma implies $F = 0$ which is ridiculous.
You might be interested in this. It contains some applications of Nakayama's Lemma in Commutative Algebra and Algebraic Geometry.
If $\lim_{x \rightarrow \infty} f(x)$ is finite, is it true that $ \lim_{x \rightarrow \infty} f'(x) = 0$? Does finite $\lim_{x \rightarrow \infty} f(x)$ imply that $\lim_{x \rightarrow \infty} f'(x) = 0$? If not, could you provide a counterexample? It's obvious for constant function. But what about others?
Simple counterexample: $f(x) = \frac{\sin x^2}{x}$. UPDATE: It may seem that such an answer is an unexplicable lucky guess, but it is not. I strongly suggest looking at Brian M. Scott's answer to see why. His answer reveals exactly the reasoning that should first happen in one's head. I started thinking along the same lines, and then I just replaced those triangular bumps with $\sin x^2$ that oscillates more and more quickly as $x$ goes to infinity.
Norms in extended fields Let's have some notation to start with: $K$ is a number field and $L$ is an extension of $K$. Let $\mathfrak{p}$ be a prime ideal in $K$ and let its norm with respect to $K$ be denoted as $N_{\mathbb{Q}}^K(\mathfrak{p})$. My question is this: If $|L:K|=n$, what is $N_{\mathbb{Q}}^L(\mathfrak{p})$? I would like to think that $N_{\mathbb{Q}}^L(\mathfrak{p})=\left(N_{\mathbb{Q}}^K(\mathfrak{p})\right)^n$, i.e. if $L$ is a quadratic extension of $K$, then $N_{\mathbb{Q}}^L(\mathfrak{p})=\left(N_{\mathbb{Q}}^K(\mathfrak{p})\right)^2$. Is this right? I feel that the proof would involve using the definition that $N_K^L(x)$ is the determinant of the multiplication-by-$x$ matrix (here, $K$ and $L$ are arbitrary). Thanks!
The question is simple, if one notices the following equality. Let $p\in \mathfrak p\cap \mathbb Z$ be a prime integer in $\mathfrak p$, and let $f$ be the inertia degree of $\mathfrak p$. Then $N^K_{\mathbb Q}\mathfrak p=p^f$. Now the result follows from the multiplicativity of the inertia degree $f$. P.S. The above equality can be found in any standard algebraic number theory text.
Determining whether a coin is fair I have a dataset where an ostensibly 50% process has been tested 118 times and has come up positive 84 times. My actual question: * *IF a process has a 50% chance of testing positive and *IF you then run it 118 times *What is the probability that you get AT LEAST 84 successes? My gut feeling is that, the more tests are run, the closer to a 50% success rate I should get and so something might be wrong with the process (That is, it might not truly be 50%) but at the same time, it looks like it's running correctly, so I want to know what the chances are that it's actually correct and I've just had a long string of successes.
Of course, 118 is in the "small numbers regime", where one can easily (use a computer to) calculate the probability exactly. By WolframAlpha, the probability that you get at least 84 successes $\;\;=\;\; \frac{\displaystyle\sum_{s=84}^{118}\:\binom{118}s}{2^{118}}$ $=\;\; \frac{392493659183064677180203372911}{166153499473114484112975882535043072} \;\;\approx\;\; 0.00000236224 \;\;\;\; $.
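The same tail probability in Python, assuming SciPy is available:

```python
from scipy.stats import binom

# sf(83, ...) is P(X > 83) = P(X >= 84) for X ~ Binomial(118, 1/2)
print(binom.sf(83, 118, 0.5))   # ~2.36224e-06
```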
How do we take second order of total differential? This is the total differential $$df=dx\frac {\partial f}{\partial x}+dy\frac {\partial f}{\partial y}.$$ How do we take higher orders of total differential, $d^2 f=$? Suppose I have $f(x,y)$ and I want the second order total differential $d^2f$?
I will assume that you are referring to the Fréchet derivative. If $U\subseteq\mathbb{R}^n$ is an open set and we have functions $\omega_{j_1,\dots,j_p}:U\to\mathbb{R}$, then $$ D\left(\sum_{j_1,\dots,j_p} \omega_{j_1,\dots,j_p} dx_{j_1}\otimes\cdots\otimes dx_{j_p}\right) = \sum_{j_1,\dots,j_p}\sum_{j=1}^n \frac{\partial\omega_{j_1,\dots,j_p}}{\partial x_{j}} dx_j\otimes dx_{j_1}\otimes\cdots\otimes dx_{j_p}. $$ Here $dx_{i}$ is the projection onto the $i$th coordinate, and if $\alpha,\beta$ are multilinear forms then $\alpha\otimes\beta$ is the multilinear form defined by $(\alpha\otimes\beta)(x,y)=\alpha(x)\beta(y)$. For example, let $f(x,y)=x^3+x^2 y^2+y^3$. Then \begin{align} Df&=(3x^2+2xy^2)dx+(2x^2y+3y^2)dy; \\ D^2f&=(6x+2y^2)dx\otimes dx+4xy(dx\otimes dy+dy\otimes dx)+(2x^2+6y)dy\otimes dy; \\ D^3f&=6dx\otimes dx\otimes dx+4y(dx\otimes dx\otimes dy+dx\otimes dy\otimes dx+dy\otimes dx\otimes dx)\\ &\qquad+4x(dx\otimes dy\otimes dy+dy\otimes dx\otimes dy+dy\otimes dy\otimes dx) \\ &\qquad+6dy\otimes dy\otimes dy. \end{align} Since $D^p f(x)$ is always a symmetric multilinear map if $f$ is of class $C^p$, you might want to simplify the above by using the symmetric product (of tensors).
Odds of being correct X times in a row Is there a simple way to know what the chances are of being correct for a given number of opportunities? To keep this simple: I am either right or wrong with a 50/50 chance. What are the odds that I'll be correct 7 times in a row or 20 or simply X times? ... and can the answer be put in simple terms ;)
You guys are making it too complicated. It goes like this: double your last...
1st time: 1 in 2 (win a single coin toss)
2nd time: 1 in 4 (win 2 consecutive coin tosses)
3rd time: 1 in 8
4th time: 1 in 16
5th time: 1 in 32
6th time: 1 in 64
... etc ... 1 in $2^n$ for $n$ wins in a row. If the events are independent, just multiply the probabilities together; this works even when the odds aren't 50/50 or when the events have different odds. 1 in 4 should be represented as the fraction 1/4, 2 in 17 as 2/17, and so on... An example would be: "What would be the odds of winning both a 1 in 3 bet and a 1 in 6 bet?" $1/3 \times 1/6 = 1/18$, or just multiply the 3 and 6 to make it easy. If you want to know the odds of winning one and losing another, nothing changes except that a loss has the complementary probability: losing a 1 in 4 bet happens with probability 3/4. "What are the odds of winning a 1 in 4 bet and a 1 in 6 bet, then losing a 1 in 4 bet?" $1/4 \times 1/6 \times 3/4 = 1/32$. I'm not going to bother with the probability-versus-odds distinction here, since for this question both boil down to the same calculation.
How to determine the arc length of ellipse? I want to determine the length of an arc from the ellipse in the picture below: How can I determine the length of $d$?
Giving a Mathematica calculation. Same result as coffeemath (+1). The first input converts the $50°=5\pi/18$ polar angle into the ellipse parameter $t$ via $\tan t=\frac{a}{b}\tan\theta$ (with $a=3.05$, $b=2.23$); the last input integrates the speed $\sqrt{x'(t)^2+y'(t)^2}$ from $0$ up to that parameter value. In[1]:= ArcTan[3.05*Tan[5Pi/18]/2.23] Out[1]= 1.02051 In[2]:= x=3.05 Cos[t]; In[3]:= y=2.23 Sin[t]; In[4]:= NIntegrate[Sqrt[D[x,t]^2+D[y,t]^2],{t,0,1.02051}] Out[4]= 2.53143
divergence of $\int_{2}^{\infty}\frac{dx}{x^{2}-x-2}$ i ran into this question and im sitting on it for a long time. why does this integral diverge: $$\int_{2}^{\infty}\frac{dx}{x^{2}-x-2}$$ thank you very much in advance. yaron.
$$\int_{2}^{\infty}\frac{dx}{x^{2}-x-2} = \int_{2}^{\infty}\frac{dx}{(x - 2)(x+1)}$$ Hence at $x = 2$, the integrand is undefined (the denominator "zeroes out" at $x = 2$ and $x = -1$), so $x = 2$ is not in the domain of the integrand. Although we could find the indefinite integral, e.g., using partial fractions, the indefinite integral of the term with denominator $(x - 2)$ would be $$A\ln|x - 2| + C$$ which is also undefined when $x = 2.$ Recall, the limits of integration are exactly those: limits. Since here $A = \frac13 > 0$, $$\lim_{x\to 2^+} (A\ln|x - 2| + C) = -\infty$$ and hence, since the limit diverges, so too the integral diverges.
Valid Alternative Proof to an Elementary Number Theory question in congruences? So, I've recently started teaching myself elementary number theory (since it does not require any specific mathematical background and it seems like a good way to keep my brain in shape until my freshman college year) using Elementary Number Theory by Underwood Dudley. I've run across the following question: Show that every prime (except $2$ or $3$) is congruent to $1$ or $3$ (mod $4$) Now I am aware of one method of proof, which is looking at all the residues of $p$ (mod $4$) and eliminating the improbable ones. But before that, I found another possible way of proving it. Since $p$ must be odd: $p$ $\equiv$ $1$ (mod $2$) We can write this as: 2$|$$(p-1)$ But since $p$ is odd we can also say: 2$|$$(p-3)$ If $a|b$ and $c|d$ then $ac|bd$, then it follows that: $4|(p-1)(p-3)$ The $3$ possibilites then are: $1.$ $4|(p-1)(p-3)$ $2.$ $4|(p-1)$ $3.$ $4|(p-3)$ Thus, by the definition of congruence, we have the 3 possibilites: $1.$ $p \equiv 1$ (mod $4$) $2.$ $p \equiv 3$ (mod $4$) $3.$ $4|(p-1)(p-3) = 4|p^2-4p+3$ therefore $p^2-4p+3 = 4m$ Then $p^2+3 = 4m +4p$. Set $m+p=z$ Then from $p^2+3 = 4z$ it follows that $p^2 \equiv -3$ (mod $4$) (Is this correct?) Can anyone please tell me if this is a valid proof? Thank you in advance. EDIT: Taken the first possibility into account. I also realize that there are much simpler proofs but I was curious as to whether this approach also works.
I don't think there is anything wrong with your proof, but, to put it more simply, the congruence $$ p \equiv 1 \pmod{2} $$ is equivalent to $$ \text{either}\quad p \equiv 1 \pmod{4}, \quad\text{or}\quad p \equiv 3 \pmod{4}. $$ This is simply because if $p \equiv 0, 2 \pmod{4}$, then $p \equiv 0 \pmod{2}$. And then, note that the prime $3$ does fit in: $3 \equiv 3 \pmod{4}$.
An odd integer minus an even integer is odd. Prove or Disprove: An odd integer minus an even integer is odd. I am assuming you would define an odd integer and an even integer. than you would use quantifiers which shows your solution to be odd or even. I am unsure on how to show this...
Instead of a pure algebraic argument, which I don't dislike, it's also possible to see visually. Any even number can be represented by an array consisting of 2 x n objects, where n represents some number of objects. An odd number will be represented by a "2 by n" array with an item left over. Odd numbers don't have "evenly matched" rows or columns (depending on how you depict the array). Removing or subtracting an even number of items, then, will remove pairs of objects or items in the array. The leftover item (the "+1") will still remain and cannot have a "partner." Thus, an odd minus an even (or plus an even, for that matter) must be an odd result.
The image of $I$ is an open interval and maps $\mathbb{R}$ diffeomorphically onto $\mathbb{R}$? This is Problem 3 in Guillemin & Pollack's Differential Topology on page 18. So that means I just started and am struggling with the beginning. So I would be expecting a less involved proof: Let $f: \mathbb{R} \rightarrow \mathbb{R}$ be a local diffeomorphism. Prove that the image of $f$ is an open interval and maps $\mathbb{R}$ diffeomorphically onto this interval. I am rather confused with this question. So just the identity works as $f$, right? * *The derivative of the identity is still the identity; it is non-singular at any point. So it is a local diffeomorphism. *$I$ maps $\mathbb{R} \rightarrow \mathbb{R}$, and the latter is open. *$I$ is smooth and bijective, and its inverse $I$ is also smooth. Hence it maps diffeomorphically. Thanks for @Zev Chonoles's comment. Now I realized what I am asked to prove, though still at a loss on how.
If $f$ is a local diffeomorphism then the image must be connected; try to classify the connected subsets of $\mathbb{R}$ into four categories. Since $f$ is an open map, this leaves you only one option. I do not know if this is the proof the author has in mind.
Probability Question about Tennis Games! $2^{n}$ players enter a single elimination tennis tournament. You can assume that the players are of equal ability. Find the probability that two particular players meet each other in the tournament. I could't make a serious attempt on the question, hope you can excuse me this time.
We will make the usual unreasonable assumptions. We also assume (and this is the way tournaments are usually arranged) that the initial locations of the players on the left of your picture are chosen in advance. We also assume (not so reasonable) that the locations are all equally likely. Work with $n=32$, like in your picture. It is enough to show the pattern. Call the two players A and B. Without loss of generality, we may, by symmetry, assume that A is the player at the top left of your picture. Maybe they meet in the first round. Then player B has to be the second player in your list. The probability of this is $\frac{1}{31}$. Maybe they meet in the second round. For that, B has to be $3$ or $4$ on your list, and they must both win. The probability is $\frac{2}{31}\frac{1}{2^2}$. Maybe they meet in the third round. For that, B must be one of players $5$ to $8$, and each player must win twice. The probability is $\frac{4}{31}\frac{1}{2^4}$. Maybe they meet in the fourth round, probability $\frac{8}{31}\frac{1}{2^6}$. Or maybe they meet in the final, probability $\frac{16}{31}\frac{1}{2^8}$. Add up. For the sake of the generalization, note we have a finite geometric series.
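Adding up with exact fractions (plain Python) gives $1/16$ for $32$ players, and running the same sum for general $n$ suggests, as the geometric series confirms, the closed form $2/2^{n}$ for $2^{n}$ players:

```python
from fractions import Fraction

def meet_probability(n):
    """P(two given players meet) in a single-elimination bracket of 2^n players."""
    return sum(Fraction(2**(r - 1), 2**n - 1) * Fraction(1, 4**(r - 1))
               for r in range(1, n + 1))

print(meet_probability(5))                        # 1/16 for 32 players
print([meet_probability(n) for n in (1, 2, 3)])   # 1, 1/2, 1/4  ->  2/2^n
```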
Regression towards the mean v/s the Gambler's fallacy Suppose you toss a (fair) coin 9 times, and get heads on all of them. Wouldn't the probability of getting a tails increase from 50/50 due to regression towards the mean? I know that that shouldn't happen, as the tosses are independent event. However, it seems to go against the idea of "things evening out".
This is interesting because it shows how tricky the mind can be. I arrived at this web site after reading the book by Kahneman, "Thinking, Fast and Slow". I do not see contradiction between the gambler´s fallacy and regression towards the mean. According to the regression principle, the best prediction of the next measure of a random variable is the mean. This is precisely what is assumed when considering that each toss is an independent event; that is, the mean (0.5 probability) is the best prediction. This applies for the next event’s theoretical probability and there is no need for a next bunch of tosses. The reason we are inclined to think that after a “long” run of repeated outcomes the best prediction is other than the mean value of a random variable has to do with heuristics. According to Abelson's first law of statistics, "Chance is lumpy". I quote Abelson: "People generally fail to appreciate that occasional long runs of one or the other outcome are a natural feature of random sequences." Some studies have shown that persons are bad generators of random numbers. When asked to write down a series of chance outcomes, subjects tend to avoid long runs of either outcome. They write sequences that quickly alternate between outcomes. This is so because we expect random outcomes to be "representative" of the process that generates them (a problem related to heuristics). Therefore, assuming that the best prediction for a tenth toss in you example should be other than 0.5, is a consequence of what unconsciously (“fast thinking”) we want to be represented in our sample. Fool gamblers are bad samplers. Alfredo Hernandez
Is a semigroup $G$ with left identity and right inverses a group? Hungerford's Algebra poses the question: Is it true that a semigroup $G$ that has a left identity element and in which every element has a right inverse is a group? Now, If both the identity and the inverse are of the same side, this is simple. For, instead of the above, say every element has a left inverse. For $a \in G$ denote this left inverse by $a^{-1}$. Then $$(aa^{-1})(aa^{-1}) = a(a^{-1}a)a^{-1} = aa^{-1}$$ and we can use the fact that $$cc = c \Longrightarrow c = 1$$ to get that inverses are in fact two-sided: $$ aa^{-1} = 1$$ From which it follows that $$a = 1 \cdot a = (aa^{-1})a = a (a^{-1}a) = a \cdot 1$$ as desired. But in the scenario given we cannot use $cc = c \Longrightarrow c = 1$, and I can see no other way to prove this. At the same time, I cannot find a counter-example. Is there a simple resolution to this question?
If $cc=c$ then $c$ is called an idempotent element. A semigroup with a left unit and right inverses is called a left-right system, or shortly an $(l, r)$ system. If you take all the idempotent elements of an $(l,r)$ system they also form an $(l,r)$ system, called an idempotent $(l,r)$ system. In such a system the product of two elements is equal to the second element, because if you take $f$ and $g$ to be two such elements, and $e$ is the unit, then $fg=feg=fff^{-1}g=ff^{-1}g=eg=g.$ So, each element in such a system is a left unit and also a right inverse, and any such system is not a group because no element can be a right unit. It is now also easy to see that, if you define the multiplication of two elements to be equal to the second one, you get an idempotent $(l,r)$ system, which is obviously not a group. For more details and some extra facts check the 1944 article by Henry B. Mann called "On certain systems which are almost groups". You can easily google it and find it online.
Solving for X in a simple matrix equation system. I am trying to solve for X in this simple matrix equation system: $$\begin{bmatrix}7 & 7\\2 & 4\\\end{bmatrix} - X\begin{bmatrix}5 & -1\\6 & -4\\\end{bmatrix} = E $$ where $E$ is the identity matrix. If I multiply $X$ with $\begin{bmatrix}5 & -1\\6 & -4\\\end{bmatrix}$ I get the following system: $$\begin{bmatrix}5x_1 & -1x_2\\6x_3 & -4x_4\\\end{bmatrix}$$ By subtracting this from $\begin{bmatrix}7 & 7\\2 & 4\\\end{bmatrix}$ I get $\begin{bmatrix}7 - 5x_1 & 7 + 1x_2\\2 - 6x_3 & 4 + 4x_4\\\end{bmatrix} = \begin{bmatrix}1 & 0\\0 & 1\\\end{bmatrix}$ Which gives me: $7-5x_1 = 1$ $7+1x_2 = 0$ $2-6x_3 = 0$ $4+4x_4 = 1$ These are not the correct answers, can anyone help me out here? Thank you!
Since $\begin{pmatrix}7&7\\2&4\end{pmatrix}-X\begin{pmatrix}5&-1\\6&-4\end{pmatrix}=\begin{pmatrix}1&0\\0&1\end{pmatrix}$, we obtain: $\begin{pmatrix}6&7\\2&3\end{pmatrix}=\begin{pmatrix}5x_1+6x_2&-x_1-4x_2\\5x_3+6x_4&-x_3-4x_4\end{pmatrix}$, where $X=\begin{pmatrix}x_1&x_2\\x_3&x_4\end{pmatrix}$. Now you can multiply both sides of the equation by $\frac{1}{-14}\begin{pmatrix}-4&1\\-6&5\end{pmatrix}$ =(inverse of $\begin{pmatrix}5&-1\\6&-4\end{pmatrix}$), to find: $X=\frac{1}{-14}\begin{pmatrix}6&7\\2&3\end{pmatrix}\begin{pmatrix}-4&1\\-6&5\end{pmatrix}=\frac{1}{-14}\begin{pmatrix}-66&41\\-26&17\end{pmatrix}$. Hope this helps.
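Equivalently in NumPy: solve $A - XM = I$ as $X=(A-I)M^{-1}$ and check the result:

```python
import numpy as np

A = np.array([[7., 7.], [2., 4.]])
M = np.array([[5., -1.], [6., -4.]])

X = (A - np.eye(2)) @ np.linalg.inv(M)    # from A - X M = I, so X = (A - I) M^{-1}
print(X)                                  # [[ 4.7142857 -2.9285714]
                                          #  [ 1.8571429 -1.2142857]]
print(np.allclose(A - X @ M, np.eye(2)))  # True
```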
How does one combine proportionality? this is something that often comes up in both Physics and Mathematics, in my A Levels. Here is the crux of the problem. So, you have something like this : $A \propto B$ which means that $A = kB \tag{1}$ Fine, then you get something like : $A \propto L^2$ which means that $A = k'L^2 \tag{2}$ Okay, so from $(1)$ and $(2)$ that they derive : $$A \propto BL^2$$ Now how does that work? How do we derive from the properties in $(1)$ and $(2)$, that $A \propto BL^2$. Thanks in advance.
Suppose a variable $A$ depends on two independent factors $B,C$, then $A\propto B\implies A=kB$, but here $k$ is a constant w.r.t. $B$ not $C$; in fact, $k=f(C)\tag{1}$ Similarly, $A\propto C\implies A=k'C$ but here $k'$ is a constant w.r.t. $C$ not $B$; in fact, $k'=g(B)\tag{2}$ From $(1)$ and $(2)$, $f(C)B=g(B)C$. Dividing both sides by $BC$ gives $\frac{f(C)}{C}=\frac{g(B)}{B}$; the left side depends only on $C$ and the right side only on $B$, so both must equal some constant $k''$, i.e. $f(C)=k''C$. Putting this in $(1)$ gives $A=k''CB\implies A\propto BC\tag{Q.E.D.}$
pressure in earth's atmosphere as a function of height above sea level While I was studying the measurements of pressure at earth's atmosphere,I found the barometric formula which is more complex equation ($P'=Pe^{-mgh/kT}$) than what I used so far ($p=h\rho g$). So I want to know how this complex formula build up? I could reach at the point of $${dP \over dh}=-{mgP \over kT}$$ From this how can I obtain that equation. Please give me a Mathematical explanation.
If $\frac{dP}{dh} = (-\frac{mgP}{kT})$, then $\frac{1}{P} \frac{dP}{dh} = (-\frac{mg}{kT})$, or $\frac{d(ln P)}{dh} = (-\frac{mg}{kT})$. Integrating with respect to $h$ over the interval $[h_0, h]$ yields $ln(P(h)) - ln(P(h_0)) = (-\frac{mg}{kT})(h - h_0)$, or $ln(\frac{P(h)}{P(h_0)}) = (-\frac{mg}{kT})(h - h_0)$, or $\frac{P(h)}{P(h_0)} = exp(-\frac{mg}{kT}(h - h_0))$, or $P(h) = P(h_0)exp(-\frac{mg}{kT}(h - h_0))$. If we now take $h_0 = 0$, and set $P = P(0)$, $P' = P(h)$, the desired formula is had.
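Numerically, a sketch with assumed round-number constants (mean molecular mass of air $m\approx4.8\times10^{-26}\,$kg, $T=288\,$K, sea-level $P_0=101325\,$Pa; note the constant-$T$ assumption is itself an approximation, as in the derivation above):

```python
import math

# Isothermal barometric formula P(h) = P0 * exp(-m g h / (k T)).
k, g = 1.380649e-23, 9.81
m, T, P0 = 4.8e-26, 288.0, 101325.0

P = lambda h: P0 * math.exp(-m * g * h / (k * T))
for h in (0.0, 5000.0, 8848.0):
    print(h, round(P(h)))   # ~101325, ~56000, ~35500 Pa
```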
Proof of Riemann-Lebesgue lemma: what does "integration by parts in each variable" mean? I was reading the proof of the Riemann-Lebesgue lemma on Wikipedia, and something confused me. It says the following: What does the author mean by "integration by parts in each variable"? If we integrate by parts with respect to $x$, then (filling in the limits, which I believe are $-\infty$ and $\infty$) I get $$\hat{f}(z) = \left[\frac{-1}{iz}f(x)e^{-izx}\right]_{-\infty}^{\infty} + \frac{1}{iz}\int_{-\infty}^{\infty}e^{-izx}f'(x)dx.$$ I think I am missing something here...it's not clear to me why the limit at $-\infty$ of the first term should exist. Can anyone clarify this for me? Thanks very much.
This question was answered by @danielFischer in the comments.
Proving an operator is self-adjoint I have a linear operator $T:V \to V$ (where $V$ is a finite-dimensional vector space) such that $T^9=T^8$ and $T$ is normal, I need to prove that $T$ is self-adjoint and also that $T^2=T$. Would appreciate any help given. Thanks a million!
Hint. Call the underlying field $\mathbb{F}$. As $T$ is normal and its characteristic polynomial can be split into linear factors $\underline{\text{over }\, \mathbb{F}}$ (why?), $T$ is unitarily diagonalisable over $\mathbb{F}$. Now the rest is obvious.
domain of square root What is the domain and range of $\sqrt{3-t} - \sqrt{2+t}$? I consider the domain to be the intersection of the two separate domains of each square root. My domain is $[-2,3]$. Is it right? Are there methods for figuring out the domain and range in this kind of problem?
You are right about the domain. As to the range, use the fact that as $t$ travels from $-2$ to $3$, $\sqrt{3-t}$ is decreasing, and $\sqrt{2+t}$ is increasing, so the difference is strictly decreasing, running continuously from $\sqrt{5}$ at $t=-2$ down to $-\sqrt{5}$ at $t=3$. The range is therefore $[-\sqrt{5},\sqrt{5}]$.
$\sum^{\infty}_{n=1} \frac {x^n}{1+x^{2n}}$ interval of convergence I need to find interval of convergence for this series: $$\sum^{\infty}_{n=1} \frac {x^n}{1+x^{2n}}$$ I noticed that if $x=0$ then series is convergent. Unfortunately, that’s it.
If $\left| x\right|<1$ then $$ \left| \frac{x^n}{1+x^{2n}} \right|\leq \left| x\right|^n $$ because $1+x^{2n}\geq1$. Now $\sum_{n=1}^{\infty} \left| x\right|^n$ is a convergent geometric series. Similarly, if $\left| x\right|>1$ then $$ \left| \frac{x^n}{1+x^{2n}} \right|\leq \frac{1}{ \left| x\right|^n} $$ because $1+x^{2n}\geq x^{2n}$. Now $\sum_{n=1}^{\infty} \frac{1}{\left| x\right|^n}$ is a convergent geometric series. If $x=1$ or $x=-1$ the series is divergent, since the terms are $\pm\frac12$ and do not tend to $0$. So the series converges exactly for $\lvert x\rvert\neq 1$.
Why does $\oint\mathbf{E}\cdot{d}\boldsymbol\ell=0$ imply $\nabla\times\mathbf{E}=\mathbf{0}$? I am looking at Griffith's Electrodynamics textbook and on page 76 he is discussing the curl of electric field in electrostatics. He claims that since $$\oint_C\mathbf{E}\cdot{d}\boldsymbol\ell=0$$ then $$\nabla\times\mathbf{E}=\mathbf{0}$$ I don't follow this logic. Although I know that curl of $\mathbf{E}$ in statics is $\mathbf{0}$, I can't see how you can simply apply Stokes' theorem to equate the two statements. If we take Stokes' original theorem, we have $\oint\mathbf{E}\cdot{d}\boldsymbol\ell=\int\nabla\times\mathbf{E}\cdot{d}\mathbf{a}=0$. How does this imply $\nabla\times\mathbf{E}=\mathbf{0}$? Griffiths seem to imply that this step is pretty easy, but I can't see it!
The relation $\oint\mathbf E\cdot d\boldsymbol\ell=\int_S\nabla\times\mathbf E\cdot d\mathbf a=0$ holds for every surface $S$: the infinitesimal area $d\mathbf{a}$ is arbitrary. If $\nabla\times\mathbf{E}$ were nonzero at some point, you could choose a small flat disk through that point, oriented along the curl, over which the integrand has a constant sign, making the surface integral nonzero, a contradiction. Hence $\nabla\times\mathbf{E}=\mathbf{0}$ must hold everywhere, not just on average.
simple limit but I forget how to prove it I have to calculate the following limit $$\lim_{x\rightarrow -\infty} \sqrt{x^2+2x+2} - x$$ it is in un undeterminated form. I tried to rewrite it as follows: $$\lim_{x\rightarrow -\infty} \sqrt{x^2+2x+2} - \sqrt{|x|^2}$$ but seems a dead road. Can anyone suggest a solution? thanks for your help
Clearly $$\lim_{x\rightarrow -\infty} \sqrt{x^2+2x+2} - x=+\infty+\infty=+\infty$$ But \begin{gather*}\lim_{x\rightarrow +\infty} \sqrt{x^2+2x+2} - x="\infty-\infty"=\\ =\lim_{x\rightarrow +\infty} \frac{(\sqrt{x^2+2x+2} - x)(\sqrt{x^2+2x+2} + x)}{\sqrt{x^2+2x+2} + x}=\lim_{x\rightarrow +\infty} \frac{2x+2}{\sqrt{x^2+2x+2} + x}=\lim_{x\rightarrow +\infty} \frac{2+2/x}{\sqrt{1+2/x+2/x^2} + 1}=1 \end{gather*}
$d(f \times g)_{x,m} = df_x \times dg_m$? (a) $d(f \times g)_{x,m} = df_x \times dg_m$? Also, (b) does $d(f \times g)_{x,m}$ carry $\tilde{x} \in T_x X, \tilde{m} \in T_x M$ to the tangent space of $f$ cross the tangent space of $g$? Furthermore, (c) does $df_x \times dg_m$ carry $\tilde{x} \in T_x X, \tilde{m} \in T_x M$ to the tangent space of $f$ cross the tangent space of $g$? Thank you~
I don't know what formalism you are adopting, but starting from this one it can be done intuitively. I will state the construction without proof of compatibility and so. An element of $T_x X$ is a differential from locally defined functions $C^{1}_x(X)$ to $\mathbb{R}$ induced by a locally defined curve $\gamma_x(t):(-\epsilon,\epsilon)\rightarrow X,\gamma_x(0)=x$ by $\gamma_x (f)=\frac{df(\gamma(t))}{dt}|_{t=0}$. $T_x X\times T_m M$ is identified canonically with $T_{(x,m)}(X\times M)$ in the following way. Given $\gamma_x,\gamma_m$, then $\gamma_{(x,m)}(t)=(\gamma_x(t),\gamma_m(t))$; given $\gamma_{(x,m)}$ then $\gamma_x(t)=p_X\gamma_{x,m}(t)$ where $p_X$ is the projection, and likewise for $m$. For a map $F:X\rightarrow Y$, the differential is defined as $dF_x(\gamma_x)(g)=\gamma_x(gF)$. Now what you want is a commutative diagram \begin{array}{ccc} T_{(x,m)}(X\times M) & \cong & T_x X\times T_m M \\ \downarrow & & \downarrow \\ T_{(y,n)}(Y\times N) & \cong & T_y Y\times T_n N \end{array} Let's now check it. $\gamma_{(x,m)}$ is mapped to $\gamma_{(x,m)}(g(F,G))$, by definition of the tangent vector, we see it is induced by the curve $(F,G)\gamma_{(x,m)}=(Fp_X \gamma_{(x,m)},Gp_M \gamma_{(x,m)})$, which then projects to $Fp_X \gamma_{(x,m)}$ and $Gp_M \gamma_{(x,m)}$. If we factored earlier, it would first be taken to $p_X \gamma_{(x,m)}$ and $p_M \gamma_{(x,m)}$, then by $F,G$ respectively to $Fp_X \gamma_{(x,m)}$ and $Gp_M \gamma_{(x,m)}$, which is the same.
Inequality involving Maclaurin series of $\sin x$ Question: Let $T_n$ be the $n$th Maclaurin polynomial of $\sin x$. Prove that $\forall 0<x<2, \forall m \in \Bbb N,~ T_{4m+3}(x)<\sin(x)<T_{4m+1}(x)$. Thoughts: I think I managed to prove the two ends, showing that $T_3>T_1$ and adding $T_{4m}$ on both sides, but about the middle, I frankly have no clue...
Your thoughts are not quite right. Firstly, $T_{3} < T_{1}$ for $0 < x < 2$, and for that matter showing $T_{3} > T_{1}$ would be antithetical to what you are looking to prove. Secondly, truncated Taylor series are not additive in the way that you are claiming, i.e. it is NOT true that $T_{3} + T_{4m} = T_{4m + 3}$. However, consider the Maclaurin series for $\sin x$ and write out all sides of the inequality as follows: $$T_{4m+3} < \sin x < T_{4m+1}$$ is the same as writing: $$x - \frac{x^{3}}{3!} + \frac{x^{5}}{5!} + \cdots + \frac{x^{4m+1}}{(4m+1)!} - \frac{x^{4m+3}}{(4m+3)!} < x - \frac{x^{3}}{3!} + \cdots < x - \frac{x^{3}}{3!} + \frac{x^{5}}{5!} + \cdots + \frac{x^{4m+1}}{(4m+1)!}$$ It is obvious from here that since $x> 0$, $T_{4m+3} < T_{4m+1}$. We must now complete the proof. All this amounts to proving is that for $0 < x < 2$, the following inequalities hold: $$\sum_{n = 2m+2}^{\infty} \frac{(-1)^{n}x^{2n+1}}{(2n+1)!} > 0$$ $$\sum_{n = 2m+1}^{\infty} \frac{(-1)^{n}x^{2n+1}}{(2n+1)!} < 0$$ Why this is what we need can be seen by looking at the terms of the Maclaurin series for $\sin x$ and comparing with each side of the inequality. I leave these details to you... feel free to comment if you need additional help. I imagine it will help to write these inequalities out of series form.
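As a quick numerical sanity check (not a substitute for the proof), one can evaluate the Maclaurin polynomials directly and test the inequality at sample points in $(0,2)$; the sketch below is my own illustration, not part of the original answer.

import math

def T(x, n):
    # n-th Maclaurin polynomial of sin x:
    # sum of (-1)^k x^(2k+1)/(2k+1)! over all k with 2k+1 <= n
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range((n - 1) // 2 + 1))

for m in range(3):
    for x in (0.5, 1.0, 1.9):
        assert T(x, 4 * m + 3) < math.sin(x) < T(x, 4 * m + 1), (m, x)
print("T_{4m+3}(x) < sin(x) < T_{4m+1}(x) at all sampled points")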
A function/distribution which satisfies an integral equation (sounds bizarre) I think I need a function $f(x)$ such that $\large{\int_{x_1}^{x_2}f(x)\,d{x} = \frac{1}{(x_2-x_1)}}$ $\forall x_2>x_1>0$. I wonder whether such a function has been used or studied by someone, or is it just plain insane to ask for such a function? I guess it has to be some distribution, and if so, I'd like to know more about it.
No such $f(x)$ can exist. Such a function $f(x)$ would have an antiderivative $F(x)$ satisfying $$F(x_2)-F(x_1)=\frac{1}{x_2-x_1}$$ for all $x_2>x_1>0$. Take the limit as $x_2\rightarrow \infty$ of both sides; the right side tends to $0$ (hence the limit of the left side exists), while the left side tends to $$\left(\lim_{x\rightarrow \infty} F(x)\right)-F(x_1)$$ Hence $F(x_1)=\lim_{x\rightarrow \infty} F(x)$ for all $x_1>0$. Hence $F(x)$ is constant, but no constant function satisfies the desired relationship.
Soccer betting combinations for accumulators I would like to bet on soccer games, on every possible combination. For example, I bet on $10$ different games, and each soccer game can go three ways: either a win, draw, or loss. How many combinations would I have to use in order to get a guaranteed win by betting $10$ matches with every combination possible?
There are $3^{10} = 59049$ possible outcome combinations for ten three-way matches, so covering them all guarantees exactly one winning ticket. But if you bet on every possibility, staking each combination in proportion to its odds, the market is priced (via the bookmaker's margin) so that you can only get back about 70% of your original stake no matter what happens.
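A small sketch of the arithmetic, using made-up decimal odds (the odds, the function name, and the margin below are illustrative assumptions, not market data): staking each accumulator in proportion to its combined odds guarantees the same payout whatever happens, and the guaranteed fraction of the total stake is the product, over matches, of $1/\sum_i 1/o_i$.

def guaranteed_fraction(matches):
    # For one match with decimal odds (o1, o2, o3), staking 1/oi on outcome i
    # returns exactly 1 unit whichever outcome occurs, at a total cost of
    # sum(1/oi) > 1 units; the guaranteed fraction is therefore 1/sum(1/oi).
    # Across independent matches these fractions multiply.
    frac = 1.0
    for odds in matches:
        frac *= 1.0 / sum(1.0 / o for o in odds)
    return frac

# Ten matches, each priced with a typical bookmaker margin of a few percent.
matches = [(2.5, 3.1, 2.9)] * 10
print(guaranteed_fraction(matches))  # about 0.52 here: a guaranteed loss

Note how the per-match margin compounds across the accumulator, which is why the guaranteed return ends up well below the original stake.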
Ultrafilter condition: If A is a subset of X, then either A or X \ A is an element of U My confusion concerns ultrafilters on sets that are themselves power sets. If $X=\{\emptyset,\ \{1\},\ \{2\},\ \{3\},\ \{4\},\ \{1,2\},\ \{1,3\},\ \{1,4\},\ \{2,3\},\ \{2,4\},\ \{3,4\},\ \{1,2,3\},\ \{1,2,4\},\ \{1,3,4\},\ \{2,3,4\},\ \{1,2,3,4\}\ \}$ and the upset of $\{1\}$, $U=\{\ \{1\},\ \{1,2\},\ \{1,3\},\ \{1,4\},\ \{1,2,3\},\ \{1,2,4\},\ \{1,3,4\},\ \{1,2,3,4\}\ \}$, is supposedly a principal ultrafilter (for visual delineation see https://en.wikipedia.org/wiki/Filter_%28mathematics%29), then how do I satisfy the criterion that "If $A$ is a subset of $X$, then either $A$ or $X\setminus A$ is an element of $U$"? For example, could I let $A=\{\emptyset,\{2\}\}$ such that $A\notin U$, but $X\setminus A\notin U$ because $\{2,3,4\}\in X\setminus A$ and $\{2,3,4\}\notin U$? My understanding of the distinction between an element and a subset is unrefined, particularly with regard to power sets.
It is fine to have filters on a set which is itself the power set of another set; but then the filter is a subset of the power set of the power set of that other set, not of the power set of the original set. The $U$ you listed is the principal ultrafilter generated by $\{1\}$ on the set $\{1,2,3,4\}$; to read it as a principal ultrafilter on $X$ itself, take instead the family of subsets $W\subseteq X$ with $\{1\}\in W$, i.e. $W\in U$ if and only if $\{1\}\in W$. With that reading your example works out: $\{1\}\notin A$, so $\{1\}\in X\setminus A$, and therefore $X\setminus A\in U$. If things like that confuse you, just replace the sets with other elements, $1,\ldots,16$ for example, and consider the ultrafilter on that set; then simply translate it back to your original $X$.
Proving $\sum_{n=-\infty}^\infty e^{-\pi n^2} = \frac{\sqrt[4] \pi}{\Gamma\left(\frac 3 4\right)}$ Wikipedia informs me that $$S = \vartheta(0;i)=\sum_{n=-\infty}^\infty e^{-\pi n^2} = \frac{\sqrt[4] \pi}{\Gamma\left(\frac 3 4\right)}$$ I tried considering $f(x,n) = e^{-x n^2}$ so that its Mellin transform becomes $\mathcal{M}_x(f)=n^{-2z} \Gamma(z)$ so inverting and summing $$\frac{1}{2}(S-1)=\sum_{n=1}^\infty f(\pi,n)=\sum_{n=1}^\infty \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}n^{-2z} \Gamma(z)\pi^{-z}\,dz = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\zeta(2z) \Gamma(z) \pi^{-z}\,dz$$ However, this last integral (whose integrand has poles at $z=0,\frac{1}{2}$ with respective residues of $-\frac 1 2$ and $\frac 1 2$) is hard to evaluate due to the behavior of the function as $\Re(z)\to \pm\infty$ which makes a classic infinite contour over the entire left/right plane impossible. How does one go about evaluating this sum?
I am not sure if it will ever help, but the following identity can be proved: $$ S^2 = 1 + 4 \sum_{n=0}^{\infty} \frac{(-1)^n}{\mathrm{e}^{(2n+1)\pi} - 1}. $$
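Not a proof, but the closed form in the question is easy to corroborate numerically, since the theta series converges extremely fast; a minimal sketch using only the Python standard library:

import math

# Left side: theta(0; i) = sum over all integers n of exp(-pi n^2).
# The terms decay like exp(-pi n^2), so a handful of them already give
# full double precision.
S = 1 + 2 * sum(math.exp(-math.pi * n * n) for n in range(1, 10))

# Right side: pi^(1/4) / Gamma(3/4).
closed_form = math.pi ** 0.25 / math.gamma(0.75)

print(S, closed_form, abs(S - closed_form))  # difference on the order of 1e-16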
countability of limit of a set sequence Let $S_n$ be the set of all binary strings of length $2n$ with an equal number of zeros and ones. Is it correct to say $\lim_{n\to\infty} S_n$ is countable? I wanted to use it to solve this problem. My argument is that each of the $S_n$ is countable (in fact finite), thus their union would also be countable. Then $\lim_{n\to\infty}S_n$ should also be countable, as it is contained in the union.
The collection of all finite strings of $0$'s and $1$'s is countably infinite. The subcollection of all strings that have equal numbers of $0$'s and $1$'s is therefore countably infinite. I would advise not using the limit notation to denote that collection. The usual notation for this kind of union is $\displaystyle\bigcup_n S_n$.
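To make the counting explicit: a string in $S_n$ is determined by choosing which $n$ of its $2n$ positions hold ones, so $$\lvert S_n\rvert = \binom{2n}{n} < \infty,$$ and $\bigcup_n S_n$, being a countable union of finite sets, is countable; since it is clearly infinite, it is countably infinite.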
How to construct a non-piecewise differentiable S curve with constant quick-turning flat tails and a linear slope? I need to find an example of a non-piecewise differentiable $f:\mathbb{R}\to\mathbb{R}$ such that $$ \begin{cases} f(x)=C_1 &\text{ for } x<X_1,\\ C_1 < f(x) < C_2 &\text{ for } X_1 < x < X_2,\\ f(x)=C_2 &\text{ for } X_2<x. \end{cases} $$ For example, $\log(1+e^{Ax})$ sort of meets some of the characteristics, in that it * quickly flatlines around $0$ for large $A$, * has a linear slope for $0 < x < X_c$. But it * does not flatline for $X_c < x$, * and I can't seem to control the slope of the linear section. So I am looking for something different. Is there some other differentiable function that exhibits the above behavior?
After a bit of trying to mix various segments I came up with the following for my problem: $$f(x) = \frac{x}{\left(1+e^{100(x-1)}\right)\left(1+e^{-100(x+1)}\right)} - \frac{1}{1+e^{100(x+1)}} + \frac{1}{1+e^{-100(x-1)}}$$ which at least seems to be in the right direction to meet all the requirements: * it has a linear slope between $x=-1$ and $x=+1$; * it flattens quickly for $x > 1$ and $x < -1$ (how quickly can be controlled by changing the constant $100$); * it is continuous and not piecewise; * it is differentiable. I still haven't generalized it yet, but the general idea is to build it out of three segments: * the window $\frac{1}{\left(1+e^{100(x-1)}\right)\left(1+e^{-100(x+1)}\right)}$, multiplied by a slope and $x$, for the sloping part; * the bottom flatline $-\frac{1}{1+e^{100(x+1)}}$; * the top flatline $\frac{1}{1+e^{-100(x-1)}}$.
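A generalized version of that idea is straightforward to write down. Here is a minimal sketch in Python; the parameterization (the names x1, x2, c1, c2, k) is my own illustration, not from the original post:

import math

def sigmoid(t):
    # Logistic function, clamped to avoid math.exp overflow for large |t|.
    if t < -60.0:
        return 0.0
    if t > 60.0:
        return 1.0
    return 1.0 / (1.0 + math.exp(-t))

def s_curve(x, x1=-1.0, x2=1.0, c1=-1.0, c2=1.0, k=100.0):
    # ~c1 for x < x1, ~linear from (x1, c1) to (x2, c2) in between, ~c2 for x > x2.
    # k controls how sharply the curve turns at x1 and x2.
    window = sigmoid(k * (x - x1)) * sigmoid(-k * (x - x2))  # ~1 on (x1, x2), ~0 outside
    mid = c1 + (c2 - c1) / (x2 - x1) * (x - x1)              # the linear segment
    return window * mid + c1 * sigmoid(-k * (x - x1)) + c2 * sigmoid(k * (x - x2))

for x in (-3.0, -1.0, 0.0, 0.5, 1.0, 3.0):
    print(x, round(s_curve(x), 6))

With c1 = -1, c2 = 1, x1 = -1, x2 = 1, k = 100 this reduces exactly to the formula above. Note that the tails are only approximately flat: a real-analytic formula cannot be exactly constant on an interval without being constant everywhere, so "flat to within floating-point precision" is the best a construction of this kind can do.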
Is this true? $f(g(x))=g(f(x))\iff f^{-1}(g^{-1}(x))=g^{-1}(f^{-1}(x))$. Is this true? Given $f,g\colon\mathbb R\to\mathbb R$, does $f(g(x))=g(f(x))\iff f^{-1}(g^{-1}(x))=g^{-1}(f^{-1}(x))$? I ran into this problem when working with a coding method, but I'm really not familiar with functions. Please help. Thank you.
Of course that is only true if $f^{-1}$ and $g^{-1}$ exist. But then it's easy to show: Let $y=f^{-1}(g^{-1}(x))$. Then obviously $g(f(y))=g(f(f^{-1}(g^{-1}(x)))) = g(g^{-1}(x)) = x$. On the other hand, by assumption $f(g(y))=g(f(y))=x$. Therefore $g^{-1}(f^{-1}(x)) = g^{-1}(f^{-1}(f(g(y)))) = g^{-1}(g(y)) = y = f^{-1}(g^{-1}(x))$. This proves one implication; the reverse implication follows by applying the same argument to $f^{-1}$ and $g^{-1}$ in place of $f$ and $g$, since $(f^{-1})^{-1}=f$.
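A quick numerical illustration with a concrete commuting pair of my own choosing (on $[0,\infty)$, where both maps are invertible): $f(x)=x^3$ and $g(x)=x^5$ commute, since both compositions equal $x^{15}$, and the argument above predicts that $x^{1/3}$ and $x^{1/5}$ commute as well.

# f(x) = x^3 and g(x) = x^5 commute under composition (both give x^15),
# so their inverses x^(1/3) and x^(1/5) should commute too.
f = lambda x: x ** 3
g = lambda x: x ** 5
f_inv = lambda x: x ** (1 / 3)  # inverse of f on [0, infinity)
g_inv = lambda x: x ** (1 / 5)  # inverse of g on [0, infinity)

for x in (0.5, 1.0, 2.0, 7.3):
    assert abs(f(g(x)) - g(f(x))) <= 1e-9 * max(1.0, abs(f(g(x))))
    assert abs(f_inv(g_inv(x)) - g_inv(f_inv(x))) <= 1e-12

print("f, g commute, and so do their inverses, at all sampled points")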