Random Walk Stopping Time 2
I'll recast the problem in terms of the random walk $W_n=W_0+\sum_{i=1}^n (X_i-M).$ For $j\geq 0$, define the (identically distributed) stopping times $$\tau_j=\inf\left(n> j: \sum_{i=j+1}^n (X_i-M)>0\right)=\inf(n> j: W_n>W_{j}). $$ If $\mathbb{P}(\tau_0<\infty)=1,$ then $\mathbb{P}(\tau_j<\infty)=1$ for all $j\geq 0$, and hence $\mathbb{P}\left(\cap_{j=0}^\infty [\tau_j <\infty]\right)=1.$ It follows that, with probability 1, the sequence $(W_j(\omega))_{j\geq 0}$ does not achieve its maximum. But this contradicts the fact, due to the strong law of large numbers, that $W_j\to-\infty$ with probability one. Therefore $\mathbb{P}(\tau_0<\infty)=1$ is false, that is, $\mathbb{P}(\tau_0=\infty)>0.$
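The conclusion can be illustrated numerically. Below is a minimal Monte Carlo sketch under assumed parameters (not from the original problem): increments $X_i \sim N(0,1)$ with $M = 0.5$, so $X_i - M$ has negative mean and $W_n \to -\infty$ a.s. We estimate $\mathbb{P}(\tau_0 = \infty)$ by the fraction of walks whose partial sums never become positive within a finite horizon.

```python
import random

# Hedged simulation: X_i ~ N(0, 1) and M = 0.5 are illustrative choices,
# giving increments X_i - M with negative mean (so W_n -> -infinity a.s.).
random.seed(0)

def never_exceeds_start(steps=1000, drift=-0.5):
    """Return True if the walk with the given drift stays <= 0 for all steps."""
    w = 0.0
    for _ in range(steps):
        w += random.gauss(0.0, 1.0) + drift
        if w > 0:
            return False
    return True

trials = 2000
frac = sum(never_exceeds_start() for _ in range(trials)) / trials
print(frac)  # a strictly positive fraction, consistent with P(tau_0 = infinity) > 0
```

The finite horizon only underestimates escapes, but the estimated fraction stays well away from both $0$ and $1$, matching $\mathbb{P}(\tau_0=\infty)>0$.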
What restrictions on decimal expansions lead to countably infinite subsets of the reals?
As a general rule, if your set is such that given a number $x \in S$ you can change infinitely many digits at once and independently, it'll be uncountable - otherwise, it won't be. For example, $S_{\mathrm{odd}}$ is uncountable, because beginning with $x = 0.1111\ldots$ I can replace any combination of $1$s with $3$s and still obtain a number in $S_{\mathrm{odd}}$. On the other hand, $S_1$ is only countable; given any $x \in S_1$, the only way I can change infinitely many digits is by changing them all to $1$.

The key idea is basically reducing everything to $S_{01}$. $S_{01}$ is uncountable for a very simple reason: any number expressed using only zeroes and ones can be interpreted as a binary expansion of an unrestricted real. So every real between $0$ and $1$ "shows up" in $S_{01}$. As long as you can find a way of thinking of your set as $S_{01}$, you've got something uncountable.

To take an extreme example: say $S_{56}$ is the set of infinite decimals consisting entirely of $5$s, except for every $10000$th digit, which may be either a $5$ or a $6$. We can think of this as "basically" $S_{01}$ by (1) ignoring all the forced digits, and (2) where we get to choose between $5$ and $6$, treating a $5$ as a $0$ and a $6$ as a $1$. So, for example, we can see $0.555\ldots 555\mathbf{6}555\ldots 555\mathbf{5}555\ldots 555\mathbf{6}555\ldots$ as the binary string $0.101\ldots$. So $S_{56}$ is also uncountable.
Reverse mean value theorem
In general, no, even if the domain of $f$ is $\Bbb R$. For instance, if $f(x)=x^3$, then $f'(0)=0$. However, if $a,b\in\Bbb R$ with $a<b$, you never have $\frac{f(b)-f(a)}{b-a}=0$, since that would imply that $f(a)=f(b)$, but $f$ is injective.
help with funky function definition
Assume the function is defined over the set of natural numbers $\mathbb N$ (otherwise the definition doesn't make much sense). You can then read off its various properties: the maximum is $11$ (attained at $x$ with $x\equiv 0 \pmod 3$), and the minimum is $-7$ (attained at $x$ with $x\equiv 1 \pmod 3$). You can check that the function is neither even nor odd: compare $f(5)$ and $f(-5)$. The function is periodic with period $3$ (verify!). And if by $y$-intercept you mean $f(0)$, then the $y$-intercept is $f(0)=11$.
How to differentiate $y=\sqrt{\frac{1+x}{1-x}}$?
Make your life simpler by taking logarithms: $$ \log f(x) = \frac{1}{2} \log (1+x) - \frac{1}{2} \log (1-x) $$ Now differentiate this and multiply by $f(x)$.
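A quick numeric sanity check of this logarithmic-differentiation shortcut (hedged: plain finite differences, no CAS assumed). Differentiating the log form gives $\frac{1}{2(1+x)}+\frac{1}{2(1-x)}=\frac{1}{1-x^2}$, so $f'(x)=f(x)/(1-x^2)$:

```python
import math

# f(x) = sqrt((1+x)/(1-x));  log f(x) = (1/2)log(1+x) - (1/2)log(1-x),
# hence f'(x) = f(x) * 1/(1 - x^2).
def f(x):
    return math.sqrt((1 + x) / (1 - x))

def f_prime(x):
    return f(x) / (1 - x * x)

x, h = 0.3, 1e-6
central_diff = (f(x + h) - f(x - h)) / (2 * h)
print(abs(central_diff - f_prime(0.3)))  # very small: the two derivatives agree
```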
Show: $ f(a) = a,\ f(b) = b \implies \int_a^b \left[ f(x) + f^{-1}(x) \right] \, \mathrm{d}x = b^2 - a^2 $
Let's assume that $f$ is monotonic and differentiable. Using the substitution $y = f(x)$ followed by integration by parts, we get: $\displaystyle\int_{f(a)}^{f(b)}f^{-1}(y)\,dy = \int_{a}^{b}xf'(x)\,dx = \left[xf(x)\right]_{a}^{b} - \int_{a}^{b}f(x)\,dx = [bf(b)-af(a)] - \int_{a}^{b}f(x)\,dx$. Therefore, $\displaystyle\int_{a}^{b}f(x)\,dx + \int_{f(a)}^{f(b)}f^{-1}(x)\,dx = bf(b)-af(a)$. If we also know that $f(a) = a$ and $f(b) = b$, this becomes: $\displaystyle\int_{a}^{b}\left[f(x)+f^{-1}(x)\right]\,dx = b^2-a^2$.
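A numeric check of the identity on a concrete (assumed, illustrative) example: $f(x)=x^3$ is monotonic on $[0,1]$ with fixed points $f(0)=0$ and $f(1)=1$, and $f^{-1}(x)=x^{1/3}$, so the identity predicts $\int_0^1 [x^3 + x^{1/3}]\,dx = 1^2-0^2 = 1$:

```python
# Simple midpoint-rule integration; no external libraries assumed.
def integrate(g, a, b, n=100000):
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

total = integrate(lambda x: x**3 + x**(1/3), 0.0, 1.0)
print(total)  # close to 1.0, matching b^2 - a^2 = 1
```

Indeed $\int_0^1 x^3\,dx + \int_0^1 x^{1/3}\,dx = \frac14 + \frac34 = 1$ exactly.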
Can independence of a system and a vector be established if there is only cross-independence?
Write $A$ as $A=\begin{bmatrix}a_1&a_2&\cdots&a_n\end{bmatrix}$ (assuming $A$ has $n$ columns). Then \begin{equation} \begin{bmatrix} A & a'\end{bmatrix}\begin{bmatrix} x \\ x'\end{bmatrix}=x_1a_1+x_2a_2+\cdots +x_na_n+x'a'\end{equation} where we are assuming $x_1,x_2,\ldots,x_n$ are the components of $x$. Now when is the equation above zero if and only if $x_1,x_2,\ldots,x_n,x'$ are all zero? When the vectors $a_1,a_2,\ldots,a_n,a'$ are linearly independent... Notice that it is not sufficient that $a'$ is linearly independent only from some $a_i$; it must be linearly independent from each $a_i$, for otherwise there could be $j$ and $c_j,c'\neq0$ so that $c_ja_j+c'a'=0$, and then the vector $z=(0,\ldots,0,c_j,0,\ldots,0,c')^T$ is a nonzero vector for which \begin{equation} \begin{bmatrix} A & a'\end{bmatrix}z=0.\end{equation}
Complex Numbers Midpoint of Roots of Unity
First of all I'd convert $A$ and $B$ to cartesian coordinates: $A = \sqrt{2}(\cos(7\pi/12)+i \sin(7\pi/12))$ $B= \sqrt{2}(\cos(11\pi/12)+i \sin(11\pi/12))$ Then you can easily compute $M=(A+B)/2 = \frac{\sqrt{2}}{2} (\cos(7\pi/12)+\cos(11\pi/12)+i\left[\sin(7\pi/12)+\sin(11\pi/12)\right])$
If $AB=-BA$ then do $A$ and $B$ share a common eigenvector?
For example, $A=\operatorname{diag}(1,-1)$, $B_u=\begin{pmatrix}0&1\\u&0\end{pmatrix}$ have no common eigenvectors and are invertible when $u\not=0$. Yet $A,B_0$ have a common eigenvector and $B_0$ is singular. EDIT. $\textbf{Proposition}$. Let $A,B$ be such that $AB+BA=0$. Then $A$ or $B$ is singular iff $A,B$ have a common eigenvector. $\textbf{Proof}$. $\Rightarrow$ Say $A$ is singular. $\ker(A)$ is invariant for $B$; then $B$ admits (over an algebraically closed field) an eigenvector in $\ker(A)$. $\Leftarrow$ If $Au=\lambda u,Bu=\mu u$, then $\lambda\mu=0$ (if the characteristic of the field is not $2$) and we are done.
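A quick check of the counterexample with $u=1$, using plain $2\times 2$ arithmetic (no libraries assumed): $A$ and $B_u$ anticommute, and the eigenvectors of $A$ (multiples of $e_1$ and $e_2$) are not eigenvectors of $B_u$.

```python
# Verify AB = -BA and that B e1, B e2 are not parallel to e1, e2.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 0], [0, -1]]
u = 1
B = [[0, 1], [u, 0]]

AB = matmul(A, B)
BA = matmul(B, A)
anticommute = all(AB[i][j] == -BA[i][j] for i in range(2) for j in range(2))
print(anticommute)  # True

# B e1 = (0, u) is not parallel to e1, and B e2 = (1, 0) is not parallel to e2.
Be1 = (B[0][0], B[1][0])
Be2 = (B[0][1], B[1][1])
print(Be1, Be2)  # (0, 1) (1, 0)
```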
What is $k(X)[Y]$ and why is it a principal ideal domain? From a proof in Fulton's Algebraic Curves
If $K$ is a field then $K[X]$, the ring of polynomials in $X$, is a Euclidean domain. $K(X)$, however, is the field of rational 'functions' (quotients of polynomials) in $X$. So for example $\frac{1}{X} \in K(X)$ but $\frac{1}{X}\not\in K[X]$, using $1$ as the multiplicative identity in $K$. Now $K(X)$ is a field. Therefore, $K(X)[Y]$ is an E.D. and hence a P.I.D. Also $K[X][Y]$ and $K[X,Y]$ can be shown to be equivalent (isomorphic). $(K[X])[Y]$ is the set of polynomials in $Y$ with coefficients in $K[X]$, so expressions of the form $\sum\limits_{i} a_i(x)y^i$. $K[X,Y]$ is the set of polynomials in $X$ and $Y$ with coefficients in $K$, e.g. of the form $\sum\limits_i \sum\limits_j a_{i,j}x^iy^j$. These are finite sums of course: only a finite number of the coefficients are non-zero.
Closest point in $y = \sqrt{x}$ to the origin is at $x=-1/2$?
The square of the distance from the origin is as follows: $$d^2=x^2+y^2$$ $$d^2=x^2+x$$ $$\frac{d}{dx}(x^2+x)=2x+1$$ In optimizing this, make sure you're considering all the critical points, including the endpoints of the domain (in this case $x=0$), not just the points where the derivative equals $0$.
Help solving a limit
First note that $a>0$. If $a<0$, then $$\lim_{x \rightarrow a} \frac{x^2-\sqrt{a^3x}}{\sqrt{ax}-a} = \frac{a^2 - a^2}{-a-a} = 0$$ If $a=0$, then $\lim_{x \rightarrow a} \frac{x^2-\sqrt{a^3x}}{\sqrt{ax}-a}$ doesn't exist. Hence, $a>0$. Let $\sqrt{ax}=y$, i.e. $x = \frac{y^2}{a}$. Note that as $x \rightarrow a$, $y \rightarrow a$. Hence, $$\lim_{x \rightarrow a} \frac{x^2-\sqrt{a^3x}}{\sqrt{ax}-a} = \lim_{y \rightarrow a} \frac{y}{a^2} \frac{y^3-a^3}{y-a} = \frac{3a^2}{a} = 3a$$ Hence, $a=4$
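A numeric sanity check of the computed limit with the value $a=4$ found above: the expression should approach $3a=12$ as $x\to 4$ from either side.

```python
import math

# Evaluate (x^2 - sqrt(a^3 x)) / (sqrt(a x) - a) near x = a for a = 4.
a = 4.0
def g(x):
    return (x**2 - math.sqrt(a**3 * x)) / (math.sqrt(a * x) - a)

left, right = g(4.0 - 1e-6), g(4.0 + 1e-6)
print(left, right)  # both near 12
```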
Determine whether or not the following functions are totally multiplicative. Explain your reasoning.
Your answer to (a) is fine apart from your conclusion: $f(2\cdot 3)\ne f(2)f(3)$ and therefore $f$ is not multiplicative. Note that @Lulu's example is even easier. In part (b) you correctly obtain the result that $f(mn)= f(m)f(n)$ for all positive integers $m$ and $n$; therefore $f$ is completely multiplicative.
RMS amplitude of a sine wave
If the signal is described as $f(t) = A \sin \omega t$, then the period is just $ T = 2\pi/\omega$ and $$ {\rm RMS}(f) = \left[\frac{1}{T}\int_0^T{\rm d}t\;f^2(t)\right]^{1/2} = \left[\frac{\omega}{2\pi}\int_0^{2\pi/\omega}{\rm d}t\;(A\sin\omega t)^2\right]^{1/2} = \left[A^2 \frac{\pi}{2\pi}\right]^{1/2} = \frac{A}{\sqrt{2}} \approx 0.7071 A $$
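A numeric check of this computation (hedged: the amplitude $A=3$ and frequency $\omega=2$ are illustrative choices). Integrating $f^2$ over one period with the midpoint rule should reproduce $A/\sqrt{2}$:

```python
import math

# RMS of f(t) = A sin(omega t) over one period, computed numerically.
A, omega = 3.0, 2.0
T = 2 * math.pi / omega
n = 100000
h = T / n
mean_square = sum((A * math.sin(omega * (i + 0.5) * h))**2 for i in range(n)) * h / T
rms = math.sqrt(mean_square)
print(rms, A / math.sqrt(2))  # both ~2.1213
```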
Law of cosines for $n$ dimensional vectors
This is in fact independent of the dimension. If you expand $|b-a|^2$ you get $|b|^2+|a|^2-2\langle a,b\rangle$, which is the law of cosines once $\langle a,b\rangle$ is written as $|a||b|\cos\theta$. If you want a geometric interpretation, imagine you want to find the circumference of a circle. You approximate its length by regular polygons, whose side lengths you relate to the radius using the inner product; if you use the cosine in its definition and measure the angle in radians, you get the correct approximation, and in the limit you get $2\pi$, which you can again approximate using Archimedes' method.
Power series of analytic function
Take $\displaystyle f(z)=\sum_{n=1}^\infty\frac{z^n}{n^2}$. Then $f$ is continuous on the closed disc $\{z\in\mathbb{C}\,|\,|z|\leqslant1\}$ and analytic on $D(0,1)$. However, you cannot extend it to an analytic function on any larger disc.
Converting a Linear Program to Canonical Form
You're probably mixing two things: the $x$ as it appears in the definition of the canonical form, and the $x$ as a variable. Let's call $X$ the $x$ in the definition. Note that you can transform $0 \le x \le 4$ and $0 \le y \le 6$ into $x \ge 0$, $x \le 4$, $y \ge 0$ and $y \le 6$. Then, $X = \pmatrix{x \cr y}, A = \pmatrix{1 & -1 \cr 2 & 1 \cr 1 & 0 \cr 0 & 1}, b = \pmatrix{3 \cr 12 \cr 4 \cr 6}$ and $c = \pmatrix{1 \cr 1}$.
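A quick sanity check of the canonical data (hedged: the original constraints are assumed to be $x-y\le 3$, $2x+y\le 12$, $x\le 4$, $y\le 6$ with $x,y\ge 0$, read off from the $A$ and $b$ above). We verify that a feasible point such as $(x,y)=(1,1)$ satisfies $AX\le b$, $X\ge 0$:

```python
# Canonical-form data as assembled above; plain lists, no libraries assumed.
A = [[1, -1],
     [2,  1],
     [1,  0],
     [0,  1]]
b = [3, 12, 4, 6]
X = [1, 1]

AX = [sum(A[i][j] * X[j] for j in range(2)) for i in range(4)]
feasible = all(AX[i] <= b[i] for i in range(4)) and all(v >= 0 for v in X)
print(AX, feasible)  # [0, 3, 1, 1] True
```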
Does there exist a $100$-element set $A$ such that for any $x$ in $A$, the number $2x^2-1$ is also in $A$?
The special thing about this number is $$2^{100}\frac{2\pi}{2^{100}+1}=2\pi-\frac{2\pi}{2^{100}+1}$$ and the cosines are equal.
Find $a_{ij}$ in $v=\sum\limits_{1\le i<j\le n} a_{ij}e_i\wedge e_j$
Why don't we try a little example and see if we can figure this out? Suppose $v = a e_1 + b e_2$ and $w = c e_1 + d e_2$ (so either $n = 2$ or we decide all the other coefficients are zero). Then \begin{align*} v \wedge w &= (a e_1 + b e_2) \wedge (c e_1 + d e_2) \\ &= ac\, e_1 \wedge e_1 + ad\, e_1 \wedge e_2 + bc\, e_2 \wedge e_1 + bd\, e_2 \wedge e_2 \\ &= (ad - bc)\, e_1 \wedge e_2 \end{align*} because $e_2 \wedge e_1 = - e_1 \wedge e_2$ (and $e_i \wedge e_i = 0$ for all $i$). Can you now work out the general case? If not, try another example with only three nonzero coefficients. It'll come.
How do I calculate certain angles determined by sides and diagonals of a regular octagon?
Also, if you inscribe the octagon in a circle, $\alpha$ is an angle in a semicircle, hence $= 90^\circ$.
Help with the following sequence
Hints: $\sum\limits_{n=1}^{51} \left(3+2n\right) = 3\left(\sum\limits_{n=1}^{51} 1\right) + 2\left(\sum\limits_{n=1}^{51} n\right)$ Then, recognize the sums that remain. In particular, recognize and remember what you know about triangle numbers.
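The hinted split can be checked directly (hedged: assuming the sum runs over $n=1,\dots,51$): $\sum_{n=1}^{51}(3+2n) = 3\cdot 51 + 2\cdot\frac{51\cdot 52}{2} = 153 + 2652 = 2805$.

```python
# Direct sum vs. the split into a constant part and a triangle-number part.
direct = sum(3 + 2 * n for n in range(1, 52))
split = 3 * 51 + 2 * (51 * 52 // 2)
print(direct, split)  # 2805 2805
```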
Proof of "a set is in V iff it's pure and well-founded"
The claim makes most sense if it takes place within a set theory that satisfies ZF, without any axiom of Regularity (so asking whether a set is well-founded is interesting), and with a form of Extensionality that allows urelements (so asking whether a set is pure is interesting). The "only if" direction is easy enough -- you can prove by induction on $\alpha$ that all elements in $V_\alpha$ will be pure and well-founded. For the "if" direction, first prove as a lemma that if $A$ is a set with $A\subseteq \mathbf V$, then $A\in\mathbf V$. (This uses the Axiom of Replacement to find a high enough level in the hierarchy to contain $A$.) Now assume that $X$ is pure and well-founded. Let $Y$ be the transitive closure of $X$ and assume for a contradiction that $Y\not\subseteq \mathbf V$. Then, since $\in$ is well-founded on $Y$, there will be an $\in$-minimal element $Z$ of $Y$ that is not in $\mathbf V$. Since $Z$ is a set (all elements of $Y$ are sets rather than urelements because $X$ is pure), this means that $Z\subseteq \mathbf V$, so by the lemma $Z\in \mathbf V$, a contradiction. Thus $Y\subseteq \mathbf V$, so $X\subseteq \mathbf V$, so $X\in \mathbf V$.
Conditional probability, using an example with sets?
Your misunderstanding is thinking that the $A_i$ are sets of cards; rather, they are events, i.e. $A_2$ is the event that "the 2nd card drawn is an ace." I think you are overthinking things a little bit. The idea for computing $P(A_2 \mid A_1)$ is just to understand that if $A_1$ occurred, there are $51$ cards remaining in the deck, three of which are aces, so $P(A_2 \mid A_1) = 3/51$. It's as simple as that. If you want to go compute it using $P(A_2 \mid A_1) = \frac{P(A_1 \cap A_2)}{P(A_1)}$, you'd just end up with circular reasoning, since the whole point of this discussion is to compute the intersection $P(A_1 \cap A_2) = P(A_1) P(A_2 \mid A_1)$ anyway. But if you are just doing a sanity check, then yes $P(A_1) = 4/52$, $P(A_2 \mid A_1) = 3/51$, and $P(A_1 \cap A_2) = \frac{4}{52} \cdot \frac{3}{51}$.
Cohomology of exterior powers of Kähler Differentials
If $F=\mathcal{O}_{\mathbb{P}^n}(-1)^{n+1}$, then you have an exact sequence, $$0\to \Omega^j\to\wedge^j F\to\Omega^{j-1}\to 0,$$ and then you should be able to calculate everything using induction on $j$.
Meaning of "freely generated" for an $R$-algebra
I assume $R$ is commutative. If you have just one element, say $a$, then the $R$-algebra freely generated by $a$ is $A=R[a]$, the ring of polynomials in the indeterminate $a$. More generally, the algebra freely generated by $\{a_1,\dots,a_n\}$ is an algebra $A$ containing $\{a_1,\dots,a_n\}$ such that, for any $R$-algebra $B$ and any choice of $b_1,\dots,b_n\in B$, there exists a unique $R$-algebra homomorphism $f\colon A\to B$ such that $f(a_i)=b_i$ $(i=1,2,\dots,n)$. The algebra $A$ can be described as the set of "noncommuting polynomials", that is, of $R$-linear combinations of monomials, which are (formal) products of elements of $\{a_1,\dots,a_n\}$, where no commutation is allowed, so $a_1a_2\ne a_2a_1$. However, the coefficients do commute. Addition is performed in the obvious way, multiplication by using distributivity; for instance $$ (r_1a_1a_2+r_2a_3)(r_3a_1+r_4a_2)= r_1r_3a_1a_2a_1+r_1r_4a_1a_2a_2+r_2r_3a_3a_1+r_2r_4a_3a_2 $$ The multiplicative identity is the "empty monomial". Proving that this set is an $R$-algebra is not difficult; the unique morphism after a choice of $B$ and elements $b_1,\dots,b_n$ is obvious. If we restrict $B$ to be commutative, then we get the "commutative $R$-algebra freely generated" and, in this case, it's just the polynomial ring in $n$ indeterminates.
How many vectors can there be such that the angle between any two of them equals a constant $\beta$ with $0<\beta<\pi$ in an $n$-dimensional Euclidean space?
For many $\beta$, one can always find $n$ vectors by starting from the standard basis and shearing it along the main diagonal, i.e. you use $v_k=e_k+t\cdot d$ for suitable $t$ where $d=(1,1,\ldots, 1)$. Then $\langle v_i,v_i\rangle=1+2t+nt^2$ and $\langle v_i,v_j\rangle=2t+nt^2$ for $i\ne j$, so $\cos(v_i,v_j)=\frac{2t+nt^2}{1+2t+nt^2}=1-\frac 1{1+2t+nt^2}$. This covers the range $1>\cos\beta \ge -\frac{1}{n-1}$. In general, it is not possible to add an $(n+1)$st vector, but for $\cos \beta=-\frac1n$ a negative multiple of $d$ does the trick. Of course with $\cos\beta=1$ we can have arbitrarily many such vectors. For $\cos\beta <-\frac1{n-1}$ we are out of luck, i.e. we can only find fewer than $n$ vectors, by finding $m<n$ with $\cos \beta\ge -\frac 1{m-1}$ and using $m$ vectors from an $m$-dimensional subspace. EDIT: I just noticed that I only constructed a set of $n$ vectors, but did not really show that more is not possible. Assume $w_1,\ldots , w_m$ are unit vectors in $n$-space with $\langle w_i,w_j\rangle =\cos \beta$ for $i\ne j$. Let $w=\sum w_i$. Then $0\le\langle w,w\rangle=m\cdot 1+m(m-1)\cdot \cos \beta$ shows that $\cos \beta\ge-\frac1{m-1}$. If $\cos \beta>-\frac1{m-1}$, the $w_i$ are linearly independent (so that we must have $m\le n$). Indeed, the determinant of the matrix with $1$ on the diagonal and $c$ everywhere else is nonzero if $c\ne1$ and $c\ne-\frac1{m-1}$, because we can combine the rows into a nonzero multiple of the all-ones vector $d$ and use this to turn each row into a row of the identity matrix.
What is the result of $\infty - 1$?
Most usually, the answer will be "it doesn't make sense to write that down". The operation of subtraction is something defined only for certain classes of numbers. The class you are probably most familiar with is the real numbers (denoted by $\mathbb{R}$), which you can think of intuitively as numbers that you can point to on a number line. These include things like $2.5$, $\pi$, and $\sqrt{2}$, but they do not include $\infty$. Since subtraction is defined for these numbers only, to then ask "What is $\infty - 1$ ?" is like asking "What is Cat - 1 ?". In more advanced mathematics (for example, in measure theory) we do allow $\infty$ to denote a certain object that can interact with the real numbers. In those situations, it usually follows your intuition and $\infty - 1$ is defined to be $\infty$ again. Another example of when mathematicians consider the idea of "$\infty - 1$" is in introductory calculus, when one first learns about limits. In that case however, the usage of the symbol $\infty$ is purely shorthand notation: it doesn't actually denote any type of number. If a function $f$ becomes arbitrarily large as $n$ becomes large, we may write $\displaystyle\lim_{n\to\infty} f(n) = \infty$ and then for shorthand a teacher may write on the board $$\displaystyle\lim_{n\to\infty} f(n)-1 = \infty - 1 = \infty $$ but strictly that equation is invalid. So indeed, the idea does come up in valid forms, but what you should take away from this post is mainly contained in the first paragraph.
An immersive map is locally left invertible
Hint: consider the composition of $f$ with the projection onto the tangent space at $f(x)$ (considered as an $m$-dimensional affine subspace of ${\bf R}^n$).
In an Integral Domain, every prime is an irreducible. Flaw in the Proof?
You can always write $a=a \cdot 1$. This has nothing to do with finiteness conditions. But if you want to prove that $a$ is irreducible, you have to write $a=bc$ and show that $b$ or $c$ is a unit (see the definition of "irreducible"). This proof is possible for example when $a$ is a prime, as you have shown.
For what values of $x$ is the series $\sum_{k=2}^{\infty}\frac{(x-2)^k}{k\ln{k}}$ convergent?
Using the ratio test, one sees that for all $x$ with $|x-2|<1$, the series converges absolutely. For $x-2=1$, the series is $\displaystyle\sum_{k=2}\dfrac{1}{k\log k}$, which does not converge. For $x-2=-1$, the series is $\displaystyle\sum_{k=2}\dfrac{(-1)^{k}}{k\log k}$, which is convergent by the alternating series test, but not absolutely, so the convergence is conditional.
Permutation count for non identical letters
Almost. You did not however use the inclusion-exclusion principle correctly. Remember that $|A\cup B\cup C|=|A|+|B|+|C|-|A\cap B|-|A\cap C|-|B\cap C|+|A\cap B\cap C|$. In our problem, let $I$ represent the set of arrangements where the two I's were adjacent. Let $O$ represent the set of arrangements where the two O's were adjacent. Let $C$ represent the set of arrangements where the two C's were adjacent. Let $\Omega$ represent the set of arrangements where we don't care. We wish to count $|\Omega\setminus (I\cup O\cup C)|$ which simplifies: $$|\Omega\setminus(I\cup O\cup C)|=|\Omega|-|I\cup O\cup C|$$ $$=|\Omega|-|I|-|O|-|C|+|I\cap O|+|I\cap C|+|O\cap C|-|I\cap O\cap C|$$ You correctly calculated $|\Omega|$ to be $\frac{13!}{2!2!2!}$ and you correctly calculated each of $|I|,|O|,|C|$ to be $\frac{12!}{2!2!}$, however you are missing the remainder of the terms.
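Carrying out the full inclusion-exclusion count numerically (hedged: this assumes a 13-letter word with exactly the three repeated pairs I, O, C, such as COMBINATORICS). Gluing a repeated pair into one block removes one letter and one factor of $2!$:

```python
from math import factorial

omega = factorial(13) // (2 * 2 * 2)   # |Omega|: all arrangements
single = factorial(12) // (2 * 2)      # one pair glued: |I| = |O| = |C|
double = factorial(11) // 2            # two pairs glued: |I ∩ O| etc.
triple = factorial(10)                 # all three pairs glued

count = omega - 3 * single + 3 * double - triple
print(count)  # 475372800
```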
Proving $\partial \bar N\subset \partial N$
Clarification: In this post whenever I say the word neighborhood I mean an open neighborhood. (Based on the comments, it seems that this caused some misunderstandings. And I have to admit that I did not think about this when posting.) $x\in\overline N$ if and only if every neighborhood $U$ of $x$ contains a point from $N$. $\boxed{\Rightarrow}$ Suppose that there is a neighborhood $U$ such that $U\cap N=\emptyset$. Then $N\subseteq X\setminus U$ and $X\setminus U$ is closed, hence $\overline N\subseteq X\setminus U$. I.e., $\overline N\cap U=\emptyset$, contradicting the assumption that $x$ is in this intersection. $\boxed{\Leftarrow}$ If $x\notin\overline N$ then $U:=X\setminus\overline N$ is a neighborhood of $x$ such that $U\cap N=\emptyset$. From this you can easily see that if every neighborhood of $x$ contains a point from $\overline N$, then every neighborhood of $x$ contains a point from $N$. (Let $U$ be a neighborhood of $x$. If $U\cap\overline N\ne\emptyset$, then we have a point $y\in U\cap \overline N$. Since $U$ is a neighborhood of $y$ and $y\in\overline N$, we get that $U\cap N\ne\emptyset.$)
about multidimensional wave equation and Huygens principle
The reason is that there is no transformation such as $rM$ that allows you to reduce the Euler-Poisson-Darboux equation to a wave equation in one dimension. You can carry out the computations in $d=2$: the transformation $rM$ does not work. And in fact the proof does depend on the dimension. If you keep reading Evans's book you will find the correct transformations for general odd dimensions, because $rM$ alone doesn't do the trick.
Complete metric on set of rational numbers
No such metric exists because every complete metric space without isolated points is either uncountable or empty. The proof is similar to what you wrote. Suppose $X$ is such a space. Since there are no isolated points, $X\setminus\{a\}$ is a dense open subset of $X$ for each $a\in X$. If $X$ is at most countable, then the fact that $$ \bigcap_{a\in X} X\setminus \{a\} = \varnothing $$ contradicts the Baire category theorem, according to which a countable intersection of dense open subsets of a nonempty complete metric space is nonempty.
Algebra Word Problem - Distance between two towns
I believe this is an approach: $$d=v_1t=v_2\left(t+\tfrac{15}{60}\right)$$ $$v_1t-v_2t=0.25\,v_2$$ $$t(v_1-v_2)=0.25\,v_2$$ $$t = \frac{0.25\,v_2}{v_1-v_2}$$ $$d=\frac{0.25\,v_1v_2}{v_1-v_2}$$ I hope this helps.
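A numeric instance of the formula above (hedged: the speeds $v_1=60$, $v_2=50$ km/h are sample values chosen only for illustration; the 15-minute head start is from the setup):

```python
# t = v2 * 0.25 / (v1 - v2)  and  d = v1 * t, per the derivation above.
v1, v2 = 60.0, 50.0
t = v2 * 0.25 / (v1 - v2)   # 1.25 hours
d = v1 * t                  # 75 km
print(t, d)

# Consistency: the slower traveller covers the same distance in t + 0.25 h.
print(v2 * (t + 0.25))      # 75.0
```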
divergence theorem for real valued function
The fundamental theorem of calculus does not apply to scalar functions, even to functions from $\mathbb R$ to $\mathbb R$. It applies to integrals of differential forms, such as $f(x)\,dx$. Since in one dimension all differential forms are written as "a function times $dx$", it is possible to forget (or not to know) the language of differential forms and talk about integrating a scalar function. But this does not change the essence of the matter: we need a differential form to transform an integral in FTC-style, the operation succinctly formalized in the general Stokes' theorem. So, the answer is no: there is nothing like the divergence theorem for scalar functions. But there are formulas (special cases of the aforementioned Stokes' theorem) which "reduce the order of derivative", e.g., $$\int_\gamma F_x\,dx+F_y\,dy+F_z\,dz = F(b)-F(a)$$ where $\gamma$ is any smooth curve going from point $a$ to point $b$.
Evaluation of a sum by means of Poisson sum formula and digamma function
To make it short, start with $$(2n+1)^2\pi^2+a^2=( (2 n+1)\pi -i a)\, ( (2 n+1)\pi +i a)$$ Using partial fraction decomposition $$\frac 1 {(2n+1)^2\pi^2+a^2}=\frac i {2a}\left(\frac 1{(2 n+1)\pi +i a } -\frac 1{(2 n+1)\pi -i a } \right)$$ make $$S_m=\sum_{n=-m}^{m}\frac{1}{(2n+1)^2\pi^2+a^2}$$ $$S_m=\frac{2 \pi a+\left(a^2+(2m+1)^2 \pi^2 \right) \left(-i \psi ^{(0)}\left(-\frac{i a}{2 \pi }+m+\frac{1}{2}\right)+i \psi ^{(0)}\left(\frac{i a}{2 \pi }+m+\frac{1}{2}\right)+\pi \tanh \left(\frac{a}{2}\right)\right)}{2 \pi a \left(a^2+(2 \pi m+\pi )^2\right)}$$ Using asymptotics $$S_m=\frac{\tanh \left(\frac{a}{2}\right)}{2 a}-\frac{1}{2 \pi ^2 m}+\frac{1}{4 \pi ^2 m^2}+O\left(\frac{1}{m^3}\right)$$
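A numeric check of the limiting value (hedged sketch): for $a=1$, the symmetric partial sums $S_m$ should approach $\tanh(a/2)/(2a)$, with an $O(1/m)$ tail as in the asymptotic expansion above.

```python
import math

# S_m = sum over n = -m..m of 1 / ((2n+1)^2 pi^2 + a^2), compared to the limit.
a = 1.0
m = 100000
S_m = sum(1.0 / ((2 * n + 1)**2 * math.pi**2 + a**2) for n in range(-m, m + 1))
limit = math.tanh(a / 2) / (2 * a)
print(S_m, limit)  # agree to roughly five decimal places for m = 10^5
```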
Divergence of an improper integral - |cos(x^2)|/x^q
You can avoid the issues with the mean value theorem by proceeding like this: Change variables from $x$ to $x^2$ as one of the answerers suggested, so that one is attempting to show divergence of ${\displaystyle \int_1^{\infty} {|\cos(x)| \over x^r}\,dx}$, where now ${1 \over 2} < r < 1$. Next, note that one has $$\int_{2\pi}^{\infty} {|\cos(x)| \over x^r}\,dx = \sum_{n=1}^{\infty} \int_{2n\pi}^{2(n+1)\pi}{|\cos(x)| \over x^r}\,dx$$ Since the denominator is monotone increasing, this is bounded below by $$\sum_{n=1}^{\infty} \int_{2n\pi}^{2(n+1)\pi}{|\cos(x)| \over (2\pi(n+1))^r}\,dx$$ $$= {1 \over (2\pi)^r}\sum_{n=1}^{\infty} {1 \over (n+1)^r}\int_{2n\pi}^{2(n+1)\pi}|\cos(x)|\,dx$$ Since $\cos(x)$ has period $2\pi$, the integral above has some fixed value $C > 0$, so the above is equal to $$C {1 \over (2\pi)^r}\sum_{n=1}^{\infty} {1 \over (n+1)^r}$$ Since $r < 1$ this diverges.
Show that $f(\xi + o_{\Bbb P}(1))=f(\xi) + o_{\Bbb P}(1)$
Fix $\eta\gt 0$ and pick some $R$ such that $\mathbb P\left\{\left|X\right|\gt R-1 \right\}\lt\eta$. The function $f$ is uniformly continuous on $[-R,R]$, hence there exists some $\delta\gt 0$ such that if $x,y\in[-R,R]$ and $|x-y|\lt \delta$, then $\left|f(x)-f(y)\right|\lt \varepsilon$. Now, write $$\mathbb P\left\{\left|f\left(X+o_{\mathbb P}(1)\right)-f\left(X\right)\right| \gt \varepsilon \right\}\leqslant \mathbb P\left\{\left|X+o_{\mathbb P}(1)\right|\gt R\right\} +\mathbb P\left\{\left|X\right|\gt R\right\} \\+ \mathbb P\left(\left\{\left|X+o_{\mathbb P}(1)\right|\leqslant R\right\}\cap \left\{\left|X \right|\leqslant R\right\}\cap\left\{\left|f\left(X+o_{\mathbb P}(1)\right)-f\left(X\right)\right| \gt \varepsilon\right\}\right)\\\leqslant 2\eta+\mathbb P\left\{\left|o_{\mathbb P}(1)\right|\gt 1\right\}+\mathbb P\left(\left\{\left|X+o_{\mathbb P}(1)\right|\leqslant R\right\}\cap \left\{\left|X \right|\leqslant R\right\}\cap \left\{\left|f\left(X+o_{\mathbb P}(1)\right)-f\left(X\right)\right| \gt \varepsilon\right\}\cap\left\{ \left|o_{\mathbb P}(1)\right|\lt \delta\right\}\right)\\+\mathbb P\left(\left|o_{\mathbb P}(1)\right|\geqslant \delta\right).$$ But the event $$\left\{\left|X+o_{\mathbb P}(1)\right|\leqslant R\right\}\cap \left\{\left|X \right|\leqslant R\right\}\cap \left\{\left|f\left(X+o_{\mathbb P}(1)\right)-f\left(X\right)\right| \gt \varepsilon\right\}\cap\left\{ \left|o_{\mathbb P}(1)\right|\lt \delta\right\} $$ is empty. A simpler proof can be given using the fact that a sequence which converges almost surely converges in probability to the same limit, and a sequence which converges in probability admits an almost surely convergent subsequence (to the same limit).
Starting index of a sequence is irrelevant
Actually, the first version of the question contained a perfectly valid (if somewhat verbose) proof of the following theorem. Theorem 1. The sequence of real numbers $(a_n)_{n=m}^\infty$ converges to $c \in \mathbb R$ if and only if the sequence $(a_n)_{n=m'}^\infty$ for $m' \geq m$ also converges to $c$. If you observe that the definition of convergence you are using in your proof Definition 1. The real sequence $(a_n)_{n=m}^\infty$ converges to $c \in \mathbb{R}$ if and only if for every $\varepsilon>0$ there exists $N \in \mathbb N$, $N>m$ such that $$n \geq N \implies |a_n-c| \leq \varepsilon.$$ can be formulated alternatively as Definition 2. The real sequence $(a_n)_{n=m}^\infty$ converges to $c \in \mathbb R$ if and only if for every $\varepsilon>0$ it contains a subsequence $(a_n)_{n=m'}^\infty$ where $m'>m$ that lies completely within the set $B_{c,\varepsilon}=\{z \in \mathbb R \mid |z-c| \leq \varepsilon\}$, then the proof reduces to a few sentences: In one direction, if $(a_n)_{n=m'}^\infty$ converges to $c$, the subsequence $(a_n)_{n=m''}^\infty$, $m''>m'$ of this sequence from Definition 2 that lies within $B_{c,\varepsilon}$ is also a subsequence of $(a_n)_{n=m}^\infty$. In the other direction, if $(a_n)_{n=m}^\infty$ converges to $c$, the subsequence $(a_n)_{n=m''}^\infty$, $m''>m$ of this sequence from Definition 2 that lies within $B_{c,\varepsilon}$ might not be a subsequence of $(a_n)_{n=m'}^\infty$, when $m''<m'$. But, even then, the subsequence $(a_n)_{n=\max\{m',m''\}}^\infty$ certainly is. To see that it really makes no difference if we replace $(a_n)_{n=m'}^\infty$, $m'>m$ with $(a_{n+k})_{n=m}^\infty$, $k>0$ in Theorem 1, consider the following proof of the amended theorem. In one direction, if $(a_{n+k})_{n=m}^\infty$ converges to $c$, the subsequence $(a_{n+k})_{n=m''}^\infty$, $m''>m$ of this sequence from Definition 2 that lies within $B_{c,\varepsilon}$ is also a subsequence of $(a_n)_{n=m}^\infty$. 
In the other direction, if $(a_n)_{n=m}^\infty$ converges to $c$, the subsequence $(a_n)_{n=m''}^\infty$, $m''>m$ of this sequence from Definition 2 that lies within $B_{c,\varepsilon}$ might not be a subsequence of $(a_{n+k})_{n=m}^\infty$, when $m''-m<k$. But, even then, the subsequence $(a_n)_{n=\max\{m+k,m''\}}^\infty$ certainly is. Note that both sequences $(a_n)_{n=m+k}^\infty$ and $(a_{n+k})_{n=m}^\infty$ are obtained by removing the first $k$ elements from the sequence $(a_n)_{n=m}^\infty$.
Function is defined on the whole real line and $|f(x) -f(y)| \leq |x-y|^\alpha$, then....
Your condition is a special case of Hölder continuity. If $\alpha = 1$, it is usually called Lipschitz instead of Hölder. I'll give some hints. I. Suppose $\alpha = 1 + \epsilon$ for $\epsilon > 0$. Then $\left|\frac{f(x) - f(y)}{x-y}\right| \leq |x-y|^\epsilon$. If we take the limit as $y \to x$, what does this say about the derivative of $f$ at $x$? II. Consider $f(x) = |x|$. III. Let $\epsilon > 0$, and $\delta = \epsilon^{\frac{1}{\alpha}}$. If $|x-y| < \delta$, then what can be said about $|f(x) - f(y)|$?
Non-zero index implies index $1$?
This is more like a sketch of how a solution might go, rather than an actual solution. There may be some insurmountable error within. You can replace your original path with a polygonal closed path with the same winding number about $0$. You can also assume that all the edges of this path have different slopes. There will probably be some nontrivial self-intersections. If so, break the path into a "sum" of simple closed paths. One of these must have nonzero winding number. Now appeal to the (polygonal) Jordan curve theorem to see that the winding number of this curve is $\pm1$.
Determine if these series converge or diverge. Comparison theorem.
All of your questions have a general answer. Throughout, $k,j\in\mathbb{R}$ and $n\in\mathbb{N}$. Given the series $$\sum^{\infty}_{n=1}\frac{an^k+bn^{k-1}+...+c}{dn^j+en^{j-1}+...+f}$$ If $k>j$, the series diverges by the Divergence Test, i.e. $$\frac{an^k+bn^{k-1}+...+c}{dn^j+en^{j-1}+...+f}\to\infty\space\space\text{as}\space\space n\to\infty$$ If $j>k$, there are two cases: $1)$ If $j-k>1$, then the series converges by limit comparison with $$\sum^\infty_{n=1}\frac{1}{n^{1+\epsilon}}$$ where $\epsilon\in \mathbb{R}^+$. $2)$ If $j-k\leq1$, then by comparison (up to a positive constant factor, for all sufficiently large $n$), $$\frac{1}{n}\leq\frac{an^k+bn^{k-1}+...+c}{dn^j+en^{j-1}+...+f}$$ Hence $$\sum^{\infty}_{n=1}\frac{1}{n}\leq\sum^{\infty}_{n=1}\frac{an^k+bn^{k-1}+...+c}{dn^j+en^{j-1}+...+f}\implies\text{the series diverges.}$$ This thinking also works with radicals such as $$\frac{1}{\sqrt{n^3+1}}\sim\frac{1}{n^{\frac{3}{2}}}$$
How to deduce the following equation from the other?
Write $a^x+b^y=s$, then the rearranged form of $(1)$ you have provided is $s(s+2)=m$ or $s^2+2s-m=0$. It remains to solve the quadratic equation for $s$, yielding $$s=\frac{-2\pm\sqrt{4+4m}}2=-1\pm\sqrt{m+1}$$ Taking the $+$ in the $\pm$ gives $(2)$: $s=a^x+b^y=\sqrt{m+1}-1$.
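A quick check with sample values (hedged: $a^x + b^y = s$ chosen as $2 + 3 = 5$ purely for illustration): if $m = s(s+2) = 35$, then $\sqrt{m+1}-1 = 6-1 = 5$ recovers $s$.

```python
# Verify the rearrangement s = sqrt(m + 1) - 1 on a concrete instance.
s = 2 + 3
m = s * (s + 2)
recovered = (m + 1) ** 0.5 - 1
print(m, recovered)  # 35 5.0
```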
Why is a nonlinear programming problem (NLO) called "nonlinear"? What does "nonlinearity" actually mean? Is it just "not linear" or something different?
Any of I, III or IV would make a problem non-linear. The answers to VI will probably be covered during your course, and different convexity conditions may lead to different algorithms.
What is a phase shift in trigonometry, and how can I determine them given a graph?
Amplitude, period and phase shift can all be recovered from the graph by noticing the coordinates of the peaks and troughs of the wave. Let $y_{peak}$ and $y_{trough}$ denote the $y$-coordinates of the peaks and troughs of the wave. Then for the amplitude we have $$ A=\frac{1}{2}\left(y_{peak}-y_{trough}\right) $$ From your graph there is no indication of the vertical scale, but let us suppose that the horizontal dashed lines are one unit apart. Then we would have $y_{peak}=2$ and $y_{trough}=-6$. This would give $A=\frac{1}{2}(2-(-6))=4$. We can also use $y_{peak}$ and $y_{trough}$ to find the vertical shift $D$. $$ D=\frac{1}{2}\left(y_{peak}+y_{trough}\right) $$ So, for your example, $D=\frac{1}{2}(2+(-6))=-2$. This leaves the values of $B$ and $C$. But first, we must find the period $P$, which is straightforward: the period $P$ is the horizontal distance between two successive peaks of the graph. For your graph this would be a distance $P=3\pi$. The value of $B$ is then found from $$ B=\frac{2\pi}{P} $$ For your graph, then, we have $B=\frac{2\pi}{3\pi}=\frac{2}{3}$. Finally we have the phase shift $\phi$. The phase shifts for the sine and cosine are computed differently. The easiest to determine from the graph is the phase shift of the cosine. Let $x_{peak}$ denote the $x$-coordinate of the peak closest to the vertical axis. For your graph, we have $x_{peak}=0$. $$\phi=x_{peak}\text{ for the cosine graph}$$ $$\phi=\left(x_{peak}-\frac{P}{4}\right)\text{ for the sine graph}$$ For your graph this gives phase shift $\phi=0$ for the cosine graph and phase shift $\phi=0-\frac{3\pi}{4}=-\frac{3\pi}{4}$ for the sine graph. But for your equations, we need the value of $C$, which is found from the values of $\phi$ and $B$: $$ C=B\phi $$ So if we use the cosine function to model your graph we have $C=0$, and for the sine graph we have $C=\frac{2}{3}\cdot\left(-\frac{3\pi}{4}\right)=-\frac{\pi}{2}$. 
So we have for both sine and cosine $A=4$ $D=-2$ $B=\frac{2}{3}$ For cosine, $C=0$ and for sine, $C=-\frac{\pi}{2}$. Thus your graph can be represented by either of the two equations $$ y=4\cos\left(\frac{2}{3}x \right)-2 $$ $$ y=4\sin\left(\frac{2}{3}x+\frac{\pi}{2}\right)-2 $$
Finding x using logarithms
If you multiply by $x^{1/3}$ you get $$x^{2/3}-4 = 3x^{1/3}$$ Then $$x^{2/3}-3x^{1/3} -4 = (x^{1/3} -4)(x^{1/3}+1) = 0$$ which gives you two solutions. Logarithms aren't needed.
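As a quick sanity check (my own addition, assuming the original equation was $x^{1/3}-4x^{-1/3}=3$, which is what "multiply by $x^{1/3}$" suggests), the substitution $t=x^{1/3}$ reduces everything to the factored quadratic:

```python
# Check the substitution t = x^(1/3): t^2 - 3t - 4 = (t - 4)(t + 1) = 0.
roots_t = [4, -1]
for t in roots_t:
    assert t * t - 3 * t - 4 == 0        # t solves the quadratic

solutions_x = [t ** 3 for t in roots_t]   # undo the substitution: x = t^3

for t in roots_t:
    # check the (assumed) original equation x^(1/3) - 4*x^(-1/3) = 3,
    # using the real cube root t directly to avoid complex principal roots
    assert abs((t - 4 / t) - 3) < 1e-12
```

So the two solutions are $x=64$ and $x=-1$ (the latter using the real cube root of $-1$).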
Can one find the signature of a real symmetric matrix just from the signs of some minors?
Let me state the Cauchy Interlacing Theorem: Let $B$ be a symmetric $n\times n$ real matrix. Let $c$ be a real vector of dimension $n$. Define $$A=\left[ \begin{array}{c|c} B & c \\ \hline c^T & \delta \end{array}\right],$$ where $\delta \in \mathbb{R}$. Then the spectrum of $B$ interlaces that of $A$, which is to say: $$\alpha_1 \leq \beta_1 \leq \alpha_2 \leq ... \leq \alpha_n \leq \beta_n \leq \alpha_{n+1}, $$ where $\{\alpha_1\leq \alpha_2 \leq ... \leq \alpha_{n+1} \}$ and $\{ \beta_1 \leq \beta_2 \leq ... \leq \beta_n \}$ are the spectra of $A$ and $B$ respectively. Assume we know $\det B\neq 0$ and $\det A \neq 0$. Say the signature of $B$ is $(k,n-k,0)$ (recall that the signature of a symmetric matrix is the ordered triple of the numbers of its positive, negative, and zero eigenvalues). That is to say: $$0<\beta_{n-k+1}\leq \beta_{n-k+2} \leq ... \leq \beta_n, $$ and $$\beta_{1}\leq \beta_{2} \leq ... \leq \beta_{n-k}<0. $$ Then from the Cauchy Interlacing Theorem, $$0<\alpha_{n-k+2}\leq \alpha_{n-k+3} \leq ... \leq \alpha_{n+1}, $$ and $$\alpha_{1}\leq \alpha_{2} \leq ... \leq \alpha_{n-k}<0. $$ It follows that the signature of $A$ is either $(k+1,n-k,0)$, in which case the determinant of $A$ is clearly of the same sign as the determinant of $B$, or it is $(k,n-k+1,0)$, in which case the determinant of $A$ is of a different sign than the determinant of $B$. An inductive application of this result thus yields the following statement about the sequence of leading principal minors of a symmetric matrix: the number of negative eigenvalues of an invertible symmetric matrix whose leading principal minors are all non-zero is equal to the number of sign changes in the sequence of leading principal minors (here without loss of generality we are letting the first leading principal minor be positive). So... under mild conditions, one can find the signature of a real symmetric matrix just from the signs of some minors!
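A small numerical illustration of the final statement (the $3\times3$ example matrix below is arbitrary, not from the question): the number of sign changes in the leading-principal-minor sequence matches the number of negative entries in the diagonal of an $LDL^T$ factorization, which by Sylvester's law of inertia equals the number of negative eigenvalues.

```python
# Sign changes in the leading principal minors vs. the inertia from LDL^T.

def det(M):
    # Laplace expansion; fine for tiny matrices.
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]]) for j in range(n))

def ldl_diagonal(A):
    # Diagonal D of the LDL^T factorization (no pivoting; valid when all
    # leading principal minors are nonzero).  The signs of D give the
    # inertia of A by Sylvester's law of inertia.
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    d = [0.0] * n
    for j in range(n):
        d[j] = A[j][j] - sum(L[j][k] ** 2 * d[k] for k in range(j))
        L[j][j] = 1.0
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] * d[k]
                                     for k in range(j))) / d[j]
    return d

A = [[2.0, 1.0, 0.0],
     [1.0, -3.0, 1.0],
     [0.0, 1.0, 1.0]]

minors = [det([row[:k] for row in A[:k]]) for k in range(1, len(A) + 1)]
seq = [1.0] + minors                     # prepend the (positive) 0th minor
sign_changes = sum(1 for a, b in zip(seq, seq[1:]) if a * b < 0)
neg_eigs = sum(1 for x in ldl_diagonal(A) if x < 0)
```

For this matrix the minors are $2,-7,-9$, giving one sign change, and indeed $D$ has exactly one negative entry.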
Limit of exponential function without L'hopital rule
Supposing that $f$ is given as $f(x)=\frac{(e^x+e^{-x})(\sin(mx))}{(e^x-1)}$ and that $\lim_{x\to0}f(x)=4+m$ multiplying the expression for $f$ with $m \frac{x}{mx}$ would probably be helpful.
What exactly is called "cone" in the category theory and how does it relate to a category of cones?
Writing $(\Delta \mid F)$ is a slight abuse of notation. Remember that $\Delta$ is a functor $\mathcal C \to \mathcal C^{\mathcal C'}$, while $F$ is a functor $\mathcal C' \to \mathcal C$. To form a comma category $(G \mid H)$, the functors $G$ and $H$ need to have the same codomain. So, we actually consider $F$ as a functor $\bf1 \to \mathcal C ^{\mathcal C'}$. Let's write it as $\hat F :\bf 1 \to \mathcal C ^{\mathcal C'}$, where $\hat F \star = F$ (and $\star$ is the unique object of $\bf 1$). Now we see that the objects of $(\Delta \mid \hat F)$ are triples $(A, \star, h)$, where $A$ is an object of $\mathcal C$ and $h$ is a morphism $\Delta A \to \hat F\star = F$. So $h$ is a natural transformation between the functors $\Delta A$ and $F$. It is not hard to verify that this is the same thing as a cone from $A$ to $F$. Edit: By “cone to $F$”, I mean an object $A$ of $\mathcal C$, and a family of morphisms $\gamma_X :A \to F X$, indexed by objects $X$ of $\mathcal C’$, such that for every $f : X \to Y$, we have $\gamma_Y = Ff \circ \gamma_X$. A morphism of cones $(A, \{\gamma_X\}_X) \to (B, \{\delta_X\}_X)$ is a morphism $g : A \to B$ such that for every $X$ we have $\gamma_X = \delta_X \circ g$. So given a category $\mathcal C$ and a functor $F : \mathcal C’ \to \mathcal C$ we can form the category of cones to $F$ in $\mathcal C$. This is isomorphic to the comma category $(\Delta \mid \hat F)$.
Hopf fibration with 7-dim. spheres as fibers.
The best reference for octonion multiplication is certainly Harvey, Reese; Lawson, H. Blaine, Jr. Calibrated geometries. Acta Math. 148 (1982), 47–157. For basic facts about this fibration from a purely topological viewpoint you might want to consult Steenrod. For more geometry, see A. Besse, Manifolds all of whose Geodesics are Closed.
Probability of two different real root of $x^2+Ux+V$
For fixed $c\in\Bbb R$, $$ P(U^2>c)=\begin{cases}1&\text{if }c\le 0\\ 0&\text{if }c\ge 1\\ 1-\sqrt c&\text{otherwise}\end{cases}$$ Now $$P(U^2>4V)=\int_0^1P(U^2>4v)\,\mathrm dv =\int_0^{\frac14}(1-\sqrt {4v})\,\mathrm dv.$$
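The last integral evaluates to $\int_0^{1/4}(1-2\sqrt v)\,\mathrm dv=\frac14-\frac16=\frac1{12}$. A quick Monte Carlo sketch of this value (my addition, assuming, as the computation above implicitly does, that $U,V$ are independent and uniform on $(0,1)$):

```python
import random

# Monte Carlo estimate of P(U^2 > 4V) for U, V i.i.d. uniform on (0, 1).
random.seed(0)
N = 1_000_000
hits = sum(1 for _ in range(N)
           if random.random() ** 2 > 4 * random.random())
estimate = hits / N
exact = 1 / 12   # value of the integral of (1 - 2*sqrt(v)) over [0, 1/4]
```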
Proof without mean value theorem that continuously partially differentiable implies differentiability
Your error occurs at the point of the proof where you write "we can choose $\epsilon > 0$ such that....". You are not free to choose $\epsilon$ as you might wish to. The statement you must prove at this point is: For all $\epsilon > 0$ there exists $\eta > 0$ such that if $\|x_1-a_1\| < \eta$ then $$(*) \qquad \|f(x_1,x_2)-f(a_1,x_2)-\frac{\partial f}{\partial x_1}(a_1,a_2)(x_1-a_1) \| < \epsilon \|x_1-a_1\| $$ One never starts a "for all $\epsilon > 0$" proof by saying "choose $\epsilon$ so that...". You do not have the freedom to choose $\epsilon$. The mechanics of proving this statement are that you are given a value of $\epsilon>0$. Then you must find an appropriate value of $\eta > 0$, and use it to prove that if $\|x_1-a_1\| < \eta$ then the inequality $(*)$ is true. Take a look at what the mean value theorem says (I note that you did not copy the whole theorem in your post; you omitted the conclusion). You'll see how useful it is in carrying out the mechanics of the proof.
Irrationality of the concatenation of decimal expansions of primes
Hint: If $ \gcd (a, d) = 1$, then there exist infinitely many primes in the arithmetic progression $ a + n d$. Note: This is a high-powered theorem (Dirichlet's theorem on primes in arithmetic progressions) whose proof is (arguably) inaccessible to a novice in number theory, but the statement is easy to understand and to accept as fact. Hint: A number is rational if and only if its decimal representation eventually repeats or terminates in $0$s. Note: This is a simple fact. I'm not sure if this is the theorem you are referencing in the statement, but it doesn't require "primes eventually look something like this". Hint: Prove that for any $k$, there must exist a run of $k$ consecutive $0$s in the number. Hence, this number is irrational.
Maximum N that will hold this true
Note that with $N=2008$ we have $ (2^{N+3}+8)^2=4^{N+3}+2\cdot 8\cdot 2^{N+3}+64=4^{N+3}+2^{2015}+64=64+32^{403}+4^{N+3},$ so we conjecture that the maximal value is $2008$. If $2^{2015}+2^6+2^{2N+6}$ is a perfect square then so is $\frac1{64}$ of it, i.e. $2^{2009}+1+2^{2N}=m^2$ for some $m\in\mathbb N$. But if $N> 2008$, we have $$(2^N)^2=2^{2N}<2^{2009}+1+2^{2N}<1+2^{N+1}+2^{2N}=(2^N+1)^2$$ so that $2^N<m<2^N+1$, which is absurd.
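These claims can be checked with exact integer arithmetic (this check is my addition, not part of the original answer):

```python
from math import isqrt

# N = 2008 works: (2^(N+3) + 8)^2 equals the expression 2^2015 + 2^6 + 2^(2N+6).
N = 2008
s = 2 ** (N + 3) + 8
square_at_2008 = (s * s == 2 ** 2015 + 2 ** 6 + 2 ** (2 * N + 6))

# Reduced form: 2^2009 + 1 + 2^(2N) = m^2 with m = 2^2008 + 1.
m = 2 ** N + 1
reduced_ok = (m * m == 2 ** 2009 + 1 + 2 ** (2 * N))

# For N = 2009 the reduced value lies strictly between consecutive squares,
# so it cannot be a perfect square.
n = 2 ** 2009 + 1 + 2 ** (2 * 2009)
fails_at_2009 = (isqrt(n) ** 2 != n)
```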
Combination of Animal Pairs
For getting exactly one pig and exactly one cow in a random drawing from the collection you specify, consider that the collection has 3 cows, 5 pigs, and 7 other kinds of animals. Then the desired probability is $$\frac{{3 \choose 1}{5 \choose 1}{7 \choose 0}}{{15 \choose 2}}.$$
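A quick check of this formula (my own illustration), both by evaluating the binomial coefficients and by brute-force enumeration of all two-animal draws:

```python
from fractions import Fraction
from itertools import combinations
from math import comb

# Probability of exactly one pig and one cow when drawing 2 animals
# from 3 cows + 5 pigs + 7 others.
formula = Fraction(comb(3, 1) * comb(5, 1) * comb(7, 0), comb(15, 2))

animals = ["cow"] * 3 + ["pig"] * 5 + ["other"] * 7
favorable = sum(1 for pair in combinations(range(15), 2)
                if sorted(animals[i] for i in pair) == ["cow", "pig"])
brute_force = Fraction(favorable, comb(15, 2))
```

Both give $\frac{15}{105}=\frac17$.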
Show that if $\inf(A^+)=a>0$ then $a\in A$ and $A=\{za;z\in \mathbb{Z}\}$
Observe that $A$ is a group under addition. Assume it is nonempty, so $\exists x\in A$. i) $x-x=0 \in A$ ii) If $x\in A$ then $0-x=-x \in A$ iii) If $x,y\in A$ then $x+y=x-(-y)\in A$ Now the statement is basically saying that discrete subgroups of $\mathbb{R}$ are cyclic: Suppose your chosen $a\notin A$. Then there must be a sequence in $A$: $a_n\downarrow a$ by definition of $\inf$. Now consider the sequence $(a_n-a_m)_{m,n}$ to get a contradiction. (A convergent sequence is Cauchy.)
Sylow's second theorem explanation
Let $G$ be a group of permutations of a set $S$. For each $i\in S$, let $$\mathrm{orb}_G(i)=\{\phi(i)\mid \phi \in G\}$$ The set $\mathrm{orb}_G(i)$ is a subset of $S$ called the orbit of $i$ under $G$. So the orbit of $i$ is the set of everything that element is mapped to under the permutations in $G$; that is why it is a subset of $S$, and the orbits partition the set $S$. Intuitively, think of a $3\times3\times3$ Rubik's cube: the red sticker at a top corner can move (permute) only to certain positions in the cube, and those positions form the orbit of that element. You also mentioned that no orbit has size $1$: on a Rubik's cube, the center sticker of a face cannot move to any other position, so its orbit has size $1$, and such fixed elements are exactly what that condition rules out.
Is there a simpler expression for the sum $\sum_{i=1}^{\infty} \frac{H_{i}}{i+1}$?
To expand on @Jean-ClaudeArbaut's point, since $H_i\ge 1$ (indeed $H_i\sim\ln i$), a comparison with the harmonic series implies the given series diverges. In fact, the $n$th partial sum is $\sim\int_1^n\frac{\ln x\,dx}{x}=\frac12\ln^2n$.
Numerical Analysis-Proof That Sum of Lagrange Interpolating Polynomials is One
Note that the basis polynomials $l_i(x)$ depend only on the nodes and are therefore the same for any function values. Also, the n-degree interpolating polynomial through n+1 points is unique, this is just "the Lagrange form" of that unique polynomial. Apply the Lagrange interpolation formula to the polynomial $p(x)=1$
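A quick numerical illustration of the hint (my addition; the nodes are chosen arbitrarily): interpolating the constant function $p(x)=1$ reproduces $1$ at the nodes, and by uniqueness the interpolant is identically $1$, so $\sum_i l_i(x)=1$ for every $x$:

```python
# Check numerically that the Lagrange basis polynomials sum to 1 everywhere,
# not just at the nodes.
nodes = [0.0, 1.0, 2.5, 4.0]

def lagrange_basis(i, x):
    # l_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
    result = 1.0
    for j, xj in enumerate(nodes):
        if j != i:
            result *= (x - xj) / (nodes[i] - xj)
    return result

max_dev = max(abs(sum(lagrange_basis(i, x) for i in range(len(nodes))) - 1.0)
              for x in [-1.0, 0.3, 1.7, 3.14, 10.0])
```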
Combinations trouble
My thought process so far is to find total combinations subtract the constraint scenarios. Yes, but the constraint (or forbidden) scenario is to "select a committee containing both A and B". Count that and subtract it from the total. You are allowed to select committees containing only one of the two (so don't subtract those).
Prove that for $n \ge 2$ the follow inequality holds $\frac {4^n}{n+1} \lt \frac{(2n)!}{(n!)^2}$.
For $k+1$ it is equivalent to $$ \frac{4^k}{k+1}<\frac{(2k+1)(k+2)(2k)!}{2(k+1)^2(k!)^2}$$ It is enough to prove that $$\frac{(2k+1)(k+2)}{2(k+1)^2}>1$$ Notice that the above fraction is monotonically decreasing and its limit as $k\to\infty$ is $1$, therefore it is always greater than $1$. And also, you have a mistake, as Harry noticed: $(2k)!\neq2k!$. Edit: I have no idea what Jaroslaw is talking about; here is the step-by-step solution. Suppose it is true for $k$. Then it is true for $k+1$ if and only if $$\frac{4^{k+1}}{k+2}<\frac{(2k+2)!}{\left((k+1)!\right)^2}\\\frac{4^k}{k+1}\cdot\frac{4(k+1)}{k+2}<\frac{(2k+2)(2k+1)(2k)!}{(k+1)^2(k!)^2}\\\frac{4^k}{k+1}\cdot\frac{4(k+1)}{k+2}<\frac{(2k)!}{(k!)^2}\cdot\frac{(2k+2)(2k+1)}{(k+1)^2}$$ It is enough to prove that: $$\frac{4(k+1)}{k+2}<\frac{(2k+2)(2k+1)}{(k+1)^2}$$ It is equivalent to $$\frac{(2k+2)(2k+1)}{(k+1)^2}>\frac{4(k+1)}{k+2}\\\frac{2(k+1)(2k+1)(k+2)}{4(k+1)(k+1)^2}>1\\\frac{(2k+1)(k+2)}{2(k+1)^2}>1$$
How do you prove that $\ln|f(z)|$ is harmonic?
EDIT: As 5pm points out below, this is of course actually good enough since every domain in $\mathbb{C}$ is locally simply connected and harmonicity is a local property! This was too long for a comment, but I thought it might be nice to know. There is a much nicer, less computational way to prove this result if we assume further that $D$ is simply connected. In particular, recall that if $D$ is a simply connected domain in $\mathbb{C}$ and $h$ a non-vanishing holomorphic function on $D$ then $h=e^g$ for some holomorphic function $g$ (this is because we can define a branch of the logarithm on $D$). So, if $D$ was simply connected we'd know that $f=e^g$ for some holomorphic $g$, and then $$\log|f|=\log|e^g|=\log(\exp(\text{Re}(g)))=\text{Re}(g)$$ and since $\text{Re}(g)$ is harmonic (since $g$ was holomorphic!) we're done.
Example of closed unit ball?
Given a metric space $E$, the closed ball about $x$ of radius $r$ is the set $\{y\in E\ :\ d(x, y) \leq r \}.$ An important result here is the Bolzano–Weierstrass theorem, or sequential compactness theorem. http://en.wikipedia.org/wiki/Bolzano%E2%80%93Weierstrass_theorem#Sequential_compactness_in_Euclidean_spaces Use this result to work out where you should try looking for examples of non-compact unit balls.
If the difference of the function only dependent on the difference of input, can we say it's linear without assuming it's continuous?
Yes, measurability implies that $f$ is affine. Proof: Let $g(t) = f(t) - f(0)$. Then $g$ is additive, because $$ g(a + b) = f(a + b) - f(0) = f(a) + R(b) - f(0),$$ and $R(b) = f(0 + b) - f(0)$, so $g(a + b) = f(a) + f(b) - 2f(0) = g(a) + g(b)$. Furthermore, $g$ is measurable. Every additive, measurable function is linear (see e.g. Theorem 5.5 in Horst Herrlich's "Axiom of Choice".) Therefore, there is some $a$ so that $g(t) = at$ for all $t$, and hence $f(t) = at + f(0)$ for all $t$.
Subset of sequence space closed
If $x=(x_n)_{n\in\Bbb N}\notin E$, it's not convergent to $0$ and we can find $\varepsilon>0$ and $A\subset\Bbb N$ infinite such that $|x_n|\geq 2\varepsilon$ for all $n\in A$ (but it's not necessarily true for $n$ large enough). The ball $B(x,\varepsilon)$ is contained in $E^c$ since $|y_n|\geq \varepsilon$ for all $n\in A$ and $y=(y_n)_{n\in\Bbb N}\in B(x,\varepsilon)$.
Efficiently evaluate a triple nested summation
Since the indices are independent, this reduces to $(\sum\limits_{i=1}^p a_i)(\sum\limits_{j=1}^p b_j)(\sum\limits_{k=1}^p c_k)$. I suppose one way of expressing it would be $A^TV\, B^TV\, C^TV$, where $V$ is the column vector with $v_i = 1 \,\, \forall i$.
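A tiny numerical check of the factorization (my addition; the sample sequences are arbitrary):

```python
# A triple sum over independent indices factors into a product of three sums.
a = [1.0, -2.0, 3.0]
b = [0.5, 4.0, 1.0]
c = [2.0, 1.0, -1.0]

triple = sum(a[i] * b[j] * c[k]
             for i in range(3) for j in range(3) for k in range(3))
factored = sum(a) * sum(b) * sum(c)
```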
(taylor series) For what range of $x$ can $\sin x=x$ approximation be used to relative accuracy of $\frac{1}{2}10^{-14}$?
The error in an alternating series is only as large as the first term not in the truncation, i.e. here $$ \left|\frac{x^3}{6}\right| $$ you will be expanding on a symmetric interval around $0$ of the form $(-a,a)$, so you need $$ \left|a^3\right|\leq6\frac{1}{2}10^{-14}=3(10^{-14})\implies |a|\leq\sqrt[3]{3}10^{-14/3} $$ Edit: As noted, perhaps it would be easier to tell how big this is by writing $$ \sqrt[3]{3}10^{-14/3}=\sqrt[3]{30\cdot10^{-15}}=\sqrt[3]{30}\left(10^{-5}\right ) $$
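Numerically (a sanity check I added, not part of the original answer): at the endpoint $a=\sqrt[3]{3}\,10^{-14/3}$ the alternating-series error estimate $a^3/6$ equals the target $\frac12 10^{-14}$, and the true error $|\sin a-a|$ stays below it:

```python
import math

# At the endpoint of the interval, the first omitted term a^3/6 equals the
# accuracy target, and the actual error of the approximation sin x ~ x
# is bounded by it.
a = (3e-14) ** (1 / 3)
bound = a ** 3 / 6              # should be 0.5e-14 up to rounding
actual_error = abs(math.sin(a) - a)
```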
How to prove the triangle inequality for this metric space
Suppose $f \ne g \ne h \ne f.$ Let $a=m(f,g),\,b=m(g,h), \,c=m(f,h).$ Suppose $d(f,g)+d(g,h)<d(f,h).$ Then $ 2^{-a}+2^{-b}<2^{-c},$ which implies $(\,c<a \land c<b\,). $ But then $$ f(c)=g(c)\quad (\text { as }c<a=m(f,g)\,)$$ $$ \text {and } \quad g(c)=h(c)\quad (\text { as } c<b=m(g,h)\,)$$ so $f(c)=h(c),$ contrary to the def'n of $c=m(f,h).$ Contradiction.
Find the limits to compute volume on triple integral
Let $R$ be fixed at some value $0 < R \le 1$. Then the upper limit on $z$ would be $z= \sqrt{1-(x^2 + y^2)}$. In cylindrical coordinates this would be $z = \sqrt{1-r^2}$. The limits on $r$ would then be $0 \le r \le R$ and $0 \le \theta \le 2 \pi$. If you use $0$ for the lower limit on $z$ and then multiply the integral by $2$, you will have the answer. Can you write the integral now?
Number theory problem from the $34th$ all Russian MO
The conjecture is false, because the smallest number representable in both forms for any two coprime numbers $m$ and $n$ is either $0+0=0$ or $1+1=2$ , depending on whether you consider $0$ a natural number or not. For a less trivial counterexample, consider $n=2$, $m=5$ and $$3^5+1^5=12^2+10^2=244,$$ which is smaller than $2^{2\times5}+1=1025$.
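Both computations in the counterexample are easy to verify exactly (my addition):

```python
# Verify the counterexample for n = 2, m = 5.
lhs = 3 ** 5 + 1 ** 5           # sum of two fifth powers
rhs = 12 ** 2 + 10 ** 2         # sum of two squares
below_bound = lhs < 2 ** (2 * 5) + 1   # 244 < 1025
```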
Uncountably many points of an uncountable set in a second countable are limit points
Arturo Magidin made a good point: if the assumption "to the contrary" is never used in a proof by contradiction, it is not really a proof by contradiction. This is a nice direct proof: let $F$ be the set of points of $A$ that are not limit points of $A$. For each $a\in F$ there is a basis element $U_a$ containing $a$ and nothing else from $A$. All such $U_a$ are distinct by construction. Hence, $F$ is at most countable, and therefore $A\setminus F$ must be uncountable.
Relation between $G$ bundle over free $G$ space $X$ and vector bundles over $X/G$ (Atiyah)
For your first question, the fiber $(E/G)_{Gx}$ is in bijection with $E_x$ (under the quotient map) and inherits the vector space structure from $E_x$. This choice is not unique, but if we choose to instead give $(E/G)_{Gx}$ the vector space structure from $gx$, then the identity on $(E/G)_{Gx}$ descends from the vector space isomorphism $g:E_x\rightarrow E_{gx}$ (which comes from the definition of a $G$-bundle over $X$), and therefore identifies the two vector space structures on $(E/G)_{Gx}$. This gets rid of any ambiguity in the definition of the vector space structure on $(E/G)_{Gx}$. For your second question, you are right in your general approach to trivialization. But we need to be careful about the $U$ we pick (and use Atiyah's assumptions that $X$ is Hausdorff and $G$ is finite). Fix an arbitrary $x\in X$. By freeness of the group action, for each $g\in G$ not the identity, we have $x\neq gx$ and thus we can find neighborhoods $U_g\ni x$ and $V_g\ni gx$ with $U_g\cap V_g=\emptyset$ (using the Hausdorff-ness of $X$). Also let $U_e=V_e$ be a neighborhood of $x$ over which $E$ is trivializable (where $e\in G$ is the identity). Now, using the finiteness of $G$, define the open set $$W=\bigcap_{g\in G}(U_g\cap g^{-1}V_g).$$ Since $x\in U_g$ and $gx\in V_g$ for every $g\in G$, this $W$ is a neighborhood of $x$. Also $W\subset U_e$ and thus $E$ is trivializable over $W$. Finally, notice that $$W\cap gW\subset U_g\cap V_g=\emptyset$$ for every $g\in G$ not the identity. This implies that $\pi:W\rightarrow\pi(W)$ is a homeomorphism, which enables the identification of the open set $W$ with the set it covers in the quotient. The analogous identification in the total space gives you the desired trivialization: $$p'^{-1}\big(\pi(W)\big)\cong p^{-1}(W)\cong W\times \mathbb C^n.$$ (Of course, there's nothing special about $\mathbb C$ here, I'm just writing it for concreteness. The bundle could also be real, quaternionic, or various other things.)
Compute $\lim_{n \to \infty} \sum_{k=1}^{n} \frac{n^2 + k}{n^3 + k}$
Your answer is correct. That's a nice application of the sandwich theorem (plus some artistic equation writing in the second line).
The equation of lines joining the origin and points of intersection of 2 parallel lines and an ellipse
Let $f(x,y)=x^2+y^2+2xy-4$ and $g(x,y)=3x^2+5y^2-xy-7$, so that the two curves are given by the zero sets $f(x,y)=0$ and $g(x,y)=0$. Their intersections are also contained in the zero set of every linear combination of these function, and per “Plücker’s mu,” one that passes through the origin is $$f(0,0)g(x,y)-g(0,0)f(x,y) = 5x^2-18xy+13y^2 = (x-y)(5x-13y) = 0.$$ The two lines are thus $x=y$ and $5x=13y$.
Integration over complex plane
We first write the integral of interest as $$\int_{-\infty}^\infty \frac{x\sin x}{x^4+1}\,dx=\text{Im}\left(\int_{-\infty}^\infty \frac{xe^{ix}}{x^4+1}\,dx\right)\tag 1$$ Next, we analyze the contour integral $$\oint_C \frac{ze^{iz}}{z^4+1}\,dz=\int_{-R}^R \frac{xe^{ix}}{x^4+1}\,dx+\int_0^\pi \frac{Re^{i\phi}e^{iRe^{i\phi}}}{R^4e^{i4\phi}+1}iRe^{i\phi}\,d\phi \tag 2$$ As $R\to \infty$, the second integral in $(2)$ vanishes while the imaginary part of the first integral becomes the integral on the right-hand side $(1)$. Therefore, we have $$\int_{-\infty}^\infty \frac{x\sin x}{x^4+1}\,dx=\text{Im}\left(2\pi i \sum \text{Res}\left(\frac{ze^{iz}}{z^4+1},z=e^{i\pi/4},e^{i3\pi/4}\right)\right)$$ The residues at $z=\frac{\pm1+i}{\sqrt 2}$ are given by $$\begin{align} \lim_{z\to \frac{\pm1+i}{\sqrt 2}}\left(\frac{\left(z-\frac{\pm1+i}{\sqrt 2}\right)ze^{iz}}{z^4+1}\right)&amp;=\lim_{z\to \frac{\pm1+i}{\sqrt 2}}\frac{e^{iz}}{4z^2}\\\\ &amp;=\frac{e^{-1/\sqrt 2}e^{\pm i/\sqrt 2}}{\pm i4} \end{align}$$ The sum of the residues is therefore $$\frac12 e^{-1/\sqrt 2}\sin\left(1/\sqrt 2\right)$$ Putting it all together yields $$\bbox[5px,border:2px solid #C0A000]{\int_{-\infty}^\infty \frac{x\sin x}{x^4+1}\,dx=\pi e^{-1/\sqrt 2}\sin\left(1/\sqrt 2\right)}$$
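A numerical sanity check of the boxed result (my addition; a simple midpoint rule on a truncated interval, with arbitrary truncation and step parameters — the integrand is even, so the full integral is twice the integral over $[0,R]$):

```python
import math

# Midpoint-rule check of int_{-inf}^{inf} x sin x / (x^4 + 1) dx.
def integrand(x):
    return x * math.sin(x) / (x ** 4 + 1)

R, n = 100.0, 400_000            # truncation and number of subintervals
h = R / n
half_integral = h * sum(integrand((k + 0.5) * h) for k in range(n))
numeric = 2 * half_integral      # integrand is even

exact = math.pi * math.exp(-1 / math.sqrt(2)) * math.sin(1 / math.sqrt(2))
```

The tail beyond $R=100$ is bounded by $\int_R^\infty x^{-3}\,dx=5\cdot10^{-5}$, so the agreement is well within the tolerance checked.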
Why is a Jordan region after rotating is still a Jordan region?
If $f\colon\mathbb{R}^2\longrightarrow\mathbb{R}^2$ is a homeomorphism, $E\subset\mathbb{R}^2$, and $p\in\partial E$, then $f(p)\in\partial f(E)$. That's because$$p\in\overline E\cap\overline{\mathbb{R}^2\setminus E}=\partial E\implies f(p)\in\overline{f(E)}\cap\overline{\mathbb{R}^2\setminus f(E)}=\partial f(E).$$
Localization of Dedekind domain at a prime ideal is a P.I.D
Given the comments, the question is actually as follows: why is a local Dedekind domain a PID? Let $A$ be a local Dedekind domain, and let $m$ be its maximal ideal. For each $x \in A \backslash \{0\}$, define $v(x) \in \mathbb{N}$ as the nonnegative integer such that $m^{v(x)}=xA$. Let $x \in m \setminus \{0\}$ be such that $v(x)$ is minimal: then, for each nonzero $y\in m$, $yA=m^{v(y)} \subseteq m^{v(x)}=xA$, so that $y \in xA$. Therefore $m \subset xA$ and $m$ is a principal ideal, thus so are its powers, and therefore so is every ideal of $A$.
Are these identity natural morphisms?
Here's a slightly more general answer to your question. Given categories and functors as shown: $$\mathcal{A} \overset{F}{\to} \mathcal{B} \overset{G}{\underset{H}{\rightrightarrows}} \mathcal{C} \overset{K}{\to} \mathcal{D}$$ and a natural transformation $\theta : G \to H$, we can define new natural transformations $$\theta_F : GF \to HF \quad \text{and} \quad K\theta : KG \to KH$$ by defining $$(\theta_F)_A = \theta_{F(A)} : GF(A) \to HF(A) \quad \text{and} \quad (K\theta)_B = K(\theta_B) : KG(B) \to KH(B)$$ for all $A \in \mathrm{ob}(\mathcal{A})$ and $B \in \mathrm{ob}(\mathcal{B})$. These are just definitions of what is meant by '$\theta_F$' and '$K\theta$'; but as you suggest, identity natural transformations are involved: indeed, $$\theta_F = \theta \star \mathrm{id}_F \quad \text{and} \quad K\theta = \mathrm{id}_K \star \theta$$ where $\star$ is horizontal composition of natural transformations. Horizontal composition is slightly trickier to define, so most authors define $\theta_F$ and $K\theta$ independently.
Prove a function is exponential decaying after a finite time
For all $x\ge0$, $$xe^{-x}<e^{-x/2}.$$ This does not deserve a paper.
Subgroups of $SL(2,\mathbb{Z})$.
$\def\SL{\mathrm{SL}}$ $\def\Z{\mathbf{Z}}$ The answer to this question is false, but I suspect that you may have misread the original claim. You are talking about $\Gamma$ up to conjugation in $\SL_2(\Z)$, but a variant would be to ask simply about $\Gamma$ as an abstract group up to isomorphism. It is quite likely that $\Gamma$ is determined by this information, because the group theory here is much simpler (for example, if $\Gamma$ is torsion free and finite index then it is actually free, and the rank is determined by the index.) The basic problem for your version of the question is that $\SL_2(\Z)$ has many interesting quotients (since it is virtually free) and the data is nowhere near enough to determine these and hence distinguish the corresponding normal subgroups. There is probably a simple example one can compute by hand if computation is your sort of thing. Let us recall that $$\SL_2(\Z)/Z(\SL_2(\Z)) = \mathrm{PSL}_2(\Z) = \langle S, T | S^2, (ST)^3 \rangle.$$ Here $$S = \left(\begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix} \right), \qquad T = \left(\begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix} \right).$$ I am going to choose $\Gamma$ normal in $\SL_2(\Z)$ such that $$G = \SL_2(\Z)/\Gamma = \langle S, T | S^2, (ST)^3, T^7,\Delta\rangle,$$ for some extra choice of relations $\Delta$. What can one say about the invariants in this case? First, let's suppose that $G$ is not the trivial group. An easy exercise shows that this forces $S$, $ST$, and $T$ to have exact orders $2$, $3$, and $7$ respectively. What can one deduce from this? Any element in $\SL_2(\Z)$ of order $4$ is conjugate to $S$, but the image of $S$ is nontrivial, so $e_4 = 0$. Any element in $\SL_2(\Z)$ of order $6$ is conjugate to $ST$, but the image of $ST$ is nontrivial, so $e_6 = 0$. The quotient is a quotient of $\mathrm{PSL}_2(\Z)$, so $e_4 = -1$. The image of $T$ has order $7$, and similarly the image of any conjugate of $T$ has order $7$. Thus $c_i = 7$ for all $i$. 
As you noted, the number of $c_i$ is determined by the index. Taken together, your claim would imply the following: Any finite quotient of $$\langle S, T | S^2, (ST)^3, T^7 \rangle$$ is determined by its order. Now finite quotients of this group are well studied; they are known as Hurwitz groups because of the relation with automorphisms of curves of genus $\ge 2$ with maximal automorphism groups. It is a theorem of Higman that $A_n$ is a Hurwitz group for sufficiently large $n$. By Goursat's Lemma, it follows that $A_n \oplus A_m$ is also a Hurwitz group for $n > m$ and $m$ sufficiently large. But then we have the following two Hurwitz groups of the same order: $$A_{n}, \qquad A_{n-1} \oplus A_m, \qquad n = |A_m|.$$
How to update a whitening matrix online, for streaming data?
You might try updating the inverse of your matrix. If $A$ is your matrix at time $t$, and this changes by a small amount $B$, then $$ (A + B)^{-1} \approx A^{-1} - A^{-1} B A^{-1}$$ However, matrix inversion has the same asymptotic complexity as matrix multiplication, so this may not be helpful unless $B$ is sparse. EDIT: It's probably better to update the covariance matrix and its inverse as each new data point appears. Suppose your data are the column vectors $X_n$, $n = 1,2,3, \ldots$. If $\mu_n$ and $\Sigma_n$ are the mean vector and covariance matrix (in the version with denominator $n-1$) of the first $n$ data points, we have $$ \eqalign{\mu_{n+1} &= \dfrac{n \mu_n + X_{n+1}}{n+1}\cr \Sigma_{n+1} &= \dfrac{n-1}{n} \Sigma_n + \dfrac{1}{n+1} (X_{n+1} - \mu_n)(X_{n+1} - \mu_n)^T }$$ This is a rank-$1$ update of a multiple of $\Sigma_n$, so by the Sherman-Morrison formula $$ \Sigma_{n+1}^{-1} = \frac{n}{n-1} \Sigma_n^{-1} - \frac{n}{n-1} \frac{\Sigma_n^{-1} (X_{n+1} - \mu_n) (X_{n+1} - \mu_n)^T \Sigma_n^{-1}} {\dfrac{n^2-1}{n} + (X_{n+1}-\mu_n)^T \Sigma_n^{-1} (X_{n+1} - \mu_n)}$$ We can compute this as $$\eqalign{Y &= X_{n+1} - \mu_n\cr Z &= \Sigma_n^{-1} Y\cr \Sigma_{n+1}^{-1} &= \dfrac{n}{n-1} \Sigma_n^{-1} - \dfrac{n}{n-1} \dfrac{Z Z^T}{\dfrac{n^2-1}{n} + Y^T Z}}$$ Note that this only involves matrix-vector and vector-vector multiplications, so it's much faster than matrix-matrix multiplication.
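Here is a pure-Python sketch of the streaming update for $2$-dimensional data (my own illustration; the data points are arbitrary, and we seed with three points because two points give a singular $2\times2$ sample covariance):

```python
# Stream data points, updating Sigma and Sigma^{-1} incrementally, then
# compare against a direct recomputation from all the data.

def mean_cov(points):
    # Sample mean and covariance (denominator n - 1) for 2-D points.
    n = len(points)
    mu = [sum(p[k] for p in points) / n for k in (0, 1)]
    S = [[0.0, 0.0], [0.0, 0.0]]
    for p in points:
        d = [p[0] - mu[0], p[1] - mu[1]]
        for i in (0, 1):
            for j in (0, 1):
                S[i][j] += d[i] * d[j] / (n - 1)
    return mu, S

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

pts = [(1.0, 2.0), (2.0, 1.0), (4.0, 3.0), (0.0, 5.0), (3.0, 3.0)]
mu, S = mean_cov(pts[:3])
Sinv = inv2(S)

for n in (3, 4):                  # stream the 4th and 5th points
    X = pts[n]
    Y = [X[0] - mu[0], X[1] - mu[1]]
    Z = [Sinv[i][0] * Y[0] + Sinv[i][1] * Y[1] for i in (0, 1)]
    denom = (n * n - 1) / n + Y[0] * Z[0] + Y[1] * Z[1]
    c = n / (n - 1)
    Sinv = [[c * Sinv[i][j] - c * Z[i] * Z[j] / denom for j in (0, 1)]
            for i in (0, 1)]
    S = [[(n - 1) / n * S[i][j] + Y[i] * Y[j] / (n + 1) for j in (0, 1)]
         for i in (0, 1)]
    mu = [(n * mu[k] + X[k]) / (n + 1) for k in (0, 1)]

direct_mu, direct_S = mean_cov(pts)
direct_Sinv = inv2(direct_S)
max_err = max(abs(Sinv[i][j] - direct_Sinv[i][j])
              for i in (0, 1) for j in (0, 1))
```

The streamed $\Sigma^{-1}$ agrees with the directly inverted covariance to floating-point precision.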
Calculate $2^{-1000000}(\frac{5+3\sqrt{3}}{i}-\frac{1}{1+\sqrt{3}i})^{999999}$
Hint: write the complex part as $|r|e^{i\theta}$. Then taking the power will simply result in $|r|^{999999}e^{i\,999999\,\theta}$. It's not actually a riddle, just not a really useful question...
I've been having some trouble with a precalculus question. It's in the body.
$ p = \frac{a+b}{2} + \frac{a-b}{2} i$ , $q = \frac{b+c}{2} +\frac{b-c}{2}i $ , $ r = \frac{c+d}{2} + \frac{c-d}{2}i $ , $s = \frac{d+a}{2} + \frac{d-a}{2}i$ $ 2(r-p) = (c-a)(1+i)+(d-b)(1-i)$ , $ 2(s-q) = (d-b)(1+i) + (a-c)(1-i)$ $ \therefore 2(s-q) = 2(r-p) \cdot i $ Use that $ i = e^{\frac{i\pi}{2}}$
Algorithm - the longest chord whose supporting line contains a given point, in a convex polygon
The chord doesn't have to contain a vertex of $P$, as this example shows: $AB$ is clearly longer than $CD$.
Prove or disprove the statement: For all sets $A$, $B$ and $C$, if $A=B \cup C$ then $A-B=C$.
Hint: What if $A = B = C$ is your very favorite set?
subfields of a finite subfield
Yes, the possible sizes are $3^1$ and $3^7.$ If $m$ is the size of a subfield, then the original field must be a finite dimensional vector space over that subfield, and thus, if $d$ is the vector space dimension, the original field must have $3^7=m^d$ elements. This means by unique factorization that $m=3^n$ for some $n$ and $nd=7.$ This means your subfields can have $3^1$ or $3^7$ elements. Then you need to show those subfields exist, which is easy.
Stuck on Differential Equation (Tried Substitution)
Hint: $\dfrac{dy}{dx}=\dfrac{x^2-xy-y^2}{x-y}$ $(y-x)\dfrac{dy}{dx}=y^2+xy-x^2$ This belongs to an Abel equation of the second kind. Let $u=y-x$. Then $y=u+x$ and $\dfrac{dy}{dx}=\dfrac{du}{dx}+1$ $\therefore u\left(\dfrac{du}{dx}+1\right)=(u+x)^2+x(u+x)-x^2$ $u\dfrac{du}{dx}+u=u^2+2xu+x^2+xu+x^2-x^2$ $u\dfrac{du}{dx}=u^2+(3x-1)u+x^2$ Let $u=e^xv$. Then $\dfrac{du}{dx}=e^x\dfrac{dv}{dx}+e^xv$ $\therefore e^xv\left(e^x\dfrac{dv}{dx}+e^xv\right)=e^{2x}v^2+(3x-1)e^xv+x^2$ $e^{2x}v\dfrac{dv}{dx}+e^{2x}v^2=e^{2x}v^2+(3x-1)e^xv+x^2$ $e^{2x}v\dfrac{dv}{dx}=(3x-1)e^xv+x^2$ $v\dfrac{dv}{dx}=(3x-1)e^{-x}v+x^2e^{-2x}$
If $f:[a,\infty)\to\mathbb{R}$ is monotonically decreasing, and $\int_{1}^{\infty}f(x)dx$ is convergent, then $\underset {x\to\infty} \lim f(x)=0$.
It is true. Assume $f:[1,\infty)\to \mathbb R$ is monotonically decreasing and prove $$ \int_1^\infty f(x)~dx \text{ convergent }\Rightarrow \lim_{x\to \infty} f(x)= 0 $$ by contraposition. Let $\lim f(x)\neq 0$. If $f(x)\to -\infty$ then $\int_1^\infty f(x)~dx=-\infty$ is obvious. Otherwise let $c:=\lim_{x\to\infty} f(x)\in\mathbb R\setminus\{0\}$. First, suppose $c>0$. Then we use $f(x)\geq c$ and conclude $$ \int_1^\infty f(x)~dx=\lim_{R\to\infty}\int_1^Rf(x)~dx\geq \lim_{R\to\infty}\int_1^Rc~dx=\lim_{R\to\infty}(R-1)c=\infty. $$ Next, suppose $c<0$. Since $f(x)\to c$ we get $X>0$ such that $f(x)\leq \frac12c$ for all $x\geq X$ and conclude \begin{align} \int_1^\infty f(x)~dx&=\int_1^Xf(x)~dx+\int_X^\infty f(x)~dx=\int_1^X f(x)~dx+\lim_{R\to\infty}\int_X^Rf(x)~dx\\ &\leq \int_1^Xf(x)~dx+\lim_{R\to\infty}\int_X^R\frac12c~dx\\ &=\int_1^Xf(x)~dx+\lim_{R\to\infty}\frac12(R-X)c=-\infty. \end{align} In either case the integral fails to converge.
What is the distribution of orders of group elements?
For cyclic groups, the distribution of orders is simple (if we ignore the difficulty of factorising the group's order to find the divisors). A cyclic group of order $n$ has exactly $\varphi(n)$ generators ($\varphi$ is Euler's totient function), and for each divisor $d$ of $n$ exactly one subgroup of order $d$ - if $g$ is a generator of the entire group, $\{ g^{k\cdot n/d} : 0 \leqslant k < d\}$ is the subgroup of order $d$, and $g^{n/d}$ is a generator of that subgroup - that subgroup is also cyclic, hence has $\varphi(d)$ generators. Conversely, in any group, an element of order $k$ generates a cyclic subgroup of order $k$, so in a cyclic group of order $n$, for each divisor $d$ of $n$, there are exactly $\varphi(d)$ elements of order $d$. Since $a \mid b \Rightarrow \varphi(a) \leqslant \varphi(b)$, cyclic groups tend to have more elements with large orders than with small orders (but of course, the count does not, in general, increase monotonically with the order; a cyclic group with $1140$ elements has $18$ elements of order $19$, but only $16$ elements of order $60$ and only $8$ of order $30$). From the cyclic case, you can obtain the distributions for finite(ly generated) abelian groups. Such a group is (isomorphic to) a direct sum of cyclic groups, and the order of an element is the least common multiple of the orders of the components. So to find the number of elements of order $k$ in $$G \cong \bigoplus_{i=1}^m \mathbb{Z}/(n_i),$$ find all sequences $(d_1,\,\ldots,\, d_m)$ with $d_i \mid n_i$ and $\operatorname{lcm} (d_1,\,\ldots,\,d_m) = k$, and sum up the partial counts $$\prod_{i=1}^m \varphi(d_i)$$ you get from choosing elements of order $d_i$ in each summand. For small $m$, that is quite doable, but it becomes unwieldy rather fast if the number of summands grows.
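The counts above are easy to check by brute force. A sketch of mine (the function names are my own) computes the order distribution of $\mathbb{Z}/n$ directly, compares it with the $\varphi(d)$-per-divisor description, and implements the divisor-tuple sum for direct sums:

```python
from itertools import product
from math import gcd, prod

def phi(n):
    """Euler's totient, by direct count (fine for small n)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def order_counts_cyclic(n):
    """Number of elements of each order in Z/n: the order of k is n // gcd(n, k)."""
    counts = {}
    for k in range(n):
        d = n // gcd(n, k)
        counts[d] = counts.get(d, 0) + 1
    return counts

def order_counts_abelian(ns):
    """Order distribution in Z/n_1 + ... + Z/n_m: sum prod(phi(d_i))
    over divisor tuples (d_1, ..., d_m), grouped by their lcm."""
    divisors = [[d for d in range(1, n + 1) if n % d == 0] for n in ns]
    counts = {}
    for ds in product(*divisors):
        k = 1
        for d in ds:
            k = k * d // gcd(k, d)   # lcm of the tuple
        counts[k] = counts.get(k, 0) + prod(phi(d) for d in ds)
    return counts

# The 1140 example from the text, and the phi(d) description in general.
c = order_counts_cyclic(1140)
print(c[19], c[60], c[30])  # 18 16 8
assert all(c[d] == phi(d) for d in c)
```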
If $\sigma : H \xrightarrow{\sim} G $ is a group isomorphism and $H = \langle S \rangle$, then does $G = \langle \sigma(S)\rangle$?
You've asked two questions and made a few claims, I'll try to answer everything. Yes, if $\sigma : H \to G$ is an isomorphism and $H$ is generated by $S$, then $G$ is generated by $\sigma(S)$. Just write any element of $H$ as $h=s_1^{\pm} \dots s_k^{\pm}$ for $s_i \in S$; then $\sigma(h) = \sigma(s_1)^{\pm} \dots \sigma(s_k)^{\pm}$ belongs to $\langle \sigma(S) \rangle$, and since $\sigma$ is surjective, every element of $G$ arises this way. No, we cannot conclude that $H \simeq \mathbb{Q}^\times / K$. I do not follow your argument, and since there is a gap it is difficult to pinpoint what is wrong, exactly. If you wanted to conclude that, you would essentially need to build a surjective morphism $\mathbb{Q}^\times \to H$ with kernel $K$. You have not done that. I do not get your argument that $\sigma(a) > \sigma(b) \implies a/b \in H$. You have proved that if $a/b = p_1 \dots p_r / (q_1 \dots q_r)$ is a product of generators of $H$, then $\sigma(a) > \sigma(b)$. But you have not proved the converse.
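A tiny brute-force illustration of the first point (my own sketch; the group, the automorphism, and the helper `generated` are all choices made for the example, not from the question): in $\mathbb{Z}/12$, the set $S=\{8,9\}$ generates the whole group, and so does its image under the automorphism $a \mapsto 5a \bmod 12$.

```python
def generated(gens, op, identity):
    """Subgroup generated by gens in a finite group, by closing under op.

    In a finite group, closure under multiplication by the generators
    already yields the generated subgroup (powers eventually hit inverses).
    """
    span = {identity}
    changed = True
    while changed:
        changed = False
        for a in list(span):
            for g in gens:
                x = op(a, g)
                if x not in span:
                    span.add(x)
                    changed = True
    return span

n = 12
add = lambda a, b: (a + b) % n
sigma = lambda a: (5 * a) % n   # an automorphism of Z/12, since gcd(5, 12) = 1

H = set(range(n))
S = {8, 9}
assert generated(S, add, 0) == H                      # S generates H
assert generated({sigma(s) for s in S}, add, 0) == H  # so does sigma(S)
print(sorted(sigma(s) for s in S))  # [4, 9]
```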
Obtain the coefficient of $x^2$ in the expansion of $1+\frac{6}{2x+1}+\frac{5}{2-3x}$
Recall that \begin{align} \frac1{a-b z} &=\frac1a\frac1{1-\left(\frac b a z\right)}\\ &=\frac1a\sum_{n=0}^{\infty}\left(\frac b a z\right)^n \end{align} We have $$\frac1{1+2x}=\frac1{1-(-2x)}=1-2x+4x^2-8x^3+ \ldots$$ and $$\frac1{2-3x}=\frac12\frac1{1-\frac32x}=\frac12\times\left(1+\frac32x+\frac94x^2+\ldots\right)$$ Now you have $1+6\times\frac1{1+2x} + 5 \times \frac1{2-3x}$, $\color{red}{\mbox{and you can substitute the latter expansions and obtain the result you are after.}}$
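Carrying out that substitution (this final step is mine, not spelled out in the answer), the coefficient of $x^2$ is $6\cdot 4 + 5\cdot\frac{9}{8} = \frac{237}{8}$; exact arithmetic confirms it:

```python
from fractions import Fraction

# Coefficient of x^n in 1/(a - b x) is b^n / a^(n+1), by the geometric series.
def geom_coeff(a, b, n):
    return Fraction(b)**n / Fraction(a)**(n + 1)

# 1 + 6/(1 + 2x) + 5/(2 - 3x):  write 1/(1+2x) = 1/(1 - (-2)x), so a = 1, b = -2.
c2 = 6 * geom_coeff(1, -2, 2) + 5 * geom_coeff(2, 3, 2)
print(c2)  # 237/8
```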
Intersection of a directed family of large sets
[Edit: The answer below refers to the original version of the question, where the sets in $G$ were not required to be different modulo the null ideal.] Not necessarily. Pick a non-measurable set $T\subset[0,1]$, and for each finite set $F$ of points in $[0,1]\setminus T$, put in $G$ the set $[0,1]\setminus F$. Then $\bigcap G=T$. By replacing $T$ with the empty set, we see that even if we require that $\bigcap G$ is measurable, we cannot assert any non-trivial lower bounds on its measure.
Find a stage $n(\epsilon)$ such that $|x_n-l|<\epsilon$
While there is a least $n_0(\varepsilon)$ such that $|2^n (n!)^{-1}|<\varepsilon$ holds for all $n\geq n_0$ and fails for $n<n_0$, you are not required to find this smallest $n_0$; any bound that works will do. Just note that for $n\geq 2$: $$ \left| \frac{2^n}{n!}\right|=\frac{2\cdot 2\cdot 2 \cdots 2}{1\cdot 2\cdot 3\cdots n} = \frac{2}{1}\cdot\frac{2}{2}\cdot\frac{2}{3}\cdots \frac{2}{n}\leq 2\cdot \frac{2}{n}=\frac{4}{n}<\varepsilon, $$ since every factor $\frac{2}{k}$ with $2\leq k\leq n-1$ is at most $1$. Thus you can pick any $n(\varepsilon)$ greater than $4/\varepsilon$.
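The bound can be sanity-checked numerically (my own sketch; the helper `term` computes the product form to avoid overflowing floats for large $n$):

```python
def term(n):
    """2**n / n! computed as a running product (2/1)(2/2)...(2/n)."""
    t = 1.0
    for k in range(1, n + 1):
        t *= 2.0 / k
    return t

# The product form gives 2^n/n! <= 2 * (2/n) = 4/n for n >= 2.
for n in range(2, 200):
    assert term(n) <= 4.0 / n + 1e-15

# So any n > 4/eps works as n(eps).
eps = 1e-3
n = 4001   # any n > 4/eps
assert term(n) < eps
print("n =", n, "suffices for eps =", eps)
```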
Proving limits of the composition of functions
Do not fix the sequence $(y_{n})$ in the sequential criterion for $g$ in advance. Rather, instantiate the sequential criterion with the specific sequence $(f(x_{n}))$, none of whose terms equals $y$, by assumption. Then the rest of the argument goes through.
Showing that $\arctan x = \arcsin\left(\frac{x}{\sqrt{1 + x^2}}\right)$
The connection is way more revealing in its simplicity if you just use trigonometry. For $x$ positive take a right-angled triangle with legs $\overline{AB} = 1$ and $\overline{BC} = x$. Then by definition $$\angle CAB = \arctan x.$$ The hypotenuse measures $\overline{AC} = \sqrt{1+x^2}$, by the Pythagorean Theorem. So the same angle can also be defined as $$\angle CAB = \arcsin \left(\frac{x}{\sqrt{1+x^2}}\right).$$ For negative $x$ take $\overline{BC} = -x$ and recall the odd symmetry of both sine and tangent. As easy as that.
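A quick numerical spot-check of the identity on both signs (my own sketch, not part of the answer):

```python
import math

# arctan(x) == arcsin(x / sqrt(1 + x^2)) for all real x.
for x in [-100.0, -1.0, -0.3, 0.0, 0.5, 2.0, 100.0]:
    lhs = math.atan(x)
    rhs = math.asin(x / math.sqrt(1 + x*x))
    assert math.isclose(lhs, rhs, rel_tol=1e-12, abs_tol=1e-12)
print("identity holds on all samples")
```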
Assume that a part of ∆ABC around vertex A is not visible. Describe how to find the angle bisector of ∠CAB.
You may exploit the following property of the angle bisector through $A$: for any point $P$ on it, the distance of $P$ from the $AB$-side equals the distance of $P$ from the $AC$-side. Just find two distinct points with this property in the visible part and join them to get the wanted line. For instance, you may take $P=I$, the intersection of the internal angle bisectors from $B$ and $C$, and $Q=I_A$, the intersection of the external angle bisectors from $B$ and $C$.

An alternative approach through triangle similarities:

1. Take $B'\in AB$ close to $B$ and let $C'\in AC$ be such that $B'C'\parallel BC$;
2. Let $C''\in BC$ be such that $B' C''\parallel AC$;
3. Let $J$ be the point in which the angle bisector of $\widehat{BB'C''}$ meets the $BC$-side, and $K$ the projection of $J$ on $B'C''$;
4. Let $L=BK\cap AC$ and $N\in BC$ be such that $LN\parallel KJ$;
5. The parallel to $B'J$ through $L$ is the wanted angle bisector.

A third approach:

1. Let $B'$ and $C'$ be as before, and let $M$ be the midpoint of $B'C'$;
2. Let $A'$ be the intersection between the parallel to $AC$ through $B'$ and the parallel to $AB$ through $C'$;
3. Let $J\in B'C'$ be on the angle bisector of $\widehat{B' A' C'}$;
4. Let $K$ be the symmetric of $J$ with respect to $M$;
5. The wanted line is the parallel to $A'J$ through $K$.
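The equidistance property underlying the first approach can be verified numerically for a concrete triangle (my own sketch; the triangle coordinates and helper names are arbitrary choices): the incenter $I$ is equidistant from lines $AB$ and $AC$, so it lies on the bisector of $\angle CAB$.

```python
import math

def dist_to_line(P, A, B):
    """Distance from point P to the line through A and B (2D cross product)."""
    px, py = P
    ax, ay = A
    bx, by = B
    cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax)
    return abs(cross) / math.hypot(bx - ax, by - ay)

# Incenter of triangle ABC: barycentric weights are the opposite side lengths.
A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
a = math.dist(B, C)
b = math.dist(A, C)
c = math.dist(A, B)
s = a + b + c
I = ((a*A[0] + b*B[0] + c*C[0]) / s, (a*A[1] + b*B[1] + c*C[1]) / s)

# I is equidistant from the AB- and AC-sides, hence on the bisector of angle A.
assert math.isclose(dist_to_line(I, A, B), dist_to_line(I, A, C), rel_tol=1e-12)
print("incenter is equidistant from lines AB and AC")
```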
find the point of convergence of sequence {$a_n$}
Try sandwiching $a_n$. $$a_n = \sum_{k=1}^n \frac{n}{n^2 + k} \le \sum_{k=1}^n \frac{1}{n} = 1.$$ $$a_n = \sum_{k=1}^n \frac{n}{n^2 + k} \ge \sum_{k=1}^n \frac{1}{n + 1} = \frac{n}{n+1}.$$ Both bounds tend to $1$, so by the squeeze theorem $a_n \to 1$.
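Both bounds are easy to confirm numerically (a quick sketch of mine, not from the answer):

```python
def a(n):
    """Partial sum a_n = sum_{k=1}^n n / (n^2 + k)."""
    return sum(n / (n*n + k) for k in range(1, n + 1))

# Both bounds hold, and they squeeze a_n to 1.
for n in [1, 10, 100, 10_000]:
    assert n / (n + 1) <= a(n) <= 1.0
print(a(10_000))  # close to 1 (between 10000/10001 and 1)
```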
Cramer-Rao lower bound and efficiency vs biased estimator efficiency
Yes. Consider any constant estimator: it is biased (in general), but its variance is zero, which is strictly below the Cramér-Rao lower bound for unbiased estimators. So it is indeed possible to find such an estimator. See also Petre Stoica, Randolph L. Moses, "On biased estimators and the unbiased Cramér-Rao lower bound", 1990.
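A small simulation (my own sketch; the distribution, sample size, and trial count are arbitrary choices) makes the point concrete: when estimating the mean of a normal distribution, the sample mean has variance near the CRLB $\sigma^2/n$, while a constant estimator has zero variance, at the cost of bias.

```python
import random
import statistics

random.seed(0)

# Estimating the mean mu of N(mu, sigma^2) from n i.i.d. samples.
# The CRLB for unbiased estimators of mu is sigma^2 / n.
mu, sigma, n, trials = 3.0, 2.0, 25, 2000
crlb = sigma**2 / n   # 0.16

sample_means = [statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
                for _ in range(trials)]
var_mean = statistics.pvariance(sample_means)   # roughly sigma^2 / n

# A constant estimator (say, always 0) is biased but has zero variance,
# strictly below the unbiased CRLB.
var_const = 0.0
assert var_const < crlb
assert 0.5 * crlb < var_mean < 2.0 * crlb   # sample mean sits near the bound
print("CRLB:", crlb, "  empirical Var(sample mean):", var_mean)
```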
Filtration of stopping time equal to the natural filtration of the stopped process
The "saturation" property assumed by Shiryaev is indeed clear for $\Omega$ consisting of the space of cadlag paths from $[0,\infty)$ to $\Bbb R^d$, and $X_t(\omega)=\omega(t)$ for $t\ge0$ and $\omega\in\Omega$. For, given $t\ge 0$ and $\omega\in\Omega$, define $\omega'$ to be the path "$\omega$ stopped at time $t$"; that is, $\omega'(s):=\omega(s\wedge t)$ for $s\ge 0$. This path $\omega'$ is clearly an element of $\Omega$, and $X_{s\wedge t}(\omega) =\omega(s\wedge t) =\omega'(s)=X_s(\omega')$ for each $s\ge 0$.