Is $Log(i^{1/3})=\frac{1}{3}Log(i)$?
You have $$ i = \exp\left(i\frac{\pi}{2} + i2n\pi\right) $$ Taking the cube root yields $$ i^{1/3} = \exp\left(i\frac{\pi}{6} + i \frac{2n\pi}{3}\right) $$ Since $|i| = |i^{1/3}| = 1$, taking the log yields $$ \log (i^{1/3}) = i\left(\frac{\pi}{6} + \frac{2n\pi}{3} \right) $$ On the other hand, we have $$ \frac{1}{3}\log (i) = \frac{i}{3} \left(\frac{\pi}{2} + 2n\pi \right) = i \left( \frac{\pi}{6} + \frac{2n\pi}{3} \right) $$ Does this help? EDIT: Continuing from your work, you can show that the 3 arguments for $\log(i^{1/3})$ are evenly spaced by an angle of $2\pi/3$. Therefore they can all be combined as $$ \left\{\begin{aligned} i\left(-\frac{\pi}{6} + 2m_1\pi\right) \\ i\left(\frac{5\pi}{6} + 2m_2\pi\right) \\ i\left(-\frac{\pi}{2} + 2m_3\pi\right) \end{aligned}\right. = i\left(\frac{\pi}{6} + \frac{2n\pi}{3}\right) = i(4n+1)\frac{\pi}{6} $$ where $n$ is mapped by every alternating triplet of $(m_1,m_2,m_3)$. For example $m_3 = 0, m_1 = 0, m_2 = 0 \to n = -1,0,1$, etc.
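A quick numerical sanity check of the principal branch (a sketch in Python; the exponent `1/3` computes the principal cube root):

    import cmath

    lhs = cmath.log(1j ** (1 / 3))   # Log(i^(1/3)) on the principal branch
    rhs = cmath.log(1j) / 3          # (1/3) Log(i)

    print(lhs, rhs)                  # both approximately 0.5235987755982988j = i*pi/6
    assert abs(lhs - rhs) < 1e-12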
Inequality rules of random variables
I suppose you meant "random variable $X$". Yes, that is true, because probability (measure) is a measure, hence it must satisfy countable additivity - in symbols (without exact details), let $\{A_i\}_{i = 1}^{\infty}$ be the set of pairwise disjoint measurable sets and $P$ be the probability measure. Then $P(\bigcup_{i = 1}^{\infty} A_i) = \sum_{i = 1}^{\infty} P(A_i)$. This also applies to finite disjoint unions, because $\bigcup_{i = 1}^{n}A_i = A_1 \cup A_2 \cup \cdots \cup A_n \cup \emptyset \cup \emptyset \cup \cdots$. And this applies to your situation, because $\{1 \leq X < 3/2 \} = \{X = 1\} \cup \{1 < X < 3/2\}$ and $\{X = 1 \} \cap \{1 < X < 3/2\} = \emptyset$.
Taylor Polynomials question
Moving to an answer to give a little more detail. Ignoring choice of $x_0$ (or $a$ as I referred to it in comments), you can find the derivatives of $f(x) = \sqrt{x}$ fairly easily using the Power Rule. We know that we can create an $n$th degree polynomial approximation $P_n(x)$ using the formula $\displaystyle P_n(x) = \sum_{k=0}^n \dfrac{f^{(k)}(x_0)}{k!}(x-x_0)^k$. The key to the approximation is to choose a good value for $x_0$ so that (a) you can do the calculations easily and (b) you minimize the number of terms needed to get an acceptable error. Since we want to approximate $\sqrt{8}$, a good choice would be $x_0=9$. No error tolerance is specified, so let's try using $n=2$ as a first approximation. $f(9) = 3 \\ f'(x) = \dfrac{1}{2\sqrt{x}} \Longrightarrow f'(9) = \dfrac{1}{6} \\ f''(x) = \dfrac{-1}{4x^{\frac{3}{2}}} \Longrightarrow f''(9) = \dfrac{-1}{108}$ So we have $P_2(x) = 3 + \dfrac{1}{6}(x-9) - \dfrac{1}{216}(x-9)^2$. This gives an approximation for $\sqrt{8}$ of $P_2(8) = 3 + \dfrac{1}{6}(8-9) - \dfrac{1}{216}(8-9)^2 = \dfrac{611}{216} \approx 2.8287$.
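A minimal numerical check of this approximation (a sketch; `P2` is just the polynomial derived above):

    import math

    def P2(x):
        # Second-degree Taylor polynomial of sqrt(x) centered at x0 = 9
        return 3 + (x - 9) / 6 - (x - 9) ** 2 / 216

    print(P2(8))                       # 2.828703... (= 611/216)
    print(math.sqrt(8))                # 2.828427...
    print(abs(P2(8) - math.sqrt(8)))   # error ~ 2.8e-4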
tensor product and matrix multiplication distributive properties
Consider matrices $A,B,C,D$ of sizes such that the products $AC$ and $AD$ can be formed. We can use block matrix multiplication to show that $(A\otimes B)\,(C\otimes D)=(AC)\otimes(BD)$. We will use the notation $A\otimes B = (a_{ij} B)_{ij}$ to denote block matrices, where indices are always supposed to range appropriately. Then \begin{align*} (A\otimes B)\,(C\otimes D) &= (a_{ij} B)_{ij}\, (c_{ij} D)_{ij} \\ &= \left(\sum_k (a_{ik} B)(c_{kj} D)\right)_{ij} \\ &= \left( \left(\sum_k a_{ik} c_{kj}\right) BD\right)_{ij}. \end{align*} Note that $\sum_k a_{ik} c_{kj}$ is the $i,j$-th entry of $AC$ so the result is indeed equal to $(AC)\otimes (BD)$. Since traces of Kronecker products are given as $\operatorname{Tr}(A\otimes B)=\operatorname{Tr}(A) \operatorname{Tr}(B)$, this yields $$ \operatorname{Tr}\left((A\otimes B)\,(C\otimes D)\right) = \operatorname{Tr}(AC) \operatorname{Tr}(BD). $$ In your case that gives $$ \operatorname{Tr}\left((A\otimes B)\,(\overline{A}^T\otimes \overline{B}^T)\right) = \operatorname{Tr}(A\overline{A}^T) \operatorname{Tr}(B\overline{B}^T) = \|A\|_F^2\, \|B\|_F^2, $$ where $\|\cdot\|_F$ denotes the Frobenius norm.
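A numerical verification of both identities (a sketch with NumPy; the matrices are random and square so that $AC$ and $BD$ are defined):

    import numpy as np

    rng = np.random.default_rng(0)
    A, B, C, D = (rng.standard_normal((3, 3)) for _ in range(4))

    # Mixed-product property: (A x B)(C x D) = (AC) x (BD)
    assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))

    # Trace identity: Tr((A x B)(C x D)) = Tr(AC) Tr(BD)
    lhs = np.trace(np.kron(A, B) @ np.kron(C, D))
    rhs = np.trace(A @ C) * np.trace(B @ D)
    assert np.isclose(lhs, rhs)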
Non local functor covered by affine opens
I think I understand things now, since a few people favorited the question I'll post what I've figured out. I'll be using Demazure and Gabriels definition of local. First a little lemma: Lemma. Let $X$ be a subfunctor of $Y$ and let $Y$ be local. Then $X$ is local if and only if for every $R$ and every $f_1, \ldots, f_n \in R$ generating the unit ideal, if $y \in Y(R)$ maps to $X(R_{f_i})$ for each $i$ then $y \in X(R)$. Proof. The sequence that we have to check is exact for $X$ is just a restriction of the sequence for $Y$, so the first map $X(R) \to \prod_iX(R_{f_i})$ is automatically injective. An element of $\prod_iX(R_{f_i})$ on which the two following maps agree lifts to $Y(R)$ because $Y$ is local and that lift is unique so you get exactness if and only if that lift to $Y(R)$ actually landed in $X(R)$. $\square$ Now we take $R = \mathbb Z[\sqrt{-5}]$. The accepted answer of this question explains why the ideal $\mathfrak a = (3, 1 + \sqrt{-5})$ is a non-free summand of $R^2$. Then take $$f_1 = 1 - \sqrt{-5}$$ $$f_2 = 1 + \sqrt{-5}$$ $$f_3 = 3$$ and note that these generate the unit ideal. Localizing $\mathfrak a$ at $f_1$ we get $1 + \sqrt{-5} = \frac{6}{1 - \sqrt{-5}}$ so the localization $R_{f_1}\mathfrak a = (3)$ is principal, hence free of rank $1$. Since $\mathfrak a$ contains $f_2$ and $f_3$ we get that localizing at either of these elements gives the unit ideal, which is free of rank $1$. Hence $\mathfrak a \in \mathrm{Gr}_{1, 1}(R)$ maps to $X(R_{f_i})$ for each $i$, but $\mathfrak a \notin X(R)$. This proves that the subfunctor $X$ is not local.
Rotate around a specific point instead of 0,0,0
To rotate around a specific point $P$, the standard trick is to first translate so that $P$ is taken to the origin $O$, then rotate around the new origin, and finally translate back. In matrix notation, instead of $u'=Ru$ you need to calculate $$u'=R(u-OP)+OP,$$ where $u$ is the vector to rotate, $u'$ is the rotated vector, $R$ is the rotation matrix, and $OP$ is the vector from $O$ to $P$ (that is, $P-O$).
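Here is a minimal sketch of the recipe in code (NumPy, rotating about the $z$-axis through a point `P`; the angle and point are arbitrary illustrative values):

    import numpy as np

    def rotate_about_point(u, P, theta):
        """Rotate u about the z-axis through the point P: u' = R(u - P) + P."""
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s, 0],
                      [s,  c, 0],
                      [0,  0, 1]])
        return R @ (u - P) + P

    P = np.array([1.0, 2.0, 0.0])
    u = np.array([2.0, 2.0, 0.0])
    print(rotate_about_point(u, P, np.pi / 2))   # -> [1. 3. 0.], a quarter turn about P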
Calculate the expectation of games won by a team
Let me first rephrase the question: Let A and B be two soccer teams that have played exactly $N$ games against each other. It is known that after all these games, each team scored exactly $N$ goals. What is the expected number of games won by team A? I will assume the following: if we consider, say, team A and one goal scored by that team, the probability that goal has been scored is the same for every game played and does not depend on the other goals scored. I think this is the most natural way to think about the problem, but it of course depends on the precise mathematical interpretation of something belonging to the real world. Now for the solution: If $N_A$ is the expected number of games won by team $A$ and similarly for $B$ then by symmetry, $N_A= N_B$. Also, if $N_D$ is the expected number of draws, then $N_A + N_B + N_D = N$, so $N_A = \frac12 (N-N_D)$. By symmetry again, $N_D = N \times P$, where $P$ is the probability the first match will be a draw. Then let $P[i]$ denote the probability that the number of goals scored by team $A$ in the first game equals $i$. (In particular $P[0] + \dots + P[N] = 1$.) By symmetry again, the same probabilities apply to team B and in particular, $P = \sum_{i=0}^N P[i]^2$, i.e. a draw occurs when both teams happen to score the same number of goals in the first match. So we try to determine $P[i]$. $P[i]$ is given by the number of "good combinations" divided by the total number of combinations. Here a "combination" is a sequence of $N$ numbers ranging from $1$ to $N$, such that the $k$'th number records in which match the $k$'th goal was scored. (E.g. for $N=3$ the sequence 1-2-1 would mean that of the 3 goals scored by team A, the first was scored in the first game, the second in the second game, the third in the first game.) There are of course $N^N$ possible combinations. The ones that contain exactly $i$ times $1$ are "good". To count these combinations, we may choose the $i$ positions of the $1$'s in $\binom{N}{i}$ ways first and then fill in the other positions in $(N-1)^{N-i}$ ways with the remaining numbers from $2$ to $N$. This gives us $$ P[i] = \frac{ \binom{N}{i} (N-1)^{N-i} }{N^N}. $$ Therefore the answer is given by $$ N_A = \frac{N}2 \left[1 - \sum_{i=0}^N \left( \frac{\binom{N}{i}(N-1)^{N-i}}{N^N}\right)^2 \right].$$ Presumably this can be simplified further, and quietly I'm hoping there is a clever combinatorial argument that counts this number in another way, meanwhile simplifying this equation, but I'll leave it just here.
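A Monte Carlo check of the final formula (a sketch; each of the $N$ goals of each team is assigned independently and uniformly to one of the $N$ games, exactly as in the model above):

    import random
    from math import comb

    def expected_wins_formula(N):
        P = sum((comb(N, i) * (N - 1) ** (N - i) / N ** N) ** 2 for i in range(N + 1))
        return N / 2 * (1 - P)

    def expected_wins_mc(N, trials=100_000):
        total = 0
        for _ in range(trials):
            a, b = [0] * N, [0] * N
            for _ in range(N):                 # place each goal in a uniform random game
                a[random.randrange(N)] += 1
                b[random.randrange(N)] += 1
            total += sum(x > y for x, y in zip(a, b))
        return total / trials

    N = 3
    print(expected_wins_formula(N))   # exact: ~0.9959
    print(expected_wins_mc(N))        # should agree to about two decimal places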
Does this Make Sense? Riemannian Manifold Directional Derivative.
This only makes sense if $f$ is a function on some manifold $N$ of which $M$ is a submanifold, but at that point you don't even need to reference the space $M$, so it doesn't really make too much sense. If $f$ were some function only on $M$, there are non-unique extensions of $f$ to some neighborhood of $M$ in $N$, for which you could certainly talk about a directional derivative in a direction non-tangent to $M$ but tangent to $N$; however, the non-uniqueness of the extension likely makes the result inconsistent across choices of extension.
Laplace transform of $\frac{2\sin(t) \sinh(t)}{t}$
Your $\arctan(s+1)+\arctan(1-s)$ is correct, but that does not give $\arctan(2/(s^2-1))$: $$\tan(\arctan(s+1)+\arctan(1-s)) = \frac{2}{1-(s-1)(-s-1)} = \frac{2}{s^2}$$
Nested roots sequence, how to prove it's monotone and bounded?
Bounded and monotone is probably the best way to show it, though one can take different steps to show this. For example, for monotonicity: $$x_{n-1}<x_n\implies a+x_{n-1}<a+x_n\implies \sqrt{a+x_{n-1}}<\sqrt{a+x_{n}}\implies x_{n}<x_{n+1}.$$ Indeed, we still have a strictly increasing bounded sequence if $a>0$. With $a=0$ the sequence is of course constant (and thus converging). With $a<0$, $x_1$ is undefined. An alternative method might be to investigate $y_n:=\frac{1+\sqrt {1+4a}}2-x_n$ and show that it decreases by looking at $\frac{y_{n+1}}{y_n} $. By the way, you didn't specify whether it was part of the problem statement to actually compute $\lim_{n\to\infty} x_n$, but I guess you know how to handle that last step.
Union of categories and $\ell$-adic local systems
This construction can be found in Deligne's "La Conjecture de Weil II." Deligne calls it a "2-limite inductive."
Can there exist an even number greater than $36$ with more even divisors than $36$, all of them being a prime minus $1$?
Suppose that $n$ has the property that $d+1$ is prime for each even divisor $d$ of $n$, and let $2^k$ be the largest power of $2$ dividing $n$. If $k=0$, there is nothing to say. $k\geq 3$ is impossible, since $2^3 + 1$ is not prime. Suppose that $k=2$, and let $p$ be an odd prime divisor of $n$. Then $2p+1$ and $4p+1$ are prime. But working modulo $3$, we can see that this is only possible if $p=3$. So $n=4\cdot 3^l$ for some $l\geq 0$. But $2\cdot 3^3 + 1$ is not prime, so the only possibilities are $n=4,12,36$. The remaining case is $k=1$. If $p\geq 5$ is a prime factor of $n$, then $p^2$ does not divide $n$, because $2p^2+1$ is divisible by $3$. As above, $3^3$ cannot divide $n$, so $n=2\cdot 3^l\cdot \prod_{i=1}^t q_i$, where $0\leq l \leq 2$ and the $q_i$ are distinct primes $\geq 5$. If $t=0$, we have the cases $n=2,6,18$. If $t=1$, we have the cases $n=2p, 6p, 18p$, none of which has more than six even divisors. $t\geq 2$ is impossible: at least one of $2p+1$, $2q+1$, and $2pq+1$ must be divisible by $3$. We conclude that $n$ cannot have more than six even divisors, and it has precisely six only when $n=18p$ for some prime $p$.
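The conclusion is easy to confirm by brute force (a sketch; it scans even $n$ up to a modest bound):

    def is_prime(m):
        if m < 2:
            return False
        i = 2
        while i * i <= m:
            if m % i == 0:
                return False
            i += 1
        return True

    best_n, best_count = None, 0
    for n in range(2, 10001, 2):
        evens = [d for d in range(2, n + 1) if n % d == 0 and d % 2 == 0]
        if all(is_prime(d + 1) for d in evens) and len(evens) > best_count:
            best_n, best_count = n, len(evens)

    print(best_n, best_count)   # -> 36 6: no even n in range beats six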
Prove L is not a regular language (A Finite State Automaton cannot accept it)
Hint: is $L^{\prime} = \{ 0^{n}1^{n} : n \in \mathbb{N} \cup \{0\} \}$ regular? What happens when you take $w \in L^{\prime}$? Is $ww \in L$?
Expression of an Integer as a Power of 2 and an Odd Number (Chartrand Ex 5.4.2[a])
I am not sure what you are looking for, as the proof is fine, but here are some alternatives: You could use unique prime factorization, so for any $m$, there are $k_i\geq 0$ such that: $$m=2^{k_0}\cdot 3^{k_1}\cdot 5^{k_2}\cdot \dots$$ Then $p=k_0$ and $k=3^{k_1}\cdot 5^{k_2}\cdot \dots$ does the job. In the case where $m$ is even but not a power of $2$, assume to the contrary that there are numbers not representable in that way. Then, by the well-ordering principle, there is a smallest one, which we denote by $n$. As $n$ is even, there is an $m\in\mathbb N$ with $2m=n$. Since $m<n$, $m$ has a representation $m=2^pk$, but then $n=2^{p+1}k$, a contradiction.
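The first construction is easy to run (a sketch; `split` peels the power of $2$ off $m$):

    def split(m):
        """Write m = 2**p * k with k odd; return (p, k)."""
        p = 0
        while m % 2 == 0:
            m //= 2
            p += 1
        return p, m

    print(split(48))   # -> (4, 3), since 48 = 2**4 * 3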
$f(x) = \int_{0}^{x}(49-t^2)e^{t^3}dt$
Hint: $f'(x) = (49-x^2)e^{x^3}$, and $f$ is increasing wherever $f'(x) > 0$, i.e. for $-7 < x < 7$.
Is there a set of all non-empty sets?
This does not exist. If it did, then you could take its union with $\{\emptyset\}$ to get the set of all sets, which you already know is impossible.
Why the second smallest eigenvalue of the Laplacian of a tree is $\leq1$
Suppose that $T$ is a tree whose second smallest Laplacian eigenvalue equals $1$ and $T$ is not a star. Then the diameter of $T$ is greater than $2$, so $T$ contains a path $P_4$ on four vertices. But the second smallest eigenvalue of $P_4$ is less than $1$, and if you add the other edges and vertices of $T$ to $P_4$, its second smallest eigenvalue will not increase. So the second smallest eigenvalue of $T$ is not equal to $1$, a contradiction.
Limits of an odd degree polynomial with positive leading coefficient
You can start by making it a bit simpler by noting that $$ \lim_{x\to \infty}(a_n x^n + \cdots +a_0)=\lim_{x\to \infty} x^n(a_n + \cdots + \frac{a_0}{x^n}) = a_n \lim_{x \to \infty} x^n. $$ Now you just need to show that $\displaystyle \lim_{x \to \pm \infty} x^n = \pm \infty$, for odd $n$. Using the definition, $$ \lim_{x \to +\infty} x^n = + \infty \Leftrightarrow \forall_{M>0} \exists_{S>0}:x>S \Rightarrow x^n > M $$ Let's fix $M>0$. Since $x^n>M \Leftrightarrow x > \sqrt[n]{M}$, we just need to take $S=\sqrt[n]{M}$, which completes the proof. Similarly, $$ \lim_{x \to -\infty} x^n = - \infty \Leftrightarrow \forall_{M>0} \exists_{S>0}:x < -S \Rightarrow x^n < -M $$ Again, fixing $M>0$, since for odd $n$ we have that $x^n < -M \Leftrightarrow x < -\sqrt[n]{M}$, we just set $S=\sqrt[n]{M}$ and the result follows.
Can we carry through the limit operator when we are unsure if the limit exists?
It's a shorthand notation, but is not really "bad form" if both you and your readers understand what is really going on. What you really mean at each step where you write something like "$\lim_{n \to \infty} f(n) = \lim_{n \to \infty} g(n)$" is "if $\lim_{n \to \infty} g(n)$ exists, then $\lim_{n \to \infty} f(n)$ exists and has the same value". When at the end of the calculation you find that the last limit does exist, that tells you that everything is good and you have found the limit you wanted. If you ended up with a limit that doesn't exist, then you might not know about the original limit. EDIT: In particular, I note that you are using l'Hopital's rule to go from $\lim_{n \to \infty} \log\left(\frac{n-1}{n}\right)/(1/n)$ to $\lim_{n \to \infty} (1/(n^2-n))/(1/n^2)$. This is OK since the rule says if $f(n)$ and $g(n)$ both go to $0$ or $\infty$ as $n \to \infty$ and $\lim_{n \to \infty} f'(n)/g'(n) = L$ exists, then $\lim_{n \to \infty} f(n)/g(n) = L$ as well. You shouldn't use the rule in the other direction, since it can happen that $\lim_{n \to \infty} f(n)/g(n)$ exists but $\lim_{n \to \infty} f'(n)/g'(n)$ does not.
Continuity at origin
Hint: Note that $|x-y|\le |x|+|y|$. So for $x$ and $y$ near $0$ but not both $0$ we have $$0\le\frac{|\sin(x-y)|}{|x|+|y|}\le \frac{\sin(|x-y|)}{|x|+|y|}\lt 1.$$
Mutually exclusive and Independent Events
By definition, two events $A$ and $B$ are mutually exclusive whenever $A\cap B=\varnothing$. In that case, one has $\mathbb P(A\cap B)=0$. Some people also say that $A$ and $B$ are mutually exclusive whenever $\mathbb P(A\cap B)=0$. (The second condition is weaker, but it doesn't make a difference in practice.) Now, two events $A$ and $B$ are called independent, if $\mathbb P(A\cap B)=\mathbb P(A)\cdot\mathbb P(B)$. Can you now see in which cases you have that $A$ and $B$ are mutually exclusive and independent?
Is $\det(A-A^T) = 0$ for an $n \times n$ matrix? ($n$ odd)
$B=A-A^T$ is skew-symmetric, so $$\det(B) = \det(B^T) = \det(-B) = (-1)^n \det(B) = -\det(B)$$ since $n$ is odd, and hence $\det(B)=0$.
Dedekind's Cuts Lemma
It is tacitly assumed that the cut $\alpha$ contains some positive rationals. There is an $a'$ such that $a<a'<1$. We now need to localize the cut $\alpha$ quite precisely. Starting from an arbitrary positive $x\in\alpha$ and an $y\notin\alpha$ you have to construct two numbers $c\in\alpha$ and $d\notin\alpha$ with $c\geq x$ and $$a' d<c<d\ .$$ (A hint: Divide the interval $[x,y]$ into $N\gg1$ equal pieces. When $N$ is large enough there will be two successive partition points that can serve as $c$ and $d$.) Now put $$\ell_1:={d\over a},\qquad\ell:={a'\>d\over a^2}\ .$$ It is easy to verify that $\ell_1$, $\ell$ satisfy the requirements.
Why $L^{\infty-}$ is not a Banach space
A Gaussian random variable is not a.s. bounded because $P(|X|>c)>0$ for all $c$ (although these tail probabilities become very small). That $L^{\infty-}$ is not a Banach space needs a proper interpretation because a Banach space is a pair $(X,\|\cdot\|)$ consisting of a real or complex vector space $X$ and a complete norm on it. Using the axiom of choice I believe one can indeed construct a complete norm on $L^{\infty-}$ (because, for reasonable probability spaces, it is a $c$-dimensional vector space and hence linearly isomorphic to any given separable Banach space) -- however, Tao probably means that there is no "natural" complete norm on $L^{\infty-}$. The natural structure of such a countable intersection of Banach spaces is that of a Fréchet space.
On the Definition of Taylor Polynomials
For one, if $f$ is continuous at $a,$ then $\lim_{x \to a} f(x) = f(a).$ But also, the statement $$\lim_{x \to a} \frac{f(x) - P_n(x)}{(x - a)^n} = 0$$ says that the difference function $e(x) = f(x) - P_n(x)$ tends to $0$ more quickly than the polynomial $(x - a)^n$ as $x$ approaches $a.$ Observe that the function $e(x)$ gives the error in approximating $f(x)$ via the polynomial $P_n(x),$ hence this limit gives a criterion for the approximation to be "good enough" in some rigorous sense.
Applying CLT to random variable made up of two sequences of iid random variables
Defining $Z_n = X_n - Y_n$, by linearity of expectation $E[Z_n] = E[X_n] - E[Y_n] = \mu_x - \mu_y$. Using the properties of variance, we also have \begin{align*} Var(Z_n) &= E[(X_n - Y_n)^2] - E[X_n-Y_n]^2 \\ &= E[{X_n}^2] - 2 E[X_n Y_n] + E[{Y_n}^2] - \left(E[X_n]^2 - 2E[X_n]E[Y_n] + E[Y_n]^2\right) \\ &= \left(E[{X_n}^2] - E[X_n]^2\right) + \left(E[{Y_n}^2] - E[Y_n]^2\right) + 2\left(E[X_n]E[Y_n] - E[X_n Y_n]\right) \\ &= \sigma_x^2 + \sigma_y^2 + 2\left(E[X_n]E[Y_n] - E[X_n Y_n]\right) \end{align*} Because $X_n$ and $Y_n$ are independent, $E[X_n Y_n] = E[X_n]E[Y_n]$, so the variance simplifies to $Var(Z_n) = \sigma_x^2 + \sigma_y^2$. The sample mean of $Z_n$ is $\frac{1}{n}\sum_{i} (X_i - Y_i) = \overline{X_n} - \overline{Y_n}$, so $A_n$ can be rewritten as $$A_n = \frac{\sqrt{n}}{\sqrt{Var(Z_n)}}(\overline{Z_n} - E[Z_n])$$ which by the CLT converges in distribution to $N(0,1)$.
What is the limit inferior and limit superior of sequence,$x_n=\left(1-\dfrac{1}{n}\right)\sin \left(\dfrac{n\pi}{3}\right),n\geq1$?
Check that $$\sin\frac{n\pi}3=\begin{cases}&0,&n=0,\,3\pmod 6\\{}\\&\cfrac{\sqrt3}2,&n=1,\,2\pmod 6\\{}\\&-\cfrac{\sqrt3}2,&n=4,\,5\pmod 6\end{cases}$$ It may also be worthwhile to take into account that $\;\left\{\left(1-\frac1n\right)\right\}\;$ is an increasing sequence...
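Numerically the tail suprema and infima are easy to see (a sketch; they should approach $\pm\sqrt3/2\approx\pm0.866$):

    import math

    x = [(1 - 1 / n) * math.sin(n * math.pi / 3) for n in range(1, 10001)]
    tail = x[1000:]
    print(max(tail), min(tail))   # close to +sqrt(3)/2 and -sqrt(3)/2
    print(math.sqrt(3) / 2)       # 0.8660254...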
Relation between row rank and column space
The answer is yes. The general property is that the row-rank is the dimension of the image (the range) of the matrix. Of course, the only $m$-dimensional subspace of $\Bbb R^m$ is $\Bbb R^m$.
Give an example of a group that does have subgroups of order 1,2,3,4,5,6, but does not have subgroups of order 7 or 8
What about $\mathbb Z/5\mathbb Z\times\mathbb Z/12\mathbb Z$? Perhaps the simplest (?) example, as your group must have order divisible by $5$, $4$, and $3$.
Showing that finitely generated module forms a torsion quotient.
Notice that $\{ x_1 , ..., x_m , x_{m+1} \}$ is linearly dependent, so that $\sum_1 ^{m+1} r_i x_i =0 $ for some $r_i$ not all zero. Since $B$ is linearly independent, you must have that $r_{m+1} \neq 0$, so that $r_{m+1} x_{m+1} \in F$. The same goes for all the other $x_i$ with $i>m$.
Two subspaces isomorphism on direct sum
If two vector spaces have bases of the same cardinality then they are isomorphic, because a bijection between their bases extends to an isomorphism between the vector spaces. With this in mind it is easy to see that by taking $W=V$ to be a vector space of infinite dimension and $U$ a non-zero subspace of finite dimension we have an example of what is required. In fact we can let $U$ be any non-zero subspace, but then you need to compare some cardinalities.
Calculus - Limit calculate help
Rearrange the fraction into something simpler. Express $\tan(x)$ in terms of $\sin(x)$ and $\cos(x)$. $$\frac{\tan(x)-1}{\sin(x)-\cos(x)}=\frac{\frac{\sin(x)}{\cos(x)}-1}{\sin(x)-\cos(x)}$$ You've done this and the next step in your own work. $$=\frac{\frac{\sin(x)}{\cos(x)}-\frac{\cos(x)}{\cos(x)}}{\sin(x)-\cos(x)}$$ $$=\frac{\frac{\sin(x)-\cos(x)}{\cos(x)}}{\sin(x)-\cos(x)}$$ Factor out $(\sin(x)-\cos(x))$ from the numerator and denominator. $$=\frac{\sin(x)-\cos(x)}{\sin(x)-\cos(x)}\cdot\frac{\frac{1}{\cos(x)}}{1}$$ $$=\frac{\frac{1}{\cos(x)}}{1}=\frac{1}{\cos(x)}$$ $$\therefore\:\frac{\tan(x)-1}{\sin(x)-\cos(x)}=\frac{1}{\cos(x)}$$ Apply the limit to both sides. $$\lim_{x \to \pi/4} \frac{\tan x-1}{\sin x-\cos x}=\lim_{x \to \pi/4} \frac{1}{\cos(x)}$$ The limit is then the value of $\frac{1}{\cos(x)}$ at $x=\frac{\pi}{4}$. In this particular case, you can substitute $x=\frac{\pi}{4}$ (but you can't always do this). $$\frac{1}{\cos(\frac{\pi}{4})}=\frac{1}{\left(\frac{1}{\sqrt{2}}\right)}$$ $$\therefore\:\sqrt{2}$$
Simpson's rule for improper integrals
Yes. You can do a change of variable in your integral: set $x = \frac{1-t}{t}$, so that $dx = -\frac{1}{t^2}\,dt$, and integrate over $0<t<1$. There are other methods too. See Wikipedia.
A Power Series Solution to a differential equation.
If I may suggest, do not change the indices and do not forget to divide by $t$ the expansion of the first derivative. So, $$z=\sum_{n=0}^\infty c_n t^n\quad ,\quad z'=\sum_{n=0}^\infty n c_n t^{n-1}\quad,\quad z''=\sum_{n=0}^\infty n(n-1) c_n t^{n-2}$$ which makes the differential equation $$\sum_{n=0}^\infty n(n-1) c_n t^{n-2}=4\sum_{n=0}^\infty n c_n t^{n-2}+\sum_{n=0}^\infty c_n t^n$$ Now, consider a given term $t^m$; you then have $$(m+2)(m+1)c_{m+2}=4(m+2)c_{m+2}+c_m$$ that is to say $$(m+2)(m-3)c_{m+2}=c_m$$ that is to say, if $m\neq 3$, $$c_{m+2}=\frac{c_m}{(m+2)(m-3)}$$ the initial conditions giving $c_0=1$ and $c_1=0$. For sure, you need to use the fact that $c_3=0$. All the other terms are perfectly defined.
Projection operator $P$ on the plane orthogonal to a given vector
You have given two methods, both correct. For the first method, you have correctly expressed your projection $P$ as a linear operator. To find the matrix with respect to the standard basis, use the rule that the $j$-th column is $P(e_j)$, where $e_j$ is the $j$-th standard basis element. You can check, both conceptually and computationally, that this agrees with your second formula. Yet another method, which avoids the computation of $u_1, u_2$ entirely (so is probably better): the projector onto the plane perpendicular to $v$ is the identity minus the projector onto the span of $v$, $$ P(x) = x - \frac{\langle x, v \rangle }{\|v\|^2} v. $$ Again to find the matrix, the same rule applies: the $j$-th column is $P(e_j)$. Again, you can check by computation that this will agree with your other methods.
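A numerical sketch of the last formula (NumPy; `v` is an arbitrary example vector):

    import numpy as np

    v = np.array([1.0, 2.0, 2.0])
    P = np.eye(3) - np.outer(v, v) / (v @ v)   # projector onto the plane orthogonal to v

    assert np.allclose(P @ v, 0)    # v is annihilated
    assert np.allclose(P @ P, P)    # idempotent
    assert np.allclose(P, P.T)      # symmetric, as an orthogonal projector should be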
Fourier transform of a Gaussian
$$\int_{-\infty}^\infty \exp\left(-a\left(x + i\frac{y}{2a}\right)^2 \right) \, dx = \lim_{M\,\to\,+\infty} \int_{-M}^M \cdots\cdots $$ Here I will follow up on the suggestion made by Cameron Williams in a comment above. Consider the integral $$ \int_{-M}^M + \int_M^{M+ iy/(2a)} + \int_{M+iy/(2a)}^{-M + iy/(2a)} + \int_{-M+iy/(2a)}^{-M} $$ where each integral is along a straight line segment. If the function being integrated is an entire function, i.e. differentiable everywhere in $\mathbb C,$ which in particular means there are no singularities inside the region bounded by the path along which we are integrating, (and Gaussian functions are indeed entire functions) then this sum is $0.$ Now suppose you can somehow show that the second and fourth terms above approach $0$ as $M\to+\infty.$ Then you can draw a conclusion about the sum of the first and third terms.
"Binomial coefficients" generalized via polynomial iteration
Proof of Theorem 1 [Remark: The following proof was written in 2011 and only mildly edited now. There are things in it that I would have done better if I were to rewrite it.] Proof of Theorem 1. First, let us prove something seemingly obvious: Namely, we will prove that for any nonnegative integers $a$ and $b$ such that $a\geq b$, we have \begin{equation} f_b\left[f_{a-b}\right] = f_a . \label{darij-proof.1} \tag{1} \end{equation} This fact appears trivial if you think of polynomials as functions (in which case $f_k$ is really just $\underbrace{f \circ f \circ \cdots \circ f}_{k \text{ times}}$); the only problem with this approach is that polynomials are not functions. Thus, let me give a proof of \eqref{darij-proof.1}: [Proof of \eqref{darij-proof.1}. For every polynomial $g\in A\left[x\right]$, let $R_g$ be the map $A\left[x\right]\to A\left[x\right]$ mapping every $p\in A\left[x\right]$ to $g\left[p\right]$. Note that this map $R_g$ is not a ring homomorphism (in general), but a polynomial map. Next we notice that $f_n = R_f^n\left(x\right)$ for every $n\in\mathbb N$. In fact, this can be proven by induction over $n$: The induction base (the case $n=0$) is trivial (since $f_0=x$ and $R_f^0\left(x\right)=\operatorname{id}\left(x\right)=x$). For the induction step, assume that some $m\in\mathbb N$ satisfies $f_m = R_f^m\left(x\right)$. Then, $m+1$ satisfies \begin{align} f_{m+1} &= f\left[f_m\right] \qquad \left(\text{by the recursive definition of $f_{m+1}$}\right) \\ &= R_f\left(f_m\right) \qquad \left(\text{since $R_f\left(f_m\right)=f\left[f_m\right]$ by the definition of $R_f$}\right) \\ &= R_f\left(R_f^m\left(x\right)\right) \qquad \left(\text{since $f_m=R_f^m\left(x\right)$}\right) \\ &= R_f^{m+1}\left(x\right) . \end{align} This completes the induction. We thus have shown that $f_n = R_f^n\left(x\right)$ for every $n\in\mathbb N$. Since $f_n=f_n\left[x\right]=R_{f_n}\left(x\right)$ (because the definition of $R_{f_n}$ yields $R_{f_n}\left(x\right)=f_n\left[x\right]$), this rewrites as follows: \begin{equation} R_{f_n}\left(x\right) = R_f^n\left(x\right) \qquad \text{ for every $n\in\mathbb N$.} \label{darij-proof.1.5} \tag{2} \end{equation} Now we will prove that \begin{equation} f_n\left[p\right] = R_f^n\left(p\right) \text{ for every } n\in\mathbb N \text{ and every } p\in A\left[x\right] . \label{darij-proof.2} \tag{3} \end{equation} [Proof of \eqref{darij-proof.2}. Let $p\in A\left[x\right]$ be any polynomial. By the universal property of the polynomial ring $A\left[x\right]$, there exists an $A$-algebra homomorphism $\Phi : A\left[x\right] \to A\left[x\right]$ such that $\Phi\left(x\right) = p$. Consider this $\Phi$. Then, for every polynomial $q\in A\left[x\right]$, we have $\Phi\left(q\right) = q\left[p\right]$. (This is the reason why $\Phi$ is commonly called the evaluation homomorphism at $p$.) Now, every $A$-algebra homomorphism $A\left[x\right] \to A\left[x\right]$ commutes with $R_g$ for every $g\in A\left[x\right]$ (this is just a formal way to state the known fact that every $A$-algebra homomorphism commutes with every polynomial map). Applying this to the homomorphism $\Phi$, we conclude that $\Phi$ commutes with $R_g$ for every $g\in A\left[x\right]$. Thus, in particular, $\Phi$ commutes with $R_{f_n}$ and with $R_f$. Since $\Phi$ commutes with $R_f$, it is clear that $\Phi$ also commutes with $R_f^n$, and thus $\Phi\left(R_f^n\left(x\right)\right)=R_f^n\left(\Phi\left(x\right)\right)$. 
Since $\Phi\left(x\right)=p$, this rewrites as $\Phi\left(R_f^n\left(x\right)\right)=R_f^n\left(p\right)$. On the other hand, $\Phi$ commutes with $R_{f_n}$, so that $\Phi\left(R_{f_n}\left(x\right)\right)=R_{f_n}\left(\underbrace{\Phi\left(x\right)}_{=p}\right)=R_{f_n}\left(p\right)=f_n\left[p\right]$ (by the definition of $R_{f_n}$). In view of \eqref{darij-proof.1.5}, this rewrites as $\Phi\left(R_f^n\left(x\right)\right) = f_n\left[p\right]$. Hence, \begin{equation} f_n\left[p\right] = \Phi\left(R_f^n\left(x\right)\right) = R_f^n\left(p\right) , \end{equation} and thus \eqref{darij-proof.2} is proven.] Now, we return to the proof of \eqref{darij-proof.1}: Applying \eqref{darij-proof.2} to $n=b$ and $p=f_{a-b}$, we obtain $f_b\left[f_{a-b}\right] = R_f^b\left(f_{a-b}\right)$. Since $f_{a-b} = f_{a-b}\left[x\right] = R_f^{a-b}\left(x\right)$ (by \eqref{darij-proof.2}, applied to $n=a-b$ and $p=x$), this rewrites as \begin{equation} f_b\left[f_{a-b}\right] = R_f^b\left(R_f^{a-b}\left(x\right)\right) = \underbrace{\left(R_f^b\circ R_f^{a-b}\right)}_{=R_f^{b+\left(a-b\right)}=R_f^a} \left(x\right) = R_f^a\left(x\right) . \end{equation} Compared with $f_a = f_a\left[x\right] = R_f^a\left(x\right)$ (by \eqref{darij-proof.2}, applied to $n=a$ and $p=x$), this yields $f_b\left[f_{a-b}\right] = f_a$. Thus, \eqref{darij-proof.1} is proven.] Now here is something less obvious: For any nonnegative integers $a$ and $b$ such that $a\geq b$, we have \begin{equation} f_{a-b}-x\mid f_a-f_b \qquad \text{ in } A\left[x\right] . \label{darij-proof.3} \tag{4} \end{equation} [Proof of \eqref{darij-proof.3}. Let $a$ and $b$ be nonnegative integers such that $a\geq b$. Let $I$ be the ideal of $A\left[x\right]$ generated by the polynomial $f_{a-b}-x$. Then, $f_{a-b}-x\in I$, so that $f_{a-b}\equiv x\mod I$. We recall a known fact: If $B$ is a commutative $A$-algebra, if $J$ is an ideal of $B$, if $p$ and $q$ are two elements of $B$ satisfying $p\equiv q\mod J$, and if $g\in A\left[x\right]$ is a polynomial, then $g\left[p\right]\equiv g\left[q\right]\mod J$. (This fact is proven by seeing that $p^u\equiv q^u\mod J$ for every $u\in\mathbb N$.) Applying this fact to $B=A\left[x\right]$, $J=I$, $p=f_{a-b}$, $q=x$ and $g=f_b$, we obtain $f_b\left[f_{a-b}\right] \equiv f_b\left[x\right] = f_b \mod I$. Due to \eqref{darij-proof.1}, this becomes $f_a \equiv f_b \mod I$. Thus, $f_a-f_b \in I$. Since $I$ is the ideal of $A\left[x\right]$ generated by $f_{a-b}-x$, this yields that $f_{a-b}-x\mid f_a-f_b$. This proves \eqref{darij-proof.3}.] For any nonnegative integers $a$ and $b$ such that $a\geq b$, let $f_{a\mid b}$ be some polynomial $h\in A\left[x\right]$ such that $f_a-f_b = h\cdot\left(f_{a-b}-x\right)$. Such a polynomial $h$ exists according to \eqref{darij-proof.3}, but needs not be unique (e. g., if $f=x$ or $a=b$, it can be arbitrary), so we may have to take choices here. Thus, \begin{equation} f_a-f_b = f_{a\mid b} \cdot\left(f_{a-b}-x\right) \label{darij-proof.4} \tag{5} \end{equation} for any nonnegative integers $a$ and $b$ such that $a\geq b$. Let us now define a polynomial $f_{a,b} \in A\left[x\right]$ for any nonnegative integers $a$ and $b$. Namely, we define this polynomial by induction over $a$: Induction base: For $a=0$, let $f_{a,b}$ be $\delta_{0,b}$ (where $\delta$ means Kronecker's delta, defined by $\delta_{u,v}= \begin{cases} 1, & \text{if }u=v;\\ 0, & \text{if }u\neq v\end{cases}$ for any two objects $u$ and $v$). Induction step: Let $r\in\mathbb N$ be positive. 
If $f_{r-1,b}$ is defined for all $b \in \mathbb N$, then let us define $f_{r,b}$ for all $b \in \mathbb N$ as follows: \begin{align} f_{r,b} &= f_{r-1,b} + f_{r\mid r-b} f_{r-1,b-1} \text{ if } 0 \leq b \leq r \text{ (where $f_{r-1,-1}$ means $0$)}; \\ f_{r,b} &= 0 \text{ if } b > r . \end{align} This way, we inductively have defined $f_{a,b}$ for all nonnegative integers $a$ and $b$. We now claim that \begin{equation} f_{a,b} \prod_{i=1}^b\left(f_i-x\right) = \prod_{i=a-b+1}^{a}\left(f_i-x\right) \label{darij-proof.5} \tag{6} \end{equation} for any nonnegative integers $a$ and $b$ such that $a\geq b$. [Proof of \eqref{darij-proof.5}. We will prove \eqref{darij-proof.5} by induction over $a$: Induction base: For $a=0$, proving \eqref{darij-proof.5} is very easy (notice that $0=a\geq b$ entails $b=0$, and use $f_{0,0} = 1$). Thus, we can consider the induction base as completed. Induction step: Let $r\in\mathbb N$ be positive. Assume that \eqref{darij-proof.5} holds for any nonnegative integers $a$ and $b$ such that $a\geq b$ and $a=r-1$. Now let us prove \eqref{darij-proof.5} for any nonnegative integers $a$ and $b$ such that $a\geq b$ and $a=r$: Let $a$ and $b$ be nonnegative integers such that $a\geq b$ and $a=r$. Then, $r=a\geq b$. Thus, $b$ is a nonnegative integer satisfying $b\leq r$. This shows that we must be in one of the following three cases: Case 1: We have $b=0$. Case 2: We have $1\leq b\leq r-1$. Case 3: We have $b=r$. Let us first consider Case 1: In this case, $b=0$. The recursive definition of $f_{r,b}$ yields \begin{equation} f_{r,0} = f_{r-1,0} + f_{r\mid r-0} \underbrace{f_{r-1, 0-1}}_{=f_{r-1,-1} = 0} = f_{r-1,0} . \end{equation} We can apply \eqref{darij-proof.5} to $r-1$ and $0$ instead of $a$ and $b$ (since we assumed that \eqref{darij-proof.5} holds for any nonnegative integers $a$ and $b$ such that $a\geq b$ and $a=r-1$). This yields the identity \begin{equation} f_{r-1,0} \prod_{i=1}^0\left(f_i-x\right) = \prod_{i=r-1-0+1}^{r-1}\left(f_i-x\right) . \end{equation} Since both products $\prod_{i=1}^0\left(f_i-x\right)$ and $\prod_{i=r-1-0+1}^{r-1}\left(f_i-x\right)$ are equal to $1$ (since they are empty products), this simplifies to $f_{r-1,0} \cdot 1 = 1$, so that $f_{r-1,0} = 1$. Now, \begin{equation} f_{r,0} \underbrace{\prod_{i=1}^0\left(f_i-x\right)}_{=\left(\text{empty product}\right)=1} = f_{r,0} = f_{r-1,0} = 1 \end{equation} and \begin{equation} \prod_{i=r-0+1}^{r}\left(f_i-x\right) = \left(\text{empty product}\right) = 1 . \end{equation} Comparing these equalities yields \begin{equation} f_{r,0} \prod_{i=1}^0\left(f_i-x\right) = \prod_{i=r-0+1}^{r}\left(f_i-x\right) . \end{equation} Since $0=b$ and $r=a$, this rewrites as \begin{equation} f_{a,b} \prod_{i=1}^b\left(f_i-x\right) = \prod_{i=a-b+1}^{a}\left(f_i-x\right) . \end{equation} In other words, we have proven \eqref{darij-proof.5} for our $a$ and $b$ in Case 1. Next let us consider Case 2: In this case, $1\leq b\leq r-1$. Hence, $0 \leq b-1 \leq r-1$ and $r-1 \geq b \geq b-1$. In particular, $r-1$ and $b-1$ are nonnegative integers satisfying $r-1 \geq b-1$. Hence, we can apply \eqref{darij-proof.5} to $r-1$ and $b-1$ instead of $a$ and $b$ (since we assumed that \eqref{darij-proof.5} holds for any nonnegative integers $a$ and $b$ such that $a\geq b$ and $a=r-1$). This yields the identity \begin{equation} f_{r-1,b-1} \prod_{i=1}^{b-1}\left(f_i-x\right) = \prod_{i=\left(r-1\right)-\left(b-1\right)+1}^{r-1}\left(f_i-x\right) . 
\end{equation} Since $\left(r-1\right)-\left(b-1\right)+1=r-b+1$, this simplifies to \begin{equation} f_{r-1,b-1} \prod_{i=1}^{b-1}\left(f_i-x\right) = \prod_{i=r-b+1}^{r-1}\left(f_i-x\right) . \end{equation} On the other hand, $r-1$ and $b$ are nonnegative integers satisfying $r-1 \geq b$. Thus, we can apply \eqref{darij-proof.5} to $r-1$ and $b$ instead of $a$ and $b$ (since we assumed that \eqref{darij-proof.5} holds for any nonnegative integers $a$ and $b$ such that $a\geq b$ and $a=r-1$). This gives us \begin{equation} f_{r-1,b} \prod_{i=1}^{b}\left(f_i-x\right) = \prod_{i=\left(r-1\right)-b+1}^{r-1}\left(f_i-x\right) . \end{equation} Since $\left(r-1\right)-b+1=r-b$, this rewrites as \begin{equation} f_{r-1,b} \prod_{i=1}^{b}\left(f_i-x\right) = \prod_{i=r-b}^{r-1}\left(f_i-x\right) = \left(f_{r-b}-x\right) \prod_{i=r-b+1}^{r-1}\left(f_i-x\right) \end{equation} (because $r-1 \geq r-b$). Since $0\leq b\leq r$, we have $0\leq r-b\leq r$, and \eqref{darij-proof.4} (applied to $r$ and $r-b$ instead of $a$ and $b$) yields \begin{equation} f_r-f_{r-b} = f_{r\mid r-b}\cdot \left(f_{r-\left(r-b\right)}-x\right) . \end{equation} Since $r-\left(r-b\right)=b$, this simplifies to \begin{equation} f_r-f_{r-b} = f_{r\mid r-b}\cdot \left(f_b-x\right) . \label{darij-proof.5.5} \tag{7} \end{equation} Since $0\leq b\leq r$, the recursive definition of $f_{r,b}$ yields \begin{equation} f_{r,b} = f_{r-1,b} + f_{r\mid r-b} f_{r-1,b-1} . \end{equation} Thus, \begin{align} & f_{r,b} \prod_{i=1}^b\left(f_i-x\right) \\ &= \left(f_{r-1,b} + f_{r\mid r-b} f_{r-1,b-1}\right) \prod_{i=1}^b\left(f_i-x\right) \\ &= f_{r-1,b} \prod_{i=1}^b\left(f_i-x\right) + f_{r\mid r-b} f_{r-1,b-1} \underbrace{\prod_{i=1}^b\left(f_i-x\right)}_{=\left(f_b-x\right)\prod_{i=1}^{b-1}\left(f_i-x\right)} \\ & = f_{r-1,b} \prod_{i=1}^b\left(f_i-x\right) + f_{r\mid r-b} f_{r-1,b-1} \left(f_b-x\right) \prod_{i=1}^{b-1}\left(f_i-x\right) \\ & = \underbrace{f_{r-1,b} \prod_{i=1}^b\left(f_i-x\right)}_{= \left(f_{r-b}-x\right) \prod_{i=r-b+1}^{r-1}\left(f_i-x\right) } + \underbrace{f_{r\mid r-b} \cdot \left(f_b-x\right)}_{\substack{=f_r-f_{r-b} \\ \text{(by \eqref{darij-proof.5.5})}}} \underbrace{f_{r-1,b-1} \prod_{i=1}^{b-1}\left(f_i-x\right)}_{= \prod_{i=r-b+1}^{r-1}\left(f_i-x\right)} \\ &= \left(f_{r-b}-x\right) \prod_{i=r-b+1}^{r-1}\left(f_i-x\right) + \left(f_r-f_{r-b}\right) \prod_{i=r-b+1}^{r-1}\left(f_i-x\right) \\ &= \underbrace{\left( \left(f_{r-b}-x\right) + \left(f_r-f_{r-b}\right) \right)}_{=f_r-x} \prod_{i=r-b+1}^{r-1}\left(f_i-x\right) \\ &= \left(f_r-x\right) \prod_{i=r-b+1}^{r-1}\left(f_i-x\right) = \prod_{i=r-b+1}^{r}\left(f_i-x\right) . \end{align} Since $r=a$, this rewrites as \begin{equation} f_{a,b} \prod_{i=1}^b\left(f_i-x\right) = \prod_{i=a-b+1}^{a}\left(f_i-x\right) . \end{equation} In other words, we have proven \eqref{darij-proof.5} for our $a$ and $b$ in Case 2. Next let us consider Case 3: In this case, $b=r$. In this case, we proceed exactly as in Case 2, except for one small piece of the argument: Namely, in Case 2 we were allowed to apply \eqref{darij-proof.5} to $r-1$ and $b$ instead of $a$ and $b$, but now in Case 3 we are not allowed to do this anymore (because \eqref{darij-proof.5} requires $a\geq b$). Therefore, we need a different way to prove the equality \begin{equation} f_{r-1,b} \prod_{i=1}^{b}\left(f_i-x\right) = \left(f_{r-b}-x\right) \prod_{i=r-b+1}^{r-1}\left(f_i-x\right) \label{darij-proof.6} \tag{8} \end{equation} in Case 3. 
Here is such a way: By the inductive definition of $f_{r-1,b}$, we have $f_{r-1,b}=0$ (since $b=r>r-1$). Thus, the left hand side of \eqref{darij-proof.6} equals $0$. On the other hand, $b=r$ yields $f_{r-b}=f_{r-r}=f_0=x$, so that $f_{r-b}-x=0$. Thus, the right hand side of \eqref{darij-proof.6} equals $0$. Since both the left hand side and the right hand side of \eqref{darij-proof.6} equal $0$, it is now clear that the equality \eqref{darij-proof.6} is true, and thus we can proceed as in Case 2. Hence, \eqref{darij-proof.5} is proven for our $a$ and $b$ in Case 3. We have thus proven that \eqref{darij-proof.5} holds for our $a$ and $b$ in each of the three possible cases 1, 2 and 3. Thus, \eqref{darij-proof.5} is proven for any nonnegative integers $a$ and $b$ such that $a\geq b$ and $a=r$. This completes the induction step. Thus, the induction proof of \eqref{darij-proof.5} is done.] Now, any nonnegative integers $n$ and $k$ satisfy \begin{equation} f_{k+n,n} \prod_{i=1}^n\left(f_i-x\right) = \prod_{i=\left(k+n\right)-n+1}^{k+n}\left(f_i-x\right) \end{equation} (by \eqref{darij-proof.5}, applied to $a=k+n$ and $b=n$). Since $\left(k+n\right)-n=k$, this simplifies to \begin{equation} f_{k+n,n} \prod_{i=1}^n\left(f_i-x\right) = \prod_{i=k+1}^{k+n}\left(f_i-x\right) . \end{equation} This immediately yields Theorem 1. $\blacksquare$
How to prove that the expectation of Cox Ingersoll Ross interest rate model is $r_0e^{-at} + b(1-e^{-at})$?
Hint: Remark that $$\mathbb E[r_t]=\mathbb E[r_0]-a\int_0^t\mathbb E[r_s]\,\mathrm ds+abt$$ and use Gronwall's inequality.
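For completeness, here is a sketch of how the hint finishes, writing $m(t)=\mathbb E[r_t]$ and assuming the usual CIR dynamics $\mathrm dr_t=a(b-r_t)\,\mathrm dt+\sigma\sqrt{r_t}\,\mathrm dW_t$. Differentiating the identity above gives the linear ODE $$m'(t)=a\bigl(b-m(t)\bigr),\qquad m(0)=r_0,$$ and the integrating factor $e^{at}$ yields $$m(t)=b+(r_0-b)e^{-at}=r_0e^{-at}+b\left(1-e^{-at}\right).$$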
Gaussian integers and divisibility
Suppose $x+yi=(1+i)g$, with $g$ a Gaussian integer. Then, multiplying by the conjugate, $$ x^2+y^2=2g\bar{g} $$ so $x^2+y^2\equiv0\pmod 2$, which implies $x\equiv y\pmod{2}$.
Series shown up in Quantum Mechanics
Let $S$ be defined as $$ S=\frac{4}{\pi^2}\sum_{k\geq 1}\frac{1}{k^2}\left(1-\cos\left(\frac{\pi k}{2}\right)\right)^2 $$ Decompose $S$ according to the remainder of $k$ modulo $4$. Since $\cos(\pi k/2)=1$ for $k\equiv 0 \pmod 4$, one part of the sum cancels exactly. Furthermore $\color{blue}{\cos(k\pi/2)=0}$ for $\color{blue}{k\equiv 1,3\pmod 4}$ and $\cos(k\pi/2)=-1$ for $k\equiv 2\pmod 4$, which means that we can equally write $$ S=\frac{4}{\pi^2}\sum_{k\geq 0}\left(\color{blue}{\frac{1}{(4k+1)^2}+\frac{1}{(4k+3)^2}}+\frac{2^2}{(4k+2)^2}\right)=\frac{4}{\pi^2}\sum_{k\geq 0}\left(\color{blue}{\frac{1}{(2k+1)^2}}+\frac{2^2}{(4k+2)^2}\right)=\\\frac{4}{\pi^2}\sum_{k\geq 0}\left(\color{blue}{\frac{1}{(2k+1)^2}}+\frac{1}{(2k+1)^2}\right)=\frac{8}{\pi^2}{\sum_{k\geq 0}\frac{1}{(2k+1)^2}} $$ or $$ S=1 $$ since $$ {\sum_{k\geq 0}\frac{1}{(2k+1)^2}}=\sum_{k\geq 1}\frac{1}{k^2}-\sum_{k\geq 1}\frac{1}{(2k)^2}=\frac{\pi^2}{6}-\frac{1}{4}\frac{\pi^2}{6}={\frac{\pi^2}{8}} $$ From a number-theoretic point of view we can identify the sum as $$ S=\frac{4}{\pi^2}L(\chi_1,2)=\frac{4}{\pi^2}\sum_{n\geq1}\frac{(1-\chi_1(n))^2}{n^2} $$ where $\chi_1(n)$ is the non-trivial Dirichlet character associated with the map $\mathbb Z/4\mathbb Z\rightarrow \mathbb{S}^1$
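A quick numerical confirmation that the partial sums approach $1$ (a sketch):

    import math

    S = 4 / math.pi ** 2 * sum(
        (1 - math.cos(math.pi * k / 2)) ** 2 / k ** 2 for k in range(1, 200001)
    )
    print(S)   # 0.99999..., tending to 1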
Find the equation of the normal line
Let the normal be at $(h,k)$; then $$y=2x^2+3 \implies y'=4x=-\frac{1}{m} \implies h=\frac{-1}{4m}$$ The equation of the normal is $$y-k=m(x-h) \implies y-2h^2-3=mx-mh \implies y-\frac{1}{8m^2}-3=mx+\frac{1}{4}~~~(2)$$ For any value of $m$, Eq. (2) represents a normal to the given parabola. Let $m=-1/8$; then $y-8-3=-x/8+1/4 \implies 8y+x=90$ is the required normal to the given parabola.
Let X be a non empty set equipped with the cofinite topology T. Is (X,T) compact? Is it connected?
For an open cover $\mathcal {C}$ of $X$: take an element $A$ of $\mathcal{C}$; how many elements does $X\setminus A$ have? How many open sets do you need to cover $X\setminus A$? For connectedness: can two distinct non-empty open sets of a cofinite topology ever be disjoint?
Does the boundary of an open set have measure zero (in $\mathbb{R}^n$)?
There are a lot of open sets in $\mathbb{R}^n$ whose boundary has positive Lebesgue measure. I wouldn't be surprised if "most" open sets have boundaries with positive Lebesgue measure, where "most" might be referring to cardinality, or some topological or measure-theoretic size. However, the open sets you can visualize are far more regular than the average open set, so it's not easy (if at all possible) to get a good mental picture of an open set whose boundary has positive Lebesgue measure. And the open sets one does analysis on, typically also are quite regular and have nice boundaries. Examples of open sets whose boundary is not a null set are for example complements of a thick Cantor set in dimension $1$ (products where at least one factor is such in higher dimensions). Somewhat similar, let $(r_k)_{k \in \mathbb{N}}$ be an enumeration of the points with rational coordinates, and let $$U = \bigcup_{k\in \mathbb{N}} B_{\varepsilon_k}(r_k)$$ for a sequence $\varepsilon_k \searrow 0$ such that $\sum {\varepsilon_k}^n$ converges. Then you have a dense open set $U$ with finite Lebesgue measure, its boundary is its complement and has infinite Lebesgue measure.
$f(x)=x+\frac{1}{e^x+1}$. Prove that for any $x,y$ : $|f(x)-f(y)|\leq|x-y|$
Hint: First use the Mean Value Theorem: $$\frac{f(x)-f(y)}{x-y} = f'(c),$$ then take absolute values and note that $f'(c)=1-\frac{e^c}{(e^c+1)^2}\in(0,1)$.
$0^0$ is undefined, but sometimes defined as $1$?
To me $0^0$ is very nicely defined as the cardinality of the set of maps from the empty set to the empty set, hence equals $1$. However, in the context of limits one must be aware that the binary operation of exponentiation is not continuous at $(0,0)$ (and not even defined in an open neighbourhood of $(0,0)$), which implies that $a_n\to 0$, $b_n\to 0$ gives us no idea what might happen to $a_n^{b_n}$ as $n\to\infty$. Therefore $0^0$ is called an indeterminate form, just like $\frac 00$; however the latter is not only an indeterminate form but also undefined. As this is the most-encountered specimen of indeterminate form, it is not surprising that confusing "indeterminate" and "undefined" is wide-spread. Note that "indeterminate form" is really about the unevaluated expression $0^0$. Normally, if two things are equal, they have the same properties, so if $0^0$ is indeterminate and $0^0=1$, we conclude that $1$ is indeterminate - which it is of course not. The important detail is that we are not talking about the value of $0^0$ as being indeterminate, but rather the "syntactic" expression (and that's why those things are called indeterminate forms, not indeterminate values) To clarify, we say that the ("syntactic") form $a\circ b$ is an indeterminate form if $a_n\to a$ and $b_n\to b$ does not allow conclusions about the existence or value of $\lim_{n\to\infty}a_n\circ b_n$. Thus writing something like $$ \lim_{n\to\infty}a_n^{b_n}=0^0=1$$ is probably wrong because $a_n\to 0$, $b_n\to 0$ does not warrant the first equality. In contrast, writing $$ \lim_{n\to\infty}\frac{a_n}{b_n}=\frac00=?$$ is definitely wrong because you cannot equate something with an undefined expression. Similarly to $0^0=1$, it is customary in some branches of math to define $0\cdot \infty=0$. But being defined does not mend the fact that $0\cdot \infty$ is an indeterminate form in the above sense.
How to determine if set of vectors is a basis for W
$S$ is a basis for $W$ since: (1) it is a linearly independent set; (2) it spans $W$. To prove (1) you just have to solve $\alpha (-1,-1,2)+\beta(-3,2,1)=0$ for $\alpha$ and $\beta$ to get $\alpha=0=\beta$. To prove (2): Let $(x_1,x_2,x_3)\in W$ (i.e. $x_1+x_2+x_3=0$); you have to find $\alpha$ and $\beta$ such that $\alpha (-1,-1,2)+\beta(-3,2,1)=(x_1,x_2,x_3)$, and $\beta=\frac{x_2-x_1}{5}$ and $\alpha=-\frac{2x_1+3x_2}{5}$ work. Then we are done!
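A quick check of those coefficients (a sketch; a random point of $W$ is produced by choosing $x_1,x_2$ and setting $x_3=-x_1-x_2$):

    import numpy as np

    u, v = np.array([-1, -1, 2]), np.array([-3, 2, 1])
    x1, x2 = 0.7, -2.3
    x = np.array([x1, x2, -x1 - x2])   # an arbitrary point of W

    beta = (x2 - x1) / 5
    alpha = -(2 * x1 + 3 * x2) / 5
    assert np.allclose(alpha * u + beta * v, x)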
Determine if a Region is Horizontally or Vertically Simple
Here is our area. $x^2+2y^2\leq4$ corresponds to the area inside an ellipse (the outer ellipse in the picture). $2\leq x^2+2y^2$ carves out another, smaller ellipse from the first one. So we see that $A$ is neither vertically simple nor horizontally simple.
Axiom of countable choice: a question about the domain.
No, the restriction that the index set (= domain of $A$) be $\mathbb{N}$ is inessential: from the axiom of countable choice as stated we can deduce the apparently-more-general (arbitrary countably-infinite-or-finite index set) form as a corollary. Below, I'll use the inclusive definition of countability: $X$ is countable if there is an injection from $X$ into $\mathbb{N}$ (this is equivalent to demanding that $X$ admit a surjection from $\mathbb{N}$ or be empty). Suppose $A$ is a function with domain a countable set $I$ such that $A(i)\not=\emptyset$ for each $i\in I$; I'll show that there is a choice function for $A$, that is, a function $f$ with domain $I$ such that $f(i)\in A(i)$ for each $i\in I$. Since $I$ is countable, fix an injection $j:I\rightarrow\mathbb{N}$. Let $B(n)=A(j^{-1}(n))$ if $n\in ran(j)$ and let $B(n)=\{0\}$ (say) otherwise. By AC$\omega$ we get a choice function $g$ for $B$. Now define $f$ by setting $$f(i)=g(j(i)).$$
Number of $1$s in the binary representation of $n$
Is this something that would fit your requirements? $b(n) = \begin{cases}0&n=0\\b(\frac n2)&n>0\land n\text{ even}\\b(\frac{n-1}2)+1&n\text{ odd}\end{cases}$
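As runnable code (a sketch, checked against Python's own binary representation):

    def b(n):
        """Number of 1s in the binary representation of n."""
        if n == 0:
            return 0
        if n % 2 == 0:
            return b(n // 2)
        return b((n - 1) // 2) + 1

    assert all(b(n) == bin(n).count("1") for n in range(1000))
    print(b(2023))   # 2023 = 0b11111100111 -> 9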
A non-circular argument that uses the Maclaurin series expansions of $\sin x$ and $\cos x$ to show that $\frac{d}{dx}\sin x = \cos x$
I like to think about it as follows: using the definitions you gave for sine and cosine, it's possible (although maybe a bit ugly) to prove the 'angle addition formulas': $$ \sin(a+b) = \sin(a)\cos(b) + \sin(b) \cos(a)$$ $$ \cos(a+b) = \cos(a)\cos(b) - \sin(a)\sin(b)$$ using only geometric considerations. From here on, one can easily see that $$\frac{d}{dx} \sin(x) = \lim_{h\rightarrow 0}(\frac{\sin(x+h)-\sin(x)}{h}) = \lim_{h\rightarrow 0} \big( \sin(x)\frac{\cos(h)-1}{h} + \cos(x) \frac{\sin(h)}{h} \big) $$ Now, since $\frac{\cos(h)-1}{h} = - \frac{\sin^2(h)}{h(1+\cos(h))}$, we can see that because $\lim_{h\rightarrow 0} \frac{\sin(h)}{h} = 1$ the limit on the right hand side only gets a contribution from the second term, which equals $\cos(x)$. Using the other angle addition formula, you can also prove that $\frac{d}{dx} \cos(x) = -\sin(x)$. This automatically implies that both functions are smooth, so you should be allowed to use Taylor's theorem to deduce their expansion (which turns out to converge for all $x$). The ugly part would be the proof of the 'angle addition formulas' which need some case distinctions depending on in which quadrant you are looking. However, I think you can make some shortcuts. For example, it's kind of obvious from the definitions that $\cos(\frac{\pi}{2} - x) = \sin(x)$ and vice versa, so you only need to prove the first one. Moreover, $\sin(\pi+x) = -\sin(x)$ is also clear, so you can assume $a+b \leq \pi$.
Attribution for the Cauchy–Schwarz inequality
An inner product was not defined until ~1905 by Hilbert. Orthogonality of the trigonometric functions in the Fourier expansion was experimentally discovered in the late 1700's, and Fourier built on this work with more general orthogonal function expansions. Cauchy only came up with his inequality for complex Euclidean space in the 1820's (I think that's right.) And no connection was made between orthogonality of functions with respect to an integral and the Euclidean dot product until the second half of the 1800's. Part of the reason for this disconnect may have been that there was no natural path from the sum to the integral because the Riemann integral was not defined until several decades later. A note by Bunyakowsky appeared in a journal in 1859 that the discrete case could be generalized to integrals, but it was ignored because no applications were given. It wasn't until the next decade that Schwarz published a paper on minimal surfaces where the Schwarz inequality for the integral was rediscovered and used to measure something like a distance in order to get at a solution for PDEs. But, even then, it was not noticed that there was a general concept of a "norm" or "distance function." Cantor began developing set axioms for the foundation of Mathematics about that time, and by the turn of the 20th century, people started thinking of abstract "spaces" where a point could be an object such as a function. And that's what led to Hilbert's definition of a general inner product space, and to a great deal of modern Mathematics. The current version of the Cauchy-Schwarz inequality was proved in this context. So, the Schwarz inequality for integrals came a few decades before the definition of an inner product. And Cauchy's inequality for the discrete case came about 80 years earlier. I believe it was Hilbert who tagged the general inner product inequality as Cauchy-Schwarz because of the work of these two Mathematicians.
Solve for $c$ in an equation involving modulo operation (remainder after division)
Your programming equation $(c*a+1)\%b=1$ would normally be written by mathematicians like $$ca+1 \equiv 1 \pmod b$$ That simplifies to $$ca \equiv 0 \pmod b$$ which we can easily prove: $ca+1 \equiv 1 \pmod b \implies ca+1 = kb + 1$ for some integer $k$. So $ca = kb$. To find $c$, let $a=gA, b=gB$, where $g$ is the greatest common divisor of $a$ & $b$, $g = \gcd(a, b)$. Then $cgA = kgB \implies cA = kB$. Now $A$ and $B$ are coprime, so $B$ must divide $c$, and the smallest (positive) value for $c$ is $B$ (and hence $k = A$). The proof of that step is known as Euclid's lemma. In other words $c = b / \gcd(a, b)$. Note that $ac = AgB = A(gB) = Ab$, so it's a multiple of both $a$ and $b$. So how do we find $\gcd(a, b)$? The usual method is to use Euclid's algorithm, which utilizes the fact that for $a < b$, $$\gcd(a, b) = \gcd(b\mod a, a)$$ Here's an implementation in Python:

    def gcd(a, b):
        while b:
            a, b = b, a % b
        return a

That function is written for $a \ge b$; it still works if $a<b$, but it takes one extra loop, or you can do a compare and swap before the loop, if you like.
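A small usage check of the final formula (a sketch with example values for `a` and `b`, using the `gcd` function above):

    a, b = 12, 18
    c = b // gcd(a, b)
    print(c)                  # -> 3
    print((c * a + 1) % b)    # -> 1, as required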
Codimension of the largest subspace of a closed convex cone in a normed vector space
Here's a version of your proof that convinces me: Set $X := -K \cap K$. We mean to show that $X$ is a subspace of codimension one, i.e. there is a $y_0$ with $X + \mathbb R y_0 = V$. To that end, choose $y_0 \in K \setminus X$ (this is possible since otherwise $K \subset -K$, so that $-K = -K \cup K = X$) and let $v \in V$ be arbitrary. We mean to find an $\alpha \in \mathbb R$ such that $v + \alpha y_0 \in X$. Consider the line $v + \mathbb R y_0$. Then by convexity of $K$, the intersection with $K$ is either (1) empty, (2) a point, (3) a line segment, (4) a beam or (5) a full line. (1): If this is so, then we have $v + \beta y_0 \in -K \setminus K$ for every $\beta \in \mathbb R$. Since $-K$ is a cone, we also have $1/\beta v + y_0 \in -K \setminus K$ for every $\beta > 0$ and thus $y_0 \in -K$ since $-K$ is closed; a contradiction to the choice of $y_0$ (namely, $y_0 \in K \setminus (-K \cap K)$). In other words, this case cannot occur. (2) This point must also lie in $-K$ since the rest of the line lies in $-K$ and $-K$ is convex. Consequently it lies in $X$. (3) Analogous to (2). (4) This beam can be of two forms: $$B_+ = \{ v + \beta y_0 \colon \beta \ge \beta_0 \}$$ or $$B_- = \{ v + \beta y_0 \colon \beta \le \beta_0 \}.$$ In the latter case, we get $-1/\beta v - y_0 \in K$ for every $\beta < \min(\beta_0,-1)$ and thus $y_0 \in -K$; a contradiction to the choice of $y_0$. In the former case, we have $v + \beta y_0 \in K$ exactly if $\beta \ge \beta_0$. Since $-K$ is closed, we also have $v + \beta y_0 \in -K$ for $\beta \le \beta_0$. This implies $v + \beta_0 y_0 \in X$. (5) Analogous to the case of $B_-$. That $X$ is a subspace is clear since by construction, it is a closed convex cone with $-X = X$ and $0 \in X$.
Enlarging a probability space
If $\Omega$ is a finite set and $\mathbb Q$ is the uniform measure on it, it is obvious that you cannot construct any continuous random variable on this space. Consider $(\Omega, \mathcal G, \mathbb Q) \times (\Omega', \mathcal G', \mathbb Q')$ where $(\Omega', \mathcal G', \mathbb Q')$ is any probability space on which there is an exponential$(1)$ random variable $\xi$. Define $\eta (\omega, \omega')=\xi(\omega')$. Then $\eta$ has the required properties.
Convergence of infinite series $\sum_{k=2}^{\infty}\frac{(-1)^k}{k\ln(k)}$
The series converges, and it can be proven using the Leibniz criterion for alternating series. The criterion analyzes sums of the form $$\sum_{n=1}^\infty (-1)^n a_n$$ where $a_i\geq 0$. The criterion says that if $\lim_{n\to\infty}a_n = 0$ and the sequence $\{a_n\}$ is decreasing, then the sum converges. In your case, $a_k=\frac{1}{k\ln k}$ which satisfies both conditions (it's decreasing and has a limit of $0$), so the series converges.
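The slow, oscillating convergence is easy to observe numerically (a sketch):

    import math

    s = 0.0
    for k in range(2, 100001):
        s += (-1) ** k / (k * math.log(k))
        if k in (10, 100, 1000, 10000, 100000):
            print(k, s)   # successive values settle down, since 1/(k ln k) -> 0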
Closed form of $I(a)=\int_{0}^a {(e^{-x^2})}^{\operatorname{erf}(x)}dx$ and does it behave similarly to the error function?
$\newcommand{\erf} {\operatorname{erf}}$I hope this will be helpful to you. First, abandon the search for closed form. It's not likely it exists. Now, if you really need a simple expression for $I(a)$ in some range of values, there are ways to get various approximations. The function is very nice. It goes to its limit at $\infty$ very very fast. Here's the plot of $I(a)$ for $a \in [0,10]$: So (depending on the accuracy you need) you can easily take $I(a)=I(\infty)$ for $a > a_0$ with $a_0$ around $3$ or $4$. Mathematica gives for the first $100$ digits: $$I(\infty)=0.972106992769178593151077875442391175554272\\1833855699009722910408441888759958220033410678218401258734$$ Now, what to do for small $a$? The function is so nice, you can just use the Taylor expansion around $a =0$. The first term is: $$I(a) \approx a$$ Here's the plot for $a \in [0,1]$: The proof is simple. The Taylor series look like this: $$I(a)=I(0)+I'(0) a+\frac{I''(0)}{2!} a^2+\frac{I'''(0)}{3!} a^3+\dots$$ We can see that: $$I(0)=0$$ $$I'(0)=e^{-a^2 \erf (a) } \bigg| _{a=0}=1$$ Now let's find a better approximation by computing the higher derivatives: $$I''(a)=\left( e^{-a^2 \erf (a) } \right)'=-\frac{2}{\sqrt{\pi}} a e^{-a^2 (\text{erf}(a)+1)} \left(\sqrt{\pi } e^{a^2} \text{erf}(a)+a\right)$$ $$I''(0)=0$$ I use Mathematica as a shortcut, but it's easy to do it by hand, if you remember: $$\erf' (x)=\frac{2}{\sqrt{\pi}} e^{-x^2}$$ $$I'''(0)=0$$ $$I^{ IV} (0)=-\frac{12}{\sqrt{\pi}}$$ So our next approximation is: $$I(a) \approx a-\frac{ 1}{2\sqrt{\pi}} a^4$$ The plot with both approximations (orange, green) and the function itself (blue) is given below: You can continue in the same way for higher derivatives. Now I admit that it's possible you need the values of $I(a)$ for all the possible $a$ and with high precision, so the approximations won't do. Then you need to turn to numerical integration (as Mathematica did for me to plot the function). Another way to approximate the function is using its derivative: $$\frac{d I}{da}=e^{-a^2 \erf (a) }$$ But this is an ordinary differential equation, which can be solved numerically. As an illustration, here's a simple explicit Euler scheme for the step size $h$: $$\frac{I(a+h)-I(a)}{h}=e^{-a^2 \erf (a) }$$ $$I(a+h)=I(a)+h e^{-a^2 \erf (a) }$$ We can use an initial value $I(0)=0$. For $h=\frac{1}{10}$ we have the following result (red dots) compared to the exact function (blue line): For $h=\frac{1}{50}$: This way can serve as a good alternative to numerical integration (depending on the context and the application of course).
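For reference, a direct numerical evaluation of $I(a)$ (a sketch using `scipy.integrate.quad`; it reproduces the limiting value quoted above and the quartic Taylor approximation):

    import math
    from scipy.integrate import quad

    def I(a):
        return quad(lambda x: math.exp(-x ** 2 * math.erf(x)), 0, a)[0]

    print(I(10))   # 0.9721069927691786..., I(inf) to double precision
    print(I(0.5), 0.5 - 0.5 ** 4 / (2 * math.sqrt(math.pi)))   # ~0.4824 vs. ~0.4824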
Interior and Closure of set
$H$ contains all of its limit points and is therefore closed. Let $G$ be the complement of $H$ in $\mathcal{C}[0, 1]$. Functions in $G$ can approximate any function in $H$ arbitrarily well (for example, if $h(x) \in H$, then $h_n(x) := h(x) + \frac{1}{n}$ lies in $G$ and $\lim_{n \to \infty} h_n(x) = h(x)$). Therefore \begin{align*} \operatorname{Int}(H)^{c} = \overline{G} = \mathcal{C}[0, 1], \end{align*} so that $\operatorname{Int}(H) = \emptyset$.
Having trouble using the substitution method.
You are correct in the first line of thought: you can substitute the $x^5 dx$ for $\dfrac{1}{6}du$, leaving you with $$ \int \frac{x^5}{x^6+5} \,dx = \frac{1}{6}\int\frac{1}{u} \,du. $$ This is because, as you said, $du= 6x^5 dx$, and rearranging this you arrive at $\dfrac{1}{6} du = x^5 dx$, and so you can "group" the $x^5 dx$ in your original integral and replace it with $\dfrac{1}{6} du$ as I have done above. Integrating then gives $\frac16\ln|u|+C$, and substituting back, $$\int \frac{x^5}{x^6+5}\,dx = \frac16\ln(x^6+5)+C,$$ where the absolute value can be dropped since $x^6+5>0$.
Determining whether a set is linearly independent.
Three vectors $u,v,w$ are linearly independent if and only if the only way to obtain the null vector as a combination $\alpha u+\beta v+\gamma w=0$ is the trivial one $\alpha=\beta=\gamma=0$; that is, you can't make the null vector from any other combination. Since $2u+3v-w=(8,6,-4)+(6,-18,21)-(14,-12,17)=(0,0,0)$, they are linearly dependent. Your approach is totally correct; your book must be wrong.
$f(m+n+1) = f(m) + f(n)$. Show $f(x) = x + 1$
As noted in the comments, there are more general solutions. A nice approach is to set $g(x)=f(x-1)$. Then $$g(x+y)=f(x+y-1)=f((x-1)+(y-1)+1)=f(x-1)+f(y-1)=g(x)+g(y)$$ which is Cauchy's functional equation. You can then try to show that $g(p/q)=(p/q)g(1)$ for all rational $p/q$. To answer your specific question at the end: any dense set is fine - two continuous functions that are equal on a dense set must be equal everywhere.
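For completeness, the standard steps from Cauchy's equation to rational homogeneity are:
$$g(nx)=n\,g(x)\quad (n\in\mathbb N,\ \text{by induction}),\qquad g(1)=g\Bigl(q\cdot\tfrac1q\Bigr)=q\,g\Bigl(\tfrac1q\Bigr)\ \Longrightarrow\ g\Bigl(\tfrac1q\Bigr)=\frac{g(1)}{q},$$
$$g\Bigl(\tfrac pq\Bigr)=p\,g\Bigl(\tfrac1q\Bigr)=\frac pq\,g(1),$$
together with $g(0)=0$ and $g(-x)=-g(x)$ (both immediate from $g(x+y)=g(x)+g(y)$) to cover the negative rationals.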
Example of a ring without unity that has a subring with unity?
Hint: $\mathbb Z\times2\mathbb Z$
Terminology for a function computed by a finite-state transducer?
It's called a finite-state *transduction*.
Basic properties of complete intersection ideals
I am working under the assumption that $T$ is a polynomial ring, i.e. the $y_i$ are algebraically independent elements. The ideal $J$ is a complete intersection ideal iff $\operatorname{ht}(J) = \mu(J)$. Here $\mu(J)$ stands for the minimal number of generators of the ideal $J$. It is clear that $\operatorname{ht}(J) = n$ and $\mu(J) \leq n$, since $J$ can evidently be generated by $n$ elements. But we know that $\operatorname{ht}(J) \leq \mu(J)$. Combining the previous observations, we have $n = \operatorname{ht}(J) \leq \mu(J) \leq n$. Thus $\operatorname{ht}(J) = \mu(J) = n$. Consider the ideal $\mathfrak{m} = (y_1, \dots , y_n)$. Then $T_{\mathfrak{m}}$ is a regular local ring and hence Gorenstein. Since $y_1, y_2^{a_2}, \dots, y_n^{a_n}$ form a regular sequence, we have that $T_{\mathfrak{m}}/J$ is Gorenstein. Now note that since the $y_i$ are nilpotent in $A$, every element not in the image of $\mathfrak{m}$ is already invertible in $A$; thus we get that $A = T_{\mathfrak{m}}/J$, and so $A$ is Gorenstein. Here I have used Lemma 45.21.3 and Lemma 45.21.6 from the following link: https://stacks.math.columbia.edu/tag/0DW6. I claim that for $a =y_2^{a_2-1} y_3^{a_3-1} \cdots y_n^{a_n-1}$, the image $\overline{a}$ is a non-zero element of $A$: if not, then $a \in J$, which would give a polynomial relation among the $y_i$, contradicting their algebraic independence.
Inverse Function: If $g(x) = x^2 + 8x$ with $x \geq −4$, find $g^{−1}(33)$.
Remember, $g^{-1}(33)$ must be a value $x$, such that $x \geq -4$, and $g(x) = 33$. That means, you should find a solution to $$33 = x^2 + 8x$$ Since it's a quadratic equation, it will probably have two solutions. Discard the solution that is less than $-4$ and you are done.
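For concreteness, the computation finishes as
$$x^2+8x-33=0 \iff (x+11)(x-3)=0 \iff x=-11 \ \text{or}\ x=3,$$
and since $-11<-4$ is discarded, $g^{-1}(33)=3$.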
A question about solving a Sturm–Liouville Problem
First, note that you also have a solution for an eigenvalue of $0$ in this case, namely the constant solution $X(x)=A$. Now, for positive eigenvalues $\lambda$, with $X(x)=A\cos(\sqrt \lambda x)+B\sin(\sqrt \lambda x)$, taking the derivative gives $$X'(x)=-A\sqrt \lambda \sin(\sqrt \lambda x)+B\sqrt \lambda \cos(\sqrt \lambda x).$$ From here, to solve for $A$ and $B$, you put in your boundary values. Typically one of the boundary points is $0$, which forces one of $A,B$ to be $0$, and the other condition then leads to $\sqrt\lambda$ being of the form $n\pi$ or $(2n+1)\pi/2$ (scaled by the interval length). Now, if the boundary points $a,b$ are not $0$: plugging in $X'(a)=0$ and moving the $A$ term to the other side, we get $A\sqrt \lambda \sin(\sqrt \lambda a)=B\sqrt \lambda \cos(\sqrt \lambda a)$, and the exact same equation with $b$. Dividing through, this simplifies to $\tan(\sqrt \lambda a)=\frac BA$, and the same for $b$. So we need to find a $\lambda$ such that $\tan(\sqrt \lambda a)=\tan(\sqrt \lambda b)=\frac BA$. Since $\tan$ has period $\pi$, this forces $\sqrt\lambda\,(b-a)=n\pi$ for some positive integer $n$, i.e. $\lambda=\left(\frac{n\pi}{b-a}\right)^2$, and the ratio $B/A=\tan(\sqrt\lambda a)$ then determines the eigenfunction up to a constant factor.
Sum of 3 correlated, jointly distributed random variables
Assuming the $X_i$ are centered, i.e. $\mathbb{E}[X_i]=0$ (otherwise replace each $X_i$ by $X_i-\mathbb{E}[X_i]$, which changes neither the variances nor the correlations), you can explicitly write $$\begin{align} \operatorname{Var}[X_1+X_2+X_3] &= \mathbb{E}[(X_1+X_2+X_3)^2] = \mathbb{E}[X_1^2+X_2^2+X_3^2 + 2X_1X_2 + 2X_1X_3 + 2X_2X_3] \\ &=\mathbb{E}[X_1^2]+\mathbb{E}[X_2^2]+\mathbb{E}[X_3^2] + 2\mathbb{E}[X_1X_2] + 2\mathbb{E}[X_1X_3] + 2\mathbb{E}[X_2X_3] \\ &= \sigma_1^2+\sigma_2^2+\sigma_3^2 +2\rho_{12}\sigma_1\sigma_2 +2\rho_{13}\sigma_1\sigma_3 +2\rho_{23}\sigma_2\sigma_3 \end{align}$$ as you thought. In short: in case of doubt, going back to the definition might be a good idea.
$xf'(x) = αf(x)$. How to prove that $f(x) = cx^\alpha$?
Hint: Show that $g(x) := f(x)x^{-\alpha}$ is constant.
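Spelling out the hint: for $x>0$,
$$g'(x)=f'(x)\,x^{-\alpha}-\alpha f(x)\,x^{-\alpha-1}=x^{-\alpha-1}\bigl(xf'(x)-\alpha f(x)\bigr)=0,$$
so $g(x)=c$ is constant and $f(x)=cx^{\alpha}$.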
Combinations and Permutations: John is to select a committee of 4 individuals from among a group of 5 candidates
Your second answer is correct, as is the reasoning used to reach it. Alternatively, you could start with the Treasurers: there are $\binom52$ ways to choose them. After that there are $3$ ways to choose the President and $2$ ways to choose the Vice President. Thus, there are $$\binom52\cdot3\cdot2=10\cdot3\cdot2=60$$ different ways to populate the committee. Your first answer is incorrect because it ignores the fact that the two Treasurer positions are indistinguishable. Suppose that $A$ chooses to be a Treasurer: then $A$ was really choosing from only $4$ different positions, not $5$. If $A$ chooses to be a Treasurer, then it’s true that $B$ is choosing from $4$ different positions, but if $A$ chooses to be President or Vice President, $B$ is choosing from only $3$ different positions, not $4$. That analysis quickly gets a bit ugly as you consider the various cases.
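As a brute-force sanity check (a sketch of mine, not part of the original answer), one can enumerate the assignments directly in Python:

```python
from itertools import permutations

people = "ABCDE"
committees = set()
for pres, vice, t1, t2 in permutations(people, 4):
    # the two Treasurer slots are indistinguishable, so store them unordered
    committees.add((pres, vice, frozenset((t1, t2))))
print(len(committees))  # 60
```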
Meromorphic function is holomorphic on a compact Riemann Surface except for a given point $p$.
This follows from Riemann-Roch: https://en.wikipedia.org/wiki/Riemann%E2%80%93Roch_theorem. In the notation from Wikipedia, let $D = (g+1)[P]$. Then $l(D) \ge \deg(D) - g + 1 = 2$, so the space of meromorphic functions $h$ with $(h) + D \geq 0$ has dimension at least $2$ and hence contains a non-constant function. In particular, there is a non-constant meromorphic function $h$ with $(h) + (g+1)[P] \geq 0$, so its only pole is at $P$ (of order at most $g+1$).
Height of $n$-simplex
Your question didn't state this explicitly, but I assume you're referring to a regular simplex of unit edge length. (A comment indicates as much.) I like coordinates, so I'd use coordinates to verify your result. Consider the standard simplex. Its corners are the unit vectors of a Cartesian coordinate system of dimension $n+1$, and therefore its edge length is $\sqrt2$. Its height would be the minimal distance between $e_1=(1,0,0,\ldots,0)$ and a point on the opposite face, i.e. some $p=(0,a_2,a_3,\ldots,a_{n+1})$ with $a_2+a_3+\dots+a_{n+1}=1$. For reasons of symmetry, the point with minimal distance has to be the center of that opposite face, i.e. $a_2=a_3=\dots=a_{n+1}=\frac{1}{n}$. So the height would be $$\sqrt{1+n\left(\frac{1}{n}\right)^2} =\sqrt{1+\frac{1}{n}} =\sqrt{\frac{n+1}{n}}$$ If your simplex has edge length $1$ instead of $\sqrt2$, then you have to scale everything down by the factor $\sqrt2$, so your final height would be $$h=\sqrt{\frac{n+1}{2n}}$$ which is equivalent to the $\sqrt{\frac{n^2+n}{2n^2}}$ you got.
Exterior powers of (half) spin representations
Use a computer program. For instance, here is the decomposition into irreducibles of the 6th exterior power of the half-spin representation of Spin(12). To me the result does not look sufficiently regular to make a guess about a closed formula that would describe the decomposition in general. Writing an expression as polynomial in the fundamental representations (I suppose with multiplication meaning tensor product) would seem even harder still.
A sphere cuts the coordinate axes at points P,Q,R and also passes through the origin. Then which of the equations is satisfied?
It is equation (a) that is fulfilled. Edit: here is a simpler way to reach the result (the first draft is given as an appendix) with very few computations. The variable plane passing through the fixed point $F=(a,b,c)$ intersects the coordinate axes in $P=(p,0,0)$, $Q=(0,q,0)$ and $R=(0,0,r)$ (we assume $p,q,r \ne 0$). The equation of the plane $PQR$ is known to be: $$\dfrac xp+\dfrac yq+\dfrac zr=1$$ As $F$ belongs to this plane, we have: $$\dfrac ap+\dfrac bq+\dfrac cr=1 \tag{1}$$ Let us now consider the box (parallelepiped) with vertices $O,P,Q,R,O',P',Q',R'$ where $O'=(p,q,r)$. The sphere circumscribed about this box, centered at $$C=(x_0, y_0, z_0)=(\tfrac p2, \tfrac q2, \tfrac r2), \tag{2}$$ is clearly circumscribed about the trirectangular tetrahedron $OPQR$ as well, due to central symmetry with respect to $C$ ($P',Q',R'$ being the points symmetric to $P,Q,R$ resp. with respect to $C$). If we replace $(p,q,r)$ in (1) by $(2x_0, 2y_0, 2z_0)$, as given by (2), we get relationship (a). Appendix (first version, with the same notations as before): points $F,P,Q,R$ being coplanar, the following determinant is zero: $$\begin{vmatrix}p& 0& 0 &a\\ 0 &q& 0& b\\ 0 &0& r& c\\ 1 &1 &1&1\end{vmatrix}=0 \ \ \iff \ \ \dfrac ap+\dfrac bq+\dfrac cr=1 \tag{1}$$ (this is a classical coplanarity condition: see here). Besides, the general equation of a sphere passing through the origin with center $(x_0, y_0, z_0)$ is: $$x^2+y^2+z^2-2 x_0 x - 2 y_0 y -2 z_0 z=0$$ Let us express that this sphere passes through $P, Q, R$ resp.: $$\begin{cases}p^2 - 2 x_0 p &=&0\\ q^2 - 2 y_0 q &=&0\\ r^2 - 2 z_0 r &=&0 \end{cases}$$ Therefore the coordinates of the center of the sphere are $$(x_0, y_0, z_0)=(\tfrac p2, \tfrac q2, \tfrac r2)$$ which verify, taking account of (1): $$\dfrac{a}{2 x_0}+\dfrac{b}{2 y_0}+\dfrac{c}{2 z_0}=1$$ which is another way to write relationship (a). Important remark: the center of the sphere has the remarkable geometric property of being the center of the box (= parallelepiped) defined by points $O,P,Q,R$ and $S=(p,q,r)$ [which is not in the base plane $PQR$ of the "trirectangular tetrahedron" $OPQR$].
Introduction to the Theory of Distributions, Friedlander and Joshi, Exercises 1.3 and 1.4
For your question 2 (it's nice when a question only has one question!) consider the function $g(x)=\ln x$ for $x>0$ and $g(x)=0$ for $x\le0$. Then $g$ is locally $L^1$, so defines a distribution, and $g'(x)=1/x$ for $x>0$. So $g'$ (considered as a distribution) is an admissible $u$. Therefore $$\left<u,\phi\right>=-\int_0^\infty\phi'(x)\ln x\,dx$$ works. If you don't like derivatives, then integration by parts will give you the alternative formula $$\left<u,\phi\right>=\int_0^1\frac{\phi(x)-\phi(0)}{x}\,dx +\int_1^\infty\frac{\phi(x)}{x}\,dx.$$ Of course, this $u$ is not unique, as you can add any linear combination of the Dirac delta function (at $0$) and its derivatives to $u$.
Is showing that the difference is zero a valid proof?
Yes, it is a valid proof: the statements $a=b$ and $a-b=0$ are equivalent, since adding or subtracting the same quantity on both sides of an equation produces an equivalent equation.
(Uniform) continuity in $\mathbb{R^n}$
You can do the same thing: a function $F=(f_1,\dots,f_n):\mathbb{R}^n\to \mathbb{R}^n$ is uniformly continuous if its component functions are. Suppose $f_1,\dots,f_n$ are uniformly continuous functions from $\mathbb{R}^n\to \mathbb{R}$. Fix $\epsilon>0$ and let $\delta=\min_{1\leq i\leq n}\delta_i$ be the minimum of the $\delta_i$ meeting the $\epsilon/\sqrt{n}$ challenge for each $f_i$ and any points $x,y\in \mathbb{R}^n$. Then, for this fixed $\epsilon>0$ and any $x,y\in \mathbb{R}^n$, $|x-y|\leq \delta$ implies $$ |f_i(x)-f_i(y)|<\frac{\epsilon}{\sqrt n},\quad 1\leq i\leq n. $$ So, $$ \|F(x)-F(y)\|^2=\sum_{i=1}^n |f_i(x)-f_i(y)|^2< \sum_{i=1}^n \frac{\epsilon^2}{n}=\epsilon^2, $$ i.e. $\|F(x)-F(y)\|<\epsilon$. It's worth internalizing the mantra: vector valued functions inherit the properties of their component functions (within reason). Details on this particular example: if we take $\epsilon=1$, we may choose, for any given $\delta>0$, the points $x=(\delta,0,\dots,0)$, $y=(\delta/2,0,\dots,0)$. Then $\|x-y\|=\delta/2<\delta$ but $$ |f_1(x)-f_1(y)|=\left|\frac{x_1}{|x|^2}-\frac{y_1}{|y|^2} \right|=\frac{1}{\delta}\geq 1 $$ for $\delta\leq 1$. I leave $\delta>1$ to you.
Is the Fibonacci lattice the very best way to evenly distribute N points on a sphere? So far it seems that it is.
The Fibonacci lattice is not the best way to evenly distribute points on a sphere. The problem of distributing $N$ points evenly on a unit sphere has only been solved for specific $N$. Moreover, the vertices of Platonic solids are not always optimal. This is succinctly described on the Wolfram MathWorld site: “For two points, the points should be at opposite ends of a diameter. For four points, they should be placed at the polyhedron vertices of an inscribed regular tetrahedron. There is no unique best solution for five points since the distance cannot be reduced below that for six points. For six points, they should be placed at the polyhedron vertices of an inscribed regular octahedron. For seven points, the best solution is four equilateral spherical triangles with angles of 80 degrees. For eight points, the best dispersal is not the polyhedron vertices of the inscribed cube, but of a square antiprism with equal polyhedron edges. The solution for nine points is eight equilateral spherical triangles with angles of $\arccos(1/4)$. For 12 points, the solution is an inscribed regular icosahedron.” There are many approximate solvers for this (SpherePoints[] and Offset Lattice). The Fibonacci spiral is easy to program, but is not optimal.
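Since the answer notes that the Fibonacci spiral is easy to program, here is one common variant in Python (there are several offset conventions; this is a sketch, not a canonical implementation):

```python
import math

def fibonacci_sphere(n):
    """Place n points on the unit sphere along a Fibonacci spiral:
    fairly even, but (as discussed above) not optimal."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))  # about 2.39996 rad
    points = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n      # heights spread evenly in (-1, 1)
        r = math.sqrt(1.0 - z * z)         # circle radius at height z
        theta = golden_angle * i
        points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points

print(fibonacci_sphere(4))  # compare with the optimal tetrahedron vertices
```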
Spectrum of a set of first order formulas
The above definition of spectrum is not useful for theories. For given any subset $K$ of the natural numbers, there is a theory $T$ such that $T$ has a model of cardinality $n$ if and only if $n\in K$. All we need to do is to produce, for every $n\not\in K$, a sentence that says "it is not the case that there are exactly $n$ objects." That can be done using just equality.
If $x, y$ are integers such that $x^{2}- y^{3}= 17$, prove each of the following propositions.
For c. (as promised): there are two cases.

For $3\nmid x$: then $3\nmid y$, and therefore $3\mid 2x^{2}+ y^{2}$.

For $3\mid x$ (two subcases):

For $2\nmid x$: then $2\mid 2x^{2}+ y^{2}$.

For $2\mid x$: then $6\mid x$, so $y^{3}\equiv -17 \pmod{36}$, hence $y^{3}\equiv 7 \pmod{12}$, hence $y= 12b+ 7$ for some $b\in \mathbb{Z}$. So:

a. For $x= 12a+ 6$ $(a\in \mathbb{Z})$: then $(12b+ 7)^{3}+ 17= (12a+ 6)^{2}$, so $84b^{2}+ 7b+ 10\equiv 1\pmod{8}$, hence $$b\equiv 5\pmod{8}\ \Longrightarrow\ y= 96w+ 67\quad (w\in \mathbb{Z}).$$ But then $$\left.\begin{matrix} y^{3}+ 17\equiv 3^{3}+ 17\equiv 44\equiv 12\pmod{32}\\ x^{2}\equiv 0,\,4,\,16\pmod{32} \end{matrix}\right\}\ \Longrightarrow\ \text{impossible}.$$

b. For $x= 12a$: then $(12b+ 7)^{3}+ 17= (12a)^{2}$, so $$a^{2}\equiv 6\pmod{7},\ \text{which is impossible, because}\ 4a^{2}\equiv 0,\,1,\,2,\,4\pmod{7}.$$

q.e.d.
Sylvester's identity
$I_1 - b^Ta$ is not, technically speaking, a real number. It is a $1\times 1$ matrix with one element that is equal to $1-b^Ta$, and luckily, the determinant of a $1\times 1$ matrix $A=[a]$ is just the element of that matrix, i.e. $a$. In your case, the only element of $I_1-b^Ta$ is $1-b^Ta$, and since $b^Ta$ is just the scalar product of $b$ and $a$, that's the same as $1-a^Tb$.
Spherical average over harmonic, decreasing function
Here is how the proof should go. Set $$\phi(r) = \int_{S^2} rV(r\omega) \, d\omega,$$ and differentiate in $r$ to obtain $$\phi'(r) = \int_{S^2} V(r\omega) +r\nabla V(r\omega)\cdot \omega \, d\omega.$$ Make a change of variables to find $$\phi'(r) = \int_{rS^2} \frac{1}{r^2} V(\omega) + \frac{1}{r}\frac{\partial V}{\partial n}(\omega) \, d\omega,$$ where $rS^2$ is the sphere of radius $r$. Setting $W(z) = |z|^{-1}$ we can rewrite this as $$\phi'(r) = \int_{rS^2} -V(\omega)\frac{\partial W}{\partial n}(\omega) + W(\omega)\frac{\partial V}{\partial n}(\omega) \, d\omega.$$ Now since $W$ and $V$ are harmonic, you can use Green's identity and the boundary condition $V\to 0$ as $|x| \to \infty$ to show that the right hand side is zero. You may have to fill in a few details on this last part, but I believe this is the basic idea of the proof.
Estimate of analytic function in a unit disk
Let $g(z) = f(z)f(-z)$. Then $|g(z)| \le ab$ on the circle (why?). Hence, the maximum modulus principle gives $|g(0)| \le ab$, and since $g(0)=f(0)^2$, this yields $|f(0)| \le \sqrt{ab}$.
What is the maximum value of the sum $\sum_{i=1}^L(\bar{x}-x_i)$, in this specific case.
Let me write $$ S(x):=\sum_{i=1}^L ( \bar x - x_i) = \sum_{i=1}^L \Bigl(\sum_{j=1}^K K^{-1}x_j - x_i\Bigr) = \sum_{i=1}^L (L/K-1)x_i + \sum_{i=L+1}^K (L/K)\,x_i. $$ Since $L/K-1\le 0$ and $L/K\ge 0$, $S(x)$ is maximal if the $x_i$, $i=1,\dots,L$, are at the lower bound $a$, while the $x_i$, $i=L+1,\dots,K$, are at the upper bound $b$. Hence the maximum is $$ S_{max}=L(L/K-1)a + (K-L)(L/K)\, b = \frac{L(K-L)}{K}(b - a). $$
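A small numerical sanity check of this formula (a sketch of mine): since $S$ is linear in each $x_i$, its maximum over the box $[a,b]^K$ is attained at a corner, so a brute-force search over the corners suffices.

```python
from itertools import product

K, L, a, b = 4, 2, -1.0, 3.0

def S(x):
    xbar = sum(x) / K
    return sum(xbar - x[i] for i in range(L))

best = max(S(x) for x in product((a, b), repeat=K))
print(best)                        # 4.0 by brute force
print(L * (K - L) / K * (b - a))   # 4.0 from the closed formula
```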
Prove the uniform continuity of a function with a certain property
For any $\epsilon >0$, by assumption there is $M>0$ such that if $|x| \ge M$, then $|f(x)|<\frac \epsilon 2$. As we know, $f(x)$ is uniformly continuous on the compact interval $[-M-1,M+1]$, so for the given $\epsilon$ there exists $\delta>0$ with $\delta<1$ such that for any $x_1, x_2 \in [-M-1,M+1]$ with $|x_1-x_2|<\delta$, we have $|f(x_1)-f(x_2)|<\epsilon$. If $|x_1-x_2|<\delta<1$, there are the following cases:

1. $x_1, x_2 \in [-M-1,M+1]$. It is obvious that $|f(x_1)-f(x_2)|< \epsilon$.
2. $x_1, x_2 \notin [-M-1,M+1]$. Then $|x_1|, |x_2|\ge M$, and hence $|f(x_1)-f(x_2)|\le |f(x_1)|+|f(x_2)| \le \frac \epsilon 2+\frac \epsilon 2=\epsilon$.
3. $x_1\in [-M-1,M+1]$ and $x_2 \notin [-M-1,M+1]$. Since $\delta<1$, again $|x_1|, |x_2|\ge M$, and hence $|f(x_1)-f(x_2)|\le |f(x_1)|+|f(x_2)| \le \frac \epsilon 2+\frac \epsilon 2=\epsilon$.
4. $x_2\in [-M-1,M+1]$ and $x_1 \notin [-M-1,M+1]$. Symmetric to case 3.

Hope it helps!
There always exists a finite, increasing chain of R-submodules of M with successive quotients isomorphic to R/P. Can we describe the P's?
The chain in the theorem is an existence theorem and there is no uniqueness. As an example, let $R=M=\mathbb{Z}$. Then $\mathrm{Ass}(M)=\{0\}$. But, we can write the chain $p\mathbb{Z}\subset \mathbb{Z}$ for any prime $p$ and we have $\mathcal{P}=\{0,p\mathbb{Z}\}$. In particular, the primes occurring in $\mathcal{P}$ can be fairly arbitrary.
How can we define closed sets or open sets for a set of matrices?
Hint: The eigenvalues of a matrix $A \in M_2(\Bbb R)$ are the roots of the polynomial $$\det (\lambda I_2 - A) = \lambda^2 - (\operatorname{tr} A) \lambda + \det A,$$ and these roots will be nonreal iff the discriminant $$\Delta(A) := (\operatorname{tr} A)^2 - 4 \det A$$ of $A$ is negative.
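To connect the hint with openness and closedness (a completion of my own): $\Delta$ is a polynomial in the entries of $A$, hence continuous as a map $M_2(\Bbb R) \cong \Bbb R^4 \to \Bbb R$. Therefore the set of matrices with nonreal eigenvalues, $$\{A \in M_2(\Bbb R) : \Delta(A) < 0\} = \Delta^{-1}\bigl((-\infty, 0)\bigr),$$ is open, being the preimage of an open set, and its complement $\{\Delta \ge 0\}$ is closed.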
Solve the differential equation $(1-y(x+y)\tan(xy))dx+(1-x(x+y)\tan(xy))dy=0$
We can use the substitutions $x + y = u$ and $xy = v$, then we have $$dx + dy = du$$ $$xdy + ydx = dv$$ Then the equation becomes $$du = u \tan vdv$$ $$\implies \frac{du}{u} = \tan v dv$$ $$\implies \ln u = - \ln \cos v + \text{constant}$$ $$\implies u \cos v = \text{constant} $$ $$\implies (x + y) \cos (xy) = \text{constant}$$
Is topology invariant under conformal transformation?
If $(M,g)$ and $(N,h)$ are conformally equivalent, we have a diffeomorphism $f:M\rightarrow N$ such that $f^*h$ is conformally equivalent to $g$. Since $f$ is a diffeomorphism, it is in particular a homeomorphism, so $M$ and $N$ have the same topology (their open sets correspond under $f$).
Let $X=$ amount of insurance reimbursement for dental expense. The pdf of $X$ is $f(x) = ce^{-.004x}$ for $x>0$. Find $c$.
The density function of an exponential has the shape $\lambda e^{-\lambda x}$ if $x\ge 0$, and $0$ elsewhere. So $c=0.004$. Alternately, find $\int_0^\infty ce^{-0.004 x}\,dx$. We get $\dfrac{c}{0.004}$. To make the integral $1$, we need $c=0.004$. For the $80$-th percentile, argue thus. The probability that $X\gt x$ is (by integration or otherwise) equal to $e^{-0.004x}$. For the $80$-th percentile we want $P(X\le x)=0.80$, that is, $$e^{-0.004x}=0.20.$$ Take the logarithm of both sides. We get $$-0.004 x=\ln(0.20).$$ Now with a little help from the calculator we can find $x=\dfrac{\ln 5}{0.004}\approx 402.36$.
If $0\leq a \leq 3$, $0\leq b \leq 3$, and the equation $x^2 +4+3 \cos(ax+b)=2x$ has at least one solution, then find the value of $a+b$
If we complete the square we get $$x^2 -2x +4 = (x-1)^2 +3 \geq 3$$ Further we note that $-3\cos(ax+b) \leq 3$. So it must be that we have $x^2-2x+4 = 3 = -3\cos(ax+b)$. Further, from the completing the square formula, we see that in fact we need $x=1$. So it reduces down to the problem of solving $-3\cos(a+b) = 3$ or $\cos(a+b) = -1$ Hence $a+b = (2n+1)\pi$ for any integer $n$. EDIT: Since $0 \leq a \leq 3$, $0 \leq b \leq 3$, we get $0 \leq a+b \leq 6$. Hence the only viable solution would be $\pi$.
How many methods are there for solving repeated roots of differential equations?
The Exponential Shift Theorem. See e.g. http://www.math.ubc.ca/~israel/m215/coco/coco.html
Use this parametrization to compute the following integral.
Since $z\in \Gamma$ if and only if $|z - (1 - i)| = 5$, we can parametrize $\Gamma$ by setting $z = 1 - i + 5e^{it}$, $0 \le t \le 2\pi$. Then $dz = 5ie^{it}\, dt$ and so $$\int_{\Gamma} \frac{dz}{z - 1 + i} = \int_0^{2\pi} \frac{5ie^{it}}{5e^{it}}\, dt = i\int_0^{2\pi} dt = 2\pi i.$$
Why two formulas for a in a linear function are equal
They are not equal. Your two equations feature different variables, so clearly they cannot be the same. However, they are somewhat related, as both are closely linked to linear functions. Short answer: The first formula is used to calculate $a$ if you do know $b$ and you know one point on the graph of the function. The second formula is used to calculate $a$ if you don't know $b$ and you know two points on the graph of the function. Longer answer: Both equations can be seen to represent the same thing, which is a linear function of $x$. Remember: by definition, a linear function is a function of $x$ equal to $$f(x)=ax+b$$ From this equation we can see that, by moving $b$ to the left side of the equation and dividing by $x$, the equation is equivalent to $$a=\frac{f(x)-b}{x}$$ so long as $x\neq 0$. Therefore, the above expression is true for every point $(x,f(x))$ on the graph of the linear function. I can use this formula to calculate $a$ if I already know the value of $b$ and one single point on the line. On the other hand, let's take two points, $(x_1,y_1)$ and $(x_2,y_2)$. If the two points are on the same line, then $y_1=ax_1+b$ and $y_2=ax_2+b$. Subtracting these two equations, and after some division, we get the equation $$a=\frac{y_2-y_1}{x_2-x_1}$$ which is true for every pair of points on the graph of the linear function with $x_1\neq x_2$. I can use this formula to calculate $a$ if I don't know what $b$ is, so long as I know two points on the graph. Edit: There is also a "third", special option for calculating $a$. Let's say you happen to know two points $(x_1, y_1)$ and $(x_2, y_2)$, and the $x$ value for one of them is $0$, that is, $x_1=0$. There are now two ways of calculating $a$. Using the fact that $f(x)=ax+b$, you can plug in $x_1=0$ and get $y_1=a\cdot 0 + b$, which means that $y_1=b$. You now know the value of $b$ and one point on the line, namely $(x_2,y_2)$, and you can use the first formula to calculate $a$: $$a=\frac{f(x_2)-b}{x_2}\\ a=\frac{y_2-y_1}{x_2}\\ a=\frac{y_2-y_1}{x_2-0}\\ a=\frac{y_2-y_1}{x_2-x_1}$$ You can see that the last formula in this case is exactly the same as the second formula for calculating $a$ in your question.
Find $x$ for $b\equiv ax \pmod{p}$ for big numbers $a$, $b$ and $p$
If $a$ is a multiple of $p$ but $b$ is not, there is no solution, of course. So I'm assuming $a,b$ are not multiples of $p$. As $p$ is prime, this means $\gcd(a,p) = 1$ and $\gcd(b,p) = 1$. By Bézout's identity there are integers $m,n$ so that $am + pn = 1$. Solve for those using Euclid's algorithm. (Let $p = k_1a + r_1$, then $a= k_2r_1 + r_2$, then $r_1= k_3r_2 + r_3$, etc.) Then $x = bm$ works: $a(bm) = (am)b= (1-pn)b \equiv b \pmod p$. That's it. Example: if $a=54$, $b=36$, $p= 2017$: $$2017 = 37\cdot 54 + 19$$ $$54 = 2\cdot 19 + 16$$ $$19 = 16 + 3$$ $$16 = 3\cdot 5 + 1$$ Back-substituting: $$1 = 16- 3\cdot 5 = 19 - 3 - 3\cdot 5= 19 - 6\cdot 3 = 19 - 6(19 - 16) = -5\cdot 19 + 6\cdot 16$$ $$= -5\cdot 19 + 6(54-2\cdot 19) =-17\cdot 19 + 6\cdot 54 = -17(2017 - 37\cdot 54) + 6\cdot 54 = -17\cdot 2017 + 635\cdot 54$$ So $36 \equiv 36\cdot 1 \equiv 36\cdot 635\cdot 54 = 22860\cdot 54 \equiv 673\cdot 54 \pmod{2017}$.
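Here is the same procedure as a short Python sketch (the function names are mine):

```python
def ext_gcd(a, b):
    """Extended Euclid: return (g, m, n) with g = gcd(a, b) and a*m + b*n == g."""
    if b == 0:
        return a, 1, 0
    g, m, n = ext_gcd(b, a % b)
    return g, n, m - (a // b) * n

def solve(a, b, p):
    """Solve a*x == b (mod p) for a prime p not dividing a."""
    g, m, _ = ext_gcd(a, p)  # here g == 1, so a*m == 1 (mod p)
    return (b * m) % p

print(solve(54, 36, 2017))  # 673, matching the worked example
```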
Antiderivative of $\frac{1}{\ln(x)}$?
You can prove that this antiderivative is non-elementary using differential algebra (in particular the Rothstein-Trager theorem); the antiderivative itself is the special function known as the logarithmic integral, $\operatorname{li}(x)$. The fact is that, contrary to what you see in most first-year calculus courses, "most" elementary functions do not have elementary antiderivatives.
Exact sequence of ideals: intersection, product, direct sum, sum
I was shown that this exact sequence is incorrect. Note that the image of $g: I \cap J \rightarrow I \oplus J$ is just $\{ 0 \}$, while the kernel of $h: I \oplus J \rightarrow I + J$ contains the pairs $(i, -i)$. So $\operatorname{Im}(g) \neq \operatorname{Ker}(h)$, and hence the sequence is not exact.
Approximating a metric tensor
An analogy can be made with $(1+x)^{-1}$, which has expansion $$\frac{1}{1+x} = 1 - x + x^2 - x^3 + \mathcal{O}(x^4).$$ This suggests that if $g^{\mu\nu}=\eta^{\mu\nu} + h^{\mu\nu}$, then the inverse metric tensor should be $$g_{\mu\nu} = \eta_{\mu\nu}-h_{\mu\nu} + \eta^{\sigma\rho}h_{\mu\sigma}h_{\nu\rho} + \mathcal{O}(h^3).$$ This is the quick and dirty way to remember this result. The formal way to see this is to define the mixed-index object $$g^{\mu}{}_\nu \equiv \eta_{\nu\sigma}g^{\sigma\mu} = \delta^{\mu}_\nu + h^\mu{}_\nu.$$ This is a $(1,1)$ tensor, which can be regarded as a linear operator $g:V\rightarrow V$. What we are interested in is a series expansion for $g^{-1}$, which is given by the Neumann series: $$g^{-1} = I - h + h^2 - h^3 + \mathcal{O}(h^4),$$ where I've dropped index notation since the contractions are obvious. But of course $g^{-1}$ is just $$\left(g^{-1}\right)^\mu{}_\nu = \eta^{\mu\sigma}g_{\sigma\nu}.$$ So we have $$g_{\mu\nu} = \eta_{\mu\sigma}\left(g^{-1}\right)^{\sigma}{}_\nu = \eta_{\mu\nu} - h_{\mu\nu} + \eta_{\mu\sigma} h^\sigma{}_\rho h^\rho{}_\nu + \mathcal{O}(h^3),$$ exactly as claimed. Note that your expression for $g^{\mu\nu}g_{\nu\sigma}$ is quadratic in $h$, since $$\eta^{\mu\nu}h_{\nu\sigma} = h^\mu{}_\sigma = h^{\mu\nu}\eta_{\nu\sigma},$$ so the two linear terms cancel.
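A quick numerical illustration of the Neumann-series step (pure matrix algebra, ignoring the index bookkeeping; a sketch of mine, not from the original):

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-3
H = eps * rng.standard_normal((4, 4))    # a small perturbation

exact = np.linalg.inv(np.eye(4) + H)
second_order = np.eye(4) - H + H @ H     # Neumann series through O(H^2)

print(np.max(np.abs(exact - second_order)))  # on the order of eps**3, i.e. O(H^3)
```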
Proof involving center of a group and conjugacy classes
Tobias's comment already solves the question: what are the possibilities if $$\sum_{x\notin Z(G)}|[x]|=8\;,\,\,|[x]|\mid 9\;\;\text{and}\;\;|[x]|>1\;?$$ Each such class size would have to be $3$ or $9$, and no sum of $3$'s and $9$'s equals $8$. So this is impossible, hence it can't be that $\;|Z(G)|=1\;$, and thus you're done.
What would be the complexity of computing $3^{n^n}$?
Let $M(n)$ denote the complexity of multiplying two integers with $n$ digits (say, in base two). Thanks to Schönhage and Strassen we know that $M(n) = O(n \log n \log\log n)$. Thanks to Fürer, we even know that it's a bit lower. Let $a$ and $b$ be integers. Then $a^b$ can be computed in time $O(M(b \log_2 a))$ using fast exponentiation. This is not bad: since $a^b$ has bit-size about $b\log_2 a$, the cost of the computation is almost the size of the output. So the complexity of computing $a^{b^c}$ is the cost of computing the exponent $b^c$ plus the cost of the final exponentiation. When $a$ is at least $2$, the second operation dominates the first, and the total cost is $O(M(b^c \log_2 a))$. We could make the constant in the big-oh explicit, and we would see that it is not much larger than $2$ or $3$.
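For reference, a minimal square-and-multiply sketch in Python (Python's built-in `pow` already does this for you):

```python
def power(a, b):
    """Compute a**b with O(log b) big-integer multiplications."""
    result = 1
    while b > 0:
        if b & 1:       # use the current low bit of the exponent
            result *= a
        a *= a          # square the base
        b >>= 1
    return result

print(power(3, power(4, 4)) == 3 ** (4 ** 4))  # True
```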
Difficulties solving this integral: $ \int_0^1 \frac{\ln(x+1)} {x^2 + 1} \, \mathrm{d}x $ by differentiation under the integral sign
This would work: $$I(a) = \int_0^1 \frac{\ln(ax+1)}{x^2+1} dx \\ I’(a) =\int_0^1 \frac{x}{(x^2+1)(ax+1)} dx \\ \overset{\text{partial fractions}}= \\ \frac{-2\ln |ax+1| +\ln(x^2+1)+2a\tan^{-1} x}{2(a^2+1)} \bigg |_0^1 \\ =-\frac{\ln(a+1)}{a^2+1}+\frac{\ln 2}{2a^2+2}+\frac{\pi}{4} \frac{a}{a^2+1} $$ Integrating from $0$ to $1$, $$I(1)-I(0) = -I(1) +\int_0^1 \left(\frac{\ln 2}{2a^2+2}+\frac{\pi}{4} \frac{a}{a^2+1}\right) da$$ Hopefully you can finish.
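For completeness, the finish: note that $\int_0^1 \frac{\ln(a+1)}{a^2+1}\,da = I(1)$ and $I(0)=0$, and the remaining integrals are elementary: $$2I(1)=\int_0^1\left(\frac{\ln 2}{2(a^2+1)}+\frac{\pi}{4}\,\frac{a}{a^2+1}\right)da=\frac{\ln 2}{2}\cdot\frac{\pi}{4}+\frac{\pi}{4}\cdot\frac{\ln 2}{2}=\frac{\pi\ln 2}{4},$$ so $$I(1)=\int_0^1\frac{\ln(x+1)}{x^2+1}\,dx=\frac{\pi\ln 2}{8}.$$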
Finite simple group has order a multiple of 3?
First of all: we must assume that the group is non-abelian; otherwise the cyclic groups of prime order (except $\mathbb Z_3$) would already be counterexamples. The Suzuki groups have orders not divisible by $3$; however, $5$ is always a prime factor of the order of those groups. All the other finite simple non-abelian groups have order divisible by $3$.