Proving that G contains a cycle with at least $k+1$ edges
Sorry, I see this is an old question, so my answer is probably overdue... I think the answer you are proposing is in essence correct; you must just make sure you also cover the case where the longest path contains more than $k$ edges. The key is to observe that for a longest path $P=v_1v_2\ldots v_l$, the vertex $v_l$ must be adjacent to $k$ vertices all contained in $P$, since $G$ is $k$-regular and $P$ is a longest path. So if the path is longer than $k$ edges, pick the subpath spanned by $v_l$ and the vertices adjacent to it; it contains at least $k+1$ vertices. This subpath has as end-vertices $v_l$ and a vertex adjacent to $v_l$ (otherwise that vertex would not be part of the subpath). Connecting this end-vertex with $v_l$ completes a cycle with at least $k+1$ edges.
Why should we prove obvious things?
Because sometimes, things that should be "obvious" turn out to be completely false. Here are some examples:

- Switching doors in the Monty Hall problem "obviously" should not affect the outcome.
- Since Gabriel's horn has finite volume, it "obviously" has finite surface area.
- "Obviously" we cannot decompose a sphere into a finite number of disjoint subsets and reconstruct them into two copies of the original sphere.
- Since the Weierstrass function is everywhere continuous, it "obviously" must have at least a few differentiable points.

Of course, mathematics has shown that switching doors is to the player's advantage, that Gabriel's horn actually has infinite surface area, that you can indeed get two copies of the original sphere (see the Banach-Tarski paradox), and that the Weierstrass function is everywhere continuous but nowhere differentiable. The point being, there are many things out there which are "obvious" but actually turn out to be entirely counterintuitive and opposite to what we would otherwise expect. This is the point of rigor: to double-check and make sure our intuition is indeed correct, because it isn't always.
How many four-digit numbers are there such that their digits are strictly ascending?
Let's choose 4 distinct numbers from the set of digits $\{1,2,3,4,5,6,7,8,9\}.$ (Note that $0$ cannot be included in this set.) For each choice, there is a unique way by which we can relabel those 4 numbers so that they are in ascending order. (For example, if we chose the numbers $1, 2, 3,\text{ and } 8,$ then the only way we can have them in ascending order would be $1238$ and nothing else.) So we'd have $\binom{9}{4}$ 4-digit numbers whose digits are strictly increasing. Using this logic, it wouldn't be too hard to extend it to any n-digit number as long as $n<10.$
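For anyone who wants a sanity check, a quick brute-force count in Python agrees:

```python
from math import comb

# Brute-force count of 4-digit numbers with strictly increasing digits;
# each one corresponds to a 4-element subset of {1,...,9}.
count = sum(1 for n in range(1000, 10000)
            if all(a < b for a, b in zip(str(n), str(n)[1:])))
assert count == comb(9, 4) == 126
print(count)  # 126
```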
prove that if A is a subset of B, B is a subset of C, and C is a subset of A, then A=B and B=C
If you are given that $A\subseteq B\subseteq C\subseteq A$, then yes you can. To be more specific, given any $x\in B$, $x$ is also in $C$, and also in $A$, so $B\subseteq A$. Use a similar argument to show that $B=C$.
Prove all perfect graphs are normal.
It can be shown that any graph such that every vertex is in a largest sized clique is normal by constructing the sets as you mentioned. Then, you can substitute edges for vertices in the same fashion as the substitution lemma (adjusting the sets accordingly) until you obtain any perfect graph whose largest clique is just as large as the largest clique you started with. Hence, you can construct the sets by starting with a larger graph and substituting edges for vertices until you get any arbitrary perfect graph.
Are all subspaces of equal dimension (of a vector space) the same?
No. Consider the two subspaces of $\Bbb R^2$ generated respectively by $(1,0)$ and $(0,1)$. The first is the set $\{(a,0) \mid a \in \Bbb R\}$ and the other $\{(0,b) \mid b \in \Bbb R\}$, it's clear that they have the same dimension but are not the same. You will see later that they are isomorphic, indeed, any two subspaces of the same dimension (over the same field) are isomorphic.
Textbooks on permutation groups?
Wielandt's book is quite old now, but still good. If you are looking for more recent books, at the beginning postgraduate level, then there is "Permutation groups" by Peter J. Cameron, and (believe it or not) "Permutation groups" by J.D. Dixon and B. Mortimer.
$f$ is uniformly continuous on $[a,b]$ and $g$ is uniformly continuous on $f([a,b])$. Then $g\circ f$ is uniformly continuous on $[a,b]$
Consider that you can make $|f(x)-f(y)|$ as small as you wish by choosing $|x-y|$ small enough. Consider that you can make $|g(f(x))-g(f(y))|$ as small as you wish by choosing $|f(x)-f(y)|$ small enough. Now, given $\epsilon$, how can you pick a $\delta$ small enough to ensure that $$|x-y|< \delta \implies |g(f(x))-g(f(y))| <\epsilon \quad \text{for any $x,y \in [a,b]$}?$$ Let $\epsilon$ be given. Since $g$ is uniformly continuous on $f([a,b])$, I can find some $\delta_1$ such that $$|f(x)-f(y)| < \delta_1 \implies |g(f(x))-g(f(y))| <\epsilon.$$ If only I could find some $\delta_2$ such that $|x-y|<\delta_2 \implies |f(x)-f(y)| < \delta_1$... Now read the first sentence of this post.
Expected number of permutations required to sort a list of numbers
As it happens, probability of sorted array with duplicate numbers was asked just a few hours ago. If number $x_j$ is present $n_j$ times, with $1\le j\le k$ and $\sum_jn_j=n$, the probability for a uniformly randomly chosen permutation to order the list is $$ \frac{\prod_jn_j!}{n!}\;, $$ so the expected number of permutations required is $$ \frac{n!}{\prod_jn_j!}=\binom n{n_1,\ldots,n_k}\;, $$ the number of distinguishable arrangements of the numbers.
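A small Python sketch of this computation (the helper name is mine):

```python
from math import factorial
from collections import Counter

def expected_permutations(lst):
    # n! / prod(n_j!): reciprocal of the probability that a uniformly
    # random permutation sorts the list, hence the expected number of
    # independent trials (geometric distribution).
    denom = 1
    for c in Counter(lst).values():
        denom *= factorial(c)
    return factorial(len(lst)) // denom

print(expected_permutations([3, 1, 2, 1]))  # 4!/(2!1!1!) = 12
```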
Fermat's factorization method: why are "a" or "b" always divisible by 3 if "c" and "d" are prime?
You have to assume that $N \bmod 3 \ne 0.$ Otherwise there are counter-examples: $N=15=3\times 5 \rightarrow a=4, b=1$ or $N=21=3\times 7 \rightarrow a=5, b=2$ etc. Let $N \bmod 3 \ne 0, \; N = a^2-b^2$. Assume that neither $a$ nor $b$ is divisible by $3.$ Then $a\equiv\pm 1 \bmod 3$ and $b\equiv\pm 1 \bmod 3$, so $a^2\equiv1 \bmod 3,$ $\;b^2 \equiv 1 \bmod 3$ and $N = a^2 - b^2 \equiv 0 \bmod 3$. Contradiction!
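A quick numerical check of the claim (a sketch; it assumes sympy is available for primerange):

```python
from sympy import primerange

# For N = c*d with primes c, d > 3 (so 3 does not divide N), Fermat's
# method gives N = a^2 - b^2 with a = (c+d)/2, b = (d-c)/2.
for c in primerange(5, 100):
    for d in primerange(c, 100):
        a, b = (c + d) // 2, (d - c) // 2
        assert a * a - b * b == c * d
        assert a % 3 == 0 or b % 3 == 0, (c, d)
print("3 divides a or b for every tested pair")
```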
Baldi - Stochastic Calculus - Show a stopping time is a.s. finite
As @KaviRamaMurthy already pointed out, your reasoning is not correct since the first "=" in your calculation fails to hold. Here is one possible approach: It holds that $$\lim_{t \to \infty} \frac{B_t}{t} = 0 \quad \text{a.s.};$$ there are several ways to show this convergence, e.g. using (a variant of) the strong law of large numbers or the law of iterated logarithm. If $\mu>0$, then the process $X_t := B_t+\mu t$ satisfies $$\lim_{t \to \infty} \frac{X_t}{t}= \mu>0 \quad \text{a.s.},$$ in particular, $$\lim_{t \to \infty} X_t = \infty \quad \text{a.s.}$$ Since $(X_t)_{t \geq 0}$ has continuous sample paths (with probability $1$), this means that the exit time from any interval $(-\infty,b)$ is finite a.s. for each $b>0$. Hence, $\mathbb{P}(\tau<\infty)=1$. Another approach: The exit time $\sigma:=\inf\{t>0;B_t \geq b\}$ is finite with probability $1$. Since $X_t \geq B_t$, it follows that $\inf\{t>0;X_t \geq b\}$ is finite almost surely, and so is $\tau$.
Proving that a set has a fixed point
The sizes of the orbits divide the order of the group (this comes from the Orbit-Stabilizer Lemma). So your orbits must be of size 1, 2, 4, 8, or 16. The orbit sizes must add up to $|X|$ since the orbits partition the set. Try to add up to 5 using 1, 2, 4, 8, 16. You get only these possibilities: 1+1+1+1+1, 1+1+1+2, 1+2+2, 1+4. In every case at least one of the orbits must have size 1, which means it's a fixed point. See A question about the fixed point and group action.
Hyper-elliptic curves in positive characteristic
Yes, any degree between $0$ and $2g+2$ can happen. As you said, the case $\deg Q=g+1$ corresponds to $X\to \mathbb P^1_k$ being unramified above $t=\infty$. Let $y^2+Q(t)y=P(t)$ be an equation of $X$ with $\deg Q=g+1$ and $\deg P\le 2g+2$. Then the equation of $X$ above $t\ne 0$ is given by $$ (yt^{-g-1})^2+ (Q(t)/t^{g+1})(yt^{-g-1})=P(t)/t^{2g+2}\in k[1/t].$$ Or equivalently $$z^2+Q_1(1/t)z=P_1(1/t)$$ where $Q_1(s), P_1(s)\in k[s]$, $\deg Q_1(s)\le g+1$ and $\deg P_1(s)\le 2g+2$. As examples, for any $1\le d\le 2g+1$, the equation $$ y^2+t^{g+1}y=1+ct+t^{d} $$ (with $c=0$ if $d=1$ and $c=1$ otherwise, to make the equation non-singular) defines a hyperelliptic curve of genus $g$ in characteristic $2$. The above change of variables leads to an equation $$z^2+z=s^{2g+2}+cs^{2g+1}+s^{2g+2-d}$$ and we are in the situation of $\deg Q_1(s)=0$!
Nilpotent operator / Orthogonal projection
If you grant Jordan canonical form, then in a suitable basis $A$ looks like $$\begin{bmatrix}0&1&0&\dots&0 \\ 0&0&1&\dots&0 \\ & & \ddots&\ddots \\ 0&0&\dots&0&1 \\ 0&0&\dots &0&0 \end{bmatrix}\,.$$ Then $$A^\top A = \begin{bmatrix}0 & \\ & 1 & \\ & & \ddots \\ & & & 1\end{bmatrix}\,.$$ So 'twould appear you're right. But .... Note that once you bring the transpose into the game, things are no longer basis-independent. The basis that brings $A$ into this nice form is not likely to be orthonormal, and so, if you're bringing the dot product into the game, it really is not meaningful to describe this in terms of an orthogonal projection. The characteristic polynomial of $A$ is basis-independent, but the characteristic polynomial of $A^\top A$ is not.
Solving $\cos(t)y' + y\sin(t) = \cos^4(t)$?
Hint: $e^{\ln(x)}=x$, so you can write $$e^{\ln(\sec t)} \cos^3 t = \sec(t) \cos^3 t = \cos^2 t.$$
Probability in dice throws with external property.
He says he got 6. There are two possibilities: he did and he is telling the truth; or he didn't and he is lying. The prob of the first is $\frac{1}{6}\ \frac{1}{4}=\frac{1}{24}$. The prob of the second is $\frac{5}{6}\ \frac{3}{4}=\frac{15}{24}$. We know that one of these two is the case, so given that, the prob. that it is the first is $$\frac{\frac{1}{24}}{\frac{1}{24}+\frac{15}{24}}=\frac{1}{16}$$
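If you like, the same computation with exact fractions in Python (the lying model, where a liar who didn't roll a 6 still claims a 6, is the one used above):

```python
from fractions import Fraction

p_roll6_truth = Fraction(1, 6) * Fraction(1, 4)   # rolled 6 and tells the truth
p_no6_lie     = Fraction(5, 6) * Fraction(3, 4)   # didn't roll 6 and lies
# Condition on the event "he says 6":
print(p_roll6_truth / (p_roll6_truth + p_no6_lie))  # 1/16
```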
The greatest common divisor of $\sum\limits_{k=0}^{5n-2}2^k$
I imagine what your problem is asking is this: What is the greatest common divisor of all the numbers of the form $$\sum_{k=0}^{5n-2} 2^k$$ for positive integer $n$. This pretty much means: Evaluate $$\gcd(\sum_{k=0}^3 2^k,\sum_{k=0}^{8} 2^k,\sum_{k=0}^{13} 2^k,...)$$
3 Variables, One Equation
Instead of the factoring you do, it might help instead to group terms and rewrite to be in the form $(x-7/2)^2 + (y-7/2)^2 + (z-7/2)^2 = c$, which is now asking for integer points on some sphere.
Calculating limit including greatest integer function
$ \begin{align*} L_n &= (1.5^n + 2^n)^{\frac{1}{n}} = 2 (1 + 0.75^n)^{\frac{1}{n}} \\ \Rightarrow \lim_{n \rightarrow \infty} L_n &= 2 \end{align*} $
What is the mathematics of UML?
There is the semantics of programming languages, which explains the "meaning" of those constructs. Type theory models types and the rules of type processing in a syntactic manner. Keep in mind that these are broad topics.
Fitting distributions on censored data
This is not an answer but some thoughts (too long for the comments). Both of the links you provided assume that the data comes from a continuous distribution, i.e. they try to fit a continuous distribution. For the censored data, I guess you are saying that $F(x)$ is used in the likelihood product expression (to maximise) instead of the product of 'discrete terms', which we don't want to use, for the data points below the chosen threshold (similarly for above some threshold). Would this be easier to accept if we were fitting a discrete distribution? I guess this is usually taught as a method to follow, which is pretty clear in the discrete case (when no censoring occurs), and may need some extra (mathematical) justification in the continuous case. With censoring, it seems the likelihood expression will depend on the two 'threshold' values and the ones in between, and will not contain individual points that are outside this range. I guess your question would be how this affects the estimate(s) (compared to the non-censoring case).
$f(x)=\sum_0^\infty\frac{\sin(2n +1) x}{2n+1}$ is the Fourier expansion of $(-1)^m\frac{\pi}{4}$ for $x\in (m\pi,(m+1)\pi),m\in \mathbb Z$.
The summation can be done explicitly using the complex exponential form for sine. Here are some of the steps: $$f(x)=\sum_0^\infty\frac{\sin(2n +1) x}{2n+1} = \frac{1}{2i}\sum_0^\infty\frac{e^{i(2n+1)x} - e^{-i(2n+1)x}}{2n+1} $$ Add and subtract $\sum_0^\infty\frac{e^{i(2n+2)x}}{2n+2}$ to the first exponential and $\sum_0^\infty\frac{e^{-i(2n+2)x}}{2n+2}$ to the second exponential to get $$f(x) = \frac{1}{2i}\sum_1^\infty \left[\frac{e^{inx}}{n} - \frac{1}{2}\frac{e^{2inx}}{n} - \left(\frac{e^{-inx}}{n} -\frac{1}{2}\frac{e^{-2inx}}{n} \right) \right] $$ Now, use $$-\ln(1-e^{ix}) = \sum_1^\infty{\frac{e^{inx}}{n}} $$ to get \begin{align} f(x) &= \frac{1}{2i} \left[{-\ln(1-e^{ix})} - \frac{1}{2} \left(-\ln(1-e^{2ix}) \right) - \left({-\ln(1-e^{-ix})} - \frac{1}{2}\left(-\ln(1-e^{-2ix}) \right) \right) \right] \\ &= \frac{1}{2i} \left[{-\ln(1-e^{ix})} + \frac{1}{2}\ln(1-e^{2ix}) + \ln(1-e^{-ix}) - \frac{1}{2} \ln(1-e^{-2ix}) \right] \end{align} Use $\ln(1-e^{-ix}) = \ln(-e^{-ix}(1-e^{ix})) = \ln(-1) + \ln(e^{-ix}) + \ln(1-e^{ix})$ to expand the $e^{-ix}$ and $e^{-2ix}$ terms. Put this in the expression and after some algebra the result follows.
Does Runge Kutta need future state of system?
It seems to me that you have already written the answer to your own question. The three lines you wrote define a Runge-Kutta method (I believe that in this particular case, it is also called the Heun method). Knowing $X$ (which is short for $X(t)$), you can easily get $K_1$. Knowing $K_1$, you can easily get $K_2$. Knowing $K_1$ and $K_2$, you can get $X(t+h)$, and loop the procedure. The cases where such a calculation is possible are called "explicit methods": you "only" have to evaluate $F$ at different $(t,X)$ points. There exist some methods, however, where the $K_i$ are given as functions of all the $K_j$s, and a direct computation is not possible anymore. You have to solve for all the $K_i$s at once, and this is why those methods are called "implicit". If you can put your hands on it, I warmly recommend the book Hairer, E., Lubich, C., & Wanner, G., Geometric Numerical Integration (Springer), which is the very best I have read about numerical methods for ordinary differential equations. Hope this helps!
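Here is a minimal Python sketch of the explicit loop just described, using Heun's method on a test problem (the test problem and step size are my own choices):

```python
import numpy as np

def heun_step(F, t, X, h):
    # Explicit two-stage Runge-Kutta (Heun): each stage uses only
    # already-known values, so no equation solving is needed.
    K1 = F(t, X)
    K2 = F(t + h, X + h * K1)
    return X + (h / 2.0) * (K1 + K2)

# Test problem X' = -X, X(0) = 1, exact solution exp(-t).
F = lambda t, X: -X
t, X, h = 0.0, np.array([1.0]), 0.01
for _ in range(100):          # integrate up to t = 1
    X = heun_step(F, t, X, h)
    t += h
print(X[0], np.exp(-1.0))     # ~0.36788 vs 0.36788
```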
Finding eigenvalues of a determinant
$$\dfrac{3}{4} + \dfrac{2}{3} = \dfrac{3\times 3 + 2 \times 4}{3 \times 4} = \dfrac{9 + 8}{12} = \boxed{\dfrac{17}{12}}\\ \dfrac{3}{4}\times\dfrac{2}{3} - \dfrac{1}{12} = \dfrac{6}{12} - \dfrac{1}{12} = \boxed{\dfrac{5}{12}}$$ $\begin{align} \left( \dfrac{3}{4} - \lambda \right) \left( \dfrac{2}{3} - \lambda \right) - \dfrac{1}{12} &= \dfrac{3}{4} \times \dfrac{2}{3} - \lambda \left( \dfrac{3}{4} + \dfrac{2}{3}\right) + \lambda^2 - \dfrac{1}{12}\\ & = \dfrac{5}{12} - \dfrac{17}{12}\lambda + \lambda^2 \end{align}$ Setting this characteristic polynomial to zero, i.e. $12\lambda^2 - 17\lambda + 5 = 0$, gives $\lambda = \dfrac{17 \pm \sqrt{289-240}}{24} = \dfrac{17 \pm 7}{24}$, so the eigenvalues are $\lambda = 1$ and $\lambda = \dfrac{5}{12}$.
$\mathbb{Z}_p$ not isomorphic to $\mathbb{F}_p[[t]]$
$\renewcommand{\power}{[\![t]\!]}$ Assume that $\phi : \Bbb F_p\power \to \Bbb Z_p$ is a ring morphism (it might not even be an isomorphism). Then $$p = p \cdot 1_{\Bbb Z_p} = p \cdot \phi(1_{\Bbb F_p}) = \phi(p \cdot 1_{\Bbb F_p}) = \phi(0) = 0,$$ which is not possible. Indeed, the ring $\Bbb Z_p$ has characteristic $0$ (even if it is an inverse limit of rings of positive characteristic [which actually grows to infinity, that's why the inverse limit has characteristic $0$, somehow]). Said differently, $\Bbb Z_p$ contains a copy of $\Bbb Z$, via the injective ring morphism $$ \begin{array}{lrcl} & \Bbb Z & \longrightarrow & \Bbb Z_p \hookrightarrow \prod\limits_{m \geq 0} \Bbb Z /p^m \Bbb Z \\ & n & \longmapsto & ([n]_{p^m})_{m \geq 0}. \end{array} $$ Some remarks: – Actually, $\Bbb F_p\power$ is not even isomorphic to $\Bbb Z_p$ as an additive group, since $\Bbb Z_p$ is torsion-free. – However, we have a ring isomorphism $\Bbb Z_p \cong \Bbb Z\power / (t - p)$.
Divergence of subsequences
$$ \frac{k}{f(kn)}\geqslant\frac1{f(kn+1)}+\frac1{f(kn+2)}+\cdots+\frac1{f(kn+k)}. $$
How to find the integral closure of $\mathbb{Z}_{(3)}$ in the field $\mathbb{Q}(\sqrt{-5})$?
The integral closure of $\Bbb Z$ in $K={\Bbb Q}(\sqrt{-5})$ is the ring of integers ${\cal O}_K={\Bbb Z}[\sqrt{-5}]$, the latter equality because $-5\equiv 3\bmod 4$. As you observed, ${\Bbb Z}_{(3)}$ is the localization of $\Bbb Z$ at the ideal $(3)$. Then, it follows from general properties (see Atiyah-Macdonald, Ch 5) that the integral closure $A$ of ${\Bbb Z}_{(3)}$ in $K$ is the localization at the ideal $(3)$ in ${\cal O}_K$. Since $3$ splits in $K$, $A$ has two maximal ideals, namely ${\frak m}_{\pm}=(1\pm\sqrt{-5})A$.
Show that $\mathfrak{so}(4)\cong \mathfrak{so}(3)\oplus \mathfrak{so}(3)$
Lie algebra structure only uniquely determines the connected component of the identity of a Lie group. If one shows that the homomorphism restricts to an isomorphism on the connected component of the identity, then this would do. But as suggested, it is perhaps easier to do this all on the level of Lie algebras.
What's the intuition for Bayes Coherency
The Bayesian update rule is: $$P(A \mid M_1) = {P(M_1 \mid A) \over P(M_1)} P(A)$$ To generalize it to two "observations" the correct rule is: $$P(A \mid M_1, M_2) = {P(M_2 \mid A, M_1) \over P(M_2 \mid M_1)} P(A \mid M_1)= {P(M_2 \mid A, M_1) \over P(M_2 \mid M_1)} {P(M_1 \mid A) \over P(M_1)} P(A)$$ which you can easily verify by expanding all the conditional probabilities. Note that this is not the same as the following wrong rule: $$P(A \mid M_1, M_2) \overset{wrong!}= {P(M_2 \mid A) \over P(M_2)} P(A \mid M_1)= {P(M_2 \mid A) \over P(M_2)} {P(M_1 \mid A) \over P(M_1)} P(A)$$ (There are conditions under which the wrong rule gives the same result as the correct rule, but generally they don't agree.) So when you said: compute the probability of having cancer given they smoke, followed by the probability of having cancer given they drink it really depends on what you mean by "followed by" - and if you meant the second rule, then that's wrong. Here's a simple example. Roll a fair $6$-sided die and let $A$ be the event that you rolled a $2$. Let $M_1$ be the event ("observation") that the roll is even and $M_2$ be the event that roll is $\le 3$. $P(A) = \frac16$ $P(A \mid M_1) = {1 \over 1/2} P(A) = \frac13$ Correct update rule: $P(A \mid M_1, M_2) = {1 \over 1/3} P(A \mid M_1) = 1$ Wrong update rule: $P(A \mid M_1, M_2) \overset{wrong!}= {1 \over 1/2} P(A \mid M_1) = \frac23$
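A tiny Python enumeration of this die example confirms that the two rules disagree (the set names mirror the events above):

```python
from fractions import Fraction

rolls = set(range(1, 7))   # fair 6-sided die
A  = {2}                   # rolled a 2
M1 = {2, 4, 6}             # roll is even
M2 = {1, 2, 3}             # roll is <= 3

def P(event, given=rolls):
    # Conditional probability under the uniform distribution on `given`.
    return Fraction(len(event & given), len(given))

# Correct rule: P(A|M1,M2) = P(M2|A,M1)/P(M2|M1) * P(A|M1)
correct = P(M2, A & M1) / P(M2, M1) * P(A, M1)
# Wrong rule:   P(M2|A)/P(M2) * P(A|M1)
wrong = P(M2, A) / P(M2) * P(A, M1)
print(correct, wrong)      # 1 and 2/3
```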
Prove for $L,K\colon V \rightarrow \mathbb{R} $ if $\ker(L) \subset \ker(K)$ then $K=\Lambda L$?
Yes, that is what is meant. Asserting that $K=0$ means that $(\forall v\in V):K(v)=0$.
Ordinals and topological space.
We can use transfinite recursion to inject $\alpha$ into $[0,1)$ such that between any two elements of the image of $\alpha$ there are uncountably many real numbers. As pointed out in the comments, this is easily established even without transfinite induction. In fact, every countable linear order embeds into the rationals, so you can just use that embedding. However you obtain such a function, call it $f$. We then create $g:\alpha\times [0,1)\to [0,1)$ by saying that $g(x,0)=f(x)$ and observing that since $[f(x),f(s(x)))$ is a half-open interval, there's a natural order preserving bijection between it and $[0,1)$, which we use to assign the value of $g(x,y)$ for $y\neq 0$. This natural bijection sends $[a,b)$ to $[0,1)$ via the formula $h(x)=(x-a)/(b-a)$
Prime ideals of the ring of rational functions
The answer to (1) is yes. Let us first show If $I$ is an ideal of $A[x]$ such that $I\cap S=\emptyset$, then $I$ is contained in $\mathfrak m A[x]$ for some maximal ideal $\mathfrak m$ of $A$. Proof. Note that $$I\subseteq \sum_{f\in I} c(f)A[x].$$ So if $I$ is not contained in any $\mathfrak mA[x]$, then $A=\sum_{1\le i\le n} c(f_i)$ for some $f_i\in I$. Fix an $m$ big enough and write $$f_i(x)=a_{i0}+a_{i1}x+...+ a_{im}x^m$$ ($a_{im}$ could be zero) and an identity $$1=\sum_{i\le n, j\le m} \alpha_{ij}a_{ij}, \quad \alpha_{ij}\in A.$$ Consider the polynomial $$f=\sum_{i,j}\alpha_{ij}f_i(x)x^{m-j} \in I.$$ The term of degree $m$ in $f$ has coefficient equal to $1$. So $f\in S$. Contradiction. As any $\mathfrak m A[x]$ is prime and has empty intersection with $S$, the above result implies immediately (1). (2) The map $\phi$ is clearly surjective (consider $\mathfrak qA[x]$ for any $\mathfrak q\in\mathrm{Spec} A$) but is not injective in general. Consider $A=k[t,s]$ over a field $k$. Then $f(x):=t+sx$ generates a prime ideal $\mathfrak p$ of $A[x]$ which doesn't meet $S$ and we have $\mathfrak pS^{-1}A[x]\cap A=\{0\}$. So the generic fiber of $\phi$ has at least two points (in fact infinitely many points). Note however that $\phi$ is always injective over any maximal ideal $\mathfrak m$ because $\phi^{-1}(\mathfrak m)=\mathfrak mS^{-1}A[x]$. The argument for the surjectivity of $\phi$ also shows that $\dim S^{-1}A[x]\ge \dim A$. We also have $\dim S^{-1}A[x]\le \dim A[x]=\dim A +1$ when $A$ is noetherian. I don't know whether the equality is possible. Update: We have $\dim S^{-1}A[x]=\dim A$ when $A$ is noetherian: let $\mathfrak p_0\subset ... \subset \mathfrak p_n$ be a chain of prime ideals of $A[x]$ contained in $A[x]\setminus S$. By (1), we have $\mathfrak p_n\subset \mathfrak mA[x]$ for some maximal ideal $\mathfrak m\subset A$. Then $$ \mathfrak p_0\subset ... \subset \mathfrak p_n\subset \mathfrak mA[x]+xA[x]$$ is a chain of prime ideals of $A[x]$. Thus $\dim S^{-1}A[x]\le \dim A[x] -1=\dim A$. Your homework is not so easy :).
Proving the given inequalities
First of all, you should have $\geq $ or $\leq$ instead of $>$ and $<$ in your inequalities. In each of them, the equality occurs if and only if $a=b=c$. (i) Use the Weighted AM-GM Inequality to show that $$\frac{bc+ca+ab}{a+b+c}=\frac{b}{a+b+c}c+\frac{c}{a+b+c}a+\frac{a}{a+b+c}b\geq c^{\frac{b}{a+b+c}}a^{\frac{c}{a+b+c}}b^{\frac{a}{a+b+c}}\,.$$ Similarly, $$\frac{bc+ca+ab}{a+b+c}=\frac{c}{a+b+c}b+\frac{a}{a+b+c}c+\frac{b}{a+b+c}a\geq b^{\frac{c}{a+b+c}}c^{\frac{a}{a+b+c}}a^{\frac{b}{a+b+c}}\,.$$ Multiply these two inequalities to get $$\left(\frac{bc+ca+ab}{a+b+c}\right)^2\geq \left((bc)^a(ca)^b(ab)^c\right)^{\frac{1}{a+b+c}}\,,$$ which is equivalent to the required inequality. (ii) For the inequality on the right, note that $$\frac{a^2+b^2+c^2}{a+b+c}=\frac{a}{a+b+c}a+\frac{b}{a+b+c}b+\frac{c}{a+b+c}c\geq a^{\frac{a}{a+b+c}}b^{\frac{b}{a+b+c}}c^{\frac{c}{a+b+c}}=\left(a^ab^bc^c\right)^{\frac{1}{a+b+c}}$$ by the Weighted AM-GM Inequality. Thus, $$\left(\frac{a^2+b^2+c^2}{a+b+c}\right)^{a+b+c}\geq a^ab^bc^c$$ as desired. For the inequality on the left, observe that $$\frac{3}{a+b+c}=\frac{a}{a+b+c}\left(\frac{1}{a}\right)+\frac{b}{a+b+c}\left(\frac1b\right)+\frac{c}{a+b+c}\left(\frac1c\right)\,.$$ Thus, by the Weighted AM-GM Inequality, $$\frac{3}{a+b+c} \geq \left(\frac1a\right)^{\frac{a}{a+b+c}}\left(\frac1b\right)^{\frac{b}{a+b+c}}\left(\frac{1}{c}\right)^{\frac{c}{a+b+c}}=\frac{1}{\left(a^ab^bc^c\right)^{\frac{1}{a+b+c}}}\,.$$ This is equivalent to the inequality to be proven.
Does the sum of the reciprocals of composites that are $ \le $ 1
The sum is finite. The next term is the last: $1/6552$. EDIT: Consider the more general problem with a rational target $T$. I tried some random choices of target; for the target $105/37$ I was not able to produce a finite sum (after $204$ iterations the denominators were so big that Maple was having trouble with the primality testing). So I'm not convinced that the sum will always be finite.
Explain a limit example in a graph
It is actually kind of impossible! Having $\lim_{x\to 0^+} f(x) = 1$ means that for every $\eta>0$ there is a $\sigma>0$ such that if $0<x<\sigma$, then $|f(x)-1|<\eta$. But in your case we have $|f(x)-1|=|x-1|$ for $x\geq 0$, and so $|f(x)-1|=|x-1|\geq 1-\sigma$ whenever $0<x<\sigma$. So, I think there is no such function.
Prove if m + n ≥ 59 then (m ≥ 30 or n ≥ 30) by Contraposition
That's not the contrapositive. Instead try: IF $(m<30)$ AND $(n<30)$ THEN $(m+n < 59)$. (Remember you're dealing with integers here).
lower bound of the difference between two numbers
You can write $$ay\le x_1\le by$$ and $$-dy\le -x_2\le -cy$$ so $$ay-dy\le x_1-x_2\le by-cy$$
Confusion with Chi Squared interpretation
You reject the null hypothesis (good fit) for LARGE values of the computed chi-squared statistic (call it Q). Under the null hypothesis, the expected value of Q is the degrees of freedom. So with DF = 25, you would NEVER reject for a value of Q less than 25. If Q = 51.3, the exact P-value is 0.001468724 > 0.001, from software (still with df = 25). This means that you cannot reject at level .001. If Q = 53.6, the exact P-value is 0.0007485428 < 0.001, so you CAN reject at level .001. If Q = 10.52, then you cannot reject at any reasonable level of significance, because 10.52 < E(Q) = 25. The value 10.52 cuts probability .005 from the LOWER tail of the chi-squared distribution. This might be useful for some applications of the chi-squared distribution (for example, finding a confidence bound on the variance of a normal distribution), but not for a goodness-of-fit (GOF) test. [Note: In practice, if I got Q as small as 10.52 in a GOF test with df = 25, I would suspect something is wrong with either the model or the data. This seems too good a fit to be true. Sort of analogous to getting reported results 100, 101, 99, 102, 98, 100 on the respective faces 1 through 6 on 600 rolls of a fair die. Technically, not impossible, but I might suspect someone just wrote down fake data instead of actually rolling a die. That would be Q = 0.1 with df = 5.] There are several statements in your question that indicate confusion. I have tried to give examples here that get to the point in trying to clear things up. Please leave a Comment with specific numbers if you have further questions. I, or someone else, will probably be able to respond. Addendum: The figure below shows the density function of CHISQ(df = 25). The thin black line is at the mean 25, the dotted red lines are critical values for tests at levels 0.05, 0.01, and 0.001, respectively (left to right). They are located at 37.652, 44.314, and 52.620. The right-hand panel is a magnification of the curve to the right of Q = 40. It shows more clearly the tiny area 0.001 to the right of the dotted line at 52.620. Values of Q > 52.620 lead to rejection of fit at level 0.001.
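If you have scipy available, the P-values quoted above are easy to reproduce (a sketch):

```python
from scipy.stats import chi2

df = 25
for Q in (51.3, 53.6, 10.52):
    print(Q, chi2.sf(Q, df))   # upper-tail P-value: reject for LARGE Q
# 51.3  -> ~0.00147  (> 0.001, cannot reject at level .001)
# 53.6  -> ~0.00075  (< 0.001, reject at level .001)
# 10.52 -> ~0.995    (Q far below E(Q) = 25: suspiciously good fit)
```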
Must $g(x)$ and $f(x)$ be functions for composition function $(g \circ f)(x)$ to exist
I'd hate to 'post' this as an 'answer' because it's not really an answer per se, but something I would've written as a comment on an 'answer.' (Because I'm just thinking out loud.) Usually when I see statements about compositions of functions $g \circ f$, they start like: if $f: X \rightarrow Y$ is surjective and $g: Y \rightarrow Z$ is surjective, then $g \circ f: X \rightarrow Z$ is surjective. It's in an "if-then" form. So consider the statement: If $f: X \rightarrow Y$ is a function and $g: Y \rightarrow Z$ is a function, then the composition $g \circ f: X \rightarrow Z$ is a function. Well, by the truth table for "P implies Q", if P is false, then the statement is true. Hence, if $g$ or $f$ is not a function, then "$g \circ f$ would be a function" would still be a valid/true statement. The contrapositive would also say that if $g \circ f$ is not a function, then $f$ is not a function or $g$ is not a function -- again, just rambling off ideas. Also, I'd be careful when saying 'inverses.' If a function $f: X \rightarrow Y$ has an inverse, say $f^{-1}: Y \rightarrow X$, then that implies $f^{-1}$ is a function, but as you mentioned above, the square root is not a function. If anything, you might be able to consider it as the pullback. Mind my loose words; I too am trying to be more concise, but I hope my comment gets you thinking.
From the given information, how do we use the Remainder Theorem to reach the answer?
The polynomial is a cubic whose coefficients are a geometric sequence (in the order of descending powers, i.e. starting with $x^3$ and ending with the constant term), meaning that our polynomial must take the form $$p(x) = ax^3 + arx^2 + ar^2x + ar^3.$$ The Remainder Theorem states that the remainder of $p(x)$ when divided by $x - \alpha$ is $p(\alpha)$. So, when dividing $p(x)$ by $x - 1$ (i.e. $\alpha = 1$), the remainder is $p(1)$. Thus, $$10 = p(1) = a(1)^3 + ar(1)^2 + ar^2(1) + ar^3 = a + ar + ar^2 + ar^3.$$ Similarly, when dividing $p(x)$ by $x + 1$ (i.e. $\alpha = -1$), the remainder is $p(-1)$. Thus, $$-30 = p(-1) = a(-1)^3 + ar(-1)^2 + ar^2(-1) + ar^3 = -a + ar - ar^2 + ar^3.$$ This should hopefully explain the alternating signs. The next observation you need to make is that both of the above expressions are geometric series. Each has $4$ terms. The first has an initial term of $a$ and a common ratio of $r$, while the second has an initial term of $-a$ and a common ratio of $-r$. The first one is precisely the usual formula for geometric series: $$10 = a + ar + ar^2 + ar^3 = a\frac{1 - r^4}{1 - r}.$$ To get a formula for the second one, we simply replace $r$ with $-r$ and $a$ with $-a$, to get $$-30 = -a + ar - ar^2 + ar^3 = (-a)\frac{1 - (-r)^4}{1 - (-r)} = -a\frac{1 - r^4}{1 + r}.$$ To get the equation from the solution, simply multiply both sides by $-1$.
Idempotent ideals in certain commutative rings
In a commutative ring $R$, every ideal is idempotent iff every ideal is radical, iff $R$ is VNR (von Neumann regular). Then the question asks if a commutative ring $R$ with $J(R)=0$ and $\mathfrak m^2=\mathfrak m$ for every maximal ideal $\mathfrak m$ is VNR. The answer is negative: the ring of continuous functions $R=\mathcal C[0,1]$ satisfies both conditions and it's not VNR.
How to find $E[U^h \ V^k]$
Let $ \varphi $ be the transformation defined by $U = X+Y$ and $V= X-Y$. Then by the probability density transformation theorem (the assumptions do hold) we get $$ f_{U,V} (u,v) = f_{X,Y} ( \varphi^{-1} (u,v)) \ | J \varphi^{-1} (u,v) | \ \mathbb{I}_{ \varphi ( \mathbb{R}^{2} ) } (u,v) $$ Since $ |J \varphi^{-1} (u,v)| = \frac{1}{2}$ we see that $$ f_{U,V} (u,v) = \frac{ \sqrt{3} }{2 \pi} e^{ - \frac{1}{2} (u^2+3v^2 ) } $$ Then by integrating with respect to $u$ and subsequently with respect to $v$ we get $$ f_{V} (v) = e^{ - \frac{3v^2}{2} } \sqrt{ \frac{3}{2 \pi} } \text{ and } f_{U} (u) = \frac{e^{ - \frac{u^2}{2} }}{\sqrt{2 \pi }} $$ Therefore we see that $f_{U,V} = f_{U} f_{V}$, hence $U$ and $V$ are independent. Then also $U^{h}$ and $V^{k}$ are independent, so $$ E[ U^{h} V^{k}] = E[U^{h}] \ E[V^{k}] $$ Then we can simply compute the expected values by definition and we get $$ \frac{ 2^{ \frac{1}{2} (-4+h+k) } \ 3^{ -\frac{k}{ 2}} \ (1+(-1)^h)(1+(-1)^k) \ \Gamma \big( \frac{1+h}{2} \big) \Gamma \big( \frac{1+k}{2} \big) }{ \pi } $$ Since we know the probability density functions of $U$ and $V$ we can compute the moment-generating function of $U,V$ by using the fact that $U$ and $V$ are independent as $$ M_{U,V} (s,t) = E[e^{sU+tV} ] = E[ e^{sU} \ e^{tV} ] = E[e^{sU}] \ E[e^{tV}] = \int_{ \mathbb{R}} e^{su} f_{U} (u) \ du \int_{ \mathbb{R}} e^{tv} \ f_{V} (v) \ dv $$ which is equal to $e^{ \frac{s^2}{2} } \ e^{ \frac{t^2}{6} } $.
Proving f(rx) = rf(x)
One can prove the result without proving either of the items mentioned in the OP. We are told that $f$ is additive, that is, $f(x+y)=f(x)+f(y)$ for all $x$ and $y$. Put $x=0$. We get $f(0)=f(0+0)=f(0)+f(0)$. By subtraction, $f(0)=0$. We have then $0=f(x+(-x))=f(x)+f(-x)$. From $f(x)+f(-x)=0$ the desired result follows. As to the first part, to prove $f(rx)=rf(x)$ for rational $r$, it is useful to prove first that this holds for positive integers $r$. Then for positive rationals $r=\frac{p}{q}$, where $q\gt 0$, we first prove that $f(\frac{x}{q})=\frac{1}{q}f(x)$ by considering $f\left(\frac{x}{q}+\frac{x}{q}+\cdots+\frac{x}{q}\right)$ ($q$ terms).
What is the expected number of draws before we get an empty bin?
Here is an approximate approach, which should be pretty good when $m,k$ are reasonably large. First calculate the chance that the first bin is empty after you have removed $n$ balls. There are $mk$ balls total, so the number of ways to choose $n$ balls that leave bin $1$ empty is ${(m-1)k \choose n-k}$. The chance bin $1$ is empty is then $\frac {{(m-1)k \choose n-k}}{mk \choose n}$. If there are lots of bins and balls, the impact of one bin being empty on the distribution of probabilities in the other bins is small. We are interested in cases where the chance bin $1$ is empty has rather low probability, so the chance of two bins being empty is even lower. The chance some bin is empty is then approximately $\frac {m{(m-1)k \choose n-k}}{mk \choose n}$. Plotting this (e.g. with Wolfram Alpha) for $m=50,k=10$ shows it hits $0.5$ at about $320$ balls, where you will have removed $6.4$ of the $10$ balls in each bin on average.
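The approximation above is easy to evaluate exactly with Python's math.comb (a sketch; the function name is mine):

```python
from math import comb

def p_some_bin_empty(m, k, n):
    # m * C((m-1)k, n-k) / C(mk, n): the approximation from the answer.
    return m * comb((m - 1) * k, n - k) / comb(m * k, n)

for n in (300, 320, 340):
    print(n, p_some_bin_empty(50, 10, n))
```

It crosses $0.5$ near $n = 320$, consistent with the plot described above.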
Stability of equilibrium solution
Reformulate the system in a new coordinate system $\boldsymbol{x} = \boldsymbol{x}_\text{solution}+\boldsymbol{z}$. This will transform $$\dot{\boldsymbol{x}} = \boldsymbol{Ax}+\boldsymbol{b}$$ into $$\dfrac{d}{dt}\boldsymbol{x}_\text{solution}+\dfrac{d}{dt}{\boldsymbol{z}}=\boldsymbol{A}\left[ \boldsymbol{x}_\text{solution}+\boldsymbol{z}\right]+\boldsymbol{b}$$ $$\implies \dfrac{d}{dt}\boldsymbol{x}_\text{solution}+\dfrac{d}{dt}{\boldsymbol{z}}=\boldsymbol{Az}+\left[\boldsymbol{A} \boldsymbol{x}_\text{solution} +\boldsymbol{b}\right].$$ As $\boldsymbol{x}_\text{solution}$ is a solution (note that it is not necessary that it is an equilibrium point) to $\dot{\boldsymbol{x}} = \boldsymbol{Ax}+\boldsymbol{b}$, we can simplify the previous equation to $$\dot{\boldsymbol{z}}=\boldsymbol{Az}.$$ This equation has the trivial equilibrium point at the origin. The system matrix $\boldsymbol{A}$ stays invariant under the substitution. This is why we can simply calculate the eigenvalues of the linear system in order to perform stability analysis.
How to find the largest possible matching region for asymptotic matching of any differential equation
I am no expert at regions for asymptotic matching of DEs, but let me see if I can guide you to where they got that. We have: $$\displaystyle \tag 1 y_L = e^{-x + \frac{1}{x}}$$ $$\displaystyle \tag 2 y_R = a e^{-\epsilon \frac{x^3}{3} - x}$$ Using a Taylor series expansion for $(1)$ with $x$ large, we have: $$\displaystyle \tag 3 y_L = e^{-x + \frac{1}{x}} \approx e^{-x}\left( 1 + \dfrac{1}{x} + O\left(\dfrac{1}{x^2}\right) \right)$$ Using a Taylor series expansion for $(2)$ with $x$ small, we have: $$\displaystyle \tag 4 y_R = a e^{-\epsilon \frac{x^3}{3} - x} \approx a e^{-x}\left( 1 - \epsilon \dfrac{x^3}{3} + O\left(\epsilon^2 \dfrac{x^6}{18} \right) \right)$$ Now, we are interested in a region where $y_L = y_R$, so let's equate them and see what we get: $$\displaystyle \tag 5 e^{-x}\left( 1 + \dfrac{1}{x}\right) = a e^{-x}\left( 1 - \epsilon \dfrac{x^3}{3} \right)$$ On the LHS of $(5)$, what if we have $x \gg 1$ ($x$ large)? Then we are left with $e^{-x}$. On the RHS of $(5)$, what if we have $x \ll \epsilon^{-1/3}$ ($x$ small)? Then we are left with $a e^{-x}$. What do we need $a$ to be? Well, $a = 1$ will do the trick and that makes the LHS = RHS. Thus, we have as our asymptotic interval: $$\tag 6 1 \ll x \ll \epsilon^{-1/3}, \quad a = 1$$ With $(6)$ we have a very wide range of choice for $x$, as the authors describe. As long as we choose values in that region, we are okay. As for the particular values they chose, that they make the solution curves look more appealing to the eye is as good a guess as any. I think in these perturbation style problems, it really helps to do what-if scenarios and experiment to see the impacts of varying $x$ and $\epsilon$ in the original DEQ.
Definition of restriction maps of schemes
Let $X$ be an arbitrary topological space such that $(X,\mathcal{O}_X)$ is a scheme. Then $\mathcal{O}_X$ is just a sheaf such that $\mathcal{O}_X(U)$ is a ring for every open subset $U$ of $X$, so the restriction maps of the scheme $(X,\mathcal{O}_X)$ are defined exactly as they are for sheaves.
Eulerian path for Rubik's Cube states
Well, since an Eulerian cycle exists if and only if the degree of every vertex in a connected graph is even, we only need to check how many states it is possible to get to with one move (if a state is a vertex in our graph, then a move from one state to the next is an edge). In a Rubik's cube, we can get to a new state by rotating any one of the $9$ planes in $\mathit{either}$ direction, so we have $18$ possible states we can get to with one move. Noting that the graph is connected (we are considering precisely the set of states reachable from one another), we conclude that there is an Eulerian cycle.
Evaluating Limits Mathematically
Mathematically: $ \lim_{x \to a} f(x) = L $ means: $$ \forall \epsilon > 0\ \exists \delta > 0 $$ such that $$0 < |x - a| < \delta \Rightarrow |f(x) - L| < \epsilon$$
Spanning trees in a tree
If you remove one edge from a tree, it becomes disconnected. Hence a spanning connected subgraph must contain all edges, so the only spanning tree of a tree $T$ is $T$ itself. Hence the answer is $1$.
How to find $\int\frac{\sin x}{x}dx$
The function $f(x)=\sin(x)/x$ does not admit an elementary antiderivative, i.e., there is no formula for its integral (using quotients of polynomials, trig. functions, logarithms, exponentials, i.e., the usual functions you study in calculus). Symbolic integration is the part of calculus that deals with finding antiderivatives. There is a fairly sophisticated algorithm due to Risch, and implementing it shows that there is no nice formula for $\displaystyle \int\frac{\sin(x)}x dx$. The algorithm is sufficiently elaborate that apparently no software package can currently find antiderivatives for all functions for which it is possible. The Wikipedia page I linked to has references to the original (and nice) paper. A few years ago, Matthew Wiener posted a fairly readable account of the algorithm on sci.math; here is a pdf of the post. For a nice full length exposition of the mathematics involved, I highly recommend the book by Manuel Bronstein, "Symbolic Integration 1 (transcendental functions)" (2 ed.), 1997, Springer-Verlag. Now, not all is bad news here: One can integrate term by term the power series for $\sin(x)/x$ and obtain the power series of its antiderivative (which converges everywhere), and there are numerical methods to approximate this function very decently. Finally, one can compute explicitly (for example, using methods of complex analysis) that $$ \int_0^\infty\frac{\sin(x)}x dx=\frac{\pi}2. $$
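For instance, here is a short Python sketch of that term-by-term integration (truncating the everywhere-convergent series at a fixed number of terms):

```python
from math import factorial

def Si(x, terms=30):
    # Term-by-term integration of the power series of sin(t)/t:
    # Si(x) = sum_{n>=0} (-1)^n x^(2n+1) / ((2n+1) * (2n+1)!)
    return sum((-1) ** n * x ** (2 * n + 1) / ((2 * n + 1) * factorial(2 * n + 1))
               for n in range(terms))

print(Si(1.0))  # 0.946083..., the classical value of Si(1)
```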
Trying To Understand the Statement of Montel's Theorem
A reason to restrict to compact subsets of $D$ is the kind of convergence we work with for complex functions. In complex analysis of functions of one variable, we generally want uniform convergence of functions, not pointwise convergence. When we consider uniform convergence on compact parts, we gain that this convergence agrees with convergence in the topology of uniform convergence on compacts. Moreover, if we consider the set of continuous functions from $U\subset\mathbb{C}$ to $\mathbb{C}$, we have available some theorems about complete metric spaces, because this space is a Banach space with the norm of uniform convergence.
Shortest path to Tychonoff?
For someone who has not previously been exposed to filters, probably the shortest path is by way of the Alexander subbase theorem; the link gives both a fairly complete sketch of the proof of this theorem and the very easy proof from it of the Tikhonov product theorem.
Does this follow a binomial distribution?
So, I will explain a solution that does not introduce a binomial random variable. Concretely, there is only one road that will lead him to the mall and three other roads that will not lead him to the mall, but back to the starting position. The problem is asking for the probability of the event $A$ that in his first attempt he chooses one of these three incorrect roads and in his second attempt he chooses the one correct road. So, the probability that he chooses an incorrect road is $\dfrac{3}{4}$ and the probability that he chooses the correct road after choosing an incorrect road is $\dfrac{1}{3}$. So, $P(A)=\dfrac{3}{4}\cdot\dfrac{1}{3}=\dfrac{1}{4}$. Imagine if we wanted to extend this question to the case where he forgets which road he has already tried. Suppose that we want to find the probability that he finds the correct road on his third attempt. Then $P(A)=\left(\dfrac{3}{4}\right)^2\cdot\dfrac{1}{4}$ because he has to fail twice before choosing the correct road. What are these probabilities resembling? Well, in this case, if he were to choose the correct road on the $n$th attempt, then $P(A)=\left(\dfrac{3}{4}\right)^{n-1}\cdot\dfrac{1}{4}$. This probability resembles a geometric distribution because it involves him failing a certain number of times before succeeding.
Inequality involving sum of exponentials
You cannot have such an inequality. RHS $<1$. If such an inequality held for all choices of the $B_n$'s, we could get $1$ as a limiting value of the LHS, leading to a contradiction. For example you can let $B_n$ and $\lambda_n$ approach $0$ for all $n>1$, $\lambda_1=1$ and $B_1$ approach $\infty$.
How to find the general solution of $(1+x^2)y''+2xy'-2y=0$. How to express by means of elementary functions?
Let $y=\sum\limits_{n=0}^\infty a_nx^n$. Then $y'=\sum\limits_{n=0}^\infty na_nx^{n-1}$ and $y''=\sum\limits_{n=0}^\infty n(n-1)a_nx^{n-2}$. $\therefore(1+x^2)\sum\limits_{n=0}^\infty n(n-1)a_nx^{n-2}+2x\sum\limits_{n=0}^\infty na_nx^{n-1}-2\sum\limits_{n=0}^\infty a_nx^n=0$ $\sum\limits_{n=0}^\infty n(n-1)a_nx^{n-2}+\sum\limits_{n=0}^\infty n(n-1)a_nx^n+\sum\limits_{n=0}^\infty2na_nx^n-\sum\limits_{n=0}^\infty2a_nx^n=0$ $\sum\limits_{n=2}^\infty n(n-1)a_nx^{n-2}+\sum\limits_{n=0}^\infty(n^2+n-2)a_nx^n=0$ $\sum\limits_{n=2}^\infty n(n-1)a_nx^{n-2}+\sum\limits_{n=0}^\infty(n+2)(n-1)a_nx^n=0$ $\sum\limits_{n=2}^\infty n(n-1)a_nx^{n-2}+\sum\limits_{n=2}^\infty n(n-3)a_{n-2}x^{n-2}=0$ $\sum\limits_{n=2}^\infty\bigl(n(n-1)a_n+n(n-3)a_{n-2}\bigr)x^{n-2}=0$ $\therefore n(n-1)a_n+n(n-3)a_{n-2}=0$, i.e. $a_n=-\dfrac{(n-3)a_{n-2}}{n-1}$. $\therefore\begin{cases}a_0,a_1\text{ arbitrary}\\a_{2n+3}=0~\forall n\in\mathbb{Z}^*\\a_{2n}=\dfrac{(-1)^n\bigl((-1)\times1\times3\times\cdots\times(2n-3)\bigr)a_0}{1\times3\times5\times\cdots\times(2n-1)}=\dfrac{(-1)^{n+1}a_0}{2n-1}~\forall n\in\mathbb{Z}^*\end{cases}$ $\therefore y=C_1x+C_2\sum\limits_{n=0}^\infty\dfrac{(-1)^{n+1}x^{2n}}{2n-1}=C_1x+C_2\biggl(1+\sum\limits_{n=1}^\infty\dfrac{(-1)^{n+1}x^{2n}}{2n-1}\biggr)=C_1x+C_2\biggl(1+\sum\limits_{n=0}^\infty\dfrac{(-1)^nx^{2n+2}}{2n+1}\biggr)=C_1x+C_2(1+x\tan^{-1}x)$
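As a sanity check on the closed form, a short sympy sketch (assuming sympy is available) verifies that both $x$ and $1+x\tan^{-1}x$ satisfy the equation:

```python
import sympy as sp

x = sp.symbols('x')
for y in (x, 1 + x * sp.atan(x)):
    residual = (1 + x**2) * sp.diff(y, x, 2) + 2 * x * sp.diff(y, x) - 2 * y
    print(sp.simplify(residual))  # 0 for both solutions
```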
Is the DFT really the DTFS?
First of all, we're talking discrete time, so I'll be using $N$ for the period of our function rather than $T$. In fact, we do have exact equality: $$ x[n] = \sum_{k=0}^{N-1}X_k e^{j\frac{2\pi kn}{N}} $$ where $X_k = \frac 1{N}\sum_{n=0}^{N-1} x[n] e^{-j \frac{2 \pi}N kn}$. The easiest way to see this intuitively is with a little bit of linear algebra. Let $\tilde x[n]$ denote the function $\tilde x[n] = \sum_{k=0}^{N-1}X_k e^{j\frac{2\pi kn}{N}}$, which I attempt to prove is equal to $x[n]$ (for all $n$). Since both $x$ and $\tilde x$ are $N$-periodic, it is enough to know that $x[n] = \tilde x[n]$ for $n = 0,1,2,\dots, N-1$. In other words, we're going to think of $x[n]$ as really being the vector $\mathbf x = (x[0],x[1],\dots,x[N-1]) \in \Bbb C^N$. Let $b_k[n] = e^{j \frac{2 \pi k n}{N}}$. The values of $b_k[n]$ for $0 \leq n \leq N-1$ form a vector of $N$ entries. The question is this: can we find $X_k$ such that $X_0b_0[n] + \cdots + X_{N-1} b_{N-1}[n]$ is exactly the same as the function $x[n]$? In terms of vectors: if we define $\mathbf b_k = (e^{j \frac{2 \pi k (0)}{N}}, \dots, e^{j \frac{2 \pi k (N-1)}{N}})$, can we find coefficients $X_k$ so that $$ X_0 \mathbf b_0 + \cdots + X_{N-1} \mathbf b_{N-1} = \mathbf x $$ Or as a system of equations, we have $$ X_0 b_0[0] + \cdots + X_{N-1}b_{N-1}[0] = x[0]\\ X_0 b_0[1] + \cdots + X_{N-1}b_{N-1}[1] = x[1]\\ \vdots \\ X_0 b_0[N-1] + \cdots + X_{N-1}b_{N-1}[N-1] = x[N-1] $$ As it turns out, this is a linear system of $N$ equations on $N$ variables (the variables $X_0,X_1,\dots,X_{N-1}$) that we can completely solve! To put it another way: because the vectors $\mathbf b_0,\dots,\mathbf b_{N-1}$ form a basis of the $N$-dimensional space $\Bbb C^N$, every vector $\mathbf x \in \Bbb C^N$ can be uniquely expressed as a linear combination of them. Because the functions $b_0[n],\dots,b_{N-1}[n]$ form a basis of the $N$-dimensional space of $N$-periodic functions, every $N$-periodic function $x[n]$ can be uniquely expressed as a linear combination of them. If we were to write out this system of equations with a matrix, we would have $W \mathbf X = \mathbf x$ where $\mathbf X = (X_0,\dots,X_{N-1})$, $\mathbf x$ is as before, and $W$ is (a multiple of) the DFT matrix, as described in the link. Another perspective: it turns out that there is a certain "aliasing" among the other complex exponentials that makes them very obviously redundant. For example, with $k = N$, we find $$ b_N[n] = e^{j \frac{2 \pi N n}{N}} = e^{j 2 \pi n} = [e^{j 2 \pi}]^n = 1\\ b_{N+1}[n] = e^{j \frac{2 \pi (N+1)n}N} = e^{j \frac{2 \pi Nn}N} \cdot e^{j \frac{2 \pi 1n}{N} } = 1 \cdot b_1[n] = b_1[n] $$ and so on so that in general, we have $b_{k + mN}[n] = b_k[n]$ for any integer $m$.
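A short numpy sketch of this exact analysis/synthesis pair (random test signal; $N = 8$ is an arbitrary choice):

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
x = rng.standard_normal(N)            # one period of an N-periodic x[n]

n = np.arange(N)
# Analysis: X_k = (1/N) sum_n x[n] e^{-j 2 pi k n / N}
X = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) / N for k in range(N)])
# Synthesis: x[m] = sum_k X_k e^{+j 2 pi k m / N}  (np.arange(N) plays k here)
x_rec = np.array([np.sum(X * np.exp(2j * np.pi * np.arange(N) * m / N)) for m in range(N)])
print(np.allclose(x, x_rec))          # True: exact equality, no truncation error
```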
Showing the inequality holds (binomial coefficient)
$\frac{1}{m^k} {m \choose k} < \frac{1}{n^k} {n \choose k} \;\; \text{for all} \; k=2,...,m$ $\displaystyle {m \choose k} = \frac{m!}{(m-k)!k!} = \frac{1}{k!}\prod_{i=0}^{k-1} (m-i)$ So, $ \displaystyle \frac{1}{m^k} {m \choose k} = \frac{1}{k!}\prod_{i=0}^{k-1} \frac{m-i}{m} = \frac{1}{k!}\prod_{i=0}^{k-1} (1-\frac{i}{m})$ Similarly $ \displaystyle \frac{1}{n^k} {n \choose k} = \frac{1}{k!}\prod_{i=0}^{k-1} \frac{n-i}{n} = \frac{1}{k!}\prod_{i=0}^{k-1} (1-\frac{i}{n})$ As $n \gt m, (1-\frac{i}{n}) \gt (1-\frac{i}{m})$. This should lead to the proof.
Solving x^4=a mod p, given a is a quadratic residue
Let us put $a = b^2 \pmod p$. So $x^4 -b^2 = 0\mod p$, or $(x^2 - b)(x^2+b)=0\pmod p$. Since $p = 1\mod 4$, $-1$ is a quadratic residue (known theorem). Hence the previous equation can be written $(x^2-b)(x^2-\alpha^2 b) = 0\pmod p$ with $\alpha^2 = -1 \pmod p$. It is clear from this that if $b$ is a quadratic residue, say $b=c^2\pmod p$, the equation has four solutions $x = c, -c, \alpha c, -\alpha c$, otherwise it has no solution.
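A brute-force check in Python for the small case $p=13$ (my own choice of prime with $p \equiv 1 \bmod 4$) illustrates the "four solutions or none" dichotomy:

```python
# For each quadratic residue a mod p, x^4 = a (mod p) has 4 solutions
# if a square root b of a is itself a quadratic residue, else 0.
p = 13
residues = sorted({x * x % p for x in range(1, p)})
for a in residues:
    sols = [x for x in range(1, p) if pow(x, 4, p) == a]
    print(a, len(sols))   # prints 0 or 4 for each a
```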
Approximate sector between two lines?
Let the position of the bottom-left point be $(x_0, y_0)$, and let $R$ be the distance between the two bottom points (or the two left points). Then each point you want to get will have coordinates $(x, y) = (x_0 + R\cos a,\ y_0 + R\sin a)$, where $a$ is some real number between $0$ and $\pi/2$ (or $\pi$, or something you are the only one who knows); let's call this upper angle $A$. So you just have to divide your arc into $n$ equal parts. The array of your points will be $(x, y) = \bigl(x_0 + R\cos(Ak/(n-1)),\ y_0 + R\sin(Ak/(n-1))\bigr)$, where $k = 0,1,\dots,n-1$. The number $n$ only you know :)
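Putting this together, a small Python sketch of the point array (the function name is mine):

```python
import math

def arc_points(x0, y0, R, A, n):
    # n points evenly spaced in angle along the arc of radius R centred
    # at (x0, y0), sweeping from angle 0 to angle A.
    return [(x0 + R * math.cos(A * k / (n - 1)),
             y0 + R * math.sin(A * k / (n - 1)))
            for k in range(n)]

print(arc_points(0.0, 0.0, 1.0, math.pi / 2, 3))
# [(1.0, 0.0), (0.707..., 0.707...), (~0.0, 1.0)]
```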
How to prove the chain rule with respect to weak derivatives?
You can use approximation to prove the chain rule. I.e., when you have a Sobolev function $u$, you can always build a smooth sequence $u_n$ such that $u_n\to u$ in $W^{1,p}$, then work with $u_n$ instead of $u$ first, and finally pass to the limit. For a good proof, I would recommend you to read this book, page 129, Theorem 4.
Probability distribution and convergence almost surely
Note that the event $\left\{\omega\in\Omega : \sum_{l=1}^{+\infty}X_l(\omega)\ \text{converges}\right\}$ may be written as $$\bigcap_{i\geqslant 1}\bigcup_{N\geqslant 1}\bigcap_{N\leqslant m\leqslant n}\left\{\left|\sum_{l=m}^n X_l\right|\leqslant\frac 1i \right\}.$$ Therefore, the probability of this event depends only on the distribution of the sequence $\left(X_l\right)_{l\geqslant 1}$. In the context of the question, the sequences $\left(X_l\right)_{l\geqslant 1}$ and $\left(Y_l\right)_{l\geqslant 1}$ have the same distribution.
Calculating the distance between 2 points being given 3 lengths and 2 angles.
Applying the law of cosines to $a$ and $b$, you can obtain the opposite side (I call it $e$): \begin{equation} e^2=a^2+b^2-2ab\cos\alpha\\ e=\sqrt{a^2+b^2-2ab\cos\alpha} \end{equation} Now you can use the law of sines, getting the angle opposite to the side $a$ (I call it $\gamma$): \begin{equation} \frac{\sin\gamma}{a}=\frac{\sin\alpha}{e}\\ \sin\gamma=a\frac{\sin\alpha}{e}\\ \gamma=\arcsin({a\frac{\sin\alpha}{e}})\\ \gamma=\arcsin\left({a\frac{\sin\alpha}{\sqrt{a^2+b^2-2ab\cos\alpha}}}\right) \end{equation} Finally you can get $d$ by applying the law of cosines again: \begin{equation} d=\sqrt{e^2+c^2-2ec\cos(\gamma+\beta)}\\ d=\sqrt{a^2+b^2-2ab\cos\alpha+c^2-2ec\cos(\gamma+\beta)}\\ d=\sqrt{a^2+b^2+c^2-2ab\cos\alpha-2ec\cos(\gamma+\beta)}\\ d=\sqrt{a^2+b^2+c^2-2ab\cos\alpha-2c\sqrt{a^2+b^2-2ab\cos\alpha}\cos\left[\arcsin\left({a\frac{\sin\alpha}{\sqrt{a^2+b^2-2ab\cos\alpha}}}\right)+\beta\right]} \end{equation}
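For convenience, the whole chain as a small Python function (a sketch; the arcsine step assumes $\gamma$ is acute, otherwise you'd need $\pi-\gamma$):

```python
import math

def distance(a, b, c, alpha, beta):
    # Law of cosines for e, law of sines for gamma, law of cosines for d.
    # Angles are in radians.
    e = math.sqrt(a * a + b * b - 2 * a * b * math.cos(alpha))
    gamma = math.asin(a * math.sin(alpha) / e)
    return math.sqrt(e * e + c * c - 2 * e * c * math.cos(gamma + beta))

print(distance(3.0, 4.0, 2.0, math.pi / 2, math.pi / 6))
```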
Suppose that $\{x_n\}_n$ satisfies $|x_n - x_{n+1}|\leq\frac{1}{2^n},\;\forall n\in\mathbb{N}$. Show that $\{x_n\}$ converges.
Hint: If you know that $|x_n-x_{n+1} | \leq 2^{-n}$ for every $n$, then $$|x_n - x_{n+2}| \leq |x_n-x_{n+1}| + |x_{n+1}-x_{n+2}| \leq 2^{-n} + 2^{-(n+1)}$$ Now you can prove that $$\sum_{k=n}^{m} 2^{-k} = 2^{-n}\left(2 - 2^{n-m}\right) = 2^{1-n}-2^{-m}$$ Can you see how to proceed?
If $f(a) < g(a)$, prove there exists $\delta > 0$ so $f(x) \leq g(x)$ for every $x \in (a-\delta, a+\delta).$
WLOG you can assume that $f\equiv0$ (if not, take $G(x)=g(x)-f(x)$). Applying the continuity of $g$ at $a$ with $\varepsilon=g(a)>0$, there is $\delta>0$ such that for $x\in(a-\delta,a+\delta)$, $|g(x)-g(a)|<\varepsilon=g(a)$. Hence $-g(a)<g(x)-g(a)<g(a)$ and, in particular, $g(x)>0$ for $x\in (a-\delta,a+\delta)$.
Find the value of this real integral by complex contour integral $\int _0 ^{2\pi} e^{\sin\theta} \sin(\cos \theta)d\theta$
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\on}[1]{\operatorname{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} &\bbox[5px,#ffd]{\int_{0}^{2\pi}\expo{\sin\pars{\theta}} \sin\pars{\cos\pars{\theta}}\,\dd\theta} \\[5mm] = &\ \Im\int_{0}^{2\pi}\expo{\sin\pars{\theta}} \expo{\ic\cos\pars{\theta}}\,\,\dd\theta \\[5mm] = &\ \Im\int_{0}^{2\pi}\expo{\ic\bracks{-\ic\sin\pars{\theta} + \cos\pars{\theta}}}\quad\dd\theta \\[5mm] = &\ \Im\int_{0}^{2\pi}\expo{\ic\expo{-\ic\theta}}\,\dd\theta = \Im\oint_{\verts{z}\ =\ 1}\expo{\ic/z} \,{\dd z \over \ic z} \\[5mm] = &\ -\,\Re\oint_{\verts{z}\ =\ 1}\expo{\ic/z} \,{\dd z \over z} = -\,\Re\oint_{\verts{z}\ =\ 1} {\expo{\ic z} \over z}\,\dd z \\[5mm] = &\ -\Re\pars{2\pi\ic \expo{\ic 0}} = \bbx{\large 0} \\ & \end{align}
Ornstein-Uhlenbeck operator and divergence operator
So since I was missing something and didn't get any answer, I tried to start all over from Nualart's book. The point that I missed was the invariance of the product Wiener measure under rotations. Now with this fact we have that $\mathbb{E}[\langle F,P_t u\rangle]=\mathbb{E}[\langle P_tF,u\rangle]$ \begin{align*} \mathbb{E}[\delta(P_tu)\phi]&=\mathbb{E}[\langle P_t u,\nabla \phi\rangle_H]\\ &=\mathbb{E}[\langle u,P_t\nabla\phi\rangle_H]\\ &=\mathbb{E}[\langle u,e^t\nabla P_t\phi\rangle_H]\\ &=e^t\mathbb{E}[\delta(u)P_t\phi] \end{align*} where we used the dual result at the third equality. This completes the proof.
Modelling forces acting on a sail
Assuming no edge effects/turbulence, we can calculate the force of wind on a single sail using the drag equation. $$\vec{F_D} = \tfrac12 \rho \vec{u}^2 C_D A$$ where the drag coefficient $C_D$ is going to be fairly high, for a concave sail. In general, however, the wind may not be hitting the sail orthogonally, but rather at some angle. We can generalise the drag equation as such: $$\vec{F_{sail}} = \tfrac12 \rho \vec{u}^2 C_D (\hat{\vec{u}} \cdot \vec{S})$$ where $\vec{S}$ is the vector area of the sail. We take the dot product of this vector with the unit velocity vector to resolve the wind force in the direction of the sail. This is of course an approximation, as the concave shape of the sail will come into play, but probably a good enough one. That pretty much explains the basic situation for a single sail. Now, for multiple sails, you would of course simply combine the forces on the individual sails, which will all have their own values of $C_D$ and $\vec{S}$. $$\vec{F_{sails,total}} = \sum \vec{F_{sail}}$$ I suggest you familiarise yourself with the mechanics and specific equations used here, and use that as a basic model to start. For some general information on the physics of sailing, you may find this page helpful.
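As a rough illustration, here is a minimal numerical sketch of that sum (numpy assumed; the wind, drag coefficients, vector areas, and the choice to direct each force along the wind are all made-up assumptions for the demo):

```python
import numpy as np

rho = 1.225  # air density, kg/m^3

def sail_force(u, C_D, S):
    # Magnitude 0.5*rho*|u|^2*C_D*(u_hat . S), directed along the wind
    # (one simple choice consistent with the approximation above).
    u, S = np.asarray(u, float), np.asarray(S, float)
    speed = np.linalg.norm(u)
    u_hat = u / speed
    return 0.5 * rho * speed**2 * C_D * np.dot(u_hat, S) * u_hat

u = np.array([8.0, 2.0, 0.0])                              # wind velocity, m/s
sails = [(1.2, [10.0, 0.0, 0.0]), (1.1, [6.0, 3.0, 0.0])]  # (C_D, vector area)
total = sum(sail_force(u, C_D, S) for C_D, S in sails)
print(total)   # total force vector on all sails, in newtons
```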
How is the notion of $C^1$-close submanifolds defined?
I do not think there is a standard definition; here is one possible definition: Two closed submanifolds $M_1, M_2$ of a manifold $N$ are $C^1$-$\epsilon$-close if there exist $C^1$-embeddings $f_i: M\to N, i=1, 2$ such that $f_i$ is a diffeomorphism onto $M_i$ and $df_i: SM\to TN$ are $\epsilon$-close in the topology of uniform convergence, $i=1,2$. Here $SM$ is the unit sphere bundle over $M$. You have to put Riemannian metrics on $M$ and $N$ to make sense of this. If you do not like using Riemannian metrics, you have to work with compact-open topology; then instead of unit sphere bundles you will be using tangent bundles. This will also allow you to drop the assumption that the $M_i$'s are closed manifolds. A manifold is closed if it is compact and has empty boundary.
Finding a differential equation when a half life is known
This was posted on the assumption that a difference equation was required You have $C_{n+5730}=\cfrac 12 C_n$ which can be written $$2C_{n+5730}-C_n=0$$ or$$2C_{n}-C_{n-5730}=0$$
Solving combinatorics
We need to solve $\binom n3-\binom n2=14$, which can be rewritten as $$\frac{n(n-1)(n-2)}6-\frac{n(n-1)}2=14$$ $$n(n-1)(n-5)=84$$ By inspection we see that $n=7$ satisfies this equation, as well as the original. So $|X|=7$, which is also the number of $1$-element subsets as $\binom n1=n=7$.
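A one-line brute-force check in Python confirms that $n=7$ is the only solution in a reasonable range:

```python
from math import comb

# Solve C(n,3) - C(n,2) = 14 by brute force over a generous range.
print([n for n in range(3, 100) if comb(n, 3) - comb(n, 2) == 14])  # [7]
```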
Maximize the revenue in selling hotel rooms
If $x$ is the number of price boosts, then the cost of the room is $175+10x$, not $175+x$.
Numerical range of selfadjoint elements in non-unital C*-algebras
It is well-known that, since $A$ is non-unital, $$\tag1 \sigma(a)=\{\tau(a):\ \tau\ \text{ is a character of }C^*(a)\}\cup\{0\}. $$ Note that characters are states. This shows that $\sigma(a)\setminus\{0\}\subset V(a)$. Take any state $\varphi$ on $A$ and extend it (uniquely!) to $\tilde A$. In $\tilde A$, we have $$(\min\sigma(a))\,I\leq a\leq (\max\sigma(a))\,I.$$ Applying the extension of $\varphi$ to this inequality, we get $$ \min\sigma(a)\leq \varphi(a)\leq\max\sigma(a). $$ Thus $$\tag2 \sigma(a)\setminus\{0\}\subset V(a)\subset [\min\sigma(a),\max\sigma(a)]. $$ As $V(a)$ is convex, this inclusion is fairly tight: if $0\not\in\{\min\sigma(a),\max\sigma(a)\}$, the convexity implies $V(a)=[\min\sigma(a),\max\sigma(a)]$. So the only pathological case is that where either $\min\sigma(a)=0$ or $\max\sigma(a)=0$. Both cases are similar, since they are switched by considering $-a$. Say $\min\sigma(a)=0$. If there exists a state $\varphi$ with $\varphi(a)=0$, then $V(a)=[0,\max\sigma(a)]$. Otherwise, $0\not\in V(a)$; in that case $0$ cannot be an isolated point in $\sigma(a)$ (if $0$ is isolated, there exists a nonzero projection $p$ with $pa=0$; then any state $\psi$ on $pAp$ induces a state $\tilde\psi$ on $A$ by $\tilde\psi(x)=\psi(pxp)$, and $\tilde\psi(a)=0$). So we have that $[t,\max\sigma(a)]\subset V(a)$ for all $t>0$. Thus $$ V(a)=(0,\max\sigma(a)]. $$
Given that $\sum_{n=1}^\infty a_n$ is convergent prove that $\sum_{n=1}^\infty \left(\frac{1+\sin(a_n)}{2}\right)^n$ also converges.
Hint: $|a_n| \to 0$, so for sufficiently large $n$ you'll have $|1+\sin(a_n)|/2 < 3/4$.
If $\theta = (59.3\pm 1.2)^{\circ}$, find $\tan(\theta)$ and its uncertainty.
As Harish has already shown, the maximum uncertainty is much smaller than what you have found. Given that you are using the general formula for error propagation (using partial derivatives, etc.), I'm guessing you are not looking for the maximum uncertainty but for the "one-sigma" uncertainty. So your answer should be smaller than what he found. Also, he has accidentally found twice the maximum error, since he subtracted the lower bound from the top bound (the full range) instead of quoting half the range. Your problem is that you are not paying careful attention to units. In your second-last line you say $\sigma_{\tan{(\theta)}} = 1.17 \times 1.2$. But notice that your "1.2" came from the original uncertainty in $\theta$, so it is really $1.2^\circ$. This means your answer for the uncertainty in $\tan{\theta}$ is in degrees, which can't be right since $\tan{\theta}$ should be a dimensionless number. You need to put $\sigma_\theta$ into radians. This will give you a far more reasonable answer.
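A minimal numerical sketch of that one-sigma propagation, $\sigma_{\tan\theta} = \sec^2(\theta)\,\sigma_\theta$ with $\sigma_\theta$ in radians (the numbers in the comment are approximate):

```python
import math

theta = math.radians(59.3)
sigma_theta = math.radians(1.2)  # the crucial degrees-to-radians conversion

tan_theta = math.tan(theta)
sigma_tan = sigma_theta / math.cos(theta) ** 2  # |d(tan)/d(theta)| * sigma

print(f"tan(theta) = {tan_theta:.3f} +/- {sigma_tan:.3f}")  # ~1.684 +/- 0.080
```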
Notation for a divergent series
We write $$\sum_{n=1}^{\infty}{1/n}=\infty$$ to mean "the harmonic series diverges", for example.
Compute the flux through a paraboloid
There are a couple of other ways to find the flux through the paraboloid portion easily. The paraboloid has axial symmetry about the $z$-axis; parametrizing it over the unit disk with outward (downward) normal proportional to $\langle 2x, 2y, -1 \rangle$, the flux integrand for $\mathbf{F} = \langle x, y, 1 \rangle$ is $$\mathbf{F} \cdot \mathbf{n} \; dS \ = \ (2x^2 + 2y^2 - 1) \; dA \ = \ (2r^2 - 1) \; dA \ ,$$ which depends only on $r$. Integrating over the unit disk gives $$\int_0^{2\pi} \int_0^1 (2r^2 - 1) \, r \; dr \, d\theta \ = \ 2\pi \left( \frac{1}{2} - \frac{1}{2} \right) \ = \ 0 \ ,$$ so the outward flux through the outer band $r > 1/\sqrt{2}$ exactly cancels the inward flux through the inner region $r < 1/\sqrt{2}$: there is complete cancellation of flux over the paraboloid's entire surface. One can also apply the Divergence Theorem. The total flux out of the "capped" paraboloid's volume is given by $$ \iiint_V \ \nabla \cdot \mathbf{F} \ \ dV \ \ = \ \ \iiint_V \ 2 \ \ dV \ \ = \ \ 2 \ \int_0^1 \ \pi \ [ \ r(z) \ ]^2 \ \ dz \ \ = \ \ 2 \pi \int_0^1 \ z \ \ dz \ \ = \ \ 2 \pi \ \cdot \ \left( \ \frac{1}{2}z^2 \ \right) \ \Big\vert_0^1 \ = \ \pi \ \ .$$ But you have already established that the surface flux through the "cap" at $ \ z \ = \ 1 \ $ is $$ \iint_S \ \mathbf{F} \cdot \mathbf{n} \ \ dS \ = \ \ \iint_S \ 1 \ \ dA \ = \ 1 \ \cdot \ \pi \ \cdot \ 1^2 \ = \ \pi \ \ , $$ so all the net flux is through the "cap", leaving zero net flux through the paraboloid.
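As a quick symbolic check of the Divergence Theorem computation, here is a short sympy sketch (just the one-variable slice integral, nothing more):

```python
import sympy as sp

z = sp.symbols('z', nonnegative=True)

# div F = 2; slice the solid between z = x^2 + y^2 and z = 1 into disks
# of radius r(z) = sqrt(z), each contributing 2 * pi * r(z)^2 dz
total_flux = sp.integrate(2 * sp.pi * z, (z, 0, 1))
print(total_flux)  # pi -- all of it passes through the flat cap at z = 1
```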
Existence of ergodic joining
I think I can answer my question. Philosophy: The equation $$\mu= \int (\pi_X)_*\rho_z \,d\tau(z)$$ is, in some sense, a convex combination of the $(\pi_X)_* \rho_z$; thus $\mu$ being ergodic means that $\mu$ is extreme, and so almost all of the $(\pi_X)_* \rho_z$ must coincide, hence equal $\mu$. Rigorous proof: It is enough to show that for every continuous $f:X \to \mathbb{R}$ we have $\int f \,d\mu= \int f \,d (\pi_X)_{*} \rho_z$ for almost all $z$. (Since $C(X)$ has a countable dense set, we may then find a conull set of $z$ for each continuous function in our countable dense set, and take a countable intersection on which the functionals $\int \cdot \,d\mu$ and $\int \cdot \,d(\pi_X)_*\rho_z$ agree.) Suppose for contradiction that this were not the case; then $\int f \,d (\pi_X)_{*} \rho_z$ is not an almost-everywhere constant function of $z$, so we may find a non-trivial partition $Z=Z_1 \sqcup Z_2$ such that $Z_1$ and $Z_2$ are both not $\tau$-null and $$\left\{ \int f \,d(\pi_X)_*\rho_z \;\middle|\; z \in Z_1 \right\}< \left\{ \int f \,d(\pi_X)_*\rho_z \;\middle|\; z \in Z_2 \right\}$$ in the sense that any element of the left side is strictly smaller than any element of the right side (this is a standard measure-theoretic exercise which hinges on the fact that the mapping $$z \mapsto \int f \,d(\pi_X)_*\rho_z$$ is a measurable real-valued function). Then we may express $$\mu= \tau(Z_1)\cdot\frac{1}{\tau(Z_1)} \int_{Z_1} (\pi_X)_*\rho_z \,d\tau(z) + \tau(Z_2)\cdot \frac{1}{\tau(Z_2)} \int_{Z_2} (\pi_X)_*\rho_z \,d\tau(z),$$ which is a genuine convex combination. It is non-trivial since the two averaged functionals disagree at $f$, by construction. This contradicts the extremality of the ergodic measure $\mu$.
A holomorphic function which takes real values at $ 1/n $ has real coefficients
Answered in a comment: the function $g(z)=\overline{f(\bar z)}$ is holomorphic and $g(1/n)=f(1/n)$ for every $n$, hence $f=g$ by the identity theorem. — user8268 Here $\{1/n\}$ could be replaced by any subset of $\mathbb{R}$ with a limit point in the domain of $f$.
Pythagorean-Hodograph curve's control points
They are just numbers, and they can be anything you like (except for a few corner cases). Whatever numbers you choose, you'll get a PH cubic. These numbers don't have any geometric significance (as far as I know) -- they come from algebraic reasoning about the curve. If you want a more geometric discussion, look at theorem 18.1 in Farouki's book.
Dimension of an algebraic closure as a vector space over its base field.
Here are two similar non-trivial examples of what may happen. $\bullet$ For an algebraic closure $\overline {\mathbb Q_p}$ of the $p$-adic field $\mathbb Q_p$, we have $\text {card} \:{\mathbb Q_p}=\text {card} \: \overline {\mathbb Q_p}=2^{\aleph_0}$ but $[\overline {\mathbb Q_p}:\mathbb Q_p]=\aleph_0$. $\bullet$$\bullet$ For the field of Puiseux series, which is an algebraic closure $\overline {\mathbb C((t))}=\bigcup_{n=1,2,3,\cdots} \mathbb C((t^{1/n}))$ of the field of Laurent series $\mathbb C((t))$, we have $\text {card}\:\overline {\mathbb C((t))}=\text {card} \:{\mathbb C((t))}=2^{\aleph_0}$ but $[ \overline {\mathbb C((t))}:\mathbb C((t))]=\aleph_0$. Edit: Let me show that for any infinite cardinal $\aleph$ there exists a field $k$ with cardinality $\aleph$, so that $$\text {card}(k)=\text {card}(\bar k)=\aleph.$$ We just take for $k$ the field of rational functions $k=\mathbb Q(x_a\mid a\in A)$ in a family of indeterminates $x_a$ indexed by a set $A$ of cardinality $\aleph$. To prove that $k$ has cardinality $\aleph$, it is enough to prove that the corresponding polynomial ring $P=\mathbb Q[x_a\mid a\in A]$ has cardinality $\aleph$. To prove that $P$ has cardinality $\aleph$, it is enough to prove that the set of monomials $q\cdot x_{a_1}^{n_{a_1}}\cdots x_{a_r}^{n_{a_r}}\quad (q\in \mathbb Q, r\in \mathbb N)$ has cardinality $\aleph$, and this is true because that set of monomials has cardinality $\text {card} (\mathbb Q)\cdot \aleph=\aleph_0\cdot \aleph=\aleph$. [A little cardinal arithmetic has to be used to show that the set of pure monomials $ x_{a_1}^{n_{a_1}}\cdots x_{a_r}^{n_{a_r}}$ indeed has cardinality $\aleph$, the key fact for this result being that the set $\mathcal P_{fin} (A)$ of finite subsets of $A$ has the same cardinality as $A$, namely $\aleph$.] The above field has characteristic zero, but to obtain a field $k$ of cardinality $\aleph$ and characteristic $p$, just replace $\mathbb Q$ by $\mathbb F_p$ in the above construction.
Why are prime ideals and maximal ideals the same in a finite integral domain?
Let $A$ be a nonzero commutative ring and let $I$ be an ideal of $A$. If $I$ is a prime ideal of $A$, then $A/I$ is an integral domain, and we have: Lemma. Any finite integral domain is a field. Proof. Consider a finite integral domain $D$ and let $d\in D\setminus\{0\}$. The map $D\to D,\ x\mapsto dx$ is injective because $dx=dy\implies d(x-y)=0\implies x=y$. Since $D$ is finite, this map is bijective. Hence there exists $d'\in D$ such that $dd'=1$. $\square$ Hence, if $A/I$ is finite (which is in particular the case when $A$ itself is finite), then $A/I$ is a field, and so $I$ is a maximal ideal. Remark. It wasn't necessary to assume that $A$ is finite, only that $A/I$ is finite. One may think, for example, of the case $A=\mathbb{Z}$ and $I=n\mathbb{Z}$. In this special case we get: $\left(\mathbb{Z}/n\mathbb{Z},+,\times\right)$ is an integral domain iff $\left(\mathbb{Z}/n\mathbb{Z},+,\times\right)$ is a field, which can easily be proved directly.
Find $\lim_{n \to \infty}\frac1{\ln^2n}\left( \frac{\ln 2}{2} + \frac{\ln 3}{3} +\cdots + \frac{\ln n}{n}\right)$
Define $$ S_n=\frac{\ln 2}{2} + \frac{\ln 3}{3} + ... + \frac{\ln n}{n} $$ The function $\ln(x)/x$ is decreasing for $x\geq 3$, so $$ \frac{\ln 2}{2} + \int_3^n\frac{\ln(x)}{x}\,dx \leq S_n\leq \frac{\ln 2}{2} + \frac{\ln 3}{3} + \int_3^{n}\frac{\ln(x)}{x}\,dx, $$ $$ \frac{\ln(2)}{2} + \frac{\ln(n)^2}{2}-\frac{\ln(3)^2}{2}\leq S_n\leq\frac{\ln(2)}{2}+\frac{\ln(3)}{3} + \frac{\ln(n)^2}{2}-\frac{\ln(3)^2}{2}. $$ Now divide through by $\ln(n)^2$ and apply the squeeze theorem to conclude $$ \lim_{n\to\infty}\frac{S_n}{\ln(n)^2}=\frac{1}{2}. $$
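For what it's worth, a quick numerical check (a sketch) shows $S_n/\ln^2 n$ creeping up toward $1/2$, slowly, as the analysis predicts:

```python
import math

def ratio(n):
    s = sum(math.log(k) / k for k in range(2, n + 1))
    return s / math.log(n) ** 2

for n in (10**2, 10**4, 10**6):
    print(n, ratio(n))  # approaches 0.5 from below
```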
How to show that the ring $S/A$ has no zero divisors? (Hungerford, Algebra, Problem 12, Chapter III, Section 2)
If $(s,m) \in A$ then you are done, so suppose not. Then $sx+mx \neq 0$ for some choice of $x \in R$; denote the resulting value of $sx+mx$ by $t$. What we have is that $rt + nt = 0$, while $0 \neq t \in R$. We want to deduce that $(r,n) \in A$. Note that this had better be true if $S/A$ is to have no zero-divisors, since the image of $t$ (or, more precisely, $(t,0)$) in $S/A$ is non-zero, while the image of $(r,n)$ in $S/A$ multiplies with $t$ to give $0$. So we can rephrase the problem as follows: given $(r,n) \in S$ and $0 \neq t \in R$ such that $rt + n t = 0,$ prove that $rx + nx = 0$ for all $x \in R$. For this, you have to use the no-zero-divisor property of $R$ (since it is the one non-trivial way you have to verify that an element of $R$ is zero). Here are some more precise hints: You want to conclude that $r x + n x $ is zero, and so you need to multiply it by something non-zero and get zero. The only non-zero element you have at hand is $t$, so you will have to use it. You will find that, annoyingly, in the equation $r t + n t = 0$, the element $t$ is on the wrong side of the element $r$; see if you can move it to the other side, i.e. prove that this is equivalent to $t r + n t = 0,$ which will be more useful in carrying out the first step.
Closed form for this summation
$$ \begin{align} S &=\mathrm{Im}\left(\sum_{n=1}^\infty e^{n(-1+i)}\right)\\ &=\mathrm{Im}\left(\frac{e^{-1+i}}{1-e^{-1+i}}\right)\\ &=\mathrm{Im}\left(\frac{e^{-1+i}}{1-e^{-1+i}}\frac{1-e^{-1-i}}{1-e^{-1-i}}\right)\\ &=\mathrm{Im}\left(\frac{e^{-1+i}-e^{-2}}{1-2e^{-1}\cos(1)+e^{-2}}\right)\\ &=\frac{e^{-1}\sin(1)}{1-2e^{-1}\cos(1)+e^{-2}}\\ &=\frac{e\sin(1)}{e^2-2e\cos(1)+1}\\ \end{align} $$
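A numerical cross-check of the closed form against the partial sums of $\sum_{n\ge1} e^{-n}\sin(n)$ (a quick sketch):

```python
import math

partial = sum(math.exp(-n) * math.sin(n) for n in range(1, 60))
closed = math.e * math.sin(1) / (math.e**2 - 2 * math.e * math.cos(1) + 1)
print(partial, closed)  # both ~0.4196
```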
What is the difference between a quadratic equation and a quadratic function?
My explanation is that a quadratic equation is a statement that a quadratic expression equals zero, in general: $ax^2+bx+c=0$. A quadratic function is one where the right-hand side (call it $f$) is allowed to vary with $x$, thus giving: $f(x)=ax^2+bx+c$.
What is the complement of $S$ in $\mathbb{R}^3$?
You did exactly the right thing by including your own solution. I'm going to write elements of $S$ as $(s, u, u)$ where $u = s + t$, rather than in terms of $x$ and $y$, because that gives me the freedom to use $x, y, z$ for the first, second, third coordinates. This description is complete, because if I give you $(s, u, u)$, you can recover from it $t = u-s$, so every item of the form $(s, u, u)$ is in your $S$, and every item in $S$ has the form $(s, u, u)$. As you have observed, any point in $S$ has $y = z$, but $x$ can be anything. Therefore points with $y \ne z$ (and with any $x$-value at all) are not in $S$. So your description of the complement of $S$ as the union of those two sets is correct. Are the two sets disjoint? Clearly yes, for $y > z$ and $y < z$ are incompatible. Are the two sets open? Yes, each is an open half-space. That takes a little proving, but not much. The easiest proof I can see is to consider the map $$f: \Bbb R^3 \to \Bbb R : (x, y, z) \mapsto y-z.$$ This map is evidently continuous. But your two sets (let's call them $P$ and $Q$) are simply $$ P = f^{-1}( (0, \infty) )\\ Q = f^{-1}( (-\infty, 0) ) $$ where by $(0, \infty)$, I mean the open interval consisting of all positive reals. Because the preimage of an open set under a continuous map is always open, your set $P$ is open. A similar argument works for $Q$. Nice work! One last thing: you've asked a second question (how can we find the complement of a set in higher dimensions?), which is generally frowned upon, but I'll give a quick answer here anyhow: in general, it's tough. Finding nice mappings between such complements and things we "understand" (like axis-aligned half-spaces, etc.) is often a big challenge. For instance, if you take an arc in 3-space, tie a (loose) knot in it, and then glue the ends together, you get something that topologists call a "knot". The complement of this knot turns out to sometimes be topologically quite complicated, and tools for simply describing knot-complements are themselves nontrivial. And that's for a single arc in 3 dimensions!
Equivalent statement to Quadratic reciprocity
Your statement that "$\left(\frac pq\right)$ depends only on the equivalence class of $q$ modulo $p$" is not correct. Quadratic reciprocity relates $\left(\frac pq\right)$ and $\left(\frac qp\right)$ with a power of $-1$, namely $(-1)^{\frac{p-1}{2}\frac{q-1}{2}}$, that depends on the actual value of $q$ and not just on its residue class modulo $p$. $\left(\frac qp\right)$ depends on just $q$'s residue class but $\frac{q-1}{2}$ needs more than that. If you are "looking for an 'easy' way to derive quadratic reciprocity" remember that it was developed by multiple mathematicians, including the greatest mathematician of all time, J.C.F. Gauss. Gauss considered it to be his greatest work, so I very much doubt that you will find an "easy" way to prove it. Easier, perhaps, but not easy. The Wikipedia link states that over $200$ proofs have been published.
Probability of ordered sequence of events?
The probability of getting 3, 5, and 6 in exactly that order is in fact $\frac{1}{6^3}$. Getting them in any order is: $\frac{1}{2} \cdot \frac{1}{3} \cdot \frac{1}{6} = \frac{1}{36}$ because first roll, you have one-half chance of getting 3, 5, or 6. The second roll, you have one-third (or two out of 6) chance of getting one of the remaining two numbers. On the third roll, you have one-sixth chance of getting the final number.
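If you want to double-check this, an exhaustive enumeration of all $6^3$ equally likely outcomes is cheap (a quick sketch):

```python
from itertools import product

outcomes = list(product(range(1, 7), repeat=3))  # all 216 triples of rolls
exact = sum(o == (3, 5, 6) for o in outcomes)
any_order = sum(sorted(o) == [3, 5, 6] for o in outcomes)
print(exact, any_order)  # 1 and 6, i.e. 1/216 and 6/216 = 1/36
```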
If a holomorphic bundle is smoothly trivial, is it holomorphically trivial?
Let $X$ be a paracompact topological space. Complex line bundles on $X$ are completely determined up to isomorphism by their first Chern class. More precisely, if $\operatorname{Vect}_1^{\mathbb{C}}(X)$ denotes the collection of isomorphism classes of complex line bundles on $X$, then $c_1 : \operatorname{Vect}_1^{\mathbb{C}}(X) \to H^2(X, \mathbb{Z})$ is an isomorphism. In particular, a complex line bundle $L$ is trivial if and only if $c_1(L) = 0$. If $X$ is a compact smooth manifold, isomorphism classes of topological complex line bundles coincide with isomorphism classes of smooth complex line bundles. Now let $X$ be a complex manifold. The collection of isomorphism classes of holomorphic line bundles on $X$ has a group structure given by tensor product (the inverse is dual). This group is called the Picard group and denoted $\operatorname{Pic}(X)$. A transition functions argument shows that $\operatorname{Pic}(X) \cong H^1(X, \mathcal{O}^*)$. Associated to any complex manifold, we have a short exact sequence of sheaves $$0 \to \mathbb{Z} \xrightarrow{\times 2\pi i} \mathcal{O} \xrightarrow{\exp}\mathcal{O}^* \to 0$$ called the exponential sequence. The long exact sequence in cohomology gives $$\dots \to H^1(X, \mathcal{O}) \to H^1(X, \mathcal{O}^*) \to H^2(X, \mathbb{Z}) \to H^2(X, \mathcal{O}) \to \dots$$ The map $H^1(X, \mathcal{O}^*) \to H^2(X, \mathbb{Z})$ is the composition of the isomorphism $H^1(X, \mathcal{O}^*) \to \operatorname{Pic}(X)$ with the first Chern class. As $H^k(X, \mathcal{O}) \cong H^{0,k}_{\bar{\partial}}(X)$, we see that if $h^{0,1} = 0$, the map $H^1(X, \mathcal{O}^*) \to H^2(X, \mathbb{Z})$ is injective, and if $h^{0,2} = 0$, the map $H^1(X, \mathcal{O}^*) \to H^2(X, \mathbb{Z})$ is surjective. In particular, if $h^{0, 1} > 0$, the map $H^1(X, \mathcal{O}^*) \to H^2(X, \mathbb{Z})$ may have non-trivial kernel (in fact, it necessarily does, see the discussion below the line). That is, there could be a non-trivial holomorphic line bundle $L$ with $c_1(L) = 0$, i.e. a holomorphic line bundle which is smoothly trivial. The simplest example of a complex manifold with $h^{0,1} > 0$ is a genus one Riemann surface which has $h^{0,1} = 1$. An explicit non-trivial holomorphic line bundle is $\mathcal{O}(x_0 - x_1)$ with $x_0, x_1$ distinct points of the surface. It has first Chern class zero so it is smoothly trivial, but it is not holomorphically trivial as it has no global holomorphic sections other than the zero section: a non-zero section would have associated divisor $x_0 - x_1$ but this is impossible as divisors associated to holomorphic sections are always effective. By investigating the long exact sequence more carefully, we can see how many holomorphic line bundles have first Chern class zero and hence are smoothly trivial; we denote the collection of isomorphism classes of such line bundles by $\operatorname{Pic}^0(X)$. $$\dots \to H^0(X, \mathcal{O}) \to H^0(X, \mathcal{O}^*) \to H^1(X, \mathbb{Z}) \to H^1(X, \mathcal{O}) \to H^1(X, \mathcal{O}^*) \to H^2(X, \mathbb{Z}) \to \dots$$ Note that $H^0(X, \mathcal{O}) \cong \Gamma(X, \mathcal{O}) = \mathcal{O}(X)$ is the collection of holomorphic functions on $X$, and $H^0(X, \mathcal{O}^*) \cong \Gamma(X, \mathcal{O}^*)$ is the collection of nowhere-zero holomorphic functions on $X$. 
If $X$ is compact, the only holomorphic functions are constant functions, so $H^0(X, \mathcal{O}) \cong \mathbb{C}$, $H^0(X, \mathcal{O}^*) \cong \mathbb{C}^*$ and the map $H^0(X, \mathcal{O}) \to H^0(X, \mathcal{O}^*)$ is nothing but the exponential map $\mathbb{C} \to \mathbb{C}^*$. As this is surjective, $H^0(X, \mathcal{O}^*) \to H^1(X, \mathbb{Z})$ is the zero map, and hence $H^1(X, \mathbb{Z}) \to H^1(X, \mathcal{O})$ is injective. By exactness, the kernel of $H^1(X, \mathcal{O}^*) \to H^2(X, \mathbb{Z})$ is precisely the image of $H^1(X, \mathcal{O}) \to H^1(X, \mathcal{O}^*)$ which is isomorphic to the quotient of $H^1(X, \mathcal{O})$ by the kernel. Again by exactness, the kernel of this map is equal to the image of the map $H^1(X, \mathbb{Z}) \to H^1(X, \mathcal{O})$ which is isomorphic to $H^1(X, \mathbb{Z})$ as the map is injective. Therefore, if $X$ is compact, $$\operatorname{Pic}^0(X) \cong \frac{H^1(X, \mathcal{O})}{H^1(X, \mathbb{Z})}$$ which is a complex torus of dimension $h^{0,1}$.
Relation between Riemann integrable and Lebesgue integrable functions
No, this is not right. Consider $f=0$ and $g=1_\mathbb{Q}$. In particular, Riemann integrability requires that for almost all $x$, $g$ is continuous in $x$. This is not the same as being equal to a continuous function almost everywhere. The difference is clear here: $g$ is equal to the continuous function $0$ a.e., but it is actually continuous nowhere.
Regarding periodic functions and their domain of definition
Let $M$ and $m$ be the maximum and the minimum of $f$ on $[0,p]$. Let $x\in\mathbb R$. Then there are $u\in [0,p]$ and $k\in\mathbb Z$ such that $x = u + kp$. Hence, $f(x) = \ldots\,\le\,M$ and $f(x) = \ldots\,\ge\,m$.
Application of Fatou's Lemma
Let us denote $(a,b)=X$. Here you have $f_n\ge0$ and $f_n(x)\rightarrow f(x)$ a.e., hence by Fatou's lemma $$\int\limits_{X}f(x)\, dx\le \liminf_n\int\limits_{X}f_n(x)\, dx.$$ Now $f_n(y)\chi_{(a,x)}\rightarrow f(y)\chi_{(a,x)}$ a.e., hence one can apply Fatou's lemma again to get $$F(x)=\int\limits_{a}^{x}f(y)\,dy\le\liminf_n\int\limits_{a}^{x}f_n(y)\,dy=\liminf_n F_n(x),$$ and since all the functions involved are nonnegative, applying Fatou's lemma once more gives $$\int\limits_X F(x)\,dx\le \int\limits_{X}\liminf_n F_n(x)\,dx\le \liminf_n\int\limits_{X}F_n(x)\,dx,$$ from which your result follows.
Divergence and directional derivative
Usually $\nabla$ only acts on the object immediately to its right. Another indication that that can't be the correct interpretation is that "$\cdot$" is an operation on two vectors, and $f$ is a scalar, while $\nabla f$ is a vector, so $f \cdot u$ doesn't make sense. Yes, the convention with $\nabla$ can be a bit confusing, so you may prefer to write the directional derivative as $\mathbf{u} \cdot \nabla f$, or $(\nabla f) \cdot \mathbf{u}$. (You also find it written as $(\mathbf{u} \cdot \nabla)f$ to emphasise that $\mathbf{u} \cdot \nabla$ is the directional derivative operator, which sends scalar fields to scalar fields.) If you think an expression can be ambiguous, it's always best to bracket it carefully, just as $\sin{x}y$ could mean either $(\sin{x})y$ or $\sin{(xy)}$. (The object you have written on the right-hand side of the last displayed equation is actually the divergence of the vector field $f\mathbf{u}$, i.e. $\nabla \cdot (f\mathbf{u})$.)
Evaluate the following using Simpson's rule
Your formula is wrong. With $f(x) = \sin(2x)/x\;$ you get, using the simple Simpson rule, $$\frac{0.6}{6}\left(f(1.0) + 4f(1.3)+f(1.6)\right)\approx 0.245897,$$ and if you take two strips, you get $$\frac{0.6}{12}\left(f(1.0) + 4 f(1.15)+ 2 f(1.3)+ 4 f(1.45)+f(1.6)\right) \approx 0.245982.$$ What is the purpose of the $\cos\;$ expression?
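For reference, here is a small composite Simpson implementation (a sketch, independent of any particular textbook's notation) that reproduces both values:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n subintervals (n must be even)."""
    h = (b - a) / n
    odd = sum(f(a + i * h) for i in range(1, n, 2))
    even = sum(f(a + i * h) for i in range(2, n, 2))
    return h / 3 * (f(a) + 4 * odd + 2 * even + f(b))

f = lambda x: math.sin(2 * x) / x
print(simpson(f, 1.0, 1.6, 2))  # ~0.245897 (one strip)
print(simpson(f, 1.0, 1.6, 4))  # ~0.245982 (two strips)
```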
Is the presentation of the generalized quaternion group of order 16 on Groupprops wrong, or am I missing something?
For the benefit of those who don’t want to wade through another website, you are talking about the presentation of the generalized quaternion group of order 16, given as $$Q_{16} = \langle a,b\mid a^4=b^2=abab\rangle.\tag{1}$$ There is no error or omission in this description; the only convention at play is that the equalities asserted (plus any that can be deduced from them) are the only ones assumed. There is no implicit “$=e$” when you give this type of presentation. Note that from $b^2=abab$ we conclude that $b=aba$, so $ab=ba^{-1}$ and $ba=a^{-1}b$. Therefore, $bab^{-1} = a^{-1}$. Therefore, $b^2=b(b^2)b^{-1} = ba^4b^{-1} = (bab^{-1})^4 = a^{-4}$. But we also know $b^2=a^4$, so $a^4=a^{-4}$, hence $a^8=e$. Thus, $a$ has order dividing $8$, which in turn gives that $b^4 = (b^2)^2 = (a^4)^2 = e$, and $b$ has order dividing $4$. A more common presentation (e.g., from Leedham-Green and McKay’s The Structure of Groups of Prime Power Order, page 28) is $$Q_{16} = \langle x,y\mid y^8 = 1, x^2=y^{4}, y^x=y^{-1}\rangle.\tag{2}$$ To verify that (1) and (2) define the same group, we have already established that $a^8=1$, $(b^{-1})^2=a^{4}$, and $a^{b^{-1}} = bab^{-1}=a^{-1}$, so there is a map from (2) to (1) by sending $y$ to $a$ and $x$ to $b^{-1}$. Conversely, we can define a morphism from (1) to (2) by mapping $a$ to $y$ and $b$ to $x^{-1}$, because $y^4=(x^{-1})^2$ (since $x^4=1$, so $x^2=x^{-2}$). Finally, we need to verify that $y^4=x^2=yx^{-1}yx^{-1}$. From $x^{-1}yx=y^{-1}$ we get $yx=xy^{-1}$, so $yx^{-1} = x^{-1}y^{-1}$. Therefore, $yx^{-1}yx^{-1} = x^{-1}y^{-1}yx^{-1} = x^{-2}=x^2$. Thus, $y$ and $x^{-1}$ satisfy the relations of $a$ and $b$, and we get a morphism from (1) to (2). The two maps are clearly inverses of each other, as they are inverses on the set of generators, so this shows the two presentations define the same group, namely $Q_{16}$.
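If you would like a machine check that presentation (1) defines a group of order $16$, sympy's finitely presented group tools can run the coset enumeration; this is a sketch assuming sympy's FpGroup API, with the relators encoding $a^4=b^2$ and $b^2=abab$:

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

F, a, b = free_group("a, b")
# a^4 = b^2 and b^2 = abab, written as relators
G = FpGroup(F, [a**4 * b**-2, b**2 * (a*b*a*b)**-1])
print(G.order())  # 16
```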
How many Hamiltonian cycles are there in a complete graph that must contain certain edges?
The question can be interpreted as asking how many ways there are to construct a Hamiltonian cycle under these constraints. Since we know $\{1,2\}$ must be in the cycle, it seems reasonable to assume that we start at vertex $1$ and the first edge traversed is $\{1,2\}$. From here, the rest of the cycle is given by a permutation of the remaining vertices $\{3,4, \dots, n\}$ under the constraint that 3 and 4 have to be consecutive. Similar to your idea of treating $\{3,4\}$ as a single vertex, we can permute these $n-3$ objects ($n$ vertices, minus the two we already used and treating 3 and 4 as a single unit) in $(n-3)!$ ways. Then there are 2 orientations for the $\{3,4\}$ edge, so we multiply to get a total of $2(n-3)!$ Hamiltonian cycles. In your example, we do indeed get $2(5-3)! = 4$ such Hamiltonian cycles. As a side note, you can generalize this result. If the $k$ "fixed edges" comprise $p$ vertex-disjoint paths, then the number of Hamiltonian cycles should be $2^{p-1}(n-k-1)!$. There are $p-1$ paths to orient, $n - k - p$ vertices which are still on their own, and $p-1$ paths to place as a single unit somewhere in the permutation (so we permute $n-k-1$ objects in this step).
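The count is easy to confirm by brute force for small $n$ (a sketch; fixing the traversal to start $1 \to 2$ kills the cycle symmetry, exactly as in the argument above):

```python
from itertools import permutations

def count_cycles(n):
    """Hamiltonian cycles of K_n containing edges {1,2} and {3,4}."""
    count = 0
    for perm in permutations(range(3, n + 1)):
        cycle = (1, 2) + perm  # closed up by the edge from the last vertex to 1
        m = len(cycle)
        if any({cycle[i], cycle[(i + 1) % m]} == {3, 4} for i in range(m)):
            count += 1
    return count

print(count_cycles(5), count_cycles(6))  # 4 and 12, i.e. 2*(n-3)!
```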
Estimate the proportion of Republicans in a certain district
You are confusing notation slightly, but in general you're on the right track. If you measure a proportion $\hat{p}$ out of a sample of size $n$, the confidence interval of the true proportion $p$ is given by $$ \hat{p} \pm z \sqrt{\frac{\hat{p}(1-\hat{p})}{n}},$$ where $z$ is the appropriate quantile of the standard normal distribution. You should interpret the question as: what $n$ do you need in order for the confidence interval to have a total length of $0.02$? First, observe that $z = 1.65$ in our case, because $\Pr(-1.65 <Z < 1.65) \approx 0.9$, which is the level of precision we desire. (You can verify this in a $Z\text{-score}$ lookup table.) Furthermore, because the total length of the interval (both plus and minus) should be $0.02$, we need that $$ z \sqrt{\frac{\hat{p}(1-\hat{p})}{n}} = 0.01.$$ Now if we know that $\hat{p}$ will be approximately $0.4$, we can substitute to obtain $$1.65\sqrt{\frac{(0.4)(0.6)}{n}}=0.01,$$ which, if you solve for $n$, yields $n=6534$. (part a) So you need a sample size of $6534$ in order for the confidence interval to be about $0.02$ wide, if you know that the real probability is somewhere around $0.4$. However, if you have no clue about the real probability, then you should assume a probability of $0.5$ (this is called the distribution with maximum entropy). Then, you can substitute again, to obtain $$1.65\sqrt{\frac{(0.5)(0.5)}{n}}=0.01,$$ which yields $n = 6807$. (part b)
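The arithmetic in both parts fits in a few lines (a sketch; $z = 1.65$ and the half-width $0.01$ are the values used above):

```python
def required_n(p_hat, half_width, z=1.65):
    """Solve z * sqrt(p(1-p)/n) = half_width for n."""
    return z**2 * p_hat * (1 - p_hat) / half_width**2

print(required_n(0.4, 0.01))  # ~6534  (part a)
print(required_n(0.5, 0.01))  # ~6806.25, round up to 6807  (part b)
```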
Distance between closed affine sets
Here is a category of counterexamples: in any infinite-dimensional normed vector space $X$, there are two closed subspaces $M$ and $N$ such that $M+N$ is not closed. Then the closed subspace $Y := \overline{M+N}$ of $X$ contains an element, say $y \in Y$, which is not in $M+N$. It is then easy to verify that $(y+M )\cap N = \emptyset$. Observe that $$0=d(y,M+N) = d(y+M ,N).$$ P.S.: For an example of such $M$ and $N$ in a Hilbert space, see: Sum of closed spaces is not closed
What kinds of non-zero characteristic fields exist?
There are finite extensions of the transcendental fields you've written down. Indeed, since $k(x_1,\ldots,x_n)$ is not algebraically closed when $n \geq 1$, no matter what field $k$ of coefficients you choose, it has non-trivial finite extensions. The classification of these fields is not a simple matter; in fact, it is one of the main topics of algebraic geometry. (One can think of it as being the problem of classifying $n$-dimensional varieties up to birational equivalence.) In any case, I would say that these fields, for some choice of $n$ (possibly $0$), and with $k$ equal to $\mathbb F_q$ or $\overline{\mathbb F}_p$, are the characteristic $p$ fields that arise the most often in practice. [Also: one reason that you can't think of other examples is that any field of char. $p$ which is finitely generated over its prime subfield $\mathbb F_p$ is a finite extension of $\mathbb F_p(x_1,\ldots,x_n)$ for some $n$; that is also why these tend to be the examples that arise most often.]