n^n is Ω (n!), is the statement true or false?
$$\begin{align} \frac{n^n}{n!}&=\frac{\overbrace{n\cdot n\cdots n}^{n\text{ times}}}{1\cdot2\cdots(n-1)\cdot n}\\&=\frac n1\cdot\frac n2\cdots\frac n{n-1}\cdot\frac nn\\ &>n\cdot1\cdots1 \end{align}$$ so that $$n^n>n\cdot n!\geq n!$$ and therefore $$n^n\in\Omega(n!).$$
Using Dominated Convergence Theorem
You are actually using the fact that the $\phi_n$ are bounded by 1 (they are characteristic functions). Since the interval in question, $[-h, h]$, is bounded, all is copacetic.
Verifying the reasoning is true for the following deductive argument
Your example says something about premise 2: if Sam is telling the truth, the second premise is true only if Bill is lying and the conclusion is true. You have two possibilities: John tells the truth or is lying. If John tells the truth: the conclusion is true. If John is lying: Bill tells the truth because of premise 1, so Sam must be lying because of premise 2: the conclusion is true.
Formula for $\sum_{k=1}^{n}{k^p}$ where p is a positive integer
$$\sum_{k=1}^n k^p = \sum_{k=0}^{p-1} T(p,k)\binom{n+1+k}{p+1} \mbox{ (hence, its degree is $p+1$),}$$ where $T(p,k)$ is the Eulerian number (cf. OEIS A008292). For example, $p=2$: $$\sum_{k=1}^n k^2 = \sum_{k=0}^{1} T(2,k)\binom{n+1+k}{3}=(1)\binom{n+1}{3}+(1)\binom{n+2}{3}=\frac{n(n+1)(2n+1)}{6}$$ $p=3$: $$\sum_{k=1}^n k^3 = \sum_{k=0}^{2} T(3,k)\binom{n+1+k}{4}=(1)\binom{n+1}{4}+(4)\binom{n+2}{4}+(1)\binom{n+3}{4}=\frac{n^2 (n+1)^2}{4}$$
Number edges of 3-regular graph so that every vertex has a 0,1, and 2 edge
No, you are asking if every $3$-regular graph is 3-edge-colorable. The answer is no, and (connected, bridgeless) counterexamples are called snarks. (See here for more examples).
Groups modulus operation
$$3* 3* 3=(3* 3)* 3=(3^2-3-3+2)* 3=5* 3=5\cdot 3-5-3+2=9$$
Statistics - Bootstrap Method
This seems like a good fit for a parametric bootstrap. You can estimate $p$ in your sample by $\hat{p}$ (for example the MLE would be a good choice) and you can then sample from a geometric distribution with parameter $\hat{p}$ to generate your bootstrap samples. You know the size of your sample (because that is simply the number of successes you have) and this should also be the size of your bootstrap samples. In point 1 of your procedure it is not entirely clear what you mean by "random binary set".
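For illustration, here is a minimal Python sketch of this parametric bootstrap. It assumes the "number of trials until first success" convention for the geometric distribution; the sample, the statistic bootstrapped ($\hat p$ itself), and all names are made up for the example:

import numpy as np

# Minimal parametric bootstrap sketch: hypothetical geometric data.
rng = np.random.default_rng(0)
data = rng.geometric(p=0.3, size=50)   # observed waiting times (trials)

p_hat = 1.0 / data.mean()              # MLE of p under the "trials" convention

n_boot = 2000
boot_stats = np.empty(n_boot)
for b in range(n_boot):
    # Resample from Geometric(p_hat), with the same size as the original sample.
    sample = rng.geometric(p=p_hat, size=data.size)
    boot_stats[b] = 1.0 / sample.mean()   # re-estimate p on each bootstrap sample

# Bootstrap standard error and a basic percentile interval for p.
print("p_hat:", p_hat)
print("bootstrap SE:", boot_stats.std(ddof=1))
print("95% percentile CI:", np.percentile(boot_stats, [2.5, 97.5]))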
Determining poles and order of $1/\sin(z)$
Well, $ \sin z = 0 $ iff $z = n \pi $ where $n \in \mathbb{Z}$. Notice that, by L'Hôpital, $$ \lim_{z \to n \pi} \frac{ z - n \pi}{\sin z} = \lim_{z \to n \pi} \frac{ 1}{\cos z} = (-1)^n. $$ So, the limit exists and is not zero. Hence, the points $z = n \pi$ are simple poles.
Is Schilling's Corollary 8.9 really a corollary?
Corollary 8.9 is not a corollary of Theorem 8.8, but it does seem to be a corollary of Lemma 8.1, which gives necessary and sufficient conditions for measurability of a function $u:X\to \mathbb R$. One sufficient condition for measurability of $u$ is that $\{u>a\}\in {\cal A}$ for every $a\in \mathbb R$; this condition is applied in the proof of Corollary 8.9.
If a number is divisible by two others, then it's divisible by their lcm
We need to prove that every common multiple of $a,b$ is a multiple of their lcm. Let $d=\operatorname{lcm}(a,b)$ and suppose $d \nmid c$ for some common multiple $c$ of $a$ and $b$. By division with remainder, $c=dx+y$ with $0\lt y \lt d$. Then $y=c-dx$, and since $a,b \mid c$ and $a,b \mid dx$, we get $a,b \mid (c-dx)=y$. So $y$ is a positive common multiple of $a$ and $b$ smaller than $d$, contradicting the assumption that $d$ is the least common multiple.
Why $f$ is a continuous map?
Let $(x_n)$ be a convergent sequence in $S^1$ with limit $x \in S^1$. Since $T$ is bounded, we have $Tx_n \to Tx$. The inner product is continuous, hence $\langle Tx_n\; |\;x_n\rangle \to \langle Tx\; |\;x\rangle$. This gives $f(x_n) \to f(x)$.
More efficient algorithm to find OR of two sets
In case you want to output not only the number of pairs but also the pairs themselves, you cannot do better than $\mathcal O(n^2m)$ in the worst case. This is because you need to read all the columns of the input anyway, hence the factor $m$, and you need to output all $\sim n^2$ pairs (in the worst case).
A question concerning discharging assumptions
By Sequent Rule (Axiom Rule), for $\Gamma = \{\}$, we have the sequent $\Gamma \cup \{\phi\} \vdash \phi$, so by Sequent Rule ($\rightarrow \textrm{I}$), we have $\{\} \vdash (\phi \rightarrow \phi)$. That is, we get the tautology $\phi \rightarrow \phi$ "for free" ($\phi$ having been introduced by assumption on the previous line). Recall the text (p. 17, immediately following the definition of Sequent Rule ($\rightarrow \textrm{I}$)): "the assumption $\phi$ in the first sequent of the sequent rule ($\rightarrow \textrm{I}$) is allowed to drop out of the assumptions of the second sequent". Note "allowed" $\neq$ "required". This is discussed further in Remark 2.4.2. Recall also the text (p. 17, prior to the definition of Natural Deduction Rule ($\rightarrow \textrm{I}$)): "in forming the derivation we are allowed to discharge any occurrence of the assumptions $\phi$ written in $D$. The rule is still correctly applied if we do not discharge all of them; in fact the rule is correctly applied ... [if] there is nothing to discharge." This indicates (1) an assumption may appear more than once in a derivation, (2) applying the rule allows you to discharge all, any, or none of them, and (3) the rule may be applied even when there is nothing to discharge. This third case is somewhat similar to the first paragraph above. If $\psi$ is something we can deduce and $\phi$ is any statement whatever, we can deduce $\phi \rightarrow \psi$ (because $\psi$ is already known to be true).
Infinite hat problem error upper bound, in the case of seeing all other prisoners
This is not possible. Imagine again our outsider arbitrarily choosing $2N$ prisoners. The outsider can ask these prisoners what their guesses will be for each of the $2^{2N}$ ways the hats can be assigned to those prisoners (leaving the hats of the unchosen prisoners unchanged). Each chosen prisoner will die in half of the assignments (choosing wrong in one of a pair in which the other hats are fixed), so there are $2N\cdot2^{2N-1}$ total deaths among all hat assignments. By the pigeonhole principle, then, there must be at least one assignment with at least $\lceil2N\cdot2^{2N-1}/2^{2N}\rceil = N$ deaths. (Equivalently, the expected number of deaths is $N$, so there must be at least one assignment with at least $N$ deaths.)
Finding Expectation using CDF
$$ \mathbb E(X)=\int\limits_0^\infty (1-F(x)) dx = \int\limits_0^1 \left(1-\frac49\right)dx + \int\limits_1^2 \left(1-\frac89\right)dx = \frac59 \cdot 1+ \frac19\cdot 1 = \frac23. $$ You can also draw the graph of $F(x)$ and find the area between it and the horizontal line $y=1$ over the positive semi-axis.
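If you want to double-check the arithmetic, here is a quick numerical sketch in Python, with the step CDF above hard-coded:

import numpy as np

# Numerical check of E(X) = ∫ (1 - F(x)) dx for the step CDF:
# F = 4/9 on [0,1), 8/9 on [1,2), and 1 afterwards.
def F(x):
    return np.where(x < 1, 4/9, np.where(x < 2, 8/9, 1.0))

xs = np.linspace(0, 3, 300_001)
print(np.trapz(1 - F(xs), xs))   # ≈ 0.6667 = 2/3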
evaluating $\int_{7}^{10} \frac{xdx}{\sqrt{x-6}}$
Substituting $x = u + 6$ (so that $dx = du$, and the limits $x=7,10$ become $u=1,4$), you get: $$\int_1^4 \frac{u + 6}{\sqrt{u}}\, du = \int_1^4 \sqrt{u}\, du + \int_1^4 \frac{6}{\sqrt{u}}\, du.$$
Basic Probability Question (Expected Value)
Hint: expectation is linear, i.e., $E(X_1 + X_2 + \ldots + X_n) = E(X_1) + E(X_2) + \ldots + E(X_n)$. Think about how this applies here.
Characterization of Compactness in $\ell^\infty$
I don't know if this helps you, but since $\ell^{\infty}$ is complete, if you take a compact $K\subset \ell^{\infty}$ then $K$ is closed and totally bounded. Conversely, if you take $K\subset \ell^{\infty}$ closed and totally bounded, then since $\ell^{\infty}$ is a complete metric space it follows that $K$ is also complete; hence $K$ is complete and totally bounded, hence compact. So the question comes down to whether a closed subset $K$ of $\ell^{\infty}$ is totally bounded. In $C_{0}$ there is a characterisation of this: $K\subset C_0$ is closed and totally bounded $\Longleftrightarrow$ for every $\epsilon>0$ there exists $n_{0} \in \Bbb N$ such that for every $x=(\xi_{k})\in K$ and every $k\geq n_{0}$ we have $|\xi_{k}|\leq \epsilon$. It's interesting whether there is anything similar for $\ell^{\infty}$. I hope these thoughts will help you tackle the problem; if you find anything, let me know!
Sequences, Mathematical Analysis, etc...
Let $n <m$. Then $ \sum\limits_{i=1}^{p} |a_i|^{n} \leq (\sum\limits_{i=1}^{p} |a_i|^{m})^{n/m} (\sum\limits_{i=1}^{p} 1)^{1-\frac n m}$ by Hölder's inequality with exponents $p=\frac m n$ and $q=\frac m {m-n}$. Just raise both sides to the power $1/n$ to finish.
Theorem 1.27 in Rudin's Functional Analysis
Isn't this just the Hausdorff property? If $y_0 \not= x$ there would be two neighborhoods $W_1$ and $W_2$ of $0$ with the property that $(y_0 + W_1) \cap (x + W_2) = \emptyset$. This forces $y_0 \notin \overline{x + W_2}$.
Complementary sequence of another sequence
The second set is the relative complement of the first set with respect to the natural numbers. If the first set is $A$, then the complement of $A$ relative to the set $U$ is written as $B = A^c = U \backslash A$.
Is this set measurable? (Set of points where a sequence converges)
This is true if and only if the underlying measure on $M$ is complete, i.e. if any subset of a null set is measurable. Indeed, the phrasing $u_n(s)\to u(s)$ almost everywhere means exactly that the set $A^c$ is contained in a null set. (Note that $A$ is measurable if and only if $A^c$ is.) If the measure is not complete, meaning that there exists a null set $N$ containing a nonmeasurable subset $M$, then the sequence of functions $$u_n(s)=\begin{cases} 0, & s \in M^c \\ (-1)^n, & s \in M \end{cases}$$ is an example of a sequence for which the set $A$ of convergence is nonmeasurable.
Convergence/divergence of a couple of infinite series
When you try to determine the convergence/divergence of a series, always remember that there is not only one trick to do it. So if your attempt failed, it can either mean that you didn't do it right or that maybe it's just not the right way to go. For the first one: using comparison, you can show that it is absolutely convergent, since $$ \sum_{n=1}^{\infty} \left| \frac{(-i)^{n+1}}{n^2+1} \right| = \sum_{n=1}^{\infty} \frac 1{n^2+1} \le \sum_{n=1}^{\infty} \frac 1{n^2} $$ and you know that the latter is a convergent series, hence your series converges absolutely. For the second one, to know what you're looking for, you can informally do this: $$ \frac 1i \sum_{n=1}^{\infty} \frac{e^{in\theta}}{n} = \sum_{n=1}^{\infty} \frac{e^{in\theta}}{in} = \sum_{n=1}^{\infty} \int e^{in\theta} \, d \theta = \int \sum_{n=1}^{\infty} (e^{i\theta})^n \, d\theta = \int \frac {e^{i\theta}}{1-e^{i\theta}} \, d\theta $$ and from here you can compute this integral: by letting $u=1-e^{i\theta}$, so that $du = -ie^{i\theta}\, d\theta$, you have $$ \int \frac {e^{i\theta}}{1-e^{i\theta}} \, d\theta = \int \frac{i}{u} \, du = i \log(u) = i \log(1-e^{i\theta}). $$ This showed me what I was looking for: the function we're after is a logarithm. If you compute the power series for $\log$, you get $$ -\log(1-z) = \sum_{n=1}^{\infty} \frac {z^n}{n}, $$ which means that $\sum_{n=1}^{\infty} \frac{e^{in\theta}}{n} = -\log(1-e^{i\theta})$, as long as $\theta \neq 0$ (or an integer multiple of $2\pi$). The trickier part comes because $|e^{i\theta}| = 1$, which means it is on the boundary of the disc of convergence of the function $\log(1-z)$. I must say that at the moment I don't know how to deal with it, but I'll try thinking a little more. Hope that helps!
Is this really a vector space?
Answer: the exam paper actually specified that $F$ is the vector space $\textit{spanned}$ by those functions. My bad!
Proof $\gcd(a,b) = \gcd(a, b +ax)$
Note that if $d\mid a$ and $d \mid b$, then clearly $d \mid (b+ax)$. On the other hand, if $d \mid a$ and $d \mid (b+ax)$, then we have $d\mid(b+ax+a(-x))$ by the same argument. Since $b + ax + a(-x) = b$, this is the same as saying $d\mid b$. Therefore $a$ and $b$ have exactly the same common divisors as $a$ and $b+ax$. In particular, the largest one must be the same for both pairs.
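A quick computational spot-check of the identity on random triples (a Python sketch; the ranges are arbitrary):

from math import gcd
from random import randint

# Spot-check the identity gcd(a, b) == gcd(a, b + a*x).
for _ in range(10_000):
    a, b, x = randint(1, 10**6), randint(1, 10**6), randint(-10**6, 10**6)
    assert gcd(a, b) == gcd(a, b + a * x)
print("gcd(a, b) == gcd(a, b + a*x) held in all trials")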
Let $u'(t)=Au^2-Bu$. Find conditions on A, B to guarantee global solution.
Let us transform $u'(t)=Au^2 - Bu$, via dividing by $u^2$, into $$u^{-2}u'=-B u^{-1}+ A.$$ Then substitute $z:= u^{-1}$, hence $z' = -u^{-2}u'$, and the equation above becomes $$z' = Bz - A,$$ to which you can now apply your Banach contraction principle.
Deducing the closed form for pentagonal numbers
When you have a polynomial expressed as a sequence, you can find it by taking differences $$ \begin {array} {l l l l l l} 0&1&5&12&22&35\\1&4&7&10&13\\3&3&3&3 \end {array}$$ where the first line is your sequence and subsequent entries are the difference between the one up and to the right and the one above. The fact that the second differences are constant says you have a degree-two polynomial. To get the leading term, you take the constant second difference and divide it by $2!$, so your polynomial starts with $\frac 32n^2$. You can subtract that off and get a linear polynomial, whose first differences will be constant. You can use this to get a recurrence. The first difference line is $A_n-A_{n-1}$, and you can observe it is $1+3(n-1)$, so $A_n=A_{n-1}+3n-2$
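Here is the same difference-table computation carried out programmatically (a small Python sketch for the pentagonal sequence above):

# Build the difference table and recover the closed form.
seq = [0, 1, 5, 12, 22, 35]

first = [b - a for a, b in zip(seq, seq[1:])]       # [1, 4, 7, 10, 13]
second = [b - a for a, b in zip(first, first[1:])]  # [3, 3, 3, 3]
assert len(set(second)) == 1   # constant second differences -> degree 2

lead = second[0] / 2           # leading coefficient: 3 / 2! = 1.5
# Subtract (3/2) n^2; what remains should be linear in n.
residual = [a - lead * n**2 for n, a in enumerate(seq)]
print(residual)                # [0.0, -0.5, -1.0, -1.5, -2.0, -2.5] = -n/2
# So A_n = (3n^2 - n) / 2, the closed form for the pentagonal numbers.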
Solving Linear Systems by hand
See my comments on your other question about spline interpolation. As I explained there, the matrices and systems of equations that occur in constructing splines are special, because they are banded. So, you can solve them using elementary elimination methods. You don't need to use general-purpose methods like LU decomposition. I suspect that your teacher is asking you to solve these problems by hand so that you see this banded structure and you understand how much it simplifies the problem. If you do polynomial interpolation (as opposed to spline interpolation), then you again have to solve linear systems, but they are not banded. Doing a few examples by hand will help you appreciate the difference. The brute-force approach using Matlab hides all this.
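To make the point concrete, here is a sketch of the kind of elementary elimination that the banded structure allows, for the tridiagonal case that arises with cubic splines (this forward-elimination/back-substitution scheme is often called the Thomas algorithm; the function name and example system are mine):

def solve_tridiagonal(a, b, c, d):
    """Solve a tridiagonal system by forward elimination and back
    substitution. a: sub-diagonal (length n-1), b: diagonal (length n),
    c: super-diagonal (length n-1), d: right-hand side (length n)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: a small system of the shape that natural cubic splines produce.
print(solve_tridiagonal([1, 1], [4, 4, 4], [1, 1], [6, 12, 18]))

Each unknown is eliminated using only its two neighbors, which is exactly why doing these by hand is feasible, unlike a dense system.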
Proof by induction:$\frac{3}{5}\cdot\frac{7}{9}\cdot\frac{11}{13}\cdots\frac{4n-1}{4n+1}<\sqrt{\frac{3}{4n+3}}$
Your calculations are correct. But I thought it might be helpful also to mention another nice trick to handle such products: set $A = \frac 35 \cdot \frac 79 \cdots \frac{4n-1}{4n+1}$ and let $B = \frac 57 \cdot \frac 9{11} \cdots \frac{4n+1}{4n+3}$. Since each factor of $A$ is smaller than the corresponding factor of $B$, it follows immediately that $$A < B \Rightarrow A^2 < AB = \frac 3{4n+3}.$$ Done.
Injectivity and norm function on finite fields
Consider $$g(x) =f(x)-f(a)=\alpha(x-a)^q+\alpha^q(x-a)$$ since $q\equiv 0$ in our field and the binomial theorem holds. Now if $f$ has a double value, say $f(a)$ which is taken on twice, then $g$ has two zeros. However, if $b\ne a$ is the second such zero, we have $$\alpha(b-a)^q+\alpha^q(b-a)=0\iff (b-a)^{q-1}=-\alpha^{q-1}.$$ Then $$\left({b-a\over \alpha}\right)^{q-1}=-1,$$ so that ${b-a\over\alpha}\not\in\Bbb F_{q}$, as elements of the base field are totally determined by the fact that they are roots of $x^q-x$. However, this implies $$\left({b-a\over \alpha}\right)^{q^2}={b-a\over\alpha},$$ i.e. ${b-a\over\alpha}\in\Bbb F_{q^2}$, which is impossible since that field is not a sub-extension of $\Bbb F_{q^3}$. Hence no such $b$ exists.
Percentage stored as a fraction of 1: What is this called?
A proportion. It usually takes values between $0$ and $1$.
How can i find $\int _0^{\infty }\ln ^n\left(x\right)\:e^{-ax^b}\:dx$
You can start by using the following identity: $$\int _0^{\infty }x^m\:e^{-ax^b}\:dx=\frac{\Gamma \left(\frac{m+1}{b}\right)}{b\:a^{\frac{m+1}{b}}}$$ You can now differentiate both sides $n$ times with respect to $m$ (each derivative in $m$ brings down a factor of $\ln x$ under the integral) and then set $m$ to $0$: $$\int _0^{\infty }x^m\:\ln ^n\left(x\right)\:e^{-ax^b}\:dx=\frac{\partial ^n}{\partial m^n}\frac{\Gamma \left(\frac{m+1}{b}\right)}{b\:a^{\frac{m+1}{b}}}$$ $$\boxed{\int _0^{\infty }\ln ^n\left(x\right)\:e^{-ax^b}\:dx=\lim _{m\to 0}\frac{\partial ^n}{\partial m^n}\frac{\Gamma \left(\frac{m+1}{b}\right)}{b\:a^{\frac{m+1}{b}}}}$$
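As a sanity check of the boxed identity in a case where the answer is known: for $n=1$, $a=b=1$ the right-hand side is $\Gamma'(1)=-\gamma$. A short numerical verification (Python with SciPy):

import numpy as np
from scipy.integrate import quad

# Check: ∫₀^∞ ln(x) e^{-x} dx should equal Γ'(1) = -γ (Euler–Mascheroni).
val, _ = quad(lambda x: np.log(x) * np.exp(-x), 0, np.inf)
print(val, -np.euler_gamma)   # both ≈ -0.5772156649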
Are Euler trails and tours of a graph the same as Hamilton paths and cycles of the corresponding "edge graph"?
Not entirely; the relationship is only one-way. An Eulerian walk in $G$ gives us a Hamiltonian path in $H$ (the line graph of $G$) and an Eulerian tour in $G$ gives us a Hamiltonian cycle in $H$. This is because consecutive edges $uv, vw$ in the Eulerian walk in $G$ correspond to adjacent vertices in $H$. However, other Hamiltonian paths/cycles in $H$ may not correspond to Eulerian walks/tours in $G$. For example, if $G = K_4$ (with vertices $1,2,3,4$ and edges $12, 13, 14, 23, 24, 34$) then in $H$, we have a Hamiltonian cycle $12, 13, 14, 34, 24, 23$ which does not correspond to an Eulerian tour of $G$ (and in fact such an Eulerian tour does not exist). The difference is that a path in $H$ is a sequence of edges of $G$ in which any two consecutive edges share a vertex. However, a walk in $G$ is more structured: the second endpoint of one edge must be the first endpoint of the next. Going from $12$ to $13$ to $14$ in $H$ (a valid part of a Hamiltonian cycle) cannot be made to follow this rule in $G$: if $12$ and $13$ share the vertex $1$, the next edge should include vertex $3$, not vertex $1$ again.
$\frac{2.00013579}{1.00013579^2+2.00013579}$ > $\frac{2.0002468}{1.0002468^2 + 2.0002468}$?
Hint: let $x=2.00013579$ and $y=2.0002468$; then $x,y> 2$. Also notice that $$f(x)=\frac{x}{{(x-1)}^2+x}=\frac{1}{x+\frac{1}{x}-1}$$ is decreasing for $x\ge 1$. As $x<y$: $$\frac{x}{{(x-1)}^2+x}>\frac{y}{{(y-1)}^2+y}$$
Calculate $E(max(A, B))$
To compute $E(\max(A, B))$, I'd start by computing $P(\max(A, B) \le x)$. As Alain Chau has observed, $P(\max(A, B) \le x) = P(A \le x) P(B \le x)$ since $A$ and $B$ are independent. Since $A$ and $B$ are uniformly distributed on $[0, 1]$, you have $P(\max(A, B) \le x) = x^2$ for $0 \le x \le 1$. Finally, use the well-known formula $\int_0^\infty P(X > x) \: dx = E(X)$, which applies for non-negative random variables $X$ (this question may be relevant).
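A quick Monte Carlo check of the resulting value $E(\max(A,B))=\int_0^1(1-x^2)\,dx=2/3$ (Python sketch):

import numpy as np

# For independent A, B ~ Uniform[0, 1], E[max(A, B)] should be 2/3.
rng = np.random.default_rng(0)
a, b = rng.random(1_000_000), rng.random(1_000_000)
print(np.maximum(a, b).mean())   # ≈ 0.667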
Prove $\lim_{\delta\to 1}\int_{\mathbb R^d}f(\delta x)dx=\int_{\mathbb R^d}f(x)dx$
The support of a function is a subset of its domain; therefore the support of your function is $]0,1]$ and not $[0,1]$, and thus it's not a function with compact support. If $g$ is continuous with compact support $S$ then you can set $$h(x)=\begin{cases}g(x)& x\in S\\ 0& x\notin S\end{cases}$$ which is a continuous function on all of $\mathbb R$. So, for your problem, $$|f(\delta x)|\leq M$$ where $M$ is the maximum of $|f|$ on $2S$, and since the constant $M$ is integrable on $2S$ (which is compact), you can apply the dominated convergence theorem and conclude that $$\lim_{\delta\to 1}\int_{2S}f(\delta x)dx=\int_{2S}f(x)dx=\int_{\mathbb R^d}f(x)dx.$$
Show that $\nabla [f(r)]=f'(r)\frac {\mathbf{r}}{r}$
From $r=\sqrt{x^2+y^2+z^2}$ it follows that $${\partial r\over\partial x}={2x\over 2\sqrt{x^2+y^2+z^2}}={x\over r}\ ,$$ a formula which is extremely handy in hundreds of situations. Now you are given a function $$g(x,y,z):=f(r),\qquad r:=\sqrt{x^2+y^2+z^2}\ .$$ Using the chain rule you get $${\partial g\over\partial x}=f'(r)\,{\partial r\over\partial x}=f'(r)\,{x\over r}\ .$$ By analogy, $$\nabla g(x,y,z)=\left({\partial g\over\partial x},{\partial g\over\partial y},{\partial g\over\partial z}\right)={f'(r)\over r}\,(x,y,z)={f'(r)\over r}\,{\bf r}\ .$$
Prove continuity of complex function $z^2/|z|$.
As the behaviour of $z^2/|z|$ is aberrant at $z = 0$, I would definitely recommend restricting your $\delta$ from the outset so that $z = 0$ is not a consideration. If we take a point $z_0 \in \Bbb{C} \setminus \{0\}$, we should guarantee that $\delta$ is small enough so that $$|z - z_0| < \delta \implies z \neq 0.$$ How do we do this? We could ensure that $\delta \le |z_0| / 2$, i.e. our $\delta$ can never be more than half the distance between $z_0$ and $0$. Then, $$|z - z_0| < \delta \implies |z_0| - |z| \le |z - z_0| < |z_0|/ 2 \implies |z| > |z_0| / 2 > 0.$$ With this assumption in mind, let's begin (as we often do in $\varepsilon$-$\delta$ proofs) at the end. We have, $$\left|\frac{z^2}{|z|} - \frac{z_0^2}{|z_0|} \right| < \varepsilon \impliedby \frac{\Big|z^2|z_0| - z_0^2|z|\Big|}{|z||z_0|} < \varepsilon.$$ Under our assumption, $|z| > |z_0| / 2$, hence $$\frac{\Big|z^2|z_0| - z_0^2|z|\Big|}{|z||z_0|} < \frac{2}{|z_0|^2} \cdot\Big|z^2|z_0| - z_0^2|z|\Big|.$$ This removes the wildcard of having $z$ appear in the denominator. If we can make the right hand side less than $\varepsilon$ (spoiler alert: we can), then the left hand side will also be less than $\varepsilon$. Next, observe that $z_0^2|z|$ is approximately $z_0^2|z_0|$. We can use the usual trick using triangle inequality: \begin{align*} \frac{2}{|z_0|^2} \cdot\Big|z^2|z_0| - z_0^2|z|\Big| &\le \frac{2}{|z_0|^2} \cdot\left(\Big|z^2|z_0| - z_0^2|z_0|\Big| + \Big|z_0^2|z_0| - z_0^2|z|\Big|\right) \\ &= \frac{2}{|z_0|^2} \cdot\left(|z^2 - z_0^2| \cdot |z_0| + |z_0|^2 \cdot \Big||z_0| - |z|\Big|\right). \end{align*} Note that $\Big||z_0| - |z|\Big| \le |z - z_0|$, so we can control the latter term. The former term is $$|z^2 - z_0^2| \cdot |z_0| = |z - z_0| \cdot |z + z_0| \cdot |z_0|.$$ We can control $|z - z_0|$, so provided we can guarantee that $|z + z_0|$ doesn't get too big, we can control the whole expression. Fortunately, we have our assumption that $|z - z_0| < |z_0| / 2$, because $$|z + z_0| \le |z - z_0| + 2|z_0| < 5|z_0| / 2,$$ hence under our assumption, \begin{align*}\frac{2}{|z_0|^2} \cdot\left(|z^2 - z_0^2| \cdot |z_0| + |z_0|^2 \cdot \Big||z_0| - |z|\Big|\right) &< \frac{2}{|z_0|^2} \cdot\left(|z - z_0| \cdot \frac{5|z_0|}{2} \cdot |z_0| + |z_0|^2 \cdot |z - z_0|\right) \\ &= 7|z - z_0|. \end{align*} Thus, if we can ensure that $7|z - z_0| < \varepsilon$ and that $|z - z_0| < |z_0| / 2$, then \begin{align*} \varepsilon &> 7|z - z_0| \\ &> \frac{2}{|z_0|^2} \cdot\left(|z^2 - z_0^2| \cdot |z_0| + |z_0|^2 \cdot \Big||z_0| - |z|\Big|\right) \\ &\ge \frac{2}{|z_0|^2} \cdot\Big|z^2|z_0| - z_0^2|z|\Big| \\ &> \left|\frac{z^2}{|z|} - \frac{z_0^2}{|z_0|} \right|. \end{align*} In other words, choose $$\delta = \min\left\{\frac{\varepsilon}{7}, \frac{|z_0|}{2}\right\}.$$
If $a+b=1$ then $a^{4b^2}+b^{4a^2}\leq1$
We define $f(x,y)=x^{4y^2}+y^{4x^2}$. This is my plan to solve the problem: Since $x+y=1$, we replace $y$ with $1-x$. We make a new function: $g(x)=x^{4(1-x)^2}+(1-x)^{4x^2}$ Therefore, we must find the maximum of $g$ on the range $x \in [0,1]$ and check that this maximum is less than or equal to $1$. This will be troublesome: $$g(x)=x^{4(1-x)^2}+(1-x)^{4x^2}$$ Set $g_{1}(x) = x^{4(1-x)^2}$ and $g_{2}(x) = (1-x)^{4x^2}$. Therefore, we can break it up like so: $$g'(x) = g_{1}'(x)+g_{2}'(x)$$ $$g_{1}'(x)=g_{1}', g_{2}'(x)=g_{2}'$$ $$\ln(g_{1})=\ln \left(x^{4(1-x)^2}\right)$$ $$\ln(g_{1})={4(1-x)^2} \cdot \ln \left(x\right)$$ $$\frac{g_{1}'}{g_{1}}= 4 \cdot \left((1-x)^2 \right)' \cdot \ln(x)+\frac{4(1-x)^2}{x}$$ $$\frac{g_{1}'}{g_{1}}= 4 \cdot (-2) \cdot (1-x) \cdot \ln(x)+\frac{4(1-x)^2}{x}$$ $$\frac{g_{1}'}{g_{1}}= 8(x-1)\ln(x)+\frac{4x^2-8x+4}{x}$$ $$\frac{g_{1}'}{g_{1}}= 8(x-1)\ln(x)+4x-8+\frac{4}{x}$$ $$g_{1}'= x^{4(1-x)^2} \cdot \left(8(x-1)\ln(x)+4x-8+\frac{4}{x}\right)$$ Alright. Deep breath. Let's keep going. $$\ln(g_{2})=4x^2\ln(1-x)$$ $$\frac{g_{2}'}{g_{2}}=8x\ln(1-x)+\frac{4x^2}{x-1}$$ $$g_{2}'= (1-x)^{4x^2}\left(8x\ln(1-x)+\frac{4x^2}{x-1}\right)$$ $$g_{1}'= x^{4(1-x)^2} \cdot \left(8(x-1)\ln(x)+4x-8+\frac{4}{x}\right)$$ $$g'(x)=x^{4(1-x)^2} \cdot \left(8(x-1)\ln(x)+4x-8+\frac{4}{x}\right) + (1-x)^{4x^2}\left(8x\ln(1-x)+\frac{4x^2}{x-1}\right)$$ The maximum appears (according to the closed interval method) either at: $$g(0)=1$$ $$g(1)=1$$ or at the $x$-value(s) of the solutions of: $$0=x^{4(1-x)^2} \cdot \left(8(x-1)\ln(x)+4x-8+\frac{4}{x}\right) + (1-x)^{4x^2}\left(8x\ln(1-x)+\frac{4x^2}{x-1}\right)$$ Therefore, if we set $x_{1}$, $x_{2}$, $x_{3}$, ... to be the solutions of the equation above in the interval $x_{n} \in [0,1]$, we have reduced the problem to proving that: $$g(x_{1}),g(x_{2}), g(x_{3}), \ldots \leq 1$$ Through some graphing of $g(x)$, we see that there exist $x_{1}$, $x_{2}$, and $x_{3}$, where $x_{2}$ is $0.5$ and the others are not easily computable or are irrational. It can easily be seen that $g'(0.5) = 0$ and that $g(0.5)=1$ (a maximum of the function). Since we now have proof that $g(x_{2}) \leq 1$ and we see that there does not exist an $x_{n}$ s.t. $n>3$ and $g'(x_{n})=0$, we can reduce our previous problem to: Prove that: $$g(x_{1}), g(x_{3}) \leq 1$$ Through Newton's method, we obtain approximations of $x_{1}$ and $x_{3}$ accurate to 10 decimal places. We state them below: $$x_{1} \approx 0.281731964017$$ $$x_{3} \approx 0.718268035983$$ Note that: $$g'(x_{1}) \approx g'(0.281731964017)=7.349676423 \cdot 10^{-14}$$ $$g(x_{1}) \approx g(0.281731964017)=0.973494223187$$ We now have that $g(x_{1})$ is a minimum of the function and that $g(x_{1}) \leq 1$. Finally: $$g'(x_{3}) \approx g'(0.718268035983)=-7.349676423 \cdot 10^{-14}$$ $$g(x_{3}) \approx g(0.718268035983)=0.973494223187$$ We now have that $g(x_{3})$ is also a minimum of the function and that $g(x_{3}) \leq 1$. We now have that: $$g(x_{1}), g(x_{2}), g(x_{3}) \leq 1$$ Q.E.D. I took a very head-on brute-force approach to the problem, but I am happy with the rigorousness of the result and the final proof. We also now have the minima of the function, which, if anyone is curious, are both $\approx 0.973494223187$.
Compute maximum actuator's rate based on the given first-order dynamics
Given a first order transfer function $$ G(s) = \frac{K}{T s + 1} $$ the time-domain response to a step input of height $\bar{U} = U_{max} - U_{min}$ is: $y(t) = \bar{U}\, K \, (1 - e^{-t/T})$. So you can take the derivative with respect to $t$: $$ y'(t) = \frac{\bar{U}\, K}{T} e^{-t/T} $$ The derivative has its maximal value at $t = 0$ (directly at the step), because $e^{-t/T}$ is monotonically decreasing from $1$ to $0$ asymptotically (of course we assume $T > 0$ so that $G$ is stable). So the maximum rate is $$ y_{max}' = \frac{\bar{U}\, K}{T} $$ so it only depends on $T$ and $\bar{U}\, K$, which is the maximum value your actuator can produce. For your second question, you can of course solve this for $T$: $$ T = \frac{\bar{U}\, K}{y_{max}'} $$ In your case, $K = 1$, $T = 0.01$, $U_{min} = 0$, $U_{max} = 2$, so you have $$ y_{max}' = \frac{(2 - 0) \times 1}{0.01} = 200. $$ The difference between that value and your code comes from the discretization. Here are some values you get for different sample times dt with your simulation:

dt         x_rate_max
---------------------
0.005      157.3877
0.0025     176.9594
0.001      190.3252
0.0001     199.0033
0.00001    199.9000
0.000001   199.9900

So you can see that the estimated value approaches the theoretical value of $200$ we just computed.
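For reference, here is a short Python sketch that reproduces the table above. It assumes the simulation used an exact (zero-order-hold) discretization of the first-order lag, which is what matches the quoted numbers (a plain forward-Euler scheme would give exactly 200 at the first step):

import numpy as np

# Zero-order-hold step response y[n] = K*U*(1 - a^n), a = exp(-dt/T);
# the finite-difference rate estimate approaches K*U/T = 200 as dt -> 0.
K, T, U = 1.0, 0.01, 2.0
for dt in [0.005, 0.0025, 0.001, 0.0001, 1e-5, 1e-6]:
    a = np.exp(-dt / T)                  # discrete pole of the ZOH model
    n = np.arange(int(0.1 / dt))         # simulate 0.1 s of the response
    y = K * U * (1.0 - a ** n)           # exact ZOH step response
    print(f"dt={dt:<9g} x_rate_max={np.max(np.diff(y)) / dt:.4f}")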
Wormhole - How to model it?
Topologically, you are just defining a quotient space: $\mathbb{R}^3/(p_1 \sim p_2)$. However, I think you want a little more than that. You want that a traveler going towards $p_1$ with direction $\vec{v}$ comes out of $p_2$ with direction $\vec{v}$. If you are just trying to get a computer program to work with this space, then the answer is fairly simple. Just set a threshold: if you are close enough to $p_1$ you pop out of $p_2$ with the same direction, and if you are close enough to $p_2$ you pop out of $p_1$ in the direction you are going. Since you are allowing a discrete domain, I am going to use a discrete parameter to describe this system. With that in mind, I think this is the function that you want: $$f(t+1) = \begin{cases} x(t) + v(t) & \mbox{if } x(t) + v(t) \ne p_1,p_2,\\ p_1 & \mbox{if } x(t) + v(t) = p_2,\\ p_2 & \mbox{if } x(t) + v(t)=p_1, \end{cases}$$ where $x(t)$ is the position at time $t$ and $v(t)$ is the direction vector at time $t$.
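A direct transcription of this update rule into code (a Python sketch; the mouth locations and the velocity are made up for the example):

import numpy as np

# One time step of the wormhole dynamics described above.
def step(x, v, p1, p2):
    nxt = tuple(np.add(x, v))
    if nxt == tuple(p2):
        return tuple(p1)      # entering mouth p2 pops you out at p1
    if nxt == tuple(p1):
        return tuple(p2)      # entering mouth p1 pops you out at p2
    return nxt                # otherwise, ordinary straight-line motion

# Example: a traveler heading toward p1 emerges at p2 with the same velocity.
p1, p2 = (5, 0, 0), (20, 20, 20)
x, v = (3, 0, 0), (1, 0, 0)
for _ in range(5):
    x = step(x, v, p1, p2)
    print(x)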
Apostol Calculus: Difficulty getting listed answer
Hint: We split the integral $$\int_{-1}^3 2[x] dx = \int_{-1}^0 2[x] dx + \int_{0}^1 2[x] dx + \int_{1}^2 2[x] dx + \int_{2}^3 2[x] dx$$ The greatest integer for any $x \in [-1, 0)$ is $-1$, the greatest integer for any $x \in [0, 1)$ is $0$, the greatest integer for any $x \in [1, 2)$ is $1$, and the greatest integer for any $x \in [2, 3)$ is $2$.
How to calculate optimal sizes of rectangles for this type of array visualization?
The algorithm you are looking for is called the Squarified Treemap algorithm. Its description and a discussion of related issues can be found in this paper by Mark Bruls, Kees Huizing, and Jarke J. van Wijk. The proposed algorithm is recursive in nature. The key portion of the paper goes like this: ... Following the example, we present our algorithm for the layout of the children in one rectangle as a recursive procedure squarify. This procedure lays out the rectangles in horizontal and vertical rows. When a rectangle is processed, a decision is made between two alternatives. Either the rectangle is added to the current row, or the current row is fixed and a new row is started in the remaining subrectangle. This decision depends only on whether adding a rectangle to the row will improve the layout of the current row or not. We assume a datatype Rectangle that contains the layout during the computation and is global to the procedure squarify. It supports a function width() that gives the length of the shortest side of the remaining subrectangle in which the current row is placed and a function layoutrow() that adds a new row of children to the rectangle. To keep the description simple, we use some list notation: ++ is concatenation of lists, [x] is the list containing element x, and [] is the empty list. The input of squarify() is basically a list of real numbers, representing the areas of the children to be laid out. The list row contains the rectangles that are currently being laid out. The function worst() gives the highest aspect ratio of a list of rectangles, given the length of the side along which they are to be laid out. This function is further discussed below:

procedure squarify(list of real children, list of real row, real w)
begin
    real c = head(children);
    if worst(row, w) >= worst(row ++ [c], w) then
        squarify(tail(children), row ++ [c], w)
    else
        layoutrow(row);
        squarify(children, [], width());
    fi
end

(Here a row is extended exactly when doing so does not worsen its highest aspect ratio.) There is also this relevant question on SO: https://stackoverflow.com/questions/9880635/implementing-a-squarified-treemap-in-javascript Also, an implementation in TypeScript is here, and its demo is here. There is also an open source utility for disk space visualization called WinDirStat, which contains a C++ implementation of the algorithm above.
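For concreteness, here is a small Python sketch of the decision rule, with worst() computing the highest aspect ratio of a row of areas laid along a side of length w, following the formula from the paper. It is a simplification: the real algorithm recomputes width() after each fixed row, while this sketch keeps w constant, so it only illustrates how rows are grown and closed.

def worst(row, w):
    # Highest aspect ratio in a row of areas laid out along a side of length w.
    if not row:
        return float("inf")   # an empty row accepts any first rectangle
    s = sum(row)
    return max(max(w * w * r / (s * s), s * s / (w * w * r)) for r in row)

def squarify(children, row, w, rows):
    # Either grow the current row with the next child, or fix the row
    # and start a new one (simplified: w is not recomputed).
    if not children:
        rows.append(row)
        return
    c = children[0]
    if worst(row, w) >= worst(row + [c], w):
        squarify(children[1:], row + [c], w, rows)
    else:
        rows.append(row)
        squarify(children, [], w, rows)

rows = []
squarify([6, 6, 4, 3, 2, 2, 1], [], 4, rows)   # areas, sorted descending
print(rows)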
If $[L_1:K] = p, [L_2:K] = q, \mbox { $p,q$ prime numbers}$, then $L_1\cap L_2 = K$ or $L_1=L_2$
For (a), suppose $L_1 \cap L_2 \ne K$; then since $K \subset L_1 \cap L_2$, there is some $\alpha \in L_1 \cap L_2$, $\alpha \notin K$. Then $K(\alpha) \subset L_1 \cap L_2 \subset L_1$ and we have $[L_1:K(\alpha)][K(\alpha):K] = [L_1:K] = p; \tag 1$ thus $[K(\alpha): K]$ is either $1$ or $p$; but we can rule out the case $[K(\alpha): K] = 1$ since it implies $\alpha \in K$, contrary to our assumption $\alpha \notin K$; thus $[K(\alpha): K] = p$, $[L_1: K(\alpha)] = 1$, whence $L_1 = K(\alpha)$. The same argument applied to $L_2$ shows $L_2 = K(\alpha)$ as well, so $q = p$ and $L_1 = L_2 = K(\alpha)$. For (b), suppose for the moment that $[L_2: K] = n > 1$, a considerable relaxation of the condition $[L_2:K] = 2$; then with $L_2 = K(\alpha)$ we have $[K(\alpha):K] = n$ and $L_1 \cap K(\alpha) = K$. Now if $\alpha \in L_1$, we see that $\alpha \in L_1 \cap K(\alpha) = K$ in contradiction to $[K(\alpha):K] > 1$; therefore, $\alpha \notin L_1$, and we may affirm $[L_1(\alpha): L_1] > 1$. Furthermore, since $[K(\alpha):K] = n < \infty$, $\alpha$ satisfies some irreducible polynomial $p(x) \in K[x]$ with $\deg p(x) = n$: $p(\alpha) = 0, \tag 2$ and since $K \subset L_1$, we conclude that $\alpha$ is algebraic over $L_1$ as well, and that $1 < [L_1(\alpha): L_1] \le n = \deg p(x); \tag 3$ note we cannot affirm that $[L_1(\alpha):L_1] = n$ in general, since $p(x)$ may be reducible in $L_1[x]$, though it is not so in $K[x]$; but in the case $n = 2$, we have only the choice $[L_1(\alpha):L_1] =2$, and this establishes our result.
Connected and Disconnected Permutations
If $I$ is a subset of $[n] = \{1,\ldots,n\}$ and $\pi$ is a permutation of $[n]$, then the image $\pi(I)$ is by definition the subset of $[n]$ consisting of all elements $\pi(x)$ for $x \in I$. There is no ordering on $\pi(I)$. Therefore $\pi( \{2,3\}) = \{3,2\} = \{2,3\}$, so this permutation is indeed disconnected. I hadn't seen this definition for permutations before, so I'm not sure exactly what it's used for. (What happens in the paper?) But it does remind me of something that comes up in the theory of rearrangements of conditionally convergent series: there is a necessary and sufficient condition for a permutation $\pi$ of $\mathbb{Z}^+$ to change the sum of some convergent series: i.e., $\sum_n a_n \neq \sum_n a_{\pi(n)}$. If I am remembering it correctly, the condition is that there should be some fixed number $N$ so that the image $\pi([n])$ is a union of at most $N$ intervals. (Note that $\pi$ is a permutation of all of $\mathbb{Z}^+$, so does not necessarily stabilize $[n]$.) Anyway, to my mathematical eye the definition "feels good": it is not hard to believe that it will be useful for something...
Valuation rings of complete non-archimedean fields which are not local
A valuation ring is always local, and a valuation ring is Noetherian if and only if it is a discrete valuation ring. EDIT: I realize now that you aren't asking if there are non-local valuation rings, you're asking about the valuation rings of non-Archimedean fields which themselves aren't local, i.e., locally compact. They are always $1$-dimensional, but they are Noetherian if and only if discretely valued. A valuation ring has rank $1$ in the sense that its value group is isomorphic to a subgroup of $\mathbf{R}$ if and only if it is $1$-dimensional. This is Theorem 10.7 in Matsumura's Commutative ring theory. So the valuation ring of $\mathbf{C}_p$, while definitely not Noetherian (because $\mathfrak{m}=\mathfrak{m}^2$), is $1$-dimensional, and more generally, the valuation ring of any non-Archimedean, non-trivially valued field is $1$-dimensional.
A problem on Trace of matrix and linear transformation.
Hint: $T$ is a map from the 4-dimensional space $V$ of 2 by 2 matrices to itself. In coordinates you may write this as: $$ (T(A))_{ij} = \sum_k P_{ik}A_{kj} = \sum_k\sum_l [P_{ik}\delta_{jl}] A_{kl}$$ So you should calculate the trace of $M_{ij,kl} = P_{ik}\delta_{jl}$, which is $$ {\rm tr} \; M = \sum_{ij} M_{ij,ij}= \sum_{i=1}^2\sum_{j=1}^2 P_{ii} \delta_{jj}= 2\; {\rm tr}\; P $$
convergence of infinite series $\sum\limits_{n=0}^{\infty}\frac{(1+1/2+1/3+...+1/n)}{n}$
Your series does not converge. Clearly, you have that $$\color{red}{1 + \frac12+...+\frac1n \geq 1}$$ for all $n\geq 1$. Hence, by comparison $$\sum\limits_{n=1}^N \frac{\color{red}{1+1/2+...+1/n}}{n}\geq \sum\limits_{n=1}^N \frac{\color{red}{1}}{n}$$ The right hand side diverges to $\infty$ as $N\to\infty$ and is known as the Harmonic series. Your series is strictly larger by comparison and hence also diverges to $\infty$.
Principle of Inclusion and Exclusion
Hint: Start by seeing why $|A\cup B| = |A|+|B|-|A\cap B|$ and then substitute: $$\begin{align*} |A\cup B\cup C| & = |A\cup (B\cup C)| = |A|+|B \cup C| - |A\cap (B\cup C)| \\ & = |A|+(|B|+|C|-|B\cap C|) - |(A\cap B) \cup (A\cap C)| \\ & = |A|+|B|+|C|-|B\cap C| - (|A\cap B| + |A\cap C| - |(A\cap B)\cap (A\cap C)|) \\ & = |A|+|B|+|C|-|A\cap B|-|A\cap C|-|B\cap C| + |A\cap B\cap C| \end{align*}$$
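A brute-force check of the final identity on random sets (a Python sketch; the universe size and set sizes are arbitrary):

from random import sample

# Verify |A ∪ B ∪ C| = |A|+|B|+|C|-|A∩B|-|A∩C|-|B∩C|+|A∩B∩C|.
A, B, C = (set(sample(range(100), 40)) for _ in range(3))
lhs = len(A | B | C)
rhs = (len(A) + len(B) + len(C)
       - len(A & B) - len(A & C) - len(B & C)
       + len(A & B & C))
print(lhs, rhs)   # always equal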
Limit of Quotient of Two C infinity Functions
Under the hypothesis you have that $$f(x)=\dfrac{f^{(k+1)}(0)}{(k+1)!}x^{k+1}+o(x^{k+1})$$ and $$g(x)=\dfrac{g^{(k+1)}(0)}{(k+1)!}x^{k+1}+o(x^{k+1}).$$ Thus \begin{align}\lim_{x\to 0}\dfrac{f(x)}{g(x)} &=\lim_{x\to 0}\dfrac{\dfrac{f^{(k+1)}(0)}{(k+1)!}x^{k+1}+o(x^{k+1})}{\dfrac{g^{(k+1)}(0)}{(k+1)!}x^{k+1}+o(x^{k+1})} \\ &=\lim_{x\to 0}\dfrac{\dfrac{f^{(k+1)}(0)}{(k+1)!}+\dfrac{o(x^{k+1})}{x^{k+1}}}{\dfrac{g^{(k+1)}(0)}{(k+1)!}+\dfrac{o(x^{k+1})}{x^{k+1}}} \\ &=\dfrac{\dfrac{f^{(k+1)}(0)}{(k+1)!}+\lim_{x\to 0}\dfrac{o(x^{k+1})}{x^{k+1}}}{\dfrac{g^{(k+1)}(0)}{(k+1)!}+\lim_{x\to 0}\dfrac{o(x^{k+1})}{x^{k+1}}} \\ &=\dfrac{f^{(k+1)}(0)}{g^{(k+1)}(0)}.\end{align}
Are metrics that determine the same topology, equivalent?
Two metrics $d_1$ and $d_2$ on the set $X$ are equivalent if there exist $a,b>0$ such that $$ ad_1(x,y)\le d_2(x,y)\le bd_1(x,y) $$ for every $x,y\in X$. The usual metric $d(x,y)=|x-y|$ and $\delta(x,y)=\lvert\arctan x-\arctan y\rvert$ on $\mathbb{R}$ induce the same topologies, but are not equivalent: indeed $\mathbb{R}$ is complete with respect to the former and not complete with respect to the latter (the sequence of natural numbers is Cauchy for $\delta$).
k-Connected Graphs with minimum degree $\ge n-3$
OK, I think I got it. I used Whitney's theorem, which states that a graph $G$ is $k$-connected if and only if for every pair $(x,y)$ of distinct vertices, there are at least $k$ vertex-independent $(x,y)$ paths in $G$. So, if $\delta=n-2$, all $v \in V(G)$ have degree $d(v)=n-2$ or $d(v)=n-1$. We examine all possible pairs of vertices $(v,u)$. If $d(v)=d(u)=n-2$, then $\forall x \in V(G\setminus \{u,v\})$ there exists the path $P_x=(v,x,u)$, and these paths are vertex-independent, so $n-2$ paths. If $d(v)=d(u)=n-1$, the paths $P_x$ still exist, and in addition there exists the path of length $1$, $P=(v,u)$, so $n-1$ paths. If $d(v)=n-2, d(u)=n-1$, then $v \in N(u)$, so we consider all paths $P_x$ for $x \in N(v)$ plus the path $P=(u,v)$. So in all cases the number of vertex-independent paths from $u$ to $v$ is $n-2$ or more, so the graph $G$ is $(n-2)$-connected. Now for the second question I thought maybe to consider the disconnected graph $K_2 \cup K_2$ with $\delta = 4-3 =1$ and $k=0 \lt 1$. Is this "cheating"?
Expectation in spectral sparsification algorithms
To figure out the correct weights $\tilde{w}_e$ of the randomly sampled graph, it is useful to write the Laplacian of a weighted graph $G$ as the sum of outer products $$L_G = \sum_{e \in E} w_e b_e b_e^T,$$ where $w_e$ is the weight of the edge $e$ and $b_e$ is the vector such that $$b_e(w) = \begin{cases} 1 &amp; w = \mathrm{head}(e)\\ -1 &amp; w = \mathrm{tail}(e)\\ 0 &amp; \text{otherwise}, \end{cases}$$ after some arbitrary orientation of the edge $e$. Now if we sample each edge $e$ with probability $p_e$, we are left with a graph $\tilde{G}$ whose Laplacian is $$L_\tilde{G} = \sum_{e \in E} I_e \tilde{w}_e b_e b_e^T,$$ where $I_e \sim \textsf{Bern}(p_e)$ is an indicator random variable for whether or not $e$ was chosen. To make the expectation line up, we just need $\mathbb{E}[I_e \tilde{w}_e] = w_e$. In other words, we want $\tilde{w}_e = \frac{w_e}{\mathbb{E}[I_e]} = \frac{w_e}{p_e}$.
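To see the reweighting in action, here is a small numerical sketch (Python/NumPy). It builds $L_G=\sum_e w_e b_e b_e^T$ for a random weighted graph, samples edges with probabilities $p_e$, reweights by $w_e/p_e$, and checks that the average of the sampled Laplacians approaches $L_G$. The graph, weights, and probabilities are arbitrary choices for the demonstration:

import numpy as np

rng = np.random.default_rng(0)
n = 5
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
w = {e: rng.uniform(0.5, 2.0) for e in edges}   # edge weights w_e
p = {e: rng.uniform(0.2, 0.9) for e in edges}   # sampling probabilities p_e

def laplacian(weights):
    # L = sum_e w_e b_e b_e^T, with b_e the signed incidence vector of e.
    L = np.zeros((n, n))
    for (i, j), wij in weights.items():
        b = np.zeros(n); b[i], b[j] = 1, -1
        L += wij * np.outer(b, b)
    return L

L_G = laplacian(w)
avg, trials = np.zeros((n, n)), 20_000
for _ in range(trials):
    kept = {e: w[e] / p[e] for e in edges if rng.random() < p[e]}
    avg += laplacian(kept) / trials
print(np.abs(avg - L_G).max())   # small, and -> 0 as trials grows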
Show that $\frac{1}{a^2+b^2+1}+\frac{1}{b^2+c^2+1}+\frac{1}{c^2+a^2+1}\leq 1$.
By C-S we obtain: $$\sum_{cyc}\frac{1}{a^2+b^2+1}=\sum_{cyc}\frac{2+c^2}{(a^2+b^2+1)(1+1+c^2)}\leq\sum_{cyc}\frac{2+c^2}{(a+b+c)^2}.$$ Can you end it now?
A question about spectrum of an operator and point spectrum.
Your proof is wrong. It doesn't make sense to talk about $||A-Ax_n||$. The problem you gave is a general Banach space problem: let $T:X \to X$ be a bounded linear map such that there exist $(x_n)_n \in X, ||x_n|| = 1$ with $Tx_n \to 0$. Then $T$ is not invertible. [Note in your problem, $T= A-I$]. The proof of this general problem is as follows. If $T$ were invertible in $L(X)$, then by definition it is bounded. So, there is some $C \ge 0$ so that $||T^{-1}x|| \le C ||x||$ for all $x \in X$. Since $T$ is invertible, it is surjective, so we may replace $x$ with $Tx$ to get $||x|| \le C ||Tx||$ for all $x \in X$. Using this for $x=x_n$ gives $1 \le C ||Tx_n||$. But this is a contradiction for large $n$.
How to prove the sequence $(-1)^n$ has no limit using first principles?
Hint: Consider two cases, based on $\ell \ge 0$ and $\ell &lt; 0$. If $\ell \ge 0$ and $n$ is odd, then $$\left|(-1)^n - \ell\right| = \ell + 1 \nless 1 = \epsilon$$
Is $v v^T \succeq 0$?
Yes; it's true since $$\left\langle vv^Tx,x\right\rangle=\left\langle v^Tx,v^Tx\right\rangle=\left|\left|v^T x\right|\right|^2\ge0 .$$
Chain several operations in mathematica
Mathematica has its own SE site, so you can ask there. Anyway, it's a short piece of code, but next time you should go to the above-mentioned site:

Clear[a, b, x, y];
D[y /. Solve[((x - a)/a)^2 + (y/b)^2 == 1, y], x]

Result:

{-((b Sqrt[2 a - x])/(2 a Sqrt[x])) + (b Sqrt[x])/(2 a Sqrt[2 a - x]), (b Sqrt[2 a - x])/(2 a Sqrt[x]) - (b Sqrt[x])/(2 a Sqrt[2 a - x])}
Is $S_3 \oplus \Bbb Z_2$ isomorphic to $A_4$ or to $D_6$?
One way to see that $S_3 \times \mathbb Z_2$ is isomorphic to $D_6$ is to exhibit an isomorphism. $D_6$ is the group of symmetries of a hexagon. So we want to find for each pair $(\sigma,k)$ with $\sigma \in S_3$ and $k \in \mathbb Z_2$ a symmetry of the hexagon. Denote the vertices of the hexagon by the numbers $0$ to $5$. Let $S_3$ act on the numbers $0,1,2$. Since $S_3$ is generated by transpositions, we only need to say where the transposition $(ij)$ is sent: we send the pair $((ij),k)$ to the symmetry switching vertices $2i$ and $2j$ (and also $2i-1$ and $2j+1$), keeping the others fixed, followed by a rotation of length $k$ (draw a picture). One must convince oneself that this is a well-defined homomorphism. A little thought shows that this function is injective (and thus surjective, since the groups are finite).
Proof that $\left \langle 0\mid \hat{S}\mid 0 \right \rangle = e^{\sum (\text{all connected vacuum diagrams})}$
Note that each term in your MacLaurin$^\dagger$ series represents one or more diagrams. Since they are sandwiched between the vacuum $|0\rangle$, the only nonzero diagrams are those with no external lines. In particular, there will be 'disconnected' diagrams where not all vertices are connected. If you agree that the series generates all the diagrams, then the linked cluster theorem tells you that the exponential of those are the connected ones (where connected means that all vertices in the diagram are connected). $^\dagger$ Most people would call it a Dyson series. If this is more appropriate as a comment, feel free to move/remove it. I haven't posted enough to comment yet.
Given $\{1,\ldots,n\}$, how many ways to create a subset of size $k$ with at least one odd integer?
Your second method double counts some combinations. For example if you choose $1$ first then a possibility is $\{1,2,3,4,25\}$, while if you chose $3$ first then a possibility is $\{3,1,2,4,25\}$. Although these are the same, your second method counts them at least twice
Angle between left and right tangent to the graph
As has been mentioned in the comments, the correct derivative is $$g'(x) = \frac{-2(x-1)(x+1)}{\sqrt3 (x^2+1)\cdot |x-1| \cdot |x+1|}$$ However, your limits are correct. Considering that the derivative $g'(x)$ is equal to the slope of the tangent at the point $(x, g(x))$, the angle between the tangent and the $x$ axis is $\alpha=\arctan(g'(x))$. Hence, if $\alpha_1$ and $\alpha_2$ are the angles of the left tangent and the right tangent respectively, then the angle between them is $$\alpha_1-\alpha_2 = \arctan\left(\frac{\sqrt3}{3}\right) - \arctan\left(-\frac{\sqrt3}{3}\right) = \frac{\pi}{3}$$
find examples to prove $ A \cup B $ is not part of the $ V $
Hint: Consider $V = \mathbb{R}^2$; there are two very natural subspaces spanned by $(1, 0)$ and $(0, 1)$ respectively. Can you use this to find an example of the property you want?
Is $(-1)^{\infty}$ an indeterminate form?
Not necessarily because the concept "indeterminate form" does not have a widely accepted formal definition; in many books this is simply presented as a list of examples. Your example is not in that list, so in those books, it would not be an "indeterminate form". However, if you try to write a definition that is not merely a list, then your example likely could be an indeterminate form, depending of course on how the definition is written.
Proof involving Euler totient function and modular arithmetic
You could just take a look at the ring $\;\Bbb Z_{\varphi(n)}\;$. Its units are precisely all the integers modulo $\;\varphi(n)\;$ which are coprime to $\;\varphi(n)\;$. Both claims follow at once from the above, with the second one following from the fact that $\;\left|\Bbb Z_n^*\right|=\varphi(n)\;$
Help with the integral $\int_{0}^{\infty}\frac{x^{s}}{\Gamma(s)}s^{z-1}ds$
It looks like Ramanujan's master theorem (see [1] pg. 298-300). Instead of Ramanujan's formula we use Hardy's theorem, which is also in [1] pg. 299-300. THEOREM 1. (Ramanujan-Hardy) Let $s=\sigma+it$, $\sigma,t$ both real. Let $H(\delta)=\{s:\sigma\geq-\delta\}$, $0<\delta<1$. If $\psi(s)$ is analytic on $H(\delta)$ and there exist constants $C,P,A$, with $A<\pi$, such that $$ |\psi(s)|\leq Ce^{P\sigma+A|t|}\textrm{, }\forall s\in H(\delta), $$ for $x>0$ and $0<c<\delta$, we define $$ \Psi(x)=\frac{1}{2\pi i}\int^{c+i\infty}_{c-i\infty}\frac{\pi}{\sin(\pi s)}\psi(-s)x^{-s}ds. $$ If $0<x<e^{-P}$, then $$ \Psi(x)=\sum^{\infty}_{k=0}\psi(k)(-x)^k. $$ For $0<\sigma<\delta$, we have $$ \int^{\infty}_{0}\Psi(x)x^{s-1}dx=\frac{\pi}{\sin(\pi s)}\psi(-s). $$ Here (in your case) if we set $$ \psi(x)=\frac{\phi(x)}{\Gamma(x+1)}, $$ and $$ \Psi(x)=\frac{a^x}{\Gamma(x)}=\sum^{\infty}_{k=0}\psi(k)(-x)^k=\sum^{\infty}_{k=0}\frac{\phi(k)}{k!}(-x)^k, $$ then $$ \int^{\infty}_{0}\frac{a^x}{\Gamma(x)}x^{s-1}dx=\frac{\pi}{\sin(\pi s)}\psi(-s)=\frac{\pi}{\sin(\pi s)}\frac{\phi(-s)}{\Gamma(1-s)} $$ Using now the formula $$ \frac{\pi}{\sin(\pi s)\Gamma(1-s)}=\Gamma(s), $$ we arrive at $$ I(s):=\int^{\infty}_{0}\frac{a^x}{\Gamma(x)}x^{s-1}dx=\Gamma(s)\phi(-s). $$ Hence if $(Mf)(s)$ is the Mellin transform of $f$, then $$ \left(Mf\right)(s)=\int^{\infty}_{0}f(x)x^{s-1}dx $$ and $$ \left(M\Psi\right)(s)=I(s)=\int^{\infty}_{0}\frac{a^x}{\Gamma(x)}x^{s-1}dx =\Gamma(s)\phi(-s). $$ But from [2] we have the following THEOREM 2. (for the conditions see [2]) $$ \int^{\infty}_{-\infty}\left(M\Psi\right)(\sigma+it)f(t)dt=2\pi\sum^{\infty}_{k=0}\frac{\Psi^{(k)}(0)}{k!}f(i(\sigma+k)). $$ Hence in our case with $\Psi(t)=a^t/\Gamma(t)$ and $f(t)=e^{-itx}(M\Psi)(\sigma-it)$, we have $$ \int^{\infty}_{-\infty}(M\Psi)(\sigma+it)f(t)dt= $$ $$ =\int^{\infty}_{-\infty}(M\Psi)(\sigma+it)e^{-itx}(M\Psi)(\sigma-it)dt= $$ $$ =2\pi\sum^{\infty}_{k=0}\frac{\Psi^{(k)}(0)}{k!}f(i(\sigma+k))= $$ $$ =2\pi\sum^{\infty}_{k=0}\frac{\Psi^{(k)}(0)}{k!}e^{-i\cdot i(\sigma+k)x}\Gamma(\sigma-i\cdot i(\sigma+k))\phi(-\sigma+i\cdot i(\sigma+k))= $$ $$ =2\pi\sum^{\infty}_{k=0}\frac{\Psi^{(k)}(0)}{k!}e^{(\sigma+k)x}\Gamma(2\sigma+k)\phi(-2\sigma-k). $$ Hence with $\sigma=1/2$: $$ \int^{\infty}_{-\infty}\left|(M\Psi)\left(\frac{1}{2}+it\right)\right|^2e^{-itx}dt=2\pi e^{x/2} \sum^{\infty}_{k=0}\frac{\Psi^{(k)}(0)}{k!}\Gamma(k+1)\phi(-k-1)e^{kx}. $$ Hence we can write $$ \int^{\infty}_{-\infty}\left|(M\Psi)\left(\frac{1}{2}+it\right)\right|^2e^{itx}dt=2\pi e^{-x/2} \sum^{\infty}_{k=0}\frac{\Psi^{(k)}(0)}{k!}\left(\int^{\infty}_{0}\frac{a^t}{\Gamma(t)}t^{k}dt\right)e^{-kx}\textrm{, }x>0, $$ which is a sampling formula, recovering the absolute value of the Mellin transform of $\Psi(x)$ from the values of $(M\Psi)(x)$ at $x=k+1$, where $k$ ranges over the non-negative integers $k=0,1,2,\ldots$. As one can see, the result can be generalized very easily to $\Psi(x)$ analytic around $0$ and entire in $\textbf{C}$ such that $\int^{\infty}_{0}|\Psi(t)t^{k}|dt<\infty$. NOTES. 1) The general formula that arises is $$ \int^{\infty}_{-\infty}\left|(M\Psi)\left(\frac{1}{2}+it\right)\right|^2e^{itx}dt=2\pi e^{-x/2} \sum^{\infty}_{k=0}\frac{\Psi^{(k)}(0)}{k!}(M\Psi)(k+1)e^{-kx}\textrm{, }x>0. $$ As an example of evaluation take $\Psi(x)=e^{-x}$. Then $$ \int^{\infty}_{0}e^{-t}t^{k}dt=k!. $$ Hence we get the integral $$ \int^{\infty}_{-\infty}\left|\Gamma\left(\frac{1}{2}+it\right)\right|^2e^{itx}dt=2\pi\frac{e^{x/2}}{e^x+1}\textrm{, }x>0. $$ 2) Also, from the inverse Fourier theorem with $\Psi(x)=\frac{a^x}{\Gamma(x)}$, we have $$ \left|(M\Psi)\left(\frac{1}{2}+iw\right)\right|^2=\left|\int^{\infty}_{0}\frac{a^t}{\Gamma(t)}t^{-1/2+iw}dt\right|^2= \int^{\infty}_{-\infty}e^{-x/2} \sum^{\infty}_{k=0}\frac{\Psi^{(k)}(0)}{k!}\left(\int^{\infty}_{0}\frac{a^t}{\Gamma(t)}t^{k}dt\right)e^{-kx}e^{-i x w}dx $$ [1]: Bruce C. Berndt, "Ramanujan's Notebooks, Part 1". Springer-Verlag, New York, Berlin, Heidelberg, Tokyo, 1985. [2]: N.D. Bagis, "Numerical Evaluations of Functions Series and Integral Transforms with New Sampling Methods". Thesis, Aristotle University of Thessaloniki, Greece (2007) (in Greek, from ResearchGate).
Another sum involving totient function or gcd
Here is not a closed form, but an alternative representation. Below, $(x,y)={\rm gcd}(x,y)$. I claim that $$\sum_{k=1}^n (k,n)=n\sum_{d\mid n}\frac{\phi(d)}{d}. $$ To see this, observe that if $d\mid n$ is a divisor of $n$, then $$ (k,n)=d \iff \left(\frac{k}{d},\frac{n}{d}\right)=1. $$ In particular, the divisor $d$ appears exactly $\phi(n/d)$ times in the summation. Thus, $$ \sum_{k=1}^n (k,n)=\sum_{d\mid n}\phi\left(\frac{n}{d}\right)d. $$ Now, letting $d'=n/d$, the sum can be rewritten as $$ \sum_{k=1}^n (k,n)=n\sum_{d'\mid n}\frac{\phi(d')}{d'}. $$ Hence, this object has the given operational meaning. Now, for the multiplicative property, take $m,n$ coprime. Our goal is to show $$ \left(\sum_{d\mid mn}\frac{\phi(d)}{d}\right) = \left(\sum_{d\mid m}\frac{\phi(d)}{d}\right)\left(\sum_{d\mid n}\frac{\phi(d)}{d}\right). $$ Check that the left-hand side sum has $\tau(mn)=\tau(m)\tau(n)$ terms (where $\tau$ counts divisors), as does the expanded right-hand side. Furthermore, each divisor $d\mid mn$ can be uniquely decomposed into $d=d_1d_2$ where $d_1\mid m$ and $d_2\mid n$, since $(m,n)=1$; and since $\phi$ is multiplicative, $\frac{\phi(d)}{d}=\frac{\phi(d_1)}{d_1}\cdot\frac{\phi(d_2)}{d_2}$. Hence, we deduce the given function is indeed multiplicative. Remarks: In fact, more is known about this function. Theorem: For any positive integer $a$, there exists a positive integer $n$, such that, $$ \sum_{d\mid n}\frac{\phi(d)}{d}=a. $$ Try to prove this theorem, it is nice ;)
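A quick computational check of the claimed identity $\sum_{k=1}^n (k,n)=n\sum_{d\mid n}\phi(d)/d$ (a brute-force Python sketch, with $\phi$ computed naively):

from math import gcd

def phi(m):
    # Naive Euler totient, fine for small m.
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

for n in range(1, 200):
    lhs = sum(gcd(k, n) for k in range(1, n + 1))
    rhs = sum((n // d) * phi(d) for d in range(1, n + 1) if n % d == 0)
    assert lhs == rhs
print("identity verified for n = 1 .. 199")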
Determine the maximal compact interval such that $\sum_{n=0}^{\infty}(-1)^n\frac{x^{2n+1}}{2n+1} = \arctan(x)$ holds true
By the ratio test we see that the radius of convergence is $R=1$, and by the Leibniz criterion (alternating series test) we have convergence at $x=\pm1$; hence the interval of convergence is the compact interval $[-1,1]$, which is maximal since for all $x\not\in [-1,1]$ the series is divergent.
find the minimum value of $x^2-6x+9+ \dfrac{64}{x^2}$
Let $x=4+u$. Then it is easy to see $$ x^2-6x+9+\dfrac{64}{x^2}=5+\frac{u^2(u^2+10u+28)}{(u+4)^2}=5+\frac{u^2[(u+5)^2+3]}{(u+4)^2}\ge 5 $$ and the equal sign holds if and only if $u=0$ or $x=4$. Thus the function reaches its min 5 when $x=4$.
Inherently discrete concepts
I've not yet seen the "rank of a matrix" interpolated to fractional ranks (but I'm surely not very well read in the literature...). (Side remark: your question also focuses on whether it has "resisted attempts" to interpolate... such attempts may or may not exist, but to know this would need an even bigger radius of insight into the literature and into unpublished manuscripts...)
Does $|A|^{|A|}=2^{|A|}$ hold for any infinite set $A$?
The axiom of choice implies $\kappa^2=\kappa$ for transfinite cardinals $\kappa$. Then$$2^\kappa\le\kappa^\kappa\le(2^\kappa)^\kappa=2^{\kappa^2}=2^\kappa.$$By the Schröder–Bernstein theorem, we're done.
Is the FC-center of a finitely generated group itself finitely generated?
The example I gave in my answer to Center of a finitely generated group is a finitely generated group in which the centre is not finitely generated. In fact $Z(G) = {\rm FC}(G)$ in that example, so the answer to your first question is no. In a comment, Arturo Magidin gave a reference for an example of a finitely presented group in which $Z(G)$ is not finitely generated. You would need to look at the example and check whether it also has ${\rm FC}(G)$ not finitely generated. Perhaps $Z(G)={\rm FC}(G)$ in that example too, but I don't know!
If $u\in H^1_0(\Omega)\cap C(\Omega)$ is it true that $u\in H^1_0(\{u>0\})$?
Yes, this is true in general. The idea is to consider the sequence of functions $$u_{\varepsilon} = (u-\varepsilon)_+ = \max\{u-\varepsilon,0\} \in H^1_0(\Omega)$$ and approximate these by functions in $C^{\infty}_c(D)$, where $D := \{u>0\}.$ If $\Omega$ is bounded, the support of each $u_{\varepsilon}$ is compactly contained in $D,$ so we can mollify each to obtain elements in $C^{\infty}_c(D).$ The general case will require an additional cutoff argument. Let $\chi_{1/\varepsilon} \in C^{\infty}_c(B_{1+1/\varepsilon})$ be a cutoff such that $\chi_{1/\varepsilon} \equiv 1$ in $B_{1/\varepsilon}$ and $|\nabla\chi_{1/\varepsilon}| \leq 2$ everywhere. Put $v_{\varepsilon} = u_{\varepsilon}\chi_{1/\varepsilon}$, so one can check that $v_{\varepsilon} \rightarrow u$ in $H^1(D)$ as $\varepsilon \rightarrow 0.$ Now $K_{\varepsilon} = \operatorname{supp} v_{\varepsilon} \subset D$ is compact, by continuity of each $v_{\varepsilon}$ and as $K_{\varepsilon} \cap \partial D = \emptyset.$ Hence the mollification $v_{\varepsilon} \ast \eta_{\delta}$ lies in $C^{\infty}_c(D)$ provided $\delta < \delta_0(\varepsilon).$ Taking $\delta = \delta_0(\varepsilon)/2$ and letting $\varepsilon \rightarrow 0$ gives a sequence of $C^{\infty}_c(D)$ functions converging to $u.$ Hence $u \in H^1_0(D).$
Is a weakly$^*$ convergent net of positive(!) Radon measures eventually norm-bounded?
For $X = [0, \omega_1)$ ($\omega_1$ the first uncountable ordinal), a $\sigma(M(X), C_0(X))$-convergent net of positive Radon measures need not be eventually uniformly bounded. The construction is similar to this one. Let me first provide some general remarks. For $\mu \in M(X)$ and $f \in C_0(X)$ denote by $\mu f := \int f \, d\mu$. A neighborhood base of $0$ in $M(X)$ for the considered weak$^*$ topology $\sigma(M(X), C_0(X))$ is given by the sets $V_{f_1, \dots, f_n} = \{ \mu \in M(X) \mid |\mu f_1| < 1, \dots, |\mu f_n| < 1 \}$ where $f_1, \dots, f_n \in C_0(X)$. Therefore, $U_{f_1, \dots, f_n} := V_{f_1, \dots, f_n} \cap M(X)^+$ is a neighborhood base of $0$ on the set of positive measures $M(X)^+$. For $f_1, \dots, f_n \in C_0(X)$ set $f := |f_1| + \dots + |f_n| \in C_0(X)^+$. Then $U_f = \{ \mu \in M(X)^+ \mid \mu f < 1 \} \subseteq U_{f_1, \dots, f_n}$ since $|\mu f_i| \leq \mu |f_i| \leq \mu f$. Therefore, the sets $U_f$, $f \in C_0(X)^+$ form a neighborhood base of $0$ in $M(X)^+$. On $X = [0, \omega_1)$ every continuous function is eventually constant. Therefore, any $f \in C_0(X)$ has compact support. For any $\alpha < \omega_1$ the set $[0, \alpha] \subseteq X$ is compact and any compact set of $X$ is contained in some $[0, \alpha]$. Consider the index set $\mathcal{F} := C_0(X)^+ \times \mathbb{N}$ directed by $(f, n) \preceq (g, m) :\Leftrightarrow f \leq g$ pointwise. Fix $f \in \mathcal{F}$. Since $f$ has compact support, there is $x \in [0, \omega_1)$ such that $f(x) = 0$. Therefore, $\delta_x \in U_f$ and moreover $n \delta_x \in U_f$ for any $n \in \mathbb{N}$. Set $\mu_{(f,n)} := n \delta_x \in M(X)^+$. Then the net $(\mu_{(f,n)})_{(f, n) \in \mathcal{F}}$ converges to $0$. In fact, if $U$ is an open neighborhood of $0$ in $M(X)^+$ then there is $f_0 \in C_0(X)^+$ such that $U_{f_0} \subseteq U$. Then for all $(f, n) \succeq (f_0, 0)$ (i.e. for all $f \geq f_0$) it holds $\mu_{(f,n)} \in U_f \subseteq U_{f_0} \subseteq U$. But the net $\mu_{(f,n)}$ is not eventually uniformly bounded: for any $(f_0, n_0)$ and any $r \geq 0$ there is $n \in \mathbb{N}$, $n \geq r$ such that $\lVert \mu_{(f_0,n)} \rVert = n \geq r$, hence $\sup_{(f,n) \succeq (f_0,n_0)} \lVert \mu_{(f,n)} \rVert = \infty$.
orthogonal group over reals as specific units in matrix algebra
Consider the matrices $$A_k=\begin{pmatrix}1&0\\0&\frac{1}{k}\end{pmatrix}$$ where $k$ is a positive integer. For each $k$ we have $||A_k||=1$, as can easily be checked, and each $A_k$ is in $GL_n(\mathbb{R})$. If $A_k$ were orthogonal for every $k$, then since $O_n(\mathbb{R})$ is compact, we would get that $\lim_{k\to\infty}A_k$ is also orthogonal. But clearly the limit matrix is not orthogonal, because it is not invertible. The set of invertible matrices whose norm is $1$ is obviously not a group. For example, the inverse of $A_k$ has $k$ instead of $1/k$, and the norm is $k$.
Can the expression $a^2 + b^2 - c^2$ be factored as a product of two quaternions, where $a$, $b$, $c$ are real numbers?
It cannot be written as the product of two $\mathbb{H}$-linear combinations of variables $a,b,c$. (We assume the formal variables commute with all quaternion scalars, of course.) Suppose it were, say $(ap+q)(ar+s)$ where $p,q\in\mathbb{H}$ and $r,s$ are $\mathbb{H}$-linear combinations of $b$ and $c$. The leading term (with respect to $a$) would then be $pra^2$ which must be $a^2$ and so $pr=1$. We may factor the coefficients of $a$ out as $(a+qp^{-1})pr(a+r^{-1}s)=(a+u)(a+v)$. Now the middle term is $(u+v)a$, which must be $0$ as in the expression $a^2+b^2-c^2$, so $v=-u$ and we get $(a+u)(a-u)=a^2-u^2$. Thus, we want some combination of $b$ and $c$ (that is, $u$) which squares to $b^2-c^2$. Writing $(xb+yc)^2=b^2-c^2$, we see $x^2=1$ so $x=\pm1$. But then the middle term is $\pm2ybc$ which can only be $0$ if $y=0$ in which case we don't see $-c^2$ as the final term. It is however factorizable in the split quaternions. Then $a^2+b^2-c^2-d^2$ is the product of the split quaternion $a+bi+cj+dk$ with its conjugate $a-bi-cj-dk$.
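For a quick check of the last claim, using the standard split-quaternion relations ($i^2=-1$, $j^2=k^2=+1$, and $i,j,k$ pairwise anticommuting): the cross terms of $(bi+cj+dk)^2$ cancel, leaving $-b^2+c^2+d^2$, so $$(a+bi+cj+dk)(a-bi-cj-dk)=a^2-(bi+cj+dk)^2=a^2+b^2-c^2-d^2.$$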
Multiplicative structure on spectra with coefficients
This is not a full answer, but rather just an example. Let $MC_2$ be the mod 2 Moore spectrum. Notably, it is not a ring spectrum, so even though $E = \mathbb{S}$ is $E_\infty$, the spectrum $\mathbb{S}C_2 \simeq MC_2$ is not a ring spectrum, let alone $A_\infty$ or $E_\infty$. This shows that in general even if $E$ is a structured ring, $EG$ need not be. It also shows that the finiteness of $G$ is not the condition you are looking for. Added later: In fact, you can just ask whether Moore spectra are highly structured. It turns out that it is generally not the case. For example, they are never $E_\infty$, and that for many $G$ they are known to not be $A_\infty$. See the answers on this mathoverflow question for details.
How can one show that $(r!)^s$ divides $(rs)!$?
We have \begin{align*} \frac{(sr)!}{(r!)^s} &=\frac{(sr)!}{r!\cdot[(s-1)r]!} \cdot\frac{[(s-1)r]!}{r!\cdot[(s-2)r]!}\cdots\frac{r!}{r!}= {sr\choose r}\cdot{(s-1)r\choose r}\cdots{r\choose r}\in\mathbb{N}. \end{align*}
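As a sanity check with $r=2$, $s=3$: $$\frac{6!}{(2!)^3}=\frac{720}{8}=90={6\choose 2}{4\choose 2}{2\choose 2}=15\cdot6\cdot1.$$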
What's the error in my continuum hypothesis "proof"?
The main issue here is treating cardinal numbers as though they were real numbers. For example, you write $$\beth_\lambda=\frac{\log_\gamma(\beth_{\lambda+1})}{\log_\gamma2}$$ Here you divide cardinals, which makes no sense, and you later repeat that again and again with $\beth_{\alpha+\frac\kappa\mu}$, which also makes no sense. But here it's worse. You're dividing what is arguably not even a cardinal by a real number. If you want to define $\log$ on cardinals, that's fine, we can sort of do that, and it works okayish with $\beth$ numbers in particular. But you cannot treat this as a real logarithm. You cannot "change base" and you most certainly cannot change base to something which is not a cardinal number itself. To make a proof about cardinal arithmetic work, you need to make sure that you're sticking to the rules of cardinal arithmetic. If you are throwing in real numbers, division, and fractional ordinals, then you're not using cardinals anymore, and the symbols no longer carry their traditional meaning. My recommendation is to pick up a book about set theory; Enderton's book is great.
$A$ is a vector subspace of $ℝ^4$?
The dimension of a vector space is the number of vectors in a basis. This doesn't say anything about what kind of vectors are in the vector space. You can have a one-dimensional subspace of $\mathbb R$, or $\mathbb R^2$, or of $\mathbb R^{100}$. The row space is a subspace of $\mathbb R^5$, because its elements (linear combinations of the rows) are $5$-dimensional vectors. The column space is a subspace of $\mathbb R^4$, because its elements (linear combinations of the columns) are $4$-dimensional vectors. For this matrix, the row space is a $2$-dimensional subspace of $\mathbb R^5$, and the column space is a $2$-dimensional subspace of $\mathbb R^4$, and there's nothing wrong with that.
Prove $\sqrt{x_1^2+4x_1x_2+5x_2^2+2x_1+6x_2+5}$ is convex
Extended hint: a non-negative quadratic function with positive-definite $A$ can be written as $$ x^TAx+b^Tx+c=\|M(x-x_0)\|^2+d^2. $$ This can be done by completing the square in $x$, or simply by identifying $M$, $x_0$ and $d$ from the equation: $A=M^TM$, $d^2$ is the minimum value ($3$ in your case), and $x_0$ is the minimizer. Now the function you are dealing with is $$ f(x)=\left\|\begin{matrix}M(x-x_0)\cr d\end{matrix}\right\|. $$ It can be proven convex directly from the definition using the triangle inequality (or as the composition of the convex norm function with an affine map).
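For the quadratic at hand, one explicit completion of the square (easy to double-check by expanding) is $$x_1^2+4x_1x_2+5x_2^2+2x_1+6x_2+5=(x_1+2x_2+1)^2+(x_2+1)^2+3,$$ which corresponds to $M=\begin{pmatrix}1&2\\0&1\end{pmatrix}$, $x_0=(1,-1)^T$ and $d^2=3$.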
Expand $(1-z)^{-m}$ in powers of $z$
Have you heard of Newton's generalized binomial theorem? http://en.wikipedia.org/wiki/Binomial_theorem#Newton.27s_generalised_binomial_theorem. According to this result, in the case $|z|<1$ the answer is simply $$ (1-z)^{-m} =\sum_{r=0}^\infty \binom{-m}{r}(-z)^r $$ Try it for yourself using the Taylor series expansion at $z=0$. The statement of the theorem is that in general $$ (x+y)^s=\sum_{r=0}^\infty \binom{s}{r}x^{s-r}y^r. $$ Note also that the binomial coefficient is well defined for $s\in\mathbb{R}$. There are several proofs available on the internet. By the way, the binomial coefficient for any real $s$ is just $$ \binom{s}{r} = \frac{s(s-1)\cdots(s-r+1)}{r!}$$ REMARK: When we say $|z|<1$, it doesn't mean that the function only has a power series expansion near $0$; it means that THE GIVEN particular series expansion is only valid in that region. BUT YOU CAN CHOOSE ANY OTHER REGION! Do this simply by taking the Taylor expansion around some other value, say $z_0=3$; then your formula will be valid for $|z-z_0|<|1-z_0|$, the distance from $z_0$ to the singularity at $z=1$ (for $z_0=3$ this means $1<z<5$ if $z$ is real, otherwise $|z-3|<2$). To do this in a faster way for the particular function you are given, you can use a simple trick with Newton's generalized theorem again: $$ (1-z)^{-m} = ((1-z_0)+(z_0-z))^{-m} = \sum_{r=0}^\infty \binom{-m}{r}(1-z_0)^{-m-r}(z_0-z)^r = \sum_{r=0}^\infty \binom{-m}{r}(-1)^{r}(1-z_0)^{-m-r}(z-z_0)^r $$ This expansion is valid for the values of $z$ such that $|z-z_0|<|1-z_0|$. Hope this helps.
How many are the boolean vectors of length $n$ having at least one element equal to $1$?
A boolean vector of length $n$ is a vector with $n$ coordinates, each equal to either $0$ or $1$. For example, $(0,0)$ is a boolean vector of length $2$. The only such vector with no element equal to $1$ is $(0,\dots,0)$, and since there are $2^n$ boolean vectors in total, there are $2^n-1$ boolean vectors with at least one element equal to $1$. Edit: Another way to put it is that a boolean vector of length $n$ is an element of $\{0, 1\}^n$.
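For instance, with $n=3$: there are $2^3=8$ vectors in $\{0,1\}^3$, and only $(0,0,0)$ has no coordinate equal to $1$, leaving $2^3-1=7$.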
Integer factorization simplification
I think what you're suggesting is wheel factorization using the first seven primes. That technique is well-known and is useful -- though smaller wheels tend to be more efficient in practice. But that wheel has 92160 elements, not 41284.
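For concreteness, here is a minimal Python sketch (mine, not part of the original answer) that counts the wheel's residues for the first seven primes; the count is Euler's totient of the primorial, which is where 92160 comes from:

```python
from math import prod

primes = [2, 3, 5, 7, 11, 13, 17]
modulus = prod(primes)  # the primorial 510510: the wheel's circumference

# the wheel keeps exactly the residues coprime to every base prime
wheel = [r for r in range(modulus) if all(r % p != 0 for p in primes)]

print(modulus, len(wheel))  # 510510 92160
```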
Show $a_n$ is unbounded if $a_n= a_{n-1} \left(1+ \frac{1}{\sqrt n}\right)$ to determine that $a_n$ diverges.
Further to my comment, starting with $$a_n = a_{n-1}\left(1+\frac{1}{\sqrt{n}}\right)=a_{n-2}\left(1+\frac{1}{\sqrt{n-1}}\right)\left(1+\frac{1}{\sqrt{n}}\right)=\\ a_0\prod\limits_{k=1}^n\left(1+\frac{1}{\sqrt{k}}\right)\geq \left(1+\frac{1}{\sqrt{n}}\right)^n= \left(\left(1+\frac{1}{\sqrt{n}}\right)^{\sqrt{n}}\right)^{\sqrt{n}} \geq ...$$ (assuming $a_0\geq1$; each factor with $k\leq n$ is at least $1+\frac{1}{\sqrt{n}}$) and applying Bernoulli's inequality $$...\geq \left(1+\frac{\sqrt{n}}{\sqrt{n}}\right)^{\sqrt{n}}=2^{\sqrt{n}}$$ As a result $$a_n \geq 2^{\sqrt{n}}$$ and the result follows, i.e. the sequence is unbounded and hence diverges.
Calculus - Calculate Work done to lift water out of tank
Your $P(x)$ is wrong because you took the base to be the top face, presumably because it is larger than the bottom face. The base is the bottom face. Instead of $5-(2/3)x$ it should be $3+(2/3)x$. Good on you for using parentheses. I didn't do the second part.
Understanding the vector space $\bigoplus_{i\in I} F$
The requirement means that the set $\{ i \in I : f(i) \neq 0_F \}$ is finite. To put in other words, $f$ takes a nonzero value only a finite number of times. For example consider the case $I = \mathbb{N}$. Then the function $f \in F^\mathbb{N}$ that has $f(0) = 1_F$ and $f(n) = 0_F$ for $n \ge 1$ is in $\bigoplus_{i \in \mathbb{N}} F$, because the set $\{ i \in \mathbb{N} : f(i) \neq 0_F \} = \{0\}$ is indeed finite. But the function $g \in F^\mathbb{N}$ given by $g(n) = 1_F$ for all $n \in \mathbb{N}$ is not in $\bigoplus_{i \in \mathbb{N}} F$, because $\{i \in \mathbb{N} : g(i) \neq 0_F\}$ is $\mathbb{N}$, which is infinite.
Reference for Fundamental Group of infinite product
There is a proof of this (for arbitrary homotopy groups, not just $\pi_1$) as Proposition 4.2 in Hatcher's Algebraic Topology, if you just want a source you can cite. The proof is only two sentences long and gives very little detail, though, so this is not a good reference if you want a detailed proof to refer readers to.
Reference request: Kuratowski convergence and convergence in Hausdorff metric
If you look in Topics on Analysis in Metric Spaces by Luigi Ambrosio & Paolo Tilli, the full proof is on pages 73 and 74.
Mean of a sampling distribution.
Your professor is incorrect. See this post. $\hat P$ is only asymptotically unbiased. However, it will converge rather quickly, so it may be practically unbiased, just like the sample standard deviation, which is very biased for small samples but quickly becomes essentially unbiased for typical sample sizes. However, you can get a UMVUE for $\frac{(1-p)}{p}$: see here. Also, to validate this empirically, just open an Excel spreadsheet and populate a column with 10 geometric RVs, then use them to calculate your estimator and compare it to $p$; you will see that you are consistently high.
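As a rough Python sketch of that empirical check (mine, not the spreadsheet recipe; it assumes the geometric distribution on $\{1,2,\dots\}$ and that the estimator in question is the MLE $\hat p = 1/\bar X$):

```python
import random
from statistics import mean

def geom(p):
    """One geometric draw on {1, 2, ...}: number of trials until first success."""
    k = 1
    while random.random() > p:
        k += 1
    return k

def avg_estimate(p, n=10, reps=20_000):
    # average of the MLE 1/sample_mean over many samples of size n
    return mean(1 / mean(geom(p) for _ in range(n)) for _ in range(reps))

print(avg_estimate(0.3))  # noticeably above 0.3: the estimator is biased high
```

By Jensen's inequality applied to the convex map $x\mapsto 1/x$, the average comes out above $p$, matching the "consistently high" observation.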
Evaluating $\int^{\pi/2}_0\sin(x)\ \cos(x)\ \mathrm dx$
HINT: $\sin{2x}=2\sin{x}\cos{x}$
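For completeness, carrying the hint through: $$\int^{\pi/2}_0\sin x\cos x\ \mathrm dx=\frac12\int^{\pi/2}_0\sin 2x\ \mathrm dx=\left[-\frac{\cos 2x}{4}\right]^{\pi/2}_0=\frac14+\frac14=\frac12.$$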
Time needed to check all possible combinations
If you want the expected time needed to write down some fraction of the possible strings, you can use the same argument as in the coupon collector's problem with $58^n$ coupons, adding up the expected times to see each new coupon until you reach whatever fraction of the coupons you are aiming for.
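A minimal Python sketch of that calculation (my illustration; the function name and the target fraction are hypothetical, while the alphabet size 58 comes from the question's setup):

```python
def expected_draws(n, fraction, alphabet=58):
    """Expected number of uniform random strings drawn before a given
    fraction of all alphabet**n strings has appeared (coupon collector)."""
    N = alphabet ** n
    target = int(fraction * N)
    # going from m distinct strings seen to m + 1 takes N/(N - m) draws on average
    return sum(N / (N - m) for m in range(target))

print(expected_draws(2, 0.5))  # roughly N * ln 2 for half of the 58**2 strings
```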
A number of men enter a disreputable establishment
Let the number of ways in which this can happen for $n$ people be $D_n.$ For each $1\le i \le n,$ let $C_i$ be the set of arrangements in which person $i$ gets both of his items back. Then by inclusion-exclusion, $D_n = (n!)^2 - \sum |C_i| + \sum |C_i\cap C_j| + \ldots$ But $|C_i| = [(n-1)!]^2,$ $|C_i\cap C_j| = [(n-2)!]^2,$ etc. Clearly the number of terms in each sum involving $k$ people is $\binom{n}{k},$ so the formula follows easily.
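Written out, the alternating sum is $$D_n=\sum_{k=0}^{n}(-1)^k\binom{n}{k}\bigl[(n-k)!\bigr]^2,$$ whose $k=0$ term is the leading $(n!)^2$.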
Fourier series of $\log(\Gamma(x))$
An alternative approach is to use infinite products for $\Gamma$ (either one), or rather the formulae for $\psi(x)=\Gamma'(x)/\Gamma(x)$ obtained by taking the logarithmic derivative. Integrating by parts, we get $$b_n=-\frac{1}{n\pi}\int_0^1(1-\cos 2n\pi x)\psi(x)\,dx$$ and, using Euler's infinite product, we find $$\psi(x)=-\frac1x+\sum_{k=1}^\infty\left[\log\left(1+\frac1k\right)-\frac{1}{k+x}\right],$$ so that \begin{align*} n\pi b_n&=\int_0^1\frac{1-\cos 2n\pi x}{x}\,dx-s_n, \\s_n&=\sum_{k=1}^\infty\left[\log\left(1+\frac1k\right)-\int_0^1\frac{1-\cos 2n\pi t}{k+t}\,dt\right] \\&=\sum_{k=1}^\infty\int_0^1\frac{\cos 2n\pi t}{k+t}\,dt\underset{k+t=x}{\phantom{[}\quad=\quad\phantom{]}}\int_1^\infty\frac{\cos 2n\pi x}{x}\,dx. \end{align*} Since $\int_0^\infty\frac{\cos ax-\cos bx}{x}\,dx=\log b-\log a$ for $a,b>0$, we obtain $$n\pi b_n=\log(2n\pi)+B,\quad B=\int_0^1\frac{1-\cos x}{x}\,dx-\int_1^\infty\frac{\cos x}{x}\,dx.$$ Finally the (known) fact that $B=\gamma$ can be taken from somewhere else. Also, the connection between $b_n$ and $b_1$ can be obtained from the multiplication theorem: $$\log\Gamma(nx)=\left(nx-\frac12\right)\log n-\frac{n-1}{2}\log(2\pi)+\sum_{k=0}^{n-1}\log\Gamma\left(x+\frac{k}{n}\right).$$ We get \begin{align*} b_1&=2\int_0^1\sin 2\pi t\log\Gamma(t)\,dt\underset{t=nx}{\phantom{[}=\phantom{]}}2n\int_0^{1/n}\sin 2\pi nx\log\Gamma(nx)\,dx \\&=2n\int_0^{1/n}\sin 2\pi nx\left(nx\log n\color{LightGray}{-(0)}+\sum_{k=0}^{n-1}\log\Gamma\left(x+\frac{k}{n}\right)\right)dx \\&=2\log n\underbrace{\int_0^1t\sin 2\pi t\,dt}_{t=nx}+2n\sum_{k=0}^{n-1}\underbrace{\int_{k/n}^{(k+1)/n}\sin 2\pi nt\log\Gamma(t)\,dt}_{t=x+k/n} \\&=-\frac{\log n}{\pi}+2n\int_0^1\sin 2\pi nt\log\Gamma(t)\,dt=nb_n-\frac{\log n}{\pi}\\\implies b_n&=\frac{b_1}{n}+\frac{\log n}{n\pi}. \end{align*}
Fast method for solving modular exponential function with semi-prime modulus
The problem of finding the smallest $x > 0$ for this is finding the multiplicative order of $10 \bmod N$, and no efficient algorithm is known. Such an $x$ can be used to obtain the factorization of a number that isn't a prime power, and this is in fact how the end of Shor's Algorithm works, assuming $x$ isn't odd and $10^{x/2}$ isn't $-1 \pmod N$, in which case you need to try a coprime base other than 10. Assuming you had such an $x$, you could compute $10^{x/2}$, which would give a number $a \not\equiv \pm 1 \pmod N$ such that $a^2 \equiv 1 \pmod N$. Then $(a + 1)(a - 1) \equiv 0 \pmod N$, and we can find the factors via $\gcd(a + 1, N)$ and $\gcd(a - 1, N)$. For any $x > 0$ that is a multiple of the multiplicative order, we could just divide it by two until we get a number we can work with, or conclude that we need a different base (e.g. reaching an odd $x$ with $10^x \equiv 1$, or a power with $10^{x/2} \equiv -1$). So, given the ability to solve this for 10 and other bases, you could factor, so at least that generalization isn't any easier.
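A small Python sketch of that reduction (my illustration; the inputs $N=33$, base $10$, and $x=2$ are a toy example, $2$ being the multiplicative order of $10$ mod $33$):

```python
from math import gcd

def factor_from_order(N, base, x):
    """Given x, a multiple of the multiplicative order of base mod N,
    try to split N as described above."""
    while x % 2 == 0:
        a = pow(base, x // 2, N)
        if a == N - 1:           # base**(x/2) = -1 mod N: need another base
            return None
        if a != 1:               # nontrivial square root of 1: split N
            return gcd(a - 1, N), gcd(a + 1, N)
        x //= 2                  # a = 1, so x/2 is still a multiple of the order
    return None                  # x became odd without yielding a factor

print(factor_from_order(33, 10, 2))  # (3, 11), since 10**2 = 100 = 1 mod 33
```

The `N - 1` check is exactly the bad case mentioned above, where $10^{x/2} \equiv -1$ and a different base is needed.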
Finding all the subgroups of a cyclic group
The conjecture above is true. To prove it we need the following result: Lemma: Let $G$ be a group and $x \in G$. If $o(x) = n$ and $\operatorname{gcd}(m, n) = d$, then $o(x^m) = \frac{n}{d}$ Here now is a proof of the conjecture. Proof: Let $G = \langle x \rangle$ be a finite cyclic group of order $n$, then we have $o(x) = n$. Choose a subgroup $H \leq G$, by theorem $(1)$ mentioned in the question above, $|H| = m$ where $m$ is some divisor of $n$. Since $m | n$ (and both $m$ and $n$ are positive integers), there exists a $d \in \mathbb{N}$ such that $md = n \iff \frac{n}{d}=m$. Note also that $d$ is a divisor of $n$. By the above lemma and the fact that $\operatorname{gcd}(d, n) = d$ (since $d$ is a divisor of $n$) it follows that $o(x^d) = \frac{n}{d} = m$. Hence the subgroup $\langle x^d \rangle$ has order $m$. But since by theorem $(1)$ there is only one subgroup of order $m$ in $G$ we must have $H = \langle x^d \rangle$. Thus any subgroup of $G$ is of the form $\langle x^d \rangle$ where $d$ is a positive divisor of $n$. $\ \square$ The above conjecture and its subsequent proof allows us to find all the subgroups of a cyclic group once we know the generator of the cyclic group and the order of the cyclic group.
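To make this concrete, take $G=\langle x\rangle$ of order $n=12$. The positive divisors of $12$ are $1,2,3,4,6,12$, so the complete list of subgroups is $$\langle x\rangle,\ \langle x^2\rangle,\ \langle x^3\rangle,\ \langle x^4\rangle,\ \langle x^6\rangle,\ \langle x^{12}\rangle=\{e\},$$ of orders $12,6,4,3,2,1$ respectively.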
Multiplication Simplification in a Composition Function
You just made a mistake which is probably confusing you: $5^4=625$ already. And $5^2=25$, but you still have to multiply by the factor $6$... To raise a product to a power, you have to raise each factor to that power. You only raised the unknown variable to the power.
Expectation of pairwise differences between uniform random points in hypercube
The asymptotics is $1/\sqrt6=0.40824829\ldots$ To see this, consider i.i.d. random variables $X_i$ and $Y_i$ uniform on $[0,1]$ and write the quantity to be computed as $I_k=\mathrm E\left(\sqrt{Z_k}\right)$ with $$ Z_k=\frac1k\sum_{i=1}^k(X_i-Y_i)^2. $$ By the strong law of large numbers for i.i.d. bounded random variables, when $k\to\infty$, $Z_k\to z$ almost surely and in every $L^p$, with $z=\mathrm E(Z_1)$. In particular, $I_k\to \sqrt{z}$. Numerically, $$ z=\iint_{[0,1]^2}(x-y)^2\mathrm{d}x\mathrm{d}y=2\int_0^1x^2\mathrm{d}x-2\left(\int_0^1x\mathrm{d}x\right)^2=2\cdot\frac13-2\left(\frac12\right)^2=\frac16. $$ Edit (This answers a different question asked by the OP in the comments.) Consider the maximum of $n\gg1$ independent copies of $kZ_k$ with $k\gg1$ and call $M_{n,k}$ its square root. A heuristic to estimate the typical behaviour of $M_{n,k}$ is as follows. By the central limit theorem (and in a loose sense), $Z_k\approx z+N\sqrt{v/k}$ where $v$ is the variance of $Z_1$ and $N$ is a standard gaussian random variable. In particular, for every given nonnegative $s$, $$ \mathrm P\left(Z_k\ge z+s\right)\approx\mathrm P\left(N^2\ge ks^2/v\right). $$ Furthermore, the typical size of $M_{n,k}^2/k$ is $z+s$ where $s$ solves $\mathrm P(Z_k\ge z+s)\approx1/n$. Choose $q(n)$ such that $\mathrm P(N\ge q(n))=1/n$, that is, $q(n)$ is a so-called quantile of the standard gaussian distribution. Then, the typical size of $M_{n,k}^2$ is $k(z+s)$ where $s$ solves $ks^2/v=q(n)^2$. Finally, $$ M_{n,k}\approx \sqrt{kz+q(n)\sqrt{kv}}. $$ Numerically, $z=1/6$, $v=7/180$, and you are interested in $k=1000$. For $n=10\,000$, $q(n)=3.719$ yields a typical size $M_{n,k}\approx13.78$, and for $n=100\,000$, $q(n)=4.265$, hence $M_{n,k}\approx13.90$ (these should be compared to the values you observed). To make rigorous such estimates and to understand why, in a way, $M_{n,k}$ concentrates around the typical value we computed above, see here.
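A quick Monte Carlo sanity check of the first claim in Python (my own sketch; the trial count is arbitrary):

```python
import random

def mean_root(k, trials=2_000):
    """Monte Carlo estimate of I_k = E[sqrt((1/k) * sum (X_i - Y_i)^2)]."""
    total = 0.0
    for _ in range(trials):
        s = sum((random.random() - random.random()) ** 2 for _ in range(k))
        total += (s / k) ** 0.5
    return total / trials

print(mean_root(1000))  # approximately 0.408, i.e. 1/sqrt(6)
```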
Is it obvious that $P(X_1 < X_3 | X_1 < X_2) = \int_0^R f_{X_1}(x_1 | X_1 < X_2)P(x_1 < X_3) dx_1 dx_2dx_3$ without Bayes rule?
Is it some variant of the law of total probability applied to a conditional probability? Yes. It also relies on the fact that the variables are independent so the events $X_1<X_3$ and $X_1<X_2$ are conditionally independent for a given value of $X_1$. $$\require{cancel}\color{red}{\xcancel{\begin{align}\mathsf P(X_1<X_3\mid X_1=x, X_1<X_2) &=\mathsf P(X_1<X_3\mid X_1=x)\\[1ex] &=\mathsf P(x<X_3)\end{align}}}$$ Then $$\color{red}{\xcancel{\begin{align}\mathsf P(X_1<X_3\mid X_1<X_2)&=\int_0^R f_{X_1}(x)\,\mathsf P(X_1<X_3\mid X_1=x, X_1<X_2)\,\mathrm d x\\&=\int_0^R f_{X_1}(x)\,\mathsf P(x<X_3)\,\mathrm d x\\&=\int_0^R \tfrac 1R\cdot\tfrac{R-x}{R}\,\mathrm d x&&\text{If uniformly distributed}\\&=\tfrac 12\end{align}}}$$ Hold on, something seems wrong here. When given that $X_1$ is small, the conditional probability that $X_3$ exceeds it should be greater than the unconditional probability. Ah, indeed. In fact the events $X_1<X_2$ and $X_1<X_3$ are not conditionally independent when given $X_1$. Rather, the events $x<X_2$ and $x<X_3$ are conditionally independent when given $X_1=x$ (since the random variables are mutually independent). $$\begin{align}\mathsf P(X_1<X_3\mid X_1<X_2)&=\dfrac{\mathsf P(X_1< X_2\cap X_1<X_3)}{\mathsf P(X_1<X_2)}\\&=2 \mathsf P(X_1<X_2\cap X_1<X_3)\\&=2\int_0^R f_{X_1}(x)\,\mathsf P(x < X_2\cap x< X_3\mid X_1=x)\,\mathrm d x\\&=2\int_0^R f_{X_1}(x)\,\mathsf P(x < X_2)\mathsf P(x< X_3)\,\mathrm d x\\&=2\int_0^R R^{-3}(R-x)^2\,\mathrm d x &&\text{If uniformly distributed}\\&=\tfrac 23\end{align}$$ So you should have been given: $$\mathsf P(X_1<X_3\mid X_1<X_2)=\dfrac{\int_0^R f_{\small X_1}(x)\,\mathsf P(x<X_2)\,\mathsf P(x<X_3)\,\mathrm d x}{\int_0^R f_{\small X_1}(x)\,\mathsf P(x<X_2)\,\mathrm d x}$$
How do I show that $a \mid b$ and $a \mid c$ implies that $a \mid (b+c)$?
If $a |b $ and $a|c$, there are integers $k,l \in \mathbb{Z}$ with $ak = b$ and $al = c$. Then $b+c = ak + al = a(k+l)$ so we see that $a|(b+c)$. Simple, right?
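For a concrete instance: $7\mid 14$ and $7\mid 21$, with $14=7\cdot2$ and $21=7\cdot3$, so $14+21=7(2+3)=35$ is again divisible by $7$.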