How to Integrate by Parts when One Function Is a Polynomial?
You cannot reason from even/odd identities alone, because $u(x) = x - 2$ is neither even nor odd, and you are integrating the product of two functions. Let us use integration by parts with the tabular method. Our "differentiating" column runs $$x - 2$$ $$1$$ $$0.$$ Our "integrating" column runs $$\cos(x)$$ $$\sin(x)$$ $$-\cos(x).$$ The antiderivative is $$(x - 2)\sin(x) + \cos(x).$$ Evaluating, we have $$\int_{-a}^{a} (x - 2)\cos(x)\, dx = \left[(x - 2)\sin(x) + \cos(x)\right]_{-a}^{a} = ((a - 2)\sin(a) + \cos(a)) - ((-a - 2)\sin(-a) + \cos(-a)).$$ Simplifying, and now using even/odd identities, our final answer is $$\boxed{-4\sin(a)}.$$ Hope this was helpful.
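A quick numeric sanity check of the boxed answer (plain Python; the midpoint-rule helper is mine, not part of the original answer):

```python
import math

def integral(f, a, b, n=200_000):
    """Composite midpoint rule."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

a = 1.3  # arbitrary endpoint
lhs = integral(lambda x: (x - 2) * math.cos(x), -a, a)
rhs = -4 * math.sin(a)
assert abs(lhs - rhs) < 1e-6  # matches the closed form -4 sin(a)
```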
Definition of algebraically closed field.
1) The definition is right. 2) No, that's not the same thing. 3) That's wrong: consider $x(x^2+1)$ over the reals. It has a real root but does not split into linear factors over $\Bbb R$, so $\Bbb R$ is not algebraically closed.
Prove equivalence between sums of complex numbers.
Let the common modulus be $r >0$. Then $\frac{1}{z_j} = \frac{\overline{z_j}}{r^2}$ and therefore $$ \sum_{i=1}^n\sum_{j=1}^n\left( \frac{z_i}{z_j} \right) = \left( \sum_{i=1}^n z_i \right) \left( \sum_{j=1}^n \frac{1}{z_j} \right) \\ = \left( \sum_{i=1}^n z_i \right) \left( \sum_{j=1}^n \frac{\overline{z_j}}{r^2} \right) = \frac{1}{r^2} \left| \sum_{i=1}^n z_i \right|^2 \, \in\Bbb R . $$ It follows that the expression on the left-hand side is a real number, and $$ \Re\left( \sum_{i=1}^n\sum_{j=1}^n\left( \frac{z_i}{z_j} \right)\right) = \sum_{i=1}^n\sum_{j=1}^n\left( \frac{z_i}{z_j} \right) = \frac{1}{r^2} \left| \sum_{i=1}^n z_i \right|^2 $$ which is zero exactly if $\sum_{i=1}^n z_i = 0$.
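A numeric illustration of the identity (the sample radius and random angles are arbitrary choices of mine):

```python
import cmath
import math
import random

random.seed(0)
r, n = 2.5, 6
# n complex numbers of common modulus r
zs = [r * cmath.exp(1j * random.uniform(0, 2 * math.pi)) for _ in range(n)]

S = sum(zi / zj for zi in zs for zj in zs)
total = sum(zs)
assert abs(S.imag) < 1e-9                          # the double sum is real
assert abs(S.real - abs(total) ** 2 / r**2) < 1e-9  # equals |sum z_i|^2 / r^2
```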
how to find the other angles of an oblique triangle, given two sides and one angle
Hint: Cosine rule. Let the sides of the triangle have lengths $a,b,c$, and let $\alpha$ be the angle opposite side $a$. Then: $$a^2=b^2+c^2-2bc\cos(\alpha).$$
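For the side-angle-side case, the hint can be turned into a small solver (a sketch of mine, assuming the given angle is the one included between the two known sides):

```python
import math

def solve_sas(b, c, alpha):
    """Given sides b, c and the included angle alpha (radians, opposite a),
    return (a, beta, gamma) via the cosine rule."""
    a = math.sqrt(b * b + c * c - 2 * b * c * math.cos(alpha))
    beta = math.acos((a * a + c * c - b * b) / (2 * a * c))  # angle opposite b
    gamma = math.pi - alpha - beta
    return a, beta, gamma

# 3-4-5 right triangle as a check
a, beta, gamma = solve_sas(3.0, 4.0, math.pi / 2)
assert abs(a - 5.0) < 1e-12
assert abs(beta + gamma + math.pi / 2 - math.pi) < 1e-12  # angles sum to pi
```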
Is $(\mathbb{Q},+)$ the direct product of two non-trivial subgroups?
HINT: If $\Bbb Q=H\times G$, then there is an obvious homomorphism $h:\Bbb Q\to G$ with kernel $H$. Now consider this earlier question.
Why is subspace $\mathcal{C}$ the intersection of the kernels of $n-d$ linear forms?
Extend $e_1, \ldots, e_d$ to a basis $e_1, \ldots, e_n$ of $F_q^n$. For each $i$ between $1$ and $n$, define a linear form $e_i^*$ by its action on the basis: let $e_i^*(e_j)$ be $0$ when $i \neq j$ and $1$ when $i = j$. The linear forms $e_1^*, \ldots, e_n^*$ form the dual basis to the basis $e_1, \ldots, e_n$. I claim that $\mathcal{C} = \bigcap_{i = d + 1}^n \ker e^*_i$. Note that if $1 \le j \le d$ and $d + 1 \le i \le n$, then $$e_i^*(e_j) = 0 \implies e_j \in \ker e_i^*,$$ hence $$\mathcal{C} = \operatorname{span}(e_1, \ldots, e_d) \subseteq \bigcap_{i=d+1}^n \ker e_i^*.$$ Conversely, suppose $x \in \bigcap_{i=d+1}^n \ker e_i^*$. Since $x \in F_q^n$, we have $x = a_1 e_1 + \ldots + a_n e_n$ for some scalars $a_1, \ldots, a_n \in F_q$. We have, for $d + 1 \le i \le n$, $$0 = e_i^*(x) = a_1 e_i^*(e_1) + \ldots + a_{i - 1} e_i^*(e_{i - 1}) + a_i e_i^*(e_i) + a_{i + 1} e_i^*(e_{i + 1}) + \ldots + a_n e_i^*(e_n) = a_i,$$ hence $$x = a_1 e_1 + \ldots + a_d e_d + 0 + \ldots + 0 \in \mathcal{C}.$$ Thus, $\mathcal{C}$ can indeed be expressed as the intersection of the kernels of $n - d$ linear forms.
Show that the function $d$ defines a metric on $\mathbb Z$.
If either $x=y$ or $y=z$, the triangle inequality is clear. Otherwise, $d(x,y)=\frac{1}{n+1}$, where $n$ is the number of common trailing digits of $x$ and $y$ (assuming they have the same sign), and likewise for $d(x,z)$ and $d(y,z)$. Now suppose that $x$ and $y$ have $m$ common trailing digits, and $y$ and $z$ have $n$ common trailing digits. If $m=n$, then $x$ and $z$ have at least $n$ common trailing digits, say $p \ge n$. Then $d(x,z)=\frac{1}{p+1} \le \frac{1}{n+1} \le \frac{1}{n+1}+\frac{1}{n+1}=d(x,y)+d(y,z)$. Otherwise, $x$ and $z$ have exactly $p:=\min(m,n)$ common trailing digits, and then $d(x,z)=\frac{1}{p+1}=\max\left(\frac{1}{m+1}, \frac{1}{n+1}\right)=\max(d(x,y),d(y,z))$. Hence we in fact have an ultrametric space.
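The ultrametric inequality can be checked exhaustively on a small range (my own sketch, restricted to nonnegative integers since the answer assumes equal signs):

```python
from itertools import product

def trailing(x, y):
    """Number of common trailing decimal digits of nonnegative integers."""
    n = 0
    for cx, cy in zip(str(x)[::-1], str(y)[::-1]):
        if cx != cy:
            break
        n += 1
    return n

def d(x, y):
    return 0 if x == y else 1 / (trailing(x, y) + 1)

# strong (ultrametric) triangle inequality on 0..59
for x, y, z in product(range(60), repeat=3):
    assert d(x, z) <= max(d(x, y), d(y, z))
```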
Can a factor of a number be divided by a number that will not evenly divide the original factored number?
It's not possible. If $a$ divides $b$ (i.e., $b=ka$) and $b$ divides $c$ (i.e., $c=nb$), then $a$ divides $c$ (because $c=nka$).
Find the integral $\frac{d }{d x}\int_{\sin x}^{\cos x}\cos ( \pi t^2)\, d t $
$$\int_a^b \cos ( \pi t^2)\,\mathrm d t = \frac{1}{\sqrt{2}}\left[C(\sqrt{2} b)-C(\sqrt{2}a)\right].$$ This is a Fresnel integral. $$\int_{\sin x}^{\cos x}\cos ( \pi t^2)\,\mathrm d t =\frac{1}{\sqrt{2}}\left[C(\sqrt{2} \cos x)-C(\sqrt{2}\sin x)\right].$$ $$\frac{\mathrm{d} }{\mathrm{d} x}\int_{\sin x}^{\cos x}\cos ( \pi t^2)\,\mathrm d t =-\sin x \, C^\prime(\sqrt{2} \cos x)- \cos x \, C^\prime(\sqrt{2}\sin x) $$ $$= -\sin x \cdot \cos (\pi \, \cos^2x) - \cos x \cdot \cos (\pi \, \sin^2x),$$ or just use the fundamental theorem of calculus, as you did.
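A numeric cross-check of the final formula (my own sketch: quadrature plus a central difference, no Fresnel functions needed):

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson's rule (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def F(x):
    return simpson(lambda t: math.cos(math.pi * t * t), math.sin(x), math.cos(x))

x, h = 0.7, 1e-4
numeric = (F(x + h) - F(x - h)) / (2 * h)   # numerical derivative of the integral
ftc = (-math.sin(x) * math.cos(math.pi * math.cos(x) ** 2)
       - math.cos(x) * math.cos(math.pi * math.sin(x) ** 2))
assert abs(numeric - ftc) < 1e-6
```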
Find all arithmetic functions f satisfying the given relation
"Should I start with the definition of convolution?" Yes. $(f*g)(n)= \sum_{d\mid n}f(d)g(n/d)$. So if $(f*f)(n) =e(n)$, then $(f*f)(1) = 1$; but $(f*f)(1) = f(1)f(1)$, so $f(1) = \pm 1$. If $p$ is prime, then $(f*f)(p) = f(1)f(p) + f(p)f(1) = \pm 2f(p)=0$, so $f(p) = 0$. Now use strong induction. Suppose $f(mp) = 0$ for all $m \le n$ (we did the base case $n = 1$); can we show $f((n+1)p)=0$? $$(f*f)((n+1)p) = f(1)f((n+1)p) + \sum_{d\mid n}f(dp)f(n/d) + \sum_{d\mid n}f(d)f(pn/d) + f((n+1)p)f(1)$$ $$= f(1)f((n+1)p) + \sum_{d\mid n}0\cdot f(n/d) + \sum_{d\mid n}f(d)\cdot 0 + f((n+1)p)f(1) = \pm 2f((n+1)p)=0,$$ so $f((n+1)p) = 0$. Thus $f(n) = 0$ for every $n$ of the form $mp$ with $p$ prime, which is to say for all $n \ne 1$. So $f(1) = \pm 1$ and $f(n) = 0$ if $n \ne 1$. So $f(n) = \pm e(n)$.
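A brute-force confirmation that both candidates $f = \pm e$ really satisfy $f*f = e$ (my own sketch of the Dirichlet convolution):

```python
def conv(f, g, n):
    """Dirichlet convolution (f*g)(n) for dict-valued arithmetic functions."""
    return sum(f[d] * g[n // d] for d in range(1, n + 1) if n % d == 0)

N = 60
e = {n: (1 if n == 1 else 0) for n in range(1, N + 1)}
for sign in (1, -1):
    f = {n: sign * e[n] for n in range(1, N + 1)}
    assert all(conv(f, f, n) == e[n] for n in range(1, N + 1))
```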
Find the smallest integer constant $c$ such that $f(n) = \mathcal{O}(n^c )$
$O(g(n))$ is often defined in terms of limits (or $\limsup$), but this definition will work: $f(n)=O(g(n))$ if there exists a $C$ so that $\frac{f(n)}{g(n)}\leq C$ for all $n$. In your first problem, you're trying to find $c$ (and, along the way, $C$) so that $$ \frac{\frac{1}{2}n^2}{n^c}\leq C $$ for all $n$. Equivalently, you need $$ n^{2-c}\leq 2C=K $$ (we can define $2C$ as a new constant $K$ because both are just constants). Therefore, you need to know the smallest $c$ for which $n^{2-c}$ is bounded. If $c<2$, then the exponent is positive and the quantity grows as $n$ does. If $c=2$, then the exponent is $0$ and $n^0=1$, which is bounded. Therefore, the answer is $c=2$. In your second problem, the answer is also $c=2$. In $$ \frac{n(\log_2 n)^3}{n^c}, $$ if $c\leq 1$, then the fraction is $n^{1-c}(\log_2 n)^3$, which grows because the exponent on $n$ is nonnegative and $\log_2 n$ is increasing. On the other hand, for any $c>1$ the quantity is bounded (take the limit as $n$ approaches $\infty$). Therefore, the smallest integer that works is $c=2$.
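A small numeric illustration (mine) that the ratio $\frac{n(\log_2 n)^3}{n^2}=\frac{(\log_2 n)^3}{n}$ indeed shrinks, so $c=2$ bounds it:

```python
import math

def ratio(n):
    # f(n) / n^c with c = 2, i.e. (log2 n)^3 / n
    return n * math.log2(n) ** 3 / n**2

# eventually decreasing toward 0
assert ratio(10**6) < ratio(10**4) < ratio(10**2)
assert ratio(10**9) < 1e-4
```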
Why eigenfunction cannot be identically zero in this question?
An eigenvector $v$ with corresponding eigenvalue $\lambda$ for a linear operator $L$ is a nonzero vector such that the following holds: $$Lv=\lambda v$$ That is to say that the operator $L$ acts on $v$ in the same way as multiplication by a scalar does. Eigenfunctions are exactly the same as eigenvectors, the only difference being the name eigenfunction implies that we are talking specifically about a function space as opposed to an arbitrary vector space. Function spaces are vector spaces, just specifically a vector space whose vectors are themselves functions. We require that eigenvectors be nonzero because the zero vector trivially satisfies $L0=\lambda 0$ and this is incredibly uninteresting. The interesting properties and theories surrounding eigenvectors and eigenvalues are those situations where there is a corresponding eigenspace (a linear subspace of our vector space spanned entirely by eigenvectors for that corresponding eigenvalue) of dimension strictly greater than zero. As such, we opt to not call the zero vector an eigenvector as its existence alone as a solution to $Lv=\lambda v$ does not imply anything about the dimension of the eigenspace for $\lambda$.
Limit using L'Hopital's Rule
Let $u(x)=\sqrt{x-1}^{\sin(\pi x)}$. We have \begin{equation} \begin{array}{lll} \lim_{x\rightarrow 1^+}\ln u(x)&=&\lim_{x\rightarrow 1^+}\sin(\pi x)\ln(\sqrt{x-1})\\ &=&\frac{1}{2}\lim_{x\rightarrow 1^+}\frac{\ln(x-1)}{\frac{1}{\sin(\pi x)}}\\ &\overset{H^\prime}{=}&\frac{1}{2}\lim_{x\rightarrow 1^+}\frac{\frac{1}{x-1}}{\frac{-\pi\cos(\pi x)}{\sin^2(\pi x)}}\\ &=&\frac{1}{2}\lim_{x\rightarrow 1^+}\frac{-\sin^2(\pi x)}{\pi(x-1)\cos(\pi x)}\\ &=&\frac{1}{2}\lim_{x\rightarrow 1^+}\frac{-\sin^2[\pi (x-1)]}{\pi(x-1)\cos(\pi x)}\\ &=&\frac{1}{2}\lim_{x\rightarrow 1^+}\frac{-\pi^2 (x-1)^2}{\pi(x-1)\cos(\pi x)}\\ &=&0. \end{array} \end{equation} Hence $\displaystyle\lim_{x\rightarrow 1^+}\sqrt{x-1}^{\sin(\pi x)}=\lim_{x\rightarrow 1^+}u(x)=\lim_{x\rightarrow 1^+}e^{\ln u(x)}=e^{\displaystyle\lim_{x\rightarrow 1^+}\ln u(x)}=e^0=1.$
Surjective Map between UFD and PID
Suppose that such a surjection $f$ exists, and let $X=f(P)$, $Y=f(Q)$. Let $D=\gcd(P,Q)$ and write $P=AD$, $Q=BD$. Then $X=f(A)f(D)$ and $Y=f(B)f(D)$, so $f(D)$ divides both $X$ and $Y$, and $f(D)\in k$. Writing $D=UP+VQ$ gives $f(D)=Xf(U)+Yf(V)$. This is impossible, since $Xf(U)+Yf(V)$ is constant only if it is $0$, and that would imply $f(D)=X=Y=0$.
Prove that $ u_1 + u_5 = 2u_3 $.
$u_5 = u_1 + 4d$ and $u_3 = u_1 + 2d$, where $d$ is the common difference. So $$u_1 + u_5 = 2u_1 + 4d = 2(u_1 + 2d) = 2u_3.$$
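The identity holds for any first term and common difference; a trivial check (mine):

```python
def check(u1, d):
    """u1 + u5 == 2*u3 for an arithmetic sequence with difference d."""
    u3 = u1 + 2 * d
    u5 = u1 + 4 * d
    return u1 + u5 == 2 * u3

assert all(check(u1, d) for u1 in (-3.0, 0.0, 2.5) for d in (-1.0, 0.0, 4.0))
```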
Let $S$ be an integral extension ring of $R$. Then $S[x_1,\ldots, x_n]$ is an integral extension ring of $R[x_1,\ldots, x_n]$.
To show an extension ring is an integral extension, it suffices to prove that it is generated (as a ring) by integral elements. Here $S[X_1,\ldots,X_n]$ is generated by the elements of $S$ and the $X_i$. Each element of $S$ is integral over $R$ and a fortiori over $R[X_1,\ldots,X_n]$. The $X_i$ are elements of $R[X_1,\ldots,X_n]$ and so are integral over $R[X_1,\ldots,X_n]$. We conclude that $S[X_1,\ldots,X_n]$ is integral over $R[X_1,\ldots,X_n]$.
Are the stationary probabilities of an irreducible Markov chain zero, infinite or something else?
Existence of a stationary distribution is in fact equivalent to positive recurrence (not just recurrence) for an irreducible Markov chain. If the chain is only assumed recurrent and still irreducible, then we can find a unique stationary measure $\pi$ where $0 < \pi_i < \infty$ for all $i \in S$, but we potentially have $\sum_{i \in S} \pi_i = \infty$. As for proof of these facts, they can be found in most textbooks on discrete time Markov chains, for example Grimmett and Welsh's "Probability: An Introduction".
Probability that A wins a best of 7 (4 matches)
I don't see any way to go with your approach without utilizing the book's approach, but I will do so anyway. For $G_5$ to occur, either team $A$ needs to win $3$ of the first $4$ games and then win game $5$, with probability $$\binom{4}{3}p^{4}q,$$ or team $B$ needs to win $3$ of the first $4$ games and then win game $5$, with probability $$\binom{4}{3}q^{4}p,$$ so $$P(G_5)=\binom{4}{3}p^{4}q+\binom{4}{3}q^{4}p$$ and $$P(A\mid G_5)=\frac{\binom{4}{3}p^{4}q}{\binom{4}{3}p^{4}q+\binom{4}{3}q^{4}p}$$ Then $$P(A|G_{5})P(G_{5})=\frac{\binom{4}{3}p^{4}q}{\binom{4}{3}p^{4}q+\binom{4}{3}q^{4}p}\cdot\left(\binom{4}{3}p^{4}q+\binom{4}{3}q^{4}p\right)=\binom{4}{3}p^{4}q$$ Of course it's easier to just note that team $A$ must win $3$ of the first $4$ games and then win the $5^{th}$ game. This is a negative binomial with $n$ trials given $k$ successes where $n=5$ and $k=4$. We have $$P(X=n)={n-1 \choose k-1}p^kq^{n-k}={4 \choose 3}p^4q$$
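An exhaustive enumeration over all $2^5$ game sequences confirms the formula (my own sketch; $p=0.6$ is an arbitrary choice):

```python
from itertools import product
from math import comb, isclose

p, q = 0.6, 0.4
prob_A_wins_in_5 = 0.0
for seq in product("AB", repeat=5):  # outcomes of games 1..5
    prob = 1.0
    for g in seq:
        prob *= p if g == "A" else q
    # A wins exactly 3 of the first 4, then wins game 5
    if seq[:4].count("A") == 3 and seq[4] == "A":
        prob_A_wins_in_5 += prob

assert isclose(prob_A_wins_in_5, comb(4, 3) * p**4 * q)
```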
Confusion about convergence of series and improper integral
A recent exercise I came up with myself. I am not yet very mathematically mature, so please check. Proposition. If $f: \mathbb{R} \to \mathbb{R}$ is nonnegative and uniformly continuous on $(1, \infty)$ and $\int_1^\infty f(x)\, dx$ converges, then $\lim_{x \to \infty} f(x) = 0$. Proof. Assume for contradiction that $\lim_{x \to \infty} f(x) \neq 0$. Then by definition there exists $\epsilon \gt 0$ such that for every $m \gt 0$ there is some $x \gt m$ with $f(x) \ge \epsilon$. Thus we can construct a strictly increasing sequence $1 \lt x_1 \lt x_2 \lt x_3 \lt \ldots$ with $x_n \to \infty$ as $n \to \infty$ and $f(x_n) \ge \epsilon$ for $n = 1, 2, 3,\ldots$. Moreover, we can construct the sequence so that $x_n + 1 \lt x_{n + 1}$ for all $n$. Now by uniform continuity there is some $\delta$ with $0 \lt \delta \lt \frac{1}{2}$ such that $\lvert f(t) - f(x_n) \rvert \le \frac{1}{3}\epsilon$ for every $t \in (x_n - \delta, x_n + \delta)$ and every $n$. Thus $f(t) \ge f(x_n) - \lvert f(t) - f(x_n) \rvert \ge \epsilon - \frac{1}{3}\epsilon = \frac{2}{3}\epsilon$ on each such interval, hence $\int_{x_n - \delta}^{x_n + \delta} f(t)\, dt \ge 2\delta\cdot\frac{2}{3}\epsilon = \frac{4}{3}\epsilon\delta$ for every $n$. Since $\delta \lt \frac12$ and $x_{n+1} \gt x_n + 1$, these intervals are pairwise disjoint, so $\int_1^\infty f(x)\, dx \ge \sum_{n = 1}^{\infty} \int_{x_n - \delta}^{x_n + \delta} f(t)\, dt \ge \sum_{n = 1}^{\infty} \frac{4}{3}\epsilon\delta,$ which clearly diverges, a contradiction. Hence $\lim_{x \to \infty} f(x) = 0$ as desired. QED. Note: without uniform continuity a counterexample can be constructed from triangular spikes, making the $n$-th triangle have area ${(\frac{1}{2})}^n$.
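One concrete version of the triangular-spike counterexample from the Note (my own parametrization: a spike of height $1$ and half-width $2^{-n}$ at each integer $n\ge 2$, so the $n$-th area is $2^{-n}$):

```python
def f(x):
    """Triangle spike of height 1, half-width 2**(-n), centered at each n >= 2."""
    n = round(x)
    if n < 2:
        return 0.0
    h = 2.0 ** (-n)                      # half-widths <= 1/4, so spikes are disjoint
    return max(0.0, 1.0 - abs(x - n) / h)

# f(n) = 1 at every spike, so f(x) does not tend to 0 ...
assert all(f(n) == 1.0 for n in range(2, 30))
# ... yet the total area is the convergent sum of the triangle areas 2**(-n)
total_area = sum(2.0 ** (-n) for n in range(2, 60))
assert total_area < 0.51
```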
Factoring real polynomials with no real zeros, and other polys whose zeros come in pairs
The answer to your first question is NO. Consider for example $p(x)=(x-1)^6+x^2+1$. Then PARI/GP tells us that the Galois group of this polynomial is $S_6$, which is not a solvable group, so $p$ cannot be solved by radicals.
If $X$ is $\sigma(\mathcal{G} \cup \mathcal{H})$-measurable can we write $X=f(G,H)$?
What is true, and this may be sufficient for your purposes, is that under the stated conditions there is a $\mathcal G\otimes\mathcal H$-measurable map $Z:\Omega\times\Omega\to\Bbb R$ such that $X(\omega) = Z(\omega,\omega)$ for all $\omega\in\Omega$. (And conversely, because $\omega\mapsto(\omega,\omega)$ is $\sigma(\mathcal G\cup\mathcal H)/\mathcal G\otimes\mathcal H$-measurable.)
Fixed point of $f:[0, 4] \to [1,3]$ such that $f'(x) \neq 1$
First note that $f(x)-x$ is positive at $x=0$ (since $f(0)\ge 1$) and negative at $x=4$ (since $f(4)\le 3 < 4$), and hence, by the intermediate value property, $f$ has at least one fixed point. Next, by Darboux's Theorem, $f'$ satisfies the Intermediate Value Property. Since $f'(x) \neq 1$ for all $x \in [0,4]$, you have either $f'(x) <1$ for all $x \in [0,4]$ or $f'(x) >1$ for all $x \in [0,4]$. Show that the second is not possible, and hence $f'(x) <1$ for all $x \in [0,4]$. Now you can prove that the fixed point is unique. Indeed, if $a,b$ are fixed points, apply the Mean Value Theorem on $[a,b]$.
Question on one step in proof of prime factorization
I presume $[a]=[b]$ means $a\equiv b\pmod p$. In that case $[p^{i-j}q_1\cdots q_k]=[0]$ since $p$ is a factor of $p^{i-j}q_1\cdots q_k$.
The probability of choosing topics
No. Not quite. You need to ensure there is exactly one topic not discussed; not more and not less. Count the ways to: select the topic not to be discussed; select two people to double up on one topic; select the topic they choose; select topics for each of the three remaining people (without repetition). That's the size of your favoured space out of all $5^5$ (equally probable) possibilities. Alternatively, there are $(4^5- 4\cdot 3^5 + 6\cdot 2^5-4)$ ways for five people to select topics so that all of four given topics are used (a surjection onto those four topics), by the Principle of Inclusion and Exclusion. Multiply this by the count of ways to select the topic not discussed.
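Both counts agree with a brute-force enumeration over all $5^5$ assignments (my own check):

```python
from itertools import product
from math import comb, factorial

topics = range(5)
count = sum(
    1
    for choice in product(topics, repeat=5)  # each of 5 people picks a topic
    if len(set(choice)) == 4                 # exactly one topic undiscussed
)
# direct count: excluded topic, doubled-up pair, their topic, the remaining three
assert count == 5 * comb(5, 2) * 4 * factorial(3)
# inclusion-exclusion count of surjections onto 4 given topics, times 5
assert count == 5 * (4**5 - 4 * 3**5 + 6 * 2**5 - 4)
```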
Real irrational algebraic numbers "never repeat"
Still not sure if we're talking about the same thing, but this is what I found. Let $r$ be a non-negative real. If $r \ne 1, 2$ then there is a number base in which the first two digits of $r$ are the same. The proof below is partly by computer program. If $r \ge 3$ the result is clear. Suppose $2 < r < 3$ and let $f$ be the fractional part of $r$. Then for a number base $b \ge 3$, the representation of $r$ begins $2.2$ iff $f \in [2/b, 3/b)$. Since $b > 2$ we have $3/(b + 1) > 2/b$, so the intervals for $b = 3, 4, \ldots$ cover the whole of $(0, 1).$ If $1 < r < 2$ the proof is similar, using bases $b \ge 2$. The case $0 \le r < 1$ is the most interesting. If $b \ge 2$ is a number base and $d$ is a digit such that $0 \le d < b$, then the representation of $r$ in base $b$ begins $.dd$ iff $$r \in [(bd + d)/b^2, (bd + d + 1)/b^2).$$ So we have intervals $[0, 1/4)$, $[3/4, 1)$, then $[0, 1/9)$, $[4/9, 5/9)$, $[8/9, 1)$, etc. The question is, do these cover the interval $[0, 1)$? The answer isn't obvious, so I wrote a computer program to find out (using integer arithmetic throughout, to avoid rounding errors). The answer is that the intervals for $2 \le b \le 50$ cover $(0, 1)$, and $50$ is the least upper limit that will work. In case this unexpected result was due a bug, I wrote programs to generate random reals and random rationals in $(0, 1)$ and look for a base in which the first two digits were the same. Such a base could always be found, and the maximum base required over $10^8$ trials was $50$.
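A minimal sketch of the digit check described above (not the author's original program), using exact rational arithmetic to avoid rounding errors as the answer recommends; the sample values are arbitrary:

```python
import math
from fractions import Fraction

def repeated_base(r, max_base=50):
    """Smallest base b <= max_base in which the first two digits of
    r in (0,1) coincide, or None if there is none."""
    for b in range(2, max_base + 1):
        d1 = math.floor(r * b)          # first digit after the point
        d2 = math.floor(r * b * b) % b  # second digit
        if d1 == d2:
            return b
    return None

samples = [Fraction(1, 3), Fraction(1, 4), Fraction(2, 7),
           Fraction(71, 226), Fraction(9, 10)]
bases = [repeated_base(r) for r in samples]
assert all(b is not None for b in bases)  # a base <= 50 is always found
```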
Arithmetic and Geometric sequences - Describe the following sequence
The last equation (part d) means (though it is ambiguously written) that you get the $n^{th}$ term of the sequence by plugging values of $x$ into the R.H.S. For example, the first term of the sequence is obtained by putting $x=1$, which yields $77\cdot1-12=65$. The $10^{th}$ term (for example) is found by putting $x=10$, which yields $77\cdot10-12=758$. Now it is given that the domain of $x$ is the set of all positive integers, i.e. $1,2,3,\ldots$. So the first term is $65$ (put $x=1$), the second term is $142$ (put $x=2$), and the third term is $219$ (put $x=3$). So it is clear that the sequence is an Arithmetic Progression with common difference $77$. Note: $a$, $r$ and $d$ are not fixed notations. You can rather say $a$ = first term of the sequence, $d$ = common difference of an Arithmetic Progression, $r$ = common ratio of a Geometric Progression.
Question about isomorphism of modules.
First, let's simplify the notation a little bit and set $J' := \sigma\cdot J,\;S' := \sigma\cdot S$. Denote the multiplication induced by $\sigma$ by $s\bullet x := \sigma(s)x$. Since $S$ is commutative, $J'$ is an $S$-$S$-bimodule via $sxt := t(s\bullet x)$. Hence $J' \otimes_S J^\ast$ is a left $S$-module via $s\bullet (x\otimes f) := (s\bullet x)\otimes f$. Since $J$ represents an element of the Picard group, $F: J \otimes_S J^\ast \to S,\;x\otimes f \mapsto f(x)$ is an isomorphism of abelian groups. Moreover, $$F(s\bullet (x \otimes f))=F((s\bullet x)\otimes f)=F(\sigma(s)x\otimes f)=f(\sigma(s)x)=\sigma(s)f(x)=\sigma(s) F(x\otimes f)\\=s\bullet F(x\otimes f)$$ Hence $F$ is an isomorphism $J' \otimes_S J^\ast \cong S'$ of $S$-modules. q.e.d.
Moment Generating Function - Find K
We know that, by definition of a moment generating function, $$M_X(w)=E(\exp(wX)), \quad w \in \mathbb R.$$ In particular, for $w=0$ we have $M_X(0)=E(\exp(0))=E(1)=1$, regardless of the random variable $X$ and its moment generating function $M_X$. What does this imply for $M_X(w)=K/(2-w)$?
How to find the following summation
For $|x|<1$, define $$S(x):=\sum_{n=1}^\infty\frac{x^n}{n^3}.$$ By differentiation, $$xS'(x)=\sum_{n=1}^\infty\frac{x^n}{n^2}$$ $$x(xS'(x))'=\sum_{n=1}^\infty\frac{x^n}{n}$$ $$(x(xS'(x))')'=\sum_{n=1}^\infty x^{n-1}=\frac1{1-x}.$$ Then by integration $$(xS'(x))'=-\frac{\log(1-x)}x,$$ and the elementary closed form stops here.
Definite integral of strictly increasing postive function
Hint. By the change of variable $x \to 1-x$ one gets $$ I=\int_0^1 \frac{f(x)}{f(x) + f(1-x)}dx=\int_0^1 \frac{f(1-x)}{f(1-x) + f(x)}dx $$ then $$ 2I=I+I=\int_0^1 \frac{f(x)+f(1-x)}{f(x) + f(1-x)}dx=\int_0^1 1\:dx=1. $$
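A numeric check of the hint's conclusion $I=\frac12$ for one concrete positive $f$ (my own choice, $f(x)=e^x$):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

f = math.exp  # any strictly positive f works
I = simpson(lambda x: f(x) / (f(x) + f(1 - x)), 0.0, 1.0)
assert abs(I - 0.5) < 1e-9
```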
If $x > 0$, show $\ln x \ge 1- \frac{1}{x}$ without derivatives or integrals
Hint: take the upper bound $$\ln x \leq x-1$$ and apply it to $1/x$. Or: starting from the fairly well-known $$1 - y \leq e^{-y},$$ rearranging gives $$1 - e^{-y} \leq y;$$ now substitute $y = \ln x$. For the given upper bound itself, we must prove $\ln x \leq x - 1$ for $x > 0$. (The strict inequality $\ln x < x-1$ for all $x>1$ can be done by contradiction, but is not required for your question.) At $x = 1$ we have equality, so consider $x \in (0, 1)$. Then $0 < 1 - x < 1$. So, using the power series expansion of $\ln(1 - t)$ at $t = 1-x$, we have $$\ln x = \ln(1 - (1-x)) = -(1-x) - \dfrac{(1-x)^2}{2} - \dfrac{(1-x)^3}{3} - \dfrac{(1-x)^4}{4} -\cdots-\dfrac{(1-x)^n}{n}-\cdots < -(1-x) = x - 1.$$ Or: $y=x-1$ is the equation of the tangent to the $\ln$ curve at $(1,0)$, and the function is concave, hence its graph lies under the tangent. Or: you can use $$e^x\ge 1+x,\tag1$$ which holds for all $x\in\mathbb R$.
In a Banach space, absolute convergence of series implies convergence
Hint: use the triangle inequality: $\| S_n-S_m\|=\| x_n+x_{n-1}+\dots +x_{m+1}\|\le \| x_n\|+\dots+\| x_{m+1}\|$. Since $\sum \|x_n\|$ converges, its partial sums are Cauchy, so $(S_n)$ is Cauchy, hence convergent by completeness.
Why do linear and quadratic equations form straight line and parabola?
It might help you to go back to the original Euclidean geometry. For example, one theorem in 'The Elements' is: "A straight line is the locus of all points equidistant from two (distinct) given points" ('locus of points' just means 'the shape all of the points fall upon and/or trace out'). If you transpose to a Cartesian coordinate system, and accept the obvious translation of the Pythagorean Theorem into an equation for distance between points as a function of coordinates, then you can translate this theorem into an equation between two formulae defining the distance of each 'locus' point from each of the given points. That equation can be simplified and rearranged into the usual linear expression. Similarly, for Euclid a parabola is defined to be "the locus of all points equidistant from a given line ('directrix') and a given point ('focus') which is not on that line". Restrict the case to a horizontal line (greatly simplifying the equation for the distance from a point to that line) and a point above the line, then translate the definition into an equation of two distance formulae, and you get a form of the quadratic equation.
Correct bound on Taylor polynomial approximation
What you did is absolutely correct. Though it might be worth mentioning that $e^{-x}$ is strictly decreasing, so that's why you can use $\xi=0$ as an upper bound.
Last 3 digits of $7^{12341}$
$7^{20}=(7^5)^4=((16807)^2)^2\equiv((807)^2)^2\equiv(651249)^2\equiv249^2\equiv62001\equiv1\pmod {1000}$. Then, since $12341 = 20\cdot 617 + 1$, we write $7^{12341}=7\cdot(7^{20})^{617}\equiv7\cdot1^{617}\equiv7\pmod {1000}$.
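Python's built-in three-argument `pow` does modular exponentiation directly, confirming both the period and the result:

```python
# 7 has multiplicative order dividing 20 modulo 1000
assert pow(7, 20, 1000) == 1
# the exponent reduces to 1 modulo that period
assert 12341 % 20 == 1
# so the last three digits of 7**12341 are 007
assert pow(7, 12341, 1000) == 7
```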
Do the two equations agree?
Yes, that could happen. In that case $\lambda = 0$.
$T$ is diagonalizable on finite dimensional v.s. $\implies$ $(T^2+T+I)(\vec v) \ne \vec 0 , \forall \vec v \ne \vec0$?
Yes, it's true. Notice that if $\lambda$ is an eigenvalue of $T$, then for every polynomial $P$, $P(\lambda)$ is an eigenvalue of $P(T)$; moreover, since $T$ is diagonalizable, every eigenvalue of $P(T)$ has this form. Hence if there were $v\ne0$ with $(T^2+T+I)v=0$, there would be an eigenvalue $\lambda$ of $T$ such that $$\lambda^2+\lambda+1=0,$$ which is a contradiction, since in that case $\lambda\not\in\Bbb R$.
Reccurence relation $S(n) = S(n-1) + 2S(n-2) +2 S(0)=0 S(1)=2$;
To solve $$ a_n=a_{n-1}+2a_{n-2}+2\tag{1} $$ we can introduce $b_n=a_n+1$. Then we get the linear recurrence $$ b_n=b_{n-1}+2b_{n-2}\tag{2} $$ Since $$ x^2-x-2=(x-2)(x+1)\tag{3} $$ we get the solution to the linear recurrence $(2)$ $$ b_n=c_1(-1)^n+c_22^n\tag{4} $$ Therefore, $$ a_n=c_1(-1)^n+c_22^n-1\tag{5} $$ Plugging the initial conditions, $a_0=0$ and $a_1=2$, into $(5)$ we get $$ \begin{align} c_1+c_2&=1\\ -c_1+2c_2&=3 \end{align}\tag{6} $$ giving $c_1=-\frac13,c_2=\frac43$. Therefore, $$ a_n=-\frac13(-1)^n+\frac43\cdot 2^n-1\tag{7} $$
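The closed form $(7)$ can be verified against the recurrence directly (my own check):

```python
def a_closed(n):
    # a_n = -(1/3)(-1)^n + (4/3) 2^n - 1
    return -((-1) ** n) / 3 + 4 / 3 * 2**n - 1

seq = [0, 2]  # a_0 = 0, a_1 = 2
for n in range(2, 25):
    seq.append(seq[n - 1] + 2 * seq[n - 2] + 2)

assert all(abs(a_closed(n) - seq[n]) < 1e-6 for n in range(25))
```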
Mathematical statistics: Expected value, Probability density function
It's a simple partition. $Y=X~\mathbf 1_{X\leq a}+a~\mathbf 1_{ X>a}$, and so Linearity of Expectation says: $$\begin{align}\mathsf E(Y) ~=~& \mathsf E( X~\mathbf 1_{X\leq a})+\mathsf E(a~\mathbf 1_{ X>a}) \\[1ex] =~& \mathsf E(X\mid X\leq a)~\mathsf P(X\leq a) + a~\mathsf P(X > a) \\[1ex] =~& \int_0^a x~f_X(x)\operatorname d x + a\int_a^\infty f_X(x)\operatorname d x\end{align}$$ NB: $\mathbf 1_{x\leq a}= \begin{cases}1 & : & x\leq a\\0 &:& \text{otherwise}\end{cases}\\\mathbf 1_{x> a}= \begin{cases}1 & : & x> a\\0 &:& \text{otherwise}\end{cases}$
On morphisms in an abelian category
Question 1: You cannot say anything about the image of $f_1-f_2$. Here is an example. Let $\mathcal{C}$ be the category of real vector spaces and $B = \mathbb{R}^n$. The set $\text{End}(\mathbb{R}^n)$ of endomorphisms $\phi : \mathbb{R}^n \to \mathbb{R}^n$ is a normed linear space and it is well-known that the set $\text{GL}(\mathbb{R}^n)$ of automorphisms is open in $\text{End}(\mathbb{R}^n)$. Let $f_1 = id_B$ and $V \subset \mathbb{R}^n$ any linear subspace. Choose any linear map $g : \mathbb{R}^n \to \mathbb{R}^n$ such that $\text{im}(g) = V$. Hence for sufficiently small $\epsilon > 0$ the linear map $f_2 = id_B + \epsilon g$ belongs to $\text{GL}(\mathbb{R}^n)$. Both $f_1, f_2$ are isomorphisms, thus their kernels and cokernels are trivial and their images are $\mathbb{R}^n$. However, $f_1 - f_2 = - \epsilon g$ whose image is $V$. Question 2: I do not think that one can get a reasonable answer.
Generalising Cubic Roots in terms of a third variable
With the derivative you want to be zero, you are facing the problem of solving a cubic equation in $B$ $$ sB^3 -15\times 10^5s B^2 +62\times 10^{10} sB -6\times 10^{16} s-18\times 10^{16}=0$$ If you use Cardano's method for cubics, the discriminant evaluates to $$\Delta =4\times 10^{30} s^2 \left(2197 s^2-218700\right)$$ and then the number of real roots depends only on the sign of $\left(2197 s^2-218700\right)$. If this term is positive, there are three distinct real roots; if it is zero, there are still three real roots but two of them coincide; if it is negative, there is only one real root. I give you below the formulae and I hope you will enjoy them $$B_1=-\frac{1300000 s}{\sqrt[3]{3} \sqrt[3]{P}}-\frac{100000 \sqrt[3]{P}}{3^{2/3} s}+500000$$ $$B_2=\frac{650000 \left(1+i \sqrt{3}\right) s}{\sqrt[3]{3} \sqrt[3]{P}}+\frac{50000 \left(1-i \sqrt{3}\right) \sqrt[3]{P}}{3^{2/3} s}+500000$$ $$B_3=\frac{650000 \left(1-i \sqrt{3}\right) s}{\sqrt[3]{3} \sqrt[3]{P}}+\frac{50000 \left(1+i \sqrt{3}\right) \sqrt[3]{P}}{3^{2/3} s}+500000$$ in which $$P=\sqrt{3} \sqrt{218700 s^4-2197 s^6}-810 s^2$$
Showing that the class of well ordered structures is not an elementary class
You ask how to prove that there is no finite set of statements $T$ such that $\text{Mod}(T) = K$. But the usual definition of elementary class allows the theory $T$ to be infinite. And, in fact, the class of well-ordered structures is not elementary, i.e. there is no (finite or infinite) first order theory $T$ in the language $\{<\}$ such that $\text{Mod}(T) = K$. To prove this, suppose we had some theory $T$ picking out the well-orders. Consider the expanded language $\{<\}\cup\{c_0,c_1,c_2,\dots\}$, where the $c_i$ are constant symbols. Let $T' = T \cup \{c_{i+1} < c_i\,|\,i\in\mathbb{N}\}$. This theory expresses that the $c_i$ form an infinite descending chain. Now every finite subset of $T'$ is consistent (there is a well-order with a descending chain of length $n$ for any $n$), so $T'$ is consistent by compactness. So there is a model $M\models T'$, which cannot be a well-order. But then also $M\models T$, contradicting our assumption on $T$.
Reverse search for rational function
Let's ask a much simpler question: if we have two transcendental numbers, and their difference is a rational number, can we find that rational number? Seems to me it would depend a bit on what it means to "have" a transcendental number. For all we know, $\pi-e$ is rational. The more decimals we know in the expansions of $\pi$ and $e$, the better the lower bound we can put on the numerator and denominator of the rational, but how can we ever find the rational?
Promissory note example Financial Math
The formula you got is wrong. The correct present value formula given a future value $FV$ and with compounding $m$ times per period is: $PV_{t=0}=\cfrac{FV_t}{(1+\cfrac{r}{m})^{t\cdot m}}$ I assume $r=8\%$ is the interest rate with annual compounding, and $t$ is measured in years, so $m=1$ and the formula simplifies to: $PV_{t=0}=\cfrac{FV_t}{(1+r)^t}=FV_t(1+r)^{-t}$ $t=\cfrac{304}{365}$ (based on actual/actual convention) So that: $PV_{t=0}=10043.55(1+0.08)^{-\frac{304}{365}}=\$9,419.97$ Alternatively, if one understands the question like this: the bank wants to make a return of 8% in the given period, i.e. 8% is not an annualized rate, but the rate the bank applies for that specific period, then you would just have: $P=FV(1+8\%)^{-1}$ which is also not the formula you have been using.
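The discounting step, reproduced as a calculation (my own check of the figure quoted above):

```python
fv = 10043.55
r = 0.08        # annual rate, annual compounding
t = 304 / 365   # years, actual/365

pv = fv * (1 + r) ** (-t)
assert abs(pv - 9419.97) < 0.1  # agrees with the quoted $9,419.97
```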
Showing 1/g is a measurable function
$ x \mapsto \frac{1}{x} $ is a continuous function on $\overline{\mathbb{R}} \setminus \{ 0 \}$, and the composition of a continuous function with a measurable one is measurable.
Application of mean value theorem to $\int^{6\pi}_{4\pi} \frac{\sin x}{x}dx$ yields incorrect result
One version of mean value theorem for definite integral states: Given two functions $f, g$ on $[ a, b ]$. If $f$ is continuous and $g$ is an integrable function which doesn't change sign on $[ a, b ]$, then there exists $c \in (a,b)$ such that $$\int_a^b f(x) g(x) dx = f(c) \int_a^b g(x) dx$$ For your case, you can't apply this version of mean value theorem because $g(x) = \sin x$ changes sign over $[4\pi, 6\pi]$.
Show that the given matrix is standard matrix for an orthogonal projection of R^4 onto a 2-dimensional subspace of R^4
Let $e_k$ be the $k$th column of the identity matrix $I_4$ (canonical basis of $\mathbb{R^4}$). First of all, one checks that $A^2=A$: therefore it is a projection. $A^T=A$: therefore this projection is orthogonal. We notice that the last column $C_4$ is the sum of the 2 first ones $C_1$ and $C_2$, which, besides, are independent. Otherwise said, the images of $e_1,e_2,e_4$ are in the same vector space generated by $C_1$ and $C_2$. Having seen that the image of $e_3$ by $A$ is the zero vector, we can conclude that $A$ represents the orthogonal projection onto the two dimensional vector space generated by $C_1$ and $C_2$. More precisely, $C_1,C_2$ are the respective projections of $e_1,e_2$. Edit 1: It is interesting to determine a basis of the kernel, which is 2 dimensional as well due to the rank-nullity theorem. Here is a very natural one: $$K_1=\begin{pmatrix}0\\0\\1\\0\end{pmatrix}, \ \ K_2=\begin{pmatrix} \ \ \ 1\\ \ \ \ 1\\ \ \ \ 0\\-1\end{pmatrix}.$$ Indeed both of them result from observations on matrix $A$: $K_1$ accounts for the fact that the third column of $A$ is $0$. $K_2$ is obtained plainly from the coefficients of the relationship we have seen before $$C_1+C_2-C_4=0 \ \iff \ \underbrace{[C_1|C_2|C_3|C_4]}_A*K_2=0$$ Edit 2: Consider the $4 \times 2$ matrix $B$ whose columns are any orthonormal basis of the plane of projection of $A$, like this one: $$B:=\begin{pmatrix}a& \ \ \ b\\a&-b\\ 0 & \ \ \ 0\\ 2a & \ \ \ 0\end{pmatrix}, \ \text{where} \ a=1/\sqrt{6}, \ b=1/\sqrt{2}$$ One can verify at once that $B^TB=I_2$ (the columns are indeed orthonormal) and that: $$BB^T=A$$ which is another rather efficient way to prove that a matrix is an orthogonal projection.
Is tensor a left adjoint or a right adjoint?
For the sake of having an answer, and expanding my comments: To be explicit, let's consider the tensor functor $(-) \otimes_R N$ from right $R$-modules to abelian groups, where $N$ is a left $R$-module. This functor is left adjoint to the hom functor $\text{Hom}_{\mathbb{Z}}(N, -)$, and therefore preserves colimits. It preserves limits if and only if $N$ is finitely generated projective; see this answer for a proof. The well-known fact is that tensoring preserves direct limits. "Direct limits" are actually colimits, and in more modern terminology would be referred to as directed colimits, a special case of filtered colimits. The confusion here comes from the fact that "direct limit" is old terminology and actually predates the modern terminology of colimits and limits. My preference here is to never use the terms "direct limit" or "inverse limit," precisely because they cause this confusion, and to instead use the modern terms "directed colimit" and "codirected limit."
Prove that if $x | (3x + 20)$ then the only positive $x$ for which the statement is true are $1,2,4,5,10,20$.
$x$ divides $3x$ since $x$ divides $x$. For $3x+20$ to be divisible by $x$, the quotient $(3x+20)/x$ must be an integer (this is the problem statement). Now let us assume, for a moment, that $x$ does not divide $20$. This means that $20/x$ is a non-integer value; call it $p$. Now $(3x+20)/x = 3x/x + 20/x = 3 + p$. By our assumption $p$ is non-integral, so $3+p$ is also non-integral. But this contradicts the statement of the question. Hence our assumption was wrong, $p$ is an integer, and $x$ divides $20$; therefore the possible values of $x$ are $1,2,4,5,10,20$.
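The conclusion is cheap to confirm by brute force (not part of the argument, just a sanity check):

```python
# x | (3x + 20)  <=>  x | 20, so the positive solutions are the divisors of 20
solutions = [x for x in range(1, 101) if (3 * x + 20) % x == 0]
divisors_of_20 = [x for x in range(1, 101) if 20 % x == 0]
assert solutions == divisors_of_20 == [1, 2, 4, 5, 10, 20]
```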
If $g(x) = xf(x^2)$ and $f(x)=\sum_{n=0}^\infty \sin(\frac{\pi}{n+2})x^n$, what is $f^{(20)}(0)$ and $g^{(35)}(0)$?
Note that $$f'\left(x\right)=\sum_{n\geq0}n\sin\left(\frac{\pi}{n+2}\right)x^{n-1}=\sum_{n\geq1}n\sin\left(\frac{\pi}{n+2}\right)x^{n-1} $$ since for $n=0$ there is no addend. Furthermore $$f''\left(x\right)=\sum_{n\geq1}n\left(n-1\right)\sin\left(\frac{\pi}{n+2}\right)x^{n-2}=\sum_{n\geq2}n\left(n-1\right)\sin\left(\frac{\pi}{n+2}\right)x^{n-2} $$ since for $n=1$ again there is no addend. Iterating, we have $$f^{\left(20\right)}\left(x\right)=\sum_{n\geq20}n\left(n-1\right)\cdots\left(n-19\right)\sin\left(\frac{\pi}{n+2}\right)x^{n-20} $$ and if we consider $f^{\left(20\right)}\left(0\right)$, the only term which is not zero is the one with $n=20$, since then $x^{n-20}=1$; hence $$f^{\left(20\right)}\left(0\right)=20!\sin\left(\frac{\pi}{22}\right). $$ For the other function we can observe that $$g\left(x\right)=\sum_{n\geq0}\sin\left(\frac{\pi}{n+2}\right)x^{2n+1} $$ so if we take the derivative $$g'\left(x\right)=\sum_{n\geq0}\left(2n+1\right)\sin\left(\frac{\pi}{n+2}\right)x^{2n} $$ and in this case the series still starts from $n=0$, since no term vanishes. For the second derivative $$g''\left(x\right)=\sum_{n\geq0}\left(2n+1\right)\left(2n\right)\sin\left(\frac{\pi}{n+2}\right)x^{2n-1}=\sum_{n\geq1}\left(2n+1\right)\left(2n\right)\sin\left(\frac{\pi}{n+2}\right)x^{2n-1} $$ since for $n=0$ the term is $0$. So in this case we have to change the starting index of the series only when we differentiate an even power. Hence $$g^{\left(35\right)}\left(x\right)=\sum_{n\geq17}\left(2n+1\right)\left(2n\right)\cdots\left(2n-33\right)\sin\left(\frac{\pi}{n+2}\right)x^{2n-34} $$ and again if we consider $g^{\left(35\right)}\left(0\right)$, the only nonzero term is the one with $2n-34=0$, so $n=17$. Hence $$g^{\left(35\right)}\left(0\right)=35!\sin\left(\frac{\pi}{19}\right).$$
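Both values amount to $f^{(k)}(0) = k!\,[x^k]f(x)$, which can be confirmed with a computer algebra system (a sketch using SymPy on truncated series; the truncation order is an arbitrary choice, large enough to contain the relevant terms):

```python
import sympy as sp

x = sp.symbols('x')
N = 40  # truncation order (only the x^20 and x^35 terms matter)
f = sum(sp.sin(sp.pi / (n + 2)) * x**n for n in range(N))
g = sum(sp.sin(sp.pi / (n + 2)) * x**(2 * n + 1) for n in range(N))

# f^(k)(0) = k! * (coefficient of x^k)
f20 = sp.factorial(20) * f.coeff(x, 20)
g35 = sp.factorial(35) * g.coeff(x, 35)

assert sp.simplify(f20 - sp.factorial(20) * sp.sin(sp.pi / 22)) == 0
assert sp.simplify(g35 - sp.factorial(35) * sp.sin(sp.pi / 19)) == 0
```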
Any metric space is isometric to a subspace of some complete metric space
Note that $\rho(s,t)$ is the same as $|\rho(x,s)-\rho(x,t)|$ when $x=s$, so $\rho(s,t)$ is an element of $\{|\rho(x,s)-\rho(x,t)| : x \in S\}$ and then $$\rho(s,t) \leq \sup\{|\rho(x,s)-\rho(x,t)| : x \in S\} = \|f_s-f_t\|_u.$$
Ring such that $x^4=x$ for all $x$ is commutative
Here's an old post of mine from Yahoo! Answers: First, note $-x = (-x)^4 = x^4 = x$, so $x+x = 0$ for any $x$ in $R$. Then $(x^2+x)^2 = x^2 + x + x^3 + x^3 = x^2+x$. Thus $x^2+x$ is idempotent, and it is easy to see idempotent elements are central in this ring. [I give a proof of this at the end.] Now let $x=a+b$, where $a$ and $b$ are arbitrary. From above, for any $c$ in $R$, $c(x^2 + x) = (x^2 + x)c$, and expanding this out and cancelling terms we get $c(ab + ba) = (ab + ba)c$. Setting $c=a$, we get, after cancelling again, $a^2b = ba^2$. Thus, for any $x$ in $R$, $x^2$ is central. Then of course $x = (x^2+x)-x^2$ is central. To prove that idempotents are central, first note that if $xy=0$, then $yx = (yx)^4 = y (xy)(xy)(xy)x = 0$. So now if $z^2 = z$, then $z(y - zy) = 0$, so $(y-zy)z = 0$, or $yz = zyz$. Similarly, $(yz - y)z = 0$, so $z(yz-y) = 0$, or $zy = zyz$. Thus $yz = zy$.
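A concrete ring satisfying $x^4=x$ is the field $\mathbb{F}_4$ (every nonzero element satisfies $x^3=1$), and the first two steps of the argument can be machine-checked there (a small sketch; elements are encoded as bit masks $0\ldots3$, addition is XOR, multiplication is carry-less and reduced modulo $t^2+t+1$):

```python
def gf4_mul(a, b):
    """Multiply in GF(4) = F_2[t]/(t^2 + t + 1); elements 0..3 as bit masks."""
    p = 0
    for i in range(2):          # carry-less multiply
        if (b >> i) & 1:
            p ^= a << i
    if p & 0b100:               # reduce: t^2 -> t + 1
        p ^= 0b111
    return p

def gf4_add(a, b):
    return a ^ b                # characteristic 2: addition is XOR

for x in range(4):
    x2 = gf4_mul(x, x)
    x4 = gf4_mul(x2, x2)
    assert x4 == x                  # x^4 = x holds for every element
    assert gf4_add(x, x) == 0       # x + x = 0, i.e. -x = x
    e = gf4_add(x2, x)              # x^2 + x
    assert gf4_mul(e, e) == e       # x^2 + x is idempotent
```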
Ricci flow of the Torus
Consider a point of negative curvature. There the metric is increasing, so $R$ and $r$ go to $\infty$. Hence the limit is a flat metric. Reference: Richard S. Hamilton, The Ricci flow on surfaces.
remainder of the division $2^{1990}/1990$
Well, since 1990 is not prime, you might want to look into Euler's Theorem, which extends Fermat's Little Theorem.
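Concretely, $1990 = 2\cdot5\cdot199$ with $199$ prime, so Fermat's little theorem applies modulo each prime factor and the pieces recombine by CRT; Python's three-argument `pow` confirms the value directly:

```python
r = pow(2, 1990, 1990)   # fast built-in modular exponentiation

# consistency with the factorization 1990 = 2 * 5 * 199:
assert r % 2 == 0
assert r % 5 == pow(2, 1990 % 4, 5)        # Fermat: 2^4 = 1 (mod 5)
assert r % 199 == pow(2, 1990 % 198, 199)  # Fermat: 2^198 = 1 (mod 199)
print(r)  # 1024
```

Amusingly, the exponent reduces to $10$ modulo $198$, so $2^{1990} \equiv 2^{10} = 1024 \pmod{1990}$.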
Calculating the odds of the next dice throw with sample
If the dice are fair, then all possible rolls are equally likely. If we are to assume, though, that they fall on face $n$ with probability proportional to the numbers you gave above, then $n=4$ would be the most likely roll. Since the next roll on a die is independent of the last roll, there's no reason to think that $n=6$ will be most likely (if the dice are fair) just because it's come up the least number of times so far – this is called the Gambler's Fallacy.
Quadratic equation with absolute value, is there a real number for which equation has unique solution
If $x \ge 0$ then $x = \frac {-1\pm \sqrt{1- 4a}}2$, but only $\frac {-1 + \sqrt{1-4a}}2$ can satisfy $x\ge 0$. If $x < 0$ then $x = \frac {1- \sqrt{1-4a}}2$. So the only way there can be a single solution is if $1 - \sqrt{1-4a} = -1 + \sqrt{1-4a}$, i.e. $\sqrt {1-4a} = 1$, i.e. $a = 0$; for $a=0$ the equation has the unique solution $x=0$, and no other $a$ works. ... or .... $(-x)^2 + |-x| + a = x^2 +|x| + a$, so if $x$ is a solution then so is $-x$. So for the solution to be unique, $x = 0$ must be the only solution; then $0^2 + |0| + a= 0$, so $a = 0$ is the only option.
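The branch analysis can be replayed numerically (a sketch; each sign branch of $|x|$ gives a quadratic, and the sign constraint on each root is checked explicitly):

```python
import math

def solutions(a):
    """Real solutions of x^2 + |x| + a = 0, via the two sign branches."""
    sols = set()
    disc = 1 - 4 * a
    if disc >= 0:
        s = math.sqrt(disc)
        for r in ((-1 + s) / 2, (-1 - s) / 2):   # branch x >= 0: x^2 + x + a = 0
            if r >= 0:
                sols.add(r)
        for r in ((1 + s) / 2, (1 - s) / 2):     # branch x < 0:  x^2 - x + a = 0
            if r < 0:
                sols.add(r)
    return sols

assert solutions(0) == {0.0}          # a = 0: the unique solution x = 0
assert solutions(-2) == {-1.0, 1.0}   # a < 0: two symmetric solutions
assert solutions(1) == set()          # no real solutions
# no nonzero a on this grid yields exactly one solution
assert all(len(solutions(t / 10)) != 1 for t in range(-30, 31) if t != 0)
```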
Is a mixture of two uniform distributions more complex than a single distribution?
In one of your comments you mentioned the paper "Measuring the complexity of continuous distributions". In this paper, complexity is defined as a function of Shannon's entropy. Shannon's entropy provides a measure of the average uncertainty of a system given a probability distribution. Thus, this complexity measure determines the "balance" between the emergence of new patterns and the self-organization of the system. Intuitively, emergence can be understood as the uniformization of a probability distribution (for instance, white noise, which has a uniform distribution, has the highest emergence). Self-organization can be understood as the concentration of probability around a specific state (or states) of the probability distribution (a Dirac delta has the highest self-organization, and thus the lowest emergence, since there is no change). A distribution with high complexity therefore has one or a few states which concentrate a large proportion of the probability, and many others with very low probability. Thus, using these measures, it is not conclusive whether a mixture of distributions will be more complex than a single distribution. For instance, compare a power-law distribution with the second mixture provided by @BruceET. Given a suitable $x_{\min}$ value, the power law will surely display more complexity than the mixture of normals, since the latter is more equiprobable than the former. Best regards, Guillermo
Prove the formula $(\exists x A(x) \to B) \to \forall y (A(y) \to B)$
Lines 8 and 9 are fine ... but your Lemma is not. What you need to show is $A(t) \vdash \neg \forall x \neg A(x)$, so that together with $\neg \forall x \neg A(x) \rightarrow B$ you can get $A(t) \rightarrow B$ and then apply 8 and 9. Also, while I don't know the exact notation of your system, I find the way you introduce premises suspect. For example, you write: $1. \vdash \lnot \forall x \lnot A(x)$ (premise) Given the $\vdash$, that looks like a theorem, not a premise. I would suspect that something like: $1. \lnot \forall x \lnot A(x) \vdash \lnot \forall x \lnot A(x)$ (premise) or simply $1. \lnot \forall x \lnot A(x) $ (premise) would be called for.
rotating a fixed-width rectangle to opposite corners
HINT: You have the length of the diagonal (since you have opposite points) Also you have the width, figure out the length and then you can find the angle of rotation using right angle triangles (better draw a figure).
Writing basic proofs about cycles?
The first two are really just a matter of having a clear definition of a cycle and working with it. I define an $n$-cycle to be a graph with vertices $v_0,v_1,\ldots,v_{n-1}$ and edges $\{v_k,v_{k\oplus 1}\}$ for $k=0,\ldots,n-1$, where for $k,\ell\in\{0,\ldots,n-1\}$ we define $k\oplus\ell=(k+\ell)\bmod n$. Similarly, we define $k\ominus\ell=(k-\ell)\bmod n$. (This is just a clever trick to avoid having to treat the edge $\{v_{n-1},v_0\}$ as a special case.) Let $C$ be an $n$-cycle. If $k,\ell\in\{0,\ldots,n-1\}$, there is a unique $r\in\{1,\ldots,n-1\}$ such that $\ell=k\oplus r$. Then the sequence $\langle v_k,v_{k\oplus 1},\ldots,v_{k\oplus r}\rangle$ is a path in $C$ from $v_k$ to $v_\ell$, so $C$ is connected. For any $k\in\{0,\ldots,n-1\}$, the vertices of $C$ connected to $v_k$ by edges are $v_{k\ominus 1}$ and $v_{k\oplus 1}$, so $C$ is $2$-regular. It’s the third one that requires more than just working directly with the definition of a cycle. Suppose that $G$ is $2$-regular and has $n$ vertices. Let $v_0$ be any vertex of $G$, and let $v_1$ be one of the two vertices adjacent to $v_0$. Since $\deg v_1=2$, there must be some vertex $v_2\ne v_0$ adjacent to $v_1$. In general, given adjacent vertices $v_{k-1}$ and $v_k$, let $v_{k+1}$ be the unique vertex adjacent to $v_k$ and different from $v_{k-1}$. Since $G$ has only $n$ vertices, for some smallest $k\le n-1$ we must have $v_{k+1}=v_\ell$ for some $\ell\in\{0,\ldots,k-2\}$. Show that $\ell$ must be $0$. Conclude that the subgraph of $G$ induced by $\{v_0,\ldots,v_k\}$ is a $(k+1)$-cycle and a component of $G$. Conclude further that if $G$ is connected, then $k+1=n$, and this cycle is all of $G$.
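The definition and the first two claims are easy to machine-check for small $n$ (a sketch; edges built via the $\oplus$ trick, connectivity via breadth-first search):

```python
from collections import deque

def n_cycle(n):
    """Edge set of the n-cycle: {v_k, v_{k (+) 1}} for k = 0..n-1."""
    return {frozenset({k, (k + 1) % n}) for k in range(n)}

def is_connected(n, edges):
    adj = {k: set() for k in range(n)}
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for w in adj[u] - seen:
            seen.add(w)
            queue.append(w)
    return len(seen) == n

for n in range(3, 10):
    edges = n_cycle(n)
    degree = {k: sum(k in e for e in edges) for k in range(n)}
    assert all(d == 2 for d in degree.values())   # 2-regular
    assert is_connected(n, edges)                 # connected
```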
transforming vector potential with a coordinate rotation
A magnetic field shouldn't appear from nowhere if I merely rotate my coordinate system That's right, it should not :-) What am I doing wrong? It seems to be a simple sign error (or you may have to look up the contravariant transformation law for vectors, e.g. on Wikipedia here). We would like to compute $A(x', y', z')$, so remember that we have to compute, for the first component, $$ A_1(x', y', z') = \frac{\partial x'}{\partial x} * A_1(x, y, z) + \frac{\partial x'}{\partial y} * A_2(x, y, z) + \frac{\partial x'}{\partial z} * A_3(x, y, z) = \frac{\partial x'}{\partial x} * x = x \cos(\theta) $$ and inserting $x = \cos(θ) x'+ \sin(θ) y'$ we get $$ A_1(x', y', z') = \cos(\theta) ( \cos(θ) x'+ \sin(θ) y') $$ and for the second: $$ A_2(x', y', z') = \frac{\partial y'}{\partial x} * A_1(x, y, z) + \frac{\partial y'}{\partial y} * A_2(x, y, z) + \frac{\partial y'}{\partial z} * A_3(x, y, z) = \frac{\partial y'}{\partial x} * x = x \sin(\theta) $$ Inserting again $x = \cos(θ) x'+ \sin(θ) y'$ we get $$ A_2(x', y', z') = \sin(\theta) ( \cos(θ) x'+ \sin(θ) y') $$ For the z-component of $B$ we now get $$ B_{z'} = \partial_{x'} A_2(x', y' ,z') - \partial_{y'} A_1(x', y' ,z') = \sin(\theta) \cos(\theta) - \sin(\theta) \cos(\theta) = 0 $$ I hope I did not screw up :-)
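The cancellation can be confirmed symbolically (a sketch with SymPy, treating the primed coordinates as independent symbols):

```python
import sympy as sp

theta, xp, yp = sp.symbols('theta xp yp')

# transformed components of A, with x = cos(theta) x' + sin(theta) y'
x = sp.cos(theta) * xp + sp.sin(theta) * yp
A1 = sp.cos(theta) * x
A2 = sp.sin(theta) * x

Bz = sp.diff(A2, xp) - sp.diff(A1, yp)   # z-component of curl A in the primed frame
assert sp.simplify(Bz) == 0              # no magnetic field appears from the rotation
```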
When does an atomic topos have this property?
There is only one possible direct image functor $\Gamma : \mathcal{E} \to \mathbf{Set}$, namely the one represented by the terminal object of $\mathcal{E}$. In particular, $\Gamma : \mathcal{E} \to \mathbf{Set}$ is faithful if and only if $1$ is a separator. But $1$ is a separator if and only if $\mathcal{E} \simeq \mathbf{Set}$ or $\mathcal{E} \simeq \mathbb{1}$.
If $\emptyset \neq X$ is closed and $\operatorname{ri} X \subset \operatorname{int} Y$ then $X \subset Y$?
No: counterexamples in $\Bbb R$ are $X=[0,1]$ and $Y=(0,2)$. In fact, $$\operatorname{relint} X=\operatorname{int} X=(0,1)\subsetneq \operatorname{int} Y=Y$$ but $X\nsubseteq Y$. Added: It is, however, true if $Y$ is convex and closed, since for a convex subset $Y$ of a finite dimensional vector space it holds $\overline Y=\overline{\operatorname{relint} Y}$ (of course, if $\operatorname{int} Y\ne\emptyset$, then $\operatorname{int}Y=\operatorname{relint}Y$)
Slicing $N$-dimensional cubes along the common direction of pairs of directions
The hypercube is cut into $N!$ simplexes, each corresponding to one of the orderings of the coordinates, since continuously moving from one ordering to another crosses the cutting hyperplane associated with the two coordinates swapping places. The simplexes are all congruent with the "standard" one defined by $x_1\le x_2\le\cdots\le x_N$.
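For $N=3$, the volume of the standard simplex inside the unit cube can be computed by direct iterated integration (SymPy sketch), confirming that $3!$ congruent copies fill the cube:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# volume of {0 <= x1 <= x2 <= x3 <= 1} inside the unit cube
vol = sp.integrate(1, (x1, 0, x2), (x2, 0, x3), (x3, 0, 1))
assert vol == sp.Rational(1, 6)   # = 1/3!, so 3! congruent simplexes tile the cube
```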
Spectral Measures: References
I like Conway's treatment in section X.4 of A course on functional analysis. It builds on some previous results, including the spectral theory of bounded normal operators from Chapter IX. I don't know if it is gentler.
Prove that $d(x_{m_i},x_{n_j})\geq c$ for all $i$ and $j$.
Hint Show that: If $(x_n)$ doesn't converge to $x$, then there exists a neighborhood $U$ of $x$ such that infinitely many $x_n$'s aren't members of $U$. If $x$ is a partial limit of $(x_n)$ (i.e., there exists a subsequence converging to $x$) then for every neighborhood $V$ of $x$, infinitely many $x_n$'s are members of $V$. Then it's a matter of gainfully picking $U,V$ in a metric space.
Need help understanding poker hand probability/combinatorics.
You can't apply the two pair method to the full house because the 2 "ranks" out of 13 that you choose are not symmetric like a two pair. Consider a two pair with 5's and 4's 5-5-4-4-X If you switch the 5's and the 4's the hand would stay the same However, for a full house you choose one "rank" to be the 3 of a kind and one rank to be the pair 5-5-5-4-4 is different from 4-4-4-5-5 By your "two pair method" applied to a full house you don't distinguish the above two hands so you undercount. Now, you could apply the "full house method" to two pairs, but with a few changes. If you apply this method to two pairs, 5-5-4-4-X would be distinguished from 4-4-5-5-X, which is not desired. (If 5 is chosen from the first selection then 4, and vice-versa) So every two pair will be counted twice, so you need to divide by two "Two pair method" for two pairs: $\binom {13}{2}\binom {4}{2}\binom {4}{2}\binom {11}{1}\binom {4}{1} $ This is just a factor of two away from the "full house method" for two pairs: $\binom {13}{1}\binom {4}{2}\binom {12}{1}\binom {4}{2}\binom{11}{1}\binom {4}{1}$, because: $\binom{13}{2} = \frac{1}{2} * \binom{13}{1} \binom{12}{1}$ Hope this helps! Note: (Another way to think about the two pair) For the last card in the two pair, you don't need to do $\binom{11}{1}\binom{4}{1}$ explicitly. As 5's and 4's are ruled out, you can choose any one of the 52-8 = 44 remaining cards (which is the same as $\binom{11}{1}\binom{4}{1}$)
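The factor-of-two relationship between the two counting methods is easy to confirm (standard binomial coefficients via `math.comb`):

```python
from math import comb

# "two pair method": unordered choice of the two pair ranks
two_pair = comb(13, 2) * comb(4, 2) * comb(4, 2) * comb(11, 1) * comb(4, 1)
# "full house method" applied to two pairs: ordered choice of the two ranks
ordered = comb(13, 1) * comb(4, 2) * comb(12, 1) * comb(4, 2) * comb(11, 1) * comb(4, 1)

assert ordered == 2 * two_pair   # each two pair is counted twice
assert two_pair == 123552        # the standard number of two-pair hands
```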
What is the spectrum of this matrix?
Since the inverse of $A_n$ is $$ A^{-1}_n = \begin{pmatrix} 2 & -1 & 0 & \dots & 0 & 0\\ -1 & 2 & -1 & \dots & 0 & 0\\ 0 & -1 & 2 & \dots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \dots & 2 & -1\\ 0 & 0 & 0 & \dots & -1 & 1 \end{pmatrix} $$ solving $A_n^{-1} x = \lambda^{-1} x$ is the same as solving the difference problem $$\begin{aligned} x_0 &= 0\\ -x_{k+1} + 2x_k -x_{k-1} &= \lambda^{-1} x_k, \quad k = 1, \dots, n-1\\ x_n - x_{n-1} &= \lambda^{-1} x_n \end{aligned} $$ Substituting $x_k = \sin \omega k$ gives the following system of equations for $\omega, \lambda$: $$ 4\sin^2\frac{\omega}{2} = \lambda^{-1}\\ (1-\lambda^{-1})\sin \omega n = \sin \omega (n-1) $$ or $$ 1 - 4\sin^2\frac{\omega}{2} = \frac{\sin \omega(n-1)}{\sin \omega n} $$ That equation has exactly $n$ different solutions for $\omega$ in $(0, \pi)$, since the left-hand side is monotonically decreasing while the right-hand side is increasing and has $n-1$ poles (see the image for $n=5$). Values outside of $(0, \pi)$ produce the same values of $\lambda$ due to periodicity. So $$ \lambda_j^{-1} = 4\sin^2\frac{\omega_j}{2}\\ (x_j)_k = \sin k \omega_j $$
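A numerical check (a sketch: build $A_n^{-1}$ for $n=5$ with NumPy, recover $\omega$ from each eigenvalue $\mu = \lambda^{-1}$ via $\mu = 4\sin^2(\omega/2)$, and verify the transcendental relation):

```python
import numpy as np

n = 5
M = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiag(-1, 2, -1)
M[-1, -1] = 1                                           # last diagonal entry is 1

for mu in np.linalg.eigvalsh(M):            # mu = 1/lambda
    omega = 2 * np.arcsin(np.sqrt(mu) / 2)  # invert mu = 4 sin^2(omega/2)
    # check (1 - mu) sin(n w) = sin((n-1) w)
    assert abs((1 - mu) * np.sin(n * omega) - np.sin((n - 1) * omega)) < 1e-10
```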
Rank of the given matrix
The rank is 2. Show that the row space is spanned by $(2, 3, \cdots, n+1)$ and $(1, 1, \cdots, 1)$.
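To illustrate with a hypothetical matrix (the question's matrix is not reproduced here, but any matrix whose rows are combinations of those two vectors behaves the same way), take rows $r_i = (2,3,\dots,n+1) + (i-1)(1,\dots,1)$, i.e. $a_{ij} = i + j$:

```python
import numpy as np

n = 6
A = np.array([[i + j for j in range(1, n + 1)] for i in range(1, n + 1)])
# every row is (2, 3, ..., n+1) + (i-1) * (1, 1, ..., 1)
assert np.linalg.matrix_rank(A) == 2
```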
Let $D$ a division ring. Show that $Z(M_n(D)) \simeq Z(D)$
The main problem is to determine the center of the matrix-algebra. One can show that it is formed by $d I_n$ where $d$ is in the center of $D$. The isomorphism is then rather direct to establish. (Reposted corrected comment as per OP's proposal.)
How many whole bricks 6 × 12 × 24 cm3 will be sufficient to construct a solid cube of minimum size
Since each brick has a longest dimension of $24$ cm, you can’t possibly make a cube with a side any shorter than $24$ cm. A cube with side $24$ cm has a volume of $24^3=13824\text{ cm}^3$, and each brick has a volume of $6\cdot12\cdot24=1728\text{ cm}^3$. Now $\frac{13824}{1728}=8$; is it possible to stack $8$ of these bricks to form a cube?
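The arithmetic, together with one explicit stacking (four bricks along the $6$ cm edge, two along the $12$ cm edge, one along the $24$ cm edge), in code:

```python
side = 24                               # minimum possible cube side, in cm
brick_volume = 6 * 12 * 24              # 1728 cm^3
assert side**3 % brick_volume == 0      # the volumes divide evenly
bricks_needed = side**3 // brick_volume
assert bricks_needed == 8
# one concrete stacking: 4 x 2 x 1 bricks along the 6, 12, 24 cm directions
assert (side // 6) * (side // 12) * (side // 24) == bricks_needed
```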
Why must c be a real number?
You might notice that $e^{-b(\frac{\pi}{2} + 2k\pi)}$ is multivalued in $k$. As $k$ can be chosen to be any integer, this term has no single well-defined value. However, if $b$ is $0$, then that issue does not emerge.
Why does $\sin(x) - \sin(y)=2 \cos(\frac{x+y}{2}) \sin(\frac{x-y}{2})$?
The main trick is here: \begin{align} \color{red} {x = {x+y\over2} + {x-y\over2}}\\[1em] \color{blue}{y = {x+y\over2} - {x-y\over2}} \end{align} (You may evaluate the right-hand sides of them to verify that these strange equations are correct.) Substituting the right-hand sides for $\color{red}x$ and $\color{blue}y,\,$ you will obtain \begin{align} \sin \color{red} x - \sin \color{blue }y = \sin \left(\color{red}{{x+y\over2} + {x-y\over2} }\right) - \sin \left(\color{blue }{{x+y\over2} - {x-y\over2}} \right) \\[1em] \end{align} All the rest is then only a routine calculation: \begin{align} \require{enclose} &= \sin \left({x+y\over2}\right) \cos\left( {x-y\over2} \right) + \sin \left({x-y\over2}\right) \cos\left( {x+y\over2} \right)\\[1em] &-\left[\sin \left({x+y\over2}\right) \cos\left( {x-y\over2} \right) - \sin \left({x-y\over2}\right) \cos\left( {x+y\over2} \right)\right]\\[3em] &= \enclose{updiagonalstrike}{\sin \left({x+y\over2}\right) \cos\left( {x-y\over2} \right)} + \sin \left({x-y\over2}\right) \cos\left( {x+y\over2} \right)\\[1em] &-\enclose{updiagonalstrike}{\sin \left({x+y\over2}\right) \cos\left( {x-y\over2} \right)} + \sin \left({x-y\over2}\right) \cos\left( {x+y\over2} \right) \\[3em] &=2\sin \left({x-y\over2}\right) \cos\left( {x+y\over2} \right)\\ \end{align}
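A quick floating-point spot check of the identity at a few sample points (not a proof, just reassurance):

```python
from math import sin, cos, isclose

for x, y in [(0.3, 1.2), (2.5, -0.7), (-1.1, 0.4), (3.0, 3.0)]:
    lhs = sin(x) - sin(y)
    rhs = 2 * cos((x + y) / 2) * sin((x - y) / 2)
    assert isclose(lhs, rhs, abs_tol=1e-12)
```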
given a discount rate find annual effective rate of interest?
HINT: A discount rate applied $n$ times over equal subintervals of a year is found from the annual effective rate $d$ as $$ 1-d = \left(1-\frac{d^{(n)}}{n}\right)^n$$ where $d^{(n)}$ is called the annual nominal rate of discount convertible $n$-thly (payable $n$ times per period).
How to Prove $a^3 \cos (B-C) + b^3 \cos (C-A)+ c^3 \cos (A-B)=3abc$?
It can be proved with sine-rule and cosine rule but it is really ugly. $${\rm LHS} - {\rm RHS} = \sum_{cyc} ( a^3\cos(B-C) - abc ) = \sum_{cyc}( a^3(\cos B\cos C + \sin B\sin C) - abc)$$ By sine rule, $$a : b : c = \sin A : \sin B : \sin C \quad\implies\quad \begin{cases} a^3\sin B\sin C = abc\sin A^2\\ b^3\sin C\sin A = abc\sin B^2\\ c^3\sin A\sin B = abc\sin C^2 \end{cases}$$ This leads to $$\begin{align}{\rm LHS} - {\rm RHS} &= \sum_{cyc}( a^3 \cos B \cos C - abc\cos^2 A)\\ &= \sum_{cyc}\left[a^3\left(\frac{a^2+c^2-b^2}{2ac}\right)\left(\frac{a^2+b^2-c^2}{2ab}\right) - abc\left(\frac{b^2+c^2-a^2}{2bc}\right)^2\right]\\ &= \frac{1}{4abc}\sum_{cyc}a^2(\underbrace{a^4 - (b^2-c^2)^2 - (b^2+c^2-a^2)^2}_{I}) \end{align} $$ Notice what's in the square bracket equals to $$\require{cancel}I =\cancel{a^4} - (b^2-c^2)^2 - (b^2+c^2)^2 + 2a^2(b^2+c^2) - \cancel{a^4} =2(a^2(b^2+c^2) - b^4-c^4)$$ We obtain $$\begin{align}{\rm LHS} - {\rm RHS} &= \frac{1}{2abc}\sum_{cyc} a^4b^2 + \color{red}{a^4}\color{green}{c^2} - a^2 b^4 - \color{blue}{a^2}\color{magenta}{c^4}\\ &= \frac{1}{2abc}\sum_{cyc} a^4b^2 + \color{red}{b^4}\color{green}{a^2} - a^2 b^4 - \color{blue}{b^2}\color{magenta}{a^4}\\ &= 0 \end{align}$$
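A floating-point check on a sample triangle (using the sine-rule normalization $a=\sin A$, etc., which loses no generality since both sides of the identity are homogeneous of degree $3$ in the side lengths):

```python
from math import sin, cos, pi

A, B = 0.7, 1.1
C = pi - A - B                      # angles of a valid triangle
a, b, c = sin(A), sin(B), sin(C)    # sine rule: sides proportional to sines

lhs = a**3 * cos(B - C) + b**3 * cos(C - A) + c**3 * cos(A - B)
assert abs(lhs - 3 * a * b * c) < 1e-10
```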
Difficult diophantine equation
The idea is going modulo $9$. Indeed, we will prove that $4n^4 + 7n^2+3n+6$ leaves only remainders $2,5,6$ modulo $9$. None of these are cubes modulo $9$(only $0,1,8$ are), completing the proof that no such integers $n,m$ exist. For this, we note that if $n \equiv 0 \pmod{3}$ then $4n^4 + 7n^2+3n+6 \equiv 6\pmod{9}$. If $n \equiv 1 \pmod{3}$ then $4n^4 + 7n^2+3n+6 \equiv 2 \pmod{9}$. Finally, if $n \equiv - 1 \pmod{3}$ then $4n^4+7n^2+3n+6 \equiv 5 \pmod{9}$.
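Both residue computations are finite checks:

```python
residues = {(4 * n**4 + 7 * n**2 + 3 * n + 6) % 9 for n in range(9)}
cubes = {m**3 % 9 for m in range(9)}

assert residues == {2, 5, 6}        # only these remainders occur
assert cubes == {0, 1, 8}           # the cubes modulo 9
assert residues & cubes == set()    # no overlap, so no solutions exist
```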
Locally Lipschitz and differentiable almost everywhere
They are of measure zero. It's because you're confusing length with area. In $\mathbb R^2$, the Lebesgue measure is defined using the $\sigma$-algebra generated by rectangles, on which the Lebesgue measure is just the area. You can therefore see that the infinite-length, zero-area lines you speak of have measure zero. Note that if you fix, say, $x_2$, the function $V(x_1) = |x_1| + |x_2|$ is locally Lipschitz and differentiable almost everywhere (i.e. except at $x_1 = 0$). Hope that helps,
When $\frac{z-1}{z+1}$ is a real number?
You've shown that $$\frac{z-1}{z+1} = \frac{|z|^2 -1}{|z+1|^2} +i \frac{2 \operatorname{Im}z}{|z+1|^2}$$ so that the fraction is purely real if and only if $\operatorname{Im} z = 0$ and $z \not= -1$.
Why is my integrand wrong?
You need to multiply the integrand by $r$, because $\mathrm{d}A=r\,\mathrm{d}r\,\mathrm{d}\theta$; this makes your integrand $-\left(r^3-9r\right)\cdot r$.
One confusion over conditional expectation
Indeed, if $\sigma(X_1) = \sigma(X_2)$ then $E[Y | X_1] = E[Y | X_2]$ almost surely, thus $E[Y | X_1] = u_1(X_1)$ and $E[Y | X_2]=u_2(X_2)$ for some measurable functions $u_1$ and $u_2$ such that $u_1(X_1)=u_2(X_2)$ almost surely. One can then define $E[Y | X_1=x]$ as $u_1(x)$ and $E[Y | X_2=x]$ as $u_2(x)$ for every $x$ in the target set of $X_1$ and $X_2$. This does not entail that $E[Y | X_1=x]$ and $E[Y | X_2=x]$ coincide since, in general, $u_1(x)\ne u_2(x)$. To sum up, the condition that [$u_1(X_1)=u_2(X_2)$ almost surely] does not imply that [$u_1=u_2$]. Edit: Perhaps a simple example can help. Assume that $Y=6X_1=3X_2$, hence $X_2=2X_1$. Then $$E[Y | X_1]=E[Y | X_2]=Y=6X_1=3X_2,$$ hence, for every $x$, $E[Y | X_1=x]=6x$ and $E[Y | X_2=x]=3x$. If one selects some $\omega$ in $\Omega$ and one measures $X_1(\omega)=x_1$ and $X_2(\omega)=x_2$ then $x_2=2x_1$ hence $E[Y | X_1=x_1]=6x_1$ and $E[Y | X_2=x_2]=3x_2$, which implies that $$E[Y | X_1=x_1]=E[Y | X_2=x_2].$$ More generally, if $\sigma(X_1)=\sigma(X_2)$, there exists some invertible bimeasurable $v$ such that $X_2=v(X_1)$ almost surely hence $$ E[Y\mid X_2]=u_2(X_2)=u_2\circ v(X_1)=E[Y\mid X_1], $$ thus, for almost every $x_1$ (with respect to the distribution of $X_1$), $$ E[Y\mid X_2=v(x_1)]=u_2\circ v(x_1)=E[Y\mid X_1=x_1]. $$
Translating a sentence to predicate logic
Perhaps it would make more sense if it were written as follows: $$\forall x(hasTail(x) \rightarrow dog(x))$$ Your statement is equivalent to the above statement: $$\begin{align} \lnot \lnot \forall x(hasTail(x) \rightarrow dog(x))&\equiv \lnot \exists x( \lnot(hasTail(x) \rightarrow dog(x)))\tag{1}\\ \\ & \equiv \lnot \exists x (\lnot(\lnot hasTail(x) \lor dog(x)))\tag{2}\\ \\ &\equiv \lnot \exists x(hasTail(x) \land \lnot dog(x))\tag{3}\\ \\ &\equiv \lnot \exists x (\lnot dog(x) \land hasTail(x))\tag{4} \end{align}$$ $(1)$ follows from the equivalence $\lnot \forall x P(x) \equiv \exists x (\lnot P(x))$ $(2)$ follows from the equivalence $p \rightarrow q\equiv \lnot p \lor q$ $(3)$ follows from DeMorgan's Law $(4)$ because of the commutativity of $\land$
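The chain of equivalences can also be brute-force verified over a small finite universe (a sketch; every possible assignment of the two predicates on a 3-element domain):

```python
from itertools import product

domain = range(3)

def equivalent():
    # every way of choosing which elements have a tail / are dogs
    for has_tail in product([False, True], repeat=len(domain)):
        for dog in product([False, True], repeat=len(domain)):
            forall = all((not has_tail[x]) or dog[x] for x in domain)
            not_exists = not any((not dog[x]) and has_tail[x] for x in domain)
            if forall != not_exists:
                return False
    return True

assert equivalent()
```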
How to find the the force necessary to pull from a cylinder so it keeps in place?
The magnitude of the torque is $F·R·\sin\alpha$, where $\alpha$ is the angle between the force and the arm; this is not what you wrote, which rather looks like the decomposition of some force. So the condition for rotational equilibrium is: $$f_R·R-F·R=0$$ But this doesn't matter, because one of your implicit assumptions is wrong. You assumed that it is a statics problem, but it isn't. If the (center of mass of the) cylinder is to stay in place, it has to have an angular acceleration around its center of mass: the above equation doesn't hold. By hypothesis, the center of mass is not moving; thus, the total force along a horizontal axis is zero: $$F\cos 37º-f_R=0$$ Then $$f_R=\frac{4F}{5},$$ which, together with the equation you got, $$f_R=\frac{3}{10}\left(0.5\times 9.8-\frac{3F}{5}\right),$$ gives $F=1.5$, approx. We can get the angular acceleration from here: $$I\alpha=F·R-f_R·R,$$ where $\alpha$ is the angular acceleration and $I$ the moment of inertia of the cylinder about its symmetry axis.
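The two equations can be combined and solved numerically to confirm $F\approx1.5$ (a sketch, with the numbers used in the answer):

```python
# horizontal force balance:  F cos(37 deg) - f_R = 0   ->  f_R = 4F/5
# friction relation given:   f_R = (3/10) * (0.5 * 9.8 - 3F/5)
# combine: 4F/5 = 0.3 * (4.9 - 0.6 F)  ->  (0.8 + 0.18) F = 1.47
F = 0.3 * 0.5 * 9.8 / (4 / 5 + 0.3 * 3 / 5)

assert abs(F - 1.5) < 0.01
assert abs(4 * F / 5 - 0.3 * (0.5 * 9.8 - 3 * F / 5)) < 1e-12
```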
Convergence in Distribution of a random variable with an upper bound implies convergence of higher moments.
It suffices to prove the claim when $k=1$; the general case will follow by replacing $X_n$ with $(X_n)^i$ and $Y$ with $Y^i$. Since $|X_n|\leq Y$ for all $n\geq 0$, the family $(X_n)$ is uniformly integrable. In that case, it is well known that $X_n\xrightarrow{d} X$ implies $E(X_n)\to E(X)$.
In how many ways can they form a committee of 6 members that does not include both Chloe and George?
In case you haven’t figured it out yet, it’s ${18\choose 6}-{2\choose2}{16\choose4}= {18\choose 6}-{16\choose4} $. There are only 16 people to choose from after forcing Chloe and George to be in the group. That is, you subtract the number of groups with Chloe and George (the undesired groups) from the total number of possible groups to get the number of groups that are permissible.
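Confirmed by brute-force enumeration, which is cheap at this size (the two labels below are arbitrary stand-ins for Chloe and George):

```python
from math import comb
from itertools import combinations

people = range(18)
chloe, george = 0, 1   # hypothetical labels for the two members
count = sum(1 for group in combinations(people, 6)
            if not (chloe in group and george in group))

assert count == comb(18, 6) - comb(16, 4)
assert count == 16744
```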
Number of elements of a matrix subset with field $\mathbb{Z}_p$
Hint: you are working with a pool of $p^n$ column vectors from $\Bbb F^n$. Of course, $n$ could be 1 or 2, but to get you going, what I say will venture up to 3. When picking the first column, you'll have $p^n-1$ choices. (Anything except the zero vector.) When you pick the second column, you'll have to avoid picking something in the span of the first column. There are $p$ things in that span since you can choose the coefficient freely from the field, so you now have $p^n-p$ choices for the second column. For the third column, you'd have to pick a vector not in the span of the first two columns. There are $p^2$ such vectors, since you can choose two coefficents for the vectors freely. So now you are down to $p^n-p^2$ vectors... Can you see how to count the total possibilities from these data?
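Following the count through gives $\prod_{k=0}^{n-1}\left(p^n-p^k\right)$ invertible matrices; for $n=2$ this matches a brute-force enumeration (a sketch; invertibility checked via the determinant mod $p$):

```python
from itertools import product

def count_invertible(p):
    """Brute-force count of invertible 2 x 2 matrices over Z_p."""
    count = 0
    for a, b, c, d in product(range(p), repeat=4):
        if (a * d - b * c) % p != 0:
            count += 1
    return count

for p in (2, 3, 5):
    formula = (p**2 - 1) * (p**2 - p)   # (p^n - 1)(p^n - p) with n = 2
    assert count_invertible(p) == formula
```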
Rule C (Introduction to mathematical logic by Mendelson fifth edition)
In the fourth edition, at least, the reason Mendelson uses the notation $\vdash_C$ is that he is defining a second derivability relation, which differs from his original one by allowing the use of Rule C under certain conditions. He then proves that if $\Gamma \vdash_C B$ then $\Gamma \vdash B$, which justifies why he left out Rule C in his original definition of $\vdash$. The subscript $C$ has nothing to do with a new theory, it is just a decoration. As for why new constants are required, consider applying Rule C to the statements $(\exists x)(x = 0)$ and $(\exists x)(x = 1)$ in the presence of axioms that imply $0 \not = 1$.
Am I solving these initial value problem correctly?
In your third step, you factor out a minus sign from $\frac 1{-y+5}$. The result should be $-\frac 1{y-5}$, but you wrote $-\frac 1{y+5}$. So replace the later $+5$'s with $-5$, and when you transpose it to the other side it becomes $+5$ rather than $-5$. You use $x=\pm e^{-c}$. You explain this badly, by first saying $x=\pm e^{c}$ then later replacing $x^{-1}$ with $x$. The easiest way to correct this is to do the line replacing $(y-5)^{-1}$ with $y-5$ first then defining $x$ correctly. Last, $x$ is a lousy name for a constant; it looks like another independent variable distinct from $t$. You already used $C$ earlier, so you could use something like $D$. I would replace the earlier uses of $C$ with $C'$ so I could use $C$ in my final answer. With those changes, the final answer is $$y=Ce^{-t}+5$$ Substituting that into your original differential equation shows that it is correct. You should have done that with your attempted answer! There is one more technical problem, that your method of solution ignores the possibility that $y=5$. When that is the case, you divide by zero in your steps. The later replacement of $\pm e^{-C'}$ with $C$ reintroduces that possibility, so your final answer is correct--but you got there by committing two opposing errors that canceled each other out. Those particular errors are commonly done in early differential equation solving, so your homework grader will probably overlook it. But I thought you should know about it anyway.
Relating values algebraically in exponential function
I don't have my paper copy handy, but it appears that this is the page in question and that your OP has the code directly. The relationship in your chart is $$a=b^{8-n}$$ Since it calculates an exponent there wouldn't be an additive relationship. But I will note that what they are asking for means you have not labeled your chart correctly. Note that your chart has the heading $n$ and they are asking for the relationship of $counter$. If you label your chart correctly you will find that $n$ is a constant and what you have labeled as $n$ is actually $counter$ and $a$ is $product$ in which case your relationship is $$\mathrm{product}=b^{n-\mathrm{counter}}$$
Absolutely continuous implies Lipschitz?
The answer to the title question is "No, absolute continuity does not imply Lipschitz continuity". This is shown for example by the square root function on $[0,1]$. Of course we can find absolutely continuous functions with much worse behaviour, say that the derivative is unbounded on every nonempty open interval contained in the domain. But the conditions you are given for $f$ are — very sneakily, I overlooked it at first too — stronger than just absolute continuity. The key point is that the intervals $\mathopen{]} a_j, b_j\mathclose{[}$ are not necessarily disjoint. This changes the game. Let $\delta_1$ be the $\delta$ corresponding to $\varepsilon = 1$, i.e. we have $$\sum_{j = 1}^n (b_j -a_j) \leqslant \delta_1 \implies \sum_{j = 1}^n \lvert f(b_j) - f(a_j)\rvert \leqslant 1$$ for all families of open intervals $\mathopen{]} a_j, b_j\mathclose{[}$, $1 \leqslant j \leqslant n$, contained in $[a,b]$. Now consider $x,y \in [a,b]$ with $0 < y-x \leqslant \delta_1$ and let $n = \bigl\lfloor \frac{\delta_1}{y-x}\bigr\rfloor$. Taking the family of open intervals with $a_j = x$ and $b_j = y$ for $1 \leqslant j \leqslant n$ shows $$\sum_{j = 1}^n \lvert f(b_j) - f(a_j)\rvert = n\lvert f(y) - f(x)\rvert \leqslant 1\,,$$ thus $$\lvert f(y) - f(x)\rvert \leqslant \frac{1}{\bigl\lfloor \frac{\delta_1}{y-x}\bigr\rfloor} \leqslant \frac{1}{\frac{1}{2}\frac{\delta_1}{y-x}} = \frac{2}{\delta_1}(y-x)\,.$$ From this it follows that $$\lvert f(y) - f(x)\rvert \leqslant \frac{2}{\delta_1}\lvert y-x\rvert$$ for all $x,y \in [a,b]$, i.e. that $f$ is Lipschitz continuous with Lipschitz constant $2/\delta_1$. (With a bit more work one can show that $1/\delta_1$ works as a Lipschitz constant for $f$.)
Can this transformation be expressed as a matrix equation?
Yes you can. $B$ is a rank one matrix. Use the column vector $\vec{1}$ with $n$ ones and the row vector $\vec{1}^\top$ with $m$ ones. Then $$B = \frac{(A\vec{1})(\vec{1}^\top A)}{\vec{1}^\top A\vec{1}}$$ Where the denominator is a scalar. Note that you need only do the left and right vector multiplication once.
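A quick numerical check on sample data (a sketch; it assumes the intended transformation is $b_{ij}=(\text{row sum}_i)(\text{column sum}_j)/(\text{grand total})$, the natural entrywise reading of the matrix formula):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
ones_n = np.ones(A.shape[0])        # vector of n ones (rows)
ones_m = np.ones(A.shape[1])        # vector of m ones (columns)

# B = (A 1)(1^T A) / (1^T A 1), a rank-one matrix
B = np.outer(A @ ones_m, ones_n @ A) / (ones_n @ A @ ones_m)

row_sums, col_sums, total = A.sum(axis=1), A.sum(axis=0), A.sum()
expected = np.outer(row_sums, col_sums) / total
assert np.allclose(B, expected)
assert np.linalg.matrix_rank(B) == 1
```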
Show that $f_n\to f$ in $L^1(\mu)$ iff $\sup\limits_{A \in \mathcal M} \left| \int_{A} f_n d\mu - \int_{A} f d\mu \right| \rightarrow 0$
I presume that you meant $n\rightarrow\infty$ instead of $n\rightarrow0$. Firstly, denote $g_{n}:=f_n-f$. Secondly, abbreviate $\mu\left(h\right):=\int_{X}hd\mu$ for integrable functions $h$. You ask for a proof of $\mu\left(\left|g_{n}\right|\right)\rightarrow0$ on the basis of $s_{n}:=\sup\left\{ \left|\mu\left(g_{n}1_{A}\right)\right|\mid A\in\mathcal{M}\right\} \rightarrow0$. Let $A_{n}:=\left\{ g_{n}>0\right\}\in\mathcal M$ and $B_{n}:=\left\{ g_{n}<0\right\}\in\mathcal M$. Then $\mu\left(\left|g_{n}\right|\right)=\mu\left(g_{n}1_{A_{n}}\right)+\mu\left(-g_{n}1_{B_{n}}\right)=\left|\mu\left(g_{n}1_{A_{n}}\right)\right|+\left|\mu\left(g_{n}1_{B_{n}}\right)\right|\leq2s_{n}$. Hence $s_n\rightarrow0$ implies that $\mu\left(\left|g_{n}\right|\right)\rightarrow0$.
Sequence of unbounded continuous functions
In the first case such a sequence can't exist. Note that $\mathbb{Q}$ is not a $G_\delta$ set: writing $\mathbb{Q} = \{r_m\}$, if $\mathbb{Q} = \cap_n V_n$ with $V_n$ open, then $\mathbb{R} = \{ r_m \} \cup (\cup_n V_n^c)$ would be of first category (this is discussed in more detail here). On the other hand, the set $\cap_m \cup_n \{x : f_n(x) > m \}$ of points at which a sequence of positive continuous functions is unbounded is a $G_\delta$ set. In the second case there is such a sequence. Let $\mathbb{Q} = \{q_k\}$ and consider $$f_n(x) = \min_{1 \leq k \leq n} \{k + n \vert x - q_k \vert \} \geq 1.$$ Then for each $q_m \in \mathbb{Q}$ and $n \geq m$, one has $f_n(q_m) \leq m + n \vert q_m - q_m \vert = m$, hence $\{f_n(q_m)\}$ is bounded. On the other hand, if $x \in \mathbb{Q}^c$, then given $M> 0$, there is some $N = N(M) > M$ such that $n \vert x - q_i \vert > M$ for all $1 \leq i \leq M$ provided $n > N$. Therefore, $$f_n(x) = \min_{1 \leq k \leq n} \{k + n \vert x - q_k \vert \} > M,$$ that is, $f_n(x) \longrightarrow \infty$.
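The second-case construction can be watched numerically (a sketch; the enumeration of the rationals used below is one illustrative choice, not canonical):

```python
from fractions import Fraction
import math

# A concrete enumeration q_1, q_2, ... of rationals in [-2, 2],
# ordered by denominator; an illustrative choice of enumeration.
q, seen = [], set()
for d in range(1, 60):
    for p in range(-2 * d, 2 * d + 1):
        v = Fraction(p, d)
        if v not in seen:
            seen.add(v)
            q.append(float(v))

def f(n, x):
    """f_n(x) = min over 1 <= k <= n of (k + n * |x - q_k|)."""
    return min(k + n * abs(x - q[k - 1]) for k in range(1, n + 1))

ns = (10, 100, 1000)
# In this enumeration 0.5 = q_8, so f_n(0.5) <= 8 for n >= 8 (bounded):
rational_vals = [f(n, 0.5) for n in ns]
# At an irrational point the sequence grows without bound:
irrational_vals = [f(n, math.sqrt(2)) for n in ns]
```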
Hom-functor (Bifunctor)
The mapping $(X, Y) \mapsto Hom(X,Y)$ is a functor $C^{op} \times C \to Set$ (where $X, Y$ are objects of $C$), not $C \times C \to Set$. This is how you get functoriality (what you called the bifunctor axiom). To be more precise, if $f, g, h, i$ are morphisms in $C$, such that both $gi$ and $hf$ exist, and $f^*$, $h^*$ are the reversed morphisms in $C^{op}$, we get: $$ F( (f^*,g) \circ (h^*, i)) = F ( f^*h^*, gi ) = F( (hf)^*, gi) = gi\circ - \circ hf, $$ which is postcomposition by $gi$ and precomposition by $hf$. This mapping acts on $\phi \in Hom(X, Y)$ by $\phi \mapsto gi\phi hf$. On the other hand, $$ F(f^*, g) \circ F(h^*, i) = ( g \circ - \circ f ) \circ (i \circ - \circ h) $$ These mappings act on $\phi \in Hom(X, Y)$ by: $$ ( g \circ - \circ f )\Big[(i \circ - \circ h)(\phi)\Big] = ( g \circ - \circ f )[i\phi h] = gi \phi hf. $$ Thus, $$ F( (f^*,g) \circ (h^*, i)) = F(f^*, g) \circ F(h^*, i). $$ Hope this helps!
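The identity can be illustrated concretely with Python functions standing in for morphisms in $Set$ (the particular maps below are arbitrary choices):

```python
# F(f*, g)(phi) = g o phi o f: precompose with f, postcompose with g.
def compose(*fns):
    def composed(x):
        for fn in reversed(fns):  # rightmost function applies first
            x = fn(x)
        return x
    return composed

# Arbitrary sample morphisms between copies of the integers:
f = lambda x: x + 1
h = lambda x: 2 * x
i = lambda x: x - 3
g = lambda x: x * x
phi = lambda x: -x  # an element of Hom(X, Y)

# F((hf)*, gi)(phi) = gi o phi o hf:
lhs = compose(g, i, phi, h, f)
# (F(f*, g) o F(h*, i))(phi) = g o (i o phi o h) o f:
rhs = compose(g, compose(i, phi, h), f)
assert all(lhs(x) == rhs(x) for x in range(-20, 21))
```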
Integrable functions that decay slower than every polynomially integrable function
Here are some examples (for $d=1$, but which can be easily adapted). Let $L(x):=\max\{\ln x,1\}$ for positive $x$ and $L_k(x)=L\circ L_{k-1}(x)$, $x>0$, $k\geq 2$, $L_1=L$. Let $$ f_n(x):= \frac 1x\left(\prod_{k=1}^n\frac1{L_k(x)}\right)\frac 1{\left(L_{n+1}(x)\right)^{1+\delta}},x\geqslant 1, \delta>0 $$ and $f_n(x)=0$ for $x\leqslant 1$.
Tail bound for sum of geometric random variables
Viewing $M_n$ as the number of Bernoulli trials needed to obtain $n$ failures, we get $$P(M_n-2n>k)=P(B(k+2n,\frac 1 2)<n)=P(B(k+2n,\frac 1 2)-\frac 1 2(k+2n)<-\frac 1 2 k)\le \exp\left(-\frac {k^2}{2(2n+k)}\right)$$ by Hoeffding's bound. Similarly for $k\le 2n$ $$P(M_n-2n<-k)=P(B(2n-k,\frac 1 2)>n)\le \exp\left(-\frac {k^2}{2(2n-k)}\right)$$ We can nicely combine the two for $k\le 2n$ into $$P(|M_n-2n|>k)\le 2\exp\left(-\frac {k^2}{4n}\right)$$ due to concavity of $e^{-1/x}$.
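A Monte Carlo sketch of the combined bound (the parameters $n$, $k$ and the trial count are illustrative choices):

```python
import math
import random

random.seed(0)

def sample_M(n):
    """Total number of fair-coin flips needed to see n tails (failures)."""
    flips, tails = 0, 0
    while tails < n:
        flips += 1
        if random.random() < 0.5:
            tails += 1
    return flips

n, k, trials = 100, 40, 20000
hits = sum(abs(sample_M(n) - 2 * n) > k for _ in range(trials))
empirical = hits / trials
bound = 2 * math.exp(-k * k / (4 * n))  # 2 e^{-4}, about 0.037
assert empirical <= bound
```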
Confidence interval of functional - R code
I don't know if this is exactly what you're looking for but it might help you understand what is going on. You could get a $95\%$ confidence interval in three different ways: (1) Monte Carlo (2) Calculate the distribution of $e^X$ and the distribution of the mean of $e^X$ (3) Central Limit Theorem. You used the third one in Question 1. However, as Ian noted, the variance is not finite. In that case, you cannot use it because, well, the variance is infinite. Moreover, you estimated the variance with the sample variance but, in general, it is better to plug in the actual variance if you know how to calculate it. Furthermore, you found out in Question 2 that the coverage probability is not close to $.95$; this is due to the statistical error that is introduced when the Central Limit Theorem is used. I haven't tried the second method here and I don't know whether it is possible to do it exactly (I don't think it is, to be honest). But in general this might be useful if, for example, the mean is distributed according to a known distribution (such that you can calculate the parameters). Last but definitely not least, you can always approximate the distribution of the mean of $e^X$ using Monte Carlo methods, which is the first option I gave and essentially your solution in Question 2. I hope this is what you're looking for. If some details are unclear, feel free to comment!
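For concreteness, here is a sketch of option (1) in Python (not R): resample the data to approximate the sampling distribution of the mean and take percentiles, with no variance formula needed. A lognormal sample is an illustrative stand-in for the distribution of $e^X$, and the sample sizes are arbitrary.

```python
import math
import random

random.seed(42)

# Illustrative stand-in: X standard normal, so e^X is lognormal.
xs = [math.exp(random.gauss(0.0, 1.0)) for _ in range(1000)]
sample_mean = sum(xs) / len(xs)

# Bootstrap the sampling distribution of the mean and take percentiles;
# this sidesteps the CLT and any variance estimate entirely.
reps = 1000
boot_means = sorted(
    sum(random.choice(xs) for _ in range(len(xs))) / len(xs)
    for _ in range(reps)
)
lo, hi = boot_means[int(0.025 * reps)], boot_means[int(0.975 * reps)]
assert lo < sample_mean < hi
```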
Random cellular automaton with three colors.
This is rather a research problem - you have to run statistical tests on the Cellular Automata (CA) rule you find to show that it is random. If you would like to do a research project like this, check out The Wolfram Science Summer School. For now let's see what information and tools can get you started. First of all I would read Chapter 6: Starting from Randomness - Section 5: Randomness in Class 3 Systems in the "New Kind of Science" (NKS) book and surrounding chapters for a better understanding of the subject. I would also look at the many free apps exploring 3-color rules at The Wolfram Demonstrations Project. Next you can start from good candidates found on page 64. Follow that link and read the image captions about 3-color CAs with seemingly random behavior. The online book is free (you may need to register once). I would recommend also reading pages 62 - 70 explaining those images. Also take a look at "Random Sequence Generation by Cellular Automata" by Stephen Wolfram. If you do not have Mathematica, then Wolfram|Alpha can provide tons of valuable information. Here are the queries for the CAs from the NKS book: rule 177, rule 912, and rule 2040. Note how Wolfram|Alpha gives you, for example, difference pattern images; highly divergent (fast-spreading) patterns mean chaos and randomness. If you have Mathematica, it is easy to evolve CAs (and further test their random properties, say with a Chi-squared test). This is how you set up the 3-color range-1 totalistic CAs from the pictures in the NKS book (you can dig further with Hypothesis Testing). ArrayPlot[CellularAutomaton[{#, {3, 1}}, {{1}, 0}, 50], Mesh -> True, PixelConstrained -> 7, ColorRules -> {0 -> White, 1 -> Red}, Epilog -> Text[Style["Rule " <> ToString@#, Red, Bold, 25], {50, 340}]] & /@ {177, 912, 2040} // Column
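If you want to experiment without Mathematica, here is a minimal Python sketch of a 3-color, range-1 totalistic CA (the rule-digit encoding follows the standard totalistic convention; the grid width, step count and boundary handling are arbitrary choices):

```python
def totalistic_step(cells, rule, colors=3):
    """One step of a k-color, range-1 totalistic CA with cyclic boundary.

    The new color of a cell is the base-k digit of `rule` selected by the
    sum of the old three-cell neighbourhood (sums range over 0..3*(k-1))."""
    digits = [(rule // colors**s) % colors for s in range(3 * colors - 2)]
    n = len(cells)
    return [digits[cells[i - 1] + cells[i] + cells[(i + 1) % n]]
            for i in range(n)]

def evolve(rule, width=21, steps=10):
    row = [0] * width
    row[width // 2] = 1  # single cell of color 1, as in the NKS pictures
    history = [row]
    for _ in range(steps):
        history.append(totalistic_step(history[-1], rule))
    return history

# Rule 2040, one of the seemingly random rules shown on NKS page 64:
rows = evolve(2040)
```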
Viewing $m/m^2$ as a vector space
$\mathfrak{m}/\mathfrak{m}^2$ is annihilated by $\mathfrak{m}$ because if $x,y\in \mathfrak{m}$ then $xy\in \mathfrak{m}^2$, hence $x\cdot(y+\mathfrak{m}^2)=0+\mathfrak{m}^2$. Since $\mathfrak{m}/\mathfrak{m}^2$ is an $A$-module on which $\mathfrak{m}$ acts trivially, we can make it an $A/\mathfrak{m}$-module by defining the action $$ \overline{x}\cdot(y+\mathfrak{m}^2)=x\cdot(y+\mathfrak{m}^2)$$ for $\bar{x}\in A/\mathfrak{m}$ and $x\in A$ representing $\bar{x}$. In order for this action to be well-defined (i.e. be independent of the choice of $x\in A$ representing $\bar{x}\in A/\mathfrak{m}$), it is essential that $\mathfrak{m}$ act trivially on $\mathfrak{m}/\mathfrak{m}^2$. This is why we consider $\mathfrak{m}/\mathfrak{m}^2$ rather than say $\mathfrak{m}/\mathfrak{m}^3$.
Series convergence from Rudin
Your proof is indeed correct. For the sake of variety, I would suggest an alternative to Cauchy-Schwarz, namely the AM-GM inequality: analogously to what you have written, for any $n\geq 1$, $$ 0\leq \sum_{k=1}^n \frac{\sqrt{a_k}}{k} \stackrel{\tiny\rm (AM-GM)}{\leq} \sum_{k=1}^n \frac{a_k+\frac{1}{k^2}}{2} = \frac{1}{2}\left( \sum_{k=1}^n a_k + \sum_{k=1}^n \frac{1}{k^2}\right) \leq \frac{1}{2}\left( \sum_{k=1}^\infty a_k + \sum_{k=1}^\infty \frac{1}{k^2}\right) $$ Thus, the partial sums $S_n$ are bounded and non-decreasing: $(S_n)_n$ converges by the monotone convergence theorem.
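A numerical sanity check of the termwise AM-GM step $\sqrt{a_k}/k = \sqrt{a_k \cdot \frac{1}{k^2}} \leq \frac{1}{2}(a_k + \frac{1}{k^2})$ and of the resulting uniform bound, using the convergent choice $a_k = k^{-3/2}$ (an illustrative example):

```python
import math

N = 10_000
a = [k**-1.5 for k in range(1, N + 1)]  # a convergent series sum a_k

# Uniform bound on the partial sums from the displayed inequality:
bound = 0.5 * (sum(a) + sum(k**-2 for k in range(1, N + 1)))

partial = 0.0
for k, ak in enumerate(a, start=1):
    term = math.sqrt(ak) / k
    # Termwise AM-GM: sqrt(a_k * 1/k^2) <= (a_k + 1/k^2) / 2.
    assert term <= (ak + k**-2) / 2 + 1e-15
    partial += term
assert partial <= bound
```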
Is there any simple way to solve $366^n(366-n)!\geq 2\times 366!$?
The arithmetic-geometric inequality tells us $$\frac{N!}{(N-n)!}\le\left(N-\frac{n-1}{2}\right)^n $$ and is quite sharp when $n\ll N$. Thus a good approximation for $n$ might be the solution of $$N^n=2\cdot \left(N-\frac{n-1}{2}\right)^n,$$ or: $$\left(1-\frac{n-1}{2N}\right)^n \approx \frac 12$$ With $c:=n^2/N$ and for $n\gg 1$, $$\left(1-\frac{n-1}{2N}\right)^n\approx\left(1-\frac c{2n}\right)^n\approx e^{-\frac c2}.$$ This suggests $c\approx 2\ln 2$ and so $n\approx \sqrt{2N\ln 2}$. With $N=366$, this crude approximation gives us $n\approx 22.53$.
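One can confirm the estimate against the exact threshold with a few lines, using `math.factorial` to mirror the title inequality directly (the brute-force search is just an illustration):

```python
import math

N = 366
target = 2 * math.factorial(N)

# Smallest n with N^n * (N - n)! >= 2 * N!, i.e. the birthday threshold.
exact = next(n for n in range(1, N + 1)
             if N**n * math.factorial(N - n) >= target)

# The crude approximation n ~ sqrt(2 N ln 2) derived above:
approx = math.sqrt(2 * N * math.log(2))
```

The approximation 22.53 lands within one of the exact integer threshold.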