How to prove that A=B using set builder notation
Suppose there exists an element $a$ in $A$ that is $\color{red}{\text{not in $B$}}$. Either $a$ is in $C$ or $a$ is in $C^{-}$. If $a$ is in $C$, then it is in $A\cap C$. But $A\cap C$ equals $B\cap C$ by hypothesis, therefore $a$ is in $B\cap C$; in particular, $a$ is in $B$. This is a $\color{red}{\text{contradiction}}$. If $a$ is in $C^{-}$, you are led to a contradiction similarly. Therefore there is no element in $A$ that is not also in $B$, that is, every element in $A$ is also in $B$. Hence $A\subseteq B$. A symmetric argument shows that $B\subseteq A$. Since $A\subseteq B$ and $B\subseteq A$, we have $A=B$.
The $2N$ balls problem
This is Langford's cubes problem, although the "cubes" part is something of my own memory and not found in the literature. The number of solutions for each $N$ is OEIS A014552; in particular there are solutions iff $N\equiv0,3\bmod4$.
Not sure where "if $p(1)=0$ then $p(x)=(x-1)(ax+b)$" comes from in a proof
First note that a quadratic with a real root has no complex roots (you can look at the quadratic formula to see that). Also note that if a non-zero polynomial $p$ is zero at a point $a$, $p(a)=0$, then $p(x)=(x-a)q(x)$ where $q(x)$ is one degree lower than $p$; I am guessing this is the factor theorem that is being cited, or maybe the fundamental theorem of algebra. In your problem $p$ is a quadratic or linear and $p(1)=0$, so $p(x)=(x-1)q(x)$, where $q$ is a degree one polynomial or a constant (depending on whether $p$ is quadratic or linear). We also have that $p$ has one real root, so if it has another it is also real, so you can assume $q(x)=ax+b$ where $a,b \in \mathbb{R}$. (Edit: I just read the problem more carefully and it seems that paragraph one was all explained in the problem statement, so I am not sure if I cleared anything up for you.)
Difficulty solving problem regarding 2 parallel vectors
$$c\cdot d=(a-2b)\cdot (14a-2b)=14|a|^2-30a\cdot b+4|b|^2$$ Use that $$|b|=2|a| \quad (1)$$ and then $$a\cdot b=|a||b|\cos 60^\circ \rightarrow a\cdot b=2|a|^2\cdot1/2=|a|^2 \quad (2)$$ Remember that $c \perp d \Leftrightarrow c \cdot d=0$. Using $(1)$ and $(2)$ we get: $$c\cdot d=14|a|^2-30a\cdot b+4|b|^2=14|a|^2-30|a|^2+4\cdot 4|a|^2=0$$
What is the approach to understand this algorithm?
No matter what values are assigned to all of the variables except the last one, there is exactly one value the last variable can take to satisfy the equation. Since all choices are independent, the probability of satisfying the equation is the same as the probability that the last variable takes that correct value, which is $\frac{1}{2}$.
Quasilinear equation and their properties.
Follow the method in http://en.wikipedia.org/wiki/Method_of_characteristics#Example:
$\dfrac{dy}{dt}=1$; letting $y(0)=0$, we have $y=t$.
$\dfrac{du}{dt}=0$; letting $u(0)=u_0$, we have $u=u_0$.
$\dfrac{dx}{dt}=a(u)=a(u_0)$; letting $x(0)=f(u_0)$, we have $x=a(u_0)t+f(u_0)=a(u)y+f(u)$, i.e. $u=F(x-a(u)y)$.
$u(x,0)=h(x)$: $F(x)=h(x)$.
$\therefore u=h(x-a(u)y)$
In what sense do complex functions have norms?
You misstated Liouville's theorem. The crucial assumption is "analytic and bounded in the WHOLE PLANE". An analytic function in some other region, for example in the unit disk, can be bounded without being a constant, and you may consider various norms of such functions. But even for analytic functions in the plane one can introduce various norms. For example $$\| f\|=\sup_z|e^{-|z|^2}f(z)|$$ This is a norm. A function does not have to be bounded for this norm to be finite.
Semidirect product defined by a non-trivial abstract homomorphism
Since $\phi: K \to \text{Aut}(H)$ is non-trivial, there is some $k \in K$, and some $h' \in H$ such that $\phi_k(h') \neq h'$. Now consider the products (for any $h \in H$): $(h,k)\ast(h',e_K)$ and $(h',e_K)\ast(h,k)$. The first is $(h\phi_k(h'),k)$, while the second is $(h'\phi_{e_K}(h),k) = (h'h,k) = (hh',k)$ (since $H$ is abelian, and $\phi_{e_K}$ is the identity automorphism). By supposition, $\phi_k(h') \neq h'$, so $h\phi_k(h') \neq hh'$, so these two products are two distinct elements of $H \rtimes_{\phi} K$, and thus the semi-direct product cannot be abelian.
Prove the group of rotations of $\mathbb R^2$ about origin is cyclic
As @Qiaochu Yuan said : 1) It means a finite subgroup of the group $$R:=\Big\{r_\theta=\begin{pmatrix}\cos(\theta)&-\sin(\theta)\\\sin(\theta)&\cos(\theta)\end{pmatrix};\theta\in\mathbb{R}\Big\}$$ which is itself a subgroup of $GL_2(\mathbb{R}),$ 2) For example, $R$ is uncountable and so it is non-cyclic (each cyclic group is countable because you have the surjective morphism $\mathbb{Z}\to<g>$ defined by $n\mapsto g^n$), 3) Let $G=\{Id=r_0,r_{\theta_1},...,r_{\theta_n}\}$ and $\theta:=\min(\theta_1,...,\theta_n)\in]-\pi,\pi[,$ and show that $G=<r_\theta>.$ Let $r_{\theta'}\in G\backslash\{r_0\}.$ We can suppose that $\theta'\in]0,\pi[$ (replacing $r_{\theta'}$ with $r_{\theta'}^{-1}=r_{-\theta'}$). By definition, $\theta\leq \theta'.$ Let $q$ be the integer such that $q\theta\leq\theta'$ and $\theta'<(q+1)\theta.$ Let $\alpha:=\theta'-q\theta$; it is the angle of $r_{\theta'}\circ r_\theta^{-q}\in G$ and, since $0\leq\alpha<\theta$, by minimality of $\theta$ you get $\alpha=0.$ So $r_{\theta'}=r_\theta^q$ and you get $G=<r_\theta>.$
Is this formula for the $n^{th}$ prime number useful?
Every "elementary" formula I know of is a disguised implementation of a slow algorithm for testing whether a number is prime. For actually computing primes, it's better just to directly implement a fast primality testing algorithm, of which there are many. For actually proving something about primes, experience has shown that it's better to either ask for asymptotic rather than exact information or to use more sophisticated techniques (e.g. the Riemann zeta function). A basic reason formulas like the one you give are not useful for proving anything is that they involve the cancellation of many terms, and there's no way to extract reliable asymptotic information without knowing much more about how the terms cancel.
Evaluating an integral by appropriate substitution
When you are facing a radical which has a linear sum in it, it works well to use that as the basis of the substitution. Here, you would take $ \ u = x + 1 \ $ , which will give you $ \ du = dx \ $ . To deal with the numerator, you need to solve your substitution equation for $ \ x \ $ , giving $ \ x = u - 1 \ $ . The integral becomes $$ \int \ \frac{1+ x^2}{\sqrt{1 + x }} \ dx \ \rightarrow \ \int \frac{1 + (u - 1)^2}{\sqrt{u}} \ du \ . $$ You would then multiply out the polynomial in the numerator. The point in doing this is that you now have a polynomial divided simply by the square root of the variable, which will leave you with a set of terms in the integrand which are all just fractional powers of $ \ u \ $ , something which is much easier to integrate.
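For completeness, carrying out that expansion and integrating term by term gives $$ \int \frac{1 + (u-1)^2}{\sqrt{u}} \ du \ = \ \int \left( u^{3/2} - 2u^{1/2} + 2u^{-1/2} \right) du \ = \ \tfrac{2}{5}u^{5/2} - \tfrac{4}{3}u^{3/2} + 4u^{1/2} + C \ , $$ after which you substitute back $u = x + 1$.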
What's the definition of F[x] and F[[x]], where F is a field?
$F[x]$ represents the ring of polynomials over the field F. Formally, this ring can be defined as the set of functions with finite support (taking only finitely many nonzero values) from the natural numbers into the field. The operations are defined as follows: $$ (f+g)(i):= f(i) + g(i) \text{ }\forall f,g \in F[x] \text{ and } i \in \mathbb{N} \\ (fg)(i):= \sum_{j=0}^{i}f(j)g(i-j) \text{ }\forall f,g \in F[x] \text{ and } i \in \mathbb{N} $$ The product defined above is often called Cauchy product. At this point, you might think that this construction has little to do with the usual polynomials we know from highschool. However, notice first that a function $f$ from $\mathbb{N}$ can be represented as a sequence or infinite tuple as follows: $$ f=(f(0),f(1),f(2),...) $$ Now, if you define $X^{i}:=(0,0,...,1 (\text{ith-spot}),0,0,0...)$ (i.e. the function sending i to 1 and everything else to zero), you will realize that we have just defined usual polynomials. Checking this might be a good exercise. Prove, for instance, that $XX=X^{2}$. The operations defined above turn quite nicely in just the usual way in which we add an multiply polynomials. For example, the finite support restriction takes care of the fact that polynomials have only finitely many terms. The ring $F[[X]]$ is called the ring of formal power series over $F$. The definition is identical to the one described above, except for a very important detail: we do not require the functions in $F[[X]]$ to have finite support. Thus, we get a ring whose elements look the way we would expect: $$ a_{0} + a_{1}X + \dots + a_{n}X^{n} + \dots \text{ with } a_{i} \in F $$ where we have defined $X$ exactly as above.
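If it helps to see the construction concretely, here is a tiny Python sketch (the function name is just illustrative) of the Cauchy product on finitely supported coefficient lists, which is exactly ordinary polynomial multiplication; it also checks the suggested exercise that $XX=X^{2}$.

def cauchy_product(f, g):
    # f[i] and g[i] are the coefficients of X^i; finite support = finite lists
    h = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] += fi * gj
    return h

X = [0, 1]                   # the function sending 1 to 1 and everything else to 0
print(cauchy_product(X, X))  # [0, 0, 1], i.e. X^2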
What is the computational complexity of linear programming?
The best possible (I believe) is by Michael Cohen, Yin Tat Lee, and Zhao Song: Solving linear program in the current matrix multiplication time. https://arxiv.org/abs/1810.07896 (STOC 2019) Hope this helps.
Determine function's domain type
You want to solve $xy \le 3$. $x = 0$ is a solution for all $y$. If $x > 0$, then $y \le \dfrac 3x$. If $x < 0$, then $y \ge \dfrac 3x$. Below is a graph of $xy \le 3$. Clearly its complement is an open set, so it must be a closed set.
Find the integral: $\int \frac{( x-6)^2}{x^4}\mathrm{d} x$
You don't need substitution. Simply expand $(x-6)^2$ to get $x^2 -12x +36$, and separate these terms, i.e. $\frac{x^2 -12x +36}{x^4} = \frac{1}{x^2} - \frac{12}{x^3} + \frac{36}{x^4}$. This form will be familiar to you; integrate directly.
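Carrying the integration through term by term gives $$ \int \left( \frac{1}{x^2} - \frac{12}{x^3} + \frac{36}{x^4} \right) dx = -\frac{1}{x} + \frac{6}{x^2} - \frac{12}{x^3} + C . $$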
What is the explanation for the elements of this set?
To calculate $B$ systematically, I would recommend that you forget about the complete description for $f$; instead focus on one element $x$ and its image $f(x)$ at a time. One important thing to keep in mind is that $x$ is an element of the set $A = \{ a, b, c\}$, whereas $f(x)$ is a subset of $A$. So it is perfectly legitimate to ask whether $x \in f(x)$ or not. Let me do the example (a)(i) in full. Take the element $a$. How do I know whether $a \in B$? The definition says it belongs to $B$ if and only if $a \not\in f(a)$. Now, consulting the function, we find that $f(a) = \{ a \}$. So we are interested in knowing if $a \in \{ a \}$ holds or not. This statement is indeed true; hence $a \in f(a)$ is also true. Therefore, from the definition of $B$, we conclude that $a$ is not present in $B$. Now, for the element $b$, we have $f(b) = \{ a, c\}$. Now, the question is whether or not $b \in \{ a,c\} = f(b)$. This time, we have $b \not\in f(b)$. Hence $b \in B$. Finally, for the element $c$, the image $f(c)$ is $\{ a,b,c \}$. Notice that $c$ is present in $\{ a,b,c \} = f(c)$. What does this tell you about the membership of $c$ in $B$? The remaining exercises involve a similar reasoning; can you take it from here? You are also asked to note that $B$ is not in the range of $f$ in each case. Here, $f(A)$, the range of $f$, is a set containing subsets of $A$. For the above example, $$ f(A) = \{ \{a\}, \{ a,c \}, \{ a,b,c \} \}. $$ Also $B$ is just a subset of $A$ (this is actually even more evident). So the exercise asks you to check that $B$ is not an element of $f(A)$. In the above example, $B = \{ b \}$, and it is easy to verify that $\{ b \} \not\in f(A)$.
For triangle $ABC$ there are median lines $AH$ and $BG$ with $\angle CAH=\angle CBG=30^{\circ}$. Prove that $ABC$ is an equilateral triangle.
Let's connect $G$ and $H$. $GH$ is parallel to $AB$ (midsegment theorem). As Michael Rozenberg noted, $ABGH$ is a cyclic trapezoid, which is only possible when the trapezoid is isosceles. Hence, $AG=BH$ and $AC=BC$. Now from $\triangle BCG$ we see that the side opposite the $30$ degree angle ($CG$) is half of the other side ($CB$), which means $\angle CGB$ is a right angle and $\angle ACB$ is $60$ degrees. Done.
Climbing the Postnikov tower
There is an answer using obstruction theory as Vincent Boelens suggests in the comments. Specifically, we can replace $r : X \to Z$ with a cofibration. Then since $r$ is a rational equivalence, $H^{n+1}(Z,X; \pi_n Y)=0$ because $\pi_n Y$ is rational. With this fact in mind we can appeal to cor. 4.73 in Hatcher.
How many numbers are between $1$ and $9999$ in this case?
Your answer to the first question is correct. However, your answer to the second question is not. Let's see why. How many natural numbers between $1$ and $9999$ have digit sum $16$? By appending leading zeros to a number with fewer than four digits, we can express each positive integer less than $10,000$ as a four-digit string. For instance, the number $17$ is represented by $0017$. Thus, if we let $x_i$ represent the digit in the $i$th position, the number of positive integers less than $10,000$ that have digit sum $16$ is the number of solutions of the equation $$x_1 + x_2 + x_3 + x_4 = 16 \tag{1}$$ in the nonnegative integers subject to the restrictions that $x_i \leq 9$ for $1 \leq i \leq 4$. A particular solution of equation 1 corresponds to the placement of $4 - 1 = 3$ addition signs in a row of $16$ ones. For instance, $$1 1 1 + + 1 1 1 1 1 + 1 1 1 1 1 1 1 1$$ corresponds to the solution $x_1 = 3$, $x_2 = 0$, $x_3 = 5$, $x_4 = 8$. The number of solutions of equation 1 in the nonnegative integers is the number of ways we can place three addition signs in a row of $16$ ones, which is $$\binom{16 + 4 - 1}{4 - 1} = \binom{19}{3}$$ since we must choose which three of the $19$ positions required for $16$ ones and $3$ addition signs will be filled with addition signs. From these, we must subtract those cases in which one or more of the $x_i$'s exceeds $9$. At most one $x_i$ can exceed $9$ since $2 \cdot 10 = 20 > 16$. Suppose $x_1 > 9$. Then $x_1' = x_1 - 10$ is a nonnegative integer. Substituting $x_1' + 10$ for $x_1$ in equation 1 yields \begin{align*} x_1' + 10 + x_2 + x_3 + x_4 & = 16\\ x_1' + x_2 + x_3 + x_4 & = 6 \end{align*} which is an equation in the nonnegative integers with $$\binom{6 + 4 - 1}{4 - 1} = \binom{9}{3}$$ solutions. By symmetry, there are an equal number of solutions in which $x_i > 9$ for each $i$ satisfying $1 \leq i \leq 4$. Hence, the number of solutions of equation 1 in which no $x_i$ exceeds $9$ is $$\binom{19}{3} - \binom{4}{1}\binom{9}{3}$$ which is equal to the number of positive integers less than $10,000$ with digit sum $16$. What error did you make? You tried to subtract off the number of solutions in which one of the variables equals $10$. Suppose that variable is $x_4$. Then \begin{align*} x_1 + x_2 + x_3 + 10 & = 16\\ x_1 + x_2 + x_3 & = 6 \end{align*} which is an equation in the nonnegative integers with $$\binom{6 + 3 - 1}{3 - 1} = \binom{8}{2}$$ solutions. By symmetry, there are $$\binom{4}{1}\binom{8}{2}$$ solutions in which a variable equals $10$. By similar argument, there are $$\binom{4}{1}\binom{7}{2}$$ solutions of equation 1 in which a variable equals $11$, $$\binom{4}{1}\binom{6}{2}$$ solutions of equation 1 in which a variable equals $12$, $$\binom{4}{1}\binom{5}{2}$$ solutions of equation 1 in which a variable equals $13$, $$\binom{4}{1}\binom{4}{2}$$ solutions of equation 1 in which a variable equals $14$, $$\binom{4}{1}\binom{3}{2}$$ solutions of equation 1 in which a variable equals $15$, and $$\binom{4}{1}\binom{2}{2}$$ solutions of equation 1 in which a variable equals $16$. Hence, the number of positive integers less than $10,000$ with digit sum $16$ is $$\binom{19}{3} - \binom{4}{1}\left[\binom{8}{2} + \binom{7}{2} + \binom{6}{2} + \binom{5}{2} + \binom{4}{2} + \binom{3}{2} + \binom{2}{2}\right]$$
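As a quick sanity check of the final count (a throwaway sketch; the variable names are just illustrative), the formula can be confirmed by brute force:

from math import comb

brute = sum(1 for n in range(1, 10000) if sum(int(d) for d in str(n)) == 16)
formula = comb(19, 3) - 4 * comb(9, 3)
print(brute, formula)  # both equal 633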
Convergent or divergent series examples
If $\sum a_n$ converges absolutely, then the answer is affirmative. We claim that this is no longer the case for conditional convergence. Note that $$ \frac{x}{1+|x|} = x - x|x| + O(x^3)$$ near the origin. Now consider the series $$\sum_{n=1}^{\infty} a_n = \frac{2}{\sqrt{1}} - \frac{1}{\sqrt{1}} - \frac{1}{\sqrt{1}} + \frac{2}{\sqrt{2}} - \frac{1}{\sqrt{2}} - \frac{1}{\sqrt{2}} + \frac{2}{\sqrt{3}} - \frac{1}{\sqrt{3}} - \frac{1}{\sqrt{3}} + \cdots.$$ This series converges conditionally. Then we have $$ \frac{a_{3n-2}}{1+|a_{3n-2}|} + \frac{a_{3n-1}}{1+|a_{3n-1}|} + \frac{a_{3n}}{1+|a_{3n}|} = -\frac{2}{n} + O\left( \frac{1}{n^{3/2}}\right). $$ Therefore the sum $\sum \frac{a_n}{1+|a_n|}$ diverges. Slightly modifying this argument also generates a conditionally convergent series $\sum a_n$ whose corresponding sum $\sum \frac{a_n}{1+|a_n|}$ also converges, thus the answer is inconclusive.
Computing trace and norm in a number field
This just says that the characteristic polynomial of the vector space linear map $x\to \theta x$ is given by the very polynomial $\theta$ solves. This should be clear because the matrix associated to the linear map will act like $\theta$ does when plugged into polynomials, i.e. $g(M_\theta)=0$ when $g(\theta)=0$. Since $f$ is such a polynomial, is of degree $n$ and monic irreducible, the characteristic polynomial of $M_\theta$ must be $f$, whence we have $\det(n\operatorname{Id}-M_\theta)=f(n)$ for scalars $n\in\mathbb{Q}$.
angle in triangle of pre-known measure
The law of cosines says that $$c^2=a^2+b^2-2ab\cos\gamma.$$ Thus $\gamma=60^\circ$ iff $$c^2=a^2+b^2-ab$$ (or with the sides permuted for the other angles).
Finding the number of days that should be written on carton milk
(1) I'm not sure whether you were expected to find $\sigma$ from the information that $\mu = 20$ and $P(X > 22) = 1/3.$ Assuming a normal distribution, that can be done as follows: $$1/3 = P(X > 22) = P(Z > (22 - 20)/\sigma),$$ for $Z \sim Norm(0, 1).$ This implies $2/\sigma \approx 0.4307$ or $\sigma \approx 4.64.$ But using tables, there are various ways to round, and this is very close to 4.651, which I use below. You want your 'sell-by' date to be when at most 5% of the milk has spoiled. Thus $$0.05 = P(X < d) = P(Z < (d - 20)/4.651),$$ which implies $(d - 20)/4.651 = -1.645$ or (next lower integer) $d = 12$ days. As you begin to learn how to solve problems with the normal distribution, I think it is important to start with sketches of the standard normal distribution and of the normal distribution for the problem at hand. Then show the areas corresponding to the desired probabilities. Such plots are shown below. In each plot, the area to the right of the green line is .95 and the area to the right of the orange line is 1/3. Roughly speaking, the time period represented on the plot at the right is the period during which the milk spoils. You want to set the sell-by date before a large amount of spoilage occurs.
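If you want to check these numbers, here is a small sketch using SciPy (assuming the normal model above with $\mu = 20$ and $P(X > 22) = 1/3$):

from scipy.stats import norm

sigma = 2 / norm.ppf(2/3)          # P(Z > 0.4307) = 1/3, so sigma is about 4.64
d = 20 + norm.ppf(0.05) * sigma    # about 12.4, so print 12 days on the carton
print(sigma, d)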
Conditions for $A^2-B^2=(A+B)(A-B)$ to be true
A sufficient condition for $AB=BA$ is that $A$ and $B$ are simultaneously diagonalizable. With this condition, the relationship is true.
Prove one city is connected to another
This is a rather simple problem if you use the following trick. Observe that any graph can be split into disjoint connected subgraphs (its connected components). Suppose our graph is $G$. We assume there is no path between $X$ and $Y$. Thus, $G$ is not connected. Let us split $G$ into $G_1$, $G_2$,...,$G_k$, all disjoint and connected. That there is no path between $X$ and $Y$ means that $X$ and $Y$ are not in the same $G_i$. Let $n$ be such that $X\in G_n$. Besides $X$, the component $G_n$ contains some number $x$ of other cities, all different from $Y$, so they all have a degree of $10$. The degree of $X$ is $23$, so $$\sum_{v\in G_n}\deg(v)=23+10x$$ is odd, which is a contradiction, because in any graph $$\sum_{v\in G}\deg(v)=2e$$ where $e$ is the number of edges. So there must be a path between $X$ and $Y$. Note: this problem can be generalized in many ways. The only important values here are the parities of the degrees. Good luck!
System matrix of a 2nd order state space representation
The response of an autonomous system is indeed defined by the matrix exponential (the transition matrix), or by the Laplace transformed version of the differential equation, which can be obtained through $\mathcal{L}(\dot x) = sX(s)-x(0)$, where $s$ is the indeterminate of the Laplace transform: $$ x(t) = e^{At} x(0) \text{ or}\quad X(s) = (sI-A)^{-1}x(0) $$ From this, after applying the Laplace transform to the given time trajectories, we have $$ \pmatrix{\frac{2}{s} - \frac{1}{s+1}\\\frac{1}{s}+\frac{2}{s+1}} = \pmatrix{\frac{s+2}{s(s+1)}\\ \frac{3s+1}{s(s+1)}}=(sI-A)^{-1} \pmatrix{1\\3}$$ Then, $$ (sI-A)\pmatrix{\frac{s+2}{s(s+1)}\\ \frac{3s+1}{s(s+1)}} =\pmatrix{1\\3} \implies \pmatrix{\frac{1}{s+1}\\\frac{-2}{s+1}}= A\pmatrix{\frac{s+2}{s(s+1)}\\ \frac{3s+1}{s(s+1)}}$$ Let $$A = \pmatrix{a &b\\c&d}$$ then $a(s+2)+b(3s+1) = s$ and $c(s+2) + d(3s+1) = -2s$. (Note that the common denominator cancels out.) These lead to $$ \pmatrix{1 &3\\2 &1}\pmatrix{a\\b} = \pmatrix{1\\0}\ , \ \pmatrix{1 &3\\2 &1}\pmatrix{c\\d} = \pmatrix{-2\\0} $$ Solving for $a,b,c,d$ gives $$ A = \pmatrix{\frac{-1}{5} &\frac{2}{5}\\\frac{2}{5} &\frac{-4}{5}} $$
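As a quick numerical sanity check (a sketch, not part of the derivation): the given trajectories are the inverse Laplace transforms $x_1(t)=2-e^{-t}$ and $x_2(t)=1+2e^{-t}$, and the derived $A$ reproduces them through the matrix exponential.

import numpy as np
from scipy.linalg import expm

A = np.array([[-1/5, 2/5],
              [2/5, -4/5]])
x0 = np.array([1.0, 3.0])
for t in (0.0, 0.5, 1.0, 2.0):
    x = expm(A * t) @ x0
    expected = np.array([2 - np.exp(-t), 1 + 2 * np.exp(-t)])
    print(t, x, expected)  # the two vectors should agree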
For $m$ cubefree, $k^{6}|27m^{2}\Rightarrow k=1 $ or $3$
Hint: Note that both $k^6$ and $27$ are perfect cubes; if $p$ is a prime divisor of $k$ that is not $3$, try concluding that $p^3 | m$.
Surjective functions from an $n$-dimensional hypercube to $\mathbb{R}^m$ when $n > m$
Yes, this is possible, you can even find a $C^\infty$-smooth map with this property. Let $C\subset R^n$ be a closed subset. A $C^k$-smooth function $F: C\to R^m$ is a function such that for every $a\in C$ there exists a neighborhood $U$ of $a$ in $R^n$ and a $C^k$-smooth extension of $F|_{C\cap U}$ to $U$. Just to settle the terminology, I will be considering the $n$-dimensional (hyper)cube $Q$ given by the inequalities $$ |x_i|\le 1, i=1,...,n. $$ Then the $2^n$ regions $P_i, i=1,...,2^n$, of $Q$ appearing in your question are the connected components of the complement $Q-D$, where $$ D= \{(x_1,...,x_n): x_1\cdots x_n=0\}. $$ (Each $P_i$ is a unit cube in $R^n$ missing part of its boundary.) In the answer to your previous question you were given a $C^\infty$-function $f: Q\to R^m$ such that the images of the cubes $P_i\subset Q$ are pairwise disjoint. I will construct a $C^\infty$-smooth extension of $f$ to a surjective function $F: R^n\to R^m$. Consider the $n$-dimensional orthant $$ O_n= \{x\in R^n: x_i\ge 2, i=1,...,n\}. $$ Lemma. For every $m\le n$, there exists a surjective $C^\infty$-smooth map $g: O_n\to R^m$. Proof. Consider the function $\phi: R\to R$, $$ \phi(t)= t^{k+1}\sin(t). $$ This function is easily seen to be $C^\infty$-smooth and surjective. Moreover, the restriction of $\phi$ to each interval $[T,\infty)$ is also surjective. Now, define $$ g(x_1,...,x_n)= (\phi(x_1),...,\phi(x_m)). $$ This function $R^n\to R^m$ is also clearly $C^\infty$-smooth and satisfies $g(O_n)=R^m$. qed Now, for the set $C=Q\cup O_n$ we define a function $h: C\to R^m$, $$ h(x)= \begin{cases} f(x), \quad x\in Q\\ g(x), \quad x\in O_n. \end{cases} $$ By the construction, this function is surjective and $C^\infty$-smooth, and the images of the cubes $P_i$ are pairwise disjoint. The last ingredient we need is the Whitney extension theorem, which is a smooth analogue of the Tietze Extension Theorem. I will formulate only a weak version of Whitney's theorem which is much easier to prove (the full Whitney's theorem also prescribes values of partial derivatives). Theorem. Suppose that $A\subset R^n$ is a closed subset and $f: A\to R^m$ is a $C^k$-smooth function, $1\le k\le \infty$. Then $f$ admits a $C^k$-smooth extension $F: R^n\to R^m$. Remark. Whitney's theorem is usually stated only for $R$-valued functions but applying it to each component $f_i, i=1,...,m$ of the function $f: A\to R^m$, one obtains the theorem stated above. Lastly, applying Whitney's theorem to the function $h$ defined above, we obtain a surjective $C^\infty$-smooth map $F: R^n\to R^m$ extending the map $h$ and, therefore, satisfying the property that the images of the cubes $P_i$ are pairwise disjoint. qed Here is what I do not know: Question. Is there an open surjective smooth (or even continuous) map $F: R^n\to R^m$ such that the images of the cubes $P_i$ are pairwise disjoint?
Exercise about injective modules in Lang's Algebra
Since $$\text{Hom}_A(M,\text{Hom}_{\Bbb Z}(A,\Bbb R/\Bbb Z))\cong\text{Hom}_{\Bbb Z} (M\otimes_A A,\Bbb R/\Bbb Z)\cong\text{Hom}_{\Bbb Z}(M,\Bbb R/\Bbb Z),$$ what one needs is an Abelian group homomorphism $g:M\to\Bbb R/\Bbb Z$ with $g(x)\ne0$. There is a nonzero homomorphism from $\Bbb Z x$ to $\Bbb R/\Bbb Z$ and we can extend this by a Zorn's lemma argument to an Abelian group homomorphism $g:M\to \Bbb R/\Bbb Z$. If you want, you may be more explicit about how this defines a homomorphism $f:M\to\text{Hom}_{\Bbb Z}(A,\Bbb R/\Bbb Z)$. We let $f(m)$ be the map $a\mapsto g(ma)$ from $A$ to $\Bbb R/\Bbb Z$.
Ask for Definition: Coefficient of Correlation $r$ and (Pearson) correlation coefficient $p$
When an intercept is included in linear regression (so the sum of residuals is zero), they are equivalent. $$ \begin{eqnarray*} \rho(y_i,\hat y_i)&=&\frac{cov(y_i,\hat y_i)}{\sqrt{var(y_i)var(\hat y_i)}}\\&=&\frac{\sum_{i=1}^n{(y_i - \bar{y})(\hat y_i - \bar{y})}}{\sqrt{\sum_{i=1}^n{(y_i - \bar{y})^2}\sum_{i=1}^n{(\hat y_i - \bar{y})^2}}} \\&=&\frac{\sum_{i=1}^n{(y_i -\hat y_i+\hat y_i- \bar{y})(\hat y_i - \bar{y})}}{\sqrt{\sum_{i=1}^n{(y_i - \bar{y})^2}\sum_{i=1}^n{(\hat y_i - \bar{y})^2}}}\\&=&\frac{\sum_{i=1}^n{(y_i -\hat y_i)(\hat y_i - \bar{y})}+\sum_{i=1}^n{(\hat y_i- \bar{y})^2}}{\sqrt{\sum_{i=1}^n{(y_i - \bar{y})^2}\sum_{i=1}^n{(\hat y_i - \bar{y})^2}}} \end{eqnarray*} $$ $$ \begin{eqnarray*} \sum_{i=1}^n(y_i-\hat y_i)(\hat y_i-\bar y)&=&\sum_{i=1}^n(y_i-\beta_0-\beta_1x_i)(\beta_0+\beta_1x_i-\bar y)\\&=&(\beta_0-\bar y)\sum_{i=1}^n(y_i-\beta_0-\beta_1x_i)+\beta_1\sum_{i=1}^n(y_i-\beta_0-\beta_1x_i)x_i \end{eqnarray*} $$ In least squares regression, the sum of the squares of the errors is minimized. $$ SSE=\displaystyle\sum\limits_{i=1}^n \left(e_i \right)^2= \sum_{i=1}^n\left(y_i - \hat{y_i} \right)^2= \sum_{i=1}^n\left(y_i -\beta_0- \beta_1x_i\right)^2 $$ Take the partial derivative of SSE with respect to $\beta_0$ and set it to zero. $$ \frac{\partial{SSE}}{\partial{\beta_0}} = \sum_{i=1}^n 2\left(y_i - \beta_0 - \beta_1x_i\right)^1 (-1) = 0 $$ So $$ \sum_{i=1}^n \left(y_i - \beta_0 - \beta_1x_i\right)^1 (-1) = 0 $$ Take the partial derivative of SSE with respect to $\beta_1$ and set it to zero. $$ \frac{\partial{SSE}}{\partial{\beta_1}} = \sum_{i=1}^n 2\left(y_i - \beta_0 - \beta_1x_i\right)^1 (-x_i) = 0 $$ So $$ \sum_{i=1}^n \left(y_i - \beta_0 - \beta_1x_i\right)^1 x_i = 0 $$ Hence, when an intercept is included in linear regression (so the sum of residuals is zero), $$ \begin{eqnarray*} \rho(y_i,\hat y_i)&=&\frac{\sum_{i=1}^n{(y_i -\hat y_i)(\hat y_i - \bar{y})}+\sum_{i=1}^n{(\hat y_i- \bar{y})^2}}{\sqrt{\sum_{i=1}^n{(y_i - \bar{y})^2}\sum_{i=1}^n{(\hat y_i - \bar{y})^2}}}\\&=&\frac{0+\sum_{i=1}^n{(\hat y_i- \bar{y})^2}}{\sqrt{\sum_{i=1}^n{(y_i - \bar{y})^2}\sum_{i=1}^n{(\hat y_i - \bar{y})^2}}}\\&=&\sqrt{\frac{\sum_{i=1}^n{(\hat y_i- \bar{y})^2}}{\sum_{i=1}^n{(y_i- \bar{y})^2}}}\\&=&\sqrt{\frac{SSR}{SST}}\\&=&\sqrt{R^2} \end{eqnarray*} $$
Intersection of Circle and Parabola
Hint You have$$x^{2} + y^{2} + 2fx + 2gy + c = 0\qquad \text{and} \qquad y^2=4ax$$ Replace $x$ by $\frac{y^2}{4a}$ to get $$y^4+ 8a\left(2a+ f\right)\,y^2+32 a^2 g \,y+16 a^2 c=0$$ What is the sum of the roots? No case to study!
How many integers between $100$ and $500$ are divisible by both $6$ and $12$, but not by $9$?
If you write the numbers as $6n$, you will see that for your condition to hold, $n$ must be even (so that $12\mid 6n$) and at the same time $n$ must not be divisible by $6$ (an even $n$ divisible by $3$ would make $6n$ divisible by $9$). Therefore the qualifying numbers run from $6\cdot 20$ to $6\cdot 82$. Now find the number of even numbers from $20$ to $82$ and subtract the number of multiples of $6$ from it: $$32-10=22$$
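A quick brute-force check of that count (a throwaway sketch; divisibility by both $6$ and $12$ is the same as divisibility by $12$):

print(sum(1 for k in range(100, 501) if k % 12 == 0 and k % 9 != 0))  # 22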
Drawing balls one at a time without replacement.
The only sequence which consists of $5$ balls is R,Y,R,Y,Y. The probability for that sequence is $\frac26\cdot\frac45\cdot\frac14\cdot\frac33\cdot\frac22=\frac{1}{15}$. The only sequence which consists of $6$ balls is Y,R,Y,R,Y,Y. The probability for that sequence is $\frac46\cdot\frac25\cdot\frac34\cdot\frac13\cdot\frac22\cdot\frac11=\frac{1}{15}$.
Computing $[\mathbb{Q}(\sqrt{6}):\mathbb{Q}]$
We know that $\newcommand{\Q}{\mathbb{Q}}[\Q(\sqrt 6):\Q]\leq 2$ since $\sqrt 6$ is a root of $X^2-6$. But actually $6$ is square-free meaning it is not divisible twice by any prime $p$. Hence Eisenstein's Criterion shows that $X^2-6$ is irreducible over $\Q$. Simply choose $p=2$ or $p=3$ to argue this. It follows that $[\Q(\sqrt 6):\Q]=2$.
Is inverse use of mean value theorem right?
No. Consider $f(x)=x^3$ on the interval $[-1,1]$ with $x=0$.
Finding matrix exponential
You have a matrix composed of two $2\times2$ diagonal blocks. You can compute the exponential of the blocks separately. The blocks themselves are of the form $I+N$ and $-I+M$ where $N$ and $M$ are nilpotent ($N^2=0$, $M^2=0$). So: $$ e^{N+I} =e^Ne^I = (I+N)e^I, \qquad e^{M-I} = e^M e^{-I}=(I+M)e^{-I}. $$ The matrix exponential can be computed blockwise because the exponential is a sum of powers, and both sums and products can be computed blockwise. The exponential of a matrix $N$ with $N^2=0$ is $I+N$ since all higher powers: $N^2$, $N^3$... in the sum: $e^N = I + N + N^2/2 + N^3/3! + ...$ are null. Clearly $I$ commutes with every matrix, hence $\exp(N+I) = e^Ne^I$. The same is true for $-I$ which is a multiple of $I$. Specifically: $$ \exp\begin{pmatrix}1&2\\0&1\end{pmatrix} = \begin{pmatrix}1&2\\0&1\end{pmatrix}\begin{pmatrix}e&0\\0&e\end{pmatrix} =\begin{pmatrix}e&2e\\0&e\end{pmatrix} $$ while $$ \exp\begin{pmatrix}-1&0\\1&-1\end{pmatrix} = \begin{pmatrix}1&0\\1&1\end{pmatrix}\begin{pmatrix}1/e&0\\0&1/e\end{pmatrix} =\begin{pmatrix}1/e & 0\\1/e & 1/e\end{pmatrix} $$ Hence $$ e^A = \begin{pmatrix}e&2e&0&0\\ 0&e&0&0\\ 0&0&1/e&0\\ 0&0&1/e&1/e\end{pmatrix} $$
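A quick numerical check (a sketch assuming $A$ is the block-diagonal matrix built from the two $2\times 2$ blocks above):

import numpy as np
from scipy.linalg import expm

A = np.array([[1, 2, 0, 0],
              [0, 1, 0, 0],
              [0, 0, -1, 0],
              [0, 0, 1, -1]], dtype=float)
e = np.e
expected = np.array([[e, 2*e, 0, 0],
                     [0, e, 0, 0],
                     [0, 0, 1/e, 0],
                     [0, 0, 1/e, 1/e]])
print(np.allclose(expm(A), expected))  # True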
Is equivalent labelling enough to prove isomorphism between two graphs?
A graph isomorphism $G_1\cong G_2$ is by definition a bijection $\varphi\colon V(G_1)\to V(G_2)$ between the sets of vertices of $G_1$ and $G_2$ such that $(u,v)$ is an edge in $G_1$ if and only if $(\varphi(u),\varphi(v))$ is an edge in $G_2$. Your labeling of vertices achieves exactly that: you have constructed a bijection between the sets of vertices and verified that it sends edges to edges and non-edges to non-edges. Hence, you have shown $G_1\cong G_2$.
Use of prime symbol in proof writing
Statements $\exists a, \Phi(a)$ and $\exists b, \Phi(b)$ are equivalent and interchangeable (apart from technical details that the variables may not occur "hidden" in $\Phi$; for example $\exists a,a+3b=5$ is not equivalent to $\exists b,b+3b=5$). The same holds for $\forall a,\Phi(a)$ and $\forall b,\Phi(b)$. Now regarding the rules of inference used: If we know $\exists a,\Phi(a)$, we are allowed to pick and work with an object, $b$ say, that makes this true, i.e. a $b$ such that $\Phi(b)$. It may be clearer to name this specific $b$ differently from the $a$ used in the quantifier. Then again, as we might have just as well started from $\exists b,\Phi(b)$, it seems that nothing forbids us from using the same variable name. The author chose to not keep the variable name and in order to end up with the desired $j$, he started with something different, namely $j'$. Likewise at the end: If we derive $\Phi(x)$ for some expression $x$, then we may conclude $\exists a,\Phi(a)$ (with similar caveats about the variable $a$) and again the specific quantifier variable used doesn't matter. It should of course not be the same as some variable already defined in the current "scope" as one would say in programming. So $j''$ instead of $k$ would indeed be fine, but would that look nicer?
Can a bounded number sequence be strictly ascending?
Can a bounded number sequence be strictly ascending? Sure it can. Hint $0.9 \;,\; 0.99 \;,\; 0.999 \;,\; \ldots$
Alternative Formulation of Arc Length
You have essentially solved this already. $$ |\gamma(b)-\gamma(a)| = \left| \int_a^b \gamma'(t) \, dt \right| \le \int_a^b |\gamma'(t)| \, dt $$ by the triangle inequality. This is another version of the Mean Value Theorem, summarisable as: "global growth $\le$ local growth".
Denseness of the set $\{f: \int_0^1 x^\alpha f''(x) dx = \int_0^1 x^\beta f''(x) dx = 0 \}$ in $C[0,1]$
Your set seems to be indeed dense in $\mathcal C([0,1])$. Without loss of generality, assume that $\alpha\neq \beta$. We need the following fact: $\mathbf{Fact.}$ Given a compact set $K\subset\mathbb R^2$, there is a constant $C$ such that the following holds: for any $(p,q)\in K$ and $A>1$, one can find a function $\Phi\in \mathcal C^2([0,A])$ such that $\int_0^{A} \Phi''(t)t^\alpha dt=p$, $\int_0^{A}\Phi''(t)t^\beta dt=q$ and $\Vert\Phi\Vert_\infty\leq C$. Assuming this has been proved, let us show that your set (call it $\mathcal A$) is dense in $\mathcal C([0,1])$. By something like Weierstrass theorem, it is enough to approximate any $\mathcal C^2$ function; so let us fix $f\in\mathcal C^2([0,1])$ and $\varepsilon \in (0,1)$. Put $L_\alpha(f)=\int_0^1 f''(x)x^\alpha dx$ and $L_\beta(f)= \int_0^1 f''(x)x^\beta dx$. Choose $\gamma >0$ such that $\gamma(1-\alpha)>1$ and $\gamma(1-\beta)>1$. By the above fact applied with $K=\{ (p,q);\; \vert p\vert\leq \vert L_\alpha(f)\vert\;{\rm and}\; \vert q\vert\leq \vert L_\beta(f)\vert \}$, one can find a function $\Phi\in\mathcal C^2([0,{\varepsilon^{-\gamma}}])$ such that $\int_0^{{\varepsilon^{-\gamma}}} \Phi''(t)t^\alpha dt=\varepsilon^{\gamma(1-\alpha)-1}L_\alpha(f)$, $\int_0^{{\varepsilon^{-\gamma}}}\Phi''(t)t^\beta dt=\varepsilon^{\gamma(1-\beta)-1}L_\beta(f)$ and $\Vert\Phi\Vert_\infty\leq C$, where $C$ does not depend on $\varepsilon$. Now define $g$ on $[0,1]$ by $g(x)=f(x)-\varepsilon\, \Phi({\varepsilon^{-\gamma}}x)$. Then $g\in\mathcal C^2([0,1])$ and $\Vert g-f\Vert_\infty\leq C\varepsilon$. Moreover, \begin{eqnarray*}\int_0^1 g''(x)x^\alpha\, dx&=&L_\alpha(f)-\varepsilon^{1-2\gamma}\int_0^1\Phi''({\varepsilon^{-\gamma}}x)x^\alpha dx\\ &=&L_\alpha(f)- \varepsilon^{1-\gamma+\gamma\alpha}\int_0^{\varepsilon^{-\gamma}}\Phi''(t) t^\alpha\, dt\\&=&0\, , \end{eqnarray*} and likewise $\int_0^1 g''(x)x^\beta dx=0$. So $g\in\mathcal A$, and since $C$ does not depend on $\varepsilon$, this shows that $\mathcal A$ is dense in $\mathcal C([0,1])$. To prove the fact, we first note that given $p,q\in\mathbb R$, one can find a quadratic function $\psi(x)=ax^2+bx+c$ such that $\int_0^{1} \psi(x)x^\alpha dx=p$, $\int_0^{1}\psi(x)x^\beta dx=q$, $\psi(1)=0$ and $\vert a\vert+\vert b\vert+\vert c\vert\leq M (\vert p\vert+\vert q\vert)$, where $M$ is a constant depending only on $(\alpha,\beta)$. Indeed, this amounts to solving the linear system $$\left\{ \begin{matrix}\frac{1}{\alpha+3}& a&+&\frac{1}{\alpha +2}& b&+&\frac{1}{\alpha +1}& c&=&p\\ \frac{1}{\beta+3}& a&+&\frac{1}{\beta +2}& b&+&\frac{1}{\beta +1}& c&=&q\\ &a&+&&b&+&&c&=&0 \end{matrix} \right. $$ whose matrix depends only on $(\alpha,\beta)$ and turns out to be invertible (I'm skipping some row manipulations here). It follows that for any $(p,q)\in\mathbb R^2$ and any $A>1$, one can find a function $\varphi\in\mathcal C([0,A])$ such that $\int_0^A \varphi(t) t^\alpha dt=p$, $\int_0^A\varphi(t) t^\beta dt=q$, $\varphi\equiv 0$ on $[1,A]$ and $\Vert\varphi\Vert_\infty\leq M(\vert p\vert+\vert q\vert)$ for some constant $M$ which does not depend on $(p,q)$: just define $\varphi$ to be equal to the above quadratic function $\psi$ on $[0,1]$ and $\varphi\equiv 0$ on $[1,A]$. Now, let $K$ be an arbitrary compact subset of $\mathbb R^2$ and let $A>1$. For any $(p,q)\in K$, define $\Phi:[0,A]\to \mathbb R$ by $\Phi(t)=\int_1^t\int_1^s \varphi (u)\, du\, ds$, where $\varphi$ is as above.
Then $\Phi\equiv 0$ on $[1,A]$, so $\Vert\Phi\Vert_\infty\leq C$ for some constant $C$ depending only on $K$; and $\Phi$ does the job.
Is the function $x^2$ differentiable at $x=0$?
Both limits are $0.$ The function $x^2$ is always nonnegative regardless of the sign of $x,$ so the right limit must be equal to the left limit as well.
Random graphs with a hamiltonian path
For context: a result of Pósa says that a random graph with $C n \log n$ edges already contains a Hamiltonian path with probability tending to $1$, and here we have a uniformly random graph (with $\frac{n^2}{4}$ edges on average). So we can be fairly wasteful. As far as an algorithm goes, we can take the algorithm used to prove Dirac's theorem. The hypotheses of that theorem don't hold here, but the algorithm will still work with high probability. The strategy is this: Pick a path greedily: start at a vertex $v_1$, pick a vertex $v_2$ it is adjacent to, then pick a vertex $v_3$ adjacent to $v_2$, and so on. Repeat until you get stuck. Turn the path into a cycle: if it has endpoints $v_1$ and $v_\ell$, find adjacent vertices $v_i$ and $v_{i+1}$ on the path such that $v_{i+1}$ is adjacent to $v_1$ and $v_i$ is adjacent to $v_\ell$. Then the cycle is $v_1, \dots, v_i, v_\ell, \dots, v_{i+1}, v_1$. Turn the cycle into a longer path: if $v_{\ell+1}$ is a vertex not on the cycle, find a vertex $v$ on the cycle adjacent to $v_{\ell+1}$, and take the path starting next to $v$, going around the cycle, and then going to $v_{\ell+1}$. Repeat steps 2 and 3 until the path contains all the vertices. All of these steps are quite likely to work individually. The problem is that to analyze how likely they are to work as a whole, we'd have to check edges of the graph multiple times, which doesn't preserve independence. So instead, we will write our uniformly random graph $G$ as the union of graphs $G_0, G_1, \dots, G_{10 \log n}$, where $G_0$ contains each edge independently with probability $\frac14$, and $G_1, \dots, G_{10 \log n}$ contain each edge independently with probability $p$, chosen so that $\frac34(1-p)^{10 \log n} = \frac12$. Then the union will contain each edge with probability $\frac12$, so it will be the uniformly random graph. If we solve for $p$, we get $p = O(\frac{1}{\log n})$, but all we'll really need is for $p$ to be asymptotically bigger than $\frac{1}{\sqrt n}$. Then I claim that: A greedy path chosen in $G_0$ will reach length $n - 5\log n$ with very high probability. Doing step 2 of the algorithm in any of the $G_1, \dots, G_{10 \log n}$ will work with very high probability. Doing step 3 of the algorithm in any of the $G_1, \dots, G_{10 \log n}$ will work with very high probability. Then we can just look at graphs $G_0, G_1, \dots$ sequentially as we go through the algorithm. This preserves independence, and if the edges we need will be in the graph $G_i$ we're looking at, they will be in the union $G$. For the first claim: the probability that a greedy path will get stuck while there's still $5 \log n$ vertices to pick from is $(\frac34)^{5\log n} = n^{-5\log \frac43}$, so the probability is at most $n^{1 - 5\log \frac43} \approx n^{-0.43}$ that it ever gets stuck. For the second claim: we have at least $\frac n2$ options for vertices $v_i$ and $v_{i+1}$, and each one works with probability $p^2$. The probability that none of them work is $(1-p^2)^{n/2} \le e^{-p^2n/2}$, which approaches $0$ as $n\to \infty$. (This is where we want $p \gg \frac{1}{\sqrt n}$, so that the exponent goes to $-\infty$ as $n\to\infty$. We actually want a bit better than that, because we want all $O(\log n)$ of these steps to work, so the probability should go to $0$ faster than $\frac{1}{\log n}$. But our $p$ is actually quite a bit larger than $\frac{1}{\sqrt n}$, so we have that wiggle room.) 
For the third claim: we have at least $\frac n2$ options for the vertex $v$, and each works with probability $p$. The probability that none of them work is $(1-p)^{n/2} \le e^{-pn/2}$, which approaches $0$ as $n\to \infty$.
Example of Improper integral in complex analysis
The contour is good. Two things though: 1) You have to consider the integral along the angled line of the wedge contour. The angle of the contour was chosen to preserve the integrand. 2) Write $z=e^{i 2 \pi/3} x$ and get that the contour integral is $$\left(1-e^{i 2 \pi/3}\right) \int_0^{\infty} \frac{dx}{x^3+1} = i 2 \pi \frac{1}{3 e^{i 2 \pi/3}}$$ The term on the right is the residue at the pole $z=e^{i\pi/3}$ times $i 2\pi$. I used the fact that, if $f(z)=a(z)/b(z)$, then the residue of a simple pole $z_k$ of $f$ is $a(z_k)/b'(z_k)$. Note that $e^{i 2 \pi/3}-e^{i 4 \pi/3}=i \sqrt{3}$. The result follows.
Combinatorial proof of $\sum_{j=0}^{k} \binom{n}{j} = \sum_{j=0}^k \binom{n-1-j}{k-j}2^j$
Big hint $\sum_{j=0}^k\binom{n}j$ counts subsets of $\{1,2,\dots,n\}$ with at most $k$ elements. Such a subset is missing at least $n-k$ elements. Arrange the missing elements in a sorted list, so the first element in the list is the smallest missing element. For how many such subsets is the $(n-k)^{th}$ entry of the list of missing elements equal to $n-j$? Further explanation, just shy of a full solution: If a subset satisfies this condition, then the element $n-j$ is missing, and among the elements $\{1,2,\dots,n-j-1\}$, exactly $n-k-1$ are missing. The elements above $n-j$ are either included or excluded, arbitrarily. Here is an illustration for $n=5,k=3$. We represent subsets as a string of zeroes and ones. The $(n-k)^{th}$ smallest, i.e. second smallest, missing element is highlighted in red. $$ \begin{array}{c|c|c|c} \binom{5-1-0}{3-0} & \binom{5-1-1}{3-1}2 & \binom{5-1-2}{3-2}2^2 & \binom{5-1-3}{3-3}2^3 \\\hline 1110\color{red}0 & 110\color{red}00 &10\color{red}000 &0\color{red}0000 \\ 1101\color{red}0 & 110\color{red}01 &10\color{red}001&0\color{red}0001 \\ 1011\color{red}0 & 101\color{red}00 &10\color{red}010&0\color{red}0010 \\ 0111\color{red}0 & 101\color{red}01 &10\color{red}011&0\color{red}0011 \\ & 011\color{red}00 &01\color{red}000&0\color{red}0100 \\ & 011\color{red}01&01\color{red}001&0\color{red}0101 \\ & &01\color{red}010&0\color{red}0110 \\ & &01\color{red}011&0\color{red}0111 \end{array} $$
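A small brute-force check of the identity for small parameters (a throwaway sketch; the function names are just illustrative):

from math import comb

def lhs(n, k):
    return sum(comb(n, j) for j in range(k + 1))

def rhs(n, k):
    return sum(comb(n - 1 - j, k - j) * 2**j for j in range(k + 1))

print(all(lhs(n, k) == rhs(n, k) for n in range(1, 12) for k in range(n)))  # True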
Are there more solutions to $x_1x_2+x_2x_3+x_3x_1=x_1^2+x_2^2+x_3^2$ than all $x_i$ equal?
Hint: $a^2+b^2+c^2 \geq ab+bc+ca$ for all $a,b,c \in \Bbb R$. Hope you know when equality holds in AM-GM or the Rearrangement Inequality. Otherwise you can go for this: $2a^2+2b^2+2c^2=2ab+2bc+2ca$ $\implies (a^2-2ab+b^2)+(b^2-2bc+c^2)+(c^2-2ca+a^2)=0$ $\implies (a-b)^2+(b-c)^2+(c-a)^2=0$ Since the LHS is always non-negative for $a,b,c \in \Bbb R$ [why?], we get $a-b=b-c=c-a=0$ $\implies a=b=c$ or, as you would like it, $x_1=x_2=x_3.$
Truth statements regarding invertible matrices
Careful: you're assuming that the inverses of the matrices exist. In question 1, for example, you've multiplied on the right by $C^{-1}$. How do you know it exists? Instead, use the definition of inverse. Given a square matrix $M$, if $MN = I$, then we say that $N$ is the inverse of $M$, and we can call it $M^{-1}$. In question 1, you have $ABC=I$. Writing this as $(AB)C=I$, we see that $C^{-1} = AB$ by definition. This problem stands out especially in question 3, where your argument is: "Assuming $A$ is invertible, then $A$ is invertible". Do you see why this doesn't work? Much simpler: $A$ is invertible because its inverse, again by definition, is $BC$. For questions 4 and 5, it is incorrect to assume that $B^{-1}A^{-1} \neq A^{-1}B^{-1}$. It's true that they will be different in general, but your argument is insufficient. (As @openspace points out in a comment above, you might have $A=B=C=I$.) Instead, find a specific counterexample where the two sides are different.
Hausdorff Dimension of Julia set of $z^2+2$?
I am fairly certain that there is no explicit formula for the Hausdorff dimension of the Julia set of $z^2+2$. In fact, except for some highly special parameters such as $z^2$ or $z^2-2$ there is no explicit formula. The best that one can hope for is either an algorithm giving you a numerical approximation, or asymptotic bounds near those special parameters or near infinity. Note that just because there is no explicit formula, it doesn't mean that one cannot ask interesting questions about the Hausdorff dimension $\delta(c)$ of Julia sets of $z^2+c$. Here's a couple that are (to my knowledge) open, though some partial answers are known: at which $c$ is $\delta(c)$ continuous? at which $c$ does $\delta(c)$ attain a minimum? is it true that $\delta$ is decreasing between $c=-1.41...$ (Feigenbaum point) and $c=0$?
How do I find a unit vector orthogonal to a line?
Since you already have the slope you just need to understand how to turn it into a vector. The usual interpretation of a slope is, $$ Slope = rise / run,$$ this implicitly describes a vector with y-component equal to the $rise$ and x-component equal to the $run$. $$ \vec{v} = ( run, rise )$$ The vector isn't normalized yet because its magnitude is not $1$. To normalize a vector you just divide it by its own magnitude. In our case the magnitude of the vector is, $$\| \vec{v} \| = \sqrt{run^2 + rise^2},$$ and so our normalized vector is, $$ \hat{v} = \frac{\vec{v}}{\|\vec{v}\|}$$ $$ \hat{v} = \Bigg( \frac{run}{\sqrt{run^2 + rise^2}}, \frac{rise}{\sqrt{run^2 + rise^2}} \Bigg)$$
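As a tiny illustration of the normalization step (a sketch; the helper name is just illustrative):

import math

def unit_vector(run: float, rise: float) -> tuple[float, float]:
    mag = math.hypot(run, rise)   # sqrt(run**2 + rise**2)
    return (run / mag, rise / mag)

print(unit_vector(3.0, 4.0))  # (0.6, 0.8)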
How to simplify $\left(x+i\pi\right)^{1+x}+\left(x-i\pi\right)^{1+x}$ for $x>0$
That the expression $$f(x) = (x+i\pi)^{1+x} + (x-i\pi)^{1+x}, \quad x > 0$$ has zero imaginary component is immediately appreciable by noting that the arguments of $x+i\pi$ and $x-i\pi$ are equal in magnitude and opposite in sign, thus by De Moivre's theorem, the arguments of $(x+i\pi)^{1+x}$ and $(x-i\pi)^{1+x}$ are also equal in magnitude and opposite in sign. Their sum therefore has argument $0$. To find a closed form, we can let $\theta = \tan^{-1} \frac{\pi}{x}$ and $r = \sqrt{x^2+\pi^2}$, hence $$x \pm i \pi = re^{\pm i\theta},$$ and $$(x\pm i \pi)^{1+x} = r^{1+x} e^{\pm (1+x) i \theta},$$ and $$f(x) = r^{1+x} (e^{(1+x)i \theta} + e^{-(1+x)i\theta}) = 2r^{1+x} \cos\left( (1+x) \theta \right) \\ = 2(x^2 + \pi^2)^{(1+x)/2} \cos \left( (1+x) \tan^{-1} \frac{\pi}{x} \right).$$
Determining automorphism group
If $\alpha^2=2$ then $\sigma(\alpha)^2=\sigma(\alpha^2)=\sigma(2)=2$. In general, if $f(\alpha)=0$ with $f\in\mathbb Q[X]$, then also $f(\sigma\alpha)=0$ (because $\sigma f=f$). For your second question note that with $u,v\in \mathbb Q$ you have $\sigma(u\alpha+v\beta)=u\sigma(\alpha)+v\sigma(\beta)$.
Convergent sequence of monotone sequence
Obviously, the conjecture does not always hold, because the LHS does not necessarily converge. Say, $f_k(x)=e^{-kx}$. Then $\sum\limits_{k=1}^\infty f_k(x)$ is a geometric progression and surely does converge for any fixed $x>0$, just as you wanted. Now select $x_k={1\over k}$. Then $f_k(x_k)={1\over e}$, so their sum is...
Formal power series ring over a valuation ring of dimension $\geq 2$ is not integrally closed.
What Andrew says is correct. However the element / root $f$ must be in the field of fractions of $R[[X]]$, which is the field of Laurent series $K((X))$, $K$ the fraction field of $R$. To this end the form of the polynomial helps: $Y^2+aY+X$ modulo the maximal ideal $XK[[X]]$ of the discrete valuation ring $K[[X]]$ is a polynomial having the roots $0$ and $-a$ in $K$. Hence for $a\neq 0$ this polynomial is separable and Hensel's lemma assures the existence of a root $f$ in $K((X))$.
Banach Space continuous function
A related problem. 1) Continuity: note that $|x|\leq 1$ and $|t|\leq 1$, so $$ |(Tf)(x)-(Tg)(x) |\leq \dfrac{1}{3} \displaystyle\int^1_0 |tx||f(t)-g(t)|\ dt \leq \frac{1}{3}\int^1_0 |f(t)-g(t)|\ dt $$ $$ \implies \sup|(Tf)(x)-(Tg)(x) | \leq \frac{1}{3} \int^{1}_{0} \sup|f(t)-g(t)|\ dt $$ $$ \implies ||Tf-Tg||_{\infty} \leq \frac{1}{3} ||f-g||_{\infty} < \epsilon \quad \text{whenever } ||f-g||_{\infty}<\delta:=\epsilon . $$ 2) The operator is a contraction mapping, since $$ ||Tf-Tg||_{\infty} \leq \frac{1}{3} ||f-g||_{\infty}. $$ 3) Define $$ f_{n+1}(x)=(T f_n)(x) = \dfrac{1}{3} \displaystyle\int^1_0tx f_n(t)\ dt + e^x - \dfrac{\pi}{3} \longrightarrow (*) $$ $$ \implies f_{1}(x)=(T f_0)(x) = \dfrac{1}{3} \displaystyle\int^1_0tx f_0(t)\ dt + e^x - \dfrac{\pi}{3} $$ $$ \implies f_{1}(x) = \dfrac{1}{3} \displaystyle\int^1_0tx \ dt + e^x - \dfrac{\pi}{3}. $$ To find $f_2$, substitute $f_1$ in $(*)$ and carry on the calculations. This technique is known as the Picard iteration. A related problem. Added: Here is $f_2(x)$ $$ f_{2}(x)=(T f_1)(x) = \dfrac{1}{3} \displaystyle\int^1_0tx f_1(t)\ dt + e^x - \dfrac{\pi}{3}. $$
Convergence of a sequence given by $x_{n+1}=\frac 23(x_n+1)$
If $L$ denotes the limit value, we should have $L = \frac23(L+1)$ or $L=2$. Then writing $y_n = 2- x_n$, we want to show that $y_n \to 0$ and our recursion becomes $y_{n+1} = \frac23 y_n$, with starting value $y_0 = 2$.
Order of quantifiers and reversing variables
For the 3rd and 4th question, here is an informal way to understand these: notice that an existential can be seen as a kind of disjunction, that is, if $a,b,c,...$ denote the objects in your domain, then you can think of an existential like this: $\exists x \: \varphi(x) \approx \varphi(a) \lor \varphi(b) \lor \varphi(c) \lor ...$ I use $\approx$ since this is technically not a logical equivalence, but if you really want to prove the above equivalence, you'd need to go into formal semantics, and that might be a bit too much to ask of a beginner in logic. But, what you would be doing there does follow this basic idea, so let's just leave it more informal. So, with this 'equivalence', you can show (or at least informally understand) an equivalence like $\exists x \exists y \ P(x,y) \Leftrightarrow \exists y \exists x \ P(x,y)$ as follows: $\exists x \exists y \ P(x,y) \approx$ $\exists y \ P(a,y) \lor \exists y \ P(b,y) \lor \exists y \ P(c,y) \lor ... \approx$ $(P(a,a) \lor P(a,b) \lor P(a,c) ...) \lor (P(b,a) \lor P(b,b)\lor ...) \lor (P(c,a) \lor P(c,b) \lor ...) \lor ... \Leftrightarrow$ $P(a,a) \lor P(a,b) \lor P(a,c) ... \lor P(b,a) \lor P(b,b) \lor ... \lor P(c,a) \lor P(c,b) \lor ... \Leftrightarrow$ $P(a,a) \lor P(b,a) \lor P(c,a) ... \lor P(a,b) \lor P(b,b) \lor P(c,b) ... \lor P(a,c) \lor P(b,c) ... \lor ... \approx$ $\exists x P(x,a) \lor \exists x P(x,b) \lor \exists x P(x,c) \lor ... \approx$ $\exists y \exists x \ P(x,y)$ So, you see that we can swap two existentials if they are next to each other basically because the $\lor$ is associative and commutative. Likewise, by thinking of a universal like this: $\forall x \: \varphi(x) \approx \varphi(a) \land \varphi(b) \land \varphi(c) \land ...$ you can understand why two universals that are next to each other can be swapped, since the $\land$ is both associative and commutative. As a general equivalence principle: Swapping Quantifiers of Same Type $\forall x \forall y \ P(x,y) \Leftrightarrow \forall y \forall x \ P(x,y)$ $\exists x \exists y \ P(x,y) \Leftrightarrow \exists y \exists x \ P(x,y)$ Now, I note that in 3) you asked: $(\exists x\exists y \space P(x,y)) \rightarrow (\exists y\exists x \space P(y,x))$ So here you not only swap the quantifiers, but you also swap the role of the variables in the formula. Well, that always works, because variables are just dummy place-holders, so of course you always have: Swapping Bound Variables $\forall x \ \varphi(x) \Leftrightarrow \forall y \ \varphi(y)$ $\exists x \ \varphi(x) \Leftrightarrow \exists y \ \varphi(y)$ And in particular you therefore have: $\exists x \exists y \ P(x,y) \Leftrightarrow$ $\exists z \exists y \ P(z,y) \Leftrightarrow$ $\exists z \exists x \ P(z,x) \Leftrightarrow$ $\exists y \exists x \ P(y,x)$ In fact, this also works with mixed quantifiers, e.g.: $\forall x \exists y \ P(x,y) \Leftrightarrow$ $\forall z \exists y \ P(z,y) \Leftrightarrow$ $\forall z \exists x \ P(z,x) \Leftrightarrow$ $\forall y \exists x \ P(y,x)$ Finally, swapping variables in a formula typically results in a statement that is not equivalent, but there are cases where it does remain equivalent: Swapping Role of Variables $\forall x \forall y \ P(x,y) \Leftrightarrow \forall x \forall y \ P(y,x) $ $\exists x \exists y \ P(x,y) \Leftrightarrow \exists x \exists y \ P(y,x) $ And these we can derive from the earlier principles, e.g.: $\forall x \forall y \ P(x,y) \Leftrightarrow \text{ (Swapping Quantifiers)}$ $\forall y \forall x \ P(x,y) \Leftrightarrow \text{ (Swapping Bound Variables)}$ $\forall x \forall y \ P(y,x)$
How does a sum of products equal the product of sums here?
It is possible. In the first step, the example shows $$\sum_{l,k=0}^n \frac{k^2x^ky^l}{l!}=\sum_{l=0}^n \left(\frac{y^l}{l!}\sum_{k=0}^n k^2x^k\right)=\left(\sum_{l=0}^n \frac{y^l}{l!}\right)\left(\sum_{k=0}^n k^2x^k\right)$$ (Think about this: on the LHS each combination of $(l,k)$ is considered, and on the RHS each combination of $(l,k)$ is also included too.) Remember, $l,k$ are independent. In the second step, the limit can be separated since $\lim_{a\to a_0,b\to b_0} a\cdot b=\lim_{a\to a_0} a\cdot\lim_{b\to b_0} b$
A question on integration by parts
Assuming $w,v$ are in the necessary spaces you are correct about the first line. Remember that the general idea for IBP is $$ \int_S u v' \, \mathrm{d} \mu = \int_{\partial S} uv \, \mathrm{d} \mu - \int_S u'v \, \mathrm{d} \mu $$ and that the first integral vanishes as long as either $v$ or $u$ has compact support (i.e. the functions are in the correct space). So now consider $$ \begin{eqnarray} \int_0^\infty \int_{-\infty}^\infty w_t v_x \, \mathrm{d} x \, \mathrm{d} t & = & \int_{-\infty}^\infty \left. w v_x \right|_{t=0}^{t\to\infty} \,\mathrm{d} x - \int_0^{\infty} \int_{-\infty}^\infty w v_{xt} \, \mathrm{d} x \, \mathrm{d} t \\ & = & - \int_{-\infty}^\infty \left. w v_x \right|_{t=0} \, \mathrm{d}x - \int_0^{\infty} \int_{-\infty}^\infty w v_{xt} \, \mathrm{d} x \, \mathrm{d} t \end{eqnarray} $$ Here we are integrating by parts in the $t$, and the second equality comes from an assumption that $w$ has compact support in $t$. Now let's do this again but this time in the $x$ integral: $$ - \int_{-\infty}^\infty \left. w v_x \right|_{t=0} \, \mathrm{d}x - \int_0^{\infty} \int_{-\infty}^\infty w v_{xt} \, \mathrm{d} x \, \mathrm{d} t \\ = - \left. \left. wv \right|_{t=0} \right|_{x\to -\infty}^{x \to \infty} + \int_{-\infty}^{\infty} \left. w_x v \right|_{t=0} \, \mathrm{d} x - \int_0^{\infty} \left. w v_t \right|_{x\to-\infty}^{x\to\infty} \, \mathrm{d}t + \int_0^{\infty} \int_{-\infty}^\infty w_x v_t \, \mathrm{d} x \, \mathrm{d} t \\ = \int_{-\infty}^\infty \left. w_x v \right|_{t=0} \, \mathrm{d} x + \int_0^{\infty} \int_{-\infty}^\infty w_x v_t \, \mathrm{d} x \, \mathrm{d} t $$ where the last equality follows from the assumption that $w$ has compact support in $x$.
How can I solve this form of optimization problem?
The problem is convex so basically any nonlinear solver should solve this without issues. If you want to have something closer to linear programming, you can use the fact that $x^{-1}$ is second-order cone representable and thus use a second-order cone programming solver. Here tested in the MATLAB toolbox YALMIP (requires an SOCP solver such as Mosek, Gurobi, SeDuMi, ECOS etc for the reformulated problem):
n = 10;            % problem dimension (example value)
b = rand(n,1);
a = rand(n,1);
c = rand(1);
x = sdpvar(n,1);
% Solve as general nonlinear program
obj = sum(a.*b./(x+a));
optimize([x>=0, sum(x)==c],obj)
% Model inverse using socp cone
obj = sum(a.*b.*cpower(x+a,-1));
optimize([x>=0, sum(x)==c],obj)
Prove from definition of convergence that (-2n+5)/(3n+1) is convergent.
Let $\epsilon>0$. We have $$\left|\frac{-2n+5}{3n+1}+\frac23\right|=\frac{17}{9n+3}<\epsilon\iff n>\frac19\left(\frac{17}{\epsilon}-3\right)=:\alpha$$ so for $n_0=\max(0,\lfloor\alpha\rfloor+1)$ we have for $n\ge n_0$ $$\left|\frac{-2n+5}{3n+1}+\frac23\right|<\epsilon$$ hence we proved by definition that the limit is $-\frac23$.
$\mathbb C\cup\{\infty\}$ is compact, a "direct proof".
Well, you can note/show that the complement of an open neighborhood of $\infty$ is a compact subset of $\Bbb C$. Thus, given any open cover $\mathcal U$ of the Riemann sphere, one of the elements of $\mathcal U$ will have a compact complement, which can then be covered by finitely-many other elements of $\mathcal U$.
Prove that a sequence of measures is uniformly controled by a finite measure
Let $E_1\Delta E_2$ denote the symmetric difference of two sets. Define the following equivalence relation: $$E_1\sim E_2\quad\Longleftrightarrow \quad \mu(E_1\Delta E_2)=0$$ Define $$d(E_1,E_2)=\mu(E_1\Delta E_2)$$ then under this equivalence relation $(\mathcal{F},d)$ is a complete metric space. As each $\nu_n$ is finite and absolutely continuous with respect to $\mu$, $\forall\epsilon>0$, $\exists\delta_n>0$ s.t. $$\nu_n(E)<\epsilon\text{ whenever }\mu(E)<\delta_n$$ So if $\mu(E_1\Delta E_2)<\delta_n$, we have $$|\nu_n(E_1)-\nu_n(E_2)|\leq \nu_n(E_1\Delta E_2)<\epsilon$$ This means each $\nu_n$ is a continuous function on $(\mathcal{F},d)$. On the other hand, as $\lim_{n\to\infty}\nu_n(E)=\nu(E)$ for all $E\in\mathcal{F}$, then $\forall \epsilon>0, E\in\mathcal{F}$, $\exists N$ s.t. $$|\nu_n(E)-\nu_m(E)|<\epsilon,\quad\forall n,m>N$$ Set $$\begin{aligned}F_N(\epsilon)&=\{E\in\mathcal{F}: |\nu_n(E)-\nu_m(E)|\leq\epsilon,\forall n,m\geq N\}\\ &=\bigcap_{n=N}^\infty\bigcap_{m=N}^\infty\{E\in\mathcal{F}: |\nu_n(E)-\nu_m(E)|\leq\epsilon\} \end{aligned}$$ It is easy to see that $F_N(\epsilon)$ is closed for all $N$ and $\mathcal{F}=\bigcup_{N\in\mathbb{N}}F_N(\epsilon)$. Thus by Baire's category theorem, there is $N_0$ s.t. $F_{N_0}(\epsilon)$ contains an interior point, i.e. $\exists E_0\in F_{N_0}(\epsilon)$ and $\delta>0$ s.t. $E\in F_{N_0}(\epsilon)$ whenever $\mu(E\Delta E_0)<\delta$. Now choose $\delta>0$ sufficiently small s.t. $$\mu(A)<\delta\quad\Rightarrow\quad\nu_n(A)<\epsilon,\forall n=1,\dots,N_0$$ Assume $A\in\mathcal{F}$ with $\mu(A)<\delta$, then set $$E_1=E_0\setminus A,\quad E_2=E_0\cup A=E_1\cup A$$ then $\mu(E_0\Delta E_1)<\delta,\mu(E_0\Delta E_2)<\delta$, so $E_1,E_2\in F_{N_0}(\epsilon)$, so for all $n\geq N_0$: $$\begin{aligned}\nu_n(A)&\leq |\nu_{N_0}(A)|+|\nu_n(A)-\nu_{N_0}(A)|\leq\epsilon+|\nu_n(A)-\nu_{N_0}(A)|\\ &=\epsilon+|\nu_n(A)+\nu_n(E_1)-\nu_n(E_1)-\nu_{N_0}(A)-\nu_{N_0}(E_1)+\nu_{N_0}(E_1)|\\ &\leq\epsilon+|\nu_n(A\cup E_1)-\nu_{N_0}(A\cup E_1)|+|\nu_n(E_1)-\nu_{N_0}(E_1)|\\ &=\epsilon+|\nu_n(E_2)-\nu_{N_0}(E_2)|+|\nu_n(E_1)-\nu_{N_0}(E_1)|\\ &\leq 3\epsilon \end{aligned}$$
How can I find the PDF of this function of normal variables? Or what is the distribution of distances between two random points on a unit sphere?
The first thing to do is to simplify the problem by using spherical coordinates. Because you're really only interested in points on the unit sphere, you can write $x=\cos\phi$, $y=\sin\theta\sin\phi$, and $z=\cos\theta\sin\phi$, so that $x^2+y^2+z^2$ is automatically $1$. (Note that, while the polar axis is conventionally taken to be the $z$ axis, I have taken it to be the $x$ axis, since that is the "special" one in your original definition of $D$). Then you have \begin{align*} D&=\sqrt{(\cos\phi-1)^2+\sin^2\theta\sin^2\phi+\cos^2\theta\sin^2\phi}\\ &=\sqrt{\cos^2\phi-2\cos\phi+1+\sin^2\phi}\\ &=\sqrt{2-2\cos\phi}=2\sqrt{\frac{1-\cos\phi}{2}}. \end{align*} We can use a half-angle formula to simplify this even further, finding that \begin{equation*} D=2\sin\frac{\phi}{2}. \end{equation*} To find the probability density function for $D$, start by finding the CDF: \begin{align*} P(D\le b)=P\left(2\sin\frac{\phi}{2}\le b\right)=P\left(\frac{\phi}{2}\le \arcsin\frac{b}{2}\right)=P\left(\phi\le 2\arcsin\frac{b}{2}\right). \end{align*} Since the point $(x,y,z)$ is to be uniformly distributed on the unit sphere, the probability density function $f(\theta,\phi)$ should be proportional to the surface area of an infinitesimal patch of the sphere of width $d\theta$ and height $d\phi$: \begin{equation*} f(\theta,\phi)\ d\theta\ d\phi=\frac{1}{4\pi}\sin\phi\ d\theta\ d\phi \end{equation*} The CDF for $D$ is then given by \begin{align*} P(D\le b)&=P\left(\phi\le 2\arcsin\frac{b}{2}\right)=\int_0^{2\arcsin b/2}\int_0^{2\pi} f(\theta,\phi)\ d\theta\ d\phi\\ &=\int_0^{2\arcsin b/2}\int_0^{2\pi} \frac{1}{4\pi}\sin\phi\ d\theta\ d\phi=\frac{1}{2}\int_0^{2\arcsin b/2} \sin\phi\ d\phi\\ &=-\frac{1}{2}\cos\left(2\arcsin\frac{b}{2}\right)+\frac{1}{2}\cos(0)=\frac{1}{2}-\frac{1}{2}\cos\left(2\arcsin\frac{b}{2}\right). \end{align*} We can find the PDF for $D$ by differentiating this expression: \begin{align*} f(b) &= \frac{d}{db}\left(\frac{1}{2}-\frac{1}{2}\cos\left(2\arcsin\frac{b}{2}\right)\right) = \frac{1}{2}\sin\left(2\arcsin\frac{b}{2}\right)\left(\frac{1}{\sqrt{1-\left(\frac{b}{2}\right)^2}}\right)\\ &=\left(\frac{b}{2}\right)\left(\sqrt{1-\left(\frac{b}{2}\right)^2}\right)\left(\frac{1}{\sqrt{1-\left(\frac{b}{2}\right)^2}}\right)=\frac{b}{2}. \end{align*} This is a surprisingly simple result. In fact, it means that $D^2$ is uniformly distributed on $[0,4]$. There's another interesting solution to the same problem here: http://godplaysdice.blogspot.com/2011/12/solution-to-distance-between-random.html. This uses a result known as Archimedes' Hat-Box Theorem to deduce that $D^2$ is uniformly distributed on $[0,4]$, from which the distribution of $D$ follows.
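As a sanity check of the result above (not needed for the derivation), here is a short Monte Carlo sketch in Python: it samples points uniformly on the unit sphere, measures the distance to $(1,0,0)$, and checks that $D^2$ behaves like a uniform variable on $[0,4]$.

    import numpy as np

    rng = np.random.default_rng(0)

    # Uniform points on the unit sphere: normalize standard normal vectors.
    p = rng.standard_normal((200_000, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)

    # Distance to the fixed point (1, 0, 0), as in the setup above.
    d = np.linalg.norm(p - np.array([1.0, 0.0, 0.0]), axis=1)

    # If f(b) = b/2 on [0, 2], then D^2 is uniform on [0, 4]:
    # E[D] = 4/3, E[D^2] = 2, Var(D^2) = 4/3.
    print(d.mean(), (d**2).mean(), (d**2).var())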
Regarding a proof of Tanaka's formula
For fixed $\epsilon>0$ set $$L_t^{(\epsilon)} := \frac{1}{2} \int_0^t \frac{\epsilon}{(\epsilon+B_s^2)^{3/2}} \, ds,$$ i.e. $$f_{\epsilon}(B_t) = \sqrt{\epsilon} + \int_0^t \frac{B_s}{\sqrt{\epsilon+B_s^2}} \, dB_s+ L_{t}^{(\epsilon)}.$$ Since $f_{\epsilon}(B_t) \to |B_t|$ in $L^2$ and the stochastic integral converges in $L^2$ to $\int_0^t \text{sgn}(B_s) \, dB_s$, it follows that $$L_t^{(\epsilon)} = f_{\epsilon}(B_t)-\sqrt{\epsilon} - \int_0^t \frac{B_s}{\sqrt{\epsilon+B_s^2}} \, dB_s \xrightarrow[L^2]{\epsilon \to 0} |B_t| - \int_0^t \text{sgn}(B_s) \, dB_s =: L_t $$ for fixed $t \geq 0$. Note that $(L_t)_{t \geq 0}$ has (a modification with) continuous sample paths since the stochastic integral has continuous sample paths a.s. (the continuity of $|B_t|$ is obvious from the continuity of $B_t$). Now $$\sup_{t \leq T} |L_t^{(\epsilon)}-L_t| \leq \sqrt{\epsilon}+ \sup_{t \leq T} |f_{\epsilon}(B_t)-|B_t|| + \sup_{t \leq T} \left| \int_0^t \frac{B_s}{\sqrt{\epsilon+B_s^2}} \, dB_s - \int_0^t \text{sgn}(B_s) \, dB_s \right|.$$ The right-hand side converges to $0$ in $L^2$ as $\epsilon \to 0$, hence so does the left-hand side. Convergence in $L^2$ implies that there exists a subsequence converging almost surely, i.e. we can find $\epsilon_k \downarrow 0$ such that $$\sup_{t \leq T} |L_t^{(\epsilon_k)}-L_t| \to 0 \quad \text{a.s.}$$ Since $t \mapsto L_t^{(\epsilon)}$ is increasing, it follows that the limit is also increasing in $t$: $$L_s = \lim_{k \to \infty} L_s^{(\epsilon_k)} \leq \lim_{k \to \infty} L_t^{(\epsilon_k)} = L_t$$ for any $s \leq t \leq T$. Remark: Note that we have shown, as a by-product, that $$|B_t| = \int_0^t \text{sgn}(B_s) \, dB_s + L_t$$ is the Doob-Meyer decomposition of the submartingale $(|B_t|)_{t \geq 0}$.
How can this English sentence be translated into a logical expression? (Translating "unless")
The suggestion of $P\to (Q \wedge R)$ would say that in order to ride the roller coaster you must be at least $4$ feet tall and you must be at least $16$ years old. But I would say the meaning of the given sentence is that you need to satisfy one of the age and height conditions, not both. I think the sentence means: In order to ride the roller coaster, you must be at least $4$ feet tall, or you must be over $16$ years old. Symbolically (using your $P, Q, R$), this would be $P\to (Q\vee R)$. In contrapositive form (which would tell you what keeps you from riding the roller coaster): $(\neg Q\wedge \neg R)\to \neg P$. (If you are under 4 feet tall and younger than $16$, then you can't ride the roller coaster.)
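The equivalence of the conditional and its contrapositive can also be checked mechanically; here is a tiny Python sketch, with the letters read as in this answer ($P$: you can ride, $Q$: at least $4$ feet tall, $R$: at least $16$ years old).

    from itertools import product

    def implies(p, q):
        return (not p) or q

    for P, Q, R in product([False, True], repeat=3):
        original = implies(P, Q or R)                         # P -> (Q v R)
        contrapositive = implies((not Q) and (not R), not P)  # (~Q ^ ~R) -> ~P
        assert original == contrapositive

    print("The two forms agree on all 8 truth assignments.")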
Trying to understand polynomials proof of Vandermonde's identity.
After what you have done: for a fixed $x^l$ on the right-hand side, you can get $x^l$ on the left-hand side by taking $x^k$ from the first factor and $x^{l-k}$ from the second factor, where $k = 0,1,\dots,l$. Now equate the coefficients to get the result.
Prove an analog of Rolle's theorem for several variables
Assume that $\overline{U}$ is compact. If $f$ is constant on $\overline{U}$, then we are done. Otherwise $f$ is not identically zero, so $f$ attains a nonzero global maximum or a nonzero global minimum on $\overline{U}$ at some point $x$; since $f=0$ on $\partial U$, in fact $x\in U$. Now use the fact that the derivative is zero at an interior local minimum or maximum.
Understanding Structural Induction
The proof works because it "mimics" the definition by recursion of terms. A term is: a variable or a constant (and thus case 1 applies), or a string $f(t_1, \ldots, t_n)$, where $f$ is a function symbol and all the $t_i$s are already "produced" terms (and thus case 2 applies). You cannot have an infinite descending "chain", simply because a term is a string of finite length, exactly as a description in human language: you can parse it into finitely many words. If e.g. we have a term $t=f(t_1)$ with $t_1=f'(t_2)$, for sure $t_2$ must be a string shorter than $t$. Consider for example the f-o language of arithmetic; a well-formed term is: $x+S(0)$, i.e.: $+(x,S(0))$. The "procedure" is easy to understand if you consider the parsing-tree for a term; see: Ian Chiswell & Wilfrid Hodges, Mathematical Logic (2007), page 114.
Triangle Markov Chain question
The given matrix $P$ is diagonalizable, and we can write $$ P = \underbrace{ \begin{bmatrix} 1&1&0\\1&0&1\\1&-1&-1 \end{bmatrix}} _T \underbrace{ \begin{bmatrix} 1\\&-1/2\\&&-1/2 \end{bmatrix}} _D \underbrace{ \frac 13 \begin{bmatrix} 1&1&1\\2&-1&-1\\-1&2&-1 \end{bmatrix}} _{S=T^{-1}}\ . $$ Then $$ P^n=\underbrace{(TDS)(TDS)\dots(TDS)}_{n\text{ times}} =TD^nS\ , $$ since each adjacent product $ST$ collapses to the identity matrix. We need to isolate the $(1,1)$ entry in the product, the "swimming-swimming" entry, so we compute: $$ \begin{aligned} &[1\ 0\ 0]\cdot P^n\cdot\begin{bmatrix}1\\0\\0\end{bmatrix} \\ &\qquad= [1\ 0\ 0]T\cdot D^n\cdot S\begin{bmatrix}1\\0\\0\end{bmatrix} \\ &\qquad= [1\ 1\ 0] \cdot \begin{bmatrix} 1^n\\&(-1/2)^n\\&&(-1/2)^n \end{bmatrix} \cdot \frac 13\begin{bmatrix}1\\2\\-1\end{bmatrix} \\ &\qquad= [1\ (-1/2)^n\ 0] \cdot \frac 13\begin{bmatrix}1\\2\\-1\end{bmatrix} \\ &\qquad= \frac 13\left(1+2\left(-\frac 12\right)^n\right)\ . \end{aligned} $$
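A quick numerical check of the closed form (a Python/NumPy sketch, assuming $P$ is the "move to one of the other two vertices with probability $1/2$ each" matrix implied by the eigendecomposition above):

    import numpy as np

    P = np.array([[0.0, 0.5, 0.5],
                  [0.5, 0.0, 0.5],
                  [0.5, 0.5, 0.0]])

    for n in range(1, 9):
        closed_form = (1 + 2 * (-0.5) ** n) / 3
        assert np.isclose(np.linalg.matrix_power(P, n)[0, 0], closed_form)
        print(n, closed_form)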
Explaining the differential operator found in Physics equations.
An example: $d^3x = dx\,dy\,dz$. In general, $d^n x$ is a symbol used for the volume element under integration. For example, $\int f(x_1,x_2,x_3,x_4) d^4 x$ means you have to perform 4 integrals, not just one, over a 4-volume domain. Quite generally: $$ d^n x = d x_1\, dx_2 \cdots dx_n $$ In this sense $d^n$ is not an operator, but rather a shorthand.
How to show that a real polynomial of degree $n$ is bounded on any finite interval?
Your polynomial $P$ is a continuous function, and continuous functions are bounded on closed, bounded intervals; a polynomial is then bounded on any bounded interval, since it is bounded on the interval's closure. A direct way to see this: for every $x\in[a,b]$, $$\left|P(x)\right|=\left|\sum_{k=0}^n c_kx^k\right| \leq \sum_{k=0}^n |c_kx^k|\leq\sum_{k=0}^n|c_k|\left(\max(|a|,|b|)\right)^k$$
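A small numerical illustration of the displayed bound (a sketch; the coefficients and the interval below are made up):

    import numpy as np

    rng = np.random.default_rng(1)
    c = rng.standard_normal(6)        # coefficients c_0, ..., c_5
    a, b = -2.0, 3.0
    M = max(abs(a), abs(b))

    bound = sum(abs(ck) * M**k for k, ck in enumerate(c))
    xs = np.linspace(a, b, 10_001)
    values = sum(ck * xs**k for k, ck in enumerate(c))

    assert np.abs(values).max() <= bound
    print(np.abs(values).max(), bound)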
What is the name of this game?
According to the Games article Names for Games: Locating 2 × 2 Games by Bryan Randolph Bruns, this game is "Double Harmony". It's particularly boring since your "defect" just means "take the high payoff", and there's no reason for either player not to.
Contravariant associate to volume form
With upper indices, the expression in coordinates is $$ \epsilon^{a_1\cdots a_n} = \frac{(-1)^s}{\sqrt{\lvert g \rvert}} \mathrm{sgn}(k \mapsto a_k) $$ where $(-1)^s$ is $g/\lvert g \rvert$ (e.g. $1$ if Riemannian, $-1$ if Lorentzian) and $\mathrm{sgn}(k \mapsto a_k)$ is the sign of the permutation. One way to think of things is that the volume form $\epsilon$ (a.k.a. Levi-Civita tensor) is normalized so that $$ \frac{1}{n!} g^{a_1 b_1} \cdots g^{a_n b_n} \epsilon_{a_1 \cdots a_n} \epsilon_{b_1 \cdots b_n} = (-1)^s. $$ We can solve this equation to find that $\epsilon_{1\cdots n} = \sqrt{\lvert g \rvert}$ and $\epsilon_{a_1 \cdots a_n} = \sqrt{\lvert g \rvert} \mathrm{sgn}(k \mapsto a_k)$; the determinant $g$ makes its appearance because we're summing over permutations. And since the left-hand side of the equation can also be thought of as $$ \frac{1}{n!} \epsilon^{a_1 \cdots a_n} \epsilon_{a_1 \cdots a_n}, $$ we immediately also find $\epsilon^{a_1 \cdots a_n}$. (Wald, General Relativity, Appendix B essentially takes the view just outlined, but stops short of explicitly stating the formula for $\epsilon^{a_1 \cdots a_n}$. Carroll, Spacetime and Geometry, Section 2.8 states the formula, although with not as much proof. Note that the lecture notes which formed the basis of Carroll's book are openly available on his website.)
$H \cap K$ is a normal subgroup of $K$
Your proof starts off fine. You showed that $xax^{-1}\in H$ correctly. But it goes astray at the last sentence. You say "Therefore $xax^{-1}\in K$" but you haven't given a reason for your "Therefore". Of course it is obvious, so you could say "Obviously $xax^{-1}\in K$" and to be safe, if you are being graded, add "because both $x$ and $a$ are in $K$". At the end of that sentence you say "and $K$ is a normal subgroup." This is a misstatement, you haven't proved it, and it may not be true. What you need to say is something like: "Thus $H\cap K$ is a normal subgroup of $K$."
Prove that an uncountable set X is equivalent to X\Y where Y is a denumerable subset of X
Here's an idea for when $Y =\{y\}$ is a singleton: Let $Z \subset X$ be any denumerable subset, $Z = \{z_1, z_2, \cdots\}$, where we take $z_1 = y$. Define a map $\phi : X \to X \setminus Y$ by sending $x \mapsto x$ for all $x \in X \setminus Z$, and by sending $z_i \mapsto z_{i + 1}$. It's not hard to see that this is a bijection. How do you extend this method to the situation where $Y$ is countable? Think about how you would do this for $|Y| = 2, 3$, etc., and extrapolate to $Y$ countable.
Solve Equation and find X. Quadratic equation equated to indices power of x
Provided $5x+1>0$, you can write $$(5x+1)^2=5^{x/2-1}$$ and then $$2\ln(5x+1)=\left(\frac{x}{2}-1\right)\ln(5)$$ and then use a numerical method.
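For instance, a root-finding sketch in Python; the brackets below come from inspecting the sign of the log form and are assumptions about where the roots lie, not something derived above.

    import numpy as np
    from scipy.optimize import brentq

    # Log form of the equation; only valid where 5x + 1 > 0.
    def f(x):
        return 2 * np.log(5 * x + 1) - (x / 2 - 1) * np.log(5)

    roots = [brentq(f, -0.19, 0.0), brentq(f, 10.0, 20.0)]
    for r in roots:
        # both sides of the original equation should agree at a root
        print(r, (5 * r + 1) ** 2, 5 ** (r / 2 - 1))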
Flipping two fair coins probability function
This is posted as an answer, since I need to include a table. The modeling probability space has four elements / atoms, in notation HH, HT, TH, TT. (The atoms are the one-element sets for the one or the other outcome of a single two-coin toss.) Then: $$ \begin{array}{|r||cccc|} \hline & X & Y & Z & W\\\hline HH & 1 & 1 & 2 & 1\\ HT & 1 & 0 & 1 & 0\\ TH & 0 & 0 & 0 & 0\\ TT & 0 & 1 & 1 & 0\\\hline \end{array} $$ So we get $Z=2$ only in one case, the first one. Each row occurs with the same probability, $1/4$. The error is in the line where instead of the dot in $P(Z=2)=P(X=1\text{ and }Y=1)=P(X=1)\cdot P(Y=1)$ there is a plus. (The dot is correct since in the model $X,Y$ are independent.)
Vector Decomposition of $\operatorname{adj} (\lambda \mathbf I_n - \mathbf A)$
Let $\mathbf{B}=\lambda\mathbf{I}_n-\mathbf{A}$, $\mathbf{C}=\mathrm{adj}\ \mathbf{B}$. If $\mathrm{rank}\,\mathbf{B}=n-1$, then the solution space of the equation $\mathbf{B}x=0$ is one dimensional, i.e. it consists of the constant multiples of a single vector $x$. Since $\mathbf{B}\cdot\mathbf{C}=0$, all the column vectors of $\mathbf{C}$ are solutions of the equation. The result follows.
An unusual power value in power series
Just read off the coefficients. For (a), for instance, you have $$a_n=\begin{cases} \frac{k^k}{k!},&\text{if }n=3k\\\\ 0,&\text{otherwise}\;. \end{cases}$$
How to prepare this function for integration
Hint: You do not need to have $1+x^2$ in the top. Just notice that the derivative of the denominator is $2x$, which is very similar to the numerator. So try to put the integrand into something looking like $\frac {u'(x)}{u(x)}$.
What is the integral of $1/(1+x)$
Although the derivative of $\arctan(x)$ is $\dfrac{1}{1+x^2}$, the derivative of $\arctan(\sqrt{x})$ is not $\dfrac{1}{1+x}$. Using the chain rule we find that it is $\dfrac{1}{2\sqrt{x}(1+x)}$. The derivative of $\ln(1+x)$ is $\dfrac{1}{1+x}$, so this is the correct antiderivative.
Function $f$, where $f(x)=f'(x)=F(x)$
$f(x)=f'(x)$ implies $\frac d {dx} (e^{-x}f(x))=e^{-x}f'(x)-e^{-x}f(x)=0$ for all $x$ which implies that $e^{-x}f(x)=c$ for some constant $c$. Hence $f(x)=ce^{x}$.
Proof that 1/x + 1/y is distinct for distinct unordered pairs of (x,y), xy = k.
To elaborate on the discussion in the comments: suppose that $x+y=S$ and $xy=k$ are given. We claim that this data specifies the pair $(x,y)$ up to order. Indeed, declaring $x$ to be the larger of the two, we easily see that $$2x=\sqrt {S^2-4k}+S.$$ Just to emphasize, if $(x',y')$ were another pair with $x'≥y',x'+y'=S,x'y'=k$ then the same algebra would show that $$2x'=\sqrt{S^2-4k}+S=2x\implies x'=x$$
Let $A$, $B$ be normal subgroups and $A\cap B=\{e\}$. Prove that $ab=ba$ for all $a\in A$, $b\in B$
Consider $[a,b] = aba^{-1}b^{-1}$. We know that $aba^{-1} \in B$, so $[a,b] = (aba^{-1})b^{-1} \in B$. Also, $ba^{-1}b^{-1} \in A$, so $[a,b] = a(ba^{-1}b^{-1}) \in A$. This means that $[a,b] \in A$ and $[a,b] \in B$, so $[a,b] \in A \cap B = \{e\}$, so $[a,b] = e$, and $aba^{-1}b^{-1} = e$, and $ab=ba$.
What's the relationship between δ and d/dt?
In the context of calculus of variations, $\delta$ can be interpreted as a directional derivative, taken in an arbitrary direction (which is itself a function), of a function whose argument is typically again a function; this is why calculus of variations texts refer to it as a functional. The heuristic your book is using is this: If you look at the variation of $F(f)$ in the direction of $g$, you are taking $$\delta_g F(f) = \frac{d}{dt}\Big|_{t=0}F(f+tg),$$ quite in analogy with directional derivatives in calculus, where you have $$D_v f(p) = \frac d{dt}\Big|_{t=0} f(p+tv).$$ So the usual sum and product rules (and so forth) hold nicely. This $t$ has nothing to do with the $t$ appearing in your time integrals.
How to cover a disk with radius $1.01$ with three unit disks?
If you offset the three unit disk centers by $d$ from the origin in symmetric directions ($120°$ apart), each pair of circles intersects at a distance $r$ from the origin such that $$\left(r-\frac d2\right)^2+\frac{3d^2}4=1.$$ The relevant root is $$r=\frac{\sqrt{4-3d^2}+d}2$$ and it achieves a maximum when $$d=\frac1{\sqrt3},$$ corresponding to $$r=\frac2{\sqrt3}>1.01$$ The minimum decentering is obtained with $$\left(1.01-\frac d2\right)^2+\frac{3d^2}4=1,$$ or $$d\approx0.02031$$
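Both computations are easy to confirm numerically; here is a Python sketch.

    import numpy as np

    def r(d):
        # common intersection radius as a function of the offset d
        return (np.sqrt(4 - 3 * d**2) + d) / 2

    ds = np.linspace(0, 1, 100_001)
    i = np.argmax(r(ds))
    print(ds[i], r(ds)[i], 1 / np.sqrt(3), 2 / np.sqrt(3))   # max at d = 1/sqrt(3)

    # Smallest offset reaching r = 1.01: (1.01 - d/2)^2 + 3d^2/4 = 1,
    # i.e. d^2 - 1.01*d + 0.0201 = 0.
    print(np.roots([1.0, -1.01, 0.0201]))                    # smaller root ~ 0.02031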
Probability recursion
Let $S$ denote the first time when the motive TTH is completed. To compute the distribution of $S$, one considers the Markov chain on the state space $\{0,1,2,3\}$ whose state at time $n$ is the length of the maximal initial subword of TTH which is at the end of the letters produced at time $n$. For example, if the initial letters are HTHHTTTHT, the first states of the Markov chain are $0010012231$. Note that $S$ is the first hitting time $\theta$ of state $3$ by this Markov chain hence one can compute $u_i=E_i(s^\theta)$ where the subscript $i$ means that one starts at state $i$. The usual one-step Markov recursion yields $$ u_0=\tfrac12s(u_0+u_1),\quad u_1=\tfrac12s(u_0+u_2),\quad u_2=\tfrac12s(1+u_2), $$ hence $$ u_0=\frac{s^3}{(2-s)(4-2s-s^2)}. $$ Now, $P[S=n]$ is the coefficient of $s^n$ in the power series $u_0$ hence one decomposes $u_0$ as $$ u_0=1-\frac{2}{2-s}+\frac1{\sqrt5}\frac{a}{a-s}-\frac1{\sqrt5}\frac{b}{b+s}, $$ with $a=\sqrt5-1$ and $b=\sqrt5+1$. Thus, for every $n\geqslant3$, $$ P[S=n]=[s^n]u_0=-\frac1{2^n}+\frac1{\sqrt5}\frac1{a^n}-\frac1{\sqrt5}(-1)^n\frac1{b^n}\sim\frac1{\sqrt5}\frac1{a^n}. $$
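The closed form can be cross-checked by propagating the state distribution of the chain directly; here is a short Python sketch (the state is the length of the longest suffix of the tosses that is a prefix of TTH, as above).

    import numpy as np

    def p_exact(n_max):
        # P[S = n] for n = 0..n_max by stepping the (substochastic) chain on {0, 1, 2}
        probs = np.zeros(n_max + 1)
        state = np.array([1.0, 0.0, 0.0])            # start in state 0
        for n in range(1, n_max + 1):
            probs[n] = 0.5 * state[2]                # H after TT completes the motive
            state = np.array([
                0.5 * state[0] + 0.5 * state[1],     # H from state 0 or 1 -> state 0
                0.5 * state[0],                      # T from state 0 -> state 1
                0.5 * state[1] + 0.5 * state[2],     # T from state 1 or 2 -> state 2
            ])
        return probs

    a, b = np.sqrt(5) - 1, np.sqrt(5) + 1

    def closed(n):
        return -0.5**n + a**(-n) / np.sqrt(5) - (-1)**n * b**(-n) / np.sqrt(5)

    probs = p_exact(20)
    assert all(np.isclose(probs[n], closed(n)) for n in range(3, 21))
    print(probs[3:8])   # 1/8, 1/8, 1/8, 7/64, 3/32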
Is it possible to perform gradient descent on a complex valued cost function?
Costs are normally assumed ordered, and there can be no (consistent) order between complex values. Quick: Which is larger, $2 + 3 i$ or $-3 + 2 i$? You could compare them by comparing absolute values, but $\lvert 2 + 3 i \rvert = \lvert -3 + 2 i \rvert$. And you'd be back to real costs that way.
Chances of someone being of a certain gender at websites
General case without independence suppositions: $$A=\text{visit to website 1},$$ $$B=\text{visit to website 2},$$ $$F=\text{visitor is female},$$ $$0.8P(A)=P(A)P(F|A)=P(F\cap A)=P(A\setminus B)P(F|A\setminus B)+P(A\cap B)P(F|A\cap B),$$ $$0.8P(B)=P(B)P(F|B)=P(F\cap B)=P(B\setminus A)P(F|B\setminus A)+P(A\cap B)P(F|A\cap B).$$ Now, $P(A)$, $P(B)$, $P(A\cap B)$ are free parameters (with $P(A)+P(B)\ge 1$, $P(A\cap B)>0$,...), $P(A\setminus B)=P(A)-P(A\cap B)$, $P(B\setminus A)=P(B)-P(A\cap B)$ and we have a system of two equations with three unknowns: $P(F|A\setminus B)$, $P(F|B\setminus A)$, $P(F|A\cap B)$, i.e., we have a linear relationship between the three unknowns. If $P(A|F)$... are known we can use Bayes (maybe in a future edit). EDIT: an illustrative diagram: $P(A)$, $P(B)$ are areas and also lengths (why?); $P(F|A\setminus B)$, $P(F|B\setminus A)$, $P(F|A\cap B)$ are quotients of areas and also lengths (why?)
Compute the length of a module
Let $(R,m,k)$ be a local ring, and $M$ a finitely generated $R$-module such that $mM=0$. Then $l(M)=\dim_kM$. In fact, $M$ is an $R/m=k$-vector space and its submodules (as $R$-module) are the same as its subspaces (as a $k$-vector space). Since it is finitely generated it follows that $\dim_kM<\infty$, so $M$ is a module of finite length, and $l(M)=\dim_kM$ (since the composition factors have all dimension one). Furthermore, $\dim_kM$ equals the minimal number of generators of $M$. Can you find this in your case?
What is the shortest possible distance from one point to multiple points?
This is a relatively simple, but somewhat involved exercise in basic Calculus (if we ignore road distance and deal only with straight-line distance at least), what you need to do is the following: Get the coordinates for each track, this forms a set of points: $T=\{(x_1,y_1),...,(x_n,y_n)\}$ Then setup the equations giving the (euclidean) distances, $d_i$ from an arbitrary point, $(x,y)$ to $(x_i,y_i)$. The sum: $s(x,y)=\sum_{i=1}^nd_i$ is now an equation of two variables, $x$ and $y$ which are the coordinates of an arbitrary point, it outputs the sum of the distances to each track from that point. The goal you seek now is to minimize that sum. This is done using a version of the first derivative test for functions of several variables. The steps are the following: Compute the first partial derivatives: $\partial/\partial x$, and $\partial/\partial y$. Set them each equal to zero, and solve (note that you may get one variable as a function of the other in this case). Find the intersection set of these two solutions (i.e. those points $(x,y)$ for which both partial derivatives are $0$. These points are the critical points of the sum, $s$. All that remains is to compute the values of $s(x,y)$ for the critical points and find the lowest one (there could be several points with the same minimum value).
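If you would rather not solve the resulting system of partial derivatives by hand, a numerical minimizer does the same job. Here is a Python sketch; the track coordinates are made-up placeholders, and Nelder-Mead is used because the sum of distances is not differentiable exactly at a track location.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical track coordinates; replace with the real ones.
    tracks = np.array([[0.0, 0.0], [4.0, 1.0], [2.0, 5.0], [7.0, 3.0]])

    def total_distance(p):
        # s(x, y): sum of straight-line distances from the point p to every track
        return np.linalg.norm(tracks - p, axis=1).sum()

    res = minimize(total_distance, x0=tracks.mean(axis=0), method="Nelder-Mead")
    print(res.x, res.fun)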
Drawing a pair from a poker hand, unordered with replacement
I think you are asking the following question. Draw a card from the deck, record what it was, replace it in the deck. Do this a total of $5$ times. How many records are there, where order does not matter, such that the record qualifies as a $1$-pair hand? In this setup, it is possible that the "hand" will consist of $\heartsuit$ Q, $\heartsuit$ Q, and three useless cards. Should this count as a $1$-pair hand? In a real poker game, it would probably get the other players upset. But we will allow it. The kind that we have a pair of can be chosen in $\binom{13}{1}=13$ ways. The actual cards in the pair can then be chosen in $\binom{4}{2}+4$ ways. As to the rest of the cards, we need to choose $3$ kinds from the remaining $12$, and for each kind choose a card. There are $\binom{12}{3}\binom{4}{1}^3$ ways to do this. Now we have all the ingredients. It is really pretty much the same as the conventional $1$-pair problem. The only difference is that we have $\binom{4}{2}+4$ where the $1$-pair answer has $\binom{4}{2}$.
Optimizing Rectilinear Distance Traveled
If you can discretize the problem (e.g., only look for positions with integer coordinates), it is a set cover problem: consider a matrix with one column for each (integer) position in the network, one row for each possible position of each fluid source, and value 1 if the position (column) can be reached by the fluid (row), 0 otherwise; you are looking for a minimum-cost set of rows with at least one 1 in each column. The problem can be formulated as an integer linear program.
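As an illustration, here is a minimal sketch of that integer program in Python using scipy's milp (available in scipy 1.9 and later); the coverage matrix below is a made-up toy instance, not data from the question.

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    # Toy 0/1 matrix: rows = candidate source positions, columns = positions to cover.
    A = np.array([[1, 1, 0, 0],
                  [0, 1, 1, 0],
                  [0, 0, 1, 1],
                  [1, 0, 0, 1]])
    cost = np.ones(A.shape[0])            # unit cost per chosen source position

    # minimize cost.x  subject to  A^T x >= 1 (every column covered), x in {0, 1}
    res = milp(c=cost,
               constraints=LinearConstraint(A.T, lb=1),
               integrality=np.ones(A.shape[0]),
               bounds=Bounds(0, 1))
    print(res.x, res.fun)                 # e.g. rows 0 and 2 cover everything at cost 2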
Prove spectral norm $\|A\|\geq x^T A x$, $\forall x$ where $\|x\|_2=1$
If $A$ is symmetric it's diagonalizable and has an orthonormal basis of eigenvectors $v_i$ with eigenvalues $\lambda_i$. Then write $x=\sum_i (x,v_i)v_i$, so that: $$|x^TAx|=\Big|\sum_{i} (x,v_i)^2\lambda_i\Big|\leq |\lambda_M| \|x\|^2,$$ where $\lambda_M$ is the eigenvalue of largest absolute value. Can you finish from here?
if $A^2 \in M_{3}(\mathbb{R})$ is diagonalizable then so is $A$
Try $A=\begin{pmatrix} 0&0&1 \\ 0&0&0 \\ 0&0&0\end{pmatrix}$. The idea is that the null matrix is diagonalizable, but there exist matrices which satisfy $A^2=0$ and $A$ is not diagonalizable. For the second question, note that the eigenvalues of $A^2$ are distinct, and therefore the eigenvalues of $A$ are distinct, which implies that $A$ is diagonalizable. If $P_A$ denotes the minimal polynomial of $A$ then $P_{A^2}(X^2)$ is an annihilating polynomial for $A$. This means that $P_A|P_{A^2}(X^2)$. If $A^2$ is diagonalizable then $P_{A^2}$ has simple roots. If zero is not a root of $P_{A^2}$ then all roots of $P_{A^2}(X^2)$ are simple, so all roots of $P_A$ are simple and $A$ is diagonalizable. (Here we need to be sure that the roots of $P_A$ all lie in the field over which $A$ is defined; if we work over $\Bbb{C}$ then we are fine.) So if zero is not an eigenvalue of $A$ then $A^2$ diagonalizable implies $A$ diagonalizable.
For which values ​​of $z$ the inequality $|e^{z-1}|<2$ holds
Put $\;z=x+iy\;,\;\;x,y\in\Bbb R\;$, then $$|e^{z-1}|=|e^{x-1+iy}|=e^{x-1}<2\iff x-1<\log 2\iff x<\log 2 +1$$ and that's all!
Strange Method of Differentiating $x^2$
This is calculus based on differentials, which is different from the limits-based calculus that is prevalent these days. However, despite what some commenters (and contemporaries of Newton and Leibniz) feel, this is perfectly sound logically. If you'd like to see some details of how to work with infinitesimals, I recommend Keisler's Calculus, which he has kindly placed online for free.
Does $x^TAx = \frac{1}{2} x^T(A+A^T)x$ hold for all matrices $A$?
Yes, it holds for every square matrix $A$: since $x^TAx$ is a scalar, $x^TAx=(x^TAx)^T=x^TA^Tx$, hence $x^TAx=\frac12\left(x^TAx+x^TA^Tx\right)=\frac12 x^T(A+A^T)x$. Your case with $A^T=-A$ implies also that the left side of the equation is $0$, since $ x^TAx = (x^TAx)^T = x^TA^Tx = -x^TAx$.
Find the exponential generating function for the number of ways to distribute $r$ distinct objects into five different boxes
Here is a very painstaking approach that may help you to see exactly what’s going on. The possible values of $b_1$ are $0,1,2$, and $3$, so for starters we try $$1+x+\frac{x^2}2+\frac{x^3}6$$ to account for $b_1$. Similarly, the possible values of $b_2$ are $1,2,3$, and $4$, so we try $$y+\frac{y^2}2+\frac{y^3}6+\frac{y^4}{24}$$ to account for $b_2$. I’m using different indeterminates for now, because at this point I still need to keep the $b_1$ and $b_2$ contributions separate. The product of these polynomials is $$\begin{align*} &y+\frac{y^2}2+\frac{y^3}6+\frac{y^4}{24}+\\ &xy+\frac{xy^2}2+\frac{xy^3}6+\frac{xy^4}{24}+\\ &\frac{x^2y}2+\frac{x^2y^2}4+\frac{x^2y^3}{12}+\frac{x^2y^4}{48}+\\ &\frac{x^3y}6+\frac{x^3y^2}{12}+\frac{x^3y^3}{36}+\frac{x^3y^4}{144}\;; \end{align*}$$ however, we don’t want the terms in $x^ky^\ell$ with $k\ge\ell$, since they correspond to having $b_1\ge b_2$. After we throw them away, we have $$y+\frac{y^2}2+\frac{y^3}6+\frac{y^4}{24}+\frac{xy^2}2+\frac{xy^3}6+\frac{xy^4}{24}+\frac{x^2y^3}{12}+\frac{x^2y^4}{48}+\frac{x^3y^4}{144}\;.$$ Now replace $y$ by $x$, collect terms, and adjust the denominators to match the exponents to get $$\frac{x}{1!}+\frac{x^2}{2!}+\frac{4x^3}{3!}+\frac{5x^4}{4!}+\frac{15x^5}{5!}+\frac{15x^6}{6!}+\frac{35x^7}{7!}\;,$$ which is the egf for boxes $1$ and $2$ combined. Multiply this by $e^{3x}$, and you’re done. (And now that I’ve written this, I see that Markus has given you the abbreviated version of it.)
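If you want to double-check the bookkeeping, the computation is easy to reproduce with sympy; the sketch below rebuilds the two-variable product, discards the $k\ge\ell$ terms, multiplies by $e^{3x}$, and reads off the counts $r!\,[x^r]$.

    import sympy as sp

    x, y = sp.symbols("x y")

    box1 = sum(x**k / sp.factorial(k) for k in range(0, 4))   # b1 = 0, 1, 2, 3
    box2 = sum(y**k / sp.factorial(k) for k in range(1, 5))   # b2 = 1, 2, 3, 4
    prod = sp.expand(box1 * box2)

    # keep only the monomials x^k y^l with k < l, i.e. b1 < b2
    kept = sum(coeff * x**kx * y**ky
               for (kx, ky), coeff in sp.Poly(prod, x, y).terms()
               if kx < ky)

    egf = kept.subs(y, x) * sp.exp(3 * x)          # boxes 3, 4, 5 are unrestricted
    series = sp.series(egf, x, 0, 9).removeO()

    for r in range(1, 9):
        print(r, sp.factorial(r) * series.coeff(x, r))   # number of distributions of r objects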
if $B$ is a boolean algebra and $a\neq b$ in $B$ there exist an ultrafilter containing $a$ but not $b$.
As pointed out in the comments, it is not true in general that if $a \neq b$ then there is some ultrafilter containing $a$ and not containing $b$. A simple counterexample would be the case where $a < b$. In fact, this is the only obstacle. So something we can prove (using the axiom of choice) is the following: Let $B$ be a Boolean algebra, and let $a, b \in B$. Then there is an ultrafilter $F$ with $a \in F$, $b \not \in F$ if and only if $a \not \leq b$. One direction is trivial: if $F$ is an ultrafilter that contains $a$, but does not contain $b$, then we cannot have $a \leq b$ because otherwise $F$ would have to contain $b$. Now for the other direction, suppose that $a \not \leq b$. We claim that $a \wedge \neg b \neq 0$. Suppose for a contradiction that $a \wedge \neg b = 0$, then $b \vee a = (b \vee a) \wedge (b \vee \neg b) = b \vee (a \wedge \neg b) = b$. But that means that $a \leq b$, which we assumed is not the case. So indeed $a \wedge \neg b \neq 0$. Now consider the principal filter $F' = \{c \in B : c \geq a \wedge \neg b\}$. Then clearly $a \in F'$ and $\neg b \in F'$. Extend $F'$ to an ultrafilter $F$ (using the axiom of choice). So now we have $a \in F$, but we cannot have $b \in F$ because we already have $\neg b \in F$.