How to approach solving this 2x2 system?
For simplification, set $$\sqrt[3]{r^3-1}=a,\qquad\sqrt[3]{p^3-1}=b$$
Arc length of a curve - derivation
Using the triangle inequality, $$\left|\sum_{i=1}^N \left(\left|\frac{f(t_i)-f(t_{i-1})}{\Delta t}\right|\Delta t-|f'(t_i)|\Delta t\right)\right|=\left|\sum_{i=1}^N \left(\left|\frac{f(t_i)-f(t_{i-1})}{\Delta t}\right|-|f'(t_i)|\right)\Delta t\right|\leq \\ \leq \sum_{i=1}^N\left|\left|\frac{f(t_i)-f(t_{i-1})}{\Delta t}\right|-|f'(t_i)|\right|\Delta t \leq \sum_{i=1}^N \varepsilon \cdot \Delta t = \varepsilon\sum_{i=1}^N \Delta t = \varepsilon (b-a)$$ the last equality being true because $$\sum_{i=1}^N \Delta t= (t_1-t_0)+(t_2-t_1)+\dots+(t_N-t_{N-1})=t_N-t_0=b-a$$
Evaluate $\int_0^{2 \pi} \sin \theta \cos^2 \theta \,d\theta$
Here's a slick method that doesn't involve substitution (and still doesn't involve F.T.C.): The integrand is periodic with period $2 \pi$, because it is a product of periodic functions, so the value of the integral is the same over any interval of length $2 \pi$, including $[-\pi, \pi]$. But the integrand is odd (it is the product of an even function and an odd function), and the integral of any odd function over an interval symmetric about the origin is zero, and so $$\int_0^{2 \pi} \sin \theta \cos^2 \theta \, d\theta = 0.$$
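For what it's worth, a quick numerical sanity check of the result (plain Python midpoint sum; the grid size is an arbitrary choice):

```python
import math

# Midpoint Riemann sum of sin(t) * cos(t)^2 over [0, 2*pi].
N = 100_000
a, b = 0.0, 2 * math.pi
h = (b - a) / N
total = sum(math.sin(a + (k + 0.5) * h) * math.cos(a + (k + 0.5) * h) ** 2
            for k in range(N)) * h
print(total)  # very close to 0
```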
Are there studies of Sidon sequences (or sets) which have the further property of all possible sums of their members being different, not just pairs?
There are of course "sum-free sets", i.e. subsets $A$ of the integers such that $a + b = c$ has no solution in $A$. This part of mathematics is sometimes called "additive combinatorics"; there is quite an extensive literature related to it, for example see the book by T. Tao and V. Vu with that title. Of course the sequence $3^k$ has this property, and I think this is in a sense a generic example. That is, it looks likely that for any infinite "all-sums-free" set, say $\{n_1, n_2, \ldots\}$, there is a constant $c > 1$ such that $n_k > c^k$.
Expansion for Partial Fractions for $(3-2x)/(x^2+6x+9)$
You want: $$\frac{A(x+3)+B}{(x+3)^2}=\frac{3-2x}{(x+3)^2}$$ You lost the denominator on the right-hand side. It remains, then, to solve: $$A(x+3)+B = 3-2x$$
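Equating coefficients gives $A=-2$ and $B=9$; a quick numerical spot-check in plain Python (the sample points are arbitrary):

```python
# Matching coefficients in A*(x+3) + B = 3 - 2*x:
# A = -2 (coefficient of x) and 3*A + B = 3, so B = 9.
A, B = -2, 9
for x in (0.0, 1.5, -1.0, 10.0):
    lhs = (3 - 2 * x) / (x + 3) ** 2
    rhs = A / (x + 3) + B / (x + 3) ** 2
    assert abs(lhs - rhs) < 1e-12
print(A, B)  # -2 9
```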
Applying the chain rule to a line integral over gradients to get the Integrated Gradients method
Regarding the interpretation of the math in the paper, I think you have a $\partial(x'_i+\alpha\cdot(x_i-x'_i))$ term instead of a $\partial x_i$ term in the denominator of the LHS of the equation "it follows, that:". (I also wonder if there is some confusion about notation: the derivative is taken with respect to a variable [$x_i$] at a particular point [$x'_i+\alpha\cdot(x_i-x'_i)$].) Regarding the implementation in the GitHub repository, I am not seeing the extra $\alpha$ term in the implementation. [If it helps, the derivative returned by the ML library (e.g. TensorFlow) corresponds to $\partial F(x'+\alpha\cdot(x-x'))/\partial x_i$ and not to $\partial F(x'+\alpha\cdot(x-x'))/\partial(x'_i+\alpha\cdot(x_i-x'_i))$.] (If you want to do the complete derivation, first use the fundamental theorem of calculus, then the partial-derivative chain rule as in Case 1 here: http://tutorial.math.lamar.edu/Classes/CalcIII/ChainRule.aspx; the result should match up.)
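To see the distinction concretely, here is a minimal 1-D sketch of the integrated-gradients Riemann sum (plain Python with a hypothetical function $f(z)=z^2$ standing in for the network, so the gradient is known analytically); the completeness axiom $\mathrm{IG} = F(x)-F(x')$ serves as the sanity check:

```python
# 1-D integrated-gradients sketch for f(z) = z**2, whose derivative is
# f'(z) = 2*z.  The gradient is evaluated AT the interpolated point
# x' + a*(x - x'), but taken WITH RESPECT TO the input variable -- the
# distinction discussed above.
def integrated_gradient(x, baseline, steps=1000):
    total = 0.0
    for k in range(steps):
        a = (k + 0.5) / steps           # midpoint of each alpha sub-interval
        z = baseline + a * (x - baseline)
        total += 2 * z                   # f'(z) for f(z) = z**2
    return (x - baseline) * total / steps

x, baseline = 3.0, 0.0
ig = integrated_gradient(x, baseline)
# Completeness axiom: attribution equals f(x) - f(baseline) = 9 - 0 = 9.
print(ig)
```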
How many $n\times n$ diagonal real matrices are orthogonal?
Not all real diagonal matrices are orthogonal. As you said, they are indeed self-adjoint ($A^t=A$), but that's not the same as orthogonality - $AA^t=I$. If I want a diagonal matrix to be orthogonal, I need its inverse to be itself (since we have seen that diagonal matrices are self-adjoint), meaning $A=A^{-1}$. For a diagonal matrix this says every diagonal entry $d$ satisfies $d^2=1$, so $d=\pm 1$. Since each entry can be chosen independently, there are $2^n$ such matrices - $I$ and $-I$ among them.
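A brute-force check (a small Python sketch; the candidate diagonal entries are an arbitrary finite sample) confirms that among diagonal matrices only those with $\pm 1$ entries satisfy $AA^t=I$, giving $2^n$ of them:

```python
from itertools import product

# Count diagonal matrices (given by their diagonals) with D * D^T = I.
# For diagonal D this holds iff every entry d satisfies d*d = 1.
def count_orthogonal_diagonals(n, candidates=(-2.0, -1.0, 0.0, 0.5, 1.0, 3.0)):
    count = 0
    for diag in product(candidates, repeat=n):
        if all(abs(d * d - 1.0) < 1e-12 for d in diag):
            count += 1
    return count

print([count_orthogonal_diagonals(n) for n in (1, 2, 3)])  # [2, 4, 8]
```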
transferring to prenex(logic)
HINT A basic mistake you make is that you do not change variables so as to make them unique, and as a result certain variables, which were once quantified by one quantifier, are now captured by a different quantifier.

As an example of what I mean, consider: $\exists x \ P(x) \land \forall x \ Q(x)$ The meaning of this is: "something is a $P$, and everything is a $Q$". Notice, though, what happens if you don't change the variables when bringing the quantifiers to the outside: $\exists x \forall x (P(x) \land Q(x))$ In this sentence, the $x$ of $P(x)$ has been captured by the $\forall x$, while the $\exists x$ is no longer quantifying anything. Indeed, this latter statement is equivalent to simply: $\forall x (P(x) \land Q(x))$ which means that "everything is a $P$ as well as a $Q$" ... which is clearly no longer the same thing as the original sentence.

To avoid this, you need to change variables so each quantifier quantifies a unique variable. So, for example, we can change the original sentence to: $\exists x_1 \ P(x_1) \land \forall x_2 \ Q(x_2)$ And now we can bring the quantifiers to the front: $\exists x_1 \forall x_2 (P(x_1) \land Q(x_2))$ And this is still equivalent to the original, since each variable is still quantified by the original quantifier.

Also: for the first one you still need to pull all quantifiers to the front ... As an incentive, here is the final result for 1): $\exists x_1 \forall x_2 \exists x_3 \forall x_4 \exists x_5 \forall x_6 \exists x_7 \forall x_8 \exists x_9 \forall x_{10} ((p(x_1,x_2) \rightarrow R(x_3,x_4,x_5)) \land (R(x_6,x_7,x_8) \rightarrow p(x_9,x_{10})))$
Can we say that the vectors $(1,1,0)$ and $(0,1,1)$ are in linearly independent in $\mathbb{R}^3$ while linearly dependent in $\mathbb{R}^2$?
$(1,1,0)$ and $(0,1,1)$ are not vectors in $\mathbb R^2$, since they have three components, so you cannot discuss their linear independence in $\mathbb R^2$. Now, in $\mathbb R^3$, take $a\cdot (1,1,0)+b\cdot (0,1,1)=(0,0,0)\Rightarrow \begin{cases} a\cdot 1+b\cdot 0=0\\ a\cdot 1+b\cdot 1=0\\ a\cdot 0+b\cdot 1=0\end{cases}\Rightarrow a=0,\ b=0$. Therefore your vectors are linearly independent. Working with matrices, your vectors are linearly independent if the matrix constructed with their components has rank equal to the number of vectors. The rank of your matrix is $2$, therefore the vectors are linearly independent.
Percentage and Geometry Relations
Ans 1: Volume of cube $= a^3$. Volume increased by $72.8$% $= a^3 + 0.728a^3 = 1.728a^3$. Now, the side of the new cube $= \sqrt[3]{1.728a^3} = 1.2a$, so the increase in side $= 1.2a - a = 0.2a$, i.e. the side grows by $0.2$ times. Hence $0.2$ as a percentage $= 0.2 \times 100 = 20$%. Thus a $20$% increase in the side of the cube.

Ans 2: Area of square $= a^2$. Area decreased by $19$% $= a^2 - 0.19a^2 = 0.81a^2$. Now, the side of the new square $= \sqrt{0.81a^2} = 0.9a$, so the decrease in side $= a - 0.9a = 0.1a$, i.e. the side shrinks by $0.1$ times. Hence $0.1$ as a percentage $= 0.1 \times 100 = 10$%. Thus a $10$% decrease in the side of the square.
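A one-line check of the two scale factors in plain Python:

```python
# Volume up 72.8%  -> side scales by 1.728**(1/3) = 1.2  (20% increase)
# Area down 19%    -> side scales by 0.81**(1/2)  = 0.9  (10% decrease)
side_cube = 1.728 ** (1 / 3)
side_square = 0.81 ** 0.5
print(round(side_cube, 6), round(side_square, 6))  # 1.2 0.9
```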
Solve $\int_0^2\sqrt{x} \, d\sqrt{x}\overset{?}{=}$
Method 1 $$\int_0^2 \sqrt{x} d\sqrt{x}$$ $$=\int_{\color{brown}{\sqrt{x}=0}}^{\color{brown}{\sqrt{x}=2}} \sqrt{x} d\color{brown}{\sqrt{x}}$$ $$= \left[\frac{(\sqrt{x})^2}{2}\right]_0^2$$ $$=\left[\frac{(2)^2}{2}\right]-\left[\frac{(0)^2}{2}\right]=2$$ Method 2 $$\int_0^2 \sqrt{x} d\sqrt{x}$$ $$\int_{\color{brown}{\sqrt{x}=0}}^{\color{brown}{\sqrt{x}=2}} \sqrt{x} d\color{brown}{\sqrt{x}}$$ $$\sqrt{x}=t \implies \frac{d\sqrt{x}}{dt}=1 \implies d\sqrt{x}=dt$$ $$\bbox[2pt, border: 2pt green solid]{\sqrt{x}=t=2, \sqrt{x}=t=0}$$ $$=\int_{t=0}^{t=2} t dt=\frac{t^2}{2}\Big]_0^2=2$$
A continuous function also being onto
Hint: The interval $[0,1]$ is compact. $\mathbb{R}$ is not compact. What were to happen if you had a continuous and onto function from $[0,1]$ to $\mathbb{R}$? What would that imply about $\mathbb{R}$ to create a contradiction?
Let $u_m=L(v_m)-v_m$ where $L$ is the left shift operator and $v_m$ a convergent sequence of $\ell^2$ then $\left\| u_m \right\|_{\ell^1}$ is bounded?
The answer in general seems to be no. Define $v \in \ell^2$ by $$v_n = \begin{cases} \frac1{\sqrt n}, & \text{if $n$ is a perfect square} \\ 0, & \text{otherwise} \end{cases}$$ (note $v\in\ell^2$, since $\sum_{k=1}^\infty \frac1{k^2}<\infty$), and let $v_m = \left(v^{(m)}_n\right)_{n=1}^\infty$ be its truncation: $v^{(m)}_n = v_n$ for $n \le m^2$ and $v^{(m)}_n = 0$ otherwise. Then $v_m$ converges in $\ell^2$: $$\left\|v_m - v\right\|_2^2 = \sum_{k=m+1}^\infty \frac1{k^2} \xrightarrow{m\to\infty} 0$$ However, $$u_m = \left(v^{(m)}_2 - v^{(m)}_1,\; v^{(m)}_3 - v^{(m)}_2,\; \ldots,\; v^{(m)}_{m^2} - v^{(m)}_{m^2-1},\; -v^{(m)}_{m^2},\; 0, 0, \ldots \right),$$ whose nonzero entries are a $+\frac1k$ and a $-\frac1k$ next to each perfect square $k^2 \le m^2$, so $$\|u_m\|_1 = \sum_{k=1}^{m^2 - 1}\left|v^{(m)}_{k+1} - v^{(m)}_k\right| + \left|v^{(m)}_{m^2}\right| \ge \sum_{k=1}^m \frac{1}{k} \xrightarrow{m\to\infty} +\infty$$ Therefore, the set $\big\{\|u_m\|_1 : m\in\mathbb{N}\big\}$ is unbounded.
Computation of Laplace-Beltrami operator in a conformally equivalent metric
The proposed relationship $$\Delta_{\tilde{g}} = e^{-2w} \Delta_g$$ only holds for surfaces. In your final calculation, assuming $M$ is a surface, you should have $g^{ij} g_{ij} = 2$, so that the final term is \begin{align*} e^{-2w}g^{ij}\left(\delta_{ik}\frac{\partial w}{\partial x_j} + \delta_{kj}\frac{\partial w}{\partial x_i} - g^{k\ell}g_{ij}\frac{\partial w}{\partial x_{\ell}}\right)\frac{\partial}{\partial x_k} & \\ = g^{i j}\frac{\partial w}{\partial x_j}\frac{\partial }{\partial x_i} + g^{i j}\frac{\partial w}{\partial x_j}\frac{\partial }{\partial x_i} - 2g^{i j}\frac{\partial w}{\partial x_j}\frac{\partial }{\partial x_i} & = 0. \end{align*} An easier approach which avoids using Christoffel symbols or messing around with indices is to use the formula $$\Delta_g = - \frac{1}{\sqrt{|\det g|}} \frac{\partial}{\partial x_j} \left( \sqrt{|\det g|} g^{ij} \frac{\partial}{\partial x^i} \right).$$ With this formula, you can compute \begin{align*} \Delta_{\tilde{g}} & = - \frac{1}{e^{dw}\sqrt{|\det g|}} \frac{\partial}{\partial x_j} \left( e^{dw} \sqrt{|\det g|} e^{-2w} g^{ij} \frac{\partial}{\partial x^i} \right) \\ & = - \frac{e^{-dw}}{\sqrt{|\det g|}} \frac{\partial}{\partial x_j} \left( e^{(d-2)w} \sqrt{|\det g|} g^{ij} \frac{\partial}{\partial x^i} \right) \\ & = - \frac{(d-2)e^{-2w}}{\sqrt{|\det g|}} \frac{\partial w}{\partial x_j} \sqrt{|\det g|} g^{ij} \frac{\partial}{\partial x^i} - \frac{e^{-2w}}{\sqrt{|\det g|}} \frac{\partial}{\partial x_j} \left( \sqrt{|\det g|} g^{ij} \frac{\partial}{\partial x^i} \right) \\ & = e^{-2w}\Delta_g - (d-2)e^{-2w} g^{ij} \frac{\partial w}{\partial x_j} \frac{\partial}{\partial x^i}, \end{align*} so $\Delta_{\tilde{g}} = e^{-2w} \Delta_g$ only holds when $d = 2$. (You'll get the same formula when you correctly state $g^{ij}g_{ij} = d$ in your work above).
What is multivalued analytic function?
You're looking for the concept of complete analytic function. There are several ways to set up the details, but one of them is: Let $\mathscr B$ be the set of all analytic functions defined on open discs in $\mathbb C$. Define a relation $\sim$ on $\mathscr B$ such that $f\sim g$ iff the domains of $f$ and $g$ overlap, and their function values coincide on the intersection. An analytic continuation (which the above Wikipedia article calls a "global analytic function", though it doesn't have to be very "global") is any nonempty subset $\mathcal F$ of $\mathscr B$ such that every two $f,g\in\mathcal F$ can be connected with a finite chain of members of $\mathcal F$: $$ f \sim h_1 \sim h_2 \sim \cdots \sim h_n \sim g$$ A complete analytic function is an $\mathcal F$ that is maximal, in that it cannot be extended without losing the above property. Equivalently, a complete analytic function is an equivalence class under the transitive closure of $\sim$. For example, the complete analytic function corresponding to the logarithm is the set of all functions that arise as follows: (a) pick any open disc that doesn't contain the origin, and then (b) pick any branch of the logarithm on this disc, that is, let $f$ be such that $f'(z)=1/z$ everywhere and $e^{f(z)}=z$ at the center. An analytic continuation in this sense is almost an atlas of its Riemann surface, except that the charts need to map $z$ not only to $f(z)$, but to a package containing both $z$ and all of the derivatives of $f$ at $z$ (or, equivalently, the coefficients of the power series at $z$) to avoid collapsing points on the Riemann surface. It is not enough simply to construct a theory where a function can have multiple values, because the mere values of a function can coincide between different branches at a single point without anything else matching. For example, consider $f(z)=(z-1)\sqrt z$ with the usual "multi-valued" square root. It has only one possible value at $z=1$, but its two branches nevertheless behave differently there -- they have different derivatives.
Distinct roots for a continuous function with $\int^{1}_{0}{f(x)}\text{d}x=\int^{1}_{0}{xf(x)}\text{d}x=0.$
HINT: Let $c\in (0,1)$ so that $f(c)=0$. Use $$\int_0^1 (x-c) f(x) dx = 0$$ $\bf{Added:}$ In general, if $\int_{0}^{1}f(x) x^k dx = 0$ for $0\le k \le n$ then $f$ has at least $n+1$ (distinct) zeroes in $(0,1)$. Use induction. Assume true for $n-1$. So $f$ has at least $n$ zeroes $0< a_1< a_2 < \ldots < a_n < 1$. Assume, for contradiction, that $f$ has no further zeroes; then $f$ has constant sign on each of the intervals $(0, a_1)$, $(a_1, a_2)$, $\ldots$, $(a_n, 1)$. Now, there is a product of the form $\pm (x-a_{i_1})(x-a_{i_2})\cdots(x-a_{i_l})$ that has the same sign as $f$ on these intervals. Therefore, $$\pm \int_0^1 f(x)(x-a_{i_1})(x-a_{i_2})\cdots(x-a_{i_l})\,dx>0,$$ a contradiction.
I'm trying to solve an initial value problem but can't find the inverse transform for the part below
You could do a partial fraction decomposition using $$ (s^2+1)^2=(s+i)^2(s-i)^2 $$ Or use that $$ \frac{\partial}{\partial \alpha}\frac1{s^2+\alpha^2}=-\frac{2\alpha}{(s^2+\alpha^2)^2} $$ to reduce to a known problem, and then set $\alpha=1$
Multiplication of complex numbers $(3-2i)(\cos2t+i\sin2t)$
You are probably looking for the real part; that is the answer you provide. In general, let $z_1,z_2 \in \mathbb C$, $z_1=x_1+iy_1$, $z_2=x_2+iy_2$. Then $$z_1z_2=x_1x_2-y_1y_2+i(x_1y_2+y_1x_2)$$ so you can read off the real and imaginary parts from this last identity.
Prove that any polynomial in $F[x]$ can be written in a unique manner as a product of irreducible polynomials in F[x].
I am assuming the letter $F$ was chosen to indicate a field, although the question does not say so. This is just an instance of the general fact that Euclidean domains (or more generally principal ideal domains) are unique factorisation domains. The usual proof proceeds by establishing Euclid's lemma for irreducible elements, which serves to show that given two factorisations, any irreducible factor in the factorisation on the left can also be found (up to an invertible factor) in the factorisation on the right. The proof really matches that for the integers in all respects.
Inequality problem: $\sum_{i=1}^{n}\sqrt{i}\le (n+1)\sqrt{n+1}-1\quad\forall n\in\mathbb{N}$
Note that $\sqrt{i}\leq\sqrt{n+1}$ for $i=1,\dots,n$. Therefore $$\sum_{i=1}^{n} \sqrt{i}\leq n\cdot \sqrt{n+1}.$$ Since $\sqrt{n+1}\geq 1$, $$\sum_{i=1}^{n} \sqrt{i}\leq n\cdot \sqrt{n+1}+\sqrt{n+1}-1=(n+1)\cdot \sqrt{n+1}-1.$$
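For what it's worth, a brute-force verification of the inequality for small $n$ (plain Python; the range is an arbitrary choice):

```python
import math

# Check sum_{i=1}^n sqrt(i) <= (n+1)*sqrt(n+1) - 1 for n = 1, ..., 1000.
def holds(n):
    lhs = sum(math.sqrt(i) for i in range(1, n + 1))
    rhs = (n + 1) * math.sqrt(n + 1) - 1
    return lhs <= rhs

print(all(holds(n) for n in range(1, 1001)))  # True
```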
Partial Derivative of the conical surface $z=\sqrt{x^2+y^2}$
Differentiate the relation $z^2=x^2+y^2$ with respect to $x$, using the chain rule, to get $$2z \frac{\partial z}{\partial x} = 2x+0.$$ Hence $$\frac{\partial z}{\partial x}=\frac{x}{z}.$$
Power series convergence in boundary problem
Given $\epsilon>0$, $$\left|\sum_{k=n}^{m}a_k\right|=\left|\sum_{k=n}^{m} a_k-\sum_{k=n}^{m} a_kx^k+\sum_{k=n}^{m} a_kx^k\right|\leq \left|\sum_{k=n}^{m} a_k-\sum_{k=n}^{m} a_kx^k\right|+\left|\sum_{k=n}^{m} a_kx^k\right|$$ Take now $N$ large enough such that $\left|\sum_{k=n}^{m} a_kx^k\right|<\epsilon/2$ for all $x\in[0,1)$ and all $n,m>N$. Then move $x$ close enough to $1$ so that $\left|\sum_{k=n}^{m} a_k-\sum_{k=n}^{m} a_kx^k\right|<\epsilon/2$. Therefore $$\left|\sum_{k=n}^{m}a_k\right|<\epsilon$$ for $n,m>N$.
How to compute $\mathbb{E}[X_{s}^{2}e^{\lambda X_{s}}]$ where $(X_s)$ is a Brownian motion with drift $\mu$?
Using the setting as stated in the question and working out the expectation with the tip given in the comments yields the following. The key observation is that the expectation we want to compute equals the second derivative with respect to $\lambda$ of the moment generating function of a normal distribution. The moment generating function of a $N(\mu,\sigma^2)$ random variable $X$ is given by: $\mathbb{E}[e^{\lambda X}]=e^{\lambda \mu + \frac{1}{2}\sigma^2 \lambda^2}$ In our setting we have that $X_s = \mu s +W_s\sim N(\mu s, s)$. So the moment generating function becomes: $\mathbb{E}[e^{\lambda X_s}]=e^{\lambda \mu s + \frac{1}{2}s \lambda^2}$ Taking the first derivative with respect to $\lambda$ yields: $\mathbb{E}[X_{s}e^{\lambda X_s}]=(\mu s+s\lambda)e^{\lambda \mu s + \frac{1}{2}s \lambda^2}$ Differentiating once more yields our desired answer: $\mathbb{E}[X_{s}^{2}e^{\lambda X_s}]=(\mu s+s\lambda)^{2}e^{\lambda \mu s + \frac{1}{2}s \lambda^2}+s e^{\lambda \mu s + \frac{1}{2}s \lambda^2}=((\mu s+s\lambda)^{2}+s)e^{\lambda \mu s + \frac{1}{2}s \lambda^2} $
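As a sanity check, one can compare the closed form against a direct numerical integration over the normal density (plain Python; the parameter values are arbitrary examples):

```python
import math

# Check E[X_s^2 * exp(lam*X_s)] for X_s ~ N(mu*s, s) against
# ((mu*s + s*lam)**2 + s) * exp(lam*mu*s + 0.5*s*lam**2).
mu, s, lam = 0.3, 2.0, 0.5
mean, var = mu * s, s

def integrand(x):
    density = math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
    return x * x * math.exp(lam * x) * density

# Midpoint rule over a wide interval (the Gaussian tails decay fast enough).
lo, hi, N = mean - 40, mean + 40, 400_000
h = (hi - lo) / N
numeric = sum(integrand(lo + (k + 0.5) * h) for k in range(N)) * h
closed = ((mu * s + s * lam) ** 2 + s) * math.exp(lam * mu * s + 0.5 * s * lam ** 2)
print(round(numeric, 6), round(closed, 6))  # the two agree
```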
What is the expected number of flips that are needed?
Indeed, let $X_n$ denote the number of consecutive heads that have been flipped at time $n$. The possible states of $X_n$ are $\{0,1,2,3,4\}$, since the process ends once $4$ consecutive heads occur. Given $X_n=j$, the next state is either $0$, if we flip a tail, or $j+1$, if we flip another consecutive head; thus $$X_{n+1}|X_n=j=\begin{cases} 0, &\text{ with probability } \frac{1}{2} \\ j+1, &\text{ with probability } \frac{1}{2} \end{cases}$$ since each outcome (heads or tails) occurs with the same probability. Hence, the transition matrix of the Markov chain is given below $$\begin{pmatrix}\frac{1}{2}&\frac{1}{2}&0&0&0\\\frac{1}{2}&0&\frac{1}{2}&0&0\\\frac{1}{2}&0&0&\frac{1}{2}&0\\\frac{1}{2}&0&0&0&\frac{1}{2}\\0&0&0&0&1\end{pmatrix}$$ where we have assumed that as soon as the game reaches state $4$, i.e. $4$ consecutive heads, the process stops. The initial state is $X_0=0$. Let $h(j)$ denote the expected number of flips until you reach state $4$, starting from state $j$. Then you have the following system of equations $$\begin{align*}h(0)&=1+\dfrac{1}{2}h(0)+\dfrac{1}{2}h(1)\\h(1)&=1+\dfrac{1}{2}h(0)+\dfrac{1}{2}h(2)\\h(2)&=1+\dfrac{1}{2}h(0)+\dfrac{1}{2}h(3)\\h(3)&=1+\dfrac{1}{2}h(0)+\dfrac{1}{2}h(4)\\h(4)&=0\end{align*}$$ Now solve the above system recursively to obtain $h(0)$, which is the expected number of flips needed. The solution is $h(0)=30$.
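One quick way to carry out the recursive solution: write each $h(j)$ as $a_j + b_j\,h(0)$ and back-substitute from $h(4)=0$ (a small Python sketch using exact fractions):

```python
from fractions import Fraction

# Solve h(j) = 1 + h(0)/2 + h(j+1)/2, with h(4) = 0, by representing
# h(j) = a + b * h(0) and back-substituting from j = 3 down to j = 0.
half = Fraction(1, 2)
a, b = Fraction(0), Fraction(0)           # h(4) = 0
for _ in range(4):                         # j = 3, 2, 1, 0
    a, b = 1 + half * a, half + half * b   # h(j) = 1 + h(0)/2 + h(j+1)/2
# Finally h(0) = a + b * h(0), so h(0) = a / (1 - b).
h0 = a / (1 - b)
print(h0)  # 30
```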
Simplification of conditional probability expression
Facts that might help: \begin{align} P(\bar E\mid D)P(D) &= (1 - P(E\mid D))P(D) \\[1ex] P(E) - P(E\mid D)P(D) &= P(E) - P(E \cap D) \\ &= P(E \cap\bar D) \\ &= P(E\mid\bar D)P(\bar D) \end{align} In case you need a derivation of the first fact: \begin{align} P(\bar E\mid D)P(D) &= P(\bar E \cap D) \\ &= P(D) - P(E \cap D) \\ &= P(D) - P(E\mid D)P(D)\\ &= (1 - P(E\mid D))P(D) \end{align}
How to find the solution of the Tchebycheff differential equation?
HINT: $$(1-x^2)u''(x)-xu'(x)+ku(x)=0\Longleftrightarrow$$ Let $t=i\sqrt{k}\ln(\sqrt{x^2-1}+x)$, which gives $x=\frac{1}{2}e^{-\frac{it}{\sqrt{k}}}\left(1+e^{\frac{2it}{\sqrt{k}}}\right)$: $$\left(-\frac{1}{4}e^{-\frac{2it}{\sqrt{k}}}\left(1+e^{\frac{2it}{\sqrt{k}}}\right)^2+1\right)u''(x)-\frac{1}{2}e^{-\frac{it}{\sqrt{k}}}\left(1+e^{\frac{2it}{\sqrt{k}}}\right)u'(x)+ku(x)=0\Longleftrightarrow$$ Apply the chain rule $\frac{\text{d}u(x)}{\text{d}x}=\frac{\text{d}u(t)}{\text{d}t}\frac{\text{d}t}{\text{d}x}$: $$k\left(u''(t)+u(t)\right)=0\Longleftrightarrow$$ Assume a solution will be proportional to $e^{\lambda t}$ for some constant $\lambda$. Substitute $u(t)=e^{\lambda t}$ into the differential equation: $$k\cdot\frac{\text{d}^2}{\text{d}t^2}(e^{\lambda t})+k\cdot e^{\lambda t}=0\Longleftrightarrow$$ Substitute $\frac{\text{d}^2}{\text{d}t^2}(e^{\lambda t})=\lambda^2e^{\lambda t}$: $$e^{\lambda t}\left(k+k\lambda^2\right)=0\Longleftrightarrow$$ Since $e^{\lambda t}\ne 0$ for any finite $\lambda$, the zeros must come from the polynomial: $$k+k\lambda^2=0\Longleftrightarrow$$ $$k\left(\lambda^2+1\right)=0\Longleftrightarrow$$ $$\lambda=\pm i$$
Find $\int_{a}^{b}\frac{x^{n-1}\left((n-2)x^2+(n-1)(a+b)x+nab\right)}{(x+a)^2(x+b)^2}dx$
We can write the integrand as $$\displaystyle \frac{x^{n-1}\left((n-2)x^2+(n-1)(a+b)x+nab\right)}{(x+a)^2(x+b)^2} = \frac{d}{dx}\left[\frac{x^n}{(x+a)\cdot (x+b)}\right]$$ So $$\displaystyle \int \frac{x^{n-1}\left((n-2)x^2+(n-1)(a+b)x+nab\right)}{(x+a)^2(x+b)^2}dx = \int \frac{d}{dx}\left[\frac{x^n}{(x+a)\cdot (x+b)}\right]dx$$ $$\displaystyle = \left[\frac{x^n}{(x+a)(x+b)}\right]_{a}^{b} = \frac{b^{n-1}-a^{n-1}}{2(a+b)}$$
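A quick numerical spot-check of the closed form (plain Python midpoint rule; the values $n=3$, $a=1$, $b=2$ are arbitrary, for which the right-hand side is $(4-1)/6 = 0.5$):

```python
# Midpoint-rule check of the definite integral against the closed form
# (b**(n-1) - a**(n-1)) / (2*(a+b)) for n = 3, a = 1, b = 2.
n, a, b = 3, 1.0, 2.0

def f(x):
    num = x ** (n - 1) * ((n - 2) * x * x + (n - 1) * (a + b) * x + n * a * b)
    return num / ((x + a) ** 2 * (x + b) ** 2)

N = 200_000
h = (b - a) / N
integral = sum(f(a + (k + 0.5) * h) for k in range(N)) * h
closed = (b ** (n - 1) - a ** (n - 1)) / (2 * (a + b))
print(round(integral, 6), closed)  # 0.5 0.5
```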
Questions about manifolds
Look at the image of $f.$ If you show that it is open and closed, then it is a connected component of $N,$ and since $N$ is connected, it is the whole thing. The properness comes in showing that the image is closed: if $p_1, \dots, p_n, \dots$ converge to $p$ in $N,$ then the preimage of the closure of a neighborhood of $p$ is compact (and non-empty, since it contains all but finitely many preimages of the $p_i$), so for any sequence $q_1, \dots, q_n, \dots$ such that $f(q_i) = p_i$ you can pick a convergent subsequence, and the image of the limit is $p,$ by continuity. I leave openness to you... By Popular Demand Some comments about the other conditions: They are needed to show openness. First, the orientation condition is necessary (think of projecting the round sphere on its first coordinate). Second, to respond to Georges' concern, you can't have just one regular point, through the magic of Sard's theorem. Either all the points are critical (obviously possible), or the set of critical points has Hausdorff dimension at most $n-1.$ The points where the nullity is two or more can be ignored (since they don't separate); at a point where the nullity is $1,$ there is potentially a separating set of critical points, but the only way the map can't be extended past this set is if the map changes orientation there. Now, if this is an exercise, I have no idea if Spivak had introduced Sard's theorem at this point, but I don't really see how to avoid it.
Arranging $6$ distinct rooks on a $6\times 6$ board
Hint: There must be one rook in each row and one in each column. Relate the (unordered) set of positions of the rooks to permutations of $1,2,\ldots, 6$. Then think about labelling the rooks.
Forecasting using LSTM network
You can change the XTest length to incorporate your "151 onwards" points. Note that you can no longer evaluate the error, since your labelled data ends at point 150; you can still get YPred. You will have to figure out the syntax. Please consider asking such questions on CS Stack Exchange or Stack Overflow.
Probability of picking $2$ specific cards
You haven't considered some of the cases, and the calculation is wrong in some of the cases you have considered. Also remember that in some of the cases the order can be reversed to get the same probability again, leading us to directly multiply by $2$ most of the time. Let's start at the beginning.

$P(X=0, Y=0)=\frac{\binom{21}{2}}{\binom{32}{2}} = \frac {21}{32} \cdot \frac {20}{31}$ - This one is correct

$P(X=0,Y=1)= \frac{2\cdot3\cdot21}{32\cdot31}$ - $3$ ways to choose a king, $21$ ways to select a non-heart, non-king card, and $2$ since the order can be reversed with the same probability

$P(X=0,Y=2)=\frac{3}{32} \cdot \frac{2}{31}$ - $\binom{3}{2}$ ways to choose $2$ non-heart kings

$P(X=1,Y=0)=\frac{2\cdot7\cdot21}{32\cdot31}$ - $7$ ways to choose a non-king heart, $21$ ways to choose a non-heart, non-king card, and $2$ since the order can be reversed with the same probability

$P(X=1,Y=1)=\frac {21\cdot2 + 2\cdot7\cdot3}{32\cdot31}$ - This one's a bit complicated. First let's say you select the king of hearts; then there are $21$ ways to select the other card. Next, assume the king of hearts is not selected; then there are $7$ ways to choose a hearts card (excluding the king) and $3$ ways to choose a non-hearts king. The $2$ is present in both cases since the order can be reversed with the same probability

$P(X=1,Y=2)=\frac{2\cdot3}{32\cdot31}$ - Since both cards are kings and one is a heart, one of the cards has to be the king of hearts. The second king can be chosen from the $3$ left in $3$ ways, and $2$ since the order can be reversed with the same probability

$P(X=2,Y=0)=\frac{7}{32} \cdot \frac{6}{31}$ - $\binom{7}{2}$ ways to select $2$ hearts cards from the $8-1 = 7$ non-king hearts (excluding the hearts king, as $Y=0$)

$P(X=2,Y=1)=\frac{2\cdot7}{32\cdot31}$ - Since two hearts cards are drawn and one is a king, one card has to be the king of hearts. The second card can be selected from the remaining $7$ hearts in $7$ ways, and $2$ since the order can be reversed with the same probability

Summing all of them will give you $1$. Please ask if you need more clarifications in any case.
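The whole table can be brute-forced by enumerating all $\binom{32}{2}$ draws (a small Python sketch; the suit and rank labels are only for illustration):

```python
from itertools import combinations
from fractions import Fraction

# Joint law of X = number of hearts, Y = number of kings when drawing
# 2 cards from a 32-card deck (4 suits x 8 ranks).
suits = ["hearts", "spades", "diamonds", "clubs"]
ranks = ["7", "8", "9", "10", "J", "Q", "K", "A"]
deck = [(s, r) for s in suits for r in ranks]

counts = {}
for c1, c2 in combinations(deck, 2):
    x = sum(1 for s, r in (c1, c2) if s == "hearts")
    y = sum(1 for s, r in (c1, c2) if r == "K")
    counts[(x, y)] = counts.get((x, y), 0) + 1

total = sum(counts.values())                      # C(32, 2) = 496
probs = {k: Fraction(v, total) for k, v in counts.items()}
assert sum(probs.values()) == 1                   # the table sums to 1
# e.g. P(X=0, Y=0) = 21*20 / (32*31)
print(probs[(0, 0)] == Fraction(21 * 20, 32 * 31))  # True
```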
How to formally prove that index renaming doesn’t change sum
To make this formal, you can use induction as you wanted, but first let's properly define $\sum$ for any set $I$. Let $I$ be a well-ordered finite set, $(G, +)$ be an abelian group (for example $(\mathbb R, +)$), and let $x: I \rightarrow G$. If $I$ is non-empty, then it has a minimal element, and let $m\in I$ be this minimal element. We define $\sum_{i\in I}x(i)\in G$ inductively as $$ \sum_{i\in I}x(i) := \left\{\begin{array}{ll} 0& \text{for } I=\varnothing \\ x(m) + \sum_{i\in I\setminus\{m\}}x(i) & \text{for } I \neq\varnothing\end{array} \right.$$ where we use the fact that after removing the minimal element from a well-ordered set, the remaining set is still well-ordered. We want to prove that: the value of $\sum_{i\in I}x(i)$ does not depend on the chosen ordering of the set $I$. We'll prove it by induction on $|I|$. For $|I|=0$ and $|I|=1$ it's trivial, as such sets have only one ordering. For $|I|\ge 2$, let's assume that we have two order structures on $I$. I'll denote them as $I$ and $I'$; they are equal as sets, but they have different orderings. Let $m$ be the minimal element in $I$ and $m'$ be the minimal element in $I'$. We have $$ \sum_{i\in I} x(i) = x(m) + \sum_{i\in I\setminus\{m\}}x(i)$$ $$ \sum_{i\in I'} x(i) = x(m') + \sum_{i\in I'\setminus\{m'\}}x(i)$$ If $m=m'$ then we have $x(m) =x(m')$ and $I\setminus\{m\}=I'\setminus\{m'\}$ (as sets), so from the inductive assumption we have $\sum_{i\in I\setminus\{m\}}x(i) = \sum_{i\in I'\setminus\{m'\}}x(i)$. That means that $\sum_{i\in I} x(i)=\sum_{i\in I'} x(i)$. If $m\neq m'$, then $m\in I'\setminus\{m'\}$. By the inductive assumption, the ordering of $I'\setminus\{m'\}$ doesn't change the value of the sum, so we can choose such an ordering that $m$ is the minimal element of $I'\setminus\{m'\}$.
We have then (using associativity and commutativity of addition in $G$): $$ \sum_{i\in I'\setminus\{m'\}}x(i) = x(m) + \sum_{i\in I'\setminus\{m,m'\}}x(i)$$ \begin{align} \sum_{i\in I'} x(i) &= x(m') + \Big(x(m) + \sum_{i\in I'\setminus\{m,m'\}}x(i)\Big) = \\ &= \Big(x(m') + x(m)\Big) + \sum_{i\in I'\setminus\{m,m'\}}x(i) = \\ &= \Big(x(m) + x(m')\Big) + \sum_{i\in I'\setminus\{m,m'\}}x(i) = \\ &= x(m) + \Big(x(m') + \sum_{i\in I'\setminus\{m,m'\}}x(i)\Big) = \\ &= x(m) + \sum_{i\in I'\setminus\{m\}}x(i) = \\ &= x(m) + \sum_{i\in I\setminus\{m\}}x(i) = \sum_{i\in I}x(i)\end{align} $\Box$
On the sharpness of the Harada-Sai lemma
Bongartz gives an example, for any $m$, in Section A.1 on page 326 of Bongartz, Klaus, Treue einfach zusammenhaengende Algebren. I, Comment. Math. Helv. 57, 282-330 (1982). ZBL0502.16022. The algebra is given by the quiver $$ 1 \underset{b}{\overset{a}{\rightleftarrows}} 2 \underset{b}{\overset{a}{\rightleftarrows}} 3 \;\cdots\; (m-1) \underset{b}{\overset{a}{\rightleftarrows}} m $$ with relations $ab=ba=0$. As an example, I'll list the modules in a chain of $14$ morphisms for $m=4$. Arrows to the right indicate the action of $a$; arrows to the left indicate the action of $b$. $$1\to 2\to3\to4$$ $$1\to2\to3$$ $$1\to2\to3\leftarrow4$$ $$1\to2$$ $$1\to2\leftarrow3\to4$$ $$1\to2\leftarrow3$$ $$1\to2\leftarrow3\leftarrow4$$ $$1$$ $$1\leftarrow2\to3\to4$$ $$1\leftarrow2\to3$$ $$1\leftarrow2\to3\leftarrow4$$ $$1\leftarrow2$$ $$1\leftarrow2\leftarrow3\to4$$ $$1\leftarrow2\leftarrow3$$ $$1\leftarrow2\leftarrow3\leftarrow4$$
Definitions of Boolean algebras
If you have a lattice, that is, a partially ordered set $(L,\le)$ where every two-element set has a greatest lower bound and a least upper bound, you can define the operations $\land$ and $\lor$ by $$ a\land b=\inf\nolimits_\le\{a,b\},\qquad a\lor b=\sup\nolimits_\le\{a,b\} $$ These operations satisfy the following properties:

Idempotency: $a\land a=a$, $a\lor a=a$

Absorption: $a\land(a\lor b)=a$, $a\lor(a\land b)=a$

Commutativity: $a\land b=b\land a$, $a\lor b=b\lor a$

Associativity: $a\land(b\land c)=(a\land b)\land c$, $a\lor(b\lor c)=(a\lor b)\lor c$

These properties are easily proved from the fact that $\le$ is a partial order relation. Conversely, suppose you have two operations $\land$ and $\lor$ on the set $L$ that satisfy the above properties. Define $$ a\le b \quad\text{stands for}\quad a\land b=a $$ Then you can prove

$a\le a$ for all $a\in L$

If $a\le b$ and $b\le a$ then $a=b$

If $a\le b$ and $b\le c$, then $a\le c$

The first follows from idempotency, the second from commutativity, the third from associativity. Thus $\le$ is a partial order on $L$. Moreover $a\land b=\inf_\le\{a,b\}$. Indeed, $$ a\land b\le a $$ because $(a\land b)\land a=a\land(a\land b)=(a\land a)\land b=a\land b$. If $c\le a$ and $c\le b$, then $$ c\land(a\land b)=(c\land a)\land b=c\land b=c $$ so $c\le a\land b$, and we have proved that $a\land b$ is the greatest lower bound of $\{a,b\}$. By symmetry, if we define $a\le'b$ to stand for $a\lor b=b$, we can prove that $\le'$ is a partial order on $L$ and $a\lor b=\sup_{\le'}\{a,b\}$. We didn't use absorption yet. It is used to show that $a\le b$ if and only if $a\le' b$. Suppose $a\le b$, that is, $a\land b=a$. Then $$ a\lor b=(a\land b)\lor b=b\lor(a\land b)=b\lor(b\land a)=b $$ and so $a\le' b$. Conversely, suppose $a\le'b$, that is, $a\lor b=b$. Then $$ a\land b=a\land(a\lor b)=a $$ so $a\le b$.
Since every two element set has a least upper bound with respect to $\le'$ and a greatest lower bound with respect to $\le$, but the two relations are the same, we have that $L,\le$ is a lattice. Now add maximum, minimum, distributivity and complements and you have a Boolean algebra.
Epsilon-delta proof that $ \lim_{x\to 0} {1\over x^2}$ does not exist
I'll try to sketch the idea of the proof without giving all the details, since that part of the exercise is your job. In order for the limit to exist there must be some particular number $L$ for the limit. Then the function's value will be close to $L$ whenever $x$ is close enough to $0$. Since $1/x^2$ is very large when $x$ is near $0$, the limit $L$ will have to be a large number if it is to exist. Could the limit be $1000$? I don't think so. You should be able to show that if you look at values of $x$ close to $0$ then $1/x^2$ will be greater than $1001$, which tells you there's no $\delta$ that works with $\epsilon = 1$. Now write the formal proof that no number $L$ can be the limit. So there is no limit. PS Don't say that the limit is infinity. Infinity is not a number. There is a sense in which it's correct to say the limit is infinity, but this exercise does not ask about that and you can get into trouble if you try. PPS Phrases like "infinitely close" sometimes provide useful intuition, and they were (in a sense) the best that mathematicians managed when calculus was being invented, but you should not use them now in formal arguments. The whole $\epsilon - \delta$ thing is designed to express the idea precisely. I think you can understand many of the ideas of calculus without it, but you can't prove theorems that way.
why is the set of intersections and symmetric differences of a fixed set finite
There are only finitely many sets that are intersections of sets in $V$ and their complements. Every set in $W$ must be a union of those. (This straightforward argument may be just a rephrasing of the idea behind the BIG HAMMER you want to avoid.) Edit in response to a comment from the OP: "Not doubting of course, but not understanding the reasoning." Here's another kind of argument. Maybe it helps your intuition. Each subset of $U$ is completely described by its characteristic function. The ring structure on subsets is isomorphic to the ring structure of the characteristic functions with pointwise operations mod $2$. You are interested in a finitely generated subring of that ring. The finite number of characteristic functions $\chi_a$ and $1 - \chi_a$ for $a \in V$ can be distinguished by their values at a finite subset of $U$, so that will be true of the functions in the subring they generate. This is a more formal way to say that the universe $U$ might as well be finite - in which case the question is trivial.
Prove that the Markov chain is irreducible and recurrent
Not true even when the state space is finite. Let $\{X_n\}$ have state space $\{0,1\}$ and suppose both $0$ and $1$ are absorbing states. The transition matrix is the identity matrix. Then $(\frac 1 2,\frac 1 2)$ is an invariant distribution but the chain is not irreducible.
Curse of dimensionality $2^d +1$ hyperspheres inside a hypercube
We may as well set $l=1$ and talk about the unit hypercube. The hyperspheres in the corners then have diameter $\frac 12$. Consider the body diagonal of the hypercube. It goes through the centers of two of the corner hyperspheres, the center of the center hypersphere, and two of the points of tangency between the center hypersphere and corner hyperspheres. The length of the body diagonal is $\sqrt d$. The distance from the corner of the hypercube to the center of a corner hypersphere is $\sqrt{\frac d{16}}=\frac {\sqrt d}4$. The distance from the corner of the hypercube to a tangency point is then $\frac {\sqrt d+1}4$. The radius of the central hypersphere is then $\frac {\sqrt d}2-\frac{\sqrt d+1}4$. The central hypersphere touches the faces of the hypercube when this radius equals $\frac 12$ $$\frac {\sqrt d}2-\frac{\sqrt d+1}4=\frac 12\\\frac {\sqrt d}4=\frac 34\\d=9$$
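A quick numerical sanity check of the derivation (the function name is mine): the central radius $\frac{\sqrt d}2-\frac{\sqrt d+1}4$ first reaches $\frac 12$ at $d=9$.

```python
import math

def central_radius(d):
    """Radius of the central hypersphere in the unit d-cube,
    given corner hyperspheres of radius 1/4 (diameter 1/2)."""
    return math.sqrt(d) / 2 - (math.sqrt(d) + 1) / 4

print(central_radius(9))    # 0.5: the central sphere touches the faces
print(central_radius(4))    # smaller than 0.5 in low dimension
print(central_radius(16))   # larger than 0.5: it pokes out of the cube
```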
Relative Simplicial Approximation
Actually, I'm able to solve this problem when $A$ is a subcomplex of $X$, by first approximating $f_{|\partial I^q}$ to be simplicial, then using the fact that $\partial I^q \to I^q$ is a cofibration I can extend this homotopy obtaining $\tilde{f}:I^q \to X$, simplicial on $A$, hence I can conclude by the relative simplicial approximation. I believe this is what the author had in mind.
Finding a quantile $\rho=\frac{5}{16}$
To clear the confusion, if $X\sim \exp(λ)$ then $$f(x)=λe^{-λx}$$ for $x>0$ and $$F(x)=\int_{-\infty}^xf(t)\,dt=1-e^{-λx}$$ for $x>0$ (so there is no $λ$ in front of the exponential in the cdf $F$). So, in this case $$F(x_p)=\frac{5}{16}\implies 1-e^{-λx_p}=\frac5{16}\implies x_p=-\frac1λ\ln(11/16)$$
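As a quick check of the computed quantile (the rate $λ=2$ below is an arbitrary illustrative choice), plugging $x_p$ back into the cdf recovers $5/16$:

```python
import math

lam = 2.0                            # hypothetical rate parameter
x_p = -math.log(11 / 16) / lam       # the quantile derived above

F = 1 - math.exp(-lam * x_p)         # evaluate the cdf at x_p
print(F)                             # 0.3125 = 5/16
```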
Good, relatively short math textbooks?
For topology, John McCleary's A First Course in Topology: Continuity and Dimension could be exactly what you're looking for. It's fairly short, covers the essentials of both point set and low dimensional algebraic topology in a rigorous yet very pictorial way and contains a wonderful historical slant that makes it a pleasure to read. This combined with a series of terrific problems as well as suggestions for further reading make it a great choice I think you'll find very helpful.
$\mathbb E[W_s^2W_t^2]$ for Brownian motion
Suppose $s\geq t$. $$E(W_s^2W_t^2) = E((W_s-W_t)^2W_t^2+2W_t^3W_s-W_t^4) \\ = E(W_{s-t}^2)E(W_t^2)+2E(W_t^3W_s)-E(W_t^4) =\\ (s-t)t+2E(W_t^3(W_s-W_t))+E(W_t^4) = \\ (s-t)t+2E(W_t^3)E(W_{s-t})+3t^2 = (s-t)t+3t^2 = 2t^2+st $$
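A Monte Carlo sanity check of the identity $E(W_s^2W_t^2)=2t^2+st$ (the seed, sample size, and test values $s=2$, $t=1$ are my choices; the estimate is only approximate):

```python
import math
import random

random.seed(0)

def sample_moment(s, t, n=100_000):
    """Monte Carlo estimate of E[W_s^2 W_t^2] for s >= t, using the
    independent-increments decomposition W_t = sqrt(t) Z1,
    W_s = W_t + sqrt(s - t) Z2 with Z1, Z2 standard normal."""
    total = 0.0
    for _ in range(n):
        wt = math.sqrt(t) * random.gauss(0, 1)
        ws = wt + math.sqrt(s - t) * random.gauss(0, 1)
        total += ws * ws * wt * wt
    return total / n

s, t = 2.0, 1.0
est = sample_moment(s, t)
print(est)               # near 2*t**2 + s*t = 4
```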
Prove that it cannot be proven that "The United States had more fallow acreage than planted acreage"
Let $p_u,f_u$ be the planted/fallow acreage in the U.S. and $p_s,f_s$ be the planted/fallow acreage in the Soviet Union. Also, let $y_u$ be the yield per planted acre in the U.S. and $y_s$ be the yield per planted acre in the Soviet Union. The information given says that $$\frac{y_s}{y_u} = 0.68 \; \iff \; y_s = 0.68y_u \tag{1}\label{eq1}$$ The crop amount in the U.S. is $y_u p_u$, so the yield per the total acreage would be $$y_{tu} = \frac{y_u p_u}{p_u + f_u} \tag{2}\label{eq2}$$ Similarly, for the Soviet Union, its yield per the total acreage used would be $$y_{ts} = \frac{y_s p_s}{p_s + f_s} \tag{3}\label{eq3}$$ By cross-multiplying and combining the terms for $f_s$ and $p_s$, you get \begin{align} y_{ts}(p_s + f_s) & = y_s p_s \\ y_{ts}p_s + y_{ts}f_s & = y_s p_s \\ y_{ts}f_s & = y_s p_s - y_{ts}p_s \\ f_s & = \frac{p_s(y_s - y_{ts})}{y_{ts}} \tag{4}\label{eq4} \end{align} It's also given that $$\frac{y_{ts}}{y_{tu}} = 1.14 \; \iff \; y_{ts} = 1.14y_{tu} \tag{5}\label{eq5}$$ I originally misread the question to think it was asking to show it cannot be proven whether the fallow or the planted U.S. acreage was larger. However, the answer to the actual question just needs to show that $f_u \le p_u$ is possible, which is done with the second set of calculations. For the more general question this originally answered, note this is the only information provided, so it can be answered if $2$ sets of values are found which are consistent with the above equations but with one showing that $f_u \gt p_u$ and the other showing that $f_u \lt p_u$. Let's set $y_u = 100$. Then from \eqref{eq1}, you get $y_s = 68$. Next, let $p_u = 10,000,000$ and $f_u = 11,000,000$. Substituting these into \eqref{eq2} gives $y_{tu} = 47.619\ldots$. From \eqref{eq5}, this gives $y_{ts} = 54.285\ldots$. 
From \eqref{eq4}, you get $$f_s = \frac{p_s(68 - 54.285\ldots)}{54.285\ldots} \tag{6}\label{eq6}$$ Note you can plug any value of $p_s$ you want to get a specific value of $f_s$, e.g., if $p_s = 10,000,000$, then $f_s = 2,526,315.789\ldots$. Next, consider $f_u = 9,000,000$. Then \eqref{eq2} gives $y_{tu} = 52.631\ldots$. From \eqref{eq5}, this gives $y_{ts} = 60$. From \eqref{eq4}, you get $$f_s = \frac{p_s(68 - 60)}{60} \tag{7}\label{eq7}$$ If you use $p_s = 10,000,000$ again, then $f_s = 1,333,333.333\ldots$. All of these values are consistent with the equations relating the only information which was provided, but with one set showing more fallow acreage than planted acreage in the U.S. (i.e., $f_u = 11,000,000 \gt p_u = 10,000,000$) and the other one showing the opposite (i.e., $f_u = 9,000,000 \lt p_u = 10,000,000$). A main reason why you can't prove which of the fallow and planted acreage in the U.S. is greater is because there are $6$ input values of $p_u,f_u,p_s,f_s,y_u$ and $y_s$, but only $4$ equations of \eqref{eq1}, \eqref{eq2}, \eqref{eq3} and \eqref{eq5} using them to relate to specified constants and other variables. Note, however, these $6$ input values are not independent of each other, with some being simply defined in terms of others, such as $y_s$ in terms of $y_u$ in \eqref{eq1}. In particular, as these equations are consistent with each other, it's an under-determined system of equations, with $6 - 4 = 2$ degrees of freedom in this case (in general, you would have more than $2$ if any of the equations are linearly dependent). Also, note the question's numeric value restrictions are for comparing values between the U.S. and the Soviet Union, meaning there are fewer constraints among the values within the U.S. (and the Soviet Union as well).
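Both scenarios can be checked mechanically; the sketch below (the function name is mine) plugs the answer's numbers into equations (2), (4), and (5):

```python
# Check that both scenarios are consistent with the given ratios
# y_s / y_u = 0.68 and y_ts / y_tu = 1.14.

def scenario(p_u, f_u, p_s, y_u=100):
    y_s = 0.68 * y_u                 # eq. (1)
    y_tu = y_u * p_u / (p_u + f_u)   # eq. (2)
    y_ts = 1.14 * y_tu               # eq. (5)
    f_s = p_s * (y_s - y_ts) / y_ts  # eq. (4)
    return y_ts, f_s

# More fallow than planted in the U.S.
yts1, fs1 = scenario(10_000_000, 11_000_000, 10_000_000)
# Less fallow than planted in the U.S.
yts2, fs2 = scenario(10_000_000, 9_000_000, 10_000_000)

print(yts1, fs1)   # about 54.285..., 2,526,315.789...
print(yts2, fs2)   # 60.0, about 1,333,333.333...
```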
Probability of guessing a secret code
The second approach is right and the first one has a mistake. Here's a suggestion for how you can see the mistake: what if everything were the same about this problem, except that there were just 16 total codes (10 of which were valid) and you had 5 guesses to find them? What would happen if you repeated your first approach in the exact same way? EDIT: Let me try to go a bit further than just convincing you that the first one is wrong; here's a bit on why it's wrong. The short answer for what's broken is that you're overcounting probabilities. When you add probabilities, you're implicitly assuming that the underlying events are disjoint -- that is, they can't occur together. However, that's not the case here. It's quite possible to guess correctly on attempt 1 and to guess correctly on attempt 2. If you want to proceed in this fashion, you need to restyle your events: Event 1: You guess right on your first try (probability: $10/10000$) Event 2: You don't guess right on your first try, but you do on your second (probability: $\frac{9990}{10000} \cdot \frac{10}{9999}$) Event 3: You don't guess right on your first two tries, but you do on your third (probability: $\frac{9990}{10000} \cdot \frac{9989}{9999} \cdot \frac{10}{9998}$) etc. By doing this, you'll have constructed disjoint events, which means it's appropriate to add their probabilities. You will then recover the other answer.
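The restyled disjoint events can be verified with exact rational arithmetic; summing them recovers the complement-rule answer exactly:

```python
from fractions import Fraction

valid, total, guesses = 10, 10_000, 5

# Disjoint events: miss the first k attempts, then succeed on attempt k+1.
p_win = Fraction(0)
miss_so_far = Fraction(1)
for k in range(guesses):
    p_win += miss_so_far * Fraction(valid, total - k)
    miss_so_far *= Fraction(total - k - valid, total - k)

# After the loop, miss_so_far = P(all five guesses miss), so the sum of
# the disjoint events must equal 1 - miss_so_far.
print(p_win == 1 - miss_so_far)   # True
print(float(p_win))               # a bit under 5 * 10/10000 = 0.005
```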
$\Omega$ of a homotopy cofiber sequence
The archetypal example of a homotopy cofiber sequence is a suspension $$X \to \bullet \to \Sigma X.$$ Taking loops on such a thing gives $$\Omega X \to \bullet \to \Omega \Sigma X$$ and asking for this to continue to be a homotopy cofiber sequence is equivalent to asking that $\Omega \Sigma X \cong \Sigma \Omega X$; in other words, you're asking for taking loops and taking suspensions to commute (up to homotopy, etc.). This is almost never true. In particular, $\Omega \Sigma X$ is almost never a suspension. Let's take $n \ge 2$ and $X = S^n$ so that $\Sigma X = S^{n+1}$, which is about as nice as possible and in particular can have arbitrarily high connectivity. Then $\Omega \Sigma X \cong \Omega S^{n+1}$ turns out to have rational cohomology a polynomial algebra on a generator of degree $n$; in particular it supports nontrivial cup products. But $\Sigma \Omega X \cong \Sigma \Omega S^n$ is a suspension, so the cup product on it vanishes in positive degree. If you're working with stable objects, e.g. chain complexes or spectra, then homotopy cofiber and fiber sequences agree, and in particular taking loops and taking suspensions are inverses and so $\Omega \Sigma X \cong \Sigma \Omega X \cong X$. In spaces there are theorems such as the Blakers-Massey theorem measuring the extent to which a homotopy cofiber sequence fails to be a homotopy fiber sequence and vice versa.
Idea behind geometric points in étale theory.
Minhyong Kim wrote something very nice about this which I can't find at the moment. His point, as I understand it, is that in topology you can use any map from a simply connected space into $X$ as a "basepoint" for the fundamental group, up to and including a universal cover. The analogous statement in algebraic geometry is that you can use any map from a scheme with no nontrivial etale covers into $X$ as a "basepoint" for the etale fundamental group, and separably closed fields are the easiest examples of such things.
Binary vector counting
Hint: Compute the number of $\hat a$ with $|\hat a|=5$ and $v(\hat a)\le 2^5$; which positions of $\hat a$ have to be zero for the latter condition to be satisfied?
Does a compact linear operator on an infinite dimensional Banach space have a bounded inverse?
Nothing is wrong. It just happens that there are no invertible compact operators from an infinite dimensional Banach space $X$ into itself.
Complex logarithm and the residue
The reason that it is interesting to calculate the residue of $\frac 1z$ at $0$ (or equivalently, the counterclockwise integral $\oint_{|z| = 1} \frac 1z dz$) is that it cannot be (directly) calculated using an antiderivative. If $\gamma$ is a curve in $\Bbb C$ that starts at $a \in \Bbb C$ and ends at $b \in \Bbb C$ and if there exists a function $F(z)$ such that $F$ is differentiable at all points in $\gamma$ with $f(z) = \frac d{dz}F(z)$, then we have $$ \int_\gamma f(z)\,dz = F(b) - F(a). $$ It follows that if $\gamma$ is a closed contour (so that $a = b$), then we have $\int_\gamma f(z)\,dz = F(a) - F(a) = 0$. In other words: if we know that there is an anti-derivative of $f$ that is globally defined along the contour $|z| = 1$, then it necessarily follows that its residue at $0$ is $0$. That said, we can use the antiderivative to compute the residue if we split the desired integral into parts. Let $\gamma_1$ denote the path along $|z| = 1$ from $1$ to $-1$, and let $\gamma_2$ denote the path along $|z| = 1$ from $-1$ to $1$, both taken counterclockwise. We have $$ \oint_{|z| = 1}\frac 1{z}\,dz = \int_{\gamma_1} \frac 1zdz + \int_{\gamma_2} \frac 1z dz. $$ We now consider two different antiderivatives for $\frac 1z$ corresponding to different branch cuts. Define $\log^1,\log^2$ such that $$ \log^1(e^{i \theta}) = i\theta, \quad \theta \in [-\pi/2,3 \pi/2);\\ \log^2(e^{i \theta}) = i\theta, \quad \theta \in [\pi/2, 5\pi/2). $$ We then have $$ \int_{\gamma_1} \frac 1zdz = \log^1(-1) - \log^1(1) = \pi i - 0 = \pi i,\\ \int_{\gamma_2} \frac 1z dz = \log^2(1) - \log^2(-1) = 2 \pi i - \pi i = \pi i. $$ Regarding your edited question: your definition $$ \log(re^{i\theta}) = \{\log r + i(\theta + 2 \pi k) : k \in \Bbb Z\} $$ is consistent with the definition $$ \log(z) = \left\{\int_{\gamma}\frac 1z \,dz : \gamma \text{ is a contour from } 1\ \text{to}\ z\right\}. $$ It is indeed the case that the term $2 \pi i k$ corresponds to the contribution of the residue at $z = 0$, picked up once for each time the contour winds around the origin.
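A numerical check of the value $2\pi i$ (a midpoint-rule discretization of the parametrized contour; the step count is an arbitrary choice):

```python
import cmath
import math

# Integrate 1/z counterclockwise around |z| = 1 numerically:
# parametrize z = e^{i*theta}, dz = i e^{i*theta} d(theta), so the
# integrand dz/z is identically i*d(theta) and the total is 2*pi*i.
n = 10_000
total = 0 + 0j
for k in range(n):
    theta = 2 * math.pi * (k + 0.5) / n   # midpoint rule
    z = cmath.exp(1j * theta)
    dz = 1j * z * (2 * math.pi / n)
    total += dz / z
print(total)   # close to 2*pi*i = residue 1 times 2*pi*i
```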
How is a general case equivalent to a special case and how showing a special case demonstrates a general case, in the proof of pythagoras theorem?
It means that there is a (simple) demonstration that $a^2=b^2+c^2$ implies $\lambda a^2=\lambda b^2+\lambda c^2$, namely just multiply both sides by $\lambda$ and expand on the right. (Of course there is also the derivation of the special case from the general case by just plugging in $\lambda =1$ and simplifying). There are general cases and there are general cases. You talk about $5^2=3^2+4^2$ as a special case of $a^2=b^2+c^2$, whereas the quote talks about $a^2=b^2+c^2$ as a special case of $\lambda a^2=\lambda b^2+\lambda c^2$. The claim that "any special case true implies general case true" works only if (as above) the general case is equivalent to the special case.
Prove that $a$ is an eigen value of $p(T)$ where $T$ is a linear operator on $V$ $\iff$ $a=p(\lambda$ )
Suppose that $\lambda\in\mathbb{C}$ and write $p(z)-\lambda=\beta(z-\alpha_1)(z-\alpha_2)\cdots(z-\alpha_n)$ for some elements $\beta,\alpha_1,\alpha_2,\ldots,\alpha_n\in\mathbb{C}$. From here, $$p(T)-\lambda I=\beta(T-\alpha_1I)(T-\alpha_2I)\cdots(T-\alpha_nI).$$ Notice that $p(T)-\lambda I$ is non-invertible if and only if some factor $T-\alpha_j I$ is non-invertible (this isn't too hard to see, since all the factors above commute). That means that $\lambda$ is an eigenvalue of $p(T)$ if and only if $\alpha_j$ is an eigenvalue of $T$ for some $j$. Since these $\alpha_j$'s are exactly the roots of $p(z)-\lambda$, this says that $\lambda$ is an eigenvalue of $p(T)$ if and only if $p(\alpha)-\lambda=0$ for some eigenvalue $\alpha$ of $T$. That is, if and only if $\lambda=p(\alpha)$ where $\alpha$ is an eigenvalue of $T$.
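A concrete sketch with an upper-triangular matrix, where the eigenvalues sit on the diagonal and any polynomial acts on them entrywise there (the matrix and polynomial are arbitrary illustrations, not part of the proof):

```python
# 2x2 upper-triangular T: its eigenvalues are the diagonal entries,
# and p(T) is again upper-triangular with p(eigenvalue) on the diagonal.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

T = [[2, 1], [0, 5]]        # eigenvalues 2 and 5
I = [[1, 0], [0, 1]]

# p(z) = z^2 + 3z + 1, so p(T) = T^2 + 3T + I
T2 = matmul(T, T)
pT = [[T2[i][j] + 3 * T[i][j] + I[i][j] for j in range(2)] for i in range(2)]

p = lambda z: z * z + 3 * z + 1
print(pT[0][0], pT[1][1])   # 11 41, i.e. p(2) and p(5)
```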
Convert English to logic
You seem to be right, but you may want to formulate it as follows to reflect the nature of the question: if $x$ properly appreciates art, then $x$ is a monkey, AND if $x$ is not a monkey, then $x$ does not appreciate art. But this latter statement is just the contrapositive of the former, which takes the form $(\forall x)(A(x)\to M(x))$. Thus, I do not see any issue with your formulation.
Characterize the compact subspaces in a topological space.
No. For instance, let $X=\{0,1\}$ with $\{0\}$ open but $\{1\}$ not open. Then $A=\{0\}$ is compact but $K(A)=A=\{0\}$ is not closed so $A$ is not $K$-closed. Ultimately, the issue is that the "reason" $A$ is compact could be totally unrelated to the "reason" $X$ is compact. In particular, $X$ might have a point $x$ (like $1$ in the example above) whose only neighborhood is $X$ itself, which automatically guarantees that $X$ is compact. If $x\not\in A$, there's never going to be any nice way to relate open covers of $A$ to open covers of $X$ to learn anything interesting using the compactness of $A$, since open covers of $X$ are all trivial ($X$ itself must always be one of the sets).
Differential equation $x'(t) e^{-x'(t)^2} = c$ with Lambert W function.
Square the equation and multiply by $-2$, $$ (-2x'^2)e^{-2x'^2}=-2c^2. $$ Then apply Lambert W as the inverse of the function $ve^v=u$, $$ -2x'^2=W(-2c^2). $$ Now select one of the square roots so that the sign of $x'$ is the sign of $c$, and integrate.
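A numeric sketch of the recovery of $x'$ (the Newton iteration for the principal branch of Lambert W is my own helper, and $c=0.3$ is an arbitrary value satisfying $|c|\le 1/\sqrt{2e}$, which is needed for a real solution); substituting back recovers the original equation $x'e^{-x'^2}=c$:

```python
import math

def lambert_w(u, w=0.0):
    """Principal branch of w * e^w = u via Newton iteration (u >= -1/e)."""
    for _ in range(100):
        ew = math.exp(w)
        w -= (w * ew - u) / (ew * (1 + w))
    return w

c = 0.3
w = lambert_w(-2 * c * c)                  # w = -2 x'^2
xp = math.copysign(math.sqrt(-w / 2), c)   # sign of x' matches sign of c
print(xp * math.exp(-xp * xp))             # recovers c = 0.3
```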
An example of a category where every object is both inital object and terminal object?
If there is exactly one arrow from $a$ to $b$ for all objects $a,b$, then every object is initial and terminal. And the converse is also true. So such categories are exactly the complete reflexive directed graphs.
Maxima of a function
The standard definition of a local maximum of $f$ at $x_0$ is that there is an open set $U$ with $x_0 \in U$ and $f(x) \le f(x_0)$ for $x \in U$. Nothing is required about continuity or differentiability of the function. For $x > 0$ we have $f(x) \le 1$ and $$\lim\limits_{x \to 0^+} f(x) = 1$$ Hence we must have $f(0) = a \ge 1$. This condition is also sufficient, as $f$ is increasing on $(-\infty, 0)$ whatever the value of $a$ is. Conclusion: $f$ has a local maximum at $0$ if and only if $a \ge 1$.
$G$ finite, $P < G$ a Sylow p-subgroup, $N_{G}(P)$ the normalizer is contained in $H < G$, show $N_{G}(H)=H$
Observe that we need only show that $N_G(H) \leq H$. Let $g \in N_G(H)$. Then $$ gPg^{-1} \subseteq gHg^{-1} = H. $$ Note that $gPg^{-1}$ is a subgroup of $H$. It is, in fact, a Sylow $p$-subgroup of $H$! Therefore by the Sylow Conjugacy Theorem, $gPg^{-1} = hPh^{-1}$, for some $h \in H$. Then $$ (g^{-1}h)P(g^{-1}h)^{-1} = P, $$ so $$g^{-1}h \in N_G(P) \leq H.$$ But that implies that $$g^{-1}h h^{-1} = g^{-1} \in H,$$ so $g \in H$, too.
Prove that, given any positive integer n, some multiple of it must be of the form 99...900...0
Hint: use Dirichlet pigeonhole principle to show that some two numbers of the form 999...99 (all nines) give the same remainder when divided by n.
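The pigeonhole hint can be turned directly into a search (the function name is mine): track the remainders of $9, 99, 999, \ldots$ mod $n$ until one repeats or hits $0$; the difference of the two repunits of nines is then a multiple of $n$ of the form $99\ldots900\ldots0$.

```python
def nines_zeros_multiple(n):
    """Find a multiple of n of the form 99...900...0 via pigeonhole:
    two numbers 9...9 share a remainder mod n, and their difference
    is a block of 9s followed by a block of 0s."""
    seen = {}
    r, nines = 0, 0
    while True:
        nines += 1
        r = (r * 10 + 9) % n
        if r in seen:
            return nines - seen[r], seen[r]   # (# of 9s, # of 0s)
        if r == 0:
            return nines, 0
        seen[r] = nines

for n in (7, 12, 250):
    k9, k0 = nines_zeros_multiple(n)
    m = int("9" * k9 + "0" * k0)
    print(n, m, m % n == 0)   # e.g. 12 divides 900
```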
True or false: $\frac{\mathrm d}{\mathrm dx}(\int _2^{\:e^x}\:\ln t \,dt)=x-\ln 2$. Support with a proof
Your first step replaces a perfectly normal definite integral with two improper integrals, so you are already on shaky ground. The logarithm is undefined at zero, so it is not immediate that the integrals on $[0,2]$ and $[0,\mathrm{e}^x]$ are defined (they do converge, but that requires a separate argument). Far better is to split the interval of integration at a point where the integrand is well-behaved, for instance at $1$. $$ \int_2^{\mathrm{e}^x} \ln(t) \,\mathrm{d}t = \int_1^{\mathrm{e}^x} \ln(t) \,\mathrm{d}t - \int_1^{2} \ln(t) \,\mathrm{d}t $$ This result should make it clear that it doesn't matter whether we split the interval of integration, we are faced with a minor variation of the same integral minus zero. So don't split. Next, the definite integral $\int_1^{2} \ln(t) \,\mathrm{d}t$ is just some number, so $$ \frac{\mathrm{d}}{\mathrm{d}x} \int_1^{2} \ln(t) \,\mathrm{d}t = 0 \text{.} $$ Finally, let $u(x) = \mathrm{e}^x$ and $f(u) = \int_1^{u} \ln(t) \,\mathrm{d}t$. Then, \begin{align*} \frac{\mathrm{d}}{\mathrm{d}x} \int_1^{\mathrm{e}^x} \ln(t) \,\mathrm{d}t &= \frac{\mathrm{d}}{\mathrm{d}x} f(u(x)) \\ &= \frac{\mathrm{d}f}{\mathrm{d}u} \cdot \frac{\mathrm{d}u}{\mathrm{d}x} \\ &= \ln(u) \cdot \mathrm{e}^x \\ &= x \cdot \mathrm{e}^x \text{.} \end{align*}
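A finite-difference sanity check that the derivative is $x\,\mathrm{e}^x$ (using the antiderivative $t\ln t - t$ to evaluate the integral in closed form; the test point is arbitrary):

```python
import math

def F(x):
    """F(x) = integral of ln(t) dt from 2 to e^x, via t*ln(t) - t."""
    t = math.exp(x)
    return (t * math.log(t) - t) - (2 * math.log(2) - 2)

x, h = 1.3, 1e-6
numeric = (F(x + h) - F(x - h)) / (2 * h)   # central difference
print(numeric, x * math.exp(x))             # both about 4.77
```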
If $P(G)>0$, then $\int_GX\ dP<0$ where $G:=\{X<0\}$?
Hint: One of the sets $\{X &lt; -1/n\}, n = 1,2,\dots $ has positive measure.
Show that r is a primitive root?
A primitive root of $m$ is a number $g$ such that $g,g^2,g^3, \dots, g^{\varphi(m)}$ are all incongruent modulo $m$. So $g$ is a primitive root of $m$ precisely if $g$ has order $\varphi(m)$ modulo $m$. So it is enough to show that for any $x$, if $xy\equiv 1\pmod{m}$, then $x^k\equiv 1\pmod m$ if and only if $y^k\equiv 1\pmod{m}$. This is immediate, since $(xy)^k=x^ky^k$.
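A small check (the helper functions are mine, and $m=11$, $g=2$ are illustrative) that an inverse of a primitive root has the same order $\varphi(m)$:

```python
from math import gcd

def order(g, m):
    """Multiplicative order of g modulo m (requires gcd(g, m) = 1)."""
    k, x = 1, g % m
    while x != 1:
        x = x * g % m
        k += 1
    return k

def phi(m):
    return sum(1 for a in range(1, m) if gcd(a, m) == 1)

m, g = 11, 2              # 2 is a primitive root mod 11
ginv = pow(g, -1, m)      # modular inverse (Python 3.8+), here 6
print(order(g, m), order(ginv, m), phi(m))   # all equal 10
```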
prove, that following formula is correct
The root of the solution lies in the fact that $\binom n {2[\frac{n}{2}]}$ is the last term of $(1+1)^n+(1-1)^n$, as shown below: for $n=2m$, $[\frac n2]=[\frac{2m}2]=m$, so $\binom n {2[\frac{n}{2}]}=\binom{2m}{2m}$; for $n=2m+1$, $[\frac n2]=[\frac{2m+1}2]=m$, so $\binom n {2[\frac{n}{2}]}=\binom{2m+1}{2m}$. Now, $$2^{2m}=(1+1)^{2m}=\binom{2m}{0} +\binom{2m}1 +\binom{2m}2+\cdots+\binom{2m}{2m-1}+\binom{2m}{2m}$$ $$0=(1-1)^{2m}=\binom{2m}{0} -\binom{2m}1 +\binom{2m}2-\cdots-\binom{2m}{2m-1}+\binom{2m}{2m}$$ Adding, we get $2^{2m}=2\left(\binom{2m}{0}+\binom{2m}2+\cdots+\binom{2m}{2m-2}+\binom{2m}{2m} \right)$, i.e. $2^{2m-1}=\binom{2m}{0}+\binom{2m}2+\cdots+\binom{2m}{2m-2}+\binom{2m}{2m} \quad (1)$ Replacing $2m$ with $n$ and $\binom{2m}{2m}$ with $\binom n {2[\frac{n}{2}]}$ we get $2^{n-1}=\binom n 0+\binom n2+\cdots+\binom n{n-2}+\binom n {2[\frac{n}{2}]}$ Similarly, $$2^{2m+1}=(1+1)^{2m+1}=\binom{2m+1}{0} +\binom{2m+1}1 +\binom{2m+1}2+\cdots+\binom{2m+1}{2m}+\binom{2m+1}{2m+1}$$ $$0=(1-1)^{2m+1}=\binom{2m+1}{0} -\binom{2m+1}1 +\binom{2m+1}2-\cdots+\binom{2m+1}{2m}-\binom{2m+1}{2m+1}$$ Adding, we get $2^{2m+1}=2\left(\binom{2m+1}{0}+\binom{2m+1}2+\cdots+\binom{2m+1}{2m-2}+\binom{2m+1}{2m} \right)$, i.e. $2^{2m}=\binom{2m+1}{0}+\binom{2m+1}2+\cdots+\binom{2m+1}{2m-2}+\binom{2m+1}{2m} \quad (2)$ Replacing $2m+1$ with $n$ and $\binom{2m+1}{2m}$ with $\binom n {2[\frac{n}{2}]}$ we get $2^{n-1}=\binom n 0+\binom n2+\cdots+\binom n{n-2}+\binom n {2[\frac{n}{2}]}$
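The identity $2^{n-1}=\binom n0+\binom n2+\cdots+\binom n{2[\frac n2]}$ is easy to confirm for small $n$:

```python
from math import comb

def even_binomial_sum(n):
    """Sum of binomial(n, k) over even k from 0 to 2*floor(n/2)."""
    return sum(comb(n, k) for k in range(0, 2 * (n // 2) + 1, 2))

for n in range(1, 11):
    print(n, even_binomial_sum(n), 2 ** (n - 1))   # columns agree
```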
Completion of Measure Space
I believe that it is just a definition; I do not think it has anything to do with, for instance, the completeness of a metric space. Now there are sigma algebras in which we have a set $A$ with $\mu(A)=0$ and a $V \subseteq A$ such that $V$ is not in the sigma algebra. Take for instance the Borel sigma algebra on $\Bbb{R}^d$, $d \geq 1$, with the Lebesgue measure: there are sets with zero measure that have subsets that are not Borel measurable. The completion of the Borel sigma algebra is the sigma algebra $\mathcal{M}$ of Lebesgue measurable sets. With the completion and the appropriate theorem, we can extend the sigma algebra so that it contains all those "negligible" sets described in the pink highlighted text.
Evaluating $\lim_{x\to -\infty}\left(\sqrt{1+x+x^2}-\sqrt{1-x+x^2} \right )$
Conjugate multiplication will help, and what you got is correct. $$\lim_{x\to -\infty}\sqrt{1+x+x^2}-\sqrt{1-x+x^2}$$$$=\lim_{x\to -\infty}(\sqrt{1+x+x^2}-\sqrt{1-x+x^2})\cdot\frac{\sqrt{1+x+x^2}+\sqrt{1-x+x^2}}{\sqrt{1+x+x^2}+\sqrt{1-x+x^2}}$$ $$=\lim_{x\to -\infty}\frac{2x}{\sqrt{1+x+x^2}+\sqrt{1-x+x^2}}$$ (here setting $t=-x$ gives you) $$=\lim_{t\to \color{red}{+}\infty}\frac{-2t}{\sqrt{1-t+t^2}+\sqrt{1+t+t^2}}$$ $$=\lim_{t\to\infty}\frac{-2}{\sqrt{\frac{1}{t^2}-\frac{1}{t}+1}+\sqrt{\frac{1}{t^2}+\frac 1t+1}}$$ $$=\frac{-2}{1+1}=-1$$
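A quick numerical confirmation of the limit $-1$:

```python
import math

def f(x):
    return math.sqrt(1 + x + x * x) - math.sqrt(1 - x + x * x)

for x in (-1e3, -1e6, -1e9):
    print(x, f(x))   # approaches -1 as x -> -infinity
```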
Is it true that for any two events $A$ and $B$, $P(A)$ equals $0$ only if $P(A|B)$ is also $0$?
Showing $P(A) = 0 \implies P(A | B) = 0:$ Assume $P(A) = 0$. Since $A \wedge B \subseteq A$, we get $P(A \wedge B) = 0$, so $$P(A | B) = \frac{P(A \wedge B)}{P(B)} = \frac{0}{P(B)} = 0.$$ The converse is not necessarily true. As an example, let $$\Omega = \{1,2,3,4\},$$ with $\frac{1}{4}$ probability assigned to each sample point. Let $$A = \{1,2\}$$ and $$B = \{3,4\}.$$ We can see that $$P(A | B) = \frac{P(A \wedge B)}{P(B)} = \frac{P(\emptyset)}{P(B)} = \frac{0}{P(B)} = 0$$ but $P(A) = \frac{2}{4} = \frac{1}{2}.$
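The counterexample can be checked with exact rational arithmetic (the uniform-measure helper is my own):

```python
from fractions import Fraction

omega = {1, 2, 3, 4}
P = lambda E: Fraction(len(E), len(omega))   # uniform probability

A = {1, 2}
B = {3, 4}

P_A_given_B = P(A & B) / P(B)   # A and B are disjoint, so this is 0
print(P_A_given_B, P(A))        # 0 and 1/2
```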
Solve Quadratic diophantine equation in two unknowns.
As $m^2 + 1$ is a prime number it has to be odd ($2$ doesn't satisfy the given conditions), so assume the prime to be of the form $4k + 1$ ($m$ is even, hence $m^2$ is a multiple of $4$). Then use the second condition to obtain $n^2 = 40k + 9$; now $n$ can be $7, 13, 27$, etc., and you can get the corresponding values of $m$ by evaluating the value of $k$. Hope this helps :)
Proof of radicals of ideals being equal
For two ideals $\mathfrak{a},\mathfrak{b}$ we always have $$\mathfrak{a} \subset \mathfrak{b} \; \Rightarrow r(\mathfrak{a})\subset r(\mathfrak{b}).$$ But we also have $r(\mathfrak{a})\subset r(\mathfrak{a}^2)$, since $x\in r(\mathfrak{a})$ just means $x^n\in \mathfrak{a}$ for some $n$, and since $\mathfrak{a}$ is an ideal, then also $x^{2n}\in \mathfrak{a}^2$, thus giving $x\in r(\mathfrak{a}^2)$. Since $\mathfrak{a}^2 \subset \mathfrak{a}$, $$ r(\mathfrak{a}) = r(\mathfrak{a}^2).$$ Then you can argue $$r((\mathfrak{a} \cap \mathfrak{b})\cdot (\mathfrak{a} \cap \mathfrak{b})) \subset r(\mathfrak{a}\cdot \mathfrak{b}) \subset r(\mathfrak{a} \cap \mathfrak{b}) = r((\mathfrak{a} \cap \mathfrak{b})\cdot (\mathfrak{a} \cap \mathfrak{b})),$$ which then must entail equality.
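In $\mathbb{Z}$, where $(a)\cap(b)$ is generated by $\operatorname{lcm}(a,b)$ and the radical of $(n)$ is generated by the product of the distinct prime factors of $n$, the conclusion $r(\mathfrak{a} \cap \mathfrak{b}) = r(\mathfrak{a}\mathfrak{b})$ can be spot-checked (the `rad` helper is mine):

```python
from math import gcd

def rad(n):
    """Product of the distinct prime factors of n (trial division)."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * (n if n > 1 else 1)

def lcm(a, b):
    return a * b // gcd(a, b)

a, b = 12, 18
print(rad(lcm(a, b)), rad(a * b))   # both 6: r((a) ∩ (b)) = r((ab))
```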
Discrete Math Functions on infinity
You need to find the direct image of $(-\infty,0]$. Note that $f$ is increasing, $f(0)=1$ and $f(-\infty):=\displaystyle \lim_{x \to -\infty} 3^x=0$, so $f[(-\infty,0]]=(0,1]$. Now, you need to calculate the inverse image of $[9,+\infty)$. That is, the values of $x$ such that $9\leq 3^x <\infty$, which are $2\leq x<\infty$, so $f^{-1}[[9,+\infty)]=[2,+\infty )$.
Fixed point property for the total space and base space of a principal bundle
The group action on $P$ implies no principal bundle can have the fixed point property unless the fiber (group) $G$ is trivial: if $G$ is not trivial, the morphism of $P$ induced by the action of any non-identity element of $G$ has no fixed point, since the action is free. So if $P$, a principal bundle over $X$, has the fixed point property, then $G$ is trivial and $P=X$; therefore $X$ has the fixed point property.
In the regular hexagon tell each area,But How find this length with $B_{i}B_{i+1}$?
Definitions.Origin in the middle of the hexagon, $O = (p,q)$ , angle of normal $\overline{B_5 B_6}$ with x-axis $= \phi$ . Equations. $$ \begin{array}{l} \overline{B_5 B_6} \quad : \quad \cos(\phi)(x-p)+\sin(\phi)(y-q) = 0 \\ \overline{B_1 B_2} \quad : \quad \cos(\phi+\pi/3)(x-p)+\sin(\phi+\pi/3)(y-q) = 0 \\ \overline{B_3 B_4} \quad : \quad \cos(\phi-\pi/3)(x-p)+\sin(\phi-\pi/3)(y-q) = 0 \end{array} $$ Edges of the hexagon, assuming that the length of an edge $= R\,$ and $\;0 \le \lambda \le 1$ : $$ \begin{array}{l} \overline{A_6 A_1} \quad : \quad (x,y) = (A_6^x,A_6^y) + \lambda (\vec{A_1}-\vec{A_6}) = (-R/2,-R\sqrt{3}/2) + \lambda (R,0) \\ \overline{A_1 A_2} \quad : \quad (x,y) = (A_1^x,A_1^y) + \lambda (\vec{A_2}-\vec{A_1}) =(R/2,-R\sqrt{3}/2) + \lambda (R/2,R\sqrt{3}/2) \\ \overline{A_2 A_3} \quad : \quad (x,y) = (A_2^x,A_2^y) + \lambda (\vec{A_3}-\vec{A_2}) =(R,0) + \lambda (-R/2,R\sqrt{3}/2) \\ \overline{A_3 A_4} \quad : \quad (x,y) = (A_3^x,A_3^y) + \lambda (\vec{A_4}-\vec{A_3}) =(R/2,R\sqrt{3}/2) + \lambda (-R,0) \\ \overline{A_4 A_5} \quad : \quad (x,y) = (A_4^x,A_4^y) + \lambda (\vec{A_5}-\vec{A_4}) =(-R/2,R\sqrt{3}/2) + \lambda (-R/2,-R\sqrt{3}/2) \\ \overline{A_5 A_6} \quad : \quad (x,y) = (A_5^x,A_5^y) + \lambda (\vec{A_6}-\vec{A_5}) =(-R,0) + \lambda (R/2,-R\sqrt{3}/2) \end{array} $$ Determine intersection points of lines $\overline{B_i B_j}$ with hexagon edges $\overline{A_i A_j}$ as appropriate. Introduce $\tan(\phi)$ instead of $\cos(\phi)$ and $\sin(\phi)$ , and put $\;\tan(\phi) = t$ . 
For example $\overline{B_1B_2}$ with $\overline{A_1A_2}$ : $$ \cos(\phi+\pi/3)(\left[R/2 +\lambda R/2\right]-p) +\sin(\phi+\pi/3)(\left[-R\sqrt{3}/2 +\lambda R\sqrt{3}/2\right]-q) = 0 \quad \Longrightarrow \\ \lambda = \frac{\cos(\phi+\pi/3)(-R/2+p) +\sin(\phi+\pi/3)(R\sqrt{3}/2+q)} {\cos(\phi+\pi/3)R/2+\sin(\phi+\pi/3)R\sqrt{3}/2} =\\ 1/2+1/2\,{\frac {p}{R}}+1/2\,{\frac {\sin \left( \phi \right) \sqrt {3 }}{\cos \left( \phi \right) }}-1/2\,{\frac {\sin \left( \phi \right) \sqrt {3}p}{R\cos \left( \phi \right) }}+1/2\,{\frac {\sin \left( \phi \right) q}{R\cos \left( \phi \right) }}+1/2\,{\frac {\sqrt {3}q}{R}} \quad \Longrightarrow \\ \lambda = 1/2+1/2\,{\frac {p}{R}}+1/2\,t\sqrt {3}-1/2\,{\frac {\sqrt {3}pt}{R}}+ 1/2\,{\frac {qt}{R}}+1/2\,{\frac {\sqrt {3}q}{R}} $$ Substitute $\,\lambda\,$ into $\;(x,y)=(R/2,-R\sqrt{3}/2)+\lambda(R/2,R\sqrt{3}/2)\;$ to find $\,(B_1^x,B_1^y)$ . Likewise all the other coordinates: $$ \begin{array}{l} B_6^x = -1/2\,R+ \left( 1/2+{\frac {p}{R}}+1/2\,t\sqrt {3}+{\frac {qt}{R}} \right) R \quad ; \quad B_6^y = -1/2\,R\sqrt {3} \\ B_5^x = -1/2\,R+ \left( 1/2+{\frac {p}{R}}-1/2\,t\sqrt {3}+{\frac {qt}{R}} \right) R \quad ; \quad B_5^y = 1/2\,R\sqrt {3} \\ B_1^x = 1/2\,R+1/2\, \left( 1/2+1/2\,{\frac {p}{R}}+1/2\,t\sqrt {3}-1/2\,{ \frac {\sqrt {3}pt}{R}}+1/2\,{\frac {qt}{R}}+1/2\,{\frac {\sqrt {3}q}{ R}} \right) R \\ B_1^y = -1/2\,R\sqrt {3} \left( 1/2-1/2\,{\frac {p}{R}}-1/2\,t\sqrt {3}+1/2\,{ \frac {\sqrt {3}pt}{R}}-1/2\,{\frac {qt}{R}}-1/2\,{\frac {\sqrt {3}q}{ R}} \right) \\ B_2^x = -1/2\,R-1/2\, \left( 1/2-1/2\,{\frac {p}{R}}+1/2\,t\sqrt {3}+1/2\,{ \frac {\sqrt {3}pt}{R}}-1/2\,{\frac {qt}{R}}-1/2\,{\frac {\sqrt {3}q}{ R}} \right) R \\ B_2^y = 1/2\,R\sqrt {3} \left( 1/2+1/2\,{\frac {p}{R}}-1/2\,t\sqrt {3}-1/2\,{ \frac {\sqrt {3}pt}{R}}+1/2\,{\frac {qt}{R}}+1/2\,{\frac {\sqrt {3}q}{ R}} \right) \\ B_3^x = 1/2\,R+1/2\, \left( -1/2\,t\sqrt {3}+1/2\,{\frac {\sqrt {3}pt}{R}}+1/2 +1/2\,{\frac {p}{R}}-1/2\,{\frac {\sqrt {3}q}{R}}+1/2\,{\frac 
{qt}{R}} \right) R \\ B_3^y = 1/2\,R\sqrt {3} \left( 1/2+1/2\,t\sqrt {3}-1/2\,{\frac {\sqrt {3}pt}{R }}-1/2\,{\frac {p}{R}}+1/2\,{\frac {\sqrt {3}q}{R}}-1/2\,{\frac {qt}{R }} \right) \\ B_4^x = -R+1/2\, \left( 1/2\,t\sqrt {3}+1/2\,{\frac {\sqrt {3}pt}{R}}+1/2+1/2 \,{\frac {p}{R}}-1/2\,{\frac {\sqrt {3}q}{R}}+1/2\,{\frac {qt}{R}} \right) R \\ B_4^y = -1/2\,R\sqrt {3} \left( 1/2\,t\sqrt {3}+1/2\,{\frac {\sqrt {3}pt}{R}}+ 1/2+1/2\,{\frac {p}{R}}-1/2\,{\frac {\sqrt {3}q}{R}}+1/2\,{\frac {qt}{ R}} \right) \end{array} $$ The areas are calculated with determinants; $O_1$ as an example: $$ 2\,O_1 = -(B_4^x-A_6^x)(B_6^y-A_6^y)+(B_4^y-A_6^y)(B_6^x-A_6^x) + (B_4^x-p)(B_6^y-q)-(B_4^y-q)(B_6^x-p) $$ Giving rise to the following equations: $$ \begin{array}{l} O_1 = 1/4\,{R}^{2}\sqrt {3}-3/8\,{p}^{2}t+3/4\,qR+3/4\,pq+1/4\,\sqrt {3}pR-3 /4\,tRp+1/4\,\sqrt {3}Rqt-1/4\,\sqrt {3}pqt-1/8\,\sqrt {3}{p}^{2}+1/8 \,\sqrt {3}{q}^{2}+3/8\,{q}^{2}t = 1363 \\ O_2 = 1/4\,{R}^{2}\sqrt {3}+3/8\,{p}^{2}t+3/4\,qR-3/4\,pq-1/4\,\sqrt {3}pR-3 /4\,tRp-1/4\,\sqrt {3}Rqt-1/4\,\sqrt {3}pqt-1/8\,\sqrt {3}{p}^{2}+1/8 \,\sqrt {3}{q}^{2}-3/8\,{q}^{2}t = 1775 \\ O_3 = 1/4\,\sqrt {3} \left( -2\,pR+{p}^{2}+{R}^{2}-{q}^{2}-2\,Rqt+2\,pqt \right) = 3115 \\ O_4 = -3/8\,{p}^{2}t-1/4\,\sqrt {3}pR+3/4\,pq-3/4\,qR-1/8\,\sqrt {3}{p}^{2}+ 1/4\,{R}^{2}\sqrt {3}+3/8\,{q}^{2}t+1/8\,\sqrt {3}{q}^{2}-1/4\,\sqrt { 3}Rqt-1/4\,\sqrt {3}pqt+3/4\,tRp = 4067 \\ O_5 = 3/8\,{p}^{2}t+1/4\,\sqrt {3}pR-3/4\,pq-3/4\,qR-1/8\,\sqrt {3}{p}^{2}+1 /4\,{R}^{2}\sqrt {3}-3/8\,{q}^{2}t+1/8\,\sqrt {3}{q}^{2}+1/4\,\sqrt {3 }Rqt-1/4\,\sqrt {3}pqt+3/4\,tRp = 3127 \\ O_6 = 1/4\,\sqrt {3} \left( 2\,pR+{p}^{2}+{R}^{2}-{q}^{2}+2\,Rqt+2\,pqt \right) = 1763 \end{array} $$ These are $6$ equations $\;(O_1,O_2,O_3,O_4,O_5,O_6) = (0,0,0,0,0,0)\;$ with $3$ unknowns $\;(p,q,t)\;$ , typically an over-determined system. 
Let's first substitute the real length of the hexagon edges: $$ R = \sqrt{15210/(3\sqrt{3}/2)} $$ If we now solve only the first three $\;(O_1,O_2,O_3) = (0,0,0)\;$ then we get two exact solutions with our computer algebra system. One of these involves a tangent $\,t \approx -1.427142752$ , corresponding with too large an angle of $\;-55^\circ$ . The other solution must be the right one: $$ p = -\frac{42\sqrt{5\sqrt{3}}}{13} \quad ; \quad q = -\frac{46\sqrt{5\sqrt{3}}\sqrt{3}}{13} \quad ; \quad t = \frac{\sqrt{3}}{45} $$ And here come the corresponding areas, exactly as they are in the OP's picture : O1 = 1363. O2 = 1775. O3 = 3115. O4 = 4067. O5 = 3127. O6 = 1763. At last, we calculate the lengths $\,x\,$ of the $\,\overline{B_i B_j}$ . Indeed they are all the same and they are only dependent upon the (tangent $\,t\,$ of the) angle $\phi$ : $$ x = \sqrt{3 R^2 (t^2+1)} \times 15/R = \huge 26. $$ Where the scaling factor with respect to $\overline{A_1A_2} = 15$ has been taken into account.
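The final length $x$ can be confirmed directly from the solved value $t=\sqrt3/45$ and the scaling $15/R$, independently of the rest of the computation:

```python
import math

t = math.sqrt(3) / 45
x = 15 * math.sqrt(3 * (t * t + 1))   # sqrt(3 R^2 (t^2+1)) * 15/R
print(x)                              # 26.0, since 3*(t^2+1) = (26/15)^2

# The given areas should also sum to the hexagon area R^2 * 3*sqrt(3)/2.
areas = [1363, 1775, 3115, 4067, 3127, 1763]
print(sum(areas))                     # 15210, matching R as defined above
```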
Found a real number $t$ such that $(14 + 5 \sqrt{3})(5 - \sqrt{3})\sqrt{8- 2 \sqrt{15}}= t \sqrt{2}$
Hint: $$8-2\sqrt{15}=\cdots=(\sqrt5-\sqrt3)^2$$ We know for real $a,$ $$\sqrt{a^2}=|a|$$ which $=a,$ if $a\ge0$ else $=-a$ Now just multiply out
Does the restriction of the well ordering principle to a fixed uncountable cardinal imply the general well ordering principle?
The answer is negative. And you are correct. To construct a Vitali set one only needs choice up to $2^{\aleph_0}$, or that the real numbers can be well ordered (which might happen regardless of how badly choice fails in the universe). The universe of set theory has sets much, much larger than the real numbers, and even more is true: the universe is built in steps, and the real numbers appear fairly soon in the construction, so the axiom of choice could hold for sets up to some step and then fail badly above some point of the construction of the universe. One last remark would be that "cardinal" is ambiguous without choice. Specifically, some people use it exclusively to talk about well-ordered cardinals, and in that case every set which has a cardinal is equipotent to an ordinal and hence has a well ordering; it's just that the real numbers might not have a cardinal. I prefer the broader usage of the term cardinal, talking about cardinals of arbitrary sets, in which case the question has some actual content to it.
What do we need to guarantee the existence of the extension of a bounded linear operator?
If $X$ is complemented in $X_1$, then the projection $\pi_X :X_1\to X$ is continuous, and $T_1 : X_1 \to Y$ defined by $T_1 (x) =T(\pi_X (x))$ is an extension of $T.$
How do I prove whether a relation is symmetric, transitive or reflexive?
Hints: $(1)$ (Reflexive?) Can it ever be the case that $x \neq x$? No: no ordered pair of the form $(x, x)$ can be in the relation $R$. (Symmetric?) For all $x, y \in \mathbb Z$, does $x\neq y $ imply $y\neq x$? Of course it does. (Transitive?) Suppose $x \neq y$ and $y\neq z$. Does this necessarily mean that $x\neq z$? Hint: let $x = z = 1, y = 2$. So $(1, 2) \in R$ and $(2, 1) \in R$, but clearly, $(1, 1)\notin R$ because $1 = 1$.
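These three properties can also be brute-force checked on a finite sample of $\mathbb Z$; a small sketch (the variable names are mine):

```python
# R = {(x, y) : x != y} restricted to a finite sample of the integers.
sample = range(-3, 4)
R = {(x, y) for x in sample for y in sample if x != y}

reflexive = all((x, x) in R for x in sample)
symmetric = all((y, x) in R for (x, y) in R)
transitive = all((x, z) in R for (x, y) in R for (w, z) in R if y == w)

print(reflexive, symmetric, transitive)  # False True False
```

The transitivity counterexample $x = z = 1$, $y = 2$ is exactly what the brute-force search trips over.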
Probability of lottery jackpot with or without bonus ball
Not correct. Note that the event of choosing $6$ regular balls and bonus ball correctly is a subset of the event of choosing $6$ regular balls correctly. Thus the probability of winning with or without the bonus ball is just $\frac{1}{49\choose 6}$. If you want to break it out by cases: $6$ correct without bonus or $6$ correct with bonus, you can rewrite as $$\frac{1}{{49\choose 6}}\cdot \frac{48}{49}+\frac{1}{{49\choose 6}}\cdot \frac{1}{49}$$ which conditions on whether you don't or do get the bonus ball (along with the $6$ regular balls).
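An exact-arithmetic check that the two-case decomposition collapses back to $\frac{1}{\binom{49}{6}}$ (a sketch):

```python
from fractions import Fraction
from math import comb

# Probability of matching all six regular balls.
p6 = Fraction(1, comb(49, 6))
# Split by whether the bonus ball is missed (48/49) or matched (1/49).
split = p6 * Fraction(48, 49) + p6 * Fraction(1, 49)
print(split == p6)  # True
```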
What does Z subscript + mean in sets
$$ \Bbb{Z}_+ = \Bbb{Z}_{>0} = \{ n \in \Bbb{Z} \mid n > 0 \} $$
Closed linear span of a subset in LCS is equivalent to intersection of all closed hyperplanes containing the subset
Solved myself - First note that all closed hyperplanes in $\mathcal{X}$ are kernels of elements of $\mathcal{X}^*$, and vice versa. The closed linear span of $A$ clearly lies in this intersection. To show the opposite containment, note that the closed linear span of $A$ is a closed convex subset of $\mathcal{X}$, so it can be strictly separated from any point $x_0$ outside it; i.e., there exist $f\in\mathcal{X}^*$ and $\alpha\in\mathbb{R}$ such that $f(x_0)<\alpha$ and $f(m)>\alpha$ for all $m$ in the span. Since the span is a subspace, $f$ must in fact vanish on it (a linear functional bounded below on a subspace is identically zero there), forcing $\alpha<0$. Thus $\ker f$ is a closed hyperplane containing $A$ but not $x_0$, since $f(x_0)<\alpha<0$. Hence $x_0$ is not in the intersection.
Double Integral Area Calculation
If you have not yet learned how to transform the coordinate system in double integration... The region bound by the line the circle and the parabola can be broken into two. $\displaystyle \int_0^1 \int_{\frac 12y^2}^{\sqrt{1 -y^2}+1} xy\ dx\ dy + \int_1^2 \int_{\frac 12 y^2}^y xy\ dx\ dy$ Alternatively: $\displaystyle \int_0^1 \int_{\sqrt{2x -x^2}}^{\sqrt {2x}} xy\ dy\ dx + \int_1^2 \int_{x}^{\sqrt {2x}} xy\ dy\ dx$ and in polar coordinates $\displaystyle \int_{\frac \pi4}^\frac\pi2 \int_{2\cos\theta}^{2\csc\theta\cot\theta} r\sin\theta\cos\theta \ dr\ d\theta$
Largest set of consecutive prime numbers.
2 is the most possible. Suppose otherwise; start with $\{p_n,p_{n+1}\}$ and assume that $p_np_{n+1} + 1= p_{n+2}$ (or equivalently, $p_np_{n+1} = p_{n+2}-1$). By Bertrand's Postulate, $p_{n+2} < 2p_{n+1}$, so $p_np_{n+1} = p_{n+2} - 1 < 2p_{n+1}$, giving us $p_n < 2$, a contradiction.
group cohomology of abelianization
In general, a short exact sequence of groups $1 \to N \to G \to G/N \to 1$ leads to a spectral sequence of group cohomology, with $$ E_2^{pq} = H^p(G/N, H^q(N, A)) \implies H^{p+q}(G,A).$$ This is known as the Lyndon-Hochschild-Serre spectral sequence. This tells us roughly that $H^3(G)$ will be determined by the cohomologies in pairs of degrees which add to $3$, modulo relations arising from total degrees $2$ and $4$. Note here that $H^q(N,A)$ is a $G/N$-module, and may be nontrivial even if $A$ is a trivial $G$-module. So, this cohomology depends on the way $N$ and $G/N$ interact in $G$.
How do I find the sum of the series $\sum _{ n=1 }^{ \infty }{ (x+5)^n}$ for the values $-6<x<-4$?
Use geometric series to get \begin{align} \sum^\infty_{n=1} (x+5)^n = \frac{x+5}{1-(x+5)} = -\frac{x+5}{x+4} \end{align} since $|x+5|<1$.
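A numeric spot-check at a point inside the interval (the choice $x = -5.5$ is arbitrary):

```python
x = -5.5  # any point with -6 < x < -4, i.e. |x + 5| < 1
partial = sum((x + 5) ** n for n in range(1, 60))  # partial sum, converges fast
closed = -(x + 5) / (x + 4)
print(partial, closed)  # both ≈ -0.3333...
```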
Moment generating functions...which distributions to use?
The moment generating function is that of a gamma distribution. If you look up "gamma distribution wiki", you'll find the moment generating function and that $\alpha$ and $\theta$ are shape and scale parameters, respectively. (And you can click on "shape" and "scale" on that page to find definitions of those terms.) Of course, this only addresses your question about what $\alpha$ and $\theta$ represent.
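A numeric sanity check that the shape/scale gamma density has MGF $(1-\theta t)^{-\alpha}$ for $t < 1/\theta$ (a sketch; the parameter values here are hypothetical):

```python
import math

alpha, theta, t = 2.0, 3.0, 0.1  # hypothetical shape, scale, and t < 1/theta

def pdf(x):
    # Gamma density with shape alpha and scale theta.
    return x ** (alpha - 1) * math.exp(-x / theta) / (math.gamma(alpha) * theta ** alpha)

# Midpoint-rule approximation of E[e^{tX}] on [0, 400]; the tail beyond is negligible.
n, hi = 200_000, 400.0
h = hi / n
mgf_num = sum(math.exp(t * (i + 0.5) * h) * pdf((i + 0.5) * h) for i in range(n)) * h
mgf_closed = (1 - theta * t) ** (-alpha)
print(mgf_num, mgf_closed)  # both ≈ 2.0408
```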
Closed form of recurrent arithmetic series summation
We can write the last multiple sum as \begin{align*} \color{blue}{\sum_{i_1=1}^n\sum_{i_2=1}^{i_1}\sum_{i_3=1}^{i_2}i_3} &=\sum_{i_1=1}^n\sum_{i_2=1}^{i_1}\sum_{i_3=1}^{i_2}\sum_{i_4=1}^{i_3} 1\\ &=\sum_{1\leq i_4\leq i_3\leq i_2\leq i_1\leq n}1\tag{1}\\ &\,\,\color{blue}{=\binom{n+3}{4}}\tag{2} \end{align*} In (1) we observe the index range is the number of ordered $4$-tuples with repetition from a set with $n$ elements resulting in (2).
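The closed form can be brute-force confirmed for small $n$ (a sketch):

```python
from math import comb

# Compare the triple sum with the binomial coefficient C(n+3, 4).
for n in range(1, 15):
    s = sum(i3 for i1 in range(1, n + 1)
               for i2 in range(1, i1 + 1)
               for i3 in range(1, i2 + 1))
    assert s == comb(n + 3, 4)
print("ok")
```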
Approximation of a $L^1$ function by a dominated sequence of continuous functions
Yes, your proposition is true. By density of the continuous functions, there is a sequence of continuous $g_n$ with $g_n \to f$ in $L^1$. Replacing $g_n$ with $g_n^+$, we can assume $g_n \ge 0$. Passing to a subsequence, we can also assume $g_n \to f$ almost everywhere. Passing to a further subsequence, we can assume $\|g_n - f\|_{L^1} \le 2^{-n}$. (Actually, the first "pass to a subsequence" is unnecessary. As soon as $\sum_n \|g_n - f\|_{L^1} < \infty$, a Borel-Cantelli argument ensures that $g_n \to f$ almost everywhere.) I claim $g_n$ is the desired sequence. It remains to construct the dominating function $g$. Let $h_n = (g_n - f)^+$. Then $h_n$ is measurable, $\|h_n\|_{L^1} \le \|g_n -f\|_{L^1} \le 2^{-n}$, and we have $h_n \ge 0$ and $g_n \le f + h_n$. Set $g = f + \sum_{n=1}^\infty h_n$. Now by monotone convergence $$\int \sum_{n=1}^\infty h_n = \sum_{n=1}^\infty \int h_n \le \sum_{n=1}^\infty 2^{-n} = 1$$ so $g$ is integrable. And for each $n$ we have $g_n \le f + h_n \le g$. This argument didn't use any topology. Indeed, it still goes through if we replace $\mathbb{T}$ by any measure space $(X,\mu)$, and replace $C(\mathbb{T})$ by any dense subset $E \subset L^1(X,\mu)$ which is closed under the "positive part" operation. Maybe that latter condition can be weakened even further.
Is $\ker(T)=U\cap W$ right?
$1.$ No, because $\ker T\subset U\times W\subset V\times V$ whereas $U\cap W\subset V$. However it is isomorphic to $U\cap W$, since $\ker T=\{(u,w)\mid u\in U,\;w\in W, \;u=w\}=\{(u,u)\mid u\in U\cap W\}$, whence the inverse isomorphisms: $$\begin {align} \ker T&\longrightarrow U\cap W&&&U\cap W&\longrightarrow \ker T\\ (u,u)&\longmapsto u&&&u&\longmapsto(u,u) \\ \end{align}$$
If some power $A^n$ of a matrix $A$ is symmetric, is $A$ necessarily symmetric?
No. Consider the $2 \times 2$ Jordan block $$\pmatrix{0 & 1 \\ 0 & 0},$$ or the matrix $$\pmatrix{0 & -1 \\ 1 & 0},$$ which represents an anticlockwise rotation by $\frac{\pi}{2}$.
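Both counterexamples can be verified by direct $2 \times 2$ multiplication (a sketch; the helper names are mine):

```python
def matmul(A, B):
    # Plain 2x2 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def is_symmetric(A):
    return A[0][1] == A[1][0]

J = [[0, 1], [0, 0]]   # Jordan block: J^2 = 0 is symmetric, J is not
R = [[0, -1], [1, 0]]  # rotation by pi/2: R^2 = -I is symmetric, R is not

print(is_symmetric(J), is_symmetric(matmul(J, J)))  # False True
print(is_symmetric(R), is_symmetric(matmul(R, R)))  # False True
```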
Solving exponential complex equation.
We have $\displaystyle e^{z^2}=1=e^{2k\pi i}$ where $k$ is any integer $\displaystyle\implies z^2=2k\pi i$ $\displaystyle\implies z=\sqrt{2k\pi i}=\sqrt{2k\pi }\sqrt i$ Method $1:$ By observation, $i=\frac{i^2+1+2i}2=\frac{(i+1)^2}2$ $\displaystyle\implies i^{\frac12}=\pm\frac{(1+i)}{\sqrt2}$ Method $2:$ Let $\sqrt{i}=x+iy$ where $x,y$ are real Squaring we get, $i=(x+iy)^2=x^2-y^2+2xyi$ Equating the real & the imaginary parts we get $x^2-y^2=0$ and $2xy=1$ From the first relation $x=\pm y$ If $x=-y, 1=2xy=2(-y)y\implies y^2=-\frac12$ which is impossible as $y$ is real, $\implies y^2\ge0$ $\implies x=y$ and $1=2xy=2(y)y\implies y^2=\frac12\implies y=\pm\frac1{\sqrt2}$ $\implies \sqrt i=\pm\left(\frac1{\sqrt2}+i\frac1{\sqrt2}\right)=\pm\frac{1+i}{\sqrt2}$ Method $3:$ Let $i=r(\cos\theta+i\sin\theta)=r\cos\theta+ir\sin\theta$ where real $r\ge0$ Equating the real & the imaginary parts we get $r\cos\theta=0\ \ \ \ (1)$ and $r\sin\theta=1\ \ \ \ (2) $ From $(1),$ either $r=0$ or $\cos\theta=0$ If $r=0,$ from $(2),1=r\sin\theta=0$ which is impossible $\implies r>0$ and $\cos\theta=0\implies \sin\theta=\pm1$ If $\sin\theta=-1,1=r\sin\theta=-r\iff r=-1$ which is impossible as $r>0$ $\implies \sin\theta=1\implies \theta=2n\pi+\frac\pi2$ where $n$ is any integer $\implies i=\cos(2n\pi+\frac\pi2)+i\sin(2n\pi+\frac\pi2)=e^{(2n\pi+\frac\pi2)i}$ (using Euler's Formula) Using de Moivre's formula, $\implies i^{\frac12}=e^{\frac{(2n\pi+\frac\pi2)i}2}$ where $n=0,1$
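A quick check of the square roots and of a few of the solutions with Python's complex arithmetic (a sketch):

```python
import cmath
import math

root = (1 + 1j) / math.sqrt(2)
print(abs(root ** 2 - 1j))  # ~0, so (±(1+i)/sqrt(2))^2 = i

# Spot-check a few solutions z = sqrt(2*k*pi) * sqrt(i) of e^(z^2) = 1.
for k in range(5):
    z = math.sqrt(2 * k * math.pi) * root
    assert abs(cmath.exp(z * z) - 1) < 1e-9
```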
Identity for central binomial coefficients
It appears to be true for $x > .8305123339$ approximately: $c_x \to 0$ as $x \to 0+$.
If $a,b > 1$ and $r>2$ does $ax^2+by^2=z^r$ have any rational solutions?
It's quite easy to parameterize, $$\color{red}ax^2+\color{red}by^2 =z^k$$ for odd $k$. Assume, $$x^2+by^2 = (p^2+bq^2)^k$$ $$(x+y\sqrt{-b})(x-y\sqrt{-b}) = (p+q\sqrt{-b})^k(p-q\sqrt{-b})^k$$ Equate factors and solve for $x,y$. Hence, $$x =\frac{\alpha+\beta}{2},\quad y = \frac{\alpha-\beta}{2\sqrt{-b}},\quad\text{where}\quad \alpha = (p+q\sqrt{-b})^k,\quad \beta = (p-q\sqrt{-b})^k$$ For example, let $k = 3$. Then, $$(p (p^2 - 3 b q^2))^2 + b(3 p^2q - b q^3)^2 = (p^2+b q^2)^3$$ Now replace $p$ with $\sqrt{a}\,p$. Then, $$\color{red}a\,\big(p(ap^2 - 3 b q^2)\big)^2 + \color{red}b(3 ap^2q - b q^3)^2 = (ap^2+b q^2)^3\tag{k=3}$$ for free variables $p,q$. Let $k = 5$, $\color{red}a\,\big(p(a^2p^4 - 10 a b p^2q^2 + 5 b^2 q^4)\big)^2 + \color{red}b(5 a^2p^4q - 10 a bp^2 q^3 + b^2 q^5)^2 = (ap^2+b q^2)^5\tag{k=5}$ and so on.
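The $k = 3$ identity (with the substitution $p\to\sqrt{a}\,p$ carried through, so the first square keeps a factor of $p$) can be verified in exact integer arithmetic for random values (a sketch):

```python
import random

# a*(p*(a*p^2 - 3*b*q^2))^2 + b*(3*a*p^2*q - b*q^3)^2 == (a*p^2 + b*q^2)^3
for _ in range(200):
    a, b, p, q = (random.randint(-30, 30) for _ in range(4))
    lhs = a * (p * (a * p * p - 3 * b * q * q)) ** 2 \
        + b * (3 * a * p * p * q - b * q ** 3) ** 2
    rhs = (a * p * p + b * q * q) ** 3
    assert lhs == rhs
print("identity verified")
```

Since both sides are polynomials with integer coefficients, agreement on enough random integer points is strong evidence, and the expansion confirms it symbolically.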
Show that for each $m\in \mathbb{N}$, there exists $n\in \mathbb{N}$ such that $x^2\equiv 1 \pmod n$ has at least $m$ solutions?
Let $p$ be an odd prime. $x^2\equiv 1 \pmod p$ $\Rightarrow$ $x^2-1\equiv (x-1)(x+1)\equiv 0 \pmod p$ $\Rightarrow$ $x \equiv 1$ or $x \equiv -1 \pmod p$, and these two residues are distinct since $p$ is odd. Now let $n=p_1p_2\cdots p_k$ be a product of $k$ distinct odd primes and consider $x^2\equiv 1 \pmod n$. Then $x^2 \equiv 1 \pmod {p_1} $ $\Rightarrow$ $x \equiv \pm1 \pmod{p_1} $ $x^2 \equiv 1 \pmod{p_2}$ $\Rightarrow$ $x \equiv \pm1 \pmod{p_2} $ $\vdots$ $x^2 \equiv 1 \pmod{p_k}$ $\Rightarrow$ $x \equiv \pm1 \pmod{p_k} $ By the Chinese Remainder Theorem, every choice of signs $a_i \in \{-1,1\}$, $$(a_1,a_2,...,a_k)=(x \pmod{p_1},x \pmod{p_2},...,x \pmod{p_k})$$ corresponds to exactly one solution of $x^2 \equiv 1 \pmod{n}$. There are $k$ primes and two options per prime, so there are $2^k$ solutions. Finally, for every $m \in \mathbb N^+$ there exists $k \in \mathbb N$ such that $2^{k}\ge m$, which proves your statement.
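The solution count $2^k$ can be confirmed by brute force for small products of distinct odd primes (a sketch):

```python
# Count solutions of x^2 ≡ 1 (mod n) for n a product of k distinct odd primes;
# by CRT there should be exactly 2^k of them.
for n, k in [(3 * 5, 2), (3 * 5 * 7, 3), (3 * 5 * 7 * 11, 4)]:
    count = sum(1 for x in range(n) if x * x % n == 1)
    print(n, count, 2 ** k)
    assert count == 2 ** k
```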
Applications of abstract algebra?
Field theory, which is part of abstract algebra, is used in coding theory and cryptography, which are responsible for technological security.
From singular homology to 'regular' homology
This is a much more difficult chain complex to work with. If say $X$ is an $m$-manifold, then there will not be any "regular" simplices of dimension $>m$. Therefore in the "regular" homology, every $m$-cycle will give a distinct homology class. So say $H_1^{\text{reg}}(S^1)$ will be uncountable. We can divide up the usual generator of the homology of $S^1$ into a sum of two regular 1-chains in uncountably many ways and these will give distinct elements of $H_1^{\text{reg}}(S^1)$.
Find $\lim_{n \to \infty} \sqrt[n]{n^2+1} $
Let $a_n = \sqrt[n]{n^2 + 1}$. Then $$n^{2/n} < a_n < (n + 1)^{2/n}.$$ Since $\lim_{n\to \infty} n^{1/n} = 1$ and $\lim_{n\to \infty} (n + 1)^{1/n} = 1$, it follows that the left- and right-most sides of the above inequality tend to $1$ as $n\to \infty$. Therefore, by the squeeze theorem, $\lim_{n\to \infty} a_n = 1$.
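A numerical illustration of the squeeze (non-strict comparisons, since the bounds become indistinguishable in floating point for large $n$):

```python
# n^(2/n) <= (n^2 + 1)^(1/n) <= (n+1)^(2/n), and all three tend to 1.
for n in [10, 100, 10_000, 10 ** 6]:
    a_n = (n * n + 1) ** (1.0 / n)
    assert n ** (2.0 / n) <= a_n <= (n + 1) ** (2.0 / n)
    print(n, a_n)
```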
Proving general result in limits $\lim\limits_{x\to0}[f(x)+g(x)]=\lim\limits_{x\to0}[f(x)]+\lim\limits_{x\to0}[g(x)]$
Your book is making it sound more complicated than it is. In general, it will help to think of theorems like this as facts about error control. Specifically, suppose $\lim_{x\rightarrow 0}f(x)$ and $\lim_{x\rightarrow 0}g(x)$ each exist - and call them $a$ and $b$, respectively. We want to show that $$\lim_{x\rightarrow 0}[f(x)+g(x)]=a+b.$$ Looking at the definition, this means that we need to argue that: For any $\epsilon>0$ there is some $\delta>0$ such that whenever $0<\vert x\vert<\delta$ we have $\vert (f(x)+g(x))-(a+b)\vert<\epsilon$. And our hypotheses are: For any $\epsilon_1>0$ there is some $\delta_1>0$ such that whenever $0<\vert x\vert<\delta_1$ we have $\vert f(x)-a\vert<\epsilon_1$. For any $\epsilon_2>0$ there is some $\delta_2>0$ such that whenever $0<\vert x\vert<\delta_2$ we have $\vert g(x)-b\vert<\epsilon_2$. Note that I've given the variables distinct names here - I didn't have to do this, but it winds up clarifying the situation. The "error control" bit comes in when we think about how to manage the quantity $$\vert (f(x)+g(x))-(a+b)\vert,$$ given that we know how to manage the quantities $$\vert f(x)-a\vert\quad\mbox{and}\quad\vert g(x)-b\vert.$$ Specifically, we'll use the triangle inequality, which tells us that $$\vert(f(x)+g(x))-(a+b)\vert\le \vert f(x)-a\vert+\vert g(x)-b\vert.$$ (Informally: "the error in the sum is at most the sum of the errors.") OK, that probably looks mysterious. The triangle inequality does indeed connect one of the things we care about with two things we understand, but how does that actually help us solve the problem? Well, suppose you give me an $\epsilon>0$. I need to find a $\delta>0$ such that $\vert (f(x)+g(x))-(a+b)\vert<\epsilon$ whenever $x$ is within $\delta$ of $0$ (excluding $x=0$ itself).
By the triangle inequality, I'll be happy if I can ensure that $$\vert f(x)-a\vert+\vert g(x)-b\vert<\epsilon$$ for all such $x$, which in turn will happen if we know that $$\vert f(x)-a\vert<{\epsilon\over 2}\quad\mbox{and}\quad\vert g(x)-b\vert<{\epsilon\over 2}$$ for all such $x$. But those sorts of constraints are exactly what our hypotheses let us get! Specifically, noting that ${\epsilon\over 2}>0$ (since $\epsilon>0$), we have: Because $\lim_{x\rightarrow 0}f(x)=a$, there is some $\delta_1>0$ such that if $0<\vert x\vert<\delta_1$ then $\vert f(x)-a\vert<{\epsilon\over 2}$. Because $\lim_{x\rightarrow 0}g(x)=b$, there is some $\delta_2>0$ such that if $0<\vert x\vert<\delta_2$ then $\vert g(x)-b\vert<{\epsilon\over 2}$. Combining all this, we have: If $0<\vert x\vert<\delta_1$ and $0<\vert x\vert<\delta_2$, then $\vert (f(x)+g(x))-(a+b)\vert<\epsilon$. So what should our $\delta$ be here? Just $\min\{\delta_1,\delta_2\}$ - go with whichever is smaller!
A problem of factoring a polynomial with a hint
Notice that $P(a_i)Q(a_i)=1$ for all $i$. But since $P$ and $Q$ have integer coefficients and the $a_i$ are integers, this means $P(a_i)=Q(a_i)=1$ or $P(a_i)=Q(a_i)=-1$ for each $i$. In particular, $a_i$ is a root of $P-Q$ for each $i$, so $P-Q$ is either $0$ or has degree at least $n$. But, since $\deg(PQ)=n$, the only way $P-Q$ can have degree at least $n$ is if one of $P$ and $Q$ has degree $n$ and the other is a constant. Presumably this possibility is meant to be excluded by the requirement that $P$ and $Q$ are "smaller" polynomials. The only remaining possibility is that $P-Q=0$, so $P=Q$. This actually is possible--for instance, as darij grinberg commented, you could have $n=2$, $a_1=1$, and $a_2=-1$ and so the polynomial is $(x-1)(x+1)+1=x^2$ and so we can have $P(x)=Q(x)=x$. Another example (with $n=4$) is $$(x-1)(x-2)(x-3)(x-4)+1=(x^2-5x+5)^2.$$ So, the problem statement is not quite correct without some additional assumption to rule out this case.
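The $n = 4$ example can be checked at enough integer points to pin down both degree-4 polynomials (a sketch):

```python
# (x-1)(x-2)(x-3)(x-4) + 1 == (x^2 - 5x + 5)^2 at 15 points forces equality
# of two degree-4 polynomials (5 points would already suffice).
for x in range(-5, 10):
    lhs = (x - 1) * (x - 2) * (x - 3) * (x - 4) + 1
    rhs = (x * x - 5 * x + 5) ** 2
    assert lhs == rhs
print("P = Q = x^2 - 5x + 5 works")
```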
Find a False Counterexample for : "$\forall$ $A, B$ and $C$, we have $A \cap (B - C) = (A \cap B) - (A \cap C)$"
Here we give a false counterexample. Assume that we always have the following $$\color{red}{A \cap (B - C) \not= (A \cap B) - (A \cap C)} \tag{F}$$ We aim to construct a counterexample to the statement (F). To this end, we consider that our universe is $$E =\{\color{red}{1,2,3,4,5,6,7,8,9,10}\color{blue}{,a,b,c,d,e,f,g,h,i,j}\}$$ We consider the sets $$A =\{\color{red}{1,2,3,4,5}\color{blue}{,a,b,c,d,i,j}\}$$ $$~~~~~~~B =\{\color{red}{1,2,3,4,9,10}\color{blue}{,a,b,c,d,e,f,g}\}$$ $$C =\{\color{red}{4,5,6,7,8}\color{blue}{,d,e,f,g,h,i,j}\}$$ We have \begin{split} A\cap B&=& \{\color{red}{1,2,3,4}\color{blue}{,a,b,c,d}\}\\ A\cap C &=&\{\color{red}{4,5}\color{blue}{,d,i,j}\}\\ C^c=\complement_E^C &=&\{\color{red}{1,2,3,9,10}\color{blue}{,a,b,c}\}\\ B-C =B\cap C^c&= &\{\color{red}{1,2,3,9,10}\color{blue}{,a,b,c}\}\\ (A\cap C )^c=\complement_E^{A\cap C }&=&\{\color{red}{1,2,3,6,7,8,9,10}\color{blue}{,a,b,c,e,f,g,h}\} \end{split} We then obtain: \begin{split} A \cap (B - C) &=&\{\color{red}{1,2,3,4,5}\color{blue}{,a,b,c,d,i,j}\}\cap \{\color{red}{1,2,3,9,10}\color{blue}{,a,b,c}\}\\&=&\{\color{red}{1,2,3}\color{blue}{,a,b,c}\}\\ and\\ (A\cap B)- (A\cap C) &=&A\cap B\cap (A\cap C )^c\\&=&\{\color{red}{1,2,3,4}\color{blue}{,a,b,c,d}\}\cap\{\color{red}{1,2,3,6,7,8,9,10}\color{blue}{,a,b,c,e,f,g,h}\}\\&=&\{\color{red}{1,2,3}\color{blue}{,a,b,c}\} \end{split} From this particular example, we see that $$A \cap (B - C)=\{\color{red}{1,2,3}\color{blue}{,a,b,c}\} =(A\cap B)- (A\cap C).$$ Therefore the statement (F) is false.
I would rather propose another proof using only De Morgan's laws. First, by definition we have $$A \cap (B - C) = A \cap( B \cap C^c) \tag{I} $$ On the other hand, $$ (A \cap B) - (A \cap C) = (A \cap B) \cap (A \cap C)^c \\= (A \cap B) \cap (A^c \cup C^c) \\=(A \cap B\cap C^c) \cup (A \cap B \cap A^c) $$ But $$(A \cap B \cap A^c)= \emptyset$$ Thus, $$ (A \cap B) - (A \cap C) =(A \cap B\cap C^c) \tag{II} $$ (I) and (II) give $$\color{red}{A \cap (B - C) = A \cap( B \cap C^c) = (A \cap B) - (A \cap C)}$$ This proves that the statement is true for every set $A$, $B$ and $C$.
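The identity can also be spot-checked on random subsets of a small universe (a sketch; it holds for arbitrary sets, so no counterexample should turn up):

```python
import random

# A ∩ (B − C) == (A ∩ B) − (A ∩ C) for random subsets of {0, ..., 19}.
universe = list(range(20))
for _ in range(1000):
    A, B, C = (set(random.sample(universe, random.randint(0, 20)))
               for _ in range(3))
    assert A & (B - C) == (A & B) - (A & C)
print("no counterexample found")
```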
Rewriting statements with quantifiers to full detail
I'm not sure where you think you are confused. Each time, you have replaced the quantified variables ($x$ and $y$) with the appropriate logical connectives over the possible values, and obtained the correct answer. Or do you not understand how you got your own answers? (a) $\forall x \forall y\, \big(P(x) ∨ Q(y)\big)$ To make sure you understand the steps, we split the task up: do the innermost bound variable first, simplify, then do the outermost bound variable. $\forall x \forall y\, \big(P(x) ∨ Q(y)\big) \\\equiv \forall x \big((P(x)\vee Q(a))\wedge(P(x)\vee Q(b))\big) \\\equiv \forall x \Big(P(x)\vee \big(Q(a)\wedge Q(b)\big)\Big) \\\equiv \Big(P(a)\vee \big(Q(a)\wedge Q(b)\big)\Big)\wedge \Big(P(b)\vee \big(Q(a)\wedge Q(b)\big)\Big) \\\equiv \big(P(a)\wedge P(b)\big)\vee \big(Q(a)\wedge Q(b)\big) $ Which is what you've gotten. (b) $\exists x\, P(x) ∧ \exists x\,Q(x)$ Again, to be sure, split the task; do each bound variable one at a time. $\exists x\, P(x) ∧ \exists x\,Q(x) \\\equiv \big(P(a)\vee P(b)\big) \wedge \exists x\, Q(x) \\\equiv \big(P(a)\vee P(b)\big) \wedge \big(Q(a)\vee Q(b)\big)$ Which is what you have (after your edit).
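For a two-element domain, equivalence (a) can be exhausted over all truth assignments (a sketch):

```python
from itertools import product

# Check ∀x∀y (P(x) ∨ Q(y))  ⇔  (P(a) ∧ P(b)) ∨ (Q(a) ∧ Q(b))
# over all 16 assignments to P(a), P(b), Q(a), Q(b).
for Pa, Pb, Qa, Qb in product([False, True], repeat=4):
    P = {"a": Pa, "b": Pb}
    Q = {"a": Qa, "b": Qb}
    lhs = all(P[x] or Q[y] for x in "ab" for y in "ab")
    rhs = (Pa and Pb) or (Qa and Qb)
    assert lhs == rhs
print("equivalence verified on all 16 assignments")
```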
Probability of guessing incorrect infinite number of times
If the events are independent and $0 &lt; p &lt; 1$, then P(the event occurs infinitely often) and P(the event fails to occur infinitely often) are both 1, by Borel-Cantelli.
How can I prove that $\sum_{n=1}^\infty \frac{1}{n(n+1)} = 1$?
Use $$\frac{1}{n(n+1)} = \frac{1}{n} - \frac{1}{n+1}$$ and you get a telescoping sum.
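The telescoping can be confirmed in exact rational arithmetic: the partial sums are exactly $1 - \frac{1}{N+1}$ (a sketch):

```python
from fractions import Fraction

# sum_{n=1}^{N} 1/(n(n+1)) telescopes to 1 - 1/(N+1).
for N in [1, 5, 50, 500]:
    s = sum(Fraction(1, n * (n + 1)) for n in range(1, N + 1))
    assert s == 1 - Fraction(1, N + 1)
print("partial sums are 1 - 1/(N+1), hence -> 1")
```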
Is the number of orbits of the automorphism group of an infinite field of finite characteristic, acting on the field, finite?
This is usually not true (in fact, off the top of my head, I don't know how to construct any example where it is true). For instance, if $\mathbb{F}$ is the algebraic closure of $\mathbb{F}_p$, then every automorphism maps $\mathbb{F}_{p^n}$ to itself for each $n$, so $\mathbb{F}_{p^n}\setminus\bigcup_{d\mid n, d<n}\mathbb{F}_{p^d}$ is a union of orbits for each $n$. This set is nonempty for every $n$, so there are infinitely many orbits. For another example, every automorphism of $\mathbb{F}_p(x)$ preserves the degree of rational functions, so there are infinitely many orbits (at least one for each degree).