Generating completely new vector based on other vectors
For a general case, if the given vectors span a subspace with dimension less than the dimension of the space, a possible way to create a new vector "not close" to the others is to take $v_5$ orthogonal to the span of the given vectors.
Showing linearity properties of outer measure (Lebesgue)
A translate of a measurable set is measurable, i.e. for any $b\in \Bbb R$, if $A$ is measurable then so is $A+b$. Note that for any interval $(s,t)$, $m^{*}(a(s,t))=|a|m^{*}((s,t))=|a|(t-s)$, and $A\subset \bigcup_{n=1}^{\infty}(a_n,b_n)$ holds if and only if $aA\subset \bigcup_{n=1}^{\infty}a(a_n,b_n)$ holds for each $a\in \Bbb R$; since $m^{*}([a_n,b_n))=m^{*}((a_n,b_n))$, it follows that $m^{*}(aA)=|a|m^{*}(A)$ for each $a\in \Bbb R$. The following useful identities $$E\cap aA=a((a^{-1}E)\cap A)\text{ and } E\cap (aA)^c=a((a^{-1}E)\cap A^c)\quad (a\neq0)$$ imply $$m^{*}(E\cap aA)+m^{*}(E\cap (aA)^c)=|a|[m^{*}((a^{-1}E)\cap A)+m^{*}((a^{-1}E)\cap A^c)]$$ which shows $A$ is measurable if and only if $aA$ is measurable.
What are the properties of a prime number?
$p \neq ab$ for any $a,b \in \mathbb N$ with $a,b > 1$.
prove: a complete metric space $X$ is compact if and only if ...
How to prove this depends a great deal on what tools you already have. The first answer assumes that you know that a metric space is compact if and only if it’s complete and totally bounded, but I shouldn’t be at all surprised if this exercise were intended to prepare for that result. Suppose that whenever $A$ is an infinite subset of $X$ and $\epsilon>0$, there are $x,y\in A$ such that $d(x,y)<\epsilon$. For each $n\in\Bbb N$ we can construct a subset $D_n$ of $X$ as follows. Let $x_0^{(n)}$ be any point of $X$. Suppose that $m\in\Bbb Z^+$, and we’ve chosen points $x_k^{(n)}\in X$ for $k<m$ in such a way that $d\left(x_k^{(n)},x_\ell^{(n)}\right)\ge 2^{-n}$ for $0\le k<\ell<m$. If $$\bigcup_{k<m}B_d\left(x_k^{(n)},2^{-n}\right)=X\;,$$ let $D_n=\{x_k^{(n)}:k<m\}$, and stop. Otherwise, pick any point $$x_m^{(n)}\in X\setminus\bigcup_{k<m}B_d\left(x_k^{(n)},2^{-n}\right)\;,$$ and continue. If this process did not stop after some finite number of steps, we’d end up with an infinite set $\{x_k^{(n)}:k\in\Bbb N\}$ such that $d\left(x_k^{(n)},x_\ell^{(n)}\right)\ge 2^{-n}$ whenever $k,\ell\in\Bbb N$ and $k\ne\ell$, contradicting our assumption about $X$. Thus, the process must stop after a finite number of steps, and we end up with a finite set $D_n$ such that $$\bigcup_{x\in D_n}B_d(x,2^{-n})=X\;.$$ (In fact we’ve simply shown that $\langle X,d\rangle$ is totally bounded.) Now let $\sigma=\langle x_n:n\in\Bbb N\rangle$ be any sequence in $X$. $D_0$ is finite, so there are a $y_0\in D_0$ and an infinite $N_0\subseteq\Bbb N$ such that $x_k\in B_d(y_0,1)$ for each $k\in N_0$. Similarly, $D_1$ is finite, so there are a $y_1\in D_1$ and an infinite $N_1\subseteq\Bbb N$ such that $x_k\in B_d(y_1,2^{-1})$ for each $k\in N_1$. Continuing in this fashion, we find for each $n\in\Bbb N$ a $y_n\in D_n$ and an infinite $N_n\subseteq\Bbb N$ such that $x_k\in B_d(y_n,2^{-n})$ for each $k\in N_n$, and moreover $N_n\supseteq N_{n+1}$ for each $n\in\Bbb N$. Now choose a subsequence $\langle x_{n_k}:k\in\Bbb N\rangle$ of $\sigma$ in such a way that $n_k\in N_k$ for each $k\in\Bbb N$. I leave it to you to show that this subsequence is necessarily Cauchy and therefore must converge, since $X$ is complete.
Does ONLY the ellipse have these properties?
Every closed convex piecewise-differentiable curve with the given tangent property is an ellipse. Proof: The problem is affine, in the sense that if a curve has the given property then so does any affine transformation of it. So, starting with a pair of tangents at the widest extent of the curve, use a rotation to make the tangents vertical and a shear to bring the curve to $\mathcal{C}$ whose line of symmetry is the $x$-axis. Now take the horizontal pair of tangents on $\mathcal{C}$, meeting it at two points one vertically above the other. Translate it so this vertical line is the $y$-axis. Then $\mathcal{C}$ is symmetric about both the $x$ and $y$ axes. Scaling along these axes brings their intercepts to $1$. Every other point has radius at most $1$, by the way the original tangents were chosen. Proposition 1. $\mathcal{C}$ is balanced, i.e., $x\in \mathcal{C}\implies -x\in\mathcal{C}$. This follows directly from the symmetry along the two perpendicular axes. Hence given any pair of tangents, the line joining the points of contact passes through the origin. Proposition 2. The curve is differentiable. Suppose not, so that $\mathcal{C}$ has a corner; by Proposition 1 the antipodal point is also a corner. Join these opposite corners by a line through the origin. Then $\mathcal{C}$ would have equal distances from this line along two sets of parallel lines, which gives a contradiction. Proposition 3. Any point on $\mathcal{C}$ with radius $1$ has a perpendicular tangent. A point with the maximum radius $r(\theta)=1$ must have $r'=0$. Proposition 4. If $OA$ and $OB$ have radii of $1$ then so does their angle bisector $OC$. The tangent parallel to $AB$ touches the curve at some point $C$. The line $OC$ cuts $AB$ in half by hypothesis and is thus the median and angle bisector of $AOB$, and perpendicular to $AB$. Thus $\mathcal{C}$ is symmetric about $OC$ and so the tangent at $C$ is perpendicular to $OC$. Let the tangent at $C$ meet the tangent at $A$ at the point $P$. Consider tangents parallel to $AC$ and the line $Q'OQ$ joining the opposite tangents. This line passes through the midpoint of $AC$ by hypothesis. In the limit, nearby points $A'$ on $AP$ and $C'$ on $CP$ with $A'C'$ parallel to $AC$ are also bisected by $OQ$ since $AP$ and $CP$ are tangents to $\mathcal{C}$. But this means that $OQ$ is the median of $APC$, and thus $Q$ is on $OP$. Since $OAPC$ is a cyclic quadrilateral with diameter $OP$, the bisected chord $AC$ is perpendicular to $OP$ and so $OC=OA=1$. Proposition 5. $\mathcal{C}$ is a circle. Since the $x$ and $y$ intercepts have radius $1$, one can keep taking the angle bisectors, forming a dense set of points of radius $1$. By continuity, all points have the same radius. Hence the original curve is an affine transformation of a circle, namely an ellipse.
elements in the product of subgroups in $S_4$
I don't know how you expect to find the eight elements without making a computation, but the second isomorphism theorem does give you a way to be efficient. Using the isomorphism $HN/N\cong H/H\cap N$ you get that the distinct cosets in $HN/N$ are $N$ and $(1234)N$. Now, compute the elements in these cosets.
Question about a power series
The derivative of the given sum, which is a power series, is $$\sum_{n=2}^\infty\frac{(1+x)^{n-1}}{n-1}=\sum_{n=1}^\infty\frac{(1+x)^{n}}{n},\quad\forall x: |x+1|<1$$ but with $z=x+1$ we have $|x+1|<1\iff|z|<1$ and $$\sum_{n=1}^\infty\frac{(1+x)^{n}}{n}=\sum_{n=1}^\infty\frac{z^{n}}{n}=-\log(1-z)=-\log(-x)$$
Questions about the canonical class of a nonsingular projective surface.
For question 1), consider the line bundle associated to each divisor: they're both $\mathcal{O}(n)$. For 2), recall that the canonical sheaf of a smooth variety is just the sheaf of top differential forms, which can be calculated by taking the top exterior power of the sheaf of differential forms. By the Euler exact sequence $$0\to \Omega_{\Bbb P^n}^1 \to \mathcal{O}_{\Bbb P^n}(-1)^{n+1} \to \mathcal{O}_{\Bbb P^n} \to 0 $$ we get that $\Omega^n_{\Bbb P^n} \cong (\bigwedge^n\Omega^1_{\Bbb P^n})\otimes(\mathcal{O}_{\Bbb P^n})\cong \bigwedge^{n+1}\mathcal{O}_{\Bbb P^n}(-1)^{n+1}\cong \mathcal{O}_{\Bbb P^n}(-n-1)$. So $\Omega_{\Bbb P^2}^2=\mathcal{O}_{\Bbb P^2}(-3)$ as requested.
Conversion of $\int_0^1\frac{\log( 1+x)}{x}dx$ to summation
HINT: Write \begin{align}\frac1r\log\left(1+\frac rn\right)&=\frac1r\left[\frac rn-\frac12\left(\frac rn\right)^2+\frac13\left(\frac rn\right)^3-\cdots\right]\\&=\frac1n-\frac r{2n^2}+\frac{r^2}{3n^3}-\cdots\end{align} so \begin{align}\sum_{r=1}^n\frac{\log\left(1+\frac{r}{n}\right)}{r}&=1-\frac{\sum r}{2n^2}+\frac{\sum r^2}{3n^3}-\cdots\\&=1-\frac12\int_0^1r\,dr+\frac13\int_0^1r^2\,dr-\cdots\end{align} as each term is essentially a Riemann sum.
Diophantine with primes factorials
Observe that if $p>6$ then $p-1$ is composite and greater than $4$, so $(p-1)\mid(p-2)!$ and hence $(p-1)^2 \mid (p-1)! = p^\alpha -1$. Since $p\equiv 1\pmod{p-1}$, this gives $p-1 \Big|\frac{p^{\alpha}-1}{p-1}=\sum_{i=0}^{\alpha-1} p^i\equiv\alpha\pmod{p-1}\implies p-1 \mid \alpha \implies \alpha \geq p-1 $, but then $p^{\alpha}\ge p^{p-1}> (p-1)!+1$
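As a concrete check of the statement being proved here (assuming, as the last inequality indicates, that the equation is $p^{\alpha}=(p-1)!+1$): for $p=7$ one has $(p-1)!+1=721=7\cdot103$, which is not a power of $7$, consistent with the bound $p^{p-1}=7^{6}=117649>721$.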
Lemma 4.3. Aluffi Algebra Chapter VIII.
I don't like the fact that the same notation is used for the fixed index and the indexing of the summation, so let me rewrite the equality and then explain it: the author claims that $$\lambda_{i_1...i_l} = \overline{\varphi_I}(\sum_{1\leq j_1 < ... < j_l\leq r} \lambda_{j_1...j_l}e_{j_1}\wedge ...\wedge e_{j_l})$$ Now the reason for this is the following: $\overline{\varphi_I}$ is linear, so the right-hand side is $\displaystyle\sum_{1\leq j_1 < ... < j_l\leq r}\lambda_{j_1...j_l} \overline{\varphi_I}(e_{j_1}\wedge ...\wedge e_{j_l})$. But now if $(j_1...j_l)\neq (i_1...i_l)$ and they're both strictly ordered $l$-tuples, then there can be no permutation $\sigma$ such that $\sigma(j_k)=i_k$ for all $k$. Indeed, such a $\sigma$ would then be an increasing bijection $\{j_1,...,j_l\}\to \{j_1,...,j_l\}$, thus (by induction on $k$ if you don't know that result) the identity; in other words $(j_1,...,j_l) = (i_1,...,i_l)$. Therefore, by definition, for $(j_1,...,j_l)\neq (i_1,...,i_l)$, $\overline{\varphi_I}(e_{j_1}\wedge ...\wedge e_{j_l}) = \varphi_I(e_{j_1},...,e_{j_l}) = 0$. So the only remaining index in the sum is $(i_1<...<i_l)$, and its image under $\overline{\varphi_I}$ is $1$ by definition, so you do get the equality.
A boundary term in a Hardy inequality
Your example is correct, but the issue is with what we want to show. The version of the inequality with $r$ that is claimed is $$ r\int_{\partial B(0,r)} u^2\,dS ≤ C\int_{B(0,r)} u^2 + r^2|Du|^2\,dx,\tag{1}\label{one} $$ instead of $$ r\int_{\partial B(0,r)}u^2\,dS ≤ C\int_{B(0,r)}u^2\,dx, $$ which is false, as you showed. The inequality \eqref{one} is obtained by applying the divergence theorem and noticing that $$ r\int_{\partial B(0,r)}u^2\,dS = \int_{B(0,r)}\mathrm{div}(\vec x u^2)\,dx = \int_{B(0,r)} nu^2 + 2uDu\cdot \vec x\,dx. $$ If we add the term $\int_B|Du|^2\,dx$ for the example you gave, $|Du(r)|^2\sim (\alpha r^{\frac\alpha2-1})^2 = \alpha^2r^{\alpha-2}$. For large $\alpha$, the right-hand side of \eqref{one} is bounded by the second term, which is approximately $$ \int_0^1 \alpha^2r^{\alpha-2}r^{n-1}\,dr \sim \alpha, $$ while the left-hand side is $\sim 1$, as you showed, and these are consistent.
Showing that the intersection of two sets is integral
Your proof is not correct. Recall that for $A$ a ring and $K$ a field containing $A$, $x\in K$ is integral over $A$ if there is a monic polynomial $f(X)\in A[X]$ such that $f(x)=0$. Your polynomial is not necessarily monic (it won't be in most cases, I think). But more importantly: it is a polynomial with $\alpha$ as root, but you need a polynomial with $x$ as root. The correct proof is different. We use the usual equivalence that $x$ is integral iff there is some finitely generated non-zero $A$-module $M$ stable under $x$-multiplication (i.e. $xM\subseteq M$). You can check that $$M=A[\alpha^{-n},\dots,1,\dots,\alpha^m]$$ will do the job in your case. In your proof you confused statements about $x$ and $\alpha$. In fact, you proved a completely different proposition: if $R[\alpha]\cap R[\alpha^{-1}]$ strictly contains $R$ (we always have $R\subseteq R[\alpha]\cap R[\alpha^{-1}]$, so mere non-emptiness shows nothing), then $\alpha$ is algebraic over $R$. So your example does not work (or does, but you did not realise the implications), as $e\in\mathbb R$ is not algebraic over $\mathbb Q$.
Finding the Laplace Transform Inverse
$$\mathcal{L}^{-1} \left\{ \frac{\frac{5s}{4} + \frac{13}{4}}{s^2+5s+8} \right\}$$ $$ = \mathcal{L}^{-1} \left\{{\frac {\frac{5s}4 + \frac 5 4\frac 5 2 - \frac 5 4\frac 5 2 + \frac {13}4}{\left({s + \frac 5 2}\right)^2 + \frac 7 4}}\right\}$$ You'll need: $$\mathcal{L}^{-1}\left\{{F(s + \alpha)}\right\} = e^{-\alpha t}\mathcal L^{-1} \left\{ {F(s)}\right\}$$ where $\alpha$ is constant. Let me know if you need more help.
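To finish from here (a sketch; the arithmetic is worth double-checking): group the numerator as $\frac 54\left(s+\frac 52\right)+\left(\frac{13}4-\frac{25}8\right)=\frac 54\left(s+\frac 52\right)+\frac 18$, so that, with $\omega=\frac{\sqrt 7}2$ and the shift rule above, $$\mathcal{L}^{-1} \left\{ \frac{\frac 54\left(s+\frac 52\right)+\frac 18}{\left(s+\frac 52\right)^2+\frac 74} \right\} = e^{-5t/2}\left[\frac 54\cos\left(\tfrac{\sqrt 7}2 t\right)+\frac{1}{4\sqrt 7}\sin\left(\tfrac{\sqrt 7}2 t\right)\right],$$ using $\mathcal L^{-1}\{s/(s^2+\omega^2)\}=\cos\omega t$ and $\mathcal L^{-1}\{1/(s^2+\omega^2)\}=\frac 1\omega\sin\omega t$.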
Composition of dominant rational maps
The first question can be explained by a fact from point-set topology. Fact: As topological spaces, $A\subset B$ is dense iff $A$ intersects every nonempty open subset $C\subset B$ nontrivially. The proof goes as follows (spoiler in case you'd like to try yourself first): if $C\subset B$ is a nonempty open set that $A$ doesn't intersect, then $C^c$ is a proper closed subset and contains $A$, which means $\overline{A}\subset C^c \neq B$. To use this to solve your issues, note that $V$ is a dense open subset and $\varphi(U)$ is also dense (this is what dominance means). So applying our fact with $A=\varphi(U)$ and $C=V$ we see that their intersection is nonempty. So $\varphi^{-1}(V)$ is nonempty. For your second question, you've hit the nail on the head. Suppose two nonempty open subsets of $Y$ do not intersect; then their complements are proper nonempty closed sets whose union is $Y$, which means $Y$ is not irreducible, contradiction.
Show that there exist $x,y\in[0,1]$ such that $|u(x)-u(y)|<|x-y|^{1/2}$.
Suppose without loss that $u : [0,1] \to [0,R]$ and consider the graph of $u$ in $[0,1]\times[0,R]$. Note that the area of the big rectangle $[0,1]\times[0,R]$ is $R$. Let $n$ be a positive integer. For each integer $0 \leq k \leq n$ cover the point $(k/n,u(k/n))$ with the rectangle $[0,1]\times (u(k/n) + (-\frac{1}{2\sqrt{n}},\frac{1}{2\sqrt{n}}))$. If the rectangles were disjoint, their union would cover an area of more than $n\cdot \frac{1}{\sqrt{n}} = \sqrt{n}$, which is not possible for $n > R^2$. So for $n > R^2$ we find two distinct points $(k/n,u(k/n))$ and $(l/n,u(l/n))$ whose rectangles overlap. $$ |u(k/n) - u(l/n)| \leq \frac{1}{\sqrt{n}} \leq |k/n-l/n|^{1/2} $$
Is the Axiom of Choice equivalent to saying that for two (infinite) cardinals $\kappa$ and $\lambda$ we have $\kappa+\lambda=\kappa\cdot\lambda$?
The statement "every infinite cardinal is an aleph" means that every infinite set can be put in bijection with an aleph (which is a particular ordinal). Thus, every set can be well-ordered. The well-ordering principle is well-known to be equivalent to the axiom of choice.
Density of the set $\{m^m/n^n: m,n\in \mathbb{N}\}$ in $\mathbb{Q}_+$
Hint: try to find values between $\dfrac{1}{2}$ and $2$. $\dfrac{n^n}{n^n}=1$ is a possibility. Are there any others? If so, what values do they take? If not, why not?
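One way to make the hint quantitative (a sketch of the reasoning it points toward): for $m>n$ the values jump by at least a factor of $4$, since $$\frac{(n+1)^{n+1}}{n^n}=(n+1)\left(1+\frac1n\right)^n\ge 2(n+1)\ge 4,$$ so $1$ is the only value of $m^m/n^n$ lying in $\left(\frac12,2\right)$, and the set cannot be dense in $\mathbb{Q}_+$.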
Probability of multiple mutually exclusive events succeeding at least once
Let $A_i$ denote the event that item $i$ is dropped after $n$ defeats. Then the event in question is $\bigcap_i A_i$. $$P(\bigcap_i A_i)=1- P(\bigcup_i A_i^c)$$ where $A_i^c$ denotes the complement of $A_i$. Now use Poincaré's inclusion-exclusion principle. In your case $A_i^c$ is the event that item $i$ is not dropped through the 42 raids. According to the formula mentioned above $$ P(\bigcup_{i=1}^{5} A_i^c) = \sum_{k=1}^{5} (-1)^{k-1} S_k $$ where $$ S_k = \sum_{1\leq j_1<j_2<\ldots <j_k \leq 5} P(A_{j_1}^c \cap \ldots \cap A_{j_k}^c) .$$ For some fixed indices $j_1<j_2<\ldots< j_k$ $$ P(A_{j_1}^c\cap \ldots \cap A_{j_k}^c) = (1-p_{j_1}-p_{j_2} -... - p_{j_k})^{42} $$ where $p_j$ is the probability of the monster dropping item $j$ after one raid. In other words $S_k$ is the sum of the probabilities that a given set of $k$ items won't be dropped. So $S_1$ consists of the cases when 1 item is not dropped, i.e. either item $A$ is not dropped, or item $B$ is not dropped etc. But $S_1$ counts twice the cases when two items are missing, hence you need to subtract $S_2$. $S_2$ counts twice the cases when 3 items are missing so you need to add $S_3$ etc. Let's look at a simpler case: there are 3 sets and you want to calculate $P(A\cup B \cup C)$. The first thought would be to add up, i.e. $P(A)+P(B)+P(C)$, but if the sets are not disjoint, you counted some elements twice, i.e. the intersections of each 2 sets. So you subtract it from the sum. Then you realize the intersection of the three sets is now not counted at all, hence you add it, so you end up with $$ P(A\cup B \cup C) = P(A)+P(B)+P(C) - P(A\cap B)-P(A\cap C)-P(B\cap C) + P(A\cap B \cap C). $$ Poincaré's formula is the generalization of this for $n$ sets.
Probability that XOR of an arbitrary number of random bits is 1
The last bit is $0$ or $1$, each with probability $\frac12$, independently of the XOR of the previous bits, so leaves that result the same or changes it, each with probability $\frac12$, meaning the overall XOR is $0$ or $1$, each with probability $\frac12$.
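The same argument written as a one-line recursion (with $p_n$ the probability that the XOR of $n$ independent fair bits is $1$): $$p_n=\tfrac12(1-p_{n-1})+\tfrac12\,p_{n-1}=\tfrac12\qquad(n\ge 2),\qquad p_1=\tfrac12,$$ since the running XOR flips exactly when the new bit is $1$.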
Lambert series expansion identity
Hint: Try using the expansions $$ \frac{1}{1-x}=1+x+x^2+x^3+x^4+x^5+\dots $$ and $$ \frac{1}{(1-x)^2}=1+2x+3x^2+4x^3+5x^4+\dots $$ Expansion: $$ \begin{align} \sum_{n=1}^\infty\frac{z^n}{(1-z^n)^2} &=\sum_{n=1}^\infty\sum_{k=0}^\infty(k+1)z^{kn+n}\\ &=\sum_{n=1}^\infty\sum_{k=1}^\infty kz^{kn}\\ &=\sum_{k=1}^\infty\sum_{n=1}^\infty kz^{kn}\\ &=\sum_{k=1}^\infty\sum_{n=0}^\infty kz^{kn+k}\\ &=\sum_{k=1}^\infty\frac{kz^k}{1-z^k} \end{align} $$
Prime Ideal and Proper Ideal
First of all $\mathbf Z_3$ is a ring, not an ideal, much less a prime ideal. A prime ideal is an ideal $P$ (i.e. a subset of a ring that is stable by addition, and by multiplication by an element of the ring), which further has the property that whenever a product $xy\in P$, at least one of the factors $x,y$ lies in $P$. It is the ideal-theoretic version of the notion of a prime number, or an irreducible polynomial. In terms of quotient rings, an ideal $P$ in the ring $A$ is prime if and only if the quotient ring $A/P$ is an integral domain. As an example, a natural number $p$ is prime if and only if the principal ideal $p\mathbf Z$ is a prime ideal. A proper ideal is simply an ideal that is different from the whole ring.
Proving Convexity review question
Short answer: Let $x=2$, $x'=3$, and $\lambda=\frac{1}{2}$. What is $\bar x$? Is it in $X$? Long answer: Consider the definition. A set $X\subseteq\mathbb{R}$ is convex if for all $x$ and $x'$ in $X$ and $\lambda$ in $[0,1]$, the combination $\bar x = \lambda x + (1-\lambda)x'$ is in $X$. To negate this definition, you need to invert the quantifiers: A set $X$ is not convex if there exist $x$ and $x'$ in $X$ and $\lambda \in [0,1]$ such that $\bar x = \lambda x + (1-\lambda)x'$ is not in $X$. What the definition says in English is that “the line between any two points of $X$ is contained in $X$.” So to show that a set is not convex, you need to find two points of $X$ such that the connecting line between them is not contained in $X$. That led me to pick $x=2$ and $x'=3$. Letting $\lambda = \frac{1}{2}$ just makes $\bar x$ the midpoint $\frac{5}{2}$, obviously not in $X$.
Second order linear differential equation $t^2\cdot x''- t\cdot x' +4\cdot x= \log(t)$
This can be solved using a substitution by writing $t=e^u$. $$t=e^u\to\frac{dt}{du}=e^u=t$$ $$\frac{dx}{du}=\frac{dt}{du}\frac{dx}{dt}=t\frac{dx}{dt}$$ Then it can be written that $$\begin{equation}\begin{aligned} \frac{d^2x}{du^2}=\frac{d}{du}\bigg[\frac{dx}{du}\bigg]&=\frac{d}{dt}\bigg[t\frac{dx}{dt}\bigg]\times\frac{dt}{du} \\ &=\bigg(t\frac{d^2x}{dt^2}+\frac{dx}{dt}\bigg)\times\frac{dt}{du} \\ &=t^2\frac{d^2x}{dt^2}+t\frac{dx}{dt} \end{aligned}\end{equation}$$ And so the equation can be rewritten to the form $$\frac{d^2x}{du^2}-2t\frac{dx}{dt}+4x=\log e^u$$ $$\frac{d^2x}{du^2}-2\frac{dx}{du}+4x=u$$ From this it can be solved as a linear inhomogeneous second order differential equation.
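To finish from here (a sketch; the constants are worth re-deriving): the characteristic equation of $\frac{d^2x}{du^2}-2\frac{dx}{du}+4x=u$ is $r^2-2r+4=0$ with roots $r=1\pm i\sqrt3$, and the trial particular solution $x_p=au+b$ gives $-2a+4(au+b)=u$, hence $a=\frac14$, $b=\frac18$. Therefore $$x=e^{u}\left(C_1\cos(\sqrt3\,u)+C_2\sin(\sqrt3\,u)\right)+\frac u4+\frac18,\qquad u=\log t,$$ i.e. $x=t\left(C_1\cos(\sqrt3\log t)+C_2\sin(\sqrt3\log t)\right)+\frac{\log t}4+\frac18$.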
Game Theory Moving Around Coins
I found a way to do it in three moves: CAFDBEG → ADFCBEG → ADCEBFG → ABCDEFG. It cannot be done in 2 moves, since only D and G are in their alphabetic positions, and any two triangles will leave at least one of the 5 other letters outside them unmoved.
Does the existence of the integral $\int_0^\infty f(x)dx$ imply that f(x) is bounded on $[0,\infty)$ when f(x) is continuous in this same interval?
Does the existence of the integral $\int_0^\infty f(x)dx$ imply that $f(x)$ is bounded on $[0,\infty)$ when $f(x)$ is continuous in this same interval? No. For each interval $[n,n+1]$ define $f_n:[n,n+1]\to\mathbb{R}$ by piecewise linear extension of the following four points: $$f_n(n)=0$$ $$f_n(n+\frac{1}{2n^3})=2n$$ $$f_n(n+\frac{1}{n^3})=0$$ $$f_n(n+1)=0$$ So it's a triangle with base of length $1/n^3$ and height $2n$, and thus $$\int_n^{n+1}f_n(x)dx=\frac{1}{n^2}$$ And for $n=0$ we put $f_0$ to be the constant $0$, so $\int_0^1 f_0(x)dx=0$. Now we glue all the $f_n$: $$f:[0,\infty)\to\mathbb{R}$$ $$f(x)=f_{\lfloor x\rfloor}(x)$$ $f$ is continuous and not bounded, but $$\int_0^{\infty} f(x)dx=\sum_{n=0}^\infty\int_n^{n+1}f_n(x)dx=\sum_{n=1}^\infty \frac{1}{n^2}=\frac{\pi^2}{6}$$
Linear Program for Hyperplane Separation
The proposed model works as expected. To reduce the chance of misclassification, one can instead look for a hyperplane that maximizes the distance from the plane to each class (like an SVM); assuming the data is linearly separable, this can be done in the following way: $$\text{minimize: } \sum_{j=1}^{p}( \vec{w}_{j}^{+} + \vec{w}_{j}^{-})$$ Constraints: $$(\vec{w}^T\vec{x}_{j})y_j \geq 1 - \epsilon _{j}, \quad \forall j$$ $$w_{i}^{+},\ w_{i}^{-},\ \epsilon_{i} \geq 0$$ where the margin from the hyperplane to each class is $1/\|\vec{w}\|$ (so minimizing the norm of $\vec w$ widens it), and $w = w^+ - w^-$. Note this is equivalent to the quadratic support vector machine approach, but instead of using the L2 norm for the distance we use the L1 norm.
Mathematical way of determining whether a number is an integer
The most basic thing you could do is check if $x = \text{floor}(x)$. Here $\text{floor}$ returns the integer part of a number (rounds down). It is present in standard libraries of most languages.
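A minimal sketch in Python (the helper names here are just for illustration); for floats produced by arithmetic, an explicit tolerance is usually safer than exact comparison:

```python
import math

def is_integer(x: float) -> bool:
    # Exact check: true when x equals its integer part (rounded down)
    return x == math.floor(x)

print(is_integer(7.0))   # True
print(is_integer(7.25))  # False

# Values produced by floating-point arithmetic rarely land exactly on
# an integer, so comparing against the nearest integer with a tolerance
# is often the more robust test:
def is_nearly_integer(x: float, tol: float = 1e-9) -> bool:
    return abs(x - round(x)) < tol

print(is_nearly_integer(0.1 + 0.2 + 0.7))  # True, despite rounding error
```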
simple set theoretic question of subsets of $\mathbb R^d$
Sorry, I misread the question initially. The answer is no in general. For example, take $B$ to be the plane minus the $x$-axis, and $A$ to be the $x$-axis. If $B'$ exists, it must be a subset of both $A$ and $B$, whose intersection is empty.
Recurrence relation for increasing sequence of numbers
Let $S_k$ be the set of sequences $\langle x_1,\ldots,x_n\rangle$ such that $x_i\in[k]$ for $i=1,\ldots,n$ and $x_i\le\frac{x_{i+1}}2$ for $i=1,\ldots,n-1$; $s_k=|S_k|$. Suppose that $\sigma=\langle x_1,\ldots,x_n\rangle\in S_k$. If $x_n<k$, then $\sigma\in S_{k-1}$. And $S_{k-1}\subseteq S_k$, so there are $s_{k-1}$ sequences in $S_k$ whose last term is less than $k$. If $x_n=k$, then $x_{n-1}\le\frac{x_n}2$, so $\langle x_1,\ldots,x_{n-1}\rangle\in S_{\lfloor k/2\rfloor}$. That is, every sequence in $S_k$ whose last term is $k$ is obtained from a sequence in $S_{\lfloor k/2\rfloor}$ by appending a term $k$. Conversely, if $\langle x_1,\ldots,x_{n-1}\rangle\in S_{\lfloor k/2\rfloor}$, then $\langle x_1,\ldots,x_{n-1},k\rangle\in S_k$, so there are $s_{\lfloor k/2\rfloor}$ sequences in $S_k$ that end in $k$. Every $\sigma\in S_k$ either does or does not end in $k$, and none does both, so we’ve counted every sequence in $S_k$ once, and $s_k=s_{k-1}+s_{\lfloor k/2\rfloor}$. We can infer the relationship $$\frac{F(x)}{F(x^2)}=\frac{1+x}{1-x}\tag{1}$$ directly from the recurrence without determining the generating function $F(x)$ itself. Rewrite the recurrence as $s_k-s_{k-1}=s_{\lfloor k/2\rfloor}$, multiply through by $x^k$, and sum over $k\ge 0$: $$\sum_{k\ge 0}s_kx^k-\sum_{k\ge 0}s_{k-1}x^k=\sum_{k\ge 0}s_{\lfloor k/2\rfloor}x^k\;.$$ The lefthand side is $$\begin{align*} \sum_{k\ge 0}s_kx^k-\sum_{k\ge 0}s_{k-1}x^k&=F(x)-x\sum_{k\ge 0}s_{k-1}x^{k-1}\\ &=F(x)-x\sum_{k\ge 0}s_kx^k\\ &=(1-x)F(x)\;, \end{align*}$$ and the righthand side is $$\begin{align*} \sum_{k\ge 0}s_{\lfloor k/2\rfloor}x^k&=\sum_{k\ge 0}s_k(x^{2k}+x^{2k+1})\\ &=(1+x)\sum_{k\ge 0}s_kx^{2k}\\ &=(1+x)F(x^2)\;, \end{align*}$$ so $(1-x)F(x)=(1+x)F(x^2)$, and $(1)$ follows immediately. The sequence is OEIS A000123, and the generating function apparently does not have a nice form.
Evaluate $\sum\limits_{r=1}^{50}\left[\frac{1}{49+r} - \frac{1}{2r(2r-1)}\right]$
$S=\sum\limits_{r=1}^{50}\left[\frac{1}{49+r} - \frac{1}{2r(2r-1)}\right]$ The first part of the sum: $\sum\limits_{r=1}^{50}\frac{1}{49+r}=\sum\limits_{r=50}^{99}\frac{1}{r}=\sum\limits_{r=1}^{99}\frac{1}{r}-\sum\limits_{r=1}^{49}\frac{1}{r}$ The second part of the sum: $\sum\limits_{r=1}^{50} - \frac{1}{2r(2r-1)}=\sum\limits_{r=1}^{50}\big( \frac{1}{2r}{-\frac{1}{(2r-1)}}\big)=\frac{1}{100}-\sum\limits_{r=1}^{99}\frac{(-1)^{r-1}}{r}$ Finally $S=\frac{1}{100}+\sum\limits_{r=1}^{99}\big(\frac{1}{r}-\frac{(-1)^{r-1}}{r}\big)-\sum\limits_{r=1}^{49}\frac{1}{r}$ Since the $99$th term of the first sum is zero and the surviving terms are those with $r=2k$, we get: $S=\frac{1}{100}+\sum\limits_{k=1}^{49}\frac{2}{2k}-\sum\limits_{r=1}^{49}\frac{1}{r}=\frac{1}{100}$
Is it possible to simplify this expression?
Unless you have some orthogonality conditions on the vectors, not really. $$ \sum(\ldots) \cdot \sum(\ldots ) = \sum_{i = 1}^N \sum_{j = 1}^N \lambda_i \lambda_j (x_i \cdot x_j) $$ where I've used the dot product instead of multiplying by the transpose. Note that everything commutes, so this is really $$ \sum_{k = 1}^N \lambda_k^2 \|x_k\|^2 + 2\sum_{i < j}^N \lambda_i \lambda_j (x_i \cdot x_j) $$ i.e. one copy of every square of things, and two copies of each `non-square' product.
What does it mean that the exponential topology of a space is T$_1$?
I'll use the definitions and notations from this answer. Just recall two things: A topological space $Y$ is T1 if $\{ y \}$ is closed in $Y$ for all $y \in Y$; and The points of the space $\exp (X)$ are the closed subsets of $X$. Therefore $\exp(X)$ is T1 if $\{ F \}$ is closed in $\exp(X)$ for each $F \in \exp(X)$, or rather, $\{ F \}$ is closed in $\exp(X)$ for each closed $F \subseteq X$. If $X$ itself is T1, then $\exp(X)$ being T1 would imply that $\{ \{ x \} \}$ is closed in $\exp(X)$ for each $x \in X$. Added. A fairly simple argument gives that $\exp(X)$ is T1 whenever $X$ is. Sketch. Assume $X$ is T1, and let $F,E \in \exp(X)$ be distinct. There are two cases: If $F \not\subseteq E$, then $\langle X , X \setminus E \rangle$ is an open neighbourhood of $F$ in $\exp(X)$ not containing $E$. If $E \not\subseteq F$, then pick $x \in E \setminus F$. It follows that $\langle X \setminus \{ x \} \rangle$ is an open neighbourhood of $F$ in $\exp(X)$ not containing $E$.
Is there a classification of the inner products on $\mathbb{R}^n$ up to isomorphism?
The classical result that any inner product admits an orthonormal basis exactly says that any two inner products are equivalent. In general, Sylvester's theorem says that over $\mathbb R$ symmetric bilinear forms are classified up to equivalence by rank and signature.
Proof that $\mathbf{R}[\omega]_\times\mathbf{R}^T = [\mathbf{R}\omega]_\times$
For convenience, I write $w$ in place of $\omega$ and $R$ in place of $\mathbf{R}$. \begin{align*} R[w]_\times R^T = [Rw]_\times \Leftrightarrow& R[w]_\times = [Rw]_\times R \\ \Leftrightarrow& z\cdot(R[w]_\times x) = z\cdot([Rw]_\times Rx)\quad \forall x,z\\ \Leftrightarrow& Rv\cdot(R[w]_\times x) = Rv\cdot([Rw]_\times Rx)\quad \forall v,x\\ \Leftrightarrow& Rv\cdot R(w\times x) = Rv\cdot(Rw\times Rx)\quad \forall v,x\\ \Leftrightarrow& v\cdot (w\times x) = Rv\cdot(Rw\times Rx)\quad \forall v,x\\ \Leftrightarrow& \det(v,w,x) = \det(Rv,Rw,Rx). \end{align*} Now the last line is true because $$\det(Rv,Rw,Rx)=\det\left(R(v,w,x)\right)=\det(R)\det(v,w,x)=\det(v,w,x).$$
How have I incorrectly computed $\int \frac{x^{2}+4}{x(x-1)^{2}} dx$?
$x$ has a $-B$ coefficient as well, so $0 = - 2A - B + C$. And $4 = A-B$ should just be $4=A$. It looks like you mistakenly did $Bx(x-1) = Bx^2 - B$ instead of $Bx^2 - Bx$. Here's a handy tip for dealing with partial fraction decompositions. When you get to $x^{2} + 4 = A(x-1)^{2} + Bx(x-1) + C(x) $, you can exploit the fact that this is an identity in the variable $x$. In other words, this equation is true no matter what $x$ is. So, you can plug in "helpful" values of $x$ to determine the values of $A$, $B$, $C$. For example, if $x=0$ then $0^2 + 4 = A(-1)^2$, which gives $A = 4$. Plugging in $x=1$ immediately gives you $C$. Then since you have $A$ and $C$ you can plug in any other value of $x$ to get $B$. No need to mess around with systems of equations.
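Worked through for this integrand (a quick sketch of the plug-in method): $x=0$ gives $4=A$; $x=1$ gives $5=C$; then $x=2$ gives $8=A+2B+2C=14+2B$, so $B=-3$, and $$\frac{x^{2}+4}{x(x-1)^{2}} = \frac{4}{x} - \frac{3}{x-1} + \frac{5}{(x-1)^{2}}.$$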
Gamma and Beta Functions : Integration
So that the question won't remain unanswered, let us develop L. F.'s nice hint: $$x=at^{1/3}\implies dx=\frac13at^{-2/3}dt\implies$$ $$\implies\int\limits_0^ax^3(a^3-x^3)^5dx=\frac{a^{19}}3\int\limits_0^1t^{1/3}(1-t)^5dt=\frac{a^{19}}3B\left(\frac43\,,\,6\right)$$
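If a closed form is wanted (a sketch; worth re-deriving): with $B(x,y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$ and $\Gamma\left(\frac{22}3\right)=\frac{4\cdot7\cdot10\cdot13\cdot16\cdot19}{3^6}\,\Gamma\left(\frac43\right)$, $$B\left(\frac43\,,\,6\right)=\frac{\Gamma\left(\frac43\right)\Gamma(6)}{\Gamma\left(\frac{22}3\right)}=\frac{120\cdot3^6}{4\cdot7\cdot10\cdot13\cdot16\cdot19}=\frac{2187}{27664},$$ so the integral equals $\frac{a^{19}}3\cdot\frac{2187}{27664}=\frac{729\,a^{19}}{27664}$.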
Prove that $\sum_{t \vert n} d^3(t) = (\sum_{t \vert n}d(t))^2$ for all $n \in \mathbb{N}$
You are on the right track, but a simple way is to notice that both $$ a(n) = \sum_{t\mid n}d^3(t), \qquad b(n)=\left(\sum_{t\mid n}d(t)\right)^2 $$ are multiplicative functions, so, in order to prove $a(n)=b(n)$, it is enough to prove: $$ a(p^k) = b(p^k) $$ that is equivalent to the well-known identity: $$ \sum_{j=0}^{k}(j+1)^3 = \left(\sum_{j=0}^{k}(j+1)\right)^2 $$ since every divisor of $p^k$ is some $p^j$ with $j\in[0,k]$, and $d(p^j)=(j+1)$.
Why is a contour integral different from an approximate contour summation for complex functions with poles?
Let $\gamma: [0, 1] \to \Bbb C$ be a parametrization of the contour $c$. For a “sufficiently fine” partition $0 = t_0 < t_1 < \cdots < t_n = 1$ we have $$ \int_c f(z) \, dz = \int_0^1 f(\gamma(t)) \gamma'(t) \, dt \approx \sum_{j=1}^{n} f(\gamma(t_j)) \gamma'(t_j)(t_j - t_{j-1}) \\ \approx \sum_{j=1}^{n} f(z_j) (z_j - z_{j-1}) $$ where $z_j = \gamma(t_j)$. In particular, the “increments” $z_j - z_{j-1}$ are not constant along the curve.
Convergence of sequence - Do we have to take these cases?
It is not necessary to use tight bounds; it is often easier to take gross estimates. For $$\frac{|n-2|}{4n^2}<\epsilon$$ we can use $|n-2|<n$ (which is true for all $n>1$) so that $$\frac{|n-2|}{4n^2}<\frac1{4n}<\epsilon.$$ Then, $$n>\frac1{4\epsilon}$$ is fine. (Notice that this bound is just slightly larger than your last expression, and is obtained with less effort.)
Recurrence relations
Assuming that the recurrence relation should be $T(n)=4T(n-1)+2$ for $n>1$, with $T(1)=1$, we get $$ T(n)=4T(n-1)+2=4(4T(n-2)+2)+2=4(4(4T(n-3)+2)+2)+2=\cdots=4^{n-1}T(1)+4^{n-2}\cdot 2+4^{n-3}\cdot 2 +\cdots +2=\frac{1}{3}(5\cdot 4^{n-1}-2) $$ This formula can be proven formally by induction: it holds for $n=1$, and $$ T(n)=4T(n-1)+2=\frac{1}{3}(5\cdot 4^n-8)+2=\frac{1}{3}(5\cdot 4^n-2) $$ So it holds for all positive integers $n$.
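A quick numeric sanity check of the closed form (a sketch, assuming the base case $T(1)=1$):

```python
def T(n: int) -> int:
    # Recurrence T(n) = 4*T(n-1) + 2, with assumed base case T(1) = 1
    return 1 if n == 1 else 4 * T(n - 1) + 2

# Compare against the closed form (5 * 4^(n-1) - 2) / 3 derived above
for n in range(1, 10):
    assert T(n) == (5 * 4 ** (n - 1) - 2) // 3
print("closed form matches for n = 1..9")
```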
Hilbert's Hotel Paradox: Guests moving to new room every day?
A far simpler approach than prime powers is as follows. Number the rooms starting from $0$. Each day, guests in even-numbered rooms move two rooms up, and guests in odd-numbered rooms move two rooms down, except for the one in room $1$, who moves to room $0$. This creates an infinite cycle linking every room, on which all guests move: $$\dots\to5\to3\to1\to0\to2\to4\to6\cdots$$ So not only is it possible to have all guests occupy different rooms on infinitely many days, it is possible to achieve full utilisation while doing so for all eternity. If all rooms can only ever be occupied by at most one guest, the following construction (also simpler than prime powers) still ensures that every room is eventually used. Arrange the rooms in an array like this, and this time start from $1$: $$\begin{array}{cccccc}1&2&4&7&11&\dots\\ 3&5&8&12&17&\dots\\ 6&9&13&18&24&\dots\\ 10&14&19&25&32&\dots\\ 15&20&26&33&41&\dots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots\end{array}$$ On the first day, the guests stay in triangular-numbered rooms, i.e. the first column of the above array. Each subsequent day, all guests move to the room that is to their immediate right in the array, e.g. $6$ moves to $9$, then to $13$ and $18$, etc. Both solutions here are based on canonical bijections to $\mathbb N$, from $\mathbb Z$ and $\mathbb N^2$ respectively.
Showing that the Spectrum of a Hermitian/Self Adjoint Operator is Real
Note that (b) implies that the range of $B$ is dense. If we can show that it's all of $H$, then we'll know that $B$ is bijective, which means that $\lambda\notin\sigma(A).$ This will follow from (c). Indeed, take $x\in H.$ By density, there exists a sequence $(y_n)$ in $H$ so $By_n\rightarrow x.$ Using part (c), we get that $(y_n)$ is Cauchy, so it converges, say to $y\in H$. By continuity, we get that $By=x$, so $x\in B(H).$ The other inclusion is obvious. All that's left is to show part (c). Simply observe that \begin{align*} \|x_n-x_m\|^2&=\|By_n-By_m\|^2=\|(\lambda-A)(y_n-y_m)\|^2\\ &=\|(\text{Re}\ \lambda-A)(y_n-y_m)+i\text{Im}\ \lambda(y_n-y_m)\|^2\\ &= \|(\text{Re}\ \lambda-A)(y_n-y_m)\|^2+\|\text{Im}\ \lambda(y_n-y_m)\|^2\\ &\geq |\text{Im}\ \lambda|^2\|y_n-y_m\|^2 \end{align*} (Self-adjointness was used when moving from the second to the third line.)
The area of the square formed by the centers of 4 squares
It is just a matter of using the Pythagorean theorem twice.
Sum and product of digits equation
Infinitely many. If we make sure $N$ has at least one $0$ digit, $P(N)$ will be $0$, as will $P(P(N))$ and $S(P(N))$. We will also make sure $S(N)$ has a $0$ digit so $P(S(N))=0$. Then all we need is $S(S(N))=2019$. As the sum of digits is like a logarithm, $N$ will be huge. One number $K$ with $S(K)=2019$ is $\frac 19(10^{2019}-1)$, which has $2019$ ones. We will multiply it by $10$ to get the $0$ digit we want, so our number becomes $K=\frac {10}9(10^{2019}-1)$. Now we want a number with this sum of digits, so we follow the same strategy, getting $$\frac 19\left(10^{\frac {10}9(10^{2019}-1)}-1\right)$$ Now we multiply this by $10^n$ for any $n \ge 1$ and we have the zero digit we wanted. We could get smaller examples by using this strategy with larger digits.
A question regarding the definition of Galois group
There is a slight divergence of nomenclature. Everyone agrees on what $\mathrm{Aut}(E/F)$ is. The question is what to call it. Some books (e.g., Hungerford, Rotman's Galois Theory) always refer to $\mathrm{Aut}(E/F)$ as the "Galois group" of $E$ over $F$ (or of the extension), whether or not the extension is a Galois extension. Other books (e.g., Lang) use the generic term "automorphism group" to refer to $\mathrm{Aut}(E/F)$ in the general case, and reserve the term Galois group exclusively for the situation in which $E$ is a Galois extension of $F$. So, in Lang, even just saying "Galois group" already implies that the extension must be a Galois extension, that is, normal and separable. In Hungerford, just saying "Galois group" does not imply anything beyond the fact that we are looking at the automorphisms of the extension. Wikipedia is following Convention 2; your book is following Convention 1. There is also the question of whether to admit infinite extensions or not. A lot of introductory books consider only finite extensions when dealing with Galois Theory, and define an extension to be Galois if and only if $|\mathrm{Aut}(E/F)| = [E:F]$. This definition does not extend to infinite extensions, so the definitions are restricted to finite (algebraic) extensions, with infinite extensions not considered at all. Other characterizations of an extension being Galois (e.g., normal and separable) generalize naturally to infinite extensions, so no restriction is placed. Likewise, some books explicitly restrict to algebraic extensions, others do not; but note that most define "normal" to require algebraicity, because it is defined in terms of embeddings into the algebraic closure of the base field, so even if you don't explicitly require the extension to be algebraic in order to be Galois, in reality this restriction is (almost) always in place. This is not such a big deal as it might appear, because one can show that an arbitrary (possibly infinite) Galois extension $E/F$ is completely characterized in a very precise sense by the finite Galois extensions $K/F$ with $F\subseteq K\subset E$ and $[K:F]\lt\infty$, as the automorphism group $\mathrm{Aut}(E/F)$ is the inverse limit of the corresponding finite automorphism groups.
How is a formal system including only a first-order axiomatization of induction stronger than a system without?
What goes wrong with your argument is that what you call the "basic axioms" of PA admit models that are excluded by the induction axioms. For example, let $\mathbb{N}_{\infty}$ be the structure for the language of PA with underlying set $\mathbb{N} \cup \{\infty\}$ with the usual arithmetic on $\mathbb{N}$ and with $\infty + x = x + \infty = \infty$, $0\times \infty = \infty\times 0 = 0$, $S(x)\times\infty=\infty\times S(x) = \infty$ for any $x$, and $S(\infty) =\infty$. Then $\mathbb{N}_{\infty}$ is a model of Robinson's axioms that does not satisfy $S(x) \neq x$. So $\mathbb{N}_{\infty}$ is a model of Robinson's axioms that is not a model of PA.
Ratio Problem Technique
8 people in 3 hours can paint 6 houses. 8 people in 1 hour can paint $\frac63$ houses (the number of houses, i.e. the quantity of task, is directly proportional to the time). 1 person in 1 hour can paint $\frac6{3\cdot8}$ houses (the number of houses is directly proportional to the man-power). 3 people in 4 hours can paint $\frac{6\cdot 3\cdot 4}{8\cdot3}=3$ houses.
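The same chain compressed into one unitary-rate formula (a sketch of the general pattern): $$\text{houses}=\underbrace{\frac{6}{8\cdot 3}}_{\text{houses per person-hour}}\times(\text{people})\times(\text{hours})=\frac14\cdot 3\cdot 4=3.$$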
Is it possible to have 6 values in a ring (1,0,1,0,0,0) become equal after a series of steps?
The answer is no, it's not possible. Consider $s= x_1+x_3+x_5-(x_2+x_4+x_6)$. Your permitted operation will never change that sum. At your starting position $s=2$ so it will always remain $2$ no matter how many times you apply the permitted operation. At your desired end position $s$ would be $0$, so you can't get there from here.
Calculate the following limit: $\lim_{\alpha\to 0^+} \int_{\alpha}^1 \frac{1}{x(x^n+a)}dx$, where $a>0$ and $n\in \mathbb{N}^*$
Note that $$ 0<x^n+a\le 1+a, \quad x\in[\alpha,1]$$ and hence one has $$ \int_\alpha^1\frac1{x(x^n+a)}dx\ge\frac1{1+a}\int_\alpha^1\frac1xdx=\frac1{1+a}(-\ln\alpha). $$ Since $\lim_{\alpha\to0^+}(-\ln \alpha)=\infty$, one has $$ \lim_{\alpha\to0^+}\int_\alpha^1\frac1{x(x^n+a)}dx=\infty. $$
Expanding $\prod_{m=1}^{n-1}(1+a_m+b_m)$.
$$\begin{align}\prod_{m=1}^{n-1}(1+c_m)&=(1+c_1)(1+c_2)\cdots(1+c_{n-1})\\&=1+\sum_{1\le i\le n-1}c_i+\sum_{1\le i<j\le n-1}c_ic_j+\cdots+\prod_{i=1}^{n-1}c_i\end{align}$$
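For instance, with $n-1=3$ the expansion reads $$(1+c_1)(1+c_2)(1+c_3)=1+(c_1+c_2+c_3)+(c_1c_2+c_1c_3+c_2c_3)+c_1c_2c_3.$$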
The existence of a complex continuation of the logarithm
From $\frac{1}{1-z}= \sum_{n=0}^\infty z^n$ for any $|z| < 1$ you know that whenever $|z| < 1$: $$\log(1-z) = 2 i \pi k-\sum_{n=1}^\infty \frac{z^n}{n}$$ hence if $0 < |f(z)| < 1$ then $$\log(f(z)) = \log(1-(1-f(z))) = 2 i k \pi - \sum_{\nu = 1}^\infty \frac{(1-f(z))^\nu}{\nu}$$ for some integer $k$. Because $\log(a f(z)) = \log(a) + \log(f(z))+2 i k \pi $ for some $k$, you get that in any region where $0 < |f(z)| < C$: $$\log(f(z)) = \log(f(z)/C)+ \log(C) + 2 i k \pi = \log(1-(1-f(z)/C)) +2 i k \pi + \log(C)$$ $$= \log(C) + 2 i m \pi -\sum_{\nu = 1}^\infty \frac{(1-f(z)/C)^\nu}{\nu}$$ is analytic on that region.
Problem in understanding solution to problem 4a, chapter 22 of Spivak Calculus
The proof seems to be incorrect. In order to deduce \begin{align*} |a_n-a_{n_{J+1}}|<\frac{\epsilon}{2}\tag{1} \end{align*} we have to ensure that $n>N_1$ and $n_{J+1}>N_1$. Since $n>N=\max(N_1,n_J)\geq N_1$ we have $n>N_1$. But we don't know if $n_{J+1}>N_1$. We know $n_{J+1}>n_J$, which is sufficient to show \begin{align*} |a_{n_{J+1}}-l|<\frac{\epsilon}{2} \end{align*} but not sufficient to show (1).
Composition of a convex function
Note that $f$ is increasing. Let $x,y\in U$ and $\lambda\in(0,1)$. From Jensen's inequality it follows that $$f(g(\lambda x + (1-\lambda)y))\leq f(\lambda g(x) + (1-\lambda)g(y))\leq \lambda f(g(x)) + (1-\lambda)f(g(y)),$$ where the first inequality uses the convexity of $g$ together with the monotonicity of $f$, and the second the convexity of $f$; this establishes the convexity of $f(g(u))$.
What is the value of y in $x\left(\tan\left(\sqrt{x^{2}+y^{2}}\right)\right)-y=0$
If you use polar coordinates $x=r \cos(t)$, $y=r \sin(t)$, you end up with the equation $$r \tan (r) \cos (t)=r \sin (t)$$ the simplest non-trivial solution being $r=t$ (more generally $r=t+k\pi$), which gives the equation of a spiral. Otherwise, let $x=k y$ and, for a given value of $k$, you would have $$y=\frac{\cot ^{-1}(k)}{\sqrt{k^2+1}}$$
Is "element of" an a Relation with two Parameters?
Yes. A relation on $n$ sets is ANY subset of the Cartesian product (set of all $n$-tuples) of those sets. Frequently for a relation on 2 sets the sets will be the same ($<$, $\subseteq$, etc), but for $\in$ the first set is whatever domain the elements of the second set come from (such as the integers) and the second set is the powerset (set of sets) of that domain. Symbolically, $"\in"\subseteq A\times\mathcal{P}(A)$ where $A$ is the domain and $\mathcal{P}(A)$ the powerset.
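For a concrete finite example, take the domain $A=\{1,2\}$, so $\mathcal{P}(A)=\{\varnothing,\{1\},\{2\},\{1,2\}\}$; then $$"\in"=\{(1,\{1\}),\,(2,\{2\}),\,(1,\{1,2\}),\,(2,\{1,2\})\}\subseteq A\times\mathcal{P}(A).$$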
On Thompson's normal $p$-complement theorem
I think the answer is that the definition resulted from experiment and trial and error. Thompson was looking for a single subgroup of $P$ for which the normalizer would determine whether or not $G$ has a normal $p$-complement. In the version of his theorem that you state, he does not quite achieve this, because he requires also that $C_G(Z(P))$ has a normal $p$-complement. But there is an improved version of the result in Theorem 3.1, Chapter 8 of Gorenstein's book "Finite Groups", in which the aim is achieved. The definition of $J(P)$ there is slightly different from the one you give, and probably came later. It is defined as the subgroup of $P$ generated by all abelian subgroups of maximal order (rather than maximal rank), and the theorem, which is attributed to Glauberman and Thompson, states that, for odd primes $p$, $G$ has a normal $p$-complement if and only if $N_G(Z(J(P)))$ does.
Analytic proof of area probability
Let $D = \{(x, y): x^2+y^2 \leq 1\}$. Let $g(x, y)$ be the density of $(X, Y)$; it exists because $X$ and $Y$ are independent, and it equals the product of their densities (Fubini's theorem), here the constant $\frac14$. $$P(X^2+Y^2\leq1) = \iint_D g(x, y)dxdy = \frac{1}{4}\iint_D dxdy = \frac{\pi}{4}$$
contraposition in intuitionistic logic
Proof:
1) $A \to B$ --- premise
2) $\lnot B$ --- assumed [a]
3) $A$ --- assumed [b]
4) $B$ --- from 3) and 1) by $\to$-elim
5) $\bot$ --- from 4) and 2) by $\to$-elim
6) $\lnot A$ --- from 3) and 5) by $\to$-intro, discharging [b]
7) $\lnot B \to \lnot A$ --- from 2) and 6) by $\to$-intro, discharging [a].
How do I prove $\sum_{i=1}^n \frac{i-1}{n}(\sin^{-1}(\frac{i}{n})-\sin^{-1}(\frac{i-1}{n}))$ converges to $1$?
Notice that for $i=1,2,\ldots,n-1$, the $i$-th term contains $(i-1)\sin^{-1}\frac in$ and the next term contains $i\left(-\sin^{-1}\frac in\right)$. Adding them gives $-\sin^{-1}\frac in$. So the sum simplifies to $$\frac 1n\left(\sum_{i=1}^{n-1}\left(-\sin^{-1}\frac in\right)+(n-1)\sin^{-1}\frac nn\right)=-\frac1n\sum_{i=0}^{n-1}\sin^{-1}\frac in+\frac{n-1}{n}\frac \pi2 $$ The first summand is a Riemann sum of $\sin^{-1}x$ on the interval $[0,1]$ with left endpoints. And the second summand converges to $\pi/2$. Thus, the limit is $$\frac\pi 2-\int_0^1\sin^{-1}x\,dx=1 $$
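For the final equality, a quick check by integration by parts: $$\int_0^1\sin^{-1}x\,dx=\Big[x\sin^{-1}x+\sqrt{1-x^2}\Big]_0^1=\frac\pi2-1,\qquad\text{so the limit is}\qquad\frac\pi2-\left(\frac\pi2-1\right)=1.$$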
Find a sequence of complex polynomials with certain properties. (Hardy spaces over unit circle)
For the sake of simplicity I assume $\lambda=1$. The general case can be generated by rotating the arguments. Consider \begin{align*} q_n(z):=\prod_{k=1}^{n-1}\left(z-\exp\left(\frac{2\pi ik}{n}\right)\right). \end{align*} and define $f_n(t):=q_n(e^{it})$. Then the Fourier coefficients $c_k$ of $f_n$ are $c_k=1$ for $0\leq k<n$ and $c_k=0$ otherwise. In particular, $q_n(1)=n$. By Parseval's identity we have \begin{align*} \frac{1}{2\pi}\int_0^{2\pi}|f_n(t)|^2dt=\sum_{k\in\mathbb Z}|c_k|^2=n. \end{align*} Thus, if we define $p_n(z):=q_n(z)/n$, then \begin{align*} \frac{1}{2\pi}\int_0^{2\pi}|p_n(e^{it})|^2dt=\frac{1}{n}\to0, \end{align*} but $p_n(1)=1$ for each $n\in\mathbb N$.
f(g(x)) has a degree divisible by n
Suppose $h$ is an irreducible factor of $f \circ g$, and $\alpha$ is a root of $h$ (in some extension field). Then $g(\alpha)$ is a root of $f$, and so, since $f$ is irreducible of degree $n$, $[F(g(\alpha)):F]=n$. Thus $\deg(h)=[F(\alpha):F]=[F(\alpha):F(g(\alpha))]\cdot[F(g(\alpha)):F]$ is divisible by $n$.
A $\mathcal{C}^{1}$ differentiable domain and Hausdorff dimension estimates
Being a codimension 1 smooth submanifold, your set $\partial E\cap B_R(x_0)$ will have Hausdorff dimension $N-1$. The same goes for $\partial E$ since you can take compact sets $K$ and obtain Hausdorff dimension $N-1$ for each intersection $\partial E\cap K$ (by taking a finite subcover by balls $B_{R_i}(x_i)$). Finally, take an increasing sequence of compact sets $K_n$ with $\bigcup_{n=1}^\infty K_n=\mathbb R^N$ and note that $$\dim_H \partial E=\lim_{n\to\infty}\dim_H(\partial E\cap K_n)=N-1.$$
$\int_{a}^{+\infty}dx\int_{c}^{+\infty}f(x,y)dy=\iint_{D}f(x,y)dxdy$?
For every $\varepsilon>0$ there exists $b>a$ such that $$\Biggl|\int_{a}^{b}dx\int_{c}^{+\infty}f(x,y)dy-I\Biggr|<\varepsilon.$$ Let $\{d_n\}$ be an ascending sequence satisfying $\lim_{n\to+\infty}d_n=+\infty$, and set $\varphi_n(x)=\int_c^{d_n}f(x,y)dy$ and $D_n=[a,b)\times[c,d_n)$; then $$\int_{a}^{b}dx\int_{c}^{d_n}f(x,y)dy=\iint_{D_n}f(x,y)dxdy.$$ By Arzelà's dominated convergence theorem for the Riemann integral, $$\lim_{n\to\infty}\int_a^b \varphi_n(x)dx=\int_a^b \lim_{n\to\infty}\varphi_n(x)dx=\int_{a}^{b}dx\int_{c}^{+\infty}f(x,y)dy$$ Then there exists $N>0$ such that for all $n>N$, $$\Biggl|\int_{a}^{b}dx\int_{c}^{d_n}f(x,y)dy-\int_{a}^{b}dx\int_{c}^{+\infty}f(x,y)dy\Biggr|<\varepsilon,$$ and the rest of the proof is straightforward. The idea of the proof of Arzelà's theorem is to find an increasing sequence of continuous functions $\{g_n\}$ satisfying $\lim_{n\to\infty}\varphi_n>g_n(x)>\varphi_n(x)$ such that $\int_a^b (g_n(x)-\varphi_n(x))dx$ is small enough; then by Dini's theorem $\{g_n\}$ converges to $\lim_{n\to\infty}\varphi_n$ uniformly, and thus $\lim_n\int \varphi_n=\int \lim_n \varphi_n$
Normal vector for a surface: explicit vs implicit formula
Usually "normal vector" is understood to mean unit normal vector. In your answer, $$\hat{n}\cdot\hat{n} = \frac{x^2+y^2+z^2}{4} = 1$$ and you do indeed have a unit vector. On the other hand, if you check the textbook's answer, $$\hat{n}\cdot\hat{n} = 1+\frac{x^2+y^2}{z^2} = \frac{x^2+y^2+z^2}{z^2} = \frac{4}{z^2}$$ which is not usually 1 (and moreover there is a problem at $z=0$, as you noted). Normalizing the book answer gives $$\frac{z}{2}\left(\frac{x}{z},\frac{y}{z}, 1\right) = \frac{1}{2}(x,y,z)$$ which matches your answer.
Find all positive integers $L$, $M$, $N$ such that $L^2 + M^2 = \sqrt{ N^2 +21}$
Let $(L^2 + M^2) = A$. Then you have $A^2 = N^2 + 21$ i.e. $(A+N)(A-N) = 21 = 21 \times 1= 7 \times 3$. Now work out the different possibilities with the constraint that you are dealing with positive integers. The first one i.e. $21 \times 1$, gives us $A = 11$ and $N = 10$, while the second one i.e. $7 \times 3$ gives us $A = 5$ and $N = 2$. $A = 11$ cannot be expressed as sum of two squares since it is $3 \bmod 4$. $A=5$ implies that $L^2 + M^2 = 5$ which gives us $(L,M) = (2,1)$ or $(L,M) = (1,2)$. Hence, $(2,1,2)$ and $(1,2,2)$ are the only solutions.
How to prove it when f(x,y) is not given
The curves $y=\frac{x^2}{4a}$ and $y=2\sqrt{ax}$ intersect at $(4a,4a)$ Now, $0\le x\le4a \implies \color{blue}{0\le y\le4a}$ in the required region. The given equations can be rewritten as, $y = \frac{x^2}{4a} \to \color{blue}{x = 2\sqrt{ay}}$ $y = 2\sqrt{ax} \to \color{blue}{x = \frac{y^2}{4a}}$ Just substitute these limits for $x$ and $y$ and interchange the order of integration.
Finding the matrix representation of $T$
When we are interested in the matrix representation of a linear transformation, we have to choose a basis for the domain and the codomain. In your case, the value of $T$ has been given at the vectors $(1,1)$ and $(0,1)$, whose images are described in terms of the standard basis $\mathcal{B}_{2} = \{(1,0,0),(0,1,0),(0,0,1)\}$ of $\mathbb{R}^{3}$. If you are interested in the representation of $T$ in terms of $\mathcal{B}_{1} = \{(1,0),(0,1)\}$, notice that \begin{align*} T(1,0) = T((1,1) - (0,1)) = T(1,1) - T(0,1) = (-1,\pi,-1) \end{align*} Consequently, the sought matrix representation is given by: \begin{align*} [T]_{\mathcal{B}_{1}}^{\mathcal{B}_{2}} = \begin{bmatrix} -1 & 4\\ \pi & 0\\ -1 & 1 \end{bmatrix} \end{align*} Hopefully this helps.
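As a sanity check (assuming, from the given data, that $T(0,1)=(4,0,1)$ and hence $T(1,1)=(3,\pi,0)$): $$\begin{bmatrix} -1 & 4\\ \pi & 0\\ -1 & 1 \end{bmatrix}\begin{bmatrix}1\\1\end{bmatrix}=\begin{bmatrix}3\\ \pi\\ 0\end{bmatrix},$$ which recovers the prescribed value of $T(1,1)$.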
Can I use the pseudoinverse of a Jacobian like I think I can?
If you take a surface $z=f(x,y)$ and re-write it as $x=g(z,y)$, then ${{\partial z} \over{ \partial x}}=f_x$ and ${{\partial x} \over{ \partial z}}=g_z$ will be reciprocals of each other, since they are slopes (for a curve at fixed $y$) measured as $z$ changes with $x$, or as $x$ changes with $z$.
Counting of unique vector combinations in a projective Hilbert space
The coefficient of every basis vector is one of $-1,0,1$, so there are $3^n$ such choices in total. Excluding $0$, there are $3^n-1$ choices. Projectivising means we only double-counted (since we started with a basis), giving the answer $\frac12(3^n-1)$. Your attempt only counted those with every coefficient nonzero, and counted each combination $n!$ times.
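For example, with $n=2$ the count is $\frac12(3^2-1)=4$, with class representatives $$(1,0),\quad(0,1),\quad(1,1),\quad(1,-1).$$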
Closures of the following subsets of the ordered square
First $E = \{\frac{1}{2}\} \times (0,1)$. A short check on cases shows that indeed all basic neighbourhoods of $(\frac{1}{2},0)$ and $(\frac{1}{2}, 1)$ have points in $E$, and so they lie in $\overline{E}$. And as $\{\frac{1}{2}\} \times [0,1]$ is closed (as a closed interval in an ordered space) we see that $E$ plus these two points is the whole closure. As to $D = (0,1) \times \{\frac{1}{2}\}$, the seeming endpoints $(1, \frac{1}{2})$ and $(0, \frac{1}{2})$ are not in the closure: $((0,0), (0,1))$ is an open neighbourhood of $(0,\frac{1}{2})$ (an open interval) that misses $D$, and similarly for $((1, 0), (1,1))$ and $(1, \frac{1}{2})$. But consider $(a,0)$ for $a \in (0,1]$: a basic neighbourhood of that point is an open interval (it’s not one of the two endpoints), with $(b,c) < (a,0) < (d,e)$ as endpoints. We cannot have $b=a$, as then we’d have $c < 0$, which cannot be, so $b < a$. Pick some $b’$ with $b < b’ < a$. Then $(b,c) < (b’, \frac{1}{2}) < (a,0)$ (all determined by the first coordinate), and so this open interval intersects $D$. This shows that $(0,1] \times \{0\} \subseteq \overline{D}$ as well. A similar argument can be made symmetrically at the top edge to show that $[0,1) \times \{1\} \subseteq \overline{D}$ as well. I claim that $\overline{D} = C:= D \cup (0,1]\times \{0\} \cup [0,1) \times \{1\}$; one inclusion we have seen, and the other follows from the fact that $$[0,1] \times [0,1]\setminus C = [(0,0),(0,1)) \cup \bigcup_{x \in (0,1)} \left[((x,0), (x,\tfrac{1}{2})) \cup ((x,\tfrac{1}{2}),(x,1))\right] \cup ((1,0),(1,1)]$$ which is a union of basic open sets of $[0,1]\times [0,1]$, and so $C$ is closed.
Trace identity of product of matrices
For $n \times n$ matrices, $n \geq 2$, there is not: Taking $A = B = 0_{n \times n}$ gives $\text{tr } A = \text{tr } B = \text{tr} (AB) = 0$, so if there were such a function, it would satisfy $f(0, 0) = 0$. On the other hand, if $n = 2$, taking $A = B = \begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}$ gives $\text{tr } A = \text{tr } B = 0$ but $\text{tr}(AB) = \text{tr} \begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix} = 2$, so $f(0, 0) = 2$, a contradiction. For $n > 2$, taking $A = B = \begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix} \oplus 0_{(n - 2) \times (n - 2)}$ leads to the same contradiction. For $1 \times 1$ matrices (as voldemort points out), we do indeed have $\text{tr}(AB)=\text{tr } A \cdot \text{tr } B$ (in fact, the trace in this dimension coincides with the obvious isomorphism of the space of $1 \times 1$ matrices with the space of scalars), so $f(x, y) = xy$ satisfies the condition.
need help to get $Cov(XY,Z)$
You do not seem to have enough information. It can be shown (I used a developmental version of mathStatica here to automate, but one could do this manually) that: $$\text{Cov}(XY, Z) = \mathbb{E}[X] \text{Cov}(Y,Z)+\mathbb{E}[Y] \text{Cov}(X,Z)+\mu _{1,1,1}$$ where $$\mu _{1,1,1}=\mathbb{E}\left[\;(X-\mathbb{E}[X]) \; (Y-\mathbb{E}[Y]) \;(Z-\mathbb{E}[Z]) \; \right]$$ You know all the bivariate covariances, but you would still need to know the raw means $\mathbb{E}[X]$, $\mathbb{E}[Y]$ and the product central moment $\mu _{1,1,1}$ or ... the product raw moment $\mathbb{E}[X Y Z]$ and $\mathbb{E}[X]$, $\mathbb{E}[Y]$, $\mathbb{E}[Z]$.
Finding Length of Arc
The problem is that $$dx=t\sqrt{t^2+4}dt$$ and you have to integrate w.r.t $x$
Need better explanation for part of trig substitution problem with integral
The first integral has the dummy variable $x$ which goes from $0$ to $\frac{3\sqrt 3}2$. The substitution $x=\frac 32\tan\theta$ is then made, so the limits are now for $\theta$ from $$x_1=0=\frac 32\tan\theta_1$$ $$0=\tan\theta_1$$ $$\theta_1=0$$ to $$x_2=\frac{3\sqrt 3}2=\frac 32\tan\theta_2$$ $$\sqrt 3=\tan\theta_2$$ $$\theta_2=\frac{\pi}3$$ So the next integral has $\theta$ from $0$ to $\frac{\pi}3$. The next substitution is $u=\cos\theta$. So the limits are now for $u$ from $$u_1=\cos\theta_1=\cos 0$$ $$u_1=1$$ to $$u_2=\cos\theta_2=\cos\frac{\pi}3$$ $$u_2=\frac 12$$ So the next integral has $u$ from $1$ to $\frac 12$. These all agree with what you see in your text. One strange thing is that the last integral, for $u$, runs from a larger value to a smaller value. This is due to the substitution $u=\cos\theta$, since the cosine function is a decreasing function in the first quadrant: a larger $\theta$ corresponds to a smaller $\cos\theta$. The rules of calculus tell us that is just fine, and everything will work out fine in the end. As it does in your case: the integral still ended up being positive. Another strange thing is a difference in the two substitutions. The first substitution $x=\frac 32\tan\theta$ gives the old variable in terms of the new, while the second substitution $u=\cos\theta$ gives the new variable in terms of the old. Therefore the methods of finding the new bounds differ, as you see in my equations above. Again, this is legitimate, if confusing, since both functions from old to new variable are one-to-one (at least in the ranges used, in the first quadrant), so we can describe the relationships between the variables in either direction. Doing the substitutions in the way shown in your text made the integrals easier, though understanding the new bounds became harder. Is it all clear now?
Variance Combinations of tomatoes
We are using indicator random variables. Let $X_i$ be the indicator that box $i$ contains differently coloured tomatoes. Then $X=\sum_{i=1}^n X_i$. For any box $i$ the probability that $X_i=1$ is $\binom n1^2/\binom{2n}2$ or $n/(2n-1)$. This is $\mathsf E(X_i)$. Now as you had: $$\begin{align}\mathsf E(X)&=\mathsf E(\sum_i X_i) \\ &= n~\mathsf E(X_1)\\ &= n^2/(2n-1)\end{align}$$ Similarly for any box $i$ the probability that $X_i=1$ is also $\mathsf E(X_i^2)$, since the indicator variable is either $0$ or $1$: $\mathsf E(X_i^2)=1^2\mathsf P(X_i=1)+0^2\mathsf P(X_i=0)$. Next, for any two distinct boxes $i,j$, the probability that $X_i=1$ and $X_j=1$ is something. This is $\mathsf E(X_iX_j)$. So we find $$\begin{align}\mathsf E(X^2)&=\mathsf E((\sum_{i=1}^n X_i)(\sum_{j=1}^n X_j)) \\ &= \sum_{i=1}^n \mathsf E(X_i^2)+2\sum_{i=2}^{n}\sum_{j=1}^{i-1}\mathsf E(X_iX_j) \\ &= n^2/(2n-1) + n(n-1)\mathsf E(X_1X_2) \end{align}$$ Now what is the probability that two particular boxes each contain differently coloured tomatoes? Back to you. Count the ways to select two from $n$ red tomatoes, two from $n$ green tomatoes, and arrange them into two boxes of two differently coloured tomatoes. Count the ways to select four from $2n$ tomatoes and arrange them into two boxes of two tomatoes. Divide and calculate.
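For reference, a hedged sketch of that count (worth re-deriving yourself): filling the two boxes in order, $$\mathsf E(X_1X_2)=\mathsf P(X_1=1,\,X_2=1)=\frac{n\cdot n}{\binom{2n}2}\cdot\frac{(n-1)(n-1)}{\binom{2n-2}2}=\frac{n(n-1)}{(2n-1)(2n-3)}.$$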
Prove a function $f:(0, \infty)\to(0, \infty)$ can't exist if $(f(x))^2\ge f(x+y)(f(x)+y)$
The idea behind the proof is as follows. $f(x)$ is a decreasing positive function with limit $0$ at infinity. We try to show that, with the inequality stated above, $f(x)$ decreases so fast that it cannot stay above zero for all time. First observe that $f$ is monotonically decreasing: $$ (f(x))^2\geq f(x+y)((f(x)+y))\geq f(x+y)f(x)\implies f(x)\geq f(x+y) $$ Secondly observe that: $$ (f(x))^2\geq f(x+y)((f(x)+y))\geq f(x+y)y\implies f(x+y)\leq \frac{f(x)^2}{y} $$ Therefore $\displaystyle\lim_{x\to+\infty}f(x)=0$. Now because $f$ is monotonically decreasing, it has at most countably many points of discontinuity, and it is differentiable almost everywhere. Suppose for the rest that the values of $x,y$ are chosen so that $x$ lies in the set where $f$ is differentiable, denoted by $\mathcal C$. We have: $$ \frac{f(x+y)-f(x)}{y}+\frac{f(x+y)}{f(x)}\leq 0 $$ When $y\to 0$ for $x\in\mathcal C$, we have: $$ f'(x)+1\leq 0. $$ Therefore $g(x)=f(x)+x$ is decreasing on $\mathcal C$, and because $f(x)$ and $x$ are positive, $g(x)\geq 0$ is bounded from below and hence should converge to some $L<\infty$ as $x\to\infty$. But: $$ \displaystyle\lim_{x\to+\infty}g(x)=\displaystyle\lim_{x\to+\infty}f(x)+\displaystyle\lim_{x\to+\infty}x=0+\infty=\infty $$ which is a contradiction.
Prove $|x-1|+|x+5| \ge 6$ for all real numbers
We can rewrite this as $$ |x-1| + |x-(-5)| \geq 6 $$ and then it becomes apparent that, geometrically, we are looking at the sum of two distances on the number line: the distance between $x$ and $1$, and the distance between $x$ and $-5$. If $x$ is between $-5$ and $1$, then this sum is just the distance between $-5$ and $1$; and if $x$ is less than $-5$ or greater than $1$, the sum is greater than that distance. Since the distance between $-5$ and $1$ is already $6$, the sum must be greater than or equal to $6$.
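A tiny exact-arithmetic spot check of the inequality at many sample points (just an illustration, not a proof):

```python
from fractions import Fraction

# Sample x from -10 to 10 in steps of 1/10, using exact rationals to
# avoid floating-point noise at the boundary value 6.
xs = [Fraction(i, 10) for i in range(-100, 101)]
assert all(abs(x - 1) + abs(x + 5) >= 6 for x in xs)
print("inequality holds at all", len(xs), "sample points")
```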
Convergence of series with sum
First thing to note is that since the range of $f$ lies inside a compact set, we have $\sup_{y \in \operatorname{Ran}(f)} \|y\| \le M$ for some $M$. Thus for any choice of points $z_1, z_2, z_3, \ldots$ the series converges: $$\left\|\sum_{i=0}^\infty \alpha^i f(z_i)\right\| \le M \sum_{i=0}^\infty \alpha^i = \frac{M}{1-\alpha}.$$ Now since the series converges absolutely, it converges (from the theory of Banach spaces: in a complete space, absolute convergence implies convergence). The sum on top of your recursive formula is dominated by a sum of this type, and so converges as well. Note that $\alpha$ does not depend on $\mathcal{K}$ at all, only on the fact that $\mathcal{K}$ is bounded, though the rate of convergence does depend on $\mathcal{K}$.
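For intuition, here is a one-dimensional numeric illustration of the bound, taking hypothetical values $M=2$, $\alpha=0.9$ and arbitrary points with $|f(z_i)|\le M$:

```python
import random

M, alpha = 2.0, 0.9
# Arbitrary values with |f(z_i)| <= M stand in for the f(z_i).
terms = [alpha**i * random.uniform(-M, M) for i in range(10_000)]
partial_sum = sum(terms)
print(abs(partial_sum), "<=", M / (1 - alpha))  # bound: 20.0
```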
Ring Homomorphism in field of rational functions of x and y with rational coefficients
The $f$ you are given should have the phrase "extended by linearity" following the specification of its action on $x$ and $y$. Those words tell you that the two specified rewrites happen to every $x$ and to every $y$ in an element of $F$. For instance, $f(x+y) = f(x)+f(y)$ and $f(x \cdot y) = f(x)f(y)$. Unfortunately, sometimes those words are omitted. And strictly speaking, you are correct: without those words, $f$ is only defined at two points of $F$. However, extension by linearity is required and may (when context demands) be assumed. Otherwise, this map has no hope of being a homomorphism, much less an automorphism.
Union and Intersection of Indexed Family
The union over an empty index set should be any set rather than empty. There are mainly two reasons for this. First, if all the $A_{\lambda}$ are the same, then to be consistent with the idempotence of union the result should be $A_{\lambda}$ even if the index set is empty. More precisely, if for all $\lambda \in \Lambda$, $A_{\lambda}=A$, then $$ \bigcup_{\lambda\in\Lambda} A_\lambda=A. $$ Thus the union over an empty index set $\Phi$ should be $$ \bigcup_{\lambda\in\Phi} A_\lambda=A. $$ Second, if the union over an empty index set were empty, then, as you see, the intersection would be the universe by De Morgan's law, which makes no sense, since intuitively a union is always at least as large as the corresponding intersection.
Composition of morphisms under adjunction
An adjunction is not just an isomorphism in the category of sets. It's actually a family of bijections $$\alpha(N,M):hom_{\mathcal{G}}(SN,M) \to hom_\mathcal{H}(N,TM),$$ for each $N\in Obj(\mathcal{H})$ and $M\in Obj(\mathcal{G})$, which must be natural with respect to $N$ and $M$, in the sense that this family defines a natural isomorphism $$\alpha:hom_{\mathcal{G}}(S\_,\_) \to hom_\mathcal{H}(\_,T\_)$$ in the category of functors $\mathcal{H}^{op}\times \mathcal{G}\to\mathbf{Set}$ (in your case you could replace $\mathbf{Set}$ with $\mathbf{Ab}$). This naturality condition is precisely the equality in your question (and its dual); see this question for more details.
Find the angle between two integral curves at a given point.
Let the angle be $\theta$. Then, $$\tan\theta = \tan(\theta_2-\theta_1) = \frac{\tan\theta_2-\tan\theta_1}{1+\tan\theta_2\tan\theta_1} =\frac{x_2'-x_1'}{1+x_2'x_1'} =\frac{(2x+t^2)-(x-t^2)}{1+(2x+t^2)(x-t^2)}$$ Now, substitute $t=1$ and $x=-2$.
How to expand this expression
This is not completely correct. The scalar product is just a number, so normal rules like the binomial formula apply, but you have a triangle inequality in there: $$|\langle a,b\rangle +\langle c,d\rangle|^2\le(|\langle a,b\rangle|+|\langle c,d\rangle |)^2=|\langle a,b\rangle|^2+2|\langle a,b\rangle||\langle c,d\rangle|+|\langle c,d\rangle|^2$$ So it holds as an inequality, but not as an equality.
Can you win the monochromatic urn game?
The game is unwinnable iff the largest number is greater than or equal to the sum of all the others plus $2$. If the largest number is this big, then there are too few balls in the other vases to separate all the balls from this vase. If there are fewer balls than this in the largest vase, we use induction to prove the game is winnable. First, if there is only $1$ ball the game is trivially winnable, and if there are $2$ balls they are in different vases, so the game is again winnable. Suppose the game is winnable whenever there are at most $n$ balls. If there are $n+1$ balls, then remove a ball from the vase with the largest number and a ball from any other non-empty vase. Note that if a different vase now has the largest number, it can have at most $1$ more than the previous largest. The largest number still satisfies the condition, and the smaller game is winnable.
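Here is a small sketch of that greedy strategy, under the assumption (from the induction above) that a move removes one ball from a largest vase and one from some other non-empty vase, and that you win by getting down to at most one ball:

```python
def winnable(counts):
    counts = list(counts)
    while sum(counts) > 1:
        counts.sort(reverse=True)
        if counts[1] == 0:   # stuck: all remaining balls are in one vase
            return False
        counts[0] -= 1       # take from a largest vase...
        counts[1] -= 1       # ...and from the next largest
    return True

# Compare with the criterion: winnable iff max <= sum(others) + 1.
for counts in [(3, 1, 1), (4, 1, 1), (5, 2, 2), (1, 1)]:
    criterion = max(counts) <= sum(counts) - max(counts) + 1
    print(counts, winnable(counts), criterion)
```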
Find missing entries $f \circ g(x)$ calculus composite functions
Hint: $$x=(f\circ g)(x)=f(g(x))=\frac{g(x)+1}{g(x)}\tag1$$ On the basis of $(1)$, find an expression for $g(x)$ in terms of $x$.
Definition of $\oint_{\gamma}\left(f\,{\rm d}x + g\,{\rm d}y\right)$
The typical definition is $$\int_{\gamma}(f\,\mathrm{d}x+ g\,\mathrm{d}y) = \int_{a}^{b}\vec{F}(\gamma(t))\cdot\dot{\gamma}(t)\,\mathrm{d}t,$$ where $\vec{F} = (f,g)$, $\gamma\colon[a,b]\to\mathbb{R}^{2}$, and the dot above the $\gamma$ denotes (component-wise) differentiation. The idea is that the integrand is $\vec{F}\cdot\mathrm{d}\gamma$ by abuse of notation and the chain rule, and then writing $\mathrm{d}\gamma = (\mathrm{d}x,\mathrm{d}y)$ gives the notation in question.
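As a concrete numeric illustration of the definition, take the hypothetical example $f(x,y)=-y$, $g(x,y)=x$ and $\gamma(t)=(\cos t,\sin t)$ on $[0,2\pi]$, for which the integral equals $2\pi$:

```python
import numpy as np

t = np.linspace(0.0, 2 * np.pi, 100_001)
x, y = np.cos(t), np.sin(t)
# Approximate the derivatives of the components of gamma.
dxdt, dydt = np.gradient(x, t), np.gradient(y, t)
# Integrand F(gamma(t)) . gamma'(t) with F = (f, g) = (-y, x).
integrand = -y * dxdt + x * dydt
# Trapezoid rule, written out by hand for portability.
integral = np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(t))
print(integral)  # ~ 6.2832 = 2*pi
```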
Sublinearity and concavity of a function
No, take $f(n) = \sqrt{n}+\sin(n)$ as an example.
Is a Sudoku a Cayley table for a group?
A Cayley table for a group can never be a Sudoku. Assume you have a $9 \times 9$ Cayley table for a group of order $9$, and say your identity element is at index $1 \le i \le 9$. Then row $i$ and column $i$ are symmetric to each other because they correspond to multiplication with the identity. In particular, if you look at the $3\times3$ sub-square containing the element with coordinates $(i,i)$, this square has duplicates (because it contains symmetric elements of row $i$ and column $i$). So the table isn't a Sudoku. If you allow swapping rows or columns, however, it is possible. Take the table of $G = \mathbb{Z}/9\mathbb{Z}$ (I wrote $0$ instead of $9$ for convenience): $$ \begin{array}{r|lllllllll} +&0&1&2&3&4&5&6&7&8\\ \hline 0&0&1&2&3&4&5&6&7&8\\ 1&1&2&3&4&5&6&7&8&0\\ 2&2&3&4&5&6&7&8&0&1\\ 3&3&4&5&6&7&8&0&1&2\\ 4&4&5&6&7&8&0&1&2&3\\ 5&5&6&7&8&0&1&2&3&4\\ 6&6&7&8&0&1&2&3&4&5\\ 7&7&8&0&1&2&3&4&5&6\\ 8&8&0&1&2&3&4&5&6&7\\ \end{array}$$ And swap the rows into the order $0,3,6,1,4,7,2,5,8$ to obtain the Sudoku $$ \begin{array}{r|lll|lll|lll} +&0&1&2&3&4&5&6&7&8\\ \hline 0&0&1&2&3&4&5&6&7&8\\ 3&3&4&5&6&7&8&0&1&2\\ 6&6&7&8&0&1&2&3&4&5\\ \hline 1&1&2&3&4&5&6&7&8&0\\ 4&4&5&6&7&8&0&1&2&3\\ 7&7&8&0&1&2&3&4&5&6\\ \hline 2&2&3&4&5&6&7&8&0&1\\ 5&5&6&7&8&0&1&2&3&4\\ 8&8&0&1&2&3&4&5&6&7\\ \end{array}$$
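A quick machine check that the row-permuted table really is a Sudoku (every row, column and $3\times3$ block contains each of $0,\dots,8$ exactly once):

```python
order = [0, 3, 6, 1, 4, 7, 2, 5, 8]           # the row order used above
grid = [[(r + c) % 9 for c in range(9)] for r in order]

digits = list(range(9))
rows_ok = all(sorted(row) == digits for row in grid)
cols_ok = all(sorted(col) == digits for col in zip(*grid))
blocks_ok = all(
    sorted(grid[3 * br + i][3 * bc + j] for i in range(3) for j in range(3)) == digits
    for br in range(3) for bc in range(3)
)
print(rows_ok, cols_ok, blocks_ok)  # True True True
```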
How to create an accurate baseline mean when outliers exist in a large dataset
A common definition of an outlier (Tukey's rule) is a point more than $1.5$ times the interquartile range outside the 1st and 3rd quartiles. Use all the data points to determine the quartiles, and eliminate the outliers. Determine the mean and standard deviation from the remaining data. A large quantity of outliers is a bit of a contradiction in terms, and may just mean the standard deviation is large.
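A minimal sketch of that procedure in Python (the data here is made up for illustration; the $1.5$ multiplier is the conventional choice, not something dictated by your dataset):

```python
import numpy as np

data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 55.0, 9.7, 10.3, -20.0])
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
# Keep only points inside the Tukey fences.
kept = data[(data >= q1 - 1.5 * iqr) & (data <= q3 + 1.5 * iqr)]
print(kept.mean(), kept.std(ddof=1))  # baseline mean and spread without outliers
```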
Solving $x^2 + y^2 = 1 + z^4$ with (x,y,z) = 1 and z < x < y
You don't have to worry about $(x,y,z)=1$ because it's automatic: if $x\equiv y\equiv z\equiv 0 \pmod{p}$ then your condition $x^2+y^2=z^4+1$ becomes $0\equiv1\pmod{p}$.

Edit: Modified code to cater for the missed constraint $z<x$ as per @alex.jordan's comment.

This brute-force Python fragment produces about 380 solutions in a few seconds on my laptop:

```python
for z in range(1, 300 + 1):
    target = z*z*z*z + 1
    for x in range(z + 1, int((target/2)**.5) + 1):
        y = int((target - x*x)**.5)
        if x*x + y*y == target:
            print("Solution: z=", z, "x=", x, "y=", y)
```

(Here `(blah)**.5` is Python for $\sqrt{\text{blah}}$, and `int(blah)` is `floor(blah)`.) Basically, for a given $z$ and the viable $x$ values, you test the single plausible $y$ value of $\left\lfloor\sqrt{z^4+1-x^2}\right\rfloor$.

If you need to find even more solutions efficiently, I think you'll have to factor $z^4+1$ into primes of the form $4k+1$ (plus an optional factor of $2$), decompose the primes into sums of squares, then recombine those solutions. It'd be faster, but it would take a lot more code, and you'd still be limited by the size of the numbers you can factor, which probably won't go much beyond the $300^4+1$ in the code fragment above.

Incidentally, there will always be at least one solution to $x^2+y^2=z^4+1$ for each $z$, since $z^4+1$ is a product of primes of the form $4k+1$, plus an optional factor of $2$. For a prime $p$ of the form $4k+3$, $z^4+1\equiv0 \pmod{p}$ has no solutions, since $x^2\equiv-1\pmod{p}$ has no solutions. And $z^4+1\equiv0\pmod{4}$ has no solutions, since $z^4\equiv3\pmod{4}$ has no solutions, so there is at most one factor of $2$ in $z^4+1$. Any number can be written as the sum of two squares iff it is the product of powers of $2$ and $4k+1$-type primes and even powers of $4k+3$-type primes. The constraint $z<x$ doesn't always hold in the solutions, though (thanks to alex for the remark in the comment below).
Proving convergence of $\sum (\sin n^2) (1-\cos \frac{1}{3n} )$
Remember that $|\sin x| \le 1$ for all $x \in \Bbb R$ and that $1 - \cos x = 2 \sin^2 \frac x 2$. Knowing these and slipping the modulus inside the sum, we may write $$\left| \sum _{n \ge 1} (\sin n^2) \left( 1 - \cos \frac 1 {3n} \right) \right| \le \sum _{n \ge 1} \underbrace{| \sin n^2 |} _{\le 1} \cdot \left| 2 \sin^2 \frac 1 {6n} \right| \le \sum _{n \ge 1} \left| 2 \sin^2 \frac 1 {6n} \right| = 2 \sum _{n \ge 1} \sin^2 \frac 1 {6n} .$$ For the last series, remember that $\lim _{x \to 0} \frac {\sin x} x = 1$, which means (by the limit comparison test) that the last series obtained has the same behaviour as $$2 \sum _{n \ge 1} \left( \frac 1 {6n} \right)^2 = \frac 1 {18} \sum _{n \ge 1} \frac 1 {n^2},$$ which is known to be convergent; therefore the series given in the problem is convergent too (indeed, absolutely convergent).
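A quick numeric look at the partial sums is consistent with this; the tail beyond $N$ is bounded by $\sum_{n>N} \frac{1}{18n^2} \approx \frac{1}{18N}$, so they settle fast:

```python
import math

partial = 0.0
for n in range(1, 200_001):
    partial += math.sin(n**2) * (1 - math.cos(1 / (3 * n)))
    if n in (10, 100, 1_000, 10_000, 100_000, 200_000):
        print(n, partial)  # partial sums visibly stabilising
```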
Balanced tree number of nodes
Look here: http://en.wikipedia.org/wiki/Geometric_series#Sum. Level by level, a balanced binary tree has at most $2^i$ nodes at depth $i$, so the total number of nodes in a tree of height $h$ is at most the geometric sum $\sum_{i=0}^{h}2^i = 2^{h+1}-1$ (with equality for a perfect tree).
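Assuming the question is about node counts of a perfect binary tree, a two-line check of that geometric sum:

```python
# Sum of 2^i for i = 0..h equals 2^(h+1) - 1 (geometric series).
for h in range(10):
    assert sum(2**i for i in range(h + 1)) == 2**(h + 1) - 1
print("checked h = 0..9")
```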
Proving the relationship between Weibull and Exponential Density Functions
1. If $Y$ is an RV with the Exponential distribution, the pdf of $Y$ is $$f_Y(y) = \lambda e^{-\lambda y},\ 0\leq y$$ and the CDF is $$F_Y(y)=\Pr\{Y\leq y\} =1-e^{-\lambda y},\ 0\leq y$$ and the mean, $E(Y)$, is $$E(Y) = \frac{1}{\lambda} = \beta = \alpha$$
2. We know $$W=\sqrt{Y}$$ and this is the CDF definition of $W$: $$ \begin{align} F_W(w)&=\Pr\{W\leq w\}\\ \\ &=\Pr\Big\{\sqrt{Y}\leq w \Big\}\\ \\ &=\Pr\Big\{Y\leq w^2 \Big\} \\ \\ & = F_Y(w^2)\\ \\ & = 1-e^{-\lambda w^2} \end{align} $$ so $$F_W(w)=1-e^{-\lambda w^2}$$
3. Calculating the pdf from the CDF, $f_Y(y) = \frac{d}{dy}F_Y(y)$, so $$ \begin{align} f_W(w) &= \frac{d}{dw}F_W(w)\\ \\ &=\frac{d}{dw}\Big(1-e^{-\lambda w^2}\Big)\\ \\ &=2 \cdot \lambda \cdot w\, e^{-\lambda w^2} \end{align} $$ and $\frac{1}{\lambda} = \beta = \alpha$, so $$ f_W(w) = 2 \cdot \frac{1}{\beta} \cdot w\, e^{-\frac{1}{\beta} w^2}, $$ which is the Weibull density with $m=2$ and $\alpha=\beta$.
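A Monte Carlo check of the derived CDF, taking $\lambda=1$ for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 1.0
# W = sqrt(Y) with Y ~ Exponential(lam); note numpy's scale = 1/lambda.
w = np.sqrt(rng.exponential(scale=1 / lam, size=1_000_000))
for q in (0.5, 1.0, 1.5):
    print((w <= q).mean(), 1 - np.exp(-lam * q**2))  # empirical vs derived CDF
```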
Is it necessary to use sigmoid function in every layer of neural network?
Mathematically, artificial neural networks are just functions. You can apply whatever function you want at each neuron; the result is still a function. If you want to apply the sigmoid function only in the last layer, you can do that. However, activation functions have a certain purpose: they make a neural network more powerful. Observe that a composition of affine functions (i.e., functions of the form $f(x)= ax +b$) is still affine; hence, if you don't apply an activation function in your hidden layers at all, you basically render them useless. It is the same as if there were no hidden layers at all. That is why we use non-linear activation functions. Essentially, every neuron that doesn't have a non-linear activation function is useless.
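Here is a tiny numeric demonstration of that collapse: two affine layers with no activation agree exactly with the single affine layer $W_2W_1x + (W_2b_1+b_2)$ (the shapes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
x = rng.normal(size=3)

two_layers = W2 @ (W1 @ x + b1) + b2          # "network" with a hidden layer
one_layer = (W2 @ W1) @ x + (W2 @ b1 + b2)    # equivalent single affine map
print(np.allclose(two_layers, one_layer))     # True
```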
Find all polynomials $P(x) \in \mathbb R[x]$ such that $P(x^2) = P(x)[P(x) - 2x], \forall x \in \mathbb R$.
Suppose $$P(x)=ax^n+bx^m+\text{lower order terms}, \text{ where } ab\ne0 \text{ and } n>m>1.$$ Then comparing terms of degree $n+m$ in $P(x^2)=P(x)\left(P(x)-2x\right)$ gives $0=2ab$, which is impossible. So we can suppose that $$P(x)=ax^n+bx+c,\quad n>1.$$ $$ax^{2n}+bx^2+c=\left (ax^n+bx+c\right)\left (ax^n+(b-2)x+c\right).$$ $$a(a-1)x^{2n}+2a(b-1)x^{n+1}+2acx^n+b(b-3)x^2+2c(b-1)x+c(c-1)=0.$$ All coefficients must be zero; note, however, that if $n=2$ then the coefficient of $x^2$ is $2ac+b(b-3)$. If $a\ne0$, then $a=b=1$, and so $n=2$ and $c=1$: $P(x)=x^2+x+1$. If $a=0$, then $b(b-3)=c(b-1)=0$, and so $c=0$ and $b=0$ or $3$: $P(x)=0$ or $3x$.
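A quick symbolic check of the three solutions with SymPy:

```python
from sympy import symbols, expand, Integer

x = symbols('x')
# Verify P(x^2) = P(x) (P(x) - 2x) for P = 0, 3x, x^2 + x + 1.
for P in (Integer(0), 3 * x, x**2 + x + 1):
    lhs = P.subs(x, x**2)
    rhs = expand(P * (P - 2 * x))
    print(P, expand(lhs - rhs) == 0)  # True for all three
```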
How to find number of real and complex roots?
Hints: fill in the reasons for the following. As the polynomial's degree is odd ($\;3\;$), it has at least one real root (why?). $$\begin{cases}f(0)f(1)<0\\{}\\f(2)f(3)<0\end{cases}\;\implies\;\text{there's one real root in each of the intervals}\;(0,1)\;,\;(2,3)\;(\text{why?})$$ and since the complex roots of real polynomials appear in conjugate pairs (why?), the correct option is (b).
Proving correctness of recursive list reversing algorithm.
Intuitively it seems like a perfect application for proof by induction. First prove that it works for zero-length lists, which is pretty immediate just by inspecting the first few code lines. Now assume that it reverses any list of length up to $n$, and introduce an abstract list of length $n+1$. For convenience, view the list as constructed by appending an element to the head ("first"). By inspecting the relatively simple code again, we see that the third statement down must reverse the $n$ trailing elements; this has to be true by the induction hypothesis (see the previous paragraph). Now the remainder of the code puts the only outstanding element ("first") at the back of the list. first.next clearly still refers to the same element it did upon program start, since "first" hasn't been modified yet, even by the recursive call, and by induction the node first.next is already at the end of the reversed tail. So now the line first.next.next = ... is well understood to put first at the end, and finally first.next = null is needed to have a valid linked list. (A hypothetical rendering of the routine appears below.) The above is a proof by induction, albeit with a lot of English and over-explanation. However, you should probably also run a few examples through such an algorithm and see that it works; if it ever fails, you'll find the error soon enough when you're debugging. I am not certain whether your recursive algorithm is optimal. Also, check your language: Python, for instance, has built-in list reversing.
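For concreteness, here is the hypothetical Python rendering promised above (names follow the text; your original may differ in language and details):

```python
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def reverse(first):
    if first is None or first.next is None:  # lists of length 0 or 1
        return first
    rest = reverse(first.next)               # reverses the n trailing elements
    first.next.next = first                  # first.next is the tail of the reversed rest
    first.next = None                        # first becomes the new tail
    return rest

# Quick check: 1 -> 2 -> 3 reverses to 3 -> 2 -> 1.
head = Node(1, Node(2, Node(3)))
node, values = reverse(head), []
while node:
    values.append(node.value)
    node = node.next
print(values)  # [3, 2, 1]
```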
If $X \sim \mathrm{Poi}(\lambda)$, prove that for $\lambda > i$, $P(X \le i) \le \frac{e^{-\lambda} (e\lambda)^i}{i^i}$
You can use a Chernoff bound to derive this. For $s>0$, $$P(X \leq i) = P(-sX \geq -s i) = P(e^{-sX} \geq e^{-si}) \leq \frac{E[e^{-sX}]}{e^{-si}} = e^{s i} e^{\lambda (e^{-s}-1)},$$ where the inequality is Markov's inequality. Now optimize this bound over $s>0$ under the assumption that $\lambda>i$: the exponent $si+\lambda(e^{-s}-1)$ is minimized at $s=\ln(\lambda/i)$, which is positive precisely because $\lambda>i$, and substituting gives the stated bound $e^{-\lambda}(e\lambda)^i/i^i$.
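A quick numeric spot check of the final bound, with values chosen arbitrarily so that $\lambda>i$:

```python
import math

lam, i = 10.0, 4
# Exact Poisson CDF at i versus the Chernoff-style bound.
cdf = sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(i + 1))
bound = math.exp(-lam) * (math.e * lam)**i / i**i
print(cdf, bound)  # ~0.029 <= ~0.097
```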
array equalizer by multiplication
In order to solve this you need to look at the prime factorisations. The array of three numbers should contain the distinct prime factors of the numbers in the other array. However, if some prime only occurs as a common factor raised to some power(s), then that factor can be used as well in the first array. For example, $12$ and $144$, which are $2^2\times3$ and $(2^2\times3)^2$. In your example there are more prime factors than three, so there will be no solution.
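A small helper for the first step described above, reading off the distinct prime factors (illustrated on the $12$/$144$ example; the full task details live in the original question):

```python
def distinct_prime_factors(n):
    factors, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            factors.add(p)
            n //= p
        p += 1
    if n > 1:          # whatever remains is itself prime
        factors.add(n)
    return factors

print(distinct_prime_factors(12), distinct_prime_factors(144))  # {2, 3} and {2, 3}
```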