Solution to the limit of a series
For the limit, we take advantage of a difference of squares: given a factor of the form $a - b$, we multiply by $\dfrac{a+b}{a+b}$ to get $\dfrac{a^2 - b^2}{a+b}.$ Here, we multiply by $$\dfrac{\sqrt{n^2+3}+ \sqrt{n^2 - 1}}{\sqrt{n^2+3}+ \sqrt{n^2 - 1}}$$ $$n(\sqrt{n^2+3}-\sqrt{n^2-1})\cdot\dfrac{\sqrt{n^2+3}+ \sqrt{n^2 - 1}}{\sqrt{n^2+3}+ \sqrt{n^2 - 1}} = \dfrac{n[n^2 + 3 - (n^2 - 1)]}{\sqrt{n^2+3}+ \sqrt{n^2 - 1}}$$ Now simplify and evaluate.
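Carrying out the step left to the reader: dividing numerator and denominator by $n$, $$\dfrac{n[n^2 + 3 - (n^2 - 1)]}{\sqrt{n^2+3}+ \sqrt{n^2 - 1}} = \dfrac{4n}{\sqrt{n^2+3}+ \sqrt{n^2 - 1}} = \dfrac{4}{\sqrt{1+3/n^2}+ \sqrt{1-1/n^2}} \;\longrightarrow\; \dfrac{4}{1+1} = 2.$$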
generalized eigenvectors - questions
This is by the Cayley–Hamilton theorem: $0=\chi_\varphi(\varphi)=(-1)^n(\varphi-\lambda I)^n$
Elementary Geometry : Structure of presentation
Obviously, this depends on your presentation and the rules. However, examples always help in math. Explain it, show the equation, then give an example, preferably visual. Especially for geometry, make things visual. It's way easier to show things than just talk about them. As for your question about putting more emphasis on specific topics, it really depends on what the rules are. If I were you, I would go over the theorems that you feel are more special or useful, and prove the important ones, depending on how much time you have.
In how many ways can we split the set $\{a_{1},\dots,a_{9}, b_{1},\dots, b_{9}, c_{1},\dots, c_{9}\}$ into $9$ sets of shape $\{ a_{i}, b_{j}, c_{k} \}$?
For choosing the first "team" we have $9\cdot9\cdot9$ ways to pick it; the second team can be picked in $8\cdot8\cdot8$ ways. Following the pattern, we get a total of $(9!)^3$ ordered arrangements. However, this overcounts by a factor of $9!$, since we don't care about the order in which the teams were picked; it only matters which elements are on which team. Therefore, our answer is $\dfrac{(9!)^3}{9!} = (9!)^2$. Generalizing this: if we have the set $\{a_{1,1}, a_{1,2}, \dots, a_{1,k}, a_{2,1}, a_{2,2},\dots, a_{2,k}, \dots, a_{n,1}, a_{n,2}, \dots, a_{n,k}\}$ (so instead of $3$ letters we have $n$ letters, and instead of having $9$ of each letter we have $k$ of each letter) and we want to split it into $k$ sets of shape $\{a_{1,j_1},a_{2,j_2},\dots, a_{n,j_n}\}$, the number of ways to arrange them into teams is $$\frac{(k!)^n}{k!} = (k!)^{n-1}$$ (a quick brute-force check appears below).
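Here is that brute-force check of the $(k!)^{n-1}$ formula for small cases (my own sketch; the encoding of items as `(letter, index)` pairs is just for illustration):

```python
import math
from itertools import permutations, product

def count_splits(n, k):
    # Build each split by matching letters 2..n against letter 1's items:
    # team i consists of item (0, i) together with item (l+1, p[i]) for
    # each other letter l, where p is that letter's matching permutation.
    # Collecting the unordered splits in a set confirms no double counting.
    seen = set()
    for perms in product(permutations(range(k)), repeat=n - 1):
        split = frozenset(
            frozenset([(0, i)] + [(l + 1, p[i]) for l, p in enumerate(perms)])
            for i in range(k)
        )
        seen.add(split)
    return len(seen)

for n, k in [(2, 3), (3, 2), (3, 3)]:
    print((n, k), count_splits(n, k), math.factorial(k) ** (n - 1))
```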
Arc Length Clarification Question
Yes, you have applied the arc length formula correctly: $$ \int_a^b\sqrt{1+(f'(x))^2}\,dx $$ You then got an elliptic integral and you have the correct approximate value.
Evaluating $\int_{-\infty}^{\frac{x^2}{2}} e^{x-\frac{t^2}2}dt$ using the error function
Hint. Using $$\text{erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^x e^{-u^2}du$$ you may write $$ \begin{align} \int_{-\infty}^{\frac{x^2}{2}} e^{x-\frac{t^2}{2}}dt&=e^{x}\int_{-\infty}^{\frac{x^2}{2}} e^{-\frac{t^2}{2}}dt\\\\ &=\sqrt{2}\:e^{x}\int_{-\infty}^{\frac{x^2}{2\sqrt{2}}} e^{-u^2}du\\\\ &=\sqrt{2}\:e^{x}\left(\int_{-\infty}^{0} e^{-u^2}du+\int_{0}^{\frac{x^2}{2\sqrt{2}}} e^{-u^2}du\right)\\\\ &=e^x \sqrt{\frac{\pi }{2}} \left(1+\text{erf}\left(\frac{x^2}{2 \sqrt{2}}\right)\right). \end{align} $$
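A quick numerical sanity check of this closed form (my own sketch using SciPy; the test point `x = 1.3` is arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

x = 1.3  # arbitrary test point
numeric, _ = quad(lambda t: np.exp(x - t**2 / 2), -np.inf, x**2 / 2)
closed = np.exp(x) * np.sqrt(np.pi / 2) * (1 + erf(x**2 / (2 * np.sqrt(2))))
print(numeric, closed)  # the two values should agree to quadrature accuracy
```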
Can Green's theorem be used in a plane other than the xy-plane?
Green's theorem is just a special case of Stokes' theorem (which is just a special case of the generalized Stokes' theorem). $$\oint_{\partial M} d\ell \cdot F = \int_M dA \cdot (\nabla \times F)$$
What is the structure of the group $\langle x, y \mid xy=yx, x^m = y^n \rangle$
This is an Abelian group. In additive notation it would be $\left<x,y\mid mx-ny=0\right>$. The structure is given by the Smith Normal Form of the matrix $\pmatrix{m&-n}$. This is $\pmatrix{g&0}$ for $g=\gcd(m,n)$ and so $G\cong\Bbb Z/g\Bbb Z\oplus\Bbb Z$.
Proving arc length polar coordinate formula
If you parameterize the curve using $x=x(t)$, $y = y(t)$, $a \le t \le b$ the formula for arclength is $$L = \int_a^b \sqrt{x'(t)^2 + y'(t)^2} \, dt.$$ You can let $r(t)$, $\theta(t)$ denote the polar coordinates of the point $(x(t),y(t))$. Since $x(t) = r(t) \cos \theta(t)$ and $y(t) = r(t) \sin \theta(t)$ you get $$x'(t) = r'(t) \cos \theta(t) - r(t) \sin \theta(t) \theta'(t)$$and $$y'(t) = r'(t) \sin \theta(t) + r(t) \cos \theta(t) \theta'(t).$$ Square to get $$x'(t)^2 = r'(t)^2 \cos^2 \theta(t) - 2 r(t) r'(t) \cos \theta (t) \sin \theta(t) \theta'(t) + r(t)^2 \sin^2 \theta(t) \theta'(t)^2$$ $$y'(t)^2 = r'(t)^2 \sin^2 \theta(t) + 2 r(t) r'(t) \cos \theta (t) \sin \theta(t) \theta'(t) + r(t)^2 \cos^2 \theta(t) \theta'(t)^2$$ thus $$x'(t)^2 + y'(t)^2 = r'(t)^2 + r(t)^2 \theta'(t)^2$$ so that $$L = \int_a^b \sqrt{r'(t)^2 + r(t)^2 \theta'(t)^2} \, dt.$$
Proving sums of multinomial coefficients
The LHS is the number of ways to partition $n$ entities into $m$ sets. This can be counted by forming the $m$ subsets beforehand and then asking each element, one by one, which of them it wants to join. That gives $m$ choices for the first element, $m$ for the second, and so on, so the total number of ways is $\prod_{i=1}^n m=m^n$.
Divide line segment into 2 parts
If any value of $x$ is as likely as any other value of $x$ that you choose, then the probability is simply the fraction of the entire line segment that contains the points that satisfy the condition. You already have the correct parts of the line segment. What fraction of the total line segment is it? There's your answer.
A question about the category of functors
The category $D$ here has two objects $a,b$, so a functor $X:D\rightarrow C$ is determined by $X(a)$, $X(b)$, and the image of the map $i:a\rightarrow b$, which gives the morphism $X(i):X(a)\rightarrow X(b)$. Conversely, every morphism $f:U\rightarrow V$ of $C$ defines a functor $X:D\rightarrow C$ by $X(a)=U, X(b)=V$ and $X(i)=f$.
Question about sup norm
We need to show that $$ \max\big\{ |x_1|,...,|x_n|\big\} \,\,\stackrel{(1)}{\leq}\,\, \sqrt{ \sum x_i^2} \,\,\stackrel{(2)}{\leq}\,\, \sqrt{n} \max\big\{ |x_1|,...,|x_n|\big\} $$ For $(1)$, let $|x_j|=\max\big\{ |x_1|,...,|x_n|\big\}$, for some $j=1,\ldots,n$. Then $$ |x_j|^2\le |x_1|^2+\cdots+|x_n|^2, $$ and hence $$ \max\big\{ |x_1|,...,|x_n|\big\}=|x_j|\le \sqrt{|x_1|^2+\cdots+|x_n|^2}. $$ For $(2)$, if once again $|x_j|=\max\big\{ |x_1|,...,|x_n|\big\}$, then $$ |x_1|^2+\cdots+|x_n|^2\le \underbrace{|x_j|^2+|x_j|^2+\cdots+|x_j|^2}_{n\,\,\,\text{times}}=n|x_j|^2, $$ and thus $$ \sqrt{|x_1|^2+\cdots+|x_n|^2}\le \sqrt{n\,|x_j|^2}=\sqrt{n}\,|x_j|=\sqrt{n}\,\max\big\{ |x_1|,...,|x_n|\big\}. $$
Linear transformation -- Counterclockwise rotation of 45 degrees?
Suppose that $\|(x,y)\|=r$. Then there exists some angle $\theta^\circ$ such that $(x,y) = r(\cos \theta^\circ, \sin \theta^\circ)$. Rotating that point through an angle of $45^\circ$ will produce the new point \begin{align} (x',y') &= r(\cos(\theta^\circ + 45^\circ), \sin(\theta^\circ+45^\circ)) \\ &= r( \cos \theta^\circ \cos 45^\circ - \sin \theta^\circ \sin 45^\circ, \cos \theta^\circ \sin 45^\circ + \sin \theta^\circ \cos 45^\circ) \\ &= (x \cos 45^\circ - y \sin 45^\circ, x \sin 45^\circ + y \cos 45^\circ) \\ \end{align}
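Equivalently, collecting the coefficients into the standard rotation matrix (with $\cos 45^\circ=\sin 45^\circ=\frac{\sqrt 2}{2}$): $$\begin{pmatrix}x'\\y'\end{pmatrix} = \begin{pmatrix}\cos 45^\circ & -\sin 45^\circ \\ \sin 45^\circ & \cos 45^\circ\end{pmatrix} \begin{pmatrix}x\\y\end{pmatrix} = \frac{\sqrt 2}{2}\begin{pmatrix}1 & -1 \\ 1 & 1\end{pmatrix} \begin{pmatrix}x\\y\end{pmatrix}.$$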
U substitution with definite integrals and limits...
You can use the following property of integrals in this situation: $$\int \limits_{a}^{b} f(x) \,dx = - \int \limits_{b}^{a} f(x) \,dx.$$ That is, if you switch the upper and lower limits of integration, you must negate the integral (introduce a factor of $-1$ in front of it).
Prove that, if $a,b,c$ are positive real numbers $\frac {a} {2a+b+c}+ \frac {b} {2b+a+c}+\frac {c} {2c+a+b}<\frac{19}{25}$
The inequality is homogeneous, i.e. $(a,b,c)$ satisfies the inequality if and only if $(ta,tb,tc)$ does for all $t>0$. So we can assume WLOG that $a+b+c=1$. $$\sum_{\text{cyc}}\frac{a}{2a+b+c}=\sum_{\text{cyc}}\frac{a}{a+1}=$$ $$=\sum_{\text{cyc}}\left(1-\frac{1}{a+1}\right)=3-\sum_{\text{cyc}}\frac{1}{a+1}$$ You can use the Cauchy-Schwarz (CS) inequality to prove that for real $a_i$ and positive $b_i$: $$\sum_{i=1}^n\frac{a_i^2}{b_i}\ge \frac{(\sum_{i=1}^n a_i)^2}{\sum_{i=1}^n b_i}$$ To prove it, multiply both sides by $\sum_{i=1}^n b_i$ and use CS. Applying this with $a_i=1$ and $b_i$ ranging over $a+1,b+1,c+1$ gives $\sum_{\text{cyc}}\frac{1}{a+1}\ge\frac{9}{(a+b+c)+3}=\frac{9}{4}$, hence $$3-\sum_{\text{cyc}}\frac{1}{a+1}\le 3-\frac{9}{4}=\frac{3}{4}<\frac{19}{25}$$
Find density functions from two dimensional function
If $f(x,y)$ is the joint density function, then the density function of $X$ is $\int_{-\infty}^\infty f(x,y)\,dy$. (We "integrate out" $y$.) Here our density is $\frac{2}{5}$ when $0\lt y\lt 1$ and $0\lt x\lt 5y$, and $0$ elsewhere. It is useful to draw a picture. So draw the line $y=\frac{x}{5}$. Note that since $y$ travels from $0$ to $1$, it follows that $x$ travels from $0$ to $5$. The region where our density lives is the triangle with corners $(0,0)$, $(5,1)$, and $(0,1)$. So $y$ travels from $\frac{x}{5}$ to $1$, and we need to find $$\int_{y=\frac{x}{5}}^1 \frac{2}{5}\,dy.$$ This is $$\frac{2}{5}\left(1-\frac{x}{5}\right)$$ for $0\lt x\lt 5$, and $0$ elsewhere. We can use the same picture to find the density function of $Y$. We need to integrate out $x$. It is clear that $x$ travels from $0$ to $5y$. Remark: For me at least, drawing a picture of the region where the density function lives is an essential part of solving the problem.
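Carrying out that remaining integration the same way (the step left to the reader): $$f_Y(y)=\int_{x=0}^{5y} \frac{2}{5}\,dx = 2y$$ for $0\lt y\lt 1$, and $0$ elsewhere.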
How do I find the inverse of this function?
I think you're getting confused because $x$ and $y$ have opposite signs, and then you switched them. For $x>0$, $y<0$, so the solution would be $x=\sqrt{-y/(y+1)}$, which makes sense with the signs, and then you could switch $x$ and $y$ to get $y=\sqrt{-x/(x+1)}$, keeping in mind that, after the switch, $y>0$ and $x<0$; that agrees with the answer in your book for $x<0$.
Bayes theorem in a simple example
I am just discussing the situation and hope that will answer your question. First of all, I disagree with your contention that $P(B)$ is $5/7$: to calculate this, you need to consider two cases, namely that the urn chosen was 1 and that it was not 1, and in each case consider the probability of getting a white ball. So $$P(B)=P(A)\cdot P(B|A)+ P(\bar A)\cdot P(B|\bar A)$$ It turns out that $P(B)=\frac{17}{24}$. Now you can calculate $P(A|B)$ using Bayes' Rule, $$P(A|B)=\frac{P(B|A)\cdot P(A)}{P(B)}$$ Here, from the information you have provided, we get $P(A|B)=\frac{8}{17}$. So the expression you have written down is true, and the probability you are asking about, namely that urn 1 was chosen given that a white ball was drawn, is computable and reasonable to ask about; but note that computing it requires Bayes' Rule.
Show that $\rho$ must be 2-dimensional
We have $g^h=g^{-1}$. If $\lambda$ is an eigenvalue of $\rho(g)$, it is also an eigenvalue of $\rho(g^{-1})=\rho(g^h)$, as they are similar. Thus we have $\lambda=\dfrac{1}{\lambda}\implies \lambda=\pm 1$. That means that $\rho(g)v=v$ or $\rho(g)v=-v$. Hence the subspace $\langle v\rangle$ is invariant under $H=\langle g\rangle$. Now show that $W=\langle v,\rho(h)v\rangle$ is invariant under $G=D_8$.
Big theorem to generate a sequence
I assume that the inequalities are to hold for all large enough $n$. The key fact here is that, for any $c > 0$, $\ln(x) < x^c$ for all large enough $x$. If $x_n = \dfrac{n^2}{\ln(n)}$, then $x_n \gg n^a$ for $a < 2$ and $x_n \ll n^2 \le n^b$ for $b \ge 2$.
Product of nilpotent ideal and simple module is zero
The connection you are looking for is that nilpotent ideals are all contained in the Jacobson radical. This is easy to see since the primitive ideals of a ring are prime, and hence each one has to contain all nilpotent ideals. Thus their intersection (the Jacobson radical) contains all nilpotent ideals. Since the Jacobson radical annihilates simple $R$ modules, so must each nilpotent ideal.
For what value of k onwards is it pointless for a computer to compute the probability mass function of the Poisson distribution
You can certainly define one. Let $IP(\lambda, p)$ be the smallest $k$ such that $\frac {e^{-\lambda}\lambda^k}{k!} \lt p$. It will do funny things when $\lambda$ is large (because no bin may have much content) or when $p$ is not small, but it will do what you expect in the range of interest. To compute it, you can use Stirling's approximation on the factorial to say $$\frac {e^{-\lambda}\lambda^k}{k!}\approx \frac {e^{-\lambda}(e\lambda)^k}{k^k\sqrt{2 \pi k}}$$ This can't be solved for $k$ analytically (except using the Lambert W function), but it is easy to solve numerically. For $p=10^{-16}$ I find $$\begin {array} {c|c} \lambda&k\\ \hline 0.01&6.404739\\0.1&9.702412\\1&17.81112\\2&22.66661\\3&26.52406\\4&29.88898\\5&32.94789\\10&45.87589\\20&66.85164\\30&85.16933\\40&102.1354\\50&118.2484\\\end {array}$$ For small $\lambda$ the $\lambda^k$ term does most of the work bringing down $p$; for moderate $\lambda$ it is the $k!$, and for large $\lambda$ it is the $e^{-\lambda}$.
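Here is a minimal numerical sketch of that computation (my own illustration, not the original answerer's code; I use the exact log-pmf via `gammaln` rather than Stirling, and the function name and bracketing bound are my choices):

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import brentq

def poisson_cutoff(lam, p=1e-16):
    # Continuous relaxation of the Poisson log-pmf:
    # log f(k) = -lam + k*log(lam) - log Gamma(k+1)
    logf = lambda k: -lam + k * np.log(lam) - gammaln(k + 1)
    # The pmf decreases to the right of the mode k = lam, so bracket the
    # crossing with log(p) between the mode and a generous upper bound.
    hi = lam + 20 * np.sqrt(lam) + 200
    return brentq(lambda k: logf(k) - np.log(p), lam, hi)

for lam in [0.01, 0.1, 1, 10, 50]:
    print(lam, round(poisson_cutoff(lam), 5))  # close to the table above
```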
Calculate the weak formulation of $\int_\Omega \left(\alpha v(x) + \beta\Delta\Delta v(x)\right)w(x)dx=\int_\Omega \gamma w(x)dx$
To summarize for 2nd order problems: you first multiply your strong formulation by a test function $v\in C^\infty_0(\Omega)$, where $\Omega\subset\mathbb{R}^d$ is your domain (usually it should have a Lipschitz boundary). Then you apply Green's theorem to this result to obtain the weak formulation. Green's theorem states (take care of the assumptions that have to be made) $$\int_\Omega \Delta u\, v\,dx = -\int_\Omega \nabla u\cdot\nabla v\,dx + \int_{\partial\Omega} v\,\nabla u\cdot\nu\,dS$$ where $\nu$ is the outer normal of $\Omega$. However, this formula does not apply directly to your problem in this form, as you have the 4th-order operator (the bilaplacian $\Delta\Delta$). Green's formula can be modified to apply also to fourth order problems (including $\Delta^2 u$): $$\int_\Omega\Delta^2 u\,v\,dx = \int_\Omega\Delta u\Delta v\,dx + \int_{\partial\Omega} \partial_\nu\Delta u\, v\,dS - \int_{\partial\Omega} \Delta u\,\partial_\nu v\,dS$$ This is the formula you have to use for the $\Delta^2 u$ part (it holds for all $u\in H^4(\Omega)$, $v\in H^2(\Omega)$). At this point you just need to use your boundary conditions to get the weak formulation.
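For instance, assuming clamped boundary conditions $v=\partial_\nu v=0$ on $\partial\Omega$ (an assumption on my part; the boundary conditions are not stated above), the boundary integrals vanish and the weak formulation reads: find $v\in H^2_0(\Omega)$ such that $$\alpha\int_\Omega v\,w\,dx + \beta\int_\Omega \Delta v\,\Delta w\,dx = \int_\Omega \gamma\, w\,dx \qquad\text{for all } w\in H^2_0(\Omega).$$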
Each open cover of a sequentially compact metric space has Lebesgue number
Suppose, for contradiction, that no such $\varepsilon>0$ exists. For every $n\in\mathbb{N}$ define $\varepsilon_n:=\frac{1}{n}$. By assumption, for every $\varepsilon_n$ there exists an $x_n\in X$ such that $K_{\varepsilon_n}(x_n)\nsubseteq U$ for all $U\in\mathfrak{U}$. By sequential compactness, the sequence $(x_n)_{n\in\mathbb{N}}$ has a convergent subsequence, say $x_{n_\nu}\to x$ for $\nu\to\infty$. Now choose $U\in\mathfrak{U}$ such that $x\in U$. Since $U$ is open, there exists a $\delta>0$ such that $K_\delta(x)\subseteq U$. Because our subsequence converges, there is an $N\in\mathbb{N}$ such that $x_{n_\nu}\in K_{\frac{\delta}{2}}(x)$ for all $\nu\geq N$. Since $\varepsilon_n\to 0$, we may pick a $\nu\geq N$ with $\varepsilon_{n_\nu}<\frac{\delta}{2}$. Then, by the triangle inequality, $K_{\varepsilon_{n_\nu}}(x_{n_\nu})\subseteq K_\delta(x)\subseteq U$, which contradicts the choice of $x_{n_\nu}$.
Let $f : [0, ∞[→ [0,∞[ $ be a continuous function such that $\int_0^\infty f(t) dt < ∞$. Which of the following statements are true?
The integral test is applicable only when the function is monotonic. All three statements are false. Draw a small triangle with base $(n-\frac 1 {n^{3}},n+\frac 1 {n^{3}})$ and height $n$, and consider a continuous function whose graph is made up of these triangles together with the portions of the $x$-axis left over by these intervals.
Likelihood Function Given Maximum of data, but not actually data points
We have the functions $\Phi()$ and $\phi()$, standard notation for the cdf and pdf respectively of a $N(0,1)$ distribution. Or if you want to use $F$ and $f$, that also makes no difference as long as you define them. The fact that the cdf does not have a closed form does not stop us from writing down the likelihood at least. Assuming you have i.i.d observations $X_1,\ldots,X_n\sim N(\theta,1)$, the distribution function of $T=\max\limits_{1\le k\le n}X_k$ is $$P(T\le t)=(P(X_1\le t))^n=(\Phi(t-\theta))^n\quad,\,t\in\mathbb R$$ So the pdf of $T$ is $$f_{T}(t)=n(\Phi(t-\theta))^{n-1}\phi(t-\theta)\quad,\,t\in\mathbb R$$ The likelihood function given $t \,(\in\mathbb R)$, the observed value of $T$, is therefore $$L(\theta\mid t)=n(\Phi(t-\theta))^{n-1}\phi(t-\theta)\quad,\,\theta\in\mathbb R$$
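If you then want to maximize this likelihood numerically, a minimal sketch (my own illustration; the observed maximum `t_obs = 2.5` and sample size `n = 10` are made-up values):

```python
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def neg_loglik(theta, t, n):
    # -log L(theta | t), up to the additive constant log(n)
    return -((n - 1) * norm.logcdf(t - theta) + norm.logpdf(t - theta))

t_obs, n = 2.5, 10  # made-up observed maximum and sample size
res = minimize_scalar(neg_loglik, args=(t_obs, n), bounds=(-10, 10), method="bounded")
print("MLE of theta:", res.x)
```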
Sign of quadratic form with parameter
Following a remark in the comments, here's an efficient way to calculate the sign using the characteristic polynomial---recall that the signature is essentially a count of the positive, zero, and negative eigenvalues of the representative matrix---but without actually computing its roots. The characteristic polynomial of the matrix representation $A$ of the quadratic form is $$\det \left(t I_3 - \pmatrix{1&\alpha&1\\ \alpha&0&0\\1&0&1}\right) = t^3 - 2 t^2 - \alpha^2 t + \alpha^2 .$$ For $\alpha \neq 0$, we have $\alpha^2 > 0$, in which case the number of sign changes of the coefficients is $2$, and thus by Descartes' Rule of Signs (and the fact that all of the eigenvalues of a real, symmetric matrix are real) the polynomial has (1) two positive roots, and, (2) since $0$ is not a root, one negative root. Thus, the signature is $(2, 1)$ (Lorentzian). NB we avoided computing explicitly the roots of the characteristic polynomial, which would have been difficult. On the other hand, if $\alpha = 0$, the constant term of the characteristic polynomial is zero, so in that case the quadratic form is degenerate.
Construction of a surjective (or onto) function relating $X$ and $Y$
Take $f:X\rightarrow Y$ given by $$f(x)=\begin{cases}1+\frac{y-1}{x_{0}}x&\text{ if }x\in(0,x_{0}].\\1+\frac{2c-2y}{1-x_{0}}x&\text{ if }x\in(x_{0},\frac{1+x_{0}}{2}].\\c-\frac{2c}{1-x_{0}}(x-\frac{1+x_{0}}{2})&\text{ if }x\in(\frac{1+x_{0}}{2},1).\end{cases}$$ Note that $f$ is surjective, continuous and $f(x_{0})=y$. Furthermore note that $h$ could be a constant function and therefore in general there is no natural way to construct a surjective function from $X$ to $Y$ using $h$.
Error while calculating derivatives
First of all, you write in the first method that \begin{align} y' &= \frac 1 2 \cdot x^{-3} \\ y' &= \frac 1 2 \cdot -3 x^{-4} \end{align} You've mixed up $y$ and $y'$ in several lines. I point this out because this is a very common careless error that I see a lot of students make. Secondly, you've simplified $x^2 / x^6$ incorrectly in your second method: it's $1/x^4$, not $x/x^4$. Now the answers agree.
Permutations avoiding repeated elements in a row
Let $D(n)$ be the number of strings of length $n$ that end in a run of a single letter, $E(n)$ the number of strings of length $n$ that end in a run of two of the same letter, and $F(n)$ the number that end in a run of three of the same letter. We have $D(1)=3, E(1)=F(1)=0$ and the recurrence $D(n)=2(D(n-1)+E(n-1)+F(n-1)), E(n)=D(n-1), F(n)=E(n-1)$, because appending either of the two other letters to any string starts a new run (giving a $D$), while repeating the last letter extends the run (turning a $D$ into an $E$, or an $E$ into an $F$). You could solve this analytically, but for small lengths it is easier in Excel (a code sketch follows below). I find $5676$ strings of length $8$ and $48384$ of length $10$.
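If you'd rather not use Excel, the recurrence is only a few lines of code (a minimal sketch):

```python
def count_strings(n):
    # D, E, F = number of valid strings ending in a run of length 1, 2, 3
    D, E, F = 3, 0, 0
    for _ in range(n - 1):
        D, E, F = 2 * (D + E + F), D, E
    return D + E + F

print(count_strings(8), count_strings(10))  # 5676 48384
```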
Intersection of a cube and the plane $x+y+z=0$
All the corners of your hexagon will be on an edge of the cube, not in the middle of a face. If a cube has side length $1$ and is centered at the origin, then the twelve edges of the cube consist of those points for which two of $x$, $y$, $z$ are either $-1/2$ or $1/2$, and the third coordinate is between those two values. So for example $(1/2,1/2,0)$ is the midpoint of one of the edges. However, if two of the coordinates are $1/2$, for example if $x=y=1/2$, then that edge does not intersect the plane, since plugging them into $x+y+z = 0$, we see that $z$ is out of range. Similarly for $x=y=-1/2$, or $x=z=1/2$, etc. So we are just looking for points with one coordinate $1/2$ and the other $-1/2$. To satisfy the plane equation, we must have the third coordinate be $0$. So the six vertices of the hexagon are $$(1/2,-1/2,0) \\ (-1/2,1/2,0) \\ (1/2,0,-1/2) \\ (-1/2,0,1/2) \\ (0,1/2,-1/2) \\ (0,-1/2,1/2) $$
Is there a technique to find a hamilton circuit in a graph?
This is a hard computational problem in general, and so I can't give you a universal technique that always solves the problem quickly. But here are some tips that can help you with small cases that are easy to solve by hand.

- If a vertex has only $2$ usable edges coming out of it, then both of those edges have to be part of the Hamiltonian cycle.
- If we've already selected $2$ edges out of a vertex, then none of the other edges out of that vertex can be in the cycle, and should be marked as not usable.
- If adding an edge would create a short cycle that includes only some, but not all of the vertices, then that edge can't be part of the Hamiltonian cycle, and should be marked as not usable.
- Generalizing the first bullet point: if there is a set of vertices $S$ with only two usable edges between $S$ and the rest of the graph, then both of those edges have to be part of the Hamiltonian cycle. (Also, an even number of the edges between $S$ and the rest of the graph are used, so if you've figured out what happens for all but one of the edges, the last edge is either definitely in or definitely out.)

Applying these steps over and over again will sometimes get you to a Hamiltonian cycle. You might also have to do some casework: when none of these steps apply, pick an edge that looks promising, and add it to the cycle (maybe switching to a different color to mark things in). Then, keep making the deductions above and see if you get a Hamiltonian cycle or a contradiction. If you have to do too much of that, that's the point at which you decide the graph is too complicated to find a Hamiltonian cycle in it by hand.
Is there such an $x$ that both $2^{\frac{x}{3}}$ and $3^{\frac{x}{2}}$ are simultaneously rational?
A negative answer follows from the four exponentials conjecture. Indeed, it's well-known that $\frac{\log 2}{3},\frac{\log 3}{2}$ are linearly independent over $\mathbb Q$ (a relation between them would imply an equality of the form $2^a=3^b$ for $a,b\in\mathbb N_+$, which clearly cannot happen). Further, it's easy to see $x$ must be irrational too, so that $1,x$ are linearly independent over $\mathbb Q$. Now the conjecture mentioned implies that one of the following numbers is transcendental: $$\begin{align*} e^{1\cdot\frac{\log 2}{3}} &=2^{1/3},\\ e^{1\cdot\frac{\log 3}{2}} &=3^{1/2},\\ e^{x\cdot\frac{\log 2}{3}} &=2^{x/3},\\ e^{x\cdot\frac{\log 3}{2}} &=3^{x/2}.\end{align*}$$ The first two numbers are clearly algebraic, so we conclude we can't have the latter two simultaneously rational. It should be noted that the four exponentials conjecture is still wide open, despite the fact that there are some very similar results which have been proven. A related question, asking when $2^x,3^x$ can both be integers, is also well-known and open.
Solving a congruence/modular equation : $(((ax) \mod M) + b) \mod M = (ax + b) \mod M$
$(((ax)\bmod M) +b)\bmod M\equiv ((ax)\bmod M)\bmod M +(b\bmod M) \pmod M$; then use debanjana's hint.
Estimating error using Taylor Polynomial
You shouldn't guess $c$; rather, you estimate the error: take the $c\in[x_0,x]$ (from the expansion $f(x)=f(x_0)+\dots$) that gives the worst case for $g^{(n)}$, i.e. its biggest value, and show that your error is less than something.
What's the complexity class of Sub-Polytrees isomorphism?
From the definition of planar graphs - that you can always draw them in the plane in such a way that edges only meet at nodes - I believe all polytrees are planar graphs (please correct me if I am wrong). Now, since subgraph isomorphism of planar graphs has polynomial time algorithms available, I guess Sub-Polytrees isomorphism is also in the polynomial time complexity class.
How to prove $\binom{n+1}{m+1}=\binom{0}{m}+\binom{1}{m}+\dots+\binom{n}{m}$ combinatorially
Count how many ways there are to select $m+1$ people from a line of $n+1$ people by first choosing the position of the last (rightmost) selected person (call it $k$), and then selecting the other $m$ people from the $k-1$ people earlier in the line. This count is $\sum\limits_{k=1}^{n+1} \binom{k-1}{m} = \sum\limits_{k=m+1}^{n+1}\binom{k-1}{m}$, which is the RHS; counting the same selections directly gives the LHS, $\binom{n+1}{m+1}$.
Proving uniform approximation by polynomials when sets are not compact
(1) Just define $g:[0,1]\to\mathbb R$ by $g(y):=f(1/y)$ for $y\in(0,1]$ and $g(0):=a$. Due to the limit assumption on $f$, $g$ becomes continuous on the compact interval $[0,1]$. Weierstrass' Theorem now gives you a sequence of polynomials $p_n(y)$ converging to $g(y)$ uniformly on $[0,1]$. Then show that $p_n(1/x)$ converges uniformly to $f(x)$ on $[1,\infty)$. (2) The idea is basically the same. Define $g:[0,1]\to\mathbb R$ by $g(y):=f(-\log(y))$ for $y\in(0,1]$ and $g(0):=a$.
$⊢ ∀x∀y(f(x) \neq f(y) → x \neq y)$
I don't know how exactly your rule for universal introduction (or universal generalization) is defined, or how you do a proof by contradiction in the system that you use, but the proof below will give you the general idea: The crucial steps are of course 5 and 6. For 5, notice that the $= Intro$ rule says that you can introduce an equality between any two of the same terms $t$, so you can use it to get $a = a$, but we can also use it (as I did here) to get $f(a) = f(a)$. If you have never seen this before, that may look a bit weird, but it makes perfect sense: since $f(a)$ is a term, it denotes some object, and whatever that object is, it is of course the same as itself. So: $f(a) = f(a)$! For 6, we used $= Elim$ to replace the second $a$ in $f(a) = f(a)$ with a $b$, which we can do, since $a = b$. Again, this may look weird, since we are not replacing all of the $a$'s with $b$'s, but the rule says that you can replace any of the $a$ with $b$'s; it does not say that you have to replace all of them. So again, this is a perfectly good use of the rule.
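Since the original answer's proof figure is not reproduced here, the following is my reconstruction of the kind of Fitch-style derivation being described (your system's exact rule names and line numbering may differ; the two crucial steps are marked):

1. Let $a$, $b$ be arbitrary.
2. Assume $f(a) \neq f(b)$; we want to show $a \neq b$.
3. Assume $a = b$, aiming for a contradiction.
4. $f(a) = f(a)$, by $= Intro$ (the original's step 5).
5. $f(a) = f(b)$, by $= Elim$: replace the second $a$ in line 4 with $b$, using line 3 (the original's step 6).
6. Lines 2 and 5 contradict each other, so discharge the assumption in line 3 and conclude $a \neq b$.
7. By $\rightarrow$-introduction and universal generalization over $a$ and $b$, conclude $∀x∀y(f(x) \neq f(y) → x \neq y)$.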
Training error in linear regression can always be made zero, given certain condition?
You ask: "Now we can reach every possible y, by some linear combination of columns of X, if there are at least n linearly independent columns in X. So we would always be able to find a weight vector w that makes the total error zero when the above condition holds (is this a right deduction?)" This is only true if we know that the entries of $y$ will exactly be a linear function of the rows of $X$ (consider the case where there is some noise, in which case you won't be able to drive the error exactly to $0$). You also ask: "Now when X does not have n linearly independent columns, we can only reach some lower dimension than the dimensions in which y exists, and there would be a non zero error term? (is this true?)" I'm not sure what you mean by "reach some lower dimension". You need to specify some assumptions on how $y, X$ are related. For example, assume that: $$y_i = x_i^Tw + \epsilon$$ where $\epsilon \sim N(0, 1)$ is additive Gaussian noise. In this case, given $n>d$ samples, we can minimize the squared error by using the formula for $w$ that you've given above. In contrast to solving a consistent system of equations such as $\alpha = A\beta$ for $\beta$, where $A$ is an $n\times n$ matrix, usually when solving for a least-squares solution our $X$ is not full-rank (not invertible), so we cannot simply compute $w = X^{-1}y$. Instead, the formula you've given computes $w$ such that $Xw$ is the orthogonal projection of $y$ onto the range of $X$. This is what you want, since the orthogonal projection onto the range of $X$ is exactly the minimizer of $\|y-u\|_2^2$ subject to $u = Xw$.
A property of cardinal numbers.
Given a map $f\colon N_1\cup N_2\to M$, this induces two maps $f|_{N_i}\colon N_i\to M$. So you have a map $M^{N_1\cup N_2}\to M^{N_1}\times M^{N_2}$ given by $f\mapsto (f|_{N_1},f|_{N_2})$. Conversely, suppose you have a pair $(f,g)\in M^{N_1}\times M^{N_2}$. Since $N_1\cap N_2=\emptyset$, this allows you to define a new function $h\colon N_1\cup N_2\to M$, $$ h(x)=\begin{cases} f(x) &\text{if }x\in N_1,\\ g(x) &\text{if }x\in N_2. \end{cases} $$ So you have another map $(f,g)\mapsto h$. You can then verify these two maps provide a bijection, so that $M^{N_1\cup N_2}\simeq M^{N_1}\times M^{N_2}$, which gives the equality of the cardinals.
Logic statement languages $L_0 L_1$
I see that the double usage of "$<$" is confusing. When doing predicate logic, we define a formal language. A formal language of predicate logic contains non-logical symbols: constant symbols (= symbols that stand for individual objects), predicate symbols (= symbols that stand for properties of and relations between objects), and function symbols (= symbols that stand for functions on objects). In mathematical practice, it is convenient to choose the non-logical symbols such that they convey the intended meaning according to which we want to interpret them. If we want to formally talk about the "smaller than" relation, it makes sense to choose the symbol $<$ rather than just something like $P$ or ☺, in order to make the formulas more readable. But what the "intended meaning" is still has to be explicitly defined and thereby fixed "from the outside". Logic "doesn't know" that the predicate symbol $<$ is supposed to mean "smaller than". In principle, there is absolutely nothing that prevents it from meaning "is greater than", or "is a less beautiful number than". By itself, $<$ when used in a formula is just a completely meaningless string. It is the role of a structure such as $(\mathbb{Z},0,1,+, \cdot,<)$ to give a meaning to these non-logical symbols. By defining $< \ \mapsto P_2^2: \mathbb{Z}^2 \rightarrow \{T,F\} : P_2^2(d_1, d_2) = (T ~~ \text{if}~~ d_1 < d_2), (F ~~ \text{if} ~~ d_1 \geq d_2 )$, we are fixing how to interpret the symbol "$<$". The "$<$" on the left is the predicate symbol: a previously meaningless symbol of the formal language that is now assigned a meaning. The "$<$" and "$\geq$" in "$d_1 < d_2$" and "$d_1 \geq d_2$" are "real" "smaller than"/"greater than or equal to" relations: they are mathematical objects which are already known and defined. The point of the line you cited is to explicitly state that by the formal symbol "$<$" we mean the "smaller than" relation. The same goes for the non-logical symbols $0, 1, +, \cdot$: logic makes no predictions on what these symbols are supposed to mean, so we have to define that we want to interpret the symbol "$0$" as the number zero etc. As for your last question: no. A predicate is a symbol that is interpreted as a property of or relation between individuals, e.g. $<$, $=$, is-even, is-divisible-by. What you mean by "mathematical text" is called a formula.
Let $G$ be an abelian group. Define $G^{n}=\lbrace g^n :g \in G \rbrace$. Show that $G^n$ is a group.
Hint: $$\forall\,x\in G\;\;,\;\;\left(x^n\right)^{-1}=x^{-n}=\left(x^{-1}\right)^n$$ Further hint: $$\forall\,x,y\in G\;\;,\;\;(xy)^k=x^ky^k\,\,,\,\,k\in\Bbb Z$$
Matching with number of edges from one side
Hint: Yes, provided that $l \geq m$. You don't need Hall's marriage theorem (but you can use it if you wish). Pick any student from $B$ and match him with anyone he knows, now you have at least $m-1$ students in $B$ each of whom knows at least $m-1$ students in $A$. I hope this helps $\ddot\smile$
Which positive whole number below 100 has the most number of factors?
This question addresses the related problem of finding a mathematical function which outputs the number of factors. The numbers under 100 with the most factors are $60=2^2\cdot3\cdot5$, $72=2^3\cdot3^2$, $84=2^2\cdot3\cdot7$, $90=2\cdot3^2\cdot5$ and $96=2^5\cdot3$, which all have 12 factors. The numbers which have more factors than any smaller number are called highly composite numbers; more info here.
What is the principal components matrix in PCA with SVD?
OK - I'll give it a shot. Let me start from the top and try to recap PCA, and then show the connection to SVD. Recall: for PCA, we begin with a centered data matrix $M$ (dim = $(n,d)$). For this data, we compute the sample covariance $S = \frac{1}{n-1}M^TM$, where $n$ is the number of data points. For this covariance matrix we find the eigenvectors and eigenvalues. Corresponding to the largest eigenvalues we select $l$ eigenvectors. Let's call the matrix consisting of these eigenvectors $W$; it will have dimensions $d \times l$. Then we can write $Z = MW$, and we understand that each row of $Z$ is a lower dimensional embedding of $m_i$, a row of $M$. OK, now suppose we can write $M = U \Sigma V^T$ (writing $\Sigma$ for the diagonal matrix of singular values, to avoid clashing with the covariance $S$). Then we notice that: \begin{align} M^TM &= V\Sigma^TU^TU\Sigma V^T\\ &= V(\Sigma^T\Sigma)V^T \text{ (since $U$ is orthonormal)} \\ &= VDV^T \end{align} where $D = \Sigma^2$ is a diagonal matrix containing the squares of the singular values. Thus we have that $(M^TM)V = VD$, since $V$ is also orthonormal. Aha! So we can see that the columns of $V$ are the eigenvectors of $M^TM$ and $D$ contains the eigenvalues. This $V$ is precisely what we called $W$ above. Hope that clears things up a bit. Sorry for the delay :)
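A quick numpy check of this correspondence (my own sketch with random data; the shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(100, 5))
M -= M.mean(axis=0)                     # center the data

S = M.T @ M / (M.shape[0] - 1)          # sample covariance
eigvals = np.linalg.eigvalsh(S)         # its eigenvalues, ascending

U, sing, Vt = np.linalg.svd(M, full_matrices=False)

# squared singular values / (n-1) equal the covariance eigenvalues
print(np.allclose(np.sort(sing**2 / (M.shape[0] - 1)), eigvals))

Z = M @ Vt[:2].T                        # 2-dimensional embedding Z = MW
```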
Finding Maximum Under Constraint
WLOG, suppose $c$ is the largest. If $a>b$, then $(a-b)(b-c)(c-a)\le 0$. So we may suppose $a<b<c$. Let $b-a=x\ge 0$, $c-b=y\ge 0$; then $3a+2x+y=1$, thus $2x+y\le 1$. The problem becomes to find the maximum value of $$ xy(x+y), \quad x, y \ge0. $$ So we may suppose $2x+y=1$; then $$ xy(x+y)=x(1-2x)(1-x)=:f(x), \quad 0\le x\le \frac{1}{2}. $$ Setting $f'(x)=1-6x+6x^2=0$, we find that $f(x)$ attains its maximum at $x=\frac{3-\sqrt{3}}{6}$.
Subgroup of a (free) group.
An argument I find really nice uses algebraic topology and finite graphs. More information may be found in Hatcher's or Massey's book, or on my blog and its references. Let $F$ be a free group and $H$ be a subgroup. We view $F$ as the fundamental group of a graph $X$ (for instance, a bouquet of $\mathrm{rk}(F)$ circles). Let $Y \twoheadrightarrow X$ be a covering satisfying $\pi_1(Y) \simeq H$. In particular, the covering induces a structure of graph on $Y$ making the covering cellular. Furthermore, if $n=[F:H] &lt; + \infty$, the covering $Y \twoheadrightarrow X$ is $n$-sheeted, hence $\chi(Y)=n \cdot \chi(X)$. If $T \subset X$ is a maximal subtree, then $ X$ is the disjoint union of $T$ and $\mathrm{rk}(F)$ edges, hence $$\chi(X)= \chi(T)- \mathrm{rk}(F)=1- \mathrm{rk}(F);$$ in the same way, $\chi(Y)= 1- \mathrm{rk}(H)$. Therefore, $\mathrm{rk}(H)=1+n \cdot (\mathrm{rk}(F)-1)$.
Possible dimension of a Hilbert space.
$\max\{|\mathbb{Q}|,|I|\} $ is an upper bound on the Hilbert dimension.
Find if one rectangle can fit inside the other
Joseph Malkevitch's answer is perfect. For all who can't read the complete article, here's the most elegant solution (a necessary and sufficient condition for a rectangle $p\times q$ to fit in a rectangle $a\times b$, provided $p\ge q$, $a\ge b$ and $p>a$): $$\left(\frac{a+b}{p+q}\right)^2+\left(\frac{a-b}{p-q}\right)^2\ge 2$$ Two obvious day-to-day applications: rug on a floor, tray in an oven.
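In code, the test might look like this (my own sketch, not from the article; it assumes $p>q$ strictly so the second denominator is nonzero):

```python
def fits(p, q, a, b):
    # Does a p x q rectangle fit inside an a x b rectangle?
    # Assumes p >= q, a >= b, and p > q strictly (avoids dividing by
    # zero in the degenerate square case).
    if p <= a and q <= b:        # fits without tilting
        return True
    if p > a and q <= b:         # only a tilted placement can possibly work
        return ((a + b) / (p + q)) ** 2 + ((a - b) / (p - q)) ** 2 >= 2
    return False                 # q > b: even the short side is too long

print(fits(1.4, 0.01, 1.0, 1.0))  # True: a thin sliver fits diagonally
print(fits(1.5, 0.5, 1.0, 1.0))   # False
```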
Prove the subset of a linear code consisting of codewords with even weight is a subgroup.
Let $I_c$ be the set of coordinates of non-zero elements of $c$, $I_d$ the set of coordinates of non-zero elements of $d$, and $K$ the set of coordinates on which $c$ and $d$ agree; $c+d$ has non-zero elements precisely in those coordinates where $c$ and $d$ disagree, i.e., those in the symmetric difference $I_c\mathop{\triangle}I_d$. Let $n$ be the length of the code; then $$\operatorname{weight}(c+d)=|I_c\mathop{\triangle}I_d|=n-|K|=n-k\,.$$ Now $$\begin{align*} \operatorname{weight}(c)+\operatorname{weight}(d)&amp;=|I_c|+|I_d|\\ &amp;=|I_c\cup I_d|+|I_c\cap I_d|\\ &amp;=|I_c\mathop{\triangle}I_d|+2|I_c\cap I_d|\\ &amp;=n-k+2|I_c\cap I_d|\,. \end{align*}$$ Let $k'=|I_c\cap I_d|$; then $$\begin{align*} \big(\operatorname{weight}(c)-k'\big)+\big(\operatorname{weight}(d)-k'\big)&amp;=n-k\\ &amp;=\operatorname{weight}(c+d)\,. \end{align*}$$ In general $k'$ need not be equal to $k$: $k'$ is the number of coordinates in which $c$ and $d$ are both $1$, while $k$ is the number on which $c$ and $d$ agree. Thus, the expression for the weight of $c+d$ given in the solution appears to be wrong, but we do still have $$\operatorname{weight}(c+d)=\big(\operatorname{weight}(c)-k'\big)+\big(\operatorname{weight}(d)-k'\big)$$ and hence $$\operatorname{weight}(c+d)=\operatorname{weight}(c)+\operatorname{weight}(d)-2k'\,,\tag{1}$$ which is good enough: by hypothesis $\operatorname{weight}(c)$ and $\operatorname{weight}(d)$ are even, and $2k'$ is even, so the righthand side of $(1)$ is even, and $c+d\in C^+$, as desired.
Why no faithful finite-dimensional irreducible representation of the Heisenberg algebra
You've already described below why the matrix representation is not irreducible (sorry, I mistakenly talked about the regular representation in the first place). To prove that there is no finite-dimensional faithful irreducible representation, consider the action of $z$: on the one hand, it must be of trace $0$ (why?); on the other hand, it must be a scalar (why?). Can you fill in the details? Remarks from the comments: The crucial point here is that ${\mathfrak g}$ has the property ${\mathfrak z}({\mathfrak g})\cap [{\mathfrak g},{\mathfrak g}]\neq \{0\}$, but this property does not distinguish Lie algebras possessing a finite-dimensional faithful irreducible representation: no reductive Lie algebra ${\mathfrak g}$ like ${\mathfrak g}{\mathfrak l}(n)$ has this property (even though it may have nontrivial center), since ${\mathfrak g} = {\mathfrak z}({\mathfrak g})\oplus [{\mathfrak g},{\mathfrak g}]$ for reductive ${\mathfrak g}$. Nonetheless, it may or may not possess a faithful finite-dimensional irreducible representation: namely, if ${\mathfrak z}({\mathfrak g})$ is $1$-dimensional, the adjoint representation $[{\mathfrak g},{\mathfrak g}]\to{\mathfrak g}{\mathfrak l}([{\mathfrak g},{\mathfrak g}])$ extends to a faithful finite-dimensional irreducible representation of ${\mathfrak g}$ by letting ${\mathfrak z}({\mathfrak g})$ act nontrivially by some scalar, while on the other hand, no irreducible finite-dimensional representation can be faithful on ${\mathfrak z}({\mathfrak g})$ by Schur's Lemma if $\dim{\mathfrak z}({\mathfrak g})>1$. A solvable Lie algebra may or may not have this property, but (over an algebraically closed field of characteristic $0$) it never has a finite-dimensional faithful irreducible representation by Lie's Theorem. Btw, it's not a stupid question.
Find the minimum value of $x$ s.t. $\sqrt{\left(\frac{x+y}{2}\right)^3}+\sqrt{\left(\frac{x-y}{2}\right)^3}=27$
Hint: write $a=\sqrt{\left(\frac{x+y}{2}\right)^3}$ and $b=\sqrt{\left(\frac{x-y}{2}\right)^3}$, so that $a+b=27$ with $a,b\ge0$. Then $$x=a^{2/3}+b^{2/3}=a^{2/3}+(27-a)^{2/3}=f(a).$$ Use the second derivative test: http://mathworld.wolfram.com/SecondDerivativeTest.html
How are differential forms on $\mathbb{R}^n$ independent of coordinates?
The thing about $\mathbb{R}^n$ is that it "naturally" comes with a choice of coordinates, since $\mathbb{R}^n = \overbrace{\mathbb{R}\times \cdots \times \mathbb{R}}^n$; this means that elements of $\mathbb{R}^n$ are of the form $\mathbf{x}=(x_1,...,x_n)$. We also have maps \begin{align*} x_i &=\text{pr}_i : \mathbb{R}^n \rightarrow \mathbb{R}\\ &\mathbf{x} \mapsto x_i \end{align*} These maps are also natural, and they do not need the introduction of more objects. The basis one-forms are the differentials of these maps, so technically we haven't chosen coordinates; we've only used the definition of $\mathbb{R}^n$. We can use different coordinates for $\mathbb{R}^n$ though; say, for the case $n=2$, we can use polar coordinates (though there is a slight subtlety at the origin). In this case we have the basis forms $dr$ and $d\varphi$, and if we use the rule of how differential forms change under the change of coordinates $(r,\varphi)\mapsto (x,y)$ we find $$dx=\cos\varphi\, dr - r \sin \varphi\, d\varphi \hspace{15mm} dy = \sin\varphi\, dr + r \cos \varphi\, d\varphi$$ From here we notice that $dx\wedge dy = r\, dr \wedge d\varphi$ (technically I should have written pullbacks, but that's beside the point). We see that we obtain the element of area in polar coordinates, so the area of a region given in polar coordinates is the same as it would have been if I used cartesian. This is the reason why coordinate independence is useful: I can define an object $\omega = dx\wedge dy$ in cartesian coordinates and write an expression like $$\int_D \omega =\text{Area}(D)$$ and that expression does not depend on whether I use polar or cartesian coordinates. A lot of the definitions can be made into coordinate-independent ones, though some of them are not as popular as the coordinate versions. For example, you have probably seen the definition of the exterior derivative of a one-form $\alpha= \alpha_j dx^j$ in local coordinates as $$d\alpha = d\alpha_j \wedge dx^j = \partial_i \alpha_j dx^i \wedge dx^j$$ There is however a coordinate-independent definition of the exterior derivative; the one for one-forms becomes $$d\alpha(X,Y) = X(\alpha(Y))- Y(\alpha(X))- \alpha(\left[X,Y\right])$$ Hope this helps.
Let $M:=\{(x,y,z)\in\mathbb R^3 :x^2+y^2=2z^2,z>0\}$ and $f(x,y,z):=(x+y+z)^2e^{-z}, \forall(x,y,z)\in \mathbb R^3$. Find...
The minimum value of $f$ is zero. This should be easy enough to show, as $(x+y+z)^2 \ge 0$ and $e^{-z} > 0$, and the point $\left(\frac{\sqrt3-1}{2},\,-\frac{\sqrt3+1}{2},\,1\right)$ is in $M$ and satisfies $x+y+z=0$. As for the maximum: my best idea is to fix some value $z = n$, find the maximal value of $f$ under this restriction, let $g(n)$ be this maximum, and then find the $n$ that maximizes $g$. I suppose we can be more abstract with it. For some $n$ there is an upper bound on $f$ when $(x,y,z) \in M$ and $z > n$. Let $M' = \{(x,y,z): x^2+y^2 = 2z^2, 0\le z\le n\}$. On the domain $M'$ we have a continuous function on a compact (closed and bounded) domain, which must achieve a maximum value by the extreme value theorem. Then show that there is a point of $M'$ where $f(x,y,z)$ is greater than the upper bound you found earlier.
Finding extremum of a function using Lagrange
Your system is incomplete. It should be$$\left\{\begin{array}{l}\frac1x=\lambda\\\frac1y=\frac\lambda2\\x+\frac y2-1=0.\end{array}\right.$$Its only solution is $(x,y,\lambda)=\left(\frac12,1,2\right)$.
Prove that there exists no differentiable real function $g(x)$ such that $g(g(x))=-x^3+x+1$.
We have $$ g(g(g(x))) = -g(x)^3 + g(x) + 1 \iff \\ g(-x^3+x+1) = -g(x)^3 + g(x) + 1 $$ For $x=1$ this turns into $$ g(1) = -g(1)^3 + g(1) + 1 \iff \\ g(1)^3 = 1 $$ So $g(1) = 1$, if $g$ is a real valued function. Differentiating both sides of $g(g(x)) = -x^3+x+1$ gives $$ g'(g(x))\, g'(x) = -3x^2 + 1 $$ This gives $$ g'(g(1))\,g'(1) = - 2 \iff \\ g'(1)^2 = -2 $$ which is not possible for real valued $g'(x)$ and thus for $g(x)$, as the derivative of a real valued function is a real valued function.
How to compute the joint distribution
The distribution of the price $P$ is gamma, and the conditional distribution of the demand $D$ given the price is also gamma (actually, exponential). So the marginal or unconditional distribution of the demand is found by the formula $$f_D(d) = \int_{p=0}^\infty f_{D \mid P}(d \mid p) f_P(p) \, dp,$$ where $$f_{D \mid P}(d \mid p) = \frac{p}{10} e^{-(p/10)d}, \quad d &gt; 0,$$ and $$f_P(p) = pe^{-p}, \quad p &gt; 0.$$ Thus your integral is $$f_D(d) = \int_{p=0}^\infty \frac{p^2}{10} e^{-(d/10 + 1)p} \, dp,$$ as you correctly noted. We can perform the integration by observing that the PDF of a gamma random variable $X$ with shape $a$ and rate $b$ is $$f_X(x) = \frac{b^a x^{a-1} e^{-bx}}{\Gamma(a)}, \quad x &gt; 0,$$ hence the integrand is proportional to a gamma density with shape $a = 3$ and rate $b = 1 + d/10$. It follows that we need a constant of proportionality: $$f_D(d) = \frac{\Gamma(3)}{10(1 + d/10)^3} \int_{p=0}^\infty \frac{(1+d/10)^3 p^2 e^{-(1+d/10) p}}{\Gamma(3)} \, dp = \frac{1}{5(1+d/10)^3} = \frac{200}{(d+10)^3}, \quad d &gt; 0.$$ This is a Pareto distribution.
How to deal with subtraction of sigma?
I suspect that you're supposed to shift the index of one of the series. What you've done is 100% valid, but as you said, it's not yielding something "useful". Instead, take the series $$\sum_{n=1}^\infty \frac{n}{3^{n-1}},$$ and shift the index. Let $m = n - 1$. Then when $n = 1$, $m = 0$, so $$\sum_{n=1}^\infty \frac{n}{3^{n-1}} = \sum_{m=0}^\infty \frac{m+1}{3^{m}} = \sum_{n=0}^\infty \frac{n + 1}{3^n} = 1 + \sum_{n=1}^\infty \frac{n + 1}{3^n}.$$ Thus, $$\sum_{n=1}^\infty \frac{n}{3^{n-1}}-\sum_{n=1}^\infty \frac{n}{3^n} = 1 + \sum_{n=1}^\infty \frac{n + 1}{3^n} - \sum_{n=1}^\infty \frac{n}{3^n} = 1 + \sum_{n=1}^\infty \frac{1}{3^n},$$ which is a geometric series that you can calculate easily.
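Finishing the calculation: $$1 + \sum_{n=1}^\infty \frac{1}{3^n} = 1 + \frac{1/3}{1 - 1/3} = \frac{3}{2}.$$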
What (filtered) (homotopy) (co) limits does $\pi_0:\mathbf{sSets}\to\mathbf{Sets}$ preserve?
$\pi_0 : \mathbf{sSet} \to \mathbf{Set}$ is a left adjoint (exercise), so it preserves all colimits. It's also a left Quillen functor, so it even preserves homotopy colimits. There's no reason to believe anything good happens with (homotopy) limits, filtered or not, but it is true that finite (homotopy) products are preserved.
Prove that $\mathbf{FinSets}^{\mathbf{N}}$ has no subobject classifier.
There's a standard trick using the Yoneda lemma for computing what universal objects in (restricted) functor categories must be if they exist. In the case of the subobject classifier, this is explained on p. 37 of Sheaves in Geometry and Logic. Specifically, suppose that $\Omega\colon \mathbf{N}\to \mathbf{FinSets}$ is a subobject classifier in $\mathbf{FinSets}^\mathbf{N}$. Let's try to find out what finite set $\Omega(0)$ is. Since $\mathbf{FinSets}^\mathbf{N}$ is a full subcategory of $\mathbf{Sets}^\mathbf{N}$, by Yoneda we have $$\Omega(0) \cong \text{Hom}_{\mathbf{Sets}^\mathbf{N}}(h^0,\Omega) = \text{Hom}_{\mathbf{FinSets}^\mathbf{N}}(h^0,\Omega) \cong \text{Sub}(h^0),$$ where $h^0$ is the functor $\text{Hom}_{\mathbf{N}}(0,-)$. But now $h^0$ has infinitely many subobjects, but $\Omega(0)$ is a finite set, and this is a contradiction. To see that $h^0$ has infinitely many subobjects, just note that since $0$ is the initial object in $\mathbf{N}$, $h^0(n)$ is a singleton $\{*\}$ for all $n$. Incidentally, this makes $h^0$ isomorphic to the terminal object $1$ - your guess about the identity of terminal object is correct. Now for each natural number $n$ (or $n = \infty$), there is a distinct subobject of $h^0$, given by $$m\mapsto \begin{cases} \varnothing & \text{if }m<n\\ \{*\} & \text{if }m\geq n.\end{cases}$$
How to calculate $A^{-1}$, $A^{-7}$ efficiently
If $A$ is diagonalizable, then $A = P \Lambda P^{-1}$ for a diagonal matrix $\Lambda$. Then $A^n = P \Lambda^n P^{-1}$ for any positive integer $n$. If none of the eigenvalues are zero, the diagonal entries in $\Lambda$ are nonzero, so $\Lambda^{-1}$ exists; its entries are the reciprocals of the entries in $\Lambda$. Therefore $A^{-1} = P \Lambda^{-1} P^{-1}$, and $A^{-n} = P (\Lambda^{-1})^n P^{-1}$. That having been said, if you're writing code and efficiency is important, find a library that does the computation for you!
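For instance, a minimal numpy sketch (the matrix `A` is a made-up diagonalizable example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])  # made-up diagonalizable matrix, eigenvalues 2 and 3

vals, P = np.linalg.eig(A)
A_inv_7 = P @ np.diag(vals ** -7.0) @ np.linalg.inv(P)  # A^(-7) = P L^(-7) P^(-1)

# agrees with inverting and then taking the 7th power directly
print(np.allclose(A_inv_7, np.linalg.matrix_power(np.linalg.inv(A), 7)))
```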
What are the remainders when $p(x)+q(x)$ and $p(x)q(x)$ are divided by $x^2+2$?
The hint: there are polynomials $p_1$ and $q_1$ for which $$p(x)+q(x)=p_1(x)(x^2+2)+q_1(x)(x^2+2)+(a+c)x+b+d.$$ Thus, the first answer is $(a+c)x+b+d.$ For the second, use similar reasoning, but you will also need to work with the degree.
Proof verification of linear independence of $\{1,x,x^2\}$
You need to prove that $a=b=c=0$. Nothing more, nothing less. You do not prove this equality for any one particular value of $x$; there is no mention of $x$ in the statement "$a=b=c=0$." What you use to prove your point is not important, in the sense that as long as it proves the main point, the proof is good. Therefore, your proof can either go $$\forall x\in[-1,1]: ax^2+bx+c = 0\implies a=b=c=0$$ or $$\forall x\in [-1,1]: ax^2+bx+c=0\implies \forall x\in\{-1,0,1\}: ax^2+bx+c=0\implies\\\implies a=b=c=0$$ or any other way, as long as you know how to prove each implication.
Show that all the intervals in $\mathbb{R}$ are uncountable
Let $f_s:\mathbb{R} \to \mathbb{R}$ be a scaling function $x \mapsto sx$ for some positive scaling factor $s$; then $f_s$ maps the unit interval to an interval of length $s$. Then let $g_t:\mathbb{R} \to \mathbb{R}$ be a shifting function $x \mapsto x+t$ for some shift $t$. Then to show that an arbitrary interval from $a$ to $b$, $a \neq b$, is uncountable, note that $g_a \circ f_{b-a}$ maps the unit interval to the interval from $a$ to $b$. $f_s$ and $g_t$ are both bijective, so the interval from $a$ to $b$ must have the same cardinality as the unit interval. Therefore the interval from $a$ to $b$ is uncountable. Since $a$ and $b$ are arbitrary, every interval of $\mathbb{R}$ is uncountable. Q.E.D. Note: I'm assuming a priori that we know $\mathbb I$ is uncountable, but since you used that in your prior proof, that seems OK.
find the required differential of $f$.
What is $d^nf(x,y)$? It is the "full $n^{\rm th}$ derivative" of $f$ at the given point $(x,y)\in{\mathbb R}^2$. Usually it is written as a homogeneous polynomial of degree $n$ in terms of increment variables $X$, $Y$ attached at $(x,y)$, and it appears in the Taylor expansion of $f$ at $(x,y)$. The latter has the form $$f(x+X,y+Y)=f(x,y)+df(x,y)(X,Y)+{1\over2}d^2f(x,y)(X,Y)+{1\over3!}d^3f(x,y)(X,Y)+\ldots\quad.$$ In particular one has $$d^{10}f(x,y)(X,Y)=\sum_{j=0}^{10}{10\choose j}D_x^j D_y^{10-j}f(x,y)\,X^j\,Y^{10-j}\ ,\tag{1}$$ where I have denoted partial derivatives by $D_x$ and $D_y$. For the function $f(x,y):=\log(x+y)$ one easily checks that $$D_x^jD_y^{10-j}f(x,y)=-{9!\over(x+y)^{10}}\qquad(0\leq j\leq 10)\ .$$ Plugging this into $(1)$ we obtain $$d^{10}f(x,y)(X,Y)=-{9!\over(x+y)^{10}}(X+Y)^{10}\ .\tag{2}$$ Since $dx(X,Y)=X$ and $dy(X,Y)=Y$ we can rewrite $(2)$ as $$d^{10}f(x,y)=-{9!\over(x+y)^{10}}(dx+dy)^{10}$$ if this is more along the lines of your textbook.
Question about $p(z,w)=\alpha_0(z)+\alpha_1(z)w+\cdots+\alpha_k(z)w^k$
If a complex polynomial is locally (on an open set) equal to zero, it is globally equal to zero. Therefore the interior of the set of zeroes is empty since $p \neq 0$. Each of the terms $\alpha_m$ has only finitely many zeroes. In particular there are only finitely many $z \in \mathbb{C}$ for which $\alpha_m(z) = 0$ for all $m \in \{2, \dotsc, k\}$ simultaneously. If $|z|$ is large enough then $z$ will not be an element of this finite set and $w \mapsto p(z,w)$ is a non-constant polynomial in $w$ which therefore has a zero in $\mathbb{C}$. This shows that the set of zeroes of $p$ is unbounded.
All intermediate sub extensions of $\mathbb{Q} \subseteq K \subseteq \mathbb{Q}(\zeta_8)$.
You can do this ad hoc. Note we know all the automorphisms already: $$\begin{cases} \zeta_8\mapsto\zeta_8 \\ \zeta_8\mapsto \zeta_8^{-1} \\ \zeta_8\mapsto \zeta_8^{3} \\ \zeta_8\mapsto\zeta_8^5\end{cases}.$$ The first one is the identity, which fixes the whole field, so that's not our issue, and the whole group clearly fixes the base field, so that's not an interesting case either. So let us consider the three quadratic subfields. We see $\zeta_8+\zeta_8^{-1}$ is fixed by complex conjugation, $\zeta_8\mapsto\zeta_8^{-1}$, but not by $\zeta_8\mapsto \zeta_8^3$ or $\zeta_8\mapsto\zeta_8^5$, so it is the fixed field of that automorphism. Similarly $\zeta_8+\zeta_8^3$ is not fixed by complex conjugation, but is fixed by $\zeta_8\mapsto\zeta_8^3$. Finally $\zeta_8^2=\zeta_4$ is fixed by $\zeta_8\mapsto\zeta_8^5$, but conjugation makes this $\zeta_8^{6}$ and the second automorphism makes it $\zeta_8^{6}$ as well, neither of which fix it, so this generates the third fixed field. But we can do better. For the first one, $(\zeta_8+\zeta_8^{-1})^2=\zeta_4+\zeta_4^{-1}+2=2$, so the fixed field of conjugation is just $\Bbb Q(\sqrt 2)$, which is what you expect. The second one is $\zeta_8(1+i)=i\sqrt 2$ (factoring out a $\zeta_8$), because squaring this gives $i\cdot 2i=-2$, and so this fixed field is $\Bbb Q(i\sqrt 2).$ Similarly $\zeta_8^2=\zeta_4$, so the fixed field of the third automorphism is just $\Bbb Q(i)$.
inflexion point how to find it?
There are two kinds of POSSIBLE inflection points: (i) where $f''(x_0)=0$, and (ii) where $f''(x_0)$ does not exist. First, find all the points where $f''(x_0)=0$. At each such point, investigate whether $f''(x)$ is positive on one side of $x_0$ and negative on the other for $x$ close enough to $x_0$. Such points are inflection points. Second, find all the points where $f''(x_0)$ does not exist. At each such point, investigate whether $f''(x)$ is positive on one side of $x_0$ and negative on the other for $x$ close enough to $x_0$. Such points are inflection points.
If $A^n=0$, then $I_n-A$ is invertible.
Consider the matrix $B := I_n + A + A^2 + \cdots + A^{n-1}$. Show that $(I_n - A)B = I_n$ using the relation $A^n = 0$.
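For completeness, the computation the hint asks for telescopes: $$(I_n - A)B = (I_n + A + \cdots + A^{n-1}) - (A + A^2 + \cdots + A^{n}) = I_n - A^n = I_n.$$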
How to compare dispersion of data?
Bayes' Theorem can help you out of this dilemma, but we do not even have to go that far to understand what is going on. You can calculate the standard deviation for any two datasets and compare them, but what it tells you depends on what you want to know and what your assumptions are about the data. For example, if you want to know what you can measure more exactly - the size of a human or the size of a human cell - you would compare the relative standard deviation. If you want to know what measures lengths more exactly - a microscope or a measuring stick - you compare the absolute standard deviation. In a related matter, it depends on how you interpret your dataset: the way you ask your second question and the way you drew the datasets suggest you think of them as histograms. Then you have to estimate the standard deviation and the mean in a different way, and you get the answer you were expecting. If you interpret the data as a list of points, then obviously dataset A has the higher standard deviation, as 15 is far away from 35, while 5, 6, 8, etc. are not that far away from each other.
$(X,|.|_A)$ is Banach implies $A$ is closed
$x \mapsto (x,Ax)$ is an isometry between $(X, \lvert\,\cdot\lvert_A)$ and the graph $$\Gamma(A) = \{ (x,Ax) : x \in X\} \subset X\times X$$ of $A$ when we endow $X\times X$ with the norm $\lVert (x,y)\rVert = \lvert x\rvert + \lvert y\rvert$. That $(X,\lvert\,\cdot\rvert_A)$ is a Banach space thus means $\Gamma(A)$ is a complete subspace of $X\times X$. But a complete subspace of a Hausdorff space is always closed, hence it means $\Gamma(A)$ is a closed subspace of $X\times X$, which by definition means $A$ is closed.
Strange Algebraic Number
Take any root of any polynomial "quite long and crazy". You can approximate the root using a sequence of rational numbers and then easily find a series with sum = the root. Nice example: $$\sqrt 2 = \sum_{k=0}^\infty(-1)^{k+1}\frac{(2k-3)!!}{(2k)!!}$$ (write the Taylor series of $\sqrt{1+x}$)
12 boxes reference weight
A partial answer. As Ross Millikan pointed out, the reference weight must be $24$ or less, otherwise it will not be possible distinguish between having all $1$s and all $2$s. I therefore tried all reference weights between $1$ and $24$ to see if I could find a failure scenario for each weight and therefore show that no single reference weight could handle all scenarios. The result is below: As you can see, I did not immediately find a failure scenario for the reference weight $22$. Let me know if you find any mistakes and especially let me know if you find a failure scenario for $22$.
Complex analysis for generating functions
A great, user-friendly introduction to all the necessary prerequisites we need from complex analysis, together with a wealth of applications, is presented in part B of Analytic Combinatorics by P. Flajolet and R. Sedgewick. Each of the chapters IV-VIII,

- IV: Complex Analysis, rational and meromorphic asymptotics
- V: Applications of rational and meromorphic asymptotics
- VI: Singularity Analysis of generating functions
- VII: Applications of singularity analysis
- VIII: Saddle-point asymptotics

starts with explanatory sections and cites or derives the theorems needed to understand the given applications.
Expressing $\frac{x-y}{w-z}$ as a function of $\frac{x}{w}$ and $\frac{y}{z}$
@Babado has written it as a function of $x/w,\,y/z,\,w,\,z$. It can't be written as a function of $x/w,\,y/z$ alone, because e.g. $x/w=2,\,y/z=1$ is compatible with $\frac{x-y}{w-z}=\frac{2w-z}{w-z}$ taking any value $k\notin\{1,\,2\}$ via $w/z=(1-k)/(2-k)$.
(Resolved) Does the sum of a subset of the Harmonic sequence converge iff its density approaches 0?
It's a nice idea, but unfortunately you found a counterexample yourself, you just misinterpreted it. You write: "As it turns out, the number of terms of that contain a specific string will be fewer as the number gets big." It's actually the other way around, and you show this in your calculations: The proportion of terms that contain a specific string goes to $1$, and the proportion of terms that are left goes to $0$. The proportion of primes also goes to $0$. So this is in fact the opposite case, and a counterexample to your conjecture. Despite the density going to zero, the sum of the reciprocals of the primes diverges.
Integrability of Derivative of a Continuous Function
This answer refers to a previous version of the (now edited) question, without assuming uniform integrability of $(f_n)$. This is in general not true. A counterexample is $f(x) = x \sin \frac1x$ on $[0,1]$ (with $f(0)=0$). Then $f'(x) = \sin \frac1x - \frac1x \cos \frac1x$ is not Lebesgue-integrable over $[0,1]$, because $\int_0^1 |f'(x)| \, dx = \infty$ in this case. You can even modify this example to be differentiable everywhere, using $g(x) = x^2 \sin \frac{1}{x^2}$. Then $g'(x) = 2x \sin \frac1{x^2} -\frac2x \cos\frac1{x^2}$ and again $\int_0^1 |g'(x)|\, dx = \infty$.
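A numeric illustration of the divergence (a sketch I am adding): at the points $x_k=(\pi/2+k\pi)^{-1/2}$ we have $g(x_k)=(-1)^k/(\pi/2+k\pi)$, so $\int_{x_K}^{x_0}|g'(x)|\,dx$, being the total variation of $g$ on $[x_K, x_0]$, is at least $\sum_{k<K}|g(x_k)-g(x_{k+1})|$, and these sums grow like a harmonic series:

```python
import math

# Lower bound for the total variation of g(x) = x^2 sin(1/x^2) on [x_K, x_0],
# sampling at the points where sin(1/x^2) = +-1.
def g_at(k):
    return (-1) ** k / (math.pi / 2 + k * math.pi)

for K in [10, 100, 1000, 10000]:
    tv = sum(abs(g_at(k) - g_at(k + 1)) for k in range(K))
    print(K, round(tv, 3))  # grows without bound, roughly like (2/pi) log K
```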
Sum of a proper subset of the $p^\text{th}$ roots of unity
First recall that for a nontrivial $p$-th root of unity $\zeta$, that is $\zeta \neq 1$, the powers $\zeta^j$ with $j=0, \dots, p-1$ form the set of all $p$-th roots of unity. So if some of the roots of unity sum to $0$, this means $\sum_{j \in J} \zeta^j = 0$ for some nonempty subset $J$ of $\{0, \dots, p-1\}$, and hence $\zeta$ is a root of $\sum_{j \in J} X^j$.

Yet the minimal polynomial of $\zeta$ is $\sum_{j = 0}^{p-1} X^j$. The minimal polynomial of an algebraic number (over the rationals) is the monic polynomial (with rational coefficients) of lowest degree that has this number as a root; equivalently, it is the irreducible monic polynomial having this number as a root. Every polynomial (with rational coefficients) having that algebraic number as a root must be a multiple of the minimal polynomial. Since $\sum_{j \in J} X^j$ has degree at most $p-1$, it cannot be a multiple of $\sum_{j = 0}^{p-1} X^j$ unless the two are equal, that is, unless $J = \{0,\dots,p-1\}$, and the claim follows.

What remains to check is that $\sum_{j = 0}^{p-1} X^j$ really is the minimal polynomial. Since it equals $(X^p-1)/(X-1)$, it is clear that $\zeta$ is a root. It remains to show that it is irreducible; this can be done by applying Eisenstein's criterion to the polynomial shifted by $1$; see page 2 of Section 16 of some Algebra notes by Paul Garrett
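For a concrete sanity check, the case $p=5$ can be brute-forced (a small sketch):

```python
import cmath
import itertools

# No nonempty proper subset of the 5th roots of unity sums to 0.
p = 5
roots = [cmath.exp(2j * cmath.pi * j / p) for j in range(p)]
for r in range(1, p):  # sizes of proper nonempty subsets
    for subset in itertools.combinations(roots, r):
        assert abs(sum(subset)) > 1e-9
print("no nonempty proper subset sums to 0")
```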
proving $\lim\limits_{n\to\infty} \int_{0}^{1} f(x^n)dx = f(0)$ when $f$ is continuous on $[0,1]$
Take any $\epsilon > 0$ and choose $\delta > 0$ such that $|f(x) - f(0)| < \epsilon$ on $[0,\delta]$. Choose $n$ big enough that $(1-\epsilon)^n < \delta$. Then $$\left|\int_0^1 f(x^n)\,dx - f(0)\right| = \left|\int_0^{1-\epsilon} [f(x^n) - f(0)]\,dx + \int_{1-\epsilon}^1 [f(x^n) - f(0)]\,dx\right| \leq \int_0^{1-\epsilon} |f(x^n) - f(0)|\,dx + \int_{1-\epsilon}^1 |f(x^n) - f(0)|\,dx.$$ The first term is smaller than $\epsilon(1-\epsilon)$ thanks to the choice of $\delta$ and $n$ (for $x \le 1-\epsilon$ we have $x^n \le (1-\epsilon)^n < \delta$), and the second is smaller than $\epsilon \cdot 2 \sup |f|$ because the interval of integration has length $\epsilon$. The result follows since $\epsilon$ was arbitrarily small.
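A numeric check of the limit (a sketch with $f=\cos$ as an arbitrary continuous choice, so $f(0)=1$), using a crude midpoint rule:

```python
import math

def integral(n, m=200000):
    # midpoint rule for int_0^1 cos(x^n) dx
    h = 1.0 / m
    return h * sum(math.cos(((i + 0.5) * h) ** n) for i in range(m))

for n in [1, 5, 25, 125]:
    print(n, round(integral(n), 5))  # tends to cos(0) = 1 as n grows
```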
If $w_1=a_1+ib_1$ and $w_2=a_2+ib_2$ are complex numbers, then $|e^{w_1}-e^{w_2}|\geq e^{a_1}-e^{a_2}$
Hint: the reverse triangle inequality $\bigl||x|-|y|\bigr|\leq |x-y|$ holds for complex numbers $x,y$; apply it with $x = e^{w_1}$, $y = e^{w_2}$ and use $|e^{w_j}| = e^{\operatorname{Re} w_j} = e^{a_j}$.
Finding $\lim \frac{3^n}{4^n}$
One textbook proceeds like this.

(1) Bernoulli's inequality: $$(1+x)^n \ge 1+nx\qquad\text{when } x > 0,\ n \in \mathbb N.$$ Hint: induction.

(2) Use (1) to show $$t^n \to \infty\quad\text{as } n \to \infty, \text{ when } t > 1.$$ Hint: use $1+x=t$.

(3) Use (2) to show $$s^n \to 0\quad\text{as } n \to \infty, \text{ when } 0<s<1.$$ Hint: use $t=1/s$.
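A numeric illustration of step (3) for the question's $s=3/4$ (a sketch; the bound $(3/4)^n \le 1/(1+n/3)$ comes from applying (1) with $t = 4/3$):

```python
# Bernoulli gives (4/3)^n >= 1 + n/3, hence (3/4)^n <= 1/(1 + n/3) -> 0.
for n in [1, 10, 100, 1000]:
    print(n, (3 / 4) ** n, 1 / (1 + n / 3))
```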
If H is a subset of G, prove that H is also a subgroup.
To show that $h^{-1}$ is in $H$ for all $h\in H$, first apply the given property with $a = b = h$ to get $hh^{-1} = e \in H$. Now letting $e$ take the role of $a$ and $h$ take the role of $b$ above, we get $eh^{-1} = h^{-1}\in H$.
A question on nilpotent linear operator on finite dimensional vector space with dimension same as degree of nilpotency
By contradiction, assume there is $S$ such that $S^2=T$. Then $$0=T^{n}=S^{2n},$$ hence $S$ is nilpotent, hence $S^n=0$. But $$0\ne T^{n-1}=S^{2n-2},$$ and since $S^n = 0$, a nonvanishing power $S^{2n-2}$ forces $2n-2\le n-1$, i.e. $n\le1$, a contradiction.
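For a concrete instance, take $n=2$ and $T=\begin{pmatrix}0&1\\0&0\end{pmatrix}$: any $S$ with $S^2=T$ would satisfy $S^4=T^2=0$, so $S$ is nilpotent, and a nilpotent $2\times2$ matrix already satisfies $S^2=0\ne T$.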
Transmit Two Numbers Using a Single Number
A usual way to encode two numbers $a,b$ into one number $x$, provided that at least one of $a$ or $b$ is non-negative and bounded by a known number (say $0\le a< M$), is the following. Encoding: $x = bM+a$. Decoding: $b=x/M$ (integer division), $a=x\bmod M$. I don't know what kind of temperatures you want to handle. Negative temperatures should not be a problem, but double-check how integer division and mod are implemented for negative numbers: the mod operation must always yield a non-negative remainder, less than the divisor. For example, $-7/5$ should be $-2$ and $-7\bmod 5$ should be $3$. Very high temperatures can be a problem. Since the greatest error code is $140$, take $M=141$; then, if $x$ must fit into an unsigned $16$-bit value, the temperature must not exceed $463$, because $463\cdot141+140=65423\le65535$, while $464\cdot141+140=65564>65535$.
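A minimal sketch of the scheme (the function names are mine; Python's divmod floors toward minus infinity, so negative temperatures decode correctly):

```python
M = 141  # error codes run 0..140, so 0 <= a < M

def encode(temperature, error_code):
    # assumes 0 <= error_code < M
    return temperature * M + error_code

def decode(x):
    # divmod floors, so the remainder is always in [0, M),
    # even when the temperature is negative
    return divmod(x, M)

print(decode(encode(-7, 3)))  # (-7, 3)
print(encode(463, 140))       # 65423, still within an unsigned 16-bit range
```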
What is $ \lim_{x \to \infty} \frac{\tan{x}}{\sec{x}} $
$$\lim_{x \to \infty} \frac{\tan{x}}{\sec{x}} = \lim_{x \to \infty} \tan{x}\cos{x} = \lim_{x \to \infty} \frac{\sin{x}}{\cos{x}}\cos{x} = \lim_{x \to \infty} \sin{x},$$ which doesn't exist.
Prove that category has all finite products.
Yes, it does mean that. Imagine finite sets of numbers for analogy. Define the product of the set $\{a_1,a_2,\dots,a_n\}$ as $a_1a_2\cdots a_n$. What should we mean by the empty product, i.e. the product of $\emptyset$? If it is to have a value, we have to define it as $\bf1$ to keep the properties, as $1$ doesn't affect a product: $1\cdot x=x$ for any number $x$. The same goes for the categorical product, of which the terminal object $\bf1$ is the unit, in that ${\bf1}\times X\cong X$ for any object $X$. Having all finite products means that the product exists for all finite sets of objects, and $\emptyset$ is a finite set of objects.
Hom on sequences of integers is determined by values on finite sequences
I know the proof now, but I haven't solved it myself. It suffices to show that a homomorphism $f \in \operatorname{Hom}(X, \mathbb{Z})$ vanishing on all finite sequences is zero (apply this to the difference of two homomorphisms agreeing on finite sequences). Note that for a prime $p$, a sequence of the form $(p^na_n)$ differs only in its first $k$ terms from a sequence divisible in $X$ by $p^k$, because all terms $p^na_n$ with $n \ge k$ are divisible by $p^k$; since $f$ kills that finite difference, $f((p^na_n))$ is divisible by $p^k$ for all $k$, so $f((p^na_n)) = 0$. Now take $(x_n) \in X$; it can be represented as $(2^na_n + 3^nb_n)$ because $\gcd(2^n, 3^n) = 1$. Hence $f((x_n)) = f((2^na_n)) + f((3^nb_n)) = 0$, as desired.
Explanation of example of Banach Tarski .
Although Williams doesn't say so explicitly, the rotations $\tau_i$ are rotations about distinct axes through the origin. Williams's statement of the Banach-Tarski paradox says there is a subset $F\subset S^2$ and a sequence of rotations $\tau_1,\tau_2,\tau_3,\cdots$ such that $$\begin{array}{ll}S^2&=\tau_1^3F \bigcup\tau_2^3F\bigcup\tau_3^3F\\ &=\tau_1^4F \bigcup\tau_2^4F\bigcup\tau_3^4F\bigcup\tau_4^4F\\ &=\tau_1^5F \bigcup\tau_2^5F\bigcup\tau_3^5F\bigcup\tau_4^5F\bigcup\tau_5^5F\\ &=\cdots \end{array}$$ where the sets in each right-hand-side line are disjoint. That is, $S^2$ can be made from $3$ non-overlapping copies of $F$, from $4$ non-overlapping copies of $F$, from $5$ non-overlapping copies of $F$, etc., where each copy of $F$ is a rotation of $F$ about an axis through the origin. The subset $F$ is not an area in any normal sense: it's more like a cluster of scattered points. When reassembled in $S^2$, any open disk of $S^2$ will contain points from each of the copies of $F$. That is, no open disk of $S^2$ lies completely inside $F$. For the proof of the paradox, you'd need some elementary group theory, at least up to the level of understanding group actions, orbits and free groups. Then you should be able to work through the proof sketch in Wikipedia.
Is it possible to solve $x = y - \arctan\left(y\right)$ for $y$ analytically?
As others have already told you, it is impossible to solve your equation for $y$ in terms of elementary functions. When $|x|\ll1$ and $|y|\ll1$ you can make do with power series, as follows: From $$x=y-\arctan y={1\over3}y^3-{1\over5}y^5+{1\over7}y^7-\ldots$$ you obtain $$3x=y^3-{3\over5}y^5+{3\over7}y^7-\ldots \ =\left(y-{1\over5}y^3+{18\over175}y^5+\ldots\right)^3$$ and therefore $$(3x)^{1/3}=y-{1\over5}y^3+{18\over175}y^5+\ldots\quad.$$ Taking the inverse series of the right-hand side we finally obtain $$y=\left(t+{1\over5}t^3+{3\over175}t^5+\ldots\right)_{t:=(3x)^{1/3}}\quad .$$ Using Mathematica, or a similar computer algebra system, you can obtain as many coefficients as you want.
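As a numeric cross-check (a sketch; bisection works here since $y\mapsto y-\arctan y$ is increasing for $y\ge0$):

```python
import math

def series_y(x):
    # truncated inverse series y = t + t^3/5 + 3 t^5/175, with t = (3x)^(1/3)
    t = (3 * x) ** (1 / 3)
    return t + t**3 / 5 + 3 * t**5 / 175

def solve_y(x, lo=0.0, hi=1.5):
    # bisection on y - atan(y) = x
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid - math.atan(mid) < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for x in [1e-4, 1e-3, 1e-2]:
    print(x, series_y(x), solve_y(x))  # the two columns agree closely
```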
Probability Theory, Chernoff Bounds, Sum of Independent (but not identically distributed) r.v
Hoeffding's inequality can be used but the random variables need to be bounded to get meaningful bounds. http://en.wikipedia.org/wiki/Hoeffding%27s_inequality.
Find the area between $(y-x+2)^2=9y, \ \ x=0, \ \ y=0.$
Did you notice that your answer satisfies $2 + 2^{3/2} > 2 + 2^1 = 4$, yet the intended region, which I presume to actually be the set satisfying all inequalities $$0 \le x \le 2, \\ 0 \le y \le 1, \\ (y-x+2)^2 \ge 9y, $$ obviously has area less than $1$, being bounded above by the triangle with vertices at $(0,0)$, $(2,0)$, and $(0,1)$? Let's do this the correct way. When you solved the equation $$(y - x + 2)^2 = 9y,$$ you chose $$x = y + 3\sqrt{y} + 2.$$ But when $y = 1$, this gives $x = 6$, whereas we would expect instead $x = 0$ if we are to be on the portion of the curve bounding the region of interest. If we choose the other root, we get $$x = y - 3\sqrt{y} + 2,$$ and now when $y = 1$, we get $x = 0$ as expected. Now we integrate, but since this equation gives us the boundary as a function of $y$, we have to integrate with respect to $y$, not $x$: $$A = \int_{y=0}^1 y - 3\sqrt{y} + 2 \, dy = \frac{1}{2},$$ and this is consistent with our requirement that $0 < A < 1$.
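A one-line symbolic cross-check of the integral (assuming sympy is available):

```python
from sympy import integrate, sqrt, symbols

y = symbols('y', positive=True)
print(integrate(y - 3 * sqrt(y) + 2, (y, 0, 1)))  # 1/2
```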
$F\subset L \subset F[a]$ fields, prove L is created by $f_a$
Hint (and actually an answer spoiling all suspense): induction on $n$.
Given a group, how to show the distributive law and some examples (does the distributive law have to be an axiom?)
Let $(G,\circ)$ be a group with neutral element $e$ and let $X$ be a set. We say that $G$ acts on the set $X$ if we are given a map $G\times X\to X$, $(g,x)\mapsto g\cdot x$, such that the following natural requirements are fulfilled: $(g\circ h)\cdot x = g\cdot(h\cdot x)$ and $e\cdot x=x$ for all $g,h\in G$, $x\in X$. (The first requirement may allow one to be sloppy and use the same multiplication symbol for the group operation and the action, and also to drop parentheses.)

If $X$ carries additional structure, say $(X,*)$ is also a group, we naturally demand that more requirements be respected before we say that the group $(G,\circ)$ acts on the group $(X,*)$, namely that $g\cdot(x*y)=(g\cdot x)*(g\cdot y)$. Note that this looks like the distributive law, but $g,x,y$ are (usually) not from the same set!

If $(X,*)$ is an abelian group with neutral element $e$, then there is always a standard way to define an action of the additive group $(\mathbb Z,+)$ on it: define $0\cdot x=e$ first, then by the recursion $(n+1)\cdot x=(n\cdot x)*x$ define the action of all positive integers, and finally by $(-n)\cdot x=(n\cdot x)^{-1}$ the action of negative integers. (Verify that this is an action; be careful with all those different operation symbols!) If we are a bit sloppy and use $+$ also for the group operation of $X$ (and negation for inverse), as is usual for abelian groups, then the fact that $(\mathbb Z,+)$ acts on the group $(X,+)$ precisely formalizes that "distributivity" holds; a small computational sketch follows below.

As $(\mathbb Z,+)$ is also an abelian group, the above method can be used to define an action of $(\mathbb Z,+)$ on itself. We should be careful because we use $\cdot$ to denote this action and already have an intrinsic multiplication of integers that is also written with $\cdot$. Fortunately, the group action of $(\mathbb Z,+)$ on itself is precisely the usual integer multiplication (check that without getting confused about the different operations). As a bonus, check that $(n\cdot m)\cdot x=n\cdot (m\cdot x)$ holds for all $n,m\in \mathbb Z$ and $x\in X$ with the group action of $(\mathbb Z,+)$ on the abelian group $(X,*)$ defined above, justifying again the use of the same symbol for both things (and allowing one to drop parentheses) even though they are strictly speaking not the same.
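Here is the promised sketch, modelling the standard $\mathbb Z$-action on the abelian group $(\mathbb Z/12,+)$ via the recursive definition above (the modulus $12$ is an arbitrary choice):

```python
MOD = 12

def act(n, x):
    """n . x for the Z-action on (Z/MOD, +), via the recursive definition."""
    if n == 0:
        return 0                             # 0 . x = e
    if n > 0:
        return (act(n - 1, x) + x) % MOD     # (n+1) . x = (n . x) + x
    return (-act(-n, x)) % MOD               # (-n) . x = -(n . x)

# "distributivity": n . (x + y) == (n . x) + (n . y)
for n in range(-5, 6):
    for x in range(MOD):
        for y in range(MOD):
            assert act(n, (x + y) % MOD) == (act(n, x) + act(n, y)) % MOD
print("n.(x+y) == n.x + n.y holds on Z/12")
```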
Show that if $V$ is isomorphic to $A/I$ for some left ideal $I$, then $V$ is a cyclic representation of $A$ over $k$
It is in general the case that if you have two isomorphic modules, then one is cyclic if and only if the other is. Here is an outline of the argument: Recall that a homomorphism of $A$-representations $\rho_V:A\to \operatorname{End}(V)$ and $\rho_W:A\to \operatorname{End}(W)$ is a linear map $f\colon V\to W$ such that $f\circ \rho_V(a)=\rho_W(a)\circ f$ for all $a\in A$. If $f$ is invertible, then this means that $\rho_V(a)=f^{-1}\circ \rho_W(a)\circ f$. Now we assume that $V$ is cyclic with cyclic vector $v$. We claim that $W$ is cyclic with cyclic vector $f(v)$. To prove this claim take an arbitrary vector $w'\in W$. Then $f^{-1}(w')\in V$, and thus $f^{-1}(w')=\rho_V(a)v$ for some $a\in A$. This leads to $f^{-1}(w')=(f^{-1}\circ \rho_W(a)\circ f) (v)$. The claim follows by applying $f$.
Limit: $\lim\limits_{x\to0}\frac{e^x-e^{\sin x}}{x-\sin x}$
Since $\lim_{x\to 0}\frac{e^x-1}{x}=1$, we have $$\lim_{x\to 0} \frac{e^x -e^{\sin x}}{x-\sin x} =\lim_{x\to 0} \frac{e^{\sin x}\left(e^{x-\sin x}-1\right)}{x-\sin x} =\lim_{x\to 0} e^{\sin x} = 1.$$
Show that the $\sup S=1/2$ where $S= \{(-1)^n/n : n \in \mathbb{N} \}$
The supremum of a set is, first of all, an upper bound of the set. It is very easy to check that $1/2 \geq s$ for every $s \in S$. Set $s^* = 1/2$. We will show that $s^*$ is the supremum using the definition you mentioned. We have already checked that $s^* \geq s$ for every $s \in S$. Now let $\epsilon > 0$. We know that $1/2 \in S$ (take $n = 2$), and moreover $s^* - \epsilon < 1/2$. Therefore we have found an element of the set that is greater than $s^* - \epsilon$, for any $\epsilon > 0$. Thus $s^* = 1/2$ is the supremum.
Recurrence relation for number of subsets that contain no consecutive integers
We write down, almost exactly, the argument you quoted, changing the wording a little bit in the hope things will become clearer. Let $S_n=\{1,2,3,\dots,n\}$. We say that a subset $A$ of $S_n$ is good if $A$ does not contain any two consecutive integers. For any $k$, let $a_k$ be the number of good subsets of $S_k$. There are two types of good subsets of $S_n$. Type 1 good subsets of $S_n$ contain the element $n$, and Type 2 good subsets of $S_n$ do not contain $n$.

We first get an expression for the number of Type 1 good subsets of $S_n$, where $n\ge 2$. Such a subset cannot contain $n-1$. So any Type 1 good subset of $S_n$ is obtainable by adding $n$ to a good subset of $S_{n-2}$. Also, if we add $n$ to a good subset of $S_{n-2}$, we always obtain a Type 1 good subset of $S_n$. So there are exactly as many Type 1 good subsets of $S_n$ as there are good subsets of $S_{n-2}$. By definition there are $a_{n-2}$ good subsets of $S_{n-2}$.

Now we obtain an expression for the number of Type 2 good subsets of $S_n$. Such a subset is a good subset of $S_{n-1}$, and any good subset of $S_{n-1}$ is a Type 2 good subset of $S_n$. So by definition there are $a_{n-1}$ Type 2 good subsets of $S_n$.

A good subset of $S_n$ is either of Type 1 or of Type 2. So the number of good subsets of $S_n$ is $a_{n-2}+a_{n-1}$. We have therefore shown that $a_n=a_{n-2}+a_{n-1}$.
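A brute-force cross-check of the recurrence for small $n$ (a sketch counting good subsets directly):

```python
from itertools import combinations

def good_count(n):
    # count subsets of {1,...,n} with no two consecutive integers
    total = 0
    for r in range(n + 1):
        for sub in combinations(range(1, n + 1), r):
            if all(b - a > 1 for a, b in zip(sub, sub[1:])):
                total += 1
    return total

a = [good_count(n) for n in range(8)]
print(a)  # [1, 2, 3, 5, 8, 13, 21, 34] -- Fibonacci numbers
assert all(a[n] == a[n - 1] + a[n - 2] for n in range(2, 8))
```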
Irreducible hypersurfaces vs irreducible polynomials
First let's figure out what the projective closure of the curve looks like. This has homogeneous equation $Y^2 Z = X^3 - X^2 Z$. Crucially, this equation is linear in $Z$: solving for $Z$ we get $$Z = \frac{X^3}{X^2 + Y^2}$$ from which it follows that $$(X : Y) \mapsto (X(X^2 + Y^2) : Y (X^2 + Y^2) : X^3)$$ is a rational parameterization of the projective closure of the curve.

It is almost, but not quite, an isomorphism: it sends the points $(1 : i)$ and $(1 : -i)$ to the same point $(0 : 0 : 1)$ but away from this point it has inverse given by projection down to the first two coordinates. In other words, over the complex numbers, after adding points at infinity, the curve is topologically a sphere with two points identified (the "kissing banana"). The identified points correspond to the singularity at $(0, 0)$ that is isolated over $\mathbb{R}$: all of the nearby points are being parameterized by points with complex coordinates (even up to scaling).

To figure out what the actual curve looks like we need to remove the points at infinity. These are the points where $Z = 0$ in homogeneous coordinates, so $0 = X^3$ and the only such point is $(0 : 1 : 0)$. Hence the actual curve is topologically a sphere with two points identified and another point missing; in particular, it is connected.
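A quick symbolic check that the parameterization lands on the cubic (assuming sympy is available):

```python
from sympy import symbols, expand

X, Y = symbols('X Y')
x, y, z = X * (X**2 + Y**2), Y * (X**2 + Y**2), X**3
print(expand(y**2 * z - (x**3 - x**2 * z)))  # 0, so Y^2 Z = X^3 - X^2 Z holds
```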