how to tell if a vector field in the space is conservative
It is conservative iff it has a scalar potential, iff $\oint\vec{f}\cdot\mathrm{d}\vec{s}=0$ for every closed path, iff $\nabla\times\vec{f}=\left(\frac{\partial f_z}{\partial y}-\frac{\partial f_y}{\partial z},\frac{\partial f_x}{\partial z}-\frac{\partial f_z}{\partial x},\frac{\partial f_y}{\partial x}-\frac{\partial f_x}{\partial y}\right)=0$ All these conditions are equivalent (for the third one the field has to be differentiable and the domain simply connected), but the curl is usually the easiest to compute. The first one is good when you can see a scalar field $\varphi$ which satisfies the condition $$\nabla\varphi=-\vec{f}$$ A good example of a conservative field is the gravitational field: the work you have to do when you move something in space from one place to another equals the difference of potential between the two places. edit: The integral condition means that, whatever path you choose, you have done no work if the path is closed (i.e. you return to the place where you started). To use this statement directly you would have to prove that the work is zero for all such paths. In your case the curl of the field is $$\left(1,-1,-1\right)$$ so it's not conservative
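If the field is given symbolically, the curl test is easy to automate; here is a sketch with SymPy (the fields $\vec F=(yz,xz,xy)$ and $\vec G=(y,-x,0)$ below are made-up illustrations, not the field from the question):

```python
from sympy.vector import CoordSys3D, curl, Vector

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# F is the gradient of x*y*z, so it is conservative and its curl vanishes
F = y*z*N.i + x*z*N.j + x*y*N.k
print(curl(F))  # the zero vector

# G is a rotation field; its curl is -2*N.k, so it is not conservative
G = y*N.i - x*N.j
print(curl(G))
```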
Surjective homomorphism from $k[x,y,z]/(xy-z^2) $ onto $ k[y]$
By the universal property of polynomial rings there is a (surjective) ring homomorphism $f:k[x,y,z]\longrightarrow k[y]$ that sends $x,z$ to $0$ and $y$ to $y$, and which is the identity on $k$. On the other side, there is a canonical (surjective) ring homomorphism $\pi:k[x,y,z]\longrightarrow k[x,y,z]/(xy-z^2)$. Now use the universal property of quotient rings to find the required surjective ring homomorphism. The kernel of this homomorphism is given by the classes of polynomials $p\in k[x,y,z]$ with the property that $p(0,y,0)=0$. Obviously $p(0,y,0)=0$ iff $p\in (x,z)$, and this shows that the kernel of our homomorphism coincides with the ideal $(\overline{x}, \overline{z})$.
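A quick symbolic sanity check (a sketch with SymPy; the evaluation map below is the $f$ from the answer): since $f(xy-z^2)=0$, the map indeed factors through the quotient.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def f(p):
    """The homomorphism k[x,y,z] -> k[y] sending x, z to 0 and y to y."""
    return sp.expand(p).subs({x: 0, z: 0})

# the defining relation lies in the kernel, so f factors through k[x,y,z]/(xy - z^2)
print(f(x*y - z**2))        # 0
# elements of (x, z) die, everything else survives via its value at (0, y, 0)
print(f(x**2 + 3*z*y + x))  # 0
print(f(y**3 + 2*y + 1))    # y**3 + 2*y + 1
```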
Stronger version of AM-GM with condition
First we check that $abc \leq 1$; this is easy by AM-GM: $1=\frac{a^2+b^2+c^2}{3} \geq (abc)^{\frac{2}{3}}$, hence $abc\leq 1$. Next we prove that $(a+b+c)^2 \geq 9(abc)^{\frac{13\cdot 2}{76}}$. If we prove this, the inequality $a+b+c \geq 3(abc)^{\frac{13}{76}}$ will follow, because $x \leq y$ implies $\sqrt{x} \leq \sqrt{y}$ for $x,y \geq 0$. Next: $(a+b+c)^2=a^2+b^2+c^2+2ab+2ac+2bc=1+1+1+ab+ab+ac+ac+bc+bc$ Apply AM-GM to this sum. We have: $ 1+1+1+ab+ab+ac+ac+bc+bc \geq 9(abc)^{\frac{2}{9}}$, but: $9(abc)^{\frac{2}{9}} \geq 9(abc)^{\frac{2\cdot 13}{76}}$, because it is equivalent to $1 \geq (abc)^{\frac{2\cdot 13}{76}-\frac{2}{9}}=(abc)^{\frac{41}{342}}$, which is true because $abc \leq 1$; so $(a+b+c)^2 \geq 9(abc)^{\frac{2\cdot 13}{76}}$
Counting the number of words with restrictions on the consecutive repetitions
Your first argument giving the Fibonacci numbers is perfectly sound. It is easier to consider the final letter(s) of each word. Let $a_n$ be the number of words (of length $n$) that end with the letter $A$. Let $b_n$ be the number of words (of length $n$) that end with a single $B$. Let $bb_n$ be the number of words (of length $n$) that end with the letters $BB$. Let $c_n$ be the number of words (of length $n$) that end with the letter $C$, $D$ or $E$. These satisfy the recurrence relations \begin{eqnarray*} a_n&=&b_{n-1}+bb_{n-1}+c_{n-1} \\ b_n&=&a_{n-1}+c_{n-1} \\ bb_{n}&=&b_{n-1} \\ c_n&=&3(a_{n-1}+b_{n-1}+bb_{n-1}+c_{n-1}) \\ \end{eqnarray*} with the initial conditions $a_1=1, b_1=1, bb_1=0,c_1=3$.
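The recurrence is easy to run; a small sketch (assuming, as the recurrences above encode, a five-letter alphabet in which $A$ may not follow $A$ and at most two $B$'s may appear in a row):

```python
def count_words(n):
    # a: ends in A; b: ends in a single B; bb: ends in BB; c: ends in C, D or E
    a, b, bb, c = 1, 1, 0, 3
    for _ in range(n - 1):
        a, b, bb, c = b + bb + c, a + c, b, 3 * (a + b + bb + c)
    return a + b + bb + c

# length 2: 25 words minus the single forbidden AA gives 24;
# length 3: 125 minus 9 words containing AA minus BBB gives 115
print([count_words(n) for n in range(1, 5)])
```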
Conceptual doubt in reading the domain of a relation
The definition of the domain of $R$ isn't quite right. It should be $$\{x:\color{red}{\exists y}((x,y)\in R)\}.$$ That is, the domain of $R$ is the set of things which are $R$-related to something; or, perhaps more smoothly, $x$ is in the domain of $R$ iff there is some $y$ such that $x$ is $R$-related to $y$. The definition you've given suggests that we're looking at some specific $y$; this would describe the $R$-preimage of the particular element $y$ (or of the particular set $\{y\}$ - there's some abuse of terminology here), but that's not the whole domain of $R$ in general. (This is a situation where natural language is a bit clunky, unfortunately.)
Construct a homeomorphic map
Hint: Think about radial projection from punctured Euclidean space to the sphere. The fibers of the projection are radial rays. Each of these rays is homeomorphic to the real line. Each ray corresponds to a unique point on the sphere, and vice-versa. Also, don't you mean $S^{n-1}$?
Why does counting companion matrices count conjugacy classes?
This is because the rational canonical form (block diagonal with companion matrices for the invariant factors, ordered by divisibility) is, um..., canonical. Two matrices are in the same conjugacy class of $GL(n,\mathbb{F}_q)$ if and only if they have the same rational canonical form. In other words every matrix is similar to its rational canonical form, and no two rational canonical forms are similar (which is by the uniqueness of the list of invariant factors, thanks to the divisibility requirement).
Proof by Induction for $n^2\leq2^n$ .
Induction hypothesis: Let $k\geq 4$ be such that $k^2\leq 2^k$. Then we can multiply both sides of this inequality by $2$ to get $2\cdot k^2\leq 2\cdot 2^k$, i.e. $2k^2\leq 2^{k+1}$. It remains for you to show that $(k+1)^2\leq 2k^2$ for $k\geq 4$. I hope this clarifies your problem!
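A quick numeric check of both the key lemma and the statement itself (a sketch):

```python
# the lemma needed for the inductive step: (k+1)^2 <= 2k^2 for k >= 4
assert all((k + 1) ** 2 <= 2 * k * k for k in range(4, 1000))

# and the statement n^2 <= 2^n, which holds from the base case n = 4 on
# (note that it fails at n = 3, which is why the base case matters)
assert all(n ** 2 <= 2 ** n for n in range(4, 1000))
assert not (3 ** 2 <= 2 ** 3)
print("checks passed")
```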
Maxima or minima in a function
It's true that $f'(a)=0$ necessarily, but $a$ may be neither a local minimum nor a local maximum. For instance, $$f(x)=\begin{cases}e^{-x^{-2}}\sin(x^{-2})&\text{if }x\ne 0\\ 0&\text{if }x=0\end{cases}$$ is $C^\infty$ and even (i.e. the case $a=0$), but it takes both positive and negative values in every neighbourhood of $0$.
Polar form of a complex number
No, that's not correct. You must have made a couple of errors in your expansions. \begin{align} \frac{(1+i)^{13}}{(1-i)^7} &= \frac{(1+i)^{13}(1+i)^7}{(1-i)^7(1+i)^7} \\ &= \frac{1}{2^7}(1+i)^{20} \\ &= \frac{1}{2^7}\left(\sqrt{2}\left(\cos\frac{\pi}{4} + i \sin\frac{\pi}{4}\right)\right)^{20} \\ &= \frac{2^{10}}{2^7}\left(e^{i\pi/4}\right)^{20} \\ &= 8e^{5\pi i} \\ &= -8. \end{align} The polar form is $8(\cos\pi + i\sin\pi)$, or $(8,\pi)$.
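The arithmetic is easy to double-check numerically (a sketch):

```python
import cmath

w = (1 + 1j) ** 13 / (1 - 1j) ** 7
print(w)                 # (-8+0j), up to rounding

r, phi = cmath.polar(w)
print(r, phi)            # modulus 8.0, argument pi (or -pi, the same angle)
```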
How to know what $q$ is when $p$ and $f(p)$ are known in $f(p) = qpq^{-1}$
If you know $p$ and $p'=qpq^{-1}$, you can only recover $q$ up to a quaternion which commutes with $p$. This is not special to quaternions, and is valid in any algebra. Indeed, suppose $q_0$ is any invertible quaternion which commutes with $p$. Then $$(qq_0)p(qq_0)^{-1} = q(q_0pq_0^{-1})q^{-1}=p'.$$ I'll let you figure out why the converse is also true. Note that if $p$ is a non-zero pure quaternion (which seems to be your assumption), then the quaternions which commute with $p$ are those of the form $a+bp$ with $a,b\in \mathbb{R}$.
Open Set in $\ell_1$ is open in $\ell_2$ or not
Let $x_n=0$ for all $n$ and $y_n=\frac 1 n$ for $N_1 <n<N_2$, $y_n=0$ for all other $n$. Then $\|x-y\|_2 \to 0$ as $N_1 \to \infty$ but $\|y\|_1 \to \infty$ as $N_2 \to \infty$. Hence no open ball around $x$ in $\ell^{2}$ norm is contained in the open unit ball w.r.t. $\ell^{1}$.
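A quick numeric illustration (a sketch; the cutoffs $N_1, N_2$ below are arbitrary choices):

```python
import math

N1, N2 = 400, 10**6
# y_n = 1/n for N1 < n < N2 and y_n = 0 otherwise; x = 0
l2 = math.sqrt(sum((1.0 / n) ** 2 for n in range(N1 + 1, N2)))
l1 = sum(1.0 / n for n in range(N1 + 1, N2))
print(l2, l1)  # ||y||_2 stays small (about sqrt(1/N1)) while ||y||_1 is already large
```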
Prove that $ \lim (s_n t_n) =0$ given $\vert t_n \vert \leq M $ and $ \lim (s_n) = 0$
Try the following, perhaps slightly clearer and more direct approach: We're given $\;|t_n|\le M\;\;\forall\,n\in\Bbb N\;$, and let $\;\epsilon>0\;$ be arbitrary. Since $\;s_n\xrightarrow[n\to\infty]{}0\;$ there exists $\;N\in\Bbb N\;$ s.t. $\;n>N\implies |s_n|<\frac{\epsilon}M\;$ , but then $$\forall n>N\;:\;\;|s_nt_n|\le|s_n|M<\frac{\epsilon}MM=\epsilon\implies s_nt_n\xrightarrow [n\to\infty]{}0$$
limits of sequences of functions and uniform convergence
HINT: The limit is $x^2$, and since $|f(x)-f_n(x)|<1/n$ for all $x$, the convergence is uniform.
Show that $\widehat{\mathbb Z}\cong \prod_{p\;\text{prime number}}\mathbb Z_p$
For each prime $p$, and each integer $k\geq 1$, there is a canonical continuous homomorphism $\widehat{\mathbf{Z}}\to\mathbf{Z}/p^k\mathbf{Z}$ (by definition of the inverse limit), and these are compatible (again by definition of the inverse limit) as $k$ changes, so we get a canonical continuous homomorphism $$\widehat{\mathbf{Z}}\to\varprojlim_{k\geq 1}\mathbf{Z}/p^k\mathbf{Z}=:\mathbf{Z}_p$$ for each prime $p$. Explicitly, it sends a compatible sequence $(x_n+n\mathbf{Z})_{n\geq 1}$ to $(x_{p^k}+p^k\mathbf{Z})_{k\geq 1}$. The universal property of the product topology then allows us to package these together into a continuous homomorphism $\widehat{\mathbf{Z}}\to\prod_p\mathbf{Z}_p$. The map is closed because the source is compact and the target is Hausdorff, so to show that it is a topological isomorphism, we just need to show injectivity and surjectivity. Why is the map injective? Say $(x_n+n\mathbf{Z})$ is sent to zero. Then its component in each $\mathbf{Z}_p$ is zero, which means $x_{p^k}\in p^k\mathbf{Z}$ for all primes $p$ and all $k\geq 1$. Fix an integer $n$, and factor it as $n=p_1^{k_1}\cdots p_r^{k_r}$, so $\mathbf{Z}/n\mathbf{Z}\simeq\prod_{i=1}^r\mathbf{Z}/p_i^{k_i}\mathbf{Z}$ by the Chinese Remainder Theorem. By compatibility of the sequence $(x_n+n\mathbf{Z})$, because $p_i^{k_i}$ divides $n$, we have $x_n+p_i^{k_i}\mathbf{Z}=x_{p_i^{k_i}}+p_i^{k_i}\mathbf{Z}=p_i^{k_i}\mathbf{Z}$, so $x_n$ is divisible by $p_i^{k_i}$. This is true for all $i$, so $n\mid x_n$, and $x_n+n\mathbf{Z}=n\mathbf{Z}$. Our element is thus equal to zero. Why is the map surjective? Note that $\widehat{\mathbf{Z}}\to\prod_p\mathbf{Z}_p$ is a ring map, and the image of $\mathbf{Z}$ in the source is dense. By compactness of the source and Hausdorffness of the target (again), it suffices to show that the image of $\mathbf{Z}$ in $\prod_p\mathbf{Z}_p$ is dense. If we project to any factor, then we definitely have density. 
Now a basic open subset of the product has the form $\bigcap_{i=1}^r \pi_{p_i}^{-1}(a_i+p_i^{k_i}\mathbf{Z}_{p_i})$ for finitely many primes $p_i$, where $\pi_{p_i}$ is projection to the $p_i$-th factor. We may assume each $a_i$ is an integer (using the aforementioned density of $\mathbf{Z}$ in each $\mathbf{Z}_{p_i}$). Now, by the Chinese Remainder Theorem, choose an integer $m$ which is congruent to $a_i$ modulo $p_i^{k_i}$ for all $i$. Then the image of this integer $m$ in $\prod_p\mathbf{Z}_p$ lies in the given open set. This proves the desired density, from which surjectivity follows.
If $A, B, C$ are angles of $\Delta ABC$ and $\sin (A-\pi /4) \sin (B-\pi/4) \sin (C-\pi/4)=\frac{1}{2\sqrt 2}$...
Hint: Both \begin{align} \sin (A-\pi /4) \sin (B-\pi/4) \sin (C-\pi/4)=\frac{1}{2\sqrt 2} \tag{1}\label{1} \end{align} and \begin{align} \tan A\tan B+\tan B\tan C+\tan C\tan A &= \tan A+\tan B+\tan C , \tag{2}\label{2} \end{align} expressed in terms of the semiperimeter $\rho$, inradius $r$ and circumradius $R$ of the triangle, are equivalent to \begin{align} \rho^2&=r^2+4\,r\,R+2\,\rho\,r \tag{3}\label{3} , \end{align} which implies \begin{align} \rho&=r+\sqrt{2\,r^2+4\,r\,R} \tag{4}\label{4} . \end{align} But \eqref{4} is true only for the degenerate triangle when one angle is $180^\circ$ and the other two are zero. For the conversion, use known identities for the angles of a triangle \begin{align} \cos A+\cos B+\cos C&=\frac{r}{R}+1 \tag{5}\label{5} ,\\ \sin A\sin B\sin C &= \frac{\rho\,r}{2R^2} \tag{6}\label{6} , \end{align} \begin{align} \tan A\tan B\tan C = \tan A+\tan B+\tan C &=\frac{2\rho\,r}{\rho^2-(r+2\,R)^2} \tag{7}\label{7} ,\\ \tan A\tan B+\tan B\tan C+\tan C\tan A &=1+\frac{4\,R^2}{\rho^2-(r+2\,R)^2} \tag{8}\label{8} ,\\ \cot A+\cot B+\cot C&= \frac12\,\left(\frac{\rho}r -\frac r\rho \right) -2\,\frac R\rho \tag{9}\label{9} . \end{align}
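These identities are easy to sanity-check numerically; a sketch for the scalene triangle with sides $4,5,6$, checking the standard sum identity $\cos A+\cos B+\cos C=1+r/R$ together with $\sin A\sin B\sin C=\rho r/(2R^2)$:

```python
import math

a, b, c = 4.0, 5.0, 6.0
s = (a + b + c) / 2                        # semiperimeter (rho above)
K = math.sqrt(s * (s-a) * (s-b) * (s-c))   # area, by Heron's formula
r, R = K / s, a * b * c / (4 * K)          # inradius, circumradius

A = math.acos((b*b + c*c - a*a) / (2*b*c)) # law of cosines
B = math.acos((a*a + c*c - b*b) / (2*a*c))
C = math.pi - A - B

# cos A + cos B + cos C = 1 + r/R
print(math.cos(A) + math.cos(B) + math.cos(C), 1 + r / R)
# sin A sin B sin C = s*r / (2 R^2)
print(math.sin(A) * math.sin(B) * math.sin(C), s * r / (2 * R * R))
```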
A question about the depth of a ring with respect to some ideal
Let $I=(x,y^2)$. Because $k[x,y]$ is a Cohen-Macaulay ring, we have that $\operatorname{grade} I = \operatorname{height} I$ (this is Corollary 2.1.4 in Bruns and Herzog, Cohen-Macaulay Rings). In case you are not aware of the terminology, the grade of $I$ is precisely the length of a maximal regular sequence contained in $I$. Additionally we have $\operatorname{height} I + \dim k[x,y] / I = \dim k[x,y]$. The above gives $\operatorname{grade} I = \dim k[x,y] - \dim k[x,y] / I = 2 - \dim k[y]/(y^2) = 2 - 0 =2$.
Factorization of ideals in $\mathbb{Z}[\sqrt{5}]$
Something is wrong here: $$1+\sqrt5\in I \implies (1-\sqrt5)(1+\sqrt5)=-4\in I.$$ $$-4\in I,\ 3\in I\implies 1\in I\implies I=R.$$
Why the empty set isn't a terminal object in the $\mathcal{SET}$ as well as it is an initial object?
The point is that the subset $f$ of the cartesian product $A\times B$ (for a map $f:A\to B$) must satisfy a property beginning with $$\forall a\in A,\exists b\in B \,\mathrm{s.t.}\dots,$$ so if $A\neq\varnothing$ and $B=\varnothing$, you can't construct such a subset, since for an $a$ in $A$ we would need a $b\in\varnothing$, which is impossible.
Flawed proof by induction that $a^{n-1}=1$
To understand what's happening in the first example, it suffices to substitute actual values for $n$. We clearly have no argument with $a^0 = 1$, which corresponds to the case $n = 1$. But when $n = 2$, the inductive reasoning step looks like this: $$a^{2-1} = \frac{a^{1-1} a^{1-1}}{a^{0-1}} = \frac{a^0 a^0}{a^{-1}}.$$ So the induction fails immediately, because the base case begins at $n = 1$, not $n = 0$; that is to say, $a^{-1} \ne 1$ in general, as would be required for the inductive step to go through. The reason why the second example does not suffer from this defect is that its inductive step does not rely on an unproven earlier case. You don't have this problem in your example because all of the algebraic steps are valid and supported by the induction hypothesis. In the first example this is not so, because the very next case is already assuming that $a^{-1} = 1$. Indeed, if $a = 1$, thereby satisfying this assumption, then it is true that $a^{n-1} = 1$ for all $n$. The way in which the inductive step fails actually illustrates precisely when the theorem is true.
Find a fraction $\frac{m}{n}$ which satisfies the given condition
This is a bit cheeky, and probably not what the question meant, but if $m=n$, the condition is satisfied.
Maximizing perimeter to area ratio of a function
This is the oldest known problem in the Calculus of Variations, called Dido's Problem. In modern times, this is known as an isoperimetric problem. I understand that you want to minimize the ratio between the perimeter and the area. The standard way I know to solve this is by making the area a constant $A$ and then trying to find the shortest curve that encloses this constant area $A$: \begin{align} &\int_{x_i}^{x_f}\sqrt{1 + \dot{y}^2}dx\rightarrow min\\ &\int_{x_i}^{x_f}{ydx}=A \end{align} with end constraints $y(x_i) = 0$ and $y(x_f) = 0$. Then you can solve it as a Bolza problem as follows: \begin{align} &L=\lambda _0\sqrt{1 + \dot{y}^2}+\lambda_1y\\ &\dot{y}L_{\dot{y}}-L=constant \end{align} where the second equation is given by the Euler-Lagrange equation, $\lambda_0$ is positive for a minimization problem, and the $\lambda$'s are unique up to a common multiple. So it is sufficient to check $\lambda_0=0,1$.
Technique for finding cardinalites in combinatorics
The method is called overcounting. I sometimes hear $f$ described as a $\kappa$-to-one function, in analogy with a bijection being one-to-one.
Why $\mathbb{Z}[\mu_6]$ is a PID?
For every complex number $z$ there is at least one element $x\in\Bbb Z[\mu_6]$ such that $|z-x|<1$ (in fact the maximal necessary distance is $1/\sqrt3$, which is less than the maximal distance $1/\sqrt2$ for the Gaussian integers). This means that one can define a Euclidean division on $\Bbb Z[\mu_6]$ by rounding the exact quotient $z$ towards such an element $x$. It is an easy exercise to show that the remainder is then less in (complex) absolute value than the divisor, as it should be. So $\Bbb Z[\mu_6]$ is norm-Euclidean, and therefore a PID. See Eisenstein integer for more details.
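The rounding division is concrete enough to code; a sketch (working over $\mathbb{C}$ with $\omega=e^{i\pi/3}$; note that even naive rounding of the coordinates in the basis $(1,\omega)$ lands within distance $|1+\omega|/2=\sqrt3/2<1$ of the exact quotient, which already suffices for the Euclidean property, though it is weaker than the optimal bound $1/\sqrt3$):

```python
import cmath
import itertools

w = cmath.exp(1j * cmath.pi / 3)  # a primitive 6th root of unity

def to_lattice(u, v):
    """The element u + v*omega of Z[mu_6], as a complex number."""
    return u + v * w

def divmod_eisenstein(n, d):
    """Euclidean division n = q*d + rem in Z[mu_6] with |rem| < |d|."""
    z = n / d
    # coordinates of z in the basis (1, omega): z = u + v*omega
    v = z.imag / w.imag
    u = z.real - v * w.real
    q = to_lattice(round(u), round(v))
    return q, n - q * d

# check |rem| < |d| on a grid of sample pairs
for (a, b, c, e) in itertools.product(range(-3, 4), repeat=4):
    n, d = to_lattice(a, b), to_lattice(c, e)
    if abs(d) > 1e-12:
        q, rem = divmod_eisenstein(n, d)
        assert abs(rem) < abs(d)
print("Euclidean property verified on all sample pairs")
```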
Show that composition of endomorphisms share some eigenvector
We proceed by induction on the dimension, the one-dimensional case being trivial. Let $v$ be an eigenvector of $\varphi$ with eigenvalue $\lambda$. We compute $\lambda \psi(v) = \psi(\varphi(v))=\varphi(\psi(v))$, hence $\psi(v)\in\ker(\varphi-\lambda)$, i.e. $\psi(v)$ is again an eigenvector of $\varphi$ with eigenvalue $\lambda$ unless it is zero. We conclude: $\ker(\varphi - \lambda)$ is an invariant subspace with respect to both $\varphi$ (a priori clear) and $\psi$ (shown in the previous computation). So we can restrict to $\ker(\varphi - \lambda)$ and use induction. Note that we can assume $\ker(\varphi - \lambda)$ to be a proper subspace, because the assertion is trivial if $\ker(\varphi - \lambda)$ is the whole vector space.
Fourier coefficients of even and odd functions
Your first two claims hold. Let $f$ be $p$-periodic and $f \in L^1[-p/2,p/2]$. Then the Fourier coefficients of $f$ are $$ \hat{f}(k) = \frac{1}{p} \int_{-p/2}^{p/2} f(t) e^{-i2\pi k t/p} \,dt $$ If $f$ is odd then the change of variable $u=-t$ gives (we can keep the interval of integration $[-p/2,p/2]$ by $p$-periodicity) \begin{align} \hat{f}(k) &= -\frac{1}{p} \int_{-p/2}^{p/2} (-f(-t)) e^{i2\pi k t/p} \,dt \\ &= -\frac{1}{p} \int_{-p/2}^{p/2} f(t) e^{-i2\pi (-k) t/p} \,dt \\ &= -\hat{f}(-k) \end{align} If $f$ is even then the same change of variable gives \begin{align} \hat{f}(k) &= \frac{1}{p} \int_{-p/2}^{p/2} f(-t) e^{i2\pi k t/p} \,dt \\ &= \frac{1}{p} \int_{-p/2}^{p/2} f(t) e^{-i2\pi (-k) t/p} \,dt \\ &= \hat{f}(-k) \end{align} I don't think your third claim holds. Take $f(t):=\sin(\pi t)$. Then $f$ is odd and of period $2$. Yet, we can calculate $\hat{f}(1)=-i/2 \neq 0$.
Parametric Equation with a Parameter and an Angle
HINT $\theta$ is a parameter. We have equation $$x=0$$ when $\cos\theta=0$ and $$y=x\tan\theta$$ when $\cos\theta\not=0.$
How to Maximise Efficiency With hp12c gold calculator: Equation of Value Loan Schedule
Keystroke sequence for an hp12c:
0 g CFj
1000 g CFj
0 g CFj
0 g CFj
2000 g CFj
0 g CFj
g CFj
4000 g CFj
10 i
f NPV = 4,120.92
Notation for immediate predecessor-successor relation in a poset
$a$ is covered by $b$ ($b$ covers $a$) when $a < b$ and there is no $x$ with $a < x < b$. I have seen notation somewhat like a -< b for "$b$ covers $a$". In general, there is no immediate predecessor or successor: an element may cover numerous elements or none at all.
Is it possible to solve the difference equation $K_{n+1}=aK_n+bK_n^{\theta}+c$?
In the case $\theta = 2$ you have the logistic map, for which there is a closed form solution only in a few special cases. With general $\theta$, I would think it is even more unlikely that you would have closed form solutions.
"Folding" a 3-sphere (4D ball's surface) onto 3 dimensions
Just as an example, the so-called stereographic projection can be generalized to the four dimensional case. In three dimensions, the definition is as follows: Let the southern pole of the sphere of radius one be tangent to the plane $z=0$. The coordinates of the north pole are then $(0,0,2)$. Say we have a point $(a,b,c)$ on the sphere, marked with an $\color{green}{R}$ on its surface. The vector pointing from the north pole to $(a,b,c)$ is $(a,b,c-2)$. So the equation of the straight line through the north pole and the point marked $\color{green}{R}$ is $$(x(t),y(t),z(t))=t(a,b,c-2)+(0,0,2).$$ This line meets the plane $z=0$ at the $t$ for which $z(t)=t(c-2)+2=0$. So $t=\frac{2}{2-c}$, and the coordinates of the intersection point are: $$\left(\frac{2a}{2-c},\frac{2b}{2-c},0\right).$$ By definition, the point above is the stereographic image of the green $\color{green}{R}$ on the horizontal plane. It is easy to generalize this projection if our sphere is four dimensional and its southern pole touches the hyperplane $w=0$. (Let the coordinates of the north pole be $(0,0,0,2)$.) If we have a point $\color{red}{Q}$ on the surface of the four dimensional sphere at $(a,b,c,d)$, then the vector pointing from the north pole to this point is $(a,b,c,d-2)$; that is, the equation of the line through the north pole and the point marked $\color{red}{Q}$ is $$(x(t),y(t),z(t),w(t))=t(a,b,c,d-2)+(0,0,0,2).$$ We can repeat the last steps of the three dimensional case and find the location of the $\color{red}{Q}$ in the three dimensional space given by $w=0$. I guess that any projection based on setting up equations of lines in four dimensional space could be generalized to higher dimensions.
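The three dimensional formula is easy to implement and test (a sketch following the setup above: unit sphere with south pole at the origin and north pole at $(0,0,2)$):

```python
def stereographic(a, b, c):
    """Project the sphere point (a, b, c) from the north pole (0, 0, 2)
    onto the tangent plane z = 0."""
    t = 2.0 / (2.0 - c)
    return (t * a, t * b)

# the south pole is fixed...
print(stereographic(0, 0, 0))  # (0.0, 0.0)
# ...and the "equator" point (1, 0, 1) lands at (2, 0)
print(stereographic(1, 0, 1))  # (2.0, 0.0)
```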
Calculate integral with cantor measure
Let $C_1=\left[0,\frac{1}{3}\right]\cup\left[\frac{2}{3},1\right]$, $C_2=\left[0,\frac{1}{9}\right]\cup\left[\frac{2}{9},\frac{3}{9}\right]\cup\left[\frac{6}{9},\frac{7}{9}\right]\cup\left[\frac{8}{9},\frac{9}{9}\right]$ and so on the usual sets used to define the Cantor set. Then $\mu_F$ is the limit as $n\to +\infty$ of the probability measure $\mu_{P_n}$ on $C_n$. Let $I=[a,a+3b]$ be any closed interval of the real line and $J$ the same interval without its middle third, $J=[a,a+b]\cup[a+2b,a+3b]$. Then: $$ \int_I x^2 d\mu = \frac{1}{3}\left((a+3b)^3-a^3\right)=3b(a^2+3ab+3b^2), $$ $$\frac{3}{2}\int_J x^2 d\mu = 3b(a^2+3ab+3b^2)+b^3, $$ so: $$ \frac{3}{2}\int_J x^2 d\mu = \int_I x^2 d\mu + \frac{\mu(I)^3}{27},\tag{1}$$ giving immediately: $$ \int_{0}^{1} x^2\, d\mu_F = \lim_{n\to +\infty}\int_{0}^{1} x^2\, d\mu_{P_n} = \lim_{n\to +\infty}\sum_{k=0}^{n}\frac{1}{3^{2k+1}}=\color{red}{\frac{3}{8}} .\tag{2}$$
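One can confirm $\int_0^1 x^2\,d\mu_F=\frac38$ independently: a point distributed according to $\mu_F$ can be written $x=\sum_{k\ge1} 2\epsilon_k 3^{-k}$ with i.i.d. fair bits $\epsilon_k$ (this digit representation of the Cantor distribution is standard, but not part of the argument above), and truncating the expansion gives an exact rational approximation of $\mathbb{E}[x^2]$. A sketch:

```python
from fractions import Fraction

# E[eps_k eps_j] = 1/2 if k == j else 1/4, so
# E[x^2] = sum_{k,j} 4 * E[eps_k eps_j] / 3^(k+j); truncate at K digits
K = 40
E = Fraction(0)
for k in range(1, K + 1):
    for j in range(1, K + 1):
        Eee = Fraction(1, 2) if k == j else Fraction(1, 4)
        E += 4 * Eee * Fraction(1, 3 ** (k + j))

print(float(E))  # ~ 0.375 = 3/8, up to a truncation error of order 3^(-K)
```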
Solve the recurrence $T(n) = 2T(n-1) + n$
\begin{align} T(n) & = 2 T(n-1) + n = 2(2T(n-2) + n-1) + n = 4T(n-2) + 2(n-1) + n\\ & = 8 T(n-3) + 4(n-2) + 2(n-1) + n = 2^k T(n-k) + \sum_{j=0}^{k-1} 2^j (n-j)\\ & = 2^{n-1} T(1) + \sum_{j=0}^{n-2}2^j (n-j) = 2^{n-1} + \sum_{j=0}^{n-2}2^j (n-j) \end{align} \begin{align} \sum_{j=0}^{n-2}2^j (n-j) & = n \sum_{j=0}^{n-2}2^j - \sum_{j=0}^{n-2} j2^j = n(2^{n-1}-1) - \dfrac{n \cdot 2^n - 3 \cdot 2^n + 4}2\\ & = n(2^{n-1}-1) - (n \cdot 2^{n-1} -3 \cdot 2^{n-1} + 2) = 3 \cdot 2^{n-1} -n - 2 \end{align} Hence, $$T(n) = 2^{n-1} + 3 \cdot 2^{n-1} -n - 2 = 2^{n+1} - n - 2$$ EDIT (Adding details) First note that $\displaystyle \sum_{j=0}^{n-2}2^j$ is sum of a geometric progression and can be summed as shown below.$$\sum_{j=0}^{k} x^j = \dfrac{x^{k+1} -1}{x-1}$$ $\displaystyle \sum_{j=0}^{n-2} j2^j$ is a sum of the form $\displaystyle \sum_{j=0}^{k} jx^j$ $$\sum_{j=0}^{k} jx^j = x \sum_{j=0}^{k} jx^{j-1} = x \dfrac{d}{dx} \left( \sum_{j=0}^k x^j\right) = x \dfrac{d}{dx} \left( \dfrac{x^{k+1} - 1}{x-1}\right) = x \left( \dfrac{kx^{k+1} - (k+1) x^k +1}{(x-1)^2} \right)$$
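A quick check of the closed form against the recurrence (a sketch, taking $T(1)=1$ as in the unrolling above):

```python
def T_rec(n):
    t = 1  # T(1) = 1
    for m in range(2, n + 1):
        t = 2 * t + m  # T(m) = 2 T(m-1) + m
    return t

for n in range(1, 20):
    assert T_rec(n) == 2 ** (n + 1) - n - 2

print(T_rec(10), 2 ** 11 - 10 - 2)  # both 2036
```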
Calculate weighted estimated variance
This question is answered by the linked duplicate; I am the author of that answer, and here is how it applies. Your table of summary statistics is $$\begin{array}{c|ccc} & SP1 & SP2 & SP3 \\ \hline n_i & 20 & 30 & 46 \\ \bar x_i & 67 & 56 & 90 \\ s_i^2 & 12.3 & 23.2 & 11.2 \\ \end{array}$$ Using the formula $$\bar z = \frac{n \bar x + m \bar y}{n + m}$$ adapted to your notation on $i = 1, 2$, we have $$\bar x_{1,2} = \frac{n_1 \bar x_1 + n_2 \bar x_2}{n_1 + n_2} = \frac{20(67)+30(56)}{20+30} = \frac{302}{5}.$$ Using the formula $$s_z^2 = \frac{(n-1) s_x^2 + (m-1) s_y^2}{n+m-1} + \frac{nm(\bar x - \bar y)^2}{(n+m)(n+m-1)},$$ also adapted to your notation, we have $$\begin{align*} s_{1,2}^2 &= \frac{(n_1 - 1) s_1^2 + (n_2 - 1) s_2^2}{n_1 + n_2 - 1} + \frac{n_1 n_2(\bar x_1 - \bar x_2)^2}{(n_1 + n_2)(n_1 + n_2 - 1)} \\ &= \frac{19(12.3) + 29(23.2)}{20 + 30 - 1} + \frac{20(30)(67 - 56)^2}{(20 + 30)(20 + 30 - 1)} \\ &= 48.1327. \end{align*}$$ Therefore, we have combined the mean and variance for $SP1$ and $SP2$. Now we repeat the process to combine $SP(1,2)$ with $SP3$. This I leave as an exercise.
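The two combining formulas translate directly into code; a sketch reproducing the $SP1$-$SP2$ step (combining with $SP3$ is the same call again, and remains the exercise):

```python
def combine(n, xbar, s2, m, ybar, t2):
    """Pool the mean and (sample) variance of two groups from summary stats."""
    mean = (n * xbar + m * ybar) / (n + m)
    var = ((n - 1) * s2 + (m - 1) * t2) / (n + m - 1) \
        + n * m * (xbar - ybar) ** 2 / ((n + m) * (n + m - 1))
    return n + m, mean, var

n12, m12, v12 = combine(20, 67, 12.3, 30, 56, 23.2)
print(n12, m12, v12)  # 50, 60.4, ~48.1327
```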
Finding number of elements $n\in\{1,2,...,20\}$ for which $1.9\le\frac{A_n}{A_{n-1}}\le2$ where $A_n=\max\left\{\binom{n}{r}:0\le r\le n\right\}$
HINT: observe that $(n-r)!\,r!=s!\,(n-s)!$ whenever $r=n-s$. Then $$\{(n-r)!r!:r\in\{0,\ldots,n\}\}=\{(n-r)!r!:r\in\{0,\ldots,\lfloor n/2\rfloor\}\}$$ where $\lfloor n/2\rfloor$ is the floor of $n/2$. Now verify that the function $$f:\{0,\ldots,\lfloor n/2\rfloor\}\to \Bbb N,\quad r\mapsto (n-r)!r!$$ is strictly decreasing, i.e. $f(k)>f(k+1)$ for all $k\in\{0,\ldots,\lfloor n/2\rfloor-1\}$. Then $$f^{-1}(\min\{(n-r)!r!:r\in\{0,\ldots,\lfloor n/2\rfloor\}\})=\lfloor n/2\rfloor\implies A_n=\binom{n}{\lfloor n/2 \rfloor}$$ Then the desired ratio becomes $$\frac{A_n}{A_{n-1}}=\frac{\frac{n!}{\lfloor (n+1)/2\rfloor!\cdot\lfloor n/2\rfloor!}}{\frac{(n-1)!}{\lfloor n/2\rfloor!\cdot\lfloor(n-1)/2\rfloor!}}=\frac{n}{\lfloor (n+1)/2\rfloor}$$ In the last step I used the fact that $\lfloor (n+1)/2\rfloor+\lfloor n/2\rfloor=n$ for all $n\in\Bbb N_{>0}$, which implies that $\lfloor(n+1)/2\rfloor -\lfloor(n-1)/2\rfloor=1$.
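With $A_n=\binom{n}{\lfloor n/2\rfloor}$, the count is a one-liner to verify (a sketch; the ratio is exactly $2$ for even $n$ and $2n/(n+1)$ for odd $n$, so the interval $[1.9,2]$ picks up every even $n$ plus $n=19$):

```python
from math import comb

A = lambda n: comb(n, n // 2)
good = [n for n in range(1, 21) if 1.9 <= A(n) / A(n - 1) <= 2]
print(good)       # every even n, plus n = 19 (where the ratio is exactly 1.9)
print(len(good))  # 11
```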
geometric meaning of differentiation with respect to the complex conjugate of $z$
Geometrically one can see what's going on by analyzing $f$'s induced map on tangent vectors. You'll see that the operator $\frac{\partial f}{\partial \overline{z}}$ measures the failure of a function to respect multiplication by $i$ between domain tangent vectors and range tangent vectors. It is helpful to think of $f$ as mapping from a domain $\mathbb{C}\cong\mathbb{R}^2$ to a distinct range $\mathbb{C}\cong\mathbb{R}^2$. Any differentiable function $f$ induces a map from tangent vectors in the domain to tangent vectors in the range, denoted here $v \mapsto f'(v)$; this is the derivative of $f$ along the vector $v$. Both our domain and range are complex and thus have a concept of multiplication by $i$ on tangent vectors. Multiplying by $i$ is like rotating a vector counter-clockwise by 90 degrees; e.g. at a point $z=(x,y)$, the vector $\frac{\partial}{\partial x}$ gets sent to $\frac{\partial}{\partial y}$ under multiplication by $i$. The beauty of holomorphic functions is that they respect this 90 degree rotation between the domain and range. That is, if $f$ is holomorphic, then $f'(iv) = i f'(v)$, where the first multiplication by $i$ is in the domain and the second is in the range. So, from the image of a vector in the domain (the derivative in one direction at some point), you can infer the image under $f'$ of every direction at that point. We write this as: for all $v$, $f'(iv) = if'(v)$. The failure to achieve holomorphicity is thus $f'(iv)-if'(v)$ being different from $0$ for some $v$. The operator $\frac{\partial f}{\partial \overline{z}}$ is essentially this measurement for a specific $v$. Let $v$ be the unit vector $-\frac{\partial}{\partial y}$. Then $i v$ is $\frac{\partial}{\partial x}$. The above expression $f'(iv)-if'(v)$ becomes $f'(\frac{\partial}{\partial x}) - i f'(-\frac{\partial}{\partial y})$ = $\frac{\partial f}{\partial x} + i \frac{\partial f}{\partial y}$ = $2 \frac{\partial f}{\partial \overline{z}}$. Of course there's a certain symmetry here. 
While functions where $\frac{\partial f}{\partial \overline{z}}=0$ are holomorphic, functions where $\frac{\partial f}{\partial z}=0$ are anti-holomorphic; $f'$ sends the vector $i v$ to $- i f'(v)$ for all $v$.
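A symbolic check of the two operators (a sketch with SymPy; $f=z^2$ is holomorphic, $f=\bar z$ anti-holomorphic):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
I = sp.I

def d_zbar(f):  # Wirtinger derivative d/d(zbar) = (d/dx + i d/dy) / 2
    return sp.simplify((sp.diff(f, x) + I * sp.diff(f, y)) / 2)

def d_z(f):     # d/dz = (d/dx - i d/dy) / 2
    return sp.simplify((sp.diff(f, x) - I * sp.diff(f, y)) / 2)

z, zbar = x + I * y, x - I * y

print(d_zbar(z**2))  # 0: z^2 is holomorphic
print(d_z(zbar))     # 0: conj(z) is anti-holomorphic
print(d_zbar(zbar))  # 1
```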
Series convergence geometric series
hint $$3n^2+4n+2^{-2n}\sim 3n^2 \;\;(n\to +\infty)$$ $$7^{n+2}+4n+5\sqrt{n} \sim 7^{n+2} \;\;(n\to +\infty)$$ the general term of your series is equivalent to $$\frac{9n^4}{7^{2n+4}}$$ and has the same nature as $$\sum n^4(\frac{1}{49})^n$$
Proving an equivalent to a summation formula
Showing that $$\sum_{k=1}^nk=\frac{1}{2}n^2+\mathcal{O}(n)$$ is the same as showing that $$\left(\sum_{k=1}^nk\right)-\frac{1}{2}n^2=\mathcal{O}(n).\tag1$$ The LHS of (1) equals $\frac n2$, by the summation formula, so now you have to show that $$ \frac n2 = {\mathcal O}(n). $$ Using the $f$, $g$ notation, we have $f(n):=\frac n2$, and $g(n):=n$. For what value of $c>0$ do we have $|f(n)|\le c|g(n)|$ for all large $n$?
Linear image of closed convex subset
Even in finite dimensions this can fail: let $X=\mathbb{R}^2$, $Y = \mathbb{R}$, $A$ be projection onto the first coordinate, and $M = \{ (x,y) : x > 0, y \ge 1/x\}$. In particular compactness of $A$ isn't sufficient. Compactness of $M$ is obviously sufficient. Boundedness of $M$ is sufficient in finite dimensions thanks to Heine-Borel, but not in infinite dimensions: take $X = C([0,1])$, $Y = L^1([0,1])$, $A$ the inclusion map, and $M$ the closed unit ball of $X$; you can approximate, say, $1_{[0,1/2]}$ in $L^1$ norm by continuous functions bounded by 1. If $A$ is bounded away from 0 (i.e. there is a constant $c$ with $\|Ax\| \ge c\|x\|$), then $A$ is a homeomorphism and so $A(M)$ is closed. Likewise, if $A$ is bijective, then it follows from the open mapping theorem that it is again a homeomorphism.
Triangle Inequality for proving Cauchy Criterion for Series
First, this is the reverse triangle inequality (not the triangle inequality), which states $$ |x-y| \geq ||x| - |y||$$ Additionally, they are using \begin{align} \sum_{k=n+1}^\infty a_k &= \sum_{k=n+1}^m a_k + \sum_{k=m+1}^\infty a_k\\ \sum_{k=n+1}^\infty a_k - \sum_{k=m+1}^\infty a_k&= \sum_{k=n+1}^m a_k\\ \end{align} Applying the reverse triangle inequality we have \begin{align} |\sum_{k=n+1}^m a_k| &= |\sum_{k=n+1}^\infty a_k - \sum_{k=m+1}^\infty a_k|\\ &\geq ||\sum_{k=n+1}^\infty a_k| - |\sum_{k=m+1}^\infty a_k|| \end{align} But this does not match what they are saying; it seems they have the inequality backwards. The following is adapted from a (different?) version of the book. If (2) holds, then there is an $L \in \mathbb{R}$ such that for all $\varepsilon > 0$ there exists $N$ such that for $n \geq N$, $$|L - \sum_{k=n+1}^\infty a_k| < \dfrac{\varepsilon}{2}$$ Then by the triangle inequality, for $m \geq N$ we have \begin{align} |\sum_{k=n+1}^m a_k| &= |\sum_{k=n+1}^\infty a_k - \sum_{k=m+1}^\infty a_k|\\ &= |\sum_{k=n+1}^\infty a_k - L + L - \sum_{k=m+1}^\infty a_k|\\ &\leq |\sum_{k=n+1}^\infty a_k - L| + |L - \sum_{k=m+1}^\infty a_k|\\ &=|L - \sum_{k=n+1}^\infty a_k| + |L - \sum_{k=m+1}^\infty a_k|\\ &< \dfrac{\varepsilon}{2} + \dfrac{\varepsilon}{2}\\ &= \varepsilon \end{align} Thus $|\sum_{k=n+1}^m a_k| < \varepsilon$ as desired.
Show that the generator of a strongly continuous contraction semigroup on $L^2$ is nonpositive definite
Because $T(t)$ is contractive, then $\|T(t)f\|^2$ is a non-increasing function of $t$ for each fixed $f$. Consequently, for all $f\in\mathcal{D}(A)$, $$ 0 \ge \left.\frac{d}{dt}\|T(t)f\|^2 \right|_{t=0} = \langle Af,f\rangle+\langle f,Af\rangle = 2\Re\langle Af,f\rangle. $$ Assuming that $A$ is selfadjoint gives $A \le 0$.
Finding counterexamples for infinite series.
a. As shown in the comment, $a_k = 1/(k \log k)$ fails. b. True: since the ratios $|a_{k+1}|/|a_k|$ tend to some $L>1$, they eventually exceed $1+c$ for some $c>0$, so eventually $|a_k| > M(1+c)^{k} \to +\infty$, and hence the series cannot converge. In detail, by the limit equation and the definition, there is some $N\in \Bbb N^*$ such that whenever $k \geq N$, $$ \frac {|a_{k+1}| }{|a_k|} > \frac {1+L}2 > 1, $$ hence for $k \geq N$, $$ |a_k| \geq |a_N| \left(\frac {1+L}{2}\right)^{k-N}, $$ and taking the limit $k \to +\infty$ we get $|a_k| \to +\infty$. But the convergence of $\sum x_k$ requires $\lim_{k \to +\infty} x_k =0$: if $S = \sum x_k$, then $x_k = \sum_1^k x_j - \sum_1^{k-1} x_j$ and $\lim x_k = S - S = 0$. The deduction above shows that $a_k$ does not meet this requirement, hence the series diverges. c. False: $$ a_n = \frac {(-1)^n}{\sqrt n}, \qquad b_n = a_n + \frac 1n. $$
How do you prove that $e^{ia} + 1 = 2e^{i\frac{a}{2}}\cos(\frac{a}{2})$?
Hint: $$e^{ia}+1=e^{\frac{ia}{2}}(e^{\frac{ia}{2}}+e^{-\frac{ia}{2}})$$ It might be useful to note that $$\cos(x)=\frac{e^{ix}+e^{-ix}}{2}$$
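If you want to sanity-check the identity symbolically, here is a small SymPy sketch (illustrative only; it just automates the hint by rewriting the cosine in exponentials):

```python
import sympy as sp

a = sp.symbols('a', real=True)

lhs = sp.exp(sp.I * a) + 1
rhs = 2 * sp.exp(sp.I * a / 2) * sp.cos(a / 2)

# rewrite cos(a/2) via exponentials; the difference then collapses to 0
diff = sp.expand((lhs - rhs).rewrite(sp.exp))
assert diff == 0
```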
The convergence of $ \sum_{i=1}^n \frac{1}{p(n)}$ when p(n) is a polynomial of degree bigger than 1
I assume that you are asking about the convergence of the series $\sum_{n=1}^\infty\frac{1}{p(n)}$ where $p$ is a polynomial whose degree $d$ is greater than $1$. Let's write $$ p(n) = a_dn^d + a_{d-1}n^{d-1} + \dotsb + a_1n + a_0, $$ where $a_d \ne 0$. Choose $N$ so large that $|a_{d-1}n^{d-1} + \dots + a_1n + a_0| < {|a_{d}n^d|\over 2}$ for all $n>N$. (I will let you think about why such a choice of $N$ is possible by considering the degree of $a_{d-1}n^{d-1} + \dots + a_1n + a_0$ and the degree of $a_dn^d$.) With this choice of $N$, and an application of the reverse triangle inequality, $$ \sum_{n>N}\frac{1}{|p(n)|} \le \sum_{n>N}\frac{2}{|a_dn^d|} = {2\over|a_d|}\sum_{n>N}\frac{1}{n^d}. $$ Now, the series $\sum_{n>N}\frac{1}{n^d}$ converges because $d > 1$, by the $p$-test, so the original series converges absolutely.
The notation for partial derivatives
A nicer notion is that of the differential: $$ \text{If} \qquad z = 5x + 3y \qquad \text{then} \qquad dz = 5\, dx + 3\,dy $$ Then if you decide to hold $y$ constant, that makes $dy = 0$, and you have $dz = 5 \, dx$. Another notation that works well with function notation is that if we define $$ f(x,y) = 5x + 3y$$ then $f_i$ means derivative of $f$ with respect to the $i$-th entry; that is $$ f_1(x,y) = 5 \qquad \qquad f_2(x,y) = 3 $$ This doesn't work well with a common abuse of notation, though; sometimes people write $f(r,\theta)$ when they really mean "evaluate $f$ at the $(x,y)$ pair whose polar coordinates are $(r, \theta)$" rather than the 'correct' meaning of that expression "evaluate $f$ at $(r, \theta)$". So if you're in the habit of doing that, don't try to indicate derivatives by their position. I confess I really dislike partial derivative notation; when one writes $\partial/\partial x$, one "secretly" means that they intend to hold $y$ constant, then when one passes it through the differential, one gets $$ \frac{\partial z}{\partial x} = 5 \frac{\partial x}{\partial x} + 3 \frac{\partial y}{\partial x} = 5 \cdot 1 + 3 \cdot 0 = 5$$ However, the suggestive form of Leibniz notation starts becoming very misleading at this point; for example, let's compute other partial derivatives. $\partial z / \partial x = 5$, holding $y$ constant as the notation suggests $\partial x / \partial y = -3/5$, holding $z$ constant as the notation suggests $\partial y / \partial z = 1/3$, holding $x$ constant as the notation suggests Then putting it together, $$ \frac{\partial z}{\partial x} \frac{\partial x}{\partial y} \frac{\partial y}{\partial z} = 5 \cdot \left(-\frac{3}{5}\right) \cdot \frac{1}{3} = -1 $$ This is a big surprise if you expect partial derivatives to behave similarly to fractions as their notation suggests!!!
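The surprising $-1$ product can be checked mechanically for this particular $z = 5x + 3y$ (SymPy sketch; I solve for each variable explicitly so that "what is held constant" is encoded in the expression being differentiated):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# z = 5x + 3y, solved for each variable in turn
dz_dx = sp.diff(5*x + 3*y, x)        # holding y constant -> 5
dx_dy = sp.diff((z - 3*y) / 5, y)    # holding z constant -> -3/5
dy_dz = sp.diff((z - 5*x) / 3, z)    # holding x constant -> 1/3

product = sp.simplify(dz_dx * dx_dy * dy_dz)
assert product == -1
```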
Prove that $\lim_{n\rightarrow\infty}\frac{1}{n^{p+1}}\sum_{k=1}^{n}k^{p}=\frac{1}{p+1} $
Hint: $$\int_1^nx^p dx \leq \sum_{k=1}^n k^p \leq \int_1^{n+1}x^p dx$$ Precalculus: Use induction. For the upper bound assume that $$\sum_{k=1}^n k^p \leq \frac{(n+1)^{p+1}}{p+1}$$ holds. Then show that $(n+1)^p + \frac{(n+1)^{p+1}}{p+1} \leq \frac{(n+2)^{p+1}}{p+1}$ to conclude that $\sum_{k=1}^{n+1} k^p \leq \frac{(n+2)^{p+1}}{p+1}$ and since it holds for $n=1$ it follows by induction that the inequality hold for all $n$. For the lower bound assume that $$\frac{n^{p+1}-1}{p+1}\leq \sum_{k=1}^n k^p$$ holds. Then show that $\frac{n^{p+1}-1}{p+1} + (n+1)^p\geq \frac{(n+1)^{p+1}}{p+1}$ and conclude.
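A quick numerical illustration of the limit (not a proof; the helper name is mine):

```python
def scaled_power_sum(p, n):
    """(1/n^(p+1)) * sum_{k=1}^{n} k^p, which should approach 1/(p+1)."""
    return sum(k**p for k in range(1, n + 1)) / n**(p + 1)

# the error behaves like 1/(2n), so n = 100000 is well within 1e-4
for p in (1, 2, 3):
    assert abs(scaled_power_sum(p, 100_000) - 1 / (p + 1)) < 1e-4
```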
Prove there exists $c,d\in(a,b)$ such that $\frac{f'(c)}{f'(d)}=\frac{e^b-e^a}{b-a}e^{-d}$
By the MVT, there exists some $c \in (a, b)$ such that $$f'(c) = \frac{f(b) - f(a)}{b - a}.$$ Also, consider $g(x) = f(\ln(x))$, on the interval $[e^a, e^b]$. Then, also by the MVT, there exists some $d$ such that $e^d \in (e^a, e^b)$ and $$e^{-d}f'(d) = \frac{1}{e^d} f'(\ln(e^d)) = \frac{f(\ln(e^b)) - f(\ln(e^a))}{e^b-e^a} = \frac{f(b) - f(a)}{e^b - e^a}.$$ Taking the quotient yields the desired result.
How many elements $x$ in the field $\mathbb{ Z}_{11}$ satisfy the equation $x^{12} - x^{10} = 2$?
Hint: every nonzero element $x \in \mathbb{Z}/11\mathbb{Z}$ satisfies $x^{10} = 1$.
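To spell the hint out: for $x \neq 0$ the equation reduces to $x^2 - 1 = 2$, i.e. $x^2 \equiv 3 \pmod{11}$, which has the two solutions $x \equiv \pm 5$ (since $5^2 = 25 \equiv 3$). A brute-force check:

```python
# count x in Z/11Z with x^12 - x^10 = 2; by Fermat, x^10 = 1 for x != 0
solutions = [x for x in range(11) if (pow(x, 12, 11) - pow(x, 10, 11)) % 11 == 2]
assert solutions == [5, 6]   # 6 = -5 mod 11
assert len(solutions) == 2
```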
Csiszar Sum Identity
Let us recall the Csiszar sum identity. For any random variables $X$ and sequences of random variables $Y_1^n, Z_1^n$ it holds that $$ \sum_i I(Y^{i-1};Z_i|Z_{i+1}^n, X) = \sum_i I(Z_{i+1}^n;Y_i|Y^{i-1}, X)$$ For the first equality, notice that $$ I(M, Z_{i+1}^n; Y_i|Y^{i-1}) = I(M;Y_i|Y^{i-1}) + I(Z_{i+1}^n; Y_i|Y^{i-1}, M).$$ Similarly, $$ I(M, Y^{i-1}; Z_i|Z_{i+1}^n) = I(M;Z_i|Z_{i+1}^n) + I(Y^{i-1}; Z_i|Z_{i+1}^n, M).$$ So, to show the first equality, it suffices to show that $$ \sum_i I(Y^{i-1};Z_i|Z_{i+1}^n, M) = \sum_i I(Z_{i+1}^n;Y_i|Y^{i-1}, M).$$ But this is precisely the Csiszar sum identity (applied with $X= M$). For the second equality, we can expand mutual information in the second way to see that $$ I(M, Z_{i+1}^n; Y_i|Y^{i-1}) = I(Z_{i+1}^n;Y_i|Y^{i-1}) + I(M; Y_i|Y^{i-1}, Z_{i+1}^n)\\ I(M, Y^{i-1}; Z_i|Z_{i+1}^n) = I(Y^{i-1};Z_i|Z_{i+1}^n) + I(M; Z_i|Z_{i+1}^n, Y^{i-1}).$$ Again, the equality will follow if $$ \sum_i I(Z_{i+1}^n; Y_i|Y^{i-1}) = \sum_i I(Y^{i-1}; Z_i|Z_{i+1}^n).$$ But this is again just the Csiszar sum identity (applied with the trivial random variable $X \equiv 1$)
Integration by parts of $\sin(x)e^x$
Another way could be to consider $$I=\int \sin(x)\,e^x\,dx$$ $$J=\int \cos(x)\,e^x\,dx$$ Then $$J+i I=\int e^{ix}\,e^x\,dx=\int e^{(1+i)x}\,dx=\frac { e^{(1+i)x}} {1+i}=\left(\frac{1}{2}-\frac{i}{2}\right) e^{(1+i) x}$$ Expand the last term to get $$J+i I=\frac{1}{2} e^x \sin (x)+\frac{1}{2} e^x \cos (x)+i \left(\frac{1}{2} e^x \sin (x)-\frac{1}{2} e^x \cos (x)\right)$$ Doing the same for $$K=\int \sin(ax)\,e^{bx}\,dx$$ $$L=\int \cos(ax)\,e^{bx}\,dx$$ $$L+iK=\int e^{iax}e^{bx}\,dx=\int e^{(b+ia)x}dx=\frac {e^{(b+ia)x}}{b+ia}=\frac {b-ia}{a^2+b^2}{e^{(b+ia)x}}$$ and expanding again $$L+iK=\frac{a e^{b x} \sin (a x)}{a^2+b^2}+\frac{b e^{b x} \cos (a x)}{a^2+b^2}+i \left(\frac{b e^{b x} \sin (a x)}{a^2+b^2}-\frac{a e^{b x} \cos (a x)}{a^2+b^2}\right)$$
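SymPy reproduces both antiderivatives (a sanity check, up to the constant of integration):

```python
import sympy as sp

x = sp.symbols('x', real=True)

I = sp.integrate(sp.sin(x) * sp.exp(x), x)
J = sp.integrate(sp.cos(x) * sp.exp(x), x)

# these should match the real and imaginary parts computed above:
# I = e^x (sin x - cos x)/2,   J = e^x (sin x + cos x)/2
assert sp.simplify(I - sp.exp(x) * (sp.sin(x) - sp.cos(x)) / 2) == 0
assert sp.simplify(J - sp.exp(x) * (sp.sin(x) + sp.cos(x)) / 2) == 0
```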
Integer absolute difference sequences
They're called Ducci sequences. The proof that they terminate for $n = 2^k$ is fairly elementary: all you have to show is that eventually all of the terms are even, and then you're done by induction on the binary expansion. To show that eventually all of the terms are even it suffices to work $\bmod 2$, and then you're just looking at powers of the matrix $I + P$ over $\mathbb{F}_2$, where $P$ is a cyclic permutation. We have $(I + P)^{2^k} \equiv I + P^{2^k} \bmod 2$, so when $n = 2^k$ we are guaranteed that after at most $n$ steps all of the terms are even (and otherwise we get the behavior you already described).
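A quick simulation of the $n = 4$ case (helper name is mine; termination for power-of-two length is exactly what the argument above guarantees):

```python
def ducci_steps(t):
    """Count |difference| steps until the tuple becomes all zeros."""
    steps = 0
    while any(t):
        t = tuple(abs(t[i] - t[(i + 1) % len(t)]) for i in range(len(t)))
        steps += 1
    return steps

# n = 4 is a power of two, so every integer 4-tuple reaches (0, 0, 0, 0)
assert ducci_steps((0, 1, 2, 3)) == 5
assert ducci_steps((7, 7, 7, 7)) == 1
```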
how do you find the highest common factor of two multivariate polynomials?
This is discussed in Cox, Little, and O'Shea's book "Ideals, Varieties, and Algorithms" around page 180. There is also the command PolynomialGCD in Mathematica, and it appears to work on Wolfram Alpha.
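Outside Mathematica, SymPy's `gcd` also handles multivariate polynomials (a small illustration; the sample polynomials are mine):

```python
from sympy import symbols, gcd

x, y = symbols('x y')

p = x**2 * y + x * y**2   # = x*y*(x + y)
q = x**2 * y**2           # = (x*y)^2
assert gcd(p, q) == x * y
```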
Questions on $\overline{\mathbb{F}_p}|\mathbb{F}_p$
Yes, the algebraic closure of $\mathbb{F}_p$ is the union (or more carefully, the direct limit) of the finite fields: first, this makes sense because given any $n$ and $m$, both $\mathbb{F}_{p^n}$ and $\mathbb{F}_{p^m}$ are subfields of $\mathbb{F}_{p^{\mathrm{lcm}(n,m)}}$, so you will indeed get a field. And given any polynomial $f(x)$ with coefficients in $\mathbb{F}_p$, $f(x)$ has at least one root in a finite extension of $\mathbb{F}_p$, hence in the union. And, furthermore, we clearly need at least all the finite fields, since they are the splitting fields of $x^{p^m}-x$ over $\mathbb{F}_p$. So the union is an algebraic closure of $\mathbb{F}_p$. The Galois group is the inverse limit of the Galois groups of the finite subextensions (this is true in any infinite Galois extension), which are cyclic; you end up with the inverse limit of the cyclic groups, and that is called $\widehat{\mathbb{Z}}$, the profinite completion of $\mathbb{Z}$. Yes, there are other intermediate fields. For example, take the union of $\mathbb{F}_{p^{2^k}}$ for $k=1,2,\ldots$. These fields form a chain (since $\mathbb{F}_{p^m}\subseteq\mathbb{F}_{p^n}$ if and only if $m|n$), so their union is a field. It is clearly not finite. And it does not contain any element of degree not a power of $2$ over $\mathbb{F}_p$ (in particular, no root of an irreducible cubic), so it cannot be the whole algebraic closure.
Intuition Behind Maximum Principle (Complex Analysis)
Think about what the mean value property of analytic functions says: $f(z_0) = \dfrac{1}{2\pi} \int_0^{2\pi} f(z_0 + re^{i\theta}) d\theta$, where $f$ is analytic in the disk $B_r(z_0)$. This says that $f$ is equal to the average of the boundary points. How can $|f(z_0)|$ be larger than every single point of which it is the average? Thinking discretely, if $a= \dfrac{a_1 + \cdots + a_n}{n}$, can $a$ be larger than every point in this sum? No, this is not possible.
How do I convert an expression in terms of the general equation of a conic section to one in the equation of an ellipse?
Think of the ellipse as a quadratic form: $$[x,y,1]\begin{bmatrix}a&b/2&d/2\\b/2&c&e/2\\d/2&e/2&-f \end{bmatrix}\begin{bmatrix}x\\y\\1 \end{bmatrix}=0. $$ Let's call the matrix in the middle $Q_0$, so that $[x,y,1]Q_0 \begin{bmatrix}x\\y\\1 \end{bmatrix}=0 $. Since you are allowed to translate and rotate the coordinate system, you can insert a 2D "rigid motion" matrix $E$, giving $[x,y,1] E^T Q_1 E\begin{bmatrix}x\\y\\1 \end{bmatrix}=0 $, and you can look for an $E$ such that $Q_1=\begin{bmatrix} a'&0&0\\0&b'&0\\0&0&-1 \end{bmatrix}$. The general form of $E$ is $\begin{bmatrix}\cos(\theta)&\sin(\theta)&p_x \\-\sin(\theta)&\cos(\theta)&p_y\\ 0&0&1 \end{bmatrix}$.
An example of topological space in which each singleton is not in $G_\delta$
Yes, there are compact Hausdorff spaces in which no point is a $G_\delta$. (I’ll have to think more about the second question.) Let $X=\beta\omega\setminus\omega$; then $X$ is a compact Hausdorff space in which no singleton is a $G_\delta$. To see this, let $p\in X$, and think of $p$ as a free ultrafilter on $\omega$. Let $\{V_n:n\in\omega\}$ be a countable family of open nbhds of $p$. Then for each $n\in\omega$ there is a set $A_n\subseteq\omega$ such that $p\in\hat A_n\subseteq V_n$, where $$\hat A_n=\{q\in X:A_n\in q\}$$ is a basic open set in $X$. Without loss of generality we may assume that $\bigcap_{n\in\omega}A_n=\varnothing$, that $A_{n+1}\subseteq A_n$, and further that $|A_n\setminus A_{n+1}|\ge 2$ for each $n\in\omega$. For each $n\in\omega$ choose an integer $b_n\in A_n\setminus A_{n+1}$, and let $B=\{b_n:n\in\omega\}$. Then the families $$\mathscr{F}=\{B\}\cup\{A_n:n\in\omega\}$$ and $$\mathscr{G}=\{\omega\setminus B\}\cup\{A_n:n\in\omega\}$$ are both centred, so they can be extended to free ultrafilters $q(\mathscr{F})$ and $q(\mathscr{G})$, respectively. Exactly one of $B$ and $\omega\setminus B$ belongs to $p$. If $B\in p$, then $$q(\mathscr{G})\in\bigcap_{n\in\omega}\hat A_n\setminus\{p\}\;,$$ and if $\omega\setminus B\in p$, then $$q(\mathscr{F})\in\bigcap_{n\in\omega}\hat A_n\setminus\{p\}\;,$$ so $$\{p\}\ne\bigcap_{n\in\omega}\hat A_n \subseteq\bigcap_{n\in\omega}V_n\;,$$ and $\{p\}$ is not a $G_\delta$. Added: An even simpler example is $X=\{0,1\}^{\omega_1}$. If $x\in X$ belongs to some $G_\delta$-set $H$, there is an $\alpha<\omega_1$ such that $G=\{y\in X:y\upharpoonright\alpha= x\upharpoonright\alpha\}\subseteq H$, and clearly $\{x\}\subsetneqq G$.
Parametrise curve by angle and convex curves
You can certainly parametrize a plane curve by its tangential angle. If, for instance, you have the Whewell intrinsic equation $s=f(\varphi)$ ($s$ being the arclength, and $\varphi$ being the tangential angle), you can obtain a corresponding set of parametric equations: $$\begin{align*}x&=x_0+\int_{\varphi_0}^\varphi f^\prime(u)\cos\,u\;\mathrm du\\y&=y_0+\int_{\varphi_0}^\varphi f^\prime(u)\sin\,u\;\mathrm du\end{align*}$$ where $x_0,y_0,\varphi_0$ are arbitrary. For instance, the circle involute has the Whewell equation $s=\dfrac{a\varphi^2}{2}$.
Show that ∇· (∇ x F) = 0 for any vector field
It's better if you define $F$ in terms of smooth functions in each coordinate. For instance I would write $F = (F_x, F_y, F_z) = F_x\hat{i} + F_y \hat{j} + F_z \hat{k}$ and compute each quantity one at a time. First you'll compute the curl: $$ \nabla \times F \;\; =\;\; \left | \begin{array}{ccc} \hat{i} & \hat{j} & \hat{k} \\ \partial_x & \partial_y & \partial_z \\ F_x & F_y & F_z \\ \end{array} \right | \;\; =\;\; G_x\hat{i} + G_y \hat{j} + G_z\hat{k} $$ where the functions $G_x, G_y, G_z$ are obtained by computing the determinant. Then you will want to compute $$ \nabla\cdot (\nabla\times F) \;\; =\;\; \frac{\partial G_x}{\partial x} + \frac{\partial G_y}{\partial y} + \frac{\partial G_z}{\partial z}. $$ You should find that the last equation yields zero.
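A symbolic check with SymPy (illustrative; it just automates the cancellation of mixed partials that makes the divergence vanish):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
Fx = sp.Function('Fx')(x, y, z)
Fy = sp.Function('Fy')(x, y, z)
Fz = sp.Function('Fz')(x, y, z)

# components G_x, G_y, G_z of curl F from the determinant
Gx = sp.diff(Fz, y) - sp.diff(Fy, z)
Gy = sp.diff(Fx, z) - sp.diff(Fz, x)
Gz = sp.diff(Fy, x) - sp.diff(Fx, y)

# divergence of the curl: the mixed partials cancel pairwise
div_curl = sp.diff(Gx, x) + sp.diff(Gy, y) + sp.diff(Gz, z)
assert sp.simplify(div_curl) == 0
```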
Convergence test for series $\sum_{n=2}^{\infty}\frac{(-1)^n}{\sqrt{n}\sqrt{n+(-1)^n}}$
Denote by $\sum a_n$ the given series. We have $$\vert a_n\vert\ge\frac{1}{n+1},$$ so $\sum a_n$ isn't absolutely convergent. Moreover $$a_n=\frac{(-1)^n}{n}\left(1+\frac{(-1)^n}{n}\right)^{-1/2}=\frac{(-1)^n}{n}+O\left(\frac1{n^2}\right),$$ so $\sum a_n$ is convergent as the sum of two convergent series (an alternating series and an absolutely convergent one).
Can we use the Chi-Square table for the Sign Test of small sample? What are the different versions of a Sign Test?
I will illustrate a sign test using R statistical software. To begin, we put the data into a vector x and sort it:

    x = c(163, 165, 160, 189, 161, 171, 158, 151, 169, 162,
          163, 139, 172, 165, 148, 166, 172, 163, 187, 173)
    sx = sort(x); sx
    ## 139 148 151 158 160 161 162 163 163 163
    ## 165 165 166 169 171 172 172 173 187 189
    sum(sx > 160)
    ## 15

We see that there are 20 observations, of which 15 exceed 160. In a formal sign test the four observations with values below 160 would be called 'minuses' and the 15 above 'plusses'. (Because one observation is exactly equal to 160 we throw it out; it can't provide any evidence against $\mu = 160.$) Now think of the random variable $Y \sim Binom(n = 19, p = 1/2)$, which counts the number of plusses (successes) under the null hypothesis. We seek the p-value, which is the probability of a result as or more extreme than what we observed. Specifically, for the one-sided test about which you ask, we need to evaluate $$P(Y \ge 15) = 1 - P(Y \le 14) \approx 0.0096.$$ In R this is computed as follows:

    1 - pbinom(14, 19, .5)
    ## 0.009605408

Because the p-value is smaller than 5%, we reject the null hypothesis at the 5% level of significance. In R there is a function binom.test that does all of this in one step (where ... indicates I have omitted some output that is not directly relevant):

    binom.test(15, 19, .5, alternative="greater")

    Exact binomial test
    data: 15 and 19
    number of successes = 15, number of trials = 19, p-value = 0.009605
    ...

You could use the normal approximation to find the p-value: Let $\mu = E(Y) = np = 19(.5) = 9.5$ and $\sigma = SD(Y) = \sqrt{np(1-p)} = \sqrt{4.75}.$ Then $Z = (Y - \mu)/\sigma$ is approximately standard normal. Thus, with continuity correction, the p-value is approximated as $$P(Y > 14.5) = P(Z > (14.5 - 9.5)/\sqrt{4.75}) = P(Z > 2.294157) \approx 0.011,$$ where the numerical result can be obtained from printed normal tables or from software. (Notice that the normal approximation is not quite accurate to the second decimal place in this case.) In the figure below, the vertical bars show the distribution $Binom(19, 1/2)$, the total height of the heavy bars at the right is the p-value, and the curve is the density of $Norm(9.5, \sqrt{4.75}).$ Finally, you asked whether the chi-squared distribution can be used to solve this problem. Consider the random variable $Q = Z^2 \sim Chisq(df=1)$. For a two-sided test you could use $$1 - P(-2.294 < Z < 2.294) = P(Q = Z^2 > 2.294^2 = 5.262)$$ and use tables of the chi-squared distribution or software to find that $P(Q > 5.262) \approx 0.0218,$ which is the approximate p-value for a two-sided test. But this does not directly solve your problem, so it seems an unnecessary complication to use the chi-squared distribution here.

Addendum: Perhaps you have already learned about t tests or will see them soon in your course. A t test takes into account the exact value of each observation (not just whether it is above or below 160). Some results for the appropriate one-sample, one-sided t test are shown below:

    t.test(x, mu=160, alt="g")

    One Sample t-test
    data: x
    t = 1.8735, df = 19, p-value = 0.03823
    alternative hypothesis: true mean is greater than 160
    ...

A t test requires data to be approximately normal. Even though your data show a couple of near outliers, a Shapiro-Wilk test does not reject the null hypothesis of normality.
Interesting problem involving gradient
By the chain rule we have $$h_x(x,y)=f_x(x^2,x^2-y^2) \cdot 2x+f_y(x^2,x^2-y^2) \cdot 2x$$ and $$h_y(x,y)=f_x(x^2,x^2-y^2) \cdot 0+f_y(x^2,x^2-y^2) \cdot (-2y).$$ Now compute $\nabla h(1,1)$ and then $||\nabla h(1,1)||.$
Smallest $k$ Such that $13 + 4 \cdot k \cdot p^2$ is a Perfect Odd Square
There is one major optimization screaming at me here: check each square sequentially for whether or not it is the "odd square" that the formula equals. This will be faster because $n^2$ (for odd $n$) grows faster than the current formula, which is linear in $k$. Of course you would start with the first square greater than $13 + 4p^2$, since any lower square is impossible. This method will be faster when $\frac {n^2}{4p^2} > n - \sqrt{4p^2} = n - 2p$. I do not know whether or not this inequality ever works out to being true; however, for sufficiently large $p$ I strongly suspect that iterating through the squares will be faster. One may note that my formula assumes that every multiple of $p^2$ needs to be checked, as well as every $n^2$; this cancels out, as I would divide both sides by $2$, so it is irrelevant. EDIT: I thought about this a bit more. For sufficiently small $k$, iterating through squares will be slower (because the growth rate of sequential squares will be smaller than the growth of sequential multiples of $4p^2$). Once $k > 2p^2 - 1$, the growth of sequential squares outpaces the linear growth of your formula. Therefore you should add something to your code to start counting by squares once you reach $k = 2p^2 - 2$. The value of $n$ to start iterating squares would then be $n = 2p^2 - 1$. This should be about as fast as you can get (assuming $k$ exists), other than iterating through odd values of $k$ and $n$.
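A sketch of the square-iteration idea in Python (the function name and the search cutoff are mine, and I take $k \geq 1$ as the target; note $13 + 4kp^2$ is always odd, so any square it equals is automatically an odd square):

```python
from math import isqrt

def smallest_k(p, n_limit=10**5):
    """Smallest k >= 1 with 13 + 4*k*p**2 a perfect square, found by
    iterating over candidate odd squares; None if none below the cutoff."""
    step = 4 * p * p
    n = isqrt(13 + step)            # k >= 1 forces n*n >= 13 + 4*p^2
    if n * n < 13 + step:
        n += 1
    if n % 2 == 0:                  # 13 + 4*k*p^2 is odd, so n must be odd
        n += 1
    while n <= n_limit:
        if (n * n - 13) % step == 0:
            return (n * n - 13) // step
        n += 2
    return None

assert smallest_k(1) == 3     # 13 + 4*3 = 25 = 5^2
assert smallest_k(3) == 1     # 13 + 36*1 = 49 = 7^2
assert smallest_k(2) is None  # odd n^2 is 1 or 9 mod 16, never 13
```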
Maximization of Utility Function
petrol and auto repairs cost 0.5 (not 50). This means $U=2(M−0.5d)+5d−d^2−2h$ and maximization w.r.t "$d$" gives $D1=2$. The formula for $D2$ is $U=2(M−0.5d)+5d−d^2−2d$, where "$h$" is replaced by "$d$". Maximization gives $D2=1$. Both formulas assume the budget constraint $0.5d+c=M$.
How to show: $f$ is surjective $\iff $ for every generator $G\subset X$ of $X$, $f(G)\subset Y$ is a generator of $Y$
I guess that with “generator” you mean “generating set” (or “spanning set”). You're correct in saying that (i) is trivially equivalent to (ii), which is just a rephrasing. Suppose (iii). In particular, $f(X)$ is a generating set for $Y$, because $X$ is a generating set for $X$. Therefore $f$ is surjective. Suppose $f$ is surjective and that $G$ is a generating set for $X$. We need to show that $f(G)$ is a generating set for $Y$. Let $y\in Y$; then $y=f(x)$ for some $x\in X$. Since $G$ is a generating set for $X$, we can write $$ x=\alpha_1x_1+\alpha_2x_2+\dots+\alpha_kx_k $$ for $x_1,x_2,\dots,x_k\in G$ and scalars $\alpha_1,\alpha_2,\dots,\alpha_k\in F$. Then $$ y=f(x)=\alpha_1f(x_1)+\alpha_2f(x_2)+\dots+\alpha_kf(x_k) $$ belongs to the subspace generated by $f(G)$ and therefore this subspace is $Y$.
Linear functional on $M_n(\mathbb{C})$
The bilinear form $\varphi : \mathcal{M}_n(\mathbb{C}) \times \mathcal{M}_n(\mathbb{C}) \rightarrow \mathbb{C}$ defined by $$\varphi(A,B)=\mathrm{Tr}(AB)$$ is non-degenerate, hence it induces a canonical isomorphism $ \mathcal{M}_n(\mathbb{C}) \rightarrow \left( \mathcal{M}_n(\mathbb{C})\right)^*$.
How would the graph of $y=x^{x^{x^{x^\cdots}}}$ appear?
Well, if $y=x^{x^{x^{\cdots}}}$ converges, then $y=x^y$. Plotting the solutions of $y=x^y$ (for instance with Wolfram|Alpha) shows how the graph appears.
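To see the graph numerically, one can iterate the tower directly (a sketch; the classical fact, not proved here, is that the iteration converges exactly for $x\in[e^{-e},\,e^{1/e}]$):

```python
import math

def tower(x, iterations=500):
    """Iterate y <- x**y starting from y = x; converges for x in [e**-e, e**(1/e)]."""
    y = x
    for _ in range(iterations):
        y = x ** y
    return y

# classical check: the tower at x = sqrt(2) converges to 2, since sqrt(2)**2 = 2
assert abs(tower(math.sqrt(2)) - 2) < 1e-9
assert abs(tower(1.0) - 1.0) < 1e-12
```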
Find the matrix for $T$ with respect to the standard bases $B = \{1,x,x^2\}$ for $P_2$.
Note that $$\begin{bmatrix} 1 & -1 & 1\\ 0 & 1 & -2\\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} a_0 - a_1 + a_2 \\ a_1 - 2a_2 \\ a_2 \end{bmatrix}.$$ At the same time, \begin{align} a_0 + a_1(x-1) + a_2(x-1)^2 &= a_0 + a_1x - a_1 + a_2x^2 - 2a_2x + a_2 \\ &= (a_0 - a_1 + a_2) + (a_1 - 2a_2)x + a_2x^2. \end{align} I hope it is more clear this way.
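A quick NumPy check that the matrix reproduces the expanded coefficients (the sample values $(a_0,a_1,a_2)=(1,2,3)$ are mine):

```python
import numpy as np

M = np.array([[1, -1,  1],
              [0,  1, -2],
              [0,  0,  1]])

# coefficients of a0 + a1*(x-1) + a2*(x-1)^2 with (a0, a1, a2) = (1, 2, 3):
# 1 + 2(x-1) + 3(x-1)^2 = 2 - 4x + 3x^2, i.e. coefficient vector (2, -4, 3)
a = np.array([1, 2, 3])
assert np.array_equal(M @ a, np.array([2, -4, 3]))
```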
Probability and independence (joint PMF)
$X$ and $Y-X$ are independent (discrete) random variables if and only if for each pair of real numbers $(u,v)$, $P\{X=u, Y-X=v\}$ equals $P\{X=u\}P\{Y-X = v\}$ Note that $$P\{X=u, Y-X=v\} = P\{X=u, Y=u+v\} = p_{X,Y}(u,u+v)$$ can be read off from the joint probability mass function of $X$ and $Y$ while $$P\{X=u\} = \sum_{t} p_{X,Y}(u,t)$$ requires just a little more work in computing the sum, and $$P\{Y-X=v\} = \sum_t P\{Y=t, X=t-v\} = \sum_t p_{X,Y}(t-v,t)$$ is yet another sum you need to find. All this for just $(u,v)$. Then repeat for other $(u,v)$. Note that if $u$ is not a value that $X$ takes on, that is, $P\{X=u\}=0$, then $P\{X=u, Y-X=v\} = 0$ also (think why!) and so $$P\{X=u, Y-X=v\} = 0 = P\{X=u\}P\{Y-X = v\}$$ and you can skip all the above calculations.
Why is the set of all Real Upper Triangular Square matrices not a vector space?
I guess what they mean is that you need to specify that the matrices must all be the same size: you can't have a vector space containing matrices of size $p$ and matrices of size $n$ together. But the question is quite unclear.
Basics of group theory. Finding group isomorphism
Your $(\{1,2,4\},\,\cdot \bmod 7)$ example is correct, no idea what the question's on about. Maybe a typo from $\langle 2 \rangle$ or $\langle 4 \rangle$?
Why would you choose Simplex over Lagrange/KKT multipliers methods?
Let me restrict myself to linear problems, because for nonlinear problems you cannot choose the simplex method. Although the simplex method has exponential complexity in the worst case, in practice it is still quite fast on most real world problems. That is mostly due to the spectacular improvements made by CPLEX and Gurobi in their simplex implementation. The paper A Brief History of Linear and Mixed-Integer Programming Computation by Robert Bixby (the last two letters in Gurobi) provides a good overview of the historic developments. Solving $\vec{\nabla \mathcal{L}} = \vec{0}$ only works if you have equality constraints and if the variables are not restricted in sign. To solve a linear optimization problem in standard form: $$\min\{c^Tx : Ax \geq b, x\geq 0\},$$ what you need to solve are the KKT conditions: $$Ax \geq b$$ $$A^T y \leq c$$ $$y^T(Ax-b)=0$$ $$x\geq 0, y\geq 0.$$ In other words, you need to find a pair $(x,y)$ where $x$ is feasible to the primal problem, $y$ is feasible for the dual problem, and the complementary slackness (CS) condition $y^T(Ax-b)=0$ is satisfied. That is by no means easy! What interior point methods do is replace the CS condition with $y^T(Ax-b)=\mu$ and solve the problem iteratively while gradually letting $\mu \downarrow 0$. Whether that has an advantage compared to the simplex method depends on the structure of $A$. Interior point methods are fast when $AA^T$ is sparse, and the rows/columns can be permuted in a way that its Cholesky factor is also sparse. In practice it is also important to detect infeasible or unbounded problems. To do that, interior point methods often solve the homogeneous model. The paper The Mosek Interior Point Optimizer for Linear Programming: An Implementation of the Homogeneous Algorithm. by Andersen & Andersen shows how solving the homogenous model reveals ill posed problems. The running time of an interior point method is very predictable, but that of the simplex method is not. 
Therefore, if you run CPLEX or Gurobi with their default settings, they actually run the simplex method and the interior point method in parallel, and terminate as soon as one method finds the optimal solution.
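For a concrete comparison on a toy problem, SciPy's `linprog` exposes both a dual simplex and an interior-point implementation via HiGHS (illustrative only; this is not CPLEX/Gurobi, and the LP below is mine):

```python
from scipy.optimize import linprog

# min x + y  s.t.  x + 2y >= 4,  3x + y >= 6,  x, y >= 0
# (linprog expects <= constraints, so negate both sides)
c = [1, 1]
A_ub = [[-1, -2], [-3, -1]]
b_ub = [-4, -6]

simplex = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs-ds")    # dual simplex
interior = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs-ipm")  # interior point

# both reach the optimal vertex (1.6, 1.2) with objective 2.8
assert simplex.success and interior.success
assert abs(simplex.fun - 2.8) < 1e-6
assert abs(interior.fun - 2.8) < 1e-6
```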
Prove that if $(a+3)(b-1)=n-3$, where $n\in\mathbb N$ and $a$, $b$ are different factors of $n$, then $3n=q^2$, where $q\in\mathbb N$.
From the given condition $$n = (a+3)(b-1)+3 =ab+3b-a, $$ hence $ a=b(a+3)-n$ is a multiple of $b$ (since $b\mid n$) and $3b=n-a(b-1)$ is a multiple of $a$ (since $a\mid n$). Writing $a=mb$, the condition $a\mid 3b$ forces $m\mid 3$, which allows only $a=b$ (excluded by hypothesis) or $a=3b$; therefore $$3n=3(ab+3b-a)=3(3b^2+3b-3b)=(3b)^2.$$ In particular, you cannot conclude that $2=3^{2k+1}$ for some $k$: just note that $n=12$, $b=2$, $a=6$ is an example.
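A brute-force confirmation over small $n$ (the search bound is arbitrary; it checks every divisor pair against the claim):

```python
from math import isqrt

def is_square(m):
    return m >= 0 and isqrt(m) ** 2 == m

# whenever a != b are (positive) divisors of n with (a+3)(b-1) = n-3,
# 3n should be a perfect square
for n in range(1, 2000):
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    for a in divisors:
        for b in divisors:
            if a != b and (a + 3) * (b - 1) == n - 3:
                assert is_square(3 * n), (n, a, b)

# the example from the answer: n = 12, a = 6, b = 2, and 3*12 = 36 = 6^2
assert (6 + 3) * (2 - 1) == 12 - 3 and is_square(3 * 12)
```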
Is it correct to say that all geometric series are power series centered at $0$?
Every geometric series $$a+ar+ar^2+\dots$$ is simply the power series representation, centred at $r=0$, of the function $$f(r)=\frac{a}{1-r},$$ to which it converges for $|r|<1$.
Positive integers of the form $(a+b+c)(\frac{1}{a}+\frac{1}{b}+\frac{1}{c})$
$$ a \leq b \leq c \leq 1000, \; \; \; \gcd(a,b,c) = 1 $$

Here LHS $=(a+b+c)(ab+bc+ca)$, RHS $=abc$, and the attained value is LHS/RHS $=(a+b+c)\left(\frac1a+\frac1b+\frac1c\right)$:

ratio 9: (1, 1, 1), LHS 9, RHS 1
ratio 10: (1, 1, 2), LHS 20, RHS 2
ratio 10: (1, 2, 2), LHS 40, RHS 4
ratio 11: (1, 2, 3), LHS 66, RHS 6
ratio 11: (2, 3, 6), LHS 396, RHS 36
ratio 14: (2, 3, 10), LHS 840, RHS 60
ratio 14: (3, 10, 15), LHS 6300, RHS 450
ratio 15: (1, 2, 6), LHS 180, RHS 12
ratio 15: (1, 3, 6), LHS 270, RHS 18
ratio 18: (2, 10, 15), LHS 5400, RHS 300
ratio 18: (2, 3, 15), LHS 1620, RHS 90
ratio 26: (1, 6, 14), LHS 2184, RHS 84
ratio 26: (3, 7, 42), LHS 22932, RHS 882
ratio 30: (10, 77, 165), LHS 3811500, RHS 127050
ratio 30: (14, 30, 231), LHS 2910600, RHS 97020
ratio 34: (1, 6, 21), LHS 4284, RHS 126
ratio 34: (2, 7, 42), LHS 19992, RHS 588
ratio 35: (3, 10, 65), LHS 68250, RHS 1950
ratio 35: (6, 39, 130), LHS 1064700, RHS 30420
ratio 55: (1, 14, 35), LHS 26950, RHS 490
ratio 55: (2, 5, 70), LHS 38500, RHS 700
ratio 59: (2, 15, 85), LHS 150450, RHS 2550
ratio 59: (6, 34, 255), LHS 3069180, RHS 52020
ratio 63: (5, 119, 170), LHS 6372450, RHS 101150
ratio 63: (7, 10, 238), LHS 1049580, RHS 16660
ratio 74: (11, 135, 594), LHS 65274660, RHS 882090
ratio 74: (5, 22, 270), LHS 2197800, RHS 29700
ratio 105: (1, 21, 77), LHS 169785, RHS 1617
ratio 105: (3, 11, 231), LHS 800415, RHS 7623
ratio 126: (3, 114, 247), LHS 10643724, RHS 84474
ratio 126: (6, 13, 494), LHS 4855032, RHS 38532
ratio 131: (1, 35, 90), LHS 412650, RHS 3150
ratio 131: (7, 18, 630), LHS 10398780, RHS 79380
ratio 179: (3, 65, 442), LHS 15428010, RHS 86190
ratio 190: (2, 21, 322), LHS 2569560, RHS 13524
ratio 190: (3, 46, 483), LHS 12664260, RHS 66654
ratio 294: (5, 570, 874), LHS 732324600, RHS 2490900
ratio 297: (5, 513, 945), LHS 719905725, RHS 2423925
ratio 298: (2, 85, 493), LHS 24975380, RHS 83810
ratio 319: (2, 150, 475), LHS 45457500, RHS 142500
ratio 323: (2, 75, 550), LHS 26647500, RHS 82500
ratio 323: (3, 22, 825), LHS 17587350, RHS 54450
ratio 326: (2, 55, 570), LHS 20440200, RHS 62700
ratio 330: (1, 90, 234), LHS 6949800, RHS 21060
ratio 370: (1, 77, 286), LHS 8148140, RHS 22022
ratio 851: (1, 234, 611), LHS 121670874, RHS 142974
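Such a search can be reproduced with a short brute-force sketch (`integer_ratios` is my name, and the bound is kept small here for speed):

```python
from math import gcd

def integer_ratios(limit):
    """Triples a <= b <= c <= limit with gcd(a,b,c) = 1 for which
    (a+b+c)(1/a+1/b+1/c) is an integer, grouped by that integer."""
    found = {}
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            for c in range(b, limit + 1):
                if gcd(gcd(a, b), c) != 1:
                    continue
                lhs = (a + b + c) * (a * b + b * c + c * a)
                rhs = a * b * c
                if lhs % rhs == 0:
                    found.setdefault(lhs // rhs, []).append((a, b, c))
    return found

ratios = integer_ratios(20)
assert (1, 2, 3) in ratios[11]
assert (2, 3, 10) in ratios[14]
assert min(ratios) == 9   # AM-HM: the product is always >= 9, equality at (1, 1, 1)
```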
Convention for definition of $n$-connectedness of spaces vs maps
Unravelling the definitions you give, we find a much cleaner statement. Firstly we need to add to your definition the requirement that $f$ is a basepoint-preserving map that induces an isomorphism on path components. Then a space is $n$-connected if and only if $\pi_iX=0$ for each $0\leq i\leq n$. On the other hand, a map $f:X\rightarrow Y$ is $n$-connected, or an $n$-equivalence, if the induced maps $f_*:\pi_iX\xrightarrow{\cong} \pi_iY$ are isomorphisms for each $0\leq i< n$, and $f_*:\pi_nX\rightarrow \pi_nY$ is onto. To see the equivalence, recall that giving an extension of $\alpha:S^{k-1}\rightarrow X$ to a map $\tilde\alpha:D^{k}\rightarrow X$ is equivalent to giving a null-homotopy of $\alpha$. Therefore the injectivity conditions (if $\alpha\in\pi_{k-1}X$ and $f_*\alpha\simeq 0$, then $\alpha\simeq \ast$) follow immediately. The surjectivity conditions follow by representing $\beta\in\pi_{k+1}Y$ by a relative class $[(D^{k+1},S^k),(Y,\ast)]$ and using the fact that $f(\ast)=\ast$. In particular, by this definition, for a space $X$, the map $X\rightarrow \ast$ is $(n+1)$-connected if and only if $$\pi_kX=0,\qquad 0\leq k\leq n.$$ In particular $X$ is $n$-connected. So we find agreement. Now, to your question. Consider the inclusion of the basepoint $\ast\rightarrow X$. For this map to be $n$-connected we require $$\pi_kX=0,\qquad 0\leq k\leq n.$$ In particular $X$ is also $n$-connected. So, to answer your question, that is the reason we define things like this: namely, because we consider the map $\ast\rightarrow X$. Taking this map as motivation for the definition makes better sense than considering $X\rightarrow\ast$, since for the former map it coincides with the more concrete definition you give, that every map $S^{k-1}\rightarrow X$ is compressible in $X$ to a point.
Does classification of 1-manifolds with boundary give induced orientation of image of closed interval under a smooth immersion?
You are right that the classification does not imply that $C$ and $[a, b]$ are diffeomorphic. It may very well be that $C$ is diffeomorphic to a circle, and that $c$ winds the interval $[a, b]$ 3.5 times around it. (The minute hand of the clock was giving you a hint.) But assuming that $C$ has nonempty boundary, it follows indeed that the immersion $c$ is injective (i.e. is an embedding), as shown in a linked question: Is a smooth immersion $c: [a,b] \to M$ injective if its image is a 1-manifold with non-empty boundary? In particular, the boundary of $C$ has cardinality $2$ and equals $\{ c(a), c(b) \}$.
$H$ thin in $X$ implies that $X\setminus H$ contains a generic subset
The standard terminology in English: rare is called "nowhere dense", thin is called "meagre" ("meager" in the US) or first category, and generic is the same in both. So if $X$ is Baire (which means that a countable intersection of open and dense sets is dense) and $H$ is meagre, then $X\setminus H$ contains a generic subset. This is quite clear if you realise that $N$ being nowhere dense is equivalent to $X\setminus N$ containing a dense and open subset (namely $X\setminus \overline{N}$). So if $H$ is meagre, say $H \subseteq \bigcup_n N_n$ where all $N_n$ are nowhere dense, then by De Morgan we have: $$\bigcap_n (X\setminus N_n) \subseteq X\setminus H$$ and all $X\setminus N_n$ contain the dense and open set $G_n = X\setminus \overline{N_n}$, so $\bigcap_n (X\setminus N_n)$ is generic (because it's dense, as $X$ is Baire). So $X\setminus H$ indeed contains a generic set. Finally, $$\mathbb{R}\setminus{\Bbb Q} = \bigcap_{q \in \Bbb Q} \mathbb{R}\setminus \{q\},$$ showing the irrationals are quite clearly a generic set: the complement of a point is open and dense in the reals, there are only countably many rationals, and the irrationals are dense (which we could already conclude from the reals being complete and hence a Baire space).
Approximation using Legendre polynomials
Is it enough to use Legendre polynomials as a basis within the Fit[] function? Fit[data, {LegendreP[1,x], LegendreP[2,x], LegendreP[3,x]}, x] I would be glad about an explanation of how it actually works. Thanks, rainer
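Outside Mathematica, the same least-squares fit in a Legendre basis can be sketched with NumPy (`legfit`/`legval` are NumPy's; the synthetic data and coefficients are mine):

```python
import numpy as np
from numpy.polynomial import legendre

# synthetic data from a known Legendre expansion 1*P0 + 2*P1 + 3*P2
xs = np.linspace(-1, 1, 50)
ys = legendre.legval(xs, [1, 2, 3])

# least-squares fit in the Legendre basis recovers the coefficients exactly
coef = legendre.legfit(xs, ys, deg=2)
assert np.allclose(coef, [1, 2, 3])
```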
Probability of two persons competing in a race not winning the race.
The probability that $B$ wins given that $A$ wins would be $0$ (assuming a single champion), while the unconditional probability that $B$ wins is non-zero. Hence the event that $A$ wins and the event that $B$ wins are not independent.
Why is this an incomplete residue system?
It's much less brute force to calculate $n(n+1)/2\pmod5$ for $n=1$ to $5$ and get $1, 3, 1, 0, 0$. That shows that $n(n+1)/2\not\equiv2\bmod5$, so $n(n+1)/2\not\equiv2\bmod2020=5\times404$.
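The residue computation above is easy to confirm by machine; the sketch below (just a sanity check, not part of the argument) lists $n(n+1)/2 \bmod 5$ for $n=1$ to $5$ and then brute-forces a full period mod $2020$:

```python
# Check that n(n+1)/2 mod 5 cycles through 1, 3, 1, 0, 0 and never hits 2,
# so no triangular number is ≡ 2 (mod 2020), since 2020 = 5 * 404.
residues = [n * (n + 1) // 2 % 5 for n in range(1, 6)]
print(residues)  # [1, 3, 1, 0, 0]

# Brute-force confirmation over a full period mod 2020
# (the period of triangular numbers mod m divides 2m):
assert all(n * (n + 1) // 2 % 2020 != 2 for n in range(1, 4041))
```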
Clarification of a concept in Permutation
It is because the groups are identical in statement $1$ but distinct in statement $2$ that the difference arises. The difference can be seen in the following example: there is exactly $1$ way to split a group of three people into three identical groups of size $1$, namely $\frac{3!}{1!\,1!\,1!\,3!}$. However, if each group is distinct there are $6$ ways to split them: $\frac{3!}{1!\,1!\,1!}$
Inverse ideal of $(6,2+\sqrt{10})$ in $\mathbb{Q}(\sqrt{10})$
The inverse ideal $I^{-1}$ consists of all the elements $z$ of the number field $K$ with the property $zI\subseteq \mathcal{O}_K$. When $I$ is contained in $\mathcal{O}_K$ we trivially have $\mathcal{O}_K\subseteq I^{-1}$, and the question is about finding all the non-integral elements of $K$ with the property $zI\subseteq \mathcal{O}_K$. The case when $I$ is a principal ideal $a\mathcal{O}_K$ is easy. In this case $z\in I^{-1}$ if and only if $za\in\mathcal{O}_K$. It follows that $I^{-1}$ is the $\mathcal{O}_K$-module generated by $1/a$. Because $(2+\sqrt{10})(2-\sqrt{10})=-6$ it follows that your ideal $I$ is principal, generated by $2+\sqrt{10}$. Consequently $I^{-1}$ is the $\Bbb{Z}[\sqrt{10}]$-module generated by $$\alpha=1/(2+\sqrt{10})=(\sqrt{10}-2)/6.$$
canonical coset representative
Canonical is in the eye of the beholder. There's usually no group-theoretic reason for singling out any of the representatives, but you can apply other criteria. For instance, if you have a well-ordering on the group (which you can always define if the group is finite), you can choose the least element from each coset. The most interesting case is when you have no explicit formula at all to make the choice, for instance if you want to choose representatives for $\mathbb R/\mathbb Q$. Then you need the axiom of choice to even claim that there is a set of representatives, and you can't explicitly construct one.
Compactness and dimensionality.
It is not quite clear what you are asking, so I will answer your questions the way I understood them. "Is there more to this intuition? Can we formalize it?" Sort of. The usual notion of compactness deals with finite open covers and their subcovers. One of the abstract notions of dimension (applied to all topological spaces) is due to Čech (based on ideas due to Lebesgue); it is called the covering dimension. It is formulated in terms of refinements of open covers. For $E^n$, covering dimension gives you the expected number, namely $n$. (Which is not at all obvious.) Thus, if you use the notion of covering dimension, then indeed, you get some formalization of your intuition of relation between compactness and dimensionality (both are defined in terms of certain procedures related to open covers, although the procedures are quite different if you look at them closely). "Can we build a notion of dimensionality directly from compactness, in a way which is somehow consistent with the usual one, in the case of vector spaces?" It depends on what class of topological spaces you want to "cover". If you are satisfied with manifolds (defined as spaces locally homeomorphic to a certain, say, Banach, vector space) then yes: You define dimension of this manifold to be the dimension of the Banach space. This dimension will be a topological invariant and a manifold will be finite-dimensional if and only if it is locally compact. If you are satisfied with this class of topological spaces then you are done. Originally, going back to the 19th century, topology was developed primarily in the context of manifolds. However, eventually, people realized that this class is way too narrow and it took quite a bit of effort to develop notions of dimension for general topological spaces. In this degree of generality (even if you restrict to, say, metric spaces) finite dimensionality has nothing to do with compactness and your "vector space intuition" breaks down completely.
The best thing to do is to abandon it and deal with dimensionality separately from compactness. "Is there a more general relation between compactness and dimensionality, with compactness being a 'signature' for a notion of finite dimensionality?" No. A good example is the Hilbert cube: It is compact but infinite dimensional in any reasonable sense. One last thing: Different parts of mathematics have their own notions of dimension. For instance, dimension in the sense of algebraic geometry is (mostly) different from the one used by topologists. If you study fractals then you get yet other notions of dimension, which are geometric rather than topological invariants. Why this is so is another question: Everybody wants to have a single numerical invariant of their favorite class of spaces.
Form an equation whose roots are $(a-b)^2,(b-c)^2,(c-a)^2.$
You already know the value of $(a-b)^2+(b-c)^2+(c-a)^2$. Use the fact that\begin{multline}(a-b)^2(b-c)^2(c-a)^2=-4 a b c (a+b+c)^3+(a b+c b+a c)^2 (a+b+c)^2+\\+18 a b c (a b+c b+a c) (a+b+c)-4 (a b+c b+a c)^3-27 a^2 b^2 c^2\end{multline}and that\begin{multline}(a-b)^2(b-c)^2+(b-c)^2(c-a)^2+(c-a)^2(a-b)^2=(a+b+c)^4+\\-6 (a b+c b+a c) (a+b+c)^2+9 (a b+c b+a c)^2\end{multline}to get the other coefficients. This is easy, since you know $a+b+c$, $ab+bc+ca$ and $abc$.
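Both symmetric-function identities above can be spot-checked numerically; the following sketch (just a sanity check of the algebra, not part of the derivation) verifies them at a few integer triples:

```python
# Spot-check the two identities for the elementary symmetric functions
# e1 = a+b+c, e2 = ab+bc+ca, e3 = abc at several integer triples.
def e(a, b, c):
    return a + b + c, a*b + b*c + c*a, a*b*c

for (a, b, c) in [(1, 2, 3), (2, -1, 5), (0, 4, 7), (-3, -3, 1)]:
    e1, e2, e3 = e(a, b, c)
    disc = (a-b)**2 * (b-c)**2 * (c-a)**2
    assert disc == -4*e3*e1**3 + e2**2*e1**2 + 18*e3*e2*e1 - 4*e2**3 - 27*e3**2
    pair = (a-b)**2*(b-c)**2 + (b-c)**2*(c-a)**2 + (c-a)**2*(a-b)**2
    assert pair == e1**4 - 6*e2*e1**2 + 9*e2**2
print("identities hold at all test points")
```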
Number of lists of n sorted elements of m values
Say $A = \{a_1 = 1, a_2 = 2, ..., a_m = m\}$, $m$ distinct elements in sorted order. You are making a sorted list of $n$ elements with values from $A$. This is equivalent to arranging $(m+n)$ elements in a row: first place $a_1$ to $a_m$ in sorted order in $m$ of the positions, and then there is only one way to fill the remaining $n$ positions with our sorted list, because each free position must take the value of the preceding element of $A$. So, for example, if $k$ positions are free after $a_i$, all of them will have value $a_i$. Since every list element must have a preceding element of $A$, we fix the first position for the first element of $A \, (a_1)$ and choose the remaining $(m-1)$ places for $A$ from the $(m+n-1)$ remaining places. So the number of sorted lists with $n$ elements and values between $a_1$ and $a_m$ is ${m+n-1} \choose {m-1}$. Also, you can apply Vandermonde's identity to your result. $\sum_{k=1}^m{\binom{m}{k}\binom{n-1}{k-1}} = \sum_{i=0}^{m-1}{\binom{m}{i+1}\binom{n-1}{i}} = \sum_{i=0}^{m-1}{\binom{n-1}{i} \binom{m}{(m-1)-i}} = {{m+n-1} \choose {m-1}}$
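The count $\binom{m+n-1}{m-1}$ can be checked against a direct enumeration; this small sketch (a sanity check, not part of the argument) compares it with a brute-force count of nondecreasing lists:

```python
# Brute-force check of the count (m+n-1 choose m-1) against direct enumeration
# of nondecreasing lists of length n with values in {1, ..., m}.
from itertools import combinations_with_replacement
from math import comb

for m in range(1, 6):
    for n in range(1, 6):
        direct = sum(1 for _ in combinations_with_replacement(range(1, m + 1), n))
        assert direct == comb(m + n - 1, m - 1)
print("counts agree")
```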
Find all values of $a\in (0,\infty)$ such that $a^x=2x+1$ has only one real solution.
Hint: Define $$f(x)=\frac{\ln(2x+1)}{\ln(a)}-x$$ and use calculus.
Is there an infinite sequence AB, BC, CD, DX, ..., YZ
Yes, it is possible. Let us define an order relation $\prec$ on $\mathbb{Z}\setminus\{0\}$ by: $$x \prec y \iff -\frac{1}{x} < -\frac{1}{y}$$ We then have: $$(1,2) \prec (2,3) \prec (3,4) \prec \cdots \prec (-4,-3) \prec (-3, -2) \prec (-2,-1)$$ with $1$ and $-1$ serving the roles of $A$ and $Z$.
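The order $\prec$ is easy to realize in code: sorting the nonzero integers by the key $-1/x$ puts the positives first in increasing order, followed by the negatives. A minimal illustration:

```python
# Sorting the nonzero integers by the key -1/x realizes the order ≺:
# 1 ≺ 2 ≺ 3 ≺ ... ≺ -4 ≺ -3 ≺ -2 ≺ -1.
xs = [x for x in range(-4, 5) if x != 0]
print(sorted(xs, key=lambda x: -1 / x))  # [1, 2, 3, 4, -4, -3, -2, -1]
```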
Find consumer demand as a function of time, given the demand equation and price
Hint: $Q$ is a function of $p$, and $p$ is a function of $t$. What do you get when you insert the expression $p=0.04t^2+0.2t+12$ into $Q$?
Why the normal distribution?
The derivations of the normal distribution from the central limit theorem don't start from or assume the form of the final distribution; instead, they derive it. See Proof of CLT.
What is an elegant way of defining complement of an element in this ring?
I don't think I understand the construction of your ring, but it is well known that every boolean ring can be turned into a boolean algebra. To (hopefully) answer your question, we can set $$\overline{x} = x+1$$ Then $$\overline{x}y + x\overline{y} = (x+1)y + x(y+1) = xy + y + xy + x = x+y$$ (since $xy + xy = 0$ and $y+x = x+y$) I hope this helps ^_^
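For the smallest boolean ring, GF(2), the identity can be checked exhaustively; this is only a sanity check of the computation above (the algebraic derivation covers any boolean ring):

```python
# Verify x̄·y + x·ȳ = x + y in GF(2), where the complement is x̄ = x + 1
# and all arithmetic is taken mod 2.
for x in (0, 1):
    for y in (0, 1):
        lhs = ((x + 1) * y + x * (y + 1)) % 2
        assert lhs == (x + y) % 2
print("identity holds in GF(2)")
```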
Sub Sigma-Algebra and measurability
Yes, this is correct. Let $(\Omega, \beta, \mathbb P)$ be a probability space and $\beta_1\subset\beta$ a $\sigma$-algebra. If $X:\Omega\to\mathbb R$ is $\beta_1$-measurable, then for each Borel set $B$, $X^{-1}(B)\in\beta_1\subset\beta$, so $X$ is $\beta$ measurable. Clearly the converse is not true (just take any non-degenerate random variable and $\beta_1=\{\varnothing,\Omega\}$).
Applying Ascoli's theorem to the space of continuous cumulative distribution functions
No. First, Arzelà's theorem requires compactness; localizing as you suggest yields uniform convergence on compact subsets, which is not really what you want. More to the point, the local result fails too, because equicontinuity fails under these assumptions. Consider $f_n(x)=x^n$ on $[0,1]$, extended to be $0$ on the left and $1$ on the right.
Confusion about finding eigenvectors from matrix multiplication
Note that the matrix is very "symmetric", not only in the linear-algebra sense of the term. If you look directly at $A$, you can notice that, calling $e$ the vector of all $1$'s, we have $A e = A e_1 + A e_2+\cdots+ A e_6$; in general, given $e_i$ the $i$-th vector of the standard basis, $Ae_i = A^i$, the $i$-th column of the matrix. So $Ae$ corresponds to summing all the columns of the matrix, which by "symmetry" equals $14$ times the vector $e$, since each row sums to $14$. This is a useful standard trick for spotting eigenvectors (i.e. summing columns in an appropriate way, which corresponds to finding non-trivial linear combinations). From here simply note that $B = A-4I$, so $B e = Ae - 4e = 10 e$.
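The row-sum observation can be illustrated with a made-up matrix (the actual $A$ from the question is not reproduced here; any $6\times 6$ matrix with constant row sums $14$ behaves the same way):

```python
# Illustration with a hypothetical 6x6 matrix whose rows each sum to 14:
# the all-ones vector e is then an eigenvector of A with eigenvalue 14,
# and of B = A - 4I with eigenvalue 10.
import random

random.seed(0)
n = 6
A = []
for _ in range(n):
    row = [random.randint(0, 4) for _ in range(n - 1)]
    row.append(14 - sum(row))  # force the row sum to be 14
    A.append(row)

e = [1] * n
Ae = [sum(A[i][j] * e[j] for j in range(n)) for i in range(n)]
assert Ae == [14] * n          # A e = 14 e
Be = [Ae[i] - 4 * e[i] for i in range(n)]
assert Be == [10] * n          # (A - 4I) e = 10 e
```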
Find all triples $(x,y,z)\in \Bbb{R}$ that satisfy the following conditions:
We can note that $4a^2+1 \ge 4a \ \ \forall a \in R$. Also, since $\frac{4a^2}{4a^2+1} \ge 0$, then we know that $x,y,z \ge 0$ Therefore $y=\frac{4x^2}{4x^2+1} \le \frac{4x^2}{4x}=x$ for nonezero values of $x$. Similarly $z \le y$ and $x \le y$. Therefore $x \le y \le z \le x \implies x=y=z$. Now solving equation $a=\frac{4a^2}{4a^2+1} \implies 4a^2+1=4a \implies a = \frac{1}{2} \implies x=y=z=\frac{1}{2}$. Finally, we assumed that numbers are non-zero, so we should include solution $(0,0,0)$
Strongly differentiable function has inverse satisfying Lipschitz condition
To apply the known theorem, you can use a general trick for compact sets: if you have a result that holds at every point in this set, and if one of the things that this result gives you at each point is a neighbourhood of that point, then you've got an open cover of the compact set, so pass to a finite subcover so that you can focus attention on finitely many points; that might be enough. So in this case, take the collection of all triples $(\mathbf{a},c,r) \in K \times (0,\infty) \times (0,\infty)$ such that for $\mathbf{x}, \mathbf{y} \in U$, $\lVert{f(\mathbf{x})-f(\mathbf{y})}\rVert \geq c\,\lVert{\mathbf{x}-\mathbf{y}}\rVert$ whenever $\lVert{\mathbf{a}-\mathbf{x}}\rVert < r$ and $\lVert{\mathbf{a}-\mathbf{y}}\rVert < r$; each such triple defines an open set $B_{r/2}(\mathbf{a})$. Your theorem guarantees that each $\mathbf{a} \in K$ (indeed each $\mathbf{a} \in U$) shows up in at least one such triple; since $\mathbf{a} \in B_{r/2}(\mathbf{a})$, this means that these open sets form an open cover of $K$. So there is a finite subcover, indexed by a finite collection of triples $(\mathbf{a}_i,c_i,r_i)$ for $i = 1, \ldots, n$. Let $c$ be $\min_i c_i$, and let $r$ be $\min_i (r_i/2)$ (which is why we need a finite subcover). Then if $\lVert{\mathbf{x}-\mathbf{y}}\rVert < r$ for some $\mathbf{x} \in K$ and $\mathbf{y} \in U$, we have $\mathbf{x} \in B_{r_i/2}(\mathbf{a}_i)$ for some $\mathbf{a}_i$ (which is why we need a finite subcover), so $\lVert{\mathbf{a}_i-\mathbf{x}}\rVert < r_i/2 < r_i$ and $$\lVert{\mathbf{a}_i-\mathbf{y}}\rVert \leq \lVert{\mathbf{a}_i-\mathbf{x}}\rVert + \lVert{\mathbf{x}-\mathbf{y}}\rVert < r_i/2 + r \leq r_i/2 + r_i/2 = r_i$$ (which is why I had to divide $r$ by $2$ all the time), so $\lVert{f(\mathbf{x})-f(\mathbf{y})}\rVert \geq c_i\,\lVert{\mathbf{x}-\mathbf{y}}\rVert \geq c\,\lVert{\mathbf{x}-\mathbf{y}}\rVert$, QED.
Shortest path in a graph using adjacency list
Yes. Traverse the graph according to Dijkstra's algorithm and keep track of each current path while you go.
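A minimal sketch of that suggestion (an illustrative implementation, with a made-up example graph): Dijkstra's algorithm over an adjacency list, recording each vertex's predecessor so the shortest path can be rebuilt at the end.

```python
# Dijkstra's algorithm on an adjacency list, tracking predecessors to
# reconstruct the shortest path from src to dst.
import heapq

def shortest_path(adj, src, dst):
    """adj maps vertex -> list of (neighbor, weight); weights are nonnegative."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (d + w, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

adj = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2), ('d', 5)], 'c': [('d', 1)]}
print(shortest_path(adj, 'a', 'd'))  # (['a', 'b', 'c', 'd'], 4)
```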
What is $\sum_{n=1}^{\infty} n\cdot\left(\frac{1.05}{1.1}\right)^n$?
You want to compute $\sum_{n\ge 1}nr^n$, where $|r|<1$. There is a fairly simple formula for this; it can be derived very easily using a little calculus, and with more work using only the formula for the sum of an infinite geometric series. With calculus. Let $f(x)=\sum_{n\ge 1}x^n$; this is the sum of an infinite geometric series with ratio $x$ and first term $x$, so we know that $$f(x)=\sum_{n\ge 1}x^n=\frac{x}{1-x}\;,\tag{1}$$ provided that $|x|<1$. Now differentiate $(1)$: $$f\,'(x)=\sum_{n\ge 1}nx^{n-1}=\frac1{(1-x)^2}\;.$$ That sum is almost what you want, and if we multiply by $x$, we’ll have what you want: $$\sum_{n\ge 1}nx^n=x\sum_{n\ge 1}nx^{n-1}=\frac{x}{(1-x)^2}\;.\tag{3}$$ Substitute $x=\frac{1.05}{1.1}$ into $(3)$, and you’ll get the desired value. Without calculus. Write out $\sum_{n\ge 1}nr^n$ in a two-dimensional array: $$\begin{array}{c} r&+&r^2&+&r^3&+&r^4&+&\dots&=&\frac{r}{1-r}\\ &&r^2&+&r^3&+&r^4&+&\dots&=&\frac{r^2}{1-r}\\ &&&&r^3&+&r^4&+&\dots&=&\frac{r^3}{1-r}\\ &&&&&&r^4&+&\dots&=&\frac{r^4}{1-r}\\ &&&&&&&&\ddots&\vdots&\vdots\\ \\ \hline r&+&2r^2&+&3r^3&+&4r^4&+&\dots&=&S \end{array}$$ Summing the bottom row should result in the same total as summing the righthand column, provided that these series actually converge. Thus, $$\sum_{n\ge 1}nr^n=S=\sum_{n\ge 1}\frac{r^n}{1-r}=\frac1{1-r}\sum_{n\ge 1}r^n\;.$$ Now $\sum_{n\ge 1}r^n$ is again a geometric series, so $$\sum_{n\ge 1}nr^n=\frac1{1-r}\sum_{n\ge 1}r^n=\frac1{1-r}\cdot\frac{r}{1-r}=\frac{r}{(1-r)^2}\;;$$ this is of course the same formula that we got before.
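For the specific value $r=\frac{1.05}{1.1}$, note that $1-r=\frac{1}{22}$ and $r=\frac{21}{22}$, so the formula gives $\frac{21/22}{(1/22)^2}=21\cdot 22=462$. A small numeric check (just a confirmation of the closed form) comparing a truncated partial sum against it:

```python
# Numeric check of sum_{n>=1} n r^n = r / (1 - r)^2 for r = 1.05 / 1.1.
r = 1.05 / 1.1
partial = sum(n * r**n for n in range(1, 2000))
closed = r / (1 - r) ** 2
print(closed)  # ≈ 462
assert abs(partial - closed) < 1e-6
```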
Legendre Polynomial definite integral identity
The Legendre polynomials $P_n(x)$ are solutions of ODE of the form $$\frac{d}{dx}\left[ A(x) \frac{dP_n(x)}{dx} \right] = \lambda_n P_n(x) \quad\text{ with }\quad \begin{cases} A(x) &= 1-x^2,\\ \lambda_n &= -n(n+1) \end{cases}$$ For any two distinct $n, m \ge 0$, notice $$\begin{align} & \frac{d}{dx} \left[ A(x) \left(P_n(x) \frac{dP_m(x)}{dx} - P_m(x) \frac{dP_n(x)}{dx}\right)\right]\\ = & P_n(x) \frac{d}{dx}\left[ A(x) \frac{dP_m(x)}{dx} \right] - P_m(x) \frac{d}{dx}\left[ A(x) \frac{dP_n(x)}{dx} \right]\\ = & (\lambda_m - \lambda_n) P_m(x)P_n(x)\end{align} $$ We have $$\int_{-1}^1 P_m(x)P_n(x) dx = \frac{1}{\lambda_m - \lambda_n} \left[ A(x) \left(P_n(x) \frac{dP_m(x)}{dx} - P_m(x) \frac{dP_n(x)}{dx}\right)\right]_{-1}^1 = 0$$ because $A(x) = 0$ at $x = \pm 1$. Since $P_1(x) = x$, this means for any $n \ne 1$, we have $$\int_{-1}^1 xP_n(x) dx = 0$$
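Since $P_1(x)=x$, the conclusion can be confirmed in exact arithmetic; the following sketch (a sanity check, not part of the proof) builds $P_n$ via Bonnet's recurrence $(n+1)P_{n+1}=(2n+1)xP_n-nP_{n-1}$ and integrates $xP_n$ term by term:

```python
# Check ∫_{-1}^{1} x P_n(x) dx = 0 for n ≠ 1, with exact rational arithmetic.
from fractions import Fraction

def legendre_coeffs(n):
    """Coefficients of P_n, lowest degree first, via Bonnet's recurrence."""
    p0, p1 = [Fraction(1)], [Fraction(0), Fraction(1)]  # P_0 = 1, P_1 = x
    if n == 0:
        return p0
    for k in range(1, n):
        # (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
        nxt = [Fraction(0)] * (k + 2)
        for i, c in enumerate(p1):
            nxt[i + 1] += Fraction(2 * k + 1, k + 1) * c
        for i, c in enumerate(p0):
            nxt[i] -= Fraction(k, k + 1) * c
        p0, p1 = p1, nxt
    return p1

def integral_x_pn(n):
    """Exact value of the integral of x * P_n(x) over [-1, 1]."""
    total = Fraction(0)
    for i, c in enumerate(legendre_coeffs(n)):
        power = i + 1            # multiplying by x raises each degree by one
        if power % 2 == 0:       # odd powers integrate to 0 over [-1, 1]
            total += c * Fraction(2, power + 1)
    return total

assert all(integral_x_pn(n) == 0 for n in (0, 2, 3, 4, 5, 6))
assert integral_x_pn(1) == Fraction(2, 3)
print("x is orthogonal to P_n for every n != 1")
```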
completely polarized polynomial
I think I have the answer: $$ P_l(A_1,\cdots , A_l)=\frac{(-1)^l}{l!}\sum_{j=1}^l \sum_{i_1 < \cdots < i_j} (-1)^j H_l (A_{i_1} + \cdots + A_{i_j}). $$