How to determine the number of solutions of a modular equation
The ring $\mathbb Z_p$ is a field if $p$ is prime. A polynomial of degree $n$ has at most $n$ roots in such a field. So the equation $x^3=1$ has at most $3$ solutions in $\mathbb Z_m$ and at most $3$ solutions in $\mathbb Z_n$. So you correctly assumed that the maximum number of solutions modulo $mn$ is $9$, by the Chinese remainder theorem.
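A brute-force sanity check of this count, with $m=7$ and $n=13$ as my own sample primes (both are $\equiv 1 \pmod 3$, so each admits three cube roots of unity):

```python
# Count solutions of x^3 ≡ 1 modulo m, n, and mn, and check that the
# count modulo mn is the product of the counts modulo m and n (CRT).
def cube_roots_of_unity(modulus):
    return [x for x in range(modulus) if pow(x, 3, modulus) == 1]

m, n = 7, 13  # two primes, each ≡ 1 (mod 3), chosen for illustration
count_m = len(cube_roots_of_unity(m))
count_n = len(cube_roots_of_unity(n))
count_mn = len(cube_roots_of_unity(m * n))

print(count_m, count_n, count_mn)  # 3 3 9
```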
What is this problem asking for?
Reparametrization means changing the way the time is measured. $$\frac d{ds}W(t(s))=\dot W(t(s))t'(s)=A(k-\tfrac MW)t'(s)$$ $$\frac d{ds}M(t(s))=\dot M(t(s))t'(s)=B(k-\tfrac MW)t'(s)$$ so if $t'(s)=W(t(s))$ can be arranged, the resulting system will be linear. So start with the $s$-dependent functions $w(s),m(s),t(s)$ and the system $$w'(s)=A(kw(s)-m(s))$$ $$m'(s)=B(kw(s)-m(s))$$ $$t'(s)=w(s)$$
Cover of a Grassmannian by an open set
One way to think about this is to pass to the frames (so we're looking at a Stiefel variety). Fix a basis $u_{r+1},\dots,u_d$ for $V_W$. The set of frames $v_1,\dots,v_r\in V$ so that $v_1,\dots,v_r,u_{r+1},\dots,u_d$ form a linearly independent set is an open set ($\det\ne 0$) in the space of all $r$-frames. This descends, taking quotients, to an open set in $G(r,d)$.
Prove that f(x) is regulated.
A function $f: [a,b]\to \mathbb{R}$ is regulated if and only if it has a right-hand limit at every $x\in[a,b)$ and a left-hand limit at every $x\in(a,b]$. Hint ($\impliedby$): for every $\epsilon>0$, let $A=\{x\in [a,b]\,|\,\exists \, \text{step function}\, s: [a,b]\to \mathbb{R}\,\text{such that}\, |f(z)-s(z)| <\epsilon\ \forall z\in[a,x]\}$. Try to show $A=[a,b]$. Thomae's function has limit $0$ at every real number (as mentioned here), so it's regulated. But here I also allow step functions on degenerate intervals.
Surface area of the figure of rotation
HINT: A way to compute this kind of surface integral is with the following set-up $$S=\int_a^b 2\pi f(z) \sqrt{1+[f'(z)]^2}\, dz$$ with $$f(z)=\sqrt[3] z$$
Dividing an integral by a variable.
What would be valid is$$\int_u^vdv^\prime=\int_0^T(-4v)dt\implies v-u=-4vT,$$or $$\int_u^v\frac{dv^\prime}{v^\prime}=-4\int_0^Tdt\implies\ln\frac{v}{u}=-4T.$$But in your notes, something has gone wrong due to conflating a free variable $v$ with a bound variable I've denoted $v^\prime$. The latter is a dummy integration variable.
5-tuples of n integers
Let $a_1$ be the number of $1$'s that we use, and $a_2$ the number of $2$'s, and so on. Then our $5$-tuple is completely determined once we know $(a_1,a_2,\dots,a_n)$. So we want to count the number of solutions of $a_1+a_2+\cdots+a_n=5$ in non-negative integers. By Stars and Bars (please see Wikipedia if the term is unfamiliar) the number of solutions is $\dbinom{5+n-1}{n-1}$ or equivalently $\dbinom{n+4}{5}$.
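For small $n$ the formula is easy to confirm by brute force, since nondecreasing $5$-tuples with entries in $\{1,\dots,n\}$ are in bijection with the multisets being counted (a quick sketch):

```python
from itertools import combinations_with_replacement
from math import comb

# Nondecreasing 5-tuples with entries in {1,...,n} correspond exactly to
# the multisets counted by stars and bars, so the brute-force count
# should equal C(n+4, 5).
for n in range(1, 8):
    brute = sum(1 for _ in combinations_with_replacement(range(1, n + 1), 5))
    assert brute == comb(n + 4, 5)
print("stars-and-bars count confirmed for n = 1..7")
```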
Summation to Infinity Query
If you mean $$\sum_{n=1000}^\infty n^3$$ then $$\sum_{n=1000}^\infty n^3>\sum_{n=1000}^\infty 1=+\infty.$$
What's a good way to denote a Dirichlet series which depends on parameters?
One way I've sometimes seen (including several times on this site, although I don't recall which posts offhand and am not sure how to easily find any) is to list the parameters after a semicolon following the variables in the function definition. Your specific example would then be something like $$\zeta(s; a,b,c) \tag{1}\label{eq1A}$$ In What does the semicolon ; mean in a function definition, this answer says: "A semicolon is used to separate variables from parameters. Quite often, the terms variables and parameters are used interchangeably, but with a semicolon the meaning is that we are defining a function of the parameters that returns a function of the variables." A comment to that answer gives a link to Definition of Parameter, which says basically the same thing. However, note this semicolon notation is not always interpreted this way; it can mean different things depending on the context. For example, that linked post's accepted answer states: "The semicolon is used sometimes to optically separate some variable group. So the semicolon is not more than a reading aid." As for the methods used to specify function parameters in published literature, I don't know of any examples offhand which I can point you to.
Find an $(n,\varepsilon)$-spanning set $A\subset [0,1)^{\mathbb{N}}$.
To begin with, notice how the distance between close points changes as we apply $f$. Let $d(f^{i}(x),f^{i}(a))= d_i(x, a)$. If the first $k$ coordinates of $x_1$ and $a_1$ are the same, then $$d_k(x, a) = d(x, a) 10^{-k} + |x_1 - a_1| \sum_{i=1}^{k} 10^{k-i} = d(x, a) 10^{-k} + |x_1 - a_1| \frac{10^k -1}{9}$$ or something very like that. The first term more or less controls itself. The second means you need a bunch of points for each initial part of the decimal expansion. So, start with a $(0, 5 \varepsilon)$ cover, say $C$, then take $10^n \lfloor 2\varepsilon^{-1} \rfloor$ points in $[0, 1)$ to deal with the first entry in the product, call them $P$, and form $P \times C$ to construct the $(n, \varepsilon)$ cover. Does that help?
What is -cos(t) equivalent to in terms of cos(t)
Yes, both are true: $$-\cos (t) = \cos (t \pm 180^\circ)\quad\text{ or, in radians, }\;\;-\cos(t) = \cos(t \pm \pi)$$ Note that $$t + 180^\circ - (t - 180^\circ) = 360^\circ$$ See this link for similar trigonometric "shifts". In the same Wikipedia article, you'll find a handy diagram with ordered pairs $(\cos x, \sin x)$ for an angle $x$ measured in radians, as they appear around the unit circle:
value of $k$ for which Definite Integration has finite value
Your work is correct. For the last step, consider limits. After some simplification you'll find that $$I=\underset{t\to\infty}{\text{lim}}\int_{-t}^0ue^{u (1-k)}\, du=\frac{-1+\underset{t\to\infty}{\text{lim}}e^{(k-1)t}(1-(k-1)t)}{(k-1)^2}$$ Substituting $n=k-1$ in the limit gives $$\underset{t\to \infty }{\text{lim}}e^{n t} (1-n t)$$ This limit converges to $0$ for all $\Re(n)<0$ and otherwise diverges. Therefore, the integral converges for $\Re(k-1)<0$ which is $\Re(k)<1$. The value of the integral is $$I=-\frac{1}{(k-1)^2}$$
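A rough numerical check of the closed form, with $k=1/2$ as my own sample value (the formula then predicts $I=-1/(k-1)^2=-4$):

```python
import math

# Approximate I = ∫_{-∞}^0 u e^{u(1-k)} du by the trapezoidal rule,
# truncating at u = -60 (for k = 1/2 the integrand decays like e^{u/2},
# so the discarded tail is negligible).
k = 0.5
a, b, steps = -60.0, 0.0, 600000
h = (b - a) / steps
f = lambda u: u * math.exp(u * (1 - k))
I = h * (sum(f(a + i * h) for i in range(1, steps)) + (f(a) + f(b)) / 2)
print(I)                  # ≈ -4.0
print(-1 / (k - 1) ** 2)  # exact value from the formula: -4.0
```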
Calculation with function
Just an attempt: $$\phi (f(x)-1)=2x+5=2f(f(x))+1$$ thus $$f(f(x))=x+2$$ $$f(4)=f(f(f(f(0))))$$
Complex variety with Zariski dense set of algebraic points
No. If you take any (say projective, irreducible) variety $X$ defined over $\overline{\mathbb{Q}}$, then its $\overline{\mathbb{Q}}$-points -- i.e., points in which every coordinate in a suitable projective embedding is $\overline{\mathbb{Q}}$-rational -- are Zariski dense in its $\mathbb{C}$-points. You can see this e.g. by noting that the dimension of the closure in each case is the transcendence degree of the function field, and the transcendence degree of a field extension is unchanged by base extension. There are many other ways as well... So it comes down to showing that there are varieties over $\overline{\mathbb{Q}}$ which are not products of one-dimensional varieties. The easiest such example seems to be the projective plane $\mathbb{P}^2$: the fact that $\dim H^2(\mathbb{P}^2(\mathbb{C}),\mathbb{C}) = 1$ means, by the Künneth formula, that it cannot be a product of curves.
The Limit of The Sequence Given By a Recursive Relation
$$\begin{align} \qquad\qquad a_{n+1} & =a_n\left(1-\frac1{2^n}\right) \\ & =a_{n-1}\left(1-\frac1{2^{n-1}}\right)\left(1-\frac1{2^n}\right) \\ & =a_{n-2}\left(1-\frac1{2^{n-2}}\right)\left(1-\frac1{2^{n-1}}\right)\left(1-\frac1{2^n}\right) \\ \end{align}$$ $$\text{etc.}$$ $$a_{n+1}=a_1\prod_{k=1}^n\left(1-\frac1{2^k}\right)$$ $$\lim_{n\to\infty}a_{n+1}=\lim_{n\to\infty}a_1\prod_{k=1}^n\left(1-\frac1{2^k}\right)$$ $$\qquad =a_1\prod_{k=1}^\infty\left(1-\frac1{2^k}\right)$$ $$=a_1\phi(1/2)=Qa_1\approx0.2887880950\times a_1$$ where $\phi(1/2)$ is Euler's phi function (please see link, as this is not the regular "Euler's phi function"). This is also, somewhat famously, the $Q$ that appears in Tree Searching (19). So for $a_1=1$, $$\lim_{n\to\infty}a_n=\phi(1/2)=Q\approx0.2887880950$$
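The product converges geometrically, so its value is easy to confirm numerically (a quick sketch):

```python
# Compute the partial products of ∏_{k≥1} (1 - 2^{-k}); the factors tend
# to 1 geometrically, so 60 terms already give full double precision.
Q = 1.0
for k in range(1, 61):
    Q *= 1 - 2.0 ** (-k)
print(Q)  # ≈ 0.2887880951
```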
Find the value of $|a|+|b|$
Hint: $\;x=y=1 \implies 2|a+b| = 2\,$, and $\,x=1, y=-1 \implies 2|a-b| = 2\,$. But $\,|a+b|=|a-b|\,$ can only hold if either $\,a=0\,$ or $\,b=0\,$, and then the other one must be $\,\pm1\,$.
Find Minimum area of given hexagon. Geometry Question.
At least one among triangles $APB$, $CPD$ or $EPF$ must lie entirely inside the hexagon, so its area is at least $4$. On the other hand, the difference between the hexagon area and $4$ can be made as small as one wishes, as shown for example in the picture below. In this case we have a degenerate hexagon whose area is $4+\epsilon$, and $\epsilon$ can be taken arbitrarily small. So the infimum of the area is $4$. EDIT: As Peter Shor's comment points out, my assumption that one among triangles $APB$, $CPD$ or $EPF$ must lie entirely inside the hexagon was wrong: his construction shows that the area of the hexagon can be vanishingly small. In the picture below I followed his idea, but with a non-degenerate hexagon. Start the construction with an equilateral triangle $A'E'C'$ of side $2\sqrt3\epsilon$, so that its center $P$ is at a distance $\epsilon$ from its sides. Then draw points $A$, $C$ and $E$ on the sides of $A'E'C'$ such that $AA'=CC'=EE'=\epsilon^2$. Finally, draw $B$, $D$ and $F$ on the extensions of the sides of $A'E'C'$ such that $AB=8/\epsilon$, $CD=12/\epsilon$ and $EF=18/\epsilon$. The conditions on the areas of $PAB$, $PCD$ and $PEF$ are satisfied, while the area of hexagon $ABCDEF$ turns out to be $$ \mathrm{area}={19\over3}\sqrt3\epsilon+3\sqrt3\epsilon^2 -{9\over2}\epsilon^3+{3\over4}\sqrt3\epsilon^4. $$ This tends to zero as $\epsilon\to0$, so the infimum of the hexagon area is zero.
Calculating flux through a surface via Stokes' theorem; can't figure out parameterization of vector field
It seems like utter garbage to me. :) Just apply Stokes's Theorem directly. To say $\mathbf A$ is a vector potential for $\mathbf F$ is to say that $\mathbf F = \operatorname{curl} \mathbf A$, so $$\int_S \mathbf F\cdot \mathbf n\,dS = \int_S \operatorname{curl} \mathbf A\cdot\mathbf n\,dS = \int_{\partial S}\mathbf A\cdot d\mathbf r = 25.$$
Combinatory sum of multiplications
I suspect you are asking whether there is a compact way to write symmetric polynomials, and as far as I know, the best you could do with sigma notation would be something like the following. Let $N=\{ 1,...,m\}$ (the first $m$ naturals) index the variables, and let $n$ be how many distinct variables appear in each term. For $T\subset N$, $$S_{N,n}=\underset{|T|=n}{\sum}\ \underset{t\in T}{\prod}a_t$$
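For what it's worth, the same sum is straightforward to spell out in code; the helper name below is my own:

```python
from itertools import combinations
from math import prod

def elementary_symmetric(a, n):
    """Sum over all n-element subsets T of the values of prod_{t in T} a_t."""
    return sum(prod(subset) for subset in combinations(a, n))

# e_2(1,2,3,4) = 1*2 + 1*3 + 1*4 + 2*3 + 2*4 + 3*4 = 35
print(elementary_symmetric([1, 2, 3, 4], 2))  # 35
```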
Proving $\tan(z)$ is analytic.
$\tan(iy) = \frac{\sin(iy)}{\cos(iy)}$ Also, $$\sin(iy) = \frac{1}{2i}(e^{-y}-e^y) = i\sinh(y).$$ $$\cos(iy) = \frac{1}{2}(e^{-y}+e^y) = \cosh(y).$$ So we have: $$\tan(z) = \frac{\tan(x)+i\tanh(y)}{1-i\tan(x)\tanh(y)}.$$ From here, try rationalising the denominator and move forward.
Expectation of the minimum of n independent Exponential Random Variables
Hint: Let $Y=\min(X_1,...,X_n)$. $$\mathbb P\{Y\geq y\}=\mathbb P\{X_1\geq y,...,X_n\geq y\}=\mathbb P\{X_1\geq y\}^n=e^{-n\lambda y},$$ where the middle equality comes from independence. The density function is therefore given by $$f_Y(y)=n\lambda e^{-n\lambda y}\cdot \boldsymbol 1_{[0,\infty )}(y).$$ Can you continue from here?
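Since the survival function works out to $e^{-n\lambda y}$, the minimum is itself exponential with rate $n\lambda$ and so has mean $1/(n\lambda)$; a quick seeded simulation (the parameters $n=3$, $\lambda=2$ are my own choice) agrees:

```python
import random

# Monte Carlo check: the min of n i.i.d. Exp(lambda) variables should be
# Exp(n*lambda), so its sample mean should be close to 1/(n*lambda).
random.seed(0)
n, lam, trials = 3, 2.0, 200_000
mean_min = sum(min(random.expovariate(lam) for _ in range(n))
               for _ in range(trials)) / trials
print(mean_min)        # ≈ 1/6
print(1 / (n * lam))   # 0.1666...
```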
What is the householder reflector's singular value?
Since $H = I- 2ww^*$ is Hermitian, the singular values of $H$ are just the absolute values of the eigenvalues of $H$. Notice that if $u\perp w$, then $$Hu = u - 2ww^*u = u-2\langle u,w\rangle w = u$$ so $u$ is an eigenvector for $H$ with eigenvalue $1$. If you are working in $\mathbb{C}^n$, you can pick $n-1$ linearly independent vectors orthogonal to $w$, so the multiplicity of the eigenvalue $1$ is at least $n-1$. On the other hand, we have $$Hw = w - 2ww^*w = w - 2\|w\|^2w = w-2w = -w$$ so $w$ is an eigenvector for $H$ with eigenvalue $-1$. Therefore the only eigenvalues are $\pm 1$, so every singular value equals $1$.
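These eigenvalue facts can be checked concretely; below, $w=(3/5,4/5)$ is my own sample unit vector, and the check uses plain lists rather than any linear-algebra library:

```python
# H = I - 2 w w^T for a real unit vector w: check H w = -w, H u = u for
# u ⊥ w, and H^T H = I (so H is orthogonal and all singular values are 1).
w = [0.6, 0.8]   # unit vector
u = [-0.8, 0.6]  # orthogonal to w
H = [[(1 if i == j else 0) - 2 * w[i] * w[j] for j in range(2)]
     for i in range(2)]

Hw = [sum(H[i][j] * w[j] for j in range(2)) for i in range(2)]
Hu = [sum(H[i][j] * u[j] for j in range(2)) for i in range(2)]
HtH = [[sum(H[k][i] * H[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]

assert all(abs(Hw[i] + w[i]) < 1e-12 for i in range(2))  # eigenvalue -1
assert all(abs(Hu[i] - u[i]) < 1e-12 for i in range(2))  # eigenvalue +1
assert all(abs(HtH[i][j] - (i == j)) < 1e-12
           for i in range(2) for j in range(2))          # H orthogonal
print("H w = -w, H u = u, H^T H = I verified")
```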
When should I go for RREF or REF?
Both forms are fine for the row space: as soon as you're in row-echelon form, you know that the remaining nonzero rows are linearly independent, so they form a basis for the row space. For the column space, you're done as soon as you find out which columns will contain the pivots (which you also already know from the row-echelon form). Then the corresponding columns of the original matrix form a basis for the column space. It's for the null space that you may want a reduced row-echelon form: finding a basis for the null space amounts to solving for the pivot variables in terms of the free variables. So you either go for the reduced row-echelon form, and then you can read off the expressions you want directly, or you stop at the row-echelon form and do a lot of back-substitution. Finally, for finding an orthonormal basis, you use the Gram-Schmidt algorithm, which is not row reduction to begin with.
Matrix Equation $A^3-3A=\begin{pmatrix}-7 & -9\\ 3 & 2\end{pmatrix}$
$$ A=\pmatrix{{2}&{3}\\{-1}&{-1}}.$$ Note that $a^3+b^3=(a+b)^3-3ab(a+b)$. Let $a,b$ be the eigenvalues of $A$. Then $\operatorname{tr}A=a+b$ and $\det A=ab$, and both are integers. Let $X=\operatorname{tr}A$. Since $A(A^2-3I)=A^3-3A$ has determinant $13$, we have $\det A \mid 13$. Taking traces of the original equation and using the identity from the beginning, we get $X^3-3(ab+1)X+5=0$. Plugging in the four possible values for $ab$, namely $\pm 1, \pm 13$, we find that the only possible value is $ab=1$, and consequently $X^3-6X+5=0$. Solving in integers, we obtain $X=1$. Thus $A$ must satisfy $\operatorname{tr}A=1$, $\det A=1$. By Cayley–Hamilton, we have $A^2-A+I=0$, which forces $A^3=-I$. Now plug this into the original equation to get $$ -3A=\pmatrix{{-6}&{-9}\\{3}&{3}}.$$ Then we arrive at the answer.
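A direct check of this answer in code (plain $2\times2$ integer matrix arithmetic):

```python
# Verify that A = [[2,3],[-1,-1]] satisfies A^3 - 3A = [[-7,-9],[3,2]].
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 3], [-1, -1]]
A3 = matmul(matmul(A, A), A)          # equals -I, as Cayley–Hamilton predicts
result = [[A3[i][j] - 3 * A[i][j] for j in range(2)] for i in range(2)]
print(result)  # [[-7, -9], [3, 2]]
```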
Proving that linear operator is diagonalizable but not normal
To prove it's diagonalisable, find its eigenvalues; they should be distinct. To prove it's not normal, just multiply it by its transpose both ways round. You should get two different answers.
Prove $c$ satisfies the integral
Since $f$ is continuous there exist $a,b\in [0,1]$ such that $f(a)\le f(x)\le f(b), \:\forall x\in [0,1].$ Now, $$t\in[0,1]\implies tf(a)\le t f(x)\le t f(b).$$ So $$2f(a)\int_0^1 tdt \le 2\int_0^1 tf(t)dt\le 2f(b)\int_0^1 tdt.$$ Can you finish now?
Find the conditional distribution of $X$ given that $Y=y$
Yes, just like for two events $A,B$ you have $$\mathbb{P}[A|B] = \frac{\mathbb{P}[A \cap B]}{\mathbb{P}[B]}$$ so too, more generally, in terms of random variables, $$ f_{X|Y}(x \mid y) = \frac{f_{X,Y}(x,y)}{f_Y(y)} $$
Find those points of S that have no neighborhoods in which the equation $f(x, y) = 0$ can be solved for y in terms of x .
Note that $$f(x,y)=(x+y)(2x^2-2xy+2y^2-3x+3y)\ .$$ It follows that the set $S\colon\>f(x,y)=0$ is the union of the line $\ell\colon \>x+y=0$ and the quadric $$Q:\quad g(x,y):=2x^2-2xy+2y^2-3x+3y=0\ .$$ Writing $x:={1\over2}+\bar x$, $\>y:=-{1\over2}+\bar y$ we obtain the description $$Q:\quad 2\bar x^2-2\bar x\bar y+2\bar y^2={3\over2}$$ of $Q$, which indicates that $Q$ is an ellipse with center $M:=\bigl({1\over2},-{1\over2}\bigr)\in \ell$ and axes at a $45^\circ$ angle, i.e., aligned with $\ell$. The two curves intersect at the points $(0,0)$ and $(1,-1)$ which are true singularities of $S$. These two points have already appeared in your calculations as saddle points of $f$, the reason being that $f$ changes sign when crossing $\ell$ or $Q$. We now look at $$\nabla f(x,y)=(x+y)\nabla g(x,y)+g(x,y) (1,1)\ .$$ When $(x,y)\in\ell\setminus Q$ we have $\nabla f(x,y)=g(x,y)(1,1)\ne0$. This shows that all points on $\ell$ other than the two points $(0,0)$ and $(1,-1)$ are regular points of $S$. This means that each such point $(x_0,y_0)$ is the center of a rectangular window $W$ such that $\ell\cap W$ can be written in at least one of the forms $$y=\phi(x), \quad{\rm resp.,}\quad x=\psi(y)\ .\tag{1}$$ On the other hand, when $(x,y)\in Q\setminus \ell$ we have $\nabla f(x,y)=(x+y)\nabla g(x,y)\ne0$, because $x+y\ne0$, and $$\nabla g(x,y)=(4x-2y-3, \>4y-2x+3)$$ vanishes only at the point $M\notin Q$. It follows that all points of $Q\setminus\ell$ are regular points of $S$ as well. To sum it all up: Apart from the two singularities $(0,0)$ and $(1,-1)$ the set $S$ has at all points a local description of the form $(1)$. Here is a contour plot of $f$:
1 equation of two unknowns and their number of digits --- how many (x, y) exist?
If your last equation is true, you can write $$x=50-\frac{2500}{50+y}$$
Möbius transformations are the same iff there exists a constant
Idea: if $T_1 = T_2$: $$ac'z^2 + (ad' + bc')z + bd' = (az + b)(c'z + d') = (a'z + b')(cz + d) = a'cz^2 + (a'd + b'c)z + b'd.$$ Equal polynomials have the same coefficients. Start from $bd' = b'd$ and define $\lambda = b/b' = d/d'$ (what happens if $b' = 0$ or $d' = 0$?).
Find general solution of linear congruence equation
The important properties of congruences are: if $a\equiv b\pmod{n}$ and $c\equiv d\pmod{n}$ then $a+c\equiv b+d\pmod{n}$; if $a\equiv b\pmod{n}$ and $c\equiv d\pmod{n}$ then $ac\equiv bd\pmod{n}$. If $x$ is a solution to $2x\equiv5\pmod{13}$, then also $$ 2kx\equiv 5k\pmod{13} $$ for every integer $k$, because $k\equiv k\pmod{13}$ and you can apply property 2. How does this simplify the situation? Well, if you choose $k=7$, you get $$ 14x\equiv 35\pmod{13} $$ and therefore $$ x\equiv9\pmod{13} $$ So if $x$ is a solution, then $x\equiv 9\pmod{13}$. But the converse is also true, because from $x\equiv 9\pmod{13}$ we obtain $2x\equiv18\equiv5\pmod{13}$. In this case it is much the same as solving a degree one equation: $2x=5$ becomes $x=5/2$ after multiplying both sides by $1/2$. In the case of congruences we cannot “multiply by $1/2$”; but we can see that $\gcd(2,13)=1$, so by the general theory we know there exists $k$ such that $2k\equiv1\pmod{13}$. It's just a matter of finding it. Trial will work for small numbers; the extended Euclidean algorithm will do for bigger ones. When we have this $k$, the congruence becomes $$ 2kx\equiv 5k\pmod{13} $$ and so $x\equiv5k\pmod{13}$. The steps can be done backwards, because upon multiplying this by $2$, the right-hand side becomes $5\cdot2k\equiv5\cdot1\equiv5\pmod{13}$.
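In code, the multiplier $k=7$ is just the inverse of $2$ modulo $13$; Python 3.8+ exposes modular inverses through the built-in three-argument pow:

```python
# Solve 2x ≡ 5 (mod 13) by multiplying both sides by the inverse of 2.
k = pow(2, -1, 13)   # modular inverse of 2 mod 13
x = (5 * k) % 13
print(k, x)          # 7 9
assert (2 * x) % 13 == 5
```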
Is the inverse of a bijective monotone function also monotone?
Hint #1: Let $f: D \rightarrow C$ be a bijective function with domain $D$ and codomain $C$. Then $f$ being monotonic means that for all $x, y \in D$, $$x \leq y \implies f(x) \leq f(y)$$ (if $f$ is monotonically increasing), or $$x \geq y \implies f(x) \geq f(y)$$ (if $f$ is monotonically decreasing). Hint #2: If $f: D \rightarrow C$ is bijective, then $f^{-1}: C \rightarrow D$ is also bijective. Hint #3: The compositions $$f(f^{-1}(x)) = x$$ and $$f^{-1}(f(x)) = x$$ hold. Can you take it from here?
An orthogonal matrix that fails to be an isometry?
A translation by a non-zero vector is an isometry that does not preserve dot products and is not of the form $h(x) = Ax$ where $A$ is orthogonal. Therefore, if we leave out the condition $h(0)=0,$ the theorem is false. With the condition $h(0)=0,$ however, we restrict ourselves to linear transformations, which excludes non-zero translations. Among the linear transformations, all isometries preserve the dot product and are of the form $h(x) = Ax$ where $A$ is orthogonal. That is what the theorem asserts.
Expectation and ratio distribution
Maybe I found an error in my previous derivation. This is my new try. Using the ratio distribution formula, $p_Z(z)$ can be expressed as: \begin{equation*} p_Z(z) = \int_y y p_X(zy)p_Y(y)dy, \quad y>0. \end{equation*} \begin{equation*} \begin{split} E\{(Z\psi(Z))^2\} & = \int_z (z\psi(z))^2 p_Z(z)dz \\ &= \int_z (z\psi(z))^2 \left[ \int_y y p_X(zy)p_Y(y)dy \right] dz\\ &= \int_z \left[ \int_y (z\psi(z))^2y p_X(zy)p_Y(y)dy \right] dz\\ &= \int_y \left[ \int_z (z\psi(z))^2y p_X(zy)p_Y(y)dz \right] dy \quad \mathrm{ (from \; Fubini's \; Theorem)}\\ &=\int_y \left[ \int_x \left( \frac{x}{y}\psi\left( \frac{x}{y}\right)\right)^2 y p_X(x)p_Y(y)\frac{dx}{y}\right] dy \quad \mathrm{from}\; z=x/y\\ &=\int_y \left[ \int_x \frac{1}{y^4} (x \psi(x))^2 p_X(x)dx\right] p_Y(y) dy \\ &= E\{Y^{-4}\}E\{(X \psi(X))^2\} \end{split} \end{equation*} Is this correct?
Time period of Sinusoidal Signal
If you add together two sinusoids, the period of the sum is the least common multiple of the periods. For example: $\sin(\pi x)+\cos(2 \pi x/5)$ has period $\operatorname{lcm}(2,5)=10$; $\sin(\pi x)+\cos(\pi x/3)$ has period $\operatorname{lcm}(2,6)=6$; $\sin(x)+\sin(2x)$ has period $\operatorname{lcm}(2 \pi,\pi)=2 \pi$. Whenever the two periods are a rational multiple of one another, this least common multiple exists. When the two periods are irrational multiples of one another, the least common multiple does not exist. For suppose the two periods are $p_1$ and $p_2$, and suppose $p_3=k_1 p_1=k_2 p_2$ for nonzero integers $k_1,k_2$. Then $p_1=\frac{k_2}{k_1} p_2$, so $p_1$ is a rational multiple of $p_2$.
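For rational periods this least common multiple can be computed exactly; period_lcm below is my own helper, using the fact that the lcm of two reduced fractions is the lcm of the numerators over the gcd of the denominators:

```python
from fractions import Fraction
from math import gcd

def period_lcm(p1, p2):
    """LCM of two positive rationals: lcm of numerators / gcd of denominators."""
    a, b = Fraction(p1), Fraction(p2)
    num = a.numerator * b.numerator // gcd(a.numerator, b.numerator)
    return Fraction(num, gcd(a.denominator, b.denominator))

# The examples above: periods 2 and 5 give 10; periods 2 and 6 give 6.
print(period_lcm(2, 5), period_lcm(2, 6))  # 10 6
```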
Either $f$ is a polynomial or $|f(z_j)| > e^{n|z_j|}. $
If $f$ is not a polynomial, then $h$ is a nonconstant entire function without zeros. Therefore, $\phi = \log h$ is a nonconstant entire function, so its derivative at some point $z_0$ is nonzero. By the Cauchy integral formula, $$ \phi'(z_0) = \frac{1}{2\pi i}\int_{|z-z_0|=r} \frac{\phi(z)}{(z-z_0)^2}\,dz $$ which implies that $\max_{|z-z_0|=r}|\phi(z)| \ge r|\phi'(z_0)|$. Translated back in terms of $h$, this yields $$ \max_{|z-z_0|=r}|h(z)| \ge e^{r|\phi'(z_0)|} $$
From this expression ax/(b+cx), how can I have x only in the numerator?
You can't because $(A/B)x$ gets large in absolute value for large $x$ (unless $A=0$) while $\begin{array}\\ \frac{x}{b+cx} &=\frac1{c}\frac{cx}{b+cx}\\ &=\frac1{c}\frac{cx+b-b}{b+cx}\\ &=\frac1{c}(1-\frac{b}{b+cx})\\ &=\frac1{c}-\frac{b}{c(b+cx)}\\ \end{array} $ is bounded for large $x$.
how to see whether a bundle is trivial or not?
I believe that it's trivial. Since the automorphism of $S^1$ is oriented, your question is equivalent to asking whether or not the (complex) line bundle determined by the bundle is trivial. This is classified by a map $$ C(S^n,2)/(\mathbb{Z}/2) \to \mathbb{CP}^\infty $$ or more neatly, an element of $H^2(C(S^n,2), \mathbb{Z})$. In order to compute this, define $U$ to be the complement of the diagonal in $S^n \times S^n$ (this is the ordered configuration space). We can look at the long exact sequence of the pair $(S^n \times S^n, U)$, from which we obtain (assuming at least that $n > 2$) $$ 0 = H^2(S^n \times S^n) \to H^2(U) \to H^3(S^n \times S^n, U) $$ which tells us that the cohomology of the ordered configuration space injects into $H^3(S^n \times S^n, U)$. Since the cohomology of the quotient is just the invariant cohomology, it suffices to show that $H^2(U)$ is zero, and it of course then suffices to show that $H^3(S^n \times S^n , U) = 0$. This should actually be the same (due to Excision) as the Thom space of a rank $n$ bundle over $S^n$, which means that it only has cohomology in degrees 0, $n, 2n$. So if $n > 3$ then you're done. This is a pretty ugly proof, I admit. It doesn't obviously work for $n = 1, 2, 3$, and it uses a lot of pretty heavy machinery. I'm sure there is a simpler proof out there, but this is the first argument that came to mind...
E.T. Jaynes probability theory exercise 3.2
I think the given answer is double-counting. Take a simple example with \begin{align} k &= \text{$2$ colors} \\ N_1 &= 2 \\ N_2 &= 1 \\ N &= N_1+N_2 = 3 \\ m &= 3. \end{align} Of course, with $m=N$, there is only $1$ way to select the balls. However, $$N_1\cdot N_2\cdot\binom{N-k}{m-k} = 2\cdot 1\cdot\binom{3-2}{3-2} = 2.$$ The problem is that, for any particular color $c$, each selection is counted once for each $c$-color ball being the designated mandatory one for that color. Instead, the selection should be only counted once. One solution is to use the Inclusion-Exclusion principle. Define set: $$A_i = \{\text{Selections that exclude balls of color $i$}\}.$$ Then the required probability is $$\dfrac{\left|\bigcap_{i=1}^{k} A_i^c \right|}{\binom{N}{m}}.$$ Using the IE principle, and with $\Omega$ the universal set of all possible selections: \begin{align} \left|\bigcap_{i=1}^{k} A_i^c \right| &= \left|\left(\bigcup_{i=1}^{k} A_i\right)^c \right| \\ &= \left|\Omega\right| - \left|\bigcup_{i=1}^{k} A_i \right| \\ &= \binom{N}{m} - \sum_{i=1}^{k} |A_i| + \sum_{i\lt j} |A_i\cap A_j| - \cdots + (-1)^k\left|\cap_{i=1}^{k}A_i\right|. \end{align} Here, \begin{align} |A_i| &= \binom{N-N_i}{m} \\ |A_i\cap A_j| &= \binom{N-N_i-N_j}{m}\text{, and so on.} \end{align}
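The inclusion-exclusion count can be checked against brute-force enumeration on a slightly larger toy case (color counts $[2,2,1]$ and $m=3$, my own choice):

```python
from itertools import combinations
from math import comb

# Balls labeled by color; count the m-selections that include every color.
counts = [2, 2, 1]
m = 3
balls = [c for c, nc in enumerate(counts) for _ in range(nc)]
N = len(balls)

brute = sum(1 for sel in combinations(range(N), m)
            if {balls[i] for i in sel} == set(range(len(counts))))

# Inclusion-exclusion over the sets A_i = "selections missing color i".
ie = 0
for r in range(len(counts) + 1):
    for excluded in combinations(range(len(counts)), r):
        rest = N - sum(counts[i] for i in excluded)
        ie += (-1) ** r * comb(rest, m)

print(brute, ie)  # 4 4
```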
What did I do wrong in u-substitution
You did nothing "wrong" with your u-substitution; your evaluation of the indefinite integral is correct. To see this, expand $(x-1)^4$ and $(x - 1)^3$ in the numerators, respectively, simplify, and take into account the constant value (absorbed by the constant of integration). The answers will match, up to the constant of integration. $$\dfrac{(x-1)^4}{4} + \dfrac{(x-1)^3}{3} + C \quad = \quad \frac{x^4}{4} - \frac{2x^3}{3} + \frac{x^2}{2} + \left(C - \dfrac{1}{12}\right)$$
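A quick sanity check in exact rational arithmetic (the sample points are my own choice) that the two antiderivatives differ only by a constant:

```python
from fractions import Fraction

g = lambda x: Fraction(x - 1) ** 4 / 4 + Fraction(x - 1) ** 3 / 3
h = lambda x: Fraction(x) ** 4 / 4 - Fraction(2, 3) * x ** 3 + Fraction(x) ** 2 / 2

# If both are antiderivatives of the same function, g - h must be the
# same constant at every sample point.
diffs = {g(x) - h(x) for x in range(-5, 6)}
print(diffs)  # a single constant value
```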
Solve the following composition law equation
Cute question, but can you say what you've tried or how the question came up? I've worked with this function before. If you define $x^\# = \dfrac{1-x}{1+x}$, then you have $(x^\#)^\# = x$ and $(xy)^\# = x^\# \circ y^\#$ and $(x\circ y)^\# = x^\#\, y^\#$. That means this operation on $(-1,1)$ is conjugate to multiplication on $(0,\infty)$ (with $-1$ corresponding to $\infty$ and $1$ corresponding to $0$), and you can use that to solve this problem. It's associative because multiplication is associative: $$ (a\circ b)\circ c = \bigl((a^\# b^\#)c^\#\bigr)^\# = \bigl(a^\#(b^\# c^\#)\bigr)^\# = a\circ(b\circ c). $$ But you can also prove it by brute force: expand $$ (a\circ b)\circ c = \frac{(a\circ b)+c}{1+(a\circ b)c} = \frac{\frac{a+b}{1+ab}+c}{1+\frac{a+b}{1+ab}c} = \frac{a+b+c+abc}{1 + ab+ac+bc} $$ and do the same with $$ a\circ(b\circ c) $$ and see if you get the same thing both ways. $$ (\,\underbrace{x\circ x \circ x \circ\cdots\circ x}_{22\text{ terms}}\,)=0 \text{ if and only if } (\,\underbrace{x^\#\cdots x^\#}_{22\text{ factors}} \,)^\# = 0 $$ so you need $$(x^\#)^{22}=0^\# = 1.$$ Where you write $G=(-1,1)$ you must have intended $x,y\in G=(-1,1)$, and if that's what you meant, I think you should have said so explicitly. I'm guessing that by "stabile part" you mean "stable part", i.e. that $G=(-1,1)$ is closed under this operation.
When is $(x-1)(y-1)(z-1)$ a factor of $xyz-1$?
There are no other solutions besides $(2,4,8)$ and $(3,5,15)$, already discovered in aRaRa's answer. Let $f(x,y,z)=\frac{xyz-1}{(x-1)(y-1)(z-1)}$ for integers $z>y>x>1$. We want to know when $k=f(x,y,z)$ is an integer. Note first that $k$ cannot equal $1$, because this would imply $z=\frac{x+y-xy}{x+y-1}$ whence $xy\leq 1$ which is impossible. Now $$ f(x,y,y+1)-f(x,y,z)=\frac{(xy-1)(z-(y+1))}{y(x-1)(y-1)(z-1)}\geq 0 \tag{1} $$ and $$ f(x,x+1,x+2)-f(x,y,y+1)=\frac{t\bigg((2x^2+2x-1)t+(2x^3+4x^2-1)\bigg)}{(x-1)x(x+1)(y-1)y}\geq 0 \tag{2} $$ where $t=y-(x+1)$, so that $f(x,y,z)\leq f(x,x+1,x+2)=\phi(x)=\frac{x^3 + 3x^2 + 2x - 1}{x^3 - x}$. If $x\geq 4$, we have $f(x,x+1,x+2)=\frac{119}{60}-\frac{(x-4)(100+(x-1)(59x+15))}{60x(x^2-1)}$, so $k\leq \frac{119}{60} <2$ and hence $k=1$, which is impossible as already shown above. So $x$ can only be $2$ or $3$. As $\phi(2)=\frac{23}{6}<4$ and $\phi(3)=\frac{59}{24}<3$, we have $k<4$ when $x=2$ and $k<3$ when $x=3$. Note that $f(x,y,z)=k$ can be rewritten as $$ z=\frac{k(x+y)-kxy-k+1}{k(x+y)-(k-1)xy-k} \tag{3} $$ When $x=2$ and $k=2$, (3) yields $z=\frac{3}{2}-y$ which is impossible. When $x=2$ and $k=3$, (3) yields $z=\frac{3y-4}{y-3}$ which is possible iff $y=4$. When $x=3$ and $k=2$, (3) yields $z=\frac{4y-5}{y-4}$ which is possible iff $y=5$. This concludes the proof.
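A brute-force search over a modest range (the bound $100$ is my own choice) supports the claim:

```python
# Search for 1 < x < y < z with (x-1)(y-1)(z-1) dividing xyz - 1.
solutions = [(x, y, z)
             for x in range(2, 101)
             for y in range(x + 1, 101)
             for z in range(y + 1, 101)
             if (x * y * z - 1) % ((x - 1) * (y - 1) * (z - 1)) == 0]
print(solutions)  # [(2, 4, 8), (3, 5, 15)]
```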
Publishing mathematics content in a blog
Use either Wikipedia or some math-related site such as this one to temporarily create the LaTeX content, take a screenshot, and then upload the picture to your blog.
Draw level curves for $f(x,y)=\frac{x^2+y^2}{xy}$
Update: convert to polar coordinates. $f(r,\theta) = \frac {1}{\sin\theta\cos\theta} = 2\csc 2\theta$ Each contour is a ray with an open end at the origin, and $f(x,y)$ is undefined at $(0,0)$.
Measure: $μ^{*}(G\bigtriangleup A)=0$.
I suppose it should be like this: $\mu^*(A\setminus G)\leq μ^{*}(G\bigtriangleup A)=0$, so $\mu^*(A\setminus G)=\mu^*(G\setminus A)=0$. Then $\mu^*(A\cup G)=\mu^*(A\cup (G\setminus A))\leq \mu^*A +\mu^*(G\setminus A)=\mu^*A$ and $\mu^*A\leq \mu^*(A\cup G)$, so $\mu^*A= \mu^*(A\cup G)$. The same holds for $G$, and finally $\mu^*A=\mu^*G$. But for Lebesgue measure... Let $M$ be a set with the property that for every Lebesgue measurable $E$: $\mu_*(M\cap E)=0,\,\mu^*(M\cap E)=\mu^*E=\mu E$, where $\mu_*$ stands for inner measure; such an $M$ is not Lebesgue measurable. Then let $A=[0,1]$, $G=M\cap A$. So you have $μ^{*}(G\bigtriangleup A)=0$, which implies $μ(G\bigtriangleup A)=0$. And finally, $μ(G\bigtriangleup A)=0$, $\mu A=1$, and $G$ is not Lebesgue measurable.
Cutting a sandwich with a crust
Let $$s\mapsto z(s)\qquad(0\leq s\leq L)$$ be the counterclockwise representation of the crust with respect to arc length. Hold a knife over the sandwich connecting the points $z(0)$ and $z(L/2)$, and assume that the area to the right of the knife is more than half of the sandwich. Now turn the knife slowly counterclockwise so that at all times it points from $z(s)$ to $z(s+L/2)$. When we arrive at $s=L/2$ we will have less than half of the sandwich to the right of the knife. By the intermediate value theorem there has to be a position $\sigma\in\ ]0,L/2[\ $ of the knife for which the area of the sandwich is exactly halved.
Fenchel Duality in Prof. Bertsekas' lecture
It's actually easy. Let's do it from first principles. First observe that \begin{eqnarray} \underset{\mu \ge 0}{\text{sup }}\mu^T(g(x)-u) = \begin{cases}0, &\mbox{ if }g(x) \le u,\\+\infty, &\mbox{ otherwise.}\end{cases} \end{eqnarray} Now, \begin{eqnarray} \begin{split} \text{LHS of 1.47} &= \underset{u \in \mathbb{R}^r}{\text{inf }}p(u) + P(u) = \underset{u \in \mathbb{R}^r}{\text{inf }}\underset{x \in X, g(x) \le u}{\text{inf }}f(x) + P(u) \\ &= \underset{u \in \mathbb{R}^r}{\text{inf }}\underset{x \in X}{\text{inf }}f(x) + \underset{\mu \ge 0}{\text{sup }}\mu^T(g(x)-u) + P(u)\\ &=\underset{\mu \ge 0}{\text{sup }}\underbrace{\underset{x \in X}{\text{inf }}f(x) + \mu^Tg(x)}_{q(\mu)} + \underbrace{\underset{u \in \mathbb{R}^r}{\text{inf }}P(u) - \mu^Tu}_{-Q(\mu)}\\ &=\underset{\mu \ge 0}{\text{sup }}q(\mu) - Q(\mu) = \text{RHS of 1.47} \end{split} \end{eqnarray} This should hopefully answer your question.
$n \mid (a^{n}-b^{n}) \ \Longrightarrow$ $n \mid \frac{a^{n}-b^{n}}{a-b}$
Let $\,c = (a^n\!-b^n)/(a\!-\!b).\,$ To show $\,n\mid c\,$ it suffices to show $\,p^k\mid n\Rightarrow\, p^k\mid c\,$ for all primes $p$. If $\,\ p\nmid a\!-\!b\ $ then $\ p^k\mid n\mid a^n\!-b^n\!= (a\!-\!b)\:\!c\,\Rightarrow\ p^k\mid c\:$ by iterating Euclid's Lemma, else $\, p\mid a\!-\!b\ $ so $\ p^k{\,\LARGE \mid}\, \dfrac{\color{#90f}{a^{\large p}\!-b^{\large p}}}{\color{#0a0}{a-b}}\,\dfrac{a^{\large p^2}\!\!-b^{\large p^2}\!\!}{\color{#90f}{a^{\large p}-b^{\large p}}}\cdots \dfrac{\color{#c00}{a^{\large p^k}\!\!-b^{\large p^k}}}{a^{\large p^{k-1}}\!\!-b^{\large p^{k-1}}}\, \dfrac{\color{#0a0}{a^{\large n}\!-b^{\large n}}}{\color{#c00}{a^{\large p^k}-b^{\large p^k}}} = \color{#0a0}{\dfrac{a^{\large n}-b^{\large n}}{a-b}} = c$ because the first $\,k\,$ factors have the form $\,Q= \dfrac{A^{\large p}\!-B^{\large p}\!\!}{A-B}\,$ with $\,p\mid A\!-\!B,\,$ so each is divisible by $\,p$: $\qquad\ \ \ \bmod p\!:\ \color{#c00}{A}\equiv B\,\Rightarrow\, Q = \color{#c00}A^{p-1}\!+\color{#c00}A^{p-2}B+\cdots+\!B^{p-1}\!\equiv\ pB^{p-1}\!\equiv 0$ Remark $ $ For generalizations of the above (multiplicative telescopic) lifting of $p$-divisibility see LTE = Lifting The Exponent and related results.
uncertain orthogonality of discrete Fourier transform on the ring of integers modulo some number
Given any $m$ and $w\in (\Bbb{Z}/m\Bbb{Z})^\times$ of order $n$, the necessary and sufficient condition for $T_0 \in (\Bbb{Z}/m\Bbb{Z})^\times$ and $T_1=\cdots=T_{n-1}=0$ is that $\gcd(n,m)=1$ and for each $k\ne 0\bmod n$, $w^k-1$ is a unit, i.e. for each prime $p\mid m$, $w\bmod p$ has order $n$. If $\gcd(n,m)\ne 1$ then $T_0$ is not a unit. If each $w^k-1$ is a unit then $w^{nk}-1=0$ implies $T_k=\frac{w^{kn}-1}{w^k-1}=0$. If $w^k-1$ is not a unit then $w^k=1\bmod p$ for some $p\mid m$, thus $T_k = n\ne 0 \bmod p$. If for each prime $p\mid m$, $w\bmod p$ has order $n$, then each $w^k-1$ is a unit. If for some prime $p\mid m$, $w\bmod p$ has order $k<n$, then $w^k-1$ is not a unit. It is not obvious at all for which $n,t$ the condition is satisfied with $m=2^{nt/2}+1, w=2^t$ (except when $n$ is a power of $2$; see comment below)
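As a sanity check, here is a quick numerical verification of the criterion for a small case of my own choosing (not from the question): $m=5$, $w=2$, which has order $n=4$ both mod $5$ and mod the only prime factor of $m$, so the condition predicts $T_0$ a unit and $T_1=T_2=T_3=0$.

```python
import math

# Small illustrative case (my choice, not from the question): m = 5, w = 2.
m, w = 5, 2

# Multiplicative order of w mod m.
n = 1
while pow(w, n, m) != 1:
    n += 1

# T_k = sum_{j=0}^{n-1} w^{jk} mod m.
T = [sum(pow(w, j * k, m) for j in range(n)) % m for k in range(n)]
print(n, T, math.gcd(T[0], m) == 1)  # 4 [4, 0, 0, 0] True
```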
Is $f(x,y) = \begin{cases}\frac {x^3+y^3}{x^2+y^2}&(x,y)\neq (0,0)\\0& (x,y)=(0,0)\end{cases}$ differentiable at origin?
Since $\frac{\partial f}{\partial x}(0,0)=\frac{\partial f}{\partial y}(0,0)=1$, if $f$ is differentiable at $(0,0)$, its derivative at $(0,0)$ is the linear map $(x,y)\mapsto x+y$. Now, since$$\frac{x^3+y^3}{x^2+y^2}-f(0,0)-(x+y)=\frac{-x^2y-xy^2}{x^2+y^2}=-\frac{xy(x+y)}{x^2+y^2},$$the question is: is it true that$$\lim_{(x,y)\to(0,0)}\frac{\left|-\frac{xy(x+y)}{x^2+y^2}\right|}{\|(x,y)\|}=0?$$No. See what happens when $x=y$.
Finding a formula for the sum of a series that is neither Geometric nor Arithmetic
Hint: If you split it $$\sum_{n=1}^k\left(2^n-1\right)=\sum_{n=1}^k2^n-\sum_{n=1}^k1$$ you have one geometric series and one (simple) arithmetic series.
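A quick brute-force confirmation of the split:

```python
# Check sum_{n=1}^{k} (2^n - 1) = (2^{k+1} - 2) - k, i.e. geometric part minus arithmetic part.
for k in range(1, 20):
    direct = sum(2**n - 1 for n in range(1, k + 1))
    assert direct == (2**(k + 1) - 2) - k

print(sum(2**n - 1 for n in range(1, 5)))  # 1 + 3 + 7 + 15 = 26
```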
Find minimal polynomial of given matrix
The minimal polynomial is $(X-2)^3(X-3)$. This is a triangular matrix with a $2\times 2$ diagonal block whose only eigenvalue is $3$, and thus whose minimal polynomial is $X-3$. There is also a $3\times 3$ block which has $2$ on the diagonal; the minimal polynomial of this block is $(X-2)^3$.
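The matrix from the question is not reproduced above, so as an illustration here is a hypothetical stand-in with the same kind of block structure (a $3\times 3$ Jordan-type block at $2$ plus an eigenvalue $3$), verified to be annihilated by $(X-2)^3(X-3)$ but not by the lower-degree $(X-2)^2(X-3)$:

```python
# Hypothetical example matrix (not the one from the question).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def shift(A, c):  # A - c*I
    return [[A[i][j] - (c if i == j else 0) for j in range(len(A))]
            for i in range(len(A))]

A = [[2, 1, 0, 0],
     [0, 2, 1, 0],
     [0, 0, 2, 0],
     [0, 0, 0, 3]]

P = matmul(matmul(matmul(shift(A, 2), shift(A, 2)), shift(A, 2)), shift(A, 3))
Q = matmul(matmul(shift(A, 2), shift(A, 2)), shift(A, 3))  # one degree lower

print(all(x == 0 for row in P for x in row))  # True:  (A-2I)^3 (A-3I) = 0
print(all(x == 0 for row in Q for x in row))  # False: (A-2I)^2 (A-3I) != 0
```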
Applying transitivity to digraphs question.
Unluckily, I'm not coming up with anything at the moment. Here are a few properties, maybe they help: Let $ G= (V,E) $ be a graph. Let's define the transitivity-operation as $f((V,E)) := (V,E')$, with $E':= \{(x,y)\in V^2\mid \exists a\in V: (x,a),(a,y)\in E \}$ The key property is that if $$x_1\to x_2\to ...\to x_n$$ is a path in $G$, then $$x_1\to x_3\to ...\to x_{n-1}$$ is a path in $f(G)$. Let's call $x\to y$ path connected, if there exists some path $x\to y$. If at some iteration a path connection $x\to y$ is lost, then it is impossible to restore it. The path connection is only maintained into the next iteration if there is a path of even length from $x$ to $y$. For a path connection $x\to y$ to never be lost, one needs to be able to construct an arbitrarily long path from $x\to y$ (in $E$) with even length. This is possible exactly if there's a path $x\to y$ that passes through a cycle with an odd number of vertices. An edge $x\to y$ will exist permanently after some iteration exactly if we can create a path of length $2n$ for all $n\ge k$ for some $k$. Partial result: Any graph $G$ without an odd cycle has $$\lim_{n\to\infty} M( f^n(G)) = 0$$ Partial result: Let $P$ be the adjacency matrix of the graph. Then we have $$(P^n)_{i,j} >0 \Leftrightarrow \text{There's a path from $i$ to $j$ with length } n$$ Especially this means that $M(f^n(G))$ converges exactly if the number of zero entries in $P^n$ is at some point constant.
Positive definite in the limit
The sufficient condition given in that MO answer is already one that makes $F$ positive definite for small $\varepsilon>0$. In general, suppose $A$ and $B$ are two Hermitian matrices of the same sizes (whether they are entrywise positive/nonnegative is unimportant). If $A-\varepsilon B$ is positive semidefinite for every small $\varepsilon>0$, then by passing $\varepsilon$ to the limit, $A$ must be positive semidefinite. If $A$ is indeed positive semidefinite, denote its restriction on $(\ker A)^\perp$ as $A_1$. Then $A_1$ is positive definite and we may write $F=A-\varepsilon B$ in the form of $$ F=\pmatrix{A_1-\varepsilon X&-\varepsilon Y^\ast\\ -\varepsilon Y&-\varepsilon Z}. $$ Therefore, a necessary condition for $F$ to be positive semidefinite for every small $\varepsilon>0$ is that $A$ is positive semidefinite and $Z=(I-A^+A)B(I-A^+A)$ is negative semidefinite on $\ker A$. However, if $A$ is positive semidefinite but $Z$ is negative definite on $\ker A$, we get a sufficient condition. This should be evident, because $F$ is congruent to $(-\varepsilon Z)\oplus S$, where \begin{align} S&=A_1-\varepsilon X-(-\varepsilon Y^\ast)(-\varepsilon Z)^{-1}(-\varepsilon Y)\\ &=A_1-\varepsilon (X-Y^\ast Z^{-1}Y) \end{align} is the Schur complement of $-\varepsilon Z$ in $F$. Since $A_1$ is positive definite and $Z$ is negative definite, $-\varepsilon Z,\ S,\ (-\varepsilon Z)\oplus S$ and in turn $F$ are positive definite when $\varepsilon>0$ is small. In your case, $A$ is a positive multiple of the all-one matrix. So, $A$ is positive semidefinite, $\ker A=\{x\in\mathbb R^n:\sum_ix_i=0\}$ and $I-A^+A$, the orthogonal projection on $\ker A$, is the matrix $P$ in the MO answer you mentioned.
Solving a system of 3 equations with 2 variables
The rank of the coefficient matrix must be equal to the rank of the augmented matrix. In your example the matrix of the coefficients is $3\times 2$, so its rank is at most $2$, while the augmented matrix is $3\times 3$. For the two ranks to be equal, the determinant of the augmented matrix must be zero.
Conflict between geometric intuition and computed answer
This is how to calculate the circumference. It is not exactly what you're asking, but it's a bit too long for a comment, so I'm doing it in an answer. The length of a curve $C$ is calculated by integrating $\int_C ds$ (if you dislike the empty integrand, we can write $\int_C 1\,ds$ which is equivalent). Substituting a (differentiable) parametrisation $C:t \mapsto(x(t), y(t))$ for $t\in [a, b]$, we get that this is equal to $$ \int_{t = a}^{t =b}\sqrt{x'(t)^2 + y'(t)^2}\:dt $$ Interpreting this as a Riemann sum, and dividing $[a, b]$ into lots of tiny pieces of size $\Delta t$, it tells us that the length of the parametrised curve $C$ is equal to the sum of the lengths of all the little parts of the parametrized curve. Each of those, in turn, is equal to $\Delta t$ times the speed of the parametrisation at that $t$-point, which is $\sqrt{x'(t)^2 + y'(t)^2}$. In this case, we can calculate the integrand. We have $x'(t) = -2\sin t$ and $y'(t) = 2\cos t$, so $$ \sqrt{4(-\sin t)^2 + 4(\cos t)^2} = 2\sqrt{\sin^2t+\cos^2t} = 2 $$ so the entire integral (and thus the circumference) evaluates to $$ \int_C ds = \int_{t = 0}^{t =2\pi}\sqrt{x'(t)^2 + y'(t)^2}\:dt = \int_{t = 0}^{t = 2\pi}2\,dt = 4\pi $$
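The integral above can be replicated with a midpoint Riemann sum: for $x(t)=2\cos t$, $y(t)=2\sin t$ the speed is constantly $2$, so the sum should converge to $4\pi$.

```python
import math

# Midpoint Riemann sum for the arc-length integral of the circle of radius 2.
N = 100000
h = 2 * math.pi / N
total = 0.0
for i in range(N):
    t = (i + 0.5) * h
    # speed = sqrt(x'(t)^2 + y'(t)^2) with x' = -2 sin t, y' = 2 cos t
    total += math.hypot(-2 * math.sin(t), 2 * math.cos(t)) * h

print(total)  # about 12.566, i.e. 4*pi
```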
Using calculus results for functions of operators
If $A$ is a normal operator (and every selfadjoint operator is normal), then the Spectral Theorem for normal operators gives you a way to define $F(A)$ for any function $F$ that is continuous on the spectrum $\sigma(A)$ of $A$. And this correspondence preserves algebra, meaning $$ (F+G)(A) = F(A)+G(A),\;\;\;(FG)(A)=F(A)G(A),\;\;\; 1(A)=I. $$ And $F(A)=A$ if $F(z)=z$. Therefore, $F(z)=z^{n}$ must give $F(A)=A^n$, which is the usual definition for powers of operators. The above properties also imply that, if $F_{a}(t)=e^{at}$, then defining $e^{aA}$ as $e^{aA}=F_{a}(A)$ satisfies $$ e^{aA}e^{bA}=e^{(a+b)A},\;\; e^{0A}=I. $$ And $\|F(A)\|=\sup_{\lambda\in\sigma(A)}|F(\lambda)|$ holds, which allows the series approximation: $$ \lim_{N\rightarrow\infty}\|e^{aA}-\sum_{n=0}^{N}\frac{(aA)^n}{n!}\|=0. $$ If sequences of functions converge uniformly on the spectrum, then the corresponding operators converge in operator norm. This calculus can be extended to Borel functions on the spectrum, but convergence must be weakened a bit. Then there is the holomorphic functional calculus, which works for more general operators, but you don't get the tight bounds that you want, and you can only take holomorphic functions of operators. This calculus is based on the Cauchy integral formula $$ F(A) = \frac{1}{2\pi i}\oint_{C} \frac{1}{(\lambda I -A)}F(\lambda)d\lambda $$ (Here $\frac{1}{\lambda I-A}$ stands for $(\lambda I-A)^{-1}$.) This works for all functions that are holomorphic on a neighborhood of the spectrum $\sigma(A)$, where the contour must be a single contour, or a finite set of positively oriented contours, enclosing the spectrum in their interiors. In this case you also get $(FG)(A)=F(A)G(A)$, $1(A)=I$, etc. Obviously this functional calculus works great for power series functions. And it works for general bounded operators. There is a functional calculus devoted to the exponential operator all by itself. This is the study of $C^0$ semigroups.
The expression $e^{tA}$ can potentially make sense for bounded and unbounded operators provided $t \ge 0$ and $\sigma(A)$ is contained in the left half plane of $\mathbb{C}$. Not all operators are candidates for this functional calculus, because $e^{A}$ must have all positive power roots. For example $(e^{\frac{1}{2}A})^2=e^{A}$. And not all operators have roots, even for matrices. But it turns out there is a condition you can put on the resolvent of the operator that will rule out the nilpotent parts that keep you from making this happen. There's a necessary and sufficient condition. Then you get the nice exponential formulas, with a little trick to rewrite in terms of the resolvent. Instead of $(I+t\frac{A}{n})^{n}$, you replace $n$ by $-n$ and use $(I-t\frac{A}{n})^{-n}$. This formulation is equivalent to solving the Cauchy vector problem $$ \frac{dx(t)}{dt} = Ax(t),\;\;\; x(0)=x_0. $$ You end up with a solution operator $x(t)=e^{tA}x_0$. Then you can form functions of $A$ using a calculus related to the Laplace transform: $$ \int_{0}^{\infty}F(t)e^{tA}x_0\, dt $$ If the Laplace transform $\mathscr{L}\{F\}$ is $f$, then the above corresponds to $f(A)$. References: Kreyszig, Introductory Functional Analysis with Applications (spectral theorem); Angus Taylor, Introduction to Functional Analysis (holomorphic functional calculus); Pazy, Semigroups of Linear Operators ($C^0$ semigroups).
Degree of minimal polynomial when number of distinct eigenvalues are given
The roots of the minimal polynomial are precisely the eigenvalues, and the degree of a nonzero polynomial is at least the number of its roots, so 1. is true. With your conditions, it is possible for the minimal polynomial to have any degree between 4 (in the diagonalisable case) and 10, so all the other answers are false. To prove this, let's call $J_k = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ 0 & 0 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & 0 & \cdots & 0 & 0 \end{pmatrix} \in M_k(\mathbb C)$ the usual nilpotent Jordan block. You can easily check that the minimal polynomial of $J_k$ is $X^k$. For $1 \leq k \leq 7$, the block-diagonal matrix $\mathrm{diag}(1,2,3) \oplus J_k \oplus \mathrm{diag}(\underbrace{0,\ldots,0}_{\text{$7-k$ zeroes}}) \in M_{10}(\mathbb C)$ has four eigenvalues, and its minimal polynomial is $X^k(X-1)(X-2)(X-3)$.
Triangulate square with $30$ distinct points inside square
There seems to be a problem with your statement. You say you want the triangularization of the square with 29 distinct interior points, but the list $A_5,\ldots, A_{34}$ actually has $34-5+1=30$ points. In any event, if you triangularize a triangle then the relationship between $F$ and $E$ is $$3F=2E.$$ This is because each face is a triangle which uses 3 edges and each edge is in exactly 2 triangles. We can extend this to the triangularization of a square by observing all but one of the regions is a triangle, so $$3F+1=2E.$$ Assuming 29 interior points we have $$ F-E+V=2\,\Longleftrightarrow\, F-\frac{3F+1}{2}+33=2\,\Longleftrightarrow\,\frac{-F-1}{2}=-31\,\Longleftrightarrow\,F=61. $$ Assuming 30 interior points we have $$ F-E+V=2\,\Longleftrightarrow\, F-\frac{3F+1}{2}+34=2\,\Longleftrightarrow\,\frac{-F-1}{2}=-32\,\Longleftrightarrow\,F=63. $$ Since our $F$ also counts the exterior (a square region), we need to reduce our values by $1$ to count the triangles. Thus there are 60 triangles with 29 interior points and 62 triangles with 30 interior points. In general, the number of triangles in the triangularization of a square with $n$ interior points, $t(n)$, is given by $$ t(n)=2(n+1). $$
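The counting argument can be checked mechanically: for each number $n$ of interior points, the claimed $t(n)=2(n+1)$ together with $3F+1=2E$ must satisfy Euler's formula. A small sketch:

```python
# For n interior points: V = n + 4 vertices, t(n) = 2(n+1) triangles,
# F = t + 1 faces (including the outer face); then 3F + 1 = 2E must
# be consistent with Euler's formula F - E + V = 2.
for n in range(0, 50):
    V = n + 4
    t = 2 * (n + 1)
    F = t + 1
    assert (3 * F + 1) % 2 == 0
    E = (3 * F + 1) // 2
    assert F - E + V == 2

print(2 * (29 + 1), 2 * (30 + 1))  # 60 62, matching the answer
```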
Metric Spaces: Let X be non empty and $\rho: X \times X \rightarrow \mathbb{R}$ satisfy the axioms of a metric on X, prove $\rho$ is a metric on X
The lecturer expects you to prove that $\rho$ is indeed a metric. So, you have to deduce from the given assumptions that $(\forall x,y\in X):\rho(x,y)=\rho(y,x)$. It should be easy for you, since you think that it seems to be “kind of trivial”. You can prove it as follows: $\rho(x,y)\leqslant\rho(x,x)+\rho(y,x)$ (I took $z=x$), which means that $\rho(x,y)\leqslant\rho(y,x)$. For the same reason, $\rho(y,x)\leqslant\rho(x,y)$, and therefore $\rho(x,y)=\rho(y,x)$.
If $n$ is such a positive integer, that $8|n^2$, then $4|n$
If $k$ is the number of factors $2$ that $n$ has in its prime factorisation, then we know that $n^2$ has $2k$ such factors. We are given that $8$ divides $n^2$ so $2k \ge 3$. As $k$ is an integer this means that $k \ge 2$, so indeed $4|n$.
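A brute-force sanity check of the claim:

```python
# Whenever 8 divides n^2, n must be divisible by 4; search for counterexamples.
bad = [n for n in range(1, 100001) if (n * n) % 8 == 0 and n % 4 != 0]
print(len(bad))  # 0: no counterexamples up to 100000
```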
Inclusion of $C^0(\bar\Omega)$ in $L^p(\Omega)$
This follows because a continuous function on a closed and bounded set is bounded. So, for any $f \in C(\overline{\Omega})$, we have $$ \biggl(\int_{\Omega} |f(x)|^p dx\biggr)^{1/p} \leq \mathcal{L}^n(\Omega)^{1/p} (\sup_{x \in \Omega}|f(x)|) < \infty. $$
Reduced Residue System in Mathematica
This would be more appropriate on mathematica.stackexchange.com but, for the time being: residueSystem[n_Integer?Positive] := Select[Range[n], GCD[#, n] == 1 &]; residueSystem[10] (* Out: {1, 3, 7, 9} *)
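For readers without Mathematica, here is a plain-Python analogue of the same one-liner (my translation, not part of the original answer):

```python
from math import gcd

# Reduced residue system mod n: the integers in [1, n] coprime to n.
def residue_system(n):
    return [k for k in range(1, n + 1) if gcd(k, n) == 1]

print(residue_system(10))  # [1, 3, 7, 9]
```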
Compute $P(M_n<c)$ with $M_n=\max\{X_1,...,X_n\}$ and $X_i's$ are i.i.d
Your analysis is mixed up. As $c\to \infty$, $P(X_k\le c)=F(c)\to 1$. Therefore for fixed $n$, $P(M_n\le c)=F^n(c)\to 1$. What happens as $n\to \infty$ may be a little tricky, but that is not part of your original question.
How this equation with Laplace operator simplifies
One of Green's identities states that $$ - \int_\Omega (\nabla^2 u) v \, dx = \int_\Omega \nabla u\cdot\nabla v \, dx - \oint_\Gamma \frac{\partial u}{\partial n} v \, ds.$$ Is that what you meant to write?
Isometry in compact metric spaces
Consider the map $f\times f: X\times X \to X\times X$. It is continuous (because $f$ is) and surjective (because $f$ is). Consider the orbit of $(x,y)$ under $f$, i.e. the sequence $p_n=(f^n(x), f^n(y))$. Because $X\times X$ is compact, it has a convergent subsequence, hence for any $\epsilon$ (sorry, I hoped to do no epsilons, but can't), there are $N_0, k>0$ such that $f^{N_0}(x)$ is $\epsilon$-close to $f^{k+N_0}(x)$ and $f^{N_0}(y)$ is $\epsilon$-close to $f^{k+N_0}(y)$, hence $d(f^{k+N_0}(x), f^{k+N_0}(y))$ is $2\epsilon$-close to $d(f^{N_0}(x), f^{N_0}(y))$. Then because $f$ does not expand distances, induction shows the same is true for all $N \geq N_0$. But we want more. We want this to be true for some fixed $N$ (and varying $k$) for all $(x,y)$. That is achievable by compactness as follows: As we showed (replacing $\epsilon$ by $\epsilon/2$) we have $d(f^{k+N_0}(x), f^{k+N_0}(y))$ is $\epsilon$-close to $d(f^{N_0}(x), f^{N_0}(y))$. Because $f$ does not expand distances (and by the triangle inequality), this means that for any $(v,w)$ with $d(v,x)<\epsilon/4$ and $d(w,y)<\epsilon/4$ we have $d(f^{k+N_0}(v), f^{k+N_0}(w))$ is $2\epsilon$-close to $d(f^{N_0}(v), f^{N_0}(w))$. Now cover $X$ with $\epsilon/4$ balls and pick a finite subcover. For any pair of centers $(x_i,y_i)$ of the balls we have some $N_0$ which gives $\epsilon$-closeness, and hence gives $2\epsilon$-closeness on the ball, hence the maximum $M$ of these $N_0$'s gives $2\epsilon$-closeness everywhere. Now we can prove $f$ is an isometry. To get a contradiction, assume you have $(x,y)$ with $d(f(x), f(y))< d(x,y)$, hence for some $\epsilon$ also $d(f(x), f(y))< d(x,y)-2\epsilon$. Then for any $k>0$, we have $d(f^k(x), f^k(y))< d(x,y)-2\epsilon$. Finally we use surjectivity. Take $(a,b)$ with $f^M(a)=x, f^M(b)=y$. Then $d(f^{M+k}(a), f^{M+k}(b))= d(f^k(x), f^k(y))$ is $2\epsilon$-close to $d(f^M (a), f^M(b))=d(x,y)$ for some $k$, contradiction.
spanning tree on hypergraph
Deciding if a $k$-uniform hypergraph has a linear spanning tree is NP-complete for all $k\geq 3$. Of course one could still write a slow brute force algorithm to do this. Unfortunately Kruskal's algorithm will not work usually. Take for example the $3$-uniform hypergraph on vertex set $[5]$ with edges $\{1,2,3\},\{3,4,5\},\{2,3,4\}$. If one attempts to naïvely run Kruskal's algorithm, and $\{2,3,4\}$ is the first edge chosen, then the algorithm will fail even though our hypergraph does have a spanning tree.
I need help figuring this error percentage homework problem.
Hoping that I am not wrong, you are asked to evaluate the change of $$f(x) = \frac{55}{2x^2 + 1}$$ if $x$ changes by $10$% around $x=2.4$, that is to say if $\Delta x=0.24$. Using derivatives, you have $$\frac {df(x)}{dx}=-\frac{220 x}{\left(2 x^2+1\right)^2}$$ and then $$\Delta f(x)=-\frac{220 x}{\left(2 x^2+1\right)^2}\Delta x$$ So, if $x=2.4$, then $f(2.4)=4.39297$ and $\frac {df(x)}{dx}=-3.36841$. I am sure that you can take it from here.
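A quick numerical comparison of the linear estimate $\Delta f\approx f'(x)\,\Delta x$ against the exact change:

```python
# f(x) = 55/(2x^2+1); compare the differential estimate with the exact change
# when x = 2.4 moves by 10%, i.e. dx = 0.24.
def f(x):
    return 55 / (2 * x * x + 1)

def fprime(x):
    return -220 * x / (2 * x * x + 1) ** 2

x, dx = 2.4, 0.24
estimate = fprime(x) * dx   # linear (differential) approximation
actual = f(x + dx) - f(x)   # exact change
print(round(f(x), 5), round(fprime(x), 5), round(estimate, 4), round(actual, 4))
```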
Show that a vectorfield is rotation free.
Calcuate the partial derivatives $\frac{\partial Q_G}{\partial x}$, $\frac{\partial P_G}{\partial y}$ using the definition of partial derivative. $$\frac{\partial Q_G}{\partial x}(0,0)=\lim_{x_\to 0}\frac{Q_G(x,0)-Q_G(0,0)}x=\cdots$$ $$\cdots$$
Area of a surface - surface integral
The cylinder you want to find the surface area of is $z=f(x,y)=\sqrt{a^2-x^2}$, so the differential of surface area is $dS=\sqrt{1+\left(\frac{-x}{\sqrt{a^2-x^2}}\right)^2}\,dxdy$. Note that we are only taking the upper half of the surface, so we will double our answer in the end. $$ \begin{aligned} \iint_SdS&=\iint_D \sqrt{1+\left(\frac{-x}{\sqrt{a^2-x^2}}\right)^2} dx dy \\ &=\int_{-a}^a\int_{-\sqrt{a^2-x^2}}^{\sqrt{a^2-x^2}} \frac{a}{\sqrt{a^2-x^2}} dy dx \\ &=\int_{-a}^a2a\,dx = 4a^2. \end{aligned}$$ Then we double the answer to get $8a^2$ for the surface area desired.
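As a cross-check, one can parametrise the same cylinder as $(a\cos t,\, y,\, a\sin t)$ with $|y|\le a|\sin t|$, so the area is $\int_0^{2\pi} 2a^2|\sin t|\,dt = 8a^2$; this parametrisation is my addition, used here only to verify the $8a^2$ result numerically:

```python
import math

# Midpoint rule for S = integral over [0, 2*pi] of 2*a^2*|sin t| dt with a = 1.
a = 1.0
N = 100000
h = 2 * math.pi / N
total = sum(2 * a * a * abs(math.sin((i + 0.5) * h)) * h for i in range(N))
print(total, 8 * a * a)  # both about 8.0
```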
relations among the bounds of $f(x),f'(x),f''(x)$
Let $M_0,M_1,M_2$ denote the least upper bounds of $|f(x)|,|f'(x)|,|f''(x)|$ respectively. If $h>0$, Taylor's theorem shows that $$f'(x)=\frac{1}{2h} \left[f(x+2h)-f(x) \right]-hf''(\xi) $$ for some $\xi \in(x,x+2h)$. Hence $$|f'(x)| \leq h M_2+\frac{M_0}{h} $$ We would like to choose the number $h$ which makes the RHS minimal, in order to find the optimal bound. It is easy to differentiate WRT $h$, and see that the minimum occurs when $h=\sqrt{\frac{M_0}{M_2}}$, so that $$|f'(x)| \leq 2\sqrt{M_0 M_2} $$ for all $x \in (0,\infty)$. Taking the supremum we find $$|M_1| \leq 2\sqrt{M_0M_2}. $$ In our case, $M_0 \leq 1$ and $M_2 \leq 2$ so that $$|f'(x)| \leq 2\sqrt{2}$$ for all $x \in (0,\infty)$.
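A spot-check of the Landau-type bound $M_1\le 2\sqrt{M_0M_2}$ on a sample function of my choosing, $f=\sin$, for which $M_0=M_1=M_2=1$ and the bound reads $1\le 2$:

```python
import math

# Sample f, f', f'' on a grid in (0, 20) and compare sup norms to the bound.
xs = [i * 0.001 for i in range(1, 20000)]
M0 = max(abs(math.sin(x)) for x in xs)    # sup |f|
M1 = max(abs(math.cos(x)) for x in xs)    # sup |f'|
M2 = max(abs(-math.sin(x)) for x in xs)   # sup |f''|
print(M1 <= 2 * math.sqrt(M0 * M2) + 1e-9)  # True
```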
Get zero, poles and gain from state space model?
Static gain is simply $C(-A)^{-1}B+D$, directly from the transfer function definition. Computing zeros is more involved, as it depends on your definition of a zero (invariant or transfer). There are some references in MATLAB's help: https://se.mathworks.com/help/control/ref/tzero.html
Solving one equation for two unknowns
Yes, there are infinitely many solutions to this equation. However, there are only two unique solutions for $(x,y)\in\mathbb{N}^2$. This can, indeed, be found by unique prime factorization. $10125=3^4\cdot5^3=x^2\cdot5^y$. Thus, the solution sets are $(x,y)=(9,3),(45,1)$
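A brute-force search confirming that these are the only solutions in $\mathbb{N}^2$:

```python
# Find all (x, y) with x^2 * 5^y = 10125; since 5^y <= 10125, y <= 6 suffices.
solutions = []
for y in range(0, 7):
    power = 5 ** y
    if 10125 % power == 0:
        rest = 10125 // power
        x = int(rest ** 0.5)
        if x * x == rest:
            solutions.append((x, y))

print(solutions)  # [(45, 1), (9, 3)]
```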
Approximating reals with rationals
The rationals with denominator less than $N$ form part of the Farey sequence. The largest gap is between $0$ and $\frac 1N$ and again from $\frac {N-1}N$ to $1$. The smallest gap is from $\frac 1N$ to $\frac 1{N-1}$ and again from $\frac{N-2}{N-1}$ to $\frac {N-1}N$, a gap of $\frac 1{N(N-1)}$ To find the best rational approximation to a given $x$, you find its continued fraction and quit with the last convergent with denominator less than $N$, and see if a small adjustment can improve it.
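Python's `fractions.Fraction.limit_denominator` implements exactly this best-rational-approximation idea via continued fractions; a standard example:

```python
import math
from fractions import Fraction

# Best rational approximation to pi with denominator at most 10: the classic 22/7.
approx = Fraction(math.pi).limit_denominator(10)
print(approx)  # 22/7
```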
Trigonometry basic proof problem
Since $\tan x=\frac14$ and $\tan y=\frac35$,$$\tan(x+y)=\frac{\tan(x)+\tan(y)}{1-\tan(x)\tan(y)}=\frac{\frac{17}{20}}{1-\frac3{20}}=1.$$Can you take it from here?
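A numerical confirmation:

```python
import math

# With tan x = 1/4 and tan y = 3/5, check that tan(x + y) = 1.
x = math.atan(1 / 4)
y = math.atan(3 / 5)
print(math.tan(x + y))  # 1.0 up to floating-point rounding
```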
Expected Value: What exactly did I do?
Well, out of the total number of M&M's in the jar, which is your $x$, exactly $1000$ didn't have an M on them. Therefore, a fraction $\frac{1000}{x}$ of the jar consists of M&M's without an M. Then, out of the (randomly chosen) sample of $1000$ M&M's, $245$ of them had no M. We expect $\frac{245}{1000}$ to be the same fraction as $\frac{1000}{x}$ since it is a randomly selected sample, from which we can set up the proportion $\frac{245}{1000}=\frac{1000}{x}$.
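Solving the proportion numerically:

```python
# 245/1000 = 1000/x  =>  x = 1000^2 / 245, the expected total jar size.
x = 1000 * 1000 / 245
print(round(x))  # about 4082
```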
Norm $||.||$ on $C(X)$ is equivalent to $||.||_{\infty}$ norm if evaluation linear functionals on $(C(X),||.||)$ is continuous.
Since OP never posted the solution (as promised in the comments!) I will add the answer: We consider the identity operator $I:(C(X),\|\cdot\|)\to(C(X),\|\cdot\|_\infty)$ and we want to show that it is bounded. Since both spaces are Banach, we apply the closed graph theorem. Let $(f_n)\subset C(X)$ so that $f_n\to0$ in $\|\cdot\|$. We want to show that $f_n\to0$ in $\|\cdot\|_\infty$ and by the closed graph theorem we can assume that $f_n\to f$ in $\|\cdot\|_\infty$ for some function $f\in C(X)$. All we need to do is show that $f=0$. Let $x\in X$. Then $|f(x)|=|\lambda_x(f)|=\lim_{n\to\infty}|\lambda_x(f_n)|$. But $f_n\to 0$ in $\|\cdot\|$ and $\lambda_x$ is continuous for $\|\cdot\|$, so $\lambda_x(f_n)\to\lambda_x(0)=0$. This shows that $f(x)=0$ and since $x$ was arbitrary we have that $f=0$. We also want to show that the identity operator $I:(C(X),\|\cdot\|_\infty)\to(C(X),\|\cdot\|)$ is bounded. Again, we apply the closed graph theorem. Let $f_n\to 0$ in $\|\cdot\|_\infty$ and we want to show that $f_n\to0$ in $\|\cdot\|$. By the closed graph theorem we can assume that $f_n\to f$ in $\|\cdot\|$ for some $f\in C(X)$. All we need to do is show that $f=0$. By the previous result we also have that $f_n\to f$ in $\|\cdot\|_\infty$, so since both $0$ and $f$ are uniform limits of $(f_n)$ we have that $f=0$. This concludes the proof.
Differential equations: Of non-vertical lines
The equation of any non-vertical line in the plane is $y = ax + b$, where $a \in \mathbb{R}$ and $b \in \mathbb{R}$. Therefore, the differential equation will be $y' = a$, for all $a$. EDIT: I let this slip by; we can in fact have $a = 0$. As Mathguy pointed out, this means all horizontal lines. Sorry.
Conjugacy class of permutatuion
The conjugacy class of a permutation consists of all the permutations with the same cycle structure. How many of those are there in $S_{12}$ with structure $(xx)(xxxxxxx)(x)(xx)$? Remember that disjoint cycles commute, so the order of the cycles doesn't matter, nor does the order of elements in each cycle.
An epidemic is modelled by $y=5e^{0.3t}$. Find another expression for $t ≥ 2$
The number of patients in hospital equals the number of patients infected minus the number of patients discharged. The number of patients discharged is the same as the number of patients who were infected 2 or more weeks ago. For $t>2$, $y = 5e^{0.3t} - 5e^{0.3(t-2)}$ $y = 5e^{0.3t}(1-e^{-0.6})\\ y = 5(0.4512)e^{0.3t}\\ y = 2.26 e^{0.3t}$
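Checking the constant:

```python
import math

# Verify the coefficient in y = 2.26 e^{0.3 t}.
c = 5 * (1 - math.exp(-0.6))
print(round(c, 2))  # 2.26
```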
Sum of two truncated gaussian
This is not an easy problem to obtain a closed-form solution to. As always, there are a number of different approaches, but unfortunately many of them seem to yield intractable outcomes. The approach taken here is to proceed manually step-by-step using the Method of Transformations, aided by using a computer algebra system to do the nitty-gritties where that is helpful. Given: Let $X \sim N(\mu_1, \sigma_1^2)$ be doubly truncated (below and above) at $(a_1, b_1)$, where $0<a_1<b_1$, and $Y \sim N(\mu_2, \sigma_2^2)$ be doubly truncated (below and above) at $(a_2, b_2)$, where $0<a_2<b_2$. Here is an illustrative plot of the doubly truncated Normal pdf, given different parameter values: Solution By virtue of independence, the joint pdf of $(X,Y)$, say $f(x,y)$, is the product of the individual pdfs, an expression involving Erf[.], the error function. Step 1: Let $Z=X+Y$ and $W=Y$. Then, using the Method of Transformations, the joint pdf of $(Z,W)$, say $g(z,w)$, is obtained, where: Transform is a function from the mathStatica package for Mathematica, which automates the calculation of the transformation and required Jacobian. The transformation $(Z=X+Y, W=Y)$ induces dependency in the domain of support between $Z$ and $W$. In particular, since $X$ is bounded by $(a_1,b_1)$, it follows that $Z=X+Y$ is bounded by $(a_1+W, b_1+W)$. This dependency is captured using the Boole statement, which acts as an indicator function. Because the dependency has been captured into pdf $g$ itself, we can enter the domain of support for pdf $g$ in standard rectangular fashion. Step 2: Given the joint pdf $g(z,w)$ just derived, the marginal pdf of $Z$ is obtained by integrating out $w$, which yields the 4-part piecewise solution sol, defined subject to the domain of support $a_1 + a_2 < Z < b_1 + b_2$.
The Marginal computation is far from simple: it takes about 10 minutes of pure computing time to solve on the new R2-D2 Mac Pro. All done. Illustrative Plot Here is a plot of the solution pdf of $Z = X + Y$, for the same parameter values used in the first diagram: $(\mu_1 = 5, \sigma_1 = 2, a_1 = 5, b_1 = 8)$ $(\mu_2 = 1, \sigma_2 = 4, a_2 = 7, b_2 = 9)$ Monte Carlo Check It is always a good idea to check symbolic work with a quick Monte Carlo comparison, just to make sure that no mistakes have crept in. Here is a comparison of the empirical pdf (Monte Carlo simulation of the sum of the doubly truncated Normals) [the squiggly blue curve] plotted together with the exact theoretical pdf (sol) derived above [red dashed curve]. Looks fine! Notes The Marginal function used above is also from the mathStatica package for Mathematica. As disclosure, I should add that I am one of the authors of the software used.
Is $\mathbb{Q}\times \mathbb{Q}$ a field?
For every $0 \neq a,b\in K$ for a field $K$ we have that $(a,0) \cdot (0,b)=(0,0)$ , so $K \times K$ is not an integral domain, hence not a field.
Is this set in $C[0,1]$ a countable union of nowhere dense sets?
$X$ is a complete metric space and closed subsets of complete metric spaces are complete. As a consequence, if $\mathcal A$ is closed it cannot be a countable union of nowhere dense sets, by the Baire category theorem. Is $\mathcal A$ closed? A set is closed if and only if it contains all its limit points. Let $x_n \in \mathcal A$ be a sequence converging to some $x \in X$. The point $x$ is in $\mathcal A$ if there exists $y \in X$ such that $x(t) = \int_0^{t^2} y(\tau) d\tau$ and $\|y\|_\infty \le 1$. Let $y_n$ denote the elements in $X$ such that $\|y_n\|_\infty \le 1$ and $x_n(t) = \int_0^{t^2} y_n(\tau) d\tau$. Let $y$ be the limit of $y_n$ in the $\sup$ norm. Let $\varepsilon > 0$ and $N$ be so large that $\|x_N-x\|_\infty < {\varepsilon \over 2}$ and $\|y_N-y\|_\infty < {\varepsilon \over 2}$. Then $$ \begin{align} \|x-\int_0^{t^2} y(\tau) d\tau\|_\infty &\le \left\| x-x_N\right\|+\left\| x_N-\int_0^{t^2} y(\tau) d\tau\right\|\\ &= \left\| x-x_N\right\|+\left\| \int_0^{t^2} y_N(\tau) d\tau-\int_0^{t^2} y(\tau) d\tau\right\| \\ &<{\varepsilon \over 2}+\left\| \int_0^{t^2} y_N(\tau) d\tau-\int_0^{t^2} y(\tau) d\tau\right\| \\ &={\varepsilon \over 2}+\left\| \int_0^{t^2} y_N(\tau)-y(\tau) d\tau\right\| \\ &<{\varepsilon \over 2} + {\varepsilon \over 2}= \varepsilon \end{align}$$ Since $\varepsilon$ was arbitrary, $x = \int_0^{t^2} y(\tau) d\tau$. Therefore $\mathcal A$ is closed and it cannot be a countable union of nowhere dense sets.
Determining the center of the p-Sylow subgroup of $S_p $
Like anon I assume that the question is about the centralizer of $P$ (with that typo/mistranslation the question makes IMHO more sense). The highest power of $p$ that divides $p!$ is $p^1$. Therefore the Sylow subgroups $P$ are all cyclic of order $p$. A permutation of prime order $p$ must be a product of disjoint $p$-cycles (ignoring fixed points). In the case of $S_p$ there is room for only one disjoint $p$-cycle, so we know that $P$ is generated by a $p$-cycle, call it $\sigma$. Because $P=\langle\sigma\rangle$ the centralizer of $P$ equals the centralizer of $\sigma$, $C_{S_p}(\sigma)$. Let's take a detour via conjugacy classes. We know from basic properties of symmetric groups that the conjugates of $\sigma$ are exactly all the $p$-cycles in $S_p$. How many are there? A $p$-cycle in $S_p$ will move all the numbers $1,2,3,\ldots,p$. Recall that in cycle notation we can choose the starting point of the cycle any way we want without changing the cycle, but no other changes are possible. So we can write all $p$-cycles $\alpha$ of $S_p$ uniquely in such a way that they begin with a $1$. We then have $p-1$ choices for the next number $=\alpha(1)$, $p-2$ choices for the next $=\alpha(\alpha(1))$ et cetera. Therefore there are exactly $(p-1)!$ different $p$-cycles in $S_p$. In other words, the conjugacy class $[\sigma]$ of $\sigma$ has size $(p-1)!$. But the orbit-stabilizer theorem (often introduced at about the same time as the class equation) tells us that $$ (p-1)!=|[\sigma]|=\frac{|S_p|}{|C_{S_p}(\sigma)|}=\frac{p!}{|C_{S_p}(\sigma)|}. $$ This implies that the centralizer $C_{S_p}(\sigma)$ has order $p$. Clearly $P\subseteq C_{S_p}(\sigma)$, so this means that we must have equality $$ P=\langle\sigma\rangle=C_{S_p}(\sigma)=C_{S_p}(P). $$
Orthonormal frame bundle orthogonal to a curve
The first idea only works when $\phi$ is a reparametrized geodesic. Write $\dot \phi= \tfrac{d}{dt}\phi$ and denote parallel transport along $\phi$ by $P_{\phi}$. Then $P_{\phi}(\dot\phi(0))$ remains tangent to $\phi$ iff there is an $\alpha:(-\varepsilon,\varepsilon)\to \mathbb{R}$ s.t. $P_{\phi}(\dot \phi(0))(t)=\alpha(t)\dot \phi(t)$ and then $\nabla_{t}(\alpha\dot \phi)=0$, i.e. $\nabla_t(\tfrac{d}{dt}\phi(\int_{0}^{t}\alpha(\tau)d\tau))=0$. If $\phi$ is not a reparametrized geodesic then assume it is normalised, choose an orthonormal (n-k)-frame $\{e_{i}\}$ at $T_{\phi(0)}M$ which is orthogonal to $\dot\phi(0)$ and solve the system of equations: $$\nabla_{t}E_{i}=\langle E_{i},\nabla_{t}\dot \phi\rangle \dot \phi,\quad E_{i}(0)=e_{i}$$ The solutions satisfy $\langle \nabla_{t}E_{i},\dot \phi\rangle=\langle E_{i},\nabla_{t}\dot \phi\rangle$ and therefore $\tfrac{d}{dt}\langle E_{i},\dot \phi\rangle=0$ and the orthogonality at $t=0$ is preserved for all $t$. Then also $\langle \nabla_{t}E_{i},E_{j}\rangle=0$ and therefore orthonormality of the $\{e_{i}\}$ is preserved too. Something similar will work for you second question, but I think you will need to involve $\eta$ in the system of equations.
Orthogonal vectors, $||\lambda a+\mu b||$
Given: $a$ and $b$ are orthogonal, so $\langle a , b \rangle = \langle b , a \rangle = 0$, and $\| a \| = \| b \| = 1$, which means $\sqrt{\langle a , a \rangle }= \sqrt{ \langle b , b \rangle }= 1$. We want to evaluate: \begin{align*} \| \lambda a + \mu b \|^2 &= \langle \lambda a + \mu b , \lambda a + \mu b \rangle \\ &=\langle \lambda a , \lambda a + \mu b \rangle + \langle \mu b , \lambda a + \mu b \rangle \\ &= \langle \lambda a , \lambda a \rangle + \langle \lambda a , \mu b \rangle + \langle \mu b , \lambda a \rangle + \langle \mu b , \mu b \rangle \\ &= \lambda\overline{\lambda} \langle a , a \rangle + \lambda\overline{\mu}\langle a , b \rangle + \mu\overline{\lambda}\langle b, a \rangle + \mu\overline{\mu}\langle b , b \rangle \\ &=\lambda\overline{\lambda}(1) + \lambda\overline{\mu}(0) + \mu\overline{\lambda}(0) + \mu\overline{\mu}(1) \\ &=\lambda\overline{\lambda} + \mu\overline{\mu}. \end{align*} Hence, $$\| \lambda a + \mu b \| = \sqrt{\lambda\overline{\lambda}+\mu\overline{\mu}}$$ where $\overline{\xi}$ denotes the complex conjugate of $\xi$; note if $\xi\in\mathbb{R}$ then $\overline{\xi}=\xi$.
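A numerical check with a concrete orthonormal pair and real scalars (my example):

```python
import math

# Standard orthonormal basis of R^2 with lambda = 3, mu = -4:
# the norm should be sqrt(3^2 + 4^2) = 5.
a = (1.0, 0.0)
b = (0.0, 1.0)
lam, mu = 3.0, -4.0
v = (lam * a[0] + mu * b[0], lam * a[1] + mu * b[1])
norm = math.hypot(v[0], v[1])
print(norm, math.sqrt(lam**2 + mu**2))  # 5.0 5.0
```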
Degree of splitting field of $x^5-1$ over $\mathbb{Q}$ and the order of its Galois group?
The polynomial is reducible: $x^5-1=(x-1)(x^4+x^3+x^2+x+1)$. You do not need to adjoin the element $1$ to the field (it is already there), only a root of the quartic factor, which is the fifth cyclotomic polynomial, irreducible over $\mathbb{Q}$; any one of its roots generates the rest. That is why the degree, and hence the order of the Galois group, is $4$.
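A numerical illustration (a sketch using `cmath`; the choice $\zeta=e^{2\pi i/5}$ is just one convenient primitive root): the four primitive fifth roots of unity are exactly the roots of the quartic factor, and the powers of any one of them run through all of them.

```python
# Check that the primitive 5th roots of unity satisfy the quartic factor
# of x^5 - 1, and that one of them generates the others as powers.
import cmath

def quartic(x):
    return x**4 + x**3 + x**2 + x + 1

zeta = cmath.exp(2j * cmath.pi / 5)   # a primitive 5th root of unity
for k in range(1, 5):                 # zeta, zeta^2, zeta^3, zeta^4
    assert abs(quartic(zeta**k)) < 1e-10
assert abs(zeta**5 - 1) < 1e-12       # and zeta^5 = 1
```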
"the standard two-fold branched cover of $CP^2$"
Identify $S^2 \cong \mathbb{C}P^1$ and let the symmetric group $S_n$ act on the $n$-fold product $S^2 \times \cdots \times S^2$ by permuting the factors. The quotient space is $\mathbb{C}P^n$, and the projection map is the branched cover in question. In my case $n=2$, and the diagonal sphere is the fixed set of the involution generating $S_2$, so it is the branching locus.
show that $y = \cos x$, has a maximum turning point at $(0, 1)$ and a minimum turning point at $(\pi, -1)$
$$f(x)=\cos x \qquad [-1,\pi]$$ Finding the $\min$ and $\max$: $$f'(x)=-\sin x$$ which is zero when $x=\color{red}{0}$ or $x=\color{red}\pi$ (critical points). $$f''(x)=-\cos x\\ f''(0)=-\cos 0=-1<0$$ $\Longrightarrow\quad x=0$ is a $\max$ of $\cos x$ $$\boxed{\max \text{ point }(0,1)}$$ $$f''(\pi)=-\cos \pi=1>0$$ $\Longrightarrow\quad x=\pi$ is a $\min$ of $\cos x$ $$\boxed{\min \text{ point }(\pi,-1)}$$
About the fractional part
We know $a + b \equiv 0 \pmod1$. Write $a = n + f$ where $n \in \mathbb{Z}$ and $0\leq f<1$. Then $n + f + b \equiv f + b \equiv 0 \pmod1$. Since $0 \le f + b < 2$, either $f+b=0$ (which forces $f=b=0$, i.e. both $a$ and $b$ are integers) or $f+b = 1$.
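A small illustration with exact rational arithmetic (the particular values of $a$ and $b$ are arbitrary examples):

```python
# a + b is an integer, b lies in (0, 1), and frac(a) + b = 1.
from fractions import Fraction
from math import floor

a = Fraction(17, 5)               # a = 3 + 2/5
b = Fraction(3, 5)                # chosen so that a + b = 4, an integer
assert (a + b).denominator == 1   # a + b is congruent to 0 (mod 1)

f = a - floor(a)                  # fractional part of a, here 2/5
assert 0 <= f < 1 and f + b == 1
```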
Variation in Einstein-Hilbert action
The symbol "$\delta$" in the calculus of variations, especially in physics, denotes the change of an output variable due to the change of input variable(s). In general, let $g:Y_1\times Y_2\times\cdots\times Y_n\to Z$ be a mapping; then $$ \left(\delta g\right)(y_1,y_2,\cdots,y_n):=g(y_1+\delta y_1,y_2+\delta y_2,\cdots,y_n+\delta y_n)-g(y_1,y_2,\cdots,y_n). $$ Here are some examples. (1) Consider $f:\mathbb{R}\to\mathbb{R}$, which is a function. Then $$ \left(\delta f\right)(x)=f(x+\delta x)-f(x)=f'(x)\delta x+o(\delta x), $$ which is exactly the differential of this function, i.e., $$ \left({\rm d}f\right)(x)=f(x+{\rm d}x)-f(x)=f'(x){\rm d}x+o({\rm d}x). $$ It is conventional to keep only the first-order variation and drop the higher-order terms. Thus $$ \left(\delta f\right)(x)=f'(x)\delta x. $$ (2) Consider $J:\mathscr{F}\to\mathbb{R}$ with $\mathscr{F}$ being some function space, which is a functional. Then $$ \left(\delta J\right)(f)=J(f+\delta f)-J(f). $$ (3) Consider $g=\det\left(g_{\mu\nu}\right)$. Recall the definition of the determinant: $g$ can be regarded as a polynomial function of all $g_{\mu\nu}$'s. For example, $$ \left(g_{\mu\nu}\right)=\left( \begin{array}{cc} a&b\\ c&d \end{array} \right)\Longrightarrow g=ad-bc, $$ which is a second-order multivariate polynomial. In this sense, $g$ is a function of all of its entries, and $\delta g$ is no more than the differentiation of $g$ with respect to those entries (Jacobi's formula), i.e., $$ {\rm d}g=g^{\nu\mu}g\,{\rm d}g_{\mu\nu}\iff\delta g=g^{\nu\mu}g\,\delta g_{\mu\nu}. $$ (4) Consider $\mathcal{L}[x(t)]=\left(m/2\right)\dot{x}^2(t)-\Phi_g(x(t))$ with $\Phi_g$ being, say, a gravitational potential; this is the integrand of the Newtonian action. In this case, "$\delta$" is a shorthand for the respective functional derivative.
That is, define $$ J[x(t)]=\int_{t_1}^{t_2}\mathcal{L}[x(t)]{\rm d}t, $$ and we have, using integration by parts (with $\delta x$ vanishing at the endpoints), $$ \left(\delta J\right)[x(t)]=\int_{t_1}^{t_2}\left(-m\ddot{x}(t)-\left(\nabla\Phi_g\right)(x(t))\right)\delta x(t){\rm d}t. $$ With this in mind, denote $$ \delta\mathcal{L}[x]=\left(-m\ddot{x}-\left(\nabla\Phi_g\right)(x)\right)\delta x. $$ Remark. In this last example, $\mathcal{L}[\cdot]$ and $\mathcal{L}(\cdot)$ do not mean the same thing. $\mathcal{L}[\cdot]$ is the mapping defined above, while $\mathcal{L}(\cdot)$ is simply regarded as a function, e.g., let $\mathcal{L}(p,q)=\left(m/2\right)p^2-\Phi_g(q)$; then $\mathcal{L}[x(t)]=\mathcal{L}(p,q)|_{p=\dot{x}(t),q=x(t)}$.
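Example (3) can be checked numerically: Jacobi's formula $\delta g=g^{\nu\mu}g\,\delta g_{\mu\nu}$ says the first-order change of the determinant is $\det(g)\,\mathrm{tr}(g^{-1}\,\delta g)$. A minimal sketch for a $2\times 2$ symmetric matrix (the entries and the perturbation are arbitrary):

```python
# First-order check of Jacobi's formula for a 2x2 symmetric matrix:
# delta(det g) ~ det(g) * g^{nu mu} * delta g_{mu nu} = det(g) * tr(g^{-1} dg).
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    d = det2(m)
    return [[m[1][1] / d, -m[0][1] / d],
            [-m[1][0] / d, m[0][0] / d]]

g = [[2.0, 0.3], [0.3, 1.5]]          # an arbitrary symmetric "metric"
dg = [[1e-6, 2e-6], [2e-6, -1e-6]]    # a small symmetric perturbation

g_new = [[g[i][j] + dg[i][j] for j in range(2)] for i in range(2)]
delta_det = det2(g_new) - det2(g)     # exact change of the determinant

ginv = inv2(g)
jacobi = det2(g) * sum(ginv[n][m] * dg[m][n] for m in range(2) for n in range(2))
assert abs(delta_det - jacobi) < 1e-10   # they agree to first order
```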
Is there a probability measure on the Cantor set?
Consider the intervals $I_{i_1\cdots i_n}$ of length $1/3^n$ at the $n$th level of the construction of the Cantor set, and define a measure $\mu$ by $\mu(I_{i_1\cdots i_n})=1/2^n$. These assignments are consistent, since each level-$n$ interval contains exactly two level-$(n+1)$ intervals of measure $1/2^{n+1}$ each, so $\mu$ extends to a Borel probability measure supported on the Cantor set.
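A point of the Cantor set has a ternary expansion using only the digits $0$ and $2$, and under $\mu$ each digit is $0$ or $2$ with probability $1/2$. Sampling from this description is a quick way to see $\mu$ as a genuine probability measure (a sketch; the truncation depth and sample size are arbitrary):

```python
# Draw mu-distributed points by choosing random ternary digits in {0, 2};
# by symmetry the mean of this distribution is 1/2.
import random

def sample_cantor(depth=40, rng=random):
    """Approximate a mu-distributed point by truncating its ternary expansion."""
    x = 0.0
    for i in range(1, depth + 1):
        x += rng.choice((0, 2)) / 3**i
    return x

random.seed(0)
xs = [sample_cantor() for _ in range(20000)]
mean = sum(xs) / len(xs)
assert all(0.0 <= x <= 1.0 for x in xs)   # samples land in [0, 1]
assert abs(mean - 0.5) < 0.02             # sample mean near 1/2
```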
Volume of the revolution solid
Do this in two parts. First find the section of the volume between $y=2$ and $y=3$, then the section of the volume between $y=1$ and $y=2$. For the first volume, use the integral $$\pi\int_{2}^{3} \frac{16}{y^2}\,dy-\pi\int_{2}^{3} \frac{4}{y^2}\,dy$$ This is the difference of the volumes formed by rotating about the $y$-axis the region between $xy=4$ and the $y$-axis and the region between $xy=2$ and the $y$-axis, for $2\le y\le 3$. When you evaluate these integrals, you get $$\pi\int_{2}^{3} \frac{12}{y^2}\,dy=\pi\left(-\frac{12}{3}+\frac{12}{2}\right)=\pi(6-4)=2\pi$$ Now for the second region. For this I use the integrals $$4\pi-\pi\int_{1}^{2} \frac{4}{y^2}\,dy$$ which is the volume of the cylinder made by rotating a rectangle about the $y$-axis, minus the volume obtained by rotating the region between $xy=2$ and the $y$-axis about the $y$-axis. This gives us $$4\pi-\pi\left(-\frac{4}{2}+\frac{4}{1}\right)=4\pi-2\pi=2\pi$$ The total volume is the sum of the volumes, which is $$4\pi$$ Is this the correct answer?
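A Riemann-sum check of the washer integrals above (a sketch assuming, as the integrals suggest, rotation about the $y$-axis):

```python
# Midpoint-rule washer volumes; the two pieces and the total should come
# out to 2*pi, 2*pi and 4*pi respectively.
from math import pi, isclose

def washer_volume(outer, inner, y0, y1, n=100000):
    h = (y1 - y0) / n
    return sum(pi * (outer(y0 + (i + 0.5) * h) ** 2
                     - inner(y0 + (i + 0.5) * h) ** 2) * h for i in range(n))

# between y = 2 and y = 3: radii x = 4/y (outer) and x = 2/y (inner)
v1 = washer_volume(lambda y: 4 / y, lambda y: 2 / y, 2, 3)
# between y = 1 and y = 2: constant outer radius 2, inner radius x = 2/y
v2 = washer_volume(lambda y: 2.0, lambda y: 2 / y, 1, 2)

assert isclose(v1, 2 * pi, rel_tol=1e-6)
assert isclose(v2, 2 * pi, rel_tol=1e-6)
assert isclose(v1 + v2, 4 * pi, rel_tol=1e-6)
```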
Expectation of integral of involving geometric brownian motion
Hint $$\mathbb{E} \left( \exp(\alpha \cdot W_t) \cdot \int_0^t \exp (\gamma \cdot W_u) \, du \right) = \mathbb{E} \left( \int_0^t e^{\alpha \cdot (W_t-W_u)+(\gamma+\alpha) \cdot W_u} \, du \right)$$ Apply Fubini's theorem to interchange expectation with integration and note that for fixed $u \in [0,t]$, $W_t-W_u$ is independent of $W_u$, i.e. $$\mathbb{E}(e^{\alpha \cdot (W_t-W_u)+(\gamma+\alpha) \cdot W_u}) = \mathbb{E}(e^{\alpha \cdot (W_t-W_u)}) \cdot \mathbb{E}(e^{(\gamma+\alpha) \cdot W_u})$$ Since the exponential moments of a normally distributed random variable are well known, this allows you to compute the expression you are looking for.
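A Monte Carlo sanity check of the resulting formula $\int_0^t e^{\alpha^2(t-u)/2}\,e^{(\gamma+\alpha)^2 u/2}\,du$ (a sketch; `a`, `c` play the roles of $\alpha$, $\gamma$, and the parameter values, step count and path count are arbitrary):

```python
# Compare a simulated E[exp(a*W_t) * int_0^t exp(c*W_u) du] with the
# closed form  int_0^t exp(a^2 (t-u)/2) * exp((a+c)^2 u/2) du.
import math, random

random.seed(1)
a, c, t = 0.3, 0.2, 1.0
n_steps, n_paths = 200, 5000
dt = t / n_steps

mc = 0.0
for _ in range(n_paths):
    w, integral = 0.0, 0.0
    for _ in range(n_steps):
        integral += math.exp(c * w) * dt          # left-point rule for the du-integral
        w += random.gauss(0.0, math.sqrt(dt))     # advance the Brownian path
    mc += math.exp(a * w) * integral
mc /= n_paths

m = 10000   # fine midpoint quadrature of the closed form
exact = sum(math.exp(a * a * (t - u) / 2 + (a + c) ** 2 * u / 2) * (t / m)
            for u in (t * (i + 0.5) / m for i in range(m)))

assert abs(mc - exact) / exact < 0.05
```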
Distributions of sampling statistics problem (not a hw problem)
Let random variables $X_1, X_2, \dots, X_n$ be the lifetime of the first blade, the second, and so on. Here $n=36$. The $X_i$ are independent, and therefore the variance of their sum is the sum of their variances. Thus the sum $X_1+\cdots+X_n$ of the lifetimes has variance $8^2+8^2+\cdots +8^2$ ($n$ terms), that is, $(n)(8^2)$. The standard deviation of the sum is therefore $(\sqrt{n})(8)$. We also use an informal version of the Central Limit Theorem. The sum of $n$ independent identically distributed "nice" random variables is "nearly normal" if $n$ is large enough. The numerical questions: (a) The sample average exceeds $60$ precisely if the sample sum exceeds $(36)(60)$. So in a sense problems (a) and (b) are very much alike. But there is a slightly different approach to (a) which is probably a bit better. Let $S$ be the sample sum discussed above. Then the sample average $Y$ is equal to $\dfrac{S}{n}$. The random variable $Y$ is also nearly normal. It has mean $40$. The variance of $\dfrac{S}{n}$ is $\dfrac{1}{n^2}$ times the variance of $S$ calculated above. It follows that $Y$ has variance $\dfrac{(n)(8^2)}{n^2}$, that is, $\dfrac{8^2}{n}$. Since $n=36$, $Y$ has standard deviation $\dfrac{8}{6}$. So we want the probability that a normal random variable with mean $40$ and standard deviation $\dfrac{8}{6}$ exceeds $60$. Since $60$ is fifteen standard deviation units away from $40$, the probability is nearly $0$. (b) The mean of the sum is $1440$. We want the probability that a nearly normal random variable with standard deviation $48$ is less than $1250$. This is a standard calculation. Pretty unlikely! Remark: You may be asking why the variance of an independent sum is equal to the sum of the variances. Here is a proof for a sum of two random variables $X$ and $Y$. The proof readily extends to longer sums. Let $X$ and $Y$ be independent, with variances $\sigma^2_X$ and $\sigma^2_Y$.
Recall that $\text{Var}(W)=E(W^2)-(E(W))^2$. Apply this with $W=X+Y$. We have $$E((X+Y)^2)=E(X^2+2XY+Y^2)=E(X^2)+2E(XY)+E(Y^2)=E(X^2)+2E(X)E(Y)+E(Y^2).\tag{1}$$ (For the fact that $E(XY)=E(X)E(Y)$ we used independence.) Also, $$(E(X+Y))^2=(E(X)+E(Y))^2=(E(X))^2+2E(X)E(Y)+(E(Y))^2.\tag{2}$$ Using $(1)$ and $(2)$ and rearranging a bit, we find that $$\text{Var}(X+Y)=E(X^2)-(E(X))^2 +E(Y^2)-(E(Y))^2.$$ But the above is just $\sigma^2_X+\sigma^2_Y$.
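For part (b), the standard normal calculation can be sketched with nothing more than `math.erf`, writing $\Phi(z)=\tfrac12\bigl(1+\operatorname{erf}(z/\sqrt2)\bigr)$:

```python
# P(S < 1250) for a nearly normal sum S with mean 1440 and standard deviation 48.
from math import erf, sqrt

mean = 36 * 40            # 1440
sd = sqrt(36) * 8         # 48
z = (1250 - mean) / sd    # about -3.96 standard deviation units
p = (1 + erf(z / sqrt(2))) / 2
assert 0 < p < 1e-4       # "pretty unlikely" indeed
```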
Is $T$ a homeomorphism?
The problem here is that $T^{-1}$ is not continuous. In fact $$\frac 1n x^n \to 0 \ \ \ \mbox{ as $n \to \infty$}$$ but $$T^{-1} \left( \frac 1n x^n\right) = x^n$$ does not converge to $0$.
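A numerical illustration, assuming (as the example suggests) the sup norm on $[0,1]$: the images $\tfrac1n x^n$ shrink to $0$ while their preimages $x^n$ all have norm $1$.

```python
# Sup norms on a grid over [0, 1]: ||x^n / n|| = 1/n -> 0, but ||x^n|| = 1
# for every n, so the preimages under T do not converge to 0.
def sup_norm(f, m=10001):
    return max(abs(f(i / (m - 1))) for i in range(m))

for n in (1, 10, 100):
    assert abs(sup_norm(lambda x: x**n / n) - 1 / n) < 1e-9  # image shrinks
    assert sup_norm(lambda x: x**n) == 1.0                   # preimage does not
```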
Creating a function from graph
Two possible guesses. Probability density (Gaussian): $\large y=ce^{-\left(\frac{x-k}{h}\right)^2}$. A similar bell-shaped graph with heavier tails is the Lorentzian (Cauchy) curve $\displaystyle y=\frac{c}{1+(h(x-k))^2}$.
Proof of $R[X] \otimes_R M \cong M[X]$
There is indeed an obvious choice for a bilinear map $$ h\colon R[x]\times M\to M[x] $$ where $$ h\biggl(\,\sum_{i=0}^k r_ix^i,m\biggr)=\sum_{i=0}^k r_imx^i $$ Now, let $f\colon R[x]\times M\to N$ be a bilinear map. We need to find $\phi\colon M[x]\to N$ such that $f=\phi\circ h$. Define $$ \phi\biggl(\,\sum_{i=0}^k m_ix^i\biggr)= \sum_{i=0}^k f(x^i,m_i) $$ You should have no problem in proving it is $R$-linear, using bilinearity of $f$. Next, $$ \phi\circ h(x^i,m)=\phi(mx^i)=f(x^i,m) $$ and use again bilinearity.
Fermat's little theorem confusion
Since $3^6\equiv1\pmod7$ and $50=6\cdot8 +2\Rightarrow 3^{50}\equiv3^{6\cdot8 +2}\equiv3^{6\cdot8}3^2\equiv(3^6)^83^2\equiv3^2\equiv9\equiv2\pmod7.$
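This is easy to confirm with fast modular exponentiation:

```python
# Verify 3^6 = 1 (mod 7) (Fermat) and hence 3^50 = 3^2 = 2 (mod 7).
assert pow(3, 6, 7) == 1
assert pow(3, 50, 7) == pow(3, 2, 7) == 2
```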
Line integral under closed sign
$$\oint_C (2x-y^3)dx-(xy)dy=\iint_R\left(\frac{\partial(-xy)}{\partial x}-\frac{\partial(2x-y^3)}{\partial y}\right)\,dA=$$ $$=\iint_R\left(-y+3y^2\right)\,dx\,dy=\int_0^{2\pi}\int_1^3 r(-r\sin\theta+3r^2\sin^2\theta)\,dr\,d\theta=$$ $$=\int_0^{2\pi}\left(-\frac{26}3\sin\theta+60\sin^2\theta\right)\,d\theta=0+\left.30(\theta-\sin\theta\cos\theta)\right|_0^{2\pi}=60\pi$$
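A midpoint Riemann sum in polar coordinates confirms the value (note $\partial_y(2x-y^3)=-3y^2$, so the Green's-theorem integrand over the annulus is $-y+3y^2$):

```python
# Riemann sum of the double integral of (-y + 3y^2) over the annulus
# 1 <= r <= 3, using y = r*sin(theta) and area element r dr dtheta.
from math import pi, sin, isclose

nr, nt = 400, 400
dr, dth = (3 - 1) / nr, 2 * pi / nt
total = 0.0
for i in range(nr):
    r = 1 + (i + 0.5) * dr
    for j in range(nt):
        th = (j + 0.5) * dth
        y = r * sin(th)
        total += (-y + 3 * y * y) * r * dr * dth

assert isclose(total, 60 * pi, rel_tol=1e-3)
```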