Minimal Polynomial of linear map induced by quotients space divides the minimal polynomial of itself
$$ m(\bar{T})(v+U)=m(T)v+U=0+U $$ where of course $0+U$ is the zero element of $V/U$. So $m(\bar{T})=0$, and then by the basic property of minimal polynomials, the minimal polynomial of $\bar{T}$ must divide $m$.
Holomorphic vector fields on complex projective spaces
You can describe them using the Euler sequence. It turns out that they are the classes of homogeneous linear polynomial vector fields on $\mathbb{C}^{n+1}$ $$ \sum A_i(z_0, \dots , z_n)\frac{\partial}{\partial z_i} $$ (each $A_i$ a homogeneous polynomial of degree one) modulo $$R = \sum z_i \frac{\partial}{\partial z_i}, $$ the radial vector field. Take a look at Huybrechts' book Complex Geometry: An Introduction.
Is $T_{(r,0)}(TM)\cong T_rM$?
The dimension of the tangent space is twice the dimension of the manifold: $\dim T_{(r,0)}TM=2\dim T_rM$.
Dirac delta function of a function - can I make this transformation?
Yes, provided that the integral makes sense to begin with ($\rho$ should be continuous, so that we can integrate $\delta$ against it). Given $\epsilon>0$, let $\delta>0$ be such that $$|U-V|\le \delta\implies |\rho(U)-\rho(V)|\le \epsilon.$$ The idea is that the open set $\{\mathbf x: |U-\hat U(\mathbf x)|>\delta\}$ does not contribute to the integral. So, on the set that actually contributes to the integral, $\rho \circ \hat U$ is bounded between $\rho(U)-\epsilon$ and $\rho(U)+\epsilon$.
Find the condition on $t$ if the limit is defined.
To begin with, you made $2$ mistakes. First: the number $f(c)$ in fact depends on the variable $t$ and may change as $t$ varies, so the correct derivative is $$g'(t) = t\frac{d}{dt}f(c) + f(c).$$ Second: you used L'Hôpital's rule with a denominator going to $\infty$, but that is allowed only if the numerator also goes to $\infty$, which need not hold here (for example, take $f(x)=0$). Instead you can proceed as follows: $$h(t) = \lim_{n \to \infty} \frac{g(t+n)}{n} = \lim_{n \to \infty} \frac{\int_0^{t+n}f(x)\,dx}{n} = \lim_{n \to \infty} \frac{\int_0^nf(x)\,dx + \int_n^{t+n}f(x)\,dx}{n} =$$ $$\lim_{n \to \infty} \frac{n \int_0^1f(x)\,dx + \int_0^tf(x)\,dx}{n} = \lim_{n \to \infty} \left(\int_0^1f(x)\,dx + \frac{g(t)}{n}\right) = \int_0^1f(x)\,dx + \lim_{n \to \infty} \frac{g(t)}{n}$$ Now, since $f$ is a continuous periodic function, it is bounded, so $g(t)$ is finite for every $t$, and we get $$h(t) = g(1).$$ So option (c) is correct, and this is one way to prove it.
Distributional equality and independence
Take $A =\mathbb R$ to see that $P(Y \in B)=P(X \in B)$. Hence $X$ and $Y$ have the same distribution. We now get $P(X \in A, Y \in B)=P(X\in A) P(Y \in B)$ so $X$ and $Y$ are independent.
Convergence and value of improper integral
To evaluate the integral, we analyze the closed-contour integral $I$ given by $$I=\oint_C e^{iz^2}\,dz$$ where $C$ is comprised of (i) the line segment from $0$ to $R$, (ii) the circular arc from $R$ to $R(1+i)/\sqrt{2}$, and (iii) the line segment from $R(1+i)/\sqrt{2}$ to $0$. Since $e^{iz^2}$ is analytic in and on $C$, Cauchy's Integral Theorem guarantees that $I=0$. Then, we have $$\int_0^R e^{ix^2}\,dx+\int_0^{\pi/4}e^{iR^2e^{i2\phi}}iRe^{i\phi}\,d\phi-\frac{1+i}{\sqrt{2}}\int_0^R e^{-x^2}\,dx=0 \tag 1$$ Letting $R\to \infty$, the second integral on the left-hand side of $(1)$ approaches zero. Therefore, we find that $$\begin{align} \int_0^\infty e^{ix^2}\,dx&=\frac{1+i}{\sqrt{2}}\int_0^\infty e^{-x^2}\,dx\\\\&=\frac{1+i}{\sqrt{2}}\frac{\sqrt{\pi}}{2} \tag 2 \end{align}$$ Finally, equating real and imaginary parts of $(2)$, we obtain $$\int_0^\infty \sin(x^2)\,dx=\sqrt{\frac{\pi}{8}}$$ and $$\int_0^\infty \cos(x^2)\,dx=\sqrt{\frac{\pi}{8}}$$
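As a sanity check (not part of the original argument), the rotated-contour identity $\int_0^\infty e^{ix^2}\,dx = e^{i\pi/4}\int_0^\infty e^{-r^2}\,dr$ can be verified numerically with the standard library; `gauss_integral` is a made-up helper, and the truncation point $r=10$ is an arbitrary choice (the Gaussian tail beyond it is negligible):

```python
import math
import cmath

# Integrate e^{-r^2} on [0, upper] by composite Simpson's rule; e^{-r^2}
# decays so fast that truncating at r = 10 is far below machine precision.
def gauss_integral(upper=10.0, steps=100_000):
    h = upper / steps
    total = 0.0
    for i in range(steps):
        a, b = i * h, (i + 1) * h
        m = 0.5 * (a + b)
        total += (h / 6) * (math.exp(-a * a) + 4 * math.exp(-m * m) + math.exp(-b * b))
    return total

# Rotate back along the ray at angle pi/4:
# integral of e^{ix^2} over [0, inf) = e^{i pi/4} * integral of e^{-r^2}
fresnel = cmath.exp(1j * math.pi / 4) * gauss_integral()
target = math.sqrt(math.pi / 8)   # claimed value of both Fresnel integrals
```

Both the real and imaginary parts of `fresnel` agree with $\sqrt{\pi/8}$ to well beyond single precision.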
Is it possible to define the tangent space of a Lie group at points other than the identity?
I'm not sure I understood what you meant. The Lie algebra of a Lie group is identified with the tangent space at the identity; in any case, there is a canonical way to find a basis of the tangent space at any point, exploiting the left multiplication, an operation that every Lie group has. Let's work an example: take the Heisenberg group with generic element $$X=\left(\begin{array}{ccc} 1 & x & y\\ 0 & 1 & z\\ 0 & 0 & 1 \end{array}\right).$$ Now you have to compute the left action, which is easily found: $$L_{A}X=\left(\begin{array}{ccc} 1 & a & b\\ 0 & 1 & c\\ 0 & 0 & 1 \end{array}\right)\left(\begin{array}{ccc} 1 & x & y\\ 0 & 1 & z\\ 0 & 0 & 1 \end{array}\right)=\left(\begin{array}{ccc} 1 & x+a & y+az+b\\ 0 & 1 & z+c\\ 0 & 0 & 1 \end{array}\right).$$ This action $L_{A}$ brings the identity $I$ to the element $A$, i.e. $L_{A}(I)=A$. So you can use the differential $L_{A*}$ to send the tangent plane at the identity (i.e. your Lie algebra) to the tangent plane at the point $A$. First you have to compute the differential of the map $$L_{\left(a,\,b,\,c\right)}\left(x,\,y,\,z\right)=\left(x+a,\,y+az+b,\,z+c\right),$$ so you just differentiate to obtain $$L_{\left(a,\,b,\,c\right)*}=\left(\begin{array}{ccc} 1 & 0 & 0\\ 0 & 1 & a\\ 0 & 0 & 1 \end{array}\right).$$ Here I've used a shortcut to represent the element $$\left(\begin{array}{ccc} 1 & x & y\\ 0 & 1 & z\\ 0 & 0 & 1 \end{array}\right)\rightarrow\left(\begin{array}{c} x\\ y\\ z \end{array}\right);$$ otherwise you would have had to do it the long way. Now that we have the differential, we take a basis at the identity, i.e.
$$E_{1}=\left(\begin{array}{ccc} 0 & 1 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{array}\right),\,\,E_{2}=\left(\begin{array}{ccc} 0 & 0 & 1\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{array}\right),\,\,E_{3}=\left(\begin{array}{ccc} 0 & 0 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0 \end{array}\right)$$ with the usual commutation relations $$\left[E_{1},E_{2}\right]=\left[E_{2},E_{3}\right]=0,\,\,\left[E_{1},E_{3}\right]=E_{2}.$$ Then you define a basis at the point you want, namely $A$, as $$\left(E_{i}\right)_{A}=L_{A*}\left(E_{i}\right),$$ which is $$\left(E_{1}\right)_{A}=\left(\begin{array}{ccc} 0 & a & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{array}\right),\,\,\left(E_{2}\right)_{A}=\left(\begin{array}{ccc} 0 & 0 & a\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{array}\right),\,\,\left(E_{3}\right)_{A}=\left(\begin{array}{ccc} 0 & 0 & 0\\ 0 & 0 & a\\ 0 & 0 & 0 \end{array}\right).$$ If you compute the commutation relations you'll see that $$\left[E_{1},E_{2}\right]=\left[E_{2},E_{3}\right]=0,\,\,\left[E_{1},E_{3}\right]=a^{2}E_{2}.$$ I don't know what you want to do with this basis of the tangent space, but here you have it.
can anyone give a proof by definition: $11$ is prime in $ \mathbb{Z}[\sqrt{-5}] $
We will show that if $11$ divides $\alpha\beta$ then $11$ divides $\alpha$ or $11$ divides $\beta$. Since $11$ is obviously not a unit of $\mathbb{Z}[\sqrt{-5}]$, by the definition of prime it follows that $11$ is prime in $\mathbb{Z}[\sqrt{-5}]$. From the post, it is clear that you know that if $11$ divides $\alpha\beta$, then $11$ divides $N(\alpha)N(\beta)$ and therefore $11$ divides $N(\alpha)$ or $11$ divides $N(\beta)$. Let $\alpha=s+t\sqrt{-5}$. Then $N(\alpha)=s^2+5t^2$. Suppose that $11$ divides $s^2+5t^2$. We will show that $11$ divides $t$ (and therefore $11$ divides $s$). Suppose that $s^2+5t^2=11k$, and $11$ does not divide $t$. Let $wt\equiv 1\pmod{11}$ (there is such a $w$ since $t$ has an inverse modulo $11$). We get $(ws)^2\equiv -5\equiv 6\pmod{11}$. This is impossible, since $6$ is not a quadratic residue of $11$ (just try all squares up to $5^2$). Since $11$ divides $t$ and $s$, it follows that $11$ divides $\alpha$. If $11$ does not divide $N(\alpha)$, it must divide $N(\beta)$, and then we conclude in the same way that $11$ divides $\beta$.
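The two number-theoretic facts used above (that $6$ is not a quadratic residue modulo $11$, and that $11 \mid s^2+5t^2$ forces $11\mid s$ and $11\mid t$) are finite checks, so they can be confirmed by brute force:

```python
# 1) 6 is not a quadratic residue modulo 11.
residues = {(w * w) % 11 for w in range(11)}
assert 6 not in residues   # the residues are {0, 1, 3, 4, 5, 9}

# 2) 11 | s^2 + 5 t^2 happens only when 11 | s and 11 | t
#    (it suffices to check s, t modulo 11).
for s in range(11):
    for t in range(11):
        if (s * s + 5 * t * t) % 11 == 0:
            assert s == 0 and t == 0
```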
About boundedness of continuous functions on a bounded closed interval
If $f$ is unbounded on $I$ it does not necessarily mean that there is a value $a \in I$ for which $f(x) \to \pm\infty$ as $x \to a$. For example, consider $f(x) = (1/x)\sin(1/x)$ on $(0, 1)$. It is essential to use some form of completeness here, and note also that the requirement of a closed interval is essential: the result fails for functions continuous on open intervals (see the example $f$ given earlier). While there are many ways to prove this result (based on many different forms of the completeness property), I find the one via the Heine-Borel theorem to be very direct. Let $f$ be continuous on $[a, b]$ and define $f(x) = f(a)$ for $x < a$ and $f(x) = f(b)$ for $x > b$, so that $f$ is continuous everywhere. By continuity, for each $x \in [a, b]$ there is an open interval $I_{x}$ containing $x$ such that $|f(t) - f(x)| < 1$ for all $t \in I_{x}$. Thus $f$ is bounded on $I_{x}$, and we let $M_{x}$ be such that $|f(t)| < M_{x}$ for all $t \in I_{x}$. The collection of all such intervals $I_{x}$ forms an open cover of $[a, b]$, and by the Heine-Borel theorem a finite number of these intervals, say $I_{x_{1}}, I_{x_{2}}, \ldots, I_{x_{n}}$, cover $[a, b]$. Let $M = \max(M_{x_{1}}, M_{x_{2}}, \ldots, M_{x_{n}})$. Every point $t$ of $[a, b]$ lies in some $I_{x_{j}}$, hence $|f(t)| < M_{x_{j}} \le M$, and thus $|f(t)| < M$ for all $t \in [a, b]$.
What is the conditional distribution function of P(T < z| U=x) where U, V are uniformly distributed on [0,1] and T= Max(U,V)
If so, should I add them together, or should I separate the cases and say that if $x\le z$ the answer is $xz$, and if $x>z$ the answer is $z^2$?

The latter. You have a piecewise function. Indeed, there are a few other cases to consider. $$\begin{align}\mathsf P(U\leq x, T\leq z) ~=~& \begin{cases}\mathsf P(U\leq x, V\leq z) & : x<z \\[1ex] \mathsf P(U\leq z, V\leq z) & : z\leq x \end{cases} \\[1ex] =~& \begin{cases} 0 & : x< 0 ~\vee~ z<0 \\[1ex] xz & : 0\leq x < z\leq 1 \\[1ex] x & :0\leq x\leq 1<z \\[1ex] z^2 & : 0\leq z\leq x \leq 1~\vee~0\leq z\leq 1 <x \\[1ex]1 & : 1< z~\wedge~ 1<x \end{cases}\end{align}$$

I did the following: if $x>z$, the answer is $0$ because it is impossible. If $x\le z$, then $\mathsf P(\max(U,V)\le z \mid U = x)= \mathsf P(V \le z, U = x)/ \mathsf P(U=x)$; since $U$, $V$ are independent, the answer is $\mathsf P(V \le z) = z$.

So far so good. Can you express it piecewise, as above?

But how do I express $\mathsf P(\max(U,V)\le z \mid x-1/n \le U \le x+ 1/n)$?

$$\lim_{n\to \infty} \mathsf P(T\leq z\mid x-1/n \leq U \leq x+ 1/n)~=~\lim_{n\to \infty} \dfrac{\mathsf P(T\leq z, x-1/n\leq U\leq x+1/n)}{\mathsf P(x-1/n\leq U\leq x+1/n)}$$
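The interior cases of the piecewise joint CDF can be spot-checked by simulation; the test points and sample size below are arbitrary choices:

```python
import random

random.seed(0)
N = 200_000

def joint_cdf(x, z):
    # Piecewise formula from the answer, restricted to 0 <= x, z <= 1.
    return x * z if x < z else z * z

# Monte Carlo estimate of P(U <= x, max(U, V) <= z) for U, V ~ Uniform[0,1].
for x, z in [(0.3, 0.7), (0.8, 0.4), (0.5, 0.5)]:
    hits = sum(1 for _ in range(N)
               if (u := random.random()) <= x and max(u, random.random()) <= z)
    est = hits / N
    assert abs(est - joint_cdf(x, z)) < 0.01
```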
Limit of multivariable function $\frac{x^2 y^2}{x^2 + y^2}$
With the AM-GM inequality: for $(x,y)\neq (0,0)$ $$ 0 \leq \frac{x^2 y^2}{x^2+y^2} \leq \frac{x^2 y^2}{2\sqrt{x^2 y^2 }} = \frac{\lvert x y\rvert }{2} \xrightarrow[(x,y)\to(0,0)]{}0 $$ the last part being "easy" to quantify with $\epsilon$ and $\delta$: if $\sqrt{x^2+y^2}=\left\lVert \begin{pmatrix} x\\y\end{pmatrix}\right\rVert_2\leq \delta$, the very same AM-GM inequality implies $\sqrt{2\lvert xy \rvert }\leq \delta$.
If $f(n)\geq\beta \sum_{i=0}^{n-1}f(i)$,then $f(n)?$
Does the following help? Let $g(n) = \sum_{i=0}^{n-1} f(i)$. Then we have $f(n) \ge \beta g(n)$ and $g(n) = f(n-1) + g(n-1)$. So $$f(n) \ge \beta g(n) = \beta f(n-1) + \beta g(n-1) \ge \beta^2 g(n-1)+\beta g(n-1) = \beta(\beta+1)g(n-1)$$ $$\implies f(n) \ge \beta(\beta+1)\left(f(n-2)+g(n-2) \right) $$ $$\implies f(n) \ge \beta(\beta+1)^2g(n-2)$$ $$...$$ $$\implies f(n) \ge \beta (\beta+1)^{n-1} g(1)$$ As $g(1) = f(0)$, this means $f(n) \ge \beta (\beta+1)^{n-1} f(0)$
How to solve $2f-f_u=g(v)$?
Fix some $v$. Then you have $f_u = 2f - g(v)$, which is a (very simple) linear ODE in $u$ (namely $y' = 2y - g$, where $g$ is a constant). Solve it and you get $f(u,v)$ for this particular $v$. But since $v$ was arbitrary, you get $f(u,v)$. The solution is $f(u,v) = \frac 1 2g(v) + c(v)e^{2u}$, where $c$ is some function depending on $v$.
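The closed form can be checked by direct substitution; here is a numeric spot-check where $g(v)=\sin v$ and $c(v)=v^2$ are arbitrary made-up choices:

```python
import math

# Hypothetical choices just for the check: g(v) = sin(v), c(v) = v^2.
g = math.sin
c = lambda v: v * v

def f(u, v):
    # Claimed solution: f(u, v) = g(v)/2 + c(v) e^{2u}
    return 0.5 * g(v) + c(v) * math.exp(2 * u)

def f_u(u, v):
    # Exact partial derivative of f in u
    return 2 * c(v) * math.exp(2 * u)

# Verify the PDE 2f - f_u = g(v) at a grid of sample points.
for u in (-1.0, 0.0, 0.5):
    for v in (0.1, 1.0, 2.0):
        assert abs(2 * f(u, v) - f_u(u, v) - g(v)) < 1e-12
```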
Multivariable function continuity with a if $x=x_0$
We have that $$\max\left(2|x|,\sqrt{x^4+y^2}\right)=\frac{2|x|+\sqrt{x^4+y^2}+\left|2|x|-\sqrt{x^4+y^2}\right|}2$$ $$\max\left(|x|,|y|\right)=\frac{|x|+|y|+\left||x|-|y|\right|}2$$ then $$\dfrac{y-\max(2|x|,\sqrt{x^4+y^2})}{\sqrt{\max(|x|,|y|)+\sqrt{|x||y| }}}=\frac{y-\frac{2|x|+\sqrt{x^4+y^2}+\left|2|x|-\sqrt{x^4+y^2}\right|}2}{\sqrt{\frac{|x|+|y|+\left||x|-|y|\right|}2+\sqrt{|x||y| }}}=$$ $$=\sqrt r\,\frac{\sin \theta-\frac{2|\cos \theta|+\sqrt{r^2\cos^4 \theta+\sin^2 \theta}+\left|2|\cos \theta|-\sqrt{r^2\cos^4 \theta+\sin^2 \theta}\right|}2}{\sqrt{\frac{|\cos \theta|+|\sin \theta|+\left||\cos \theta|-|\sin \theta|\right|}2+\sqrt{|\cos \theta||\sin \theta| }}} \to 0$$
meaning of "linear form" in binary code
I guess your textbook refers to a linear code. Then it is true that every check digit $A_{n+i}$ can be written as a linear form $\mathbb{Z}_2^n \to \mathbb{Z}_2$ evaluated at $A_1,\ldots,A_n$. A linear form $f : \mathbb Z_2^n \to \mathbb Z_2$ has the form $$f(x_1,\ldots,x_n) = \lambda_1 x_1 + \lambda_2 x_2 + \ldots + \lambda_n x_n$$ where the $\lambda_i$ are fixed values in $\mathbb Z_2$.
Areas inside a Triangle
$$\frac{CE}{EA}=\frac{x}{y_1}=\frac{x+z}{y_1+y_2+x}\tag{1}$$ $$\frac{DB}{AD}=\frac{x}{y_2}=\frac{x+z}{y_1+y_2+x}\tag{2}$$ By comparing (1) and (2) we get: $$y_1=y_2=\frac x2$$ Substitute that into (1) and you'll get: $$\frac{x}{\frac x2}=\frac{x+z}{\frac x2+\frac x2+x}$$ $$z=3x\tag{3}$$ Use (3) to calculate $x,z$ from the total area of the triangle: $$x+x+\frac x2+\frac x2 + z=12$$ The rest is trivial.
Stream functions and divergence?
The problem with non-simply-connected domains is that the result of integration may depend on how the path of integration goes around the holes. A better-known manifestation of this phenomenon is the absence of a potential function for some irrotational fields. The standard example for your situation is the radial flow with velocity $$\vec V(x,y) = \frac{x}{x^2+y^2}\vec\imath+ \frac{y}{x^2+y^2}\vec\jmath$$ more compactly written as $\vec V=\vec r/|\vec r|^2$. This is a divergence-free flow in $\mathbb R^2\setminus \{(0,0)\}$. (Note that the flux through a circle centered at the origin is $2\pi$ regardless of its radius.) Yet, there is no stream function for this flow, for such a function would have a gradient pointing in the tangential direction, which leads to a contradiction when we come back to the original position after following the gradient.
Proving certain properties using curl, divergence, and gradient
You're nearly there for part b). Note, however, that $\nabla \cdot$ denotes the divergence, not the gradient. So, $$ \nabla \cdot (\mathbf m \times \mathbf r) = \frac{\partial }{\partial x}(bz - cy) + \frac{\partial }{\partial y}(cx - az) + \frac{\partial }{\partial z}(ay - bx) $$ That is, we should have no $\mathbf {i,j,k}$ in our answer. Also, all of these partial derivatives are zero since, as you said in the comment, $a,b,c$ are constant. Part c) is pretty tedious. You can save some effort if you avoid writing out cross products. In particular, we can write $$ \nabla \times (\mathbf m \times \mathbf r) = \mathbf i \times \frac{\partial }{\partial x}(\mathbf m \times \mathbf r) + \mathbf j \times \frac{\partial }{\partial y}(\mathbf m \times \mathbf r) + \mathbf k \times \frac{\partial }{\partial z}(\mathbf m \times \mathbf r) \\ = \mathbf i \times \left(\mathbf m \times \frac{\partial \mathbf r}{\partial x}\right) + \mathbf j \times \left(\mathbf m \times \frac{\partial \mathbf r}{\partial y}\right) + \mathbf k \times \left(\mathbf m \times \frac{\partial \mathbf r}{\partial z}\right) $$ and then use the BAC-CAB formula.
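A finite-difference spot-check of both identities ($\nabla\cdot(\mathbf m\times\mathbf r)=0$, and $\nabla\times(\mathbf m\times\mathbf r)=2\mathbf m$, which is what the BAC-CAB route yields); the constant vector $\mathbf m$, the test point, and the step size below are arbitrary choices:

```python
# m is a constant vector, r = (x, y, z); F(r) = m x r (cross product).
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

m = (1.0, 2.0, 3.0)
F = lambda x, y, z: cross(m, (x, y, z))
h = 1e-6
p = (0.4, -1.3, 2.2)           # arbitrary test point

def partial(i, j):
    # dF_i / dx_j at p, by central difference (exact here since F is linear)
    q_plus = list(p); q_plus[j] += h
    q_minus = list(p); q_minus[j] -= h
    return (F(*q_plus)[i] - F(*q_minus)[i]) / (2 * h)

div = partial(0, 0) + partial(1, 1) + partial(2, 2)
curl = (partial(2, 1) - partial(1, 2),
        partial(0, 2) - partial(2, 0),
        partial(1, 0) - partial(0, 1))
assert abs(div) < 1e-7                                   # divergence is 0
assert all(abs(curl[i] - 2 * m[i]) < 1e-6 for i in range(3))  # curl is 2m
```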
Sigma-compactness if there exists an open cover ...
It's in fact an equivalence. The easier direction is even easier than you showed: just note that $X = \bigcup_n \operatorname{Cl}(A_n)$ is a countable union of compact sets and thus $\sigma$-compact. The $Q_n$ are not needed, as being increasing is not a requirement of $\sigma$-compactness (moreover, the $A_n$, and thus the $Q_n$, were already increasing, so the union is superfluous). For the other direction, write $X$ as an increasing union (using the finite-unions idea, indeed) of compact sets $K_n$, and apply the following lemma: in a locally compact Hausdorff space, for each compact $K$ and open $U$ with $K \subseteq U$ there is an open set $V$ such that $\operatorname{Cl}(V)$ is compact and $K \subseteq V \subseteq \operatorname{Cl}(V) \subseteq U$. Apply it to each of the $K_n$ and use the resulting $V_n$. Note that the existence of the $A_n$ already implies local compactness too. So for a Hausdorff space $X$ the following are equivalent: (1) $X$ is locally compact and $\sigma$-compact; (2) there exists an increasing cover of open sets $A_m$ with compact closure, as in the question.
Prime ideals of $\mathbb{C}[x]/(x^2+1)$ and $\mathbb{R}[x]/(x^2+1)$.
You are correct about $\mathbb{C}$. As for the unit: when you have a ring $R$ with a unit $1$ and an ideal $I$, then $R/I$ is also a ring with a unit, namely $1+I$. Now for $\mathbb{R}$: it's easy to see that your mapping is well defined, and also a ring homomorphism. Moreover, it's surjective: for every $α+βi$ with $α, β$ real, there is $p(x)=α+βx\in \mathbb{R}[x]$ such that $φ(p(x))=α+βi$. It remains to show that $\ker φ=(x^2+1)$. Obviously, $(x^2+1)$ is a subset of $\ker φ$. Moreover, $0\in\ker φ$, while every other real number does not belong to the kernel; nor do real polynomials of degree $1$, since that would mean they have $i$ as a root. So now take a polynomial $p(x)\in \ker φ$ with $\deg p(x)\ge 2$. The division algorithm gives $p(x)=(x^2+1)π(x)+r(x)$, with $\deg r(x)<2$ or $r(x)=0$. Since $p(i)=0$, we get $r(i)=0$, which holds if and only if $r(x)=0$. This shows that $p(x)\in (x^2+1)$, i.e. $\ker φ$ is a subset of $(x^2+1)$. This concludes the proof that $\ker φ=(x^2+1)$.
Fourier-Laplace Transform of Heaviside Step function multiplied to Sine
You need a table of correspondences for distributions. I use angular frequency $\omega$ and the non-unitary version of the Fourier transform: $$\sin\omega_0t\Longleftrightarrow -i\pi[\delta(\omega-\omega_0)-\delta(\omega+\omega_0)]\\ \Theta(t)\Longleftrightarrow\frac{1}{i\omega}+\pi\delta(\omega)$$ With these correspondences you get $$\Theta(t)\sin\omega_0t\Longleftrightarrow\frac{1}{2\pi}\mathcal{F}(\Theta(t))* \mathcal{F}(\sin\omega_0t)$$ where $*$ stands for convolution. The result of the convolution is $$\Theta(t)\sin\omega_0t\Longleftrightarrow -\frac12\left[\frac{1}{\omega-\omega_0}-\frac{1}{\omega+\omega_0}\right]-\frac{i\pi}{2}\left[\delta(\omega-\omega_0)-\delta(\omega+\omega_0)\right]$$ Note that WolframAlpha uses the unitary version of the Fourier transform which gives you different constants.
Show how to find the cubic root
Well, it is useful to rationalize step by step: ${\sqrt[3]{{\sqrt[3]{2} - 1}}}\cdot\frac{\sqrt[3]{{(\sqrt[3]{2})^2+\sqrt[3]{2}+1}}}{\sqrt[3]{{(\sqrt[3]{2})^2+\sqrt[3]{2}+1}}}=\frac{1}{\sqrt[3]{{(\sqrt[3]{2})^2+\sqrt[3]{2}+1}}}$ $=\frac{1}{\sqrt[3]{{(\sqrt[3]{2})^2+\sqrt[3]{2}+1}}}\cdot\frac{\sqrt[3]{3}}{\sqrt[3]{3}}=\frac{\sqrt[3]{3}}{\sqrt[3]{{3\sqrt[3]{2^2}+3\sqrt[3]{2}+3}}}$ $=\frac{\sqrt[3]{3}}{\sqrt[3]{{(\sqrt[3]{2}+1)^3}}}=\frac{\sqrt[3]{3}}{\sqrt[3]{2}+1}\cdot\frac{{{{\sqrt[3]{4}-\sqrt[3]{2}+1}}}}{{{{\sqrt[3]{4}-\sqrt[3]{2}+1}}}}=\frac{\sqrt[3]{3}\,(\sqrt[3]{4}-\sqrt[3]{2}+1)}{2+1}=\sqrt[3]{\frac{4}{9}}-\sqrt[3]{\frac{2}{9}}+\sqrt[3]{\frac{1}{9}}$
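A quick floating-point confirmation of the denesting (stdlib only):

```python
# Check that (2^(1/3) - 1)^(1/3) == (4/9)^(1/3) - (2/9)^(1/3) + (1/9)^(1/3).
cube = lambda t: t ** (1.0 / 3.0)

lhs = cube(cube(2.0) - 1.0)
rhs = cube(4.0 / 9.0) - cube(2.0 / 9.0) + cube(1.0 / 9.0)
assert abs(lhs - rhs) < 1e-12
```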
Adjoint of a matrix vs adjoint operator
It seems that for matrices the easiest procedure is to write down the $(i,j)$'th entry of $(A+B)^*$ and convince oneself that it is equal to the $(i,j)$'th entry of $A^*+B^*$.
On the largest and smallest values of $ {D_{\mathbf{u}} f}(x,y) $, assuming that $ ∇f(x,y) ≠ 0 $.
We have $\nabla f = (\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y})$. Let $u = (u_1, u_2)$. Then we have $$\nabla_u f (x, y)= \frac{d}{dt} f(x+ tu_1, y+tu_2)\Big|_{t=0} = \frac{\partial f}{\partial x}u_1 + \frac{\partial f}{\partial y}u_2 = \langle u, \nabla f (x, y)\rangle\ .$$ The second equality follows from the chain rule. To maximize the last quantity, note $$ \langle u, \nabla f (x, y)\rangle =|u| |\nabla f(x, y)| \cos \theta\ ,$$ where $\theta$ is the angle between the two vectors. In order to maximize this term, we should choose $u$ such that $\theta=0$; that is, $u$ points in the same direction as $\nabla f(x, y)$. Thus $$u =\frac{\nabla f(x, y)}{|\nabla f(x, y)|}$$ if you insist that $u$ have length one. A similar argument works for the minimum.
Can the empty set be an index set?
If you've seen How I Met Your Mother, you might remember the episode where Barney is riding a motorcycle inside a casino, and when the security guards grab him he points out one simple thing: "Can you show me the rule that says you cannot drive a motorcycle on the casino's floor?". Mathematics is quite similar: if nothing in the rules forbids it, it's allowed. Direct your attention to Definition 7.1.9 in that book; it says that an index set is the set of all indices of some family of computable [partial] functions/computably enumerable sets. The empty set is a family of computable functions, and the empty set is exactly its index set.
Show that not exists any polynomial function such that $f(x) = \log (1+x)$.
Maybe there are many ways to show this, but what I have in mind is this: if $f$ is a polynomial of degree $k$, then its $(k+1)$-th derivative is zero. But this is not the case for $\log(1+x)$: its $(k+1)$-th derivative is $\dfrac{(-1)^{k}\,k!}{(1+x)^{k+1}}$, which does not vanish.
Is induction valid when starting at a negative number as a base case?
Probably the easiest way to understand it all is to think about it as a sort of "reindexing": Most often encountered formulation of induction: Let $S(n)$ be a statement involving $n$. If (i) $S(1)$ holds, and (ii) for every $k\geq 1, S(k)\to S(k+1)$, then for every $n\geq 1$, the statement $S(n)$ holds. With the above formulation in mind, we can give a more general but equivalent formulation: More generalized formulation of induction: Let $S(n)$ denote a statement regarding an integer $n$, and let $k\in\mathbb{Z}$ be fixed. If $S(k)$ holds, and for every $m\geq k, S(m)\to S(m+1)$, then for every $n\geq k$, the statement $S(n)$ holds. Explanation: Let $T(n)$ be the statement $S(n+k-1)$, and repeat the above but instead with $T$ replacing every occurrence of $S$. Then the base case becomes $T(1)=S(1+k-1)=S(k)$, as desired. Example: Suppose you have the statement $S(n)$ where $$ S(n) : n+5\geq 0, $$ and you claim this is true for all $n\geq -5$, where $n\in\mathbb{Z}$. Your base case would be $n=-5$, and this is true since $-5+5=0\geq 0$. As explained above, when we reformulate the proposition, the base case becomes $T(1) = S(-5) = S(k)$. The following schematic may be easier to understand: $$ \color{blue}{T(n)}\equiv S(n+k-1) : (n+k-1)+5\geq 0\equiv n+k+4\geq 0\equiv\underbrace{\color{blue}{n-1}}_{k\,=\,-5}\color{blue}{\geq 0}\\\Downarrow\\[1em] \color{blue}{T(n): n-1\geq 0}. $$ As you can see above, proving $S(n)$ is true for all $n\geq -5$ is the exact same as proving $T(n)$ is true for all $n\geq 1$.
Prove that $\| v \|_{1} \|v\|_\infty \leq \frac{1+\sqrt{n}}{2} \|v\|^{2}_{2}$
Assume WLOG that all $v_i$'s are non-negative and that $v_1 = \max v_i = \|v\|_\infty$. Your inequality is then equivalent to $$ 2v_1(v_2+\ldots+v_n)\le(\sqrt n - 1)v_1^2 + (\sqrt n+1)(v_2^2+\ldots+v_n^2). $$ Now, for the right-hand side (RHS), \begin{align*} \operatorname{RHS} &= \left(v_1\sqrt{\sqrt n - 1} - \sqrt{\sqrt n + 1}\sqrt{v_2^2+\ldots+v_n^2}\right)^2 + 2v_1\sqrt{n-1}\sqrt{v_2^2+\ldots+v_n^2}\\ &\ge 2v_1\sqrt{n-1}\sqrt{v_2^2+\ldots+v_n^2}\\ &\ge 2v_1(v_2+\ldots+v_n), \end{align*} where the last step is Cauchy-Schwarz applied to $(v_2,\ldots,v_n)$ and $(1,\ldots,1)$.
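Since the statement is easy to get wrong by a constant, here is a randomized sanity check of the inequality $\|v\|_1\|v\|_\infty\le\frac{1+\sqrt n}{2}\|v\|_2^2$ (the sample sizes and ranges are arbitrary):

```python
import math
import random

random.seed(1)

def check(v):
    # Verify ||v||_1 * ||v||_inf <= (1 + sqrt(n))/2 * ||v||_2^2
    n = len(v)
    one = sum(abs(x) for x in v)
    inf = max(abs(x) for x in v)
    two_sq = sum(x * x for x in v)
    return one * inf <= (1 + math.sqrt(n)) / 2 * two_sq + 1e-12

for _ in range(1000):
    n = random.randint(1, 10)
    v = [random.uniform(-5, 5) for _ in range(n)]
    assert check(v)
```

Note that for $n=1$ the bound is tight: $|x|\cdot|x| = \frac{1+1}{2}x^2$.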
Help Constructing an infinitely differentiable function...
Use $M=\int_{[a,b]}\phi(x)\,dx$ and remove the $(b-a)$ from the denominator.
Number Theory: Prove that $x^{p-2}+\dots+x^2+x+1\equiv 0\pmod{p}$ has exactly $p-2$ solutions
If $a \in \mathbb{Z}_p \setminus \{0,1\}$ and $p$ is prime, then: $$1+a+a^2+\cdots +a^{p-2}=(a^{p-1} - 1)\cdot(a-1)^{-1} = 0,$$ because $a^{p-1}=1$. On the other hand, if $a=1$, then $$1 + a + a^2 + \cdots + a^{p-2} = \underbrace{1+1+ \cdots +1}_{p-1} = p-1 \equiv -1 \ne 0.$$ Finally, if $a = 0$, $$1 + a + a^2 + \cdots + a^{p-2} = 1 \ne 0. $$ So the solutions are exactly the $a \ne 0, 1$, which gives $p-2$ values.
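The count can be confirmed by brute force for a few small primes (the list of primes is an arbitrary sample):

```python
# Count the roots of 1 + x + ... + x^(p-2) modulo a prime p.
for p in (5, 7, 11, 13):
    roots = [a for a in range(p)
             if sum(pow(a, i, p) for i in range(p - 1)) % p == 0]
    assert len(roots) == p - 2       # exactly p - 2 solutions
    assert 0 not in roots and 1 not in roots
```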
For real $a$, $b$, $c$ all greater than $1$, show $\frac{a^a}{b^b}+\frac{b^b}{c^c}+\frac{c^c}{a^a} \;\ge\; \frac{a}{b}+\frac{b}{c}+\frac{c}{a}$
Note that $$0=\ln\dfrac{a^a}{b^b}+\ln\dfrac{b^b}{c^c}+\ln\dfrac{c^c}{a^a}=\ln\dfrac{a}{b}+\ln\dfrac{b}{c}+\ln\dfrac{c}{a}$$ Let (WLOG) $\;\ln\dfrac{a}{b}\ge\ln\dfrac{b}{c}\ge\ln\dfrac{c}{a}$, so that $\;\dfrac{a}{b}\ge\dfrac{b}{c}\ge\dfrac{c}{a}$. Thus $\;ac\ge b^2\;$, $\;a^2\ge bc\;$ and $\;ab\ge c^2\;$. Combining these $3$ inequalities gives us $a\ge b$ and $a\ge c$. Observe that (it is easy to show) $$\ln\dfrac{a^a}{b^b}\ge \ln\dfrac{a}{b}$$ $$\ln\dfrac{a^a}{b^b}+\ln\dfrac{b^b}{c^c}\ge \ln\dfrac{a}{b}+\ln\dfrac{b}{c}$$ Then we have $$\left(\ln\dfrac{a}{b}, \ln\dfrac{b}{c}, \ln\dfrac{c}{a}\right) \prec \left(\ln\dfrac{a^a}{b^b}, \ln\dfrac{b^b}{c^c}, \ln\dfrac{c^c}{a^a}\right)$$ or $$\left(\ln\dfrac{a}{b}, \ln\dfrac{b}{c}, \ln\dfrac{c}{a}\right) \prec \left(\ln\dfrac{b^b}{c^c}, \ln\dfrac{a^a}{b^b}, \ln\dfrac{c^c}{a^a}\right)$$ (in fact only the former majorization holds, but showing that requires more work, so allowing both possibilities keeps things short). Karamata's (majorization) inequality with $f(x)=e^x$ (convex, since $f''(x)=e^x>0$) gives the desired inequality $$\frac{a^a}{b^b}+\frac{b^b}{c^c}+\frac{c^c}{a^a} \;\ge\; \frac{a}{b}+\frac{b}{c}+\frac{c}{a}$$
How to test convergence for $\sum^{\infty}_{n} \frac{1}{(\ln{n})^3}$?
Note that the comparison test states the following: suppose we have two series $\sum a_n$ and $\sum b_n$ with $a_n,b_n\geq 0$ and $a_n\leq b_n$ for all $n$. If $\sum a_n$ is divergent, then so is $\sum b_n$. So, while you do have the right conclusion, you could run into a problem with this comparison. However, another way to test convergence is the integral test: $\int_2^\infty \dfrac{1}{(\ln x)^3}\,dx$ diverges, so the series diverges as well.
How to solve this 3-variables differential equation?
Hint: From the first two, $$b\frac{dx}x=a\frac{dy}y$$ gives $$y=c x^{b/a}.$$ Then with $\alpha:=\dfrac ba-1$, we eliminate $y$, $$\frac{dz}z=\frac xa(p+qcx^\alpha)(1+cx^\alpha)\,{dx}$$ and $$z=d\exp\left(\frac{x^2}2\left(\frac{c^2qx^{2\alpha}}{\alpha+1}+\frac{2c(p+q)x^{\alpha}}{\alpha+2}+p\right)\right).$$ Finally, eliminating $z$, we get a nasty separable equation in $x,t$: $${(1+cx^\alpha)}\,{\exp\left(-\dfrac{x^2}2\left(\dfrac{c^2qx^{2\alpha}}{\alpha+1}+\dfrac{2c(p+q)x^{\alpha}}{\alpha+2}+p\right)\right)}\,dx=-a d\,dt.$$
Using Central Limit Theorem when we NON-IID sample
Write $3S_{n} = \epsilon_{1}+ \epsilon_{n+2}+ 2(\epsilon_{2}+\epsilon_{n+1}) +3 \sum_{i=3}^{n}\epsilon_{i}$. Now we can use the central limit theorem, since the $\epsilon_i$ are independent. Here $a_{n}=E[3S_{n}]$ and $b_{n}= \operatorname{Var}(3S_{n})$, and then we get the result.
Finding the Laurent series of $f(z)=\frac{1}{(z-1)^2}+\frac{1}{z-2}$?
A related technique. Here is how you advance. $$ f(z)=\dfrac{1}{(z-1)^2}+\dfrac{1}{z-2} =\dfrac{1}{((z-4)+3)^2}+\dfrac{1}{(z-4)+2}$$ $$ = \dfrac{1}{9\left(\frac{(z-4)}{3}+1\right)^2}+\dfrac{1}{(z-4)(1+\frac{2}{z-4})} $$ $$ = \dfrac{1}{9\left(w+1\right)^2}+\dfrac{1}{(z-4)(1+t)}, $$ where $$ w= \frac{(z-4)}{3}\,\quad t= \frac{2}{z-4}. $$ Now, recalling the geometric series: the power series of $g(w)=\frac{1}{(1+w)^2}$ converges for $|w|<1$, which means $|z-4| <3 $. On the other hand, the power series of $h(t)=\frac{1}{1+t}$ converges for $|t|<1$, which gives $ |z-4|>2 $.
2-digit combinations
You are probably thinking of De Bruijn sequences. These are often generated for a binary "alphabet", but can be formed for any size $k$ alphabet. With wraparound the minimum sequence length of $k^n$ for all $n$ subsequences on an alphabet of size $k$ can be obtained. If wraparound is not allowed, then the sequence has to be "padded" with the first $n-1$ characters repeated at the end, for a total length of $k^n + n - 1$. The Combinatorial Object Server has a De Bruijn sequence generator (among other things), and for $n=2, k=10$ as in the Question, it produced this (lexicographically least) sequence: 0010203040506070809112131415161718192232425262728293343536373839445464748495565758596676869778798899
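A sequence like the one above can also be generated locally. The sketch below uses the standard recursive construction (often attributed to Fredricksen, Kessler, and Maiorana), which produces the lexicographically least de Bruijn sequence; the function name and structure are my choices, not the Combinatorial Object Server's:

```python
def de_bruijn(k, n):
    """Lexicographically least de Bruijn sequence over the alphabet {0..k-1}."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        # Recursively enumerate Lyndon words whose length divides n.
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return ''.join(str(d) for d in seq)

s = de_bruijn(10, 2)
assert len(s) == 100                       # k^n, with wraparound
wrapped = s + s[0]                         # pad with the first n-1 characters
windows = {wrapped[i:i + 2] for i in range(100)}
assert len(windows) == 100                 # every 2-digit string appears once
```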
Is the fourth root of 2 constructible?
Hint: If $a$ is constructible, then so is $\sqrt a$. [image from Wikipedia]
How many 2 digit even numbers can be formed from these numbers?
There is no general formula for this here, because the digits are specified. A way to understand it intuitively: $\_\_\ \ \_\_$ Suppose these dashes are the two digits. In order for a number to be even, the last digit must be even ($0, 2, 4, 6$ or $8$), so from your specified numbers the second digit can be $4$ or $6$. And since digits cannot be repeated, there are $3+1=4$ digits left for the first place. So by the multiplication principle, $$4\cdot 2=8$$ is the answer.
The Stupid Computer Problem : can every polynomial be written with only one $x$?
First, let's assume that Schanuel's conjecture is true and speak very loosely. :) Timothy Chow's 1999 article What is a Closed-Form Number? proves that the exponential and logarithm functions don't really help us to express algebraic numbers. Any algebraic number that can be expressed using those functions can also be expressed using only radicals. This is stated as Corollary 1 at the top of page 444. So the expression $x^5-x$ can't be rewritten with a single $x$ using exponentials and logarithms. If it could, we would be able to solve the equation $x^5-x=1$ by inverting those functions, which means we could solve it using radicals, which is impossible. Can we do without Schanuel's conjecture? Maybe. Chow hints at partial results that don't need the conjecture: It is folklore that general polynomial equations (i.e., those with variable coefficients) cannot be solved in terms of the exponential and logarithmic functions, although nobody seems to have written down a complete proof; partial proofs may be found in [C. Jordan, Traité des Substitutions et des Équations Algébriques, Gauthier-Villars, 1870, paragraph 513] and [V. B. Alekseev, Abel's Theorem in Problems and Solutions, Izdat. "Nauka," 1976 (Russian), p. 114]. I haven't tracked down those references. Maybe there's an equation where only the 0th coefficient is a variable, like $x^5-x+C=0$, that can't be solved in the relevant way. I'm not necessarily gunning for the bounty, so perhaps someone else can pick up the story from there?
Disjoint Cycles and Supports
For example, let's go over the "only if" part of the proof. First note that if $x\in \{i_1,...,i_r \}$ then $\alpha(x) \neq x$, and if $x\in \{j_1,...,j_s \}$ then $\beta(x) \neq x$ (why?). Assume that the cycles are disjoint, and assume for contradiction that $\{i_1,...,i_r \} \cap \{j_1,...,j_s \} \neq \emptyset$. Thus there exists some $x\in \{i_1,...,i_r \} \cap \{j_1,...,j_s \}$. For this $x$ we have both $\alpha(x) \neq x$ and $\beta(x) \neq x$, and that contradicts the definition of disjoint cycles.
Finding equilibria and determining their behaviour
You need $$ \frac {\alpha\beta I}{\alpha I+Nr}(N-I)-\mu I = 0. $$ If $I\ne 0$ then you can divide both sides by $I$: $$ \frac {\alpha\beta}{\alpha I+Nr}(N-I)-\mu = 0. $$ Can you solve that for $I$?
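For reference (the answer above deliberately leaves the algebra as an exercise), solving that linear equation for $I$, assuming $\beta+\mu\ne 0$, gives the nontrivial equilibrium:

$$\alpha\beta(N-I)=\mu(\alpha I+Nr)\quad\Longrightarrow\quad I=\frac{N(\alpha\beta-\mu r)}{\alpha(\beta+\mu)}.$$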
How find this $\sum_{k=0}^{100}a_{3k}$
Yes. $\frac13(f(1)+f(w)+f(w^2))=3^{149}$ is the desired result where $w$ is a primitive third root of unity.
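The identity used here is the roots-of-unity filter: for any polynomial $f(x)=\sum_k a_k x^k$, one has $\sum_k a_{3k} = \frac13\big(f(1)+f(w)+f(w^2)\big)$. Since the original question's $f$ is not reproduced above, the numeric check below uses an arbitrary made-up coefficient list:

```python
import cmath

coeffs = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]     # arbitrary example polynomial

def f(x):
    return sum(a * x ** k for k, a in enumerate(coeffs))

w = cmath.exp(2j * cmath.pi / 3)                # primitive third root of unity

# Roots-of-unity filter vs. summing every third coefficient directly.
filtered = (f(1) + f(w) + f(w * w)) / 3
direct = sum(a for k, a in enumerate(coeffs) if k % 3 == 0)
assert abs(filtered - direct) < 1e-9
```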
Understanding Preimage
Hint: by definition, the preimage of $\{4\}$ is $f^{-1}(\{4\})=\left\{\,x \in \mathbb{R} \mid f(x) \in \{4\}\,\right\}$. The above is equivalent to $x^2=4$, or $x=\pm 2$, so the preimage of $\{4\}$ is $f^{-1}(\{4\})=\{-2,2\}$.
Where can I find the theorem that says an n order diffeq has n solutions?
The answer has two parts. The first part is the transformation of a differential equation of order $n$ into a first-order system of dimension $n$. This is the usual setting $y_1=x$, $y_2=x'$, ..., $y_n=x^{(n-1)}$. The second part is the existence theorem, i.e. Picard-Lindelöf. It says that every initial value problem has at least a local solution, and the initial value has $n$ free parameters. Of course these $n$ parameters can be replaced via any bijective function. So in the end, a differential equation involving up to the $n$-th derivative requires $n$ integration constants in the solution. In that sense, your solution is complete: it has two integration constants for a second-order differential equation. Since the equation is not linear, one cannot expect the solution to be a linear combination of basis solutions. In general one cannot even expect that the solution can be separated as a sum of terms involving just one constant each, as happens to be the case here.
How do I generate a mathematical formula for the following question and answer?
If you multiply each part by the number of units in the reserve, you should get the number of units in the subset: $\text{part}=\text{Subset}/\text{Reserve}$
Are all the conditions of the Moore-Penrose inverse definitions necessary?
No. Let $A=\pmatrix{1&0\\ 0&0},B_1=\pmatrix{0&0\\ 0&0},B_2=\pmatrix{1&0\\ 0&x},B_3=\pmatrix{1&x\\ 0&0},B_4=B_3^T$ where $x\ne0$ is arbitrary. Then $B=B_1$ satisfies the conditions that $BAB=B$ and that $AB$ and $BA$ are Hermitian, but not $ABA=A$. $B=B_2$ satisfies the conditions that $ABA=A$ and that $AB$ and $BA$ are Hermitian, but not $BAB=B$. $B=B_3$ satisfies the conditions that $ABA=A,BAB=B$ and that $BA$ is Hermitian, but not that $AB$ is. $B=B_4$ satisfies the conditions that $ABA=A,BAB=B$ and that $AB$ is Hermitian, but not that $BA$ is.
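These claims are easy to check by machine; here is a sketch for $B_3$ with $x=1$, using plain $2\times2$ arithmetic (real entries, so Hermitian just means symmetric):

```python
# Check that B3 = [[1, x], [0, 0]] (here with x = 1) satisfies three of the
# four Moore-Penrose conditions but fails "AB is Hermitian".
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(P):  # real matrices, so Hermitian == symmetric
    return [[P[j][i] for j in range(2)] for i in range(2)]

A  = [[1, 0], [0, 0]]
B3 = [[1, 1], [0, 0]]

ABA = matmul(matmul(A, B3), A)
BAB = matmul(matmul(B3, A), B3)
AB, BA = matmul(A, B3), matmul(B3, A)

print(ABA == A)                 # True:  ABA = A
print(BAB == B3)                # True:  BAB = B
print(BA == transpose(BA))      # True:  BA is Hermitian
print(AB == transpose(AB))      # False: AB is not Hermitian
```

The same few lines, with the other $B_i$ substituted, verify the remaining three counterexamples.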
Malliavin Derivative
This is a consequence of the Clarke-Ocone Theorem, and uses the Malliavin derivative. See also the Clarke-Ocone formula paragraph here. If you want a technical reference, see this introductory course, mainly Theorem 1, p. 18.
Parameter estimation truncated Laplace distribution
1) There is an error (I suppose it is only a typo) in your $g(x)$ denominator: it should be $F(b)-F(a)$. 2) Calculate the usual MLE estimators and choose them if they lie in $[a,b]$; otherwise choose $a$ or $b$.
Convergence of subsequences
There is nothing to prove. Any sequence $(a_n)$ is a subsequence of itself. So any fact true for all subsequences of $(a_n)$ is automatically true of $(a_n)$. Remark: The other direction, the one not asked about, does require proof. It is the assertion that if the sequence $(a_n)$ converges to $a$, then every infinite subsequence of $(a_n)$ converges to $a$.
Functional equation: $f(f(x))=k$
Choose arbitrary subsets $A,B\subseteq\mathbb R$ such that $A\cup B=\mathbb R$ and $A\cap B=\emptyset$. We may assume without loss of generality that $k\in B$. Let $g:A\to B$ be an arbitrary function. Then define $$f(x)=\begin{cases}k;&amp;x\in B,\\ g(x);&amp;x\in A.\end{cases}\tag{$\ast$}$$ This clearly satisfies the requirements. Conversely, if $f:\mathbb R\to\mathbb R$ is any function that satisfies $f(f(x))=k$, we may choose $B=f(\mathbb R)$ and $A=\mathbb R\setminus B$ which shows that $f$ is of the form given above. So $(*)$ in fact gives all possible solutions.
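One concrete instance of $(\ast)$: take $A=(-\infty,0)$, $B=[0,\infty)$, $k=0$ and $g(x)=x^2$, which indeed maps $A$ into $B$; a quick check on sample points:

```python
# A = (-inf, 0), B = [0, inf), k = 0 (note k is in B), g(x) = x^2.
k = 0

def f(x):
    return k if x >= 0 else x**2   # x >= 0 means x in B; otherwise x in A

for x in [-3.5, -1.0, -0.25, 0.0, 0.5, 2.0, 100.0]:
    assert f(f(x)) == k
print("f(f(x)) = k on all samples")
```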
Iterates of $\frac{\sqrt{2}x}{\sqrt{x^2 +1}}$ converge to $\text{sign}(x)$.
You may argue as follows: $f(x) \leq 1$ for any $x \in (0, 1]$, since $f(x)^2=\frac{2x^2}{x^2+1}\le 1$ exactly when $x^2\le 1$; $f(x) \geq x$ for any $x\in(0, 1]$, with equality if and only if $x = 1$. Therefore for fixed $x\in(0,1]$, the sequence $(F_n(x))_{n\geq 1}$ is increasing and bounded above by $1$, hence the limit $\lim_{n\rightarrow \infty} F_n(x)$ exists; by continuity it is a fixed point of $f$ in $(0,1]$, so it must be equal to $1$. (For $x\gt1$ the inequalities reverse and the sequence decreases to $1$; for $x\lt0$ use that $f$ is odd.)
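A quick numerical illustration of the convergence (not part of the proof):

```python
import math

# Iterate f(x) = sqrt(2) x / sqrt(x^2 + 1); F_n(x) approaches sign(x).
def f(x):
    return math.sqrt(2) * x / math.sqrt(x * x + 1)

def iterate(x, n=60):
    for _ in range(n):
        x = f(x)
    return x

limits = {x0: iterate(x0) for x0 in (0.1, 0.9, 5.0, -0.3)}
print(limits)   # each value is numerically sign(x0)
```

Near the fixed point $1$ one has $f'(1)=\tfrac12$, so the error roughly halves each step, which is why a few dozen iterations suffice.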
Prove that group $G$ has subgroup with index 2
The relation $x^2=y^2$ is the same as $xxy^{-1}y^{-1}=1.$ If a group $G$ has a presentation in which all relations have even length, then there is a well defined length mod 2 of any $w \in G$, since the process of reducing a word by cancelling $tt^{-1}$ does not change the length of $w$ mod 2, and use of the relations does not change the length mod 2 either, since the relations have even length. So there is a homomorphism $f:G \to Z_2$ taking $w \in G$ to its length mod 2. The kernel $K$ of $f$ is then a subgroup of index 2.
Rational map on smooth projective curve
Assume $f:C\to C'$ is nonconstant. Let us view $C'$ as embedded in some projective space $\mathbb P^N$. Since $f$ is rational, where it is defined it is of the form $x\mapsto (f_0(x):\dots:f_N(x))$ for some rational functions $f_i\in k(C)$. Let $P\in C$ be any point. As $C$ is smooth, $P$ is a smooth point, thus its local ring is a DVR. So we can take a uniformizer $\pi\in \mathcal O_{C,P}\subset k(C)$. Define $$m:=\min\{\textrm{ord}_{P}(f_i)\},$$ so that $\textrm{ord}_P(\pi^{-m}f_i)\geq 0$ for every $i=0,\dots,N$, and for some $j$ we have $\textrm{ord}_P(\pi^{-m}f_j)=0$. This means that $f$ is defined at $P$. We used no hypothesis about $C'$, except its closed embedding in $\mathbb P^N$. Indeed, the same proof says that a map from a curve to a projective variety is defined at smooth points.
Showing a C* algebra with certain properties has a minimal projection
By proposition 6.3.3 of Blackadar $(K_0(A),K_0(A)_+)$ is an ordered group. The scale $\Sigma(A)$ is the image of Proj(A) in $K_0(A)$. Since $A$ is unital, the scale is simply the elements of $K_0(A)_+$ which are $\le [1_A]$, so the minimal element of the scale is the image of a minimal projection. The scale cannot be zero, since that would imply that $A$ is contractible, which implies $K_0(A)=0$.
Interval around a root of a function
It is not always true. Let $f(x)=x^2\sin(1/x)$ when $x\ne 0$, and let $f(0)=0$. Note that $f(x)=0$ whenever $x=\frac{1}{n\pi}$, where $n$ is a non-zero integer. In this example, $f$ is differentiable at $0$ but the derivative is not continuous at $0$. We can fix that if we want by using $x^3\sin(1/x)$. Added: The question has been clarified. Perhaps it asks now whether we can have a function which is everywhere differentiable, is not identically $0$, but is identically $0$ in some interval. The answer is yes, and such functions are even useful. Let $g(x)=e^{-1/x^2}$ when $x\gt 0$, and let $g(x)=0$ for $x\le 0$. Then $g(x)$ is everywhere infinitely differentiable. Using $g(x)$, we can construct a function $f(x)$ which is everywhere infinitely differentiable, is $0$ in the interval $[-1,1]$, and non-zero when $|x|\gt 1$.
In the context of linear algebra, is it possible for a vector space or a subspace to have a finite number of elements?
Such a vector space would have to be finite-dimensional, of course. But every finite-dimensional vector space $V$ over a field $\Bbb k$ is isomorphic to $\Bbb k^{\oplus n}$ for some $n$. This means that $V$ has finitely many vectors if and only if $V$ is finite-dimensional and $\Bbb k$ is a finite field. For each prime $p$ and $n\geq1$ it is known that there is a unique field of order $p^n$. This classifies all possible vector spaces with finitely many vectors.
$\sum_{r=0}^{19} (r+1)^4 \binom{20}{r+1}=\lambda. \, 2^{16}$
Hint: Set $r+1=n$ and use $n\binom{20}n=20\binom{19}{n-1}$: $$\sum_{n=1}^{20}n^4\binom{20}n=20\sum_{m=0}^{19}(m+1)^3\binom{19}m$$ Now write $(m+1)^3=m(m-1)(m-2)+Am(m-1)+Bm+C$ so that $$(m+1)^3\binom{19}m=m(m-1)(m-2)\binom{19}m+Am(m-1)\binom{19}m+Bm\binom{19}m+C\binom{19}m$$ $m=0\implies C=1$ $m=1\implies2^3=B+C\iff B=7$ $m=2\implies3^3=2A+2B+C\iff A=?$
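If you want to check your final answer, the sum can be computed directly; the few lines below verify that it is an exact multiple of $2^{16}$ and print $\lambda$:

```python
from math import comb

# Direct evaluation of sum_{r=0}^{19} (r+1)^4 * C(20, r+1).
s = sum((r + 1)**4 * comb(20, r + 1) for r in range(20))
assert s % 2**16 == 0            # the sum is an exact multiple of 2^16
print("lambda =", s // 2**16)
```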
Estimate of integral
Let $y_0=a,y_k=a+\frac{k}{n}(b-a)$, $$LHS=\left|\frac{1}{n}\sum_{k=1}^nf(x_k)-\frac{1}{b-a}\sum_{k=1}^n\int_{y_{k-1}}^{y_k}f(x)dx\right|$$ $$=\left|\frac{1}{n}\sum_{k=1}^n\left(f(x_k)-\frac{n}{b-a}\int_{y_{k-1}}^{y_k}f(x)dx\right)\right|$$ $$=\left|\frac{1}{n}\sum_{k=1}^n\left(f(x_k)-f(\xi_k)\right)\right| =\left|\frac{1}{n}\sum_{k=1}^n\int_{\xi_k}^{x_k}f'(x)dx\right|$$ $$\leq\frac{1}{n}\sum_{k=1}^n\left|\int_{\xi_k}^{x_k}|f'(x)|dx\right| \leq\frac{1}{n}\sum_{k=1}^n\int_{y_{k-1}}^{y_k}|f'(x)|dx$$ $$=\frac{1}{n}\int_{a}^{b}|f'(x)|dx=RHS.$$ Here we use the mean value theorem for integrals: $$\exists\ \xi_k\in(y_{k-1},y_k),\ \text{such that } f(\xi_k)=\frac{n}{b-a}\int_{y_{k-1}}^{y_k}f(x)dx;$$ and the Newton-Leibniz formula $$f(x_k)-f(\xi_k)=\int_{\xi_k}^{x_k}f'(x)dx.$$
Understanding Sylvester's Criterion determining positive definiteness
Yes, the diagonal entries of a positive definite matrix must be positive, and the diagonal entries of a positive semidefinite matrix must be nonnegative. But that's a necessary condition, not sufficient. You still need those determinants to be positive.
Slope of line tangent to function at point.
Function: $y^2 + (xy +1) ^3 = 0$ Differentiate implicitly: \begin{align*} y^2 + (xy +1) ^3 &amp;= 0\\ \implies \frac{d}{dx}\left(y^2 + (xy +1) ^3\right) &amp;= 0\\ \implies 2y\frac{dy}{dx}+3(xy+1)^2\left(y+x\frac{dy}{dx}\right)&amp;=0 \end{align*} Now substitute the point $(x,y)$ and make $\dfrac{dy}{dx}$ the subject. Alternatively, you could make $y$ the subject in your original function. Key Fact: $$\frac{d}{dx}y^2=\frac{dy}{dx}\frac{d}{dy}y^2=\frac{dy}{dx}2y$$ Edit (Chain rule and implicit differentiation): $$\frac{d}{dx} (xy +1) ^3 = 3(xy+1)^2\times\frac{d}{dx}(xy+1)$$ Note that for the chain rule, you just bring down the power from outside the bracket and reduce the power by one, then perform the differentiation on the inside.
Why is $A$ uncountable?
The statement in the box says the following: If $X\subset A$ and $X$ is countable then $X \ne A$. This shows that $A $ is uncountable since if you assume that $A$ is countable then $A\subset A$ so by the statement we proved, $A\ne A$ which is of course impossible.
When solving PDEs by separation of variables, why are we allowed to divide by the dependent variable?
Maybe you can use the separation of variables in $D\backslash W$, where $D$ is the whole domain and $W=\{(x,y)\mid XY=0\}$. The interior of $W$, denoted by $$\mathring{W}=\{(x,y)\in W\mid \text{there exists a neighborhood } V \text{ s.t. } (x,y)\in V\subset W\},$$ is empty. Then we can extend the solution to the whole domain by continuity. Laplace's equation as an example: 1) If $X=0, Y\neq 0$, then $Y\frac{{\rm d}^2X}{{\rm d}x^2}+X\frac{{\rm d}^2Y}{{\rm d}y^2}=0$ implies $\frac{{\rm d}^2X}{{\rm d}x^2}=0$, hence the equation $$ \frac{{\rm d}^2X}{{\rm d}x^2}+k^2X=0 $$ still holds. Similarly for $X\neq0,Y=0$. 2) If $X=Y=0$, let $U=\{(x,y)\mid X(x)=Y(y)=0\}$. The interior of $U$ is empty (if not, the solution would be the zero constant on an open subset, hence on the whole domain $D$). Now you can solve the problem safely in $D\backslash U$ and then extend to the whole domain by continuity. More generally, $W=\{(x,y)\mid XY=0\}$ is a set whose interior is empty, hence we can always safely solve the equation in $D\backslash W$, where $XY\neq 0$, and then extend to the whole domain by continuity. Maybe these thoughts can be modified to explain the validity of separation of variables.
If supposing that a statement is false gives rise to a paradox, does this prove that the statement is true?
It depends on the statement. Some statements e.g. This statement is false. lead to a contradiction whether you assume them true or false, so don't have an assignable truth value. You also need to know or prove that your statement has a truth value (i.e. is either true or false) before you can conclude your argument.
Order of magnitudes comparasions
There are quite a number of comparisons to be made. It is in most cases relatively straightforward to decide about the relative long-term size. Let's start with your pair. We have $f(n)=\log(n)/\log(\log (n))$ and $g(n)=n^{\log(n)}$. For large $n$ (and it doesn't have to be very large), we have $f(n)\lt \log(n)$. Also, for large $n$, we have $\log(n)\gt 1$, and therefore $g(n)=n^{\log(n)}\gt n$. So for large $n$, we have $$\frac{f(n)}{g(n)} \lt \frac{\log(n)}{n}.$$ But we know that $\lim_{n\to\infty}\dfrac{\log(n)}{n}=0$. This can be shown in various ways. For instance, we can consider $\dfrac{\log(x)}{x}$ and use L'Hospital's Rule to show this has limit $0$ as $x\to\infty$. It takes less time to deal with the pair $n/\log(n)$ and $n^{\log(n)}$. If $n$ is even modestly large, we have $n/\log(n)\lt n$. But after a while, $\log(n)\gt 2$, so $n^{\log(n)}\gt n^2$. It follows that in the long run, $n^{\log(n)}$ grows much faster than $n/\log(n)$. As a last example, let us compare $\log^3(n)$ and $n/\log(n)$. Informally, $\log$ grows glacially slowly. More formally, look at the ratio $\dfrac{\log^3(n)}{n/\log(n)}$. This simplifies to $$\frac{\log^4 (n)}{n}.$$ We can use L'Hospital's Rule on $\log^4(x)/x$. Unfortunately we then need to use it several times. It is easier to examine $\dfrac{\log(x)}{x^{1/4}}$. Then a single application of L'Hospital's Rule does it. Or else we can let $x=e^y$. Then we are looking at $y^4/e^y$, and we can quote the standard result that the exponential function, in the long run, grows faster than any polynomial. Remark: The second function in your list is the slowest one. Apart from that, they are in order. So you only need to prove four facts to get them lined up. A fair part of the work has been done above.
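A numerical spot check at a single large $n$ (no substitute for the limit arguments, but reassuring):

```python
import math

n = 10**6
log_n = math.log(n)

f1 = log_n / math.log(log_n)      # log(n)/log(log(n))
f2 = n / log_n                    # n/log(n)
f3 = log_n**3                     # log^3(n)
g_log = log_n**2                  # log of n^{log n}, to avoid overflow

# n^{log n} = e^{(log n)^2} is astronomically larger than the others,
# so compare it on the log scale.
print(f1 < f3 < f2)               # True at n = 10^6
print(g_log > math.log(f2))       # n^{log n} dwarfs n/log(n)
```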
Subset of first countable space closed iff intersection with every compact set is closed.
Suppose $x\in \bar S.$ Since $X$ is first countable, there is a sequence $\{x_n\}_{n\in\mathbb N}$ in $S$ which converges to $x.$ Then the set $T=\{x\}\cup\{x_n:n\in\mathbb N\}$ is compact. Since $x_n\in S\cap T$ for each $n\in\mathbb N,$ and since $S\cap T$ is closed, it follows that $x\in S.$
Probability of drawing winning combination of Magic Card at first turn
Since the first person does not win, he must not have drawn the $5$ winning cards in the first draw. The probability of this happening is $1$ minus the probability of him drawing those $5$ cards in the first draw, i.e. $$1-\frac{\binom55}{\binom{50}5}$$ The second person picks the $5$ winning cards and $1$ additional card, so this probability is $$\frac{\binom{45}1\cdot\binom55}{\binom{50}6}$$ The required probability is the product of the above two terms, assuming the events are independent.
On rings isomorphic to the ring of integers...
Hint: $\{n\cdot 1_R: n\in\mathbb Z\}$ is an ideal of $R$.
Prove that the following is true: a sentence is unsatisfiable if and only if it implies all other sentences.
So the first direction of the "if and only if" statement is done, i.e. the direction where you assume that $A$ is not satisfiable. For the second direction, why do you assume that $B$ implies $A$? In particular, you cannot assume here that $A$ is not satisfiable (as this is what you want to prove), which is what you actually do on row 4 in the second part. Instead, for the second direction, assume that the sentence $A$ implies all other sentences. In particular it implies $\neg A$, so $A\rightarrow \neg A$ holds. However, if $A$ is ever interpreted as true in any valuation, this implication fails. Thus we conclude that $A$ always needs to be interpreted as false, i.e. $A$ is unsatisfiable.
A persistent difference
This is known as Kaprekar's constant. We can partition the set of possible four-digit combinations into those with the same difference. Note that because $M$ and $m$ share the same digits, they are congruent modulo $9$ and so their difference is a multiple of $9$, and so the number of partitions we need to consider is smaller than one might imagine. In fact, the set of four-digit integers which can be written as $M-m$ is only on the order of $50$. So, these partitions can then be fairly easily enumerated and placed into a diagram much like the one found on Wikipedia. It's a rather 'brute force' approach, but it works.
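The brute-force enumeration is tiny. A sketch (four-digit strings padded with leading zeros; repdigits are excluded since they collapse to $0$):

```python
def kaprekar_step(n):
    digits = f"{n:04d}"                      # keep leading zeros
    big = int("".join(sorted(digits, reverse=True)))
    small = int("".join(sorted(digits)))
    return big - small

worst = 0
for n in range(1, 10000):
    if len(set(f"{n:04d}")) == 1:            # repdigits go straight to 0
        continue
    steps, m = 0, n
    while m != 6174:
        m = kaprekar_step(m)
        steps += 1
    worst = max(worst, steps)
print("every number reaches 6174; worst case:", worst, "steps")
```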
Cloth cutting algorithm
I think I've figured it out. Consider the edge $\{v_1, v_2\}$ and the two triangles $t_1$ and $t_2$ associated with that edge. I "sweep" through the edges incident to $v_1$ by iterating through adjacent triangles in a clockwise fashion (WLOG), starting with $t_1$. I reassign only those edges to $v_1'$. The rest stay assigned to $v_1$.
Finding a well-defined solution to a matrix equation
Let me restate the question, because I'm not entirely sure I understand it correctly. As I understand it, you want a function that takes $M$ and $TMT^\top$ and returns a matrix $S$ that depends only on $T$ and not on $M$, such that $SMS^\top=TMT^\top$. This is impossible. Given $T$, since $SMS^\top=TMT^\top$ for all positive-definite symmetric $M$, we have $$T^{-1}SMS^\top T^\top{}^{-1}=(T^{-1}S)M(T^{-1}S)^\top=M$$ for all positive-definite symmetric $M$. In particular, for $M=I$, we have $(T^{-1}S)(T^{-1}S)^\top=I$, so $T^{-1}S$ is orthogonal. Thus we have $$(T^{-1}S)M=M(T^{-1}S)$$ for all positive-definite symmetric $M$. But the only matrices that commute with all positive-definite symmetric matrices are multiples of the identity. Thus $S=\pm T$. So your function would have to return $T$ (up to a sign), which it can't, since different $T$s can lead to the same inputs.
Perfect square palindromic numbers
This is not an answer! The examples below a million are: $26^2=676$, $264^2=69696$, $307^2=94249$, $836^2=698896$, $2285^2=5221225$, $2636^2=6948496$, $22865^2=522808225$, $24846^2=617323716$, $30693^2=942060249$, $798644^2=637832238736$.
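The table can be reproduced by a direct search. The sketch below skips palindromic roots such as $11$ or $101$, whose squares are unsurprisingly palindromic — I'm assuming that is the filter behind the table:

```python
# Non-palindromic n below a million whose square is a palindrome
# (palindromic roots like 11 or 101 are skipped -- an assumed filter
# matching the table above).
found = [n for n in range(1, 10**6)
         if str(n) != str(n)[::-1] and str(n * n) == str(n * n)[::-1]]
print(found)
```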
how to prove $x+n+a = \sqrt{ax + (n+a)^2 + x \sqrt{a (x+n) + (n+a)^2 + (x+n)\sqrt{ 1 + \dots}}}$
This is the general form of Ramanujan's famous nested radical. $$(x+n+a)^2 = ax+(n+a)^2+x(x+2n+a),$$ so $$x+n+a = \sqrt{ax+(n+a)^2+x(x+2n+a)}.$$ Then $$x+2n+a = \sqrt{a(x+n)+(n+a)^2+(x+n)\big((x+n)+2n+a\big)},$$ etc.; keep replacing each $x+kn+a$ by rewriting it the same way.
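Numerically the truncated radical converges quickly. For the classic case $x=2$, $n=1$, $a=0$ the value should be $x+n+a=3$; the crude seed for the innermost tail gets washed out by the contraction of the recursion:

```python
import math

# Evaluate the nested radical from the inside out, truncating at depth K.
def nested(x, n, a, depth):
    t = 1.0                                    # arbitrary seed for the tail
    for k in reversed(range(depth)):
        xk = x + k * n                         # the "x" at nesting level k
        t = math.sqrt(a * xk + (n + a)**2 + xk * t)
    return t

print(nested(2, 1, 0, 40))   # close to 3.0
```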
The limit of arithmetric mean of a properly divergent sequence
For $n \gt k$ we have$$S_n = \frac {a_1+a_2+\cdots+a_k} n +\frac {a_{k+1}+a_{k+2}+\cdots+a_n} n$$ $$\gt\frac {a_1+a_2+\cdots+a_k} n +\frac {\beta +\beta+\cdots +\beta} n$$ $$=\frac {a_1+a_2+\cdots+a_k} n +\frac {n-k} n \beta $$ if $k$ is chosen so large that $a_j \gt\beta$ for $j \gt k$. Now let $ n \to \infty$.
An application of isomorphism theorem for quotient rings
1) A finite domain is a field. 2) $\mathbf Z/2\mathbf Z[y]/(p(y))$, where $p(y)$ is a polynomial of degree $d$, is a $\mathbf Z/2\mathbf Z$-vector space of dimension $d$, hence it has $2^d$ elements. Some more details to justify the equivalences above: If $I, J$ are ideals of a ring $R$, the ideal generated by $J$ in $R/I$ is $\;J\cdot R/I=(I+J)/I$, so $$(R/I)\big/J\cdot R/I=(R/I)\Big/\big((I+J)/I\big)\simeq R/(I+J).$$ For the last isomorphism, you can tensor the short exact sequence $$0\longrightarrow (y^2-y-1)\longrightarrow \mathbf Z[y]\longrightarrow \mathbf Z[y]/(y^2-y-1)\longrightarrow0$$ by $\mathbf Z/2\mathbf Z$, and use that for polynomial rings $$R[y]\otimes_R R/I\simeq R[y]/IR[y]\simeq R/I[y].$$
An interesting property of curves $V:$ $x^3$ + $y^3$ = A$z^3$
Anyone wishing to see the proof can see it on this link (it's in Spanish). There are preliminary details starting on page 108 but the proof is reduced to pages 112-113 http://revistas.pucp.edu.pe/index.php/promathematica/article/download/8186/8482.
Prove that a function is a density : $f(x) = \exp ( r ( c+a) x - r e^{ax} ) $
You have a mistake in the change of variable formula. Making the substitution $y = re^{ax}$ so that $x =\frac{1}{a} \log(y/r)$, we get $$\int_0^\infty \exp\left(r(c+a)x - re^{ax}\right) \, dx = \int_r^\infty \left(\frac{y}{r}\right)^{r(c+a)/a} e^{-y}\,\frac{dy}{ay} = \frac{1}{ar^{r(c+a)/a}}\Gamma\left(\frac{r(c+a)}{a},\,r\right),$$ where $\Gamma(s,x)=\int_x^\infty t^{s-1}e^{-t}\,dt$ denotes the upper incomplete gamma function.
Game of Bridge probability
Yep, the probability that neither $A$ nor $B$ occurs is the probability that both $A$ and $B$ do not occur. The truth table for NOR is:

A B | A NOR B
0 0 |    1
0 1 |    0
1 0 |    0
1 1 |    0

Compare this to the truth table for not ($A$ and $B$) to confirm that you're right.
On the Hex/Nash connection game theorem
At the end of the game, after all choices were made, assign to the deleted edges a resistance tending to infinity, and to the edges that were left a resistance tending to zero. Now the effective resistance between the terminal nodes will tend to zero if and only if there is a path between them, and to infinity if and only if the terminal nodes are disconnected, by continuity of the rational functions from resistances to effective resistances. The product of the effective resistances is $1$, $$R_{RL}R_{UD}=1,$$ because the dual networks are Y-Delta equivalent to two dual edges. See https://www.academia.edu/19760380/Circular_planar_e-networks for more details on that. Therefore, by continuity the effective resistances cannot both tend to zero or both to infinity, so one tends to zero and the other to infinity: we have a winner with a path, a disconnected loser, and a tie is impossible.
Orthonormal Basis help
The dot product of polynomials $P_1=a_1x^2+b_1x+c_1$ and $P_2=a_2x^2+b_2x+c_2$ is $$P_1.P_2=(a_1 - b_1 + c_1)(a_2 - b_2 + c_2) + c_1 c_2 + (a_1 + b_1 + c_1)(a_2 + b_2 + c_2)$$ or $$P_1.P_2=2a_1a_2 + 2b_1b_2 + 2a_2c_1 + 2a_1c_2 + 3c_1c_2 \ \ \ (1)$$ and the corresponding (squared) norm: $$\|P\|^2=2a^2+2b^2+4ac+3c^2 \ \ \ (2)$$ which is positive definite, because it can be written $2(a+c)^2+2b^2+c^2$ (always $\gt0$ except for $a=b=c=0$): it's a "full fledged" inner product. As I don't know what you call $p_1,p_2,p_3$, I am blocked... Maybe they are $1=0x^2+0x+1,x=0x^2+1x+0,x^2=1x^2+0x+0$, so it suffices to substitute into (1) or (2) to find a contradiction, for example that the norm of the polynomial $p=x^2$ is not 1... But if you want to find an orthonormal basis with respect to this dot product, here is a method: 1) begin by finding an orthogonal basis (you can normalize the vectors to norm one at the end). 2) Take $p_1=x^2$, then check that all $p$ of the form $p=\alpha x^2+\beta x - \alpha$ are orthogonal to $p_1$ (with respect to inner product (1)). The set of such polynomials $p$ is a 2-dimensional subspace $E$. $p_2$ and $p_3$ should be found in $E$...
Does Bayesian probability have a different interpretation of a random variable?
I agree with Edwin Jaynes that the word "random" should be banished from this context. Suppose you're uncertain of the average weight of male freshmen at Very Big University, which has 100,000 male freshmen. You have a complete list of their names, from which you can choose 30 at random and weigh them. You can't possibly afford the cost of weighing more than a few hundred and are not comfortable paying for even that many. Let's say you had a prior probability distribution specifying the probability that the average weight is between $a$ and $b$, for any positive numbers $a$ and $b$ that you might pick. Then based on the observed weights of the randomly chosen 30, you find a posterior distribution, i.e. a conditional probability distribution given those observations. Next you could pick another random sample of 30 and further update your information. What is random? I would prefer to use the word "random" to refer to that which changes every time you take another sample of 30. Or of 20, etc. So the observed average weight of the students in your sample is a "random variable". But notice that we've also assigned a probability distribution to the average weight of all 100,000 male freshmen, and we cannot observe that quantity. That quantity remains the same when a new sample is taken; it is therefore not "random" in this sense. But we've assigned a probability distribution to it. By the prevailing conventions of standard Kolmogorovian probabilistic terminology, we are treating that population average as a "random variable". I would prefer to call it an "uncertain quantity". However, this does not alter the mathematics. Is there a difference in "interpretation"? There is if by that one means: Are we interpreting the quantity to which we assign a probability distribution as being random, in the sense of changing if we take a new sample, or as an uncertain quantity that does not change when we take a new sample?
The way in which one applies the mathematics is different; the axioms of probability are not. This does raise a question of why the same rules of mathematical probability should apply to uncertain quantities that cannot be interpreted as relative frequencies or as proportions of the population, etc. A number of authors have written about that question, including Bruno de Finetti, Richard Cox, and me. Apparently no one gets very excited about the results because the result is that one should not use different mathematical methods. "Since there's no difference, who cares?" seems to be the prevailing attitude. There are some who question whether countable additivity or merely finite additivity should be taken to be axiomatic. De Finetti is one of those. Dubins & Savage in their book Inequalities for Stochastic Processes assumed only finite additivity, but that may be only because they wanted to avoid some icky technical issues that might have taken them off topic. I see that I haven't carefully cited all the works I've mentioned. Maybe I'll get to this later...
Let $x$ be a real number such that $|x|<1$. Determine $\lim_{n \to \infty} \prod_{i=1}^{n} \left(1+x^{2^{i}}\right)$
Hint: Multiply by $(1 - x^2)$. There's a telescoping product going on.
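Spoiler: the telescoping gives $\prod_{i=1}^{n}(1+x^{2^i})=\frac{1-x^{2^{n+1}}}{1-x^2}\to\frac{1}{1-x^2}$, which a few lines confirm numerically:

```python
# Truncated product prod_{i=1}^{terms} (1 + x^{2^i}); for |x| < 1 it
# converges to 1/(1 - x^2) (the product only involves even powers of x,
# so positive sample values of x suffice).
def tail_product(x, terms=40):
    prod = 1.0
    for i in range(1, terms + 1):
        prod *= 1 + x**(2**i)
    return prod

for x in (0.1, 0.5, 0.9):
    print(x, tail_product(x), 1 / (1 - x * x))   # the last two agree
```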
How do I calculate the following double integral and evaluate the limits of $\theta$?
Your work looks good. But I don't get why you didn't just take $-\pi/2 \leq \theta \leq \pi/2$ to begin with. I suppose it works with negative $r$ and angles above $\pi/2$, so you're not wrong, but I guess I just prefer to stick to positive $r$ where possible.
What is the minimum value of $(\tan^2)(A/2)+(\tan^2)(B/2)+(\tan^2)(C/2)$, where $A$, $B$ and $C$ are angles of a triangle
Just another way: noting $\tan^2 \frac x 2$ is convex, by Jensen's inequality you have $$\tan^2\frac A 2 + \tan^2 \frac B 2 + \tan^2 \frac C 2 \ge 3\tan^2\frac{A+B+C}{2\cdot 3 } = 1$$
$x^2-7x+m=0$, $x_1^2+4 x_2^2=68$, $m$=?
Hint: If so then $$x_1+x_2=7$$ and $$x_1x_2=m$$ With $$x_2=7-x_1$$ we get $$x_1^2+4(7-x_1)^2=68,$$ i.e. $5x_1^2-56x_1+128=0$, so $$x_1=8\quad\text{or}\quad x_1=\frac{16}5$$
Inverse of a complex number in finite field
The element $i$ is not a complex number. It represents a solution to the equation $x^2+1=0$ relative to the field $\mathbb{F}_p$ (assuming that $p\equiv 3\pmod{4}$ so that the polynomial $x^2+1$ is irreducible). In particular, $(p-1)i+i=0$, so it cannot possibly be the complex number $i$. Two methods: When you see a fraction $\frac{a}{b}$, that really means “multiply $a$ by the multiplicative inverse of $b$”. So $\frac{324-171i}{134217}$ means “multiply $324-171i$ by the multiplicative inverse of $134217$”. So find the solution to $134217x\equiv 1\pmod{p}$, and that is the value of $x$ you should multiply $324-171i$ by to get the value of the “fraction”. Taking $324+171i$, consider the polynomials $p(x)=324+171x$, and $q(x)=x^2+1$. Since $\mathbb{F}_p(i) = \mathbb{F}_p[x]/(x^2+1)$, find polynomials $r(x)$ and $s(x)$ in $\mathbb{F}_p[x]$ such that $r(x)p(x) + s(x)q(x) = 1$, which should be possible because $x^2+1$ is irreducible and therefore relatively prime to any polynomial of degree $1$. Evaluating at $i$ you get $r(i)p(i) + s(i)q(i)=1$. But $q(i)=0$, so this gives $r(i)p(i)=1$. Since $p(i) = 324+171i$, then $r(i)$ is the multiplicative inverse of $324+171i$.
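A concrete sketch of the first method; the prime $p=1000003\equiv3\pmod 4$ is an arbitrary choice for illustration, not the $p$ of the question (note $324^2+171^2=134217$, the denominator of the fraction above):

```python
# p = 1000003 is an arbitrary prime congruent to 3 mod 4, for illustration.
p = 1000003
a, b = 324, 171

# 324^2 + 171^2 = 134217: the inverse of a + bi is (a - bi)/(a^2 + b^2).
norm = (a * a + b * b) % p
inv_norm = pow(norm, -1, p)              # solve 134217 * x = 1 (mod p)

c = (a * inv_norm) % p                   # real part of (324 + 171i)^(-1)
d = (-b * inv_norm) % p                  # imaginary part

# Verify (a + bi)(c + di) = 1 in F_p(i): real part 1, imaginary part 0.
print((a * c - b * d) % p, (a * d + b * c) % p)   # 1 0
```

(The three-argument `pow` with exponent `-1` computes modular inverses in Python 3.8+.)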
Probability of throwing missiles "all at once" and "one by one"
The first scenario (throwing missiles until you either stop or hit 5 throws) can be modeled as following a geometric distribution, where we are only calculating up to $n=5$: $$P(\text{Success})= 0.3\sum_{k=1}^5 0.7^{k-1} = 0.83193$$ For throwing all 5 at once, you are performing a binomial experiment: Let $X$ be the number of missiles that hit the target. We succeed as long as they don't all miss. $$P(X\gt0)=1-P(X=0)=1-0.7^5 = 0.83193$$ Same probabilities, as you calculated. So your question is why the probabilities calculated for a stopped sequence are the same as for a simultaneous group. At a non-technical level, we can imagine that you continue to throw missiles after your first hit and then aggregate all sequences that are successful. This will reproduce the same set of outcomes as the binomial case where you throw them all at once. Now, note that all possible outcomes after the first hit are counted as part of the probability of the first hit (i.e., the probability that you hit it on your first throw is higher than hitting it on your second, because it counts more of these events after the first hit). This really just means that hitting on your second throw has the same probability as the sum of all binomial trials whose first hit is at the second "position" (assuming you numbered your missiles in increasing order).
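Both computations, and the claimed equality, take a few lines to check:

```python
# Stopped-sequence (geometric) vs all-at-once (binomial): same number.
p_geometric = 0.3 * sum(0.7**(k - 1) for k in range(1, 6))
p_binomial = 1 - 0.7**5
print(p_geometric, p_binomial)   # both 0.83193
```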
Fourier series for $f(x)=(\pi -x)/2$
You forgot to divide by $\pi$. WolframAlpha's result is correct, but the coefficients $b_n$ are given by $$b_n=\frac{1}{n}$$ Your second formula for the series in WolframAlpha is wrong, you should divide by $k$, not multiply. Then everything should be fine: WolframAlpha
When a logistic law loses his first bend?
Your curve is defined by the equation $$f(z) = D + \frac{A-D}{1 + (\frac{z}{C})^B} $$ The interesting part of it is the second term (ignoring the inconsequential, for our purposes, constants at the numerator), $$\frac{1}{ 1 + (\frac{z}{C})^B}$$ Let us rescale the $z$ variable for simplicity, $\frac{z}{C} \to z$, and now look at the second derivative of $ \frac{1}{ 1 + z^B}$, which reads $$ \frac{\mathrm{d}^2}{\mathrm{d}z^2} \Big( \frac{1}{ 1 + z^B}\Big) =\frac{Bz^{B-2} (z^B + B(z^B -1) +1)}{(z^B+1)^3} $$ For $B=1$ the second derivative is easily checked to have a constant sign. For $0 \lt B \leq 1$, as in your case, this is also true. Indeed, the denominator is always positive, for $z\gt0$, and the first factor in the numerator is positive. One is then left with the term $$ z^B + B(z^B -1) +1$$ whose derivative $B(B+1)z^{B-1}$ is positive and whose value for $z=0$ equals $ 1-B$, so positive for $B \lt 1$. It seems to me that only for $B\gt1$ one gets a sigmoid shape, i.e. the second derivative changes sign.
Statistical notation for random variables
It is easier to find $p_X(x)$ and $p_Y(y)$ first. We are given that $p_X(a) = 0.4$, therefore $p_X(b) = 1 - p_X(a) = 0.6$. Doing the same thing with $Y$ gives $p_Y(\alpha) = 0.3$. We also know that $$p_{Y|X}(\alpha|a) = \frac{p_{XY}(a,\alpha)}{p_X(a)} \implies p_{XY}(a,\alpha) = 0.7* 0.4 = 0.28$$ If $X$ and $Y$ are independent, $p_{XY}(x,y) = p_X(x)p_Y(y)$ for any $x,y$. But we have that $$p_{XY}(a,\alpha) = 0.28 \ne p_X(a)p_Y(\alpha) = 0.12$$ Therefore, $X,Y$ are not independent. As for $p_{XY}(x,y)$, you can find it with the marginals. $$p_{XY}(a,\beta) = p_X(a) - p_{XY}(a,\alpha) = 0.4 - 0.28 = 0.12$$ $$p_{XY}(b,\alpha) = p_Y(\alpha) - p_{XY}(a,\alpha) = 0.3 - 0.28 = 0.02$$ $$p_{XY}(b,\beta) = p_X(b) - p_{XY}(b,\alpha) = 0.6 - 0.02 = 0.58$$
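The bookkeeping above can be mirrored in a few lines (a sanity check, not part of the argument):

```python
# Reconstruct the joint pmf from the given data and check consistency.
pX = {"a": 0.4, "b": 0.6}
pY = {"alpha": 0.3, "beta": 0.7}
pXY = {("a", "alpha"): 0.7 * 0.4}          # p(Y=alpha | X=a) * p(X=a)
pXY[("a", "beta")] = pX["a"] - pXY[("a", "alpha")]
pXY[("b", "alpha")] = pY["alpha"] - pXY[("a", "alpha")]
pXY[("b", "beta")] = pX["b"] - pXY[("b", "alpha")]

total = sum(pXY.values())
independent = abs(pXY[("a", "alpha")] - pX["a"] * pY["alpha"]) < 1e-12
print(total, independent)    # ~1.0, False
```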
Isomorphism between $H/H\cap N$ and $HN/N$
Compose the maps $$H\xrightarrow{\text{inclusion}}HN\xrightarrow{\text{quotient}}HN/N$$ Show that the composite map from $H$ to $HN/N$ is surjective. Observe that its kernel is $H\cap N$. Then apply the first isomorphism theorem. (Incidentally, the fact that $H/(H\cap N)\cong HN/N$ is known as the second isomorphism theorem.)
$f:[0,1]\rightarrow\mathbb{R}\in C[0,1]$ show $\lim_{a\rightarrow 0^+}(\int^1_a(t^{\frac{-1}{2}}f(t))\,dt)$ exists
Since $f$ is continuous on $[0,1]$ so are the functions $g: [0,1]\to \mathbb{R},\ t \mapsto f(t^2)$ and $G:[0,1] \to \mathbb{R},\ G(x)=\int_x^1g(t)\,dt$. Setting $$ u=\sqrt{t}, $$ we have $$ \int_a^1t^{-1/2}f(t)\,dt=2\int_{\sqrt{a}}^1f(u^2)\,du=2\int_{\sqrt{a}}^1g(u)\,du=2G(\sqrt{a}). $$ Hence $$ \lim_{a\to0+}\int_a^1t^{-1/2}f(t)\,dt=2\lim_{a\to0+}G(\sqrt{a})=2G(0). $$
Finding limit of a secant?
Hint: Intuitively, you have that $|\sec x| = |1/\cos x|$ goes to $+\infty$ as $x \to \frac{\pi}{2}$. To compute the limit without the absolute value, with $x \to \frac \pi 2^+$, you only have to find out the sign... is it $+$ or $-$? What is the sign of $\cos x$ for $x &gt; \frac \pi 2$, but very close to it?
Explanation of a proof from Lee "Introduction to Topological Manifolds": compact closed set with nonempty interior
Find a point $y'\in\partial D$ such that $y=\frac{|y|}{|y'|}\,y'$; then $y=F\!\left(\frac{|y|}{|y'|}\,f(y')\right)$.
Books on Lie Groups via nonstandard analysis?
I never looked at that part of the book in any detail at all, but Abraham Robinson's Nonstandard Analysis includes some material on Lie groups via nonstandard analysis. A little bit of googling found a paper on the arXiv that might be relevant.
Convergence of two nested geometric sequences
If you calculate $h_n$ for $n=0,1,2,3,4$, it’s easy to conjecture that $$h_n=a^nh_0+\nu_0\sum_{k=0}^{n-1}a^kb^{n-1-k}$$ and prove it by induction. If $a\ne b$ it can then be written more compactly as $$h_n=a^nh_0+\frac{(a^n-b^n)\nu_0}{a-b}=(h_0+c)a^n-cb^n$$ where $c=\frac{\nu_0}{a-b}$. If $a=b$, $h_n=h_0a^n+n\nu_0b^n$.
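Assuming the recurrence behind the problem is $h_{n+1}=a\,h_n+\nu_n$ with $\nu_{n+1}=b\,\nu_n$ (my reading of the setup, which the answer doesn't restate), the closed form checks out numerically:

```python
# Compare iterating the (assumed) recurrence h_{n+1} = a*h_n + nu_n,
# nu_{n+1} = b*nu_n against the closed form h_n = (h0 + c) a^n - c b^n
# with c = nu0/(a - b), valid for a != b.
a, b = 0.9, 0.5
h0, nu0 = 2.0, 1.0
c = nu0 / (a - b)

h, nu = h0, nu0
for n in range(30):
    h, nu = a * h + nu, b * nu

closed = (h0 + c) * a**30 - c * b**30
print(h, closed)   # the two agree
```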
Prove that $a_1A^1 + a_2A^2 + ... + a_5A^5 = 0$
$M_2(\mathbb R)$ is a four dimensional vector space and hence any $5$ elements in it are linearly dependent.
Alternative Proof of why Every Manifold is Locally Compact
Your proof is wrong. You cannot conclude that $B$ is compact. Let $X$ be an infinite set and equip it with the discrete metric $d(x,y) = 1$ if $x \neq y$ and $0$ otherwise. $X$ is not compact, since the cover $\{x\}_{x \in X}$ admits no finite subcover. However, $X = \{y \in X: d(x,y) \le 2\}$ for any $x \in X$. In particular, you cannot conclude that your set $B$ above is compact.
Show that $P_n(X)=\frac{(X+i)^{2n+1} - (X - i)^{2n+1}}{2i}$ is of degree $2n$, even..., using the binomial coefficients formula
Hint: Use $$(a+b)^n=\sum_{k=0}^n\binom{n}{k}a^{n-k}b^k,\qquad i^{2k}=(-1)^k,\qquad i^{2k-1}=(-1)^{k+1}i$$ and you will get the answer.