Finding the orthogonal complement of a subspace of $\mathbb{R}^4$
The space $V$ is the space of those vectors $v\in\mathbb{R}^4$ such that $\bigl\langle v,(1,1,-1,0)\bigr\rangle=\bigl\langle v,(1,1,0,1)\bigr\rangle=0$. Since $(1,1,-1,0)$ and $(1,1,0,1)$ are linearly independent, they form the basis that you are after.
Morphisms of sheaves
Given $\mathfrak{R}\longrightarrow f^{-1}\mathfrak{S}$ and $\mathfrak{S}\longrightarrow g^{-1}\mathfrak{T}$, you get $\mathfrak{R}\longrightarrow f^{-1} \mathfrak{S}\longrightarrow f^{-1} (g^{-1}\mathfrak{T}) = (g\circ f)^{-1}\mathfrak{T}$, which is the wanted arrow. Of course, the arrow $f^{-1} \mathfrak{S}\longrightarrow f^{-1} (g^{-1}\mathfrak{T})$ is obtained from the arrow $\mathfrak{S}\longrightarrow g^{-1}\mathfrak{T}$ by functoriality of $f^{-1}$. Let me detail the inverse image formalism a bit. This is general: say you have a continuous map $f : X \rightarrow Y$. How is the functor $f^{-1}$ from the category of sheaves of abelian groups on $Y$ to the category of sheaves of abelian groups on $X$ defined? First, how does it act on objects? Let $\mathscr{F}$ be a sheaf of abelian groups on $Y$. Consider the following presheaf of abelian groups $I_{\mathscr{F}} : U\mapsto {\varinjlim}_{f(U)\subseteq V}\mathscr{F}(V)$. Then the sheaf $f^{-1} \mathscr{F}$ is simply the sheaf associated to this presheaf by the sheafification functor. How does our inverse image functor act on arrows? For this, let $u : \mathscr{F} \rightarrow \mathscr{G}$ be a morphism of sheaves of abelian groups on $Y$. Interpreting presheaves as contravariant functors from the category of open sets of $Y$ (with inclusions as arrows), and interpreting sheaves in the same way plus the equalizer condition, $u$ is simply a morphism of functors, a so-called natural transformation, or also a morphism of directed systems of abelian groups. This allows us to define a morphism $I_{u} : I_{\mathscr{F}} \rightarrow I_{\mathscr{G}}$. Applying the sheafification functor to it gives $f^{-1} u$, and finishes the definition of the inverse image functor. This is valid, of course, for the category of sheaves of abelian groups. For $\mathscr{O}_Y$-modules the definition is not the same: the inverse image is denoted $f^{\star}$ and is defined as $f^{\star} \mathscr{M} = (f^{-1} \mathscr{M}) \otimes_{f^{-1} \mathscr{O}_Y} \mathscr{O}_X$ for any $\mathscr{O}_Y$-module $\mathscr{M}$. For all this material, two references come immediately to mind: the long and noble road of SGA IV, first exposés, and Mac Lane and Moerdijk's Sheaves in Geometry and Logic. And maybe one of Kashiwara and Schapira's books, whose name I cannot remember right now. I don't remember if Hartshorne does it.
The definition of orientation of a manifold from Spivak, Calculus on Manifolds
(1) On ${\bf R}^n$ fix the $n$-form $\omega=dx_1\wedge\cdots\wedge dx_n$. Then at each point $x\in {\bf R}^n$ we have the standard basis $\{e_i\}$, and either $$ \omega_x (e_1,\cdots, e_n)>0\ {\rm or}\ <0. $$ Hence ordered bases fall into two classes, and we choose the first class as the orientation at $x$, denoted by $[e_1,\cdots, e_n]_x$. That is, viewing each $e_i$ as a vector field on ${\bf R}^n$ (as we already know), $[e_1,\cdots, e_n]$ determines an orientation at each point. (2) If $f: W\subset M \rightarrow {\bf R}^n$ is a coordinate chart, then the orientation on $W$ consistent with $f$ is given by $$ [df^{-1} e_1, \cdots, df^{-1} e_n ].$$ If $\{E_i\}$ is a basis of $T_yM$, and if $ \omega (df\, E_1,\cdots,df\, E_n) >0, $ then $[E_1,\cdots, E_n]_y=[df^{-1} e_1, \cdots, df^{-1} e_n ]_y$, i.e., $[E_1,\cdots, E_n]_y$ represents the orientation at $y$. (3) If $g : U\rightarrow {\bf R}^n$ is another chart and $U\cap W\neq \emptyset$, then the manifold is orientable iff for any such $f,\ g$, $$ [df^{-1} e_1,\cdots, df^{-1} e_n]_y=[dg^{-1} e_1,\cdots, dg^{-1} e_n]_y $$ for all $y\in U\cap W$.
Geometry study question
$O$ is the center of gravity of the triangle, which implies that $$\vec{OA}+ \vec{OB} + \vec{OC} = \vec{0} = \vec{OD}+\vec{DA}+\vec{OB}+\vec{OD}+\vec{DC}.$$ Since $ABC$ is equilateral, $\vec{DA}+\vec{DC}=\vec{0}$, so $2\vec{OD}+\vec{OB}=\vec{0}$, and then $OD = \frac{OB}{2} = 2$.
Brownian motion and $C^1$ processes (order 1 of continuity)
This assertion is true, and we use the Weierstrass approximation theorem to explain why: a continuous function defined on a segment can be approximated uniformly by polynomial functions. Thus, there exists a polynomial function $(B^\epsilon_t)_{t\in[0,T]} \in \mathcal{P}$ (where $\mathcal{P}$ is the set of polynomial functions with real coefficients). And obviously $\mathcal{P}\subset {C^\infty}\subset{C^1}$
Books on complex analysis
Robert Gunning's "old" Princeton-Yellow-Series book "Intro to Riemann Surfaces" (the first in a sequence of several books he wrote about Riemann surfaces and related matters...) systematically uses sheaf theory (albeit not the derived-functor version, but Cech). In my opinion, it wonderfully illustrates how sheaf theory can be used, in a very tangible example.
Probability Of Birthday Months
Firstly, consider the event $D=\{\text{a student has a birthday either in November or December}\}.$ Then it is true that $p = p(D) = \dfrac{2}{12}$. As for me, my suggestion is to approach this problem through the binomial distribution. Let $X$ be the random variable which counts the number of students that have a birthday either in November or December. More specifically, $X\sim \mathcal B(n,p)$, where $n=20$ and $p=p(D)$. Suppose that exactly $k$ students have a birthday either in November or December. Then it holds: $$P(X=k)= \dbinom{n}{k} \cdot p^k \cdot (1-p)^{n-k}.$$ Since we want at least one student to have a birthday either in November or December, it is sufficient to calculate: $$P (X\ge 1) = 1 - P(X=0),$$ where we take the first formula you wrote.
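As a quick numerical check, here is a minimal Python sketch of the same computation (nothing beyond the $\mathcal B(20, 2/12)$ model above):

```python
from math import comb

n, p = 20, 2 / 12   # 20 students; birthday in November or December

p0 = comb(n, 0) * p**0 * (1 - p)**n   # P(X = 0)
print(1 - p0)                         # P(X >= 1), approximately 0.9739
```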
Is there a better way to visualize the midpoint formula?
If we are on the real line, we have that the midpoint of $x_1 < x_2$ is the point $x$ such that: $$x-x_1=x_2-x \iff 2x=x_1+x_2$$ which is exactly what we want.
Show that : $ x_n \overset {\sigma (X,X')}{\to} x $
I suspect the question may be wrong. Consider the case that $X=\mathbb{R}$ (with real scalar field). Clearly, the one-dimensional Banach space $X$ is reflexive and separable. Define $$ x_{n}=\begin{cases} n, & \mbox{if }n\mbox{ is odd}\\ 0, & \mbox{if }n\text{ is even} \end{cases}. $$ Clearly the subsequence $(x_{2n})$ converges to $x=0$ in norm, and trivially $x_{2n}\rightarrow x$ weakly. Moreover, $(x_{n})$ has only one limit point, namely $x=0$. That is, both conditions are satisfied. However, it is false that $x_{n}\rightarrow x$ weakly, because if $x_{n}\rightarrow x$ weakly, then for any $f\in X^{\ast}=\mathbb{R}$, $fx_{n}=\langle f,x_{n}\rangle\rightarrow\langle f,x\rangle=fx$ (here, $fx$ means the product of $f$ and $x$), which obviously does not hold.
Number of possibilities in a partition problem
You have to choose $n/2$ items from $n$ items. When you select any $n/2$ items, it automatically forms two sets of $n/2$ items each. So the number of ways of choosing $n/2$ items from $n$ items is simply $$n\choose{n/2}$$
What can be said about a convex combination of orthogonal matrices?
As user251257 points out in the comment, the statement (2) is true because the determinant is continuous. Here is a counterexample for (1). Let's take $$A=\begin{pmatrix}\sqrt{0.5}&-\sqrt{0.5}\\\sqrt{0.5}&\sqrt{0.5}\end{pmatrix},\,\,\, B=\begin{pmatrix}0&1\\1&0\end{pmatrix},$$ then $A, B$ are orthogonal and $\det A=1$, $\det B=-1$. Their convex combination is, with $\lambda\in[0,1]$, $$C(\lambda)=\begin{pmatrix}\lambda\sqrt{0.5}&-\lambda\sqrt{0.5}+1-\lambda\\\lambda\sqrt{0.5}+1-\lambda&\lambda\sqrt{0.5}\end{pmatrix}.$$ Clearly, $C(\lambda)$ cannot be the identity matrix, so it must be singular if it is a projector. We have $$\det C(\lambda)=0.5\lambda^2-(1-\lambda+\sqrt{0.5}\lambda)(1-\lambda-\sqrt{0.5}\lambda)=\lambda^2-(1-\lambda)^2=-1+2\lambda.$$ Therefore, $C(\lambda)$ can be a projector only if $\lambda=0.5$, but then $$C(0.5)=\begin{pmatrix}0.5\sqrt{0.5}&-0.5\sqrt{0.5}+0.5\\0.5\sqrt{0.5}+0.5&0.5\sqrt{0.5}\end{pmatrix},$$ and we can check that the $(1,1)$ entry of the square of the latter matrix is $1/4$, so $C(0.5)$ is not a projector either.
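For what it's worth, the final claim is easy to confirm numerically; a small numpy sketch of the $\lambda=0.5$ case:

```python
import numpy as np

r = np.sqrt(0.5)
A = np.array([[r, -r], [r, r]])     # orthogonal, det A = 1
B = np.array([[0., 1.], [1., 0.]])  # orthogonal, det B = -1
C = 0.5 * A + 0.5 * B               # the convex combination C(1/2)

print((C @ C)[0, 0])                # 0.25, while C[0, 0] = 0.3535...
print(np.allclose(C @ C, C))        # False: C(1/2) is not a projector
```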
Limit of $\lim _{\left(x,y\right)\to \left(0,0\right)}\left(\left(xy\right)\ln \left(x^2+y^2\right)\right)$
HINT: Note that $|xy|\le \frac12(x^2+y^2)$ so that $$\left|xy\log(x^2+y^2)\right|\le (x^2+y^2)\left|\log\left(\sqrt{x^2+y^2}\right)\right|$$ Can you finish now?
Orbits and numerical ranges of zero diagonal matrices with block patterns
Long story short: while there is a nice way to generate the kind of patterns you've been producing (i.e. there is a sufficient condition), there is no necessary/sufficient condition that I'm aware of for more general zero/non-zero patterns. Saying that $A$ is symmetric and persymmetric is equivalent to saying that $A$ is symmetric and satisfies $AJ = JA$, where $J$ is the exchange matrix. First of all, note that with all your examples, all you've done is taken a matrix of the form $$ M = \pmatrix{0&A\\A^T&0} $$ and applied a permutation similarity. That is: for a suitable permutation matrix $P$, $PMP^T$ will be a "checkerboard matrix". If $M$ is symmetric, then $PMP^T$ will also be symmetric, since we'd find that $$ [PMP^T]^T = P^{TT}M^TP^T = PMP^T $$ If $M$ is also persymmetric and if $PJ = JP$ (which implies that $P^TJ = JP^T$), then $PMP^T$ will also be persymmetric, since we'd find that $$ [PMP^T]J = PMP^TJ = PMJP^T = PJMP^T = JPMP^T = J[PMP^T] $$ We can also show that for $M$ as above, $M^k$ will be traceless for odd $k$ (from which it follows that $[PMP^T]^k$ is traceless). In particular: note that $$ M^2 = \pmatrix{AA^T & 0\\0&A^TA} $$ Thus, $$ M^{2n+1} = \pmatrix{AA^T & 0\\0&A^TA}^{n} \pmatrix{0 & A\\A^T&0} = \pmatrix{0 & (AA^T)^{n}A\\(A^TA)^{n}A^T & 0} $$ which is traceless since it is zero on the diagonal. If $A$ is symmetric and its spectrum is symmetric with respect to the imaginary axis, then $W(A)$ will also be symmetric with respect to the imaginary axis. To see this, it suffices to note that the numerical range is simply the convex hull of the spectrum of $A$ whenever $A$ is symmetric. It can also be directly shown that any matrix of the form $$ M = \pmatrix{0&A\\A^T&0} $$ will have spectrum symmetric with respect to the imaginary axis. In particular, if $A = U\Sigma V^T$ is a singular value decomposition, then we write $$ \pmatrix{&A\\A^T} = \pmatrix{U \\ & V} \pmatrix{ & \Sigma\\ \Sigma} \pmatrix{U \\ & V}^T $$ And since $\Sigma$ is diagonal, we can easily determine the spectrum of the matrix $$ \pmatrix{ & \Sigma\\ \Sigma} = \pmatrix{0&1\\1&0} \otimes \Sigma $$ where $\otimes$ denotes the Kronecker product.
Statistical independence to linear independence
For a continuous distribution on $\mathbb{R}^n$, the measure of a $d$-dimensional vector subspace $V\subset\mathbb{R}^n$ with $d<n$ is zero, hence $$\mathbb{P}[X_n\in\operatorname{Span}(X_1,\ldots,X_{n-1})]=0,$$ so you are right.
Find element in subsets
Given $e$, $$\{\,i\in\Bbb N\mid e\in S_i\,\} $$ is $\{j\}$ if $e\in S_j$. To obtain $j$ itself, we can use $$\bigcup \{\,i\in\Bbb N\mid e\in S_i\,\}.$$
How to find this double summation?
We have, with a classical symmetry trick: $$2\sum_{n=1}^{+\infty}\sum_{m=1}^{+\infty}\frac{m^2 n}{3^m(m 3^n + n 3^m)}=\sum_{m=1}^{+\infty}\sum_{n=1}^{+\infty}\frac{\frac{m^2n}{3^m}+\frac{mn^2}{3^n}}{m3^n+n3^m}=\sum_{n=1}^{+\infty}\sum_{m=1}^{+\infty}\frac{mn}{3^{m+n}}=\left(\sum_{n=1}^{+\infty}\frac{n}{3^n}\right)^2$$ hence the original series just equals $\frac{1}{2}\left(\frac{3}{4}\right)^2 = \color{red}{\frac{9}{32}}.$
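A truncated numerical evaluation (a Python sketch, using nothing beyond the series itself) agrees:

```python
S = sum(m**2 * n / (3**m * (m * 3**n + n * 3**m))
        for m in range(1, 60) for n in range(1, 60))
print(S, 9 / 32)   # both approximately 0.28125
```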
Meaning of notation $\mathbb{A}^{\mathbb{R}\times\mathbb{R}}$
Usually, the notation $A^B$ denotes the set of all functions from $B$ to $A$. This makes sense because in a sense, $\mathbb R^3$ can be seen as the set of all functions from $\{1,2,3\}$ to $\mathbb R$. This is because any function $f$ from $\{1,2,3\}$ to $\mathbb R$ can represent one element of $\mathbb R^3$ as $[f(1), f(2), f(3)]$, and every element $[x_1,x_2,x_3]$ of $\mathbb R^3$ represents one function from $\{1,2,3\}$ to $\mathbb R$, specifically the function for which $f(1)=x_1, f(2)=x_2$ and $f(3)=x_3$. This generalizes even if we replace $3$ with an infinite set. For example, $\mathbb R^\mathbb N$ can be seen as the set of sequences of real numbers, so an element would be $[x_1,x_2,x_3,\dots]$, but at the same time, this is also a mapping from $\mathbb N$ to $\mathbb R$ (one that maps $1$ to $x_1$, $2$ to $x_2$ and so on). In general then, $A^B$ simply denotes a set where each element is a "$|B|$-tuple", i.e. each elements is some mapping that maps each element of $B$ to some element of $A$. A function from $\mathbb R $ to $\mathbb R$ is really nothing more than an object which, for each real number $x$, perscribes another real number $y$. It's just our decision to denote this as $f(x)=y$.
Triangle inequality for $d(x,y)=\lvert\ln(y/x)\rvert$
$$\left\lvert\ln\frac xy\right\rvert=\left\lvert \ln\left(\frac xz\cdot\frac zy\right)\right\rvert=\left\lvert \ln\frac xz+\ln\frac zy\right\rvert\le\left\lvert \ln\frac xz\right\rvert+\left\lvert\ln\frac zy\right\rvert$$
The expected number of triangles in Erdos-Renyi graph : why wrong derivation also works?
The answer is the same because the probability for each triangle to be counted is the same – by linearity of expectation these probabilities are all that matters. You can think of your "wrong" calculation as calculating the expected number of triangles for a fictitious graph consisting of $\binom n3$ unconnected triangles, with each triangle being formed independently with the same probability with which the actual triangles are formed.
Subderivative of a function of a complex-valued variable
Full disclosure, I don't know what Wirtinger's calculus is. (OK, now I do, thanks to Wikipedia.) But from a pragmatic, convex optimization context, there's a straightforward way forward. For a real function $f:\mathbb{R}^n\rightarrow\mathbb{R}$, the subdifferential satisfies $$\partial f(x) = \left\{ v ~\middle|~ f(y) \geq f(x) + \langle v, y - x \rangle ~ \forall y \right\}$$ For a function $f:\mathbb{C}\rightarrow\mathbb{R}$, we can do the same, only we define the real inner product on $\mathbb{C}$: $\langle a, b \rangle = \Re(\mathop{\mathrm{conj}}(a) b) = \Re(a)\Re(b) + \Im(a)\Im(b)$. Given this, $$\partial |x| = \begin{cases} \{x/|x|\} & x\neq 0 \\ \{v\in\mathbb{C}~|~|v|\leq 1\} & x=0 \end{cases}$$ There are a couple of sanity checks here. First, you can consider $\mathbb{C}$ as isomorphic to $\mathbb{R}^2$, so this is just the equivalent of the subdifferential of the $\ell_2$ norm on $\mathbb{R}^2$. Secondly, note that our formula yields $+1$ for real $x>0$ and $-1$ for real $x<0$, just like the real case.
Square of the expectation of number of edges not in a triangle in random graph
I am only beginning to read your answer, but there are already a few errors. Nitpick, but the title is wrong: you are trying to compute the expectation of the square, not the square of the expectation. More annoying, what follows is wrong: "First, the probability that the edge $e_i$ is not in a triangle is $$\mathbb{P}[A_i]=(1-p^3)^{n-2}$$ since the edge $e_i$ can be part of $n-2$ different triangles." There are $n-1$ distinct events. First, the edge $e_i$ must belong to the graph. In addition, for each of the $n-2$ pairs of edges which form a triangle with $e_i$, at least one of these edges does not belong to the graph. As all these events are independent, this makes: $$\mathbb{P}[A_i]=p(1-p^2)^{n-2}.$$ Your error is twofold: first, you mistook the event "the edge $e_i$ exists and does not belong to a triangle" for the event "there is no triangle through the edge $e_i$". Second, you misapplied independence in the computation of the probability of the second event (the $n-2$ events you were taking into account are not independent, as they all depend on the state of the edge $e_i$). The same mistake is repeated in the remainder of the proof. For this kind of exercise, you should always do a quick sanity check. At fixed $n \ge 3$, you would expect the random graph $G(n, 0)$ to be totally disconnected and the random graph $G(n, 1)$ to be complete, so if you plug $p = 0$ or $p=1$ into your final formula you should find $0$.
Technique for finding arbitrary matrix powers.
Hint: If you could write $$ A = P\begin{pmatrix}\lambda_1 & 0 \\ 0 & \lambda_2\end{pmatrix}P^{-1}\tag{1} $$ then we would have the formula $$ A^k = P\begin{pmatrix}\lambda_1^k & 0 \\ 0 & \lambda_2^k\end{pmatrix}P^{-1} $$ Can you find $P$, $\lambda_1$, and $\lambda_2$ so that (1) holds? If not, try reading up on diagonalization.
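Since the matrix from the question is not reproduced here, the following numpy sketch demonstrates the technique on an arbitrary diagonalizable example of my own:

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])            # example diagonalizable matrix

lam, P = np.linalg.eig(A)           # columns of P are eigenvectors of A
Ak = P @ np.diag(lam**10) @ np.linalg.inv(P)           # A^10 via (1)
print(np.allclose(Ak, np.linalg.matrix_power(A, 10)))  # True
```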
If $z \in \mathbb C$ such that $|z|+|z-2019|=2019$ then $z \in \mathbb R$
$|a+b|=|a|+|b|$ iff $a =t b$ for some $t \geq 0$ (or $b =t a$ for some $t \geq 0$). Here $2019 = z + (2019-z)$, so we get $z=t(2019-z)$ for some $t\ge 0$. So $z= \frac{2019\, t} {t+1}$, which is real.
$\lim\limits_{(x,y)\to (0,0)}{\sin(xy)-xy\over x^2y}$
$\sin A=A-\frac{1}{3!}A^3+\frac{1}{5!}A^5+\cdots$, so $$\sin (xy)=xy-\frac{1}{3!}x^3y^3+\frac{1}{5!}x^5y^5+\cdots$$ The powers are all high enough to cancel the $x^2y$ on the bottom, so you get $$\frac{\sin(xy)-xy}{x^2y}=\frac{-\frac{1}{3!}x^3y^3+\frac{1}{5!}x^5y^5+\cdots}{x^2y}=-\frac{1}{3!}xy^2+\frac{1}{5!}x^3y^4-\cdots,$$ which tends to $0$ as $(x,y)\to(0,0)$.
$(ab+bc+ca)^3=abc(a+b+c)^3$, prove that $a,b,c$ are in $G.P.$
This is the long way of solving it (with a very obvious result). \begin{gather*} \implies(ab+bc+ca)^3-abc(a+b+c)^3=0\\ \end{gather*} \begin{multline*} \implies (a^3b^3+b^3c^3+c^3a^3+3a^2b^3c+3a^2bc^3+3a^3b^2c+3ab^2c^3+3a^3bc^2+3ab^3c^2+6a^2b^2c^2)-\\abc(a^3+b^3+c^3+3a^2b+3ab^2+3b^2c+3bc^2+3c^2a+3ca^2+6abc)=0 \end{multline*} After simplifying \begin{equation} \implies a^3b^3+b^3c^3+c^3a^3-a^4bc-ab^4c-abc^4=0 \end{equation} Factorising this will give \begin{equation} \implies(ab-c^2)(bc-a^2)(ca-b^2)=0 \end{equation} Therefore, $a,b,c$ must be in G.P.
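The factorisation can be verified symbolically, for instance with sympy:

```python
from sympy import symbols, expand

a, b, c = symbols('a b c')
lhs = (a*b + b*c + c*a)**3 - a*b*c*(a + b + c)**3
rhs = (a*b - c**2)*(b*c - a**2)*(c*a - b**2)
print(expand(lhs - rhs))   # 0, confirming the identity
```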
Probability of $k$ fixed points for a random function from and to $\{1,..,n\}$
As JMoravitz pointed out in a comment, to find the number of functions with $\ k\ $ fixed points you need to multiply $\ {n\choose k}\ $ by the number of functions with no fixed points on $\ n-k\ $ elements—namely $\ (n-1)^{n-k}\ $—, not the number of derangements. Thus, \begin{align} p_k&=\frac{{n\choose k}(n-1)^{n-k}}{n^n}\\ &= {n\choose k} \left(\frac{1}{n}\right) ^k\left(1-\frac{1}{n}\right)^{n-k}\ . \end{align} That is, the distribution of the number of fixed points is $\ Binomial\big(n, \frac{1}{n}\big)\ $. Another way of seeing this is that each of the numbers $\ 1,2,\dots,n\ $ is independently and equally likely to be a fixed point with probability $\ \frac{1}{n}\ $.
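A small Monte Carlo sketch (taking $n=6$, my own choice) matches the $Binomial\big(n, \frac{1}{n}\big)$ distribution:

```python
import random
from math import comb

n, trials = 6, 200_000
counts = [0] * (n + 1)
for _ in range(trials):
    f = [random.randrange(n) for _ in range(n)]    # a random function on n points
    counts[sum(f[i] == i for i in range(n))] += 1  # count its fixed points

for k in range(4):
    exact = comb(n, k) * (1/n)**k * (1 - 1/n)**(n - k)
    print(k, counts[k] / trials, round(exact, 4))  # empirical vs. Binomial(n, 1/n)
```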
Mean position of random walk with only $+1$ jumps, with probability $\frac{c^x}{t-x+1}$ at time $t$ and site $x$
The process $(r_c(t))$ based on $P(\xi(t,n)=1)=c^n$ jumps from $n$ to $n+1$ after a geometric time with mean $1/c^n$ hence $(r_c(t))$ hits $n$ after $\Theta(1/c^n)$ steps. Since $E(r_c(t))=\Theta(\log t)$ and $r_c(t)\geqslant r(t)$, $E(r(t))=O(\log t)$. On the other hand, $r(t)\to+\infty$ almost surely. Otherwise, $r(t)$ would stay at some level $n$ forever after some time $t\geqslant n$, which happens with probability $$ \prod_{s=t}^\infty\left(1-\frac{c^n}{s-n+1}\right)=0. $$
Difference in definition of limit points and cluster points of a set
Being honest, I always get confused by the names given to these kinds of points (especially since the terms are not universal), but what is beyond doubt is that for $A \subseteq \mathbb R$ we have $$\overline{A} = \{x \in \mathbb R : (x-\varepsilon,x+\varepsilon) \cap A \neq \varnothing \textrm{ for all } \varepsilon>0\}$$ and that $$A' = \{x \in \mathbb R : (x-\varepsilon,x+\varepsilon) \cap (A\setminus \{x\}) \neq \varnothing \textrm{ for all } \varepsilon>0\}.$$ Note that $x \in A'$ if and only if $x \in \overline{A \setminus \{x\}}$, and that $\overline{A} = A \cup A'$. So, if $A$ is a finite set of points, $A' = \varnothing$ and then $\overline A = A$.
Sequence notation: index written twice
The index is repeated outside of the parentheses because sometimes it is ambiguous which variable denotes the index, e.g. $(n^k)_n$ and $(n^k)_k$ are different sequences. Sometimes you even see $(X_n)_{n \in \mathbb{N}}$ if you want to emphasise the range of indices. Usually the index is apparent from the context, though. In the example above, $k$ or $n$ might already be defined, so the other variable must be the index. Therefore it is common to skip the repeated index, which is what you are used to: $(X_n)$. Strictly speaking, $X_n$ does not denote the sequence but the $n$-th term of the sequence. But when no confusion arises, it is also acceptable to denote the sequence this way. The main advantage is that you can use less words when defining the sequence by saying: “Consider the sequence $X_n = n^2$.” instead of “Consider the sequence $(X_n)$ where $X_n = n^2$.” or “Consider the sequence $(X_n) = (n^2)$.” (which would be correct but many people find unnatural).
Proof of the second Bianchi identity
It is the ambiguity of the conventional abstract index notation for the covariant derivative that causes trouble here. Getting used to it over time helps :-) We say that a tensor $T_{abc\dots}$ has the Bianchi symmetry if $T_{[abc]\dots} = 0$. Your goal is to show that the tensor $\nabla_e R_{abc}{}^d := (\nabla R)_{eabc}{}^d$ has the Bianchi symmetry in the indices $e,a,b$, and this fact is called the Bianchi identity. The fact $$ 2 \nabla_{[a} \nabla_{b]} \omega_c = (\nabla_a \nabla_b -\nabla_b \nabla_a)\omega_c=R_{abc}^{\;\;\;d}\omega_d $$ can be seen as the definition of the curvature $R$. Taking the covariant derivative of $R_{abc}{}^d \omega_d$ by an application of the Leibniz rule, we see that $$ \nabla_e (R_{abc}{}^d \omega_d) = (\nabla_e R_{abc}{}^d) \omega_d + R_{abc}{}^d \nabla_e \omega_d $$ which means that for any $\omega_d$ the tensor $\nabla R$ satisfies the identity $$ (\nabla_e R_{abc}{}^d) \omega_d = \nabla_e (R_{abc}{}^d \omega_d) - R_{abc}{}^d \nabla_e \omega_d $$ Now we can use the definition of $R$ and rewrite the last display as $$ (\nabla_e R_{abc}{}^d) \omega_d = 2 \nabla_e (\nabla_{[a} \nabla_{b]} \omega_c) - 2 \nabla_{[a} \nabla_{b]} \nabla_e \omega_c + R_{a b e}{}^d \nabla_d \omega_c $$ where we have used the Ricci identity $$ 2 \nabla_{[a} \nabla_{b]} t_{ec} = R_{a b e}{}^d t_{d c} + R_{a b c}{}^d t_{e d} $$ I will now wait for you to figure out that we are almost done.
Laurent series of $\frac{1}{(1-z)}$
Basically correct. The two series have different regions of convergence: the first on $|z|\lt1$, the second on $|z|\gt1$. They are both Laurent series. The first happens to be a Taylor series, as well. They are both around $z=0$. I wouldn't say they are representations of the same series, but rather of the same function on different regions.
Convexity of circle in neutral geometry
I know this question is more than a year old, but hopefully the solution below is still helpful! As stated in the problem, let $P$ be a point on $\overline{AB}$. The key claim is the following. CLAIM: $OP<\max\{OA,OB\}$. Proof. Let $D$ be the foot of the perpendicular from $O$ to $AB$. If $P=D$, then in the right triangle $ODA$ the right angle at $D$ is the largest angle, so $OP=OD<OA$. Otherwise, there are two cases to consider depending on the location of $P$. CASE 1: $P$ lies on $\overline{AD}$. In this case, $\angle OPA$ is an exterior angle of triangle $OPD$ at $P$, so $\angle OPA>\angle ODP=90^\circ$; hence $\angle OPA$ is the largest angle of triangle $OPA$, and the side opposite it satisfies $AO>PO$. (Both the exterior angle theorem and the fact that the larger angle subtends the larger side are valid in neutral geometry, where the Pythagorean theorem is not.) CASE 2: $P$ lies on $\overline{BD}$. This case is similar: $\angle OPB>\angle ODP=90^\circ$, so in triangle $OPB$ we get $BO>PO$. Combining both cases yields $PO<\max\{AO,BO\}$ as desired. $\blacksquare$ Let $R$ be the radius of the circle. Note that since $AO<R$ and $BO<R$, we have $PO<R$ as well, so $P$ must lie within the circle. (If that is not a detailed enough finish, you could extend $PO$ to intersect the circle at $C$ and $D$, then use your "$ABCO$ collinear" case to resolve the matter.)
Probability: People sitting in a row (linear arrangement)
You’re doing nothing wrong, assuming that your reasoning is similar to Alex Becker’s in his comment: the correct answer is indeed $\frac15$. Here’s another route to it. There are $\binom{10}2=45$ pairs of seats, and the couple is equally likely to occupy any one of those $45$ pairs of seats. Nine of the $45$ pairs are adjacent, so the probability that they will occupy adjacent seats is $\frac9{45}=\frac15$, as you say. And here is yet another. The man sits in an end seat with probability $\frac2{10}=\frac15$. If he’s in an end seat, only one of the remaining nine seats is adjacent to him, and his wife’s probability of getting that seat is $\frac19$. With probability $\frac45$ the man sits in one of the eight seats that have two neighbors, and in that case his wife’s probability of ending up next to him is $\frac29$. The overall probability that they end up sitting together is therefore $$\frac15\cdot\frac19+\frac45\cdot\frac29=\frac9{45}=\frac15\;.$$ Added: And just for fun, here’s yet another. Imagine that the seats are arranged in a circle around a table. Wherever the wife is sitting, the husband’s probability of sitting next to her is $\frac29$. Then the table is taken away and the seats unwrapped into a straight line, with the breakpoint chosen at random: with probability $\frac1{10}$ it will fall between the husband and the wife, so with probability $\frac9{10}$ they will still end up together. Thus, they end up together with probability $$\frac9{10}\cdot\frac29=\frac15\;.$$
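And, in the same spirit of fun, a quick simulation of the ten-seat setup (a Python sketch) lands on the same value:

```python
import random

trials = 100_000
hits = sum(abs(h - w) == 1                     # adjacent seats
           for h, w in (random.sample(range(10), 2) for _ in range(trials)))
print(hits / trials)                           # approximately 0.2 = 1/5
```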
Prove that $|f'(0)| \leq 1$ for an analytic function on the unit disk
Your 'proof' is correct and the statement is false: $$f(z) = i \frac{1+z}{1-z}$$ is an example of such a map (the inverse of yours) and its derivative at $0$ is $2i$.
Trigonometric Series Proof
This proof makes use of the gamma and beta functions. We have the basic identity $$B(x, y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x + y)} $$ Note that $$B(x, y) = \int_0^1 t^{x-1}(1 - t)^{y-1} \,dt $$ by definition. Substitute $t = \sin^2\theta$. Thus we have $$\int_{0}^{\pi/2}\sin^{2m - 1}\theta\cos^{2n - 1}\theta \,\,d\theta = \frac{\Gamma(m)\Gamma(n)}{2\Gamma(m + n)}$$ Note that since we were originally dealing with two independent variables, we need to maintain those after making this substitution, hence the $m$ and $n$. Now put $2m - 1 = 2k$ and $2n - 1 = 0$. This gives the following result: $$\int_{0}^{\pi/2}\sin^{2k}\theta \,d\theta = \frac{\Gamma[\frac{1}{2}(2k + 1)]\Gamma(1/2)}{2\Gamma(k + 1)} $$ The denominator is simply $2\cdot k!$ and the numerator is $\pi\cdot k!\cdot \binom{k - 1/2}{k}$. Expanding the binomial coefficient gives the required result.
Can every curve be subdivided equichordally?
Niels Diepeveen helped me fill the gap in my incomplete proof, which I've deleted from the original question and moved here because it fits this question better. We take the curve to be $c:[0,1]\to\mathbb R^m$ and assume that $c(0)\ne c(1)$ (otherwise a trivial solution exists). A subdivision into $n$ segments is determined by its vector of interval lengths, $s=(s_1,\dots,s_n)$ where $s_i=t_i-t_{i-1}$. The set of all valid $s$ forms the standard simplex $$\Delta^{n-1}=\left\{(s_1,\dots,s_n):\sum_{i=1}^n s_i=1, s_i\ge 0\text{ for all }i=1,\dots,n\right\},$$ which is an $(n-1)$-dimensional polytope embedded in $\mathbb R^n$. In fact, $\Delta^{n-1}$ lies in the nonnegative orthant $\mathbb R_+^n$ and its boundary lies in $\partial\mathbb R_+^n$: vertices lie on the coordinate axes, $1$-faces (edges) lie on the coordinate $2$-planes, and so on. Consider the function $d:\Delta^{n-1}\to\mathbb R_+^n$ mapping the vector of interval lengths to the vector of chord lengths, $$d(s)=(\|c(t_1)-c(t_0)\|,\dots,\|c(t_n)-c(t_{n-1})\|),$$ where $t_i=\sum_{j=1}^i s_j$. This function is nonnegative ($d(s)_i\ge 0$), nondegenerate ($d(s)\ne0$ because $c(t_0)\ne c(t_n)$), and preserves zero coordinates ($d(s)_i=0$ if $s_i=0$). Zero coordinate preservation is the key property here: it means that while $d$ may transform $\Delta^{n-1}$ into an arbitrarily complicated, possibly self-intersecting $(n-1)$-dimensional surface, it cannot detach its boundary from the faces of $\partial\mathbb R_+^n$. Vertices still lie on the coordinate axes, edges become curves lying on the coordinate $2$-planes, and so on. We want to prove that there exists an $s\in\Delta^{n-1}$ such that all the components of $d(s)$ are equal. Equivalently, we want to show that the surface $d(\Delta^{n-1})$ intersects the line $\{(a,\dots,a):a\in\mathbb R\}$. From left to right: A curve $c([0,1])$, the corresponding deformed simplex $d(\Delta^{n-1})$ for $n=2$, $d(\Delta^{n-1})$ for $n=3$. Rescaling $d(s)$ so that its components sum to $1$, we obtain the map $$\hat d(s) = \frac{d(s)}{\sum_{i=1}^n d(s)_i},$$ which is well-defined and continuous because $d(s)$ is never zero. It is easy to verify that $\hat d$ maps the simplex $\Delta^{n-1}$ to itself; further, zero coordinate preservation implies that $\hat d$ also maps each face of $\Delta^{n-1}$ to itself. It can be shown using Brouwer's fixed point theorem that such a mapping must be surjective. Therefore, there exists an $s\in\Delta^{n-1}$ such that $\hat d(s)=(\frac1n,\dots,\frac1n)\in\Delta^{n-1}$, which is equivalent to the desired result. In fact, we have proved a slightly stronger property: For any vector of nonnegative chord length ratios $r=(r_1,\dots,r_n)$, we can find a subdivision $s$ such that $d(s)=ar$ for some $a\in\mathbb R$.
How to do well on Math Olympiads
I suggest you take a look at the website The Art of Problem Solving. There are links to resources, to articles, to competition preparation books, to an online AoPS competition game ("For the Win"), and more: all geared to bright students who love math and are looking for challenging problems, and it is particularly aimed at those students who participate in (or would like to start participating in) competition math.
Intuition - Homomorphic Image of Group Element is Coset - Fraleigh p. 135 13.52, p.130 Theorem 13.15
See that $H=\text{ker }\phi$. (1.) Don't forget that in non-abelian groups left and right cosets aren't the same unless the subgroup which you are looking at is normal in $G$. The overall intuition on cosets is that they partition the group into distinct subsets. (2.) The dotted lines show the image of the set in the upper part of the picture under $\phi$. (3.) See (1.). Otherwise: it's just a picture; if this picture doesn't work for you, try to come up with another one that suits you better (which would be a good exercise). (4.) Your statement is exactly the same, but see (5.). (5.) Actually: $\phi(y)=\phi(kg)=\phi(k)\phi(g)=\phi(g)$ for some $k\in \text{ker }\phi$. (6.) Could you clarify your question? In the statement of the theorem it says $g\in G$?! I don't get your point, sorry. EDIT: (6.) In the above theorem it says "$g\in G$", which is short for "let $g\in G$". This is just the usual proof strategy: show the statement is true for some arbitrary element, hence it is true for all, since you did not claim your element satisfies any special property.
Every linear operator on $\mathbb{R}^5$ has an invariant 3-dimensional subspace
This is an easy consequence of the existence of the real Jordan normal form of the matrix of the endomorphism. That matrix is similar to a block diagonal matrix, with each block being a real Jordan block. There are several cases to be considered. For instance, if your endomorphism has one and only one real eigenvalue (with multiplicity $1$) and four complex non-real eigenvalues, then the real Jordan normal form will be of the type$$\begin{pmatrix}1&0&0&0&0\\0&a&-b&0&0\\0&b&a&0&0\\0&0&0&c&-d\\0&0&0&d&c\end{pmatrix}$$and therefore the span of the first three vectors of the corresponding basis will be invariant. If your endomorphism has one and only one real eigenvalue (with multiplicity $1$) and two complex non-real eigenvalues (each with multiplicity $2$), then either the real Jordan normal form will be like the previous one (with $c=a$ and $d=b$) or it will have the form$$\begin{pmatrix}1&0&0&0&0\\0&a&-b&1&0\\0&b&a&0&1\\0&0&0&a&-b\\0&0&0&b&a\end{pmatrix},$$but again you can consider the span of the first three vectors of the corresponding basis. And so on.
How do I find the value of this weird expression?
We don't allow infinite expressions, so first you need to define what it means. One way to make sense of it is as a sequence, $a_1=\sqrt 2, a_2 =\sqrt 2 ^{\sqrt 2}, a_3=\sqrt 2 ^{\sqrt 2^{\sqrt 2}},a_n=\sqrt 2^{a_{n-1}}$ and ask if the sequence has a limit as $n \to \infty$ If the limit exists, call it $L$. Then $L=\sqrt 2^L$, which is satisfied by $2$. To prove the limit exists, show that $a_n \lt 2 \implies a_{n+1} \lt 2$ and $a_n \gt 1 \implies a_{n+1} \gt a_n$. The sequence is now monotone and bounded above, so has a limit.
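The iteration behaves exactly as described; a minimal numerical sketch:

```python
a = 2 ** 0.5
for _ in range(100):
    a = (2 ** 0.5) ** a   # a_{n+1} = sqrt(2)^{a_n}
print(a)                  # approaches 2, the fixed point L = sqrt(2)^L
```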
Show that $re^{i\theta} = se^{i\alpha} \implies \text{$r = s$ and $\theta = \alpha + 2\pi k$}$
HINT Clearly we have $r = 0 \iff s = 0$ and if $r=s=0$ the angles are not important. Assume now $r \ne 0$ and hence $s \ne 0$ and divide the complex numbers. We get $$ 1 = \frac{r}{s}e^{i[\theta - \alpha]}. $$ Taking modulus, we see $|r/s|=1$ so $r=s$ since they are both positive. Now we are left with $1 = e^{i[\theta-\alpha]}$. Can you finish it?
Non-isomorphic product of two groups
The main reason $\{\pm I_n\}$ is not isomorphic to $\{\pm I_n\} \times \{\pm I_n\}$ is because they have a different number of elements. Isomorphisms are bijective homomorphisms, so only groups of the same order can be isomorphic. E.g. when $n=2$, we have: $$\{\pm I_n\}=\left\{\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix},\begin{bmatrix} -1 & 0 \\ 0 & -1 \\ \end{bmatrix}\right\}$$ and $$\{\pm I_n\} \times \{\pm I_n\}=\left\{\left(\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix},\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix}\right),\left(\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix},\begin{bmatrix} -1 & 0 \\ 0 & -1 \\ \end{bmatrix}\right),\left(\begin{bmatrix} -1 & 0 \\ 0 & -1 \\ \end{bmatrix},\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix}\right),\left(\begin{bmatrix} -1 & 0 \\ 0 & -1 \\ \end{bmatrix},\begin{bmatrix} -1 & 0 \\ 0 & -1 \\ \end{bmatrix}\right)\right\}.$$ We do, however, have $$\{\pm I_1\} \cong \{\pm I_2\} \cong \{\pm I_3\} \cong \cdots.$$
How do I prove for $r>0$ the estimation $\left |\int_{K_r(0)}\frac{1}{z}dz \right |\leq \frac{1}{r}2\pi r=2\pi$?
$$\left|\oint_{K_r(0)}\frac1z\,dz\right|\le\oint_{K_r(0)}\frac{|dz|}{|z|}=\frac1r\oint_{K_r(0)}|dz|=\frac1r\,2\pi r=2\pi$$ using that $\;|z|=r\;$ on the contour, and assuming $\;K_r(0)=\;$ the circle of radius $\;r\;$ and center at the origin of the complex plane.
Zeros off the critical line, but extremely close to it
The zeros of the Riemann zeta function are symmetric about the critical line. So if $1/2+\epsilon+iy_0$ is a zero, so is $1/2-\epsilon+iy_0$. Methods for finding zeta zeros involve computing numerically the number $N(y)$, the number of zeta zeros $z$ with $0<\Im(z)<y$. If we had an $\zeta(1/2+\epsilon+iy_0)=0$ as above, then $N(y)$ would jump by $2$ when passing through $y=y_0$. But all jumps observed in $N(y)$ so far have been by $1$, indicating the presence of a simple zero on the critical line. What would be difficult to do is to distinguish a double zero of $\zeta$ on the critical line from two simple zeros near the critical line.
Simple variance question: variance of the whole expression when one variable is random
If $x_i$ and $z_i$ are deterministic constants for each $i = 1, \ldots, m$, and $m$ is also a known constant--that is to say, $y_i$ for $i = 1, \ldots, m$ are the only random variables involved, and they are independent--then the variance of $a$ is better calculated as $$\operatorname{Var}[a] \overset{\text {ind}}{=} \sum_{i=1}^m x_i^2 \operatorname{Var}[y_i].$$ This is because $z_i$, being deterministic, has zero variance. If the $y_i$ are not independent, then the variance of $a$ will require knowing the covariances between each $y_i$, $y_j$ for $1 \le i \ne j \le m$ in addition to the variances of each $y_i$.
Relation between $G$-orbits and Cycle Decomposition of a Permutation.
Let $i\in X_n$ and let $k>0$ be minimal such that $\delta^k(i)=i$. Then $(i\delta(i)\cdots\delta^{k-1}(i))$ occurs in the cycle decomposition of $\delta$. You can argue conversely that if $(a_1a_2\cdots a_k)$ occurs in the cycle decomposition of $\delta$, then $\{a_1,\ldots, a_k\}$ is an orbit of the action. Thus the orbits of the action correspond exactly to the cycles in the cycle decomposition of $\delta$, with the size of the orbit being the length of the cycle and the elements of the cycle being the elements in the orbit.
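For illustration, sympy exposes exactly this correspondence; the permutation below is my own example:

```python
from sympy.combinatorics import Permutation

# delta maps 0 -> 2, 1 -> 0, 2 -> 1, 3 -> 4, 4 -> 3
delta = Permutation([2, 0, 1, 4, 3])
print(delta.cyclic_form)   # [[0, 2, 1], [3, 4]]: the orbits of <delta>
```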
How to find a minimal polynomial
The way to find the minimal polynomial is to start with the supposition that $$ x - \left( \sqrt{2} + \sqrt[3]{3} \right) = 0 $$ then move the $\sqrt[3]{3}$ to the right side of the equality and cube, getting $$ x^3 + 6x - \sqrt{2} \left( 3x^2+2 \right) = 3 $$ Now isolate the terms involving $\sqrt{2}$ on the right and square, getting (after some grouping of terms) $$ x^6 - 6x^4 - 6x^3 + 12x^2 - 36x + 1 = 0 $$ from which you read off the minimal polynomial. It is easy to verify that this is a polynomial of $\alpha$ over $\Bbb{Q}$. To prove minimality you need to invoke the fact that $\gcd(2,3) = 1$, which implies that a minimal polynomial of a square root and a cube root of two numbers sharing no common factors must be of degree at least $6$.
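If you want to double-check the computation, sympy produces the same polynomial:

```python
from sympy import sqrt, cbrt, minimal_polynomial, symbols

x = symbols('x')
print(minimal_polynomial(sqrt(2) + cbrt(3), x))
# x**6 - 6*x**4 - 6*x**3 + 12*x**2 - 36*x + 1
```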
Why is $|z^k|=|z|^k$ for every $z\in \mathbb{C}, k\in\mathbb{N}$?
It follows from induction and the fact that $|zw|=|z|\cdot |w|$.
Prove inequalities with Cauchy's integral theorem
I am assuming you mean the $\textit{closed}$ curve consisting of the segment $-1$ to $1$ followed by the upper unit semicircle. Hints: $1).\ F$ is analytic on $B(0,1)$. Once you show this, i) is immediate. $2).\ F(x+iy)=|f(x)|^2$ on the segment. Now split up the contour and use i). $3).\ $ What happens if you integrate over the contour consisting of the segment and the lower semicircle?
Gradient of a quadratic form — row or column?
Let $f : \mathbb R^n \to \mathbb R$ be defined by $f (\mathrm x) := \mathrm x^{\top} \mathrm A \, \mathrm x$, where $\mathrm A \in \mathbb R^{n \times n}$ is given. Hence, $$f (\mathrm x + h \mathrm v) = (\mathrm x + h \mathrm v)^{\top} \mathrm A \, (\mathrm x + h \mathrm v) = f (\mathrm x) + h \langle \mathrm v, \mathrm A \, \mathrm x \rangle + h \langle \mathrm A^{\top} \mathrm x, \mathrm v \rangle + h^2 \mathrm v^{\top} \mathrm A \, \mathrm v$$ The directional derivative of $f$ in the direction of $\mathrm v$ at $\mathrm x$ is, thus, $$D_{\mathrm v} f (\mathrm x) = \langle \mathrm v, \mathrm A \, \mathrm x \rangle + \langle \mathrm A^{\top} \mathrm x, \mathrm v \rangle = \langle \mathrm v, (\mathrm A + \mathrm A^{\top}) \, \mathrm x \rangle$$ and the gradient of $f$ is $$\boxed{\quad \nabla f (\mathrm x) = (\mathrm A + \mathrm A^{\top}) \, \mathrm x = 2\left(\frac{\mathrm A + \mathrm A^{\top}}{2}\right) \, \mathrm x \quad}$$ where $\dfrac{\mathrm A + \mathrm A^{\top}}{2}$ is the symmetric part of $\mathrm A$.
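A finite-difference check of the boxed formula (a numpy sketch with a randomly chosen non-symmetric $\mathrm A$):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))    # generic non-symmetric A
x = rng.standard_normal(4)

f = lambda v: v @ A @ v
grad = (A + A.T) @ x               # the boxed formula

h = 1e-6                           # central finite differences
num = np.array([(f(x + h*e) - f(x - h*e)) / (2*h) for e in np.eye(4)])
print(np.allclose(num, grad, atol=1e-5))   # True
```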
Show that there is a unique matrix $A$ such that $\varphi(t) = e^{tA}$.
Here's a more elementary approach which doesn't rely on Lie groups: Since each component of $\varphi(t) \in \mathcal M_{n \times n}(\Bbb R)$ is of class $C^1$, $\varphi(t)$ is a differentiable matrix function of $t$. Note that $\varphi(s)$ and $\varphi(t)$ commute for any $s, t \in \Bbb R$, since $\varphi(s)\varphi(t) = \varphi(s + t) = \varphi(t + s) = \varphi(t)\varphi(s). \tag{1}$ We compute $\varphi'(t)$ directly from first principles, using the formula $\varphi(s + t) = \varphi(s) \varphi(t)$. We have: $\varphi'(t) = \lim_{h \to 0}\dfrac{\varphi(t + h) - \varphi(t)}{h} = \lim_{h \to 0}\dfrac{\varphi(t)\varphi(h) - \varphi(t)}{h} = \lim_{h \to 0}\dfrac{\varphi(h)\varphi(t) - \varphi(t)}{h}$ $= \lim_{h \to 0}\dfrac{(\varphi(h) - I)\varphi(t)}{h}= \lim_{h \to 0}\dfrac{(\varphi(h) - I)}{h}\varphi(t) = \varphi'(0)\varphi(t), \tag{2}$ where we have used (1) in the derivation of (2); $\lim_{h \to 0}\dfrac{(\varphi(h) - I)}{h} = \varphi'(0) \tag{3}$ follows directly from the hypotheses that $\varphi(t)$ is of class $C^1$ and that $\varphi(0) = I$. We thus see that $\varphi(t)$ satisfies the linear differential equation $\varphi'(t) = \varphi'(0)\varphi(t) \tag{4}$ with the initial condition $\varphi(0) = I$. The unique solution to (4) with $\varphi(0) = I$ is $\varphi(t) = e^{\varphi'(0) t}; \tag{5}$ this shows that $\varphi(t)$ has the requisite form. To see that $\varphi'(0)$ is the only matrix $A$ such that $\varphi(t) = e^{At}$, note that if $\varphi(t) = e^{At}, \tag{6}$ then $\varphi'(0)\varphi(t) = \varphi'(t) = Ae^{At}; \tag{7}$ setting $t = 0$ in (7) yields $\varphi'(0) = A$; $\varphi'(0)$ is the only matrix such that (6) holds. And we are done! QED!!! Hope this helps. Cheerio, and as always, Fiat Lux!!!
Determining whether or not a relation involving absolute value is transitive
If $|x+y|=|x|+|y|$, by squaring both sides we get $xy=|xy|$. Thus either one of the numbers is $0$ or they are both nonzero and have the same sign. What happens with $x=1$, $y=0$ and $z=-1$?
Prove that $f$ is a convex function
Let $z = \alpha x + (1-\alpha) y$ where $0 \le \alpha \le 1 $. By hypothesis we have: \begin{align} &f(x) \ge f'(z)(x-z) + f(z) = f'(z)\bigg[x - \alpha x - (1-\alpha) y\bigg] + f(z) = (1-\alpha)f'(z)(x- y) + f(z)\\ \implies & \alpha f(x) \ge \alpha(1-\alpha)f'(z)(x- y) + \alpha f(z)\qquad\text{(1)}. \end{align} Again by hypothesis: \begin{align} &f(y) \ge f'(z)(y-z) + f(z) = f'(z)\bigg[y - \alpha x - (1-\alpha) y\bigg] + f(z) = -\alpha f'(z)(x- y) + f(z)\\ \implies & (1-\alpha)f(y) \ge -\alpha(1-\alpha) f'(z)(x- y) + (1-\alpha)f(z)\qquad\text{(2)}. \end{align} Adding (1) and (2): \begin{align} \alpha f(x) + (1-\alpha)f(y) \ge \alpha f(z) + (1-\alpha) f(z) = f(z) = f(\alpha x + (1-\alpha)y). \end{align}
Secant method and false position method exercise
Your Secant Method calculations are correct. Using Algorithm $1.19$ for the Method of False Position, we have (note, the algorithm reuses the point numbers for efficiency): $p_0 = 3, p_1 = 2$ $f(p_0) = 3, f(p_1) = -2 \implies ~\mbox{root in} ~(2, 3)$ $p_2 = p_0 - f(p_0)\dfrac{(p_1-p_0)}{f(p_1)-f(p_0)} = 3 - f(3)\dfrac{(2-3)}{f(2)-f(3)} = 2.4$ $f(2.4) = -0.24 \implies ~\mbox{root in} ~(2.4, 3)$ $p_3 = 3 - f(3)\dfrac{(3-2.4)}{f(3)-f(2.4)} = 2.4444444444444446$ Note, if you want to practice the method and calculate more points, here they are in exact form: $$\left\{3,2,\frac{12}{5},\frac{22}{9},\frac{120}{49},\frac{218}{89},\frac{1188}{485},\frac{2158}{881}\right\}$$
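The question's $f$ is not reproduced above, but $f(x)=x^2-6$ is consistent with the tabulated values $f(3)=3$, $f(2)=-2$ (an assumption on my part). With it, a minimal sketch of the method reproduces the sequence:

```python
def false_position(f, a, b, steps):
    # keep a bracket [a, b] with f(a) * f(b) < 0, reusing the retained endpoint
    for _ in range(steps):
        c = a - f(a) * (b - a) / (f(b) - f(a))
        if f(a) * f(c) < 0:
            b = c
        else:
            a = c
    return c

f = lambda x: x**2 - 6                 # assumed: f(3) = 3, f(2) = -2
print(false_position(f, 3, 2, 3))      # 2.448979... = 120/49
```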
$\liminf_{\epsilon\to 0} \frac{f(\epsilon)}{\epsilon}$ finite implies $\lim_{\epsilon \to 0} f(\epsilon)=0$
Note that for any $x_n \to 0$, $$f(x) = \begin{cases} 0 & \text{if } x = x_n \\ \text{anything }\ge 0 &\text{otherwise}.\end{cases}$$ satisfies $$\liminf_{\epsilon \to 0} \frac{f(\epsilon)}{\epsilon} =0.$$
Prove that $\int_0^\infty\frac{x^n}{1+e^{x-t}}\mathrm{d}x = \frac{t^{n+1}}{n+1} + o(t^n)$, when $t \to \infty,\,n\in\Bbb{R}^+$
First, let's split up the integral: $$ \begin{align} \int_0^\infty\frac{x^n}{1+e^{x-t}}\mathrm{d}x &=\int_{-t}^\infty\frac{(x+t)^n}{1+e^x}\mathrm{d}x\\ &=\color{#C00000}{\int_{-t}^0\frac{(x+t)^n}{1+e^x}\mathrm{d}x}+\color{#00A000}{\int_0^\infty\frac{(x+t)^n}{1+e^x}\mathrm{d}x}\tag{1} \end{align} $$ Note that the first integral on the right side of $(1)$ is $$ \begin{align} \color{#C00000}{\int_{-t}^0\frac{(x+t)^n}{1+e^x}\mathrm{d}x} &=t^{n+1}\int_{-1}^0\frac{(1+x)^n}{1+e^{tx}}\mathrm{d}x\\ &=t^{n+1}\int_{-1}^0(1+x)^n\,\mathrm{d}x-\color{#0000FF}{t^{n+1}\int_{-1}^0\frac{(1+x)^n}{1+e^{tx}}e^{tx}\,\mathrm{d}x}\\ &=\frac{t^{n+1}}{n+1}+O\left(t^n\right)\tag{2} \end{align} $$ because $$ \begin{align} \color{#0000FF}{t^{n+1}\int_{-1}^0\frac{(1+x)^n}{1+e^{tx}}e^{tx}\,\mathrm{d}x} &\le t^{n+1}\int_{-1}^0(1+x)^n\,e^{tx}\mathrm{d}x\\ &\le t^{n+1}\int_{-1}^0e^{nx}\,e^{tx}\mathrm{d}x\\ &\le\frac{t^{n+1}}{n+t}\\[6pt] &=O\left(t^n\right)\tag{3} \end{align} $$ Furthermore, by dominated convergence $$ \lim_{t\to\infty}\int_0^\infty\left(1+\frac xt\right)^n\,e^{-x}\,\mathrm{d}x =1\tag{4} $$ therefore, the second integral on the right side of $(1)$ is $$ \begin{align} \color{#00A000}{\int_0^\infty\frac{(x+t)^n}{1+e^x}\mathrm{d}x} &\le t^n\int_0^\infty\left(1+\frac xt\right)^n\,e^{-x}\,\mathrm{d}x\\ &=O\left(t^n\right)\tag{5} \end{align} $$ Combining $(1)$, $(2)$, and $(5)$, we get $$ \int_0^\infty\frac{x^n}{1+e^{x-t}}\mathrm{d}x =\frac{t^{n+1}}{n+1}+O\left(t^n\right)\tag{6} $$
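As a numerical sanity check of $(6)$, here is a short scipy sketch (with $n=2$, my own choice; $\operatorname{expit}(t-x)=\frac{1}{1+e^{x-t}}$):

```python
from scipy.integrate import quad
from scipy.special import expit   # expit(t - x) = 1/(1 + exp(x - t))

n = 2.0
for t in [10, 100, 1000]:
    # the tail beyond t + 100 is negligible (of order e^{-100})
    val, _ = quad(lambda x: x**n * expit(t - x), 0, t + 100)
    print(t, val / (t**(n + 1) / (n + 1)))   # ratio tends to 1
```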
When does this series converge? $\sum_{n=45}^{\infty}(-1)^n(x^2+2x)^{\log({R(n)})}$ with R(n)=...
Hint: For $x^2 + 2 x > 0$, you'll want to think about alternating series. For $x^2 + 2 x < 0$, things are rather more complicated because $(x^2+2x)^{\log(R(n))}$ is complex, but absolute convergence should be easy to test because $\log(R(n)) = 2 \log(n) + o(1)$.
Show a locally bounded Lipschitz function space is compact for sup metric
Just show $\Omega$ = {$f\in\text{Lip}(\alpha,M):|f(u)|\leq M$} is totally bounded and complete. Note that $\Omega$ is equicontinuous: given any $\epsilon>0$ and any $x\in S$, there exists an open neighborhood $B(x,\delta,d)=$ {$y\in S:d(x,y)<\delta$} of $x$ such that $|f(x)-f(y)|\leq Md(x,y)^\alpha<M\delta^\alpha<\epsilon$ for all $f\in\Omega\subset\text{Lip}(\alpha,M)$; just let $\delta=(\epsilon/M)^{1/\alpha}/2$. Since $\Omega$ is equicontinuous, every $f\in\Omega$ is continuous. Since $S$ is compact, by the extreme value theorem, each $f\in\Omega$ is bounded. Thus $\Omega$ is uniformly bounded. By the Arzelà–Ascoli theorem, $\Omega$ is totally bounded. $\Omega$ is complete by the following theorem.
Proving inequality in basic number theory
Set $t=\dfrac xy$. Then $$x^2+xy+y^2=y^2(t^2+t+1)=y^2\biggl(\Bigl(t+\frac12\Bigr)^2+\frac34\biggr)\ge \frac{3y^2}4>0.$$
Branch and bound branching
No, this need not happen. Consider the problem $$\max 1.5x+0.5y$$ $$x+0.5y \leq 3.75$$ $$y \leq 3.5$$ $$x \leq 2.5$$ $$x,y \geq 0$$ The feasible set with the objective function is: Solving the relaxation gives us the solution (2.5,2.5). Both coordinates are non-integer, so we have to branch. We branch on $y$ and add the constraints $y\geq 3$ and $y \leq 2$. This leads to the feasible sets below: Hence, both subproblems are neither unbounded nor infeasible. We would find the solutions (2.25,3) and (2.5,2) and would have to keep on branching. Note, however, if we had branched on $x$ instead of $y$ in the first step, then the subproblem with the constraint $x\geq3$ would indeed have been infeasible. Yet, as the example above shows, this need not always happen.
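The first relaxation can be reproduced with scipy (a sketch; the constraints $x\le2.5$ and $y\le3.5$ are folded into the variable bounds):

```python
from scipy.optimize import linprog

# maximize 1.5x + 0.5y  <=>  minimize -1.5x - 0.5y
res = linprog(c=[-1.5, -0.5],
              A_ub=[[1, 0.5]], b_ub=[3.75],
              bounds=[(0, 2.5), (0, 3.5)])
print(res.x)   # [2.5, 2.5]: fractional, so we branch
```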
If $A,B,C,D$ are complex numbers on the unit circle with $A+B+C+D=0$, then they form a rectangle
I suppose the assumption is that $A,B,C,D$ are all distinct, otherwise it is not necessarily true. Here is a pure complex number only proof. Assume that $A+B \ne 0$ and $A + D \ne 0$. We will show that this implies that $A + C = 0$. Since $$A+B+C+D = 0 \quad \quad (1)$$ we must have that $$\overline{A} + \overline{B} + \overline{C} + \overline{D} = 0$$ where $\overline{z}$ is the conjugate of $z$, and thus $$\frac{1}{A} + \frac{1}{B} + \frac{1}{C} +\frac{1}{D} = 0 \quad \quad \quad (2)$$ $(1)$ and $(2)$ imply that $$A + B = -(C+D) $$ and $$\frac{A+B}{AB} = -\frac{C+D}{CD}$$ and thus $$AB = CD\quad \quad \quad (3)$$ (because $A+B \neq 0$). Similarly, because $A + D \ne 0$, we get $$AD = BC\quad \quad \quad (4)$$ Now $(3)$ and $(4)$ imply (just divide) that $B^2 = D^2$, so $B = \pm D$; since the points are distinct, $B = -D$, hence $B+D = 0$ and $A+C = -(B+D) = 0$. Now rotate the plane around the origin so that $\overline{A} = D$. (This is always possible.) Since rotation is just multiplying by some non-zero $w$, we still have that $A+C = 0$. Thus we have that $D = \overline{A} $, $C = -A$ and $B = -\overline{A}$, and thus $A,B,C,D$ form a rectangle.
Form of the group of all roots of unity in $\mathbb{Q}(\zeta_{p^m})$
Each of the $\pm\zeta_{p^m}^j$ is a root of unity in the field $K=\Bbb Q(\zeta_{p^m})$. If there were some other root of unity, say $\zeta_r$, then $r\nmid 2p^m$, and $K$ would contain the $s$-th roots of unity where $s=\text{lcm}(r,2p^m)>2p^m$. But the degree of $\Bbb Q(\zeta_s)$ is $\phi(s)>\phi(2p^m)$, as $s=2kp^m$ where $k>1$ is an integer. So $|\Bbb Q(\zeta_s):\Bbb Q|>|K:\Bbb Q|$, so that it is impossible for $\Bbb Q(\zeta_s)\subseteq K$.
Suppose that the $\lim_{n\to\infty} \frac{h(n)}{\max\{f(n), g(n)\}} < \infty$. Show that $h(n)=O(f(n)+g(n))$
Let $$L= \lim_n \frac{h(n)}{\max \{ f(n), g(n) \}}$$ Then eventually you have $$h(n) \le (L +1) \max \{ f(n), g(n) \} \le (L+1) (f(n) + g(n))$$ i.e. $h(n) =O(f(n) + g(n))$.
Cyclic Function of Order 5
Let $\phi=\frac{1+\sqrt{5}}{2}$, then consider $g(x)=\frac{1}{\phi-x}$. It’s not possible if $g$ is a rational fraction with rational coefficients and $g$ isn’t just the identity. Indeed, this implies that $f \in \mathbb{C}(x) \longmapsto f\circ g \in \mathbb{C}(x)$ is surjective and thus that $g$ is injective from $\mathbb{C}$ to itself. It’s not hard to deduce that $g$ must be of “degree” one, that is, with both numerator and denominator of degree at most one. Now write $g(x)=\frac{ax+b}{cx+d}$, and let $A=\begin{bmatrix}a &b\\c&d\end{bmatrix}$. You can easily compute that the conditions on $g$ force $A$ to be invertible, with rational entries and such that $A^5$ is scalar, say, $A^5=\alpha I_2$, $\alpha \neq 0$. But then, this means that $\alpha$ has a fifth root (say, $\beta$) such that the characteristic polynomial of $A$ vanishes at $\beta$ – in particular $\beta$ is an element of a quadratic number field. Write $\beta=u+v\sqrt{C}$, with $C$ being a square-free integer; as $\beta^5$ is rational, it follows that $C^2v^5+10Cv^3u^2+5vu^4=0$. If $u=0$ or $v=0$, then $v=0$ or $C=0$, so $\beta$ is rational (and this implies that $A$ is diagonalizable over $\mathbb{Q}$, hence that $A$ is scalar and thus that $g(x)=x$). So let $w=v/u \in \mathbb{Q}^*$ so that $P(w):=C^2w^4+10Cw^2+5=0$. If $5$ doesn’t divide $C$, then $P$ is irreducible by a slight elaboration on Eisenstein. If $C=5D$, then $D$ isn’t divisible by $5$ and $1/w$ is a root of $Q(x)=x^4+10Dx^2+5D^2$, which is irreducible by the same argument.
Determine the steady state temperature distribution for the given problems
Integrating twice gives $$y'=-T_0x+A$$ $$y=-\frac{T_0}2x^2+Ax+B$$ Now use the boundary conditions: $$y'(0)=0\implies A=0$$ $$y(1)=0\implies 0=-\frac{T_0}2+B\implies B=\frac{T_0}2$$ $$y=-\frac{T_0}2x^2+\frac{T_0}2=\frac{T_0}2(1-x^2)$$
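Assuming the underlying equation is $y''=-T_0$ with $y'(0)=0$ and $y(1)=0$ (as the integration above indicates), the solution checks out symbolically:

```python
from sympy import symbols, diff

x, T0 = symbols('x T0')
y = T0 / 2 * (1 - x**2)

print(diff(y, x, 2))           # -T0: the differential equation holds
print(diff(y, x).subs(x, 0))   # 0:   y'(0) = 0
print(y.subs(x, 1))            # 0:   y(1) = 0
```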
Special dice generating non-decreasing sequence
If $p_n$ is the probability that the $n$th roll is a six, the generating function for this sequence is $$\phi(s)=\sum_{n\geq1} p_n s^n={120s\over(6-s)(5-s)(4-s)(3-s)(2-s)(1-s)}.$$ A partial fraction expansion gives us the explicit formula $$p_n=\sum_{j=0}^5 {5\choose j}{(-1)^j\over (j+1)^n}.$$ The first few values are $$p_1={1\over 6},\quad p_2={49\over 120 },\quad p_3={13489\over 21600},\quad p_4={336581\over 432000}.$$
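The explicit formula reproduces these values exactly; a short Python check:

```python
from fractions import Fraction
from math import comb

def p(n):
    return sum(Fraction(comb(5, j) * (-1)**j, (j + 1)**n) for j in range(6))

print([p(n) for n in range(1, 5)])
# [Fraction(1, 6), Fraction(49, 120), Fraction(13489, 21600), Fraction(336581, 432000)]
```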
What does $p_b \propto \rho_b^n$ mean
It means that $p_b$ is directly proportional to $\rho_b^n$, i.e. $p_b = C\rho_b^n$, for some constant $C$ (to be determined).
continuity, supremum/infimum
Because as $\varepsilon$ decreases you're taking bounds over smaller sets. Anything that is an upper or lower bound for the larger set must necessarily also be an upper or lower bound for any subset. So at a minimum, we know that the difference is non-increasing as $\varepsilon$ gets smaller. If $f$ is continuous at $a$ (as the answer to the linked question assumes), then the difference of the bounds must in fact go to $0.$ If the difference were to go to some $\mu \gt 0$, then any open set around $a$, no matter how small, would have points $y, z$ such that $\vert f(y)-f(z) \vert \gt \frac{\mu}{2}.$ That means for $\varepsilon \lt \frac{\mu}{4}$ it's impossible to find $\delta$ such that $\vert x-a \vert \lt \delta \Rightarrow \vert f(x)-f(a) \vert \lt \varepsilon$, which contradicts the assumption that $f$ is continuous at $a$.
Is $\{\tan(x) : x\in \mathbb{Q}\}$ a group under addition?
It follows rather easily from this fact: for every $0\neq n\in\mathbb N$ the number $e^{i/n}$ is transcendental. If we suppose it: if $\tan y=2\tan x$ then $(e^{2iy}-1)(e^{2ix}+1)=2(e^{2ix}-1)(e^{2iy}+1)$. If $x\neq0$ and $y$ are rational and $n$ is the least common denominator of $x$ and $y$, this becomes a polynomial equation for $e^{i/n}$, hence we get a contradiction. [the used result is a special case of the Lindemann-Weierstrass theorem which says, in particular, that $e^\alpha$ is transcendental for every algebraic $\alpha\neq0$. There is probably a simpler proof in this case.]
When is the image of a $\sigma$-algebra a $\sigma$-algebra?
For reference, I'm proving the equivalence suggested in this comment. Let $(E, \mathcal{E})$ be a measurable space, $F$ be a set and $f: E \to F$ be a function. Then the following two conditions are equivalent: 1. $f(\mathcal{E})$ is a $\sigma$-algebra on $F$ and $f$ is $\mathcal{E}/f(\mathcal{E})$-measurable; 2. $f$ is surjective and, for all $A \in \mathcal{E}$, $f^{-1}(f(A)) \in \mathcal{E}$. First, 1. $\implies$ 2. $f$ is surjective: since $f(\mathcal{E})$ is a $\sigma$-algebra over $F$ it must contain $F$, and hence we can write $F = f(A)$ for some $A \in \mathcal{E}$. For all $A \in \mathcal{E},\ f^{-1}(f(A)) \in \mathcal{E}$: take any $A \in \mathcal{E}$. Clearly, $f(A) \in f(\mathcal{E})$. Moreover, since $f(\mathcal{E})$ is a $\sigma$-algebra on $F$ and $f$ is $\mathcal{E}/f(\mathcal{E})$-measurable, $f^{-1}(f(A)) \in \mathcal{E}$. Now, 2. $\implies$ 1. $f(\mathcal{E})$ is a $\sigma$-algebra on $F$: $F = f(E) \in f(\mathcal{E})$ since $f$ is surjective. For any $B \in f(\mathcal{E})$ we must show that $B^\complement \in f(\mathcal{E})$. Write $B = f(A)$ with $A \in \mathcal{E}$; by hypothesis $f^{-1}(B) = f^{-1}(f(A)) \in \mathcal{E}$, hence $f^{-1}(B^\complement) = \left(f^{-1}(B)\right)^\complement \in \mathcal{E}$, and by surjectivity $B^\complement = f\left(f^{-1}(B^\complement)\right) \in f(\mathcal{E})$. As before, for any $(B_n)_{n\in\mathbb{N}} \subset f(\mathcal{E})$, there exists a sequence $(A_n)_{n\in\mathbb{N}} \subset \mathcal{E}$ such that $f(A_n) = B_n$. Since $\mathcal{E}$ is itself a $\sigma$-algebra, it holds that $\bigcup_{n\in\mathbb{N}} A_n \in \mathcal{E}$ and thus $$\bigcup_{n\in\mathbb{N}} B_n = \bigcup_{n\in\mathbb{N}} f(A_n) = f\Big(\bigcup_{n\in\mathbb{N}} A_n\Big) \in f(\mathcal{E}).$$ $f$ is $\mathcal{E}/f(\mathcal{E})$-measurable: take any $B \in f(\mathcal{E})$. By hypothesis we know that $B = f(A)$ for some $A \in \mathcal{E}$ and also $f^{-1}(f(A)) = f^{-1}(B) \in \mathcal{E}$, hence $f$ is $\mathcal{E}/f(\mathcal{E})$-measurable.
Find maximal ideals of a ring
Since $\Bbb Z[\sqrt{3}]\cong \Bbb Z[x]/(x^2-3)$, you are looking for the maximal ideals of $\Bbb Z[x]$ which contain $(x^2-3)$. The maximal ideals of $\Bbb Z[x]$ appear among the prime ideals described here, and are exactly the ones generated by two elements. There are three possibilities that each prime $p$ can fall under: $x^2-3\pmod{p}$ is irreducible, as in the case when $p=5$. In that case, $(p,x^2-3)$ is maximal. This corresponds to the case when $3$ is not a quadratic residue in $\Bbb F_p$. $x^2-3\equiv (x-\alpha)(x-\beta) \pmod{p}$, where $\alpha\neq\beta$, as in the case for $p=11$. We have two distinct maximal ideals $(p,(x-\alpha))$ and $(p,(x-\beta))$. This corresponds to the case when $3$ is a quadratic residue in $\Bbb F_p$. $x^2-3\equiv (x-\alpha)^2 \pmod{p}$, as in the case $p=2$ and $p=3$. (Actually, you should show that these are the only two primes for this case.) Then $(p,(x-\alpha))$ is maximal.
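The three cases can be observed directly by factoring $x^2-3$ modulo a few primes, for instance with sympy:

```python
from sympy import symbols, factor

x = symbols('x')
for p in [5, 11, 2, 3]:
    print(p, factor(x**2 - 3, modulus=p))
# p = 5: stays irreducible; p = 11: two distinct linear factors;
# p = 2 and p = 3: a repeated linear factor
```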
Box-counting dimension: $\lim_{\delta \to 0} N_{\delta}(F)\delta^{s} = \infty$ when $s < \dim_{M}(F)$ and another similar limit.
Consider $\lim_{\delta \to 0} N_\delta(F) \delta^{\dim_M (F) + \epsilon}$. By a property of exponents, this equals $\lim_{\delta \to 0} N_\delta(F) \delta^{\dim_M (F)} \delta^{\epsilon}$. When the individual limits are finite, this can be split into a product of limits: $\left(\lim_{\delta \to 0} N_\delta(F) \delta^{\dim_M (F)}\right) \left(\lim_{\delta \to 0}\delta^{\epsilon}\right)$. Now assuming $\epsilon > 0$, $\lim_{\delta \to 0}\delta^{\epsilon} = 0$. To show that the other limit in the product is finite: $\lim_{\delta \to 0} N_\delta(F) \delta^{\dim_M (F)} = K \in \mathbb{R}$ iff $\lim_{\delta \to 0} \log N_\delta(F) + \dim_M(F) \log \delta = \log K$ iff $\dim_M(F) = \lim_{\delta \to 0} \frac{\log N_\delta(F) - \log K}{-\log \delta} = \lim_{\delta \to 0} \frac{\log N_\delta(F)}{-\log \delta}$.
$W^{m,p}_{0}(\mathbb{R}^d) = W^{m,p}(\mathbb{R}^d)$ and continuously imbedded
Hint: $C^{\infty}(\mathbb{R}^{n}) \cap W^{m,p}(\mathbb{R}^{n})$ is dense in $W^{m,p}(\mathbb{R}^{n})$ with respect to the Sobolev norm. Now show that $C_{0}^{\infty}(\mathbb{R}^{n})$ is dense in $C^{\infty}(\mathbb{R}^{n}) \cap W^{m,p}(\mathbb{R}^{n})$ with respect to the Sobolev norm. To this end, note that if $u \in L^{p}(\mathbb{R}^{n})$, we can approximate it - in $L^{p}$ - by a function with compact support.
Can a single chart on the real line, $(\mathbb{R}, f:x\mapsto x^{1/3})$, be considered as an atlas (differential structure)?
There is a priori no notion of smoothness on the domain. While this is not smooth with respect to the standard manifold structure on $\mathbb R$, it doesn't matter for the purpose of defining an atlas. You're right that we don't have a compatibility condition to check in this case (perhaps besides the trivial transition function from the given chart to itself), but we still need to make sure that the chart is a homeomorphism onto its image (which is not difficult to see in this case).
Transforming the derivative of $f(x)$ using a table of values
$y = f(u) \implies y' = f'(u)\,u'$. This is the chain rule. Here $a(x) = f(-2x)$, so $a'(x) = -2\,f'(-2x)$ and $a'(-1) = -2\,f'(2) = -2$, using the table value $f'(2) = 1$.
Condition for points to be concyclic
Lemma 1: Mappings of the form $z\mapsto w = \dfrac{az+b}{cz+d}$ preserve cross-ratios. I.e., if $w_k=f(z_k)$ for $k=1,2,3,4$; then $$ \text{cross-ratio}=\frac{z_1-z_4}{z_1-z_2} \times \frac{z_2-z_3}{z_4-z_3}=\frac{w_1-w_4}{w_1-w_2} \times \frac{w_2-w_3}{w_4-w_3}. $$ Lemma 2: Every circle and every straight line can be mapped onto $\mathbb R\cup\{\infty\}$ by a mapping of the form considered in Lemma 1. (Here we consider every line to contain the point $\infty$, which may be mapped either to $\infty$ or to some (finite) real number.) Lemma 3: The inverse of a mapping of the form considered in Lemma 1 is another mapping of that form; consequently every circle is also the image of $\mathbb R\cup\{\infty\}$ under such a mapping. So see if you can prove the three lemmas and then use them to get your result.
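If you want to convince yourself of Lemma 1 numerically before proving it, here is a minimal Python sketch; the coefficients and sample points are arbitrary (any $a,b,c,d$ with $ad-bc\neq 0$ will do):

```python
# Numerical check of Lemma 1: a Moebius map preserves the cross-ratio.
def mobius(z, a, b, c, d):
    return (a * z + b) / (c * z + d)

def cross_ratio(z1, z2, z3, z4):
    return (z1 - z4) / (z1 - z2) * (z2 - z3) / (z4 - z3)

a, b, c, d = 2 + 1j, -1, 3j, 4          # arbitrary, with a*d - b*c != 0
zs = [1 + 2j, -3 + 0.5j, 2 - 1j, 0.25j]
ws = [mobius(z, a, b, c, d) for z in zs]
print(abs(cross_ratio(*zs) - cross_ratio(*ws)))   # ~ 1e-16
```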
How to calculate this sum of series?
Let $y \in \mathbb R$ and define $f_y\colon [0,1] \to \mathbb R$ by $$ f_y(x) := \sum_{r=0}^\infty b_r(x)\frac{y^r}{r!} $$ We have, taking derivatives, that for $x \in [0,1]$: \begin{align*} f_y'(x) &= \sum_{r=0}^\infty b_r'(x) \frac{y^r}{r!}\\ &= \sum_{r=1}^\infty rb_{r-1}(x) \frac{y^r}{r!}\\ &= y\cdot \sum_{r=0}^\infty b_r(x) \frac{y^r}{r!}\\ &= y \cdot f_y(x) \end{align*} Hence $f_y(x) = \exp(xy)f_y(0)$. Integrating, we have by uniform convergence \begin{align*} \int_0^1 f_y(x)\, dx &= \sum_{r=0}^\infty \int_0^1 b_r(x)\, dx \cdot \frac{y^r}{r!}\\ &= 1. \end{align*} On the other hand \begin{align*} \int_0^1 f_y(x)\, dx &= \int_0^1 f_y(0)\exp(xy)\, dx\\ &= f_y(0) \cdot \left.\frac{\exp(xy)}y\right|_{x=0}^1\\ &= f_y(0) \cdot \frac{\exp y -1}y \end{align*} So $$ 1 = f_y(0) \cdot \frac{\exp y - 1}y \iff f_y(0) = \frac y{\exp y -1} $$ This gives $$ f_y(x) = \frac{y\exp(xy)}{\exp y- 1}. $$
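The closed form is the classical generating function of the Bernoulli polynomials, so the computation can be spot-checked with SymPy (a sketch; `bernoulli(r, x)` is SymPy's Bernoulli polynomial):

```python
from sympy import symbols, exp, series, bernoulli, factorial, simplify

x, y = symbols('x y')
f = y * exp(x * y) / (exp(y) - 1)
# Expand in powers of y and compare r! * [y^r] f with the Bernoulli polynomial B_r(x)
expansion = series(f, y, 0, 5).removeO().expand()
for r in range(5):
    coeff = expansion.coeff(y, r)
    assert simplify(coeff * factorial(r) - bernoulli(r, x)) == 0
print("b_r(x) agrees with the Bernoulli polynomial B_r(x) for r = 0..4")
```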
What hyperreal functions satisfy this condition involving positive infinitesimals?
By transfer, if $a\in\mathbb{R}$ and $g(a+\epsilon)=g(a)+\epsilon$ is true for all infinitesimal $\epsilon>0$, it must also be true for all sufficiently small real $\epsilon>0$. So your condition is equivalent to the condition (in the following all variables are real) that for each $x$ there exists $\delta>0$ such that $g(x+\epsilon)=g(x)+\epsilon$ for all $0\leq\epsilon<\delta$. Or, letting $h(x)=g(x)-x$, this means that $h$ is constant in a half-open interval starting at each point. (So, $\mathbb{R}$ can be partitioned into half-open intervals $[a,b)$ such that $h$ is constant on each one.) If you removed the positivity condition on $\epsilon$, you would similarly conclude that $h$ is constant in an open interval around each point and thus globally constant since $\mathbb{R}$ is connected, so $g(x)$ would have the form $x+c$ for some constant $c$.
Area under quarter circle by integration
From the equation $x^2+y^2=r^2$, you may express your area as the following integral $$ A=\int_0^r\sqrt{r^2-x^2}\:dx. $$ Then substitute $x=r\sin \theta$, $\theta=\arcsin (x/r)$, to get $$ \begin{align} A&=\int_0^{\pi/2}\sqrt{r^2-r^2\sin^2 \theta}\:r\cos \theta \:d\theta\\ &=r^2\int_0^{\pi/2}\sqrt{1-\sin^2 \theta}\:\cos\theta \:d\theta\\ &=r^2\int_0^{\pi/2}\sqrt{\cos^2 \theta}\:\cos\theta \:d\theta\\ &=r^2\int_0^{\pi/2}\cos^2 \theta \:d\theta\\ &=r^2\int_0^{\pi/2}\frac{1+\cos(2\theta)}2 \:d\theta\\ &=r^2\int_0^{\pi/2}\frac12 \:d\theta+\frac{r^2}2\underbrace{\left[ \frac12\sin(2\theta)\right]_0^{\pi/2}}_{\color{#C00000}{=\:0}}\\ &=\frac{\pi}4r^2. \end{align} $$
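A quick numerical sanity check of the result, with an arbitrarily chosen radius:

```python
import math

# Midpoint-rule approximation of the integral of sqrt(r^2 - x^2) over [0, r],
# compared with pi * r^2 / 4.
r, n = 2.0, 100_000
h = r / n
approx = h * sum(math.sqrt(r * r - ((i + 0.5) * h) ** 2) for i in range(n))
print(approx, math.pi * r * r / 4)   # both ~ 3.14159...
```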
application of limit rules in finite limits
You can do this by rationalizing both: $\lim_{x\to 4}\frac{3-\sqrt{5+x}}{1-\sqrt{5-x}}=\lim_{x\to 4}\frac{3-\sqrt{5+x}}{1-\sqrt{5-x}}\frac{3+\sqrt{5+x}}{3+\sqrt{5+x}}\frac{1+\sqrt{5-x}}{1+\sqrt{5-x}}= \lim_{x\to 4}\frac{(4-x)(1+\sqrt{5-x})}{(-4+x)(3+\sqrt{5+x})}$ From here you should be able to get to your desired result.
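Numerically, the simplified form tends to $-\frac{1+\sqrt{5-x}}{3+\sqrt{5+x}} \to -\frac13$ as $x\to 4$; a quick check:

```python
# The ratio approaches -1/3 as x -> 4, matching -(1 + sqrt(1))/(3 + sqrt(9)).
f = lambda x: (3 - (5 + x) ** 0.5) / (1 - (5 - x) ** 0.5)
for h in [0.1, 0.01, 0.001, 0.0001]:
    print(f(4 - h), f(4 + h))   # both columns tend to -0.3333...
```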
solve $\log(-0.7x-9)=1+5\log(5)$. I get $370$ whereas textbook solution says $-44655.7143$
There are two errors in your calculations. First, $5^5 \ne 25$. And in the third line from the end you are missing a sign. It should be, using your steps, $$\log(-0.7x-9)=1+5\log(5)$$ $$\log(-0.7x-9)=1+\log(3125)$$ $$\log(-0.7x-9)=\log(3125)+\log(10)$$ $$\log(-0.7x-9)=\log(31250)$$ $$-0.7x-9=31250$$ $$-0.7x=31259$$ $$x=-\frac{31259}{0.7}$$ $$x\approx -44655.7$$
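A one-line verification of the corrected value (using base-10 logs, as the problem implies):

```python
import math

x = -31259 / 0.7
lhs = math.log10(-0.7 * x - 9)
rhs = 1 + 5 * math.log10(5)
print(x, lhs, rhs)   # x ~ -44655.714, and lhs == rhs ~ 4.49485
```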
What does the Y-axis represent in probability density function?
The values that the density function takes on are best read in comparison to the uniform density $1$ on the space. If the density is larger than $1$ at a point, you are more likely to find the random variable in a small region around that point than you would be under the uniform law. Similarly, if your density is less than $1$ at a point, you are less likely to find the random variable in a small region around that point.
Interesting sequence of all the natural numbers
The Online Encyclopaedia of Integer Sequences is a great source for interesting integer sequences. You will find some pointers relevant to your question here.
What is the number of trailing zeros in a factorial in base ‘b’?
Suppose that $b=p^m$, where $p$ is prime; then $z_b(n)$, the number of trailing zeroes of $n!$ in base $b$, is $$z_b(n)=\left\lfloor\frac1m\sum_{k\ge 1}\left\lfloor\frac{n}{p^k}\right\rfloor\right\rfloor\;.\tag{1}$$ That may look like an infinite summation, but once $p^k>n$, $\left\lfloor\frac{n}{p^k}\right\rfloor=0$, so there are really only finitely many non-zero terms.

The summation counts the number of factors of $p$ in $n!$. The set $\{1,2,\dots,n\}$ of integers whose product is $n!$ contains $\left\lfloor\frac{n}p\right\rfloor$ multiples of $p$, $\left\lfloor\frac{n}{p^2}\right\rfloor$ multiples of $p^2$, and so on; in general it contains $\left\lfloor\frac{n}{p^k}\right\rfloor$ multiples of $p^k$. Each multiple of $p$ contributes one factor of $p$ to the product $n!$; each multiple of $p^2$ contributes an additional factor of $p$ beyond the one that was already counted for it as a multiple of $p$; each multiple of $p^3$ contributes an additional factor of $p$ beyond the ones already counted for it as a multiple of $p$ and as a multiple of $p^2$; and so on.

Let $s=\sum_{k\ge 1}\left\lfloor\frac{n}{p^k}\right\rfloor$; then $n!=p^sk$, where $k$ is not divisible by $p$. Divide $s$ by $m$ to get a quotient $q$ and a remainder $r$: $s=mq+r$, where $0\le r<m$. Then $$n!=p^sk=p^{mq+r}k=(p^m)^qp^rk=b^qp^rk\;,$$ where $p^rk$ is not divisible by $b$.

Since $p^rk$ isn't divisible by $b$, in base $b$ it will not end in $0$. Multiplying it by $b$ will simply tack a $0$ on the righthand end of it, just as multiplying $123$ by $10$ in base ten tacks a $0$ on the end to make $1230$. Multiplying by $b^q$ is multiplying by $b$ a total of $q$ times, so it tacks $q$ zeroes onto a number that did not end in $0$; the result is that $n!$ ends up with $q$ zeroes in base $b$. But $$q=\left\lfloor\frac{s}m\right\rfloor=\left\lfloor\frac1m\sum_{k\ge 1}\left\lfloor\frac{n}{p^k}\right\rfloor\right\rfloor\;,$$ showing that $(1)$ is correct.

In your example $b=2^4$, so $p=2$ and $m=4$, and with $n=100$, $(1)$ becomes $$\begin{align*} z_{16}(100)&=\left\lfloor\frac14\sum_{k\ge 1}\left\lfloor\frac{100}{2^k}\right\rfloor\right\rfloor\\ &=\left\lfloor\frac14\left(\left\lfloor\frac{100}2\right\rfloor+\left\lfloor\frac{100}4\right\rfloor+\left\lfloor\frac{100}8\right\rfloor+\left\lfloor\frac{100}{16}\right\rfloor+\left\lfloor\frac{100}{32}\right\rfloor+\left\lfloor\frac{100}{64}\right\rfloor\right)\right\rfloor\\ &=\left\lfloor\frac14(50+25+12+6+3+1)\right\rfloor\\ &=\left\lfloor\frac{97}4\right\rfloor\\ &=24 \end{align*}$$

The value of the summation is $97$, which tells you that there are $97$ factors of $2$ in $100!$: $100!=2^{97}k$, where $k$ is an odd number. $97=4\cdot24+1$, so $100!=2^{4\cdot24+1}k=(2^4)^{24}2k=16^{24}(2k)$, where $2k$ is not a multiple of $16$ (since it's just $2$ times an odd number). Thus, the base $16$ representation of $2k$ does not end in $0$. Each of the $24$ multiplications of this number by $16$ tacks another $0$ on the end in base $16$, so you end up with $24$ zeroes on the end.

The original sum counts the factors of $2$ in $100!$, but the number of zeroes on the end isn't the number of factors of $2$: it's the number of factors of $2^4$, the base. Every four factors of $2$ give you one factor of $2^4$, so you divide by $4$ (and throw away the remainder) to see how many factors of $2^4$ you can build out of your $97$ factors of $2$.
When the base is not a power of a prime, counting the trailing zeroes is a little harder, but it can be done using exactly the same ideas.
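Formula $(1)$ translates directly into code; here is a short sketch for prime-power bases, cross-checked by stripping trailing zeroes from $100!$ in base $16$ directly:

```python
import math

def trailing_zeros_prime_power_base(n, p, m):
    """Trailing zeroes of n! in base b = p**m, via formula (1)."""
    s, pk = 0, p
    while pk <= n:
        s += n // pk
        pk *= p
    return s // m

print(trailing_zeros_prime_power_base(100, 2, 4))   # 24

# Direct check: divide 100! by 16 until it stops being divisible.
f, count = math.factorial(100), 0
while f % 16 == 0:
    f //= 16
    count += 1
print(count)   # 24
```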
How to square equations involving random variables.
For simplicity let $S=T_{n-1}$. The correct results are as follows, where $S'$ and $S$ are identically distributed. $$S=1\cdot\frac{1}{n}+(1+S')\frac{n-1}{n}$$ $$S^2=1\cdot\frac{1}{n}+(1+S')^2\frac{n-1}{n}$$ These give $E(S)=n$ and $E(S^2)=2n^2-n$. So, what has gone wrong in your derivation? In essence, you cannot square the equation in the way you have done. Just to pick up one problem, the $T_{n-1}$s on each side of the equation are not equal; they are different random variables with the same expectation. Therefore you can only manipulate them in the way you appear to have done once you have taken the expectation.
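Under this reading, $S$ is a geometric random variable with success probability $1/n$, so both moments can be spot-checked with truncated sums ($n=10$ is an arbitrary choice):

```python
# S geometric with success probability p = 1/n: check E[S] = n and
# E[S^2] = 2n^2 - n by (truncated) exact summation.
n = 10
p = 1.0 / n
ES = sum(k * (1 - p) ** (k - 1) * p for k in range(1, 2000))
ES2 = sum(k * k * (1 - p) ** (k - 1) * p for k in range(1, 2000))
print(ES, n)                # ~ 10
print(ES2, 2 * n * n - n)   # ~ 190
```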
proving that the height of $(X)$ in $R[X]$ is equal to 1
This follows directly from Krull's principal ideal theorem: assuming $R$ is a Noetherian domain, $(X)$ is a prime ideal of $R[X]$ (since $R[X]/(X)\cong R$ is a domain) generated by a single element, so Krull's theorem gives $\operatorname{ht}(X)\le 1$; and $(0)\subsetneq (X)$ is a chain of primes, so $\operatorname{ht}(X)=1$.
Conditional expectation probability question
If $X\mid Y=y\sim \mathrm{po}(y)$, then ${\rm E}[X\mid Y=y]=y$ and not $4p$. For the latter part use that if $\varphi(y)={\rm E}[X\mid Y=y]$ then ${\rm E}[X\mid Y]=\varphi(Y)$.
Differential Equation RLC circuit analysis.
The problem is that your "guess" fits the general form of a homogeneous solution. Since it's a solution to the homogeneous problem, plugging it in on the left side gives you zero. The solution to this problem is to modify your particular solution. Use $$ i_p(t) = Ate^{-10t} $$
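To see this concretely, here is a minimal SymPy sketch. The actual circuit equation isn't shown here, so a first-order equation whose homogeneous solution is $e^{-10t}$ is assumed purely for illustration; the point is the extra factor of $t$ in the solution:

```python
from sympy import symbols, Function, Eq, exp, dsolve

# Illustrative only: an ODE whose homogeneous solution is already exp(-10 t),
# driven by the same exponential.
t = symbols('t')
i = Function('i')
ode = Eq(i(t).diff(t) + 10 * i(t), exp(-10 * t))
print(dsolve(ode, i(t)))   # i(t) = (C1 + t)*exp(-10*t): note the factor of t
```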
Justifying Operations on Abel Summability?
In general, if $f_n$ is a sequence of Riemann integrable functions on $[-\pi,\pi]$ and $f_n \to f$ uniformly, then $\lim_n \int f_n = \int f$. In our case, $\left| \sum_n r^{|n|} f(x) e^{-in(x-\theta)}\right| \le \sum_n r^{|n|} |f(x)|$. Since $f$ is Riemann integrable, $f$ is bounded. So this sum is bounded by $c \sum_n r^{|n|}$ for some constant $c > 0$, hence the sum converges uniformly in $x$. So, $\lim_{N \to \infty} \sum_{|n| \le N} \int r^{|n|} f(x) e^{-in(x-\theta)}dx = \lim_{N \to \infty} \int \sum_{|n| \le N} r^{|n|}f(x) e^{-in(x-\theta)} dx = \int \lim_{N \to \infty} \sum_{|n| \le N} r^{|n|} f(x) e^{-in(x-\theta)}dx$.

Edit: In some sense, this is the appeal of Abel summability. The object you started with is $\sum_n a_n e^{in\theta}$, with $a_n = \int f(x) e^{-inx} dx$. For this sum to be manageable, we need $e^{in\theta}a_n$ to decay rapidly. But $|e^{in\theta}| = 1$, so any straightforward estimate we can come up with isn't going to get us anywhere. The summability of the Fourier series is going to depend strongly on how much cancellation we can get out of these $e^{-inx}$ terms for large $n$. By replacing the coefficients with $r^{|n|} a_n e^{in\theta}$, we introduce a factor that will force convergence, and then we can hope to take a limit as $r \to 1^{-}$. I haven't read Stein's book recently, but he probably covers convergence in the case that $f$ is smooth as well (which controls the amount of oscillation $f$ can have, and hence we can hope for a lot of cancellation).
If all of the world’s population stood 6 feet apart, what’s the smallest country they would fit in to?
While the problem itself is very simple, one can be easily trapped in an argument between equilateral triangles and regular hexagons; but that is an illusion: both of these tessellate the 2D plane, with the same density of people (red dots). The key is to calculate the area needed per person correctly.

On one hand, we can say that each person stands at a grid point of a regular equilateral triangle grid, with grid spacing (and equilateral triangle edge length) the same as the interperson distance $d = 6\text{ ft} = 1.8288\text{ m}$. Each person is at the vertex shared by six triangles, but there are just three vertices per triangle; so, the density of people is three times one sixth of a person per triangle, or half a person per triangle. In other words, each person uses up the area of two triangles: $$A_\Delta = 2 \frac{\sqrt{3}}{4} d^2 = \frac{\sqrt{3}}{2} d^2 \approx 2.8964\text{ m}^2$$ or about $31.177\text{ sq ft}$.

On the other hand, we can say that each person stands inside a regular hexagon with edge length $a$. Consider the yellow right angle part in the above diagram: the horizontal part is $d/2$, the vertical part is $a/2$, and the angle at the person (red dot) is $30°$; with $d = 6\text{ ft} = 1.8288\text{ m}$ the interperson distance. In other words, $$\frac{a / 2}{d / 2} = \tan 30° \quad \iff \quad a = \frac{d}{\sqrt{3}}$$ and the area each person occupies is $$A_⬡ = \frac{3\sqrt{3}}{2} a^2 = \frac{3\sqrt{3}}{2}\left(\frac{d}{\sqrt{3}}\right)^2 = \frac{\sqrt{3}}{2} d^2 \approx 2.8964\text{ m}^2$$ or about $31.177\text{ sq ft}$.

So, saying that people stand at grid points of a regular equilateral triangular grid, or that people occupy hexagonal cells or tiles, is essentially just two different ways to say the same thing; either way, each person will occupy an area of $$A_{1} \approx 2.8964\text{ m}^2 \approx 31.177\text{ sq ft}$$ when tessellating the 2D plane (packing them on a flat plane). This means that the area needed for roughly $7\,780\,000\,000$ people is approximately $$A_\text{total} \approx 22\,534\text{ km}^2 \approx 8\,701\text{ sq mi}$$

As to which country is the smallest but has at least $22\,534\text{ km}^2$ of surface people could stand on – including actual area along hillsides, and whether to consider only land area or to include inland lakes that freeze hard enough during winters to stand on – that is not really a geometry question, but a geographical or perhaps political one; we humans tend to disagree even on which areas are their own countries, or just parts of other countries. So I won't be making any statements or claims on that; I'm here just for the math and geometry part, and to show how to estimate the surface area needed.

If one of your friends calculates the area needed based on circles with radius $r = d/2$, i.e. half the interperson distance, they'll arrive at an answer of $20\,436\text{ km}^2$ or $7\,890\text{ sq mi}$. The problem with that is that a circle does not tessellate a plane; there is always a small area left uncovered between three circles touching each other, which is not included in that area but is needed for the circles to not overlap. If you were to calculate or estimate the area of those uncovered parts, you'd arrive at the difference, or about $2\,098\text{ km}^2$ or $811\text{ sq mi}$.
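The arithmetic is easy to reproduce; a short sketch using $1\,\text{ft}=0.3048\,\text{m}$ and $1\,\text{sq mi}\approx 2.589988\,\text{km}^2$:

```python
import math

d = 6 * 0.3048                              # 6 ft in metres
area_per_person = math.sqrt(3) / 2 * d ** 2 # two equilateral triangles of side d
people = 7_780_000_000
total_km2 = people * area_per_person / 1e6  # m^2 -> km^2
print(area_per_person)                      # ~ 2.8964 m^2
print(total_km2)                            # ~ 22534 km^2
print(total_km2 / 2.589988)                 # ~ 8701 sq mi
```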
Evaluate $\int_{0}^{2\pi}f(z_0+re^{i\theta })e^{ki\theta }d\theta $
Your calculation is not correct: $e^{ki \theta}=e^{i\theta}$ does not hold. But you can apply Cauchy's integral theorem to $$ \int_\gamma f(z)(z-z_0)^{k-1} \, dz = \int_0^{2 \pi} f(z_0+re^{i \theta})\,r^{k-1}e^{(k-1)i \theta}\, ir e^{i \theta} \, d\theta = ir^k \int_0^{2 \pi} f(z_0+re^{i \theta})e^{ki \theta} \, d\theta; $$ for $k\ge 1$ the integrand $f(z)(z-z_0)^{k-1}$ is holomorphic (when $f$ is), so the left-hand side, and hence the original integral, vanishes.
Groups and vector spaces
These axioms are satisfied because $G$ is an Abelian group:

(1) Associativity of addition: $u + (v + w) = (u + v) + w$.
(2) Commutativity of addition: $u + v = v + u$.
(3) Identity element of addition: $\exists e \in V: \forall v \in V: v + e = v$.
(4) Inverse elements of addition: $\forall v \in V: \exists\, {-v} \in V: v + (-v) = e$.

These still need to be checked to see if they impose any conditions on $G$:

(5) Identity element of scalar multiplication: $\bar{1}v = v$.
(6) Distributivity of scalar multiplication with respect to field addition: $(a + b)v = av + bv$.
(7) Distributivity of scalar multiplication with respect to vector addition: $a(u + v) = au + av$.
(8) Compatibility of scalar multiplication with field multiplication: $a(bv) = (ab)v$.

The first of these doesn't impose any restrictions on $G$. Distributivity of scalar multiplication with respect to field addition implies $\bar{0}v = e$. This can be seen as follows: $(\bar{1} + \bar{0})v = \bar{1}v + \bar{0}v \Rightarrow v = \bar{0}v + v \Rightarrow \bar{0}v = e$. This allows us to conclude that every element of $G$ needs to be its own inverse (using $\bar{1}+\bar{1}=\bar{0}$ in $\mathbb{F}_2$): $(\bar{1} + \bar{1})v = \bar{1}v + \bar{1}v \Rightarrow e = v + v \Rightarrow v = (-v)$. Given that we already know that $\bar{0}v = e$ and $\bar{1}v = v$, it is easy to check that distributivity of scalar multiplication with respect to vector addition and compatibility of scalar multiplication with field multiplication impose no new conditions on $G$. Hence, the only requirement on $G$ is that each element is its own inverse: $\forall v \in G: v = (-v)$.
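A brute-force verification of the conclusion on the toy example $G=(\mathbb Z/2)^2$ (chosen arbitrarily; any group in which every element is its own inverse works the same way):

```python
from itertools import product

# G = (Z/2)^2: every element is its own inverse, so it should be an
# F_2-vector space with 0*v = e and 1*v = v.
G = list(product([0, 1], repeat=2))
add = lambda u, v: tuple((a + b) % 2 for a, b in zip(u, v))
smul = lambda s, v: v if s == 1 else (0, 0)
F2 = [0, 1]

for a, b in product(F2, repeat=2):
    for u, v in product(G, repeat=2):
        assert smul((a + b) % 2, v) == add(smul(a, v), smul(b, v))  # field distributivity
        assert smul(a, add(u, v)) == add(smul(a, u), smul(a, v))    # vector distributivity
        assert smul(a, smul(b, v)) == smul((a * b) % 2, v)          # compatibility
print("all vector-space axioms hold")
```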
finding the distribution of $Z$ in a problem
Note that $(S_n)_{n\geqslant1}$ is the set of events of a Poisson process $(N_t)_{t\geqslant0}$ with intensity $\sigma$ and that $Z=N_s$ counts the number of events before time $s$. Hence, the distribution of $Z$ is Poisson with parameter $s\cdot\sigma$.
Calculating limit involving factorials.
Hint: $$ \begin{align} \frac{\pi^kk!}{(2k+1)!} &=\frac{\pi^k}{\underbrace{(k+1)(k+2)(k+3)\cdots(2k+1)}_{k+1\text{ terms}}}\\ &\le\frac{\pi^k}{(k+1)^{k+1}}\\ &=\frac1\pi\left(\frac\pi{k+1}\right)^{k+1} \end{align} $$
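A quick numerical sketch showing both the term and the bound tending to $0$ (the values of $k$ are arbitrary):

```python
import math

# The bound (1/pi) * (pi/(k+1))^(k+1) squeezes pi^k * k!/(2k+1)! to 0.
for k in [1, 2, 5, 10, 20]:
    term = math.pi ** k * math.factorial(k) / math.factorial(2 * k + 1)
    bound = (math.pi / (k + 1)) ** (k + 1) / math.pi
    print(k, term, bound)   # term <= bound, and both -> 0
```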
Opposite category trivial example
Your construction involving elements of sets and element-wise definitions of set functions is external to the data defining this as a category. A category consists of only "names" of objects and morphisms (with composition rules), and no other information. In this example the category has objects $\{A, B, C\}$ and morphisms $\{id_A, id_B, id_C, f, g\}$ with the implied compositions. The category doesn't "know" about the elements of the sets, so you wouldn't expect any new category constructed from this one to have a corresponding interpretation for elements. $f^{OP}$, for example, is simply an abstract morphism from $B$ to $A$, and nothing more. One way in category theory to interpret elements of a set is as morphisms from a singleton set $\{*\}$, so $a_1$ is represented by a morphism from $\{*\}$ to $A$ that sends $*$ to $a_1$. However, passing to the opposite category construction turns these morphisms out of $\{*\}$ into morphisms into $\{*\}$, which no longer have the same interpretation as elements of a set.
Minimising the area of a triangle.
Note the following: for the choice $x=0$ and $y=\epsilon$, an arbitrarily small positive real number, the area can be made arbitrarily small. Hence the minimal possible area of the triangle doesn't exist. Suggestion: in case you consider only integral points, try computing the area for a few points near $(2,1)$. It won't be a difficult exercise.
Simple question regarding factoring quadratics
The equation $x(ax+b) = c$ is valid but does not help. There is a general fact that $AB = 0$ implies $A = 0$ or $B=0$, and this allows us to solve a product expression by reducing it to easier equations. So we need $0$ on the right hand side for the product to be useful. E.g. we want to rewrite your equation $ax^2 + bx - c = 0$ as $a(x+\alpha)(x+\beta) = 0$. In order to find $\alpha, \beta$ to do this, note that the quadratic term is already OK: $ax^2$ in both. The linear term is $a(\alpha+\beta) = b$ and the constant term is $a\alpha\beta = -c$. So you need to find $\alpha$ and $\beta$ with known sum $\frac{b}{a}$ and known product $\frac{-c}{a}$, and this can sometimes be seen by inspection for concrete $a,b$ and $c$.
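A worked example with made-up numbers, $2x^2+7x-4=0$: here $\alpha+\beta=\frac72$ and $\alpha\beta=-2$, giving $\alpha=4$ and $\beta=-\frac12$ (a SymPy sketch):

```python
from sympy import symbols, factor, expand, Rational

x = symbols('x')
print(factor(2 * x ** 2 + 7 * x - 4))              # (x + 4)*(2*x - 1)
# alpha + beta = b/a = 7/2 and alpha*beta = -c/a = -2 give alpha = 4, beta = -1/2:
print(expand(2 * (x + 4) * (x - Rational(1, 2))))  # 2*x**2 + 7*x - 4
```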
Evaluate the integral$ \int_{-\infty}^{\infty}\frac{b\tan^{-1}\big(\frac{\sqrt{x^2+a^2}}{b}\big)}{(x^2+b^2)(\sqrt{x^2+a^2})}\,dx$.
In the following we shall assume that $a$, $b$, and $c$ are real valued and that $a>b>0$. Let $F(c)$ be represented by the integral $$F(c)=b\int_{-\infty}^\infty \frac{\arctan\left(\frac{\sqrt{x^2+a^2}}{c}\right)}{(x^2+b^2)\sqrt{x^2+a^2}}\,dx\tag 1$$ Differentiating $(1)$ reveals $$\begin{align} F'(c)&=-b\int_{-\infty}^\infty \frac{1}{(x^2+b^2)(x^2+a^2+c^2)}\,dx\\ &=-\frac{\pi}{c^2+a^2+b\sqrt{c^2+a^2}}\tag2 \end{align}$$ Integrating $(2)$ and using $\lim_{c\to \infty}F(c)=0$, we find that $$\begin{align} F(c)&=\pi\,\left(\frac{\arctan\left(\frac{bc}{\sqrt{a^2-b^2}\sqrt{a^2+c^2}}\right)-\arctan\left(\frac{b}{\sqrt{a^2-b^2}}\right)+\pi/2-\arctan\left(\frac{c}{\sqrt{a^2-b^2}}\right)}{\sqrt{a^2-b^2}}\right) \end{align}$$ Setting $c=b$ yields the coveted result $$\int_{-\infty}^\infty \frac{b\arctan\left(\frac{\sqrt{x^2+a^2}}{b}\right)}{(x^2+b^2)\sqrt{x^2+a^2}}\,dx=\pi\,\left(\frac{\arctan\left(\frac{b^2}{\sqrt{a^2-b^2}\sqrt{a^2+b^2}}\right)+\pi/2-2\arctan\left(\frac{b}{\sqrt{a^2-b^2}}\right)}{\sqrt{a^2-b^2}}\right)$$
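A numerical spot-check of the final identity with arbitrary sample values $a=3$, $b=2$ (so $a>b>0$), using mpmath:

```python
from mpmath import mp, mpf, atan, sqrt, pi, quad, inf

mp.dps = 30
a, b = mpf(3), mpf(2)   # arbitrary sample values with a > b > 0

integrand = lambda x: b * atan(sqrt(x ** 2 + a ** 2) / b) / ((x ** 2 + b ** 2) * sqrt(x ** 2 + a ** 2))
lhs = quad(integrand, [-inf, inf])

r = sqrt(a ** 2 - b ** 2)
rhs = pi * (atan(b ** 2 / (r * sqrt(a ** 2 + b ** 2))) + pi / 2 - 2 * atan(b / r)) / r
print(lhs, rhs)         # the two values agree to many digits
```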
Localness of the UFD Property
The answer is no in general. Let $C$ be a smooth projective curve over some field $K$, of positive genus. Let $U$ be a non-empty affine open subset of $C$ and let $A=\mathcal O_C(U)$. Then $A$ is a Dedekind domain. In particular, the localization of $A$ at any prime ideal is a PID, hence UFD. Let $V\subseteq U$ be an open subset (e.g. defined as $D(f)$, with $\mathcal O_C(V)=A_f$) and let us see whether $\mathcal O_C(V)$ is UFD, or equivalently, PID, or again $\mathrm{Pic}(V)=\{ 1\}$ (the Picard group of $V$ is the same as the ideal class group of $\mathcal O_C(V)$). Let $p_1,\dots, p_n$ be the points in the complement of $V$ in $C$. Any divisor of degree $0$ on $C$ gives rise, by restriction, to a divisor on $V$. Moreover, this restriction is compatible with linear equivalence. So we have a canonical map $$ \mathrm{Pic}^0(C)\to \mathrm{Pic}(V).$$

Edit (thanks to Georges' comments): The elements of the kernel of this map are all linear combinations of the classes of $p_1, \dots, p_n$: if a divisor $D$ on $C$ is trivial on $V$, then $D\sim \sum_i a_ip_i$ with $a_i\in \mathbb Z$. Therefore we obtain an exact sequence $$ 0\to H \to \mathrm{Pic}^0(C)\to \mathrm{Pic}(V)$$ with $H$ a finitely generated abelian group. End of edit.

If $\mathcal O_C(V)$ has trivial Picard group, then $\mathrm{Pic}^0(C)$ is a finitely generated abelian group. But this is not always true. In fact, if $J$ is the Jacobian of $C$ (and suppose $C$ has a rational point in $K$), then $\mathrm{Pic}^0(C)=J(K)$. If $K$ is algebraically closed and uncountable, then $J(K)$ is uncountable because $J$ is an algebraic variety of positive dimension (equal to the genus of $C$). Concrete example: let $C$ be an elliptic curve over $\mathbb C$ and take $U=C\setminus \{ 0\}$. Then for every $f\in A:=\mathcal O_C(U)$, the localization $A_f$ is not a PID.
Incorrect Definitions of NP
If we don't include the second statement, then there is no restriction on what we do when $x$ is not in $L$, so consider the program: A(x,y): return 1. This shows that any language $L$ would be in our new language class. A similar construction works for 2. See if you can figure it out now!