Integral of Legendre polynomials with tangent
Since $\tan(x/2)=\frac{\sin(x)}{1+\cos(x)}$, by letting $\cos(\gamma)=T\in[-1,1]$ your identity can be written in the simplified form $$ \int_{T}^{1}\frac{P_n(x)+P_{n-1}(x)}{1+x}\,dx = \frac{1}{n}(P_{n-1}(T)-P_n(T)). $$ The identity clearly holds at $T=1$. By differentiating both sides with respect to $T$ we recover Bonnet's identity, and we are done.
Can someone help me solve this quadratic word problem?
In Mathematica, create a function that has roots at the specified points, in this case $x=-1$ and $x=3$, then make the leading sign negative so the parabola opens downward. Here is an example: Plot[-(x + 1) (x - 3), {x, -5, 6}]
Let $u=u(x,y)$ be a differentiable function such that $u(x,x^{2}) = 1$ and $u_{x} (x,x^{2}) = x$. Find $u_{y} (x,x^{2})$
We compute using the chain rule: $$ 0 = \frac{d}{dx} u(x,x^2) = u_x(x,x^2) + u_y(x,x^2) \cdot 2x = x + 2x \cdot u_y(x,x^2) = x (1+ 2u_y(x,x^2)).$$ Hence, for $x\neq 0$, $$ u_y(x,x^2)=-\frac{1}{2}.$$
basis of sum of 2 vector spaces
To show that $V+W=\mathbb{R}^3$ you need to show that the span of the four basis vectors you've found is all of $\mathbb{R}^3$. One way to do this is, as you mention, to consider a matrix whose columns are these four vectors, and apply the Gauss-Jordan elimination method to this matrix. If the resulting matrix (after GJE) has three pivots, this means that your four vectors span a 3-dimensional subspace of $\mathbb{R}^3$, which must be $\mathbb{R}^3$. Note that even though you have four vectors, you can't have four pivots at the end, since the matrix has just three rows. Next, recall that a vector space sum can only be direct if $V\cap W=\{0\}$. Once you've shown that $V+W=\mathbb{R}^3$, you can use the following dimension equation: \begin{equation} \dim(V+W) = \dim(V) + \dim(W) - \dim(V\cap W) \end{equation} to determine whether or not $V\cap W=\{0\}$.
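If you want to let the computer do the elimination, here is a minimal numpy sketch (the basis vectors below are placeholders for the four you found; the rank equals the number of pivots after Gauss-Jordan elimination):

```python
import numpy as np

# Placeholder basis vectors for V and W (replace with the ones you found).
V_basis = [(1, 0, 1), (0, 1, 0)]
W_basis = [(1, 1, 0), (0, 0, 1)]

# Columns of M are the four basis vectors; rank = number of pivots after GJE.
M = np.array(V_basis + W_basis, dtype=float).T
rank_sum = np.linalg.matrix_rank(M)                           # dim(V + W)
dim_V = np.linalg.matrix_rank(np.array(V_basis, dtype=float))
dim_W = np.linalg.matrix_rank(np.array(W_basis, dtype=float))

print("dim(V + W) =", rank_sum)                  # here: 3, so V + W = R^3
print("dim(V ∩ W) =", dim_V + dim_W - rank_sum)  # here: 1, so the sum is not direct
```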
In how many ways can we select $x$ distinct candies from a collection of $n$ candies of distinct types?
One way to solve this is to use generating functions. Use $z$ to count candies, then the candies from jar $i$ are represented by: $\begin{align} 1 + z + \dotsb + z^{p_i} = \frac{1 - z^{p_i + 1}}{1 - z} \end{align}$ The full collection of candies is: $\begin{align} \prod_{1 \le i \le k} \frac{1 - z^{p_i + 1}}{1 - z} \end{align}$ and you want the number of ways to make up $x$ candies, the coefficient of $z^x$: $\begin{align} [z^x] \prod_{1 \le i \le k} \frac{1 - z^{p_i + 1}}{1 - z} \end{align}$ Sorry, but unless some other conditions are imposed (e.g. $x \le p_i$ for all $i$, or even all $p_i$ the same), there is no simple expression for what you are asking for.
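For concrete jar sizes one can extract the coefficient of $z^x$ by direct polynomial multiplication; a minimal sketch (the helper name and the example sizes are made up for illustration):

```python
def count_selections(p, x):
    """Coefficient of z^x in prod_i (1 + z + ... + z^{p_i})."""
    coeffs = [1]  # the constant polynomial 1
    for pi in p:
        new = [0] * (len(coeffs) + pi)
        for i, c in enumerate(coeffs):
            for j in range(pi + 1):   # multiply by 1 + z + ... + z^{pi}
                new[i + j] += c
        coeffs = new
    return coeffs[x] if x < len(coeffs) else 0

# Example: jars with 2, 3 and 1 candies; choose 3 candies in total.
print(count_selections([2, 3, 1], 3))  # 6
```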
To find dimension of nullspace of A
Your matrix should look like this:$$A= \begin{bmatrix} 1 & 1 & 0 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 0\\ 0 & 0 & 1 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 0 & 1 & 1\\ \end{bmatrix}$$ Given $a_{ij}=1$ for $rk<i,j\le rk+k$. For each $r$, we have $k$ admissible values for both $i,j$, i.e. $\{rk+1,rk+2,...,rk+k\}$. These values represent a square sub-matrix of order $k$ between $a_{rk+1,rk+1}$ and $a_{rk+k,rk+k}$. Each value of $r$ gives you a distinct submatrix, no two of which share a row or column. The rank of each submatrix is $1$, and since we have $m$ such submatrices, the rank of $A$ is $m$. Thus,$$\dim\ker A=mk-m$$
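A quick numerical sanity check of the rank and nullity claims (a sketch with $m=3$ blocks of size $k=2$, matching the matrix displayed above):

```python
import numpy as np

m, k = 3, 2
# Block-diagonal matrix with m all-ones blocks of size k x k.
A = np.kron(np.eye(m), np.ones((k, k)))

rank = np.linalg.matrix_rank(A)
print("rank =", rank)             # m = 3
print("dim ker =", m * k - rank)  # mk - m = 3
```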
How to solve $2e^3\cos (6z)-e^6=1$, where $z\in\mathbb{C}$.
HINT: for every $w\in\Bbb C$ $$ \cos(w)=\frac{e^{iw}+e^{-iw}}{2} $$
Seeking a specific book with graphs of interesting functions, but can't remember the name
I am not 100% sure but I think I found the book. I still can't find the video where Matt Parker suggested it. Curves for the Mathematically Curious: An Anthology of the Unpredictable, Historical, Beautiful, and Romantic ISBN-10: 0691180059 ISBN-13: 978-0691180052 https://www.amazon.com/Curves-Mathematically-Curious-Unpredictable-Historical/dp/0691180059/
Restriction of the projection from compact manifold onto hyperplane is a smooth embedding
A series of hints, which should lead you to the solution. 1: Try to show that such a hyperplane exists by showing that the set of all such hyperplanes has full measure. 2: Note that a hyperplane corresponds to a line and vice versa; we have found a bijection by taking the orthogonal complement! 3: The projection is injective iff the corresponding line, based at any point, hits the embedded manifold at most once. 4: Lines are described by the space $\mathbb{RP}^n$. Is this a smooth manifold? If yes, of which dimension? 5: Try to find all elements of $\mathbb{RP}^n$ which correspond to a non-injective map; try to realize them as the image of a smooth map. 6: Try to calculate dimensions to use Sard's theorem to show that the image of the above map is a measure-zero set. Finally, note that you might want to use the hints in a different order.
How to prove that it is possible to build a triangle from three segments?
Reflect $A$ across the line $CK$ and denote the reflected point by $X$. Then $X$ is also the reflection of $B$ across $CL$; you can check this via $|CX|=|CA|=|CB|$ and the angles at the point $C$. Now $KLX$ is the desired triangle ;-)
Connected sum of two cylinders
```
-->--       -->--
|   |   +   |   |
-->--       -->--
```
If these are 2 cylinders with 2 holes each, then after taking the connected sum the resulting surface has only 3 holes.
Gaussian integers introduction
If $sv = 1$, then $N(sv) = N(s)N(v) = 1$. But the norm is a non-negative integer so this forces $N(s) = N(v) = 1$. If $s = a+b\sqrt{-2}$ and $N(s) = 1$, then $a^2+2b^2 = 1$, and it is easy to see that this forces $(a,b) = (\pm 1, 0)$.
Finding $\mathbb{Z}[\sqrt{-3}]/(p)$ for some prime $p$.
Use $\mathbb Z[\sqrt{-3}]/(p) \cong (\mathbb Z/p\mathbb Z)[X]/(X^2+3)$. Now it should become clear how to use the knowledge of the primes $p$ for which $-3$ is a quadratic residue $\bmod p$.
Proof with conditional probabilities
A bit simpler to use $P(A|B)=P(A\cap B)/P(B)$ and note that the stated inequality $P(A)<P(A|B)$ becomes $$ P(A)P(B) < P(A\cap B)$$ which is symmetric in $A$ and $B$. So it is equivalent to $P(B)<P(A\cap B)/P(A)=P(B|A)$
What is the easiest way to describe the Leech lattice explicitly?
Well, http://www.math.rwth-aachen.de/~Gabriele.Nebe/LATTICES/Leech.html#GRAM gives the Gram matrix. That is pretty explicit. A number of descriptions are in papers of Borcherds, available in SPLAG and http://math.berkeley.edu/~reb/papers/index.html#thesis Meanwhile, the most active person with whom I have had contact is Daniel Allcock, see http://www.ma.utexas.edu/users/allcock/ Not widely known: the Leech lattice has a left-handed and a right-handed version. Any lattice with roots possesses an automorph with negative determinant, just take the reflection in that root. Or reflexion. Now, in dimension 2, people distinguish "opposite" classes of quadratic forms because of the importance as inverses in the class group. In odd dimension it does not matter. For even dimension four and higher, pretty much everyone uses the lax definition of equivalence class, although I know some people who occasionally fiddle with the strict version in dimension 4. So, the strict Niemeier class number is 25. Let's see. Among other surprising features, the Leech lattice has covering radius $\sqrt 2,$ which is very small. Any even lattice (ignore unimodular) with covering radius strictly below $\sqrt 2$ has (lax) class number one, and so is of dimension no larger than ten. As it happens, all such also have strict class number one, which mostly comes down to either odd dimension or possessing roots. Well, enough for the moment. Lots more where that came from. EEDDITTTT: perhaps what you want is pages 129-130 in Lattices and Codes, second edition, by Wolfgang Ebeling: take the hyperbolic lattice $II_{25,1}$ and vector $$ w = (0,1,2,3,\ldots,23,24|70) $$ This $w$ is isotropic, as the sum of the squares of the first bunch of numbers is 4900. Then the Leech lattice is isomorphic to $$ w^\perp/ \langle w \rangle. $$ A third edition of Ebeling's book is coming out. He did not have time to put in my stuff about covering radius and class number. BOOKS: THOMPSON EBELING SPLAG
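A one-line check of the isotropy claim: in $II_{25,1}$ the norm of $w$ is $\sum_{k=0}^{24}k^2 - 70^2$.

```python
# Norm of w = (0,1,2,...,24 | 70) in II_{25,1}: sum of squares minus 70^2.
print(sum(k * k for k in range(25)), 70 ** 2)  # 4900 4900 -> w is isotropic
```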
function holomorphic on an annulus whose real part grows slower than $|z|$ at infinity has a limit at infinity
Let $f(z)=\sum_{n=-\infty}^\infty a_nz^n$ be its Laurent expansion in $1<|z|<\infty.$ Then \begin{align} &u(Re^{i\theta })=\operatorname{Re}f(Re^{i\theta })\\ &=\operatorname{Re}a_0+\sum_{n=1}^\infty \left[\left(\operatorname{Re}a_nR^n+\operatorname{Re}a_{-n}R^{-n}\right)\cos n\theta -\left(\operatorname{Im}a_nR^n-\operatorname{Im}a_{-n}R^{-n}\right)\sin n\theta \right] \end{align} Therefore for $n\ge 1$, we have \begin{align} &\operatorname{Re}a_nR^n+\operatorname{Re}a_{-n}R^{-n} =\frac{1}{\pi}\int_0^{2\pi}u(Re^{i\theta })\cos n\theta \,d\theta, \\ &\operatorname{Im}a_nR^n-\operatorname{Im}a_{-n}R^{-n}=-\frac{1}{\pi}\int_0^{2\pi}u(Re^{i\theta })\sin n\theta \, d\theta. \end{align} Dividing by $R^n$ we have \begin{align} &\operatorname{Re}a_n+\operatorname{Re}a_{-n}R^{-2n} =\frac{1}{\pi}\int_0^{2\pi}\frac{u(Re^{i\theta })}{R^n}\cos n\theta \,d\theta, \tag{1} \\ &\operatorname{Im}a_n-\operatorname{Im}a_{-n}R^{-2n}=-\frac{1}{\pi}\int_0^{2\pi}\frac{u(Re^{i\theta })}{R^n}\sin n\theta \, d\theta.\tag{2} \end{align} By the condition $z^{-1} \operatorname{Re}f(z)\to 0$ as $z\to \infty$, we have $$\frac{1}{R}u(Re^{i\theta })\to 0 \:\;(R\to \infty). $$ Letting $R\to \infty$ in $(1),(2)$ we have $$ \operatorname{Re}a_n =0,\quad \operatorname{Im}a_n=0, $$ that is, $a_n=0$ for $n\ge 1.$ Thus $$ f(z)=a_0+\sum_{n=1}^\infty \frac{a_{-n}}{z^n}.$$ We see that $\lim_{z\to \infty}f(z)$ exists.
Is it possible to prove the existence of Gibbs measures using the Kolmogorov extension theorem?
You can, at least for finite state spaces $\mathbb{E}^i$. The link to Kolmogorov's extension theorem is used explicitly in Theorem 5 of these notes: http://www.stat.yale.edu/~pollard/Courses/606.spring06/handouts/Gibbs1.pdf The above argument is really using "local convergence". In the case of finite states, local convergence is the same as weak convergence of measures, and the weak topology is known to be compact in this case. To see how local convergence can be used for infinite state spaces, see Section 4 of "Gibbs Measures and Phase Transitions" by H.O. Georgii.
True/false: 1) $A$ is one-one if and only if $\dim V \le \dim W$. 2) $A$ is onto if and only if $\dim V \ge \dim W$
The answer to both questions is false, because: 1) One can always construct a map $A$ such that $\dim V \leq \dim W$ but $A$ is not one-one. For example, if $\dim V=\dim W=2$, take $A$ to be the zero map, i.e. $A(x)=0$ for all $x\in V$. So the correct statement would be: if $A$ is one-one then $\dim V \leq \dim W$. 2) A similar reason applies to question 2 as well. Please try to construct examples.
If $\int_0^{2\pi} |f^*(e^{i\theta})|^{1/3}d\theta <\infty$ then $f\in H^{1/3}(D)$
This is wrong at least twice. First, $H^{1/3}$ is a space of holomorphic functions, and the functions in $L^{1/3}(D)$ are not holomorphic. So $L^{1/3}\subset H^{1/3}$ is simply not true, hence any "proof" must be wrong. (A more sensible version of the problem would be to show that $L^{1/3}\cap H(D)\subset H^{1/3}$; that's also false, not quite so obviously.) You ask whether the second-last inequality is right. In fact the very first inequality is wrong or worse. Starting with $f\in L^{1/3}(D)$ you say $\int_0^{2\pi}|f(e^{it})|^{1/3}\,dt<\infty$. That's not so; in fact that doesn't make any sense, because there's no such thing as $f(e^{it})$ in the first place. (If the "second-last" inequality is $$\lim_{r\to 1^-} \int_0^{2\pi} |f(re^{i\theta})|^{1/3}d\theta \leq \int_0^{2\pi} |f(e^{i\theta})|^{1/3}d\theta,$$ that's also wrong - no reason it should hold, unless for example $f$ is holomorphic...) Edit: Turns out that what the OP really wanted to know was why a proof in a certain post works. Note first that in that other post the author uses the same letter, $f$, for the holomorphic function $f$ and also for its non-tangential maximal function. Very bad notation, leading to confusion over whether the author is or is not assuming that $f\in L^p$. So, if $f$ is a function defined in $D$ we let $f^*(e^{it})$ denote the non-tangential maximal function of $f$. Trivial Lemma. Suppose $f\in H(D)$ and $p>0$. Then $f\in H^p(D)$ if and only if $f^*\in L^p(\Bbb T)$. Note that's $L^p(\Bbb T)$, not $L^p(D)$. Proof. Well, one direction is not actually trivial: if $f\in H^p$ then it's a standard result, towards the start of any treatment of Hardy spaces, that $f^*\in L^p(\Bbb T)$. Suppose now that $f^*\in L^p(\Bbb T)$. If $0<r<1$ the definition of $f^*$ shows that $$|f(re^{it})|\le f^*(e^{it}).$$ Hence $$\int_0^{2\pi}|f(re^{it})|^p\,dt\le \int_0^{2\pi} f^*(e^{it})^p\,dt;$$ so $\sup_r\int_0^{2\pi}|f(re^{it})|^p\,dt<\infty$.
Problem ranking features on different numerical ranges
There is no single accepted way; what you use depends on what you deem useful. Let $x$ and $y$ be the normalized (i.e. in $[0,1]$) intelligence and attractiveness. Then, one simple possibility is to take a linear combination of the two: $$ x+\alpha y $$ where $\alpha>0$. $\alpha=1$ implies that both attributes are equally important, while $\alpha\lessgtr1$ weights in favour of one or the other.
The closure of $X = [0,1) \cup (1,2] \cup \{3\}$ and the closure of its complement
No. $\overline X = [0,2] \cup \{3\}$ $X^c = (-\infty,0) \cup \{1\} \cup (2,3) \cup (3,\infty)$ $\overline{X^c} = (-\infty,0] \cup \{1\} \cup [2,\infty)$ A good intuition for whether some element $s$ is in the closure $\overline S$ of a subset $S$ of a metric space is whether you can find a sequence $\{s_n\}_{n \in \Bbb N}$ of elements of $S$ such that $s_n \to s$. If such a sequence $\{s_n\}_{n \in \Bbb N} \subseteq S$ exists such that $s_n \to s$, then $s \in \overline S$. (This follows readily from the definition of $\overline S$ as $S \cup S'$, where $S'$ denotes the set of limit points. Apply the definition of sequence convergence in a metric space. Note in particular that this means $S \subseteq \overline S$: just take constant sequences in $S$ to justify this or use the original definition.) So to see why your answers are wrong: Can you find a sequence in $X$ converging to anything in $(2,3)$? Say, to $2.5$? Knowing what $X^c$ is, it should not be difficult to see why you have much more than just $\{1\}$ in the closure - and, again, in particular, $X^c \subseteq \overline{X^c}$. But you can also trivially get a sequence converging to $3$, say the sequence $$s_n = 2 + 9 \sum_{k=1}^n \left( \frac{1}{10} \right)^k = 2.\underbrace{999\ldots 999}_{n \; \text{nines}}$$ which lets us include that too.
Let $a,b$ be integers such that $2a+3b$ is divisible by $11$. Prove that $a^2-5b^2$ is also divisible by $11$.
We have for some integer $c$ $$2a+3b=11c,$$ and this is equivalent to $$a=\frac{11c-3b}{2}.$$ Now, squaring both sides we get $$a^2=\frac{11^2c^2-66bc+9b^2}{4} $$ and subtracting $5b^2$: $$a^2-5b^2=\frac{11^2c^2-66bc-11b^2}{4} $$ $$ a^2-5b^2=11\left(\frac{11c^2-6bc-b^2}{4}\right).\tag{$\star$}$$ Note that the quantity in brackets in the RHS of $(\star)$ is an integer because $11$ and $4$ are coprime, so the numerator of the fraction must be divisible by $4$ in order to make the RHS itself an integer, just as the LHS is.
Which algebraic structure is like a magma augmented with an operation which is an anti-function?
You're looking for the Jónsson-Tarski algebras. Given a set $X$ together with a bijection $m:X\times X\to X$, we have the inverse map $X\to X\times X$, whose left and right components can be given by $\ell:X\to X$ and $r:X\to X$. In other words it satisfies the identities: $m(\ell(x),r(x))=x$, $\ell(m(x,y))=x$, and $r(m(x,y))=y$. Some interesting facts about Jónsson-Tarski algebras include: the only finite Jónsson-Tarski algebra is trivial; the free Jónsson-Tarski algebra on $n$ generators is isomorphic to the free Jónsson-Tarski algebra on $m$ generators for any $m,n\geq 1$; and the automorphism group of the finitely generated free Jónsson-Tarski algebra is Thompson's group $V$.
Is there a simpler form for $\Re \frac{\Gamma(1/2-i)}{\Gamma(1-i)}$?
No, one only has a simpler formula for the absolute value, since $$\left|\frac{\Gamma(1/2-i)}{\Gamma(1-i)}\right|^2=\frac{\Gamma(1/2-i)}{\Gamma(1-i)}\frac{\Gamma(1/2+i)}{\Gamma(1+i)}=\frac{\sin\pi i }{i\sin\pi(1/2-i)}=\tanh\pi.$$
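A quick numerical confirmation with mpmath (a sketch):

```python
from mpmath import mp, gamma, tanh, pi

mp.dps = 30  # working precision
ratio = gamma(0.5 - 1j) / gamma(1 - 1j)
print(abs(ratio) ** 2)  # ~0.996272...
print(tanh(pi))         # matches tanh(pi)
```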
Equation in complex field $z^2+z\overline{z}-2z =0 $
One obvious solution is $z=0$. The others (for $z=x+iy$) are the solutions of $$ z+\bar z-2=0 \iff 2x-2=0 \iff x=1 $$
$\limsup(a\cdot a_n)=a\cdot \limsup(a_n)$
Assume $a>0$. \begin{equation} \limsup x_n = \lim_n (\sup \{x_m : m\ge n\}) \end{equation} Thus, for $a>0$ we have \begin{eqnarray} \limsup( a \cdot a_n) &=& \lim_n (\sup \{a \cdot a_m : m\ge n\}) \\ &=& \lim_n [a \cdot(\sup \{ a_m : m\ge n\})]\\ &=& a \cdot \lim_n (\sup \{a_m : m\ge n\})\\ &=& a \cdot \limsup a_n. \end{eqnarray} Also, \begin{eqnarray} \limsup (a_n + b_n ) &=& \lim_n (\sup \{a_m + b_m : m\ge n\}) \\ &\le& \lim_n [(\sup \{a_m : m\ge n\}) + (\sup \{b_m : m\ge n\})]\\ &=& \lim_n (\sup \{a_m : m\ge n\}) + \lim_n (\sup \{b_m : m\ge n\}) \\ &=& \limsup a_n + \limsup b_n . \end{eqnarray}
find the area of the region lying inside the circle $r=6$ and inside the cardioid $r=4-3\sin \theta$.
By symmetry: \begin{align} \text{Area }&=2\times\frac{1}{2}\int_{-\pi/2}^{-\sin^{-1}(2/3)}6^2\;d\theta+2\times\frac{1}{2}\int_{-\sin^{-1}(2/3)}^{\pi/2}\left(4-3\sin\theta\right)^2\;d\theta \end{align}
Confused about using normal approximation to binomial
Let $X \sim \mathcal{N}(np,\,np(1-p))$ be the normal approximation to the binomial. So $X \sim \mathcal{N}(30,\,21)$. $\frac{X - 30}{\sqrt{21}} = Z \sim \mathcal{N}(0,\,1)$ $$P(X \geq 40)=1-P(X\leq39)= 1-P\left(Z \leq\frac{39 + 0.5 - 30}{\sqrt{21}}\right)$$ Now you can just plug it into your calculator or use a lookup table.
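For comparison with the exact binomial tail, here is a sketch assuming the underlying variable is $\mathrm{Bin}(n,p)$ with $n=100$, $p=0.3$ (so that $np=30$ and $np(1-p)=21$ as above):

```python
from math import sqrt
from scipy.stats import binom, norm

n, p = 100, 0.3                      # assumed: np = 30, np(1-p) = 21
exact = 1 - binom.cdf(39, n, p)      # P(X >= 40), exact
approx = 1 - norm.cdf((39 + 0.5 - 30) / sqrt(21))  # with continuity correction

print(exact, approx)  # both about 0.02
```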
A normed space is Banach iff its unit sphere is complete
Hint Consider a Cauchy sequence $\{x_n\}$. See what happens with $$\left\{\frac1{\|x_n\|}x_n\right\}$$ (Note that you also have to consider sequences with zeros).
Why is there no function with a nonempty domain and an empty range?
The standard set-theoretic way to define functions is that: The cartesian product of the sets $A$ and $B$, written as $A \times B$, is the set of all ordered pairs where the first element of the pair is in $A$ and the second in $B$: $$A \times B = \{(a,b): a \in A, b \in B\}.$$ (Representing these ordered pairs as sets, and showing that the cartesian product of two sets is indeed a set under the axioms of set theory, are details that we may safely skip here.) A relation $R$ between the sets $A$ and $B$ is any subset of their cartesian product: $R \subset A \times B$. (Often, by convention, we write $a \mathop R b$ as a shorthand for $(a,b) \in R$; this is particularly common when the symbol chosen for the relation is not a letter like $R$, but something abstract like $\sim$ or $\odot$.) A function $f$ from $A$ to $B$ is a relation between $A$ and $B$ (i.e. a subset of their cartesian product) satisfying the following two extra conditions: existence of images: for all $a \in A$, there is a $b \in B$ such that $(a,b) \in f$. uniqueness of images: if $(a,b) \in f$ and $(a,b') \in f$, then $b = b'$. If the relation $f$ is a function, then, for each $a \in A$, there exists exactly one $b \in B$ satisfying $(a,b) \in f$. We call this $b$ the image of $a$ under $f$, written $f(a)$, so that: $$f(a) = b \iff (a,b) \in f.$$ So, what about when $B = \varnothing$? In that case, for any $A$, the cartesian product $A \times B = \varnothing$, since there exist no pairs $(a,b)$ such that $a \in A$ and $b \in B$. (The same, of course, is also true whenever $A = \varnothing$.) Since the only subset of $\varnothing$ is $\varnothing$, the only relation between $A$ and $B$ is the empty relation $\varnothing$. The question, then, is: is the empty relation a function from $A$ to $B$? If $A \ne \varnothing$, no, it is not, because there exists at least one $a \in A$, but there can be no $b$ such that $(a,b) \in \varnothing$. If $A = \varnothing$, yes, it is. In this case, both the existence and uniqueness conditions are vacuously true, since there is no $a \in A$ for which they could fail. Thus, there is a (single) function from the empty set to any set (including the empty set itself), but there is no function from a non-empty set to the empty set.
If $\frac{a+b}2$ is rational, can we say that $a,b$ are rational?
Is there any better way? Since all you have to do is give a counterexample, there is not much need to try to improve your "disproof"; however, a simpler and more all-encompassing consideration would lend itself to multiple counterexamples. Let $\mathbb{I}$ denote the set of irrational numbers. Suppose that $a,b\in\mathbb{I}$, where $b=-a$. Then $$ \frac{a+b}{2}=\frac{a+(-a)}{2}=\frac{0}{2}\in\mathbb{Q},\;a,b\not\in\mathbb{Q}.\tag{1} $$ How is this an improvement? Well, you don't have to prove that $\sqrt{2}$ is irrational. You also don't have to prove that the sum of a rational number and an irrational number is irrational (i.e., $1+\sqrt{2}$ is irrational, which you seem to take for granted). Basically, in $(1)$, there is minimal legwork done for a consideration that provides groundwork for countless counterexamples.
The probability of the intersection of the complements of three events.
\begin{align} P(A^{c}\cap B^{c}\cap C^{c}) &= P((A\cup B\cup C)^{c}) = 1-P(A\cup B\cup C)\\ &= 1-\big(P(A)+P(B\cup C)-P(A\cap(B\cup C))\big)\\ &= 1-P(A)-P(B\cup C)+P\big((A\cap B)\cup (A\cap C)\big)\\ &= 1-P(A)-P(B)-P(C)+P(B\cap C)+P(A\cap C)+P(A\cap B)-P(A\cap B\cap C)\\ &= 1-0.1-0.3-0.3+0+0+0-0 = 0.3. \end{align}
A property of Hermitian matrix and positive semi-definite matrix?
Hint: Let $B=QAQ$; then $Q^{-1}BQ^{-1}=A.$
Differences between finite and infinite CW complex categories
A CW complex is finite if and only if it is compact. There are innumerable things that hold for compact spaces but which are false (or at least more subtle) for noncompact spaces. For example, noncompact manifolds are far more complicated than compact manifolds.
Differential equation - variable substitution
Hint: $$(p')^3 \sin(p) + \cos(xp)= \tan(p^4)$$ This gives us: $$p' = \left(\dfrac{\tan(p^4) - \cos(xp)}{\sin(p)}\right)^{1/3}$$ It is now a first-order, non-linear equation. Can you see how to proceed now?
Show that $\int_0^\infty e^{-(x-u/x)^2}(1-\frac{u}{x^2})dx$ converges uniformly on $u\in [\delta,L]$
Let $0 < \delta < L$. Given $\varepsilon \in \Bbb{R}_{>0}$, we want \begin{split} \left \vert\int_0^{\frac{1}{n}} e^{-(x-u/x)^2}\left(1-\frac{u}{x^2}\right)dx+\int_n^\infty e^{-(x-u/x)^2}\left(1-\frac{u}{x^2}\right)dx \right\vert < \varepsilon \end{split} for all big enough $n$ (i.e. all $n \geq N$ for some $N \in \Bbb{N}$) and all $u \in [\delta,L]$. First, note that for $1/n^2< \delta$, we have \begin{split} \left \vert\int_0^{\frac{1}{n}} e^{-(x-u/x)^2}\left(1-\frac{u}{x^2}\right)dx \right \vert&=\left \vert\int_0^{\frac{1}{n}} e^{-x^2(1-u/x^2)^2}\left(1-\frac{u}{x^2}\right)dx \right \vert\\ &=\left \vert\int_0^{\frac{1}{n}} \frac{1}{\frac{1}{1-u/x^2}+x\sum_{j=1}^{\infty}\frac{(x-u/x)^{2j-1}}{j!}}dx \right \vert\\ &=\int_0^{\frac{1}{n}} \frac{1}{\underbrace{\frac{1}{u/x^2-1}}_{\geq 0}+\underbrace{x\sum_{j=1}^{\infty}\frac{(u/x-x)^{2j-1}}{j!}}_{\geq \delta-x^2}}dx\\ &\leq \int_0^{\frac{1}{n}} \frac{1}{\delta-x^2}dx\\ &< \varepsilon/2 \end{split} with the last inequality holding for all big enough $n$. Similarly, we have for $n^2>L$ \begin{split} \left \vert\int_n^\infty e^{-(x-u/x)^2}\left(1-\frac{u}{x^2}\right)dx \right \vert&=\int_n^\infty \frac{1}{\underbrace{\frac{1}{1-u/x^2}}_{\geq 0}+\underbrace{x\sum_{j=1}^{\infty}\frac{(x-u/x)^{2j-1}}{j!}}_{\geq x^2-L}}dx\\ &\leq \int_n^{\infty} \frac{1}{x^2-L}dx\\ &< \varepsilon/2 \end{split} for all big enough $n$, and the result follows.
Expected value of number of steps until range reduced to a given fraction
Let $E(x)$ denote the expected number of steps required to obtain a number less than $1/x$. If $X_1 < 1/x$, then the process is complete; this occurs with probability $1/x$. Otherwise, $X_1 > 1/x$, and suppose $X_1 = y$. We are now choosing $X_2, X_3, \cdots$ from the interval $[0,y]$ and want to know the number of steps required to obtain a result less than $1/x$; since this is simply a "scaled version" of the original process, it will take $E(xy)$ additional steps on average. Thus, $E(x)$ satisfies the integral equation $$E(x) = \frac{1}{x} + \int_{1/x}^{1} (1+ E(xy))\, dy =1+ \frac{1}{x} \int_{1}^{x} E(y)\, dy$$ Differentiating this gives $E(x) + xE'(x) = 1+ E(x)$, so $$E'(x) = 1/x \implies E(x) = \ln(x) + C$$ Using the initial value $E(1) = 1$, we conclude $E(x) = \ln(x) + 1$.
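A Monte Carlo check of $E(x)=\ln(x)+1$ (a sketch; the value of $x$ and the number of trials are arbitrary choices):

```python
import random
from math import log

def steps_until_below(threshold):
    """Draw X1 ~ U(0,1), then X2 ~ U(0,X1), ... until a draw is < threshold."""
    hi, steps = 1.0, 0
    while True:
        steps += 1
        hi = random.uniform(0, hi)
        if hi < threshold:
            return steps

x = 5.0  # stop once a draw falls below 1/x
trials = 100_000
avg = sum(steps_until_below(1 / x) for _ in range(trials)) / trials
print(avg, log(x) + 1)  # both about 2.609
```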
What is correct order of foundational concepts of mathematics?
Are you familiar with formal theories and models? It sounds like you are mixing these two different things. In fact, it sounds like you might be asking about mathematicians rather than about mathematical objects. The word "concepts" is a clue to this. Also, if you are thinking "first this happens, then that happens" (e.g., "first, you get some axioms, then you apply some rules"), you are probably thinking about mathematicians rather than mathematical objects. Anyway, I don't have an answer about mathematicians. But as far as your question applies to mathematical objects, the objects in a formal language are symbols, strings, terms, formulas, proofs, etc. Languages don't have points, lines, numbers, sets, membership relations, additions, etc. Those are parts of models. So it's not clear what it means to say that, say, a number is below an axiom that talks about numbers because the two things are not even in the same structure. Let's look at the language first. We usually think of symbols as atomic. Then a formula (axioms and theorems are formulas) that can be interpreted to talk about numbers is just a bunch of symbols concatenated together. A (formal) proof is also just a bunch of symbols concatenated together. Or instead of concatenation, it's probably better to think of sequences, i.e., functions from indexing sets to the set of symbols. It's the same deal: everything is just a bunch of symbols in some order. Symbols are then below axioms in the sense that axioms are composed of symbols. But you could also think of the formulas as atomic. I've never seen this done, but you can just reverse the previous thinking. Instead of thinking of a formula as a function from indices to symbols (telling you which symbol is in each position), think of a symbol as a function from formulas to sets of indices (telling you in which positions (if any) the symbol occurs). I don't know why this way of thinking would be useful, but who knows. The point is that now axioms are below symbols in the sense that in order to identify a symbol, you have to look at the positions in which it occurs in which formulas. When symbols were below axioms, it was in the same sense: to identify an axiom, you had to look at which symbols occurred in its positions. You're looking at the same mapping in each case, just in a different direction. (The situation is only slightly different because you've singled out axioms, which aren't special with respect to this mapping. This is like singling out some type of symbol, say, binary predicate symbols. You can't identify axioms (as functions) by looking only at binary predicate symbols (as atoms). In the same way, you can't identify symbols (as functions) by looking only at axioms (as atoms). You need to consider all formulas and all symbols.) When you interpret the language in some model, some of the symbols get mapped to numbers (and relations and operations on numbers), and the formulas get mapped to truth-values. It's not clear to me how numbers or any other objects in the domain of an interpretation are related to truth-values. Truth-values only need to have very simple properties -- possibly, they just need to be distinct from each other. The numbers and truth-values are related by a function that takes numbers (and their relations and operations), plugs them into formulas, and outputs truth-values. But you can think of this function as working in the other direction also.
Also, since you are on the subject of axioms, note that there's nothing special about an axiom in that you can take any formula as an axiom. Axiomatizations are special. They are special because not every theory has axiomatizations with nice or decent properties. I want to pretend to be old and wise and advise you to give up looking for "the correct order" because I think it's all relative. A lot of math (maybe all?) is about mappings, and you can often (maybe always?) reverse them in some way. But I think there are some interesting questions related to your question. They are questions about what can be defined in terms of what, about when axioms are independent of each other, questions about properties of axiomatizations, and so on. Also, the things that you might think to call "undefined" -- numbers in arithmetic, sets in set theory, etc. -- are not undefined at all. The whole theory defines them. Set theorists don't walk around with no idea of what a set is.
Prove a sequence of maps is a sequence of random variables
Note that $$Y_n = g_n(X_n), $$ with $g_n:\mathbb R\to\mathbb R$ defined by $$g_n(x) = \frac1n\mathsf 1_{\left[0,\frac1n\right)}(x) + \mathsf 1_{\left[\frac1n,1\right]}(x).$$ It follows immediately that each $Y_n$ is measurable as the composition of measurable functions. As for the distribution of $Y_n$, recall that the probability measure on $[0,1]$ induced by the uniform distribution is Lebesgue measure. It follows that $Y_n$ is discrete with $$\mathbb P\left(Y_n=\frac1n\right) = \frac1n = 1-\mathbb P(Y_n=1). $$
Is this a theorem or a conjecture?
First few intervals: $(1/3,1/2),(2/5,3/7),(7/17,5/12),(12/29,17/41),...$ The general construction is related to the Farey sequence. Define the freshman's sum $\frac{a}{b}\oplus \frac{c}{d}=\frac{a+c}{b+d}$. This sum satisfies the mediant property: $\frac{a}{b}<\frac{c}{d}\Rightarrow \frac{a}{b}<\frac{a}{b}\oplus \frac{c}{d}<\frac{c}{d}$. Then, the intervals are constructed by the following rule: Start with $\frac{1}{3}<\frac{1}{2}$. Then add the (freshman's) sum $\frac{1}{3}\oplus \frac{1}{2}=\frac{2}{5}$, which is the next entry of the sequence between $\frac{1}{2}$ and $\frac{1}{3}$. Now the modified sequence of appearing fractions is $\frac{1}{3}<\frac{2}{5}<\frac{1}{2}$. As we know, the next entry appearing is $\frac{2}{5}\oplus\frac{1}{2}=\frac{3}{7}$ (the sum of the leftmost two has a greater denominator than this sum, so this sum should appear first). So the modified sequence in this step is $\frac{1}{3}<\frac{2}{5}<\frac{3}{7}<\frac{1}{2}$. Note that the second interval $(\frac{2}{5},\frac{3}{7})$ is constructed from this sequence in this step. In the next step, the fraction added is $\frac{2}{5}\oplus\frac{3}{7}=\frac{5}{12}$ (note that we should find a number inside the second interval $(\frac{2}{5},\frac{3}{7})$), and the modified sequence is $\frac{1}{3}<\frac{2}{5}<\frac{5}{12}<\frac{3}{7}<\frac{1}{2}$. The following step adds $\frac{2}{5}\oplus\frac{5}{12}=\frac{7}{17}$ (the sum with the lesser-denominator one) to the modified sequence, and we get the third interval $(\frac{7}{17},\frac{5}{12})$. Now I think you'll be able to develop all following steps with ease; in summary, choosing the terms in the sequence corresponds to the freshman's sum in the Farey sequence, and the intervals constructed are the centermost two terms of the modified sequence in each even step. Because the intervals are determined in each even step, I'll describe the patterns of the modified sequence in even steps from now on. As you can check, the order relation between the center two terms reverses as 2 steps go along, thus our algorithm is a period-4 calculation, i.e., starting with $\frac{1}{3}<\frac{1}{2}$, we proceed with the following 4 sums in one period: Start at the centermost two fractions, say $A,B$. $\cdots < A<B<\cdots \Rightarrow \cdots < A < A\oplus B < B < \cdots$ $ \Rightarrow \cdots < A < A\oplus B <(A\oplus B)\oplus B < B < \cdots$ $\Rightarrow \cdots < A < A\oplus B <(A\oplus B )\oplus \left\{(A\oplus B)\oplus B\right\}<(A\oplus B)\oplus B < B < \cdots$ $\Rightarrow \cdots < A < A\oplus B<(A\oplus B)\oplus [(A\oplus B )\oplus \left\{(A\oplus B)\oplus B\right\}] <(A\oplus B )\oplus \left\{(A\oplus B)\oplus B\right\}<(A\oplus B)\oplus B < B < \cdots$ Then retake the centermost two terms in the final sequence and iterate the above algorithm. Hence, the $(2n+1)$th interval is $\left( (A\oplus B)\oplus [(A\oplus B )\oplus \left\{(A\oplus B)\oplus B\right\}] ,(A\oplus B )\oplus \left\{(A\oplus B)\oplus B\right\}\right)$, where $(A,B)$ is the $(2n-1)$th interval. Let the two endpoints of the $(2n-1)$th interval be $\frac{a_n}{c_n} < \frac{b_n}{d_n}$.
This sequence $a_n,b_n,c_n,d_n$ satisfies the following recurrence formula: $a_{n+1}=3a_n+4b_n, b_{n+1}=2a_n+3b_n, c_{n+1}=3c_n+4d_n, d_{n+1}=2c_n+3d_n$ ($a_1=b_1=1,c_1=3,d_1=2$) $\therefore \begin{pmatrix} a_n&c_n \\ b_n & d_n \end{pmatrix}= \begin{pmatrix} 3&4 \\ 2& 3 \end{pmatrix}^{n-1}\begin{pmatrix} a_1 &c_1 \\ b_1 & d_1 \end{pmatrix}= \begin{pmatrix} 3&4 \\ 2& 3 \end{pmatrix}^{n-1}\begin{pmatrix} 1 &3 \\ 1 & 2 \end{pmatrix}$ $\therefore a_n=\frac{1+\sqrt{2}}{2}\xi_1^{n-1}-\frac{\sqrt{2}-1}{2}\xi_2^{n-1},b_n=\frac{2+\sqrt{2}}{4}\xi_1^{n-1}+\frac{2-\sqrt{2}}{4}\xi_2^{n-1},c_n=\frac{\xi_1^n+\xi_2^n}{2},d_n=\frac{4+3\sqrt{2}}{4}\xi_1^{n-1}+\frac{4-3\sqrt{2}}{4}\xi_2^{n-1}$ ($\xi_1=3+2\sqrt{2},\xi_2=3-2\sqrt{2}$) $\therefore \lim_{n\to \infty}\frac{a_n}{c_n}=\lim_{n\to \infty}\frac{b_n}{d_n}=\sqrt{2}-1$ P.S. The fractions in some interval $(a,b)$ indeed lie between $a$ and $b$ in a Farey sequence (at least the one whose order is the denominator of the fraction). And, if $\frac{p}{q}$ has neighbors $a,b$ in some Farey sequence, then $\frac{p}{q}=a\oplus b$. Because we find the numbers first appearing in a sequence having lexicographic order in each step, it is obvious that if $\frac{p}{q}$ is the first-appearing fraction between $a$ and $b$, then $\frac{p}{q}$ has neighbors $a,b$ in the $q$th Farey sequence. To find the properties I mentioned, this Wikipedia page would be helpful.
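A short script reproducing the matrix recurrence and showing both endpoints of the intervals approaching $\sqrt 2-1$ (a sketch):

```python
from math import sqrt

a, b, c, d = 1, 1, 3, 2  # endpoints a/c < b/d of the first interval (1/3, 1/2)
for n in range(1, 6):
    print(f"interval {2*n-1}: ({a}/{c}, {b}/{d})  ->  {a/c:.6f}, {b/d:.6f}")
    a, b = 3*a + 4*b, 2*a + 3*b  # apply the recurrence
    c, d = 3*c + 4*d, 2*c + 3*d

print("limit:", sqrt(2) - 1)  # 0.414214
```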
Corollary from Arzela-Ascoli
Note the following: (1) $C(X)$ is equipped with the metric $d_{\sup}(f,g):=\sup_{x\in X}|f(x)-g(x)|$, and so a sequence $(f_n)$ in $C(X)$ converges under the metric $d_{\sup}$ if and only if $(f_n)$ converges uniformly. (2) The Arzelà-Ascoli theorem tells us that a subset $K$ of the metric space $(C(X),d_{\sup})$ is relatively compact if and only if $K$ is equibounded and equicontinuous. (3) A relatively compact sequence in a metric space has a convergent subsequence. The desired corollary is simply a combination of all these observations. Indeed, let $(f_n)$ be a sequence in $C(X)$. Then \begin{align*} &\text{$(f_n)$ equibounded and equicontinuous} \\ &\Rightarrow \text{$(f_n)$ relatively compact in $(C(X),d_{\sup})$} \tag{by A-A} \\ &\Rightarrow \text{$(f_n)$ has a convergent subsequence in $(C(X),d_{\sup})$} \tag{by 3}\\ &\Rightarrow \text{$(f_n)$ has a uniformly convergent subsequence.} \tag{by 1} \end{align*}
How to calculate the third point if I know 2 points and all angles in the triangle?
Solve these equations: $${ \left( 10-3 \right) }^{ 2 }+{ \left( 10-4 \right) }^{ 2 }=|AB|^2\\ { \left( x-3 \right) }^{ 2 }+{ \left( y-4 \right) }^{ 2 }=|AC|^2\\ { \left( x-10 \right) }^{ 2 }+{ \left( y-10 \right) }^{ 2 }=|BC|^2=|AC|^2\\ \\ \frac { |AB| }{ \sin { (100^\circ) } } =\frac { |BC| }{ \sin { (40^\circ) } } $$ where $(x,y)$ are the coordinates of $C$.
Why would vector space addition axiom #5 be verified in this way?
I'm going to use $\oplus$ for vector addition. The zero vector is $-7$, since for all $x$ we have $$x \oplus -7 = x + (-7) + 7 = x.$$ So, to find the inverse of any $x$, we need a $y$ such that $x \oplus y = -7$: $$x \oplus y = -7\\ x + y + 7 =-7\\ y = -x - 14$$ Without distinguishing $+$ from $\oplus$, this would get very confusing! How is it actually written in your notes (or textbook), I wonder?
Multilinear Algebra, finding $z \wedge z.$
Denote by $\{e_i\}$ a basis of $\mathbb R^4$, and by $\{e^i\}$ the dual basis, $e^j(e_i)=\delta^j_i$. Then you can check that e.g. for $z=e^1\wedge e^2+e^3\wedge e^4$ we have $z\wedge z= 2 e^1\wedge e^2\wedge e^3 \wedge e^4\neq0$. EDIT: So let me spell this out in more detail. First note that I have not used any particular properties of the $e^i$s, and since for any basis of $V$ (finite dimensional) I can find a basis of $V^*$ defined by the relation above, the example I gave is valid for any basis. Let us get the sign right. By definition of the exterior product, and apart from an arbitrary conventional overall normalization factor, we have \begin{equation} (z\wedge z)(v_1,v_2,v_3,v_4)=\sum_{\sigma}(-1)^\sigma z(v_{\sigma_1},v_{\sigma_2})z(v_{\sigma_3},v_{\sigma_4}), \end{equation} where $\{\sigma_1,\sigma_2,\sigma_3,\sigma_4 \}$ is a permutation of $\{1,2,3,4\}$, $(-1)^\sigma$ denotes the parity of the permutation and the sum is over all the permutations of $\{1,2,3,4\}$. So a priori we expect $4!=24$ terms. By the antisymmetry property of forms, we can multiply by 4 and sum over the permutations with $\sigma_1<\sigma_2$, $\sigma_3<\sigma_4$, thus getting only $4\cdot3/2=6$ terms. You can write them down explicitly or note that taking e.g. $(v_1,v_2)$ as the argument of the first $z$ and $(v_3,v_4)$ as the argument of the second is the same as taking first $(v_3,v_4)$ and then $(v_1,v_2)$ (this only holds because you are taking the product of $z$ with itself). Thus we can multiply by another factor of 2 and get only 3 terms: \begin{equation} (z\wedge z)(v_1,v_2,v_3,v_4)=8[(-1)^{\sigma} z(v_1,v_2)z(v_3,v_4)+(-1)^\tau z(v_1,v_3)z(v_2,v_4)+(-1)^\rho z(v_1,v_4)z(v_2,v_3)], \end{equation} where $\sigma$ is the permutation $\{1,2,3,4\} \rightarrow \{1,2,3,4\}$ which is clearly even, $\tau$ is the permutation $\{1,2,3,4\}\rightarrow \{1,3,2,4\}$ which is odd and $\rho$ is the permutation $\{1,2,3,4\}\rightarrow \{1,4,2,3\}$ which is even. Therefore we finally have \begin{equation} (z\wedge z)(v_1,v_2,v_3,v_4)=8[z(v_1,v_2)z(v_3,v_4)- z(v_1,v_3)z(v_2,v_4)+ z(v_1,v_4)z(v_2,v_3)]. \end{equation} More generally, since $e^i\wedge e^i=0$, if you want a form wedged with itself to be non-zero, it has to be a sum of more than one (non-zero) term (that is not sufficient, e.g. if $z=e^1\wedge e^2+ e^1\wedge e^3 $ then $z\wedge z=0$). Actually it also needs to be of even degree, as an odd form wedged with itself is necessarily zero; can you see why?
How can I sketch the graph of these rational functions?
When there are common linear factors in the numerator and denominator, the graph will be the graph of the simplified function after cancellations, but with missing points at the zeros of the denominator. Here, for example, is the graph of $$ f(x)=\frac{x(x-1)(x-2)}{(x-1)(x-2)} $$ In the example of the first graph, which has no missing points to complicate the graph, there are several principles at work.
(1) There is an $x$ intercept at each zero of the numerator.
(2) There is a vertical asymptote at each zero of the denominator.
(3) To the right of the right-most zero, whether it is of the numerator or denominator, the graph will remain entirely above or entirely below the $x$-axis depending upon the sign of the ratio of leading coefficients $p$ and $q$ of the numerator and denominator. In the example, that ratio is $\frac{p}{q}=\frac{1}{1}=1>0$, so the graph lies entirely above the $x$ axis on the interval $(6,\infty)$.
(4) If the numerator and denominator have the same degree, then there will be a horizontal asymptote $y=\frac{p}{q}$.
(5) If the numerator is of higher degree than the denominator, then one may divide the denominator $D(x)$ into the numerator $N(x)$ to obtain a quotient $Q(x)$ and a remainder $R(x)$: $\dfrac{N(x)}{D(x)}=Q(x)+\dfrac{R(x)}{D(x)}$. In this case, the "tail ends" of the graph will approach the graph of $y=Q(x)$. $Q(x)$ is the quotient asymptote.
(6) If the numerator has lower degree than the denominator, then the $x$-axis is a horizontal asymptote of the graph, since $Q(x)=0$.
To begin the process of graphing, one graphs all intercepts and asymptotes, using dashed lines for the asymptotes. Next, one sketches the rightmost portion of the graph to the right of the largest zero, drawing it above or below the axis as per principle (3) and approaching the asymptote, whether a horizontal or quotient asymptote. The word "transitive" means "crosses" and "intransitive" means "does not cross." This is an important concept with regards to $x$-intercepts and vertical asymptotes. The graph crosses the $x$-axis at transitive intercepts and vertical asymptotes, but remains on the same side at intransitive intercepts and vertical asymptotes. Transitivity of a zero depends upon its multiplicity. Suppose $(x-a)^n$ is a factor of either the numerator or of the denominator. Then the zero $a$ has multiplicity $n$. If $n$ is even, then $a$ is an intransitive $x$-intercept (or vertical asymptote) and if $n$ is odd, then $a$ is transitive. In your example, $n=1$ for all the zeros so the graph will cross at all intercepts and asymptotes. The graph "crosses" at an asymptote means the graph switches sides as it passes by the asymptote. After one has correctly graphed the rightmost section of the graph, one proceeds to the left, either crossing or not crossing the $x$ axis at each intercept or asymptote, until one reaches the leftmost part, which must approach any horizontal or quotient asymptote. For non-horizontal quotient asymptotes there is a principle seldom covered, which is quotient intercepts. The graph will intersect the quotient asymptote at zeros of the remainder $R(x)$, either crossing or not crossing the asymptote depending upon the transitivity of the zero. Here is a better sketch of the graph:
Database of FOL statements and proofs
There is a proof tree generator made by Wolfgang Schwarz which takes arbitrary statements in a few different logics (first order logic, modal logic, propositional logic) and provides a proof that they are tautologies, contingent, always false, valid statements, and so forth automatically. https://www.umsu.de/trees/ The only disadvantage is that it uses a proof tree method instead of the Hilbert style you mentioned. Hopefully another answer can provide a link to such a proof generator. Here are proofs of your two examples (1,2).
Alligators and Creepy Crawlers
For statement (A) to be true, you'd need all alligators to be creepy crawlers. You only know that some creepy crawlers are alligators. Not only does this leave room for non-alligator creepy crawlers, but it also leaves room for some alligators to not be creepy crawlers. Here's an example that's easy to grasp: some real numbers smaller than 1 are bigger than -1. Can you therefore conclude that all numbers bigger than -1 are smaller than 1?
Show that $c(G)\geq |V|-|E|$, where the equality holds iff $G$ has no cycles.
This is a little long for a comment, so I'm adding this as an answer. You are being asked to show two things. First: $c(G) \geq |V| - |E|$. Second: $c(G) = |V| - |E|$ iff $G$ contains no cycles (i.e., $G$ is a forest). You may find the following fact useful. Let $G$ be a connected graph on $|V|$ vertices. $G$ does not contain a cycle iff $|E| = |V| - 1$. Now suppose $G$ is a forest. What can you say about each component $C$'s vertices and edges? Can you use that to count the number of connected components? Now what happens if you add an edge to a graph without a cycle? Consider two cases: you connect two disconnected components, or both endpoints are on the same component. Do you see what is going on? Hope this helps!
Bound on $\sum_{i=1}^n \sum_{k=1}^i \frac{1}{(n-k+1)^2}$
Hint : $$\sum_{i=1}^n \sum_{k=1}^i \frac{1}{(n-k+1)^2} = \sum_{k=1}^n \sum_{i=k}^n \frac{1}{(n-k+1)^2} = \sum_{k=1}^n \frac{n-k+1}{(n-k+1)^2} = \sum_{k=1}^n \frac{1}{n-k+1}$$
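A quick check of the identity with exact arithmetic (a sketch; the right-hand side is the harmonic number $H_n$):

```python
from fractions import Fraction

n = 10
lhs = sum(Fraction(1, (n - k + 1) ** 2) for i in range(1, n + 1)
          for k in range(1, i + 1))
rhs = sum(Fraction(1, n - k + 1) for k in range(1, n + 1))  # harmonic number H_n
print(lhs == rhs, rhs)  # True 7381/2520
```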
When the polynomial $16(n+1)^2(p(x)-1)^3p(x)+1$ is a perfect square
COMMENT: we consider a certain case. Let $p(x)=t$; writing the perfect square as $m^2$, we may write: $$16(n+1)^2(t-1)^2(t-1)t=(m-1)(m+1)$$ Suppose $m-1=[4(n+1)(t-1)]^2$ and $m+1=t^2-t$ ⇒ $t=\frac{1\pm\sqrt{4m+5}}{2}$. We get: $(m, t, n)=[(1, 2, -1), (1, -1, -1)], [(5, 3, -3/4), (5, -2, 5/4)]$. It can be seen that in the series $m=1, 5, 11, 19, \ldots$ only $1-1=0$ and $5-1=4$ are perfect squares, but for $m=1$, $n$ is not a positive integer, and for $m=5$, $n$ is a rational number. A similar result comes out if we let $m+1=[4(n+1)(t-1)]^2$ and $m-1=t^2-t$, so $t=\frac{1\pm\sqrt{4m-3}}{2}$. For the series $m=1, 3, 7, 13, 21, \ldots$ the corresponding values of $t$ are $t=(0, 1), (2, -1), (3, -2), \ldots$ Where the discriminant is positive, only $3+1=4$ is a perfect square, which gives $t=(2, -1)$ and $n=(1/2, 3/2), (-3/4, -5/4)$, which is not acceptable. Hence only $p(x)=t=0$ can give solutions if we let $n\in \mathbb R$. In this case all zeroes of $p(x)$ can provide the condition. We can consider other cases, maybe finding a suitable $t$ and $m$.
A question on $G=S_5$ concerning notation
It is probably like you wrote it, i.e. $G = \mathcal{S}_5$. $|G| = |\mathcal{S}_5|$ doesn't imply that $G = \mathcal{S}_5$, since there are 47 groups of order 120 up to isomorphism.
Probability statement comprehension
You want the proportion of "boys that study humanities" relative to the total number of "boys." $$\frac{p(B \cap H)}{p(B)} = p(H\mid B)$$
Is this a valid proof? Discrete Mathematics
So I need to prove $P \to (Q \vee Z)$ and $(Q \vee Z) \to P$ because this is a bi-conditional statement, yes? Yes. Let's assume $m$ doesn't equal $n$ (proving by contra-position). It is said that $m^2 = n^2$. Take the square root of both sides and you are left with $m = n$. However, we are assuming $m$ doesn't equal $n$ so there is a contradiction. There are a couple of flaws here. First, the outline of what you did is "We want to prove $R$. So assume $\neg R$. Then, blah blah blah, $R$. But we assumed $\neg R$. This is a contradiction." But you only contradicted your original assumption $\neg R$. In the middle, you proved $R$. So in proofs like that you don't need the proof-by-contradiction framework. You just did a direct proof. Or did you? You said $m^2 = n^2 \implies m = n$ “by taking the square root of both sides.” But this isn't valid. For $2^2 = (-2)^2$ while $2 \neq -2$. So something went wrong, and it was that you lost a solution by assuming $m$ and $n$ were both positive. With nonlinear algebraic equations, it's often safer to set the equation equal to zero and factor. In this case: $$ m^2 = n^2 \implies m^2 - n^2 = 0 \implies (m-n)(m+n) = 0 $$ Two numbers have a product of zero if and only if one of them is zero. So either $m-n = 0$ (in which case $m=n$) or $m+n=0$ (in which case $m=-n$).
Equality of the totient function of two multiples of $x$
We use the fact that the totient function is multiplicative. Let: $$x=2^a5^by$$ where $\gcd(y,10)=1$. Then: $$\phi(4x)=\phi(5x) \implies \phi(2^{a+2}5^by)=\phi(2^a5^{b+1}y)$$ Using the fact that the totient function is multiplicative, we obtain: $$\phi(2^{a+2})\phi(5^b)\phi(y)=\phi(2^a)\phi(5^{b+1})\phi(y)$$ Cancelling $\phi(y)$, we have: $$\phi(2^{a+2})\phi(5^b)=\phi(2^a)\phi(5^{b+1})$$ We know that $\phi(2^{a+2})=2^{a+1}$ and $\phi(5^{b+1})=4\cdot 5^b$. If $b>0$, then $\phi(5^b)=4 \cdot 5^{b-1}$. However, this would be a contradiction, as the LHS has one less factor of $5$ than required. Thus, $b=0$. Substituting: $$\phi(2^{a+2})=4\phi(2^a)$$ which holds for all $a \geqslant 1$. Thus, $x=2^ay$ where $a>0$. This means that $x$ can be any even number not divisible by $5$.
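A brute-force confirmation of this characterization (a sketch using sympy's totient):

```python
from sympy import totient

solutions = [x for x in range(1, 200) if totient(4 * x) == totient(5 * x)]
expected = [x for x in range(1, 200) if x % 2 == 0 and x % 5 != 0]
print(solutions == expected)  # True
print(solutions[:8])          # [2, 4, 6, 8, 12, 14, 16, 18]
```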
Fourth order filter with assigned poles
The equations that you wrote down are very similar to the coefficients of the product of linear expressions $p+p_i$. This is known as the Theorem of Vieta (normally uses $p-p_i$ as terms). $$P(p)=(p+p_1)(p+p_2)(p+p_3)(p+p_4)=p^4+(p_1+p_2+p_3+p_4)p^3$$ $$+(p_1p_2+p_1p_3+p_1p_4+p_2p_3+p_2p_4+p_3p_4)p^2$$ $$+(p_1p_2p_3+p_1p_2p_4+p_1p_3p_4+p_2p_3p_4)p+p_1p_2p_3p_4$$ Compare this with $P(p)=p^4+k1_fp^3+k2_fp^2+k3_fp+k4_f$ to get your equations.
Computing a Fréchet derivative of a norm function including the Laplacian operator
Generally, one replaces $\mathbf m$ with $\mathbf m+\delta$ and then extracts the term that is linear in $\delta$. In this case, $$\frac{1}{2}\|\nabla^2(\mathbf{m} +\delta - \mathbf{m}^{\textrm{ref}})\|_2^2 = \frac12\langle \nabla^2(\mathbf{m} +\delta - \mathbf{m}^{\textrm{ref}}), \nabla^2(\mathbf{m} +\delta - \mathbf{m}^{\textrm{ref}})\rangle $$ so the linear term is $$\langle \nabla^2 \delta, \nabla^2 (\mathbf{m} - \mathbf{m}^{\textrm{ref}}) \rangle $$ Under reasonable boundary conditions (such as vanishing at infinity), the Laplacian is self-adjoint, which allows us to move it to the other side, getting $$\langle \delta, \nabla^4 (\mathbf{m} - \mathbf{m}^{\textrm{ref}}) \rangle$$ So, the Fréchet derivative is the biLaplacian of $(\mathbf{m} - \mathbf{m}^{\textrm{ref}})$, or, more precisely, the linear functional induced by it on $L^2$.
a prime divisor of $1+q+q^2+q^3+q^4$
Let $p$ be a prime divisor of $1+q+q^2+q^3+q^4$, then $$1+q+q^2+q^3+q^4 \equiv 0 \pmod{p} \implies q^5-1 \equiv 0 \pmod{p}$$ So $q$ has order $1$ or $5$ in the group $(\mathbb{Z}/p\mathbb{Z})^{\times}$. If it has order $1$, then $q\equiv 1 \pmod{p}$, hence $$5 \equiv 1+q+q^2+q^3+q^4 \equiv 0 \pmod{p} \implies p=5$$ If $q$ has order $5$, from $|(\mathbb{Z}/p\mathbb{Z})^{\times}| = p-1$, we obtain $5|(p-1)$. Hence every prime divisor of $1+q+q^2+q^3+q^4$ is either $5$ or congruent to $1$ modulo $5$. Since the number $1+q+q^2+q^3+q^4$ is always odd, its prime divisor is either $5$ or congruent to $1$ modulo $10$.
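A quick empirical check with sympy (a sketch; here $q$ ranges over small integers, not just primes):

```python
from sympy import factorint

for q in range(2, 30):
    N = 1 + q + q**2 + q**3 + q**4
    assert all(p == 5 or p % 10 == 1 for p in factorint(N))
print("every prime factor is 5 or congruent to 1 mod 10")
```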
Finding unit's digit in exponentiation
Unit Digit means $\pmod{10}$ Observe that $7\equiv7\pmod{10},7^2=49\equiv9,7^3\equiv9\cdot7\equiv3,7^4\equiv3\cdot7\equiv1$ $3\equiv3\pmod{10},3^2=9\equiv9,3^3=27\equiv7,3^4=81\equiv1$ So, both $3,7$ have a cycle with period $=4$ which can be also confirmed using Euler's Totient Theorem with $\phi(10)=\phi(2)\phi(5)=4$ and $(3,10)=(7,10)=1$ As $95=23\cdot4+3$ and $58=4\cdot14+2,$ $7^{95}=(7^4)^{23}\cdot7^3\equiv1^{23}\cdot3\pmod{10}$ and $3^{58}=(3^4)^{14}\cdot3^2\equiv1^{14}\cdot9\pmod{10}$
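Python's three-argument pow gives the same residues instantly:

```python
print(pow(7, 95, 10))  # 3
print(pow(3, 58, 10))  # 9
```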
cauchy schwarz inequality extreme
Define the following two vectors: \begin{align} \textbf{a}&=(a_1,a_2,...,a_n),\\ \textbf{b}&=(b_1,b_2,...,b_n). \end{align} The dot product is defined such that $|\textbf{a}\cdot \textbf{b}|^2 = |\textbf{a}|^2|\textbf{b}|^2\cos^2\theta$ where $\theta$ is the angle between the two vectors $\textbf{a}$ and $\textbf{b}$. So the desired condition holds if $\cos^2\theta\ll 1$, i.e. if $\textbf{a}$ and $\textbf{b}$ are close to being at right angles.
Essential extension and injective hull of a simple module
Suppose $M$ is an essential extension of $N$ and consider the injective hull $E(N)$ of $N$. Denote by $f\colon N\to M$ an injective homomorphism and by $i\colon N\to E(N)$ the embedding in the injective hull. By definition of injective module, there exists $g\colon M\to E(N)$ such that $gf=i$. Since $$ \ker g\cap f(N)=\{0\} $$ and $f(N)$ is essential in $M$ (by assumption), we conclude $\ker g=\{0\}$, so $g$ is an embedding. It's an abuse of language saying that $M$ is a submodule of $E(N)$; what you can say is that $M$ is isomorphic to a submodule of $E(N)$. The converse is obvious: if $N\subseteq M\subseteq E(N)$, $N$ is essential in $E(N)$ and so also in $M$. Note that the simplicity of $N$ has not been used. Indeed, the result holds for every $R$-module $N$. The key fact is that $E(N)$ is an essential extension of $N$.
How to solve this system of equations.
Observe that $$(a^2)^2-b^2\cdot c^2=(x^2-yz)^2-(y^2-zx)(z^2-xy)=x(x^3+y^3+z^3-3xyz)$$ Similarly, $$b^4-c^2a^2=y(x^3+y^3+z^3-3xyz)\text{ and }c^4-a^2b^2=z(x^3+y^3+z^3-3xyz)$$ So, $$\frac x{a^4-b^2c^2}=\frac y{b^4-c^2a^2}=\frac z{c^4-a^2b^2}=\frac1{x^3+y^3+z^3-3xyz}=k\text{(say)}$$ Now, $x^3+y^3+z^3-3xyz=(x+y+z)(x^2+y^2+z^2-xy-yz-zx)=(x+y+z)\frac{\{(x-y)^2+(y-z)^2+(z-x)^2\}}2$ $x+y+z=k(a^4-b^2c^2+b^4-c^2a^2+c^4-a^2b^2)=\frac k2\{(a^2-bc)^2+(b^2-ca)^2+(c^2-ab)^2\}$ and $x-y=k\{a^4-b^2c^2-(b^4-c^2a^2)\}=k(a^2+b^2+c^2)(a^2-b^2)$ $\implies (x-y)^2+(y-z)^2+(z-x)^2=k^2(a^2+b^2+c^2)^2\{(a^2-b^2)^2+(b^2-c^2)^2+(c^2-a^2)^2\}=k^2(a^2+b^2+c^2)^2\cdot 2(a^4-b^2c^2+b^4-c^2a^2+c^4-a^2b^2)$ $\implies x^3+y^3+z^3-3xyz=k^3\{(a^2+b^2+c^2)(a^4-b^2c^2+b^4-c^2a^2+c^4-a^2b^2)\}^2=k^3(a^6+b^6+c^6-3a^2b^2c^2)^2 $ $$\implies\frac1k=x^3+y^3+z^3-3xyz=k^3(a^6+b^6+c^6-3a^2b^2c^2)^2$$ $$\implies k^2=\frac1{a^6+b^6+c^6-3a^2b^2c^2}$$ as $a^6+b^6+c^6-3a^2b^2c^2=(a^2+b^2+c^2)\frac{\{(bc-a^2)^2+(ca-b^2)^2+(ab-c^2)^2\}}2\ge 0$ for real $a,b,c$
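The key identity can be verified symbolically; a minimal sympy sketch:

```python
from sympy import symbols, expand

x, y, z = symbols('x y z')
a2, b2, c2 = x**2 - y*z, y**2 - z*x, z**2 - x*y  # a^2, b^2, c^2

lhs = a2**2 - b2 * c2
rhs = x * (x**3 + y**3 + z**3 - 3*x*y*z)
print(expand(lhs - rhs))  # 0
```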
Logistic family and chaos
This map is certainly not chaotic on any invariant interval. It's likely that $f^2$ is chaotic on an interval smaller than the one you mention, but I don't have a proof. As is well known, the critical orbit (the orbit of the critical point $1/2$, in this case) dominates the dynamics. The most famous fact along these lines is that an attractive orbit must attract a critical point. Even when the map is chaotic, the critical orbit is extra important through the kneading theory. So let's start by iterating the function from the critical point $1/2$ and examine the output. If I plot the first $2000$ iterates as points along the number line, I see something like so: Thus, the orbit appears to be dense in the union of two sub-intervals. We can see what's going on by plotting $f^2$. Note that there are two invariant sub-intervals for $f^2$ with endpoints labeled in red in that figure. These are exactly the first four iterates of $f$: $$(d,a,c,b) = (f(1/2), f^2(1/2), f^3(1/2), f^4(1/2)).$$ On these sub-intervals, $f^2$ has nearly the classic unimodal look that gives rise to chaos. It's not quite there, though. It should probably be pointed out that chaos on an entire interval is rare in the logistic family. This is made precise in the paper Generic Hyperbolicity in the Logistic Family. In fact, between any two $\mu$ values such that $f_{\mu}(x)=\mu x(1-x)$ has an attractive orbit, there is another such $\mu$ value. It follows that the set of all $\mu$ values without attractive orbits is nowhere dense; this precludes the possibility that these $\mu$ values have chaotic intervals. It is much more common that the chaotic set is a Cantor set, as arises in my answer here.
How to find subgroups of $\Bbb Z_2\times \Bbb Z_6$
Hint: Recall the theorem highlighted below, and note that it follows that $$\quad \mathbb Z_{\large 2} \times \mathbb Z_{\large 6} \quad \cong \quad \mathbb Z_{\large 2} \times \mathbb Z_{\large 2}\times \mathbb Z_{\large 3}$$ This might help to make your task a bit more clear, noting that each of $\mathbb Z_2, \; \mathbb Z_3,$ and $\,\mathbb Z_6 \cong \mathbb Z_2 \times \mathbb Z_3$ are cyclic, but $\;\mathbb Z_2 \times \mathbb Z_2,\;$ of order $\,4,\,$ is not cyclic. Indeed, there is one and only one non-cyclic group of order $4$ up to isomorphism, namely $\mathbb Z_2\times \mathbb Z_2$, the Klein $4$-group. Theorem: $\;\mathbb Z_{\large mn}\;$ is cyclic and $$\mathbb Z_{\large mn} \cong \mathbb Z_{\large m} \times \mathbb Z_{\large n}$$ if and only if $\;\;\gcd(m, n) = 1.$ This is how we know that $\mathbb Z_6 = \mathbb Z_{2\times 3} \cong \mathbb Z_2\times \mathbb Z_3$ is cyclic, since $\gcd(2, 3) = 1.\;$ It's also why $\,\mathbb Z_2\times \mathbb Z_2 \not\cong \mathbb Z_4,\;$ and hence, is not cyclic, since $\gcd(2, 2) = 2 \neq 1$. Good-to-know Corollary/Generalization: The direct product $\;\displaystyle \prod_{i = 1}^n \mathbb Z_{\large m_i}\;$ is cyclic and $$\prod_{i = 1}^n \mathbb Z_{\large m_i}\quad \cong\quad \mathbb Z_{\large m_1m_2\ldots m_n}$$ if and only if the integers $m_i\,$ for $\,1 \leq i \leq n\,$ are pairwise relatively prime, that is, if and only if for any two $\,m_i, m_j,\;i\neq j,\;\gcd(m_i, m_j)=1$.
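For a concrete answer, one can also enumerate every subgroup of $\mathbb Z_2\times\mathbb Z_6$ by brute force; a sketch (for a finite group, a subset containing the identity and closed under the operation is a subgroup):

```python
from itertools import combinations

G = [(a, b) for a in range(2) for b in range(6)]  # elements of Z_2 x Z_6

def add(u, v):
    return ((u[0] + v[0]) % 2, (u[1] + v[1]) % 6)

subgroups = []
for r in range(1, 13):
    for S in combinations(G, r):
        S = set(S)
        if (0, 0) in S and all(add(u, v) in S for u in S for v in S):
            subgroups.append(sorted(S))

print(len(subgroups))  # 10
for H in subgroups:
    print(len(H), H)
```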
Strong Law of Large Numbers imply Weak Law
A weak law may come with some estimate of the rate of convergence which is not available for the strong law. In practical situations, such as statistics, you never have a whole infinite sequence of trials, only a finite sequence. Yet you would still like to deduce something about the underlying probability distribution. The textbook Chung, Kai Lai, Elementary Probability Theory with Stochastic Processes, Undergraduate Texts in Mathematics, New York - Heidelberg - Berlin: Springer-Verlag (1974), ZBL0293.60001, perplexes students by showing two quotes on the same page, one saying that the strong law is superior to the weak, and the other saying the weak law is superior to the strong. This leaves the poor instructor (me) to explain the discrepancy! Here it is, from page 233 in the first edition: Feller: "[the weak law of large numbers] is of very limited interest and should be replaced by the more precise and more useful strong law of large numbers" (p. 152 of An Introduction to Probability Theory and its Applications, vol. I, 3rd edition, 1971). van der Waerden: "[the strong law of large numbers] scarcely plays a role in mathematical statistics" (p. 98 of Mathematische Statistik, 3rd ed., 1971)
ODE $y(x)=xy'(x)-\sqrt{y'(x)-1}$
You can insert the expressions $$ \sqrt{y'-1}=\frac1{2x} $$ and $$ y'=\frac1{4x^2}+1 $$ directly into the original equation to get $$ y=\frac1{4x}+x-\frac1{2x}=x-\frac1{4x} $$ This is the only singular solution. You have a constant error in integrating $\frac1{4x^2}$, and you did not compare with the original equation to get $C=0$ in the singular solution.
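A quick symbolic check (a sketch of mine using sympy) confirms that $y=x-\frac1{4x}$ satisfies the original equation, taking $x>0$ so that the positive square root applies:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = x - 1 / (4 * x)
yp = sp.diff(y, x)                              # y' = 1 + 1/(4x^2)
residual = sp.simplify(x * yp - sp.sqrt(yp - 1) - y)
print(residual)                                 # 0, so y solves y = x*y' - sqrt(y' - 1)
```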
How to solve optimization problem $\max_{x,y} \ ax + by^3 \ \text{subject to}: \ 0\leq x \leq 1,\ 0\leq y\leq 1$
This is not a convex optimization problem; it is a concave optimization (programming) problem. The objective function, $ax + by^3$, is a convex function over the constraint region $0 \le x \le 1,\ 0 \le y \le 1$, but it is being maximized, which is equivalent to minimizing its negative, a concave function. A concave function being minimized over a compact convex constraint set, as in this problem, has a global optimum at an extreme point of the constraints. In this case, that means at $x = 0$ or $1$, and $y = 0$ or $1$. Because $a$ and $b$ are both positive, the optimum occurs at $x = y = 1$, with optimal objective value $a + b$.
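Since the optimum sits at a corner, evaluating the objective at the four extreme points is enough; here is a minimal sketch, with $a, b$ as sample positive values of mine (the problem does not fix them):

```python
from itertools import product

a, b = 2.0, 3.0                                   # placeholder positive coefficients
values = {(x, y): a * x + b * y**3 for x, y in product([0, 1], repeat=2)}
best = max(values, key=values.get)
print(values)
print("optimum at", best, "with value", values[best])   # (1, 1) with value a + b
```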
Construction by transfinite induction
The arrow is the notation for the restriction of $f$. It means $\{\langle u,v\rangle\in f\mid u\in X_{<x}\}$. So $f(x)$ has the same value as $G$ when applied to the initial segment defined by $x$.
$x^2 + 3x + 7 \equiv 0 \pmod {37}$
In the real numbers, a method of finding a solution to a quadratic equation is to complete the square. This would involve adding and subtracting $(b/2)^2$. $b=3$ in your case, and remember that $1/2 = 19 \mod 37$. Specifically notice: $$(x+3 \cdot 19)^2 \equiv x^2 + 2\cdot 3 \cdot 19 x + (3 \cdot 19)^2$$ $$\equiv x^2 + 3x + (20)^2 \mod 37$$ Note that $3 \cdot 19 \equiv 20 \mod 37$. Also $20^2 = 400 \equiv 30 \mod 37$. Thus the method of completing the square is as follows $$x^2 + 3x + 7 \equiv x^2 + 3x + 20^2 - 20^2 + 7 \equiv (x+20)^2 - 23 \mod 37$$ Finally this means you need to solve $$(x+20)^2 \equiv 23 \mod 37$$
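A brute-force check in Python (my addition) confirms that the reduction is consistent: the roots of $x^2+3x+7$ and the solutions of $(x+20)^2\equiv 23$ coincide mod $37$. Running it, both lists come out empty, which shows that $23$ is in fact a quadratic non-residue mod $37$, so the congruence has no solution.

```python
p = 37
roots = [x for x in range(p) if (x * x + 3 * x + 7) % p == 0]
squares = [x for x in range(p) if (x + 20) ** 2 % p == 23]
print(roots)    # solutions of x^2 + 3x + 7 = 0 (mod 37)
print(squares)  # solutions of (x + 20)^2 = 23 (mod 37); the lists agree
```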
Evaluating $a_j$ sum given the $n$-th partial sum. Not working.
$\sum a_j$ is by definition just the limit of $S_n$ as $n \to \infty$. Dividing numerator and denominator by $n^{2}$ we can write $S_n$ as $\frac {1-4/n+5/n^{2}} {n+7/n-9/n^{2}}$ The numerator tends to $1$ and the denominator tends to $\infty$ and the limit is $0$.
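As a sanity check, sympy reproduces the limit; here I reconstruct $S_n=\frac{n^2-4n+5}{n^3+7n-9}$ from the divided form above (an inference of mine from the displayed quotient):

```python
import sympy as sp

n = sp.symbols('n', positive=True)
S = (n**2 - 4*n + 5) / (n**3 + 7*n - 9)
print(sp.limit(S, n, sp.oo))   # 0, so the series sums to 0
```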
If $u_n\to u$ in $L^p$ and $u_n\to v$ in $L^q$ do we have that $u=v$?
The argument is correct. Elements of $L^p$ are equivalence classes for the relation of equality almost everywhere. Therefore the equality $u=v$ means that $u=v$ almost everywhere.
Definition of $C^{m,k/2}$-capacity of a point.
The $C^{m,k/2}$ capacity of a point $x \in \mathbb R^d$ might be the number: $$ \inf\{ \|\varphi\|_{W^{m,k/2}(\mathbb R^d)} \mid \varphi \ge 1 \text{ in an open superset of } \{x\}\}. $$ (Often, this is raised to the power $k/2$). Note that this number does not depend on $x$, since everything is translation invariant. For further reading, I would suggest the books Nonlinear Potential Theory of Degenerate Elliptic Equations by Juha Heinonen, Tero Kipelainen, Olli Martio Function Spaces and Potential Theory by Adams, David R., Hedberg, Lars I.
Is $| \lceil \frac{a}{2} \rceil - \lceil \frac{b}{2} \rceil |\geq \lfloor |\frac{a - b}{2}| \rfloor $?
Yes, it is true. $$ \left | \left \lceil \frac{a}{2} \right \rceil - \left \lceil \frac{b}{2} \right \rceil \right |\geq \left \lfloor \left | \frac{a - b}{2} \right |\right \rfloor \tag1$$ In the following, $m,n$ are integers. Case 1 : If $a=2m,b=2n$, then both sides of $(1)$ equal $|m-n|$. Case 2 : If $a=2m,b=2n+1$, then $$(1)\iff |m-n-1|\ge \left\lfloor\left |m-n-\frac 12\right|\right\rfloor\tag2$$ If $m-n-\frac 12\ge 0$, then $m-n-1\ge 0$, so$$(2)\iff m-n-1\ge m-n-1$$which is true. If $m-n-\frac 12\lt 0$, then $m-n-1\lt 0$, so$$(2)\iff -m+n+1\ge -m+n$$which is true. Case 3 : If $a=2m+1, b=2n$, then $$(1)\iff |m-n+1|\ge \left\lfloor\left|m-n+\frac 12\right|\right\rfloor\tag3$$ If $m-n+\frac 12\ge 0$, then $m-n+1\ge 0$, so$$(3)\iff m-n+1\ge m-n$$which is true. If $m-n+\frac 12\lt 0$, then $m-n+1\lt 0$, so$$(3)\iff -m+n-1\ge -m+n-1$$which is true. Case 4 : If $a=2m+1,b=2n+1$, then both sides of $(1)$ equal $|m-n|$.
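The four parity cases can also be checked exhaustively over a range of integers; a small Python sketch (my addition):

```python
from math import ceil, floor

ok = all(
    abs(ceil(a / 2) - ceil(b / 2)) >= floor(abs(a - b) / 2)
    for a in range(-50, 51)
    for b in range(-50, 51)
)
print(ok)   # True: the inequality holds on the whole range
```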
What is wrong with my proof here?
Another proof uses the division algorithm together with one extra observation: For any positive integers $u$ and $v$, there are non-negative integers $p$ and $q$ such that $u = pv+q$ where $0 \le q \le v-1$; in addition, $v \mid u$ if and only if $q = 0$. To use this to solve your problem: $a \mid b \implies b = ka$; $a \not\mid c \implies c = ja+i$ with $1 \le i \le a-1$. Therefore $b+c = ka+ja+i = (k+j)a+i$ with $1 \le i \le a-1$, which means that $a \not\mid b+c$.
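A quick numeric confirmation of the conclusion (an illustrative sketch, not part of the proof):

```python
ok = all(
    (b + c) % a != 0
    for a in range(1, 25)
    for b in range(0, 100, a)    # b runs over multiples of a, so a | b
    for c in range(1, 100)
    if c % a != 0                # a does not divide c
)
print(ok)   # True: a never divides b + c in these cases
```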
Every compact subset of $\mathbb R^1$ is the support of a Borel measure
To show that $\mu$ is a measure, note that $\mu (\emptyset ) = 0$ is immediate, and $\sigma$-additivity follows from the $\sigma$-additivity of your Dirac masses. That is, if $X = \bigsqcup_{k=1}^\infty B_k$, where the union is disjoint and the $B_k$'s are Borel, then $$\mu (X) = \sum_{n=1}^\infty \frac{\delta_{b_n}(X)}{2^n} = \sum_{n=1}^\infty \frac{1}{2^n} \sum_{k=1}^\infty \delta_{b_n}(B_k) = \sum_{k=1}^\infty \sum_{n=1}^\infty \frac{1}{2^n} \delta_{b_n}(B_k) = \sum_{k=1}^\infty \mu (B_k)$$ Note that we can interchange the order of summation because all the summands are non-negative. To see that the support of $\mu$ is $A$, first observe that $A$ is already a closed set, so if we take an open set $U$ that does not hit $A$ then we must show that $\mu (U) = 0$. But that is immediate because such a $U$ cannot hit any of the points $\{b_n\}$. Conversely, if $K$ is a closed proper subset of $A$, then we have that $b_n \not\in K$ for some $n$, and so $\mu (K^c) \geq \delta_{b_n} (K^c) > 0$. Since the support of a measure is the smallest closed set whose complement is $\mu$-null, we conclude that $A$ is the support of $\mu$.
Piecewise Function and Convergence
Some hints to possibly help: For the first part, for any fixed $x$, once $n$ has gotten large enough (i.e., far enough along in the limit) what happens? The key here is that $f_n(x)$ is only one of the three piecewise parts, for any particular $x$, so you don't need to worry about all three pieces if, as you seem to have noticed, you always eventually land in the first piece. For the uniform convergence, you would certainly have to justify why the maximum occurs there. I don't believe this to be true, and also, $f_n(n) \neq e^{-x}$ as you've written above? I think I may see what you are trying to do. I believe that, perhaps, you think that the maximum of $f_n(x)$ being at $n$ (which it is not), means that the supremum of $f_n(x) - f(x)$ is maximized at $n$ as well? This is very much not true. What you need to do is find the supremum of the function $(f_n - f)(x)$, as a function of $n$, and then take the limit of this as $n\to \infty$. For the last part, those not familiar with your course/book/notes/wherever this is coming from may not know explicitly what your Integral Convergence Theorem is, so may not know exactly where your contradiction isn't occurring (a quick Google shows theorems that are similar, but not the same, by that name). Either way, why do you think the $f_n$ aren't continuous? You may want to look a little closer at the bounds of integration to see where the problem lies...
System of two differential equations where all characteristic roots are complex conjugate
Symplectic systems typically have such an eigenvalue structure. But on the other hand, they are of order $1$, or, as a Newton equation, of order $2$, not $4$ as in this case. The eigenvalues tell you that the homogeneous solutions are linear combinations of $\cosh(m_kt)\cos(n_kt)$, $\cosh(m_kt)\sin(n_kt)$, $\sinh(m_kt)\cos(n_kt)$, $\sinh(m_kt)\sin(n_kt)$ where $k=1,2$. Insertion with vector-valued coefficients should result in the eigenvalue equation for the corresponding eigenvectors. These functions have the advantage of being real, but the decided disadvantage that they are not eigensolutions, since they combine eigenvectors of different eigenvalues. While one can make an ansatz with each group ($k=1,2$) of real functions at once, it is easier to do the complex calculations and set, for the first eigenvalue, $y_1(t)=c_1e^{(m_1+n_1i)t}$ and $y_2(t)=c_2e^{(m_1+n_1i)t}$, etc., insert into the homogeneous system and, after factoring out the exponential, the resulting $2\times 2$ system should be singular and give the solution subspace. Then the real and imaginary parts of the eigensolutions are real solutions.
P not non-negative $\rightarrow$ we can assign its variables to rational numbers and get a negative answer
Expanding on a suggestion in the comments, suppose that $P$ is negative when the variables $x_1, \cdots, x_i$ take the values $a_1, \cdots, a_i$. Then, because $P$ is continuous, there exists $\epsilon > 0$ such that $P$ remains negative whenever each variable takes a value between $a_j$ and $a_j + \epsilon$ (the sign has not changed on this small box). Then, because "there are a heck lot of rationals", there is, for each $j$, a rational between $a_j$ and $a_j + \epsilon$. Making the variables assume those rational values, the polynomial is still negative.
A problem about direct product
The proof I'm thinking of does indeed use the fact that $U$ and $V$ are nonabelian. Let $G=U\times V$ be as you say. Let $N\leq G$ be a nontrivial normal subgroup. Suppose $x=(u,v)\in N$, with $u\neq 1$. (Or $v\neq 1$; at least one of the cases must be true.) Since $U$ is simple and nonabelian, there must exist some $g\in U$ such that $gu\neq ug$, for if not, $u\in Z(U)$, in which case $Z(U)$ is a nontrivial normal subgroup of $U$. Let $y=(g,1)$, and consider the commutator $[y,x]=y^{-1}x^{-1}yx$. Since $N$ is normal, $[y,x]\in N$. Now define $G_1=\{(u,1):u\in U\}$. Observe that $$ [y,x]=y^{-1}x^{-1}yx=(g^{-1}u^{-1}gu,1)\in G_1 $$ and $[y,x]\neq (1,1)$ since $gu\neq ug$. Thus $[y,x]\in G_1\cap N$. But then $G_1\cap N$ is a nontrivial, normal subgroup of $G_1\simeq U$, so necessarily $G_1\cap N=G_1$. Thus $G_1\leq N$. If there exists some $(u,v)\in N$ with $v\neq 1$, then a similar argument would show $G_2\leq N$, where $G_2=\{(1,v):v\in V\}$. From this it follows that there are precisely four normal subgroups of $G$.
Find a recurrence for the number of ways to arrange cars in a row with $n$ parking spaces
To fill a parking lot with $n$ spaces, you can either:

- Fill the first $n-1$ spaces and then put a Cadillac or a Ford at the end; there are $2 \cdot a_{n-1}$ ways to do this ($a_{n-1}$ ways to fill the first $n-1$ spaces, and $2$ ways to fill the last space), or
- Fill the first $n-2$ spaces and then put a Hummer at the end; there are $a_{n-2}$ ways to do this.

Adding the two, we get $$a_n = 2a_{n-1} + a_{n-2}$$
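A quick Python sketch (assuming, on my part, the base cases $a_0=1$ for the empty lot and $a_1=2$) computes the sequence and cross-checks it against direct enumeration of car sequences:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def a(n):
    if n == 0:
        return 1          # one way to fill an empty lot: do nothing
    if n == 1:
        return 2          # Cadillac or Ford
    return 2 * a(n - 1) + a(n - 2)

def brute(n):
    # Enumerate car sequences directly: two 1-space types, one 2-space type.
    widths = {"Cadillac": 1, "Ford": 1, "Hummer": 2}
    if n == 0:
        return 1
    return sum(brute(n - w) for w in widths.values() if w <= n)

print([a(n) for n in range(10)])                 # 1, 2, 5, 12, 29, 70, ...
print(all(a(n) == brute(n) for n in range(12)))  # True
```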
Solving a Diophantine equation: $p^n+144=m^2$
Write $p^n = m^2 - 144 = (m-12)(m+12)$; since $p$ is prime, both factors must be powers of $p$. This gives us $m+12=p^a$ and $m-12 = p^b$ with $a+b=n$ and $a > b \ge 0$. This means $p^a-p^b = 24$, i.e., $p^b(p^{a-b}-1) = 24 = 2^3 \cdot 3$. Note that $p^b$ and $p^{a-b}-1$ are of opposite parity. If $b=0$, we need $p^a-1 = 24 \implies p^a = 25 \implies p = 5,a=2$. Hence, $m=13$. If $b>0$, we need $p^b \mid 24$. Hence, $p=2$ or $p=3$. If $p=2$, we have $p^{a-b}-1$ odd. Hence, this means $p^{a-b}-1=3$ and $p^b=2^3$. This gives us $b=3$ and $a-b=2$, i.e., $a=5$. This gives us $m=20$. If $p=3$, we have $p^b=3$ and $p^{a-b}-1=2^3$. Hence, $b=1$ and $a-b=2$, i.e., $a=3$. This gives us $m=15$. Hence, the solutions are ${\color{blue}{(m,p,n) = (13,5,2)}}$, ${\color{blue}{(m,p,n) = (20,2,8)}}$ and ${\color{blue}{(m,p,n) = (15,3,4)}}$.
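An exhaustive search over small primes and exponents (a sketch of mine; the ranges are arbitrary cutoffs) recovers exactly these three solutions:

```python
from math import isqrt

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
sols = []
for p in primes:
    for n in range(1, 30):
        m = isqrt(p**n + 144)        # exact integer square root
        if m * m == p**n + 144:
            sols.append((m, p, n))
print(sols)   # [(20, 2, 8), (15, 3, 4), (13, 5, 2)]
```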
For two linear maps, does having the same matrix when expressed in different bases imply they are equal?
Edit: the discussion is relevant only when the domain and the range of the linear map are the same; otherwise we cannot talk about similarity (thanks to Christoph for the remark). In the general case, no, they are not equal. Let us define linear transformations $f: R^2 \to R^2$ and $g: R^2 \to R^2$. Fix two bases of $R^2$: $B_{1} = \{(1, -1)^T, (2, 3)^T\}$ and $B_{2} = \{(4, 6)^T, (3, 1)^T\}$ (each is a basis, because its vectors are linearly independent and there are $2$ of them, which is $\dim(R^2)$). Suppose the linear transformations $f$ and $g$ have the same matrix with respect to the bases $B_{1}$ and $B_{2}$ respectively: $[f]_{B_{1}} = [g]_{B_{2}}= \begin{bmatrix}0 & 1\\1 & 0\end{bmatrix}$. We will need the following property, on which you can read more under the Change of basis topic for linear maps; it is a corollary of a more general theorem for linear maps whose domain and range coincide: Assume that $f : R^n \to R^n$ is a linear map and let $B=\{v_{1},...,v_{n}\}$ and $C=\{v'_{1},...,v'_{n}\}$ be bases of $R^n$. Let $g : R^n \to R^n$ be the uniquely determined linear map for which $g(v_{j}) = v'_{j}$ holds for every $1 \le j \le n$. Then $[f]_{C} = [g]^{-1}_{B}[f]_{B}[g]_{B}$; likewise $[f]_{B} = [g]_{B}[f]_{C}[g]_{B}^{-1}$. We will show that the linear maps defined above have different matrices with respect to the standard basis $\{(1,0)^T, (0, 1)^T\}$ of $R^2$. Computing $[f]$: we need the matrix of the linear map $h$ which sends the standard basis of $R^2$ to the basis $B_{1}$: $[h] = \begin{bmatrix}1 & 2\\-1 & 3\end{bmatrix}$, $[h]^{-1} = \frac{1}{5}\begin{bmatrix}3 & -2\\1 & 1\end{bmatrix}$, so $[f] = [h][f]_{B_{1}}[h]^{-1} = \frac{1}{5}\begin{bmatrix}7 & -3\\8 & -7\end{bmatrix}$. Computing $[g]$: as in the first case, we need the matrix of the linear map $h$ which sends the standard basis of $R^2$ to the basis $B_{2}$: $[h] = \begin{bmatrix}4 & 3\\6 & 1\end{bmatrix}$, $[h]^{-1} = \frac{1}{14}\begin{bmatrix}-1 & 3\\6 & -4\end{bmatrix}$, so $[g] = [h][g]_{B_{2}}[h]^{-1} = \frac{1}{2}\begin{bmatrix}3 & -1\\5 & -3\end{bmatrix}$. As we can see, the matrices of the linear maps $f$ and $g$ with respect to the standard basis are not equal; that is, the maps are not the same.
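The computation is easy to reproduce numerically; a small numpy sketch (mine):

```python
import numpy as np

M = np.array([[0, 1], [1, 0]])      # the common matrix [f]_{B1} = [g]_{B2}
B1 = np.array([[1, 2], [-1, 3]])    # columns are the B_1 basis vectors
B2 = np.array([[4, 3], [6, 1]])     # columns are the B_2 basis vectors

f_std = B1 @ M @ np.linalg.inv(B1)  # [f] in the standard basis
g_std = B2 @ M @ np.linalg.inv(B2)  # [g] in the standard basis

print(f_std)                        # (1/5) * [[7, -3], [8, -7]]
print(g_std)                        # (1/2) * [[3, -1], [5, -3]]
print(np.allclose(f_std, g_std))    # False: the two maps differ
```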
Proving that there are only 2 ways to glue pairs of sides of a fundamental polygon
I suppose you want to say $\phi_1\sim\phi_2$ if there exists continuous $\theta\colon[0,1]\times[0,1]\to[0,1]$ such that $\theta(0,\cdot)=\phi_1$ and $\theta(1,\cdot)=\phi_2$ and $\theta(t,\cdot)\in\Phi$ for all $t$. Let $\phi\in\Phi$. There exist unique $a,b\in[0,1]$ with $\phi(a)=0$ and $\phi(b)=1$. Suppose $0<a<1$. Then $\phi(0)>0$ and $\phi(1)>0$. By the IVT, there exist $x_1\in[0,a)$ and $x_2\in(a,1]$ with $\phi(x_1)=\phi(x_2)=\min\{\phi(0),\phi(1)\}$, contradicting injectivity of $\phi$. Hence $a\in\{0,1\}$, and likewise $b\in\{0,1\}$. In other words, $\phi|_{\{0,1\}}$ is a bijection $\{0,1\}\to\{0,1\}$. If $\theta$ is a homotopy between $\phi_1$ and $\phi_2$, then $t\mapsto \theta(t,0)$ is a continuous map $[0,1]\to\{0,1\}$, hence constant. In particular, $x\mapsto 1-x$ is not homotopic to the identity. The main work perhaps is: Claim. If $\phi|_{\{0,1\}}$ is the identity, then $\phi$ is homotopic to the identity. Proof. For such $\phi$, define $\theta\colon[0,1]\times[0,1]\to[0,1]$ as $\theta(t,x)=t\phi(x)+(1-t)x$. Clearly, $\theta$ is continuous and $0\le \theta(t,x)\le 1$ for all $t,x\in[0,1]$. Also $\theta(0,x)=x$ and $\theta(1,x)=\phi(x)$. For good measure, we also note that $\theta(t,\cdot)\in\Phi$ for all $t\in[0,1]$. Indeed, for $0<t<1$, $\theta(t,\cdot)$ is continuous and onto (because $\theta(t,0)=0$ and $\theta(t,1)=1$). Assume $\theta(t,x)=\theta(t,x')$ with $x<x'$. Then $\phi(x)>\phi(x')$, and by another IVT argument as above, we arrive at a contradiction with the injectivity of $\phi$. $\square$ Likewise, if $\phi|_{\{0,1\}}$ permutes the two points, then $\phi$ is homotopic to $x\mapsto 1-x$ (just note that $x\mapsto \phi(1-x)$ is homotopic to the identity). Remark: If we drop the requirement that $\theta(t,\cdot)\in\Phi$ (or is at least injective) for all $t$, then all elements of $\Phi$ may become homotopic, because we can pass through a constant map (or a map that zig-zags a bit).
How can I find this limit involving thrice-iterated logarithm?
You can find it with a lot of patience! I give you what I did (hoping that a simpler solution will be provided): First, I look at the exponent and, going first to logarithms, obtain $$\frac{(x+1)^{\frac{1}{x}}}{x}=\frac{e}{x}-\frac{e}{2}+\frac{11 e x}{24}-\frac{7 e x^2}{16}+O\left(x^3\right)$$ Then $$(x+1)^{\frac{(x+1)^{\frac{1}{x}}}{x}}=e^e-e^{1+e} x+\frac{1}{24} e^{1+e} (25+12 e) x^2+O\left(x^3\right)$$ So $$A=x+(x+1)^{\frac{(x+1)^{\frac{1}{x}}}{x}}=e^e+\left(1-e^{1+e}\right) x+\frac{1}{24} e^{1+e} (25+12 e) x^2+O\left(x^3\right)$$ Now, let me play with the logarithms $$\log A=e+\left(e^{-e}-e\right) x+\left(\frac{25 e}{24}+e^{1-e}-\frac{e^{-2 e}}{2}\right) x^2+O\left(x^3\right)$$ $$\log\log A=1+\left(e^{-1-e}-1\right) x+\left(\frac{13}{24}-\frac{1}{2} e^{-2-2 e}-\frac{1}{2} e^{-1-2 e}+e^{-1-e}+e^{-e}\right) x^2+O\left(x^3\right)$$ $$\log\log\log A=\left(e^{-1-e}-1\right) x+\frac{1}{24} e^{-2-2 e} \left(-24-12 e+48 e^{1+e}+24 e^{2+e}+e^{2+2 e}\right) x^2+O\left(x^3\right)$$ where you can notice that the first term is $$-x\left[1-\frac{1}{e^{e+1}}\right]$$ So, the limit is $$\frac{1}{24} e^{-2-2 e} \left(-24-12 e+48 e^{1+e}+24 e^{2+e}+e^{2+2 e}\right)=\frac{1}{24}+\frac{1}{2} e^{-2 (1+e)} (2+e) \left(2 e^{1+e}-1\right)$$ All of the above successively used the Taylor expansion of $\log(1+y)$ close to $y=0$.
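Here is a numeric sanity check of the second-order coefficient with mpmath (my own sketch, run at high working precision to survive the cancellation):

```python
import mpmath as mp

mp.mp.dps = 50
e = mp.e
C = mp.mpf(1) / 24 + mp.mpf(1) / 2 * mp.exp(-2 * (1 + e)) * (2 + e) * (2 * mp.exp(1 + e) - 1)

def lll(x):
    A = x + (x + 1) ** ((x + 1) ** (1 / x) / x)
    return mp.log(mp.log(mp.log(A)))

# Expansion above: lll(x) = (e^(-1-e) - 1) x + C x^2 + O(x^3).
for x in [mp.mpf('1e-4'), mp.mpf('1e-6'), mp.mpf('1e-8')]:
    approx = (lll(x) - (mp.exp(-1 - e) - 1) * x) / x**2
    print(x, approx)    # approaches C as x -> 0
print('C =', C)
```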
A simple probability question about distribution
Yes, it is. You can set $X$ to be the number of problems solved per day, and it makes sense for its distribution to be Poisson. In particular: $$\mathbb{P}[X=x]=\frac{e^{-7}7^x}{x!},$$ $x=0,1,2,3,\dots$
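In Python, the model is a one-liner; a small sketch (mine), with rate $\lambda=7$:

```python
from math import exp, factorial

def pmf(x, lam=7):
    # P[X = x] for a Poisson(7) number of problems per day
    return exp(-lam) * lam**x / factorial(x)

print(pmf(7))                          # probability of exactly 7 in a day
print(sum(pmf(x) for x in range(40)))  # ~1.0, sanity check
```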
How to make π degree angle?
Yes, it is possible. This is plotted in GeoGebra: $$\angle A'OA=\pi^{\circ}$$
Prove that $\sum_{n=1}^\infty\frac{\mu(n)}{n}H_n\sum_{k=n+1}^\infty\frac{\mu(k)}{k^2}$ is convergent
With a very crude estimate, $$\left|\sum_{k\geq n+1}\frac{\mu(k)}{k^2}\right|\leq \sum_{k\geq n+1}\frac{1}{k^2}\leq \int_{n}^{+\infty}\frac{dx}{x^2}=\frac{1}{n} $$ since $\left|\mu(k)\right|\leq 1$. Hence the $n$-th term of the given series is at most $\frac{H_n}{n^2}$ in absolute value. Since $$\sum_{n\geq 1}\frac{H_n}{n^2}=2\,\zeta(3) $$ is a convergent series, the given series is (absolutely) convergent too.
For a countable infinite product, each (xi, Ti) is homeomorphic to a subspace of the product
You'll have to assume $X:=\prod_{i \in I} X_i$ is non-empty, or equivalently that all $X_i$ are non-empty. The Axiom of Choice then allows us to pick a point $a \in \prod_{i \in I} X_i$, and I'll fix that point from now on. Then for any $j \in I$ we define $e_j: X_j \to X$ by $$\pi_i(e_j(x))= \begin{cases} x & i=j\\ a_i & i \neq j\\ \end{cases}$$ and note that $e_j$ is continuous, as all $\pi_i \circ e_j$ are either a constant map (so continuous) or the identity on $X_j$ (also continuous). The universal property of products strikes again. No separate proof needed. $e_j$ is 1-1: $x \neq x'$ implies $e_j(x)_j \neq e_j(x')_j$, so $e_j(x) \neq e_j(x')$. To show it is an embedding, it is enough to show it has a continuous inverse $e_j[X_j] \to X_j$, and this inverse is $\pi_j\restriction_{e_j[X_j]}$, which is clearly continuous and the required inverse. Don't overcomplicate things. The idea of the map was OK, though. There is no doubt in my mind that we can fix such a point. I believe in AC, as do most topologists. Not going to give up Tychonoff's theorem, after all...
Probability of drawing the king of hearts and a red card
One way is by a counting procedure. There are $\binom{52}{2}$ two-card hands. They are all equally likely. We now count the favourables. There are $25$ hands that have the King of $\heartsuit$ and an additional red card, for there are $25$ red cards that are not the King of $\heartsuit$. Thus the required probability is $\frac{25}{\binom{52}{2}}$. For King of $\heartsuit$ and a black card, it's your turn. Another way: Imagine drawing the cards one at a time (it makes no difference to the probability). We will be happy if (i) we draw the King of $\heartsuit$ and then another red card, or (ii) we draw a red card other than the King of $\heartsuit$ and then the King of $\heartsuit$. We find the probability of (i). The probability the first card is the King of $\heartsuit$ is $\frac{1}{52}$. Given that this happened, the probability the next card is red is $\frac{25}{51}$. So the probability of (i) is $\frac{1}{52}\cdot \frac{25}{51}$. The probability of (ii) is the same. Add.
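Both computations agree, as a quick check confirms (a sketch of mine):

```python
from math import comb

p_count = 25 / comb(52, 2)                           # counting approach
p_seq = (1 / 52) * (25 / 51) + (25 / 52) * (1 / 51)  # sequential approach
print(p_count, p_seq)                                # both 25/1326, about 0.01885
```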
Rank of product of a matrix and its transpose
This is only true for real matrices. For instance $\begin{bmatrix} 1 & i \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ i & 0 \end{bmatrix}$ has rank zero. For complex matrices, you'll need to take the conjugate transpose.
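A numpy check of exactly this example (my addition):

```python
import numpy as np

A = np.array([[1, 1j], [0, 0]])
print(np.linalg.matrix_rank(A))               # 1
print(np.linalg.matrix_rank(A @ A.T))         # 0: plain transpose loses the rank
print(np.linalg.matrix_rank(A @ A.conj().T))  # 1: conjugate transpose keeps it
```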
Will $P$ be necessarily bounded?
Here is an extended hint. You're given a lot of assumptions to work with here: idempotent, meaning $P^2 = P$, and self-adjoint, meaning $\langle Pa, b\rangle = \langle a, Pb\rangle$ for any vectors $a$, $b$. Imagine combining these assumptions; say, consider $\langle P^2a, b\rangle$. On the one hand, this should be equal to $\langle Pa, b\rangle$ by idempotence. On the other, it should also be equal to $\langle PPa, b\rangle = \langle Pa, Pb \rangle$ by self-adjointness. If you choose $a = b$, then the latter can be related to $\| Pa \|$. Finally, relate the latter to the former, and try using the Cauchy-Schwarz inequality. Hope this helps!
Theories with Skolem function
You just need to combine two facts to prove this:

1. If a theory $T$ has built-in (also called definable) Skolem functions, then every substructure of a model of $T$ is an elementary substructure.
2. A theory $T$ has a universal axiomatization if and only if its class of models is closed under substructures.
An estimator whose variance attains Cramer-Rao lower bound is consistent
Given an unbiased estimator $\hat\theta$ of the parameter $\theta\in\Theta$, this estimator is efficient if its variance-covariance matrix equals the Cramér-Rao lower bound. In other words, the Cramér-Rao inequality provides a lower bound for the variance-covariance matrix of unbiased estimators. As you know, unbiasedness is different from consistency. To show the consistency of $\hat\theta$, one must show that $\operatorname{plim}\hat\theta = \theta_0$. If (i) the set $\Theta$ is compact; (ii) the objective function $\mathbb{Q}_0(\theta)$ is continuous and has a unique maximum at $\theta_0$; and (iii) $\widehat{\mathbb{Q}}_n(\theta)$ converges uniformly in probability to $\mathbb{Q}_0(\theta)$; then $\operatorname{plim}\hat\theta=\theta_0$. Let's consider the case of a linear model $Y_i=X_i\theta + \epsilon_i$. The OLS estimator (which is also the MLE under normal errors) is $\hat\theta=\theta_0+(X'X)^{-1}X'\epsilon$. If one assumes that $\operatorname{plim}\, n^{-1}(X'\epsilon)=0$ and that $\lim_{n\to \infty}n^{-1}X'X=Q$ where $Q$ is nonsingular, then $\hat\theta\rightarrow_p\theta_0$.
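For the linear-model case, a short simulation (an illustrative sketch of mine) shows the OLS estimate converging to $\theta_0$ as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)
theta0 = 2.0
for n in [10**2, 10**4, 10**6]:
    X = rng.normal(size=n)
    eps = rng.normal(size=n)
    y = X * theta0 + eps
    theta_hat = (X @ y) / (X @ X)   # OLS through the origin
    print(n, theta_hat)             # tightens around theta0 = 2.0
```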
graduate level introduction to elliptic curve cryptography
Neal Koblitz, one of the major figures in the development of ECC, has a graduate-level book (A Course in Number Theory and Cryptography) covering cryptography and some relevant areas of number theory. Somewhat needless to say, it has an extended discussion of elliptic curves. The only thing to note is that it is from 1994, so it may be a bit dated in some places. https://www.springer.com/mathematics/numbers/book/978-0-387-94293-3
Application of Uniform Boundedness Theorem to prove an equivalence involving sequences.
Since $\lVert T_nx\rVert\leqslant \lVert T_n\rVert\cdot \lVert x\rVert\leqslant \sup_{k\geqslant 1}\lVert T_k\rVert\cdot \lVert x\rVert$, one direction of the equivalence is immediate; the other direction, namely that pointwise boundedness of $(T_nx)_n$ forces $\sup_{k\geqslant 1}\lVert T_k\rVert<\infty$, is exactly the content of the uniform boundedness principle, and that was the hardest part.
exhibit a countable set of irrational numbers with justification
Take $\{n+\pi \mid n\in \mathbb{N}\}$. The map $n\mapsto n+\pi$ is an obvious bijection with $\mathbb{N}$, and each element is irrational: if $n+\pi$ were rational, then $\pi=(n+\pi)-n$ would be rational too.
Derivation algebra of direct sum of non associative algebras
This is proved for associative algebras by Jacobson in "Abstract derivation and Lie algebras", Trans. Amer. Math. Soc. 42 (1937), 206-224.
Is there a general formula for the integral $I_{n} = \int_{0}^{\frac{\pi}{2}} \sin^{2n-1}x + \sin^{2n-3}x + ... + \sin x dx, n\in \mathbb{N}$.
$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} &amp;\int_{0}^{\pi/2}\sum_{k = 1}^{n}\sin^{2k - 1}\pars{x}\,\dd x = \int_{0}^{\pi/2}\sin\pars{x}\,{\sin^{2n}\pars{x} - 1 \over \sin^{2}\pars{x} - 1} \,\dd x\,\,\, \stackrel{\cos\pars{x}\ \mapsto\ x}{=}\,\,\, \int_{0}^{1}{1 - \pars{1 - x^{2}}^{n} \over x^{2}}\,\dd x \\[5mm] = &amp;\ -1 + \int_{0}^{1}{1 \over x}\bracks{-n\pars{1 - x^{2}}^{n - 1}}\pars{-2x}\,\dd x = -1 + 2n\int_{0}^{1}\pars{1 - x^{2}}^{n - 1}\,\dd x \\[5mm] \stackrel{x^{2}\ \mapsto\ x}{=}\,\,\, &amp;\ -1 + n\int_{0}^{1}x^{-1/2}\,\pars{1 - x}^{n - 1}\,\dd x = -1 + n\,{\Gamma\pars{1/2}\Gamma\pars{n} \over \Gamma\pars{1/2 + n}} = -1 + {\pars{-1/2}!\,n! \over \pars{n - 1/2}!} \\[5mm] = &amp;\ {1 \over \ds{{n - 1/2 \choose n}}} - 1= \bbx{\ds{{2^{2n} \over \ds{{2n \choose n}}} - 1}} \end{align} A proof of the last identity can be seen in one of my previous answers.
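The closed form $\frac{2^{2n}}{\binom{2n}{n}}-1$ is easy to verify symbolically for small $n$ (a sympy sketch of mine):

```python
import sympy as sp

x = sp.symbols('x')
for N in range(1, 6):
    integrand = sum(sp.sin(x) ** (2 * k - 1) for k in range(1, N + 1))
    I = sp.integrate(integrand, (x, 0, sp.pi / 2))
    closed = sp.Integer(4) ** N / sp.binomial(2 * N, N) - 1
    print(N, sp.simplify(I - closed))   # 0 for each N
```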
Reduced suspension and mapping cone.
I think once you have established $\Sigma C(X) \cong C(\Sigma X)$ there is a more general argument you can follow to obtain the result. This goes as follows: A mapping cone of a morphism is obtained by forming the pushout diagram: $$\begin{array}{ccc} X & \stackrel{f}{\longrightarrow} & Y \\ \downarrow && \downarrow \\ C(X) & \longrightarrow & C(f) \end{array}$$ Then since the functor $\Sigma$ is left adjoint to the loop space functor, it follows that $\Sigma$ commutes with pushouts, i.e., applying $\Sigma$ to the above diagram again yields a pushout diagram $$\begin{array}{ccc} \Sigma X & \stackrel{\Sigma f}{\longrightarrow} & \Sigma Y \\ \downarrow && \downarrow \\ \Sigma C(X) & \longrightarrow & \Sigma C(f) \end{array}$$ Now using the homeomorphism $\Sigma C(X) \cong C(\Sigma X)$ you can see that $\Sigma C(f)$ is a model for the cone of the morphism $\Sigma f$.
Find the arc length of a curve. Problem integrating
I get the following: The line cutting the parabola is $\;y=\frac83x\;$, so it cuts the parabola whenever $$\frac{64}9x^2=y^2=4ax\iff 4x\left(\frac{16}9x-a\right)=0\iff x=0\;,\;\;x=\frac{9a}{16}$$ From this it follows at once the line intersects the parabola at the origin and at some point in the first quadrant, and here we have the function $$y=2\sqrt a\sqrt x\implies y'=\frac{\sqrt a}{\sqrt x}=\sqrt{\frac ax}$$ thus, the wanted length is given by $$\int\limits_0^{\frac{9a}{16}}\sqrt{1+\frac ax}\;dx\;\ldots$$
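Numerically, for the sample value $a=1$ (my choice), the substitution $x=t^2$ removes the $x^{-1/2}$ endpoint singularity and mpmath gives:

```python
import mpmath as mp

a = 1
upper = mp.mpf(9) * a / 16
# With x = t^2, the integrand sqrt(1 + a/x) dx becomes 2*sqrt(t^2 + a) dt.
val = mp.quad(lambda t: 2 * mp.sqrt(t**2 + a), [0, mp.sqrt(upper)])
print(val)   # about 1.63065 for a = 1, i.e. 15/16 + log(2)
```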
If a function is continuous everywhere, but undefined at one point, is it still continuous?
$g$ is continuous on its domain $[0,3)\cup(3,6]$. By the aforementioned definition (1), continuity requires the limits to converge to the actual value at each point of the domain; since $3$ is not in the domain, no condition is imposed there. For every point in the domain of $g$, we have the required convergence.