Iterative Power Regression
First you make the usual transformations $y \rightarrow \ln y$ and $x \rightarrow \ln x$. With these new variables you do the step-wise iterative update of the usual sums for $n=1,2,\dots$ (the sums are initialized to zero) \begin{aligned} M_x^{(n)} &= M_x^{(n-1)} + \frac{x_n-M_x^{(n-1)}}{n}\\ M_y^{(n)} &= M_y^{(n-1)} + \frac{y_n-M_y^{(n-1)}}{n}\\ S_x^{(n)} &= S_x^{(n-1)} + \frac{(n-1)\left(x_n-M_x^{(n-1)}\right)^2}{n}\\ S_y^{(n)} &= S_y^{(n-1)} + \frac{(n-1)\left(y_n-M_y^{(n-1)}\right)^2}{n}\\ C_{xy}^{(n)} &= C_{xy}^{(n-1)} + \frac{(n-1)(x_n-M_x^{(n-1)}) (y_n-M_y^{(n-1)})}{n} \end{aligned} After each iterative step you can compute your values as $$b^{(n)} = \frac{C_{xy}^{(n)}}{S_x^{(n)}} $$ and $$\ln a^{(n)} = M_y^{(n)} - b^{(n)} M_x^{(n)}.$$
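For reference, here is a minimal Python sketch of these Welford-style updates (the class and method names are my own, and the data are assumed positive so the logs exist):

```python
import math

class OnlinePowerFit:
    """Incrementally fit y = a * x^b by regressing ln y on ln x."""
    def __init__(self):
        self.n = 0
        self.mx = self.my = 0.0    # running means M_x, M_y
        self.sx = self.sy = 0.0    # running sums of squares S_x, S_y
        self.cxy = 0.0             # running co-moment C_xy

    def update(self, x, y):
        lx, ly = math.log(x), math.log(y)     # the usual log-log transform
        self.n += 1
        dx, dy = lx - self.mx, ly - self.my   # deviations from the old means
        self.mx += dx / self.n
        self.my += dy / self.n
        self.sx += (self.n - 1) * dx * dx / self.n
        self.sy += (self.n - 1) * dy * dy / self.n
        self.cxy += (self.n - 1) * dx * dy / self.n

    def params(self):
        b = self.cxy / self.sx                # b = C_xy / S_x
        a = math.exp(self.my - b * self.mx)   # ln a = M_y - b * M_x
        return a, b

# usage: fit = OnlinePowerFit(); fit.update(1.0, 3.0); fit.update(2.0, 12.0)
# fit.params() then returns (3.0, 2.0) for data from y = 3 x^2
```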
Tricky integration/functions problem
So we have $$f\left(\frac{1}{x}\right) = \int_{1}^{x} \frac{\ln(t)}{t \cdot (1+t)} \ dt$$ Adding the two, $$ f(x) + f\left(\frac{1}{x}\right) =\int_{1}^{x} \frac{\ln(t)}{t} \ dt =\frac{(\ln{x})^{2}}{2}$$ Addendum: here is the derivation of the first identity. \begin{align*} f\left(\frac{1}{x}\right) &= \int_{1}^{1/x} \frac{\ln(t)}{1+t}\ dt \\ &= \int_{1}^{x} \frac{\ln(1/v)}{1+\frac{1}{v}} \cdot \left(-\frac{dv}{v^2}\right) \qquad t\mapsto \frac{1}{v} \\ &=\int_{1}^{x} \frac{\ln(v)}{v \cdot (1+v)} \ dv =\int_{1}^{x} \frac{\ln(t)}{t \cdot (1+t)} \ dt \end{align*}
Simplify trigonometric functions by only considering integer inputs?
Notice that $$\begin{align}\sin\left(\frac{4\pi t}{3}\right) &= \sin\left(\frac{6\pi t}{3} - \frac{2\pi t}{3}\right)\\ &= \sin\left(2\pi t - \frac{2\pi t}{3}\right)\\ &= - \sin \left(\frac{2\pi t}{3}\right)\end{align}$$ For the cosine part, $$\begin{align}\cos\left(\frac{4\pi t}{3}\right) &= \cos\left(\frac{6\pi t}{3} - \frac{2\pi t}{3}\right)\\ &= \cos\left(2\pi t - \frac{2\pi t}{3}\right)\\ &= \cos \left(\frac{2\pi t}{3}\right)\end{align}$$ If we apply these two identities to the given expression, then we have $$ 2 \sqrt{3} \sin \left(\frac{\pi t}{3}\right)+2\sqrt{3} \sin \left(\frac{2 \pi t}{3}\right)+6 \cos \left(\frac{\pi t}{3}\right)+2\cos \left(\frac{2 \pi t}{3}\right)\\ $$ With the so-called 'R-Formulae', this becomes $$\sqrt{48}\sin\left(\frac{\pi t}{3} + \arctan{\frac{3}{\sqrt{3}}}\right) + \sqrt{16}\sin\left(\frac{2\pi t}{3} + \arctan{\frac{1}{\sqrt{3}}}\right)\\ = {4\sqrt{3}\sin\left(\frac{\pi t}{3} + \frac{\pi}{3}\right) + 4\sin\left(\frac{2\pi t}{3} + \frac{\pi}{6}\right)}$$ $$= {4\sqrt{3}\sin\left(\frac{(t + 1)\pi}{3}\right) + 4\sin\left(\frac{(4t + 1)\pi}{6}\right)}$$ ... and this is as far as I got. I am unsure what you mean by "systematic". If you are referring to a general approach of summing up $\sin$'s and $\cos$'s that does not depend on one's observation skills, then perhaps you can make use of complex numbers. Sum up the sines by summing up their corresponding complex numbers and taking the imaginary part of the result, and then do the same for the cosines, this time taking the real part of the result.
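Just to double-check the last two rewriting steps numerically, here is a quick sketch of my own (the function names are made up); the R-formula identity holds for all real $t$, not just integers:

```python
import math

def four_term(t):
    return (2*math.sqrt(3)*math.sin(math.pi*t/3) + 2*math.sqrt(3)*math.sin(2*math.pi*t/3)
            + 6*math.cos(math.pi*t/3) + 2*math.cos(2*math.pi*t/3))

def two_term(t):
    return (4*math.sqrt(3)*math.sin((t + 1)*math.pi/3)
            + 4*math.sin((4*t + 1)*math.pi/6))

for t in [0, 1, 2, 3, 0.5, 7.25]:
    assert abs(four_term(t) - two_term(t)) < 1e-9
```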
How do you show $\int \limits_{X \times Y} f(x,y)\, d\lambda < \infty$ if $\int \limits_{X} \int \limits_{Y} f(x,y) \,d\nu \,d\mu < \infty$?
Here's a classical example. Let $\nu$ be Lebesgue measure on $[0,1]$ and let $\mu$ be counting measure on $[0,1]$. Both are complete measures. Then define $f(x,y)=0$ if $x \ne y$ and $f(x,x)=1$ for $0 \le x \le 1$. This function is jointly measurable. Check the three integrals: the one with respect to the product measure and the two iterated integrals. You get three different values. (By the way: knowing the two iterated integrals are not the same tells you the value of the integral with respect to the product measure.)
Evaluating $\prod_{k=1}^m \tan \frac{k\pi}{2m+1}$
So you can start by breaking the product to get: $$\prod_{k=1}^{2m}\tan\left(\frac{k\pi}{2m+1}\right)=\prod_{k=1}^{2m}\sin\left(\frac{k\pi}{2m+1}\right)\prod_{k=1}^{2m}\frac{1}{\cos\left(\frac{k\pi}{2m+1}\right)}$$ Now: $$\prod_{k=1}^{2m}\cos\left(\frac{k\pi}{2m+1}\right)=\frac{(-1)^{m}}{2^{2m}}$$ Proof: $$\cos\left(\frac{k\pi}{2m+1}\right)=\exp\left(\frac{ik\pi}{2m+1}\right)\frac{1+\exp\left(\frac{-2ik\pi}{2m+1}\right)}{2}$$ If we take the polynomial $z^{2m+1}-1$, then its $2m+1$ roots of unity are $\exp\left(\frac{-2ik\pi}{2m+1}\right)$, $1\le k \le 2m+1$. Thus: $$z^{2m+1}-1=-\prod_{k=1}^{2m+1}\left(-z+\exp\left(\frac{-2ik\pi}{2m+1}\right)\right)$$ Evaluating at $z=-1$ (the factor with $k=2m+1$ equals $2$): $$\frac{(-1)^{2m+1}-1}{2}=-1=-\prod_{k=1}^{2m}\left(1+\exp\left(\frac{-2ik\pi}{2m+1}\right)\right)$$ $$\therefore \prod_{k=1}^{2m}\cos\left(\frac{k\pi}{2m+1}\right)=\frac{1}{2^{2m}}\exp\left(\frac{i(2m)(2m+1)\pi}{2(2m+1)}\right)=\frac{(-1)^{m}}{2^{2m}}$$ Now we have to evaluate: $$\prod_{k=1}^{2m}\sin\left(\frac{k\pi}{2m+1}\right)$$ Using a process similar to the one above we get: $$\prod_{k=1}^{2m}\sin\left(\frac{k\pi}{2m+1}\right)=(-1)^{m}\lim_{z\to1}\frac{z^{2m+1}-1}{(z-1)(2i)^{2m}}=\frac{(2m+1)(-1)^{m}}{(2i)^{2m}}$$ Okay, we have these two products, but how do they relate to the original problem? Well, $$\tan\left(\frac{(2m+1-k)\pi}{2m+1}\right)=\tan\left(\pi-\frac{k\pi}{2m+1}\right)=-\tan\left(\frac{k\pi}{2m+1}\right)$$ thus: $$\prod_{k=1}^{2m}\tan\left(\frac{k\pi}{2m+1}\right)=\prod_{k=1}^{m}\tan\left(\frac{k\pi}{2m+1}\right)\prod_{k=m+1}^{2m}\tan\left(\frac{k\pi}{2m+1}\right)=(-1)^{m}\left(\prod_{k=1}^{m}\tan\left(\frac{k\pi}{2m+1}\right)\right)^{2}$$ $$\prod_{k=1}^{2m}\tan\left(\frac{k\pi}{2m+1}\right)=\frac{(2m+1)}{(2i)^{2m}}2^{2m}=(-1)^{m}(2m+1)$$ $$\therefore (-1)^{m}\left(\prod_{k=1}^{m}\tan\left(\frac{k\pi}{2m+1}\right)\right)^{2}=(-1)^{m}(2m+1)$$ $$\therefore \prod_{k=1}^{m}\tan\left(\frac{k\pi}{2m+1}\right)=\sqrt{2m+1},$$ taking the positive square root since $\tan\frac{k\pi}{2m+1}>0$ for $1\le k\le m$.
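For what it's worth, the closed form is easy to confirm numerically; a small check of my own (`math.prod` needs Python 3.8+):

```python
import math

for m in range(1, 10):
    prod = math.prod(math.tan(k*math.pi/(2*m + 1)) for k in range(1, m + 1))
    assert abs(prod - math.sqrt(2*m + 1)) < 1e-9
```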
Divergent? $\lim_{x\rightarrow\infty} A\left(\int_{0}^{\frac{x}{B}} te^{-t}\ dt\right)$
We can use integration by parts to get $$\int_0^{x/B} t e^{-t}\, dt = \left[ -t e^{-t}\right]_0^{x/B} - \int_0^{x/B} (-e^{-t})\, dt = \frac{-x}{B} e^{-x/B} - \left[ e^{-t}\right]_0^{x/B} = \frac{-x}{B} e^{-x/B} - e^{-x/B} + 1$$ Assuming that $B > 0$, the limit of this function as $x\to \infty$ is $1$. Thus your original integral converges to the value $A$.
How does the definition of a type of algebraic structures in the Stacks Project differ from the usual definition in universal algebra?
I'll rename $F$ to $U$ (for "underlying"). This definition is equivalent to the universal algebra definition if we require either that $C$ is locally presentable or that $U$ has a left adjoint (call this axiom 2'), and also replace filtered colimits in axiom 3 with sifted colimits (call this axiom 3'), as follows: If $C$ is locally presentable then $C$ has limits, $U$ preserves them, and $U$ is accessible, so by the locally presentable adjoint functor theorem, $U$ has a left adjoint $F : \text{Set} \to C$ ($F$ for "free"). Alternatively we can just require that $U$ has a left adjoint in the axiomatization. By the crude monadicity theorem, $U$ has a left adjoint, is conservative, and preserves reflexive coequalizers (using the fact that reflexive coequalizers are sifted), hence the adjunction $F \dashv U$ is monadic. So $C$ is the category of algebras of a monad $UF$ on $\text{Set}$. Since $U$ preserves filtered colimits and $F$ is cocontinuous, $UF$ preserves filtered colimits and is therefore a finitary monad. Finitary monads on $\text{Set}$ are in turn equivalent to Lawvere theories, and categories of models of Lawvere theories are exactly universal algebraic structures in the sense you describe. Moreover finitary monads on $\text{Set}$ preserve sifted colimits and their categories of algebras are locally presentable, so this definition is equivalent to the modification with 2' and 3'. Every example the Stacks project lists in tag 007L, and I bet every example they discuss anywhere, is the category of models of a Lawvere theory and so satisfies the stronger axioms 2' and 3'. Personally I find "category of models of a Lawvere theory" to be the cleanest way of describing algebraic structures and I don't understand why the Stacks project is using such a bizarre axiomatization. With the Stacks project's axiomatization you a priori need to verify all of these axioms by hand for each example, but instead you can just prove once and for all that categories of models of Lawvere theories satisfy all of these axioms, and categories of algebraic structures in the sense of universal algebra are categories of models of Lawvere theories. This argument implies that the Stacks project's definition could be strictly more general than universal algebra in two different ways: $U$ could fail to have a left adjoint (which implies that $C$ is not locally presentable), or $U$ could have a left adjoint but the adjunction $F \dashv U$ could fail to be monadic (which implies that $U$ does not preserve reflexive coequalizers and so does not preserve sifted colimits). An example of the first situation is the following: take $C$ to be the category of $M$-sets for $M$ a "large monoid" (a "monoid" whose underlying "set" is a proper class, say the free monoid on a proper class to be specific). Limits and colimits are computed as in $\text{Set}$, so the forgetful functor $U : C \to \text{Set}$ preserves them, and $U$ is also faithful and conservative. But $U$ does not have a left adjoint because $M$ is not a set, and $C$ is not locally presentable for the same reason. I don't know an example of the second situation off the top of my head. It seems tricky to write down an example where $U$ preserves filtered colimits but not reflexive coequalizers. Maybe a finite limit sketch could be used here.
Edit: I think $C = \text{Cat}$ (the category of small categories) and $U : \text{Cat} \to \text{Set}$ the forgetful functor given by sending a small category to the disjoint union of its set of objects and its set of morphisms is an example but I am too lazy to check explicitly that $U$ preserves filtered colimits but not reflexive coequalizers; anyone else want to take a stab?
How would you find the Cartesian Equation that fits the following requirements?
The vectors normal to your planes are $(3,-2,0)$ and $(3,0,4)$. Find their cross product $$ (3,-2,0) \times (3,0,4) = (-8,-12,6)$$ which is a vector parallel to the line of intersection (LOI). Using the given point the LOI is $$(x,y,z)= (3,-1,2)+t(-8,-12,6)$$ To convert to a Cartesian equation eliminate $t$ in the system of equations $$\begin{cases} x=3-8t & \\ y=-1-12t& \\ z=2+6t & \end{cases}$$
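A quick NumPy check of the cross product and the direction vector (variable names are mine):

```python
import numpy as np

n1, n2 = np.array([3, -2, 0]), np.array([3, 0, 4])
d = np.cross(n1, n2)          # direction of the line of intersection
print(d)                      # -> [-8, -12, 6]
# the direction must be orthogonal to both plane normals
assert np.dot(d, n1) == 0 and np.dot(d, n2) == 0
```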
Conformal transformations
Hint: $f'(z)$ is essentially the Jacobian matrix of the map $$(u(x,y),v(x,y)):\mathbb R^2\rightarrow\mathbb R^2$$
Real-valued continuous function defines subspace
Disclaimer: While calling a function "$x$" isn't forbidden in any way, when dealing with functions I'll avoid using $x$ or $y$. Your misconception in the comment is the exact reason why. So for the rest of the answer, functions will be denoted $f$ or $g$. In each question, I'll also denote by $A$ the subset defined by your equation. You say you know how to do it with vectors, and even give the conditions you have to check. It seems that you're stressing too much about your objects being functions, but you can treat them exactly like you would treat vectors. So, let's treat your cases one by one. 1/ $f(1)+f(2)=2$ This question has already been answered in the comments by mfl: the function $f: x \mapsto 0$, i.e. the zero function, doesn't satisfy the equation, so it doesn't belong to the subset. If the zero function isn't in the subset, the subset can't be a subspace. 2/ $f(2)=0$ Once again, the comments solved it: a/ The zero function trivially belongs to the subset. b/ It is closed under addition; indeed: $\forall f,g \in A, (f+g)(2) = f(2)+g(2)= 0+0 = 0 $ c/ It is closed under scalar multiplication: $\forall \lambda \in \mathbb R,\forall f \in A, (\lambda f)(2)= \lambda f(2)= \lambda \cdot 0 = 0$ Therefore, your subset $A$ is a subspace. 3/ $f$ is periodic with period $2\pi$ This one is a bit more tricky (not much), because you don't have a defined value for $f(x)$. All you know is that: $\forall k \in \mathbb Z, \forall x \in \mathbb R, f(x+2k\pi) = f(x)$ By using this, you can prove that $A$ is once again a subspace.
How to prove that $\frac{x}{a} + \frac{y}{b} = 1$ where $a$ is $x$-intercept and $b$ is $y$-intercept
If the $x$-intercept is $a$ then $(a,0)$ is on the line. If the $y$-intercept is $b$ then $(0,b)$ is on the line. Now you have two points of the line and can get its formula. The slope between these points is $m={(0-b)\over(a-0)}=-b/a$. So the equation of the line is $y=mx+b=-bx/a+b$. Now rearrange as $y+bx/a=b$ and divide by $b$ to get $y/b+x/a=1$.
Test if line segment formed by two points intersect with image edges
If you're working on a pixel grid, and the points are exact pixel centers, then precomputing all answers for all possible input pairs and hashing the results lets you simply query the hashtable, which gives you an expected $O(1)$ runtime. Of course, the precomputation is a bit expensive. :( The real point of this answer is not to suggest that as a serious approach, but rather to suggest that you might want to formulate the problem a little more precisely. I personally suspect that $O(1)$, without precomputation, is unlikely. This is, by the way, the 2D visibility problem, much discussed by those who develop computer games, who want to know if the person at the red dot can shoot and kill the person at the blue dot, etc. If you did a web search on '2D visibility' and perhaps 'portal', you might find it instructive.
Question about union with $\{∅\}$: $\{1,2\}\cup \{∅\}=\{1,2,∅\}$?
$\varnothing = \{\}$ It is the empty set. It is not just nothing. It is a set with no elements. Sets can be elements of other sets. The set containing the empty set is not empty: it contains the empty set. $\{\varnothing\}=\{\{\}\}$ So we have that: $\{1,2\}\cup\{\} = \{1,2\}$ but $\{1,2\}\cup\{{\small\{\}}\} = \{1,2,{\small\{\}}\}$ Similar to: $\{1,2\}\cup\{3\} = \{1,2,3\}$ but $\{1,2\}\cup\{{\small\{3\}}\} = \{1,2,{\small\{3\}}\}$
Restriction of ${\rm spin}^c$ structures
$\text{Spin}^c(X)$ is a torsor (affine space) over $H^2(X, \mathbb{Z})$, similarly for $\partial X$. The action of $\alpha \in H^2(X, \mathbb{Z})$ is by tensoring the complex spinor bundle $\Sigma \to X$ with the line bundle $L$ specified by $\alpha$. Hence, surjectivity follows in the present situation. If $\dim X = 4$, then$$c_1(\Sigma \otimes L) - c_1(\Sigma) = 2c_1(L) \in H^2(X, \mathbb{Z})$$because $\Sigma$ has rank $2$. So the first Chern class of $\Sigma$ is not necessarily enough to specify $\Sigma$.
A problem about Sobolev Spaces
Consider the simple case $\Omega=(0,1)$. A function $u$ is in $H^2_0(\Omega)$ if there exists a sequence $u_n$ of $C^\infty(\Omega)$ functions with compact support such that $$ \lim_{n\to\infty}\int_0^1\bigl(|u-u_n|^2+|u'-u_n'|^2+|u''-u_n''|^2\bigr)=0. $$ In particular $u'\in H^1_0(\Omega)$. The function $u(x)=\sin(\pi\,x)$ is in $H^1_0\cap H^2$, but $u'\not\in H^1_0$, so that $u\not\in H^2_0$.
If $A$ is symmetric show that $(BA^{-1})^T(A^{-1}B^T)^{-1}=I$
You just have to use $$ (X^T)^{-1}=(X^{-1})^T, \quad (XY)^T=Y^TX^T, \quad (XY)^{-1}=Y^{-1}X^{-1}$$ for all square matrices $X,Y$ (with existing inverses). Didn't you use these identities above? $$ (BA^{-1})^T(A^{-1}B^T)^{-1}=((A^{-1})^TB^T)((B^T)^{-1}(A^{-1})^{-1})\\ =(A^{-1})^T(B^T(B^T)^{-1})A\\ =(A^{-1})^TA\\ =(A^{-1})^TA^T \\ =I $$ In the fourth equality you need that $A$ is symmetric.
How to solve difficult exponential equation
Let $y=\exp(1/x)$ and $n_i'=\exp(n_i)$ and multiply both sides by $y^{n_4}$ to get $$y^{n_1+n_4}+(n_2'-n_5')y^{n_4}+n_3'=0$$ which is a trinomial in $y$, which cannot be algebraically solved in general. It can, of course, be solved numerically.
There is a subset of positive integers which no computer program can print
There is a really elegant proof for this that requires virtually no math background at all. Every computer program is a finite sequence of bytes, which is just a number in base 256. So each computer program can be represented as a unique natural number. This statement is elaborated in detail below the divide. If a computer program prints its own number, then that program is blue and its number is blue. If a program does not print its own number, then that program is red and its number is red. The set of red numbers is a subset of the natural numbers. Now write a program that prints this set. Is that program red or blue? Suppose the program is red. Then it must print its own number as part of the set, but this causes it to be a blue program. Suppose the program is blue. Then it must print its own number as a blue program, but this causes its output to not be the set of red numbers. This program is impossible! Therefore, there must exist at least one set which no program can print. This is how I learned the Cantor set/subset inequality. I couldn't find a better link, but I got it from a Martin Gardner book. Addendum Let's get into the statement that each computer program can be represented as a unique natural number. This will involve some math, particularly working in binary. We are going to create a Gödel numbering for computer programs. Assume every program can be represented as a finite string of 0's and 1's. This accurately describes real world programs and the input to a Universal Turing Machine. So any given program $X$ is $x_1x_2\ldots x_N$ where $x_i$ is 0 or 1 for each $i$. Let us define $\text{Num}(X)$ as the binary number $1x_1x_2\ldots x_N$. For the program $X = $ '0110', $\text{Num}(X)$ would be '10110' in binary, which is 22. $\text{Num}(X)$ gives each program a unique natural number because... If two programs have differing lengths, then the longer program has a greater $\text{Num}$ than the shorter (thanks to the leading 1). If two programs have identical length but differ on some bit, then the binary representations differ at that bit, so $\text{Num}$ will differ between the programs. In any other case, the two programs have identical length and identical bits, meaning they are identical programs. Now we can get on to the set theory part of the proof. That proof assumed binary, but as long as your program is finitely describable in any language which uses a finite number of characters (like English!) you can similarly map it to a unique natural number. This is interesting because it extends the proof from programs to any describable concept (like genie wishes!).
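A minimal sketch of $\text{Num}$ in Python (the function name is mine), assuming programs are given as bit strings:

```python
def num(program_bits: str) -> int:
    """Map a binary program string to a unique natural number by
    prepending a leading 1, so leading zeros are not lost."""
    return int("1" + program_bits, 2)

assert num("0110") == 22          # the example from the text
assert num("0") != num("00")      # distinct programs get distinct numbers
```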
What is the difference between a scalar and a vector field?
A scalar is a number, like 3 or 0.227. It has a bigness (3 is bigger than 0.227) but not a direction. Or not much of one; negative numbers go in the opposite direction from positive numbers, but that's all. Numbers don't go north or east or northeast. There is no such thing as a north 3 or an east 3. A vector is a special kind of complicated number that has a bigness and a direction. A vector like $(1,0)$ has bigness 1 and points east. The vector $(0,1)$ has the same bigness but points north. The vector $(0,2)$ also points north, but is twice as big as $(0,1)$. The vector $(1,1)$ points northeast, and has a bigness of $\sqrt2$, so it's bigger than $(0,1)$ but smaller than $(0,2)$. For directions in three dimensions, we have vectors with three components. $(1,0,0)$ points east. $(0,1,0)$ points north. $(0,0,1)$ points straight up. A scalar field means we take some space, say a plane, and measure some scalar value at each point. Say we have a big flat pan of shallow water sitting on the stove. If the water is shallow enough we can pretend that it is two-dimensional. Each point in the water has a temperature; the water over the stove flame is hotter than the water at the edges. But temperatures have no direction. There's no such thing as a north or an east temperature. The temperature is a scalar field: for each point in the water there is a temperature, which is a scalar, which says how hot the water is at that point. A vector field means we take some space, say a plane, and measure some vector value at each point. Take the pan of water off the stove and give it a stir. Some of the water is moving fast, some slow, but this does not tell the whole story, because some of the water is moving north, some is moving east, some is moving northeast or in other directions. Movement north and movement west could have the same speed, but the movement is not the same, because it is in different directions. To understand the water flow you need to know the speed at each point, but also the direction that the water at that point is moving. Speed in a direction is called a "velocity", and the velocity of the swirling water at each point is an example of a vector field. I think the only other thing to know is that in one dimension, say if you had water in a long narrow pipe instead of a flat dish, vectors and scalars are the same thing, because in one dimension there is only one way to go, forwards. Or you can go backwards, which is just like going forwards a negative amount. But there is no north or east or northeast. So one-dimensional vectors are interchangeable with scalars: all the vector stuff works for scalars, if you pretend that the scalars are one-dimensional vectors. If this isn't clear please leave a comment.
Obtaining expression for a conditional expectation
You're on the right track. Because $E(X \mid K = k) = k$ for all $k > 0$, we have $E(X \mid K) = K$, and $$E(X) = E(E(X \mid K)) = E(K) = \frac{1}{\lambda}.$$
How to find a matrix $Y$ with $Y^2≠0$ while $Y^3=0$.
Since you are asking for a method, not for an example, here's something more general. You are given a monic polynomial $P=X^3$ of degree $d=3$, and asked to construct a matrix $A$ such that substituting it into $P$ gives the zero matrix: $P[A]=0$, but the same does not hold for any lower degree nonzero polynomial. If this is the case then $P$ is called the minimal polynomial of $A$ (every matrix has one). A systematic way to solve this is to imagine a nonzero vector $v$ and to assume that $v,Av,A^2v,\ldots,A^{d-1}v$ form a basis of the vector space; this ensures that any nonzero polynomial $Q$ with $\deg Q<d$ will have $Q[A]v\neq0$, so certainly $Q[A]\neq0$. In fact you can start out with a basis, and make $A$ map each non-final basis vector to the next basis vector. That leaves the image by $A$ of the final basis vector $A^{d-1}v$ free to choose. That image will become $A^dv$. If we split off the first term of $P$, so $P=X^d+R$ with $\deg R<d$, then the requirement $P[A]v=0$ gives $A^dv=-R[A]v$, and the right hand side is a specific linear combination of $v,Av,A^2v,\ldots,A^{d-1}v$. So we can set the image by $A$ of $A^{d-1}v$ to be that linear combination. It may seem that this only ensures $P[A]v=0$ rather than $P[A]=0$. However, every vector $w$ can be written as $w=Q[A]v$ for some polynomial $Q$ (which can be chosen of degree ${}<d$); then $P[A]w=P[A](Q[A]v)=(PQ)[A]v$ since composition of polynomials in $A$ corresponds to multiplication of polynomials; since the latter operation is commutative, one can further rewrite the expression as $Q[A](P[A]v)=Q[A]0=0$. So $P[A]w=0$ for all vectors $w$, whence $P[A]=0$. If $P=c_0+c_1X+\cdots+c_{d-1}X^{d-1}+X^d$, then $-R[A]v=-c_0v-c_1Av-\cdots-c_{d-1}A^{d-1}v$, so the final column of the matrix has entries $-c_0,-c_1,\ldots,-c_{d-1}$ and the whole matrix is $$ \begin{pmatrix} 0 & 0 & \dots & 0 & -c_0 \\ 1 & 0 & \dots & 0 & -c_1 \\ 0 & 1 & \dots & 0 & -c_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \dots & 1 & -c_{d-1} \end{pmatrix} $$ This is the companion matrix of $P$, which is the standard example of a matrix with minimal polynomial $P$. This method builds a square matrix of size $d$, and this is indeed the minimal size necessary for an example. In your question $d=3$ and $R=0$, so you get the matrix $$ \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}. $$
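Here is a short NumPy sketch of the construction (the names are mine), encoding $P$ by its coefficient list $c_0,\dots,c_{d-1}$:

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of the monic polynomial
    c0 + c1*X + ... + c_{d-1}*X^(d-1) + X^d."""
    d = len(coeffs)
    A = np.zeros((d, d))
    A[1:, :-1] = np.eye(d - 1)        # map each basis vector to the next
    A[:, -1] = [-c for c in coeffs]   # last column: -c_0, ..., -c_{d-1}
    return A

Y = companion([0, 0, 0])              # P = X^3, so all c_i = 0
assert (Y @ Y).any()                  # Y^2 != 0
assert not (Y @ Y @ Y).any()          # Y^3 == 0
```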
topology neighborhood base
HINT: The first step is to see exactly what topology (if indeed it is one) is being defined. What’s intended is to let $$\tau=\left\{U\subseteq X:\forall x\in U\exists n\in\Bbb Z^+\big(B_n(x)\subseteq U\big)\right\}\;,$$ and the main assertion is that $\tau$ is a first countable topology on $X$. To prove that $\tau$ is a topology on $X$, you must show three things: $\varnothing,X\in\tau$. If $\mathscr{U}\subseteq\tau$, then $\bigcup\mathscr{U}\in\tau$; i.e., $\tau$ is closed under arbitrary unions. If $U,V\in\tau$, then $U\cap V\in\tau$; i.e., $\tau$ is closed under finite intersections. All of these follow very easily from the definition of $\tau$; just remember that $B_n(x)\subseteq B_m(x)$ whenever $n\ge m$. It’s also easy to see that this topology makes $\{B_n(x):n\in\Bbb Z^+\}$ a nbhd base at $x$ for each $x\in X$, though not necessarily of open nbhds. The final assertion, that $\langle X,\tau\rangle$ is separable iff $d$ is a metric, is false in both directions. For one direction let $X=\Bbb Z$, and define the pseudometric $d$ by $$d(m,n)=\begin{cases}1,&\text{if }|m|\ne|n|\\0,&\text{otherwise}\;.\end{cases}$$ Then $d(-1,1)=0$, so $d$ is not a metric, but $\Bbb Z$ is countable, so the space is necessarily separable. In this space a set $U$ is open iff it has the following property: if $n\in U$, then $-n\in U$. For the other direction let $d$ be the discrete metric on $\Bbb R$ given by $$d(x,y)=\begin{cases}1,&\text{if }x\ne y\\0,&\text{otherwise}\;.\end{cases}$$ Then $d$ is a metric, and every subset of $\Bbb R$ is open. Since $\Bbb R$ is uncountable, this space is not separable.
A wizard has $5$ friends. How many days does the wizards’ conference last? (Inclusion–Exclusion)
This question is really asking how many dinners occurred. Assume only 1 dinner per day. The problem gives us 6 pieces of information about how many friends the wizard ate dinner with: how many days with 0 friends, how many with 1 friend, up to how many with all 5 friends. The Inclusion-Exclusion Formula says $$|A_1\cup A_2\cup A_3\cup \cdots \cup A_{n-1}\cup A_n| = S_1 - S_2 + S_3 - S_4 + \cdots +(-1)^n S_{n-1} +(-1)^{n+1}S_n$$ which means that for a union of potentially overlapping sets $A_1, A_2, A_3, \ldots, A_{n-1}, A_n$, with $S_i$ the sum of the sizes of all $i$-fold intersections, you add the terms in which an event is counted an odd number of times and subtract the terms in which an event is counted an even number of times. The giant formula turns into "add $S_1, S_3, S_5, \ldots$ and subtract $S_2, S_4, S_6, \ldots$" The 6 pieces of information from the problem correspond to these terms: $S_1={5 \choose 1}*10$ We have to choose 1 of the 5 friends ("it met any given friend at dinner 10 times") $S_2={5 \choose 2}*5$ How many ways to choose 2 friends ("any given pair of friends 5 times") $S_3={5 \choose 3}*3$ How many ways to choose 3 friends ("any given threesome of friends 3 times") $S_4={5 \choose 4}*2$ How many ways to choose 4 friends ("any given foursome 2 times") $S_5={5 \choose 5}*1$ How many ways to choose 5 friends ("all 5 friends together once") $S_1 - S_2 + S_3 - S_4 + S_5$ is how many nights the wizard ate with at least 1 friend. But we can't forget about the 6 times he ate alone. The total number of dinners is $S_1 - S_2 + S_3 - S_4 + S_5 +6$ $={5 \choose 1}*10 - {5 \choose 2}*5 + {5 \choose 3}*3 - {5 \choose 4}*2 + {5 \choose 5}*1 + 6$ $=5*10 - 10*5 + 10*3 - 5*2 + 1*1 + 6 $ $=50 -50 +30 -10 +1 +6$ $=27$
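The final arithmetic can be double-checked in a few lines of Python (a sketch using `math.comb`):

```python
from math import comb

# S_1 ... S_5, built from the six pieces of information in the problem
S = [comb(5, k) * v for k, v in zip(range(1, 6), [10, 5, 3, 2, 1])]
print(S)                                             # [50, 50, 30, 10, 1]
at_least_one = sum(s if i % 2 == 0 else -s for i, s in enumerate(S))
print(at_least_one + 6)                              # 27 dinners in total
```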
How to find the following sum? $\sum\limits_{n = 0}^\infty {\left( {\frac{1}{{4n + 9}} - \frac{1}{{4n + 7}}} \right)} $
Here is a method without complex analysis. I use the following two: $$\int_0^1 x^{4n+8}\,dx=\frac{1}{4n+9}$$ $$\int_0^1 x^{4n+6}\,dx=\frac{1}{4n+7}$$ to get: $$\sum_{n=0}^{\infty} \left(\frac{1}{4n+9}-\frac{1}{4n+7}\right)=\int_0^1 \sum_{n=0}^{\infty} \left(x^{4n+8}-x^{4n+6}\right)\,dx=\int_0^1 \frac{x^8-x^6}{1-x^4}\,dx$$ $$\Rightarrow \int_0^1 \frac{x^8-x^6}{1-x^4}\,dx=\int_0^1 \frac{-x^6}{1+x^2}\,dx=-\left(\int_0^1 \frac{1+x^6-1}{1+x^2}\,dx\right)$$ $$=-\int_0^1 \frac{1+x^6}{1+x^2}\,dx+\int_0^1 \frac{1}{1+x^2}\,dx$$ Write $1+x^6=(1+x^2)(1-x^2+x^4)$ to obtain: $$-\int_0^1 (x^4-x^2+1)\,dx+\int_0^1 \frac{1}{1+x^2}\,dx$$ Both the integrals are easy to evaluate, hence the final answer is: $$\boxed{\dfrac{\pi}{4}-\dfrac{13}{15}}$$
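A numerical sanity check of the boxed value (my own sketch; the partial sum converges with an error of roughly $1/(8N)$):

```python
import math

N = 200_000
partial = sum(1/(4*n + 9) - 1/(4*n + 7) for n in range(N))
exact = math.pi/4 - 13/15
print(partial, exact)         # both approximately -0.0812685
assert abs(partial - exact) < 1e-5
```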
Is the naturals to the reals a bijection of the reals to the Naturals?
HINT: Note that $2^\Bbb R\subseteq\Bbb N^\Bbb R$. On the other hand $\Bbb R$ and $\Bbb N^\Bbb N$ have the same cardinality.
Need help with an integral in quantum mechanics
One way of computing it is by using the expansion \begin{equation} \frac{1}{|\mathbf{r_1} - \mathbf{r_2}|} = 4 \pi \sum_{l=0}^{\infty} \sum_{m=-l}^l \frac{1}{2l+1} \frac{r_<^l}{r_>^{l+1}} Y^*_{lm}(\theta_2,\phi_2) Y_{lm}(\theta_1,\phi_1) \, , \end{equation} where $r_>$ ($r_<$) is the maximum (minimum) of $r_1$ and $r_2$. You can find it as equation $(3.70)$ in Jackson's Classical Electrodynamics. Plugging it into the integral makes the angular integrations trivial, thanks to the orthonormalization of spherical harmonics when integrated over the whole solid angle, \begin{equation} \int \mathrm{d} S \, Y^*_{l m}(\theta,\phi) Y_{l' m'}(\theta,\phi) = \delta_{l l'} \delta_{m m'} \, . \end{equation} Hence we have, \begin{eqnarray} \langle \frac{1}{|\mathbf{r_1} - \mathbf{r_2}|} \rangle &=& \left(\frac{Z^3}{\pi}\right)^2 4 \pi \sum_{l=0}^{\infty} \sum_{m=-l}^l \frac{1}{2l+1} \int_0^\infty \mathrm{d} r_1 \int_0^\infty \mathrm{d} r_2 \frac{r_<^l}{r_>^{l+1}} \mathrm{e}^{-2 Z(r_1 + r_2)} r_1^2 r_2^2 \int \mathrm{d} S_1 Y_{lm}(\theta_1,\phi_1) \int \mathrm{d} S_2 Y^*_{lm}(\theta_2,\phi_2) \\ &=& 4 \frac{Z^6}{\pi} \int_0^\infty \mathrm{d} r_1 \int_0^\infty \mathrm{d} r_2 \frac{1}{r_>} \mathrm{e}^{-2 Z(r_1 + r_2)} r_1^2 r_2^2 \times \sqrt{4 \pi} \times \sqrt{4 \pi} \\ &=& 16 Z^6 \int_0^\infty \mathrm{d} r_2 \left[ \int_0^{r_2} \mathrm{d} r_1 \frac{1}{r_2} \mathrm{e}^{-2 Z(r_1 + r_2)} r_1^2 r_2^2 + \int_{r_2}^\infty \mathrm{d} r_1 \frac{1}{r_1} \mathrm{e}^{-2 Z(r_1 + r_2)} r_1^2 r_2^2 \right] \\ &=& \frac{5}{8} Z \,, \end{eqnarray} where we have used the fact that only the $l = 0$ and $m = 0$ term contributes.
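The final radial integral is easy to verify numerically, e.g. with SciPy and $Z=1$ (a sketch of my own; the expected value is $5/8$):

```python
import numpy as np
from scipy import integrate

Z = 1.0

def inner(r2):
    # r1 < r2: the integrand factor is r1^2 * r2; r1 > r2: it is r1 * r2^2
    a, _ = integrate.quad(lambda r1: r1**2 * r2 * np.exp(-2*Z*(r1 + r2)), 0, r2)
    b, _ = integrate.quad(lambda r1: r1 * r2**2 * np.exp(-2*Z*(r1 + r2)), r2, np.inf)
    return a + b

val, _ = integrate.quad(inner, 0, np.inf)
print(16 * Z**6 * val)   # approximately 0.625 = 5/8
```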
To show that $\dot{x} = -y - x +y^2 - x^2,\; \dot{y} = xy$ has no periodic orbits.
A phase portrait shows there can be no periodic orbits. Each trajectory must either approach a fixed point or go off to infinity. For a detailed proof of this, you could examine the fate of solutions starting out in each of the regions into which the isoclines divide the plane.
Increasing function on small interval given positive derivative
I came across this problem as well. The solution I found dealt with decreasing functions, but the problem is equivalent to the increasing case (just take $-f$ as defined below). Consider the function $f$ on $[ 0 , 2/\pi ]$: $$ f(x)=\begin{cases} x^2\sin^2 \left( \frac 1x \right) -\frac 12 x&, \text{if } x\neq 0 \\ -\frac 12 x &, \text{if } x = 0 \end{cases} $$ Its derivative is: $$ f'(x) = \begin{cases} 2x\sin^2 \left(\frac{1}{x} \right) - \sin \left( \frac{2}{x} \right) - \frac 12 &, \text{if } x\neq 0 \\ -\frac 12 &, \text{if } x = 0 \end{cases} $$ After doing some fiddling with the identities, it is possible to find a sequence $\langle a_n \rangle_{n=1}^\infty$ such that $f'(a_n)>0$. Letting $\langle a_n \rangle_{n=1}^\infty=\left\langle \left( \frac{5\pi}{8} + n\pi \right)^{-1} \right\rangle_{n=1}^\infty$, then $a_n \to 0 $ as $n \to \infty$. Therefore, in any interval $[0, \delta]$ for positive $\delta$ one can find an element of this sequence contained within it. Consider: \begin{align*} f'(a_n) &= 2a_n\sin^2 \left(\frac{1}{a_n} \right) - \sin \left( \frac{2}{a_n} \right) - \frac 12 \\ & = \frac{2}{\frac{5\pi}{8} + n\pi}\sin^2 \left(\frac{5\pi}{8} + n\pi \right) - \sin \left( \frac{5\pi}{4} + 2n\pi \right) - \frac 12\\ & = \frac{2}{\frac{5\pi}{8} + n\pi}\sin^2 \left(\frac{5\pi}{8} \right) - \left( \frac{-1}{\sqrt{2}} \right) - \frac 12 \\ & = \frac{2}{\frac{5\pi}{8} + n\pi}\sin^2 \left(\frac{5\pi}{8} \right) + \frac{\sqrt{2}-1}{2} \\ &>0 \end{align*} But we know that a differentiable function $f$ is decreasing (strictly or not) on an interval $I$ only if $\forall x \in I: f'(x) \le 0$. But since for any interval $[0,\delta]$, we can find an $n$ such that $f'(a_n) > 0$, we see that for no $\delta > 0$ is $f$ decreasing on $[0,\delta]$, just as you had conjectured.
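A quick numerical look (function and variable names are mine) confirms the positive derivative values along the sequence:

```python
import math

def fprime(x):
    return 2*x*math.sin(1/x)**2 - math.sin(2/x) - 0.5

for n in range(1, 6):
    a_n = 1/(5*math.pi/8 + n*math.pi)
    print(n, fprime(a_n))   # all positive, even though f'(0) = -1/2
```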
Determining the eigenvalues of a $4×4$ block diagonal matrix.
For every block matrix $$A = \begin{pmatrix} B & 0 \\ 0 & C \end{pmatrix} $$ with $B, C \in \mathbb{C}^{n \times n}$ it holds that $\det(A) = \det(B) \det(C)$. Every eigenvalue of your matrix $A$ satisfies $\det(A - \lambda I ) = 0$. Hence such an eigenvalue also satisfies $$\det(A - \lambda I) = \det(B - \lambda I) \det(C - \lambda I) = 0$$ and your reasoning is correct.
The limits of the gamma function
Using $\displaystyle \Gamma(1+x)=\lim\limits_{n\to\infty}\frac{n^x}{\prod\limits_{k=1}^n \left(1+\frac{x}{k}\right)}$ one gets $\displaystyle \frac{\Gamma(1+x)^2}{\Gamma(1+2x)}=\prod\limits_{k=1}^\infty \left(1-\left(\frac{x}{k+x}\right)^2\right)$ . Then let $x\to\infty$ . You can also use $\displaystyle 0<\frac{n!^2}{(2n)!}\leq\frac{1}{2^n}$ for $n\in\mathbb{N}$, which is easier to prove (e.g. by induction).
How to prove this integral inequality $\int_{0}^{s}f(x)\,dx\le\int_{s}^{1}f(x)\,dx\le\dfrac{s}{1-s}\int_{0}^{s}f(x)\,dx$
Without loss of generality, we may assume that $\int_0^1f(x)dx=1$. Define $F(x)=\int_0^x f(t)dt$. By definition, $F(0)=0$ and $F(1)=1$. Moreover, since $F'=f$ is positive and increasing, $F$ is increasing and convex. Therefore, by Jensen's inequality, $$F(s)=F\big(\int_0^1xf(x)dx\big)\le \int_0^1F(x)f(x)dx=\int_0^1F(x)F'(x)dx=\frac{1}{2}.\tag{1}$$ It follows that $$\int_0^sf(x)dx=F(s)\le 1-F(s)=\int_s^1f(x)dx.\tag{2}$$ By the convexity of $F$, when $0\le t\le 1$, $$F(ts)\le tF(s)+(1-t)F(0)=tF(s)\tag{3}$$ and $$F(ts+1-t)\le tF(s)+(1-t)F(1)=tF(s)+1-t.\tag{4}$$ Due to $(3)$ and $(4)$, we have $$\int_0^s F(x)dx=s\int_0^1F(ts)dt\le\frac{s}{2}F(s)\tag{5}$$ and $$\int_s^1 F(x)dx=(1-s)\int_0^1F(ts+1-t)dt\le\frac{1-s}{2}(F(s)+1).\tag{6}$$ $(5)+(6)$ implies that $$\frac{1}{2}(F(s)+1-s)\ge\int_0^1 F(x)dx= xF(x)\big|_0^1-\int_0^1xf(x)dx =1-s.\tag{7}$$ It follows that $$\int_s^1f(x)dx=1-F(s)\le\frac{s}{1-s}F(s)=\frac{s}{1-s}\int_0^sf(x)dx.\tag{8}$$
Confusion in basic trigonometry
By convention, angles are measured from the positive $x$-axis in the counterclockwise direction. https://en.wikipedia.org/wiki/Unit_circle
What's the name of this test?
You are looking for Bertrand's test. Here you can find the convergence tests: Series Convergence Tests.
Interesting question regarding Weibull distribution, sample mean, and sample median
You are right, $a$ and $b$ are not uniquely determined. But if we decide to make $a$ and $b$ as symmetric as possible about the mean, they are determined. In this case, the mean is $\frac{\sqrt{\pi}}{2}$. The numbers you mentioned in a comment are in fact chosen to be symmetric about this.
Why is $V_{4}$ the semidirect product of $Z_{2}\times Z_{2}$
I think if you consider the following presentation of the Klein four-group $V$, then the problematic points are resolved easily. This is one presentation of it: $$V=\langle a,b\mid a^2=b^2=1,(ab)^2=1\rangle$$ The relation $(ab)^2=1$ can be regarded as $abab=1$ or $aba^{-1}=b^{-1}=b$. It shows that if we set $H=\langle b\rangle$ then $H$ is normal in $V$ (however, we already know that this group is abelian, so maybe we don't need this observation). Now $V/H$ is presented as $$V/H=\langle a,b\mid a^2=b^2=1,(ab)^2=1, b=1\rangle=\langle a\mid a^2\rangle=K$$ so $K$ is a complement of $H$ in $V$, and since $H\cap K=1$ and $HK=V$ we have $$V=H\times K\cong\mathbb Z_2\times \mathbb Z_2$$
Pick random number based on the values of a distribution whose PDF is unknown
To clarify your question: you have 99 samples of the PDF $f_X(x)$ at points $x = 0.01, 0.02, \ldots, 0.99$. You want to pick a number from the distribution that has overall unknown PDF $f_X(x)$. Solution: The first step is to convert the PDF to the PMF of a corresponding discrete distribution. Let the discrete RV be $W$ with PMF $P_W(w)$. Since you have the PDF value at $x = 0.5$ (for example), $P(0.495 < x < 0.505) \approx y_{0.50} \times (0.505 - 0.495)$. For the discrete case, it is equivalent to $P_W(w = 0.5) = y_{0.5} \times 0.01$. With this PMF, you can calculate the CDF, make a table (or write a general function), and do a reverse lookup to get the inverse CDF function. Then, as you mentioned, a uniform random variable between $(0,1)$ should give you a number with the given PMF $P_W(w)$. Note that this method gives you a precision of 0.01. If you want more precision, you can interpolate the sequence $y_i$. One way is to fit a curve or create a step function that passes through $(i, y_i)$ and consider the new function as a "finely" sampled PDF $f_X(x)$. Cheers.
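A minimal Python sketch of this table-plus-reverse-lookup procedure (names are mine; `y` is assumed to hold the 99 sampled PDF values at $x=0.01,\dots,0.99$):

```python
import bisect
import random

def make_sampler(y, dx=0.01):
    weights = [v * dx for v in y]              # unnormalized PMF P_W(w)
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total                       # normalize so the CDF ends at 1
        cdf.append(acc)
    xs = [(i + 1) * dx for i in range(len(y))]
    def sample():
        u = random.random()                    # uniform on (0, 1)
        return xs[bisect.bisect_left(cdf, u)]  # reverse lookup in the CDF table
    return sample

# usage: sample = make_sampler(y_values); draws = [sample() for _ in range(1000)]
```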
Selection of jobs based on queue priority
If $x_{wjt}$ is a binary variable taking value 1 if worker $w$ takes job $j$ at time $t$ and zero otherwise, then $\pi_{wt}=\sum_j Q_j x_{wjt}$ gives you the priority of the job worker $w$ snags at time $t$ (0 if the worker does not take any job). So you want $\pi_{1t} \ge \pi_{2t}$ if both worker 1 and worker 2 pick up jobs at time $t$. What is unclear from your description is whether every worker picks up a job at every time epoch.
Counting all possible board positions in Quoridor
An alternative way of looking at the wall positions would be to think in terms of intersection points instead of slots. In this model, the board has an 8x8 grid of intersection points; each intersection point can be the midpoint of a horizontal wall; the midpoint of a vertical wall; or "free". Then the non-overlap rules say that you can't have two horizontally adjacent horizontal-wall intersection points or two vertically adjacent vertical-wall intersection points. If we look only at the rows and the horizontal walls, there are 55 possibilities. (This could be tackled as a basic problem in combinatorics on words, although I admit that I cheated and computed it). If we assume that each row is independent, each column (looking at vertical rather than horizontal walls) is independent, and there's no limit on the number of walls, we get an upper bound of $55^{16} \approx 7 \times 10^{27}$ positions, which is already a massive improvement. Restricting to 20 walls and excluding the cases which put a vertical and a horizontal wall at the same point would reduce this number considerably, but it's already much lower than Mertens' estimate. If we instead start by focussing on the number of walls, we get an upper bound of $$\sum_{i=0}^{20} 2^i \binom{64}{i} \approx 2.6\times 10^{22}$$ To actually enumerate the possibilities I would take a programming approach. Each row has $3^8 = 6561$ possible arrangements of $V$, $H$, or $X$. Once we exclude those which have $HH$ as a substring there are $3344$ possibilities left. You can build a graph which places an edge between two of those $3344$ cases iff they don't have a $V$ at the same offset. Then the total number of wall placements (ignoring the limit of 20) is the number of paths of length 7 in the graph. (For elaboration, see the footnote). To take the limit into account you can either weight the nodes and count paths of less than a certain weight or you can construct a slightly different graph whose nodes are a pair (row arrangement, walls placed). Then the running time of a simple non-naïve count would be on the order of $3344^2 \times 20 \times 8$ times a small number of basic operations. Assuming that I don't have any bugs in my program, I make it $$1,375,968,129,062,134,174,771$$ Elaboration on the graph: The nodes are strings like $VHXXXXHX$ of 8 intersection points with no double-$H$. In this example, the first is the midpoint of a vertical wall, the second and seventh are midpoints of horizontal walls, and the others are not mid-points of any wall. There are edges from $VHXXXXHX$ to every node except ones which also start with $V$, because if we put another row starting $V$ below it we'll have vertically adjacent $V$s. So a valid board consists of 8 rows such that each pair of adjacent rows avoids vertically adjacent $V$s. E.g. the trivial board is one valid board: $$XXXXXXXX\\ XXXXXXXX\\ XXXXXXXX\\ XXXXXXXX\\ XXXXXXXX\\ XXXXXXXX\\ XXXXXXXX\\ XXXXXXXX$$ So is a board which alternates $VHXXXXHX$ and its reverse: $$VHXXXXHX\\ XHXXXXHV\\ VHXXXXHX\\ XHXXXXHV\\ VHXXXXHX\\ XHXXXXHV\\ VHXXXXHX\\ XHXXXXHV$$ Because the edges in the graph correspond to rows which can be adjacent to each other, and because there are 7 adjacent pairs in 8 rows, the valid boards correspond to the paths of length 7 in the graph. As noted, this doesn't take into account the limit of 20 walls. The approach I've taken in the code linked above is to replace each node with 21 nodes consisting of (row, total walls). 
Then each board starts with (row, number of walls in the row) and the edges take into account adding the walls. E.g. the second board is now excluded, because it would be $$\begin{eqnarray*}VHXXXXHX&,&3\\ XHXXXXHV&,&6\\ VHXXXXHX&,&9\\ XHXXXXHV&,&12\\ VHXXXXHX&,&15\\ XHXXXXHV&,&18\\ VHXXXXHX&,&21\\ XHXXXXHV&,&24\end{eqnarray*}$$ and the last two aren't nodes in the graph, because the numbers exceed 20. But if we remove the walls from the last two rows we get $$\begin{eqnarray*}VHXXXXHX&,&3\\ XHXXXXHV&,&6\\ VHXXXXHX&,&9\\ XHXXXXHV&,&12\\ VHXXXXHX&,&15\\ XHXXXXHV&,&18\\ XXXXXXHX&,&18\\ XXXXXXXX&,&18\end{eqnarray*}$$ which is valid and is counted by the program.
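For the curious, here is a compact Python reconstruction of the counting described above (my own sketch, not the author's linked code). It groups rows by their pattern of vertical-wall midpoints, which is all that row-to-row compatibility depends on, and takes a minute or two to run:

```python
from itertools import product
from collections import defaultdict

MAX_WALLS = 20

# the 3344 valid rows: strings over {X, H, V} of length 8 with no 'HH';
# group them by (bitmask of V positions, number of walls in the row)
cnt = defaultdict(int)
for cells in product("XHV", repeat=8):
    row = "".join(cells)
    if "HH" not in row:
        vmask = sum(1 << i for i, c in enumerate(row) if c == "V")
        cnt[(vmask, sum(c != "X" for c in row))] += 1

# dp[(vmask of the last row, walls used so far)] -> number of partial boards
dp = dict(cnt)
for _ in range(7):                             # append the remaining 7 rows
    nxt = defaultdict(int)
    for (vm1, w1), n1 in dp.items():
        for (vm2, w2), n2 in cnt.items():
            # no vertically adjacent V's, and at most 20 walls in total
            if vm1 & vm2 == 0 and w1 + w2 <= MAX_WALLS:
                nxt[(vm2, w1 + w2)] += n1 * n2
    dp = nxt

print(sum(dp.values()))                        # 1375968129062134174771
```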
Coprime elements generate coprime ideals
This is not true in arbitrary rings. For (counter)example, in a polynomial ring in two variables $R = k[x,y]$, we have $\gcd(x,y)=1$, but $(x)+(y) \neq R$. It is true in Bézout domains. I don’t know if it’s equivalent to being a Bézout domain.
How can I solve this distance-time word problem with functions?
Someone posted the answer and then deleted it, so I'll repost it. I should have considered using $g(3)$ and $f(3)$ in the second equation rather than "$t+3$". That way it works fine and I get the right solution. Thank you!
Example of a non-Borel set $A$ in $\mathbb{R}^2$ such that all $x$- and $y$-sections are measurable
Consider the diagonal $\Delta=\{(x,x): x\in \mathbb R\}$. Not every subset of this is a Borel set. (Why?). For any subset of $\Delta$ every section is measurable. (Why?).
How do you solve this permutation problem? Evaluate $\,{}_nP_1? $
Well, do you know the formula for $_n P_r$? $$_nP_r=\frac{n!}{(n-r)!}$$ Just plug in the values of $n$ and $r$ into this formula. In this case your $n$ still equals $n$, and $r$ equals $1$. Therefore: $$_nP_1=\frac{n!}{(n-1)!}$$ $$=\frac{n(n-1)(n-2)\dots 3\cdot 2\cdot 1}{(n-1)(n-2)\dots 3\cdot 2\cdot 1}$$ $$=n$$ $$\color{green}{\boxed{_nP_1=n}}$$ Hope I helped
The letters A, E, I, P, Q, and R are arranged in a circle. Find the probability that at least 2 vowels are next to one another.
There are $6$ chairs around a table. The chairs are labelled $1$ to $6$, in counterclockwise order. The letter $P$ sits down somewhere, it doesn't matter where, say on the chair labelled $1$. Then the other letters sit down at random. We will find the probability that the vowels are all separated. If the vowels are to be all separated, they must occupy chairs $2$, $4$, and $6$, and the remaining two consonants must occupy $3$ and $5$. There are $\dbinom{5}{2}$ ways of selecting two chairs for the consonants, from the remaining five. There is exactly $1$ selection that consists of chairs $3$ and $5$. I expect you can now complete things.
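If you want to double-check the final count, a brute-force enumeration over all $6!$ linear seatings works, since each circular seating is counted the same number of times (a sketch of my own):

```python
from itertools import permutations
from fractions import Fraction

vowels = set("AEI")
hits = total = 0
for arr in permutations("AEIPQR"):
    total += 1
    # adjacent pairs around the circle, including the wrap-around pair
    if any(arr[i] in vowels and arr[(i + 1) % 6] in vowels for i in range(6)):
        hits += 1
print(Fraction(hits, total))   # 9/10
```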
Help to change the orders of this triple integral
The region of integration is given by the inequalities $0\le x\le1$, $0\le y\le1-x$ and $0\le z\le1-x^2$. The latter two are equivalent to $x\le 1-y\le 1$ and $x^2\le1-z\le1$ (which is equivalent to $x\le\sqrt{1-z}\le1$). Your integral is over the region defined by $0\le y\le1$, $0\le z\le1$ and $0\le x \le\sqrt{1-z}$. This is not the same as the original region. There are points like $(x,y,z)=(3/5,1,16/25)$ in your region which are not in the original region. You have neglected the condition that $0\le y\le 1-x$. As you say, we must have $0\le y\le 1$ and $0\le z\le1$. Then the condition on $x$ is that $0\le x\le\min(1-y,\sqrt{1-z})$. When $0\le y\le1-\sqrt{1-z}$ that amounts to $0\le x\le \sqrt{1-z}$ and when $1-\sqrt{1-z}\le y\le 1$ it amounts to to $0\le x\le1-y$. So we get Stewart's limits. By the way, is Stewart's book really 1,950+ pages?
Bounds on functions via their derivatives
$f'$ is invariant under shifts of $f$, that is, $(f+k)'=f'$. Thus a bound depending only on the derivative $f'$ would have to work for all shifts of the function $f$, which is impossible for arbitrarily large shifts $k$. EDIT: However, using the fundamental theorem of calculus, based around some point $x_0 \in \text{Dom }f $ we have $$f(x)=f(x_0)+\int_{x_0}^x f'(t)dt \leq f(x_0)+M(x-x_0)$$ where $M=\max_{t \in [x_0,x]} f'(t)$. If $f'$ is bounded globally, we can use this bound for $M$. Note that this bound depends on $f'$ as well as $f(x_0)$!
what groups have only elements of prime order?
There are of course (non-abelian) $p$-groups of exponent $p$, so every element is of prime order $p$ there. See for instance this 1974 article. For an example involving two different primes, consider the following generalization of the example of Alex. Let $p$ be a prime, $n \ge 1$ and let $N$ be the additive group of the field $F$ with $p^{n}$ elements. Pick any subgroup $H$ of prime order $q$ in the multiplicative group $F^{\star}$. (Necessarily, $q \ne p$.) Then in the semidirect product of $N$ by $H$ all elements have order $p$ or $q$. There are more examples among Frobenius groups. (The example just given is that of a Frobenius group.) A particularly interesting one (quoted in the Wikipedia article) is a suitable semidirect product of a non-abelian group $N$ of order $7^{3}$ and exponent $7$ by an element $h$ of order $3$. This can be realized as $$ N = \left\{ \begin{bmatrix} 1 & a & c \\ & 1 & b \\ & & 1 \\ \end{bmatrix} : a, b, c \in F \right\}, $$ where $F$ is the field with $7$ elements, and $$ h : \begin{bmatrix} 1 & a & c \\ & 1 & b \\ & & 1 \\ \end{bmatrix} \mapsto \begin{bmatrix} 1 & 2a & 4 c \\ & 1 & 2b \\ & & 1 \\ \end{bmatrix}. $$
Probability of 2-pair in Poker
The flaw in the second approach is that you account for the order of the denominations. I.e. you count Aces and Kings separately from Kings and Aces. Hence you get a factor of two too much. In the formulas the difference manifests as ${{13}\choose{1}} \cdot {{12}\choose{1}} = 13\cdot 12$ versus ${{13}\choose{2}}= \frac{13\cdot 12}{2}$.
Does flipping the negative in a fraction flip all terms on both sides?
This is not an equation; it is an expression. To make the change, multiply both the numerator and the denominator by $-1$.
finding a function representing a sequence
If you suspect that the formula for $a_n$ is of polynomial type (degree $m$), you can try polynomial interpolation, using the first $m+1$ terms in the sequence as data. Then, check that the formula works for all $n$ using induction.
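For example, here is a sketch with NumPy of the interpolate-then-verify workflow (the data are hypothetical, just to illustrate):

```python
import numpy as np

a = [2, 5, 10, 17, 26]                 # hypothetical first terms a_1, a_2, ...
m = 2                                  # suspected polynomial degree
coeffs = np.polyfit(range(1, m + 2), a[:m + 1], deg=m)
print(np.round(coeffs, 6))             # [1. 0. 1.]  ->  a_n = n^2 + 1
# the interpolant must still be checked against the remaining terms
# (and then proved for all n by induction)
assert all(round(np.polyval(coeffs, n + 1)) == a[n] for n in range(len(a)))
```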
Find the number of children, given that the estate was divided evenly between them
Assume that he left behind some money; then we can choose units in such a way that he left behind $1$ unit of money. Then the first payment is $x+(1-x)/16=15x/16+1/16$. This is equal to the second payment, which is $2x+(1-2x-15x/16-1/16)/16=2x+15/256-47x/256=465x/256+15/256$. Clearing denominators, $240x+16=465x+15$. Now solving we get $225x=1$ or $x=1/225$. So everyone received the amount $15(1/225)/16+1/16=240/3600$. So if $n$ children received $240/3600$ each, then $240n/3600=1$. What do you conclude?
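Here is a sketch that replays the scheme with exact rational arithmetic (assuming the payout rule used above: child $k$ receives $kx$ plus $1/16$ of what then remains) and confirms the equal payments:

```python
from fractions import Fraction

x = Fraction(1, 225)                     # the value found above
estate, k, payments = Fraction(1), 1, []
while estate > 0:
    pay = k * x + (estate - k * x) / 16  # child k: k*x plus 1/16 of the rest
    payments.append(pay)
    estate -= pay
    k += 1
assert all(p == payments[0] for p in payments)
print(len(payments), payments[0])        # 15 children, 1/15 (= 240/3600) each
```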
Demonstrate: $(s \Longrightarrow u) \Longrightarrow (p \wedge((-(u \wedge t) \wedge s ) \vee (q \wedge r))\Longrightarrow -t\vee q\vee r)$
For this you can use Conditional Proofs: To prove $\phi \Rightarrow \psi$ using a Conditional proof, you simply assume that $\phi$ is true, and then try to show that $\psi$ will then have to be true as well. So in your case: Assume $s \Rightarrow u$ Now show [what comes to the right of the $\Rightarrow$] but since that is a $\Rightarrow$ as well, do another Conditional Proof, i.e. Also Assume that $p \land ((\neg (u \land t) \land s) \lor (q \land r))$ And try to show $\neg t \lor q \lor r$ And that should not be hard: from the second assumption it follows that $(\neg (u \land t) \land s) \lor (q \land r)$, and thus you have either $\neg (u \land t) \land s$ or you have $q \land r$, and in either case you can establish your goal $\neg t \lor q \lor r$ (use the initial assumption for the first case) So you don't really transform any statements here like you did algebraically when working with $\Leftrightarrow$'s. Rather, you infer things from other things ... And thus you get $\Rightarrow$'s, since inference is (typically) a 'one-way street'.
Proving $CF = r(\frac{1-\cos x}{\sin x})$ in a geometry problem without using $\tan\frac{x}{2}$
Solution without trigonometry. Let $P\in AB$ such that $AP=AD$. We can assume also that $P$ is placed between $A$ and $O$ as on your picture. Thus, $$\measuredangle APD=\frac{1}{2}(180^{\circ}-\measuredangle A)=x,$$ which says that $DPOC$ is cyclic, which gives: $$\measuredangle CPO=\measuredangle CDO=y$$ and since $$\measuredangle B=180^{\circ}-2y,$$ we obtain $$BC=PB$$ and $$AD+BC=AP+PB=AB.$$
Interchange integrals in $\int_0^\pi\int_0^{\sin(x)}f(x,y)dydx$
Draw the section of the $xy$ plane bounding the integral. Then work out the $y$ limits. From the picture this is $0$ to $1$. The $x$ limits then correspond to solving $y=\sin x$ for the first two quadrant solutions. So the integral becomes: $$\int_0^1\int_{\arcsin y}^{\pi-\arcsin y}f(x,y)dxdy$$
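The swapped limits can be sanity-checked with any test integrand, e.g. with SciPy (a sketch; note that `dblquad` expects the inner variable as the first argument of the integrand):

```python
import numpy as np
from scipy import integrate

f = lambda x, y: x + y   # any test integrand

# original order: inner y from 0 to sin(x), outer x from 0 to pi
orig, _ = integrate.dblquad(lambda y, x: f(x, y), 0, np.pi, 0, np.sin)
# swapped order: inner x from arcsin(y) to pi - arcsin(y), outer y from 0 to 1
swap, _ = integrate.dblquad(lambda x, y: f(x, y), 0, 1,
                            np.arcsin, lambda y: np.pi - np.arcsin(y))
print(orig, swap)        # both approximately pi + pi/4
```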
Are there non-constant independent random variables $X$ and $Y$ such that $X^2 + Y^2 \equiv 1$?
Yes: suppose you have any $k$, $p$, $q$ all strictly between $0$ and $1$. Having them all $\frac12$ gives some symmetry but is not necessary. Then suppose $X = +\sqrt k$ with probability $p$ and $X=-\sqrt k$ with probability $1-p$ $Y = +\sqrt{1-k}$ with probability $q$ and $Y = -\sqrt{1-k}$ with probability $1-q$, independently of $X$ Then $X$ and $Y$ are independent, neither is almost surely constant, and $X^2+Y^2=1$ with probability $1$
What is the norm of this linear functional?
Answer. The norm of this operator is equal to $2$. Explanation. Clearly the norm of this operator is less than or equal to $2$. Set $$ x_n(t)=\left\{\begin{array}{lll} 1 & \text{if} & t\in[0,1/2-1/n], \\ n-1-2nt & \text{if} & t\in[1/2-1/n,1/2],\\ -n-1+2nt & \text{if} & t\in[1/2,1/2+1/n],\\ 1 & \text{if} & t\in[1/2+1/n,1]. \end{array}\right. $$ Then $$ \|x_n\|=1, $$ and $$ \Big|\int_0^1 x_n(t)\,dt-x_n(1/2)\Big|\ge\int_0^{1/2-1/n}dt+\int_{1/2+1/n}^1dt -\int_{1/2-1/n}^{1/2+1/n}|x_n(t)|\,dt+1\ge 2-\frac{4}{n}. $$ Hence, the norm of this operator is greater than or equal to $2-4/n$ for every $n\in\mathbb N$, and hence it is greater than or equal to $2$.
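Numerically the estimate is easy to confirm (a sketch of my own, taking the functional to be $x \mapsto \int_0^1 x(t)\,dt - x(1/2)$, as in the estimate above):

```python
from scipy import integrate

def x_n(t, n):
    if t <= 0.5 - 1/n: return 1.0
    if t <= 0.5:       return n - 1 - 2*n*t
    if t <= 0.5 + 1/n: return -n - 1 + 2*n*t
    return 1.0

for n in (10, 100, 1000):
    val, _ = integrate.quad(lambda t: x_n(t, n), 0, 1,
                            points=[0.5 - 1/n, 0.5, 0.5 + 1/n])
    print(n, abs(val - x_n(0.5, n)))   # 1.8, 1.98, 1.998 -> 2
```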
Finding $f$ satisfying $\int f=\sum f$
There are millions of them. Take any function $g(x)$ such that $g'(x)e^x$ is monotonic* and such that the integral and the sum converge. Then let $$f(x):=g(x)+ae^{-x}$$ and write $$\int_0^1 f(x)\,dx=\int_0^1 g(x)\,dx+a(1-e^{-1})=I+a(1-e^{-1}),$$ $$\sum_{n=1}^\infty f(n)=\sum_{n=1}^\infty g(n)+\frac{ae^{-1}}{1-e^{-1}}=S+\frac{ae^{-1}}{1-e^{-1}},$$ which you can solve for $a$ after equating. *This requirement ensures a single root of $f'(x)$.
Is "$f$" the function or is "$f(x)$" the function?
In my opinion the function is $$ f. $$ It is the "name" of this "machine" that acts on numbers: you give a number to it and it gives back a number (you are of course not restricted only to numbers, so let us call this an input from now on, and you can give it more than one input, just as it can give you back more than one number, which we call an output from now on). But we call the function itself just $f$. Now, only calling it $f$ doesn't say much about what $f$ is doing. So you can make its name more informative by writing it as $$ f(x) $$ which means that $f$ would like to receive one input to act on. You can think of $f(x)$ as the output of $f$. If $f$ can take more than one input, say $n$, you may write $$f(x_1,x_2,x_3,\ldots,x_n).$$ Using this way of writing $f$ also allows us to talk about what kind of inputs $f$ can handle; you may say that $$x\in\mathbb{N}$$ or $$x_1,x_2,\ldots,x_n\in\mathbb{Q}$$ All this still doesn't say much about what $f$ is doing, so you may tell us about it in the following way: $$ f(x)= x^2 $$ which tells us that whatever $f$ gets will be squared. At this point we can give a rather compact and neat way of describing what $f$ does, namely $$ f: A\to B, \ f(x)= \text{some expression involving $x$} $$ which can be read as "$f$ takes one element from the set $A$, substitutes it into the expression given above, and returns an element of the set $B$". This way you can think of $f$ as linking elements between the two sets $A$ and $B$ according to some rules that would make my answer even longer, so we skip them. All in all I think a function is an abstract mathematical concept which we usually name $f$, as it may stand for "function".
Prove this inequality $4(\sqrt{a^3b^3}+\sqrt{b^3c^3}+\sqrt{c^3a^3})\leq 4c^3+(a+b)^3$
It's $$4c^3-4(\sqrt{a^3}+\sqrt{b^3})\sqrt{c^3}+(a+b)^3-4\sqrt{a^3b^3}\geq0,$$ which is a quadratic inequality in $\sqrt{c^3}.$ Thus, it's enough to prove that $$4(\sqrt{a^3}+\sqrt{b^3})^2-4((a+b)^3-4\sqrt{a^3b^3})\leq0,$$ which is smooth. We can rewrite this solution in the AM-GM style, of course.
Show that the lexicographic order topology for $\mathbb{N}\times \mathbb{N}$ is not the discrete
For order topologies we also include as (basic) open sets all intervals of the form $$( \leftarrow , a ) = \{ x \in X : x < a \}$$ where $a \in X$. So in the lexicographic order on $\mathbb{N} \times \mathbb{N}$ we have $\{ \langle 0 , 0 \rangle \} = ( \leftarrow , \langle 0 , 1 \rangle )$. (We similarly include all intervals of the form $( a , \rightarrow ) = \{ x \in X : a < x \}$ as (basic) open sets.) It is relatively easy to prove that if $<$ is a strict linear order on a set $X$, then $x \in X$ is isolated in the order topology iff $x$ has an immediate predecessor, or is the minimum element; and $x$ has an immediate successor, or is the maximum element. Can you think of points in $\mathbb{N} \times \mathbb{N}$ (points similar to the one you chose, in fact) where one of these two fails?
Equivalence of universal closure of formulas and implication
As my previous answer was wrong, this will instead be an explanation of why it was wrong and what the correct answer is. I assumed that the truth values of the individual formulas of $A^\forall$ must be the same as those of $B^\forall$, since $A^\forall$ is satisfied if and only if $B^\forall$ is satisfied. However, consider the case in which a single non-quantified formula of $A^\forall$ is false. Now, since $A^\forall$ is false iff $B^\forall$ is false, there must be at least one false formula in $B^\forall$. For the case in which there is only one, the truth values of the formula (1) could be those of (3) and the values of (2) those of (4). $A\rightarrow B\tag{1}$ $B\rightarrow A\tag{2}$ $F\rightarrow T\tag{3}$ $T\rightarrow F\tag{4}$ In such a case (1) is true while (2) is false. Hence the truth values of the individual formulas of $A^\forall$ need not be the same as those of $B^\forall$. The statement (5), however, is still true, since at least one of $A\rightarrow B$ or $B\rightarrow A$ has to be true, because the same $A$'s and $B$'s are used. $$\text{either } A\rightarrow B \text{ or } B\rightarrow A \text{ is true} \tag{5}$$
Product of two elliptic isometries with distinct centers
Here is what's happening: Let $p, q$ be distinct points in the hyperbolic plane $H^2$ and let $\alpha, \beta$ be the angles of rotation of elliptic elements $f, g$ fixing $p$ and $q$ (both $\alpha$ and $\beta$ are computed counterclockwise). Then $h=g\circ f$ is elliptic iff the following holds: There exists a finite hyperbolic triangle with one side $pq$ and the angles $\alpha/2, \beta/2$ at $q, p$ respectively, lying to the left of the oriented segment $pq$. If we fix the angles $\alpha, \beta$ in advance and take $pq$ sufficiently long then this triangle will not exist, as its sides $s$ (from $p$) and $t$ (from $q$) will be disjoint in $H^2$ (they will intersect either on the boundary or beyond, if you work in the Klein model). In order to prove this statement, consider three isometric reflections $r_1, r_2, r_3$ in the sides $pq, s$ and $t$ of the "would-be triangle" defined above. Then $r_2\circ r_1= g, r_1\circ r_3=f$. The composition is $h= r_2 \circ r_3$. This transformation is elliptic iff the geodesics $s, t$ intersect in $H^2$.
Given regular language $L,$ prove $L'$ is regular
A substitution $\sigma: A^* \to B^*$ is defined by choosing, for each letter $a \in A$, a language $\sigma(a)\subseteq B^*$, and by setting $\sigma(1) = 1$ and $\sigma(a_1 \dotsm a_n) = \sigma(a_1) \dotsm \sigma(a_n)$ for each word $a_1 \dotsm a_n$. Now, if $\sigma: A^* \to B^*$ is a substitution and $R$ is a regular language of $B^*$, then the language $$ R' = \{u \in A^* \mid \sigma(u) \cap R \not= \emptyset\} $$ is regular. Your question corresponds to the special case where $A = B = \Sigma$ and $\sigma(a) = aAA$ for every $a \in A$.
Why do these complex sequences converge uniformly?
The first limit you gave amounts to $$ \lim_{h\to 0} \frac{f_\zeta(z+h)-f_\zeta(z)}{h} = f_\zeta'(z) $$ where $f_\zeta(z) = \frac{1}{(\zeta-z)^n}$. The rate of convergence of the divided difference to the derivative is controlled by the supremum of the second derivative $f_\zeta''$ in a neighborhood $N$ of $z$. This derivative is uniformly bounded in $N$ with respect to $\zeta$, as long as $N$ is small enough to be at positive distance from the boundary. By the Taylor estimate $$ \left|\frac{f_\zeta(z+h)-f_\zeta(z)}{h} - f_\zeta'(z)\right| \le \frac{|h|}2 \sup_N |f_\zeta''| $$ the convergence is uniform. For the second one, the Weierstrass M-test applies. Let $k=|z-z_0|/r$, where $r=|\zeta-z_0|$ is the radius of the circle, independent of $\zeta$. Note that $k<1$. Since $\sum_{n=0}^\infty k^n$ converges, the Weierstrass test gives uniform convergence.
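To make the uniform bound in the first part fully explicit, one can compute the second derivative directly (a routine step I am adding, writing $d > 0$ for the distance from $N$ to the circle carrying the points $\zeta$): $$f_\zeta''(z) = \frac{n(n+1)}{(\zeta - z)^{n+2}}, \qquad \sup_{z \in N} |f_\zeta''(z)| \le \frac{n(n+1)}{d^{\,n+2}},$$ a bound independent of $\zeta$, which is exactly the uniformity needed.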
How can a universe include an infinite (sum) relation
Your question discusses the set-theoretic superstructure obtained as the countable union starting with, say, $X$ the set of real numbers and then adding subsets, subsets of subsets, etc. The superstructure itself is not related to the tag nonstandard-analysis since there are no infinitesimals in sight yet. The way this is used in Robinson's framework is that one now proceeds to form a suitable ultrapower to get a strictly larger exotic version of the superstructure. Here it is preferable to use a bounded version of the construction when one does not allow sequences where the $X$-rank tends to infinity. This is the approach formalized in Kanovei's version BST of Nelson's IST; for details see this recent publication. The original IST did not have the property that every model of ZFC extends to a model of IST, whereas BST does have this property.
Find all natural numbers such that $\sqrt{n+2020}+\sqrt{n} \in \mathbb{N}$
As I pointed out in the comments, the first step is false. (Though the subsequent conclusions are true, so we just need to fix this first step.) Hint: Show that if $ k \in \mathbb{N} $ and $ \sqrt{k} \in \mathbb{N}$, then $ k = a^2$ for some $ a \in \mathbb{N}$. After proving the hint, apply it to show that $n$ and $n+2020$ are both perfect squares. Show that if $ \sqrt{n} + \sqrt{ n+2020} = k \in \mathbb{N}$, then consider $ n+2020 = k^2 - 2k\sqrt{n} + n $ to prove that $(4k^2n) = a^2$ for some $a\in\mathbb{N}$, and so $ n = b^2$ for some $b\in \mathbb{N}$. Likewise, $ n+2020 = c^2$ for some $ c \in \mathbb{N}$. The rest of your proof then follows.
Primitive element and Galois group Computation
$$2-{y^2\over9-5\sqrt3}=\sqrt2$$ Squaring, $$2=4-{4y^2\over9-5\sqrt3}+{y^4\over(9-5\sqrt3)^2}$$ so $$4y^2(9-5\sqrt3)=2(9-5\sqrt3)^2+y^4.$$ Since $(9-5\sqrt3)^2 = 156 - 90\sqrt3$, collecting the rational and irrational parts gives $$y^4-36y^2+312=(-20y^2+180)\sqrt3.$$ Now square both sides to get a polynomial of degree 8 for $y$.
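A quick numerical sanity check (not part of the original argument; run in R, using $y^2 = (9-5\sqrt3)(2-\sqrt2)$ from the first displayed equation):
y <- sqrt((9 - 5*sqrt(3)) * (2 - sqrt(2)))
y^4 - 36*y^2 + 312         ## left-hand side
(180 - 20*y^2) * sqrt(3)   ## right-hand side; the two agree up to floating-point error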
For what values of $a$ and $b$ is $\int \frac{ax+b}{(x-1)(x+1)^2}dx$ a rational function?
Your partial fraction approach was correct. You're looking for the decomposition $$\frac{ax+b}{(x-1)(x+1)^2}=\frac{A}{x-1}+\frac{B}{x+1}+\frac{C}{(x+1)^2}$$ which implies $ax+b = A(x+1)^2+B(x-1)(x+1)+C(x-1)$. The unknowns here are $A, B, C$. Little $a$ and $b$ are parameters. In addition to obtaining a system and solving it, you can plug in $x=\pm1$ to find the values (also known as the cover-up method). So $x=1$ gives $A=\frac{a+b}{4}$ and $x=-1$ gives $C=\frac{a-b}{2}$. To find $B$, notice you have $(A+B)x^2=0x^2$, so $B=-A=-\frac{a+b}{4}$. The integral is then $$A\ln|x-1|+B\ln|x+1|-\frac{C}{x+1}+\text{constant}$$ which is a rational function when the logarithms vanish, i.e., $A=B=0 \Leftrightarrow a+b = 0 \Leftrightarrow a=-b$.
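As a quick numerical spot check of the decomposition (my own, not from the original; any test values work):
a <- 2; b <- 3                          ## arbitrary test parameters
A <- (a + b)/4; B <- -(a + b)/4; C <- (a - b)/2
x <- 2                                  ## any x other than 1 and -1
(a*x + b)/((x - 1)*(x + 1)^2)           ## left-hand side
A/(x - 1) + B/(x + 1) + C/(x + 1)^2     ## right-hand side; the two agree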
Show that there exists a constant $c>0$
Note that for $x\ge 1$ $$\sum_{n\ge x} \frac{1}{n^2} \le \frac{1}{x^2}+\int _x^{\infty} \frac {dt}{t^2}= \frac{1}{x^2}+\frac{1}{x} \le \frac{2}{x}$$ (the first term covers the smallest $n$, the integral dominates the remaining terms, and $1/x^2\le 1/x$ for $x\ge1$). Thus $c=2$ does the trick.
Norm on a Geometric Algebra
Let $C = A + B$. Then $|C|^2 = CC^\dagger = AA^\dagger + BB^\dagger + AB^\dagger + BA^\dagger$, right? The key isn't to eliminate those cross terms but to bound them. The classic statement of the triangle inequality is $$|C|^2 \leq (|A| + |B|)^2$$ Expand the right to get $$|C|^2 \leq |A|^2 + |B|^2 + 2 |A ||B|$$ The classic proof argues that $AB^\dagger+BA^\dagger \leq 2 |A| |B|$. It's not clear to me how exactly one might go about this for the case of a general blade; perhaps you could argue that the $A, B$ must share at least a common plane, and so one can be rotated to the other through a simple rotation, so that $AB^\dagger + BA^\dagger = 2|A||B| \cos \theta$, where $\theta$ is the angle between them. That would be exactly in analogue to the vector case. Regardless, I don't think you're meant to eliminate these scalars. Rather, you should bound them.
$5-3|x-6|\leq 3x -7$
We need $x\ge6$ and $x\ge5$, so $x\ge\max(6,5)=6$.
Convergence types in probability theory : Counterexamples
Convergence in probability does not imply convergence almost surely: Consider the sequence of random variables $(X_n)_{n \in \mathbb{N}}$ on the probability space $((0,1],\mathcal{B}((0,1]))$ (endowed with Lebesgue measure $\lambda$) defined by $$\begin{align*} X_1(\omega) &:= 1_{\big(\frac{1}{2},1 \big]}(\omega) \\ X_2(\omega) &:= 1_{\big(0, \frac{1}{2}\big]}(\omega) \\ X_3(\omega) &:= 1_{\big(\frac{3}{4},1 \big]}(\omega) \\ X_4(\omega) &:= 1_{\big(\frac{1}{2},\frac{3}{4} \big]}(\omega)\\ &\vdots \end{align*}$$ Then $X_n$ does not converge almost surely (since for any $\omega \in (0,1]$ and $N \in \mathbb{N}$ there exist $m,n \geq N$ such that $X_n(\omega)=1$ and $X_m(\omega)=0$). On the other hand, since $$\mathbb{P}(|X_n|>0) \to 0 \qquad \text{as} \, \, n \to \infty,$$ it follows easily that $X_n$ converges in probability to $0$. Convergence in distribution does not imply convergence in probability: Take any two random variables $X$ and $Y$ such that $X \neq Y$ almost surely but $X=Y$ in distribution. Then the sequence $$X_n := X, \qquad n \in \mathbb{N}$$ converges in distribution to $Y$. On the other hand, we have $$\mathbb{P}(|X_n-Y|>\epsilon) = \mathbb{P}(|X-Y|>\epsilon) >0$$ for $\epsilon>0$ sufficiently small, i.e. $X_n$ does not converge in probability to $Y$. Convergence in probability does not imply convergence in $L^p$ I: Consider the probability space $((0,1],\mathcal{B}((0,1]),\lambda|_{(0,1]})$ and define $$X_n(\omega) := \frac{1}{\omega} 1_{\big(0, \frac{1}{n}\big]}(\omega).$$ It is not difficult to see that $X_n \to 0$ almost surely; hence in particular $X_n \to 0$ in probability. As $X_n \notin L^1$, convergence in $L^1$ does not hold. Note that $L^1$-convergence fails because the random variables are not integrable. Convergence in probability does not imply convergence in $L^p$ II: Consider the probability space $((0,1],\mathcal{B}((0,1]),\lambda|_{(0,1]})$ and define $$X_n(\omega) := n 1_{\big(0, \frac{1}{n}\big]}(\omega).$$ Then $$\mathbb{P}(|X_n|>\epsilon) = \frac{1}{n} \to 0 \qquad \text{as} \, \, n \to \infty$$ for any $\epsilon \in (0,1)$. This shows that $X_n \to 0$ in probability. Since $$\mathbb{E}X_n = n \cdot \frac{1}{n} = 1,$$ the sequence does not converge to $0$ in $L^1$. Note that $L^1$-convergence fails although the random variables are integrable. (Just as a side remark: this example shows that convergence in probability also does not imply convergence in $L^p_{\text{loc}}$.)
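To see example II numerically, here is a short R simulation (my own illustration): the probability that $X_n$ is nonzero shrinks like $1/n$, while the mean stays pinned at $1$.
set.seed(1)
omega <- runif(1e6)              ## omega ~ Uniform(0,1]
for (n in c(10, 100, 1000)) {
  Xn <- n * (omega <= 1/n)       ## X_n = n on (0, 1/n], else 0
  cat("n =", n, " P(X_n > 0) ~", mean(Xn > 0), " E X_n ~", mean(Xn), "\n")
}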
How to show that $A(x_1, y_1)$ and $B(x_2, y_2)$ are at opposite ends of a diameter of a circle?
What you have done is find the perpendicular bisector of $(x_1,y_1)$ and $(x_2,y_2)$ (the set of points that are the same distance from both). I think the simplest way to find the circle is to note that the angle in a semicircle is a right angle, so the triangle $(x_1,y_1)(x,y)(x_2,y_2)$ has a right angle at $(x,y)$. Lines intersect at a right angle when their gradients have product $-1$, so $$ \frac{y-y_1}{x-x_1} \cdot \frac{y-y_2}{x-x_2} = -1, $$ at least where $x \neq x_1$ and $x \neq x_2$, and rearranging this gives the result, namely $$(x-x_1)(x-x_2)+(y-y_1)(y-y_2)=0,$$ an equation that also covers the points where $x=x_1$ or $x=x_2$.
What is the convergence of this series?
As pointed out in the comments, there is an algebra error that is easily fixed, and convergence follows. For an alternative proof, write out \begin{align*} (3n)! &= \big(1 \cdot 2 \cdots n\big)\big((n + 1) \cdot (n + 2) \cdots 2n\big) \big((2n + 1) \cdot (2n + 2) \cdots 3n\big) \\ &\ge n! \cdot (2^n n!) \cdot (3^n n!) \\ &= 6^n (n!)^3 \end{align*} The inequality follows from the fact that $n + k \ge 2k$ for each $k \in [1, n]$ (giving the factor $2^n n!$) and, similarly, $2n + k \ge 3k$ for the last factor. Hence, the series is bounded by the convergent geometric series $\sum_{n =1 }^{\infty} (1/6)^n$, which even gives rather rapid convergence.
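A quick empirical check of the bound $(3n)! \ge 6^n (n!)^3$ (my own illustration, in R):
n <- 1:10
factorial(3*n) / (6^n * factorial(n)^3)   ## every ratio is at least 1, and it grows quickly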
Properties of the "distinguished logarithm"
The result you state holds because $\mathbb{R}^d$ is simply connected. And (1) and (2) are both true, because you can verify in each case that the right hand side satisfies the properties of the distinguished logarithm on the left hand side, which is uniquely determined by those properties. Nevertheless, this notation has a potentially confusing issue which you have already pointed out, namely: $\ln \varphi$ is not to be confused with the composition of a function called "ln" with $\varphi$. So there is no function $\ln$, only a function $\ln \varphi$. This means that in (2), when you write $\ln \varphi(x) = ix$, the left side must be read as $(\ln \varphi)(x)$ and not as $\ln (\varphi(x))$. If you read it as the latter, then from $\ln(\varphi(x)) = ix$, you would get an apparent contradiction by substituting $x=0$ and then $x = 2 \pi$, since $\varphi(0) = \varphi(2 \pi)$ but $0 \ne 2 \pi i$. For your last question, the distinguished logarithm does select $ix$ in (2), but it is not as magical as it might initially seem. From the definition of the distinguished logarithm, it is a function $\psi(x)$ satisfying $e^{\psi(x)} = \varphi(x) = e^{ix}$. Of course $\psi(x) = ix$ satisfies that.
Confidence interval for variance
If you have a random sample of $n$ observations $X_i$ from $\mathsf{N}(\mu, \sigma^2),$ with $\mu$ known and $\sigma^2$ unknown, then $\frac{nV}{\sigma^2} \sim \mathsf{Chisq}(df = n),$ where $V = \frac{1}{n}\sum_{i=1}^n(X_i - \mu)^2.$ [I prefer to reserve the notation $S^2$ for the case where $\mu$ is unknown and estimated by $\bar X.$] Then, to make a $1 - \alpha = .95$ confidence interval (CI) for $\sigma^2,$ we can find values $L$ and $U$ that cut 2.5% from the lower and upper tails, respectively, of $\mathsf{Chisq}(n).$ That is, $L$ is quantile $\alpha/2 =.025$ and $U$ is quantile $1-\alpha/2 =.975.$ Thus $P\left(L \le \frac{nV}{\sigma^2} \le U \right) = .95.$ After some manipulation of inequalities we obtain $P\left(\frac{nV}{U} \le \sigma^2 \le \frac{nV}{L}\right) = .95,$ so that a 95% CI for $\sigma^2$ is of the form $\left(\frac{nV}{U},\; \frac{nV}{L}\right).$ This is sometimes called a 'probability-symmetrical' CI; while $V$ is contained in the CI, it is not at the center of the CI. (So the CI is not symmetrical about $V$.) As an example, suppose $n = 10$ so that $L = 3.247$ and $U = 20.483$ from R statistical software as below or from printed tables of the chi-squared distribution. Then if $V = 20.5,$ the 95% CI is $(10.01,\, 63.14).$ qchisq(c(.025,.975), 10) ## 3.246973 20.483177 10*20.5/qchisq(c(.975,.025),10) ## 10.00821 63.13573 For simplicity, this is the CI most commonly used in practice. However, two variations are possible: (a) One variation is to use software to search for values $L_c$ and $U_c$ so that $V$ turns out to be at the center of the CI. People accustomed to symmetrical CIs for $\mu$ of the form $\bar X \pm M,$ where the margin of error $M$ comes from a (symmetrical) normal or t distribution, may prefer to see CIs where the point estimate is at the center of the interval estimate. But in practice, many types of CIs are not symmetrical in this way, and there seems to be no practical advantage to symmetrical CIs--particularly when it is extra work to find them. (b) Perhaps a more useful variation is to search for $L_m$ and $U_m$ so that the length $\left(\frac{nV}{L_m} - \frac{nV}{U_m}\right)$ of the CI is as small as possible, while maintaining $P\left(\frac{nV}{U_m} \le \sigma^2 \le \frac{nV}{L_m}\right) = .95.$ Roughly speaking, larger sample sizes $n$ result in shorter CIs, and so there is some focus on the length of CIs. There may be practical reasons for taking the extra trouble to find the shortest CI for the population variance. Note: Response to comment. Here is a figure showing the PDF of $\mathsf{Chisq}(10).$ Quantiles .025 (3.247) and .975 (20.483) are shown as vertical red lines. Areas in each tail beyond the quantile are 2.5%, leaving 95% area between the red lines.
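Returning to variation (b): here is a minimal R sketch (my own, not from the original answer) that finds the shortest 95% CI for the same $n = 10$, $V = 20.5$ example, parametrizing the interval by the lower-tail probability $p$ while keeping total coverage at 95%:
n <- 10; V <- 20.5
len <- function(p) n*V/qchisq(p, n) - n*V/qchisq(p + 0.95, n)
p <- optimize(len, c(1e-6, 0.05 - 1e-6))$minimum
c(lower = n*V/qchisq(p + 0.95, n), upper = n*V/qchisq(p, n))  ## shortest 95% CI
The resulting interval is shorter than the probability-symmetrical one, at the cost of unequal tail probabilities.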
Why does covariant dot contravariant basis equal 1
Firstly, you can't just push the parts of a (partial) derivative $\partial u/\partial x$ around as you like: $\partial u/\partial x$ is one quantity that happens to be composed of a number of symbols. This is clearer if you write it as $u_x$ or $u_{,x}$. The next problem is that while for total derivatives or functions of one variable $$ \frac{dy}{dx} \frac{dx}{dy} = 1, $$ the same is not the case for partial derivatives, as the following example will illustrate: let $$ u = x+y, \qquad v=x-y. $$ Then inverting gives $$ x = \frac{u+v}{2} \qquad y = \frac{u-v}{2}. $$ So, $$ \frac{\partial u}{\partial x} = 1 \qquad \frac{\partial u}{\partial y} = 1 \\ \frac{\partial x}{\partial u} = \frac{1}{2} \qquad \frac{\partial y}{\partial u} = \frac{1}{2}, $$ and therefore $$ \frac{\partial u}{\partial x} \frac{\partial x}{\partial u} = \frac{1}{2} \qquad \frac{\partial u}{\partial y} \frac{\partial y}{\partial u} = \frac{1}{2}, $$ so the sum is $1$, but the products are not $1$ individually. A further instructive case is the even simpler-looking example $$ u = x + ay, \qquad v = y. $$ Then $$ x = u-av \qquad y = v. $$ So $ \frac{\partial v}{\partial x} = 0 $, but $$ \frac{\partial x}{\partial v} = -a. $$ This tells us two interesting things: firstly, what we already know about the product not being $1$. And secondly, that although $y=v$, $\partial x/\partial y = 0 $ ($x$ and $y$ are meant to be independent to start with, after all!), so $ \frac{\partial x}{\partial v} \neq \frac{\partial x}{\partial y} $. What does this mean? It means that the partial derivative very much depends on what is being held constant, in addition to what is allowed to vary. In the case of $\frac{\partial x}{\partial y}$, the coordinate other than $y$ (namely $x$) is being held constant, so of course this gives zero. But in $\frac{\partial x}{\partial v}$, $u$ is being held constant, and since this is not the same as $x$, the answer is different. This is one of the most difficult, and most important, things to understand about partial derivatives. There's a nice illustration of this on p.190 of Penrose's Road to Reality. The correct approach is to apply the chain rule for partial derivatives, namely that if $f$ is a function of $x,y,z$, which in turn are functions of $u$ and other variables, then $$ \frac{\partial f}{\partial u} = \frac{\partial x}{\partial u}\frac{\partial f}{\partial x} + \frac{\partial y}{\partial u}\frac{\partial f}{\partial y} + \frac{\partial z}{\partial u}\frac{\partial f}{\partial z}. $$ If then $f=u$, the result follows.
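Applying this chain rule with $f = u$ in the first example above recovers the identity that started the discussion; each product contributes $1/2$, and only their sum equals $1$: $$\frac{\partial u}{\partial u} = \frac{\partial x}{\partial u}\frac{\partial u}{\partial x} + \frac{\partial y}{\partial u}\frac{\partial u}{\partial y} = \frac{1}{2}\cdot 1 + \frac{1}{2}\cdot 1 = 1.$$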
Are sequences properly denoted as $\subset$ of a set, or $\in$ a set?
Neither $(x_n)\subset M$ nor $(x_n)\in M$ should be used, since neither is correct. The fact that we say "sequence in $M$" does not mean we should use the symbol for an element of a set. There are clear and precise definitions of the meanings of $\in$ and $\subset$, and they dictate what the correct usage is. Generally speaking, a sequence $(x_n)$ of elements of $M$ is not itself an element of $M$, and so $(x_n)\in M$ is not justified. Further, $(x_n)$ is a sequence, not a set, and thus $\subset$ certainly isn't justified. There is no universally accepted symbol to denote a sequence in a set $M$, though formally speaking a sequence in $M$ is precisely a function $\mathbb N \to M$. The set of all these functions is commonly denoted by $M^{\mathbb N}$, so writing $(x_n)\in M^{\mathbb N}$ is equivalent to the claim that $(x_n)$ is a sequence of elements of $M$. So technically, there is existing notation, but it is not often used.
Are these transformations of the $\beta^\prime$ distribution from $\beta$ and to $F$ correct?
2 and 3) Both of these transformations are correct; you can prove them with the cdf (cumulative distribution function) technique. I don't see any limitations on using them. Here is the derivation for the first transformation using the cdf technique. The derivation for the other will be similar. Let $X \sim \beta(\alpha, \beta)$. Let $Y = \frac{X}{1-X}.$ Then $$P\left(Y \leq y\right) = P\left(\frac{X}{1-X} \leq y\right) = P\left(X \leq y(1-X)\right) = P\left(X(1+y) \leq y\right) = P\left(X \leq \frac{y}{1+y}\right)$$ $$= \int_0^{\frac{y}{1+y}} \frac{x^{\alpha - 1} (1-x)^{\beta-1}}{B(\alpha,\beta)} dx.$$ Differentiating both sides of this equation then yields the pdf (probability density function) of $Y$. We have $$f_Y(y)=\frac{\left(\frac{y}{1+y}\right)^{\alpha - 1} \left(1-\frac{y}{1+y}\right)^{\beta-1}}{B(\alpha,\beta)} \frac{d}{dy} \left(\frac{y}{1+y}\right)$$ $$= \frac{\left(\frac{y}{1+y}\right)^{\alpha - 1} \left(\frac{1}{1+y}\right)^{\beta-1}}{B(\alpha,\beta)} \left(\frac{1}{1+y}\right)^2$$ $$= \frac{y^{\alpha - 1} \left(1+y\right)^{-\alpha-\beta}}{B(\alpha,\beta)},$$ which is the pdf of a $\beta'(\alpha,\beta)$ random variable.
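A Monte Carlo check of this transformation (my own illustration, in R, with arbitrary shape parameters):
set.seed(42)
a <- 2; b <- 3
xs <- rbeta(1e5, a, b)
y <- xs/(1 - xs)                       ## the transformation X/(1 - X)
hist(y[y < 8], breaks = 80, freq = FALSE, main = "beta-prime check")
curve(x^(a - 1) * (1 + x)^(-a - b) / beta(a, b), add = TRUE, col = "red")
The histogram of the simulated values tracks the claimed density closely (the truncation at 8 discards only a negligible tail for these parameters).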
What is the quotient of the absolute value metric in $\Bbb Z[\frac16]^+/\langle2,3\rangle$?
If your metric is based on absolute value, $d(x,y)=|x-y|$, then the pseudometric you get here is trivial, in other words $d([x],[y])=0$ for all $[x],[y]\in Q$. You don't need to use longer and longer sequences of "stepping stones" $[p_i]$. Let's fix an integer $m>0$. Set $$ p_1=x/6^m\in[x]\quad\text{and}\quad q_1=y/6^m\in[y]. $$ Then $d(p_1,q_1)=|x-y|/6^m$. Because we can choose $m$ as large as we wish, this already implies that the infimum is zero. You need those sequences of stepping stones for things like the following. Let $X=[0,1]$ with the usual metric. Let $\sim$ be the equivalence relation that, for all integers $n>1$, collapses all the intervals of the form $[1/n,1/(n-1))$ to a single point. What's the (pseudo)distance $d([0],[1])$? We have $1/2\sim 99/100$. Consequently, using the sequences $p_1=0$, $p_2=99/100$, $q_1=1/2$, $q_2=1$ to shorten one of the steps: $$d([0],[1])\le d(0,1/2)+d(99/100,1)\le 1/2+1/100.$$ This is less than the original distance $d(0,1)=1$ because we can equate $1/2$ with $99/100$ and instead of $1\to1/2\to0$ we can reroute $1\to99/100, 1/2\to0$. But we also have $1/3\sim49/100$, so with $p_1=0$, $p_2=49/100$, $p_3=99/100$, $q_1=1/3$, $q_2=1/2$, $q_3=1$ we get $$d([0],[1])\le d(0,1/3)+d(49/100,1/2)+d(99/100,1)=1/3+1/100+1/100.$$ By using longer and longer sequences of steps and moving the representatives even closer to the end points of the equivalence classes (i.e. distances less than the $1/100$ used above), it is easy to see that the infimum over all sequences of stepping stones becomes $0$. In other words, we get $d([0],[1])=0$.
Minimizing $\sqrt{(4\cos^3(x)-4\cos(x)+2)^2+(2\sin(x)-4\sin^3(x))^2}$
Hint Using the trigonometric identities for double and triple angles, you should find that $${(4\cos^3(x)-4\cos(x)+2)^2+(2\sin(x)-4\sin^3(x))^2}=-4 \cos (x)-2 \cos (2 x)+4 \cos (3 x)+6$$ Reuse the trig identities for $\cos(2x)$ and $\cos(3x)$ to get a simple polynomial in $\cos(x)$ (just as @Blue commented). On the other hand, you want to minimize $$g(x)=\sqrt{f(x)}\implies g'(x)=\frac{f'(x)}{2 \sqrt{f(x)}}$$ and since you want $g'(x)=0$, consider $f'(x)=0$.
diagonalizing a hadamard product
It is possible to re-formulate the problem without reference to the Hadamard product (whose drawback is that it has few properties by itself). Let $$\tag{1}x=(x_1,x_2,... x_n) \ \ \text{and} \ \ \Delta:=\operatorname{diag}(x_1,x_2,... x_n).$$ Then $$\tag{2}F \circ G = \Delta F \Delta.$$ Straightforward proof: left-multiplying (resp. right-multiplying) by $\operatorname{diag}(x_1,x_2,... x_n)$ amounts to multiplying the $k$-th row (resp. the $k$-th column) by $x_k$. Example with $n=2$: $$\pmatrix{a&b\\c&d} \circ \left(\pmatrix{x_1\\x_2}*\pmatrix{x_1&x_2}\right)=\pmatrix{x_1&0\\0&x_2} *\pmatrix{a&b\\c&d} *\pmatrix{x_1&0\\0&x_2}$$ (where $*$ denotes the usual matrix product). The question is now: are there relationships between the eigenvalues of $F$ and those of $\Delta F \Delta$? I am tempted to say that an answer is that the further $\Delta$ is from $I_n$ (the identity matrix), i.e., the further $x$ is from $(1,1,\cdots 1)$, the weaker the connection with the initial eigenvalues. Here is an example that illustrates this with the $3 \times 3$ matrix $$F=\pmatrix{0&-1&0\\0&0&-1\\3&3&1}$$ whose eigenvalues are $1$ and $\pm i\sqrt{3}$ (red points on the graphics below). Edit: I have taken 5000 realizations of $F \circ G$, where $G=XX^T$ with $X=(1+\epsilon_1,1+\epsilon_2,1+\epsilon_3)^T$, where the $\epsilon_k$s have (small) uniform variations in $[-1/4,1/4]$. Here are the 5000 results ($3 \times 5000$ points representing the eigenvalues). One can observe rather important fluctuations around the initial eigenvalues:
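The experiment is easy to reproduce; here is a minimal R sketch (my reconstruction, not the original code) plotting the eigenvalues of $\Delta F \Delta$ for 5000 random perturbations:
set.seed(1)
M <- rbind(c(0, -1, 0), c(0, 0, -1), c(3, 3, 1))   ## the matrix F above
ev <- replicate(5000, {
  x <- 1 + runif(3, -0.25, 0.25)                   ## X = (1+eps_1, 1+eps_2, 1+eps_3)
  eigen(diag(x) %*% M %*% diag(x))$values          ## eigenvalues of Delta F Delta
})
plot(Re(ev), Im(ev), pch = ".", xlab = "Re", ylab = "Im")
points(c(1, 0, 0), c(0, sqrt(3), -sqrt(3)), col = "red", pch = 19)  ## 1 and +/- i*sqrt(3)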
Maximum number of codewords
For codewords of length n, there are n error patterns with weight 1. That is, (00...01), (00...10), ..., (10...00). If you have to correct each of them, then you'll need n distinct syndromes. But if you only need to detect them, then all you need is a nonzero syndrome, which can be realized with only 1 parity-check bit. Thus, the length of your message is (n-1), which means you have $2^{(n-1)}$ codewords at most.
Why $||f||=\mathrm{sup}_{||g||=1}|\langle{f,g}\rangle|$
Assume $f \neq 0$ (for $f=0$ both sides vanish) and choose $g={f \over \|f\|}$. Then $\|g\| =1$ and $\langle f,g \rangle = \|f\|$, so the supremum is at least $\|f\|$. Conversely, by the Cauchy-Schwarz inequality, $|\langle f,g\rangle| \le \|f\|\,\|g\| = \|f\|$ for every $g$ with $\|g\|=1$, so the supremum equals $\|f\|$.
Recursion- to pave 2xn rectangle
Consider a $2 \times (n+2)$ rectangle tiled in this way, and consider in particular the way the first column is tiled. If the first column is tiled with a vertical $2 \times 1$ tile, there are $a_{n+1}$ ways to tile the remaining grid. If the first column is tiled with two $1 \times 1$ tiles, there are $a_{n+1}$ ways to tile the remaining grid. If the first column is tiled with a $1 \times 1$ tile and half of a $2 \times 1$ tile, first there are $2$ ways to do this ($1 \times 1$ on top and $2 \times 1$ on bottom, or vice versa), and second, there are $b_{n+1}$ ways to tile the remaining grid. In total there are $2b_{n+1}$ ways to tile in this case. Otherwise, the first column is tiled with two horizontal halves of $2 \times 1$ tiles. In this case, the first two columns are taken up, and there are $a_{n}$ ways to tile the remaining grid. In total, this shows that $$ a_{n+2} = a_{n+1} + a_{n+1} + 2b_{n+1} + a_n $$ I'll let you prove $b_{n+1} = a_n + b_n$ yourself. Hint: consider the slot below the missing corner, and how it could be tiled.
Prove that $X_n/\lambda_n \to$ 1 in probability for $X_n \sim \text{Pois}(\lambda_n)$
Chebyshev's inequality is indeed a very good idea, but you mixed up several things: first of all, we would like to estimate $$p:= \mathbb{P} \left( \left| \frac{X_n}{\lambda_n}-1 \right| \geq \epsilon \right)$$ (and not $\mathbb{P}(|X_n-\lambda_n| \geq \epsilon)$). Obviously, $$p = \mathbb{P}(|X_n-\lambda_n| \geq \epsilon \lambda_n)$$ and therefore it follows from Chebyshev's inequality that $$p \leq \frac{1}{(\epsilon \lambda_n)^2} \text{var} \, (X_n) = \frac{1}{\epsilon^2} \frac{1}{\lambda_n}.$$ Letting $n \to \infty$ finishes the proof.
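A small simulation makes the concentration visible (my own illustration, in R, with $\lambda_n = 10^n$ and $\epsilon = 0.05$):
set.seed(7)
for (lam in 10^(1:4)) {
  frac <- mean(abs(rpois(1e4, lam)/lam - 1) > 0.05)
  cat("lambda =", lam, " P(|X/lambda - 1| > 0.05) ~", frac, "\n")
}
The estimated probabilities decrease toward 0 as lambda grows, in line with the $1/(\epsilon^2 \lambda_n)$ bound.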
Prove that the rate of convergence of a sequence is unique
Suppose $e_n > 0$ and $e_n \rightarrow 0$ for $n \rightarrow \infty$, $n \in \mathbb{N}$. Let us assume that the order of convergence is both $p_1\geq1$ and $p_2 \geq 1$. We have to show that $p_1 = p_2$. By assumption we have $$ \frac{e_{n+1}}{e_n^{p_i}} \rightarrow c_i > 0, \quad n \rightarrow \infty, \quad n \in \mathbb{N}.$$ Suppose, for contradiction, that $p_1 \neq p_2$; without loss of generality $p_1 < p_2$. (If $p_1 = 1$, the definition also requires $c_1 \in (0,1)$, but this will not be important here.) It follows that $$ \frac{e_{n+1}}{e_n^{p_1}} = \frac{e_{n+1}}{e_n^{p_2}} e_n^{p_2 - p_1} \rightarrow c_2 \cdot 0 = 0, \quad n \rightarrow \infty, \quad n \in \mathbb{N},$$ because $p_2 - p_1>0$ and $e_n$ tends to zero. This shows that the order is not $p_1$. We have a contradiction. Hence $p_1 = p_2$.
Are these subsets of $\mathbb{R}$ homeomorphic?
Hint: If $f:A\to B$ is a continuous map between topological spaces, and $R$ is a connected component of $A$, then there is a connected component $S$ of $B$ such that $f(R)\subseteq S$. What does this imply about how homeomorphisms map the connected components of spaces? Do you see how to apply this to your situation?
graph of this is rectangular hyperbola?
Not quite. The shape is indeed a hyperbola with horizontal and vertical asymptotes, just as $xy=c^2$ has, but the center is moved. The center of $xy=c^2$ is at the origin, while your function has its center at the point $(f,f)$. The vertices of your graph are at the origin and $(2f,2f)$. We can show this by rewriting your equation as $$(x-f)(y-f)=f^2$$ And, as @Blue points out in a comment, there is a "hole" at the origin: the origin is on the hyperbola but not in the graph of your original equation. This can also be shown by the standard conic-section technique of a change of variables causing a rotation of the axes of $x+y=xy/f$ by $45^\circ$. Here is a graph where $f=1$.
Find the real roots for $\displaystyle \sqrt[4]{386-x}+\sqrt[4]{x}=6.$
Very nice! The only thing that I can add to this is that I would have dealt with $u^4+v^4$ in a way which is, I think, simpler than yours. Observe that\begin{align}u^4+v^4&=(u+v)^4-uv(4u^2+4v^2+6uv)\\&=(u+v)^4-uv\bigl(4(u+v)^2-2uv\bigr)\\&=1\,296-uv(144-2uv)\\&=1\,296-144uv+2(uv)^2.\end{align}So, this leads to the equation$$2(uv)^2-144uv+1\,296=386,$$which is equivalent to$$(uv)^2-72uv+455=0.$$
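Finishing the computation numerically (my own check, in R): the quadratic gives $uv \in \{7, 65\}$; only $uv = 7$ is compatible with $uv \le \left(\frac{u+v}{2}\right)^2 = 9$, so $u, v = 3 \pm \sqrt2$ and $x = (3 \pm \sqrt2)^4$.
polyroot(c(455, -72, 1))        ## uv = 7 or 65; only uv = 7 is admissible
s <- Re(polyroot(c(7, -6, 1)))  ## u and v solve s^2 - 6 s + 7 = 0
x <- s^4
x                               ## the two real roots, approximately 6.33 and 379.67
(386 - x)^(1/4) + x^(1/4)       ## both evaluate to 6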
Injective homomorphism
If you intended ring homomorphisms, then I second Curufin's comment. But I assumed that, since you wrote "multiplicative", you intend your homomorphisms to just preserve multiplication, not necessarily addition. In that case, such homomorphisms would exist as long as the characteristic of $\mathbb F$ is not 2: the image of $-1$ must square to $1$ yet differ from the image of $1$, and in a field the only square roots of $1$ are $\pm1$, which coincide in characteristic $2$. You can define a homomorphism by sending each prime number $q$ to an arbitrarily chosen polynomial $f_q(x)$ in $\mathbb F[x]$ and then extending the map to all integers using the prime factorization: send $\pm q_1^{k_1}\dots q_r^{k_r}$ to $\pm f_{q_1}(x)^{k_1}\dots f_{q_r}(x)^{k_r}$. To make this homomorphism injective, choose the $f_q$'s a little carefully; for example, choose distinct, irreducible polynomials for distinct primes $q$.
A trigonometric equation - Finding the value of theta
You need the double-angle formulae, e.g. here: $ \cos (2 \theta) = 1 - 2 (\sin (\theta))^2 $. Then let $x = \sin (\theta)$ to get $$ 1 = \cos (2 \theta) \sin (\theta) = (1 - 2 (\sin (\theta))^2) \sin (\theta) = (1-2x^2)x. $$ This rearranges to $2x^3 - x + 1 = 0$, which factors as $(x+1)(2x^2-2x+1)=0$; the quadratic factor has negative discriminant, so the only real root is $x = -1$. Hence $-1 = \sin (\theta)$, or $$ \theta = \frac32 \pi, $$ and of course you may add multiples of $2 \pi$.
Computing the error function for Euler's number
From Taylor's theorem, you know that $$ e - \sum_{i=0}^n \frac{1}{i!} = \dfrac{e^{\xi_n}}{(n+1)!}, \quad \xi_n \in [0,1], $$ so your condition would be equivalent to $$ \dfrac{e^{\xi_n}}{(n+1)!} < \varepsilon. $$ Now, in the best-case scenario, setting $\xi_n = 0$, we would get $$ (n+1)! > \frac{1}{\varepsilon}, $$ and in the worst-case scenario, $\xi_n=1$, we would get $$ (n+1)! > \frac{e}{\varepsilon}. $$ Solving these inequalities will provide bounds for $f(\varepsilon)$ but in general will not give you the best $n$ that you are looking for.
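For completeness, a small R sketch (mine; it assumes $f(\varepsilon)$ means the number of terms guaranteed to make the error below $\varepsilon$) that finds the least $n$ satisfying the worst-case inequality:
f <- function(eps) {
  n <- 0
  while (factorial(n + 1) <= exp(1)/eps) n <- n + 1  ## smallest n with (n+1)! > e/eps
  n
}
f(1e-6)  ## returns 9: the error e^xi/10! is then certainly below 1e-6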
Distribution of $\log(x)$ for exponentially distributed $x$.
Let $Y = \ln X$. The cdf for $Y$ is $$G(y) = P (Y \le y ) = P (\ln X \le y ) = P (X \le e^y) = F (e^y) = 1 - e^{-\lambda e^{y}}$$
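Differentiating this cdf (a step I am adding; it is immediate) yields the density of $Y$: $$g(y) = G'(y) = \lambda e^{y} e^{-\lambda e^{y}}, \qquad y \in \mathbb{R}.$$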
Combinatorics identity proof by induction
Pascal's identity is $$ \binom{n-1}{m}+\binom{n-1}{m-1}=\binom{n}m $$ Let $n=k+2$, $m=r+1$.
Prove that $(X\times Y)\setminus (A\times B)$ is connected
We can simplify Davide Giraudo's answer by noting that we only need to show that $(a,b)$ is in the same connected component as every other point. So, start by fixing $a \in X \setminus A$ and $b \in Y \setminus B$ as Davide does, and consider an arbitrary point $(x,y) \in (X \times Y) \setminus (A \times B)$. If $x \notin A$, then $\{x\} \times Y$ is connected and contains both $(x,y)$ and $(x,b)$, while $X \times \{b\}$ is connected and contains both $(x,b)$ and $(a,b)$. Thus, $(\{x\} \times Y) \cup (X \times \{b\})$ is connected and contains both $(x,y)$ and $(a,b)$. Otherwise, $x \in A \implies y \notin B$. Thus, analogously, $X \times \{y\}$ is connected and contains both $(x,y)$ and $(a,y)$, while $\{a\} \times Y$ is connected and contains both $(a,y)$ and $(a,b)$, and so $(X \times \{y\}) \cup (\{a\} \times Y)$ is connected and contains both $(x,y)$ and $(a,b)$.
Check directed graph for strong connectivity
There are linear-time algorithms, based on depth-first search, that determine whether a directed graph is strongly connected. The simplest: pick any vertex $v$, run one DFS from $v$ in $G$ and one in the reverse graph $G^T$; the graph is strongly connected iff both searches reach every vertex. This is the core idea of Kosaraju's strongly-connected-components algorithm; Tarjan's SCC algorithm is another linear-time option.
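For illustration, here is a small R sketch of the two-DFS test (my own; for simplicity the reverse graph is built quadratically here rather than in linear time):
reach <- function(adj, s) {
  seen <- rep(FALSE, length(adj)); stack <- s
  while (length(stack) > 0) {
    v <- stack[length(stack)]; stack <- stack[-length(stack)]
    if (!seen[v]) { seen[v] <- TRUE; stack <- c(stack, adj[[v]]) }
  }
  seen
}
strongly_connected <- function(adj) {
  radj <- lapply(seq_along(adj), function(v)
    which(vapply(adj, function(nb) v %in% nb, logical(1))))  ## reversed edges
  all(reach(adj, 1)) && all(reach(radj, 1))
}
strongly_connected(list(2, 3, 1))  ## a directed 3-cycle: TRUE
strongly_connected(list(2, 3, 2))  ## nothing returns to vertex 1: FALSE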
question about counting recursively
A valid string of length $k$ either ends in a digit preceded by a valid string of length $k-1$ (the $10N_{k-1}$ term comes from this), or has its $(k-1)$th place filled with a sign, in which case the first $k-2$ symbols constitute a valid string of length $k-2$ (this gives $40N_{k-2}$, as there are $4$ ways to choose the sign for the $(k-1)$th place and $10$ ways to choose a digit for the $k$th place).
show $\frac{1}{n}\sum (X_i - \mu_i) \xrightarrow{L^2} 0$, $X_i$ independent with mean $\mu_i$
Squaring $\sum_{i=1}^n (X_i-\mu_i)$ and integrating, we get $\sum_{i=1}^n V(X_i)$, because the covariances vanish by independence. So the goal is to show $\sum_{i=1}^n V(X_i)=o(n^2)$, given that $V(X_i)=o(i)$. This is pretty standard: given $\epsilon>0$, find $k$ such that $V(X_i)<\epsilon \, i$ for $i>k$. Then $\sum_{i=1}^k V(X_i)=O(1)$ and $\sum_{i=k+1}^n V(X_i)<\epsilon \sum_{i=k+1}^n i< \epsilon\, n^2$
What is the conceptual reasoning behind order of operations for polynomials?
If I understand correctly, you are asking about this $$ a^2-2ab+b^2 $$ and whether there is a 'plus $-2ab$' or 'minus $2ab$'. The short answer is that it doesn't matter—they mean exactly the same thing. This is because subtraction is addition of a negative number. It is absolutely fine to think of subtraction as taking away, but you must remember that $a-2ab$ is just a shorthand for $a+(-2ab)$. Here is an excerpt from the Wikipedia article on subtraction: In advanced algebra and in computer algebra, an expression involving subtraction like A − B is generally treated as a shorthand notation for the addition A + (−B). Thus, A − B contains two terms, namely A and −B. This allows an easier use of associativity and commutativity. All that being said, I think it is more intuitive to think of subtraction as taking away, and so I would generally think of $a^2-2ab$ as taking $2ab$ away from $a^2$. But it would be equally valid to think of this as addition of $-(2ab)$. Indeed, one could argue it is more valid, given that it is how subtraction is formally defined. To answer another of your questions: How do we determine whether to multiply $ab$ by $−2$, or to multiply $2ab$ together and then minus that from $(x^2)^2$? Again, it doesn't matter, and so nothing really needs to be determined. Let's say $a=7$ and $b=3$. Here are two equally valid methods of interpreting $a^2-2ab$: Method $1$ (Subtraction is taking away): $$ a^2-2ab=a^2-2(7)(3)=a^2-42 $$ Method $2$ (Subtraction is adding a negative number): $$ a^2-2ab=a^2+(-2ab)=a^2+(-2\times7\times3)=a^2+(-42)=a^2-42 $$ (Note that the final step, where we rewrite $a^2+(-42)$ as $a^2-42$ is perhaps more of an aesthetic choice than anything else. $a^2-42$ can seen as a neat shorthand for $a^2+(-42)$.) In method $2$, we multiply $ab$ by $-2$, and then perform the subtraction. But because these methods are equally valid, we get the same final answer.
Find the value of $E[X^2Y]$.
Observe that $$ E(X^2Y)=\sum x^2yP(X=x,Y=y) $$ where the summation is over the set $(x,y)\in\{0,1\}\times\{3,4\}$ and $P(X=x,Y=y)$ can be found from your joint distribution table.