Prove that $a_1^2+a_2^2+a_3^2=b_1^2+b_2^2+b_3^2=c_1^2+c_2^2+c_3^2=1$, $a_1b_1+a_2b_2+a_3b_3=b_1c_1+b_2c_2+b_3c_3=c_1a_1+c_2a_2+c_3a_3=0$
Hint: By the assumption of the problem, your matrix is orthogonal, i.e. $AA^{t}=I$. Then we also have $A^{t}A=I$.
Two geometrical objects in the same-dimensional plane are homeomorphic.
Exhibit a particular homeomorphism in each case and prove that it's a homeomorphism. The mapping $x\mapsto (x,x^2)$ is a homeomorphism from the line to the parabola. If the circle and the ellipse have the same center, draw a ray from that center, and map the point where the ray intersects the circle to the point where the ray intersects the ellipse. That's a homeomorphism. Then there's the moderately nitpicking work of proving that those mappings are bijections and that they're continuous in both directions. In the case of the circle and the ellipse, there may be other homeomorphisms that are easier to do that last part with. Maybe $(\cos\theta,\sin\theta)\mapsto(a\cos\theta,b\sin\theta)$?
Expression of the sum of a power series
There is an error after the second $=$ sign; it should read $\alpha C^2(1+\alpha+\alpha^2+\cdots+\alpha^{k-2})+\alpha^{k-1}U_0$. Now, use the fact that $$1+\alpha+\alpha^2+\cdots+\alpha^{k-2}=\frac{1-\alpha^{k-1}}{1-\alpha}.$$
Straight Line Equation in Complex Plane
Let $z=x+iy$. Then, $(m+i)(x+iy)+b=mx+ix+imy-y+b$. Thus, looking at the real part, we get $mx-y+b=0$, also known as $y=mx+b$.
Find $\lim_{x\rightarrow 0}\frac{\exp{(ax)}-\exp{(bx)}+(b-a)x}{1-\cos{(x)}}$
Using L'Hopital's rule twice works well: $$\lim_{x \to 0} \frac{e^{ax}-e^{bx}+(b-a)x}{1-\cos(x)}$$ $$=\lim_{x \to 0} \frac{ae^{ax}-be^{bx}+(b-a)}{\sin(x)}$$ $$=\lim_{x \to 0} \frac{a^2 e^{ax}-b^2 e^{bx}}{\cos(x)}$$ Can you continue?
Probability of X belongs to [a,c] given that X is uniformly distributed on [a,b]
The sets $[2,5]$ and $(5,7]$ are disjoint, so the probability of $X$ belonging in their union is the sum of their respective probabilities. $$P_X(X \in [2,7])=P_X(X\in [2,5]\cup(5,7])=P_X(X \in [2,5])+\underbrace{P_X(X\in(5,7])}_{=0}$$ So you just have to compute $$P_X(X \in [2,5])=\frac{1}{10}\int_2^5dx=\frac{3}{10}$$
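The numbers in the answer are consistent with $X$ uniform on $[-5,5]$ (density $1/10$, support ending at $5$); that choice of interval is an assumption here, made only so the sketch is runnable. A quick Monte Carlo check:

```python
import random

# Assumption (for illustration): X ~ Uniform[-5, 5], so the density is 1/10
# and P(X in (5, 7]) = 0, matching the answer's numbers.
random.seed(0)
N = 200_000
samples = [random.uniform(-5, 5) for _ in range(N)]
est = sum(1 for x in samples if 2 <= x <= 7) / N  # empirical P(X in [2, 7])
exact = 3 / 10                                    # (5 - 2) * (1/10)
print(est, exact)
```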
Find the area of cd without a diameter or radius
Suppose the center hole's radius is $\,r\,$; then $$\pi r^2=(0.0136)\pi(53+r)^2\implies 0.9864 r^2-1.4416r-38.2024=0$$ Now just solve the above simple albeit pretty annoying quadratic in $\,r \,$ ...
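A quick numerical solve of that quadratic, building the coefficients straight from the relation $\pi r^2=0.0136\,\pi(53+r)^2$ so no transcription slips in (the positive root is the one we want):

```python
import math

# Expand pi*r^2 = 0.0136*pi*(53 + r)^2 into a*r^2 + b*r + c = 0.
a = 1 - 0.0136          # 0.9864
b = -2 * 0.0136 * 53    # -1.4416
c = -0.0136 * 53**2     # -38.2024
disc = b * b - 4 * a * c
r = (-b + math.sqrt(disc)) / (2 * a)  # positive root
print(r)  # roughly 7
```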
Can we prove we know all the ways to prove things?
I feel it is important to clarify some definitions, because there are two distinct notions of completeness: semantic completeness and syntactic completeness. The answer is yes with respect to semantic completeness, but no with respect to syntactic completeness. It seems to me what you are after is semantic completeness. In what follows, we suppose that we have specified a logical (deductive) system, and a theory in this logical system. For example, take first order logic with ZFC set theory, or first order logic with group theory.

Semantic completeness

A statement $\phi$ in the theory is provable if one can derive it from the rules of the deductive system. Now for each theory there is the notion of a model of that theory, and given a specific model we can ask whether a statement is satisfied in this model. I won't attempt to define this, but here are some examples: a group is a model for the theory of groups, the set of natural numbers is a model for Peano arithmetic, a set-theoretic universe is a model for set theory. So we define a statement to be valid in a theory if it is satisfied in every model of the theory. Two reasonable questions we could ask about our system are:

Soundness: Does provability entail validity?

(Semantic) completeness: Does validity entail provability?

In general we always ask that our system is sound. For completeness, Gödel's completeness theorem tells us that any theory over first order logic is complete in this sense. For example, the statement $\phi := \forall x,y. (x*y)^{-1} = y^{-1}*x^{-1}$ is something that is true in every group, and indeed, it is easily derivable from the axioms of group theory. This form of completeness also holds for ZFC set theory.

Syntactic completeness

However there is yet another notion of completeness. A theory is (syntactically) complete if for any statement $\phi$, we can either derive $\phi$ or $\neg\phi$ in our system.
Now we know that $\phi$ can only be derivable if it is satisfied in every model of our theory, and similarly for $\neg \phi$. An interesting question is therefore whether our theory has a statement $\phi$ which is satisfied in some model $M$, and its negation $\neg\phi$ is satisfied in another model $N$. If this is the case, then we can deduce that neither $\phi$ nor $\neg \phi$ is derivable in our system (note: this says nothing about the derivability of $\phi\vee\neg\phi$)! For the theory of groups, the statement $\forall x,y. x* y = y * x$ is such a statement, because some groups are abelian and others are not. Now Gödel's incompleteness theorem tells us that any theory which is strong enough to do arithmetic is incomplete in this way. In fact, there are numerous axioms of set theory which hold in some models of set theory but not others, such as the axiom of choice or the continuum hypothesis, meaning that they are independent of the theory.
Inequality proof (strange)
Let $a+b+c+d=4u$, $ab+ac+bc+ad+bd+cd=6v^2$ and $u^2=tv^2$. Thus, since $u^2\geq v^2$ is equivalent to $$\sum_{sym}(a-b)^2\geq0,$$ we get $t\geq1$, and since by AM-GM $$ab+ac+bc+ad+bd+cd\geq6\sqrt{abcd},$$ it's enough to prove that $$(a+b+c+d)\sqrt{(a^2+b^2+c^2+d^2)^3}-(a^2+b^2+c^2+d^2)^2\geq16v^4$$ or $$2u\sqrt{(4u^2-3v^2)^3}-(4u^2-3v^2)^2\geq v^4$$ or $$t(4t-3)^3\geq(8t^2-12t+5)^2$$ or $$(t-1)(48t^2-68t+25)\geq0$$ and we are done!
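The last two algebraic steps can be machine-checked exactly: the leading $64t^4$ terms cancel, so $t(4t-3)^3-(8t^2-12t+5)^2$ is a cubic, and agreeing with $(t-1)(48t^2-68t+25)$ at more than four rational points proves the identity; the quadratic factor has negative discriminant, hence is positive for every real $t$.

```python
from fractions import Fraction

# Both sides of the claimed identity; exact rational arithmetic.
def lhs(t):
    return t * (4*t - 3)**3 - (8*t**2 - 12*t + 5)**2

def rhs(t):
    return (t - 1) * (48*t**2 - 68*t + 25)

points = [Fraction(k, 3) for k in range(-5, 10)]  # 15 sample points
assert all(lhs(t) == rhs(t) for t in points)

# Discriminant of 48t^2 - 68t + 25: negative, so that factor is always > 0.
disc = 68**2 - 4 * 48 * 25
print(disc)  # -176
```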
Get a Probability Distribution Function of a Multivariate Function
$$f_{X}(x) = \frac{1}{5}\,\mathbb{I}(x\in[21,26])$$ and $$f_{Y}(y) = \frac{1}{\sqrt{2\pi}}\exp(-\frac{1}{2}y^2)$$ $X$ is independent of $Y$ $\Rightarrow f_{X,Y}(x,y) = f_X(x)f_Y(y)$. Now, let $U = Y$ and $A=\frac{X}{Y}$. Therefore, $Y = U$ and $X = AU$, and $$J = \frac{\partial(x,y)}{\partial(a,u)} = \begin{pmatrix} u & a \\ 0 & 1\end{pmatrix} \Rightarrow |\det J| = |u|$$ So, $$f_{U,A}(u,a) = f_{X,Y}(au, u)\,|u| = \frac{1}{5}\mathbb{I}(au \in [21,26])\frac{1}{\sqrt{2\pi}}\exp(-\frac{1}{2}u^2)\,|u|$$ Finally, since $\int u\,e^{-u^2/2}\,du=-e^{-u^2/2}$, $$f_{A}(a) = \int_{u:\, au \in [21,26]}f_{U,A}(u,a)\,du = \frac{1}{5\sqrt{2\pi}}\left(\exp\Big(-\frac{441}{2a^2}\Big)-\exp\Big(-\frac{676}{2a^2}\Big)\right) \text{ for } a \neq 0$$ (the computation gives the same expression for $a>0$ and $a<0$, using $441=21^2$ and $676=26^2$).
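As a sanity check: assuming $X$ is a genuine uniform on $[21,26]$ (density $\tfrac15$), the density of $A=X/Y$ works out to $\frac{1}{5\sqrt{2\pi}}\big(e^{-441/(2a^2)}-e^{-676/(2a^2)}\big)$ for $a\neq0$, and it should integrate to $1$ over the real line; a crude Riemann sum confirms this.

```python
import math

# Candidate density of A = X/Y (X ~ Uniform[21,26] with density 1/5,
# Y standard normal) -- an assumption made here so the check is concrete.
def f_A(a):
    if a == 0:
        return 0.0
    c = 1 / (5 * math.sqrt(2 * math.pi))
    return c * (math.exp(-(21 / a)**2 / 2) - math.exp(-(26 / a)**2 / 2))

# Midpoint Riemann sum on [-20000, 20000]; f_A(a) ~ const/a^2 for large |a|,
# so the tail beyond that window contributes only about 1e-3 of the mass.
step = 0.1
total = sum(f_A(-20000 + (k + 0.5) * step) for k in range(400_000)) * step
print(total)  # close to (slightly below) 1
```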
Approximation of $L^1$ function with compactly supported smooth function with same mass and same uniform bounds
Yes, you can find such a $g$. First, for any $\epsilon_1 > 0$ we can find a $g_0 \in C_c(\mathbb{R})$ such that $\lVert f-g_0\rVert_{L^1} < \epsilon_1$. Then define $g_1(x) = \max \{ 0, \min \{ 1, g_0(x)\}\}$; this gives $g_1 \in C_c(\mathbb{R})$ with $0 \leqslant g_1 \leqslant 1$, and for every $x\in \mathbb{R}$ we have $\lvert f(x) - g_1(x)\rvert \leqslant \lvert f(x) - g_0(x)\rvert$, so $$\lVert f-g_1\rVert_{L^1} \leqslant \lVert f-g_0\rVert_{L^1} < \epsilon_1.$$ Next, we mollify $g_1$ to obtain $g_2\in C_c^{\infty}(\mathbb{R})$ with $0 \leqslant g_2 \leqslant 1$ and $\lVert f-g_2\rVert_{L^1} < 2\epsilon_1$. For that, take $\varphi \in C_c^{\infty}(\mathbb{R})$ with $0 \leqslant \varphi$ and $\int_{\mathbb{R}} \varphi(x)\,dx = 1$, and define $$h_{\eta}(x) = (g_1 \ast \varphi_{\eta})(x) = \int_{\mathbb{R}} g_1(x-\eta\cdot y)\varphi(y)\,dy$$ for $\eta > 0$. Then $h_{\eta} \in C_c^{\infty}(\mathbb{R})$, $0 \leqslant h_{\eta} \leqslant 1$, and $h_{\eta} \to g_1$ as $\eta \to 0$ in $L^1$ (and uniformly), so we can choose $\eta$ small enough that $\lVert g_1 - h_{\eta}\rVert_{L^1} < \epsilon_1$ and set $g_2 = h_{\eta}$. If we already have $\lVert g_2\rVert_{L^1} = \lVert f\rVert_{L^1}$ we are done now. If $\lVert g_2\rVert_{L^1} > \lVert f\rVert_{L^1}$, we multiply with the scaling factor $\frac{\lVert f\rVert_{L^1}}{\lVert g_2\rVert_{L^1}}$ to obtain $g_3 \in C_c^{\infty}(\mathbb{R})$ with $0 \leqslant g_3 \leqslant \frac{\lVert f\rVert_{L^1}}{\lVert g_2\rVert_{L^1}} < 1$ and $\lVert f - g_3\rVert_{L^1} < 4\epsilon_1$. If finally $\lVert g_2\rVert_{L^1} < \lVert f\rVert_{L^1}$, choose $h \in C_c^{\infty}(\mathbb{R})$ with $0 \leqslant h \leqslant 1$ and $\int h(x)\,dx = \lVert f\rVert_{L^1} - \lVert g_2\rVert_{L^1}$ and set $g_3(x) = g_2(x) + h(x-s)$, where $s\in \mathbb{R}$ is so large that the supports of $g_2$ and $h(\,\cdot\, - s)$ don't intersect.
The latter guarantees that we still have $0 \leqslant g_3 \leqslant 1$, and the positivity of $g_2$ and $h$ ensures $\lVert g_3\rVert_{L^1} = \lVert g_2\rVert_{L^1} + \lVert h\rVert_{L^1} = \lVert f\rVert_{L^1}$. Since $\lVert g_2\rVert_{L^1} \geqslant \lVert f\rVert_{L^1} - 2\epsilon_1$, it follows that here too we have $\lVert f - g_3\rVert_{L^1} < 4\epsilon_1$, so for a given $\epsilon > 0$, we choose $\epsilon_1 = \frac{1}{4}\epsilon$ for our construction.
Looking for a proper math notation
I'd just say $f: \mathbb{B} \to X$, where $X \subset \mathbb{R}$ is a finite set of reals.
Closure of $f\mapsto{\rm i}f'$
For simplicity, forget about the imaginary unit $i$; we are to show that, for $f_{n}\rightarrow f$ in $L^{2}$ and $f_{n}'\rightarrow g$ in $L^{2}$, it follows that $g=f'$ a.e. Since $\|\cdot\|_{L^{1}[0,1]}\leq\|\cdot\|_{L^{2}[0,1]}$, it is easy to see that \begin{align*} \int_{0}^{x}g(t)dt=\lim_{n\rightarrow\infty}\int_{0}^{x}f_{n}'(t)dt=\lim_{n\rightarrow\infty}f_{n}(x) \end{align*} But $f_{n}\rightarrow f$ in $L^{2}$ entails an almost everywhere pointwise convergent subsequence $f_{n_{k}}(x)\rightarrow f(x)$, and this leads to \begin{align*} \int_{0}^{x}g(t)dt=\lim_{k\rightarrow\infty}f_{n_{k}}(x)=f(x)~~~~\text{a.e.} \end{align*} We are done.
Can this system of linear equations have infinite solutions?
It can have infinitely many solutions (e.g. when $a=1, b=1$) and it can also have no solution (e.g. when $a=2, b=2$). For the case $a=1, b=1$, the solution set is given by $(t,1-t,0)$, where $t$ is any real number.
Proving that something is not cyclic
If $\gcd(m,n) >1$ then $L:=\text{lcm}(m,n) = \dfrac{mn}{\gcd(m,n)} < mn$. Since $m\mid L$ and $n\mid L$, we have $L(x,y) = (Lx,Ly) = (0,0)$ for all $(x,y)\in \mathbb{Z}_m\times\mathbb{Z}_n$. Therefore there's no element of order $mn$, i.e. $\mathbb{Z}_m\times\mathbb{Z}_n$ is not cyclic.
Indices and Bases: Solve "x"
Well, from $2^x - 3^{x-1} = -(x+2)^2$, $2^x = 3^{x-1} - (x+2)^2$. The LHS is always even, and $3^n$ is always odd. Therefore $(x+2)^2$ has to be odd, so $x$ is odd. Also, from the first equation, $2^x < 3^{x-1}$, which holds for integers $x \ge 3$. Since $x$ is odd, the condition becomes $x \ge 3$ with $x$ odd. So, $$x \in \{3,5,7,9,\dots\}$$ Checking, we see that $5$ is an answer. (But I'm not sure how to prove that this is the only answer. Maybe I could bring the graph part here).
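A quick exact-arithmetic search over the candidate set supports the guess that $x=5$ is the only solution in a sizeable range (this checks, it does not prove; for large $x$ the term $3^{x-1}$ dominates everything else):

```python
# Search odd x >= 3 (the candidate set derived above) for solutions of
# 2^x - 3^(x-1) = -(x+2)^2, i.e. zeros of 2^x - 3^(x-1) + (x+2)^2.
# Python integers are exact, so no overflow issues.
solutions = [x for x in range(3, 200, 2)
             if 2**x - 3**(x - 1) + (x + 2)**2 == 0]
print(solutions)  # [5]
```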
Showing that the restricted wreath product $\Bbb Z\wr\Bbb Z$ is finitely generated.
I’m turning my comment into an answer (with more generality), as Shaun suggested. Assume we’re in the setting of a standard wreath product, with groups $H$ and $K$ acting by right translation on themselves. With the Robinson notation, it’s not hard to see that for any $h \in H,y \in K$, we have $h(y)=y^*h(e)(y^{-1})^*$. We can also see that $y \in K \longmapsto (y^{-1})^*$, and $h \in K \longmapsto (h^{-1})(e)$ are group homomorphisms. Thus the standard wreath product of $H$ and $K$ is generated by the $k^*$ and the $h(e)^*$, where $k$ runs through a system of generators for $K$ and $h$ runs through a set of generators for $H$. In particular, if $H$ and $K$ are finitely generated, the standard wreath product is finitely generated.
Linear system - number of solutions depending on the parameter k
You haven't arrived at the end of the elimination: \begin{align} \left[\begin{array}{ccc|c} 1 & 2 & 1 & 3\\ 2 & -1 & -3 & 5\\ 4 & 3 & -1 & k \end{array}\right] &\to \left[\begin{array}{ccc|c} 1 & 2 & 1 & 3\\ 0 & -5 & -5 & -1\\ 4 & 3 & -1 & k \end{array}\right] &&R_2\gets R_2-2R_1 \\&\to \left[\begin{array}{ccc|c} 1 & 2 & 1 & 3\\ 0 & -5 & -5 & -1\\ 0 & -5 & -5 & k-12 \end{array}\right] &&R_3\gets R_3-4R_1 \\&\to \left[\begin{array}{ccc|c} 1 & 2 & 1 & 3\\ 0 & 1 & 1 & 1/5\\ 0 & -5 & -5 & k-12 \end{array}\right] &&R_2\gets -\frac{1}{5}R_2 \\&\to \left[\begin{array}{ccc|c} 1 & 2 & 1 & 3\\ 0 & 1 & 1 & 1/5\\ 0 & 0 & 0 & k-11 \end{array}\right] &&R_3\gets R_3+5R_2 \end{align} Now you should be able to end.
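For a machine check of the arithmetic, here is the same elimination in exact rational arithmetic (the function name `eliminate` is just for illustration); the last row comes out as $(0,0,0,k-11)$, so there are infinitely many solutions precisely when $k=11$:

```python
from fractions import Fraction

def eliminate(k):
    """Run the four row operations above on the augmented matrix."""
    M = [[Fraction(1), Fraction(2), Fraction(1), Fraction(3)],
         [Fraction(2), Fraction(-1), Fraction(-3), Fraction(5)],
         [Fraction(4), Fraction(3), Fraction(-1), Fraction(k)]]
    M[1] = [a - 2 * b for a, b in zip(M[1], M[0])]  # R2 <- R2 - 2R1
    M[2] = [a - 4 * b for a, b in zip(M[2], M[0])]  # R3 <- R3 - 4R1
    M[1] = [a / -5 for a in M[1]]                   # R2 <- -(1/5)R2
    M[2] = [a + 5 * b for a, b in zip(M[2], M[1])]  # R3 <- R3 + 5R2
    return M

print(eliminate(11)[2])  # last row is all zeros when k = 11
print(eliminate(0)[2])   # last entry is k - 11 = -11 when k = 0
```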
instantaneous probability and probability per unit time
The probability of decaying at a given instant of time is zero. It is usual to work with the probability density function, which when integrated over a given period of time gives the probability of decay in that time interval. Radioactive decay is modelled using the exponential distribution $f(t)=\lambda e^{-\lambda t}$. The quantity $\frac{1}{\lambda}$ is the mean lifetime (the half-life is $\frac{\ln 2}{\lambda}$). The probability that the decay will happen in the first $t_1$ seconds is given by $\int^{t_1}_0\lambda e^{-\lambda t}dt$. If you let $t_1$ go to infinity then the probability of decay approaches one, as it must.
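A sketch of the computation with an arbitrary rate $\lambda=0.1$ (the value is only for illustration): the integral has the closed form $1-e^{-\lambda t_1}$, and a crude Riemann sum of the density agrees with it.

```python
import math

lam, t1 = 0.1, 5.0  # arbitrary rate and time window, for illustration only
closed_form = 1 - math.exp(-lam * t1)  # P(decay within first t1 seconds)

# Midpoint Riemann sum of lam * exp(-lam * t) over [0, t1].
n = 100_000
numeric = sum(lam * math.exp(-lam * (i + 0.5) * t1 / n)
              for i in range(n)) * t1 / n
print(closed_form, numeric)
```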
Does $P(Z > 1 \mid X=x, Y=y) < y$ imply $P(Z > 1 \mid Y=y) < y$?
Seems like it's true, provided $X,Y,Z$ have a joint density function. Indeed, first note \begin{align*} P(Z > 1 \mid X = x, Y = y) & = \int_1^\infty \frac{f_{X,Y,Z}(x,y,z)}{f_{X,Y}(x,y)} \, dz < y \\ & \iff \int_1^\infty f_{X,Y,Z}(x,y,z)\, dz < y \cdot f_{X,Y}(x,y). \end{align*} Then, \begin{align*} P(Z > 1 \mid Y = y) & = \int_1^\infty \frac{f_{Y,Z}(y,z)}{f_Y(y)} \, dz \\ & = \int_1^\infty \left(\frac{\int_{-\infty}^\infty f_{X,Y,Z}(x,y,z)\, dx}{\int_{-\infty}^\infty f_{X,Y}(x,y)\, dx}\right) \, dz \\ & = \frac{1}{\int_{-\infty}^\infty f_{X,Y}(x,y)\, dx} \int_{-\infty}^\infty \int_1^\infty f_{X,Y,Z}(x,y,z)\, dz \, dx \\ & < \frac{1}{\int_{-\infty}^\infty f_{X,Y}(x,y)\, dx} \int_{-\infty}^\infty y \cdot f_{X,Y}(x,y) \, dx \\ & = y. \end{align*} The first equality is the definition of the conditional probability density function, the second is the definition of marginal density functions from joint densities, the third is Tonelli's theorem (valid since $f_{X,Y,Z}(\cdot,y,\cdot) \geq 0$ for all $y$ and the Lebesgue measure is sigma-finite), and the inequality in the fourth step is by assumption. Update: Just to clarify, Tonelli's theorem just provides sufficient conditions to interchange the order of integration. A similar result is Fubini's theorem. Anyways, depending on your background/class this might be for, I've seen most people simply exchange the order without much thought. I'm just justifying it for myself.
How many natural numbers from 1 to 10000 are there (with 2 conditions)?
There are 93. This isn't a very well motivated answer, but it is true. The numbers are as follows. 9, 18, 27, 36, 45, 54, 63, 72, 81, 117, 126, 135, 144, 153, 162, 171, 216, 225, 234, 243, 252, 261, 315, 324, 333, 342, 351, 414, 423, 432, 441, 513, 522, 531, 612, 621, 711, 1116, 1125, 1134, 1143, 1152, 1161, 1215, 1224, 1233, 1242, 1251, 1314, 1323, 1332, 1341, 1413, 1422, 1431, 1512, 1521, 1611, 2115, 2124, 2133, 2142, 2151, 2214, 2223, 2232, 2241, 2313, 2322, 2331, 2412, 2421, 2511, 3114, 3123, 3132, 3141, 3213, 3222, 3231, 3312, 3321, 3411, 4113, 4122, 4131, 4212, 4221, 4311, 5112, 5121, 5211, 6111.
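To motivate the count a little: the listed numbers are exactly those $n$ in $1,\ldots,10000$ whose digit sum is $9$ and which contain no digit $0$ (presumably the two conditions of the question, inferred from the list). A brute-force check reproduces the count of 93:

```python
# Brute force over 1..10000: digit sum 9 and no zero digit.
hits = [n for n in range(1, 10001)
        if sum(int(d) for d in str(n)) == 9 and '0' not in str(n)]
print(len(hits), hits[0], hits[-1])  # 93 9 6111
```

This also matches a direct count: compositions of $9$ into $1,2,3,4$ positive parts give $1+8+28+56=93$.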
Is this function linear in x?
Studying with respect to $x$, observe that $e^t$ is nothing more than a constant generated by different values of $t$. By the definition of a linear map: $$f(x+y,t) = (x+y)e^t = xe^t + ye^t = f(x,t) + f(y,t) $$ $$f(cx,t) = e^t \cdot c \cdot x = c \cdot e^t \cdot x = cf(x,t)$$ Thus the function $f(x,t) = e^t \cdot x$ is linear with respect to $x$.
What numbers are irrational in number systems that use an irrational base?
Being irrational (or not) is an intrinsic property of a real number, which is to say a property that the number has completely independently of what symbols you choose to use to describe it. Either the number is a ratio of integers, or it isn't, and using a different number base (even an irrational one) isn't going to change that.
Find $m$ such that $4 \nmid \phi(m)$
Hint: How many factors of $2$ are there in each of those $p-1$ factors appearing in the product you gave for $\phi(m)$? If $4\nmid\phi(m)$, then you're only allowed at most one factor of $2$.
Combinatorics problem involving matrices
Let $A = (a_{ij})$ be such an $m \times n$ matrix of $1$s and $-1$s. Consider the product $\prod a_{ij}$ of all the entries. On the one hand, computing the product one row at a time, this is $(-1)^m$. Computing one column at a time yields $(-1)^n$. So, from $(-1)^m = (-1)^n$, we conclude that, in order for such a matrix to exist, $m$ and $n$ must have the same parity. Now let $m,n$ be positive integers of the same parity. We construct all the desired $m \times n$ matrices $A = (a_{ij})$. First, choose $(a_{ij})$ arbitrarily when $1 \leq i \leq m-1$ and $1 \leq j \leq n-1$. There are $2^{(m-1)(n-1)}$ ways to form this $(m-1) \times (n-1)$ submatrix. Now, in order to guarantee that the first $m-1$ rows and first $n-1$ columns have product $-1$, we are forced to assign \begin{align*} a_{in} = (-1) \prod_{j=1}^{n-1} a_{ij} && \text{ for } 1 \leq i \leq m-1 \\ a_{mj} = (-1) \prod_{i=1}^{m-1} a_{ij} && \text{ for } 1 \leq j \leq n-1 \end{align*} It remains only to choose the last entry $a_{mn}$ in such a way that the final row and final column each have product $-1$. On the one hand, for the last row to work out, we need to take $$a_{mn} = (-1) \prod_{j=1}^{n-1} a_{mj} = (-1)^n P_{m-1,n-1}$$ where $P_{m-1,n-1}$ denotes the product of the entries of the upper left $(m-1) \times (n-1)$ submatrix. Similarly, to make the final column work out, we need to take $$a_{mn} = (-1)^m P_{m-1,n-1}.$$ Fortunately, we assumed $m$ and $n$ have the same parity, so these choices are consistent. In conclusion, the number of $m \times n$ matrices of the desired type is $0$ if $m \not\equiv n \pmod{2}$ or $2^{(m-1)(n-1)}$ if $m \equiv n \pmod{2}$. In the latter case, one can choose the $(m-1) \times (n-1)$ submatrix arbitrarily, and this choice determines the entries in the final column and row.
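A brute-force check of the count for small $m,n$ (a sketch, feasible only for tiny matrices since it enumerates all $2^{mn}$ sign patterns):

```python
import math
from itertools import product

def count(m, n):
    """Count m x n matrices over {+1,-1} with every row and column
    product equal to -1, by exhaustive enumeration."""
    total = 0
    for entries in product([1, -1], repeat=m * n):
        A = [entries[i * n:(i + 1) * n] for i in range(m)]
        rows_ok = all(math.prod(r) == -1 for r in A)
        cols_ok = all(math.prod(A[i][j] for i in range(m)) == -1
                      for j in range(n))
        total += rows_ok and cols_ok
    return total

# Same parity: 2^((m-1)(n-1)); different parity: 0.
print(count(2, 2), count(3, 3), count(2, 3))  # 2 16 0
```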
Explaining the form of the Gaussian measure
The Gaussian can be viewed as the "best guess" of a distribution, given that we only know that it is a distribution, and we know its mean and its variance. For instance, suppose I have a deck of 52 cards, and I tell you to pick a card "at random". If you had no prior knowledge as to how I would choose my card, what probability of selection would you assign to any given card? I'd say $\mathbb{P}(\text{any card}) = \frac{1}{52}$ is a reasonable guess. This is an example of a "maximum entropy" distribution on the discrete set $\{1,...,52\}$. Mathematically, the solution to the optimisation problem $$\begin{cases} \text{maximise} & \left\{-\sum_{i=1}^{52} p_i \log p_i\right\} \\ \text{subject to}& \sum_{i=1}^{52} p_i = 1\end{cases}$$ is $p_i = 1/52$. Next, suppose I tell you to pick a number "at random" from the interval $[0,1]$. Having no prior knowledge of my predispositions, you might assign equal likelihood to each number, giving a uniform distribution. Here you are solving the optimisation problem $$\begin{cases} \text{maximise} & \left\{ -\int_0^1 f(x) \ \log f(x)\ dx\right\} \\ \text{subject to} & \int_\mathbb{R} f(x)\ dx = 1 \\ & f \text{ continuous and } f \geq 0.\end{cases}$$ Now suppose I tell you to pick a number "at random" from $\mathbb{R}$. I want your selection to have a mean of $0$ and a variance of $1$. What is the distribution of the number selected? The analogous "maximum entropy" distribution is the Gaussian with density $\frac{1}{\sqrt{2\pi}}\exp(-x^2/2)$. Here, you are solving the optimisation problem $$\begin{cases} \text{maximise} & \left\{ -\int_\mathbb{R} f(x) \ \log f(x)\ dx\right\} \\ \text{subject to} & \int_\mathbb{R} f(x)\ dx = 1 \\ & \int_\mathbb{R} x \;f(x)\ dx = 0 \\ & \int_\mathbb{R} x^2 \;f(x)\ dx = 1 \\ & f \text{ continuous and } f \geq 0.\end{cases}$$
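A small numerical illustration of the last claim, comparing closed-form differential entropies of three variance-1 distributions (the uniform and Laplace competitors are arbitrary choices for the comparison): the Gaussian comes out largest, as the maximum-entropy principle predicts.

```python
import math

# Differential entropies (in nats) of three distributions with variance 1:
h_gauss = 0.5 * math.log(2 * math.pi * math.e)   # N(0,1): (1/2) log(2*pi*e)
h_uniform = math.log(math.sqrt(12))              # Uniform of width sqrt(12)
h_laplace = 1 + math.log(2 / math.sqrt(2))       # Laplace with b = 1/sqrt(2)

print(h_gauss, h_laplace, h_uniform)  # Gaussian is the largest
```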
Function $f(x)={2x+3xf(x)\over x-1}$ is one-to-one and onto. What is the codomain of $f(x)$?
$$y(x-1)=2x+3xy\implies y = {-2x\over 2x+1}$$ so it is a linear rational function, and the range is $\mathbb{R}-\{-1\}$. In general, the range of a linear rational function $$f(x)= {ax+b\over cx+d}$$ is $\mathbb{R}-\{{a\over c} \}$. Proof: Let $y_0$ be in the range of $f$, so there is an $x_0$ such that $$y_0= {ax_0+b\over cx_0+d}$$ then $$y_0cx_0+y_0d = ax_0+b$$ so $$x_0(cy_0-a)=b-y_0d\implies x_0 = {b-y_0d\over cy_0-a} \;\;\;{\rm if}\;\;y_0\ne {a\over c}$$ which proves the claim.
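A quick numerical check of the preimage formula for this particular $f$ (with $a=-2$, $b=0$, $c=2$, $d=1$): every $y\neq-1$ is hit, while $y=-1$ would require dividing by $cy-a=0$.

```python
# f(x) = -2x / (2x + 1), i.e. (a, b, c, d) = (-2, 0, 2, 1).
a, b, c, d = -2, 0, 2, 1

for y in [0.5, 3.0, -2.0, 100.0]:
    x0 = (b - y * d) / (c * y - a)          # preimage formula from the proof
    assert abs((a * x0 + b) / (c * x0 + d) - y) < 1e-9  # f(x0) == y

print(c * (-1) - a)  # 0: the formula breaks down exactly at y = a/c = -1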
If $f(x)<g(x)$, can $\int_a^b f(x)\,dx = \int_a^b g(x)\,dx$?
If one function is strictly less than another at every point in a set whose measure is positive, then their integrals over that set are not equal. Consider the set of points $x$ at which $\dfrac 1 {n+1} < g(x)-f(x) \le \dfrac 1 n$ for $n=1,2,3,\ldots$ and the set of points $x$ at which $g(x)-f(x)>1.$ The union of those sets is the whole domain, so the measure of at least one of them is more than $0.$ And the integral of $g-f$ over that set is more than $1/(n+1)$ times the measure of that subset of the domain, so it's positive. Using Riemann's approach, this seems more complicated, although it would be easier if one had an assumption of continuity.
A non-homogeneous recurrence of Fibonacci sequence
Don't call this the "Fibonacci sequence" - it isn't. Look at $f(x) = \sum_{n=0}^\infty F(n) x^n$. Use your recurrence to express this in terms of $G(x)$.
Show that if the smallest prime factor $p$ of the positive integer $n$ exceeds $n^{\frac{1}{3}}$ then $\frac{n}{p}$ must be a prime or 1.
Why does $m$ not have a prime factor less than $n^{\frac{1}{3}}$? Since $m$ is constructed as $m=\frac np$, and $p$ is given as a factor of $n$, we know that $m$ is an integer factor of $n$ also. In this section of the proof we have $m>1$, and obviously any factor of $m$ is also a factor of $n$. By the initial premise, since no prime factor of $n$ is less than $p$, it is also true that no prime factor of $m$ is less than $p$, either. Since $\sqrt m < n^{1/3}$, $m$ cannot be composite: a composite $m$ would have a prime factor at most $\sqrt m < n^{1/3} < p$, a contradiction. Therefore $m$ must be prime.
Dirichlet's theorem on primes in arithmetic progression
As Jonas says, Keith Conrad's paper will tell you that the answer is essentially no. He defines a certain notion of "Euclidean" proof coming from writing down a polynomial with certain properties and gives two classic results which say that these proofs exist for primes in arithmetic progression $a \bmod n$ if and only if $a^2 \equiv 1 \bmod n$. The first case these proofs can't handle is primes congruent to $2 \bmod 5$ (equivalently, primes congruent to $3 \bmod 5$). The basic problem is that there is no way to force a positive integer to have factors congruent to $2 \bmod 5$ and not congruent to $3 \bmod 5$ (or the other way around) solely by controlling its residue class $\bmod 5$. In more sophisticated language, $2$ and $3$ lie in all the same subgroups of $(\mathbb{Z}/5\mathbb{Z})^{\ast}$.
Direct Summands of Invariant Subspaces
Let's consider the kernel and image of $T$. The kernel of $T$ must be two dimensional, since it cannot be three dimensional since $T\ne 0$, and if it were one dimensional, then we would have that the rank of $T$ would be two by rank-nullity, so we couldn't have $T^2=0$. Hence the dimension of the image of $T$ must be one dimensional. Also it is clear that the image of $T$ is a one dimensional subspace of $\ker T$. Now, if $V\subseteq F^3$ is $T$-invariant, then $T(V)\subseteq V$. Since the image of $T$ is one dimensional, either $V$ contains the image of $T$, or $V$ lies in the kernel of $T$. If $V$ is two dimensional, then if it lies in the kernel of $T$, then it is the kernel of $T$, and hence contains the image of $T$. Thus the two dimensional invariant subspaces are precisely those containing the image of $T$. To count the number of planes containing a line in $F^3$, fix a complement of the line. Then planes containing the line in $F^3$ correspond to lines in that complement, of which there are $(p^2-1)/(p-1)=p+1$. Conversely if $V$ is one dimensional, then if it contains the image of $T$, it is the image of $T$, so it lies in the kernel. Thus the one dimensional invariant subspaces for $T$ are the one dimensional subspaces of the kernel of $T$. We just counted the number of one dimensional subspaces of a two dimensional $F$ vector space. There are $p+1$ one-dimensional invariant subspaces. I'm not sure exactly what the second question is asking us to count at all, but if you can clarify that it'd be cool. Might be because it's late, since I'm having trouble even formulating guesses as to what it might be supposed to mean, which suggests my brain is about to cease functioning. Intuition? Hard to say exactly what the intuition is, but I guess the idea is that since $T$ has nontrivial kernel, that is easily invariant (as is any subspace contained in it), so let's start looking into that. 
Similarly, since the kernel is nontrivial, $T$ is not surjective, and the image of $T$ will also be invariant (and any subspace containing it), so it's worth looking into that.
Direct numerical solutions for first kind Volterra integral equations
If convergent, how to prove? Volterra equations are typically of convolution type and are therefore well suited to the Laplace transform. In particular, this approach can be applied to the equation $$\int_0^t\sqrt{t-s}\,f(s)\,ds = t,$$ or $$\sqrt t*f(t) = t.$$ Let $$F(p) = \mathcal L(f(t)) = \int_0^\infty f(s)e^{-ps}\,ds.$$ Using data from the table of Laplace transforms, we get: $$\mathcal L(\sqrt t) = \dfrac{\sqrt\pi}{2p^{3/2}},\quad \mathcal L(t) = \dfrac1{p^2},\quad\mathcal L\left(\dfrac1{\sqrt t}\right) = \sqrt{\dfrac\pi {p}},$$ so the Laplace transform of the equation under consideration takes the form: $$\dfrac{\sqrt\pi}{2p^{3/2}}F(p) = \dfrac1{p^2},$$ then $$F(p) = \dfrac2{\sqrt{\pi p}},$$ and $$\boxed{f(t) = \dfrac2{\pi\sqrt t}}.$$ Check it: $$\sqrt t*f(t) = \dfrac2\pi\int_0^t\sqrt{\dfrac{t-s}s}\,ds = \genfrac{|}{|}{0pt}{0}{s=t\sin^2u,}{ds = 2t\sin u \cos u} = \dfrac{4t}\pi\int_0^{\pi/2}\cos^2u\,du = t.$$
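A numerical check of the boxed solution, using the same substitution $s=t\sin^2 u$ to tame the endpoint singularity: the convolution reduces to $\frac{4t}{\pi}\int_0^{\pi/2}\cos^2u\,du$, which a midpoint rule evaluates to $t$ to high accuracy.

```python
import math

def convolution(t, n=100_000):
    """Evaluate integral_0^t sqrt(t-s) * (2/(pi*sqrt(s))) ds via the
    substitution s = t*sin(u)^2, i.e. (4t/pi) * integral of cos(u)^2."""
    h = (math.pi / 2) / n
    total = sum(math.cos((i + 0.5) * h) ** 2 for i in range(n))
    return (4 * t / math.pi) * total * h

for t in [0.5, 2.0, 10.0]:
    print(t, convolution(t))  # each should come out equal to t
```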
Question about the limit of a sequence.
At the request of the OP, we present herein a purely algebraic way forward. HINT: Let $a_n=\frac{(n!)^2\,(2n)!}{(4n)!}$. Then, we can write $$\begin{align}\log(a_n)&=2\log(n!)+\log((2n)!)-\log((4n)!)\\\\&=2\sum_{k=1}^n\log(k) +\sum_{k=1}^{2n}\log(k)-\sum_{k=1}^{4n}\log(k)\\\\&=2\sum_{k=1}^n\log(k)-\sum_{k=2n+1}^{4n}\log(k)\end{align}$$ SPOILER ALERT: Scroll Over the Highlighted Area to Reveal the Full Solution \begin{align}\log(a_n)&=2\log(n!)+\log((2n)!)-\log((4n)!)\\\\&=2\sum_{k=1}^n\log(k) +\sum_{k=1}^{2n}\log(k)-\sum_{k=1}^{4n}\log(k)\\\\&=2\sum_{k=1}^n\log(k)-\sum_{k=2n+1}^{4n}\log(k)\\\\&=2\sum_{k=1}^n\log(k)-\sum_{k=1}^{2n}\log(k+2n)\\\\&\le 2n\log(n)-2n\log(2n+1)\\\\&=-2n\log\left(2+\frac1n\right)\\\\&<-2n\log(2)\end{align}Inasmuch as $\lim_{n\to \infty}e^{-2n\log(2)}=\lim_{n\to\infty}4^{-n}=0$, we find that $$\lim_{n\to \infty}a_n=0$$
Probability that sum of independent random (non iid) variables exceeds the variance infinitely often
I think I have a proof: Denote by $n(i)$ the index such that $\Sigma_{n(i)}=i$. Denote by $A_k$ the event that there exists $i\in \{n(k^2),\ldots,n((k+1)^2)\}$ for which $S_i\geq \alpha(\Sigma_i)^2$. We shall prove that there exists $c$ for which $\Pr[A_k]\leq c/k^2$ for all $k$. Set $\beta=\alpha/2$. Then, for any $k$, considering $n(k^2)$, $$\Pr[ S_{n(k^2)} > \beta\cdot(\Sigma_{n(k^2)})^2 ] \approx \frac{1}{\sqrt{2\pi}}\int_{\beta\cdot k^2}^{\infty} e^{-x^2/2}dx < \frac{1}{\sqrt{2\pi}} \cdot \frac{1}{\beta\cdot k^2} =\frac{c_1}{k^2}, $$ for the appropriate $c_1$. Now consider the random variables $X_i$ for $i=n(k^2)+1,\ldots,n((k+1)^2)$. Set $D_{j}=\sum_{i=n(k^2)+1}^{j} X_i$. Then $$Var(D_{n((k+1)^2)})=(\Sigma_{n((k+1)^2)})^2-(\Sigma_{n(k^2)})^2=(k+1)^4-k^4 = O(k^3). $$ So, by the Kolmogorov inequality $$ \Pr[\sup_{n(k^2)<j\leq n((k+1)^2)} D_j \geq \beta(k+1)^4 ] \leq \frac{Var(D_{n((k+1)^2)})}{(\beta(k+1)^4)^{2}}= \frac{O(k^3)}{\beta^2(k+1)^8} \ll \frac{c_2}{k^2} $$ for some constant $c_2$. So, for any $k$, $$ \Pr[A_k]\leq \Pr[\sup_{n(k^2)<j\leq n((k+1)^2)} S_{j} \geq \alpha (k+1)^4 ]\leq \Pr[S_{n(k^2)}>\beta (k+1)^4]+\Pr[\sup_{n(k^2)<j\leq n((k+1)^2)} D_{j} \geq \beta(k+1)^4 ] \leq \frac{c_1+c_2}{k^2}. $$ So, $$ \sum_{k=1}^{\infty}\Pr[A_k] < \infty, $$ and the result follows by the Borel-Cantelli lemma. Is this right?
Newton-Côtes closed formula for $\int_{0}^{2n} x^{2n+1}\cos(2{\pi}x) dx$
I don't know if this helps, but $$x^{2n+1}-\prod_{k=0}^{2n}(x-k)=x^{2n+1}-(x)_{2n+1}$$ is a polynomial of degree $2n$ that goes through all the $2n+1$ points, where $(x)_{2n+1}$ is the falling factorial. The first few integrals: 0 0.000000000000000000000000000000000E+0000 1 4.00000000000000000000000000000000 2 682.666666666666666666666666666667 3 209952.000000000000000000000000000 4 107374182.400000000000000000000000 5 83333333333.3333333333333333333333 6 91708461753490.2857142857142857145 7 136122083613085696.000000000000000 8 262353693492758067427.555555555558 9 637411810819803908721868.800000012 10 1906501818181818181818181818.18188 EDIT: Are you trying to form some sort of inductive hypothesis like $$A_n=\frac{2^{2n-1}(n-1)^{2n}}n$$ EDIT: The situation is so much simpler than this. If we make the substitutions $y=2n-x$ and $j=2n-k$ then $$\begin{align}\int_0^{2n}\prod_{k=0}^{2n}(x-k)dx&=\int_{2n}^0\prod_{k=0}^{2n}(2n-y-k)(-dy)\\ &=\int_0^{2n}\prod_{k=0}^{2n}(2n-y-k)(dy)\\ &=\int_0^{2n}\prod_{j=0}^{2n}(2n-y-2n+j)(dy)\\ &=(-1)^{2n+1}\int_0^{2n}\prod_{j=0}^{2n}(y-j)(dy)\\ &=-\int_0^{2n}\prod_{k=0}^{2n}(x-k)dx\\ &=0\end{align}$$ because only zero is its own additive inverse. The $2n+1$-point Newton-Cotes formula also says this integral is $0$ because the integrand is zero at all the sample points. Thus the $2n+1$-point Newton-Cotes formula is exact for all polynomials of degree $2n+1$ because it's exact for all polynomials of degree $2n$ by construction, and the difference between a degree $2n+1$ polynomial and some degree $2n$ polynomial is a multiple of $\prod_{k=0}^{2n}(x-k)$, which the $2n+1$-point Newton-Cotes formula evaluates exactly. That's an old theorem.
Thus if $f(x_k)=x_k^{2n+1}$ then a polynomial of degree $2n+1$ that goes through all $2n+1$ points is $f(x)=x^{2n+1}$ and because the $2n+1$-point Newton-Cotes formula integrates this exactly, it spits out $$\int_0^{2n}x^{2n+1}dx=\frac{(2n)^{2n+2}}{2n+2}=A_{n+1}$$ which is the same as our numerical results.
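The closed form can be checked exactly against the numerical table above, using rational arithmetic:

```python
from fractions import Fraction

def A(n):
    """Exact value of integral_0^{2n} x^(2n+1) dx = (2n)^(2n+2) / (2n+2)."""
    return Fraction((2 * n) ** (2 * n + 2), 2 * n + 2)

# Compare with the table: n=1 -> 4, n=2 -> 682.666..., n=3 -> 209952.
print(A(1), float(A(2)), A(3))
```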
Using the definition of the convergence of the sequence, how can I prove that this sequence converges to the limit?
By multiplying the fraction by $\frac{1/n^2}{1/n^2}$, this sequence becomes $$\frac{2}{(n+\frac{3}{n^2})}$$ From this point, it should be pretty clear that the numerator is fixed and the denominator goes to $\infty$, so this must go to $0$. To explicitly solve for $N$, observe that this is bounded above by $$\frac{2}{n+\frac{3}{n^2}} \leq \frac{2}{n},$$ so for any $\epsilon>0$ if $n\geq N$, where $N=2/\epsilon$, then it must be that $$\frac{2}{n+\frac{3}{n^2}}\leq \frac{2}{n}\leq \frac{2}{\frac{2}{\epsilon}}=\epsilon.$$ where the last inequality comes from the fact that $\frac{2}{n}$ is decreasing as $n$ gets larger. Similar logic can be used for your question about $\frac{\sin(n^2)}{\sqrt[3]{n}}$. This is bounded above by $|\frac{\sin(n^2)}{\sqrt[3]{n}}|\leq \frac{1}{\sqrt[3]{n}}$, which can then be used to solve for $N$.
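A quick sanity check of the bound with a concrete $\epsilon$ (a sketch; checking finitely many $n$ illustrates the argument, it doesn't replace the proof):

```python
# With eps = 0.01 the proof says N = 2/eps = 200 works; spot-check a
# stretch of n >= N against the chain a_n <= 2/n <= eps.
eps = 0.01
N = int(2 / eps)  # 200
for n in range(N, N + 1000):
    a_n = 2 / (n + 3 / n**2)
    assert a_n <= 2 / n <= eps
print("bound holds for all sampled n >= N")
```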
How to prove that $\sum_{i=1}^{2^n} 1/i \ge 1+n/2$
By induction and for the inductive step we have $$\sum_{i=1}^{2^{n+1}}\frac1i=\sum_{i=1}^{2^{n}}\frac1i+\sum_{i=2^n+1}^{2^{n+1}}\frac1i\ge1+\frac n2+\underbrace{\sum_{i=2^n+1}^{2^{n+1}}\frac1{2^{n+1}}}_{=\frac12}$$
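The base case and the inequality itself can be checked exactly for small $n$ with rational arithmetic:

```python
from fractions import Fraction

# Exact check of sum_{i=1}^{2^n} 1/i >= 1 + n/2 for n = 0..10.
for n in range(11):
    s = sum(Fraction(1, i) for i in range(1, 2**n + 1))
    assert s >= 1 + Fraction(n, 2)
print("inequality verified exactly for n = 0..10")
```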
What is the maximum number of primes generated consecutively by a polynomial of degree $a$?
The Green-Tao Theorem states that there are arbitrarily long arithmetic progressions of primes; that is, sequences of primes of the form $$ b , b+a, b+2a, b+3a,... ,b+na $$ Since such a progression gives the first $n+1$ values of the polynomial $ax+b$ (at $x=0,1,\dots,n$), this implies that even for degree 1, there is no upper bound to how many primes in a row a polynomial can generate.
Is $M$ a compact manifold?
This is indeed true for all $n\ge 1$ and what you have written can be converted to a proof: Use a partition of unity to construct a smooth function $g: M\to R^n$ whose set of critical values is unbounded in $R^n$. Now, compose $g$ with a diffeomorphism $h: R^n\to B^n\subset R^n$, where $B^n$ is the open unit n-ball. Then each point on the boundary of $B^n$ is a regular value of $f=h\circ g$ (since it is not a value!) but one of these points will be the limit of critical values. Hence, the set of regular values of such $f$ is not open. For @Najib Idrissi: This argument breaks down for $n=0$ since $R^0=B^0$ and, hence, $B^0$ has empty boundary in $R^0$.
Continuously shift "selected" vertices in a graph so that none get squished.
The problem boils down to enumerating perfect matchings in a bipartite graph. For a set $S \subseteq V(G)$, construct an auxiliary bipartite graph - not quite a subgraph - as follows: On one side ($A$), you have all the vertices in $S$. On the other side ($B$), you have all the vertices of $G$ adjacent to $S$. If there are vertices of $S$ adjacent to $S$, they appear on this side, again. Put an edge between the two sides whenever that edge appears in $G$. We should have $|A| \le |B|$ if there is any way for the dwarves to successfully move. Of course, you may allow the dwarves to sit still (approximating continuous motion along a path that goes nowhere), in which case all of $S$ is contained in $B$ as well, and this is guaranteed to hold. A possible transition of the dwarves is a matching in this bipartite graph that saturates the vertices in $A$. Many people thinking about these problems like perfect matchings, which saturate both sides; we can obtain this by adding $|B|-|A|$ artificial vertices to $A$, which are adjacent to all of $B$. The artificial vertices will get matched to vertices of $B$ that are not the destination of any dwarf. Unfortunately, counting perfect matchings in a graph is hard in general. This is one of the #P-complete problems: the equivalent of NP-complete, for counting problems rather than for decision problems. Fortunately, we can go through all the perfect matchings in a way that takes $O(|S|)$ time per matching. This paper suggests several ways to do it. The problem is difficult mainly because there could be exponentially many perfect matchings. If you happen to have a dwarven network where the number of shifts is small for each $S$, then enumerating the shifts will not take long. Finally, if you want to predict the dwarves' short-term state, this is going to be very hard with this approach, because the number of possible $S$ grows very quickly. 
However, long-term, the stationary distribution is relatively easy to describe: the probability of the dwarves occupying a set $S$ will be proportional to the number of shifts out of $S$.
How to minimize spectral radius in this situation?
I have made the calculations for $2\times 2$ matrices, maybe it helps you to solve the general problem. Let $A=\left( \begin{array}{cc} a & b\\ b & c\end{array}\right)$ be a Hermitian matrix and let $B=\left( \begin{array}{cc} s & 0\\ 0 & t\end{array}\right)$ be such that $tr(A-B)=0$. Since $tr(A)=a+c$ and $tr(B)=s+t$ we conclude that $s=a+c-t$. Hence $$A-B=\left( \begin{array}{cc} t-c & b\\ b & c-t\end{array}\right).$$ The eigenvalues of $A-B$ are $$ \lambda_{1,2}=\pm \sqrt{(t-c)^2+b^2}$$ which gives $$ \rho(A-B)=\sqrt{(t-c)^2+b^2}. $$ Of course, the minimal possible value of $\rho(A-B)$ is $|b|$; we get it if $B=\left( \begin{array}{cc} a & 0\\ 0 & c\end{array}\right)$.
Show that $P_n$ is an $(n+1)$-dimensional subspace
Use the fundamental theorem of algebra to show that the set $\{1,x,x^2, \dotsc, x^n\}$ is a linearly independent set of vectors. They will therefore span a vector space of dimension $n+1$.
Prove that if $f:D\to D$ is analytic and has two distinct fixed points, then $f$ is the identity
Proof assuming that $D$ is the open unit disk: Let $z_1,z_2$ be two distinct fixed points of $f$. Let $g(z)=\frac {z-z_1}{1-\bar z_1 z}$. Let $h=g\circ f \circ g^{-1}$. Then $h$ maps $D$ into itself, fixes $0$, and has a second fixed point, namely $g(z_2)$. By your argument based on the Schwarz lemma you get $h=$ identity. This implies $f=$ identity. [I have used the fact that functions of the form $\frac {z-a}{1-\bar a z}$ (where $|a| <1$) are bijective, bi-holomorphic maps of the open unit disk.]
$\int_\gamma\textbf{u}\ d\textbf{r}$, with $\gamma$ a curve along an ellipse.
You cannot use Green's Theorem as the path is not a closed curve. So you will have to do line integral directly. $\vec F = (8xy+y,2y^2)$ $\gamma: 4x^2+y^2=1, \ \text{from}\ \text{point A} (-\frac{1}{4},\frac{\sqrt{3}}{2})\ \text{to} \ \text{B} (\frac{\sqrt2}{4},\frac{\sqrt2}{2})$. Parametrize the ellipse as $ \ \gamma(t) = (\frac{1}{2} \cos t, \sin t)$. Based on values of $x, y$ coordinates, point A is represented by $t = \frac{2\pi}{3}$ and point B by $t = \frac{\pi}{4}$. $\gamma'(t) = (-\frac{1}{2} \sin t, \cos t)$ $\vec F(\gamma(t)) = (4 \sin t \cos t + \sin t, 2 \sin^2t)$ $\vec F(\gamma(t)) \cdot \gamma'(t) = - \frac{1}{2} \sin^2t$. Now integrate going from point A to point B.
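The simplification $\vec F(\gamma(t)) \cdot \gamma'(t) = -\frac{1}{2}\sin^2 t$ can be verified numerically at a few sample values of $t$ (a small Python sketch; the function names are mine):

```python
import math

def gamma(t):
    # Parametrization of the ellipse 4x^2 + y^2 = 1
    return (0.5 * math.cos(t), math.sin(t))

def gamma_prime(t):
    return (-0.5 * math.sin(t), math.cos(t))

def F(x, y):
    # The vector field (8xy + y, 2y^2)
    return (8 * x * y + y, 2 * y ** 2)

def integrand(t):
    (x, y), (dx, dy) = gamma(t), gamma_prime(t)
    fx, fy = F(x, y)
    return fx * dx + fy * dy
```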
Find the limit of the cosine sequence.
If $x \ne 0$, then \begin{align} \lim_{n\to\infty}\cos \frac{x}{2}\cos\frac{x}{2^2}\cos\frac{x}{2^3}\cdots \cos \frac{x}{2^n}&=\lim_{n\to\infty}\frac{\cos \frac{x}{2}\cos\frac{x}{2^2}\cos\frac{x}{2^3}\cdots \cos \frac{x}{2^n}\sin\frac{x}{2^n}}{\sin\frac{x}{2^n}}\\ &=\lim_{n\to\infty}\frac{\cos \frac{x}{2}\cos\frac{x}{2^2}\cos\frac{x}{2^3}\cdots \cos \frac{x}{2^{n-1}}\sin\frac{x}{2^{n-1}}}{2\sin\frac{x}{2^n}}\\ &=\lim_{n\to\infty}\frac{\cos \frac{x}{2}\cos\frac{x}{2^2}\cos\frac{x}{2^3}\cdots \cos \frac{x}{2^{n-2}}\sin\frac{x}{2^{n-2}}}{2^2\sin\frac{x}{2^n}}\\ &=\cdots\\ &=\lim_{n\to\infty}\frac{\sin x}{2^n \sin\frac{x}{2^n}}\\ &=\frac{\sin x}{x}. \end{align} If $x=0$, then the given limit is $1$.
Three coupled differential equations
I have a solution for decoupling the equations; note that it is not through some transformation of the matrix. Rearranging your first and third equations gives \begin{align} \phi_1 &= \frac{1}{2 \lambda + \varepsilon - V(r)} \frac{\alpha}{2} \frac{d \phi_2}{d r}, \\ \phi_3 &= \frac{1}{2 \lambda -\varepsilon + V(r)} \frac{\alpha}{2} \frac{d \phi_2}{d r}. \end{align} And rearranging your second equation allows it to be written as \begin{align} \frac{d}{dr} \big[ r(\phi_1 + \phi_3) \big] + \left( \frac{V(r) - \varepsilon}{\alpha} \right) r \phi_2 = 0. \end{align} Next I substitute the expressions given above for $\phi_1$ and $\phi_3$ into this equation; this yields the following second order ODE for the quantity $\phi_2(r)$ \begin{align} \frac{d}{dr} \left(r \beta(r) \frac{d \phi_2}{dr} \right) + \left( \frac{V(r) - \varepsilon}{\alpha^2} \right) r \phi_2 = 0, \end{align} where the function $ \beta(r) := \dfrac{2 \lambda}{[2 \lambda + \varepsilon - V(r)][2 \lambda - \varepsilon + V(r)]}$ has been defined for convenience.
Smallest number of people so that the probability someone has a birthday today exceeds $1/2$
Rearrange to $$ 1 - 1/2 \ge (364/365)^n $$ and take logarithms on both sides.
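Carrying the hint out numerically (a small Python sketch; it assumes the standard model in which the chance that none of $n$ people is born today is $(364/365)^n$):

```python
import math

# Smallest n with 1 - (364/365)**n >= 1/2, i.e. (364/365)**n <= 1/2.
# Taking logarithms (and flipping the inequality, since log(364/365) < 0):
n = math.ceil(math.log(0.5) / math.log(364 / 365))
```

This gives $n = 253$.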
Does taking the negative of the anti-hermitian matrix A turn it into Hermitian or does it stay anti-Hermitian?
Yes, your argument is correct. The negative of an antihermitian matrix $A$ remains antihermitian. For an example, observe that if $$A= \begin{bmatrix} -i & 2+i\\ -2+i & 0 \end{bmatrix}$$ then $A$ is anti-Hermitian since $$-A = \begin{bmatrix} i & -2 - i \\ 2 - i & 0 \end{bmatrix} = \begin{bmatrix} \overline{-i} & \overline{-2 + i} \\ \overline{2 + i} & \overline{0} \end{bmatrix} = \begin{bmatrix} \overline{-i} & \overline{2 + i} \\ \overline{-2 + i} & \overline{0} \end{bmatrix}^\mathsf{T} = A^\mathsf{H} $$ and writing $$-A= \begin{bmatrix} i & -2-i\\ 2-i & 0 \end{bmatrix}$$ would produce $$-(-A) = A= \begin{bmatrix} -i & 2 + i \\ -2 + i & 0 \end{bmatrix} = \begin{bmatrix} \overline{i} & \overline{2 - i} \\ \overline{-2 - i} & \overline{0} \end{bmatrix} = \begin{bmatrix} \overline{i} & \overline{-2 - i} \\ \overline{2 - i} & \overline{0} \end{bmatrix}^\mathsf{T} = -A^\mathsf{H} $$
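A quick numerical check of this example, using plain Python complex numbers (the helper name is mine):

```python
def conj_transpose(m):
    # Conjugate transpose (Hermitian adjoint) of a matrix given as nested lists
    return [[m[j][i].conjugate() for j in range(len(m))] for i in range(len(m[0]))]

A = [[-1j, 2 + 1j], [-2 + 1j, 0j]]
neg_A = [[-z for z in row] for row in A]
```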
combination of passwords problem
I assume that you mean that every password must be exactly four characters long, exactly two of the characters must be a letter from $\{a,A,b,B,c,C,d,D,e,E\}$, the remaining two characters must be a number, the first character must be a letter, and repetition is allowed.
- The first character must be a letter: choose which letter.
- Only one of the remaining three positions will be a letter: choose which position.
- Choose which letter occupies that selected position.
- Choose which number occupies the left-most remaining position.
- Choose which number occupies the final position.
Applying the multiplication principle, we then get a final total of $10\cdot 3\cdot 10\cdot 10\cdot 10 = 30000$ possible arrangements.
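Under these assumptions, a brute-force enumeration confirms the count (a Python sketch; the ten letters and ten digits are as stated above):

```python
from itertools import product

letters = set("aAbBcCdDeE")
digits = set("0123456789")
chars = sorted(letters | digits)

# Count length-4 strings whose first character is a letter
# and which contain exactly two letters (hence two digits).
count = sum(
    1
    for pw in product(chars, repeat=4)
    if pw[0] in letters and sum(c in letters for c in pw) == 2
)
```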
Intuition on Limit Sup and Inf for sequences of sets
For a sequence $(x_n)_{n=1}^\infty$, we have $$ \limsup_{n\to\infty}x_n = \lim_{n\to\infty}\sup_{k\geq n}x_k = \inf_{n}\sup_{k\geq n}x_k $$ because $(\sup_{k\ge n}x_k)_n$ is a decreasing sequence. Replacing $\sup$ by union and $\inf$ by intersection, we obtain a natural definition $$ \limsup_{n\to\infty}X_n := \bigcap_n\bigcup_{k\ge n} X_k. $$ This definition is usually presented, rather than $\limsup_{n\to\infty}X_n:=\lim_{n\to\infty}\cup_{k\ge n}X_k$, because for this to make sense one first has to define what it means to take a limit of sets. As an aside, the way one "thinks" about the limit supremum and infimum for sets is as follows: \begin{align*} \limsup_{n\to\infty}X_n &= \{x \mid x\in X_n\ \text{for infinitely many $n$}\} \\ \liminf_{n\to\infty}X_n &= \{x \mid x\in X_n\ \text{for all but finitely many $n$}\} \end{align*}
Find local and global minimizer(s)?
You constrain your function to the intersection of the circles of radius $1$ centred at $(-1,0)$ and $(1,0)$, i.e. you minimize the function $f$ over the set $\{(0,0)\}$. Consequently, $(0,0)$ is the only local and global optimiser.
Group over its center isomorphic to itself means the center is trivial?
If we let $G$ be an extension of a Prüfer $2$-group $H = {\mathbb Z}(2^\infty)$ by an element $t$ of order $2$ with $t^{-1}ht = h^{-1}$ for all $h \in H$, then $|Z(G)| = 2$ and $G/Z(G) \cong G$. But in your application, the map $G \to G/Z(G)$ defined by $g \mapsto gZ(G)$ is an isomorphism, which implies immediately that $Z(G)=1$, because $Z(G)$ is the kernel of this map.
Round based on number of digits
Scale the first three digits and divide by $10$ (= scale the first two digits), $$\frac{v}{10^{\lfloor\log_{10}v\rfloor-1}},$$ round to integer, $$\left\lfloor\frac{v}{10^{\lfloor\log_{10}v\rfloor-1}} +0.5\right\rfloor,$$ fill with zeroes, $$\left\lfloor\frac{v}{10^{\lfloor\log_{10}v\rfloor-1}} +0.5\right\rfloor10^{\lfloor\log_{10}v\rfloor-1}.$$ E.g. $115\to11.5\to12\to120$, $1115\to11.15\to11\to1100$. More compactly, $$[vs^{-1}] s$$ where $s$ is the scaling factor $10^{\lfloor \log_{10}v\rfloor-1}$ and the brackets $[\,\cdot\,]$ denote rounding to the nearest integer.
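The formula translates directly into code (a Python sketch; the function name is mine, and positive inputs are assumed):

```python
import math

def round_to_two_digits(v):
    # Keep the first two significant digits, rounding the third:
    # 115 -> 120, 1115 -> 1100, per the formula above.
    s = 10 ** (math.floor(math.log10(v)) - 1)   # scaling factor
    return math.floor(v / s + 0.5) * s
```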
Why are numbers of the form $2^{2t} \pm 3$ prime?
PARI/GP says that primes among these numbers are not as ubiquitous as you believe:

    ? for(t=2,25,p=2^(2*t)-3;if(!isprime(p),print([t,p,factor(p)])))
    [4, 253, [11, 1; 23, 1]]
    [8, 65533, [13, 1; 71, 2]]
    [9, 262141, [11, 1; 23831, 1]]
    [13, 67108861, [37, 1; 349, 1; 5197, 1]]
    [14, 268435453, [11, 1; 13, 1; 1877171, 1]]
    [15, 1073741821, [23, 1; 46684427, 1]]
    [16, 4294967293, [9241, 1; 464773, 1]]
    [17, 17179869181, [5113, 1; 3360037, 1]]
    [18, 68719476733, [242819, 1; 283007, 1]]
    [19, 274877906941, [11, 1; 12589, 1; 1984979, 1]]
    [20, 1099511627773, [13, 1; 84577817521, 1]]
    [21, 4398046511101, [47, 1; 193, 1; 4463, 1; 108637, 1]]
    [22, 17592186044413, [5927, 1; 2968143419, 1]]
    [23, 70368744177661, [227, 1; 19273, 1; 16084391, 1]]
    [24, 281474976710653, [11, 1; 167, 1; 239, 1; 641110271, 1]]
    [25, 1125899906842621, [59, 1; 176329, 1; 108224111, 1]]

It seems (tested up to $t=800$) that $t=2,3,5,6,7,10,11,12$ are perhaps the only cases that produce a prime. UPDATE: $2^{2\cdot 868}-3$ is also prime. But that is the only other case up to $t=1976$. (Beyond that I would have had to increase my stack size, but didn't want to.)
How are positive intervals calculated?
All you need, in fact, is that $M>\frac{n}{2}$. First, the $3^{rd}$ line gives us $p^2 + np = 161$. This is always satisfied for $p>0$, $n>0$. Now, substituting this in the $1^{st}$ line, we get $$N = p^4 + np^2 = p^2(p^2+n)$$ So, if $n>0$, $N>0$. Thus, considering the $2^{nd}$ equation, we see that $N>0$ only if $M>\frac{n}{2}$.
Upper bound $\int_0^\infty \exp\left(-\alpha(x(x+\beta+1))^2\right)$
Hint Let $\beta >0$. Remark that $$\int_0^\infty e^{-a\big(x(x+\beta +1)\big)^2}\,\mathrm d x\leq \int_0^\infty e^{-a(\beta +1)^2x^2}\,\mathrm d x,$$ and use the fact that $$\int_0^\infty e^{-x^2}\,\mathrm d x=\frac{\sqrt\pi}{2}.$$
Equicontinuous family of functions on a compact set
This proof has all the correct elements in it. Just one remark about the order in which you place them. The definition of (uniform) equicontinuity requires the existence of a certain $\delta$ (with certain properties) for any given $\epsilon.$ You are starting your proof by constructing such a $\delta,$ and then halfway down the proof you say "let $\epsilon>0$ be given". It would make more sense to assume the given $\epsilon$ before you start the construction.
Find by integrating the area of the triangle with vertices $(5,1), (1,3)\;\text{and}\;(-1,-2)$
It is just tedious. Let $f_1(x) = \frac{5x+1}{2}$, $f_2(x) = \frac{7-x}{2}$, $f_3(x)= \frac{x-3}{2}$. $(-1,f_1(-1)) = (-1,f_3(-1)) = (-1,-2)$, $(1,f_1(1)) = (1,f_2(1)) = (1,3)$, $(5,f_2(5)) = (5, f_3(5)) = (5, 1)$. $A = \int_{-1}^1 (f_1(x)-f_3(x) ) dx + \int_{1}^5 (f_2(x)-f_3(x) ) dx = 4 +8 = 12$.
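Since the integrands here are linear, a single trapezoid per piece evaluates both integrals exactly (a small Python sketch; the helper name is mine):

```python
def trapezoid(f, a, b):
    # One-panel trapezoid rule: exact for linear integrands
    return (b - a) * (f(a) + f(b)) / 2

f1 = lambda x: (5 * x + 1) / 2
f2 = lambda x: (7 - x) / 2
f3 = lambda x: (x - 3) / 2

# A = int_{-1}^{1} (f1 - f3) dx + int_{1}^{5} (f2 - f3) dx
area = (trapezoid(lambda x: f1(x) - f3(x), -1, 1)
        + trapezoid(lambda x: f2(x) - f3(x), 1, 5))
```

The shoelace formula on the three vertices gives the same value, $12$.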
Three lines go through one point
Let us add a couple of additional points: $H$ and $I$, as the missing vertices of the square built on $AC$. The solid green segments are orthogonal and of equal length due to a $90^\circ$ rotation around $C$. The solid purple segments are orthogonal and of equal length due to a $90^\circ$ rotation around $A$. Let us denote $AF\cap CE$ by $Y$, and let $B'$ be the image of $B$ under the translation bringing $H$ to $A$. $AF$ and $CE$ are heights of $AB'C$, hence $Y$ lies on the height of $AB'C$ through $B'$. Since $B'BHA$ is a parallelogram by construction, $Y$ also lies on the height of $ABC$ through $B$. An alternative approach is to show through the cosine theorem that $YA^2-YC^2=BA^2-BC^2$ holds: the conclusion is the same, since the height through $B$ is exactly the locus of points $P$ such that $PA^2-PC^2=BA^2-BC^2$. This classical result is related to Van Aubel's theorem for quadrilaterals. Remark: if we denote $AG\cap CD$ by $X$, $BX$ goes through the center of $ACIH$, by a similar argument.
Applying a convex function to a weak solution yields a weak subsolution
A standing assumption in exercises in that chapter is that the coefficients $a^{ij}$ satisfy an ellipticity condition. In particular, the matrix $A=(a^{ij})$ is positive definite, which implies $$\sum_{i,j=1}^n a^{ij} u_{x_i} u_{x_j} = \nabla u^T A\nabla u\ge 0$$ Since also $v\ge 0$ and $\phi''\ge 0$, the conclusion $$-\int_U \phi''(u) v \sum_{i,j=1}^n a^{ij} u_{x_i} u_{x_j} \, dx \le 0$$ follows.
Simple random sample of a Bernoulli and probability function of a statistic.
Outline: Sample mean. If $n$ independent $X_i$ are $Bernoulli(p)$, then $T = \sum_{i=1}^n X_i \sim Binom(n, p),$ from which it is easy to find the probability function of $\bar X = T/n.$ Sample variance. Notice that $$S^2 = \frac{1}{n-1}\left(\sum_{i=1}^n X_i^2 - n\bar X^2 \right).$$ Also, for Bernoulli random variables, which take only values $0$ and $1$, we have $$\sum_{i=1}^n X_i = \sum_{i=1}^n X_i^2,$$ and that ought to simplify the rest of it.
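The identity for $S^2$, together with $\sum X_i^2 = \sum X_i$ for $0/1$ data, is easy to spot-check numerically (a Python sketch; the sample here is simulated, not from the original problem):

```python
import random
import statistics

random.seed(1)
n = 50
xs = [random.randint(0, 1) for _ in range(n)]   # a 0/1 (Bernoulli-type) sample
T = sum(xs)
xbar = T / n
# S^2 = (sum X_i^2 - n*xbar^2)/(n-1); for 0/1 data, sum X_i^2 == sum X_i == T
s2 = (T - n * xbar ** 2) / (n - 1)
```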
Is the definition of degenerate bilinear forms symmetric in the two variables?
Yes, $\exists a\ne0\ f(a,−)=0$ and $\exists b\ne0 \ f(−,b)=0$ are different conditions. For example, define a bilinear form $f$ on the Hilbert space $\ell_2$ of sequences $a=(a_1,a_2,\dots)$ by $$f(a,b)=\sum_{n=1}^\infty a_{n+1}b_n$$ Then $f(a,-)=0$ when $a=(1,0,0,\dots)$. However, $f(-,b)$ is a nonzero form for every $b\ne 0$.
How to find the Euler characteristic of a cellularly embedded graph, given only the vertices and edges?
Rather than speaking of the Euler characteristic of a graph, we traditionally speak of the genus of a graph, which is the least genus of a surface into which it can be embedded... ...but there is no way to find this number that improves substantially on finding all embeddings of the graph. This CS StackExchange answer outlines a method. The basic idea is that the global structure of the embedding is determined by some local decisions: for each vertex, you should determine the cyclic order of the edges out of that vertex. Not all choices made at the different vertices are compatible, so we work out the ones that are, and now we can work out the faces of this embedding and determine the genus (or, if you prefer, the Euler characteristic). If we do this for all possible ways to choose the cyclic orders, we can get a bunch of embeddings, and then we can choose the best one. There are better algorithms, and there are ways to speed up this algorithm so we don't have to check every possibility, but there's no fast algorithm. See Wikipedia for some citations.
Function is $1$ when rational and $-1$ when irrational
The limit does not exist. To see this, note that given any $\delta > 0$ there exists a rational number $x \in (\sqrt{2}, \sqrt{2}+\delta)$ and an irrational number $y \in (\sqrt{2}, \sqrt{2}+\delta)$. For such values of $x$ and $y$, we have $h(x)-h(y) = 2$, and so taking $\varepsilon = 1$ in the (negation of the) definition of a limit shows that no limit exists. Intuitively, the function takes both values $1$ and $-1$ arbitrarily close to $\sqrt{2}$, so it can't have a limit at $\sqrt{2}$.
Relationship between f and f' in terms of number of real/non-real roots
Between any two consecutive distinct roots of $f$ there is at least one root of $f'$ (Rolle's theorem). This shows the claim if we do not count multiplicities. What happens at a (multiple) root of $f$ is the subject of the theorem you quote. Both statements can be combined as follows, if you like: if $a_1\le a_2\le \ldots \le a_k$ are the real roots of $f$ (with repetitions for multiple roots) then there is a sequence $b_1\le b_2\le \ldots \le b_{k-1}$ of roots of $f'$ (with repetitions possible for multiple roots; note that in the context of complex roots of $f$ there may be additional real roots of $f'$) such that $a_1\le b_1\le a_2\le b_2\le a_3\le \ldots\le a_{k-1}\le b_{k-1}\le a_k$; in the end we also obtain the claim with counting multiplicities.
Find the area under a Gaussian
$$I=\int_{-\infty}^{\infty} e^{-ax^2} dx$$ Let $x\sqrt{a}=t \implies dx=\frac{dt}{\sqrt{a}}$, then $$I=\int_{-\infty}^{\infty} e^{-t^2} \frac{dt}{\sqrt{a}}=\frac{\sqrt{\pi}}{\sqrt{a}} .$$
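The closed form $\int_{-\infty}^{\infty}e^{-ax^2}\,dx=\sqrt{\pi/a}$ can be checked numerically (a Python sketch; truncating the tails at $|x|=20$ is harmless since the integrand is astronomically small there):

```python
import math

def gaussian_integral(a, lo=-20.0, hi=20.0, n=200_000):
    # Midpoint rule for int exp(-a x^2) dx over [lo, hi]
    h = (hi - lo) / n
    return h * sum(math.exp(-a * (lo + (i + 0.5) * h) ** 2) for i in range(n))
```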
Why is IBN not a Morita property?
Lam's Lectures on Modules and Rings (GTM 189, Springer-Verlag 1999) contains in the exercises an example attributed to George Bergman that shows that IBN is not a Morita invariant property (exercise 11, page 502). (Lam also mentions earlier in the book that Cohn's book, Free Rings and their Relations, contained an exercise asking the reader to prove that IBN was a Morita invariant property...) Anyway, here's the sketch: Given a ring $R$, let $\mathcal{P}(R)$ be the monoid of isomorphism classes of finitely generated right $R$-modules under the direct sum operation. It can be shown (using coproducts) that there is a ring $R$ for which $\mathcal{P}(R)$ is generated as a monoid by $[R]$ and $[M],[N]$, together with the defining relations $[M]+[N]=[R]=[R]+[R]$. Assuming this holds, $S = \mathrm{End}_R(M\oplus N)$ is Morita equivalent to $R$ and has IBN, but $R$ does not (since $R\cong R\oplus R$). To show that $S$ has IBN, the suggestion is to use the following exercise: Let $S=\mathrm{End}(P_R)$, where $P_R$ is a progenerator over the ring $R$; then $S$ has IBN if and only if $P^n\cong P^m$ as right $R$-modules implies $n=m$.
Calculus of variations? Optimal control? What is this problem, and how should I approach it?
Define $$P(t):=\int_1^tp(x)dx=-\int_t^1p(x)dx$$ Note that $\dot P(t)=p(t)$ Then you can rewrite your problem as $$\min_{P(t)} \int_0^1 g(\dot P(t))f(t)dt-\int_0^1P(t)F(t)dt$$ This is a standard calculus of variations problem with the Lagrangian: $$L(t,P,\dot P)=g(\dot P(t))f(t)-P(t)F(t)$$ You just solve this, and then convert $P$ back to $p$. By the way, I'm not sure why you want to have the derivative $$\frac{d}{dp(t)}\int_t^1p(x)dx$$ But note that this is equal to $$-\frac{d}{dp(t)}\int_1^tp(x)dx=-\frac 1 {\frac {dp(t)}{dt}}\cdot\frac d {dt} \int_1^tp(x)dx=-\frac {p(t)}{\dot p(t)}$$
Show that $\alpha f + \beta g$ is measurable when $f,g$ are measurable.
Essentially we need to show that the sum $f+g$ of measurable functions (with additional agreement $\pm\infty+\mp\infty=0$) is measurable. The proof is a slight modification of the standard proof for finite-valued functions. Namely, let $F_{\pm\infty}=\{x : f(x) = \pm\infty\}$ and similarly $G_{\pm\infty}$ for $g$. Then for $a\in\mathbb{R}$ $$\{x : f(x) + g(x) < a\} = I(a) \cup \bigcup_{r\in\mathbb{Q}}\big(\{x : f(x) < r\} \cap \{x : g(x) < a - r\}\big),$$ where $I(a) = (F_{+\infty}\cap G_{-\infty})\cup(F_{-\infty}\cap G_{+\infty})$ if $a > 0$ and $I(a) = \varnothing$ otherwise. All the sets on the right-hand side are measurable, thus the one on the left-hand side is measurable too.
Existence of Joint probability density distribution
Let $U$ be uniform on $[0,1]$; it has a density. Now let $V=U$. The pair $(U,V)$ is supported on the diagonal of the unit square, and does not have a joint density function. Your instinct to use the Radon-Nikodym theorem is good, but that theorem has hypotheses that need checking. In this case the joint probability measure of the diagonal is $1$ but its 2-dimensional Lebesgue measure is $0$. The joint measure is not absolutely continuous w.r.t. Lebesgue measure, so the R-N plan cannot work.
Are these two expression equal?
I assume your friend and you both know that the expression $(-1)^x$ only makes sense if $x$ is an integer. That said, you have $$(-1)^{(-n)} = \frac{1}{(-1)^n} = \left(\frac{1}{-1}\right)^n = (-1)^n$$ I believe all steps only use equalities you learnt in school: For all $a, b$ we have $a^{-b} = \frac{1}{a^b}$ If $\frac{c}{d}$ is a fraction, then $\left(\frac cd\right)^b = \frac{c^b}{d^b}$ $\frac{a}{-b} = -\frac{a}{b}$.
Simple example of a mapping between topological spaces
What you gave is the definition for a function to be continuous at a point. The usual definition for a function to be continuous (which is equivalent with $f$ being continuous at every point) is that $f$ is continuous if the inverse image of every open set is open. It is easy to see, by the way you constructed your example, that indeed it is continuous.
Evaluating $\lim_{x\to 0} x^x$
Use these two rules: For every $a>0$ and every $b$: $$\ln(a^b) = b\ln(a)$$ If the limit $$\lim_{x\to a} f(x)$$ exists and $g$ is continuous, then $$\lim_{x\to a}g(f(x)) = g(\lim_{x\to a} f(x))$$
Is $\mathfrak{b}^{ce} = \mathfrak{b} $ where $c$ and $e$ are contraction and extension of an ideal.
No, because if $f$ is not surjective it is still possible that $\mathfrak b\cap f(A)=\mathfrak a\cap f(A)$ for two distinct ideals of $B$. Consider for instance the map $f:R\to R[T]$, $f(x)=x$, $\mathfrak b=(T)$ and $\mathfrak a=0$.
Problem in understanding a notation in graph theory (intersection of edges)
The authors are identifying an edge with the pair of vertices it is incident to (since the graph is simple this can be done uniquely). This is pretty common in the literature. In fact it is not unusual for edges to be defined as pairs of vertices.
Find perimeter and angles of a triangle using three 3d vectors.
The side vectors of the triangle are given by the differences of the position vectors of the vertices. For example $$\vec a -\vec b = 2i+4j-k$$ is one of the sides whose length is $\sqrt{4+16+1}=\sqrt{21}.$ Can you do the same with the other two edges? The following figure may help to better understand the solution: Regarding the angles: Let's calculate another side vector: $$\vec a -\vec c=i-5j+6k.$$ The length of this vector is $\sqrt{62}.$ So, $$\vec x=\frac{\vec a- \vec b}{\sqrt{21}}$$ and $$\vec y=\frac{\vec a- \vec c}{\sqrt{62}}$$ are two unit vectors. The scalar product of these two vectors will give you the cosine of the angle of the two corresponding edges, the angle at $\vec a$. Can you do that with the other two pairs of edges?
Proving that the set $u_1(x) = e^{a_1x}, \ldots, u_n(x) = e^{a_nx}$ is linearly independent.
This is due to the fact that $\lim_{x\rightarrow+\infty}e^{-x}=0$: multiplying the equality by $e^{-a_Mx}$, where $a_M$ is the largest exponent, you obtain $c_M+\sum_{k\neq M}c_ke^{(a_k-a_M)x}=0$; since $a_k-a_M<0$, $\lim_{x\rightarrow +\infty}e^{(a_k-a_M)x}=0$, hence $c_M=0$.
Distribute $n$ balls across $m$ non-empty bags to get the same sizes
We assume that balls are identical and bags are distinct. The idea is non-algorithmic, but it might help. Let $N=n+n_1+n_2+\dots +n_m$. Then every box contains at least $\lfloor \frac{N}{m} \rfloor$ balls. We are left with $r:=N-m\lfloor \frac{N}{m} \rfloor$ balls that can be distributed in $\binom{m}{r}$ ways.
Global minimum and maximum defined in the square
Let $f(x,y)=x^3-3xy+y^3-2$. For $x,y\geq0$, $f$ is a convex function of $x$ and a convex function of $y$. Since a convex function attains its maximal value at an extreme value of the variable, we obtain: $$\max_{(x,y)\in[0,3]\times[0,2]}f=\max_{x\in\{0,3\},y\in\{0,2\}}f=f(3,0)=25.$$ Now, $f(1,1)=-3$ and by AM-GM: $$x^3-3xy+y^3-2-(-3)=x^3+y^3+1-3xy\geq3\sqrt[3]{x^3y^3}-3xy=0,$$ which says that $-3$ is the minimal value.
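A coarse grid search over $[0,3]\times[0,2]$ agrees with both values (a Python sketch; the grid is chosen so that the optimizers $(3,0)$ and $(1,1)$ are grid points):

```python
def f(x, y):
    return x ** 3 - 3 * x * y + y ** 3 - 2

# Step 0.01 in each direction; both (3, 0) and (1, 1) lie on the grid
vals = [f(i / 100, j / 100) for i in range(301) for j in range(201)]
```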
Every metric space is completely normal.
A better idea is to adapt the distance per point instead: For each $a \in A$ pick $r_a >0$ such that $B(a,r_a) \cap B = \emptyset$, and also for each $b \in B$ pick $s_b >0$ such that $B(b, s_b) \cap A = \emptyset$. Then use $U = \bigcup\{B(a,\frac{r_a}{3}): a \in A\}$ and $V=\bigcup \{ B(b, \frac{s_b}{3}): b \in B\}$ as the required open sets (open balls are open, and so are all unions of them; $A \subseteq U$ and $B \subseteq V$ are obvious; and use the triangle inequality to show that $U$ and $V$ are disjoint). There need not be a positive distance between two separated sets: consider $A=(0,1), B=(1,2)$ in the reals, e.g.
A question on non-principalness of ideal $\langle 3, 1 + \sqrt{223} \rangle \subset \textbf Z[\sqrt{223}]$
Here is an algorithm for solving this type of problem. It is not in Marcus, but then I think Marcus makes some assumptions as to background; I don't know what he has in mind as a solution, but at least this method is simple and general. It is related to continued fractions and ultimately quadratic forms. ${\bf Theorem.}$ The solutions of $x^2-dy^2=N$ where $|N|<\sqrt{d}$ are given by $$p_n^2-dq_n^2=(-1)^{n-1}a_{n+1}$$ where $\frac{p_n}{q_n}$ are the continued fraction approximations. To find the continued fraction expansion, build a sequence of triples $(a_n,b_n,c_n)$ such that $b_n^2-4a_nc_n=D$. In the case of $d=223$, we have $D=4\cdot 223=892$. The $a_n$ will give the possible values for $N$. Further:

1) $a_{n+1}=-c_n$

2) $2c_n\mid b_n+b_{n+1}$

3) $\sqrt{D}-2|c_n|<b_{n+1}<\sqrt{D}$

So in the case of $d=223$ here is the calculation. $\sqrt{892}=29.8\cdots$ Start with $$(a_0,b_0,c_0)=(223,0,-1).$$ To get the next triple, set $$(223,0,-1)(-1,b_1,c_1);$$ we must have $2\mid 0+b_1$ and $27.8<b_1<29.8$, so $b_1=28$, and since $D=b_1^2-4a_1c_1$, $c_1=27$. To get the next one, $$(-1,28,27)(-27,b_2,c_2)$$ and $2\cdot 27\mid 28+b_2$ with $0\leq b_2<29.8$, thus $b_2=26$. The complete sequence is $$(223,0,-1)$$ $$(-1,28,27)$$ $$(-27,26,2)$$ $$(-2,26,27)$$ $$(-27,28,1)$$ $$(-1,28,27)$$ thus the last is the same as the second, so we have a repeat. Thus we see that the only numbers $\leq 14$ that are represented are $223, 1, 27, 2$, so only $1,2$ amongst the candidate numbers are represented and split into principal ideals. Thus $3,11,13$ split into non-principal ideals.
If further you calculate the differences $\delta_n=\frac{b_n+b_{n+1}}{2|c_n|}$ you get the continued fraction expansion, here we have $$\delta_0=14$$ $$\delta_1=1$$ $$\delta_2=13$$ $$\delta_3=1$$ $$\delta_4=28$$ giving $$\sqrt{223}=[14,\overline{1,13,1,28}]$$ ${\bf Additional \ Explanation:}$ If $\sqrt{d}=[k_0, k_1, k_2, \cdots ]$ is a continued fraction expansion, and let $$\tau_n=[k_n, k_{n+1}, k_{n+2}, \cdots ]$$ Then $$\tau_n=\frac{b_n+\sqrt{D}}{2c_n}$$ where $c_n,b_n$ are the numbers above.
Question about the last step of this AM-GM inequality proof
\begin{align*} \left(x_1 \cdots x_n A^{2^k - n}\right)^{\frac{1}{2^k}} \leq A &\Rightarrow x_1 \cdots x_n A^{2^k - n} \leq A^{2^k} \\ &\Rightarrow x_1 \cdots x_n \leq A^n \\ &\Rightarrow (x_1 \cdots x_n)^{\frac{1}{n}} \leq A \end{align*}
What is the definition of polynilpotent Lie algebras?
Usually poly-P means that there is a subnormal series (or normal series; here it doesn't matter) in which all subquotients have property P. When P is "abelian" or "solvable", clearly poly-P is the same as solvable. Since nilpotent is between solvable and abelian, polynilpotent would just be the same as solvable. So the definition has no interest, unless one counts the number of steps. Then being $n$-step polynilpotent, i.e. polynilpotent with a subnormal series of length $n$, has a reasonable meaning. For instance for $n=2$ it is called meta-nilpotent. Over a field, every solvable finite-dimensional Lie algebra is nilpotent-by-abelian, hence 2-step polynilpotent. In infinite dimension one can construct solvable Lie algebras of arbitrarily large minimal step of polynilpotency.
Ordering word problem
Group the $3$ guys together and treat them as a single guy. Then we get $4$ persons in total, and they can be arranged in $4!$ ways. The $3$ guys among themselves can be arranged in $3!$ ways. In total, $4! \times 3! = 144$ possible arrangements.
Equality from Definition Riemann Sum
So we are left to show that $$\tag1{\lim_{n\to\infty}\sum_{i=1}^n\sum_{j=1}^n\log\left[1+\frac{1}{n^2}f(\tfrac{i}{n},\tfrac{j}{n})\right] } = {\int_{0}^{1}\!\!\int_{0}^{1}f(x,y)\,\mathrm dy\,\mathrm dx}$$ From $$ e^t\ge 1+t\qquad\text{for all }t\in\Bbb R,$$ we get (using $(1+t)(1-t+2t^2)=1+t^2(2t+1)\ge 1$ for $t\ge -\frac12$) $$ 1+t\ge \frac1{1-t+2t^2}\ge\frac1{e^{-t+2t^2}}=e^{t-2t^2} \qquad\text{for }|t|<\frac12,$$ and hence $$t-2t^2\le \ln(1+t)\le t\qquad\text{for }|t|<\frac12. $$ From this, $$|\ln(1+t)-t|\le 2t^2\le 2\epsilon^2\qquad\text{for }|t|<\epsilon <\frac12. $$ In order to be Riemann integrable, $f$ must be bounded, say $|f(x,y)|<M$ for all $0\le x,y\le 1$. Then for all $n> \sqrt{2 M}$, we have $\left|\frac1{n^2}f(\tfrac in,\tfrac jn)\right|<\frac M{n^2}<\frac 12$ and hence $$ \left|\log\left[1+\frac{1}{n^2}f(\tfrac{i}{n},\tfrac{j}{n})\right]-\frac{1}{n^2}f(\tfrac{i}{n},\tfrac{j}{n})\right|\le 2 \frac{M^2}{n^4}.$$ It follows that the sum on the left in $(1)$ differs from a usual Riemann sum by absolutely at most $n^2\cdot 2 \frac{M^2}{n^4}=\frac{2M^2}{n^2}$, and this tends to $0$ as $n\to\infty$.
Algebra is Generated by nilpotent Lie Algebra
Without more context I cannot be certain, but I don't think your interpretation is the one your author has in mind. Given an associative algebra (like a Banach algebra), say $\mathcal{A}$, one gets a Lie algebra using the commutator bracket: $[x,y]=xy-yx$ where $x,y \in \mathcal{A}$. Here "$xy$" denotes $\mathcal{A}$'s (associative) multiplication. So $\mathcal{A}$ can be viewed either as an associative algebra (with $(x,y) \mapsto xy$) or a Lie algebra (with $(x,y) \mapsto [x,y]=xy-yx$). This is where it gets a little confusing. Suppose $\mathcal{L}$ is a subalgebra of $\mathcal{A}$. What is meant? a sub-associative-algebra or a sub-Lie-algebra? These are not the same thing. A subalgebra (in the associative sense) is a subspace which is closed under the associative multiplication. A subalgebra (in the Lie algebra sense) is a subspace which is closed under the Lie bracket. Every subalgebra (in the associative sense) is a subalgebra (in the Lie sense), but the converse is not true. All that said, I would interpret "$\mathcal{A}$ is generated by a nilpotent Lie algebra $\mathcal{L}$" as follows: There exists a Lie subalgebra $\mathcal{L}$ of $\mathcal{A}$ ($\mathcal{L}$ is a subspace closed under the bracket) such that $\mathcal{L}$ is nilpotent (as a Lie algebra) where $\mathcal{L}$ generates $\mathcal{A}$ as an associative algebra. [Here $\mathcal{L}$ may or may not be a subalgebra of $\mathcal{A}$ in the associative sense.] "Generated" here means: The smallest associative subalgebra of $\mathcal{A}$ containing $\mathcal{L}$ is $\mathcal{A}$ itself. Or, in other words, for each $a \in \mathcal{A}$ there exists some multivariate polynomial $f$ (in non-commuting variables) and elements $x_1,\dots,x_n \in \mathcal{L}$ such that $f(x_1,\dots,x_n)=a$. So every element in $\mathcal{A}$ can be expressed as a linear combination of words over $\mathcal{L}$.
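A concrete instance of the distinction, sketched with $2\times 2$ real matrices (the helper functions are mine): the span of $J = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ is a Lie subalgebra but not an associative subalgebra.

```python
def matmul(A, B):
    # multiply 2x2 matrices given as nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def bracket(A, B):
    # commutator bracket [A, B] = AB - BA
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

J = [[0, 1], [-1, 0]]

# span{J} is closed under the bracket ([J, J] = 0), so it is a Lie subalgebra
print(bracket(J, J))  # [[0, 0], [0, 0]]

# but it is not closed under the associative product: J*J = -I is no multiple of J
print(matmul(J, J))  # [[-1, 0], [0, -1]]
```

The associative algebra generated by $\operatorname{span}\{J\}$ is $\operatorname{span}\{I, J\}$, strictly larger than the Lie subalgebra itself.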
Convert from binary to quinary
In scanning a binary number (positive integer) from left to right, a $0$ bit doubles the previous value and a $1$ bit doubles the previous value and adds $1$. So in left to right scanning, say $1011_2$, values are one ($1$), two ($10$), five ($101$), eleven ($1011)$. The value of this binary numeral is eleven. This idea of doubling or doubling and adding one can be done in any base. For instance $10111_2$; converting to base $5$ (quinary) would go $1, 2, 10, 21, 43$. Answer $43_5$. Or $1000101_2$ to base $5$ would go $1, 2, 4, 13, 32, 114, 234$ Answer: $234_5$.
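The left-to-right scan can be carried out directly in base-5 digit arithmetic; a sketch (the function name is mine):

```python
def binary_to_base5(bits: str) -> str:
    """Scan a binary numeral left to right, doubling and adding the
    incoming bit, with all arithmetic done on base-5 digits."""
    digits = [0]  # base-5 digits, most significant first
    for b in bits:
        carry = int(b)  # the incoming bit enters at the low end
        for i in range(len(digits) - 1, -1, -1):
            carry, digits[i] = divmod(2 * digits[i] + carry, 5)
        if carry:
            digits.insert(0, carry)
    return "".join(map(str, digits))

print(binary_to_base5("10111"))    # 43
print(binary_to_base5("1000101"))  # 234
```

The intermediate values of `digits` reproduce the running sequences in the answer, e.g. $1, 2, 10, 21, 43$ for $10111_2$.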
Countability of the continuum
Not to put too fine a point on it, the paper is rubbish. In Proposition 2.2 the author confuses the set of nodes of his tree, which has cardinality $\aleph_0$, with the set of branches through the tree, which has cardinality $10^{\aleph_0}$; because he mistakenly thinks that these are the same set, he mistakenly concludes that $\aleph_0=10^{\aleph_0}$. Proposition 2.3 is merely a restatement of the same error in a very slightly different guise. In his argument for Proposition 3.1 he (mostly) establishes an injection from the set of real numbers to the set of branches of his tree, but (as he himself acknowledges in the argument for Lemma 3.1) this is not a bijection, since a real number may correspond to more than one branch of the tree. (The assertion on page $4$ that $3.1415926$ ‘is the number $\pi$’ is of course false, since $\pi$ is irrational and therefore does not have a finite decimal expansion.) Proposition 3.2 is yet another instance of his inability to distinguish the nodes of a tree from its branches. The rest of the paper is just more of the same.
Use Newton's method to approximate a unique root in $[-1,1]$ for $f(x)=\arccos(x)-2x-3$ to within $10^{-6}$
Note that $f'(x)$ is undefined at $x= \pm 1$ because of division by zero. Newton's method works well if you start close to the root. If you graph the function, the root is somewhere near $-0.4$. Starting at $-0.4$, the iteration converges rapidly to $-0.4699722$.
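A minimal sketch of the iteration, assuming the starting guess $-0.4$ read off a graph:

```python
import math

def f(x):
    return math.acos(x) - 2 * x - 3

def fprime(x):
    # derivative of arccos(x) is -1/sqrt(1 - x^2)
    return -1 / math.sqrt(1 - x * x) - 2

x = -0.4  # starting guess near the root
for _ in range(50):
    step = f(x) / fprime(x)
    x -= step
    if abs(step) < 1e-10:
        break

print(round(x, 7))  # approximately -0.4699722
```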
Integral involving Airy function
You can obtain it using the integral representation $$ \operatorname{Ai}(y) = \frac{{\sqrt 3 }}{{2\pi }}\int_0^{ + \infty } {\exp \left( { - \frac{{t^3 }}{3} - \frac{{y^3 }}{{3t^3 }}} \right)dt} ,\quad |\arg y|<\tfrac{\pi}{6} $$ (cf. http://dlmf.nist.gov/9.5.E6). Indeed, by the Fubini theorem and the reflection formula for the gamma function, it is found that your integral is \begin{align*} & \frac{{\sqrt 3 }}{{2\pi }}\int_0^{ + \infty } {\exp \left( { - \frac{{t^3 }}{3}} \right)\int_0^{ + \infty } {\exp \left( { - \frac{{y^3 }}{{3t^3 }}} \right)dy} dt} \\ &\mathop = \limits^{x = y^3 /(3t^3 )} \frac{1}{{2\pi }}\frac{1}{{3^{1/6} }} \int_0^{ + \infty } {\exp \left( { - \frac{{t^3 }}{3}} \right)t \int_0^{ + \infty } e^{ - x} x^{1/3 - 1} dx dt} \\ & = \frac{1}{{2\pi }}\frac{1}{{3^{1/6} }}\Gamma \left( {\frac{1}{3}} \right)\int_0^{ + \infty } {\exp \left( { - \frac{{t^3 }}{3}} \right)tdt} \\ &\mathop = \limits^{s = t^3 /3} \frac{1}{{2\pi }}\frac{1}{{\sqrt 3 }}\Gamma \left( {\frac{1}{3}} \right)\int_0^{ + \infty } {e^{ - s} s^{2/3 - 1} ds} \\ & = \frac{1}{{2\pi }}\frac{1}{{\sqrt 3 }}\Gamma \left( {\frac{1}{3}} \right)\Gamma \left( {\frac{2}{3}} \right) = \frac{1}{3}. \end{align*} For a more general result, see http://dlmf.nist.gov/9.10.E17. It can be derived the same way.
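The last line is easy to sanity-check numerically, since $\Gamma(\frac13)\Gamma(\frac23)=\pi/\sin(\pi/3)$ by the reflection formula:

```python
import math

g = math.gamma(1 / 3) * math.gamma(2 / 3)

# reflection formula: Gamma(1/3) * Gamma(2/3) = pi / sin(pi/3)
print(abs(g - math.pi / math.sin(math.pi / 3)))  # essentially zero

# hence the value of the integral of Ai over [0, infinity)
print(g / (2 * math.pi * math.sqrt(3)))  # 1/3 up to rounding
```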
Probability of Normal Dice
You could argue completely with Bayes' theorem, but you can also observe that uniformly picking a die and rolling it once amounts to uniformly picking one of the $ 1200$ faces. You picked a six (out of $250$ sixes). There are $150$ sixes on "even" dice and $100$ sixes on normal dice (plus none on "odd" dice). Hence the probability that your particular six is on a normal die is $\frac{100}{250}$.
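The same counting in code, using the face counts stated above:

```python
from fractions import Fraction

# sixes by die type: 100 on normal dice, 150 on "even" dice, 0 on "odd" dice
sixes = {"normal": 100, "even": 150, "odd": 0}
total_sixes = sum(sixes.values())  # 250 sixes among the 1200 faces

# given that a six was rolled, probability it sits on a normal die
p = Fraction(sixes["normal"], total_sixes)
print(p)  # 2/5
```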
Find all functions satisfying a condition with $\min$ and $\max$
The zero function is a trivial solution. For any nonzero function we may assume by symmetry that $f(x)>0$ for some $x$. Assume there exists $x$ with $f(x)>0$ and $y$ with $f(y)\le 0$. By repeatedly replacing one of $x,y$ with their mean $\frac{x+y}{2}$ we can assume that $|x-y|<1$. Then $$ \left|\frac{f(x)-f(y)}{x-y}\right|=\frac{|f(x)|+|f(y)|}{|x-y|}>\max\{|f(x)|,|f(y)|\}$$ gives us a contradiction. Thus if $f(x)>0$ somewhere, then $f(x)>0$ everywhere. Then $f$ is strictly increasing, hence the only way for $f$ to fail to be continuous at some point $x_0$ is that $d:=\lim_{x\to x_0^+}f(x)-\lim_{x\to x_0^-}f(x)>0$ (note that the one-sided limits exist to begin with). Then for $y<x_0<x$ we have $$\frac{f(x)-f(y)}{x-y}>\frac d{x-y}.$$ As $x\to x_0^+$ and $y\to x_0^-$ this grows without bound, contradicting $\max\{f(x),f(y)\}\to\lim_{x\to x_0^+}f(x)<\infty$. Therefore $f$ is continuous and your original argument shows that it is of the form $f(x)=Ke^x$.
Left and right coset representatives of $\text{SL}_2(\mathbb{Z})$ action
$$M_n = \bigcup_{ad=n, b \bmod d}SL_2(Z) \pmatrix{a & b \\ 0 & d}, \qquad \qquad\scriptstyle\pmatrix{0 & 1\\ -1 & 0}\pmatrix{a & b \\ 0 & d}=\pmatrix{d & 0 \\ -b & a}\pmatrix{0 & 1\\ -1 & 0}$$ Thus $$M_n =\bigcup_{ad=n, b \bmod d} SL_2(Z)\pmatrix{0 & 1\\ -1 & 0}\pmatrix{a & b \\ 0 & d}=M_n^\top = \bigcup_{ad=n, b \bmod d} \pmatrix{d &0 \\ -b & a} SL_2(Z)\\= \bigcup_{ad=n, b \bmod d} \pmatrix{d &0 \\ -b & a} \pmatrix{0 & 1\\ -1 & 0}SL_2(Z) = \bigcup_{ad=n,b \bmod d}\pmatrix{0 & 1\\ -1 & 0}\pmatrix{a & b \\ 0 & d}SL_2(Z) $$
Given an overdetermined system of linear equation, find a subset that can be solved with exact solution
If the system has a unique solution, the standard way to find it is by row reduction (RREF); otherwise we can find an approximate solution by least squares.
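For the least-squares route, a minimal pure-Python sketch via the normal equations $A^{\top}Ax=A^{\top}b$; the small example system is made up for illustration:

```python
# overdetermined system: 3 equations, 2 unknowns (columns: intercept, slope)
A = [[1, 1], [1, 2], [1, 3]]
b = [1, 2, 2]

# normal equations: (A^T A) x = A^T b
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)] for i in range(2)]
Atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]

# solve the resulting 2x2 system by Cramer's rule
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
x0 = (Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / det
x1 = (AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0]) / det

print(x0, x1)  # least-squares fit y = 2/3 + 0.5 * t
```

For larger or ill-conditioned systems one would use QR factorization rather than the normal equations, but the idea is the same.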
What's the probability of a certain event occurring when there are an infinite number of possibilities?
There are many probability measures on the set of natural numbers. However, there is no probability measure that assigns equal weight to all points. For if that weight is positive, we violate the fact that the sum of the probabilities over the sample space is $1$. And if that weight is $0$, then by countable additivity the weight of the sample space is $0$, not $1$.
Is there any typical approach to solving a problem of this kind? (Propositional Calculus)
A straightforward thing to do is to look at the rows where $\varphi$ is true. So this is in the row where $p$ and $q$ are false and $r$ is true, and in the row where $p$ is true and $q$ and $r$ are false. So for the first row I can generate the term $\neg p \land \neg q \land r$, and for the second row I get $p \land \neg q \land \neg r$. And now I simply take the disjunction of those: $(\neg p \land \neg q \land r) \lor (p \land \neg q \land \neg r)$. This can (using distribution) be simplified to $((\neg p \land r) \lor (p \land \neg r)) \land \neg q$, which is essentially the same as the one you got at the end.
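Both forms can be checked against the truth table by brute force; a sketch assuming $\varphi$ is true exactly on the two rows described above:

```python
from itertools import product

def phi(p, q, r):
    # truth table: phi is true exactly on (F, F, T) and (T, F, F)
    return (p, q, r) in {(False, False, True), (True, False, False)}

def dnf(p, q, r):
    # full DNF read off the true rows
    return (not p and not q and r) or (p and not q and not r)

def simplified(p, q, r):
    # the form obtained after factoring out "not q"
    return ((not p and r) or (p and not r)) and not q

for v in product([False, True], repeat=3):
    assert phi(*v) == dnf(*v) == simplified(*v)
print("all 8 rows agree")
```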
Quick ways of finding principal curvature in the Cartesian plane
Since the curvature is $k=d\theta/dl$, the average curvature is $k_a=\Delta\theta/\Delta l$. So you break your curve into segments at the points where $k=0$ or $k=\infty$. For each segment you calculate $|\Delta\theta|$, the angle through which the tangent of the curve rotated. You sum all the $|\Delta \theta|$ and divide by the total length of the curve. Example. Let's calculate the average curvature for $y=\sin x$ from $0$ to $2\pi$. The curvature is zero at $x=\pi$. So we consider two segments. On the first segment the tangent rotated from $\theta_0=\pi/4$ to $\theta_\pi=-\pi/4$, so $|\Delta\theta_1|=\pi/2$. It's the same for the second segment: $|\Delta\theta_2|=\pi/2$. The length of the curve is $$l=\int_0^{2\pi}\sqrt{1+\cos^2x}\,dx=4\sqrt{2}\,E(1/2)\approx7.64$$ So the average curvature is $k_a=2(\pi/2)/l\approx0.41$
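The computation is easy to reproduce numerically; a sketch using Simpson's rule for the arc length:

```python
import math

def speed(x):
    # arc-length element of y = sin(x): sqrt(1 + (y')^2)
    return math.sqrt(1 + math.cos(x) ** 2)

def simpson(g, a, b, n=1000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

length = simpson(speed, 0, 2 * math.pi)       # about 7.64
k_avg = (math.pi / 2 + math.pi / 2) / length  # about 0.41
print(length, k_avg)
```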
Subspace topology on direct limit topology
Apparently the coherent topology can be defined for any collection of spaces, without requirements such as that each space be a closed subspace of another. The proof that the subspace topology inherited from the coherent topology is the coherent topology of the subspaces requires, as you noticed, that the subspace be open, and only for one direction of the inclusions. When the subspace is closed, the need for those two topologies to coincide could be the reason for the other requirements, namely that the spaces form a nest of closed subspaces; I've yet to find out. A discussion about the coherent topology has been started at the ASCII-only web site http://at.yorku.ca/cgi-bin/bbqa?forum=ask_a_topologist&task=list Post there for details of my proofs or write [email protected].
Distribution of median of three exponential observations
Comment continued (not Answer). From the standard exponential distribution (rate and mean both unity), I took a million samples of size $n = 3$ and found the median $h$ of each. Here is a histogram of the results. When you get your PDF you can compare it with the histogram. For a start, the distribution of $h$ is clearly not exponential. [The red curve is the PDF I got following the hints in my (real) Comment.] Note: In case the formula for the PDF of the $i$th order statistic is not in your text or notes, look here at 7th slide or about halfway down the Wikipedia article on 'order statistics'.
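The simulation is easy to reproduce; a sketch with the standard exponential, using a smaller sample than a million to keep it quick (the analytic mean $5/6$ follows from the order-statistic PDF $6(1-e^{-h})e^{-2h}$):

```python
import random

random.seed(1)
N = 200_000
total = 0.0
for _ in range(N):
    obs = sorted(random.expovariate(1.0) for _ in range(3))
    total += obs[1]  # the median of the three observations

sample_mean = total / N
# the order-statistic density is 6*(1 - exp(-h))*exp(-2h), with mean 1/2 + 1/3 = 5/6
print(sample_mean)  # close to 5/6 = 0.8333...
```

Comparing a histogram of the simulated medians with that density reproduces the red curve in the plot.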