Lipschitz space-filling maps
First, notation/terminology: Talking about Lipschitz maps with respect to that nonstandard metric is a bad idea, simply because there exists a simple and much more standard way to say the same thing. Your function is Lipschitz with respect to that funny metric if $$|f(x)-f(y)|\le c|x-y|^{1/2}.$$ This is exactly the definition of $$f\in \mathrm{Lip}_{1/2}.$$That $\mathrm{Lip}_{1/2}$ thing is known as a Hölder class. Is every space-filling curve $\mathrm{Lip}_{1/2}$? Of course not. Silly counterexample: Say $f:[0,1]\to[0,1]^2$ is surjective. Say $g:[0,2]\to[0,1]^2$ and $g|_{[0,1]}=f$. Then $g$ is space-filling, and $g$ can do whatever you want on $[1,2]$. Yes, the Hilbert curve $f$ is $\mathrm{Lip}_{1/2}$. This follows from the following: If $I=[j4^{-n},(j+1)4^{-n}]$ then $f(I)=[k2^{-n},(k+1)2^{-n}]\times[l2^{-n},(l+1)2^{-n}]$. Now if $x,y\in[0,1]$ then $x,y\in I_1\cup I_2=[j4^{-n},(j+2)4^{-n}]$ where $|x-y|\sim 4^{-n}$; since $f(I_1)\cap f(I_2)\ne\emptyset$ the diameter of $f(I_1\cup I_2)$ is less than $c2^{-n}$, and there you are. I suspect the same applies to the Peano curve, with $3$ and $9$ in place of $2$ and $4$. Not sure, not really familiar with the construction.
Integral of the product $x^n e^x$
No, your argument is completely wrong, sorry. You are confusing $$ (\ln u)^n $$ with $$ \ln(u^n) $$ which are very different. By the way, if you try differentiating, you get $$ D(ne^x(x-1))=ne^x(x-1)+ne^x=nxe^x $$ Hint: $$ \int x^ne^x\,dx=(x^n+p(x))e^x $$ where $p(x)$ is a polynomial of degree at most $n-1$.
Does a convergent sequence of norms of a vector space always converge to a norm?
Let $V$ be a vector space, let $\|\cdot\|_n$, $n\in\Bbb N$, be a sequence of norms, and assume that $\|\cdot\|_n\to\|\cdot\|$ pointwise. Then $\|\cdot\|$ is a norm iff $\|x\|\ne 0$ for all $x\ne 0$. Indeed, all properties of a norm clearly transfer to the limit except the condition that non-zero vectors have non-zero norm.
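For a concrete illustration (a standard example, not part of the original answer): on $\Bbb R^2$ the norms $$\|(x,y)\|_n=|x|+\tfrac1n|y|$$ converge pointwise to $\|(x,y)\|=|x|$, which satisfies every norm axiom except that the non-zero vector $(0,1)$ gets norm $0$; so the limit is only a seminorm.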
Induction Not Starting from $\boldsymbol{0}$
Given any integer $a,$ observe that the statement $P(n)$ holds for all integers $n \geq a$ if and only if the statement $P(m + a)$ holds for all integers $m \geq 0.$ Consequently, the Principle of Mathematical Induction can be invoked to show that (1.) if $P(n)$ holds for the integer $n = a$ and (2.) $P(n + 1)$ holds whenever $P(n)$ holds for some integer $n \geq a,$ then $P(n)$ holds for all integers $n \geq a.$ Ultimately, this says precisely that the base case for a proof by mathematical induction need not be the case that $n = 0.$
Blow-Up in p not singular in p
A single blow-up will not always resolve the singularity. The following two examples are exercises from chapter I of Hartshorne (undoubtedly covered in many other books as well). Blowing up the singularity at the origin of a tacnode $x^2=x^4+y^4$ amounts to substituting $x=yt$ and cancelling the common factor. This gives us the equation $t^2=y^2t^4+y^2$, and that curve still has an ordinary double point as solving for $y$ gives $y=\pm t/\sqrt{t^4+1}$ (implying two branches through the origin with tangents of slopes $\pm1$). Blowing up a higher order cusp of $y^3=x^5$ at the origin with the substitution $y=xt$ leads to the usual cusp $t^3=x^2$. You see that in both cases above, another blowing up will resolve the singularity. IIRC a finite number of blowings up will always do. But, as we saw, a single one will not necessarily suffice.
Determine $h$ so that the linear system $Ax=b$ has infinitely many solutions.
Hint: Obtain the solutions for the truncated system $$\pmatrix{5&3&2\\-7&-1&6} x = \pmatrix{6\\1}$$ Now find out which $h$ satisfies $\pmatrix{-6&6&h}x = 21$ for all solutions to the above.
Prove that there exists a number α > 0 such that f(x) > α for all $x \in I$.
Let $L = \inf_{x \in I} f(x)$. Since $f$ is a continuous function (and $I$ is a closed, bounded interval), $\exists x_0 \in I$ such that $0<f(x_0)=L$. Take $\alpha = \frac L2$. Then $f(x)\ge L>\alpha$ for all $x \in I$.
Checking uniform continuity of a function.
If you can extend $f$ to $[0,1]$ in such a way that the extension is continuous, you are done, since continuous functions on compact sets are uniformly continuous. The extension to $x=1$ is obvious. Can you see how to extend it to $x=0$?
Can a doubly stochastic matrix be asymmetric?
$${1\over15}\pmatrix{8&1&6\cr3&5&7\cr4&9&2\cr}$$
How to prove there exists $\xi\in (0,1)$ such that $|f(\xi)|\le|f'(\xi)|$
It is trivial if $f$ has a zero in $(0,1)$, so suppose it doesn't. Then $f$ can't change signs on $(0,1)$, and without loss of generality suppose $f$ is positive on $(0,1)$. Thus $\log(f)$ is defined on the interval, and $\lim\limits_{x\to 1-}\log(f(x)) = -\infty$, which implies by the mean value theorem that the derivative of $\log(f)$ takes on arbitrarily large negative values. In particular, there exists $\xi\in(0,1)$ such that $\dfrac{f'(\xi)}{f(\xi)}\leq -1$.
$Int(A\times B) = Int(A)\times Int(B)$, for which metric?
Let $A$ be a subset of a topological space $X$ and let $B$ be a subset of a topological space $Y$. If $X\times Y$ is equipped with the product topology then $A^{\circ}\times B^{\circ}$ is open, and secondly it is a subset of $A\times B$. We conclude that $A^{\circ}\times B^{\circ}\subseteq\left(A\times B\right)^{\circ}$. Conversely if $\langle a,b\rangle\in\left(A\times B\right)^{\circ}$ then there are sets $U,V$ such that $U$ is open in $X$ and $V$ is open in $Y$ with $\langle a,b\rangle\in U\times V\subseteq\left(A\times B\right)^{\circ}\subseteq A\times B$. This is because sets $U\times V$ with $U$ open in $X$ and $V$ open in $Y$ form a base of the product topology on $X\times Y$. We have $a\in U\subseteq A$ and $b\in V\subseteq B$ so that $a\in A^{\circ}$ and $b\in B^{\circ}$. This is a proof without use of metrics. If $d$ is a metric on $X\times Y$ such that the topology induced by the metric coincides with the product topology then necessarily, for $r>0$ small enough, the open disc with radius $r$ and center $\langle a,b\rangle$ is a subset of $U\times V$.
Prove $f(x) = x^2$ is continuous at $x=4$
If you assume $\delta < 1$, then you know that: $$\begin{align*} |x-4|<\delta \\ \Rightarrow |x-4|<1 \\ \Rightarrow -1 < x-4 < 1 \\ \Rightarrow 3 < x < 5 \end{align*} $$ But then, we can determine what this means about $|x+4|$: $$\begin{align*} &\Rightarrow 7 < x+4 < 9 \\ &\Rightarrow |x+4| < 9 \end{align*} $$ So this means that if we assume $\delta<1$, we have: $$|f(x)−16| = |x+4||x-4| < 9|x-4|$$ And if you let $\delta = \min\left(1,\frac{\epsilon}{9}\right)$, then: $$|f(x)-16| < 9|x-4| < 9\left(\frac{\epsilon}{9}\right) = \epsilon$$
Classification of fields which are isomorphic to some finite extension
Lüroth's theorem on simple transcendental extensions says what you have described is possible in every degree. Instead of going upwards you just have to search inward: for any $x$ transcendental over $K$ and any nonconstant polynomial $f(x)$ with coefficients in $K$, the subfield $K(f)$ of $K(x)$ generated by $f$ over $K$ is actually isomorphic to $K(x)$. Clearly $K(x)$ is an extension of $K(f)$ of degree equal to the degree of $f$.
First four nonzero terms of the McLaurin expansion of $\frac{xe^x}{\sin x}$ at $x_0=0$
I didn't check the computations but, yes, you are doing it right. As for the other question, note that the expression $\frac{xe^x}{\sin x}$ is undefined if $x=0$. In order to make $f$ continuous at $0$, you must define $f(0)$ as $\lim_{x\to0}\frac{xe^x}{\sin x}$, which is equal to $1$.
How to write the set of all permutations on a set $n=\{1, 2, \ldots, n\}$
Say you have the matrix $$A= \begin{pmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{pmatrix},$$ a completely generic $2\times 2$ matrix. As you said, $S_n$ is the set of all the permutations of the set $\{1,\dots,n\}$. So $S_2$ is the set of all the permutations of $\{1,2\}$, hence $$S_2 = \{[12],[21]\}.$$ You can think of the function $\sigma$ as follows: $\sigma(i)=j$ means the permutation $\sigma$ sends the number $i$ to $j$. The permutation $\sigma_1:=[12]$ is the identity of the set $S_2$, since the number $1$ and the number $2$ do not change their position. Hence we have $\sigma_1(1)=1$ and $\sigma_1(2)=2$. The permutation $\sigma_2:=[21]$ says that $1$ goes to $2$ and that $2$ goes to $1$, so $\sigma_2(1)=2$ and $\sigma_2(2)=1$. These two are the only possible permutations of the set $\{1,2\}$. This is what $S_2$ looks like. We can also look at what $$F(A) := \sum_{σ ∈ S_2}\text{sign}(σ) \prod_{i=1}^{2} A_{iσ(i)}$$ looks like. First of all the term $\sum_{σ ∈ S_2}$ tells you that you need to sum $2$ elements, since you have only $2$ permutations in $S_2$. As a second step you want to evaluate the sign of the permutations in $S_2$. You defined it as $\text{sign}(\sigma) = (-1)^N$, where $N$ is the number of transpositions of your permutation. We have two signs to evaluate: $$\text{sign}(\sigma_1)= (-1)^0=1; \qquad \text{sign}(\sigma_2) = (-1)^1=-1.$$ (Ask if you have problems with this step.) Now we can finally evaluate $F(A)$: \begin{align*} F(A) &= \sum_{σ ∈ S_2}\text{sign}(σ) \prod_{j=1}^{2} A_{jσ(j)} \\ &= \sum_{i=1}^2 \text{sign}(\sigma_i) \prod_{j=1}^2 A_{j\sigma_i(j)} \\ &= \text{sign}(\sigma_1)(A_{1\sigma_1(1)}A_{2\sigma_1(2)}) + \text{sign}(\sigma_2)(A_{1\sigma_2(1)}A_{2\sigma_2(2)}) \\ &= A_{11}A_{22}-A_{12}A_{21}. \end{align*} This is actually the determinant of the matrix $A$. Indeed $F(A)$, in general, is exactly the determinant of an $n\times n$ matrix. Don't hesitate to ask if you have problems.
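If it helps, here is the same signed sum over permutations written as a small Python sketch (illustrative only, not part of the original answer; the helper names are made up):

from itertools import permutations
from math import prod

def sign(perm):
    # (-1) raised to the number of inversions equals the sign of the permutation
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def F(A):
    # sum over all permutations sigma of sign(sigma) * prod_i A[i][sigma(i)]
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

print(F([[1, 2], [3, 4]]))   # prints -2, the determinant of that matrix

For $n=2$ this reproduces exactly $A_{11}A_{22}-A_{12}A_{21}$.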
Finding a subgroup $H \leq S_4$ such that a set $X$ is a coset by $H$.
$X$ and $H$ must be of the same order, so $H$ has size 2. One of those elements of $H$ must be the identity, so call $H=\{1, \alpha\}$. You know that $(13)H = X$, so get to work on the algebra and work out who $\alpha$ is.
What does $\frac{z_1-z_3}{z_3-z_2}=\frac{z_2-z_1}{z_1-z_3} $ imply?
Let $$t=\frac{z_1-z_3}{z_3-z_2}=\frac{z_2-z_1}{z_1-z_3}.$$ We have $$t=\frac{z_1-z_3}{z_3-z_2}=\frac{z_2-z_3+z_3-z_1}{z_1-z_3}=\frac{z_2-z_3}{z_1-z_3}-1=-\frac1t-1,$$ or $$t^2+t+1=0.$$ From $$t^2+t+1=\frac{t^3-1}{t-1},$$ we deduce that $t^3=1$ with $t\ne1$, i.e. $t$ is a primitive cube root of unity; in particular $|t|=1$, hence $$|z_1-z_2|=|z_2-z_3|=|z_3-z_1|.$$
Theorem for Equal Sums of Like Powers $x_1^8+x_2^8+x_3^8+\dots$
What I write here is not the final answer to this question, but I think it will be helpful. I notice that Consequence 1 can be simplified and generalized as below. Denote $$R_n=(a^n+b^n+c^n+d^n-e^n-f^n-g^n-h^n)/n$$ for any $n\neq 0$, and $$R_0=2(abcd-efgh)/(abcd+efgh)$$ $$m=(a^2+b^2+c^2+d^2)/(a+b+c+d)^2$$ We have: If $R_0=R_1=R_2=0$, then $$R_3R_5/R_4^2=(m+1)/2$$ $$R_4/R_3=a+b+c+d$$ $$R_3/R_{-1}=-abcd$$ $$\frac{R_{-2}}{R_{-1}}-\frac{R_{-1}}2=\frac1a+\frac1b+\frac1c+\frac1d$$ If $R_1=R_2=R_3=0$, then $$R_4R_6/R_5^2=(m+1)/2$$ $$R_5/R_4=a+b+c+d$$ $$R_4(1+R_0/2)/R_0=-abcd$$ $$\frac{R_{-1}}{R_0}-\frac{R_{-1}}2=\frac1a+\frac1b+\frac1c+\frac1d$$ More similar identities can be found on my site Algebraic Identities.
measurable functions in product measure space
Let $S=\{(x,y) \mid x \in E, f(x) \ge y \} \cap \{ (x,y) \mid y \ge 0 \}$, and note that $S$ is measurable. Then the function $(x,y) \mapsto 1_S(x,y)$ is measurable. Fubini–Tonelli tells us that the function $x \mapsto \int_{\mathbb{R}} 1_S(x,y)\, dy$ is measurable. Since $\int_{\mathbb{R}} 1_S(x,y)\, dy = f(x)$, we see that $f$ is measurable.
How can I find the equation for a reverse exponential curve based on three known points?
A function with exponential decay and a horizontal asymptote of y=2 will have the form $$ y= A e^{-Cx} + 2\tag1 $$ If you subtract $2$ from both sides and take the log, this gives $$ \log(y-2) =\log A - Cx\tag2 $$ Equation (2) says that there is a linear relationship between $\log(y-2)$ and $x$. So one way to fit this curve is to fit a line (either by eye, or by using a software package) to the four pairs $$ (0, \log(68)),\quad (5,\log(38)),\quad (25, \log (9)),\quad (50, \log(0.2))$$ If the line turns out to have slope $m$ and intercept $b$, then you can solve for $A$ and $C$ in (1) using the relationships $m=-C$ and $b=\log A$.
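If you go the software route, a minimal sketch with numpy could look like this (it assumes the raw $y$-values were $70, 40, 11, 2.2$, so that $y-2$ matches the logs quoted above):

import numpy as np

x = np.array([0.0, 5.0, 25.0, 50.0])
y = np.array([70.0, 40.0, 11.0, 2.2])          # so y - 2 = 68, 38, 9, 0.2
m, b = np.polyfit(x, np.log(y - 2.0), 1)       # fit log(y - 2) = b + m*x
C, A = -m, np.exp(b)
print(A, C)                                    # y is approximately A*exp(-C*x) + 2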
Random walk around circle
Hint: you may try setting up the 1-step analysis (as they do in the analysis of random walks). Here, you denote $p_i(j)$ the probability, starting at $i$, to eat the cheese at $j$ the last. Then, do one step of the random walk and recalculate $p_i(j)$'s...
GAP: how to obtain the Young Symmetrizer?
A very naive way of implementing the definition is:

YoungSymmetrizer:=function(lambda)
  local f,n,sym,a,emb,pl,al,ql,bl,g;
  f:=Flat(lambda);
  Sort(f);
  n:=f[Length(f)];
  if f<>[1..n] then Error("not a tableau"); fi;
  sym:=SymmetricGroup(n);
  a:=GroupRing(Rationals,sym);
  emb:=Embedding(sym,a);
  pl:=Stabilizer(sym,lambda,OnTuplesSets);
  al:=Zero(a);
  for g in Iterator(pl) do
    al:=al+ImagesRepresentative(emb,g);
  od;
  ql:=Stabilizer(sym,TransposedMat(lambda),OnTuplesSets);
  bl:=Zero(a);
  for g in Iterator(ql) do
    bl:=bl+SignPerm(g)*ImagesRepresentative(emb,g);
  od;
  return al*bl;
end;

I suspect that the dense support will make this infeasible for n much beyond 8 or 9.
Why does this limit exist?
Nice example for the importance of the statement "the limit of an integral is not (necessarily) the integral of the limit." Anyways, we can find an explicit, although admittedly messy, anti-derivative using partial fractions (difference of two squares in the denominator). Evaluating the improper integral (edit: for $\epsilon > 0$) gives $$\int_\mathbb{R} \frac{1}{x^2 - (1+ i \epsilon)^2 } ~\mathrm{d}x = \frac{\pi}{\epsilon - i} = \frac{ \epsilon \pi + i \pi}{1 + \epsilon^2}.$$ Taking a limit as $\epsilon \to 0$ (edit: $\epsilon\to 0^+$) yields an answer of $i \pi$. Edit!! I originally forgot to account for a few branch cut issues. As pointed out in the comments, there are issues depending on the sign of $\epsilon$. The above argument holds for $\epsilon > 0$ only. However, for $\epsilon < 0$ we have $$\int_\mathbb{R} \frac{1}{x^2 - (1+ i \epsilon)^2 } ~\mathrm{d}x = - \frac{\pi}{\epsilon - i} = - \frac{ \epsilon \pi + i \pi}{1 + \epsilon^2}.$$ Thus taking the limit as $\epsilon \to 0^-$ gives $- i \pi$, and therefore the limit does not exist. We can also see this sign reversal by means of the residue theorem (which gives a compact way to evaluate the improper integral as well). Apologies, but as the old joke goes, "How do you horrify a mathematician? Let $\epsilon < 0$..." Edit 2: Alternate Solution via Residue Theorem Let $$f: \mathbb{C} \to \hat{\mathbb{C}}, \quad f(z) = \frac{1}{z^2 - (1+ i \epsilon)^2} = \frac{1}{(z + 1 + i \epsilon)(z - 1 - i \epsilon)}.$$ $f$ is meromorphic with simple poles at $z_1 = 1 + i \epsilon$ and $z_2 = -1 - i \epsilon$. Define the simple closed curve $\gamma = \gamma_1 + \gamma_2$ with counter-clockwise orientation, where $\gamma_1$ goes from $-R$ to $R$ along the real line and $\gamma_2$ is the semi-circular arc from $+R$ to $iR$ to $-R$. By the Residue Theorem $$\int_{\gamma_1} f(z) ~\mathrm{d}z + \int_{\gamma_2} f(z) ~\mathrm{dz} = \oint_\gamma f(z) ~\mathrm{d}z = 2 \pi i \sum Res(f, a_k)$$ where $\{a_k \}$ is the set of poles in the region inside our Jordan curve. Applying an ML-estimate to the integral along $\gamma_2$ gives $$\left| \int_{\gamma_2} f(z) ~\mathrm{d}z \right| \approx \frac{\pi R}{R^2} \overset{R \to \infty}{\longrightarrow} 0$$ and therefore $$\int_{-\infty}^\infty f(z) ~\mathrm{d}z = 2 \pi i \sum Res(f, a_i).$$ We have two cases: $\epsilon >0$ and $\epsilon <0$. Case 1: If $\epsilon>0$, then the only pole inside our region is $z_1 = 1+ i \epsilon$ as $\Im z_2 < 0$. Since $f$ has a simple pole at $z_1$, \begin{align*} Res(f, z_1) &= \lim_{z \to z_1} (z-z_1) f(z) \\ &= \lim_{z \to z_1} \frac{1}{z - z_2}\\ & = \frac{1}{z_1 - z_2} \\ &= \frac{1}{1 + i \epsilon - (-1 - i \epsilon)} \\ &= \frac{1}{2 (1+i \epsilon)}. \end{align*} Thus we have $$\int_{-\infty}^\infty f(z) ~\mathrm{d} z = 2 \pi i \frac{1}{2 (1+i \epsilon)} \to \pi i$$ as $\epsilon \to 0+$. Case 2: If $\epsilon <0$, then the only pole inside our region is $z_2 = -1- i \epsilon$ as $\Im z_1 < 0$. Since $f$ has a simple pole at $z_2$, we have \begin{align*} Res(f, z_2) &= \lim_{z \to z_2} (z-z_2) f(z) \\ &= \lim_{z \to z_2} \frac{1}{z-z_1} \\ &= \frac{1}{z_2 - z_1} \\ &= \frac{1}{-1 - i \epsilon - (1 + i \epsilon)} \\ &= - \frac{1}{2 (1 + i \epsilon)}. \end{align*} Thus we have $$\int_{-\infty}^\infty f(z) ~\mathrm{d} z = 2 \pi i \frac{-1}{2 (1+i \epsilon)} \to -\pi i$$ as $\epsilon \to 0^-$.
Show sublinearity of the function for a second order Cauchy problem
$$y''=-y^3$$ $$2y''y'=-2y^3y'$$ $$y'^2=-\frac12 y^4+c_1$$ The conditions $y(0)=0$ and $y'(0)=\frac{1}{\sqrt{2}}$ imply $c_1=\frac12$: $$y'^2=\frac12-\frac12 y^4$$ $$y'=\sqrt{\frac12-\frac12 y^4}\qquad \text{with sign according to }y'(0)=\frac{1}{\sqrt{2}}>0$$ This is a Jacobi elliptic integral. $$y=\text{sn}\left( \frac{x}{\sqrt{2}}+c_2\:\Big|\:-1\right)$$ Here sn is the Jacobi sn elliptic function (https://mathworld.wolfram.com/JacobiEllipticFunctions.html). The condition $y(0)=0$ implies $c_2=0$. The solution is: $$y=\text{sn}\left( \frac{x}{\sqrt{2}}\:\Big|\:-1\right)$$
Question regarding the given definition of algebra
See the Wikipedia page on field of sets.
How to prove the following inequality using Lagrangian multipliers?
We have that $f(x,y,z) \ge 0$ for all $(x,y,z)$. Now $f(0,r/{\sqrt 2}, r/{\sqrt 2}) = 0$, hence the minimum of $f$ is $0$. I am not sure what you mean by $x,y,z \neq 0$, but if it is "$ x \neq 0$ and $y \neq 0$ and $z \neq 0$", then it is incorrect. What we know is that at least one of them should be $\neq 0$, but this is not important. From the system you have, multiply the first equation by $x$, the second by $y$ and the third by $z$ and subtract the equations from each other to get $x^2 = y^2 = z^2$ (realizing that $\lambda = 0$ corresponds to $f(x,y,z) = 0$, and then that we may suppose $\lambda \neq 0$). Then, $x^2 = y^2 = z^2 = r^2/3$. Hence, $f$ achieves its maximum $6$ times (at the points $(\pm r/{\sqrt 3}, \pm r/{\sqrt 3},\pm r/{\sqrt 3})$), and this maximum is $f(r/{\sqrt 3},r/{\sqrt 3},r/{\sqrt 3}) = r^6/27$. See Wore's comment for how to get the inequality.
How many subsets of $\{1,2,3,\ldots,100\}$ contain all the even numbers?
"Contain all the even numbers" means exactly that. Example: $\{2,4,6,...,100,1\}$ contains all the even numbers and $1$.
Prove that a function is positive semi-definite
The function $\phi(t)=(1+|t|+kt^2)\exp(-|t|)$ is a characteristic function for $0 \leq k \leq 1/4$ according to the Theorem 1.2 from the following paper: Gneiting T., Kuttner’s problem and a Polya type criterion for characteristic functions, Proc. Am. Math. Soc. 128 (2000),1721–1728.
Graph property intuition
Not sure whether this is what you are looking for. Let $v$ be a vertex of $G$. If $v$ has degree less than $d$, then we are done. If not, then $v$ has at least $d$ neighboring vertices, say, $v_1,v_2,\ldots,v_k$ with $k\geq d$. Now, the total number of edges of the vertex-induced subgraph with vertices $v_1,v_2,\ldots,v_k$ and all their neighbors is at least $$\frac{1}{2}\,\left(k+\sum_{i=1}^k\,\deg(v_i)\right)\,.$$ (Here is a possible visual idea: try to imagine what this subgraph looks like and why the expression above is a lower bound on the number of edges in this subgraph.) By the assumption that there are fewer than $\dfrac{d(d+1)}{2}$ edges in $G$, we get $$\frac{1}{2}\,\left(k+\sum_{i=1}^k\,\deg(v_i)\right)<\frac{d(d+1)}{2}\,.$$ Thus, $$\frac{1}{k}\,\sum_{i=1}^k\,\deg(v_i)<\frac{d(d+1)}{k}-1\leq d\,.$$ By the Pigeonhole Principle, $\deg(v_j)<d$ for some $j\in\{1,2,\ldots,k\}$.
Determining a surface of revolution from the metric function
Suppose the induced metric of an embedded surface $S$ has the form $ds^{2} = \lambda^{2}(du^{2} + dv^{2})$ for some function $\lambda$ of one variable, and the coordinate curves are principal, i.e., lines of curvature. Does this imply $S$ is a surface of revolution? No; $S$ could be a cylinder (over an arbitrary curve) parametrized by $$ X(u, v) = (x(u), y(u), v),\qquad x'(u)^{2} + y'(u)^{2} = 1. $$
Number of ways to get a total of 14 when tossing a die 3 times
Using your definitions of $x_i$, $1\leq x_3\leq 6$, so that the sum of the first two is any of $$x_1+x_2=8,9,10,11,12,13$$ For $8$, we have $5$ combinations of $(x_1,x_2)$; for $9$ we have $4$; for $10$, we have $3$; $11$, $2$; $12$, $1$; $13$, $0$. Adding these up, there are $5+4+3+2+1+0=15$ ways to get the three dice to add to $14$. This method can be generalised to more dice too.
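If you want to double-check the count by brute force, a tiny script does it (illustrative only, not part of the original argument):

from itertools import product

count = sum(1 for roll in product(range(1, 7), repeat=3) if sum(roll) == 14)
print(count)   # 15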
Neighbour of a set on a graph
A reasonable term for the simple case, by analogy with other uses of the adjective, would be the common neighbours (or common neighbourhood if you want to talk about the set rather than its elements). A web search for graph common neighbours or graph common neighbourhood shows that the term has been coined independently by other people in the past. Given the neighbourhood properties of bipartite graphs, I don't see any reason that the term wouldn't extend naturally to your proposed special case.
Find the probability that there are at least three teachers in the selection.
You can do so by dividing them into cases: Case I: Three teachers and one student You can choose this in $\binom{8}{3} \times \binom {7}{1}$ ways. So the probability for case I is given by $\frac{56 \times 7}{1365}=\frac{392}{1365}$ Case II: Four teachers and no student You can choose this in $\binom{8}{4}$ ways. So the probability for case II is $\frac{70}{1365}$ The final probability will be found by adding the two cases.
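Carrying out that addition gives $$\frac{392}{1365}+\frac{70}{1365}=\frac{462}{1365}=\frac{22}{65}\approx 0.34.$$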
Help me come up with a function
For $n\in\mathbb N$ let $$v_2(n):=\max\{\,k\in\mathbb N_0: 2^k|n\,\} $$ be the number of twos in the prime decomposition of $n$. Then $$ f(n) = \left\lfloor\frac n{2^{v_2(n+1)+1}}\right\rfloor.$$
I need to show that the following function is positive.
The following approach is ugly but will get you what you want, with enough work. Hopefully somebody else will come along and find a more clever solution. Let's look at the $d=3$ case. We can ignore the factor of $2$, which leaves us with $$H(x) = 7^x + 4^x - 3^x + 4^x + 7^x - 5^x - 16^x$$ By Taylor's theorem we have that $$H(-1+x) = T^n(x) + R^n(x),$$ where $$T^n(x) = \sum_{i=0}^n \frac{x^i}{i!}H^{(i)}(-1)$$ is the degree $n$ Taylor polynomial and $R^n(x)$ is the remainder term. Let $K(x) = - 3^x - 5^x - 16^x$. Notice that $H^{(n)}(x) \geq K^{(n)}(x)$ for all $n$ and $x\in (-1,0)$, and that $K$ is monotonic decreasing, so $$R^{(n)}(x) \geq \frac{x^{n+1}}{(n+1)!}K^{(n+1)}(0)$$ on $(0,1)$. So $H$ is positive on $[-1,0)$ if, for some $n$, the polynomial $$T^n(x) + \frac{x^{n+1}}{(n+1)!}K^{(n+1)}(0)$$ has no roots in $(0,1)$. Trial and error shows that this is true for $n=5$: Lack of a root can be proven formally using e.g. Sturm's theorem.
Prove that $\lambda = 0$ is an eigenvalue if and only if A is singular.
$A$ is singular $\iff x\mapsto Ax$ is not injective $\iff$ we can find $x\neq 0$ with $Ax=0\iff 0 $ is an eigenvalue of $A$.
Rewriting $\delta(x,y)$ in terms of $\delta(r)$.
If the transformation between coordinates ${\pmb x}$ and ${\pmb y}$ is not singular then $$\delta({\pmb x}-{\pmb x}') = \frac{1}{|J|}\delta({\pmb y}-{\pmb y}'),$$ where $J$ is the Jacobian of the transformation. Proof $\;$ Let $F$ a test function and $\varphi=(\varphi_1,\varphi_2)$ the transformation $\pmb{x}=\varphi(\pmb{y})$, i.e. $(x_1,x_2)=(\varphi_1(y_1,y_2),\varphi_2(y_1,y_2))$, with $\pmb{y}'$ the corrensponding value of $\pmb{x}'$ under $\varphi$ (i.e. $\pmb{x}'=\varphi(\pmb{y}')$). Let $J=\frac{\partial(\varphi_1,\varphi_2)}{\partial(y_1,y_2)}$ the jacobian matrix of the transformation, i.e. $\operatorname{d}\pmb{x}=J\operatorname{d}\pmb{y}$, and $|J|$ the determinant. Then $$ \int F(\pmb{x})\delta(\pmb{x}-\pmb{x}')\operatorname{d}\pmb{x}=F(\pmb{x}') $$ and $$ \int F(\varphi(\pmb{y}))\delta(\varphi(\pmb{y})-\pmb{x}')|J|\operatorname{d}\pmb{y}=F(\pmb{x}') $$ It follows that $\delta(\varphi(\pmb{y})-\pmb{x}')|J|$ assign to any test function $F$ the value of that test function at the point where $\varphi(\pmb{y})=\pmb{x}'$ that is $\pmb{y}=\pmb{y}'$. Hence, we obtain $$ \delta(\varphi(\pmb{y})-\pmb{x}')|J|=\delta(\pmb{x}-\pmb{x}')|J|=\delta(\pmb{y}-\pmb{y}') $$ and thus $$\delta(\pmb{x}-\pmb{x}')=\frac{1}{|J|}\delta(\pmb{y}-\pmb{y}').$$ The Jacobian is $r$ so, assuming $r'\ne 0$, the delta is $$\delta(x-x')\delta(y-y') = \frac{1}{r}\delta(r-r')\delta(\theta-\theta').$$ It satisfies the two properties $$ \frac{1}{r}\delta(r-r')\delta(\theta-\theta') = 0\qquad\text{for }(r,\theta)\neq (r',\theta')$$ and $$\int_0^\infty r dr\int_0^{2\pi}d\theta \ \frac{1}{r}\delta(r-r')\delta(\theta-\theta') = 1$$ If $r'=0$ we must integrate out the ignorable coordinate $\theta$, $J\to \int_0^{2\pi}d\theta \ J = 2\pi r$. Thus $$\delta(x)\delta(y) = \frac{1}{2\pi r}\delta(r).$$ Again $$ \frac{1}{2\pi r}\delta(r)= 0\qquad\text{for }r\neq 0$$ and $$\int_0^\infty r dr\int_0^{2\pi}d\theta \ \frac{1}{2\pi r}\delta(r) = 1.$$ Note that the domains of the polar arguments were just stated to be $0 ≤ r <+∞$ and $0\le\theta<2\pi$ (or equivalently $−\pi ≤\theta <+\pi$). For engineering application, like image processing for example, sometimes the domains for the polar and azimuthal variable is modified to equivalent forms with a bipolar radial variable over the domain $(−\infty, +\infty)$ and an azimuthal domain that is constrained to an interval of $\pi$ radians, such as $[0, +\pi)$ or $[−\pi/2, +\pi/2)$. In this case the representation of the delta is $$ \delta(x)\delta(y) = \frac{1}{\pi |r|}\delta(r) $$ and $$\int_{-\infty}^\infty r dr\int_0^{\pi}d\theta \ \frac{1}{\pi |r|}\delta(r) = 1.$$
Help solving first order differential equation ($\frac {dy}{dx}+\frac {2y}{x}=x^2$)
Hint: Integrating Factor: $$\displaystyle \mu (x) = e^{\int (2/x)~dx} = x^2$$
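In case it helps to see where the hint leads (a routine continuation, filled in here for illustration): multiplying the equation by $\mu(x)=x^2$ gives $$\left(x^2y\right)'=x^4\;\Longrightarrow\;x^2y=\frac{x^5}{5}+C\;\Longrightarrow\;y=\frac{x^3}{5}+\frac{C}{x^2}.$$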
Intuition about the definition of Probability Density Function
It looks like their way of dealing with random variables that have zero probability in certain regions. For example, a uniform distribution might be defined to be $\frac{1}{3}$ on the interval $[4,7]$ and $0$ everywhere else. You can write in that case $\int_{4}^{7} p(x) dx = 1$, or you can say that $I$ is the indicator function for $[4,7]$, and say that $$\int_{-\infty}^{\infty} I\cdot p(x) dx = 1$$ where $p(x) = \frac{1}{3}$. Some deal with that by simply defining $p(x)$ to be $0$ outside of the interval and $\frac{1}{3}$ on the interval, and integrating from $-\infty$ to $\infty$.
Why is the integral of $dx$ equal to $x$?
It's not. It's equal to $x+C$. More seriously, you're not integrating "dx". When we write $\int dx$, we mean to solve the problem $\int 1\,dx$. "dx" has no meaning outside "I'm trying to integrate with respect to $x$" or "I'm taking the $x$ derivative", unless you possibly mean $x\cdot d$.
Lebesgue integral on $[0,1]^{2}$
All right, the function isn't absolutely integrable on the whole square. But that's not the question; we're looking at an improper integral, approaching the full square in a specific way akin to a principal value. For any $a>0$, $$I(a) = \int_a^1\int_a^1 \frac{x^2-y^2}{(x^2+y^2)^2}\,dy\,dx = \int_a^1\int_a^1 \frac{x^2-y^2}{(x^2+y^2)^2}\,dx\,dy$$ $$\int_a^1\int_a^1 \frac{x^2-y^2}{(x^2+y^2)^2}\,dx\,dy = \int_a^1\int_a^1 \frac{y^2-x^2}{(x^2+y^2)^2}\,dy\,dx = -I(a)$$ In the first line, we use Fubini's theorem to switch the order (it's a bounded continuous function). In the second line, we just swap the names of the two variables - that isn't really a change of order. Combining the two, we get $I(a)=-I(a)$, so $I(a)=0$. Take the limit as $a\to 0$, and we get zero.
Maximum sum after XOR
Note that XORing a number twice with X is the same as doing nothing-it gets you back to the start. You can think of each number in your list as having two states, the original and the number after one XOR. In your example, the first number could be either $10$ or $4$. If you do a number of passes, you can change any number of numbers (if $K$ is odd) or any even number of numbers (if $K$ is even) to their alternate versions. For example, you could choose $1,2,3,4$ the first time and $1,2,3,5$ the second to XOR just $4$ and $5$. This gives an algorithm. Compute the XOR of each number in your list with $X$. Subtract the original version from the XOR version. If $K$ is odd, invert the ones with positive differences. If $K$ is even and there are an even number of positive differences, those are the ones you want to XOR, so do them. If $K$ is even and there is an odd number, do the largest of the positive differences and check whether doing the smallest positive and smallest negative together shows a profit. If so, do it as well.
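A rough Python sketch of the selection step just described (illustrative only; it assumes the $K$ passes effectively let you flip any set of numbers whose size has the parity discussed, and the function name and signature are made up):

def best_total(nums, X, K):
    diffs = [(n ^ X) - n for n in nums]        # gain from flipping each number
    base = sum(nums)
    gains = [d for d in diffs if d > 0]
    total_gain = sum(gains)
    if K % 2 == 1 or len(gains) % 2 == 0:
        return base + total_gain               # flip every profitable number
    # even parity required but an odd number of profitable flips:
    # either skip the least profitable flip, or add the least harmful extra flip
    drop_one = total_gain - min(gains)
    non_gains = [d for d in diffs if d <= 0]
    add_one = total_gain + max(non_gains) if non_gains else float("-inf")
    return base + max(drop_one, add_one, 0)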
Show that T is a linear transformation and find a, b, c
Answers: When $\;V_{\Bbb F}\;$ is a vector space over a field $\;\Bbb F\;$, a linear transformation $\;T:V\to \Bbb F\;$ is called a linear functional. $\;P_n\;$ is the vector space of all polynomials of degree less than or equal to $\;n\;$ with coefficients from some field. Its dimension is $\;n+1\;$. $\;p\;$ represents a polynomial in $\;P_3\;$. The next question has an obvious answer if you've understood so far what's going on here. I've no idea why you think such a thing is/would be "obvious", "intuitive" or whatever. This is mathematics and stuff must be proved. Period.
$\lim_{n→∞}d(x_n, a) \neq d(\lim_{n→∞}x_n, a)$ in Metric Space - Implications
In this case, the Cauchy sequence does not converge. The map $x\mapsto d(x,a)$ is uniformly continuous, so $n\mapsto d(x_n, a)$ must converge (it is a Cauchy sequence of real numbers). The space cannot be complete and therefore it is not compact.
A Jacobian involving probabilities
Your strategy is unsound from its inception since, except in some degenerate cases such as $\lambda=0$ or $(I_1,I_2)$ independent, the derivative of $E(I_k)$ is not a function of $(E(I_1),E(I_2))$. Hence the functions $f_k$ simply do not exist. Without information on the dependence structure of $(I_1,I_2)$, one cannot go much further. A remark, though, is that $U=E(I_1)-E(I_2)$ solves $U'=-(\lambda+\mu)U$ hence, for every $t\geqslant0$, $E(I_1)(t)-E(I_2)(t)=\mathrm e^{-(\lambda+\mu)t}(E(I_1)(0)-E(I_2)(0))$, and in particular, $E(I_1)(t)-E(I_2)(t)\to0$. This is only natural since every fixed point of the differential system is on the line $E(I_1)=E(I_2)$. Thus, the stability of the system depends on the behaviour of $E(I_1I_2)$ when $E(I_1)=E(I_2)$ or when $E(I_1)\approx E(I_2)$.
Does $f(dx)$ have any meaning?
Sort of. There is a dual concept to differentials, that of a "tangent vector", which is not unreasonable to think of as a kind of infinitesimal. While $\mathrm{d}x$ is supposed to denote a differential, many unfortunately use the notation when they wish to speak of an infinitesimal. :( Anyways, if $f$ is differentiable at a point $a$ and $\epsilon$ is an infinitesimal, then we have $$ f(a+\epsilon) = f(a) + f'(a) \epsilon $$ Note this is a literal equality and not merely an approximation, as this variety of infinitesimal satisfies $\epsilon^2 = 0$.
Find all $a \in \Bbb {C}$ such that $F$ has at least one multiple root.
A multiple root is a root also of the derivative. Since $$ F'(X)=18X^{17}-72X^8=18X^8(X^9-4) $$ the roots of the derivative are $0$ and the ninth roots of $4$. Now $0$ is a root of $F$ if and only if $A=0$. If $b^9=4$, then $$ b^{18}-8b^9+4A=16-32+4A $$ so the condition is $A=4$. The roots of $F$ when $A=0$ are easy to find: they are $0$ (with multiplicity $9$) and the roots of $X^9-8$, which are simple. For $A=4$, $$ F(X)=X^{18}-8X^9+16=(X^9-4)^2 $$ and every root is double.
Considering isomorphism of groups and their properties
$G$ is cyclic if and only if there exists a surjective group homomorphism $e:\mathbb Z\to G$. Since $\phi$ is bijective, $\phi\circ e:\mathbb Z\to H$ is a surjective group homomorphism if and only if $e:\mathbb Z\to G$ is a surjective group homomorphism. Consequently, $G$ is cyclic if and only if $H$ is cyclic. Let $K\subseteq G$ be a subgroup. Then $\phi(K)\subseteq H$ is a subgroup and $[K:1]=[\phi(K):1]$ because $\phi$ is bijective. Consequently, $G$ has a subgroup of order $n$ if and only if $H$ has a subgroup of order $n$.
Zeros of partial sums of the exponential
You have all the facts you need. Here is an outline of what you need to do: Use Rouché's theorem with $f_n$ and $\frac{z^n}{n!}$ to show that if some $f_N$ has a zero within the unit disk, then every $f_n$ with $n\gt N$ has a zero in the unit disk. Now since there are an infinite number of zeros in the unit disk, the set of points mapping to zero under our $f_n$'s has at least one accumulation point. So we have a sequence of pairs, $f_i, p_i$ with $f_i(p_i)=0$ for each $i$, with a limit point $f, p$ (as the $f_n$'s also converge). For the sake of contradiction, we want to show $f(p)=0$, as we already know that $f$ is the exponential and so has no zeros. To do this use an epsilon-delta argument: let $\epsilon \gt 0$, then find $N_0$ such that $f_n(x)$ is within $\epsilon \over 3$ of $f(x)$ for all $x$ in the unit disk when $n\gt N_0$, then find $\delta$ such that when $|x-p|\lt\delta$, then $|f(x)-f(p)| \lt {\epsilon \over 3}$. Lastly find $N_1$ so that $n\gt N_1$ implies $|p-p_n| \lt \delta$. Use the triangle inequality to show that when $n>\max(N_0,N_1)$ we have $|f(p)-f_n(p_n)|<\epsilon$. This completes the contradiction, as it implies $f(x)=e^x$ has a zero in the unit disk.
Matrix optimization problem
Note that the sum $\sum_{k=1}^p\langle Au_k,u_k\rangle$ only depends on the subspace $S$ spanned by the $u_k$s but not on the orthonormal system $(u_1,\ldots,u_p)$ itself. In fact, if we extend the orthonormal system to an orthonormal basis $(u_1,\ldots,u_n)$ of $\mathbb R^n$ and denote by $P_S$ the orthogonal projector onto $S$, then $$ \sum_{k=1}^p\langle Au_k,u_k\rangle =\sum_{k=1}^n\langle AP_Su_k,P_Su_k\rangle =\operatorname{tr}(U^TP_S^TAP_SU) =\operatorname{tr}(AP_S) $$ and the last expression above depends only on $S$ rather than on any basis of $S$. Therefore, as pointed out by a user in a comment, the uniqueness claim in the theorem is wrong. That said, the theorem does follow from the lemma more or less directly. It clearly follows from the lemma if $p=1$. When $p>1$, let $S$ be a maximiser of $\operatorname{tr}(AP_S)$. If $v_1\not\in S$, $S$ must contain a $(p-1)$-dimensional subspace $S'$ that is orthogonal to $v_1$ (if $v_1\perp S$, simply pick any $(p-1)$-dimensional subspace of $S$; otherwise, let $w\ne0$ be the orthogonal projection of $v_1$ onto $S$ and take $S'$ as the orthogonal complement of $\operatorname{span}(w)$ in $S$). But then by the lemma, $\operatorname{span}(v_1)+S'$ would be a better solution than $S$, which is a contradiction. Thus $v_1$ must lie inside $S$ and $\operatorname{tr}(AP_S)=v_1^TAv_1+\operatorname{tr}(AP_{S'})$. The dimension of the problem is now reduced by one, and by the lemma, $u^TAu\le v_2^TAv_2$ for every $u\perp v_1$ and in particular for every $u\in S'$. By a similar argument to the above, we conclude that $v_2$ must lie inside the optimal $S'$. Proceed recursively, we see that the optimal $S$ is given by the span of $v_1,\ldots,v_p$.
Show that $\frac{|\sin x|}{x}$ is uniformly continuous on (0,1) and (-1,0) but not the union of both intervals
Hint:
Expressing a matrix as an expansion of its eigenvalues
The proof using $AU = U\Lambda$ is not tedious. Since the $U$ is orthogonal, you have $U^{-1} = U^T$, so $A = U \Lambda U^T$. Then $$Ax = U \Lambda U^T x = U \Lambda \begin{bmatrix} u_1^T x \\ \vdots \\ u_n^T x \end{bmatrix} = U \begin{bmatrix} \lambda_1 u_1^T x \\ \vdots \\ \lambda_n u_n^T x \end{bmatrix} = \sum_k (\lambda_k u_k^T x) u_k = \sum_k \lambda_k u_k u_k^T x = (\sum_k \lambda_k u_k u_k^T)x$$ Hence $A=\sum_k \lambda_k u_k u_k^T$. Since $AU = U\Lambda$, inverting both sides gives $U^T A^{-1} = \Lambda^{-1} U^T$, and hence $A^{-1} = U\Lambda^{-1} U^T$. Applying the above result to $A^{-1}$, noting that $\Lambda^{-1}$ is just the diagonal matrix of the inverses of the diagonal elements of $\Lambda$, we have $A^{-1} = \sum_k \frac{1}{\lambda_k} u_k u_k^T$. To address your other question, the same result holds for Hermitian matrices ($A^* = A$), with the proviso that the $U$ will be unitary rather than orthogonal (ie, may be complex). A normal matrix ($A A^* = A^* A$) can also be expressed as above, except the eigenvalues may be complex (and eigenvectors, of course) The matrix $\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} $ is real, but not symmetric, but does not have a basis of eigenvectors (hence it cannot be expressed as above). The matrix $\begin{bmatrix} 0 & i \\ i & 0 \end{bmatrix} $ is symmetric but not real (it is normal). It can be unitarily diagonalized, but the eigenvalues and eigenvectors are complex.
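A quick numerical sanity check of both expansions (illustrative, assuming numpy; the example matrix is arbitrary):

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B + B.T                                    # a real symmetric matrix
lam, U = np.linalg.eigh(A)                     # columns of U: orthonormal eigenvectors

A_rebuilt = sum(lam[k] * np.outer(U[:, k], U[:, k]) for k in range(4))
A_inv_rebuilt = sum((1 / lam[k]) * np.outer(U[:, k], U[:, k]) for k in range(4))

print(np.allclose(A, A_rebuilt))                       # True
print(np.allclose(np.linalg.inv(A), A_inv_rebuilt))    # True (this A happens to be invertible)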
Real infinite sequence
1. Let's begin by studying if $(a_n)$ is increasing or decreasing. Note that $\forall n\in\Bbb N^*$, if $a_n\ne 0$: $$a_{n+1} - a_n = \dfrac{1}{a_n} - \dfrac{a_n}2 = \dfrac{2-a_n^2}{2a_n}\;\;\;(\star)$$ Thus, we have to see if this is positive or negative. To do this, we need to see if $a_n>0$ or not, and if $2-a_n^2\ge 0$ or not (i.e. what's the position of $a_n$ with respect to $0$ and to $\pm\sqrt{2}$). Since $a_1=2>\sqrt{2}$ and $a_2 = \dfrac32>\sqrt{2}$, we may hope that $\forall n\in\Bbb N^*, a_n\ge \sqrt{2}$. Let's try to prove it by induction: $a_1 = 2>\sqrt{2}$ if we assume that for a given $n\ge 1$, $a_n\ge \sqrt{2}$, then $$a_{n+1} - \sqrt{2} = \dfrac{1}{a_n} + \dfrac{a_n}2 - \sqrt{2} = \dfrac{2+a_n^2 - 2\sqrt{2}a_n}{2a_n} = \dfrac{(a_n - \sqrt{2})^2}{2a_n}\ge 0$$ since $a_n\ge \sqrt{2}\ge 0$. thus $\forall n\in\Bbb N^*,\,a_n\ge \sqrt{2}\;\;(\triangle)$. Thus, using $(\star)$, we can see that $\forall n\in\Bbb N^*,\,2-a_n^2\le 0$ (since $\forall n\in\Bbb N^*,\,a_n\ge \sqrt{2}$) and $a_n\ge 0$. This shows $a_{n+1}-a_n\le 0$ and $(a_n)$ is non-increasing. 2. Now, we also know that $(a_n)$ is bounded from below by $\sqrt{2}$ and non-increasing. Thus, it's a convergent sequence.
How to sample from a Gamma distribution with shape not integer
See Devroye "Non-Uniform Random Variate Generation" Section IX.3.2 (it is freely available online). Several algorithms are explained in detail there with pseudo-code, so it will be quick to code. -- Monseigneur Myriel
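If you just want something concrete to run, here is one widely used rejection sampler (Marsaglia and Tsang, 2000) for shape $a$ and scale $1$; this is a sketch, not the specific algorithm from Devroye's section:

import math, random

def gamma_sample(a):
    if a < 1.0:
        # boost the shape, then correct with a power of a uniform
        return gamma_sample(a + 1.0) * random.random() ** (1.0 / a)
    d = a - 1.0 / 3.0
    c = 1.0 / math.sqrt(9.0 * d)
    while True:
        x = random.gauss(0.0, 1.0)
        v = (1.0 + c * x) ** 3
        if v <= 0.0:
            continue
        u = random.random()
        if u < 1.0 - 0.0331 * x ** 4 or math.log(u) < 0.5 * x * x + d * (1.0 - v + math.log(v)):
            return d * v

Multiply the result by the scale parameter if your Gamma has one.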
A well-known lemma about curvilinear circles!
Hint: Homothety in $T$ which takes smaller circle to bigger will do. It takes $K$ to $M$ and line $AB$ to parallel line through $M$ since $K$ is on $AB$. Thus this new line is also tangent to bigger circle, since $AB$ is tangent to smaller. Now it should not be difficult to see that $AMB$ is isosceles (remember tangent-chord property).
How to solve $\log(x -1) + \log(x - 2) = 2?$
After Step 3 you can subtract 100 on both sides such that the right hand side becomes 0 and then find the solution using the quadratic equation.
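For completeness (assuming the logarithms are base $10$, so that Step 3 gave $(x-1)(x-2)=100$): subtracting $100$ leaves $$x^2-3x-98=0\;\Longrightarrow\;x=\frac{3\pm\sqrt{401}}{2},$$ and only $x=\frac{3+\sqrt{401}}{2}\approx 11.5$ satisfies the domain requirement $x>2$.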
Another definition of normal operator
If you assume that $\|Tv\| = \|T^\ast v\|$ holds for all $v$, the equalities $$\|Tv\|^2 = \langle Tv, Tv \rangle = \langle T^\ast Tv, v \rangle = \langle TT^\ast v, v \rangle = \langle T^\ast v, T^\ast v \rangle = \| T^\ast v \|^2$$ are true. But from $\langle T^\ast Tv, v \rangle = \langle TT^\ast v, v \rangle $ for all $v$ it follows that $T^\ast T = TT^\ast$.
finding out the chord length
Draw the perpendicular bisector AP of DC. For some reason, the two marked angles are equal to θ. Thus, CD = 2*DQ = 2*r sin θ ∴ sin θ = … = 3 / 8 From which, θ is known. ∴ Arc DC = 0.5*(arc DB) = 0.5 r*(4θ) Finding θ without using a calculator is almost impossible. A rough approximation: θ is roughly 22 degrees, so 4θ is roughly 90 degrees. Then, arc CD is roughly one-eighth of the circumference [or (2π(2) / 4) / 2].
distribution of (inverse) distribution function
Let $Y = F(X)$. Then $$\begin{align} F_{Y}(y) = \mathbb{P}\left(Y \leq y\right) &= \mathbb{P}\left(F(X) \leq y\right) \\& = \mathbb{P}\left(X \leq F^{-1}(y)\right)\text{ since }F\text{ is strictly increasing, }F^{-1}\text{ exists} \\ &= F_{X}\left(F_{X}^{-1}(y)\right) \\ &= y\text{,}\qquad y \in [0, 1]\text{.} \end{align}$$ The solution for finding the distribution of $F^{-1}(U)$ is similar.
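This identity is also the basis of inverse-transform sampling. A small illustrative check (standard library only, with an Exponential($\lambda$) example):

import math, random

lam = 2.0
def inv_cdf(u):                 # F(x) = 1 - exp(-lam*x), so F^{-1}(u) = -ln(1-u)/lam
    return -math.log(1.0 - u) / lam

samples = [inv_cdf(random.random()) for _ in range(100_000)]
print(sum(samples) / len(samples))   # close to the true mean 1/lam = 0.5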
Finding Eigenvalues and Eigenfunctions for a Sturm-Liouville Problem
The solutions of $$ Y''+LY=0 \\ Y(0)+Y'(0)=0 $$ are simplified by adding a normalization such as $Y(0)=1$. The solutions are $$ Y_L(x)=\cos(\sqrt{L}x)-\frac{\sin(\sqrt{L}x)}{\sqrt{L}}. $$ These solutions satisfy $Y(0)+Y'(0)=0$, including the limiting case where $L=0$, which is $Y_0(x)=1-x$. $L$ is a valid eigenvalue iff $Y_L$ satisfies the required endpoint condition at $x=\pi$, which holds iff $L$ satisfies $$ Y(\pi)+Y'(\pi)=0, $$ or equivalently, $$ \cos(\sqrt{L}\pi)-\frac{\sin(\sqrt{L}\pi)}{\sqrt{L}}-\sqrt{L}\sin(\sqrt{L}\pi)-\cos(\sqrt{L}\pi)=0 \\ \left(\frac{1}{\sqrt{L}}+\sqrt{L}\right)\sin(\sqrt{L}\pi)=0. $$ There are solutions $\sqrt{L}=n$, i.e. $L=n^2$, for $n=1,2,3,\cdots$, as well as $L=-1$. The solution for $L=-1$ is $$ Y_{-1}(x)=\cosh(x)-\sinh(x) = e^{-x}. $$
Bijection Contruction
Hint: Does the "obvious" choice $\alpha(f)=(f(0),f(1))$ work?
What is a good example to show high school students why a proof for induction is a reasonable kind of proof?
Once a student spots a pattern, induction is a natural approach to formalize the proof. I would suggest looking up various interesting patterns (or ask your kids for some) and then get them to demonstrate that they are true via induction. I would suggest the sum of the first $n$ odd numbers. If you play around with it, it is very easy to see that the sum is always a square number. An induction proof is almost immediate. You can bring in the Gauss reference as summing of an arithmetic progression. You can also give a pictorial proof of sequentially adding layers of a square. I did this with middle school teachers and they loved it. They came up with several different ways of showing it, after seeing the pattern. They also tried finding a sum to give cubes, which is somewhat trickier. Visualizing peeling apart a cube is helpful here, if they are not mathematically inclined.
Solve $x''(t) x(t)=x^4(t)$ with $x'(0)=\frac{1}{\sqrt 2},x(0)=0$.
The solution can be written as $$ x(t) = \left( 1/2+i/2 \right) \sqrt {2}\; {sc} \left( \left( 1/2-i/2 \right) t \mid \sqrt {2} \right) $$ where $sc$ is a Jacobi elliptic function.
What is the particular solution for $ y''+2y'=2x+5-e^{-2x}$?
Hint: The particular solution is of the form: $$y_p = a x + b x^2 + c x e^{-2x}$$ We have to take $a + b x$ and multiply by $x$ and multiply $e^{-2x}$ by $x$ because we already have a constant in homogeneous and also have $e^{-2x}$ in homogeneous.
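Substituting that ansatz back into the equation (a routine computation, added here for illustration) pins down the coefficients: $$y_p''+2y_p'=(2a+2b)+4bx-2ce^{-2x}=2x+5-e^{-2x}\;\Longrightarrow\;b=\tfrac12,\;a=2,\;c=\tfrac12,$$ so $y_p=2x+\tfrac12x^2+\tfrac12xe^{-2x}$.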
Is it okay to write $f(x)^2$?
By convention, $$\cos^2x=(\cos x)^2.$$ If instead $$f^2(x)=f(f(x)),$$ then it would have to be that $$\cos^2x=\cos(\cos x),$$ but in general $$(\cos x)^2\neq\cos(\cos x).$$ Strictly speaking, such notation is not consistent, but we use it anyway.
Probability that the smallest side of the line forming triangle will be smaller than $\frac L3$
In any division of the stick into three pieces, if no piece is shorter than $L/3$ then all pieces must be exactly $L/3$ in length, else their lengths would sum to more than $L$, and this event has measure zero (i.e. it almost never happens). Thus there is probability 1 that the shortest piece is shorter than $L/3$, irrespective of whether the pieces form a triangle or not.
Convergence of Sum Involving Double Factorial
Applying the ratio test to the terms $a_i$, we have \begin{align*} \frac{a_{i + 1}}{a_i} &= \frac{a^{i + 1} (2i + 1)!!}{a^i (2i - 1)!!} = a (2i + 1) \end{align*} This blows up in absolute value unless $a = 0$. (As a consequence of this estimate, the terms of the series don't even tend to zero) The moral of the story is that things that are at all like a factorial are really big.
conditional probability dice problem
Let $A$ be the event that a 1 shows in eight rolls.   Let $B$ be the event that a 2 shows at least once in eight rolls. You have found $\mathsf P[A^\complement\cap B^\complement] = 0.6^8$   The probability that neither number occurs in eight rolls. You want to find $\mathsf P[A\cap B]$.   The probability that both numbers show (at least once) in eight rolls.   These events are not complements. You should use: $\mathsf P[A\cap B] = \mathsf P[A]+\mathsf P[B]-\mathsf P[A\cup B] \\\qquad\qquad = (1-\mathsf P[A^\complement])+(1-\mathsf P[B^\complement])-(1-\mathsf P[A^\complement\cap B^\complement])$
let's play a (continuous) probability game!
Given $x$, let $f(x)$ be the expected length of the game. We'll show that $f(x) = 1 - \ln x$. The probability that we finish the game on the first step is $x$; the game continues with probability $1 - x$. We therefore have $$f(x) = 1 + (1 - x)\cdot(\textrm{#future moves}).$$ Now, suppose we have to continue, i.e., $c_i \in (x, 1)$. Rather than choosing the next number $c_{i+1}$ from $(0, c_i)$, we can instead scale $x$ appropriately so that $c_{i+1}$ is chosen from $(0, 1)$, effectively restarting the game with $x/c_i$ instead of $x$. To account for the infinitely many ways to choose $c_i \in (x, 1)$, we can take the following integral: $$\textrm{#future moves} = \frac1{1-x}\int_x^1f\left(\frac xu\right)\ du$$ In other words, $$f(x) = 1 + \int_x^1f\left(\frac xu\right)\ du.\tag{1}$$ The rest (I'm skipping a few steps here...) is a matter of manipulating this to eventually bring it into the form of a differential equation, $$xf''(x) = -f'(x),$$ whose general solution is $f(x) = a + b\ln x$. From there, using $(1)$ we find $a=1, b=-1$, solving the problem. Given the simplicity of the final expression there may be a much more direct solution, but at the moment I don't see one.
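A quick Monte Carlo check of $f(x)=1-\ln x$ (illustrative script; it simulates the game exactly as described above):

import math, random

def average_length(x, trials=200_000):
    total = 0
    for _ in range(trials):
        upper = 1.0
        while True:
            total += 1
            c = random.uniform(0.0, upper)
            if c <= x:
                break
            upper = c
    return total / trials

print(average_length(0.2), 1 - math.log(0.2))   # both close to 2.609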
Find a closed form for $\Sigma_{k = 1}^\infty 3x^{3k -1}$
Step 1: factor out $(3/x)$. Step 2: Substitute $y=x^3$. Result: $$(3/x)\sum_{k\ge 1} y^k=(3/x)\frac{y}{1-y}=(3/x)\frac{x^3}{1-x^3}$$ The error in the OP's attempt was in the last step: the sum of a geometric series is the first term divided by one minus the ratio, and this series begins with $k=1$, so the first term is $y=x^3$.
Solving the Inequality $\frac{14x}{x+1}<\frac{9x-30}{x-4}$
Basic approach. Perhaps easier would be to rewrite the original inequality as $$ 14 - \frac{14}{x+1} < 9 + \frac{6}{x-4} $$ This leads to $$ \frac{14}{x+1} + \frac{6}{x-4} > 5 $$ which becomes $$ \frac{4x-10}{(x+1)(x-4)} > 1 $$ You can now (a) consider the cases $x = 0, 1, 2, 3$ separately, and then otherwise, (b) for $x < -1$ or $x > 4$, we have $$ (x+1)(x-4) < 4x-10 $$ which becomes $$ x^2-7x+6 < 0 $$ which you should be able to handle. Keep in mind that this inequality is only valid for the subcase (b) $x < -1$ or $x > 4$. (There's probably a simpler way, incidentally. See alans' comment, for instance. This is just what I wrote up.)
In the proof of Bolzano Weierstrass thm. about converging subsequences.
It's from the fact that $$2^{k+1} - 2^k = 2^k(2-1) = 2^k$$ Therefore each power of $2$ can be expanded as a difference, i.e. $$2^{-l+1} = 2^{-l+2} - 2^{-l+1}$$ and so on.
I have to divide the following $\frac{2x^3-9x^2+10}{2x-1}$
$$\begin{align*} \frac { 2x^{ 3 }-9x^{ 2 }+10 }{ 2x-1 } &=\frac { 2x^{ 3 }-8x^2-4x-{ x }^{ 2 }+4x+2+8 }{ 2x-1 } \\ &=\frac { 2x\left( { x }^{ 2 }-4x-2 \right) -\left( { x }^{ 2 }-4x-2 \right) +8 }{ 2x-1 } \\ &=\frac { \left( 2x-1 \right) \left( { x }^{ 2 }-4x-2 \right) +8 }{ 2x-1 } \\ &=\left( { x }^{ 2 }-4x-2 \right) +\frac { 8 }{ 2x-1 } \end{align*} $$
The sum of the biggest member and smallest member equal to $11$
Choose $1$ and $10$; then you can include any number from the rest of the numbers. So how many distinct subsets does the set $\{2,3,...,8,9\}$ have? Now similarly choose $2$ and $9$ and find all distinct subsets of $\{3,4,...,7,8\}$. And so on and so forth. At the end it should be an easy calculation.
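For the record (assuming the ground set is $\{1,2,\dots,10\}$, as the cases above suggest), the calculation comes out to $$2^8+2^6+2^4+2^2+2^0=256+64+16+4+1=341.$$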
Probability a given probability distribution was used
Let's consider two distributions over the set $\{H, T\}$ (for "heads" and "tails"): distribution 1: $H$ with probability 1/2, $T$ with probability 1/2. distribution 2: $H$ with probability 2/3, $T$ with probability 1/3. You have 100 observations; they are 60 Hs and 40 Ts. Which is more likely: distribution 1 or distribution 2? For concreteness, we'll say that you got the 60 Hs first, followed by the 40 Ts; the arithmetic works out the same for any order. Then you want to compute a conditional probability $P(H^{60} T^{40} | D_1) = (1/2)^{100} \approx 7.88 \times 10^{-31}$ - that is, if you're working from distribution 1, the probability of getting 60 heads followed by 40 tails is $(1/2)^{100}$. (This probability is the same no matter which 100 coin flip outcomes you have.) Similarly, $P(H^{60} T^{40} | D_2) = (2/3)^{60} (1/3)^{40} \approx 2.23 \times 10^{-30}$. Since this probability is larger, 60 heads and 40 tails is evidence in favor of selecting from disribution 2. If you're worried about the computations underflowing, you can take logs of the probabilities, so here you'd ger $$\log P(H^{60} T^{40} | D_1) = 100 \log (1/2) \approx -69.31 $$ and $$\log P(H^{60} T^{40} | D_2) = 60 \log (2/3) + 40 \log (1/3) \approx -68.27$$ and the second log-probability is larger. Since you're assuming that $D_1$ and $D_2$ are equally likely before anything is observed, you actually have a prior and can compute by Bayes' theorem $$ P(D_1 | H^{60} T^{40}) = {P(H^{60} T^{40} \hbox{ and } D_1) \over P(H^{60} T^{40})} = {P(H^{60} T^{40} | D_1) P(D_1) \over P(H^{60} T^{40} | D_1) P(D_1) + P(H^{60} T^{40} | D_2) P(D_2)} $$ and putting in the numerical values from above and $P(D_1) = P(D_2) = 1/2$ gives you $$ {(1/2)^{100} \times (1/2) \over (1/2)^{100} \times (1/2) + (2/3)^{60} (1/3)^{40} \times (1/2)} \approx 0.2607$$ This will generalize to a larger discrete set, more distributions, etc.
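The same computation in log space, as a small illustrative script (it just reproduces the numbers above):

import math

n_heads, n_tails = 60, 40
log_lik_1 = (n_heads + n_tails) * math.log(0.5)
log_lik_2 = n_heads * math.log(2 / 3) + n_tails * math.log(1 / 3)

# equal priors cancel; log-sum-exp keeps the normalisation from underflowing
m = max(log_lik_1, log_lik_2)
log_evidence = m + math.log(math.exp(log_lik_1 - m) + math.exp(log_lik_2 - m))
print(math.exp(log_lik_1 - log_evidence))   # P(D1 | data), about 0.26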
Find K for negative roots
The function $f(x)=x^3+x^2+2x+K$ is increasing, for the derivative $$f'(x)=3x^2+2x+2$$ is positive. Then the polynomial has only one real root. It is clear that if $K=0$, this root is $0$. Since $f(0)=K$ and $f$ is increasing, the root is negative if $K>0$ and positive if $K<0$.
Let V be a vector space, U a subspace of V. Prove that it is not possible that every vector in V\U is a scalar multiple of v from V
It's correct that $v$ cannot belong to $U$. But there's a different way. Suppose such a $v$ exists. Then $$ V=U\cup\langle v\rangle $$ (where $\langle v\rangle$ denotes the subspace spanned by $v$). It is well known that if $U_1$ and $U_2$ are subspaces of a vector space, then $U_1\cup U_2$ is a subspace if and only if $U_1\subseteq U_2$ or $U_2\subseteq U_1$.
How to solve this sum with Riemann-sum?
We can separate each $n$ terms, so the sum turns into a sum of $p-1$ sums, \begin{align} \lim_{n\to\infty}\left[\frac{1}{n+1}+\frac{1}{n+2}+\cdots+\frac{1}{np} \right] &= \lim_{n\to\infty}\sum_{k=1}^{p-1}\left[\frac{1}{kn+1}+\frac{1}{kn+2}+\cdots+\frac{1}{kn+n}\right]\\ &= \sum_{k=1}^{p-1}\lim_{n\to\infty}\left [\frac{1}{kn+1}+\frac{1}{kn+2}+\cdots+\frac{1}{kn+n}\right ]\\ &= \sum_{k=1}^{p-1}\int_0^1\dfrac{1}{k+x}dx\\ &= \sum_{k=1}^{p-1}\left(\ln(k+1)-\ln(k)\right)\\ &= \color{blue}{\ln p} \end{align}
If $(1+\sqrt{2})^n=a_{n}+b_{n}\sqrt{2}\;,(\forall n\in \mathbb{N}).$Then $\lim_{n\rightarrow \infty}\frac{a_{n}}{b_{n}} = $
By the binomial theorem, $$(1+\sqrt2)^n=a_n+b_n\sqrt2\implies(1-\sqrt2)^n=a_n-b_n\sqrt2.$$ Then $$\frac{a_n}{b_n}=\sqrt2\frac{(1+\sqrt2)^n+(1-\sqrt2)^n}{(1+\sqrt2)^n-(1-\sqrt2)^n}=\sqrt2\frac{1+\left(\dfrac{1-\sqrt2}{1+\sqrt2}\right)^n}{1-\left(\dfrac{1-\sqrt2}{1+\sqrt2}\right)^n}.$$ This gives linearly converging rational approximations of $\sqrt2$. For instance, with $n=10$, $$\sqrt2\approx\frac{6726}{4756}$$ with a relative error on the order of $$2\left(\frac{1-\sqrt2}{1+\sqrt2}\right)^{10}\approx4.4\cdot10^{-8}.$$
How do I determine what $f(y, t)$ is when using Runge-Kutta to solve a PDE?
One approach for performing such time-integration is the method of lines (MOL). Let us introduce the nodal values $u_j(t) = u(x_j,t) = u(x_0 + j\Delta x,t)$. Using centered differencing to approximate the spatial derivative, the PDE gives \begin{aligned} \frac{\text d}{\text{d} t} u_j(t) &= -c\, \partial_x u(x_j,t) \\ & \simeq -c\, \frac{u_{j+1}(t) - u_{j-1}(t)}{2\, \Delta x} \, . \end{aligned} The system of ordinary differential equations \begin{aligned} \frac{\text d}{\text{d} t} {\bf u}(t) &= -\frac{c}{2\, \Delta x} \left( \begin{array}{ccccc} 0 & 1 & &\\ -1 & \ddots & \ddots &\\ & \ddots & \ddots & 1\\ & & -1 & 0 \end{array} \right) \, {\bf u}(t) - \frac{c}{2\, \Delta x} \left( \begin{array}{c} -u_0(t)\\ 0\\ \vdots\\ 0\\ u_N(t) \end{array} \right) ,\\ &= {\bf f}({\bf u}(t)) \, , \end{aligned} made by the vector ${\bf u} = (u_1,\dots ,u_{N-1})^\top$ of nodal values can then be integrated in time explicitly using Runge-Kutta methods. In particular, if the forward Euler (RK1) method ${\bf u}^{n+1} = {\bf u}^{n} + \Delta t\, {\bf f}({\bf u}^n)$ is used for time-integration, then an unstable scheme $$ u_j^{n+1} = u_j^n -c\, \frac{\Delta t}{2\,\Delta x} (u_{j+1}^n - u_{j-1}^n)\, , \qquad 1 \leq j \leq N-1 $$ is obtained. A slight modification of this method gives the stable Lax-Friedrichs scheme. If the proposed improved Euler (RK2) method is used for time integration, one has \begin{aligned} \tilde u_j^{n+1} &= u_j^n + \Delta t\, k_1 \\ &= u_j^n -c\, \frac{\Delta t}{2\, \Delta x} (u_{j+1}^n - u_{j-1}^n) \, ,\\ u_j^{n+1} &= u_j^n + \frac{\Delta t}{2} (k_1+k_2)\\ &= u_j^n -c\, \frac{\Delta t}{4\, \Delta x} (u_{j+1}^n - u_{j-1}^n + \tilde u_{j+1}^{n+1} - \tilde u_{j-1}^{n+1}) \, . \end{aligned} Finally, for $2 \leq j \leq N-2$, $$ u_j^{n+1} = u_j^n -c\, \frac{\Delta t}{2\, \Delta x} (u_{j+1}^n - u_{j-1}^n) + c^2\, \frac{\Delta t^2}{8\, \Delta x^2}(u_{j+2}^n - 2u_{j}^n + u_{j-2}^n) \, , $$ which looks quite similar to the Lax-Wendroff scheme, but may have different stability properties.
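As a minimal runnable sketch of this MOL + RK2 combination (illustrative only: it uses periodic boundaries via np.roll rather than the boundary vector above, and the initial profile is made up):

import numpy as np

c, N, L = 1.0, 200, 1.0
dx = L / N
x = np.arange(N) * dx
u = np.exp(-100.0 * (x - 0.5) ** 2)            # assumed initial profile

def f(u):
    # centered-difference approximation of -c * du/dx on a periodic grid
    return -c * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

dt = 0.4 * dx / c
for _ in range(200):
    k1 = f(u)
    k2 = f(u + dt * k1)
    u = u + 0.5 * dt * (k1 + k2)               # improved Euler (RK2) step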
Spivak's Calculus: intuition for series proof
As mentioned in the comments, since the series $\sum_n a_n$ converges, the individual summands must go to $0$, $a_n \to 0$. If I recall correctly, Spivak refers to this as the "vanishing criterion" or something like that. Now, the $p_n$'s and $q_n$'s are subsequences of the $a_n$. Recall that if a sequence $a_n \to 0$, then every subsequence also converges to $0$; in particular both $p_n \to 0$ and $q_n \to 0$ (and $p_{N_k} \to 0$ and $-q_{M_k} \to 0$ as $k \to \infty$, since a subsequence of a subsequence is still a subsequence, and hence will also converge to $0$). Now, the bolded sentence can be viewed as an application of the squeeze theorem for sequences. Or just show it directly: Given $\epsilon > 0$, choose $\nu\in \Bbb{N}$ large enough so that $k > \nu$ implies $0 \leq |p_{M_k}| < \epsilon$. Such a $\nu$ exists since $\lim_{k \to \infty} p_{M_k} = 0$. Therefore, for $k > \nu$ we have $|S_k - \alpha| \leq p_{M_k} < \epsilon$. Since $\epsilon > 0$ is arbitrary, this completes the proof that $\lim_{k \to \infty} S_k = \alpha$. A similar proof holds for the $q$'s. Finally, the motivation for the proof is pretty straightforward. You know that both the positive and negative series do not converge. So, $\sum_n p_n = \infty$ and $\sum_n q_n = - \infty$. So, the idea is that given any $\alpha$, to construct the rearrangement, you take a bunch of positive terms $p$ so that the sum just barely exceeds $\alpha$, then you take a bunch of negative terms so that the total sum is just below $\alpha$, then take a bunch of positive terms, then a bunch of negative ones. Each time, the sequence of partial sums will be hopping to the right and left of $\alpha$, and the "amount by which it hops" will eventually decrease, so that the limit of the partial sums of the rearrangement actually converges to $\alpha$. Imagine letting a pendulum swing. First, it will overshoot and swing to the right, then it will swing to the left, then it will swing to the right, and then left again, but eventually, the amplitudes of oscillation will get smaller and smaller, until it eventually stabilizes. That's pretty much the illustration to keep in mind for this proof. (In this analogy, the motion of the pendulum is like the partial sums of the rearrangement.)
The evaluation of proofs
Try to prove theorems and results you already know by several different methods. I am not sure about your level of mathematics, so the following examples may not be appropriate. If not, pick problems that are. For example, prove the Pythagorean theorem or find the roots of a second order polynomial. If you know the Peano axioms, prove addition and multiplication are commutative. When you think you have a proof, go over it carefully and see if the justification for each step is correct. You might want to wait a day or two for this step.
What does this series converge to? $\sum_{k=0}^n \left(\frac{1}{4k+1}+\frac{1}{4k+3}-\frac{1}{2k+1}\right)$
$$\mathop {\lim }\limits_{n \to + \infty } \sum_{k=0}^n \left(\frac{1}{4k+1}+\frac{1}{4k+3}-\frac{1}{2k+1}\right)=$$ $$=\mathop {\lim }\limits_{n \to + \infty } \sum_{k=0}^n\left(\frac{1}{4k+1}-\frac{1}{4k+2}\right)-\mathop {\lim }\limits_{n \to + \infty } \sum_{k=0}^n\left(\frac{1}{4k+2}-\frac{1}{4k+3}\right)=$$ $$=\frac{\pi}{8}+\frac{\ln2}{4}-\left(\frac{\pi}{8}-\frac{\ln2}{4}\right)=\frac{\ln2}{2}.$$
Differentiation involving determinant
The first term is a sum instead of a product because you can use $ \log(ab) = \log(a) + \log(b) $ to write $ \log \det \bar{C} = \sum_i \; \log \left(\theta x_i^1 + (1-\theta) x_i^2 \right) $. On the other hand, it is not at all clear to me why $ U $ should simultaneously diagonalize both $ C^1 $ and $ C^2 $. I think you have also made a typo, and that $ \bar{m} = \theta m^1 + (1- \theta) m^2 $. For the second term, it is straightforward to work out the product rule as follows. Writing for short $ A = (O_t-\bar{m})$ and $ \frac{d}{d\theta} M = \dot{M} $, we have $$ \frac{d^2}{d\theta^2}(A^T C A) = \frac{d}{d\theta} \; \left(\dot{A}^T C A + A^T \dot{ C} A + A^T C \dot{A} \right) $$ Now you can use the fact that the second $\theta$-derivative of any of the matrices is zero (they are affine in $\theta$), so that $$ \frac{d}{d\theta} \; \left(\dot{A}^T C A + A^T \dot{ C} A + A^T C \dot{A} \right) = (\dot{A}^T \dot{C} A + \dot{A}^T C \dot{A}) + (\dot{A}^T \dot{C} A + A^T \dot{C} \dot {A}) + (\dot{A}^T C \dot{A} + A^T \dot{C} \dot{A}) \\ = 4 ( \dot{A}^T \dot{C} A ) + 2 \dot{A}^T C \dot{A} $$ Here the matrices are symmetric, so that $ \dot{A}^T \dot{C} A = A^T \dot{C} \dot{A} $. This gives the answer that is given.
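As a quick numerical sanity check of the formula above, with made-up symmetric matrices $C^1$, $C^2$ and vectors $m^1$, $m^2$, $O_t$ (the dimensions and values are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
C1 = rng.normal(size=(d, d)); C1 = C1 + C1.T      # symmetric "C^1"
C2 = rng.normal(size=(d, d)); C2 = C2 + C2.T      # symmetric "C^2"
m1, m2, O = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)

def g(t):
    C = t * C1 + (1 - t) * C2                     # C(theta), affine in theta
    A = O - (t * m1 + (1 - t) * m2)               # A(theta) = O - m_bar(theta)
    return A @ C @ A

t = 0.3
Cdot, Adot = C1 - C2, -(m1 - m2)                  # theta-derivatives (constant)
C = t * C1 + (1 - t) * C2
A = O - (t * m1 + (1 - t) * m2)
formula = 4 * Adot @ Cdot @ A + 2 * Adot @ C @ Adot

h = 1e-4
numeric = (g(t + h) - 2 * g(t) + g(t - h)) / h**2  # central second difference
print(formula, numeric)                            # the two values agree closely
```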
Help to compute the integral
Hint: $I = e^{x/2}x^{3/2}-\displaystyle \int e^{x/2}x^{1/2}dx$. For the remaining integral, substitute $y = x^{1/2}$, so $x = y^2$ and $dx = 2y\,dy$, and it becomes $2J$ with $J = \displaystyle \int e^{y^2/2}y^2dy = \displaystyle \int yd\left(e^{y^2/2}\right)= ye^{y^2/2}- \displaystyle \int e^{y^2/2}dy$; the last integral is not elementary (it is expressed through the imaginary error function).
Proving that being $R$ is a relation in a set $A$ and $R^{-1}$ its inverse, satisfy 5 properties between them.
For the purpose of this answer I assume that you define $R^{-1} = \{(y,x)\text{ }|\text{ }(x,y)\in R\}$. Proof. We examine each of the above propositions in turn. Notice that $(1)$ and $(2)$ follow immediately from the definition of $R^{-1}$. For $(3)$, assume that $R$ is symmetric and let $x,y\in A$ be such that $(x,y)\in R^{-1}$, equivalently $(y,x)\in R$; then by symmetry of $R$ we have $(x,y)\in R$, equivalently $(y,x)\in R^{-1}$. For $(4)$, assume $R$ is transitive; then given arbitrary $x,y,z\in A$ such that $xR^{-1}y$ and $yR^{-1}z$, equivalently $yRx$ and $zRy$, the transitivity of $R$ implies $zRx$, equivalently $xR^{-1}z$. And yes, your proof of proposition $(5)$ is very much correct. Lastly, if you would oblige me, may I ask which text this problem is from?
Partial Integration doubt
$\frac{d(1-c)}{dc}=-1\implies d(1-c)=-d(c)$. Alternatively you can substitute $m=1-c$ and your LHS becomes$$\int_0^{1/2}\frac{2~dm}{m^2(5+m)}$$Integrate this with respect to $m$ and substitute $1-c$ back in the final result.
probability question - virus damage computer
Use the CDF of a binomial distribution with $p=0.35$ and $n=2400$ evaluated at $x_1 = 799$ and $x_2 = 850$.
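For instance, if the event in question is $800 \le X \le 850$ (that is what evaluating the CDF at $799$ and $850$ and subtracting computes — an assumption on my part, since the original problem statement is not repeated here), a quick numerical evaluation could look like this:

```python
from scipy.stats import binom

n, p = 2400, 0.35
prob = binom.cdf(850, n, p) - binom.cdf(799, n, p)   # P(800 <= X <= 850)
print(prob)
```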
Algebra divisibility proof
Ignoring the fact that the equation has no solution in integers, $a$ can not be divisible by $5$ because this would imply $5$ divides $5b^2-24a^2=1$, a contradiction. However $b$ is divisible by $5$. For $5b^2= 24a^2+1 = 25a^2-(a^2-1)$ implies $5$ divides $a^2-1$. Hence $a$ is either $1$ or $-1$ modulo $5$. But in either case you will find $24a^2+1$ is divisible by $25$. Thus $25$ divides $5b^2$ which implies $5$ divides $b^2$ which implies $5$ divides $b$, because $5$ is a prime.
Is L^p norm-closed in the bounded continuous functions?
The answer depends on the measure you put on $G$. If the measure is finite, then the answer is affirmative: if $f_n\in L^p(G)$ is a sequence converging in $C_b(G)$, then it converges uniformly, and when the measure is finite this implies that the limit function is also in $L^p(G)$. However, if the measure is infinite (for example, the real line viewed as a locally compact group with Lebesgue measure), then there are examples of uniformly convergent sequences of continuous functions in $L^p$ whose limit is not in $L^p$.
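One concrete construction of this kind (on $\mathbb R$ with Lebesgue measure, for a fixed $1 \le p < \infty$; not necessarily the example the answer has in mind): take $f(x) = (1+|x|)^{-1/p}$ and $f_n(x) = \max\bigl(f(x) - \tfrac1n, 0\bigr)$. Each $f_n$ is continuous with compact support (it vanishes for $1+|x| > n^p$), hence $f_n \in L^p(\mathbb R)$, and $0 \le f - f_n \le \tfrac1n$ gives uniform convergence $f_n \to f$; yet $\int_{\mathbb R} |f|^p\,dx = \int_{\mathbb R} \frac{dx}{1+|x|} = \infty$, so $f \notin L^p(\mathbb R)$.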
Combinatorics: Prove that $x, y, z$ exist such that $\frac {1}{2} \leq \frac {x^2}{yz} \leq 2$
HINT.- Consider the rational function $f(X)=\dfrac{(X-2)^2}{(X-3)(X-4)}$. Clearing denominators (valid for $X\gt4$, where the denominator is positive), it is easy to prove that $$\begin{cases}f(X)\le2\text{ when } X\ge 5+\sqrt5\text{, in particular when } X\ge8\\f(X)\ge\dfrac12\text{ when } X\gt5\end{cases}$$ Making now $X=2^n$, the inequalities above prove that for $n\ge3$ (in which case $X\ge8$) a solution is given by $$x=2^n-2\\y=2^n-3\\z=2^n-4$$ It remains to check the values $n=1$ and $n=2$, in which cases $(x,y,z)=(1,1,1),(1,1,2)$ are trivial solutions.
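A quick numerical check of the hint's bound on $x^2/(yz)$ for the proposed triples (the range of $n$ is an arbitrary illustrative choice):

```python
for n in range(3, 12):
    x, y, z = 2**n - 2, 2**n - 3, 2**n - 4
    r = x**2 / (y * z)
    print(n, r, 0.5 <= r <= 2)   # the ratio stays within [1/2, 2]
```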
Taylor double series
The double Taylor series for $f(x,y) = (1 - x - y)^{-1}$ around the origin is $$\tag{1}\sum_{n=0}^\infty \sum_{m=0}^\infty \left.\frac{\partial^{n+m}f}{\partial x^n \partial y^m}\right|_{(0,0)}\frac{x^ny^m}{n!m!} = \sum_{n=0}^\infty \sum_{m=0}^\infty \frac{(n+m)!}{n!m!} x^ny^m.$$ On the other hand, using the binomial theorem we have the expansion $$\tag{2}\sum_{n=0}^\infty (x+y)^n = \sum_{n=0}^\infty \sum_{k=0}^n \frac{n!}{k!(n-k)!}x^ky^{n-k}.$$ Series (1) and (2) are related in that partial sums of (2) coincide with partial sums of (1) taken in a specific order -- summing along diagonals of the matrix $[a_{nm}]$ of general terms. If the double series (1) is absolutely convergent, then (1) and (2) will converge to the same value. However, the domains of convergence are not identical. For (2) we have absolute convergence for $|x + y| < 1$, as you mentioned, and divergence for $|x+y| \geqslant 1$. This is simply a property of the geometric series. Determining the convergence domain for (1) is more involved. For positive integers $N$ and $M$, the partial sums satisfy $$\sum_{n=0}^N \sum_{m= 0}^M \frac{(n+m)!}{n!m!}|x|^n|y|^m < \sum_{n =0}^{N+M}\sum_{k=0}^n \frac{n!}{k!(n-k)!} |x|^k |y|^{n-k} = \sum_{n = 0}^{N+M} (|x| + |y|)^n,$$ which should be apparent because every term on the LHS appears on the RHS along with additional terms. Thus, (1) is absolutely convergent for $|x| + |y| < 1$, and, using a similarly constructed lower bound, not absolutely convergent if $|x| + |y| \geqslant 1$. The case where $|x + y| < 1 < |x| + |y|$ can arise where the series are not both absolutely convergent.
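As a sanity check, one can compare truncations of (1) and (2) with the closed form at a point with $|x|+|y|<1$; the point $(0.3, 0.4)$ and the truncation orders below are arbitrary illustrative choices:

```python
from math import comb

x, y, N = 0.3, 0.4, 60
s1 = sum(comb(n + m, n) * x**n * y**m for n in range(N) for m in range(N))  # series (1)
s2 = sum((x + y)**n for n in range(2 * N))                                  # series (2)
print(s1, s2, 1 / (1 - x - y))   # all three agree closely
```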
Seifert matrix, linking numbers, generators
The Seifert surface $\Sigma\subset S^3$ is oriented, so the normal bundle $\nu$ is trivial; choose a nowhere-vanishing section $s\in \Gamma(\nu)$. For a curve $\gamma$, let $\gamma^+$ denote the curve obtained by pushing $\gamma$ off $\Sigma$ a small distance along $s$. Basically, we're taking a tubular neighborhood around $\Sigma$ and pushing $\gamma$ outward, but codimension $1$ and orientability mean that there's a well-defined direction in which to push at each point along $\gamma$. Descending to homology gives $\operatorname{lk}(x_i, x_j^+)$.
Linear algebra and matrix.
A counter-example is given. To show that the sum of orthogonal matrices can be orthogonal, take $\begin{pmatrix}\frac{1}{2}&\frac{\sqrt3}{2}\\ -\frac{\sqrt3}{2}&\frac{1}{2}\end{pmatrix}+\begin{pmatrix}\frac{1}{2}&-\frac{\sqrt3}{2}\\ \frac{\sqrt3}{2}&\frac{1}{2}\end{pmatrix}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}$, which is orthogonal.
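A quick numerical confirmation of this (simply checking $S^\top S = I$ for the sum above):

```python
import numpy as np

a = np.sqrt(3) / 2
Q1 = np.array([[0.5, a], [-a, 0.5]])
Q2 = np.array([[0.5, -a], [a, 0.5]])
S = Q1 + Q2
print(np.allclose(S.T @ S, np.eye(2)))   # True: the sum (the identity) is orthogonal
```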
Can we say that $X$ knowing $Y$ is a random variable?
Indeed, $E(X \mid Y)$ is a random variable: it is a function of $Y$, so its value changes with the observed value of $Y$. Note also how $E(X \mid X) = X$. About your equality, note that the right hand side is a number, which can't be equal to a random variable. The proper way to write it is $$E(X \mid Y) = \sum_x x\, P(X = x \mid Y),$$ which is again a random variable (a function of $Y$); taking expectations then gives the number $E(E(X \mid Y)) = E(X)$.
Dedekind Cuts and Rationals
Note that it is not part of your axioms that $X\cup Y = \Bbb Q$. For instance, the rational number $0$ is given by $$ X = \{q\in \Bbb Q\mid q<0\}\\ Y = \{q\in \Bbb Q\mid q>0\} $$ and the number $0$ isn't contained in either of them. On the other hand, the axiom $$\forall x,y, x<y \Rightarrow \text{ either } x \in X \text{ or } y \in Y$$ implies that $\Bbb Q\setminus(X\cup Y)$ has at most one element. The usual definition of Dedekind cuts that I've come across does have $X\cup Y = \Bbb Q$, but it also allows $Y$ to have a least element, which your axioms do not allow. Specifically, $$\forall x \in Y, \; \exists y \in Y \text{ such that } y < x$$ says that $Y$ has no least element.
Count the number of intervals that fall in the given range
You can build an interval tree over the input intervals. Then you can execute a range query $(i, j)$ that returns all intervals overlapping $(i, j)$ in $O(\log n + k)$ time, where $k$ is the number of overlapping intervals, or a range query that only returns the number of overlapping intervals in $O(\log n)$ time.
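If only the count is needed, here is a sketch of an alternative that avoids building a full interval tree: keep the start points and end points in two sorted arrays and count, by binary search, how many intervals end before $i$ or start after $j$ (this assumes closed intervals; adapt the strict/non-strict comparisons to your overlap convention).

```python
from bisect import bisect_left, bisect_right

def preprocess(intervals):
    starts = sorted(s for s, _ in intervals)
    ends = sorted(e for _, e in intervals)
    return starts, ends

def count_overlapping(starts, ends, i, j):
    n = len(starts)
    ends_before = bisect_left(ends, i)           # intervals entirely left of the query
    starts_after = n - bisect_right(starts, j)   # intervals entirely right of the query
    return n - ends_before - starts_after

starts, ends = preprocess([(1, 3), (2, 8), (5, 6), (9, 12)])
print(count_overlapping(starts, ends, 4, 7))     # 2: the intervals (2, 8) and (5, 6)
```

Each query then costs $O(\log n)$ after $O(n \log n)$ preprocessing, matching the counting bound above; the interval tree remains the right structure when the overlapping intervals themselves must be reported.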
The density of the distribution whose Laplace transform is the following
If $X \sim U([0,\frac 12])$, that is $X$ is uniformly distributed on $[0,\frac 12]$, we have \begin{align*} \def\E{\mathbf E} \E[e^{tX}] &= 2\int_0^{\frac 12} \exp(tx)\, dx\\ &= \frac 2t \exp(tx)\bigl|_0^{\frac 12}\\ &= \frac{\exp(t/2) - 1}{t/2} \end{align*} So the density with respect to Lebesgue measure on $\mathbf R$ is $2\chi_{[0,\frac 12]}$.
Trouble with l'Hôpital's rule for $\lim_{x\rightarrow 0} \frac{4x+4\sin x}{10x+10\cos x}$
Original posted question: $$\lim_{x\rightarrow 0} \frac{4x+4\sin x}{10x+10\sin x} = \lim_{x\to 0} \frac{(4x + 4\sin x)'}{(10x + 10 \sin x)'} = \lim_{x\to 0} \frac{4+4\cos x}{10 + 10\cos x} = \frac {4 + 4}{10 + 10} = \frac {8}{20} = \frac 25$$ Since you meant to post $$\lim_{x\rightarrow 0} \frac{4x+4\sin x}{10x+10\cos x}$$ note that in this case, l'Hôpital is not applicable (the limit does not at first evaluate to an indeterminate limit). Nor would we want to use it! It is easily solved by evaluating immediately: $$\lim_{x\rightarrow 0} \frac{4x+4\sin x}{10x+10\cos x} = \frac{ 0 + 0}{0 + 10(1)} = \frac {0}{10} = 0$$ IMPORTANT TO REMEMBER: We apply l'Hôpital's rule if and only if a limit evaluates to an indeterminate form. That bold-face link will take you to Wikipedia's concise list of "what counts" as an indeterminate form, and why.
Find $\sum_{n=1}^\infty\frac{2^{f(n)}+2^{-f(n)}}{2^n}$, where $f(n)=\left[\sqrt n +\frac 12\right]$ denotes greatest integer function
Note that $f(n) = \left\lfloor \sqrt{n}+\tfrac{1}{2}\right\rfloor = k$ iff $k^2-k+\tfrac{1}{4} \le n < k^2+k+\tfrac{1}{4}$, i.e. $k^2-k+1 \le n \le k^2+k$. Therefore, we have: \begin{align*} \sum_{n = 1}^{\infty}\dfrac{2^{f(n)}+2^{-f(n)}}{2^n} &= \sum_{k = 1}^{\infty}\sum_{f(n) = k}\dfrac{2^{f(n)}+2^{-f(n)}}{2^n} \\ &= \sum_{k = 1}^{\infty}\sum_{n = k^2-k+1}^{k^2+k}\dfrac{2^k+2^{-k}}{2^n} \\ &= \sum_{k = 1}^{\infty}(2^k+2^{-k})\cdot\sum_{n = k^2-k+1}^{k^2+k}\dfrac{1}{2^n} \\ &= \sum_{k = 1}^{\infty}(2^k+2^{-k})\left(2^{-(k^2-k)}-2^{-(k^2+k)}\right) \\ &= \sum_{k = 1}^{\infty}\left(2^{-k^2+2k}+2^{-k^2}-2^{-k^2}-2^{-k^2-2k} \right) \\ &= \sum_{k = 1}^{\infty}\left(2^{-(k-1)^2+1} - 2^{-(k+1)^2+1}\right) \\ &= \sum_{i = 0}^{\infty}2^{-i^2+1} - \sum_{j = 2}^{\infty}2^{-j^2+1} \\ &= 2^{-0^2+1}+2^{-1^2+1} \\ &=3 \end{align*}
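A quick numerical check of the closed form (the truncation point is an arbitrary illustrative choice):

```python
from math import floor, sqrt

total = 0.0
for n in range(1, 200):
    f = floor(sqrt(n) + 0.5)
    total += (2**f + 2**(-f)) / 2**n
print(total)   # 3.0 up to floating-point error
```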