Let $x=0,a_1a_2a_3\cdots a_i\cdots$ be a number such that $a_1=0$, $a_i=1$ if $i\in\mathbb{N}$ is a prime number and $a_i=0$ otherwise.
It is not rational because the decimal expansion of a rational number is eventually periodic. However, there are arbitrarily long sequences of consecutive composite numbers (for example, by the Chinese remainder theorem), and of course infinitely many primes, so the decimal expansion of this number is not eventually periodic.
Localization of roots of complex quadratic equations
The conjecture about imaginary parts is certainly false, for the equation $$ P(x) = (x - i) (x - (1+i)) $$ has roots $i$ and $1 + i$, whose imaginary parts have the same sign. In general, there are two roots, and just as in the real case, the sum of the roots is $-b/a$, while the product of the roots is $c/a$. Other than that, there's not a lot to say.
What is the $(n-k)$-th derivative of $x^n$? Also, why is $n!/k! = ...$
Start from $$(x^n)^{(k)}=\dfrac{n!}{(n-k)!}x^{n-k},$$ which is easy to prove by induction, and apply it replacing $k$ with $n-k$: $$(x^n)^{(n-k)}=\dfrac{n!}{(n-(n-k))!}\,x^{n-(n-k)}=\dfrac{n!}{k!}\,x^{k}. $$ Also $$\frac{n!}{k!}=\frac{n(n-1)\cdots(k+1)k(k-1)\cdots1}{k(k-1)\cdots1}=n(n-1)\cdots(k+1).$$
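As a quick check, one can verify the identity symbolically for sample values (a sketch using SymPy; the values $n=7$, $k=3$ are arbitrary):

```python
import sympy as sp

# Check (x^n)^{(n-k)} = (n!/k!) x^k for the sample values n = 7, k = 3.
x = sp.symbols('x')
n, k = 7, 3
lhs = sp.diff(x**n, x, n - k)                  # (n-k)-th derivative of x^n
rhs = sp.factorial(n) / sp.factorial(k) * x**k
print(sp.simplify(lhs - rhs))                  # prints 0
```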
The graphs in which radius is equal to diameter
An obvious family of such graphs is the vertex-transitive graphs. All vertex-transitive graphs are self-centered (every vertex has the same eccentricity). Their complements are also vertex-transitive and thus are also self-centered. These will always be regular graphs, however.
Can a disconnected set disconnect a connected set?
Take $V=\mathbb{R}^n$ and $F = \{x\in \mathbb{R}^n:x_n=0 \text{ or } x_n=1\}$. Then $V\setminus F$ and $V\cap F$ are disconnected.
Find point on a curve that is part of a tangent line
Hint. The direction of the tangent to the curve at $t$ is given by $r'(t)=\langle 1,4t,-3-3t^2 \rangle$. So for some $s$, $$ r(t)+s\,r'(t)=X $$ where $X$ is the point you need it to pass through. So just solve for $t$ and $s$.
Probability of a sequence of random letters
The way of dealing with such questions is to think clearly about what is involved. Suppose I am looking for the word "the". Let the probability that it appears in the first $n$ letters be $p_n$. Now if "the" appears, either it appears in the first $n-1$ letters, or it appears for the first time at the $n^{th}$ letter. In this case the last three letters chosen are "the" with probability $\frac 1{26^3}$, and the first $n-3$ letters do not contain the word "the". I am left with $$p_n=p_{n-1}+\frac 1{26^3}(1-p_{n-3})$$ and $p_0=p_1=p_2=0$. This is now a recurrence which can be explicitly solved.

If I were looking for the single letter "a", the same approach would give me $p_n=p_{n-1}+\frac 1{26}(1-p_{n-1})$, or $p_n = \frac {25}{26}p_{n-1}+\frac 1{26}$. The solution to this is $p_n=1+A\left(\frac{25}{26}\right)^n$, and $p_0=0$ gives $A=-1$, which checks with simpler ways of computing, which are available for a single letter.
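For concreteness, here is a short numerical sketch of the recurrence (the function name and the cutoff value are illustrative choices, not from the answer):

```python
# Compute p_n = P("the" appears in the first n letters) via the recurrence
# p_n = p_{n-1} + (1 - p_{n-3}) / 26^3, with p_0 = p_1 = p_2 = 0.
def prob_contains_the(n: int) -> float:
    p = [0.0, 0.0, 0.0]
    for k in range(3, n + 1):
        p.append(p[k - 1] + (1 - p[k - 3]) / 26**3)
    return p[n]

print(prob_contains_the(10**5))  # slowly approaches 1 as n grows
```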
The Jacobian determinant as the ratio of differential volume elements
In essence, the problem is that the volume element spanned by $\mathrm du$ and $\mathrm dv$ need not be a rectangle, in which case its volume is obviously no longer given by the product of the lengths of the sides. Thus this is an issue with the concept of volume itself, and it will remain even if your transformation $(x,y)\mapsto(u,v)$ is a general linear transformation. The outer product of covectors was built to capture the concept of volume in this context, and the wedge product of one-forms is its transparent generalization to the case of curvilinear coordinates.
Sum of IID Chi Square Random Variables Approximation?
Since by definition the $X_i$ are nonnegative, $\sum_i X_i < a$ implies each $X_i < a$, and then you can use the fact that they are independent to prove the inequality. That is, $\{\sum_i X_i < a \}\subset \bigcap_i\{X_i < a\}$, and we can take the measures of these sets and use independence of the events $\{X_i < a\}$.
Current price of a European call option
You use the no free lunch principle. You describe a strategy that will guarantee a profit if $c$ is less than either number. The $0$ comes from noting that if somebody will pay you to take the option you can collect the payment and let the option expire. The other term comes from selling a share and buying an option, then exercising at expiration. If the option price is below this value, you make a guaranteed profit. A good way to simplify it is to assume $r=0$, which is not far wrong today. Then it just says that if the strike price is below the stock price, the option has to cost at least the difference. If it didn't, instead of buying the stock you would buy the option and exercise it. The effect of interest is to reduce the value of the $K$ you will have to pay in the future, so the option price has to be a little higher yet to avoid this arbitrage.
Homology groups of $\mathbb{C}P^n$ and $\mathbb{R}P^n$
You might want to try induction over $n$ and use Mayer-Vietoris. The base step is trivial (you should know the homologies); for the inductive step try to make use of this hint: $\mathbb{R}P^n$ minus a point deformation retracts onto $\mathbb{R}P^{n-1}$. Making use of M-V you should be able to compute the homology of the space. Analogously for the complex projective space, which should be easier. Another approach, very similar to M-V, is to use the homology long exact sequence of the pair: since you can identify $\mathbb{R}P^{n-1}\subset \mathbb{R}P^n$ and since they are nice spaces (CW-complexes) you have $H_k(\mathbb{R}P^n,\mathbb{R}P^{n-1})\cong \widetilde{H}_k(\mathbb{R}P^n/\mathbb{R}P^{n-1})$, and you should realise what the latter space looks like.
Questions about the reflection operator on an inner product space
a) The trace and determinant can be computed if you know the eigenvalues of $r_u$. (Hint: the eigenvalues are $-1$ and $1$, but you need to figure out how many of each there are.) b) Have you written down the definition of isometry and tried to verify that $r_u$ satisfies it? c) Draw a picture (say, in two dimensions) of some arbitrary $v$ and $w$, and think about what kind of reflection sends $v$ to $w$. Can you draw what $u$ should be for that reflection? (Hint: the line in direction $u$ should perpendicularly bisect the segment between $v$ and $w$.)
Solving a PDE possibly with method of characteristics or other methods
Solving your "related equations" should give you the solutions: $u$ is constant on the curves $y = s$, $x = 2+2 s+s^2+ c \exp(s)$, i.e. $u = F((x - 2 - 2 y - y^2) \exp(-y))$.
Quadratic Polynomial with complex coefficients
For every $z\in \mathbb{C}$ we have $$ |p(z)|^2=(z^2+az+b)(\bar{z}^2+\bar{a}\bar{z}+\bar{b})=|z|^4+a\bar{z}|z|^2+\bar{a}z|z|^2+|a|^2|z|^2+\bar{b}z^2+b\bar{z}^2+a\bar{b}z+\bar{a}b\bar{z}+|b|^2, $$ in particular, when $|z|=1$, we have: $$ |p(z)|^2=\bar{b}z^2+b\bar{z}^2+(a+\bar{a}b)\bar{z}+(\bar{a}+a\bar{b})z+|a|^2+|b|^2+1. $$ Since $|p(z)|=1$ when $|z|=1$, we have: $$\tag{1} \bar{b}z^2+b\bar{z}^2+(a+\bar{a}b)\bar{z}+(\bar{a}+a\bar{b})z+|a|^2+|b|^2=0. $$ Putting $z=-1,1,-i,i$ in (1), we get: $$ \left\{ \begin{array}{lcc} \bar{b}+b-(a+\bar{a}b)-(\bar{a}+a\bar{b})+|a|^2+|b|^2&=&0\\ \bar{b}+b+(a+\bar{a}b)+(\bar{a}+a\bar{b})+|a|^2+|b|^2&=&0\\ -\bar{b}-b-i(a+\bar{a}b)-i(\bar{a}+a\bar{b})+|a|^2+|b|^2&=&0\\ -\bar{b}-b+i(a+\bar{a}b)+i(\bar{a}+a\bar{b})+|a|^2+|b|^2&=&0 \end{array}\right., $$ and combining these four identities we have: $$ 4(|a|^2+|b|^2)=0. $$ Thus $a=b=0$.
Let $\mu$ be finite measure and $\lVert f\rVert_\infty>0$ and $a_n=\int |f|^n d\mu$. Prove $\lim {a_{n+1}\over a_n}=\lVert f\rVert_\infty$
Hint: By Hölder's inequality, $\int |f|^{n}d\mu \leq (\int |f|^{n+1}d\mu)^{\frac n {n+1}} C^{\frac 1 {n+1}}$ where $C=\mu (X)$. Now use the fact that $\int |f|^{n+1}d\mu \geq \int_E |f|^{n+1}d\mu$ where $E=\{x: |f(x)| >\|f\|_{\infty} -\epsilon\}$. Can you finish?
The image of a morphism between affine algebraic varieties.
I will assume varieties are irreducible. Let $I$ be the kernel of $f^\sharp : \mathbb{C}[W] \to \mathbb{C}[V]$. Since $f^\sharp$ is surjective, $I$ must be a prime ideal, so corresponds to some closed subvariety $Y \subseteq W$. Thus the morphism $f : V \to W$ must factor through the inclusion $Y \hookrightarrow W$. We may assume without loss of generality that $Y = W$. But then $f^\sharp$ is an isomorphism, say with two-sided inverse $g^\sharp : \mathbb{C}[V] \to \mathbb{C}[W]$, and the fundamental theorem regarding morphisms of varieties implies the morphism $g : W \to V$ corresponding to $g^\sharp$ must be a two-sided inverse for $f : V \to W$. You are right that the image of a morphism of affine varieties need not be closed: for example, if $V = \{ (x, y) \in \mathbb{C}^2 : x y = 1 \}$, $W = \mathbb{C}$, and $f(x, y) = x$, then the image of $f$ is not closed in $W$. But what about $f^\sharp$? Well, $\mathbb{C}[V] = \mathbb{C}[x, y] / (x y - 1)$ and $\mathbb{C}[W] = \mathbb{C}[x]$, so $f^\sharp : \mathbb{C}[V] \to \mathbb{C}[W]$ is not surjective. (In fact, it is injective!) Thus the argument in the previous paragraph does not apply.
Embedded submanifold and isomorphisms of the ambient space.
The key fact you need, if you want to show that $f\circ F$ is a topological embedding, is that restrictions of continuous maps are continuous: If $f:X\to Y$ is a continuous map between topological spaces, $A\subseteq X$, and $B\subseteq Y$ with $f(A)\subseteq B$, then the restriction $f :A\to B$ is also continuous, where $A$ and $B$ are equipped with the subspace topologies. From this it follows that if $f:X\to Y$ is a homeomorphism and $A\subseteq X$, the restriction $f:A\to f(A)$ is also a homeomorphism. Now in your case you want to show that $f\circ F:N\to M'$ is a topological embedding, which means that the restriction $f\circ F:N\to f(F(N))$ is a homeomorphism. But this map can be written as the composition $$N\stackrel{F}\to F(N)\stackrel{f}\to f(F(N))$$ which, by the above and the assumption that $F:N\to M$ is a topological embedding, is a composition of homeomorphisms and hence a homeomorphism.
Indefinite Integral without parts or u-sub
You do realize that $x^{-1}=\frac{1}{x}$, right? $$\int{\frac{\frac{1}{x}}{1-\frac{1}{x}}\,dx}=\int{\frac{1}{x-1}\,dx}=\ln|x-1|+C$$
Divergence Theorem to determine the flux
Your $dV$ is really an area element; you are integrating the 2D divergence over the area within the given bounds. The net flux over this area is given by $$4 \int_{-2}^1 dy \: y \: \int _{-3 \sqrt{1+9 y^2}}^{3 \sqrt{1+9 y^2}} dx \: x$$ which is zero by symmetry.
Prove $\int_{0}^1 f(x^2)\,\mathrm{d}x \le f\left(\frac{1}{3}\right)$ for an unspecified $f$
If $f''\leq0$ then $f$ is concave and Jensen's inequality gives $$ \int_{0}^{1}f(x^2)\,dx \leq f\left(\int_{0}^{1}x^2\,dx\right).$$
Shannon's channel coding theorem with bit error probability
Your question essentially is the following (just to help other readers not familiar with MacKay's textbook and/or the context of the question): Consider an encoder that maps source codewords $x\in \{0,1\}^K$ to channel codewords $y \in \{0,1\}^{N'}$ (with $K/N' < 1$), and its corresponding decoder, which maps a codeword $r \in \{0,1\}^{N'}$ to a source codeword $\hat{x}\in \{0,1\}^K$. Assume that this coding scheme is able to correctly decode codewords at the output of a BSC with transition probability $q$ (therefore, by the Shannon theorem it must hold that $K/N'\leq 1-H(q)$). Now consider the "reverse" operation: An arbitrary (uniformly distributed) codeword $r \in \{0,1\}^{N'}$ is input to the decoder, which, by definition, outputs a codeword $\hat{x}\in \{0,1\}^K$. The codeword $\hat{x}$ is then input to the encoder, which, by definition, will output a codeword $y \in \{0,1\}^{N'}$. What is the (error) probability that a certain bit of $y$ differs from the corresponding one of $r$? First, let's consider in detail how the decoder operates when the encoder/decoder are used "as normal". As stated in the same section of MacKay's book, when the maximum possible rate $K/N'=1-H(q)$ is employed, the received (noisy) codeword $r$ that is input to the decoder is uniformly distributed over $\{0,1\}^{N'}$ (see edit below). Now, the decoder takes the received codeword $r$ and essentially determines the most probable (typical) noise sequence $n \in \{0,1\}^{N'}$ for which $r=\hat{y}\oplus n$, where $\hat{y} \in \{0,1\}^{N'}$ is a valid codeword, and returns the estimate $\hat{x}$ as the unique source codeword corresponding to $\hat{y}$. Note that, for sufficiently large $N'$, the noise codeword $n$ will "typically" consist of $qN'$ ones and $(1-q)N'$ zeros, i.e., $\hat{y}$ and $r$ "typically" differ at $qN'$ positions (bits). Now, consider the "reverse" operation, again with $K/N'=1-H(q)$. A uniformly distributed codeword $r$ is provided at the input of the decoder. Note that this codeword is distributed exactly the same as the codewords at the input of the decoder under "normal" operation, i.e., $r$ is with high probability equal to the sum of a (valid) codeword plus a (typical) noise sequence. The decoder will generate, as described above, a codeword $\hat{x}$ which will have a one-to-one correspondence to a valid codeword $\hat{y}$. When the codeword $\hat{x}$ is provided at the input of the encoder, the encoder will generate the one-to-one map to the codeword $y=\hat{y}$, which, as stated above, differs from $r$ at "typically" $qN'$ positions. This implies that the error probability is $q$. P.S.: The above arguments (as well as MacKay's) are informal and non-rigorous. A rigorous approach requires consideration of rate-distortion theory aspects. Edit: the uniform distribution of the output of the channel follows since the capacity of the binary symmetric channel is achieved when its output is uniformly distributed and a capacity-achieving code is considered.
Generators of a non-abelian Galois group of order 8.
Hint: $Gal(K/\mathbb Q)$ must be the dihedral group of order 8 from Kaplansky's Theorem. Therefore, it must be generated by some $\rho$ and $\tau$ where $\tau^2 = 1$, $\rho^4 = 1$ and $\rho\tau = \tau\rho^{-1}$.
How to generate unique id from each element in matrix?
How about $$f(r,c) = Kr+c,$$ where $K$ is the number of columns in the matrix and $r$ and $c$ correspond to the row and column you want an ID for? For your 4 by 4 matrix, we would have $$f(r,c) = 4r+c,$$ where we have $r,c\in\{0,1,2,3\}$.
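A minimal sketch of this in code (names are illustrative):

```python
def matrix_id(r: int, c: int, num_cols: int) -> int:
    # Unique ID for the 0-indexed entry (r, c) of a matrix with num_cols columns.
    return num_cols * r + c

# For a 4-by-4 matrix: IDs 0..15, one per entry.
print([matrix_id(r, c, 4) for r in range(4) for c in range(4)])
```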
Absolute value of complex Radon measure
1 is certainly not true. Consider a set $X = \{a,b\}$ with two points, and a measure $\mu$ so that $a$ has measure $+1$ and $b$ has measure $-1$, i.e., $$\mu(\{a\})=1, \quad \mu(\{b\})=-1, \quad \mu(\emptyset) = \mu(X) = 0.$$ Then for any (compact) $K \subset X$ we have $|\mu(K)| \le 1$ but $|\mu|(X) = 2$.
On some inequality about positive numbers
If $b=0$ then we have $$0\geq a^4$$ for all $a\geq 0$, so no, it does not exist.
Lax-Wendroff and Godunov schemes for $u_t + (u^4)_x = 0$
The present conservation law $u_t + f(u)_x = 0$ is nonlinear, with a convex flux $f(u)=u^4$. The derivative of the flux is $f'(u)=4u^3$, and the only solution of $f'(u_s) = 0$ is $u_s=0$. The Lax-Wendroff method is well-described in the Wikipedia article. The method can be written in conservation form $$u_{j}^{n+1} = u_{j}^n - \frac{\Delta t}{\Delta x} (F_{j+1/2}^n - F_{j-1/2}^n) $$ with the numerical flux $$ F_{j+1/2}^n = \frac{1}{2} \left({f(u_j^n) + f(u_{j+1}^n)}\right) - \frac12 \frac{\Delta t}{\Delta x} A_{j+1/2} \left(f(u_{j+1}^n)-f(u_{j}^n)\right) $$ where $ A_{j+ 1/2}=f'\big(\tfrac12(u_{j}^n + u_{j+ 1}^n)\big) $, which is of the desired form. Godunov's method is usually written in conservation form too, with the numerical flux (see (1) p. 228) $$ F_{j+1/2}^n = \left\lbrace \begin{aligned} &f(u_j^n) & &\text{if}\quad u_j^n > u_s \;\text{and}\; s_{j+1/2} > 0 ,\\ &f(u_{j+1}^n) & &\text{if}\quad u_{j+1}^n < u_s\;\text{and}\; s_{j+1/2} < 0 ,\\ &f(u_s) & &\text{if}\quad u_{j}^n < u_s < u_{j+1}^n , \end{aligned}\right. $$ where $ s_{j+1/2} = [{f(u_{j+1}^n) - f(u_j^n)}]/[{u_{j+1}^n - u_{j}^n}] $, which is also of the desired form. (1) R.J. LeVeque, Finite Volume Methods for Hyperbolic Problems, Cambridge university press, 2002. doi:10.1017/CBO9780511791253
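For concreteness, here is a sketch of one Lax-Wendroff step with this flux on a periodic grid (the helper names, time step handling, and periodic boundary are illustrative assumptions, not taken from the reference):

```python
import numpy as np

def f(u):                 # flux f(u) = u^4
    return u**4

def fp(u):                # flux derivative f'(u) = 4 u^3
    return 4 * u**3

def lax_wendroff_step(u, dt, dx):
    u_right = np.roll(u, -1)                          # u_{j+1} (periodic)
    A = fp(0.5 * (u + u_right))                       # f' at the interface average
    F = (0.5 * (f(u) + f(u_right))
         - 0.5 * (dt / dx) * A * (f(u_right) - f(u)))  # numerical flux F_{j+1/2}
    return u - (dt / dx) * (F - np.roll(F, 1))        # conservative update
```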
Show that $E[X]=\frac{2}{\lambda}$
$$E[X]=\int_0^{\infty}x\cdot\lambda^2xe^{-\lambda x}\,dx=\frac1{\lambda}\int_0^{\infty}\lambda^3x^2e^{-\lambda x}\,dx=\frac1{\lambda}\int_0^{\infty}y^2e^{-y}\,dy=\frac1{\lambda}\Gamma(3)=\frac2{\lambda},$$ where the third equality uses the change of variable $y=\lambda x$.
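The density $\lambda^2xe^{-\lambda x}$ is that of a Gamma distribution with shape $2$ and rate $\lambda$, so the result is easy to check numerically (a sketch; the choice $\lambda=1.5$ is arbitrary):

```python
import numpy as np

lam = 1.5
rng = np.random.default_rng(0)
# Gamma(shape=2, scale=1/lam) has density lam^2 * x * exp(-lam * x).
samples = rng.gamma(shape=2.0, scale=1.0 / lam, size=1_000_000)
print(samples.mean(), 2 / lam)   # both approximately 1.3333
```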
How to prove that there are $a,b,c$ such that $a \in A, b \in B, c \in C$ and $a,b,c$ (in an appropriate order) form an arithmetic sequence?
This was a conjecture posed by Radoš Radoičić, which was proven independently by Radoičić and Veselin Jungić in 2003 and by Maria Axenovich and Dmitri Fon-Der-Flaass in 2004.
Laplace equation in polar coordinates with complex boundary condition
Consider $T(\theta)=a_1\sin\lambda\theta+a_2\cos\lambda\theta$.

Imposing $T(\theta)=-T(-\theta)$: $$a_1\sin\lambda\theta+a_2\cos\lambda\theta=-a_1\sin(-\lambda\theta)-a_2\cos(-\lambda\theta)=a_1\sin\lambda\theta-a_2\cos\lambda\theta,$$ so $2a_2\cos\lambda\theta=0$, hence $a_2=0$ and $T(\theta)=a_1\sin\lambda\theta$.

Imposing $T(\theta)=T(\theta+2\pi)$: $a_1\sin\lambda\theta=a_1\sin(\lambda(\theta+2\pi))$, so $\lambda=n$, $n\in\mathbb{Z}$.

Therefore let $u(r,\theta)=\sum\limits_{n=1}^\infty C(r,n)\sin n\theta$. Then $$\sum\limits_{n=1}^\infty\dfrac{\partial^2C(r,n)}{\partial r^2}\sin n\theta+\sum\limits_{n=1}^\infty\dfrac{1}{r}\dfrac{\partial C(r,n)}{\partial r}\sin n\theta-\sum\limits_{n=1}^\infty\dfrac{n^2C(r,n)}{r^2}\sin n\theta=0,$$ i.e. $$\sum\limits_{n=1}^\infty\left(\dfrac{\partial^2C(r,n)}{\partial r^2}+\dfrac{1}{r}\dfrac{\partial C(r,n)}{\partial r}-\dfrac{n^2C(r,n)}{r^2}\right)\sin n\theta=0.$$ Hence $$\dfrac{\partial^2C(r,n)}{\partial r^2}+\dfrac{1}{r}\dfrac{\partial C(r,n)}{\partial r}-\dfrac{n^2C(r,n)}{r^2}=0,$$ whose solution is $C(r,n)=A_nr^n+B_nr^{-n}$. Therefore $$u(r,\theta)=\sum\limits_{n=1}^\infty A_nr^n\sin n\theta+\sum\limits_{n=1}^\infty B_nr^{-n}\sin n\theta.$$
Proof of range of piecewise function
Let $m$ be any integer. It is either even or odd. In either case we shall show that $f(n)=m$ for some integer $n$. If $m$ is odd then $f(m+1)=m$ and if $m$ is even then $f(m-5)=m$. Thus we can take $n=m+1$ when $m$ is odd and $n=m-5$ when $m$ is even.
A continuous root for $(z-a)(z-b)$
The Möbius transformation $$T \colon z \mapsto \frac{z-a}{z-b}$$ maps $(\mathbb{C}\cup \{\infty\}) \setminus [a,b]$ biholomorphically to $U := \mathbb{C}\setminus (-\infty,0]$. On $U$, we have two branches of the square root, each of which we can use to define $\sqrt{T(z)}$. Then $$g \colon z \mapsto (z-b)\cdot \sqrt{T(z)}$$ is a holomorphic square root of $f$ on $\mathbb{C}\setminus [a,b]$. (The other is $-g$ of course.)
Proof for an n-th order inhomogeneous differential equation
Suppose, for contradiction, that the solution $y$ were not $C^{\infty}$. Then the left-hand side of your equation would not be $C^{\infty}$, but the right-hand side is $C^{\infty}$, which means that $y$ is not a solution, a contradiction. So if $y$ is a solution it has to be $C^{\infty}$; if it is not $C^{\infty}$ it can't be a solution.
$z\overline{w}\ne 1 $ Prove that $\left|\frac{w-z}{1-\overline{w}z}\right|\le 1$ if $|z|\le 1$ and $|w|\le 1 $
$$\left|\frac{w-z}{1-\overline{w}z}\right|\le 1\iff|w-z|\le|1-\overline{w}z|\iff|w-z|^2\le|1-\overline{w}z|^2\\\iff(w-z)(\overline{w}-\overline{z})\le(1-w\overline{z})(1-\overline{w}z)\iff(1-z\overline{z})(1-w\overline{w})\ge0\\\iff|z|\le1,|w|\le 1~~~~~\text{or}~~~~~|z|\ge1,|w|\ge1$$
A global optimum: $\max_{x} \frac{1}{2}\left\| X (a + b) \right\|_2^2 \ \text{s.t.} \ a^T X b \leq \delta;\ 0 < x \leq 1$, $X := {\rm Diag}(x)$
As for your edited question: you want to maximize $\sum_i (a_i+b_i)^2 x_i^2$ subject to $a_ib_ix_i\le\delta$ for each $i$ and $0 < x_i \le 1$ for each $i$. When you relax your constraint to insisting only that $a_i b_ix_i\le\delta$ and $0\le x_i\le1$ for all $i$, the constraint set becomes compact, and the maximum of the convex objective function occurs at an extreme point of the relaxed constraint set. Let $[l_i,u_i]=\{t: a_i b_it\le\delta\}\cap [0,1]$; then the maximum occurs at a point $x$ for which each component $x_i$ is either $l_i$ or $u_i$. Since the coefficient of $x_i^2$ in the objective function is non-negative, it is obvious that the maximum of the relaxed problem occurs when $x_i=u_i$ for each $i$. This point lies in the original constraint set, so it solves your unrelaxed optimization problem as well.
How to find centre, vertices, foci, focal radii, latus rectum... when they exist, of a general quadratic equation in x and y
One can use a list of equations to determine the property you require. Note, however, that in many cases it is easier to use derived value(s) as opposed to using equations that rely solely on the coefficients of the equation in general quadratic form. Below are examples of equations one can use.

Properties of an ellipse from the equation for conic sections in general quadratic form

Given the equation for conic sections in general quadratic form: $ a x^2 + b x y + c y^2 + d x + e y + f = 0 $

The equation represents an ellipse if: $ b^2 - 4 a c < 0 $ or, equivalently, $ 4 a c - b^2 > 0 $

The coefficient normalizing factor is given by: $ q = 64 {{f (4 a c - b^2) - a e^2 + b d e - c d^2} \over {(4ac - b^2)^2}} $

The distance between center and focal point (either of the two) is given by: $ s = {1 \over 4} \sqrt { |q| \sqrt { b^2 + (a - c)^2 }} $

The semi-major axis length is given by: $ r_\max = {1 \over 8} \sqrt { 2 |q| \sqrt{b^2 + (a - c)^2} - 2 q (a + c) } $

The semi-minor axis length is given by: $ r_\min = \sqrt {{r_\max}^2 - s^2} $

The latus rectum is given by: $ l = 2 {{ {r_\min}^2 } \over {r_\max}} $

The eccentricity is given by: $ g = {{s} \over {r_\max}} $

The distance between center and closest directrix point (either of the two) is given by: $ h = {{{r_\max}^2} \over {s}} $

The center of the ellipse is given by: $ x_\Delta = { b e - 2 c d \over 4 a c - b^2} $, $ y_\Delta = { b d - 2 a e \over 4 a c - b^2} $

The top-most point on the ellipse is given by: $ y_T = y_\Delta + {\sqrt {(2 b d - 4 a e)^2 + 4(4 a c - b^2)(d^2 - 4 a f)} \over {2(4 a c - b^2)}} $, $ x_T = {{-b y_T - d} \over {2 a}} $

The bottom-most point on the ellipse is given by: $ y_B = y_\Delta - {\sqrt {(2 b d - 4 a e)^2 + 4(4 a c - b^2)(d^2 - 4 a f)} \over {2(4 a c - b^2)}} $, $ x_B = {{-b y_B - d} \over {2 a}} $

The left-most point on the ellipse is given by: $ x_L = x_\Delta - {\sqrt {(2 b e - 4 c d)^2 + 4(4 a c - b^2)(e^2 - 4 c f)} \over {2(4 a c - b^2)}} $, $ y_L = {{-b x_L - e} \over {2 c}} $

The right-most point on the ellipse is given by: $ x_R = x_\Delta + {\sqrt {(2 b e - 4 c d)^2 + 4(4 a c - b^2)(e^2 - 4 c f)} \over {2(4 a c - b^2)}} $, $ y_R = {{-b x_R - e} \over {2 c}} $

The angle between the x-axis and the major axis is given by:

if $ (q a - q c = 0) $ and $ (q b = 0) $ then $ \theta = 0 $

if $ (q a - q c = 0) $ and $ (q b > 0) $ then $ \theta = {1 \over 4} \pi $

if $ (q a - q c = 0) $ and $ (q b < 0) $ then $ \theta = {3 \over 4} \pi $

if $ (q a - q c > 0) $ and $ (q b \ge 0) $ then $ \theta = {1 \over 2} {\arctan ({b \over {a - c}})} $

if $ (q a - q c > 0) $ and $ (q b < 0) $ then $ \theta = {1 \over 2} {\arctan ({b \over {a - c}})} + {\pi} $

if $ (q a - q c < 0) $ then $ \theta = {1 \over 2} {\arctan ({b \over {a - c}})} + {1 \over 2}{\pi} $

The focal points are given by: $ F_{1,x} = x_\Delta - s \cos (\theta) $, $ F_{1,y} = y_\Delta - s \sin (\theta) $, $ F_{2,x} = x_\Delta + s \cos (\theta) $, $ F_{2,y} = y_\Delta + s \sin (\theta) $

(I tried to enter the equations without error. If you find an error, please post a comment)
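Here is a sketch implementing a few of the formulas above (the test coefficients are an arbitrary ellipse, not from the question):

```python
import numpy as np

# a x^2 + b x y + c y^2 + d x + e y + f = 0  (an arbitrary test ellipse)
a, b, c, d, e, f = 2.0, 1.0, 3.0, -4.0, 2.0, -8.0
disc = 4 * a * c - b**2
assert disc > 0, "not an ellipse"

q = 64 * (f * disc - a * e**2 + b * d * e - c * d**2) / disc**2
s = 0.25 * np.sqrt(abs(q) * np.sqrt(b**2 + (a - c) ** 2))       # center-to-focus
r_max = 0.125 * np.sqrt(2 * abs(q) * np.sqrt(b**2 + (a - c) ** 2)
                        - 2 * q * (a + c))                       # semi-major axis
r_min = np.sqrt(r_max**2 - s**2)                                 # semi-minor axis
center = ((b * e - 2 * c * d) / disc, (b * d - 2 * a * e) / disc)
print(center, r_max, r_min, s / r_max)  # center, semi-axes, eccentricity
```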
Using an energy argument to prove a solution is unique for a PDE
Note that one must assume $\alpha \geq 0$. Your proof is correct; it is only the last few implications that are (perhaps) too quick. You wrote that $\Delta v=0$ implies $v_1 = v_2$, but were it the case, you would not need to do this long computation. Let me sum it up: You have $\nabla v \equiv 0 $ in $D$ and $v=0$ on $\partial D$. The critical part is deducing $v\equiv 0$. The gradient vanishes, so $v$ must be a constant function on each connected component of $D$. That constant must be $0$. It is worthwhile to understand when the fact that $v$ is harmonic comes into play. It does in the beginning of the calculations, in the first integral. Also, the successful use of Green's identity is due to the fact that the expression involved the $\Delta$ operator, but that's that. Another tactic is to use the maximum principle, which again relies heavily on the fact that $v$ is harmonic.
Can every locally compact Hausdorff space be recognized as a subspace of a cube that has an open underlying set?
Are these observations correct? Yes. Can locally compact Hausdorff spaces also be classified (up to homeomorphisms) as exactly the subspaces of cubes that have an open subset of the cube as underlying set? No. Compact spaces are locally compact, and so the simplest counterexample is a singleton $\{*\}$. Also, if you take any locally compact space $X$ and the disjoint union $Z:=X\sqcup Y$ with a compact space $Y$, then $Z$ cannot be an open subset of a cube even though it is locally compact.
Show that this sum is divergent
$$\sum_{j=1}^{+\infty}\frac{1}{j\beta}=\frac{1}{\beta}\sum_{j=1}^{+\infty}\frac{1}{j}$$ is not convergent, being a constant multiple of the harmonic series; hence the whole series cannot be convergent.
Zeros of a complex polynomial
First dispose of real roots: e.g. $P(z) = z^2 (z+1)^2 + 2 (z+1/4)^2 + 15/8$. Let's look at what happens to $P(z)$ as $z$ goes around a contour around part of the first quadrant. As $z$ goes from $0$ to some large positive $R$ on the real axis, $P(z)$ increases from $2$ to $P(R) \gg 0$. Then go on the quarter-arc of the circle $|z| = R$ from $R$ to $iR$: $P(z)$ goes almost in a circle, ending at $P(iR)$ which is in the fourth quadrant. Now come back in to the origin on the imaginary axis. Note that $\text{Re}(P(it)) = t^4 - 3 t^2 + 2 = 0$ at $t = 1$ and $t=\sqrt{2}$, while $\text{Im}(P(it)) = - 2 t^3 + t = 0$ at $t=0$ and $t = \sqrt{2}/2$. So you hit the negative imaginary axis at $t=\sqrt{2}$ and again at $t=1$, then the positive real axis at $t=\sqrt{2}/2$ and $t=0$, but not the negative real or positive imaginary axis. Thus as $z$ goes around this contour, the winding number of $P(z)$ around $0$ is $1$, indicating that there is exactly one zero of $P(z)$ inside the contour. (The original answer included a plot of the image of the contour for the case $R=1.6$.)
Isolate Costs in NPV equation
Let $\frac{1}{\texttt{1+discount}}=a$. The equation becomes $NPV=-CapEx+\sum_{i=0}^n\texttt{(Revenue-Costs)}\cdot a^i $, i.e. $NPV=-CapEx+\texttt{(Revenue-Costs)}\cdot\sum_{i=0}^n a^i $. Here $\sum_{i=0}^n a^i $ is the partial sum of a geometric series: $\sum_{i=0}^n a^i =\frac{1-a^{n+1}}{1-a}$. Therefore $NPV=-CapEx+\texttt{(Revenue-Costs)}\cdot\frac{1-a^{n+1}}{1-a} $. Adding CapEx to both sides of the equation and then dividing by $\frac{1-a^{n+1}}{1-a} $: $(NPV+CapEx)\cdot \frac{1-a}{1-a^{n+1}}=\texttt{Revenue-Costs} \quad \color{blue}{(1)}$ Now it is just one step to isolate the costs. Further transformations: Multiplying the equation by $(-1)$ flips the signs: $-(NPV+CapEx)\cdot \frac{1-a}{1-a^{n+1}}=\texttt{-Revenue+Costs}\quad | +\texttt{Revenue} $ $\texttt{Revenue}-(NPV+CapEx)\cdot \frac{1-a}{1-a^{n+1}}=\texttt{Costs}\quad $ Pay attention to the negative sign on the RHS. The costs can be higher or lower than the revenue. The (constant) revenue must be higher than the (constant) costs if you want a positive NPV. But this is not sufficient (only a necessary condition), because of the CapEx. This can be seen in $\color{blue}{(1)}$.
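A small sketch of the final formula (the function and argument names, and the sample numbers, are illustrative):

```python
def costs_from_npv(npv, capex, revenue, discount, n):
    # Isolate constant Costs from NPV = -CapEx + (Revenue - Costs) * sum_{i=0}^n a^i.
    a = 1 / (1 + discount)
    annuity = (1 - a ** (n + 1)) / (1 - a)   # geometric partial sum
    return revenue - (npv + capex) / annuity

print(costs_from_npv(npv=100.0, capex=500.0, revenue=250.0, discount=0.05, n=9))
```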
Show that the language wb^|w| (any string w over {a,b} followed by as many b’s as the size of w) is not regular.
Let $L=\left\{wb^{|w|}:w\in\{a,b\}^*\right\}$; you want to prove that $L$ is not regular. This can easily be done with the pumping lemma, but what you’ve done doesn’t make much sense. First, there is no reason to bring in the language $\left\{w^2b^{|w|}:w\in\{a,b\}^*\right\}$, and it is not at all clear that this language would be regular if $L$ were. Next, it is not at all clear what you are doing with $W$: you’ve given us no explanation of the bulleted lines. Finally, if $X=XYZ$, then $Y=Z=\epsilon$, the empty word, which I’m sure is not what you intended. To use the pumping lemma properly, suppose that $L$ is regular, let $p$ be the pumping length, and let $u=a^pb^p$; setting $w=a^p$, we can see that $u\in L$. The pumping lemma now tells us that $u$ can be decomposed as $u=xyz$, where $|xy|\le p$, $|y|\ge 1$, and $xy^kz\in L$ for each integer $k\ge 0$. Since $|xy|\le p$, $xy$ must be contained in the $a^p$ part of $u$. Thus, there are integers $r\ge 0$ and $s\ge 1$ such that $x=a^r$, $y=a^s$, and $r+s\le p$. Of course this means that $z=a^{p-r-s}b^p$. Then for each $k\ge 0$ we have $$xy^kz=a^ra^{ks}a^{p-r-s}b^p=a^{p+(k-1)s}b^p\in L\,.$$ But $p+(k-1)s > p$ whenever $k > 1$, and in particular, $xy^2z=a^{p+s}b^p$. If we can show that this word is not in $L$, we’ll have a contradiction proving that $L$ is not regular after all. To show that $a^{p+s}b^p\notin L$, we have to show that there is no $w\in\{a,b\}^*$ such that $a^{p+s}b^p=wb^{|w|}$. Suppose that $a^{p+s}b^p=wb^{|w|}$; there are only $p$ $b$s in $a^{p+s}b^p$, so $|w|\le p$. But then $$\left|wb^{|w|}\right|=2|w|\le 2p < 2p+s=\left|a^{p+s}b^p\right|\,,$$ since $s\ge 1$, so in fact $wb^{|w|}$ cannot be equal to $a^{p+s}b^p$, and $a^{p+s}b^p$ cannot be in $L$. This is the contradiction that we wanted, showing that $L$ is not regular.
How to show the Fréchet characterization of differentiability 2
Let $U\subset \Bbb R^{n} $ and $f:U\to \Bbb R$. Show that: $f$ is differentiable at $x_0\in U$ iff there exist $A\in L(\Bbb R^{n},\Bbb R)$ and $r > 0$ such that $\lim_{t\rightarrow 0^{+}}\frac{f(x_0 +tv)-f(x_0)}{t}=Av$ uniformly in $v\in r\mathcal{S}^{n-1}$, where $\mathcal{S}^{n-1}$ is the unit sphere. Proof: $(\Rightarrow)$ Since $f$ is differentiable at $x_0\in U$, there exists $A\in L(\Bbb R^{n},\Bbb R)$ such that $$\lim_{h\to 0} \frac{\|f(x_0+h)-f(x_0)-Ah\|}{\| h\|}=0$$ So, for all $\varepsilon > 0 $, there is $\delta_\varepsilon > 0$ such that, for all $h\in \Bbb R^n$, if $\|h\| < \delta_\varepsilon$ then $$\frac{\|f(x_0+h)-f(x_0)-Ah\|}{\| h\|} < \varepsilon$$ It follows immediately that, for any $r > 0$ and all $\varepsilon > 0 $, there is $\delta'= \frac{\delta_{\varepsilon/r}}{r} > 0$ such that, for all $t > 0$, if $t < \delta'$, then, for all $v\in r\mathcal{S}^{n-1}$, $\|tv\|=t\cdot r < \delta_{\varepsilon/r}$ and so $$\frac{\|f(x_0+tv)-f(x_0)-tAv\|}{t \cdot r} < \varepsilon/r$$ So, $$\left \| \frac{f(x_0+tv)-f(x_0)}{t} - Av \right \| =\frac{\|f(x_0+tv)-f(x_0)-tAv\|}{t} < \varepsilon$$ Note that $\delta'$ does not depend on $v$. So, we have proved that $\lim_{t\rightarrow 0^{+}}\frac{f(x_0 +tv)-f(x_0)}{t}=Av$ uniformly in $v\in r\mathcal{S}^{n-1}$. $(\Leftarrow)$ Suppose there exist $A\in L(\Bbb R^{n},\Bbb R)$ and $r > 0$ such that $\lim_{t\rightarrow 0^{+}}\frac{f(x_0 +tv)-f(x_0)}{t}=Av$ uniformly in $v\in r\mathcal{S}^{n-1}$. Then, for all $\varepsilon > 0 $, there is $\delta_\varepsilon > 0$ such that, for all $t > 0$, if $t < \delta_\varepsilon $, then, for all $v\in r\mathcal{S}^{n-1}$, $$\left \| \frac{f(x_0+tv)-f(x_0)}{t} - Av \right \| < \varepsilon$$ So, for all $\varepsilon > 0$, there is $\delta'=r \delta_{r\varepsilon} > 0$, such that for all $h\in \Bbb R^n$, if $\|h\| < \delta'=r \delta_{r\varepsilon}$, then $t= \frac{\|h\|}{r} < \delta_{r\varepsilon}$ and $v=r \frac{h}{\|h\|} \in r\mathcal{S}^{n-1}$ and $h=tv$. So \begin{align*} \frac{\|f(x_0+h)-f(x_0)-Ah\|}{\| h\|} &= \left \| \frac{f(x_0+tv)-f(x_0)- tAv }{t r} \right \| =\\ &= \frac{1}{r}\left \| \frac{f(x_0+tv)-f(x_0)}{t} - Av \right \| < \frac{1}{r}\, r\varepsilon = \varepsilon \end{align*} So, we have proved that $$\lim_{h\to 0} \frac{\|f(x_0+h)-f(x_0)-Ah\|}{\| h\|}=0$$ That means $f$ is differentiable at $x_0\in U$. Remark: In fact, we proved a stronger result. We proved: Let $U\subset \Bbb R^{n} $ and $f:U\to \Bbb R$. Then the following three statements are equivalent:

1. There is $A\in L(\Bbb R^{n},\Bbb R)$ and there is $r > 0$ such that $\lim_{t\rightarrow 0^{+}}\frac{f(x_0 +tv)-f(x_0)}{t}=Av$ uniformly in $v\in r\mathcal{S}^{n-1}$ (where $\mathcal{S}^{n-1}$ is the unit sphere).

2. $f$ is differentiable at $x_0\in U$.

3. There is $A\in L(\Bbb R^{n},\Bbb R)$ such that, for all $r > 0$, $\lim_{t\rightarrow 0^{+}}\frac{f(x_0 +tv)-f(x_0)}{t}=Av$ uniformly in $v\in r\mathcal{S}^{n-1}$ (where $\mathcal{S}^{n-1}$ is the unit sphere).

In the first part of our proof, we proved that $(2 \Rightarrow 3)$. In the second part of our proof, we proved that $(1 \Rightarrow 2)$. The fact that $(3 \Rightarrow 1)$ is trivial.
How to find $ P $ such that $ A^\top = PAP^{-1} $?
Assuming that $A$ is diagonalisable (the algebraic multiplicity of each eigenvalue is equal to its geometric multiplicity), you can find a matrix $X$ (consisting of the eigenvectors of $A$) such that $D = X^{-1}AX$, where $D$ is a diagonal matrix. So then $A = XDX^{-1}$, and $$A^{T} = (XDX^{-1})^{T} = (X^{-1})^{T}D^{T}X^{T} = (X^{T})^{-1}DX^{T},$$ using that $D^{T}=D$ since $D$ is diagonal. But $D = X^{-1}AX$, so $A^{T} = Y^{-1}X^{-1}AXY$ where $Y = X^{T}$, i.e. $$A^{T} = (XY)^{-1}A(XY).$$ So you can take $P = (XY)^{-1}$.
Eigenvalues of an iterative and random selection between two different linear transformations.
Let's consider a simple case first. Let $A,B$ be simultaneously diagonalizable matrices; WLOG, assume they are diagonal. Then the problem becomes computing infinite products of discrete random variables $X_n$, each of which takes the value $\lambda$ or $\mu$. If either of these eigenvalues is $0$ (and appears in the product), then the result is trivial. Otherwise, if either $\lambda$ or $\mu$ is not equal to $1$, then the product will diverge for (almost) any distribution (from which $X_n$ is chosen) with a non-zero, non-symmetric probability of getting $\lambda$ or $\mu$. Now for the general (far harder) case: my educated guess is that the "eigenvalues of the overall process" will be undefined or fluctuate wildly when permuting the infinite word. There are likely many more details here, but this is as far as I can confidently say.
Rate word problem given 3 different speeds and times
Let the total time taken be $t$. Then we have that $\frac{1}{4}t$ is the time he swam, $\frac{1}{3}t$ is the time he biked, and $\frac{5}{12}t$ is the time he spent running. We know that the total distance is $40$ as well as the velocities involved in each segment of the race. Using the equation $d=vt$, we get $40=\Sigma \,vt\rightarrow 40=\frac{5}{4}t+\frac{15}{3}t+\frac{50}{12}t$. You can just solve that for $t$. (I'm just multiplying the time taken in each segment of the race by the velocity at which the racer completed that segment.)
Directional Derivative (3 variables)
To find the relative vector between the points, you merely subtract the coordinates of $M$ from the coordinates of $N$: $$N-M = (5,5,15) - (2,1,3) = (3, 4, 12)$$ This vector is not a unit vector, but if you divide it by its magnitude, you will have a unit vector. Do you know how to find the magnitude of a vector?
Tensor Product with Trivial Vector Space
Notice that the space of bilinear maps $f:V\times \{0\}\to k$ consists of exactly the zero map, therefore the constant map $w:V\times \{0\}\to\{0\}$ satisfies the universal property of the tensor product: $w$ is bilinear and any bilinear map $f:V\times\{0\}\to k$ is the constant zero map, therefore the linear map $0:\{0\}\to k$ satisfies $f=0\circ w$. $0$ is also the only linear map $\{0\}\to k$, so it's a fortiori the only linear map $g$ such that $f=g\circ w$.
Turing machine that accepts language in n+1
You can make a Turing machine for this which operates as follows. It has $n+2$ states, where $n$ is the same as the $n$ in $\Sigma^n$. It starts in state $q_1$ on the first character of the word, and goes right on the tape and goes to state $q_2$ if the first character is not blank. It keeps doing this until it is in state $q_n$. Again it goes right if the character is not blank, and now it goes to state $q_{n+1}$, and goes right on the tape. State $q_{n+1}$ goes to state $q_{n+2}$ if the tape IS blank on this character, and $q_{n+2}$ is an accepting state. We made $n+1$ moves to get from $q_1$ to $q_{n+2}$. Note that the machine gets stuck (with an undefined transition function) if the word is too long or too short. In this case it may not take time $n+1$. If you want it to take time $n+1$ whether accepting or rejecting, then you can view the states described above as on a line from $q_1 \implies q_2 \implies ... \implies q_{n+2}$. Simply put another line of states underneath, where $r_1 \implies r_2 \implies ... \implies r_{n+2}$ and $r_{n+2}$ is not an accepting state. Then change the transition function so that if the machine would get stuck on the transition from $q_k \implies q_{k+1}$, instead have it go to $r_{k+1}$ and just follow the $r$ sequence till the end regardless of what is on the tape. Then the machine will take time $n+1$ on all inputs, accepting or rejecting.
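A quick simulation of this machine (a sketch; the encoding of states, the blank symbol, and the return values are illustrative choices):

```python
def run(word: str, n: int):
    # States 1..n step right over non-blanks; state n+1 accepts on a blank,
    # entering the accepting state q_{n+2} on move n+1.
    tape = list(word) + ["_"]            # "_" plays the role of the blank
    state, pos, moves = 1, 0, 0
    while True:
        sym = tape[pos] if pos < len(tape) else "_"
        if state <= n:
            if sym == "_":
                return ("stuck", moves)  # word too short
            state, pos, moves = state + 1, pos + 1, moves + 1
        else:                            # state n+1
            if sym == "_":
                return ("accept", moves + 1)
            return ("stuck", moves)      # word too long

print(run("abc", 3))  # ('accept', 4): n+1 = 4 moves
```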
Find intersection of hyperbola and ellipse.
I'd follow this answer of mine. Turn your conics into symmetric matrices: $$A=\begin{bmatrix}4&-1&0\\-1&2&1\\0&1&-8\end{bmatrix}\qquad B=\begin{bmatrix}2&0&0\\0&-1&0\\0&0&-1\end{bmatrix}$$ Note that I scaled your first equation by a factor of $2$ because symmetrically distributing the off-diagonal terms divides them by two and I wanted to avoid fractions. Find linear combinations with zero determinant. $$\det(\lambda A+\mu B)=-60\lambda^3-9\lambda^2\mu+16\lambda\mu^2+2\mu^3$$ This does not factor, so you can't expect rational solutions. In particular zero is not a solution for either $\lambda$ or $\mu$ (although $\lambda=\mu=0$ is a solution, but we ignore that). So set $\lambda=1$ and compute $\mu$ for that: $$\mu_1\approx-8.10\qquad\mu_2\approx-1.88\qquad\mu_3\approx1.97$$ The exact solutions are tedious to write out. $A+\mu_iB$ is a degenerate conic formed by a pair of lines passing through the points of intersection. I'll continue with $\mu_1$ for my numeric examples. $$C_1=A+\mu_1B\approx\begin{bmatrix}-12.2&-1&0\\-1&10.1&1\\0&1&0.0982\end{bmatrix}$$ The adjugate of that is $$\operatorname{adj}C_1\approx\begin{bmatrix}-0.00805&0.0982&-1\\0.0982&-1.20&12.2\\-1&12.2&-124\end{bmatrix}$$ It has rank 1, so all rows and all columns are multiples of one another. Take any non-zero row or column of that, e.g. the first row, and you have the homogeneous coordinates where the two lines intersect. (Divide by the last coordinate if you want to have regular $(x,y)$ coordinates, but you don't need that.) Now use these coordinates to form an anti-symmetric matrix $$P_1\approx\begin{bmatrix}0&-1&-0.0982\\1&0&-0.00805\\0.0982&0.00805&0\end{bmatrix}$$ Then consider $$C_1+\lambda P_1\approx\begin{bmatrix}-12.2&-1-\lambda&-0.0982\lambda\\-1+\lambda&10.1&1-0.00805\lambda\\0.0982\lambda&1+0.00805\lambda&0.0982\end{bmatrix}$$ Take any $2\times2$ minor of this, e.g. $$\begin{vmatrix}-12.2&-1-\lambda\\-1+\lambda&10.1\end{vmatrix}\approx\lambda^2-124\overset!=0$$ From this you can conclude that $\lambda\approx\pm11.1$. Choosing the positive solution (an arbitrary choice) you get $$C_1+\lambda P_1\approx\begin{bmatrix}-12.2&-12.1&-1.09\\10.1&10.1&0.910\\1.09&1.09&0.0982\end{bmatrix}$$ This matrix has again rank one, so its rows are multiples of one another, and its columns are multiples of one another. Pick any non-zero row and any non-zero column, and you have the equations of two of the lines. $$g_1\approx[-12.2:-12.1:-1.09]\qquad h_1\approx[-12.2:10.1:1.09]$$ Repeat these steps for $\mu_2$. I won't print this here, but in the end you obtain $$g_2\approx[0.248:-1.20:1.23]\qquad h_2\approx[0.248:-0.799:-1.23]$$ Now intersect one of the lines from round $1$ with one from round $2$ by computing the cross product to obtain one of your points of intersection. Divide by the last coordinate to dehomogenize. \begin{align*}q_1&=g_1\times g_2\approx[-16.3:14.7:17.7]\to(-0.921, 0.835)\\q_2&=g_1\times h_2\approx[14.1:-15.3:12.8]\to(1.10, -1.20)\\q_3&=h_1\times g_2\approx[13.8:15.3:12.1]\to(1.14, 1.26)\\q_4&=h_1\times h_2\approx[-11.6:-14.7:7.23]\to(-1.61, -2.04)\end{align*}
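A sketch of the first step in code, recovering the three values of $\mu$ numerically (sampling the cubic at four points to recover its coefficients is just one convenient approach):

```python
import numpy as np

A = np.array([[4, -1, 0], [-1, 2, 1], [0, 1, -8]], dtype=float)
B = np.diag([2.0, -1.0, -1.0])

# det(A + mu*B) is a cubic in mu; fit its coefficients from four samples.
mus = np.array([0.0, 1.0, 2.0, 3.0])
dets = [np.linalg.det(A + m * B) for m in mus]
coeffs = np.polyfit(mus, dets, 3)
print(np.roots(coeffs))   # approximately -8.10, -1.88, 1.97
```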
Could there be a formula for this?
There is indeed. Let $g(m,n)=f(m-n+1,n)$, so that for instance $g(5,3)=f(3,3)=14$. The corresponding table for $g$ is: $$\begin{array}{} 1\\ 2&2\\ 3&4&3\\ 4&7&7&4\\ 5&11&14&11&5\\ 6&16&25&25&16&6 \end{array}$$ Now $$\begin{align*} g(m,n)&=f(m-n+1,n)\\ &=f(m-n,n)+f(m-n+1,n-1)\\ &=f\big((m-1)-n+1,n\big)+f\big((m-1)-(n-1)+1,n-1\big)\\ &=g(m-1,n)+g(m-1,n-1)\;, \end{align*}$$ which is the recurrence that generates Pascal’s triangle of binomial coefficients. Moreover, $g$’s table looks a lot like Pascal’s triangle in overall form: $$\begin{array}{} 1\\ 1&1\\ 1&2&1\\ 1&3&3&1\\ 1&4&6&4&1\\ 1&5&10&10&5&1\\ 1&6&15&20&15&6&1 \end{array}$$ Ignore that first column of Pascal’s triangle: $$\begin{array}{} 1\\ 2&1\\ 3&3&1\\ 4&6&4&1\\ 5&10&10&5&1\\ 6&15&20&15&6&1 \end{array}\tag{1}$$ Subtract this from $g$’s triangle: $$\begin{array}{} 0\\ 0&1\\ 0&1&2\\ 0&1&3&3\\ 0&1&4&6&4\\ 0&1&5&10&10&5 \end{array}\tag{2}$$ Ignore the first column of this, and put $1$’s along the diagonal, and Pascal’s triangle shows up again. Now the $(m,n)$-entry in $(1)$ is $\binom{m}n$, and if $(2)$ really is Pascal’s triangle, its $(m,n)$-entry is $\binom{m-1}{n-2}$, so we conjecture that $g(m,n)=\binom{m}n+\binom{m-1}{n-2}$ and hence that $$f(m,n)=g(m+n-1,n)=\binom{m+n-1}n+\binom{m+n-2}{n-2}\;.$$ We first verify the recurrence: $$\begin{align*} \binom{m+n-1}n+\binom{m+n-2}{n-2}&=\binom{m+n-2}{n-1}+\binom{m+n-2}n+\binom{m+n-3}{n-3}+\binom{m+n-3}{n-2}\\ &=\left[\binom{m+(n-1)-1}{n-1}+\binom{m+(n-1)-2}{n-3}\right]\\ &\qquad\qquad+\left[\binom{(m-1)+n-1}n+\binom{(m-1)+n-2}{n-2}\right]\\ &=f(m,n-1)+f(m-1,n)\;, \end{align*}$$ as desired. Finally, $$f(1,n)=\binom{1+n-1}n+\binom{1+n-2}{n-2}=1+(n-1)=n$$ and $$f(m,1)=\binom{m+1-1}1+\binom{m+1-2}{-1}=m+0=m\;,$$ so the initial conditions are also satisfied. To repeat, $$f(m,n)=\binom{m+n-1}n+\binom{m+n-2}{n-2}\;,$$ which can easily be manipulated into a variety of other forms involving binomial coefficients, e.g., $$f(m,n)=g(m+n-1,n)=\binom{m+n-2}n+\binom{m+n-2}{n-1}+\binom{m+n-2}{n-2}\;.$$
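A brute-force check of the closed form against the recurrence (a sketch; the ranges are arbitrary):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def f(m, n):
    # Recurrence f(m,n) = f(m,n-1) + f(m-1,n) with f(1,n) = n, f(m,1) = m.
    if m == 1:
        return n
    if n == 1:
        return m
    return f(m, n - 1) + f(m - 1, n)

for m in range(1, 10):
    for n in range(1, 10):
        closed = comb(m + n - 1, n) + (comb(m + n - 2, n - 2) if n >= 2 else 0)
        assert f(m, n) == closed
print(f(3, 3))  # 14, as in the table
```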
how to solve this permutation
I don't have a closed-form solution but instead a formula for it. I hope it's of some value. A building block towards the answer is counting the number of solutions to an equation of the following form for any $0\leq k\leq M$ $$x_1 + x_2 + \cdots + x_M = k,$$ where the $x_i$ are non-negative and represent the entries in any given row. By the "stars and bars" method, the number of solutions is $\binom{M+k-1}{k}$. So we need to sum over all combinations of $k$ values for each row where such $k$ values are non-decreasing in going from row $1$ to $N$. Thus, the formula is: $$T_{N,M} = \sum_{j_1=0}^{M} \binom{M+j_1-1}{j_1} \sum_{j_2=j_1}^{M} \binom{M+j_2-1}{j_2} \cdots \sum_{j_N=j_{N-1}}^{M} \binom{M+j_N-1}{j_N}.$$ Each index $j_i$ represents the sum of entries for the $i^{th}$ row. For example, with $N=2,M=2$, we have \begin{eqnarray*} T_{2,2} &=& \binom{1}{0}\left[\binom{1}{0} + \binom{2}{1} + \binom{3}{2} \right] + \binom{2}{1}\left[\binom{2}{1} + \binom{3}{2} \right] + \binom{3}{2}\left[\binom{3}{2} \right] \\ &=& 25. \end{eqnarray*}
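The nested sums translate directly into code (a sketch; recursing over the levels is one way to handle the variable number of sums):

```python
from math import comb

def T(N, M):
    # Row sums j_1 <= j_2 <= ... <= j_N; a row with sum k has C(M+k-1, k) fillings.
    def rec(level, lo):
        if level == N:
            return 1
        return sum(comb(M + j - 1, j) * rec(level + 1, j)
                   for j in range(lo, M + 1))
    return rec(0, 0)

print(T(2, 2))  # 25, matching the worked example
```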
Why doesn't $1/i$ equal $i$?
Here, look at the function $f(x) = \sqrt{x}$ in Desmos: https://www.desmos.com/calculator/odlrlh9ozn Notice anything? It is only defined over $[0, \infty)$. This also means that an identity of square roots like $$\sqrt{a}\sqrt{b} = \sqrt{ab}$$ is only valid for non-negative numbers. So when you are squaring $i$ you are essentially doing $$i^2 = \sqrt{-1}\sqrt{-1} = \sqrt{(-1)(-1)} = 1$$ but also $$((-1)^\frac{1}{2})^{2}=(-1)^{1}=-1$$ This "phenomenon" is not a phenomenon at all; it is a contradiction, because negative numbers are not in the domain of the square root function. Imaginary numbers are not included in the set of all real numbers, so no wonder they don't act at all like real numbers!
Why is this function not unbounded?
Well, I can see why $\lim_{|x| \rightarrow \infty} |f(x)| = \infty $: In the condition $\forall \epsilon > 0, \exists \delta \ s.t. \ |x-1| \ge \delta \rightarrow |f(x) - f(1)| \ge \epsilon $, choose $\epsilon$ very large and you get $|f(x) - f(1)| \ge \epsilon$, so $|f(x)| \ge \epsilon- |f(1)| $. I agree with you that "$f \ \text{is not unbounded} $", which, to me, is the same as saying $f$ is bounded, contradicts this. So, I am not sure what is going on here. Go ahead, somebody: prove me wrong. (It won't be the first time.)
Triple integrals using spherical co-ordinates
Here, from the integrals, you can easily deduce that: The function to integrate is $f(x,y,z)=z^2\sqrt{x^2+y^2+z^2}$ and the boundaries to integrate over are: $0 \leq z \leq \sqrt{4-x^2-y^2}$, $-\sqrt{4-x^2} \leq y \leq \sqrt{4-x^2}$ and $-2 \leq x \leq 2$. Basically, you are integrating over a hemisphere of radius 2, which lies on the positive side of the $z$ axis. So, let $$ x \to p\sin\phi \cos\theta $$ $$y \to p\sin\phi \sin\theta$$ $$z \to p\cos\phi $$ with the limits $ 0 \leq p \leq 2$, $ 0 \leq \phi \leq \pi/2$ and $ 0 \leq \theta \leq 2\pi$. So the integral will become $$ \int_{0}^{2} \int_{0}^{\pi/2} \int_{0}^{2\pi} (p^2\cos^2 \phi)\cdot p\cdot p^2\sin \phi \ d\theta\, d\phi\, dp = (2\pi) \ \left.\dfrac{p^6}{6}\right|_{0}^{2} \ \left.\dfrac{-\cos^3 \phi}{3}\right|_{0}^{\pi/2} =\dfrac{64}{9} \pi.$$
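A numerical cross-check of the result (a sketch using SciPy; note that `tplquad` takes the integrand as `f(theta, phi, p)`, innermost variable first):

```python
import numpy as np
from scipy import integrate

# Integrand in spherical coordinates: (p^2 cos^2(phi)) * p * p^2 sin(phi).
integrand = lambda theta, phi, p: p**5 * np.cos(phi) ** 2 * np.sin(phi)
val, _ = integrate.tplquad(integrand, 0, 2,        # p
                           0, np.pi / 2,           # phi
                           0, 2 * np.pi)           # theta
print(val, 64 * np.pi / 9)   # both approximately 22.34
```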
Why $\lim_{t\to 0} \frac{t^2\sin^2 t}{(t+\sin t)(t-\sin t)}=\lim_{t\to 0} \frac{\sin^2 t}{2\left(1-\frac{\sin t}{t}\right)}$
Note that $$\displaystyle \begin{align} \frac{t^2\sin^2t}{(t+\sin t)(t-\sin t)}&=\frac{t^2\sin^2t}{t^2\left(1+\frac{\sin t}{t}\right)\left(1-\frac{\sin t}{t}\right)} =\frac{\sin^2t}{\left(1+\frac{\sin t}{t}\right)\left(1-\frac{\sin t}{t}\right)} \\ \end{align}$$ then $$\displaystyle \begin{align} \lim_{t\to0}\frac{t^2\sin^2t}{(t+\sin t)(t-\sin t)}&=\lim_{t\to0}\frac{\sin^2t}{\left(1+\frac{\sin t}{t}\right)\left(1-\frac{\sin t}{t}\right)} \\ &=\left(\lim_{t\to0} \frac{1}{1+\frac{\sin t}{t}} \right) \left(\lim_{t\to0} \frac{\sin^2t}{1-\frac{\sin t}{t}} \right)\\ &= \frac{1}{2}\lim_{t\to0} \frac{\sin^2t}{1-\frac{\sin t}{t}}\\&=\lim_{t\to0} \frac{\sin^2t}{2\left(1-\frac{\sin t}{t}\right)} \end{align}$$ because $\displaystyle \lim_{t\to0}{\frac{\sin t}{t}}=1$.
Can we write $Bv_i$ in the form $\beta_1v_1+\beta_2v_2+\beta_3v_3+...\beta_nv_n$ where $\beta_1,\beta_2,..,\beta_n\in\Bbb R$?
If $B$ is a matrix then $Bv_i$ does not make sense in general, so I will assume you actually meant that $B : V \to V$ and $BL = LB$ (although this assumption is unnecessary). Since $Bv_i \in V$, and $\{v_1, v_2, \ldots, v_n\}$ is a basis for $V$, we have $Bv_i = \beta_1 v_1 + \cdots + \beta_n v_n$ for some $\beta_1, \ldots, \beta_n \in \mathbb{R}$ by the definition of basis.
Functions of the form $f(x^2)=f^2(x)$
Then your $f(x)$ is anything that satisfies: $f(0)=0$ or $1$; $f(1)=0$ or $1$ and $f(-1)=c(-1) f(1)$; for $x > 1$, $f(x) = a(y-\lfloor y \rfloor)^{2^{\lfloor y \rfloor}}$ and $f(-x)=c(-x)f(x)$, where $y = \log_2(\log_2(x))$; for $0 < x < 1$, $f(x) = b(z-\lfloor z \rfloor)^{2^{\lfloor z \rfloor}}$ and $f(-x)=c(-x) f(x)$, where $z = \log_2(\log_2(1/x))$; for any three arbitrary functions $a:[0,1) \to \mathbb R_{\ge 0}$ and $b:[0,1) \to \mathbb R_{\ge 0}$ and $c:\mathbb R_{\lt 0} \to \{-1,+1\}$. So you have choices for $f(0)$ and $f(1)$, plus the three arbitrary functions $a$ and $b$ and $c$.
If $f \circ f$ is a contractive map, is $f$ a contractive map as well?
No, it is not true, even supposing $f$ smooth: let $f\colon\mathbb{R}^2\to\mathbb{R}^2$ be the linear map defined by the matrix $$M=\begin{pmatrix} & a \\ b & \end{pmatrix},$$ then $f\circ f$ is defined by $$M^2=\begin{pmatrix} ab & \\ & ab\end{pmatrix}.$$ If you choose $|a| > 1$ and $|b| < 1/|a|$, then $f\circ f$ is a contraction while $f$ is not (in fact it is a dilation in one direction).
Polynomial Function from $R^n$ to $R^n$
Probably a polynomial in several variables for each coordinate, like $$ f(x,y) = (x^2y + 2y + 1, x + y^{13}) $$ mapping $\mathbb{R}^2$ to itself. (The link you posted doesn't work for me so I can't check.)
An orthogonal matrix which sends a vector to other vector with same length.
Do you know Gram-Schmidt? If so, complete $u$ to an orthogonal basis $\mathcal{B}_1$ and do the same to $v$ (let's call its o.b. $\mathcal{B}_2$), and take the linear map which sends one to the other. In more detail, note that if $\Vert u\Vert=0$ the result is trivial. If not, complete $u$ to any basis $\{u,e_2,e_3,\cdots,e_n\}$ of $\mathbb{R}^n$, and do the same for $v$: $\{v,f_2,f_3,\cdots,f_n\}$. Apply Gram-Schmidt to both, yielding orthonormal bases $$\mathcal{B}_1=\{\widetilde{u},\widetilde{e_2},\widetilde{e_3},\cdots, \widetilde{e_n}\}$$ and $$\mathcal{B}_2=\{\widetilde{v},\widetilde{f_2},\widetilde{f_3},\cdots, \widetilde{f_n}\},$$ where $\widetilde{u}=\frac{u}{\Vert u\Vert}$ and $\widetilde{v}=\frac{v}{\Vert v\Vert}$. Consider the linear transformation $A$ which sends the elements of $\mathcal{B}_1$ to $\mathcal{B}_2$ "orderly" (i.e., $\widetilde{u} \mapsto \widetilde{v}$ and $\widetilde{e_i} \mapsto \widetilde{f_i}$ for all $i$). This is by construction an orthogonal linear map, and thus will have an orthogonal matricial representation in the canonical basis*. Note also that since $A\widetilde{u}=\widetilde{v}$ and $A$ is linear, we have that $$A(u)=A(\Vert u\Vert \cdot\widetilde{u})=\Vert u \Vert \cdot \widetilde{v}=\Vert v \Vert \cdot \widetilde{v}=v.$$ As a sidenote, note that such a matrix will depend on the initial bases you take. *Just to be extremely clear, since you explicitly say in the comments that you are dealing with the definition that $A^TA=AA^T=I$, note that this is equivalent to asking that the column vectors of the matrix are orthonormal. The column vectors in the canonical basis are precisely $A\cdot E_i$, where $\{E_i\}$ is the canonical basis. Therefore, we must show that $$\langle A E_i,A E_j \rangle=\delta_{i,j}.$$ We will prove more generally that $\langle Av, Aw \rangle=\langle v,w\rangle$ (which is usually the definition of an orthogonal map). To see this, note that (letting $\widetilde{e_1}:=\widetilde{u}$ and $\widetilde{f_1}:=\widetilde{v}$) \begin{align*} \langle Av,Aw \rangle&= \langle A\left(\sum c_i\widetilde{e_i}\right),A\left(\sum d_j\widetilde{e_j}\right) \rangle \\ &=\langle \sum c_iA\widetilde{e_i},\sum d_jA\widetilde{e_j} \rangle \\ &=\langle \sum c_i\widetilde{f_i},\sum d_j\widetilde{f_j} \rangle \\ &=\sum c_id_i \\ &=\langle \sum c_i\widetilde{e_i},\sum d_j\widetilde{e_j} \rangle \\ &=\langle v, w \rangle. \end{align*} Another argument could be to use the above fact that $\langle Av,Aw\rangle=\langle v, w \rangle$ for all $v,w$ to conclude that $\langle A^TA v,w \rangle=\langle v, w\rangle$ for all $v,w$, and thus $\langle (A^TA-I)v,w \rangle=0$ for all $v,w$. Fixing any $v$ and taking $w:=(A^TA-I)v$, we have that $\Vert (A^TA-I)v\Vert^2=0$. Thus, $A^TA-I=0$, and then $A^TA=I$ (analogously, you have that $AA^T=I$).
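A numerical sketch of the construction (QR factorization plays the role of Gram-Schmidt here, the random completion vectors are linearly independent with probability one, and all names are illustrative):

```python
import numpy as np

def orthogonal_map(u, v):
    # Build an orthogonal A with A u = v, assuming ||u|| = ||v|| != 0.
    n = len(u)
    def onb_starting_with(w):
        # Complete w to a basis with random vectors, then orthonormalize via QR.
        M = np.column_stack([w] + [np.random.randn(n) for _ in range(n - 1)])
        Q, R = np.linalg.qr(M)
        return Q * np.sign(R[0, 0])   # flip signs so the first column is +w/||w||
    B1, B2 = onb_starting_with(u), onb_starting_with(v)
    return B2 @ B1.T                  # maps the i-th column of B1 to that of B2

u = np.array([1.0, 2.0, 2.0]); v = np.array([3.0, 0.0, 0.0])   # both have norm 3
A = orthogonal_map(u, v)
print(np.allclose(A @ u, v), np.allclose(A.T @ A, np.eye(3)))  # True True
```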
Find the set $B$ such that its derived set $B'=A$
Such a set does not exist for the usual topology on $\mathbb{R}$, because a derived set containing each $\frac{1}{n}$ must also contain 0: Suppose every $1/n$ $(n\in \mathbb{N})$ is a limit point of $B$. Then $B$ contains arbitrarily close approximations to each $1/n$. Then 0 must also be a limit point of $B$. Indeed, if $1/n$ gets arbitrarily close to 0, and members of $B$ get arbitrarily close to each $1/n$, then members of $B$ get arbitrarily close to 0. (To see this formally, pick $\epsilon > 0$. The ball around 0 of radius $\frac{\epsilon}{2}$ contains some $1/n$. The ball around $1/n$ of radius $\frac{\epsilon}{2}$ contains some $y\in B$. The distance between $y$ and 0 is at most $\epsilon = \frac{\epsilon}{2} + \frac{\epsilon}{2}$, by the triangle inequality. Hence for each $\epsilon$, we can find $y\in B$ within $\epsilon$ of 0, so 0 is a limit point of $B$.) Hence if the set of limit points of $B$ contains $\{\frac{1}{n} : n\in \mathbb{N}\}$, it must also contain 0. There is no set $B$ whose set of limit points includes $\{\frac{1}{n} : n \in \mathbb{N}\}$ and nothing else. If our goal is to include $\{\frac{1}{n} : n \in \mathbb{N}\}$ and very little else, we can define $H = \{ \frac{1}{n} : n\in \mathbb{N}\}$, so that $H\cup \{0\}$ has zero as its sole limit point. Then define your set to be: $$B = \{ x + y \mid x, y \in H\} \cup H.$$ Then if we fix $x$ and vary $y\in H$ in the definition of $B$, we can see that each $x\in H$ is a limit point of $B$, and so is 0. It should be straightforward to see whether any other point $x\notin H$ is a limit point. (Following an answer to a distantly-related question: Example of a countable compact set in the real numbers)
Subsets and simple set operations
A counterexample for the first is $A = \{a,aa\}$, $B = \{b\}$, $C = \{ab\}$. A counterexample for the second is $A = \{a\}$, $B = \{b\}$.
Proving subadditivity of max norm for matrices
Using the common notation for the matrix maximum norm $$ \Vert A \Vert_\infty = \max_{i, j} |a_{ij}| $$ we have that for all possible indices $i, j$ $$ |a_{ij} + b_{ij}| \le |a_{ij}| + | b_{ij}| \le \Vert A \Vert_\infty + \Vert B \Vert_\infty \, . $$ The last expression does not depend on $i,j$, therefore $$ \Vert A+B \Vert_\infty \le \Vert A \Vert_\infty + \Vert B \Vert_\infty \, . $$
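A quick numerical illustration (the test matrices are arbitrary):

```python
import numpy as np

def max_norm(M):
    # Matrix maximum norm: largest absolute entry.
    return np.abs(M).max()

A = np.array([[1.0, -2.0], [3.0, 0.5]])
B = np.array([[0.5, 4.0], [-1.0, 2.0]])
assert max_norm(A + B) <= max_norm(A) + max_norm(B)  # subadditivity
```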
For $f: \mathbb{N} \rightarrow \mathbb{N}$ given by $f(x)=ax+b$, for what $a,b \in \mathbb{R}$ is $f$ a bijection?
This might help. A function is a bijection if and only if it has an inverse. First you can wonder: if $f:\mathbb N\to\mathbb N$ prescribed by $x\mapsto ax+b$ has an inverse $g:\mathbb N\to\mathbb N$ then how is $g$ prescribed? The answer to that is: $$x\mapsto\frac{x-b}{a}$$ The real question can now be worded as: under what conditions on $a,b$ is this indeed a legitimate prescription of a function $\mathbb N\to\mathbb N$? E.g. it is immediate that $a=0$ is not allowed. Also $a=2$ is not because it would lead to values not in $\mathbb N$.
Why is this set of continued fractions perfect?
Almost forgot that I've asked this question. Here's how I did it using GEdgar's hints: Let the set of continued fractions described be $X$. There are no isolated points in $X$, for if $x \in X$ and $$x = a_0 + \frac{1}{a_1 + \frac{1}{a_2 + \dotso}}$$ where $a_i \in \{1,2\}$, define the sequence $x_n$ for $n > 0$ by replacing $a_n$ in the expansion of $x$ with $1$ if $a_n = 2$, and with $2$ if $a_n = 1$. Then, denoting the denominator of the $n$-th convergent of $x$ by $q_n$, and using the determinant formula, $|x_n - x| < \frac{1}{q_n q_{n-1}}$. So $x_n \to x$. $X$ is compact, hence closed and bounded: Let $C$ denote the standard (ternary) Cantor set, and define $f:C\to X$ by $0.b_0 b_1 \dotso_3$ (in ternary) $\mapsto$ $x$ where $a_n = 1$ if $b_n = 0$, $a_n = 2$ if $b_n = 2$. This is continuous because given $y:=0.b_0 b_1 \dotso_3$ (in ternary) $\in C$ and a $\frac{1}{q_n q_{n-1}}$-neighbourhood of $f(y)$ (see above), if we let $\delta = 3^{-n-2}$ then $( z\in C$ and $|z-y| < \delta )$ $\implies$ $| f(z) - f(y) | < \frac{1}{q_n q_{n-1}}$. Feel free to improve this answer.
Density definition implies total mass integral
Your question can be worded as follows: Lemma: Suppose we have a map $m(V) := \int_V \rho\, d \lambda$, where $\lambda$ is the appropriate Lebesgue measure. Then: $$\rho(x) = \lim_{V \to x} \frac{m(V)}{|V|}$$ for (almost) every point $x$. Note that once this is proved, we have $$\int_V\lim_{V'\to \{P\}} \frac{m(V')}{|V'|} d\lambda = \int_V\rho \, d\lambda = m(V)$$ proving your claim. Now, the lemma above is highly non-trivial. It is known as the Lebesgue Differentiation Theorem, and a standard proof using the Hardy-Littlewood Maximal Function can be found on the linked Wikipedia page. Technically the result holds only almost everywhere (it could fail on a set of measure zero), but in physical applications this might be meaningless. Additionally, the family of sets $\{V\}$ must be of bounded eccentricity for the limit to converge, and the limit is taken such that the diameters of the sets decrease to zero.
Every semisimple Lie algebra is rigid
The "more direct way" with the Killing form is not enough as Mariano already explained. I think, the natural way to see that every semisimple Lie algebra is rigid, over a field of characteristic zero, is indeed via $2$-cocycles for the adjoint representation. One could also argue that this is the most direct way, because a formal deformation of a Lie algebra $L$ directly leads to $2$-cocycles. This is due to Gerstenhaber, see On the Deformation of Rings and Algebras: A formal one-parameter deformation of $L$ is a power series $$ [g,h]_t := [g,h] + \sum_{k\ge 1}\phi_k(g,h)t^k $$ such that the Jacobi identity for $[ \;, \;]_t$ holds, with $2$-cochains $\phi_k\in C^2(L,L)$ and $g,h\in L$. The Jacobi identity implies in particular, that the maps $\phi_k$ are $2$-cocycles for the adjoint representation, i.e., $$ \phi_k\in Z^2(L,L) $$ Two formal deformations of $L$ are called equivalent, if the resulting Lie algebras are isomorphic. The equivalence classes are represented by a cohomology class from $H^2(L,L)$. Now $L$ is called formally rigid, if every formal deformation is trivial. If $H^2(L,L)=0$, then $L$ is obviously formally rigid. The Whitehead Lemma says that every semisimple Lie algebra over characteristic zero satisfies $H^2(L,L)=0$, hence is formally rigid. Gerstenhaber and Schack proved that formally rigid is equivalent to geometrically rigid, which means that $L$ has open orbit in the "variety of all Lie algebra structures of dimension $\dim (L)$".
Basketball probability
Yes, your working is correct. A further underlying assumption that is not stated explicitly is that there are just these $3$ teams, or at least that only these $3$ teams can win the tournament.
Find the $2\times 2$ matrix $D$ such that $P^{-1}DP=\begin{pmatrix} -4 & -15 \\ 2 & 7 \end{pmatrix}.$
If $$P^{-1}DP=A$$ then $$D=PAP^{-1}$$ so you just need to calculate $P^{-1}$ and multiply.
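(As a sketch in SymPy, with a hypothetical $P$ since the problem's actual $P$ isn't reproduced here:)

```python
import sympy as sp

A = sp.Matrix([[-4, -15], [2, 7]])
P = sp.Matrix([[1, 2], [1, 3]])   # placeholder: substitute the P from your problem

D = P * A * P.inv()
print(D)
print(sp.simplify(P.inv() * D * P))  # sanity check: reproduces A
```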
Mathematics Audiobooks for the Blind?
You might find the answers to this post helpful, in terms of standards for speaking mathematics without relying on the receiver's vision. The question asks for a definitive guide to spoken mathematics. You'll find many resources there, including the link to the Handbook for Spoken Mathematics.
Can someone help me understand this: integrating over a discrete set of points yields 0 under Lebesgue integral?
Any set of reals which is countable (i.e. finite or countably infinite) has zero Lebesgue measure, since each singleton has measure zero and Lebesgue measure is countably additive. The Lebesgue integral of any integrable function over a Lebesgue-null set is always zero.
Convergence in Distribution which places mass of 1/2 at -1 and +1
Why not complete the OP's proof? Let $u_0(t)=1$ and, for every $n\geqslant1$, $$u_n(t)=\prod\limits_{k=1}^n\cos(t/2^k). $$ The goal is to prove that $u_n(t)\to \varphi_Y(t)$ for every $t$. Using the doubling formula $\sin(2a)=2\sin a\cos a$ for $a=t/2^n$, one gets $$ 2\sin(t/2^n)u_n(t)=\sin(t/2^{n-1})u_{n-1}(t). $$ Iterating this yields $$ u_n(t)=\frac{\sin t}{2^n\sin(t/2^n)}, $$ from which the pointwise convergence of $u_n$ to $\varphi_Y(t)=\frac{\sin t}{t}$ is clear, since $2^n\sin(t/2^n)\to t$ as $n\to\infty$.
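(A quick numerical check of the closed form and the limit, assuming nothing beyond the formulas above:)

```python
import numpy as np

def u(n, t):
    k = np.arange(1, n + 1)
    return np.prod(np.cos(t / 2.0 ** k))

t = 1.7
for n in (5, 10, 20, 40):
    closed_form = np.sin(t) / (2.0 ** n * np.sin(t / 2.0 ** n))
    print(n, u(n, t), closed_form)   # product and closed form agree
print("limit sin(t)/t =", np.sin(t) / t)
```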
Product-topology of discrete $\{ 0, 1 \}$ spaces
The product topology is discrete if and only if $I$ is finite. If $I$ is countably infinite, the product is homeomorphic to the Cantor set. The complement of a singleton is not a singleton unless $I$ is a one-point set. If $I$ has at least two elements, then $\prod_{i\in I}\{0,1\}$ has at least $2^2=4$ elements, so the complement of a singleton has at least three elements. More important, if $I$ is infinite, so is $\prod_{i\in I}\{0,1\}$ (indeed, it’s uncountable), so the complement of a singleton is infinite and by no means has to be closed.
Fourier transform symmetry property proof
I suggest you start from $F(-\nu)$ and use the change of variables $u=-t$; since $f$ is odd, $f(-u)=-f(u)$: $$F(-\nu) = \int_{-\infty}^\infty f(t) e^{j2 \pi \nu t} \,dt \;=\; \int_{\infty}^{-\infty} f(-u) e^{-j2 \pi \nu u} \,(-du) \;=\; \int_{-\infty}^{\infty} f(-u) e^{-j2 \pi \nu u} \,du \;=\; -\int_{-\infty}^{\infty} f(u) e^{-j2 \pi \nu u} \,du \;=\; -F(\nu)$$ (flipping the limits of integration absorbs the $-du$, and the oddness of $f$ gives the final sign).
What will be the elapsed time for a person standing a railway can listen a beep?
It is difficult for me to agree with the answer given. The simplest logic says that the time heard will be $T=2(1-v_c/v)$, where $v_c$ is the cruising speed. The explanation, assuming that the stationary observer and the moving source (approaching the observer) are on the same straight line (see figure), is as follows. [The image, taken from Physics by Resnick and Halliday, 1968, clearly illustrates the contraction of the wavefronts.] At $t_i=0$, the first wavefront starts when the source is at $S_1$. It reaches the observer at $t_1=d/v$, where $\overline{S_1O} = d$ and $v=340\,\mathrm{m/s}$. At $t_f=2$, the last wavefront is emitted when the source is at $S_2$. It reaches the observer after a time interval of $\Delta t=[d-v_c(t_f-t_i)]/v = t_1-2v_c/v$. The observer hears this wavefront at $t_2=2+\Delta t$. [Note: $\overline{S_2O} = \overline{S_1O} - \overline{S_1S_2} = d - v_c(t_f-t_i)$.] [Also note: the speed of each wavefront is $v$, since once it leaves the source it travels through the air at the speed of sound.] Therefore, the duration for which the observer hears the sound is $T=t_2 - t_1=2+t_1-2v_c/v-t_1=2(1-v_c/v)$.
walks on proper coloring of odd cycles: comparing asymptotics
Here's a somewhat more systematic argument for the intuitively plausible result that the growth factor of the $F_k$ is bounded away from $2$ for $n\to\infty$ whereas the growth factor of the $G_k$ tends to $2$ for $n\to\infty$. I will use the exponential growth formula (Theorem IV.7 in Analytic Combinatorics (free download link)), which states that the radius of convergence of an analytic function at the origin (which for a rational function is the magnitude of the pole nearest to the origin) is the inverse of the growth factor of the coefficients. First, for $F_k$, we have two cases. The case of even $n$ is easy. The colouring is $010\ldots0102$, and independently of $n$ the possible colour walks are given by arbitrarily long strings of the form $010\ldots010$ with $2$s in between. The generating function to count these has a factor $x$ for each $2$ and a factor $x+x^3+x^5+\dotso=x/(1-x^2)$ for the strings of the form $010\ldots010$, and this can be repeated any number of times, so with $u=x^2/(1-x^2)$ we have $$f(x)=1+u+u^2+\dotso=\frac1{1-u}=\frac1{1-x^2/(1-x^2)}=\frac{1-x^2}{1-2x^2}\;.$$ Actually the last string of $0$s and $1$s can also end in a $1$, and there doesn't have to be a $2$ at the beginning, but these "finite-size effects" are just corrections in the numerator that don't affect the zeros of the denominator that determine the growth factor (which makes sense). The only poles of $f$ are at $\pm1/\sqrt2$, so the growth factor in this case is $\sqrt2$. In the case of odd $n$, the colouring is $010\ldots1012$, and now it makes a difference which side of the $2$ we go to and whether we return to the same side or the other side. The generating function again has a factor of $x$ for each $2$, but the factors for the strings of $0$s and $1$s are a bit more complicated. There's a factor of $2$ because we can go to either side of the $2$. Then we have the choice of either adding an odd number of alternating $0$s and $1$s and returning to the $2$ from the same side, or of adding an even number of at least $n-1$ $0$s and $1$s and returning to the $2$ from the other side. These correspond to contributions $x(1+x^2+x^4+\dotso)$ and $x^{n-1}(1+x^2+x^4+\dotso)$, respectively, so in this case we get (again neglecting finite-size effects) $u=2x(x+x^{n-1})/(1-x^2)$ and thus $$f(x)=\frac1{1-u}=\frac1{1-2x(x+x^{n-1})/(1-x^2)}=\frac{1-x^2}{1-3x^2-2x^n}\;.$$ Here are the growth factors for the first few odd $n$: $$ \begin{array}{c|c} n&1/\rho_n\\ \hline 3&2\\ 5&1.824621\\ 7&1.765400\\ 9&1.743786\\ 11&1.736075\\ 13&1.733412 \end{array} $$ The growth factors clearly tend to $\sqrt3$, and I presume this wouldn't be difficult to show rigorously from the simple form of the denominator. This makes sense, since in the limit $n\to\infty$ there is no longer any gain from traversing the vast expanse of repeating $0$s and $1$s, so the $x^n$ term becomes negligible and the denominator is just $1-3x^2$. For the $G_k$, the situation is somewhat more complicated. Setting up the complete generating functions would be a bit tedious, though possible, so I'll use a rough bound to show that the growth factor tends to $2$: I'll consider only the colour walks that reach and leave the final, irregular vertex from and to one of the two sides. Then the colour of that vertex and its neighbours becomes irrelevant, and we don't have to distinguish different cases depending on $n\bmod 3$.
Each part of the walk between two visits to the final vertex corresponds to a Dyck path that doesn't go above $n-2$ or below $0$. These are counted in this paper by the generating function $$R_{n-2}(x)=\frac{U^*_{n-2}(x)}{U^*_{n-1}(x)}\;,$$ where the $U^*_n$ are defined in terms of the Chebyshev polynomials of the second kind by $$U^*_n(x)=x^nU_n\left(\frac1{2x}\right)\;.$$ Again, we have a factor of $x$ for each occurrence of the final vertex, and a factor of $xR_{n-2}(x)$ for each foray into the rest of the cycle ($x$ because it takes one step in addition to the Dyck path to get started), and we can repeat this any number of times, so the generating function (up to finite-size effects) is $$f(x)=\frac1{1-x^2R_{n-2}(x)}=\frac{U_{n-1}(t)}{U_{n-1}(t)-xU_{n-2}(t)}$$ with $t=1/2x$. By a lucky coincidence (or perhaps not), the Chebyshev polynomials of the second kind satisfy the recurrence relation $U_n(t)=2tU_{n-1}(t)-U_{n-2}(t)$, which gives us $$f(x)=\frac{2tU_{n-1}(t)}{U_n(t)}\;.$$ Thus, the growth factor is twice the largest root of $U_n$. The roots of $U_n$ are given by $$x_k=\cos\frac {k\pi}{n+1}$$ for $k=1,\dotsc,n$, so the largest one is $\cos\pi/(n+1)$, which tends to $1$ as $n\to\infty$; thus the growth factor tends to $2$. Here are the growth factors for the first few values of $n$: $$ \begin{array}{c|c} n&1/\rho_n\\ \hline 1&0\\ 2&1\\ 3&\sqrt2\\ 4&1.618034\\ 5&\sqrt3\\ 6&1.801938\\ 7&1.847759 \end{array} $$ We can check that the first three values make sense: For $n=1$, there are no walks at all of the kind described; for $n=2$, there are no choices and thus only one walk of each length; and for $n=3$ there is a binary choice to make every second step when we're on the first vertex; thus these three growth factors are correct. In summary, the growth factor for the $F_k$ for even $n$ is $\sqrt2$; the growth factors for the $F_k$ for odd $n$ tend to $\sqrt3$ from above for $n\to\infty$; and the growth factors for the $G_k$ tend to $2$ from below for $n\to\infty$. The crossover is at $N=7$ at the latest (taking into account that the factors for the $G_k$ are only lower bounds since we didn't count all the walks). Thus the claim in the question is correct.
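(For what it's worth, both tables are easy to reproduce numerically; this sketch finds the smallest positive root of $1-3x^2-2x^n$ with NumPy and evaluates $2\cos(\pi/(n+1))$:)

```python
import numpy as np

# 1/rho_n for the F_k (odd n): reciprocal of the smallest positive root
# of the denominator 1 - 3x^2 - 2x^n
for n in (3, 5, 7, 9, 11, 13):
    coeffs = np.zeros(n + 1)      # polynomial coefficients, highest degree first
    coeffs[0] = -2                # -2 x^n
    coeffs[n - 2] = -3            # -3 x^2
    coeffs[n] = 1                 # +1 (constant term)
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    print(n, 1 / real[real > 0].min())

# growth factors for the G_k: 2 cos(pi/(n+1))
print([round(2 * np.cos(np.pi / (n + 1)), 6) for n in range(1, 8)])
```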
Area such that $d(P, AB) \leq \min \{d(P,BC), \ d(P,AC)\}$
The region where $d(P,AB) \leq d(P,BC)$ is defined by the angle bisectors of the angles formed by lines $AB$ and $BC$, and so on. For the configuration you state, the region $R$ is given by the areas shaded yellow or red in the diagram below. The areas on the left and on the right are infinitely large, so the answer to your question is $\infty$. If you limit the region further to the interior of the triangle, you see that $R$ is then the triangle defined by $A$, $B$, and the incenter of triangle $ABC$ (i.e. $\triangle ABI$). The coordinates of the incenter $I$ can be found to be $(7 - \sqrt 5, 7 - \sqrt 5)$. The area of $\triangle ABI$ is then "one-half base times height", or $$\frac12 \cdot 4 \cdot ((7 - \sqrt 5)-4) = 6 - 2 \sqrt 5 \approx 1.52786$$
Existence of Matrix inverses depending on the existence of the inverse of the others..
Hint: Let $(I-BA)^{-1}=X$. Expanding the left side as a (formal) Neumann series, we get $$X=I+BA+ (BA)(BA)+(BA)(BA)(BA)+\dots$$ $$AXB=AB+(AB)^2+(AB)^3+(AB)^4+\dots$$ $$I+AXB=I+(AB)+(AB)^2+\dots+(AB)^n+\dots=(I-AB)^{-1}$$ Check yourself (this verification works without any series): $(I+AXB)(I-AB)=I$, $(I-AB)(I+AXB)=I$
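(A numerical sanity check of the identity with NumPy; random matrices generically leave $I-BA$ invertible, which is all the argument needs:)

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
I = np.eye(3)

X = np.linalg.inv(I - B @ A)
lhs = np.linalg.inv(I - A @ B)
rhs = I + A @ X @ B
print(np.allclose(lhs, rhs))  # True
```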
Closed form of the sum of two binomial expansions
Maybe you are looking for a symmetric formulation like this $$ \eqalign{ &amp; \left( {x + y} \right)^{\,n} + \left( {x + z} \right)^{\,n} = \cr &amp; = \left( {x + \left( {{{z + y} \over 2}} \right) - \left( {{{z - y} \over 2}} \right)} \right)^{\,n} + \left( {x + \left( {{{z + y} \over 2}} \right) + \left( {{{z - y} \over 2}} \right)} \right)^{\,n} = \cr &amp; = \left( {x + \left( {{{z + y} \over 2}} \right)} \right)^{\,n} \left( {\left( {1 - \left( {{{z - y} \over {2x + y + z}}} \right)} \right)^{\,n} + \left( {1 + \left( {{{z - y} \over {2x + y + z}}} \right)} \right)^{\,n} } \right) \cr} $$
Pushforward of inverse map at the identity?
When $G=\mathbb R$, $i(x)=-x$, and so $i_*(X)=-X$. Suppose that $\varphi:H \to G$ is a homomorphism of (Lie) groups, and $i_H, i_G$ are the inversion maps. We can write the fact that homomorphisms preserve inverses as $i_G \circ \varphi = \varphi \circ i_H$. Therefore $(i_G)_* \circ \varphi_* = \varphi_* \circ (i_H)_*$. Consider a one parameter subgroup $\varphi:\mathbb R\to G$. Then combining the two above observations, we have $$(i_G)_*(\varphi_*(X)) = \varphi_*((i_{\mathbb R})_*(X)) = \varphi_* (-X)=-\varphi_* (X)$$ for $X\in T_e \mathbb R$. Thus, $i_*(Y)=-Y$ for every $Y\in T_e G$ that is in the image of (the derivative of) a one parameter subgroup. Since we can find a one parameter subgroup through each vector at the identity, the proposition is proved.
Differential equation, Stability , Lyapunov function
The solution stays bounded, and $z$ undergoes periodic motion. The reason for this is symmetry: If you replace $x$ by $-x$ and $t$ by $-t$, the system is unchanged. Therefore, while $(x,y)$ travels once around its ellipse $x^2+2y^2=\text{constant}$ from $x=0$, $y>0$ back to the same point, $z$ must have returned to its original value. I know that $x$, $y$ traverses the entire ellipse because the first two equations are just a variation of $x'=2y$, $y'=-x$, which has that property. More precisely, introduce some new time variable $\tau$. Write $x_\tau$ etc for the derivative wrt $\tau$ and $x_t$ etc for the derivative wrt $t$, then $x_\tau=x_t\tau_t$, so if $\tau$ is defined by $\tau_t=z-1$, then the first two equations become $x_\tau=2y$, $y_\tau=-x$. This works fine so long as $z<1$. But we are interested in stability at the origin, and if we start close to the origin then $z$ has no chance to grow as big as $1$ in the time it takes to traverse the ellipse.
First-countable space
Lemma: Let $X$ be first countable. A set $U\subset X$ is open if and only if whenever $x_n\rightarrow x\in U$, then $(x_n)$ is eventually in $U$ (that is, there is a positive integer $n$ so that $x_m\in U$ for each $m\ge n$). Proof: As the forward implication is trivial, we only provide the proof for the reverse implication. Towards this end, suppose $U$ is not open. Let $x$ be a point in $U$ such that no nhood of $x$ is contained in $U$. Let $\{ U_i\mid i\in\Bbb N\}$ be a countable nhood base at $x$ such that $U_1\supset U_2\supset U_3\supset\cdots$ (replace $U_n$ with $\cap_{k=1}^n U_k$ if necessary). For each $i$, choose $x_i\in U_i$ such that $x_i\notin U$. Then $(x_i)$ converges to $x\in U$ but is not eventually in $U$. Now to show that your statement is true: Let $V$ be open in $Y$. Set $U=f^{-1}(V)$ and suppose $x\in U$. Let $(x_n)$ be a sequence in $X$ converging to $x$. Then $(f(x_n))$ converges to $f(x)\in V$. Since $V$ is open, the sequence $(f(x_n))$ is eventually in $V$ by the Lemma. From this, it follows that the sequence $(x_n)$ is eventually in $U$. Thus, by the Lemma again, $U$ is open in $X$. As $V$ was an arbitrary open set in $Y$, it follows that $f$ is continuous.
How high up do kinds go in type theory?
You might want to look at the notion of a pure type system (PTS). A PTS is built from the following data: A set ${\mathscr S}$ of sorts, which intuitively correspond to the different layers of types, kinds, higher order kinds, ... A set ${\mathscr A}\subset{\mathscr S}^2$ of axioms: $(s_0,s_1)\in{\mathscr A}$ signifies that $s_0$ itself is a type of sort $s_1$. A ternary relation ${\mathscr R}\subseteq{\mathscr S}^3$ of rules, specifying the way dependent product types can be built: $(s_0,s_1,s_2)\in{\mathscr R}$ signifies that the type system should allow the formation of dependent product types for those dependent types for which the parameter type has sort $s_0$ and the argument type has sort $s_1$, and that the resulting dependent product type itself should have sort $s_2$. For example, if you take ${\mathscr S} := \{\ast\}$, ${\mathscr A} := \emptyset$ and ${\mathscr R} := \{(\ast,\ast,\ast)\}$, you have a single sort of types, and you may form dependent product types for types depending on terms. In particular, function types like $A\to \ast$ do not exist (since $\ast$ is not a type, i.e. has no sort), so that a posteriori the only dependent product types one may form are those corresponding to constant dependent types, i.e. ordinary function types $A\to B$ for types $A,B$. In other words, you get the ordinary typed $\lambda$-calculus. However, you might include a sort $\square$ and force $\ast:\square$ by putting $(\ast,\square)\in{\mathscr A}$. Then, the rule $(\ast,\square,\square)$ would allow for the formation of dependent product types $A \to \ast:\square$ for $A$ a type of sort $\ast$, and for such a dependent type $V:A\to\ast$, you might indeed form its dependent product type $\prod_{x:A} V x: \ast$ from the rule $(\ast,\ast,\ast)$. The resulting system is $\lambda\text{P}$. To implement 'terms depending on types' like the Church naturals $\prod_{X:\ast} X\to (X\to X)\to X:\ast$ you would need the rule $(\square,\ast,\ast)$ (giving System $F$), and to form 'types depending on types' you'd need to include the rule $(\square,\square,\square)$ (giving $\lambda_\omega$). The 8 possible combinations of the above rules give the so-called $\lambda$-cube of pure type systems, at the top of which you have the calculus of constructions. However, you can also have an infinite ladder of sorts, as in the calculus of inductive constructions. Finally, note that the 'paradise' configuration with ${\mathscr S} := \{\ast\}$, ${\mathscr A}:=\{(\ast,\ast)\}$ and ${\mathscr R} := \{(\ast,\ast,\ast)\}$ yields an inconsistent system. See e.g. nlab or these slides.
Proof the following!
You just need to show that $$W(g_1,g_2)=(A_1B_2-A_2B_1)W(f_1,f_2).$$ where $$ g_1=A_1f_1+A_2f_2,\\ g_2=B_1f_1+B_2f_2. $$ or $$ (g_1 \ \ g_2)=(f_1 \ \ f_2)\pmatrix{A_1&B_1\\A_2&B_2} $$ In A), already the first equation is wrong, as $g_1=A_1f_1+A_2f_2$ is not the zero function. Since $f_1,f_2$ are independent, $A_1f_1+A_2f_2=0$ is only possible for $A_1=A_2=0$.
Can a property $P$ for real numbers be defined as eventually true?
The notion of some property holding eventually is not new and is actually very common in mathematics, though often not called so. For example, recall the definition of convergence of a sequence of real numbers. It can be equivalently stated as: Sequence $(x_n)$ converges to $l$ if for any $\varepsilon>0$ eventually we have $x_n\in(l-\varepsilon,l+\varepsilon)$. This notion of "eventuality" is often hidden behind the words "the property holds for all sufficiently large $n$". This notion makes perfect sense both in the natural numbers and in the real numbers.
What are the advantages/disadvantages of non-standard analysis?
Look at the answers to Is non-standard analysis worth learning? Do they answer your question? If I understood your question correctly, it seems the answer is that non-standard analysis (NSA) is not technically needed for the kind of practical reasons you mention. In particular, this answer to the previously-cited question mentions a few results that were first found via non-standard analysis, but also says these results were all found later by means of standard analysis. Indeed it seems that theoretically, the interesting thing about NSA is how well its theorems correspond to statements that are also true (and provable) in standard analysis. On the other hand, consider this answer concerning the teaching of calculus. It cites evidence that students of calculus learn the concepts better when things are presented first in terms of infinitesimals and the epsilon-delta formalisms are introduced later. Not everyone agrees with this conclusion. (And that's an understatement.) But I think this might qualify as some kind of practical use of NSA if the conclusion is true.
finite dimensional function space
One transitional example is to look at the vector space of real-valued functions on a set with $k$ elements, such as $\{1,2,\ldots,k\}$, with addition pointwise ($(f+g)(x)=f(x)+g(x)$) and scalar multiplication similarly. This is a $k$-dimensional space of functions. Indeed, hard to tell it apart from $\mathbb R^k$. Equicontinuity is relatively easy...
Graph Laplacian appellation
In terms of "sign": there's a long debate between geometers and analysts about whether the Laplacian should be the trace of the Hessian or its negative. From the functional analytic point of view (which will be useful also for the graph Laplacian), defining the Laplacian as the negative trace of the Hessian has the advantage in giving a positive operator. The $h^2$ term is unimportant: you are working on a graph so you are just multiplying by an overall factor. Unless you are thinking of the graph as some discretization of the plane and are actually interested in the limit of taking the step-size $h\to 0$, whether you keep track of the $h^2$ is not something to worry about.
How to prove if these statements involving logical consequences and tautologies are true?
i). $\neg s \vDash s$ means that $s$ is a tautology, for if it were possible for $s$ to be false, then in that case $\neg s$ is true, and $s$ is false, and so $\neg s \vDash s$ would not hold. Also, if $s$ is a tautology, then $t \rightarrow s$ is a tautology as well. So, i) is True. The $s \vDash t \rightarrow s$ is not even needed. ii) You got the right idea, but the execution is a little off: You can't conclude that $t \vDash \neg s$ just on the basis of a particular valuation. So, instead, try to prove that for any valuation that sets $t$ to true, $\neg s$ will have to be true as well, given $s \vDash t \rightarrow \neg s$. Put differently: show that any valuation that sets $t$ to true and $\neg s$ to false will contradict $ s \vDash t \rightarrow \neg s$ iii) Correct! Regardless of what any valuation does to $t$, it must set either $u$ or $\neg u$ to true, and therefore it must set either $t \lor u$ to true or $t \lor \neg u$ to true (or both). Hence, any valuation must set $s$ to true, and thus $s$ is a tautology.
Solving for X with multiple exponents
Your equation $$x^3-6x^2+17x - 2280=0$$ is perfectly correct; it is a cubic, and to solve it you could use Cardano's method, which will tell you that there are one real root and two complex roots. They are respectively $$x_1=15 \qquad , \qquad x_{2,3}=\frac{1}{2} \left(-9\pm i \sqrt{527}\right)$$ I do not suppose that you care about the complex solutions, so $x=15$ is your solution. If you need to solve for $x$ the equation $$\frac{50}{3} (x^3-6x^2+17x-12)=p$$ the only real solution will be $$x=2+\frac{A^2-50 \sqrt[3]{30}}{30^{2/3} A}$$ where $$A=\sqrt[3]{27 (p-100)+\sqrt{3} \sqrt{243 (p-200) p+3680000}}$$
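(You can confirm the three roots numerically, e.g. with NumPy:)

```python
import numpy as np

print(np.roots([1, -6, 17, -2280]))
# -> 15 and (-9 +/- i*sqrt(527))/2, i.e. -4.5 +/- 11.478...j
```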
Trigonometry: Least square of a tangent function
The usual way is to use a nonlinear regression method of fitting. Several software packages implement this. The computation is iterative, starting from "guessed" values of the parameters. Sometimes the numerical computation fails due to bad initial guessed values and/or a non-convergent iteration. The principle of a non-conventional method (no iteration, no initial guess) is explained in this paper : https://fr.scribd.com/doc/14674814/Regressions-et-equations-integrales , partially translated in : https://scikit-guess.readthedocs.io/en/latest/appendices/references.html. An application to the case of the tangent function is shown below. Numerical example : If a well-defined criterion for fitting is specified (least mean absolute deviation, least relative deviation, or others), the above method is not sufficient. An iterative process is required, with guessed values to start. The above computation can be used to get a set of good initial values. CAUTION : In the above method numerical integrations are carried out for the computation of $S_k$ and $T_k$. Such a simplified process cannot be used if the function tends to infinity around a point in the range of the data $(x_k\, ,\,y_k)$. For example, if the points are distributed as on the next figure, the above method will not be convenient. FOR INFORMATION : An integral equation to which $\quad y(x)=a\tan(bx+c)+d\quad$ is a solution is : $$y(x)=\frac{b}{a}\int (y(x))^2dx-2\frac{bd}{a}\int y(x)\,dx+\left(\frac{b\,d^2}{a}+ab\right)x+\text{constant}$$ This allows the linearisation of the regression and the construction of the first 4x4 matrix.
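(For the conventional iterative approach, a minimal sketch with SciPy's `curve_fit` on synthetic data; the data, the initial guess `p0`, and the range chosen to avoid the poles of $\tan$ are all assumptions made for illustration:)

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c, d):
    return a * np.tan(b * x + c) + d

rng = np.random.default_rng(2)
x = np.linspace(-1.0, 1.0, 60)             # range kept away from the poles of tan
y = model(x, 1.5, 0.8, 0.2, -0.5) + 0.02 * rng.standard_normal(x.size)

p0 = (1.0, 1.0, 0.0, 0.0)                  # a poor guess can send the iteration to a pole
params, _ = curve_fit(model, x, y, p0=p0)
print(params)                              # close to (1.5, 0.8, 0.2, -0.5)
```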
Have doubts that it's right equivalency of predicates when there are two people with different tastes.
I recently learnt about the method of finite universes. Applying this method to a universe consisting of two people shows that there is no need to worry about the right-hand statement. Let the universe consist of $a$ and $b$. Then we can reformulate our statement in the following way: [(La v Lb) v (Ga v Gb)] <=> [ (La v Ga) v (Lb v Gb) ] Firstly, please notice that the right part of our logical equality is just the rearranged left part. By the associativity and commutativity of "OR", the two parts must be logically equal. Secondly, just to make it obvious HOW the situation "one loves guro, the other loves lolicon" does NOT turn the right part into a false statement, here is the proof: Initially: (La v Ga) v (Lb v Gb) Then: (F v T) v (T v F) Then: T v T And finally: T
Identity involving a projection matrix, originating from statistical regression theory
Write $X$ as a block matrix and calculate $X^{\top}X$: $$ X=\begin{pmatrix}X_0 &amp; x_m\end{pmatrix}, $$ $$ X^{\top}X=\begin{pmatrix}X^\top_0 \\ x^\top_m\end{pmatrix}\begin{pmatrix}X_0 &amp; x_m\end{pmatrix}=\begin{pmatrix}X^\top_0X_0 &amp; X^\top_0x_m\\ x^\top_mX_0 &amp; x^\top_m x_m\end{pmatrix}. $$ To find the value $w_m^2$ at position $(m,m)$ of $(X^{\top}X)^{-1}$, by Cramer's rule we need to divide the determinant of $X^\top_0X_0$ by the determinant of $X^{\top}X$: $$ w_m^2=\frac{\det(X^\top_0X_0)}{\det(X^{\top}X)}. $$ With the help of the formula for determinants of block matrices https://en.wikipedia.org/wiki/Determinant#Block_matrices we get $$ \det(X^{\top}X) = \textrm{det}(X^\top_0X_0)\cdot \textrm{det}\bigl(x_m^\top x_m-x_m^\top \underbrace{X_0 (X_0^\top X_0)^{-1}X_0^\top}_{P} x_m\bigr) $$ $$ =\det(X^\top_0X_0)\cdot \bigl(x_m^\top (I_n-P) x_m\bigr). $$ We can drop the last $\det$ since $x_m^\top (I_n-P) x_m$ is a $1\times 1$ matrix, i.e. a scalar. Finally, $$ w_m^2=\frac{\det(X^\top_0X_0)}{\textrm{det}(X^{\top}X)} = \frac{1}{x_m^\top (I_n-P) x_m}. $$
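(A numerical check of the final identity with NumPy, using an arbitrary random design matrix; indices are $0$-based in Python, so position $(m,m)$ becomes `[m-1, m-1]`:)

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 20, 5
X = rng.standard_normal((n, m))
X0, xm = X[:, :-1], X[:, -1]           # X = (X0 | x_m)

w2 = np.linalg.inv(X.T @ X)[m - 1, m - 1]
P = X0 @ np.linalg.inv(X0.T @ X0) @ X0.T
print(w2, 1.0 / (xm @ (np.eye(n) - P) @ xm))  # the two numbers agree
```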
Stuck on this definite integral problem
Hint: Substitute $x=a\sinh(t)$. $$a^6\int\cosh^6(t)\,dt=\frac{a^6}{32}\int(\cosh(6t)+6\cosh(4t)+15\cosh(2t)+10)\,dt.$$
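(If in doubt about the power-reduction identity for $\cosh^6$, SymPy confirms it by rewriting both sides in exponentials:)

```python
import sympy as sp

t = sp.symbols('t')
rhs = (sp.cosh(6*t) + 6*sp.cosh(4*t) + 15*sp.cosh(2*t) + 10) / 32
print(sp.simplify((rhs - sp.cosh(t)**6).rewrite(sp.exp)))  # 0
```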
What are the deck transformations of this threefold cover of the figure 8?
Your statement is only true for normal covering spaces $\hat X\to X$, i.e. those for which $\pi_1(\hat X) \subset \pi_1(X)$ is a normal subgroup. Then we get the isomorphism $Deck(\hat X) \cong \pi_1(X) / \pi_1(\hat X)$, which is very interesting; have a look at its geometric interpretation. Furthermore, there are threefold coverings which have a trivial deck transformation group. Of course this can never happen for abelian fundamental groups, by the first part of the answer.
what kind of $x$ fits into $argsort(x)$=$ argsort(argsort(x))$?
If I understand your $\operatorname{argsort}$ correctly, then $\operatorname{argsort}(x)$ takes as input a finite sequence of distinct numbers -- that is, a map from $\{0,1,2,\ldots,n-1\}$ to distinct numbers -- and produces a permutation $\sigma$ such that $x\circ\sigma$ is an increasing sequence. Therefore, the fixed points of $\operatorname{argsort}$ are the permutations $\sigma$ where $\sigma\circ\sigma$ is an increasing sequence. But $\sigma\circ\sigma$ is always a permutation, and the only permutation that is increasing is the identity, so the fixed points are the $\sigma$ with $\sigma\circ\sigma=\rm Id$, or in other words permutations of order $1$ or $2$. That is exactly the permutations that are products of disjoint transpositions. This is relevant because your equation $\operatorname{argsort}(x) = \operatorname{argsort}(\operatorname{argsort}(x))$ calls for $\operatorname{argsort}(x)$ to be a fixed point of $\operatorname{argsort}$. Thus, the solutions to your equation are exactly the ones that you can make by: Start with an increasing sequence. Select zero or more non-overlapping pairs of elements in the sequence (not necessarily neighbors), and swap each pair.
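(NumPy's `argsort` follows exactly this convention, so the characterisation is easy to test: `[3, 1, 2]` sorts via a 3-cycle and fails, while `[1, 3, 2]` sorts via a transposition and works:)

```python
import numpy as np

x = np.array([3.0, 1.0, 2.0])   # argsort(x) = [1, 2, 0], a 3-cycle
print(np.argsort(x), np.argsort(np.argsort(x)))   # [1 2 0] vs [2 0 1]: differ

y = np.array([1.0, 3.0, 2.0])   # argsort(y) = [0, 2, 1], a transposition
print(np.argsort(y), np.argsort(np.argsort(y)))   # [0 2 1] vs [0 2 1]: equal
```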
Help understanding steps in computations with Conditional random variables
The Law of Total Expectation states that: $$\mathbb{E}[X] = \mathbb{E}[\mathbb{E}[X|Y]]$$ This means that: $$\mathbb{E}[TZ] = \mathbb{E}[\mathbb{E}[TZ|Z]]$$ Now, we have that: $$\mathbb{E}[TZ] = \mathbb{E}[TZ|Z = 0] \Pr[Z = 0] + \mathbb{E}[TZ|Z = 1]\Pr[Z = 1]$$ We have that $\mathbb{E}[TZ|Z = 0]=\mathbb{E}[0\cdot X] = 0$, and that $\mathbb{E}[TZ|Z = 1] = \mathbb{E}[Y]$, so: $$\mathbb{E}[TZ] = \mathbb{E}[Y]\Pr[Z = 1]$$ as desired. Since $X\sim U[-3,-1]$, we have that $\Pr[X < t] = 1$ for $t\in[-1,0)$. Since $Y\sim\text{Exp}(1/4)$, we have that $\Pr[Y < t] = 0$ for $t\in[-1,0)$ (the support of an exponential random variable is $t > 0$). It follows that: $$F_T(t) = \Pr[X < t]\cdot\frac{1}{4} + \Pr[Y < t] \cdot\frac{3}{4} = 1\cdot\frac{1}{4} + 0\cdot\frac{3}{4} = \frac{1}{4},\quad t\in [-1,0)$$ The relevant line of reasoning here is: Since $T = Y\geq 0$ when $Z = 1$, and $T = X < 0$ when $Z = 0$. This observes that the only way for $T\geq 0$ to hold is if $T = Y$ and $Z = 1$, so $TZ = T\cdot 1 = T$ (it's additionally the case that $T = Y$, but writing this doesn't aid the integral later). Similarly, the only way for $T < 0$ to hold is if $T = X < 0$, but this occurs when $Z = 0$, so $TZ = X\cdot 0 = 0$.
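(A Monte Carlo sanity check; the mixture weight $\Pr[Z=1]=3/4$ and the distributions are read off from the CDF computation above, and independence of $X$, $Y$, $Z$ is assumed:)

```python
import numpy as np

rng = np.random.default_rng(4)
N = 10**6
Z = rng.random(N) < 0.75            # Pr[Z = 1] = 3/4
X = rng.uniform(-3, -1, N)
Y = rng.exponential(4, N)           # Exp(1/4) has mean 4
T = np.where(Z, Y, X)               # T = Y if Z = 1, else X

print((T * Z).mean())               # approx E[Y] * Pr[Z = 1] = 3
print(Y.mean() * Z.mean())
```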
Abelian group whose center is proper subgroup
The center of a group is the subgroup consisting of the elements that commute with everything. If a group is abelian, then it is clearly equal to its center, so the center of an abelian group can never be a proper subgroup.
How to visualize Dini theorem regarding sequence of functions?
"Dini's theorem says that if a monotone sequence of continuous functions converges pointwise on a compact space and if the limit function is also continuous, then the convergence is uniform." My (poor) intuition for this result is as follows: Uniform convergence means that the rate of convergence can be lower bounded. Now since the sequence of functions is pointwise increasing (or decreasing) they are all squeezed in a smaller and smaller gap towards the continuous limit. Furthermore, continuous functions on compact spaces have nice properties (uniform continuity and so forth). This might not help much yet -- but in analysis, when dealing with problems that ask to upper bound some quantity in order to make it smaller than $\epsilon$ (using the triangle inequality), I personally find it more helpful to summarise the data of the problem to see how you can use it with the triangle inequality to upper bound your quantity of interest, rather than trying to visualise it. Finding the solution often helps in visualising the problem a posteriori. Here you want to show that the sup norm on your compact space between your function sequence $f_n$ and its limit $f$ decreases as $n\to \infty$. Can you combine the data of your problem to upper bound $\sup_{x\in K}d(f_n(x),f(x))$, using the triangle inequality multiple times? HINT:at some point you might need to subdivide your compact space into very small compact subspaces.
How to express series summation notation in reverse?
So you will have to start somewhere, with a first term, call it $a$. Let's say the terms represent areas, as in the example; then they will be positive, and if they go from smaller to larger then you will expect all of the remaining terms to be bigger than the first term $a$. This presents a problem, since the sum of infinitely many terms each bigger than a positive lower bound $a$ must diverge. So for instance if you started with a square and kept doubling it, the sum would diverge. All of that being said, the phrase 'express a reverse series' which you used is vague enough that it is hard to definitively answer no to the question. If an expression such as the one you gave in the statement meets the criteria that you set out, then that would suggest it can be done for any series, since you are just rearranging a finite segment of the series so that the smallest terms are ordered first, with the usual trailing dots going from right to left instead of left to right and located at the front. We can surely rearrange how we write series so that the smallest terms are leftmost and the trailing dots go from right to left in front, just as you demonstrated; the notation, once it is defined and agreed upon, is unambiguous. The phrase 'small to large areas' indicates to me that the first interpretation is more along the lines of what you are thinking; in that case, the answer is no. If those areas are supposed to represent the partial sums of the sequence, then they will indeed move from small to large, and the answer is yes.