Change of basis associated to several successive rotations
The problem is here $$ v_0 = R_1 v_1\\ v_1 = R_2 v_2 $$ $R_1$ as a matrix is expressed in terms of basis $\mathcal B_0$, while $R_2$ as a matrix is expressed in basis $\mathcal B_1$. In order to use matrix multiplication to do this problem, they need to be expressed in the same basis. The rotation $R_2$ expressed as a matrix in basis $\mathcal B_0$ is $R_1^{-1}R_2R_1$. So instead of $v_0 = R_1 R_2 v_2$, you get $$ v_0 = R_1(R_1^{-1}R_2R_1)v_2 = R_2 R_1 v_2 $$ and there is no contradiction.
$\Delta e^i =0$ where $e_i$ is geodesic.
Not true. Try lines of longitude on a sphere ($\theta=\text{constant}$). Then $\omega = d\phi$ and if you compute $(d\delta+\delta d)\omega$, you don't get $0$.
Is $\lnot\forall x(Px \lor Qx) \equiv \lnot(\forall x(Px \lor Qx))?$
Yes, they are equivalent, and they are both equivalent to $$\exists x \textrm{ such that }(\lnot Px) \wedge (\lnot Qx)$$
Continuous function at a point characterized by sequences
Here I present a counterexample to my question. I think I have found a function that is sequentially continuous at $a$, i.e. for any sequence $(x_n)_n$ on $X$ converging to $a$, the sequence $(f(x_n))_n$ converges to $f(a)$, but such that $f$ is not continuous at $a$, i.e. there exists a neighbourhood of $f(a)$ such that no neighbourhood of $a$ has its image contained in it.

Let $X=\mathbb Z^+\cup\{\ast\}\cup(\mathbb Z^+\times\mathbb Z^+)$ be the Arens' space (see here) and let $Y$ be the real line. Consider a function $f:X\rightarrow Y$ defined as follows: $$ f(x)=\begin{cases} 1/x, & \mbox{if } x\in\mathbb Z^+ \\ 0, & \mbox{if } x=\ast \\ m^n, &\mbox{if } x=(m,n)\in\mathbb Z^+\times\mathbb Z^+ . \end{cases} $$

I claim that $f$ is sequentially continuous at $a=\ast$. Indeed, the only sequences that converge to $\ast$ are $x_n=n$ for $n\geq n_0$ or $x_n=\ast$ for $n\geq n_0'$. In the first case we have $f(x_n)=1/n$, while in the second $f(x_n)=0$ (we are assuming $n_0=n_0'=0$). In both cases the sequences $(f(x_n))$ converge to $0$ in $Y$.

However, $f$ is not continuous at $a$: a neighbourhood of $\ast$ contains infinitely many isolated points $(m,n)$, which $f$ maps to big numbers $m^n$. Hence, for a neighbourhood, say $V=(-1,1)$, of $0$, there is no neighbourhood $U$ of $\ast$ such that $f(U)\subseteq V$.

Remark. Notice that this $f$ is not unique. We could choose other functions $f$ with different values on $Z=\mathbb Z^+\times \mathbb Z^+$; for example, one which is constant (different from $0$) on this subset. The important thing here is that the set $f(Z)$ must be unbounded or have a positive infimum.

Let me give some details about the topology of $X$. In terms of neighbourhoods, the topology of $X$ is described as follows. Points belonging to the plane $\mathbb Z^+\times \mathbb Z^+$ are isolated. For positive integers $n$, a fundamental system of neighbourhoods is given by the set $\{B_m(n)\}_{m\in\mathbb N^*}$, where $B_m(n)$ consists of the point $n$ together with the $n$-th column, except the first $m$ points. For example, $B_3(2)=\{2\}\cup\{(2,4),(2,5),(2,6),\dots\}$. And finally the point $\ast$: its neighbourhoods consist of the whole space $X$ minus finitely many $B_1(n)$ (i.e. some columns of the plane $\mathbb Z^+\times\mathbb Z^+$ together with the naturals that label such columns), and, in the resulting set, you are also allowed to remove finitely many points of each of the remaining columns (so you can remove infinitely many in total).

Now, my first claim: the only sequences that converge to $\ast$ are of the form $x_n=\ast$ for $n\geq n_0$ or $x_n= n$ for $n\geq n_0'$. Well, it is clear that the above sequences converge to $\ast$ (the first is obvious, and for the second, recall that neighbourhoods of $\ast$ contain all but a finite number of $n\in\mathbb Z^+$). Now suppose that $(x_n)$ is a sequence on $X$ not of the above form. We can suppose that all the terms are isolated points. If the sequence is eventually constant, we can remove that point from a neighbourhood of $\ast$, and the result follows. If it is non-constant but all the points are contained in a single column, we can remove that column, and $\ast$ is not a limit either. On the other hand, if the sequence runs along a row, since we are allowed to remove finitely many points of each column, we can define a neighbourhood $V$ of $\ast$ such that $x_n\not\in V$ for all $n\in\mathbb N$. And we are done, because there are no other convergent sequences on $X$.
Finding the fourth point of a perfect square (without knowing order of points)
Let $u,v,w\in\mathbb R^2$ be the three known vertices (we don't require the four sides of the square to be parallel to the $x$-axis or $y$-axis) and $c$ be the centre of the square. Then $c$ is also the centre of the unique circumcircle of the square. That is, $c$ is the unique solution of $\|u-c\|^2 = \|v-c\|^2 = \|w-c\|^2$. Expanding the squared norms, we get $$ 2c^T\pmatrix{u-v&v-w} = (\|u\|^2-\|v\|^2,\ \|v\|^2-\|w\|^2). $$ Thus $$ c^T=\frac12(\|u\|^2-\|v\|^2,\ \|v\|^2-\|w\|^2)\pmatrix{u-v&v-w}^{-1} $$ and the remaining vertex is given by $4c-(u+v+w)$.
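For concreteness, here is a minimal NumPy sketch of this computation (the function name and the example square are my own illustration, not from the answer; it assumes the three points really are vertices of a square, so the $2\times2$ matrix is invertible):

import numpy as np

def fourth_vertex(u, v, w):
    # Solve 2 c^T (u-v, v-w) = (|u|^2-|v|^2, |v|^2-|w|^2) for the centre c,
    # then recover the missing vertex as 4c - (u+v+w).
    u, v, w = map(np.asarray, (u, v, w))
    A = np.column_stack([u - v, v - w])
    rhs = 0.5 * np.array([u @ u - v @ v, v @ v - w @ w])
    c = np.linalg.solve(A.T, rhs)
    return 4 * c - (u + v + w)

print(fourth_vertex([0, 0], [1, 0], [1, 1]))   # [0. 1.]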
Prove that there exists one and only one $n\in\mathbb{Z}$ so that $10^{n}\leq x<10^{n+1}$
$10^n \leq x < 10^{n+1} \iff n \leq \log_{10} x < n+1$. As you note there is a unique such $n \in \mathbb{Z}$. The $\iff$ follows from the fact that $\log_{10}$ is a bijection of $(0,\infty) \to \mathbb{R}$.
Calculate $\prod_{n=2}^\infty\bigg(1-\frac1{n^2}\bigg)$
Hint. This may be seen as a telescoping product using $$ 1-\frac1{n^2}=\frac{n-1}{n} \cdot \frac{n+1}{n}, \qquad n\ge2. $$ Hope you can finish it.
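In case you want to check the endgame, the partial products collapse as $$\prod_{n=2}^{N}\left(1-\frac1{n^2}\right)=\prod_{n=2}^{N}\frac{n-1}{n}\cdot\prod_{n=2}^{N}\frac{n+1}{n}=\frac{1}{N}\cdot\frac{N+1}{2}=\frac{N+1}{2N}\longrightarrow\frac12 \quad (N\to\infty).$$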
How to interpret the following function?
First, notice that $a,b$ are subsets of $X$, there. I would be more comfortable using $A$ and $B$ instead, so it is always clear that we're dealing with sets. You're supposed to use the notion of direct image: if $f: X \to Y$ is a function, and $A \subset X$, then: $$f(A) := \{ f(x) \in Y\mid x \in A \}.$$ You must check which of the set identities are valid or not.
Interpolation inequality in Sobolev space
We have $\|\partial^\beta u\|_{L^p(U)} \leq \|u\|_{W^{k-1,p}(U)}$ Also, by the theorem, $W^{k,p}(U) \rightarrow W^{k-1,p}(U)$ is compact. Now, suppose there is no $C_\epsilon$. Then we get a sequence $u_n \in W^{k,p}(U)$ (normalize so these are all norm 1) violating it with $n$ in place of $C_\epsilon$. That is, $\|u_n\|_{W^{k-1,p}} \geq \epsilon + n \|u_n\|_{L^p(U)}$. There's a subsequence which is convergent in $W^{k-1,p}(U)$. Call the limit $u$. Now, what can you say about $\|u\|_{L^p(U)}$?
Integral foliation identity
Using a partition of unity, we assume that $V$ is a sufficiently small neighborhood of $p\in U$ such that, up to renumbering of the coordinates, $\partial_n f(p)\neq 0$. As in the implicit function theorem, define the change of coordinates $\Phi(x)=(x',f(x))=(y,t)$, where $x'=(x_1,\ldots,x_{n-1})=y$. The Jacobian gives $dy\,dt=|\partial_n f|\,dx$, hence $$\int g\,dx = \int_{\mathbb{R}}\int g(\Phi^{-1}(y,t))\frac{dy}{|\partial_n f(\Phi^{-1}(y,t))|}dt $$ Notice that $\Phi^{-1}(y,t)=(y,h(y,t))$ and $f(y,h(y,t))=t$, hence $M_t$ is the graph of the function $y\mapsto h(y,t)$. Since the area element of $M_t$ is $\sqrt{1+|\nabla_yh(\cdot, t)|^2}\,dy$, to conclude our argument we should prove that $\sqrt{1+|\nabla_yh(\cdot, t)|^2}/|\nabla f|=1/|\partial_n f|$, or $|\nabla f|^2=|\partial_nf|^2+\sum_i|\partial_nf|^2|\partial_ih|^2$, but this follows from $f(y,h(y,t))=t$ by taking the $i$-derivative. This is the Gelfand-Leray formula, and it admits generalizations to vector-valued functions $f$ instead of simply scalar ones; this is called integration over the fibre.
Calculating the Fourier series for $\frac{1}{x+i}$
I don't understand why you are trying to use int and sum instead of Int and add, for adding up a finite number of expressions and performing numeric integration. I suspect one or both was the cause of trouble. Let me know if this is quick enough. (You could also utilize a coarser tolerance in the Int calls, using say epsilon=1e-5 or some such.)

restart;
x_i := 1: l_x := 5*x_i: k_x := n*Pi/l_x: p := -1:

Try to make the following procedure quick (but flexible if you want to adjust options).

A_n := proc(n, p)
  local opts, rng;
  if not [n,p]::[numeric,numeric] then return 'procname'(args); end if;
  opts := method=_d01akc;
  rng := -l_x .. l_x;
  1/(2*l_x) *(Int(unapply(evalc(Re((x + x_i*I)^p*exp(-I*k_x*x))), x), rng, opts)
    +I*Int(unapply(evalc(Im((x + x_i*I)^p*exp(-I*k_x*x))), x), rng, opts));
end proc:

And now to compute some coefficients numerically.

A_0 := 1/(2*l_x)*evalf(Int((x + x_i*I)^(p), x = -l_x .. l_x)):
KK := A_0 + add(evalf(A_n(n,p))*exp(k_x*x*I), n = -100 .. -1)
          + add(evalf(A_n(n,p))*exp(k_x*x*I), n = 1 .. 100):

And plot that,

plot([Re(KK),Re((x + x_i*I)^(p))], x = -l_x .. l_x);
Notation for $\{n\in\mathbb{Z}:n\ge m\}$ for a given $m\in\mathbb{Z}$
Collecting all comments together (to push it out of the unanswered queue): $\Bbb Z_{\ge m}$; or $\Bbb Z^+\setminus I_{m-1}$, where $I_m:=\{n\in\Bbb N: n\le m\}$, from Real Analysis by Elon Lages Lima. Quoting the following similar questions (on $\Bbb R$): How does one denote the set of all positive real numbers? Correct notation for “for all positive real $c$”
Riemann sum to definite integral
Here's a trick: instead of using the explicit formula for Riemann sums, set $x_i=\dfrac in$. Your sum becomes $$\sum_{i=1}^n \frac{1}{n(2+x_i)\ln(2+x_i)}=\sum_{i=1}^n \frac{1}{(2+x_i)\ln(2+x_i)}\Delta x, $$ so the function is defined by $\; f(x)=\dotsm$
A normal to the hyperbola $\frac{x^2}{4}-\frac{y^2}{1}=1$ has equal intercepts on positive x and y axis...
Using a different method, I got the same solution as you did, so it’s likely that the answer key is wrong. That’s been known to happen from time to time. If the line has equal axis intercepts, then its direction vector is $(-1,1)$. For it to be normal to the hyperbola at some point, by computing the gradient of $x^2/4-y^2$ we have the condition $$\begin{vmatrix}\frac x2&-2y\\-1&1\end{vmatrix} = \frac x2-2y = 0.$$ Solving the system yields $(x,y)=\left(\pm\frac4{\sqrt3},\pm\frac1{\sqrt3}\right)$, so the two normals are $x+y=\pm\frac5{\sqrt3}$. These lines are at a distance of $\frac5{\sqrt6}$ from the origin, so considering the origin-centered circle tangent to the lines, we must have $a^2+b^2=2r^2=2\left(\frac5{\sqrt6}\right)^2 = \frac{25}3$. We can also obtain this value directly from the dual conic: all tangents $lx+my+n=0$ to the ellipse ${x^2\over a^2}+{y^2\over b^2}=1$ satisfy the dual equation $a^2l^2+b^2m^2=n^2$, which here gives $a^2+b^2=\frac{25}3$. Since $3^2=9$, I suspect that whoever put together the answer key either squared something that shouldn’t have been squared, omitted a square root, or something along those lines.
Free abelian groups in Algebraic Topology
See Wolfram Mathworld - "A free Abelian group is a group $G$ with a subset which generates the group $G$ with the only relation being $ab=ba$." "For instance, in algebraic topology, chains are formal sums of simplices, and the chain group is the free abelian group whose elements are chains." (Wikipedia).
Normal Distribution and Conditional Probability
Yes, it should definitely work. I am assuming that you know how to calculate probabilities of normal random variables. If that is the case, then calculating the probability should be trivial, because the problem now reduces to $$\frac{P(Y\gt 90)}{P(Y\gt 75)}$$ I'll continue from thereon. To compute $P(Y\gt 90)$ you first compute $P(Y\lt 90)$. To do this you first figure out how many standard deviations away from the mean $90$ is. This is very often called the z-score, which you are aware of. So, $90$ is $10$ points away from your mean ($80$); this implies that $90$ is $\frac{10}{8}=1.25$ $\sigma$s away from your mean. If you look up the probability of this particular z-score on a table, you'll see that $P(Y\lt90)=\Phi(1.25)=0.894.$ So $P(Y\gt90)=1-\Phi(1.25)=0.106.$ Similarly, $P(Y\lt75)=\Phi(-0.625)=0.266$, so $P(Y\gt75)=1-\Phi(-0.625)=1-0.266=0.734$. So $$\frac{P(Y\gt 90)}{P(Y\gt 75)}=\frac{0.106}{0.734}=0.144$$ I hope it helps.
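A quick numerical check of these values, assuming SciPy (not part of the original answer):

from scipy.stats import norm

p_90 = norm.sf(90, loc=80, scale=8)   # P(Y > 90) = 1 - Phi(1.25)
p_75 = norm.sf(75, loc=80, scale=8)   # P(Y > 75) = 1 - Phi(-0.625)
print(p_90, p_75, p_90 / p_75)        # ~0.106, ~0.734, ~0.144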
How are you supposed to differentiate $f(a) = (ax^2 + bx^3 + cx^4)^5$?
As written, via the chain rule: $$ f'(a) = 5(a x^2 + b x^3 + c x^4)^4 \cdot x^2 \text{,} $$ but I seriously doubt this is what you were asked. Alternatively, if this comes from an instructor, the instructor has noticed that the students are sloppy with their notation and has decided to point this out very clearly.
minimal polynomials of trigonometric numbers
The number $\zeta= e^{2\pi i/n}$ is a primitive $n$th root of unity. Its degree is given by Euler's phi function $\phi(n)$. Now $2\cos(2\pi/n)=\zeta+\bar \zeta$, being real, is in the fixed field of the complex conjugation automorphism: its degree is half that of $\zeta$. The same goes for $\sin(2\pi/n)$. Now $\tan(\pi/180)=\tan (2\pi/360)$. It is the ratio of two numbers of degree $\frac12\phi(360)=48$. It can be expressed in terms of $(\zeta-\bar\zeta)/(\zeta+\bar\zeta)$. Using this expression and counting distinct conjugates, we can prove its degree is $24$.
Homework on Quotient Topology
HINT: Note that $\langle x_0,y_0\rangle\sim\langle x_1,y_1\rangle$ iff $g(x_0,y_0)=g(x_1,y_1)$. In other words, $g$ is constant on $\sim$-equivalence classes and takes different $\sim$-classes to different real numbers. The quotient map $q:\Bbb R^2\to X^*$ does the same thing, if you replace different real numbers by different points of $X^*$. This suggests that you should try proving that $X^*$ is homeomorphic to the range of $g$, which is pretty clearly all of $\Bbb R$.
Is $A \rightarrow \mathcal{P}(A)$ injective when $A = \{\}$?
Normally when you want to ask if a function $f: A \to Y$ is injective, you need to specify the function. For example, the question "is $f: \mathbb{N} \to \mathbb{R}$ injective?" is not a good question, because it depends on $f$. That is, $f(x) = x$ is injective (the obvious inclusion), but $f(x) = 0$ is not (a constant map). That being said, because you want $A$ to be empty your question is actually well-defined. That is because for any set $Y$ there is only one function $\emptyset \to Y$. That is the empty function, let's call it $e$. As mentioned in the comments, the empty function is always injective. We have to check whether for all $x$ and $y$ in the domain we have that $e(x) = e(y)$ implies $x = y$. Well, there is not much to check, in fact we have nothing to check at all! So that statement is automatically true because there is nothing to check, we call this vacuously true. The above argument works for any set $Y$, so in particular for $Y = \mathcal{P}(\emptyset)$.
Question regarding a sequence and a monotone subsequence
If $x_n$ does not converge to $x_0$ then there is an $\epsilon>0$ and a subsequence $x_{n_k}$ such that $|x_{n_k}-x_0|\ge\epsilon$ for all $k$. In particular, no subsequence of $x_{n_k}$ can converge to $x_0$ either. Now, note that a subsequence of a subsequence is again a subsequence, and argue abstractly that any sequence contains a monotone subsequence. The proof of the latter is somewhat non-trivial. It can be seen as a consequence of Ramsey's theorem; Given a sequence $y_1,y_2,\dots$, define a two-coloring of the two-sized subsets of $\mathbb N$ as follows: Given $i<j$, color $\{i,j\}$ red iff $y_i<y_j$ and blue otherwise. Ramsey's theorem ensures that there is an infinite monochromatic set. The corresponding subsequence is increasing if the color is red, and decreasing otherwise. (Some analysis books include a proof that any sequence contains a monotone subsequence. Typically, the argument avoids mentioning Ramsey's theorem, but the proof tends to be very close to the usual proof of the theorem anyway.)
Either all elements of a subgroup of $\mathbb{Z}_n$ are even or exactly half of them
Sketch of proof: Consider the homomorphism $f:H\to (\{\pm 1\},\cdot)$ which sends the even numbers to $1$ and the odd numbers to $-1$. Now apply (the corollary to) Lagrange's theorem, namely $|H|=|\ker f|\,|\operatorname{Im} f|$. Clearly $\ker f=\{\text{even numbers}\}$. If $|\operatorname{Im} f|=1$ then $H$ consists only of even numbers. If $|\operatorname{Im} f|=2$ then $|\ker f|=|H|/2$, so that exactly half the numbers are even.
Probability question - the denominator in a specific problem
$r^n$ counts the ways $n$ independent choices from $r$ options can be made. So, when seven people each make a choice from five options, the probability that some three of them choose one particular option while the remaining four each choose from the four other options is: $$\dbinom 73 \dfrac{4^4}{5^7} = \dfrac{7!}{3!\,4!}\dfrac{1^3~4^4}{5^7}$$
Find the poles and residues of the complex function $f(z) =\frac{2z+1}{(z-1)^2}$
Yes, you are correct, $z=1$ is the only pole and it is of order $2$. As regards the residue evaluation note that $$\frac{2z+1}{(z-1)^2}=\frac{2z-2+3}{(z-1)^2}=\frac{3}{(z-1)^2}+\frac{2}{z-1}.$$ Now recall the definition of residue. Can you take it from here?
Find the common ratio of the geometric progression
The $N$-th partial sum is $$ S_N = a \sum_{k=0}^N q^k = \left\{ \begin{align} a\frac{q^{N+1} - 1}{q - 1}, \quad q \ne 1 \\ a (N + 1), \quad q = 1 \end{align}\right. $$ So given a problem instance $(a, S, N)$, the first step is to treat the case $a = 0$, which implies $S=0$; in this case any number will work as a value for $q$. For the following we assume $a \ne 0$. If $a \ne 0$ and $S = 0$, then $q=-1$ is a solution when $N$ is odd, and there is no real solution otherwise. For the following we assume $S \ne 0$ as well. The next thing would be to check whether $$ S = a (N+1) $$ in which case $q=1$ is a solution. For the following we assume $q \ne 1$ too, having $a \ne 0$, $S \ne 0$, $q \ne 1$ as constraints. If we still have the case $S = a$, then this would require $q = q^{N + 1}$, which has $q = 0$ as a solution, and additionally $q = -1$ in case of even $N$. Looking for fixed points: one method would be to search for a fixed point $q^{*}$ of $$ f(q) = \frac{a}{S} q^{N+1} + 1 - \frac{a}{S} $$ which fulfills $f(q^{*}) = q^{*}$. This version of the original problem is easier to reason about, because one can treat it as the geometric problem of the graph of $f(q) = u \, q^n + v$ (two cases for even and odd exponents $n$, two cases for positive or negative factor $u$) crossing the diagonal line $\operatorname{id}(q) = q$. For odd $N$ the exponent in $f$ is even; we get a parabola-like graph which has zero, one or two crossings with $\operatorname{id}$, and thus that many solutions. For even $N$ the exponent in $f$ is odd, and for $N \ge 2$ there might even be a third solution.
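Since the fixed points of $f$ are exactly the roots of $a\,q^{N+1}-S\,q+(S-a)=0$ (with $q=1$ always appearing as a spurious root), one can also just hand that polynomial to a numerical root finder. A minimal sketch, assuming NumPy, with a hypothetical instance $a=1$, $N=2$, $S=7$:

import numpy as np

def ratios(a, S, N, tol=1e-6):
    # Coefficients of a*q^(N+1) - S*q + (S - a), highest degree first.
    coeffs = [a] + [0] * (N - 1) + [-S, S - a]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < tol].real
    # q = 1 is always a root of this polynomial; it is a genuine solution
    # only in the special case S = a*(N + 1).
    return [q for q in real if abs(q - 1) > tol or abs(S - a * (N + 1)) < tol]

print(ratios(1, 7, 2))   # q = 2 and q = -3 (order may vary): 1 + q + q^2 = 7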
Are there fields and ordered fields of every infinite cardinality?
What about the field $\mathbf{Q}\left( \{ T_{i}\;|\; i\in \alpha \} \right)$, where the $T_i$ are independent formal variables indexed by $\alpha$?
Differential equation of the form $y^2y'' = a$
Divide by $y^2$, multiply both sides by $y'$, and integrate to give $\frac{y'^2}{2}=c_1-\frac{a}{y}$. Take the square root and integrate again: $\int \frac{dy}{\sqrt{c_1-a/y}}= \sqrt{2}\,x +c_2$. That gets nasty if $c_1\ne0$, but Wolfram Alpha can do it: http://www.wolframalpha.com/input/?i=integrate+dy+%2F+sqrt(c-a%2Fy). Which root you want should be clear from the boundary conditions or obvious from the task at hand.
How can I quickly know the rank of this / any other matrix?
If the determinant of $A$ is nonzero, then it is "full rank." This is easy to program; however, if you are working with pen and paper, it is probably not the easiest. Next, row operations. Clearly row 1 is independent from rows 2 and 3, as it is the only one with a non-zero entry in the first column. Are 2 and 3 independent from each other? Yes they are (one is not a multiple of the other). This matrix is full rank.
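A quick programmatic check, assuming NumPy; the matrix here is a hypothetical stand-in with the shape described above (only row 1 has a nonzero entry in the first column, and rows 2 and 3 are not multiples of each other):

import numpy as np

A = np.array([[1., 2., 3.],
              [0., 1., 4.],
              [0., 2., 5.]])
print(np.linalg.matrix_rank(A))   # 3, i.e. full rank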
For $a,b,m,n \in \mathbb{Z}^+$, does there always exist $x,y \in \mathbb{Z}^+$ with ${\rm gcd}(x,y) = 1$ but ${\rm gcd}(ax+m,by+n) > 1$?
Yes, we can. Suppose $p$ is a prime number that is not a divisor of any number in the problem. There exists an integer $x$ such that $p\mid ax+m$. Furthermore, there exists an integer $z$ such that $p\mid b(xz+1)+n$. Let $y=xz+1$. To be precise, we are using Bézout's identity. 1) Because $\gcd(a, p)=1$, there exist integers $x$ and $w$ such that $$ax+pw=-m,$$ which implies $p\mid ax+m$. 2) Since $\gcd(p, bx)=1$, there exist integers $z$ and $u$ such that $$(bx)z+pu = -(b + n),$$ which implies $p\mid b(xz+1)+ n$
Notation for separating out factors of a number
[Note: of course for one integer $n$, there is no such decomposition.] The most common way to say this is indeed some variation of "Write $n$ as $2^r m$ with $m$ odd." If you really want alternatives (there might be contexts where these read better due to placing emphasis on $r$ or on $m$), you might also consider: "Let $2^r$ be the largest power of $2$ dividing $n$" (this gives you the option of defining this directly if you care more about the factor than the exponent); "Suppose $2^r \| n$", where $\|$ is understood as the "prime power which exactly divides" relation; "Let $r = v_2(n)$", with $v_2(\cdot)$ being the $2$-adic order function; "Let $m$ be the odd part of $n$", abusing the fact that the term "odd part" is reasonably hard to misinterpret when applied to an integer (as opposed to, say, a function $f :\mathbb R \to \mathbb N$). But for most purposes the first formulation is the go-to idiom, even though it sounds clunky when you (intentionally, I presume) string it out to an extreme length.
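In code form (a hypothetical helper for illustration, assuming Python):

def odd_part(n):
    # n = 2**r * m with m odd; r = v_2(n) is the 2-adic order.
    r = (n & -n).bit_length() - 1
    return r, n >> r

print(odd_part(40))   # (3, 5), since 40 = 2**3 * 5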
Evaluate $\int_{0}^{\pi/4}x\ln^{2}(\sin(x))dx$
A solution by Cornel Ioan Valean. First, let's observe that $$\log ^2\left(\frac{1}{2}\sin(2x)\right)+\log ^2(\tan(x))=2 \log ^2(\sin(x))+2 \log ^2(\cos(x)), \ 0<x<\pi/2, \tag1$$ and if we multiply both sides of $(1)$ by $x$ and integrate from $x=0$ to $x=\pi/4$, we get $$\int_0^{\pi/4}x\log ^2\left(\frac{1}{2} \sin(2x)\right)\textrm{d}x+\int_0^{\pi/4}x\log ^2(\tan(x))\textrm{d}x$$ $$=2 \int_0^{\pi/4}x\log ^2(\sin(x))\textrm{d}x+2\int_0^{\pi/4}x \log ^2(\cos(x)) \textrm{d}x. \tag2$$ Since $$\int_0^{\pi/4}x \log ^2\left(\frac{1}{2} \sin(2x)\right)\textrm{d}x=\frac{1}{4}\int_0^{\pi/2}x\log^2\left(\frac{1}{2} \sin(x)\right)\textrm{d}x$$ $$=\frac{\log^2(2)}{4}\int_0^{\pi/2}x\textrm{d}x-\displaystyle\frac{\log(2)}{2}\underbrace{\int_0^{\pi/2}x\log(\sin(x))\textrm{d}x}_{\displaystyle 7/16 \zeta(3)-3/4\log(2)\zeta(2)}+\frac{1}{4}\int_0^{\pi/2}x \log ^2(\sin(x))\textrm{d}x$$ and $$\int_0^{\pi/4} x\log^2(\cos(x)) \textrm{d}x= \int_{\pi/4}^{\pi/2} \left(\frac{\pi}{2}-x\right) \log^2(\sin(x)) \textrm{d}x$$ $$=\int_{0}^{\pi/2} \left(\frac{\pi}{2}-x\right) \log^2(\sin(x)) \textrm{d}x-\int_{0}^{\pi/4} \left(\frac{\pi}{2}-x\right) \log^2(\sin(x)) \textrm{d}x$$ $$=\frac{\pi}{2}\underbrace{\int_{0}^{\pi/2}\log ^2(\sin(x))\textrm{d}x}_{\displaystyle \pi^3/24+\pi/2 \log^2(2)}-\int_{0}^{\pi/2}x\log ^2(\sin(x))\textrm{d}x-\frac{\pi}{2}\int_{0}^{\pi/4}\log ^2(\sin(x))\textrm{d}x$$ $$+\int_{0}^{\pi/4}x\log ^2(\sin(x))\textrm{d}x,$$ based on $(2)$, we obtain that $$\int_0^{\pi/4} x \log ^2(\sin(x)) \textrm{d}x$$ $$=-\frac{15 }{16}\zeta(4)-\frac{39}{64}\log^2(2)\zeta (2)-\frac{7}{128} \log(2)\zeta(3)+\frac{\pi}{4}\int_0^{\pi/4} \log ^2(\sin (x)) \textrm{d}x$$ $$+\frac{9}{16}\int_0^{\pi/2} x \log ^2(\sin (x)) \textrm{d}x+\frac{1}{4} \int_0^{\pi/4} x \log^2(\tan(x)) \textrm{d}x$$ $$=\frac{1}{8}\log (2)\pi G+\frac{5}{384}\log ^4(2)+\frac{5}{32}\log^2(2)\zeta(2)-\frac{35}{128}\log(2)\zeta(3)+\frac{95}{256}\zeta(4)$$ $$-\frac{\pi}{4}\Im\biggr\{\operatorname{Li}_3\left(\frac{1+i}{2}\right)\biggr\}+\frac{5 }{16}\operatorname{Li}_4\left(\frac{1}{2}\right),$$ which is the desired closed form. In the calculations we also used that $$\int_0^{\pi/4} \log^2(\sin (x))\textrm{d}x=\frac{23}{384}\pi^3+\frac9{32}\pi\log^2(2)+\frac{1}{2}\log(2)G-\Im\biggr\{\operatorname{Li}_3\left(\frac{1+i}{2}\right)\biggr\},$$ which can be extracted from this answer, next $$\int_0^{\pi/2} x \log ^2(\sin (x))\textrm{d}x=\operatorname{Li}_4\left(\frac{1}{2}\right)+\frac{1}{24}\log^4(2)+\frac{1}{2}\log^2(2)\zeta(2)-\frac{19}{32}\zeta(4),$$ which is proved here and here, and finally $$\int_0^{\pi/4} x \log^2(\tan(x)) \textrm{d}x=\int_0^1 \frac{\arctan(x)}{1+x^2} \log^2(x) \textrm{d}x$$ $$=\frac{151}{128}\zeta(4)+\frac{1}{4}\log^2(2)\zeta(2)-\frac{7}{8}\log(2)\zeta(3)-\frac{1}{24}\log^4(2)-\operatorname{Li}_4\left(\frac{1}{2}\right),$$ which can be extracted by using a key advanced series from this answer, beautifully derived with the help of a Fourier series from the book, (Almost) Impossible Integrals, Sums, and Series. End of (beautiful) story. A note: the solution nicely and completely avoids the necessity of using contour integration.
Surjectivity of canonical homomorphism between free objects in varieties of algebras
Yes. We have $h(\eta_V(x))=\eta_{V'}(x)$, and the elements $\eta_{V'}(x)$ generate $F_{V'}(X)$.
Equivalence of NFA with and without $\epsilon$-transitions.
It's possible to see that $\beta(r,a) = E \left(\bigcup_{t \in \beta(r,\epsilon)} \alpha(t,a) \right) = E \left(\bigcup_{t \in E(r)} \alpha(t,a) \right) = E \left(\alpha'(E(r),a) \right)$, where $\alpha'(R,a) = \bigcup_{r \in R} \alpha(r,a)$. Thus, it follows from the definition of $E: Q \to \mathcal{P}(Q)$ that: $$\bigcup_{r \in \beta(q,w)} \beta(r,a) = \bigcup_{r \in \beta(q,w)} E(\alpha'(E(r),a)) = E \left( \bigcup_{r \in \beta(q,w)} \alpha'(E(r),a) \right)$$ The last equality holds because for a state $x$ to be in $\bigcup_{r \in \beta(q,w)} E(\alpha'(E(r),a))$, it must be in the $\epsilon$-closure of some $\alpha'(E(r),a)$, $r \in \beta(q,w)$. We know $\alpha'(E(r),a) \subseteq \bigcup_{r \in \beta(q,w)} \alpha'(E(r),a)$, and, therefore, we may conclude that $x$ is in the $\epsilon$-closure of the latter set, and that: $$\left( \bigcup_{r \in \beta(q,w)} E(\alpha'(E(r),a)) \right) \subseteq E \left( \bigcup_{r \in \beta(q,w)} \alpha'(E(r),a) \right)$$ The other direction follows similarly. With this we can arrive at your desired equality: $$E \left( \bigcup_{r \in \beta(q,w)} \alpha'(E(r),a) \right) = E \left( \bigcup_{r \in \beta(q,w)} \alpha(r,a) \right)$$ That's so because there isn't any state $r \in \beta(q,w)$ such that there is a path of $\epsilon$-transitions from it to another state not already in $\beta(q,w)$.
Open Nodes More Than Twice during A* Search?
Your graph is monotone (the heuristic is consistent). Notice that the algorithm does not state that a vertex does not come up twice in the algorithm, just that you do not need to reconsider vertices added to closedSet. I get five steps, following the pseudocode described for monotone graphs.

1. current = s, openSet = {}, closedSet = {s}. Neighbors are x and y, neither in closedSet; after considering both we have cameFrom = [-,s,s,-,-], gScore = [0,2,7,infty,infty], fScore = [3,4,8,infty,infty].
2. current = x, openSet = {y}, closedSet = {s,x}. Neighbors are z and t, neither in closedSet; after considering both we have cameFrom = [-,s,s,x,x], gScore = [0,2,7,5,12], fScore = [3,4,8,6,12].
3. current = z, openSet = {y,t}, closedSet = {s,x,z}. Neighbors are y and t, neither in closedSet; after considering both we have cameFrom = [-,s,z,x,z], gScore = [0,2,6,5,11], fScore = [3,4,7,6,11].
4. current = y, openSet = {t}, closedSet = {s,x,y,z}. The only neighbor is t, not in closedSet; after considering it we have cameFrom = [-,s,z,x,y], gScore = [0,2,6,5,9], fScore = [3,4,7,6,9].
5. current = t, which is the goal, so we finish.

This gives us our (optimal) path of $sxzyt$ having distance 9. We do not discover $y$ twice because our algorithm doesn't allow us to discover vertices twice. There are more complicated algorithms that can apply to nonmonotone graphs; you certainly are allowed to use these algorithms. I think they will have you discover $y$ twice, but they will not give a better solution than the one we discovered above (in a nonmonotone graph you may need this more complicated algorithm to find an optimal solution). A runnable sketch of this search appears below.
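Here is a compact A* sketch of the five steps above, assuming Python; the edge weights and heuristic values are read off the gScore/fScore tables in the trace, so treat them as reconstructed assumptions:

import heapq

graph = {'s': {'x': 2, 'y': 7}, 'x': {'z': 3, 't': 10},
         'z': {'y': 1, 't': 6}, 'y': {'t': 3}, 't': {}}
h = {'s': 3, 'x': 2, 'y': 1, 'z': 1, 't': 0}   # consistent heuristic

def a_star(start, goal):
    g, came_from, closed = {start: 0}, {}, set()
    open_heap = [(h[start], start)]            # entries are (fScore, vertex)
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = [cur]                       # walk came_from back to start
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1], g[goal]
        if cur in closed:                      # skip stale heap entries
            continue
        closed.add(cur)
        for nb, w in graph[cur].items():
            tentative = g[cur] + w
            if nb not in closed and tentative < g.get(nb, float('inf')):
                g[nb], came_from[nb] = tentative, cur
                heapq.heappush(open_heap, (tentative + h[nb], nb))
    return None, float('inf')

print(a_star('s', 't'))   # (['s', 'x', 'z', 'y', 't'], 9)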
How to compute $\lim\limits_{x \to 0} \left(\frac{e^{x^2} -1}{x^2}\right)^\frac{1}{x^2}$?
Right limit. Set $y=x^{-1}$ then as $y\to \infty$ we obtain $$ \left(y^2 \left(e^{2/y}-1\right)\right)^{y^2}\approx \left(y^2\cdot \frac{2}{y}\right)^{y^2} \to \infty. $$ Left limit. Suppose now that $y=-x^{-1}$. Then as $y\to \infty$ it holds $$ \left(y^2 \left(e^{-2/y}-1\right)\right)^{-y^2}\approx \left(y^2\cdot \frac{-2}{y}\right)^{-y^2}=\left(\frac{1}{2y}\right)^{y^2} \to 0. $$ The limits are different, therefore it does not exist.
Does $\sum \limits_{n=1}^{\infty}{\frac{(\ln n)^2}{n^2}}$ converge?
Since for all $\,n\,$ big enough we have that $\,\log n<n^\epsilon\;$ , with arbitrary $\,\epsilon>0\;$ , we get $$\frac{\left(\log n\right)^2}{n^2}<\frac{\left(n^{1/4}\right)^2}{n^2}=\frac1{n^{3/2}}\ldots$$
Real Analysis Integral
To show Riemann integrability, it suffices for any $\epsilon > 0$ to find a partition of $[0,2]$ over which the upper and lower sums differ by at most $\epsilon$. In this case, $[0, 1 - \frac{\epsilon}{2}], [1 - \frac{\epsilon}{2}, 1 + \frac{\epsilon}{2}], [1 + \frac{\epsilon}{2}, 2]$ will work, since the lower sum is zero and the upper $\epsilon$.
Report Comparing two Subsets against Aggregate
Turns out it was a complicated bit. To project the worth of a given sector, distinct months were used for each set when the aggregate and the subsequent subsets were calculated; because these added up to different month counts, they created different 'worth' ratios, so the aggregate didn't match ($V_{a} + V_{b} \neq V_{t}$). Since the aggregate represents the full picture and a proper representation of the target audience, I had to approximate the relative time (vs the aggregate) that the subsets represented. This also took into account response rates, which are known for the subsets; the aggregate's response rate is an aggregation of those. So the subsets' worth figure had to be calculated off of the aggregate, neutralizing the response rate that was figured into the aggregate. $$ \begin{equation*} W_{a} = \frac{\sum^{THEO}_{a}}{C_{a}*\frac{M_{t}*E_{t}}{C_{t}}}*E_{a} \end{equation*} $$ $$ E_{t} = \frac{E_{a} * C_{a} + E_{b} * C_{b}}{C_{t}} $$ $C_{n} =$ the number of people within a set, where $$ C_{a} + C_{b} = C_{t} $$ $E_{n} =$ the estimated frequency at which a segment is to respond. The aggregate months of a subset are not considered, because these subsets need to be a representation of the whole, and their 'counts' are already a part of the whole's data.
Perfect set without rationals
An easy example comes from the fact that a number with an infinite continued fraction expansion is irrational (and conversely). The set of all irrationals with continued fractions consisting only of 1's and 2's in any arrangement is a perfect set of irrational numbers.
How to find $\int \frac{\sin x}{2\cos 2x} \operatorname{d}x$?
Substituting $\cos x = y$, the integral becomes $$ \frac{1}{2} \int \frac{(-dy)}{2y^2-1} = \frac{1}{4} \int \frac{1}{\frac{1}{2}-y^2} dy. $$ Since the integrand is a rational function of $y$, we can write it in terms of partial fractions: $$ \frac{1}{\frac{1}{2}-y^2} = \frac{1}{\sqrt{2}} \left(\frac{1}{\frac{1}{\sqrt 2} +y} + \frac{1}{\frac{1}{\sqrt 2}-y} \right). $$ Can you take it from here?
Difficulty in understanding the proof of open mapping theorem and maximum modulus principle
If $g$ has a zero in the disc $D=\{z:|z-z_0|&lt;\delta\}$, then there is $z\in D$ with $f(z)=w$. This means that if $z$ is "near" to $w_0$, say in an $\varepsilon$-neighbourhood $U=\{w:|w-w_0|&lt;\varepsilon\}$ then $w\in f(D)$. This means that $U\subseteq f(D)\subseteq f(\Omega)$. Therefore $f(\Omega)$ is a neighbourhood of each of its points, and so is open. For the last question, split into cases $f(z_0)=0$ and $f(z_0)\ne0$. For the first case, you only need that $f$ takes nonzero values near $z_0$. In the second case you have $f(z_0)(1+\varepsilon)\in f(D)$ for small positive $\varepsilon$.
Expectation over an expected value in ergodic process
First of all, I think you should correct your writing. At first I was a little confused, because I thought that by $x_n$ you meant random variables, but now I think I see what you mean. Please, next time, write down all the information you have; this makes it much easier for others to answer. We assume $f\in\mathcal{L}^1(P)$, $x\in X$ (where $(X,\sigma,P)$ is your probability space) and that $T$ is a measure-preserving transformation. We write $x_i$ for $T^ix$. Then the ergodic theorem gives $$\frac{1}{N} \sum_{i}f(x_{i}) \to {E}[f]\qquad (N\to\infty)\qquad a.s.$$ Define $f^n(y):=f(T^n y)$. Since $T$ is a measure-preserving transformation, we have $$E[f^n]=E[f].$$ I think this is what you mean by the rhs of your second equation. The expression $E[f(x_n)]$ doesn't make any sense to me, since we are not talking about random variables. So what you want to show is $$\frac{1}{N} \sum_{i}E[f^i] \to \lim_{n\to\infty}E[f^n]\qquad(\ N\to\infty).$$ But on the lhs you have $\frac{1}{N} \sum_{i}E[f^i]=\frac{1}{N} \sum_{i}E[f]=E[f]$ and on the rhs $E[f^n]=E[f]$. So there is no need of the limit anywhere.
Complex matrices are similar over $R$ iff they and their conjugates are similar over $C$.
The only direction you need help with here is the "only if" direction. We are given that $$ A = SBS^{-1}, \quad \bar A = S\bar BS^{-1}. $$ Rearranging these equations and taking the conjugate of both sides on the right brings us to the equations $$ AS = SB, \quad A \bar S = \bar S B. $$ We can therefore conclude that $H$ and $C$ (the real and imaginary parts of $S$) satisfy $AH = HB$ and $AC = CB$. The catch, however, is that we need an invertible real matrix $T$ for which $AT = TB$, and we have no guarantee that either $H$ or $C$ is invertible. With that in mind, consider the polynomial $$ p(t) = \det(H + Ct). $$ We know that $p(i) \neq 0$, which means that $p$ cannot be the zero-polynomial. Thus, $p$ has at most finitely many zeros. Thus, there necessarily exists a $t \in \Bbb R \subset \Bbb C$ for which $p(t) \neq 0$. If we define $T = H + Ct$, then we see that $T$ is real, $T$ is invertible, and $AT = TB$. We can rearrange this last equation to get $$ A = TBT^{-1}, $$ which was what we wanted.
is a convex continuous function absolutely continuous
This is true. In fact, it is locally in $W^{1,p}$ for any $p\in [1,\infty]$ because a convex function is locally Lipschitz.
Are the sets $\mathbb{R}$ and $\mathbb{R}_{>0}$ equinumerous?
The exponential map $\exp \colon \mathbb{R} \to \mathbb{R}_{>0}$ is a (continuous) bijection with (continuous) inverse $\log \colon \mathbb{R}_{>0} \to \mathbb{R}$. You could also use the map $$ \phi \colon \mathbb{R} \to (-1,1), \quad x \mapsto \frac{x}{1+|x|}. $$ This is a bijection with inverse $$ \psi \colon (-1,1) \to \mathbb{R}, \quad x \mapsto \frac{x}{1-|x|}. $$ Because $\phi$ restricts to a bijection $\phi|_{\mathbb{R}_{>0}} \colon \mathbb{R}_{>0} \to (0,1)$ and $$ f \colon (-1,1) \to (0,1), \quad x \mapsto \frac{x+1}{2} $$ is also a bijection, the composition $\psi|_{(0,1)} \circ f \circ \phi \colon \mathbb{R} \to \mathbb{R}_{>0}$ is bijective.
Proof verification - Minimal polynomial of $\alpha$ has integer coef.s iff it's an alg. integer
Essentially, the lemma we need to prove is: Suppose $f(X) \in \mathbb Z[X]$ is a monic polynomial, and suppose $f(X) =q(X)r(X)$, where $q(X)$ and $r(X)$ are monic polynomials in $\mathbb Q[X]$. Then $q(X)$ is in $\mathbb Z[X]$. I'll try my best to guide you through a proof that incorporates as many of your ideas as possible. Define $V$ and $U$ to be the smallest positive integers such that $Vq(X) \in \mathbb Z[X]$ and $Ur(X) \in \mathbb Z[X]$. I invite you to check that the greatest common divisor of the coefficients of $Vq(X)$ is one. The same is true of the coefficients of $Ur(X)$. While checking this, it is important to think about why we need $q(X)$ and $r(X)$ to be monic. Suppose, for contradiction, that $V > 1$. Then there exists a prime number $p$ that divides $V$. Consider the equation $$ (Vq(X))(Ur(X)) = UVf(X)$$ as an equation in $\mathbb Z_p[X]$. I encourage you to show that $UVf(X)$ is zero in $\mathbb Z_p[X]$, and that $Vq(X)$ and $Ur(X)$ are non-zero in $\mathbb Z_p[X]$. Finally, you can think about how this contradicts the fact that $\mathbb Z_p[X]$ is an integral domain, and hence conclude that $V = 1$. So the idea of this argument is very similar to your argument, except that we are working in $\mathbb Z_p[X]$ rather than $\mathbb Z_{(UV)}[X]$. The latter is not necessarily an integral domain, and that is the important difference.
Uniform convergence of $xe^{-nx}$
$$(xe^{-nx})'=e^{-nx}-nxe^{-nx}=e^{-nx}(1-nx)$$ is $0$ only if $x=\frac{1}{n}$, so the maximum is attained at this point. So: $$\lVert f_{n}\rVert_{C^{0}}=\frac{1}{ne}\to0$$ as $n$ tends to $\infty$. Alternatively, if you are aware that $e^{x}>x$, then for $x\ge0$: $$xe^{-nx}=\frac{nx}{e^{nx}}\cdot\frac{1}{n}\le\frac{1}{n}$$ which holds for all $x\ge0$. This shows that the convergence is uniform.
Polynomials in irrational powers and their roots
Probably the solutions are almost always transcendental, but I don't know if this is provable in general with current technology. Known results along these lines include the Gelfond-Schneider and Lindemann-Weierstrass theorems.
Cube root of a complex number
The $i$ should not be inside the square root. It should be $$\left(\frac{-1+\sqrt3 \,i}{2}\right)^{\!2}=\frac{-1-\sqrt3 \,i}{2}$$ rather than $$\left(\frac{-1+\sqrt{3 i}}{2}\right)^{\!2}=\frac{-1-\sqrt{3 i}}{2}$$ It might be easier to understand it as $$\exp \left(\frac{2\pi i}3 \right)^{\!2}=\exp \left(\frac{4\pi i}3 \right)$$
Let $f$ be an irreducible cubic polynomial over $\mathbb Q$ with exactly one real root and $K$ the splitting field of $f$.
Let us assume $[K:\mathbb{Q}]$ is an odd number; then we shall prove that all roots of $f$ are real numbers. If $f$ has only one real root ($f$ is a cubic, so it always has at least one real root), denote by $\alpha_1,\alpha_2,\alpha_3$ the three roots of $f$, in which $\alpha_1 \in \mathbb{R}$. The complex conjugation $\sigma: K \longrightarrow K, x + iy \mapsto x - iy=\overline{x+iy}$ satisfies $g(\overline{z}) = \overline{g(z)}$ for all polynomials, so in particular $\sigma \in \mathrm{Gal}(K/\mathbb{Q})$, and furthermore $\sigma(\alpha_1)=\alpha_1,\sigma(\alpha_2)=\alpha_3,\sigma(\alpha_3)=\alpha_2$, which implies that $\sigma^2 = 1$. Consequently, $2 =\mathrm{ord}(\sigma) \mid [K: \mathbb{Q}] = \left |\mathrm{Gal}(K/\mathbb{Q}) \right|$ (since $\mathbb{Q}$ has characteristic $0$), which contradicts our assumption.
Mellin space convolution and usual convolution
Assume the following definitions of the Fourier transform, inverse transform, and related convolution. (1) $\quad F(s)=\mathcal{F}_x[f(x)](s)=\int\limits_{-\infty}^{\infty}f(x)\,e^{-i\,s\,x}\,dx$ (2) $\quad f(x)=\mathcal{F}_s^{-1}[F(s)](x)=\frac{1}{2\,\pi}\int_{-\infty}^{\infty}F(s)\,e^{i\,s\,x}\,ds$ (3) $\quad f*_{\mathcal{F_x}}g\,(y)=\int\limits_{-\infty}^{\infty}f(x)\,g(y-x)\,dx\,,\quad y\in\mathbb{R}$ Assume the following definitions of the Mellin transform, inverse transform, and related convolution. (4) $\quad F(s)=\mathcal{M}_x[f(x)](s)=\int\limits_0^{\infty}f(x)\,x^{s-1}\,dx$ (5) $\quad f(x)=\mathcal{M}_s^{-1}[F(s)](x)=\frac{1}{2\,\pi\,i}\int\limits_{c-i \infty}^{c+i\,\infty}F(s)\,x^{-s}\,ds$ (6) $\quad f*_{\mathcal{M_x}}g\,(y)=\int\limits_0^{\infty}f(x)\,g\left(\frac{y}{x}\right)\frac{dx}{x}\,,\quad y&gt;0$ The Fourier transform of the Fourier convolution $f(x)*_{\mathcal{F}}g(x)$ is the product of the Fourier transforms of $f(x)$ and $g(x)$ as illustrated in (7) below. (7) $\quad\mathcal{F}_x[f(x)*_{\mathcal{F}}g(x)](s)=\mathcal{F}_x[f(x)](s)\,\mathcal{F}_x[g(x)](s)$ Likewise, The Mellin transform of the Mellin convolution $f(x)*_{\mathcal{M}}g(x)$ is the product of the Mellin transforms of $f(x)$ and $g(x)$ as illustrated in (8) below. (8) $\quad\mathcal{M}_x[f(x)*_{\mathcal{M}}g(x)](s)=\mathcal{M}_x[f(x)](s)\,\mathcal{M}_x[g(x)](s)$ The relationships between the Fourier and Mellin transforms are as follows. Note in (9) and (10) below the Mellin transforms are evaluated at $-i\,s$. (9) $\quad\mathcal{F}_u\left[f\left(e^u\right)\right](s)=\mathcal{M}_x[f(x)](-i\,s)$ (10) $\quad\mathcal{M}_u\left[f(\log(u))\right](-i\,s)=\mathcal{F}_x[f(x)](s)$ The derivation of the Mellin transform from the Fourier transform in (9) above can be verified with the variable substitution $x=e^u$ in the Fourier transform integral in (11) below. Since $x=e^u$, $e^{-i\,s\,u}=x^{-i\,s}$, $du=\frac{dx}{e^u}=\frac{dx}{x}$, the lower integration limit becomes $e^{-\infty}=0$, and the upper integration limit becomes $e^{\infty}=\infty$. (11) $\quad\mathcal{F}_u\left[f\left(e^u\right)\right](s)=\int\limits_{-\infty}^{\infty}f\left(e^u\right)\,e^{-i\,s\,u}\,du=\int\limits_0^{\infty}f(x)\,x^{-i\,s-1}\,dx=\mathcal{M}_x[f(x)](-i\,s)$ The derivation of the Fourier transform from the Mellin transform in (10) above can be verified with the variable substitution $x=\log(u)$ in the Mellin transform integral in (12) below. Since $x=\log(u)$, $u=e^x$, $dx=\frac{du}{u}$, the lower integration limit becomes $\log(0)=-\infty$, and the upper integration limit becomes $\log(\infty)=\infty$. (12) $\quad\mathcal{M}_u\left[f(\log(u))\right](-i\,s)=\int_0^{\infty}f(\log(u))\,u^{-i\,s-1}\,du=\int_{-\infty}^{\infty}f(x)\,e^{-i\,s\,x}\,dx=\mathcal{F}_x[f(x)](s)$ Assuming $F(u)=f(e^u)$ and $G(u)=g(e^u)$, the Mellin convolution can be derived from the Fourier convolution as follows where the Fourier convolution is evaluated with the variable substitution $x=e^u$. Since $x=e^u$, $e^{\log(y)-u}=\frac{e^{\log(y)}}{e^u}=\frac{y}{x}$, $du=\frac{dx}{e^u}=\frac{dx}{x}$, the lower integration limit becomes $e^{-\infty}=0$, and the upper integration limit becomes $e^{\infty}=\infty$. 
(13) $\quad F*_{\mathcal{F_u}}G\,(\log(y))=\int\limits_{-\infty}^{\infty}f\left(e^u\right)g\left(e^{\log(y)-u}\right) \,du=\int\limits_0^{\infty}f(x)\,g\left(\frac{y}{x}\right)\frac{dx}{x}=f*_{\mathcal{M_x}}g\,(y)$ Assuming $F(u)=f(\log(u))$ and $G(u)=g(\log(u))$, the Fourier convolution can be derived from the Mellin convolution as follows where the Mellin convolution is evaluated with the variable substitution $x=\log(u)$. Since $x=\log(u)$, $\log\left(\frac{e^y}{u}\right)=\log\left(e^y\right)-\log(u)=y-x$, $dx=\frac{du}{u}$, the lower integration limit becomes $\log(0)=-\infty$, and the upper integration limit becomes $\log(\infty)=\infty$. (14) $\quad F*_{\mathcal{M_u}}G\,(e^y)=\int_0^{\infty}f(\log(u))g\left(\log\left(\frac{e^y}{u}\right)\right)\frac{du}{u}=\int_{-\infty }^{\infty }f(x)\,g(y-x)\,dx=f*_{\mathcal{F_x}}g\,(y)$
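A small numerical sanity check of the Mellin convolution $(6)$, assuming SciPy: for $f(x)=g(x)=e^{-x}$ the convolution integral has the known closed form $2K_0(2\sqrt y)$ (a standard Bessel-function identity, used here only for comparison):

import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def mellin_conv(y):
    # f *_M g (y) = int_0^inf exp(-x) * exp(-y/x) dx/x
    return quad(lambda x: np.exp(-x - y / x) / x, 0, np.inf)[0]

for y in (0.5, 1.0, 2.0):
    print(mellin_conv(y), 2 * kv(0, 2 * np.sqrt(y)))   # pairs should agree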
$f_n = (\frac{1}{n})\chi_{[n, +\infty)}$. Find $\lim \int f_n d\lambda$.
For a set $A$ of finite measure, we have $\int a\chi_{A} \text{d} \lambda = a\lambda(A)$ for all $a \in \mathbb{R}$. We note that for $f_n$, the functions $g_{n,m} = \frac{1}{n} \chi_{[n,m]}$ for $m>n$ converge pointwise and increasingly to $f_n$, so in this case we may interchange limit and integration (monotone convergence). So we have $$ \int f_n \text{d} \lambda = \lim_{m \to \infty} \int g_{n,m} \text{d} \lambda = \lim_{m \to \infty} \frac{m-n}{n} = \infty.$$ So for all $n \in \mathbb{N}$, we have $\int f_n \text{d} \lambda = \infty$, and so is their limit as $n \to \infty$.
One-parameter group of Lie groups
Let $K$ be the closure of this one-parameter group. Then $K$ is compact abelian, so any representation of $K$ is completely reducible, and all irreducible representations are 1-dimensional. Applying this to the adjoint representation shows that $\mathrm{ad}_\eta$ is semisimple.
Convergence of $\frac{\sin(nx)}{1+\vert x\vert^2}$
This is homework-like so I'll just give some hints. Show that if ${\sin(n_kx) \over 1 + x^2}$ converges in $L^1$ to some $f(x)$, then $\int_R{\sin(n_kx) \over 1 + x^2} g(x)$ converges to $\int_R f(x)g(x)$ for any bounded $g(x)$. Show this contradicts the Riemann-Lebesgue lemma in some way. Note you have to consider the case where $f(x) = 0$ too.
Calculate domain of $f(x) = \sqrt{\frac{1}{x-4} + 1 }^2 -9$
Note that $$\frac{1}{x-4} + 1 \geq 0 \\ \Rightarrow \frac{1+(x-4)}{x-4} \geq 0 \\ \Rightarrow \frac{x-3}{x-4} \geq 0$$ Can you figure out the rest? (Sign chart)
$2\int_{0}^{\infty}\frac{\sin(s\tan^{-1}(x))}{(1+x^2)^{\frac{s}{2}}(e^{2x\pi}-1)}dx+\frac{1}{2}+\frac{1}{s-1}=\zeta(s)$
FYI, if you know the Hermite-like integral representation of the Lerch transcendent, then you get the proof by specializing to the values $z=1$ and $a=1$: $$\Phi(z,s,a)=\dfrac{1}{2a^s}+\int\limits_0^\infty \dfrac{z^t}{(a+t)^s}\,dt+\dfrac{2}{a^{s-1}}\int\limits_0^\infty\dfrac{\sin(s \arctan(t)-t a \ln(z))}{(1+t^2)^{s/2}(e^{2\pi ta}-1)}\,dt$$
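A quick numerical check of the resulting identity at $s=2$, assuming SciPy; the left-hand side should reproduce $\zeta(2)=\pi^2/6$:

import numpy as np
from scipy.integrate import quad

s = 2.0
f = lambda x: np.sin(s * np.arctan(x)) / ((1 + x**2)**(s / 2) * np.expm1(2 * np.pi * x))
val = quad(f, 0, np.inf)[0]
print(2 * val + 0.5 + 1 / (s - 1), np.pi**2 / 6)   # both ~1.6449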
Show that an $n$-sphere is the union of two parts, each homeomorphic to an $n$-disk, whose intersection is homeomorphic to an $(n-1)$-sphere.
For our Earth we know such a union: the northern hemisphere and the southern hemisphere; their intersection is the equator. Formally: $U = \{x \in S^n: x_{n+1} \ge 0\}$ is the upper one, $L= \{x \in S^n: x_{n+1} \le 0\}$ the lower one. Both are closed, as e.g. $U=S^n \cap \pi_{n+1}^{-1}[[0,\infty)]$ is the intersection of closed sets, etc. $ U \cap L = \{x \in S^n: x_{n+1}=0\}$, which is obviously homeomorphic to $S^{n-1}$ (just ditch the last coordinate and we have a homeomorphism). And $U$ and $L$ are both homeomorphic to $D^n$ by the map $h(x_1,\ldots,x_n, x_{n+1})= (x_1,\ldots,x_n)$. The inverse for $U$ is $(x_1, x_2, \ldots, x_n) \to (x_1, \ldots, x_n, \sqrt{1-\sum_{i=1}^n x_i^2})$, while for $L$ we negate the final coordinate of that map.
$\space \lim_{n \to \infty} s_n(x)=0$ and $ \lim_{n \to \infty} \int_{0}^{1}s_n(x)\,dx=1.$ Possible for step functions?
Let $s_n = n\chi_{(0,1/n)}$. Then $s_n(x)\to 0$ for every $x$, while $\int_0^1 s_n\,dx = n\cdot\frac1n = 1$ for all $n$.
A peculiar fact about studying the rate of change of physical quantities
I don’t see a contradiction. All $$\frac{dR}{d\sigma} =-\frac{k}{\sigma^2} $$ is saying is that as $\frac{1}{\rho} $ changes by a small amount, the amount which $R$ changes by is dependent on the value of $\rho$. Saying $$\frac{dR}{d\rho} = k$$ is entirely different, as in this case we look at small changes to $\rho$ instead, and this tells us that $R$ will change by the same amount, no matter the value of $\rho$.
Where is the nontriviality of Taylor's theorem?
It's 'surprising' that the error term can be written as $h^n g(x+h)$ because, if you solve for $g$, you get $$ \frac{f(x + h) - \sum_{k=0}^n\frac{h^k}{k!}\,f^{(k)}(x)}{h^n} = g(x + h) $$ and because of the division, the solution only works for $h \neq 0$. A priori, it's not at all obvious that you can even pick a value for $g(x)$ so that $g$ is continuous at $x$, let alone that $g(x) = 0$ is the value that does so. The usefulness is that it shows the error in approximation by using the Taylor polynomial is less significant than $h^n$ as $h \to 0$.
How to use the $b\cdot\nabla$ operator?
Let $e_1,e_2,e_3$ be your (orthonormal) basis vectors so that any vector $x = (x_1,x_2,x_3)$ can be written $x = x_1e_1 + x_2e_2 + x_3e_3$. The basis vectors, usually $e_1=(1,0,0)$, $e_2=(0,1,0)$ and $e_3=(0,0,1)$, satisfy $e_i \cdot e_j = 0$ if $i\not=j$ and $e_i\cdot e_i = 1$. Using this we have that the operator $b\cdot\nabla$ can be written $$b\cdot \nabla = (b_1e_1+b_2e_2+b_3e_3)\cdot\left(\frac{\partial}{\partial x_1} e_1+\frac{\partial}{\partial x_2} e_2+\frac{\partial}{\partial x_3} e_3\right) \\= b_1\frac{\partial}{\partial x_1} + b_2\frac{\partial}{\partial x_2} + b_3\frac{\partial}{\partial x_3}$$ which is a scalar operator (in other words it is not a vector itself). When this operator acts on a vector $a= a_1 e_1 + a_2 e_2 + a_3e_3$ then the result is a vector $$[b\cdot \nabla]a = \left(b_1\frac{\partial a_1}{\partial x_1} + b_2\frac{\partial a_1}{\partial x_2} + b_3\frac{\partial a_1}{\partial x_3}\right)e_1 \\ + \left(b_1\frac{\partial a_2}{\partial x_1} + b_2\frac{\partial a_2}{\partial x_2} + b_3\frac{\partial a_2}{\partial x_3}\right)e_2+\left(b_1\frac{\partial a_3}{\partial x_1} + b_2\frac{\partial a_3}{\partial x_2} + b_3\frac{\partial a_3}{\partial x_3}\right)e_3$$ In more compact form the $i$'th component can be written $$([b\cdot \nabla]a)_i = b_1\frac{\partial a_i}{\partial x_1} + b_2\frac{\partial a_i}{\partial x_2} + b_3\frac{\partial a_i}{\partial x_3}$$ or even more compactly by using a sum: $([b\cdot \nabla]a)_i = \sum_{j=1}^3b_j \frac{\partial a_i}{\partial x_j}$. To prove the formula you can simply write out both the left and right hand side of the equation in terms of the components of $a,b,c$ and compare the two expressions (which should be equal for any value of the components of $a,b,c$).
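As a concrete numerical illustration (assuming NumPy; the sample fields here are hypothetical), one can check the component formula with finite differences on a grid:

import numpy as np

h = 0.01
grid = np.arange(0.0, 1.0, h)
x1, x2, x3 = np.meshgrid(grid, grid, grid, indexing='ij')
a = [x1 * x2, x2 * x3, x3 * x1]      # components a_1, a_2, a_3
b = np.array([1.0, 2.0, 3.0])        # constant vector b

# (b . grad) a, component i: sum_j b_j * d a_i / d x_j
bda = []
for ai in a:
    d1, d2, d3 = np.gradient(ai, h)  # partial derivatives of a_i
    bda.append(b[0] * d1 + b[1] * d2 + b[2] * d3)

# Analytically, the first component is b_1*x2 + b_2*x1 = x2 + 2*x1.
print(np.allclose(bda[0], x2 + 2 * x1))   # True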
How do I evaluate this integral: $ \int \frac{x^2-3x+2}{x^5+x^4+x^3-x^2-x-1}dx$
Use $\left( \frac{5x+4}{x^2+x+1}\right)’ =- \frac5{x^2+x+1} -\frac{3(x-2)}{(x^2+x+1)^2}$ to integrate in compact form below (note that the denominator factors as $(x-1)(x^2+x+1)^2$ and the numerator as $(x-1)(x-2)$) \begin{align} & \int \frac{x^2-3x+2}{x^5+x^4+x^3-x^2-x-1}dx = \int \frac{x-2}{(x^2+x+1)^2}dx\\ = & -\frac13 \frac{5x+4}{x^2+x+1}- \frac53 \int \frac{1}{x^2+x+1}dx\\ = & -\frac13 \frac{5x+4}{x^2+x+1}- \frac{10}{3\sqrt3} \tan^{-1}\frac{2x+1}{\sqrt3}+C \end{align}
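A symbolic spot-check of this antiderivative, assuming SymPy:

import sympy as sp

x = sp.symbols('x')
integrand = (x**2 - 3*x + 2) / (x**5 + x**4 + x**3 - x**2 - x - 1)
F = -sp.Rational(1, 3)*(5*x + 4)/(x**2 + x + 1) - 10/(3*sp.sqrt(3))*sp.atan((2*x + 1)/sp.sqrt(3))
print(sp.simplify(sp.diff(F, x) - integrand))   # 0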
How to easily find limits in transformation problems
The condition is $0\leqslant Q\leqslant Z$ if $0\leqslant Z\leqslant1$ and $Z-1\leqslant Q\leqslant 1$ if $1\leqslant Z\leqslant2$.
Common generator of units in finite prime fields
I think this is an open problem. There is a famous conjecture due to Artin giving the density of the set of those primes $p$ such that a non-square natural number $a$ is a primitive root modulo $p$. Positive density, of course, implies that $a$ is a primitive root modulo infinitely many primes $p$. Note that if $a=b^\ell$ for some integer $b$ and prime $\ell$, then $a$ is automatically not a primitive root modulo a prime $p$ such that $p\equiv1\pmod\ell$. This lowers the chances for such an $a$ to be a primitive root, and the conjectured density is adjusted accordingly. An extreme case is when $a$ is a perfect square, for then this obstruction applies to all primes, and therefore perfect squares are not interesting for the purposes of this question. There is an analogue of Artin's conjecture for the polynomial rings $R=\Bbb{F}_p[x]$. In that case the question is whether the coset of a given non-square polynomial $f(x)$ is a primitive element of the finite field $R/\langle p(x)\rangle$ for infinitely many irreducible monic polynomials $p(x)\in R$. That version (together with a density formula) was proven by Bilharz.
How to find dimensions and compute bases for a cubic polynomial with constraints?
Forget the conditions on $p$ for now - let's just think about the dimension of $$\mathbb{P}_3(x,y):=\{p(x,y)\,:\,p(x,y)=\sum_{j+k\le3}a_{j,k}x^jy^k\}$$ A basis of $\mathbb{P}_3(x,y)$ is clearly $\mathcal{B}=\{1,x,y,x^2,xy,y^2,x^3,x^2y,xy^2,y^3\}$ so $\dim\mathbb{P}_3(x,y)=10$. If the base field is $F$, define $$T:\mathbb{P}_3(x,y)\longrightarrow F^{10}\\ p(x,y)\mapsto(p(0,0),p(1,0),p(0,1),0,\ldots,0)$$ This is a linear transformation between vector spaces of the same dimension, so the rank-nullity theorem applies. Clearly $\operatorname{im}T$ has dimension $3$, so $\dim\ker T=7$. However $\ker T$ is precisely those polynomials in $\mathbb{P}_3(x,y)$ such that $p(0,0)=p(1,0)=p(0,1)=0$, so the space you are looking at has dimension $7$. A basis can be found relatively easily. Suppose $p(x,y)=\sum_{j+k\le3}a_{j,k}x^jy^k\in\ker T$. Evaluating $p(0,0),p(1,0)$ and $p(0,1)$ we get $$a_{0,0}=0,\\ a_{0,0}+a_{1,0}+a_{2,0}+a_{3,0}=0,\\ a_{0,0}+a_{0,1}+a_{0,2}+a_{0,3}=0.$$ Solving this linear system we get $a_{0,0}=0,a_{1,0}=-a_{2,0}-a_{3,0}$ and $a_{0,1}=-a_{0,2}-a_{0,3}$, with all other entries as free parameters. A basis of $\ker T$ is hence $$\{-x+x^2,-x+x^3,-y+y^2,-y+y^3,xy,x^2y,xy^2\}$$
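The dimension count can be double-checked mechanically, assuming SymPy:

import sympy as sp

x, y = sp.symbols('x y')
monomials = [sp.Integer(1), x, y, x**2, x*y, y**2, x**3, x**2*y, x*y**2, y**3]
points = [(0, 0), (1, 0), (0, 1)]
# One row per constraint point: evaluation of each monomial there.
M = sp.Matrix([[m.subs({x: a, y: b}) for m in monomials] for a, b in points])
print(len(M.nullspace()))   # 7, matching dim ker T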
If $f,g$ are $2$ onto homomorphisms, $\exists$ $y \neq e \in M$ such that $f(y)=g(y)$.
Define a homomorphism $h:M\to N\times N$ by $h(x)=(f(x),g(x))$. If $h(x)=(e,e)$ for some $x\neq e$, we are done. Otherwise, $h$ is injective, i.e. an embedding. We may therefore identify $M$ with a subgroup of $N\times N$ via this embedding. Since $f$ and $g$ are surjective and not injective, this reduces the problem to: Problem. Suppose $M\leq N\times N$ is a subgroup such that $|M|>|N|$ and $\pi_1(M)=\pi_2(M)=N$, where $\pi_1,\pi_2:N\times N\to N$ are the projections on the first and second factor, respectively. Then, $M$ contains an element of the form $(x,x)$ for some $x\neq e$. To solve this, observe that $\Delta=\{(x,x)\mid x\in N\}$ is a subgroup of $N\times N$ and $|\Delta|=|N|$. We claim that $\Delta\cap M\neq\{e\}.$ Suppose not. Then $\Delta\cap M$ is trivial, implying that the product $\Delta M$ is a subset of $N\times N$ of cardinality $|N||M|>|N\times N|$, by the product formula. This is a contradiction.
If in a group $G$, $(ab)^2=(ba)^2$ for all $a,b \in G$, then show that $G$ is abelian.
You don't. A counterexample is the quaternion group of order $8$: there any two elements either commute or anticommute, so $(ba)^2=(\pm ab)^2=(ab)^2$ always, yet the group is not abelian.
How do I find the sample standard deviation given the population standard deviation?
If we sample from $X\sim N(150, 40^2)$ 100 times, then the sample mean is $$\bar X = \frac{X_1+\dotsb+X_{100}}{100}.$$ Then \begin{align*} \text{Var}(\bar X) &= \text{Var}\left(\frac{X_1+\dotsb+X_{100}}{100}\right) \\ &=\frac{1}{100^2}\text{Var}(X_1+\dotsb+X_{100})\\ &= \frac{1}{100^2}\cdot100\cdot\text{Var}(X_1) \\ &= \frac{40^2}{100} \end{align*} Hence $$\text{SD}(\bar X) = \frac{40}{10} = 4.$$ If you use a box model, then $$\text{SD}_{\text{avg}} = \frac{\text{SD}_{\text{box}}}{\sqrt{\text{#draws}}}.$$
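A quick Monte Carlo confirmation, assuming NumPy:

import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(150, 40, size=(100_000, 100))   # 100 draws per experiment
print(samples.mean(axis=1).std())                    # ~4.0 = 40 / sqrt(100)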
Visualizing Markov and Chebyshev inequalities
Fix a non-negative integer-valued random variable $X$. Plot $\Pr[X\ge x]$ as a function of $x$. By splitting the area under the curve into horizontal rectangles, we see that the area under the curve is $\sum_x \Pr[X\ge x] = \sum_x x\cdot \Pr[X=x] = E[X]$. For any particular value of $x$, the (shaded) rectangle with width $x$ and height $\Pr[X\ge x]$ fits under the curve. We conclude that $x\cdot \Pr[X\ge x] \le E[X]$. The same argument works for continuous random variables; the only difference is that the curve may be smooth.
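The tail-sum identity and the resulting bound are easy to check on a small example, assuming NumPy (the fair die here is my own illustration):

import numpy as np

vals = np.arange(1, 7)                 # fair six-sided die
p = np.full(6, 1 / 6)
E = (vals * p).sum()                                         # E[X] = 3.5
tails = sum(((vals >= k) * p).sum() for k in range(1, 7))
print(E, tails)                                              # both 3.5
print(all(k * ((vals >= k) * p).sum() <= E for k in vals))   # Markov: True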
Negative index coefficients of Laurent series for 1/sin(z)
$$\sin z\frac1{\sin z}=(z-\frac{z^3}6+\frac{z^5}{120}-...)(\frac1z+\frac16z+a_3z^3+...)=1$$ What is the coefficient of $z^4$?
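The coefficient in question can be checked directly, assuming SymPy:

import sympy as sp

z = sp.symbols('z')
print(sp.series(1 / sp.sin(z), z, 0, 5))
# 1/z + z/6 + 7*z**3/360 + O(z**5)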
If the generating function summation and zeta regularized sum of a divergent series exist, do they always coincide?
Just an idea. The zeta function has a Laurent series representation as a sum of the term $$\zeta_a(x)=-{1\over1-x}$$ and of the series $$\zeta_b(x) = 1/2 + 0.081x-0.0031x^2+... = \sum_{k=0}^\infty c_k x^k$$ such that $$\zeta (x)=\zeta_a(x)+\zeta_b(x)$$ If I recall correctly, the function $\zeta_b(x)$ is entire. This means that for arguments $x$ where the zeta becomes divergent, the divergent part is completely eaten by the $\zeta_a(x)$ part, and the behaviour of the geometric series is accepted as completely regular, even in the divergent cases, through the expression as the fraction ${1\over1-x}$, with the sole exception of the case $x=1$. This is not a proof of the identity of the zeta regularization and the power series/generating function, but it shows at least a strong formal relationship between the zeta (even in the divergent cases regularly computable as a formal sum of two power series) and the power series of the generating function. I think someone with more formal background might be able to put these two things together.
All linear combinations diagonalizable over $\mathbb{C}$ implies commuting.
Definition: A pair $ (A,B) $ of complex $ (n \times n) $-matrices is said to have Property L if there exist orderings $ (\lambda_{i})_{i = 1}^{n} $ and $ (\mu_{i})_{i = 1}^{n} $ respectively of the eigenvalues of $ A $ and $ B $ such that $$ \forall (x,y) \in \mathbb{C}^{2}: \quad \operatorname{Spectrum}(x A + y B) = \{ x \lambda_{i} + y \mu_{i} \}_{i = 1}^{n}. $$ By Theorems 3 and 4 of T.S. Motzkin, O. Taussky, Pairs of Matrices with Property L. II, Trans. Amer. Math. Soc. 80 (1955) 387-401, if $ \lambda A + \mu B $ is a pencil in which all matrices are diagonalizable, then $ (A,B) $ has Property L, and moreover, $ A $ and $ B $ commute. EDIT: Using some concepts in algebraic geometry, Motzkin and Taussky showed that the result above holds for any field $ \mathbb{K} $ (if $ \mathbb{K} $ has finite characteristic, then they further assumed that $ \operatorname{char}(\mathbb{K}) \geq n $). Using complex analysis, Kato (in his book Perturbation Theory for Linear Operators, pp. 82-85) gave another proof that is valid only for $ \mathbb{C} $. Using Kato’s method, Friedland (in A Generalization of the Motzkin-Taussky Theorem, Linear Algebra Appl. 36 (1981) 103-109) and De Seguins Pazzis (in On Commuting Matrices and Exponentials, Proc. Amer. Math. Soc. 141 (3) (2013) 763-774 - An easier access is on arXiv -) showed generalizations of the previous result that are valid only for $ \mathbb{C} $.
Question on proof that $|G| = pqr$ is not simple
Since $n_r$ is not $1$, we have $k>0$, which implies $n_r=1+kr > r > p$ and $q$; since $n_r$ must divide $pq$, the only possible divisor of $pq$ left is $pq$ itself.
quivers and tensor product
As Matthias Klupsch has stated in a comment to the question, the map $\varphi_{ik}^j$ is the restriction of the multiplication map. Let us write it explicitly: $$ \begin{array}{rcl} \varphi_{ik}^j: \varepsilon_i (KQ) \varepsilon_j \otimes_{\varepsilon_j (KQ) \varepsilon_j} \varepsilon_j (KQ) \varepsilon_k & \longrightarrow & \varepsilon_i (KQ) \varepsilon_k \\ \alpha \otimes \beta & \longmapsto & \alpha\beta. \end{array} $$ (Note that the tensor product must be taken over $\varepsilon_j (KQ) \varepsilon_j$ for $\alpha \otimes \beta \mapsto \alpha\beta$ to be balanced.) This map is well-defined and $K$-linear. To apply this to the Kronecker quiver, one simply has to specify the multiplication map.
Perfect pairing induces isomorphism of tensor products
Assuming your definition of perfect pairing, the answer to your first question is no. You are essentially asking if $N \cong\text{Hom}(\text{Hom}(N,R), R)$, and this is not true when $N = \mathbb{Z}/2$, $M = 0$ and $R = \mathbb{Z}$. The same example with $P = \mathbb{Z}$ tells you that the answer to your second question must also be no. I don't understand how $\mathbb{C}^*$ is a $\mathbb{Z}$-module here, so your third question doesn't make sense to me. Are you using some sort of exponential map?
Do the four vectors span $\mathbb{R}^3$? Why or why not?
The inconsistency in that system shows that there is no solution to $xv_1+yv_2+zv_3+wv_4=b$ whenever $b_3-2b_1-5b_2\neq 0$. Since $b\in\Bbb R^3$ is arbitrary, can we have that $v_1,v_2,v_3,v_4$ span $\Bbb R^3$? Addendum: Recall that $$\operatorname{span}(v_1,v_2,v_3,v_4)=\{v\in\Bbb R^3:v=c_1v_1+c_2v_2+c_3v_3+c_4v_4,~c_i\in\Bbb R\}$$ That is, the span of a collection of vectors is the set of linear combinations of those vectors. So the inconsistency in the system you have shows us that $xv_1+yv_2+zv_3+wv_4=b$ has no solution for some vectors $b\in\Bbb R^3$ (namely those with $b_3-2b_1-5b_2\neq0$). Hence such a $b$ is not a linear combination of $v_1,v_2,v_3,v_4$. So can we say that $v_1,v_2,v_3,v_4$ span $\Bbb R^3$? In general, to show some vectors do not span a vector space, we can just show that there is a vector in the space which is not a linear combination of those vectors. Linear dependence does not imply that they do not span $\Bbb R^3$. For example, $e_1,e_2,e_3,e_1+e_2$ span $\Bbb R^3$, however they are clearly linearly dependent. In fact, any collection of more than $3$ vectors will be linearly dependent in $\Bbb R^3$, however they may or may not span $\Bbb R^3$.
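For concreteness, a small numpy check (entirely my own example: the original $v_i$ aren't reproduced in the answer, so the four vectors below are just chosen to satisfy the relation $b_3=2b_1+5b_2$):

```python
# Four vectors lying in the plane b3 = 2*b1 + 5*b2; the rank test
# shows they cannot span R^3.
import numpy as np

V = np.array([[1, 0, 2],     # v1
              [0, 1, 5],     # v2
              [1, 1, 7],     # v3
              [2, 1, 9]])    # v4 (rows are the vectors)

print(np.linalg.matrix_rank(V))   # 2, not 3 => they do not span R^3
```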
Triples $(x, y, z)$ that satisfy a set of equations
Consider the polynomial $$P(t) =t^3 -at^2 -t+a.$$ By Vieta's formulas, $x,y,z$ are the roots of $P(t)$, and they are easy to find: $$ P(t) = t(t^2-1)-a(t^2-1)= (t-1)(t+1)(t-a),$$ so $(x,y,z)$ is a permutation of $(1,-1,a)$.
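A one-line sympy confirmation of the factorization (my addition):

```python
import sympy as sp

t, a = sp.symbols('t a')
print(sp.factor(t**3 - a*t**2 - t + a))
# (t - 1)*(t + 1)*(t - a), up to ordering/sign of the factors
```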
What lessons have mathematicians drawn from the existence of non-standard models?
As you point out, Gaifman wrote: From the point of view of those who subscribe to the intended interpretation, the existence of such nonstandard models counts as a failure of the formal system to capture the semantics fully. Note that this is not merely about first-order logic but a broader observation. To answer your question, some mathematicians have drawn the lesson that the entity called the intended model does not exist and that belief in such an entity is an obstacle to progress.
Use $S_n-\frac{n}{n+1}T_n<M$ to prove $\sum_{n=1}^{\infty}a_n$ converges
Concerning a) (assuming that $M>0$): The inequality can be rewritten as follows: $$S_n-\frac{n}{n+1}T_n<M \Leftrightarrow \boxed{(n+1)S_n- nT_n < (n+1)M}$$ The boxed inequality can be proved by direct calculation using the fact that $\boxed{nT_n} = \sum_{k=1}^n S_k = \sum_{k=1}^n \sum_{i=1}^k a_i = \boxed{\sum_{i=1}^n (n-i+1)a_i}$ To see this, just note that the sum of the first $n$ partial sums contains $a_1$ exactly $n$ times, $a_2$ exactly $n-1$ times, and so on, down to $a_n$ exactly once. Now you get immediately $$(n+1)S_n - nT_n = \sum_{i=1}^n \left((n+1) - (n+1-i) \right)a_i$$ $$ = \sum_{i=1}^n ia_i \stackrel{ia_i \leq M}{\leq} nM < (n+1)M$$
Commutator subgroup of $GL_{2}(\mathbb{Z}/p^{2}\mathbb{Z})$ is $SL_{2}(\mathbb{Z}/p^{2}\mathbb{Z})$
Let's work in $\mathrm{GL}_n$, leaving only aside the case $(n,p)=(2,2)$, so that $\mathrm{SL}_n(\mathbf{Z}/p\mathbf{Z})$ is the derived subgroup of $\mathrm{GL}_n(\mathbf{Z}/p\mathbf{Z})$. Consider a matrix $g$ in $\mathrm{SL}_n(\mathbf{Z}/p^2\mathbf{Z})$. Then its image in $\mathrm{SL}_n(\mathbf{Z}/p\mathbf{Z})$ is a product of commutators of $\mathrm{GL}_n(\mathbf{Z}/p\mathbf{Z})$; lifting this shows that there exists a product $c$ of commutators such that $gc^{-1}$ is in the kernel of the reduction map $\mathrm{SL}_n(\mathbf{Z}/p^2\mathbf{Z})\to\mathrm{SL}_n(\mathbf{Z}/p\mathbf{Z})$, which is exactly the set of matrices of the form $I+pA$ with $A\in\mathrm{M}_n(\mathbf{Z}/p\mathbf{Z})$ of trace zero. Such a matrix is a product of matrices of the form: either $I+pE_{ij}$ with $i\neq j$, or $I+p(E_{ii}-E_{jj})$ with $i\neq j$. That these matrices are products of commutators is a simple exercise.
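To make the final exercise concrete, here is one instance checked numerically (my example, with $p=3$): since $\operatorname{diag}(u,u^{-1})\,(I+E_{12})\,\operatorname{diag}(u,u^{-1})^{-1}(I+E_{12})^{-1}=I+(u^2-1)E_{12}$ and $7^2-1\equiv 3 \pmod 9$, the matrix $I+3E_{12}$ is a commutator in $\mathrm{SL}_2(\mathbf{Z}/9\mathbf{Z})$:

```python
# Verify [d, x] = I + 3*E_12 in SL_2(Z/9) for d = diag(7, 7^{-1}),
# x = I + E_12.
import numpy as np

MOD = 9
d    = np.array([[7, 0], [0, 4]])   # 4 = 7^{-1} mod 9
dinv = np.array([[4, 0], [0, 7]])
x    = np.array([[1, 1], [0, 1]])
xinv = np.array([[1, 8], [0, 1]])   # 8 = -1 mod 9

comm = d @ x @ dinv @ xinv % MOD
print(comm)                          # [[1 3] [0 1]] = I + 3*E_12
```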
Find $\lim\limits_{n\to+\infty}(u_n\sqrt{n})$
A solution from a friend of mine:

(i) Show $u_{n} > -1$ for all $n$. (Easy)

(ii) If $u_{0} = 2 - 2t$, where $0 \le t \le 1$, then $u_{n} < (n+2)(1-t)$ for all $n > 0$. (Induction)

(iii) There exists an integer $K > 0$ such that $-1 < u_K < 1$. From (ii) we get that eventually $u_{n-1} < n$, whence $u_n < n$, and $u_{n+1} < n-1$, etc.

(iv) $|u_n| \leqslant 1/n$ for all $n > K$. Therefore the limit is $0$.

I leave it to the OP to complete the details (to prove (i) and (ii)). Q.E.D. (Chris)
Probability $1 < X < 6$ given cdf $F_X(x)$
The arithmetic is incorrect but the idea is right. Indeed, $F(6) = 6/8$, but $F(1) = 1/8$, not $1/7$ as you claimed.
Explicit computation of Ext groups
Recall that for any quasicoherent sheaf $F$ on any scheme $X$, $Ext^i(\mathcal{O}_X,F) = H^i(F)$. So I don't think your statement about $Ext^i(\mathcal{O}_X,\mathcal{O}_C) = H^i(\mathcal{O}_{\mathbb{P}^1})$ is correct: it is 1-dimensional for $i=0$ and zero for $i=1,2$. Similarly, $Ext^i(\mathcal{O}_X(-C),\mathcal{O}_C) = Ext^i(\mathcal{O}_X,\mathcal{O}_C(C)) = H^i(\mathcal{O}_C(C)).$ And, since $\mathcal{O}_C(C) \cong \mathcal{O}_{\mathbb{P}^1}(-2)$, this cohomology group is $0$ for $i=0$, 1-dimensional for $i=1$ and zero for higher $i$. Also clearly $Ext^0(\mathcal{O}_C,\mathcal{O}_C)$ is 1-dimensional. Now plug these into the long exact sequence to find that $Ext^1(\mathcal{O}_C,\mathcal{O}_C) = 0$ (it is enclosed between two zeroes) and $Ext^2$ is 1-dimensional (for similarly easy reasons).
What functions asymptotically approximate a closed form?
Let's write $f(x) = \frac{\{x\}}{\lceil x \rceil}$. I think part of the problem is that $\frac{1}{2x+1}$ doesn't actually approximate $f$ all that well. They both tend towards $0$ from above as $x \to \infty$ and from below as $x \to -\infty$, but that seems to be the only feature they have in common (proof by Desmos; plot omitted). Let's try to find a better approximation, which is still smooth, using Fourier analysis. $\{x\}$ is a well-known function, called the sawtooth wave, and its Fourier series is well known to be $$ \{x\} = \frac{1}{2} - \frac{1}{\pi} \sum \limits_{k=1}^{\infty} \frac{\sin(2 \pi k x)}{k}$$ (proof by Wikipedia). As you noted, we can recover $\lfloor x \rfloor$ and $\lceil x \rceil$ from $\{x\}$, and so $$ f(x) = \frac{\{x\}}{\lceil x \rceil} = \frac{\{x\}}{1+x-\{x\}}$$ Using our Fourier approximation for $\{x\}$, we see $$ f(x) \approx \frac{\left ( \frac{1}{2} - \frac{1}{\pi}\sum_{k=1}^n \frac{\sin(2 \pi k x)}{k} \right )}{1 + x - \left ( \frac{1}{2} - \frac{1}{\pi}\sum_{k=1}^n \frac{\sin(2 \pi k x)}{k} \right )}$$ Now if we take $n = 10$, say, we already get a visibly good approximation (plot omitted). Of course, this isn't completely honest. There's some funny stuff happening between $-1$ and $0$, because that's where we are dividing by $0$. Thankfully, $f$ is undefined on this region, so the fact that our approximation is weird there is of little importance. As for why "a finite truncation or infinite series of trigonometric functions could have such a result", there is a lot of deep mathematics underpinning the theory of Fourier analysis. Every reasonable function can be well approximated by trigonometric functions, and this fact is both deep and beautiful. A wonderful avenue for learning more about Fourier series, and using them to approximate real-world functions, is this YouTube playlist. It is a set of lectures from a Stanford engineering course, taught by a humorous and capable lecturer. It will teach you the practicalities of Fourier analysis without getting too bogged down in details, though the professor makes sure to mention when he is hiding some heavy math under the rug so that you can look into it independently if you want to. I cannot recommend it highly enough. I hope this helps ^_^
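Here is a hedged numeric version of that comparison (my code): it evaluates the $n=10$ truncation on $[1,10]$, away from the problematic interval $(-1,0)$.

```python
# Compare f(x) = {x}/ceil(x) with its smooth Fourier-based approximation.
import numpy as np

def f(x):
    frac = x - np.floor(x)
    return frac / np.ceil(x)

def f_approx(x, n=10):
    k = np.arange(1, n + 1)
    frac = 0.5 - np.sum(np.sin(2*np.pi*np.outer(x, k)) / k, axis=1) / np.pi
    return frac / (1 + x - frac)

x = np.linspace(1.01, 10.0, 2000)
err = np.abs(f(x) - f_approx(x))
print(err.max())   # modest; dominated by Gibbs oscillation near the integers
```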
Finding $\lim_{n\to\infty}\frac1{n^3}\sum_{k=1}^{n-1}\frac{\sin\frac{(2k-1)\pi}{2n}}{\cos^2\frac{(k-1)\pi}{2n}\cos^2\frac{k\pi}{2n}}$
This is based on one of the official solutions for the problem. We first write $a_n$ as a telescoping sum. Notice that $$ \frac{1}{AB} = \left(\frac{1}{A}-\frac{1}{B}\right)\cdot \frac{1}{B-A}\,. $$ It follows that the summand of $a_n$ can be written as $$ \left(\frac{1}{\cos ^{2}\left(\frac{(k-1) \pi}{2 n}\right)}-\frac{1}{\cos ^{2}\left(\frac{k \pi}{2 n}\right)}\right)\cdot \frac{\sin \left(\frac{(2 k-1) \pi}{2 n}\right)}{\cos ^{2}\left(\frac{k \pi}{2 n}\right)-\cos ^{2}\left(\frac{(k-1) \pi}{2 n}\right)}\tag{1} $$ If we can show that the quantity $$ \frac{\sin \left(\frac{(2 k-1) \pi}{2 n}\right)}{\cos ^{2}\left(\frac{k \pi}{2 n}\right)-\cos ^{2}\left(\frac{(k-1) \pi}{2 n}\right)}\tag{2} $$ is independent of $k$, then we have a telescoping sum. By the double-angle and sum-to-product identities for cosine, we have \begin{align} &\phantom{=}2 \cos ^{2} \left(\frac{(k-1) \pi}{2 n}\right) -2 \cos ^{2}\left(\frac{k \pi}{2 n}\right) \\ &=\cos \left(\frac{(k-1) \pi}{n}\right)-\cos \left(\frac{k \pi}{n}\right) \quad &(2\cos^2x = \cos 2x + 1) \\ &=2 \sin \left(\frac{(2 k-1) \pi}{2 n}\right) \sin \left(\frac{\pi}{2 n}\right) \quad &(\cos \theta-\cos \varphi=-2 \sin \left(\tfrac{\theta+\varphi}{2}\right) \sin \left(\tfrac{\theta-\varphi}{2}\right)) \end{align} and it follows that the summand in $a_n$ can be written as $$ \frac{1}{\sin \left(\frac{\pi}{2 n}\right)}\left(-\frac{1}{\cos ^{2}\left(\frac{(k-1) \pi}{2 n}\right)}+\frac{1}{\cos ^{2}\left(\frac{k \pi}{2 n}\right)}\right) $$ Thus the sum telescopes and we find that $$ a_{n}=\frac{1}{\sin \left(\frac{\pi}{2 n}\right)}\left(-1+\frac{1}{\cos ^{2}\left(\frac{(n-1) \pi}{2 n}\right)}\right)=-\frac{1}{\sin \left(\frac{\pi}{2 n}\right)}+\frac{1}{\sin ^{3}\left(\frac{\pi}{2 n}\right)} $$ since $\cos\left(\frac{(n-1)\pi}{2n}\right)=\sin\left(\frac{\pi}{2n}\right)$. Finally, since $\lim_{x\to 0}\frac{\sin x}{x}=1$, we have $$ \lim_{n\to\infty} n\sin\frac{\pi}{2n} = \frac{\pi}{2} $$ and thus $$ \lim_{n\to\infty}\frac{a_n}{n^3} = \frac{8}{\pi^3}\;. $$
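A numeric sanity check of both the telescoped closed form and the limit (my addition; the value of $n$ is arbitrary):

```python
# Both the raw sum and the closed form should give a_n / n^3 -> 8/pi^3.
import math

def a_raw(n):
    return sum(math.sin((2*k - 1)*math.pi/(2*n))
               / (math.cos((k - 1)*math.pi/(2*n))**2
                  * math.cos(k*math.pi/(2*n))**2)
               for k in range(1, n))

def a_closed(n):
    s = math.sin(math.pi/(2*n))
    return -1/s + 1/s**3

n = 2000
print(a_raw(n)/n**3, a_closed(n)/n**3, 8/math.pi**3)   # all ~0.2580
```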
Conservation Law for Heat Equation on Infinite Domain
For given initial conditions $u(x,0)$, you can write the solution as a convolution against the fundamental solution: $$ u(x,t) = \int_{\mathbb{R}^n}\frac{1}{(4\pi t)^{n/2}}e^{-\|x-x'\|^2/4t}u(x',0)\,dx' $$ You can verify that such a representation matches your boundary condition at infinity. Then, integrating over $x$: $$ \int_{\mathbb{R}^n} u(x,t)\,dx = \int_{\mathbb{R}^n}\int_{\mathbb{R}^n}\frac{1}{(4\pi t)^{n/2}}e^{-\|x-x'\|^2/4t}u(x',0)\,dx'\,dx $$ If the initial condition is well behaved enough, we can switch the order of integration: $$ \int_{\mathbb{R}^n} u(x,t)\,dx = \int_{\mathbb{R}^n}u(x',0)\left(\int_{\mathbb{R}^n}\frac{1}{(4\pi t)^{n/2}}e^{-\|x-x'\|^2/4t}\,dx\right) dx' $$ The inner integral is unity, leaving $$ \int_{\mathbb{R}^n} u(x,t)\,dx = \int_{\mathbb{R}^n}u(x',0)\,dx' $$ The heat equation "smears out" the initial condition in time, but conserves its total integral as time goes on. All that's left is to decide what is regular enough to justify switching the integration order.
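A minimal 1-D numeric sketch of this conservation (my code; the grid, time, and initial condition are arbitrary choices):

```python
# Convolve an initial condition with the 1-D heat kernel at t = 0.7
# and compare total integrals; they agree up to discretization error.
import numpy as np

x  = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
u0 = np.exp(-x**2) * (1 + 0.5*np.sin(3*x))       # arbitrary decaying data

t = 0.7
kernel = np.exp(-x**2/(4*t)) / np.sqrt(4*np.pi*t)
u = np.convolve(u0, kernel, mode='same') * dx    # u(., t) on the grid

print(u0.sum()*dx, u.sum()*dx)                   # nearly equal
```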
Distance preserving function on a Hilbert space
By subtracting $f(0)$ we can assume that $f(0)=0$. Let $x$ and $y$ be orthogonal. Then $\|x-y\|^2=\|x-0\|^2+\|y-0\|^2$. But since $f$ preserves distances, this equation is equivalent to $\|f(x)-f(y)\|^2=\|f(x)-f(0)\|^2+\|f(y)-f(0)\|^2$. Therefore, $f(x)$ and $f(y)$ are orthogonal if and only if $x$ and $y$ are orthogonal. For every $z$ that is orthogonal to $x$ and $y$ (and therefore to $x+y$) we have $$0=(f(x+y)-f(x)-f(y),f(z)).$$ Therefore $f(x+y)-f(x)-f(y)=0$. From this it follows that $f(rx+sy)=rf(x)+sf(y)$ for all rational $r,s$. Since a distance-preserving map is continuous, linearity follows. This proof is not complete; some arguments are missing. But these are the ideas to use: that $f$ preserves orthogonality, and that continuity lets one pass from additivity to linearity.
Formal definition of differentiability - an exercise
OK, so for (a), your solution is almost right. Notice that the limit does exist if $u_1$ or $u_2$ is $0$, in which case the directional derivative does exist. For (b), note that $f$ is not continuous at $0$, since approaching $(0,0)$ along the curve $(0,t)$ yields the limit $0$, while approaching it along $(t,t)$ yields the limit $1/2$. If $f$ is not continuous at $0$, it cannot be differentiable there.
Formula to Calculate Each Pie Angle Where the Intercepted Arc is NOT the Center Point of the Circle AND All Slices are Equal Sized
Let’s place the point at which all of the cuts converge at the origin and the center of the circular pie at $(h,0)$, so that the circle can be parameterized as $x=h+r\cos t$, $y=r\sin t$. The parameter $t$ represents the angle at the center of the pie. If $\Gamma$ is the arc of the circle that goes from $t_1$ to $t_2$, then the area of the slice is $$\frac12\int_\Gamma x\,dy-y\,dx = \frac r2\int_{t_1}^{t_2}(r+h\cos t)\,dt = \frac r2\left(r(t_2-t_1)+h(\sin t_2-\sin t_1)\right).$$ If we want $n$ equally-sized slices, this area must be equal to $\pi r^2/n$, which leads to the equation $$rt_2+h\sin t_2 = \frac{2\pi r}n+rt_1+h\sin t_1.$$ If we fix $t_1$, this can be solved for $t_2$. Unfortunately, there’s no closed-form solution, but you can get a numerical approximation good enough for making the slices. Taking your example, $h=\sqrt{1^2+1.5^2}\approx1.803$ and the area of each slice is approximately $10.603$. The first cut is at $t=0$, and since there’s an even number of slices, we know that there will be another at $t=\pi$. By symmetry, we only need to compute two more cuts. Setting $t_1=0$ produces $t_2\approx 0.77$, and working backwards from the other cut, setting $t_2=\pi$ yields $t_1\approx 1.70$. The resulting pie slices look something like this (figure omitted). If we relax the requirement that all of the cuts radiate from a common point, then there are many more ways to divvy up the pie.
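A numeric companion (my code): from the numbers quoted above I am assuming a pie of radius $r=4.5$ cut into $n=6$ slices, since $\pi\cdot 4.5^2/6\approx 10.603$; the root-finder then reproduces $t_2\approx 0.77$ and the remaining cuts.

```python
# Solve r*t2 + h*sin(t2) = 2*pi*r/n + r*t1 + h*sin(t1) cut by cut.
import math
from scipy.optimize import brentq

r, n = 4.5, 6
h = math.hypot(1.0, 1.5)          # ~1.803, as in the answer

def next_cut(t1):
    rhs = 2*math.pi*r/n + r*t1 + h*math.sin(t1)
    return brentq(lambda t: r*t + h*math.sin(t) - rhs, t1, t1 + math.pi)

t = 0.0
for _ in range(3):
    t = next_cut(t)
    print(round(t, 3))            # 0.77, then 1.698, then ~pi by symmetry
```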
Riesz's lemma on invisible points
The inequality $\max\{g(x_0^+),g(x_0),g(x_0^-)\}<M$ is equivalent to: $x_0$ has a neighborhood in which $g<M$. All points of this neighborhood also satisfy this inequality, which implies that the set on which the inequality holds is open. The set of points invisible from the right can be written as $ \bigcup_{\xi\in (x_0,b]} A_\xi $ $$A_\xi=\{x\in [a,\xi): \text{$x$ has a neighborhood where } g <g(\xi)\}$$ Each $A_\xi$ is open in the subspace topology of $[a,b]$. Therefore, the union is open. The inequality between $g(a_k)$ and $g(b_k)$ need not hold when $g$ is discontinuous. Consider this function: $$g(x)=\begin{cases}2x\quad & x\in [0,1] \\ x-1\quad &x\in (1,2]\end{cases}$$ The points visible from the right are precisely $\{1,2\}$. So, $(1,2)$ is an invisible interval. But $g(1)>g(2)$.
LU factorization of a non-symmetric matrix
If $b\ne r$, $c\ne s$, you indeed get the $LU$ decomposition $$ A=LU=\begin{bmatrix} 1 & 0 & 0 & 0 \\ -a & 1 & 0 & 0 \\ -a & 0 & 1 & 0 \\ -a & 0 & 0 & 1 \end{bmatrix}\, \begin{bmatrix} a&r&r&r\\ 0&b-r&s-r&s-r\\ 0&0&c-s&b-s\\ 0&0&0&d-b \end{bmatrix} $$ The conditions $a\ne0$ and $d\ne b$ are irrelevant. If $a=0$ and $b=d$ the rank is $2$; if $a=0$ and $d\ne b$ the rank is $3$; if $a\ne0$ and $d=b$ the rank is $3$; if $a\ne0$ and $d\ne b$ the rank is $4$. However, the $LU$ decomposition doesn't always exist. Let's see the case $b=s=r$, so that the first step in the elimination produces $$ U = \begin{bmatrix} a&r&r&r\\ 0&0&0&0\\ 0&0&c-s&0\\ 0&0&0&d-b \end{bmatrix} $$ If $c\ne s$ or $d\ne b$, there's no way to write $A=LU$ with $L$ lower triangular and $U$ upper triangular.
any Math game for kids?
Here are a few suggestions:

- The game of Nim: there are, say, 11 coins on the table. Two players alternate taking between one and three coins per turn. The loser is the person who takes the last coin.
- Connect 4: one person makes a move, the next person does, then the next... (includes everyone).
- Bridg-It (google it).
- Tic-Tac-Toe.
- Estimator game: take a whole bunch of marbles or whatever and put them into a jar. Then have groups of kids make guesses, one per kid, and average the guesses for each group. The group that is closest to the true number wins. Then, after the win, take the average of all of the guesses and show them how much better it is.
Probability that colored balls are separated
Not what you are looking for, but a simple asymptotic result would be: assume the numbers of blue (red) balls in each box are iid Poisson with $\lambda_B = b/n$ ($\lambda_R=r/n$); then the probability that no box has both red and blue balls would be $$P \approx \left(1-(1-e^{-\lambda_B})(1-e^{-\lambda_R}) \right)^n=\left(e^{-b/n} + e^{-r/n} - e^{-(r+b)/n} \right)^n \tag{1}$$ I would bet that this is only useful for $n$ much bigger than $r,b$, and that it overestimates $P$, but it does not seem easy to prove that it's an upper bound. Another approximation: the expected number of boxes with red balls is $E_r=n \left(1-(1-1/n)^r\right)$. Equating this expected number with the actual number, we get $$P \approx \frac{(n-E_r)!\,(n-E_b)!}{n! \, (n-E_r-E_b)!} \tag{2}$$ A few results ($p_s$ comes from simulation; $p_{a1}$ and $p_{a2}$ are the approximations above):

n     b   r    ps     pa1    pa2
1000  30  10   0.741  0.745  0.736
 500  30  30   0.165  0.183  0.168
 500  80   8   0.278  0.309  0.275
 500  20   6   0.786  0.791  0.782
  10   5   3   0.215  0.341  0.167
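A rough Monte Carlo cross-check of approximation $(1)$ against the table's first row (my code; the seed and trial count are arbitrary):

```python
# n = 1000 boxes, b = 30 blue balls, r = 10 red balls, thrown uniformly.
import numpy as np

rng = np.random.default_rng(0)
n, b, r = 1000, 30, 10
trials = 20000

hits = 0
for _ in range(trials):
    blue = np.bincount(rng.integers(0, n, b), minlength=n) > 0
    red  = np.bincount(rng.integers(0, n, r), minlength=n) > 0
    hits += not np.any(blue & red)          # no box has both colors

print(hits / trials)                                         # ~0.74
print((np.exp(-b/n) + np.exp(-r/n) - np.exp(-(b+r)/n))**n)   # ~0.748
```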
Find $\lim_{x\to -\infty}\frac{3^{\sin x}+2x+1}{\sin x-\sqrt{x^2+1}}$
Note: in the denominator, when we take $x$ out of the square root, we get $|x|\sqrt{1+\frac{1}{x^2}}$, which becomes $-x\sqrt{1+\frac{1}{x^2}}$ (as $x$ is negative). No need for L'Hôpital here. Dividing the numerator and denominator by $x$, the given expression becomes: $$\frac{{3^{\sin x} \over x} + 2 + \frac{1}{x}}{{\sin x \over x} + \sqrt{1+\frac{1}{x^2}}}$$ Now, applying the limit: $$\lim_{x \to -\infty}\frac{{3^{\sin x} \over x} + 2 + \frac{1}{x}}{{\sin x \over x} + \sqrt{1+\frac{1}{x^2}}}=2$$ (since $3^{\sin x}$ is bounded, so ${3^{\sin x} \over x}\to 0$ and ${\sin x \over x}\to 0$).
Hint on writing a proof for slope
Just pull out a factor of $-1$ from both numerator and denominator and what do you get?
Can fractions actually be converted to decimals?
Technically, $$9.\overbrace{9\ldots9}^n < 10$$ for any finite number $n$ of $9$s after the decimal point. Also, technically, $$9.\overbrace{9\ldots9}^n < 9.\overline9$$ for any finite number $n$ of $9$s after the decimal point; the number on the right has an infinite number of digits and the number on the left doesn't. Certainly, if you put any finite number of $9$s to the right of the decimal point, no matter how large a finite number of $9$s you put there, you will always have a number less than $10$. But you also won't have $9.\overline9$. So what is $9.\overline9$ after all? To quote from this answer to the question "Is $0.999999999\ldots = 1$?": "Symbols don't mean anything in particular until you've defined what you mean by them." If we want to define $9.\overline9$ so that it has a meaning and actually represents a real number rather than a different mathematical object altogether, the definition that generally makes sense is that it is the limit of an infinite series, and when we look more carefully to see what that limit is, we find that it is $10$. Not just very, very close to $10$, but actually equal to $10$.
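If it helps, here is an exact-arithmetic illustration of the first display (my addition): with $n$ nines after the point, the number falls short of $10$ by exactly $10^{-n}$, which is positive for every finite $n$.

```python
from fractions import Fraction

for n in (1, 5, 20):
    x = 9 + sum(Fraction(9, 10**k) for k in range(1, n + 1))
    print(n, 10 - x)   # 1/10, 1/100000, 1/100000000000000000000
```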
Minimization of the distance between 2 vectors
You can also write $$(-2+7t,\,-7+5t,\,-8-t)\cdot(7,5,-1)=0.$$ Expanding gives $75t-41=0$, so $t=\frac{41}{75}$.
For the existence of one-point compactification, do we need locally compactness?
For any space $X$ we can construct a space $\alpha(X)$, the Aleksandrov extension of $X$, by defining a space $Y$ as Munkres does, with the extra provision that we take all complements of closed compact subsets of $X$ as the extra neighbourhoods for $\infty$. One can easily check that $\alpha(X)$ is then compact. The "closed" is needed in general because if, e.g., $X$ is not Hausdorff, it could have some compact subset $K$ which is not closed. If we were to omit the closed condition, $(X\setminus K) \cup \{\infty\}$ would be open while its intersection with $X$ would be $X\setminus K$, which was not open; so $X$ would not carry its original topology as a subspace of $\alpha(X)$, going against the idea of an extension/compactification: we want to embed $X$ in a larger space with better properties, so in the larger space it should be a subspace with the same topology that it had originally. If we want $Y = \alpha(X)$ to be Hausdorff (so in particular $X$ should then be Hausdorff, as a subspace of $Y$), we need to be able to separate $\infty$ from every point $x$ in $X$. As a neighbourhood of $\infty$ is of the form $\{\infty\} \cup (X \setminus C)$, with $C$ compact and closed, every point $x$ should then have a neighbourhood that sits inside a compact closed set, i.e. $X$ must be locally compact. So $\alpha(X)$ can always be defined such that $\alpha(X)\setminus X$ is a point and $X$ is a subspace of $\alpha(X)$, and it is always compact (regardless of $X$), but $\alpha(X)$ is Hausdorff iff $X$ is locally compact and Hausdorff. A special case is when $X$ is already Hausdorff and compact, in which case we add an isolated point $\infty$ (as $X$ can be taken as $C$, a compact closed subset), and we get that $X$ is not dense in $\alpha(X)$. Normally we only consider Hausdorff compactifications, and in that case the local compactness is needed for the Hausdorffness of the construction $\alpha(X)$, and also because $X$ is then an open subset of a compact Hausdorff space and thus locally compact for that reason.
On the series $\sum \limits_{n=1}^{\infty} \left ( \frac{1}{n} - \frac{1}{2n-1} - \frac{1}{2n+1} \right )$
A very simple way: $$ \begin{eqnarray*}\sum_{n\geq 1}\left(\frac{1}{n}-\frac{1}{2n-1}-\frac{1}{2n+1}\right)&=&\sum_{n\geq 1}\int_{0}^{1}\left( 2x^{2n-1}-x^{2n-2}-x^{2n}\right)\,dx\\&=&\int_{0}^{1}\sum_{n\geq 1}x^{2n-2}(2x-1-x^2)\,dx\\&=&\int_{0}^{1}\frac{2x-1-x^2}{1-x^2}\,dx\\&=&\int_{0}^{1}\frac{x-1}{x+1}\,dx=\left[x-2\log(1+x)\right]_{0}^{1}=\color{red}{1-2\log 2}.\end{eqnarray*} $$
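Both the final integral and the original series are easy to cross-check with sympy (my addition):

```python
# The last integral evaluates to 1 - 2*log(2) ~ -0.3863, and a large
# partial sum of the series agrees numerically.
import sympy as sp

x = sp.symbols('x')
print(sp.integrate((x - 1)/(x + 1), (x, 0, 1)))   # 1 - 2*log(2), up to term order

term = lambda k: sp.Rational(1, k) - sp.Rational(1, 2*k-1) - sp.Rational(1, 2*k+1)
partial = sum(term(k) for k in range(1, 2001))
print(float(partial), float(1 - 2*sp.log(2)))     # both ~ -0.38629
```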
gcd and lcm of $a$ and $b$ in $\mathbb Z$ which verify...
$$\gcd(a,b)=10\to a=10m,\ b=10n,\ \gcd(m,n)=1\\ \operatorname{lcm}(a,b)=\operatorname{lcm}(10m,10n)=10mn=100\to mn=10$$ Since $\gcd(m,n)=1$ we have the following cases:

Case 1: $m=1,\ n=10 \quad \to\quad a=10,\ b=100$

Case 2: $m=2,\ n=5 \quad \to\quad a=20,\ b=50$

Case 3: $m=5,\ n=2 \quad \to\quad a=50,\ b=20$

Case 4: $m=10,\ n=1 \quad \to\quad a=100,\ b=10$
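A brute-force confirmation of the four cases (my addition; `math.lcm` needs Python 3.9+):

```python
from math import gcd, lcm

pairs = [(a, b) for a in range(1, 101) for b in range(1, 101)
         if gcd(a, b) == 10 and lcm(a, b) == 100]
print(pairs)   # [(10, 100), (20, 50), (50, 20), (100, 10)]
```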