$\mathbb CP^n$ with a quadric removed is homeomorphic to $T(\mathbb RP^n)$.
Let $\pi:S^{2n+1}\rightarrow \mathbb{C}P^n$ be the canonical projection and let $\tilde V_f=\pi^{-1}(V_f)\subseteq S^{2n+1}$ be the inverse image of $V_f\subseteq \mathbb{C}P^n$. As you have shown, $\tilde V_f\cong V_2(\mathbb{R}^{n+1})$, and $\pi$ restricted to this space is an $S^1$-principal fibration over $V_f\cong Gr_2(\mathbb{R}^{n+1})$.

Let $$F=\{(z_0,\dots,z_n)\in S^{2n+1}\setminus\tilde V_f\mid z_0^2+\dots+z_n^2=1\}$$ be the Milnor fibre of the polynomial $f$. Notice that $\pi|_F:F\rightarrow \mathbb{C}P^n\setminus V_f$ is surjective.

Writing $\underline z=(z_0,\dots,z_n)\in S^{2n+1}\subseteq \mathbb{C}^{n+1}$ as its real and imaginary parts $\underline z=\underline x+i\underline y$, we define a map $$\tilde \Psi:F\rightarrow TS^n,\qquad \underline x+i\underline y\mapsto\left(\frac{\underline x}{|\underline x|},\underline y\right),$$ where we have identified the tangent bundle $TS^n=\{(\underline x,\underline y)\in S^{n}\times \mathbb{R}^{n+1}\mid \underline x\cdot \underline y=0\}$. We see that this map is well-defined using the calculation you have already supplied. In fact, writing $$F=\{(\underline x,\underline y)\in\mathbb{R}^{n+1}\times\mathbb{R}^{n+1}\mid \underline x^2=\underline y^2+1,\;\underline x\cdot \underline y=0\}$$ we see quite easily that $\tilde \Psi$ is one-to-one and onto. It is clearly smooth and its smooth inverse is easy to write down, so it is seen to be a diffeomorphism.

We observed before that $\pi|_F:F\rightarrow \mathbb{C}P^n\setminus V_f$ is surjective, although it is not difficult to see that it fails to be $S^1$-principal. What is true, however, is that if we consider the $S^1$ orbit in $S^{2n+1}\setminus\tilde V_f$ of a point $\underline z\in F$, then we find that $$(S^1\cdot \underline z)\cap F=\{\pm\underline z\}$$ consists of exactly two points. In fact $\pi^{-1}(\pi(\underline z))\cap F=\{\pm\underline z\}$. The point is that $F$ is closed under the restriction of the $S^1$-action to its $\mathbb{Z}_2$-subgroup and we have a principal fibration $$\mathbb{Z}_2\hookrightarrow F\xrightarrow{\pi|} \mathbb{C}P^n\setminus V_f.$$

Now recall the map $\tilde \Psi:F\rightarrow TS^n$. It is clear from its definition that this map is $\mathbb{Z}_2$-equivariant with respect to the natural $\mathbb{Z}_2$-action on $TS^n$ induced by the tangent map of the antipodal map on $S^n$. Thus there is an induced map of $\mathbb{Z}_2$-orbit spaces $$\Psi:\mathbb{C}P^n\setminus V_f\rightarrow T\mathbb{R}P^n\cong (TS^n)/\mathbb{Z}_2.$$ Since the fibre-preserving map $\tilde\Psi$ is a diffeomorphism, so too is its quotient $\Psi$, which is therefore the map you have been looking for.
Matrix Exponentiation $A^{15}$
If $A^2 = A - I$ then $$A^3 = A A^2 = A(A - I) = A^2 - A = (A - I) - A = -I.$$ Therefore, $A^{15} = (-I)^5 = -I$.
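A quick numerical sanity check (a sketch using numpy; the rotation-by-60° matrix below is just one convenient matrix satisfying $A^2 = A - I$, since its characteristic polynomial is $x^2 - x + 1$):

```python
import numpy as np

# Rotation by 60 degrees satisfies A^2 = A - I.
c, s = np.cos(np.pi / 3), np.sin(np.pi / 3)
A = np.array([[c, -s], [s, c]])

print(np.allclose(A @ A, A - np.eye(2)))                       # True: A^2 = A - I
print(np.allclose(np.linalg.matrix_power(A, 15), -np.eye(2)))  # True: A^15 = -I
```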
An analytic doubt from Riemann zeta function-Titchmarsh
The interval $[1,N]$ is split dyadically into $[\frac {N}{2}, N]$, $[\frac {N}{4},\frac {N}{2}]$, etc., so into at most $\lfloor\log_2 N\rfloor+1$ intervals. (It doesn't matter if the endpoints are not integral, since we can apply Lemma 5.7 because the two integral endpoints of the respective interval are of the type $a \leq b \leq 2a$; we stop when the lower end is at or below $1$, so after at most $\lfloor\log_2 N\rfloor+1$ steps. It is actually that many unless $N$ is a power of $2$, in which case $\lfloor\log_2 N\rfloor$ steps suffice.) Obviously $\lfloor\log_2 N\rfloor+1$ is $O(\log N)$, which is $O(\log t)$ in our situation. So, as noted in the comment, we apply Lemma 5.7 to each dyadic interval as above and notice that we have an absolute $O\left(N^{1-1/L}t^{1/\{(l+1)L\}}\log^{1/L}t\right)$, and then a sum of the type $1+2^{\frac {1-L}{L}}+\dots+2^{\frac {k(1-L)}{L}}$ from applying the estimates with $a=N$, $a=\frac{N}{2}$, etc. That sum is uniformly bounded by an absolute constant ($\frac{\sqrt2}{\sqrt2-1}$ for $L \geq 2$, as it is a piece of a geometric series), so we absorb the sum over the $O(\log N)$ dyadic pieces into a constant, and hence we do not need to increase the power of $\log t$ in the big $O$ at the end; the same goes for the other sum.
Why is the Euclidean metric the natural choice?
Beginning with the one-dimensional case: On ${\mathbb R}$ we have translations $t\mapsto t+a$, $\, a$ fixed, and reflexions $t\mapsto -t$ as natural "geometric" isomorphisms. A translation and reflexion invariant metric then necessarily is of the form $d(x,y)=\phi\bigl(|x-y|\bigr)$ where $\phi$ should satisfy some "technical" conditions to make $d$ a metric which in addition is compatible with the inborn topological structure of ${\mathbb R}$. There are many such $\phi$; e.g. the definition $d(x,y):=\tanh\bigl(|x-y|\bigr)$ would turn ${\mathbb R}$ into a bona-fide metric space where all distances are $<1$. But in ${\mathbb R}$ we have an additional set of "geometric" isomorphisms, namely scalings. If we want our metric to behave in a reasonable way under scalings $T_\lambda: \ x\mapsto \lambda x$, $\,\lambda>0$ fixed, then the only $\phi$s left are the functions $\phi(u)= cu$, $\, c>0$ fixed, and we may as well choose $c=1$, so that we arrive at $d(x,y)=|x-y|$. Now the two-dimensional case: Symmetry considerations like the above imply that we should choose a direction dependent $\phi:\ S^1\to {\mathbb R}_{>0}$ which is even and satisfies a certain convexity condition; then we should put $$d(x,y):=\phi\left({x-y\over |x-y|}\right)\ |x-y|\ .$$ This metric is translation invariant and behaves correctly under scalings. But again, in ${\mathbb R}^2$ new sets of "geometric" isomorphisms are available, namely compact one-parameter groups of "rotations". If we want our $d$ to be invariant under such a group, the only candidates left are of the form $$\|x\|^2:=\bigl(d(x,0)\bigr)^2= x'Qx\ ,$$ where $Q$ is a positive definite quadratic form of the coordinate variables $x_1$, $x_2$. Introducing a coordinate system adapted to the $Q$ at hand we then arrive at $\|x\|^2=x_1^2+x_2^2$, i.e., the Euclidean distance function.
$16^{18}+16^{18}+16^{18}+16^{18}+16^{18}=4^x$ without a calculator
$$16^{18}+16^{18}+16^{18}+16^{18}+16^{18}=5\cdot 16^{18}=5\cdot 4^{36}=4^x$$ The solution for $\,x\,$ is going to be a little ugly because of that $\,5\,$ there. If instead of $\,5\,$ summands there were only $\,4\,$ then things would be nicer...anyway: $$5\cdot 4^{36}=4^x\Longrightarrow x=\frac{\log 5+36\log 4}{\log 4}=\frac{\log 5}{\log 4}+36$$
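A two-line numerical check of that value (a Python sketch):

```python
import math

x = 36 + math.log(5, 4)
print(x)                                   # about 37.1609640...
print(math.isclose(4 ** x, 5 * 16 ** 18))  # True, up to floating-point error
```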
What numerical method can be recommended to solve an exponential system of equations with two unknowns?
The vector version of Newton's method goes like this: suppose you have $n$ equations in $n$ unknowns where the unknowns are $x_1, x_2, \dots, x_n$ and the equations are $$f_i(x_1, x_2, \dots, x_n)=0$$ for $i=1, 2, ..., n$. Then the linear approximation to each equation is $$\begin{align}f_i(r_1, r_2, \dots, r_n)&=f_i((r_1-x_1)+x_1,(r_2-x_2)+x_2,\dots,(r_n-x_n)+x_n)\\ &\approx f_i(x_1, x_2, \dots, x_n)+\sum_{j=1}^n(r_j-x_j)\frac{\partial}{\partial x_j}f_i(x_1, x_2, \dots, x_n)=0\end{align}$$ where $(r_1,r_2,\dots,r_n)$ is the true solution to the system of equations. If we construct the vector $\vec b$ where $b_i=f_i(x_1, x_2, \dots, x_n)$ and the matrix $M$ where $M_{ij}=\frac{\partial}{\partial x_j}f_i(x_1, x_2, \dots, x_n)$ then we can solve for $$M(\vec x-\vec r)=M\vec y=\vec b$$ Then our next approximation to the solution is $x_j\leftarrow x_j-y_j$. Here is a Matlab example for your posted problem:

```matlab
% kinetics.m
x = [0.5; 2.5];
err = 1;
while norm(err) > 1.0e-12,
   func = [4^x(1)+7^x(2)-53; 3^x(1)+8^x(2)-67];
   jacobian = [4^x(1)*log(4) 7^x(2)*log(7); 3^x(1)*log(3) 8^x(2)*log(8)];
   err = jacobian\func;
   fprintf('Error = %e\n', norm(err));
   x = x-err;
end
fprintf('Solution is x = %.16f, y = %.16f\n',x);
```

Output is:

```
Error = 7.745216e-01
Error = 3.381383e+00
Error = 6.486683e-01
Error = 5.944094e-01
Error = 4.931151e-01
Error = 3.146575e-01
Error = 1.086841e-01
Error = 1.055246e-02
Error = 9.001149e-05
Error = 6.479704e-09
Error = 2.232400e-15
Solution is x = 1.0000000000000007, y = 2.0000000000000000
```

It took a while to get going, but once it gets close Newton's method is generally quadratically convergent.

EDIT: Let's do one Newton-Raphson iteration by hand. Start with $$\vec x=\begin{bmatrix}1.1193\\1.9988\end{bmatrix}$$ Evaluate the residual $$\vec f\left(\vec x\right)=\begin{bmatrix}4^{x_1}+7^{x_2}-53\\ 3^{x_1}+8^{x_2}-67\end{bmatrix}=\begin{bmatrix}4^{1.1193}+7^{1.9988}-53\\ 3^{1.1193}+8^{1.9988}-67\end{bmatrix} =\begin{bmatrix}0.605103\\0.260622\end{bmatrix}$$ And the Jacobian $$J=\begin{bmatrix}4^{x_1}\ln4&7^{x_2}\ln7\\ 3^{x_1}\ln3&8^{x_2}\ln8\end{bmatrix}= \begin{bmatrix}4^{1.1193}\ln4&7^{1.9988}\ln7\\ 3^{1.1193}\ln3&8^{1.9988}\ln8\end{bmatrix}= \begin{bmatrix}6.542462&95.12721\\ 3.75739&132.7526\end{bmatrix}$$ Now we have to solve $J\vec\epsilon=\vec f$, so we form the augmented matrix and perform Gaussian elimination. $$M=\begin{bmatrix}6.542462&95.12721&0.605103\\ 3.75739&132.7526&0.260622\end{bmatrix}$$ Divide the first row by $M_{11}=6.542462$ $$M=\begin{bmatrix}1&14.53997&0.092489\\ 3.75739&132.7526&0.260622\end{bmatrix}$$ Subtract $M_{21}=3.75739$ times the first row from the second $$M=\begin{bmatrix}1&14.53997&0.092489\\ 0&78.12024&-0.08689\end{bmatrix}$$ Divide the second row by $M_{22}=78.12024$ $$M=\begin{bmatrix}1&14.53997&0.092489\\ 0&1&-0.001112308\end{bmatrix}$$ Subtract $M_{12}=14.53997$ times the second row from the first $$M=\begin{bmatrix}1&0&0.108661\\ 0&1&-0.001112308\end{bmatrix}$$ So our estimate of the error in our root is the last column $$\vec\epsilon=\begin{bmatrix}0.108661\\-0.001112308\end{bmatrix}$$ Subtract our error estimate from the current value of $\vec x$ to get our next value of $\vec x$ $$\vec x\leftarrow\vec x-\vec\epsilon=\begin{bmatrix}1.1193\\1.9988\end{bmatrix}- \begin{bmatrix}0.108661\\-0.001112308\end{bmatrix}=\begin{bmatrix}1.010639\\1.999912\end{bmatrix}$$ And repeat as necessary until the residuals and errors are small enough.
Integrating $\int xe^{-x} dx$ without parts
Differentiation under the integral sign. The integral is precisely $$-\frac{1}{a} \dfrac{d}{da} \int e^{-a x} \ dx$$ evaluated at $a=1$. But that is $$-\dfrac{1}{a} \dfrac{d}{da} \left(-\frac{1}{a} e^{-ax} \right) = -\frac{1}{a} \left( \frac{e^{-ax}}{a^2} + \frac{e^{-a x}x}{a} \right)$$ Evaluating at $a=1$ yields $$-(e^{-x} + x e^{-x}) = -e^{-x}(1+x)$$
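If you want to double-check the resulting antiderivative, a one-line symbolic verification (a sympy sketch):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(x * sp.exp(-x), x))   # -(x + 1)*exp(-x), up to how sympy orders the terms
```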
Issue with Compound Interest
You're on the right track. $$\frac{3000}{2517} = (1 + \frac{i}{12})^{36}$$ $$ \sqrt[36]{\frac {3000}{2517}} = (1 + \frac{i}{12}) $$ $$1.00489 \approx 1 + \frac{i}{12}$$ $$0.00489 \approx \frac{i}{12}$$ $$0.0587 \approx i$$ Then, multiply by 100 to get the percentage value: $$i \approx 5.87\%$$
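A short Python sketch to verify the arithmetic (assuming, as in the equation above, a principal of 2517 growing to 3000 over 36 monthly compounding periods):

```python
ratio = 3000 / 2517
monthly = ratio ** (1 / 36)     # this is 1 + i/12
i = 12 * (monthly - 1)
print(monthly)                  # about 1.00489
print(100 * i)                  # about 5.87 (percent)
```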
Sum of angles of Triangle greater than 180
It may depend on your definition of triangle, but at least in Euclidian geometry the sum of interior angles of a triangle (three points connected by straight lines) is always 180°. See also Sum of Angles in a Triangle.
Easiest way to calculate the determinant of this 4x4 matrix
As suggested in the comments, Gauss elimination is usually the way to go, and the fastest in this case, too: $$\det A= \det\begin{pmatrix} 2 & 3 & 1 & 0 \\ 4 & -2 & 0 & -3\\ 8 & -1 & 2 & 1\\ 1 & 0 & 3 & 4\\ \end{pmatrix} = \det\begin{pmatrix} 0 & 3 & -5 & -8 \\ 0 & -2 & -12 & -19\\ 0 & -1 & -22 & -31\\ 1 & 0 & 3 & 4\\ \end{pmatrix} = (-1)^{4+1}\cdot 1\cdot\det\begin{pmatrix} 3 & -5 & -8 \\ -2 & -12 & -19\\ -1 & -22 & -31\\ \end{pmatrix} = -\det\begin{pmatrix} 0 & -71 & -101 \\ 0 & 32 & 43\\ -1 & -22 & -31\\ \end{pmatrix} = -1\cdot(-1)^{3+1}\cdot(-1)\cdot\det\begin{pmatrix} -71 & -101 \\ 32 & 43\\ \end{pmatrix} = (-71)\cdot 43-(-101)\cdot 32=179 $$ (Wolfram Alpha-verified result; I never could remember the 3x3-formula, so I don't use it) If you absolutely want an upper triangular matrix, you can do this, but it's only a restriction of the normal algorithm: $$\det A= \det\begin{pmatrix} 2 & 3 & 1 & 0 \\ 4 & -2 & 0 & -3\\ 8 & -1 & 2 & 1\\ 1 & 0 & 3 & 4\\ \end{pmatrix} = \det\begin{pmatrix} 2 & 3 & 1 & 0 \\ 0 & -8 & -2 & -3\\ 0 & -13 & -2 & 1\\ 0 & -\frac32 & \frac52 & 4\\ \end{pmatrix} = \det\begin{pmatrix} 2 & 3 & 1 & 0 \\ 0 & -8 & -2 & -3\\ 0 & 0 & ? & ?\\ 0 & 0 & ? & ?\\ \end{pmatrix} = \det\begin{pmatrix} 2 & 3 & 1 & 0 \\ 0 & -8 & -2 & -3\\ 0 & 0 & ? & ?\\ 0 & 0 & 0 & ?\\ \end{pmatrix} $$ (I'm too lazy to calculate the $?$ now, just continue with the Gaussian Elimination. The determinant will then be the product of the entries on the diagonal.) For the eigenvalues: yes, you have to calculate the characteristic polynomial.
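For an independent numerical check of the determinant (a small numpy sketch):

```python
import numpy as np

A = np.array([[2, 3, 1, 0],
              [4, -2, 0, -3],
              [8, -1, 2, 1],
              [1, 0, 3, 4]])
print(round(np.linalg.det(A)))   # 179
```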
limit points of $\{1+\frac{1}{n}+i \sin\frac{1}{m}|n,m\in \mathbb{Z}\}$
Each $1+\frac{1}{n_0}$ is a limit point as $$\lim_{m\to \infty}\bigg(1+\frac{1}{n_0}+i \sin\frac{1}{m}\bigg)=1+\frac{1}{n_0}.$$Also, for each fixed $n_0,m_0\in \Bbb N$, the point $1+\frac{1}{n_0}+i \sin\frac{1}{m_0}$ is an isolated point of the set; moreover, $1$ is a limit point as well, since $$\lim_{n,m\to \infty}\bigg(1+\frac{1}{n}+i \sin\frac{1}{m}\bigg)=1.$$
Show that a given measure is equal to the Lebesgue measure on Borel subsets on $\mathbb{R}$
Having $\mu (a,b]=b-a$ for rational $a,b$ gives $\mu (a,b)=b-a$ for all real $a<b.$ Proof: Write $(a,b)$ as the increasing union of $(a_n,b_n]$ for appropriate rational $a_n,b_n.$ Standard measure theory with your result for rationals then gives the result. Since every open set in $\mathbb R $ is the disjoint union of open intervals, we see $\mu(U) = \lambda (U)$ for all open $U\subset \mathbb R.$ I'll stop here for now. Ask questions if you like.
Why this is Normal Distribution?
Since $\int_0^tdZ_u\sim N(0,\,t)$, a deterministic $\mu$ satisfies$$\mu+\int_0^t\pi\sigma dZ_u=\mu+\pi\sigma\int_0^tdZ_u=\mu+\pi\sigma N(0,\,t)=N(\mu,\,\pi^2\sigma^2t).$$In this case, $\mu=\int_0^tcdu=ct$ for $u$-independent integrand $c=r+\pi(\mu-r)-\frac12\pi^2\sigma^2$.
Trouble Finding Dominating Integrable Functions for Limiting Integrals
The first one is correct. For the others, we can decompose the integral to $\int_0^1+ \int_1^{\infty}$, and for each integrand find a dominating function. I will work out the second one (the third one is analogous). $$\int_0^{\infty}f_n(r)dr = \int_0^1 f_n(r)dr + \int_1^{\infty} f_n(r)dr$$ On $[0,1]$, $f_n(r)\to 0$ almost everywhere (it converges to $0$ except at $r=1$) and: $$0\le f_n(r)\le \frac1{1+r^2} \in L^1$$ So we get $\lim \int_0^1 f_n(r)dr = 0$. On $[1,\infty)$, $f_n(r)\to \frac1{r^2}$, and: $$0\le f_n(r)\le \frac{1}{r^2} \in L^1$$ So $\lim \int_1^{\infty}f_n(r)dr = \int_1^{\infty}\frac1{r^2}dr = 1$ Therefore $\lim \int_0^{\infty}f_n(r)dr = 0+1=1$.
Inverse Outer Product
The easiest way, if you are allowed to use a linear algebra package, is to just do an SVD: $C = U S V^T$. If you do "econo-mode" SVD, then $S$ will be an $r \times r$ diagonal matrix, and if $r > 1$ then you can't possibly accomplish that task. If $r = 1$, then $A = S^{1/2}U$, $B = S^{1/2}V$ are both vectors, and you're done. If you don't get to use computational tools, then you can start out by just eyeing the matrix and see if it looks "rank-one-y". Basically, it should be like $C = \left[\begin{matrix} a_1b_1 & a_1b_2 & ... & a_1b_n\\ a_2b_1 & a_2b_2 & ... & a_2b_n\\ .\\ .\\ .\\ a_nb_1 & a_nb_2 & ... & a_nb_n\\ \end{matrix}\right]$ You can usually tell right away whether this $C$ will work or not by visual inspection. Note that as of now, for any correct choice of $A$ and $B$, the choice $\gamma A, \gamma^{-1} B$ is also valid, for any nonzero $\gamma$, so you do need a condition on that, but we can also just characterize this as a function of $\gamma$. (The SVD approach will pick something where $\|A\|_2 = \|B\|_2$.) Since no complete row is $0$, you can start by picking one nonzero row $i$, with some $C_{ij} \neq 0$, and compute $b_j = C_{ij}, b_k = C_{ik} / C_{ij}$ for $k \neq j$. Now pick any nonzero column $\hat i$, where $C_{\hat i\hat j} \neq 0$ for some $\hat j$, and compute $\gamma = b_{\hat i}$, $a_{\hat i} = C_{\hat i\hat j}/\gamma, a_k = C_{k\hat j} / (\gamma C_{\hat i\hat j})$ for $k \neq \hat i$.
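A small numpy sketch of the SVD route (the vectors a and b below are made-up test data, just to show the round trip):

```python
import numpy as np

# Hypothetical rank-one data: C is the outer product of a and b.
a = np.array([1.0, -2.0, 3.0])
b = np.array([4.0, 0.5, -1.0, 2.0])
C = np.outer(a, b)

U, S, Vt = np.linalg.svd(C, full_matrices=False)   # "econo-mode" SVD
print(np.sum(S > 1e-12))                           # 1, so C is numerically rank one

# Recover a factorization C = outer(A, B); it agrees with (a, b) up to a scalar.
A = np.sqrt(S[0]) * U[:, 0]
B = np.sqrt(S[0]) * Vt[0, :]
print(np.allclose(np.outer(A, B), C))              # True
```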
Proofs of Asymptotics
If we write $x = k+h$ with $k=\lfloor x\rfloor$ and $0 \leqslant h < 1$, then we have $$\left\lvert \ln \frac{\lfloor x\rfloor}{x}\right\rvert = \ln \frac{k+h}{k} = \ln \left(1+\frac{h}{k}\right) \leqslant \frac{h}{k} < \frac{1}{k} = \frac{1}{\lfloor x\rfloor}.$$ In the second, we have $$\ln x = \ln \left(\lfloor x\rfloor \frac{x}{\lfloor x\rfloor}\right) = \ln \lfloor x\rfloor + \ln \frac{x}{\lfloor x\rfloor},$$ and we saw above that the last term is smaller than $1/\lfloor x\rfloor$, so $$\ln^2 x < \ln^2 \lfloor x\rfloor + 2 \frac{\ln \lfloor x\rfloor}{\lfloor x\rfloor} + \frac{1}{\lfloor x\rfloor^2},$$ so the difference can be absorbed in an $O\left(\frac{\ln x}{x}\right)$ term.
Why can't we prove that union of infinite no of countable sets is also countable by induction?
This is a common mistake. Induction can be used to show that a statement $P(n)$ is true for infinitely many $n$ (like $n=1, 2, 3, \ldots$), but this does NOT mean that $P(\infty)$ is true (or that it even makes sense). For a related but more severe example, let $A_n = \{n\}$ for $n=1, 2, 3, \ldots$. Now prove by induction that $\bigcup_{i=1}^n A_i$ is finite. (This is obviously true without induction, since the union is just $\{1, 2, \ldots, n\}$.) But by your approach, you would be able to argue that $\bigcup_{i=1}^\infty A_i$ is finite, which is really false.
How can I show that $\mathbb{E}(Y|\sigma(X))$ satisfies the abstract definition of conditional expectation?
Hint on the third bullet: Starting from $H\left(x\right)f_{X}\left(x\right)=\int f\left(x,y\right)y\,dy$ we find, for suitable functions $g$: $$\mathbb{E}\left[H\left(X\right)g\left(X\right)\right]=\int H\left(x\right)g\left(x\right)f_{X}\left(x\right)dx=\int\int f\left(x,y\right)yg\left(x\right)dydx=\mathbb{E}\left[Yg\left(X\right)\right]$$
Arithmetic progression topology
Hint: Try $d = \operatorname{lcm}(b,b')$ or $d = bb'$ - either should work fine. The idea being that we're looking for things that are in a list of step size of $b$ (so to say) and in another list of step size $b'$.
For a non-vanishing field $X$ and $a\in C^1(\mathbb R^n;\mathbb C)$, how to reduce the study of $X\cdot \nabla u +au= f$ to the study of $Xv=g$?
I suppose that Alignac provides details for showing that $Xv=g$ admits smooth local solutions in a neighborhood of a point where $X$ is non-zero? (e.g. by integrating along the flow of $X$). After that you pose and solve locally, as suggested in comments, $XA=a$ and then $XB=e^{A}f$ (there is no need to separate into real and imaginary parts). After that $u=Be^{-A}$ provides a local solution to the problem since $$ X u + a u = (X + a) (B e^{-A}) = (XB) e^{-A}-B a e^{-A} +a B e^{-A} = f $$ The non-vanishing of $X$ ensures that the problems are equivalent. Otherwise you run into counterexamples like: $$ x u'(x) + u(x) = 1 $$ which does have a smooth solution ($u\equiv 1$) but for which $x A'(x)=1$ does not have a solution in a neighborhood of zero.
Properties of set using probabilities
Reminders of properties:
$P(X\cup Y)=P(X)+P(Y)-P(X\cap Y)$
$P(X) = 1 - P(X^c)$
$X = (X^c)^c$
$(X\cup Y)^c = X^c\cap Y^c$
$(X\cap Y)^c = X^c\cup Y^c$
We know that $P(A^c\cap B\cap C^c) = 0.2$, so we know that $$0.8 = 1 - P(A^c\cap B\cap C^c) = P((A^c\cap B\cap C^c)^c) = P(A\cup B^c\cup C).$$ Next, let us apply some conveniently placed parentheses and then apply inclusion-exclusion on the result: $$P(A\cup B^c\cup C)=P(A\cup (B^c\cup C)) = P(A)+P(B^c\cup C)-P(A\cap (B^c\cup C)).$$ We recognize now that two of these were given to us in the problem details: $$0.8 = 0.8 + P(B^c\cup C)-0.2.$$ Now, simplifying everything and moving things around, this tells us that $P(B^c\cup C) = 0.2$. Now, it's just one more step to get the value of $P(B\cap C^c)$.
For a regular parametrised plane curve $\alpha$, show that $\langle \alpha''(t),n(t)\rangle =- \langle \alpha'(t),n'(t)\rangle$
Hint: we have $$\langle \alpha'(t),n(t)\rangle=0.$$ Now take the derivative of the above equation with respect to $t$.
Extract a vector that is in the middle of a matrix equation
I don't think so: the second equation you're writing should output a vector (if $X$ is a matrix), while the first equation you're writing should output a matrix. Also, the first equation is not linear but affine ($f(v+w)\neq f(v)+f(w)$ for $v,w\in\mathbb{R}^{n\times1}$).
What is a proximity operator? why do we need it?
Check out my answer to a similar question. It explains what prox operators are, and how they're used in modern convex optimization. It also gives useful references for further reading on the subject. If you still have questions, then we can look at those.
Entire function with zeros of even multiplicity is the square of another entire function
On the one hand, from the product representation (we disregard $f \equiv 0$, where the existence of a square root is trivial) $$f(z) = e^{a(z)} z^{2\nu_0} \prod \left(1-\tfrac{z}{b_k}\right)^{2\nu_k}e^{p_k(z)}\tag{1}$$ of $f$, we directly obtain what should be a product representation of a square root of $f$: $$g(z) = e^{a(z)/2} z^{\nu_0} \prod\left(1-\tfrac{z}{b_k}\right)^{\nu_k} e^{p_k(z)/2},\tag{2}$$ and then it remains to see that the product in $(2)$ converges locally uniformly on all of $\mathbb{C}$. That is maybe a little tedious if done rigorously. We can avoid that tedium if we use the Weierstraß product theorem in a different way: There is an entire function $g_1$ that has zeros of order $\nu_k$ in the $b_k$ (with $b_0 = 0$ and possibly $\nu_0 = 0$), and no other zeros. Then the function $$q(z) = \frac{f(z)}{g_1(z)^2}$$ is entire and has no zeros. Hence $q$ has a logarithm, $q(z) = e^{g_2(z)}$. Now it is clear that $g(z) = g_1(z) e^{g_2(z)/2}$ is an entire square root of $f$. An alternative way to establish the existence of a square root of $f$ considers the logarithmic derivative $$h(z) = \frac{f'(z)}{f(z)}.$$ $h$ is an entire meromorphic function, with simple poles in the zeros of $f$, where the residue in $b_k$ is $2\nu_k$, and no other poles. Therefore the residue of $\tilde{h} = \frac{1}{2}\cdot h$ in all poles is an integer, and hence $$g(z) = \exp \left(c_0 + \int_{z_0}^z \tilde{h}(\zeta)\,d\zeta\right),$$ where $z_0$ is an arbitrary point such that $f(z_0) \neq 0$, $c_0$ is chosen so that $e^{2c_0} = f(z_0)$, and the integral is over an arbitrary piecewise differentiable path from $z_0$ to $z$ that does not pass through any zero of $f$, is a well-defined holomorphic function on $\mathbb{C}\setminus f^{-1}(\{0\})$ with $g(z)^2 \equiv f(z)$ there. From that, it follows that the zeros of $f$ are removable singularities of $g$, and the identity $g(z)^2 \equiv f(z)$ holds on all of $\mathbb{C}$ after removing the removable singularities. Note: the same arguments work on any simply connected domain, but on a domain that is not simply connected, not all holomorphic functions with all zeros of even order have a square root. Also, the arguments, mutatis mutandis, also work for any $m$-th root of $f$ if the order of all zeros of $f$ is a multiple of $m$.
different essential ideals
Take $A=C[0,1]$. For each $x\in[0,1]$, $$ I_x=\{f:\ f(x)=0\} $$ is an essential ideal. So you have uncountably many of them. And there are more, because finite intersections of essential ideals are again essential. For the inclusion, one could have $M(I)\subset M(J)$ if $I\subset J$ and $I$ has an approximate unit for $J$; this doesn't require them to be ideals nor essential. I'm not sure what can be said in general.
How can I show that for every $\alpha \in [0,1]$ there is a $ B_{\alpha} \in \cal B(\mathbb R)$ with $\mu(B_{\alpha})=\alpha$?
IF the measure of all singletons is zero, the distribution function is continuous. You have zero covered since every singleton is of measure zero and you have 1 covered since the line minus a singleton has measure 1. Now invoke the Intermediate value theorem for the rest.
Trouble with understanding transitive, symmetric and antisymmetric properties
The three properties required for an equivalence relation are
1) reflexive: for any $x$ in the set, $(x, x)$ is in the relation. That is obviously true here- the set is $\{1, 2, 3\}$ and we have $(1, 1), (2, 2), (3, 3)$ in the relation.
2) symmetric: if $(x, y)$ is in the relation then so is $(y,x)$. Again that is obvious. The only pairs in the relation are $(1, 1), (2, 2)$, and $(3, 3)$. Reversing the order just gives the same thing again.
3) transitive: if $(x, y)$ and $(y, z)$ are in the relation, then so is $(x, z)$. Once again, obvious! $(x, y)$ is in the relation only if $x= y$ but then we must have $y= z$ so $x= z$. That is the same as $(z, z)$.
Actually, while there can be very complicated "equivalence relations", this particular one is the "epitome" of equivalence relations- it is the identity relation. "$x$ is equivalent to $y$" if and only if "$x$ is equal to $y$". (Anti-symmetry plays no role in equivalence relations, but since you ask: where "symmetric" requires "if $(x, y)$ is in the relation, then $(y, x)$ is also", "anti-symmetric" requires "if $(x, y)$ and $(y, x)$ are both in the relation, then $x = y$".)
Solving the following equation: $\ln(x)=\frac{x}{1+x}$
Hint. The function $f(x)=\ln(x)-\frac{x}{1+x}$ is strictly increasing for $x>0$: $$f'(x)=\frac{1+x+x^2}{x(1+x)^2}>0.$$ Moreover $f$ is continuous, $f(1)=-1/2<0$ and $f(2)=\ln(2)-2/3>0$. So the equation $f(x)=0$ has a unique solution which belongs to the interval $(1,2)$. Finding the exact value of such a solution in a closed form is not trivial. However you can approximate it by using the bisection method or Newton's method.
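For instance, a minimal bisection sketch in Python (the root is the unique zero of $f$ in $(1,2)$):

```python
import math

def f(x):
    return math.log(x) - x / (1 + x)

# f is increasing and changes sign on [1, 2], so plain bisection works.
lo, hi = 1.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        hi = mid
    else:
        lo = mid
print(lo)   # about 1.93
```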
How many natural numbers $n$ exist for $n=a^2−b^2−c^2$
HINT: $2n+1=(n+1)^2-n^2-0^2 ;a=n+1,b=n,c=0$ $2n=(n+1)^2-n^2-1^2 ;a=n+1,b=n,c=1$
Computing $\lim_{n \to \infty} \left[\left(\prod_{i=1}^{n}i!\right)^{1\over n^{2}} (n^{x})\right] $ if exists for certain $x\in\mathbb R$
Asymptotic Expansion via Riemann Sum Compute the log of the product as a Riemann Sum $$ \begin{align} \frac1{n^2}\sum_{k=1}^n k\log(n-k+1) &=\sum_{k=1}^n\frac{k}{n}\left(\log\left(1-\frac{k}{n}+\frac1n\right)+\log(n)\right)\frac1n\tag{1a}\\ &\sim\int_0^1x\log(1-x)\,\mathrm{d}x+\frac12\log(n)\tag{1b}\\ &=\int_0^1\log(1-x)\,\mathrm{d}\frac{x^2-1}2+\frac12\log(n)\tag{1c}\\ &=-\int_0^1\frac{x+1}2\,\mathrm{d}x+\frac12\log(n)\tag{1d}\\ &=\frac12\log(n)-\frac34\tag{1e} \end{align} $$ Thus, the product is asymptotically $$ \left(\prod_{k=1}^nk!\right)^{1/n^2}\sim e^{-3/4}n^{1/2}\tag2 $$ Therefore, for $x=-1/2$, the limit comes to $$ \bbox[5px,border:2px solid #C0A000]{\lim_{n\to\infty}\left(\prod_{k=1}^nk!\right)^{1/n^2}n^{-1/2}=e^{-3/4}}\tag3 $$ For $x\lt-1/2$, the limit is $0$. Asymptotic Expansion via Euler-Maclaurin Sum Formula As is shown in this answer, we have asymptotically in $n$, $$ \sum_{k=1}^n k^{-z} =\zeta(z)+\frac{n^{1-z}}{1-z}+\frac12n^{-z}-\frac{z}{12}n^{-1-z}+O\left(\frac1{n^{3+z}}\right)\tag4 $$ applying $-\frac{\mathrm{d}}{\mathrm{d}z}$: $$ \begin{align} \sum_{k=1}^n\log(k)k^{-z} &=-\zeta'(z)+n^{1-z}\frac{(1-z)\log(n)-1}{(1-z)^2}+\frac12\log(n)n^{-z}\\ &-n^{-1-z}\frac{z\log(n)-1}{12}+O\!\left(\frac{\log(n)}{n^{3+z}}\right)\tag5 \end{align} $$ Setting $z=0$: $$ \sum_{k=1}^n\log(k)=\overbrace{\,\,-\zeta'(0)\ }^{\frac12\log(2\pi)}+n(\log(n)-1)+\frac12\log(n)+\frac1{12n}+O\!\left(\frac{\log(n)}{n^3}\right)\tag6 $$ Setting $z=-1$: $$ \begin{align} \sum_{k=1}^n\log(k)k &=\overbrace{-\zeta'(-1)}^{\log(A)-\frac1{12}}+n^2\frac{2\log(n)-1}4+\frac12n\log(n)+\frac{\log(n)+1}{12}\\ &+O\!\left(\frac{\log(n)}{n^2}\right)\tag7 \end{align} $$ where $A$ is the Glaisher–Kinkelin constant. Thus, $$ \begin{align} \sum_{k=1}^n(n-k+1)\log(k) &=n^2\frac{2\log(n)-3}4+n\log\left(\frac{\sqrt{2\pi}}en\right)+\frac5{12}\log(n)\\ &+\log\left(\frac{\sqrt{2\pi}}{A}\right)+\frac1{12}+\frac1{12n}+O\!\left(\frac{\log(n)}{n^2}\right)\tag8 \end{align} $$ and therefore, $$ \prod_{k=1}^nk!=\frac{\sqrt{2\pi}}{A}e^{1/12}\,\color{#C00}{n^{n^2/2}e^{-3n^2/4}}\color{#090}{\left(\frac{\sqrt{2\pi}}en\right)^n}n^{5/12}\color{#00F}{e^{\frac1{12n}+O\left(\frac{\log(n)}{n^2}\right)}}\tag9 $$ Finally, $$ \bbox[5px,border:2px solid #C0A000]{\left(\prod_{k=1}^nk!\right)^{1/n^2}=\color{#C00}{n^{1/2}e^{-3/4}}+\color{#090}{O\!\left(\frac{\log(n)}{n^{1/2}}\right)}}\tag{10} $$ The Glaisher–Kinkelin Constant Equation $(6)$ is essentially Stirling's Formula: $$ \prod_{k=1}^nk=\sqrt{2\pi}\,n^{n+1/2}e^{-n}\left(1+\frac1{12n}+O\!\left(\frac1{n^2}\right)\right)\tag{11} $$ where $\sqrt{2\pi}=e^{-\zeta'(0)}$. This shows that $\zeta'(0)=-\frac12\log(2\pi)$. Equation $(7)$ says that $$ \prod_{k=1}^nk^k=A\,n^{n^2/2+n/2+1/12}e^{-n^2/4}\left(1+O\!\left(\frac{\log(n)}{n^2}\right)\right)\tag{12} $$ where $A=e^{\frac1{12}-\zeta'(-1)}$. Just as $$ \sum_{k=1}^n\frac1k=\log(n)+\gamma+O\!\left(\frac1n\right)\tag{13} $$ is the defining limit for $\gamma$, the Euler-Mascheroni Constant, $(12)$ appears to be the defining limit for $A$, the Glaisher–Kinkelin constant.
If $\cos3A + \cos3B + \cos3C = 1$ in a triangle, find one of its length
$\mathbf {Hint...}$ $$\cos {3A}+\cos {3B}+ \cos{3C}=1$$ $$\Rightarrow 4 \sin {\frac {3C}{2}} \cdot\sin {\frac{3B}{2}} \cdot\sin{\frac{3A}{2}}=0$$ Hence the largest angle of the triangle is $\frac{2\pi}{3}$, which can be either angle $C$ or angle $A$. By applying the cosine rule in each of these cases we get the value of $AB$ as $\sqrt {399}$ or $\sqrt {94}-5$ respectively. Note: $$\cos {3A}+\cos {3B}+ \cos{3C}=1$$ $$\Rightarrow -2\cos {\frac {3(A-B)}{2}}\sin {\frac {3C}{2}}- 2\left(\sin {\frac {3C}{2}}\right)^2=0$$ $$\Rightarrow\sin\frac{3C}{2}\left(\cos\frac{3(A-B)}{2}+ \sin\frac{3C}{2}\right)=0$$ $$\Rightarrow\sin\frac{3C}{2}\left(\cos\frac{3(A-B)}{2}-\cos\frac{3(A+B)}{2}\right)=0$$ $$\Rightarrow\sin\frac{3C}{2}\sin\frac{3B}{2}\sin\frac{3A}{2}=0.$$
How is the Lagrangian related to the perturbation function?
I wrote the post you linked to so I'll take a stab at this. Short answer: In terms of $\phi$, the dual problem is to maximize $-\phi^*(0,z)$ with respect to $z$. But \begin{align*} -\phi^*(0,z) &= - \sup_{x,y} \, \langle 0,x \rangle + \langle z, y \rangle - \phi(x,y) \\ &= \inf_x \, \inf_y \, \phi(x,y) - \langle z, y \rangle. \end{align*} The "Lagrangian" is the function \begin{equation*} L(x,z) = \inf_y \, \phi(x,y) - \langle z, y \rangle. \end{equation*} The "dual function" $G(z) = -\phi^*(0,z)$ is obtained by minimizing $L(x,z)$ with respect to $x$, as we are accustomed to. This expression for the Lagrangian appears on p. 54 of Ekeland and Temam. That's what the Lagrangian looks like in the general setting. The significance of the Lagrangian is that for convex problems, under mild assumptions, $x$ and $z$ are primal and dual optimal if and only if $(x,z)$ is a saddle point of the Lagrangian. We sometimes think of convex optimization problems as coming in pairs (primal and dual problem), but actually they come in triples if we remember to include the saddle point problem. If we only knew about the primal problem and the dual problem, but not the saddle point problem, then we'd be missing $1/3$ of the picture. Longer answer: The function $\phi$ is the minimal notation we need to describe the perturbation viewpoint. The Lagrangian only appears later, when we write the dual problem more explicitly. The perturbation viewpoint provides one possible explanation (my favorite explanation) of where the Lagrangian comes from, where the dual problem comes from, and why we expect strong duality to hold for convex problems. In the thread you linked to, note that the Lagrangian appears naturally at the very end. (The appearance of the Lagrangian should have been highlighted more explicitly.) I'll repeat myself a bit here but I will add something new -- a derivation of the Lagrangian when we have equality constraints in addition to inequality constraints. I've been meaning to write this down anyway. I'll work in $\mathbb R^N$ rather than an abstract real vector space, and I'll use slightly different notation. The primal problem is to minimize $\phi(x,0)$ with respect to $x$ (where $\phi:\mathbb R^n \times \mathbb R^m \to \mathbb R$). The perturbed problems are to minimize $\phi(x,y)$ with respect to $x$. We should also introduce the "value function" $v(y) = \inf_x \, \phi(x,y)$. ($v$ is called $h$ in the other thread.) So the primal problem is to evaluate $v(0)$. Now, using some highly intuitive facts about the convex conjugate, we note that $v(0) \geq v^{**}(0)$, and that when $\phi$ is convex we typically have equality. Thus, the dual problem is to evaluate $v^{**}(0)$. This is a very clear way of understanding what the dual problem is and why we care about it. And we can express this dual problem in terms of $\phi$: the dual problem is to maximize $-\phi^*(0,z)$ with respect to $z$. (Here $\phi^*$ is the convex conjugate of $\phi$. See other thread for details.) We haven't yet mentioned the Lagrangian. But the Lagrangian will appear when we work these ideas out in detail for the optimization problem \begin{align*} \text{minimize} & \quad f_0(x) \\ \text{subject to} & \quad f(x) \leq 0,\\ & \quad h(x) = 0, \end{align*} where $f:\mathbb R^n \to \mathbb R^m$ and $h:\mathbb R^n \to \mathbb R^p$. The inequality $f(x) \leq 0$ should be interpreted component-wise. 
We can perturb this problem as follows: \begin{align*} \text{minimize} & \quad f_0(x) \\ \text{subject to} & \quad f(x) + y_1 \leq 0,\\ & \quad h(x) +y_2= 0. \end{align*} This perturbed problem can be expressed as minimizing $\phi(x,y_1,y_2)$ with respect to $x$, where \begin{align*} \phi(x,y_1,y_2) = \begin{cases} f_0(x) & \quad \text{if } f(x) + y_1 \leq 0 \text{ and } h(x) + y_2 = 0, \\ \infty & \quad \text{otherwise}. \end{cases} \end{align*} To find the dual problem, we need to evaluate $-\phi^*(0,z_1,z_2)$, which is a relatively straightforward calculation. \begin{align*} -\phi^*(0,z_1,z_2) &= - \sup_{\substack{f(x) + y_1 \leq 0 \\ h(x) + y_2 = 0}} \langle 0,x\rangle + \langle z_1,y_1 \rangle + \langle z_2,y_2 \rangle - f_0(x) \\ &= -\sup_{x,\, q \geq 0} \, \langle z_1,-f(x) - q \rangle + \langle z_2,-h(x)\rangle - f_0(x) \\ &= \begin{cases} \inf_x \, f_0(x) + \langle z_1,f(x) \rangle + \langle z_2, h(x) \rangle & \quad \text{if } z_1 \geq 0 \\ -\infty & \quad \text{otherwise}. \end{cases} \end{align*} In the final step, the Lagrangian \begin{equation} L(x,z_1,z_2) = f_0(x) + \langle z_1,f(x) \rangle + \langle z_2,h(x) \rangle \end{equation} has appeared, as has the usual description of the dual function (where you minimize the Lagrangian with respect to $x$). Up until this point, we didn't know what the Lagrangian was or why it would be relevant.
Pivot Row in Simplex Method
Minimising $x_1+x_2-4x_3$ is equivalent to maximising its additive inverse, so we can simply copy the coefficients in the original problem to the last row of the initial simplex tableau: \begin{array}{r|rrrrrr|rr} & x_1 & x_2 & x_3 & x_4 & x_5 & x_6 & \text{RHS} & \text{ratio}\\ \hline x_4 & 1 & 1 & 2 & 1 & 0 & 0 & 9 & 9/2 \\ x_5 & 1 & 1 & -1 & 0 & 1 & 0 & 2 & - \\ x_6 & -1 & 1 & 1^* & 0 & 0 & 1 & 4 & 4\\ \hline & 1 & 1 & -4 & 0 & 0 & 0 & 0 \end{array} Choose the most negative number at the $z$-row ($-4$ in this case), just like what we do for a standard simplex maximisation problem. Then pick the least nonnegative number at the "ratio" column. (You may consult my other post on choosing the leaving variable for further explanation.) \begin{array}{r|rrrrrr|rr} & x_1 & x_2 & x_3 & x_4 & x_5 & x_6 & \text{RHS} & \text{ratio}\\ \hline x_4 & 3^* & -1 & 0 & 1 & 0 & -2 & 1 & 1/3\\ x_5 & 0 & 2 & 0 & 0 & 1 & 1 & 6 & - \\ x_3 & -1 & 1 & 1 & 0 & 0 & 1 & 4 & - \\ \hline & -3 & 5 & 0 & 0 & 0 & 4 & 16 \end{array} \begin{array}{r|rrrrrr|r} & x_1 & x_2 & x_3 & x_4 & x_5 & x_6 & \text{RHS} \\ \hline x_1 & 1 & -1/3 & 0 & 1/3 & 0 & -2/3 & 1/3 \\ x_5 & 0 & 2 & 0 & 0 & 1 & 1 & 6 \\ x_3 & 0 & 2/3 & 1 & 1/3 & 0 & 1/3 & 13/3 \\ \hline & 0 & 4 & 0 & 1 & 0 & 2 & 17 \end{array} Hence our optimal solution is $(x_1,x_2,x_3,x_4,x_5,x_6) = (1/3, 0, 13/3, 0 ,6, 0)$ with optimal value $-17$.
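If you want an independent check, here is a short scipy sketch. I am assuming the original constraints are the ones read off from the initial tableau above ($x_1+x_2+2x_3\le 9$, $x_1+x_2-x_3\le 2$, $-x_1+x_2+x_3\le 4$, $x\ge 0$); adjust them if your problem statement differs.

```python
from scipy.optimize import linprog

c = [1, 1, -4]                      # minimise x1 + x2 - 4*x3
A_ub = [[1, 1, 2],
        [1, 1, -1],
        [-1, 1, 1]]
b_ub = [9, 2, 4]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x)    # approximately [0.333, 0, 4.333]
print(res.fun)  # approximately -17
```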
Can we prove that $F(x)=\int_0^x \sin(t^2) e^t dt, x\geq 0$ is not bounded on $[0,+\infty)$?
As @Sangchul Lee said in his comment, $|F(\sqrt{n\pi})|\to \infty$ as $n \to \infty$. This can be shown as follows: Put $a_n = \sqrt{n\pi}$. If the sequence $F(a_n)$ is bounded, then $|F(a_{n+1})-F(a_n)|$ should also be bounded. Note that $\sin(t^2)$ does not change sign on $[a_n,a_{n+1}]$, since $t^2$ ranges over $[n\pi,(n+1)\pi]$ there, so the first inequality below is in fact an equality. However, \begin{align}|F(a_{n+1})-F(a_n)| &\geq \int_{a_n}^{a_{n+1}}|\sin(t^2)|e^tdt \\ &\geq \int_{\sqrt{n\pi + \frac \pi 6 }}^{\sqrt{(n+1)\pi -\frac \pi 6 } } |\sin(t^2)|e^tdt \\ & \geq \frac 12 \int_{\sqrt{n\pi + \frac \pi 6 }}^{\sqrt{(n+1)\pi -\frac \pi 6}}e^tdt \\ & \geq \frac12\exp\left(\sqrt{n\pi + \frac \pi 6} \right)\cdot \left( \sqrt{(n+1)\pi -\frac \pi 6} - \sqrt{n\pi + \frac \pi 6} \right) \\ &= \frac12 \exp\left(\sqrt{n\pi + \frac \pi 6} \right)\cdot \frac {\frac{2}{3}\pi }{ \sqrt{(n+1)\pi -\frac \pi 6} + \sqrt{n\pi + \frac \pi 6} } \end{align} Thus $|F(a_{n+1}) - F(a_n)| \to \infty $ as $n \to \infty$; contradiction.
Why [K : Q] is at most 6
You obtain $K$ by adjoining the three roots $\alpha, \beta, \gamma$ to $\mathbb Q$. When you adjoin $\alpha$, it is a degree $3$ extension (as explained in the proof). Then you adjoin $\beta$, which is a root of the quadratic polynomial $f(x)/(x-\alpha) \in \mathbb{Q}(\alpha)[x]$. So the degree of $\beta$ is at most $2$. Finally, after adjoining $\beta$, you are left with $\gamma$, which is now in the extended field (as it is the root of the linear polynomial $f(x)/((x-\alpha)(x-\beta))\in \mathbb{Q}(\alpha, \beta)[x]$). In general, the argument yields that the splitting field of a degree $n$ polynomial is an extension of degree at most $n!$.
Solution of $x^2 = 2$ in $\mathbb{Q}_p$
Hint: Show that if $y^2\equiv 2\mod p^n$, there is a solution of $z^2\equiv 2\mod p^{n+1}$ with $z\equiv y\mod p^n$ (this technique is called Hensel lift).
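To make the lift concrete, here is a small Python sketch. The prime $p=7$ and the seed $y=3$ (with $3^2\equiv 2\pmod 7$) are my own choices for illustration; the loop is exactly the inductive step of the hint.

```python
p = 7
y = 3            # y^2 = 2 (mod 7), the base case
pk = p           # current modulus p^k
for _ in range(5):
    # Write z = y + t*p^k; then z^2 - 2 = (y^2 - 2) + 2*y*t*p^k (mod p^(k+1)),
    # so we need t = -((y^2 - 2)/p^k) * (2y)^(-1) (mod p).
    t = (-((y * y - 2) // pk) * pow(2 * y, -1, p)) % p
    y += t * pk
    pk *= p
    assert (y * y - 2) % pk == 0   # y is now a square root of 2 mod the new p^k
print(y, "squared is congruent to 2 modulo", pk)
```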
When do a triangle and its Morley triangle have the same centroid
See whether this link helps you: http://jwilson.coe.uga.edu/emt668/EMAT6680.F99/Erbas/emat6690/essay2/essay2.html
Find a sequence in $l^p$ but not in $l^q$, where $q < p$
I will answer a bit more than what the question actually asks for, since the question itself doesn't really give the full picture. A function can be in $L^p$ but not $L^q$ for $q<p$ if it has a long tail which larger powers shrink. For instance, for $r<0$, $x^r 1_{[1,\infty)}(x)$ is in $L^p$ if and only if $pr<-1$, i.e. $p>-1/r$. A function can be in $L^p$ but not $L^q$ for $q>p$ if it has a singularity which larger powers amplify. For instance, if $r<0$ then $x^r 1_{(0,1)}(x)$ is in $L^p$ if and only if $pr>-1$, i.e. $p<-1/r$. Case 1 is only possible on infinite measure spaces; case 2 is only possible on measure spaces which contain sets of arbitrarily small positive measure. The real line has both of these properties. A bounded interval has just the second property. $\mathbb{N}$ with the counting measure (where the $\ell^p$ spaces live) has just the first property.
Properties of Partial Derivatives?
Partial differentials don't have properties like you are wanting, because partial differentials aren't, generally speaking, unique. The $\partial y$ in $\frac{\partial y}{\partial x}$ is a different variable than the $\partial y$ in $\frac{\partial y}{\partial z}$. This is why the partials are never split up - part of their identity is based on the lower part of the fraction. There are some notational options you can use to keep track of which quantity is which, and then you can treat them algebraically as above. One option is to mark which variable is allowed to freely vary. For instance, if I had $z = y^2 + x^3$, then I could mark the partial of $z$ when $y$ was allowed to vary freely as $\partial_y z$. So, $\partial_y z = 2y\,dy$. Or, $\frac{\partial_y z}{dy} = 2y$. If you keep track of which is which in a fashion like this, then normal algebraic rules apply.
Convergence in measure and bounded $L^p$ norm implies convergence in $L^p$
The two conditions stated are not enough to imply convergence in $L^p$. Indeed, if $X=[0,1]$ and $\mu$ is Lebesgue measure, then the sequence of functions $$ f_n=n1_{[0,\frac{1}{n^2}]}$$ converges to zero in measure and is bounded in $L^2$ because $$ \int_X|f_n|^2\;d\mu=n^2\cdot\frac{1}{n^2}=1 $$ for all $n$. But $f_n\not\to 0$ in $L^2$.
Dimension of subspace in the space of polynomials
If we denote the homogeneous part of $f$ of degree $k$ by $f_k$, we see that $$\begin{align} I(R) &:= \int\limits_{x^2+y^2 = R^2} f(x,y)\,ds\\ &= \int\limits_{x^2+y^2=R^2} \sum_{k=0}^N f_k(x,y)\,ds\\ &= \sum_{k=0}^N \int\limits_{x^2+y^2=R^2} f_k(x,y)\,ds\\ &= \sum_{k=0}^N \int\limits_{x^2+y^2=1}f_k(x,y)\,ds\cdot R^{k+1}, \end{align}$$ so the integral of each homogeneous part must vanish. By symmetry, the integral of each monomial $x^\alpha y^\beta$ vanishes when $\alpha$ or $\beta$ is odd, and is strictly positive when $\alpha$ and $\beta$ are both even. So for odd $k$, the integral vanishes for all monomials of degree $k$, and for even $k = 2m$, we have $m$ monomials where the exponent of $x$ and $y$ is odd, so their integral vanishes, and for $\mu = 0,\dotsc,m-1$, we have a homogeneous polynomial $x^{2\mu}y^{2(m-\mu)} - c_\mu\cdot x^{2m}$ of degree $k$ whose integral vanishes, and these $k$ homogeneous polynomials are linearly independent. So overall, we lose one dimension for every even $0 \leqslant k \leqslant N$, whence $$\dim V = \sum_{k=0}^N (k+1) - \left(\left\lfloor \frac{N}{2}\right\rfloor + 1\right) = \frac{(N+1)(N+2)}{2} - \left\lfloor \frac{N+2}{2}\right\rfloor.$$
Eigenspace of $T(A)=A^t$ on $\mathcal M_{n\times n}$
Let us first consider the eigenspace corresponding to $\lambda=1$. This is just the set of all $A$ such that $A^T=A$. For any such element, the diagonal entries can be anything they want, while the off-diagonal entries must be equal when reflected along the main diagonal. Thus, our eigenspace is the span of all matrices of the form $E_{ij}+E_{ji}$ (where $E_{ij}$ is the matrix consisting of all zeros except in the $(i,j)$th position, where there is a $1$). To get a basis, consider those positions $(i,j)$ on or above the main diagonal. Next, the matrices where $A^T=-A$. For diagonal entries, we have $a_{ii}=-a_{ii}$, thus the diagonal is all zeroes. Then for all off-diagonal entries, $a_{ij}=-a_{ji}$. Thus, the eigenspace is the span of all matrices of the form $E_{ij}-E_{ji}$. A basis can be constructed by considering only positions $(i,j)$ strictly above the main diagonal.
Surface integral on unit sphere
Let us make the computation, without seeing the result as in Sabyasachi's answer, and for a general $r$. The elementary area is, with Mathematica notations in spherical coordinates: $$ r^2\sin\phi\, d\phi\, d\theta $$ Here $r$ is a constant, $\theta\in[0,2\pi]$ and $\phi\in[0,\frac\pi 2]$: $$ A = r^2\int_0^{\frac \pi 2}\sin \phi d\phi\int_0^{2\pi} d\theta = 2\pi [-\cos\phi]_0^{\frac \pi 2}r^2 = 2\pi r^2 $$
Evaluate $\iint_s\text{curl}\textbf F\cdot \textbf {n}dS$
Even easier, what is the flux of $\text{curl}\,\mathbf F$ across the filled-in ellipse? But, to answer your question, an ellipse is a stretched circle, so try $x=a\cos t$, $y=b\sin t$ for appropriate $a$ and $b$.
If almost-periodic function is not identically zero, then it is not in L2
Your proof is wrong. An $L^2$ function does not necessarily have limit $0$ at $\infty$. However, what is true is that given $\epsilon > 0$, an almost periodic function $f$ has arbitrarily large "almost periods" $\tau$ such that $|f(t+\tau) - f(t)| < \epsilon$ for all $t \in \mathbb R$. You can use this to show that if $\int_a^b |f(t)|^2\; dt = c > 0$, there are infinitely many disjoint intervals $[a_n, b_n]$ with $\int_{a_n}^{b_n} |f(t)|^2 \; dt > c/2$.
Trigonometric integral
This substitution should work. Let $u=\sin x$ so that $du=\cos x\,dx$. Then your integrand becomes $$\int_0^{\pi/2}\frac{\cos^2x\cos x\,dx}{\sqrt{\sin x}\,(1+\sin^3x)}=\int_0^{\pi/2}\frac{(1-\sin^2x)\cos x\,dx}{\sqrt{\sin x}\,(1+\sin^3x)}=\int_0^1\frac{(1-u^2)\,du}{\sqrt{u}\,(1+u^3)}$$ $$=\int_0^1\frac{(1-u)(1+u)\,du}{\sqrt{u}\,(1+u)(1-u+u^2)}=\int_0^1\frac{(1-u)\,du}{\sqrt{u}\,(1-u+u^2)}$$ I believe at this point you can employ the method of partial fractions to integrate. HINT: Let $u=v^2$ so that $du=2v\,dv$. Then you have $$\int_0^1\frac{(1-u)\,du}{\sqrt{u}\,(1-u+u^2)}=\int_0^1\frac{2(1-v^2)\,dv}{(1-v^2+v^4)}$$ and partial fractions follow. I think at this point the procedure for partial fractions is the same as above.
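A quick numerical cross-check that the substitutions preserve the value of the integral (a scipy sketch):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.cos(x) ** 3 / (np.sqrt(np.sin(x)) * (1 + np.sin(x) ** 3))
g = lambda v: 2 * (1 - v ** 2) / (1 - v ** 2 + v ** 4)

print(quad(f, 0, np.pi / 2)[0])   # both print the same value,
print(quad(g, 0, 1)[0])           # roughly 1.52
```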
If $z$ is in an inner product space $X$, show that $f(x)=\langle x,z \rangle$ defines a bounded linear functional $f$ on $X$.
Let $z\in X$. We define $f_{z}(x)=\langle x,z\rangle$. $f_z$ is linear: $f_{z}(ax+by)=\langle ax+by,z\rangle= a\langle x,z\rangle+b\langle y,z\rangle=af_{z}(x)+bf_{z}(y)$. $f_z$ is bounded: $|f_{z}(x)|=|\langle x,z\rangle|\leq\|x\|\|z\|\Longrightarrow \|f_z\|\leq\|z\|$ (where the first inequality is the Cauchy–Schwarz inequality). Let $T: X\longrightarrow X^*$ with $T(z)=f_{z}$. Observe that saying $T$ is surjective means that every bounded linear functional is of the form $f_{z}$ for some $z\in X$, which is Riesz's representation theorem.
Show that $||(kI-T)^{-1}|| \le \frac{1}{d}$
If $k$, $d$ are as specified, then $$ d\|x\|^2 \le |k\|x\|^2-\langle Tx,x\rangle| \\ d\|x\|^2 \le |\langle (kI-T)x,x\rangle| \le\|(kI-T)x\|\|x\| \\ d\|x\| \le \|(kI-T)x\|. $$ That's enough to give you injectivity of $kI-T$, along with a closed range. Then $$ \mathcal{R}(kI-T) = \mathcal{N}(\overline{k}I-T^{\star})^{\perp} $$ However, the first inequality also gives you $$ d\|x\|^2 \le |\overline{k\|x\|^2-\langle Tx,x\rangle}| = |\overline{k}\|x\|^2-\langle T^{\star}x,x\rangle| \\ d\|x\| \le \|(\overline{k}I-T^{\star})x\|. $$ Therefore $\mathcal{R}(kI-T)=\{0\}^{\perp}=H$. So $kI-T$ is invertible.
Method of Maximum Likelihood for Normal Distribution CDF
The question, as you wrote it, is worded in an unclear way. My interpretation is the following: Suppose you have a sample of $X_i\sim \mathcal{N}(\mu,\sigma)$, which are i.i.d. Find the maximum likelihood estimate for the two parameters, $\mu$ and $\sigma$. So $$\mathcal{L}(\mu,\sigma)=(\frac{1}{\sqrt{2\pi\sigma^2}})^n\cdot \prod_{i=1}^ne^{-\frac{(X_i-\mu)^2}{2\sigma^2}}$$ which is the likelihood function, that we seek to maximize. To that end, we take its logarithm; it will make the calculation easier, while preserving the extrema, being a monotonically increasing function. Thus $$\mathcal{F}=\ln(\mathcal{L}(\mu,\sigma))=-\frac{n}{2}\ln(2\pi)-\frac{n}{2} \ln(\sigma^2)+\sum_{i=1}^n-\frac{(X_i-\mu)^2}{2\sigma^2}$$ Now, you simply differentiate with respect to the two parameters and equate to zero $$\frac{\partial}{\partial\mu}\mathcal{F}=0\implies \mu=\sum_{i=1}^n\frac{X_i}{n}$$ $$\frac{\partial}{\partial\sigma}\mathcal{F}=0\implies \sigma^2=\sum_{i=1}^n\frac{(X_i-\mu)^2}{n}$$ So, the estimated distribution, given $\bar X=\sum_{i=1}^n\frac{X_i}{n}$, is $$X\sim \mathcal{N}\left(\bar X,\sqrt{\sum_{i=1}^n\frac{(X_i-\bar X)^2}{n}}\right)$$ Thus it follows that $$P(X>c)=1-\Phi\left(\frac{c-\bar X}{\sqrt{\sum_{i=1}^n\frac{(X_i-\bar X)^2}{n}}}\right)$$
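A small numerical illustration of these formulas (a numpy/scipy sketch; the data here is simulated, just to show the plug-in steps):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=500)    # hypothetical i.i.d. sample

mu_hat = x.mean()
sigma_hat = np.sqrt(np.mean((x - mu_hat) ** 2))  # MLE divides by n, not n-1

c = 12.0
print(mu_hat, sigma_hat)
print(1 - norm.cdf((c - mu_hat) / sigma_hat))    # estimated P(X > c)
```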
Is there any language that $\bar L^*= (\bar L)^*$?
If we denote the empty string by $\epsilon$, then by definition $\epsilon\in A^*$ for any language $A$. Consequently, $\epsilon\notin\overline{(L^*)}$ and $\epsilon\in\left(\overline{L}\right)^*$ so the left and right sides of your equation can never be equal.
Existence of fixed points with a certain property
Let $$F= \{ u \in [a;b] | f(u)=u \}$$ denote the set of fixed points of $f$. We know that $F$ is a non-empty closed set. For all $\varepsilon >0$ consider the set $$F_{\varepsilon} = \bigcup_{u \in F} B(u, \varepsilon)$$ This is the set of points whose distance from a fixed point is less than $\varepsilon$: this is an open set. Hence its complement is closed (and compact). Then simply define $$\delta = \min_{x \notin F_{\varepsilon}} |f(x)-x|$$ and you are done. Indeed $\delta >0$, since the minimum exists and cannot be zero. WHY DOES THIS WORK: if $|f(x)-x| < \delta$, then clearly $x$ does not satisfy the condition $x \notin F_{\varepsilon}$. In other words $x \in F_{\varepsilon}$, which means that there is a fixed point whose distance from $x$ is less than $\varepsilon$.
The angles of a convex pentagon are in AP. Then the minimum possible value of the smallest angle is?
Say the angles are $a,a+d,a+2d,a+3d,a+4d$ degrees, then we have $5a+10d=540$ or $a+2d=108$. We can thus rewrite the angles as $108-2d,108-d,108,108+d,108+2d$, and since the pentagon is convex we must have $108+2d\le180$ or $d\le36$. The largest value of $d$, 36°, will lead to the minimum smallest angle, which is also 36°.
Boundary points and isolated points of $\Bbb{Q}$
$\Bbb Q$ has no isolated points. But every real number is a boundary point of $\Bbb{Q}$, because $\partial{\Bbb Q}= \overline{\Bbb{Q}}\smallsetminus \text{int}(\Bbb Q)= \Bbb R\smallsetminus \emptyset=\Bbb R$.
Number of Rectangular prisms
Hint: (Rectangular Prism is a cuboid) $lbh=100$ How many possible values are there? Prime factorize $100$ and permute them. For example: $100=5^2 \cdot 2^2$. You can have $5 \times 10 \times 2$, $50 \times 1 \times 2$. You will get more possibilities.
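If it helps to see the count, a brute-force enumeration sketch in Python (it just lists factor triples of $100$; whether orientation matters depends on how the question counts prisms):

```python
divisors = [d for d in range(1, 101) if 100 % d == 0]
ordered = [(l, b, h) for l in divisors for b in divisors for h in divisors
           if l * b * h == 100]
unordered = sorted({tuple(sorted(t)) for t in ordered})
print(len(ordered))    # 36 ordered triples
print(unordered)       # 8 distinct shapes, e.g. (1, 10, 10), (2, 5, 10), (4, 5, 5), ...
```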
About the cases when $N=N(\epsilon)$ is bounded or unbounded
In general, if we have a sequence $(x_{n})$ converging to some $x$ and we take $N(\varepsilon)$ to be the smallest integer such that for all $n\geq N(\varepsilon)$ we have $|x_{n}-x|<\varepsilon$, then $X=\{N(\varepsilon):\varepsilon>0\}$ is bounded if and only if there exists an $N\in\mathbb{N}$ such that $x_{n}=x$ for all $n\geq N$. First suppose that $X$ is a bounded set. Then $X$ is finite, so we can find an $\varepsilon'$ such that $N(\varepsilon')=\max X$. In particular, for all $0<\varepsilon<\varepsilon'$ we have $N(\varepsilon)=\max X$, as $$\max X\geq N(\varepsilon)\geq N(\varepsilon')=\max X.$$ Hence for all $n\geq\max X$ we have $$|x_{n}-x|\leq\lim_{\varepsilon\downarrow0}\varepsilon=0,$$ so $x_{n}=x$. Now suppose that there exists an $N\in\mathbb{N}$ such that for all $n\geq N$ we have that $x_{n}=x$. Then for all $\varepsilon>0$ we have that $N(\varepsilon)\leq N$, so $X$ is bounded.
Fixed Point of a complex dynamical spiral system
You are going (on the parameter $u$ plane) from $u=0.05$ to $u=2.05$ in increments of $0.1$. IMHO you are (on the parameter plane) inside the component where the period-1 point is attracting (regular spiral), and you move toward a parabolic point where period 3 coincides with period 1 (3-arm distorted star). The star is distorted because the $u$ point is not on the internal ray 1/3 of this component. Compare with this animation (1 to 6) and this image (1 to 2). HTH
What are the axioms of the class of lattices?
There is one major thing missing here: the axioms do not require $\cup$ to be the least upper bound. For example, if I take any partial order with a top element (say, $1$), then setting $x \cup y = 1$ for all $x,y$ would satisfy the axiom for $\cup$ you wrote. There is a similar problem with your axiom for $\cap$. That being said, how can we fix this? First of all, we can simplify things. A lattice is fully encoded by either its order or its meet and join operations. We can recover the join and meet from the order as least upper bound and greatest lower bound respectively. The other way around, we can recover the order from the join as $$ a \leq b \quad \Longleftrightarrow \quad a \cup b = b, $$ or from the meet as $$ a \leq b \quad \Longleftrightarrow \quad a \cap b = a, $$ whatever you prefer. So I will give the axiomatisations for both cases.

First, if we only have an order symbol $\leq$:

The first three axioms you mentioned (expressing $\leq$ is a partial order).

$\forall xy \exists z(x \leq z \wedge y \leq z \wedge \forall w(x \leq w \wedge y \leq w \to z \leq w))$, expressing there is a least upper bound for any $x$ and $y$.

$\forall xy \exists z(z \leq x \wedge z \leq y \wedge \forall w(w \leq x \wedge w \leq y \to w \leq z))$, expressing there is a greatest lower bound for any $x$ and $y$.

In the other case we have symbols $\cup$ and $\cap$ for join and meet. Then the following axioms are enough (see also Wikipedia for more information here):

Commutativity: $\forall xy(x \cup y = y \cup x)$ and $\forall xy(x \cap y = y \cap x)$.

Associativity: $\forall xyz((x \cup y) \cup z = x \cup (y \cup z))$ and $\forall xyz((x \cap y) \cap z = x \cap (y \cap z))$.

Absorption: $\forall xy(x \cup (x \cap y) = x)$ and $\forall xy(x \cap (x \cup y) = x)$.
Can you take a fractional root of a number?
Usually (when $x$ is a positive number) $\sqrt[n]{x}$ can be treated as $x^{1/n}$, and this extends to non-integral $n$. So $\sqrt[1.5]x$ can be interpreted as $x^{2/3}$, and if $a\ge0$ we do have $\sqrt[0.5]{a^3}=a^{3×2}=a^6$.
Consider the mapping $f: \mathbb R^ 3 \to \mathbb R ^3$ defined by $f(x,y,z) = (x, y^3, z^5)$
For an easier example, just think about $f: \Bbb R \mapsto \Bbb R$ given by $f(x)=x^{3}$, it also has an inverse, even though $f'(0)$ is zero. What is the inverse and is it differentiable at $0$?
construct function in $C_{c}^{\infty}$
For each $n\in \mathbb N$ we can do the following. There exists a smooth $f$ with support in $[a-1/n,a]$ with $f>0$ on $(a-1/n,a)$ such that $\int_{a-1/n}^a f = 1.$ And there exists a smooth $g$ with support in $[b,b+1/n]$ with $g<0$ on $(b,b+1/n)$ such that $\int_{b}^{b+1/n} g = -1.$ Define $$\phi_n(x) = \int_{-\infty}^x (f(t) + g(t))\,dt.$$ Then $\phi_n$ is smooth, with support in $[a-1/n,b+1/n],$ such that $0<\phi_n<1$ on $(a-1/n,a),$ $\phi_n=1$ on $[a,b],$ and $0<\phi_n<1$ on $(b,b+1/n).$ The functions $\phi_n$ do what you want.
I need help with this definition of a category from SEP
The definition of a category is twofold. If we want to say that something is a category you need to specify the objects your category consists of together with the morphisms between them. So in a way, one can identify a category $\mathbf{C}$ with its "class" of objects $\textbf{Ob}$. However by doing that you are not specifying the morphisms, so people usually write "let $X \in \mathbf{C}$" instead of "let $X \in \textbf{Ob}$" when it is clear which category we are working with. This means that no, $\textbf{Ob}$ is not a subset of $\mathbf{C}$. Object-wise we have that $\textbf{Ob}$ is $\mathbf{C}$, so there is no more stuff outside $\mathbf{Ob}$. Next, a category consists of, for each pair of objects $X$ and $Y$ in $\mathbf{Ob}$, a set $\textbf{Hom}(X,Y)$, so this means that yes, given a morphism $f \colon X \to Y$ in your category, both the target and the source are objects in your category. Your fourth question has a negative answer however. Although it is true that in the first examples of categories that come to mind the objects are "sets plus extra information" (think of the category of sets, groups, abelian groups, topological spaces, etcetera), it is not true in general that an object in an arbitrary category has the underlying structure of a set. Therefore it doesn't make sense to talk about elements of an object at all. Maybe this is better explained with an example. Let $\mathbf{C} = \mathbf{1}$ denote the category whose only object is the symbol $\star$ and the identity morphism $\text{id}_{\star}$ is the only morphism. This is a category, but notice that $\star$ is not a set and hence the (unique) morphism in this category can't be thought of as function mapping elements of $\star$ to itself. $$ \star \to \bullet$$ is another category, where I've omitted the identity arrows for $\star$ and $\bullet$, provided that the morphism $\star \to \bullet$ respects the identity axiom. Again the only morphism that is not the identity is not a function between sets, since $\star$ and $\bullet$ might not be sets to begin with. Finally, I would like to address part of the set-theoretical difficulties that appear inevitably when dealing with categories. We want to be able to work with categories such as $\textbf{Sets}$, whose objects are sets and the set of morphisms $\textbf{Hom}(X,Y)$ for any two sets $X$ and $Y$ are functions between sets. Then, what "thing" is $\textbf{Ob}$ in this category? It can't be a set since Russell's paradox shows the inconsistency of allowing a "set of all sets". One way to get rid of this difficulty is to allow classes, and to work in an axiomatic set theory that has this notion, so that one says "$\textbf{Ob}$ is a class". Another possibility which I personally find cleaner is to fix what is known as a Grothendieck Universe $\mathcal{U}$ and talk about the category of $\mathcal{U}$-sets, $\mathcal{U}$-groups, etcetera. Anyway, this is more a matter of preference, I'd say. Hope this helps!
Is it possible to eliminate a contradiction without recourse to the principle of explosion?
I'm going to try to answer this question although I'm not entirely sure the answer is right. Maybe it will encourage debate, though. I think that the answer is no, you cannot derive this rule without some form of the principle of explosion. My reasoning goes as follows... The premise is a disjunction $p\lor(q\land\neg q)$. I have chosen to arrive at the conclusion by proof by cases, disjunction elimination in other words, which means I have to deal with $q\land\neg q$ somehow. The only other way that I can see to make progress from the premise is to use the rule that distributes disjunction over conjunction. This gives $(p\lor q)\land(p\lor\neg q)$. Now I have two disjunctions to deal with. Taking the rightmost $p\lor q$, I can only proceed by proof by cases again. Assuming $p$, I'm done. Assuming $q$ on the other hand, I must make use of the other side of the conjunction, namely $p\lor\neg q$ and I must then proceed to $q\land(p\lor\neg q)$. Using the rule that conjunction distributes over disjunction I get, unfortunately, $(p\land q)\lor(q\land\neg q)$ and I'm no better off than where I started. My line of reasoning is based on there only really being one way that the derivation can proceed. However it seems exhaustive, so I think there's something in it. It also seems, from an intuitive standpoint, that you can't eliminate $q\land\neg q$ without something like the principle of explosion, although admittedly this intuition is tainted by my findings above. Perhaps some other interpretation would shed more light on this.
Trying to understand discontinuity by examples
You can adjoin $\infty$ to $\mathbb{R}$ and still have a metric space whose structure is compatible with the usual structure. Defining $g(1)=\infty$ turns $g$ into a continuous function from $\mathbb{R}$ to $\mathbb{R}\cup\left\{\infty\right\}$. As for $h$, the limit does not exist.
What is a "right" automorphism?
Automorphisms are special cases of group actions - the automorphism group acts on the underlying set of some first group. In particular this is a left group action. If $A$ is a group and we have a set $X$, recall a left action of $A$ on $X$ is a map $A\times X\to X$ denoted $(a,x)\mapsto ax$ which satisfies the "associativity" relation $a(bx)=(ab)x$ for all $a,b\in A$ and $x\in X$. Similarly then we can define a right group action as a map $X\times A\to X$ with $(xa)b=x(ab)$ for all $a,b,x$. Even though functions are normally written on the left (e.g. $f(x)$) sometimes they can be on the right side instead (e.g. $(x)f$). It is somewhat unusual though. The right automorphism group is then the set of all right functions of $G$ which are automorphisms of $G$. The point of right actions is to keep track of how the actions compose together in a consistent and correct manner; sometimes naturally occurring actions are not left actions. For instance we know that $S_n$ acts on $\{1,\cdots,n\}$ and so it acts on $\{(x_1,\cdots,x_n):x_i\in X\}$ for any set $X$ by permuting the coordinates. If $\sigma$ is to put $x_i$ into the $\sigma(i)$-coordinate though, that means $(\sigma x)_{\sigma(i)}=x_i$, or in other words (via the substitution of $\sigma^{-1}(i)$ for $i$) $\sigma(x_1,\cdots,x_n)=(x_{\sigma^{-1}(1)},\cdots,x_{\sigma^{-1}(n)})$. The fact that $(x_1,\cdots,x_n)\sigma=(x_{\sigma(1)},\cdots,x_{\sigma(n)})$ is a right action, not a left action, often trips many people up, even authors of lecture notes in my experience.
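As a quick illustration (not part of the original answer, just a sanity check), the following Python snippet verifies the right-action law $(x\sigma)\tau = x(\sigma\tau)$ for coordinate permutation of tuples, where $(x\sigma)_i = x_{\sigma(i)}$ and $(\sigma\tau)(i)=\sigma(\tau(i))$:

```python
from itertools import permutations

n = 4

def act(x, s):
    # Right action on tuples: (x . s)_i = x_{s(i)}
    return tuple(x[s[i]] for i in range(n))

def compose(s, t):
    # Group multiplication (s t)(i) = s(t(i)): apply t first, then s
    return tuple(s[t[i]] for i in range(n))

x = ("a", "b", "c", "d")
for s in permutations(range(n)):
    for t in permutations(range(n)):
        assert act(act(x, s), t) == act(x, compose(s, t))  # (x.s).t == x.(st)
print("coordinate permutation is a right action of S_4")
```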
How to find an alternate form of this polynomial (factorize?)
Hint: Cancel out the common factor $t-1$ when you take the limit. Note that $t \neq 1$. Observe that $t = 1$ is a zero of both numerator and denominator, hence the well-known factor theorem tells us that $t-1$ is a factor of both of them, and you can use synthetic division or long division to factorise.
$n^k \equiv n \pmod 5 $ for all $ n\in \mathbb Z $ iff $k \equiv 1\pmod 4 $
Hint: if $n$ is a multiple of $5$, so is $n^k$. If not, try showing that $n^{k-1}\equiv 1$ mod $5$ if $k\equiv 1$ mod $4$. Edit: missed the "iff". To show that it doesn't work if $k\not\equiv 1$ mod $4$, use the if part and $n=2$. E.g. if $k\equiv 3$ mod $4$ then $2^{k-2}\equiv 2$ so $2^k\equiv 8\not\equiv 2$.
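Not part of the original hint, but a small brute-force check of the statement is easy, since $n^k \bmod 5$ depends only on $n \bmod 5$; the two boolean columns printed below agree for every $k$ tested:

```python
def holds_for_all_n(k, m=5):
    # True iff n^k ≡ n (mod m) for every residue class n mod m
    return all(pow(n, k, m) == n % m for n in range(m))

for k in range(1, 21):
    print(k, k % 4 == 1, holds_for_all_n(k))
```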
A question about the composition of negative exponent functions
$f(0)$ is not defined because you cannot calculate $0^\sigma$ as you say. However you can calculate $\lim_{x\to 0^+}f(x)$. Intuitively, as $x$ gets very small $x^\sigma$ will get very large so $b$ will not matter. You will then have $f(x)$ very close to $x$ for small $x$ and the limit will be $0$. If you want, you could define a new function $g(x)$ by $$g(x)=\begin{cases} 0 & x=0\\f(x) & x > 0 \end{cases}$$ Now $g(x)$ is well defined and continuous from the right at $0$. It is the same idea as a removable singularity in a function, where you "fill in a hole" of the definition in such a way that you make the function continuous.
Norm in the space of absolutely convergent Fourier series
Your claim is not true if $\operatorname{Im}(g(x)) < 0$; in that case it grows exponentially. And if $\operatorname{Im}(g(x))\ge 0$ I find $\mathcal{O}(k^2)$: $$\|f\|_{A(\mathbb{T})}= \sum_{n=-\infty}^\infty |c_n(f)|, \qquad \qquad c_n(f) = \int_0^{2\pi}f(x)e^{-i n x}dx$$ Also note that $$c_n(f')=i n c_n(f), \qquad\qquad c_n(f'')=-n^2 c_n(f)$$ With $G_k(x) = e^{ik g(x)}$ we have $G_k'(x) = ik g'(x)G_k(x)$ and $G_k''(x) = ik g''(x)G_k(x)-k^2g'(x)^2 G_k(x)$. Assuming $\operatorname{Im}(g(x))\ge 0$ we have $|G_k(x)|,|g(x)|,|g'(x)|,|g''(x)|$ bounded by a constant $\beta$, so that $|G_k''(x)|<k^2 \beta_2$, and for $n \ne 0$: $$|c_n(G_k)| = \frac{|c_n(G_k'')|}{ n^2} \le \frac{ k^2 \beta_2}{n^2},$$ and hence $$\|G_k\|_{A(\mathbb{T})} = |c_0(G_k)|+\sum_{n\ne 0} |c_n(G_k)| \le 2\pi\beta+2\pi k^2 \beta_2\sum_{n\ne 0} \frac{1}{n^2}= \mathcal{O}(k^2)$$
What is the limiting transition matrix of a general Markov Chain?
I actually found a way to solve my problem. The rows of the limiting transition matrix are linear combinations of the stationary distributions, $\pi_k$, $k=1,...,K$ of the $K$ strongly connected components that correspond to a leaf of the condensation graph (i.e. the absorbing states). The coefficients associated with each $\pi_k$ for each row are given by the matrix $B=NR$, where the entry $(i,j)$ gives the probability of being absorbed in the absorbing state $j$ when starting from transient state $i$. Here, $N=(I-Q)^{-1}$ is the fundamental matrix of the absorbing Markov Chain, $Q$ gives the probability to transition from transient states to transient states and $R$ gives the probability to transition from transient states to absorbing states. They are found by rearranging the transition matrix as $$ P = \left( \begin{array}{cc} Q & R\\ \mathbf{0} & I_r \end{array} \right).$$
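Here is a minimal NumPy sketch of the recipe above on a made-up 4-state chain with two transient and two absorbing states (so each absorbing class is a single state and its stationary distribution is trivial, meaning the rows of $B=NR$ already give the limiting probabilities); the numbers are purely illustrative:

```python
import numpy as np

# States 0,1 transient; states 2,3 absorbing (each a one-state absorbing class).
P = np.array([[0.4, 0.3, 0.2, 0.1],
              [0.2, 0.4, 0.1, 0.3],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

Q = P[:2, :2]                       # transient -> transient
R = P[:2, 2:]                       # transient -> absorbing
N = np.linalg.inv(np.eye(2) - Q)    # fundamental matrix
B = N @ R                           # absorption probabilities

print(B)
print(np.linalg.matrix_power(P, 200)[:2, 2:])  # should agree with B
```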
rationale for book's solution of combinatorics question about scheduling ten speakers with restrictions
If all the people spoke in alphabetical order, there would only be one possible arrangement. Judging by the answer, the question means that we wish to count the number of speaking schedules in which A, B, and C appear in alphabetical order. Method 1: We choose three of the ten speaking slots for A, B, and C. Since they must appear in alphabetical order, there is only one way to arrange them in the three chosen slots. The other seven people can be arranged in the remaining slots in $7!$ ways. Hence, the number of possible speaking schedules in which A, B, and C appear in alphabetical order is $$\binom{10}{3} \cdot 7! = \frac{10!}{3!7!} \cdot 7! = \frac{10!}{3!}$$ Method 2: There are $10!$ ways to schedule the speakers. There are $3! = 6$ orders in which A, B, and C can appear. Only one of these orders is alphabetical. By symmetry, the number of speaking schedules in which A, B, and C appear in alphabetical order is $$\frac{10!}{3!}$$
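If you want to convince yourself of the counting argument, here is a hypothetical brute-force check on a smaller instance (5 speakers, with A, B, and C required to appear in alphabetical order), which should agree with $5!/3! = 20$:

```python
from itertools import permutations
from math import factorial

speakers = ["A", "B", "C", "D", "E"]
count = sum(1 for order in permutations(speakers)
            if order.index("A") < order.index("B") < order.index("C"))
print(count, factorial(5) // factorial(3))  # both should print 20
```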
Uniqueness of a linear map
The theorem you've quoted is the way to go. Just note that $\alpha\ne0$ implies that $\alpha$ is part of a basis of $V$. Indeed, let $v_1=\alpha$ and complete this to a basis $v_1, \ldots, v_n$ of $V$. Let $u_1=\beta$ and $u_2=u_3=\cdots=u_n=0$. Now apply the theorem.
Studying Series (non)convergence
Such a series can diverge (in your terms). Let $$a_n=\begin{cases}2^{\frac{n}2+1},&\text{if }n\text{ is even}\\2^{\frac{n-1}2},&\text{if }n\text{ is odd}\;,\end{cases}$$ so that $$\sum_{n\ge 0}(-1)^na_n=2-1+4-2+8-4+16-8+\ldots\;.$$ Show that $$\lim_{n\to\infty}\sum_{k=0}^n(-1)^ka_k=\infty\;.$$
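A quick numerical look at the partial sums (not in the original answer, just an illustration) shows why the limit is $\infty$:

```python
def a(n):
    # a_n = 2^(n/2 + 1) for even n, 2^((n-1)/2) for odd n
    return 2 ** (n // 2 + 1) if n % 2 == 0 else 2 ** ((n - 1) // 2)

partial = 0
for n in range(20):
    partial += (-1) ** n * a(n)
    print(n, partial)  # 2, 1, 5, 3, 11, 7, ... growing without bound
```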
Ring homomorphisms from $\Bbb Q$ into a ring
If you want to avoid invoking the universality of localization, you can show the result directly (although the demonstration here is pretty much the same argument as in the proof of that property). Let $f:\mathbb{Q} \to A$ be a ring homomorphism. Then $f$ restricts to a map $\mathbb{Q}^\times \to A^\times$, so every nonzero $n\in \mathbb{Z}$ must be invertible in $A$. Conversely, suppose every nonzero $n\in \mathbb{Z}$ lies in $A^\times.$ Then any $f:\mathbb{Q} \to A$ must have $f(n) = f(1) + \cdots + f(1) = n$ and thus $f(p/q) = f(p)/f(q) = p/q\in A$ for all $p, q$ with $q\not = 0$.
Constructing a Continuous Everywhere but Nowhere Differentiable Function
Note that for $k\ge n, 2^k u_n = i\cdot 2^{k-n}$ and $2^k v_n = (i+1)\cdot 2^{k-n}$ are integers, so $g(2^k u_n) = g(2^k v_n) = 0 \quad\forall\ k\ge n$, thus $$\frac{f(u_n) - f(v_n)}{u_n - v_n} = \frac1{u_n-v_n} \sum_{k=0}^\infty \frac{g(2^k u_n) - g(2^k v_n)}{2^k} = \sum_{k=0}^{n-1} \frac{g(2^k u_n) - g(2^k v_n)}{2^k u_n - 2^k v_n}$$ The series are absolutely convergent (because they are bounded by $\sum_{k=0}^\infty \frac1{2^k} = 2$) so they can be combined as is done in the first step.
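To see the combined difference quotients fail to converge numerically, here is a sketch assuming $g$ is the distance-to-the-nearest-integer function (the usual choice in this construction; if your $g$ differs, adjust accordingly). Since $u_n$ and $v_n$ are dyadic, the truncated series is exact in rational arithmetic; at $x=1/3$ the quotients simply oscillate instead of settling down:

```python
from fractions import Fraction
from math import floor

def g(t):
    # Assumed g: distance from t to the nearest integer
    frac = t - floor(t)
    return min(frac, 1 - frac)

def f(x, terms=60):
    # f(x) = sum_k g(2^k x) / 2^k; the tail vanishes once 2^k x is an integer
    return sum(g((2 ** k) * x) / 2 ** k for k in range(terms))

x = Fraction(1, 3)
for n in range(1, 15):
    i = floor(2 ** n * x)
    u, v = Fraction(i, 2 ** n), Fraction(i + 1, 2 ** n)
    print(n, (f(u) - f(v)) / (u - v))  # does not converge as n grows
```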
Recurring Decimals to Fractions: Question on method of conversion
Let's first consider a number between $0$ and $1$ with a recurring decimal expansion. In this answer, let $d_{1}d_{2}d_{3}\cdots d_{n}$ be the decimal expansion, and an overline represents recurring decimal expansion. Let $p = 0.\overline{d_{1}d_{2}d_{3}\cdots d_{n}}$ and $q = d_{1}d_{2}d_{3}\cdots d_{n}$. Then, we can represent this as $$p = \frac{q}{10^{n}} + \frac{q}{10^{2n}} + \frac{q}{10^{3n}} + \cdots$$ $$p = \sum_{i = 1}^{\infty}\frac{q}{10^{in}}.$$ Notice that this is a geometric series, converging to \begin{align*}p &= \frac{\frac{q}{10^{n}}}{1 - \frac{1}{10^{n}}} \\ p &= \frac{\frac{q}{10^{n}}(10^{n})}{10^{n} - 1}\\p &= \frac{q}{10^{n} - 1}.\end{align*} Then, $p$ can just be simplified if it is still not in the lowest terms. In your case, you have a value $a > 1$. This can be expressed as $a = [a] + \{a\}$ where $[a]$ and $\{a\}$ are the integer part and the fractional part, respectively. We just solved the fraction form of $\{a\}$, hence adding $[a]$ to $\{a\}$ is just a matter of simplifying fractions. Does anyone know why subtracting and solving for $x$ converts the repeating decimal to a fraction? To answer your question, this is just a background process of solving for a converging geometric series. The process I gave is the simplified form already.
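A small, hypothetical helper (handling only a purely repeating fractional part, i.e. the repeating block starts right after the decimal point) makes the $q/(10^{n}-1)$ recipe concrete:

```python
from fractions import Fraction

def repeating_to_fraction(integer_part, repeating_block):
    # integer_part . overline(repeating_block)  ->  integer_part + q / (10^n - 1)
    q = int(repeating_block)
    n = len(repeating_block)
    return integer_part + Fraction(q, 10 ** n - 1)

print(repeating_to_fraction(0, "142857"))  # 1/7
print(repeating_to_fraction(2, "3"))       # 7/3  (i.e. 2.333...)
print(repeating_to_fraction(0, "9"))       # 1    (0.999... = 1)
```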
differential inequality of continuous functions
By contradiction, assume there exists some $\epsilon>0$ such that $u(x_n)>\epsilon$ for some sequence $\{x_n\}$ with $x_{n+1}-x_n>\tau$, for some $\tau>0$ we will choose later. $u$ is positive, and by the inequality we really only bound the slope from above, but it can drop with unbounded slope. We are interested, then, in bounding from below the area beneath the graph of $u$ on the intervals $[x_n-\tau,x_n]$. Due to the upper bound on the slope and the fact that $u(x_n)>\epsilon$, we have $u(x_n-\tau)>\epsilon - \tau\epsilon(a+b\epsilon)$. This is the crucial observation. This is true because on the interval $u'$ is bounded by $u(a+bu)$, and if $u(x_n-\tau)$ were less than this value, $u$ couldn't increase over the distance $\tau$ to rise to $\epsilon$ without violating the bound. You can prove this formally by using any ODE comparison theorem with the bound and $u(0)\leq\epsilon - \tau\epsilon(a+b\epsilon)$ and proving $u(\tau)<\epsilon$. We pick $\tau$ such that $\epsilon - \tau\epsilon(a+b\epsilon)=\epsilon/2\implies \tau = \frac{1}{2(a+b\epsilon)}>0$. Now we have that the graph of our function lies above $\epsilon/2$ for a distance of $\tau$ on an infinite sequence of (almost) disjoint intervals, therefore \begin{align} \int_0^\infty u(x)\ dx &\geq \sum_n \int_{x_n-\tau}^{x_n} u(x)\ dx \\ &> \sum_n \int_{x_n-\tau}^{x_n} \epsilon/2 \ dx \\ &= \sum_n \tau\epsilon/2 \\ &= \infty \end{align} which is a contradiction.
Prove $\frac{1}{2} \|y-x\|^2$ is a strongly convex function
The Hessian at any point is the identity matrix, which is of course positive definite. Conclude.
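If you want to see the Hessian computation spelled out, here is a two-dimensional sanity check with SymPy (differentiating in $x$ for a fixed $y$):

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
f = ((y1 - x1)**2 + (y2 - x2)**2) / 2   # (1/2)||y - x||^2 in R^2
print(sp.hessian(f, (x1, x2)))          # Matrix([[1, 0], [0, 1]])
```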
Decreasing sequence of sets: Power of natural numbers
Let $S$ denote the intersection of the $S_n$. Start with a fixed $m$. The set $S_m-S$ is finite and for every element $s\in S_m-S$ some $n_s$ exists with $s\notin S_{n_s}$. Then $S_n=S$ whenever $n\geq m$ and $n\geq n_s$ for every $s\in S_m-S$.
Is this correct? $\sin'(z) = \cos(z),~\cos'(z) = -\sin(z)$
Looks okay (although I am not sure about the Cauchy-Riemann approach). The best way to prove this is by using the exponential definitions of $\sin$ and $\cos$ to verify the derivatives. Just a quick note; for a more rigorous method, you can also prove the result by differentiation from first principles: \begin{equation*} \frac{d}{dx}\sin(x)=\lim_{h\to 0}\frac{\sin(x+h)-\sin(x)}{h} \end{equation*} and apply the angle addition formulae. Proceed in a similar fashion for $\cos$. You may also be interested in the following geometric approach to your derivative problem: http://ocw.mit.edu/courses/mathematics/18-01sc-single-variable-calculus-fall-2010/1.-differentiation/part-a-definition-and-basic-rules/session-8-limits-of-sine-and-cosine/MIT18_01SCF10_Ses8d.pdf
Struggling to understand real projective space
As a lemma, we show that $q$ is an open quotient. It suffices to show that if $B\subset\mathbb R^{n+1}\setminus\{0\}$ is an open ball, then $q(B)$ is open. We know that $q^{-1}\circ q(B)=C$ is the set of all origin-less linear subspaces intersecting nontrivially with $B$. Pick a point $x\in C$. Then $x=\lambda y$ for some $y\in B$ and some $\lambda\neq 0$. Since $B$ is open, so is $\lambda B\subset C$, and $x\in \lambda B$, so $C$ is open. Thus, $q(B)$ is also open, and $q$ is an open map. Now we show that $\mathbb{RP}^n$ is Hausdorff. Let $x,y\in\mathbb{RP}^n$ be distinct. Choose $x'$ and $y'$ as points in $q^{-1}(\{x\})$ and $q^{-1}(\{y\})$ respectively. Let $L_1$ and $L_2$ be the lines through the origin containing $x'$ and $y'$ respectively. We know that neither $x'$ nor $y'$ is the origin, and that $L_1$ and $L_2$ are distinct, intersecting only at the origin. Let $$C_1=\{z\in L_1:|z|\geq |x'|\}$$ and $$C_2=\{z\in L_2:|z|\geq |y'|\}.$$ Then $C_1$ and $L_2$ are disjoint closed subspaces of $\mathbb R^{n+1}$. Since $\mathbb R^{n+1}$ is normal, there exist disjoint open sets $U_1$ and $V_2$ containing $C_1$ and $L_2$ respectively. Likewise there exist disjoint open sets $V_1$ and $U_2$ containing $L_1$ and $C_2$ respectively. Thus, $U=U_1\cap V_1$ and $V=U_2\cap V_2$ are disjoint open sets containing $C_1$ and $C_2$ respectively, with $U$ disjoint from $L_2$ and $V$ disjoint from $L_1$. Also note that neither $U$ nor $V$ contains the origin, so we may think of these sets as being open sets in $\mathbb R^{n+1}\setminus \{0\}$. Therefore, $q(U)$ and $q(V)$ are disjoint open sets containing $x$ and $y$.
Proving that if $a < b$ and $c < 0$, then $bc<ac$
We want to show $bc<ac$, which is equivalent to $ac-bc>0$, i.e. to $c(a-b)>0$. Now we examine the factors. $c<0$ (given) and $a<b \Rightarrow a-b<0$. So we are looking at $\underbrace{c}_{<0}(\underbrace{a-b}_{<0})$. Both factors are negative, and since negative times negative is positive, we have $c(a-b)>0$, that is, $bc<ac$.
If $H \leq G$ and $H \subset Z(G)$, the center of $G$, is $H \trianglelefteq G$?
This can be understood intuitively with group actions. Say $G$ acts on a set $X$: A subset $Y\subseteq X$ is pointwise fixed if $gy=y$ for all $g\in G$. A subset $Y\subseteq X$ is setwise fixed if $gY:=\{gy:y\in Y\}=Y$ for all $g\in G$. The group $G$ acts on itself by conjugation. Then: A subset $H\subseteq G$ is central if $~[G,H]=1 ~~\Leftrightarrow~H\subseteq Z(G)~ \Leftrightarrow~ H$ is pointwise fixed. A subset $H\subseteq G$ is normal if it is setwise fixed. For general group actions, pointwise fixed is much stronger than ($\Rightarrow$) setwise fixed. Therefore we may directly conclude that central ($H\subseteq Z(G)$) implies normal ($H\trianglelefteq G$) in this case.
Find a plane parallel to a given vector and containing two given points
Vector $$ (1,1,-2)\times(2,1,-7)=(-5,3,-1) $$ is perpendicular to both vectors and thus normal to the plane.
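A one-line numerical check of that cross product (purely illustrative):

```python
import numpy as np

print(np.cross([1, 1, -2], [2, 1, -7]))  # [-5  3 -1]
```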
Find a series solution for legendre equation in powers of $(x-1)$ for $(x-1) > 0$
If you make the substitution $t=x-1$, then your series should have $t^n$, not $(t+1)^n$. The coefficient on the sum for $y^{\prime\prime}$ is $$(1-x^2)=(1-x)(1+x) = (1-x)(2-(1-x)) = -2(x-1) -(x-1)^2.$$ So when you multiply the coefficient inside the sum and split it you get $$(1-x^2)y^{\prime\prime}= (1-x^2)\sum = (-2(x-1) -(x-1)^2)\sum $$ $$= \sum -2n(n-1)a_n (x-1)^{n-1}+\sum -n(n-1)a_n(x-1)^{n}.$$ The manipulation keeps all the power series in terms of powers of $(x-1)$. The 3rd hint does the same thing for the $y^{\prime}$ term.
Restriction of domain of bijective function
Yes, the restriction of any injection itself is an injection. For otherwise there would be $x \neq y$, $x,y \in C$ such that $$ f \restriction C (x) = f \restriction C (y). $$ But then $f(x) = f(y)$ -- contradicting the fact that $f$ is injective.
What is the $n^\text{th}$ derivative of $f(x)=\frac{1}{1+x^2}$
$$2f(x)=\frac1{1+ix}+\frac1{1-ix}.$$ Therefore $$2f^{(n)}(x)=\frac{(-i)^nn!}{(1+ix)^{n+1}}+\frac{i^nn!}{(1-ix)^{n+1}}.$$
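Not part of the original answer, but the closed form is easy to spot-check against direct differentiation with SymPy at a few sample points:

```python
import sympy as sp

x = sp.symbols('x')
f = 1 / (1 + x**2)

for n in range(1, 6):
    direct = sp.lambdify(x, sp.diff(f, x, n))
    formula = sp.lambdify(x, (sp.factorial(n) * (-sp.I)**n / (1 + sp.I * x)**(n + 1)
                              + sp.factorial(n) * sp.I**n / (1 - sp.I * x)**(n + 1)) / 2)
    for pt in (0.3, 1.7, -2.5):
        assert abs(direct(pt) - formula(pt)) < 1e-9
print("formula agrees with direct differentiation at the sample points")
```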
Shafarevich's problem 1.7, showing the existence of a rational function.
Perhaps this is a bit late, but here's what I first thought: Suppose $a\not=c,b\not=d$, and characteristic is not $2$. Then let $u(x,y):=\frac{x-a}{2(c-a)}+\frac{y-b}{2(d-b)}$. Then clearly $u(a,b)=0$ and $u(c,d)=\frac12+\frac12=1$.
convex function of n convex function
Let $\alpha\in]0,1[$ and let $(x_1, \ldots, x_n)$ and $(y_1, \ldots, y_n)$ be in $\mathbb{R}^n$. To show convexity of your function, it suffices to show \begin{equation} f\left(h_1(\alpha x_1 + (1-\alpha)y_1), \ldots, h_n(\alpha x_n + (1-\alpha) y_n)\right)\leq \alpha f(h_1(x_1),\ldots,h_n(x_n)) + (1-\alpha)f(h_1(y_1),\ldots,h_n(y_n)). \end{equation} Proof idea: For every $i\in \{1,\ldots,n\}$, since $h_i$ is convex, we know \begin{equation} h_i(\alpha x_i + (1-\alpha) y_i)\leq \alpha h_i(x_i) + (1-\alpha)h_i(y_i). \end{equation} Since we know $f$ is increasing in each component, we use this inequality in every component to find \begin{equation} f\left(h_1(\alpha x_1 + (1-\alpha)y_1), \ldots, h_n(\alpha x_n + (1-\alpha) y_n)\right)\leq f(\alpha h_1(x_1)+ (1-\alpha)h_1(y_1), \ldots, \alpha h_n(x_n) + (1-\alpha)h_n(y_n)). \end{equation} Notice that the righthand side above is just $f$ evaluated at $\alpha (h_i(x_i))_{1\leq i\leq n} + (1-\alpha) (h_i(y_i))_{1\leq i\leq n}.$ Using convexity of $f$ in $\mathbb{R}^n$ will yield the desired inequality :)
Are open sets of a product space necessarily given by the Cartesian product of two open sets?
An open set $W$ in the product topology is in the basis $\mathcal B$ if and only if: For all $(x_1,y_1),(x_2,y_2)\in W$, we have that $(x_1,y_2),(x_2,y_1)\in W$. But it is not hard to find, in most cases[*], that there is some pair $W_1=U_1\times V_1$ and $W_2=U_2\times V_2$ such that $W_1\cup W_2$ does not have this property. [*] I say "in most cases," because in a few cases, like $|Y|=1$, this is not true.
Double Integral Weird Change of order of integration
Hint: You can't solve these types of integrals without graphs. Think about its area and find that $$\int_1^e\int_{\frac12\ln y}^{\ln y}x\ln y \, dx\, dy+\int_e^{e^2}\int_{\frac12\ln y}^{1}x\ln y \, dx\, dy$$ Edit: Look at the graph again; as you see, in the strip $0\leqslant x\leqslant1$ the area is between the two curves $y=e^x$ and $y=e^{2x}$. This view goes from the $x$-restriction to the $y$-restriction. The other view is from the $y$-restriction to the $x$-restriction; in this case the area isn't uniform, so you should look at it in two portions, $1\leqslant y\leqslant e$ and $e\leqslant y\leqslant e^2$. With $1\leqslant y\leqslant e$ (blue color), the area lies between $\dfrac12\ln y\leqslant x\leqslant \ln y$, while in $e\leqslant y\leqslant e^2$ (orange color), the area is between $\dfrac12\ln y\leqslant x\leqslant 1$.
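If you just want to confirm numerically that the two orders of integration agree, here is a sketch (assuming, as the answer's description of the region suggests, that the original integral is $\int_0^1\int_{e^x}^{e^{2x}} x\ln y \,dy\,dx$):

```python
import numpy as np
from scipy.integrate import dblquad

# dblquad(func, a, b, gfun, hfun) integrates func(inner, outer) with the
# inner variable's limits given by gfun/hfun of the outer variable.

# Original order: outer x in [0, 1], inner y in [e^x, e^{2x}]
I_xy, _ = dblquad(lambda y, x: x * np.log(y), 0, 1,
                  lambda x: np.exp(x), lambda x: np.exp(2 * x))

# Reversed order, split at y = e as in the answer
I_1, _ = dblquad(lambda x, y: x * np.log(y), 1, np.e,
                 lambda y: 0.5 * np.log(y), lambda y: np.log(y))
I_2, _ = dblquad(lambda x, y: x * np.log(y), np.e, np.e ** 2,
                 lambda y: 0.5 * np.log(y), lambda y: 1.0)

print(I_xy, I_1 + I_2)  # the two numbers should agree
```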
What is the type of "A implies B"?
Short answer: It is a definition. Long answer: The definition of the conditional in logic is simply a choice, in the same way that we choose that multiplication has higher precedence over addition in arithmetic. Sure, there are reasons for the choice, but it is silly to expect there to be some famous reference from which it comes, especially when the reasons for the choice are easily arrived at independently by every logician. Specifically, in natural language we have conditional expressions of the form "If A then B.". Now this has many possible meanings and subtle nuances, such as when used with the subjunctive or imperative. It is completely reasonable to want to choose one specific meaning for the logical form "$A \to B$" just so that it is tidy. Since logic was meant to be for reasoning about facts, the obvious choice is the meaning when both A and B are declarative sentences. That meaning considers the natural language sentence to be valid exactly according to a certain truth table, and we simply adopt that same truth table for the logical form. By doing so we of course end up with a logical connective that does not capture any other meaning of the natural language conditional except that which we chose.
Uniqueness set for holomorphic function in $\mathbb{C}^n$
$f(z_1,z_2,...,z_n)=(z_1,z_1,...,z_1)$ is a counterexample.
Does there exist a function $f$ from $R^2$ to $R$ and constant $c_0>0$ such that for all $x,y\in R^2$, $|f(x)-f(y)| \ge c_0|x-y|^2$?
In this answer, it is proved that the Hilbert space-filling curve is Hölder continuous with exponent $\frac{1}{2}$. Although the original curve is a mapping $[0,1]\to[0,1]^2$, it is easy to extend this function to a surjection $g : \mathbb{R} \to \mathbb{R}^2$ which is also Hölder continuous: there exists $C > 0$ such that $$ \| g(t) - g(s) \| \leq C |t - s|^{1/2}. $$ Since $g$ is surjective, we can find an injective function $f : \mathbb{R}^2 \to \mathbb{R}$ such that $g\circ f = \operatorname{id}_{\mathbb{R}^2} $. Then this function satisfies $$ \forall x, y \in \mathbb{R}^2 \ : \quad \| x - y \|^2 \leq C^2 |f(x) - f(y)|. $$ So the condition is satisfied with $c_0 = C^{-2} > 0$.
Let $z_0 = \left(a,b\right)$ find $z$ such that $z^2 = z_0$ where $z_0 , z$ are complex numbers.
If $z_0=a+ib$ and $z=x+iy$ with $z^2=z_0$ then $x^2-y^2+2ixy=a+ib$ hence $x^2-y^2=a$ and $2xy=b$. Your turn !
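A tiny numerical illustration (with a made-up example $z_0 = 3+4i$) of the system $x^2-y^2=a$, $2xy=b$:

```python
import cmath

a, b = 3.0, 4.0                    # example: z0 = 3 + 4i
z = cmath.sqrt(complex(a, b))      # one of the two square roots (the other is -z)
x, y = z.real, z.imag
print(x, y)                        # 2.0 1.0
print(x**2 - y**2, 2 * x * y)      # recovers a = 3 and b = 4
```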
Rewriting a second order ODE as a first order ODE
The original ODE can also be written as $\left(\frac{y'-F(x)}{x^2 \rho(x)} \right)' + \frac{4 \pi G \rho(x)}{x^2 P(x)}\, y = 0$, and setting $$ \cases{y_1 = y\\ y_2=\frac{y_1'-F(x)}{x^2 \rho(x)} } $$ it can be represented as $$ \cases{ y'_1 = x^2\rho(x) y_2+F(x)\\ y'_2 = -\frac{4\pi G \rho(x)}{x^2 P(x)}y_1 }. $$
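A minimal sketch of integrating the resulting first-order system with SciPy, using made-up placeholder functions for $\rho$, $P$, $F$ and the constant $G$ (these are not from the original problem) and starting away from the $x=0$ singularity:

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 1.0                               # placeholder constant
rho = lambda x: np.exp(-x)            # placeholder density
P = lambda x: 1.0 + x**2              # placeholder pressure
F = lambda x: 0.0                     # placeholder forcing term

def rhs(x, y):
    y1, y2 = y
    return [x**2 * rho(x) * y2 + F(x),
            -4.0 * np.pi * G * rho(x) / (x**2 * P(x)) * y1]

sol = solve_ivp(rhs, (1.0, 5.0), [1.0, 0.0], rtol=1e-8)
print(sol.y[0, -1], sol.y[1, -1])     # y and (y' - F)/(x^2 rho) at x = 5
```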
Problem about a process with bins of balls
I believe what you are discussing is known as Bulgarian solitaire. This is a theorem of Jorgen Brandt, i.e., that the game ends as you have described when the number of balls (or cards) is a triangular number, i.e., of the form $n(n+1)/2$. Here is a nice source to read over: (paywall) Solution of the Bulgarian Solitaire Conjecture Kiyoshi Igusa Mathematics Magazine Vol. 58, No. 5 (Nov., 1985), pp. 259-271 Published by: Mathematical Association of America Article Stable URL: http://www.jstor.org/stable/2690174 Note this is also the subject of a problem in the June/July 2013 AMM: Problem 11712.
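For experimentation, here is a small Python simulation of the game; with $15 = 1+2+3+4+5$ balls (a triangular number), every starting configuration should end up at the fixed configuration $(5,4,3,2,1)$, in line with Brandt's theorem:

```python
def step(piles):
    # One move: remove one ball from every pile and form a new pile from them
    new = [p - 1 for p in piles if p > 1]
    new.append(len(piles))
    return tuple(sorted(new, reverse=True))

def final_cycle_state(piles, max_steps=100_000):
    piles = tuple(sorted(piles, reverse=True))
    seen = set()
    for _ in range(max_steps):
        if piles in seen:
            return piles          # first repeated state; a fixed point repeats itself
        seen.add(piles)
        piles = step(piles)
    raise RuntimeError("no cycle found within max_steps")

print(final_cycle_state((15,)))       # (5, 4, 3, 2, 1)
print(final_cycle_state((7, 5, 3)))   # (5, 4, 3, 2, 1)
```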
Confusion in MAP estimation
The dependence structure of $a,n$ and $r$ is important! Typically in such "inverse problem" scenarios you assume that $n$ is independent of $a$ and therefore $$ p_{r|a} = \mathcal{N}(A, \sigma_n^2), $$ but since $r$ and $n$ are not independent (knowing $r$ gives some information on $n$), you can not conclude $$ p_{a|r} = \mathcal{N}(R, \sigma_n^2). $$ (I tried to adopt your notation, which is not very intuitive to me). So, under the above assumption, Option 1 is not correct, while Option 2 is (for that prior). As far as I know, any "stretched out" prior will have the property you describe in question 2 as long as it is strictly positive everywhere.
Upper bound on distance between trajectories that share same initial position and velocity
I'm not sure what you're hoping for here. But without further hypotheses, there is no universal upper bound, since $\gamma_1$ and $\gamma_2$ could have arbitrarily large speeds and head off in different directions. But if you assume that $|\gamma_1'(t)|$ and $|\gamma_2'(t)|$ are both bounded above by a constant $c$, then an easy computation shows that $d(\gamma_1(t),\gamma_2(t))\le 2 c |t|$. This is sharp, because it's possible to construct constant-speed curves in $\mathbb R^2$ that start at the origin with the same initial velocity, and then very quickly veer sharply in opposite directions, so that the distance is as close as you want to $2c|t|$. One other thing that can be said is that for any given $\gamma_1,\gamma_2$, there's a constant $C$ such that $d(\gamma_1(t),\gamma_2(t))\le C t^2$ for small $t$ (this is essentially just Taylor's theorem in local coordinates); but there's no universal upper bound on $C$ because it would contradict the argument in the previous paragraph that you can get as close as you want to $2c|t|$.
What is $\lim_{x \to 0}\frac{\sin(\frac 1x)}{\sin (\frac 1 x)}$ ? Does it exist?
I quote Walter Rudin's Principles of Mathematical Analysis for the definition of the limit of a function: Let $X$ and $Y$ be metric spaces; suppose $E\subset X$, $f$ maps $E$ into $Y$, and $p$ is a limit point of $E$. We write $\lim_{x\to p}f(x)=q$ if there is a point $q\in Y$ with the following property: For any $\epsilon>0$, there exists a $\delta>0$ such that $d_Y(f(x),q)<\epsilon$ for all points $x\in E$ such that $0<d_X(x,p)<\delta$. The symbols $d_X, d_Y$ refer to the distances in $X$ and $Y$, respectively. In our case, $X=Y=\mathbb R$ with the metric $d(x,y)=|x-y|$. The function $f(x)=\frac{\sin \frac1x}{\sin \frac1x}$ maps the set $$ E=\mathbb R\setminus (\{\tfrac1{k\pi}:k\in \mathbb Z\setminus \{0\}\}\cup \{0\}) $$ into $\mathbb R$, and $0$ is a limit point of this set. We would conclude $\lim_{x\to 0}f(x)=1$ if for all $\epsilon>0$, we could find a $\delta>0$ so whenever $x\in E$ and $0<|x|<\delta$, then $|f(x)-1|<\epsilon$. But any $\delta$ suffices, since $f(x)=1$ for all $x\in E$. Therefore, we do conclude that $\lim_{x\to 0}f(x)=1$.