How to evaluate $\lim_{x\rightarrow 0^-} \sec^{-1}(1+x)$?
Note that $$\sec^{-1}y$$ is defined for $$y\in(-\infty,-1]\cup[1,+\infty) $$ thus the limit for $x\to0^-$, that is, for $(1+x)\to 1^-$, is not defined. By continuity, $$\lim_{x\rightarrow 0^+} \sec^{-1}(1+x)=\sec^{-1}1=0$$
Let $f:A\to B$ and $g:B\to C$. Suppose $g\circ f$ is a bijection. Then $f$ is injective and $g$ is surjective onto $C$.
You're almost there. As you say there is $x$ in $A$ with $[g \circ f](x)=y.$ Now use that $[g \circ f](x)=g(f(x))$ and give $f(x)$ some name like $f(x)=k.$ This $k$ then lies in $B$ and $g(k)=y$ so that $g$ is onto.
How should we think about the sections of a sheaf on a scheme as functions?
Given a ring $A$, the structure sheaf $\mathcal O_X$ of the affine scheme $X=\operatorname {Spec}A$ is indeed the sheaf associated to a very natural presheaf, namely the presheaf of rings $\mathcal O'_X$ defined as follows: for an open subset $U\subset X$, the ring $\mathcal O'_X(U)$ is the ring of fractions $$\mathcal O'_X(U)=S(U)^{-1}A$$ where the multiplicative set $S(U)$ is $$S(U)=\{f\in A\mid\forall \mathfrak p\in U,\; f\notin \mathfrak p\}$$ You can find an example proving that in general $\mathcal O'_X(U)\neq \mathcal O_X(U)$ here. Note that for $U$ of the form $U=D(f) \; (f\in A)$ we do have $\mathcal O'_X(U)= \mathcal O_X(U)$: Hartshorne, Proposition 2.2(b), page 71 in Chapter II. Edit: and what about differential manifolds? It is an extremely amazing and underappreciated fact that for an open subset $U\subset X$ of a differential manifold $X$ we do have $$C^\infty(U)=\left\{\frac {g\vert_U}{f\vert_U}: f,g\in C^\infty(X)\text{ and }\forall x\in U,\; f(x)\neq0 \right\}$$ So if manifolds are seen as locally ringed spaces (as they should be!) we do have in that category $\mathcal O_{X,\operatorname {diff}}^{'}= \mathcal O_{X,\operatorname {diff}}$ ! One of the very rare references on this result is Nestruev, Proposition 10.7, page 145.
Using Correspondence Theorem for Rings
$q(\mathcal{J})$ is not contained in any maximal ideal. You may be confused because every proper ideal is contained in a maximal ideal. The point of course is that $q(\mathcal{J})$ is not a proper ideal.
Geometric proof of the Brahmagupta–Fibonacci identity
One geometric proof is shown here:
Fully truth-functional version of modal logic?
I think you're asking whether or not, or perhaps claiming that, modal logic can be reduced to truth-functional, many-valued logic. If this is the question, then the answer is no. Some explanations below.

It is important to make a distinction between the formalism and the semantics of a modal logic (system). A modal logic (system) is formally defined, for example Hilbert style, by its syntax, axioms and rules. Thereafter, a formula is considered a theorem iff it can be formally proved from the above. This uniquely defines the set of formulas that are theorems of that system. (And this set is also considered to be the modal logic, i.e. the system is identified with the set of all its theorems.) So whether or not a formula is a theorem is only a matter of formal derivation. It has nothing to do with the semantics or any interpretation of the formula. Also: if there is an algorithm that allows you to decide, given any arbitrary formula, whether or not it is a theorem, then that modal logic is called decidable. NOTE: it has been proven that there are modal logics that are not decidable.

A semantics is a/any different method (finitary or not) to associate a truth value to each formula. You can in principle invent any semantics S that you want, including one based on multi-valued truth tables. But then you run into the following questions: Is your modal logic sound with respect to S? i.e. is every theorem S-valid? Is your modal logic complete with respect to S? i.e. is every S-valid formula a theorem? Having both would mean that a formula is a theorem iff it is S-valid.

Now, your truth-table semantics for modal logics being finitary means validity would be decidable. Therefore, in such a semantics you cannot have soundness and completeness for all modal logics, because it would imply that all modal logics are decidable. Which they are certainly not.

The usual semantics for normal modal logics is the Kripke semantics. In this semantics all modal logics are sound, but there are many that are incomplete. There is also the general frame semantics, under which all modal logics are sound and complete. But neither provides a finitary algorithm for evaluating validity, such as to make all modal logics decidable.

Of course, you are still free to define your truth-table based semantics for modal logic. The only question is: how useful is it (what would you do with it)? As illustrated, you would not be able to use it to decide theoremhood in all cases. There are certainly some (particular) modal logics that are decidable and can be "reduced" to truth-based logics. For example the ultra-simple Triv and Ver systems, in a quite straightforward way. (I once saw an attempt to do this for the system S5; it worked for some formulas, not sure if for all.)

For your example case study, I doubt that you can achieve soundness and completeness for provability logic in this way. This is because it has an axiom of modal degree 2, and not a particularly simple one. But it's certainly not enough to postulate some (multi-valued) truth tables. You also need to mathematically examine soundness and completeness with respect to them.
Find units in $\mathbb{Z}[(\sqrt{13}+1)/2]$
You were on the right track, but must continue. Having derived the equation $ba'+(a+b)b'=0$ (note the rearrangement to get terms with the same primed variable together), now equate the rational parts to get a second relation $(2a+b)a'+(a+7b)b'=2$. You now consider these two equations as a linear system, with $a'$ and $b'$ your "unknowns" for which you solve in terms of $a$ and $b$. By standard methods for solving linear systems obtain the results $$a'=\dfrac{a+b}{a^2+ab-3b^2}\qquad b'=\dfrac{-b}{a^2+ab-3b^2}$$ For a unit you must have $a'$ and $b'$ both integers. From the above solutions it follows that the common denominator $a^2+ab-3b^2$ must be a common factor of $a+b$ and $-b$, therefore also a common factor of $a=(a+b)-b$ and $b$. Now suppose that $f$ is any such common factor. Then $f^2\mid(a^2+ab-3b^2)$ because each term in $a^2+ab-3b^2$ is of degree $2$. Then $a+b$ and $b$ must both be multiples of $f^2$, and thus $a,b$ must be multiples of $f^2$. Then $f^4=(f^2)^2$ must also divide both $a$ and $b$, $f^8$ must also be a common factor, and there will be a contradiction (the intended divisor grows absolutely larger than $a$ and $b$) unless the sequence $f,f^2,f^4,f^8,\ldots,f^{2^{n-1}},\ldots$ is bounded. This means $a,b$ can have no common factors other than $\pm1$, forcing $a^2+ab-3b^2$ to also be one of these values.
Is it worth playing this game of dice?
Let $Z$ be the random variable corresponding to the product of the 3 (independent, fair) dice, when you roll them. Then $Z=U_1U_2U_3$, where the $U_i$ are iid random variables, uniform on $\{1,\dots,6\}$. In one experiment, when you play, the expected gain is $$ \mathbb{E} Z-42 = \mathbb{E}[U_1U_2U_3] - 42 = \mathbb{E}[U_1]\mathbb{E}[U_2]\mathbb{E}[U_3] - 42 = \left(\mathbb{E}[U_1]\right)^3 - 42 = \left(\frac{7}{2}\right)^3 - 42 = \frac{7}{8} $$ where we used independence of the $U_i$. Repeating it 10 times multiplies this expected gain by 10; the expected gain is therefore $g = 10\cdot\left(\mathbb{E} Z-42\right) = \frac{35}{4}$. It is worth playing, thus, since $g \geq 0$.
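As a sanity check, a minimal Monte Carlo sketch in plain Python (the stake of 42 comes from the problem; the number of trials is an arbitrary choice):

import random

def one_game():
    # gain of one play: product of three fair dice minus the stake of 42
    return random.randint(1, 6) * random.randint(1, 6) * random.randint(1, 6) - 42

trials = 10**6
print(sum(one_game() for _ in range(trials)) / trials)  # close to (7/2)**3 - 42 = 0.875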
Finding the root, domain, and limit to infinity of $f(x) = xe^{-x}$
I'm going to assume you mean your question to be the following: Find the root and domain of: $$f(x) = xe^{-x}$$ Also, find $\lim_{x\to\infty}f(x)$. First, we find the root by setting $f(x) = 0$: $$xe^{-x} = 0$$ $$\frac{xe^{-x}}{e^{-x}} = \frac{0}{e^{-x}}$$ $$x = 0$$ This is the only root. (We are justified in dividing by $e^{-x}$ because $e^{-x} \ne 0$ for all $x\in\mathbb{R}$.) The domain is $\mathbb{R}$: both $x$ and $e^{-x}$ are defined for every real $x$, so there is no point where $f$ fails to be defined. To find the limit: $$\lim_{x\to\infty}xe^{-x} = \lim_{x\to\infty}\frac{x}{e^x}$$ Here, we can "eyeball" this and note that $e^x$ beats the (insert favorite word (that fits) here) out of $x$, in terms of how fast it grows: exponential growth dominates polynomial growth. So, as $x$ gets bigger and bigger, the denominator will get bigger much faster than the numerator. Thus the limit is $0$: $$\lim_{x\to\infty}\frac{x}{e^x} = 0$$
Maps preserving roots of a polynomial function over finite fields
Let $F$ be a finite field. Then $f\colon x\mapsto \prod_{a\in F\setminus\{0\}}(x-a)$ is a polynomial function that is nonzero iff $x=0$. Therefore, for any subset $S\subseteq F^n$, $$ \sum_{(a_1,\ldots,a_n)\notin S}\prod_{i=1}^nf(x_i-a_i)$$ is a polynomial function $F^n\to F$ which is zero precisely for the points in $S$.
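A quick sketch checking the key property of $f$ over $\mathbb F_5$ (plain Python; the prime $5$ is an arbitrary choice):

p = 5

def f(x):
    # f(x) = product over nonzero a of (x - a), computed mod p
    r = 1
    for a in range(1, p):
        r = r * (x - a) % p
    return r

print([f(x) for x in range(p)])  # [4, 0, 0, 0, 0]: nonzero exactly at x = 0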
Bounded monotonic sequences
It is correct that bounded, monotonic sequences converge. Conversely, convergent sequences are bounded. They are not necessarily monotonic (like your first example). Sequences which are merely monotonic (like your second example) or merely bounded need not converge.
Simple Logistic Regression - how do I use real data?
1) It seems like there is some confusion between the "real" output of a logistic regression and the use of this output for a classification task. The idea behind the model is to estimate the conditional probability of some event of interest (parent or not, in your case) given some independent variables (age), namely $$ \hat{P}(y_i=1|x=x_i)=\frac{e^{x_i'\hat{\beta}}}{1+e^{x_i'\hat{\beta}}}, $$ is the estimated probability that the $i$th subject is a parent $\{y_i=1\}$, given his age $x_i$ and the estimated coefficients $\hat{\beta}$. As such, for every individual you get a probability $p_i\in(0,1)$, so in order to use these values for classification you have to set some cut-off $c$, such that $$ \mathcal{I}\{p_i>c\}. $$ That is, if $p_i>c$ then give the label "parent" to this individual. Thus, the $\log(odds)$ in your table are computed after the dichotomization (instead of before).

2) Mostly they are the maximum likelihood estimators. Because the logistic model is a non-linear model, the optimization is done numerically. The likelihood is $$ \mathcal{L}(\beta) = \prod_{i=1}^np(x_i)^{y_i}(1-p(x_i))^{1-y_i} $$ or equivalently $$ \ell(\beta) = \log\left(\prod_{i=1}^np(x_i)^{y_i}(1-p(x_i))^{1-y_i}\right), $$ after some algebra the maximization problem becomes $$ \ell(\beta)=-\sum_{i=1}^n\log(1+e^{x_i'\beta})+\sum_{i=1}^ny_i(x_i'\beta), $$ which has to be solved numerically (e.g., by the Newton-Raphson method).

Example for 1): Let's take your example. Assume that you have estimated the coefficients $\beta$ (for now it doesn't matter how it works). For the sake of simplicity assume that $(\hat{\beta_0}, \hat{\beta_1})=(1,0.01)$. Now, you want to classify each subject using the following logistic model: $$ \hat{P}(y_i=1|age)=\frac{e^{\beta_0+\beta_1 age}}{1+e^{\beta_0+\beta_1 age}}. $$ Take the first subject. His/her age is $15$, so by plugging it in the model we get $$ \hat{P}(y_i=1|age=15)=\frac{e^{1+0.01\cdot15}}{1+e^{1+0.01\cdot15}}\approx 0.76, $$ which means that his/her probability of being a parent is $0.76$, and the $\log(odds)$ are $$ \log\left(0.76/(1-0.76)\right)=1+0.01\cdot 15. $$ Now, you are interested in using this model in order to perform classification rather than compute probabilities. As such, you need to set a rule that converts probabilities into classifications. The default rule (which, in a sense, assumes no prior knowledge) is: if $p(y_i|age_i) >1/2$ then give the label "parent" to subject $i$. Namely, the first one, with $p_i=0.76$, is classified as a parent. If you compute the log of the odds of the classified value $\{1,0\}$ you'll get either $\infty$ for "parent" or $-\infty$ for non-parent.
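To make the numerical maximization in 2) concrete, here is a minimal Newton-Raphson sketch with NumPy; the ages and labels are made-up toy data, not the table from the question:

import numpy as np

age = np.array([15., 22., 30., 35., 40., 48., 52., 60.])  # made-up ages
y = np.array([0., 0., 1., 0., 1., 1., 0., 1.])             # made-up parent labels
X = np.column_stack([np.ones_like(age), age])              # intercept column + age

beta = np.zeros(2)
for _ in range(25):                        # Newton-Raphson iterations
    p = 1 / (1 + np.exp(-X @ beta))        # fitted probabilities
    grad = X.T @ (y - p)                   # gradient of the log-likelihood
    hess = -(X.T * (p * (1 - p))) @ X      # Hessian of the log-likelihood
    beta -= np.linalg.solve(hess, grad)    # Newton step

print(beta)  # estimated (beta_0, beta_1)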
How to properly find supremum of a function $f(x,y,z)$ on a cube $[0,1]^3$?
You want to find the maximum inside the unit cube of $$f(x, y, z) = \frac{xyz(1-xy)(1-yz)(1-zx)}{(1-xyz)^3}$$ Now, suppose this maximum is when WLOG $y > z$. Then we have $$f(x, y, z) = \frac{xyz(1-yz)(1-x(y+z)+x^2yz)}{(1-xyz)^3} < f(x, \sqrt{yz}, \sqrt{yz} )$$ as $y+z > 2\sqrt{yz}$. Thus we must have $x=y=z$ for the maximum. However, $$f(t, t, t) = \frac{t^3(1-t^2)^3}{(1-t^3)^3}=\frac{t^3(1+t)^3}{(1+t+t^2)^3}< \frac8{27}$$ as $27t^3(1+t)^3 < 8(1+t+t^2)^3 \iff 3t(1+t)< 2(1+t+t^2) \iff (1-t)(t+2) > 0$. Finally we note that as $t \to 1$, $f(t, t, t)$ gets arbitrarily close to $ \frac8{27}$ so this is the supremum.
Partition $Z_2^n \setminus \{0\}$ into disjoint sets. Does one contain a maximal subspace?
This is not true. For $n=4$ let $$A=\{(0,1,0,0), (0,0,1,0), (0,0,0,1), (1,1,0,0), (1,1,1,0), (1,1,0,1), (1,0,1,1), (0,1,1,1)\}$$ $$B=\{(1,0,0,0), (1,0,1,0), (1,0,0,1), (0,1,1,0), (0,1,0,1), (0,0,1,1), (1,1,1,1) \}$$ $B \cup \{0\}$ is not a subspace, and its size (with the zero vector) is exactly $2^3$, so the only candidate for a subspace of dimension $4-1=3$ inside it would be $B\cup\{0\}$ itself; hence it contains none. Notice that if you remove one vector from $A$, then the remaining vectors together with the zero vector do not form a subspace, so again $A\cup \{0\}$ doesn't contain a $3$-dimensional subspace.
Why is the canonical module of a local Gorenstein ring $R$ of dimension 0 isomorphic to the injective hull of the residue field?
Over a CM local ring, $\Omega$ is a canonical module if $$\text{Ext}_{R}^{n}(k,\Omega)\simeq \begin{cases} k &\mbox{ if } n=\text{dim}\,R;\\ 0 &\mbox{ otherwise } \end{cases} $$ Now, if $\text{dim}\,R=0$ and $\Omega$ is canonical, then $\Omega$ is injective by the definition and the fact that there are no other prime ideals. But then $\Omega\simeq E(k)^{(n)}$ for $n=\text{dim Hom}_{R}(k,\Omega)$, and this is just equal to $1$ by definition, so $\Omega\simeq E(k)$. And if $R$ is Gorenstein then $R\simeq \Omega$, which is then isomorphic to $E(k)$ if $\text{dim}\,R=0$. Edit: Over any commutative noetherian ring $R$ and for any $R$-module $M$, it is known that $$E(M)\simeq\bigoplus_{\mathfrak{p}\in\text{Spec}R}E(R/\mathfrak{p})^{(\mu_{\mathfrak{p}})}$$ where $\mu_{\mathfrak{p}}=\text{dim}_{R_{\mathfrak{p}}}\text{Hom}_{R}(R/\mathfrak{p},M)_{\mathfrak{p}}$. In our case (when $R$ is local of dimension zero), we know $\Omega$ is injective so is its own injective hull, and $\mu_{\mathfrak{m}}= \text{dim Hom}_{R}(k,\Omega)=1 $ as $R_{\mathfrak{m}}=R$.
How to prove this simple fact without using distribution theory?
$\int dk\, |F(k)|^2= \int dk \int dx' \int dx'' f(x')f^*(x'')e^{-ik(x'-x'')}$ because $|F(k)|^2=F(k)F^*(k)$. Integrating over $k$ yields: $\int dk\, |F(k)|^2 = \int dx' \int dx'' f(x')f^*(x'') \frac{[e^{-ik(x'-x'')}]_{- \infty}^\infty}{i(x''-x')}$. Substitution: $z=x''-x'$. Now the tools from complex analysis can be used: the integral over the whole real axis can be extended to a contour integral over the upper half-circle and the real axis (the path is $C$). The half-circle integral vanishes because $z^{-1} \rightarrow 0$ as $z \rightarrow \infty$. Therefore one is left with the following contour integral: $\int dx' \int dx'' f(x')f^*(x'') \frac{[e^{-ik(x'-x'')}]_{- \infty}^\infty}{i(x''-x')} = \int dx' f(x') \oint_C dz\, f^*(x'+z) \frac{[e^{ikz}]_{- \infty}^\infty}{iz}$. Using the residue theorem (attention: the pole lies on $C$!) and elementary trigonometry you have: $\int dx' f(x') \oint_C dz\, f^*(x'+z) \frac{[e^{ikz}]_{- \infty}^\infty}{iz} = \int dx' f(x') f^*(x') \pi i \frac{\lim_{z \rightarrow 0, k \rightarrow \infty} 2\sin(zk)}{i} =$ $2 \pi \int dx' |f(x')|^2 \lim_{z \rightarrow 0, k \rightarrow \infty} \sin(zk) = 2 \pi \lim_{z \rightarrow 0, k \rightarrow \infty} \sin(zk)$ (the last step assuming $f$ is normalized, $\int dx'\,|f(x')|^2=1$).
$p^2 - 2 q^2 = 5039$ for primes $p, q$
Not a solution, too long for a comment: I am pretty close to finishing checking the first billion prime values of $q$, with no solution so far. Currently I'm at $q=19,047,324,319$. To make things even worse, it seems that the solutions of the equation $p^2-2q^2=5039$ for prime $q$ are extremely rare, even if you allow $p$ to be composite. So far I have found only two such solutions: $$p=209, \ q=139$$ $$p=6889, \ q=4871$$ Both 209 and 6889 are composite numbers (the latter also being a perfect square) so both have to be discarded. I'll let the code run over the weekend but this looks more and more like mission impossible.
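For reference, a minimal sketch of this kind of search loop (Python with sympy's primality helpers; an illustration, not the actual code used above):

import math
from sympy import isprime, nextprime

q = 2
while q < 10**7:                 # search bound chosen for illustration
    t = 2 * q * q + 5039         # candidate p^2 = 2 q^2 + 5039
    p = math.isqrt(t)
    if p * p == t:
        print(q, p, isprime(p))  # finds q=139, p=209 and q=4871, p=6889; both p composite
    q = nextprime(q)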
How do I calculate the probability of a second order Markov event?
If you're calculating the probabilities from the text and you want to find second order probabilities, then you identify all occurrences of 'aa' and see what proportion of them are followed by 'b'. If you already have your probabilities, but they're all first order - i.e. you have the probability of one letter following another - then in your model the probability that 'aa' is followed by 'b' is exactly the same as the probability that 'a' is followed by 'b', because your model is working on the basis that what happened two letters ago has no bearing on the current one.
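A minimal counting sketch in plain Python (the sample string is made up):

text = "aabaabbaaab"  # made-up sample text
aa = sum(1 for i in range(len(text) - 2) if text[i:i+2] == "aa")
aab = sum(1 for i in range(len(text) - 2) if text[i:i+3] == "aab")
print(aab / aa)  # estimate of P(next = 'b' | previous two letters = 'aa')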
How to find value of arctan using Maclaurin
Not exactly: $\arctan x$ is defined for all $x$, but the radius of convergence of its Maclaurin series is equal to $1$. You may reduce the problem to this case, using the relation $$\arctan x+\arctan \frac1x=\begin{cases}\dfrac\pi 2&\text{if }x>0, \\[1ex]-\dfrac\pi 2&\text{if }x<0.\end{cases}$$
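A minimal sketch of this reduction in Python (the number of series terms is an arbitrary choice; convergence is slow near $|x|=1$):

import math

def arctan_series(x, terms=60):
    # Maclaurin series: arctan x = x - x^3/3 + x^5/5 - ..., valid for |x| <= 1
    return sum((-1)**k * x**(2*k + 1) / (2*k + 1) for k in range(terms))

def arctan(x):
    if abs(x) <= 1:
        return arctan_series(x)
    half_pi = math.pi / 2 if x > 0 else -math.pi / 2
    return half_pi - arctan_series(1 / x)  # arctan x = +-pi/2 - arctan(1/x)

print(arctan(3), math.atan(3))  # the two values should agree closely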
Unbiased estimator of a uniform distribution
Another answer has already pointed out why your intuition is flawed, so let us do some computations. If $X$ is uniform, then: $$ P(X_{max}<x)=P(X_i<x,\forall i)=\prod_i P(X_i<x)= \begin{cases} 1 & \text{if } x\ge \theta \\ \left(\frac{x}{\theta}\right)^n & \text{if } 0\le x\le \theta \\ 0 & \text{if } x\le 0 \end{cases} $$ so the density function of $X_{max}$ is: $$ f_{max}(x;\theta)=\begin{cases} \frac{n}{\theta^n}x^{n-1} & \text{if } 0\le x\le \theta \\ 0 & \text{otherwise} \end{cases} $$ Then we can compute the average of $X_{max}$: $$ E(X_{max})=\int_0^\theta x \frac{n}{\theta^n}x^{n-1} dx =\frac{n}{n+1} \theta $$ so $X_{max}$ is biased whereas $\frac{n+1}{n}X_{max}$ is an unbiased estimator of $\theta$.
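A quick simulation of the bias (NumPy; $\theta=3$ and $n=10$ are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 3.0, 10, 100_000
xmax = rng.uniform(0, theta, size=(reps, n)).max(axis=1)
print(xmax.mean())                  # close to n/(n+1) * theta = 2.727...
print(((n + 1) / n * xmax).mean())  # close to theta = 3.0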
Convergence in probability of $X_n\sim \operatorname{Bin}\left(1,\frac{1}{n}\right)$
I assume that what you mean by $P_{X_n} = Bin(1,\frac{1}{n})$ is that $P(X_n = 1) = \frac{1}{n}$ and $P(X_n = 0) = 1 - \frac{1}{n}$. Therefore, if $\epsilon > 1$, we know that $P(|X_n-0| \geq \epsilon) = 0$, since $X_n\in \{0,1\}$. Now, if $0<\epsilon \leq 1$, then $P(|X_n -0|\geq\epsilon) = P(X_n=1)=\frac{1}{n}$. It is now easy to see that $$\lim_{n \to \infty}P(|X_n -0|\geq\epsilon) = \lim_{n \to \infty}\frac{1}{n}=0$$
Matrix Chain Multiplication Dynamic Equation
This is the task of putting brackets in a matrix product so as to minimize the operation count. There's some ambiguity in notation and I'll replace $F(n_1, \dots, n_{k+1}; k)$ with $F(n_1, \dots, n_{k+1})$. Suppose that we have to put the inner-most brackets first. At the first step you have to choose a pair of matrices to be multiplied. If you choose matrices $M_i$ and $M_{i+1}$ and multiply them, that will cost you $n_{i}\cdot n_{i+1}\cdot n_{i+2}$ multiplications. After that you obtain the operation-minimizing task for the set of matrices $M_1, \dots, M_{i-1}, (M_i \cdot M_{i+1}), M_{i+2}, \dots, M_h$, which costs exactly $$F(n_1,...,n_{i-1},n_i, n_{i+2},...,n_{h+1}).$$ $F(n_1,...,n_{h+1})$ is obtained exactly as a minimum among all inner-most variants of brackets: $F(n_1,...,n_{h+1})=\min \limits_{1\le i\le h-1}\{n_{i}\cdot n_{i+1}\cdot n_{i+2}+F(n_1,...,n_{i-1},n_{i},n_{i+2},...,n_{h+1})\}$. Also, you could do it another way. You may choose the outer-most bracket first and it will result in the dynamic-programming equation: $$F(n_1,...,n_{h+1})=\min_{1\le i\le h-1}\{n_{1}n_{i+1} n_{h+1} + F(n_1, \dots, n_{i+1}) + F(n_{i+1},\dots, n_{h+1})\}.$$ This corresponds to splitting the product into two brackets $(M_1\cdots M_i)$ and $(M_{i+1}\cdots M_h)$. To multiply them we have to spend $n_1 n_{i+1} n_{h+1}$ operations. But to multiply the matrices inside these two brackets we have to spend $F(n_1, \dots, n_{i+1})$ and $F(n_{i+1},\dots, n_{h+1})$ operations respectively. And, again, we take the minimum among all variants of outer-most brackets.
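The outer-most-bracket recursion translates directly into memoized code; a minimal Python sketch (the dimension list is an arbitrary example):

from functools import lru_cache

def matrix_chain_cost(dims):
    # dims = [n_1, ..., n_{h+1}]; matrix M_i has shape dims[i-1] x dims[i]
    h = len(dims) - 1

    @lru_cache(maxsize=None)
    def F(i, j):
        # minimal multiplication count for the product M_i ... M_j
        if i == j:
            return 0
        return min(F(i, k) + F(k + 1, j) + dims[i - 1] * dims[k] * dims[j]
                   for k in range(i, j))

    return F(1, h)

print(matrix_chain_cost([10, 30, 5, 60]))  # 4500, achieved by (M1 M2) M3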
Cauchy-Schwarz inequality problem with four variables
Applying Titu's lemma, it suffices to show that (fill in the slight gap) $$ (a+b+c+d)^2 \geq 2(ab+ac+bc+bd+cd+ac+da+bd ).$$ This is obviously true by expansion, since it becomes $$ ( a -c)^2 + (b-d) ^2 \geq 0 .$$
Why is the number of orientations of order $n$ equal to $3^\binom{n}{2}$?
There are $3$ possibilities for each of the $\binom{n}{2}$ vertex pairs of an oriented graph: forward arrow, reverse arrow, and no arrow. So $3^\binom{n}{2}$ is the total number of orientations. If you are only counting tournaments, then the option "no arrow" is not valid, so the answer would be $2^\binom{n}{2}$
The function $A$ is given by $A(x) = \int_0^x f(t)\,dt$, where $f(t)=4-t$
By definition, $$A(x) = \int_{0}^{x} f(t)~dt$$ where $f(t) = 4 - t$. Evaluating the integral yields \begin{align*} A(x) & = \int_{0}^{x} (4 - t)~dt\\ & = \left(4t - \frac{1}{2}t^2\right) \bigg|_{0}^{x}\\ & = 4x - \frac{1}{2}x^2 - (0 - 0)\\ & = 4x - \frac{1}{2}x^2 \end{align*} Can you take it from there?
Derive the exact ODE solved by specific function
\begin{align} \qquad & y^2 = K|x| - 1 \\ \implies &y^2 + 1 = K|x|\quad \left(\implies K > 0, |x| \geq \frac{1}{K}\right) \\ \implies &2yy' = K\operatorname{sgn}(x) \\ \implies &2yy' = K\left(\frac{x}{|x|}\right) = (y^2+1) \frac{x}{|x|^2}\\ \implies &2xyy' = (y^2 + 1),\; \text{ given that } x \neq 0. \end{align} The above equation is the required ODE, with the necessary domain conditions.
Is an orthogonal operator with determinant equal to $1$ or $-1$ always a rotation or a reflection?
First of all, an orthogonal map (in finite dimensions) always has determinant $\pm 1$. But in $\Bbb R^4$, you can have a rotation in the $x_1x_2$-plane along with a rotation in the $x_3x_4$-plane, for example, and I would not call that a rotation.
How do I evaluate $|x-1|+|x-2|-|x-3|<5$?
Consider several cases: If $x<1$, your inequality becomes $1-x+2-x-3+x<5$. If $x\in[1,2)$, your inequality becomes $x-1+2-x-3+x<5$. If $x\in[2,3)$, your inequality becomes $x-1+x-2-3+x<5$. If $x\geqslant3$, …
Make x the subject of a double exponential equation
Except for very few cases (such as $R=S^2$, $S^3$ or $S^4$, or the reciprocals, which would lead to polynomials of degree $2$, $3$ or $4$ in $R$ or $S$), there is no analytical solution to equations $$y = A + B \cdot R^x + C \cdot S^x$$ and only numerical methods will provide $x$ from known values of $y,A,B,C,R,S$. Newton or some variants (Halley, Householder, ...) would be quite suitable but they require a "reasonable" starting value. For illustration purposes, just consider the equation $$y=7+6\times 5^x +4\times 3^x$$ and we look for $x$ such that $y=123456$. Plotting the function, we can see that the solution is close to $6$. So let us use Newton's method with $x_0=6$. The successive iterates would then be $$x_1=6.173815857$$ $$x_2=6.153702325$$ $$x_3=6.153371723$$ $$x_4=6.153371635$$ which is the solution to ten significant figures. But, when the function is stiff, a more efficient way is to solve $$\log(y) = \log(A + B \cdot R^x + C \cdot S^x)$$ Doing the same as above, using the same method and starting value, the successive iterates would then be $$x_1=6.153427858$$ $$x_2=6.153371635$$ which is the solution to ten significant figures.
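A minimal Newton iteration on the log form, in Python (the example equation and starting value are the ones above):

import math

def g(x):
    # g(x) = log(y) - log(A + B R^x + C S^x) for the example y = 123456
    return math.log(123456) - math.log(7 + 6 * 5**x + 4 * 3**x)

def dg(x):
    f = 7 + 6 * 5**x + 4 * 3**x
    return -(6 * math.log(5) * 5**x + 4 * math.log(3) * 3**x) / f

x = 6.0  # starting value read off the plot
for _ in range(5):
    x -= g(x) / dg(x)  # Newton step
print(x)  # 6.153371635...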
Is this rearrangement correct? Algebra
Not quite. Note $P_x \left( \frac{I}{P_x} - a\right) + P_yY = I \implies I - P_xa + P_yY = I \implies P_xa = P_yY$.
If $\operatorname{lcm}(m, m + k) = \operatorname{lcm}(n, n + k)$, then $m = n$
$\textbf{Hint:}$ Consider one prime at a time. Say we have a prime $p$. The highest exponents of $p$ dividing $m,n,k$ are respectively $a_1,a_2,a_3$. Now, if we can deduce that $a_1=a_2$ for every prime we consider, we would be done. $\textbf{Solution:}$ Consider one side: $\operatorname{lcm}(m,m+k)$. If $a_1 \ge a_3$ then $m+k$ has highest exponent $a_3$ and $m$ has $a_1$; then $\operatorname{lcm}(m,m+k)$ has $a_1$, since it is the greater one. Again, if $a_1 \le a_3$ then $m+k$ has highest exponent $a_1$, and so does $m$. Hence $\operatorname{lcm}(m,m+k)$ has $a_1$ as its highest exponent. The same is true for the other side of the equation. Since both sides are equal, $a_1=a_2$ for every prime we consider.
How to calculate $\int \limits_{-\infty}^{+\infty} e^{-\frac{(x-m)^2}{2\sigma^2}}dx$
$$I=\int \limits_{-\infty}^{+\infty} e^{-\frac{(x-m)^2}{2\sigma^2}}dx$$ Substitute with $u$: $$u=\frac {x-m}{\sqrt 2 \sigma} \implies dx=\sqrt 2 \sigma\, du$$ $$I=\sqrt 2 \sigma\int \limits_{-\infty}^{+\infty} e^{-u^2}du$$ Using the Gaussian integral $\int_{-\infty}^{+\infty} e^{-u^2}du=\sqrt\pi$, $$I=\sqrt {2\pi} \sigma$$
About the product of two sets
Suppose that $A \ne \emptyset$ and $B \ne \emptyset$, then pick $a \in A$ and $b \in B$. We get $ab \in AB$. Therefore $AB$ is not a subset of $ \emptyset.$ Conclusion: if $AB\subset \emptyset$, then $A = \emptyset$ or $B = \emptyset$.
How to implement the adaptive Heun's method?
On theoretical lower accuracy bounds for order 2 methods

Heun is a second order method; that means that the global error is of second order and the local discretization error is of size $$ |y_n(t_n+h)-y_n(t_n)-h·\Phi_f(t_n,y_n(t_n),h)|=C\cdot h^3, $$ where $y_n(t)$ solves the IVP $y_n'=f(t,y_n)$, $y_n(t_n)=y_n$. To be measurable, this must not fall below the machine precision $\mu\sim 10^{-16}$, so $h\sim 10^{-5}$ is the lowest step size where the order behaves numerically as theoretically. The estimate of the global error has the form and size $$ C·T·h^2+\frac{D·T·\mu}{h}\ge 3·T·\left(\frac14·C·(D·\mu)^2\right)^{\frac13}, \qquad\text{with equality for } h=\left(\frac{D·\mu}{2·C}\right)^{\frac13}. $$ The second term stands for the accumulation of floating point errors of about $D·\mu$ in each step and over $N=T/h$ steps (a more correct factor instead of $N$ for longer time intervals is $(e^{LT}-1)/(Lh)$ with the Lipschitz constant $L$). This is a convex function in $h$ with a minimum and thus gives the limit for any realistic tolerance prescription. Assuming the constants $C,D,T$ all have magnitude $1$, the lower limit is about $tol=10^{-10}$ with a step size of again $h=10^{-5}$. With a prescription of $tol=10^{-12}$, the number $N=10^6$ of steps is guaranteed to lead to an accumulation of floating point errors of size $10^{-10}$ at least, thus preventing reaching this tolerance.

On ways to have a correct adaptive scheme

The following corrects your python code to a version where $h$ is fixed for the full integration interval and adapted over several integration runs so that the relative error in the end is between tol/4 and tol. This is the closest variant to your code, but probably not what was originally intended. The original intent may have been to adapt the value of $h$ for each Heun step, but one would have to change your code more radically to achieve that. Integrating once with step size $h$, giving the value $y_1$, and once with $h/2$ to get $y_2$, one gets $y_1=y^*+C·h^2+O(h^3)$ and $y_2=y^*+\frac14C·h^2+O(h^3)$. This can be used to compute better approximations for the error term and the exact value $$ y^*=\frac{4·y_2-y_1}3+O(h^3),\quad y_1-y^*=\frac43(y_1-y_2)+O(h^3),\quad y_2-y^*=\frac13(y_2-y_1)+O(h^3) $$

import math

def f(x, y):
    return math.sin(x + y)

def Heun(f, x, y, h, end):
    # integrate y' = f(x, y) from x to end with fixed step size h
    while x < end:
        # floating point error may make x over- or undershoot end
        if x + h >= end:
            h = end - x
        k0 = f(x, y)
        k1 = f(x + h, y + h * k0)
        x += h
        y += h * 0.5 * (k0 + k1)
    return y

def AdaptDiff(diffF, f, x, y, h, end, tol):
    # print debugging information on the recursion level
    print("called with h=", h)
    if abs(1. - x / end) < tol:
        return y
    # integrate the full interval,
    # once with step size h and once with h/2
    y1 = diffF(f, x, y, h, end)
    y2 = diffF(f, x, y, h / 2.0, end)
    # print debugging information on the approximations found
    print("y1=%.15f, y2=%.15f, y1-y2=%.4e, y*=%.15f"
          % (y1, y2, y1 - y2, (4 * y2 - y1) / 3))
    # if relative error is too large, decrease step size
    err = abs(1. - y1 / y2) / 3
    if err > tol:
        while err > tol / 2:
            h /= 2.
            err /= 4.
        return AdaptDiff(diffF, f, x, y, h, end, tol)
    # if relative error is far too small, increase step size
    # but this will rarely happen
    if err < tol / 64:
        return AdaptDiff(diffF, f, x, y, h * 4., end, tol)
    # return the best computed value, might also return y2
    return (4 * y2 - y1) / 3

x0 = 0; y0 = 1; xe = 1
tol = 1e-10
h = (tol / (xe - x0)) ** 0.5
print("returned %.16f" % AdaptDiff(Heun, f, x0, y0, h, xe, tol))

This recursion finishes in one step, i.e., with two integration runs with step sizes h=1e-05 and h/2=5e-06 for the objective tol=1e-10:

called with h= 1e-05
y1=1.801069211924622, y2=1.801069211940325, y1-y2=-1.5703e-11, y*=1.801069211945559
returned 1.8010692119455591
Is my approach to this set theory proof correct?
The first bolded sentence is not a valid conclusion: for example, consider $A = \{ 1, 2 \}$, $B = \{ 1, 3 \}$, $C = \{ 1, 4 \}$. Then $B \cap C = \{ 1 \} \subseteq A$ but $B$ is not a subset of $A$.
Why is $e_1\wedge e_2\dots \wedge e_n \ne 0$ in $\wedge^n V$?
We can identify $V$ with $k^n$ where $k$ is the ground field. And then we can use $e_1=(1,0,\ldots,0),\ldots,e_n=(0,\ldots,0,1)$. Consider the map $T:V^n\to k$ defined as: $$T(v_1,\ldots,v_n)=\det\begin{pmatrix}\alpha_{11} & \cdots & \alpha_{1n} \\ \alpha_{21} & \cdots & \alpha_{2n}\\ \vdots & \ddots & \vdots \\ \alpha_{n1} & \cdots & \alpha_{nn}\end{pmatrix},$$ where $v_j=(\alpha_{1j},\alpha_{2j},\ldots,\alpha_{nj})$, with $\alpha_{ij}\in k$. This map is alternating and multilinear, and such that $T(e_1,\ldots,e_n)=1\neq 0$. Hence $T$ factors through a linear map $\wedge^n V\to k$ sending $e_1\wedge\cdots\wedge e_n$ to $1$, so $e_1\wedge e_2\wedge\cdots\wedge e_n\neq 0$.
Let W be a basis in $R^n$
If $\mathcal U = \{\mathbf u_1, \dots, \mathbf u_n\}$ is a basis for $W$ then we can express any vector $\mathbf w\in W$ as $$\mathbf w=w_1\mathbf u_1+w_2 \mathbf u_2 + \cdots + w_n\mathbf u_n$$ for some unique scalars $w_1, \dots, w_n$. The fact that $\mathcal U$ is an orthogonal basis means that $\mathbf u_i \cdot \mathbf u_j=0$ for any $i\ne j$. That's useful because then $$\begin{align}\mathbf w\cdot \mathbf u_i &= (w_1\mathbf u_1+w_2 \mathbf u_2 + \cdots + w_n\mathbf u_n)\cdot \mathbf u_i \\ &= w_1\mathbf u_1 \cdot \mathbf u_i + \cdots + w_n\mathbf u_n\cdot \mathbf u_i \\ &= 0 + \cdots + 0 + w_i\mathbf u_i\cdot \mathbf u_i + 0 + \cdots + 0 \\ &= w_i\|\mathbf u_i\|^2\end{align}$$ Therefore we get the Fourier expansion of $\mathbf w$: $$\mathbf w = \frac{\mathbf w\cdot \mathbf u_1}{\|\mathbf u_1\|^2}\mathbf u_1 + \cdots + \frac{\mathbf w\cdot \mathbf u_n}{\|\mathbf u_n\|^2}\mathbf u_n$$
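A tiny numerical illustration of the Fourier-expansion formula (NumPy; the orthogonal basis of $\mathbb R^2$ is an arbitrary example):

import numpy as np

u1 = np.array([1.0, 1.0])    # orthogonal basis: u1 . u2 = 0
u2 = np.array([1.0, -1.0])
w = np.array([3.0, 5.0])
c1 = (w @ u1) / (u1 @ u1)    # w . u1 / ||u1||^2
c2 = (w @ u2) / (u2 @ u2)
print(c1 * u1 + c2 * u2)     # recovers w = [3, 5]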
Determine remainders of large numbers
Part (a) is simpler if you group digits 3 at a time, using $1000 ≡ -1 \bmod 7$: $$12,345,678,923 ≡ 923-678+345-12 ≡ 578 ≡ 501 ≡ 5(2)+1≡ 4\bmod 7$$ Part (b): modulo $m$, with $a≡b$ and $r≡s$, we have $$ar-bs = r(a-b) + b(r-s) ≡ 0 \bmod m$$ since $a-b≡0$ and $r-s≡0$; thus $\;ar ≡ bs \bmod m$. Part (c), use $\;4^6 ≡ 1 \bmod 7$: $$12345678923^{128} ≡ 4^{6\times21+2} ≡ 4^{2} ≡ 16 ≡ 2 \bmod 7$$ Part (d): $9^{2} \bmod 100 ≡ 81$, $9^{4} \bmod 100 ≡ 81^2 ≡ 6561 ≡ 61$, $9^{9} \bmod 100 ≡ 61^2\times 9 ≡ 89$, $9^{10} \bmod 100 ≡ 89 \times 9 ≡ 801 ≡ 1$, so $$9^{9^9} \bmod 100 ≡ 9^{10k+9} ≡ 9^9 ≡ 89$$ $$9^{9^{9^9}} \bmod 100 ≡ 9^{10k'+9} ≡ 9^9 ≡ 89$$
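These can all be checked with Python's built-in modular exponentiation (for part (d) the tower is first collapsed by hand, using $9^{10}\equiv 1 \bmod 100$ and $9^k\equiv 9 \bmod 10$ for odd $k$):

print(12345678923 % 7)           # 4, part (a)
print(pow(12345678923, 128, 7))  # 2, part (c)
print(pow(9, 9, 100))            # 89, part (d): the tower reduces to 9^9 mod 100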
Finding $x$ that makes $x^2 ≡ 1 (\mathrm{mod} \ n)$, when $n$ is a composite number
If $p\neq 2$ is prime then $$x^2 \equiv 1 \pmod{p^k} \Leftrightarrow p^k \mid(x-1)(x+1) \,.$$ As $\gcd(x-1,x+1)\mid 2$, and $p \neq 2$, we have $p^k \mid(x-1)(x+1) \Leftrightarrow p^k \mid x-1$ or $p^k \mid x+1$. If $p=2$ then $2^k \mid(x-1)(x+1)$ if and only if $2^{k-1}\mid(x-1)$ or $2^{k-1}\mid x+1$. This fact follows immediately from the observation that if $x^2-1$ is even, then both $x+1, x-1$ are even and $\gcd(x-1,x+1)=2$. Let $n=p_1^{k_1}\cdots p_k^{k_k}$. Case 1: $n$ is odd, or $n$ is even but not a multiple of $4$. Then $x^2\equiv1 \pmod n$ if and only if $$x \equiv \pm 1 \pmod{p_i^{k_i}} \,;\, \forall 1 \leq i \leq k \,.$$ There are 2 possibilities for each prime; thus by the Chinese Remainder Theorem you get $2^k$ possible $x$. There are $2^k-1$ solutions $x \not\equiv 1 \pmod{n}$ in this case. Case 2: $n$ is a multiple of $4$. WLOG $p_1=2$. Then $x^2\equiv1 \pmod n$ if and only if $$x \in \{ \pm 1 ; \pm 1 + 2^{k_1-1} \} \pmod{2^{k_1}} \,,$$ $$x \equiv \pm 1 \pmod{p_i^{k_i}} \,;\, \forall 2 \leq i \leq k \,.$$ There are 4 possibilities for the prime $2$ when $k_1\ge3$ (for $k_1=2$ the four values collapse to $\pm1$) and 2 possibilities for each odd prime; thus by the Chinese Remainder Theorem you get $2^{k+1}$ possible $x$ when $8\mid n$. There are $2^{k+1}-1$ solutions $x \not\equiv 1 \pmod{n}$ in this case.
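A brute-force check of these counts (plain Python; the moduli are arbitrary examples):

def count_sqrt1(n):
    # number of x in [0, n) with x^2 = 1 (mod n)
    return sum(1 for x in range(n) if x * x % n == 1)

for n in (15, 36, 24, 105):
    print(n, count_sqrt1(n))  # 15 -> 4, 36 -> 4, 24 -> 8, 105 -> 8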
interchange integral and inf
No. Integral of an infimum is less than or equal to the infimum of the integrals but equality holds only in very special cases.
Propagation of Error very strange (matlab)
First: As the Wiki article says: "Neglecting correlations or assuming independent variables yields...". But the variables are heavily correlated because the argument of the logarithm has to be positive. Second: Even if you really had uncorrelated and independent variables, this is not necessarily a contradiction, because it may simply demonstrate the fact that a non-linear function cannot always be accurately approximated by the first order Taylor series.
Irreducible action Lie algebras
Hint: Show that the action of the Lie algebra is enough to move between the standard basis vectors of $\mathbb C^{2n+1}$; is this enough to show the only submodule is the whole space? Hint+: think in terms of matrix units. Complete Answer: Let $E_{ij}$ be the matrix unit with $1$ in the $(i,j)$ entry and $0$ elsewhere. Let $v_0, v_1,\ldots, v_n, v_{-1}, \ldots, v_{-n}$ be the standard basis of $\mathbb C^{2n+1}$ and consider the matrix description of $\mathfrak{so}(2n+1)$, i.e. the Lie algebra of matrices solving the equation $A^tM+MA=0$ for $$M=\begin{bmatrix}1 & 0 & 0\\ 0 & 0 & I_n\\ 0 & I_n & 0 \end{bmatrix}.$$ One can show the Lie algebra contains the matrices $E_i=E_{i,i+1}-E_{-i-1,-i}$ for $0<i<n$, $E_0=E_{n,0}-E_{0,-n}$, as well as their transposes $F_i, F_0$. But then by inspection, we see that by acting on a basis element $v_i$ by some sequence of these matrices we can obtain a multiple of any other basis element $v_j$. (Ex. for $n=2$, $E_1E_2E_0v_0=v_1$). Then given any nonzero vector $v$, we can systematically act by these $E$ matrix elements until we obtain $v_1$, from which we can generate all basis vectors using the $F$'s; hence every nonzero cyclic submodule is the whole space, i.e., it is irreducible. PS. This argument is just the $\mathfrak{so}$-analogue of the same (but perhaps easier to parse) argument for why $\mathbb C^n$ is irreducible for $\mathfrak{sl}_n$, and indeed this sort of argument works for all the "natural" representations of the matrix Lie algebras.
Defining a smooth curve between 2 points with given angles
I forgot that I need to calculate the length of the curve and find a number of equidistant points along it, so a Bezier curve would make the second part easier. Fortunately Wikipedia's explanation of quadratic Beziers is quite clear and points out that P1 is where the two tangents meet, and P1 is easy to find: P0's tangent is the x-axis, so a 45 degree line passing through P2 (2, 1) crosses it at P1 (1, 0). Unfortunately, finding the length of a quadratic curve is rather complicated, and my calculus is rusty, but I think I can manage it.
Locus of intersection of two perpendicular normals to an ellipse
I do not have even a sniff of a clue as to the stated question (the name of the curve), but I did find the formula for it. Starting with an ellipse at the origin, with axis-aligned semi-axes $r_x$ and $r_y$, $$\begin{align} x_E(\varphi) &= r_x \cos \varphi \\ y_E(\varphi) &= r_y \sin \varphi \end{align}$$ the normal (defined as the tangent rotated 90° counterclockwise in a right-handed coordinate system) is $$\begin{align} x_N(\varphi) = \frac{d y_E(\varphi)}{d \varphi} &= r_y \cos \varphi \\ y_N(\varphi) = - \frac{d x_E(\varphi)}{d \varphi} &= r_x \sin \varphi \end{align}$$ If the normals corresponding to $\varphi=\varphi_1$ and $\varphi=\varphi_2$ are perpendicular, then their dot product is zero, $$x_N(\varphi_1) x_N(\varphi_2) + y_N(\varphi_1) y_N(\varphi_2) = 0$$ i.e. $$r_y^2 \cos(\varphi_1) \cos(\varphi_2) + r_x^2 \sin(\varphi_1) \sin(\varphi_2) = 0$$ Solving for $\varphi_2$ we get $$\varphi_2 = - \arctan\left(\frac{r_y^2 \cos\varphi_1}{r_x^2 \sin\varphi_1}\right)$$ I shall use that to define a function $$\theta(\varphi) = - \arctan\left(\frac{r_y^2 \cos\varphi}{r_x^2 \sin\varphi}\right)$$ The equation for the normal line at $\varphi$, with $t$ as the line parameter, is $$\begin{cases} x_1(\varphi, t_1) = x_E(\varphi) + t_1 x_N(\varphi) \\ y_1(\varphi, t_1) = y_E(\varphi) + t_1 y_N(\varphi) \end{cases}$$ Using the function $\theta(\varphi)$ defined before, we know that the above is also perpendicular to $$\begin{cases} x_2(\varphi, t_2) = x_E(\theta(\varphi)) + t_2 x_N(\theta(\varphi)) \\ y_2(\varphi, t_2) = y_E(\theta(\varphi)) + t_2 y_N(\theta(\varphi)) \end{cases}$$ Their intersection occurs at $$\begin{cases} x_1(\varphi, t_1) = x_2(\varphi, t_2) \\ y_1(\varphi, t_1) = y_2(\varphi, t_2) \end{cases}$$ which we can solve for $t_1$ and $t_2$. I shall omit $t_2$ here for brevity. $t_1 = t(\varphi)$: $$t(\varphi) = r_x r_y \frac{ (r_x^2 - r_y^2) \cos(\varphi) - \sqrt{r_x^4 + \frac{r_y^4 \cos(\varphi)^2}{\sin(\varphi)^2}}}{(r_y^2 \cos(\varphi)^2 + r_x^2 \sin(\varphi)^2)\sqrt{r_x^4 + \frac{r_y^4 \cos(\varphi)^2}{\sin(\varphi)^2}}}$$ Note that $t(\varphi)$ is not defined at integer multiples of $\pi$ (including zero), but it approaches the value $-r_x/r_y$. The curve is then $$\begin{cases} x'(\varphi) = x_E(\varphi) + t(\varphi) x_N(\varphi) \\ y'(\varphi) = y_E(\varphi) + t(\varphi) y_N(\varphi) \end{cases}$$ Because of how we defined the normal, the actual curve is composed of two parts, $$\begin{cases} x(\varphi) = x'(\varphi) \\ y(\varphi) = y'(\varphi) \end{cases}, \text{ and } \begin{cases} x(\varphi) = -x'(\varphi) \\ y(\varphi) = -y'(\varphi) \end{cases}, \; \; \varphi \in (0, \pi)$$ In Python:

from math import sin, cos, sqrt

def locus_point(rx, ry, phi):
    tmp1 = sqrt(rx**4 + ry**4 * cos(phi)**2 / sin(phi)**2)
    tmp2 = ((rx**2 - ry**2) * cos(phi) - tmp1) \
           / (tmp1 * (rx**2 * sin(phi)**2 + ry**2 * cos(phi)**2))
    x = rx * (1 + tmp2 * ry**2) * cos(phi)
    y = ry * (1 + tmp2 * rx**2) * sin(phi)
    return x, y

Each of the four lobes is $C^2$-continuous, as at $\varphi \to 0$ and $\varphi \to \pi$ the curve parts approach the origin at the same slopes.
How is a sample from a Bernoulli distribution different from a binomial distribution?
A binomial distribution occurs if you add them: $Y = X_1 + \ldots + X_n$. If you just leave them as is, they are a sample from a Bernoulli distribution.
A non-example of covering of the 1-circle
I think $p$ is a covering. Let $\varphi : \mathbb{R} \to S^1, \varphi(t) = e^{it}$. The set $U = \varphi((-\pi,\pi)) = S^1 \backslash \{ -1 \}$ is an open neighborhood of $1$ in $S^1$. We have $p^{-1}(U) \cap S^1 \times \{ n \} = \bigcup_{k=0}^{n-1} U(k,n) \times \{ n \} $ where $U(k,n) = \varphi((\frac{2\pi k}{n} - \frac{\pi}{n},\frac{2\pi k}{n} + \frac{\pi}{n}))$. The intervals $J(k,n) = (\frac{2\pi k}{n} - \frac{\pi}{n},\frac{2\pi k}{n} + \frac{\pi}{n})$ are pairwise disjoint and contained in $(-\frac{\pi}{n}, \frac{2\pi (n-1)}{n} + \frac{\pi}{n}) = (-\frac{\pi}{n}, 2\pi - \frac{\pi}{n})$. This shows that also the $U(k,n)$ are pairwise disjoint. In fact, the $U(k,n)$ are the components of $S^1 \backslash \{ \eta_1, .... \eta_n \}$, where the $\eta_k$ are the $n$-th complex roots of $-1$. Each $U(k,n) \times \{ n \}$ is mapped by $p$ homeomorphically onto $U$. Therefore $U$ is evenly covered. A similar argument works for $V = \varphi((0,2\pi)) = S^1 \backslash \{ 1 \}$.
MLE for mixed distribution
Maybe this is where knowing a bit of measure theory helps with down-to-earth concrete problems. $f$ is presumably a sort of mixture of a probability mass function and a probability density function, and I take your way of stating the problem to mean that $\Pr(X=5) = e^{-5\lambda}$ and that for every (measurable) set $A\subseteq[0,5)$ you have $\Pr(X\in A) = \int_A f(x)\, dx.$ Imagine a measure according to which the measure of any set $A\subseteq[0,5)$ is just how much space in that interval the set $A$ takes up, e.g. the measure of the interval $(1,3)$ is $2$ and that of $(1,4)$ is $3,$ but according to which the measure of the set $\{5\}$ is $1.$ Call that measure $m.$ Then your function $f$ is the density of this probability distribution with respect to the measure $m,$ and that means that for every (measurable) set $A\subseteq[0,5],$ the following is true: $$ \Pr(X\in A) = \int_A f(x)\, dm(x) $$ where the integral is with respect to this measure $m.$ Now you have $$ \Pr(X_1=x_1\ \&\ \cdots\ \&\ X_n=x_n) = \prod_{i=1}^n f(x_i). $$ The likelihood function is the value of the expression above as a function of $\lambda.$ Here's a useful fact that I seldom see mentioned: Just suppose we had used a different measure $n,$ with $n(\{5\})=2\ne 1.$ In that case, the value of $f$ at $5$ will be $e^{-5\lambda}/2.$ Then this will alter the likelihood function only by multiplying it by a constant (and "constant" means not depending on $\lambda$). Therefore the MLE will come out the same either way. And the product of the prior and the likelihood, normalized, yielding a posterior distribution of $\lambda,$ will still give the same results.
Orthogonal basis of complete Euclidean space
One uses Zorn's lemma to obtain the existence of a maximal orthonormal set $\mathscr{B}$ in $R$. It remains to see that $\mathscr{B}$ is an orthonormal basis (not a basis in the algebraic sense, unless $R$ is finite-dimensional), i.e. $S := \overline{\operatorname{span} \mathscr{B}}$ is the whole space $R$. Clearly, the maximality of $\mathscr{B}$ implies that $S^\perp = \{0\}$, since otherwise we could extend $\mathscr{B}$ by any unit vector in $S^\perp$ to an orthonormal system properly containing $\mathscr{B}$ in contradiction to its maximality. Since (see below) we have $R = S \oplus S^\perp$, the triviality of $S^\perp$ implies $S = R$, as desired. The decomposition $R = S \oplus S^\perp$ is a consequence of the Projection lemma: Let $E$ be an inner product space, and $C\subset E$ be a nonempty complete convex set. Then there is a (continuous) projection $P_C \colon E \to C$ mapping each point $x\in E$ to the unique point $y\in C$ with $$\lVert x-y\rVert = \operatorname{dist}(x,C) := \inf \{ \lVert x-c\rVert : c\in C\}.$$ Proof: We first show the existence of an $y\in C$ with $\lVert x-y\rVert = \operatorname{dist}(x,C)$. By definition of the infimum, there is a sequence $(y_n)_{n\in\mathbb{N}}$ in $C$ with $\lVert x-y_n\rVert \to d := \operatorname{dist}(x,C)$. By the parallelogram identity, we have $$\lVert y_n - y_m\rVert^2 + \lVert (x-y_n) + (x-y_m)\rVert^2 = 2\lVert x-y_n\rVert^2 + 2\lVert x-y_m\rVert^2$$ for all $m,n\in\mathbb{N}$. Given $\varepsilon > 0$, there is an $N(\varepsilon)$ such that $\lVert x-y_k\rVert^2 < d^2 + \frac{\varepsilon^2}{4}$ for $k \geqslant N(\varepsilon)$, and since $\frac{1}{2}(y_n+y_m)\in C$ for all $m,n$ by convexity, we have $$\begin{align} \lVert y_n-y_m\rVert^2 &= 2\lVert x-y_n\rVert^2 + 2\lVert x-y_m\rVert^2 - 4 \left\lVert x - \tfrac{1}{2}(y_n+y_m)\right\rVert^2\\ &\leqslant 4d^2 +\varepsilon^2 - 4 \left\lVert x - \tfrac{1}{2}(y_n+y_m)\right\rVert^2\tag{1}\\ &\leqslant \varepsilon^2 \end{align}$$ for all $m,n\geqslant N(\varepsilon)$. So $(y_n)$ is a Cauchy sequence, and since $C$ is complete, there is an $y\in C$ with $y_n \to y$. By the continuity of the norm, we have $\lVert x-y\rVert = d$. The argument also implies the uniqueness of $y\in C$ realising the distance: If $y_1,y_2\in C$ with $\lVert x-y_1\rVert = \lVert x-y_2\rVert = d$, then setting $m = 1,\,n = 2$ in $(1)$ shows $\lVert y_1 - y_2\rVert^2 \leqslant 0$. We omit the proof of the continuity of $P_C$, since we don't need that for the orthogonal decomposition $R = S \oplus S^\perp$, but we note that $P_C(x)$ is characterised by the condition $$\bigl(\forall y \in C\bigr)\bigl(\operatorname{Re} \langle x-P_C(x), y-P_C(x)\rangle \leqslant 0\bigr).$$ For, given any $y\in C$, the point $y_t := (1-t)\cdot P_C(x) + t\cdot y$ belongs to $C$ for all $t\in [0,1]$ by convexity, and $$\begin{align} \lVert x- y_t\rVert^2 &= \lVert (x-P_C(x)) - t(y-P_C(x))\rVert^2\\ &= \lVert x-P_C(x)\rVert^2 - 2t\operatorname{Re} \langle x-P_C(x),y-P_C(x)\rangle + t^2 \lVert y-P_C(x)\rVert^2. \end{align}$$ Since $P_C(x)$ minimises the distance to $x$ in $C$, it follows that the derivative of $\lVert x- y_t\rVert^2$ at $0$ is non-negative, and that derivative is $-2\operatorname{Re} \langle x-P_C(x),y-P_C(x)\rangle$.
Conversely, if $P_C(x)\in C$ is a point with $\operatorname{Re}\langle x-P_C(x),y-P_C(x)\rangle \leqslant 0$ for all $y\in C$, it follows that $\lVert x-y\rVert = \lVert x-y_1\rVert \geqslant \lVert x-P_C(x)\rVert$ for all $y\in C$, and so $P_C(x)$ minimises the distance to $x$ in $C$. Now, if $C$ is a complete linear subspace of $E$, then $\{ y- P_C(x) : y \in C\} = C$, so $$\operatorname{Re} \langle x-P_C(x),y\rangle \leqslant 0$$ for all $y\in C$, and since $-y\in C$ (and $\pm i y\in C$ if the scalar field is $\mathbb{C}$) for $y\in C$, it follows that $x-P_C(x) \in C^\perp$. So we have the decomposition $E = C \oplus C^\perp$ for every complete subspace $C$ of an inner product space $E$. In our setting, $S$ is a closed subspace of the complete space $R$, hence complete, and the decomposition $R = S \oplus S^\perp$ follows.
How to find a symplectic matrix that satisfies an additional condition
Your problem is called the Williamson normal form for symplectic matrices. It states that any positive definite matrix $H$ (I assume this is given in your case) can be brought to a block diagonal form as you specified using symplectic transformations. There are many proofs of this theorem, but most of them are not constructive (enough) for your case. The one constructive proof that I know uses the fact from linear algebra that skew-symmetric matrices can be brought to a "nearly diagonal" form orthogonally. More precisely: Let $A \in \mathbb{R}^{2n\times 2n}$ be a skew-symmetric matrix. Then $\exists K \in O(2n)$ such that $K^TAK = \begin{pmatrix} 0 & \Lambda \\ -\Lambda & 0\end{pmatrix}$. Using this fact it is easy to prove Williamson's theorem, which is stated as follows: Let $H \in \mathbb{R}^{2n\times 2n}$ be positive definite. Then there is a symplectic matrix $T\in Sp(2n)$ such that $T^THT = \begin{pmatrix} \Lambda & 0\\ 0 & \Lambda \end{pmatrix} =: D^2$ where $\Lambda$ is a diagonal matrix with positive entries. Proof: Since we are only proving existence we can assume without loss of generality that $T = H^{-1/2}KD$ for some $K\in O(2n)$. Here we are using the positive definiteness of $H$ and $D^2$ to define their square roots. For $T$ to be symplectic we need $T^TJT = J$ to hold, which using the assumed structure of $T$ leads to \begin{equation}DK^TH^{-1/2}JH^{-1/2}KD = J \end{equation} or equivalently \begin{equation}K^TH^{-1/2}JH^{-1/2}K = D^{-1}J D^{-1} = \begin{pmatrix} 0 & \Lambda^{-1} \\ -\Lambda^{-1} & 0\end{pmatrix} \end{equation} Now noticing that the matrix $H^{-1/2}JH^{-1/2}$ is skew-symmetric, you can use the previous statement and be done. I hope that this helps you construct the matrix $T$, though getting the square root of $H$ might be quite difficult.
Find the solution of the congruence modulo 4199
For two congruences involving coprime moduli: start from a Bézout's relation between the moduli: $\;um+vn=1$. Then $$\begin{cases} x\equiv a\pmod m\\x\equiv b\pmod n \end{cases}\iff x\equiv bum+avn\pmod{mn}.$$ For more than two congruences modulo pairwise coprime moduli: solve the first two congruences, then solve the system made up of the resulting congruence and the third congruence, and so on.
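A minimal implementation of this Bézout-based step (plain Python; the sample congruences are arbitrary, using two of the factors of $4199=13\cdot17\cdot19$):

def crt_pair(a, m, b, n):
    # solve x = a (mod m), x = b (mod n) for coprime m, n
    def ext_gcd(p, q):
        # returns (g, s, t) with s*p + t*q = g
        if q == 0:
            return p, 1, 0
        g, s, t = ext_gcd(q, p % q)
        return g, t, s - (p // q) * t

    g, u, v = ext_gcd(m, n)   # u*m + v*n = 1
    assert g == 1
    return (b * u * m + a * v * n) % (m * n)

print(crt_pair(2, 13, 8, 17))  # 93: indeed 93 = 2 (mod 13) and 93 = 8 (mod 17)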
What is the ring of 2-valued functions on the empty set? (Halmos Boolean Algebras)
No, there is exactly one function from the empty set to any set, so the ring in question is the "trivial ring" with exactly one element $0 = 1$. (This is a Boolean ring!) Let's check this carefully: a function $f: X \rightarrow Y$ is a subset $R$ of $X \times Y$ such that for every $x \in X$, there is exactly one element of $R$ with first coordinate $x$. Now if $X = \varnothing$ then $X \times Y = \varnothing \times Y = \varnothing$, so the only subset of $X \times Y$ is $\varnothing$. This subset does satisfy the defining property of a function, since for every $x \in \varnothing$..., well never mind. You might want to check for yourself the similar case of functions $f: X \rightarrow \varnothing$: there are none except in the case $X = \varnothing$ (in which, as a special case of what we checked above, there is exactly one).
For what values of $\epsilon > 0$ is it nonetheless true that...
Note that if $n$ is even then $s_n = 1$. If so then $|s_n - 1| = |1 -1| = 0$. And if $n$ is odd then $s_n = -1$. If so then $|s_n - 1| = |-1-1|= 2$. So $|s_n - 1| = 0$ or $2$ and $|s_n - 1| \le 2$. So as long as $\epsilon > 2$ it will be true that $|s_n - 1| < \epsilon$. This is true for all $n$ and not just "significantly large" $n$, so we don't have to worry about there being an $N$ that $n$ has to be larger than. Now by the same reasoning, only easier, $|s_n - 0| = |s_n|$ and if $n$ is even $s_n = 1$ and if $n$ is odd $s_n = -1$. So $|s_n -0| = |s_n| = |\pm 1| = 1\le 1$. So for any $\epsilon > 1$ we will always have $|s_n -0| < \epsilon$. Again this is true for all $n$; not just $n > N$ for some $N$.
If x is in the derived algebra, show that Tr(ad a)=0
Hint: If $x=[a,b]$, Jacobi implies that $ad_x=ad_a(ad_b)-ad_b(ad_a)$. Since $trace(fg)=trace(gf)$ the result follows.
The limit and asymptotic analysis of $a_n^2 - n$ from $a_{n+1} = \frac{a_n}{n} + \frac{n}{a_n}$
Since $a_1=1>0$ it is clear that $a_n>0$ for all $n$. Squaring gives $$a_{n+1}^2 = \frac{a_n^2}{n^2} + \frac{n^2}{a_n^2} + 2$$ and defining $a_n^2=nb_n$, this recurrence becomes $$b_{n+1}=\frac{b_n}{n(n+1)} + \frac{n}{(n+1)b_n} + \frac{2}{n+1}$$ with $b_1=1$. Now suppose $$1\leq b_n \leq 1+\frac{1}{n}+ \frac{2}{n^2}$$ which is true for $b_2=2$, $b_3=4/3$ and $b_4=(13/12)^2$ and continue inductively, i.e. $$b_{n+1}\geq \frac{1}{n(n+1)} + \frac{n}{(n+1)(1+1/n+2/n^2)} + \frac{2}{n+1} = 1 + \frac{3n+2}{n(n+1)(n^2+n+2)}\geq 1$$ and also $$b_{n+1}\leq \frac{1+1/n+2/n^2}{n(n+1)} + \frac{n}{n+1} + \frac{2}{n+1} \\ = 1 + \frac{1}{n+1} + \frac{2}{(n+1)^2} - \frac{1-2/n-3/n^2-2/n^3}{(n+1)^2} \leq 1 + \frac{1}{n+1} + \frac{2}{(n+1)^2}$$ whenever $n\geq 4$. Taking the limit on both sides it follows $$\lim_{n\rightarrow \infty} b_n = 1 \, .$$ Next we formally write $b_n$ as an asymptotic expansion $$b_n = 1 + \sum_{k=1}^\infty \frac{c_k}{n^k}$$ and insert it into $$(n+1)b_{n+1}b_n - \frac{b_n^2}{n} - 2b_n - n = 0$$ which gives after some extensive algebra $$0 = 2c_1 - 1 \\ + \sum_{m=1}^\infty \frac{1}{n^m} \left\{ \sum_{k=0}^m (c_{m-k} + c_{m+1-k}) \sum_{l=0}^k \binom{-l}{k-l} c_l + \sum_{l=0}^{m+1} \binom{-l}{m+1-l} c_l - \sum_{k=0}^{m-1} c_k c_{m-1-k} - 2c_m \right\} \, .$$ Setting the coefficient of each power $n^{-m}$ to zero, iteratively gives a linear equation for $c_{m+1}$ ($m=1,2,3,...$) in terms of $c_0=1$ and $c_k$ ($k=1,2,...,m$). The coefficient of $n^0$ already gives $c_1=1/2$. The first higher coefficients read: $c_2=5/8, c_3=13/16, c_4=155/128, c_5=505/256$. The denominator seems to follow a power of $2$ pattern.
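A quick numerical check that $b_n=a_n^2/n\to1$ (plain Python):

a = 1.0
for n in range(1, 100001):
    if n in (10, 100, 1000, 10000, 100000):
        print(n, a * a / n)   # a_n^2 / n, slowly approaching 1
    a = a / n + n / a         # a_{n+1} = a_n/n + n/a_n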
Mean and root mean square of a random variable
It is well known that the variance $\sigma^2$ of $x$ is defined as follows: $$\sigma^2 = \tilde{x}^2 - \bar{x}^2.$$ The variance is always non-negative. Therefore: $$\sigma^2 \geq 0 \Rightarrow \tilde{x}^2 - \bar{x}^2 \geq 0 \Rightarrow \tilde{x} \geq |\bar{x}|.$$
Balls in urns with a twist
An urn contributes to the sum each time it is chosen out of the first $s$ times that it or any higher urn is chosen. The $j$-th urn from the top (with index $N-j+1$) has probability $\frac 1j$ to be chosen in each of these $s$ choices, so we expect it to contribute $\frac sj$. Thus the expected value of the sum is $$ \sum_{j=1}^N\frac sj=sH_N\;. $$
"M is reflexive" implies "M is maximal Cohen-Macaulay". Is the converse true?
If $R$ is a local normal domain with $\dim R=2$, then every MCM is reflexive. First prove that $M$ is torsion-free. This shows that $M_{\mathfrak p}$ is free over $R_{\mathfrak p}$ for any prime $\mathfrak p$ of height $\le 1$. Next, if $\mathfrak p$ is a prime of height $2$ it's obvious that $M_{\mathfrak p}$ satisfies Serre's condition $(S_2)$. In the end, use Proposition 1.4.1(b) from Bruns and Herzog. (Maybe this can help you.)
Obtain the convergence order of a sequence of real numbers
Note that $$x^*=\lim_{n\to\infty} \left(1+\frac{1}{n}\right)^{\frac{1}{2}}= 1$$ We are looking for $p$ such that $$\lim_{n\to\infty} \frac{|x_{n+1}-x^*|}{|x_n-x^*|^p}= \lim_{n\to\infty} \frac{\big|\left(1+\frac{1}{n+1}\right)^{\frac{1}{2}}-1\big|}{\big|\left(1+\frac{1}{n}\right)^{\frac{1}{2}}-1\big|^p}=C>0$$ since by binomial series $$\left(1+\frac{1}{n}\right)^{\frac{1}{2}}=1+\frac1{2n}+o\left(\frac1n\right)$$ we have that $$\left(1+\frac{1}{n}\right)^{\frac{1}{2}}-1=\frac1{2n}+o\left(\frac1n\right)$$ $$\left(1+\frac{1}{n+1}\right)^{\frac{1}{2}}-1=\frac1{2(n+1)}+o\left(\frac1n\right)$$ and thus $$\lim_{n\to\infty} \frac{\big|\left(1+\frac{1}{n+1}\right)^{\frac{1}{2}}-1\big|}{\big|\left(1+\frac{1}{n}\right)^{\frac{1}{2}}-1\big|^p}= \lim_{n\to\infty} \frac{\frac1{2(n+1)}+o\left(\frac1n\right)}{\left(\frac1{2n}+o\left(\frac1n\right)\right)^p}=\begin{cases}1 \quad p=1\\0 \quad p<1 \\\infty \quad p>1\end{cases}$$ thus the order is $p=1$.
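A quick numerical confirmation of the linear order (plain Python):

import math

def x(n):
    return math.sqrt(1 + 1 / n)

for n in (10, 100, 1000, 10000):
    print(abs(x(n + 1) - 1) / abs(x(n) - 1))  # ratio tends to 1, so p = 1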
Solve $\lim_{x \rightarrow 0^+}(e^{\frac{1}{x}}x^2)$ without using L'Hopital's rule
Without using L'Hopital but although it looks same, $x=\dfrac{1}{t}\\$ , $\displaystyle L = \lim_{t\to\infty}\dfrac{e^t}{t^2} = \lim_{t\to\infty}\dfrac{1+t+\dfrac{t^2}{2!}+\cdots}{t^2} = \dfrac{1}{2}+ \lim_{t\to\infty}\left(\dfrac{t}{3!}+\dfrac{t^2}{4!}+\cdots\right)=\infty$
Determine all ring homomorphisms from $\mathbb{Z}\oplus \mathbb{Z}$ into $\mathbb{Z}\oplus \mathbb{Z}$
Hint: A ring homomorphism $f: \Bbb Z \oplus \Bbb Z \to \Bbb Z \oplus \Bbb Z$ is determined by the image of the elements $(1,0)$ and $(0,1)$. From there, we have $$ f(a,b) = af(1,0) + bf(0,1) $$ Of course, we have the additional constraint that $f(1,1) = (1,1)$, and of course $(1,0) + (0,1) = (1,1)$. Perhaps you could take it from there.
Do I need to study Group Representations before learning Rings in Artin's Algebra?
No: rings and modules may be studied for quite a while without a background in group representations and, if I recall correctly, this should be the case for Artin's Algebra, which covers more or less the topics for an undergraduate course in commutative algebra.
Why is the chain rule used for the area function $A=\frac{1}{2}xy$
You are right: here we only need the product rule. Since $x$ and $y$ are functions of $t$ perhaps it would be clearer if we wrote them as $x(t)$ and $y(t)$. Then we have $$\frac{dA}{dt}=\frac{d}{dt}\frac{1}{2}x(t)y(t)=\frac{1}{2}\frac{d}{dt}x(t)y(t)=\frac{1}{2}\left(x'(t)y(t)+y'(t)x(t)\right)=\frac{1}{2}\left(y\frac{dx}{dt}+x\frac{dy}{dt}\right).$$
Example for $f\in L^q$ with $\log|f|^q\not\in L^1$
Let $X=[0,1]$ with Lebesgue measure and $f(x)=e^{-1/x}$.
Deriving an identity using Einstein’s summation
Your $r^2$ should actually be $r^3=(r_l^2)^{3/2}$, because $\hat{r}=r^{-1}\vec{r}$. In a moment, we'll use the $k=-\frac32$ case of$$\partial_wr_l^2=2r_w\implies\partial_w(r_l^2)^k=2kr_w(r_l^2)^{k-1}.$$There's another mistake: dimensional analysis tells us the cross product's units are those of $r^{-3}m$, so the intended result should be divided by $r^3$ too. Given this, the desired result is $\frac{3\vec{m}\cdot\vec{r}}{r^5}\vec{r}-\frac{\vec{m}}{r^3}$. Indeed, use $\hat{e}_w\times\hat{e}_k=\epsilon_{wkm}\hat{e}_m$ to rewrite the cross product as$$\begin{align}\epsilon_{ijk}\epsilon_{wkm}m_i\partial_w\left(\frac{r_j}{r^3}\right)\hat{e}_m&amp;=(\delta_{jw}m_m-\delta_{jm}m_w)\frac{r^2\delta_{jw}-3r_jr_w}{r^5}\hat{e}_m\\&amp;=\frac{2m_mr^2+3r_w(m_wr_m-m_mr_w)}{r^5}\hat{e}_m\\&amp;=\frac{3\vec{m}\cdot\vec{r}}{r^5}\vec{r}-\frac{\vec{m}}{r^3}.\end{align}$$
Calculate a sum of elements over a finite field $\mathbb{Z}_n$
Let $F=\mathbb Z/n\mathbb Z$ be a finite field of prime order $n$ and $a\ne 0$. Then $k\mapsto ak$ is a permutation of the nonzero elements of $F$. We conclude that $$ \sum_{k=1}^{n-1} k^{-2} = \sum_{k=1}^{n-1} (ak)^{-2}=a^{-2}\sum_{k=1}^{n-1} k^{-2}.$$ Hence if we can find any $a\in F$ with $a\ne 0$ and $a^2\ne 1$, we immediately obtain $$\sum_{k=1}^{n-1} k^{-2}=0.$$ For $n\ge 5$, we can pick $a=2$.
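A quick check over a few small primes (Python 3.8+, where pow supports negative exponents with a modulus):

for p in (5, 7, 11, 13):
    s = sum(pow(k, -2, p) for k in range(1, p)) % p
    print(p, s)  # 0 for every prime p >= 5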
The exercise about the epsilon-delta definition of limit.
Let $\epsilon = 1$. $$ | x^2-2x|\leq 1 \Leftrightarrow (x^2-2x)^2\leq 1 \Leftrightarrow (x^2-2x-1)(x^2-2x+1)\leq 0 $$ $$ \Leftrightarrow (x^2-2x-1)(x-1)^2 \leq 0 \Leftrightarrow x^2-2x-1\leq 0 \Leftrightarrow x \in [1-\sqrt2,1+\sqrt2]. $$ We are looking for the largest $\delta$ so that $$ |x-0|\leq \delta \Rightarrow |x^2-2x|\leq 1, $$ i.e. $$ |x| \leq \delta \Rightarrow x\in [1-\sqrt2,1+\sqrt2], $$ which gives $$ \max \delta = \sqrt2 -1. $$ Now, perhaps the question meant "the largest of the 4 solutions". As D is the only valid answer, it should be that one.
An arrow is monic in the category of G-Sets if and only if its monic the category of sets
As wildildildlife pointed out, the naive approach is to consider morphisms $\alpha_0, \alpha_1 : \sigma\to\tau$, where $\sigma$ is the “trivial” object. The mistake is to take a singleton set as the “trivial” object; what we need instead is a “trivial” action of $G$ (= $G$-set), namely the canonical action of $G$ on itself given by the curried binary operation of $G$ (as in Cayley's theorem)! Denote by $|\rho|$ the carrier and by $\triangleleft_\rho$ the operation of any action $\rho$ of $G$. Let $\sigma$ be the canonical action of $G$, so that $|\sigma|=|G|$ and $\forall g_0 g_1\, (g_1\triangleleft_\sigma g_0 = g_1 +_G g_0)$. For every $x\in|\tau|$ (you defined $\tau$ in your question) define a function $\psi(g):=g\triangleleft_\tau x$. By the definition of an action, $g_1\triangleleft_\tau (g_0\triangleleft_\tau x) = (g_1 + g_0)\triangleleft_\tau x$, i.e. $g_1\triangleleft_\tau \psi(g_0) = \psi(g_1 + g_0) = \psi(g_1\triangleleft_\sigma g_0)$, so $\psi:\sigma\to\tau$ is a homomorphism of actions. Moreover $\psi(0)=0\triangleleft_\tau x=x$, i.e. we can build a homomorphism which maps $0$ to any $x$ we want. Proceed as in the naive approach: $\sigma$ is a separator in the category of actions of $G$.
$\ln^2(x)\overset{?}=2\ln(x)$
No it is not! Note that $\log (x^2)=2\log(x)$ but $\log^2(x)\color{red}\neq 2\log(x)$ Generally $\log(x^n)=n\log(x)$
Proving/Disproving there are always two uncountable sets whose intersection is uncountable.
The statement is false as noted by @Hanul Jeon in comments. Consider the following uncountable collection of uncountable disjoint subsets of $\mathbb{R}^2$: $$\mathcal{U}=\big\{\{x\}\times\mathbb{R}\ \big|\ x\in\mathbb{R}\big\}$$ Then consider any bijection $f:\mathbb{R}^2\to\mathbb{R}$ and note that $$f(\mathcal{U})=\big\{f(U)\ \big|\ U\in\mathcal{U}\big\}$$ is an uncountable collection of pairwise disjoint uncountable subsets of $\mathbb{R}$.
real analysis, functions, continuity
This is my first answer attempt, and I've also only recently studied this topic myself, but here it goes: $ \lim_{(x,y)\to(0,0)} \frac{x^2-y^2}{|x|+|y|}=0 $ because, using $|x|+|y|\ge \|(x,y)\|$, $$ \left|\frac{x^2-y^2}{|x|+|y|}\right| \le \frac{x^2+y^2}{|x|+|y|} \le \frac{\|(x,y)\|^2}{\|(x,y)\|}=\|(x,y)\|=\|(x,y)-(0,0)\|. $$ It's called Lipschitz-approximation in my country; I don't know if it's the same here. I'm open to feedback. So at $(0,0)$ the function is continuous, and of course everywhere else too.
Can we embed $S^2$ into $S^3$?
Short answer: any values of $m,n$ as long as $n\geq m$. These are simply restrictions of embeddings of $\mathbb{R}^m \to \mathbb{R}^n$: $$ F(x_1, \ldots, x_m) \;\; =\;\; (x_1, \ldots, x_m, \underbrace{0,0,\ldots, 0}_{n-m \; \text{times}}). $$
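To make the restriction to spheres explicit: the same padding map, viewed as $\mathbb{R}^{m+1}\to\mathbb{R}^{n+1}$, preserves the Euclidean norm, so it sends the unit sphere $S^m$ injectively into $S^n$; a continuous injection of the compact $S^m$ into the Hausdorff $S^n$ is automatically an embedding, and taking $m=2$, $n=3$ gives $S^2\hookrightarrow S^3$.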
eigenvectors of a matrix
You shouldn't round, if you can avoid it. Your matrix is \begin{bmatrix} 29 & 10 \\ 10 & 19 \end{bmatrix} whose characteristic polynomial is $$ X^2 - 48X + 451 $$ The roots are given by the formula $$ \frac{48\pm\sqrt{48^2-4\cdot 451}}{2} $$ so they are $24+5\sqrt{5}$ and $24-5\sqrt{5}$, so you computed correctly. An eigenvector relative to $24+5\sqrt{5}$ is a non zero solution of $$ \begin{bmatrix} 29-(24+5\sqrt{5}) & 10 \\ 10 & 19-(24+5\sqrt{5}) \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}= \begin{bmatrix} 0\\ 0 \end{bmatrix} $$ However, since we know that only one equation is sufficient, because by definition of eigenvalue the matrix has rank less than $2$, we can simply solve $$ (5-5\sqrt{5})x_1+10x_2=0 $$ or $$ (1-\sqrt{5})x_1+2x_2=0 $$ We can obviously give any (non zero) value to $x_2$, so we set $x_2=1$ and so $$ x_1=\frac{2}{\sqrt{5}-1}=\frac{2}{\sqrt{5}-1}\frac{\sqrt{5}+1}{\sqrt{5}+1} =\frac{2(\sqrt{5}+1)}{5-1}=\frac{\sqrt{5}+1}{2} $$ and an eigenvector is \begin{bmatrix} \frac{\sqrt{5}+1}{2}\\[2ex] 1 \end{bmatrix} Similarly for the other eigenvector.
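A numerical cross-check, assuming numpy is available:

```python
# Verify the eigenvalues 24 +/- 5*sqrt(5) and the eigenvector found above.
import numpy as np

A = np.array([[29.0, 10.0], [10.0, 19.0]])
lam = 24 + 5 * np.sqrt(5)
v = np.array([(np.sqrt(5) + 1) / 2, 1.0])
print(np.allclose(A @ v, lam * v))    # True
print(np.sort(np.linalg.eigvals(A)))  # 24 - 5*sqrt(5) and 24 + 5*sqrt(5)
```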
Counting the number of arrangements around a rectangle.
The problem lies in an ambiguity in the book. In the phrase "are A and B seated [at?] longer sides of the table across from each other", "across from each other" may be taken to refer either to the sides or to the people being seated. You took it to refer to the sides, and your calculation is correct for that interpretation of the question. The answer you quote from the book shows that the intended interpretation is that it's the people, not the sides, that are across from each other.
How do you randomly select unit grid boxes non uniformly from a rectangle?
If the top left square of the grid is called $(1, 1)$ and the bottom right $(i, j)$ (following standard matrix indexation), then you could say, for instance, that the probability of filling box $(m, n)$ is $$ \frac{m+n}{i+j} $$ This can be tweaked to suit your needs. For instance, this formula guarantees that the bottom right square will be chosen. If you don't like that you can use, for instance, $$ \frac{m+n}{i+j+1} \quad \text{ or }\quad \frac{m+n-1}{i+j} $$ A different approach is $$ p^{m/i}q^{n/j} $$ where $p, q$ are numbers between $0$ and $1$. I am certain that there are many more ways to do it as well.
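A minimal sketch of the first scheme in Python (the helper name and the $4\times 6$ demo grid are just for illustration):

```python
import random

# Fill box (m, n) of an i x j grid independently with probability
# (m + n) / (i + j), using the 1-based indexing of the formula above.
def fill_grid(i, j):
    return [[random.random() < (m + n) / (i + j) for n in range(1, j + 1)]
            for m in range(1, i + 1)]

for row in fill_grid(4, 6):
    print("".join("#" if filled else "." for filled in row))
```

Note that boxes near the bottom right are filled more often, and box $(i,j)$ itself always is.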
Overloading binary operation symbols
I think it's inappropriate, because $v\in \phi$ already has a meaning. I would define a new object, say $$\phi^*=\{v\in V_1|\exists w\in V_2, (v,w)\in\phi\}\cup\{v\in V_2|\exists w\in V_1,(w,v)\in\phi\}$$ and then write $v\in \phi^*$.
An example of a metric space
Hint: play around with $[0,\infty)$.
Definition of a real-valued random variable
The probability of an event $X \in A$ is, by definition, the $P$-measure of the set of "outcomes" $\omega$ for which $X(\omega)$ is in $A$. Strictly speaking, all events are measurable subsets of the sample space, but it's usually simpler to speak of events involving random variables without explicitly mentioning the sample space. For example, if your random variable $X$ is the number of successes in $n$ independent Bernoulli trials with probability of success $p$ in each one (and thus has the binomial($n$,$p$) distribution), the sample space $\Omega$ could be $\{0,1\}^n$ (where $0$ corresponds to failure and $1$ to success). $\mathcal B$ would be all subsets of $\Omega$, and $P$ gives each outcome with $k$ 1's and $n-k$ 0's probability $p^k (1-p)^{n-k}$. But then if you want to relate this to some other random variable $Y$ that is not determined by those same Bernoulli trials, you'll need a bigger sample space, where each outcome consists not just of the outcomes of those $n$ trials but also something else that determines $Y$. EDIT: This is what distinguishes probabilists from real analysts. The analyst is studying real-valued functions on a given space $\Omega$ with a given $\sigma$-algebra $\mathcal B$ and probability measure $P$. The probabilist will use the rigourous definition of the random variable $X$ in terms of $\Omega$ and $\mathcal B$ if necessary, but really thinks of $X$ in terms of a quantity involved in some (actual or imagined) experiment, and he/she is willing to change $\Omega$ and $\mathcal B$ in midsentence if that becomes convenient.
Smallest and Largest Eigenvalues of a PSD Matrix
The first question does not have an answer since an arbitrary matrix can possess nonreal eigenvalues and equations (1) and (2) above are meaningless in this context. However, even if we adjust the question, the answer is 'no'. Given $A \in M_n(\mathbb{C})$, the field (of values) of $A$ (also called the numerical range of $A$), is defined by $$ F(A) = \{ z^*Az\mid z^*z = 1\}. $$ It is well-known that $F(A)$ is compact (easy to prove), $\sigma(A) \subseteq F(A)$ (easy to prove), and convex (nontrivial; known as the Hausdorff-Toeplitz theorem). For $A \in M_n(\mathbb{C})$, let $\rho_{\min}(A) := \min\{\lambda \mid \lambda \in \sigma(A)\}$ and $\rho_\max(A) := \max \{\lambda \mid \lambda \in \sigma(A)\}$. Is it the case that \begin{equation} \rho_\min(A) = \min\{ |x| \mid x \in F(A) \} \tag{$\ast$} \end{equation} and \begin{equation} \rho_\max(A) = \max\{ |x| \mid x \in F(A) \}? \tag{$\ast\ast$} \end{equation} The answer is 'no'. The elliptical range theorem states that the field of a two-by-two matrix is a (possibly degenerate) elliptical disk. In particular, if $$ A = \begin{bmatrix} a & b \\ 0 & c \end{bmatrix}, $$ then $F(A)$ is an elliptical disk with foci at $a$ and $c$ and minor-axis length $|b|$. For example, the field of the matrix $$ \begin{bmatrix} 1 & i \\ 0 & -1 \end{bmatrix} $$ is the elliptical disk centered at the origin with foci at $\pm 1$ and minor-axis length $1$. This disk contains $0$, so $\min\{|x| \mid x \in F(A)\} = 0 \neq -1 = \rho_\min(A)$, and its semi-major axis has length $\sqrt{5}/2$, so $\max\{|x| \mid x \in F(A)\} = \sqrt{5}/2 \neq 1 = \rho_\max(A)$. Hence $(\ast)$ and $(\ast\ast)$ do not hold.
Find $y'$ and $y''$ : $ y=x^2\ln(2x)$
$$y = x^2 \ln 2x$$ Using the product rule we get $$y' = 2x \cdot \ln 2x + {\color{blue}{x^2 \cdot \frac{2}{2x}}}$$ which simplifies to $$y' = 2x \ln 2x + {\color{blue}{x}}$$ Then differentiating this again using the product rule on the first term we get $$y'' = 2 \ln 2x + 2x \cdot \frac{2}{2x} + 1 = 2 \ln 2x + 3$$ Remember: $${\color{blue}{x^2 \cdot \frac{1}{x} = x}}$$
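A quick symbolic cross-check, assuming sympy is available:

```python
# Verify y' and y'' for y = x**2 * ln(2x) symbolically.
import sympy as sp

x = sp.symbols('x', positive=True)
y = x**2 * sp.log(2 * x)
print(sp.expand(sp.diff(y, x)))     # 2*x*log(2*x) + x
print(sp.expand(sp.diff(y, x, 2)))  # 2*log(2*x) + 3
```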
Determine $N(A),R(A),R(A^T)$ in terms of eigenvectors
I'm going to assume you're dealing with a $3\times3$ matrix here. The key here is to recognize that eigenvectors must be linearly independent if they correspond to different eigenvalues. The column space of $A$ is spanned by the eigenvectors with nonzero eigenvalues, as this matrix has an eigenbasis (a set of eigenvectors that span $\mathbb{R}^3$): since any vector $\vec{v} \in \mathbb{R}^3$ can be written as a unique linear combination of the eigenvectors, the image of any vector is $A\vec{x} = A (a_1 \vec{x}_1 + a_2 \vec{x}_2 + a_3\vec{x}_3) = a_2 \vec{x}_2 + 2a_3\vec{x}_3$, for some $a_2, a_3 \in \mathbb{R}$. To find the row space, look for the subspace of $\mathbb{R}^3$ that is perpendicular to the null space. So the row space of $A$ here is $\{\vec{v} \text{ }| \text{ } \vec{v} \cdot \vec{x}_1 = 0\}$. Again, we know that $A\vec{x}$ can be rewritten as $A(a_1 \vec{x}_1 + a_2 \vec{x}_2 + a_3\vec{x}_3) = a_2 \vec{x}_2 + 2a_3\vec{x}_3$, because all the $\vec{x}_i$ are independent and thus form a basis for $\mathbb{R}^3$. Equating the two sides, we must have $a_2 = -2, a_3 = -3/2$. However, we can choose any value of $a_1 \in \mathbb{R}$ because $A$ maps any multiple of $\vec{x}_1$ to the zero vector. So any vector $\vec{v}$ in the form $\vec{v} = a_1 \vec{x}_1 - 2\vec{x}_2 - \frac{3}{2}\vec{x}_3$ is a solution to this equation. We know that any linear combination including $\vec{x}_1$ is inconsistent, because the range of $A$ is spanned solely by the vectors $\vec{x}_2$ and $\vec{x}_3$. Because $\vec{x}_1, \vec{x}_2$, and $\vec{x}_3$ are linearly independent, we know there is no solution to $A \vec{x} = a_1 \vec{x}_1 + a_2 \vec{x}_2 + a_3 \vec{x}_3$ with $a_1 \neq 0$.
A fourth degree curve with four singular points is reducible
Easier : take a conic through the four singular points and an arbitrary fifth point of the quartic. The two curves intersect in $\geq 9$ points and thus have a common irreducible component, necessarily of degree $\leq 2$ , and since the quartic contains that component, it is reducible.
Divergent non negative series smaller than harmonic series
What about $\displaystyle\sum_{n=2}^\infty\frac1{n\log n}$?
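To verify both required properties: $\frac{1}{n\log n}<\frac{1}{n}$ for all $n\ge 3$, yet by the integral test $$\int_2^\infty \frac{dx}{x\log x}=\lim_{b\to\infty}\Big[\log\log x\Big]_2^b=\infty,$$ so the series diverges while being (eventually) termwise smaller than the harmonic series.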
Algebraic versus topological line bundles
Let $X$ be a complex analytic space. There is an exact sequence, the exponential exact sequence, which is of fundamental importance for analyzing this and related questions: $$ 0 \to 2 \pi i \mathbb Z \to \mathcal O_X \buildrel \exp \over \longrightarrow \mathcal O^{\times}_X \to 1 .$$ Assume now that $X$ is proper and connected. When we pass to cohomology, the sequence of $H^0$s is then short exact, but we obtain the following crucial long exact sequence: $$H^1(X, 2 \pi i \mathbb Z) \to H^1(X,\mathcal O_X) \to Pic(X) \to H^2(X,2 \pi i \mathbb Z) \to H^2(X,\mathcal O_X).$$ Here I am writing (as is usual) $Pic(X)$ to denote $H^1(X,\mathcal O_X^{\times})$, the group of isomorphism classes of analytic line bundles on $X$. If $X$ is algebraic, then by GAGA this is the same as the group of algebraic line bundles on $X$. The boundary map $Pic(X) \to H^2(X,2 \pi i \mathbb Z)$ is the Chern class map (with a $2\pi i$ twist, or Tate twist; this is natural in the algebraic context, and to get the topological Chern class you just divide through by $2 \pi i$). So we see that the kernel of the Chern class map can be identified with $H^1(X,\mathcal O_X)/H^1(X,2 \pi i\mathbb Z)$, and vanishes when $H^1(X,\mathcal O_X) = 0$. The image of $Pic(X)$ under the Chern class map is called the Neron--Severi group; its kernel is denoted $Pic^0(X)$ or $Pic^{\tau}(X)$. When $X$ is algebraic, $Pic(X)$ is naturally an algebraic group, $Pic^0(X)$ is the connected component of the identity, and $H^1(X,\mathcal O_X)$ is the tangent space to the identity. If $X$ is a smooth projective curve, then $Pic^0(X)$ is usually called the Jacobian of $X$. You can look at the section of Hartshorne in Chapter IV to get some sense of it, although you may not realize from reading that how fundamental the role of the Jacobian is in the theory of algebraic curves. If you google Torelli theorem, Abel--Jacobi theorem, and theta divisor (just to give some sample search terms) you will get some sense of it. Griffiths and Harris also has a detailed discussion, which gives a better sense of its significance. If $X$ is algebraic but not proper, then you can compactify it by adding a divisor at infinity. Let me write $\overline{X}$ for the compactification, and let me assume that $\overline{X}$ is in fact smooth, so then the divisor $D := \overline{X}\setminus X$ is a Cartier divisor, and gives rise to an associated line bundle $\mathcal O(D) \in Pic(\overline{X}).$ (This is denoted $\mathcal L(D)$ in Hartshorne, I think, and in some other texts, especially older ones, but $\mathcal O(D)$ is more common notation these days, and is better notation too.) If $D$ is reducible (as can happen; in general it can be taken to be a normal crossings divisor, but no better --- e.g. to compactify a curve to a smooth projective curve, we have to add in a finite number of points, but one point will not be enough, typically), write it as $D_1 \cup \cdots \cup D_n$. Then we also have associated line bundles $\mathcal O(D_i)$ for each $i$, whose product is $\mathcal O(D)$, and note that each of these is trivial when restricted to $X$ (because $X = \overline{X} \setminus D$). One now sees that $Pic(X) = Pic(\overline{X})/\langle \mathcal O(D_1),\ldots,\mathcal O(D_n) \rangle,$ and so if $Pic^0(\overline{X})$ is non-trivial, then $Pic(X)$ will also have a non-discrete part (because we can't kill a connected algebraic group by quotienting out a finitely generated subgroup).
So the answer to your question, at least for smooth $X$, is: compactify $X$ to $\overline{X}$, and then compute $H^1(\overline{X}, \mathcal O)$; if this is non-trivial, then the Chern class map has a (huge!) kernel. [Added later: As an example, if $X$ is a hypersurface in $\mathbb P^n$ for $n > 2$ (so $X$ has dimension $> 1$), then $H^1(X,\mathcal O_X) = 0$ (exercise!), and so hypersurfaces give interesting examples. For surfaces, the dimension of $H^1(X,\mathcal O_X)$ was classically (i.e. by the Italians) known as the irregularity of the surface $X$. (The reason being that they knew formulas, like Riemann--Roch, for surfaces in space, which when they tried to extend to more general surfaces became false unless the extra quantity $\dim H^1(X,\mathcal O_X)$ was introduced --- although of course they didn't describe it this way.) See my comment here, as well as the notes of Kleiman linked to by Jason Starr, which will tell you a lot about Picard varieties and much more.]
Example of continuous real valued functions
Hint: It's not hard to show that if $f:\mathbb{R}\rightarrow \mathbb{R}$ is open then it has to be strictly monotone.
Lie Group Homomorphism from $U(n)$ into $SO(2)$
No, that is not a map from $SU(n)$ into $SO(2,\mathbb R)$. What your teacher meant was the map $M\mapsto e_{SO(2,\mathbb R)}$. But consider the map $\det\colon U(n)\longrightarrow\mathbb C\setminus\{0\}$, and the map$$\begin{array}{rccc}\psi\colon&\mathbb C\setminus\{0\}&\longrightarrow&GL(2,\mathbb R)\\&a+bi&\mapsto&\begin{bmatrix}a&-b\\b&a\end{bmatrix}.\end{array}$$Then $\psi\circ\det$ is a group homomorphism and its image is precisely $SO(2,\mathbb R)$, since\begin{align}\det\bigl(U(n)\bigr)&=\{z\in\mathbb C\mid\lvert z\rvert=1\}\\&=\bigl\{\cos(\theta)+i\sin(\theta)\mid\theta\in\mathbb R\bigr\}.\end{align}
How to construct an operator $T$ such that $T^2=0$?
If $\{e_n\}$ is a Hilbert base, define $$T(e_1)=e_2$$ $$T(e_n)=0\text{ for }n\ge 2$$
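To check the construction: $T$ is nonzero since $Te_1=e_2$, while $T^2e_1=T(e_2)=0$ and $T^2e_n=0$ for $n\ge 2$, so $T^2$ vanishes on the Hilbert base and hence, by linearity and continuity, $T^2=0$.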
Discrete math logic question
Yes, indeed, you are correct in your assessment of the truth or falsity of each statement. In the first, we can see this as allowing $y$ to depend on $x$. So for any given $x$, we can find some $y$, and in particular, we can simply choose $y = 7 - 2x$, which will guarantee the equality holds. In the second case, $y$ cannot depend on any given $x$. For the statement to be true, we need to consider the existence of a particular $y$ such that for every $x$, regardless of what $x$ may be, the equality holds. Since $x$ can vary, but $y$ cannot vary accordingly, the statement is clearly false. These two statements help demonstrate just how crucial the order of quantifiers and quantified variables can be: in the first, we have a true statement, and in the second, a false statement, and the only difference between them is the placement of $\exists y \cdots$.
Unbiased estimator for median (lognormal distribution)
First find that:$$\mathbb{E}X^{\alpha}=e^{\alpha\mu+\frac{1}{2}\alpha^{2}\sigma^{2}}$$ For that have a look at this answer. Then: $$e^{\hat{\mu}}=e^{\frac{1}{n}\sum_{i=1}^{n}\ln X_{i}}=\prod_{i=1}^{n}X_{i}^{\frac{1}{n}}$$ so that: $$\mathbb{E}e^{\hat{\mu}}=\prod_{i=1}^{n}\mathbb{E}X_{i}^{\frac{1}{n}}=\left(\mathbb{E}X_{1}^{\frac{1}{n}}\right)^{n}=\left(e^{\frac{1}{n}\mu+\frac{1}{2n^{2}}\sigma^{2}}\right)^{n}=e^{\mu+\frac{1}{2n}\sigma^{2}}$$ Then consequently:$$\mathbb{E}e^{\hat{\eta}}=\mathbb{E}e^{\hat{\mu}-\frac{1}{2n}\sigma^{2}}=e^{-\frac{1}{2n}\sigma^{2}}\mathbb{E}e^{\hat{\mu}}=e^{-\frac{1}{2n}\sigma^{2}}e^{\mu+\frac{1}{2n}\sigma^{2}}=e^{\mu}=\eta$$
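A Monte Carlo sanity check, assuming numpy is available and $\sigma$ is known, as in the derivation:

```python
# Check E[exp(mu_hat - sigma^2/(2n))] = exp(mu) for the lognormal median.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, trials = 1.0, 0.8, 10, 200_000
log_x = rng.normal(mu, sigma, size=(trials, n))  # ln X_i ~ N(mu, sigma^2)
mu_hat = log_x.mean(axis=1)
eta_hat = np.exp(mu_hat - sigma**2 / (2 * n))
print(eta_hat.mean(), np.exp(mu))  # the two values should agree closely
```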
binomial expansion for negative and fractional powers
For indices which are not positive integers you look at $(1+x)^a$ for $|x| \lt 1$ and expand as a power series in $x$. When $a$ is a positive integer the coefficient of $x^k$ is $\binom{a}{k}$. This may be written as: $$ P_k(a) = \frac1{k!}a(a-1)...(a-k+1) $$ so that (still with $a$ a positive integer) we have the binomial expansion ($P_0(a)=1$): $$ (1+x)^a = \sum_{k=0}^a P_k(a)x^k $$ since $P_k(a) = 0$ if $k \gt a$ we may write this as: $$ (1+x)^a = \sum_{k=0}^{\infty} P_k(a)x^k $$ and it turns out that this same form can be used for fractional or negative integer values of $a$ for which $P_k(a) \ne 0$ for an infinite sequence of values of $k$. To see why this should work let us compute: $$ (1+x)^{a+1} = (1+x)(1+x)^a $$ if the expansion is valid we require: $$ \sum_{k=0}^{\infty} P_k(a+1)x^k = (1+x)\sum_{k=0}^{\infty} P_k(a)x^k $$ or, for $k \gt 0$ $$ P_k(a+1) = P_k(a) + P_{k-1}(a)\tag{1} $$ In other words (leaving questions of convergence aside) we want the polynomials $P_k(a)$ to satisfy the same recurrence relation as the binomial coefficients do for $a$ an integer: $$ \binom{a+1}{k}=\binom{a}{k}+\binom{a}{k-1} $$ you may find it instructive to prove (1) directly. Or you could note that for each value of $k$ the relation is a polynomial equation in $a$ of degree $k$, which we already know is satisfied for an infinite set of (positive integer $\gt k$) values of $a$. so it must hold identically.
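A small numerical illustration (the helper `P` is an ad hoc name): truncating $\sum_k P_k(a)x^k$ reproduces $(1+x)^a$ for a fractional exponent.

```python
# Compare a truncated generalized binomial series with (1+x)**a for a = -1/2.
def P(k, a):
    out = 1.0
    for i in range(k):
        out *= (a - i) / (i + 1)  # builds a(a-1)...(a-k+1)/k!
    return out

a, x = -0.5, 0.3
partial = sum(P(k, a) * x**k for k in range(40))
print(partial, (1 + x)**a)  # agree to many decimal places since |x| < 1
```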
Show that $(\ln a)^k \neq k \ln a $
Because we need to disprove this statement, we only need to find one counterexample: with $a=e$ and $k=2$, $$(\ln e)^2 = 1 \neq 2 = 2\ln e.$$ Statement disproved.
Fourier Transforms and Schwartz Space
If a function is infinitely differentiable and has a Fourier transform, but is not in the Schwartz space $\mathscr S(R)$, then its transform is NOT in $\mathscr S(R)$. For example, if a function in $L^1(R)$ is $C^\infty$ but not in $\mathscr S(R)$, then it has a FT (all $L^1(R)$ functions have FT), but the FT is not in $\mathscr S(R)$. Consider for example $f(x)=1/(1+x^2)$. The FT maps $\mathscr S(R)$ onto $\mathscr S(R)$ in a one-to-one way, so the FT of functions not in $\mathscr S(R)$ will never be in $\mathscr S(R)$, even if they are $C^\infty$. For a function to be in $\mathscr S(R)$ it not only has to be $C^\infty$, but in addition it and its derivatives have to go to zero at $\pm\infty$ fast enough.
Solve $\mathscr{F}^{-1} [ \cot{a \omega} \times \mathscr{F} \{ U(t) \sin{\omega_0 t} \} ] $ using contour integration
This fact may make things a little easier: $$ \mathscr{F}^{-1} (\hat{f} \hat{g}) = f * g $$ where $\hat f = \mathscr{F}( f)$ and $f*g$ is convolution.
Show that there is no set satisfying the following properties.
I'm glad to see my notes are in use. I should make the caveat that they were written for my students, who previously took introduction to set theory with Azriel Levy and myself (where a lot of material was covered). Hilbert's paradox predates the axiom of regularity by quite a few years. So appealing to it in the solution is admittedly overshooting. The same can be said about replacement (which is used for transfinite recursion). The solution, instead, is due to Zermelo, and it is quite clever: If $S$ satisfies the two properties, then $\bigcup S\in S$, and therefore $\mathcal P(\bigcup S)\in S$. This now implies that $\mathcal P(\bigcup S)\subseteq\bigcup S$. This is impossible, as the previous exercise in the notes points out, since $\{x\in X\mid x\notin x\}\in\mathcal P(X)\setminus X$ for any set $X$. Of the "strong axioms", we only need separation and union, since power set is implicit in the properties of the set. Of the separation axioms, we only need one; and we only need union to apply to $S$ itself.
Let $\theta : \mathbb{C} → \mathbb{R}$ be a homomorphism. Prove that $\theta(x) = 0$ for all $x \in \mathbb{C}$.
Since $\theta$ is a homomorphism, we have $ 0 = \theta(0) = \theta(i^2 + 1^2) = \theta(i^2) + \theta(1^2) = \theta(i)^2 + \theta(1)^2. $ But then, since $\theta(i), \theta(1) \in \mathbb{R}$ we must have $\theta(i) = \theta(1) = 0.$ Thus, $\theta(x + yi) = \theta(x\cdot 1) + \theta(y\cdot i) = \theta(x) \theta(1) + \theta(y) \theta(i) = \theta(x)\cdot 0 + \theta(y)\cdot 0 = 0.$
Contour Integration (Choice of Contour)
You can choose a semicircle in the half-plane $\operatorname{Re} z \geqslant \sigma$. Then $e^{\alpha z}$ is bounded on the contour - $\operatorname{Re} (\alpha z) = \alpha\operatorname{Re} z \leqslant \alpha\sigma \leqslant 0$, and $\lvert e^{\alpha z}\rvert = e^{\operatorname{Re} (\alpha z)}$ - and the integral over the semicircle tends to $0$ for $R \to \infty$. Since the contour encloses no singularity, the contour integral is $0$, and hence $$\int_{\sigma - i\infty}^{\sigma+i\infty} \frac{e^{\alpha z}}{z^2+1}\,dz = 0$$ for $\sigma > 0$ and $\alpha \leqslant 0$. It is worth noting that $$I(\alpha) = \int_{\sigma-i\infty}^{\sigma+i\infty} \frac{e^{\alpha z}}{z^2+1}\,dz = \begin{cases}\quad 0 &, \alpha \leqslant 0\\ 2\pi i\sin \alpha &, \alpha \geqslant 0 \end{cases}$$ does not depend smoothly on $\alpha$, although the integrand does. That should, however, not be too surprising since the differentiated integrand $\dfrac{ze^{\alpha z}}{z^2+1}$ is not integrable anymore, and the non-differentiability in $0$ manifests itself by the need to switch the half-plane in which the contour is closed to have the integral over the auxiliary part tend to $0$.
Orbit stabilizer theorem gives a homeomorphism for algebraic groups?
For the first question, the answer is in general no. For example, $\textrm{GL}_n(\mathbb{R})$ acts continuously on $\mathbb{R}^n$, and there are two orbits, the zero vector and its complement. However, if $G$ is a topological group acting continuously on a topological space $X$, then the quotient space $G \setminus X$ is T1 if and only if all orbits are closed. This is because a point in $G \setminus X$ is closed if and only if its preimage is closed in $X$. For the second question, the answer is yes. Since $k$ is locally compact Hausdorff, so are $G(k)$ and $X(k)$, and $G(k)$ has a countable basis. So there is a more general result we can apply. This is from J.S. Milne's notes on modular forms. Lemma 1: If $G$ is a nonempty locally compact Hausdorff topological space, and $G$ is the countable union of closed sets $V_n$, then at least one of the $V_n$ contains a nonempty open set. Proof: Suppose no $V_n$ contains a nonempty open set. Let $U_1$ be a nonempty open set in $G$ whose closure is compact. Then $U_1 \not\subseteq V_1$, which means $U_1 \cap V_1 \subsetneq U_1$. Now $U_1 \cap V_1$ is closed in $U_1$, so we can find a nonempty open set $U_2 \subseteq U_1$ whose closure $\overline{U_2}$ is contained in $U_1$ and disjoint from $U_1 \cap V_1$. This is because compact sets form a neighborhood basis of any point. Now $U_2 \not\subseteq V_2$, so $U_2 \cap V_2 \subsetneq U_2$, and we can find a nonempty open set $U_3 \subseteq U_2$ whose closure $\overline{U_3}$ is compact, contained in $U_2$, and disjoint from $U_2 \cap V_2$. Iterating, we have open sets $U_1 \supseteq U_2 \supseteq \cdots$ with compact closures $\overline{U_1} \supseteq \overline{U_2} \supseteq \cdots$ such that $\overline{U_i}$ is disjoint from $U_{i-1} \cap V_{i-1}$ for $i \geq 2$. The intersection of the compact sets $\overline{U_i} : i = 2, 3, ...$ must be nonempty. Let $p$ be in this intersection. For each $i \geq 1$, we have $p \in U_i$. And for each $i \geq 2$, we have $p \not\in U_{i-1} \cap V_{i-1}$. Hence $p$ is not in any of the $V_i$, which is impossible. $\blacksquare$ Lemma 2: Let $G$ be a locally compact Hausdorff topological group with a countable basis, acting continuously on a locally compact Hausdorff space $X$. If $x_0 \in X$, then the natural map $G/ \textrm{Stab}(x_0) \rightarrow G.x_0$ is a homeomorphism. Proof: We may replace $X$ by the orbit of $x_0$ and assume $G$ acts transitively on $X$. Let $H$ be the stabilizer of $x_0$. The universal property of quotients gives us a continuous bijection $G/H \rightarrow X$, which we need to show is open. Let $U$ be open in $G$. Let $gx_0$ be a point of $Ux_0$. We need to find an open neighborhood $W$ of $gx_0$ which is contained in $Ux_0$. Of course, we can choose $g$ to be in $U$. Standard topological group shenanigans gives us a neighborhood $V$ of $1_G$ such that $V = V^{-1}$, and $gV^2 \subseteq U$. Furthermore, we can choose $V$ to be compact. Since $G$ has a countable basis, it is easy to see that $G$ is the union of countably many translates $g_nV$ for some $g_n \in G$. Each $g_nV$ is compact, hence so is the image $g_nVx_0$. So $X$ is the countable union of the closed sets $g_nVx_0$. By the first lemma, one of the closed sets $g_nVx_0$ contains a nonempty open set. These closed sets are translated around homeomorphically by $G$. So this means that $Vx_0$ must contain a nonempty open set $W$. Let $hx_0 \in W$ for some $h \in V$. Then $gx_0 = gh^{-1}hx_0$ lies in the open set $gh^{-1}W$.
But since $W \subseteq Vx_0$, $h \in V = V^{-1}$, and $gV^2 \subseteq U$, we have $$gh^{-1}W \subseteq gh^{-1}Vx_0 \subseteq gV^{-1}Vx_0 = gV^2x_0 \subseteq Ux_0$$
Find: $\lim_{n \to \infty} \left(\left(\frac{n}{n+1}\right)^\alpha+\sin\left(\frac{1}{n}\right)\right)^n$...
Let us consider $$A=\frac{1}{(1+h)^{\alpha}}+\sin({h})$$ and use Taylor series; as a result $$A=1+(1-\alpha) h+\frac{1}{2} \left(\alpha^2+\alpha \right) h^2+O\left(h^3\right)$$ So $$\log(A)=(1-\alpha ) h+\left(\frac{3 \alpha }{2}-\frac{1}{2}\right) h^2+O\left(h^3\right)$$ $$\frac{1}{h}\log(A)=(1-\alpha ) +\left(\frac{3 \alpha }{2}-\frac{1}{2}\right) h+O\left(h^2\right)$$ I am sure that you can take it from here.
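To take it the rest of the way: with $h=\frac1n$ we have $n\log(A)=\frac1h\log(A)\to 1-\alpha$, hence $$\lim_{n\to\infty}A^n=\lim_{n\to\infty}e^{n\log(A)}=e^{1-\alpha}.$$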
Patterns of no formula.
Let $p(n)$ be the $n$th term in the sequence. Clearly, this sequence follows the formula: $$\begin{align} p(x) &= \frac{600631 x^{19}}{121645100408832000}-\frac{791723 x^{18}}{800296713216000}+\frac{196988587 x^{17}}{2134124568576000}\\ &-\frac{41785811 x^{16}}{7846046208000}+\frac{8219611 x^{15}}{38626689024}-\frac{49026370303 x^{14}}{7846046208000}+\frac{26296057821373 x^{13}}{188305108992000}\\ &-\frac{1098593289863 x^{12}}{452656512000}+\frac{320897391017407 x^{11}}{9656672256000}-\frac{39606777445183 x^{10}}{109734912000}\\ &+\frac{30088961291838131 x^9}{9656672256000}-\frac{12871880314235441 x^8}{603542016000}+\frac{5410873671286319827 x^7}{47076277248000}-\frac{708875674839982733 x^6}{1471133664000}+\frac{4033947669590964373 x^5}{2615348736000}-\frac{599274486262658993 x^4}{163459296000}\\ &+\frac{49104110859304547 x^3}{7916832000}-\frac{2153634755170519 x^2}{308756448}+\frac{3188726258687 x}{692835}-1320490 \end{align}$$ Thus, $p(20)=42$. Ok, so that was a joke. However, this illustrates an important point--you can find some formula for $p(n)$ such that $p(20)$ is any value you wish. Without other context or information, coming up with the $20$th term in this sequence is not a well-defined question.
Finding the ratios of triangle sides
Your logic is sound because one Pythagorean triple that fits your picture is $$(589,300,661)$$ where $\sin^{-1}\big(\frac{300}{661}\big)\approx 0.471089961 \text{ rad}\approx 26.99146656^\circ$ Side ratios: $$A/C= 589/661\approx 0.89107413$$ $$B/C= 300/661\approx 0.453857791$$ $$A/B=589/300\approx 1.963333333$$
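A quick verification in Python:

```python
import math

# Check the triple and the quoted angle.
print(589**2 + 300**2 == 661**2)           # True
print(math.asin(300 / 661))                # ~0.471090 rad
print(math.degrees(math.asin(300 / 661)))  # ~26.991467 degrees
```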
Why does FTOC apply here, to find the derivative of $\int_{\sin(x)}^{\pi} \frac{t}{\cos(t)} dt$
$ f $ is defined for $ t $ such that $$t\ne \frac{\pi}{2}+k\pi,$$ i.e. $$t\in \Bbb R \setminus \{\dots,-1.57\dots,\ 1.57\dots,\dots\},$$ the excluded points being the odd multiples of $\frac{\pi}{2}\approx 1.57$. For $ x\in \Bbb R $, we have $$[0,\sin(x)]\subset [-1,1]$$ and $$\frac{\pi}{2}\approx 1.57\notin [-1,1].$$ So $ f $ is continuous on $ [-1,1] $ and consequently on $ [0,\sin(x)]$, which is why the FTOC applies.