What's the difference between the expectation of a function of a random variable and the law of the unconscious statistician
The "Law of the Unconscious Statistician" is just a name for the fact that $E(g(X))$ is given by the formula you wrote. There is no difference.
Prove: $h \in T_xM \iff \operatorname{dist}(x+\epsilon h,M) = o(\epsilon)$
Here you go: $ " \Rightarrow " $ Let $\gamma: \mathbb{R} \to M$ be a curve s.t. $\gamma(0)=x$ and $\gamma'(0)=h$. You can always find such a curve. Now you have: $$ \operatorname{dist}(x+\epsilon h,M) \leq \operatorname{dist}(x+\epsilon h,\gamma(\epsilon)) $$ So what we want to show is that $$ \lim_{\epsilon \to 0} \frac{\operatorname{dist}(x+\epsilon h,\gamma(\epsilon)) }{\epsilon}=0 $$ We can rewrite \begin{eqnarray} \frac{\operatorname{dist}(x+\epsilon h,\gamma(\epsilon)) }{\epsilon}&=&\frac{\Vert\gamma(\epsilon)-\epsilon\gamma'(0)-\gamma(0) \Vert }{\epsilon}\\ &\leq & \left\Vert\frac{\gamma(\epsilon)-\gamma(0) }{\epsilon}- \gamma'(0) \right\Vert \end{eqnarray} Now letting $\epsilon \to 0$ we obtain that $$ \lim_{\epsilon \to 0}\frac{\gamma(\epsilon)-\gamma(0) }{\epsilon}= \gamma'(0) $$ This shows our claim. "$\Leftarrow$" We want to show that $h \in \ker(Dg_x)$. Choose a domain of definition $V \subset U$ for $g$ s.t. $\overline{V}\subset U$ and $\overline{V}$ is compact in $U$. By this we can assume that $g$ is Lipschitz continuous with Lipschitz constant $L$. Now let $\gamma(\epsilon):=x+\epsilon h$, notice that $\gamma(0)=x$ and $\gamma'(0)=h$. Furthermore let $\phi:\mathbb{R}\to M$ be the function that associates to $\epsilon$ one of the $y \in \overline{V} \cap M$ with $\operatorname{dist}(\gamma(\epsilon),M)=\operatorname{dist}(\gamma(\epsilon),y)$. Notice that as long as we restrict ourselves to $\epsilon$ that are small enough this is well defined (you may want your $\epsilon$ so small that there is a ball $B$ around $\gamma(\epsilon)$ which contains $x$ and s.t. $B\subset V$). Also note that $\phi$ need not be continuous; it is just a function. Now notice that $g(\gamma(0))=g(\phi(\epsilon))=0$. By the chain rule one gets: \begin{eqnarray} \Vert Dg_x(h) \Vert &=& \Vert Dg_x(\gamma'(0))\Vert\\ &=& \left\Vert \frac{d}{d\epsilon}g(\gamma(\epsilon))_{\vert\epsilon=0}\right\Vert\\ &=& \lim_{\epsilon\to 0}\left\Vert \frac{g(\gamma(\epsilon))-g(\gamma(0))}{\epsilon} \right\Vert\\ &=&\lim_{\epsilon\to 0}\frac{1}{\epsilon}\left\Vert g(\gamma(\epsilon))-g(\phi(\epsilon)) \right\Vert\\ &\leq&\lim_{\epsilon\to 0}\frac{1}{\epsilon} L \left\Vert \gamma(\epsilon)-\phi(\epsilon) \right\Vert\\ &=& L\lim_{\epsilon\to 0}\frac{\operatorname{dist}(\gamma(\epsilon),M)}{\epsilon} \end{eqnarray} But this is zero by assumption. So all in all we get $Dg_x(h)=0$, which finishes the proof.
A random walk on a finite square with prime numbers
Your conjecture is true assuming the following: there is a sequence of consecutive primes such that the differences between every two consecutive primes of that sequence produce the sequence $\{n-1,n-1,n-1,n-2,n-2,n-3,n-3,\dots,2,2,1,1\} \pmod n$, where the element $n-1$ is repeated $3$ times and every other element is repeated $2$ times, in descending order. That is $2n-1$ elements, i.e. $2n$ consecutive primes. To explain what I mean, take $n=3$: I am looking for $2\cdot 3= 6$ consecutive prime numbers whose consecutive differences produce $\{2,2,2,1,1\} \pmod 3$. Now if I assume that this happens for every odd $n$, then no matter in which cell of the square I am standing or in which direction I am going, I am guaranteed to travel across every cell of the $n\times n$ square. Note: as I wrote in the comments, I think that the assumption could be proved with the Green–Tao theorem (I am not sure of that). I hope this gives you some idea of how to tackle this problem.
Integrals are equal? How?
We can show that the difference between them is $0$ which shows how they are equal. $$\int_0^1 \exp\left(\frac{1}{\ln x}\right)\left(\ln x-\frac{1}{\ln^3 x}\right)dx\overset{\ln x \to x}=\int_{-\infty}^0 e^{x+1/x}\left(x-\frac{1}{x^3}\right)dx$$ $$=\int_{-\infty}^0 e^{x+1/x}\left(1-\frac{1}{x^2}\right)\left(x+\frac{1}{x}\right)dx=\int_{-\infty}^0 \left(e^{x+1/x}\right)'\left(x+\frac{1}{x}\right)dx$$ $$\overset{IBP}=-\int_{-\infty}^0 e^{x+1/x}\left(1-\frac{1}{x^2}\right)dx=-e^{x+1/x}\bigg|_{-\infty}^0=0$$ This can be further generalized. A more direct approach than the one from above is as follows: $$I(n)=\int_0^1 \exp\left(\frac{1}{\ln x}\right)\left(\ln^{n-1} x-\frac{1}{\ln^{n+1}x}\right)dx\overset{\ln x\to x}=\int_{-\infty}^0 e^{x+1/x}\left(x^{n-1}-\frac{1}{x^{n+1}}\right)dx$$ $$\overset{x\to 1/x}=\int_{-\infty}^0 e^{x+1/x}\left(\frac{1}{x^{n+1}}-x^{n-1}\right)dx\Rightarrow 2I(n)=0$$ $$\Rightarrow \int_0^1 \exp\left(\frac{1}{\ln x}\right)\ln^{n-1} xdx=\int_0^1 \exp\left(\frac{1}{\ln x}\right)\frac{1}{\ln^{n+1}x} dx$$
Does $A^2 \geq B^2 > 0$ imply $ACA \geq BCB$ for square positive definite matrices?
I don't know if there are any nice (i.e. not-too-strong) conditions for the inequality to hold, but I'm sure that it doesn't always hold, even when $C$ is positive definite. Counterexample: \begin{align} A&=A^2=I,\\ B&=B^2=\frac12\pmatrix{1&1\\ 1&1},\\ C&=\operatorname{diag}(1,4). \end{align} In this case, we have $A^2\ge B^2\ge 0$ but $ACA-BCB=\frac14\pmatrix{-1&-5\\ -5&11}$ is indefinite. While $B$ is not positive definite here, by continuity, we can obtain a valid counterexample by adding a small positive multiple of $I$ to both $A$ and $B$. Edit. Note that if $ACA\ge BCB$ for all real symmetric $C$, we must have $A=B$ because $AIA\ge BIB$ and $A(-I)A\ge B(-I)B$ imply that $A^2=B^2$. It isn't quite meaningful to consider $ACA\ge BCB$ for all $C\ge0$ either. In particular, if $A(vv^\ast)A\ge B(vv^\ast)B$ for every vector $v$, then $Bv$ must be equal to $\lambda_v Av$ for some $0\le\lambda_v\le1$. Therefore, by linearity, $B=\lambda A$ for some $0\le\lambda\le1$. It is interesting to ask, however, if $A\ge B>0$ and $A^2\ge B^2$, what class of $C$ (under perhaps some additional conditions on $A$ and $B$) would satisfy the inequality $ACA\ge BCB$.
How do I find the cumulative inflation in this problem?
Assume you are interested in an item that costs 1) 100 USD at the beginning of September. At the end of September the same item is sold for 2) 110 USD , I.e 10% inflation. At the end of October this item is sold for 110 + (5/100)110 USD = 110(1.05)USD= (105 +10.5) USD = 115.5 USD, I.e 5% inflation in October. "Combined " inflation: Total price increase; 115.5 -100 USD =15.5 USD. Original price: 100 USD Total price increase in % for the combined period? Can you do it?
Let $X$ be a partially ordered set. We say that $U$ is open if $(x \in U \land x\le y) \Rightarrow y\in U$
To show that the family of thus defined "open" sets is a topology, we need to verify the condition $$(x \in U \land x \leq y) \implies y \in U\tag{1}$$ for some sets. It is clear that $(1)$ holds for $U = X$, since then the conclusion $y\in U$ always holds. And $(1)$ holds (vacuously) for $U = \varnothing$, since the antecedent is always false. Next we verify that the intersection of two "open" sets is an open set. So let $U_1,U_2$ be open, and set $U = U_1 \cap U_2$. We must verify that $$x \leq y \implies y \in U$$ holds for every $x \in U$. Hence let $x \in U$ and $x \leq y$. Since $U = U_1 \cap U_2$, we have $x \in U_k$ for $k \in \{1,2\}$. Since $U_k$ is open, and $x \leq y$, it follows that $y \in U_k,\, k \in \{1,2\}$. But that is just $y \in U_1 \cap U_2$, or equivalently $y\in U$. Thus $(1)$ is verified for $U = U_1 \cap U_2$. In other words, the intersection of two open sets is open. Finally, we need to show that an arbitrary union of open sets is open. So let $I$ be an arbitrary index set, and for every $i \in I$ let $U_i$ be an open set. Let $$U = \bigcup_{i\in I} U_i.$$ Suppose $x \in U$, and $x \leq y$. By definition, $x\in U \iff (\exists i \in I)(x \in U_i)$. Choose such an index $i_0$. Since $U_{i_0}$ is open, we have $$(x \in U_{i_0} \land x \leq y) \implies y \in U_{i_0}.$$ The antecedent of that is true by assumption, and hence we conclude $y\in U_{i_0}$. But $U_{i_0} \subset U$, and thus $y \in U$, showing that $U$ is open.
Solve $\int e^{\cos x}\frac{x \sin^3x+\cos x}{\sin^2x}dx$
Substitute $t=\cos(x)$ (so that $dt=-\sin(x)\,dx$) to get $$-\int e^t \left(\arccos(t)+\frac t{(1-t^2)^{3/2}}\right)dt$$ and apply your technique with a plus/minus trick: $$-\int e^t \left(\arccos(t)-\frac1{\sqrt{1-t^2}}+\frac1{\sqrt{1-t^2}}+\frac t{(1-t^2)^{3/2}}\right)dt=-e^t\arccos(t)-\frac{e^t}{\sqrt{1-t^2}}+C,$$ that is, $-e^{\cos x}\left(x+\dfrac{1}{\sin x}\right)+C$.
If the mean value theorem always gives a $c \in (0,\infty) $ such that $f'(c) > 0$ is there an interval starting from $0$ such that $f' >0$
A counterexample is $f(x)=x^2(2+\sin(1/x))$, when $x\neq 0$, and $f(0)=0$. A plot from Mathematica helps you believe that it works, but you can calculate the derivative to check: for $x\neq 0$, $f'(x)=2x(2+\sin(1/x))-\cos(1/x)$, which is negative at the points $x=\frac{1}{2k\pi}$ for $k\ge 1$, so there is no interval $(0,\delta)$ on which $f'>0$; on the other hand $f(x)>0=f(0)$ for every $x>0$, so the mean value theorem on $[0,b]$ always produces a $c\in(0,b)$ with $f'(c)>0$.
The number of monotone lattice paths in the vicinity of the diagonal.
I will count the lattice paths that never deviate by $d$ or more steps. The deviation of a path at a point is its $y$-coordinate minus its $x$-coordinate at that point. Start with all of the paths, $\binom{2n}{n}$. Subtract the bad paths which at some point deviate by $\pm d$, which by the reflection principle is $2\binom{2n}{n-d}$. Add back in the doubly subtracted paths which at some point have a deviation of $+d$, and at another point have a deviation of $-d$. Applying the reflection principle twice, the number of such paths is $2\binom{2n}{n-2d}$. For example, let's count paths which at some point deviate by $+d$, then at a later point deviate by $-d$. You reflect at the first time they deviate by $-d$, and then reflect at the first time they deviate by $+d$. The result is a path of length $2n$ which at the end deviates by $+4d$. This path has $n+2d$ up steps and $n-2d $ right steps. Now, paths which go from $+d$ to $-d$ back to $+d$ will be doubly subtracted in step $2$, then doubly added back in in step $3$. These must be subtracted out. Same for $-d\to +d\to -d$, so we subtract $2\binom{2n}{n-3d}$ (where we reflect three times to get this). However, paths which go from $+d$ to $-d$ to $+d$ to $-d$ were subtracted twice in $2$, added twice in $3$, then subtracted twice in $4$. These must be added back in, so $\dots$ You see the pattern. The result is $$ \boxed{\binom{2n}n+2\sum_{k=1}^{\lfloor n/d\rfloor}(-1)^k\binom{2n}{n-kd}.} $$
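For small $n$ and $d$ the boxed formula can be cross-checked by brute force; here is a sketch of such a check (the function names are just illustrative, not part of the argument):

```python
from itertools import combinations
from math import comb

def brute_force(n, d):
    """Count monotone paths with n up- and n right-steps whose deviation
    (ups so far minus rights so far) never reaches +d or -d."""
    total = 0
    for ups in combinations(range(2 * n), n):
        up_set, dev, ok = set(ups), 0, True
        for step in range(2 * n):
            dev += 1 if step in up_set else -1
            if abs(dev) >= d:
                ok = False
                break
        total += ok
    return total

def formula(n, d):
    return comb(2 * n, n) + 2 * sum((-1) ** k * comb(2 * n, n - k * d)
                                    for k in range(1, n // d + 1))

for n in range(1, 7):
    for d in range(1, n + 2):
        assert brute_force(n, d) == formula(n, d), (n, d)
print("formula agrees with brute force for n <= 6")
```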
Determine whether or not T is a linear transformation from $\mathbb{R}^3$ to $\mathbb{R}^4$
To disprove linearity, all you need is two points $x,y \in \mathbb{R}^3$ where $T(x + y) \neq T(x) + T(y)$. Try $x=(0,0,0)$ and $y=(1,1,1)$. Compute $T(x),$ $T(y)$ and $T(x+y)$ separately.
Polynomial Approximation of an Integral
Write down the ordinary Taylor expansion of $\cos(t^2)$ about $t=0$. This is done by recalling that $\cos w=1-\frac{w^2}{2!}+\frac{w^4}{4!}-\frac{w^6}{6!}+\cdots $ and substituting $w=t^2$. Now integrate term by term from $0$ to $x$. Note that the series we get is, for $|x|\le 1$, an alternating series: the terms alternate in sign, decrease in absolute value, and approach $0$. So the error made in cutting off somewhere is less, in absolute value, than the first "neglected" term. That criterion works efficiently, and does not require the complicated manipulations and notation that you are using. Remark: The various expressions for the remainder are important theoretical tools. They are often much less useful as practical tools for making estimates.
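Carrying this out explicitly, the term-by-term integration gives
$$\int_0^x \cos(t^2)\,dt \;=\; \sum_{k=0}^{\infty} \frac{(-1)^k\,x^{4k+1}}{(4k+1)\,(2k)!} \;=\; x-\frac{x^{5}}{5\cdot 2!}+\frac{x^{9}}{9\cdot 4!}-\frac{x^{13}}{13\cdot 6!}+\cdots,$$
so, for $|x|\le 1$, stopping after the term with $x^{4k+1}$ gives an error of at most $\dfrac{|x|^{4k+5}}{(4k+5)(2k+2)!}$.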
Limit as $x \to 64$ of $\frac{\sqrt[6] x - 2}{\sqrt x - 8}$
Use $a^3-b^3=(a-b)(a^2+ab+b^2)$ with $a=\sqrt[6\,]x$ and $b=2$. Or use the extended mean value theorem. Or its application, the rule of l'Hopital.
How to check if a point $P(x,y)$ lies inside a rounded rectangle?
Let $R$ be the unrounded rectangle, $R'$ the rounded rectangle, and $r$ the radius of the rounded corners. Check if $P$ is contained in $R$. If not, then $P$ is not in $R'$. If $P\in R$, then check if $P$ falls within any of the four squares of side length $r$ contained within $R$ and sharing a corner with $R$. If not, then $P\in R'$. If $P$ falls in one of those squares, compute the distance from $P$ to the corner of the square contained in the interior of $R$. If that distance is greater than $r$, then $P\notin R'$. Otherwise, $P\in R'$.
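Here is a small sketch of that test in code; it assumes $R=[x_0,x_1]\times[y_0,y_1]$, and the function name and coordinate conventions are my own illustrative choices:

```python
import math

def in_rounded_rect(px, py, x0, y0, x1, y1, r):
    """Point-in-rounded-rectangle test following the three steps above."""
    # Step 1: P must lie in the unrounded rectangle R.
    if not (x0 <= px <= x1 and y0 <= py <= y1):
        return False
    # Step 2: locate the corner square of side r (if any) containing P.
    # The square's corner in the interior of R is the center of the rounded arc.
    if px < x0 + r:
        cx = x0 + r
    elif px > x1 - r:
        cx = x1 - r
    else:
        return True               # P is outside all corner squares, hence in R'
    if py < y0 + r:
        cy = y0 + r
    elif py > y1 - r:
        cy = y1 - r
    else:
        return True               # P is outside all corner squares, hence in R'
    # Step 3: inside a corner square, P is in R' iff it is within r of the arc center.
    return math.hypot(px - cx, py - cy) <= r
```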
Let $A=\{1,2,3,4,5,7,8,10,11,14,17,18\}$. How many subsets of $A$ contain six elements?
For i.): Choosing a certain number $m$ of elements out of a set of $n$ without regard for the order is counted by $\binom{n}{m}$. In this case it is $\binom{12}{6}=924$. For ii.): The number of ways to get $4$ even numbers out of the $6$ even ones is $\binom{6}{4}=15$, and there are $\binom{6}{2}=15$ ways to get $2$ odd numbers out of the $6$ odd ones (which is, by the way, the same number, because choosing which $4$ to draw is the same as choosing which $2$ to throw out). To get the final number of ways we have to multiply the two, so we get $15\cdot 15=225$. For iii.): There are $6$ odd integers in the set; you decide for each odd number whether to include it in a set or not, that is $2$ choices, and as there are $6$ odd numbers you get to choose $6$ times. You can get each of the described sets with this method and you miss none if you play through all choices. So the total number of sets containing only odd numbers is $2^6=64$.
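For a quick numerical sanity check of these three counts (a throwaway sketch, not part of the original argument):

```python
from math import comb

print(comb(12, 6))              # i):   924 six-element subsets
print(comb(6, 4) * comb(6, 2))  # ii):  15 * 15 = 225 subsets with 4 even and 2 odd elements
print(2 ** 6)                   # iii): 64 subsets consisting only of odd elements
```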
Isomorphism $\text {Rep}_{G,k}\cong \space \text {Mod}_{k[G]} $
Actually more is true: Let $k$ be a commutative ring (think of a field if you want to) and $M$ be a monoid (e.g. a group). There is an isomorphism of categories $\mathsf{Rep}_{M,k} \cong \mathsf{Mod}(k[M])$ over $\mathsf{Mod}(k)$. This means: If $V$ is some $k$-module, then giving an action of the monoid $M$ on $V$ is the same as giving a $k$-linear action of the $k$-algebra $k[M]$ on $V$ (and similarly with homomorphisms, see below). The proof is an immediate consequence of the universal property of the monoid algebra $k[M]$: An action of $M$ on $V$ is a monoid homomorphism $M \to \mathrm{End}_k(V)$. This corresponds to a $k$-algebra homomorphism $k[M] \to \mathrm{End}_k(V)$. But this is precisely a $k$-linear action of the $k$-algebra $k[M]$ on $V$. Explicitly, if $M$ acts on $V$, then $k[M]$ acts on $V$ via $\bigl(\sum_m \lambda_m \cdot m\bigr) \cdot v := \sum_m \lambda_m \cdot (m \cdot v).$ If $V,W$ are $k[M]$-modules, then a $k$-linear map $f : V \to W$ is $k[M]$-linear iff it is $M$-linear (one usually says $M$-equivariant) - this follows easily from the explicit description in the last paragraph, or one uses again the universal property of $k[M]$.
Using a real number X calculate quadratic equation using only addition, multiplication and subtraction
One of the common ways to evaluate such polynomials is to write the polynomial as: $$((5\cdot x+2)\cdot x\cdot x - 8)\cdot x - 1$$ This trick, known as Horner's method, is used because it minimizes the number of multiplications. Compare that to a more straight-forward approach: pre-calculating $y=x\cdot x$ and then writing the polynomial as: $$5\cdot y\cdot y +2\cdot y\cdot x -8\cdot x -1$$ That takes six multiplications (including calculating $y$), compared to a mere $4$ multiplications in my original formula.
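A small sketch of that nested evaluation in code, assuming the polynomial in question is $5x^4+2x^3-8x-1$ as in the formulas above:

```python
def horner(coeffs, x):
    """Evaluate a polynomial with coefficients [a_n, ..., a_1, a_0]
    (highest degree first) using only additions and multiplications."""
    result = 0
    for a in coeffs:
        result = result * x + a   # the nested multiply-then-add step
    return result

# 5x^4 + 2x^3 + 0x^2 - 8x - 1 evaluated at x = 3 with four genuine multiplications:
print(horner([5, 2, 0, -8, -1], 3))   # same value as 5*3**4 + 2*3**3 - 8*3 - 1
```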
Questions about if $\dim(U)\ge\dim(V)−\dim(W)$ and proving $∃T∈\mathfrak{L}(V,W) \text{s.t.}\text{null}(T)=U$?
If $n > p$ you would not be able to define $Tv_i=w_i$ for all $i = 1, \dots, n$ with different values, so $Tv_j = Tv_k$ for some $j \neq k$, and then $v_j - v_k \in \text{null}(T) \setminus U$.
general linear group $GL_{2}(\mathbb{Z}_3).$
Just count the possible columns of a matrix in $GL_2(\Bbb{Z}_3)$. The first column has to be nonzero, which gives $9 -1 = 8$ possibilities. The second column should not be a multiple of the first column, which gives $9 - 3 = 6$ possibilities. Thus you find $8 \times 6 = 48$ elements.
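If you want to double-check this count by brute force, here is a throwaway sketch (not needed for the argument):

```python
from itertools import product

# Count 2x2 matrices over Z_3 whose determinant is nonzero mod 3.
count = sum(1 for a, b, c, d in product(range(3), repeat=4)
            if (a * d - b * c) % 3 != 0)
print(count)   # 48
```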
Area in the plane described by inequalities
Hints: $$\text{For}\;\;x\ge 1\;\;,\;\;x^2\le\frac6{\sqrt x}\iff x^5\le 36\iff 1\le x\le\sqrt[5]{36}\;$$ And now just draw the functions $\;x^2\;$ and $\;\frac6{\sqrt x}\;$ ...
Are $l_{p} \cap k$ and $l_{p} \cap k_{0}$ complete in $||$ $||_{\infty}$? Are they complete in $l_{p}$ norms?
Observe that for $p<\infty$ $$ l_p\cap k = l_p \cap k_0 = l_p. $$ Hence these spaces are complete with respect to the $l_p$ norm. To see these inclusions: If $x\in l^p$, then $\sum_{n=1}^\infty |x_n|^p$ converges, hence $\lim_{n\to\infty}x_n=0$ and $x\in k_0$. These spaces are not complete with respect to the $l_\infty$ norm: Take the sequence $x_n$ defined as $$ x_n = (1^{-1/p}, 2^{-1/p}, \dots , n^{-1/p}, 0, \dots) \in k_0. $$ This is a Cauchy sequence in $l^\infty$ (the tail is bounded by $(n+1)^{-1/p}\to 0$), but its only possible limit, the sequence $(k^{-1/p})_{k\ge 1}$, is not in $l^p$, since $\sum_k 1/k$ diverges; hence the sequence cannot converge in $l^p$ with the $l_\infty$ norm. If $p=\infty$, then $k_0\subset k \subset l_\infty$. Moreover, $k_0$ and $k$ are closed subspaces of $l_\infty$.
Showing Continuity: Point-Set Topology
If $C \subset Y$ is closed, then $$ f^{-1}(C) = \bigcup_{\alpha} \bigl(A_{\alpha} \cap f^{-1}_{|A_{\alpha}}(C) \bigr), $$ but $A_{\alpha} \cap f^{-1}_{|A_{\alpha}}(C)$ is closed in $A_{\alpha}$, and since each $A_{\alpha}$ is closed in $X$, it is closed in $X$ as well. There are only finitely many $A_{\alpha}$, so $f^{-1}(C)$ is a finite union of closed sets and hence closed in $X$ $\Rightarrow$ $f$ is continuous.
fractions inside of a decimal?
Note that $\displaystyle\ \frac{1}3\: =\: \ 0.33\frac{1}3\ $ means $\displaystyle\ \frac{1}3\: =\: \frac{3}{10} +\: \frac{3\frac{1}3}{100}\ $ which, times $100\:,\:$ becomes $\displaystyle\ \frac{100}3\: =\ 33\frac{1}3\:.\:$ So the notation is "legal". Whether or not it is advisable depends on the context. Certainly it could lead to confusion if not well-explained. It does prove handy as a notational way to represent recursive computations of infinite digit "streams" in functional programming languages. Here the $\:1/3\:$ in the final "digit" represents the continuation function that computes the remaining digits in the tail of the stream. Analogous ideas are sometimes employed in computer algebra systems for representing similar objects e.g. power series and $\rm p$-adic numbers.
How to calculate the price of a product without the sales tax, if we know the price including the tax and the rate of the tax?
Suppose the price is $\;x\;$, so $$x+\frac{10}{100}x=x\cdot(1.1)=8800\implies x=\frac{8800}{1.1}=8000$$
Negating ∃x∀z∃y(S(x,y) ∧ C(y,z))
The final answer will use De Morgan's law to get $$(\forall x)\,(\exists z)\,(\forall y)\;\bigl(\lnot S(x,y) \vee \lnot C(y,z)\bigr).$$
Solving a puzzle: Graph where each node has degree 3
Define a hypothesis to consist of the following: a $3$-regular graph representing a hypothetical map of the park; a vertex $r$ in that graph representing a hypothetical location of the restaurant; a vertex $s$ in that graph representing a hypothetical starting location; an edge incident to $s$ representing the hypothetical path by which you arrived at $s$. There are infinitely many hypotheses, but only countably many: for each natural number $n$, there are only finitely many hypotheses in which the park has $n$ vertices. So we may number the hypotheses $H_1, H_2, H_3, \dots$. One of the hypotheses represents the true map of the park, location of the restaurant, and your starting position: call that hypothesis $H_N$. You don't know which hypothesis this is, but $N$ is some finite number. Now, here is the strategy that will get you to the restaurant. For each $i=1,2,3,\dots$, you: Assume hypothesis $H_i$ is true for the time being. Given the steps you've taken so far, determine the location in $H_i$'s graph where you would now be, if hypothesis $H_i$ were true. From that location, find directions in $H_i$'s graph that would take you to $H_i$'s restaurant location $r$. Follow those directions in the actual physical park. For the first $N-1$ iterations of this procedure (for $i=1, 2, \dots, N-1$), you're going to be following directions based on a false premise, and they're unlikely to get you to the restaurant except by chance. On the $N^{\text{th}}$ iteration of the procedure, you'll assume a true hypothesis: the graph you take will be the correct graph, with the restaurant and your starting location and starting direction marked correctly. Then the location you work out in step 2 will happen to be your actual location in the park, and the directions you work out in step 3 will actually take you to the restaurant.
Juggling three non-Archimedean fields
$\DeclareMathOperator{\cof}{cof}$The Dehn field embeds naturally into the Levi-Civita field and fields $^*\mathbb{R}$ of hyperreal numbers defined by ultrafilters on $\mathbb{N}$, and the Levi-Civita field embeds into $^*\mathbb{R}$. I'll try to explain this in some detail. I assume you are familiar with the notion of ordered Hahn series fields and I first introduce what I call Cauchy-completion. You'll find a discussion about it here. Any linear order $X$ is equipped with a topology, called the order topology, whose open sets are the unions of open intervals. In ZFC, $X$ has a cofinality $\cof(X)$ which is the least order type of a cofinal subset of $X$. If $F$ is an ordered field, then there is also a natural uniform structure on $F$ and in particular a notion of Cauchy sequence (same as regular Cauchy sequences but indexed by $\cof(F)$). The topology is sequential, in that closed sets are sets which contain the limits of their convergent sequences (indexed by $\cof(F)$), and thus in general every topological notion can be stated in terms of sequences indexed by $\cof(F)$, with the usual properties: continuity is sequential continuity, and so on.... An ordered field $F$ is said to be Cauchy-complete if its Cauchy sequences are convergent in $F$, or equivalently if it has no proper dense (ordered field) extension. Every ordered field $F$ has a dense Cauchy-complete extension $(\widetilde{F},\varphi)$, with the following initial and terminal properties: (IP:) If $(I,\mu)$ is a Cauchy-complete continuous (for the order topology, equivalently, the embedding is cofinal) ordered field extension of $F$, then there is a unique morphism $\sigma: \widetilde{F} \rightarrow I$ with $\sigma \circ \varphi=\mu$. (TP:) If $(T,\mu)$ is a dense extension of $F$, then there is a unique morphism $\sigma: T \rightarrow \widetilde{F}$ with $\varphi=\sigma \circ \mu$. The extension $(\widetilde{F},\varphi)$ is called the Cauchy-completion of $F$. For instance $\mathbb{R}$ is the Cauchy completion of $\mathbb{Q}$, and the field $\mathcal{L}=\mathbb{R}((\varepsilon^{\mathbb{Z}}))$ of Laurent series is the Cauchy-completion of $\mathbb{R}(\varepsilon)$. To prove this, one needs only check that this is a dense ordered field extension of this field, and that each Cauchy sequence in $\mathbb{R}(\varepsilon)$ converges in $\mathcal{L}$. Cauchy-completion is similar to real closure in that they correspond to a specific version of reflective subcategories linked to properties of extensions: algebraic extensions and dense extensions. In this answer, I give some (quite poorly written) explanation. What matters here is that the Cauchy-completion is a functorial construction. Using the Newton polygon method, one can prove that the real closure of $\mathcal{L}$ is the ordered field $\mathcal{P}=\bigcup \limits_{n \in \mathbb{N}^{>0}} \mathbb{R}((\varepsilon^{\frac{1}{n}.\mathbb{Z}}))$ of Puiseux series. Its Cauchy-completion, which is the Levi-Civita field $\mathcal{C}$ (sorry Levi...) can be construed as the field of Hahn series $s=\sum \limits_{n \in \mathbb{N}} s_n \varepsilon^{q_n}$ where $(s_n)_{n \in \mathbb{N}}$ is a sequence of real numbers and $(q_n)_{n \in \mathbb{N}}$ is a strictly increasing and cofinal sequence of rational numbers. As the Cauchy-completion of a real-closed field, $\mathcal{C}$ is automatically real-closed. 
Since the Dehn field $\mathcal{D}$ is an algebraic extension of $\mathbb{R}(\varepsilon)$, by the terminal property of real closure, there is a unique embedding of $\mathcal{D}$ into the real closure of $\mathbb{R}(\varepsilon)$ extending the given real closure morphism. By the initial property of real closure, this real closure enjoys a unique embedding in $\mathcal{C}$ extending the embedding $\mathbb{R}(\varepsilon) \rightarrow \mathcal{C}$. Now let's turn to germs of real valued functions. The ring $\mathcal{G}$ of germs of real valued functions is the quotient of the set of real valued functions defined on intervals $[a,+\infty)$ for some real number $a$ by the equivalence relation $f \sim g$ iff $f(x)=g(x)$ for sufficiently big $x$. It is a partially ordered ring under pointwise sum and product, and eventual comparison. Any (linearly) ordered subfield $F$ of $\mathcal{G}$ embeds naturally in the ultrapower $^*\mathbb{R}$ (given a free ultrafilter $U$ on $\mathbb{N}$) by sending the germ of a function $f$ defined on $[n,+\infty)$ to the class of $(0,0,...,0,f(n),f(n+1),...)$ modulo $U$ (where there are $n$ zeroes for instance). The Dehn field is such an ordered field, or more precisely it is a field of representatives of germs of real valued functions. Thus it also embeds naturally in $^*\mathbb{R}$. The field of Laurent series, and for that matter the Levi-Civita field, are not fields of germs of real valued functions, at least not that I know of (but maybe every Laurent series is Borel summable?). Thus I don't see how to naturally embed them into $^*\mathbb{R}$. It is possible that the saying "$^*\mathbb{R}$ extends the Levi-Civita field" is better understood as a thematic remark: the Levi-Civita field can be used to do analysis although it is not archimedean, and thus "not standard" analysis, with nice properties (see this here found on Wikipedia), and fields of hyperreal numbers can be seen as the completion of this goal. In ZFC, there are embeddings of $\mathcal{C}$ into $^*\mathbb{R}$. To see this, we can use the fact that $^*\mathbb{R}$ is real-closed and countably saturated: if $L,R$ are countable sets of hyperreal numbers with $L<R$, then there is a hyperreal number $a$ with $L<a<R$. This implies that $(i)$: $^*\mathbb{R}$ contains a canonical copy of the real closure of each of its subfields, and $(ii)$: $^*\mathbb{R}$ contains a copy of the Cauchy-completion of each of its subfields with countable cofinality. The first statement follows from the initial property of real closure. To prove the second one, consider a subfield $K$ of $^*\mathbb{R}$ with countable cofinality. The set of cofinal extensions of $K$ in $^*\mathbb{R}$ is inductive for the relation of inclusion, and has a maximal element $F$ by Zorn's lemma. Since any algebraic extension is cofinal and by the first statement, $F$ must be real-closed. Assume for contradiction that there is a non-convergent Cauchy sequence $u$ in $F$. We may also assume that we have $u_{2m}<u_{2n+1}$ for all $m,n \in \mathbb{N}$ (I let you figure this out). Let $L$ (resp. $R$) be the set of elements of the sequence which lie below (resp. above) infinitely many elements of the sequence. We have $L<R$ so there is a hyperreal number $a$ with $L<a<R$. Note that $a$ lies outside of $F$, otherwise $u$ would converge to it. I claim that the subfield $F(a)$ of $^*\mathbb{R}$ is a dense extension of $F$ where $u$ converges to $a$. In fact, we only require that it is a cofinal extension of $F$, but density follows. 
Indeed, it will follow that $u$ converges to $a$. Then since $F$ is real closed, no polynomial in $F[X]$ annihilates $a$, and thus by continuity of fractions in $F(X)$ on $F(a)$ outside of their poles (true for any ordered field), for such a fraction $f(X)$, the sequence $(f(u_n))_{n \in \mathbb{N}}$ converges to $f(a)$. So let's prove that $F$ is cofinal in $F(a)$. It is easy to see that each number $P(a)$ for $P \in F[X]$ is bounded by elements of $F$. We must only check that for any non zero polynomial $Q \in F[X]$, the fraction $\frac{1}{Q(a)}$ is also bounded in $F$, that is, we must check that $Q(a)$ cannot be infinitesimal with respect to $F$, denoted $Q(a)\prec_F 1$. We do so by valuation-theoretic arguments. Since $^*\mathbb{R}$ is countably saturated, in particular $F$ is bounded in $^*\mathbb{R}$, and its convex hull in $^*\mathbb{R}$ is a proper convex valuation ring on $^*\mathbb{R}$ which contains $a$. By real closure of $^*\mathbb{R}$, the corresponding valued field is henselian (see for instance Theorem 3.5.16 in ADH). Assume towards a contradiction that there is $Q \in F[X]$ which is non zero with $Q(a) \prec_F 1$, and choose such a polynomial $Q$ of minimal degree, hence $Q'(a)$ is not infinitesimal with respect to $F$. By henselianity, this means that there is $b \in {}^*\mathbb{R}$ with $Q(b)=0$ and $a-b\prec_F 1$. The first relation yields $b \in F$ (by real closure of $F$), and the second implies that $u$ converges to $b$ in $F$: a contradiction. Thus $F(a)$ is a cofinal extension of $F$, which contradicts the maximality of $F$. So $F$ must be Cauchy-complete. The initial property of Cauchy completion then implies that the Cauchy-completion of $K$ embeds in $F$ and thus in $^*\mathbb{R}$. Applying those two results and starting with the fields-of-germs-style embedding $\mathbb{R}(\varepsilon) \rightarrow {}^*\mathbb{R}$ with $f(\varepsilon)\sim (0,1,\frac{1}{2},\frac{1}{3},...)$, we get an embedding of Laurent series, then Puiseux series, then "Levi-Civita series" into $^*\mathbb{R}$. (Using similar arguments as above, one can prove that the maximal subfields $F$ in $^*\mathbb{R}$ are in fact almost countably saturated (countably saturated except at $+\infty$ and $-\infty$), hence in particular spherically complete. This implies that $^*\mathbb{R}$ also contains a copy of the Hahn series field $\mathbb{R}((x^{\mathbb{R}}))$.)
Let X and Y be metric spaces, with X compact, and let f be a continuous map from X to Y that is surjective.
Take an open cover for $Y$ consisting of sets $\{ O_{\alpha} \}$. Find the inverse image of each set $O_{\alpha}$, which must be open by the continuity of $f$. The inverse images constitute an open cover of $X$. Then as $X$ is compact there exists a finite subcover of it. The image of the finite subcover is a finite subcover of $\{ O_{\alpha} \}$ and hence $Y$ is compact. These are your guidelines, now just fill in the details and you have a proof.
Show that $ |\operatorname{det}(x, y)| \leq|x|^{s}|y|^{t}|x-y|^{r} $
Let $D = |\det(x, y)|$. The estimate $$ D \le |x| |y| $$ follows from the Cauchy-Schwarz inequality: $$ |D| = |x_1 y_2 - x_2 y_1| \le |x_1|| y_2| + |x_2|| y_1| \le \sqrt{x_1^2+x_2^2} \sqrt{y_2^2+y_1^2} = |x| |y| \, . $$ Substituting $x$ or $y$ by $x-y$ does not change the absolute value of the determinant, so that also $$ D \le |x| |x-y| \, ,\\ D \le |x-y| |y| \, . $$ Now we can exponentiate these three inequalities with suitable exponents and multiply them to get the desired result: $$ D = D^{1-r} D^{1-t} D^{1-s} \le \bigl( |x| |y|\bigr)^{1-r} \bigl( |x| |x-y|\bigr)^{1-t} \bigl( |x-y| |y|\bigr)^{1-s} = |x|^s |y|^t |x-y|^r \, . $$
Distributing and over or in logic: how to do it if brackets ambiguous?
What happens if you have $(A \lor B \land C)$? There is a preference among modern authors to avoid such ambiguity by using explicit parentheses.   However, the traditional order of operations gives precedence to $\land$ over $\lor$, as analogous to $\times$ over $+$, so this would be read as $\big(A\lor(B\land C)\big)$. $$\big(A\lor(B\land C)\big) ~=~ \big( (A\lor B)\land(A\lor C) \big)$$ Or (A v B ^ C ^ D)? Likewise we would read this as $\big(A\lor(B\land C\land D)\big)$ $$\big(A\lor(B\land C\land D)\big)~=~\big((A\lor B)\land(A\lor C)\land(A\lor D)\big)$$
For which values of $a$ lie the roots of $z^2+2a(1+i)z+(4+2a^2i)=0$ in the first quadrant of complex plane?
You can write the equation as$$z^2+2a(1+i)z+(4+2a^2i)=\Big(z+a(1+i)\Big)^2+4=0$$where we have used $\Big((1+i)a\Big)^2=a^2(1+i)^2=2ia^2$, therefore$$z_1=2i-a(1+i)=(2-a)i-a\\z_2=-2i-a(1+i)=(-2-a)i-a$$and they fall in the first quadrant if $$2-a>0\\-a>0\\-2-a>0$$which yield $$a<2\\a<0\\a<-2$$ Conclusion: For $a<-2$, the roots of $z^2+2a(1+i)z+(4+2a^2i)=0$ reside in the first quadrant.
Doubt about a paragraph in the book "Algebraic Number Theory by Neukirch".
Here, the $\beta_i$ belong to $\bar{K}$ and are integral over $A$. So, the coefficients of $p(x)$ are integral over $A$. But the coefficients of $p(x)$ belong to $K$. So, the coefficients of $p(x)$ belong to $A$, as $A$ is integrally closed in $K$. Hence, $p(x)\in A[x]$.
Integrals that require multiple applications of the change of variable formula
To do the integral, make the substitution $v=\sin\phi.$ Then it takes the form $$\int_0^{π/2}{\cos^4\phi\mathrm d \phi}.$$ To do this, note that $\cos^4\phi-\sin^4\phi=\cos 2\phi$ and $$\cos^4\phi+\sin^4\phi=\frac34+\frac14\cos 4\phi.$$
Why do we say Hexadecimal, combining Greek with Latin?
From Wikipedia's Hexadecimal page, under the section "Cultural" The word hexadecimal is composed of hexa-, derived from the Greek ἕξ (hex) for six, and -decimal, derived from the Latin for tenth. Webster's Third New International online derives hexadecimal as an alteration of the all-Latin sexadecimal (which appears in the earlier Bendix documentation). The earliest date attested for hexadecimal in Merriam-Webster Collegiate online is 1954, placing it safely in the category of international scientific vocabulary (ISV). It is common in ISV to mix Greek and Latin combining forms freely. The word sexagesimal (for base 60) retains the Latin prefix. Donald Knuth has pointed out that the etymologically correct term is senidenary (or possibly, sedenary), from the Latin term for grouped by 16. (The terms binary, ternary and quaternary are from the same Latin construction, and the etymologically correct terms for decimal and octal arithmetic are denary and octonary, respectively.) Alfred B. Taylor used senidenary in his mid-1800s work on alternative number bases, although he rejected base 16 because of its "incommodious number of digits". Schwartzman notes that the expected form from usual Latin phrasing would be sexadecimal, but computer hackers would be tempted to shorten that word to sex. The etymologically proper Greek term would be hexadecadic / ἑξαδεκαδικός / hexadekadikós (although in Modern Greek, decahexadic / δεκαεξαδικός / dekaexadikos is more commonly used).
How do I evaluate $\sum_{k=1}^nk^pr^k=?$
AFAIK there is no closed form. One thing you can do: the sequence $$f(p) = \sum_{k=1}^n k^p r^k $$ has exponential generating function $$g(z) = \sum_{p=0}^\infty \dfrac{f(p) z^p}{p!} = \sum_{k=1}^n (r e^z)^k = \dfrac{(r e^z)^{n+1} - r e^z}{r e^z - 1}$$
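If you want to sanity-check this generating function, here is a small SymPy sketch (assuming SymPy is available; the choices $n=6$ and $r=1/3$ are arbitrary):

```python
import sympy as sp

z, r = sp.symbols('z r')
n, rval = 6, sp.Rational(1, 3)
g = ((r * sp.exp(z))**(n + 1) - r * sp.exp(z)) / (r * sp.exp(z) - 1)

for p in range(5):
    # the p-th derivative of the EGF at z = 0 should equal f(p) = sum_{k=1}^n k^p r^k
    lhs = sp.diff(g, z, p).subs({z: 0, r: rval})
    rhs = sum(k**p * rval**k for k in range(1, n + 1))
    assert sp.simplify(lhs - rhs) == 0
print("EGF derivatives match the sums for p = 0, ..., 4")
```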
Rigor of this direct justification of mathematical induction
Well I can't say for sure whether there is a non-circular way to justify induction (Ian essentially claims any justification would be infinite), but certainly your 'proof' is not valid justification. All you have 'established', being as generous as I possibly can, is that given your two conditions we can prove that #1, #2 and #3 are true. I do not accept "etc", so you will have to define it or explain what it means and why I should accept your usage of it.
generating function as english statement
Relating that polynomial to your own example, we can understand $(1+x^5+x^9)^{100}$ as follows: There are $9$ each of $100$ kinds of objects. The ordinary enumerator for selecting none or five or all nine of the objects of one kind is $1+x^5+x^9$, so for all $100$ kinds together it is $(1+x^5+x^9)^{100}$. Here's another interpretation: Consider three-sided dice whose faces have $0$, $5$, or $9$ pips, and roll $100$ such dice. Then the coefficient of $x^k$ in $(1+x^5+x^9)^{100}$ is the number of ways to get a total of $k$.
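To make the dice interpretation concrete, here is a small sketch that computes those coefficients by repeated polynomial multiplication (the variable names are just illustrative):

```python
# Coefficients of (1 + x^5 + x^9)^100, stored as a list indexed by exponent.
coeffs = [1]                       # the constant polynomial 1
for _ in range(100):               # multiply by (1 + x^5 + x^9) a hundred times
    new = [0] * (len(coeffs) + 9)
    for k, c in enumerate(coeffs):
        for face in (0, 5, 9):     # faces with 0, 5, or 9 pips
            new[k + face] += c
    coeffs = new

# coeffs[k] = number of ways 100 such dice can show a total of k pips
print(coeffs[45])
```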
$\lim_{n\to \infty}n^ka^n=0$ for $a\in (0,1)$ and $k$ a positive integer
Applying the case you proved $(k=1)$ to $c:=a^{1/k} \in (0,1)$ gives $$\lim_{n \to \infty} n \cdot c^n =0,$$ and then $$n^k \cdot a^n=(n \cdot c^n)^k \underset{n\to \infty}{\longrightarrow} 0$$ since, for a fixed $k\in \mathbb{N}$, $x\mapsto x^k$ is continuous.
Finite dimensional division algebra over $\Bbb{C}$
A finite-dimensional division algebra over $\mathbb{C}$ is a division algebra $D$ over $\mathbb{C}$ which is finite-dimensional as a complex vector space. The way you'll use this hypothesis is the following: every $d \in D$ has the property that the elements $\{ 1, d, d^2, \dots \}$ must be linearly dependent, since there are infinitely many of them, so it follows that there is some nontrivial linear dependence between them. Equivalently, every $d \in D$ satisfies a polynomial $f(d) = 0$ with complex coefficients. Now, since $\mathbb{C}$ is algebraically closed, this polynomial factors into linear factors...
Polytopes in binary field
While an equation $a \cdot x = b$ (over the real numbers) defines a hyperplane in $\mathbb R^n$, $a \cdot x \equiv b \mod p$ (where $a, x \in \mathbb Z^n$ ) picks out those $x$ lying on any of the hyperplanes $a \cdot x = b + k p$. If you have enough points on more than one hyperplane, the convex hull (again over the real numbers) will be $n$-dimensional rather than $n-1$-dimensional.
Does the weak divergence exist for each $\mathcal L^2(\Omega;\mathbb R^d)$-function?
In the sense of (1), it can be defined for $u_i,v\in L^1_{\text{loc}}(\Omega)$. More generally, it can be defined for arbitrary distributions. There are $u \in L^2(\Omega; \mathbb{R}^d)$, such that there is no $v \in L^1_{\text{loc}}(\Omega)$ with $v = \operatorname{div} u$. But $u$ has always a distributional divergence. If each component of $u$ is weakly differentiable with weak derivative $\partial_i u \in L^1_{\text{loc}}(\Omega)$, then $\operatorname{div} u = \sum_{i=1}^d \partial_i u$.
Calculus 2: Strategy for Integration, Integral of e^(x+e^x)dx
Hint: You can write $\int e^{x+e^x}dx$ as $\int e^{e^x}\cdot e^x dx$
Scaling variables in homogeneous equation of degree two in a,b,c
You can think of this scaling as dividing all equalities by $c^2$, which is valid because $c\neq 0$. This division basically reduces a 3-variables problem into a 2-variables problem, which should be more tractable. Let $A=a/c$, $B=b/c$ and $C=c/c=1$. Then, we have $$ a^2-b^2=bc\implies A^2-B^2=B;\\ b^2-c^2=ca\implies B^2-1=A. $$ It follows that $$ A^2-1=(A^2-B^2)+(B^2-1)=B+A. $$ Thus, it remains that we show $A+B=AB$. Note that $$ B=A^2-B^2=A^2-(A+1)=A^2-A-1 $$ so $$ AB=A+B\iff A^3-A^2-A=A^2-1\iff A^3-2A^2-A+1=0.\tag{i} $$ To show that (i) holds, we use $$ 1+A=B^2=(A^2-A-1)^2=A^4-2A^3-A^2+2A+1 $$ which implies $$ 0=A^4-2A^3-A^2+A=A(A^3-2A^2-A+1).\tag{ii} $$ Now we are done: (ii) implies that (i) is true, therefore we indeed have $AB=A+B$.
Last digit of a triangular number is the midpoint between two primes
I think you are right about even numbers: add $1$ or subtract $1$, since $3 \cdot \text{even} - 1$ or $3 \cdot \text{even} + 1$ is prime (if you prefer to use positive coefficients, use $5$ and $7$ here instead of $1$ and $-1$, since $0$ is regarded as even as well). But in the case of an odd progression you might consider adding $4$ to see the full pattern: $3 \cdot \text{odd} + 2$ or $3 \cdot \text{odd} + 4$ is prime. So, for example, the triangular number $15$, using the prime number form above, will generate $13$, $17$, and also $11$ and $19$. In the case of last digit $3$, for which you mentioned having no solution, the triangular number $153$ will give you two prime numbers, $149$ and $157$, but not $151$ and $155$, since the latter one is the product of $5$ and $31$.
Why is the matrix representing a non-degenerate sesquilinear form invertible?
Let $q$ be a sesquilinear form on a vector space $E$, given by a matrix $A$. The following statements are equivalent: $q$ is degenerate. There exists a nonzero vector $x\in E$ so that $q(x,y)=0$ for all $y\in E$. There exists a nonzero vector $x\in E$ so that $x^H A y = 0$ for all $y\in E$. There exists a nonzero vector $x\in E$ so that $x^H A$ is the zero (row) vector. The left nullspace of $A$ is non-trivial. The matrix $A$ is singular. It should be clear that $(1)\Leftrightarrow(2)\Leftrightarrow(3)\Leftrightarrow(4)\Leftrightarrow(5)\Leftrightarrow(6)$.
Why does my well defined linear transformation not work?
The notation here is strongly suggestive of, unfortunately, the wrong thing: that $S$ is an operator which takes in an arbitrary term and outputs the square of that term. By changing notation it's easier to see what's going on. We have two basis elements $v_1,v_2$, and a function defined on this basis by $f(v_1)=w_1$ and $f(v_2)=w_2$. Now we ask what $f(-v_2)$ - or rather, $\hat{f}(-v_2)$, where $\hat{f}$ is the extension of $f$ to all of our space (note that you have a common minor abuse of notation, using $S$ to denote both the function defined on the basis elements only and the extension of that function to the whole space) - is equal to. We need to write $-v_2$ as a linear combination of basis elements, and we do this as $$-v_2=0\cdot v_1 + (-1)\cdot v_2.$$ We now apply $f$ to each of the basis vectors in this expression and look at what we get: $$\hat{f}(-v_2)=0\cdot f(v_1)+(-1)\cdot f(v_2).$$ This is just $-f(v_2)$ - and shifting back to our original context, this gives the desired result that $S(-x)$ does indeed equal $-S(x)$.
Intuitively understanding Fatou's lemma
Since the Lebesgue integral for nonnegative functions is built up "from below" by taking suprema of "obvious" integrals, the monotone convergence theorem has always seemed to me to be the most natural of the big three (MCT, FL, LDCT). And FL is a direct corollary of the MCT: Start with the obvious, i.e., $$\int \inf \{f_n,f_{n+1}, \dots \} \le \int f_n.$$ From that we get $$\lim_{n\to \infty} \int \inf \{f_n,f_{n+1}, \dots \} \le \liminf_{n\to \infty} \int f_n.$$ Really, that should be $\liminf$ on the left, but since the integrands increase, so do the integrals, so the limit exists and we're fine. Now by MCT, that limit can be moved through the integral sign, and then you have FL.
Prove that the limit of $\frac{\sin\bar{z}}{\sin z}$ does not exist as $z\to 0$
If $z = x \in \mathbb R$, then $$ \frac{\sin \overline z}{\sin z} = \frac {\sin x}{\sin x} = 1 \to 1; $$ if $z = \mathrm i y, y \in \mathbb R$, then $$ \frac {\sin \overline z}{\sin z} = \frac {\sin(-\mathrm i y)}{\sin (\mathrm i y)}, $$ using the series expansion of $\sin z$ we have $\sin (-z) = -\sin(z)$, so the quotient above equals $-1$. Thus the limit does not exist.
Continuity of the lebesgue integral
The result is not true in general: for example, let $\mu A = 1_A(0)$ on $\mathbb{R}$, $f(x) = 1$ and $A = \{0\}$. Then $g(t) = 1_{\{0\}} (t)$, which is not continuous. The result is true for the Lebesgue measure. Let $f_t$ denote the function $f_t(x) = f(x-t)$. Then we have $g(t) = \int 1_{A+t} f = \int 1_A f_t = \int_A f_t$. Since $g(s+t) = \int_A (f_s)_t$, we see that it is sufficient to show that $g$ is continuous at $0$ (that is, assuming that the result is true for integrable $f$). Littlewood's principles tell us that $C_0(\mathbb{R})$ is dense in $L^p(\mathbb{R})$, so for $\epsilon>0$, we can find a $\tilde{f} \in C_0(\mathbb{R})$ such that $\|f-\tilde{f}\|_1 < \frac{\epsilon}{3}$. Since $\tilde{f} \in C_0(\mathbb{R})$, it is uniformly continuous and supported on a set of finite measure, hence we can find a $\delta>0$ such that if $|t| < \delta$, then $\|\tilde{f} - \tilde{f}_t\|_1 < \frac{\epsilon}{3}$. Consequently we have $$\|f-f_t\|_1 \leq \|f -\tilde{f}\|_1+ \|\tilde{f}-\tilde{f}_t\|_1+ \|\tilde{f}_t - f_t\|_1 < \epsilon$$ So, if $|t| < \delta$, then $|g(t)-g(0)| \leq \int_A |f-f_t|\leq \|f-f_t\|_1 < \epsilon$.
Prove that every continuous map $f: S^1 \to S^1$ is homotopic to a continuous map $g: S^1 \to S^1$ with $g(1) = 1$
Yes, the $\arg$ function is useful here, but there is no need to use $\arg(z)$ as a part of the definition of your homotopy. Suppose that $g(1) = e^{i \theta}$ (equivalently, we could say $\theta = \arg(g(1))$). Consider the function $F: [0,1] \times S^1 \to S^1$ defined by $$ F(t,z) = e^{-i t\theta} g(z). $$ $F$ is indeed continuous.
Golden Ratio Approximation
No, not at all. Factoring and then using a well known property of $\varphi$: $$\left(\varphi(\varphi-1)\right)^{5050.3535}=\left(\frac{\varphi}{\varphi}\right)^{5050.3535}=1\ne\varphi. $$ The argument needs to be a bit larger than $1$. You're dealing with an approximation.
If $A^2=0$, then $I−A$ is invertible
$(I-A)(I+A)=I^2-A^2=I$ What does that tell you?
Proving Fourier transform is continuous with limited knowledge.
Let $\sigma_n \to \sigma_0$. Use the continuity of $\sigma \mapsto e^{i \sigma x}=\cos({\sigma x})+i \sin({\sigma x})$ and then apply Lebesgue's dominated convergence theorem to the sequence $g_n(x)=f(x)e^{i\sigma_n x}$, which is dominated by $|f|$.
Understanding poles in complex analysis
$f(z)=\frac{1}{g(z)}$ has a pole at $z=a$ of order $m$ if $g(z)$ has a zero at $z=a$ of order $m$. In your case $g(e^{iπ/4})=0$ but $g'(e^{iπ/4})\neq 0$, which implies $g(z)$ has a simple zero ($m=1$) at $z=e^{iπ/4}$, thus $f(z)$ has a simple pole thereat.
Find $f(x)$ if $f(x+1) = x^2-5x+3$.
OK, let's use the hint. Let $u = x+1$ Then $f(x+1) = f(u) = x^2-5x+3 = (u-1)^2 - 5(u-1) + 3 = u^2-7u + 9$ So we just got that: $$ f(u) = u^2-7u + 9$$ But the variable name doesn't matter (it can be $u$, $x$, $t$, etc.), this does not change the function itself... So we can now replace the variable $u$ with $x$ and we get: $$ f(x) = x^2 - 7x + 9$$
Show that the sum of two uniformly continuous functions is uniformly continuous in an arbitrary metric space
EDITED: After the comments of @Brian Moehring, we assume that the distance is stable under translation and that both $f$ and $g$ target the same space, i.e. $d(f(x)+a, f(y)+a)=d(f(x), f(y))$. Given $\epsilon>0$, choose $\delta_f,\delta_g>0$ for $f$ and $g$ by uniform continuity applied with $\epsilon/2$, and set $\delta=\min(\delta_f,\delta_g)$. Then for every $x,y$ such that $d(x, y) < \delta$ we have $d(f(x)+g(x), f(y)+g(y)) \leq d(f(x)+g(x), f(y)+g(x))+d(f(y)+g(x), f(y)+g(y)) = d(f(x),f(y))+d(g(x),g(y)) \leq \frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon$.
Show that $E(S)=\sqrt{\frac{1}{n-1}}\frac{\Gamma(n/2)}{\Gamma[(n-1)/2]}\sigma$
To get you started, define a random variable $X$ by $$X = \frac{(n-1)S^{2}}{\sigma^{2}} \sim \chi_{n-1}^{2}$$ Then $$S = \sqrt{\frac{\sigma^{2}X}{n-1}}$$ and $$\mathbb{E}[S] = \mathbb{E}\left[\sqrt{\frac{\sigma^{2}X}{n-1}}\right]= \sqrt{\frac{\sigma^{2}}{n-1}}\mathbb{E}[\sqrt{X}]$$
Parametrization of Doubly Stochastic Matrix
Adams & Zemel (see https://arxiv.org/pdf/1106.1925) devised a technique, based on the Sinkhorn-Knopp theorem, which they call Sinkhorn propagation. In essence: A $n \times n$ doubly stochastic matrix (DSM) can be parametrized with a strictly positive $n \times n$ parameter matrix $M$. Due to the Sinkhorn-Knopp theorem, $M$ can be iteratively row-normalized, column-normalized, row-normalized, column-normalized, etc., and the sequence of resulting normalized matrices $M_1, M_2, \ldots, M_\infty$ converges to a DSM. By truncating this sequence to a finite number of row/column normalization steps, you obtain a tight approximation to a DSM. For optimizing over DSMs, you can define a function $f=n_{row}(n_{col}(n_{row}(\ldots(M)\ldots)))$, where $n_{row}$ is a row-wise normalization and $n_{col}$ is a column-wise normalization, that maps from any strictly positive $M$ into a DSM. By approximating $f$ as the composition of a finite number of normalizations, you can then use gradient descent on $f$ w.r.t. $M$ as part of a MLE procedure.
Regarding Weierstrass approximation theorem and simply connected domains
Suppose $\|\gamma-P\|_\infty < \epsilon$ and let $l(t) = (1-t)(\gamma(0)-P(0)) + t (\gamma(1)-P(1))$ and let $Q=P+l$. Then $\|\gamma-Q\|_\infty \le \|\gamma-P\|_\infty+ \|l\|_\infty < 2 \epsilon$. Since $G$ is open, for sufficiently small $\epsilon>0$ we have $Q(t) \in G$ and $Q(t) = \gamma(t)$ for $t \in \{0,1\}$. Suppose you have a polynomial $P$ such that $P([0,1]) \subset G$. This is smooth and defined everywhere on $\mathbb{C}$. In particular, $P^{-1}(G)$ is open and contains $[0,1]$. In particular, we can find some $\delta>0$ such that $[0,1]+B(0,\delta) \subset P^{-1}(G)$. This is a suitable simply connected domain.
Is there a Mathematical system which is stochastic at its core?
Your question is undoubtedly philosophical in nature: it is predicated on the premise "it's solipsism to talk about anything that cannot be put to use in predicting an outcome". But solipsism is a philosophical term (and more fundamentally, an inherently philosophical -- as opposed to mathematical or scientific -- idea)! The philosophical view you advocate seems to be, roughly, that the truths of arithmetic are (only) empirical in nature. As with many basic philosophical ideas, this has been propounded before and has gotten a lot of discussion. Famously, it was the view of arithmetic espoused by John Stuart Mill. For a pretty good quick summary of his positions, see http://plato.stanford.edu/entries/mill/#GeoAri Mill's ideas on arithmetic and geometry have generally been poorly received by later philosophers. Notably, much of Frege's work is a reaction against Mill's ideas; Mill himself was reacting in part against Kant's notion of the "synthetic a priori". So far as I know, very few mathematicians or philosophers of mathematics have accepted Mill's ideas. Rather, we have followed the path of Frege and viewed logic as being the foundation for mathematics. (Later work by Russell, Gödel and others showed that there are some pitfalls and limitations of the logicist approach, but it has nevertheless been the one mathematicians have adopted for the last hundred years.) [As Qiaochu says, if you really want to talk mathematics you should remove the overtly philosophical lead to your question. Instead, you should say more about the mathematical aspects of your question: what mathematical problem are you trying to solve?]
A version of Rellich-Kondrachov's theorem
The case $k=1$ established in Evans is quite enough, since $$ W^{k,p}(D)\overset{I_1}{\hookrightarrow}W^{1,r}(D) \overset{I_2}{\hookrightarrow}L^q(D),\quad r=\frac{np}{n-(k-1)p}\,,\; q\in\bigl[1,\frac{nr}{n-r}\bigr)=\bigl[1,\frac{np}{n-kp}\bigr),$$ with the embedding operators $I_1$ and $I_2\,$, where $I_1$ is just continuous, while $I_2$ is continuous and compact. Hence so is their composition $I=I_1\circ I_2\,$.
Theorem 3.22 from baby Rudin
Because the statement is "$\sum a_n$ converges if and only if for every $\varepsilon >0$ there is an integer $N$ such that $$\left|\sum_{k=n}^{m}a_k\right|\leqslant \varepsilon$$ if $m\geqslant n\geqslant N$." Notice that for $\sum a_n$ to converge, $$\left|\sum_{k=n}^{m}a_k\right|\leqslant \varepsilon$$ should happen for all $m\ge n\ge N$. Choosing $m=n$ is just one case. In your example, $\sum\frac1n$ does not converge as for every given $\varepsilon$, you cannot find an $N>0$ such that $$\left| \frac1n+\frac1{n+1}+\dots+\frac1m \right| < \varepsilon$$ for all $m\ge n\ge N$. Although you can find an $N$ for the case $m=n$, i.e. for every given $\varepsilon$, we can find an $N$ such that $$\left| \frac1n \right|< \varepsilon$$ for all $n \ge N$, by the Archimedean property.
Evaluation of $\int \frac{x\sin( \sqrt{ax^2+bx+c})}{ax^2+bx+c} \ dx\ $
Too long for a comment: If the polynomial has only a single root, i.e $ax^2+bx+c=A^2(x+B)^2$, the integral may be solved by substitution $y=A(x+B)$. The result is then a straightforward combination of the trigonometric integrals: $$\frac{B}{A}\left(\text{sinc}(y)-\text{Ci}(y)\right)+\frac{1}{A^2}\text{Si}(y)$$ I don't quite see how the non-degenerate case can be solved in closed form, though an approximation can be derived by expanding $\sin$ in a Taylor series, and then using Euler's substitution on each of the rational fractions in $\sqrt{ax^2+bx+c}$.
If $\frac1x-\frac1y=\frac1z$, $d=\gcd(x,y,z)$ then $dxyz$ and $d(y-x)$ are squares
It suffices to prove that $d (y - x)$ is a square, because $$\tag{id}(y - x) z = x y,$$ and thus $$ d (y - x) z^{2} = d x y z. $$ Let $p$ be a prime, and let $p^{a}, p^{b}, p^{c}$ be the highest powers of $p$ that divide respectively $x, y, z$. Let us look at (the exponent of) the highest powers of $p$ that divides $y - x$. If $a \ne b$, the highest power of $p$ that divides $y - x$ is $\min(a, b)$. Comparing the powers of $p$ in (id) we get thus $$ \min(b, a) + c = a + b, $$ so that $c = \max(a, b)$. This implies that the highest power of $p$ that divides $d (y - x)$ is $$ \min(a, b, c) + \min(b, a) = 2 \min(b, a). $$ So let us consider the case $a = b$. Writing $x = p^{a} x'$, $y = p^{a} y'$, we obtain $$ p^{a} (y' - x') z = x' y' p^{2 a}. $$ Therefore, if $e$ is the highest power of $p$ that divides $y' - x'$, we have $e + c = a$, so that $c \le a$. Thus in this case the highest power of $p$ that divides $d (y - x)$ is $c + a + e = 2 a$. We have proved that the highest power of every prime that divides $d (y - x)$ has an even exponent. Thus $d (y - x)$ is a square.
Number of solvable lights out puzzles on m x n rectangle
$$1+x^2 \equiv (1+x)^2 \mod 2$$ $$x(x^3+x^4) = x^4(1+x)$$ So $\gcd(1+x^2,x(x^3+x^4)) = 1+x$ and is also of degree $1$. Note that if $\gcd(p(x),q(x+1))=r(x)$, then substituting $x=X+1$ gives $\gcd(p(X+1),q(X+2))=r(X+1)$ showing that when working modulo $2$ the degrees of $\gcd(p(x),q(x+1))$ and $\gcd(p(x+1),q(x))$ are equal. P.S. Some of the links on Brouwer's site are very out of date. The link to my website should be updated to https://www.jaapsch.net/puzzles/lomath.htm
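A quick SymPy check of that gcd computation over $GF(2)$ (assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
p = sp.Poly(1 + x**2, x, modulus=2)
q = sp.Poly(x * (x**3 + x**4), x, modulus=2)
print(p.gcd(q))   # expect Poly(x + 1, x, modulus=2), a gcd of degree 1
```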
Abelianization is left adjoint to the forgetful functor
Yes, it is correct. Similar formulation: For every $f\in\mathbf{Grp}(G,U(H))$ there is a unique $r\in\mathbf{AbGrp}(G/[G,G],H)$ such that $f=U(r)\circ\eta$ where $U$ denotes the forgetful functor $\mathbf{AbGrp}\to\mathbf{Grp}$ and $\eta:G\to U(G/[G,G])$ is the arrow in $\mathbf{Grp}$ that is prescribed by $g\mapsto g[G,G]$. This implies for every pair $G,H$ the existence of a bijection $\mathbf{Grp}(G,U(H))\to\mathbf{AbGrp}(G/[G,G],H)$ that can be shown to be natural in $G$ and $H$. So we are dealing with an adjunction of the forgetful functor and the functor that you denote as $\mathsf{ab}$. If it is not clear to you that this is an adjunction then can you please explain what your understanding of an adjunction is?
Maximum index of a generalized eigenvector of $A$ associated to $\lambda$
Consider an easy example, like $$A=\begin{pmatrix}1&1\\0&1\end{pmatrix}$$. Consider $\lambda =1$. It has ordinary eigenvector $\begin{pmatrix} 1\\0\end{pmatrix}$ and generalized eigenvectors of the form $\begin{pmatrix}a\\1\end{pmatrix}$ (easy to see). The index of $ \begin{pmatrix} 1\\0\end{pmatrix}$ is the min over all $n$ such that $(A-1I)^n\begin{pmatrix} 1\\0\end{pmatrix}=0$. So it's $1$. For $\begin{pmatrix}1\\1\end{pmatrix}$, say, $(A-1I) \begin{pmatrix}1\\1\end{pmatrix}=\begin{pmatrix}1\\0\end{pmatrix}$. So the index is $2$. Finally since the $\operatorname{max-ind}(1)$ of $\lambda =1$ is defined to be the max of these two indices, it is $2$.
Normal line to a curve $C_1$
Your approach is correct, but the problem does not require that the two functions are normal at the crossing point. We have only to determine for which values of $a$ there exists a point where the curve $y=4/x$ is normal to the line $y=\frac{a-3}{a}x-\frac{a^2-1}{a}$. Thus we can focus on the first of the two equations correctly shown in the question, i.e. $$\frac{a-3}{a}=\frac{x^2}{4}$$ from which $$x=\sqrt{\frac{4(a-3)}{a}}$$ The last equation has real solutions only for $\frac{a-3}{a}\geq 0$, so that we get $a< 0$ or $a>3$. Note that the values $a=3$ and $a=0$ have to be excluded because the slope of the line reduces to $0$ and $\infty$, respectively.
Show that if $X$ is absolutely continuous and $g$ is absolutely continuous on bounded intervals, then $g(X)$ is absolutely continuous.
Maybe a hint: Since $g$ is invertible under these assumptions, $ F_Y(t) = P( Y \le t) = P(X \le g^{-1}(t)). $ Calculate the derivative and in the chain rule apply the inverse function theorem of calculus.
Find all the natural numbers which are coprimes to $n$ and are not a fermat witness to compositeness of $n$.
We want to find the $a$ in the interval $1\le a\le 34$ such that $a^{34}\equiv 1\pmod{35}$. Note that since $a$ is relatively prime to $5$ and $7$, we have $a^4\equiv 1\pmod{5}$ and $a^{6}\equiv 1\pmod{7}$. It follows that $a^{12}\equiv 1$ modulo each of $5$ and $7$, and hence modulo $35$. We have $a^{12}\equiv 1\pmod{35}$ and $a^{34}\equiv 1\pmod{35}$ if and only if $a^d\equiv 1\pmod{35}$, where $d=\gcd(12,34)=2$. Thus the non-witnesses to primality are the solutions of $x^2\equiv 1\pmod{35}$. This congruence has $4$ solutions, obtained by splicing together the $2$ solutions of $x^2\equiv 1\pmod{5}$ with the $2$ solutions of $x^2\equiv 1\pmod{7}$ using the Chinese Remainder Theorem. There are the two obvious solutions $x\equiv \pm 1\pmod{35}$. This gives $a=1$ and $a=34$. Now solve the system $x\equiv 1\pmod{5}$, $x\equiv -1\pmod{7}$. That gives the solution $a=6$. The final solution is obtained by taking the negative of $6$ modulo $35$, which gives $a=29$.
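A one-line check of the list of non-witnesses (a throwaway verification, not part of the argument):

```python
from math import gcd

n = 35
non_witnesses = [a for a in range(1, n) if gcd(a, n) == 1 and pow(a, n - 1, n) == 1]
print(non_witnesses)   # [1, 6, 29, 34]
```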
Graph terminology: Are there terms for "a source or a sink" and for "neither source nor sink"?
There is some prior art, but nothing that will be universally recognized. In the context of series-parallel digraphs, the source and sink are called the terminals of the graph. This is a slightly more specific case, but you might adopt it for general digraphs. As you've mentioned, there's internal and its cousins interior and intermediate, which I expect to fill in the blank in a sentence along the lines of In a network flow problem, flow conservation must hold at every ______ vertex. But it's equally common even for people studying network flows to just suffer through the awkwardness of saying "at every vertex other than the source and sink". Unfortunately, internal vertices can also mean the non-endpoints of a path (which at least is not too far from this usage) or the vertices of a planar graph not on the external face of a given plane embedding. Specifically in multi-commodity network flow, we might also call the intermediate nodes the transit nodes, but (in my opinion, at least) it makes less sense to use this terminology for a general digraph.
Welldefined Hilbert-Schmidt Operator
Hint: Apply Hölder's Inequality and Fubini-Tonelli to the product $k(x,\cdot)f$.
solving differential equation second order
OK, here's a step-by-step guide to solving the equation $y'' + w^2y = 0 \tag{1}$ with the initial conditions $y(a) = A, \; \; y'(a) = B, \tag{2}$ under the assumption $0 \ne w \in \Bbb R$; when $w = 0$, we have a different, and simpler, situation. The first step is to make an intelligent guess as to what the general form of the solution might be; this is often the most difficult thing to do. In this case, it helps to notice that (1) implies $y'' = -w^2y; \tag{3}$ so we ask ourselves, what kind of functions $y(x)$ have the property that their second derivatives are given by multiplying the function itself by a constant, which in this case is $-w^2$? The ready answer is, of course, exponential functions of the form $y(x) = e^{\mu x}$; so we take that as our first guess. And then . . . We move on to our second step, which is to check and see how our provisional solution works out. To do this, we substitute $y(x) = e^{\mu x}$ into (1); since $y''(x) = \mu^2 e^{\mu x}, \tag{4}$ we find that (1) now takes the form $\mu^2 e^{\mu x}+ w^2 e^{\mu x} = 0, \tag{5}$ and since $e^{\mu x} \ne 0$ for all $x \in \Bbb R$, we may divide (5) through by $e^{\mu x}$ to obtain $\mu^2 + w^2 = 0; \tag{6}$ we have thus converted our provisional solution $y(x) = e^{\mu x}$ to a polynomial (specifically, a quadratic) equation for $\mu$; certainly a more tractable problem. It remains to be seen, however, whether the progress we have made is along a road which ultimately leads to a solution to (1). But with courage in the face of uncertainty, we proceed to our third step, and this one is easy! We solve (6) for $\mu$, obtaining $\mu = \pm iw. \tag{7}$ Fourth step: we check to see if $y(x) = e^{\pm iwx}$ are in fact solutions to (1). With $y(x) = e^{iwx}$ we have $y'(x) = iw e^{iwx} = iwy(x), \tag{8}$ $y''(x) = -w^2 e^{iwx} = -w^2 y(x); \tag{9}$ we thus see that $y(x) = e^{iwx}$ satisfies (3) and hence (1). In a similar manner we see that $y(x) = e^{-iwx}$ is also a solution to (1), (3). Step the fifth: at this point, we need to draw upon the somewhat deeper theoretical fact that there are at most two linearly independent solutions to (1), (3); this fact is usually proved in more advanced courses, and taken on faith in the introductory ones, as we shall do here. Accepting this state of affairs, we note that the functions $e^{\pm i w x}$ are in fact linearly independent over $\Bbb C$, for if there were $0 \ne m_+, m_- \in \Bbb C$ with $m_+ e^{i w x} + m_- e^{-iwx} = 0 \tag{10}$ then we would have $m_+ e^{2i w x} + m_- = 0, \tag{11}$ or $e^{2iwx} = \dfrac{-m_-}{m_+}, \tag{12}$ a constant; clearly an impossible situation. The linear independence of $e^{\pm iwx}$ in turn implies that any solution to (1), (3) is a linear combination of these two solutions, for if $f(x)$ is a third solution, the functions $f(x), e^{iwx}, e^{-iwx}$ are linearly dependent and this means we can write $m_f f(x) + m_+ e^{iwx} + m_- e^{-iwx} = 0 \tag{13}$ for some $m_f, m_+, m_- \in \Bbb C$ not all zero (note that we cannot have $m_f = 0$: otherwise (13) would reduce to (10), which is precluded by the linear independence of $e^{\pm iwx}$); (13) shows that $f(x)$ is a linear combination of the $e^{\pm iwx}$. Using this little bit of theory we see that any solution of (1) may be written in the form $y(x) = m_+ e^{iwx} + m_- e^{-iwx}. \tag{14}$ Realizing (14) holds is the essence of the fifth step.
Having this at hand, we turn to Step the sixth: using (14) and the initial conditions (2), we solve for the coefficients $m_\pm$; from (14) we have $y'(x) = iwm_+ e^{iwx} - iwm_- e^{-iwx}, \tag{15}$ and thus, via (2), we arrive at the following linear system for the coefficients $m_\pm$: $m_+ e^{iwa} + m_- e^{-iwa} = y(a) = A, \tag{16}$ $iwm_+ e^{iwa} - iwm_- e^{-iwa} = y'(a) = B; \tag{17}$ it is then easy to see that $2iwm_+ e^{iwa} = iwA + B, \tag{18}$ or $m_+ = \dfrac{iwA + B}{2iw e^{iwa}}, \tag{19}$ and $2iwm_- e^{-iwa} = iwA - B, \tag{20}$ or $m_- = \dfrac{iwA - B}{2iw e^{-iwa}}. \tag{21}$ We note that when $A, B \in \Bbb R$, then $\bar m_+ = \dfrac{-iwA + B}{-2iw e^{-iwa}} = \dfrac{iwA - B}{2iw e^{-iwa}} = m_-, \tag{22}$ so we may write $y(x) = m_+ e^{iwx} + \overline{m_+ e^{iwx}}; \tag{23}$ in this case, then, $y(x)$ is real; similar remarks apply to $y'(x)$; it, too, is real when $A, B \in \Bbb R$. It is a matter of straightforward algebra, using the formulas (19) and (21) for $m_\pm$, and the Euler formula $e^{i\theta} = \cos \theta + i \sin \theta$, to express $y(x)$ as a linear combination of $\cos(w(x - a))$ and $\sin(w(x - a))$. And so there it is, a step-by-step guide to solving (1), as per request. Finally, it is worth noting that the substitution $y(x) = e^{\mu x}$ works for any linear ordinary differential equation with constant coefficients, viz. $\sum_0^n c_i\dfrac{d^i y}{dx^i} = 0, \tag{24}$ and yields a polynomial equation for $\mu$: $\sum_0^n c_i\mu^i = 0; \tag{25}$ each root of (25) then becomes a solution $y(x) = e^{\mu x}$ of (24); that is not the entire tale, but it is a big part of it! Hope this helps! Cheers! And as ever, Fiat Lux!!!
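If you want to double-check the algebra symbolically, here is a short SymPy sketch (variable names are mine) verifying that the $y(x)$ built from (14), (19) and (21) satisfies the differential equation and the initial conditions:

```python
import sympy as sp

x, a, w, A, B = sp.symbols('x a w A B', real=True)
m_plus = (sp.I*w*A + B) / (2*sp.I*w*sp.exp(sp.I*w*a))
m_minus = (sp.I*w*A - B) / (2*sp.I*w*sp.exp(-sp.I*w*a))
y = m_plus*sp.exp(sp.I*w*x) + m_minus*sp.exp(-sp.I*w*x)

print(sp.simplify(sp.diff(y, x, 2) + w**2*y))     # 0: the ODE (1) holds
print(sp.simplify(y.subs(x, a) - A))              # 0: y(a) = A
print(sp.simplify(sp.diff(y, x).subs(x, a) - B))  # 0: y'(a) = B
```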
check for linear independence
When actually checking whether a set of vectors is linearly dependent, yes, you put them in a matrix and (when you have $n$ vectors in $n$-dimensional space) check the determinant. What you are doing is looking for a solution of the system $$ a_1v_1+a_2v_2+\cdots+a_nv_n=0 $$ with the $a_i$ not all zero, which can only happen if the matrix made up of the vectors $v_1,\dots,v_n$ as rows or columns is not injective (has a nontrivial kernel), i.e. its determinant vanishes.
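A small NumPy illustration (the three sample vectors are mine; the third is deliberately the sum of the first two, so the set is dependent):

```python
import numpy as np

v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([0.0, 1.0, 4.0])
v3 = v1 + v2                          # deliberately dependent on v1 and v2

M = np.column_stack([v1, v2, v3])
print(np.linalg.det(M))               # ~0: the determinant vanishes
print(np.linalg.matrix_rank(M))       # 2 < 3: nontrivial kernel, so the set is dependent
```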
Bounding the error for the remainder of $\log(x)$
I think that you have a mix of $a$ and $x$ in your formulae. The expansion is $$\log(x)=\log(a)+\sum_{n=1}^p \frac{(-1)^{n+1}}{n\, a^n} (x-a)^{n}+O((x-a)^{p+1})$$
How to prove that construction of Farey sequence by mediant is coverage?
Given $0\le r\lt s$ and $\gcd(r,s)=1$, you want to show that $r/s$ shows up as a mediant. We argue by induction on $s$. Since $\gcd(r,s)=1$, there are integers $x,y$ with $$rx-sy=1,$$ $0\lt x\lt s$ and $0\le y\lt r$. By the induction hypothesis, $y/x$ and $(r-y)/(s-x)$ have already turned up in the Farey sequence, and their mediant is $r/s$.
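Here is a small Python sketch of that step (the helper name `farey_parents` is mine; it assumes $0<r<s$ with $\gcd(r,s)=1$, and Python 3.8+ for the modular inverse `pow(r, -1, s)`):

```python
def farey_parents(r, s):
    """Return the two fractions whose mediant is r/s (assumes 0 < r < s, gcd(r, s) = 1)."""
    x = pow(r, -1, s)          # 0 < x < s with r*x ≡ 1 (mod s)
    y = (r * x - 1) // s       # then r*x - s*y = 1 and 0 <= y < r
    return (y, x), (r - y, s - x)

(p, q), (u, v) = farey_parents(3, 7)
print((p, q), (u, v))          # (2, 5) (1, 2): the fractions 2/5 and 1/2
print((p + u, q + v))          # (3, 7): their mediant is 3/7
```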
Demonstrate via finite induction the following statements
It is simply a matter of noticing that\begin{align*}(k+1)\bigl(7+6(k+1)+2(k+1)^2\bigr)-k(7+6k+2k^2)&=15+18k+6k^2\\&=3\times(5+6k+2k^2).\end{align*}
Impossible identity? $ \tan{\frac{x}{2}}$
Note that you seem to have computed $$\dfrac{dt}{dx} = \dfrac{d}{dx}\left( \tan \left(\frac x2\right)\right)= \frac 12 \sec^2\left(\frac x2\right)$$ However, the task at hand is to differentiate with respect to $t$: i.e., $$\bf \dfrac{dx}{dt}\neq \dfrac{dt}{dx}$$ To obtain $\bf \dfrac{dx}{dt}$, we need to express $x$ as a function of $t$: $$\begin{align} t & = \tan\left(\frac x2\right) \\ \arctan t & = \arctan\left(\tan \left(\frac x2\right)\right) \\ \arctan t + n\pi & = \frac x2 \\ \\ 2\arctan t + 2n\pi & = x\end{align}$$ NOW we can find $\dfrac{dx}{dt}$: $$\dfrac{dx}{dt} = \frac{d}{dt}(2\arctan t + \underbrace{2n\pi}_{\text{constant}}) = \dfrac{2}{1 + t^2}$$
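A one-line SymPy check of this (taking the branch $n=0$, which is enough for the derivative):

```python
import sympy as sp

t = sp.symbols('t', real=True)
x = 2*sp.atan(t)                        # the branch with n = 0
print(sp.diff(x, t))                    # 2/(t**2 + 1)
print(sp.simplify(sp.tan(x/2) - t))     # 0: consistent with t = tan(x/2)
```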
Continous functions and supremum
It's because $0 \in \mathbb{R}^n$ is a limit point of $\mathbb{R^n}\backslash 0$. To prove the claim, note that there is nothing to prove if the supremum of $f$ is not attained at $0$. Suppose then that $\sup_{x \in \mathbb{R}^n}f(x) = f(0)$. Continuity of the function implies that $$ \lim_{x\to 0}f(x) = f(0).$$ As such, for all $\epsilon > 0$ there exists $x \neq 0$ such that $f(0) < f(x) + \epsilon$. This implies that $$\sup_{x \in \mathbb{R}^n \backslash 0}f(x) > f(0)-\epsilon$$ for all $\epsilon$. Hence the two suprema are equal.
How to calculate "more precise" average? Giving less importance to extreme values
One way would be to use the median instead of the mean. The median is less sensitive to outliers. If you insist on using the mean, you can find the interquartile range (or the middle X percent of your data, in general), and only calculate the mean of the points that fall within that range. You will be throwing data out, which may be uncomfortable, but you've already said you're willing to discount outliers at least partially.
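A minimal NumPy sketch of both options (the sample data with one outlier is made up for illustration):

```python
import numpy as np

data = np.array([4.8, 4.9, 5.0, 5.1, 5.2, 25.0])   # one obvious outlier

print(np.mean(data))      # ~8.33, dragged up by the outlier
print(np.median(data))    # 5.05, robust to it

# mean of only the values inside the interquartile range
q1, q3 = np.percentile(data, [25, 75])
print(np.mean(data[(data >= q1) & (data <= q3)]))   # 5.05
```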
How many independent parameters in $e^{c_1x}+e^{c_2x}$
Observe that we may rewrite this expression as $$f(x)=e^{c_1x}+e^{c_2x}=e^{\left(\frac{c_1+c_2}{2}x\right)}\left[\exp\left(\frac{c_1-c_2}{2}x\right)+\exp\left(-\frac{c_1-c_2}{2}x\right)\right]=2 e^{c_+ x}\cosh c_-x$$ where $c_{\pm}:=\frac{1}{2}(c_1\pm c_2)$. Written in this form, we see that $f(x)$ has two separate parameters, but is insensitive to the sign of the latter since $\cosh x$ is an even function. Including values of $c_-<0$ will thus 'double-count' the parameter space, and so we limit ourselves to the two-dimensional subset $(c_{+},c_-)\in \mathbb{R}\times \mathbb{R}^+$. This amounts to $(c_1,c_2)$ taking values in the half of $\mathbb{R}^2$ with $c_1\geq c_2$. (Figuratively, we have 'folded' the parameter space in half along the line $c_1=c_2$.)
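A quick symbolic confirmation of the rewriting above (just a sanity check, not part of the argument):

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2', real=True)
cp, cm = (c1 + c2)/2, (c1 - c2)/2
lhs = sp.exp(c1*x) + sp.exp(c2*x)
rhs = 2*sp.exp(cp*x)*sp.cosh(cm*x)

# rewrite cosh in exponentials, expand, and recombine the exponents
print(sp.powsimp(sp.expand(lhs - rhs.rewrite(sp.exp))))   # 0
```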
Finding a monotonically increasing function with limit 1
Another, more natural example (to me anyway) is $$f(x)=1-\frac{1}{x^2+1}.$$ This has the advantage that the limit to $\pm \infty$ is $1$.
Inequality $(\sum_{k=0}^{2n-1}x^k / k!)(\sum_{k=0}^{2n-1}(-x)^k / k!)\leq1$ for all $x\in\mathbb R$
This can be proved purely combinatorially using nothing more than the binomial theorem. In fact, I claim that the polynomial $P_n(x)=\left(\sum\limits_{k=0}^{2n-1} \frac{x^k}{k!}\right)\left(\sum\limits_{\ell=0}^{2n-1}(-1)^\ell \frac{x^\ell}{\ell!}\right)-1$ is even, has no constant term, and has no positive coefficients, which is sufficient to show that it is never positive. The proof of this fact mostly recapitulates the formal-power-series proof that $e^xe^{-x}=1$. The catch is that some of the limits on our sums are different, in a way that makes it messier. To start with, we'll multiply the two sums together, let $m=k+\ell$, and rewrite the whole sum in terms of $m$ and $\ell$. We have to be careful about the limits of the sum when we do this; the square in $k$ and $\ell$ turns into a diamond in $m$ and $\ell$, meaning the sum breaks up into two pieces. We have \begin{align} P_n(x)&=\left(\sum\limits_{k=0}^{2n-1} \frac{x^k}{k!}\right)\left(\sum\limits_{\ell=0}^{2n-1}(-1)^\ell \frac{x^\ell}{\ell!}\right)-1\\ &=\sum_{k=0}^{2n-1}\sum_{\ell=0}^{2n-1} (-1)^\ell\frac{x^k}{k!}\frac{x^\ell}{\ell!}-1\\ &=\sum_{m=0}^{2n-1} \sum_{\ell=0}^m (-1)^\ell \frac{x^m}{\ell!(m-\ell)!}-1+\sum_{m=2n}^{4n-2}\sum_{\ell=m-2n+1}^{2n-1} (-1)^\ell\frac{x^m}{\ell!(m-\ell)!}\\ &=\underbrace{\sum_{m=0}^{2n-1}\frac{x^m}{m!} \sum_{\ell=0}^m (-1)^\ell \binom{m}{\ell}-1}_{(*)}+\underbrace{\sum_{m=2n}^{4n-2}\frac{x^m}{m!}\sum_{\ell=m-2n+1}^{2n-1}(-1)^\ell \binom{m}{\ell}}_{(**)} \end{align} We'll consider the two braced pieces of this sum separately. First, I claim that the sum $(*)$ is identically zero. To see this, note that the coefficient $\sum_{\ell=0}^m (-1)^\ell \binom{m}{\ell}$ of $\frac{x^m}{m!}$ is the binomial expansion of $(1-1)^m$, and so it vanishes except when $m=0$. But when $m=0$, the coefficient of $x^m$ is equal to $1$, so it cancels with the subtracted-off $1$, leaving the entire expression identically zero as desired. Now, we'll examine $(**)$. Our goal is to show that the coefficient $\sum\limits_{\ell=m-2n+1}^{2n-1}(-1)^\ell \binom{m}{\ell}$ of $\frac{x^m}{m!}$ is zero whenever $m$ is odd, and negative whenever $m$ is even; this will complete the proof. Note that this coefficient represents the middle terms of the binomial expansion of $(1-1)^m$; since the entire expansion sums to zero, we can write it in terms of the outer terms, which are friendlier: $$ \sum_{\ell=m-2n+1}^{2n-1}(-1)^\ell \binom{m}{\ell}=-\sum_{\ell=0}^{m-2n} (-1)^\ell \binom{m}{\ell}-\sum_{\ell=2n}^m (-1)^\ell \binom{m}{\ell} $$ Now, we use the identity $\binom{m}{\ell}=\binom{m}{m-\ell}$ to combine these two sums. When $m$ is odd, the factor of $(-1)^\ell$ means they combine with opposite sign and therefore vanish. When $m$ is even, they combine with the same sign, yielding $$ -2\sum_{\ell=0}^{m-2n} (-1)^\ell \binom{m}{\ell} $$ So we need only show that the sum $\sum_{\ell=0}^{m-2n} (-1)^\ell \binom{m}{\ell}$ is positive when $m$ is even and $2n \leq m \leq 4n-2$. But with these bounds on $m$, $m-2n < \frac{m}{2}$, meaning the terms of the sum increase in absolute value as $\ell$ increases. Since $m$ is even, the last term in the sum will be positive, meaning that we can pair each negative term in the sum off with a larger positive term. Thus the entire sum is positive, and we are done.
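For small $n$ one can verify the three claimed properties of $P_n$ directly with SymPy (a quick sanity check, not a substitute for the proof):

```python
import sympy as sp

x = sp.symbols('x')

def P(n):
    S = sum(x**k / sp.factorial(k) for k in range(2*n))
    T = sum((-x)**k / sp.factorial(k) for k in range(2*n))
    return sp.expand(S*T - 1)

for n in range(1, 5):
    cs = sp.Poly(P(n), x).all_coeffs()[::-1]   # coefficient of x^0, x^1, ...
    print(n,
          cs[0] == 0,                     # no constant term
          all(c == 0 for c in cs[1::2]),  # even polynomial
          all(c <= 0 for c in cs))        # no positive coefficients
```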
Finitely generated modules over complete ring is zero if
Hint: Since $M\otimes R/I^n\simeq M/I^nM$ then $M=I^nM$ for every $n$. Now use that $M$ is an inverse limit of the system ${M/I^nM}$.
Distribution of slope of the line
The cdf is the area of the triangle enclosed in the unit square below (to the right of) the line $y=Sx$; for $S \le 1$ this gives $$ F(S)= \frac S2. $$ Notice that $$ F\left(\frac 1S\right) = 1-F(S), $$ so for $S>1$ $$F(S) = 1-\frac1{2S}. $$
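A quick Monte Carlo check (this assumes the setup is a point chosen uniformly in the unit square, with $S$ the slope of the line joining it to the origin):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.random(10**6), rng.random(10**6)   # uniform point in the unit square
s = y / x                                     # slope of the line through the origin

def F(t):                                     # cdf derived above
    return t/2 if t <= 1 else 1 - 1/(2*t)

for t in (0.3, 1.0, 2.5):
    print(t, np.mean(s <= t), F(t))           # empirical vs. theoretical cdf
```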
Reference for Weak convergence in hilbert space
I took an undergraduate functional analysis course which used the text "Introductory Functional Analysis with Applications" by Erwin Kreyszig. Weak convergence is discussed in Chapter 4. I think it is easy to follow and suitable for self-learners. Let me also say a little bit concerning your question. In a Hilbert space $H$, every bounded linear functional $T \in H^*$ can be characterized by an element $y \in H$, in the sense that $$Tx = (x, y) \text{ for } x \in H, $$ by the F. Riesz Representation Theorem on a Hilbert space. A sequence $(x_n) \subseteq H$ is said to converge to $x \in H$ weakly if $Tx_n \rightarrow Tx$ for every $T \in H^*$. As a consequence of the F. Riesz Representation Theorem, $(x_n) \subseteq H$ converges to $x \in H$ weakly if and only if $$ (x_n, y) \rightarrow (x,y) \text{ for every } y \in H.$$ Strong convergence implies weak convergence (easy to check) but the reverse is not true. Consider $H = L^2(-\pi,\pi)$ equipped with the $L^2$ norm and $f_n(x) = \sin(nx) \text{ for } x \in [-\pi, \pi]$. Then $(f_n) \subseteq H$ converges to $0$ weakly but not strongly.
Solve $e^{-\frac{g\cdot t}{3000}}\cdot\cos\left(\sqrt{4-\frac{1}{9\cdot10^6}\cdot g^2}\cdot t-\frac{\pi}{2}\right)=0,1$ for t dependent on g
Well, in general we have: $$\exp\left(\text{a}\cdot t\right)\cdot\cos\left(\omega\cdot t+\varphi\right)=0\tag1$$ Now, we get two possible solutions: Solution $1$: $$\exp\left(\text{a}\cdot t\right)=0\tag2$$ But equation $\left(2\right)$ does not have a solution. Solution $2$: $$\cos\left(\omega\cdot t+\varphi\right)=0\space\Longleftrightarrow\space t=-\frac{\pi+2\cdot\varphi-2\pi\cdot\text{n}}{2\cdot\omega}\tag3$$ When $\omega\ne0$ and $\text{n}\in\mathbb{Z}$ So, in your problem $\omega=\sqrt{4-\frac{1}{9\cdot10^6}\cdot\text{g}^2}$ and $\varphi=-\frac{\pi}{2}$: $$t=-\frac{\pi+2\cdot\left(-\frac{\pi}{2}\right)-2\pi\cdot\text{n}}{2\cdot\sqrt{4-\frac{1}{9\cdot10^6}\cdot\text{g}^2}}=\frac{\pi\cdot\text{n}}{\sqrt{4-\frac{\text{g}^2}{9\cdot10^6}}}\tag4$$ When $\text{g}^2\ne36\cdot10^6$
Prove: T$_1$ = T$_2$
If $\mathcal{T}_1 \subseteq \mathcal{T}_2$ (WLOG), consider the identity map $1_X(x)=x$, $$1_X: (X, \mathcal{T}_2) \to (X,\mathcal{T}_1).$$ For $O \in \mathcal{T}_1$ we have that $1_X^{-1}[O] = O \in \mathcal{T}_2$, so $1_X$ is continuous. If $C$ is closed in $\mathcal{T}_2$, then $C$ is compact, because a closed subspace of a compact space is compact, and then $1_X[C]=C$ is compact in $(X,\mathcal{T}_1)$ too; as that space is Hausdorff, $C$ is closed in $(X,\mathcal{T}_1)$. So $1_X$ is a closed and continuous bijection, hence a homeomorphism, and this implies $\mathcal{T}_1 = \mathcal{T}_2$.
Inverse function as series
Here's a possible start. $$\begin{aligned} x(y) &=\frac{1}{4}\Big(\operatorname{arctanh}\Big( \sqrt{\tfrac{2}{\epsilon^2}}\, y + C_1\Big) - \frac{2 \big( \sqrt{\tfrac{2}{\epsilon^2}}\, y + C_1\big)}{\big(\sqrt{\tfrac{2}{\epsilon^2}}\, y + C_1\big)^2-1}\Big)\\ &=\frac{1}{4}\Big(\operatorname{arctanh}( a y + c) - \frac{2 ( ay + c)}{(ay + c)^2-1}\Big), \qquad a=\sqrt{\tfrac{2}{\epsilon^2}},\ c=C_1\\ &=\frac{1}{4}\Big(\operatorname{arctanh}(z) - \frac{2z}{z^2-1}\Big), \qquad z = ay+c\\ &=\frac{1}{4}\Big(\operatorname{arctanh}(z) - \operatorname{arctanh}(w)\Big), \qquad w = \tanh\Big(\frac{2z}{z^2-1}\Big)\\ &=\frac{1}{4}\operatorname{arctanh}\Big(\dfrac{z-w}{1-zw}\Big), \end{aligned}$$ so $$\tanh(4x(y)) =\dfrac{z-w}{1-zw}.$$ Not sure where to go from here so I'll stop.
How to prove an increasing sequence that converges is bounded above by its limit
HINT: You can easily show that if $a_n>L$ for some $n$, then by the definition of the limit some later term must be smaller than $a_n$, which is impossible for an increasing sequence. You only need to formalize this idea along the lines of "assume there exists $n$ such that ... then by the definition of the limit ... contradiction". Namely, suppose there exists $n_1$ such that $a_{n_1}>L$, and let $d=a_{n_1}-L>0$. Taking $\epsilon=d$, by the definition of the limit there must exist $n_2>n_1$ such that $|a_{n_2} -L|<\epsilon$, which implies $a_{n_2}<a_{n_1}$, contradicting the fact that the sequence is increasing.
Generalizing compound probability distributions
I just found out about improper priors. I believe $G$ would be called an improper prior.
Does the Laplace operator include the second derivative with respect to time variable?
It depends... the Laplacian is always defined with respect to some metric: $\Delta f = \nabla \cdot \nabla f$ and divergence requires a metric. Alternatively, you can define $\Delta$ as the gradient, in the sense of the calculus of variations, of the Dirichlet energy $\int \langle \nabla f, \nabla f\rangle\,dV$ and here again the metric is seen. My guess is that for classical fluid dynamics the metric is simply the Euclidean $dx^2+dy^2+dz^2$. In which case $\Delta f(x,z,t) = f_{xx} + 0 + f_{zz}.$
Why does $\int \sin y\;dx = x \sin y + C$?
$$ \int{\sin{x}\;dx}=-\cos{x}+C $$ But in this case $\sin{y}$ plays the role of a constant: $$ \int{\sin {y}\;dx}=\sin{y}\int{dx}=x\sin{y}+C $$
Relation between $ \bigcap_{i \in I}A_i $ and $ \bigcap_{i=1}^{n}A_i $
The notation $$\bigcap _{i=1}^nA_i$$ denotes the same as $$ \bigcap_{i\in I}A_i$$ for the special case $I=\{1,2,\ldots, n\}$.
Writing a sentence that is true in one model and false in the other
HINT: There is nothing in $M_2$ corresponding to the relationship between the subsets $\{1\}$ and $\{2\}$ of $\Bbb N$ in $M_1$. Your sentence should start $\exists x\,\exists y\ldots$.
Need some help understanding this calculation of distance from a point to a line
The distance of a point $P$ from a line $L$... Write $N$ for the line through $P$ perpendicular to the line $L$. Parametrize $N$: $$ {\bf x}(t)= \pmatrix {x(t)\\ y(t)} = P + t {\bf n},$$ where $\bf n$ is a vector of length one (in the direction of $N$). Now, ${\bf x}(0) = P$. Therefore, since $\bf n$ is of length one (we are walking away from $P$ at speed one), the distance of $P$ from $L$ is, up to sign, equal to the $t$ such that ${\bf x}(t)$ ALSO belongs to $L$ (how long it takes us to reach $L$ from $P$). Your algorithm (I haven't checked that it's right!) should basically boil down to solving for $t$. The line $L$ is the line perpendicular to $\bf n$ through $P_1$, and therefore has equation $$ {\bf n}\cdot ( {\bf x}- P_1)=0,$$ for any ${\bf x} =(x,y)$ on the line $L$. So substituting, we can solve for $t$: $${\bf n} \cdot (P+ t{\bf n} - P_1)=0.$$ This reduces to $${\bf n}\cdot(P-P_1) + t{\bf n}\cdot {\bf n} = 0.$$ But since ${\bf n}$ has length one, ${\bf n} \cdot {\bf n} =1$. Therefore, $$ {\bf n}\cdot(P-P_1) + t = 0,$$ and the distance of $P$ to $L$ has to be $$ | {\bf n}\cdot(P-P_1)|. $$ Now, ${\bf n}$ is perpendicular to your unit vector ${\bf u}$, so if ${\bf u} = (u_1, u_2)$, then we can take ${\bf n} = (-u_2,u_1)$. This should match up with your algorithm (up to swapping the roles of $P_1$ and $P_2$, say)... Hope this helps. EDIT/ADDENDUM... Much more simply! Look at the vector $P-P_1$. We want the length of its projection onto $\bf n$, as this is precisely the distance $d$ of $P$ to the line $L$, i.e., geometrically, $d$ equals (up to sign) $ |P-P_1|\cos \theta$, where $\theta$ is the angle between (the segment) $P-P_1$ and $\bf n$. Use dot products: since $\bf n$ has length one, once again, $$d=\Big| \, |P-P_1| | {\bf n}| \cos \theta \, \Big| = | {\bf n}\cdot(P-P_1)|. $$
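Here is a small NumPy sketch of the resulting formula $d = |{\bf n}\cdot(P-P_1)|$ (the function name and the sample points are mine, not from the original algorithm):

```python
import numpy as np

def point_line_distance(P, P1, P2):
    """Distance from point P to the line through P1 and P2 (in the plane)."""
    u = (P2 - P1) / np.linalg.norm(P2 - P1)   # unit vector along the line
    n = np.array([-u[1], u[0]])               # unit normal n, perpendicular to u
    return abs(n @ (P - P1))

P  = np.array([3.0, 4.0])
P1 = np.array([0.0, 0.0])
P2 = np.array([1.0, 0.0])
print(point_line_distance(P, P1, P2))         # 4.0: distance from (3, 4) to the x-axis
```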
A proof of Hölder's inequality and trying to understand this
$a_i$ and $b_i$ are just a very cunning change of variables, nothing more. As you can see, $a_i=|x_i|^{1/\theta}$ and $b_i=|y_i|^{1/(1-\theta)}$. Next, $a_i>0$ and $b_i>0$, so $a_i^\theta b_i^{1-\theta} >0$, and transforming $$a_i^\theta b_i^{1-\theta} \leq \theta a_i A^{\theta-1} B^{1-\theta}+(1-\theta)b_i A^\theta B^{-\theta}$$ into $$ \sum_i (a_i^\theta b_i^{1-\theta}) \leq \sum_i (\theta a_i A^{\theta-1} B^{1-\theta}+(1-\theta)b_i A^\theta B^{-\theta})$$ is completely licit. The inequality is finally obtained by noticing that $$\sum_i (\theta a_i A^{\theta-1} B^{1-\theta}+(1-\theta)b_i A^\theta B^{-\theta})=A^\theta B^{1-\theta}$$ The proof is completed by noticing that $|\sum_{k=1}^n x_k y_k| \leq \sum_{k=1}^n |x_k| |y_k|$.
The group generated by exponents of $\mathfrak{g} \le \mathfrak{gl}_n$
This is Proposition 8.41 in Fulton and Harris' Representation Theory: A First Course. Their proof uses exactly the Baker-Campbell-Hausdorff formula as you suggest. You only need to consider the case that the $X_i$s are small because every element of $G$ is a product of exponentials where the $X_i$ are small; more formally, $G$ is (connected, hence) generated by an arbitrarily small neighborhood of the identity.
Expansive homeomorphism on $\mathbb{R}^{2}$
Let $(S^2,d)$ be a metric space with $d$ being the Euclidean metric inherited from $\mathbb{R}^3$. Using the stereographic projection $P: S^2-\{(0,0,1)\}\to \mathbb{R}^2$ we define the metric on $\mathbb{R}^2$ to be given by $d'(P(x_1,y_1,z_1), P(x_2, y_2, z_2))=d((x_1,y_1,z_1),(x_2, y_2, z_2))$. Note that $P^{-1}: \mathbb{R}^2\to S^2-\{(0,0,1)\}$ is given by $$ (X,Y)\mapsto \left(\frac{2X}{X^2+Y^2+1}, \frac{2Y}{X^2+Y^2+1}, \frac{X^2+Y^2-1}{X^2+Y^2+1}\right) $$ Now consider a linear function $f:\mathbb{R}^2\to \mathbb{R}^2$ determined by a matrix $M=\mathrm{diag}(e^\lambda, e^{\lambda'})$ for some $\lambda,\lambda'\neq 0$. Consider the points $A_0=(e^\mu, 0), B_0=(e^{\nu}, 0)$; also, for simplicity, let $A_n=f^n(A_0)$ and $B_n=f^n(B_0)$. Then (by a bit of calculation) $$ P^{-1}(A_n)=\left(\frac{1}{\cosh (n\lambda+\mu)}, 0, \tanh (n\lambda+\mu)\right), \quad P^{-1}(B_n)=\left(\frac{1}{\cosh (n\lambda+\nu)},0, \tanh (n\lambda+\nu)\right) $$ Then with a bit of calculation $$d'(A_n, B_n)=\sqrt{\frac{2[\cosh(\mu-\nu)-1]}{\cosh(n\lambda+\mu)\cosh(n\lambda+\nu)}}\leq \sqrt{2[\cosh(\mu-\nu)-1]}$$ So $f$ cannot possibly be $\epsilon$-expansive for any $\epsilon>0$: by choosing $\nu=0$ and $\mu$ such that $\cosh \mu < 1+\epsilon^2/2$, the points $A_0=(e^\mu,0)$ and $B_0=(1,0)$ are such that for all $n\in \mathbb{Z}$ one has $d'(A_n, B_n)< \epsilon$. If you take a closer look at the above argument, you will see that if $M$ is any invertible diagonal matrix then $f$ is not $\epsilon$-expansive for any $\epsilon>0$. In fact more generally, if (the invertible matrix) $M$ has any eigenvectors (over $\mathbb{R}$ that is) then $f$ is not expansive [If $v$ is that eigenvector take $A_0=e^\mu v$ and $B_0=v$.]
Graham scan with collinear points
I think you've omitted one sentence from the Wikipedia description of Graham's algorithm: This process is continued for as long as the set of the last three points is a "right turn" So after correctly discarding point (2, 4) you continue to check if last 3 points make a left or right turn. In your example (3, 1), (3, 7), (2, 5), (1, 6) last 3 points make a right turn so we're discarding (2, 5) and continue with (3, 1), (3, 7), (1, 6). Further processing would look like this: Next is (2,3) last three points (3, 7), (1, 6), (2,3) - left turn. Next is (1,2) last three points (1, 6), (2,3), (1,2) - right turn. Discard (2,3) leaving us with (3, 7), (1, 6), (1,2) - left turn. End of processing result is: (3, 1), (3, 7), (1, 6), (1, 2).
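A minimal Python sketch of that popping loop (it assumes the points are already sorted by polar angle around the starting point, and it reproduces the walkthrough above; change `<= 0` to `< 0` if you want to keep collinear points on the hull):

```python
def cross(o, a, b):
    """> 0 for a left turn o -> a -> b, < 0 for a right turn, 0 if collinear."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def scan(points):
    """Keep discarding the middle point while the last three do not make a left turn."""
    hull = []
    for p in points:                  # points assumed sorted by angle already
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

# order taken from the walkthrough above, after (2, 4) has already been discarded
pts = [(3, 1), (3, 7), (2, 5), (1, 6), (2, 3), (1, 2)]
print(scan(pts))                      # [(3, 1), (3, 7), (1, 6), (1, 2)]
```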