If an element in the ring of integers is a square modulo every prime, then it is a square in the ring of integers.
Let me expand Ethan Alwaise's comment into an answer. Let $L$ be a number field and $\alpha\in \mathcal O_L$ be a square modulo every prime. Suppose, for contradiction, that $\alpha$ is not a square. Then the polynomial $x^2-\alpha$ is irreducible in $L[x]$. Hence, the field extension $L(\sqrt{\alpha})/L$ has degree 2. This implies that the Galois group of this extension is $C_2$. Hence, by Chebotarev there exist infinitely many primes of $L$ that are inert in $L(\sqrt{\alpha})$. Choose one of these primes $p$ large enough. Then by Dedekind-Kummer the splitting of $p$ in $L(\sqrt{\alpha})$ is governed by the factorization of $x^2-\alpha$ modulo $p$. But this is a product of two linear factors by hypothesis, leading to a contradiction.
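For a concrete picture in the simplest case $L=\mathbb{Q}$ (my illustration, not part of the original answer): for an odd prime $p\nmid\alpha$, $$p \text{ inert in } \mathbb{Q}(\sqrt{\alpha})\iff \left(\frac{\alpha}{p}\right)=-1 \iff x^2-\alpha \text{ has no root mod } p,$$ so the infinitude of inert primes directly contradicts the assumption that $\alpha$ is a square modulo every prime.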
Inequality in integration
$1+t^2 \geq 1$ for every $t\geq 0$. Take reciprocals. It is not clear why $t^k$ disappears, especially when $k \geq 6$.
Baby Rudin Chapter 3 Exercise 11(d)
Take $a_{m^2} = 1/m$, while $a_n = 2^{-n}$ if $n$ is not a square. Note that $a_{m^2}/(1 + m^2 a_{m^2}) \le 1/m^2$.
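Spelling out why this counterexample works: $$\sum_{n} a_n \ \ge\ \sum_{m} a_{m^2} = \sum_m \frac1m = \infty, \qquad \sum_n \frac{a_n}{1+na_n} \ \le\ \sum_m \frac{1}{m(m+1)} + \sum_{n \text{ not a square}} 2^{-n} < \infty,$$ so $\sum a_n$ diverges while $\sum a_n/(1+na_n)$ converges.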
LP modeling issue (factory process)
First let me come back to your first attempt, because it is important to understand why it does not work; then I will give some pointers as to why the second set of equations is probably correct. Your first attempt at modeling the problem was: \begin{align} p_P^{U_1} = c_A^{U_1} + 0.5~c_B^{U_1} \\ p_Q^{U_1} = 0.2~c_A^{U_1} + 0.3~c_B^{U_1} \text{.} \end{align} It does not work because:

- The meaning of each equation is not consistent with the problem considered. Put into words, the equations correspond to the following sentences: $1$ unit of $P$ requires $1$ unit of $A$ or $0.5$ units of $B$; $1$ unit of $Q$ requires $0.2$ units of $A$ or $0.3$ units of $B$.
- These equations do not take into account that the raw material is consumed when creating a product. In other words, if there is a limited amount of raw materials, these two equations miss the trade-off that exists between the quantity $p_P^{U_1}$ of product $P$ produced and the quantity $p_Q^{U_1}$ of product $Q$. Similarly, if the goal is to produce a certain amount of each product, they miss the fact that the raw materials $A$ and $B$ used to create product $P$ are disjoint from the raw materials $A$ and $B$ used to create product $Q$.

Your second attempt at modeling the problem is: \begin{align} c_A^{U_1} = p_P^{U_1} + 0.2~p_Q^{U_1} \\ c_B^{U_1} = 0.5~p_P^{U_1} + 0.3~p_Q^{U_1} \text{.} \end{align} This is probably correct:

- The meaning of each equation is respectively: $1$ unit of $A$ can be used to produce $1$ unit of $P$ or $5$ units of $Q$; $1$ unit of $B$ can be used to produce $2$ units of $P$ or $\frac{1}{0.3}$ units of $Q$.
- If you produce one unit of product $P$ you increase the need for both $A$ and $B$, since $p_P^{U_1}$ appears in both equations. So if you need to produce a certain amount of each product, the amount of raw materials needed will be computed correctly.
- If there is a limited amount of raw materials (fixed $c_A^{U_1}$ and $c_B^{U_1}$), these two equations capture the trade-off between the quantity $p_P^{U_1}$ of product $P$ produced and the quantity $p_Q^{U_1}$ of product $Q$: if you consume some raw materials to produce $P$, you have less left for $Q$.

I said "probably correct" because you did not give the underlying problem you want to solve (maximizing the number of products created under a limited amount of raw materials, maximizing your profit when selling your products, minimizing the amount of raw material used for a given profit, ...). Even if I cannot think of a counterexample right away, it may still happen that you need to change your model for a specific problem (in fact it is very likely that you will need to add more equations); see the sketch below.
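To make this concrete, here is a minimal sketch of the kind of LP one might solve with the second (consumption) model; the objective and the stock numbers are hypothetical, since the original problem statement wasn't given:

```python
from scipy.optimize import linprog

# Hypothetical instance: maximize p_P + p_Q subject to the second model's
# consumption equations with fixed (made-up) stocks c_A = 10, c_B = 6:
#   c_A >= 1.0*p_P + 0.2*p_Q
#   c_B >= 0.5*p_P + 0.3*p_Q
c = [-1.0, -1.0]                 # linprog minimizes, so negate the objective
A_ub = [[1.0, 0.2], [0.5, 0.3]]  # raw-material consumption per unit produced
b_ub = [10.0, 6.0]               # available stocks of A and B
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)  # optimal production plan (p_P, p_Q)
```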
Does the maximum cut imply the minimum flow?
Yes. These are dual problems. Note that the max-flow problem can be formulated as a Linear Program. So the Max-Flow Min-Cut Theorem follows from LP duality.
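To make the duality concrete, here is one standard LP formulation of max-flow on a directed graph $G=(V,E)$ with capacities $c_{uv}$, source $s$ and sink $t$ (my notation, not from the original answer): $$\max_{f}\ \sum_{(s,v)\in E} f_{sv}-\sum_{(v,s)\in E} f_{vs}\quad\text{s.t.}\quad 0\le f_{uv}\le c_{uv}\ \ \forall (u,v)\in E,\qquad \sum_{(u,v)\in E} f_{uv}=\sum_{(v,w)\in E} f_{vw}\ \ \forall v\in V\setminus\{s,t\}.$$ Taking the LP dual of this program yields the min-cut LP, and LP duality gives equality of the optimal values; the statement that the dual has an integral optimum corresponding to an actual $s$-$t$ cut is the extra combinatorial content of the Max-Flow Min-Cut Theorem.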
What is wrong with this induction proof?
Hint: Try the inductive step with $k = 0$.
Rational locus of a function defined on $x^2+x^3=y^2$
To see that $f$ cannot be extended to the point $(0,0)$, one can argue as follows, at least if the ground field is the complex numbers. For $(x,y) \neq (0,0)$, $f(x,y)$ is the slope of the chord joining $(0,0)$ to $(x,y)$. But at $(0,0)$ the curve has two branches with distinct tangent directions, so the value of $f(x,y)$ approaches two different limits as we approach $(0,0)$ in these different directions.
Volume using spherical coordinates.
It seems to me that using cylindrical coordinates will make our job easier. As you see, our region is symmetric, so it is enough to consider $1/8$ of the whole volume, the part with $x\ge 0, y\ge 0, z\ge 0$. Looking at the bottom of this region, which lies in the plane $z=0$, the limits are as follows: $$\theta|_0^{\pi/2},~~r|_{a\cos\theta}^a,~~z|_0^{\sqrt{a^2-r^2}}$$ Note: for the plots I assumed that $a=2$, and the $a$ here is the $r$ in your original equations.
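Carrying out the integration with these limits (a quick check, assuming the region is as set up above): $$V = 8\int_0^{\pi/2}\!\int_{a\cos\theta}^{a}\!\int_0^{\sqrt{a^2-r^2}} r\,dz\,dr\,d\theta = 8\int_0^{\pi/2}\frac{(a^2-a^2\cos^2\theta)^{3/2}}{3}\,d\theta = \frac{8a^3}{3}\int_0^{\pi/2}\sin^3\theta\,d\theta = \frac{16a^3}{9}.$$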
If $p(x)$ has even degree and a positive leading coefficient and $p(x)\ge p''(x)$, then $p(x)\ge0$
Hints in highlights: First, be sure you understand why $\;\lim\limits_{x\to\pm\infty}f(x)=+\infty\;$. Next: suppose there is $\;a\in\Bbb R\;$ such that $\;f(a)<0\;$. By the above, together with the continuity and differentiability of $\;f(x)\;$, it follows that $\;f(x)\;$ has a local minimum at some point where the value of the function is negative. Now use the given data to get a contradiction.
If an inequality has solutions or not
$(\log n)^{3n}$ is a fast-growing function, and the shortest way to solve the inequality is probably to try the integers systematically. The sequence of values $$0,\ 0.11\cdots,\ 2.33\cdots,\ 50.38\cdots$$ leaves little doubt. If the question is understood as $$n>\log_{10}n,$$ the sequence, starting from $10$, is $$1,\ 3.37\cdots,\ 15.53\cdots$$
Finding local max of analytic function
One (not elegant) solution is to substitute $z=a+bi$ with $a^2+b^2=1$, then use Lagrange multipliers to find the extremum of $|f|^2$ (as a function of $a$ and $b$) on the unit circle.
Why are two functions different if they differ in their codomain?
It's true that a function is a relation, but it's Very Convenient to be more specific, and agree that a function $f:X \to Y$ is a relation "from $X$ to $Y$", with both sets explicitly given as part of the definition (in the sense of Fred's answer). If the codomain is not explicitly specified in advance, for example, The concept of "surjectivity" loses its usefulness: Every function is surjective to its image. Families of functions "do not live in" (i.e., are not subsets of) a single universe $X \times Y$.
Transience of the usual random walk on $\Bbb{Z}^3$ for non mathematicians.
Have a look at Example 12.2 on page 6 of the PDF at this link. This is a chapter of the freely available book Introduction to Probability by Grinstead and Snell. They describe how to use basic counting to conclude transience of random walks in dimension 3 and higher.
Divergence of the solution.
A direct argument, not a proof by contradiction, for proving the existence of a global-in-time solution for every initial datum is the following one. Assuming $k$ is an upper bound for $g:\mathbb{R}\to\mathbb{R}$, i.e. $|g(x)|\le k$ for all $x\in\mathbb{R}$, we also know that $f:\mathbb{R}\to\mathbb{R}$ is continuous: this implies that, on any compact (time) interval $I$, we have $$ |f(t)|\le \max_{t\in I} |f(t)|<\infty. $$ Considering $I=[t_0,t_1]$ or $[t_1,t_0]$ (we must also consider the behavior of the solution backward in time) and defining $M^{t_1}_{t_0}\triangleq\max_{t\in I} |f(t)|<\infty$, we have that $$ |x^\prime(t)|\le k M^{t_1}_{t_0}<\infty\quad\forall t\in I\label{1}\tag{1} $$ Note that $M^{t_1}_{t_0}$ depends in general on both $t_0$ and $t_1$. Equation \eqref{1} implies $$ |x(t)-x_0|=\Bigg|\int\limits_{t_0}^{t}x^\prime(s)\,\mathrm{d}s\Bigg|\le \begin{cases} \displaystyle\int^{t_0}_tk M^{t_1}_{t_0} \mathrm{d}s &t_1<t_0\\ \\ \displaystyle\int_{t_0}^tk M^{t_1}_{t_0} \mathrm{d}s &t_1>t_0 \end{cases} \le k M^{t_1}_{t_0} |t_1-t_0| $$ i.e. $$ |x(t)|\le k M^{t_1}_{t_0} |t_1-t_0|+|x_0|<\infty\quad \forall t\in I,\;\forall(t_0,x_0)\in\mathbb{R}^2\label{2}\tag{2} $$ The arbitrariness of $t_1$ and formula \eqref{2} imply that the solution $x(t)$ of the posed Cauchy problem exists and is finite for each $(t_0,x_0)\in\mathbb{R}^2$ and each time $t$.
Prove or disprove the limit of a definite integral
Here's an incomplete solution (also known as an idea, I guess). The change of variables $y=\frac1{\xi u}$ gives $$ \xi\int_0^1\exp\bigg(\frac{\xi^2}{4}\frac{2y^3-3y^2}{{(1-y)}^2}\bigg) \,dy = \int_{1/\xi}^\infty \exp \bigg( {-}\frac{\xi (3 \xi u-2)}{4 u (\xi u-1)^2} \bigg) \frac{du}{u^2}. $$ If the interchange of limits can be justified somehow, then we could get \begin{align*} \lim_{\xi\to\infty} \xi\int_0^1\exp\bigg(\frac{\xi^2}{4}\frac{2y^3-3y^2}{{(1-y)}^2}\bigg) \,dy &= \lim_{\xi\to\infty} \int_{1/\xi}^\infty \exp \bigg( {-}\frac{\xi (3 \xi u-2)}{4 u (\xi u-1)^2} \bigg) \frac{du}{u^2} \\ &\underset{?}= \int_0^\infty \exp \bigg( {-}\frac3{4u^2} \bigg) \frac{du}{u^2} = \sqrt{\frac\pi3}. \end{align*}
Dimension of a vector space.
Assuming $a_i\ne 0$ for $i=1,\cdots,5$ and $\vec{v_i}\ne \vec{0}$ for $i=1,\cdots,5$, the smallest dimension is $2$: consider $\vec{v_1}=\cdots =\vec{v_4}\ne \vec{v_5}$. Note that the dimension can't be one. In such a case $\vec{v_i}=\alpha_i\vec{v_1}$, $i=2,3,4,5$. Now, $$-(\alpha_2+\alpha_3+\alpha_4+\alpha_5)\vec{v_1}+\vec{v_2}+\vec{v_3}+\vec{v_4}+\vec{v_5}=\vec{0}.$$ If $\alpha_2+\alpha_3+\alpha_4+\alpha_5\ne 0$ we are done. Note that $\alpha_2+\alpha_3+\alpha_4+\alpha_5=0$ means $\vec{v_2}+\vec{v_3}+\vec{v_4}+\vec{v_5}=\vec{0}.$ In the other case, write $\vec{v_i}=\beta_i\vec{v_5}$, $i=1,2,3,4$. Now, $$\vec{v_1}+\vec{v_2}+\vec{v_3}+\vec{v_4}-(\beta_1+\beta_2+\beta_3+\beta_4)\vec{v_5}=\vec{0}.$$ If $\beta_1+\beta_2+\beta_3+\beta_4\ne 0$ we are done. Note that $\beta_1+\beta_2+\beta_3+\beta_4=0$ means $\vec{v_1}+\vec{v_2}+\vec{v_3}+\vec{v_4}=\vec{0}.$ So we have to show that $$\alpha_2+\alpha_3+\alpha_4+\alpha_5=0=\beta_1+\beta_2+\beta_3+\beta_4$$ is not possible. In such a case $$\vec{v_1}+2\vec{v_2}+2\vec{v_3}+2\vec{v_4}+\vec{v_5}=(\vec{v_1}+\vec{v_2}+\vec{v_3}+\vec{v_4})+(\vec{v_2}+\vec{v_3}+\vec{v_4}+\vec{v_5})=\vec{0}+\vec{0}=\vec{0},$$ which is not possible by assumption.
How to get WolframAlpha to determine $f(2)$ when $f(x)=3x^2-2x+\int_0^2f(t)dt$
Let $\int_0^2f(t)dt=a$. We get $$f(x)=3x^2-2x+a$$ Let $F'(x)=f(x)$. We get $$F(x)=x^3-x^2+ax+C$$ Therefore: $$\begin{eqnarray*}a&=&\int_0^2f(t)dt=F(2)-F(0)\\&=&(2^3-2^2+2a+C)-(0+C)\Longrightarrow a=-4\end{eqnarray*}$$ So: $$f(x)=3x^2-2x-4\Longrightarrow f(2)=4$$
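If WolframAlpha balks, the same computation can be scripted; here is a quick check with sympy (a sketch, assuming sympy is installed):

```python
import sympy as sp

x, t, a = sp.symbols('x t a')
f = 3*x**2 - 2*x + a
# impose the self-consistency condition a = \int_0^2 f(t) dt
a_val = sp.solve(sp.Eq(a, sp.integrate(f.subs(x, t), (t, 0, 2))), a)[0]
print(a_val)                     # -4
print(f.subs({a: a_val, x: 2}))  # 4
```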
absolute convergence and series. bounded sequences
This is false. $\sum 1/n^3$ and $\sum n/n^3$ both converge absolutely. But $n$ is obviously not bounded.
Upper-triangular matrix is invertible iff its diagonal is invertible: C*-algebra case
So, the exercise is incorrect as stated, as the nice example in the question shows. They probably meant to say that the matrix is invertible in the subalgebra of upper triangular matrices if and only if the diagonal entries are invertible. This is the version given on page 16 in a set of lecture notes by Matthes and Szymański based primarily on the same book. They also give a counterexample to the original statement.
Properties of preimage of critical values of a manifold
There is a theorem in Lee's Introduction to Smooth Manifolds which states that for every closed subset $K$ of a smooth manifold $M$, there is a smooth function $f:M\to\mathbb{R}$ such that $f^{-1}(0)=K$ (!). So there is no hope, in the general case, of putting structure on the preimage of a critical value. But you can ask your function to be Morse, that is, every critical point is non-degenerate, i.e. you can find coordinates $\varphi:U\to\mathbb{R}^n$, $p\mapsto (x_1(p),\dots,x_n(p))$, centered at each critical point $c\in U$ such that in these coordinates $$f(x_1,\dots,x_n)=f(c)+x_1^2+\dots+x_k^2-x_{k+1}^2-\dots-x_n^2.$$ This tells you that the preimage of a critical value of a Morse function is locally homeomorphic either to a Euclidean space (near regular points of the level set) or to the $0$-level set of a quadric (near critical points of the level set).
Ambiguous notation in isomorphic groups
It's the real numbers greater than $0$ under multiplication. Besides, under addition they don't even form a group. And your obvious isomorphism is a correct answer.
Do the set of vectors give a basis?
Three vectors can never give a basis for $R^4$. All bases for $R^4$ have 4 elements. In fact, a necessary and sufficient condition for $m$ vectors to form a basis of $R^n$ is that $m=n$ and the vectors are linearly independent.
Greatest lower bound
Suppose there are two greatest lower bounds, $a$ and $b$. Then for any lower bound $x$ of $A$ we have $x \le a$. Since $b$ is a lower bound, we have $b \le a$. By symmetry, we have $a \le b$. Hence $a=b$.
Multilateral shift operators
The unitary shift operator on $l^{2}(\mathbb{Z})$ is not unilateral. This operator is the same as multiplication by $e^{i\theta}$ on $L^{2}[0,2\pi]$ with normalized inner product $$ (f,g) = \frac{1}{2\pi}\int_{0}^{2\pi}f(\theta)\overline{g(\theta)}\,d\theta. $$ You can see this because every $f \in L^{2}[0,2\pi]$ can be written as the orthogonal sum $$ f = \sum_{n=-\infty}^{\infty}(f,e^{in\theta})e^{in\theta}. $$ The unilateral shift is multiplication by $z$ on the Hardy space $H^{2}(D)$ consisting of all holomorphic functions $f(z)=\sum_{n=0}^{\infty}a_n z^{n}$ in the unit disk $D$ such that $$ \|f\|^{2} = \sum_{n=0}^{\infty}|a_n|^{2} < \infty. $$ Another common type of shift is a weighted shift where $$ e_{n} \mapsto \lambda_n e_{n+1}. $$
Finding a topology doesn't convey Hausdorff property by an injective continuous function
Yes. Consider the topology $\tau$ whose elements are the open subsets of $(0,1)$ (with respect to the usual topology) and the unions of an open subset of $(0,1)$ (again, with respect to the usual topology) with $\mathbb{R}\setminus(0,1)$. Then $(0,1)$, as a subspace of $(\mathbb{R},\tau)$, has its usual topology and therefore it is Hausdorff. But $[0,1]$ isn't, since every open set which contains $0$ also contains $1$ (and vice-versa). Now, let $f\colon A\longrightarrow B$ be the function defined by $f(x)=x$. It is injective and continuous.
Exercise: splitting field, showing that it splits
It is just a matter of repeating the trick you already know about. If $\alpha$ is a root of $f = x^{3} + x^{2} + 1$, then also $\alpha^{2}$ is, and then also $\alpha^{4} = (\alpha^{2})^{2}$. This is, as you already know, because of the binomial in characteristic two: $$ 0 = (\alpha^{3} + \alpha^{2} + 1)^{2} = (\alpha^{2})^{3} + (\alpha^{2})^{2} +1. $$ And of course $\alpha^{2}$ and $$ \alpha^{4} = \alpha \alpha^{3} = \alpha (\alpha^{2} + 1) = \alpha^{3} + \alpha = \alpha^{2} + \alpha + 1 $$ are in $\mathbb{Z}_{2}[\alpha]$. Note also that the coefficient $1$ of $x^{2}$ is the sum of the roots (it is actually minus the sum, but signs here do not count). So once you know the roots $\alpha$ and $\alpha^{2}$, the third one will be $1 - \alpha - \alpha^{2} = \alpha^{2} + \alpha + 1$.
Framed Links are Ribbon Graphs
That is nearly accurate. A ribbon graph is usually an inclusion $\Gamma\to\Sigma$ that is a homotopy equivalence between a graph $\Gamma$ and an oriented surface $\Sigma$ with boundary (one of the many ways to define a combinatorial map). A ribbon graph embedded in $S^3$ is called various things, such as a spatial graph or a spatial ribbon graph, diagrams for which have been called flat vertex graphs up to regular isotopy. A framed link is such a spatial graph where every vertex is degree-$2$, modulo edge subdivision. The correspondence is that the surface $\Sigma$ gives a section of the normal bundle of the embedding of $\Gamma$ in $S^3$. The $\mathcal{RIBBON}$ category later in the paper is a category of framed tangles. Or, the category of oriented spatial graphs such that interior vertices are all degree-$2$, modulo edge subdivision. If it were just of ribbon graphs, then there'd be no concept of over- vs. under-crossings, just permutations.
Why do exponential objects (in category theory) require currying?
I'm not quite sure I understand your question, but some comments. The intuition that exponentials generalize products really only makes sense in categories that resemble $\text{Set}$ and in general exponentials can look quite different from this. A nice example to think about is Heyting algebras, where the exponential generalizes implication in propositional logic (to intuitionistic logic). Here eval becomes modus ponens. In any case, once you have composition you have eval provided that you believe a very small thing. Let me write $A^B$ as $[B, A]$. The composition map $[A, B] \times [B, C] \to [A, C]$ specializes, when $A = 1$, to a map $$[1, B] \times [B, C] \to [1, C]$$ which reproduces eval as soon as you believe that there should be a natural isomorphism $[1, A] \cong A$. Edit: Okay, so I think I understand your question better now. In fact it is not necessary to relate exponentials in a suitably generalized sense to products if you don't want to. You can write down the axioms of a closed category instead, although I don't know any natural examples which are not closed monoidal categories. In a closed monoidal category the cartesian product is replaced by another (usually symmetric) monoidal structure; the prototypical example is $\text{Vect}$ equipped with the tensor product.
Additive number theory clarification
Okay, we want to prove that if we have a set of $2\cdot 2^m - 1$ integers we can select $2^m$ of them so that their sum is divisible by $2^m$ (the $n$ just confuses everything). The proof goes as follows: Letting $n = 2^m$, our proof is by induction on $m$. Clearly the result is valid for $m = 1$. Assume the result is valid for $n = 2^m$ and consider the case $n = 2^{m+1}$. Since $2^{m+2} - 1 = 2^m + 2^m +(2^{m+1} - 1)$, by the inductive hypothesis we can always select three disjoint subsets, each of $2^m$ numbers, from $2^{m+2} - 1$ numbers such that the sum of each subset is divisible by $2^m$. Letting the three sums be $a\cdot 2^m, b\cdot 2^m, c\cdot 2^m$, at least two of the numbers $a,b,c$ have the same parity. By selecting the two sets corresponding to these numbers, we obtain $2^{m+1}$ numbers whose sum is divisible by $2^{m+1}$. Consequently, the result is valid for all positive integers $m$ by induction. "Clearly the result is valid for $m = 1$": if $m=1$, then for any set of $2\cdot 2^1-1 = 3$ elements we can select $2^1= 2$ of them so that their sum is divisible by $2$. Proof: if there are two odd elements we can pick them; their sum is even and therefore divisible by $2$. If there aren't two odd elements, there is at most one odd element, so there are at least two even elements; pick those, and their sum is divisible by $2$. Now assume the statement is true for some $m$: for any set with $2\cdot 2^m -1$ elements we can select $2^m$ elements whose sum is a multiple of $2^m$. We need to prove the statement for $m+1$. "Since $\color{blue}{2\cdot 2^{m+1} - 1=}2^{m+2} - 1 = 2^m + 2^m +(2^{m+1} - 1)$, by the inductive hypothesis we can always select three disjoint subsets, each of $2^m$ numbers, from $2^{m+2} - 1$ numbers such that the sum of each subset is divisible by $2^m$." So we have a set of $2\cdot 2^{m+1} -1 > 2\cdot 2^m - 1$ elements. We can choose $2^m$ elements from those whose sum is $a\cdot 2^m$; call those $2^m$ elements set $A$. We then have $2\cdot 2^{m+1} -1 - 2^m = 3\cdot 2^m - 1 > 2\cdot 2^m - 1$ elements left. We can choose $2^m$ elements from those whose sum is $b \cdot 2^m$; call those $2^m$ elements set $B$. This leaves us with $3\cdot 2^m - 1 - 2^m = 2\cdot 2^m - 1$ elements, and we can choose $2^m$ elements from those whose sum is $c \cdot 2^m$; call those $2^m$ elements set $C$. Either $a,b,c$ are all even, or all odd, or two of them are even and one is odd, or two of them are odd and one is even. In every case at least two of $a,b,c$ have the same parity. Without loss of generality, we will assume $a$ and $b$ have the same parity, so $a+b$ is even (odd + odd = even, even + even = even). So: sets $A$ and $B$ together have $2^m +2^m = 2^{m+1}$ elements, and they add up to $a\cdot 2^m + b \cdot 2^m = (a + b)\cdot 2^m$. Since $2\mid a+b$, we get $2^{m+1}\mid(a+b)2^m$. So from our set of $2\cdot 2^{m+1}-1$ elements we can select $2^{m+1}$ elements whose sum is a multiple of $2^{m+1}$; the statement is true for $m+1$. So we have proven this by induction.
Conditional variance and expectation of random variables
The same argument works: expand the LHS of the inequality $$E((X-x)^2|A)\geqslant0,\qquad\text{with}\ x=E(X|A).$$
What is the particular integral of this equation?
As $\sin x$ is the imaginary part of $e^{ix}$, you can find the particular solution of your equation as the imaginary part of a particular solution of $$ (D^2+2D+5)y=e^{ix}. $$ Using the unknown-coefficients approach, set $y_p=Ce^{ix}$ to get $$ (4+2i)Ce^{ix}=e^{ix}\implies C=\frac{2-i}{10}. $$ Then $\operatorname{Im}\big(\tfrac{2-i}{10}e^{ix}\big)=\tfrac{1}{10}(2\sin x-\cos x)$, confirming your particular solution. However, in your solution formula the homogeneous part is wrong: the roots of the characteristic polynomial are $-1\pm 2i$, and the argument $x$ is missing in the exponentials $e^{(-1\pm 2i)x}$. It would be slightly better if you wrote the homogeneous solution in its real form using the basis functions $e^{-x}\cos 2x$ and $e^{-x}\sin 2x$.
Given a point and distance , is it possible to get the second point on the line + three dimensional points
Say the point $A$ lies on the line $$x=1-t\\ y=5+2t\\ z=2-3t$$ The direction vector of the line is $(-1, 2, -3)$ (the coefficients of $t$). The norm of the direction vector is $\sqrt{(-1)^2+2^2+(-3)^2}=\sqrt{14}$. Now from your point $A=(3,1,4)$, you add $5$ times the unit direction vector, which gives $$(3,1,4)+\frac{5}{\sqrt{14}}(-1,2,-3)=\left(3-\frac{5}{\sqrt{14}},\ 1+\frac{10}{\sqrt{14}},\ 4-\frac{15}{\sqrt{14}}\right)$$
I need to prove that the next proposition about integral is false
Consider a sequence of "steps" of height $n$ and width $1/n^3$, i.e. $$f(x)=\begin{cases} 1 & \text{for} & x\in (1, 1+1/1^3)\\ 2 & \text{for} & x\in (2, 2 + 1/2^3)\\ 3 & \text{for} & x\in (3, 3 + 1/3^3)\\ \dots\\ 0 & \text{otherwise} \end{cases}$$ (In fact you can use this idea and a partition of unity to produce such a counterexample of class $C^\infty$).
Expected number of coin flips
This is known as a renewal argument. The logic behind it is to condition on the result of the first toss. Intuitively, at the first toss two things can happen: You can toss heads with probability $1-\alpha$, and in that case you are done, but you have used 1 step. Or you can toss tails with probability $\alpha$, and in that case you start anew with the second toss; again you have used 1 step. Counting this 1 step, it is as if you start totally from the beginning with the second toss. Thus $$E[N]=(1-\alpha)\cdot1+\alpha(1+E[N])=1-\alpha+\alpha+\alpha\cdot E[N]=1+\alpha\cdot E[N]$$ Formally, denote by $X$ the result of the first toss, with $X \in \{T,H\}$, $P(X=T)=\alpha$ and $P(X=H)=1-\alpha$. Then by the law of total expectation: $$E[N]=E_X\left[E_{N|X}[N|X]\right] \tag{1}$$ where $E[N|X=H]=1$ and $E[N|X=T]=1+E[N]$. Substituting in (1) we find that $$\begin{align*}E[N]&=E_X\left[E_{N|X}[N|X]\right]=P(X=H)E[N|X=H]+P(X=T)E[N|X=T]\\&=(1-\alpha)\cdot1+\alpha\cdot(1+E[N])=1+\alpha\cdot E[N]\end{align*}$$ which yields the same result.
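Solving $E[N]=1+\alpha E[N]$ gives $E[N]=1/(1-\alpha)$, which is easy to sanity-check by Monte Carlo (a sketch; the function name and parameters are mine):

```python
import random

def flips_until_heads(alpha: float) -> int:
    """Count tosses until the first head, where P(tails) = alpha."""
    n = 1
    while random.random() < alpha:  # tails with probability alpha
        n += 1
    return n

alpha, trials = 0.3, 100_000
estimate = sum(flips_until_heads(alpha) for _ in range(trials)) / trials
print(estimate, 1 / (1 - alpha))  # the two values should be close
```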
RSA Group theory proof
Note that $$f_1(x)=f_0(y) \iff x^e\equiv ry^e\pmod n\iff (xy^{-1})^e\equiv r\pmod n.$$ As inverting is easy in $\Bbb Z_n^\times$, finding such $x,y$ allows us to decrypt $r$, which is assumed hard for random $r$.
Greatest integer less than or equal to $x^*$
For "greatest integer less than or equal", you always round down. Ordinary rounding is different. It rounds to the closest integer, which could be up or down. In fact, they are related by $$\operatorname{ordinaryRound}(x)=\operatorname{roundDown}(x+\tfrac12)$$
Modern Mathematics having serious problems with Real Numbers?
If you know about countable and uncountable infinities, consider the following problem: Is there a subset of the reals whose cardinality is strictly between that of the integers and that of the reals? Cantor's Continuum Hypothesis says the answer is "No". Gödel and Cohen proved that one can neither prove nor disprove the Continuum Hypothesis on the basis of the usual axioms of set theory (ZFC). Some people consider this a serious problem; if we really know what the reals are, we should be able to decide whether or not there's a set bigger than the integers but smaller than the reals. Other people shrug their shoulders and get on with doing mathematics. If you don't know about countable and uncountable infinities and such, the above won't mean much to you, but then you have some very nice experiences waiting for you.
What is an affine change of variables?
It means a change of variables of the form $$\vec x'=A\vec x+ \vec c$$ with $A$ an invertible matrix, i.e. a linear change of variables combined with a translation. E.g., for $n=2$: $$\begin{cases}x'=ax+by+c\\y'=dx+ey+f\end{cases}$$ where $a,b,c,d,e,f$ are given constants.
Help getting the result $\sqrt{n}-\sqrt{n-\frac{1}{4}}\approx\frac{1}{8\sqrt{n-\frac{1}{8}}}$
$$\sqrt{n}-\sqrt{n-\frac{1}{4}} = \frac{\frac{1}{4}}{\sqrt{n}+\sqrt{n-\frac{1}{4}}}=\frac{\frac{1}{4}}{\sqrt{n-\frac{1}{8}+\frac{1}{8}}+\sqrt{n-\frac{1}{8}-\frac{1}{8}}}$$ $$=\frac{1}{4\sqrt{n-\frac{1}{8}}\sqrt{1+\frac{1}{8n-1}}+4\sqrt{n-\frac{1}{8}}\sqrt{1-\frac{1}{8n-1}}}$$ $$=\frac{1}{4\sqrt{n-\frac{1}{8}}\left(\sqrt{1+\frac{1}{8n-1}}+\sqrt{1-\frac{1}{8n-1}}\right)}$$ $$=\frac{1}{4\sqrt{n-\frac{1}{8}}\left(1+\frac{1}{16n-2}+1-\frac{1}{16n-2}+O(n^{-2})\right)}$$ $$=\frac{1}{8\sqrt{n-\frac{1}{8}}\Bigl(1+O(n^{-2})\Bigr)}\approx\frac{1}{8\sqrt{n-\frac{1}{8}}}$$
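A quick numerical check of the approximation (my own sketch, not part of the original answer):

```python
import math

for n in (10, 100, 1000):
    lhs = math.sqrt(n) - math.sqrt(n - 1/4)
    rhs = 1 / (8 * math.sqrt(n - 1/8))
    print(n, lhs, rhs, abs(lhs - rhs))  # the gap shrinks rapidly with n
```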
Let $X_1$ and $X_2$ be uniform on $n$-spheres. What is the distribution of $\| X_1+X_2\|$?
Given $U=X_1+X_2$ in $\mathbb{R}^n$ where the $X_i$ are random points on the $(n-1)$-spheres $||X_i||=r_i$, and $R=||U||$, we have $$ R^2 = U^2 = X_1^2 + X_2^2 + 2X_1\cdot X_2 = r_1^2 + r_2^2 + 2r_1r_2\cos\Theta $$ where $\Theta\in[0,\pi]$ is the angle between $X_1$ and $X_2$. So, $\Theta$ is a random variable corresponding to the angle between two random points on the $(n-1)$-sphere. Let's start by tackling $\Theta$ and $\cos\Theta$ directly. Note first that these do not depend on the lengths $||X_i||=r_i$. Also, if we pick $X_1$ first, we can either rotate or choose a coordinate system so that $X_1=[1,0,\ldots,0]$: i.e., $X_1$ points along the first axis (aka the $x$-axis in low dimensions). This is possible because $X_2$ is uniformly distributed and independent of $X_1$. So, basically, the distribution of $\Theta$ (or $\cos\Theta$) is the same as that of the angle between a random point on the unit $(n-1)$-sphere and $[1,0,\ldots,0]$. Now, let $Z = [Z_1,\ldots,Z_n]$ be a random point on the unit $(n-1)$-sphere: i.e., so that $Z_1^2+\cdots+Z_n^2=1$. Then, $\cos\Theta=Z\cdot[1,0,\ldots,0]=Z_1$. So what we are after is the distribution of $Z_1$ for random points $Z$ on the unit $(n-1)$-sphere. We can express the $(n-1)$-dimensional area of the unit $(n-1)$-sphere as $$ \omega_{n-1} = \int_0^\pi \omega_{n-2}(\sin\theta)^{n-2}\,d\theta $$ where $\omega_{n-2}(\sin\theta)^{n-2}$ is the $(n-2)$-area of an $(n-2)$-sphere with radius $\sin\theta$. Since we are after a uniform probability distribution, we need to divide this by $\omega_{n-1}$. Next, we wish to express this in terms of the coordinate $z_1=\cos\theta$, which, using $dz_1/d\theta=-\sin\theta$ and $\sin\theta=\sqrt{1-z_1^2}$, gives us $$ \begin{align} \int_0^\pi \frac{\omega_{n-2}}{\omega_{n-1}} (\sin\theta)^{n-2}\, d\theta &=\int_{-1}^1 \frac{\omega_{n-2}}{\omega_{n-1}} (\sin\theta)^{n-2}\, \left|\frac{d\theta}{dz_1}\right|\,dz_1 \\ &=\int_{-1}^1 \frac{\omega_{n-2}}{\omega_{n-1}} (1-z_1^2)^{\frac{n-3}{2}}\, dz_1. \end{align} $$ Replace the boundary $[-1,1]$ for $z_1$ with any other interval, and you get the probability of $Z_1=\cos\Theta$ lying in that interval; so the probability density of $Z_1=\cos\Theta$ is $$ f_{\cos\Theta}(z) = \frac{\omega_{n-2}}{\omega_{n-1}} (1-z^2)^{\frac{n-3}{2}}. $$ This is basically just the rule that for random variables $Y=h(X)$, the probability densities are related by $f_X(x)=f_Y(y)\cdot\left|h'(x)\right|$. Returning to $R$, we already know $R^2$ is linear in $\cos\Theta$ with values in $[(r_1-r_2)^2, (r_1+r_2)^2]$. Inserting the distribution of $\cos\Theta$, this gives the density of $S=R^2$: $$ f_{R^2}(s) = \frac{\omega_{n-2}}{2r_1r_2\omega_{n-1}} \left[ 1 - \left(\frac{s-r_1^2-r_2^2}{2r_1r_2}\right)^2 \right]^{\frac{n-3}{2}}. $$ Now, $f_R(r) = 2rf_{R^2}(r^2)$ (same change-of-variables rule as above), which yields $$ f_{R}(r) = \frac{r\omega_{n-2}}{r_1r_2\omega_{n-1}} \left[ 1 - \left(\frac{r^2-r_1^2-r_2^2}{2r_1r_2}\right)^2 \right]^{\frac{n-3}{2}}. $$ Note that for points uniformly distributed between two radii, the density would be $f_R(r)=ar^{n-1}$ for some constant $a$, as the $(n-1)$-area of the $(n-1)$-sphere of radius $r$ is $\omega_{n-1}r^{n-1}$; that is not the case here for any $n$. This gives the density $f_R(r)$ of the distance from the origin.
The probability density of the $n$-dimensional vector $U$ is found by dividing by the $(n-1)$-area $\omega_{n-1}r^{n-1}$ of the $(n-1)$-sphere of radius $r=||u||$: $$ f_{U}(u) = \frac{f_R(||u||)}{\omega_{n-1} ||u||^{n-1}} = \frac{\omega_{n-2}}{||u||^{n-2}r_1r_2\omega^2_{n-1}} \left[ 1 - \left(\frac{||u||^2-r_1^2-r_2^2}{2r_1r_2}\right)^2 \right]^{\frac{n-3}{2}}. $$ As for the $k$-area of the unit $k$-sphere, $$ \omega_k = \frac{2\pi^{\frac{k+1}{2}}}{\Gamma\left(\frac{k+1}{2}\right)}, $$ as may be found on Wikipedia, where the gamma function satisfies $\Gamma(s+1)=s\Gamma(s)$, and $\Gamma(n+1)=n!$ for integers. As a side note, the $1-z^2$ term inside the brackets may be rewritten $$ 1 - \left(\frac{r^2-r_1^2-r_2^2}{2r_1r_2}\right)^2 = \frac{[r^2-(r_1-r_2)^2]\cdot[(r_1+r_2)^2-r^2]}{(2r_1r_2)^2} $$ which helps highlight that $r^2$ lies between $(r_1-r_2)^2$ and $(r_1+r_2)^2$.
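As a numerical cross-check of the derived $f_R$ (my sketch, assuming numpy/scipy; it compares a histogram of sampled $R$ against the formula for $n=3$):

```python
import numpy as np
from scipy.special import gamma

def omega(k):
    """Surface area of the unit k-sphere in R^{k+1}."""
    return 2 * np.pi ** ((k + 1) / 2) / gamma((k + 1) / 2)

n, r1, r2, N = 3, 1.0, 2.0, 200_000
rng = np.random.default_rng(0)
# uniform points on the spheres via normalized Gaussian vectors
X1 = rng.normal(size=(N, n)); X1 *= r1 / np.linalg.norm(X1, axis=1, keepdims=True)
X2 = rng.normal(size=(N, n)); X2 *= r2 / np.linalg.norm(X2, axis=1, keepdims=True)
R = np.linalg.norm(X1 + X2, axis=1)

def f_R(r):
    z = (r**2 - r1**2 - r2**2) / (2 * r1 * r2)
    return r * omega(n - 2) / (r1 * r2 * omega(n - 1)) * (1 - z**2) ** ((n - 3) / 2)

hist, edges = np.histogram(R, bins=50, density=True)
mids = (edges[:-1] + edges[1:]) / 2
print(np.max(np.abs(hist - f_R(mids))))  # small discrepancy expected
```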
Coefficient of friction of a triangular lamina $ABC$ on a rough horizontal table.
You need to resolve the forces vertically and horizontally. If $N$ is the normal reaction at $A$ and $F$ is the friction at $A$, you have $$N+P\cos\angle CAB=W$$ and $$P\sin\angle CAB=F$$ This leads to $$N=0.76W,\quad F=0.32W$$ Then $$F\leq\mu N\Rightarrow \mu\geq \frac{8}{19}$$
Show that $\mathbb{Q}(\zeta_n)$ is Galois over $\mathbb{Q}$ and $Gal(\mathbb{Q}(\zeta_n)/\mathbb{Q})\cong \mathbb{Z_n}^*$
For your second question, if you take $\sigma$ in $G=Gal(\mathbb{Q}(\zeta_n)/\mathbb{Q})$, then $\sigma(\zeta_n)$ is a primitive $n$-th root of unity, so it can be written $\sigma(\zeta_n)=\zeta_n^k$ for some $k$ coprime to $n$. So you can introduce $\chi :\sigma \mapsto k$ from $G$ to $(\mathbb{Z}/n\mathbb{Z})^{\times}$, which is a well-defined and injective group homomorphism. It is an isomorphism since the order of $G$ is the degree of the extension $\mathbb{Q}(\zeta_n)/\mathbb{Q}$, i.e. the degree of the $n$-th cyclotomic polynomial $\Phi_n$, which is $\varphi(n)$. ($\Phi_n$ is irreducible over $\mathbb{Q}$, which is not trivial to prove, so it is the minimal polynomial of $\zeta_n$ over $\mathbb{Q}$.)
Prove that $f:\mathbb{R}\rightarrow S$, defined by $f(x)=s_x$ is continuous
If $S=\mathbb{R}$, the property is easy to prove. Suppose that $U={}^c S$ is not empty; then this open subset of $\mathbb{R}$ is the union of its connected components, and these components are disjoint open intervals. Suppose that among these intervals there is an $]a,b[$ with $a, b\in \mathbb{R}$, $a<b$. We have $a,b\in S$. Then for $\displaystyle c=\frac{a+b}{2}$ we have $d(c,S)=d(c,a)=d(c,b)$, in contradiction with the hypothesis. Hence $U$ is $]u,+\infty[$, or $]-\infty,v[$, or the union $]-\infty,v[\cup ]u,+\infty[$ with $v<u$. So $S$ is $]-\infty, u]$, or $[v,+\infty[$, or $[v,u]$. It is now easy to find $f(x)$ and to finish.
Tangent to $y=(1+2x)^2$ at $(4,81)$
The derivative of the curve is $y'=8x+4$, which at point $(4, 81)$ is equal to $8(4)+4 = 36$, which tells us the slope of the tangent line. Therefore $y = 36x + b$ is the equation of the tangent line in general. At point $(4, 81)$, we have $81 = 36(4) + b$ or $b=-63$. So the equation of the tangent line is $y = 36x - 63$.
Morphisms between schemes such that every point in the codomain has at most $n$ preimages.
This is Asal Beag Dubh's answer. Consider the finite ring map $k[x, y]/(y^2 - xy - x^3) \to k[t]$ with $x \mapsto t(t - 1)$ and $y \mapsto t^2(t - 1)$. Take Spec of this. Then $n = 1$ but the fibre over $(0, 0)$ has two points. If $Y$ is normal, then the result does hold, but it isn't that easy to prove. One way to do it is to reduce to the case where the extension of function fields is Galois (say with group $G$; this reduction already takes a bit of work in case of inseparability) and then to show that the fibres of $X \to Y$ are acted on transitively by $G$ (in case $X$ is normal) as in one of the proofs of going down for finite over normal.
Is this some kind of Holder's inequality?
Proof by induction on $k$. When $k = 1$ the inequality above is an equality. Now suppose $k > 1$ and the result holds for all positive integers less than $k$. By Hölder's inequality (with conjugate exponents $k$ and $k/(k-1)$), $$\sum_{i} \left\lvert \prod_{j = 1}^k x_{ij}\right\rvert = \sum_i \left\lvert \prod_{j = 1}^{k-1} x_{ij}\right\rvert \lvert x_{ik}\rvert \le \left(\sum_i \left\lvert \prod_{j = 1}^{k-1} x_{ij}\right\rvert^{k/(k-1)}\right)^{(k-1)/k}\left(\sum_i \lvert x_{ik}\rvert^k\right)^{1/k}.$$ Now $$\left(\sum_i \left\lvert \prod_{j = 1}^{k-1} x_{ij}\right\rvert^{k/(k-1)}\right)^{(k-1)/k} = \left(\sum_i \left\lvert \prod_{j = 1}^{k-1} x_{ij}^{k/(k-1)}\right\rvert\right)^{(k-1)/k} \le \prod_{j = 1}^{k-1} \left(\sum_i \lvert x_{ij}^{k/(k-1)}\rvert^{k-1}\right)^{1/k},$$ using the induction hypothesis in the last step. Thus $$\left(\sum_i \left\lvert \prod_{j = 1}^{k-1} x_{ij}\right\rvert^{k/(k-1)}\right)^{(k-1)/k} \le \prod_{j = 1}^{k-1} \left(\sum_i \lvert x_{ij} \rvert^k\right)^{1/k},$$ and consequently $$\sum_i \left\lvert \prod_{j = 1}^k x_{ij}\right\rvert \le \prod_{j = 1}^{k-1} \left(\sum_i \lvert x_{ij}\rvert^k\right)^{1/k} \left(\sum_i \lvert x_{ik}\rvert^k\right)^{1/k} = \prod_{j = 1}^k \left(\sum_i \lvert x_{ij}\rvert^k\right)^{1/k},$$ as desired.
Suppose A and B are finite sets and $f : A \rightarrow B$ is surjective. Is it possible that |A| < |B|?
The set $f(A)$ contains at most $|A|$ elements. If $f(A)=B$ then $|A|\ge |f(A)|=|B|$.
Higher order poles, how high?
Multiplying by $z^3$ won't help. If the limit after that multiplication were $0$, it would only mean you multiplied by too high a power of $z$, i.e. the pole order is lower than $3$, but the residue can't be deduced from that. The function $\displaystyle f(z) = \frac{1}{z^3} \cdot \frac{1}{\sqrt{1-z^2}}$ can be expanded into a Laurent series $$f(z) = \sum_{n=-\infty}^{\infty} a_n \cdot z^n$$ and $\mathrm{res}_{z=0} \, f(z) = a_{-1}$. The first step is to find that expansion. As you may know, $$(1-w)^{\mu} = \sum_{n=0}^{\infty} \binom{\mu}{n} \cdot (-w)^n,$$ hence for $w = z^2$ and $\mu = -\frac{1}{2}$ we obtain $$\frac{1}{\sqrt{1-z^2}} = \sum_{n=0}^{\infty} \binom{-\frac{1}{2}}{n} \cdot (-1)^n z^{2n} = 1 +\frac{1}{2}z^2 + \ldots.$$ Multiplying by $\frac{1}{z^3}$ only shifts the series, so $$f(z) = \sum_{n=0}^{\infty} \binom{-\frac{1}{2}}{n} \cdot (-1)^n z^{2n-3} = \frac{1}{z^3} + \frac{1}{2} \cdot \frac{1}{z} + \ldots$$ Thus $a_{-1} = \frac{1}{2}$. In general you start with $$f(z) = \sum_{n=-\infty}^{\infty} a_n \cdot z^n.$$ If you check that $\displaystyle \lim_{z \to 0} z^k \cdot f(z) = g \neq \infty$, it only means that actually $$f(z) = \sum_{n=-k}^{\infty} a_n \cdot z^n$$ and $a_{-k} = g$. That still doesn't allow you to conclude what $a_{-1}$ is without determining the expansion.
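If you want to double-check the expansion mechanically, sympy can compute the residue and the series directly (a sketch, assuming sympy is installed):

```python
import sympy as sp

z = sp.symbols('z')
f = 1 / (z**3 * sp.sqrt(1 - z**2))
print(sp.residue(f, z, 0))    # 1/2
print(sp.series(f, z, 0, 2))  # z**(-3) + 1/(2*z) + ...
```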
Limit of the form $0/0$
Hint: Taylor series could be useful. Start with $$\sin(x)=x-\frac{x^3}{6}+O\left(x^4\right)$$ Then use the Taylor series of $\cos(y)$; for the first term in the numerator replace $y$ by $x-\frac{x^3}{6}$ in the result, and for the second term in the numerator replace $y$ by $x$. I am sure that you can take it from here.
Find the volume of the region D in spherical coordinate
It's almost correct! The only problem is the bounds for $\rho$. Since you want to describe $\rho$ in terms of the angle $\phi$, you can proceed as follows: Fix two angles $\theta\in [0,2\pi)$, $\phi\in [\pi/6,5\pi/6]$. Then for these two fixed angles take the unique point $(\rho,\theta,\phi)\equiv (x,y,z)$ which lies on the cylinder $x^2+y^2=1$. You want to calculate the radius $\rho$ of this point. Using the relations of $(x,y,z)$ in terms of $(\rho,\theta,\phi)$ and the equation $x^2+y^2=1$ we get \begin{align} (\rho \cos\theta \sin\phi)^2&+(\rho \sin\theta \sin \phi)^2=1\\ &\implies \rho^2\sin^2\phi=1\\ &\implies \rho=\frac{1}{|\sin\phi|} \end{align} and since $\phi\in [\pi/6,5\pi/6]$ we end up with $\rho=1/\sin\phi=\csc \phi$. Now, from that point, and since the cylinder is inside the sphere of radius $2$, you move until you hit the sphere (for fixed $\theta,\phi$). This happens when $\rho$ becomes $2$. So the bounds for $\rho$ must be $\csc \phi\leq \rho\leq 2$. Hence, the volume is $$\int_{0}^{2\pi}\int_{\pi/6}^{5\pi/6}\int_{\csc\phi}^{2}\rho^2\sin\phi\, d\rho\, d\phi\, d\theta$$
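For completeness, this integral evaluates in closed form: $$V=\int_{0}^{2\pi}\!\int_{\pi/6}^{5\pi/6}\!\int_{\csc\phi}^{2}\rho^2\sin\phi\,d\rho\,d\phi\,d\theta =\frac{2\pi}{3}\int_{\pi/6}^{5\pi/6}\left(8\sin\phi-\csc^2\phi\right)d\phi =\frac{2\pi}{3}\left(8\sqrt3-2\sqrt3\right)=4\sqrt3\,\pi.$$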
Proof of a series having infinitely many zeros in the unit disc
This function has the property that on the circle $|z|=1-\frac{1}{n_k}$, $k \ge 10$, we have $|h(z)|>C5^k$, where we can take for example $C=\frac{1}{100}$. (Note that $(1-\frac{1}{n_k})^{n_k} \ge \frac{1}{3}$, as that sequence converges increasingly to $\frac{1}{e}$, with the inequality already holding for $k \ge 10$ and $n_k>k$; while for $m>k$, $(1-\frac{1}{n_k})^{n_m} \le (\frac{1}{e})^{2^{m-k}(m-1)\cdots k} < 6^{-m}$, so the corresponding terms form a geometric sum that converges to a small finite number, while the smaller terms sum in absolute value to less than $\frac{1}{4}5^{k}$ by the trivial estimate and the corresponding geometric series.) Assume now that $h$ has only finitely many zeroes (possibly none). Let $B$ be a finite Blaschke product with the same zeroes (including multiplicities etc., where we take $B=1$ if $h$ has no zeroes). Then $g=\frac{h}{B}$ has no zeroes in the unit disc, is analytic, and still satisfies $|g(z)| > C5^k$ on $|z|=1-\frac{1}{n_k}$, since $|B(z)| <1$ for $|z| <1$. But then $\frac{1}{g}$ is analytic in the unit disc and, by the maximum modulus principle, $|\frac{1}{g}| < \frac{1}{C5^k}$ for $|z| \le 1-\frac{1}{n_k}$. Letting $k \to \infty$ we get $\frac{1}{g}=0$ in the unit disc, which is a contradiction. Note that $h(z)-w$ has the same properties as $h$ (taking all $k$ large enough so that $5^k > 200|w|$, say, so we can use $C=\frac{1}{200}$), so the same proof applies to show that $h$ takes every complex value infinitely many times in the unit disc!
Prove that convex subspace of $l_2$ is compact
Being closed: $A=\{\xi \mid \sum_{n=1}^{\infty} \xi_n^2 n^2 \le 1\}=\cap_n A_n$, where $A_n=\{\xi\mid\sum_{k=1}^{n} \xi_k^2 k^2 \le 1\}$. Now, for each $n$, define $T_n:l_2\to\mathbb{R}$, $T_n\xi=\sum_{k=1}^{n} \xi_k^2 k^2$. It's continuous (it is not linear, but that doesn't matter), so... Theorem: A set $A$ is relatively compact in $l_p$ if and only if: (1) there exists $(p_n)_{n\in\mathbb{N}}\in l_p$ such that for every $x=(x_1,x_2,\ldots)\in A$ we have $|x_n|\le p_n$; (2) for every $\epsilon>0$ there exists $n_0$ such that for all $x\in A$, $\sum\limits_{k=n_0+1}^{\infty}|x_k|^p<\epsilon$.
How to define an action of $\pi_1(E)$ on $\pi_n(F)$ for a fibration $F\to E\to B$?
Let $p:E\rightarrow B$ be the projection and $i:F\hookrightarrow E$ the fibre inclusion. Choose a common basepoint $e_0\in F\subseteq E$ for both spaces and let $b_0=p(e_0)\in B$ be the basepoint of $B$. Let $[\beta]\in\pi_1(E,e_0)$ be the homotopy class of a loop $\beta:I\rightarrow E$ and consider the following commutative diagram $\require{AMScd}$ \begin{CD} F\times0\cup e_0\times I@>i\cup\beta>> E\\ @VV V @VV p V\\ F\times I @>p\circ\beta\circ pr_2>> B. \end{CD} Then the homotopy lifting property of the fibration produces a map $\tilde\beta:F\times I\rightarrow E$ satisfying $\tilde\beta(f,0)=i(f)$, $\tilde\beta(e_0,t)=\beta(t)$ and $p\circ \tilde\beta(f,t)=p\circ \beta(t)$. We note that $p\circ\tilde\beta(f,1)=p\circ \beta(1)=b_0$, so for each $f\in F$, the point $\tilde\beta(f,1)$ actually lies in the fibre $F$. We set $\tilde\beta_1=\tilde\beta(-,1):F\rightarrow F$. One shows by standard arguments that the homotopy class of $\tilde\beta_1$ depends only on that of $\beta$, and is uniquely defined by it. The loop $\beta$ has an inverse $\beta^{-1}$ in $\pi_1(E,e_0)$, and from this it follows that $\tilde\beta_1$ has a homotopy inverse $\widetilde{\beta^{-1}}_1$, and so is a homotopy equivalence. Thus we have a map $$\pi_1(E,e_0)\rightarrow \pi_0Aut_*(F),\qquad \beta\mapsto\tilde\beta_1.$$ This gives an action of $\pi_1(E,e_0)$ on $\pi_n(F,e_0)$ by declaring $$\beta\cdot \alpha=\tilde\beta_1\circ\alpha$$ for $\beta\in\pi_1(E,e_0)$, $\alpha\in\pi_n(F,e_0)$. Now the action of $\pi_1(F,e_0)$ on $\pi_n(F,e_0)$ may be obtained similarly, in particular by applying exactly the same procedure to the fibration $F\rightarrow\ast$. Now let $\alpha\in\pi_1(F,e_0)$ and consider the following diagram $\require{AMScd}$ \begin{CD} F\times0\cup e_0\times I@>id_F\cup\alpha>> F@>i>>E\\ @V V V @VV V@VV p V\\ F\times I @>>> \ast@>>>B \end{CD} which commutes strictly since $p\circ i=\ast$. Applying the homotopy lifting property to the left-hand square gives the map $\tilde\alpha:F\times I\rightarrow F$ which defines $\tilde\alpha_1:F\rightarrow F$ and so specifies the action of $\alpha\in\pi_1(F,e_0)$ on $\pi_n(F,e_0)$. On the other hand, if we apply the homotopy lifting property to the combined square we get a map $\widetilde{i\alpha}:F\times I\rightarrow E$, which defines $\widetilde{i\alpha}_1:F\rightarrow F$ and specifies the action of $i\circ\alpha\in\pi_1(E,e_0)$ on $\pi_n(F,e_0)$. Now the arguments for uniqueness of the resulting maps apply, so since the diagram commutes strictly we may take $\widetilde{i\alpha}=i\circ\widetilde\alpha:F\times I\rightarrow E$. This tells us that $\widetilde{i\alpha}(-,1)\simeq i\circ\tilde\alpha(-,1):F\rightarrow E$, so that in particular $$\widetilde{i\alpha}_1=\tilde\alpha_1\in\pi_0Aut_*(F).$$ Thus if $\gamma\in\pi_n(F,e_0)$ we have $$(i\alpha)\cdot\gamma=\widetilde{i\alpha}_1\circ\gamma\simeq \tilde\alpha_1\circ\gamma=\tilde\alpha_1\cdot\gamma$$ where the left hand side is the action of $i_*\alpha\in\pi_1(E,e_0)$ on $\gamma$, and the right hand side is the action of $\alpha\in\pi_1(F,e_0)$ on $\gamma$. We're done.
Finding area of trapezoid $\{(0,2),(2,2),(3,0),(-1,0)\}$ using double integral
The line on the left is $y=2x+2$ and the line on the right is $y=6-2x$, so the limits for $x$ should be from $\frac{y-2}{2}$ to $\frac{6-y}{2}$. Then the inner integration leaves the integrand $4-y$, and you get the correct answer of $6$.
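Written out, the corrected computation is $$\int_0^2\!\int_{(y-2)/2}^{(6-y)/2}dx\,dy=\int_0^2\left(\frac{6-y}{2}-\frac{y-2}{2}\right)dy=\int_0^2(4-y)\,dy=6.$$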
Integrate the integral. Where $({x_1},{y_1}),({x_2},{y_2})\in[0,a] \times[0,b],x_1<x_2,y_1<y_2$
Since $(Nu)(x,y) =\mu(x,y)+f(x,y,I_\theta^r u(x,y),u(x,y))$ and $(Nu)(x,y)=u(x,y)$ by the fixed point theorem, where $I_\theta^r$ is the left-sided mixed Riemann-Liouville integral of order $r$, $r=(r_1,r_2)\in(0,\infty)\times(0,\infty)$, $\theta=(0,0)$, and $u\in L^1(J)$, where $L^1(J)$ is the space of Lebesgue-integrable functions from $J=[0,a]\times[0,b]$ into $R^n$. $I_\theta^r$ is defined as follows: $(I_\theta^ru)(x,y)=\dfrac{1}{\Gamma (r_1)\Gamma (r_2)}\int_0^{x}\int_0^{y}(x-s)^{r_1-1}(y-t)^{r_2-1}u(s,t)\,dt\,ds$. Now there exist constants $L_1,L_2>0$ such that $\|f(x,y,u,v)-f(x,y,\bar u,\bar v)\|\le L_2\|u-\bar u\|+L_1\|v-\bar v\|$, and $\displaystyle\|Nu\|_E=\sup_{(x,y)\in J}{\|(Nu)(x,y)\|}e^{-\lambda(x+y)}$, $\|u\|_E\le M$. Set $f^*(x,y) =f(x,y,0,0)$; then $\|(Nu)\|_E \le \|\mu\|_E+\|f^*\|_E+ ML_1+\dfrac{ML_2a^{r_1}b^{r_2}} {\Gamma(1+r_1)\Gamma(1+r_2)} =:\eta.$ Now we come to the point: $({x_1},{y_1}),({x_2},{y_2})\in J=[0,a] \times[0,b]$, $x_1<x_2$, $y_1<y_2$, and $u\in B_\eta$, where $N$ maps the ball $B_\eta= \{u\in E:\|u\|_E\le\eta\}$ into itself and $E$ is the space of functions $u:J\to R^n.$ Then $\|N(u)(x_1,y_1)-N(u)(x_2,y_2)\|\le\|\mu(x_1,y_1)-\mu(x_2,y_2)\|+L_1\|u(x_1,y_1)-u(x_2,y_2)\|+\|\dfrac{L_2}{\Gamma (r_1)\Gamma (r_2)}\int_0^{x_1}\int_0^{y_1}[(x_2-s)^{r_1-1}(y_2-t)^{r_2-1}-(x_1-s)^{r_1-1}(y_1-t)^{r_2-1}]\times\ u(s,t)dtds+\dfrac{L_2}{\Gamma (r_1)\Gamma (r_2)}\int_{x_1}^{x_2}\int_{y_1}^{y_2}(x_2-s)^{r_1-1}(y_2-t)^{r_2-1}u(s,t)dtds+\dfrac{L_2}{\Gamma (r_1)\Gamma (r_2)}\int_0^{x_1}\int_{y_1}^{y_2}(x_2-s)^{r_1-1}(y_2-t)^{r_2-1}u(s,t)dtds+\dfrac{L_2}{\Gamma (r_1)\Gamma (r_2)}\int_{x_1}^{x_2}\int_0^{y_1}(x_2-s)^{r_1-1}(y_2-t)^{r_2-1}u(s,t)dtds \|$ The claimed bound for the above expression is as follows: $\|N(u)(x_1,y_1)-N(u)(x_2,y_2)\|\le\|\mu(x_1,y_1)-\mu(x_2,y_2)\|+L_1\|u(x_1,y_1)-u(x_2,y_2)\|+\dfrac{L_2\eta}{\Gamma(1+r_1)\Gamma(1+r_2)}[2y_2^{r_2}(x_2-x_1)^{r_1}+2x_2^{r_1}(y_2-y_1)^{r_2}+x_1^{r_1}y_1^{r_2}-x_2^{r_1}y_2^{r_2}-2(x_2-x_1)^{r_1}(y_2-y_1)^{r_2}]$ But how will we get this answer? Please help me, anybody, with an explanation.
How to interpret $\exists x (\forall x \Phi (x))$?
One usually takes these to be well-formed formulas. Let us take, for example, $\exists x\forall x \Phi(x)$. When we interpret this sentence, we examine $\forall x \Phi(x)$ for all free occurrences of $x$ in $\forall x\Phi(x)$. There are no such free occurrences, so $\exists x\forall x\Phi(x)$ is true in a structure $M$ precisely if $\forall x\Phi(x)$ is true in $M$. More informally, the $\exists x$ in front has no effect. For that reason, one would never (except for the purposes of this question!) actually use the sentence $\exists x\forall x\Phi(x)$.
Using RK4 on the van der Pol oscillator
$\newcommand\rx{\mathfrak{x}}\newcommand\ru{\mathfrak{u}}\newcommand\mA{\mathfrak{A}}$ 1) For the definition of the most common stability properties one refers to the properties of solutions of linear systems of ODE. Apply the RK4 method to some linear system $\ru'=\mA\ru$ (for instance the linearization of a non-linear ODE system around a fixed point). You should get exactly the first 5 terms (up to degree 4) of the exponential series. The stability condition is that this propagator has to be non-expanding if the spectrum of $\mA$ is contained in the negative half-plane of the complex plane. On the level of a single eigenvalue $\lambda$ and $z=\lambda h$ with step size $h$, this means that $z$ is in the stability region if $$|1+z+z^2/2+z^3/6+z^4/24|\le1.$$ The half-disc with $\Im(z)\le 0$, $|z|\le 2.5$ is contained in that set. If $L$ is a Lipschitz constant (or $L=\|\mA\|$), then this gives a bound $h<2.5/L$. At the upper bound the numerical solution is merely not totally wrong; for a useful solution one needs a much smaller step size, $Lh\in[10^{-3},10^{-1}]$. Step sizes much smaller than that lead to an accumulation of floating point errors that dominate the method error. See Maximum timestep for RK4 for visualizations of the stability region and the effect of too small or too large step sizes. 2) There are two readily available strategies to explore the error picture of a numerical ODE solver. One proceeds to compute a series of results over a series of doubled step sizes $Lh=(1,2,4,8)\cdot10^{-2}$. Compare these to the expected formula $y_h=y^*+C\cdot h^4+O(h^5)$, that is, apply Richardson extrapolation to compute values for $C$ and $y^*$ and check whether they are stable over all the doublings. Another strategy uses MMS (the method of manufactured solutions). Here one would fix some nice function $p$ and consider the equation $$ y''+\mu(y^2-1)y'+y=p''+\mu(p^2-1)p'+p, ~~~y(0)=p(0),~~y'(0)=p'(0), $$ for instance $p(t)=2\sin(t)$, or with the next perturbation term added. Then $p$ is the exact solution that the numerical solution can be compared against. If you want to actually prove the order, that is, verify the order conditions, read up on Butcher trees, for instance on Butcher's homepage https://www.math.auckland.ac.nz/~butcher/ODE-book-2008/Tutorials/. Or read the original derivation of the low-order Runge-Kutta methods in W. Kutta (1901) https://archive.org/details/zeitschriftfrma12runggoog/page/n449
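To see the real stability interval numerically (my sketch): evaluate the stability polynomial on the negative real axis and find where $|R(z)|$ first exceeds $1$; the boundary sits near $z\approx-2.785$, consistent with the $|z|\le 2.5$ half-disc bound quoted above.

```python
import numpy as np

def R(z):
    # stability function of classical RK4: truncated exponential series
    return 1 + z + z**2/2 + z**3/6 + z**4/24

x = np.linspace(-3.0, 0.0, 300001)
stable = np.abs(R(x)) <= 1
print(x[stable].min())  # about -2.7853
```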
Proof Zamfirescu graph 36 is non traceable?
(I think there must be some sort of Eulerian universal type of property.) If you transform the vertices into edges and the edges into vertices (the line-graph construction): [figure omitted]
Apply Runge-Kutta method on paper
If you were to solve this with the explicit midpoint method (RK2 or modified Euler), \begin{array}{c|cc} 0\\ \frac12&\frac12\\ \hline &0&1 \end{array} you would have to compute \begin{align} k_{11}&=f_1(x_0,u_0)&&=u_{02}&&=1,\\ k_{12}&=f_2(x_0,u_0)&&=x_0u_{02}-\frac1{1+u_{01}}&&=0.5, \\[0.8em] k_{21}&=f_1(x_0+0.5h, u_0+0.5hk_1)&&=u_{02}+0.25k_{12}&&=1.125,\\ k_{22}&=f_2(x_0+0.5h, u_0+0.5hk_1)&&=~...&&=0.961805555...,\\[0.5em] \text{so that at }x_1&=x_0+h=1.5& \\[0.8em] u_{11}&=u_{01}+hk_{21}&&&&=1.5625, \\ u_{12}&=u_{02}+hk_{22}&&&&=1.480902777... \end{align} You just have to apply this same computation to the 3-stage method (Karl Heun's (1900) 3rd order method). If you were to compute 5 steps with step size $h=0.1$, the intermediate values would be \begin{align} x_0= 1.000, ~~ u_0&=[1.0, 1.0]\\[1em] k_1 &= [1.0, 0.5]\\ k_2&=[1.025, 0.5884451219512195]\\ x_1= 1.100, ~~ u_1&=[1.1025, 1.058844512195122]\\[1em] k_1 &= [1.058844512195122, 0.6891047065775355]\\ k_2&=[1.0932997475239987, 0.7933527917860186]\\ x_2= 1.200, ~~ u_2&=[1.2118299747524, 1.138179791373724]\\[1em] k_1 &= [1.138179791373724, 0.9137014319048803]\\ k_2&=[1.1838648629689679, 1.0390575848336334]\\ x_3= 1.300, ~~ u_3&=[1.3302164610492968, 1.2420855498570873]\\[1em] k_1 &= [1.2420855498570873, 1.1855665337446706]\\ k_2&=[1.3013638765443207, 1.3388370820146442]\\ x_4= 1.400, ~~ u_4&=[1.460352848703729, 1.3759692580585516]\\[1em] k_1 &= [1.3759692580585516, 1.519911194559178]\\ k_2&=[1.4519648177865105, 1.709959435386379]\\ x_5= 1.500, ~~ u_5&=[1.60554933048238, 1.5469652015971895] \end{align} The result you compute should be somewhere between these, but more to the side of the second computation.
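The hand computation above is easy to mechanize; here is a minimal sketch (function names mine) that reproduces $u_1=[1.5625, 1.48090\ldots]$ for $h=0.5$:

```python
def f(x, u):
    u1, u2 = u
    return [u2, x * u2 - 1 / (1 + u1)]  # the right-hand side (f1, f2)

def midpoint_step(x, u, h):
    k1 = f(x, u)
    k2 = f(x + h/2, [ui + h/2 * ki for ui, ki in zip(u, k1)])
    return [ui + h * ki for ui, ki in zip(u, k2)]

print(midpoint_step(1.0, [1.0, 1.0], 0.5))  # [1.5625, 1.4809027777...]
```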
Interpreting $\textbf{PA} + \neg\text{Con}(\textbf{PA})$ in $\textbf{PA}$
This is not exactly an answer to your question, since it's more about the particular result that $\mathsf{PA} + \neg\mathsf{Con}(\mathsf{PA})$ is interpretable in $\mathsf{PA}$. Still, I hope it is of some help. This result was proved by Feferman in his important paper "Arithmetization of Metamathematics in a General Setting" (it's theorem 6.5 in the paper). The notation in the paper is a bit heavy and old-fashioned, though, so it may take some time to get used to it. I'm not going to prove his result here, since the proof is rather laborious, but I do want to make some quick remarks about it. (1) The first thing to note is that the result relies on the following fundamental point made in Feferman's paper, namely that some care must be taken when handling consistency statements (for an introductory treatment of the same point, cf. chapter 36 of Peter Smith's Gödel book). In particular, in that paper, Feferman constructs consistency statements relative to a particular formula coding the theory, and this may be relevant. That is, if $T$ is a theory, then he roughly says that a formula $\alpha(x)$ of the language of arithmetic numerates the theory if for every sentence $\phi$ of the language of the theory, $\phi \in T$ iff $\mathsf{Q} \vdash \alpha(\ulcorner \phi \urcorner)$, where $\mathsf{Q}$ is Robinson's arithmetic. If, moreover, $\alpha$ is such that whenever $\phi \not \in T$, $\mathsf{Q} \vdash \neg \alpha(\ulcorner \phi \urcorner)$, then $\alpha$ is said to bi-numerate the theory. (These notions correspond to what is generally called weak and strong representability, respectively.) Anyway, the point is that consistency statements are relative to such bi-numerations, so that they are better expressed by Feferman's notation $\mathsf{Con}_\alpha(T)$. Indeed, by exploiting some coding tricks, Feferman shows (Theorem 5.9), roughly, that if $T$ is a recursive consistent extension of $\mathsf{PA}$, then there is a rather strange bi-numeration $\alpha^*$ of $T$ such that $T \vdash \mathsf{Con}_{\alpha^*}(T)$. In particular (Corollary 5.10), there is a bi-numeration $\pi^*$ of $\mathsf{PA}$ such that $\mathsf{PA} \vdash \mathsf{Con}_{\pi^*}(\mathsf{PA})$. As Feferman notes, this does not contradict Gödel's theorem, because these bi-numerations do not "properly express membership" in the given theory: "Indeed, inspection of the proof of 5.9 reveals that it expresses membership in a certain subsystem of [$T$] which, independent of the consistency of [$T$], is always consistent" (p. 69). In fact, using a similar technique, Feferman also shows that, letting $\alpha$ be a bi-numeration of $\mathsf{PA}$ and setting $T=\mathsf{PA} + \neg \mathsf{Con}_\alpha(\mathsf{PA})$, there is a bi-numeration $\beta^*$ of $T$ such that $\mathsf{PA} \vdash \mathsf{Con}_{\beta^*}(T)$! (This is theorem 5.11.) Using these results, Feferman then shows, first, that if $T$ is a theory and $\alpha$ is a numeration of $T$, then $T$ is interpretable in $\mathsf{PA} + \mathsf{Con}_\alpha(T)$ (theorem 6.2), which he then uses to prove (a result that implies) that there is a numeration $\alpha$ of $\mathsf{PA}$ such that $\mathsf{PA} + \neg \mathsf{Con}_\alpha(\mathsf{PA})$ is interpretable in $\mathsf{PA}$. The idea is roughly this, again using $T = \mathsf{PA} + \neg\mathsf{Con}_\alpha(\mathsf{PA})$: By 5.11 there is a bi-numeration $\beta^*$ such that $\mathsf{PA} \vdash \mathsf{Con}_{\beta^*}(T)$. By 6.2, $T$ is interpretable in $\mathsf{PA} + \mathsf{Con}_{\beta^*}(T)$.
But we have just seen that $\mathsf{PA} + \mathsf{Con}_{\beta^*}(T)$ just is $\mathsf{PA}$. Hence, the result follows. (2) As Feferman notes, this roughly means that "we can construct a 'non-standard model' of [$\mathsf{PA}$] within [$\mathsf{PA}$] which, moreover, we can verify, axiom by axiom, to be a model of [$\mathsf{PA} + \neg\mathsf{Con}_{\beta^*}(\mathsf{PA})$]" (p. 77). Moreover, as he also observes in a footnote appended to this text, this is not all that surprising, given Gödel's theorems. The second incompleteness theorem basically states that, if $\mathsf{PA}$ is consistent, then so is $\mathsf{PA}$ extended with the negation of its consistency statement. Given that we generally use interpretations to prove relative consistency, this is basically a translation of that idea to relative consistency proofs. EDIT: Well, I'm far from an expert on these questions, but here are my two cents (take these with lots of salt!): First, it seems to me that there are two separate issues in the background of your question: (i) whether interpretation preserves meaning and (ii) whether the incompleteness of $\mathsf{ZF} + \neg \mathsf{Inf}$ is due to the negation of the axiom of infinity or some other limitation (the issue about arbitrary sets). Let's tackle these in order. (i) It is true that just having an interpretation between theories is generally not sufficient to preserve "meaning", whatever that is. Indeed, one can have two theories being mutually interpretable without these interpretations preserving nice properties such as decidability, etc. Still, when two theories are mutually interpretable, perhaps the interpretations are of such a nature that we might as well identify the two theories. Here is a reasonable test. Suppose two theories, $T$ and $T'$, are mutually interpretable with interpretations $i: T \rightarrow T'$ and $j: T' \rightarrow T$. Suppose moreover that $i \circ j$ is the identity on $T'$ and $j \circ i$ is the identity on $T$, i.e. when I translate a formula using $i$, and then translate back using $j$, I always end up with the formula with which I originally started (and vice-versa). When this situation takes place, say that the theories are bi-interpretable. Now, bi-interpretation can be reasonably taken to imply sameness of "meaning", since it preserves most of the interesting properties of the theories (technically, this is usually called synonymy, but the difference here is not relevant; cf. this article by Friedman and Visser for more on the difference). So, given this, what is the situation with $\mathsf{PA}$ and $\mathsf{ZF} + \neg\mathsf{Inf}$ (which I'll call in the sequel $\mathsf{ZF}_{\mathsf{Fin}}$)? These theories are mutually interpretable, but, disappointingly, they are not bi-interpretable. In fact, they don't even satisfy the weaker requirement of "sentential equivalence", as shown by Enayat, Schmerl, and Visser in "$\omega$-models of Finite Set Theory", theorem 5.1; the proof is not difficult, but it does use some facts about the model theory of $\mathsf{PA}$ (and of $\mathsf{ZF}_{\mathsf{Fin}}$). On the other hand, there is a theory in the vicinity which is bi-interpretable with $\mathsf{PA}$, namely $\mathsf{ZF}_{\mathsf{Fin}} + \mathsf{TC}$, where $\mathsf{TC}$ is the axiom which states that every set is contained in a transitive set (cf. this article by Kaye and Wong).
The issue is related to $\varepsilon$-induction: adding this axiom is essentially equivalent to adding $\varepsilon$-induction (see again the article by Kaye and Wong). So there is a very strong sense in which these two theories are the same.

(ii) On the other hand, there is the issue of whether the fact that $\mathsf{ZF}_{\mathsf{Fin}} + \mathsf{TC}$ cannot prove Goodstein's theorem is due to the absence of an axiom of infinity or to something else. From what I understand, he's probably referring to the fact that first-order $\mathsf{ZFC}$ does not fully capture the idea of an arbitrary set (cf. this article by Ferreirós for an analysis of the notion). Now, I'm really out of my depth here, but I thought the issue arose only for infinite sets. Is there a hereditarily finite set that is not first-order definable? If not, then his complaint is moot. If, however, there are such sets, then he may be on to something.
Simplification of the Expected Value via CDF: Does it work for ALL Probability Distributions?
Let's use $X$ for the random variable, keeping $x$ for the variable of integration. In general, we have a probability measure $\mu$ on $I = [a,b]$ and $$\eqalign{E[X] &= \int_I x\ d\mu(x) = a + \int_I (x-a)\, d\mu(x)\cr &= a + \int_I \int_a^x 1 \ dt \ d\mu(x) = a + \int_a^b \int_{[t,b]} 1 \ d\mu(x)\ dt \cr &= a + \int_a^b (1 - F(t)) \ dt\cr}$$ (note that $\int_{[t,b]} 1 \ d\mu(x) = 1 - F(t-)$, but that is $1 - F(t)$ (Lebesgue) almost everywhere).
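As a quick sanity check of the identity $E[X] = a + \int_a^b (1-F(t))\,dt$, here is a minimal numerical sketch (my own illustration, not part of the original argument), using a Beta distribution on $[a,b]=[0,1]$:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# X ~ Beta(2, 5) lives on [a, b] = [0, 1]; its mean is 2 / (2 + 5).
dist = stats.beta(2, 5)
a, b = 0.0, 1.0

# Right-hand side: a + integral of (1 - F(t)) over [a, b].
rhs, _ = quad(lambda t: 1.0 - dist.cdf(t), a, b)
rhs += a

print(dist.mean())  # 0.2857...
print(rhs)          # agrees to quadrature accuracy
```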
Does the uniqueness of solutions to convex optimization with linear constraints hold in n>3 dimensions?
OK. I've done a bit of reading and found that this is essentially true in both finite- and infinite-dimensional spaces. The following reference from UCLA shows that the defining property of convex minimization problems is that if a local minimum is found, it is also a global minimum. Here, the functional equality constraints are affine in $y$ and the objective function is convex, so this is a convex optimization problem in a vector space; hence any local solution is a global one, and if the objective is moreover strictly convex on the (convex) feasible set, that global solution is unique.
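As an illustrative sketch (my own, with a made-up strictly convex objective and affine constraint, not the problem from the question), one can check numerically that different starting points all land on the same constrained minimizer:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))   # affine constraint A @ y = b
b = rng.standard_normal(2)

def objective(y):
    return np.sum(y ** 2)         # strictly convex

cons = {"type": "eq", "fun": lambda y: A @ y - b}

# Several random starts converge to the same point, as strict
# convexity of the objective predicts.
solutions = [minimize(objective, rng.standard_normal(5), constraints=[cons]).x
             for _ in range(3)]
for s in solutions:
    print(np.round(s, 6))
```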
Is logical implication always determinable from just the given statements?
"Logical implication" is a potentially misleading term; it may mean the propositional connective often called Conditional. In this case : YES, having two statements $P,Q$ we can always produce the "complex" statement $P → Q$, that reads : "if $P$, then $Q$". A different (but related) case is when we use "implies" to mean Logical consequence : a fundamental concept in logic, which describes the relationship between statements that hold true when one statement logically follows from one or more statements. In this case we use the symbol : $Γ \vDash \varphi$, that reads : "statement $\varphi$ logically follows from the set $Γ$ of statements". Statements are $2=2$ (which is True) and $2=3$ (which is False). To evaluate the truth value of a "complex" statement (like $P → Q$) we have to start from statements having a precise truth value. $x=2$ is not a statement : it is a formula with a variable and its truth value depends on the value assigned to variable $x$. A different case is when we have quantifiers, like e.g. $∀x(x=2 → x&gt;1)$. In this case there are no more free variables and the formula is a statement : if we read it as a formula about natural numbers, it has a precise truth value : it is True. Regarding your examples, we have that $\forall x (x=2 \to x^2 &lt; 6)$ is always True (as you say) when red as an arithmetical statement, while $\forall x \forall y (x=2 \to y=5)$ is not. how can the truthfulness of the statement $P \to Q$ be variable, depending on context? $P \to Q$ is a formula of propositional calculus. Formulas of propositional calculus are Truth functions meaning that : a compound statement is constructed by one or two statements connected by a logical connective; if the truth value of the compound statement is determined by the truth value(s) of the constituent statement(s), the compound statement is called a truth function, and the logical connective is said to be truth functional. This means exactly that, in order to evaluate the truthfulness of the statement $P \to Q$, we have to specify a "context", i.e. a truth assignment, that is a function that maps propositional variables to True or False. In this way, given a "context" (a truth assignment), then YES : the truth value of a (truth-functional) compound statement, like e.g. the conditional $P \to Q$, is always determinable from the given statements $P$ and $Q$.
Total Derivatives and Total Differential
The total differential is $dz = \frac {\partial z}{\partial x} dx + \frac {\partial z}{\partial y} dy$: how much do we expect $z$ to change for given small changes in $x$ and $y$? The total derivative is $\frac {dz}{dt} = \frac {\partial z}{\partial x} \frac {dx}{dt} + \frac {\partial z}{\partial y} \frac {dy}{dt}$: here $x$ and $y$ are both functions of a parameter $t$, and we ask what the derivative of $z$ is with respect to this parameter.
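For a concrete instance (my own example), sympy can compute both sides of the chain rule for, say, $z = x^2 y$ with $x = \cos t$, $y = \sin t$:

```python
import sympy as sp

t = sp.symbols('t')
x = sp.cos(t)
y = sp.sin(t)

z = x**2 * y

# Total derivative dz/dt computed directly...
direct = sp.diff(z, t)

# ...and via the chain rule dz/dt = z_x * dx/dt + z_y * dy/dt.
X, Y = sp.symbols('X Y')
Z = X**2 * Y
chain = (sp.diff(Z, X) * sp.diff(x, t)
         + sp.diff(Z, Y) * sp.diff(y, t)).subs({X: x, Y: y})

print(sp.simplify(direct - chain))  # 0
```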
Sequence of Polynomials and Weierstrass Approximation Theorem
Hint: Suppose we started with a sequence of polynomials $\{g_n\}$ that converges to $f'$. How could we find a sequence $G_n$, with $G_n' = g_n$, such that $\{G_n\}$ converges to $f$? Perhaps we could set $G_n = \int_a^x g_n(t)\,dt+ C$ for the "right" choice of $C \in \Bbb R$. How do we choose $C$? How do we guarantee that the result converges as desired? Note: for a sequence $\{g_n\}$ converging to $g$ uniformly on $[a,b]$, we can indeed state that $$ \int_a^x g_n(t)\,dt \to \int_a^x g(t)\,dt $$ uniformly. If you've never seen this before, you should try to prove it.
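Here is a numerical sketch of this idea (my own illustration, using Bernstein polynomials, which the Weierstrass theorem guarantees converge uniformly): approximate $f'$ by polynomials $g_n$, integrate, and fix the constant with $C = f(a)$:

```python
import numpy as np
from scipy.special import comb

# Target: f(x) = exp(x) on [0, 1], so f'(x) = exp(x) as well.
f = np.exp
fprime = np.exp
xs = np.linspace(0.0, 1.0, 2001)

def bernstein(func, n, x):
    """Bernstein polynomial of degree n for func on [0, 1]."""
    k = np.arange(n + 1)
    basis = comb(n, k)[None, :] * x[:, None]**k * (1 - x[:, None])**(n - k)
    return basis @ func(k / n)

for n in [5, 20, 80]:
    g_n = bernstein(fprime, n, xs)       # polynomial approx of f'
    # G_n(x) = f(0) + integral of g_n from 0 to x (trapezoid rule here).
    dx = xs[1] - xs[0]
    G_n = f(0.0) + np.concatenate([[0.0],
                                   np.cumsum((g_n[1:] + g_n[:-1]) / 2) * dx])
    print(n, np.max(np.abs(G_n - f(xs))))  # sup-error shrinks with n
```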
Closed-form for Floor Sum 3 - With knowledge of inner expression
How large is $n$ compared to $N$? Is $n\approx\sqrt{N}$? You are counting the lattice points in a region bounded by a hyperbola, and $\sqrt{x^2+N}$ behaves like $x$ for large values of $x$. Additionally, it is not likely that your expression has a nice closed form, but it is for sure pretty close to $$ \int_{0}^{n}\sqrt{x^2+N}\,dx = \frac{1}{4} \left(2 n \sqrt{n^2+N}-N \log N+2 N\log\left(n+\sqrt{n^2+N}\right)\right)$$ which if $n=\sqrt{N}$ behaves like $\left(\sqrt{2}+\log(1+\sqrt{2})\right)\frac{N}{2}\approx\frac{7}{6}N$ for large values of $N$. Also notice that the problem of finding the first two terms of the asymptotic expansion of $$ \sum_{k=0}^{\sqrt{N}}\left\lfloor\sqrt{N\color{red}{-}k^2}\right\rfloor $$ as $N\to +\infty$ is exactly the Gauss circle problem. In this case the first term of the asymptotic expansion is $\frac{\pi}{4}N$ for similar reasons.
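As a quick numerical check (my own sketch) of how close the lattice-point sum stays to the integral:

```python
import numpy as np

def floor_sum(n, N):
    k = np.arange(0, n + 1)
    return np.sum(np.floor(np.sqrt(k.astype(float)**2 + N)))

def integral(n, N):
    return 0.25 * (2 * n * np.sqrt(n**2 + N)
                   - N * np.log(N)
                   + 2 * N * np.log(n + np.sqrt(n**2 + N)))

for N in [10**4, 10**6]:
    n = int(np.sqrt(N))
    print(N, floor_sum(n, N), integral(n, N))
```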
Searching $A$ to maximize $\|x\|$ whilst fulfilling a constraint $x^TA^TAx=c$
Since your $A$ is real and symmetric, it is orthogonally diagonalizable, $A = S^{-1}DS$ with $S$ orthogonal, and only has real eigenvalues. Thus we can assume w.l.o.g. that $A$ is a diagonal matrix: $x^TA^TAx = x^TAAx = x^TS^{-1}DDSx = y^T DD y = y^T D^T D y$ with $y = Sx$, and $\|y\| = \|x\|$ because $S$ is orthogonal. Hence we can just look at the sum $$ \langle Ax, Ax\rangle = \sum_{i=1}^n (\lambda_i x_i)^2, $$ where $\lambda_i, i=1,\dots,n$ are the eigenvalues of $A$. So you can modify the eigenvalues of $A$ to keep the constraint, and if $\max_{i\in[n]}|\lambda_i|$ goes to zero, $\|x\|$ has to go to infinity.
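A minimal numerical sketch (my own) of this scaling effect, using $A = \varepsilon I$ so that the constraint $x^TA^TAx = c$ forces $\|x\| = \sqrt{c}/\varepsilon$:

```python
import numpy as np

c = 1.0
direction = np.array([1.0, 2.0, 2.0])
direction /= np.linalg.norm(direction)

for eps in [1.0, 0.1, 0.01]:
    A = eps * np.eye(3)
    x = (np.sqrt(c) / eps) * direction      # satisfies x^T A^T A x = c
    print(eps, x @ A.T @ A @ x, np.linalg.norm(x))
```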
What is the 'complexification' of an algebra?
Both (1) and (2). You then have a structure which (most likely) is not contained in any structure you have seen previously, so you can't assume that it is a vector space. There is rather a lot of work to do, but fortunately most of it is defining things according to the "wishlist" principle - just define them the way you would like them to work out - and then checking that it works. For a start: define equality of vectors, addition and scalar multiplication for $\mathfrak U$ in terms of $\mathfrak U_{\Bbb R}$, which is assumed given. By the "wishlist" principle this would be $A+iB=C+iD$ iff $A=C$ and $B=D$; addition... too easy, leave it to you; and scalar multiplication $(u+iv)(A+iB)=(uA-vB)+i(uB+vA)$. You then need to show that you have a vector space, and you need to do it by checking all the axioms, as $\mathfrak U$ is (probably) not a subset of any vector space you already know. The proofs will be long boring algebra, making sure you scrupulously apply the definitions. You will probably want to go further and define the complexification of linear transformations, inner products etc. Again the "wishlist" principle applies, and we define the complexification of a linear transformation $T:\mathfrak U_{\Bbb R}\to\mathfrak V_{\Bbb R}$ as the function $$\hat T:\mathfrak U\to\mathfrak V\ ,\quad \hat T(A+iB)=T(A)+iT(B)\ .$$ You will need to prove that this is linear, but again it's just long boring algebra. You may also need to know that $\mathfrak U_{\Bbb R}$ is a subspace of $\mathfrak U$. This is slightly more tricky, mainly because it's not actually true (it can't be, as the spaces have different scalar fields). But you can show that $$\mathfrak U_0=\{A+i{\bf0}\mid A\in \mathfrak U_{\Bbb R}\}$$ is a real vector space and is isomorphic to $\mathfrak U_{\Bbb R}$. Good luck!
Help with differential problem
Consider the length of the wire in kilometers, $\ell$, as a function of the elevation (in kilometers as well) of the poles, $h$. We are interested in the increase $\Delta \ell=\ell(0.001)-\ell(0)$. The idea is to use the differential approximation $$\Delta \ell \approx \ell'(0) \Delta h.$$ We already know that $\Delta h=0.001-0=0.001$, so we are left with finding $\ell'(0)$. In order to do that, we need a formula for $\ell(h)$ for general $h$: in that case we are talking about a circle with radius equal to that of the Earth (6370 km) plus the height of the poles ($h$). The circumference of that circle is $\ell(h)=2 \pi(6370+h)$. From here it is easy to find that $\ell'(0)=2 \pi$, and plugging this into the differential we find $$\Delta \ell \approx 2 \pi \times 0.001 \approx 0.006 \text{ km}.$$
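Since $\ell(h)$ is affine in $h$, the differential approximation is in fact exact here; a two-line check (my own sketch):

```python
import math

R = 6370.0            # Earth radius in km
dh = 0.001            # 1 metre, expressed in km

ell = lambda h: 2 * math.pi * (R + h)
print(ell(dh) - ell(0.0))   # exact increase in circumference
print(2 * math.pi * dh)     # differential approximation; identical here
```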
Let $a_0>0$. If $a_{n+1}=\frac{1}{1+a_n}$. Show that the sequence $a_n$ converges.
We prove the sequence converges to the unique positive number $L$ such that $L = \frac{1}{1+L}$; solving $L^2 + L - 1 = 0$ with the quadratic formula gives $L = \frac{\sqrt 5 - 1}{2}$. Indeed, note that for any $n \in \mathbb N$ (using that $a_n > 0$, which the recursion preserves), \begin{align*} \lvert a_{n+1} - L \rvert &= \left\lvert \frac{1}{1+a_n} - \frac{1}{1+L} \right \rvert\\ &= \left\lvert \frac{1+L -(1+a_{n})}{(1+L)(1+a_n)} \right \rvert \\ &\le \frac{1}{1+L} \lvert a_n - L\rvert. \end{align*} Recursively applying this bound, we find that $$\lvert a_{n+1} - L \rvert \le\frac{1}{(1+L)^{n+1}} \lvert a_0 - L\rvert,$$ whence sending $n \to \infty$ shows that $a_n \to L$.
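A quick numerical sketch (my own) confirming the geometric convergence to $L=(\sqrt5-1)/2$:

```python
import math

L = (math.sqrt(5) - 1) / 2   # fixed point of a -> 1 / (1 + a)
a = 10.0                     # any a_0 > 0 works
for n in range(10):
    a = 1 / (1 + a)
    print(n + 1, abs(a - L))  # error shrinks roughly by 1/(1+L) each step
```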
For a bounded sequence $\{a_n\}$ the function $f(x)$=$\sum_{k=0}^{\infty}a_{k} x^{k}$ is well defined for $-1< x <1$
The ratio test is useful when you know something about the behaviour of the ratios $\left| \frac{a_{n+1}}{a_n} \right|$, which is not the case if $a_n$ is a general bounded sequence. Instead, you can use the Cauchy-Hadamard theorem, which provides a formula for the radius of convergence of a power series. Assume that $|a_n| \leq M$ for all $n \in \mathbb{N}$ with $M > 0$. Using the formula, you have $$ \frac{1}{R} = \limsup_{n \to \infty} |a_n|^{\frac{1}{n}} \leq \limsup_{n \to \infty} M^{\frac{1}{n}} = 1 $$ and thus the radius of convergence of $\sum_{n=0}^{\infty} a_n x^n$ satisfies $R \geq 1$; in particular, the series converges for $|x| < 1$.
How does Godel use diagonalization to prove the 1st incompleteness theorem?
Goedel provides a way of representing both mathematical formulas and finite sequences of mathematical formulas each as a single positive integer (by replacing each symbol with a number, and then using the numbers as exponents in the prime factorization). If you can identify when a number corresponds to an axiom, and your "rules of inference" (valid logical arguments, such as Modus Ponens, which allows you to deduce $Q$ if you have both $P\to Q$ and $P$) can be modeled by certain finite processes (you can have a computer do them), then there is a way of checking whether a given number corresponds to a formal proof, and so given two numbers, $N$ and $M$, you can check: Is $N$ the number of a sequence of formulas? If so, is the sequence of formulas a formal proof? If so, is the last line of the proof the formula with number $M$? If the answer to all three questions is yes, then you know that $N$ is the number of a proof for the formula with number $M$, and in particular that there is a proof for that formula. Conversely, if you can prove a given formula $F$, then you can convert the proof into a number $N$, the formula into a number $M$, and then the number $N$ will be the number of a proof for the formula $M$. This entire thing can be coded as a relationship between numbers. Just like you can say "$n$ is a multiple of $m$", or "$k$ is a power of $q$", or "$p$ is a prime", you can also say "$N$ is a proof for $M$." This is a statement that can be described purely in terms of the numerical properties of $N$ and $M$. Goedel constructs a formula which essentially says: "There is no number $N$ which is a proof for the number you get by starting with the number $k$, and performing the following operations to it." Now, this is itself a formula, so it has a number. It turns out that if you calculate the number of this formula, you get exactly the number you get by starting with the number $k$ and performing the operations described by the statement. So even though the statement is, on its face, about numbers (it just says "There is no number $N$ which is in the relation of 'being a proof' for the number $f(k)$"), when you interpret the relationship 'being a proof' and you interpret the number $f(k)$, the statement is talking about itself. One reason the process is sometimes called diagonalization is that you are essentially looking for a number $k$, corresponding to the value of the entire statement, which has $k=f(k)$ (so that the statement will "refer to itself"). That is, you are trying to find a number $k$ in the "diagonal" of the graph.
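As a toy illustration of the coding step only (my own sketch, with a made-up symbol table, not Goedel's actual numbering), here is how a string of symbols becomes a single integer via prime exponents, and how the factorization recovers it:

```python
from sympy import prime, factorint

# Hypothetical symbol codes; any injective assignment works.
SYMBOLS = {'0': 1, 'S': 2, '=': 3, '(': 4, ')': 5, '+': 6}

def godel_number(formula):
    """Encode a string as prod_i prime(i+1) ** code(symbol_i)."""
    n = 1
    for i, ch in enumerate(formula):
        n *= prime(i + 1) ** SYMBOLS[ch]
    return n

def decode(n):
    """Recover the formula from the prime factorization."""
    exps = factorint(n)                      # {prime: exponent}
    codes = [exps[p] for p in sorted(exps)]
    inv = {v: k for k, v in SYMBOLS.items()}
    return ''.join(inv[c] for c in codes)

g = godel_number('S(0)=S(0)')
print(g)
print(decode(g))   # 'S(0)=S(0)'
```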
Verification involving modulus and complex numbers
In $\mathbb{C}$, if $|\beta|=1$ then $\beta$ can be anywhere on the unit circle centred at $0$. In particular, $$\vert{\beta}\vert=1 \iff \beta\in \{e^{it}:t\in[0,2\pi)\}.$$ A general trick when working with $\vert z \vert$ is to use the fact that $\vert z \vert^2=z \bar{z}$. So: $$\begin{align} \left\lvert \cfrac{\beta - \alpha}{1- \bar \alpha \beta}\right\rvert^2 &= \left(\cfrac{\beta - \alpha}{1- \bar \alpha \beta}\right) \overline{\left(\cfrac{\beta - \alpha}{1- \bar \alpha \beta}\right)} \\ &= \left(\cfrac{\beta - \alpha}{1- \bar \alpha \beta}\right) \left(\cfrac{\bar{\beta} - \bar{\alpha}}{1- \alpha \bar{\beta}}\right) \end{align}$$ By expanding this bracket and cancelling, you should find your solution!
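A quick random numerical check (my own sketch) of the resulting identity $\left|\frac{\beta-\alpha}{1-\bar\alpha\beta}\right| = 1$ whenever $|\beta| = 1$:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(5):
    # alpha inside the unit disk, beta on the unit circle
    alpha = rng.uniform(-0.5, 0.5) + 1j * rng.uniform(-0.5, 0.5)
    beta = np.exp(1j * rng.uniform(0, 2 * np.pi))
    print(abs((beta - alpha) / (1 - np.conj(alpha) * beta)))  # always 1.0
```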
Proving the reciprocal rule by using the difference quotient
From simple algebra, the difference between $f(x+h)$ and $f(x)$ is $$\frac{1}{x+h}-\frac{1}{x} = \frac{-h}{x(x+h)},$$ and you work from there: divide by $h$ and take the limit as $h \to 0$.
A regular $2017$-gon is partitioned into triangles by a set of non-intersecting diagonals. Prove that only one of them is acute-angled.
Since $2017$ is odd, no diagonal passes through the centre $M$ of the circumscribed circle. Therefore there is exactly one triangle of the partition containing $M$ in its interior. A triangle inscribed in a circle is acute iff it contains the centre in its interior (right iff the centre lies on a side, obtuse iff the centre is outside). Hence this one triangle is acute, and all the others are obtuse.
Isomorphism of preferences
No, of course not. For example, $\mathbb{N}$ has a smallest element, and hence so does any order isomorphic to it. But $\mathbb{Q}$ has no smallest element, so it is not isomorphic to $\mathbb{N}$.
Proof of "For transitive closure of a relation $R$ on a finite set with $n$ elements it is sufficient to find $R^*=\bigcup_{k=1}^n R^k$"
It doesn't matter whether there was more than one loop: the argument isn't intended to produce a shortest path. The point is that the path was assumed to be a shortest possible path from $a$ to $b$ and of length $m>n$, and the argument shows that these two assumptions are incompatible: if the path has length greater than $n$, then it can be shortened by removal of a loop, so it cannot have been a shortest possible path. There certainly is a shortest path from $a$ to $b$, so we can now conclude that it cannot have length greater than $n$: if it did, we've just seen that there would be at least one shorter path.
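A minimal sketch (my own) of computing $R^* = \bigcup_{k=1}^n R^k$ with 0/1 adjacency matrices, where $R^k$ records the existence of a path of length $k$:

```python
import numpy as np

# Relation on a 4-element set as a 0/1 adjacency matrix: 0 -> 1 -> 2 -> 3.
R = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])

n = R.shape[0]
closure = np.zeros_like(R)
power = np.eye(n, dtype=int)
for _ in range(n):                        # paths of length 1 .. n suffice
    power = (power @ R > 0).astype(int)   # R^k: is there a path of length k?
    closure |= power

print(closure)
```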
Matrix form of the differential operator $\sum_{k=1}^N x^k\frac{d^k}{dx^k}$
My answer may or may not be helpful, as it cannot shed light on the question regarding Chebyshev polynomials. One matrix formulation is possible in the following manner. For $N=2$ you get the differential equation $$x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} + y = 0.$$ Take $y_0 = y$ and $y_1 = y_0' := x \frac{dy_0}{dx}$ (note that the prime here denotes $x\frac{d}{dx}$, not the ordinary derivative). Therefore $$y_1' = x\frac{dy_1}{dx} = x\frac{d}{dx}\left(x\frac{dy_0}{dx}\right) = x\frac{dy_0}{dx} + x^2\frac{d^2 y_0}{dx^2} = y_1 + x^2\frac{d^2y_0}{dx^2}.$$ From the equation, $x^2\frac{d^2y}{dx^2} = -y_0 - x\frac{dy_0}{dx} = -y_0 - y_1$, so $y_1' = y_1 - y_0 - y_1 = -y_0$. Combining, we get the system $$y_0' = y_1, \qquad y_1' = -y_0,$$ or in matrix form $$\begin{pmatrix} y_0 \\ y_1 \end{pmatrix}' = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\begin{pmatrix} y_0 \\ y_1 \end{pmatrix}.$$ We can generalise the result to higher orders.
Question on algebra used in induction proof
That step should be $$ 1-\frac{k+2}{(k+1)(k+2)}+\frac{1}{(k+1)(k+2)}\\ =1+(-1)\cdot\left(\frac{k+2}{(k+1)(k+2)}\right)+\frac{1}{(k+1)(k+2)}\\ =1+\frac{-k-2}{(k+1)(k+2)}+\frac{1}{(k+1)(k+2)}\\ =1+\frac{-k-1}{(k+1)(k+2)}\\ =1-\frac{k+1}{(k+1)(k+2)} \\ =1-\frac{1}{k+2}. $$
Probability people will occupy $k$ adjacent chairs?
There are $n-k+1$ possible locations for $k$ people occupying adjacent seats, and there are $\binom{n}k$ possible locations for $k$ people, so the probability in the first question is $$\frac{n-k+1}{\binom{n}k}=\frac{(n-k+1)k!(n-k)!}{n!}=\frac{(n-k+1)!k!}{n!}\;,$$ as you say. In the second question there are still $\binom{n}k$ possible choices of $k$ seats, but there are now $n$ of them that have the $k$ people in adjacent seats, so the probability is $$\frac{n}{\binom{n}k}=\frac{nk!(n-k)!}{n!}=\frac{k!(n-k)!}{(n-1)!}\;.$$ (I’m assuming that the seats in the circle are individually identifiable, i.e., that seatings that differ by a rotation are still different seatings.)
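A brute-force check of both formulas for small values (my own sketch):

```python
from itertools import combinations
from math import comb

def adjacent_line(seats):
    s = sorted(seats)
    return all(b - a == 1 for a, b in zip(s, s[1:]))

def adjacent_circle(seats, n):
    # Adjacent on a circle iff the seats form a contiguous block mod n.
    s = set(seats)
    return any(all((start + i) % n in s for i in range(len(seats)))
               for start in range(n))

n, k = 8, 3
line = sum(adjacent_line(c) for c in combinations(range(n), k)) / comb(n, k)
circle = sum(adjacent_circle(c, n) for c in combinations(range(n), k)) / comb(n, k)
print(line, (n - k + 1) / comb(n, k))   # both 6/56
print(circle, n / comb(n, k))           # both 8/56
```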
Showing that the intersection of two subgroups of index 2 is normal of index 4
For a proof not using the Second Isomorphism Theorem, let $\pi\colon G\to C$ be a surjective homomorphism with kernel $H$, where $C$ is cyclic of order two. Similarly, let $\pi'\colon G\to C'$ have kernel $K$, with $C'$ cyclic of order two. Now define $\Pi\colon G\to C\times C'$ by $g\mapsto(\pi(g),\pi'(g))$. This is clearly a homomorphism, and clearly onto, since there are elements of $H$ not in $K$ and vice versa. Now the only thing left is to check that $\ker(\Pi)=H\cap K$, and this is almost immediate.
Alternative unconditional form of $\sqrt{n -\sqrt{n -\sqrt{n -\cdots}}}$?
$a_n =\sqrt{n -\sqrt{n -\sqrt{n - \cdots}}}$

$a_n^2 = n -\sqrt{n -\sqrt{n - \cdots}}$

$a_n^2 -n = -\sqrt{n -\sqrt{n - \cdots}}$

$a_n^2 -n =-a_n$

$a_n^2+a_n -n =0$

Using the quadratic formula and taking the positive root, $$a_n=\dfrac{-1+\sqrt{1+4n}}{2}.$$
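A numerical sketch (my own) iterating the radical and comparing with the closed form:

```python
import math

def nested_radical(n, depth=60):
    value = 0.0
    for _ in range(depth):
        value = math.sqrt(n - value)   # iterate a -> sqrt(n - a)
    return value

for n in [2, 7, 30]:
    closed = (-1 + math.sqrt(1 + 4 * n)) / 2
    print(n, nested_radical(n), closed)   # the two agree
```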
Two random vectors with the same distribution and one with independent components
$\quad \mathbb P(Y_1 \le y_1, Y_2 \le y_2, \ldots , Y_n \le y_n)$ — the CDF of $\mathbf Y$

$=\mathbb P(X_1 \le y_1, X_2 \le y_2, \ldots , X_n \le y_n)$ since $\mathbf{X}$ and $\mathbf Y$ have the same distribution

$=\mathbb P(X_1 \le y_1) \mathbb P(X_2 \le y_2) \cdots \mathbb P(X_n \le y_n)$ by independence of the components of $\mathbf X$

$=\mathbb P(Y_1 \le y_1) \mathbb P(Y_2 \le y_2) \cdots \mathbb P(Y_n \le y_n)$ since $\mathbf{X}$ and $\mathbf Y$ have the same distribution

So the components of $\mathbf Y$ are also independent, and this does not depend on the shapes of the individual marginal distributions of the components.
Is an isometric and bijective mapping between two metric spaces complete?
Why would they be complete? Consider e.g. $\Bbb Q\times\{0\}$ and $\Bbb Q\times\{1\}$ as subspaces of the real plane, and $f:(x,0)\mapsto (x,1)$: this is an isometric bijection, yet neither space is complete.
$(x^a-y^b)$ is a prime ideal in $R[x,y]$ where $R$ is a domain and $\gcd(a,b)=1$.
With your notation, we know that $r(x, y) = s_{0}(y)+s_{1}(y)x+\cdots+s_{a-1}(y)x^{a-1}$ satisfies $r(t^{b}, t^{a}) = 0$, since $g(t^{b}, t^{a}) = 0$. We want to show that $r = 0$, i.e. that $s_{i}(y) = 0$ for each $i = 0, \ldots, a-1$. It is clear that if some $s_{i}(y)$ is nonzero, then some $s_{j}(y)$ must also be nonzero for $j \neq i$. I claim that $$\deg_{t}(s_{i}(t^{a})(t^{b})^{i}) \neq \deg_{t}(s_{j}(t^{a})(t^{b})^{j})$$ if $s_{i}(y), s_{j}(y)$ are nonzero and $i \neq j$. Note that $s_{i}(t^{a})(t^{b})^{i}$ has terms of the form $t^{ka+bi}$ for $k$ a non-negative integer, so it suffices to show that we can never have $ka+bi = la+bj$ for any non-negative integers $k, l$ and $0 \leqslant i \neq j \leqslant a-1$. Indeed, if $ka+bi = la+bj$ with $k, l, i, j$ as specified, then $(k-l)a = b(j-i)$; since $a$ and $b$ are coprime, $a$ must divide $j-i$, which is a contradiction since $0 < j-i < a$.
Show that $\Sigma \vDash p_1 \lor p_2 \lor \ldots \lor p_n$ for some $n\in \mathbb{N}$
This is Exercise 3.27 [page 117] of Derek Goldrei, Propositional and Predicate Calculus: A Model of Argument (2005); we have to use Exercise 3.22 [page 108]. By the Compactness Theorem, there is a truth assignment $v$ such that, for all $\varphi \in \Gamma$, $v(\varphi) = 1$ iff for each finite subset $Δ ⊆ Γ$ there is a $v$ such that for all $σ \in Δ$, $v(σ) = 1$. This is equivalent to (we have to negate both clauses of the bi-conditional): for all truth assignments $v$ there exists $\varphi \in \Gamma$ such that $v(\varphi) = 0$ iff there exists a finite subset $Δ ⊆ Γ$ such that for all truth assignments $v$ there exists $σ \in Δ$ such that $v(σ) = 0$. Let $Γ = \{ \lnot p_1, \lnot p_2, \ldots, \lnot p_n, \ldots \}$. By the assumption that all truth assignments which satisfy $\Sigma$ make at least one of the $p_i$'s true, we have that for every truth assignment $v$ which satisfies $\Sigma$ there is an $i \in \mathbb N$ such that $v(\lnot p_i)=0$. By the above reformulation of Compactness, there exists a finite subset $Δ ⊆ Γ$ such that every truth assignment $v$ which satisfies $\Sigma$ makes some $\lnot p_i \in Δ$ false. Taking $n$ to be the largest index occurring in $Δ$, every such $v$ satisfies $v(p_i) = 1$ for some $i \le n$, hence $v(p_1 \lor \ldots \lor p_n)=1$. That is, $\Sigma \vDash p_1 \lor \ldots \lor p_n$ for some $n \in \mathbb N$.
efficient way to invert a Matrix plus a diagonal one
You may note that $\ker(\Sigma \otimes V + \phi I_{2n}) = \{0\}$ if and only if $-\phi$ is not in the spectrum of $\Sigma \otimes V$. It follows that $\Sigma \otimes V + \phi I_{2n}$ is invertible if and only if $-\phi$ is not an eigenvalue of $\Sigma \otimes V$. The spectrum of a Kronecker product of matrices is well studied, and you can express it explicitly in terms of the spectrum of $\Sigma$ and the spectrum of $V$. Here it is shown that if $(\lambda,\sigma)$ and $(\mu,v)$ are two eigenpairs of $\Sigma$ and $V$ respectively, then $(\lambda\mu,\sigma \otimes v)$ is an eigenpair of $\Sigma \otimes V$. Anyway, in the link you should find some interesting factorizations for efficiently solving the linear system $(\Sigma \otimes V + \phi I_{2n})x=b$. Computing the inverse directly is not very efficient, unless you need to solve this system a large number of times for fixed $V$ and $\Sigma$ and varying $b$.
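A sketch of such a factorization-based solve (my own, assuming $\Sigma$ and $V$ are symmetric, e.g. covariance matrices): with $\Sigma = Q\Lambda Q^T$ and $V = PMP^T$, one has $\Sigma \otimes V + \phi I = (Q \otimes P)(\Lambda \otimes M + \phi I)(Q \otimes P)^T$, and the middle factor is diagonal:

```python
import numpy as np

rng = np.random.default_rng(0)

def sym(k):
    B = rng.standard_normal((k, k))
    return B + B.T

Sigma, V, phi = sym(2), sym(5), 3.7
b = rng.standard_normal(10)

lam, Q = np.linalg.eigh(Sigma)
mu, P = np.linalg.eigh(V)

QP = np.kron(Q, P)
d = np.kron(lam, mu) + phi     # eigenvalues of Sigma (x) V, shifted;
                               # assumes -phi is not among the products
x = QP @ ((QP.T @ b) / d)      # solve in the eigenbasis

# Compare with a direct dense solve.
M = np.kron(Sigma, V) + phi * np.eye(10)
print(np.allclose(x, np.linalg.solve(M, b)))   # True
```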
Why does a differential form represent a vector field?
Advice: first, you need to understand some basic concepts of differential forms. I recommend two excellent readings on differential forms, in my view: 1. Differential Forms, by Henri Cartan. This book is ideal for understanding differential forms in various contexts; for example, Cartan develops the theory of forms in spaces of finite and infinite dimension. 2. Differential Forms, by Manfredo do Carmo. This book is ideal for learning the concepts of differential forms as applied in Differential Geometry. So a quick reading of Manfredo's book will be great for you and your doubts, but I recommend, principally for the study of geometry, reading Cartan's book.
A maximal ideal is always a prime ideal?
Let $R$ be a ring, not necessarily with identity, not necessarily commutative. An ideal $\mathfrak{P}$ of $R$ is said to be a prime ideal if and only if $\mathfrak{P}\neq R$, and whenever $\mathfrak{A}$ and $\mathfrak{B}$ are ideals of $R$, then $\mathfrak{AB}\subseteq \mathfrak{P}$ implies $\mathfrak{A}\subseteq \mathfrak{P}$ or $\mathfrak{B}\subseteq \mathfrak{P}$. (The condition given by elements, $ab\in P$ implies $a\in P$ or $b\in P$, is stronger in the case of noncommutative rings, as evidenced by the zero ideal in the ring $M_2(F)$, with $F$ a field, but is equivalent to the ideal-wise definition in the case of commutative rings; this condition is called "strongly prime" or "totally prime". Generally, with noncommutative rings, "ideal-wise" versions of multiplicative ideal properties are weaker than "element-wise" versions, and the two versions are equivalent in commutative rings). When the ring does not have an identity, you may not even have maximal ideals. But here is what you can rescue; recall that if $R$ is a ring, then $R^2$ is the ideal of $R$ given by all finite sums of elements of the form $ab$ with $a,b\in R$ (that is, it is the usual ideal-theoretic product of $R$ with itself, viewed as ideals). When $R$ has an identity, $R^2=R$; but even when $R$ does not have an identity, it is possible for $R^2$ to equal $R$. Theorem. Let $R$ be a ring, not necessarily with identity, not necessarily commutative. If $R^2=R$, then every maximal ideal of $R$ is also a prime ideal. If $R^2\neq R$, then any ideal that contains $R^2$ is not a prime ideal. In particular, if $R^2\neq R$ and there is a maximal ideal containing $R^2$, this ideal is maximal but not prime. Proof. Suppose that $R^2=R$. Let $\mathfrak{M}$ be a maximal ideal of $R$; by assumption, we know that $\mathfrak{M}\neq R$. Now assume that $\mathfrak{A},\mathfrak{B}$ are two ideals such that $\mathfrak{A}\not\subseteq \mathfrak{M}$ and $\mathfrak{B}\not\subseteq\mathfrak{M}$. We will prove that $\mathfrak{AB}$ is not contained in $\mathfrak{M}$ (we are proving $\mathfrak{M}$ is prime by contrapositive). By the maximality of $\mathfrak{M}$, it follows that $\mathfrak{M}+\mathfrak{A}=\mathfrak{M}+\mathfrak{B}=R$. Then we have: $$\begin{align*} R &= R^2\\ &= (\mathfrak{M}+\mathfrak{A})(\mathfrak{M}+\mathfrak{B})\\ &= \mathfrak{M}^2 + \mathfrak{AM}+\mathfrak{MB}+\mathfrak{AB}\\ &\subseteq \mathfrak{M}+\mathfrak{M}+\mathfrak{M}+\mathfrak{AB}\\ &=\mathfrak{M}+\mathfrak{AB}\\ &\subseteq R, \end{align*}$$ hence $\mathfrak{M}\subsetneq\mathfrak{M}+\mathfrak{AB}=R$. Therefore, $\mathfrak{AB}\not\subseteq\mathfrak{M}$. Thus, $\mathfrak{M}$ is a prime ideal, as claimed. Now suppose that $R^2\neq R$ and $\mathfrak{I}$ is an ideal of $R$ that contains $R^2$. If $\mathfrak{I}=R$, then $\mathfrak{I}$ is not prime. If $\mathfrak{I}\neq R$, then $RR\subseteq \mathfrak{I}$, but $R\not\subseteq \mathfrak{I}$, so $\mathfrak{I}$ is not prime. In particular, if $\mathfrak{M}$ is a maximal ideal containing $R^2$, then $\mathfrak{M}$ is not prime. $\Box$ In your example, we have $R=2\mathbb{Z}$, $R^2=4\mathbb{Z}\neq R$, so any ideal that contains $R^2$ (in particular, the ideal $R^2$ itself) is not prime. And since $4\mathbb{Z}$ is a maximal ideal containing $R^2$, we have exhibited a maximal ideal that is not prime. (In fact, $2\mathbb{Z}$ has maximal ideals containing any given ideal; this can be proven directly, or by invoking the fact that it is noetherian.)
Given $\lambda$ regular cardinal, $\left(\kappa^{<\lambda}\right)^{<\lambda}=\kappa^{<\lambda}$?
Note that since $\lambda$ is regular, for any $\mu<\lambda$, any $f\colon\mu\to\lambda$ is bounded. Now think about $g\in\left(\kappa^{<\lambda}\right)^{<\lambda}$ as some $g\colon\mu\to\kappa^{<\lambda}$. Then there is some $\nu<\lambda$ such that $g\colon\mu\to\kappa^\nu$. So we get the wanted result, since clearly $\left(\kappa^{<\lambda}\right)^\mu=\kappa^{<\lambda}$ for any $\mu<\lambda$.
Boundary on a manifold
First, and perhaps foremost, manifolds don't have boundaries. Suppose, by way of contradiction, that $M$ is a manifold with nonempty boundary and that $p$ is a point on that boundary. Then no neighborhood of $p$ is homeomorphic to $\mathbb{R}^m$. Contradiction. What you actually mean is a manifold with boundary. A manifold with boundary has a boundary if and only if it is not (strictly speaking) a manifold. Like I stated above, $p$ is on the boundary if and only if none of its neighborhoods is homeomorphic to $\mathbb{R}^m$. To determine whether an oriented smooth manifold with boundary has a boundary, we can use Stokes' theorem: if there exists an $m$-form $\alpha$ with compact support such that $$\int_M\mathrm{d}\alpha=\int_{\partial M}\alpha\neq 0,$$ then the boundary is nonempty.
how many ways to make $k$ faces
Given an arrangement of $k$ closed curves, construct a rooted tree on $k+1$ vertices as follows: there is a vertex for each closed curve, and a root vertex corresponding to the entire plane (or a giant curve large enough to enclose all the $k$ given curves). Two vertices are connected if one of the two corresponding curves contains the other, but there is no third curve containing the inner one and contained in the outer one. This correspondence gives a bijection between the number of "$k$-face" configurations you describe, and the number of rooted trees with $k+1$ vertices (equivalently, with $k$ edges). (The attached picture shows an example of this bijection when $k=4$.) No closed formula is known, but a lot of information can be found on OEIS for example.
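The counts (OEIS A000081, shifted by one) can be computed with the standard rooted-tree recurrence; here is a short sketch (my own):

```python
def rooted_trees(n_max):
    """a[n] = number of rooted trees with n vertices (OEIS A000081)."""
    a = [0, 1]                                   # a[1] = 1
    for n in range(1, n_max):
        total = 0
        for k in range(1, n + 1):
            s = sum(d * a[d] for d in range(1, k + 1) if k % d == 0)
            total += s * a[n - k + 1]
        a.append(total // n)                     # division is exact
    return a

a = rooted_trees(8)
# Number of k-curve configurations = rooted trees on k+1 vertices:
for k in range(1, 7):
    print(k, a[k + 1])    # 1, 2, 4, 9, 20, 48
```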
Given a box with 100 balls and 2 Poisson random variables, one the number of blue balls and the other the number of red
If I understand correctly, what you mean is that the numbers of red and blue balls were drawn from these Poisson distributions (plus one), and the sum turned out to be $100$. If so, we can take out the two $+1$ balls (one blue, one red) and are left with two Poisson variables that sum to $98$. Conditional on their sum, these variables are binomially distributed; each ball is independently blue with probability $\frac n{n+2n}=\frac13$ and red with probability $\frac{2n}{n+2n}=\frac23$. Thus the expected number of blue balls is $\frac{98}3$. Add back the blue $+1$ ball, and we expect to have $\frac{98}3+1=\frac{101}3$ blue balls, so the probability that any given (e.g. the last) ball is blue is $\frac{\frac{101}3}{100}=\frac{101}{300}$.
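A Monte Carlo sanity check of the $\frac{101}{300}$ answer (my own sketch, with an arbitrary choice of $n$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 33                                    # E[blue + red] = 3n + 2 = 101
trials = 2_000_000

blue = rng.poisson(n, trials) + 1
red = rng.poisson(2 * n, trials) + 1

mask = blue + red == 100                  # condition on the observed total
p_blue = (blue[mask] / 100).mean()        # P(a given ball is blue)
print(mask.sum(), p_blue, 101 / 300)      # ~0.3367
```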
Assume $f(n)=O(g(n))$ with $g(n)\geq 2$ for all $n$
You are correct. Just as a general rule of thumb: when there is a linear combination of terms, for example $f(x)=ag(x)+bh(x)+\dots$, the only term that matters for the asymptotics is the leading (largest) term. For example, $3x^2+99x$ is $O(x^2)$. This can be used to show that you are right: since $f(x)=O(g(x))$ by assumption and $g(x)=O(g(x))$ trivially, then $f(x)+g(x)=O(g(x))$.
Einstein notation difficulties
$$(- ik_i \frac{dM_i}{dt} - \frac{1}{2}k_i k_j \frac{dU_{ij}}{dt} - iY_{ij}k_iM_j - Y_{ij}k_iU_{jl}k_l + D_{ij}k_ik_j)P = 0$$ Since $P \ne 0$, the real and imaginary parts must vanish separately: $$- \frac{1}{2}k_i k_j \frac{dU_{ij}}{dt} - Y_{ij}k_iU_{jl}k_l + D_{ij}k_ik_j = 0 ~~~~~~~(1)$$ $$- k_i \frac{dM_i}{dt} - Y_{ij}k_iM_j = 0~~~~~~~~(2)$$ For (1), $$k_i k_j \frac{dU_{ij}}{dt} = - 2Y_{ij}k_iU_{jl}k_l + 2D_{ij}k_ik_j,$$ and, using the symmetry $U_{jl} = U_{lj}$, $$k_i k_j \frac{dU_{ij}}{dt} = - Y_{ij}k_iU_{jl}k_l - Y_{ij}k_iU_{lj}k_l + 2D_{ij}k_ik_j.$$ Then relabel the dummy indices ($j \leftrightarrow l$ in both terms, and additionally $i \leftrightarrow j$ in the second) so that every term carries the same factor $k_ik_j$: $$k_i k_j \frac{dU_{ij}}{dt} = - Y_{il}k_iU_{lj}k_j - U_{il}k_iY_{jl}k_j + 2D_{ij}k_ik_j.$$ Since this holds for all $k$ and both sides are now symmetric in $i,j$, the factor $k_ik_j$ can be dropped: $$\frac{dU_{ij}}{dt} = - Y_{il}U_{lj} - U_{il}Y_{jl} + 2D_{ij},$$ i.e. $\frac{dU}{dt} = -YU - UY^T + 2D$.
How to split this polynomial?
Hint: $$\frac{x}{(x+3)(x+2)} = \frac{A}{x+3} + \frac{B}{x+2}$$ i.e. $$x=A(x+2)+B(x+3)$$ which is an identity in $x$. Now solve for $A$ and $B$ to get the above relation.
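If you want to check your outcome afterwards (my own sketch), sympy's `apart` performs this decomposition directly:

```python
import sympy as sp

x = sp.symbols('x')
expr = x / ((x + 3) * (x + 2))
print(sp.apart(expr))   # -2/(x + 2) + 3/(x + 3)
```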
Solve the SDE $dX_t=\sqrt t(X_t+\sin t)dW_t$
The SDE is a particular example of a so-called linear SDE $$dX_t = (\alpha(t)+\beta(t) X_t) \, dt + (\gamma(t)+\delta(t) X_t) \, dW_t \tag{1}$$ where $\alpha, \beta,\gamma,\delta$ are deterministic functions. Such linear SDEs can be solved explicitly, and you can find a formula for the solution, for instance, in the book Brownian Motion - An Introduction to Stochastic Processes by Schilling & Partzsch. The idea is to solve first the homogeneous SDE $$dX_t = \beta(t) X_t \, dt + \delta(t) X_t \, dW_t$$ and then to use a "variation of constants" approach, see this question. For the particular case that $\alpha=\beta=0$, the solution to $(1)$ is given by $$X_t = \exp \left( M_t \right) \left[ X_0 + \int_0^t \exp(-M_s) \gamma(s) \, dW_s - \int_0^t \exp(-M_s) \gamma(s) \delta(s) \, ds \right]$$ where $$M_t := \int_0^t \delta(s) \, dW_s - \frac{1}{2} \int_0^t \delta(s)^2 \, ds.$$ Plugging in $\delta(t) = \sqrt{t}$ and $\gamma(t) = \sqrt{t} \sin t$ gives the solution to the SDE $$dX_t = \sqrt{t} (X_t+\sin t) \, dW_t. \tag{2}$$ You can use the approach mentioned above to "reprove" the formula for the solution, i.e. first solve the SDE $$dX_t = \sqrt{t} X_t \, dW_t$$ and then use the "variation of constants" approach to obtain the solution to $(2)$.
Complex derivative of Frobenius norm with the pseudo inverse with respect to the original matrix
The differential of the pseudoinverse is well known but complicated $$\eqalign{ dA^+ &= A^+{A^+}^HdA^H(I-AA^+)+(I-A^+A)dA^H{A^+}^HA^+-A^+dAA^+ \cr }$$ Keep the $dA$ terms and ignore the $dA^H$ terms in accordance with the so-called Wirtinger or $\mathbb{CR}$-calculus. For convenience, define the matrix $$X = C-A^+B$$ Write the function in terms of this variable. Then find its differential and gradient. $$\eqalign{ \phi &= \|X\|_F^2 \,\,\in {\mathbb R} \cr &= X^*:X \cr d\phi &= X^*:dX \cr &= X^*:(-dA^+B) \cr &= X^*B^T:A^+\,dA\,A^+ \cr &= {A^+}^TX^*B^T{A^+}^T:dA \cr &= {A^+}^T(C-A^+B)^*B^T{A^+}^T:dA \cr G=\frac{\partial\phi}{\partial A} &= {A^+}^T(C-A^+B)^*B^T{A^+}^T \cr }$$ Given the gradient wrt $A$, it's a simple matter to find the gradient wrt $A^H$ or $A^*$ $$\eqalign{ \frac{\partial\phi}{\partial A^H} &= G^H,\quad \frac{\partial\phi}{\partial A^*} = G^* \cr }$$ In some intermediate steps, a colon was used to denote the trace/Frobenius product, i.e. $$A:B = {\rm Tr}(A^TB)$$
Calculate the pushforward of smooth map between manifolds
In general, for a smooth map $f: M\to N$, the pushforward $(f_*)_x : T_xM \to T_{f(x)}N$ is given by $$[\gamma] \mapsto [f\circ \gamma],$$ where $\gamma$ is a curve in $M$ with $\gamma(0) = x$, representing a tangent vector in $T_xM$. Then in our case, if you want to calculate $(\phi_*)_I$ (a tangent vector $M$ is identified with the curve $\gamma(t) = I + tM$), $$(\phi_*)_I (M) = [ \phi(I+tM)] = [(I+tM)(I+tM)^t] = [I + t(M+ M^t) + t^2MM^t],$$ thus $(\phi_*)_I$ is given by $$M \mapsto M+M^t,$$ which is not the one you suggested. (Your map is not linear, so it cannot be $(\phi_*)_I$.)
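A quick finite-difference check (my own sketch) that the derivative of $\phi(A) = AA^t$ at $I$ in direction $M$ is indeed $M + M^t$:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
I = np.eye(3)
t = 1e-7

phi = lambda A: A @ A.T
numerical = (phi(I + t * M) - phi(I)) / t   # the t^2 M M^t term vanishes
print(np.allclose(numerical, M + M.T, atol=1e-5))   # True
```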
Derivatives of a trace with respect to perturbation
Here is a partial answer. In Section 2.5 of the Matrix Cookbook, there is an unnumbered formula between Eqn 98 and Eqn 99 which yields the derivative of the trace of any matrix function. A few iterations with that formula should convince you that the $k$-th derivative of your function is $$ (-1)^k \,{\rm tr}\big(Y^k Z^{1-k}\big) $$ for $k>1$. For $k=1$, the result includes a $\log$-term: $$ {\rm tr}\big(Y\log(Z)+Y\big) $$ The complete answer involves reformulating these results in terms of the eigenvalues.