Prove that $\frac{\sum_\text{cyc} x_1}{2020^2} \geq (n - 1)(\sum_\text{cyc} \frac{1}{x_1})$
Your inequality is true for any positive variables such that:$$\sum_{cyc} \frac{1}{x_1^2 + 2020^2} = \frac{1}{2020^2}.$$ Indeed, let $a_i=\frac{x_i^2}{2020}$ and see here: Prove that $\sqrt{x_1}+\sqrt{x_2}+\cdots+\sqrt{x_n} \geq (n-1) \left (\frac{1}{\sqrt{x_1}}+\frac{1}{\sqrt{x_2}}+\cdots+\frac{1}{\sqrt{x_n}} \right )$
How to find the center of an Ellipse, given a focal point, radius, and eccentricity
Turning existing comments into an answer: Wikipedia will tell you that $$\epsilon=\dots=f/a$$ (where again $a$ and $b$ are one-half of the ellipse's major and minor axes respectively, and $f$ is the focal distance). Since you know $\epsilon$ and $a$, you can compute $f$, and that's the distance between focus and center. If the term “focal distance” isn't clear enough (it could be the distance between the foci, after all), the images on Wikipedia make it clear that this is the distance you need, between center and focus. If you also know the direction, you can just move that distance in the given direction. However, your specification doesn't tell us whether you should move in the positive or the negative $x$ direction. But perhaps you know which.
Minimizing $\left(a+\frac{1}{a}\right)^{2}+\left(b+\frac{1}{b}\right)^{2}$ over positive reals with $a+b=1$. Why is the minimum not $18$?
The issue is in the first line of simplification. $$a+\frac{a+b}{a}=a+1+\frac{b}{a}\neq 2+\frac{b}{a}.$$ If you make the correct simplification, you can continue $$\sqrt{\frac{\left(a+1+\frac ba\right)^2+\left(b+1+\frac ab\right)^2}{2}}\geq \frac{a+b+2+\frac ab+\frac ba}{2}=\frac{3+\frac ab+\frac ba}{2}\geq \frac 52,$$ and get the correct result.
Constructing Manifolds: Submersion
I'm assuming that the statement is meant to be interpreted as follows: Let $M$ be a smooth manifold and $X$ be a topological space. Suppose we are given a local homeomorphism $F:M\to X$ with $\operatorname{im} F=X$. Then $X$ is a topological manifold, and it has a smooth structure with the property that $F$ is a submersion. Unfortunately, this is false. There are two problems, one easily fixed, the other not so easily. The first problem is that $X$ need not be Hausdorff, so if you include the Hausdorff property as part of your definition of a manifold, then $X$ need not be one. A counterexample is given by letting $M = \mathbb R\times \{0,1\}$ (two disjoint copies of $\mathbb R$), letting $X$ be the quotient space obtained by identifying $(x,0)$ with $(x,1)$ for all nonzero $x$ (called "the line with two origins"), and letting $F$ be the quotient map. This problem is easily fixed by adding the stipulation that $X$ is Hausdorff (or, if you prefer, by allowing non-Hausdorff manifolds). The second, more serious problem is that there may be no way to endow $X$ with such a smooth structure. Here's a counterexample (inspired by your comment about $x^3$): Let $M=\mathbb R$ with its usual smooth manifold structure, $X=\mathbb S^1$ with its usual topology, and $$F(x) = (\cos 2\pi x^3,\sin 2\pi x^3).$$ Suppose there were some smooth structure on $\mathbb S^1$ such that $F$ is a submersion. Because dimension is a topological invariant, it follows that $\dim X = \dim M = 1$, and therefore $F$ is a local diffeomorphism. Note that $F(0) = F(1) = (1,0)$. Thus there are neighborhoods $V_0,V_1$ of $(1,0)$ in $X$, $U_0$ of $0$ in $\mathbb R$, and $U_1$ of $1$ in $\mathbb R$ such that $F|_{U_0}\colon U_0\to V_0$ and $F|_{U_1}\colon U_1\to V_1$ are diffeomorphisms. By shrinking all four neighborhoods, we can also assume that $V_0=V_1$. Then $(F|_{U_0})^{-1}\circ (F|_{U_1})$ is a diffeomorphism from $U_1$ to $U_0$. But a direct computation shows that this map is $$x\mapsto \sqrt[3]{(1/2\pi)\arcsin (\sin 2\pi x^3)},$$ which is not smooth at $x=1$ because the argument of the cube root vanishes there. Here's a theorem that is true: Let $M$ be a smooth manifold and $X$ be a topological space. Suppose we are given a covering map $F:M\to X$, such that the covering automorphism group acts smoothly and properly on $M$. Then $X$ is a topological manifold, and it has a unique smooth structure with the property that $F$ is a submersion. This follows from Theorem 21.13 of my Introduction to Smooth Manifolds (2nd ed.).
Uniqueness of a "restriction" of a local operator $\alpha:\Gamma(E)\to \Gamma(F)$, where $E,F\to M$ are smooth vector bundles
Here's one argument using bump functions. Let $\beta,\gamma:\Gamma(U,E)\to\Gamma(U,F)$ be two such restrictions. It suffices to show that $\beta-\gamma=0$. Choose any local section $s\in\Gamma(U,E)$, and any $x\in U$. It now suffices to show that $(\beta-\gamma)s(x)=0$. We may always find a compactly supported bump function $\psi:M\to\mathbb{R}$ such that $\psi=1$ on an open neighborhood $V\ni x$ and $\text{supp}(\psi)\subset U$. We know by locality that $(\beta-\gamma)s(x)=(\beta-\gamma)(\psi s)(x)$ (since $s-\psi s$ vanishes on $V$), and since $\psi s$ may be smoothly extended to a section $\widetilde{\psi s}$ on all of $M$, we have $$ (\beta-\gamma)s(x)=(\beta-\gamma)(\psi s)(x)=\beta(\psi s)(x)-\gamma(\psi s)(x)=\alpha(\widetilde{\psi s})(x)-\alpha(\widetilde{\psi s})(x)=0 $$ Edit: As probably123 points out, this proof shows that the restriction $\alpha|_U$ is unique among local operators. If there is no requirement that the restriction be local, then there is no guarantee of uniqueness. The space of restricted sections $\Gamma(M,E)|_{U}:=\{s|_U:s\in\Gamma(M,E)\}$ is a proper subspace of $\Gamma(U,E)$ whenever there exist local sections that cannot be smoothly extended. This means that the quotient $Q:=\Gamma(U,E)/\Gamma(M,E)|_U$, as well as its algebraic dual $Q^*$, are nontrivial. In this case, we may choose a nonvanishing section $t\in\Gamma(U,F)$ and a nonzero element $\lambda\in Q^*$. Define an operator $\chi:\Gamma(U,E)\to\Gamma(U,F)$ by $\chi(s)=\lambda([s])t$ (where $[\ ]$ denotes the projection into $Q$). Since $\chi$ vanishes on $\Gamma(M,E)|_U$, for any restriction $\alpha|_U$, $\alpha|_U+\chi$ is also a valid and distinct restriction. A more explicit counterexample is hard to find in the smooth category, since $Q^*$ is rather difficult to describe. It may also be possible to ensure uniqueness in other ways, such as topologizing the spaces of sections and requiring continuity, or restricting attention to compactly supported sections.
Evaluate $(\sqrt{3}-3i)^6$
Yes, it is correct. In fact $$(\sqrt{3}-3i)^6=3^3(1-\sqrt{3}i)^6=2^63^3(e^{-i\pi/3})^6=1728.$$ P.S. There is no need for the term $+2\pi k$.
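For a quick numeric sanity check, here is a minimal Python sketch using the built-in complex type:

```python
# (sqrt(3) - 3i)^6 should come out as 1728, up to floating-point rounding
z = (3**0.5 - 3j) ** 6
print(z)  # approximately (1728+0j)
```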
undergraduate math vs graduate math
Here is what I tell my grad students: The difference between undergrad mathematics and graduate mathematics is the difference between art history, or art appreciation, and learning to be an artist. As an undergraduate you see a lot of mathematics, but you don't create new mathematics. The goal of graduate school (and here I am speaking from experience with top fifty U.S. graduate schools, so what I am saying probably applies best in that context) is to learn how to create new mathematics, and then to create that new mathematics. One specific consequence of this (in my view) is the following: often in undergraduate mathematics classes, proofs and rigor are presented almost as moral imperatives --- as if it is a moral failing to know a statement without knowing why it is true; consequently, people often put a lot of effort into learning arguments just for the sake of having learnt them. (This is exaggerated, perhaps, but I think it reflects something real.) On the other hand, in research, one learns arguments for different reasons: to learn technique, to pick out important ideas --- there is a professional aspect to the way one looks at pieces of mathematics which is not usually present in undergraduate mathematics. One gives proofs in order to be sure that one hasn't blundered; one's interaction with the mathematics and the arguments is much more visceral than in undergraduate courses. (I am not speaking from any experience now, but I think of the difference between learning how to interact with a block of marble, and bring a new form out of it, however rough it might be, in comparison to looking and learning about a lot of existing beautiful statues, masterpieces that they are.)
If two tangents can be drawn to different tangents of a hyperbola $\frac{x^2}{1}-\frac{y^2}{4}=1$ from the point $(a,a^2)$, then find range of $a$
Your solution is correct. Let me verify by a separate method. We know two distinct tangents can be drawn from a point $P$ if it lies outside the hyperbola. So, we need $$\dfrac{a^2}1-\dfrac{a^4}4-1<0$$ $$\iff a^4-4a^2+4>0$$ $$\iff(a^2-2)^2>0,$$ which holds for every real $a$ with $a^2-2\ne0$, i.e. $a\ne\pm\sqrt2$.
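To double-check the algebra numerically, here is a small Python sketch verifying that $a^2-\frac{a^4}{4}-1=-\frac{(a^2-2)^2}{4}$ identically:

```python
import numpy as np

a = np.linspace(-3, 3, 601)
lhs = a**2 - a**4 / 4 - 1                      # position of (a, a^2) relative to the hyperbola
print(np.allclose(lhs, -(a**2 - 2)**2 / 4))    # True: the two expressions agree
```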
Why do parametrizations to the normal of a sphere sometimes fail?
I think a large part of the difficulty here is the variation in the magnitudes of your "normals" in general. When you give a "normal" in the form $\langle -f_x, - f_y, 1\rangle$, basically what you have is a radial projection of the hemisphere parameterized by $\langle \sin\phi \cos\theta, \sin\phi \sin\theta, \cos\phi\rangle$ (for $0 \leq \phi \leq \frac\pi2$) onto the plane $z = 1$. That is, you have a projection that takes a point $P$ on the surface of the hemisphere to a projected point $P'$ on the plane such that $P$ and $P'$ are collinear with the center of the sphere. (This is also called a gnomonic projection of the sphere.) The coordinates of your "normals" are the coordinates of the projected points. As long as $0 \leq \phi < \frac\pi2$, each point of your sphere does in fact project onto that plane, although as $\phi$ approaches $\frac\pi2$ the magnitudes of your "normals" grow without bound. And obviously the boundary of your hemisphere, where $\phi = \frac\pi2$, does not project on to the plane $z = 1$ at all. If you change your choice of $f$ as recommended in the answer by H.R., this problem goes away. All your normals then will have the same magnitude, and they will be defined at all points of the hemisphere. When you give your "normals" in the form $\langle\sin^2\phi\cos\theta, \sin^2\phi\sin\theta, \sin\phi\cos\phi\rangle$, you again have non-uniform magnitudes, but this time the magnitudes go to zero as $\phi$ approaches zero. In effect, you are radially projecting the hemisphere onto a kind of degenerate torus given by $r = \sin\phi$. But notice that the three components of your "normals" all have the common factor $\sin\phi$. You can normalize the magnitudes of all your "normals" (except for the case $\phi = 0$) by multiplying by the scalar $1/\sin\phi$. If you do this, you get the vectors $\langle\sin\phi\cos\theta, \sin\phi\sin\theta, \cos\phi\rangle$, that is, the coordinates of each vector (for $0 < \phi \leq \frac\pi2$) are simply the coordinates of the point on the sphere. If you define these vectors as the normals for all points such that $0 < \phi \leq \frac\pi2$, you may see that you can use the same formula to define the normal for $\phi = 0$ as well, and it works very nicely. The formula ${\bf r}_\phi \times {\bf r}_\theta$ works nicely for the measurement of area (which is what it is used for in the example you quoted) precisely because its magnitude does go to zero as $\phi$ goes to zero and does so in the same way that the area element $r \,d\phi\,d\theta$ goes to zero. But as you observed, this property is not so desirable when you are trying to construct a set of normal vectors rather than trying to integrate some scalar function over area.
Riemann problem of nonconvex scalar conservation laws
The method is very similar to the convex case, e.g. Burgers' equation where $f(u) = \frac{1}{2}u^2$, but there are more possible types of waves. In fact, in addition to shock waves and rarefaction waves, there may be waves with both discontinuous and continuous parts. Moreover, the Lax entropy condition for shocks must be replaced by the more general Oleinik entropy condition. In the case where the flux $f$ is not convex, these are the possible types of waves: shock waves. If the solution is a shock wave with expression $$ u(x,t) = \left\lbrace \begin{aligned} &u_L & &\text{if}\quad x < s\, t \, ,\\ &u_R & &\text{if}\quad s\, t < x \, , \end{aligned} \right. \tag{1} $$ then the speed of shock $s$ must satisfy the Rankine-Hugoniot jump condition $s = \frac{f(u_R)- f(u_L)}{u_R - u_L}$. Moreover, the shock wave must satisfy the Oleinik entropy condition [1] $$ \frac{f(u)- f(u_L)}{u - u_L} \geq s \geq \frac{f(u_R)- f(u)}{u_R - u} , $$ for all $u$ between $u_L$ and $u_R$. In the case where $f$ is convex, the slope of its chords can be compared with its derivative using convexity inequalities. Thus, the classical Lax entropy condition $f'(u_L)>s>f'(u_R)$ is recovered, where $f'$ denotes the derivative of $f$. rarefaction waves. The derivation is similar to the convex case, starting with the self-similarity Ansatz $u(x,t) = v(\xi)$ where $\xi = x/t$, which gives $f'(v(\xi)) = \xi$. In the nonconvex case, the equation $f'(v(\xi)) = \xi$ may have multiple solutions $v(\xi)$, and the correct one is deduced from the continuity conditions $v(f'(u_L)) = u_L$ and $v(f'(u_R)) = u_R$. Such a solution is given by $$ u(x,t) = \left\lbrace \begin{aligned} &u_L & &\text{if}\quad x \leq f'(u_L)\, t \, ,\\ &(f')^{-1}(x/t) & &\text{if}\quad f'(u_L)\, t \leq x \leq f'(u_R)\, t \, ,\\ &u_R & &\text{if}\quad f'(u_R)\, t \leq x \, , \end{aligned} \right. \tag{2} $$ where the expression of the reciprocal $(f')^{-1}$ of $f'$ has been chosen carefully. compound waves, a.k.a. composite waves or semi-shocks. The latter occur when neither shock waves nor rarefaction waves are entropy solutions, but combinations of them are. The positions of the rarefaction parts and of the discontinuous parts are deduced from the Rankine-Hugoniot condition and from the Oleinik entropy condition. A rather practical method of solving such problems is convex hull construction: [1] The entropy-satisfying solution to a nonconvex Riemann problem can be determined from the graph of $f (u)$ in a simple manner. If $u_R < u_L$, then construct the convex hull of the set $\lbrace (u, y) : u_R \leq u \leq u_L \text{ and } y \leq f (u)\rbrace$. The convex hull is the smallest convex set containing the original set. [...] If $u_L < u_R$, then the same idea works, but we look instead at the convex hull of the set of points above the graph, $\lbrace (u, y) : u_L \leq u \leq u_R \text{ and } y \geq f (u)\rbrace$. Between $u_L$ and $u_R$, the intervals where the slope of the hull's edge is constant correspond to admissible discontinuities. The other intervals correspond to admissible rarefactions. One can also use Osher's expression of general similarity solutions $u(x,t) = v(\xi)$, which reads [1] $$ v(\xi) = \left\lbrace \begin{aligned} &\underset{u_L\leq u\leq u_R}{\text{argmin}} \left(f(u) - \xi u\right) && \text{if}\quad u_L\leq u_R \, ,\\ &\underset{u_R\leq u\leq u_L}{\text{argmax}} \left(f(u) - \xi u\right) && \text{if}\quad u_R\leq u_L \, . \end{aligned} \right. $$
To summarize, here are the different entropy solutions and their validity in the case $f(u) = \frac{1}{3}u^3$, where the inflection point of $f$ is located at the origin. The speed of sound is $f'(u) = u^2$, with reciprocal $(f')^{-1}(\xi) = \pm\sqrt{\xi}$. Using the convex hull construction method, one gets: if $[0<u_L<u_R]$ or $[u_R<u_L<0]$, the solution is a rarefaction wave $({2})$ with shape $\text{sgn}(u_R) \sqrt{x/t}$. else, if $[u_L<u_R< -\frac{1}{2}u_L]$ or $[-\frac{1}{2}u_L <u_R<u_L]$, the solution is a shock wave $({1})$, whose speed $s = \frac{1}{3}\left( {u_L}^2 + {u_L}{u_R} + {u_R}^2 \right)$ is given by the Rankine-Hugoniot condition. else, if $[u_L\leq 0\leq -\frac{1}{2}u_L \leq u_R]$ or $[u_R\leq -\frac{1}{2}u_L \leq 0 \leq u_L]$, the solution is a semi-shock, more precisely a shock-rarefaction wave. The intermediate state $u^*$ which connects the discontinuous part to the rarefaction part satisfies $\frac{1}{3}\left( {u_L}^2 + {u_L}{u^*} + ({u^*})^2 \right) = (u^*)^2$ according to the convex hull construction, i.e. $u^* = -\frac{1}{2}u_L$. Thus, $$ u(x,t) = \left\lbrace \begin{aligned} &u_L & &\text{if}\quad x \leq \left(-{\textstyle\frac{1}{2}u_L}\right)^2\, t \, ,\\ &\text{sgn}(u_R)\sqrt{x/t} & &\text{if}\quad \left(-{\textstyle\frac{1}{2}u_L}\right)^2\, t \leq x \leq {u_R}^2\, t \, ,\\ &u_R & &\text{if}\quad {u_R}^2\, t \leq x \, . \end{aligned} \right. $$ [1] R.J. LeVeque, Finite Volume Methods for Hyperbolic Problems. Cambridge University Press, 2002.
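As an illustrative sketch (not from the reference), Osher's formula can be evaluated by brute force on a grid of $u$ values; the helper name `osher` below is hypothetical, and the example reproduces the shock-rarefaction case $u_L=-1$, $u_R=2$, with the jump at $\xi = 1/4$:

```python
import numpy as np

# Osher's similarity solution for f(u) = u^3/3, minimized/maximized on a grid.
def osher(u_l, u_r, xi, num=2001):
    f = lambda u: u**3 / 3
    u = np.linspace(min(u_l, u_r), max(u_l, u_r), num)
    vals = f(u) - xi * u
    return u[np.argmin(vals)] if u_l <= u_r else u[np.argmax(vals)]

# Expect u_L = -1 left of the shock at xi = 1/4, then sqrt(xi) on [1/4, 4],
# then u_R = 2 beyond xi = 4.
for xi in (-0.5, 0.0, 0.5, 1.0, 2.0, 4.5):
    print(xi, round(osher(-1.0, 2.0, xi), 3))
```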
Classifying the stationary point of $h(x,y,z) = 2(x-1)^2 + 3(y-1)^3 + 4(z-1)^4$
We have $h(1,1,1)=0$ and $h(1,y,1) =3(y-1)^3$. Hence $$h(1,y,1) >0 =h(1,1,1)$$ for $y>1$ and $$h(1,y,1) <0 =h(1,1,1)$$ for $y<1$. Conclusion?
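A small Python sketch making the hint concrete: the sign of $h$ changes along the $y$-direction through $(1,1,1)$, so the stationary point is neither a local maximum nor a local minimum.

```python
def h(x, y, z):
    return 2*(x - 1)**2 + 3*(y - 1)**3 + 4*(z - 1)**4

eps = 1e-3
# h takes both signs arbitrarily close to (1,1,1): a saddle point
print(h(1, 1 + eps, 1), h(1, 1 - eps, 1))
```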
Ring with non-trivial idempotent splitting as product of two rings
The converse is easier: Suppose $R \approx A \times B$ as rings. Then $R$ is a direct product of two other rings, each with their own identity, $1_A$ and $1_B$. Both of these are non-trivial idempotents: $$1_A^2 = (1_A,0) \cdot (1_A,0) = (1_A^2,0) =(1_A,0)=1_A.$$ In particular, $1_A$ and $1_B$ are orthogonal idempotents, meaning that also $1_A 1_B = 1_B 1_A = 0$. See also this Tricki page. Added I think I see where the confusion lies. Okay, here's how to define $R \times S$. Given an idempotent $e \in T$ (in your notation), you get another idempotent $1-e$ automatically ($s := (1-e)^2=1-2e+e^2=1-2e+e=1-e$). Then let $R$ be the ideal $eT$ (a ring with identity $e$) and $S$ the ideal $sT$ (a ring with identity $s$). Then we define a map $f:T \to R \times S$ by $g \mapsto (eg, sg)$. This is surjective (easy), and it is injective: for suppose $eg=0$ and $(1-e)g=0$. Then $g = eg + (1-e)g = 0$. So $f$ is an isomorphism. Thus all of $R \times S$ is used; there are no "subsets".
Determine the asymptotic behavior of $f(n)$ in relation to $g(n)$
Take logarithms. For the first pair, $\log_2 f(n) = \sqrt{n} \log_2 n$ while $\log_2 g(n) = n$; since $\log_2 n$ grows more slowly than $\sqrt{n}$, we get $\log_2 f(n) = o(\log_2 g(n))$, so $f$ grows more slowly than $g$. For the second pair, $\log_2 f(n) = (\log_2 10)(\log_2\log_2 n)$ while $\log_2 g(n) = \log_2\log_2 n$. So $\log_2 f(n)$ is more than three times $\log_2 g(n)$ (since $\log_2 10 > 3$), and so $f(n)$ is more than the cube of $g(n)$.
definition of derivative
The idea is, if $f$ is differentiable at $a$, then "$f$ is approximately affine (linear plus a constant) close to $a$". The intuitive notion of "close to" generally translates as "in some open set". Note, incidentally, that the limit condition $$ \lim_{h \to 0} \frac{\|\epsilon(h)\|}{\|h\|} = 0 $$ implicitly assumes $\epsilon(h)$ is defined in some open neighborhood of $0$. If you assume less, i.e., that $$ f(a + h) = f(a) + Ah + \epsilon(h) \tag{1} $$ for all $h$ in some set $V$ having $0$ as a limit point, and such that $$ \lim_{n \to \infty} \dfrac{\|\epsilon(h_{n})\|}{\|h_{n}\|} = 0 $$ for every sequence $(h_{n})$ in $V$ that converges to $0$, you risk defining a condition that: (i) depends not only on $f$, but on the set $V$; (ii) fails to capture the desired intuition of being "approximately affine" near $a$. Think, for example, of $f(x, y) = |x|$ at $a = (0, 0)$. Condition (1) holds if $V$ is the $x$-axis, but $f$ isn't differentiable at $a$. "Worse" examples are easy to construct, e.g., functions that are discontinuous at $a$, but satisfy (1) if $V$ is an arbitrary algebraic curve through $a$.
Is every convergent sequence in a topological vector space bounded?
The set of points of a convergent sequence, together with the limit of the sequence (any limit if the space isn't Hausdorff), form a (quasi)compact set in any topological space. For if we have an open covering $\mathscr{U}$ of $S := \{ x_n : n \in \mathbb{N}\} \cup \{\lambda\}$, where $x_n \to \lambda$, then there is a $U_{\lambda} \in \mathscr{U}$ with $\lambda \in U_{\lambda}$. By definition of convergence, the set $E = \{n \in \mathbb{N} : x_n \notin U_{\lambda}\}$ is finite, and for every $e\in E$, we can choose a $U_e\in \mathscr{U}$ with $x_e \in U_e$. Thus the finite family $\{U_{\lambda}\} \cup \{ U_e : e \in E\}$ covers $S$. Since $\mathscr{U}$ was an arbitrary open covering of $S$, every open covering of $S$ has a finite subcover, i.e. $S$ is (quasi)compact. Since neighbourhoods of $0$ are absorbing, (quasi)compact sets in a topological vector space are bounded. Let $K \subset X$ be (quasi)compact. If $U$ is any neighbourhood of $0$, there is a balanced open neighbourhood $V$ of $0$ with $V \subset U$, and $$X = \bigcup_{n \in \mathbb{N}} n\cdot V$$ since $V$ is absorbing. Thus $\{ n\cdot V : n \in \mathbb{N}\}$ is an open covering of $K$. By quasicompactness, it has a finite subcover $\{ n\cdot V : n \in F\}$. Since $V$ is balanced, $$\bigcup_{n\in F} n \cdot V = (\max F)\cdot V,$$ and so $K \subset m\cdot V \subset m\cdot U$ for $m = \max F$. Since $U$ was arbitrary, this shows that $K$ is bounded. And any subset of a bounded set is bounded, naturally.
Find the sum, the sum of the squares, and the sum of the cubes for the first $250$ natural numbers.
The sum of an AP consisting of $n$ terms is given by $\dfrac{n(a_1+a_n)}{2}$. Here, the sum of the natural numbers up to $250$ is $\dfrac{250(251)}{2}= 31375$. Hence, your answer is correct.
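The question also asks for the sum of the squares and of the cubes; the standard closed forms are $\sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}{6}$ and $\sum_{k=1}^n k^3 = \left(\frac{n(n+1)}{2}\right)^2$. A short Python sketch checking all three values for $n=250$:

```python
n = 250
s1 = n * (n + 1) // 2                    # sum
s2 = n * (n + 1) * (2 * n + 1) // 6      # sum of squares
s3 = s1 ** 2                             # sum of cubes = square of the sum
print(s1, s2, s3)                        # 31375 5239625 984390625
print(s1 == sum(k for k in range(1, n + 1)),
      s2 == sum(k * k for k in range(1, n + 1)),
      s3 == sum(k ** 3 for k in range(1, n + 1)))
```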
The kernel of a map from the integers to a symmetric group
The answer to question 1 is yes, because for any $m\in\mathbb{Z}$, writing $e$ for the identity of $S_n$, $$\begin{align*} m\in\ker(\phi)&\iff \phi(m)=e & \text{ definition of kernel}\\ &\iff\phi(1)^m=e & \phi\text{ is a homomorphism}\\ &\iff |\phi(1)|\;\text{ divides }\;m & \qquad{\text{a basic property regarding}\atop\text{orders of elements in groups}}\\ &\iff m\in|\phi(1)|\mathbb{Z} & \text{definition of }|\phi(1)|\mathbb{Z} \end{align*}$$ Note that nothing about this depended on the codomain of $\phi$ being the group $S_n$. The answer to question 2 is also yes, but really ... what do you think the definition of $|g|$ is?
Question from an IMC (International Mathematics Competition) key stage III selection exam
If $n$ has digits $d_j$, $j = 0 \ldots m$, then $2n$ has digits $e_j$ where $e_j + 10 c_j = 2 d_j + c_{j-1}$, $c_{j} = 0$ or $1$ being the "carry" from position $j$. Adding these up for all $j$ tells us $\sum_j e_j + 10 \sum_j c_j = 2 \sum_j d_j + \sum_j c_j$, i.e. $9 \sum_j c_j = 2 \sum_j d_j - \sum_j e_j$. In your case, that is $2 \cdot 104 - 100 = 108$. Thus there are $108/9 = 12$ carries. A carry occurs whenever $d_j = 5, 6, 7, 8$, or $9$. But you know the number of $6, 7, 8$ and $9$...
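A quick Python sketch of this carry-counting identity on an arbitrary sample number (the value $58679$ is just an illustration; when doubling, a carry occurs out of position $j$ exactly when $d_j \ge 5$):

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

n = 58679                                           # arbitrary example
carries = sum(1 for d in str(n) if int(d) >= 5)     # carries when doubling
print(2 * digit_sum(n) - digit_sum(2 * n), 9 * carries)   # both 45 here
```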
How to prove that $\frac{1}{\sin^2\frac{\pi}{2n}}+\frac{1}{\sin^2\frac{2\pi}{2n}}+\cdots+\frac{1}{\sin^2\frac{(n-1)\pi}{2n}} =\frac{2}{3}(n-1)(n+1)$
To develop further on Jean-Claude Arbaut's comment, you'll find here the proof that: $$\sum_{k=1}^{n-1}{\frac{1}{\sin^2\left(\frac{k\pi}{n}\right)}}=\dfrac{(n-1)(n+1)}{3}$$ All you have to do now, is show that: $$\sum_{k=1}^{n-1}{\frac{1}{\sin^2\left(\frac{k\pi}{\color{red}{2n}}\right)}}=\color{red}{2}\sum_{k=1}^{n-1}{\frac{1}{\sin^2\left(\frac{k\pi}{\color{red}{n}}\right)}}$$
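A numeric spot-check of the target identity, sketched in Python for $n=7$:

```python
import numpy as np

n = 7
k = np.arange(1, n)
lhs = np.sum(1 / np.sin(k * np.pi / (2 * n))**2)
print(lhs, 2 * (n - 1) * (n + 1) / 3)   # both 32.0 (up to rounding)
```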
How to use advanced probability theory to explain "independence" of two random variables?
Here is an answer from a general measure-theoretic point of view. If you have real-valued random variables $X,Y$ on $\Omega$, you can use them to obtain two (probability) measures on ${\bf R}$, namely $X[\mathcal P]$ and $Y[\mathcal P]$. At the same time, $(X,Y)$ is a ${\bf R}^2$-valued random variable on $\Omega$, which gives us a (probability) measure $(X,Y)[\mathcal P]$ on ${\bf R}^2$. We say that $X,Y$ are independent if $(X,Y)[\mathcal P]=X[\mathcal P]\otimes Y[\mathcal P]$, or in other words, $(X,Y)[\mathcal P]$ is the product measure of $X[\mathcal P]$ and $Y[\mathcal P]$. By definition, it means that for any intervals $I,J\subseteq {\bf R}$ you have $$ \mathcal P((X,Y)^{-1}[I\times J])=\mathcal P(X^{-1}[I])\cdot \mathcal P(Y^{-1}[J]) $$ Since $(X,Y)^{-1}[I\times J]=X^{-1}[I]\cap Y^{-1}[J]$, this is equivalent to saying that for any intervals $I,J$ you have $$ \mathcal P(X\in I\land Y\in J)= \mathcal P(X\in I)\cdot \mathcal P(Y\in J). $$ I.e. the events $X\in I$ and $Y\in J$ are independent. It is easy to see that you can take half-lines instead of intervals here, so in fact this is equivalent to saying that for any real numbers $a,b$ the events $X\leq a$ and $Y\leq b$ are independent, which is exactly the statement you have given. The first definition generalises immediately to random variables taking value in an arbitrary measure space (except then you may have no notion of a cumulative distribution function, or you may have to consider more complicated sets than just intervals), and in fact measurable functions on any measure space. It also works more or less the same for more than two variables (even infinitely many).
Finding Radius of convergence of $\sum_{n=2}^{\infty} \frac{n^{2n}}{4^n(2n+1)!} (3-2x)^n$
Factor out a $-2$ from $(3-2x) $. This gives $$\textstyle(3-2x)=\bigl( (-2)\cdot {-3\over2}+ (-2) \cdot x\bigr)=(-2)(x-{3\over2});$$ so then $(3-2x)^n= (-2)^n(x-{3\over2} )^n$. Recall that a power series has the form $\sum\limits_{n=1}^\infty c_n (x-a)^n$; so the $c_n$ for your series is as you've written (that your series starts at $n=2$ doesn't matter). As for your second question, you should know that $\lim\limits_{n\rightarrow\infty}\bigl(1+{1\over n}\bigr)^n=e$. This is a fundamental limit.
If $\mathbb Q(\alpha )/\mathbb Q$ and $\mathbb Q(\beta )/\mathbb Q$ have the same degree, then $\mathbb Q(\alpha )\cong \mathbb Q(\beta )$.
Your statement is not true. Just take $\alpha =\sqrt 2$ and $\beta =\sqrt 3$. First of all, $$\varphi :\sqrt 2\mapsto \sqrt 3$$ is not even a ring homomorphism. Indeed, if it were, then $$2=2\varphi (1)=\varphi (\sqrt 2^2)=\varphi (\sqrt 2)^2=\sqrt 3^2=3,$$ which is of course not correct. Moreover, suppose there is a ring homomorphism $\psi:\mathbb Q(\sqrt 2)\longrightarrow \mathbb Q(\sqrt 3)$; then it would be uniquely determined by $\psi(\sqrt 2)$. So if there are $a,b\in\mathbb Q$ s.t. $$\psi(\sqrt 2)=a+b\sqrt 3,$$ then $$2=\psi(\sqrt 2)^2=a^2+3b^2+2ab\sqrt 3,$$ and thus $ab=0$ and $a^2+3b^2=2$. The only solutions are $(0,\pm\sqrt{2/3})$ and $(\pm\sqrt 2,0)$, which are not in $\mathbb Q^2$. Therefore, there is no such homomorphism. Remark: However, the statement is true for finite fields, since these are unique up to isomorphism. I.e. if $K$ is a finite field, and $K(\alpha )/K$ and $K(\beta )/K$ have the same degree, then $K(\alpha )\cong K(\beta )$.
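A small sympy sketch confirming that the system $ab=0$, $a^2+3b^2=2$ has no rational solutions:

```python
import sympy as sp

a, b = sp.symbols('a b')
sols = sp.solve([sp.Eq(2*a*b, 0), sp.Eq(a**2 + 3*b**2, 2)], [a, b])
print(sols)   # (0, ±sqrt(6)/3) and (±sqrt(2), 0): none lie in Q^2
```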
$xy\in (x^2,y^2)$ if $R$ is a Dedekind domain
Hint $\ $ It's a one-line proof: cancel $\rm\:(x,y)\:$ from $\rm\:(x,y)\,(x^2,y^2)\, =\, (x,y)^3$ The generalization to higher powers and multiple generators follows similarly; for example, see my post on the Freshman's Dream $\rm\ (A+B)^n = A^n\! + B^n\:$ for GCDs and invertible ideals. Remark $\ $ Conversely, an integrally closed domain satisfying $\rm\:xy \in (x^2,y^2)\:$ is Prüfer, i.e. finitely generated nonzero ideals are invertible. This holds true more generally if $\rm\ x^{n-1}y \in (x^n,y^n)\:$ for some $\rm\:n>1,\:$ or if $\rm\:(x,y)^n = (x^n,y^n)\:$ for some $\rm\:n>1.\:$ Prüfer domains are non-Noetherian generalizations of Dedekind domains. Their ubiquity stems from a remarkable confluence of interesting characterizations. For example, they are those domains satisfying the Chinese Remainder Theorem for ideals, or Gauss's Lemma for polynomial content ideals, or, for ideals: $\rm\ A\cap (B + C) = A\cap B + A\cap C,\ $ or $\rm\ (A + B)\ (A \cap B) = A\ B,\ $ or $\rm\ A\supset B\ \Rightarrow\ A\:|\:B\ $ for fin. gen. $\rm\:A,\:$ etc. It has been remarked that there are probably around $100$ such characterizations known. See this answer for around $30$ characterizations.
What does it mean to be compact in the $w^{*}$-topology?
A set $\rm K$ is compact in the $\ast$-weak topology if and only if, given any covering of $\rm K$ by $\ast$-weak open sets $(\rm U_i)_{i \in \rm I}$, there exists a finite subset $\rm J \subset \rm I$ such that $(\rm U_j)_{j \in \rm J}$ is a covering of $\rm K$.
Work and efficiency puzzle
I take it that fractional days will be taken into account in assessing the difference. Your answer is the correct one. ADDED: If "certain amount of work" and "same amount of work" mean the full work, all that condition $1$ means is that working together, $A$ and $B$ complete the full work in $n$ days, and if instead they mean some fraction (may be improper fraction) of the full work, they complete that fraction of the work in $n$ days. Either way, it has no bearing on the question asked.
Transformation of coordinates given by a matrix
Well, does $$\begin{pmatrix}3x+2y-z\\ -x-y+z\\ x-z\\ y+z\end{pmatrix} = x \begin{pmatrix}3\\ -1 \\1 \\0 \end{pmatrix}+y\begin{pmatrix}2\\ -1 \\ 0\\ 1 \end{pmatrix}+z\begin{pmatrix}-1\\ 1\\ -1\\ 1\end{pmatrix}= \begin{pmatrix}3&2& -1\\ -1&-1&1\\ 1&0&-1\\ 0&1&1\end{pmatrix}\begin{pmatrix}x\\ y\\ z\end{pmatrix}$$ help?
What is $\sin(nx)$ iteration in terms of $\sin A$ and $\cos A$?
Is this OK? Trigonometry Sine Multiple Angle Formulae from Ken Ward's Mathematics Pages.
show that for any irrational $\alpha$ the limit $\lim_{n \to\infty} \sin n \alpha \pi$ does not exist
Hint: Let $\alpha$ be irrational. Prove that $$ \mathbb N \cdot \alpha \mod 2 $$ is dense in $[0,2]$. Hence $$ (\mathbb N \cdot \alpha \cdot \pi) \mod 2 \pi $$ is dense in $[0,2\pi]$. By the continuity of $\sin$ the result follows.
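A numeric illustration of the hint (with $\alpha=\sqrt2$ as a sample irrational), sketched in Python:

```python
import numpy as np

n = np.arange(1, 5000)
vals = np.sin(n * np.pi * np.sqrt(2))
print(vals.min(), vals.max())   # close to -1 and 1: the sequence keeps oscillating
```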
Is it true that $f$ is continuous?
As stated above in the comments, the indicator function of the set $\{0\}$, $f=\chi_{\{0\}}$, has $\lim_{h\to 0}(f(x+h)-f(x-h))=\lim_{h\to 0}0-0=0$ at every $x\in \mathbb{R}$ but $f$ is obviously discontinuous at $x=0$.
Finding a set of matrices based on eigenvalues and eigenvectors with constraints
HINT For each permutation of possible eigenpairs $(\lambda_i,v_i),$ solve the three systems of equations corresponding to $$Av_i=\lambda_i v_i$$ with $A=\begin{bmatrix} x_{11} & 2 & x_{13} \\ x_{21} & x_{22} & 1 \\ x_{31} & 3 & x_{33} \end{bmatrix}.$ At first glance I'd guess there are $6$ solutions, which is the number of permutations of the $3$ possible eigenpairs $(\lambda_i,v_i).$ Anyway, some systems can have an infinite number of solutions, or none.
In how many ways can $4$ different letters be posted in $6$ letter boxes?
If we can have more than one letter in a given box, then we just count the number of ways to choose one of six boxes, four times in a row: $$6\times 6\times 6\times 6 = 6^4=1296$$ If each box can hold at most one letter, then we begin, as you said, by counting the number of ways to choose $4$ of $6$ boxes: $\binom64=\frac{6!}{4!2!}=15$, we can multiply this by the number of orders in which $4$ letters can go into the four chosen boxes: $4!=24$. Thus: $$\binom64\times 4!=15\times 24=360$$ Both of these solutions assume that the four letters are different. If the letters are indistinguishable, then we have a different counting problem. Suppose the letters are all the same, and a mailbox can hold more than one. Then, we consider the number of ways to arrange $5$ "bars" and $4$ "stars", where the bars represent separations between adjacent boxes, and stars represent letters. For example: $$\,\,**\,\,|\,\,*\,\,|\,\,\,\,|\,\,\,\,|\,\,*\,\,|\,\,\,\,$$ would represent two letters in the first box, one in the second, and one in the fifth. This is a total of $9$ symbols, and they can be arranged as many ways as there are to decide which $4$ of the nine should be "stars": $$\binom94=\frac{9!}{4!5!}=126$$ Finally, if the boxes can hold at most one letter each, but the letters are indistinguishable, then the answer is simply: $$\binom64=15$$
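All four counts can be confirmed by brute force or by the closed forms, e.g. in this Python sketch:

```python
from itertools import permutations, product
from math import comb

print(len(list(product(range(6), repeat=4))))   # 1296 = 6^4
print(len(list(permutations(range(6), 4))))     # 360 = C(6,4) * 4!
print(comb(6 + 4 - 1, 4))                       # 126 by stars and bars
print(comb(6, 4))                               # 15
```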
Lebesgue measure on $\mathbb{R}$
The idea to use the outer regularity (i.e. the outer approximation with open sets) of the Lebesgue measure is correct. We are going to show that, indeed, the claim is true with $U$ an open interval. Clearly it is not restrictive to assume $l(A)$ is finite, since otherwise it is enough to consider $A \cap [k, k+1)$ for some $k\in\mathbb{Z}$. Assume by contradiction that $l(A\cap I) \leq \epsilon l(I)$ for every open interval $I$. Given any $\delta > 0$, by the outer approximation property we can find a sequence of pairwise disjoint open intervals $(I_k)$ such that $$ A \subset \bigcup_k I_k, \qquad \sum_k l(I_k) \leq l(A) + \delta. $$ Hence $$ l(A) = l(A \cap \bigcup_k I_k) \leq \sum_k l(A\cap I_k) \leq \sum_k \epsilon l(I_k) \leq \epsilon(l(A) + \delta). $$ If $\delta$ is chosen at the beginning such that $$ \epsilon\delta < (1-\epsilon) l(A) \quad \text{i.e.} \quad \delta < \frac{1-\epsilon}{\epsilon}\, l(A), $$ we get $$ l(A) \leq \epsilon(l(A) + \delta) < \epsilon l(A) + (1-\epsilon) l(A) = l(A), $$ a contradiction.
Derivative of solution of differential equation with respect to parameter
OP is considering the 2nd order ODE $$\ddot{x} ~=~ \dot{x}^2 + x^3, \qquad x(0)~=~0,\qquad \dot{x}(0)~=~A.\tag{A}$$ Notice that when $A=0$, we get the constant solution $$x_0~\equiv~ 0.\tag{B}$$ OP's method. Expand the ODE (A) in a power series $$x~=~x_0 +Ax_1 + \frac{A^2}{2}x_2+ \ldots , \qquad x_n(0)~=~0,\qquad \dot{x}_n(0)~=~\delta_n^1,\tag{C}$$ in $A$. To first order in $A$, we get $$\ddot{x}_1~=~0\qquad x_1(0)~=~0,\qquad \dot{x}_1(0)~=~1,\tag{D} $$ cf. OP's eq. (3b). The solution is $$ x_1(t)~=~t.\tag{E} $$ Alternative method. A first integral to ODE (A) is$^1$ $$ \dot{x}^2~=~\left(A^2+\frac{3}{4}\right)e^{2x}-2V(x), \qquad x(0)~=~0,\tag{F}$$ where $$ 2V(x)~:=~\frac{4x^3+6x^2+6x+3}{4}.\tag{G} $$ Next expand the first integral (F) in a power series (C) To second order in $A$, we get $$ \dot{x}_1^2~=~1, \qquad x_1(0)~=~0, \tag{H}$$ with solution $$ x_1(t)~=~\pm t.\tag{I} $$ The minus sign branch in eq. (I) is discarded by comparing with initial conditions. -- $^1$ In fact, eq. (A) is the Euler-Lagrange eq. for the Lagrangian $$ L(x,\dot{x})~=~e^{-2x}\left(\frac{1}{2}\dot{x}^2-V(x)\right). \tag{J}$$ Since there is no explicit time dependence, the corresponding energy function $$h(x,\dot{x})~:=~\dot{x}\frac{\partial L}{\partial \dot{x}}-L ~=~e^{-2x}\left(\frac{1}{2}\dot{x}^2+V(x)\right)\tag{K}$$ is conserved, cf. Noether's theorem.
Smoothing effect of Laplace, heat, and wave equations
Consider the one-dimensional wave equation $u_{tt}=u_{xx}$. The general solution is $u(x,t)=f(x+t)+g(x-t)$ where $f$ and $g$ are two arbitrary $C^2$ functions. Thus, if $f$ and $g$ are $C^2$ but not $C^3$, the corresponding solution is $C^2$ but not $C^3$. In general, hyperbolic equations do not have smoothing effects. To the contrary, singularities in the initial or boundary data are transmitted through the characteristics.
Why does a well-defined function sometimes mean a group homomorphism?
It seems that the question implicitly requires the map to be an abelian group homomorphism, i.e. $\phi_s(x+y)=\phi_s(x)+\phi_s(y)$ for all $x,y$ in the domain. In particular, if we write $n\cdot x$ for $x+\cdots+x$ ($n$ times) (which is essentially viewing abelian groups as $\def\Z{\mathbb Z}\Z$-modules), then $\phi_s(n\cdot x)=n\cdot\phi_s(x)$. In this case it is the same as $\bar n\phi_s(x)$. For the map to be well-defined, the problem is $\phi_s(\bar0)$. In $\Z/p^a\Z$, $\bar0=\bar{p^a}=p^a\cdot\bar1$. Hence $\bar0=\phi_s(\bar0)=\phi_s(p^a\cdot\bar1)=p^a\cdot\phi_s(\bar 1)=\bar p^a\phi_s(\bar 1)$, which is what you want to check.
Does uniform convergence imply continuity?
Yes: if $f_n\to f$ uniformly and $f_n$ is continuous $\forall n$, then $f$ is continuous. The idea of the proof is the usual $\varepsilon/3$ argument: given $\varepsilon>0$, pick $n$ with $\sup_t|f_n(t)-f(t)|<\varepsilon/3$; then $$|f(s)-f(t)|\le|f(s)-f_n(s)|+|f_n(s)-f_n(t)|+|f_n(t)-f(t)|<\varepsilon$$ for $s$ close enough to $t$, by continuity of $f_n$. In your example, $f$ is continuous and bounded, $x_n(t)=\sum_{k=0}^{n}2^{-k-1}f(3^{2k}t)$ is also continuous, and $x_n\to x$ uniformly by the M-test, so $x$ is continuous.
How do you prove that the canonical map $C$ from a vector space $X$ to its second algebraic dual, $X^{**}$, is well defined?
From what you've written, I think you have the right idea of what the canonical map does. I think a change of notation might make it easier to see that the canonical map is well-defined. Afterward, I'll give a general (but maybe too abstract) argument that there's no need to check that the canonical map is well-defined. For this kind of reasoning, I like to use the somewhat nonstandard notation $[X \to Y]$ for the vector space of linear maps from $X$ to $Y$. In this notation, for vector spaces over the field $k$, we have $X^{**} = [X^* \to k]$. We can then describe the canonical map $C \colon X \to X^{**}$ as the linear map $$\begin{align*} C \colon X & \to [X^* \to k] \\ x & \mapsto [f \mapsto fx]. \end{align*}$$ Let's unpack that. If you fix a vector $x \in X$, you can turn any $f \in X^*$ into a number by evaluating at $x$. More formally, for any fixed vector $x \in X$, we can define the map $$\begin{align*} X^* & \to k \\ f & \mapsto fx. \end{align*}$$ It's straightforward to show that this map is linear, and thus an element of $[X^* \to k]$. Now, given a vector $x \in X$, we can name the map above $Cx$, defining a map $C \colon X \to [X^* \to k]$. If you keep your wits about you, it's also straightforward to show that $C$ is linear, and you're done. More generally, as far as I know, there are only two situations in which you need to check that a function is "well-defined." (1) Applying the function requires you to make a choice. In this situation, you need to prove that the result you get doesn't depend on what choice you made. (2) Applying the function involves an operation that could conceivably fail. In this situation, you need to prove that the operation will not actually fail. As an example of the first situation, let's call the unit circle $U$. An angle—a point on the unit circle—can be represented in radians by a real number. There are many different real numbers that represent the same angle. For example, $\pi/2$, $5\pi/2$, and $-3\pi/2$ all represent the same angle. There's an "angle addition" function $a: U \times U \to U$, which is applied as follows. To add two angles $\Theta$ and $\Phi$, you pick a number $\theta$ representing $\Theta$ and a number $\phi$ representing $\Phi$. Then, $a(\Theta, \Phi)$ is the angle represented by the number $\theta + \phi$. If you choose a different number $\theta'$ representing $\Theta$, and a different number $\phi'$ representing $\Phi$, the number $\theta' + \phi'$ can be different from $\theta + \phi$, so you have to prove that it represents the same angle. As an example of the second situation, there's an "angle inversion" function $n \colon U \to U$, which is applied as follows. To invert an angle $\Theta$, you find another angle $\bar{\Theta} \in U$ with the property that $a(\Theta, \bar{\Theta})$ is the angle represented by the number $0$. Then $n(\Theta) = \bar{\Theta}$. The operation of finding an angle $\bar{\Theta}$ with the required property could conceivably fail, because it's not obvious that an angle with the required property exists. You must therefore prove that such an angle always exists. (There could conceivably be many such angles, which would put you back in the first situation. In this case, the resolution is to show that there's really only one such angle.) When you define the canonical map $X \to X^{**}$, you're not in either situation, so there's no need to check that the canonical map is well-defined.
Solving algebra with multiple square roots
Of course $c=0$ is a solution. But there are no other real solutions. After noting that $N=0$ is impossible, we divide both sides by $\sqrt{N}$ and let $c^2/N^2 = t$ to get $$ 2 \sqrt{1+\sqrt{1 + 4 t}} = \sqrt{1+\sqrt{1+3t}} + \sqrt{1+\sqrt{1+5t}} $$ We can write this as $$ 2 g(4 t) = g(3 t) + g(5 t) $$ where $$ g(x) = \sqrt{1+\sqrt{1+x}}$$ Now this function is strictly concave for $x > 0$, as we see by taking its second derivative. Thus since $4 = (3+5)/2$, $g(4t) > (g(3t) + g(5t))/2$ for $t > 0$.
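A numeric spot-check of the concavity inequality $2g(4t) > g(3t)+g(5t)$ for $t>0$, sketched in Python:

```python
import numpy as np

g = lambda x: np.sqrt(1 + np.sqrt(1 + x))
t = np.linspace(1e-3, 50, 10_000)
print(np.all(2 * g(4 * t) > g(3 * t) + g(5 * t)))   # True: strict concavity for t > 0
```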
Calculated Expected Number and Variance
Hint: for one toss, the distribution of the variable 'number of 2s' is Bernoulli with parameter $p=\frac 16$. Hence when the experiment is repeated, the distribution becomes Binomial.
Calculating matrix $\ [T+I]^{10}$
Hint: Calculate $T^3$. (I know it's a really short answer, but I don't want to spoil everything, it should become clear after that)
Logic, writing proof
For ii), a better starting approach is to assume that $y=0$ and $x \neq 0$. You are trying to find a contradiction. Then for the left hand side you would have: $$ x^2 y = x^2 (0) = 0$$ On the right hand side: $$ 2x + y = 2x + 0 = 2x \neq 0 \text{$\qquad$ by assumption that $x \neq 0$}$$
Is there a parametrization for $a^2+2b^2=c^2$ (analogous to that for Pythagorean triples)?
The following could possibly serve as a parametrization for odd $n$ and even $m$: $$ a = \frac{|m^2-2n^2|}{2} ; \quad b = mn; \quad c = \frac{m^2+2n^2}{2} $$ For example, you could take $m=4;n=3$ to get $1^2 + 2\cdot 12^2 = 17^2$.
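A brute-force Python sketch checking the identity $a^2+2b^2=c^2$ over a sample range of even $m$ and odd $n$:

```python
for m in range(2, 40, 2):          # m even
    for n in range(1, 40, 2):      # n odd
        a = abs(m * m - 2 * n * n) // 2
        b = m * n
        c = (m * m + 2 * n * n) // 2
        assert a * a + 2 * b * b == c * c, (m, n)
print("identity holds on all tested (m, n)")
```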
Notation for change-of-coordinate matrix
Intuitively, $P_{\mathscr{C}\leftarrow\mathscr{B}}$ acts on vectors from the left, taking a vector $[v]_{\mathscr{B}}$ and producing a vector $P_{\mathscr{C}\leftarrow\mathscr{B}} [v]_{\mathscr{B}} = [v]_{\mathscr{C}}$, so you can see how $\mathscr{B}$ is being sucked into $P$ to produce $\mathscr{C}$.
Numerical Differentiation of $f(x) = \sin(x)$ with noise
Unless explicitly stated otherwise, Gaussian noise is additive, since it usually depends on the sensors and other sources and not on the signal of interest. Numerical differentiation has to be taken care of differently when there is noise in the signal (you can filter it, apply estimation techniques, regularize the problem), but that doesn't mean it is useless.
From open cover to ball cover - role of AC
The answer is unfortunately negative, at least if you want to keep the cover countable. In Cohen's first model there is an infinite Dedekind-finite set of reals which is dense in $\Bbb R$. This is a set which is infinite, but has no countably infinite subset. And as a set of reals, it is of course a metric space, and by being dense, it is also not bounded. Now, cover $A$ with rational intervals (or their intersection with $A$) with diameter $\varepsilon$, this is a countable cover. But if you want these to be open balls of diameter $\varepsilon$, you need to choose centers for these balls, which would necessitate a countably infinite subset of $A$.
Is it possible to show that the fifth roots of 1 add up to 0 simply by using trigonometric identities?
If you want a simple trigonometric proof, recall $$\sin a-\sin b=2\sin\frac{a-b}2\cos\frac{a+b}2.$$ Therefore \begin{align} 0&=\sin(a+2\pi)-\sin a\\ &=(\sin(a+2\pi)-\sin(a+8\pi/5))+(\sin(a+8\pi/5)-\sin(a+6\pi/5))\\ &+(\sin(a+6\pi/5)-\sin(a+4\pi/5))+(\sin(a+4\pi/5)-\sin(a+2\pi/5))\\ &+(\sin(a+2\pi/5)-\sin a)\\ &=2(\sin\pi/5)\left[\cos(a+9\pi/5)+\cos(a+7\pi/5)+\cos(a+\pi)+\cos(a+3\pi/5)+\cos(a+\pi/5)\right] \end{align} and $$\cos(a+9\pi/5)+\cos(a+7\pi/5)+\cos(a+\pi)+\cos(a+3\pi/5)+\cos(a+\pi/5)=0.$$ Taking $a=-\pi/5$ gives $$\cos(8\pi/5)+\cos(6\pi/5)+\cos(4\pi/5)+\cos(2\pi/5)+1=0.$$ and taking $a=-7\pi/10$ gives $$\sin(8\pi/5)+\sin(6\pi/5)+\sin(4\pi/5)+\sin(2\pi/5)+0=0.$$ These are the real and imaginary parts of the sum of the fifth roots of unity. Of course, this method works for other values of $5$.
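As a numeric cross-check, the fifth roots of unity do sum to zero; a short Python sketch:

```python
import cmath

roots = [cmath.exp(2j * cmath.pi * k / 5) for k in range(5)]
print(sum(roots))                                        # approximately 0
print(sum(z.real for z in roots), sum(z.imag for z in roots))
```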
$Im(\phi)$ is a closed subset of $\mathbb{A}^2$
This is obvious if both $\alpha(t)$ and $\beta(t)$ are constants, since the image is then a point. If at least one of those polynomials, say $\alpha(t)$, is not a constant, the dual morphism of rings $f:K[x,y]\to K[t]$ sending $x$ to $\alpha(t)$ and $y$ to $\beta(t)$ makes $K[t]$ a finite algebra over $K[x,y]$ (since $K[t]$ is already a finite $K[x]$-algebra) and thus a fortiori an integral algebra. The conclusion follows by remembering that a morphism of varieties is closed as soon as its dual morphism is integral (Atiyah-Macdonald, Chapter 5, Exercise 1).
show this inequality with $xy+yz+zx=3$
By Muirhead $$\sum_{cyc}\frac{x}{x^3+y^2+1}\leq\sum_{cyc}\frac{x}{x^2+y^2+x}.$$ Thus, it's enough to prove that $$\sum_{cyc}\frac{x}{x^2+y^2+x}\leq1$$ or $$\sum_{cyc}\left(x^4y^2+x^4z^2+\frac{2}{3}x^2y^2z^2-x^3y-x^2yz-\frac{2}{3}xyz\right)\geq0$$ or $$\sum_{cyc}(3x^4y^2+3x^4z^2+2x^2y^2z^2-x^4y^2-x^4yz-x^3y^2z-x^3y^2z-x^3z^2y-x^2y^2z^2-2xyz)\geq0$$ or $$\sum_{cyc}(2x^4y^2+3x^4z^2-x^4yz-2x^3y^2z-x^3z^2y+x^2y^2z^2-2xyz)\geq0$$ or $$\sum_{cyc}(x^4z^2-x^3y^2z)+2\sum_{cyc}(x^4y^2+x^4z^2-2x^4yz)+$$ $$+xyz\sum_{cyc}(x^3-x^2y-x^2z+xyz)+2xyz\sum_{cyc}(x^3-1)\geq0,$$ which is true by AM-GM and Schur.
Change subject of this equation
rearrange in a few steps: $$2ab+4ac=k$$ $$2ab=k-4ac$$ $$b=\frac{k-4ac}{2a}$$
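The same rearrangement can be confirmed symbolically, e.g. with this sympy sketch:

```python
import sympy as sp

a, b, c, k = sp.symbols('a b c k')
print(sp.solve(sp.Eq(2*a*b + 4*a*c, k), b))   # the single solution (k - 4*a*c)/(2*a)
```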
Prove using the axioms that the square of any number is nonnegative
For $x=0$ we have $x^2=0$, so assume $x\neq 0$. According to trichotomy $O_1$, you then have either $x >0$ or $x<0$. If $x >0$, then according to $O_4$ you get $x^2>0$ as desired. And if $x <0$ then using $O_3$ you get $x + (-x) < -x$, i.e. $0 < -x$. Using what is above, $ 0 < (-x) \cdot (-x)$. With $F_9$, $$0 = (-x)0 = (-x)(-x + x)= (-x)(-x) + (-x)x = (-x)(-x) + (-1)x^2 = (-x)(-x) + -x^2$$ Adding $x^2$ on both sides of the equality, you get $x^2 = (-x)(-x)$ by use of $F_4$. Plugging that into the inequality $ 0 < (-x) \cdot (-x)$, you conclude again that $0 < x^2$.
Find $x,y,z\in\mathbb N$ where $x,y,z$ are lengths of a triangle and $x\le y\le z$, $x^2+y^2+z^2+126\le13(x+y+z)$, and $x^3+y^2+z=272$.
The first condition gives $$(2x-13)^2+(2y-13)^2+(2z-13)^2\leq3,$$ which gives $$\{2x-13,2y-13,2z-13\}\subset\{1,-1\}$$ and the rest is smooth.
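For readers who want to see "the rest", a brute-force Python sketch recovers the unique triple $(6,7,7)$ (the search bound $15$ is ample, since the first condition forces $x,y,z\in\{6,7\}$):

```python
sols = [(x, y, z)
        for x in range(1, 15) for y in range(x, 15) for z in range(y, 15)
        if x + y > z                                    # triangle inequality
        and x*x + y*y + z*z + 126 <= 13*(x + y + z)
        and x**3 + y**2 + z == 272]
print(sols)   # [(6, 7, 7)]
```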
Determine the kernel of a surjective morphism of differential modules
This is one of thousands of concrete statements which are best (or in fact only) explained in the functorial approach to mathematics: Instead of looking at an object $X$ "internally", look at its represented functor $\hom(X,-)$, which is usually described by means of a universal property of $X$. Put differently: Forget about elements, think about morphisms! All this is justified by the Yoneda Lemma. The module of differentials $\Omega^1_{S/R} \in \mathsf{Mod}(S)$ is defined by $$\hom_S(\Omega^1_{S/R},M) = \mathrm{Der}_R(S,M)$$ for $S$-modules $M$. Now let $\phi : S \to S'$ be a surjective homomorphism "over" $\psi : R \to R'$ (i.e. the square formed by $\psi$, $\phi$ and the structure maps $R \to S$, $R' \to S'$ commutes). Then we have for $S'$-modules $M$: $$\hom_{S'}(\Omega^1_{S'/R'},M) = \mathrm{Der}_{R'}(S',M)$$ I am a bit lazy and don't write down the functors which restrict scalars, but they will be all over the place. Let $I := \ker(\phi)$. Then we have an isomorphism of $R$-modules $S' \cong S/I$. Hence, $$\mathrm{Der}_{R'}(S',M) \subseteq \hom_{R'}(S',M) \subseteq \hom_{R}(S',M) \cong \{\delta \in \hom_R(S,M) : \delta|_I = 0\}.$$ The image consists of all $R$-linear maps $\delta : S \to M$ vanishing on $I$ such that the $R$-linear map $\delta' : S' \to M$ defined by $\delta' \circ \phi = \delta$ satisfies the Leibniz rule and vanishes on $R'$. This means that $\delta$ satisfies the Leibniz rule and that $\delta$ vanishes on $T := \{s \in S : \phi(s) \in R' \cdot 1\}$. This contains $I$. Thus, if $d : S \to \Omega^1_{S/R}$ is the universal differential, $$\mathrm{Der}_{R'}(S',M) \cong \{\delta \in \mathrm{Der}_R(S,M) : \delta|_T = 0\}$$ $$\cong \{f \in \hom_S(\Omega^1_{S/R},M) : f(d(T))=0\} \cong \hom_S(\Omega^1_{S/R}/\langle d(T) \rangle ,M)$$ Since $\Omega^1_{S/R}/\langle d(T) \rangle$ is killed by $I$, it carries the structure of an $S'$-module. It follows $$\hom_{S'}(\Omega^1_{S'/R'},-) \cong \hom_{S'}(\Omega^1_{S/R}/\langle d(T) \rangle,-)$$ and hence $\Omega^1_{S'/R'} \cong \Omega^1_{S/R}/\langle d(T) \rangle$ by Yoneda. $\square$ Try to prove this "directly" using elements in one of the explicit constructions of $\Omega^1$. Does it work at all? If yes, how many pages? In order to illustrate the technique, let me also show a quick proof of the next lemma: If $A \to B \to C$ are homomorphisms of commutative rings, then there is an exact sequence $$C \otimes_B \Omega^1_{B/A} \to \Omega^1_{C/A} \to \Omega^1_{C/B} \to 0.$$ Proof: By definition of a cokernel and the Yoneda Lemma, the claim is equivalent to an exact sequence $$0 \to \hom_C(\Omega^1_{C/B},M) \to \hom_C(\Omega^1_{C/A},M) \to \hom_C(C \otimes_B \Omega^1_{B/A},M)$$ naturally in $M \in \mathsf{Mod}(C)$. By definition of $\Omega^1$ and the adjunction between scalar extension and restriction, this simplifies to $$0 \to \mathrm{Der}_B(C,M) \to \mathrm{Der}_A(C,M) \to \mathrm{Der}_A(B,M).$$ The map on the left is the obvious inclusion, the map on the right is given by pullback with $B \to C$. Exactness is clear. $\square$
Does there exist a sequence $(a_n)$ such that, for all $n$, $a_0 +a_1 X +\cdots+a_nX^n$ has exactly $n$ distinct real roots?
We can give inductive method of constructing such a sequence. We can begin with the base case $a_0 = 1, a_1 = -1$. Suppose we have the first $n$ coefficients, so that $p_n(x)$ has exactly $n$ distinct roots, $r_1 \lt r_2 \lt \, ... \lt r_n$. Now we select $(n + 1)$ points, \begin{align} s_0 &\lt r_1\\ s_1 &\in (r_1, r_2)\\ s_2 &\in (r_2, r_3)\\ &...\\ s_{n-1} &\in (r_{n-1}, r_n)\\ s_n &\gt r_n \end{align} Notice that, since $(r_i, r_{i+1})$ contains no roots, $p_n(x)$ has the same sign on the whole interval and that adjacent intervals have different signs, as all of the roots have multiplicity $1$. $^{\dagger}$ Hence, we have that $p_n(s_i)$ and $p_n(s_{i+1})$ always have different sign. If we now choose $a_{n+1}$ to be small enough - specifically, $$|a_{n+1}| \lt \min_{0 \le i \le n} \left|\frac{p_n(s_i)}{s_i^{\, n+1}}\right|$$ then we will retain the property that $p_{n+1}(s_i)$ and $p_{n+1}(s_{i+1})$ always have different sign. By the intermediate value theorem, this gives us exactly $n$ distinct roots lying between $s_0$ and $s_n$. Now, to get the final root, consider the sign of $p_{n+1}(s_n)\, $; if we choose the sign of $a_{n+1}$ to be the opposite, then, for sufficiently large $x \gg s_n$, we will have that $$sign(p_{n+1}(x)) = sign(a_{n+1}) = sign(-p_{n+1}(s_n))$$ so that there must be a root lying between the two points (which is necessarily distinct from the other $n$ roots which are less than $s_n$). $\dagger$: $p_n(x)$ must take the same sign on the whole interval, else by the intermediate value theorem, there would be another root in the interval. If $p_n(x)$ took the same sign on two adjacent intervals, $(r_{i-1}, r_i), (r_i, r_{i+1})$, the root $r_i$ would be a local extremum, so, by Fermat's theorem, we would have that ${p_n}'(r_i) = 0$ which would necessarily mean that $(x - r_i)^2$ was a factor of $p_n(x)$.
Determine a solution of an ODE by "inspection"
In general, the phrase "by inspection" means "by reasonable guessing". For instance, suppose we are to prove that there are integers $x,y,z$ such that $x^{2}+y^{2}=z^{2}$. Without manipulating the given condition, we have a candidate solution to the equation in mind, i.e. the triple $(3,4,5)$ of integers, which happens to be a solution to the equation; hence the proposition is proved. Well, we just proved the proposition by inspection. Yes, it is just the same thing to say "just find some solutions" instead of "find some solutions by inspection".
(Generalised) pigeonhole principle
Something is odd, on the face of it, about the "hint". Here's a set of six natural numbers: $\{1,2,3,4,5,6\}$ with different remainders when divided -- zero times! -- by 8. So we have an immediate boring proof by counterexample that it is false that, given any set $S$ of six natural numbers, there must be two numbers in $S$ that have the same remainder when divided by 8. The pigeonhole principle doesn't come into it! (Was there a series of examples, perhaps, and you were supposed to spot which ones the pigeonhole principle is useful for??)
Find biholomorphism between these domains
So the map you have given, $g(z)$, is the Cayley transform, which maps the upper half plane to the unit disc. Now observe that if you restrict this map to the first quadrant, which is essentially $G_{1}$, then the image is the lower semicircle. Now you take a map from the lower semicircle to the upper semicircle (just rotate it by $e^{i\pi}$). So call $g:G_{1}\to D_{1}$ your restriction of the Cayley transform, where $D_{1}$ is the lower semicircle. Then take $g_{2}:D_{1}\to D_{2}$ defined by $g_{2}(z)=e^{i\pi}z$, where $D_{2}$ is the upper semicircle. Then take $g_{3}:D_{2}\to G_{2}$, the square-root map, which maps $D_{2}$ to $G_{2}$. Hope it's fine!
Find the radius of convergence with a function including log
If $|z| > 2$ it's easy to see that this series is divergent. Can you see why? If $|z| < 2$ it's easy to see that this series is convergent. Can you see why? So the radius of convergence is $2$.
Inverse of the exponential map in local coordinates
No, unless $\varphi = \exp_y$ (up to translation). You can assume that $y=0$ (i.e. $\varphi(y) = 0$) after translating $\varphi$. Then the question is: does $x = {\exp_y}^{-1}(x)$ hold for every $x$ in $U$? That's just saying that $\varphi = \exp_y$.
If $f_{n-1}^2=(f_n/2)^2+h^2$ then $n=6$
We need $4h^2 = 4f_{n-1}^2-f_n^2 = (2f_{n-1}+f_n)(2f_{n-1}-f_n) = (4f_{n-2}+3f_{n-3})f_{n-3}$. Since $\gcd(f_{n-2},f_{n-3}) = 1$, we have $\gcd(4f_{n-2}+3f_{n-3},f_{n-3}) = \gcd(4f_{n-2},f_{n-3}) \in \{1,2,4\}$. Let $v_p(n)$ denote the largest integer $k$ such that $p^k \mid n$. Then, if $p \ge 3$ is a prime such that $p \mid f_{n-3}$, then $p \not\mid (4f_{n-2}+3f_{n-3})$, and so we have $v_p(f_{n-3}) = v_p((4f_{n-2}+3f_{n-3})f_{n-3}) = v_p(4h^2) = v_p(h^2) = 2v_p(h)$, which is even. If $p \ge 3$ is a prime such that $p \not\mid f_{n-3}$, then trivially, $v_p(f_{n-3}) = 0$, which is even. Therefore, $v_p(f_{n-3})$ is even for all primes $p \ge 3$. Hence $f_{n-3}$ is either a perfect square or twice a perfect square. By this link the only Fibonacci numbers which are perfect squares are $0,1,144$, and the only Fibonacci numbers which are twice a perfect square are $0,2,8$. So it remains to check all of these $5$ cases and see which work.
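A brute-force Python sketch over small indices confirms that $n=6$ (giving $5^2 = 4^2 + 3^2$) is the only case in this range:

```python
from math import isqrt

f = [0, 1]
while len(f) < 40:
    f.append(f[-1] + f[-2])

for n in range(3, 40):
    d = 4 * f[n-1]**2 - f[n]**2          # must equal 4 h^2 with h > 0
    r = isqrt(d) if d > 0 else 0
    if d > 0 and r * r == d and r % 2 == 0:
        print(n, f[n-1], f[n], r // 2)   # prints only: 6 5 8 3
```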
Variables and Language
(1) Natural language doesn't really have variables. The nearest you can get to "let x = 2" is to say something like "let's think of the number 2 and, for convenience, let's refer to it as x". So after you've said that, "x" acts as a noun denoting 2. (2) In statements like "if $x$ increases then $x^{-1}$ decreases", we are using a standard mathematical convention whereby a formula like $x$ or $x^{-1}$ is interpreted as a function of the variable $x$ ($x \mapsto x$ or $x \mapsto x^{-1}$). It then makes sense to talk about the function as increasing or decreasing.
Extensions of degree two are Galois Extensions.
Let $L/K$ be a field extension of degree $2$. If $\alpha \in L \setminus K$ then $p(t)=\min_K(\alpha,t)$ has degree $2$. In particular $p(t)$ must split over $L$, since $p(t)=(t-\alpha)q(t)$ forces $q(t)$ to have degree $1$. If the characteristic of $K$ is not equal to $2$ then $p^\prime(t) = 2t + \cdots \neq 0$, so $\alpha$ is separable over $K$. Therefore $L/K$ is Galois. For a counterexample in the case of characteristic $2$, consider the splitting field of $p(x)=x^2-t \in \mathbb{F}_2(t)[x]$. It's not hard to see that $p(x)=(x+\sqrt{t})^2$, so the extension is purely inseparable and not Galois.
Maximize inner product of two vectors of which end points are on two separate circles
I tried a different approach, but it relies heavily on Wolfram Alpha doing the calculations: $$ f((x_0,y_0),(x_1,y_1)) = x_0 x_1 + y_0 y_1 $$ $$ g_1(\alpha) = (\cos{\alpha}+3, \sin{\alpha}+4) $$ $$ g_2(\beta) = (\cos{\beta}+5, \sin{\beta}+2) $$ $$ g(\alpha,\beta) = (g_1(\alpha), g_2(\beta)) $$ $$ c(\alpha,\beta) = f(g(\alpha,\beta)) = (\cos{\alpha} + 3)(\cos{\beta} + 5) + (\sin{\beta}+2)(\sin{\alpha}+4) $$ $$ \frac{\partial c}{\partial \alpha} = \cos{\alpha}(\sin{\beta}+2) - \sin{\alpha}(\cos{\beta}+5) $$ $$ \frac{\partial c}{\partial \beta} = \cos{\beta}(\sin{\alpha}+4) - \sin{\beta}(\cos{\alpha}+3) $$ The maximum and minimum are at points where $$ \frac{\partial c}{\partial \alpha} = \frac{\partial c}{\partial \beta} = 0. $$ Plotting these two equations in Wolfram Alpha shows a large number of critical points, so the only remaining question is which of them are minima and which are maxima. Asking Wolfram Alpha to maximize $c(\alpha,\beta)$ directly, the maximum value comes out near $34.2783$.
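A grid search in Python (an illustrative sketch, replacing the Wolfram Alpha step) reproduces the approximate maximum:

```python
import numpy as np

al = np.linspace(0, 2 * np.pi, 1200)
A, B = np.meshgrid(al, al)
c = (np.cos(A) + 3) * (np.cos(B) + 5) + (np.sin(A) + 4) * (np.sin(B) + 2)
print(c.max())   # roughly 34.278
```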
How to factor $4x^2 + 2x + 1$?
$$4x^2+2x+1$$ Finding the roots by the quadratic formula: $$x=\frac{-2\pm\sqrt{2^2-4(4)(1)}}{2(4)}$$ $$=\frac{-2\pm2i\sqrt{3}}{8}$$ $$=-\frac{1}{4}\pm\frac{i\sqrt 3}{4}$$ $$x=-\frac{1}{4}+\frac{i\sqrt 3}{4}\ \vee\ x=-\frac{1}{4}-\frac{i\sqrt 3}{4}$$ Edit: Now, we have the factors as follows: $$4x^2+2x+1=4\left(x+\frac{1}{4}-\frac{i\sqrt 3}{4}\right)\left(x+\frac{1}{4}+\frac{i\sqrt 3}{4}\right)$$
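A symbolic check of the factorization with a short sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')
p = 4 * (x + sp.Rational(1, 4) - sp.I * sp.sqrt(3) / 4) \
      * (x + sp.Rational(1, 4) + sp.I * sp.sqrt(3) / 4)
print(sp.expand(p))   # 4*x**2 + 2*x + 1
```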
Find the number of ways to reach from one end of grid to another
Your result is fine. Presumably you were writing numbers in the squares, of which there are five on a side. The question was following the lines of the grid, of which there are six on a side. You take $10$ moves to get from one corner to the other, of which you have to choose $5$ to be downwards, and $\frac {10!}{5!5!}=252$. They take $12$ moves to get from one corner to the other, have to choose $6$ to be down, and have $\frac {12!}{6!6!}=924$ choices. If you add one more row of squares on the top and right and continue your approach, you should find $924$ as well.
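Both binomial coefficients are easy to confirm, e.g. in Python:

```python
from math import comb

print(comb(10, 5))   # 252 paths through the 5-per-side squares
print(comb(12, 6))   # 924 paths along the 6-per-side grid lines
```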
In baby rudin, absolutely converge, uniformly converge, and so on...
Try the comparison test with $\frac{1}{x}\sum_{n=1}^{\infty}\frac{1}{n^2}$ for $x\neq 0$; at $x=0$, judge convergence by inspection.
Every Partially ordered set has a maximal independent subset
This is a classical application of Zorn's lemma, but you can just as well use the Teichmüller–Tukey lemma instead. HINT: Show that $\mathcal F=\{A\subseteq E\mid A\text{ is independent}\}$ has finite character. What can you conclude about a maximal element in $\cal F$?
Proof of satisfaction given the same free variable interpretation
As you said, the proof of theorem 22A is by induction on (the complexity of) the formula $\varphi$. There are two subtleties in this proof. In the base case, where $\varphi$ is an atomic formula, the proof uses the following lemma: given a term $t$, if $s_1$ and $s_2$ agree at all variables in $t$ then $\overline{s_1}(t) = \overline{s_2}(t)$. As mentioned in the text, this lemma should be proved separately, by induction on the (complexity of the) term $t$. In the inductive step where $\varphi = \forall x \, \psi$, the induction hypothesis says that if $s'_1$ and $s'_2$ are two functions from $V$ to $|\mathfrak{A}|$ that agree at all free variables in $\psi$, then $\mathfrak{A}$ satisfies $\psi$ with $s_1'$ iff $\mathfrak{A}$ satisfies $\psi$ with $s_2'$. Unfortunately, you cannot apply the induction hypothesis to $\psi$ with $s_1$ and $s_2$, because $x$ is not a free variable of $\varphi = \forall x \, \psi$ but it might be a free variable in $\psi$, and we do not have any hypothesis about the behavior of $s_1$ and $s_2$ at $x$ (said differently, the formula $\psi$ is smaller than $\varphi$, but the hypotheses are not satisfied by $\psi$ with $s_1$ and $s_2$). But for every $d \in |\mathfrak{A}|$, the functions $s_1(x \mid d)$ and $s_2(x \mid d)$ agree at all the free variables of $\psi$, hence we can apply the induction hypothesis to $\psi$ with $s_1(x \mid d)$ and $s_2(x \mid d)$: therefore, $\mathfrak{A}$ satisfies $\psi$ with $s_1(x \mid d)$ iff $\mathfrak{A}$ satisfies $\psi$ with $s_2(x \mid d)$. By definition of satisfaction, this means that $\mathfrak{A}$ satisfies $\varphi = \forall x \, \psi$ with $s_1$ iff $\mathfrak{A}$ satisfies $\varphi$ with $s_2$.
First-year matrix problem: How do you show that a sum of an identity matrix and another matrix is equal to the sum's inverse?
$$(I-A^2)(I-A^2)=I\times I - I \times A^2 - A^2 \times I + A^2 \times A^2 $$ $$=I - A^2 - A^2 + A^4 = I - 2A^2 + A^4,$$ so this is $I$ if $A^4=2A^2,$ so $(I-A^2)^{-1}=(I-A^2)$ in that case.
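For a concrete sanity check, here is a small numpy sketch with one matrix satisfying $A^4=2A^2$ (this particular $A$ is just an illustrative choice, not from the question):

```python
import numpy as np

# A simple matrix with A^4 = 2 A^2: here A^2 = diag(2, 0), A^4 = diag(4, 0).
A = np.diag([np.sqrt(2), 0.0])
assert np.allclose(A @ A @ A @ A, 2 * (A @ A))
M = np.eye(2) - A @ A
print(M @ M)  # the identity, so (I - A^2) is its own inverse
```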
Extension of a finite group
You must have some extra condition, such as $N \le G'$, otherwise the direct product of a centreless group of odd order by $\operatorname{PSL}(2, p)$ will do. If it's a particular example you're looking for, consider $H = \operatorname{PSL}(2,2) \cong S_{3} = \langle s, t \rangle$, where $s$ has order $3$ and $t$ has order $2$. Consider the action of $H$ on a vector space $N = \Bbb{Z}_{7}^{2}$, where $$ s \mapsto \begin{bmatrix}2&0\\0&4 \end{bmatrix}, \qquad t \mapsto \begin{bmatrix}0&1\\1&0 \end{bmatrix}. $$ (Note that $2$ is a primitive third root of unity in $\Bbb{Z}_{7}$.) Then $G = N \rtimes H$ will do. In fact $H$ clearly acts irreducibly on $N$, so the centre of $G$ is trivial. Here $N \le G'$.
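The claimed relations can be verified by hand, or with a short computation modulo $7$:

```python
import numpy as np

s = np.array([[2, 0], [0, 4]])
t = np.array([[0, 1], [1, 0]])
I = np.eye(2, dtype=int)
pw = lambda M, k: np.linalg.matrix_power(M, k) % 7

assert (pw(s, 3) == I).all()                # s has order 3
assert (pw(t, 2) == I).all()                # t has order 2
assert ((t @ s @ t) % 7 == pw(s, 2)).all()  # t s t = s^{-1}, the S_3 relation
print("S_3 relations hold over Z_7")
```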
How to find the PDF of one random variable when the PDF of another random variable and the relationship between the two random variables are known?
There are shortcuts, but we will use a basic method. The idea is to find the cumulative distribution function of $Y$, and then differentiate to find the density function. We have $$F_Y(y)=\Pr(Y\le y)=\Pr(e^{-\lambda X}\le y)=\Pr(-\lambda X \le \log y).$$ (We took the logarithm: this preserves inequalities.) Thus $$F_Y(y)=\Pr\left(X\ge -\frac{\log y}{\lambda}\right).$$ We know that $\Pr(X\ge t)=e^{-\lambda t}$, if $t$ is positive. If $t$ is negative, the probability is $1$. Substitute for $t$. There is dramatic simplification. The $\lambda$'s cancel, and we get $e^{\log y}$, that is $y$. But note this is correct only when $-\log y$ is $\ge 0$, that is, when $y\le 1$. If $y\gt 1$, $F_Y(y)=1$. Finally, differentiate. The density function is $1$ on the interval $(0,1)$, and $0$ elsewhere.
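The conclusion is easy to confirm by simulation; here is a quick Monte Carlo sketch (the rate $\lambda=2.5$ and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 2.5
X = rng.exponential(scale=1 / lam, size=100_000)  # exponential with rate lam
Y = np.exp(-lam * X)
# For Uniform(0,1): mean 0.5 and variance 1/12 ~ 0.0833.
print(Y.mean(), Y.var())
```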
Vectors In Space - Solving 3 x 3 Linear Systems - Row Reduction
$\Delta=-m^2-6m-5=-(m+1)(m+5)$, $\Delta_x=-7m-7$, $\Delta_y=-3m^2+3m+6$ and $\Delta_z=7m+7$. If $m\neq-1$ and $m\neq-5$ we obtain $\left(\frac{7}{m+5}, \frac{3(m-2)}{m+5},-\frac{7}{m+5}\right)$. If $m=-5$ then the system has no solutions. If $m=-1$ we obtain $2x+y=z+3$ and $x+2y=z-1$, which give infinitely many solutions: $$\left\{\left(\frac{z+7}{3},\frac{z-5}{3},z\right)\,\middle|\,z\in\mathbb R\right\}$$
Baire category related question
Hint: It has nothing to do with Urysohn's lemma. Use the Baire category theorem. Prove by contradiction. Suppose none of the $A_n$'s contains an interval, i.e., they are closed nowhere dense sets. Use Baire's theorem to show that the union of countably many closed nowhere dense subsets of $\mathbb R$ can't contain an interval. Exactly how was Baire's theorem stated in your class? If it was in terms of open dense sets, well, the complement of a closed nowhere dense set is an open dense set.
Opens maps from topological manifolds whose fibers are not generically topological foliations
Some examples of open maps which are not generically foliations are given by dimension-raising continuous open maps from manifolds to cells (it's hard to imagine that such maps exist...), see my answer here and more examples and references for instance here: J.J.Walsh, Monotone and open mappings on manifolds, I, Transactions of the American Mathematical Society Vol. 209 (Aug., 1975), pp. 419-432.
The set of bilinear forms is a right $(R \otimes R)$-module
In this answer, I explain how $(\beta \cdot (a \otimes b))(x \otimes y) = \beta( (ax) \otimes (by))$ makes sense. Those are equalities (1) and (2) below. First I want to isolate two claims we will use. These are things we should sort out first if we want to answer the question. For the following two claims, I want to emphasize that I am not using the context of your question (I use the letters differently). Claim 1: If $M$ is a left $A$-module and $N$ is a left $B$-module, then $M \otimes_{\mathbb{Z}} N$ is a left $A \otimes_{\mathbb{Z}} B$-module. We want to set $(a \otimes b) \cdot (m \otimes n) = am \otimes bn$. $$\text{Hom}_{\mathbb{Z}} ((A \otimes_{\mathbb{Z}} B) \otimes_{\mathbb{Z}} (M \otimes_{\mathbb{Z}} N), M \otimes_{\mathbb{Z}} N) \cong \text{Hom}_{\mathbb{Z}} ((A \otimes_{\mathbb{Z}} M )\otimes_{\mathbb{Z}} (B \otimes_{\mathbb{Z}} N), M \otimes_{\mathbb{Z}} N)$$ This isomorphism comes from the isomorphism $(A \otimes_{\mathbb{Z}} B) \otimes_{\mathbb{Z}} (M \otimes_{\mathbb{Z}} N) \cong (A \otimes_{\mathbb{Z}} M )\otimes_{\mathbb{Z}} (B \otimes_{\mathbb{Z}} N)$, which in turn comes from the commutativity and associativity of the tensor product. If $\alpha: A \otimes_{\mathbb{Z}} M \rightarrow M$ is the structure map for $M$, and $\beta : B \otimes_{\mathbb{Z}} N \rightarrow N$ is the structure map for $N$, then $\alpha \otimes \beta$ is a map from $(A \otimes_{\mathbb{Z}} M ) \otimes_{\mathbb{Z}} (B \otimes_{\mathbb{Z}} N)$ to $M \otimes_{\mathbb{Z}} N$, and by the isomorphism above, it induces a map from $(A \otimes_{\mathbb{Z}} B) \otimes_{\mathbb{Z}} (M \otimes_{\mathbb{Z}} N)$ to $M \otimes_{\mathbb{Z}} N$. Another way to word it is this: Let $\alpha : A \rightarrow \text{End}_{\mathbb{Z}} (M)$ and $\beta : B \rightarrow \text{End}_{\mathbb{Z}}(N)$ be the structure maps. To define a map $\mu: A \otimes_{\mathbb{Z}} B \rightarrow \text{End}_{\mathbb{Z}} (M \otimes_{\mathbb{Z}} N)$, send $a \otimes b$ to the map $\alpha(a) \otimes \beta(b) : M \otimes_{\mathbb{Z}} N \rightarrow M \otimes_{\mathbb{Z}} N$. For $a, a' \in A$ and $b, b' \in B$, $\mu(a a' \otimes b b') = \mu(a \otimes b) \circ \mu(a' \otimes b')$. Both of these constructions result in a multiplication such that $(a \otimes b) \cdot (m \otimes n) = am \otimes bn$. Of course, it's the same multiplication both times. Claim 2: If $X$ is a left $A$-module and a right $B$-module, and $Y$ is a right $B$-module, then $\text{Hom}_{\text{right-} B \text{-mod}}(X,Y)$ is a right $A$-module (note this is reverse from $X$; $X$ is a left $A$-module). We put $(\phi \cdot a)(x) = \phi(ax)$. Combining these, let's see what we have in your situation. Now I work in the context of your question. $V$ and $A$ are abelian groups. Since $V$ is a left $R$-module, $V \otimes_{\mathbb{Z}} V$ is a left $R \otimes_{\mathbb{Z}} R$-module by (1). So $\text{Hom}_{\mathbb{Z}}(V \otimes_{\mathbb{Z}} V, A)$ is a right $R \otimes_{\mathbb{Z}} R$-module by (2). Let's calculate what this module structure on this abelian group looks like. Take $\beta \in \text{Hom}_{\mathbb{Z}}(V \otimes_{\mathbb{Z}} V, A)$, and take $r \otimes s \in R \otimes_{\mathbb{Z}} R$. What is $\beta \cdot (r \otimes s)$? $$(\beta \cdot (r \otimes s)) (v \otimes w) \stackrel{(1)}{=} \beta (r \otimes s \cdot v \otimes w) \stackrel{(2)}{=} \beta ( (rv) \otimes (sw)) $$ This gives a formula for $\beta \cdot (r \otimes s)$ in terms of $\beta$, $r$ and $s$. The first equality holds by how we defined the $R \otimes_{\mathbb{Z}} R$-module structure on $\text{Hom}_{\mathbb{Z}}(V \otimes_{\mathbb{Z}} V, A)$.
The second equality holds because of how we defined the $R \otimes_{\mathbb{Z}} R$-module structure on $V \otimes_{\mathbb{Z}} V$.
If $\sqrt{18-6\sqrt{5}} = \sqrt{a}- \sqrt{b}$, then which of the following relations are true?
Consider $$(\sqrt a-\sqrt b)^2=a+b-2\sqrt{ab}=18-6\sqrt5=18-2\sqrt{45}.$$ For integers $a,b$, comparing the rational and irrational parts gives $a+b=18$ and $ab=45$, so $\{a,b\}=\{15,3\}$; indeed $\sqrt{18-6\sqrt 5}=\sqrt{15}-\sqrt 3$.
Motivations for using Strong-K3 in Sentential Logic.
As far as I know, the greatest advantage of K3 is that it preserves the classical definition of the material conditional in classical two-valued logic: (P $\supset $ Q) $\equiv$ (~P $\lor $ Q). However, I don't see any real benefit here. I believe that the apparent identity of (P $\supset $ Q) and (~P $\lor $ Q) is a coincidence, an artifact of the limited set of 2-valued logical connectives. It does not convey any useful insight into the nature of logical implication in the three-valued case and may actually be misleading.
How to solve this first order nonlinear PDE?
Hint: It is in Clairaut's form. Can you see it? The complete integral is $u=ax+by+\frac{1}{2}(a^2+b^2)$; then use the initial condition to eliminate $b$.
definition of differentiability on a regular surface
do Carmo and Gray's definitions are the same. Only the terminology was changed to protect the innocent. Both of these are defining differentiability of a function (i.e., a map into $\Bbb R$) whose domain is a regular surface in $\Bbb R^3$. On the other hand, Thomas is defining differentiability of a function whose domain is a subset of $\Bbb R^2$. By identifying $\Bbb R^2$ with the plane $\{(x,y,0) \in \Bbb R^3 \mid x, y \in \Bbb R\}$, you can consider Thomas' definition to apply to functions whose domains are certain surfaces in $\Bbb R^3$. But it certainly doesn't apply to all of them. What Thomas calls a surface is the set $\{(x,y,f(x,y))\mid (x, y) \in \text{dom}(f)\}$. This is not the domain of $f$ as in the other definition, but rather, its graph. When Thomas' $f$ is differentiable, you can check that it satisfies the requirements of being a regular surface as described in the differential geometry books. Indeed, $f$ itself can be used to provide a parametrization (a.k.a. coordinate patch) of the surface. Note that not every regular surface is expressible in this form. Indeed, even the sphere is not, since every $(x,y)$ coming from a point not on the equator corresponds to two values of $z$, so the sphere cannot be expressed in terms of a single function $f$. This is why the differential geometry books define the more general concept of a regular surface. Where Thomas leaves off is where do Carmo and Gray start. Now that we have the surface defined, we can talk about functions whose domain is just that surface. They do not need to be defined for the points on either side of that surface. How do we define differentiability for such a function? If we try to apply the 3D version of Thomas' definition, we run into a problem, in that the definitions of $f_x, f_y, f_z$ require that $f$ be defined as we allow small variations in $x, y, z$ around the point. But those small variations leave the surface, so the function is not defined there. In order to discuss differentiability of an $f$ defined only on the surface, we have to restrict ourselves to that surface. This is what do Carmo and Gray are doing. They establish a coordinate system on the surface around the point of interest. These coordinates do not leave the surface as they vary. Differentiation is defined with respect to those coordinates instead of the $(x,y,z)$ of the ambient space $\Bbb R^3$. They are also setting you up for a future generalization, as there is no reason that we have to think of surfaces as being inside some larger ambient space. Our surfaces do not need to be subsets of $\Bbb R^3$. They could be subsets of other spaces, or just objects in and of themselves. The definitions of do Carmo and Gray are easily adaptable to this concept. But you cannot define differentiation with respect to some ambient space coordinates if you have no ambient space.
Modifying unitary matrix eigenvalues by right multiplication by orthogonal matrix
This is not exactly what you asked but it might help to solve this or narrow the search for a counterexample. We have $U \in U(n) \implies \operatorname{spec}(U)\subset S^1$. $\operatorname{spec}(U)$ is finite, hence discrete, and therefore any function on it is continuous. Define $f:\operatorname{spec}(U) \to \{-1,1\}$ by $$e^{i\theta} \mapsto \begin{cases} 1 & \theta \in [0,\pi) \\ -1 & \theta \in [\pi,2\pi) \end{cases}$$ Note that $\bar{f}f=f^2=1$. Set $O=f(U)$. Since $f$ has image in $\mathbb{R}\cap S^1$, we get $$O \in U(n)\cap H(n).$$ Moreover, for all $x \in \operatorname{spec}(U)$: $$\overline{xf(x)}\,xf(x)=1,$$ so $(UO)^*(UO)=I$, because the continuous functional calculus is a $C^*$-algebra homomorphism. Finally, $xf(x) \in S^1 \cap \mathbb{H}$ (where $\mathbb{H}$ denotes the closed upper half-plane), and therefore $UO$ has spectrum in $S^1 \cap \mathbb{H}$. $U(n) \cap H(n)$ has similar properties to the orthogonal group, so this might help.
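A numerical sketch of this construction (the random unitary, seed, and tolerances are arbitrary choices): diagonalize a random unitary $U$, apply $f$ to its eigenvalues to build $O$, and check that $UO$ is unitary with spectrum in the upper half of the circle.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(Z)                       # a random unitary matrix
w, V = np.linalg.eig(U)                      # U = V diag(w) V^{-1}
theta = np.angle(w)                          # angles in (-pi, pi]
f = np.where((theta >= 0) & (theta < np.pi), 1.0, -1.0)
O = V @ np.diag(f) @ np.linalg.inv(V)        # O = f(U)
UO = U @ O
print(np.allclose(UO.conj().T @ UO, np.eye(4)))         # True: UO is unitary
print(np.all(np.angle(np.linalg.eigvals(UO)) > -1e-9))  # spectrum in upper half
```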
Prove that $f_n = \frac{\alpha^n - \beta^n}{\alpha - \beta}$ for all $n \in \mathbb Z^+$.
By the definition of Fibonacci numbers, \begin{align*} f_1\equiv&\,1,\\ f_2\equiv&\,1,\\ f_n\equiv&\,f_{n-1}+f_{n-2}\quad\forall n\in\mathbb Z_+:n\geq 3. \end{align*} You can check directly that $$f_n=\frac{\alpha^n-\beta^n}{\alpha-\beta}\tag{$\clubsuit$}$$ does hold for $n\in\{1,2\}$. Now let $n\geq 3$ and assume that ($\clubsuit$) is true up to $n-1$. Then, \begin{align*} f_n=&\,f_{n-1}+f_{n-2}=\frac{\alpha^{n-1}-\beta^{n-1}}{\alpha-\beta}+\frac{\alpha^{n-2}-\beta^{n-2}}{\alpha-\beta}=\frac{\left(\alpha^{n-2}+\alpha^{n-1}\right)-\left(\beta^{n-2}+\beta^{n-1}\right)}{\alpha-\beta}\\ =&\,\frac{\alpha^{n-2}\color{blue}{(1+\alpha)}-\beta^{n-2}\color{red}{(1+\beta)}}{\alpha-\beta}=\frac{\alpha^{n-2}\color{blue}{\alpha^2}-\beta^{n-2}\color{red}{\beta^2}}{\alpha-\beta}=\frac{\alpha^n-\beta^n}{\alpha-\beta}, \end{align*} where I used the fact that both $\alpha$ and $\beta$ solve $x^2-x-1=0$, so that $\color{blue}{\alpha^2=1+\alpha}$ and $\color{red}{\beta^2=1+\beta}$. Therefore, the induction goes through, as desired.
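A quick floating-point check of the closed form against the recurrence (the range of $n$ is arbitrary; for large $n$ one would want exact arithmetic):

```python
# Compare Binet's formula with the Fibonacci recurrence numerically.
alpha = (1 + 5 ** 0.5) / 2
beta = (1 - 5 ** 0.5) / 2

f = [0, 1]
for n in range(2, 20):
    f.append(f[-1] + f[-2])

for n in range(1, 20):
    binet = (alpha ** n - beta ** n) / (alpha - beta)
    assert round(binet) == f[n]
print("Binet's formula matches the recurrence for n = 1, ..., 19")
```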
Multiplying a matrix by the other one is zero. Are those two matrice in null space of each other?
It implies that each column of $B$ is in the null space of $A$. Since the null space is a set of vectors and $B$ is a matrix, $B$ itself cannot be in the null space. It is not guaranteed either that the columns of $B$ will span the null space, although that is possible.
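A small example of the last point (the matrices here are an illustrative choice): $AB=0$, the columns of $B$ lie in the null space of $A$, but they span only one of its two dimensions.

```python
import numpy as np

A = np.array([[1, 0, 0]])   # null(A) = span{e2, e3}, two-dimensional
B = np.array([[0, 0],
              [1, 2],
              [0, 0]])      # both columns are multiples of e2
print(A @ B)                # [[0 0]], yet the columns of B span one dimension
```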
An example of a discrete, abellian and not cyclic group in $PSL(2,\mathbb{C})$
A simple example is given by the matrices $$\pmatrix{1&a+bi\\0&1}$$ for $a$, $b\in\Bbb Z$.
Let $L/Q$ be a field extension. Let $\sigma\in\textrm{Aut}_Q(L)$. Let $f(x) \in Q[x]$ be a polynomial. Show that $f(\sigma(\alpha)) = \sigma(f(\alpha))$ for all $\alpha \in L$.
Write $f(x)=\sum_{i=0}^na_ix^i$ with $n\in\Bbb N$ and $a_i\in Q$. Then for every $\alpha\in L$ we have \begin{align} \sigma(f(\alpha)) &=\sigma\left(\sum_{i=0}^na_i\alpha^i\right)\\ &=\sum_{i=0}^na_i\sigma(\alpha)^i\\ &=f(\sigma(\alpha)) \end{align} where the middle step uses that $\sigma$ is a ring homomorphism fixing $Q$ pointwise, so $\sigma(a_i)=a_i$ and $\sigma(\alpha^i)=\sigma(\alpha)^i$.
Question of proving the limit of $\lim\limits_{x\to0}x\ln(x)$
The equality$$\lim_{x\to0}x\log(x)=\frac{\lim_{x\to0}x}{\lim_{x\to0}\frac1{\log x}}$$is false, since the LHS is $0$, whereas the RHS doesn't make sense. And the equality$$\frac{\lim_{x\to0}x}{\lim_{x\to0}\frac1{\log x}}=\frac{\lim_{x\to0}1}{\lim_{x\to0}-\frac1{x\log^2(x)}}$$simply says that a thing that doesn't make sense is equal to another thing which also doesn't make sense. On the other hand, it is indeed true that if you try to apply L'Hopital's rule to compute the limit$$\lim_{x\to0}\frac{x}{\frac1{\log x}}$$you get another indeterminate form. So what? That happens a lot.
Why is $1$ not a prime number?
One of the whole "points" of defining primes is to be able to uniquely and finitely prime factorize every natural number. If 1 was prime, then this would be more or less impossible.
Ways to choose $k$ items out $n$ without overlap in the chosen sets
Indeed, to choose $k$ elements without repetition and without regard to order from a set of size $n$, there are $\binom{n}{k}$ ways to do this. However, this counts choosing, for example, $\{0,1\}$ from the set $\{0,1,2,3\}$ (thereby automatically 'not choosing' $\{2,3\}$) as different from choosing $\{2,3\}$ (thereby automatically 'not choosing' $\{0,1\}$), since a different selection was made each time. This is why the value $\binom{n}{k}$ disagrees with your test examples by a factor of $2$: you considered choosing a set and not choosing the same set as being equivalent. If counting these as distinct is not appropriate in your application, then you are free to divide by $2$ to account for this difference of perspectives.
I'm having trouble understanding a definition (about cards in combinatorics)
No, it is not necessary that all cards in a hand have the same picture. There is an illustration (not reproduced here) from the first edition of the book that appears in Example 2 of section 3.4. The second edition does not include this illustration for some reason, although it does include the same example. In that illustration, the three cards in the hand have three different pictures. (The example relates to the set of all permutations as "an exponential family" in Wilf's terminology. The goal is to develop generating functions that yield information about the various kinds of permutations given numbers and sizes of cycles.)
How can I express "for any two distinct elements $x$ and $y$ of the class $K$, at least one of the formulas: $xRy$ and $yRx$ holds"
No, the last statement is false because nothing in the quantifier prohibits $x=y$. You can say $$\forall x,y∈K~(x \neq y \to (xRy ∨ yRx))$$ which says nothing about whether $xRx$ or not. I think I have also seen $$\forall \underset{\large x\neq y}{x,y∈K}~(xRy ∨ yRx)$$
Prove that if $S: U\rightarrow V$ and $T: V\rightarrow W$ are isomorphisms, then $TS$ is also an isomorphism?
The easiest and the fastest way to prove that $T\circ S$ is an isomorphism is by constructing the inverse. Put $$F:=S^{-1}\circ T^{-1}$$ It is extremely easy to check (by composing $F\circ T\circ S$ and $T\circ S\circ F$) that $F=(T\circ S)^{-1}$. Note also that $F$ is linear, since the inverse of a linear isomorphism is itself linear.
Proof that odd perfect numbers cannot consist of single unique factors?
I think you have the right idea. However, the exposition is insufficiently clear. Let $N$ be an odd perfect number. We show that $N$ is divisible by a perfect square greater than $1$. Suppose to the contrary that $$N=p_1 p_2\cdots p_n,$$ where the $p_i$ are distinct primes. Then $$2N=(p_1+1)(p_2+1)\cdots (p_{n}+1).$$ This is impossible if $n\gt 1$, for $2^n$ divides the right-hand side, while the highest power of $2$ that divides $2N$ is $2^1$. We conclude that $n=1$, that is, $N$ is prime. That is impossible, since the sum of the divisors of $N$ would then be $N+1$, and $N+1=2N$ forces $N=1$, which is not prime.
$\frac{a_n}{b_n}=\sum_{k=1}^{n} \frac{1}{k}$ are in their lowest terms.
In this problem I choose $$n = 2\cdot3^k-1,\qquad m=\operatorname{lcm}(1,2,3,\ldots,n),$$ and let $v_3(a)$ denote the exponent of $3$ in the prime factorization of the integer $a$. We can write $$\frac{a_n}{b_n} = \frac{\sum_{i=1}^n \frac{m}{i}}{m},$$ and $$\sum_{i=1}^n \frac{m}{i}$$ is not divisible by $3$, so after reducing the fraction to lowest terms we have $2 \mid b_n$ and $3^k \mid b_n$, hence $n+1=2\cdot 3^k\mid b_n$. But $$\frac{a_{n+1}}{b_{n+1}}=\frac{a_n+\frac{b_n}{n+1}}{b_n}.$$ Because the LHS is an irreducible fraction, we have $b_{n+1}\mid b_n$, and $v_3(b_n) \geq k > k-1 \geq v_3(b_{n+1})$. So we are done.
Is there an improper subset that isn't equal to its superset?
Of course, $A= \{1\}$, $B =\{1,2\}$, will do. Or $A = \emptyset$ and $B$ any non-empty set, like $\{\emptyset\}$.
Where am I going wrong? When I try to go farther than this they stop being equal to each other.
I assume that you’re attempting the induction step of a proof by induction. The problem is that you’ve not made the change from $n$ to $n+1$ correctly. The next term after $3n+2$ is $3(n+1)+2=3n+5$, not $2(3n+1)+2$, and the new righthand side should be $$\frac{(n+1)\big(3(n+1)+7\big)}2=\frac{(n+1)(3n+10)}2\;.$$
Markov Chain Snakes and Ladders
Here's a good reference for exactly your problem: http://www.sosmath.com/matrix/markov/markov.html Since landing on 4 or 8 moves the player immediately (i.e., without requiring another turn), your transition matrix $P$ should actually be the black numbers below: $$ \begin{matrix} & \color{red}0& \color{red}1& \color{red}2& \color{red}3& \color{red}5& \color{red}6& \color{red}7& \color{red}9\\ \color{red}0&0 & .5 & .5 & 0 & 0 &0&0&0\\ \color{red}1&0&0&.5&.5&0&0&0&0 \\ \color{red}2&0&0&0&.5&0&0&.5&0\\ \color{red}3&0&0&0&0&.5&0&.5&0\\ \color{red}5&0&0&0&0&0&.5&.5&0\\ \color{red}6&0&0&.5&0&0&0&.5&0\\ \color{red}7&0&0&.5&0&0&0&0&.5\\ \color{red}9&0&0&0&0&0&0&0&1 \end{matrix} $$ Spaces 4 and 8 are removed since it's impossible to move to and stay on them in the same turn. Then the initial state of the player on space 3 is defined as $X$: $$ \begin{bmatrix} 0&0&0&1&0&0&0&0 \end{bmatrix} $$ The formula $XP^n$ yields the probabilities of the player being at each state after $n$ moves. In your case, starting at space 3 and tossing the coin 4 times: $$ \begin{bmatrix} 0&0&{1\over 8}&{1\over 8}& {1 \over 16}& 0&{3 \over 16}& {1 \over 2} \end{bmatrix} $$ The probability of you reaching space 9 within 4 turns is ${1 \over 2}$.
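The matrix computation is easy to reproduce with a short numpy sketch:

```python
import numpy as np

# States ordered 0, 1, 2, 3, 5, 6, 7, 9 as in the matrix above.
P = np.array([
    [0, .5, .5,  0,  0,  0,  0,  0],
    [0,  0, .5, .5,  0,  0,  0,  0],
    [0,  0,  0, .5,  0,  0, .5,  0],
    [0,  0,  0,  0, .5,  0, .5,  0],
    [0,  0,  0,  0,  0, .5, .5,  0],
    [0,  0, .5,  0,  0,  0, .5,  0],
    [0,  0, .5,  0,  0,  0,  0, .5],
    [0,  0,  0,  0,  0,  0,  0,  1],
])
X = np.array([0, 0, 0, 1, 0, 0, 0, 0])   # start on space 3
print(X @ np.linalg.matrix_power(P, 4))
# [0. 0. 0.125 0.125 0.0625 0. 0.1875 0.5]
```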
How to solve $f(x)$ in the equation $f(x)/F(x)=\ln(2)$
Letting $g(x)=F(x)$, the equation becomes $\ln(2)\,g(x)=g'(x)$. You can separate and integrate, or you can remember that this is the equation whose solutions are $g(x)=k2^x$. Hence $F(x)=k2^x$ and $f(x)=F'(x)=k\ln(2)\,2^x$, with $k\neq 0$ due to the division in the original problem.
Maximum value of directional derivative is invariant
Let $a=(a_1,a_2,a_3)$, $b=(b_1,b_2,b_3)$ and $c=(c_1,c_2,c_3)$. Then $$ \frac{\partial f}{\partial a}=a_1\frac{\partial f}{\partial x}+ a_2\frac{\partial f}{\partial y}+ a_3\frac{\partial f}{\partial z} $$ $$ \frac{\partial f}{\partial b}=b_1\frac{\partial f}{\partial x}+ b_2\frac{\partial f}{\partial y}+ b_3\frac{\partial f}{\partial z} $$ $$ \frac{\partial f}{\partial c}=c_1\frac{\partial f}{\partial x}+ c_2\frac{\partial f}{\partial y}+ c_3\frac{\partial f}{\partial z} $$ Thus $$ \nabla_{a,b,c}\ f=U\nabla_{x,y,z}\ f, $$ where $$ U=\left(\begin{matrix}a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{matrix}\right) $$ Since $a$, $b$, $c$ are orthonormal, $UU^T=I$, i.e., $U$ is orthogonal. Then \begin{align} \left|\frac{\partial f}{\partial a}\right|^2+\left|\frac{\partial f}{\partial b}\right|^2 +\left|\frac{\partial f}{\partial c}\right|^2&=\nabla_{a,b,c}\ f\cdot \nabla_{a,b,c}\ f =U\nabla_{x,y,z}\ f\cdot U\nabla_{x,y,z}\ f \\ &=\nabla_{x,y,z}\ f\cdot U^TU\nabla_{x,y,z}\ f=\nabla_{x,y,z}\ f\cdot \nabla_{x,y,z}\ f\\ &=\left|\frac{\partial f}{\partial x}\right|^2+\left|\frac{\partial f}{\partial y}\right|^2 +\left|\frac{\partial f}{\partial z}\right|^2 \end{align}
Associativity, commutativity and distributivity of modulo arithmetic
To prove (or "notice") that $\mathbb{Z}/13\mathbb{Z}$ is a (cyclic) group under addition you would need to know first of all that it is a group under addition; and in order to know that it is a group under addition, you would need to know that addition is associative, has an identity element, and has inverses; so you have to show it is associative. You cannot save yourself knowing/showing it is associative by first showing it's a group: showing/knowing associativity is a prerequisite of it being a group. You don't have to prove it by "case analysis", but only through the basic properties of congruences. The key is really to show that addition and multiplication are well defined. That is, we define addition and multiplication in $\mathbb{Z}/n\mathbb{Z}$ by "class representatives": if $[a]$ is the modular class of $a$, and $[b]$ is the modular class of $b$, we define $[a]+[b]$ as $[a+b]$ and $[a][b]$ as $[ab]$. So long as these operations are well-defined, the properties will be inherited from the properties of the usual addition and multiplication of integers (just like in any ring modulo an ideal). To show it is well-defined, you need to show that if $[a]=[x]$ and $[b]=[y]$, then $[a+b]=[x+y]$ and $[ax]=[by]$. This is a direct consequence of the properties of congruences, namely that if $a\equiv x\pmod{n}$ and $b\equiv y\pmod{n}$, then $a+b\equiv x+y\pmod{n}$ and $ab\equiv xy\pmod{n}$. Verifying these through the definition of congruence is indeed trivial: If $a-x$ and $b-y$ are multiples of $n$, then so is $(a-x)+(b-y) = (a+b)-(x+y)$; and likewise, $(a-x)b + (b-y)x = ab - xy$ is also a multiple of $n$, being a sum of multiples of $n$. The properties of the addition and multiplication of congruence classes now follows from the same properties for integers. Added. It's important to note that it is not merely the fact that we are dealing with equivalence classes that makes this work. The key is really that the operations on equivalence classes defined via representatives are well defined. We have a set $S$ on which we already have a (binary) operation $*$, and an equivalence relation $\sim$ on $S$. If we want to define an operation on the congruence classes $S/\sim$ by using $*$, then the natural definition is to try $[a]*[b]=[a*b]$. That is: the "product" of the classes is the class of the "product". However, this does not always work correctly: it will depend on the operation and how $\sim$ relates to the operation. In the case of groups, for example, the only equivalence relations for which this definition works are the equivalence relations induced by normal subgroups (see my answer to this previous question for an ample discussion of trying to define an operation on equivalence classes in a group). So it is not the fact that we have an equivalence relation/partition, but really how the partition behaves relative to the operation. What we have is the following general result from Universal Algebra: Theorem. Let $S$ be a set, and let $*$ be an $n$-ary operation on $S$. If $\sim$ is an equivalence relation on $S$, then the operation on $S/\sim$ defined by $$*([s_1],\ldots,[s_n]) = [*(s_1,\ldots,s_n)]$$ is well-defined if and only if $\sim$ is closed under $*\times*$ as a subset of $S\times S$, where $$*\times*\Bigl((a_1,b_1),\ldots,(a_n,b_n)\Bigr) = \Bigl(*(a_1,\ldots,a_n),*(b_1,\ldots,b_n)\Bigr).$$ Proof. Suppose $*$ is well defined. 
If $(a_1,b_1),\ldots,(a_n,b_n)\in \sim$, then $[a_i]=[b_i]$, so by assumption we have that $$[*(a_1,\ldots,a_n)] = *([a_1],\ldots,[a_n]) = *([b_1],\ldots,[b_n])=[*(b_1,\ldots,b_n)].$$ Therefore, $$*\times *\Bigl( (a_1,b_1),\ldots,(a_n,b_n)\Bigr) = \bigl( *(a_1,\ldots,a_n),*(b_1,\ldots,b_n)\bigr)\in \sim,$$ so $\sim$ is closed under $*\times *$. Conversely, suppose that $\sim$ is closed under $*\times *$, and that $[a_1]=[b_1],\ldots,[a_n]=[b_n]$. We need to show that $*([a_1],\ldots,[a_n]) = *([b_1],\ldots,[b_n])$. Since $(a_1,b_1),\ldots,(a_n,b_n)\in \sim$, then $$*\times *\Bigl( (a_1,b_1),\ldots,(a_n,b_n)\Bigr) = \Bigl(*(a_1,\ldots,a_n),*(b_1,\ldots,b_n)\Bigr)\in \sim,$$ hence $[*(a_1,\ldots,a_n)] = [*(b_1,\ldots,b_n)]$. But this is exactly what we need to show $*$ is well defined on $S/\sim$. QED So if the equivalence relation is a substructure of $S\times S$, then we can define the induced operation on $S/\sim$ via representatives, and this operation on $S/\sim$ will inherit the properties of the operation from $S$. Thus, to show that sums and products of congruence classes modulo $m$ in the integers inherit the properties of the sum and product of integers is equivalent to showing that the equivalence relation "congruent modulo $m$", $\equiv_m$, interpreted as a subset of $\mathbb{Z}\times\mathbb{Z}$, is a subring of $\mathbb{Z}\times\mathbb{Z}$ (the latter with coordinate-wise operations). This is precisely showing that if $a\equiv x\pmod{m}$ (i.e., $(a,x)\in\equiv_m$) and $b\equiv y\pmod{m}$ (i.e., $(b,y)\in\equiv_m$) , then $a+b\equiv x+y\pmod{m}$ (i.e., $(a,x)+(b,y)=(a+b,x+y)\in\equiv_m$) and $ab\equiv xy\pmod{m}$ (i.e., $(a,x)(b,y) = (ab,xy)\in\equiv_m$). So it still comes down to the specific properties of the equivalence relation, and not merely to the fact that it is an equivalence relation.
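For a small modulus such as $n=13$, well-definedness can also be confirmed by brute force over several representatives of each class; a quick sketch:

```python
# Brute-force check that addition and multiplication of congruence
# classes via representatives are well defined (n = 13, a few
# representatives per class).
n = 13
for a in range(n):
    for b in range(n):
        for i in range(3):
            for j in range(3):
                x, y = a + i * n, b + j * n   # other representatives of [a], [b]
                assert (x + y) % n == (a + b) % n
                assert (x * y) % n == (a * b) % n
print("addition and multiplication are well defined mod", n)
```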
Is there a function such that $f(f(n)) = 2^n$?
I'm having trouble getting the 2; Helmut Kneser showed that there is a real analytic function, call it $h(x),$ such that $$ h(h(x)) = e^x. $$ It might be necessary to redo the whole argument to get $2.$ Kneser and related papers at MEEEEEEEEEEEEEEEEEEEEE A good example is the half iterate of $\sin x$; it took me quite a while, but see https://mathoverflow.net/questions/45608/formal-power-series-convergence/46765#46765 On average, a decreasing function rules out any half iterate; $\sin x$ is an extremely special case. Note that if we just ask for differentiability, there is a theorem in the KCG book that gives it. I think I included that in the excerpts on my website.
is this a Vector Space or not?
Consider $f,g$ such that $f(x)=kf(-x)$ and $g(x) = kg(-x)$, $a\in\mathbb{R}$. Then, 1.- $0(x) = k0(-x)$. 2.- $(f+g)(x) = f(x)+g(x) = k(f(-x)+g(-x)) = k(f+g)(-x)$. 3.- $(af)(x) =af(x) = akf(-x) = k(af)(-x)$. Hence the set contains the zero function and is closed under addition and scalar multiplication, so it is a subspace of the space of all functions; in particular, it is a vector space.
Hamming distance of a CRC
Those bounds follow from the theory of cyclic codes. A (binary linear) cyclic code of length $n$ (an odd natural number in all the interesting cases) has a generator polynomial $g(D)$ that is a factor of $D^n+1$ in the ring $\Bbb{F}_2[D]$ of polynomials in the unknown $D$ with binary coefficients. The cyclic code $C$ generated by $g(D)$ then consists of the polynomials $x(D)$ that are 1) multiples of $g(D)$, and 2) have degree $<n$. This follows from the fact that the cyclic codes of length $n$ are viewed as ideals of the ring $\Bbb{F}_2[D]/\langle D^n+1\rangle$. Any book that explains the algebra of cyclic codes will explain why a cyclic code has a generator, and how it works. The data on the minimum Hamming distance come from those of the code $C$. For example the polynomial $g(D)=D^8+D^4+D^3+D^2+1$ can be used. It is known to be a primitive polynomial of degree eight, so its zeros (in the extension field $\Bbb{F}_{256}$) have multiplicative order $255$. This means that $255$ is the smallest exponent $n$ with the property $g(D)\mid D^n+1$ (this can be brute forced). It follows that no binomial $D^i+D^j$ with $0\le i<j<255$ can be divisible by $g(D)$ either. What this means is that using $g(D)$ as a CRC-polynomial we will detect all error patterns of weight at most two, if the total length of the protected data block + 8-bit CRC tag is at most $255$. In practice the CRC-polynomials are often products of two irreducible polynomials (often but not always primitive) of the same degree and possibly also the extra factor $D+1$. I recall figuring out such things for a few standardized CRC-polynomials as an exercise for myself. The product of an intelligently chosen pair of polynomials will generate a cyclic code belonging to some well studied family. IIRC at least double-error-correcting BCH codes, Melas codes and Zetterberg codes are used. The extra factor $1+D$ is a nice trick making sure that the minimum Hamming distance is even (if it isn't already). If you can spare that extra bit in the CRC-tag that may give the desired level of reliability to the check. Anyway, if the cyclic code of length $n$ with a generator polynomial of degree $r$ is used, then the maximum length of the data block that can be protected is $n-r$. If the length of the data block is lower, then the resulting codeword has a corresponding number of extra zeros (that won't show anywhere). In the cases I checked the bound on the minimum Hamming distance comes from some known bound on the cyclic codes. In the simplest cases the BCH-bound will suffice. In the case of Melas or Zetterberg codes we need the Hartmann-Tzeng bound (search with those buzzwords for their derivation). One last word: there are often several pairs of irreducible factors leading to, say, minimum distance five. There exist some very specific tools for selecting among those. They involve largish simulations and such, but I'm not familiar with the details.
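As a concrete check of the claim about $g(D)=D^8+D^4+D^3+D^2+1$, one can brute force the smallest $n$ with $g(D)\mid D^n+1$ over $\Bbb F_2$; a short sketch (the bit-per-coefficient encoding of polynomials is a standard trick, not anything from the text above):

```python
# g(D) = D^8 + D^4 + D^3 + D^2 + 1, encoded with one bit per coefficient.
g = 0b100011101

def mod_g(poly):
    # Reduce a GF(2) polynomial modulo g by repeated XOR of shifted g.
    while poly.bit_length() >= g.bit_length():
        poly ^= g << (poly.bit_length() - g.bit_length())
    return poly

n, power = 1, 2          # power = D^n mod g, starting at D^1
while power != 1:        # D^n = 1 (mod g)  <=>  g(D) divides D^n + 1 over GF(2)
    power = mod_g(power << 1)
    n += 1
print(n)                 # 255, confirming g is primitive of degree 8
```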