Help with a step in Diestel's proof of Tutte's theorem in Graph Theory
$M_2$ is a 1-factor of $G+bd$, and $G$ does not have a 1-factor, so $bd$ must be an edge of $M_2$. This means that the edges of $M_2$ that are in $G$ hit all vertices except $b$ and $d$. So if we cannot continue $P$, it must end with either $b$ or $d$. We also know that $P$ begins and ends with an edge of $M_1$. Since $M_1$ only has independent edges, the last one cannot be incident with the first one, in other words, the last one cannot hit $d$, since $P$ begins with $d$. So $P$ must end with $b$.
Is it true that $|\arcsin z | \le |\frac {\pi z} {2} |$?
Note $\arcsin(z) = \sum_{n=0}^\infty \frac{1}{2^{2n}}\binom{2n}{n} \frac{z^{2n+1}}{2n+1}$. Since all coefficients of this series are nonnegative, we have that $\lvert\arcsin z \rvert \le \arcsin\lvert z \rvert$. Further, we have that $\arcsin\lvert z \rvert$ is convex. Since $\arcsin\lvert z \rvert$ is defined on $0 \le \lvert z \rvert \le 1 $ and $\arcsin 0 = 0$, we have that $\arcsin\lvert z \rvert \le \arcsin(1) \cdot \lvert z \rvert = \frac{\pi}{2}\lvert z \rvert$, which proves the claim.
Finding the vertex of a quadratic equation with two unknowns
If you construct a Cartesian plane in which you label one axis $k$ and the other axis $y,$ then the graph of the equation $y = k^2−8k−4$ is a parabola with vertex $(4,-20),$ which is what you find when you apply the vertex form $y = a(k-h)^2 + L$ to the equation $y = k^2−8k−4.$ But I think the parabola whose vertex you are supposed to find is the graph of $$y = x^2−kx+2k+1$$ in a conventional Cartesian plane where the axes are labeled $x$ (not $k$) and $y.$ So you should apply the vertex form to $x^2−kx+2k+1$ (viewed as a quadratic over $x,$ where $k$ is some as-yet-unspecified constant) rather than to $k^2−8k−4$ (viewed as a quadratic over $k$).
Time required to build a pyramid
Each person does $\;\frac{\frac12}{45}=\frac1{90}\;$ of a pyramid in $288$ days, which means each person does $\;\frac{\frac1{90}}{288}=\frac1{25,920}\;$ of a pyramid in one day. Thus, $\;65\;$ persons do $\;65\cdot\frac1{25,920}=\frac{13}{5184}\;$ of the pyramid every day, so if it will take them $\;k\;$ days to complete one sixth of the pyramid: $$\frac{13k}{5,184}=\frac16\implies k=\frac{5,184}{78}\approx66.46\cong66$$
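For anyone who wants to verify the work-rate arithmetic above, here is a minimal check with exact fractions (the 45 workers / 288 days / 65 workers / one-sixth figures are taken from the computation above):

```python
# Quick check of the arithmetic above using exact fractions.
from fractions import Fraction

per_person_per_day = Fraction(1, 2) / 45 / 288   # 1/25920 of a pyramid per person per day
rate_65 = 65 * per_person_per_day                # 13/5184 of a pyramid per day
days = Fraction(1, 6) / rate_65                  # days for one sixth of a pyramid
print(rate_65, days, float(days))                # 13/5184, 864/13, ~66.46
```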
Is it possible for the Cauchy product of the two series to converge?
No, under the given assumptions, the Cauchy product always diverges. We have $$c_n = \sum_{k = 1}^{n-1} a_k b_{n-k} \geqslant a_1\cdot b_{n-1}$$ for all $n\geqslant 2$, so $$\sum_{n = 2}^N c_n \geqslant a_1\sum_{m = 1}^{N-1} b_m \to +\infty.$$
Simplify $\langle \operatorname{Ind}^G_1 1, \operatorname{Ind}^G_H\phi\rangle_G$
With the help of Tobias Kildetoft's comments I think I can now provide an answer to my question. Recall that $\operatorname{Ind}^G_1 1$ is in fact the regular character of $G$ and so $\operatorname{Ind}^G_1 1=\sum_\chi \chi(1) \chi $ where the sum is taken over all the irreducible characters of $G$. Therefore we have $\langle\operatorname{Ind}^G_11, \operatorname{Ind}^G_H \phi\rangle_G= \sum_\chi\langle\chi,\operatorname{Ind}^G_H\phi\rangle_G\chi(1)=(\operatorname{Ind}^G_H\phi)(1)=[G:H]\phi(1)$. There was no need to use Frobenius reciprocity nor any of Mackey's theorems.
How many solutions does this equation have in $\mathbb{Z}_{\ge 0}$?
Circles and lines (often referred to as "stars and bars", though maybe this is USA-centric?) is a good idea. A linear arrangement of $6$ circles and $2$ lines is equivalent to a solution of your equation; the $2$ lines divide the circles into $3$ groups, corresponding to the values of the three variables. How many ways are there to arrange $6$ identical circles and $2$ identical lines in a row? As you said in the comments, there are $\frac{8!}{6!\cdot 2!}$ ways, as there would be $8!$ ways to arrange $8$ objects, but we divide by $6!$ since the $6$ circles are identical, and divide by $2!$ since the $2$ lines are identical.
Fastest way to solve linear system with block symmetric banded/Toeplitz matrix
For each diagonal block $A$, it seems that you have a fixed small distance between your nonzero diagonals. You have a nonzero diagonal, a nonzero 5th super/subdiagonal and a nonzero 10th super/subdiagonal. The spacing between the nonzero diagonals is critical. You will be able to do a symmetric permutation $PAP^T$ of each diagonal block $A$ into a block diagonal matrix with many small blocks on the main diagonal. Here $P$ is a permutation matrix. These small blocks will be banded and dense within the band. I recommend the symmetric reverse Cuthill-McKee algorithm for this kind of problem. There is an implementation in MATLAB (symrcm) and there are free C implementations available online. At first it may seem astonishing that your diagonal blocks decouple into smaller disjoint blocks. To gain an understanding of this, I recommend that you first consider a 10 by 10 matrix with a nonzero diagonal, a nonzero 2nd superdiagonal and a nonzero 2nd subdiagonal. Applying the permutation $q = (9,7,5,3,1,10,8,6,4,2)$ to the rows and columns will give you two diagonal blocks of dimension 5 which are tridiagonal. The permutation is written in MATLAB format and you should interpret it as follows: put row 9 as row 1, put row 7 as row 2, put row 5 as row 3, etc. and similarly for the columns. This is not a bad place to start (as it will dramatically simplify your problem and expose massive parallelism and give you small banded systems to solve), but you may want to look into graph reorderings and the elimination game for sparse solvers in general. I hope this helps. EDIT: Once you have obtained small banded diagonal blocks, then we can begin to discuss which algorithm is the fastest as it depends very much on the system which you are programming for, but obtaining good data locality is critical on any architecture. It is possible that your blocks will be so small that you cannot do vector operations (SIMD) in the usual manner and that you will have to interleave the data representing different blocks, but that is doable.
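As a small illustration of the decoupling, here is a sketch (not your actual matrices) of the 10 by 10 example above, showing how the stated permutation turns it into two 5 by 5 tridiagonal blocks:

```python
# The 10x10 toy example from the text: main diagonal plus 2nd super/subdiagonals,
# reordered with q = (9,7,5,3,1,10,8,6,4,2) (MATLAB, 1-based).
import numpy as np

n = 10
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 2.0                        # main diagonal
    if i + 2 < n:
        A[i, i + 2] = A[i + 2, i] = 1.0  # 2nd super/subdiagonal

p = np.array([9, 7, 5, 3, 1, 10, 8, 6, 4, 2]) - 1   # 0-based version of q
B = A[np.ix_(p, p)]                      # B = P A P^T
print(B)                                 # two decoupled 5x5 tridiagonal blocks
# For a general sparse matrix, scipy.sparse.csgraph.reverse_cuthill_mckee can
# be used to find such an ordering automatically.
```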
injective function from N to Q+
One way of showing that $f : \mathbb{N} \rightarrow \mathbb{Q}^{+}$ is injective is to show that for two arbitrary elements $a, b$, $f(a) = f(b) \implies a = b$. Consider two arbitrary elements $a, b \in \mathbb{N}$. Then, we have $$f(a) = a + 1, $$ and $$f(b) = b + 1.$$ It's pretty easy to verify that both $f(a)$ and $f(b)$ are in $\mathbb{Q}^{+}$ (if you want to prove this as well, just write each of them as a ratio of two integers). Then, if we have $a + 1 = b + 1 $ (that is, $f(a) = f(b)$), it follows that $a = b$, which is what we wanted to show.
Closed-form solution for 3D rotation angles given pre- and post-image
There are two possible solutions for each angle, and only one of them will be the correct one for a given configuration $$\begin{aligned} \theta & = 2 \arctan\left( -\frac{\sqrt{x^2+y^2-B^2}+x}{y+B}\right) \\ \theta & = 2 \arctan\left( \frac{\sqrt{x^2+y^2-B^2}-x}{y+B}\right) \\ \end{aligned}$$ $$\begin{aligned} \varphi & = 2 \arctan\left( -\frac{\sqrt{A^2+C^2-z^2}+A}{z+C}\right) \\ \varphi & = 2 \arctan\left( \frac{\sqrt{A^2+C^2-z^2}-A}{z+C}\right) \\ \end{aligned}$$ Why? I used the tangent half-angle substitution for $\theta = 2 \arctan(t)$ and $\varphi = 2 \arctan(s)$, $$\cos(\theta) = \frac{1-t^2}{t^2+1}$$ and $$\sin(\theta) = \frac{2 t}{t^2+1}$$ and similarly for $\cos(\varphi)$ and $\sin(\varphi)$. Then I solved the two quadratic equations from the y and z components of the equation $${\rm Rot}_Y(\varphi)[A,B,C] = {\rm Rot}_Z(-\theta) [x,y,z].$$ I used the standard convention for the 3×3 rotation matrices ${\rm Rot}_Y()$ and ${\rm Rot}_Z()$. The two equations I solved are $$B = y \cos \theta-x \sin \theta \\ C \cos\varphi-A \sin\varphi=z$$ which get transformed to $$ B = y \frac{1-t^2}{1+t^2}-x \frac{2 t}{1+t^2} \\ C \frac{1-s^2}{1+s^2} - A \frac{2 s}{1+s^2} = z $$ or $$ B (1+t^2)=y (1-t^2)-2 t x \\ C (1-s^2) - 2 A s = z (1+s^2) $$ with solution $$ t = \frac{\sqrt{x^2+y^2-B^2}-x}{y+B} \\ s = \frac{\sqrt{C^2+A^2-z^2}-A}{z+C} $$
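A quick numerical sanity check of these closed forms (a sketch with made-up sample values, not taken from the question) can be done like this:

```python
# Verify that theta = 2*arctan(t) satisfies B = y*cos(theta) - x*sin(theta),
# and that phi = 2*arctan(s) satisfies C*cos(phi) - A*sin(phi) = z.
import numpy as np

x, y, z = 1.0, 2.0, 0.5          # sample image components
A, B, C = 0.7, 1.5, 0.9          # sample pre-image components

t = (np.sqrt(x**2 + y**2 - B**2) - x) / (y + B)
theta = 2 * np.arctan(t)
print(np.isclose(B, y * np.cos(theta) - x * np.sin(theta)))   # True

s = (np.sqrt(C**2 + A**2 - z**2) - A) / (z + C)
phi = 2 * np.arctan(s)
print(np.isclose(C * np.cos(phi) - A * np.sin(phi), z))        # True
```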
Show that a set is finite if and only if every linear ordering on it is a well-ordering
Some axiom of choice is needed, since it is possible that there are sets which cannot be linearly ordered (and therefore must also be infinite). But once you have the axiom of choice, every infinite set has a countably infinite subset. Now think, what sort of linear ordering can be made with a set which has a countably infinite subset and it is not a well-ordering?
How likely is it to get a no-pair AND no-flush 6 card hand?
I see that what I suggested in my last comment is actually pretty much what you already did. So I think your approach is basically right, but the problem is with your $4^4\cdot\binom42$, which I take it is counting the choices for the suits. For the first four cards you have free choice, but then the last two are only restricted if the first four have three the same suit (otherwise you've already avoided a flush), so your $\binom42$ as a simple factor added in doesn't work. It's probably easier to count the number of flush arrangements and then subtract it from $4^6$. There are 4 possibilities with all $6$ cards the same suit, and then $4\cdot3\cdot4=48$ possible five-card-flushes (four choices of five-card suit, each with three choices of suit for the other card, and four choices of position for the other card). So the suits part of the expression should be $4^6 - 48$. With that replacement I think the probability evaluates to about 0.341.
Orthogonal Diagonalization of a matrix
Note: Since $P^T = P^{-1}$ for an orthogonal $P$, the equality $P^T A P = D$ is the same as $P^{-1} A P = D$. Your answer is perfectly correct, just not in a simplified form. We have: $$P^T A P = \begin{bmatrix} -6 & 0 & 0 \\ \frac{2 \left(-\sqrt{\frac{2}{3}}-\frac{1}{\sqrt{6}}\right)}{\sqrt{3}}+\sqrt{2} & \sqrt{\frac{2}{3}} \left(-\sqrt{\frac{2}{3}}-\frac{1}{\sqrt{6}}\right)-2 & 0 \\ \frac{-\frac{1}{\sqrt{2}}+2 \sqrt{2}}{\sqrt{3}}+\frac{\frac{1}{\sqrt{2}}-2 \sqrt{2}}{\sqrt{3}} & \frac{-\frac{1}{\sqrt{2}}+2 \sqrt{2}}{\sqrt{6}}+\frac{\frac{1}{\sqrt{2}}-2 \sqrt{2}}{\sqrt{6}} & \frac{-\frac{1}{\sqrt{2}}+2 \sqrt{2}}{\sqrt{2}}-\frac{\frac{1}{\sqrt{2}}-2 \sqrt{2}}{\sqrt{2}} \\ \end{bmatrix}$$ For example, simplifying the bottom rightmost value yields: $$\frac{-\frac{1}{\sqrt{2}}+2 \sqrt{2}}{\sqrt{2}}-\frac{\frac{1}{\sqrt{2}}-2 \sqrt{2}}{\sqrt{2}} = \dfrac{3}{2} + \dfrac{3}{2} = 3$$ If we simplify each value in the matrix, it reduces to: $$P^T A P = \begin{bmatrix} -6 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & 3 \\ \end{bmatrix}$$ This is obviously a diagonal matrix $D$.
To show closedness of a subset in a metric space
Assuming that $p\in X$ and $\delta > 0 $ are fixed (as stated in the comments), it seems clear that $A$ could be open (in fact this is the more "natural" conclusion, unless you can provide more details). To see this, let $p=0\in \mathbb{R}^2=X$, and let $d$ be the usual metric on $\mathbb{R}^2$, namely, $d(x,y)=|x-y|=\sqrt{(x_1-y_1)^2+(x_2-y_2)^2}$. The set $A^c$ in this case is written $$A^c=\{q\in \mathbb{R}^2 : |q| \le \delta \}$$ If $\delta=1$, then $A^c$ is the closed unit disc (or closed unit ball); and the complement of a closed set is open, so $A$ must be open. If you have any further suspicions, one can easily check that $A^c$ is indeed closed (I'll leave this as an exercise, or consult any introductory book on point-set topology).
If $t = \tanh (x/2)$, prove what $\sinh (x)$ and $\cosh (x)$ are in terms of $t$?
$$\tanh\Big(\frac{x}{2}\Big)=t=\frac{e^{x/2}-e^{-x/2}}{e^{x/2}+e^{-x/2}}\cdot\frac{e^{x/2}}{e^{x/2}}=\frac{e^x-1}{e^x+1}$$ so $$e^x(t-1)=-t-1\implies e^x=\frac{t+1}{1-t}.$$ Now substitute $e^x$ into $\sinh$ and $\cosh$: $$\sinh(x)=\frac{e^x-e^{-x}}{2}=\frac{\frac{t+1}{1-t}-\frac{1-t}{1+t}}{2}=\frac{2t}{1-t^2},$$ and similarly for $\cosh x$.
Evaluate $\int_0^\infty\left\{ \frac{1}{t} \right\}^{kn}t^{s-1}dt$ for positive integers $n,k$ and $0<\Re s<1$
Let $$F_k(s) = s\int_0^\infty \{x\}^k x^{-s-1}dx = s \int_0^\infty \left\{\frac1t\right\}^k t^{s-1}dt$$ Show that $$\int_0^x \{t\}^kdt = \frac{\lfloor x \rfloor +\{x\}^{k+1}}{k+1}$$ And that (where it converges) $$\zeta(s) = s\int_0^\infty \lfloor x \rfloor x^{-s-1}dx = \frac{s}{s-1}-s\int_1^\infty \{x\} x^{-s-1}dx, \qquad\quad F_1(s) = -\zeta(s)$$ Integrating by parts $$F_k(s) = s(s+1)\int_0^\infty \frac{\lfloor x \rfloor +\{x\}^{k+1}}{k+1} x^{-s-2}dx = s\frac{\zeta(s+1)+F_{k+1}(s+1)}{k+1}$$ i.e. $$\boxed{F_k(s) = \frac{k}{s-1} F_{k-1}(s-1)-\zeta(s)= -\sum_{m=0}^{k-1} \zeta(s-m)\prod_{l=0}^{m-1} \frac{k-l}{s-l-1}}$$
Finding the angle b/w two lines in Coordinate Geometry
Before you use the formula, you should determine what type of angle you are looking for, specifically acute or obtuse: when two lines intersect, two pairs of equal angles are formed. To specify which angle you are targeting, use your formula with $m_{2}$ as the slope of the line from which the angle is measured. If you get a negative output after taking the inverse tangent, just take its absolute value. This works because inverse tangent is an odd function; specifically, $\arctan(-x) = -\arctan(x).$ Hope this helps!
If $\mathcal{P}(A)=\mathcal{P}(B)$, then $A=B$?
You can prove: $$\bigcup \mathcal{P}(A)=A$$$$\bigcup\mathcal{P}(B)=B$$ and by hypothesis you have $\mathcal{P}(A)=\mathcal{P}(B)$ therefore $$B=\bigcup\mathcal{P}(B)=\bigcup\mathcal{P}(A)=A$$
Unique bounded function satisfying $f(s)=1+\int_{0}^{s} e^{-t^{2}} f(s t) d t$
The Banach fixed-point theorem can be used. As pointed out in the comments to On the solution of a Volterra type of integral equation, $T: C[0,\infty) \to C[0,\infty)$ defined by $$ Tf(s)= 1+ \int_{0}^s e^{-t^2} f(st)dt $$ maps bounded functions to bounded functions: $$ |Tf(s)| \le 1 + \int_{0}^s e^{-t^2} \, dt \cdot \Vert f \Vert_\infty \\ \le 1 + \int_{0}^\infty e^{-t^2} \, dt \cdot \Vert f \Vert_\infty = 1 + \frac{\sqrt \pi}{2 } \Vert f \Vert_\infty $$ and is a contraction: $$ |Tf(s) - Tg(s)| \le \int_0^s e^{-t^2} \, dt \cdot \Vert f-g \Vert_\infty \\ \le \int_0^\infty e^{-t^2} \, dt \cdot \Vert f-g \Vert_\infty = \frac{\sqrt \pi}{2 }\cdot \Vert f-g \Vert_\infty $$ because $\frac{\sqrt \pi}{2 } \approx 0.886 < 1$. For the value of $\int_0^\infty e^{-t^2} \, dt $ see for example Proving $\int_{0}^{\infty} \mathrm{e}^{-x^2} dx = \frac{\sqrt \pi}{2}$.
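A small numerical illustration of the contraction (my own sketch, not part of the proof): iterate $T$ on a grid and watch the sup-norm differences of successive iterates shrink by roughly the factor $\sqrt\pi/2$ per step. The truncation of $[0,\infty)$ and the grid sizes below are arbitrary choices.

```python
# Fixed-point iteration for Tf(s) = 1 + int_0^s exp(-t^2) f(s*t) dt on a grid.
import numpy as np

S = 10.0                               # truncation of [0, infinity) for the experiment
s = np.linspace(0.0, S, 1001)

def T(f):
    # Trapezoid-rule approximation; values of f requested beyond the grid are
    # clamped to f(S) by np.interp, which does not affect the contraction bound.
    out = np.empty_like(s)
    for i, si in enumerate(s):
        t = np.linspace(0.0, si, 201)
        g = np.exp(-t**2) * np.interp(si * t, s, f)
        dt = t[1] - t[0] if si > 0 else 0.0
        out[i] = 1.0 + dt * (g[0] / 2 + g[1:-1].sum() + g[-1] / 2)
    return out

f = np.zeros_like(s)
for _ in range(10):
    f_new = T(f)
    print(np.max(np.abs(f_new - f)))   # decreases geometrically (ratio < 0.886)
    f = f_new
```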
Permutation of the Swedish word "matematik"
The number of words that contain at least one "mat" may be evaluated by treating one set of the three constituent letters as a "macro letter". Then all $7$ letters are different and the number of admissible words is $7!$. From this, the inclusion/exclusion principle says we must subtract the number of words with two "mat"s. Here, both macros "mat" are the same, so the number of words is $\frac{5!}2$. Subtracting gives the result as $7!-\frac{5!}2=4980$.
Connected vs irreducible Variety
(See the book "Algebraic Geometry I" by Görtz and Wedhorn, Exercise 3.16.) If $X$ is a scheme such that its set of irreducible components is locally finite (i.e. every point of $X$ has an open neighborhood which is disjoint to all but finitely many irreducible components of $X$), then the following are equivalent: Every connected component of $X$ is irreducible. $\forall x \in X :$ the nilradical of $\mathcal{O}_{X,x}$ is a prime ideal (where $\mathcal{O}_{X,x}$ denotes the local ring of $X$ in $x$) Note that, for example, any locally noetherian scheme satisfies the requirement that the set of its irreducible components is locally finite. Feel free to ask for a proof, the above result is not difficult. Edit (Proof): We shall use the following facts: Fact 1: For any scheme $X$ and $x \in X$, there is a bijection between the set of those irreducible components of $X$ that contain $x$; and the set of minimal prime ideals of the local ring $\mathcal{O}_{X,x}$. (Sketch of proof: Let $U = \operatorname{Spec} A$ be an affine open subscheme containing $x$. First, use basic topology to relate the irreducible components of $U$ to those of $X$. Then, translate this to a statement involving the minimal prime ideals of $A$. Finally, use the well known 1-1 correspondence between the prime ideals of $A_\mathfrak{p}$ and the prime ideals of $A$ which are contained in $\mathfrak{p}$ (where $\mathfrak{p}$ denotes a prime ideal of $A$ and $A_\mathfrak{p}$ the localization of $A$ by $A \setminus \mathfrak{p}$.) Fact 2: Let $A$ be any ring (commutative, with unit). Then $A$ contains exactly one minimal prime ideal if and only if its nilradical is a prime ideal. (This follows easily from $\operatorname{nil}(A) = \bigcap_{\mathfrak{p} \in \operatorname{Spec} A} \mathfrak{p}$.) Fact 3: Every irreducible topological space is connected. Fact 4: Let $X$ be a topological space and $(X_i)_{i \in I}$ a family ob subspaces of $X$. Assume that all $X_i$ are connected and that $\bigcap_{i \in I} X_i \neq \emptyset$. Then $\bigcup_{i \in I} X_i$ is connected. Fact 5: Let $X$ be a topological space satisfying the following two assumptions: The set of the irreducible components of $X$ is locally finite; and $X$ is the disjoint union of its irreducible components. Then every irreducible component of $X$ is open and closed in $X$. (Sketch of proof: Let $Z$ be an irreducible component of $X$; we only have to show that $Z$ is open. Let $(U_i)_{i \in I}$ be an open cover of $X$ such that every $U_i$ meets only finitely many irreducible components of $X$. It suffices to show that $Z \cap U_i$ is open in $U_i$ for all $i$. This follows from the assumptions by writing $U_i = \bigcup (Y \cap U_i)$ where $Y$ ranges over all irreducible components of $X$.) The proof of the original statement will now be easy: 1. implies 2.: (Note that we won't need the assumption that the set of the irreducible components of $X$ is locally finite for this implication.) Let $x \in X$. By Fact 1 and Fact 2, it suffices to show that there is exactly one irreducible component of $X$ that contains $x$. So let $(X_i)_{i \in I}$ be those irreducible components of $X$ that contain $x$. By Fact 3, all $X_i$ are connected; by construction, $\bigcap_{i \in I} X_i \neq \emptyset$ (because the intersection contains at least $x$). Thus, by Fact 4, $\bigcup_{i \in I} X_i$ is connected and thereforce contained in a connected component, say $Y$, of $X$. 
That is, for all $j \in I$, we have inclusions $$ X_j \subset \bigcup_{i \in I} X_i \subset Y ,$$ where $X_j$ is an irreducible component and $Y$ a connected component of $X$. But by assumption, $Y$ is even irreducible! Therefore, $X_j = Y$ because $X_j$ is an irreducible component. We have seen that there is exactly one irreducible component of $X$ that contains $x$, namely $Y$. 2. implies 1.: By Fact 1, Fact 2 and the assumption about the local rings, $X$ is the disjoint union of its irreducible components. (Clearly, $X$ is the union of its irreducible components; this union is not disjoint if and only if there is an $x \in X$ which is contained in more than one irreducible component of $X$. But by Fact 1 and Fact 2, the local rings $\mathcal{O}_{X,x}$ for such $x$ would violate our assumption.) We may therefore use Fact 5 in the following. From now on, the proof proceeds by purely topological arguments. Let $Y$ be a connected component of $X$ and let $Z'$ be an irreducible component of $Y$. Let $Z$ be an irreducible component of $X$ which contains $Z'$. By (the purely topological) Fact 5, $Z$ is open and closed in $X$. (This is the only place where the requirement that the set of the irreducible components of $X$ is locally finite is needed.) Thus, $Z \cap Y$ is open and closed in $Y$. Since $Y$ is connected, we must have either $Z \cap Y = Y$ or $Z \cap Y = \emptyset$; however, the latter is impossible since, by construction, $Z \cap Y \supset Z'$ (and $Z'$, being an irreducible component of some space, can't be empty). Therefore, $Z \cap Y = Y$; equivalently, $Z \supset Y$. But $Y$ is a connected component and $Z$ is connected (by Fact 3, because $Z$ is irreducible by definition). Thus, $Y = Z$; in particular, $Y$ is irreducible.
How does factoring a quadratic equation relate to the parabola on the graph and to common sense?
It's not entirely clear how you use your method to factor a quadratic polynomial, but the usual goal of factorizing a general quadratic, $$ ax^2 + bx + c, $$ is that you end up with something like this: $$ ax^2 + bx + c = a(x - r_1)(x - r_2). $$ (That is, to deal with the case where $a \neq 1,$ you can factor $a$ out of the polynomial as the very first step, and then you just have to factor something where the $x^2$ term has coefficient $1$.) The numbers $r_1$ and $r_2$ are the roots of the quadratic, that is, they are the $x$ coordinates where the graph of $y = ax^2 + bx + c$ crosses the $x$ axis. The $x$ coordinate of the vertex is midway between $r_1$ and $r_2$. The $y$ coordinate is not obvious from the factorization; basically I would find the $x$ coordinate and then plug it into either the original polynomial or the factorization in order to compute the $y$ coordinate. There is also something called the vertex form of a quadratic equation, which is somewhat related to factorization (and can help you with the factorization), but it is not actually a factorization: $$ ax^2 + bx + c = a(x - h)^2 + k. $$ Now the coordinates of the vertex are $(h,k),$ but the $x$ coordinates where the graph crosses the $x$ axis are not as easy to find as in the factorization. The vertex form is not a factorization in general because it is not the product of some expressions multiplied together; it is the sum of $a(x-h)^2$ and $k.$ In the case where $k = 0$ you are left with just $a(x-h)^2,$ which is a factorization, but in other cases the $k$ prevents you from using $x - h$ as a factor of the quadratic.
Showing a Set is Inconsistent In Logic
Actual answer First of all, no, there's no obligation to use everything in $\Phi$. The set $\Phi$ is the set of sentences you're allowed to use in a proof. Adding stuff to $\Phi$ only makes it easier to prove things. You're also mixing up $\Phi$ and the set of all sentences in the language. Obviously $\Phi$ proves every sentence in $\Phi$. What you want to talk about is the sentences in general which $\Phi$ proves. Now, on to the issue of inconsistency. There are two notions of inconsistency of a set of sentences $\Phi$: $\Phi$ proves everything. For some $p$, $\Phi$ proves both $p$ and $\neg p$. Conveniently, in classical propositional logic these are equivalent: clearly if $\Phi$ is inconsistent in the first sense, it's inconsistent in the second sense, and conversely if $\Phi$ proves both $p$ and $\neg p$ we can use proof by contradiction to prove whatever we want from $\Phi$: "Suppose $\neg q$. Then ... we conclude $p$ and $\neg p$. So since we got a contradiction from $\neg q$, we have $q$." So all you need to do is find a single sentence such that $\Phi$ proves both the sentence and its negation. Irrelevant but interesting note Now note that I said "classical logic" above. There are other logics out there, and in some of them it is not the case that proving a contradiction means proving everything. For such logics we do need to distinguish between the notions of inconsistency given above: generally, the second (weaker) one is called "inconsistent" while the first is called "trivial." In these logics, and unlike classical logic, inconsistent theories may be interesting; see e.g. here.
Gamma function results
Observe the identity carefully; the correct version is this. $$\Gamma\big(n+\frac{1}{2}\big)=(n-1+\frac{1}{2})~\Gamma\big(n-1+\frac{1}{2}\big)$$
I think there is an error in this solution
The solution is incorrect. In the first step, both sides of $$\frac 58-\frac x3>\frac18-\frac23x$$ are multiplied by $24$. But in the proposed solution "$-\frac x3$" becomes "$-8\cdot 3x$", instead of "$-8x$".
Show that if $R$ is a strict partial order on $X$, and $R$ is not linear, then there exists a strict partial order $R'$ and $R' \supsetneqq R$.
Let $R$ be a strict partial order on $S$. For example, $R = \{(x,y)\}$, where $x \neq y$ are from $S$, is a strict partial order. Let $R' = \{(y,x)\}$. Clearly that is another SPO, and disjoint from $R$. How did we come about it? Well, we reversed the relation, i.e. if $(x,y) \in R$, then put $(y, x) \in R'$. This might work in the general case. Let's check. Let $R$ be an SPO. Let $R' = R \text{ with relationships reversed}$. Then clearly $R'$ is irreflexive, and transitive (check this one). And the definition of antisymmetry is the same with $x,y$ reversed (as text in the definition!!!), so it is also satisfied. And $R \cap R' = \varnothing$, or else $(x,y) \in R \cap R' \implies (y,x) \in R'$ and by transitivity $(x,x) \in R'$, which violates irreflexivity. Thus we've done way better than $R' \supsetneq R$. That proves what you set out to prove.
An interesting question on progressions:
Hint: Let $S_1=\pi^2/6$ be the known series and $S_2$ the unknown series. Since the terms of $S_1$ contain all of those in $S_2$, consider what terms remain in $S_1-S_2$. Is there a common factor in all of them?
Can you determine from the history of x,y coordinates of the mover whether he's employing the Lévy walk?
The important part is the step lengths; if you can find those from the history of coordinates, then the rest falls into place: to detect a power law distribution you can do a power-law regression, which essentially amounts to taking, for each index $k$ and matching value $x_k$, their logarithms $\log(k)$ and $\log(x_k)$, and fitting a linear regression through those values. A power-law distribution will correlate well in this situation.
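A hedged sketch of that detection step (the function name and the $R^2$ threshold are my own choices, not part of any standard library):

```python
# Extract step lengths from the coordinate history, rank them in decreasing
# order, and regress log(value) on log(rank); a heavy (power-law) tail shows
# up as a good linear fit with a negative slope.
import numpy as np

def looks_like_power_law(xs, ys, r2_threshold=0.95):
    steps = np.hypot(np.diff(xs), np.diff(ys))     # step lengths
    vals = np.sort(steps)[::-1]                    # largest first
    vals = vals[vals > 0]
    ranks = np.arange(1, len(vals) + 1)
    logk, logx = np.log(ranks), np.log(vals)
    slope, intercept = np.polyfit(logk, logx, 1)
    fit = slope * logk + intercept
    r2 = 1 - np.sum((logx - fit) ** 2) / np.sum((logx - logx.mean()) ** 2)
    return slope, r2, r2 > r2_threshold
```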
In a similar triangle, if the property is SAS, do the other two properties automatically become true?
1. No: it's easy to think up a pair of triangles which share all angles but are different sizes, so they will not be congruent, and then won't satisfy the other two. 2. Yes: think about what the law of cosines says about the third side, and you'll see the triangles are congruent, even. 3. Yes: the two triangles are congruent, hence similar.
If $\sum_{n=1}^{\infty} a_n^{2}$ converges, then so does $\sum_{n=1}^{\infty} \frac {a_n}{n}$
What you did is correct; in fact you can show that if $\{a_n\}$ and $\{b_n\}$ are two sequences of real numbers and $\sum_{n\geq 0}a_n^2$ and $\sum_{n\geq 0}b_n^2$ are convergent then the series $\sum_{n=0}^{+\infty}|a_nb_n|$ is convergent, noting that $0\leq |a_nb_n|\leq \max(a_n^2,b_n^2)\leq a_n^2+b_n^2$. Your particular case is $b_n=\frac 1n$.
What is P(Y1−Y2>3) of a given joint density function?
The second integral is correct. The first is not, and it must be $$\int_{\color{red}{3}}^{\infty}\int_{0}^{y_1-3} e^{-y_1-y_2} dy_2dy_1$$ since $y_2>0$. We have that $0<y_2<y_1-3$, which implies that $0<y_1-3$, or that $3<y_1$ in the outer integral.
Radius of convergence for $\sum_{n=1}^{\infty}a_{n}z^n $ and $ \sum_{n=1}^{\infty}\frac{1}{a_{n}}z^n $ is the same .
The Cauchy-Hadamard formula only yields $R \leqslant 1$ under the given conditions, for $$\frac{1}{R} = \limsup_{n\to\infty} \sqrt[n]{\lvert a_n\rvert} \geqslant \liminf_{n\to \infty} \sqrt[n]{\lvert a_n\rvert} = \frac{1}{\limsup_{n\to\infty} \sqrt[n]{1/\lvert a_n\rvert}} = \frac{1}{1/R} = R.$$ Your example shows that $R < 1$ is indeed possible. The common radius of convergence would have to be $1$ if $\lim\limits_{n\to\infty} \dfrac{\lvert a_n\rvert}{\lvert a_{n+1}\rvert}$ exists.
Is it natural that $\overline{\int f}=\int\bar f$?
It is common to define the integral of a complex valued function (on $\mathbb{R}^n$) via considering its real and imaginary part. That is, for $f = u + i v$ one defines $\int f = \int u + i \int v$ (and $f$ is integrable if both $u,v$ are integrable). Then the question boils down to whether $\int (-v) = -\int v$, which is a well-known property of the (real) integral.
Differential Equations Constants
I think the problem lies in this line $$-\frac{1}{5}\ln(y)+c_2=t+c_1$$ and then we have $$-\frac{1}{5}\ln(y)=t+c_1-c_2$$ then you can define $$C=c_1-c_2$$
This theory proof about instability of a point of equilibrium is not understandable for me, any help?
Let's show that there exists $t>t_0$ such that $\|\varphi(t)\|> \mathcal E_0$. Let us suppose it is not the case and let us get a contradiction. So, suppose that: $$\forall t >t_0 :\|\varphi(t)\| \leq \mathcal E_0.$$ However as long as $\varphi$ remains in $\mathbb B_{\mathcal{E}_0}= \{X : \|X\|\leq\mathcal{E}_0\}$ we have that $$V'(\varphi(t)) \geq \gamma > 0$$ (see remark below for a proof). Since $V'$ is the derivative of $V$ along the solutions of $X'=F(X)$, the function $f$ defined by $f(t)= V(\varphi(t))$ has derivative $f'(t)= V'(\varphi(t))$. So we have that $\forall t >t_0$, $\varphi$ remains in $\mathbb B_{\mathcal{E}_0}$ and so $f'(t) \geq \gamma > 0$. It is easy to prove that $$\lim_{t \to +\infty}f(t)=+\infty$$ So $\forall t >t_0$, $\varphi$ remains in $\mathbb B_{\mathcal{E}_0}$ and $$\lim_{t \to +\infty}V(\varphi(t))=+\infty$$ In particular, $V$ is unbounded on $\mathbb B_{\mathcal{E}_0}$, which is a contradiction since $V$ is continuous and $\mathbb B_{\mathcal{E}_0}$ is compact. Remark: We can prove that there is $\gamma > 0$ such that $\forall X\in\mathbb B_{\mathcal{E}_0}$, $V'(X) \geq \gamma > 0$ in the following way: We know that $\forall X \in \mathbb O(0): V'(X) \geq W(X)$ where $W$ is some continuous, positive function on $\mathbb O (0)$. And we have that $\mathbb B_{\mathcal{E}_0} \subseteq \mathbb O (0)$. So $W$ is a continuous, positive function on $B_{\mathcal{E}_0}$. So we have $\forall X\in\mathbb B_{\mathcal{E}_0}$ $$V'(X)\geq W(X)\geq\min_{Y\in B_{\mathcal{E}_0}}W(Y)> 0$$ Just make $\gamma=\min_{Y\in B_{\mathcal{E}_0}}W(Y)$.
Cross product of vector functions of different (input) dimensions?
Since $\mathbf B$ and $\mathbf C$ are vectors in $\mathbb{R}^3$, the cross product $\mathbf B\times \mathbf C$ is a vector that depends on the two vector fields and is itself a vector field $\mathbf B\times \mathbf C:\mathbb{R}^n \times\mathbb{R}^p \rightarrow \mathbb{R}^3$. The case of $\mathbf D$ and $\mathbf E$ is a special case.
Intuitive Explanation of the graph $y = \sin x$
This animation will most likely help you! Cheers! :-)
How to use general recursion to generate a set of words?
I would also include in the base case the three minimal non-empty strings in $T$, so that $\lambda,aab,aba,baa\in T$. If $u,v\in T$, and $v=xy$, then $xuy\in T$. (Note that $x$ or $y$ can be empty.) The hard part is going to be proving that this actually generates all of $T$; this will be the case if and only if every $u\in T\setminus\{\lambda,aab,aba,baa\}$ has a proper substring in $T$. (Why?) HINT: Write $u=x_1x_2\cdots x_n$, set $c_0=0$, and for $k=1,\ldots,n$ let $$c_k=\begin{cases} c_{k-1}+1,&\text{if }x_k=a\\ c_{k-1}-2,&\text{if }x_k=b\;; \end{cases}$$ note that $c_n=0$, since $u\in T$. Show that if $c_k=0$ for some $k\in\{1,\ldots,n-1\}$, $u$ has a proper substring in $T$. Suppose that $c_i>0$ for $i=1,\ldots,n-1$, and let $c_k=\max\{c_i:0\le i\le n\}$. Show that there are $i,j$ such that $0<i<k<j<n$ and $c_i=c_j$ and conclude that $u$ has a proper substring in $T$. (Use the fact that although it decreases by $2$ when it decreases, the counter $c$ only ever increases by $1$. Thus, it can skip over a value when decreasing but not when increasing.) Prove a similar result in the case that $c_i<0$ for $i=1,\ldots,n-1$. Show that the only remaining possibility is that $c_k=c_{n-1}=-1$ for some $k$ such that $2\le k<n-1$ and again conclude that $u$ has a proper substring in $T$. It may help to realize that these bullet points correspond to the following types of strings, respectively: $u=vw$ for some non-empty $v,w\in T$. Every non-empty proper initial segment of $u$ has too many $a$s to be in $T$. Every non-empty proper initial segment of $u$ has too many $b$s to be in $T$. $u$ has the form $xva$, where $xa,v\in T$, every non-empty proper initial segment of $x$ has too many $a$s, $x$ ends in $b$, and $x\notin T$.
Does the logic in my solution to this problem make sense?
Allison travels $\frac54$ times as fast as Jay. When Jay has traveled $20$ feet, Allison has traveled $25$ feet and they are at points $B$ and $C$. The line between them has slope $-\frac 54$ and you are looking for the line passing through $A$ with slope $-\frac 54$. Using the point-slope form this is $y-30=-\frac 54(x-20)$. The intercepts of this line with the axes are $(0,55)$ and $(44,0)$. Now divide by their speeds to get $5.5$ seconds.
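A quick check of the line and the timing (a sketch; the speeds $8$ ft/s for Jay and $10$ ft/s for Allison are inferred from "divide by their speeds to get $5.5$ seconds" and are not stated explicitly above):

```python
# Intercepts of the line through A = (20, 30) with slope -5/4, and the times.
from fractions import Fraction

slope = Fraction(-5, 4)
y_intercept = 30 - slope * 20                 # 55
x_intercept = 20 - Fraction(30, 1) / slope    # 44
print(y_intercept, x_intercept)               # 55 44
print(Fraction(44, 8), Fraction(55, 10))      # both 11/2 = 5.5 seconds
```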
Finding a point that is a certain distance away from a segment
The third point is going to be a linear combination of the two other points. Consider a scalar parameter $t$ defining $$ x_3 = (1-t) x_1 + t x_2 \\ y_3 = (1-t) y_1 + t y_2 $$ If you define the distance from point (2) $$ d^2 = (x_3-x_2)^2 + (y_3-y_2)^2 $$ and use the expressions for $(x_3,y_3)$ you can solve for the parameter $t$ $$ t = 1+ \frac{d}{\ell} $$ where $\ell = \sqrt{ (x_2-x_1)^2+(y_2-y_1)^2 }$ is the segment length In the end you get $$ x_3 = x_2 + \frac{d}{\ell} (x_2-x_1) \\y_3 = y_2 + \frac{d}{\ell} (y_2-y_1)$$
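A direct translation of the final formulas into code (a sketch; the function and variable names are mine): the point at distance $d$ beyond $(x_2,y_2)$ along the ray from $(x_1,y_1)$ through $(x_2,y_2)$.

```python
import math

def extend_segment(x1, y1, x2, y2, d):
    length = math.hypot(x2 - x1, y2 - y1)   # segment length l
    x3 = x2 + d / length * (x2 - x1)
    y3 = y2 + d / length * (y2 - y1)
    return x3, y3

print(extend_segment(0.0, 0.0, 3.0, 4.0, 5.0))   # (6.0, 8.0): 5 units past (3, 4)
```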
concept behind interchange of column vectors
Interchanging columns is equivalent to post-multiplying by a matrix, so you are reducing $AX = Y$ into $AXZ = YZ$ with $YZ=I$, so $A^{-1} = XZ$. Interchanging rows is pre-multiplying, but that does not help here since you must do that to $A$, not to $X$...
How do you add two fractions?
Null has given you a good way. Here's a way without worrying about the LCM: $${a\over b}+{c\over d}={ad+bc\over bd}$$ In the example, $${5\over12}+{3\over50}={(5)(50)+(12)(3)\over(12)(50)}={286\over600}={143\over300}$$ The price of not worrying about the LCM is that you get an answer, $286/600$, that isn't in lowest terms, so you have the extra step at the end of reducing the fraction.
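If you want to confirm the arithmetic above (including the reduction to lowest terms), Python's Fraction class does it exactly:

```python
from fractions import Fraction

print(Fraction(5, 12) + Fraction(3, 50))   # 143/300
print(Fraction(5*50 + 12*3, 12*50))        # 286/600, automatically reduced to 143/300
```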
numerically solving differential equations
Choose an integer $N$, let $h=2/N$ and let $\theta_k$ be the approximation given by the finite difference method to the exact value $\theta(k\,h)$, $0\le k\le N$. We get the system of $N-1$ equations $$ \frac{\theta_{k+1}-2\,\theta_k+\theta_{k-1}}{h^2}(1+\beta\,\theta_k)+\beta\,\Bigl(\frac{\theta_k-\theta_{k-1}}{h}\Bigr)^2-m^2\,\theta_k=0,\quad 1\le k\le N-1\tag1 $$ complemented with two more coming from the boundary conditions: $$ \theta_0=100,\quad \theta_N-\theta_{N-1}=0. $$ I doubt that this nonlinear system can be solved explicitly. I suggest two ways of proceeding. The first is to solve the system numerically. The other is to apply a shooting method to the equation. Choose a starting value $\theta_N=a$. The system (1) can be solved recursively, obtaining at the end a value $\theta_0=\theta_0(a)$. If $\theta_0(a)=100$, you are done. If not, change the value of $a$ and repeat the process. Your first goal is to find two values $a_1$ and $a_2$ such that $\theta_0(a_1)<100<\theta_0(a_2)$. Then use the bisection method to approximate a value of $a$ such that $\theta_0(a)=100$.
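A sketch of the shooting idea, applied here to the ODE itself (via scipy) rather than to the discrete system (1); the values of $\beta$, $m$ and the left boundary value are placeholders, since the question's actual constants are not given in this answer:

```python
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

beta, m, L, theta_left = 0.1, 1.0, 2.0, 100.0   # placeholder constants

def rhs(t, y):
    theta, dtheta = y
    return [dtheta, (m**2 * theta - beta * dtheta**2) / (1.0 + beta * theta)]

def theta_at_0(a):
    # integrate backwards from t = L with theta(L) = a and theta'(L) = 0
    sol = solve_ivp(rhs, (L, 0.0), [a, 0.0], rtol=1e-8, atol=1e-8)
    return sol.y[0, -1]

# Adjust a until theta(0) matches the left boundary value; for these
# placeholder constants the interval [0, theta_left] brackets the root.
a_star = brentq(lambda a: theta_at_0(a) - theta_left, 0.0, theta_left)
print(a_star, theta_at_0(a_star))
```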
Continuous Image of a profinite group into $T=\mathbb{R}/\mathbb{Z}$
The image is finite; here is a proof. The torus has an open neighborhood $V$ of zero which contains no non-trivial subgroups of $S^1$. Since $G$ is a profinite group, there exists an open normal subgroup $U$ of $G$ satisfying $U\subseteq \chi^{-1}(V)$. This implies $U\subseteq \operatorname{Ker}(\chi)$, since $\chi(U)$ is a subgroup of $S^1$ contained in $V$. So the map $\chi$ factors through $G/U$, which is finite ($G$ is compact and $U$ open). It follows that the image is also finite.
Choosing a committee with a constraint - where is my reasoning wrong?
Your first way is absolutely correct, and the easiest way to do it. Another, more cumbersome, way is to consider cases: exactly one woman: ${8 \choose 1}{10 \choose 2} = 8 \times 45 = 360$ ways. exactly two women: ${8 \choose 2}{10 \choose 1} = 28 \times 10 = 280$ ways. exactly three women: ${8 \choose 3} = 56$ ways. All in all: $360+280+56 = 696$ ways, confirming your own way. The second way you propose does double counting (which makes sense, as it gives a larger number). Namely, whenever there are two or more women in the committee. If there are, say, exactly two women in the committee, A and B, you can pick A first as one of the 8, and then B as one of the remaining 17. Or you can pick B first as one of the eight, and A as one of the 17. So you count that committee twice. For committees with 3 women, you count them thrice, depending on which one is chosen as one of the 8. If there is one woman only there is no double counting. So you count $360 + 2\times 280 + 3 \times 56 = 1088$.
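A brute-force confirmation of the $696$ count (a sketch; the labels are arbitrary, only the 8 women / 10 men split and the "at least one woman" constraint matter):

```python
from itertools import combinations

people = [("W", i) for i in range(8)] + [("M", i) for i in range(10)]
committees = [c for c in combinations(people, 3)
              if any(sex == "W" for sex, _ in c)]
print(len(committees))   # 696
```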
Discriminating Row Vectors and Column Vectors
The matrix consists of $2$ rows and $2$ columns, so we can write that the matrix is an element of $\mathbb{R}^{2 \times 2}$. As for your question regarding matrix multiplication, it is part of the definition: if $C=AB$, where $A\in \mathbb{R}^{m \times p}$ and $B\in \mathbb{R}^{p \times n}$, then $C\in \mathbb{R}^{m \times n}$ where $C_{ij}=\sum_{k=1}^pA_{ik}B_{kj}$. Fixing $i$ and $j$, notice that as we increment $k$, we travel along the $i$-th row of $A$ and the $j$-th column of $B$. Remark: if you want to compute $$\begin{bmatrix}1 & 2 \\ 3 & 4 \end{bmatrix}\begin{bmatrix}2 \\ 3\end{bmatrix},$$ from the definition we compute $$\begin{bmatrix} \begin{bmatrix} 1 & 2 \end{bmatrix}\begin{bmatrix}2 \\ 3\end{bmatrix} \\ \begin{bmatrix} 3 & 4 \end{bmatrix}\begin{bmatrix}2 \\ 3\end{bmatrix} \end{bmatrix}$$ but you can also verify that it is equal to $$2\begin{bmatrix} 1 \\ 3\end{bmatrix} + 3\begin{bmatrix} 2 \\ 4\end{bmatrix}.$$
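The two viewpoints from the Remark can be checked numerically; this is just a convenience sketch using numpy:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
x = np.array([2, 3])
print(A @ x)                          # row-times-column definition: [ 8 18]
print(2 * A[:, 0] + 3 * A[:, 1])      # the same vector as a combination of the columns
```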
Distance from point to parabola (quadratic bezier)
The formula gives only an approximation. If $f(\mathbf x)$ is a continuously differentiable function and the point $\mathbf a$ is "close" to the implicit curve/surface $f(\mathbf x)=0$ and the closest point of the curve is $\mathbf a_0$ then $$ f(\mathbf a) = f(\mathbf a)-f(\mathbf a_0) \approx (\nabla f)\cdot (\mathbf a-\mathbf a_0). $$ The vectors $\nabla f$ and $\mathbf a-\mathbf a_0$ must be approximately parallel, so $$ f(\mathbf a) \approx \pm ||\nabla f(\mathbf a)|| \cdot ||\mathbf a-\mathbf a_0|| = ||\nabla f(\mathbf a)|| \cdot sd. $$ If the point $\mathbf a$ is far from the curve then the formula is useless. If you need the precise distance from a parabola then it leads to some cubic equation.
let $a,b,c >0 $ and $abc=1$,prove that $\sqrt{1+8a^2}+ \sqrt{1+8b^2}+ \sqrt{1+8c^2}\leq 3(a+b+c )$
Let $$ f(a,b,c)=3\sum_{cyc}a-\sum_{cyc}\sqrt{1+8a^2}.$$ Assuming that $a\geq b\geq c$, the Cauchy-Schwarz inequality gives: $$ f(a,b,c)\geq f(\sqrt{ab},\sqrt{ab},c)\tag{1}$$ hence it is sufficient to prove the inequality in the case $(a,b,c)=\left(x,x,\frac{1}{x^2}\right):$ $$ 3\left(2x+\frac{1}{x^2}\right)-2\sqrt{1+8x^2}-\sqrt{1+\frac{8}{x^4}}\geq 0,$$ $$ 3(2x^3+1)\geq 2x^2\sqrt{1+8x^2}+\sqrt{x^4+8}$$ $$ 1+36x^3-5x^4+4x^6 \geq 4x^2\sqrt{(1+8x^2)(8+x^4)}.\tag{2}$$ To prove $(2)$ it is sufficient to prove that for any $x\in\mathbb{R}^+$ we have: $$ 1+72x^3 - 138x^4 + 2328 x^6 - 360 x^7 \geq 0 \tag{3}$$ or the still weaker: $$ 12-23x+328x^3 \geq 0,$$ $$ f(x)=1-2x+27x^3 \geq 0,\tag{4}$$ that is trivial since that cubic polynomial has a negative discriminant, hence only one real root, and since $f(0)>0$ while $f(-1)<0$ that root is between $-1$ and $0$, so the cubic polynomial is positive over $\mathbb{R}^+$.
Could you tell me if this "proof" is correct?
In full, l'Hopital says that $$\lim_{x\to a}\frac{f(x)}{g(x)} = \lim_{x\to a}\frac{f'(x)}{g'(x)} $$ if the latter exists (and of course $\lim_{x\to a} f(x)=\lim_{x\to a}g(x)=0$). Your proof (if we correct a few typos, such as replacing $f'(x)$ with $f'(a)$) works only if $f(a),g(a)$ exist, $f'(a), g'(a)$ exist, and $g'(a)\ne 0$.
How do I show that: $\cos^2(z)+\sin^2(\bar{z})=1+\mathrm i\beta $ using series evaluation?
The relation $\cos^2z+\sin^2z=1$ holds also for complex values of $z$, because the derivative is $0$ (which you can compute with the power series, of course). Thus $\cos^2z+\sin^2\bar{z}=1-\sin^2z+\sin^2\bar{z}$ and you want to prove that $$ (\sin z-\sin\bar{z})(\sin z+\sin\bar{z}) $$ is purely imaginary. Now, from the power series for the sine, you get $$ \sin z-\sin\bar{z}= \sum_{n\ge0}(-1)^n\frac{z^{2n+1}-\bar{z}^{2n+1}}{(2n+1)!} $$ which is a sum of purely imaginary numbers, whereas $$ \sin z+\sin\bar{z}= \sum_{n\ge0}(-1)^n\frac{z^{2n+1}+\bar{z}^{2n+1}}{(2n+1)!} $$ is a sum of real numbers. Alternatively, the power series expansion shows $$ \sin\bar{z}=\overline{\sin z} $$ and $w^2-\bar{w}^2$ is purely imaginary for every complex number $w$.
Being mathematically critical: how should a student approach statements that appear to be obvious?
I realize that there is a popular style, especially in undergraduate and beginning graduate coursework (e.g., in the U.S.) to behave as though every small detail were equally deserving of attention and worry. I disagree with this attitude, at least because it fails to distinguish important from unimportant details by declaring every one important. It is subtler to ask about the "obviousness" of things like Rolle's theorem or intermediate value theorem, and such. I agree, these are "obviously" true, ... or should be, and, indeed, if whatever notion of "real numbers" or "continuity" or "differentiability" we had failed to allow us to prove these things (or provided counter-examples), then more likely we'd enhance those notions however it took to make the conclusion true. Further, in practice (as opposed to exercise sets and exams in school), I think the most productive approach is to first see what interesting conclusions might arise from some intuitive, heuristic approach, and, if sufficiently interesting, go back and try to bulwark things as much as possible, to give an appropriate degree of surety to the conclusion. I do not say "absolute surety", even though that pretense is part of our mythology. (E.g., the supposed surety of Euclidean geometry, apart from parallel postulate issues, was shown to have gaps/problems by Hilbert's careful analysis.) And I do think it is especially perverse if one gets into the habit of exaggerated self-doubt, or doubt-of-intuition, as a matter of course. Easy to get nowhere from such a viewpoint, both because of paranoia, and because, after all, what guide is there for "what to do next" than some sort of (refined) intuition?
Get c.d.f. from given p.d.f.
For $-3<x<0$ your CDF should actually be $$F(x)=\int_{-3}^x \frac{1}{18} (y+3) dy.$$ This differs by a constant from what you've written, and this constant will remove the negative values that you are seeing. Similarly, for $0<x<9$ you should have $$F(x)=F(0)+\int_0^x \frac{1}{54} (9-y) dy.$$
Proof Validation: Let $f: A \to B$. Then $f^{-1}f$ is the identity on $\mathcal{P}(A)$ if 1-1, and $ff^{-1}$ the identity on $\mathcal{P}(B)$ if onto.
The problem is abusing notation rather badly by using $f$ to represent two different functions. We are told that $f:A\to B$, so the domain of $f$ is $A$. We are then asked to show that if $f$ is injective, then $f^{-1}f$ is the identity on $\wp(A)$. This is nonsense: the domain of the composite function $f^{-1}f$ is necessarily the same as the domain of $f$, which is $A$, not $\wp(A)$. What is true is that $f$ induces a function $$F:\wp(A)\to\wp(B):S\mapsto f[S]\overset{\text{def}}=\{f(a):a\in S\}\,.$$ (Note the square brackets in $f[S]$: this is a standard notation used to show that the argument $S$ is a subset of the domain of $f$, not an element of it, and that $f[S]$ is the set of images under $f$ of the elements of $S$.) The function $F$ is very closely related to $f$, but $f$ and $F$ are not the same function. The function $f$ also induces a function $$G:\wp(B)\to\wp(A):T\mapsto f^{-1}[T]\overset{\text{def}}=\{a\in A:f(a)\in T\}\,,$$ and the result that you're asked to prove is really that $GF=\operatorname{id}_{\wp(A)}$ if $f$ is injective, and $FG=\operatorname{id}_{\wp(B)}$ if $f$ is surjective. Thus, it really is a question about mappings of sets. In the first part of your proof you say that for each $T\in\wp(B)$ there is an $S\in\wp(A)$ such that $f[S]=T$. This is true and is the crux of the argument, but you haven't actually justified it. That a justification is required would be more readily apparent if the statement of the problem had carefully distinguished $f$ and $F$, because what you actually need to show here is that for each $T\in\wp(B)$ there is an $S\in\wp(A)$ such that $F(S)=T$; that is, you need to show that $F$ is surjective if $f$ is. This is of course very easy, but it's an essential part of the argument, since $f$ and $F$ are not the same function. Here is one possible argument: Let $T\in\wp(B)$; $f$ is surjective, so for each $b\in T$ there is an $a_b\in A$ such that $f(a_b)=b$. Let $S=\{a_b:b\in T\}$; then $$F(S)=f[S]=\{f(a_b):b\in T\}=\{b:b\in T\}=T\,,$$ as desired. Now we can argue as follows. Let $T\in\wp(B)$; we'll be done if we can show that $(FG)(T)=T$. Let $S=G(T)=\{a\in A:f(a)\in T\}$; we want to show that $F(S)=T$. Clearly $$\begin{align*} F(S)&=F(\{a\in A:f(a)\in T\})\\ &=f[\{a\in A:f(a)\in T\}]\\ &=\{f(a):f(a)\in T\}\\ &\subseteq T\,. \end{align*}$$ On the other hand, we know (since $F$ is surjective) that there is a $U\in\wp(A)$ such that $F(U)=T$. Clearly $f(a)\in T$ for each $a\in U$, so $U\subseteq S$, and therefore $$\begin{align*} T&=F(U)\\ &=\{f(a):a\in U\}\\ &\subseteq\{f(a):a\in S\}\\ &=F(S)\,, \end{align*}$$ and it follows that $F(S)=T$. The beginning of the second half of your argument is easily modified to show that $F$ is injective if $f$ is. Now you want to show that if $S\in\wp(A)$, then $(GF)(S)=S$. We can use the injectivity of $F$. $$\begin{align*} (GF)(S)&=G(\{f(s):s\in S\})\\ &=\big\{a\in A:f(a)\in\{f(s):s\in S\}\big\}\,, \end{align*}$$ so $$\begin{align*} F\big((GF)(S)\big)&=F\left(\big\{a\in A:f(a)\in\{f(s):s\in S\}\big\}\right)\\ &=\big\{f(a):f(a)\in\{f(s):s\in S\}\big\}\\ &=\{f(s):s\in S\}\\ &=F(S)\,, \end{align*}$$ and since $F$ is injective, we conclude that $(GF)(S)=S$. In this case I really think that what you needed to show is to a significant extent hidden by the very sloppy statement of the problem in the first place. 
Combine that with the fact that the result is intuitively pretty clear anyway, and it’s not surprising that it wasn’t clear just what actually has to be proved here. I’ve tried to make that clearer both by being much more careful with my notation and by including a great deal of detail in order to show what is really going on behind the intuitive understanding.
Does there exist a function $f: \mathbb{R} \to \mathbb{R}$ that is differentiable only at $0$ and at $\frac{1}{n}$, $n \in \mathbb{N}$?
Let $$g(x)=\begin{cases} \exp\left(-\frac{1}{x(1-x)}\right) & \text{if } 0\le x \le 1 \text{ and } x \text{ is irrational} \\ 0 & \text{elsewhere.} \end{cases}$$ Then, in the interval $[0,1]$, $g(x)$ is continuous and differentiable only at $x=0$ and $x=1$ (and all the derivatives there are zero). Then copy and paste $y$-scaled, $x$-scaled and translated copies of it, one on each interval $\left[\frac1{n+1},\frac1n\right]$: $$ f(x)=\sum_{n=1}^\infty \frac{g\bigl(n(n+1)x-n\bigr)}{n^2}$$
Computing $\kappa^{<\lambda}$, for cardinals $\kappa$ and $\lambda$
$$\begin{align*} \sup\{\kappa^\theta : \theta < \lambda,\ \theta\mbox{ cardinal}\} &= \sup\{|{}^\theta\kappa| : \theta < \lambda,\ \theta\mbox{ cardinal}\} \\ &= \sup\{|{}^{\leq\theta}\kappa| : \theta < \lambda,\ \theta\mbox{ cardinal}\}\\ &= \sup\{\theta^+\cdot|{}^{\leq\theta}\kappa| : \theta < \lambda,\ \theta\mbox{ cardinal}\}\\ &= \sup\{|{}^{<\theta^+}\kappa| : \theta < \lambda,\ \theta\mbox{ cardinal}\}\\ &= \left|\bigcup\{{}^{<\theta^+}\kappa : \theta < \lambda,\ \theta\mbox{ cardinal}\}\right|\\ &= \left|\bigcup\{{}^\alpha\kappa: \alpha < \lambda\}\right| \end{align*} $$ Each line can be justified as follows: Definition of exponentiation. There is an injection $F : {}^{\leq\theta}\kappa \to {}^\theta\kappa$ defined by: $$F(f)(\beta) = \begin{cases}f(\beta)+1, & \beta \in \mathrm{dom}(f)\\ 0, & \mbox{otherwise}\end{cases}$$ Since cardinal multiplication is just the maximum of the two cardinals, and $$|{}^{\leq\theta}\kappa| \geq \kappa^\theta \geq 2^\theta \geq \theta^+$$ For each $\theta < \alpha < \theta^+$, fix a bijection $b_\alpha : \theta \to \alpha$. Then we get an injection $G : {}^{<\theta^+}\kappa \to \theta^+\times{}^{\leq\theta}\kappa$ defined by: $$G(f) = \begin{cases}(\mathrm{dom}(f), f), & \mathrm{dom}(f) \leq \theta\\ (\mathrm{dom}(f), f\circ b_{\mathrm{dom}(f)}), & \mbox{otherwise}\end{cases}$$ Because $\{{}^{<\theta^+}\kappa : \theta < \lambda,\ \theta\mbox{ cardinal}\}$ forms an increasing $\subseteq$-chain of sets. Because the two unions are the same set.
How to find $\det(-6A)$, if $\det A=-4$?
Hint: The determinant is multilinear in the columns. Let $A=(a_1,\ldots,a_n)$. Thus $$\det(kA)=\det (ka_1,\ldots,ka_n)=k^n\det(a_1,\ldots,a_n)=k^n\det A$$
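A numerical illustration of the hint (a sketch; the example matrix and its size $n=3$ are my own choices, since the question does not specify $n$):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
k, n = -6.0, 3
print(np.linalg.det(k * A), k**n * np.linalg.det(A))   # equal up to rounding
```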
Analytically solve $\Phi(\frac{d-\mu}{\sigma}) = c$ for $\mu$
$\mu = d - \sigma \Phi^{-1}(c)$.
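A quick check with scipy (a sketch; $d$, $\sigma$ and $c$ are sample values): $\Phi((d-\mu)/\sigma)=c$ is equivalent to $\mu = d - \sigma\,\Phi^{-1}(c)$.

```python
from scipy.stats import norm

d, sigma, c = 2.0, 1.5, 0.8
mu = d - sigma * norm.ppf(c)        # Phi^{-1} is norm.ppf
print(norm.cdf((d - mu) / sigma))   # 0.8
```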
Must $f$ be continuous?
No. For example, take $f : \mathbb{R} \rightarrow \mathbb{R}$ defined by $f(x) = 0$ if $x < 0$ and $f(x) = 1$ if $x \geq 0$.
Proving equivalence relation $a∼b \iff \left(a^2-b^2\right)\left(a^2b^2-1\right)=0$
$$(a^2-b^2)(a^2b^2-1)=0\iff (b^2-a^2)(b^2a^2-1)=0.$$
How to split a vector into components in non rectangular cartesian coordinate systems?
Suppose you are given a basis of dimension N: $\beta = \{\beta_1,\beta_2, \ldots, \beta_N\}$. We would like to find the coefficients $\{c_1,c_2,\ldots,c_N\}$ such that our vector $v=c_1\beta_1 + c_2\beta_2 + ... + c_N\beta_N$. Equivalently, we would like to solve the following system: $A c = v$ where $A = \begin{bmatrix} \beta_1 & \beta_2 & \cdots & \beta_N \end{bmatrix}$. Gaussian elimination does exactly this!
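A small numerical instance (my own example basis, just to illustrate the linear system $Ac=v$):

```python
import numpy as np

beta1 = np.array([1.0, 1.0])
beta2 = np.array([1.0, -1.0])
A = np.column_stack([beta1, beta2])   # basis vectors as columns
v = np.array([3.0, 1.0])
c = np.linalg.solve(A, v)
print(c)                              # [2. 1.], so v = 2*beta1 + 1*beta2
print(c[0] * beta1 + c[1] * beta2)    # recovers v
```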
Mixed up with hierarchy of $L_p$ spaces
This is a more expanded version of 251257's comment: Your function is bounded above by any convex function that satisfies $f(1-2^{-n})\geq 2^{(n+1)/2}/n$; in particular, by $f(x)=2(1-x)^{-1/2}$. This integrates to $4\sqrt{1-x}$ (unless I screwed up the constant; calculus is hard), so you do have a finite $L^1$ norm. [You don't really need to bring in the convexity here; I just like integrals more than sums...]
Poisson Process expectation of time of an event given number of events until that time shows Uniform distribution characteristics
For a Poisson process with rate $\lambda$, the following general result holds. Given $N_{t_0} = n$, the conditional joint distribution of $S_1, \ldots, S_n$ is the same as the joint distribution of the order statistics $U_{(1)} < \cdots < U_{(n)}$ of $n$ i.i.d. $\text{Uniform}(0, t_0)$ random variables $U_1, \ldots, U_n$. This result is much stronger than you need for this particular problem, but helps explain why you observe something related to uniform random variables in your computations. If we use the above result, your problem becomes $$E[U_{(n)}] = \int_0^{t_0} P(U_{(n)} > t) \, dt = \int_0^{t_0} (1 - (t/t_0)^n) \ dt$$ for $t_0 = 6$ and $n=10$ which is the resemblance you observed. To prove the above result, I show now that the conditional joint density of $(S_1, \ldots, S_n)$ given $N_{t_0} = n$ is precisely that of the joint density of the order statistics of $n$ i.i.d. $\text{Uniform}(0, t_0)$ random variables. For $0 < s_1 < \cdots < s_n < t_0$ the density is $$f_{S_1, \ldots, S_n \mid N_{t_0} = n}(s_1, \ldots, s_n) \, ds_1 \cdots \, ds_n \approx \frac{P(S_1 \in [s_1, s_1 + ds_1), \ldots, S_n \in [s_n, s_n + ds_n), N_{t_0} = n)}{P(N_{t_0} = n)}.$$ The numerator is \begin{align} &P(S_1 \in [s_1, s_1 + ds_1), \ldots, S_n \in [s_n, s_n + ds_n), N_{t_0} = n) \\ &= P(\text{one arrival in each interval $[s_i, s_i + ds_i)$ for $i=1,\ldots, n$, and no other arrivals in $[0, t_0]$}) \\ &= P(N_{[s_1,s_1 + ds_1)} = 1) \cdots P(N_{[s_n,s_n + ds_n)} = 1) P(\text{no other arrivals in $[0, t_0]$}) \\ &\approx \lambda\, ds_1 \cdot \lambda \, ds_2 \cdots \lambda \, ds_n \cdot e^{-\lambda t_0} \\ &= \lambda^n e^{-\lambda t_0} \, ds_1 \cdots \, ds_n \end{align} Above, we used the fact that $P(N_{[s_i,s_i + ds_i)} = 1) = e^{-\lambda \, ds_i} (\lambda \, ds_i) \approx \lambda \, ds_i$ since $e^{-\lambda \, ds_i} = 1 + O(\,ds_i)$. Dividing by $P(N_{t_0} = n) = e^{-\lambda t_0} (\lambda t_0)^n/n!$ yields the conditional density $$\frac{1}{n!} \, ds_1 \cdots \, ds_n.$$ There are various arguments to show that this is the density of $(U_{(1)}, \ldots, U_{(n)})$, the order statistics of i.i.d. $\text{Uniform}(0, t_0)$ random variables.
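A Monte Carlo check of the result (my own sketch): simulate the Poisson process directly from exponential interarrival times, keep the runs with exactly $10$ arrivals in $[0,6]$, and compare the average last arrival time with $6\cdot\frac{10}{11}$. The rate used below is arbitrary, since it drops out of the conditional law.

```python
import numpy as np

rng = np.random.default_rng(0)
t0, n, lam, trials = 6.0, 10, 2.0, 100_000   # lam = 2 makes N_6 = 10 reasonably likely

s_n = []
for _ in range(trials):
    arrivals, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam)      # exponential interarrival times
        if t > t0:
            break
        arrivals.append(t)
    if len(arrivals) == n:
        s_n.append(arrivals[-1])             # S_n, the largest arrival time

print(np.mean(s_n), t0 * n / (n + 1))        # both approximately 5.4545
```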
Computing the pdf of two independent Gaussians conditioned on being inside the circle
STANDARD NUMERICAL APPROACH It is kinda bizarre in fact, but if I understand the situation, you're trying to evaluate the integral $$\iint_D f(u)f(v) \, dA$$ with $f(t)$ the standard normal density ($f(t)=\tfrac1{\sqrt{2\pi}}e^{-t^2/2}$), and while it is not an issue (I assume) that the integration has to be done numerically, it is an issue that $D$ is not nice to describe. Anyway, I believe that $\partial D$ is the union of segments and arcs of the circumference in such a way that the worst case is two of each type, and that the border is not that hard to parameterize since the arcs are the images of intervals —of the form $\left(\arcsin (y/r),\arccos (x/r)\right)$— and variations of this depending on each case by the function $\sigma(t)=(r \cos t,r \sin t)$. So if parameterization of the border for each case is messy but not that much, you could consider using Green's Theorem and do $$\iint_D f(u)f(v) \, dA=\frac12\int_{\partial D} F(x)f(y)\, dy - f(x)F(y)dx,$$ where $F(t)$ is the standard normal distribution function (a primitive of $f$), and since the last integral ends up being a Riemann integral, numerical integration is now quite straightforward. Of course there are still a couple of different cases to study, but considering some symmetries and other things, it is not as bad as before. For instance, all the cases in which $x_0^2+y_0^2\leq r^2$, no matter the quadrant on which $(x_0,y_0)$ lies, correspond to integration over a region bounded by an arc of circumference closed by a horizontal segment above (at ordinate $y_0$) and a vertical segment to the right (at abscissa $x_0$). These are parameterized by: $$\sigma_1(t)=(x_0,t),\quad y_1 \leq t\leq y_0,$$ $$\sigma_2(t)=(x_0-t,y_0),\quad 0\leq t\leq x_0-x_1,$$ $$\sigma_3(t)=(r\cos t,r\sin t),\quad t_y \leq t \leq t_x,$$ where $$x_1=-\sqrt{r^2-y_0^2}$$ $$y_1=-\sqrt{r^2-x_0^2}$$ $$t_x=2\pi-\arccos(x_0/r)$$ $$t_y=\pi-\arcsin(y_0/r).$$ The only other cases worth analyzing occur when $x_0^2+y_0^2>r^2$ and are: $x_0>r$ and $y_0\leq r$ $y_0>r$ and $x_0\leq r$ (by symmetry it can be approached with the previous case) $0<y_0\leq r$ and $0<x_0\leq r$ (here $\partial D$ has two arcs and two segments). The remaining cases of course give $0$ or $1$ conditional probability. MONTE CARLO APPROACH If you prefer to use a Monte Carlo method, just generate two standard normal values $x_1$ and $y_1$ independently and check whether $x_1\leq x$, $y_1\leq y$ and $x_1^2+y_1^2\leq r^2$ are ALL true or not. Then generate $x_2$ and $y_2$ independently and standard normal and check again... and so on. The proportion of 'successes' to repetitions will be an estimate of that probability. Remember to divide by $P(Z_1^2+Z_2^2\leq r^2)$ to calculate the conditional probability. That one can be analytically obtained or you can also calculate its ratio of successes to repetitions in the previous simulation as an estimate, and then divide both estimations, or divide directly both numbers of 'successes'. Do NOT try to generate random points of the disk at once, since most likely you will not respect the distribution of both coordinates and/or their independence. The safest (and perhaps essentially the only) way to simulate this model is to pick the two coordinates independently and eventually disregard those that do not fall in the circle. I haven't implemented either one yet, but the Monte Carlo method is way easier to program. 
Anyway, it might be more time consuming than the other (once programmed) for a given precision (which would be an actual precision in the first case and just a good confidence precision in the second case). If it were my choice, I would use the Monte Carlo method in the beginning of the investigation, but for a more profound and rigorous work I'd work on implementing the first option.
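For reference, a direct implementation of the Monte Carlo recipe described above (a sketch; the values of $x$, $y$ and $r$ are sample placeholders, not taken from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, r, N = 0.5, 0.3, 1.2, 1_000_000

u = rng.standard_normal(N)                 # the two coordinates drawn independently
v = rng.standard_normal(N)
in_disk = u**2 + v**2 <= r**2
success = in_disk & (u <= x) & (v <= y)
print(success.sum() / in_disk.sum())       # estimate of P(U <= x, V <= y | inside the disk)
```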
Derivative at $x=0$ of $y=x\sqrt{x(4-x)}$
Seems like WA uses the standard differentiation formulas, tries to substitute $0$ for $x$, and finds a division by $0$. The theorem about $\sqrt u$ tells you that this function is differentiable when $u$ is differentiable and strictly positive. If you apply it to your function, you'll find that the theorem applies only on $]0,4[$ (and in fact, the function has no derivative at $4$). So if you want to check whether you can differentiate at $0$, you have to either 1) go back to the definition: $$\frac{f(x)-f(0)}{x-0}=\sqrt{x(4-x)}\xrightarrow[x\to 0]{}0$$ 2) use the limit of the derivative (I don't know the name of the theorem in English): if $f$ is continuous on $[a,b]$, differentiable on $]a,b]$ and if $f'(x)$ has a finite limit $\ell$ when $x$ tends to $a$, then $f$ is differentiable at $a$ and $f'(a)=\ell$. This theorem applies here too.
Demonstration using the Pigeonhole principle
You don't use the pigeonhole principle, but the double counting principle. Note that since the matrix is symmetric, if an entry appears in the strict upper triangle, it appears again in the strict lower triangle, so off the diagonal it occurs an even number of times. The only way for a number to appear an odd number of times is therefore to appear an odd number of times on the diagonal. Hence, each such number must appear at least once on the diagonal.
Expanding a list of items.
You could make a list of all binary sequences of length $6$ (strings of $1$'s and $0$'s). The $1$'s in each sequence then correspond to the elements you choose in that combination. This can be done fairly systematically: $$ 100000 \\ 010000 \\ \vdots \\ 000001 \\ 110000 \\ 101000 \\ 100100 \\ $$ etc.
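If you want a machine to produce that list, here is a short Python sketch (the six item labels are placeholders for whatever your actual list contains):

```python
items = ["A", "B", "C", "D", "E", "F"]

for mask in range(1, 2**6):                      # every nonempty 0/1 pattern of length 6
    bits = format(mask, "06b")                   # e.g. "100100"
    chosen = [it for it, b in zip(items, bits) if b == "1"]
    print(bits, chosen)
```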
"Almost" in the kernel
As mentioned in the comments section, there are two major problems: Such a notion of "almost kernel" needs a notion of distance, thus a metric is necessary. The resulting set is not a linear subspace. So, let $M$ be some $n\times m$ matrix with elements from some field $A$ and $V$ be the vector space $A^{m\times1}$ of all columns of length $m$ with elements from $A$. Also, let $d$ be some metric on $V$. Then, one may define for every $\epsilon>0$ the $\epsilon-$kernel of $M$ as: $$\ker_\epsilon M:=\{x\in V\mid d(Mx,\mathbf{0})\leq\epsilon\},$$ or, alternatively: $$\ker_\epsilon M:=M^{-1}(\hat{B}(\mathbf{0},\epsilon)),$$ where, $\mathbf{0}$ is the zero element of $V$, $\hat{B}(x,r)$ denotes a closed ball with center $x$ and radius $r$ and $M$ is viewed as a linear map $M:V\to V$. Now, note that the word "ball" is not used just for a nice generalization of our intuition; it also implies some kind of symmetry in our case. More specifically, a "ball" has a non-linear symmetry that makes it impossible for it to be a linear subspace, apart from "trivial" situations. By "trivial", we mean metrics that are not derived from norms (one may object that these are interesting cases and, indeed, they are, but we will examine them later on). So, suppose that $d$ is derived from some norm $\lVert\cdot\rVert$. Let $B=\{u_1,u_2,\ldots,u_m\}$ be some base of $V$. Then, the vectors $$v_k:=\left\{\begin{array}{ll} \dfrac{\epsilon u_k}{2\lVert Mu_k\rVert} & u_k\not\in\ker M\\ u_k & u_k\in\ker M \end{array}\right.,$$ are linearly independent and: $$\lVert Mv_k\rVert=\frac{\epsilon}{2},$$ for $u_k\not\in\ker M$. Hence $v_k\in\ker_\epsilon M$ for every $k=1,2\ldots,m$. Supposing that $\ker_\epsilon M$ is a linear subspace of $V$, we have that $\dim\ker_\epsilon M\leq m$. Also, the $v_k$ being linearly independent implies that $\dim\ker_\epsilon M=m$. Since $V$ and $\ker_\epsilon M$ are both finite-dimensional of equal dimension with $\ker_\epsilon M\leq V$, we get that $\ker_\epsilon M=V$, which implies that $\ker M=V$ and, hence, $M$ is the zero matrix. So, "non-trivial" metrics give trivial results (the only matrix that fulfils our "wishes" is the trivial one). However, what if we demand that $\ker_\epsilon M$ is a non-trivial linear subspace of $V$, with $M\neq0$? Of course, from the above it is obvious that $d$ is not derived from any norm. Before starting, note that one metric that is "almost" (no pun intended) what we would wish is the discrete metric: $$\delta(x,y)=\left\{\begin{array}{ll} 1 & x\neq y\\ 0 & x=y \end{array}\right..$$ Indeed, for $\epsilon<1$ we get $\ker_\epsilon M=\ker M$ and for $\epsilon\geq1$, $\ker_\epsilon M=V$ - these are linear subspaces, but nothing beyond the ordinary kernel is gained.
Given input and output values of a function with unknown coefficients, find the optimal coefficients
This is a typical parameter fitting problem that can be solved with optimization methods. Let's denote your relationship as $y = f(\vec{k},\vec{x})$ and your data as $(y_n, \vec{x}_n)$. For some hypothesis $\vec{k}_\text{H}$ express the squared error as a cost function $$C(\vec{k}_\text{H}) = \sum_{n=1}^N (y_n - f(\vec{k}_\text{H},\vec{x}_n))^2$$ and find your true coefficients $\vec{k}$ by cost function minimization $$\vec{k} \in \text{argmin} \ C(\vec{k}_\text{H})$$ which can be tackled with gradient-based solvers like Levenberg-Marquardt or trust region methods. You'll encounter the typical problems with local cost function minima. EDIT Aniruddha Deshmukh is right, this can be transformed into a linear problem. Every data point gives you an equation $$(x_{1,n} - y_n) k_1 + (x_{2,n} - y_n x_{2,n} - y_n x_{3,n}) k_2 + (x_{4,n}-y_n ) k_3 = 0$$ for $n = 1 \ldots N$, so you'll get an overdetermined linear system of $N$ equations in three variables and can compute the least squares solution (e.g., with the pseudo-inverse). The catch is that this is a homogeneous system, whose least squares solution is all zeros. However, note that you can apply any scaling to your $k$-values without changing the function because of the fraction. Thus you can fix one of the coefficients, e.g., set $k_3 = 1$. This way you obtain an inhomogeneous system of equations $$(x_{1,n} - y_n) k_1 + (x_{2,n} - y_n x_{2,n} - y_n x_{3,n}) k_2 = y_n -x_{4,n}$$ and non-vanishing $k_1, k_2$.
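As a rough illustration of the linear route (not the asker's actual data: the true values $k_1=2$, $k_2=3$ with $k_3$ fixed to $1$ are made up, and the data are generated to satisfy the linearized equation above):

```python
import numpy as np

rng = np.random.default_rng(1)
k1_true, k2_true = 2.0, 3.0                      # hypothetical true coefficients, k3 = 1

# Synthetic data consistent with the homogeneous equation above (with k3 = 1)
x1, x2, x3, x4 = rng.uniform(1, 5, size=(4, 50))
y = (k1_true * x1 + k2_true * x2 + x4) / (k1_true + k2_true * (x2 + x3) + 1)

# Overdetermined inhomogeneous system  A [k1, k2]^T = b
A = np.column_stack([x1 - y, x2 - y * x2 - y * x3])
b = y - x4
k_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(k_est)                                     # should recover roughly [2, 3]
```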
Sequence is union of convergent subsequences. Are the limits of the subsequences the only cluster points the sequence has?
We need to cover the set of indices, not the set of terms of the sequence. As written it is not ruled out that some point $x$ is attained infinitely often by the sequence $(a_n)$, but each of the subsequences attains $x$ only a finite number of times. Then $x$ is a cluster point of $(a_n)$, but it need not be the limit of any of the subsequences. If the indices used in the subsequences cover $\mathbb{N}$ except perhaps for finitely many $n$, then the conclusion holds. Let's use a different notation for subsequences which I find clearer. Given a sequence $(a_n)_{n\in\mathbb{N}}$, its subsequences are in bijective correspondence with the strictly increasing maps $\mathbb{N}\to \mathbb{N}$. Thus we can write a subsequence of $(a_n)_{n\in\mathbb{N}}$ as $(a_{\sigma(n)})_{n\in\mathbb{N}}$. So, suppose we have a real sequence(1) $(a_n)_{n\in\mathbb{N}}$ and $m$ convergent subsequences $\bigl(a_{\sigma_i(n)}\bigr)$ such that $$E := \mathbb{N}\setminus \bigcup_{i = 1}^m \sigma_i(\mathbb{N})$$ is finite. Then $$L := \Bigl\{ \lim_{n\to\infty} a_{\sigma_i(n)} : 1 \leqslant i \leqslant m\Bigr\}$$ is the set of all cluster points of $(a_n)_{n\in \mathbb{N}}$. Proof: I omit the proof that the limit of a subsequence is a cluster point of the sequence and only show that $c \notin L$ implies that $c$ is not a cluster point of $(a_n)$. Fix $c\notin L$. Let $$\delta = \min \Bigl\{ \bigl\lvert c - \lim_{n\to\infty} a_{\sigma_i(n)}\bigr\rvert : 1 \leqslant i \leqslant m\Bigr\}.$$ As the minimum of a finite set of strictly positive numbers, $\delta$ is itself strictly positive. Let $\varepsilon = \delta/2$. By the assumed convergence of the subsequences, for each $i$ there is an $n_i$ such that $$n \geqslant n_i \implies \bigl\lvert a_{\sigma_i(n)} - \lim_{k\to\infty} a_{\sigma_i(k)}\bigr\rvert < \varepsilon.$$ Define $$N = \max \Bigl(\bigl\{ \sigma_i(n_i) : 1 \leqslant i \leqslant m\bigr\} \cup \{n_E\}\Bigr)$$ where $n_E = 0$ if $E = \varnothing$ and $n_E = 1 + \max E$ if $E\neq \varnothing$. Then we have $\lvert a_n - c\rvert > \varepsilon$ for $n \geqslant N$. For, if $n \geqslant N$, then $n \notin E$, so there is an $i$ with $n \in \sigma_i(\mathbb{N})$. Since $\sigma_i(n_i) \leqslant N \leqslant n$, it follows that $k = \sigma_i^{-1}(n) \geqslant n_i$ by the monotonicity of $\sigma_i$. Hence, by the choice of $n_i$, we have $$\bigl\lvert a_n - \lim_{k\to\infty} a_{\sigma_i(k)}\bigr\rvert < \varepsilon$$ and consequently $$\lvert a_n - c\rvert \geqslant \bigl\lvert c - \lim_{k\to\infty} a_{\sigma_i(k)}\bigr\rvert - \bigl\lvert a_n - \lim_{k\to\infty} a_{\sigma_i(k)}\bigr\rvert > 2\varepsilon - \varepsilon = \varepsilon.$$ Thus $c$ is not a cluster point of $(a_n)$. (1) It's not important that it's a sequence in $\mathbb{R}$; any metric space - in fact any first countable Hausdorff space - would work, with [almost] the same proof.
Integral of $\mathrm{sech}(x)$
One way to validate your own answer is by differentiating your result and checking that you recover the integrand: $$\frac{d}{dx}\tan^{-1}(\sinh(x)) = \frac{1}{1+\sinh^2(x)}\cdot\cosh(x) = \frac{\cosh(x)}{\cosh^2(x)} = \frac{1}{\cosh(x)} = \operatorname{sech}(x).$$ Since the derivative of the proposed antiderivative equals the integrand, the method presented is a valid one.
Showing $\int_{a}^{b} \left\lfloor x \right\rfloor dx + \int_{a}^{b} \left\lfloor -x \right\rfloor dx=a-b$
What happens at the integers does not matter since they have content $0$. Suppose $$n < x < n + 1.$$ Then $\lfloor x \rfloor = n$, and $-n > -x > -(n+1)$, so $\lfloor -x \rfloor = -(n+1)$. Adding gives $\lfloor x\rfloor + \lfloor -x \rfloor = -1$.
Is the polynomial $x^5+5x-10$ solvable by radicals over $\Bbb Q$?
Let's recall the following. Theorem (Dedekind): If an irreducible polynomial $f(x)$ factors $\bmod p$ into factors with degrees $d_1, d_2, \dots$, then the Galois group of $f$ contains an element of cycle type $(d_1, d_2, \dots)$. Theorem (Frobenius / Chebotarev): Every cycle type in the Galois group occurs this way, with density proportional to its density in the Galois group. Furthermore, transitive subgroups of $S_5$ (and I think maybe also $S_6$?) are determined up to conjugacy by the cycle types of their elements, so factorizations mod a prime eventually identify all Galois groups of irreducible quintics. The complete list of transitive subgroups of $S_5$ (up to conjugacy) is $C_5$ $D_5$ The Frobenius group $x \mapsto ax + b, a \in \mathbb{F}_5^{\times}, b \in \mathbb{F}_5$ $A_5$ $S_5$. Using WolframAlpha we find that $f(x) = x^5 + 5x - 10$ is irreducible $\bmod 3$, so the Galois group contains a $5$-cycle (but this already follows from transitivity), a product of an irreducible quadratic and cubic $\bmod 7$, so the Galois group contains a permutation of cycle type $(2, 3)$ and we can stop here: already $S_5$ is the only transitive subgroup of $S_5$ containing an element of cycle type $(2, 3)$.
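If you'd rather not rely on WolframAlpha, the factorizations mod $3$ and mod $7$ can be double-checked with sympy, assuming its `factor` function with the `modulus` keyword (just a sanity check, not part of the argument):

```python
from sympy import symbols, factor

x = symbols('x')
f = x**5 + 5*x - 10

print(factor(f, modulus=3))   # expected: stays degree 5, i.e. irreducible mod 3
print(factor(f, modulus=7))   # expected: an irreducible quadratic times a cubic
```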
How to show that $C[a,b]$ is infinite dimensional?
Hint: this does not have anything to do with the norm. You can for instance observe that $C[a,b]$ contains the subspace of all polynomials, which is infinite-dimensional.
Proving that a Particular Set Is a Vector Space
This concept really threw me for a loop when I took linear algebra and afterwards, but what you're using in that step is the fact that the real numbers themselves are commutative, since $f(t)$ and $g(t)$ are real numbers. You can just quote the fact that addition of real numbers is commutative. You couldn't do it before you evaluated, since $f$ is not a number until you evaluate. Now, on another note, you can't say that $f+g=f(t)+g(t)$, but you can say that $(f+g)(t)=f(t)+g(t)$.
A problem about abelian group
I will try and generalize Nicky Hekster's solution to the coprime case. Let $\gcd(m,n)=r$, with $m=ra$, $n=rb$, and $a,b$ coprime. Let $S = \{g^r \mid g \in G \}$, and let $M = G_m = \langle k^a \mid k \in S \rangle$, and $N = G_n = \langle k^b \mid k \in S \rangle$. So $M$ and $N$ are abelian normal subgroups of $G$, with $[M,N] \le M \cap N \le Z(\langle M,N \rangle)$. It is enough to prove that any two elements $g^r$ and $h^r$ of $S$ commute. We have $r = \lambda m + \mu n$ for some $\lambda,\mu \in {\mathbb Z}$, and since $g^m$ commutes with $h^m$ and $g^n$ commutes with $h^n$, it is enough to show that $u=g^m= (g^r)^a$ commutes with $v = h^n = (h^r)^b$. Now $u^{-1}v^{-1}uv=z \in M \cap N$, so $v^{-1}uv = uz$. Since $z$ centralizes $u$ and $v$, $v^{-1}u^bv = u^bz^b$. But $u^b,v \in G_n$, which is abelian, so $v^{-1}u^bv=u^b$ and hence $z^b=1$. Similarly $z^a=1$ and, since $a$ and $b$ are coprime, $z=1$, and we are done.
finding probability using Methods of Enumeration
So in total you have ${12 \choose 3}$ choices to choose 3 computers. Now to get one defective computer exactly, you choose 2 good computers in ${8 \choose 2}$ ways, and 1 defective computer in ${4 \choose 1}$ ways. So together you can choose 2 good computers and exactly one defective one in ${8 \choose 2}{4 \choose 1}$. Now the probability is : $$\frac{{8 \choose 2}{4 \choose 1}}{{12 \choose 3}}.$$ Try generalizing to variable choices.
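For a quick numerical check of that value (assuming Python 3.8+ for `math.comb`):

```python
from math import comb

p = comb(8, 2) * comb(4, 1) / comb(12, 3)
print(p)   # 112/220, roughly 0.509
```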
Proof by induction, don't know how to represent range
Start with $$1+\frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{2^k} \ge 1+\frac{k}{2}.$$ On the left-hand side, you want to add $\frac{1}{2^k+1} + \frac{1}{2^k+2} + \cdots + \frac{1}{2^{k+1}}$ (all the terms from $2^k+1$ to $2^{k+1}$), as you noted. On the right-hand side you want to add $\frac{1}{2}$, as you have done. It remains to show $$\frac{1}{2^k+1} + \frac{1}{2^k+2} + \cdots + \frac{1}{2^{k+1}} \ge \frac{1}{2}.$$ Can you do this? Hint: each term on the left-hand side is at least $\frac{1}{2^{k+1}}$, and there are $2^k$ terms.
Proving that the directional derivative of a convex function is sublinear
Suppose $x \in \text{dom} f$, and let $df(x,h) = \lim_{t \downarrow 0} \frac{f(x+th)-f(x)}{t}$. To see that the limit exists (it may be $\pm \infty$), let $I_{x,h} = \{t \geq 0 | x+th \in \text{dom} f \}$. $I_{x,h}$ is convex, and $0 \in I_{x,h}$. If $I_{x,h} = \{0 \}$, then $df(x,h) = +\infty$, otherwise consider $\phi(t) = f(x+th)$ on $I_{x,h}$. Convexity of $\phi$ shows that if $0<u<v$, and $v \in I_{x,h}$, then $\frac{\phi(u)-\phi(0)}{u} \leq \frac{\phi(v)-\phi(0)}{v}$, from which it follows that $t \mapsto \frac{\phi(t)-\phi(0)}{t}$ is non-decreasing, and the existence of the (possibly extended valued) limit follows from this. It also follows from this that $df(x,h) = \inf_{t > 0} \frac{f(x+th)-f(x)}{t} = \inf_{s > 0} s (f(x+\frac{h}{s})-f(x))$. Note that positive homogeneity follows from the definition, ie, if $\lambda \geq 0$, then $df(x,\lambda h) = \lambda df(x,h)$. Two other facts are needed to finish: First, suppose $g$ is convex on some domain $C$. Then the function $\eta((x,s)) = s g(\frac{x}{s})$ is convex on $C \times (0,\infty)$. This follows from (with $\lambda_k \geq 0$, $\lambda_1+\lambda_2 = 1$) \begin{eqnarray} \eta(\lambda_1 (x_1, s_1) + \lambda_2 (x_2, s_2)) &=& (\lambda_1 s_1 + \lambda_2 s_2) g (\frac{\lambda_1 x_1 + \lambda_2 x_2}{\lambda_1 s_1 + \lambda_2 s_2}) \\ & = & (\lambda_1 s_1 + \lambda_2 s_2) g (\frac{\lambda_1 s_1 }{\lambda_1 s_1 + \lambda_2 s_2} \frac{x_1}{s_1} + \frac{\lambda_2 s_2 }{\lambda_1 s_1 + \lambda_2 s_2} \frac{x_2}{s_2}) \\ & \leq & (\lambda_1 s_1 + \lambda_2 s_2) \left[ \frac{\lambda_1 s_1 }{\lambda_1 s_1 + \lambda_2 s_2} g ( \frac{x_1}{s_1}) + \frac{\lambda_2 s_2 }{\lambda_1 s_1 + \lambda_2 s_2} g(\frac{x_2}{s_2}) \right] \\ &=& \lambda_1 s_1 g ( \frac{x_1}{s_1}) + \lambda_2 s_2 g(\frac{x_2}{s_2}) \\ &=& \lambda_1 \eta((x_1,s_1)) + \lambda_2 \eta((x_2,s_2)) \end{eqnarray} Second, suppose $\theta$ is convex on some domain $C\times D$. Then $x \mapsto \inf_{y \in D} \theta((x,y))$ is convex. To see this, let $\epsilon>0$ and choose $y_k \in D$ such that $\epsilon+\inf_{y \in D} \theta((x_k,y)) \geq \theta((x_k,y_k))$. Then \begin{eqnarray} \inf_{y \in D} \theta(\lambda_1(x_1,y)+ \lambda_2 (x_2,y)) &\leq& \theta(\lambda_1(x_1,y_1)+ \lambda_2 (x_2,y_2)) \\ &\leq & \lambda_1 \theta((x_1,y_1)) + \lambda_2 \theta((x_2,y_2)) \\ & \leq & \lambda_1 \inf_{y \in D} \theta((x_1,y)) + \lambda_2 \inf_{y \in D} \theta((x_2,y)) + \epsilon \end{eqnarray} Since this is true for all $\epsilon >0$, the result follows. Consider the function $h \mapsto f(x+h)-f(x)$ and apply the first result which shows that $(h,s) \mapsto s (f(x+\frac{h}{s})-f(x))$ is convex, and the second result shows that $h \mapsto \inf_{s>0} s (f(x+\frac{h}{s})-f(x))$ is convex. It follows that $h \mapsto df(x,h)$ is convex, and so $df(x,\frac{1}{2}(h_1+h_2)) \leq \frac{1}{2}(df(x,h_1)+df(x,h_2))$. Since $h \mapsto df(x,h)$ is positive homogeneous, multiplying across by $2$ yields $df(x,h_1+h_2) \le df(x,h_1)+df(x,h_2)$, hence the directional derivative is sublinear in the second argument.
Prove that $\sum\limits_{n=0}^{\infty}\frac{F_{n}}{2^{n}}= \sum\limits_{n=0}^{\infty}\frac{1}{2^{n}}$
Just for fun, here is a proof using probability. Let $N$ be the number of times you toss a fair coin until you get two heads in a row. Here are some outcomes for small $N$ values: $N=2\quad$ HH $N=3\quad$ THH $N=4\quad$ TTHH, HTHH $N=5\quad$ TTTHH, THTHH, HTTHH $N=6\quad$ TTTTHH, TTHTHH, THTTHH, HTTTHH, HTHTHH etc. The outcomes of length $n$ are formed in two ways: by sticking a T in front of an outcome of length $n-1$ or sticking HT in front of an outcome of length $n-2$. Thus, the number of outcomes of length $n$ is $F_{n-1}$. Since the probabilities must add to one, we have $$\sum_{n=2}^\infty {F_{n-1}\over 2^n}=1$$ which is equivalent to the required identity.
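If you enjoy the probabilistic argument, it is easy to test it empirically; here is a small simulation sketch (the function name and trial count are arbitrary):

```python
import random
from collections import Counter

def tosses_until_two_heads():
    n, prev = 0, ""
    while True:
        n += 1
        cur = random.choice("HT")
        if prev == "H" and cur == "H":
            return n
        prev = cur

trials = 200_000
counts = Counter(tosses_until_two_heads() for _ in range(trials))

F = [0, 1, 1]                      # Fibonacci numbers with F_1 = F_2 = 1
for n in range(3, 12):
    F.append(F[-1] + F[-2])

for n in range(2, 10):             # empirical P(N = n) vs F_{n-1} / 2^n
    print(n, round(counts[n] / trials, 4), F[n - 1] / 2**n)
```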
Probability of age of person renewing insurance?
Start with writing down the information you are given:
$P(\text{Does Not Renew}) = 0.05$
$P(\text{Renew}) = 1 - P(\text{Does Not Renew}) = 0.95$
$P(\geq 40 \mid \text{Renew}) = 0.9$
$P(< 40 \mid \text{Renew}) = 1 - P(\geq 40 \mid \text{Renew}) = 0.1$
$P(< 40 \mid \text{Does Not Renew}) = 0.6$
a) By the Law of Total Probability: $P(<40) = P(<40 \mid \text{Renew})\,P(\text{Renew}) + P(<40 \mid \text{Does Not Renew})\,P(\text{Does Not Renew}) = (0.1)(0.95) + (0.6)(0.05) = 0.125$.
b) By Bayes' Theorem: $P(\text{Does Not Renew} \mid {<}40)\,P(<40) = P(<40 \mid \text{Does Not Renew})\,P(\text{Does Not Renew})$, so $P(\text{Does Not Renew} \mid {<}40)\,(0.125) = (0.6)(0.05)$, giving $P(\text{Does Not Renew} \mid {<}40) = (0.6)(0.05)/0.125 = 0.24$.
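The same computation in a few lines of Python, in case you want to double-check the arithmetic:

```python
p_not_renew = 0.05
p_renew = 1 - p_not_renew                      # 0.95
p_under40_given_renew = 1 - 0.9                # 0.1
p_under40_given_not_renew = 0.6

# a) law of total probability
p_under40 = (p_under40_given_renew * p_renew
             + p_under40_given_not_renew * p_not_renew)
print(p_under40)                               # 0.125

# b) Bayes' theorem
print(p_under40_given_not_renew * p_not_renew / p_under40)   # 0.24
```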
Limit of $\left(1+\frac{1}{\sqrt n}\right)^n$ as n tends to $\infty$
Bernoulli's Inequality gives $$\left(1+\frac1{\sqrt n}\right)^n\ge1+\sqrt n\to \infty$$
Inductive Proof Bounded Harmonic Series
Follow the same idea as the usual proof that the entire harmonic series diverges, but stop with the finite part you want: \begin{align}\sum_{i=1}^{2^n} \frac1{i}&=1+\left(\frac12\right)+\left(\frac13+\frac14\right)+\left(\frac15+\frac16+\frac17+\frac18\right)+\dots+\left(\frac1{2^{n-1}+1}+\dots+\frac1{2^n}\right) \\&\ge 1+\left(\frac12\right)+\left(\frac14+\frac14\right)+\left(\frac18+\frac18+\frac18+\frac18\right)+\dots+\left(\frac1{2^n}+\dots+\frac1{2^n}\right) \\&=1+\frac12+\frac24+\frac48+\dots+\frac{2^{n-1}}{2^n} \\&=1+n\cdot\frac12. \end{align}
$[a+b]\geq[a]+[b]$ for all real numbers $a,b$
You can pull an integer out of the floor function. Then $$\left\lfloor a+b\right\rfloor=\left\lfloor\lfloor a\rfloor+\{a\}+\lfloor b\rfloor+\{b\}\right\rfloor=\left\lfloor\{a\}+\{b\}\right\rfloor+\lfloor a\rfloor+\lfloor b\rfloor.$$ It is immediate that $$\left\lfloor\{a\}+\{b\}\right\rfloor\ge0.$$ We can add that equality holds when $$\{a\}+\{b\}<1.$$
Dummit and Foote example of finding a Gröbner basis
The generators $x^3y$ and $x^2y^2$ are multiples of the other generator $x$, so they are redundant.
How do I solve this PDE problem?
I'll just remind you of some standard facts that I hope will lead you in the right direction, but you should refresh these in the book/references. First the homogeneous PDE, $ \partial_t u = \Delta u$. As mentioned, $u_{i,j}(r,t) := \psi_{i,j}(r) e^{\lambda_{i,j} t} $ solves the heat equation, with initial condition at time zero $ \psi_{i,j}(r) $. Since everything here is linear, a linear combination $ \sum_{i,j} c_{i,j} u_{i,j} $ is also a solution. So the problem reduces to solving for the $ c_{i,j} $ so that $ f_1 = \sum c_{i,j} \psi_{i,j} $. Without any further knowledge of $ f_1 $, I am not sure what more can be said. Now for the next one, try to find a particular solution to: $$ \frac{\partial u}{\partial t}+ b_{2,12} \psi_{2,12}= \nabla^2 u $$ Hint: guess something of the form $ \psi_{i,j}(r) \; T(t) $, which upon substitution into the PDE will become an ODE for $ T(t) $. Then you can find a solution to the initial value problem by adding on the solution to the homogeneous PDE.
Markov Chains - understand proof that if x and y communicate then if x is recurrent then y must also be recurrent
The inequality $p_{k+n+l}(y,y) \ge p_l(y,x) p_n(x,x) p_k(x,y)$ can be intuitively explained as follows: The left-hand side is the probability of starting at $y$ and after $k+n+l$ steps landing at $y$ again. The right-hand side is the probability of starting at $y$, then after $l$ steps landing at $x$, then after $n$ more steps landing at $x$ again, and then after $k$ more steps landing at $y$. The paths considered by the right-hand side each go from $y$ to $y$ in $l+n+k$ steps, but there are many other ways to do so, so the probability on the right-hand side is smaller. More succinctly, \begin{align} &P(\text{go from $y$ to $y$ in $l+n+k$ steps}) \\ &= \sum_{a,b} P(\text{go from $y$ to $a$ in $l$ steps, then to $b$ in $n$ steps, then to $y$ in $k$ steps}) \\ &\ge P(\text{go from $y$ to $x$ in $l$ steps, then to $x$ in $n$ steps, then to $y$ in $k$ steps}). \end{align} For the second question, $$\sum_{j=0}^\infty p_j(y,y) \ge \sum_{j=l+k}^\infty p_j(y,y) \ge p_l(y,x) p_k(x,y) \sum_{n=0}^\infty p_n(x,x),$$ where the last step comes from writing $j=l+n+k$.
Integral $\int_{a}^{b}\frac{x\ln(x)}{x^4+a^{2}b^{2}}dx$
$$I=\int_ {a}^{b}\frac{x\ln(x)}{x^4+a^{2}b^{2}}dx\overset{x^2=t}=\frac14 \int_{a^2}^{b^2} \frac{\ln t}{t^2+a^2b^2}dt $$ Let's get rid of that ugly denominator by doing a $t=y(ab)$: $$I=\frac{ab}4\int_\frac{a}{b}^\frac{b}{a}\frac{\ln(yab)}{a^2b^2(y^2+1) }dy=\frac{1}{4ab}\int_\frac{a}{b}^\frac{b}{a}\frac{\color{blue}{\ln y+\ln(ab)}}{y^2+1 }dy$$ And now one might notice the symmetry directly. Substituting $y=\frac{1}{x}$ gives: $$\frac{1}{4ab}\int_\frac{a}{b}^\frac{b}{a}\frac{\color{red}{\ln\left(\frac{1}{x}\right)+\ln(ab)}}{x^2+1 }dx$$ Adding the red and blue integrals simplifies the logarithm quite nicely: $$\require{cancel} 2I=\frac{1}{4ab}\int_\frac{a}{b}^\frac{b}{a}\frac{\color{red}{\cancel{\ln\left(\frac{1}{x}\right)}+\ln(ab)}+\color{blue}{\cancel{\ln x+}\ln(ab)}}{x^2+1}dx$$ $$\Rightarrow I=\frac{\ln (ab)}{4ab}\int_\frac{a}{b}^\frac{b}{a}\frac{1}{x^2+1}dx=\frac{\ln (ab)}{4ab}\left(\arctan\frac{b}{a} -\arctan \frac{a}{b}\right)$$
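A quick numerical sanity check of the closed form, with arbitrary test values $a=1.3$, $b=2.7$ (this assumes scipy is available):

```python
import numpy as np
from scipy.integrate import quad

a, b = 1.3, 2.7     # any 0 < a < b will do

numeric, _ = quad(lambda x: x * np.log(x) / (x**4 + a**2 * b**2), a, b)
closed = np.log(a * b) / (4 * a * b) * (np.arctan(b / a) - np.arctan(a / b))
print(numeric, closed)   # the two values should agree to high precision
```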
Algebraic Topology Hatcher Chapter 3.2 Problem 3
The only nontrivial map $H^1(\mathbb{R}P^m;\mathbb{Z}/(2))\to H^1(\mathbb{R}P^n;\mathbb{Z}/(2))$ is the identity, since both domain and codomain are isomorphic to $\mathbb{Z}/(2)$. Assume for contradiction that $f:\mathbb{R}P^n\to \mathbb{R}P^m$ induces this nontrivial map on $H^1$. Recall that $H^*(\mathbb{R}P^n;\mathbb{Z}/(2))\cong \mathbb{Z}/(2)[x]/(x^{n+1})$ with $|x|=1$, and that the induced map $f^*:H^*(\mathbb{R}P^m;\mathbb{Z}/(2))\to H^*(\mathbb{R}P^n;\mathbb{Z}/(2))$ is a ring homomorphism. This gives us a ring homomorphism $f^*:\mathbb{Z}/(2)[x]/(x^{m+1})\to \mathbb{Z}/(2)[x]/(x^{n+1})$ which maps $x\to x$. However, no such ring homomorphism exists when $n>m$: it would have to send $x^{m+1}=0$ to $x^{m+1}\neq0$. Hence we have arrived at a contradiction, and we conclude that no such $f$ exists. The similar result for $\mathbb{C}P^n$ is that there exists no map $\mathbb{C}P^n\to \mathbb{C}P^m$ inducing a nontrivial map $H^2(\mathbb{C}P^m;\mathbb{Z})\to H^2(\mathbb{C}P^n;\mathbb{Z})$ when $n>m$. The argument for this is the same mutatis mutandis. Note in this case the nontrivial map on $H^2$ does not need to be the identity, since we are working with coefficients over $\mathbb{Z}$ now. However, it is still multiplication by a non-zero integer, which still leads to the contradiction.
Counterexample for a complex analysis proof
Let $f(z) = \begin{cases} 1, & z=z_0 \\ -1, & \text{otherwise} \end{cases}$. Then $|f(z)| = 1$ is constant, hence continuous, while $f$ is discontinuous at $z=z_0$.
Given that $a\mid n$ and $b\mid n$, show that $ab\mid an,bn$ iff $ab\mid\gcd(an,bn)$.
You don't need $\,a,b\mid n.\,$ The equivalence is the characteristic (universal) property of the gcd $$ c\mid a,b \iff c\mid (a,b)$$ Proof $\ (\Rightarrow)\ $ By Bezout $\, (a,b) = ja\! +\! kb\,$ for $\,j,k\in\Bbb Z,\,$ so $\,c\mid a,b\,\Rightarrow\,c\mid ja\!+\!kb=(a,b)$ $\ (\Leftarrow)\ \ \ c\mid (a,b)\mid a,b\,\Rightarrow\, c\mid a,b\ $ by transitivity of divisibility. $\ $ QED Remark $\ $ This property characterizes the gcd, i.e. if $\,c\mid a,b \iff c\mid d\,$ then $\,d = (a,b).\,$ Indeed, if we choose $\,c = d\,$ then $\ (\Leftarrow)\ $ yields $\,d\mid a,b,\,$ i.e. $\,d\,$ is a common divisor of $\,a,b.\,$ Further $\ (\Rightarrow)\ $ shows that $\,d\,$ is divisible by every common divisor $\,c\,$ of $\,a,b,\,$ so $\,d\ge c,\,$ therefore $\,d\,$ is the greatest common divisor of $\,a,b.$
A question about the integral of convex function
Note that if $||u||_{\infty} = 0$ then your question is trivially true with equality. Now assume that $||u||_{\infty} > 0$. Observe $F(u(x)) \leq (\frac{-u(x) + ||u||_{\infty}}{2||u||_{\infty}})F(-||u||_{\infty}) + (\frac{+u(x) + ||u||_{\infty}}{2||u||_{\infty}})F(||u||_{\infty})$ by convexity of $F$, thus $\int_{0}^1F(u(x)) dx \leq \int_{0}^1(\frac{-u(x) + ||u||_{\infty}}{2||u||_{\infty}})F(-||u||_{\infty}) + (\frac{+u(x) + ||u||_{\infty}}{2||u||_{\infty}})F(||u||_{\infty})\, dx = \frac{1}{2}(F(-||u||_{\infty})+F(||u||_{\infty}))$. For equality we must have that for $y$ in the range of $u$, $F(y) = (\frac{-y + ||u||_{\infty}}{2||u||_{\infty}})F(-||u||_{\infty}) + (\frac{+y + ||u||_{\infty}}{2||u||_{\infty}})F(||u||_{\infty})$ almost everywhere (in the range of $u$).
Number of ways to roll five 6-sided dice with sum 7
Each die always has value at least $1$, so we may as well ask for the number of ordered sequences $(a_1, \ldots, a_5)$ such that $a_1, \ldots, a_5 \in \{0, \ldots, 5\}$ and $a_1 + \cdots + a_5 = 2$. This is only possible if (1) one of the $a_i$ is $2$ and the others are all $0$, or (2) two of the $a_i$ are $1$ and the others are all $0$. There are only ${5 \choose 1} = 5$ possibilities for the former and ${5 \choose 2} = 10$ possibilities for the latter, and so only $5 + 10 = 15$ ordered sequences satisfying the criteria in total.
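Brute force confirms the count; a throwaway Python check:

```python
from itertools import product

count = sum(1 for roll in product(range(1, 7), repeat=5) if sum(roll) == 7)
print(count)   # 15
```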
Writing $\frac{x^4(1-x)^4}{1+x^2}$ in terms of partial fractions
You should have tried $$\frac{x^4(1-x)^4}{1+x^2}=Ax^6+Bx^5+Cx^4+Dx^3+Ex^2+Fx +\mathbf{J}+\frac{\mathbf{G + Hx}}{1+x^2}$$ By the way, multiplying out and comparing coefficients gives: $$\frac{x^4(1-x)^4}{1+x^2}=x^6-4x^5+5x^4-4x^2+\mathbf{4}+\frac{\mathbf{-4}}{1+x^2}$$
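You can also let a CAS do the partial fractions for you; with sympy (just as a check of the expansion above):

```python
from sympy import symbols, apart

x = symbols('x')
print(apart(x**4 * (1 - x)**4 / (1 + x**2), x))
# expected: x**6 - 4*x**5 + 5*x**4 - 4*x**2 + 4 - 4/(x**2 + 1)
```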
Why is $V/W$ all multiples of $(0,1)^T$ given $V = \mathbb{R}^2$ and $W$ is all multiples of $(1,0)^T$?
$V/W$ is the set of all lines, parallel to the $x$ axis. However, as a vector space, $V/W$ is isomorphic to the vertical axis, as the mapping $$(0, y)\mapsto \{(x, y)|x\in\mathbb R\}$$ is an isomorphism. Note also that this means that $V/W$ is isomorphic to $\mathbb R$.
Geometric proof that a Pythagorean triple has a number divisible by $3$?
A slightly detailed (irrelevant) approach: If either of $m$, $n$ is divisible by $3$, then we are done, as $\ (a,b,c)=({ m }^{ 2 }-{ n }^{ 2 },\quad 2mn,\quad { m }^{ 2 }+{ n }^{ 2 }) $ and $\ 3|b $. Hence, assume neither $m$ nor $n$ is a multiple of $3$. Therefore $$\ m\equiv 1,2\quad (mod\quad3)\\ n\equiv 1,2\quad (mod\quad3)$$ This implies that $$\ { m }^{ 2 }\equiv { n }^{ 2 }\equiv 1(mod\quad 3) $$ Hence $$\ { m }^{ 2 }-{ n }^{ 2 }=a\equiv 0(mod\quad 3)$$
left/ right hand side limits
$$f(x) = \frac{2x}{\sqrt{x^2+6x^4}} = \frac{2x}{\sqrt{x^2}\sqrt{1+6x^2}},$$ since $x^2 \ge 0$ for all real $x$. Then since $\sqrt{x^2} = |x|$, we get $$f(x) = \frac{2 g(x)}{\sqrt{1+6x^2}},$$ where we will take $$g(x) = \frac{|x|}{x} = \begin{cases}1, & x > 0 \\ \text{undefined}, & x = 0 \\ -1, & x < 0. \end{cases}$$ This makes the limiting behavior from either side readily apparent. There is nothing magical about $g$. The only thing to understand here is that when considering any particular one-sided limit, $g$ does not change value. For instance, evaluation of $$\lim_{x \to 0^-} f(x)$$ has $x < 0$ for all such $x$, thus $g(x) = -1$ and $$\lim_{x \to 0^-} f(x) = \lim_{x \to 0^-} \frac{-2}{\sqrt{1 + 6x^2}}.$$
Square-integrability of Fourier transform
Yes. In fact one derivative is enough. The Fourier transform of $f'$ is $itF(t)$ (or something like that, depending on how your definition of the Fourier transform is normalized). Since $f'\in L^1$ this shows that $tF(t)$ is bounded. So $$\int_{|t|>1}|F(t)|^2\,dt\le c\int_{|t|>1}\frac{dt}{t^2}<\infty.$$ We certainly have $\int_{|t|\le1}|F(t)|^2\,dt<\infty$ since $F$ is bounded. So $F\in L^2$ and hence $f\in L^2$. Bonus We've proved this: Theorem If $f\in L^1(\Bbb R)$ is absolutely continuous then $$||f||_2\le c(||f||_1+||f'||_1).$$ That can't be "right", that is it can't be the whole truth, because the two sides transform differently under dilations. If we use the argument above but split the integral of $|F|^2$ at $|t|=A$ instead of $|t|=1$ we get this: $$||f||_2\le c(A^{1/2}||f||_1+A^{-1/2}||f'||_1).$$ Now let $A=||f'||_1/||f||_1$: Better Theorem If $f\in L^1(\Bbb R)$ is absolutely continuous then $$||f||_2\le c||f||_1^{1/2}||f'||_1^{1/2}.$$
Find y values along decaying exponential curve that has defined points
In your decay equation $$ y = a(1-b)^x $$ the variables are $x$ and $y$. The value of $b$ is fixed. It represents the constant decay rate as a percentage as $x$ changes. For example, if $x$ is measured in days then $b = 0.1$ describes a $10\%$ daily decrease. At the end of each day you have $1 - b = 0.9 = 90\%$ of what you started with. Since you know two points $(x,y)$ on the curve you can plug them into the equation and get two simultaneous equations to solve for the constants $a$ and $b$. You might find that easier if you take logarithms first: $$ \log(y) = \log(a) + x \log(1-b). $$
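Concretely, solving the two simultaneous equations via logarithms looks like this in Python (the two sample points are made up; they lie on the curve with $a=100$, $b=0.1$):

```python
import math

x1, y1 = 0.0, 100.0        # first known point
x2, y2 = 5.0, 59.049       # second known point (100 * 0.9**5)

slope = (math.log(y2) - math.log(y1)) / (x2 - x1)   # equals log(1 - b)
b = 1 - math.exp(slope)
a = y1 / (1 - b) ** x1
print(a, b)                # recovers a = 100, b = 0.1 (up to rounding)

for x in [1, 2, 3, 4]:     # y values along the curve between the known points
    print(x, a * (1 - b) ** x)
```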
Prediction intervals with OLS and indicator variables
The prediction intervals are wider when the two are fit separately, because the population variances are estimated based on smaller samples; hence there is greater uncertainty about them. In the first model, only a single population variance is estimated, and it is based on a larger sample than those that estimated the two separate population variances in the second model. Being based on a larger sample, it has less uncertainty. Hence a shorter prediction interval. Notice the larger denominator in that case.
Reverse triangle inequality?
We can write $v(x)=w(x)+\Delta\theta(x-x^*)$, with $\Delta=\Delta v_x$ and $\theta(t)=1$ for $t>0$ and $=0$ for $t<0$ (I've slightly changed notations). Now $\int_{x^*-z}^{x^*}\theta(x-x^*)\, dx=0$, $\int_{x^*}^{x^*+z}\theta(x-x^*)\, dx= z$, so the $\Delta\theta$ part of $v$ makes a contribution of $-\Delta z$ to the difference of the integrals, and this is $>bz$ in absolute value, by our assumption on $\Delta$. On the other hand, $$ \left| \int_{x^*-z}^{x^*}w(x)\, dx - \int_{x^*}^{x^*+z}w(x)\, dx\right| = \left| \int_{x^*-z}^{x^*}(w(x)-w(x^*))\, dx - \int_{x^*}^{x^*+z}(w(x)-w(x^*))\, dx\right| \le \int_{x^*-z}^{x^*}L|x-x^*|^{\alpha}\, dx + \int_{x^*}^{x^*+z}L|x-x^*|^{\alpha}\, dx \le 2Lz^{1+\alpha} . $$ Putting these together, we'll obtain the desired inequality.