title | upvoted_answer |
---|---|
Finding generators for matrix group | For $b \in \mathbb Z$, we have
$$
\begin{pmatrix}1 & 1 \\ 0 & 1\end{pmatrix}^b
=
\begin{pmatrix}1 & b \\ 0 & 1\end{pmatrix},
\qquad
\begin{pmatrix}1 & b \\ 0 & 1\end{pmatrix}
\begin{pmatrix}-1 & 0 \\ 0 & 1\end{pmatrix}
=
\begin{pmatrix}-1 & b \\ 0 & 1\end{pmatrix}
$$ |
Unitary matrix matrix problem | Hints:
1.) For $T,S\in M_n(\Bbb C)$ we have $\ T=S\ \iff\ \forall x,y:\langle Tx,y\rangle=\langle Sx,y\rangle$.
2.) Use the adjointness property: $\langle z,Ay\rangle=\langle A^*z,y\rangle$. |
What sort of mathematical object is a stochastic process? | A stochastic process $Q$ defined on some index set $T$ is a mapping $Q:\Omega\times T\to S$, $(\omega,t)\mapsto Q(\omega,t)$, where $(\Omega,\mathcal F,P)$ is a probability space, $(S,\mathcal S)$ is a measurable space, for example $(S,\mathcal S)=(\mathbb R^d,\mathcal B(\mathbb R^d))$, and each function $Q_t:\Omega\to(S,\mathcal S)$, $\omega\mapsto Q(\omega,t)$, is $(\mathcal F/\mathcal S)$-measurable.
One can use the notations $Q=(Q_t)_{t\in T}$ or $Q=(Q(t))_{t\in T}$; the notation $\{Q(t);t\in T\}$ is also used but is more dubious, for the reasons you explain. |
Limit of function using sequences | It looks good, for the most part. The only issue with your proof that the given piecewise function is nowhere continuous is that you still need to show that $x_0+\frac{\sqrt{2}}n$ is irrational for all sufficiently large $n.$ You could instead show/note/use the fact that every real number is an adherent point of the irrationals, so there is also a sequence of irrationals converging to $x_0,$ from which we can draw the conclusion you stated.
As an alternative approach, note that the given function's range is $\{0,1\}.$ This makes things simpler, as it turns out.
Claim: If $f:\Bbb R\to\{0,1\}$ is continuous at a point $x_0$, then it is constant in some open interval about $x_0$.
Proof: Suppose by way of contradiction that $f$ is not constant in any such open interval. In particular, then, for each $n\ge 1,$ there exists some $x_n\in\left(x_0-\frac1n,x_0+\frac1n\right)$ such that $f(x_n)\ne f(x_0).$ Take $\epsilon=\frac12.$ By construction, $x_n\to x_0,$ so $f(x_n)\to f(x_0)$ by continuity, so there is some $N$ such that $|f(x_n)-f(x_0)|<\epsilon$ for all $n>N.$ But since $f:\Bbb R\to\{0,1\},$ then for all $n\ge 1,$ we have $|f(x_n)-f(x_0)|=1,$ so $1<\epsilon=\frac12.$ Contradiction. $\Box$
We can easily generalize the above result to functions $f:\Bbb R\to F$ for any non-empty finite $F\subseteq\Bbb R.$ This is trivial to prove when $F$ consists of a single point, and otherwise, we take $$\epsilon=\frac12\min\{|y-z|:y,z\in F\text{ and }y\ne z\}$$ rather than $\epsilon=\frac12.$ Since $F$ is finite, then it isn't difficult to show that $\epsilon>0$--we need only show that $\{|y-z|:y,z\in F\text{ and }y\ne z\}$ is a finite set of positive numbers and we're basically done--and from there, the proof proceeds in the same sort of way. |
Limit of a Power sequence | Hint:
Observe that by induction we have $P_n(0)=0$ for all $n$. Thus the limit is actually
$$\lim_{x \to 0} \frac{P_n(x)-P_n(0)}{x}=P_n'(0).$$ |
Variance of the square variation process | The argument is correct. Maybe in the third line of the computation of $\mathbf{Var}\bigl[X_n - X_0\bigr]$, I would refer to (and give a name to) the displayed equation where $\mathbf{E}\bigl[X_n X_0\bigr]$ is computed. |
Prove that if ab=1 then a=b $\forall a,b \in \mathbb Z$ | Assume $ab=1$ for $a,b\in\mathbb{Z}$. Since $1$ is the multiplicative identity of $\mathbb{Z}$, we find that $a=b^{-1}$. If $a=1$, then $b=1$ as well. If $a=-1$, then we also have that $b=-1$. What are the multiplicative inverses of other elements of $\mathbb{Z}$? Particularly, are they integers? Conclude that $ab=1$ implies that $a=b=\pm1$ for $a,b\in\mathbb{Z}$. |
Solve the following Differential Equation $(3y-7x+7)dx-(3x-7y-3)dy=0$ | We are given:
$$\tag 1 \dfrac{dy}{dx} = \dfrac{3y-7x+7}{3x-7y-3}$$
It would really be nice if we could get rid of those constants, so let's use a trick to do that.
Let: $x = X + h \rightarrow dx = dX$, $y = Y + k \rightarrow dy = dY$.
We substitute into $(1)$ and get:
$$\tag 2 \dfrac{dY}{dX} = \dfrac{3(Y+k)-7(X+h)+7}{3(X+h)-7(Y+k)-3}$$
What we want now is to get rid of those constants, so we have:
$$3k - 7h + 7 = 0 \\ -7k + 3h -3 = 0$$
This gives us: $ h = 1, k = 0$.
Our new system now becomes:
$$\tag 3 \dfrac{dY}{dX} = \dfrac{-7X + 3Y}{3X - 7Y}$$
Let $Y = vX \ \rightarrow Y' = v + X v'$ and substitute into $(3)$, which is now a separable equation and we end up with:
$$\displaystyle \int \dfrac{7v - 3}{v^2 - 1}~ dv = -7 \int \dfrac{1}{X}~dX$$
I think you can take it from here.
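As a quick sanity check of that reduction (a SymPy sketch of mine; the variable names are not from the original), one can verify that $Y=vX$ really does produce the separable integrand above:

```python
import sympy as sp

X, v = sp.symbols('X v', positive=True)
# right-hand side of (3) with Y = v X substituted in
rhs = (-7*X + 3*v*X) / (3*X - 7*v*X)
# since Y' = v + X v', the separable form is X v' = rhs - v
Xvprime = sp.simplify(rhs - v)
print(Xvprime)  # 7*(v**2 - 1)/(3 - 7*v), up to rearrangement
```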
Recall that once you solve this, there are two substitutions to get back to the solution. |
What is the difference between $i=(1+r)^{m}-1$ and $i=(1+\frac{r}{m})^{m}-1$? | (Rolling comments into an answer)
Compound interest calculations generally use a periodic interest rate, which is the annual rate divided by the number of periods, i.e., $r/m$. For example, if an annual rate of $3.00\%$ were to be compunded monthly, the rate used for the interest computation would be $$\frac rm={3.00\%\over12}=0.25\%.$$ |
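A minimal check of the two formulas from the title, using the numbers above (my sketch):

```python
r, m = 0.03, 12                # 3.00% nominal annual rate, 12 periods
wrong = (1 + r)**m - 1         # compounds the full annual rate each period
right = (1 + r/m)**m - 1       # compounds the periodic rate r/m
print(wrong, right)            # 0.4257... vs 0.030416... (about 3.04%)
```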
Does the Continuum hypothesis say anything about the cardinality of the set $3^{{\aleph}_0}$? | In general, given $\kappa$ an infinite cardinal, we have
$$ 2^\kappa \leq 3^\kappa \leq (2^\kappa)^\kappa = 2^{\kappa\cdot\kappa}=2^\kappa$$
showing that $2^\kappa = 3^\kappa$. Thus, if $2^{\aleph_0} = \aleph_\alpha$, then $3^{\aleph_0} = \aleph_\alpha$.
(Alternate Solution) For another (easier) way of thinking of it, we can define an explicit injection of $3^\kappa$ into $2^\kappa$. For simplicity we will just do the case where $\kappa=\aleph_0$, but there isn't too much difficulty extending to arbitrary $\kappa$ (infinite cardinal). The idea is to think of $2^{\aleph_0}$ as being the set of countably-infinite sequences of $0$'s and $1$'s, $2^\mathbb{N}$. Likewise, $3^{\aleph_0}$ is the set of countably-infinite sequences of $0$'s, $1$'s, and $2$'s, $3^\mathbb{N}$. We can 'encode' a sequence in $3^{\aleph_0}$ by using the encoding
$$0 \mapsto 00, 1 \mapsto 01, 2 \mapsto 11$$
In other words, we define $f: 3^{\mathbb{N}} \to 2^{\mathbb{N}}$ by sending a sequence $(x_n)_{n\in\mathbb{N}} \in 3^\mathbb{N}$ to the sequence $(y_n)_{n\in\mathbb{N}}$ where
$$y_{2n},y_{2n+1} = \begin{cases} 0,0 & \text{if $x_n = 0$} \\ 0,1 & \text{if $x_n=1$} \\ 1,1 & \text{if $x_n=2$} \end{cases}$$
$f$ is an injection, and there is another obvious injection $2^\mathbb{N} \to 3^\mathbb{N}$. Cantor-Schroeder-Bernstein then shows $2^{\aleph_0} = 3^{\aleph_0}$.
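Here is a small Python sketch (names mine) of the digit-by-digit encoding on finite prefixes; it is injective because the three two-bit blocks are distinct:

```python
# digit code: 0 -> 00, 1 -> 01, 2 -> 11
CODE = {0: (0, 0), 1: (0, 1), 2: (1, 1)}

def encode(ternary_seq):
    """Map a finite prefix of a sequence over {0,1,2} to one over {0,1}."""
    return [bit for digit in ternary_seq for bit in CODE[digit]]

print(encode([2, 0, 1]))  # [1, 1, 0, 0, 0, 1]
```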
(Alternate Solution 2) For a fun proof, note that when you put the discrete topology $3=\{0,1,2\}$, $3^\mathbb{N}$ (with the product topology) is a compact metrizable topological space. But since compact metrizable spaces are second-countable, the cardinality of $3^\mathbb{N}$ is at most continuum, i.e. $2^{\aleph_0}$. |
can we "represent" this finite topology as submodules of $R^X$? | More generally, let $C$ be any category. Then you can define a $k$-algebra $k[C]$ which has as a basis the morphisms of $C$, with multiplication given by composition in $C$ or $0$ when the composition is not defined (because the domain and codomain do not match up). Given a functor $F:C\to \mathtt{Vect}_k$, we then get an action of $k[C]$ on the direct sum $V$ of the values of $F$ on all the objects of $C$, with morphisms of $C$ acting via $F$ on the summand of $V$ corresponding to their domain and acting by $0$ otherwise.
In particular, suppose $F$ is the constant functor with value $k$, so $V=k^{\oplus O}$ where $O$ is the set of objects of $C$. Then I claim the $k[C]$-submodules of $V$ are exactly the subsets of the form $k^{\oplus S}$ where $S\subseteq O$ has the property that if $s\in S$ and there is a morphism $s\to t$, then $t\in S$. It is clear that these subsets are submodules. Conversely, suppose $M$ is a submodule of $V$. Note that the identity map on an object $o\in O$ acts on $V$ as the projection onto the summand corresponding to $o$. It follows that if $m\in M$, so are each of its projections, and thus $M=k^{\oplus S}$ where $S$ is the set of $s$ such that $M$ contains an element whose $s$-coordinate is nonzero. But now any morphism $s\to t$ maps the summand corresponding to $s$ to the summand corresponding to $t$ isomorphically, so if $s\in S$ then $t$ must be in $S$ as well.
In particular, if $C$ is a preorder, this gives a module structure on $k^{\oplus C}$ whose submodules are exactly sets of the form $k^{\oplus S}$ where $S$ is an upper set of $C$. Taking the specialization preorder of an arbitrary topological space (or its opposite, depending on your conventions), this gives a module structure where the submodules correspond to the specialization-closed sets. For a finite space, specialization-closed sets are the same as closed sets. |
quadratic constraints | No, it is not possible. That inequality describes a non-convex set, and SOCP necessarily requires convexity.
EDIT: Given the response below ("The terms in the bracket are convex by themselves") I thought I would expand further.
The convexity of individual functions and terms, frankly, is of far less importance than people often think. After all, $x^2 \geq z$ is non-convex, even though $x^2$ is the prototypical convex function. And $\log x \geq y$ is convex, even though the $\log$ function is not convex (concave, in fact).
What matters is the convexity of the set described by the constraint. And if that set is non-convex, no amount of reformulation in terms of convex functions or subexpressions is going to change that. It is non-convex, full stop.
That doesn't mean the problem can't be solved or approximated; that doesn't mean convex optimization can't be part of that solution process. But no, you're not going to be feeding your model to an SOCP solver or any other convex-specific solver.
(On rare occasions, a change of variables can give you a new model with a convex constraint set: for instance, this is how geometric programs are converted to convex form. But this is rare indeed. I am certain there is no change of variables that will accomplish this here.) |
If $A+B\ge C$, can we find positive operators $A_1\le A$ and $B_1\le B$ such that $A_1+B_1=C$? | No, it's not true in general. Take for instance
$$
A=\begin{bmatrix} 1&0\\0&0\end{bmatrix},\ B=\begin{bmatrix} 0&0\\0&1\end{bmatrix}, \ C=\begin{bmatrix} 1/2&1/2\\1/2&1/2\end{bmatrix}.
$$
Then $C\leq A+B$, and if $0\leq A_1\leq A$ then $A_1=t A$ for some $t\in[0,1]$. Similarly, $0\leq B_1\leq B$ implies $B_1=s B$ for some $s\in [0,1]$. It is not possible then to have $C=A_1+B_1=tA+sB$, since $tA+sB$ is diagonal while $C$ is not. |
Finding value of $\int^{2\pi}_{0}\frac{\sin^2(x)}{a-b\cos x}dx$ without contour Integration | $$I=\int_0^{2\pi}\frac{\sin^2(x)}{a-b\cos(x)}dx=\int_0^\pi\frac{\sin^2(x)}{a-b\cos(x)}dx+\int_\pi^{2\pi}\frac{\sin^2(x)}{a-b\cos(x)}dx$$
now using substitution:
$$t=\tan\frac{x}{2}$$
$$dx=\frac{2}{1+t^2}dt$$
$$\sin(x)=\frac{2t}{1+t^2}$$
$$\cos(x)=\frac{1-t^2}{1+t^2}$$
because the substitution is discontinuous at odd multiples of $\pi$, we must first change the limits (using the $2\pi$-periodicity of the integrand). Now our integral becomes:
$$I=\int_{-\pi}^\pi\frac{\sin^2(x)}{a-b\cos(x)}dx=\int_{-\infty}^\infty\frac{\left(\frac{2t}{1+t^2}\right)^2}{a-b\left(\frac{1-t^2}{1+t^2}\right)}\cdot\frac{2}{1+t^2}dt=2\int_{-\infty}^\infty\frac{4t^2}{(1+t^2)^2\left(a(1+t^2)-b(1-t^2)\right)}dt=2\int_{-\infty}^\infty\frac{4t^2}{(1+t^2)^2\left((a-b)+(a+b)t^2\right)}dt$$
then by using partial fraction decomposition this can be evaluated. |
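As a numerical sanity check of the substitution (my sketch; it also compares against the closed form $\frac{2\pi}{b^2}\bigl(a-\sqrt{a^2-b^2}\bigr)$ that the partial fractions eventually yield for $a>|b|>0$):

```python
import numpy as np
from scipy.integrate import quad

a, b = 3.0, 1.0
orig, _ = quad(lambda x: np.sin(x)**2 / (a - b*np.cos(x)), 0, 2*np.pi)
subst, _ = quad(lambda t: 8*t**2 / ((1 + t**2)**2 * ((a - b) + (a + b)*t**2)),
                -np.inf, np.inf)
closed = 2*np.pi/b**2 * (a - np.sqrt(a**2 - b**2))
print(orig, subst, closed)  # all three agree (about 1.078 here)
```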
Is it circular to define the Von Neumann universe using "sets"? | You cannot construct what is not there.
When we construct the real numbers from the rational numbers, we don't invent them out of thin air. We use material around us: sets of rational numbers (or sets of functions which are sets of sets of rational numbers). And similarly when we want to construct a dual vector space, we don't wave our hands and whisper some ancient texts from the Necronomicon ex Mortis. We use the sets at our disposal (and the assumptions they satisfy certain properties) to show we can define a structure with the wanted properties.
So the real numbers, and dual vector spaces, and all the other mathematical constructions, existed in your fixed universe of sets before you began your work. What we do, then, is not so much constructing as defining them, and using our axioms to argue that since the definition "makes sense" (whatever that means in the relevant context), such objects exist.
"Okay, Asaf, but what does all that have to do with my question?", you might be asking yourself, or me, at this point. Well, if you don't interrupt me, I might as well tell you.
The von Neumann universe is a way to represent a universe of $\sf ZF$ as constructed from below. But it is using the pre-existing sets of the universe. What is clever in this construction is that it exhausts all the sets of the universe. And if the universe only satisfies $\sf ZF-Reg$, then the result is the largest transitive class which will satisfy $\sf ZF$.
But what happens in different models of set theory? Well, we can prove from $\sf ZF$ that the von Neumann hierarchy, which has a relatively simple definition in the language of set theory, exhausts the universe. So each different model will have a different von Neumann hierarchy. And models which are not well-founded, will have a non-well-founded von Neumann hierarchy.
So yes, we first need a model of $\sf ZF$ in order to construct this hierarchy, but we don't need it inside the theory. We need it in the meta-theory. Namely, if you are working with $\sf ZF$, then you most likely assume it is consistent in your meta-theory, where you formalize your arguments and do things like induction on formulas. And that is enough to prove the existence of the von Neumann hierarchy; because once you work inside $\sf ZF$, the whole universe is given to you! |
Is it possible for $\sum_{i=1}^n x_i^3 > \sum_{i=1}^n y_i^3 $ when $\sum_{i=1}^n x_i < \sum_{i=1}^n y_i $ where $x_i,y_i \in \mathbb{N}$ | It is definitely possible, even with $n=2$ where the smallest solution is
$$1+4<3+3$$
$$1^3+4^3>3^3+3^3$$
For higher $n$, simply add 1s to both sides. This works because small differences between the numbers on both sides are amplified by the operation of cubing. |
Is a Quadratic equation a function? | I think you're getting your dependent and independent variables mixed up. Given a quadratic equation, say $y=ax^2+bx+c$, the independent variable is $x$, whereas the dependent variable is $y$. A quadratic can have up to two inputs giving the same output (dependent variable), but each input (independent variable) gives exactly one output. |
How to solve this limits question | Factor the term with the highest power (here, $x^{3}$) :
$$
\begin{align*}
\lim \limits_{x \to +\infty} \frac{x^{3}-4x}{7-2x^{3}} &= {} \lim \limits_{x \to + \infty} \frac{\require{cancel} \cancel{\color{blue}{x^{3}}} \big( 1 - \frac{4}{x^{2}} \big)}{\require{cancel} \cancel{\color{blue}{x^{3}}}\big( \frac{7}{x^{3}} - 2 \big)} \\[2mm]
&= \lim \limits_{x \to +\infty} \frac{1 - \frac{4}{x^2}}{\frac{7}{x^3}-2} \\[2mm]
&= \frac{1}{-2} \\[2mm]
&= -\frac{1}{2}
\end{align*}
$$ |
Extract matrix from this form of linear algebra term? | This can be done essentially through using the spectral theorem and the fact that when you compute the Euclidean dot product we essentially have
$$
(Mx)\cdot (My) \;\; =\;\; x^TM^TMy.
$$
We can extract the matrix $M^TM$ and note that this is a symmetric positive semi-definite matrix, hence we can use the spectral theorem to find that
$$
A \;\; =\;\; M^TM \;\; =\;\; VDV^T
$$
where $D$ is diagonal with the eigenvalues of $A$ and $V$ is orthogonal. Note that since $A$ is positive semi-definite, the diagonal entries of $D$ are either $0$ or positive. It therefore makes sense to decompose $D = \sqrt{D}\sqrt{D}$ where we take the square root of the entries. We therefore find that we can write
$$
A \;\; =\;\; V\sqrt{D}\sqrt{D}V^T \;\; =\;\; \left (\sqrt{D}V^T\right )^T\left (\sqrt{D}V^T\right ).
$$
What we find here as a result is that $M = \sqrt{D} V^T$ works; however, $M$ is unique only up to left multiplication by an orthogonal matrix (in particular, up to the ordering of the eigenvalues and eigenvectors). Therefore there isn't a single matrix $M$ that accomplishes this, but a whole family of matrices. |
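A small NumPy sketch of this recovery (the example matrix is mine):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # a symmetric positive semi-definite matrix
w, V = np.linalg.eigh(A)         # A = V diag(w) V^T with w >= 0
M = np.diag(np.sqrt(w)) @ V.T    # one valid choice: M = sqrt(D) V^T
print(np.allclose(M.T @ M, A))   # True
```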
How to calculate variety of a vector under this constraint? | The problem you pose could be a challenging one, if one tries to find a general result, but for vectors with entries chosen from $1, 2, 3, 4$ there is a not too difficult solution.
Suppose that there are $d$ possible types, namely $1, 2, 3, \dots, d$, and we want to find the "variety" when there are $n$ components, and adjacent components differ by at most $1$.
Call this variety $V(d, n)$. Your question was about $V(4,10)$. By the way, $V(4,10)$ is indeed equal to $21892$, which is twice the $21$st Fibonacci number; we will get to that in a while.
It is trivial to find general formulas for $V(1,n)$ and $V(2,n)$, and not hard to find a general formula for $V(3,n)$.
Edit: I take that back, $V(3,n)$ is roughly as hard as $V(4,n)$.
I will get you to a general expression for $V(4,n)$, but have barely thought about $V(d,n)$ for $d>4$.
From now on, $d=4$, so we simplify notation slightly, but also at the same time complicate it. Also, from now on, by vector I mean vector with entries chosen from $1, 2, 3, 4$, where adjacent entries differ by at most $1$. (It is a nuisance to keep repeating that.)
For any $n$, let $G_1(n)$ be the number of vectors with $n$ components (with adjacent entries differing by at most $1$) that begin with $1$. Let $G_2(n)$ be the number of such vectors that begin with $2$, $G_3(n)$ be the number that begin with $3$, and $G_4(n)$ the number that begin with $4$.
I hope that it is obvious by symmetry that $G_3(n)=G_2(n)$ and $G_4(n)=G_1(n)$.
So we focus attention on $G_1(n)$ and $G_2(n)$.
In order to get some insight, we compute. With care, one can easily calculate up to $n=3$, maybe even to $n=4$.
Here are the results that I get.
$$G_1(1)=1, \qquad G_2(1)=1$$
$$G_1(2)=2, \qquad G_2(2)=3$$
$$G_1(3)=5, \qquad G_2(3)=8$$
$$G_1(4)=13, \qquad G_2(4)=21$$
Does this experimentation scream out a message? Our numbers are $1$, $1$, $2$, $3$, $5$, $8$, $13$, $21$, looks an awful lot like the Fibonacci numbers. For details about these, look at the Wikipedia article.
Now let's think about the original problem. Let $H(n)$ be the number of vectors of length $n$, where as usual we only care about vectors in which successive elements differ by at most $1$. Then
$$H(n)=G_1(n)+G_2(n)+G_3(n)+G_4(n)=2(G_1(n)+G_2(n))$$
If indeed $G_1(n)$ and $G_2(n)$ are always consecutive Fibonacci numbers, then
$H(n)$ is twice a Fibonacci number. (It turns out, as you can see from the short computation above, that for $H(n)$ we end up using alternate Fibonacci numbers.)
A mathematician needs not only to compute, but to prove. Indeed, the computation was just to get a feeling about what is going on.
So let us prove the result. We do it by mathematical induction.
Suppose that we know that $G_1(n)$ and $G_2(n)$ are two consecutive Fibonacci numbers. We show that $G_1(n+1)$ and $G_2(n+1)$ are the next two consecutive Fibonacci numbers.
Look first at $G_1(n+1)$. We want to count the vectors of length $n+1$ that begin with $1$. The $1$ can be followed by any vector of length $n$ that begins with $1$ or $2$. There are $G_1(n)+G_2(n)$ of these. By induction hypothesis, $G_1(n)$ and $G_2(n)$ are consecutive Fibonacci numbers, and so, by the definition of Fibonacci number, $G_1(n+1)$, their sum, is the next Fibonacci number.
Now look at $G_2(n+1)$, the number of vectors of length $n+1$ that begin with $2$.
After the $2$ must come $1$, $2$, or $3$. The number of possibilities that begin with $1$ is of course $G_1(n)$. The number of possibilities that begin with $2$ is $G_2(n)$. And the number of possibilities that begin with $3$ is $G_3(n)$, but recall by symmetry that this is $G_2(n)$.
We conclude that $G_2(n+1)=G_2(n) + (G_1(n)+G_2(n))$. Note that $G_2(n)$ is a Fibonacci number, and, by earlier work, $G_1(n)+G_2(n)$ is the next Fibonacci number, and therefore their sum $G_2(n+1)$ is the next one after that.
That's it, proof complete!
Comment: Maybe I should provide explicit formulas in terms of $F_k$, the $k$-th Fibonacci number. Define the Fibonacci numbers as usual by $F_0=0$, $F_1=1$, $F_{k+2}=F_{k+1}+F_k$. Then the reasoning above shows that the number of vectors of length $n$ with any two adjacent elements differing by at most $1$ is equal to
$$2F_{2n+1}.$$
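If you want to double-check this by brute force, here is a short Python script (mine, not part of the original argument) confirming the formula for small $n$:

```python
from itertools import product

def brute(n, d=4):
    # count vectors over {1,...,d} whose adjacent entries differ by at most 1
    return sum(all(abs(v[i+1] - v[i]) <= 1 for i in range(n - 1))
               for v in product(range(1, d + 1), repeat=n))

def fib(k):
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

for n in range(1, 8):
    assert brute(n) == 2 * fib(2*n + 1)
print(2 * fib(21))  # V(4,10) = 21892
```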
I wrote the post in a more or less stream of consciousness manner, roughly paralleling the way I found the solution. One could then polish things up, but I hope that what I wrote will be no less useful than a more concise presentation. |
Equality of these two sigma algebras? | To prove that $A_2 \subset A_1$, first, recall that $\mathbb{B} = \sigma(\text{open subsets of }\mathbb{R}).$ We already know that the intersection of any open subset of $\mathbb{R}$ with $\Omega$ is contained in $A_1$. From here, to prove that the intersection of any Borel set of $\mathbb{R}$ with $\Omega$ is contained in $A_1$, it suffices to prove the following two claims:
(1) If $\{E_n \cap \Omega\}_{n \in \mathbb{N}} \subset A_1$, then $$ \left(\cup_{n \in \mathbb{N}}E_n \right) \cap \Omega \in A_1 \text{,}$$
i.e. if $\{E_n\}$ is an indexed collection of sets so that the intersection of each with $\Omega$ is in $A_1$, then the intersection of their union with $\Omega$ is contained in $A_1$.
(2) If $E \cap \Omega \in A_1$, then $E^c \cap \Omega \in A_1$.
Proof of (1):
Let $\{E_n\}$ be as in the statement. Then
$$\left( \cup_{n \in \mathbb{N}} E_n \right) \cap \Omega = \cup_{n \in \mathbb{N}} (E_n \cap \Omega).$$ Since $A_1$ is a $\sigma$-algebra on $\Omega$ and $E_n \cap \Omega \in A_1$ for all $n$, $\cup_{n \in \mathbb{N}} (E_n \cap \Omega) \in A_1$.
Proof of (2):
Let $E$ be a subset of $\mathbb{R}$ so that $E \cap \Omega \in A_1$. Since $A_1$ is a $\sigma$-algebra on $\Omega$ (and hence closed under complementation relative to $\Omega$), we have that $\Omega \backslash(E \cap \Omega) \in A_1$.
We claim that $E^c \cap \Omega = \Omega \backslash(E \cap \Omega)$. The inclusion $E^c \cap \Omega \subset \Omega \backslash(E \cap \Omega)$ is obvious. To prove the reverse inclusion, suppose that $x \in \Omega \backslash(E \cap \Omega)$. Then $x \in \Omega \wedge (x \notin (E \cap \Omega))$, i.e. $x \in \Omega \wedge (x \notin E \lor x \notin \Omega)$. The only way for this statement to be true is to have $x \in \Omega \wedge x \notin E$.
To apply these claims, let $\mathcal{C}$ be the collection of sets $E \subset \mathbb{R}$ so that $E \cap \Omega$ is in $A_1$. The claims (1) and (2), combined with the fact that $A_1$ contains any set that is open relative to $\Omega$, show that $\mathcal{C}$ is a $\sigma$-algebra that contains the open sets of $\mathbb{R}$. Hence, by definition of $\mathbb{B}$, $\mathbb{B} \subset \mathcal{C}$. As a result, the intersection of any Borel set with $\Omega$ is contained in $A_1$. |
Integration by parts in a bit abstract form | I'll put $z_{\rm min} = a$ and $z_{\rm max} = b$.
Define the particular antiderivative $F(z) = \int_a^z{f(y)\,dy}$, so that $F^\prime(z) = f(z)$. Then $$\int_a^z{y\,f(y)\,dy} = z\,F(z) - \int_a^z{F(y)\,dy}$$
as you can verify by differentiating both sides with respect to $z$.
Put $z = b$ in the above and you get $$\int_a^b{y\,f(y)\,dy} = b\,F(b) - \int_a^b{F(y)\,dy}\,.$$
Use the definition of $F$:
$$\int_a^b{y\,f(y)\,dy} = b\,\int_a^b{f(y)\,dy} - \int_a^b{{\int_a^y{f(y^\prime)\,dy^\prime}}\,dy}\,.$$
Probably one wishes to see more $z$'s:
$$\int_a^b{z\,f(z)\,dz} = b\,\int_a^b{f(z)\,dz} - \int_a^b{{\int_a^z{f(z^\prime)\,dz^\prime}}\,dz}\,.$$
This is integration by parts "backwards." Usually we go the opposite direction with the technique. |
Why the graph of $y=x^{5/2}$ lies between $y=x^2$ and $y=x^3$? | Hint
If $0<x<1$ and $0<a<b$ then $0<x^b<x^a<1$ and if $x>1$ and $0<a<b$ then $1<x^a<x^b.$ |
Show convergence in mean | No.
For example, suppose $X(t) = a+1$ (and thus $Y(t) = a$) with probability $1/2$ and $X(t) = Y(t) = a-1$ with probability $1/2$. Thus in this case
$$ \lim_{t \to \infty} \dfrac{1}{t} {\mathbb E} \sum_{i=1}^t Y(i) = a - \frac12$$ |
A surjective, continuous and closed map is a quotient map | $p: X \to Y$ is a quotient map iff
$C \subseteq Y$ is closed iff $p^{-1}[C]$ is closed in $X$ $(a)$.
(a reformulation in terms of closed sets; follows from the standard definition as $X\setminus p^{-1}[O] = p^{-1}[Y\setminus O]$ for any $O \subseteq Y$ and any map $p$)
So if $p$ is continuous and onto, we can show $\Rightarrow$ in $(a)$ by noting that $p$ is continuous, and $\Leftarrow$ by noting that $p[p^{-1}[C]] = C$ (as $p$ is onto) and that this set is closed (as $p$ is a closed map). So $(a)$ holds for $p$, and hence $p$ is a quotient map. |
How to convert from 10s complement to base 10/decimal | But how do I convert back?
In exactly the same way. Complements are involutions.
How do I know what sign it is?
Look at the leading digit. 0..4 => positive; 5..9 => negative. |
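A sketch in Python, assuming fixed-width $k$-digit words (the helper name is mine):

```python
def from_tens_complement(digits):
    """Interpret a fixed-width decimal string as a 10's-complement value."""
    k = len(digits)
    n = int(digits)
    # leading digit 0..4 => non-negative, 5..9 => negative
    return n if digits[0] in "01234" else n - 10**k

print(from_tens_complement("9817"))  # -183, since 10000 - 9817 = 183
print(from_tens_complement("0183"))  # 183
```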
Proof of Injection and Surjection | Injective: Choose any $x_1, y_1, x_2, y_2 \in \mathbb Z$ such that $f(x_1, y_1) = f(x_2, y_2)$ so that:
\begin{align*}
5x_1 - y_1 &= 5x_2 - y_2 \\
x_1 + y_1 &= x_2 + y_2
\end{align*}
We want to show that $x_1 = x_2$ and $y_1 = y_2$. Hint: To prove the first part, begin by adding the two equations together. The second part follows by substitution.
Surjective: Choose any $a,b \in \mathbb Z$. We seek some $x, y \in \mathbb Z$ such that $f(x, y) = (a, b)$ so that:
\begin{align*}
5x - y &= a \\
x + y &= b
\end{align*}
Hint: As before, add the two equations together to solve for $x$ in terms of $a$ and $b$. Then substitute to solve for $y$ in terms of $a$ and $b$. |
New to probability - Is this true? | Let's find the probability of, say, rolling a 5. You could roll a 5 right away with probability $\frac{1}{20}$, roll a number over 12 and then roll again and get a 5 with probability $\frac{8}{20} \frac{1}{20}$, roll two numbers over 12 and then roll a 5 with probability $\frac{8}{20} \frac{8}{20} \frac{1}{20}$, etc. All these ways of rolling a 5 are mutually exclusive so their probabilities sum up. In total the probability of eventually rolling a 5 is given by the geometric series below, which I evaluated using the usual geometric series sum formula.
$$S = \frac{1}{20} + \frac{8}{20} \frac{1}{20} + \frac{8}{20} \frac{8}{20} \frac{1}{20} + \ ... \ = \sum_{i=0}^{\infty} \frac{1}{20} \left(\frac{8}{20}\right)^i = \frac{\frac{1}{20}}{1-\frac{8}{20}} = \frac{1}{12}$$
The fact that when you reduce the sample space there are only 12 of the 20 equally likely possible outcomes remaining, one of which must occur eventually, is a nice informal way to find the sum of this infinite series. |
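A quick Monte Carlo check of the $\frac1{12}$ (my own sketch, assuming a 20-sided die where any result above 12 forces a reroll):

```python
import random

random.seed(1)
trials, hits = 10**6, 0
for _ in range(trials):
    roll = random.randint(1, 20)
    while roll > 12:                 # outcomes over 12 are rerolled
        roll = random.randint(1, 20)
    hits += (roll == 5)
print(hits / trials, 1 / 12)         # both about 0.0833
```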
Construct a fourth circle tangent to 3 circles forming a tangency chain | Assume that $X$ is the tangency point of $A$ and $B$. By applying a circle inversion with respect to $X$ the problem boils down to finding a circle tangent to two parallel lines and a third circle. This is equivalent to intersecting a line and a circle. Invert back and you are done. |
show that $\mathsf E(Y|X)=\mu_2+\rho\dfrac{\sigma_2}{\sigma_1}(X-\mu_1)$ | Let $E[Y|X] = aX +b$. Taking expectations w.r.t X,
$$E_X[E[Y|X]] = aE[X] + b$$
$$E[Y] = aE[X] + b$$
Therefore,
$$\mu_2 = a\mu_1 + b \quad (1)$$
Now, taking variance of $E[Y|X]$ w.r.t X,
$$Var_X[E[Y|X]] = a^2Var_X[X] \quad (2)$$
Rewriting E[Y|X] we get,
$$E[Y|X] - aX = b$$
Taking variance w.r.t X, we get,
$$Var_X[E[Y|X]] + a^2Var[X] - 2aCov(X, E[Y|X]) = 0$$
Now,
$$Cov(X, E[Y|X]) = E[XE[Y|X]] - E[X]E_X[E[Y|X]] = E[XY] - E[X]E[Y] = Cov(X, Y)$$
Therefore,
$$Var_X[E[Y|X]] = 2aCov(X, Y) - a^2Var[X] \quad (3)$$
Comparing (2) and (3), we get,
$$a = \frac{Cov(X, Y)}{Var[X]} = \rho\frac{\sigma_2}{\sigma_1} \quad (4)$$
From (1) and (4), we get,
$$b = \mu_2 - \rho\frac{\sigma_2}{\sigma_1}\mu_1$$
Substituting back a and b, we get,
$$E[Y|X] = \mu_2 + \rho\frac{\sigma_2}{\sigma_1} (X - \mu_1)$$
For the next part, use the law of total variance and the result is obtained. |
Show that if a trapping region Q is path connected, the basin of Q must also be path connected | My comment may make for a good enough answer. There are two points:
1) With your definitions, the basin of attraction may not be path connected, even if the trapping region is.
The problem is the one you pointed out in your question: the path between $x$ and $y$ may include points which are in $Q$, but which are not in the basin of attraction of $Q$.
For instance, take:
$$F (x,y) = \left(x,y^3+\left(1- \frac{\cos(x)}{2} \right)y \right).$$
Then, for fixed $x$, the point $(x,0)$ is attracting if $x \in (-\pi/2, \pi/2)+2\pi \mathbb{Z}$, neutral repulsive if $x \in \pi/2 + \pi \mathbb{Z}$, and repulsive otherwise.
Hence, $F(\mathbb{R} \times \{0\}) = \mathbb{R} \times \{0\}$, and you can find $\varepsilon > 0$ such that $F(\overline{B}(0, \varepsilon)) \subset B(0, \varepsilon) \cup \{(\pm \varepsilon,0)\}$. Now, take:
$$Q := \overline{B}(0, \varepsilon) \cup \overline{B}(2 \pi, \varepsilon) \cup ([0, 2\pi]\times \{0\}).$$
This set has a barbell shape. You can check that, by your definition, it is a trapping region. Its interior is $B(0, \varepsilon) \cup B(2\pi, \varepsilon)$. Hence, the basin of attraction of $Q$ is:
$$\{(x,y) : x \in (- \varepsilon, \varepsilon) + \{0, 2\pi\}, \lim_{n \to + \infty} F^n (x,y) = (x,0) \}.$$
In particular, it has two connected components, separated by a distance of $2(\pi-\varepsilon)$. It is not path-connected.
2) There are better definitions of a trapping region.
We may alter the definitions so that the basin of attraction has to be path-connected. For instance, we may require that $bas(Q) = \bigcup_{n \geq 0} F^{-n} (Q)$, or that $Q$ be open - both these tweaks would imply that $bas(Q)$ is path connected. However, these modifications would be non standard. A quick look at Wikipedia suggests that the problem comes from the definition of a trapping region, the proper definition being that:
$Q$ is a trapping region if it is compact and $F(Q) \subset int(Q)$.
This gives us two more properties to play with. Then, we can show that if $Q$ is path-connected, then so is $bas(Q)$.
Let $U := int(Q)$. Since $Q$ is compact and $F(Q) \subset U$, there exists $\varepsilon > 0$ such that $d(F(Q),U^c) \geq \varepsilon$. Then, $Q' := F(Q)+B(0,\varepsilon) \subset U$ and $Q'$ is open and path-connected. In addition, given $x$ and $y$ in $bas(Q)$, there exists $j$ such that $F^j (x)$ and $F^j (y)$ are in $Q$, so that $F^{j+1} (x)$ and $F^{j+1} (y)$ are in $Q'$.
Now, take a path between $F^{j+1} (x)$ and $F^{j+1} (y)$ in $Q'$, and apply $F^{-(j+1)}$. You get a path between $x$ and $y$ which lies entirely in $bas(Q)$. |
Determine the number of strings that start and end with the same letter | The number of $26$-character strings from a $26$-letter alphabet isn't $26\times 26$, it's $26\times 26\times\cdots\times 26=26^{26}$. Multiplying that by one more factor of $26$ to account for the first/last character should give you your answer, $26^{27}$. |
How to define the classical Lie algebras using the GAP package? | Use the user-contributed Lie algebra package by Willem de Graaf and Thomas Breuer for GAP, see here. |
Please show detailed steps of double integration of absolute difference | The problem is with your integration limits. You integrate $y$ from $0$ to $2$ and $x$ from $y$ to $1$. But for, say, $y=1.5$, that is the wrong way around. If you integrate $x$ from $0$ to $1$ and then split the $y$ integration from $0$ to $x$ and from $x$ to $2$, you get the right answer. |
Taylor series in two variables | Usually $df$ denotes the total derivative. In that case, yes, you are right and
$$df=\frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial t}dt.$$
However, in the article, the author is expanding $f$ into its Taylor series. The Taylor series of $f$ (expanded about $(x,t)=(a,b)$) is:
$$f(x,t)=f(a,b)+f_x(a,b)\cdot (x-a)+f_t(a,b)\cdot (t-b)+\frac{1}{2}f_{xx}(a,b)\cdot (x-a)^2+$$ $$\frac{1}{2}f_{xt}(a,b)\cdot (x-a)(t-b)+\frac{1}{2}f_{tx}(a,b)\cdot (x-a)(t-b)+
\frac{1}{2}f_{tt}(a,b)\cdot (t-b)^2+\cdots$$
Now, think "dX" means "change in X". So $df=f(x,y)-f(a,b)$, $dx=x-a$, and $dt=t-b$. Thus
$$df = f_x dx + f_t dt + \frac{1}{2}f_{xx} dx^2 + \frac{1}{2}f_{xt} dx dt + \frac{1}{2}f_{tx} dx dt + \frac{1}{2}f_{tt} dt^2 + \cdots$$
The total derivative is just the linear approximation of $f$ whereas the Taylor series takes into account higher order terms as well. |
Geometric interpretation of projecting a matrix | Write $A=[a_1\ a_2\ \cdots\ a_n]$, where the $a_i$ are columns.
If $P_s A=0$, then $P_s a_i=0$ for each $i$, and so $col(A) \perp S$.
If $col(A) \perp S$, then $P_s a_i =0$ and then $P_s A =0$. |
Representing a matrix differential equation as a gradient of a function | If $Ax$ were the gradient of $U$ we'd have $$A_{ij} = \dfrac{\partial}{\partial x_i} (A x)_j = \dfrac{\partial^2}{\partial x_i \partial x_j} U = \dfrac{\partial^2}{\partial x_j \partial x_i} U = \dfrac{\partial}{\partial x_j} (A x)_i = A_{ji}$$
so $A$ would have to be symmetric. |
Combinatorial problem on 2 competing players and a fair pair of die. Need help in understanding the given solution | The $\frac{5}{36}$ comes from $P(A)$: we can throw $6$ via $(1,5), (2,4), (3,3), (4,2), (5,1)$, so there are $5$ ways out of $36$ (we are throwing two dice).
The probability of $3$ is $\frac{2}{36}$ from the possibilities $(1,2), (2,1)$. And for 5 we have 4 options: $(1,4),(2,3),(3,2),(4,1)$, so in total $\frac{6}{36} = P(B)$.
$P(C) = 1 - P(A) - P(B)$ as the options are all mutually exclusive and cover all options.
The options are enumerated as (value of die 1, value of die 2) above. There is only one way to have $(3,3)$, so we count it once. (If you see a $4$ and a $2$, there are two ways this could happen: $(4,2)$ and $(2,4)$; imagine that the dice are different colours so they can be distinguished, so the two outcomes would look different as well. $(3,3)$ can only happen one way.)
The final part, where we replace $P(E | C)$ by $P(E)$, is the following reasoning: after one throw, we either have a winner (the first two options) or the throw was inconclusive. After an inconclusive throw (so in the case that $C$ occurs) the chances are still the same as they were before we first threw the dice; we have a "reset". So the chance that Alice wins after an inconclusive throw ($P(E|C)$) is the same as her original chance $P(E)$. |
maximum using completing the square | I understand your confusion, given the constraints. To maximize the area given that the length is to be $7$ more than $5$ times the width, the length and width are determined by using all of the $98$ yards of fencing:
We have $\mathcal l = 7 + 5w\tag{1}$
So the perimeter $98 = 2\mathcal l + 2w$ using $(1)$ gives us $$2(7+5w) + 2w = 98 \implies 14 + 12 w = 98 \iff 12 w = 84 \iff w = 7$$
Then expanding on $A = \mathcal l\cdot w = (7 + 5w)w = 7w + 5w^2$
we have $$A = 7w + 5w^2 = 49 + 5\cdot 49 = 6\cdot 49 = 294\;\;\text{sq. yards}$$ |
Piecewise Functions difficulty | Answer
He jumps at $t=0$; to find when he lands, set the height function equal to $0$.
\begin{align*}
0 & = -8.9t + 1509\\
8.9t & = 1509\\
t &= 169.55
\end{align*}
He lands after a little under $3$ minutes ($t \approx 169.55$ seconds). |
Two problems about Structure Theorem for finitely generated modules over PIDs | 1) The matrix $A$ has the characteristic polynomial $(X^2+1)^2$. Then its minimal polynomial can be $X^2+1$ or $(X^2+1)^2$.
If the minimal polynomial of $A$ is $X^2+1$, then its characteristic matrix is equivalent to the canonical diagonal matrix having $1,1,X^2+1,X^2+1$ on the main diagonal. This corresponds to the decomposition of the $\mathbb R[X]$-module $\mathbb R^4$ given by $$\mathbb R[X]/(X^2+1)\oplus\mathbb R[X]/(X^2+1).$$
In this case your matrix is similar to
\begin{pmatrix}
0&-1&0&0\\
1&0&0&0\\
0&0&0&-1\\
0&0&1&0
\end{pmatrix}
which is a matrix made by two (identical) companion matrices associated to the polynomial $X^2+1.$
On the other side, if the minimal polynomial of $A$ is $(X^2+1)^2$, then its characteristic matrix is equivalent to the canonical diagonal matrix having $1,1,1,(X^2+1)^2$ on the main diagonal. This corresponds to the decomposition of the $\mathbb R[X]$-module $\mathbb R^4$ given by $\mathbb R[X]/((X^2+1)^2)$. In this case your matrix is similar to
\begin{pmatrix}
0&0&0&-1\\
1&0&0&0\\
0&1&0&-2\\
0&0&1&0
\end{pmatrix}
which is the companion matrix associated to the polynomial $(X^2+1)^2$.
As you can see this is not the given matrix, so you need to work more. What you have to do now is to show that this matrix and the given one are similar and to prove this use the following result: two matrices are similar iff their characteristic matrices are equivalent. This means that you have to consider the characteristic matrix of the given matrix and to show that it has the same canonical diagonal form as the matrix above.
2) I don't know if the first question has something to do with the structure theorem of modules over a PID, but the second definitely has via the Smith Normal Form. Here $A\mathbb Z^3$ denotes the $\mathbb Z$-submodule of $\mathbb Z^3$ generated by the rows (or columns) of $A$ and coincides to the $\mathbb Z$-submodule of $\mathbb Z^3$ generated by the rows (or columns) of its SNF. Since the SNF of $A$ is
\begin{pmatrix}
1&0&0\\
0&3&0\\
0&0&0
\end{pmatrix}
we get that $\mathbb{Z}^3/A\mathbb{Z}^3$ is isomorphic to $\mathbb Z/3\mathbb Z\oplus\mathbb Z$. |
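In recent versions of SymPy this computation can be reproduced directly. Since the question does not show $A$ itself, the matrix below is a hypothetical one with the same Smith normal form $\operatorname{diag}(1,3,0)$:

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

A = Matrix([[1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]])  # hypothetical matrix; its SNF is diag(1, 3, 0)
print(smith_normal_form(A, domain=ZZ))
```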
Any relation between primorial numbers and oblong (n(n+1)) numbers? | This PARI/GP program
(22:05) gp > n=1;s=1;while(n<10^6,n=nextprime(n+1);s=s*n;if(issquare(4*s+1)==1,print(n," ",s)))
2 2
3 6
5 30
7 210
17 510510
(22:11) gp >
shows that up to $10^6\#$, there is no primorial larger than $510510$ of the form $n(n+1)$. (Note that $s=n(n+1)$ for some positive integer $n$ if and only if $4s+1=(2n+1)^2$ is a perfect square, which is exactly what the program tests; indeed $510510=714\cdot 715$.) This leads to the strong conjecture that there are no more. Proving it will be very difficult, I guess. |
Find the remainder when $11^{2013}$ is divided by $ 61$ | Pattern observation:
remainder of $11^0 / 61 $ = $1$
remainder of $11^1 / 61 $ = $11$
remainder of $11^2 / 61 $ = $-1$
remainder of $11^3 / 61 $ = $-11$
Thereafter the remainders repeat as $1, 11, -1, -11, 1, 11, -1, -11, \dots$
so the remainders repeat with period $4$.
As $2013$ divided by $4$ gives remainder $1$, $11^{2013}$ has the same remainder as $11^1 / 61$.
Hence answer is $11$ |
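One can confirm both the period and the final answer in one line of Python:

```python
print(pow(11, 2, 61), pow(11, 4, 61), pow(11, 2013, 61))  # 60 (= -1), 1, 11
```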
What is the physical significance of integration of xf(x)? | Imagine a random variable with density $f(x)$. This means, intuitively, that the probability $X$ will fall in the interval $[x,x+dx)$ is $f(x)dx$. Hence the expected value of $X$ is summing up the values of $X$ multiplied by the "probability" that $X$ assumes $x$. In this sense $\int xf(x)dx$ is the expectation or weighted mean (centre of gravity) of $X$. |
Can we Arrange subsets of {1,...,8}, each on of size 2, in cycle such that any two subsets appear next to each other iff they disjoint? | Ok I got this:
Let's define $G=\langle V,E\rangle$ such that $$V=\left \{\left \{ a,b\right\}| a,b \in A \wedge a \neq b\right \}$$ $$\left \{ \left \{ a, b \right \}, \left \{ c, d \right \} \right \} \in E \Leftrightarrow \left \{a, b \right \} \cap \left \{ c, d\right \}=\varnothing$$
Notice that $|V|=28$: this is the number of ways to choose a subset of size $2$, out of a set of size $8$.
Now, for every $v \in V$, there are exactly $15$ neighbours of $v$:
we want to choose a subset of size $2$ that is disjoint from $v$, so we choose a subset of size $2$ out of a set of size $6$ (the elements of $A$ not in $v$), giving $\binom{6}{2}=15$.
$\delta (G) =15 \geq \frac{|V(G)|}{2} = \frac{28}{2} = 14$, so, as Dirac's theorem implies, there is a Hamiltonian cycle in $G$. And we use this cycle as the arrangement. |
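A short sanity check of the vertex and degree counts with itertools (sketch of mine):

```python
from itertools import combinations

V = list(combinations(range(1, 9), 2))  # all 2-subsets of {1,...,8}
degree = {v: sum(1 for u in V if not set(u) & set(v)) for v in V}
print(len(V), set(degree.values()))     # 28 {15}
```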
Problem with sorting inside a loop in GAP | Just -- for future readers -- the reason for the initial problem:
SortBy changes a list (but does not return the sorted list), while e.g. Sorted would leave the list unchanged and return the sorted copy. The general language convention is that verbs do something to an object, while nouns create a new object with the desired characteristics. |
Exception to definition of a function. | You are right about the second bullet point, unless one restricts the domain to $[0,\infty)$, say.
However, it is not correct (under the generally used definition of square root) that $r(4)=-2$. For non-negative $x$, we define $\sqrt x$ to be the unique non-negative real number $y$ such that $y^2=x$. We do not define $\sqrt x$ as "just any" number $y$ such that $y^2=x$. |
Suppose $\|x_n\|\leq 1$ for all $n\geq 1$ and $x_n\to x$ weakly as $n\to\infty$. Is it true that $\|x\|\leq 1$? | Yes, the norm is (sequentially) weakly lower-semicontinuous: if $x_n$ converges weakly to $x$, then $||x|| \le \liminf_n ||x_n||$.
Weak lower semicontinuity of a functional on Hilbert space?
The proof is here. |
Open covers by simply connected sets and fundamental group | The many base point version of the Seifert-van Kampen Theorem is covered in the book Topology and Groupoids (T&G). For a recent preprint on the result required, see arXiv:1404.0556.
There is discussion of the many base point situation at this mathoverflow question.
Later: To be more specific the result on groupoids given in the preprint is as follows:
Theorem Let
$$\begin{matrix} C & \to & B \\
\downarrow && \downarrow\\
A & \to & G
\end{matrix}
$$
be a pushout of groupoids. We assume $C$ is totally disconnected, that $i:C \to A,j:C \to B$ are the identity on objects, and that $G$ is connected. Then $G$ contains as a retract a free groupoid $F$
of rank $$k=n_C-n_A-n_B+1,$$ where $n_P$ is the number of components of the groupoid $P$ for $P=A,B,C$.
Further, if $C$ contains distinct objects $a,b$ such that $A(ia,ib),B(ja,jb)$ are nonempty, then $F$ has rank at least $1$.
(By the rank of a connected free groupoid we mean the rank of any of its vertex groups.)
January 23, 2017 To deal with more than two open sets and with many base points you need the main result of this paper (1984) which gives a generalisation to many base points of a result in Hatcher's book. It is also useful to have the notion of free groupoid, which is dealt with in Higgins' downloadable (1971) book Categories and Groupoids as well as in T&G.
See this mathoverflow question for a comment of Grothendieck on using many base points. |
How to find minimal distances route for a trip of $t$ days, given distances for each stop? | I observe that:
you are required to stay at a different hotel for $t$ nights and
it never makes sense to go backwards.
We can compute the minimal cost recursively on $t$, the number of remaining stops to make.
Define $C(r,m)$ as the minimal cost of a trip that starts at hotel $m$ and visits $r$ different hotels, each with index $> m$.
We then have:
$C(r,m) = \min_{m<i< n-r} \left[ (p_i-p_m)^2 + C(r-1,i) \right]$ if $r>0$ and
$C(0,m) = (p_n - p_m)^2$ (to account for the last day, I assume $p_n$ represents $f$).
If we also define $p_0=0$, the desired answer is $C(t,0)$. |
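A memoized Python sketch of this recursion (the function names and example data are mine):

```python
from functools import lru_cache

def min_trip_cost(p, t):
    # p[0] = 0 is the start, p[-1] = f is the destination, and p[1:-1]
    # are the hotel positions in increasing order; a leg from position
    # u to position v costs (v - u)**2.
    n = len(p) - 1  # index of f

    @lru_cache(maxsize=None)
    def C(r, m):
        if r == 0:                        # no stops left: go straight to f
            return (p[n] - p[m])**2
        # after stop i we still need r-1 hotels with index > i, so i <= n - r
        return min((p[i] - p[m])**2 + C(r - 1, i)
                   for i in range(m + 1, n - r + 1))

    return C(t, 0)

# hotels at 3, 7, 12; destination at 20; two nights on the road
print(min_trip_cost([0, 3, 7, 12, 20], 2))  # 138
```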
How to show the integral $\int_e^\infty \left(\frac{e}{t}\right)^t dt$ converges? | For $t > 2e$, you have
$$\left(\frac et\right)^t \le \left(\frac 12 \right)^t $$
and of course
$$\int_{2e} ^\infty \left(\frac 12 \right)^t dt $$ converges |
Prove that $\frac{x^n}{n!} < 1$ for $n$ sufficiently large | You are pretty spot on with your observation concerning the power series expansion of $e^x$; in fact, you are so close to a proof that it's quite surprising you haven't finished it yourself! Because I believe that you'll learn more if you have to work out the appropriate gritty details, I'm only going to provide an outline, but I promise that the legwork of filling in the gaps will be worth it.
1) We know that $e^x$ has a power series expansion given by $\sum_{n=0}^\infty\frac{x^n}{n!}$.
2) By the ratio test, you can determine the radius of convergence of this series, finding that it converges on all of $\mathbb{R}$.
3) Since $\sum_{n=0}^\infty\frac{x^n}{n!}$ converges for all $x$, you know that the terms tend to $0$ in the limit as $n \rightarrow \infty$, and therefore are eventually smaller than any fixed positive constant you choose.
Hopefully that skeleton helps you flesh out the proof you are looking for! |
Expansion of $\log |1+x|$? | For this function, $x=0$ is an ordinary point, as $|x+1|$ is differentiable there. The trouble is with $x=-1$. |
First-order stochastic dominance and truncation | Let's define the truncated distribution functions as:
$F_k(x)=a_kF(x)\mathbf{1}_{\leq k}(x)$ and $G_k(x)=b_kG(x)\mathbf{1}_{\leq k}(x)$ for $0 <b_k=\frac{1}{G(k)}\leq a_k=\frac{1}{F(k)}$.
Assume Condition 1: $\exists(k\in [0,1], c_k\in [0,k]): \frac{G(c_k)}{F(c_k)}<\frac{G(k)}{F(k)}$.
Then, $\frac{b_kG(c_k)}{a_kF(c_k)}=\frac{F(k)G(c_k)}{G(k)F(c_k)}<1$
Since we are talking about arbitrary distributions, first order stochastic dominance is not stable under half-truncations, as Condition 1 does not preclude $F$ from unconditionally dominating $G$. |
Calculating area of cube that has same volume as sphere with known area | That equation can be abbreviated as:
$6\cdot V_{sphere}^{1/3}$
However, $V_{sphere}^{1/3}$ is the length of each edge of the cube; you need the area of each face. The equation should be $6\cdot V_{sphere}^{2/3}$. This equation will get you the expected answer. The unabbreviated version of the true equation is:
$$ A_{cube} = 6 \cdot \Biggl(\sqrt[3]{\frac{4\pi}{3}\cdot\biggl(\sqrt{\frac{A}{4\pi}}\biggr)^3}\Biggr)^2 $$ |
Kakutani skyscraper is infinite | Note: Petersen assumes that $T$ is bijective a.e. (p.2)
This one is a bit tricky, but I think the following intuition can be made to work: We construct a gasket of positive measure by deleting countably many sets from $X$, leaving a set of positive measure with unbounded return time.
With the Kakutani-Rokhlin Lemma (Lemma 4.7, p. 48) we can find some set $A$ with measure less than a half, such that $TA$ and $A$ are disjoint. Delete $TA$ from $X$, then the essential supremum of $n_{X/TA}$ is at least $2$ (on $A$) and $\mu(X/TA)>1/2$.
Looking at the skyscraper of how $A$ was constructed in the Kakutani-Rokhlin Lemma, it should be possible to construct a sufficiently small set of positive measure $B\subset A$ with $TB,T^3B\subset TA$ and $T^2B\subset A$ and $B,TB,T^2B,T^3B$ disjoint. Delete $T^2B$ from $X/TA$, so that the resulting set has return time at least $4$ on B.
Continuing in this way, with successive (not too large) deletions from $X$, should yield the set we're looking for. I'll leave the exact details to the reader. |
Limits in a functor category do what precisely to morphisms? | The same formula applies: $(\lim_j F_j)f = \lim_j F_jf$.
The precise way of stating this is to say that $\lim F : \textbf C → \textbf D$ is the composition $\lim{} ∘ \tilde F$, where $\tilde F : \textbf C → \textbf D^J$ is the functor $C ↦ F_{(-)}C$.
Now given a morphism $f : A → B$, $\tilde F$ maps it to the natural transformation $(\tilde F_jf)_j : \tilde FA ⇒ \tilde FB$, and in general, given a natural transformation $α : G ⇒ H$ between two diagrams $J → \textbf D$, the functor $\lim$ associates to it the obvious induced morphism $\lim α : \lim G → \lim H$, ie. the one for which $π_j \lim α = α_j π_j$ holds. This last part, taking the limit of a natural transformation, is something that's often not mentioned (except for (co)products), but it's simple, and can often be useful. |
Complicated non-linear inequality reduces to simple linear equality. How? | Yes, you are right:
$$8xy+x^2y+2-(3x^3+5x^2+3xy^2+4x-y^3+y^2)=$$
$$=(y+1-3x)((x+1)^2+(y-1)^2)\geq0.$$
We can rewrite our inequality in the following form:
$$y^3-(3x+1)y^2+(x^2+8x)y-3x^3-5x^2-4x+2\geq0.$$
Now, easy to see that $\frac{1}{3}$ is a root of $3x^3+5x^2+4x-2,$ which gives:
$$3x^3+5x^2+4x-2=3x^3-x^2+6x^2-2x+6x-2=(3x-1)(x^2+2x+2).$$
Id est, we need to prove that
$$y^3-(3x+1)y^2+(x^2+8x)y-(3x-1)(x^2+2x+2)\geq0.$$
Now, we can check that $y-3x+1$ is a factor.
Indeed, $$y^3-(3x+1)y^2+(x^2+8x)y-(3x-1)(x^2+2x+2)=$$
$$=y^3-(3x+1)y^2+6xy-2y+(x^2+2x+2)y-(3x-1)(x^2+2x+2)=$$
$$=y^3-(3x-1)y^2-2y^2+6xy-2y+(x^2+2x+2)y-(3x-1)(x^2+2x+2)=$$
$$=(y-(3x-1))(y^2-2y+x^2+2x+2)=(y-3x+1)((x+1)^2+(y-1)^2)\geq0.$$ |
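The factorisation is easy to confirm with SymPy:

```python
import sympy as sp

x, y = sp.symbols('x y')
expr = (y**3 - (3*x + 1)*y**2 + (x**2 + 8*x)*y
        - (3*x - 1)*(x**2 + 2*x + 2))
print(sp.factor(expr))  # (y - 3*x + 1)*(x**2 + 2*x + y**2 - 2*y + 2), up to ordering
```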
Show that fourier transformation is a circle | Note that
$$
\hat{f}(\omega) = \frac{1}{\rho + i \omega}\frac{\rho - i \omega}{\rho - i \omega} = \frac{\rho - i\omega}{\rho^2 + \omega^2}
$$
Therefore
$$
x = \Re(\hat{f}(\omega)) = \frac{\rho}{\rho^2 + \omega^2} ~~~\mbox{and}~~~
y = \Im(\hat{f}(\omega)) = -\frac{\omega}{\rho^2 + \omega^2}
$$
We then have
\begin{eqnarray}
\left(x - \frac{1}{2\rho} \right)^2 + y^2 &=& \left(\frac{\rho}{\rho^2 + \omega^2} - \frac{1}{2\rho} \right)^2 + \left(- \frac{\omega}{\rho^2 + \omega^2} \right)^2 \\
&=& \frac{1}{(\rho^2 + \omega^2)^2} \left[ \left(\rho - \frac{\rho^2 + \omega^2}{2\rho} \right)^2 + \omega^2\right] \\
&=& \frac{1}{(\rho^2 + \omega^2)^2} \left[ \left(\frac{2\rho^2 - \rho^2 - \omega^2}{2\rho} \right)^2 + \omega^2\right] \\
&=& \frac{1}{4\rho^2(\rho^2 + \omega^2)^2} \left[ (\rho^2 - \omega^2) + 4\rho^2 \omega^2\right] \\
&=& \frac{1}{4\rho^2(\rho^2 + \omega^2)^2} \left[ \rho^4 - 2\rho^2\omega^2 + \omega^4 + 4\rho^2 \omega^2\right] \\
&=& \frac{1}{4\rho^2(\rho^2 + \omega^2)^2} (\rho^2 + \omega^2)^2 \\
&=& \frac{1}{4\rho^2}
\end{eqnarray}
So the locus of $\hat{f}(\omega)$ on the complex plane is a circle of radius $\frac{1}{2\rho}$ centered at $x = \frac{1}{2\rho}$, $y = 0$ |
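A quick numerical confirmation (mine) that the points $\hat{f}(\omega)$ keep constant distance $\frac{1}{2\rho}$ from the claimed centre:

```python
import numpy as np

rho = 2.0
w = np.linspace(-50.0, 50.0, 1001)
fhat = 1.0 / (rho + 1j*w)              # the Fourier transform values
dist = np.abs(fhat - 1.0/(2*rho))      # distance to the centre (1/(2 rho), 0)
print(np.allclose(dist, 1.0/(2*rho)))  # True
```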
How do you find the minimum percentage who have studied all four subjects? | Consider two subjects first. Suppose a certain fraction take subject A and another fraction take B; then we can imagine shading in a bar with those fractions.
Then we are free to 'shift' these two colours so as to minimise the overlap.
Once you can solve this for two subjects then JiK has explained how to solve it for an arbitrary number. |
Correlation between min and max of two uniform variables | $$\overline A=\int_0^1\int_0^1\min(x,y)\,dx\,dy=\int_0^1\left[\int_0^y x\,dx+\int_y^1y\,dx\right]dy=\int_0^1\left[\frac{y^2}2+y(1-y)\right]dy\\
=\frac13.$$
$$\overline B=\int_0^1\int_0^1\max(x,y)\,dx\,dy=\int_0^1\left[\int_0^y y\,dx+\int_y^1x\,dx\right]dy=\int_0^1\left[y^2+\frac{1-y^2}2\right]dy\\
=\frac23.$$
$$\overline{AB}=\int_0^1\int_0^1\min(x,y)\max(x,y)\,dx\,dy=\int_0^1\left[\int_0^y xy\,dx+\int_y^1yx\,dx\right]dy\\
=\int_0^1\int_0^1xy\,dx\,dy=\frac14.$$
It remains to compute the standard deviations. |
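(Both variances work out to $\frac1{18}$, and the covariance is $\frac14-\frac13\cdot\frac23=\frac1{36}$, so the correlation is $\frac12$.) A Monte Carlo sketch of mine agrees:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.random(10**6), rng.random(10**6)
a, b = np.minimum(x, y), np.maximum(x, y)
print(np.corrcoef(a, b)[0, 1])  # about 0.5
```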
Using the triangle inequality on a metric which measures dissimilarities in co-ordinates | We have $d(x, y) =0\iff x$ and $y$ differ in $0$ coordinates, i.e. iff $x=y$.
Now, for $x, y$ denote $D(x, y)$ the set of coordinates $\subseteq\{1,2,3\} $ where they differ.
Then we have $d(x, y) =|D(x, y) |$ and
$$D(x, z) \subseteq D(x, y) \cup D(y, z)$$
If $x_i=y_i$ and $y_i=z_i$ then $x_i=z_i$. So if $i\in D(x, z) $ it can't be the case that $i\notin D(x, y) $ and $i\notin D(y, z) $. |
Why are extraneous solutions created here? | That is simply because
$$A^2=B^2\iff A=B\quad\text{ OR }\quad \color{red}{A=-B}.$$ |
Exact sequences with parallel arrows | A definition of ‘exact’ for an augmented (semi)simplicial object can be found in Tierney and Vogel (1969), Simplicial resolutions and derived functors (MR, DOI 10.1007/BF01110914):
Definition. For a projective class $\mathcal{P}$, an object $A$ and a (semi)simplicial object $X_\bullet$, we say $\partial^0 : X_0 \to A$ is $\mathcal{P}$-exact if $\partial^0$ is a $\mathcal{P}$-epimorphism and, for every $n$, the comparison morphism $X_{n+1} \to K_{n+1}$ is a $\mathcal{P}$-epimorphism, where $K_{n+1}$ is the simplicial kernel of $$X_n \mathrel{\mbox{$\begin{matrix} \smash{\to} \newline \smash{\scriptstyle\vdots} \newline \smash{\to} \end{matrix}$}} X_{n-1}$$
i.e. we have morphisms $k^{n+1}_{0}, \ldots, k^{n+1}_{n+1} : K_{n+1} \to X_n$ which are universal with respect to the property that
$$\partial^n_i \circ k^{n+1}_i = \partial^n_i \circ k^{n+1}_{i+1}$$
(Recall that, by the simplicial identities, $\partial^n_i \circ \partial^{n+1}_i = \partial^n_i \circ \partial^{n+1}_{i+1}$, so there is a comparison morphism $X_{n+1} \to K_{n+1}$ by universality.)
In particular, in a regular category, if $\mathcal{P}$ is the class of regular projectives, and $X_0 \to A$ is $\mathcal{P}$-exact, we have a kernel pair diagram
$$K_1 \rightrightarrows X_0 \to A$$
which is also a coequaliser diagram (by unique image factorisation), so we have an exact fork. |
Is there a name for function $f(x)=\max(a, \min(x, b))$, where $a \leq b$? | I don't think there is any official "name" for this function. It is just a fancy way to express a piecewise linear function. It is very likely that people write it in such a compact way for a purpose of programming.
When $a=1$ and $b=4$, the graph of the function is constant at $1$ for $x\le 1$, follows the line $y=x$ for $1\le x\le 4$, and is constant at $4$ for $x\ge 4$. |
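In programming contexts this clamping pattern is common; a minimal Python sketch (mine):

```python
def f(x, a=1.0, b=4.0):
    # clamp x into [a, b]: identity in the middle, constant outside
    return max(a, min(x, b))

print([f(v) for v in (0.0, 1.0, 2.5, 4.0, 6.0)])  # [1.0, 1.0, 2.5, 4.0, 4.0]
```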
Particular series on Hilbert Space | Some hints: let $F_N$ be the subspace generated by $\{x_1,\dots,x_N\}$. It's a closed subspace of $H$. So we can consider the projection over this subspace. This gives that $\sum_{n=1}^N|\langle x,x_n\rangle|^2\leqslant \lVert x\rVert^2$.
For the equivalence, you can compute the norm of the partial sums. |
Concentric circles, and distances from one point to the endpoints of a diameter of the other. | It's not so difficult if you make a good drawing. First draw a picture with two concentric circles in the xy plane. For simplicity's sake, draw one circle with radius $1$ and the other with radius $3$. In the small circle, draw a diameter that makes a $45$ degree angle with the positive x-axis. Let's call the end points $A$ and $B$.
Second, draw any point $C$ in the 4th quadrant on the bigger circle AND draw a point $D$ in the 2nd quadrant also on the bigger circle exactly opposite of the point in the 4th quadrant.
Now I want you to connect these four points to form a quadrilateral. Note that $AB$ is the diameter of the smaller circle and $CD$ the diameter of the bigger circle. The diameters bisect each other. Take a close look at the quadrilateral $ACBD$. What do you notice? How does that property of the parallelogram's sides and diagonals help you here? The other case goes similarly. |
Maximal ideal on a distributive lattice is prime | The ideal generated by a subset $S$ of a lattice $A$ can be described as $J=\{x\in A: x\leq s_1\vee\dots\vee s_n \text{ for some }s_1,\dots,s_n\in S\}$. Indeed, it is clear that any ideal containing $S$ must contain $J$, and conversely $J$ is an ideal since if $x\leq s_1\vee\dots\vee s_n$ and $y\leq s_1'\vee\dots \vee s_m'$ then $x\vee y\leq s_1\vee\dots \vee s_n\vee s_1'\vee\dots\vee s_m'$.
So in your setup, if $\overline{I}=A$, then in particular $b\in \overline{I}$, so there exist $s_1,\dots,s_n\in I\cup\{a\}$ such that $b\leq s_1\vee\dots\vee s_n$. Now let $p$ be the join of all the $s_i$ that are in $I$. Since $I$ is an ideal, $p\in I$, and $s_1\vee\dots\vee s_n\leq p\vee a$ (since every $s_i$ that is not in $I$ must be equal to $a$). Thus $b\leq p\vee a$, as desired. |
A set of objects that satisfy $a^2 = \alpha x$ and commute | Take a set $Y$ of the required cardinality, $x$ an element not in $Y$, and $S$ the algebra over some field $K$ with generators $\{x\}\cup Y$ and relations $a^2=x$ and $ab=ba$ for all $a,b\in S$.
Seriously, if you don't want to specify what your question is about, you can only expect trivial answers like this. |
If $A\succeq{B}\succeq0$ and $C\succeq{D}\succeq0$ then $A \circ C\succeq B \circ D \succeq0$ | Just apply Schur product theorem to $A\circ C=(A-B)\circ C+B\circ (C-D)+B\circ D$. |
Does only one angle exists between two vectors? | "Two vectors from a point" defines a plane.
So, $\vec A \cdot \vec B=AB\cos\theta$ applies, where $\theta$ is an angle in that plane.
The dot product of two vectors can also be defined
using
$\vec A \cdot \vec B=A_xB_x +A_yB_y +A_z B_z$.
One can express the vector components in terms of angles with the coordinate axes
$\vec A \cdot \vec B=(\hat x\cdot \vec A)(\hat x\cdot \vec B) +(\hat y\cdot \vec A)(\hat y\cdot \vec B) +(\hat z\cdot \vec A)(\hat z\cdot \vec B).$ |
Coefficients in Faulhabers' sum $\sum n^{2m+1}, \ m=0,1,2...$ | Your Question 1 is answered in OEIS sequence A008957. For example, the coefficients $\;(1,6,1,120,30,1,\dots)\;$ are expressed in terms of A008957 exactly how Knuth meant it:
$$\;(1!\cdot1,\;\;3!\cdot1,1!\cdot1,\;\;5!\cdot1,3!\cdot5,1!\cdot1,\;\;7!\cdot1,5!\cdot14,3!\cdot21,1!\cdot1,\;\;\dots).\;$$
The triangular sequence A008957 has as exponential generating function
$$ x^2\left(\cosh\!\left(\frac{\sinh(y\,x/2)}{x/2}\right)-1\right)= (1x^2)y^2/2! +(1x^2+1x^4)y^4/4! +(1x^2+5x^4+1x^6)y^6/6!+\cdots.$$ |
Examining Convergence Of Improper Integral. | You cannot proceed this way, as you end up with something of the form
$$
\int_0^B dx\, \cos^2 x = u_B - v_B
$$
and $\lim_{B\to\infty} u_B = \infty$. But if $\lim_{B\to\infty} v_B = \infty$ as well, you have an indeterminate form and cannot conclude.
However, note that $\cos^2$ is periodic with period $\pi$, so that for any integer $k$
$$
\int_0^{k\pi} dx\, \cos^2 x = k\int_0^\pi dx\, \cos^2 x = k\cdot \frac{\pi}{2}
$$
and therefore
$$
\int_0^{B} dx\, \cos^2 x = \int_0^{\lfloor B/\pi\rfloor \pi} dx\, \cos^2 x
+ \underbrace{\int_{\lfloor B/\pi\rfloor \pi}^B dx\, \cos^2 x}_{\geq 0} \geq \lfloor B/\pi\rfloor\cdot \frac{\pi}{2} \xrightarrow[B\to\infty]{} \infty
$$ |
How to determine the values of $x$ such that $\frac{1}{x}=[1,1,1,\dots,x]$ with $n$ $1'$s? | Expanding the first few continued fractions:
$$[1,1,x]=1+\frac 1{1+\frac1x} = 1+\frac x{x+1} = \frac{2x+1}{x+1}$$
$$[1,1,1,x]=1+\frac1{[1,1,x]} = 1+\frac{x+1}{2x+1} = \frac{3x+2}{2x+1}$$
$$[1,1,1,1,x] = 1+\frac1{[1,1,1,x]} = 1+\frac{2x+1}{3x+2} = \frac{5x+3}{3x+2}$$
$$[1,1,1,1,1,x] = 1+\frac1{[1,1,1,1,x]} = 1+\frac{3x+2}{5x+3} = \frac{8x+5}{5x+3}$$
By following this pattern (or by induction), one can observe that
$$[\underbrace{1,1,\dots,1}_{n \text{ ones}},x] = \frac {F_{n+1}x+F_n}{F_nx+F_{n-1}}$$
hence we just need to solve
$$\frac1x=\frac {F_{n+1}x+F_n}{F_nx+F_{n-1}} \\ F_n x+F_{n-1}=F_{n+1}x^2+F_nx \\ F_{n+1}x^2=F_{n-1} \\ x = \sqrt{\frac{F_{n-1}}{F_{n+1}}},$$
taking the positive root, since the continued fraction requires $x>0$.
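A quick numerical check of the pattern and the solution (a Python sketch):

```python
from math import isclose, sqrt

def fib(m):
    # F_0 = 0, F_1 = 1, F_2 = 1, ...
    a, b = 0, 1
    for _ in range(m):
        a, b = b, a + b
    return a

def cf(n, x):
    # evaluate [1, 1, ..., 1, x] with n ones, from the right
    for _ in range(n):
        x = 1 + 1 / x
    return x

for n in range(2, 9):
    x = sqrt(fib(n - 1) / fib(n + 1))
    assert isclose(1 / x, cf(n, x))
print("1/x = [1,...,1,x] verified for n = 2..8")
```

(For $n=1$ the formula degenerates to $x=0$; indeed $1/x=[1,x]$ has no positive solution.) |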
Show that there is $\xi$ s.t. $f(\xi)=f\left(\xi+\frac{1}{n}\right)$ | Fix $\delta = 1/n, n \in \mathbb{N}.$ Consider the continuous function $g_n :[0,1-\delta] \to \mathbb{R},\ g_n(x) := f(x+\tfrac{1}{n}) - f(x),$ and suppose that $g_n(x) \neq 0$ on $[0,1-\delta].$ Then by the intermediate value theorem, $g_n$ does not change sign on $[0,1-\delta].$ WLOG, if $g_n(x) > 0,$ then adding the $n$ inequalities,
$$
g_n(1-\tfrac{1}{n}) = f(1) - f(1-\tfrac{1}{n}) > 0,
$$
$$
g_n(1-\tfrac{2}{n}) = f(1-\tfrac{1}{n}) - f(1-\tfrac{2}{n}) > 0,
$$
$$
\cdots
$$
$$g_n(0) = f(\tfrac{1}{n}) - f(0) > 0,
$$
yields $f(1) > f(0),$ which contradicts the hypothesis that $f(1) = f(0).$
Thus there exists a point $x_n$ satisfying $f(x_n + \tfrac{1}{n}) = f(x_n).$
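As a concrete illustration, here is a small bisection sketch with a hypothetical choice of $f$ (any continuous $f$ with $f(0)=f(1)$ works):

```python
import math

n = 3
f = lambda x: math.sin(2 * math.pi * x)   # hypothetical example with f(0) = f(1)
g = lambda x: f(x + 1/n) - f(x)

lo, hi = 0.0, 1 - 1/n
while g(lo) * g(hi) > 0:                  # slide until the bracket shows a sign change
    lo += 0.01
for _ in range(60):                       # plain bisection on g
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if g(lo) * g(mid) <= 0 else (mid, hi)
print(lo, f(lo), f(lo + 1/n))             # f(x_n) and f(x_n + 1/n) agree
```

Here it finds $x_n\approx 7/12$, where $f(x_n)=f(x_n+\tfrac13)=-\tfrac12$. |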
Can you explain this equation to someone who hasn't learned integration? | There's not really an alternative to learning what an integral means here. The left-hand side of the equation is an integral, and so the exercise can't make sense to you until you know integrals.
The best we can do without writing an entire textbook here is:
For the purpose of dimensional analysis it is enough to know that an integral $\int f(x)\,dx$ is a limit of certain sums of terms of the form $f(x_1)\cdot(x_2-x_3)$. So your left-hand side
$$ \int \frac{dx}{\sqrt{2ax-x^2}} $$
has the same dimension as
$$ \frac{x_2-x_3}{\sqrt{2ax_1-x_1^2}} $$
where of course all the $x_i$s have the same dimension as the $x$. This turns out to mean that the value of the integral is dimensionless. |
Trying to find a left adjoint | Let $F(A)=A\times\{0,1\}$, with $p:F(A)\to F(A)$ given by $p((x,t))=(x,0)$ (for any $x\in A$, $t\in\{0,1\}$). If $B$ is a set with an idempotent map $e:B\to B$ then any map $g:A\to B$ gives us $g':F(A)\to B$, defined by $g'(x,1)=g(x)$, $g'(x,0)=e(g(x))$, which makes $F$ left adjoint to $U$. |
Scalar / Dot product | Let $V$ be a vector space over $\Bbb R$.
A bilinear form $f:V\times V\to\Bbb R$ is a scalar product if it satisfies
$f(x,y)=f(y,x)$ for all $x,y\in V$ (symmetric)
$f(x,x)> 0$ for all nonzero $x\in V$ (positive definite) |
Show that $\varphi:\mathbb{R}→Gl_2 (\mathbb{R})$ defined by $\varphi(a)=\begin{pmatrix} 1 & a \\ 0 & 1\end{pmatrix}$ is not an isomorphism | Hint: To show $\varphi$ is not onto, you just have to find a single element of $GL_2(\mathbb{R})$ that does not have the form $\begin{pmatrix}1 & a \\ 0 & 1\end{pmatrix}$ for any $a\in\mathbb{R}$. Can you find an invertible $2\times 2$ matrix that doesn't look like that? |
Controlled natural language for mathematics | I recommend reading "Book of Proof" $2009$ by Richard H. Hammack. Hardcover is only about $25$ USD and it describes the language (mostly natural) to be used in proofs. It shows how to address the reader as though in a conversation: "We can see that...". It also shows some mathematical symbols used in proofs but mostly, it's how to approach a proof by induction or whatever with examples.
I don't know how a proof would fit in a computer, but it appears that Python is the preferred language for doing math stuff. On the other hand, there is a movement to have machines both write and validate proofs. I don't know the language but it is described in "Mathematics without Apologies: Portrait of a Problematic Vocation (Science Essentials)" $2015$ by Michael Harris.
If you find what you are looking for, give us an update. I'm sure that many people would be interested. |
Can we have a closed form or approximation of following combination summation?Or when it is positive or not? | Welcome to MSE!
It is well known that your sum $S(n,t) = (-1)^t \binom{n-1}{t}$. There are a variety of proofs here for instance. Among other things, this means that the sum is nonnegative if and only if $t$ is even.
As for asymptotics, there are lots of known approximations for $\binom{n-1}{t}$ which depend on how quickly $t$ and $n$ are growing. One useful set of bounds is
$$
\frac{n^k}{k^k} \leq \binom{n}{k} \leq \frac{n^k}{k!}
$$
Which one can easily use to approximate $S(n,t) = (-1)^t \binom{n-1}{t}$.
Asymptotically, one can use Stirling's approximation to obtain the sharpest estimate that holds in general, as follows:
First expand the definition of the closed form of $S(n,t)$:
$$S(n,t) = (-1)^t \binom{n-1}{t} = (-1)^t \frac{(n-1)!}{t!(n-1-t)!}$$
Then apply the Stirling approximation:
$$
(-1)^t \frac{(n-1)!}{t!(n-1-t)!} \sim
(-1)^t \frac{\sqrt{2 \pi (n-1)}(n-1)^{n-1}/e^{n-1}}{\left ( \sqrt{2 \pi t} t^t / e^t \right ) \left ( \sqrt{2 \pi (n-1-t)} (n-1-t)^{n-1-t}/e^{n-1-t}\right )}
$$
Now you can cancel and simplify this to
$$
(-1)^t \sqrt{\frac{n-1}{2 \pi t (n-1-t)}} \frac{(n-1)^{n-1}}{t^t (n-1-t)^{n-1-t}}
$$
This expression, while correct, is slightly unwieldy. If you know more information about how $t$ grows (for instance, is $t \approx \sqrt{n}$? $kn$? $n^{0.1}$?) you can simplify the above expression (sometimes dramatically). Google has lots of results in this vein.
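For example, here is a quick numerical comparison of the exact binomial with the simplified expression (a sketch):

```python
from math import comb, sqrt, pi

def stirling_approx(n, t):
    # |S(n,t)| ~ sqrt((n-1)/(2 pi t (n-1-t))) * (n-1)^(n-1) / (t^t (n-1-t)^(n-1-t))
    m = n - 1
    return sqrt(m / (2 * pi * t * (m - t))) * m**m / (t**t * (m - t)**(m - t))

for n, t in [(20, 5), (50, 10), (100, 30)]:
    exact = comb(n - 1, t)
    print(n, t, exact, stirling_approx(n, t) / exact)   # ratio tends to 1
```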
I hope this helps ^_^ |
Q). Show that the four points are angular points of a rectangle$ (0,-1) (4,-3) (8,5) (4,7)$. | The vector from $(0,-1)$ to $(4,-3)$ is $(4,-2)$, as is the vector from $(4,7)$ to $(8,5)$. You can check that the other two sides are both $(4,8)$. This establishes that we have a parallelogram. Then, since $(4,-2)\cdot (4,8)=0$, adjacent sides are perpendicular, so it is a rectangle. |
$|X|=|\mathbb{N}|$ for $X$ finite? | No, it is not true that a bijection exists from finite $X$ to $\mathbb N$. If you tried to construct one, you'd inevitably find that there are still infinitely many elements of $\mathbb N$ left after you mapped all the elements of $X$.
However if $X$ is countable, including finite, there always exists an injection to $\mathbb N$. That is, you can map each element to a different natural number. Or in other words, you can count the elements of $X$ (because counting is nothing else but assigning natural numbers). That's where the term countable comes from. |
What is the probability that I roll a 2 before I roll two odd numbers? | First observe that the numbers $4,6$ do not affect the outcome at all here, so it is equivalent to consider a $4$ sided die with sides $1,2,3,5$. The fact that it does not matter which odd number shows up means that we could consider a $4$ sided die with sides:
$2$, odd, odd, odd.
The probability that I roll $2$ odd numbers before rolling a $2$ is then $$\frac34 \cdot \frac34=\frac9{16}$$
Hence your probability is $1-\dfrac9{16}=\dfrac7{16}$.
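A quick Monte Carlo check (a sketch):

```python
import random

def rolls_2_first():
    odds = 0
    while True:
        r = random.randint(1, 6)
        if r == 2:
            return True          # the 2 came first
        if r % 2 == 1:           # 1, 3, 5 all count the same
            odds += 1
            if odds == 2:
                return False     # two odd numbers came first
        # 4 and 6 are ignored, as noted above

N = 10**6
print(sum(rolls_2_first() for _ in range(N)) / N)
```

The empirical frequency comes out near $7/16 = 0.4375$, as expected. |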
Can only the middle school math knowlegde help to find solutions for $2013 y^2 -xy -4026 x=0$? | Your equation is equivalent to the following
$$x=\frac{2013y^2}{y+4026}$$
so you can restrict yourself to looking for $y$ such that this fraction is an integer (note that, for positive $y$, $x$ is automatically positive). Call $y+4026=t$ (so that $y=t-4026$). Recall that, if $y \ge 1$, then $t \ge 4027$. Then
$$x=\frac{2013(t-4026)^2}{t} = 2013 t - 2^2 \cdot 2013^2+ \frac{2^2 \cdot 2013^3}{t}$$
The RHS is an integer if and only if $t$ divides $2^2 \cdot 2013^3$. Factor that number:
$$2^2 \cdot 2013^3=2^2 \cdot 3^3 \cdot 11^3 \cdot 61^3$$
So, now you have to find all divisors $t$ of this number satisfying $t \ge 4027$ (there are a lot of them). For every such value of $t$, you get $y= t-4026$, and then $x = \frac{2013y^2}{t}$.
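A short brute-force sketch that enumerates these divisors and recovers all positive solutions:

```python
from itertools import product

# divisors of 2^2 * 3^3 * 11^3 * 61^3
divisors = sorted(
    2**a * 3**b * 11**c * 61**d
    for a, b, c, d in product(range(3), range(4), range(4), range(4))
)

solutions = []
for t in divisors:
    if t >= 4027:
        y = t - 4026
        x = 2013 * y * y // t            # exact division, since t | 2^2 * 2013^3
        assert 2013 * y * y - x * y - 4026 * x == 0
        solutions.append((x, y))

print(len(solutions), solutions[:2])
```

Each pair printed satisfies the original equation exactly. |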
How to show the existence of positive root for this equation? | Here is a proof that a root exists: first, note that $f(1)<0$. Then, note that
$$
\lim_{x\to \infty}\frac{x}{(x+k)^{\epsilon}} > 1
$$
So, there exists a $c>0$ with $c>(c+k)^{\epsilon}$. By the intermediate value theorem, $f$ has a root. |
Finding the height given the angle of elevation and depression. | BF is the ground level.
Write EG in terms of the angle $50^\circ$. Find $x$. Then the height will be $x\tan 46^\circ + 10$. |
To express a integrable function as difference. | If such $g_n$, $h_n$ exist, then $f$ is the pointwise limit of the sequence $g_n-h_n$. Therefore, $f$ belongs to Baire class 1. David Mitra gave a link to an example of a Riemann integrable function that is not of Baire class 1.
Therefore, such $g_n$ and $h_n$ cannot be found in general. |
Linear system solutions | Let $A$ be an $m\times n$ matrix.
If $\operatorname{rank}(A)=m$, there always exists a solution.
If $\operatorname{rank}(A)=n$, there may or may not be a solution, but when one exists it is unique.
Consequently, if $\operatorname{rank}(A)=m=n$, there always exists a unique solution. One may rephrase this as saying that $\det(A)\neq 0$.
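A small numpy illustration of the two cases (a sketch):

```python
import numpy as np

# rank(A) = m = 2 (full row rank): a solution exists for every right-hand side
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
x, *_ = np.linalg.lstsq(A, np.array([1.0, 2.0]), rcond=None)
print(np.linalg.matrix_rank(A), np.allclose(A @ x, [1.0, 2.0]))   # 2 True

# rank(B) = n = 2 (full column rank): solutions, when they exist, are unique
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
print(np.linalg.matrix_rank(B))   # 2
```

For the second system, $b=(1,1,2)^T$ has the unique solution $(1,1)^T$, while $b=(1,1,0)^T$ has none. |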
Why are 1 and -1 eigenvalues of this matrix? | Answer by Terry Tao on MO:
First, one should conjugate all matrices by
$$
\begin{pmatrix}
\operatorname{diag}(\omega_1,\dots,\omega_n) & 0 \\ 0 & 1
\end{pmatrix}
$$
as this converts $S(t)$ to a rotation matrix while leaving the reflection $N$ unchanged.
The matrix $P^{-1} \operatorname{diag}(1,\dots,1,-1) P$ has a line as its $-1$ eigenspace and a hyperplane as its $+1$ eigenspace. By a limiting argument we may take $P$ to be in general position, so that the $-1$ eigenspace is not contained in any coordinate hyperplane and the $+1$ eigenspace does not contain any coordinate line. Then after applying a further conjugation by a diagonal matrix in the lower $n \times n$ block, we can arrange for the $-1$ and $+1$ eigenspaces to be orthogonal, without affecting $A(t)$.
After all these conjugations, $S(t)$ is now orthogonal and orientation preserving, while $N$ is orthogonal and orientation reversing. Then it is now clear that the product of any odd number of the $A(t)$ will be orientation-reversing orthogonal matrices. Such matrices have spectrum on the unit circle, symmetric with respect to conjugation, and multiplying to $-1$, hence must have an odd number of eigenvalues at $+1$ and also at $-1$. (If instead one multiplies an even number of $A(t)$ together, one obtains an even number of eigenvalues at $-1$ and at $+1$, but usually one expects to get zero eigenvalues at either.) |
Does a map of spaces inducing an isomorphism on homology induce an isomorphism between the homologies of the loop spaces? | It's not true in general.
Say, take a ring $R$ and consider the map $BGL(R)\to BGL(R)^+$. It always induces an isomorphism on homology, but
$$H_1(\Omega BGL(R))=H_1(GL(R))=0$$
($GL(R)$ has discrete topology) and
$$H_1(\Omega BGL(R)^+)=\pi_1(\Omega BGL(R)^+)=\pi_2(BGL(R)^+)=K_2(R)$$
is often non-trivial. |
By finding solutions as power series in $x$ solve $4xy''+2(1-x)y'-y=0 .$ | Substituting $y=\sum_i b_i x^i$ gives
$$\sum_i\left[\big(4i(i-1)+2i\big)b_i x^{i-1}-\big(2i+1\big)b_i x^i\right]=\sum_i\big(2(i+1)b_{i+1}-b_i\big)(2i+1)x^i=0,$$
so $b_{i+1}=\dfrac{b_i}{2(i+1)}$, hence $b_i=\dfrac{b_0}{2^i\, i!}$ and $y=b_0\,e^{x/2}$.
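A quick sympy confirmation (a sketch):

```python
from sympy import symbols, exp, diff, simplify

x = symbols('x')
y = exp(x / 2)   # the closed form found from the recurrence

print(simplify(4*x*diff(y, x, 2) + 2*(1 - x)*diff(y, x) - y))   # 0
```

So the power-series solutions are exactly the multiples of $e^{x/2}$. |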
3D coordinate Transformation | I think you must have typoed point data, or else didn't realize that you'd also need skewing in your transformation, because there is no way to accomplish this with a translation and rotation.
Such a transformation would preserve distances, but the distances between the first three points are $\{8.6238,0.848528,8.86397\}$ and the distances between the points of the second set are $\{1.68819,0.9,1.74929\}$. These are significantly different and can't be explained by measurement error.
A transformation between these two point sets would have to be composed of more than rigid Euclidean motions: it would need to have some affine transformation that includes some skewing. |
Linear Algebra Dynamical System Help | For a linear dynamical system, the origin could also be a center if the eigenvalues are purely imaginary. For example, the system
\begin{align}
\begin{bmatrix}
x_1^\prime \\x_2^\prime
\end{bmatrix} =
\begin{bmatrix}
0 & 1\\ -1 & 0
\end{bmatrix}
\begin{bmatrix}
x_1 \\x _2
\end{bmatrix}
\end{align}
has eigenvalues $\pm i$. The solution trajectories circle around the origin.
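A small numerical illustration of the center case (a sketch using scipy's matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # eigenvalues +i and -i
x0 = np.array([1.0, 0.5])

# x(t) = expm(A t) x0 stays at a constant distance from the origin
for t in np.linspace(0.0, 2 * np.pi, 5):
    print(round(t, 3), np.linalg.norm(expm(A * t) @ x0))   # constant sqrt(1.25)
```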
When the eigenvalues are complex with positive real components, then the trajectories spiral away from the origin. When the eigenvalues are complex with negative real components, then the trajectories spiral toward the origin.
A matrix that is not diagonalizable can still be a repeller. For example, let
\begin{align}
\begin{bmatrix}
x_1^\prime \\x_2^\prime
\end{bmatrix} =
\begin{bmatrix}
1 & 0\\ 1 & 1
\end{bmatrix}
\begin{bmatrix}
x_1 \\x _2
\end{bmatrix}
\end{align}
There is only one linearly independent eigenvector, $\mathbf{v} = [0, 1]^T$, with eigenvalue $1$. A solution exists for every initial condition: if $\mathbf{x}_0$ is a scalar multiple of $\mathbf{v}$, the trajectory $e^{t}\mathbf{x}_0$ moves straight away from the origin along that eigenline, and for every other $\mathbf{x}_0$ the solution $e^{At}\mathbf{x}_0$ also grows without bound, so the origin is a repeller even though $A$ is not diagonalizable.
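A numerical check that the origin repels in both situations (a sketch):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 0.0], [1.0, 1.0]])

for x0 in (np.array([0.0, 1.0]),      # on the eigenline
           np.array([1.0, 0.0])):     # off the eigenline
    norms = [np.linalg.norm(expm(A * t) @ x0) for t in (0.0, 1.0, 2.0)]
    print(x0, [round(v, 2) for v in norms])   # both grow without bound
```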
Note that for nonlinear dynamical systems, the origin is not in general the location of a source, sink, center, or saddle point. |
Computing directional derivative of $f(x,y,z)$ | $$
\left . f_v \right |_P = \left . \left ( \nabla f\cdot \frac v{|v|} \right )\right |_P = \left . \left ( \frac {\langle y + z, x + z, x + y\rangle \cdot \langle 7, 3, -6\rangle}{\sqrt{49+9+36}} \right ) \right |_P = \left . \left ( \frac {-3x+y+10z}{\sqrt{94}}\right ) \right |_P = \frac 8{\sqrt{94}}
$$
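The same computation in sympy (a sketch; here $f=xy+yz+zx$ is read off from the gradient above, and $P=(2,4,1)$ is a hypothetical point consistent with the stated value):

```python
from sympy import symbols, Matrix

x, y, z = symbols('x y z')
f = x*y + y*z + z*x                  # gradient is <y+z, x+z, x+y>, as above

v = Matrix([7, 3, -6])
grad = Matrix([f.diff(x), f.diff(y), f.diff(z)])
D_v = grad.dot(v) / v.norm()         # (-3x + y + 10z)/sqrt(94)

print(D_v.subs({x: 2, y: 4, z: 1}))  # hypothetical P, gives 8/sqrt(94)
```

Any point on the plane $-3x+y+10z=8$ gives the same value. |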
What is the quickest way to find the characteristic polynomial of this matrix? | Observe that $I, B, B^2, \ldots, B^{n-1}$ are linearly independent (because their nonzero entries lie on entirely different diagonals). Hence no nonzero polynomial of degree at most $n-1$ over $\mathbb R$ can annihilate $B$. In other words, $x^n-1$ is the minimal polynomial. You may continue from here.
Edit. On second thought, perhaps the quickest way to prove the assertion is to calculate $\det(xI-B)$ using Laplace expansion along the first row:
$$
(-1)^{1+1}x\det\pmatrix{x\\ -1&\ddots\\ &\ddots&\ddots\\ &&\ddots&\ddots\\ &&&-1&x}
+(-1)^{1+n}(-1)\det\pmatrix{-1&x\\ &-1&\ddots\\ &&\ddots&\ddots\\ &&&\ddots&x\\ &&&&-1}.
$$
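A quick numpy check (a sketch; here $B$ is taken to be the cyclic shift with $1$'s on the subdiagonal and in the top-right corner, the form suggested by the expansion above):

```python
import numpy as np

n = 6
B = np.zeros((n, n))
B[1:, :-1] = np.eye(n - 1)   # 1's on the subdiagonal
B[0, -1] = 1.0               # 1 in the top-right corner

# characteristic polynomial coefficients, highest degree first
print(np.poly(B).round(8))
```

The output is the coefficient list of $x^n-1$. |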
How to prove 1/(1*2)+1/(2*3) + … + 1/((2n-1)(2n)) = 1/(n+1) + 1/(n+2) + … + 1/(2n) by induction? | Increasing $n$ from $k$ to $k+1$ adds $\frac{1}{(2k+1)(2k+2)}$ to the right-hand side, and the right-hand side is $0$ when $n=0$, as it's a sum of $n$ terms. Therefore the right-hand side equals $\sum_{k=1}^n\frac{1}{(2k-1)\,2k}$, which is exactly the left-hand side. (It also means the original problem has a misprint: the $\frac13$ should be $\frac{1}{12}$.)
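A quick exact check of the identity (a sketch):

```python
from fractions import Fraction

for n in range(1, 10):
    lhs = sum(Fraction(1, (2*k - 1) * 2*k) for k in range(1, n + 1))
    rhs = sum(Fraction(1, k) for k in range(n + 1, 2*n + 1))
    assert lhs == rhs
print("identity verified for n = 1..9")
```

Both sides agree exactly for each $n$. |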
Closed expressions for harmonic-like multiple sums | 1.
$$\begin{eqnarray*}w(n,1)&=&\sum_{a_1,\ldots,a_n\geq 1}\int_{0}^{+\infty}\prod_{k=1}^{n}\frac{e^{-a_kx}}{a_k}\,dx\\&=&\int_{0}^{+\infty}\left[-\log(1-e^{-x})\right]^n\,dx\\&=&\int_{0}^{1}\frac{\left[-\log(1-u)\right]^n}{u}\,du\\&=&\int_{0}^{1}\frac{[-\log v]^n}{1-v}\,dv\\&=&\sum_{m\geq 0}\int_{0}^{1}\left[-\log v\right]^n v^m\,dv\\&=&\sum_{m\geq 0}\frac{n!}{(m+1)^{n+1}}=\color{red}{n!\cdot\zeta(n+1).}\end{eqnarray*}$$
2.1. By classical Euler sums,
$$\begin{eqnarray*} w(2,2,1)&=& \int_{0}^{1}\sum_{i,j\geq 1}\frac{z^{i-1}}{i^2}\cdot\frac{z^{j-1}}{j^2}\cdot z\,dz\\&=&\int_{0}^{1}\frac{\text{Li}_2(z)^2}{z}\,dz\\&=&2\int_{0}^{1}\log(z)\log(1-z)\,\text{Li}_2(z)\frac{dz}{z}\\&=&2\sum_{n\geq 1}\frac{H_n-n\zeta(2)+n H_n^{(2)}}{n^4}\\&=&\color{red}{2\,\zeta(2)\,\zeta(3)-3\,\zeta(5)}.\end{eqnarray*}$$
2.2. In a similar fashion,
$$\begin{eqnarray*} w(2,2,2)&=& \int_{0}^{1}\sum_{i,j\geq 1}\frac{z^{i-1}}{i^2}\cdot\frac{z^{j-1}}{j^2}\cdot (-z\log z)\,dz\\&=&\int_{0}^{1}\frac{-\log(z)\,\text{Li}_2(z)^2}{z}\,dz\\&=&-\int_{0}^{1}\log(z)^2\log(1-z)\,\text{Li}_2(z)\frac{dz}{z}\\&=&\sum_{n\geq 1}\frac{2nH_n-2n^2(\zeta(2)-H_n^{(2)})-2n^3(\zeta(3)-H_n^{(3)})}{n^6}\\&=&\color{red}{\frac{\pi^6}{2835}}.\end{eqnarray*}$$
An alternative way:
$$\begin{eqnarray*} \sum_{m,n\geq 1}\frac{1}{m^2 n^2(m+n)^2}&=&\sum_{s\geq 2}\frac{1}{s^2}\sum_{k=1}^{s-1}\frac{1}{k^2(s-k)^2}\\&=&\sum_{s\geq 2}\frac{1}{s^4}\sum_{k=1}^{s-1}\left(\frac{1}{k}+\frac{1}{s-k}\right)^2\\&=&2\sum_{s\geq 1}\frac{H_{s-1}^{(2)}}{s^4}+4\sum_{s\geq 1}\frac{H_{s-1}}{s^5}\\&=&-6\,\zeta(6)+2\sum_{s\geq 1}\frac{H_s^{(2)}}{s^4}+4\sum_{s\geq 1}\frac{H_s}{s^5}\end{eqnarray*}$$
where $\sum_{s\geq 1}\frac{H_s^{(2)}}{s^4}=\zeta(3)^2-\frac{1}{3}\zeta(6)$ by Flajolet and Salvy's $(b)$ and
$$ 2\sum_{s\geq 1}\frac{H_s}{s^5} = 7\zeta(6)-2\zeta(2)\zeta(4)-\zeta(3)^2$$
follows from Euler's theorem, leading to
$$ \sum_{m,n\geq 1}\frac{1}{m^2 n^2(m+n)^2} =\frac{22}{3}\zeta(6)-4\zeta(2)\zeta(4).$$
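These closed forms are easy to cross-check numerically with mpmath (a sketch; nsum handles the double sum directly):

```python
from mpmath import mp, nsum, zeta, pi, inf

mp.dps = 15
S = nsum(lambda m, n: 1 / (m**2 * n**2 * (m + n)**2), [1, inf], [1, inf])
print(S)
print(22 * zeta(6) / 3 - 4 * zeta(2) * zeta(4))   # same value
print(pi**6 / 2835)                               # = w(2,2,2), also the same
```

All three printed values agree. |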
Can any proof by contrapositive be rephrased into a proof by contradiction? | Yes it is valid... it doesn't really matter if it's something else 'in disguise', just that it is correct. And deriving $\lnot P$ from $P\land\lnot Q$ certainly leads to a contradiction, which implies $\lnot (P\land \lnot Q)$ is true, which implies that $P\to Q$ is true. One thing to note (that I think you have noticed based on your second question) is that you have an additional assumption of $P$ open and available for use when you derive $\lnot P,$ unlike in the case of just deriving $\lnot Q\to \lnot P$ by assuming $\lnot Q$ and deriving $\lnot P.$
So the second question amounts to whether it is admissible to assume $P$ in a proof of $\lnot Q\to \lnot P.$ It is, and the easiest way to see this is reasoning semantically and using the completeness/soundness theorem. If $P\vdash \lnot Q\to \lnot P$ then every interpretation in which $P$ is true has $Q$ true, which means precisely the same thing as $\vdash P\to Q,$ so we have $\vdash \lnot Q\to \lnot P.$
As a result, proof by contrapositive is essentially a special case of what you're calling proof by contradiction, where the contradiction takes the special form $\lnot P$ contradicting with $P.$
I think the reason you might have seen people discourage framing contrapositive proofs as proofs by contradiction is style. Unless the outstanding assumption of $P$ is used (unnecessarily), the part of the proof outside the inner proof of $\lnot Q\to \lnot P$ is just boilerplate that can be omitted. It is also more informative to call it a proof by contrapositive, since that is a special case of contradiction.
(As Derek indicates in the comments, proof by contrapositive is not intuitionistically valid, so it doesn't have anything to do with this.) |
is it possible to choose points on the graph of $y = x^2$ to form vertices of an equilateral triangle? | Summary
Are there other points which can form an equilateral triangle?
The answer is yes. In fact, there are infinitely many of them.
The centroids of the triangles lie on another parabola $y = 9x^2 + 2$.
The minimal area is $3\sqrt{3}$, achieved by the equilateral triangle with vertices $(0,0)$ and $(\pm\sqrt{3},3)$.
Identify the euclidean plane $\mathbb{R}^2$ with complex plane $\mathbb{C}$.
Let $z = x + iy$ and $\bar{z} = x - iy$ be its complex conjugate, the equation of the parabola becomes
$$y = x^2 \iff \frac{z - \bar{z}}{2i} = \left(\frac{z+\bar{z}}{2}\right)^2 \iff
(z + \bar{z})^2 + 2i(z-\bar{z}) = 0$$
Let $\omega = e^{2\pi i/3}$. Given any equilateral triangle $T$, we can always find two complex numbers $\rho = u+iv$ and $a$ such that the vertices of $T$ are $\rho + a\omega^k$ for $k = 0, \pm 1$.
If they all lie on the parabola above, then for $k = 0, \pm 1$, we have
$$
\begin{align}
&\; (\rho + \bar{\rho} + a\omega^k + \bar{a}\omega^{-k})^2
+ 2i(\rho - \bar{\rho} + a\omega^k - \bar{a}\omega^{-k})\\
= &\; (\rho+\bar{\rho})^2
+ 2(\rho+\bar{\rho})(a\omega^k + \bar{a}\omega^{-k})
+ (a^2\omega^{-k} + 2|a|^2 + \bar{a}^2\omega^k)
+ 2i(\rho - \bar{\rho} + a\omega^k - \bar{a}\omega^{-k})\\
= &\; 0
\end{align}
$$
Comparing coefficients of different powers of $\omega$, we get
$$
\begin{cases}
(\rho + \bar{\rho})^2 + 2i(\rho - \bar{\rho}) + 2|a|^2 &= 0\\
2(\rho + \bar{\rho} + i)a + \bar{a}^2 &= 0\\
2(\rho + \bar{\rho} - i)\bar{a} + a^2 &= 0
\end{cases}
$$
Multiply the $2^{nd}$ equation by $4a^2$ and simplify it using the $1^{st}$ equation,
we get
$$8(\rho + \bar{\rho} +i )a^3 + \left[(\rho + \bar{\rho})^2 + 2i(\rho - \bar{\rho})\right]^2 = 0\tag{*1}$$
Since $\rho + \bar{\rho} + i = 2u + i\ne 0$, we can use this to determine $a$ up to a factor $\omega^k$.
For this $a$ to be compatible with the $1^{st}$ equation, the condition is
$$\begin{align}
& (\rho + \bar{\rho})^2 + 2i(\rho - \bar{\rho}) = -2|a|^2 = -8|\rho + \bar{\rho} + i|^2
= -8 ((\rho+\bar{\rho})^2 + 1)\\
\iff & 9(\rho + \bar{\rho})^2 + 2i(\rho - \bar{\rho}) + 8 = 36u^2 - 4v + 8 = 0\\
\iff & v = 9u^2 + 2
\end{align}
$$
Based on this, we see that starting from any point $(u,v)$ on the parabola $y = 9x^2 + 2$, if one defines $\rho = u + iv$ and uses $(*1)$ to compute $a$, the 3 points $\rho + a\omega^k$ will lie on the original parabola $y = x^2$ and form an equilateral triangle.
Back to the question of minimizing the area. Clearly this is equivalent to minimizing $|a|$. Since
$$|a|^2 = -\frac12 \left[ (\rho + \bar{\rho})^2 + 2i(\rho - \bar{\rho}) \right]
= 2(v-u^2) = 2(2+8u^2)$$
the minimal value $|a|$ is achieved at $(u,v) = (0,2)$ with value $2$.
The corresponding triangle has vertices at $(0,0)$, $(\pm\sqrt{3},3)$
with side length $2\sqrt{3}$ and area $3\sqrt{3}$.
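A numerical sketch of this recipe: pick $u$, set $v=9u^2+2$, solve $(*1)$ for $a$, and check that all three vertices land on $y=x^2$:

```python
import numpy as np

u = 0.7
rho = complex(u, 9 * u**2 + 2)

s = rho + rho.conjugate()                    # rho + conj(rho) = 2u
q = s**2 + 2j * (rho - rho.conjugate())      # (rho+conj)^2 + 2i(rho-conj)

# (*1):  8(s + i) a^3 + q^2 = 0   =>   a^3 = -q^2 / (8(s + i))
a = (-q**2 / (8 * (s + 1j))) ** (1 / 3)      # any cube root works: they differ by omega^k

omega = np.exp(2j * np.pi / 3)
for k in (-1, 0, 1):
    zk = rho + a * omega**k
    print(zk.real, zk.imag, abs(zk.imag - zk.real**2))   # last column ~ 0
```

The three printed points lie on $y=x^2$ (up to rounding) and form an equilateral triangle. |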