title | upvoted_answer
---|---
If $f:\mathbb R^n\to \mathbb R$ has a local minimum at $a$, then the Hessian is positive semi-definite at $a$. | You can conclude that $x^{T}H(f)(a)x \ge 0$ for small $x$, which gives the result for all $x \in \mathbb{R}^{n}$ by homogeneity (see EDIT3), and that is exactly the definition of positive semidefinite.
EDIT : Let $\bar{x}$ be the local minimum. Since $\nabla f (\bar{x}) = 0$, the Taylor expansion reads
$$ f(\bar{x} + h) = f(\bar{x}) + \frac{1}{2} h^{T} H_{f} (\bar{x})h + o(\|h\|^{2}). $$
And finally we have
$$
0 \le f(\bar{x} + h) - f(\bar{x}) \approx_{\| h \| \rightarrow 0} \frac{1}{2} h^{T} H_{f} (\bar{x})h
$$
EDIT3 : If $h^{T} H_{f} (\bar{x})h \ge 0$ for $\| h \| \le \delta$ then $h^{T} H_{f} (\bar{x})h \ge 0$ for all $h \in \mathbb{R}^{n}$: replace $h$ by $\lambda h$ and note that the quadratic form only scales by $\lambda^{2} > 0$.
EDIT2 : As I said, it only proves semidefinite, because definite is false in general. For example $f(x,y)= x^2$ has a local minimum at the origin, yet its Hessian $\begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix}$ is only semidefinite. |
Show that the linear transformation is well defined | What you should have proved was that if $p(x)\in\mathcal P_2(\mathbf R)$, then $T\bigl(p(x)\bigr)\in\mathcal P_2(\mathbf R)$ too. For instance, this would not be true if the factor $3x-1$ was replaced by $3x^2-1$. |
Express the following complex quantity in the form $a+bi$ | You have $z=a+bi$ and want to compute
$$\frac{z}{\bar z}-\frac{\bar z } {z}$$
where $\bar z= a-bi$. Combining over the common denominator $z\bar z$ gives
$$\frac{z^2-\bar z^2}{z\bar z}$$
From which you get
$$\frac{4iab}{a^2+b^2}$$
So if you wanted to put this in the form of a complex number $A+Bi$, you get that $A=0$ and $B=\frac{4ab}{a^2+b^2}$. |
Expectation of pricing algorithm | Suppose $v_i\in N(\mu,\sigma^2)$. Let $z_i=(v_i-\mu)/\sigma$, so the $z_i$ are standard normal, and let $q=(p-\mu)/\sigma$ be the standardized price. Let $\Phi(x)$ be the standard normal cdf, with pdf $\phi(x)$.
Conditioned on the existence of at least one customer whose value is above the price, the first customer whose value is larger than $p$ will have a value which is normally distributed, conditional on being greater than $p$. The conditional expected value is
$$
E[v\mid v>p]=E[\sigma z+\mu|z>q]=\mu+\sigma E[z|z>q] =\mu+\sigma\cdot\frac{\int_q^\infty x\,\phi(x)\,dx}{P(z>q)}
$$
The normal pdf satisfies $x\phi(x)=-\phi'(x)$, so using the fundamental theorem of calculus this is
$$
\mu+\sigma\frac{\phi(q)}{1-\Phi(q)}
$$
To make this unconditional, we have to multiply by the probability that at least one customer's value is above the price, resulting in
$$
\boxed{\Big(1-\Phi(q)^n\Big)\left(\mu+\sigma\frac{\phi(q)}{1-\Phi(q)}\right)}
$$
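A Monte Carlo check of the boxed formula (a minimal sketch; the parameter values $\mu,\sigma,p,n$ below are made up for illustration):

import numpy as np
from scipy.stats import norm

mu, sigma, p, n = 10.0, 2.0, 11.0, 5
q = (p - mu) / sigma
closed = (1 - norm.cdf(q)**n) * (mu + sigma*norm.pdf(q)/(1 - norm.cdf(q)))

rng = np.random.default_rng(0)
vals = rng.normal(mu, sigma, size=(200_000, n))
hit = vals > p
first = hit.argmax(axis=1)           # index of the first customer above p
sale = np.where(hit.any(axis=1), vals[np.arange(len(vals)), first], 0.0)
print(closed, sale.mean())

The two printed numbers should agree to Monte Carlo accuracy. |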
Proving $a^p \equiv a \pmod{p}$ for $a\in\mathbb{Z}$, $p\in\mathbb{P}$ | My solution:
Suppose that $a$ and $p$ are positive integers, and also that $a^p \not\equiv a\pmod{p}$. We may then conclude that $\gcd(a^p - a, p) = 1$. By Bezout's Lemma, there exist integers $x$ and $y$ such that $1 = (a^{p}-a)x + (p)y$. Re-arranging this expression, we obtain...
$$p = \frac{1 - (a^{p}-a)x}{y}$$
I claim that $y$ divides $1-(a^{p}-a)x$. There exists a $j$ such that $yj = 1-(a^{p}-a)x$, namely, $p$. We have that $y$ and $1-(a^{p}-a)x$ are integers, and $y$ divides $1-(a^{p}-a)x$, hence we have that $p$ is not prime. |
Find closed formula for the recurrence $a_{n}=na_{n-1}+n(n-1)a_{n-2}$ | Hint: Let $a_n=n!b_n$. The $b_n$ satisfy a familiar recurrence. |
How to find a real orthogonal matrix of determinant $1$? | For any $\theta$,
$$
\begin{pmatrix}
\cos\theta & \sin\theta\\
-\sin\theta & \cos\theta
\end{pmatrix}
$$
is such a matrix. Actually, they all have this form. |
Find all solutions for a complex equation: $(1+i)z^2 - (6+i)z + 9+7i=0$ | $$(1+i)z^2-(6+i)z+9+7i=0\Longleftrightarrow$$
Use the abc-formula:
$$z=\frac{-\left(-(6+i)\right)\pm\sqrt{\left(-(6+i)\right)^2-4\cdot\left(1+i\right)\cdot\left(9+7i\right)}}{2\cdot\left(1+i\right)}\Longleftrightarrow$$
$$z=\frac{-\left(-6-i\right)\pm\sqrt{\left(-6-i\right)^2-4\cdot\left(1+i\right)\cdot\left(9+7i\right)}}{2+2i}\Longleftrightarrow$$
$$z=\frac{\left(6+i\right)\pm\sqrt{\left(35+12i\right)-4\cdot\left(1+i\right)\cdot\left(9+7i\right)}}{2+2i}\Longleftrightarrow$$
$$z=\frac{\left(6+i\right)\pm\sqrt{\left(35+12i\right)-4\cdot\left(2+16i\right)}}{2+2i}\Longleftrightarrow$$
$$z=\frac{\left(6+i\right)\pm\sqrt{\left(35+12i\right)-\left(8+64i\right)}}{2+2i}\Longleftrightarrow$$
$$z=\frac{\left(6+i\right)\pm\sqrt{27-52i}}{2+2i}\Longleftrightarrow$$
$$z=\frac{6+i}{2+2i}\pm\frac{1}{2+2i}\sqrt{27-52i}\Longleftrightarrow$$
$$z=\left(\frac{7}{4}-\frac{5}{4}i\right)\pm\left(\frac{1}{4}-\frac{1}{4}i\right)\sqrt{27-52i}$$ |
Symmetric Group Acting on Direct Product | There is the orbit $O_1$ which is the diagonal $\{ (n,n) \in J \times J\}$, and the rest $O_2 = J\times J \setminus O_1$. It is clear that $O_1$ forms an orbit. If you have $(m,n) \in O_2$, you may send it to any $(m',n) \in O_2$ through the transposition which swaps $m$, $m'$. Similarly for the second variable.
I'll let you figure out the stabilizers from there. As you have noted, the first stabilizer must have order $n!/n = (n-1)!$, and the second must have order $n!/(n(n-1)) = (n-2)!$. Hint: this suggests that they are themselves smaller symmetric groups embedded in $S_n$. |
Demonstrate that the centering matrix is idempotent | In the third line you have $HI - H \frac{1}{n} 11^T$. So the assertion follows if $H \frac{1}{n} 11^T = 0$. Now substitute again for $H$, you'll see that the assertion follows if $11^T = \frac{1}{n} 11^T 11^T$. In $\mathbb{R}^n$ we have $11^T 11^T = n 11^T$ so the assertion follows.
To see why $11^T 11^T = n\, 11^T$ note that $(11^T 11^T)_{i,j} = \sum_{k=1}^n 1 \cdot 1 = n$.
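A two-line numerical check (a sketch; $n=5$ is arbitrary):

import numpy as np

n = 5
H = np.eye(n) - np.ones((n, n)) / n   # H = I - (1/n) 1 1^T
print(np.allclose(H @ H, H))          # True

This confirms the idempotence numerically. |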
A group theory problem | Hint:
I think this approach would work: try to prove that
the lattice of subnormal subgroups is modular:
$$ U\subset W \implies (U,V\cap W) = (U,V)\cap W $$
the desired property holds for modular lattices
Update: ..hmmm.. I might have misunderstood the question. So, we don't know yet that $\langle A',B'\rangle$ belongs to this lattice... |
Show that any continuous map $f: X \rightarrow Y$ induces a map of semisimplicial sets $Sing(X). \rightarrow Sing(Y).$ | Assuming by semisimplicial set you mean a simplicial set without degeneracies (though, in truth, this works for simplicial sets as well, and other things):
$f$ induces a map $\mathrm{Sing}(X) \to \mathrm{Sing}(Y)$ by composition, just as you stated. You'd need to check that these diagrams commute for face maps $\delta_i : \Delta^n \to \Delta^{n+1}$:
$$
\begin{array}{ccc}
\mathrm{Map}(\Delta^{n+1},X)
& \xrightarrow{\delta_i^*}
& \mathrm{Map}(\Delta^n, X) \\
f_* \downarrow
&
& f_* \downarrow \\
\mathrm{Map}(\Delta^{n+1},Y)
& \xrightarrow{\delta_i^*}
& \mathrm{Map}(\Delta^n, Y)
\end{array}
$$
But that's immediate; $\delta_i^*$ is given by precomposition, and $f_*$ by postcomposition. More specifically, if $\varphi \in \mathrm{Map}(\Delta^{n+1},X)$, then $f_*(\delta_i^* \varphi) = f \circ \varphi \circ \delta_i = \delta_i^*(f_* \varphi)$. |
The non-vanishing 1-form on $\mathbb R^2$ | Consider the equation $d(h\omega) = 0$. It means that $dh \wedge \omega =0$. If we write $\omega = adx + bdy$, this means that $b\partial h/\partial x - a\partial h /\partial y = 0$. Can you find a nonvanishing function $h$ satisfying this equation?
Provided you can, this means that the form $h\omega$ is closed, so by the Poincaré lemma it is exact, i.e. equal to $dg$ for some $g$. Then you'll have shown that $\omega = h^{-1} dg$. |
Find the smallest $n$ such that $\frac{a^n}{n!} < \epsilon $ | From a purely algebraic point of view, consider the equation $$a^n=\epsilon \times n!$$ Take the logarithms of both sides and the equation becomes $$n\log(a)=\log(\epsilon)+\log(n!)$$ Now use Stirling approximation $$\log(n!)\approx \frac{1}{2}\log(2\pi)+(n+\frac{1}{2})\log(n)-n$$ so you have the equation to solve. Use any graphing tool and you are done.
For example, using $a=\frac{3}{4}$, $\epsilon=10^{-20}$ we find $n=19.3826$ while the exact solution is $n=19.3812$.
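If you prefer a root finder to a graphing tool, the same equation can be solved with the exact $\log(n!)$ via the log-gamma function (a minimal sketch):

import math
from scipy.optimize import brentq

a, eps = 0.75, 1e-20
f = lambda n: n*math.log(a) - math.log(eps) - math.lgamma(n + 1)
print(brentq(f, 1, 200))   # about 19.38

Rounding up gives the smallest admissible integer $n=20$. |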
Is there a simple equation describing all potential ordinary differential equations of some particular order? | If I wanted to write the most general possible form for an ordinary differential equation of some integer order n, what would that look like?
$
\def\rr{\mathbb{R}}
$
An ordinary differential equation of order $n$ is defined as an equation expressible in the form "$f(x,y,{D_x}(y),{D_x}^2(y),\cdots,{D_x}^n(y)) = 0$" where $x,y$ are real variables (with $y$ varying with respect to $x$) and $f$ is a function from $\rr^{n+2}$ to $\rr$. It includes equations like "$x+y y' = y''$", but clearly does not include things like "$g'(x^2) = g(x)$" where $g$ is a function from $\rr$ to $\rr$.
With that in mind, something like $f(x,y,y',y'',y''') = 0$ (where f is any function) would not be a valid form for all 3rd order ordinary differential equations.
As stated above, it is valid (for ODEs) when interpreting your $y',y'',...$ as $D_x(y),{D_x}^2(y),...$. But if you want to define order for non-ODEs, you're going to have to do it yourself because it seems no one has bothered about them (no known practical use).
Order is still well-defined. I don't see how allowing y'''(y(x)) = 0 as a differential equation makes it any harder to tell that both it and y(y'''(x)) = 0 are third order differential equations. Order simply indicates the highest order derivative present in the equation.
That only works if you define what you mean by "equation", which is not as trivial as you may think. You could define it exactly like in defining syntax of higher-order logic:
"$x$" is an object-term.
"$y$", "$y'$", "$y''$", ... are $1$-input function-terms.
There is an object-term for each real number.
There is a $k$-input function-term for each function from $\rr^k$ to $\rr$.
"$f(t)$" is an object-term for every function-term $f$ and object-term $t$.
"$t = 0$" is a differential equation for every object-term $t$.
And of course, there are no other object-terms or function-terms other than those generated by the above rules. Now it should be clear how you can define order to express what you want; define the order of a differential equation to be the highest order of derivative of $y$ that is used (symbolically) as a function-term in it.
But I doubt the usefulness of such a definition. "$y(y'(y(x^2))) = \sin(x)$" would be a first-order DE under such a definition, but order seems to have no useful consequence, unlike in the case of ODEs. In the end, definitions are only useful if you can prove useful theorems about the defined objects. |
Cholesky decomposition and variance | Since $X$ is a vector
$$\mathbb{E}X=\mathbb{E}[A+BZ]=A$$
and
$$Var(X)=\mathbb{E}[(X-\mathbb{E}X)(X-\mathbb{E}X)']=\mathbb{E}[BZ(BZ)']$$
$$=\mathbb{E}[BZZ'B']=B\mathbb{E}[ZZ']B'=BIB'=BB'$$
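This identity is exactly why one samples from $N(A,\Sigma)$ by taking $B$ to be a Cholesky factor of $\Sigma$. A numpy sketch (the particular $A$ and $\Sigma$ are made up):

import numpy as np

rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.6], [0.6, 1.0]])
A = np.array([1.0, -2.0])
B = np.linalg.cholesky(Sigma)            # Sigma = B B'
Z = rng.standard_normal((2, 100_000))    # iid standard normals
X = A[:, None] + B @ Z
print(np.cov(X))

The sample covariance comes out close to $\Sigma$, as the derivation predicts. |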
Fully-ordered set and transfinite mathematical induction | What you have lost when you give up the well order is that $A$ has a least element. When there is one, your inductive statement says that $P$ is true of that element as there are no predecessors. Without that, you could have $P$ false of all the elements. The inductive statement is still true. One example would be to take $A$ as the rationals and $P(x)$ to say $x$ is irrational. |
Calculating matrix exponential for $n \times n$ Jordan block | Actually this matrix only has a one dimensional eigenspace, spanned by the first unit vector. This is an example of a 'Jordan'-block, you will find lots of theory if you google it.
Maybe a simpler way to calculate the matrix exponential is to write
\begin{align}
A=a\mathbb{I}+B
\end{align}
where $\mathbb{I}$ is the unit matrix and $B$ has only ones above the diagonal. Since $\mathbb{I}$ and $B$ commute, you get
\begin{align}
\exp(A)=\exp(a\mathbb{I})\cdot \exp(B)
\end{align}
Calculating the exponential of $a\mathbb{I}$ is straightforward and calculating the exponential of $B$ is also not too difficult, since $B$ is nilpotent, i.e. $B^k=0$ for some appropriate $k$.
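A short numerical illustration of this splitting (a sketch; the size $n=4$ and value $a=2$ are arbitrary):

import numpy as np
from math import factorial
from scipy.linalg import expm

n, a = 4, 2.0
B = np.diag(np.ones(n - 1), 1)     # nilpotent part: B**n = 0
A = a*np.eye(n) + B                # the Jordan block
expB = sum(np.linalg.matrix_power(B, k)/factorial(k) for k in range(n))
print(np.allclose(np.exp(a)*expB, expm(A)))   # True

The truncated series for $\exp(B)$ matches scipy's general-purpose expm. |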
Calculate the number of digits in a product of large numbers | Take $\left[\log_{10}(\cdot)\right]+1$ of it: a positive integer $N$ has $\left[\log_{10} N\right]+1$ digits, and the $\log_{10}$ of a product is the sum of the $\log_{10}$'s of its factors.
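In code (a minimal sketch; the factors are made up):

import math

nums = [123456789, 987654321, 555555555]
digits = math.floor(sum(math.log10(x) for x in nums)) + 1
print(digits, len(str(math.prod(nums))))   # both print 26

Both computations give the same digit count. |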
Does compact Hausdorff imply Polish? | No: $\omega_1+1$ with the order topology is compact and Hausdorff but not even first countable, let alone metrizable.
Added: $\beta\Bbb N$, the Čech-Stone compactification of the natural numbers with the discrete topology, is another example, and it’s even separable. |
Edges which, when removed from a graph, increase the number of connected components | They're called bridges. According to Wikipedia,
A linear time algorithm for finding the bridges in a graph was described by Robert Tarjan in 1974.
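A standard DFS low-link implementation of this idea, sketched in Python (for simple undirected graphs; parallel edges would need extra bookkeeping):

def bridges(adj):
    # adj: dict mapping each node to a list of neighbours
    disc, low, res, timer = {}, {}, [], [0]
    def dfs(u, parent):
        disc[u] = low[u] = timer[0]; timer[0] += 1
        for v in adj[u]:
            if v not in disc:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:
                    res.append((u, v))   # no back edge past u: (u, v) is a bridge
            elif v != parent:
                low[u] = min(low[u], disc[v])
    for u in adj:
        if u not in disc:
            dfs(u, None)
    return res

print(bridges({0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}))   # [(2, 3)]

For the small example graph, the only bridge found is $(2,3)$. |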
Inverse image of ideal under a homomorphism of rings is generated by inverse images of generators | There are two problems essentially boiling down to $\phi$ being possibly not invertible:
$\phi^{-1}(g)$ does not make sense if $g$ has more than one preimage.
Since $\phi$ is not given to be surjective, for $g$ not in the image of $\phi$, $\phi^{-1}(g)$ does not make sense.
Therefore, you need to change your hypotheses. Making $\phi$ surjective, and letting $\phi^{-1}(a)$ denote the set of all preimages of $a$ rather than a single one, should make your proof work. |
Relationship between borders of closure | What if $A$ is the set of rational numbers in $\mathbb{R}$?
Then the closure of $A$ is the entire line, which has empty boundary. However, the boundary of $A$ itself contains every point on the line. |
holomorphic on right half plane | Let $\phi(z) = \frac{1-z}{1+z}$. $\phi$ is a Möbius transformation that maps $U$, the open unit disk, into the right half plane. Let $\tilde{f} = f \circ \phi$. Then $\tilde{f}$ maps $U$ into itself, and $\tilde{f}(0) = f(1) = 0$. Hence the Schwarz lemma tells us that $|\tilde{f}(z)| \le |z|$ for $z \in U$. We have $\phi(-\frac{1}{3}) = 2$, hence this tells us that $|f(2)| = |\tilde{f}(-\frac{1}{3})| \le \frac{1}{3}$.
If we let $f(z) = \phi(z)$, we get $f(2) = -\frac{1}{3}$, hence the maximum is attained. |
Different definitions of $e^{t\Delta}$ | The three definitions are equivalent. (2) is a special case of (3), since unitary semigroups with self-adjoint generators are usually defined by functional calculus. The definition (1) is just passing from $\Delta$ to the unitary equivalent operator $\mathcal F^{-1}\circ\Delta\circ\mathcal F,$ where $\mathcal F: L^2\to L^2$ is the Fourier transform. Properties of the Fourier transform imply that $\mathcal F^{-1}\circ\Delta\circ\mathcal F$ acts as the multiplication operator
$$f(\xi)\mapsto-4\pi^2|\xi|^2 f (\xi).$$
Exponential of this multiplication operator is just
$$f(\xi)\mapsto e^{-4\pi^2t|\xi|^2}f(\xi).$$
Since unitary equivalence commutes with functional calculus, that is $$\varphi(\mathcal F^{-1}\circ\Delta\circ\mathcal F)=\mathcal F^{-1}\circ\varphi(\Delta)\circ\mathcal F,\ \varphi\in C(\mathbb R),$$ (1) and (3) are equivalent.
An advantage of (1) is that you easily derive the properties of $\Delta$ and $e^{t\Delta}$ from the properties of the multiplication operator, e.g. spectrum $\sigma(\Delta)=(-\infty,0]$ etc.
An advantage of (3) is that it is easily seen to be well-defined.
Preferring (1) or (3) also depends on how you define $\Delta.$ It is not trivial if you care about self-adjointness of the symmetric operator $\Delta.$ |
Eigenvalues of A(adj A). Are they necessarily real? | I think you are confusing the adjugate with the adjoint of a matrix. In this context, the adjoint refers to the conjugate transpose.
Guide:
A possible way to solve the problem is to consider the SVD of $A$, compute $A^*$ and then compute $AA^*$ and $A^*A$. |
The Existence of $A^{-1}$ From the Characteristic Polynomial of A | I don't know about interpreting the actual value of coefficients of the characteristic polynomial geometrically, (the characteristic polynomial has a very algebraic flavour), but you can definitely see why the constant term affects invertibility.
In your example, if the characteristic polynomial of $A$ is $p(x) = x^3 + x - 1$, then by Cayley-Hamilton we have $A^3 + A - I = 0$, which we can rearrange into $A(A^2 + I) = I$, and so we find that $A^{-1} = A^2 + I$. It should be clear we can do this whenever the constant term is nonzero. |
Eigenvalues of Kronecker product of non square matrices | Presumably $m\ne n$. If $m<n$, $Ax=0$ has a nontrivial solution $x\in\mathbb R^n$. Therefore, for every nonzero vector $y\in\mathbb R^m$, we have $(A\otimes B)(x\otimes y)=(Ax)\otimes(By)=0$ where $x\otimes y\ne0$. Hence $A\otimes B$ is singular. The case for $m>n$ is similar. |
Why is the decimal abacus more popular than the binary one? | Good. We are talking about the base and the length of a system of expression. The base is the number of distinct symbols available in one place, and the length is the number of places. When it comes to numbers, we call it an $n$-based system, where $n$ is the size of the base. Generally, if the base is large, the length will be small. But a larger base needs more complex operations, while a small base needs a long length, meaning more digits per number, and more digits need more time or more loops to finish a job.
There must be an optimal base size for a given problem. Then we need to understand how the human brain works, which is not a practical task, though. |
Continuous differentiable spline or function resembling floor | You can build the function you need from an alternating sequence of linear pieces and polynomial pieces.
Over the interval $(j-1, j - \epsilon]$, you just use a straight line that increases from $j-1$ to $j -1+\alpha$. Call its slope $m$, which we can calculate from $\epsilon$ and $\alpha$. In fact, $m = \alpha/(1-\epsilon)$.
Similarly, over the interval $(j, j +1 - \epsilon]$, you just use a straight line that increases from $j$ to $j +\alpha$.
Now we need to fit in a polynomial piece $p$ that provides a steep upwards ramp over the interval $(j - \epsilon, j]$. The requirements are:
$$p(j - \epsilon) = j-1+\alpha \quad ; \quad p(j) = j$$
$$p'(j - \epsilon) = m \quad ; \quad p'(j) = m$$
A cubic polynomial has four coefficients, so you will be able to satisfy these four conditions. You could just assume $p(x) = ax^3 + bx^2 + cx + d$, write down the four equations arising from the four conditions, and solve for $a$, $b$, $c$, $d$. This is the brute force approach; there are cleverer approaches using Hermite interpolation techniques, which give you the required curve immediately, without the need to solve a system of linear equations. See this article, for example. Applying the Hermite cubic formulae, we get
$$
p(t) = (m - \epsilon)(2t^3 - 3t^2) + mt + j - \epsilon \qquad (0 \le t \le 1)
$$
where $t = (x - j + \epsilon)/\epsilon$. You only have to do this once. Thereafter, you can get other polynomial filler pieces just by shifting this basic one to the left or right.
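For instance, scipy performs the Hermite-interpolation step directly (a minimal sketch; the values of $j$, $\epsilon$, $\alpha$ are arbitrary):

from scipy.interpolate import CubicHermiteSpline

j, eps, alpha = 1.0, 0.2, 0.5
m = alpha / (1 - eps)                          # slope of the linear pieces
ramp = CubicHermiteSpline([j - eps, j],        # knots
                          [j - 1 + alpha, j],  # required values at the knots
                          [m, m])              # required slopes at the knots
print(ramp(j - eps), ramp(j))                  # 0.5 1.0 for these parameters

The returned cubic automatically satisfies all four conditions above.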
The curve constructed this way will be only once differentiable -- there will be discontinuities in the second derivative at all the joints. If you want a curve that's twice differentiable, you let $p$ be a quintic polynomial, and you add two more conditions that force it to mate nicely with the adjacent linear pieces:
$$
p''(j - \epsilon) = 0 \quad ; \quad p''(j) = 0
$$
To get a curve that's thrice differentiable, use a polynomial of degree 7, and so on. |
Intersection multiplicity for two curves defined by $f=0,g=0$ | It doesn't look like anyone ever responded to your question, so here's an answer many days late (and many dollars short).
As Mohan said, $(y^2 - x^3, x^2) = (y^2, x^2)$. Note that since $x^2 \in (y^2 - x^3, x^2)$, then $x^3 = x \cdot x^2 \in (y^2 - x^3, x^2)$. Then $y^2 = y^2 - x^3 + x^3 \in (y^2 - x^3, x^2)$ as it a sum of elements of $(y^2 - x^3, x^2)$. This shows $(y^2 - x^3, x^2) \supseteq (y^2, x^2)$, but in fact the reverse inclusion holds as well since $y^2 - x^3 = y^2 - x \cdot x^2 \in (y^2, x^2)$.
Fulton lists 7 axioms that you can use in computing intersection multiplicity in section 3.3 (pp. 36-37) of his book Algebraic Curves. (The one I used above is axiom 7.)
Now we have to find the dimension over $k$ of the local ring $\left(\frac{k[x,y]}{(y^2,x^2)}\right)_{(x,y)}$. One can show that you don't actually need to localize, and $1, x, y, xy$ form a basis for the quotient ring, so the answer is 4, as stated in the comments. (This also follows from axiom 6 of Fulton's book.)
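If you want to double-check the monomial basis by machine, a Gröbner basis computation does it (a sketch using sympy):

from sympy import symbols, groebner

x, y = symbols('x y')
G = groebner([y**2 - x**3, x**2], x, y, order='grevlex')
print(G.exprs)   # [x**2, y**2]

The leading monomials $x^2$ and $y^2$ confirm that the standard monomials $1, x, y, xy$ span the quotient. |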
Combinations of 4 numbers | Your 9C3 chooses the digits and 3C1 chooses which digit will be repeated, which is fine. The 4P4 is supposed to put the digits in order, but you can swap the two matching ones to no effect. There are only $\frac{4P4}{2!}=\frac {24}2=12$ ways to order four digits including one pair, so you are overcounting by a factor $2$. Another way to get there is you choose two slots of the four for the matching digits, $4C2=6$ then choose the order for the remaining two, $2P2=2$, giving a product of $12$ |
Every point in S is umbilical $\rightarrow $ S is a plane or sphere. | A topological space $X$ is called locally path connected if every $x \in X$ is contained in a neighborhood $U$ which is path connected.
If $X$ is locally path connected, then it is connected iff it is path connected. We know that path connected implies connected always. To see the reverse implication in this case: Suppose $X$ was not path connected. Then we can write $X$ as a disjoint union of (at least two) path-connected components. But each path component must be open, by the locally path connected criterion, so $X$ would be a disjoint union of at least two nonempty open sets, contradicting connectedness.
Now, any manifold is locally homeomorphic to $\mathbb{R}^n$, and since $\mathbb{R}^n$ is locally path connected, we have that any manifold is locally path connected.
And so for a manifold, connectedness does imply path-connectedness. |
Representing with Hilbert Schmidt Norm | In order to be able to express $\text{Tr}(X^T A X)$ as a Hilbert-Schmidt norm you need to find the Cholesky decomposition of $A$ in terms of $C$, $A= C^T C$. Then
$$\text{Tr}( X^T A X) =\text{Tr}( X^T C^T C X) =\text{Tr}[ (CX)^T (C X)] = \lVert CX \rVert_\text{HS}^2. $$ |
Elimination question | $(1)\quad 0.65p+0.45q=p \iff (-p + 0.65 p) + 0.45 q = 0 \iff -0.35 p + 0.45 q = 0$
$(2) \quad 0.35p+0.55 q = q \iff 0.35 p + (0.55 q - q) = 0 \iff 0.35 p - 0.45 q = 0$
Add the equations. You'll obtain $0 = 0$ which is true, regardless of the values for $p, q$. That is, whatever $p$ you choose, there will be a $q$ that satisfies both equations. Or vice-versa. So there are infinitely many solutions $(p, q)$.
Another way to look at this is to note that really, essentially, both equations say the same thing. To see this, multiply both sides of equation $(1)$: $-0.35 p + 0.45 q = 0$, by $-1$, and you'll obtain equation $(2)$.
Indeed, if you add both equations, as they were originally posted, you get on the left-hand side: $0.65 p + 0.35 p + 0.45 q + 0.55 q = p + q$. And on the right-hand side, you get $p + q$. $p +q = p+q$ no matter what the choices for $p$ and $q$, so again, there are infinitely many solutions for $p, q$. |
Find smallest natural n>0 such that $2^n > (100 \times n)^{(100^{100})}$ | The Lambert-W function solves the equation $ye^y=x$. Over the reals, it has two branches, one with index $-1$ mapping $(-e^{-1},0)$ to $(-\infty,-1)$ monotonically falling and the index zero branch mapping $(-e^{-1},\infty)$ to $(-1,\infty)$ monotonically increasing.
Bring the equation into the above form, esp. identify $y$.
$$
2^n>(M⋅n)^N\iff e^{\ln 2 \cdot n/N}>M⋅n\iff -\frac{\ln2⋅n}Ne^{-\frac{\ln2⋅n}N}>-\frac{\ln2}{NM}
$$
On the zero branch of the Lambert-W function we get
$$
-\frac{\ln2⋅n}N>W_0\left(-\frac{\ln2}{NM}\right)\iff n<-\frac{N}{\ln2}W_0\left(-\frac{\ln2}{NM}\right)\sim\frac1M
$$
which gives no solution.
On the other $W_{-1}$ branch we get
$$
-\frac{\ln2⋅n}N<W_{-1}\left(-\frac{\ln2}{NM}\right)\iff n>-\frac{N}{\ln2}W_{-1}\left(-\frac{\ln2}{NM}\right)
$$
With $M=100$ and $N=100^{100}$ this gives
$$
n>\frac{100^{100}⋅471.64492813697046}{\ln 2}=6.804397988836388⋅100^{101}
$$
You would need a multi-precision implementation to be able to determine the exact smallest value for $n$. For instance in Magma CAS (online) the script
RR := RealField(240);
M := RR!100; N := M^M;
n := M*N;
for k in [1..240] do n := Log(RR!2,M*n)*N; end for; n;
or with Newton
n := M*N; L2:=Log(RR!2);
for k in [1..7] do n:=n*N*(Log(M*n)-1)/(L2*n-N); n; end for;
gives the result
6804397988836388082768455850030996687989728466041200158720565632303321598433
24376653075166587535687740823838573648080603275323232029104967581035695697
18034366279018545983239970527758425935354868491153542.55079187667491436363
201084068556561 |
Where does it converge? | $\sum_{k \geq n}\frac{\lambda^k}{k!}\sim_{+\infty} \frac{\lambda^n}{n!}$ and so the limit will be $0$ |
Help with what a homomorphism means in $\Phi: (\Bbb{Z}_{18},+_{18})\rightarrow(\Bbb{Z}_{12},+_{12})$ | The fundamental constraint you have is that because $18 \cdot [1]=[0]$ (where $n \cdot$ represents addition repeated $n$ times), you must have $\Phi(18 \cdot [1])=18 \cdot \Phi([1])=[0]$, so that $o(\Phi([1]))\mid 18$.
So which elements of $\Bbb Z_{12}$ have order dividing $18$? Those would be multiples of $2$. Possible values of $a$ are therefore $0, 2, 4, 6, 8, 10$. |
Why does matrix exponentiation work with transposes? | The answer is affirmative. Maybe generalizing the properties of matrix transposes and powers will be insightful.
Let $A_i$ be a square matrix for $i=1,2,3,...,n$.
Consider the matrix product $B = A_1 A_2 A_3 \cdots A_n$. Recall that $(XY)^T = Y^T X^T$. It can be shown by induction that $$B^T = (A_1 A_2 A_3 \cdots A_n)^T \\
= A_n^T \cdots A_3^T A_2^T A_1^T.$$
In the special case where $A_i = A$ for all $i=1,2,3,\ldots,n$, then we have $B = AAA\cdots A = A^n$. Similarly, we have $B^T = A^T \cdots A^T A^T A^T = (A^T)^n$. But since $B = A^n$, then $B^T = (A^n)^T$. So altogether, we have $(A^n)^T = (A^T)^n$. QED
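A quick numerical sanity check of $(A^n)^T = (A^T)^n$ (a sketch with a random $4\times 4$ matrix):

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
print(np.allclose(np.linalg.matrix_power(A, 5).T,
                  np.linalg.matrix_power(A.T, 5)))   # True

As expected, the test prints True. |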
Why is $\int_0^{{9\pi}\over{4}}{1 \over |\sin x|+|\cos x| }dx~ = ~9\int_0^{{\pi}\over{4}}{1 \over |\sin x|+|\cos x| }dx$? | Hint:
The function has a symmetry about $x=\frac\pi4$, i.e. $f(x)=f\left(\frac\pi2-x\right)$:
$$\left|\sin\left(\frac\pi2-x\right)\right|+\left|\cos\left(\frac\pi2-x\right)\right|=|\cos x|+|\sin x|.$$ |
Solve the set of equations | You get the second equation by multiplying the first equation by a factor of $-2$
This means that the two equations are dependent: if one of them is satisfied, then the other is automatically satisfied. There are infinitely many solutions. |
Show Stochastic Differential Equation is Martingale | Itô's formula is of great help here:
$$ df(X_t)= f'(X_t)dX_t + \frac12 f''(X_t)d[X]_t $$
now $d[X]_t = W_t^6dt$ hence
$$ d(Z_t) = d(X_t^2) = 2X_tdX_t + d[X]_t = 2 X_t W_t^3dW_t + W_t^6dt $$
and so
$$ X_t^2 = \int_0^t 2X_sW_s^3 dW_s + \int_0^t W_s^6ds. $$
$Z_t$ cannot be a martingale since its drift term is nonzero. |
How to find the conditional PDF of Y which is a sum of the source X and noise Z. | $$[Y\mid X=x]\stackrel{d}{=}[X+Z\mid X=x]\stackrel{d}{=}[x+Z\mid X=x]\stackrel{d}{=}x+Z$$
where the last equality is based on independence of $X$ and $Z$.
I used $\stackrel{d}{=}$ here because we are actually dealing with equality of distributions, while "$Y$ under condition $X=x$" cannot be regarded as a random variable.
It is stated here that under condition $X=x$ the distribution of $Y$ coincides with the distribution of $x+Z$. |
Convergence in $(L_P(a,b),\|.\|_p)$ doesn't imply convergence in $(C(a,b),\sup)$ | Since $\|f_n\|_p\to0$, we say that $f_n$ converges to $0$ in $L^p$ (the expression uniform convergence in $L^p$ is not used.)
It is clear that $\lim_{n\to\infty}f_n(x)$ is $0$ if $x\in(0,1]$ and $1$ if $x=0$. Thus, $f_n$ converges pointwise to the discontinuous function $f(x)=0$ if $x\in(0,1]$, $f(0)=1$. Since each $f_n$ is continuous, the convergence cannot be uniform (in $(C([0,1]),\sup)$.) You can also see this by noting that $f_n(1/n)=1/2$ and $\sup|f_n|\ge1/2>0$. |
Understanding of exterior algebra | There are two views on graded rings, the view taken by algebraists and the view taken by topologists. To an algebraist, an element of the graded ring is just a formal linear combination of elements of each degree (we say an element of a single degree is homogeneous). To a topologist, you never actually add elements of different degrees. You can multiply two things of different degrees, but adding them gives nonsense. Not nonsense in that it can't be interpreted algebraically, but nonsense in the sense that homogeneous elements of your ring correspond to something not purely algebraic while non-homogeneous elements don't.
This is a perfect example of why topologists take the approach they do. Given two alternating $n$-multilinear functions $f$ and $g$, we can define $f+g$ to be another alternating $n$-multilinear function. However, there is no good way to make sense of the "sum" of two functions that take in a different number of elements. So the solution, as unhelpful as it might sound is "don't add elements of different degrees, or if you do, don't try to give them meaning."
For your second question, I believe that Lee's introduction to smooth manifolds has a treatment of the exterior algebra in roughly the way that wikipedia does, but likely in enough detail that you can better understand it. The basic idea is that, instead of working with functions (where the multiplication ends up being funny), we just formally define a multiplication of vectors so that we can write "words" in our vectors, and we throw in just the relations that would make the product skew symmetric, and we let that be the alternating algebra. Understanding the construction requires understanding tensor products, which can be tricky the first time you see them.
You can find a quick definition and proof of the basic properties of the exterior algebra here, but I do not know that it will be much more helpful than the wikipedia article. |
If $E[X|Y]=Y$ almost surely and $E[Y|X]=X$ almost surely then $X=Y$ almost surely | Simply follow the hint... First note that, since $E(X\mid Y)=Y$ almost surely, for every $c$, $$E(X-Y;Y\leqslant c)=E(E(X\mid Y)-Y;Y\leqslant c)=0,$$ and that, decomposing the event $[Y\leqslant c]$ into the disjoint union of the events $[X>c,Y\leqslant c]$ and $[X\leqslant c,Y\leqslant c]$, one has $$E(X-Y;Y\leqslant c)=U_c+E(X-Y;X\leqslant c,Y\leqslant c),$$ with $$U_c=E(X-Y;X>c,Y\leqslant c).$$ Since $U_c\geqslant0$, this shows that $$E(X-Y;X\leqslant c,Y\leqslant c)\leqslant 0.$$ Exchanging $X$ and $Y$ and following the same steps, using the hypothesis that $E(Y\mid X)=X$ almost surely instead of $E(X\mid Y)=Y$ almost surely, one gets $$E(Y-X;X\leqslant c,Y\leqslant c)\leqslant0,$$ that is $$E(Y-X;X\leqslant c,Y\leqslant c)=0,$$ which, coming back to the first decomposition of an expectation above, yields $U_c=0$, that is, $$E(X-Y;X>c\geqslant Y)=0.$$ This is the expectation of a nonnegative random variable hence $(X-Y)\mathbf 1_{X>c\geqslant Y}=0$ almost surely, which can only happen if the event $[X>c\geqslant Y]$ has probability zero. Now, $$[X>Y]=\bigcup_{c\in\mathbb Q}[X>c\geqslant Y],$$ hence all this proves that $P(X>Y)=0$. By symmetry, $P(Y>X)=0$ and we are done. |
If one vertical arrow in a pullback is an iso, then so is the other | You can show that $\alpha \circ g$ is equal to $1_A$ using the uniqueness condition for maps into a pullback.
$f:A \rightarrow B$ and $g: A \rightarrow C$ are maps such that $h \circ f = j \circ g$ so there is a unique $x:A \rightarrow A$ such that $f \circ x = f$ and $g \circ x = g$. Obviously, $1_A$ is one such $x$.
We need to show that $(\alpha \circ g)$ is another such map. You've proved that $f \circ (\alpha \circ g) = f$. You can also show that $g \circ (\alpha \circ g) = g$. So $(\alpha \circ g)$ satisfies the condition.
Since there is a unique map $x$ satisfying $f \circ x = f$ and $g \circ x = g$, and both $1_A$ and $(\alpha \circ g)$ satisfy this, they must be equal. |
Residue of composite functions | Suppose $g$ has a simple pole at $s_0$ with residue $r$ there, and $f'(z_0) \ne 0$.
We have $$f(z) = s_0 + f'(z_0) (z - z_0) + O((z-z_0)^2)$$ and
$$\eqalign{g(f(z)) &= r (f(z) - s_0)^{-1} + O(1)\cr &= r(f'(z_0) (z-z_0) + O((z-z_0)^2))^{-1} + O(1)\cr
&= \frac{r}{f'(z_0)}(z-z_0)^{-1} + O(1)}$$
so the residue of $g(f(z))$ at $z=z_0$ is $r/f'(z_0)$.
If $g$ has a higher-order pole at $s_0$, or if $f'(z_0) = 0$, things can be more complicated. |
Finding minimum value of a square root function | The main thing is
$$ t(t+1)(t+2)(t+3) + 1 = \left( t^2 + 3 t + 1 \right)^2 $$
Let's see, we have
$$ \left| t^2 + 3 t + 1 \right| $$
which does achieve zero when $$ t = \frac{-3 \pm \sqrt 5}{2} $$ |
How to find the limit with limit on variable in power? | $$\lim_{x \to \infty}\left(1-\frac{s}{2^{n-1}}\right)^{x}=0, \quad0< s < 2^{n}$$
As $0<s<2^n$, $\displaystyle \frac{s}{2^{n-1}}\in(0,2)$, so $\displaystyle 1-\frac{s}{2^{n-1}}\in(-1,1)$, i.e. $\displaystyle \left|1-\frac{s}{2^{n-1}}\right|<1$. Since $\lim_{x\to\infty}k^x=0$ whenever $|k|<1$, the conclusion follows. |
Help Please. Conditional Probability Question | (a) $P(X = 2) = P(\text{two get watered})\cdot P(\text{two live when watered}) + P(\text{two don't get watered}) \cdot P(\text{two live without water})$
$$= (0.95) \cdot \left({3\choose 2}(0.9)^{2}(0.1)\right) + (0.05)\cdot \left({3\choose 2}(0.3)^{2}(0.7)\right) = \boxed{0.2403}$$
(b) $P(\text{neighbor remembered } \mid \text{one plant alive}) = \frac{P(\text{one plant alive} \mid \text{neighbor remembered}) \cdot P(\text{neighbor remembered})}{P(\text{one plant alive})}$
$$= \frac{\left({3\choose 1} \cdot (0.9)^{1} \cdot (0.1)^{2}\right) \cdot \left(0.95\right)}{(0.95) \cdot \left({3\choose 1}(0.9)(0.1)^2\right) + (0.05)\cdot \left({3\choose 1}(0.3)^{1}(0.7)^2\right)} = \boxed{0.537735849}$$
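The arithmetic can be double-checked in a few lines (a minimal sketch):

from math import comb

pw = 0.95                                   # neighbour remembers to water
a = pw*comb(3, 2)*0.9**2*0.1 + (1 - pw)*comb(3, 2)*0.3**2*0.7
num = pw*comb(3, 1)*0.9*0.1**2
den = num + (1 - pw)*comb(3, 1)*0.3*0.7**2
print(a, num/den)                           # 0.2403 and 0.53773...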
Note: ${n\choose k} = \frac{n!}{k!(n - k)!}$ |
Open ball with a radius tending to $0$ and adherent point | I think what you mean is : let $X$ be a metric space with metric $d$, then $ \{x_{0}\} = \cap_{\epsilon > 0}B(x_{0},\epsilon)$ because for any $x_{1}\neq x_{0} $ there is an $\epsilon_{1}> 0$ such that $d(x_{0},x_{1})> \epsilon_{1}$.
The problem is that, for any $\epsilon>0$, $B(x_{0},\epsilon) \cap A \neq \emptyset$ (for any fixed $\epsilon$ we can find at least one point of $A$ close to $x_{0}$) does not imply that $\cap_{\epsilon > 0}B(x_{0},\epsilon)\cap A \neq \emptyset$ (that a single point of $A$ is arbitrarily close to $x_{0}$). |
Find next term of the series ${-3, 2, -4/3, 8/9, -16/27}$ | The next-term problem is generally not a math problem: there are infinitely many possibilities, unless you spot a pattern. Possibly, starting from $k=0$, you have:
$$s_k = -(-2)^k\cdot 3^{1-k}\,,$$
or
$$s_k = -3 \left(\frac{-2}{3}\right)^k\,.$$
Works for the first five terms.
In your case, it is easy to spot powers of $2$ and powers of $3$, and the alternating sign. Powers suggest computing ratios of terms. The alternating sign pattern $-1,1,-1,1,\ldots$ suggests it is not too complicated.
Simple techniques are computing differences or ratios, or differences of differences, etc. And recognizing standard series. But this falls short fast with $5$ terms.
But you never know what the person who asks you the question has in mind. What would one do with $8,12,15,20,23,28,32,35,38,43,45,50,56$?
This is the cumulative number of letters of the words in the lyrics of Shine On You Crazy Diamond, by Pink Floyd, which I am currently listening to. So the next is $59$, because "now" follows "Remember when you were young, you shone like the sun. Shine on you crazy diamond". |
Trace of the multiplication of two matrix | Let $E_{ij}$ be the matrix with $1$ in the $(i,j)$ position and
$0$ elsewhere. Then $Tr(E_{i,j}A)=a_{j,i}$, the $(j,i)$ entry.
Thus the linear functional $\phi_{j,i}:A\mapsto a_{j,i}$ has a
representation as a trace map $A\mapsto Tr(TA)$.
All linear functionals on $M_n(F)$ are linear combinations of the $\phi_{j,i}$. |
Cardinality of solutions for a set of linear equations | No, the number of different solutions is not bounded by $n$. Take $n=3$ with
$(A,B,C)=(3,2,2)$. Then, over the finite field $\mathbb{F}_5$ the system
\begin{align*}
x_1+x_2+x_3 & = 3,\\
x_1x_2x_3 & = 2,\\
\frac{1}{x_1}+\frac{1}{x_2}+\frac{1}{x_3} & = 2
\end{align*}
has $6$ different solutions $(x_1,x_2,x_3)=(1,3,4), (1,4,3), (3,1,4),(3,4,1),(4,1,3),(4,3,1)$
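A brute-force check (a sketch; it uses $x^{-1}=x^{3}$ in $\mathbb{F}_5^{\times}$, since $x^{4}=1$):

from itertools import product

sols = [t for t in product(range(1, 5), repeat=3)
        if sum(t) % 5 == 3
        and (t[0]*t[1]*t[2]) % 5 == 2
        and sum(pow(x, 3, 5) for x in t) % 5 == 2]
print(sols)

This reproduces exactly the six permutations of $(1,3,4)$ listed above. |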
Upper bound for the error magnitude | If you have got the expression of $p_2(x)$, then finding the maximum of $|e^x-p_2(x)|$ for $x\in[0,1]$ can be converted to the following question:$$\max_{x\in[0,1]} (e^x-p_2(x))$$ or $$\min_{x\in[0,1]} (e^x-p_2(x))$$
Then just let $g(x)=e^x-p_2(x)$ and find the critical points in $[0,1]$.
Hope this can help you. |
Ffind an ordinal type of the well-ordering | There's an easy way to get an ordering of $F(L)$ with order-type $\omega$: essentially, Goedel numbers.
We'll assign positive integers to each symbol and each word, as follows:
A word $a_1...a_n$ is assigned the number $$Code(a_1...a_n)=\prod_{1\le i\le n} p_i^{1+code(a_i)}.$$
We set $code(c_\varphi)=2^{Code(\varphi)+1}$.
And we initialize as follows: the $i$th basic symbol (that is, from $(,),P,\neg,...$, ordered as you do) gets code $3^{i+1}$.
This isn't the most efficient numbering scheme, but it's easy to show that the map $Code$ is an injection from $F(L)$ into $\mathbb{N}$; this gives the desired ordering.
As to the ordering on $F(L)$ that you describe: it appears to be $\epsilon_0$, the limit of $\omega, \omega^\omega, \omega^{\omega^\omega}, ...$.
To see this, consider the following: we let $M_0=\mathbb{N}$ with the usual ordering, and $M_{i+1}$ be the set of words in $M_i$ ordered first by length and second lexicographically. This isn't quite your construction, but it's easier to visualize, and it's not hard to see they have the same order type.
Then by induction, $M_{i+1}$ has order type $M_i+M_i^2+M_i^3+...$, so we have
$M_0$ has order type $\omega$,
$M_1$ has order type $\omega^\omega$,
$M_2$ has order type $\omega^{\omega^\omega}$,
Etc. |
Solving a simple system of ode | We rearrange our differential equation $y'=c+yt$.
$$\frac{dy}{dt}-ty=c$$
Now it is in the general form of a first order linear non-homogeneous ODE:
$$\frac{dy}{dt}+P(t)y=Q(t)$$
Hence our integrating factor is:
$$\mu(t)=e^{\int P(t)~dt}$$
$$\mu(t)=e^{\int -t~dt}$$
Note that it is not necessary to consider the constant of integration:
$$\mu(t)=e^{-\frac{t^2}{2}}$$
Hence, your integrating factor was not correct.
We now multiply both sides by our integrating factor $\mu(t)$.
$$e^{-\frac{t^2}{2}} \frac{dy}{dt}-t e^{-\frac{t^2}{2}}y =ce^{-\frac{t^2}{2}} $$
After substituting $-e^{-\frac{t^2}{2}} t=\frac{d}{dt} \left(e^{-\frac{t^2}{2}}\right)$ and applying the reverse product rule, we obtain:
$$\int \frac{d}{dt} \left(e^{-\frac{t^2}{2}} y\right)~dt=\int {ce^{-\frac{t^2}{2}}}~dt$$
$$e^{-\frac{t^2}{2}} y=\int {ce^{-\frac{t^2}{2}}}~dt$$
We notice that $\int ce^{-\frac{t^2}{2}}~dt$ is not solvable in terms of elementary functions. We can either evaluate this by using the definition of the error function $\text{erf} (x)$ or by using Wolfram Alpha.
We evaluate this to:
$$\int c e^{-\frac{t^2}{2}}~dt=c \sqrt{\frac{\pi}{2}} \text{erf} \left(\frac{t}{\sqrt{2}}\right)+k$$
Where $k$ is the arbitrary constant of integration. Hence,
$$e^{-\frac{t^2}{2}} y=c \sqrt{\frac{\pi}{2}} \text{erf} \left(\frac{t}{\sqrt{2}}\right)+k$$
Therefore, our general solution is:
$$y(t)=e^{\frac{t^2}{2}}\left(c \sqrt{\frac{\pi}{2}} \text{erf} \left(\frac{t}{\sqrt{2}}\right)+k\right)$$
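One can confirm this closed form with a CAS (a minimal sketch using sympy):

import sympy as sp

t, c = sp.symbols('t c')
y = sp.Function('y')
print(sp.dsolve(sp.Eq(y(t).diff(t) - t*y(t), c), y(t)))
# the output combines erf(sqrt(2)*t/2) with exp(t**2/2), matching the result above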
Please do not hesitate to ask if you have any doubts or questions. |
Stock with long equation | First I'm going to change the left part a bit ($p\rightarrow q\equiv \neg p \vee q$ and De Morgan's rule). So we have to show $$\forall x (p(x)\wedge q(x))\vee\forall x(p(x)\vee q(x))\models \forall x (p(x)\vee q(x)).\quad \quad \quad(1)$$ Using the resolution method, we have to negate the right part and arrive at a contradiction. So, negating the right part we have $$\exists x \neg( p(x)\vee q(x)).$$ With this we know there is an element $a$ such that $$\neg (p(a)\vee q(a)).\quad\quad\quad (2)$$ Now using the left part of $(1)$ we know that for $x=a$ this stands (for both quantifiers), so we have $$(p(a)\wedge \neg q(a))\vee p(a)\vee q(a),$$ which you can show is equivalent to $$p(a)\vee q(a).$$ Combining the last equation with $(2)$ we get $$(p(a)\vee q(a))\wedge \neg(p(a)\vee q(a)),$$ which is a contradiction. |
Determine the $\alpha$'s for which the integral converges $\int _0^4\:\frac{\sqrt{x}}{\left(4-x\right)^{\alpha }}dx $ | Look at the neighborhoods of $0$ and $4$. In the neighborhood of $0$ the integrand is equivalent to $\sqrt{x}/4^{\alpha}$, which causes no trouble. Now let us look at the neighborhood of $4$: the integrand is equivalent to $2/(4-x)^{\alpha}$, whose antiderivative is $\frac{2}{(\alpha-1)(4-x)^{\alpha-1}}$. This has a finite limit as $x\to4$ exactly when $\alpha-1<0$. |
Paley Wiener stochastic integral | The argument is that the random variables $Z((2j-1)/2^n)$ are not just any random variables and the sigma-algebras $\mathcal F_n$ are not just any sigma-algebras... Each $Z((2j-1)/2^n)$ is proportional to $$Y=B(u+2v)+B(u)-2B(u+v),\qquad u=(j-1)/2^n,\qquad v=1/2^{n+1},$$ and $\mathcal F_n\subseteq\mathcal G_n$ where $$\mathcal G_n=\sigma(B(k/2^n);1\leqslant k\leqslant 2^n).$$ The crucial fact is the following.
Let $r\lt s\lt t$, then $E(B(s)-B(r)\mid B(r),B(t))=\dfrac{s-r}{t-r}(B(t)-B(r))$.
In particular, $B(u)$ and $B(u+2v)$ are $\mathcal G_n$-measurable (but not $B(u+v)$) hence $$E(B(u+2v)-B(u+v)\mid\mathcal G_n)=E(B(u+v)-B(u)\mid\mathcal G_n)=\tfrac12(B(u+2v)-B(u)).$$ This implies $E(Y\mid\mathcal G_n)=0$, and finally, $$E(Y\mid\mathcal F_n)=E(E(Y\mid\mathcal G_n)\mid\mathcal F_n)=0.$$ |
Breaking fraction in integral $\int \! \frac{ x+3}{ x^2+4x+5} \, \mathrm d x$ | Hints:
In order to integrate
$$\frac{x+3}{x^2+4x+5},$$
you would like to see $$(x^2+4x+5)'=2x+4$$ in the numerator.
You solve this by finding coefficients such that
$$x+3=a(2x+4)+b,$$ i.e. by identification $$2a=1,\\4a+b=3.$$
Then the first term of the integral, of the form $\dfrac{p'(x)}{p(x)}$, becomes easy. The second, of the form $\dfrac1{p(x)}$, can be related to the derivative of $\arctan(t)$, namely $\dfrac1{t^2+1}$, by a linear substitution.
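A CAS confirms the resulting antiderivative (a minimal sketch using sympy):

import sympy as sp

x = sp.symbols('x')
print(sp.integrate((x + 3)/(x**2 + 4*x + 5), x))
# log(x**2 + 4*x + 5)/2 + atan(x + 2)

This matches the two terms described above. |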
Triple systems with no six points carrying three triangles | Maybe this one? You may need to sign in. |
Confusion in Math Logic Implications | The second premise is an 'only if' statement, which is different from an 'if' statement (and also from an 'if and only if' statement). 'Only if' is equivalent to a necessary condition, whereas 'if' is equivalent to a sufficient condition, and 'if and only if' is both necessary and sufficient. So the second premise means that it must be sunny if we are to go swimming, but it doesn't require that we go swimming if it is sunny.
From the wikipedia page on 'if and only if':
Sufficiency is the converse of necessity. That is to say, given P→Q (i.e. if P then Q), P would be a sufficient condition for Q, and Q would be a necessary condition for P. Also, given P→Q, it is true that ¬Q→¬P (where ¬ is the negation operator, i.e. "not"). This means that the relationship between P and Q, established by P→Q, can be expressed in the following, all equivalent, ways:
P is sufficient for Q
Q is necessary for P
¬Q is sufficient for ¬P
¬P is necessary for ¬Q |
Intersection of a cone and a plane | Expanding on the above comment, if we substitute $z = c(1-x)$ in the equation of $K$ we get $4x^2 = y^2 + c^2(1-x)^2$, i.e.
$$
(4-c^2)x^2 - y^2 + 2c^2 x = c^2 \label{eq:1} \tag{1}
$$
Now all we have to do is distinguish a few cases.
If $c = 0$ then \eqref{eq:1} becomes
$$
0 = 4x^2 - y^2 = (2x + y)(2x - y)
$$
so $E_0 \cap K$ is a degenerate conic, the union of the lines $y = \pm 2x$.
If $0 < c < 2$ then \eqref{eq:1} becomes the equation of a hyperbola, because the terms in $x^2$ and $y^2$ have opposite signs. In particular, for $c = \sqrt{3}$ we have a rectangular hyperbola.
If $c = 2$ then \eqref{eq:1} becomes
$$
y^2 = 8 x - 4
$$
so $E_2 \cap K$ is a parabola.
If $c > 2$ then \eqref{eq:1} becomes the equation of an ellipse, because the terms in $x^2$ and $y^2$ have the same sign.
In particular, for $c = \sqrt{5}$ we have a circle. |
Continuous one to one function is monotone | Suppose not. Then there must exist some $x \neq y$ for which $f(x) = f(y)$.
Either $x > y$ or $x < y$. In both cases this contradicts the fact that the function is strictly increasing ($x > y \implies f(x) > f(y)$). |
Continuity of mapping a transformation to its inverse in Banach spaces | Use the formula
$$
T^{-1} =\det(T)^{-1} adj(T).
$$
Here, $T\mapsto \det(T)^{-1}$ is continuous, $T\mapsto adj(T)$ is continuous, as the entries of $adj(T)$ are polynomial in the entries of $T$. |
How to understand the wedge product of $k$ vectors in $\mathbb{F}^n$? | If $v_j=\sum_{i=1}^na_{ij}e_i\quad (j=1\cdots k, \;a_{ij}\in \mathbb F)$, then the requested formula is: $$ v_1 \wedge ... \wedge v_k=\sum _I A_Ie_I$$ where $I$ runs through all strictly increasing sequence of integers $I=(1\leq i_1\lt\cdots\lt i_k\leq n)$ and where $e_I=e_{i_1}\wedge \cdots \wedge e_{ i_k}\in \Lambda ^k (\mathbb F^n)$.
The coefficient $A_I\in \mathbb F$ of $e_I$ is the determinant of the square $k\times k$ matrix obtained by extracting rows $i_1,\cdots , i_k$ from the $n\times k$ matrix $A=(a_{ij})$. |
Understanding the proof of Prop 13.11 of Joy of Cats | By definition a (colimit-)dense subcategory is a full subcategory, see definition 12.10 in the linked file. So since $f_i \circ c_j$ is an arrow between two objects in $\mathbf{A}$, that arrow must also be in $\mathbf{A}$. So indeed, the existence of $g_j$ follows from the universal property of the limit $\mathcal{L}$ in $\mathbf{A}$. |
What does it mean for the infinite intersection of nested sets to be empty? | This means there cannot be a real number $x$ such that $0 < x < \dfrac{1}{n}$, for all $n$. For if there were such an $x$, then $x \leq \displaystyle \lim_{n \to \infty} \dfrac{1}{n} = 0$. Thus $x \leq 0$, and $x > 0$ , a contradiction. |
A jar contains $m + n$ chips, numbered $1, 2.... , n + m$. | This seems to be a draw without replacement.
Suppose $K$ is the largest value remaining in the bag. Then all $X=n+m-K$ larger values are drawn, and so too are $K-m$ smaller values from the remaining $K-1$ possibilities.
This happens with probability (writing $k=n+m-x$) $$P(X=x)=\dfrac{k-1 \choose k-m}{n+m \choose n} = \dfrac{n+m-x-1 \choose n-x}{n+m \choose n}= m \dfrac{(n+m-x-1)! n! }{(n-x)! (n+m)!}$$
giving positive values for $0 \le X \le n$. |
Understanding the term "double roots" of a quadratic equation | Consider the assertion “the polynomial $(x-a)(x-b)$ has two roots”. Is it true? Well, yes if $a\ne b$, but no if $a=b$. If $a=b$, then that polynomial is $(x-a)^2$ and then we say that $a$ is a double root. Then if we count the roots with their multiplicity, (a double root is counted twice, a triple root is counted three times, …), a polynomial with degree $n$ will always have $n$ roots (at least over the complex numbers).
And, going back to quadratics, if we say that the roots of $(x-a)^2$ are $a$ and $a$, then $a+a=2a$ and $(x-a)^2=x^2-2ax+a^2$ and indeed the sum of the roots is minus the coefficient of $x$. |
Primitive polynomials | A polynomial is called primitive (in the context of finite fields), iff its zero is a generator of the multiplicative group of the field it generates. In this case the polynomial is quadratic, so a root $\alpha$ will generate the field $L=\mathbb{F}_{25}$.
The multiplicative group of $L$ is cyclic of order 24. By the well known facts about cyclic groups, the group $L^{\times}$ has $\phi(24)=\phi(2^3\cdot3)=2^2(3-1)=8$ generators. Another way of seeing this is to observe that if $g$ is a primitive root of unity of order $24$, all of them are in the list $g$, $g^5$, $g^7$, $g^{11}$, $g^{13}$, $g^{17}$, $g^{19}$ and $g^{23}$. Anyway, between them, these eight elements have four distinct minimal polynomials (the conjugate of a primitive element is also primitive, and the conjugates come in pairs in a quadratic extension).
The answer is thus that there are exactly $4$ quadratic primitive polynomials.
Looking at the specific polynomial $p(x)=x^2+2x+3$. Its discriminant is non-square, so it is irreducible (alternatively one may check that it has no zeros in $\mathbb{F}_5$). Let $g$ be one of its roots in the extension field
$\mathbb{F}_{25}$. We want to prove that it is of order $24$. We know that the order is a factor of $24$, so it suffices to rule out factors of $12$ and $8$
as orders.
The other zero of $p(x)$ is the Frobenius conjugate $g^5$ (basic Galois theory).
The product of the zeros of a monic quadratic is its constant term, so we know that
$$
g^6=g\cdot g^5=3.
$$
Therefore $g^{12}=3^2=4=-1$, and we can conclude that the order of $g$ is not a factor of $12$. We still have to rule out the possibility that $g$ could be of order $8$. If $g$ were of order $8$, then $g^2$ is of order four. But all the fourth roots of unity are in the prime field $\mathbb{F}_5$. OTOH from the minimal polynomial we read that $g^2=-(2g+3)=3g+2$ is not $\in \mathbb{F}_5$.
The claim follows from this. |
Local Homeomorphism and Connected Set. | Hint: Let $C=\{x\in X: f(x)=g(x)\}$. Suppose there is $x\in X$ such that $f(x)=g(x)$. Then $C\not=\emptyset$. The idea then is to show that $C$ is both open and closed, and must therefore be all of $X$. That $C$ is open will use the fact that $\varphi$ is a local homeomorphism and that $\varphi \circ f=\varphi \circ g$. That $C$ is closed amounts to proving $C^c$ is open, which uses the same logic.
Edit: Let $x\in C$. Then consider $f(x)=g(x)=y\in M$. Then there is a $U\subset M$ an open neighborhood of $y$ such that $\varphi: U\to \varphi(U)$ is a homeomorphism. Now consider $f^{-1}(U)\cap g^{-1}(U)=W$, which is open in $X$ by continuity. Now let $z\in W$. Then $f(z), g(z)\in U$ and $\varphi(f(z))=\varphi(g(z))$. Since $\varphi$ is injective on $U$, $f(z)=g(z)$. Thus $z\in C$. So $x\in W\subset C\Rightarrow C$ is open.
Edit 2: As pointed out, that $C$ is closed is actually trivial since $f,g$ are continuous and $M$ is Hausdorff. That completes the proof that $C$ is closed + open and we assumed it was nonempty, so $C=X$. |
Multiplication of matrices ${\bf W}$ and ${\bf W}^H$, which are the span of nullspace | This is true provided ${\bf W}$ is an orthonormal basis for the nullspace. Such a basis will always exist. Perhaps you have been using MATLAB's null command, or something similar, which always produces an orthonormal basis.
By definition of orthonormality, $W^HW = I_{m-n}$.
$\|{\bf W}{\bf W}^H-{\bf I}_m\|_F^2 = trace((WW^H - I_m)(WW^H - I_m)^H) = trace(W(W^HW)W^H - WW^H - WW^H + I_m) = trace(-WW^H + I_m) = -trace(W^HW) + trace(I_m) = -(m - n) + m = n$ |
Use Fermat's little theorem to solve $7^{222}$mod $11$ | If we have $$a \equiv b \pmod{c}$$ then we can also state that $$a^n \equiv b^n \pmod{c}$$ in your case, take $a= 7^{10}$, $b=1$ and $c=11$. Noting that you can write $$7^{222} = (7^{10})^{22 + 2}$$ the result follows. Since $$(7^{10})^{22} = 1^{22} \pmod{11}$$
The reason that we can say $a \equiv b \pmod{c} \implies a^n \equiv b^n \pmod{c}$ is the well known result that if $a \equiv b \pmod{c}$ and $d \equiv e \pmod{c}$ then $ad \equiv be \pmod{c}$; taking $d = a$ and $e=b$ yields $a^2 \equiv b^2\pmod{c}$, and we continue by induction on $n$ to get the result.
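As a quick numerical check (Python's three-argument pow does modular exponentiation):

print(pow(7, 222, 11), pow(7, 2, 11))   # both print 5

The agreement reflects $(7^{10})^{22}\equiv 1 \pmod{11}$. |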
Casella and Berger, Standard Normal Hierarchical Conditional Distribution | We know that
$$
M_{Y}(t)=E(e^{Yt})=E(E(e^{Yt}|N))
$$
while we know the moment generating function of the $\chi^2_{2n}$ distribution (the conditional distribution of $Y$ given $N=n$) is given by
$$
M_{Y|N=n}(t)=(1-2t)^{-2n/2}=(1-2t)^{-n}
$$
Therefore we have
$$
M_{Y}(t)=E((1-2t)^{-n})=\sum^{\infty}_{n=0}(1-2t)^{-n}\frac{\theta^{n}}{n!}e^{-\theta}=e^{-\theta}e^{\theta/(1-2t)}=e^{\frac{2t}{1-2t}\theta}
$$
Now the rest follows by using
$$
M_{aY+b}(t)=e^{bt}M_{Y}(at)
$$
and substitute $EY=2\theta, \textrm{Var}(Y)=8\theta$, then let $\theta\rightarrow \infty$. I will leave the computational details to you.
My guess is you can also work by brute force, fiddling with $Y$'s pdf (there is a problem in Casella and Berger solved by me this way earlier), but it is quite a mess in this case. |
Is semi-simple Lie algebra $L$, satisfy $ L\in{\rm Max}-\triangleleft$? | Hint: Can a semisimple Lie algebra have infinitely many ideals? |
Does Alexander Duality commute with inclusion? | Theorem 5.1 of http://projecteuclid.org/download/pdf_1/euclid.pjm/1102721108 gives what you want, at least in the setting of simplicial complexes. ("Derived complex" there just means "barycentric subdivision", i.e. you have to consider a sufficiently fine triangulation of the sphere to contain triangulations of A and A'.) |
How to resolve the apparent paradox resulting from two different proofs? | In the second proof, everything between "So, let us assume..." and "...then $r_z\le \min \{r_x-d(x,z),r_y-d(y,z)\}$." is not really part of the proof (you may notice that nothing proven in that section is ever used to make any logical deduction in the remainder of the proof). Rather, it is motivation for the following line of the proof: "Now let us choose $r_z\in (0,\min \{r_x-d(x,z),r_y-d(y,z)\}]$". The preceding section attempts to show why this is a wise choice of $r_z$, but you could still make this choice of $r_z$ even if you didn't provide any reason to think it was a wise choice (which is what is done in the first proof).
In particular, you don't care if $\inf R$ could be $0$, because nothing about $R$ is actually necessary for the validity of the second proof.
[In fact, the motivation provided by the discussion about $R$ is not actually good motivation. In most examples, $\inf R$ actually will be $0$, and so it was wrong to think that $r_z$ was forced to be $\inf R$. Indeed, it is unclear to me why you defined $R$ at all, or why you think $\inf R$ is a good choice for $r_z$.] |
interpret a sum geometrically | (Source: Wooly Thoughts afghans)
Or, more generally, try a search for "the sum of odd numbers is a square". |
Commutators of tensor product of Pauli matrices | If you check out Kronecker Product you will see that it has the mixed-product property:
$$
(\mathbf {A} \otimes \mathbf {B} )(\mathbf {C} \otimes \mathbf {D} )=(\mathbf {AC} )\otimes (\mathbf {BD} ).
$$
Using this property and the fact that
$$
\sigma^a\sigma^b = \delta_{ab}I+i\epsilon_{abc}\sigma^c
$$
you can expand the product $(\sigma^a \otimes \sigma^c)(\sigma^b \otimes \sigma^d)$ as
\begin{align}
(\sigma^a \otimes \sigma^c)(\sigma^b \otimes \sigma^d)
&= (\sigma^a\sigma^b)\otimes(\sigma^c\sigma^d) \\ &= (\delta_{ab}I+i\epsilon_{abe}\sigma^e)\otimes(\delta_{cd}I+i\epsilon_{cdf}\sigma^f) \\
&=\delta_{ab}\delta_{cd}I+i\epsilon_{abe}\delta_{cd}(\sigma^e\otimes I)+i\epsilon_{cdf}\delta_{ab}(I \otimes \sigma^f)-\epsilon_{abe}\epsilon_{cdf}(\sigma^e\otimes\sigma^f).
\end{align}
Since the first and last terms in this expression are symmetric when the indices $ab$ and $cd$ are permuted, the first commutator you ask for simplifies to
$$
[\sigma^a \otimes \sigma^c, \sigma^b \otimes \sigma^d] = 2i\epsilon_{abe}\delta_{cd}(\sigma^e\otimes I)+2i\epsilon_{cdf}\delta_{ab}(I \otimes \sigma^f).
$$
Note that the two terms are mutually exclusive since if $\delta_{cd}=1$, then $\epsilon_{cdf}=0$, and likewise for the pair of indices $ab$.
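A direct numerical check of this identity (a minimal sketch; the index choice $a,b,c,d$ is arbitrary):

import numpy as np

s = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], complex)]
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1, -1   # Levi-Civita symbol
I2 = np.eye(2)

a, b, c, d = 0, 1, 2, 2
P, Q = np.kron(s[a], s[c]), np.kron(s[b], s[d])
lhs = P @ Q - Q @ P
rhs = sum(2j*eps[a, b, e]*(c == d)*np.kron(s[e], I2)
          + 2j*eps[c, d, e]*(a == b)*np.kron(I2, s[e]) for e in range(3))
print(np.allclose(lhs, rhs))   # True

The same check passes for any choice of the four indices. |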
pseudo-Anosov suface automorphisms and algebraic intersection numbers | This is not true.
In fact, if $S$ is a closed surface of genus $\ge 2$ then there exists a pseudo-Anosov homeomorphism $\psi : S \to S$ such that the induced action $\psi : H_1(S,\mathbb Z) \to H_1(S,\mathbb Z)$ is the identity, hence $\hat i(\alpha,\psi^n(\beta)) = \hat i(\alpha,\beta)$ is a constant independent of $n$.
The construction of such $\psi$ is easy. Start with an essential simple closed curve $c$ that separates $S$. Let $d$ be another simple closed curve that separates $S$ such that $d$ is transverse to $c$ and each component of $S - (c \cup d)$ is a polygon with $\ge 4$ sides. Let $\tau_c,\tau_d$ be the Dehn twists around $c$ and $d$, and note that each of them induces the identity on $H_1(S,\mathbb Z)$, and so $\psi = \tau_c \tau_d^{-1}$ also induces the identity. And by a result of Penner, the mapping class of $\psi$ is pseudo-Anosov. |
Evaluate $\lim_{n \rightarrow \infty} \int_0^{+\infty} \frac{e^{-n^2x}}{\sqrt{|x-n^2|}} dx$ | Following the given hint we have that for $n\geq 2$,
i) if $x\in [0,1]$ then
$$\frac{e^{-n^2x}}{\sqrt{|x-n^2|}}\leq \frac{1}{\sqrt{n^2-1}},$$
ii) if $x\in [1,n^2-n]$ then
$$\frac{e^{-n^2x}}{\sqrt{|x-n^2|}}\leq \frac{e^{-n^2}}{\sqrt{n}},$$
iii) if $x\in [n^2+n,+\infty)$ then
$$\frac{e^{-n^2x}}{\sqrt{|x-n^2|}}\leq \frac{e^{-n^2x}}{\sqrt{n}}.$$
By using these estimates, you may show that the three related integrals go to zero. It remains to consider the integral:
$$\int_{n^2-n}^{n^2+n}\frac{e^{-n^2x}}{\sqrt{|x-n^2|}} \,dx\leq e^{-n^2(n^2-n)}\int_{n^2-n}^{n^2+n}\frac{dx}{\sqrt{|x-n^2|}}=e^{-n^2(n^2-n)}2\int_{0}^{n}\frac{dt}{\sqrt{t}}.$$
Can you take it from here?
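As a numerical sanity check (my addition, not part of the hint), the substitution $t=\sqrt{|x-n^2|}$ removes the singularity, since $dx=2t\,dt$ cancels the $1/\sqrt{t}$ factor; the computed values visibly decay to $0$:

    import numpy as np
    from scipy.integrate import quad

    def integral(n):
        # split at x = n^2 and substitute t = sqrt(|x - n^2|) on each side
        left, _ = quad(lambda t: 2 * np.exp(-n**2 * (n**2 - t**2)), 0, n)
        right, _ = quad(lambda t: 2 * np.exp(-n**2 * (n**2 + t**2)), 0, np.inf)
        return left + right

    for n in [1, 2, 3, 4]:
        print(n, integral(n))   # decays roughly like 1/n^3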
$d \cdot A - u u^t$ is positive semidefinite | Pick any $x\in\mathbb R^n$, then
\begin{align}
x^\top(d\cdot A-uu^\top)x&=d\cdot x^\top Ax-(u^\top x)^2\\
&\ge (u^\top A^{-1}u)(x^\top Ax)-(u^\top x)^2\\
&\ge 0
\end{align}
where the first inequality uses the hypothesis $d\ge u^\top A^{-1}u$ (together with $x^\top Ax\ge 0$, since $A$ is positive definite), and the last follows from the well-known result that
\begin{equation}
\sup_{x\in\mathbb R^n}\dfrac{(u^\top x)^2}{x^\top Ax} = u^\top A^{-1}u
\end{equation}
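Here is a small numerical sketch (mine) of both the claim and the sup identity; it assumes the implicit hypotheses that $A$ is symmetric positive definite and $d\ge u^\top A^{-1}u$:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)            # symmetric positive definite
    u = rng.standard_normal(n)

    threshold = u @ np.linalg.solve(A, u)  # u^T A^{-1} u

    # d*A - uu^T should be PSD once d >= u^T A^{-1} u
    for d in [threshold, threshold + 1.0]:
        smallest = np.linalg.eigvalsh(d * A - np.outer(u, u)).min()
        print(d, smallest >= -1e-10)       # True

    # the sup is attained at x = A^{-1} u
    x = np.linalg.solve(A, u)
    print(np.isclose((u @ x) ** 2 / (x @ A @ x), threshold))   # True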
projecting a point onto a vector | The projection is found by asking how much the vector $x$ points in the direction of $v$, and then heading in the direction of $v$ by that amount. The amount is the scalar $\frac{v\cdot x}{|v|}$, and heading that far along the unit vector $\frac{v}{|v|}$ gives
$$proj_v(x)=\frac{v\cdot x}{|v|^2}\,v,$$
which in components is
$$proj_v(x)_i=\sum\limits_{j=0}^J\frac{v^jx_j}{|v|^2}v_i=\sum\limits_{j=0}^J\frac{1}{|v|^2}(v_iv^j)x_j.$$
By inspecting the above expression, we see that
$$\frac{1}{|v|^2}(v_iv^j)$$
can be seen as a matrix with $i$ indexing rows and $j$ indexing columns. Thus
$$P_i^j=\frac{1}{|v|^2}(v_iv^j),$$
which means
$$proj_v(x)=P\cdot x$$
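A short NumPy illustration (my own sketch): build $P=vv^\top/|v|^2$ and check the defining properties of an orthogonal projection:

    import numpy as np

    v = np.array([2.0, 1.0, 2.0])          # |v| = 3
    P = np.outer(v, v) / (v @ v)           # P = v v^T / |v|^2
    x = np.array([1.0, 0.0, 0.0])

    p = P @ x                              # projection of x onto v
    print(p)                               # [4/9, 2/9, 4/9]
    print(np.allclose(P @ P, P))           # True: projecting twice changes nothing
    print(np.isclose((x - p) @ v, 0.0))    # True: residual is orthogonal to v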
How can I find $6N^2 + 3NS + S^2 - 21N -61$, without finding $N$ and $S$ themselves? | As you mentioned we can apply:
$(N+S)^2=N^2+S^2+2NS$
$5D^2-21D-61=0$ $\rightarrow D^2-\frac{21}5 D-\frac {61}5=0$
By Vieta's formulas, $N+S=\frac{21}5$ and $NS=\frac{-61}5$
$N^2+S^2+2NS=(\frac {21} 5)^2=\frac{441}{25}$
$5N^2-21N-61=0$
Summing these two relations and adding $NS=\frac{-61}5$, we get:
$6N^2+3NS+S^2-21N-61=\frac{441}{25}-\frac{61}{5}=\frac{441-305}{25}=\frac{136}{25}$
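A quick SymPy check (mine) confirms the value, and that it does not depend on which root is called $N$:

    from sympy import symbols, solve, expand

    D = symbols('D')
    roots = solve(5*D**2 - 21*D - 61, D)

    # evaluate the target expression for both assignments of the roots to N, S
    for N, S in (roots, roots[::-1]):
        print(expand(6*N**2 + 3*N*S + S**2 - 21*N - 61))   # 136/25 both times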
Find the generating function and the number of integer solutions for $x_1 + x_2 + x_3 + x_4 = r$, where $-3 \leq x_i \leq 3$. | First you need to know why generating functions work, and why negative integers are a problem.
https://brilliant.org/wiki/generating-functions-solving-recurrence-relations/#:~:text=Generating%20Functions,in%20a%20sequence%20of%20numbers&text=an.,used%20for%20solving%20recurrence%20relations.
So your generating function is wrong. Each power of $x$ represents the number of times the object is being taken, hence $1$ or $x^0$ means that the object is not taken.
Since the values can be negative, we have to allow negative powers too.
Let's take only two variables and $r=0$. Listing out all the possible cases, $$\{ (-3,3), (-2,2), (-1,1), (0,0), (1,-1), (2,-2), (3,-3)\}$$ i.e. $7$ solutions.
Our function will be, $$\left[\dfrac{1-x^7}{x^3(1-x)}\right]^2$$
And the constant term here is $7$, as confirmed by Wolfram Alpha:
https://www.wolframalpha.com/input/?i=Series+%28%281-x%5E7%29%2F%28x%5E%283%29%281-x%29%29%29%5E2
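To handle the negative powers programmatically, one can multiply each factor by $x^3$ and read off a shifted coefficient; here is a SymPy sketch (mine) for the two-variable check above and for the original four-variable problem:

    from sympy import symbols, expand, Poly

    x = symbols('x')
    # each variable contributes x^{-3} + ... + x^3; multiplying by x^3
    # turns that factor into 1 + x + ... + x^6
    factor = sum(x**k for k in range(7))

    # two variables, r = 0: the constant term becomes the coefficient of x^6
    p2 = Poly(expand(factor**2), x)
    print(p2.coeff_monomial(x**6))          # 7, matching the list above

    # four variables: the count for a given r is the coefficient of x^{12+r}
    p4 = Poly(expand(factor**4), x)
    for r in [0, 1, 12]:
        print(r, p4.coeff_monomial(x**(12 + r)))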
Proof why the following summation equals the following formula | $$\sum_{j=1}^n k = \underbrace{k + k + \ldots + k}_n = k \times n.$$
Finding an irreducible polynomial | You can take $k=\mathbb{R}$ and $F(x,y)=x^2+y^2$. The only point where it vanishes is $(0,0)$, and you can show $F$ is irreducible by the same method as in my answer to your previous question. If you want $V(F)$ to not be irreducible, you could take $F(x,y)=x^2+y^2+1$, and then $V(F)=\emptyset$. If you don't like considering the empty set to not be irreducible, you could take $F(x,y)=x^2(x-1)^2+y^2$.
Is there a sequence such that each positive real number is a partial limit of it? | Outline: Imagine a flea that starts at $0$, and jumps to $1$. Then it jumps backwards in hops of $1/2$ until it reaches $-2$. Then it jumps forward in hops of $1/4$ until it reaches $4$. Then it jumps backwards in hops of $1/8$ until it reaches $-8$. Then it jumps forward in hops of $1/16$ until it reaches $16$. And so on.
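The point is that the hop size tends to $0$ while the sweeps eventually cover every real number, so the flea passes within the current hop size of any fixed target on every sufficiently late sweep. A small Python sketch (mine) of the first few sweeps:

    def flea(num_sweeps):
        # sweep k runs to (-2)^k in hops of 2^(-k); powers of 2 are exact in floats
        seq, pos = [0.0], 0.0
        for k in range(num_sweeps):
            target, step = float((-2) ** k), 2.0 ** (-k)
            sign = 1.0 if target > pos else -1.0
            while pos != target:
                pos += sign * step
                seq.append(pos)
        return seq

    s = flea(8)
    for t in [3.14159, 0.5, 12.0]:
        print(t, min(abs(x - t) for x in s))   # already small after 8 sweeps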
Real continuous function with attractive fixed point has range subset of domain | If $f$ is differentiable and $f'$ is continuous at $p$ then there
is some interval $I$ containing $p$ in the interior such that
$|f'(x)| < 1$ for $x \in I$. Without loss of generality we can
presume that $I$ is symmetric about $p$, and so has the form
$I=(p-\delta, p+\delta)$ for some $\delta >0$.
If $x \in I$ then the mean value theorem gives
$f(x) = f(p) + f'(\xi) (x-p)$ for some $\xi \in I$ and so
$|f(x)-p| = |f(x)-f(p) | = |f'(\xi)| |x-p| < |x-p|< \delta$ (using $f(p)=p$), and so
$f(x) \in I$.
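A concrete instance (my illustration): $f=\cos$ has an attracting fixed point $p\approx 0.739$ with $|f'(p)|=|\sin p|\approx 0.674<1$, and one can check numerically that $f$ maps $I=(p-\delta,p+\delta)$ into itself:

    import numpy as np
    from scipy.optimize import brentq

    f = np.cos
    p = brentq(lambda t: np.cos(t) - t, 0, 1)   # the fixed point of cos

    delta = 0.5   # |f'(x)| = |sin(x)| < 1 holds on all of I = (p - 0.5, p + 0.5)
    xs = np.linspace(p - delta, p + delta, 1001)
    print(np.all(np.abs(f(xs) - p) < delta))    # True: f maps I into I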
Prove that convex functions satisfy $(1+\lambda)f(y_0) - \lambda f(y_0-r)\le f(y_0+\lambda r) \le (1-\lambda)f(y_0) + \lambda f(y_0+r)$ | The LHS can be written as $\displaystyle\,f(y_0) \le \frac{\lambda}{1+\lambda} f(y_0-r) + \frac{1}{1+\lambda}f(y_0+\lambda r)\,$, which follows from the definition of convexity since:
$\displaystyle \frac{\lambda}{1+\lambda}, \frac{1}{1+\lambda} \in [0,1]\,$ and $\,\displaystyle \frac{\lambda}{1+\lambda} + \frac{1}{1+\lambda} = 1\,$;
$\require{cancel} \displaystyle \frac{\lambda}{1+\lambda}(y_0-\bcancel{r})+\frac{1}{1+\lambda}(y_0+\bcancel{\lambda r})=y_0\,$.
The RHS inequality is just the definition of convexity applied to the points $y_0$ and $y_0+r$ with weights $1-\lambda$ and $\lambda$.
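A quick randomized test of both inequalities for a sample convex function (my sketch; any convex $f$ should pass):

    import numpy as np

    rng = np.random.default_rng(1)
    f = np.exp                              # a convex function

    for _ in range(10_000):
        y0, r = rng.normal(), rng.normal()
        lam = rng.uniform()                 # lambda in [0, 1]
        mid = f(y0 + lam * r)
        assert (1 + lam) * f(y0) - lam * f(y0 - r) <= mid + 1e-9
        assert mid <= (1 - lam) * f(y0) + lam * f(y0 + r) + 1e-9
    print("both inequalities held in all trials")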
Value of $\pi$ in a redefined space | The definition of $\pi$ is not
"the ratio of circumference and diameter of a circle"
One of its possible definitions is
the ratio of circumference and diameter of a circle in the Euclidean geometry.
In hyperbolic geometry (as an example) the circumference of a circle of radius $r$ is
$$2\pi \sinh(r).$$
(Here the parameter of the hyperbolic plane is $1$.)
So the "hyperbolic $\pi$" is not
$$\frac{2\pi \sinh(r)}{2r}=\frac{\pi\sinh(r)}r.$$
Such a "$\pi$" would not even be a constant. It is an Euclidean specialty that the circles are similar and, as a result, we can define $\pi$ as constant. The real surprise is that this rudimentarily Euclidean constant appears so may times without Euclidean geometry. |
Combinatorics Proof of $\sum_{i=0}^n \sum_{j=0}^{i-1} j = {n+1 \choose 3}$ | Hints
How many subsets of $\{0,1,\dots,n\}$ have size $3$?
For how many of those subsets is the largest element equal to $i$, and the second largest equal to $j$?
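If you want to see the identity numerically before proving it, a two-line check (mine):

    from math import comb

    n = 10
    lhs = sum(j for i in range(n + 1) for j in range(i))
    print(lhs, comb(n + 1, 3))   # 165 165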
Is every function that is continuous and bounded on a certain interval integrable on that interval? | In the case of intervals of the type $[a,b)$, you are right: extend $f$ to $[a,b]$ by putting, say, $f(b)=0$. Then this new function is bounded and has at most a single point at which it is discontinuous. So, yes, it will be integrable.
But for intervals of the type $[a,\infty)$ this doesn't work: the integral $\int_a^\infty1\,\mathrm dt$ doesn't converge.
How to find bounds in derivation of Stirling's Formula | Assume $|x|\leq \sqrt{n}$. First, consider the case $x>0$. Observe
\begin{align}
-\int^x_0 \frac{(x-y)\ dy}{(1+y/\sqrt{n})^2} \leq -\int^x_0 \frac{(x-y)}{4}\ dy = -\frac{1}{8}x^2
\end{align}
because
\begin{align}
\left(1+\frac{y}{\sqrt{n}}\right) \leq \left(1+\frac{x}{\sqrt{n}}\right) \leq 2.
\end{align}
Next, consider the case $x<0$. Observe
\begin{align}
-\int^x_0 \frac{(x-y) dy}{(1+y/\sqrt{n})^2}= -\int^0_{-|x|} \frac{(|x|+y) dy}{(1+y/\sqrt{n})^2} \leq - \int^0_{-|x|} (|x|+y) dy = -\frac{x^2}{2}
\end{align}
since
\begin{align}
\left(1+\frac{y}{\sqrt{n}}\right) \leq 1.
\end{align}
Now, suppose $|x|> \sqrt{n}$. Consider the case $x>0$ and observe
\begin{align}
f(x):=n\log\left(1+\frac{x}{\sqrt{n}}\right)-\frac{3}{4}\sqrt{n} x \leq 0
\end{align}
because $f(\sqrt{n}) = n(\log 2- 3/4) \leq 0$ and $f'(x) \leq 0$ for all $x>\sqrt{n}$. Thus, it follows
\begin{align}
n\log\left( 1+ \frac{x}{\sqrt{n}}\right)-\sqrt{n} x< -\frac{1}{4}\sqrt{n}x.
\end{align}
Lastly, when $x<-\sqrt{n}$ we need to be cautious about what Tao actually means, because $\log(1-|x|/\sqrt{n})$ is not defined unless we look at the function $\log||x|/\sqrt{n}-1|$ instead. But then the left-hand side will be positive. So Tao probably made a typo.
Edit: He probably looked at the Taylor expansion to see that
\begin{align}
n\log\left(1+\frac{x}{\sqrt{n}}\right)= \sqrt{n} x - \frac{1}{2}x^2 + \frac{x^3}{3\sqrt{n}}-\ldots = \sqrt{n} x+ \mathcal{O}(x^2).
\end{align}
So when $x$ is small (in his case when $x/\sqrt{n}\leq 1$), it should follow that
\begin{align}
n\log\left(1+\frac{x}{\sqrt{n}}\right)-\sqrt{n} x= \mathcal{O}(x^2).
\end{align}
However, the Taylor approximation doesn't really work when you are far away from the expansion point. But when $x$ is big (in your case $x/\sqrt{n}\gg 1$), it should be clear that
\begin{align}
n\log\left(1+ \frac{x}{\sqrt{n}}\right)-\sqrt{n} x
\end{align}
is dominated by $-\sqrt{n}x$, because $n\log(1+\frac{x}{\sqrt{n}})$ doesn't grow fast enough to overtake $\sqrt{n}x$.
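A grid check (my sketch) of the two bounds derived above, $n\log(1+x/\sqrt n)-\sqrt n\,x\le -x^2/8$ for $|x|\le\sqrt n$ and $\le -\tfrac14\sqrt n\,x$ for $x>\sqrt n$:

    import numpy as np

    n = 100.0
    s = np.sqrt(n)
    g = lambda x: n * np.log1p(x / s) - s * x

    # |x| <= sqrt(n): g(x) <= -x^2/8  (avoid x = -sqrt(n), where g = -inf)
    x1 = np.linspace(-0.99 * s, s, 2001)
    assert np.all(g(x1) <= -x1**2 / 8 + 1e-9)

    # x > sqrt(n): g(x) <= -sqrt(n) * x / 4
    x2 = np.linspace(s, 50 * s, 2001)
    assert np.all(g(x2) <= -s * x2 / 4 + 1e-9)
    print("both bounds hold on the test grid")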
$h^2 = x^2 + (x+1)^4$ | \begin{align}
c^2 &= a^2 + b^2 \\[1em]
h^2 &= x^2 + \left[(x+1)^2\right]^2 \\
h^2 &= x^2 + (x+1)^4 \\
h^2 &= x^2 + x^4 + 4x^3 + 6x^2 + 4x +1\\
h^2 &= x^4 + 4x^3 + 7x^2 + 4x +1\\
h &= \sqrt{x^4 + 4x^3 + 7x^2 + 4x +1}
\end{align}
You probably gave Wolfram Alpha a different task than you intended - you wanted the square root, but you obtained the roots (= solutions) of an equation in the unknowns $h$ and $x$.
Addendum:
If you want Wolfram Alpha to reproduce the computation above, enter into it
solve(h^2 = x^2 + (x+1)^4, h)
(the h at the end tells it which unknown to solve for), and you will obtain
$$h = \pm \sqrt{x^4 + 4x^3 + 7x^2 + 4x +1}$$
If you want the result for a particular $x$, enter its value, e. g.
h^2 = x^2 + (x+1)^4, x=1
to obtain a plot and the solutions
$$\begin{array}{l}{h=-\sqrt{17}, \quad x=1} \\ {h=\sqrt{17}, \quad x=1}\end{array}$$
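If you prefer a programmatic check over Wolfram Alpha, a SymPy equivalent (my sketch):

    from sympy import symbols, solve, expand

    h, x = symbols('h x')
    p = x**2 + (x + 1)**4
    print(expand(p))                        # x**4 + 4*x**3 + 7*x**2 + 4*x + 1
    print(solve(h**2 - p, h))               # the two roots +/- sqrt(expand(p))
    print(solve((h**2 - p).subs(x, 1), h))  # [-sqrt(17), sqrt(17)]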
How many solutions can an onto matrix have and how many can a one-to-one matrix have? | I will interpret your question in the following way:
Let $T:\mathbb{R}^n\rightarrow\mathbb{R}^m$ be a linear map. Then there is an $m\times n$ matrix $M$ such that $Mv = T(v)$ for each $v \in \mathbb{R}^n$.
Let $y \in \mathbb{R}^m$. We want to determine the number of solutions to $Mx = y$.
First, suppose $T$ is one-to-one. (Then we call $M$ a one-to-one matrix.) We first prove there is at most one solution. Let $a, b \in \mathbb{R}^n$. Suppose $Ma = Mb = y$. Then $T(a) = Ma = Mb = T(b)$, so $a=b$ as $T$ is one-to-one. It is also possible that there are no solutions - we can use a small example: Consider the case where $T:\mathbb{R}\rightarrow\mathbb{R}^2$ is defined by $T(x) = \pmatrix{x\\x}$. (Then $M$ is given by $\pmatrix{1\\1}$.) Clearly this is one-to-one, but there are no solutions to the equation $Mx = \pmatrix{1\\0}$.
In summary, if $M$ is one-to-one, there is one solution or no solutions to the equation $Mx = y$ for a given $y$.
With similar reasoning, you can show that if $M$ is onto, there is one solution or infinitely many solutions to the equation $Mx = y$ for a given $y$. (I'll leave this as an exercise for you!)
Note that both of these results combine to give you: If $M$ represents a bijective linear transformation, then there is precisely one solution to the equation $Mx = y$.
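A numerical illustration of the one-to-one case (my sketch, reusing the $M=\pmatrix{1\\1}$ example above):

    import numpy as np

    M = np.array([[1.0], [1.0]])   # one-to-one map R^1 -> R^2 from the answer

    for y in (np.array([1.0, 1.0]), np.array([1.0, 0.0])):
        x, *_ = np.linalg.lstsq(M, y, rcond=None)
        print(y, "has an exact solution:", np.allclose(M @ x, y))
    # (1,1): unique solution x = 1;  (1,0): no solution at all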
Range of Trigonometric function | hint:
Let $a = \cos^2x,\ b = \sin^2x\implies a+b = 1\implies f(a,b) = a^4+b^7=a^4+(1-a)^7=g(a),\ a \in [0,1]$. Setting $g'(a) = 4a^3-7(1-a)^6=0 \implies (\sqrt[3]{4}\cdot a)^3= (\sqrt[3]{7}\cdot (1-a)^2)^3\implies a\sqrt[3]{4}=\sqrt[3]{7}(1-a)^2$. Solving this quadratic equation gives exactly one critical point in $[0,1]$ (the two roots have product $1$, so the other lies outside the interval); since $g''>0$ there, it is a minimum. Then, comparing with the endpoint values $g(0)=g(1)=1$, you can determine the range of $f$.
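Assuming the function in question is $f(x)=\cos^8x+\sin^{14}x$ (which is what the substitution above suggests), a brute-force check (mine) of the resulting range:

    import numpy as np

    xs = np.linspace(0.0, 2.0 * np.pi, 1_000_001)
    f = np.cos(xs) ** 8 + np.sin(xs) ** 14
    print(f.min(), f.max())   # approximately 0.0531, and exactly 1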