title | upvoted_answer |
---|---|
Theory of Quadratic Equations | $A)$ They mean the fraction $f(x)=\dfrac{ax^2-7x+5}{5x^2-7x+a}$ can attain any real value.
$B)$ A quadratic polynomial has a constant sign or is $0$ if and only if its discriminant is non-positive.
$C)$ They use the factorisation $\; X^2-Y^2=(X-Y)(X+Y)$. |
Diagonalizability of a certain class of matrices | The answers to both your questions are "yes", because $A+BP = B(B^{-1}A+P)$ is similar to $B^{1/2}(B^{-1}A+P)B^{1/2}$, which is positive definite and diagonalisable. |
Parameterizing for a Complex Line Integral | Along $\gamma$, you have $z=tz_0$ ($0\leq t\leq 1$).
Then $e^z=e^{tz_0}$ and $dz=z_0\;dt$.
The integral becomes
$$\int_{\gamma}e^z\;dz
=\int_{t=0}^{t=1}e^{tz_0}z_0\;dt$$
It's fine to integrate this directly: it doesn't matter that complex numbers are involved, and you don't have to break everything into real and imaginary parts immediately. In fact, it's much easier to integrate it in this form, since the same rules apply even with complex constants involved. Continuing:
$$=\left.\frac{e^{tz_0}}{z_0}z_0\right|_{t=0}^{t=1}$$
$$=\boxed{e^{z_0}-1}$$
Now at this point you may want to write this as
$$e^{x_0}\cos y_0-1 +ie^{x_0}\sin y_0$$
but I would just leave it in the boxed form above.
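As a quick sanity check, here is a minimal numerical sketch (the endpoint $z_0=2+3i$ is an arbitrary choice, not from the question) comparing a trapezoidal quadrature of $\int_0^1 e^{tz_0}z_0\,dt$ with $e^{z_0}-1$:

```python
import numpy as np

z0 = 2 + 3j                        # arbitrary endpoint for gamma
t = np.linspace(0, 1, 100001)      # fine grid on [0, 1]
vals = np.exp(t * z0) * z0         # the integrand e^{t z0} * z0

# trapezoidal rule, written out to stay version-independent
dt = t[1] - t[0]
numeric = dt * (vals[0] / 2 + vals[1:-1].sum() + vals[-1] / 2)

print(numeric, np.exp(z0) - 1)     # both approx -8.315 + 1.043j
```
|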
Moment-determinacy in multivariate case | Yes. The characteristic function $\phi(s) = {\mathbb E}[\exp(i s\cdot X)]$ is the complex conjugate of the Fourier transform of the density, and the Fourier transform is one-to-one on $L^1$. If $\phi$ is real analytic, it is determined by the coefficients of its series expansion at $s=0$, which are (up to constant factors) the moments. |
Help understanding tensoring of exact sequences | The isomorphism $A \otimes_A M \to M$ is given by $a \otimes m \mapsto am$. Under this isomorphism, the image of $\mathfrak{a} \otimes M$ is $\mathfrak{a}M$. |
Approximation for $2^r\ln \frac{2^r}{2^r-r}$ | Since $\dfrac{2^r}{2^r-r} = 1 + \dfrac{r}{2^r-r}$, and $\log(1+x) \approx x$ for $x$ close to $0$, for large $r$ we have $$ 2^r \log \dfrac{2^r}{2^r-r} \approx 2^r \frac{r}{2^r-r}= r \left(\frac{1}{1-r/2^r}\right)\approx r.$$
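A quick numerical sketch of how fast the ratio approaches $1$ (the sample values of $r$ are arbitrary):

```python
import math

# the ratio (2^r * ln(2^r/(2^r - r))) / r should approach 1 as r grows
for r in [5, 10, 20, 40]:
    val = 2**r * math.log(2**r / (2**r - r))
    print(r, val / r)   # approx 1.087, 1.005, 1.00002, then ~1.0
```
|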
Preimage and Cartesian Product | Your proof of $U \subseteq \bigcap_\alpha \pi_\alpha^{-1}\bigl[\pi_\alpha[U]\bigr]$ is fine.
For the other direction: Suppose that $x \in \bigcap_\alpha \pi_\alpha^{-1}\bigl[\pi_\alpha[U]\bigr]$, then for each $\alpha$ we have $x \in \pi_\alpha^{-1}\bigl[\pi_\alpha[U]\bigr]$, that is, by definition of the preimage, $\pi_\alpha(x) \in \pi_\alpha[U]$. Now, by definition of $U$, we have $\pi_\alpha[U] = U_\alpha$, hence $x_\alpha \in U_\alpha$ for every $\alpha$. Therefore, $x \in U$. |
On a zero-sum game betting market | I think it would need to operate like The Tote in horse racing, where the players buy tickets for an outcome (each ticket worth the same value) and the winners are paid out dividends from the pool. The winners would get a percentage of the pool based on the number of tickets they held. |
Given $u_1$, find orthogonal matrix whose first column is $u_1$? | Let $u_1=(a_1,a_2,\ldots, a_n)$. Take the set of all solutions of
$a_1x_1+a_2x_2+\cdots +a_nx_n=0$. Take a basis for the set of solutions (they form a vector space of dimension $n-1$).
One easy basis for this (assuming $a_1\neq 0$; otherwise first reorder the coordinates so that it is) is:
$v_2=(a_2, -a_1,0,0,\ldots, 0)$
$v_3=(a_3, 0, -a_1,0,0,\ldots, 0)$
$\vdots$
$v_n=(a_n,0,0,\ldots, 0, -a_1)$.
Now apply Gram-Schmidt to $\{u_1, v_2,v_3, \ldots, v_n\}$ to get an orthonormal basis. Making these basis vectors the columns of a matrix provides the orthogonal matrix you are looking for.
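A minimal numerical sketch of this construction (the example unit vector $u_1$ is arbitrary and assumed to have $a_1\neq 0$; numpy's QR factorization produces the same orthonormalization as Gram-Schmidt, up to column signs):

```python
import numpy as np

u1 = np.array([0.5, 0.5, 0.5, 0.5])     # arbitrary unit vector with a_1 != 0
n = len(u1)

# columns: u1, then v_2, ..., v_n as above (v_k has a_k first and -a_1 in slot k)
M = np.zeros((n, n))
M[:, 0] = u1
for k in range(1, n):
    M[0, k] = u1[k]
    M[k, k] = -u1[0]

Q, _ = np.linalg.qr(M)                   # orthonormalize the columns
if Q[:, 0] @ u1 < 0:                     # QR fixes columns only up to sign
    Q = -Q
print(np.allclose(Q.T @ Q, np.eye(n)))   # True: Q is orthogonal
print(np.allclose(Q[:, 0], u1))          # True: first column is u1
```
|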
O(n) as embedded submanifold | I think you are almost done. As you said, it suffices to show that $\mathrm{Id}$ is a regular value of $f$, i.e. for each $A\in O(n)$, $f_*:T_A M_{n\times n}\to T_{\mathrm{Id}}Sym_n$ is surjective, where $T_pX$ denotes the tangent space of $X$ at $p$. Note that $T_A M_{n\times n}$ (resp. $T_{\mathrm{Id}}Sym_n$) can be identified with $M_{n\times n}$ (resp. $Sym_n$) and, as you know, $f_*(X)=XA^t+AX^t$. Then you only need to verify that for any $S\in Sym_n$, there exists $X\in M_{n\times n}$ such that $XA^t+AX^t=S$. For instance, you may choose $X=\dfrac{1}{2}SA$. |
Why wolfram alpha assumed $ x>0$ as a domain of definition for $x^x $? | $$e^{x\log x}$$
is defined where the exponent $x\log x$ is defined. Even though $x$ itself ranges over the whole real line, the function $\log x$ is defined only on the open set $]0,\infty[$. Hence the natural domain is that interval, i.e. $x>0$.
EDIT (The natural domain of $x^x$ without using the form $e^{x\log x}$).
The function $b^x$ is defined if and only if $b>0$. Now you may consider the function $b(x)^x$, where $b$ depends on $x$; in this case too, $b(x)$ must be a positive function (this includes the case where $b(x)$ is a positive constant). In particular $b(x)$ may be the identity function, that is $b(x)=x$. Hence $x=b(x)>0$, and the natural domain for $x^x$ is $]0,\infty[$. |
Why the radius of curvature of a curve is independent of the choice of the coordinate axes. | The book gives you a true statement but the last formula is questionable.
Talking about a curve's curvature is like talking about the radius of a circle: under any coordinate transformation, the circle is still the same circle, and so is its radius.
However, interchanging $x$ and $y$ in the formula may change its sign.
Assuming you can parametrize the curve $x=x(t), y=y(t)$
$$\rho=\frac{(1+(\frac{dy}{dx})^2)^{\frac{3}{2}}}{\frac{d^2y}{dx^2}}=\frac{((x')^2+(y')^2)^{\frac{3}{2}}}{y''x'-x''y'}$$
So interchanging x and y will give you the same absolute value but with different sign. |
Probability of a two pair hand in a random 5 card poker hand | Pick two values (e.g. $2$s and Kings): ${13\choose 2}$
Pick two suits for each value: ${4\choose 2},{4\choose 2}$
Pick the value of the 5th card (not a $2$ or a King): ${11\choose1}$
Pick the suit of the 5th card (any): ${4\choose1}$.
Hence ${13\choose2}{4\choose 2}{4\choose 2}{11\choose1}{4\choose1}=123552$. So probability $=123552/{52\choose 5}= 4.75\%$
You double-counted on the first step.
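For reference, a one-liner check of the count and the probability (a sketch using Python's math.comb):

```python
from math import comb

ways = comb(13, 2) * comb(4, 2) * comb(4, 2) * comb(11, 1) * comb(4, 1)
print(ways)                 # 123552
print(ways / comb(52, 5))   # 0.047539... ~ 4.75%
```
|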
Find the fourier series of a special square wave function (find my mistake) | Your wave is defined for $t\in(0,T_1),(T_1,T_2)$ but it's undefined over $(T_2,T)$, which might be where you're getting confused. I've found the coefficients for the wave $f(t)=q$ for $t\in(0,T_1)$, $f(t)=z$ for $t\in(T_1,T)$, with period $T$. I'm not sure where your mistake is but perhaps you can compare your coefficients with mine.
Define $f$ as the original function, then scale the $x$ co-ordinate by $\frac{T}{2\pi}$, so that the wave has a full period inside $(-\pi,\pi)$. Label this re-scaled function $g$. Then, $g$ takes three values over the interval $(-\pi,\pi)$; either $z,q,z$ or $q,z,q$, depending on the values of $T_1$ and $T_2$. The wave has two discontinuities in the interval $(-\pi,\pi)$; one at $t=0$ and one at $\alpha=2\pi\left(\frac{T_1}{T}-\left\lfloor\frac{T_1}{T}\right\rfloor\right)$. If $T_1<T_2$, then $\alpha\in(0,\pi)$, otherwise $\alpha\in(-\pi,0)$.
$$\begin{aligned}
&f(t)=\begin{cases}
q:&\frac{t}{T}-\lfloor\frac{t}{T}\rfloor<\frac{T_1}{T}-\lfloor\frac{T_1}{T}\rfloor
\\
z:&\text{else}
\end{cases}
\\
&g(t)=f\left(\frac{T}{2\pi}t\right)
\\
&T_2>T_1\quad\Rightarrow\quad g(t)=\begin{cases}
z:&t\in(-\pi,0)
\\
q:&t\in(0,\alpha)
\\
z:&t\in(\alpha,\pi)
\end{cases}
\end{aligned}$$
Find the coefficients of $g(t)$ by integrating over the intervals $(-\pi,0),(0,\alpha),(\alpha,\pi)$.
$$\begin{aligned}
a_0&=\frac{1}{2\pi}\left[z\pi+\alpha q+(\pi-\alpha)z\right]
\\
a_n&=\frac1\pi\left(z\int_{-\pi}^0 \cos(nt)+q\int_{0}^\alpha \cos(nt)+z\int_{\alpha}^\pi \cos(nt)\right)
\\
&=\frac{1}{n\pi}\left[{z\sin(\pi n)}+{(q-z)\sin(\alpha n)}\right]
\\
b_n&=\frac{1}{n\pi}\left[q-z+(z-q)\cos(\alpha n)\right]
\\
\end{aligned}$$
Figure (omitted here): the square wave, in black, for $q=4.4,z=-3.6,T_1=3.85,T=12.9$, together with its Fourier series, in red, up to and including $n=13$.
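A small numerical check of these coefficients (a sketch using the parameter values quoted above; the partial sum is truncated at $n=13$, and the test points are arbitrary):

```python
import numpy as np

q, z, T1, T = 4.4, -3.6, 3.85, 12.9
alpha = 2 * np.pi * (T1 / T - np.floor(T1 / T))

def g(t):                        # the rescaled square wave on (-pi, pi)
    return np.where((t > 0) & (t < alpha), q, z)

def partial_sum(t, N=13):
    a0 = (z * np.pi + alpha * q + (np.pi - alpha) * z) / (2 * np.pi)
    s = a0
    for n in range(1, N + 1):
        an = (z * np.sin(np.pi * n) + (q - z) * np.sin(alpha * n)) / (n * np.pi)
        bn = (q - z + (z - q) * np.cos(alpha * n)) / (n * np.pi)
        s = s + an * np.cos(n * t) + bn * np.sin(n * t)
    return s

t = np.array([-2.0, 0.5, 2.5])
print(g(t))                # [-3.6  4.4 -3.6]
print(partial_sum(t))      # roughly the same, up to truncation wiggles
```
|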
Proof of Levy's zero-one law | Let $Y_n = E(X| \mathcal{F}_n)$. Then $Y_n$ is a martingale, and
$$\sup_n E(|Y_n|) = \sup_n E(|E(X| \mathcal{F}_n)|) \leq \sup_n E(E(|X||\mathcal{F}_n)) = E(|X|) $$
where the bound in the middle is due to the conditional Jensen inequality.
Now the heavy artillery: by Doob's convergence theorem, $Y_\infty := \lim_{n \to \infty} Y_n$ exists almost surely. And since $|Y_n| \leq E(|X| \mid \mathcal{F}_n)$ (again by conditional Jensen), the sequence is uniformly integrable, so we also conclude $L^1$ convergence and thus convergence in probability.
You can find Doob's convergence theorem in Williams' "Probability with Martingales", Thm. 11.5. It is a rather important result, based on an upcrossing ("band") argument, and it can be extended to continuous-time martingales. |
Elementary proof of $f>0$ implies $\int f>0$? | The statement is true because one can show there exists $\xi \in (a,b)$ such that $$\int_a^b f(x) \, dx \geq f(\xi) (b-a)$$
In fact, assume for contradiction that $$f(x) > \frac 1{b-a} \int_a^b f(y)\,dy$$ for all $x \in (a,b)$.
Changing the values of $f$ at $a$ and $b$ if necessary (which doesn't alter the value of the integral), we have a new function $\hat{f}$ such that $$\hat{f}(x) > \frac 1{b-a} \int_a^b \hat{f}(y)\,dy$$ for all $x \in [a,b]$
Remember now the theorem discussed in this question.
Since $\hat{f}$ is continuous at some $c \in (a,b)$, we can find $\varepsilon > 0\,$ so that $$\hat{f}(x) > \frac 1{b-a} \int_a^b \hat{f}(y)\,dy + \varepsilon$$ for all $x \in (c-\varepsilon,c+ \varepsilon) \subset (a,b)$.
Then, if we consider the partition $P=\{a,c-\varepsilon,c+\varepsilon,b\}$, we obtain $$\int_a^b \hat{f}(x)\,dx \geq L(\hat{f},P) \geq \int_a^b \hat{f}(x)\,dx + 2\varepsilon^2 > \int_a^b \hat{f}(x)\,dx$$ which is absurd.
The paper Rodrigo Lopez Pouso, Mean Value Integral Inequalities , Real Anal. Exchange Volume 37, Number 2, (2011), 439-450 is worth reading not only as the source of this proof. |
Is the following derivative application statement true or false? | Take any bijection $f:\mathbb R\to\mathbb R$ and consider its inverse. Then $f\circ f^{-1}$ is the identity function, which is obviously differentiable, but $f$ need not be differentiable at all. |
waiting for patterns? | We can start at the point when the first $T$ appears.
The probability that $T$ occurs again is $\frac{1}{2}$ , and then $TTH$ comes first surely.
The probability that $HT$ comes next is $\frac{1}{4}$ , then the game is finished as well.
If $HH$ appears, we have to wait for the next $T$ and are again at the starting position.
The occurrence of $T$ has a probability twice the probability of the occurrence of $HT$, hence the chance that $TTH$ wins must be twice the chance that $THT$ wins, giving the result $p=\frac{2}{3}$.
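A Monte Carlo sketch of the race between the two patterns (the trial count is arbitrary):

```python
import random

def tth_wins():
    # flip a fair coin until TTH or THT appears; report which came first
    s = ""
    while True:
        s += random.choice("HT")
        if s.endswith("TTH"):
            return True
        if s.endswith("THT"):
            return False

trials = 200_000
p = sum(tth_wins() for _ in range(trials)) / trials
print(p)   # approx 0.667 = 2/3
```
|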
Knuth's arrow up notation again | I will use Stirling's approximation, which isn't exact, but one can show that this doesn't affect the result. First, I define $\text{slog}_{10}(z)$ as the inverse of tetration, extended to real numbers by Kneser's method, and I take the slog of the OP's sequence of "a" numbers.
$$\begin{aligned}
\text{slog}_{10}(a(1)) &= \text{slog}_{10}(3!) = \text{slog}_{10}(6) \approx 0.852230828 \\
\text{slog}_{10}(a(2)) &= \text{slog}_{10}(6!) = \text{slog}_{10}(720) \approx 1.56653651 \\
\text{slog}_{10}(a(3)) &= \text{slog}_{10}(720!) \approx \text{slog}_{10}(2.60121894 \times 10^{1746}) \approx 2.62213791 \\
\text{slog}_{10}(a(4)) &= \text{slog}_{10}((720!)!) = 1+\text{slog}_{10}\left(\log_{10}((720!)!)\right) \\
&= 1+\text{slog}_{10}\left((720!)(\ln(720!)-1)/\ln(10) + O(\ln(720!))\right) \\
&\approx 1 + \text{slog}_{10}(4.54167855 \times 10^{1749}) \approx 3.62224425
\end{aligned}$$
The second-to-last line makes use of Stirling's approximation for the log of the factorial, $\ln(z!) = z(\ln(z)-1)+O(\ln(z))$. In this case the $O(\ln(z)) \approx 4000$ term is negligible when added to a number with 1749 decimal digits, leading to an overall error term of $O(1/a(3))$.
$$\begin{aligned}
\text{slog}_{10}(a(5)) &= \text{slog}_{10}(720!!!) = 1+\text{slog}_{10}(\log_{10}(720!!!)) \\
&= 2+\text{slog}_{10}(\log_{10}(\log_{10}(720!!!)))
\end{aligned}$$
For $a(5)$, Stirling's approximation can be used for $\log(\log(720!!!))$ as well:
$$\begin{aligned}
\log_{10}(\log_{10}(z!)) &= \log_{10}\left(z(\ln(z)-1)/\ln(10) + O(\ln(z))\right) \\
&= \log_{10}(z) + \log_{10}(\ln(z)-1) - \log_{10}(\ln(10)) + O(1/z) \\
\log_{10}(\log_{10}(720!!!)) &= \log_{10}(720!!) + \log_{10}(\ln(720!!)-1) - \log_{10}(\ln(10)) + O(1/720!!)
\end{aligned}$$
Here the Stirling error term may seem large, $\approx 10^{1750}$, until you realize it is being added to a number with $10^{10^{1750}}$ digits. In the last equation, the $\log_{10}(\ln(720!!)-1) - \log_{10}(\ln(10))\approx 1750$ term isn't significant either, because it is being added to a number with 1749 digits. So for $n\geq 5$ we have the same equation as for $a(4)$, accurate to more than 1740 decimal digits: $\log_{10}(\log_{10}(720!!!)) \approx \log_{10}(720!!)+1750 \approx 4.54 \times 10^{1749}$. The slog of both numbers will also be the same, because the log of both numbers is the same, accurate to more than 1740 decimal digits. Finally, this results in the following approximation.
$$\text{slog}_{10}(a(n)) \approx 1+ \text{slog}_{10}(a(n-1)) + O\left(\frac{1}{a(n-2)}\right)$$
This in turn justifies the answer to the OP's question.
$$ 10 \uparrow \uparrow (n-1) < a(n) < 10 \uparrow \uparrow (n)$$ |
Is $f(x)=x^2\sin\frac{1}{x^2}$ is bounded? | HINT:
For $x\ne 0$ we have $$ |\sin(1/x^2)|\leq 1/x^2,$$ so $|x^2\sin(1/x^2)|\leq 1$. |
Using algebra and calculus i need to solve this written question for x | Move $B$ $2$km closer to the river, in a direction perpendicular to the river, and ignore the river. Then you won't even need to use calculus.
Specifically, the shortest distance between $A$ and $B$ is now the straight-line distance
$$\sqrt{17^2+12^2}=\sqrt{433}$$
and the value of $x$ will be given by similar triangles:
$$x=17\times\frac{9}{12}=\frac{51}{4}\ .$$
Moving $B$ back where it was adds the $2$ km length of the bridge for a total of $2+\sqrt{433}$ km. |
Finding a trigonometric polynomial | Writing the answer here to close the question. Thanks to @5pm for the hint.
Since $ z^k F(z)$ is a polynomial, $z = 0$ is the only pole of $F$ and all $\gamma_j = 0$. This makes the last term in the question a polynomial. |
$K(x)$ is a simple extension of $K(u)$ and find the dimension of it | What you have done is that you established a surjective ring homomorphism $K(u)[t]/(p) \rightarrow K(x)$. This is a good starting point.
It remains to show that $p$ is irreducible in $K(u)[t]$, which would imply that the quotient $K(u)[t]/(p)$ is a field, hence the above ring homomorphism must be an isomorphism, and $K(x) \simeq K(u)[t]/(p)$ is by definition a simple extension (and obviously of degree $4$).
To show that $p$ is irreducible, there are many possible ways. The following is a more general result:
Proposition: Let $f, g \in K[t]$ be coprime polynomials. Then the polynomial $f + ug \in K(u)[t]$ is irreducible.
Proof: It is clear that $f + ug$, as a polynomial in $K[u][t]$, is irreducible, since it has degree $1$ in $u$, hence in any factorization $f + ug = pq$, one of $p$ and $q$ (say $p$) must have degree $0$ in $u$, i.e. is in fact a polynomial in $t$. It then follows that $p$ is a common factor of $f$ and $g$, which must be a constant.
The result then follows from Gauss's lemma for UFD. |
Proof of the formula for the probability of throwing heads after n throws? | I believe your question is not well formulated. The probability of a head after you have seen $n-1$ heads is still $p$, if you assume independence. On the other hand, the probability of getting $n$ heads when throwing the coin $n$ times is $p^n$ (before the experiment is performed).
The formula you presented is one used in the context of a Bernoulli random variable $X$, which assumes value $X=1$ (often called success) with probability $p$, and $X=0$ (often called a failure) with probability $(1-p)$.
The formula gives us the probability of getting a success after $n-1$ failures, for a size-$n$ random sample of $X$. Yes, in the coin toss example you can take "heads" as a success and "tails" as a failure, but in this case $p=(1-p)$, and the result is just $p^n$. It will be different if your coin's probability of heads is $\neq 0.5$. |
Hessian matrix of $(||x||-b)^2$ | Let's use a convention wherein matrices, vectors, and scalars are represented by uppercase latin, lowercase latin, and greek letters, respectively.
Define the variables
$$\eqalign{\
\beta &= b \\
\lambda^2 &= \|x\|^2 = x^Tx \quad\implies\quad \lambda\,d\lambda = x^Tdx \\
}$$
Write the objective function in terms of these variables, then calculate the differential and gradient.
$$\eqalign{\
\phi &= (\lambda-\beta)^2 = \lambda^2-2\beta\lambda+\beta^2 \\
d\phi &= 2(\lambda-\beta)\,d\lambda = \frac{2(\lambda-\beta)}{\lambda}(x^Tdx) \\
g=\frac{\partial\phi}{\partial x} &= \frac{2(\lambda-\beta)}{\lambda}x \\
}$$
Now calculate differential and gradient of $g$, i.e. the Hessian.
$$\eqalign{
dg
&= \frac{2(\lambda-\beta)}{\lambda}\,dx
+ \frac{2x\,d\lambda}{\lambda}
- \frac{2(\lambda-\beta)x\,d\lambda}{\lambda^2}
\\
&= \frac{2(\lambda-\beta)}{\lambda}I\,dx
+ \frac{2x(x^Tdx)}{\lambda^2}
- \frac{2(\lambda-\beta)x(x^Tdx)}{\lambda^3}
\\
&= \frac{2}{\lambda^{3}}\Big((\lambda-\beta)\lambda^2I
+ \lambda xx^T - {(\lambda-\beta)xx^T}\Big)dx
\\
&= \left(\frac{2(\lambda-\beta)\lambda^2I + 2\beta xx^T}{\lambda^3}\right)dx \\
\\
H=\frac{\partial g}{\partial x}
&=\left(\frac{2(\lambda-\beta)\lambda^2I+2\beta xx^T}{\lambda^3}\right) \\
&= 2I + \frac{2\beta}{\|x\|}\left(\frac{xx^T}{\|x\|^2}-I\right) \\
}$$
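A quick finite-difference check of the final expression for $H$ (a sketch; the test point $x$ and the value of $\beta$ are arbitrary):

```python
import numpy as np

beta = 1.5
phi = lambda x: (np.linalg.norm(x) - beta) ** 2

x = np.array([0.8, -0.3, 1.1])
lam = np.linalg.norm(x)
H_formula = 2 * np.eye(3) + (2 * beta / lam) * (np.outer(x, x) / lam**2 - np.eye(3))

# central finite differences for the Hessian
h, n = 1e-5, 3
H_fd = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
        H_fd[i, j] = (phi(x + e_i + e_j) - phi(x + e_i - e_j)
                      - phi(x - e_i + e_j) + phi(x - e_i - e_j)) / (4 * h * h)

print(np.allclose(H_formula, H_fd, atol=1e-4))   # True
```
|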
technique for finding minima of quadratic surface with an $xy$-term? (without calculus) | Hint: You can use substitution $p = x + 2y$, $q = y$ to get rid of $xy$ term and solve it like the case before. |
maths problem with absolute numbers and complex conjugates | Your question is somewhat confusing - for instance, it is not clear why you say that $|x|^2+|y|^2 =2$, but one thing is certain: $|2|$ is the same number as 2, so it can never be the case that "... is only true if the 2 is $|2|$". |
First Derivative Test on Parametric Equations | You have analyzed the behavior of $ \frac {dy}{dx}$ as $t \to -1$
What you really want to analyze is $ \frac {dy}{dx}$ as $$x=t^{5}-5t^3-20t+7$$ approaches $31$.
See what happens to $ \frac {dy}{dx}$ as $x$ approaches $31$ from the right or the left. |
Reliability of linear regression to predict future | XKCD explains it perfectly in this comic. |
Finding GCD of two elements over a quadratic extension of integers | Note that $\Bbb{Z}[\frac{1+\sqrt{13}}{2}]$ is Norm Euclidean (see this post and the references given there), so we can apply the Euclidean algorithm to find the GCD.
$$\begin{align}
5+2\sqrt{13}&= 2(1+\sqrt{13}) +3 \\
1+\sqrt{13}&= -1\cdot 3 +(4+\sqrt{13}) \\
3 &= (4-\sqrt{13})(4+\sqrt{13}) + 0
\end{align}$$
So we see $4+\sqrt{13}$ is the GCD (up to associates), with $\dfrac{1+\sqrt{13}}{4+\sqrt{13}}=-3+\sqrt{13}$ and $\dfrac{5+2\sqrt{13}}{4+\sqrt{13}}=-2+\sqrt{13}$.
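These division steps can be verified with exact arithmetic; a minimal sketch representing $a+b\sqrt{13}$ as an integer pair and dividing via the conjugate:

```python
from fractions import Fraction

def div(x, y):
    # (a + b sqrt13) / (c + d sqrt13), multiplying by the conjugate of y
    a, b = x
    c, d = y
    n = c * c - 13 * d * d            # norm of y (up to sign)
    return (Fraction(a * c - 13 * b * d, n), Fraction(b * c - a * d, n))

print(div((1, 1), (4, 1)))   # (Fraction(-3, 1), Fraction(1, 1)), i.e. -3 + sqrt(13)
print(div((5, 2), (4, 1)))   # (Fraction(-2, 1), Fraction(1, 1)), i.e. -2 + sqrt(13)
```
|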
How to derive $\mathbb E(e^X)$ if $X$ is normally distributed? | As always, $$\mathbb E[f(X)]=\int_{\mathbb R} f(x)\mu_X(\,\mathrm d x),$$
where $\mu_X$ is the measure on $\mathbb R$ induced by $X$. In the case where $X\sim \mathcal N(\mu,\sigma ^2)$,
$$\mathbb E[f(X)]=\frac{1}{\sigma \sqrt{2\pi}}\int_{\mathbb R}f(x)e^{-\frac{(x-\mu)^2}{2\sigma ^2}}\,\mathrm d x.$$
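For $f(x)=e^x$ this integral can be evaluated in closed form (complete the square in the exponent) to give $e^{\mu+\sigma^2/2}$. A numerical sketch comparing quadrature with that formula (the parameters $\mu,\sigma$ are arbitrary):

```python
import math
from scipy.integrate import quad

mu, sigma = 0.7, 1.3    # arbitrary parameters

integrand = lambda x: math.exp(x) * math.exp(-(x - mu)**2 / (2 * sigma**2)) \
                      / (sigma * math.sqrt(2 * math.pi))
numeric, _ = quad(integrand, -50, 50)
print(numeric, math.exp(mu + sigma**2 / 2))   # both approx 4.688
```
|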
Why does $\mbox{Irr}(\alpha,K)$ have distinct roots in $N$? | Let $m(x)$ be the minimal polynomial of $\alpha$ over $K$. If it has a repeated root ($\alpha$ or another root; it makes no difference), then that multiple root $\beta$ gives rise to a common factor $(x-\beta)$ of $m(x)$ and $m'(x)$.
But by Euclid's algorithm $r(x):=\gcd(m(x),m'(x))\in K[x]$. This is thus a non-trivial factor of $m(x)$ contradicting the assumption that $m(x)$ is irreducible in $K[x]$ as a minimal polynomial. Unless $r(x)=m(x)$, which in turn would imply $m'(x)=0$, which in turn implies that all the zeros of $m(x)$ have multiplicity $>1$. |
An example of a compact topological space which is not the continuous image of a compact Hausdorff space? | This extended abstract by Künzi and van der Zypen seems of interest. It mentions in passing (remark 3, page 3) a reference
Stone, A.H.: Compact and compact Hausdorff, in: Aspects of Topology, pp. 315–324, London Math. Soc., Lecture Note Ser. 93, Cambridge Univ. Press, Cambridge, 1985.
where it is supposedly shown that a compact space need not be the continuous image of a compact $T_2$ space, based on a theorem
If $Y$ is KC and compact, $f: X \to Y$ is onto and continuous with $X$ compact Hausdorff, then $Y$ is Hausdorff.
I assume, but I have no access to the reference, that this theorem is shown in the Stone paper. I did find the (not so hard) proof in this paper (Lemma 1).
Then $\alpha(\mathbb{Q})$, the Alexandroff extension of $\mathbb{Q}$, being a well-known example of a compact KC space that is not Hausdorff (see Counterexamples in Topology), must be an example, based on this theorem.
Also the van Douwen example mentioned in this paper of a countable anti-Hausdorff (all non-empty open sets intersect) compact KC space (also sequential and US) is such an example. |
Importance of Derivatives | You have a function for profit that looks like this...
The profit is on the Y-axis. As you can see, it is maximal where the graph is at its peak (its turning point); this is where $dP/dx=0$, as you suggested. So to carry on with the calculations you need to differentiate the function for $P$ and find where the derivative is $0$; this gives the optimum order size.
Hope this helps! |
If $f \circ f$ is affine and $f$ is area-preserving, must $f$ be affine? | As @fedja suggests in the comments, we construct an example as follows in polar coordinates:
$$
f:(r,\theta) \mapsto (\sqrt{1-r^2},-\theta).
$$
This maps $U \to U$ when $U=\{ z \in \mathbb{C} \mid 0.6 < \lvert z \rvert < 0.8\}$.
$f$ is an involution, i.e. $f^2=\mathrm{Id}$, which is affine; but $f$ itself is not affine. |
General Steinitz exchange lemma- Proof Help | You are right to be stuck: this proof is incorrect. For instance, suppose $B=\{b_0,b_1,b_2,\dots\}$. You can then one-by-one replace elements of this basis: first replace $b_0$ by $b_0-b_1$, then replace $b_1$ by $b_1-b_2$, then replace $b_2$ by $b_2-b_3$, and so on. At each finite stage of this, you still have a basis, but in the limit you get the set $L=\{b_0-b_1,b_1-b_2,b_2-b_3,\dots\}$ which does not span the entire space. This then gives a chain in $\mathcal{L}$ that has no upper bound.
Here is a correct proof. Let $C\subseteq B$ be a maximal subset of $B$ with the property that $C\cup L$ is linearly independent (such a $C$ exists by Zorn's lemma). By maximality, the span of $C\cup L$ must contain every element of $B$ and this is all of $V$, so $C\cup L$ is a basis. To finish the proof, we just need to know that $|B\setminus C|=|L|$ so we can choose an injection $j:L\to B$ with image $B\setminus C$. To prove this, note that $|B\setminus C|$ and $|L|$ are both the dimension of the vector space $V/\operatorname{span}(C)$, since the images of $B\setminus C$ and $L$ are each bases for this vector space.
[Note that the fact that any two bases of an infinite-dimensional vector space have the same cardinality can be proven quite easily without the Steinitz exchange lemma. If $B$ and $B'$ are both infinite bases, then each element of $B$ is in the span of finitely many elements of $B'$, so you need at most $|B|\cdot \aleph_0=|B|$ elements of $B'$ to span all of $B$. Since $B'$ is a basis, this means $|B'|\leq |B|$. Similarly, $|B|\leq|B'|$ as well, so $|B|=|B'|$.] |
boolean algebra minterm and maxterm expansion | ab+ac'
ab(c+c')+a(b+b')c'
abc+abc'+abc'+ab'c'
abc+abc'+ab'c'
use a truth table with 3 variables a,b,c
find followings
abc = 111 - 7
abc' = 110 - 6
ab'c'= 100 -4
Sum m (4, 6, 7)
therefore, maxterm M (0,1,2,3,5) |
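The same bookkeeping as a quick truth-table enumeration (a sketch; rows are indexed by the binary value of abc):

```python
minterms, maxterms = [], []
for i in range(8):
    a, b, c = (i >> 2) & 1, (i >> 1) & 1, i & 1
    f = (a and b) or (a and (1 - c))      # ab + ac'
    (minterms if f else maxterms).append(i)

print("Sum m", tuple(minterms))       # (4, 6, 7)
print("Product M", tuple(maxterms))   # (0, 1, 2, 3, 5)
```
|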
Value of $\sin (2^\circ)\cdot \sin (4^\circ)\cdot \sin (6^\circ)\cdots \sin (90^\circ) $ | Using the identity
$$\prod_{k=1}^{n-1}\sin \left( \frac{k\pi}{n}\right) = \frac{n}{2^{n-1}} \tag{1}$$
Putting $n=180$, it gives
$$ \left(\sin (1^\circ)\sin (2^\circ)\dots \sin (89^\circ)\right)^2= \frac{180}{2^{179}} \tag{2}$$
The value of
$$\boxed{ \sin(2^{\circ})\sin(4^{\circ}) \dots \sin(90^{\circ}) = \sqrt{\frac{180}{2^{179}}}\div \sqrt{\frac 1 {2^{89}}} = \sqrt{\frac{180}{2^{90}}}} \tag{3}$$
The required value is in agreement with the numerically calculated value.
The proof of identity $(1)$ can be found at the end of this pdf.
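A numerical check of the boxed value (a straightforward floating-point product):

```python
import math

prod = 1.0
for deg in range(2, 91, 2):
    prod *= math.sin(math.radians(deg))

print(prod, math.sqrt(180 / 2**90))   # both approx 3.81e-13
```
|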
finding the range of a radical function. | As $\sqrt x\ge0$ and $x$ is nonzero and finite,
WLOG $\sqrt x=y^2,y\ne0$
$y^2+\dfrac1{y^2}=(y-1/y)^2+2\ge?$
Alternatively let $u=y^2+\dfrac1{y^2}$
Rearrange to form a quadratic equation in $y^2$
As $y^2$ is real and $y^2>0$ use the discriminant $\ge0$ |
When to use the multiplication rule in probability versus when to use a tree? | There are three ways to use the same transportation mode twice: Bus twice, cab twice, or train twice. So the probability of using, say, the bus twice is indeed $\frac{1}{9}$, but there is also the option of using the cab twice, or the train twice, so you have to consider those options as well.
$P(\text{same transportation twice}) = P(BB) + P(CC) + P(TT)$
$P(\text{same transportation twice}) = \left(\frac{1}{3} \cdot \frac{1}{3}\right) + \left(\frac{1}{3} \cdot \frac{1}{3}\right) + \left(\frac{1}{3} \cdot \frac{1}{3}\right)$
$P(\text{same transportation twice}) = 3 \cdot \frac{1}{9}$
$P(\text{same transportation twice}) = \frac{1}{3}$
Remember that "or" implies addition, whereas "and" implies multiplication. |
Putnam 2009 A4 clarification | Since $\gcd{(3n + 2, 3)} = 1$ we conclude that $b^2=3n+2$ and
$a^2−ab=3$.
There is another possibility: $b^2=-(3n+2)$ and $a^2−ab=-3$. And these conditions can be satisfied, for example, with $a=1$, $b=4$ and $n=-6$. So this proof is incorrect. |
Output assigning in Maple inside for loop | Suppress all output from the loop by using a colon statement terminator on the end do, and then use print inside the loop to prettyprint exactly what you want shown.
for k from 1 to nops(sols) do
x[k] := unapply(c[k]*exp(lambda[k]*t).v[k],t);   # build the k-th solution as a function of t
print('x[k](t)' = x[k](t));                      # the unevaluation quotes keep the left-hand side unevaluated
end do:
A derivative of the inverse of the function | $$
f(x)=x^3-3x^2-1 \ \ \ \implies \ \ \ f'(x)= 3x^2-6x \ \ \ \ \ \ \implies \ \ \ \ f'(3) = 9
$$
Let
$$
f^{-1} = g
$$
so that $$ g(f(x))=x\ \ \ \ (\because\ \ g= f^{-1})$$
Differentiating with respect to $x$ and using the chain rule:
$$
g'(f(x))\cdot f'(x)=1
\implies g'(f(x)) = \frac{1}{f'(x)}
$$
Putting $x=3 \ \implies \ f(3)=-1$, so the derivative of $g$ at $x=-1$ is
$$g'(-1)=\frac{1}{f'(3)}=\frac{1}{9}.$$
Now, since
$$g = f^{-1},$$
we conclude
$$ \left(f^{-1}\right)'(-1)=\frac{1}{9}.$$
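As a quick numerical sanity check (a sketch; the bisection bracket $[2.5,3.5]$ works because $f$ is increasing there, and the step size $h$ is arbitrary):

```python
f = lambda x: x**3 - 3 * x**2 - 1

def f_inv(y, lo=2.5, hi=3.5):
    # invert f by bisection; f is increasing on [2.5, 3.5] since f'(x) = 3x(x-2) > 0
    for _ in range(80):
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

h = 1e-6
print((f_inv(-1 + h) - f_inv(-1 - h)) / (2 * h))   # approx 0.1111 = 1/9
```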
|
False proof of 0=1 using Laurent series | The equality $\dfrac{1}{z}\dfrac{1}{1-\frac{1}{z}} + \dfrac{1}{1-z}
= \dfrac{1}{z} \sum \limits_{n=0}^{\infty}\dfrac{1}{z^n} + \sum \limits_{n=0}^{\infty}z^n$ only holds if $z\not \in \{0,1\}$, $\left|\frac 1 z\right|<1$ and $|z|<1$, i.e., never. |
Suppose injective function, show there exists a linear map. | For $w \in \text{Im } T$, by injectivity we know there is a unique $v \in V$ such that $Tv = w$, set $Sw = v$. Check $S$ is linear, and it pretty clearly inverts $T$ on the image. |
How to solve differential equation $dy/dx = y^2/(1+y^2)$ by inegration | Indeed, in a fashion: simply flip the derivative using the chain rule
$$
\frac{\mathrm{d}x}{\mathrm{d}y} = \frac{1+y^2}{y^2} = \frac{1}{y^2} +1,
$$
integrating we have $x$ as a function of $y$.
$$
x(y) = -\frac{1}{y} + y + c.
$$ |
Convergence of tent function | In order to expand a little on my comment, picture the graphs of the first few functions $x_n$.
We notice that $x_n(0)=0$ for all $n$, and if $r\in\left(0,1\right]$, then $x_n(r)=0$ for every $n$ large enough that the support of $x_n$ lies to the left of $r$ (for the usual tent supported on $\left[0,\frac2n\right]$, any $n\geq \lfloor 2/r\rfloor+1$ works, where $\lfloor\cdot\rfloor$ is the floor function). Hence $\displaystyle \lim_{n\to +\infty}x_n(r)=0$ for all $r\in\left[0,1\right]$ and the sequence $\{x_n\}$ converges pointwise to the zero function.
But the convergence is not uniform. An intuitive way to see that is that, for example $\varepsilon=\frac 12$, we get for each $r$ a smallest $n_0(r)$ for which $|x_n(r)|\leq \frac 12$ for $n\geq n_0(r)$, but this $n_0(r)$ cannot be chosen independently of $r$. A more formal argument is the following: since $x_n\left(\frac 1n\right)=1$ we have $\sup_{r\in\left[0,1\right]}|x_n(r)-0|=1$ and the convergence cannot be uniform on $\left[0,1\right]$ (but of course it is on each interval $\left[a,1\right]$ with $0<a<1$, since for $n$ large enough we have $x_n(r)=0$ for all $r\in\left[a,1\right]$). |
Quickly finding $\theta$ from $\sin\theta=\frac{3}{\sqrt{73}}$. (In particular, why can we say that $\theta<37^\circ$?) | You're evaluating $\arctan\tfrac38\approx20.56^\circ$, which needs a calculator, but $\arctan\tfrac{1}{\sqrt{3}}=30^\circ$ is a tighter upper bound than your teacher requested. @Arthur obtained it another way. |
Rudin Analysis, Theorem 2.36: Is there a generalization? | Consider the metric space $\mathbb R$ with the usual metric. The open intervals $(0,1/n)$ for $n \in \mathbb N$ comprise a collection of subsets of $\mathbb R$. The intersection of every finite subcollection is nonempty [choose the greatest $n$ for which $(0,1/n)$ is in the finite subcollection and then $1/(n+1)$ is in the intersection of the finite subcollection] but the intersection of the entire collection is empty [if it contained $x$ we would have $0<x<1/n$ for all $n$, violating the Archimedean property].
The unbounded intervals $[n, \infty)$ for $n \in \mathbb N$ comprise another collection of subsets of $\mathbb R$ where the intersection of every finite subcollection is nonempty [choose the greatest $n$ for which $[n, \infty)$ is in the subcollection, and the intersection is $[n, \infty)$] but the intersection of the entire collection is empty [if it contained $m$ we would have $m>n$ for all $n$, which is absurd].
In summary, the conclusion of the theorem does not hold for collections of subsets of $\mathbb R$ that are not compact (closed and bounded). |
Cylinder dimensions based on volume and (height:width) ratio | The proportion between $W$ and $H$ is $2:3$, so $3W=2H$.
$$V=\pi r^2 h=\pi\times\left(\frac{W}{2}\right)^2\times H=\frac\pi4\times W^2\times\frac32W=\frac{3\pi}{8}W^3$$
So given the volume, you can work out the width of a can with this proportion.
Alternative method: You already know the volume, width and height for a smaller version of this cylinder. Now you want width and height for a cylinder with a different volume, but the same proportions, i.e. a scaled up version of the small cylinder. Say the small cylinder has volume $v$, height $h$, width $w$ and the large one has volume $V$, height $H$, width $W$. We have that $V=\lambda v$ for some number $\lambda$ which you can work out. $V$ is a volume, $W$ is a length, so if $V$ has scaled up by $\lambda$, then $W$ would have scaled up by $\lambda^{1/3}$. This is true because volume is proportional to length cubed. It can also be shown to be true by the method above. So we have that $$W=\lambda^{1/3}w\\H=\lambda^{1/3}h$$
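A small sketch of the first method (the volume $V=1000$ is an arbitrary example; units are whatever the volume is given in):

```python
import math

V = 1000.0                          # given volume (arbitrary example)
W = (8 * V / (3 * math.pi)) ** (1 / 3)
H = 3 * W / 2                       # the 2:3 width:height ratio
print(W, H)                         # approx 9.47, 14.20
print(math.pi * (W / 2) ** 2 * H)   # approx 1000.0, recovering V
```
|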
If $f: \mathbb{R}^2 \to \mathbb{R}$ is an harmonic function, the second partial derivatives are zero at a local maximum. | Hint: Suppose $f$ has a local max at $(a,b).$ Then $f_x(a,b)=0.$ Suppose $f_{xx}(a,b) > 0.$ Go back to one variable to see $x\to f(x,b)$ has a strict local min at $x=a,$ contradiction. |
Gambler's ruin (calculating probabilities--hitting time) | HINT:
Let $\pi_i = \mathbb{P}\left(\tau_N < \tau_0 \vert X_0 = i \right)$, then $\pi_0 = 0$, and $\pi_N = 1$. Conditioning on the first transition we get $\pi_i = \pi_{i-1} (1-p) + \pi_{i+1} p$. Now solve this recurrence equation.
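A numerical sketch (solving the recurrence as a linear system for arbitrary $N$ and $p\neq\frac12$, and comparing with the known closed form $\pi_i=\frac{1-(q/p)^i}{1-(q/p)^N}$ with $q=1-p$):

```python
import numpy as np

N, p = 10, 0.6
q = 1 - p

# pi_i = q*pi_{i-1} + p*pi_{i+1}, with boundary conditions pi_0 = 0, pi_N = 1
A = np.eye(N + 1)
b = np.zeros(N + 1)
b[N] = 1.0
for i in range(1, N):
    A[i, i - 1] -= q
    A[i, i + 1] -= p
pi = np.linalg.solve(A, b)

closed = (1 - (q / p) ** np.arange(N + 1)) / (1 - (q / p) ** N)
print(np.allclose(pi, closed))   # True
```
|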
Square is homotopy Cartesian if horizontal maps are weak equivalences | This actually is trivial. The factorization $X_2\to U\to Y_2$ with the first map a trivial cofibration and the second map a fibration does it. I don't know how to delete this, but I would be happy if someone who does did; I think it's not a valuable question. |
Atiyah and Macdonald's proof of the existence of the tensor product | By definition, the element $x\otimes y$ is the class of $(x,y)\in C$ in the quotient $C/D$. Since by construction $(x+x',y)-(x,y) -(x',y)$ is in $D$, the class of this element is zero in the quotient.
But the class of this element is by definition $(x+x') \otimes y - x \otimes y - x' \otimes y$. So $(x+x') \otimes y - x \otimes y - x' \otimes y = 0$, which means $(x+x') \otimes y = x \otimes y + x' \otimes y$.
The same thing happens for the other relations. |
Dividing zeros out of the denominator | The point does not exist, as division by zero is not defined. Wolfram Alpha just gives you the limit, as you can see in this example too. However, the limit exists, so you can cancel $(x-2)$ from the numerator like you said; because you are taking a limit, you always have $x \neq 2$, so cancelling the term is a valid step and yields the function without the discontinuity. |
Indefinite integral calculation with sine and cosine | $$\dfrac1{2u^2}-\dfrac1{4u^4}=\dfrac{2(1+t^2)-(1+t^2)^2}4=\dfrac{1-t^4}4$$ where $t=\tan x$; this differs from $-\dfrac{\tan^4x}4$ by a constant.
In general, we can choose $\cos x=u$ for $$\int\sin^mx\cos^{2n+1}x\ dx$$ where $n$ is an integer |
Triangle that deals in terms of a and b? | The unspecified angles in both the middle and bottom left triangles are $$(180-a-b)^\circ.$$ Hence, the unspecified angle in the top triangle is $$180^\circ-2(180-a-b)^\circ=(2a+2b-180)^\circ.$$ That angle together with $b^\circ$ and $c^\circ$ adds up to $180^\circ,$ so $$c=360-2a-3b.$$
There are other ways you could go about it, too. |
Linear Algebra, proving subset is a subspace | You are not given what $W$ contains. And that is the whole point; you don't need to know what exactly $W$ contains, as long as it contains anything, and the given condition holds, you already can tell that it is a subspace, even though you don't know what it is.
Indeed, this is exactly where the power of such theorems comes from: They allow you to make conclusions even if you know very little about the set in question.
Also note that $\mathbf u$ and $\mathbf v$ are quantified, "for all $\mathbf u$ and $\mathbf v$ in $W$". That is, the statement is not about two specific, given vectors, but it means that if you draw any two vectors from $W$, the condition must hold (including, but not limited to, the case that you select the very same vector twice).
So your task is to prove that every non-empty subset $W$ that fulfils the given condition is a subspace of $V$.
(Note that I added “non-empty” to the claim because otherwise the statement you are supposed to prove is not true, as the empty set is not a vector space; if the task was actually given as stated, you might want to give that counterexample, and then proceed to prove the statement for non-empty sets). |
Can I do this? $A^c - B^c$ | $$\begin{align} x \in A^c - B^c & \iff x \notin A \land x\notin (B^c) \\ \\ & \iff x\notin A \land \lnot(x\notin B) \\ \\ & \iff x \notin A \land x \in B \\ \\ &\iff x \in B - A\end{align}$$
$$\therefore A^c - B^c = B - A$$
So yes, it is certainly "safe" and indeed correct, to say that $ A^c - B^c = B - A$. However, your logic is not correct. You treated your sets just like variables, using multiplication as we do on real numbers. The problem is, $A$ and $B$ are sets, and $2B$ doesn't make sense here (or at best, you have not defined what you mean by $2B$). Exactly how do you define $2\times \text{a set}?$ For that matter, how do you define $B + B$?
In any case, $A^c \not\equiv B - A$.
$A^c$ means all elements not in $A$, not just the elements that are in $B$ but not in $A$. So the set-minus operation on sets is not usually equivalent to taking the complement of a set. |
How to handle dice probability? ie, how much more likely would 3 six sided dice give a higher sum than 3 four sided dice? | For your first question, six-sided die vs. eight-sided, make a $6$ by $8$ table, with values $1, 2, 3, 4, 5, 6$ in one direction and $1, 2, 3, 4, 5, 6, 7, 8$ in the other direction.
The resulting $48$ small squares in the table determine $48$ possible outcomes of the two dice. You can then see for how many squares the six-sided die beats the eight-sided one; how many are ties; and for how many the eight-sided beats the six-sided. Assuming the dice are fair, you can then divide by $48$ to get the desired probabilities.
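The same enumeration in code, which also covers the title's 3d6-vs-3d4 question (a brute-force sketch over all equally likely outcomes):

```python
from itertools import product

def beat_prob(sides_a, n_a, sides_b, n_b):
    """P(sum of n_a dice with sides_a exceeds sum of n_b dice with sides_b)."""
    wins = total = 0
    for a in product(range(1, sides_a + 1), repeat=n_a):
        for b in product(range(1, sides_b + 1), repeat=n_b):
            total += 1
            wins += sum(a) > sum(b)
    return wins / total

print(beat_prob(6, 1, 8, 1))   # 1d6 vs 1d8: 15/48 = 0.3125
print(beat_prob(6, 3, 4, 3))   # 3d6 vs 3d4: roughly 0.8
```
|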
Can any finite variance distribution be transformed into the normal distribution? | "No" to the question as originally phrased, without the clause "the functions to be transformed are differentiable over their entire domain". Continuous distributions can be so transformed, but not discontinuous ones. That is, for a random variable $X$, if there exists any number $a$ such that $P(X=a)>0$ then there is no transformation $g$ such that the random variable $g(X)$ is normally distributed. Finite variance or not. Your examples (exponential, uniform) are continuous, but other well-known distributions (binomial, geometric, Poisson) are not.
The edited version of the question is imprecise. I assume the functions that must be differentiable over their domains are the cumulative distribution functions. This has the effect of ruling out the OP's two examples (uniform and exponential) whose distribution functions are not differentiable at $0$, but does lead to an easy answer: yes.
The important concept here is continuous random variable, which means one whose cumulative distribution function (or "cdf") is continuous. It is a theorem that if $F_X$ is a continuous cdf and $F_Y$ is any cdf, one can generate a random variable $Y$ with cdf $F_Y$ by transforming a random variable $X$ with cdf $F_X$, by taking $Y=g(X)$, where $g(t)=F_Y^{-1}(F_X(t))$ is a monotone function. (See this and that for wikipedia evidence.) If, as the OP wants, $F_X$ is assumed differentiable, and the target $F_Y$ is the standard normal cdf $\Phi$, then $g$ will be differentiable, too.
The condition that $X$ is continuous is not the same as its probability density function being continuous. It is the same as requiring that $P(X=a)=0$ for all real $a$.
Another answer to this problem asserts that if the density function of $X$ has one hump, so must the density of $g(X)$ for any continuous $g$. This is easily seen to be false. If $X$ has the Cauchy distribution, with density $1/(\pi(1+x^2))$, and $g(x)=x+a\sin(x)$, for $a=9/10$, the density of $Y=g(X)$ will have many humps, near $x$ values that are multiples of $\pi$. This is because this particular $g$ has the effect of periodically puckering up and stretching the $x$-axis, which shows up as ripples in the graph of the derivative of $g$.
The density functions for $X$ and $Y$ are related as $f_Y(y)=f_X(x)/g'(x)$ with $y=g(x)$; when $g'(x)$ is small $f_Y(y)$ gets a hump. (It is easy to simulate this on a computer and inspect the resulting histograms: they are quite striking.)
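A sketch of the transformation $g=F_Y^{-1}\circ F_X$ in the exponential-to-normal case (using scipy's distribution objects; the sample size and seed are arbitrary):

```python
import numpy as np
from scipy import stats

x = stats.expon.rvs(size=100_000, random_state=0)   # exponential sample
y = stats.norm.ppf(stats.expon.cdf(x))              # g = Phi^{-1} composed with F_X
print(y.mean(), y.std())                            # approx 0.0 and 1.0
print(stats.kstest(y, "norm").pvalue > 0.01)        # consistent with N(0,1)
```
|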
Are death-birth stochastic process double stochastic? | Suppose the state space is $\mathbb N\cup\{0\}$ with transition probabilities
\begin{align}
\mathbb P(X_{n+1}=i+1\mid X_n = i) =: p_i,\quad i=0,1,\ldots\\
\mathbb P(X_{n+1}=i\mid X_n = i) =: r_i,\quad i=0,1,\ldots\\
\mathbb P(X_{n+1}=i-1\mid X_n = i) =: q_i,\quad i=1,2,\ldots\\
\end{align}
For this to give a valid transition matrix $P$, we must have $r_0+p_0=1$ and $q_i+r_i+p_i=1$ for $i\geqslant 1$. If $\nu(0)=1$ and $\nu(n) = \prod_{i=1}^n\frac{p_{i-1}}{q_i}$, then $\nu$ is an invariant measure for $P$, and $P$ has a stationary distribution iff $\sum_{n=0}^\infty \nu(n):=C<\infty$. In this case, there is a unique stationary distribution $\pi$ given by $\pi = \frac1C\nu$.
For $P$ to be a doubly stochastic matrix, we must have $r_0+q_1=1$ and $p_{i-1}+r_i+q_{i+1}=1$ for $i\geqslant 1$. Now, to have $\sum_{n=0}^\infty \nu(n)<\infty$, we must have
$$
\liminf_{n\to\infty} \frac{p_n}{q_{n+1}}<1
$$
(consider the case where the $p_n$ and $q_n$ are constant and so the sum is a geometric series). However, from $r_0+p_0=1$ and $r_0+q_1=1$ we get $p_0=q_1$, and from $q_i+r_i+p_i=1$ and $p_{i-1}+r_i+q_{i+1}=1$ we get $p_{i-1}+q_{i+1} = p_i+q_i$. Now, by induction we see that $p_{i-1}=q_i$, and hence $p_i=q_{i+1}$, for all $i\geqslant 1$. This forces $\nu(n)=\prod_{i=1}^n\frac{p_{i-1}}{q_i}=1$ for every $n$, so $\sum_n\nu(n)=\infty$ and a doubly stochastic $P$ of this form cannot have a stationary distribution (it is not positive recurrent). |
Find Volume V bounded by surface | Let $x=r\sin^3\alpha\cos^3\beta$, $y=r\sin^3\alpha\sin^3\beta$
and $z=r\cos^3\alpha$.
Thus, $J=9r^2\sin^5\alpha\cos^2\alpha\sin^2\beta\cos^2\beta$ and the volume is (with the polar angle $\alpha$ running over $[0,\pi]$)
$$9\int_{0}^1r^2dr\int_{0}^{\pi}\sin^5\alpha\cos^2\alpha\, d\alpha\int_{0}^{2\pi}\sin^2\beta\cos^2\beta\, d\beta=\frac{4\pi}{35}$$
About the Jacobian, see here: https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant
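A Monte Carlo sketch of this volume (assuming, as the substitution suggests, that the surface is the astroidal sphere $|x|^{2/3}+|y|^{2/3}+|z|^{2/3}=1$; the sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000
pts = rng.uniform(-1, 1, size=(n, 3))            # bounding cube of volume 8
inside = (np.abs(pts) ** (2 / 3)).sum(axis=1) <= 1
print(8 * inside.mean(), 4 * np.pi / 35)         # both approx 0.359
```
|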
Probability - Removing a ball and replacing with the same colour. | For the first question, there are two alternatives:
First ball is green, second is violet.
First ball is violet, second is violet.
$\frac{g}{v+g} \cdot\frac{v}{g+v+1} + \frac{v}{v+g} \cdot\frac{v+1}{g+v+1}$
For the second question use Bayes' theorem.
So you basically need three values. One you have from the previous part ($P(second=v)$). The others are $P(first=v)=\frac{v}{g+v}$ and
$P(second=v|first=v) = \frac{v+1}{v+g+1}$
Now we are done: $P(first=v|second=v) = \frac{P(s=v|f=v)\cdot P(f=v)}{P(s=v)} = \frac{v+1}{g+v+1}$, since the sum from the first part simplifies to $P(s=v)=\frac{v}{g+v}$. |
A question about complex analysis? | Hint: The Cauchy-Riemann equations must hold. That is: $$\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y}\\\frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x}$$
So, one thing we can immediately conclude is that $$v(x,y)=\int\frac{\partial u}{\partial x}\,dy+C+g(x)$$ for some constant $C$ and some function $g$ such that $g(0)=0$. (Why?) |
Continuous function with infinitely many zeros | This starts with a description of an answer, and then drills down to details. This question is more interesting than many.
The original question's answer seems to be "no".
One can construct, inductively, a sequence of intervals $I_k=[a_k,a_{k+1}]$ (with $a_k$ increasing to $1$), and values for $f$ on $I_k$, and a sequence of $n_k\to\infty$, so that the value of $\int_0^1 \exp(n_kx)f(x)dx$ is pretty much determined by the values of $f$ on $[0,a_k]=I_0\cup I_1\cup \cdots\cup I_{k-1}$ and not affected much by subsequent tinkerings of $f$ on $[a_{k},1]=I_{k}\cup\cdots$. This can be done in such a way that $f$ is continuous and $|f(x)|\le1-x$ for all $x$.
As EricWofsey comments, in effect, one can choose $a_{k+1}$ so large (that is, close to $1$) that $(1-a_{k+1})\exp(n_k)$ is as small as you want.
The upshot is a continuous $f$ such that $$\liminf_{k\to\infty}\quad (-1)^k \int_0^1 \exp(n_k x) f(x)dx = 1,$$
disproving the original conjecture.
Overall the construction I am describing is like the standard examples of bounded sequences without Cesaro averages. You know, 10 zeros followed by 100 ones followed by 1000 zeros followed by 10000 ones...
Now for some details. I apologize for the intricate notation.
It is possible I have made a typo somewhere.
It is more than possible that there is a simpler way to arrange the calculation or (better yet) a simpler way to see the result.
Let $\Lambda_\alpha(x)=\max\left(0,\frac{1-\alpha}{4}-\left|x-\frac{\alpha+1}{2}\right|\right)$ be the piecewise linear function whose graph has an isosceles triangular spike centered at $(1+\alpha)/2$, of width $(1-\alpha)/2$ and height $(1-\alpha)/4$. Note that $\text{supp } \Lambda_\alpha = [(3\alpha+1)/4,(\alpha+3)/4]$, and that $|\Lambda_\alpha(x)|\le |1-x|$. The final form of $f$ will be
$$f(x)=\sum_{k\ge0} \epsilon_k (-1)^k \Lambda_{a_k}(x),$$ where the $\epsilon_k$ are in $[0,1]$. The function $f$ satisfies $|f(x)|\le|1-x|$ on $[0,1]$.
We will construct $a_k, n_k, \epsilon_k$ inductively.
At stage $k$ we will have specified $f$ on the interval $[0,a_k]$, and the inductive step delivers a formula for $f$ on the interval $I_k=[a_k,a_{k+1}]$, thus extending the definition of $f$ to $[0,a_{k+1}]$.
Start with $k=0$ and $a_0=0$. Let $L>0$ be a constant, such as $L=1$.
At inductive stage $k$, choose $n$ so large that $0<\epsilon_k<1$, where $$\epsilon_k = \frac { L - (-1)^k\int_0^{a_k}f(x)e^{nx}dx}{\int_0^1\Lambda_{a_k}(x)e^{nx}dx},$$
and, if $k>0$, also $n\ge1+n_{k-1}$. This is possible because the integral in the denominator has larger exponential growth rate (at least $(3a_k+1)/4$) than that in the numerator, which is at most $a_k$. Denote the chosen $n$ by $n_k$. Finally, and this is the key point identified by EricWofsey in a comment, choose $a_{k+1}$ to be very close to $1$, as in $$a_{k+1} = \max((a_k+3)/4, 1-\exp(-2n_k)).$$ Note $a_{k+1}<1$.
The restriction of $f$ to $I_k=[a_k,a_{k+1}]$ is $(-1)^k\epsilon_k\Lambda_{a_k}$.
It is easy to see from this that $n_k\to\infty$ and that $a_k\to 1$. Checking that $f$ is continuous is routine.
By construction, $\int_0^{a_{k+1}}f(x)e^{n_kx}dx = (-1)^k L$ and this differs from $\int_0^1 f(x) e^{n_kx}dx$ by $\int_{a_{k+1}}^1 \exp(n_k)dx = O(\exp(-n_k))$. |
How to prove function continuous on a point? | Hint: a simple $\epsilon / \delta$ thing...
details:
Let $\epsilon > 0$.
As $\lim_{c^+} f = f(c)$: there is $\delta>0$ such as
$$
c<x<c+\delta \implies |f(x)-f(c)|\le \epsilon
$$
As $\lim_{c^-} f = f(c)$: there is $\delta'>0$ such as
$$
c>x>c-\delta' \implies |f(x)-f(c)|\le \epsilon
$$
Then if $|x-c|\le \min(\delta,\delta')$:
$$
|f(x)-f(c)|\le \epsilon
$$
hence $$\lim_c f= f(c)$$ |
Quaternion Group as Permutation Group | Follow Cayley's embedding: write down the elements of $Q_8=\{1,-1,i,-i,j,-j,k,-k\}$ as an ordered set, and left-multiply this set successively by each element of the group; each left multiplication yields a permutation. For example, multiplication from the left by $i$ sends the ordered set $(1,-1,i,-i,j,-j,k,-k)$ to $(i,-i,-1,1,k,-k,-j,j)$, which corresponds to the permutation $(1324)(5768)$. Etc. Can you take it from here? So it can be done, and the statement on the WolframMathWorld - Permutation Groups page must be wrong.
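Here is a sketch of that computation (the multiplication table for the units is hard-coded, and positions are 1-indexed to match the cycle notation above):

```python
order = ["1", "-1", "i", "-i", "j", "-j", "k", "-k"]

# products of the unsigned units 1, i, j, k
table = {("1","1"):"1", ("1","i"):"i", ("1","j"):"j", ("1","k"):"k",
         ("i","1"):"i", ("i","i"):"-1", ("i","j"):"k", ("i","k"):"-j",
         ("j","1"):"j", ("j","i"):"-k", ("j","j"):"-1", ("j","k"):"i",
         ("k","1"):"k", ("k","i"):"j", ("k","j"):"-i", ("k","k"):"-1"}

def mul(x, y):
    sx, x0 = (-1, x[1:]) if x.startswith("-") else (1, x)
    sy, y0 = (-1, y[1:]) if y.startswith("-") else (1, y)
    z = table[(x0, y0)]
    sz, z0 = (-1, z[1:]) if z.startswith("-") else (1, z)
    return z0 if sx * sy * sz == 1 else "-" + z0

for g in order:
    perm = [order.index(mul(g, x)) + 1 for x in order]
    print(g, perm)   # e.g. i -> [3, 4, 2, 1, 7, 8, 6, 5], i.e. (1324)(5768)
```
|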
High-level linear algebra book | What follows is a substantially edited version of a 25 August 2001 k12.ed.math post of mine.
There are typically 3 different levels of linear algebra that can be found at American colleges and universities. [I'm restricting myself to America because I don't know much about the situation in other countries.]
1. The first level is what is often called elementary linear algebra. This is usually taken by 2nd year undergraduates after they have completed the second or third semester of the standard elementary calculus sequence. However, depending on the college, quite a few 1st year and/or 3rd-4th year students might also be in the class. [In each of the two linear algebra classes I taught during the Spring 2000 semester, over 50% of the students were 1st year students.] I assume this is not the level you're interested in and I'm only including it for completeness.
2. The second level is a course typically taken by upper level math, physics, and (sometimes) engineering students. At some colleges and universities, students may elect to skip the first level linear algebra course and begin with this level. [This was the case where I did most of my undergraduate work. We used Hoffman/Kunze and, when I took the course, there were 5 2nd year undergraduate students (including me) in the course and none of us had taken the lower level linear algebra class.] Texts that would be appropriate for this level are:
Paul R. Halmos, Finite-dimensional vector spaces
Kenneth Hoffman and Ray Kunze, Linear Algebra
Gilbert Strang, Linear Algebra and its Applications
Sheldon Axler, Linear Algebra Done Right
3. The third level is graduate level linear algebra. In many universities the Hoffman/Kunze text above is used (or at least it used to be used), but in these cases the first three chapters are usually covered very quickly (if at all) in order to devote more time to the 2nd half of the text. It is also common for graduate level linear algebra to be incorporated into the 2-3 semester graduate algebra sequence. For example, when I was a student two of the more widely used algebra texts were Lang's Algebra and Hungerford's Algebra, and each contains a substantial amount of linear algebra. Listed below are a couple of "stand-alone" texts for this level. I've had Jacobson since the early to mid 1980s and Brown's book since 1989 or 1990. Brown's book is definitely more modern, but if you're serious about the material, you should at least look at a copy of Jacobson's book (in most U.S. college and university libraries) from time to time. Without knowing anything more about you than what you wrote in your question, I would guess that Brown's book is the best for what you're looking for.
William C. Brown, A Second Course in Linear Algebra
Nathan Jacobson, Lectures in Abstract Algebra. Volume 2. Linear Algebra [See also Dieudonne's Bulletin of the AMS review of Jacobson's book.]
(TWO MORE "THIRD LEVEL" TEXTS, ADDED A YEAR LATER)
Werner Hildbert Greub, Linear Algebra
Steven Roman, Advanced Linear Algebra |
Why are functors exact if they preserve all short exact sequences? | Proceeding as you say, you have a diagram
$$\begin{matrix}
\ddots & & \vdots & & & & & & \\
& \searrow & \downarrow& & & & & & \\
& & A_{i-1}& & & & 0& & \\
& & \pi \downarrow & \searrow^d & & & \downarrow & & \\
0 & \rightarrow & K_i & \overset{\iota}{\longrightarrow} & A_i & \overset{\pi}{\longrightarrow} & K_{i+1} & \rightarrow & 0 \\
& & \downarrow & & & \searrow^d & \iota \downarrow & & \\
& & 0 & & & & A_{i+1} & & \\
& & & & & & \downarrow & \searrow & \ \\
& & & & & & \cdots & & \ddots \\
\end{matrix}$$
where you know rows and columns are exact and you want to deduce the diagonal is exact. (Sorry for rotating $45^{\circ}$ from your image, but I had to do it to get the diagram with the tools available here.)
Since $\pi$ is epic (think surjective), we know that $$\mathrm{Image}(A_{i-1} \overset{d}{\longrightarrow} A_{i}) = \mathrm{Image}(A_{i-1} \overset{\pi}{\longrightarrow} K_i \overset{\iota}{\longrightarrow} A_i) =\mathrm{Image}(K_i \overset{\iota}{\longrightarrow} A_i) $$
Since $\iota$ is monic (think injective), we know that
$$\mathrm{Ker}(A_{i} \overset{d}{\longrightarrow} A_{i+1}) = \mathrm{Ker}(A_{i} \overset{\pi}{\longrightarrow} K_{i+1} \overset{\iota}{\longrightarrow} A_{i+1}) =\mathrm{Ker}(A_{i} \overset{\pi}{\longrightarrow} K_{i+1}).$$
Since the horizontal row is exact, we know $\mathrm{Image}(K_i \overset{\iota}{\longrightarrow} A_i)=\mathrm{Ker}(A_{i} \overset{\pi}{\longrightarrow} K_{i+1})$. We deduce that $\mathrm{Image}(A_{i-1} \overset{d}{\longrightarrow} A_{i}) = \mathrm{Ker}(A_{i} \overset{d}{\longrightarrow} A_{i+1})$, as desired. |
One iteration of forward Gauss-Seidel followed by one iteration of backward Gauss-Seidel | Once more and a bit less messy (without vectors dancing randomly around matrices and missing terms in the last equation):
One forward Gauss-Seidel gives
$$
x_{k+1/2}=x_k+(D-L)^{-1}r_k
$$
with the residual
$$
\begin{split}
r_{k+1/2}&=b-Ax_{k+1/2}=b-A(x_k+(D-L)^{-1}r_k)=r_k-A(D-L)^{-1}r_k
\\&=r_k-(D-L-U)(D-L)^{-1}r_k=U(D-L)^{-1}r_k.
\end{split}
$$
Then the backward Gauss-Seidel follows like
$$
\begin{split}
x_{k+1}&=x_{k+1/2}+(D-U)^{-1}r_{k+1/2}
=x_k+[(D-L)^{-1}+(D-U)^{-1}U(D-L)^{-1}]r_k
\\&=x_k+(D-U)^{-1}[(D-U)+U](D-L)^{-1}r_k
=x_k+(D-U)^{-1}D(D-L)^{-1}r_k.
\end{split}
$$
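A quick numerical verification of the combined identity (a sketch; the test matrix is an arbitrary diagonally dominant one):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.random((n, n)) + n * np.eye(n)       # diagonally dominant test matrix
b = rng.random(n)
x = rng.random(n)

D = np.diag(np.diag(A))
L = -np.tril(A, -1)                          # so that A = D - L - U
U = -np.triu(A, 1)

r = b - A @ x
x_half = x + np.linalg.solve(D - L, r)                       # forward sweep
x_full = x_half + np.linalg.solve(D - U, b - A @ x_half)     # backward sweep

x_direct = x + np.linalg.solve(D - U, D @ np.linalg.solve(D - L, r))
print(np.allclose(x_full, x_direct))         # True
```
|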
Prove that any closed irreducible subset of $\mathrm {Spec} (R)$ has a unique generic point. | Identify $A$ with $\mathrm{Spec}(R/I)$, where $A=V(I)$. Then $A$ is irreducible if and only if the nilradical $\bigcap_{p\in \mathrm{Spec}(R/I)}p$ of $R/I$ is prime; equivalently, $V(I)$ is irreducible iff $\sqrt I$ is prime. Since every prime in $V(I)$ contains $\sqrt I$, we deduce that $\bigcap_{p\in V(I)}p=\sqrt I$, and $\sqrt I$ is the generic point. |
Integration by parts. Cyclic effect. | Taking your approach it is possible to avoid the cyclic phenomenon and get something useful. As you surely know, integration by parts works by choosing a part of the integrand to be $u$ and the rest to be $dv$, so the integral is $\left[uv\right]_0^l-\int_0^lv\,du$. The problem is that the first time you choose $u=f(s)$ and $dv=\sin(s)ds$, and then you switch this selection, which produces the going back. If you solve $\int_0^lf'(s)\cos(s)ds$ by setting $u=f'(s)$ and $dv=\cos(s)ds$, and keep going on this way, you are going to get $$\int_0^lf(s)\sin(s)ds=\left[-f(s)\cos(s)+f^{(1)}(s)\sin(s)+f^{(2)}(s)\cos(s)-f^{(3)}(s)\sin(s)-f^{(4)}(s)\cos(s)+\ldots\right]_0^l=\left[-\cos(s)\sum_{n=0}^\infty(-1)^{n}f^{(2n)}(s)+\sin(s)\sum_{n=0}^\infty(-1)^{n}f^{(2n+1)}(s)\right]_0^l$$ which is not a particularly beautiful answer, but that's what you'll get with integration by parts. You can also try using the Taylor expansion of $f$. Hope that helps! |
Prove that $\left\{\neg(\forall x \in M:a(x))\right\}\Leftrightarrow \left\{\exists x \in M: \neg a(x)\right\}$ | $$\left\{\neg(\forall x \in M:a(x))\right\}\Leftrightarrow
\left\{\exists x \in M: \neg a(x)\right\}\tag{*}$$
Consider two statements in "human" language:
(A) For every $x$ in $M$, $a(x)$ is true;
(B) There exists $x$ in $M$ such that $a(x)$ is not true.
What $(*)$ says is that (A) is not true if and only if (B) is true.
To prove $(*)$, you need to show two directions:
Suppose (B) is true, show that (A) cannot be true.
Suppose (A) is not true; show that (B) must be true.
Prove this Generalizing AM-GM inequality | We can use the Vasc's EV-method. See here the corollary 1.7 (b):
https://www.emis.de/journals/JIPAM/images/059_06_JIPAM/059_06_www.pdf
Indeed, let $a_1\leq a_2\leq...\leq a_n$, $a_1+a_2+...+a_n=const$ and $a_1^n+a_2^n+...+a_n^n=const$.
Thus, by EV $a_1a_2...a_n$ gets a minimal value in the following cases.
Case 1: one of our variables is equal to $0$.
Let $a_n=0$.
In this case we need to prove that
$$(n-1)^{n-1}\sum_{k=1}^{n-1}a_k^n\geq\left(\sum_{k=1}^{n-1}a_k\right)^n$$ or
$$\frac{\sum\limits_{k=1}^{n-1}a_k^n}{n-1}\geq\left(\frac{\sum\limits_{k=1}^{n-1}a_k}{n-1}\right)^n,$$
which is Power Mean inequality;
Case 2: $a_1=x$ and $a_2=...=a_n=1$, where $0\leq x\leq1$.
We need to prove that $f(x)\geq0$, where
$$f(x)=(n-1)^{n-1}(x^n+n-1)+n^nx-(x+n-1)^n.$$
But $$f'(x)=n(n-1)^{n-1}x^{n-1}+n^n-n(x+n-1)^{n-1}\geq n^n-n\cdot (1+n-1)^{n-1}=0,$$
which says that $f$ is an increasing function.
Thus, $f(x)\geq f(0)=0$ and we are done! |
Using the definition of derivative, show that $f(x,y)=5x^2+7xy$ is differentiable at $(1,2)$ | $$f(x,y)=19+ 24(x-1)+ 7(y-2)+r(x,y)\iff$$
$$r(x,y)=f(x,y)-24(x-1)-7(y-2)-19=5x^2+7xy-24(x-1)-7(y-2)-19$$
$$\implies\frac{r(x,y)}{\sqrt{(x-1)^2+(y-2)^2}}=\frac{5x^2+7xy-24(x-1)-7(y-2)-19}{\sqrt{(x-1)^2+(y-2)^2}}$$
Now you can use "moved" polar coordinates (or simply a substitution, if you will):
$$x-1=r\cos\theta\;,\;\;y-2=r\sin\theta\implies$$
$$\implies\frac{5(1+r\cos\theta)^2+7(1+r\cos\theta)(2+r\sin\theta)-24r\cos\theta-7r\sin\theta-19}r=$$
$$=\color{purple}{10\cos\theta}+5r\cos^2\theta+\color{green}{7\sin\theta}+\color{purple}{14\cos\theta}+7r\cos\theta\sin\theta-\color{purple}{24\cos\theta}-\color{green}{7\sin\theta}=$$
$$=r\left(5\cos^2\theta+7\cos\theta\sin\theta\right)\xrightarrow[r\to 0]{}0$$ |
Is a function increasing if the derivative is positive except at one point of an interval? | Let $f\colon (a,b)\to \Bbb R$ be continuous, and $f'(x)>0$ for $x\in(a,b)\setminus\{c\}$. We do not even need to assume that $f'(c)$ exists.
Then $f$ is strictly increasing:
Suppose $a<x_1<x_2<b$. Then $f(x_1)<f(x_2)$ follows from the Mean Value Theorem if $x_2\le c$ or if $x_1\ge c$. If $x_1<c<x_2$, just go in two steps via $c$.
Now suppose additionally that $f'(c)$ exists. Then directly from the increasing property we get $f'(c)\ge0$. |
How to show that $-x^2\mathrm{d}\left(\frac{y}{x}\right) = y\mathrm{d}x-x\mathrm{d}y$? | We have $y=y(x)$, then
$$\frac{\mathrm{d}}{\mathrm{d}x}\left(\frac{y(x)}{x}\right)= \frac{xy'(x)-y(x)}{x^2}.$$
Hence
$$-x^2\frac{\mathrm{d}}{\mathrm{d}x}\left(\frac{y(x)}{x}\right)=y(x)-xy'(x).$$ |
Number of Dyck Paths that touch the diagonal exactly $k$ times? | Have a look at Rukavicka's proof of $C_n=\frac{1}{n+1}\binom{2n}{n}$. He introduces the combinatorial concept of exceedance of a path and a path transform the increases/decreases the exceedance by one. Now consider what happens to the the number of crossings of the main diagonal when applying such transform, and the solution will be at hand.
As an alternative, we may consider Vandermonde's convolution. For instance, the following sum
$$ \sum_{a+b=n}\frac{1}{a+1}\binom{2a}{a}\frac{1}{b+1}\binom{2b}{b}, $$
is the coefficient of $x^n$ in
$$ \left(\sum_{m\geq 0}\binom{2m}{m}\frac{x^m}{m+1}\right)^2 = \left(\frac{1-\sqrt{1-4x}}{2x}\right)^2 =\frac{1-2x-\sqrt{1-4x}}{2x^2}$$
i.e. the coefficient of $x^{n+1}$ in $\frac{1-\sqrt{1-4x}}{2x}$, that is again a Catalan number.
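A short check of this convolution identity (using the closed form $C_m=\frac{1}{m+1}\binom{2m}{m}$):

```python
from math import comb

catalan = lambda m: comb(2 * m, m) // (m + 1)

for n in range(8):
    conv = sum(catalan(a) * catalan(n - a) for a in range(n + 1))
    print(n, conv, catalan(n + 1))   # the convolution always equals C_{n+1}
```
|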
Show that $\cos (\sin \theta)>\sin (\cos \theta)$ | Over $I=\left[0,\frac{\pi}{2}\right]$ we have:
$$ \sin(\theta)+\cos(\theta) = \sqrt{2} \sin\left(\theta+\frac{\pi}{4}\right) \leq \sqrt{2} < \frac{\pi}{2} $$
hence:
$$ \forall \theta\in I,\qquad \cos\theta < \frac{\pi}{2}-\sin\theta. \tag{1}$$
The LHS of $(1)$ belongs to $[0,1]$, the RHS belongs to $\left[\frac{\pi}{2}-1,\frac{\pi}{2}\right]$; the sine function is increasing over $\left[0,\frac{\pi}{2}\right]$, hence:
$$ \forall \theta\in I,\qquad \sin(\cos\theta) < \cos(\sin\theta).\tag{2}$$ |
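A grid check of $(2)$ (Python sketch, not a proof):

```python
import math

# Minimum of cos(sin t) - sin(cos t) over a fine grid on [0, pi/2].
gap = min(math.cos(math.sin(t)) - math.sin(math.cos(t))
          for t in (k * (math.pi / 2) / 10000 for k in range(10001)))
print(gap)  # strictly positive, consistent with (2)
```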
Arrows from initial objects to non-isomorphic objects in an elementary topos are monomorphisms? | The answer is that they are, but for stupid reasons : the same reason that in $\mathbf{Set}$, any map $\emptyset \to A$ is mono.
Indeed, in an elementary topos $\mathscr{T}$, any object $A$ with a map $A\to 0$ is also $0$, so this map is actually unique.
To prove this, first note that for any object $B$, $B\times 0 \simeq 0$. Indeed, by exponentiation, one has $\hom(B\times 0, X)\cong \hom(0,X^B) \cong \{*\}$, naturally in $X$, so $B\times 0$ is an initial object.
Then note that if you have a map $f:A\to 0$, then you have a map $A\to A\times 0$, namely $(id_A,f)$. Now $\pi_A\circ (id_A,f) = id_A$ by definition. Moreover, $\pi_A\circ (id_A,f)\circ \pi_A = id_A\circ \pi_A = \pi_A = \pi_A\circ id_{A\times 0}$, and $\pi_0\circ (id_A,f)\circ \pi_A = f\circ \pi_A = \pi_0\circ id_{A\times 0}$; the latter holds because $A\times 0$ is initial, so there is a unique map $A\times 0 \to 0$: if I know one such map, it must be this one.
These two equations prove (universal property of the product) that $(id_A,f)\circ \pi_A = id_{A\times 0}$; so that $(id_A,f)$ and $\pi_A$ are a pair of isomorphisms : $A\simeq A\times 0$. But $A\times 0 \simeq 0$, so $A\simeq 0$.
Now we have proved that any object with a map $A\to 0$ must be initial, so if there are two maps $f,g: B\to 0$ such that [insert any condition you like], then $f=g$. In particular, any map leaving $0$ is a monomorphism.
Rational power in modular arithmetic | Welcome to MSE! Have a look at my answer from today: What do some cryptographic terminology mean?(e.g. public params, security params)
In view of the RSA (Rivest, Shamir, Adleman) cryptosystem, each user chooses two large primes $p$ and $q$. Then she (beloved Alice) computes $n=p\cdot q$ and $\phi(n)=(p-1)(q-1)$. Afterwards she chooses an integer $e$ in the range $1<e<\phi(n)$, usually a prime, such that $e$ is invertible modulo $\phi(n)$. Then she computes $d=e^{-1}$ in $\mathbb{Z}_{\phi(n)}$. Then $(n,e)$ is the public key and $d$ is the private key.
In your context, $x$ equals $d$. |
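A toy round-trip with deliberately tiny, insecure parameters (a Python sketch; the numbers are my own illustration, not from the linked answer):

```python
# Toy RSA; primes this small are for illustration only, never for real use.
p, q = 61, 53
n = p * q                         # 3233
phi = (p - 1) * (q - 1)           # 3120
e = 17                            # public exponent, invertible mod phi
d = pow(e, -1, phi)               # d = e^(-1) mod phi (Python >= 3.8), here 2753
msg = 42
cipher = pow(msg, e, n)
assert pow(cipher, d, n) == msg   # x^(ed) = x mod n: the "rational power" x^(1/e)
print(n, e, d, cipher)
```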
Prove that $F_n={n-1 \choose 0 }+{n-2 \choose 1 }+{n-3 \choose 2 }+\ldots$ where $F(n)$ is the $n$-th fibonacci number | Let's call that relation $B_n$ instead for a second ($B$ is for binomial). Let's also say for the moment that $n$ is even, so $n=2m$, with $m$ an integer.
Now,
$$B_{n-1} = {2m-2 \choose 0 }+{2m-3 \choose 1 }+{2m-4 \choose 2 }+\ldots+{m - 1 \choose m - 1},$$
and
$$B_{n-2} = {2m-3 \choose 0 }+{2m-4 \choose 1 }+{2m-5 \choose 2 }+\ldots+{m - 1 \choose m - 2}.$$
From Pascal's triangle, we have the relation
$${n \choose k} + {n \choose k+1} = {n+1 \choose k+1}.$$
Looking at the two expressions for $B_{n-1}, B_{n-2}$, and applying this relation as we add the $k$th term of $B_{n-2}$ and the $(k+1)$th term of $B_{n-1}$, we see that
$$B_{n-1} + B_{n-2} = {2m-2 \choose 0} + {2m-2 \choose 1} + {2m-3 \choose 2} + \ldots +{m \choose m-1}.$$
Since ${n \choose 0} = 1$ for any positive integer $n$, we can substitute ${2m-1 \choose 0}$ for the first term.
Hence, $B_n = B_{n-1} + B_{n-2}$ for even $n$.
The process is similar to show that this holds for odd $n$.
Finally, noting that $B_1 = B_2 = 1$, we've shown that $B_n$ is the Fibonacci sequence, which is in essence what you set out to prove.
But ... I'll agree with other commenters that induction is cleaner. :) |
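For what it's worth, a quick numeric check of the identity (Python sketch, with the convention $F_1=F_2=1$):

```python
from math import comb

def B(n):
    # B_n = sum_k C(n-1-k, k), the diagonal sum of Pascal's triangle.
    return sum(comb(n - 1 - k, k) for k in range(n))

fib = [1, 1]
for _ in range(20):
    fib.append(fib[-1] + fib[-2])

assert all(B(n) == fib[n - 1] for n in range(1, 20))
print("diagonal sums match the Fibonacci numbers")
```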
Does a function relationship for a specific $y$ hold for any? | You could have $f(x,y) = a(y) x + b(y)$ for any continuous functions $a(y)$ and $b(y)$ on $\mathbb R$. No reason for them to be constant.
EDIT: With the new assumptions, you could have $f(x,y) = a x y + b x + c y + d$. |
Find the global maximum of $2\sin\left(x\right)-\cos\left(2x\right)$ | You can write the following using $\cos(2x) = 1-2\sin^2(x)$
$$f(x) = 2\sin( x) - \cos (2x) = 2 \sin(x) + 2\sin^2(x) -1 \\= 2(\sin(x) +\tfrac{1}{2})^2 - \tfrac{3}{2}$$
From here note that the square term is always nonnegative; the maximum occurs at $\sin(x) = 1$ and the minimum at $\sin(x) = -\tfrac{1}{2}$.
Thus maximum $= 3$ and minimum = $\tfrac{-3}{2}$. |
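A grid check of these extrema (Python sketch):

```python
import math

# Max of 2 sin x - cos 2x should be 3 (at sin x = 1), min should be -3/2.
vals = [2 * math.sin(x) - math.cos(2 * x)
        for x in (k * 2 * math.pi / 100000 for k in range(100000))]
print(max(vals), min(vals))  # ~ 3.0 and ~ -1.5
```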
Show (pointwise) convergence of function sequence $f_n :[0,1] \to \mathbb{R}$ with $f_n(t)=nxe^{-nx^2}$ | If $x=0$, then $f_n(x)=0$ and therefore $\lim_{n\to\infty}f_n(x)=0$.
And if $x\in(0,1]$, then$$\lim_{n\to\infty}ne^{-nx^2}=\lim_{n\to\infty}\frac n{e^{nx^2}}=0$$and therefore $\lim_{n\to\infty}f_n(x)=0$ in this case too.
Concerning your approach, I don't see why we should have $ne^{-x}<\varepsilon$. |
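A quick numeric illustration of the pointwise limit (Python sketch):

```python
import math

# f_n(x) = n x exp(-n x^2) -> 0 for each fixed x in [0, 1].
for x in (0.0, 0.1, 0.5, 1.0):
    print(x, [n * x * math.exp(-n * x * x) for n in (10, 100, 1000)])
```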
Among any $11$ integers, sum of $6$ of them is divisible by $6$ | We first show that among any 5 integers, the sum of some 3 of them is divisible by 3. The residue classes modulo 3 are $[0], [1], [2]$. By the Pigeonhole principle, we have two cases:
each of these residue classes contains at least one of the five integers, or
one residue class contains 3 of these 5 integers.
In the former case, let $x_0 \equiv 0 \mod 3$, $x_1 \equiv 1 \mod 3$ and $x_2 \equiv 2 \mod3$. Then summing these, we get $x_0 + x_1 + x_2 \equiv 0+1+2 \equiv 0 \mod 3$.
In the latter case, we have 3 integers among the 5, say $x_1, x_2, x_3$, such that $x_1 \equiv x_2 \equiv x_3 \equiv k \mod 3$; again summing these three, we get $x_1 + x_2 + x_3 \equiv 3k \equiv 0 \mod 3$.
This proves that among any 5 integers, sum of some 3 of them is divisible by 3.
Now, we have 11 integers. By the previous result, we can choose 3 of them such that their sum is divisible by 3. Denote this sum by $s_1$. Now, we are left with 8 integers; again, choose 3 of them such that their sum is divisible by 3. Denote this sum by $s_2$. Now, we are left with 5 integers. Choose $s_3$ similarly.
Thus we have 3 sums: $s_1, s_2, s_3$ (each of which is a sum of 3 integers). These sums are divisible by 3, so each of them is congruent to either 0 or 3 modulo 6.
Now, since there are 3 sums and two residue classes ($[0], [3]$), by the Pigeonhole principle one residue class must contain two of the sums. Let $s_i$ and $s_j$ be those sums. Either $s_i \equiv s_j \equiv 0 \mod 6$ or $s_i \equiv s_j \equiv 3 \mod 6$. In both cases, $s_i + s_j \equiv 0 \mod 6$.
Since, $s_i$ and $s_j$ are both sum of 3 integers, $s_i + s_j$ is a sum of 6 integers (which is divisible by 6). This completes the proof. |
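A brute-force check of the statement on random inputs (Python sketch, not a proof):

```python
import random
from itertools import combinations

# Any 11 integers should contain 6 whose sum is divisible by 6.
for _ in range(2000):
    nums = [random.randint(-10**6, 10**6) for _ in range(11)]
    assert any(sum(c) % 6 == 0 for c in combinations(nums, 6)), nums
print("property held for all random samples")
```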
Newton-Raphson in $\mathbb{R}^n$ | Ok, my friend Stefan helped me. It turned out that I don't need the contraction fixed point theorem.
First observe $\phi(p) = p$.
By Taylor's Theorem (or the definition of differentiability)
$$0 = f(p) = f(x)+Df_{x}(p-x)+o(|p-x|)$$
$$\Longrightarrow\;\;\phi(x) = x-(Df_x)^{-1}f(x) = p+(Df_x)^{-1}o(|p-x|)$$
$$\Longrightarrow\;\;\frac{|\phi(x)-p|}{|x-p|}\le \|Df_x^{-1}\|\cdot \frac{o(|p-x|)}{|p-x|}$$
Choose $0<\epsilon$ so small that
$$\forall x\in N_\epsilon(p).\;\frac{o(|p-x|)}{|p-x|}<\frac{1}{2\|Df_x^{-1}\|}$$
Therefore
$$\forall x\in N_{\epsilon}(p).\;|\phi(x)-p|<\frac{1}{2}|x-p|$$
Thus $\phi(N_\epsilon(p))\subset N_\epsilon(p)$, further
$$|\phi^{\circ n}(x)-p|<\left(\frac{1}{2}\right)^n|x-p|$$
That is, $\forall x\in N_{\epsilon}(p).\;\lim\limits_{n\to\infty} \phi^{\circ n}(x) = p$ |
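A one-dimensional illustration of the iteration (Python sketch; $f(x)=x^2-2$ is my own choice of example):

```python
# phi(x) = x - f(x)/f'(x) for f(x) = x^2 - 2; iterates converge to p = sqrt(2)
# much faster than the 1/2-contraction bound derived above.
f = lambda x: x * x - 2
df = lambda x: 2 * x
phi = lambda x: x - f(x) / df(x)

x, p = 1.0, 2 ** 0.5
for _ in range(5):
    x = phi(x)
    print(abs(x - p))  # errors roughly square at each step
```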
Specific computation for the degree of maps from $S^1$ to $S^1$. | Let us write $q : [0,1] \to S^1$ for the quotient map in order to not confuse it with the real number $\pi$. We have $q = p \mid_{[0,1]}$.
Identifying $(x,y) \in \mathbb R^2$ with $x + iy \in \mathbb C$, we can write $p(x) = e^{2\pi ix}$. Then $f(z) = -z$ and $g(z) = \overline z$.
For $\alpha = f \circ q$ we have $\alpha(t) = -e^{2\pi i t}$. A lift is given by $\tilde \alpha(t) = t + 1/2$ because $p(\tilde \alpha(t)) = e^{2\pi i(t + 1/2)} = e^{2\pi it} e^{\pi i} = - e^{2\pi it} = \alpha(t)$. Thus $\deg(f) = \tilde \alpha(1) - \tilde \alpha(0) = 1$.
For $\alpha = g \circ q$ we have $\alpha(t) = \overline{e^{2\pi i t}} = e^{\overline{2\pi i t}} = e^{-2\pi i t}$. Obviously a lift is given by $\tilde \alpha(t) = -t$. Thus $\deg(g) = \tilde \alpha(1) - \tilde \alpha(0) = -1$. |
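These degrees can also be recovered numerically by accumulating the winding angle (Python sketch; `degree` is my own helper):

```python
import cmath
import math

def degree(alpha, steps=10000):
    # Accumulate the argument increments of alpha along [0, 1]; the total
    # winding divided by 2*pi is the degree.
    total, prev = 0.0, alpha(0.0)
    for k in range(1, steps + 1):
        cur = alpha(k / steps)
        total += cmath.phase(cur / prev)  # small angle between successive points
        prev = cur
    return round(total / (2 * math.pi))

print(degree(lambda t: -cmath.exp(2j * math.pi * t)))  # f o q: degree  1
print(degree(lambda t: cmath.exp(-2j * math.pi * t)))  # g o q: degree -1
```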
A graded abelian group and a graded map | The grading is given here with respect to the multiplicative (!) group $\{+1,-1\}$. Since $1$ is the neutral element of this group, by "graded map of degree $1$" we simply mean a graded map. This already answers all of your questions:
1) There is no element of degree $0$. The zero belongs both to degrees $+1$ and $-1$, and is in fact the only such element.
2) $1 \otimes 1$ and $x \otimes x$ have degree $1$, and are mapped to $1$ and $0$, which have both degree $1$. Besides, $1 \otimes x$ and $x \otimes 1$ have degree $-1$, which are mapped to $x$, which has degree $-1$. Thus, $m$ is graded.
By the way, one can construct $m$ more elegantly as follows: The ring $\mathbb{Z}[x]$ is $\mathbb{Z}$-graded. It follows that $\mathbb{Z}[x]/(x^2)$ is $\mathbb{Z}/2$- or $\{+1,-1\}$-graded. Now the underlying abelian group of $A$ is isomorphic to that of $\mathbb{Z}[x]/(x^2)$. Hence, $A$ becomes a graded commutative ring in such a way that it is isomorphic to $\mathbb{Z}[x]/(x^2)$ as a graded commutative ring.
Does the elliptic curve $y^2 = 4 x^3 -6075$ have any integer points? | No, $y^2=4x^3-6075$ has no integer solution.
An elementary observation: $3\nmid x$, because $$3\mid x \implies 27\mid y^2 \implies 9\mid y \implies 9\mid x,$$ but $y^2+6075\equiv 0$ has no solution in $\mathbb{Z}/3^6\mathbb{Z}$.
The ring of integers of $\mathbb{Q}(\sqrt{-3})$ is a UFD. The equation can be written as
$$\left( {\frac{{y + 45\sqrt { - 3} }}{2}} \right)\left( {\frac{{y - 45\sqrt { - 3} }}{2}} \right) = {x^3}$$
The elements inside the brackets, denoted by $\alpha$ and $\beta$, are integral over $\mathbb{Z}$ since $y$ is odd.
I claim that $\alpha,\beta$ are relatively prime. If a prime $\pi$ divides both, then $\pi\mid 45\sqrt{-3}$, so $\pi = \sqrt{-3}$ or $5$. If $\pi = \sqrt{-3}$, then $3\mid x$, contradiction. If $\pi = 5$,
let $v_5$ denote valuation at $5$, normalized so that $v_5(5)=1$, note that $v_5(\alpha) \in \mathbb{Z}$ as $v_5$ is unramified. $$0< v_5(\alpha)+v_5(\beta) = 2v_5(\alpha)= 3v_5(x) $$
this says $v_5(x)$ is even, hence $5^6 \mid (y^2+6075)$; but $y^2+6075\equiv 0$ has no solution in $\mathbb{Z}/5^6\mathbb{Z}$, ruling out $\pi = 5$. (The use of valuations can be avoided by noting that $5^3 \mid (y^2+6075)$ is already impossible, but it seems difficult to deduce the stronger $5^6 \mid (y^2+6075)$ from considerations in $\mathbb{Z}$ alone.)
Since $\alpha,\beta$ are relatively prime, they are both cube, say
$$\frac{{y + 45\sqrt { - 3} }}{2} = {(\frac{{a + b\sqrt { - 3} }}{2})^3} \qquad \text{ or } \qquad \left( {\frac{{1 + \sqrt { - 3} }}{2}} \right){\left( {\frac{{a + b\sqrt { - 3} }}{2}} \right)^3}$$
with $a,b$ both even or both odd. The first case gives $60= {a^2}b - {b^3}$, so $b$ has only a few possible values, and checking them gives no integer solutions to the original equation. The second case gives $$\frac{{45}}{2} = \frac{{{a^3} + 3{a^2}b - 9a{b^2} - 3{b^3}}}{{16}};$$ this says $v_3(a) \geq 1$, thus $v_3(a^3+3a^2b-9ab^2)\geq 3$, but $v_3(45/2) = 2$, so $v_3(3b^3) = 2$, absurd.
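The local obstructions used above, plus a small direct search, can be verified mechanically (Python sketch):

```python
import math

# y^2 + 6075 = 0 has no solution modulo 3^6 nor modulo 5^6.
for m in (3**6, 5**6):
    assert all((y * y + 6075) % m != 0 for y in range(m))

# Direct search: 4x^3 - 6075 >= 0 forces x >= 12; no square turns up.
for x in range(12, 100_000):
    t = 4 * x**3 - 6075
    y = math.isqrt(t)
    assert y * y != t, (x, y)
print("local obstructions confirmed; no integer points with x < 100000")
```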
Finding the length of a semi-ellipse (Calculus) | Using for example the Wikipedia article on ellipses, you will find that the semi-major axis is $2.5$ feet and the semi-minor axis is $2$ feet. This means the foci are at $\pm 1.5$ feet, i.e. the tacks should be placed at the base, $1.5$ feet to either side of the center. The string should be $5$ feet long.
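Restating the arithmetic (Python sketch):

```python
a, b = 2.5, 2.0              # semi-major (half the 5 ft base) and semi-minor axes
c = (a**2 - b**2) ** 0.5     # distance from center to each focus
print(c, 2 * a)              # 1.5 ft tack offset, 5.0 ft of string
```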
Proving two norms are not equivalent | Suppose $x_n$ is the sequence with $1$'s in the first $n$ slots and $0$'s elsewhere. Then the sum norm of $x_n$ is $n,$ while the max norm is $1.$ Thus there is no constant $C$ such that the sum norm is bounded above by $C$ times the max norm. Hence these two norms are not equivalent on $L.$ |
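Concretely, for the sequences $x_n$ above (Python sketch):

```python
# Sum norm grows like n while the max norm stays 1, so no constant C works.
for n in (1, 10, 100, 1000):
    x = [1] * n
    print(n, sum(abs(t) for t in x), max(abs(t) for t in x))
```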
Does existence of the derivative along a curve imply existence of the directional derivative? | Just continuity isn't enough. I'll give a counterexample $G\colon\mathbb{R}^2 \to \mathbb{R}$ first; then you can take $F(x,y)=(G(x,y),y)$ to get $F\colon\mathbb{R}^2 \to \mathbb{R}^2$.
For $x \neq 0$, let
$$
G(x,y) =
\begin{cases}
0, & \text{if } |y| \ge x^2, \\
x\left(1-|y|/x^2\right), & \text{if } |y| < x^2.
\end{cases}
$$
That is, for each fixed $x\neq 0$, the function $y \mapsto G(x,y)$ is a piecewise linear “tent function” with a peak of height $x$ (or a trough, if $x<0$) and support on the interval $-x^2 \le y \le x^2$.
This tends uniformly to zero as $x \to 0$, so if we also set $G(0,y)=0$ we get a continuous function $G$.
Now, $G(x,0)=x$ for all $x$, so the directional derivative in the $x$ direction at the origin equals $1$.
But on the parabola $(x,y)=(t,t^2)$, which passes through the origin in the $x$ direction, we have $G(t,t^2)=0$, so the derivative along that curve isn't equal to the directional derivative. |
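A numeric confirmation of both claims (Python sketch; `G` is transcribed from the definition above):

```python
def G(x, y):
    if x == 0 or abs(y) >= x * x:
        return 0.0
    return x * (1 - abs(y) / (x * x))

# Directional derivative along the x-axis at the origin is 1 ...
print([(G(t, 0) - G(0, 0)) / t for t in (0.1, 0.01, 0.001)])  # -> 1.0
# ... yet along the parabola (t, t^2) the function vanishes identically.
print([G(t, t * t) for t in (0.1, 0.01, 0.001)])              # all 0.0
```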
Parametric curve parametrized by length | Yes. Every regular piecewise-smooth curve can be re-parametrized by arc length. Differentiate your equation and you will see that it is equivalent to the condition that $\|v'\| = 1$ for all $t$. A well-known example is the circle $(\cos t, \sin t)$.
The number of edges this graph | We are only looking at the maximum here. We have that $f(b) = 0\iff b=a \quad\text{or}\quad q(b)=0$. Now from the inductive assumption we have that $q(x)$ has at most $n-1$ roots. Then, supposing that all those roots are distinct and $a$ is also distinct from them, we can have at most $n-1$ roots for $q(x)$ plus the extra root $x=a$, giving a maximum of $n$ roots.
The number of edges this graph | I use instead $V'(G) = \{d_i; d_i|n\}$, and $E'(G) = \{ d_i d_j: d_i | d_j\}$, allowing a loop on each vertex corresponding to $d_i| d_i$; since this is homework I leave the necessary adjustments to you (i.e. counting how many edges are joined to $1$ and $n$, how many other loops there are, and subtracting these off).
So, let us determine $E'(G)$. Write $n = \prod_i p_i^{n_i}$. By considering prime factorizations, $V'(G)$ is in bijection with tuples $(a_i)$, where $a_i \in \mathbb{N}$, $0 \leqslant a_i \leqslant n_i$, and we draw an oriented edge $(a_i) \rightarrow (b_i)$ if $a_i \leqslant b_i$ for all $i$, corresponding to divisibility.
For $\lvert E'(G) \rvert$, we may count as follows: if $c_i$ denotes the number of pairs $(a_i,b_i)$ such that $0 \leqslant a_i \leqslant b_i \leqslant n_i$, then the number of edges is simply $\prod_i c_i$. Writing $t_m$ for the number of pairs $0 \leqslant a \leqslant b \leqslant m$ and conditioning on whether $b = m$ or not, we get the recursion $t_m = t_{m-1} + (m+1)$, yielding $c_i = t_{n_i} = \frac{(n_i+1)(n_i+2)}{2}$.
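Both counts can be cross-checked for a few $n$ (a sketch assuming sympy is available):

```python
from sympy import divisors, factorint

# |E'(G)| (ordered divisibility pairs d_i | d_j, loops included) should equal
# the product of (n_i + 1)(n_i + 2)/2 over the prime factorization of n.
for n in (12, 360, 1024, 9699690):
    ds = divisors(n)
    edges = sum(1 for a in ds for b in ds if b % a == 0)
    formula = 1
    for exp in factorint(n).values():
        formula *= (exp + 1) * (exp + 2) // 2
    assert edges == formula, n
print("edge count matches the product formula")
```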
Stokes' Rule for an initial-value problem | I think I just figured out the answer to my question: using the wave equation $u_{tt}=c^2u_{xx}$ and the initial condition $u(x,0)=0$, $$v_t(x,0)=u_{tt}(x,0)=c^2u_{xx}(x,0)=c^2\,[u(x,0)]_{xx}=c^2\cdot 0=0.$$
basic property of closure | Hint: try to show that $X \subseteq Y$ implies $\bar{X} \subseteq \bar{Y}$, and that $Y \subseteq \bar{X}$ implies $\bar{Y} \subseteq \bar{X}$. |
Why is the function $\mu(E)=\sum_{k=1}^{\infty}(1/2^k)\delta_{q_k}(E)$ a probability measure over $\mathbb{Q}$? | I think the easiest way to think about this question is to start by constructing a nice measure on the set of positive integers. Assign to the singleton $\{n\}$ the probability $1/2^n$. Then define the probability $P$ of any subset to be the sum of the probabilities of the elements it contains.
Then
$$
P(\mathbb{N}) =
\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1.
$$
The countable additivity of $P$ is straightforward - you're just summing subsets of the terms in that absolutely convergent geometric series.
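Numerically (Python sketch; the even numbers serve as an example subset):

```python
# P({n}) = 1/2^n on the positive integers: total mass 1, and e.g. P(evens) = 1/3.
total = sum(2.0 ** -n for n in range(1, 200))
evens = sum(2.0 ** -n for n in range(2, 200, 2))
print(total, evens)  # ~ 1.0 and ~ 0.3333
```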
To define a probability measure on a countably infinite set (like the rationals) just use a bijection to the positive integers to transfer that one. That settles the bonus question.
Edit in response to comment.
To "transfer" this measure from $\mathbb{N}$ to $\mathbb{Q}$, start with your favorite bijection
$$
b: \mathbb{Q} \to \mathbb{N}.
$$
Then for each $q \in \mathbb{Q}$ let
$$
P(\{q\}) = \frac{1}{2^{b(q)}}.
$$ |