title | upvoted_answer |
---|---|
Function Partially Differentiable but Not Totally Differentiable | Consider the function $$f(x,y) = \frac{xy}{x^2+y^2}\quad x^2+y^2 \neq 0$$ where $f(0,0) = 0$. This function is defined on $\mathbb{R}^2$ and has partial derivatives everywhere but is not continuous (and therefore not differentiable) at the origin. |
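A quick numeric illustration of the discontinuity in the answer above (my addition; plain Python, nothing assumed beyond the formula):
```python
# Along the line y = x the function is constantly 1/2, away from its
# value 0 at the origin, so no limit exists there.
def f(x, y):
    return x * y / (x**2 + y**2) if (x, y) != (0.0, 0.0) else 0.0

for t in [0.1, 0.01, 0.001]:
    print(f(t, t), f(t, 0.0))   # always 0.5 vs 0.0
```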
analytic function such that $f(5z)=f(z)$ on $\mathbb{C}\setminus\{0\}$. | Here is an alternative argument: Let $M$ be the maximum of $|f(z)|$ on the annulus $\{z: \frac 1 5\leq |z| \leq 1\}$. Since $f(z)=f(z/5)$, $M$ is also the maximum on $\{z: \frac 1 {5^{n+1}}\leq |z| \leq \frac 1 {5^{n}}\}$ for any $n$, and these annuli cover the punctured disc $0<|z|\leq 1$. Conclude that $f$ is bounded around $0$ so it extends to an entire function. Now $f(z)=f(\frac z {5^{n}}) \to f(0)$ for any $z$, so $f$ is constant. |
simple optimization with inequality restrictions | We can rewrite the problem as
$$\arg\max_\theta \theta^T \nabla_xJ(x)$$
subject to $$-\epsilon \le \theta_i \le \epsilon, \forall i \in \{1,\ldots, n\}$$
which is a linear programming problem. Furthermore, the problem is separable: we can optimize each coordinate independently.
Since the product of two numbers of the same sign is nonnegative, and we want to maximize the quantity, we choose each $\theta_i$ to share the sign of $\nabla_x J(x)_i$ and to have the largest possible magnitude.
Hence, the solution is $\theta_i = \epsilon \operatorname{sgn}(\nabla_x J(x)_i).$
Remark: Even for the equality case, the solution remains the same. |
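As a quick sanity check of the closed-form solution above, here is a minimal sketch (my addition; `numpy` and the random comparison points are illustrative only):
```python
# Verify theta_i = eps * sign(g_i) maximizes theta . g over the box
# |theta_i| <= eps: the optimum value is eps * ||g||_1, and no random
# feasible point should beat it.
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=5)           # stands in for the gradient grad_x J(x)
eps = 0.1

theta_star = eps * np.sign(g)
best_random = max(rng.uniform(-eps, eps, size=5) @ g for _ in range(10_000))
print(theta_star @ g, best_random)   # eps*||g||_1 >= any feasible value
```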
How to calculate $e^x$ with a standard calculator | This is what I would do to calculate $e^x$ with perfect 8-digit accuracy.
Take $\lfloor x\rfloor$ and $b = x - \lfloor x\rfloor$, the whole and fractional parts respectively. ($e^x = e^{\lfloor x \rfloor + (x - \lfloor x\rfloor)} = e^{\lfloor x \rfloor} e^{x - \lfloor x \rfloor}$)
Use the $(((b/4 + 1)b/3 + 1)b/2 + 1)b + 1$ pattern to calculate $e^{x - \lfloor x\rfloor}$. (If the fractional part of the exponent is .4 or less, only terms up to $b/7$ are needed for 8-digit accuracy. Worst-case scenario ($b \rightarrow 1$) terms up to $b/10$ are needed.) This method is easy to implement on a simple calculator, especially ones with a memory slot to quickly reinsert $b$.
When that is finished, multiply by $e$ ($\approx 2.71828183$ by memorization) and press the equals button $\lfloor x\rfloor$ times to repeat the multiplication. The result is $e^x$.
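A sketch of this procedure in code (my addition; `exp_calculator` is a hypothetical helper, and the fixed truncation depth of $10$ terms is the worst case quoted above):
```python
# Sketch of the calculator routine for x >= 0: Horner-evaluate the
# truncated series for e^b on the fractional part b, then multiply by
# e (memorized) floor(x) times.
import math

def exp_calculator(x, terms=10):
    n, b = int(x // 1), x % 1
    acc = 1.0
    for k in range(terms, 0, -1):   # (((b/terms + 1)b/(terms-1) + 1)...)b + 1
        acc = acc * b / k + 1.0
    for _ in range(n):              # the repeated "equals" presses
        acc *= 2.71828183
    return acc

print(exp_calculator(3.7), math.exp(3.7))   # agree to about 8 digits
```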
I've analyzed the number of terms required for full accuracy. (The chart of required terms versus $x$ is omitted here.)
Basically, if terms up to $b/t$ are needed to fully calculate $e^x$ up to 8-digit precision (7 digits after the decimal point), accounting for rounding (the $\frac{1}{2}$ term), the relation between $x$ and $t$ is given by $\sqrt[t]{\frac{1}{2}10^{-7}t!} = x$. |
Calculus of variations, interpreting the minimum in first order | Given a curve y(x), "going away to first order" means substituting $y(x)+\epsilon\delta y(x)$ where $\delta y(x)$ is a function of $x$, usually constrained to be zero at the relevant endpoints. First order refers to the power 1 of $\epsilon$. You are going to differentiate in $\epsilon$ and then set $\epsilon=0$ because you are to calculate the analogue of the directional derivative. If you use higher than power $1$ then the result will always be 0. Graphically, there will be the curve $y(x)$ and then there will be a perturbation that shrinks to $y(x)$ as $\epsilon\rightarrow0$. Many books have this picture.
At a critical curve, the functional differentiates to zero in $\epsilon$, which means that if you expand at $\epsilon=0$ after substituting $y+\epsilon\delta y$ then there are no terms linear in $\epsilon$. You have only terms like $\epsilon^2$. At the critical curve, when you vary the function linearly in $\epsilon$, the result is a higher-order change like $\epsilon^2$. You are missing an essential intuition: when the first derivative vanishes, then what remains is at second order variation.
You don't have to recognize closeness or anything like that. You are driving the functional along a particular perturbation. You are not interpreting a nearby curve that someone else provides to you. The analogue is, for an ordinary function $f(x,y,z)$, calculating the directional derivative (so $d/dt$ at $t=0$) of $f(x+t\,\delta x, y+t\,\delta y,z+t\,\delta z)$. $t$ is more common than $\epsilon$ in this context.
The chain rule is as usual, the only problem being when the Lagrangian is $L(y,y')$ then the $y'$ here is a (two-character) variable, not the derivative of something. This is the notation so that we understand how to substitute the function. It's a reminder that the derivative of the function goes in that slot.
These statements are not theorems. For example, they depend on the presence of sufficient smoothness. |
Dependent Sum vs Product Types | Dependent sums, i.e. sigma types, don't generalize dependent products, i.e. pi types. Maybe you meant that they generalize pairs which is true simply via $\Sigma\ \_\!:\!A.B \cong A\times B$. (Similarly, $\Pi\ \_\!:\!A.B \cong A \to B$.) In fact, these are usually the definitions of $A\times B$ and $A\to B$ in dependent type theories.
$\mathsf{Vec}\ n\ A$ can be defined as $\mathsf{Fin}\ n\to A$, so $\Pi n\!:\!\mathbb{N}.\mathsf{Vec}\ n\ A$ is $\Pi n\!:\!\mathbb{N}.(\mathsf{Fin}\ n\to A)$ which, via a direct generalization of the currying isomorphism, is isomorphic to $(\Sigma n\!:\!\mathbb{N}.\mathsf{Fin}\ n) \to A$. Here $\mathsf{Fin}\ n$ is the type with exactly $n$ values. You could view it as $\mathsf{Fin}\ n \cong \{m\in\mathbb{N}\mid m < n\}$. With this perspective $$\Sigma n\!:\!\mathbb{N}.\mathsf{Fin}\ n \cong \{(n,m)\in \mathbb{N}\times\mathbb{N}\mid m < n\}$$ Indeed, if $<$ is proposition-valued, meaning $m < n$ is inhabited by at most one value, then the subset types above can be cast as dependent sums, e.g. $$\{(n,m)\in \mathbb{N}\times\mathbb{N}\mid m < n\} \cong \Sigma (n,m)\!:\!\mathbb{N}\times\mathbb{N}.m < n$$ |
Proving that $\sum_{k=0}^{n}\frac{(-1)^k}{{n\choose k}}=[1+(-1)^n] \frac{n+1}{n+2}.$ | A telescoping approach:
We obtain
\begin{align*}
\color{blue}{\sum_{k=0}^n\frac{(-1)^k}{\binom{n}{k}}}
&=\sum_{k=0}^n(-1)^k\frac{k!(n-k)!}{n!}\\
&=\sum_{k=0}^n(-1)^k\frac{k!(n-k)!}{n!}\cdot\frac{(n-(k-1))+(k+1)}{n+2}\\
&=\frac{n+1}{n+2}\sum_{k=0}^n(-1)^k\frac{k!(n-(k-1))!+(k+1)!(n-k)!}{(n+1)!}\\
&=\frac{n+1}{n+2}\sum_{k=0}^n\left((-1)^k\frac{1}{\binom{n+1}{k}}-(-1)^{k+1}\frac{1}{\binom{n+1}{k+1}}\right)\\
&\,\,\color{blue}{=\frac{n+1}{n+2}\left(1+(-1)^n\right)}
\end{align*}
and the claim follows. |
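A quick exact check of the identity above (my addition; uses Python's `fractions` to avoid floating point):
```python
# Numeric check of sum_k (-1)^k / C(n,k) = (1 + (-1)^n)(n+1)/(n+2).
from math import comb
from fractions import Fraction

for n in range(1, 12):
    lhs = sum(Fraction((-1)**k, comb(n, k)) for k in range(n + 1))
    rhs = Fraction((1 + (-1)**n) * (n + 1), n + 2)
    assert lhs == rhs
print("identity verified for n = 1..11")
```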
Is there any group in which number of the normal subgroups is equal to number of the conjugacy classes? | With $G=S_3$, there are three normal subgroups (namely $\{e\}$, $A_3$, $S_3$) and three conjugacy classes (namely $\{e\}$, $\{(12),(13),(23)\}$, and $\{(123),(132)\}$). |
Find the integer $x$ such $x^6+x^5+x^4+x^3+x^2+x+1=y^3$ | Here is an elementary approach, albeit tedious if one computes everything by hand. Our strategy is to find perfect cubes that are close enough to RHS (as Jack D'Aurizio mentioned in the comment).
Denoting $~f(x)=27(x^6+x^5+x^4+x^3+x^2+x+1)$, we have
Claim 1: If $x\ge3$, then
$$(3x^2+x)^3<f(x)<(3x^2+x+1)^3.$$
Claim 2: If $x\ge2$, then
$$(3x^2-x)^3<f(-x)<(3x^2-x+1)^3.$$
Back to the original equation, we can easily check the cases $x=-1,~0,~1,~2$; when $x\ge3$, by claim 1, we see $(3y)^3=f(x)$ lies strictly between two consecutive cubes, therefore there are no integer solutions; similarly claim 2 covers the case $x\le-2$. We are done.
Proof of claim 1:
$$f(x)-(3x^2+x)^3=27+27x+27x^2+26x^3+18x^4>0,$$
$$(3x^2+x+1)^3-f(x)=94+6(x-2)(7x+10)+x^2(x-3)(9x+19)>0.$$
Proof of claim 2:
$$f(-x)-(3x^2-x)^3=19+21(x-1)+(x-1)^2(18x^2+10x+29)>0;$$
$$(3x^2-x+1)^3-f(-x)=(x-1)(9x^3+17x^2+2x+26)>0.$$
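A brute-force check of both claims on a finite range (my addition; the bound $2000$ is arbitrary):
```python
# Verify Claims 1 and 2 directly, with f(x) = 27(x^6 + ... + x + 1).
def f(x):
    return 27 * sum(x**k for k in range(7))

for x in range(3, 2000):                       # Claim 1
    assert (3*x*x + x)**3 < f(x) < (3*x*x + x + 1)**3
for x in range(2, 2000):                       # Claim 2
    assert (3*x*x - x)**3 < f(-x) < (3*x*x - x + 1)**3
print("both claims hold on the tested ranges")
```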
(1) In hindsight, there is no reason to use elementary methods only. Note the two claims above are essentially the same:
$$(3x^2+x)^3<f(x)<(3x^2+x+1)^3$$
for $x\le-2$ or $x\ge3$.
We could simply take the derivative and prove the differences are monotonic on either range...
(2) A bit heavier but more general approach is by Runge's method, e.g., see this paper by Sankaranarayanan and Saradha. |
Proving that a continuous bijective map is a homeomorphism | Well, $\phi$ is a so-called "proper" map, i.e. continuous with inverse images of compact sets compact. It is well-known that if the codomain $Y$ is compactly generated (which locally compact Hausdorff spaces are, as well as first countable spaces, e.g.; also known as a $k$-space in other texts, like Engelking) then $\phi$ is a closed map (i.e. perfect). I believe Bourbaki's Topologie Générale will cover this general fact.
And a closed continuous bijection is a homeomorphism.
A direct proof for closedness of $\phi$ (which implies that the inverse is continuous) in this case: suppose $C \subseteq X$ is closed. Suppose $y \in \overline{\phi[C]}$ and let $K$ be a compact neighbourhood of $y$ in $Y$. Then $\phi^{-1}[K] \cap C$ is compact in $X$ and hence so is $\phi[\phi^{-1}[K] \cap C]= K \cap \phi[C]$ and by Hausdorffness of $Y$, $\phi[C] \cap K$ is closed in $K$ which implies $y \in \phi[C]$ and so $\phi[C]$ is closed. |
Integration problem (no 9-14) | Both $f(x)=\sin(x)$ and $f(x)=x^3$ are odd functions, so they satisfy the following equation: $f(-x)=-f(x)$.
$$\sin((-x)^3)=\sin(-x^3)=-\sin(x^3)$$
So $\sin(x^3)$ is also an odd function.
The integral of an odd function from $-a$ to $a$ is $0$:
$$\int\limits_{-a}^{a} f(x) \mathrm{d}x=0$$
So:
$$\int\limits_{-1}^{1} \sin(x^3) \mathrm{d}x=0$$
A little "proof":
$$I=\int\limits_{0}^{a} f(x) \mathrm{d}x$$
Substitute $u=-x$, $\mathrm{d}u=-\mathrm{d}x$:
$$I=\int\limits_{0}^{-a} f(-u) (-\mathrm{d}u)$$
We can change the order of limits:
$$I=-\int\limits_{-a}^{0} f(-u) (-\mathrm{d}u)$$
$$I=\int\limits_{-a}^{0} f(-u) \mathrm{d}u$$
But because $f(-x)=-f(x)$:
$$I=-\int\limits_{-a}^{0} f(u) \mathrm{d}u$$
Replacing $u$ with $x$, and multiplying by $-1$:
$$\int\limits_{-a}^{0} f(x) \mathrm{d}x=-I$$
So we have two equations for $I$:
$$\int\limits_{-a}^{0} f(x) \mathrm{d}x=-I$$
$$\int\limits_{0}^{a} f(x) \mathrm{d}x=I$$
Adding the two together:
$$\int\limits_{-a}^{0} f(x) \mathrm{d}x+\int\limits_{0}^{a} f(x) \mathrm{d}x=I-I$$
$$\int\limits_{-a}^{a} f(x) \mathrm{d}x=0$$
Note: This method only works if both of the integrals exist, so $\int\limits_{-\infty}^{\infty} \sin(x) \mathrm{d}x \neq 0$, because neither the $\int\limits_{-\infty}^{0} \sin(x) \mathrm{d}x$ nor the $\int\limits_{0}^{\infty} \sin(x) \mathrm{d}x$ exist. |
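A numeric confirmation of the result (my addition; assumes `scipy` is available):
```python
# The integrand is odd and the interval is symmetric, so the value is 0
# up to quadrature error.
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: np.sin(x**3), -1, 1)
print(val)   # on the order of 1e-17
```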
Setting up null and alternate hypotheses | Usually the alternative hypothesis is what we wish to show. In this case we wish to show that the manager's claim is incorrect. Letting $p$ denote the proportion of customers who prefer human cashiers, the manager being incorrect would mean $p<0.5$
We have
$$H_0: p\geq0.5$$
$$H_a: p\lt0.5$$
Since $\hat{p}=\frac{47}{75}\approx0.627$ we certainly don't have evidence to reject the null hypothesis.
The more interesting question would be to test whether the manager's claim is correct. However, the wording of the problem indicates to me that these are the hypotheses.
Note: You cannot accept a null hypothesis. You can only fail to reject it or reject it. |
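For completeness, a minimal sketch of the corresponding one-proportion $z$-test (my addition; the normal approximation is an assumption, not part of the original answer):
```python
# One-proportion z-test of H0: p >= 0.5 against Ha: p < 0.5.
from math import sqrt
from statistics import NormalDist

n, successes = 75, 47
p_hat = successes / n                      # about 0.627
z = (p_hat - 0.5) / sqrt(0.5 * 0.5 / n)    # standardize under p = 0.5
p_value = NormalDist().cdf(z)              # left tail, since Ha: p < 0.5
print(z, p_value)   # z is about +2.19, p-value about 0.99: do not reject H0
```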
Find $\int_{- 1}^{ 1} xf(x)dx$ if $\ g(t) =\int_{- 1}^{ 1} e^{tx} f(x)dx$, for every $t \in \mathbb{R}$ | Since $x$ is a variable you can't take $t=\ln(x)/x$. On the other hand, note that
$$g'(t)=\int_{- 1}^{ 1} xe^{tx} f(x)dx$$
which implies that $\int_{- 1}^{ 1} x f(x)dx=g'(0)$. |
pumping lemma question | Instead of the pumping lemma, it seems (as almost always) to be much easier to use the Myhill-Nerode theorem:
$\mathtt a^n$ and $\mathtt a^m$ are distinguishable for all $n>m$ because $\mathtt a^n\mathtt b^n$ is in the language but $\mathtt a^m\mathtt b^n$ isn't. |
Axiomatic set theory proof involving powersets | Assume $Y \in X$. Then by the definition of $Y$, the condition $Y \in Y$ would hold iff $Y \not \in Y$, a contradiction. Hence $Y \notin X$. |
Supremum and norm property. | We have
$$
\begin{align}
\sup_{x\in\mathbb{R}^n}
\Big(
\left\langle
u,x
\right\rangle
-\dfrac{1}{2}\|x\|^2
\Big)
&=
\sup_{x\in\mathbb{R}^n}
\Big(
\frac{1}{2}\|u\|^2+
\frac{1}{2}\|x\|^2-
\frac{1}{2}\|u-x\|^2
-\frac{1}{2}\|x\|^2
\Big)
\\
&=
\sup_{x\in\mathbb{R}^n}
\Big(
\frac{1}{2}\|u\|^2-
\frac{1}{2}\|x-u\|^2
\Big)\\
&=\frac{1}{2}\|u\|^2.
\end{align}
$$
However,
since $\lim_{t\to+\infty}t\|u\|=+\infty$ if $u\neq 0$,
the last expression in your question gives $+\infty$
when $u\neq 0$, showing that the equality in your question is...incorrect. |
A question about colimits in enriched categories | Yes. There's no guarantee of cocompleteness just from that for the underlying ordinary category. But if you assume that $\mathcal{C}$ is cotensored over $\mathcal{V}$, then the existence of colimits in the underlying category implies the existence of conical colimits in $\mathcal{C}$, and if $\mathcal{C}$ is tensored, every colimit can be written in terms of conical colimits and tensors, so that $\mathcal C$ is cocomplete. This result appears near the end of Chapter 3 of Kelly's enriched category theory monograph. |
Correlation coefficient between two binomials | HINT: $E[XY] \neq 0$
Let $X_i$ be the number of $1$s rolled by the $i$th die; so $X_i = 1$ or $0$, and $X = \sum_{i=1}^{30} X_i$. Similarly let $Y_i$ be the number of $6$s rolled by the $i$th die.
As you already know, $E[X] = E[\sum X_i] = \sum E[X_i] = 30 \times {1 \over 6} = 5$ and same for $E[Y]$.
Now what is $E[XY]$? As you pointed out in the comments, it is true that $X_i Y_i = 0$ since one die cannot show both a $1$ and a $6$. However, $XY \neq \sum_{i=1}^{30} X_i Y_i$!! And they are certainly dependent (e.g. $X=29 \implies Y \le 1$).
To calculate this, just expand:
$$XY = (\sum X_i) (\sum Y_i) = \sum_i X_i Y_i + \sum_{i \neq j} X_i Y_j$$
This has $30 \times 30 = 900$ terms in it, but you can calculate the expectation of every term easily. Can you finish from here? |
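A simulation sketch to check your final answer against (my addition; `numpy` and the sample size are arbitrary choices):
```python
# Estimate corr(X, Y) for 30 dice by simulation; compare the estimate
# with the exact value you get by finishing the expectation computation.
import numpy as np

rng = np.random.default_rng(0)
rolls = rng.integers(1, 7, size=(200_000, 30))   # 200k experiments of 30 dice
X = (rolls == 1).sum(axis=1)
Y = (rolls == 6).sum(axis=1)
print(np.corrcoef(X, Y)[0, 1])
```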
What is $X^{\omega}$ where $X$ is a set? | If $X$ and $A$ are sets then $X^A$ denotes the set of all functions $A\rightarrow X$. Here $A=\omega=\{0,1,2,\dots\}$. |
If $f_n \rightarrow f$ uniformly and $f(z) =0$ only at $z_0$, then what can we say about $g(z_0)$, limiting function of $g_n(z) = f_n(z)^{1/n}$? | Suppose $\{a_n\}, \{b_n\}$ are sequences of real numbers with $a_n \to a$ and $b_n \to 0$ as $n \to \infty$. Further suppose that all $a_n > 0$ and $a > 0$. Then
$$\lim_n a_n^{b_n} = 1$$
Because $a > 0$, there exists $N$ such that for $n \ge N, |a_n - a| < a/2$. Let $$0 < m < \min\left\{\frac a2, \frac 2{3a}, a_n, \frac 1{a_n}\mid n < N\right\}$$ and $$M > \max\left\{\frac {3a}2, \frac 2a, a_n, \frac 1{a_n}\mid n < N\right\}$$
Then we have that $m < a_n < M$ and also $m < 1/a_n < M$ for all $n$. Therefore $$m^{|b_n|} < a_n^{b_n} < M^{|b_n|}$$
But because $b_n \to 0$, so does $|b_n|$ and therefore by continuity of the exponential function, $$\lim_n m^{|b_n|} = \lim_n M^{|b_n|} = 1$$
By the squeeze theorem, $a_n^{b_n} \to 1$ also.
The condition $a_n > 0$ can be dropped, provided $b_n \ge 0$ or one doesn't mind a finite number of undefined values in the sequence $a_n^{b_n}$ (finite because $a > 0$ means that eventually $a_n > 0$). However, the condition $a > 0$ is required.
So your conditions about $f_n$ converging uniformly and compactness of the domain - or even the domain being in $\Bbb C$ do not matter. By simple pointwise convergence, if $g_n = f_n^{b_n}$, then $g(z) = 1$ for all $z$ with $f(z) > 0$. |
What if 1st pivot is missing but the 2nd one is there? | Hint:
The column space is defined as the span of the column vectors, so the column space is:
$$\mathrm{span}\left(\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 2 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 3 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 4 \\ 2 \\ 0 \end{bmatrix}\right)$$
What does having a zero vector included in the list do to that span? If you remove the zero vector, will the span remain the same? (Hint: Given any scalar $a$, what is $a \cdot \begin{bmatrix} 0 \\ 0 \\0 \end{bmatrix}$?)
Which elements can be removed to form a basis of the span? (another way to ask this question: which vectors in that spanning list can you write as linear combinations of other vectors).
By removing the linearly dependent vectors, you will have a basis of the column space.
I claim that this basis will be of length 2 and thus the column space has dimension 2. Do you see why that is? |
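A one-line verification of the claimed dimension (my addition, assuming `numpy`):
```python
# The matrix with the five given columns has rank 2.
import numpy as np

A = np.array([[0, 1, 2, 3, 4],
              [0, 0, 0, 1, 2],
              [0, 0, 0, 0, 0]])
print(np.linalg.matrix_rank(A))   # 2
```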
How is $\lbrace a_1, a_2, ..., a_n : a_i \in \Bbb Z_2\rbrace$ a group? | Addition is mod $2$ in each component. So
$$
(1,\ldots, 1)+(1,\ldots,1)=(0,\ldots,0)
$$
and the set is closed under component-wise addition mod 2. |
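In code, component-wise addition mod $2$ looks like this (my addition; tuples of $0$s and $1$s stand in for group elements):
```python
# Component-wise addition mod 2; every element is its own inverse.
a = (1, 0, 1, 1)
b = (1, 1, 0, 1)
print(tuple((x + y) % 2 for x, y in zip(a, b)))   # (0, 1, 1, 0)
print(tuple((x + x) % 2 for x in a))              # (0, 0, 0, 0), the identity
```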
A tricky limit involving exponential integrals | Write $\alpha = \frac{2\pi}{\log 2}$ for simplicity. Then we are interested in the limit of the following quantity
\begin{align*}
I_n
:= \frac{\operatorname{Ei}_{1 - i\alpha} (2^{-n}) - \operatorname{Ei}_{1 + i\alpha} (2^{-n})}{i\log 2}.
\end{align*}
Plugging in the definition of $\operatorname{Ei}_s$ and integrating by parts,
\begin{align*}
I_n
&= \frac{1}{i\log 2} \int_{1}^{\infty} \frac{t^{i\alpha} - t^{-i\alpha}}{t}e^{-2^{-n}t} \, \mathrm{d}t
= \frac{2}{\log 2} \int_{1}^{\infty} \frac{\sin\left( \alpha \log t \right)}{t}e^{-2^{-n}t} \, \mathrm{d}t \\
&\hspace{1em}= \frac{e^{-2^{-n}}}{\pi} - \frac{1}{\pi} \int_{1}^{\infty} \cos\left( \alpha \log t \right) 2^{-n} e^{-2^{-n}t} \, \mathrm{d}t.
\end{align*}
Now apply the substitution $u = 2^{-n}t$ and notice that
$$\cos(\alpha \log t) = \cos(\alpha\log u + \alpha n\log 2) = \cos(\alpha \log u) $$
since $\alpha \log 2 = 2\pi$. Then
\begin{align*}
\lim_{n\to\infty} I_n
&= \lim_{n\to\infty} \left[ \frac{e^{-2^{-n}}}{\pi} - \frac{1}{\pi} \int_{2^{-n}}^{\infty} \cos\left( \alpha \log u \right) e^{-u} \, \mathrm{d}u \right] \\
&= \frac{1}{\pi} - \frac{1}{\pi} \int_{0}^{\infty} \cos\left( \alpha \log u \right) e^{-u} \, \mathrm{d}u
\end{align*}
By recalling that the gamma function is defined as $\Gamma(s) = \int_{0}^{\infty} u^{s-1}e^{-u} \, \mathrm{d}u$, this reduces to
\begin{align*}
\lim_{n\to\infty} I_n
&= \frac{1}{\pi} - \frac{\Gamma(1+i\alpha) + \Gamma(1-i\alpha)}{2\pi}
= \frac{1}{\pi}\operatorname{Re}\left[ 1 - \Gamma(1+i\alpha) \right].
\end{align*}
But I am skeptical of this having an elementary closed form. |
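A numeric check of the gamma-function identity used in the last step (my addition; `mpmath` and the substitution $u = e^v$, which tames the oscillation near $u=0$, are my choices):
```python
# Check: integral_0^inf cos(alpha*log u) e^{-u} du = Re Gamma(1 + i*alpha).
import mpmath as mp

mp.mp.dps = 20
alpha = 2 * mp.pi / mp.log(2)
lhs = mp.quad(lambda v: mp.cos(alpha * v) * mp.exp(v - mp.exp(v)),
              [-mp.inf, 0, mp.inf])
rhs = mp.re(mp.gamma(1 + 1j * alpha))
print(lhs, rhs)   # the two values agree
```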
Isomorphism with Directional Graphs | In the original graph, the vertices have the following degrees:
$1$ — $2$ outs, $1$ in.
$2$ — $1$ out, $2$ ins.
$3$ — $2$ outs, $1$ in.
$4$ — $1$ out, $2$ ins.
Let's give names to the vertices of the second graph (say $A$, $B$, $C$, $D$; the labeled picture is omitted here).
Here are their degrees:
$A$ — $1$ out, $2$ ins.
$B$ — $2$ outs, $1$ in.
$C$ — $2$ outs, $1$ in.
$D$ — $1$ out, $2$ ins.
Let's try to assign numbers to the letters. For vertex $A$, we have only two options: either $A$ is $2$ or $4$.
Suppose $A$ is $2$. Then $D$ can only be $4$ (can you see why?) and $C$ is $3$. Then $B$ is $1$, which it absolutely can be, since it has the same degree and is adjacent to the same vertices. By showing the isomorphism, we effectively prove its existence.
So the error was here:
They retain the same shape, but the direction of the edge from vertex1 to vertex4 changes between the two, so it cant be isomorphic
On the second graph, there are no named vertices, so you can't just say that the direction between some named vertices changed. You need to have a more rigorous method of proving that, and often the one I've described above will suffice. |
Can the exact number of twin primes $\leq n$ be proved using a "twin-prime zeta function"? | The main problem is with the analytic continuation of the twin prime zeta function. No one knows how to do that!
Can you please elaborate? Perhaps give an expression for the twin prime zeta function and show why this is difficult?
At present the best results about the occurrence of twin primes are obtained by sieve methods, which are more combinatorial. Usually sieve methods do not exploit analytic continuation as occurs when working with the zeta function, but it is always interesting to experiment with different possible approaches. I try to say more in what follows.
Your question concerning the possible discovery of an Euler product over twin primes is a natural one. But as also pointed out in the discussion, there are very significant problems that would need to be overcome. A fairly elementary explanation of the difficulties might go something like this---in the half plane$$\{s = \sigma + it : 1 < \sigma\}\tag*{(1)}$$we have the familiar Euler product identity$$\prod_p (1 - p^{-s})^{-1} = \sum_{n = 1}^\infty n^{-s},\tag*{(2)}$$where here $p$ always denotes a prime number. When proving the identity $(2)$ one makes use of the fundamental theorem of arithmetic---each positive integer has a unique representation as a product of prime numbers, where "unique" has a certain technical meaning that I am sure that the reader understands. As is well known, if $K$ is a compact subset of the half plane $(1)$, then both the product on the left of $(2)$, and the sum on the right of $(2)$, converge absolutely and uniformly on $K$. This easily leads to the conclusion that both sides of $(2)$ define the same analytic function of $s$ in the half plane $(1)$. The really striking feature of the identity $(2)$ is that the prime numbers index the product on the left side, but the positive integers index the sum on the right side.
If we wrote $\zeta(s)$ for the function defined in the half plane $(1)$ by $(2)$, then it appears that we could investigate the properties of $\zeta(s)$ using the representation on the right side of $(2)$, and so learn all about $\zeta(s)$ without needing to know anything about prime numbers. Then we could use our knowledge of $\zeta(s)$ to discover results about prime numbers. This seems to be an excellent plan for research, but so far the plan has been only partially successful. For example, using the representation for $\zeta(s)$ as the sum---the Dirichlet series---on the right of $(2)$, it is possible to discover the functional equation for $\zeta(s)$, and to also discover the analytic continuation of $\zeta(s)$ to an analytic function on $\mathbb{C}$ except for the simple pole at $s = 1$. Several ways of establishing the functional equation were indicated in Riemann's only paper on the zeta function, which was published in 1859. Riemann also made it clear that when trying to establish results about prime numbers, a crucial role would be played by the zeros of the function $\zeta(s)$. Riemann produced an explicit formula for a prime counting function in which there is a sum over the zeros of the function $\zeta(s)$. And of course he conjectured that the nontrivial zeros would have real part equal to $1/2$.
As the theory of the zeta function developed, it became clear that to investigate the nontrivial zeros of the zeta function it would be necessary to exploit the Euler product representation of the left of $(2)$. I assume that the reader is familiar with the proof that $\zeta(1 + it) \neq 0$ using the elementary inequality$$0 \le 2(1 + \cos \theta)^2 = 3 + 4\cos\theta + \cos2\theta.$$You will note that the proof makes use of the Euler product representation on the left of $(2)$. Today, the best zero-free regions known for the zeta function all use the Euler product in a nontrivial way. Thus the plan outlined above, to learn all about $\zeta(s)$ by using the Dirichlet series on the right side of $(2)$, has turned out to be somewhat naive. Today it seems that both the Dirichlet series for the zeta function and the Euler product for the zeta function must be exploited.
In your question you proposed a (roughly) analogous approach to studying the distribution of twin primes. Here is a slight variant of your approach. First define$$T = \{p : p \text{ and }p - 2\text{ are both prime numbers}\} = \{5, 7, 13, 19, \ldots\}.$$Then in the half plane $(1)$ we could define a corresponding Euler product. A simple calculation shows that$$\prod_{p \in T} (1 - p^{-s})^{-1} = \prod_{p \in T} (1 + p^{-s} + p^{-2s} + p^{-3s} + \ldots) = \sum_{n = 1}^\infty b(n)n^{-s},\tag*{(3)}$$where the coefficients in the Dirichlet series on the right of $(3)$ are given by$$b(n) = \begin{cases} 1 & \text{if }n = 1 \\ 1 & \text{if }n = p_1^{e_1}p_2^{e_2}\ldots p_L^{e_L},\text{ where }\{p_1, \ldots, p_L\} \subseteq T \\ 0 & \text{if }p \mid n,\text{ where }p\text{ is prime and }p \notin T.\end{cases}\tag*{(4)}$$Again the elementary manipulations of the product and series in $(3)$ can be justified by absolute and uniform convergence of the relevant partial products and partial sums on compact subsets of the half plane $(1)$. As with the zeta function, we can conclude that the expressions on both sides of $(3)$ define the same analytic function in the right half plane $(1)$. Write $\zeta_T(s)$ for the function defined in the right half plane $(1)$ by both the left and right side of $(3)$. Now, however, we encounter a formidable difficulty. The product on the left of $(3)$ which defines $\zeta_T(s)$ is indexed by the mysterious twin primes. And the sum on the right of $(3)$ which also defines $\zeta_T(s)$ contains the equally mysterious function $b(n)$. Thus neither side of $(3)$ can be investigated in a simple way. Alternatively, if we define$$U = \{n : 1 \le n \text{ and }b(n) = 1\},$$then unlike the situation with the primes and the positive integers, both $T$ and $U$ are mysterious subsets of the positive integers.
A basic unsolved problem is to decide if $T$ is a finite set or an infinite set. Plainly we would not need to use the Riemann zeta function to decide if the set of (ordinary) prime numbers is finite or infinite. So let us consider the possibility that $\zeta_T(s)$, as defined by $(3)$, might shed some light on the question---is $T$ a finite set or an infinite set? Let us start by assuming that $T$ is a finite set. Under this assumption the Euler product on the left of $(3)$ is finite, and so it defines an analytic function at every point $s$ in $\mathbb{C}$ except those points $s$ that satisfy $(1 - p^{-s}) = 0$ for some prime $p$ in the finite set $T$. It is easy to see that if $p$ is a prime number in $T$, then$$\{s \in \mathbb{C} : (1 - p^{-s}) = 0\} = \left\{ {{2\pi i m}\over{\log p}} : m \in \mathbb{Z}\right\}.\tag*{(5)}$$Also, if $p_1$ and $p_2$ are distinct prime numbers in $T$, and $m_1$ and $m_2$ are integers, we may ask if$${{2\pi im_1}\over{\log p_1}} = {{2\pi im_2}\over{\log p_2}}.\tag*{$(6)$}$$But $(6)$ implies that $p_1^{m_2} = p_2^{m_1}$, and by the fundamental theorem of arithmetic this happens if and only if $m_1 = m_2 = 0$. Thus we conclude that $\zeta_T(s)$ is analytic at all points of $\mathbb{C}$ except at each point$$s = {{2\pi im}\over{\log p}}, \text{ where }p \in T,\text{ and }m \in \mathbb{Z}, \text{ and }m \neq 0,$$where it has a simple pole---that is, a pole of order $1$---and at $s = 0$, where $\zeta_T(s)$ has a pole of order $|T|$, with $|T|$ the number of primes in $T$. To summarize, if $T$ is a finite set we know a great deal about the analytic function $\zeta_T(s)$. We know that $\zeta_T(s)$ has countably many poles, each pole occurs on the imaginary axis, there is a pole of order $|T|$ at $s = 0$, and all the remaining poles are simple poles at the nonzero points in the sets $(5)$. We also know that $\zeta_T(s)$ never takes the value $0$ in $\mathbb{C}$. This follows from the observation that$${1\over{\zeta_T(s)}} = \prod_{p \in T} ( 1 - p^{-s})\tag*{(7)}$$is analytic everywhere in $\mathbb{C}$. If $\zeta_T(s)$ had a zero at $s = \alpha$, then its reciprocal $(7)$ would have a pole at $s = \alpha$.
We have drawn these conclusions from the assumption that $T$ is a finite set, and this may well be the actual state of affairs. However, if we want to prove that $T$ is an infinite set, then we might try to derive a contradiction to some fact we have derived about $\zeta_T(s)$ under the assumption that $T$ is finite. We might try to derive a contradiction by exploiting the fact that we have another representation for $\zeta_T(s)$ given by the Dirichlet series$$\zeta_T(s) = \sum_{n = 1}^\infty b(n)n^{-s}.\tag*{$(8)$}$$At present the representation $(8)$ has not been useful because it contains the mysterious coefficients $b(n)$. The coefficients $b(n)$ encode twin prime information in a manner that differs significantly from the way that $T$ encodes twin prime information. Thus we can say quite a bit about the function $\zeta_T(s)$ if $T$ is finite, but the basic identity $(3)$ does not really supply us with a new tool, or with the right tool, to go further.
Update. Let me take another stab at the question. The point that the Euler product converges at $s = 1$ is quite good. One way of responding to the question at hand is simply that the Dirichlet series is much harder to understand. We want to define$$F(s) = \prod_{p \text{ twin prime}} \left(1 - \frac{1}{p^s} \right)^{-1} = \sum_n \frac{b(n)}{n^s}$$for some arithmetic functions $b(n)$---essentially recording whether $n$ is a product of twin primes or not. Here, $b(n)$ is not nearly as nice as the function $1$ that appears in the zeta function. In particular, one can say the following---in increasing order of sophistication.
The zeta function having a pole at $1$ is essentially related to the fact that $\sum_n 1/n$ diverges---this is relatively easy to prove---and implies that there are infinitely many primes. Understanding $\sum_n b(n)/n$ is much harder, and---as pointed out---is dependent on sieve theoretic bounds. Showing that $\sum_n b(n)/n^a$ diverges for some $0<a<1$ shows that there are infinitely many twin primes, and this is the original---hard---problem. In contrast, showing $\sum_n 1/n^a$ diverges for $a<1$ is "trivial".
One cannot apply partial summation to the Dirichlet series of $F(s)$ to get analytic continuation, unlike the zeta function, where $\sum_{n\le x} 1 = [x]$ is easily understood.
The coefficients for zeta---and other $L$-functions---arise from certain representations---this means that $\zeta(s)$ can be expressed as the Mellin transform of some automorphic form, and thereby can be continued everywhere. Moreover, the automorphy implies the functional equation for $\zeta(s)$. In this context, this is equivalent to Poisson summation and Poisson summation---and other Fourier analytic methods---appear to not help with the coefficients $b(n)$.
On a more structural level, it is impossible for $F(s)$ to be analytically continued with a standard functional equation of degree $1$---that would imply $F(s)$ belongs to the Selberg class, and all degree $1$ Selberg class have been classified.
The natural conjecture would be that $F(s)$ cannot be meromorphically continued to the whole complex plane---and I think this can probably be proven assuming certain conjectures. The overall answer is that proving certain fundamental properties of $\zeta(s)$ does not depend on any knowledge about primes, while even basic facts about $F(s)$ necessitate knowledge about twin primes---so that understanding $F(s)$ goes back to the original hard problem of understanding twin primes. A more exact count for twin primes was mentioned. In this context, perhaps it's useful to take a look at the following.
https://en.wikipedia.org/wiki/Twin_prime#First_Hardy%E2%80%93Littlewood_conjecture
Or the more general Hardy-Littlewood conjectures which do state an exact asymptotic for these things.
If you are really energetic, you can have a look at Soundararajan's Bulletin paper.
https://arxiv.org/abs/math/0605696 |
Prove that given a nonnegative integer $n$, there is a unique nonnegative integer $m$ such that $(m-1)^2 ≤ n < m^2$ | It's nicer to prove for $$m^2 \le n < (m+1)^2$$ which is obviously equivalent.
So take the square root: $$m \le \sqrt{n} < m+1.$$
From this you can see that $m=\lfloor{\sqrt{n}}\rfloor$ is the unique such $m$. |
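A quick exhaustive check of existence and uniqueness in the reindexed form (my addition; `math.isqrt` computes $\lfloor\sqrt{n}\rfloor$ exactly):
```python
# m = isqrt(n) is the unique integer with m^2 <= n < (m+1)^2.
from math import isqrt

for n in range(100_000):
    m = isqrt(n)
    assert m * m <= n < (m + 1) * (m + 1)
print("verified for all n < 100000")
```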
Fundamental rule for probability calculus for A, B | C | $P(A, B| C)$ means $P((A\land B)|C)$, not (as you write) $P(A\land (B|C))$. The second expression, actually, makes no sense. $B|C$ is not an event. |
Color an infinite equilateral grid with seven colors. Can it be possible to prove using Pigeonhole Principle that a monochromatic triangle exists? | There's no quick pigeonhole solution.
We can solve the problem using the Ajtai-Szemerédi corners theorem. The corners theorem is not about equilateral triangles in a triangular grid, but about right triangles of the form $\{(x,y), (x,y+h), (x+h, y)\}$ in a rectangular grid; this is an insignificant modification, since we can just skew the rectangular grid and turn these right triangles into the equilateral triangles we want. Or to put it differently, if we number the nodes in the grid as in the diagram below, then the triples of the form $\{(x,y), (x,y+h), (x+h, y)\}$ are precisely the (upright) equilateral triangles.
The corners theorem is also a density result rather than a coloring result: it says that for any $\epsilon>0$, if we pick $\epsilon N^2$ out of $N^2$ points in a $N\times N$ grid, then we can find the triangle among the points we picked, provided $N$ is sufficiently large. But this can be used to get a coloring result: just take $\epsilon = \frac17$, pick a sufficiently large finite subgrid of your infinite grid, and choose the most popular color used on points in that subgrid.
The corners theorem is a fairly high-powered result in Ramsey theory. But I claim that no simple result will do the trick, because the problem is at least as hard as finding monochromatic $3$-term arithmetic progressions, and you can see the arguments in the Wikipedia article on van der Waerden's theorem to see how tricky the proof for those is even for $2$ or $3$ colors.
Suppose you have a $7$-coloring of the integers $1,2,\dots,n$, for some large $n$, with no monochromatic $3$-term arithmetic progression. Then by giving a point $(x,y)$ in the grid the color of the integer $2x+y$, we can color some large portion of the triangular grid; that portion of the grid will not have any monochromatic triangles. An upright triangle has coordinates $\{(x,y), (x,y+h), (x+h,y)\}$ which correspond to the integers $2x+y, 2x+y+h, 2x+y+2h$. An inverted triangle has coordinates $(x,y+h), (x+h,y), (x+h,y+h)$ which correspond to the integers $2x+y+h, 2x+y+2h, 2x+y+3h$. Either way, they form a $3$-term arithmetic progression.
So if you found a quick pigeonhole solution here, you'd have a quick pigeonhole solution to the length-$3$ case of van der Waerden's theorem, which would be a pretty big deal! |
Textbook recommendations for self-studying high school math? | If you master pre-algebra, then you can figure out almost any other branch of mathematics using the appropriate study material. Geometric formulas will be second nature to you. Trigonometry and Calculus are not required to graduate from every high school. If you are strong in Algebra, then your college placements scores will exempt you from college preparatory courses.
College preparatory courses are great if you want to master the fundamentals. I suggest you take all the college preparatory courses in your field if you are going to specialize. For Mathematics, you should take discrete mathematics.
Because of the way the brain works, you will gain better dominion of subject matter by studying for a few hours everyday rather than cramming. Yet, you seem to have found out how some high school experiences are less adequate than independent studies.
You might want to go for the General Education Development test, then transfer to a college or university. A community college offers more advantages for students. You can get an associate in arts degree and transfer to a university from there to get a four-year college degree and postgraduate degrees. (You will need to pass college algebra and another college-level mathematics course to get your associate in arts degree.)
Have you ever skimmed or read from a GED preparation workbook? You should go to a college library and check one out. It's similar to the SAT workbooks. These books will give you detailed explanations. Yet, what do you mean by progressive practice problems? The word progressive can have many meanings; do you mean updated versions? That's up to the student to send in suggestions and report errors to the publishing company.
Remember, it's what you learn that counts. Most things we believe to be requisites are psychological exaggerations. Remember, being a student is a profession. |
X*Z has same distribution of Y*Z? | Clearly they can have the same distribution, for example if all three are independent. But they do not have to have the same distribution.
For example, let $X=1$ or $-1$ each with probability $\frac12$, $Y=-X$ and $Z=X$. All three have the same distribution. Then $XZ=1$ with probability $1$ while $YZ=-1$ with probability $1$, so these are different. |
In an abelian category, an object $G$ is a generator iff for any nonzero object $X$, $\operatorname{Hom}(G, X) \neq 0$ | Here is a proof under the additional hypothesis that $G$ is projective; I don't know either a proof or a counterexample without this assumption. (Edit: In the comments Jeremy Rickard gives the example of $\mathbb{Z}/2\mathbb{Z}$ in the category of $\mathbb{Z}/4\mathbb{Z}$-modules).
Let $G$ be a weak generator, meaning that if $\text{Hom}(G, X) = 0$ then $X = 0$ (so $\text{Hom}(G, -)$ reflects zero objects), which is also projective. We want to show that $G$ is a generator, meaning that $\text{Hom}(G, -)$ is faithful, or equivalently that if $f : X \to Y$ is a morphism such that $\text{Hom}(G, f) = 0$ then $f = 0$ (so $\text{Hom}(G, -)$ reflects zero morphisms).
We have that $f = 0$ iff $\text{im}(f) = 0$. Since $G$ is projective, $\text{Hom}(G, -)$ is exact, so preserves kernels and cokernels, and hence preserves images. This means that if $f : X \to Y$ is a morphism, then
$$\text{Hom}(G, \text{im}(f)) \cong \text{im}(\text{Hom}(G, f)).$$
So we have equivalences
$$\text{Hom}(G, f) = 0 \Leftrightarrow \text{im}(\text{Hom}(G, f)) = 0 \Leftrightarrow \text{Hom}(G, \text{im}(f)) = 0 \Leftrightarrow \text{im}(f) = 0 \Leftrightarrow f = 0$$
which is the desired result.
Without the hypothesis that $G$ is projective you get only that $\text{Hom}(G, -)$ preserves kernels and so reflects monomorphisms (using $\text{ker}(f)$ instead of $\text{im}(f)$ above). |
Connectedness of the set | The union of connected sets is connected if the intersection is nonempty:
Let $\mathcal{F}$ be a family of connected sets with $\bigcap \mathcal{F} \neq \emptyset$.
Suppose we have separated sets $A,B \subset X$ such that $A \cup B = \bigcup \mathcal{F}$ and let $x \in \bigcap \mathcal{F}$.
Now suppose $x \in A$ and $B \neq \emptyset$.
Then we have $y \in B$, and as $B \subset \bigcup \mathcal{F}$, there exists $F \in \mathcal{F}$ such that $y \in F$.
But also note that $x \in F$, since $x \in \bigcap \mathcal{F}$, so $x \in F \cap A$.
Now I'll prove that if $F$ is connected and $F \subset A \cup B$ with $A$ and $B$ separated, then $F \subset A$ or $F \subset B$.
Write $F = (F \cap A) \cup (F \cap B)$.
Note that $F \cap A$ and $F \cap B$ are separated, because if $z \in \overline{F \cap A} \cap (F \cap B)$, then $z \in B$ and $z \in \overline{F \cap A} \subset \overline{A}$.
Contradiction, since $z \in \overline{A} \cap B$, but $A$ and $B$, by hypothesis, are separated.
But, if $F \cap A$ and $F \cap B$ are separated and $F$ is connected, by definition $F \subset F \cap A \implies F \subset A$ or $F \subset F \cap B \implies F \subset B$.
Finally, note that $x \in F \cap A$ and $y \in F \cap B$, so we can't have $F \subset A$ or $F \subset B$. Contradiction. Then $B$ must be $\emptyset$.
Now note that the point $(-1,0) \in ([-1,1] \times \{0\}) \cap ((-\infty,-1] \times \mathbb{R})$ and the point $(1,0) \in ([-1,1] \times \{0\}) \cap ([1,\infty) \times \mathbb{R})$.
Then we only have to prove that each component is connected.
But you can see that $[-1,1] \times \{0\}$ is homeomorphic to $[-1,1]$, just consider the projection $\pi : [-1,1] \times \{0\} \to [-1,1]$. Then use the fact that $I \subset \mathbb{R}$ is connected iff $I$ is an interval.
To prove that $(-\infty,-1] \times \mathbb{R}$ and $[1,\infty) \times \mathbb{R}$ are connected, you can use that they're path-connected and path-connected implies connected.
I'll make the case $[1,\infty) \times \mathbb{R}$:
Let $(a,b),(c,d) \in [1,\infty) \times \mathbb{R}$.
Then $f : [0,1] \to [1,\infty) \times \mathbb{R}$ given by $f(t) = (1-t)(a,b) + t(c,d)$ is a continuous path connecting both points ($f$ is a sum and product of continuous functions).
We only have to show that $f([0,1]) \subset [1,\infty) \times \mathbb{R}$, i.e., that $(1-t)a + tc \geq 1$ and $(1-t)b + td \in \mathbb{R}$.
The second is obvious, then I'll just prove $(1-t)a + tc \geq 1$.
If $a \leq c$, then $ta \leq tc \implies (1-t)a + ta \leq (1-t)a + tc$.
But $(1-t)a + ta = a \geq 1$, the result follows.
If $c \leq a$, then $(1-t)c \leq (1-t)a \implies (1-t)c + tc \leq (1-t)a + tc$.
But $(1-t)c + tc = c \geq 1$. The result also follows. |
Find subgame Nash equilibrium | Your answer is correct, as can be confirmed by backward induction starting at the simultaneous move. The formulations "will choose" and "when A and B select" are somewhat misleading, since these choices will not actually occur, as the game ends after A chooses OUT. These are merely the players' strategies for these choices. |
Operations on sets: how to transform the left side to the right side? | An arbitrary element of $(X_1 \times X_2) \setminus (A_1 \times A_2)$ is a pair $(x_1,x_2) \in X_1 \times X_2$ such that $(x_1,x_2) \not \in A_1 \times A_2$. To say that $(x_1,x_2) \in A_1 \times A_2$ is precisely to say $x_1 \in A_1 \wedge x_2 \in A_2$, so negating this gives $x_1 \not \in A_1 \vee x_2 \not \in A_2$. This is ultimately where the $\cup$ comes from, as you shall soon see, since we have to split into cases:
If $x_1 \not \in A_1$, then $x_1 \in X_1 \setminus A_1$ and $x_2 \in X_2$ (regardless of whether it's in $A_2$ or not), so that $(x_1,x_2) \in (X_1 \setminus A_1) \times X_2$;
If $x_2 \not \in A_2$, then $x_2 \in X_2 \setminus A_2$ and $x_1 \in X_1$ (regardless of whether it's in $A_1$ or not), so that $(x_1,x_2) \in X_1 \times (X_2 \setminus A_2)$.
Putting this together yields $(x_1,x_2) \in ((X_1 \setminus A_1) \times X_2) \cup (X_1 \times (X_2 \setminus A_2))$, and hence
$$(X_1 \times X_2) \setminus (A_1 \times A_2) \subseteq ((X_1 \setminus A_1) \times X_2) \cup (X_1 \times (X_2 \setminus A_2))$$
Similar reasoning proves the other direction of containment, so the two sets are equal.
As a general tip, proving set equalities by double containment (as is (half-)done above) is a good thing to do if it's not immediately clear how a string of algebraic manipulations would help you to arrive at the answer. |
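A finite sanity check of the identity (my addition; the particular sets are arbitrary):
```python
# Check (X1 x X2) \ (A1 x A2) = ((X1\A1) x X2) U (X1 x (X2\A2)) on small sets.
from itertools import product

X1, X2 = {1, 2, 3}, {4, 5, 6}
A1, A2 = {1, 2}, {4, 5}

lhs = set(product(X1, X2)) - set(product(A1, A2))
rhs = set(product(X1 - A1, X2)) | set(product(X1, X2 - A2))
print(lhs == rhs)   # True
```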
How do I compute this limit: $\lim_{x \to \infty} \left( 1-\frac1{x^2}\right)^{x^2-1} = \frac1e$? | HINT:
$$\lim_{x\to\infty}\left(1-\dfrac1{x^2}\right)^{x^2-1}=\left(\lim_{x\to\infty}\left[1+\left(-\dfrac1{x^2}\right)\right]^{-x^2}\right)^{\lim_{x\to\infty}\frac{1-x^2}{x^2}}$$
Now use $\lim_{u\to\infty}\left(1+\dfrac1u\right)^u=e$ |
Fourier Transform of Measures on Hilbert, Banach, or general Topological Vector Space | The answers to many of the above questions can be found in Measure Theory (Vol. II) by Vladimir Bogachev. Section 7.12 is about measures on linear spaces and section 7.13 is about the Fourier transform of such measures.
Bogachev defines the Fourier transform of a quasi-measure $\mu$ on $Cyl(X,G)$ by $\tilde\mu(f) = \int_{\mathbb{R}} e^{it} \,\mathrm{d}f_*\mu(t)$ for $f \in G$.
Here $Cyl(X,G)$ is the $G$-cylindrical $\sigma$-algebra of the topological vector space $X$ for a set of linear functionals $G$. A quasi-measure on $Cyl(X,G)$ is an additive real function $\mu$ on $Cyl(X,G)$ such that all finite dimensional projections $P_*\mu=\mu\circ P^{-1}$ are bounded and countably additive measures. Note that if we do a change of variables we get
$$
\tilde\mu(f) = \int_{\mathbb{R}} e^{it} \,\mathrm{d}f_*\mu(t) = \int_X e^{if(x)} \,\mathrm{d}\mu(x) = \int_X e^{i\langle f, x\rangle} \,\mathrm{d}\mu(x)
$$
So the definition agrees with the question statement.
A lemma from the book answers the question of injectivity: two quasi-measures on $Cyl(X,G)$ with equal Fourier transforms coincide.
Since this formulation applies when $X$ is locally convex and $G=X^*$, we can also apply it to the case where $X=M(K)$ with the weak-* topology (in which case $X^*=C(K)$). |
Prove a linear operator is continuous wrt weak topology iff it is continuous at $0$ wrt weak topology | You don't need anything about the relationship between weak and strong continuity. It's simpler than that. This fact will be true for any linear operator $T$ between any two topological vector spaces $E,F$.
To get you started, let $x \in E$ and let $V$ be an open set containing $Tx$. (For purposes of your problem, we are taking $E$ equipped with the weak topology, so "open" here can be read as "weakly open".) We have to show there is an open set $U$ containing $x$ such that $T(U) \subset V$. Now since $F$ is a topological vector space, the set $V - Tx = \{v - Tx : v \in V\}$ is open and contains $0$. We are assuming $T$ is continuous at $0$, so... (I'll let you take it from here.) |
Solve the recurrence relation $u_{n+1}-5u_{n}+6u_{n-1}=2$ subject to $u_0=u_1=1$ | Your method is correct. Rearranging the terms, we find that $u_{n+1} = 5u_n-6u_{n-1}+2$. When $u_0 = 1, u_1 = 1$ as in your problem, then $u_2$ is $5\cdot 1-6\cdot 1+2$, or $1$.
The key point is to note that this recurrence equation holds for every value of $n$. Since $u_1 = u_2 = 1$, applying the recurrence again gives $u_3 = 1$, and continuing this pattern, every term will always equal $1$. This is the only solution, since the recurrence and the initial values determine every term. |
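Iterating the recurrence confirms this (my addition):
```python
# u_0 = u_1 = 1 forces every later term to be 1 as well.
u_prev, u = 1, 1
terms = [u_prev, u]
for _ in range(8):
    u_prev, u = u, 5 * u - 6 * u_prev + 2
    terms.append(u)
print(terms)   # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```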
Diagonal calculation in a 3D square | It is not a square but a tetrahedron with base $BCD$ and height $h = 855.35$.
Label the edge lengths as
$$\begin{cases}
b &= |CD| = 2578,\\
c &= |BD| = 2828,\\
d &= |BC| = 2128,\\
b_1 &= |AB| = 2060,\\
c_1 &= |AC|\\
d_1 &= |AD| = 2045\\
\end{cases}$$
$c_1$ will be the length we seek.
You can compute the volume of this tetrahedron in two different manners.
$V = \frac13 \mathcal{A} h$ where $\mathcal{A}$ is area of $\triangle BCD$.
You can use Heron's formula to get $\mathcal{A}$.
$$16\mathcal{A}^2 = (b+c+d)(-b+c+d)(b-c+d)(b+c-d)$$
You can also compute the volume using Cayley Menger determinant.
$$288V^2 = \left|\begin{matrix}
0 & 1 & 1 & 1 & 1\\
1 & 0 & b_1^2 & c_1^2 & d_1^2\\
1 & b_1^2 & 0 & d^2 & c^2\\
1 & c_1^2 & d^2 & 0 & b^2\\
1 & d_1^2 & c^2 & b^2 & 0
\end{matrix}\right|$$
Expanding the CM determinant and combining it with the first result, one obtains the following equation, which is quadratic in $c_1^2$:
$$\begin{align}
16\mathcal{A}^2h^2 = 144V^2
=& \phantom{+}\; b^2 b_1^2(-b^2-b_1^2 + c^2 + \color{red}{c_1^2} + d^2 + d_1^2)\\
& + c^2 \color{red}{c_1^2}(\;b^2+b_1^2 - c^2 - \color{red}{c_1^2} + d^2 + d_1^2)\\
& + d^2 d_1^2(\;b^2+b_1^2 + c^2 + \color{red}{c_1^2} - d^2 - d_1^2)\\
& - (b^2c^2d^2 + b^2 \color{red}{c_1^2} d_1^2 + b_1^2 c^2 d_1^2 + b_1^2\color{red}{c_1^2} d^2)
\end{align}$$
With the help of a CAS, we can substitute back the numerical values of $b,c,d,b_1,d_1$ into the above equation and simplify. The end result is
$$\begin{align}c_1
&= \sqrt{\frac{116153047144695\pm\sqrt{8168336037304042557755678133}}{19993960}}\\
&\approx 1135.385089196282 \;\text{ or }\; 3213.987289241557
\end{align}
$$
There are two possible solutions for $c_1$. The '+' solution corresponds to the
case where the dihedral angle between the planes holding $\triangle ABD$ and $\triangle BCD$ is obtuse ( $> 90^\circ$). For the '-' solution, the dihedral angle is acute ($< 90^\circ$).
Judging from your picture, the dihedral angle at edge $BD$ is obtuse. The length you seek is the one $\approx 3213.987289241557$. |
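A numeric cross-check of the two roots (my addition; `scipy.optimize.brentq` and the bracketing intervals are my choices, mirroring the two volume formulas above):
```python
# Solve for c1 = |AC|: equate the Cayley-Menger volume with (1/3)*Area*h.
import numpy as np
from scipy.optimize import brentq

b, c, d = 2578.0, 2828.0, 2128.0    # |CD|, |BD|, |BC|
b1, d1 = 2060.0, 2045.0             # |AB|, |AD|
h = 855.35                          # height of A above plane BCD

s = (b + c + d) / 2
A2 = s * (s - b) * (s - c) * (s - d)        # Heron: squared area of BCD

def g(c1):
    q = c1 * c1
    M = np.array([[0, 1,     1,     1,     1    ],
                  [1, 0,     b1**2, q,     d1**2],
                  [1, b1**2, 0,     d**2,  c**2 ],
                  [1, q,     d**2,  0,     b**2 ],
                  [1, d1**2, c**2,  b**2,  0    ]])
    return np.linalg.det(M) / 2 - 16 * A2 * h**2   # 144 V^2 computed both ways

print(brentq(g, 500, 2000), brentq(g, 2500, 4000))
# about 1135.39 and 3213.99
```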
Optimization problems - functions | Use the fact that $P(x) = R(x)-C(x)$, so we get $P'(x) = R'(x)-C'(x)$. Then do your analysis of $P'(x)$ the way you did for $R$ and $C$ to find where $P$ is maximal.
Added: Alternatively, you might note that $P(x)$ is a quadratic function, so your knowledge of quadratic functions may be enough to answer the question (e.g, What is the shape of the graph? Where is the vertex?). |
An inequality proposed at Zhautykov Olympiad 2008 | Since the $\mathrm{LHS}$ of the last inequality is homogeneous, we can assume $x^2 + y^2 + z^2 = 1$. Then it becomes
$$
\mathrm{LHS} = 2\sum_{cyc} \frac {x^2} {1 + z^2} =:2I
$$
Now using Cauchy-Schwarz inequality we get
$$
1 = (x^2 + y^2 + z^2)^2 = \left(\sum_{cyc} x\sqrt{1 + z^2} \cdot \frac x {\sqrt{1 + z^2}}\right)^2 \leq\\
\left(\sum_{cyc} x^2(1 + z^2)\right) \cdot \left( \sum_{cyc} \frac {x^2}{1 + z^2} \right) = I \cdot \sum_{cyc} x^2(1 + z^2)
$$
To finish, let's note that CS inequality implies
$$
x^2\cdot z^2 + y^2 \cdot x^2 + z^2 \cdot y^2 \leq x^4 + y^4 + z^4
$$
and therefore
$$
\sum_{cyc} x^2(1 + z^2) = 1 + x^2 z^2 + y^2 x^2 + z^2 y^2 \leq 1 + \frac {(x^2 + y^2 + z^2)^2} 3 = \frac 4 3
$$ |
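A random sampling check of the normalized inequality $2\sum_{cyc} \frac{x^2}{1+z^2} \ge \frac32$ on the unit sphere (my addition; the sample size is arbitrary):
```python
# With x^2+y^2+z^2 = 1, check 2*sum_cyc x^2/(1+z^2) >= 3/2 by sampling.
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=(100_000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)   # random points on the sphere
x, y, z = v.T
lhs = 2 * (x**2/(1+z**2) + y**2/(1+x**2) + z**2/(1+y**2))
print(lhs.min())   # never falls below 1.5 (equality at |x| = |y| = |z|)
```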
smallest positive number | Dario Alpern's solver shows
$X_0 = 0\\
Y_0 = 1$
$X_{n+1} = P X_n + Q Y_n \\
Y_{n+1} = R X_n + S Y_n \\ \\
P = 22 903355 954053 525066 202335 319378 237605 968890 (44 \text{ digits})\\
Q = 510732 021116 138713 675018 566232 201605 320997 (42 \text{ digits})\\
R = 1027 082094 464554 953200 462336 692957 428300 524967 (46 \text{ digits})\\
S = 22 903355 954053 525066 202335 319378 237605 968890 (44 \text{ digits})\\$
It will give you a step-by-step solution. |
How to use Dirac delta sifting property to prove question? | It is just the distributive property. Whatever $\delta(t-t_0)$ is (I know, but it doesn't matter here) the equation is true. You don't need the sifting property. |
Define a linear operator $T:X \to X$ as $ Tf(x) = x^2f(x) $.Prove that its spectrum set is $[0,1]$ | The spectrum is the set of scalars such that $T - \lambda I$ is not invertible (i.e. either not surjective or not injective). In particular, we have
$$(T - \lambda I)(f) = \Big(x \mapsto x^2 f(x) - \lambda f(x) = (x^2 - \lambda)f(x)\Big).$$
Importantly, note that when $\lambda \in [0, 1]$, then $x^2 - \lambda$ has a root in the domain $[0, 1]$. This implies that $T - \lambda I$ is not surjective (why?), and thus, $[0, 1]$ is in the spectrum of $T$.
If $\lambda \notin [0, 1]$, then $x^2 - \lambda$ has no root in $[0, 1]$, and hence $x \mapsto \frac{1}{x^2 - \lambda}$ is a continuous function on $[0, 1]$. See if you can use this to find an inverse of $T$.
So, you'll hopefully have seen that, if $\lambda \in [0, 1]$, then $x^2 - \lambda = 0$ when $x = \sqrt{\lambda}$. This means that, for every $g$ in the range of $T - \lambda I$, we have $g(\sqrt{\lambda}) = 0$.
So, putting this another way, we can define a linear function $\phi_\lambda : C[0, 1] \to \Bbb{R}$ by
$$\phi_\lambda(f) = f(\sqrt{\lambda}).$$
Then, we have
$$\operatorname{Range}(T - \lambda I) \subseteq \operatorname{Ker} \phi_\lambda.$$
Since $C[0, 1]$ is equipped with the supremum norm, the map $\phi_\lambda$ is bounded, since
$$|\phi_\lambda(f)| = |f(\sqrt{\lambda})| \le \sup_{x \in [0, 1]} |f(x)| = \|f\|.$$
This implies that $\phi_\lambda$ is continuous, and hence its kernel (i.e. $\phi_\lambda^{-1}\{0\}$) is a closed set. (This is not true with the integral norms.)
So, if $\operatorname{Range}(T - \lambda I)$ were dense, then we would have
$$C[0, 1] = \overline{\operatorname{Range}(T - \lambda I)} \subseteq \overline{\operatorname{Ker} \phi_\lambda} = \operatorname{Ker} \phi_\lambda \subseteq C[0, 1],$$
and hence
$$\operatorname{Ker} \phi_\lambda = C[0, 1].$$
But this is not true! Simply choose $f$ to be, say, the constant function $1$; then $\phi_{\lambda}(f) = 1 \neq 0$. Thus, $\operatorname{Range}(T - \lambda I)$ is not dense.
You can also do the same argument in terms of adjoints. If the range of $T - \lambda I$ were dense, then we would expect $(T - \lambda I)^*$ to be injective. But, $(T - \lambda I)^*(\phi_\lambda) = \phi_\lambda \circ (T - \lambda)$ is the zero functional, hence $\phi_\lambda \in \operatorname{Ker}(T - \lambda I)^*$, proving $(T - \lambda I)^*$ is not injective. |
What do the open sets in $\mathbb R^4$ look like such that $ad - bc \neq 0$ | With the change of variables $a=t+s$, $d=t-s$, $b=v+u$, and $c=v-u$, you find $$ad-bc=(t^2+u^2)-(s^2+v^2).$$
To get an idea, consider the first quadrant and let one axis represent $\sqrt{t^2+u^2}$, and the other, $\sqrt{s^2+v^2}$. These represent absolute value in the $(t,u)$ and $(s,v)$ planes respectively. The diagonal is where the determinant vanishes. Thus the set where the determinant is nonzero is split into two components.
There's still a bunch of details hiding behind the little word “thus” in the previous sentence, but that's the gist of one way to look at it.
It may help to note that, if you lock one of the variables $t$, $s$, $u$, $v$ to a fixed value, you get either a double cone (for the zero case) or a hyperboloid in the other three variables. |
Measuring Financial Investment Performance | My interpretation of your question, my hand calculations and explanations are as follows. For periods of six months you have:
At the beginning of period 1 you invest $C_{0}=1000$ (dollars).
At the end of period 1, $C_{0}$ is worth $F_{1}=1200$.
At the beginning of period 2 you invest the additional capital of $C_{1}=2000$.
At the end of period 2 your investment is worth $F_{2}=4000$.
Figure: Cash Flows of -1000, -2000 and +4000 dollars
Let us denote by $r$ the annual nominal interest rate of your investment. In $n$ periods of six months the investment is worth $(1+r/2)^{n}$ per currency unit of invested capital.
The initial investment $C_{0}$ is worth $C_{0}(1+r/2)^{2}$ at the end of period 2.
The additional capital $C_{1}$ is worth $C_{1}(1+r/2)$ at the end of period 2.
Then we have
$C_{0}(1+r/2)^{2}+C_{1}(1+r/2)=F_{2}$
$1000(1+r/2)^{2}+2000(1+r/2)=4000$
or
$(1+r/2)^{2}+2(1+r/2)=4$
$2r+\dfrac{1}{4}r^{2}+3=4.$
The solution is $r=2\sqrt{5}-4\approx 0.47214\approx 47.214\%$ annual nominal rate.
To determine the effective interest rate, you could find how much you would need to invest so that at a nominal rate of $r$ you would have $4000$ in 1 year (2 periods):
$P\cdot 1.2361^{2}=4000$
$P=4000/1.2361^{2}=2617.9$
The annual effective interest rate is
$i_{eff}=\dfrac{4000}{2617.9}-1=0.52794\approx 52.794\%$.
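The same numbers fall out of a few lines of code (my addition; `numpy.roots` solves the quadratic in $q = 1 + r/2$):
```python
# Solve 1000*q^2 + 2000*q - 4000 = 0 with q = 1 + r/2, then convert.
import numpy as np

q = max(np.roots([1000.0, 2000.0, -4000.0]))   # positive root, q = sqrt(5)-1
r = 2 * (q - 1)       # annual nominal rate, compounded semiannually
i_eff = q**2 - 1      # annual effective rate
print(r, i_eff)       # about 0.47214 and 0.5279
```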
Remark: The future value $F_{n}$ at the end of period $n$ of a present value
$P$ is given by
$F_{n}=P\left( 1+\dfrac{r}{m}\right) ^{n}$,
where $r=i_N$ is the annual nominal interest rate compounded $m$ periods per year. In your case $m=2$.
Reference: Rigg, Bedworth and Randhawa, Engineering Economics, Mc Graw Hill, 4th ed., 1996.
All errors and omissions are mine, of course. |
Least absolute deviation minimization by derivative. | The function $\rho \mapsto \sum_i |y_i - \rho|$ is not differentiable at $y_1, \ldots, y_N$, so there will be some caveats to using derivatives to reason about minimizers.
I haven't checked your calculations, but it is not surprising that your computation leads to a zero second derivative. The original function $\sum_i |y_i - \rho|$ is a piecewise linear function in $\rho$ with "knots" at $y_1, \ldots, y_N$. Your derivative computations should be valid for values of $\rho$ that aren't at the knots; because the function is linear at each segment, you'll get a zero second derivative there. However, this does not tell you immediately about the function globally because of the non-differentiability at the knots. [There are things called subderivatives that generalize the idea of derivatives to handle these situations, but it's not necessary to go into that here.]
If you want to argue that the critical point is a minimizer, try showing that the first derivative is nonnegative to the right of the critical point, and nonpositive to the left of the critical point.
Let $f(\rho) = \sum_i |y_i - \rho|$.
Claim: if $\rho^*$ is a median of $y_1, \ldots, y_N$, then $f(\rho^*) \le f(\rho)$ for any $\rho$.
For simplicity I assume $y_1 < \cdots < y_N$. For cases where some of the $y_i$ are the same, the argument below can be modified to handle it.
It suffices to show that $f(\rho) \le f(\rho')$ for $\rho^* \le \rho \le \rho'$, and that $f(\rho') \ge f(\rho)$ for $\rho' \le \rho \le \rho^*$.
Show that $f$ is a piecewise linear function with knots at $y_1 \le \cdots \le y_N$ and slopes $-N, -(N-2), -(N-4), \ldots, N-4, N-2, N$. (You've basically already done this in your computation of $dQ/d\rho$.) Once you've done this, you've shown that $f$ has a "jagged U" shape, so there are global minimizers.
When $N$ is even, there is a segment of slope $0$ in the middle, between $y_{N/2}$ and $y_{N/2 + 1}$. Any point on this segment serves as a median, and you can see from the shape of $f$ that these are all minimizers.
When $N$ is odd, the two segments between $y_{(N-1)/2}, y_{(N+1)/2}, y_{(N+3)/2}$ have slopes $-1$ and $1$. This middle point $y_{(N+1)/2}$ is the median and can be seen to be the minimizer due to the shape of $f$.
Again, I've assumed the $y_1, \ldots, y_N$ are unique; you can modify this argument to handle the more general case when some values are repeated. |
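A quick numeric illustration that the median minimizes $f$ (my addition; grid search over a random sample):
```python
# Grid search: the minimizer of f(rho) = sum |y_i - rho| sits at the median.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=101)                        # odd N, so a unique median
rho = np.linspace(y.min(), y.max(), 20_001)
f = np.abs(y[None, :] - rho[:, None]).sum(axis=1)
print(rho[np.argmin(f)], np.median(y))          # agree up to grid spacing
```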
A big list of non-trivial examples of functions from outside mathematics | Here are some things that I use as function examples for a general set of functions:
(1) Letter counting function, $L$. Domain: Set of words. Letter counting function outputs number of letters: E.g., $L($dog$)=3$.
(2) Initials function, $I$. Domain: Set of students in class. Initials function outputs first and last name initials: E.g., $I($Mary Jones$)=$MJ.
(3) Full sibling (two bio parents in common) relation, $S$: For people $a, b$ we have $a S b$ if and only if $a$ and $b$ have both bio parents in common. (Variants are possible: at least one bio parent in common; exactly one bio parent in common; etc.)
I'm sure others can add many other ideas to this list. |
Integrating an unknown function | You cannot do this. Just to prove it, assume that your result $x(t)^4\, t$ is correct and compute its derivative. What you have then is $\frac{d}{dt}\left[x(t)^4\, t\right]=4 t\, x(t)^3 x'(t)+x(t)^4$, which does not have much to do with $4x(t)^3$.
If the derivatives of $x(t)$ are simpler and simpler, probably the best would be integration by parts. Let $$I=\int 4x(t)^3\, dt=4\int x(t)^3\, dt=4J$$ and start with $u=x(t)^3$, $dv=dt$, so $du=3x(t)^2 x'(t)\, dt$ and $v=t$. So $$J=\int x(t)^3\, dt=t\, x(t)^3-3\int t\, x(t)^2 x'(t)\,dt$$ and repeat for the last antiderivative.
Prove that $P_1P_2P_3=2rR^2$. | In fact, $O_1,O_2,O_3$ are the second intersection points of $AI,BI,CI$ respectively with the circumcircle $(ABC)$. Thus, you may readily obtain that $$p_1=2R\sin \frac{A}{2},~~~p_2=2R\sin \frac{B}{2},~~~p_3=2R\sin \frac{C}{2}.$$
Moreover, I guess you must know the formula on the incircle radius that $$r=4R\sin\frac{A}{2}\sin\frac{B}{2}\sin\frac{C}{2}.$$
Therefore, $$p_1p_2p_3=2\cdot4R\sin\frac{A}{2}\sin\frac{B}{2}\sin\frac{C}{2}\cdot R^2=2rR^2.$$ |
Is it ok to prove a subset of a group is an abelian group this way? | As every element has order two you do not need to check both directions of every multiplication, i.e., you have $3$ non-identity elements and you only need to check $\binom32 = 3$ multiplications instead of the $6$ that a brute force approach would normally require.
The reason is because if $xy = z$ and $z$ is in your subset, then because $z$ has order two we know that $z^{-1} = z$ is in your subset, and $z^{-1} = (xy)^{-1} = y^{-1}x^{-1} = yx$ (because $x$ and $y$ also have order $2$).
That trick I just used is exactly how you prove that if every element of a group is order $2$ it is abelian, but I don't want to say that it's because of that theorem that you don't have to check the other multiplications. Really it's because of the proof of that theorem; because the method of proof still applies in this circumstance. I make this distinction because if you have a theorem that says "If a group satisfies condition $P$ then that group is a $Q$" and then you check that a subset satisfies condition $P$, then in general it is NOT true that this implies that your subset is in fact a subgroup of type $Q$. |
Is there any easy way to see that elementary matrices commute in $\text {Mat}_{n \times n} (\mathbb F)$? | There isn't, because they don't (for $n>1$).
Every invertible matrix is a product of elementary matrices. If invertible matrices commuted, then any two invertible matrices would commute!
Can you find an example of two elementary matrices which don't commute? |
Finding the limit of $f(f(x))$ type problem | Hint:
The normal $3y=x+18$ gives us $y=7$ for $x=3$, so it intersects the curve at $(3,7)$; hence $f(3)=7$. The slope of the normal is $\frac{1}{3}$, so $f'(3)=-3$ (the normal being perpendicular to the tangent, the tangent's slope is the negative reciprocal).
How Does $\frac{1+\sin(x)}{x}=1 + \frac{\sin(x)}{x}$? | There are two cancelling errors in that graphic: from the first step to the second, and the second to the third. You saw only the second error. The sequence of equations should begin
$$\lim_{x\to 0}\frac{\sin(x)}{x+\sin(x)}
=\lim_{x\to 0}\frac{\frac{\sin(x)}x}{\frac{x+\sin(x)}x}
=\lim_{x\to 0}\frac{\frac{\sin(x)}x}{1+\frac{\sin(x)}x}$$
The probable cause is a typographic error in the second step, typing a $1$ rather than an $x$. |
Can a large number of small matrices be multiplied quickly? | I've been looking around, and it doesn't really look like it. The problems come from being unable to rearrange the matrices, due to their lack of commutativity, and from the fact that the faster matrix multiplication algorithms only pay off on large matrices, not on small ones.
Your best bet would be to run a preliminary search for any patterns like $ABABABABABABABAB$, which can be reduced down to $(AB)^8$, which can be calculated quickly by using methods like exponentiation by squaring (see the sketch below). Even searching for a pattern seems a little hard. You'll probably still take around a million calculations or so.
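For completeness, a minimal exponentiation-by-squaring sketch (Python with NumPy; the matrices $A$, $B$ are placeholders):

```python
import numpy as np

def mat_pow(M, n):
    """Compute M**n with O(log n) matrix multiplications (square-and-multiply)."""
    result = np.eye(M.shape[0], dtype=M.dtype)
    while n > 0:
        if n & 1:              # multiply in the current square when this bit of n is set
            result = result @ M
        M = M @ M
        n >>= 1
    return result

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])
AB = A @ B
print(np.allclose(mat_pow(AB, 8), np.linalg.matrix_power(AB, 8)))  # True
```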
Interior cap for many sets | This is true, and holds for any non-empty family of sets $A_i, i \in I$:
$\operatorname{Int}(\bigcap_{i \in I} A_i) \subseteq \operatorname{Int} A_{i_0} = O$ for any $i_0 \in I$.
On the other hand, we know that $O \subseteq A_i$ for all $i$, so $O \subseteq \bigcap_{i \in I} A_i$, and so $O \subseteq \operatorname{Int}(\bigcap_{i \in I} A_i)$, as $O$ is open, and the interior is the largest open set inside a set.
You can also deduce it from this dual fact, by considering $B_i = X \setminus A_i$ and noting that $\operatorname{Cl}(B_i) = X \setminus \operatorname{Int}(A_i) = X \setminus O$, and $\bigcup_i B_i = X \setminus \bigcap_i A_i$ by de Morgan. |
Prove that there exists two consecutive natural numbers such that sum of all digits of each number is multiple of $2017$. | Let $\Sigma_n$ be the sum of the decimal digits of $n$. Then
$$\Sigma_{n+1}=\Sigma_{n}+1-9T_n$$ where $T_n$ is the number of trailing nines (because cascaded carries replace all trailing nines by zeroes).
The smallest solution of $9T_n-1=2017k$ is indeed $T_n=1793$ (since $9\cdot 1793=16137=8\cdot 2017+1$), and you can't avoid all these nines.
As the sum of these digits is $1\bmod 2017$, a total of $2016=9\cdot224$ is missing. You can't just prepend these $224$ nines, because more carries would result. It suffices to split the last nine to avoid that, and the best way is by moving one unit ahead.
The smallest pair is thus
$$1\underbrace{9\dots 9}_{223}8\underbrace{9\dots 9}_{1793}\to9\cdot2017$$
$$1\underbrace{9\dots 9}_{223}9\underbrace{0\dots 0}_{1793}\to1\cdot2017$$ |
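A direct check of this pair (a Python sketch):

```python
n = int('1' + '9' * 223 + '8' + '9' * 1793)   # the first number above
digit_sum = lambda m: sum(map(int, str(m)))

print(digit_sum(n) % 2017, digit_sum(n) // 2017)          # 0 9
print(digit_sum(n + 1) % 2017, digit_sum(n + 1) // 2017)  # 0 1
```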
Proving that any finite integral domain is a field. | To show $f_a$ is injective, suppose $ar=ar'$. Then $a(r-r')=0$. Since $R$ is an integral domain, $a=0$ or $r-r'=0$. But you assumed $a\neq 0$. You are right on the other part: since $R$ is finite, any injection $\phi:R\to R$ is a bijection. This means $1$ must have a preimage... so? |
Given nonnegative $x_1, \cdots, x_n \geq 0$, show that $x_1 + \cdots + x_n \geq \sqrt{1} + \sqrt{3} + \cdots + \sqrt{2n-1}$ | Hints:
1. Argue it is enough to consider the cases with $\displaystyle \sum_1^n x_i^2 = n^2$.
2. Use Karamata's inequality with the concave function $t \mapsto \sqrt{t}$, after observing that $$(x_1^2, x_2^2, x_3^2, \dots x_n^2) \prec (1, 3, 5, \dots 2n-1)$$ |
The mean IQ of a sample of 1600 children was 99. Is it likely that this was a sample from a population with mean IQ 100 and standard deviation 15? | As requested in comments:
This is a confusing description, as a 99.6% confidence level does not mean what it appears to mean compared to a 95% confidence level.
In other words, it makes you more confident that the null hypothesis will not be rejected when it is correct, but less confident that it will be rejected when it is false.
You chose $\alpha=5\%$ and that led you to reject the null hypothesis; that seems to be what the question wants.
Maximum of $F(x,y)=(2x^2-y)(y-x^2)$ | Notice that
\begin{align*}
f(x,x^2+1) &= (2x^2-(x^2+1))((x^2+1)-x^2) = x^2 -1 \\
f(x,0) &= (2x^2-0)(0 - x^2) = -2x^4
\end{align*}
So $f$ has arbitrarily large positive values and arbitrarily large negative values. Therefore there is no global maximum or minimum.
As for local maxima and minima, they would have to be at critical points. You have found the only critical point and it isn't an extremum. |
The dimension of the space of solutions of $XY + YX = O$ | Call the block matrix in question $B$. Then $B$ is diagonalisable over $\mathbb C$ if and only if $B-\frac12\operatorname{tr}(A)I_4$ is diagonalisable over $\mathbb C$. So, we may assume without loss of generality that $A$ is traceless. Let $p$ be the characteristic polynomial of this traceless $A$. Using the fact that $A$ has distinct complex eigenvalues, prove that $B$ is diagonalisable over $\mathbb C$ if and only if its minimal polynomial is equal to $p$. Hence show that in this case,
$$p(B)=\pmatrix{0&AN+NA\\ 0&0}.$$
Consequently you need to show that when $A$ is traceless and it has distinct complex eigenvalues, the set of all real matrices $N$ such that $AN+NA=0$ is a two-dimensional subspace of $M_2(\mathbb R)$. |
Can the two branches of a single hyperbola hypothetically intersect | The two branches of course don’t intersect in the Euclidean plane $\mathbb R^2$, but we can extend $\mathbb R^2$ to the projective plane $\mathbb P^2$ by adding some points and a line: for each family of parallel lines in $\mathbb R^2$ add a “point at infinity” that lies on all of those lines, and add a “line at infinity” on which all of these additional points, and no others, lie. A hyperbola will then include a pair of points at infinity that lie on its asymptotes. Not only do the two branches of a hyperbola intersect in $\mathbb P^2$, but it is a single connected curve.
The projective plane “wraps around” the way some old space battle video games did. If you move along a line, eventually you reach the point at infinity, and if you keep going from there, you continue along the line “from the other side.” With this in mind, just as in Euclidean geometry an ellipse can be viewed as a stretched-out circle, in projective geometry a parabola can be seen as a circle that’s been stretched out to the line at infinity, and a hyperbola as a circle that’s been stretched “through” the line at infinity. Conics can thus be distinguished by their intersections with the line at infinity: hyperbolas have two, parabolas one and ellipses none.
On the other hand, the line at infinity is a first-class citizen: there’s nothing about it that’s really different from any of the “finite” lines. Indeed, the choice of a line at infinity in the projective plane is arbitrary. With different choices, the same curve might be an ellipse, parabola or hyperbola. From the point of view of projective geometry, then, there is only one kind of nondegenerate conic. |
Using the Intermediate Value Theorem and Rolle's theorem to determine number of roots | The Intermediate Value Theorem establishes existence: there is at least one real root.
Notice that $p(0) = -2 < 0$ and $p(1) = 7 > 0$. Since $p$ is continuous, the I.V.T. guarantees a number $c$ such that $p(c) = 0$. (In fact, we know that $0 < c < 1$.)
Rolle's Theorem establishes uniqueness: there is at most one real root.
Why? Suppose that there were two roots $a, b \in \mathbb{R}$. Since $p$ is differentiable, Rolle's Theorem guarantees a number $c \in (a, b)$ where $p'(c) = 0$. What's wrong with that? The derivative
$$
p'(x) = 5x^4 + 3x^2 + 7 > 0
$$
for all $x \in \mathbb{R}$. Why? It's quadratic in $x^2$ and its discriminant $(3)^2 - 4(5)(7) = -131 < 0$.
the order of zero of rational function | For analytic functions, if $g(a) \ne 0$ then $f/g$ and $f$ have the same order of zero at $a$. |
Determining whether the given solutions are a basis | First of all, the dimension of the space of solutions is 3, and there are 3 vectors so that's okay there. Then you have to verify those vectors are indeed solutions. And finally you have to check if they are linearly independent. If so, then you have indeed a basis.
Edit: for your edit, vectors 1 and 2 are linearly independent, so you just have to check that the third is not a linear combination of them; look at, let's say, the second coordinate: show that $ae^{2t} + be^{4t} = 0$ for all $t\in\mathbb R$ implies $a=b= 0$ (find all possible $a$ and $b$ for $t = 1$ and $t= 2$, for instance, and then conclude).
Boolean nonlinear function | $f(1,1,0,0) = 0, \; f(0,0,1,1) = 0, \, f(1,1,1,1) = 1$. |
Is T a continuous linear map? | Since$$a_1=\frac1\pi\int_{-\pi}^\pi f(t)\cos(t)\,\mathrm dt=\frac1\pi\langle f,\cos\rangle,$$the map $f\mapsto a_1$ is continuous. A similar argument shows that $f\mapsto b_1$ is continuous too and so your map is the sum of two continuous functions. |
Equal weight problem | If the probabilities of $A$ and $B$ coins are $a$ and $b$ respectively, you need $\frac ba$ times as many $A$ coins as $B$ coins. If any of these ratios are irrational you are sunk. Otherwise start with one of the highest probability coin. Compute how many of all the other coins you need. Compute the least common multiple of the denominators and multiply by that number. With your example of $0.1, 0.3, 0.5$, for one of the $0.5$ coins we would need $5$ of the $0.1$s and $\frac 53$ of the $0.3$s, so our result is $15$ of the $0.1$ coins, $5$ of the $0.3$ and $3$ of the $0.5$. This will give the minimal solution, which may or may not have a total of $100$ coins or less. If you want a total of exactly $100$ coins, it will work as long as the minimal solution divides into $100$ exactly. Just multiply by $100$ divided by the total number of coins in the minimal solution. |
Matrix Form for Hitting Probabilities in a Markov Chain | Matrix form:
$$ \left[
\begin{array}{cccccc|c}
1&0&0&0&0&0&1\\
0&1&0&0&0&0&1\\
0&0&1&0&0&0&1\\
&&&\cdots &&& 1\\
p_{i,0} & p_{i,1} & \cdots & p_{i,i}-1 & p_{i,i+1} & \cdots & 0 \\
p_{i+1,0} & p_{i+1,1} & \cdots & p_{i+1,i} & p_{i+1,i+1}-1 & \cdots & 0 \\
&&\cdots &&&& 0\\
p_{n,0} & p_{n,1} & \cdots & \cdots & p_{n,n-1} & p_{n,n}-1 & 0 \\
\end{array}
\right] $$
To see that the $h_i^A$'s are a solution:
Firstly, if $i\in A$ then $X_0=i\implies H^A=0\implies h_i^A=1$.
Suppose now that $i\notin A$. Conditioning on the first step:
\begin{align}
h_i^A &= \sum_{j\in S}P(X_1=j\mid X_0=i) P(H^A\lt\infty|X_1=j,X_0=i) \\
&= \sum_{j\in S}p_{ij} P(H^A\lt\infty|X_0=j) \qquad\text{since, given $X_0=i\notin A$, we know $H^A\neq 0$} \\
&= \sum_{j\in S}p_{ij} h_j^A \qquad\qquad\qquad\qquad\text{(1)}
\end{align}
If you want to see why this solution is minimal, take an arbitrary non-negative solution $s_1,s_2,\ldots,s_n$.
For $i\in A$ we must have $s_i=1=h_i$ so minimality for such $i$ holds trivially.
Suppose now that $i\notin A$. Then our equation (1) for $s_i$ can be split into separate sums, and then at each step iterate by replacing the $s_j$ values:
\begin{align}
s_i &= \sum_{j\in A}p_{ij} + \sum_{j\notin A}p_{ij} s_j \qquad\text{using $s_j=1$ for $j\in A$} \\
&= P(H^A\leq 1\mid X_0=i) + \sum_{j\notin A}p_{ij} s_j \\
&= P(H^A\leq 1\mid X_0=i) + \sum_{j_1\notin A}p_{ij_1} \left(\sum_{j_2\in A}p_{j_1j_2} + \sum_{j_2\notin A}p_{j_1 j_2} s_{j_2}\right) \\
&= P(H^A\leq 2\mid X_0=i) + \sum_{j_1\notin A}p_{ij_1} \left(\sum_{j_2\notin A}p_{j_1 j_2} s_{j_2}\right) \\
& \ldots \\
\end{align}
Continuing in this way, we have, for arbitrarily large $M$,
\begin{align}
s_i &= P(H^A\leq M\mid X_0=i) \;+\; \text{some non-negative term} \\
\therefore\quad s_i &\geq P(H^A\lt\infty\mid X_0=i) = h_i^A \qquad\text{which proves minimality.}
\end{align} |
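To see the linear system in action on a concrete chain, here is a small sketch (Python with NumPy; the gambler's-ruin chain below is a made-up example, not taken from the question):

```python
import numpy as np

# Gambler's ruin on {0,1,2,3}: states 0 and 3 absorbing, fair steps in between.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
A = [0]         # target set, so h_0 = 1
trans = [1, 2]  # transient states; minimality forces h_3 = 0 at the other absorbing state

# Equation (1) restricted to the transient states: (I - Q) h = P[trans, A] 1
Q = P[np.ix_(trans, trans)]
b = P[np.ix_(trans, A)].sum(axis=1)
h = np.linalg.solve(np.eye(len(trans)) - Q, b)
print(h)        # [0.6667, 0.3333] -- the classical ruin probabilities 2/3 and 1/3
```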
boundary map in the (M-V) sequence | The generator of $H_3(S^3)$ can be given by taking the closures of $M_K$ and $N(K)$ and triangulating them so the triangulations agree on the boundary, then taking the union of all the simplices as your cycle. The boundary map takes this cycle and sends it to the common boundary of its two chunks, which is exactly the torus, so the map is indeed 1. The map in the second question should be the diagonal map $(\times 1,\times 1)$.
Prove that $\sqrt[3]{5-\sqrt{2}}$ is not a rational number | Let $\displaystyle\sqrt[3]{5-\sqrt2}=a$ where $a$ is rational.
Cubing both sides, $\displaystyle5-\sqrt2=a^3\iff 5-a^3=\sqrt2$; the right-hand side is irrational while the left-hand side is rational, a contradiction.
Can you write a Turing machine in set theory? | If the Turing machine calculation might not terminate, then the set containing the result is indeed not defined. But that's no fault of set theory (on the contrary, things are working exactly as they should be), nor is it any indication that set theory is somehow unable to formalize the idea of Turing machines.
When you write down a definition, it is incumbent on you to show that it is actually a definition. "Let $x=12$ if the Goldbach conjecture is true" cannot be shown to be a valid definition since we don't know if the Goldbach conjecture is true. "Let $x=12$ if the Goldbach conjecture is true, otherwise let $x=10$" is a valid definition regardless, because we've covered both options. Similarly, "let $x$ be the output of the 147th Turing machine on input 5" is not necessarily a definition unless we know that the 147th Turing machine halts on input 5. If we don't, we had better say what $x$ is in the event that it doesn't halt.
(It might be easier to see this if we try to define a function instead. Let $f(n)=0$ if $n$ can be written as the sum of two squares. This is not a definition of a (total) function because we haven't said what happens if $n$ is not the sum of two squares. This doesn't mean set theory, or arithmetic, or whatever, is incapable of handling numbers.)
Nothing changes in the situation where we say "let $S$ be the singleton set containing the output of the 147th Turing machine on input 5"... this simply is not necessarily a definition.
Turing machines can certainly be defined in set theory. You can define the internal state they are in and the location of the tape head as simple recursive functions. You can express things like "at computational step $n,$ the Turing machine is in state $i$" as sentences in the language of set theory. You can express "the Turing machine halts with output $m$" as "there is an $n$ such that at step $n$, the Turing machine is in one of its halting states, and the output tape at this time contains the number $m$". You can express anything you want about a given Turing machine on a given input, or about all Turing machines, on all inputs, or whatever.
And there is nothing inherently set theoretical about it either: all these things can be encoded (perhaps somewhat less elegantly) as numbers, and then you can talk about Turing machines in the language of arithmetic. You can prove more things in ZFC than in PA (for instance, there is a TM that halts if and only if PA is inconsistent... this can be proven not to halt in ZFC but not in PA), but the 'natural arena' for formalizing the mathematics of Turing machines is arithmetic, not set theory. |
If triangle's angles are $\alpha \leq \beta \leq \gamma$, then respective opposite sides are $a \leq b \leq c$. How to handle obtuse case? | My original answer was inadequate. Try this:
We have the angles $\alpha<\beta<\gamma$. If all are acute, your analysis applies, because the sine is increasing in $[0,\pi/2]$.
If not all are acute, then only $\gamma$ is obtuse, and since $\gamma+\beta<\pi$, we also have $\beta<\pi-\gamma$, both acute. Then, by your argument, we have $\sin\alpha<\sin\beta<\sin(\pi-\gamma)$. Since $\sin\gamma=\sin(\pi-\gamma)$, the result $a<b<c$ follows in this case as well. |
Fundamental group of a countable product of circles | Covering theory is probably not a good approach here, since your space isn't semi-locally simply connected. But it's still true that the fundamental group of a product space is the direct product of the fundamental groups of the factors. That's because paths, loops, and homotopies in a product space are determined by paths, loops, and homotopies in all the factor spaces. So the fundamental group of $(S^1)^{\aleph_0}$ is $\mathbb Z^{\aleph_0}$. |
Why is it okay to omit the limits on some definite integrals? | No. For a single absolutely continuous random variable, yes.
Consider the first expectation:
$$E(X)= \langle x_1+ x_2\rangle=\iint(x_1+x_2)\rho(x_1,x_2)\mathrm{d}x_1\mathrm{d}x_2$$
This is supposed to be:
$$E(X)= \langle x_1+ x_2\rangle=\iint_{\mathbb R^2}(x_1+x_2)\rho(x_1,x_2)\mathrm{d}x_1\mathrm{d}x_2$$
or
$$E(X)= \langle x_1+ x_2\rangle=\int_{\mathbb R}\int_{\mathbb R}(x_1+x_2)\rho(x_1,x_2)\mathrm{d}x_1\mathrm{d}x_2$$
Going back to the one variable case:
$$E[X] = \int_{\mathbb R} xf_X(x)dx$$
In the case where the expectation is not well-defined because of a certain subset $A \subseteq \mathbb R$, then
$$E[X] = \int_{\mathbb R} xf_X(x)dx = \int_{\mathbb R \setminus A} xf_X(x)dx,$$
provided the integral is well-defined.
If $$\int_{B} xf_X(x)dx = 0$$ for $B \subseteq \mathbb{R}$, then $$\int_{\mathbb{R}} xf_X(x)dx = \int_{\mathbb{R} \setminus B} xf_X(x)dx$$
For a single continuous random variable, the pdf may not exist.
For a single discrete random variable,
$$E[X] = \sum_{x \in \operatorname{Range}(X)} xf_X(x)$$
If $Range(X) =\mathbb{R}$, $X$ is not discrete. |
Why does the function require p,q to be found? | Good for you if you can do the other parts without doing part (a) first. But it does potentially make the other parts easier. For instance, as long as $q\neq 0$, the representation $f(x)=p+\frac{q}{x-2}$ makes it pretty easy to see that $f(x)$ is one-to-one (since it is a composition of a bunch of functions that are clearly one-to-one: $x\mapsto x-2$, $x\mapsto 1/x$, $x\mapsto qx$, and $x\mapsto p+x$).
As for the range, I would assume that in the context of this question "range" means "image", rather than "codomain". That is, the range is the set $\{f(x):x\in A\}$, which might be smaller than the entire set $B$. The set $B$ such that $f$ is defined to be a function $A\to B$ is then called the "codomain", rather than the "range". |
Divisibility of ceiling function of surd | Here is a hint. Often these kinds of equations with $\alpha^n$ nearly an integer of some kind boil down to finding some conjugate number $\beta\lt 1$ with $\alpha^n+\beta^n$ the solution to some simple recurrence relation. The recurrence can make the condition obvious.
For example, this works with the Fibonacci numbers.
Note for the discerning - there is a reason for choosing $n$ rather than $2n$ - the recurrence is simpler as revealed in @minar's excellent comment. |
Probs. 2 (b) and 2 (c), Sec. 25 in Munkres' TOPOLOGY, 2nd ed: Components in the uniform and box topologies | First part: Let $\mathbb{R}^{\omega}$ have the uniform topology, and let $x,y \in \mathbb{R}^{\omega}$. Let $||\cdot||_u$ denote the uniform (pseudo)metric on $\mathbb{R}^{\omega}$, i.e., $||x||_u = \sup_n |x_n|$.
Case $1$: Suppose $x-y$ is bounded, and define $f: [0,1] \to \mathbb{R}^{\omega}$ as $t \mapsto x+t(y-x)$. Then $f$ is continuous (in fact, Lipschitzian) because $||f(s)-f(t)||_u = |s-t|\cdot ||x-y||_u$. Since $[0,1]$ is connected and $f$ is continuous, it follows that $f([0,1])$ is a connected subset of $\mathbb{R}^{\omega}$ which contains $x$ and $y$. Therefore $x \sim y$.
Case $2$: Suppose $x-y$ is unbounded. Then form a separation of $\mathbb{R}^{\omega}$ as follows: let $U = \{ z \in \mathbb{R}^{\omega}: x-z$ is bounded $\}$ and let $V = \mathbb{R}^{\omega} \backslash U$. Then $U$ and $V$ are open and disjoint (it is easy to check), and their union is $\mathbb{R}^{\omega}$. Also $x \in U$ and $y \in V$, so $x$ and $y$ cannot be in the same connected component.
Second Part: Let $\mathbb{R}^{\omega}$ have the box topology, and let $x,y \in \mathbb{R}^{\omega}$.
Case $1$: Suppose $x_n=y_n$ for $n > N$. Define a map $f: \mathbb{R}^N \to \mathbb{R}^{\omega}$ as $(a_1,...,a_N) \mapsto (a_1,...,a_N,x_{N+1},x_{N+2},...)$. Then $f$ is continuous, since $f^{-1}(\Pi_{n\in\mathbb{N}}U_n) = \Pi_{n=1}^N U_n$ as long as $x_n \in U_n$ for $n > N$ (otherwise $f^{-1}(\Pi_{n\in\mathbb{N}}U_n) = \emptyset$). Since $\mathbb{R}^N$ is connected, it follows that $f\left(\mathbb{R}^N\right)$ is connected. Since $x,y \in f\left(\mathbb{R}^N \right)$, it follows that $x \sim y$.
Case $2$: Suppose that $x_n \neq y_n$ for infinitely many $n$. Define a map $g: \mathbb{R}^{\omega} \to \mathbb{R}^{\omega}$ as $$(g(z))_n = \begin{cases}
\frac{nz_n}{x_n-y_n} & \text{if } x_n \neq y_n \\
z_n & \text{if } x_n=y_n
\end{cases}$$
Then $g$ is continuous with respect to the box topology, because for any collection $\{U_n\}_{n\in\mathbb{N}}$ of open subsets of $\mathbb{R}$, the set $g^{-1}(\Pi_{n\in\mathbb{N}}U_n)$ is easily checked to be a set of the form $\Pi_{n\in\mathbb{N}} V_n$ for some open sets $V_n \subset \mathbb{R}$ (explicitly, $V_n = U_n$ if $x_n=y_n$, and $V_n = \frac{x_n-y_n}{n} \cdot U_n$ if $x_n \neq y_n$). It is easily checked that $g(x)-g(y)=g(x-y)$ is an unbounded sequence (since $(g(x-y))_n = n$ whenever $x_n \neq y_n$). Therefore, using the first part of the question (and the fact that the box topology contains the uniform topology), there exist open sets $U,V$ which form a separation of $\mathbb{R}^{\omega}$ and $g(x)\in U$ and $g(y) \in V$. Thus $g^{-1}(U)$ and $g^{-1}(V)$ form a separation of $\mathbb{R}^{\omega}$, with $x \in g^{-1}(U)$ and $y\in g^{-1}(V)$. So $x$ and $y$ are not equivalent. |
A simpler way to evaluate $\int {dx\over \sin^2x\cos^4x}$ | $I = \displaystyle\int \dfrac{1}{\sin^2{x}\cos^4{x}} dx = \displaystyle\int \sec^2{x}\dfrac{\left(\tan^2{x}+1\right)^2}{\tan^2{x}} dx$. Now substitute $y = \tan{x}$.
$I= \displaystyle\int \dfrac{(y^2+1)^2}{y^2} dy = \frac{y^3}{3}+2y-\frac{1}{y} +C$ |
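A quick symbolic check of this antiderivative (a SymPy sketch):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.tan(x)
F = y**3/3 + 2*y - 1/y                         # the antiderivative above, with y = tan(x)
integrand = 1/(sp.sin(x)**2 * sp.cos(x)**4)
print(sp.simplify(sp.diff(F, x) - integrand))  # 0
```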
Orthogonality Relations Problem with Sine | Hint:
Use this identity:
$$\sin(a)\sin(b)={1\over2}[\cos(a-b)-\cos(a+b)]$$
Closed formulas for two Poincaré series | For the commutative case, we know that the dimension of the degree-$i$ graded piece is
$$ \dim \Bbb{C}[x_1, \cdots, x_n]_i = \sum_{e_1 + \cdots + e_n = i} 1. $$
This gives
\begin{align*}
P_{\Bbb{C}[x_1, \cdots, x_n]}(t)
&= \sum_{i=0}^{\infty} \sum_{e_1 + \cdots + e_n = i} t^i \\
&= \sum_{i=0}^{\infty} \sum_{e_1 + \cdots + e_n = i} t^{e_1} \cdots t^{e_n} \\
&= \sum_{e_1, \cdots, e_n} t^{e_1} \cdots t^{e_n} \\
&= \frac{1}{(1-t)^n}.
\end{align*}
The non-commutative case is much easier, so try it yourself! |
supremum and summation Inequality | It follows from the fact that
\begin{align*}
\sup|x_{k}|\leq\left(\sum|x_{k}|^{r}\right)^{1/r}
\end{align*}
for $r\geq 1$.
To see this, note that
\begin{align*}
|x_{k}|=(|x_{k}|^{r})^{1/r}\leq\left(\sum|x_{k}|^{r}\right)^{1/r}
\end{align*}
for all $k$. |
Question on integration upper bound, area under ellipse | Here you will find the complete method for this calculation. As for your specific question, I understand your confusion. To resolve it, make it a habit to write the variable of integration alongside the integration sign. Your correct step should be:
$$\frac{1}{4}A=\int^a_0\frac{b}{a}\sqrt{a^2-x^2}dx$$
Notice the $dx$ I added at the end. This makes it clear that the integration is taking place with respect to $x$ (not $y$, as you must have originally thought). The upper limit for $x$ is clearly $a$ and not $b$. |
Triplet of integers such that expression is prime for every prime number? | No such $a,b,c$ exist.
Let $f(x)=(5a+4)x^2+(5b+3)x+2c$. Let $f(3)=q$. Then $f(kq+3)$ is a multiple of $q$ for all $k$, and it exceeds $q$ for large $k$, so those values are composite. And by Dirichlet's Theorem $kq+3$ is prime for infinitely many $k$.
Now it's not possible to prove Dirichlet's Theorem by middle school methods, but at least the statement of Dirichlet's Theorem is quite simple. It says that if $\gcd(a,b)=1$ then there are infinitely many $n$ such that $an+b$ is prime. |
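A concrete illustration of the argument (a Python sketch; the coefficients $a=b=c=1$ are an arbitrary choice):

```python
from sympy import isprime

a, b, c = 1, 1, 1                  # arbitrary integers
f = lambda x: (5*a + 4)*x**2 + (5*b + 3)*x + 2*c

q = f(3)                           # here q = 107
for k in range(1, 6):
    n = k*q + 3
    assert f(n) % q == 0 and f(n) > q   # f(n) is a proper multiple of q, hence composite
    if isprime(n):                 # Dirichlet: infinitely many such primes exist
        print(n, f(n), f(n) // q)  # e.g. 431 1675299 15657
```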
A different version of the 100 prisoners and a light bulb riddle (version 2) | I just realized that the protocol below can be drastically improved by letting all prisoners accumulate and pass on knowledge about the days on which prisoners were first called. Instead of just the prisoner first called on day $k+1$ leaving the light on on day $k$ of a run, any prisoner can do so who’s seen the light left on on such a day, and thus knows that someone was first called on day $k+1$. This makes it a lot more difficult to estimate the expected runtime, so I wrote some code that simulates this protocol. It also includes, as another improvement, that we start with $n=200$, $k=0$, so that many prisoners can immediately pass on their knowledge about their first day. The initial $n$ and the growth rate could certainly be further optimized. In this form, the expected runtime is about $800000$ days, or about $2200$ years; still beyond the reach of our mortal prisoners, but quite a significant improvement over the original idea below.
This will take ages, there may be much more efficient protocols, but the expected runtime is finite: Divide the days into growing runs of $n=1,2,3,\ldots$ days and number each day in the run with $k=1,\ldots,n$. In each run, a prisoner leaves the light on on the $k$-th day of the run if they were first called on the $(k+1)$-th day overall. The last prisoner called knows that she was the last to be called when she's seen the light left on on $98$ days with different numbers less than her own. (The prisoner called on the first day isn't involved since they're certain to have been first called on the first day.)
We can estimate the expected runtime as follows: First we have the standard coupon collector's runtime of $100H_{100}\approx519$ days until the last prisoner is called. Then, in every run except near the beginning there are $98$ eligible days on which the last prisoner may find the light on. Each of them is successful with probability $\frac1{100\cdot100}$, since a particular prisoner must be called on the previous day and then the last prisoner must be called. Thus, the last prisoner has a chance of $\frac1{10000}$ per eligible day to collect a coupon, and she needs to collect all $98$ different coupons.
Let $X$ be the number of coupons she needs to collect before she has all $98$, and $Y$ the number of eligible days this takes her. Then
$$
E[Y]=10000E[X]=10000\cdot98H_{98}\approx5.06\cdot10^6
$$
and, by the law of total variance,
\begin{eqnarray}
\operatorname{Var}(Y)
&=&
E[\operatorname{Var}(Y\mid X)]+\operatorname{Var}(E[Y\mid X])
\\
&=&
E\left[9999\cdot10000\cdot X\right]+\operatorname{Var}(10000\cdot X)
\\
&=&
9999\cdot10000\cdot98H_{98}+10000^2\left(98^2H^{(2)}_{98}-98H_{98}\right)
\\
&=&
10000^2\cdot98^2H^{(2)}_{98}-10000\cdot98H_{98}
\\
&\approx&
{1.57\cdot10^{12}}\;.
\end{eqnarray}
(See Coupon collector's problem: mean and variance in number of coupons to be collected to complete a set (unequal probabilities) for the variance calculation). Thus
\begin{eqnarray}
E\left[Y^2\right]
&=&
\operatorname{Var}(Y)+E[Y]^2
\\
&=&
10000^2\cdot98^2H^{(2)}_{98}-10000\cdot98H_{98}+\left(10000\cdot98H_{98}\right)^2
\\
&\approx&
2.72\cdot10^{13}\;.
\end{eqnarray}
Since there are $98$ eligible days per run and $\frac12n(n+1)$ days in $n$ runs, the expected runtime of the protocol is approximately
\begin{eqnarray}
E\left[\frac12\cdot\frac Y{98}\left(\frac Y{98}+1\right)\right]
&=&
\frac{E\left[Y^2\right]}{19208}+\frac{E[Y]}{196}
\\
&\approx&
1.42\cdot10^9
\end{eqnarray}
days, or about $4$ million years. We could probably cut down on a large part of this by letting the runs grow more slowly, since we're using runs of length about $5\cdot10^4$ days even though we only expect to need about $5\cdot10^2$ of them. Still, that would get us at best to something like $100000$ years, well beyond the expected lifespan of the prisoners. |
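The numbers above can be reproduced directly from the harmonic-number formulas (a Python sketch):

```python
from fractions import Fraction

H  = lambda n: sum(Fraction(1, k) for k in range(1, n + 1))
H2 = lambda n: sum(Fraction(1, k**2) for k in range(1, n + 1))

EY   = 10000 * 98 * H(98)                              # ~5.06e6 eligible days
VarY = 10000**2 * 98**2 * H2(98) - 10000 * 98 * H(98)  # ~1.57e12
EY2  = VarY + EY**2                                    # ~2.72e13

runtime = EY2 / 19208 + EY / 196                       # ~1.42e9 days
print(float(EY), float(VarY), float(runtime), float(runtime) / 365.25)
```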
submanifold with same homology | No, you can take a point in $\mathbb R^n$. |
Moment generating function of Random Sums | For $N\sim\text{Geometric}(p)$, we wish to compute $\mathbb E[\gamma^N]$ for $\gamma:=\frac{pe^t}{1-qe^t}$. Now, since $\left|\frac{pqe^t}{1-qe^t}\right|<1$ for $t$ in a neighbourhood of $0$, summing the geometric series justifies
$$\mathbb E[\gamma^N]=\sum\limits_{j=1}^{\infty}\gamma^j(1-p)^{j-1}p=\frac{p}{1-p}\sum\limits_{j=1}^{\infty}(\gamma(1-p))^j=\frac{p}{1-p}\frac{\gamma(1-p)}{1-\gamma(1-p)} = \frac{p\gamma}{1-q\gamma}.$$
That is,
$$\mathbb E\Big[\Big(\frac{pe^t}{1-qe^t}\Big)^N\Big] = \frac{p\frac{pe^t}{1-qe^t}}{1-q\frac{pe^t}{1-qe^t}}$$ |
Do fewer axioms suffice to define the Moore-Penrose pseudoinverse? (motivated by least squares method and group theory) | A counterexample is $$A=\pmatrix {0&0\\1&1\\0&0}$$ $$B=\pmatrix{4&3&0\\-4&-2&0}$$ satisfying $1.$ and $4.$ but neither $2.$ nor $3.$ |
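The claimed identities can be verified numerically (a NumPy sketch; which identity the question labels $1.$ through $4.$ follows the question, so the comments below name the identities explicitly rather than by number):

```python
import numpy as np

A = np.array([[0, 0], [1, 1], [0, 0]])
B = np.array([[4, 3, 0], [-4, -2, 0]])

print(np.array_equal(A @ B @ A, A))      # True:  ABA = A holds
print(np.array_equal(B @ A @ B, B))      # False: BAB = B fails
print(np.array_equal((A @ B).T, A @ B))  # True:  AB is symmetric
print(np.array_equal((B @ A).T, B @ A))  # False: BA is not symmetric
```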
If I know $\langle g(f), a_i \rangle$ for all $i$ where $a_i$ is a basis, do I know the coefficients of $f$? | does that determine what $f$ is?
Not in general. It determines what $g\circ f$ is, because a continuous linear functional is determined by its values on a basis of the space. But the function $g$ could attain some values more than once (e.g., be constant on some interval), which would make it impossible to recover $f$ from $g\circ f$.
can I write the coefficients of $f$ in terms of $b_i$
No. Any nonlinear transformation of a function hopelessly messes up its coefficients in a basis. (An exception: Fourier coefficients behave reasonably when one squares a function, but even then it's still a convolution of a doubly infinite sequence with itself; not something one can explicitly compute in practice.) |
Surface Integral [Proof] | If you let $\vec n=(a,b,c)$ then the integrand function can be written as $f(ax+by+cz)=f(\vec n\cdot\vec r)$. Choose the reference frame so that $\vec n$ lies along the $z$ axis: it follows that $f(\vec n\cdot\vec r)=f(nz)$, where $n=\sqrt{a^2+b^2+c^2}$. Integrate then in spherical coordinates:
$$
\int \int_F f(ax+by+cz)dS = \int_{0}^{2\pi}d\phi\int_0^\pi f(n\cos\theta)d(-\cos\theta)={2\pi}\int_{-1}^1 f(nu)du.
$$ |
dimension of a complex vector space of the form $ S:=\{cw_k+\overline cw_{-k}\mid \langle c,k\rangle=0\}. $ | $S$ should have the same dimension as
$$
S'=\{cx+\overline{c}x^2\mid \langle c,k\rangle=0,c\in\mathbb{C}^n\},
$$
which has dimension $n-1$ as a complex vector space. The number $2(n-1)$ might be due to considering $S'$ as a real vector space. |
Showing $\lim_{t \rightarrow T} [X(t) + X'(t)] < \infty$ for $X''(t) + X'(t) + X(t) = 0$. | This is a second order autonomous differential equation, and with the techniques in the link, it's pretty simple to find an exact implicit solution in terms of integrals. |
Is $\sum\limits_{k=1}^{n}\frac{1}{(n+k)^2}$ divergent or convergent? | It is convergent and the limit is $0$, because:
$$0\le\sum_{k=1}^{n}\frac{1}{(n+k)^2}\le n\times\frac {1}{n^2}=\frac {1}{n} $$
and both the constant sequence $0$ and the sequence $\frac {1}{n}$ converge to $0$, so the result follows by the squeeze theorem. |
Probability of two events without replacement | In how many ways can you pick two different numbers that sum to 7?
$$2+5$$
$$3+4$$
$$4+3$$
$$5+2$$
How many possibilities do you have in total?
First extraction: $5$
Second extraction: $4$
Total: $5\times4=20$
So $4/20=1/5$. |
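A brute-force check (a Python sketch; assuming the five numbers are $1,\dots,5$, which is consistent with the pairs listed above):

```python
from itertools import permutations
from fractions import Fraction

numbers = [1, 2, 3, 4, 5]               # assumed pool
draws = list(permutations(numbers, 2))  # ordered draws without replacement
hits = [d for d in draws if sum(d) == 7]
print(len(hits), len(draws), Fraction(len(hits), len(draws)))  # 4 20 1/5
```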
Evaluate $\int_0^\infty \frac{dx}{x^2+2ax+b}$ | Let the first integral be $I$ and the second one be $J$, then by putting $x=y-a$ and $y=z\sqrt{b-a^2}$ we have
\begin{align}
I(a,b)&=\int_0^{\infty}\frac{dx}{(x+a)^2+b-a^2}\\[10pt]
&= \int_a^{\infty}\frac{dy}{y^2+b-a^2}\\[10pt]
&=\frac{1}{\sqrt{b-a^2}}\int_{\large\frac{a}{\sqrt{b-a^2}}}^{\infty}\frac{dz}{z^2+1}\\[10pt]
&=\frac{1}{\sqrt{b-a^2}}\left(\frac{\pi}{2}-\arctan\left(\frac{a}{\sqrt{b-a^2}}\right)\!\right)\\[15pt]
\end{align}
and the 2nd integral is just a first derivative of $I$ with respect to $b$ times $-1$
\begin{equation}
\\[15pt]J(a,b)=-\frac{\partial I}{\partial b}=\frac{\partial }{\partial b}\left[\frac{1}{\sqrt{b-a^2}}\left(\arctan\left(\frac{a}{\sqrt{b-a^2}}\right)-\frac{\pi}{2}\right)\right]\\[10pt]
\end{equation}
Can you take it from here? |
Are local quasi-geodesics already quasi-geodesics in hyperbolic spaces? | Let $c:[a,b]\to X$ be a $k$-local $(L,A)$ quasi-geodesic in a uniquely geodesic $\delta$-hyperbolic space $X$. I will prove a local-to-global principle based on the proofs in Bridson-Haefliger, and want to thank Jeff Danciger who assisted me greatly in the proof.
I will need the following lemma, which may be proved by techniques similar to the proof of the theorem.
Lemma. If $k>2L(2D+4\delta+A)$ then $c$ lies within $D+2\delta$ of the geodesic $[c(a),c(b)]$ joining its endpoints.
Theorem. If $k>2L(3D+4\delta+A)$ then $c$ is an $(L',A')$-quasigeodesic.
Proof: Let $x,y,z$ be three points on $c$ such that $x=c(t-k/2)$, $y=c(t)$, and $z=c(t+k/2)$. Let $x',y',z'$ be points on $[c(a),c(b)]$ within $D+2\delta$ of $x,y,z$ respectively. Join $x$ and $z$ by a geodesic, and let $y_0$ be a point on $[x,z]$ within $D$ of $y$. I claim that $y_0$ is within $2\delta$ of a point $y''$ between $x'$ and $z'$.
To see the claim, draw the quadrilateral $(x',x,z,z')$ and cut it into two triangles. By $\delta$-hyperbolicity, $y_0$ lies within $2\delta$ of a point $w$ on an edge of the quadrilateral (beside the one it started on). Assume for the sake of contradiction that $w$ is on $[x,x']$. Then for large $k$ we have that $y_0$ is closer to $x'$ than $x$ because
$$ k/2L-A-D \le d(x,y_0) \le \delta + d(x,w) $$
so
$$ d(y_0,x') - d(x,x') \le \delta+d(w,x')-d(w,x')-d(x,w) \le -k/2L +A+D+2\delta $$
while we also have that $y_0$ is farther from $x'$ than $x$ because
$$ d(y_0,x')-d(x,x') \ge d(y,x) - D -(D+2 \delta)-(D+2 \delta) \ge k/2L-A-3D-4\delta $$
So we get a contradiction.
Now assume that $y_0$ is within $2\delta$ of a point $w$ on $[z,z']$. Then we have
\begin{align*}
d(y_0,z')-d(z,z') & \le 2 \delta + d(w,z') - d(z,w) - d(w,z') \\
& \le -\frac{k}{2L} + D+3 \delta +A
\end{align*}
is negative for large $k$, while
$$ d(y_0,z') -d(z,z') \ge d(y,z) - D-(D+2\delta) - (D+2 \delta) \ge \frac{k}{2L}-3D-4\delta-A $$
so it is also positive for large $k$, again yielding a contradiction.
We conclude that $y_0$ is within $2\delta$ of a point $y''$ on $[x',z']$.
Thus $y$ is within $D+2\delta$ of a point $y''$ on the geodesic $[x',z']$ and by $\delta$-hyperbolicity and because $k$ is large, $y'$ is on $[x',z']$ as well. Hence sufficiently spaced point projections are monotonic.
Let $t_n=a+nk/2$ and set $x_n=c(t_n)$ and let $x_n'$ be the nearest point projections to $[c(a),c(b)]$. We have
$$ d(x_n',x_{n+1}') \ge \frac{k}{2L} -A-2D-4\delta $$
and
$$ d(x_0',x_N') \ge N \left(\frac{k}{2L} -A-2D-4\delta \right) .$$
Now let $t,t'\in [a,b]$ and let $t_n,t_n'$ be their nearest points among $\{t_n\}$. For now let's put aside the case when $t'$ is close to $b$ and $b$ is nearly but not quite $a+Nk/2$ for some $N$. Then
$$ |t-t'|< 2 k/4 + |n-n'|(k/2) $$
$$ d(c(t),c(t_n)) \le kL/4+A $$
and likewise for $t',t_n'$. So
\begin{align*}
d(c(t),c(t')) & \ge d(c(t_n),c(t_n')) - d(c(t),c(t_n))-d(c(t'),c(t_n')) \\
& \ge |n-n'|(k/2L -A-2D-4\delta) - kL/2 -2A \\
& \ge (|t-t'|-1)(2/k)(k/2L -A-2D-4\delta) - kL/2 -2A \\
& \ge |t-t'|(2/k)(k/2L-A-2D-4\delta)- (2/k)(k/2L -A-2D-4\delta)-kL/2-2A \\
& =|t-t'|/L' -A'
\end{align*}
where $L'=(1/L -(2/k)(A+2D+4\delta))^{-1}$ and $A'=(2/k)(k/2L -A-2D-4\delta)+kL/2+2A$. To cover the case we excluded above, we should use something like $A'=\frac{3}{2L}+\frac{3kL}{4}+2A$, or for simplicity you can just use $\frac32$ times the old bound.
We conclude that $c$ is a global $(L',A')$-quasigeodesic. Notice that we could have picked any $k_0$ bigger than $2L(3D+4\delta+A)$ and less than $k$, so you have some flexibility in obtaining $L'$ and $A'$. This may let you trade some badness of $L'$ for some goodness of $A'$, if you are interested.
Remark: For my personal recordkeeping, the best $A'$ which follows from my work is $A'=\frac{3}{2L}+\frac{3kL}{4}+2A-\frac{3}{k}(A+2D+4\delta)$ |
Computing the ring of integers of a number field $K/\mathbb{Q}$: Is $\mathcal{O}_K$ always equal to $\mathbb{Z}[\alpha]$ for some $\alpha$? | If $K$ is a number field, and there exists $\alpha$ such that $\mathcal{O}_K=\mathbb{Z}[\alpha]$, then $K$ is called monogenic. The first example of a non-monogenic number field is due to Dedekind, who showed that for $K=\mathbb{Q}(\theta)$ where $\theta$ is a root of the cubic polynomial $x^3-x^2-2x-8$, the ring of integers does not satisfy $\mathcal{O}_K=\mathbb{Z}[\delta]$ for any $\delta$. In what follows, I will provide a full proof of this claim.
Proof that $\mathcal{O}_K\neq \mathbb{Z}[\delta]$ for any $\delta$: First, we verify that $f$ is indeed irreducible over $\mathbb{Q}$.
Since it is a cubic polynomial, if it were reducible it would have
a rational root which is impossible by the rational root theorem.
Let $\eta=\frac{\theta^{2}+\theta}{2}$, then by calculating the determinant
and traces, note that $\eta^{3}-3\eta^{2}-10\eta-8=0$. The elements $1,\theta,\eta$
are independent over $\mathbb{Q}$ since $\theta$ does not satisfy
a degree $2$ equation, and so $\mathbb{Z}\oplus\theta\mathbb{Z}\oplus\eta\mathbb{Z}$ is a full rank subring of $\mathcal{O}_{K}$.
I claim that $\mathcal{O}_{K}=\mathbb{Z}\oplus\theta\mathbb{Z}\oplus\eta\mathbb{Z}$.
To prove this, we calculate the discriminant.
Let
$$
B=\left[\begin{array}{ccc}
\text{Tr}_{K/\mathbb{Q}}\left(1\right) & \text{Tr}_{K/\mathbb{Q}}\left(\theta\right) & \text{Tr}_{K/\mathbb{Q}}\left(\eta\right)\\
\text{Tr}_{K/\mathbb{Q}}\left(\theta\right) & \text{Tr}_{K/\mathbb{Q}}\left(\theta^{2}\right) & \text{Tr}_{K/\mathbb{Q}}\left(\theta\eta\right)\\
\text{Tr}_{K/\mathbb{Q}}\left(\eta\right) & \text{Tr}_{K/\mathbb{Q}}\left(\eta\theta\right) & \text{Tr}_{K/\mathbb{Q}}\left(\eta^{2}\right)
\end{array}\right].
$$
Writing multiplication by $\theta$ and by $\eta$ as matrices in the basis $1,\theta,\theta^{2}$,
we have that
$$
M_{\eta}=\left[\begin{array}{ccc}
0 & \frac{1}{2} & \frac{1}{2}\\
4 & 1 & 1\\
8 & 6 & 2
\end{array}\right],\ M_{\theta}=\left[\begin{array}{ccc}
0 & 1 & 0\\
0 & 0 & 1\\
8 & 2 & 1
\end{array}\right],
$$
and from this we find that
$$
M_{\theta}=\left[\begin{array}{ccc}
0 & 1 & 0\\
0 & 0 & 1\\
8 & 2 & 1
\end{array}\right],\ M_{\theta}^{2}=\left[\begin{array}{ccc}
0 & 0 & 1\\
8 & 2 & 1\\
8 & 10 & 3
\end{array}\right],\ M_{\theta}M_{\eta}=\left[\begin{array}{ccc}
4 & 1 & 1\\
8 & 6 & 2\\
16 & 12 & 8
\end{array}\right],\ M_{\eta}^{2}=\left[\begin{array}{ccc}
6 & \frac{7}{2} & \frac{3}{2}\\
12 & 9 & 5\\
40 & 22 & 14
\end{array}\right].
$$
It follows that $\text{Tr}_{K/\mathbb{Q}}\left(1\right)=3$, $\text{Tr}_{K/\mathbb{Q}}\left(\theta\right)=1$,
$\text{Tr}_{K/\mathbb{Q}}\left(\theta^{2}\right)=5$, $\text{Tr}_{K/\mathbb{Q}}\left(\eta\right)=3$,
$\text{Tr}_{K/\mathbb{Q}}\left(\eta^{2}\right)=29$, $\text{Tr}_{K/\mathbb{Q}}\left(\theta\eta\right)=18$,
and so
$$
B=\left[\begin{array}{ccc}
3 & 1 & 3\\
1 & 5 & 18\\
3 & 18 & 29
\end{array}\right].
$$
Thus
$$
\text{disc}_{K/\mathbb{Q}}\left(\mathbb{Z}\oplus\theta\mathbb{Z}\oplus\eta\mathbb{Z}\right)=\det B=-503.
$$
As $503$ is a prime number, this implies that $\mathbb{Z}\oplus\theta\mathbb{Z}\oplus\eta\mathbb{Z}$
equals the ring of integers, since otherwise we must have $\text{disc}_{K/\mathbb{Q}}\left(\mathbb{Z}\oplus\theta\mathbb{Z}\oplus\eta\mathbb{Z}\right)=m^{2}\text{disc}_{K/\mathbb{Q}}\left(\mathcal{O}_{K}\right)$
for some integer $m>1$.
Now, let $\delta=a+b\theta+c\eta$ be a general element of $\mathcal{O}_{K}$.
Our goal is to show that $2|\text{disc}\left(\mathbb{Z}[\delta]\right)$.
The matrix for $M_{\delta}$ in the basis $1,\theta,\eta$, equals
$$
M_{\delta}=\left[\begin{array}{ccc}
a & b & c\\
4c & a-b & 2b+2c\\
4b+6c & 2c & a+2b+3c
\end{array}\right].
$$
Reducing modulo $2$, we have that
$$
M_{\delta}\equiv\left[\begin{array}{ccc}
a & b & c\\
0 & a-b & 0\\
0 & 0 & a+c
\end{array}\right]\text{ mod }2.
$$
Since this is upper triangular, it follows that
$$
\text{Tr}\left(M_{\delta}^{k}\right)\equiv a^{k}+(a-b)^{k}+(a+c)^{k}\equiv a-b+c\text{ mod }2,
$$
using that $x^{k}\equiv x\text{ mod }2$ for $k\geq1$.
Now, if $a-b+c\equiv0\text{ mod }2$, then the last column in the
matrix
$$
A=\left[\begin{array}{ccc}
\text{Tr}\left(1\right) & \text{Tr}\left(\delta\right) & \text{Tr}\left(\delta^{2}\right)\\
\text{Tr}\left(\delta\right) & \text{Tr}\left(\delta^{2}\right) & \text{Tr}\left(\delta^{3}\right)\\
\text{Tr}\left(\delta^{2}\right) & \text{Tr}\left(\delta^{3}\right) & \text{Tr}\left(\delta^{4}\right)
\end{array}\right]
$$
has each entry divisible by $2$, and hence $\text{disc}_{K/\mathbb{Q}}\left(\mathbb{Z}[\delta]\right)=\det A$
is divisible by $2$. If $a-b+c\equiv1\text{ mod }2$, then every
element in $A$ is an odd number. As this determinant is a sum over
permutations in $S_{3}$, we see that we are summing exactly $3!=6$
odd numbers, and so the determinant is even. In either case it follows
that $2|\det A$ as well, and since $\text{disc}_{K/\mathbb{Q}}\left(\mathcal{O}_{K}\right)=-503$,
we have shown that $\mathcal{O}_{K}\neq\mathbb{Z}[\delta]$ for any
$\delta\in\mathcal{O}_{K}$ . |
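The matrix and trace computations above can be checked mechanically (a SymPy sketch; $M_\theta$, $M_\eta$ are the matrices displayed above):

```python
import sympy as sp

# Multiplication by theta in the basis 1, theta, theta^2, using theta^3 = theta^2 + 2*theta + 8
Mt = sp.Matrix([[0, 1, 0], [0, 0, 1], [8, 2, 1]])
Me = (Mt**2 + Mt) / 2        # multiplication by eta = (theta^2 + theta)/2

print(Mt**3 - Mt**2 - 2*Mt - 8*sp.eye(3) == sp.zeros(3, 3))  # True: theta is a root of f

tr = lambda M: M.trace()
B = sp.Matrix([[3,      tr(Mt),    tr(Me)],
               [tr(Mt), tr(Mt**2), tr(Mt*Me)],
               [tr(Me), tr(Mt*Me), tr(Me**2)]])
print(B)        # Matrix([[3, 1, 3], [1, 5, 18], [3, 18, 29]])
print(B.det())  # -503
```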
Curvatures in differential geometry-interpretation | For (1), see here
For (2), see here
For (3), see here.
For (4), see here.
All of those articles also contain references to more fundamental work. |
Prove that if $f$ and $g$ are uniformly continuous on A and are both bounded on A, then $fg$ is uniformly continuous on A. | And now for something slightly different:
Since $f,g$ are bounded on $A$, their ranges lie in some compact set
$[-B,B]$. Since $[-B,B]^2$ is compact, we see that multiplication $\cdot : [-B,B]^2 \to \mathbb{R}$ is uniformly continuous.
Since $f,g$ are uniformly continuous separately, we see that the map
$p(x) = (f(x),g(x))$ is also uniformly continuous.
Since the composition of uniformly continuous maps is again uniformly continuous (this follows almost immediately from the definition), we see that
$\cdot \circ p$ is uniformly continuous. |