Finding number of pages in a book | If a book has 9 pages, each digit is used exactly once. If a book has 99 pages, each digit is used 10 times in the ones place and 10 times in the tens place, or 20 times in all. If the book has 999 pages, each digit is used in the ones place 100 times, the tens place 100 times, and the hundreds place 100 times, or 300 times in all.
So if the book has $99\dots9 = 10^k-1$ pages, each digit appears $10^{k-1}$ times in each of the $k$ positions, so each digit is used $k\cdot 10^{k-1}$ times.
So if, say, the number $5$ is used $7349$ times then:
$4000 < 7349 < 50000$, so there are between $9999$ and $99999$ pages. The digit $5$ was used $4000$ times up to page $9999$, so the remaining $3349$ occurrences were used in pages $10,000$ through $99,999$.
Between $10,000$ and $19,999$ each digit, except $1$, appears another $4000$ times, so there are fewer than $19,999$ pages.
Between $10,000$ and $10,999$ each digit except $1$ and $0$ appears $300$ times; between $11,000$ and $11,999$, another $300$ times. So between $10,000$ and $14,999$ the digit $5$ occurs $5\cdot300=1500$ times.
We still have $1849$ occurrences to account for.
From $15,000$ to $15,999$ the digit $5$ occurs $300$ times in the ones, tens, and hundreds places, just like before, and in the thousands place it occurs $1000$ times. This accounts for $1,300$ more occurrences. We have $549$ more occurrences to account for.
Pages $16,000$ to $16,999$ again account for $300$, while $17,000$ to $17,999$ would account for another $300$, which is more than what remains. So we know the book has between $17,000$ and $17,999$ pages.
We have $249$ more occurrences to account for. Between $17,000$ and $17,099$ each digit (except $1$ and $7$) occurs $20$ times, so between $17,000$ and $17,499$ the digit $5$ occurs $5\cdot20=100$ times. We have $149$ occurrences to account for. Between $17,500$ and $17,599$ the $5$ occurs $20$ times in the ones and tens places, but $100$ times in the hundreds place. That accounts for $120$ more, and we have $29$ more to go.
$17,600$ to $17,699$ account for $20$ more, leaving just $9$. Between $17,700$ and $17,749$ there are $5$ occurrences in the ones position. $4$ more to go.
$17,750-17,753$ account for the last $4$ occurrences (a $5$ in the tens place of each).
The book has exactly $17,753$ pages.
Note that not all counts have answers. A count of $7350$ would put us at $17,754$ pages, but $17,755$ pages makes the count $7352$. The count $7351$ is impossible.
Also, not every count determines an exact number of pages. If there were, say, exactly $3$ occurrences of the digit $5$, then the book could have had anywhere between $25$ and $34$ pages. |
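A brute-force sanity check of the arithmetic above (a minimal Python sketch, not part of the original answer):

    # Count occurrences of the digit 5 in the page numbers 1..n.
    def count_fives(n):
        return sum(str(page).count('5') for page in range(1, n + 1))

    assert count_fives(17753) == 7349   # the book has exactly 17,753 pages
    assert count_fives(17754) == 7350
    assert count_fives(17755) == 7352   # 7351 is skipped, hence impossible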
Functional equation: $f(3x) = 3f(x)$ | In the second step
$$f'(3x)=\frac{d(f(3x))}{d(3x)}$$
which can't be reconciled with what you've done in $(4)\to(5)$.
Indeed,
$$\frac{d(f(3x))}{dx}=\frac{d(f(3x))}{d(3x)}\frac{d(3x)}{dx}$$ |
Exercise 2 in Hatcher, section 1.2: the union of convex sets is simply connected | Do it by induction on $n$. The case $n = 2$ is exactly Van Kampen's theorem: $X_1$ and $X_2$ are path-connected, and so is their intersection (it's the intersection of two convex sets, hence convex, and it's nonempty, hence path-connected). Thus
$$\pi_1(X_1 \cup X_2) \cong \pi_1(X_1) *_{\pi_1(X_1 \cap X_2)} \pi_1(X_2) = 0.$$
Now for the induction step, suppose that you know the result is true for a given $n \ge 2$ and let $X_1, \dots, X_{n+1}$ be convex sets such that $X_i \cap X_j \cap X_k$ is connected for all $i,j,k$. By the induction hypothesis, $Y = X_1 \cup \dots \cup X_n$ is simply connected. It remains to show that $Y \cap X_{n+1}$ is path-connected, and you can apply Van Kampen's theorem again to conclude. (1)
So let's show $Y \cap X_{n+1}$ is path-connected. Clearly
$$Y \cap X_{n+1} = (X_1 \cap X_{n+1}) \cup \dots \cup (X_n \cap X_{n+1}).$$
Now suppose $x,y$ belong to $Y \cap X_{n+1}$; we are looking for a path from $x$ to $y$. Let
$$x \in X_i \cap X_{n+1}, \qquad y \in X_j \cap X_{n+1}.$$ By the hypothesis on the $X_\cdot$, the intersection $X_i \cap X_j \cap X_{n+1}$ is non-empty; choose $z$ inside it. Since $X_i \cap X_{n+1}$ is path-connected (it's the nonempty intersection of two convex sets), there is a path $\gamma$ from $x$ to $z$. Similarly, there is a path $\gamma'$ from $z$ to $y$. Concatenating these two paths gives a path $\alpha = \gamma \cdot \gamma'$ from $x$ to $y$. Thus $Y \cap X_{n+1}$ is path-connected and we can conclude (cf. (1)).
Remark. You forgot the assumption (which is written in Hatcher's book) that the sets $X_i$ have to be open. This is crucial for applying van Kampen's theorem; without it the statement is false in general. |
Prove that certain quotient space is homeomorphic to an interval. | Here’s an argument mostly from first principles. There are undoubtedly easier ways to arrive at the result, depending on how big a hammer you want to use, but working through this argument should give you a pretty good understanding of what’s really going on.
Clearly the $\simeq$-classes are order-convex, meaning that if $a\le b\le c$, and $a\simeq c$, then $a\simeq b\simeq c$. Let $C$ be one of these $\simeq$-classes, let $a=\inf C$, and let $b=\sup C$; the continuity of $\gamma$ implies that $a,b\in C$, so $C=[a,b]$. Thus, every $\simeq$-class is a closed interval. This implies that $I/\simeq\,$ inherits a linear order from $I$: if $C_1$ and $C_2$ are distinct members of $I/\simeq\,$, then $C_1<C_2$ iff there are $c_1\in C_1$ and $c_2\in C_2$ such that $c_1<c_2$. (No confusion should arise from using $<$ and $\le$ for this induced order.)
For convenience let $J=I/\simeq\,$, and for $x\in I$ let $\bar x\in J$ be the $\simeq$-class of $x$; $\bar 0$ and $\bar 1$ are clearly endpoints of the order $\langle J,\le\rangle$. (Here I am assuming without loss of generality that $I=[0,1]$.) If $\bar 0=\bar 1$, then $J$ is a single point, which is of course homeomorphic to a (degenerate) interval, so assume henceforth that $\bar 0<\bar 1$. If two members of $J$ were adjacent in the order $\le$, as intervals in $I$ they would share an endpoint and therefore be part of a single $\simeq$-class, i.e., a single member of $J$, which is a contradiction, so $J$ must be densely ordered by $\le$.
To see that the order topology induced on $J$ by this linear order is the same as the quotient topology, note first that $U\subseteq J$ is open in the quotient topology iff $\bigcup\limits_{u\in U}u$ is open in $I$ iff $\bigcup\limits_{u\in U}u$ is a union of pairwise disjoint open intervals in $I$. Suppose that $\bigcup\limits_{u\in U}u$ is a single open interval, say $(a,b)$, in $I$; then it’s easy to see that $U=(\bar a,\bar b)$ in $J$. Conversely, if $U=(\bar a,\bar b)$, then $\bigcup\limits_{u\in U}u=$ $(\max\bar a,\min\bar b)$. I’ll leave the cases $[0,b)$ and $(a,1]$ to you, as well as the extension of the argument from a single open interval to a union of pairwise disjoint open intervals.
Now let $Q=\{\bar q:q\in\mathbb{Q}\}$; I claim that $Q$ is dense in $J$. To see this, let $\bar x,\bar y\in J$ with $\bar x<\bar y$. There are $a,b,c,d\in I$ such that $\bar x=[a,b]$ and $\bar y=[c,d]$; since $\bar x<\bar y$, we have $b<c$, so there is a rational $q\in(b,c)$, and evidently $\bar x<\bar q<\bar y$. Thus, $J$ is a separable, densely ordered space with endpoints.
It’s well-known that up to isomorphism there is only one countable dense linear order with endpoints, $\mathbb{Q}\cap I$. (If you’ve not seen this before, it’s proved by a standard back-and-forth argument.) Let $f:Q\to\mathbb{Q}\cap I$ be an order-isomorphism, and extend $f$ to a function $h:J\to I$ as follows. Of course $h\upharpoonright Q=f$. If $\bar x\in J\setminus Q$, let $\langle\bar q_n:n\in\mathbb{N}\rangle$ be a monotonically increasing sequence in $Q$ converging to $\bar x$. Then $\langle f(\bar q_n):n\in\mathbb{N}\rangle$ is a monotonically increasing sequence in $\mathbb{Q}\cap I$, so it converges to some $y\in I$, and we set $h(\bar x)=y$. Of course one has to check that $h(\bar x)$ does not depend on the choice of sequence $\langle\bar q_n:n\in\mathbb{N}\rangle$; this is fairly straightforward $-$ just assume that two different sequences yield different results and get a contradiction $-$ and I leave it to you.
And if you’ve reached this point, you should have little trouble verifying that $h$ is a homeomorphism. |
Why is the limit $\lim_{x \to c}{(1+x)^{(1/x)}} = e$ only when $c=0$. | In order to work within the real line, you have to assume that $1+x >0$, i.e. $x >-1$.
$e^{x} >1+x$ for all $x \neq 0$. Hence we cannot have $(1+c)^{1/c}=e$ or $e^{c}=1+c$ with $-1<c<0$.
Note: $e^{-t} >1-t$ for $t >0$. This follows from the fact that $e^{-t} -(1-t)$ is an increasing function on $[0,\infty)$ and its value at $0$ is $0$. |
Does the operator $(\hat{f}\cdot m )^\vee$ map Schwartz into itself? | If $(\hat{f} \cdot m )^\vee$ is Schwartz, then so is $\hat{f}\cdot m$ (as the Fourier transform sends Schwartz space into itself). However, $\hat{f} \cdot m$ need not even be continuous (as we only assume $m\in L^\infty$).
So, no: the function $(\hat{f} \cdot m)^\vee$ need not be Schwartz. |
The vertex of a parabola is at (3,2) and its directrix is $x-y+1=0$. Find the equation of latus rectum. | $h$ and $k$ are not computed correctly. They should be,
$$h= 3+ \frac1{\sqrt2}\sqrt2=4,\>\>\>\>k=2-\frac1{\sqrt2}\sqrt2=1$$
which yields $x-y=3$. |
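To see where these come from (a step left implicit above): the distance from the vertex $(3,2)$ to the directrix $x-y+1=0$ is
$$d=\frac{|3-2+1|}{\sqrt{2}}=\sqrt{2},$$
and the focus lies that same distance beyond the vertex along the unit normal $\tfrac{1}{\sqrt2}(1,-1)$ pointing away from the directrix, giving $(h,k)=(3,2)+\sqrt2\cdot\tfrac{1}{\sqrt2}(1,-1)=(4,1)$. The latus rectum is the line through the focus parallel to the directrix: $x-y+c=0$ with $4-1+c=0$, i.e. $x-y=3$.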
What to learn after elementary geometry? | Maybe old-fashioned geometric conics? For example, see
Charles Smith's text
Cockshott/Walters' text
Clement V. Durell's text
Francis Sowerby Macaulay's text
Also, look through the books that you can find here as well as here. |
Let $f$ be a Lebesgue measurable function on $\Bbb{R}$ satisfying some properties, prove $f\equiv 0$ a.e. | Hint: Suppose $\theta = 1/2$ just to scratch around a bit. For $h>0$ we have
$$|\int_a^{a+h} f\,\,|^p \le \frac{1}{2}\cdot h^{p-1}\int_a^{a+h} |f|^p \implies |\frac{1}{h} \int_a^{a+h} f|^p \le \frac{1}{2}\cdot \frac{1}{h}\int_a^{a+h} |f|^p.$$
In the last inequality, let $h\to 0^+$ and apply the Lebesgue differentiation theorem. |
Determinant of a special nxn matrix | I will give here a matrix explanation.
Let us consider for example the case $4 \times 4$, which is in fact illustrative of the general case $n \times n$.
You can write $A$ as the sum:
$$\pmatrix{1&1&1&1\\2&2&2&2\\3&3&3&3\\4&4&4&4}+\pmatrix{1&2&3&4\\1&2&3&4\\1&2&3&4\\1&2&3&4}=\pmatrix{1\\2\\3\\4}\pmatrix{1&1&1&1}+\pmatrix{1\\1\\1\\1}\pmatrix{1&2&3&4}$$
which is the sum of $2$ rank-$1$ matrices, and thus has rank $\leq 2$ (in fact exactly $2$).
Thus, due to the rank-nullity theorem, $\dim \ker A= 4 -2>0$ (and more generally $n-2$). Thus $\det(A)=0$.
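If you want to check this numerically, here is a minimal sketch (assuming the $n\times n$ matrix with entries $a_{ij}=i+j$, as in the decomposition above):

    import numpy as np

    # A[i][j] = (i+1) + (j+1): the matrix from the decomposition above, n = 4.
    n = 4
    A = np.fromfunction(lambda i, j: (i + 1) + (j + 1), (n, n))
    print(np.linalg.matrix_rank(A))   # 2
    print(np.linalg.det(A))           # 0.0 (up to rounding)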
It explains the "barrier" beyond $n=2$. |
Relationship Between Closure Algebras and Topological Spaces | There is a quite old theory on so-called "closure spaces", i.e. a set with an operator $\operatorname{cl}: \mathscr{P}(X) \to \mathscr{P}(X)$ such that the following axioms are fulfilled:
$\operatorname{cl}(\emptyset)=\emptyset$.
$\forall A: A \subseteq \operatorname{cl}(A)$.
$\forall A,B: \operatorname{cl}(A \cup B)=\operatorname{cl}(A) \cup \operatorname{cl}(B)$.
And in a topological space we can define such a closure operation by defining
$$\operatorname{cl}(A)=\bigcap \{B : A \subseteq B \text{ and } B \text{ closed }\}$$
(among many equivalent ways).
One can develop a theory close to topology on closure spaces (e.g., continuity for $f: X \to Y$ between closure spaces is defined as $\forall A\subseteq X: f[\operatorname{cl}_X(A)] \subseteq \operatorname{cl}_Y(f[A])$).
Not all closure spaces arise from topological spaces in the above way, but we can characterise those that do by the extra condition
$$\forall A: \operatorname{cl}(\operatorname{cl}(A)) = \operatorname{cl}(A)$$
and closure spaces that obey this are called topological closure spaces; they are fully equivalent to topological spaces. But you can develop a lot of the theory in closure spaces without that last condition; Čech wrote a standard reference book on it quite a long time ago. They're not as popular as a research area any more, is my impression. |
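As an illustration, here is a small Python sketch (not from the original answer) that builds the induced closure operator on the two-point Sierpiński space and checks the three axioms, plus idempotence:

    from itertools import chain, combinations

    # Sierpinski space X = {0, 1}: open sets are {}, {1}, {0, 1}.
    X = frozenset({0, 1})
    opens = [frozenset(), frozenset({1}), X]
    closed = [X - U for U in opens]

    def cl(A):
        # Intersection of all closed sets containing A.
        A = frozenset(A)
        return frozenset.intersection(*[C for C in closed if A <= C])

    subsets = [frozenset(s) for s in
               chain.from_iterable(combinations(X, r) for r in range(3))]
    assert cl(set()) == set()                          # axiom 1
    assert all(A <= cl(A) for A in subsets)            # axiom 2
    assert all(cl(A | B) == cl(A) | cl(B)
               for A in subsets for B in subsets)      # axiom 3
    assert all(cl(cl(A)) == cl(A) for A in subsets)    # extra: topological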
Question about the Axiom of Specification and Russell's paradox | The approach of modern set theory is not merely to say, "I can prove this set can't be done, but I can't prove this one can't be", or even, "I can prove no contradiction would result from this set". Unfortunately, some "non-paradoxical" sets contradict each other, like an irresistible force meeting an immovable object.
Instead we start from some axioms that say either "this set exists" or "sets have these properties". Which axioms you start from varies by mathematician, but ZF or ZFC is the most popular choice. These axioms are sufficient to refute some hypothetical sets.
What would be an example of the force/object analogy? ZFC, if consistent, can neither prove nor disprove either of the following claims:
$\Bbb R$ is the same size as $\omega_1$;
$\Bbb R$ is the same size as $\omega_2$.
Needless to say, "a set the same size as both $\Bbb R$ and $\omega_1$" and the $\omega_2$ counterpart to that can't both exist.
ZFC can show, however, that some ordinal $\alpha$ is the same size as $\Bbb R$. For that ordinal, there is no set of sets of both that size and the size of $\Bbb R$, because there are "too many" such sets for one set to contain them. On the other hand, for any other ordinal this construction would just give us the empty set, which is fine. So ZFC implies there is exactly one ordinal $\alpha$ for which $\{x|x\approx\alpha\land x\approx\Bbb R\}$ is not a set. (Which ordinal that is depends on the model of ZFC.) |
Parameterize Intersection of Surfaces | Yes your equation is correct. It was smart of you to do a substitution.
If you need a visual verification: (plot omitted) |
Showing the sine series on the interval $[0,l]$ converges in $L^2$ | It seems that the questions are structured in a way that increases in difficulty. If you prove its uniform convergence, $(c)$, as in the comments, then $(a)$ and $(b)$ follow basically immediately.
So if you want a bottom-up approach, then it could go something like this.
a) Note that $|a_n|^2\leq\dfrac{1}{n^6}$ and the series $\sum\dfrac{1}{n^6}$ is convergent. By the comparison test, $\sum |a_n|^2$ is convergent.
Now Parseval's equation is simply $$\dfrac{1}{2l}\int_0^l|f(x)|^2dx = \sum_{n=-\infty}^{\infty}|c_n|^2,$$
where $f$ is your function and $c_n$ are the Fourier Coefficients. This will be very easy because you are already given the coefficients $c_n$ in your equation.
b) Pointwise convergence is again immediate by the comparison test, as $\sum \dfrac{1}{n^3}$ is convergent. In order to evaluate the given sum, you just need to prove $\sin(\dfrac{n\pi}{2}) = (-1)^{\frac{n-1}{2}}$ for odd $n$, and zero otherwise. |
How to find the number and coordinates of self-intersections points for a polygon? | An approach that will work for a general self-intersecting polygon in 3-dimensional space is as follows:
For each adjacent vertex pair $\{i,j\}$, loop over every other adjacent vertex pair $\{m,n\}$ with $m,n\notin\{i,j\}$, and invoke the following procedure:
The position vector of any point along the line joining vertices $i$ and $j$ is
${\bf \vec x} = {\bf \vec x_i} + \xi L_{ij} \bf \vec q \tag{EQ. 1}$
and likewise that of any point along the line joining vertices $m$ and $n$ is
${\bf \vec x} = {\bf \vec x_m} + \eta L_{mn}\bf \vec r \tag{EQ. 2}$
where
$$
L_{ij}=\|{\bf \vec x_j -\bf \vec x_i}\|\\
L_{mn}=\|\bf \vec x_n -\bf \vec x_m\|$$
and
$${\bf \vec q}=\frac{1}{L_{ij}}({\bf \vec x_j} -{\bf \vec x_i})\\
{\bf \vec r}=\frac{1}{L_{mn}}({\bf \vec x_n -\bf \vec x_m})$$
At the intersection, we have
${\bf \vec p} + {\bf \vec q}L_{ij}\xi -{\bf \vec r} L_{mn}\eta =0 \tag{EQ. 3}$
where ${\bf \vec p}={\bf \vec x_i} -{\bf \vec x_m}$. Now you can find a unit vector $\bf\vec s$ orthogonal to $\bf\vec r$ as a linear combination of the unit vectors $\bf\vec q$ and $\bf\vec r$ as follows:
$\bf\vec s = \alpha \bf\vec q + \beta \bf\vec r\tag{EQ. 4}$
From the condition $\bf\vec s\cdot\bf\vec r=0$ we have
$\alpha \bf\vec q\cdot\bf\vec r +\beta=0\tag{EQ. 5}$
and from the stipulation that $\bf\vec s$ be a unit vector, we have
$\alpha^2 \bf\vec q\cdot\bf\vec q +\beta^2\bf\vec r\cdot\bf\vec r+2\alpha\beta\bf\vec q\cdot\bf\vec r=1\tag{EQ. 6}$
From EQS. 5 and 6, $\beta$ can be eliminated giving an equation for $\alpha$ whereupon only the positive root is relevant. Following this, $\beta$ is given by EQ. 5, and $\bf\vec s$ from EQ. 4
Now, taking the inner product of EQ. 3 with $\bf\vec s$ gives
${\bf \vec s \cdot\bf \vec p} + L_{ij}(\bf \vec s \cdot\bf \vec q)\xi =0\tag{EQ. 7}$
from which you can evaluate $\xi$. If $\xi<0$ or $\xi>1$, then you abort the procedure (the intersection is outside the lines joining the vertices $i$ and $j$), and proceed to the next pair.
Otherwise, calculate $\bf\vec x$ from EQ. 1 and then evaluate $\eta$ from
$\eta= \frac{1}{L_{mn}}(\bf\vec x-\bf\vec x_m)\cdot\bf\vec r \tag{EQ. 8}$
If $0\le\eta\le1$, then $\bf\vec x$ is an intersection.
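A minimal Python sketch of one pass of this procedure (not part of the original answer; `segment_intersection` is a hypothetical helper name, numpy is assumed, and the final coincidence check is an addition to handle skew lines in 3D):

    import numpy as np

    def segment_intersection(xi, xj, xm, xn, tol=1e-9):
        # One pass of EQs. 1-8 for the edge pair (i, j), (m, n).
        # xi, xj, xm, xn: numpy float vectors (vertex positions).
        q = xj - xi; Lij = np.linalg.norm(q); q = q / Lij   # EQ. 1 direction
        r = xn - xm; Lmn = np.linalg.norm(r); r = r / Lmn   # EQ. 2 direction
        p = xi - xm
        qr = q @ r
        if abs(1.0 - qr * qr) < tol:           # parallel edges: no valid s
            return None
        alpha = 1.0 / np.sqrt(1.0 - qr * qr)   # from EQs. 5 and 6
        beta = -alpha * qr                     # EQ. 5
        s = alpha * q + beta * r               # EQ. 4: unit vector with s.r = 0
        xi_par = -(s @ p) / (Lij * (s @ q))    # EQ. 7 (the scalar xi)
        if not 0.0 <= xi_par <= 1.0:
            return None                        # outside segment i-j: abort
        x = xi + xi_par * Lij * q              # EQ. 1
        eta = ((x - xm) @ r) / Lmn             # EQ. 8
        if not 0.0 <= eta <= 1.0:
            return None
        if np.linalg.norm(x - (xm + eta * Lmn * r)) > tol:
            return None                        # skew lines: no true crossing
        return x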
This is a brute-force approach of $O(n^2)$. A more efficient alternative may be the Bentley-Ottmann algorithm. |
Expected number of paths required to separate elements in a binary tree | A recurrence / not a closed form solution
Your specification generates a random binary tree of $n$ leaves, but due to your specific randomization method, not all trees are equally likely. Also, you only care about the subtree containing both nodes, and don't care about the rest of the tree at all. For these reasons, I find it helpful to think of this as chopping away part of an array, instead of generating a whole tree.
So you have an array of length $n$, and you chop it into two parts, a left sub-array of length $a$ and a right sub-array of length $b$, where $a+b=n$, and $a \sim Uniform(\{1, 2, \dots, n-1\})$.
You start with a random pair of elements in the original array, equally likely to be any of ${n \choose 2}$ pairs. Three things can happen to this pair:
Event $A$: Both elements are in the left sub-array. This happens with prob ${a \choose 2} / {n \choose 2}$. Also, conditioned on $A$ happening, the pair is equally likely to be any of the ${a \choose 2}$ pairs residing in the left sub-array.
Event $B$: Similar to above, but for the right sub-array.
Event $C$: One element is in the left sub-array and the other is in the right sub-array. This happens with prob $ab / {n \choose 2}$.
So, if $f(n)$ is the answer you seek, i.e. expected number of chops to separate the two nodes, we have:
$$
\begin{aligned}
f(n) &= 1 + {1 \over n-1} \sum_{a=1}^{n-1} \Bigl( {{a \choose 2} \over {n \choose 2}}f(a) + {{b \choose 2} \over {n \choose 2}}f(b) \Bigr) \\
&= 1 + {1 \over n (n-1)^2} \sum_{a=1}^{n-1} \Bigl( a(a-1)f(a) + (n-a)(n-a-1)f(n-a) \Bigr) \\
&= 1 + {2 \over n (n-1)^2} \sum_{a=1}^{n-1} a(a-1)f(a)
\end{aligned}
$$
The first few values:
n f(n)
= =============
2 1.0
3 1.33333333333
4 1.55555555556
5 1.71666666667
6 1.84
7 1.9380952381
8 2.01836734694
9 2.08551587302
A bit more algebraic manipulation leads to some simplified forms, but I am still not sure we can get a closed form for $f(n)$.
Define $g(n) = n(n-1)f(n)$ and we have:
$$g(n) = n(n-1) + {2 \over n-1} (g(1) + \dots + g(n-1))$$
Define $s(n) = g(1) + \dots + g(n)$ and we have:
$$s(n) - s(n-1) = n(n-1) + {2 \over n-1} s(n-1)$$
$$s(n) = n(n-1) + {n+1 \over n-1} s(n-1)$$
From the last equation, and remembering $g(1)=s(1)=0$, we might be able to solve for $s(n)$, at least as a sum / product but perhaps better yet as a closed form, and we can "rewind" back to $f(n)$ from there.
UPDATE 2019/10/11: The rest consists of intuitive (inexact) arguments, for large $n$:
$$
\begin{aligned}
s(n) &= n(n-1) + {n+1 \over n-1} \Bigl[ (n-1)(n-2) + {n \over n-2}s(n-2)\Bigr] \\
&= n(n-1) + (n+1)(n-2) + {n+1 \over n-1}{n \over n-2} \bigl[(n-2)(n-3) + \dots\bigr] \\
&= n(n-1) + (n+1)(n-2) + {(n+1)n(n-3) \over n-1} + \dots \\
&\approx n^3 + \dots \\
\implies\quad g(n) &= s(n) - s(n-1) \approx 3 n^2 \\
\implies\quad f(n) &\approx 3
\end{aligned}
$$
Well, I wasn't expecting that! But I wrote code to calculate $f$ and it does seem true:
n f(n)
===== ========
1 0.000000
3 1.333333
10 2.142681
30 2.586898
100 2.830813
300 2.929329
1000 2.974032
3000 2.989885
10000 2.996485
30000 2.998682
100000 2.999556
I am not 100% sure that I am using the correct numeric data types (of sufficient precision) in my python code, so these may be slightly inaccurate, but the trend is unmistakably $f(\infty) \to 3$. Hmm, wonder if we can prove that...
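For reference, here is a minimal sketch of such a computation (not the author's original code; it uses exact rational arithmetic, which removes the precision worry):

    from fractions import Fraction

    # f(n) = 1 + 2/(n (n-1)^2) * sum_{a=1}^{n-1} a (a-1) f(a),  f(1) = 0.
    def f_values(N):
        f = [Fraction(0), Fraction(0)]   # f[0] unused, f(1) = 0
        partial = Fraction(0)            # running sum of a (a-1) f(a)
        for n in range(2, N + 1):
            partial += (n - 1) * (n - 2) * f[n - 1]
            f.append(1 + Fraction(2, n * (n - 1) ** 2) * partial)
        return f

    vals = f_values(100)
    print(float(vals[10]), float(vals[100]))   # ~2.142681, ~2.830813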
The effect of one chop in the $n \to \infty$ case can be modeled as cutting the real interval $(0,1)$ at a uniformly random point $x$. The probability that a random pair is not divided by the cut at $x$ is $x^2 + (1-x)^2$. So averaged over all $x \in (0,1)$ we have the probability that a random pair is not divided by a random cut:
$$p = \int_0^1 (x^2 + (1-x)^2) dx = \int_0^1 (1 - 2x + 2x^2) dx = \Bigl[x-x^2 + \frac23 x^3 \Bigr]_0^1 = \frac23$$
Come to think of it, a much easier proof that $p=\frac23$ is to realize that the two points of the pair and the cut-point are all independent and $\sim Uniform(0,1)$, so the cut divides the pair iff it is the middle number, which happens with prob $\frac13$ (by symmetry).
Anyway, now $f(n) \to 3$ makes sense. Either the pair is divided, in which case we're done, or if not, then we gotta start over; and when $n\to \infty$ then starting over you still have $n \to \infty$ (very roughly speaking). So the expected number of chops is
$$1 + \frac23 + (\frac23)^2 + (\frac23)^3 + \dots = {1 \over 1 - \frac23} = 3$$
Disclaimer: None of this properly proves that $f(n) < 3 \,\,\forall n$, or $\lim_{n \to \infty} f(n) = 3$, but I would be very surprised if they weren't true. |
$\int_{- \infty}^{+ \infty} |f(t)| dt < \infty \implies \int_{-\infty}^{x} f(t) dt$ is continuous? | HINT: For (B) consider $G_a(x)=\int_a^x f(t)\,dt$ and remind fundamental theorem of calculus.
$F(x)=\int_{-\infty}^a f(t)\,dt + G_a(x)$. |
Lebesgue Fundamental calculus theorem | First part is wrong. Take $f(x)=x^2$, then $B=2$ whereas $g_1(1)=3>B$.
Second part is the fundamental theorem of calculus: Define $F(y) := \int_0^y [f(x_0+x)] \, dx$ for $y\in [0,1-x_0]$. Then, $F$ is differentiable with $F'(y)=f(x_0+y)$, in particular $F'(0)=f(x_0)=\lim_n (\frac{1}{n})^{-1} \int_0^{1/n}[f(x_0+x)]\,dx$. |
Asymptotic behavior for the return to zero of a simple random walk | The approach is taken on pages 78 and 79 of Principles of Random Walk (2nd edition) by Frank Spitzer. I was able to see these pages using Google Books.
Spitzer first translates $[-\pi,\pi)^d$ by the vector $(\pi/2,\pi/2,\dots,\pi/2)$
which doesn't change the value of the integral.
Then he argues that the bulk of the integral is concentrated at two points,
the origin and $(\pi,\pi,\dots,\pi)$ both contributing the same value asymptotically.
The Taylor series expansion ${1\over d}\sum_{j=1}^d \cos k_j\approx \exp(-|k|^2/2d)$
near the origin finishes the result. |
Eliminate unknown function $f$ by obtaining a PDE | Warning: beware of division by zero!
However,
$$
z_x+2xz_y=(y+2xf'(x^2-y))+2x(x-f'(x^2-y))=y+2x^2,
$$
i.e.,
$$
z_x+2xz_y-y-2x^2=0.
$$ |
Is Sage on the same level as Mathematica or Matlab for graph theory and graph visualization? | Try asking on http://ask.sagemath.org/questions/
In fact, there was already a general question asked there about Sage versus other software, and the top answer said, "If you are doing graph theory or serious number theory, you shouldn't even be asking the question of which package to use." That is, if you're doing graph theory, or serious number theory, Sage is the winner by far. This was comparing Sage to all other computer algebra systems. So, I believe the answer is Sage is the best graph theory program that exists. And, it is getting better all the time.
http://ask.sagemath.org/question/1010/reliability-of-sage-vs-commercial-software
Sage combines together many open source graph theory tools that exist (nauty generator, networkx which contains tons of stuff by itself, cliquer, and more) and also all the things that have been programmed by Sage developers. And, if there is anything you want to be able to do that isn't already programmed, you can program it in Python (or if you need it to be really fast, in cython).
As far as visualization goes, yes, Sage produces extremely good graphics. If you save them to a PDF, you can zoom in as much as you want (I'm not exaggerating) and they will still be crisp, clean graphics. And, there is a graph editor that allows you to draw graphs and move the vertices around and add vertices and edges and things like that. Not to mention, there are many built-in graphs.
Oh, and by the way, if Mathematica (or one of many other programs) is installed on the same computer as Sage, you can use the functions from those other programs and get the results in Sage.
Here's a video tutorial on graph theory (second video from top). It gives a lot of detail, and includes all the info on graphics such as saving pdfs and using the graph editor to make nice looking graphs.
http://www.sagemath.org/help-video.html
Here's a real quick tutorial on graph theory in Sage:
http://steinertriples.fr/ncohen/tut/Graphs/
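To get a feel for it, a minimal Sage session might look like this (a sketch; the method names come from the lists below, but exact output can vary by version):

    g = graphs.PetersenGraph()     # one of the built-in constructors below
    print(g.chromatic_number())    # 3
    print(g.is_planar())           # False
    p = g.plot()
    p.save('petersen.pdf')         # vector output stays crisp at any zoom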
Here are the functions available:
g.add_cycle
g.add_edge
g.add_edges
g.add_path
g.add_vertex
g.add_vertices
g.adjacency_matrix
g.all_paths
g.allow_loops
g.allow_multiple_edges
g.allows_loops
g.allows_multiple_edges
g.am
g.antisymmetric
g.automorphism_group
g.average_degree
g.average_distance
g.bipartite_color
g.bipartite_sets
g.blocks_and_cut_vertices
g.bounded_outdegree_orientation
g.breadth_first_search
g.canonical_label
g.cartesian_product
g.categorical_product
g.category
g.center
g.centrality_betweenness
g.centrality_closeness
g.centrality_degree
g.characteristic_polynomial
g.charpoly
g.check_embedding_validity
g.check_pos_validity
g.chromatic_number
g.chromatic_polynomial
g.clear
g.clique_complex
g.clique_maximum
g.clique_number
g.cliques
g.cliques_containing_vertex
g.cliques_get_clique_bipartite
g.cliques_get_max_clique_graph
g.cliques_maximal
g.cliques_maximum
g.cliques_number_of
g.cliques_vertex_clique_number
g.cluster_transitivity
g.cluster_triangles
g.clustering_average
g.clustering_coeff
g.coarsest_equitable_refinement
g.coloring
g.complement
g.connected_component_containing_vertex
g.connected_components
g.connected_components_number
g.connected_components_subgraphs
g.convexity_properties
g.copy
g.cores
g.cycle_basis
g.db
g.degree
g.degree_constrained_subgraph
g.degree_histogram
g.degree_iterator
g.degree_sequence
g.degree_to_cell
g.delete_edge
g.delete_edges
g.delete_multiedge
g.delete_vertex
g.delete_vertices
g.density
g.depth_first_search
g.diameter
g.disjoint_routed_paths
g.disjoint_union
g.disjunctive_product
g.distance
g.distance_all_pairs
g.distance_graph
g.dominating_set
g.dump
g.dumps
g.eccentricity
g.edge_boundary
g.edge_connectivity
g.edge_cut
g.edge_disjoint_paths
g.edge_disjoint_spanning_trees
g.edge_iterator
g.edge_label
g.edge_labels
g.edges
g.edges_incident
g.eigenspaces
g.eigenvectors
g.eulerian_circuit
g.eulerian_orientation
g.flow
g.fractional_chromatic_index
g.genus
g.get_boundary
g.get_embedding
g.get_pos
g.get_vertex
g.get_vertices
g.girth
g.gomory_hu_tree
g.graph6_string
g.graphics_array_defaults
g.graphplot
g.graphviz_string
g.graphviz_to_file_named
g.hamiltonian_cycle
g.has_edge
g.has_loops
g.has_multiple_edges
g.has_vertex
g.incidence_matrix
g.independent_set
g.independent_set_of_representatives
g.interior_paths
g.is_bipartite
g.is_chordal
g.is_circular_planar
g.is_clique
g.is_connected
g.is_directed
g.is_drawn_free_of_edge_crossings
g.is_equitable
g.is_eulerian
g.is_even_hole_free
g.is_forest
g.is_gallai_tree
g.is_hamiltonian
g.is_independent_set
g.is_interval
g.is_isomorphic
g.is_line_graph
g.is_odd_hole_free
g.is_overfull
g.is_perfect
g.is_planar
g.is_prime
g.is_regular
g.is_split
g.is_subgraph
g.is_transitively_reduced
g.is_tree
g.is_triangle_free
g.is_vertex_transitive
g.kirchhoff_matrix
g.laplacian_matrix
g.latex_options
g.layout
g.layout_circular
g.layout_default
g.layout_extend_randomly
g.layout_graphviz
g.layout_planar
g.layout_ranked
g.layout_spring
g.layout_tree
g.lex_BFS
g.lexicographic_product
g.line_graph
g.longest_path
g.loop_edges
g.loop_vertices
g.loops
g.matching
g.matching_polynomial
g.max_cut
g.maximum_average_degree
g.merge_vertices
g.min_spanning_tree
g.minimum_outdegree_orientation
g.minor
g.modular_decomposition
g.multicommodity_flow
g.multiple_edges
g.multiway_cut
g.name
g.neighbor_iterator
g.neighbors
g.networkx_graph
g.num_edges
g.num_verts
g.number_of_loops
g.order
g.periphery
g.plot
g.plot3d
g.radius
g.random_edge
g.random_subgraph
g.random_vertex
g.relabel
g.remove_loops
g.remove_multiple_edges
g.rename
g.reset_name
g.save
g.set_boundary
g.set_edge_label
g.set_embedding
g.set_latex_options
g.set_planar_positions
g.set_pos
g.set_vertex
g.set_vertices
g.shortest_path
g.shortest_path_all_pairs
g.shortest_path_length
g.shortest_path_lengths
g.shortest_paths
g.show
g.show3d
g.size
g.spanning_trees_count
g.sparse6_string
g.spectrum
g.steiner_tree
g.strong_orientation
g.strong_product
g.subdivide_edge
g.subdivide_edges
g.subgraph
g.subgraph_search
g.subgraph_search_count
g.subgraph_search_iterator
g.szeged_index
g.tensor_product
g.to_directed
g.to_simple
g.to_undirected
g.topological_minor
g.trace_faces
g.transitive_closure
g.transitive_reduction
g.traveling_salesman_problem
g.two_factor_petersen
g.union
g.version
g.vertex_boundary
g.vertex_connectivity
g.vertex_cover
g.vertex_cut
g.vertex_disjoint_paths
g.vertex_iterator
g.vertices
g.weighted
g.weighted_adjacency_matrix
g.wiener_index
g.write_to_eps
And here are some graphs you can easily generate:
graphs.BalancedTree
graphs.BarbellGraph
graphs.BidiakisCube
graphs.BrinkmannGraph
graphs.BubbleSortGraph
graphs.BuckyBall
graphs.BullGraph
graphs.ButterflyGraph
graphs.ChvatalGraph
graphs.CirculantGraph
graphs.CircularLadderGraph
graphs.ClawGraph
graphs.CompleteBipartiteGraph
graphs.CompleteGraph
graphs.CompleteMultipartiteGraph
graphs.CubeGraph
graphs.CycleGraph
graphs.DegreeSequence
graphs.DegreeSequenceBipartite
graphs.DegreeSequenceConfigurationModel
graphs.DegreeSequenceExpected
graphs.DegreeSequenceTree
graphs.DesarguesGraph
graphs.DiamondGraph
graphs.DodecahedralGraph
graphs.DorogovtsevGoltsevMendesGraph
graphs.DurerGraph
graphs.DyckGraph
graphs.EmptyGraph
graphs.ErreraGraph
graphs.FibonacciTree
graphs.FlowerSnark
graphs.FranklinGraph
graphs.FriendshipGraph
graphs.FruchtGraph
graphs.FuzzyBallGraph
graphs.GeneralizedPetersenGraph
graphs.GoldnerHararyGraph
graphs.Grid2dGraph
graphs.GridGraph
graphs.GrotzschGraph
graphs.HanoiTowerGraph
graphs.HeawoodGraph
graphs.HerschelGraph
graphs.HexahedralGraph
graphs.HigmanSimsGraph
graphs.HoffmanSingletonGraph
graphs.HouseGraph
graphs.HouseXGraph
graphs.HyperStarGraph
graphs.IcosahedralGraph
graphs.IntervalGraph
graphs.KneserGraph
graphs.KrackhardtKiteGraph
graphs.LCFGraph
graphs.LadderGraph
graphs.LollipopGraph
graphs.MoebiusKantorGraph
graphs.MoserSpindle
graphs.MycielskiGraph
graphs.MycielskiStep
graphs.NKStarGraph
graphs.NStarGraph
graphs.OctahedralGraph
graphs.OddGraph
graphs.PappusGraph
graphs.PathGraph
graphs.PetersenGraph
graphs.RandomBarabasiAlbert
graphs.RandomBipartite
graphs.RandomGNM
graphs.RandomGNP
graphs.RandomHolmeKim
graphs.RandomInterval
graphs.RandomLobster
graphs.RandomNewmanWattsStrogatz
graphs.RandomRegular
graphs.RandomShell
graphs.RandomTree
graphs.RandomTreePowerlaw
graphs.ShrikhandeGraph
graphs.StarGraph
graphs.TetrahedralGraph
graphs.ThomsenGraph
graphs.ToroidalGrid2dGraph
graphs.WheelGraph
graphs.WorldMap
graphs.cospectral_graphs
graphs.line_graph_forbidden_subgraphs
graphs.nauty_geng
graphs.trees |
What's the meaning of "Limits in Groups are done as in Set"? | The relation "X is done as Y" is clearly symmetric. So it doesn't really matter here which formulation you pick. But, in context there is a difference, because usually you first construct limits in the category of sets, and then look at more complicated examples, perhaps starting with some familiar and basic categories such as the category of groups. Then you realize that it can be done in such a way that the underlying set is just the limit of the underlying sets, i.e. the forgetful functor preserves limits. In fact, something much stronger is true: the forgetful functor creates limits. And generally speaking, when we already know Y and are currently investigating X, one prefers to say "X is done as Y".
To be precise, of course the construction of limits of groups involves more than just writing down the underlying set, so actually it is more complicated than the construction of limits of sets. To unify them, however, we can construct limits of algebraic structures of any type (function symbols with arities and equations between them). Groups are algebraic structures with function symbols $e^{[0]},i^{[1]},m^{[2]}$ (the exponents denote the arity) and equations $m(x,e)=m(e,x)=x$, $m(x,i(x))=m(i(x),x)=e$, $m(x,m(y,z))=m(m(x,y),z)$, and sets are algebraic structures with an empty set of function symbols and an empty set of equations. Limits of all algebraic structures (sets, groups, lattices, vector spaces, etc.) are done the same way: we take the limit of the underlying sets and then define the operations pointwise. |
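As a concrete special case of the pointwise construction just described (a standard fact, spelled out here for illustration): the product of a family of groups $(G_j)_{j\in J}$ has as underlying set the product $\prod_j G_j$ of the underlying sets, with operations defined coordinatewise,
$$e = (e_j)_{j},\qquad i\bigl((g_j)_{j}\bigr) = \bigl(i(g_j)\bigr)_{j},\qquad m\bigl((g_j)_{j},(h_j)_{j}\bigr) = \bigl(m(g_j,h_j)\bigr)_{j},$$
and the projections are group homomorphisms; this exhibits it as the product (a limit) in the category of groups.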
Determining the proportional rate constant for chemical reaction at specific temperature | This looks right. The plot of $\ln(k)$ vs. $1/T$ is a straight line, so $k$ can be computed without explicitly computing $k_0$ or $E_a$. |
Exact Values of the integal $\int_0^\infty \frac{r^{n-1}}{(1+r^2)^{\frac{s}{2}}}\,dr$ | By setting $\frac{1}{1+r^2}=u$ we get that $E_n(s)$ depends on a value of the Beta function:
$$ E_n(s) = \frac{\Gamma\left(\frac{n}{2}\right)\,\Gamma\left(\frac{s-n}{2}\right)}{2\,\Gamma\left(\frac{s}{2}\right)}.\tag{1} $$ |
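Spelling out the substitution (a routine computation, added for completeness): with $u=\frac{1}{1+r^2}$ we have $r=\sqrt{\frac{1-u}{u}}$ and $\mathrm{d}u=-2ru^2\,\mathrm{d}r$, hence
$$ E_n(s) = \frac{1}{2}\int_0^1 u^{\frac{s-n}{2}-1}(1-u)^{\frac{n}{2}-1}\,\mathrm{d}u = \frac{1}{2}\,B\!\left(\frac{s-n}{2},\frac{n}{2}\right), $$
which is exactly $(1)$.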
Limit of a sequence defined by a non-linear recurrence relation | Hint
$$x_{n+1}-x_{n}=-\dfrac{n}{n+1}(x_{n}-x_{n-1})\Longrightarrow (n+1)(x_{n+1}-x_{n})=-n(x_{n}-x_{n-1})$$
so
$$(n+1)(x_{n+1}-x_{n})=(x_{1}-x_{0})(-1)^n=(-1)^n$$
so
$$x_{n+1}-x_{n}=\dfrac{(-1)^n}{n+1}$$
so
$$x_{n}=\sum_{i=1}^{n}(x_{i}-x_{i-1})+x_{0}=\sum_{i=1}^{n}\dfrac{(-1)^{i-1}}{i}$$
we know
$$\ln{2}=1-\dfrac{1}{2}+\dfrac{1}{3}-\cdots=\sum_{i=1}^{\infty}\dfrac{(-1)^{i-1}}{i},$$ so $x_{n}\to\ln 2$. |
Question in relation to Fundamental Theorem of Calculus | Let
$$
F(x) = \int_0^x \frac{1}{\sqrt{1+t^6}}dt
$$
and write $f(x) = F(x^2) - F(-x)$.
Does this help? |
What is the inverse element of $u=i+j+k$ with respect to the multiplication? | For any nonzero quaternion $q$, obviously we have*
$$q\frac{\bar{q}}{q\bar q}=1$$
Said another way: $q^{-1}=\frac{\bar q}{q\bar q}$
In your case, $\bar q=-i-j-k$ and $q\bar q=3$. ($q\bar q$ is always equal to the squared Euclidean length of the coefficient vector.)
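A quick numerical check (a hand-rolled Hamilton product in Python, not part of the original answer):

    # Quaternions as (w, x, y, z) ~ w + x i + y j + z k.
    def qmul(a, b):
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2)

    q    = (0, 1, 1, 1)               # i + j + k
    qinv = (0, -1/3, -1/3, -1/3)      # conjugate divided by q qbar = 3
    print(qmul(q, qinv))              # (1.0, 0.0, 0.0, 0.0)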
*There is a tiny bit of voodoo going on here with writing fractions over a skew field like $\mathbb H$. But since the denominator is a real number, the expression is 'saved.' If you really like, you can rewrite this to be $q(\bar q(q\bar q)^{-1})=1$ which is technically best but less readable. |
subspace of a metric space | Let $d'$ be the metric restriced to $A$, $B_d(x,\epsilon)=\{y\in S:d(x,y)<\epsilon\}$ and $B_{d'}(x,\epsilon)=\{y\in A:d'(x,y)<\epsilon\}$
$\underline{A\cap(B\cup A^c)^\circ\subseteq B}$
Suppose $x\in A\cap(B\cup A^c)^\circ$. So $x\in A$ and $x\in(B\cup A^c)^\circ$. Hence there is an $\epsilon>0$ such that $B_d(x,\epsilon)\subseteq B\cup A^c$. Because $x\notin A^c$, we must have $x\in B$.
$\underline{B\subseteq A\cap(B\cup A^c)^\circ}$
Let $x\in B$. Since $B$ is open in $A$ there is an $\epsilon>0$ such that $B_{d'}(x,\epsilon)\subseteq B$. This implies $B_d(x,\epsilon)\subseteq B\cup A^c$. So we see that $x\in A$ and $x\in (B\cup A^c)^\circ$. |
Remainder of polynomial product, CRT solution via Bezout | Hint $ $ We can read off a CRT solution from the Bezout equation for the gcd of the moduli, viz. $$\bbox[5px,border:1px solid #c00]{\text{$\color{#90f}{\text{scale}}$ the Bezout equation by the residue difference - then ${\rm \color{#c00}{re}\color{#0a0}{arrange}}$}}$$
$$\begin{align}
{\rm if}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\begin{array}{rr} &f\equiv\, f_g\pmod{\!g}\\ &f\equiv\, f_h\pmod{\! h} \end{array}\ \ {\rm and}\ \ \gcd(g,h) = 1\\[.4em]
{\rm then}\ \ \ f_g - f_h\, &=:\ \delta\qquad\qquad\ \ \rm residue\ difference \\[.2em]
\times\qquad\quad\ \ \ 1\ &=\ \ a g\, +\, b h\quad\ \rm Bezout\ equation\ for \ \gcd(g,h) \\[.5em]\hline
\Longrightarrow\ \,f_g\, \color{#c00}{-\, f_h}\, &= \color{#0a0}{\delta ag} + \delta bh\quad\ \rm product\ of \ above\ (= {\color{#90f}{scaled}}\ Bezout)\\[.2em]
\Longrightarrow \underbrace{f_g \color{#0a0}{- \delta ag}}_{\!\!\!\large \equiv\ f_{\large g}\! \pmod{\!g}}\! &= \underbrace{\color{#c00}{f_h} + \delta bh}_{\large\!\! \equiv\ f_{\large h}\! \pmod{\!h}}\ \ \ \underset{\large {\rm has\ sought\ residues}\phantom{1^{1^{1^{1^1}}}}\!\!\!}{\rm \color{#c00}{re}\color{#0a0}{arranged}\ product}\rm\! = {\small CRT}\ solution\end{align} $$
More generally: $ $ if the gcd $\,d\neq 1\,$ then it is solvable $\iff d\mid f_g-f_h\,$ and we can use the same method we used below for $\,d=\color{#c00}2\!:\,$ scale the Bezout equation by $\,(f_g-f_h)/d = \delta/d.\,$ Since $\,\color{#c00}2\,$ is invertible in the OP, we could have scaled the Bezout equation by $\,1/2\,$ to change $\,\color{#c00}2\,$ to $\,1,\,$ but not doing so avoids (unneeded) fractions so simplifies the arithmetic.
In our specific problem we have the major simplification that the Bezout equation is obvious being simply the moduli difference $ =\color{#c00}2$
hence $\ \ \smash[t]{\overbrace{\color{0a0}{6x\!-\!1}-\color{#90f}{(2x\!+\!1)}}^{\rm residue\ difference}} = \overbrace{(2x\!-\!1)}^{\!\text{scale LHS}}\,\overbrace{\color{#c00}2 = (\color{0a0}{x^2\!+\!6}-\color{#0a0}{(x^2\!+\!4)}}^{{\overbrace{\textstyle\color{#c00}2\, =\, x^2\!+\!6-(x^2\!+\!4)_{\phantom{|_{|_i}}}\!\!\!\!}^{\Large \text{Bezout equation}}}})\overbrace{(\color{#0a0}{2x\!-\!1})}^{\text{scale RHS}},\ $ which rearranged
yields $\ \ \underbrace{\color{}{6x\!-\!1 - (\color{#0a0}{2x\!-\!1})(x^2\!+\!6)}}_{\large
\equiv\ \ 6x\ -\ 1\ \pmod{x^2\ +\ 6}\!\!\!}\, =\, \underbrace{\color{#90f}{2x\!+\!1} -\color{#0a0}{(2x\!-\!1)(x^2\!+\!4)}}_{\large \equiv\ \ 2x\ +\ 1\ \pmod{x^2\ +\ 4}\!\!\!} =\,r(x) =\, $ CRT solution.
Remark $ $ If ideals and cosets are familiar then the above can be expressed more succinctly as
$$ \bbox[12px,border:1px solid #c00]{f_g\! +\! (g)\,\cap\, f_h\! +\! (h) \neq \phi \iff f_g-f_h \in (g)+(h)}\qquad$$ |
Heat equation in cylindrical coordinates with Neumann boundary condition | Of course we use separation of variables:
Let $T(r,t)=F(r)G(t)$ ,
Then $F(r)G'(t)=D\left(F''(r)G(t)+\dfrac{1}{r}F'(r)G(t)\right)$
$F(r)G'(t)=DG(t)\left(F''(r)+\dfrac{1}{r}F'(r)\right)$
$\dfrac{G'(t)}{DG(t)}=\dfrac{F''(r)+\dfrac{1}{r}F'(r)}{F(r)}=-s^2$
$\begin{cases}\dfrac{G'(t)}{G(t)}=-Ds^2\\F''(r)+\dfrac{1}{r}F'(r)+s^2F(r)=0\end{cases}$
According to http://eqworld.ipmnet.ru/en/solutions/ode/ode0207.pdf,
$\begin{cases}G(t)=c_3(s)e^{-Dts^2}\\F(r)=\begin{cases}c_1(s)J_0(rs)+c_2(s)Y_0(rs)&\text{when}~s\neq0\\c_1\ln r+c_2&\text{when}~s=0\end{cases}\end{cases}$
$\therefore T(r,t)=C_1\ln r+C_2+\int_0^\infty C_3(s)e^{-Dts^2}J_0(rs)~ds+\int_0^\infty C_4(s)e^{-Dts^2}Y_0(rs)~ds$
$\partial_rT(r,t)=\dfrac{C_1}{r}-\int_0^\infty sC_3(s)e^{-Dts^2}J_1(rs)~ds-\int_0^\infty sC_4(s)e^{-Dts^2}Y_1(rs)~ds$
Substituting $T(r,0)=T_0$ and $\partial_rT(r_2,t)=T_1\sin\omega t$ respectively , you will get this system of equations $\begin{cases}C_1\ln r+C_2+\int_0^\infty C_3(s)J_0(rs)~ds+\int_0^\infty C_4(s)Y_0(rs)~ds=T_0\\\dfrac{C_1}{r_2}-\int_0^\infty sC_3(s)e^{-Dts^2}J_1(r_2s)~ds-\int_0^\infty sC_4(s)e^{-Dts^2}Y_1(r_2s)~ds=T_1\sin\omega t\end{cases}$ .
The only thing is that $C_3(s)$ and $C_4(s)$ are in fact really difficult to find. |
is this a mistake or am I missing something? | Simple algebra shows:
\begin{align}
\vert b_n-L\vert&=\vert(\frac1n \sum_{j=1}^n a_j)-L\vert\\
&=\vert(\frac1n \sum_{j=1}^n a_j)-\frac 1n \sum_{j=1}^nL\vert\\
&=\frac 1n\vert\sum_{j=1}^n (a_j-L)\vert \\
&\leq \frac 1n\sum_{j=1}^n |a_j-L|.
\end{align} |
Is this proof of $\operatorname{Var}(\overline{x})=\frac{\sigma^2}{N}$ correct? | Let $X,Y$ be random variables with $\mathbb{E}X^{2}<\infty$ and
$\mathbb{E}Y^{2}<\infty$. Then we have the following rules.
$\text{Var}\left(aX+b\right)=a^{2}\text{Var}X$
If moreover $X$ and $Y$ are independent then $\text{Var}\left(X+Y\right)=\text{Var}X+\text{Var}Y$.
Applying this on iid $X_{1},\cdots,X_{N}$ with $\mathbb{E}X_{1}^{2}<\infty$ we find:$$\text{Var}\overline{X}=\text{Var}\frac{1}{N}\left(X_{1}+\cdots+X_{N}\right)=$$$$\frac{1}{N^{2}}\text{Var}\left(X_{1}+\cdots+X_{N}\right)=\frac{1}{N^{2}}\left[\text{Var}X_{1}+\cdots+\text{Var}X_{N}\right]=\frac{\text{Var}X_{1}}{N}$$ |
Testing the Convergence of a series | You have done most of the work but your aim is unachievable. Perhaps you have the wrong expression for the error term (should there be an $x^n$ term as well)? To see this, for $n > k$, use the power series expansion for $\log(1+x)$ to conclude,
$$\log\left(1+\frac{k}{n}\right) > \frac{k}{n} - \frac{k^2}{2n^2},$$ which means that
$$\sum_{n=k+1}^{N} \log\left(1+\frac{k}{n}\right) > \sum_{n=k+1}^N \frac{k}{n} - \sum_{n=k+1}^N\frac{k^2}{2n^2}.$$
The second sum on the right converges while the first sum diverges, so the sum on the left eventually becomes unbounded and therefore the original product also diverges.
There is a general result along these lines that says, subject to the right conditions on $m_n$, that an infinite product $\prod (1 + m_n) $ converges in the same way as $\sum m_n$, see here for more details. |
Let $G$ be a finite group, let $p$ be a prime dividing $|G|$, and let $K$ be a $p$-Sylow subgroup of $G$. Show that $N_G(N_G(K)) = N_G(K)$. | If $g\in G-N_G(K) $, then $gKg^{-1}\neq K$ is a Sylow subgroup of $G$. Since $K$ is normal in $N_G(K) $, it is the unique subgroup of the same order as $K$ in $N_G(K) $. Thus $N_G(K) $ cannot contain $gKg^{-1}$, so $g$ does not normalize $N_G(K) $. |
Prove that $\gcd(3^n-2,2^n-3)=1$ if and only if $\gcd(6^n-4,2^n-3)=1$ | Let $A=3^n-2$ and $B=2^n-3$. If $n\equiv 3\pmod{4}$, $\;5\mid \gcd(A,B)$ by Fermat's little theorem.
We have $AB+3A+2B=6^n-6$ and $2\nmid B,\; 3\nmid A$, hence:
$$ \gcd(A,B) = \gcd(6^n\color{red}{-6}, 2^n-3).$$ |
Are these fields equal? | $\zeta_3$ is a root of $x^2 + x + 1$ since $x^3 - 1 = (x - 1)(x^2 + x + 1)$. So its degree is $2$ not $3$.
For 2) you can do the same kind of thing. The minimal polynomial of $\sqrt[3]2 \zeta_3$ is $x^3 - 2$ so $\mathbb{Q}(\sqrt[3]2 \zeta_3) \ne \mathbb{Q}(\sqrt[3]2, \zeta_3)$. It's a 3-dimensional subspace of a 6-dimensional $\mathbb{Q}$-vector space. |
In the ring of even integers , do 4 and 6 have a lcm and gcd? If they have , what are they? | Let $E$ be the (nonunital) ring of even integers. A common divisor of $4$ and $6$ in $E$ is also a common divisor in the integers, so the only candidate is $2$; however, as you observe, this is not a divisor of $6$ in $E$. Therefore the two elements have no common divisor.
Similarly, a common multiple in $E$ is also a common multiple in the integers, so we have to look to numbers of the form $12x$; since $12x/4=3x$, we need $x$ to be even. The candidate for the lcm is thus $24$. Is it?
Note that “least common multiple” means “a common multiple that divides every other common multiple” (the standard order relation in the integers is not considered).
Now try $72$.
Can any other common multiple be the lowest common multiple? |
Primality test for Mersenne numbers using the fourth Chebyshev polynomial of the first kind | Your test is in fact equivalent to the Lucas Lehmer test. First of all because of $$f(f(x))=T_4(x)$$ with $$f(x)=2x^2-1$$ your test is equivalent to the following test :
If $$S_0=2$$ and $$S_{i+1}=f(S_i)$$ then $$2^p-1$$ is prime if and only if $$S_{p-1}\equiv -1\mod M_p$$
If we compare to Lucas-Lehmer, we have $$T_0=4$$ $$T_{i+1}=T_i^2-2$$ then the statement is that $$2^p-1$$ is prime if and only if $$T_{p-2} \equiv 0\mod M_p$$ which is equivalent to $$T_{p-1}\equiv -2\mod M_p$$
We can easily show $T_i=2S_i$ for all $i$ by considering $$x^2-2=2(2(\frac{x}{2})^2-1)$$ and from this the equivalence easily follows. |
Separable Algebra, Equivalence of Definitions | See this blog post. I don't know a way to prove this that doesn't involve just classifying the separable algebras over a field. |
On the converse of Schur's Lemma | There are counterexamples in general.
Suppose $V$ is a non-split extension of two non-isomorphic simple modules. I.e., $V$ has a unique simple submodule $W$ with simple quotient $U=V/W$ where $U\not\cong W$. Then any non-zero endomorphism of $V$ is an isomorphism, since the only possible kernel is $W$, but the image can't be $V/W=U$ since $V$ has no submodule isomorphic to $U$. If, further, $U$ and $W$ have $F$ as endomorphism ring, this isomorphism must be a scalar multiple of the identity.
For an explicit example, let $G$ be the group of upper triangular $2\times 2$ matrices over a finite field $F$ with $\vert F\vert>2$, and let $V$ be the natural $2$-dimensional $FG$-module. |
Show that $(1-\frac{1}{n})^{\sum_{i=1}^n X_i}$ is an unbiased estimator of $\tau(\theta)=e^{-\lambda}$ | Hint: Try using the fact that $S= \sum X_i$ is a Poisson random variable with parameter $n\lambda$ so that $$E\left[\left(1-\frac{1}{n}\right)^S\right]=\sum_{i=0}^\infty \left(1-\frac{1}{n}\right)^i e^{-n\lambda}\frac{(n\lambda)^i}{i!}=e^{-n\lambda}\sum_{i=0}^\infty \frac{(n\lambda - \lambda)^i}{i!}$$ |
Prove $T$ is diagonalizable and find a basis $B$ of $V$ such that the matrix associated to $T$ be diagonal. | if a n * n matrix has n different eigenvalues then it is diagonalizable.
Here you have to solve $$\det(xI-T)=0$$ for $x$.
so you have $$ (-4+x)[(-3+x)(-3+x)-1]=0$$
$$ (-4+x)(x^2+9-6x-1)=0$$
$$(-4+x)(x^2-6x+8)=0$$
$$x=4,\,2,\,4$$
Now you have to find the eigenspaces: you have to show that the kernel of $T-4I$ is a $2$-dimensional subspace. |
A "simple" trigonometric computation... | Got it!
Consider $T(X)=256X^4-576X^3+432X^2-120X+9$.
Then $\cos^2\frac{\pi}{18}$, $\cos^2\frac{\pi}{6}$, $\cos^2\frac{5\pi}{18}$, $\cos^2\frac{7\pi}{18}$ are its roots.
$34$ is then found using Vieta's formulas.
$T$ is derived from the 9-th Chebyshev polynomial. |
Number Theory: Prove that for any $n\exists k$ such that $n\mid\phi(k)$. | Looks good to me. Were there any doubts you had that motivated you to ask for verification? |
The derivative of something with respect to $3x+5$? | It's very useful to understand that just because you have used a particular
variable $x$ in a derivative or an integral
does not mean you are stuck having to do all your derivatives
or integrals with respect to the same variable.
You can, in fact, take a derivative of the same thing but with respect
to a different variable. In both cases, you can interpret the derivative as
the slope of a curve on a graph, but which curve you use depends on what you
are differentiating over.
Here are two graphs of the quantity $(3x + 5)^2$ (figures omitted):
In the graph on the left, $(3x+5)^2$ is plotted as a function of $x,$ much as
you might expect. In the graph on the right, the same quantity is plotted as
a function of a different variable, $u,$ which we choose to define by the
equation $u = 3x + 5.$
Because $u$ is defined that way, the graph it makes is shifted $\frac53$ units to
the right (as compared to the graph with respect to $x$) and the plot of the
function is three times as wide. (Both graphs are to exactly the same scale.)
It is sometimes very helpful to use a different variable in this way. Here we can
already see that the graph with respect to $u$ can be a little easier to work with
than the graph with respect to $x,$ since the graph over $u$ is symmetric around
the $y$-axis. |
How can I find $P(X>Y)$? | Because you are integrating over this region, which can be interpreted as
$$
D=\{(x,y)\in\mathbb{R}^2:0\le x\le 1, 0\le y \le x\}.
$$
If the outer integral is from $0$ to $1$, then you integrate on the rectangle $[0,1]\times [0,1]$.
The integral you set is equivalent to
$$
\int_0^1 \int_y^1 c(x+y)dxdy.\text{ (why?)}
$$ |
proving the inequality $\frac{1}{\sqrt[n]{1+m}}+\frac{1}{\sqrt[m]{1+n}}\ge 1$ | Well, $(1+m)^{1/n} \le (1+m/n)$, and $(1+n)^{1/m} \le (1+n/m)$. [Do you see why?]
From this the inequality follows:
$$\frac{1}{(1+m)^{1/n}} + \frac{1}{(1+n)^{1/m}} \ge \frac{1}{1+\frac{m}{n}} + \frac{1}{1+\frac{n}{m}} = \frac{n}{n+m} + \frac{m}{n+m} = 1$$ |
Integral $\int_{0}^{\infty} u^{2}\left(e^{-4x(u^{-12}-u^{-6})}-1\right) \,\mathrm du$ | Using your approximation:
\begin{align*}
I(x) &= \int_0^\infty u^2\left( \exp \left(-4x(u^{-12}-u^{-6})\right) - 1\right) \,\mathrm du \\
&\approx \int_0^1- u ^2 \,\mathrm du + \int_1^\infty -4xu^2(u^{-12}-u^{-6}) \,\mathrm du\\
&= -\frac 13 -4x\int_1^\infty u^{-10}-u^{-4}\,\mathrm du \\
&=-\frac 13 -4x\left( \bigg[ \frac{u^{-9}}{-9}\bigg]^\infty_1 - \bigg[ \frac{u^{-3}}{-3}\bigg]^\infty_1\right) \\
&= \frac 89x-\frac 13
\end{align*} |
Explicit form of a set-valued function | This is an alternative notation for $\mathcal P(S)$, the power set of $S$: the set of all subsets of $S$.
The notation is such because the cardinality of $\mathcal P(S)$ is $2$ to the power of the cardinality of $S$:
$$\vert \mathcal P(S) \vert = \vert 2^S \vert = 2^{\vert S \vert}$$
Another reason, perhaps more abstract:
The set of all functions with domain $X$ and codomain $Y$ is denoted $Y^X$, (because $\vert Y^X \vert = \vert Y \vert^{\vert X \vert}$). If you think of $2$ as the set $\{0,1\}$ then:
$$2^{S} = \{0,1\}^S = \left\{ { f\vert f :S \to \{0,1\}} \right\}$$
is the set of all functions sending elements of $S$ to $0$ or $1$. You can think of an element of $S$ as included in a subset if it's mapped to $1$, and omitted if it's mapped to $0$. Thus the set of all combinations of either including or omitting the elements of $S$ gives you all possible subsets. |
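A small illustration of this correspondence (a Python sketch, not part of the original answer):

    from itertools import product

    # Each f: S -> {0,1} (a tuple of bits) picks out one subset of S.
    S = ['a', 'b', 'c']
    for bits in product([0, 1], repeat=len(S)):
        subset = {s for s, b in zip(S, bits) if b == 1}
        print(bits, subset)
    # 2^|S| = 8 lines; every subset of S appears exactly once.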
Do all symplectic transformations give rise to skew symmetric matrices? | One should note, that $\Delta$ is the matrix representing the symplectic form on the vector space.
The matrix $A$ is symplectic if
$$
A^T\Delta A=\Delta.
$$
Now $A=\Delta^{-1}\alpha$. Thus using $\Delta^T = -\Delta = \Delta^{-1}$
$$
A^T\Delta A = (\alpha^T\Delta^{-T})\Delta \Delta^{-1}\alpha = \alpha^T\Delta\alpha.
$$
In order that $A$ is symplectic, $\alpha$ has to be symplectic as well.
Does this make sense? |
Localization of $\mathbb{Z}_6$ with respect to the powers of $2$ | The localization certainly cannot be $\mathbb{Z}_2$, because $[1]/[1]\ne[2]/[1]$ (where $[x]$ denotes the residue class of $x$ modulo $6$). Indeed, if they were equal, there would exist $n$ such that
$$
[2]^n([2]-[1])=0
$$
but $[2]$ is not nilpotent in $\mathbb{Z}_6$.
On the other hand, $[3]/[1]=[6]/[2]=[0]/[1]$. Now note that the canonical map
$$
\mathbb{Z}_6\to \mathbb{Z}_6/[3]\mathbb{Z}_6\cong\mathbb{Z}_3
$$
satisfies the universal property of the localization. |
$2^5 \times 9^2 =2592$ | The 2592 puzzle apparently originated with Henry Ernest Dudeney's 1917 book Amusements in Mathematics where it is given as puzzle 115, "A PRINTER'S ERROR":
In a certain article a printer had to set up the figures $5^4\times2^3,$ which of course meant that the fourth power of $5\ (625)$ is to be multiplied by the cube of $2\ (8),$ the product of which is $5,000.$ But he printed $5^4\times2^3$ as $5423,$ which is not correct. Can you place four digits in the manner shown, so that it will be equally correct if the printer sets it up aright, or makes the same blunder?
[. . . .]
The answer is that $2^5\times9^2$ is the same as $2592,$ and this is the only possible solution to the puzzle.
It was apparently rediscovered fifteen years later and published in the American Mathematical Monthly, vol. 40, December 1933, p. 607, as problem E69, proposed by Raphael Robinson, in the following form:
Instead of a product of powers, $a^bc^d,$ a printer accidentally prints the four digit number, $abcd.$ The value is however the same. Find the number and show that it is unique.
A solution by C. W. Trigg was published in vol. 41, May 1934, p. 332; the problem was also solved by Florence E. Allen, W. E. Buker, E. P. Starke, Simon Vatriquant, and the proposer. |
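A brute-force confirmation of the uniqueness claim (a Python sketch, not part of the original answer):

    # Find all digit tuples (a, b, c, d) with a^b * c^d equal to the
    # four-digit number abcd.  (Note: Python evaluates 0**0 as 1.)
    sols = [(a, b, c, d)
            for a in range(1, 10) for b in range(10)
            for c in range(10) for d in range(10)
            if a**b * c**d == 1000*a + 100*b + 10*c + d]
    print(sols)   # expected: [(2, 5, 9, 2)], i.e. 2^5 * 9^2 = 2592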
Proof of multivariable chain rule | These days I've been looking for a rigorous proof of the multivariable chain rule and I've finally found one that I think is very easy to understand. I will leave it here (if nobody minds) for anybody searching for this who is not familiar with little-o notation, Jacobians and stuff like this. To understand this proof, all you need to know is the mean value theorem.
Let's say we have a function $f(x,y)$ and $x = x(t), y = y(t)$. Let's also take $z(t) = f(x(t), y(t))$. By definition, the derivative of $z$, $z'(t)$, is
$$ z'(t) = \lim_{\Delta t \to 0}{\frac {f(x(t+\Delta t),y(t+\Delta t)) - f(x,y)}{\Delta t}}$$.
Let $$\Delta x = x(t+\Delta t)-x(t), \qquad \Delta y = y(t+\Delta t)-y(t).$$
Now I'll take the numerator of the fraction in the limit, and make a small change.
$$ f(x(t+\Delta t), y(t+\Delta t)) - f(x,y) = f(x+\Delta x, y+\Delta y) - f(x,y)$$
$$ = \left[f(x+\Delta x, y+\Delta y) - f(x+\Delta x, y)\right] + \left[f(x+\Delta x, y) - f(x, y)\right]$$
I have just added and substracted $f(x+\Delta x, y)$. For some reason, I will invert the terms.
$$ = \left[f(x+\Delta x, y) - f(x, y)\right] + \left[f(x+\Delta x, y+\Delta y) - f(x+\Delta x, y)\right]$$.
Now, let's define 2 functions and I will name them g and h. First,
Let $$g(x) = f(x, y) \implies g'(x) = \frac {\partial f} {\partial x}.$$
Please note that y is constant here since g is a function of a single variable. Now, by the mean value theorem we have
that there exists $c_1 \in (x, x+\Delta x)$ such that
$$\frac {g(x+\Delta x) - g(x)} {\Delta x} = g'(c_1) $$
$$ \Longleftrightarrow $$
$$ f(x+\Delta x, y) - f(x, y) = f_x(c_1, y)\Delta x$$
Similarly, using the function
$$ h(y) = f(x + \Delta x, y) \implies h'(y) = \frac {\partial} {\partial y}f(x+\Delta x, y)$$
We will have by the same logic that
$$ f(x+\Delta x, y + \Delta y) - f(x+\Delta x, y) =
f_y(x + \Delta x, c_2)\Delta y, c_2 \in (y, y+\Delta y) $$
Notice that $c_1$ lies between $x$ and $x+\Delta x$, and $c_2$ lies between $y$ and $y+\Delta y$.
So as $\Delta x \to 0, c_1 \to x$ and as $\Delta y \to 0, c_2 \to y$. By our definition of $\Delta x$ and $\Delta y$, as $\Delta t \to 0$, both $\Delta x$ and $\Delta y$ $\to 0$. So, as $\Delta t \to 0$, $c_1 \to x$ and $c_2 \to y$.
The last step of the proof is to sum this all up, divide by $\Delta t$ and take the limit as $\Delta t \to 0$
$$ f(x(t+\Delta t), y(t+\Delta t)) - f(x, y) = f_x(c_1, y)\Delta x + f_y(x+\Delta x, c_2)\Delta y $$
$$ \lim_{\Delta t \to 0} \frac {f(x(t+\Delta t), y(t+\Delta t)) - f(x, y)}{\Delta t} = \lim_{\Delta t \to 0} f_x(c_1, y)\frac {\Delta x}{\Delta t} + f_y(x+\Delta x, c_2)\frac {\Delta y}{\Delta t} = f_x(x, y)x'(t) + f_y(x, y)y'(t) \ QED $$
Edit: After a long time I've realised that this proof assumes that $f$ has partial derivatives defined on intervals around the point $(x, y)$ and they are continuous at the point. This is a sufficient condition for the function to be ($\mathbb{R}^2$-)differentiable at $(x, y)$, but it's not equivalent. Yet, the multivariable chain rule works for the function being just differentiable at that point. So for a general proof, one should first understand little-o notation as in the other answers. |
Group Action - Permutation on the Polynomial | This is tricky - the two cases look the same, but they're not. The first one is a right action $v \cdot (p_1 \cdot p_2) = (v \cdot p_1) \cdot p_2$, while the second one is a left action $p_1 \cdot (p_2 \cdot f) = (p_1 \cdot p_2) \cdot f$. To see why, consider these 2 permutations:
$$p_1(1) = 1, p_1(2) = 3, p_1(3) = 2 $$
and
$$p_2(1) = 3, p_2(2) = 2, p_2(3) = 1.$$
Let's write out explicitly what the actions are in the two cases to see the difference.
First, let's work out what the compositions $p_1 \cdot p_2$ and $p_2 \cdot p_1$ are.
The composition $p_1 \cdot p_2$ is:
$$p_1(p_2(1)) = 2, p_1(p_2(2)) = 3, p_1(p_2(3)) = 1$$
while the composition $p_2 \cdot p_1$ is:
$$p_2(p_1(1)) = 3, p_2(p_1(2)) = 1, p_2(p_1(3)) = 2.$$
Observe that they are not the same. We will use these later.
Now let's look at the 2 actions. The first action is on vectors. By the definition of the first action, $$p_1 \cdot (v_1, v_2, v_3) = (v_{p_1(1)}, v_{p_1(2)}, v_{p_1(3)}) = (v_1, v_3, v_2).$$ In words: $p_1$ acting on a vector interchanges the second and third coordinates. Similarly, $$p_2 \cdot (v_1, v_2, v_3) = (v_3, v_2, v_1)$$ In words: $p_2$ acting on a vector interchanges the first and third coordinates.
(In this situation, I find that thinking in words reduces the confusion: $p_1$ interchanges the second and third coordinates, not $v_2$ and $v_3$. You'll see the difference below.)
Thus, $$p_1 \cdot (p_2 \cdot v) = p_1 \cdot (p_2 \cdot (v_1, v_2, v_3)) = p_1 \cdot (v_3, v_2, v_1) = (v_3, v_1, v_2).$$ (If the last equality seems wrong, use the "words" description of $p_1$: Interchange the second and third coordinates.) Now the rightmost term above is $(p_2 \cdot p_1) \cdot v$, not $(p_1 \cdot p_2) \cdot v$. (Use the calculation of $p_2 \cdot p_1$ above.)
Conclusion: For the first action, on vectors, $p_1 \cdot (p_2 \cdot v) = (p_2 \cdot p_1) \cdot v$. So this is not a left action, but a right action.
Now look at the second action, which is not on vectors, but on real-valued functions on the set of all vectors. By definition, $$(p_1 \cdot f)(x_1, x_2, x_3) = f(x_{p_1(1)}, x_{p_1(2)}, x_{p_1(3)}) = f(x_1, x_3, x_2).$$
In words: To evaluate $p_1$ applied to a function at a vector, interchange the second and third coordinates of the vector, then apply the function.
Similarly, for $p_2$, the words version is: To evaluate $p_2$ applied to a function at a vector, interchange the first and third coordinates of the vector, then apply the function.
So what is $p_1 \cdot (p_2 \cdot f)$, applied to a vector? It is $$(p_1 \cdot (p_2 \cdot f))(x_1, x_2, x_3) = (p_2 \cdot f)(x_1, x_3, x_2) = f(x_2, x_3, x_1).$$
(Again, using the "words" description may reduce the confusion.) Now is this last term $p_1 \cdot p_2$ or $p_2 \cdot p_1$ applied to $f$? It is the former, as you can see from the calculation of $p_1 \cdot p_2$ above.
Conclusion: $p_1 \cdot (p_2 \cdot f) = (p_1 \cdot p_2) \cdot f$. This one is a left action.
I hope this clears up why these two cases aren't the same. Of course, I haven't yet proven that these are actions in general (I've only illustrated it for the particular $p_1$ and $p_2$ above), but hopefully this will give you the right idea for the general proof.
The key point is to remember that with these definitions, we are permuting the coordinates of the vector according to $p$, rather than the indices of the $v$'s according to $p$. If we did the latter instead, then the left and right action cases above would be reversed. See alias vs alibi for more discussion of this point. |
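If it helps, here is a small computational sketch of the calculations above (the helper names are purely illustrative):

```python
p1 = {1: 1, 2: 3, 3: 2}   # swaps 2 and 3
p2 = {1: 3, 2: 2, 3: 1}   # swaps 1 and 3

def compose(p, q):
    # (p . q)(i) = p(q(i))
    return {i: p[q[i]] for i in (1, 2, 3)}

def act_on_vector(p, v):
    # p . (v1, v2, v3) = (v_{p(1)}, v_{p(2)}, v_{p(3)})
    return tuple(v[p[i] - 1] for i in (1, 2, 3))

def act_on_function(p, f):
    # (p . f)(x1, x2, x3) = f(x_{p(1)}, x_{p(2)}, x_{p(3)})
    return lambda x: f(act_on_vector(p, x))

v = ('a', 'b', 'c')
lhs = act_on_vector(p1, act_on_vector(p2, v))
print(lhs == act_on_vector(compose(p2, p1), v))   # True: a right action
print(lhs == act_on_vector(compose(p1, p2), v))   # False

f = lambda x: x   # the identity function just records its argument
lhs_f = act_on_function(p1, act_on_function(p2, f))(v)
print(lhs_f == act_on_function(compose(p1, p2), f)(v))   # True: a left action
```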
Initial Value Problem, $\frac{dy}{dt}=\frac{\sec^2(t)}{y+1}$ | $\displaystyle \frac{dy}{dt} = \frac{\sec^2(t)}{y+1}$
So $(y+1)\,dy = \sec^2(t)\,dt$.
This gives $\displaystyle \frac{1}{2}y^2+y=\tan(t)+C$.
Then $\displaystyle y^2+2y+1=2\tan(t)+C$.
Thus $\displaystyle \left(y+1\right)^2=2\tan(t)+C$.
So $\boxed{\displaystyle y = \pm\sqrt{2\tan(t)+C}-1}$.
This would only be valid for a portion of a single period of $\tan$ where $2\tan(t)+C \ge 0. $
UPDATE: if $y(0)=-2$, then $-2 = -\sqrt{0+C}-1$, so $C=1$.
Therefore, $\displaystyle \boxed{y=-\sqrt{2\tan(t)+1}-1}$.
Therefore, the domain is $\displaystyle \boxed{t \in \left(\tan^{-1}\left(-\frac{1}{2}\right),\frac{\pi}{2}\right)}$
This is because we need the solution to be defined on the entire interval, and $\sqrt{2\tan(t)+1}$ is defined (with $y+1 \neq 0$) precisely when $\displaystyle \tan(t)>-\frac{1}{2}$. Since $\tan$ is strictly increasing on the interval, we only need to locate where $2\tan(t)+1$ vanishes.
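A quick symbolic check of the boxed solution (a sympy sketch):

```python
import sympy as sp

t = sp.symbols('t')
y = -sp.sqrt(2*sp.tan(t) + 1) - 1

# y should satisfy dy/dt = sec(t)^2 / (y + 1) with y(0) = -2.
residual = sp.simplify(sp.diff(y, t) - sp.sec(t)**2 / (y + 1))
print(residual)       # 0
print(y.subs(t, 0))   # -2
```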
Colimit of a directed system of modules | $g$ doesn't have a universal property. $g$ is the arrow the universal property the colimit of the directed system, which is $(\bigoplus_{k\in I} M_k)/N$, guarantees exists. However, you haven't fully specified that colimit and the condition $g\circ u_k = h_k$ doesn't make sense since the codomain of $u_k$, namely $\bigoplus_{k\in I}M_k$, doesn't match the domain of $g$, namely $(\bigoplus_{k\in I}M_k)/N$.
You have a quotient map $q:\bigoplus_{k\in I}M_k\to(\bigoplus_{k\in I}M_k)/N$ with which you can then define the colimiting cocone as the arrows $w_k:M_k\to(\bigoplus_{k\in I}M_k)/N$ defined by $w_k=q\circ u_k$. With this, the condition on $g$ is that $g\circ w_k = h_k$. However, even $h_k$ seems a bit suspect to me. We have for any $R$-module $T$ and collection of arrows $\{v_k:M_k\to T\mid k\in I\}$ such that $v_j \circ f_{ij} = v_i$, a unique arrow $p : (\bigoplus_{k\in I}M_k)/N\to T$ such that $p\circ w_k = v_k$. These are exactly the same constraints as are on $h_k$; the only difference is an arbitrary codomain is allowed. We can always choose $T=N$, so there's nothing wrong with $h_k$, but it is somewhat misleading and is not the full universal property of the colimit of the directed system. I will consider this more general case from this point on.
Abstractly, since we've given a concrete description of the colimit of the directed system, we can see the arrow we want induced by the universal properties of the pieces that make up the definition. In particular, given the family $\{v_k:M_k \to T\mid k \in I\}$, the universal property of the coproduct, $\bigoplus_{k\in I}M_k$, gives us a unique arrow $p' : \bigoplus_{k\in I}M_k\to T$ such that $p'\circ u_k = v_k$. Then, since $v_j\circ f_{ij} - v_i = 0$, we have, $$p'(u_j(f_{ij}(x))-u_i(x)) = p'(u_j(f_{ij}(x))-p'(u_i(x))= v_j(f_{ij}(x))-v_i(x) = 0$$ which means $p'$ satisfies the conditions needed to apply the universal property of the quotient by $N$. This gives us a unique map $p : (\bigoplus_{k\in I}M_k)/N\to T$ such that $p\circ q = p'$. Of course, combining this with the characterization of $p'$ gives us, $p\circ q\circ u_k = p'\circ u_k = v_k$.
We can get a concrete description of $p$ simply by concretely describing the arrows induced by the universal properties of the coproduct and quotient, $p'$ and $p$. To do this, we need a concrete description of the $R$-modules $\bigoplus_{k\in I}M_k$ and $(\bigoplus_{k\in I}M_k)/N$. I will describe $\bigoplus_{k\in I}M_k$ as formal (finite) sums of pairs $(i,x)$ where $i\in I$ and $x\in M_i$, with the operations satisfying $r(i,x)=(i,rx)$, $(i,0) = 0$, and $(i,x)+(i,y)= (i,x+y)$ in addition to the other properties required of the operations of an $R$-module. For $i\neq j$, $(i,x)+(j,y)$ just doesn't "simplify", just like with complex numbers $1+i$ doesn't simplify further. You should show that this description is isomorphic to whichever concrete description you like to use. We can then define $u_k:M_k \to \bigoplus_{k\in I}M_k$ as $u_k(x)=(k,x)$ which can easily be verified to be an $R$-module homomorphism. $p'$ is then simply defined as $p'((i,x))= v_i(x)$ and extended to all formal sums by linearity. We clearly have $p'\circ u_k = v_k$.
As is typical for quotients, we can concretely represent $(\bigoplus_{k\in I}M_k)/N$ as being $\{\{m+n\mid n\in N\}\mid m\in\bigoplus_{k\in I}M_k\}$. As usual, we can write the equivalence class of $m$, as $m+N = \{m+n\mid n\in N\}$. The arrow $q$ is then $q((i,x))=(i,x)+N$ extended to all formal sums by linearity. $p$ is then defined by $p(m+N)=p'(m)$ which clearly gives $p \circ q = p'$, but we have to check that this definition is well-defined meaning given any other $m'\in m+N$, $p'(m')=p'(m)$. By definition, $m'=m+n$ for some $n\in N$, and by definition of $N$ this means $n=\sum_{(i,j)\in S}(n_j,f_{n_in_j}(x_{(i,j)}))-(n_i,x_{(i,j)})$ where $S\subseteq I\times I$ is finite. So the well-definedness condition is $$p'(m+\sum_{(i,j)\in S}(n_j,f_{n_in_j}(x_{(i,j)}))-(n_i,x_{(i,j)}))=p'(m)$$ or $$\sum_{(i,j)\in S}\left[p'((n_j,f_{n_in_j}(x_{(i,j)})))-p'((n_i,x_{(i,j)}))\right]=0$$ which is, by definition of $p'$, $$\sum_{(i,j)\in S}\left[v_{n_j}(f_{n_in_j}(x_{(i,j)}))-v_{n_i}(x_{(i,j)})\right]=0$$ which holds by assumption.
Bringing everything together, elements of $(\bigoplus_{k\in I}M_k)/N$ look like $\sum_{j\in J}(j,x_j)+N$ for finite subsets $J\subseteq I$ and $p(\sum_{j\in J}(j,x_j)+N) = \sum_{j\in J}v_j(x_j)$.
As some minor nits with wording, saying "consider [the] [...] $\bigoplus_{k\in I}M_k\supseteq N$" is unnatural. It's like saying "consider the three greater than or equal to $x$". It would make more sense to flip it, e.g. "consider the $N \subseteq\bigoplus_{k\in I}M_k$" i.e. "consider the $x$ less than or equal to three". Similarly, "[given $h$] set $h_k(x)=g([(\dots,x,\dots)])$" also reads poorly. Again, an analogy would be "given a number $x$, set one to be $x$", and again the solution is just to flip them around, i.e. $g([(\dots,x,\dots)])=h_k(x)$ |
Understanding of the theorem that all norms are equivalent in finite dimensional vector spaces | To answer the question in the update:
If $(X,\|\cdot\|)$ is a normed space of infinite dimension, we can produce a non-continuous linear functional: Choose an algebraic basis $\{e_{i}\}_{i \in I}$ which we may assume to be normalized, i.e., $\|e_{i}\| = 1$ for all $i$. Every vector $x \in X$ has a unique representation $x = \sum_{i \in I} x_i \, e_i$ with only finitely many nonzero entries (by definition of a basis).
Now choose a countable subset $i_1,i_2, \ldots$ of $I$. Then $\phi(x) = \sum_{k=1}^{\infty} k \cdot x_{i_k}$ defines a linear functional on $X$. Note that $\phi$ is not continuous, as $\frac{1}{\sqrt{k}} e_{i_k} \to 0$ while $\phi(\frac{1}{\sqrt{k}}e_{i_k}) = \sqrt{k} \to \infty$.
There can't be a $C \gt 0$ such that the norm $\|x\|_{\phi} = \|x\| + |\phi(x)|$ satisfies $\|x\|_\phi \leq C \|x\|$ since otherwise $\|\frac{1}{\sqrt{k}}e_{i_k}\| \to 0$ would imply $|\phi(\frac{1}{\sqrt{k}}e_{i_k})| \to 0$, contrary to the previous paragraph.
This shows that on an infinite-dimensional normed space there are always inequivalent norms. In other words, the converse you ask about is true. |
A property of a sequence | If $\dfrac{a_{n+1}}{a_n} \to 0$, then there is an $n_0$ such that $\dfrac{a_{n+1}}{a_n}<1$ for all $n>n_0$.
Thus $(a_n)$ is eventually decreasing... which can't go to $\infty$.
EDIT: of course I take $a_n>0$ here, but you can take the absolute value anyway... |
understanding intervals in trigonometry | As the cosine is periodic, there are many values of $\theta$ which have the same $\cos(\theta)$. So they are just asking for all the values between $-\pi/2$ and $\pi/2$ that solve the equation. Your solution got one of them: $\cos(0.605)$ does equal $5/\sqrt{37}$. But so does $\cos(-.605)$. You were supposed to find that one, too. It leads to the solution $-.771$ when the $.165$ is subtracted.
For your second problem I presume the > sign is supposed to be *. How much does the argument of the sine function (the thing you take the sine of) have to increase to go through one cycle? How much does $t$ have to increase to go through one cycle? This gives you the period. The frequency is one divided by this.
Jacobi fields in polar coordinates. | As $V$ is perpendicular to $\gamma'$, we may write $V = g \frac{\partial }{\partial \theta}$. Then (Using the definition of the metric)
$$ 0=\nabla_r V = \nabla_r(g\frac{\partial }{\partial \theta}) = g_r \frac{\partial }{\partial \theta}+ g\nabla_r\frac{\partial }{\partial \theta} = \big( g_r + \frac{f_rg}{f}\big) \frac{\partial }{\partial \theta}$$
Thus $ g_r + \frac{f_rg}{f}=0$, which is the same as $\big(\log(fg)\big)_r=0$. Thus $g = C/f$ for some constant $C$ and $Y = fV = C\frac{\partial }{\partial \theta}$. Then $Y$ is a Jacobi field. You can check this directly, but it is also clear since $Y$ generates a family of geodesics (varying $\theta$) starting at the origin.
How to find the distance of a position in a specific direction in 3d space. | Let $\vec v$ be your position vector. Construct a unit vector $\vec u$ in the direction of interest. Then $\vec v \cdot \vec u$ is the distance traveled in the direction you want. |
Applications of integrals of rational functions of sine and cosine | Problem 1.17 of Fetter & Walecka, Theoretical Mechanics of Particles and Continua states the following interesting problem:
A uniform beam of particles with energy $E$ is scattered by an attractive central potential
$$V(r) = \begin{cases}0 & r \gt a\\ -V_0 & r \lt a \end{cases}$$
Show that the orbit of a particle is identical with that of a light
ray refracted by a sphere of radius $a$ and index of refraction
$n=[(E+V_0)/E]^{1/2}$. Prove that the differential cross-section for
$\cos{\theta/2} \gt 1/n$ is
$$\frac{d\sigma}{d\Omega} = \frac{n^2 a^2}{4 \cos{\frac12 \theta}}
\frac{(n\cos{\frac12 \theta}-1)(n-\cos{\frac12 \theta})}{(1+n^2-2 n
\cos{\frac12 \theta})^2} $$
What is the total cross-section?
In order to find the total scattering cross-section, one has to integrate over all solid angles; since the expression for the differential cross-section is independent of the azimuthal angle $\phi$, the $\phi$-integration contributes a factor of $2\pi$, and using $\sin\theta = 2\sin\frac{\theta}{2}\cos\frac{\theta}{2}$ you end up with the total cross-section
$$\sigma = \pi n^2 a^2 \int_0^{2 \arccos{(1/n)}} d\theta \, \sin{\frac{\theta}{2}} \frac{(n\cos{\frac12 \theta}-1)(n-\cos{\frac12 \theta})}{(1+n^2-2 n
\cos{\frac12 \theta})^2}\\ = 2 \pi n^2 a^2 \int_0^{\arccos{(1/n)}} du \, \sin{u} \frac{(n \cos{u}-1)(n-\cos{u})}{(1+n^2-2 n \cos{u})^2}$$
So here is an example in physics of an integral over a rational function of sines and cosines. In this case, however, the integral is pretty easy because one may substitute $v=\cos{u}$ and turn this into a simple integral over a rational function
$$\sigma = 2 \pi n^2 a^2 \int_{1/n}^1 dv \frac{(n v-1)(n-v)}{(1+n^2-2 n v)^2}$$
One may evaluate this integral using the substitution $w=1+n^2-2 n v$; the rational integral comes out to $\frac{1}{2n^2}$, and the result is
$$\sigma = \pi a^2,$$
the geometric cross-section of the sphere, as it must be: every particle with impact parameter less than $a$ is deflected.
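A quick symbolic check of the remaining rational integral (a sympy sketch; the value $n=3/2$ is an arbitrary sample):

```python
import sympy as sp

v = sp.symbols('v')
n = sp.Rational(3, 2)   # arbitrary sample index of refraction, n > 1

I = sp.integrate((n*v - 1)*(n - v) / (1 + n**2 - 2*n*v)**2, (v, 1/n, 1))
print(sp.simplify(I - 1/(2*n**2)))   # 0: the integral equals 1/(2n^2)
```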
Prove logical equivalence | Both statements are logical identities in propositional logic, typically taken as "axioms": In fact, we define the material conditional $p \rightarrow q$ to be equivalent to $\lnot p \lor q$: the implication is true whenever $p$ is false or whenever $q$ is true. The second is one of the equivalencies resulting from DeMorgan's Laws.
The best way to prove the given equivalencies is to show that they are equivalent for each possible assignment of truth values to $p$ and $q$ (and in the first case, they are identical merely by definition of the material conditional.) This is precisely what a truth-table does: a "proof-by-cases," so to speak: in each of the above, there are four cases to consider: each row of the truth-table represents one possible case; together, the rows exhibit only and all such cases. Once we prove that these identities are true in this manner, we are done, and we can accept them, and use them validly when proving more complicated equivalencies.
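For instance, here is the truth-table for the first equivalence; the last two columns agree in every row:
$$\begin{array}{cc|c|cc}
p & q & p \rightarrow q & \lnot p & \lnot p \lor q\\ \hline
T & T & T & F & T\\
T & F & F & F & F\\
F & T & T & T & T\\
F & F & T & T & T
\end{array}$$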
Note: depending on one's "formal system", the equivalencies taken as "axioms" i.e., as "most basic", may vary:
See Propositional Calculus for a more thorough explanation. But to start:
In general terms, a propositional calculus is a formal system that consists of a set of syntactic expressions (well-formed formulæ or wffs), a distinguished subset of these expressions (axioms), plus a set of formal rules that define a specific binary relation, intended to be interpreted as logical equivalence, on the space of expressions. |
If the Lorenz curve for a country is $f(x)=x^k$, how to calculate the value of $k$ since the Gini index is 0.9? | The support of a Lorenz curve is $(0,1).$
We know that the Gini index calculation is as follows:
$$\frac{9}{10}=2\bigg(\frac{1}{2}-\int_0^1 x^kdx \bigg) $$ and we need to find the $k$ that makes this true.
$$ \frac{9}{10}=1-\frac{2}{k+1}. $$
The answer is $k=19.$
(Also see the graphic on this page under the "Definition" heading: Gini Coefficient) |
How to prove Eulerian Identity? | By way of enrichment and not necessarily following the text we start
from the bivariate generating function of the Eulerian numbers, which
seems like a reasonable starting point and which is
$$\frac{u-1}{u-\exp((u-1)z)}.$$
We seek to show that for $n\ge m$
$$m! {n\brace m} = \sum_{k=0}^{n-1}
\left\langle {n \atop k} \right\rangle
{k\choose n-m}.$$
The RHS is
$$[v^{n-m}] \sum_{k=0}^{n-1}
\left\langle {n \atop k} \right\rangle
(1+v)^k
\\ = n! [z^n] [v^{n-m}]
\frac{v}{1+v-\exp(vz)}
= n! [z^n] [v^{n-m}]
\frac{v}{v-(\exp(vz)-1)}
\\ = n! [z^n] [v^{n-m}]
\frac{1}{1-(\exp(vz)-1)/v}.$$
Now $\exp(vz)-1$ as a formal power series in $z$ starts
with $z$ and hence we may write
$$n! [z^n] [v^{n-m}]
\sum_{q=0}^n \frac{(\exp(vz)-1)^q}{v^q}
= [v^{n-m}] n! [z^n]
\sum_{q=0}^n \frac{(\exp(vz)-1)^q}{q!} \frac{q!}{v^q}
\\ = [v^{n-m}]
\sum_{q=0}^n {n\brace q} v^n \frac{q!}{v^q}
= [v^{n-m}]
\sum_{q=0}^n q! {n\brace q} v^{n-q}.$$
This is a polynomial in $v$ and only the term with $q=m$
contributes, namely
$$\bbox[5px,border:2px solid #00A000]{
m! {n\brace m}.}$$ |
How does one evaluate $\lim _{n\to \infty }\left(\sqrt[n]{\int _0^1\:\left(1+x^n\right)^ndx}\right)$? | Showing that $2$ is an upper bound for the limit is straightforward, since for $0 \leqslant x \leqslant 1$, we have $(1 +x^n)^n \leqslant 2^n$, and
$$ \left(\int_0^1 (1 + x^n)^n \, dx \right)^{1/n} \leqslant 2, \\ \implies \limsup_{n \to \infty}\left(\int_0^1 (1 + x^n)^n \, dx \right)^{1/n} \leqslant 2.$$
For a useful lower bound, note that the integrand becomes more and more concentrated in a left neighborhood of $x = 1$ as $n$ increases. If $1 - \delta/n < x \leqslant 1$, then
$$1 + x^n > 1 +(1 - \delta/n)^n > 1 + 1 - n(\delta/n) = 2 - \delta,$$
and
$$\int_0^1 (1 + x^n)^n \, dx > \int_{1 - \delta/n}^1 (1 + x^n)^n \, dx \\ > \frac{\delta}{n}(2 - \delta)^n.$$
Hence,
$$\liminf_{n \to \infty} \left(\int_0^1 (1 + x^n)^n \, dx \right)^{1/n} \geqslant \liminf_{n \to \infty}\,(2-\delta)\frac{\delta^{1/n}}{n^{1/n}} = 2 - \delta.$$
Since $\delta > 0$ can be arbitrarily small, we have
$$2 \leqslant \liminf_{n \to \infty} \left(\int_0^1 (1 + x^n)^n \, dx\right)^{1/n} \leqslant \limsup_{n \to \infty} \left(\int_0^1 (1 + x^n)^n \, dx\right)^{1/n} \leqslant 2.$$
This shows both that the limit exists and the value is $2$. |
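A quick numerical illustration (a rough sketch: the integral is approximated on a uniform grid, working in logs to avoid overflow for large $n$):

```python
import numpy as np

# Approximate (∫_0^1 (1+x^n)^n dx)^(1/n) for growing n.
# Since (1+x^n)^n = 2^n exp(n log(1+x^n) - n log 2), factor out the 2^n.
for n in [10, 100, 1000]:
    x = np.linspace(0.0, 1.0, 200001)
    scaled = np.exp(n * np.log1p(x**n) - n * np.log(2.0))
    print(n, 2.0 * np.trapz(scaled, x) ** (1.0 / n))
# the printed values creep up toward the limit 2
```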
Prove that for all integers $a, b, c$ if $a+b^3+c^5=6001$ then at least one of $a,b,c$ is a multiple of three. | It's not true, e.g. $a=5999$, $b=1$, $c=1$. |
Inequality of Expectation of Indicator function | I think the answer is no.
Let $P(X=1)=P(X=2)=P(X=3)=\frac{1}{3}$ and take $t=2$:
$$E(X1_{X>2})=3*P(X=3)=1$$
$$E(X1_{X<2})=1*P(X=1)=\frac{1}{3}$$
$$E(X)=2$$
Maybe the following inequalities will help you (I believe they hold):
$$E(X) \leq
E(X 1_{\{ X> t \}} ) + t P(X\leq t) \hspace{.5cm} (1)$$
$$E(X) \geq
t P(X\geq t)+ E(X 1_{\{ X< t \}} ) \hspace{.5cm} (2) $$
Depending on whether $t>0$ or not, you can use them.
Proof (1): let $A=\{ X>t\}$
It is obvious (see proof (4))
$$E(X)=E(X 1_{\{ X> t \}} ) +E(X 1_{\{ X \leq t \}} ) $$
and
$$E(X 1_{\{ X \leq t \}} )=\int_{-\infty}^{t} x f_X(x) dx $$
$$\leq
\int_{-\infty}^{t} t f_X(x) dx=t P(X\leq t)$$
so
$$E(X)=E(X 1_{\{ X> t \}} ) +E(X 1_{\{ X \leq t \}} ) $$
$$\leq
E(X 1_{\{ X> t \}} ) + t P(X\leq t) \hspace{.5cm} (3)$$
In (3), depending on whether $t\geq 0$ or $t<0$, you can use the fact that $P(X\leq t)\in [0,1]$;
for example if $t>0$
$$
E(X 1_{\{ X> t \}} ) + t P(X\leq t) \leq E(X 1_{\{ X> t \}} ) + t $$
Proof (2)
$$E(X)=E(X 1_{\{ X\geq t \}} ) +E(X 1_{\{ X < t \}} ) $$
$$=\int_t^{+\infty} xf_X(x) dx \, +E(X 1_{\{ X < t \}} ) $$
$$\geq tP(X\geq t) +E(X 1_{\{ X < t \}} ) $$
Proof (4)
simply, for continuous random variables,
$$E(X)=\int_{-\infty}^{t} xf_X(x) dx +\int_{t}^{+\infty} xf_X(x) dx$$
$$=\int_{-\infty}^{+\infty} x 1_{x\leq t}f_X(x) dx +\int_{-\infty}^{+\infty} 1_{x>t} xf_X(x) dx$$
$$=E(X 1_{\{ X \leq t \}} ) +E(X 1_{\{ X> t \}} ) $$
for all type of random variables
$$E(X)=E(X|A) P(A)+E(X|A^{c})P(A^{c})$$
$$=E(X|\{ X>t \} ) P(\{ X>t \} )+E(X|\{ X \leq t\})P(\{ X\leq t\})$$
$$=\frac{E(X 1_{\{ X>t \}} )}{P(\{ X>t \} )} P(\{ X>t \} )
+
\frac{E(X 1_{\{ X\leq t \}} )}{P(\{ X\leq t \} )} P(\{ X\leq t \} )
$$
$$=E(X 1_{\{ X> t \}} ) +E(X 1_{\{ X \leq t \}} )$$
which proves (4) in general (when $P(X>t)$ and $P(X\leq t)$ are both nonzero; otherwise (4) is immediate).
Spivak Calculus Ch. 19 #15 | I think that the formula for $\sin^2x$ is the one you used the first time and by "reduction formula", the author means that you should integrate by parts:
$$\int\sin^n x\,\mathrm dx = -\cos x\sin^{n-1} x + (n-1)\int\cos^2x\sin^{n-2}x\,\mathrm dx$$
This formula follows from integration by parts with $u = \sin^{n-1}x$ and $\mathrm dv = \sin x\,\mathrm dx$.
Show that $f(a)=\frac{ ((a+1)^k+a^k+1 )^2}{2} -a^{2k}- (a+1)^{2k}-1$ is negative for $k>2$. | If my computations are correct you have
$$-2f(a)=((a+1)^k-(a^{k/2}+1)^2)((a+1)^k-(a^{k/2}-1)^2) $$
hence this is
$$((a+1)^{k/2}-a^{k/2}-1)((a+1)^{k/2}+a^{k/2}+1)((a+1)^{k/2}-a^{k/2}+1)((a+1)^{k/2}+a^{k/2}-1)$$
Hence you have only to look at the sign of these $4$ expressions, and this seems not difficult (I have not done these last computations)
Added:
Note that $a^{2k}f(1/a)=f(a)$, so wlog we may suppose (if needed) that $a\geq 1$. Put $G(x)=(a+1)^x -a^x-1$. The derivative of $G$ is $G^{\prime}(x)=\log(a+1)(a+1)^x-(\log a) a^x=a^{x}(\log(a+1)(1+1/a)^x-(\log a))$, which is $\geq 0$ for $x\geq 0$. As $G(1)=0$, this proves the inequality for $0\leq x\leq 1$. For $x<0$, we have $(a+1)^x<1$, hence the inequality.
Prove. $\frac{n}{n+1}$ is a Cauchy sequence. | It converges to $1$. And every convergent sequence is Cauchy. |
Differential-difference equations for Poisson arrivals | Let $p_k(t)$ denote the probability that there are $k$ customers at the pump at time $t$.
And let $\lambda^{in}_k = \lambda^{in}_0 (1 - k/4)$ be the arrival rate of customers when $k$ customers are present, and $\lambda^{out}$ the rate at which customers finish their service.
After having drawn the transition diagram for the system, you can deduce the following equations for the $p_k(t)$:
$\dot{p_0}(t) = - \lambda_0^{in} p_0(t) + \lambda^{out} p_1(t)$
$\dot{p_1}(t) = - \lambda_1^{in} p_1(t) + \lambda^{out} (p_2(t) - p_1(t)) + \lambda_0^{in} p_0(t)$
$\dot{p_2}(t) = - \lambda_2^{in} p_2(t) + \lambda^{out} (p_3(t) - p_2(t)) + \lambda_1^{in} p_1(t)$
$\dot{p_3}(t) = - \lambda_3^{in} p_3(t) + \lambda^{out} (p_4(t) - p_3(t)) + \lambda_2^{in} p_2(t)$
$\dot{p_4}(t) = - \lambda^{out} p_4(t) + \lambda_3^{in} p_3(t)$
Since $\lambda^{in}_0 = 20$ and $\lambda^{out} = 60/3 = 20$, and since for stationary probabilities, by definition, $\dot{p}_k(t) =0 \ \ \forall k \in \{0, 1, 2, 3, 4\}$, you should be able to answer your question. This is the principle; I strongly encourage you to double check my calculations though.
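For a quick numerical sketch (assuming the rates above, $\lambda^{in}_0 = 20$ and $\lambda^{out} = 20$ per hour), one can solve $\pi Q = 0$, $\sum_k \pi_k = 1$ directly:

```python
import numpy as np

lam_in = [20.0 * (1 - k / 4) for k in range(4)]   # arrival rates in states 0..3
mu = 20.0                                         # service completion rate

# Generator matrix Q of the 5-state birth-death chain.
Q = np.zeros((5, 5))
for k in range(4):
    Q[k, k + 1] = lam_in[k]
    Q[k + 1, k] = mu
for k in range(5):
    Q[k, k] = -Q[k].sum()

# Solve pi Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(5)])
b = np.concatenate([np.zeros(5), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)   # stationary probabilities p_0, ..., p_4
```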
Dual Numbers and Automatic Differentiation | Break your problem into steps. Let's say $f(x) = g(x) + h(x)$ then $f' = g' + h'$ (omitting $x$). In general even with dual variables you should use the chain rule.
First solve the derivative of $1/x$ at $x=a$.
$$\begin{align} \frac{1}{a + \epsilon} = \frac{1}{a} \frac{1}{1+ \epsilon/a} = \frac{1}{a} \big(1 - \frac{\epsilon}{a} + O(\epsilon^2) \big)\end{align}.$$ The $O(\epsilon^2)$ terms become zero. so you get $-1/a^2$
Next find the derivative of $\sin(y)$. $\sin(y+\epsilon) = \sin(y)+\cos(y)\epsilon$, so the derivative of $\sin(y)$ is $\cos(y)$. Now put $y=1/x$ and use the chain rule.
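Putting the two rules together, here is a minimal dual-number sketch (the class and names are purely illustrative) that differentiates $f(x)=\sin(1/x)$:

```python
import math

class Dual:
    """Numbers a + b*eps with eps^2 = 0; b carries the derivative."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        return Dual(self.a + o.a, self.b + o.b)
    def __mul__(self, o):
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    def __truediv__(self, o):
        # from 1/(a + eps) = 1/a - eps/a^2, as derived above
        return Dual(self.a / o.a, (self.b * o.a - self.a * o.b) / o.a**2)

def sin(d):
    # sin(y + eps) = sin(y) + cos(y) * eps
    return Dual(math.sin(d.a), math.cos(d.a) * d.b)

x = Dual(2.0, 1.0)            # seed the derivative part with 1
y = sin(Dual(1.0, 0.0) / x)   # f(x) = sin(1/x)
print(y.a, y.b)               # f(2) and f'(2) = -cos(1/2)/4
```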
Consider the group $s_3$ of example 8.7 | Note that $\mu_1 = (1 2)$, so $\mu_1 \mu_1 = (1 2)(1 2) = (1) = \rho_0$, which is the identity element for $S_3$.
So $\langle \mu_1 \rangle = \{ \rho_0, \mu_1 \}$. |
Proving $f\in H^\ast$ | This is an easy consequence of Uniform Boundedness Principle. For each $x$ the sequence $(f_n(x))$ is convergent, hence bounded. This implies that $\sup \{|f_n(x)|: n \geq 1, \|x\| \leq 1\}$ is finite by Uniform Boundedness Principle. Hence $\sup \{|f(x)|: \|x\| \leq 1\}$ is finite so $f$ is a continuous linear functional. |
Constructing a linear map | Here are some hints & suggestions:
(Note that the solution to the question is not unique. There are many $\alpha$ that satisfy the required conditions.)
Hint #1: Just by observation, we see that the vectors $(1,0,0,-1)^T, (0,1,-1,0)^T$ are orthogonal to the above two (which are orthogonal to each other).
The point is that the vectors $v_1=(1,0,0,1)^T, v_2=(0,1,1,0)^T, v_3=(1,0,0,-1)^T$, $v_4=(0,1,-1,0)^T$ are mutually orthogonal, and are a basis for the space $\mathbb{R}^4$.
Hint #2: To define the linear operator $\alpha$, it is sufficient to define its behaviour on a basis.
So, in order to define $\alpha$, you just need to specify the values of $\alpha(v_k)$, for $k=1,...,4$. In your case, the value must be zero for two specific vectors (which two?) and non-zero for the other two.
System of 6 variables and 5 equations with parameters | Construct it as a matrix and just Gauss it down. |
Evaluating the claims $\mathcal P(A\cap B)=\mathcal P(A)\cap \mathcal P(B)$ and $\mathcal P(A\cup B)=\mathcal P(A)\cup \mathcal P(B)$ | From $X\subseteq A\cup B$ it does not follow that $X\subseteq A$ or $X\subseteq B$.
Consider $A=\{1\}$ and $B=\{2\}$ and $X=A\cup B$. $X$ is a subset of $A\cup B$, and yet not a subset of either $A$ or $B$.
So, literally the second line of the right hand side is already an incorrect deduction. What follows, therefore, is not acceptable.
Can I use the right steps to prove something and still get the wrong proof?
Of course, a proof wouldn't be a proof if a series of valid steps could lead to something wrong. What seems to have happened is that you "dualized" the proof by swapping ands with ors and thought it must still be all "right steps."
It is simply not true that you can “dualize” every argument and get a valid proof. That only works under certain conditions. The current situation is a great example. |
Find out if a point is inside triangle, only data available are distances | Let's call the 3 vertices of the reference triangle $A,B,C$, the unknown point $X$ and the 6 given distances $a-f$, as indicated in the picture above.
The choice of which of the 3 given triangle sides you call $a,b$ and $c$, resp., is not important; the problem is symmetric in that respect. However, once that is fixed, it makes a difference which of the 3 distances of $X$ to the triangle vertices you call $d,e$ and $f$. Let's for the moment assume that you know how to assign the 3 given lengths to the vertices.
We will also need the angles $\angle BXC = \alpha, \angle CXA = \beta$ and $\angle AXB = \gamma$ (not shown in picture).
If $X$ is inside the triangle, then $\alpha + \beta + \gamma = 360°$. If $X$ is outside of the triangle, then the largest of the 3 angles is the sum of the 2 other angles. If $X$ is on one side, one of the angles will be 180° exactly (and both criteria hold simultaneously).
So we need to find the values of those 3 angles, which is easy using the cosine-theorem. For example in the $\bigtriangleup XBC$ we have
$$ a^2 = f^2 + e^2 - 2ef \cos (\alpha),$$
which can be transformed into
$$\cos (\alpha) = \frac{f^2+e^2-a^2}{2ef}.$$
The cosine function is invertible on $[0°,180°]$, so this gives you an easy way to calculate $\alpha$. Use the same way to calculate $\beta$ and $\gamma$, then check which condition ($\alpha + \beta + \gamma = 360°$, or the largest of the 3 angles being the sum of the other 2) checks out.
If neither is true (up to reasonable numerical accuracy), then the initial conditions are impossible. Note that the question is overdetermined. If you don't know $f$ (assuming a configuration as shown in the picture), then besides the shown point inside the triangle, $X$ could be the point that is the reflection of the shown point $X$ on the line $AB$. So with $a-e$ given, only 2 values of $f$ lead to existing configurations.
Let's come back to the initial problem of assigning the 3 values gives as 'distance of unknown point X to the vertices of the triangle' to the concrete vertices $A,B$ and $C$. There are 6 possible ways to do that (permutations), so you have to make the above calculation 6 times.
Normally I would assume that exactly one of those calculations leads to a possible configuration and the other 5 don't. Then you can use that one result from the possible configuration.
However, there might be cases where more than one permutation gives a possible result. For example, if the reference triangle is equilateral, then it doesn't matter how you assign the distances. If all possible configuration lead to the same result, good.
But I can't exclude the possibility that there might be values of $a-f$ where different permutations give possible configurations, but leading to different results: in one permutation $X$ is inside, in another it is outside. Then you don't know what is true without knowing the correct permutation.
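Here is a sketch of the test in code (the assignment $XA=d$, $XB=f$, $XC=e$ is one assumed permutation; as discussed, all 6 have to be tried):

```python
import math

def locate(a, b, c, d, e, f, tol=1e-9):
    # angles at X via the cosine theorem, assuming XA = d, XB = f, XC = e
    alpha = math.acos((f*f + e*e - a*a) / (2*f*e))   # angle BXC, side a = BC
    beta  = math.acos((e*e + d*d - b*b) / (2*e*d))   # angle CXA, side b = CA
    gamma = math.acos((d*d + f*f - c*c) / (2*d*f))   # angle AXB, side c = AB
    if abs(alpha + beta + gamma - 2*math.pi) < tol:
        return "inside (on a side if one angle equals pi)"
    ang = sorted([alpha, beta, gamma])
    if abs(ang[2] - ang[0] - ang[1]) < tol:
        return "outside"
    return "impossible configuration"

side = 1.0
d = e = f = side / math.sqrt(3)          # X at the centre of an equilateral triangle
print(locate(side, side, side, d, e, f)) # inside
```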
Divisibility and the Fibonacci Sequence Proof | I'll prove a slightly more general result.
Suppose that $F_n\equiv 0 \bmod k$, and $F_{n+1} \equiv c \bmod k$. Then of course immediately $F_{n+2} \equiv 0{+}c$ $\equiv c \bmod k$ and thus $F_{n+i} \equiv cF_i \bmod k$ and $F_{2n+i} \equiv c^2F_i \bmod k$ etc.
In particular $F_{rn} \equiv c^{r-1} F_n$ $\equiv c^{r-1}{\cdot\,} 0$ $\equiv 0 \bmod k$.
In the specific case of this question, set $k=F_n$. |
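A quick computational check of the last congruence (an illustrative snippet):

```python
def fib(m):
    a, b = 0, 1
    for _ in range(m):
        a, b = b, a + b
    return a

n = 7   # F_7 = 13
print(all(fib(r * n) % fib(n) == 0 for r in range(1, 20)))   # True
```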
How are continued fractions useful? | How are continued fractions cool? Let me count the ways:
If you know how to get the partial terms of the continued fraction expansion of an irrational number, you essentially have a way to find approximate values of the irrational number. This gives you one possible way to get a floating-point estimate for $\sqrt{2}$, $\pi$, etc. (a small computational sketch follows this list).
It is known that a number is irrational if and only if it has an infinite continued fraction expansion. This makes it a sort of irrationality test.
Continued fraction expansions of irrational numbers exhibit sometimes surprising regularity. $\pi$ and $e$ have continued fraction representations that are simple, which is strange seeing as they are transcendental numbers. This can sometimes tell you things about these numbers, and other times the existence of such a regular expansion is just plain cool.
As mentioned in the comments, continued fractions can be used to solve certain equations like the Pell equation.
And others. Continued fractions solve some other problems - a famous story relates that Mahalanobis and Ramanujan shared a room, and one day Mahalanobis posed Ramanujan a problem which he instantly solved for all cases via continued fractions. |
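A small computational sketch of the first point (floating-point based, so only the first several terms are reliable):

```python
import math
from fractions import Fraction

x = math.sqrt(2)
terms = []
for _ in range(8):
    a = int(x)
    terms.append(a)
    x = 1.0 / (x - a)
print(terms)   # [1, 2, 2, 2, 2, 2, 2, 2]

# Convergents h/k via the standard recurrence h_n = a_n h_{n-1} + h_{n-2}.
h0, h1, k0, k1 = 1, terms[0], 0, 1
for a in terms[1:]:
    h0, h1 = h1, a * h1 + h0
    k0, k1 = k1, a * k1 + k0
print(Fraction(h1, k1), h1 / k1)   # 577/408 ≈ 1.4142157
```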
Can you use the chain rule in vector calculus to compute the gradient of a matrix? | The gradient of $x^TAx$ can be determinated as follow.
Let
$$f(x)=x^TAx$$
thus
$$f(x_0+h)=(x_0+h)^TA(x_0+h)=\langle A(x_0+h),x_0+h\rangle =\langle Ax_0+Ah,x_0+h\rangle=$$
$$\langle Ax_0,x_0\rangle+\langle Ax_0,h\rangle+\langle Ah,x_0\rangle+\langle Ah,h\rangle$$
and note that
$\langle Ax_0,x_0\rangle=x_0^TAx_0=f(x_0)$
$\langle Ah,h\rangle=h^TAh=\frac12h^TH_f(x_0)h$
and
$$\langle Ax_0,h\rangle+\langle Ah,x_0\rangle=\langle Ax_0,h\rangle+\langle A^Tx_0,h\rangle=\langle (A+A^T)x_0,h\rangle=$$
$$=\langle \nabla f(x_0),h\rangle$$
thus
$$\nabla f(x_0)=(A+A^T)x_0$$ |
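A quick numerical check of $\nabla f(x_0)=(A+A^T)x_0$ against finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
x0 = rng.standard_normal(4)

grad = (A + A.T) @ x0

h = 1e-6
numeric = np.array([
    ((x0 + h*e) @ A @ (x0 + h*e) - (x0 - h*e) @ A @ (x0 - h*e)) / (2*h)
    for e in np.eye(4)
])
print(np.max(np.abs(grad - numeric)))   # ~1e-9
```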
For which chess boards do solutions exist for this generalised Knight's Tour problem? | Let me modify an argument that I gave in this MSE-post:
On an infinite board represented by $\Bbb Z \times \Bbb Z$ the positions the knight can visit starting from $(0,0)$ form a group. This group $P$ is generated by the moves represented by $(1,2),(-1,2) , (2,1),(2,-1)$. $$P=\langle (1,2),(-1,2) , (2,1),(2,-1) \rangle = \Bbb Z \times \Bbb Z.$$ Now you are identifying every $n$-th column and every $m$-th row, which is, algebraically speaking, to factor out $n \Bbb Z \times m \Bbb Z$. Thus the possible visiting positions are given by the image of $P= \Bbb Z \times \Bbb Z$ in $\Bbb Z /n\Bbb Z \times \Bbb Z /m\Bbb Z$, which is $\Bbb Z /n\Bbb Z \times \Bbb Z /m\Bbb Z$, i.e. the whole board. |
Hint on measure theory problem about rational-invariant measurable sets | It is known that if $\lambda (E) >0$ and $\lambda (F) >0$ then $E+F$ contains an open interval. (You can find a proof on MSE: "Sum" of positive measure set contains an open interval?). Assume that $\lambda (A) >0$ and $\lambda (A^{c}) >0$ ($A^{c}=\mathbb R\setminus A$). Then there exist $a,b$ with $a<b$ and $(a,b) \subset A-A^{c}$. Picking a rational number in $(a,b)$ we get a contradiction to the hypothesis. |
The maximum of determinant restrict to a sphere is at an orthogonal matrix | Assuming you are using the Frobenius norm you have
$\|A\|_F = \|U^TAU\|_F$ for all orthogonal $U$ and $\det A = \det (U^TAU)$. In particular, using the Schur decomposition, we see that the problem is
equivalent to $\min_{\|T\|_F = 1, T \text{ triangular}} \det T$. It is not too hard to see that this is equivalent to $\min_{\|D\|_F = 1, D \text{ diagonal}} \det D$
and so the problem reduces to
$\min_{x_1^2+\cdots + x_n^2 = 1} x_1 \cdots x_n$.
Lagrange gives (after multiplying each row by $x_k$) $x_1 \cdots x_n + 2 \lambda x_k^2 = 0$ from which we see that $|x_1|=\cdots=|x_n| = {1 \over \sqrt{n}}$ and
hence $x_1 \cdots x_n = {1 \over \sqrt{n^n}}$.
This is attained for $A={1 \over \sqrt{n}}I$ (which is not orthogonal!).
Elaboration:
Lagrange gives $x_2 \cdots x_n + 2 \lambda x_1 = 0$, etc.
Multiply the first line by $x_1$, the second by $x_2$, etc. This gives
$$x_1 \cdots x_n + 2 \lambda x_1^2 = 0$$
$$x_1 \cdots x_n + 2 \lambda x_2^2 = 0$$
etc, hence $2 \lambda x_2^2 = 2 \lambda x_1^2 = \cdots = 2 \lambda x_n^2$ |
Doob's inequalities: Going from discrete to continuous | Assume $\sup_{s\leq t} |X_s|> \lambda$. Then there is a $t^*\in [0,t)$ with $|X(t^*)|> \lambda$, or $|X(t)|> \lambda$.
$\{|X(t)| > \lambda\} \subset \bigcup_{n=1}^\infty A_n$
If there is a $t^*\in [0,t)$ with $|X(t^*)|> \lambda$, then there is a sequence of dyadic rationals $s_n = \frac{\lfloor{2^n t^*}\rfloor + 1}{2^n} \in D_n$ with $s_n \downarrow t^*$, and right continuity gives us that there is an $n \in \Bbb{N}$ for which $|X(s_n)|> \lambda$, and therefore
$\{\exists t^* \in [0,t) ,|X(t^*)|> \lambda\} \subset \cup_{n=1}^\infty A_n$
So we can conclude that
$$ \bigg\{\sup_{s\leq t}|X_s| > \lambda\bigg\} \subset \bigcup_{n=1}^\infty A_n$$
the other inclusion $$ \bigg\{\sup_{s\leq t}|X_s| > \lambda\bigg\} \supset \bigcup_{n=1}^\infty A_n $$
Follows from the fact that
$$\bigg\{\sup_{s\leq t}|X_s| > \lambda\bigg\} \supset A_n \quad \forall n$$ |
Total cost, convergence of measure, Wassesrtein distance | Okay.
$1)$ Since $L^* \mu=\mu,$ we have
$$
d((L^*)^k \nu,\mu)=d((L^*)^k \nu,(L^*)^k\mu)\leq \lambda^k d(\nu,\mu),
$$
which, indeed, is exponential decay (i.e. of the form $Ce^{-ck}$ for $C,c>0$).
$2)$ The authors defined $\mathcal{P}_q$ to mean the set of measures having finite $W_q$-distance to some $\delta$-measure. Note that $W_q=C_q^{\min\{1,1/q\}},$ so if one's finite, then the other is finite.
$3)$ This I didn't check up on, but it's pretty standard notation for push-forward. I.e. for some measure $\mu$ on some space $X$ and some map $f:X\to Y$, we define $f_* \mu(A)=\mu(f^{-1}(A))$ for all measurable subsets $A$ of $Y$. |
It's $\mathbb{H}$ isomorphic to $\mathbb{H^{op}}$? | Isomorphisms $R\cong R^{\mathrm{op}}$ of a ring $R$ correspond to antiautomorphisms $\alpha$ (note this means that $\alpha$ satisfies the property $\alpha(rs)=\alpha(s)\alpha(r)$ instead of the usual rule). Let $(R,+,\cdot)$ be a ring with underlying set $R$ and define $\star$ by $a\star b=b\cdot a$. Then $(R,+,\star)$ is the opposite ring. The condition that an additive map $\alpha:R\to R$ satisfy $\alpha(r\cdot s)=\alpha(s)\cdot \alpha(r)$ (i.e. that $\alpha$ is an antiautomorphism) is equivalent to $\alpha(r\cdot s)=\alpha(r)\star\alpha(s)$ (i.e. that $\alpha$ defines an isomorphism between the ring and its opposite ring, $(R,+,\cdot)\to(R,+,\star)$.)
The quaternions $\mathbb{H}$ have conjugation as an antiautomorphism. |
How many number of primes below n such that they are sum of consecutive primes | A heuristic approach: OEIS A007504 lists the sums of the first primes and says the $k^{\text{th}}$ entry is about $\frac 12k^2\log k$ The maximum $k$ that you allow comes from $\frac 12k^2\log k=n$. We can do one step of fixed point iteration to say $\log k \approx \log \sqrt {2n}, k \approx \sqrt{\frac {2n}{\log \sqrt {2n}}}$. Now if we say the chance of a number $q$ being prime is $\frac 1{\log q}$ we get an approximation up to $n$ that is $$\sum_{i=2}^{\sqrt{\frac {2n}{\log \sqrt {2n}}}}\frac 1{\log( \frac 12i^2\log i)}$$ where I started at $2$ because it blows up at $1$ |
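A rough numerical comparison (assuming the "sums of the first $k$ primes" reading used above; sympy's prime utilities are just a convenience):

```python
import math
import sympy

def exact_count(n):
    count, s, p = 0, 0, 1
    while True:
        p = sympy.nextprime(p)
        s += p
        if s >= n:
            return count
        if sympy.isprime(s):
            count += 1

def heuristic(n):
    K = int(math.sqrt(2 * n / math.log(math.sqrt(2 * n))))
    return sum(1 / math.log(0.5 * i * i * math.log(i)) for i in range(2, K + 1))

n = 10**6
print(exact_count(n), heuristic(n))   # same order of magnitude
```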
Need an explanation of an answer regarding $\log F_n = \Theta(n)$, where $n$ is the $n$-th Fibonacci number | Yes, $F_5=5$. The fact that $F_6=8$ does not change the argument there. You can either decrease $A$ a bit or demand a higher starting point for the bounds. With the given $A,B$, the base case becomes $F_{11}=89$, with $11 \log (3/2)\approx 4.46, \log(89) \approx 4.49, 11\log(2) \approx 4.80$ and everything goes through.
Finding surface area of cone over $xy$ plane | You've already gotten to $\iint_D\sqrt{1+(f_x)^2+(f_y)^2}dA$, so you're on the home stretch. Insert and simplify $(f_x)^2+(f_y)^2$, remember that $\iint_D1\,dA=a$, and you're done. |
How do you solve a Mean Value Theorem problem that involves multiple square roots? | Typically to solve something like
$$
a \sqrt{x} + bx = d
$$
for instance, you square both sides, to get
$$
a^2 x + 2ab x\sqrt{x} + b^2x^2 = d^2
$$
and you can now move everything but the square root to the other side:
$$
2ab x\sqrt{x}= d^2 - a^2 x - b^2 x^2
$$
Now you can square again, and get
$$
4a^2 b^2 x^3 = (d^2 - a^2 x - b^2 x^2)^2
$$
which is a polynomial that you might hope to solve (esp. if some coefficients happen to be zeroes, etc.). At any rate, there are no square roots left.
There's a caution here. Suppose you're asked to solve
$$
\sqrt{x} = -\sqrt{x}
$$
and you square both sides. You get
$$
x = x
$$
whose solution is "all numbers $x$". But the original equation has only the solution $x = 0$. In general, squaring an equation can introduce extraneous solutions (because $u^2 = v^2$ doesn't tell you that $u = v$, only that $u = \pm v$). So when you find a solution to that polynomial equation at the end, you have to PLUG IT BACK INTO THE ORIGINAL EQUATION and verify that it's also a solution to the original equation.
This should help, I hope, with your problem. |
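If you want to check the algebra, a CAS will do the squaring and discard the extraneous roots for you; a small sympy sketch (the equation is an arbitrary example):

```python
import sympy as sp

x = sp.symbols('x', real=True)
# 2*sqrt(x) + x = 8: squaring gives a polynomial; only x = 4 survives the check
print(sp.solve(2*sp.sqrt(x) + x - 8, x))   # [4]
```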
2nd Order Series Solution | Please check your second derivative: it is not correct.
Assuming that it is correct, what you have to do is to write the expansion of the differential equation, collect the terms of the same degree in $x$, and cancel the corresponding coefficients. This leads to very simple linear equations involving the coefficients. You will quickly notice that these coefficients can be expressed as functions of the very first coefficients. This will not cause any problem since you have two boundary conditions, the use of which will give you these two coefficients: $y(0) = 1$ obviously gives $C(0) = 1$ and $y'(0) = 1$ obviously gives $C(1) = 1$. So, from here, you can start working from the second power of $x$.
I personally do not see why the development of $y$ has to be done as a polynomial of even powers plus another one of odd powers of $x$. If this has not been imposed on you, just consider a single polynomial with all powers of $x$.
I am sure you can continue from here. If you still have problems, post. Cheers. |
Function with Multiple Periods | The best way is to experiment, as it depends so much on the actual shape of the data.
You can get a sinusoidal type graph with a lop-sided cycle by considering a function in the form of a fraction consisting of $a\sin(nt+\epsilon)$ in the numerator and $b\sin(mt+\delta)+k$ in the denominator, where $k$ is only just large enough to ensure the denominator remains strictly positive. If $k$ is too large, the peaks and troughs get too close together.
To see what I mean, try plotting $$y=\frac{\sin x}{\sin x+\cos x+2}$$ on Wolfram Alpha. |
Calculating integral over unit ball using co area formula | Hint. If you pick
$$
\phi(\mathbf{x}) = \sqrt{x^2+y^2+z^2}
$$
one has $\parallel \nabla \phi \parallel =1$ and the coarea formula becomes the formula for change of variables into spherical coordinates. |
When $\delta(x,y)$ is a metric on $\mathbb{R}$? | You've done some good work already. I will say that you should be aiming to prove your conditions are equivalent. For example, you say
If $\delta$ satisfies point three, we have that ... $f$ is an even function.
You should also be trying to show that, if $f$ is an even function, then $\delta$ satisfies point three (which shouldn't be difficult). When you have a nice equivalent condition, that's when you know you can stop!
Let's look at condition 4. The conditions you have strike me as already necessary and sufficient, so you could conceivably just present the condition as is (with proof). I would suggest using the fact that $f$ must be an even function to simplify further:
$$f(x + y) \le f(x) + f(y). \tag{$\star$}$$
Note that, this implies
$$f(x - y) \le f(x) + f(-y) = f(x) + f(y),$$
since $f$ is even. It also implies
$$f(x) \le f(x - y) + f(y) \implies f(x) - f(y) \le f(x - y).$$
Finally, it implies, by induction, that $f(nx) \le nf(x)$. Let's establish that $(\star)$ is indeed equivalent to condition 4, given $f$ is even.
Suppose $(\star)$ holds. Then
\begin{align*}
\delta(x, y) &= f(x - y) \\
&= f(x - z + z - y) \\
&\le f(x - z) + f(z - y) \\
&= \delta(x, z) + \delta(z, y).
\end{align*}
Conversely, suppose condition (4) holds. That is,
$$\delta(x, y) \le \delta(x, z) + \delta(z, y) \iff f(x - y) \le f(x - z) + f(z - y).$$
If we choose $z = 0$, then we get
$$f(x - y) \le f(x - 0) + f(0 - y).$$
Replacing $y$ with $-y$ yields $(\star)$ as required. That is, $(\star)$ is a necessary and sufficient condition for 4. |
Probability circle to be in a circle | First, I'll assume the two points are obtained with a uniform distribution in the circle.
You have a good conclusion there. Let's build from it. As you've noticed, once you fix the distance from $A$ to the center of the circle, $B$ has to fall inside the little circle: $B$ can fall in an area of $\pi (R-r)^2$, from a total of $\pi R^2$. Here $r$ is the distance between the center of the original circle and $A$.
Now, if you use polar coordinates to express your points, the probability density function of the radius of a uniformly random point $A$ is $2r/R^2$. The only thing we have left is to compute the integral, which in words would mean "Compute the product of the probability of $A$ having distance $r$ from the center multiplied by the probability of $B$ falling close enough, and sum for each possible $r$":
$$\int_0^R \Big(\frac{R-r}{R}\Big)^2 \frac{2}{R^2}r \hspace{.2cm}dr = \int_0^R \Big(1-\frac{r}{R}\Big)^2 \frac{2}{R^2}r \hspace{.2cm}dr$$
Which is just a polynomial, so we can solve it directly: substituting $u = r/R$, $$\int_0^1 (1-u)^2\, 2u \, du = 2\left(\frac12 - \frac23 + \frac14\right) = \frac{1}{6}.$$
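A Monte Carlo sketch of the same computation (assuming, as the argument above suggests, that the event in question is $|AB| \le R - |OA|$, i.e. the circle centred at $A$ through $B$ stays inside the big circle):

```python
import numpy as np

rng = np.random.default_rng(1)
R, N = 1.0, 10**6

def uniform_disk(n):
    r = R * np.sqrt(rng.random(n))   # sqrt gives a uniform area density
    t = 2 * np.pi * rng.random(n)
    return np.column_stack([r * np.cos(t), r * np.sin(t)])

A, B = uniform_disk(N), uniform_disk(N)
inside = np.linalg.norm(B - A, axis=1) <= R - np.linalg.norm(A, axis=1)
print(inside.mean())   # ≈ 1/6 ≈ 0.1667
```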
$a+b=c+d$ , $a^3+b^3=c^3+d^3$ , prove that $a^{2009}+b^{2009}=c^{2009}+d^{2009}$ | Hint: If $a+b=c+d$ and $ab=cd$ then $\{a,b\}=\{c,d\}$. Use the identity
$$x^3+y^3=(x+y)(x^2-xy+y^2)=(x+y)^3-3xy(x+y).$$ |
Is there an ambiguity on $\sum\limits^{\infty}_{i=0}a_i$? | $\sum\limits_{i=0}^{\infty} a_i$ means one thing. It means the limit of the sequence $\lim\limits_{n \to \infty} S_n$, where
$$S_n = a_0 + a_1 + a_2 + \cdots + a_n.$$
However,
$$\sum\limits_{i=-\infty}^{\infty}a_i$$
is ambiguous when it is only conditionally convergent. It is not obvious in what order the terms are meant to be added here, and as you point out, adding the terms in different ways produces different results.
$2\pi$-periodic $L^2$ functions on $R^1$ approximated by its Fourier series | Note $$\hat{s}_N(m) = \sum_{\lvert n\rvert \le N} \hat{f}(n)\cdot\frac{1}{2\pi}\int_{-\pi}^\pi e^{i(n-m)t}\, dt = \sum_{\lvert n\rvert \le N} \hat{f}(n)\delta_{nm}$$ where $\delta_{nm}$ equals $1$ when $n = m$ and equals $0$ otherwise. If $\lvert m\rvert > N$, then $\delta_{nm} = 0$ for every $n$ with $\lvert n \vert \le N$. Therefore, $\hat{s}_N(m) = 0$ whenever $\lvert m\rvert > N$. On the other hand, if $\lvert m\rvert \le N$, then $\sum_{\lvert n \rvert \le N} \hat{f}(n)\delta_{nm}$ reduces to $\hat{f}(m)$. It follows that $\hat{f}(m) - \hat{s}_N(m)$ is $\hat{f}(m)$ for $\lvert m\rvert > N$ and $0$ for $\lvert m \rvert \le N$. By Parseval's theorem, $$\|f - s_N\|_2^2 = \sum_{n\, =\, -\infty}^\infty \lvert\hat{f}(n) - \hat{s}_N(n)\rvert^2 = \sum_{\lvert n\rvert > N} \lvert \hat{f}(n)\rvert^2$$ Since the sequence $(\hat{f}(n))_{n = 1}^\infty$ is square summable, given a positive number $\varepsilon$, there corresponds a $k$ such that for all $N > k$, $\sum_{\lvert n \rvert > N} \lvert \hat{f}(n)\rvert^2 < \varepsilon$. Thus $$\lim_{N\to \infty} \sum_{\lvert n \rvert > N} \lvert \hat{f}(n)\rvert^2 = 0$$ implying the $\lim_{N\to \infty}\|f - s_N\|_2 = 0$. |
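A numerical illustration of the conclusion (the choice $f(t)=t$ on $(-\pi,\pi)$ is arbitrary; coefficients are computed by quadrature):

```python
import numpy as np

t = np.linspace(-np.pi, np.pi, 20001)
f = t   # an arbitrary 2π-periodic L^2 test function

def partial_sum(N):
    s = np.zeros_like(t, dtype=complex)
    for n in range(-N, N + 1):
        c = np.trapz(f * np.exp(-1j * n * t), t) / (2 * np.pi)
        s += c * np.exp(1j * n * t)
    return s.real

for N in [1, 4, 16, 64]:
    err = np.sqrt(np.trapz((f - partial_sum(N))**2, t) / (2 * np.pi))
    print(N, err)   # ‖f - s_N‖_2 decreases toward 0
```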