title | upvoted_answer
---|---
Prove the following proportions | Hint: Divide the top and bottom of the left side by $d$ to get
$$
\frac{2a + 3d}{3a - 4d} = \frac{2 \frac ad + 3}{3 \frac ad - 4}
$$
Now, note that
$$
\frac ad = \frac ab \cdot \frac bc \cdot \frac cd = \left(\frac ab \right)^3
$$
I'll leave the rest to you. |
Use l'Hôpital's rule to solve | Hints:
$$\left(\log x\right)^{\frac1{x-e}}=e^{\frac1{x-e}\log\log x}$$
$$\lim_{x\to e^+}\frac{\log\log x}{x-e}\stackrel{\text{l'H}}=\lim_{x\to e^+}\frac1{x\log x}=\ldots$$ |
Simplifying Exponents in Fractions | Here's how I think about the exponent rules. It may be helpful to you. Reduce the numerical part first and consider
$$
\frac{2x^3}{x^5y^2}.
$$
I notice that both have common factors of $x$. How many copies of $x$ are there? There are 3 on the top and 5 on the bottom, so I could think of the fraction as
$$
\frac{2xxx}{xxxxxy^2}.
$$
(Of course I would never actually write that down, but it's useful to keep in mind.) Now, if I went through and canceled factors of $x$ one by one, I would eventually be left with
$$
\frac{2}{xxy^2},
$$
or better yet
$$
\frac{2}{x^2y^2}.
$$
The advantage of thinking in this way is that I don't have to memorize unmotivated rules about when to add or subtract exponents (though, if you reflect for a moment, you'll see that this method is the same as the exponent rules you've learned), and it avoids producing negative exponents unnecessarily. |
Proving that every basis of $R^n$ has $n$ elements | I don't think one even needs the morphism to be surjective (at least, provided your definition of ring homomorphism includes taking $1$ to $1$). And in fact, $K$ does not need to be a field; any ring satisfying the invariant basis property will suffice. As I recall, one has the following argument, which I first learned from T.Y. Lam's "Lectures on Modules and Rings":
Suppose $m$ is a positive integer such that $R^{n}$ admits a basis with $m$ elements. This is tantamount to providing an isomorphism of $R$-modules $\varphi \colon R^{m} \to R^{n}$. Indeed, suppose $r_{1}, \ldots, r_{m} \in R^{n}$ are your basis elements. The (unique) morphism of $R$-modules $R^{m} \to R^{n}$ sending the $i$th standard basis element $e_{i}$ to $r_{i}$ must be surjective, since the $r_{i}$s generate $R^{n}$. On the other hand, this morphism must also be injective, since there are no nontrivial $R$-linear relations between the $r_{i}$s.
The data of $\varphi$, in turn, is the same as supplying matrices $A, B$ with entries in $R$ of the correct dimensions such that $AB = I_{m}, BA = I_{n}$. Applying the morphism $R \to K$ entrywise to $A, B$, we obtain matrices $A', B'$ with entries in $K$ such that $A'B' = I_{m}, B'A' = I_{n}$. But these matrices then define $K$-linear maps $K^{m} \to K^{n}, K^{n} \to K^{m}$ which are inverse to each other, and so we must have $n = m$ by usual linear algebra (or the invariant basis property).
(Alternatively, you can use the functor $M \mapsto M \otimes_{R} K$ from $R$-modules to $K$-modules given by extension of scalars, and by functoriality one sees that $\varphi \otimes_{R} \mathrm{Id}_{K} \colon K^{m} \to K^{n}$ must be an isomorphism. In the non-commutative setting, you need to pay a little attention to the "sidedness" of the modules under consideration, but this argument indeed goes through fine.) |
Largest number of edges removed from $Q_{10}$ such that the graph always has a Hamiltonian cycle. | Recall that $Q_n$ can be constructed with a vertex set consisting of all binary sequences of length $n$, where two sequences are adjacent if they differ in exactly one entry.
Lemma. Every edge $e$ of $Q_n$ is contained in exactly $n-1$ squares (induced $4$-cycles). Moreover, $e$ is the only common edge in any two of these squares.
Proof: Let $(v_0,v_1)$ be an edge differing in the $i$th entry. Now $v_0$ is adjacent to exactly $n-1$ other vertices. But if $v_2$ is any one of these, say differing from $v_0$ in the $j$th entry, with $i\neq j$, then there is a unique vertex $v_3$ that differs from $v_2$ in the $i$th entry and differs from $v_1$ in the $j$th entry, and of course $\{v_0,v_1,v_2,v_3\}$ induces a $4$-cycle. The moreover part follows from the uniqueness of $v_3$.
We now prove the following, which is actually stronger than what you require.
Theorem. For any $n\geq 2$, let $H$ be a subgraph obtained from $Q_n$ by deleting $n-2$ edges. Then every edge of $H$ is contained in a Hamiltonian cycle.
We proceed by induction, with the base case being trivial. So suppose we delete $n-2$ edges from $Q_n$. Let $H$ be the resulting subgraph.
Let $e=(v_0,v_1)$ be an edge in $H$ where $v_0$ and $v_1$ differ in the $i$th entry. We may assume without loss of generality that the $i$th entry of $v_0$ is $0$. Since we have only deleted $n-2$ edges, while by the Lemma $e$ lies in $n-1$ squares whose only common edge is $e$, at least one of these squares survives in $H$; call it $\{v_0,v_1,v_2,v_3\}$.
Now, let $Q_{n-1}^0$ be the subgraph of $Q_n$ induced by those vertices whose $i$th entry is $0$, and let $Q_{n-1}^1$ be the subgraph of $Q_n$ induced by those vertices whose $i$th entry is $1$. Of course both of these subgraphs are isomorphic to $Q_{n-1}$. Let $H_0$ and $H_1$ be the corresponding induced subgraphs of $H$. Note that the edge $(v_0,v_2)$ is in $H_0$ and $(v_1,v_3)$ is in $H_1$.
Now, if, in forming $H$, not all of the $n-2$ edges that were deleted were in either $Q_{n-1}^0$ or $Q_{n-1}^1$, then at most $n-3$ edges were deleted from $Q_{n-1}^0$ and $Q_{n-1}^1$ respectively. Apply induction to obtain a Hamilton cycle $C_0$ in $H_0$ containing the edge $(v_0,v_2)$ and a Hamilton cycle $C_1$ in $H_1$ containing the edge $(v_1,v_3)$. Deleting these two edges and adding in $(v_0,v_1)$ and $(v_2,v_3)$ gives a Hamiltonian cycle of $H$ containing our arbitrarily chosen edge $e$. This argument in a picture:
Now, suppose that all $n-2$ edges that were deleted were in, say, $Q_{n-1}^0$. If we reinstate one of these, then there is a Hamiltonian cycle $C_0$ containing $(v_0,v_2)$. Thus in $H_0$ there is a Hamilton path from a vertex $w$ to a vertex $x$ containing $(v_0,v_2)$. Since $H_1=Q_{n-1}^1$, the Hamilton cycle that "mirrors" $C_0$ (say $C_1$) is in $H_1$. Also, all edges between $Q_{n-1}^0$ and $Q_{n-1}^1$ in $Q_n$ are in $H$. Let this set of edges be $B$.
The subgraph of $H$ consisting of just the edges in $B,C_1$ and $C_0\setminus (w,x)$ can be drawn (relabelling $w$ and $x$ if necessary) as the obvious generalization of the following picture for $n=4$:
It is easy to check that this subgraph has a Hamiltonian cycle containing $e$, obtained by alternating edges between $B$ and $C_1\cup C_0\setminus (w,x)$. For example: |
A problem to show that a certain field extension is not normal | If $i \in \mathbb Q(\alpha + i\alpha)$, then $1 + i \in \mathbb Q(\alpha + i\alpha)$, and hence $\alpha = \frac{\alpha + i \alpha}{1 + i} \in \mathbb Q(\alpha + i\alpha)$ too.
So $\mathbb Q(\alpha + i \alpha) = \mathbb Q(\alpha, i)$.
However, $$[\mathbb Q(\alpha, i) : \mathbb Q] = [\mathbb Q(\alpha, i) : \mathbb Q(\alpha)] \times [\mathbb Q(\alpha) : \mathbb Q] = 2 \times 4 = 8,$$
while
$$ [\mathbb Q(\alpha + i \alpha) : \mathbb Q] = {\rm deg}(X^4 + 20) = 4,$$
which is a contradiction. |
How to cluster words using Jensen-Shannon Divergence? | A general keyword you could search for is "distributional similarity". My formulation: Words are similar to the extent they occur together with the same words.
For this purpose, word co-occurrence statistics mean basically the number of times that other words occur together with the words of interest. First you build a "vector" that tells how often any of the "other words" occurred together with a word. Then you normalize these counts so that their sum is $1$. That is formally a probability mass function: the "other words" are the elementary outcomes and the normalized counts are their probabilities. The "words" in the formula in your image are these probability mass functions.
Then you apply the Jensen-Shannon divergence, aka information radius, aka mean divergence to the mean, $(D(f\| (f + g)/2) + D(g\|(f + g)/2))/2$, to the pairs of such functions, $f$ and $g$, and cluster the words based on the numbers you obtain, using them as if they were distances of the words.
It's possible to apply weights to the counts according to their informativeness. Formally the crucial things are that the mean of probability mass functions is again a probability mass function, and the mean is only $0$ where both of $f$ and $g$ are $0$, so relative entropy aka Kullback-Leibler divergence can be applied without smoothing.
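Concretely, a minimal sketch of the computation (the co-occurrence counts are invented, and numpy is just a convenience here):

```python
import numpy as np

def kl(f, g):
    # relative entropy D(f || g); sums only where f > 0, and the mean
    # distribution below is positive wherever f is, so no smoothing is needed
    mask = f > 0
    return np.sum(f[mask] * np.log2(f[mask] / g[mask]))

def jsd(f, g):
    # Jensen-Shannon divergence: mean relative entropy to the mean distribution
    m = (f + g) / 2
    return (kl(f, m) + kl(g, m)) / 2

# invented co-occurrence counts of two target words with four context words
f = np.array([10., 0., 5., 5.]); f /= f.sum()   # normalize counts to a pmf
g = np.array([8., 2., 4., 6.]);  g /= g.sum()
print(jsd(f, g))   # in [0, 1] with base-2 logs; usable as a distance for clustering
```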
The group where Dagan was at that time (I think) did some empirical experiments where this formula compared favourably to a number of others. You might also want to look up some of the papers of Lillian Lee from that time, including her PhD thesis.
(Some use the name "information radius" for a formula that is twice "Jensen-Shannon", but there is an earlier paper on information radius that develops the same general formula under that name. When comparing two equally weighted distributions, the difference does not matter.)
The paper I mentioned is Robin Sibson, 1969, "Information Radius", Probability Theory and Related Fields, 14(2):149-160. In 1969 the journal was Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete. It develops the formula, not the application to word co-occurrences. (Jianhua Lin's paper where he names the formula after Jensen and Shannon seems to be as late as 1991. Is that right?) |
In a class of 400 students, 180 read English, 371 read Sanskrit, 270 read Hindi, at least how many students read all the three? | $400 - 371 = 29$ students do not read Sanskrit.
$270$ students read Hindi. At most $29$ of them do not read Sanskrit. So at least $270 - 29 = 241$ students read both Hindi and Sanskrit.
So at most $400 - 241 = 159$ students don't read both.
$180$ read English, and at most $159$ of them don't read both Sanskrit and Hindi. So at least $180 - 159 = 21$ of the students who read English read both Sanskrit and Hindi.
So at the very least $21$ students read all three.
....
To do it by exclusion/inclusion.
$|E \cup S \cup H| = 400$
$|E \cup S \cup H|= |E|+ |S| + |H| - (|E\cap S| + |E \cap H| + |S \cap H|) + |E \cap S \cap H|=180 + 371 + 270 - (|E\cap S| + |E \cap H| + |S \cap H|) + |E \cap S \cap H|$
So $|E \cap S \cap H| = (|E\cap S| + |E \cap H| + |S \cap H|) - 421$.
Now $400 \ge |E\cup H| = |E| + |H| - |E\cap H|= 180+270- |E\cap H|$ so $|E\cap H|\ge 50$.
$400\ge |E\cup S| = |E| + |S| - |E\cap S| = 180 + 371 - |E\cap S|$ so $|E\cap S| \ge 151$
$400\ge |H\cup S| = |H| + |S| - |H\cap S| = 270 + 371 - |H\cap S|$ so $|H\cap S| \ge 241$
So $|E \cap S \cap H| = (|E\cap S| + |E \cap H| + |S \cap H|) - 421 \ge 50 + 151 + 241 - 421 = 21$ |
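If you want to confirm the bound mechanically, minimizing $|E\cap S\cap H|$ over the sizes of the eight Venn regions is a tiny linear program; a sketch (the use of scipy is my own choice, not part of the problem):

```python
from scipy.optimize import linprog

# x[m] = number of students whose subject set is the bitmask m over (E, S, H)
A_eq, b_eq = [[1] * 8], [400]                      # every student is in some region
for bit, size in [(0, 180), (1, 371), (2, 270)]:   # |E|, |S|, |H|
    A_eq.append([(m >> bit) & 1 for m in range(8)])
    b_eq.append(size)

c = [1 if m == 0b111 else 0 for m in range(8)]     # minimize the triple overlap
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 8)
print(res.fun)   # 21.0
```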
Proving $E^{x}[|B_s-B_t|^4]=n(n+2)|t-s|^2$ | This is an exercise from the book "Stochastic Differential Equations" by Øksendal, isn't it? I also thought about it for a couple of days. You seem to be confusing the 1-dimensional and the n-dimensional cases.
In the n-dimensional case, $ |B_{t}-B_{s}| $ is
$$ |B_{t}-B_{s}|=\sqrt{\left(B_{t}^{(1)}-B_{s}^{(1)}\right)^{2}+\left(B_{t}^{(2)}-B_{s}^{(2)}\right)^{2}+\ldots+\left(B_{t}^{(n)}-B_{s}^{(n)}\right)^{2}} $$
where $ B_{t}^{(i)} $ and $ B_{s}^{(i)} $ are the $i$-th components of $ B_{t} $ and $ B_{s} $, respectively. Accordingly,
$$ |B_{t}-B_{s}|^{4}=\left\{\left(B_{t}^{(1)}-B_{s}^{(1)}\right)^{2}+\left(B_{t}^{(2)}-B_{s}^{(2)}\right)^{2}+\ldots+\left(B_{t}^{(n)}-B_{s}^{(n)}\right)^{2}\right\}^{2} $$
Since the $i$-th and $j$-th components are independent for $i \neq j$, the expectation given the initial location $x$ is
$$ E^{x}\left[|B_{t}-B_{s}|^{4}\right]=\sum_{i}^{n}E^{x}\left[\left(B_{t}^{(i)}-B_{s}^{(i)}\right)^{4}\right]+\sum_{i}^{n}E^{x}\left[\left(B_{t}^{(i)}-B_{s}^{(i)}\right)^{2}\right]\sum_{j\neq i}^{n}E^{x}\left[\left(B_{t}^{(j)}-B_{s}^{(j)}\right)^{2}\right] $$
$$ =3n(t-s)^{2}+n(n-1)(t-s)^{2} $$
$$ =n(n+2)(t-s)^{2} $$
I am not originally a mathematician, so this may contain mistakes. It may also be helpful to know that hints for the exercises in Øksendal's book are provided at http://www.quantsummaries.com/oksendal.pdf. |
What is the problem of difference between instruction's answer and my answer? | You have $\dfrac{1}{4}\cdot \left(x+\dfrac{1}{2}x\right) = \dfrac{1}{4}\cdot \dfrac{3}{2}x = \dfrac{3}{8}x$.
The answer on the board has
$$\dfrac{1}{2}x- \dfrac{1}{8}x = \dfrac{3}{8}x$$ |
marginalising probability with itself | I'll assume that $X$ is a discrete random variable (since there is a sum on $x$). What you want is called the Law of Total Probability; if we have a countable partition $B_n$ of the sample space then $$P(A) = \sum_n P(A \cap B_n)$$
In this case $B_x = \{X = x\}$ is a partition of the sample space, we can write
$$P(f(X,Y) \in A) = \sum_x P(f(X,Y) \in A, X = x) = \sum_x P(f(x,Y) \in A, X = x)$$
Notice that we don't need $X$ and $Y$ to be independent.
Also note that if $X$ is not discrete, then the partition $B_x = \{X = x\}$ is not countable, so the whole thing breaks down. (Also, $\sum_x$ doesn't really mean anything if you're summing over uncountably many values of $x$.)
There are no restrictions on $Y$ though |
simple looking but hard to prove geometrical problem: prove that 4 points on the same circle. | First, let's prove an intermediate conclusion, or a lemma, which can be stated as follows.
Lemma Let $l$ be another exterior common tangent (namely, not $CD$) of the
circles $(ADE)$ and $(BCE)$. Then $l$ is tangent to the circle
$(ABE)$.
Proof All points are labeled as the figure shows. Notice that
\begin{align*}
AK&=AO-KO=\frac{1}{2}(AD+AE-DE)-KJ,\\
BL&=BN-LN=\frac{1}{2}(BC+BE-CE)-LM.\\
\end{align*}
Hence
\begin{align*}
AK+BL&=AB+\frac{1}{2}(AE+BE-DE-CE)-(KJ+LM)\\
&=AB+\frac{1}{2}(AE+BE-CD)-(JM-KL)\\
&=AB+\frac{1}{2}(AE+BE-CD)-(PQ-KL)\\
&=AB+KL+\frac{1}{2}(AE+BE-CD)-(EP+EQ)\\
&=AB+KL,\\
\end{align*}
which shows that the quadrilateral $ABLK$ has an inscribed circle. Apparently, it must be the inscribed circle of triangle $AEB$; namely, $ABLK$ and $ABE$ have the same inscribed circle. The conclusion we want to prove follows. Moreover, we may see that $AE,GH,l$ and $BE,GF,l$ are respectively concurrent.
Now, come back to the present problem. Notice that $GA\parallel EH$ and $GB\parallel EF$, hence $\angle HEF=\angle AGB$. But $ABLK$ is a circumscribed quadrilateral, so it is obvious that $\angle AGB+\angle KGL=180^\circ$. As a result, $\angle HEF+\angle HGF=180^\circ$, which implies that $H,E,F,G$ are concyclic. We are done. |
Stuck on derivative of logarithm of sum of exponentials | This is just an application of the chain rule
$$
\frac{\partial}{\partial w_1}\log\left(\exp(w_1 x_1 + b_1) + \exp(w_2 x_2 + b_2)\right) = \frac{\partial}{\partial w_1}\log(f(w_1)) = \frac{\frac{\partial f(w_1)}{\partial w_1}}{f(w_1)}
$$
So your final answer is:
$$ \frac{x_1\exp(w_1x_1+b_1)}{\exp(w_1 x_1 + b_1) + \exp(w_2 x_2 + b_2)}$$ |
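As a numeric sanity check (a sketch; all parameter values here are made up), compare the closed form against a central-difference approximation:

```python
import math

x1, b1, w2, x2, b2 = 0.7, 0.1, -0.3, 1.2, 0.4   # arbitrary fixed values

def f(w1):
    return math.log(math.exp(w1*x1 + b1) + math.exp(w2*x2 + b2))

def grad(w1):
    # x1 * exp(w1*x1 + b1) / (exp(w1*x1 + b1) + exp(w2*x2 + b2))
    e1 = math.exp(w1*x1 + b1)
    return x1 * e1 / (e1 + math.exp(w2*x2 + b2))

w1, h = 0.5, 1e-6
print(grad(w1), (f(w1 + h) - f(w1 - h)) / (2*h))   # should agree to ~1e-9
```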
financial calculus call option | Hint: If you buy $1$ stock and put $b$ in risk-free bonds, at the end of two months, your value would be either $53 + e^{1/60} b$ or $48 + e^{1/60}b$ depending on the market.
Now if instead you had invested the $50+b$ of money in options at a price $p$, you would either have $\dfrac{50+b}{p}\cdot 4$ or $0$ as the respective values.
If you can find values of $b, p$ so that these situations are identical, then that $p$ must be your option price. |
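A sketch that just mechanizes the hint with sympy (the scenario values are the ones in the hint; check them against your exact contract):

```python
import sympy as sp

b, p = sp.symbols('b p')
r = sp.exp(sp.Rational(1, 60))             # growth factor of the bond position
eq_up = sp.Eq(53 + r*b, (50 + b) / p * 4)  # portfolio values match if the market goes up
eq_down = sp.Eq(48 + r*b, 0)               # ... and if it goes down
sol = sp.solve([eq_up, eq_down], [b, p], dict=True)[0]
print(sol[p], float(sol[p]))               # the replicating option price
```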
If $A$ is diagonalizable in $\mathbb{R}$, is it diagonalizable in $\mathbb{C}$? | It is an obvious yes. If $B$ is a real matrix such that $BAB^{-1}$ is diagonal, then $B$ is also a complex matrix and $BAB^{-1}$ is still diagonal. |
How do I show this property of a square matrix is true? | Let the square matrix be denoted by $A$. That every vector $v$ is a linear combination of the columns of $A$ means that you can always solve the equation $Ax=v$, where $x$ is the unknown. This is only possible if $A$ is invertible, so we have $\det(A)\neq 0$. The determinant of $A'$ (the transpose of $A$) is the same as the determinant of $A$, so $A'$ is also invertible. This means you can always solve the equation $A'x=v$, or that $v$ is a linear combination of the columns of $A'$. Since the columns of $A'$ are the same as the rows of $A$, we are done. |
Stationary points of $ \frac{1}{2} \|A-xx^T\|^2_F $ | Let $f(x) = \frac{1}{2} \|A-xx^T\|^2_F$ and $\partial_k f$ the partial derivative of $f$ with respect to $x_k$.
By the definition of the Frobenius norm
$$ f(x) = \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} (a_{ij} - x_i x_j)^2 $$
We can split that double summation in a clear way so that computing $\partial_k f$ becomes straightforward, as follows:
$$\begin{align}
f(x) &= \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} (a_{ij} - x_i x_j)^2 \\
&= \frac{1}{2} \left( \overbrace{\sum_{\substack{i=1 \\ i\neq k}}^{n} \sum_{\substack{j=1 \\ j \neq k}}^{n} (a_{ij} - x_i x_j)^2}^{\text{has no $x_k$}} + \overbrace{\sum_{\substack{i=1 \\ i\neq k}}^{n} (a_{ik} - x_i x_k)^2}^{\text{has only one $x_k$}} + \overbrace{\sum_{\substack{j=1 \\ j\neq k}}^{n} (a_{kj} - x_k x_j)^2}^{\text{also has only one $x_k$}} + \overbrace{(a_{kk} - x_k^2)^2}^{\text{has two $x_k$s}} \right)
\end{align}$$
Therefore
$$\begin{align}
\partial_k f(x) &= \frac{1}{2} \left( 0 + \sum_{\substack{i=1 \\ i\neq k}}^{n} 2(a_{ik} - x_i x_k)(-x_i) + \sum_{\substack{j=1 \\ j\neq k}}^{n} 2(a_{kj} - x_k x_j)(-x_j) + 2(a_{kk} - x_k^2)(-2 x_k) \right) \\
&= - \sum_{\substack{i=1 \\ i\neq k}}^{n} (a_{ik} - x_i x_k) x_i - \sum_{\substack{j=1 \\ j\neq k}}^{n} (a_{kj} - x_k x_j) x_j - (a_{kk} - x_k^2) 2 x_k \\
&= - \sum_{i=1}^{n} (a_{ik} - x_i x_k) x_i - \sum_{j=1}^{n} (a_{kj} - x_k x_j) x_j \\
&= - \sum_{i=1}^{n} \left( (a_{ik} - x_k x_i) x_i + (a_{ki} - x_k x_i) x_i \right) \\
&= - \sum_{i=1}^{n} (a_{ik} - x_k x_i + a_{ki} - x_k x_i) x_i \\
&= - \sum_{i=1}^{n} (a_{ik} + a_{ki} - 2 x_k x_i) x_i \\
\end{align}$$
Since $\nabla f(x) = \left[ \begin{matrix} \partial_1 f(x) & \cdots & \partial_n f(x) \end{matrix} \right]^T$ it follows that $\nabla f(x) = - (A + A^T - 2x x^T)x $
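A finite-difference check of this gradient formula (a sketch with a random matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)

f = lambda v: 0.5 * np.linalg.norm(A - np.outer(v, v), 'fro')**2
grad = -(A + A.T - 2 * np.outer(x, x)) @ x        # the closed form derived above

eps = 1e-6
num = np.array([(f(x + eps*e) - f(x - eps*e)) / (2*eps) for e in np.eye(n)])
print(np.max(np.abs(grad - num)))                 # ~1e-8
```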
Then $\nabla f(x) = 0$ implies $x = 0$ or $A + A^T - 2x x^T = 0$.
Now suppose that $x \neq 0$. So $x x^T = \dfrac{1}{2}(A + A^T)$. That is, $x x^T$ equals the symmetric part of $A$. This is only feasible if $A$ meets certain conditions, which I will leave for you to verify: namely, $A$ must have a non-negative diagonal, and for all $i, j \in 1 \dots n $ it must hold that
$$ \underbrace{\sqrt{a_{ii} a_{jj}}}_{\text{geometric mean}} = \underbrace{\frac{a_{ij} + a_{ji}}{2}}_{\text{arithmetic mean}} $$
(which I think is a particularly nice relation).
So if these conditions are met, then $x = \left[ \begin{matrix} \sqrt{a_{11}} \cdots \sqrt{a_{nn}} \end{matrix} \right]^T$.
Thus the stationary points of $f(x)$ are $x=0$, and $x = \left[ \begin{matrix} \sqrt{a_{11}} \cdots \sqrt{a_{nn}} \end{matrix} \right]^T$ if $A$'s additional conditions are met. |
Representation in Banach space and norms 'induced' by representation | I think the averaging technique using the Haar measure is crucial to such an argument. For the proof using Haar measure, proceed as follows:
Let $\Vert . \Vert$ be the norm on $X$. Define the new norm
$$\Vert v \Vert_{G}=\frac{1}{\vert G \vert} \int_{G}\Vert \pi(g)v \Vert\ d\mu$$
One can easily verify that this is a norm and is equivalent to $\Vert . \Vert$.
Now since the Haar measure is left invariant, we have
$$\Vert \pi(h)v \Vert_{G}=\frac{1}{\vert G \vert}\int_{G} \Vert \pi(h)\pi(g)v \Vert \ d\mu=\frac{1}{\vert G \vert}\int_{G} \Vert \pi(hg)v \Vert \ d\mu=\frac{1}{\vert G \vert}\int_{G} \Vert \pi(u)v \Vert \ d\mu=\Vert v \Vert_{G}$$
Where in the last step $u=hg$ and we have used left invariance of Haar.
It follows that $\pi(h)$ is an isometry with respect to $\Vert \cdot \Vert_{G}$ for each $h$. |
Two person working together | Consider $P$ the amount of papers. In one minute the first person delivers $\frac{P}{40}$, the second one $\frac{P}{50}$. Together in one minute they deliver $\frac{P}{40}+\frac{P}{50}$. So they need
$$\frac{P}{ \frac{P}{40}+\frac{P}{50}}$$ minutes to deliver the whole thing.
The quantity $P$ magically disappears, leaving $\dfrac{1}{\frac{1}{40}+\frac{1}{50}}=\dfrac{200}{9}\approx 22.2$ minutes... |
Why does this pattern fail (sometimes) for the continued fraction convergents of $\sqrt{2}$? | The likelihood of a new sequence agreeing with a known sequence for 45 terms, then never again, is very small. The likelihood of a sequence agreeing with a known sequence for (apparently) infinitely many terms, but disagreeing for some scattered subset, is almost nil. This is how I suspected that the disagreement was illusory. |
Irreducible polynomials on $\mathbb Z_2$ and $\mathbb Z_k$ | Your question is not well-defined, and even if one makes it precise, the answer is "no".
The coefficients of $p(x)\in\mathbb Z_2[x]$ are elements of $\mathbb Z_2$, that is either $2\mathbb Z$ or $1+2\mathbb Z$. We cannot consistently make coefficients in $\mathbb Z_n$ from that. For example, $x^2 \in \mathbb Z_2[x]$ might be lifted to $3x^2+2\in \mathbb Z_5[x]$, one need not take $x^2\in \mathbb Z_5[x]$.
However, to make sense of the question: Let $p(x)\in \mathbb Z[x]$ be a polynomial whose coefficients are all in $\{0, 1\}$. Then we can consider the reduced polynomials $p_n(x)\in\mathbb Z_n[x]$ for all $n\ge 2$. Question: If $p_2(x)$ is irreducible, does that imply that all $p_n(x)$ are irreducible?
The answer is "no", as can be seen from this counterexample: For $p(x):=x^2+x+1\in \mathbb Z[x]$ its reduction $p_2(x)$ is irreducible because any nontrivial factor must be linear, i.e. lead to a root in $\mathbb Z_2$. But $p_2(0)=p_2(1)=1$. On the other hand, $p_3(x) = (x-1)^2$ is reducible. |
Show that the solution to $\ddot x + \sin(x) = 0$ exists globally | From $\dfrac{\dot{x}^2}{2} - \cos x = C$ you get that $|\dot{x}| \le \sqrt{|C| + 1}$, and hence $|x| \le |x(0)| + |t| \sqrt{|C|+1}$. |
An n×n matrix B defines e^B by e^B=∑(B^j/j!). Let P be the characteristic polynomial of B. Then the matrix e^{P(B)} is | By the Cayley-Hamilton theorem, every matrix satisfies its own characteristic polynomial. Thus
$P(B) = 0; \tag 1$
it follows that
$e^{P(B)} = e^0 = I. \tag 2$ |
if $ S \neq T $ then $ \| \mathcal {X}_S - \mathcal {X}_T \|_\infty = 1$ | If $S \neq T$, then (without loss of generality), there is some $k \in S$ such that $k \not\in T$. Then since the sets $\{E_n\}$ are non-empty and disjoint, there is $x \in E_k$ such that $x \not \in E_n$ for any $n \in T$. At this $x$, we have $\mathcal X_S(x) = 1$ and $\mathcal X_T = 0$. Thus $$\lvert \mathcal X_S(x) - \mathcal X_T(x) \rvert = 1$$ and passing to the supremum, you will have $$\|\mathcal X_S - \mathcal X_T \|_\infty \ge \lvert \mathcal X_S(x) - \mathcal X_T(x) \rvert= 1.$$ (It is also easy to show the supremum can be no more than $1$ and thus conclude that $\|\mathcal X_S - \mathcal X_T \|_\infty = 1$).
Note that this is only true if $\| \cdot \|_\infty$ is the honest-to-goodness supremum. If it is the essential supremum (as is used in $L^\infty$), then there are unequal sets $A,B$ such that $\| \chi_A - \chi_B\|_{\infty} =0$. Indeed, $A = (-1,1)$ and $B = (-1,1) \cup \{ 5\}$ will serve as an example. Similarly, in your case, if $E_1$ has measure zero and $E_2$ has positive measure, then you can put $S = \{1,2\}$ and $T = \{2\}$ and you'll have $$\|\mathcal X_S - \mathcal X_T \|_\infty=0.$$ To fix this, you could require that all $E_n$ have positive measure, and then the statement continues to hold when using the essential supremum. |
What is the fiber of the path space fibration? | The definition in your question is that for pointed spaces $(X,x_0)$. More precisely we should write $P(X,x_0) = (X,x_0)^{(I,0)}$ = set of all basepoint-preserving maps $(I,0) \to (X,x_0)$ with compact-open topology for the pointed path space, $p : P(X,x_0) \to (X,x_0), p(u) = u(1)$, and $\Omega(X,x_0) = p^{-1}(x_0)$ for the pointed loop space. Both have as basepoint the constant path at $x_0$. Then $\Omega(X,x_0)$ is the fiber over the basepoint $x_0 \in X$.
You can do an analogous construction for unbased spaces $X$:
$PX = X^I$ = set of all maps $I \to X$ with compact-open topology is the free path space. The evaluation map $p : PX \to X, p(u) = u(1)$, is a fibration. Its fibers are the sets $p^{-1}(x) = \{u \in X^I \mid p(u) = u(1) = x \} =(X,x)^{(I,1)}$. The latter is homeomorphic to $P(X,x)$.
A third construction is the free loop space of a space $X$:
$$\mathcal L X = X^{S^1} .$$
It can be viewed as the unpointed version of $\Omega (X,x_0)$. There is a canonical embedding $\iota : \Omega (X,x_0) \to \mathcal L X$: Each $u \in \Omega (X,x_0)$ is a closed path $u : I \to X$ such that $u(0) = u(1) = x_0$, which determines a unique continuous $\hat u : I/\{0, 1\} \to X$, and via the identification $I/\{0, 1\} = S^1$ this gives us $\iota(u) \in \mathcal L X$.
Note that this construction also allows to identify $\Omega (X,x_0)$ with $(X,x_0)^{(S^1,*)}$. In fact, $\iota(\Omega (X,x_0)) = (X,x_0)^{(S^1,*)} \subset X^{S^1}$. |
why commutative integral with limit is important in real analysis? | Because often we know how to compute $\int f_n$ for each individual $n$, but we don't know how to compute $\int\lim_{n\to\infty}f_n$ directly. |
Unit tangent vector understanding | The calculation of the unit tangent vector can be found by using the formula
$$
T(t) = \frac{v(t)}{\|v(t)\|}
$$
That is to say that the vector is divided by its norm to arrive at the unit vector. An example is given in the link.
The derivative of the unit tangent vector $T$ with respect to the arc length $s$ can be found by using the so-called Frenet formulas: $dT/ds = \kappa N$, where $N$ is the unit normal vector, and $\kappa$ is the so-called curvature. |
PCA and multiplicity of eigenvalues | Imagine you have a circular Gaussian in two dimensions. PCA will capture the axis that explains the most variance, but in this case you can actually find infinitely many axes that have the same variance. This happens when you have eigenvalues with multiplicity greater than $1$. Now you do not have a unique solution. |
Times a particle detector gets stuck in an hour | By linearity of expectation, the expected number of times the detector gets stuck is the expected number of particles that arrive times the probability that the detector isn't stuck at any given time.
When a particle arrives, the detector gets stuck for $10$ seconds, and then on average it remains unstuck for $0.384^{-1}\mathrm s$. Thus the proportion of time it isn't stuck is
$$
\frac{0.384^{-1}}{10 + 0.384^{-1}}=\frac1{3.84+1}=\frac1{4.84}\;,
$$
and the expected number of times it gets stuck in an hour is
$$
3600\cdot0.384\cdot\frac1{4.84}\approx286\;.
$$ |
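A quick simulation sketch of the same model (Poisson arrivals at rate $0.384\,\mathrm{s}^{-1}$ with a fixed $10\,\mathrm{s}$ dead time, which is what the calculation above assumes):

```python
import random

def stuck_count(rate=0.384, dead=10.0, horizon=3600.0, seed=0):
    t, count, rng = 0.0, 0, random.Random(seed)
    while True:
        t += rng.expovariate(rate)   # wait for the next arriving particle
        if t >= horizon:
            return count
        count += 1                   # it gets stuck ...
        t += dead                    # ... and is blind for the dead time

runs = [stuck_count(seed=s) for s in range(500)]
print(sum(runs) / len(runs))         # close to 3600*0.384/4.84 = 285.6
```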
Is the Empty Set a Manifold | If you decided that $\emptyset$ is not a manifold, you'd have to write things like
the boundary of a manifold with boundary is a manifold unless it is empty
or
The preimage of a regular value under a smooth map is a smooth submanifold, unless it is empty
or you'd have to come up with a funny way of stating the usual proof that a volume form $\nu$ over a closed manifold $M$ is not exact, which usually uses Stokes' theorem asserting that $\int_Md\eta=\int_{\partial M}\eta$, where in that case $\partial M$ is empty.
In all, you'd litter everything with lots of «unless it is empty»... |
Asymptotic expansion of $n(\sqrt[n]{a} - 1)$, if $a > 0$, to terms of $O\Big(\frac{1}{n^3}\Big)$ | Hint:
$$
\sqrt[n]{a}=e^{\frac1n\log a}=1+\frac1n\,\log a+\frac1{2!}\,\frac1{n^2}(\log a)^2+\dots
$$ |
Mathematically determine ways to empty a box of toys | Let $a_n$ be the number of ways to empty the box of $n$ toys. Suppose that $n\ge 5$. If we first remove $2$ toys, there are $a_{n-2}$ ways to finish emptying the box. If we first remove $3$ toys, there are $a_{n-3}$ ways to finish emptying the box. And if we first remove $4$ toys, there are $a_{n-4}$ ways to finish emptying the box. Thus,
$$a_n=a_{n-2}+a_{n-3}+a_{n-4}\tag{1}$$
for $n\ge 5$. In fact, the recurrence $(1)$ is valid for $n\ge 4$ if we set $a_0=1$. (This actually makes some sense: if $n=0$ we empty the box in one ‘move’ by removing nothing at all.) Solving for a closed form is going to be messy, since it will require factoring a quartic, but the recurrence itself makes it easy to calculate $a_n$ efficiently for reasonable values of $n$. |
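For instance, a few lines suffice (a sketch; it also sets $a_1=0$, since a box with one toy cannot be emptied by moves of $2$, $3$ or $4$):

```python
def ways(n):
    # a_0 = 1, a_1 = 0, a_2 = a_3 = 1, then a_n = a_{n-2} + a_{n-3} + a_{n-4}
    a = [1, 0, 1, 1]
    for m in range(4, n + 1):
        a.append(a[m - 2] + a[m - 3] + a[m - 4])
    return a[n]

print([ways(n) for n in range(11)])   # [1, 0, 1, 1, 2, 2, 4, 5, 8, 11, 17]
```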
Sum of all of the numbers in row n of Pascal’s triangle? Explain why this happens, in terms of the way the triangle is formed... | For any subset of a set of $n$ elements, and for any element of that set, the subset either contains that element or not. Hence, there are $2^n$ different subsets for any set of $n$ elements.
This is of course the sum, with $i$ ranging from $0$ to $n$, of the number of different subsets with $i$ elements, of which there are ${n \choose i}$, which we know as the $i$-th entry of the $n$-th row in the triangle.
So: $\sum_{i=0}^n {n \choose i} =2^n$, and therefore the sum of all entries in row $n$ equals $2^n$ |
Expected Value of function of two random variable | By definition,
$$
\mathrm E(X)=\iint_{x\geqslant0,z\geqslant0}\frac{x(1+z)}{c+x(\alpha+\beta z)}\mathrm e^{-x}\frac{z^{N-1}}{(N-1)!}\mathrm e^{-z}\mathrm dx\mathrm dz,
$$
which is equivalent to
$$
\mathrm E(X)=\iint_{x\geqslant0,x_i\geqslant0}\frac{x(1+x_1+\cdots+x_N)}{c+x(\alpha+\beta(x_1+\cdots+x_N))}\mathrm e^{-x-x_1-\cdots-x_N}\mathrm dx\mathrm dx_1\cdots\mathrm dx_N.
$$ |
Having problem in finding values of variables | In many cases, you can choose one variable as a free parameter and then calculate the value of the other variable in terms of that parameter, which gives infinitely many solutions. |
$a_1=3$ and $a_{n+1}=\dfrac{2a_n}{3}+\dfrac{4}{3a_n^2}$. Show that $4^{1/3} \le a_n$ for all $1\le n$. | Let $f(x)=\dfrac{2x}{3}+\dfrac{4}{3x^2}$, so that $a_{n+1}=f(a_n)$. Then $f'(x)=\dfrac{2}{3}-\dfrac{8}{3x^3}=0 \implies x=4^{\frac{1}{3}}$; it is easy to verify this is a minimum point when $x>0$, so $f_{\min}=f\big(4^{\frac{1}{3}}\big)=4^{\frac{1}{3}}$.
Since $a_1=3>4^{\frac{1}{3}}$ and $a_{n+1}=f(a_n)\ge f_{\min}$ for every $n$, we get $a_n \ge 4^{\frac{1}{3}}$ for all $n$. |
Conditional Probability of conditionally independent events? | No, conditional independence does not imply independence. For example, if Alice and Bob are neighbors then the event that Alice wears a raincoat is conditionally independent of the event that Bob wears a raincoat given that it is raining outside. But, while these events are conditionally independent, they are not independent.
Without further information, I don't see a way to further simplify $P(B \mid C)$. |
Relation between line integral of scalar function and surface integral | Let $\vec F$, the open surface $S$ and its boundary $\partial S$ meet the conditions of Stokes' Theorem. Then, we have
$$\oint_{\partial S} \vec F\cdot \hat t\,d\ell=\int_S \nabla\times\vec F\cdot \hat n\,
dS$$
Now, let $\vec F=\hat x_i f$. Then, $\nabla \times \vec F=\nabla f\times \hat x_i$ and we have
$$ \oint_{\partial S}(\hat x_if) \cdot\,\hat t\,d\ell= \int_S (\nabla f\times \hat x_i)\cdot \hat n\, dS \tag1$$
Next, note that we have
$$\oint_{\partial S}(\hat x_if) \cdot\,\hat t\,d\ell=\hat x_i \cdot \oint_{\partial S}f\,d\vec\ell\tag2$$
and using the scalar triple product identity we have
$$\begin{align}
\int_S (\nabla f\times \hat x_i)\cdot \hat n\, dS=\hat x_i\cdot \int_S \hat n\times \nabla f\, dS\tag3
\end{align}$$
Substituting $(2)$ and $(3)$ into $(1)$ yields
$$\hat x_i \cdot \oint_{\partial S}f\,d\vec\ell=\hat x_i\cdot \int_S \hat n\times \nabla f\, dS\tag4$$
Since $(4)$ is true for all $i$, then we conclude that
$$\oint_{\partial S}f\,d\vec\ell=\int_S \hat n\times \nabla f\, dS$$
as was to be shown! |
Distance of point $P$ from an ellipse | The system
$$ \left\{\begin{array}{rcl}\frac{x^2}{a^2}+\frac{y^2}{b^2}&=&1\\ (x-p)^2+(y-q)^2&=&R^2\end{array}\right.$$
describes the intersections between the ellipse and a circle with radius $R$ centered at $(p,q)$.
By eliminating the $y$-variable we get a quartic in $x$, whose discriminant is a quartic polynomial in $R^2$.
It turns out that the squared distance of $(p,q)$ from the ellipse is given by
a root of
$$ a^8 b^4-2 a^6 b^6+a^4 b^8-4 a^6 b^4 p^2+6 a^4 b^6 p^2-2 a^2 b^8 p^2+6 a^4 b^4 p^4-6 a^2 b^6 p^4+b^8 p^4-4 a^2 b^4 p^6+2 b^6 p^6+b^4 p^8-2 a^8 b^2 q^2+6 a^6 b^4 q^2-4 a^4 b^6 q^2+6 a^6 b^2 p^2 q^2-10 a^4 b^4 p^2 q^2+6 a^2 b^6 p^2 q^2-6 a^4 b^2 p^4 q^2+2 a^2 b^4 p^4 q^2-2 b^6 p^4 q^2+2 a^2 b^2 p^6 q^2+2 b^4 p^6 q^2+a^8 q^4-6 a^6 b^2 q^4+6 a^4 b^4 q^4-2 a^6 p^2 q^4+2 a^4 b^2 p^2 q^4-6 a^2 b^4 p^2 q^4+a^4 p^4 q^4+4 a^2 b^2 p^4 q^4+b^4 p^4 q^4+2 a^6 q^6-4 a^4 b^2 q^6+2 a^4 p^2 q^6+2 a^2 b^2 p^2 q^6+a^4 q^8-2 a^8 b^2 z+2 a^6 b^4 z+2 a^4 b^6 z-2 a^2 b^8 z+6 a^6 b^2 p^2 z-8 a^4 b^4 p^2 z+4 a^2 b^6 p^2 z-2 b^8 p^2 z-6 a^4 b^2 p^4 z+10 a^2 b^4 p^4 z-6 b^6 p^4 z+2 a^2 b^2 p^6 z-4 b^4 p^6 z-2 a^8 q^2 z+4 a^6 b^2 q^2 z-8 a^4 b^4 q^2 z+6 a^2 b^6 q^2 z+4 a^6 p^2 q^2 z-6 a^4 b^2 p^2 q^2 z-6 a^2 b^4 p^2 q^2 z+4 b^6 p^2 q^2 z-2 a^4 p^4 q^2 z+2 a^2 b^2 p^4 q^2 z-6 b^4 p^4 q^2 z-6 a^6 q^4 z+10 a^4 b^2 q^4 z-6 a^2 b^4 q^4 z-6 a^4 p^2 q^4 z+2 a^2 b^2 p^2 q^4 z-2 b^4 p^2 q^4 z-4 a^4 q^6 z+2 a^2 b^2 q^6 z+a^8 z^2+2 a^6 b^2 z^2-6 a^4 b^4 z^2+2 a^2 b^6 z^2+b^8 z^2-2 a^6 p^2 z^2+4 a^4 b^2 p^2 z^2-8 a^2 b^4 p^2 z^2+6 b^6 p^2 z^2+a^4 p^4 z^2-6 a^2 b^2 p^4 z^2+6 b^4 p^4 z^2+6 a^6 q^2 z^2-8 a^4 b^2 q^2 z^2+4 a^2 b^4 q^2 z^2-2 b^6 q^2 z^2+6 a^4 p^2 q^2 z^2-10 a^2 b^2 p^2 q^2 z^2+6 b^4 p^2 q^2 z^2+6 a^4 q^4 z^2-6 a^2 b^2 q^4 z^2+b^4 q^4 z^2-2 a^6 z^3+2 a^4 b^2 z^3+2 a^2 b^4 z^3-2 b^6 z^3-2 a^4 p^2 z^3+6 a^2 b^2 p^2 z^3-4 b^4 p^2 z^3-4 a^4 q^2 z^3+6 a^2 b^2 q^2 z^3-2 b^4 q^2 z^3+a^4 z^4-2 a^2 b^2 z^4+b^4 z^4,$$
not really pleasant but still manageable with the help of a CAS. |
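For instance, a sympy sketch of that computation (the concrete ellipse and point are made up, and one should sanity-check which root is the geometrically correct one, e.g. against a direct minimization):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)   # z plays the role of R^2
a, b, p, q = 2, 1, 3, 1                    # made-up semi-axes and external point

ellipse = x**2/a**2 + y**2/b**2 - 1
circle = (x - p)**2 + (y - q)**2 - z

quartic = sp.resultant(ellipse, circle, y)  # eliminate y: a quartic in x
disc = sp.discriminant(quartic, x)          # a quartic polynomial in z

# the squared distance shows up among the positive real roots
roots = [r for r in sp.Poly(disc, z).nroots() if r.is_real and r > 0]
print(sp.sqrt(min(roots)))                  # candidate distance from (3, 1)
```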
For a finite complement topology , to which point or points does the sequence converge? | Perhaps the following is what you tried to convey in your question:
Take any point $\;r\in\Bbb R\;$ and let $\;U_r\;$ be any open neighborhood of it, which means that $\;X:=\Bbb R\setminus U_r\;$ is finite and thus there exists $\;M\in\Bbb N\;$ such that $\;m>M\implies \frac1m\notin X\;$, which means that
$$\forall\,m>M\,,\,\,\frac1m\in U_r\implies r=\lim_{n\to\infty}\frac1n$$ |
Picard's Theorem - Lipschitz Condition | $y'=3y^{2/3},y(0)=0$ can be solved separating variables
$\dfrac{dy}{dx}=3y^{2/3}\rightarrow \dfrac{dy}{3y^{2/3}}=dx$
Integrating both sides: $y^\frac13=x+c\rightarrow y=(x+c)^3$
$c=0$ for the initial condition, so $y=x^3$
The solution $y=0$ is obvious: $0=3\cdot 0^\frac23$ |
Conditional independence of two random variables given a third random variable | Let $X:=\epsilon_1+Z$ and $Y:=\epsilon_2+Z$, where $\epsilon_1$ and $\epsilon_2$ are independent $N(0,1)$ (independent of $Z$) and $\mathsf{P}(Z=1)=\mathsf{P}(Z=-1)=1/2$.
$$
\mathsf{P}(X\le 0\mid Z)=\mathsf{P}(\epsilon_1+Z\le 0\mid Z)=\Phi(-Z).
$$
But $\mathsf{P}(X\le 0)=1/2$.
Conditional independence means that for all Borel sets $A$ and $B$,
$$
\mathsf{P}(X\in A, Y\in B\mid Z)=\mathsf{P}(X\in A\mid Z)\mathsf{P}(Y\in B\mid Z).
$$ |
If $M$ is Riemannian, then $\kappa_f \oplus f^*TN \cong TM$, where $\kappa_f$ is built out of kernels of the $Df_x$? | You have $TM = \kappa_f \oplus \kappa_f^{\perp}$ so it is enough to show that $\kappa_f^{\perp}$ is isomorphic to $f^{*}(TN)$. Then note that the map $\Phi \colon \kappa_f^{\perp} \rightarrow f^{*}(TN)$ given by
$$ \Phi(p, v) = (p, df_p(v)) $$
is a bundle map which is an isomorphism on each fiber (as $df_p|_{(\kappa_f)_p^{\perp}} \colon (\kappa_f)_p^{\perp} \rightarrow T_{f(p)}N$ is an injective linear map between two vector spaces of the same dimension). |
Calculate discount needed in order to achieve required profit margin. (algebra with fractions) | First, multiply on both sides by the denominator to eliminate the fraction:
$$p-d-c = m(p-d) = mp - md$$
Then move all the terms containing $d$ to one side, and everything else to the other side:
$$md - d = mp -p + c$$
Now factor out the $d$.
$$(m-1)d = mp - p + c$$
Finally, divide by the factor to get an expression for $d$.
$$d = \frac{mp - p + c}{m-1}$$ |
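As a quick check of the formula (a sketch with made-up numbers):

```python
def required_discount(p, c, m):
    # discount d such that the margin (p - d - c) / (p - d) equals m
    return (m*p - p + c) / (m - 1)

p, c, m = 100.0, 60.0, 0.30          # made-up price, cost and target margin
d = required_discount(p, c, m)
print(d, (p - d - c) / (p - d))      # 14.2857..., 0.3
```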
Probablity of finding A or B or both A and B | Hint: The probability that $A$, $B$, or both $A$ and $B$ occur is
$$
P(A\text{ and }B)+P(A\text{ and not }B)+P((\text{not }A)\text{ and }B).
$$ |
Questions on limits of functions | (1) The definition of continuity usually says $0<|z-p|<\delta$ but one could instead say $|z-p|<\delta$ with $z\neq p$.
(2) In the complex setting $-\infty$ is usually undefined. A neighborhood of $+\infty$ is an open set containing all $|z|>c$ for some $c$.
(3) If for all c there is a neighborhood U of p with $|f(z)|>c$ in U then $f(z)\to\infty$. |
Integral domains such that all proper factor rings are finite | Such rings are called residually finite, or rings with the finite norm property (FNP). They have been studied at length; see, e.g., the paper reviewed below.
Levitz, Kathleen B.; Mott, Joe L. Rings with finite norm property.
Canad. J. Math. 24 (1972), 557--565.
Let $A$ be a ring with $A^2 \ne 0 ,$ and $A^+$ the additive group
of $A$ . If each non-zero homomorphic image of $A$ is finite, then
$A$ is said to be a ring with finite norm property (FNP ring). K. L.
Chew and S. Lawn studied FNP rings with identity, which they called
residually finite rings [same J. 22 (1970), 92--101; MR0260773 (41 #5396)]. In the paper under review, the authors extend the results of Chew and Lawn to arbitrary FNP rings. They also prove the following
results:
$(1)\ $ If $A$ is an FNP ring then $A^+$ is torsion and bounded, or
torsion-free and reduced, or torsion-free and divisible. Henceforth,
$A$ will be a commutative integral domain with $1$ and with quotient
field $K$ .
$(2)\ $ Let $L$ be a finite extension of $K$; if $A$ is an FNP ring,
then so is every intermediate ring of $L/A$ .
$(3)\ $ Let $A'$ be the integral closure of $A$ in $K$ ; then, $A$ is
an FNP ring if and only if $A'$ is a Dedekind domain and $A_P$ is an
FNP ring for every maximal ideal $P$ .
$(4)\ $ Let $K$ be of characteristic $0,$ then, every subring of $A$
is an FNP ring iff $K$ is a finite extension of the field of rational
numbers.
$(5)\ $ Let $K \ne A$ be of prime characteristic; then, every subring
of $A$ is an FNP ring iff $K$ is a finite extension of some $F(x),$
where $F$ is the prime field of $K$ and $x$ is transcendental over $F$.
Review by H. Tominaga (AMS MR 45 #6872) |
Trapezoid Rule - Number of Points | You know that the error is lower than
$$
\frac{\sup_{[0,1]}|f''|}{12n^2}
$$with $n$ subintervals. Now just compute the derivative.
You find that $\sup_{[0,1]}|f''| = \sup_{[0,1]} (2+4x^2)e^{x^2}=6e$ and then
$$
\frac{\sup_{[0,1]}|f''|}{12n^2} < \epsilon \iff n > \sqrt\frac e{2\epsilon}\simeq 10^3
$$ |
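To see the bound in action, a sketch (it takes the tolerance $\epsilon$ that makes $n \simeq 10^3$; the reference value is just a much finer trapezoid run):

```python
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (f(a)/2 + sum(f(a + i*h) for i in range(1, n)) + f(b)/2)

f = lambda x: math.exp(x**2)
n = 1000
approx = trapezoid(f, 0.0, 1.0, n)
reference = trapezoid(f, 0.0, 1.0, 200_000)     # stand-in for the exact integral
bound = 6 * math.e / (12 * n**2)
print(abs(approx - reference), bound)           # the actual error sits below the bound
```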
Meaning of statement: Manifolds M,N "Normal to each Other", and with Interior(M) transverse to P | Two things: For (1), the phrase "$M$ is normal to $N$ along its boundary" is slightly ambiguous about whose boundary we're talking about, except that $M$ is explicitly defined as having a boundary so I think you're right about that. But for (2), note that it's stronger to say that two submanifolds of complementary codimension intersect transversely than to say that they intersect in points. Remember that a transverse intersection is one where the local model is what you'd expect; a standard counterexample is the pair of curves $y=0$ and $y=x\cdot \sin(1/x)$ in $\mathbb{R}^2$. |
Simplifying $\frac{\sin40^{\circ}-\sqrt{3}\cos40^{\circ}}{\sin10^{\circ}\cos10^{\circ}}$ | $\displaystyle B = \frac{\sin(40)-\sqrt{3}\cos(40)}{\sin(10)\cos(10)}$
$\displaystyle = \frac{2(\frac{1}{2}\sin(40)-\frac{\sqrt{3}}{2}\cos(40))}{\sin(10)\cos(10)}$
$\displaystyle = \frac{2(\sin(30)\sin(40)-\cos(30)\cos(40))}{\sin(10)\cos(10)}$
$\displaystyle = \frac{-2\cos(70)}{\sin(10)\cos(10)}$
$\displaystyle = \frac{-4\cos(70)}{\sin(20)}$
Now, remember that $\cos(90-x)=\sin(x)$?
Yeah me neither lol.
$=\boxed{-4}$ |
How to imagine a plane defined by Cartesian Plane Equation? | The vector $\mathbf{a}=(A,B,C)$ is the normal vector of the plane, and the equation says that the dot product of the normal vector $\mathbf{a}$ and all vectors from the origin to a point on the plane is a constant given by $-D$. Let $\mathbf{x}=(x,y,z)$, then we have:
$$\mathbf{a}\cdot\mathbf{x}=-D$$
If the normal vector $\mathbf{a}$ is normalized to length 1, then $|D|$ is the distance of the plane from the origin. |
Does the $\lim_{(x,y) \to (\infty,\infty)} ({x+y})/({x^2 - xy + y^2})$ exists? | Hint: $\left|\dfrac{x+y}{x^2-xy+y^2}\right| \le \dfrac{2|x+y|}{x^2+y^2}\le \dfrac{4}{|x+y|}$, using $x^2 -xy + y^2 \ge \dfrac{x^2+y^2}{2}$ for the first inequality and the Cauchy-Schwarz inequality (in the form $(x+y)^2 \le 2(x^2+y^2)$) for the second. |
Computing $\sum_{i=2}^{n-1} 1$ and $\sum_{i=2}^{n-1} i$. | For a). You go from $2$ to $n-1$ and add each time the value $1$. Thus you are summing up $$(n-1)-1$$ terms each equal to $1$. Therefore $$\sum_{i=2}^{n-1}1=(n-2)\cdot1=n-2$$ (plug in some values of $n$, e.g. $n=3$, $n=4$, etc., to verify that this is correct).
For b). One way to do it is to use the well-known formula for the sum of the first $n$ consecutive integers, which equals $n(n+1)/2$. Now, you can write the given sum as $$\sum_{i=2}^{n-1}i=\left(\sum_{i=2}^{n-1}i\right)+1+n-(1+n)=\sum_{i=1}^{n}i-(1+n)$$ and use the above formula to obtain that $$\sum_{i=2}^{n-1}i=\sum_{i=1}^{n}i-(1+n)=\frac{n(n+1)}{2}-(n+1)=(n+1)\left(\frac{n}{2}-1\right)$$ |
Find a Möbius transformation that maps part of a circle to the first quadrant | With the question as posed, the answer is not possible by Möbius transformations: you are correct. I think that the question should mean "Im>0", rather than "Re>0". The below runs through that calculation.
Consider the map
$$ T(z) = \frac{z+6}{4-z}. $$
Where did I get this from? The two "corners" of the domain are at $z=4$ and $z=-6$. I want to send one to $0$ and one to $\infty$, the "corners" of the region I wish to map to.
Notice that the real axis is a diameter of the circle, therefore they meet at right angles, just as the real and imaginary axes do. Since Möbius transformations are conformal, these angles will be preserved. Further, $T$ clearly maps the real axis (with $\infty$) to itself since the coefficients are all real. Therefore it should map the circle through $4$ and $-6$ at right angles to the real axis to the circle through $0$ and $\infty$ at right angles to the real axis, i.e. the imaginary axis.
Next, I need to check $T$ maps to the right parts of the axes. Clearly $T(z)>0$ for $-6<z<4$, so it maps the diameter to the positive real axis. If $z=-1+5e^{it}$, then
$$ T(z) = \frac{-1+5e^{it}+6}{4-(-1+5e^{it})} = \frac{5(1+e^{it})}{5(1-e^{it})} = \frac{(1+e^{it})(1-e^{-it})}{(1-e^{it})(1-e^{-it})} = \frac{2i\sin{t}}{2(1-\cos{t})}, $$
which is clearly both purely imaginary and with imaginary part larger than zero for $0<t<\pi$.
The last thing to check is that the interior has ended up in the right place. I'm sure you can manage that yourself: just check any one point inside the circle goes to one in the first quadrant. Continuity gives you the rest. |
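A numeric spot-check of all three steps (a sketch):

```python
import numpy as np

T = lambda z: (z + 6) / (4 - z)

t = np.linspace(0.1, np.pi - 0.1, 7)
w = T(-1 + 5 * np.exp(1j * t))       # image of the upper half of the circle |z+1| = 5
print(np.allclose(w.real, 0), (w.imag > 0).all())   # True True: positive imaginary axis

print(T(-1 + 0j))   # a point of the diameter -> positive real axis (here 1+0j)
print(T(-1 + 1j))   # an interior point -> first quadrant (about 0.92+0.38j)
```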
Simple representation for continuous vector fields | A vector field in 3-space can be represented as a function that takes a point in 3-space, and returns a 3-d vector. Generally, for each object, it would have such a function representing the force it exerts on every point, and so to calculate the acceleration of an object, you'd have to sum over all other objects to get the total force (which is then a vector field), passing in the position of the object, and divide by the mass of the object.
If you have lots of objects, you'll have to approximate it somehow. For example, if your object is far away from a point cloud, you can approximate the force from that point cloud by a single vector from its center of mass. |
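A minimal sketch of that representation (the names and the inverse-square force law are illustrative choices, not something fixed by the question):

```python
from dataclasses import dataclass

Vec3 = tuple[float, float, float]

@dataclass
class Body:
    pos: Vec3
    mass: float

def force_on(body: Body, src: Body, G: float = 1.0) -> Vec3:
    # the force vector field of src, evaluated at body's position
    d = tuple(s - b for s, b in zip(src.pos, body.pos))
    r2 = sum(c * c for c in d)
    k = G * src.mass * body.mass / r2**1.5   # inverse-square, pointing toward src
    return tuple(k * c for c in d)

def acceleration(body: Body, others: list[Body]) -> Vec3:
    total = [0.0, 0.0, 0.0]
    for o in others:                          # sum the fields of all other objects
        total = [a + b for a, b in zip(total, force_on(body, o))]
    return tuple(c / body.mass for c in total)   # divide by the mass: F = ma

earth = Body((0.0, 0.0, 0.0), 1000.0)
probe = Body((10.0, 0.0, 0.0), 10.0)
print(acceleration(probe, [earth]))   # (-10.0, 0.0, 0.0)
```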
Is this condition on three polynomials sufficient for them not being coprime. | The condition is not sufficient. Take for example
\begin{align*}
f&=X^2\\
g&=X^2+1\\
h&=-2X^2-1\\
s&=t=u=X,
\end{align*}
then clearly
\begin{align*}
1&\le \deg s < \deg f\\
1&\le \deg t < \deg g\\
1&\le \deg u < \deg h
\end{align*}
and
$$ftu + gsu + hst = X^2(X^2+X^2+1-2X^2-1)=0,$$
yet $f,g,h$ are coprime in $R[X]$. |
Does $a_0=0, a_1=1, a_{n+2}=2a_{n+1}-3a_n$ ever return to 1 or -1 for $n>3$? | Method 1 - Bilu-Hanrot-Voutier theorem on Lucas sequences.
First some definitions,
Lucas pair - A Lucas pair is a pair $(\alpha,\beta)$ of algebraic integers such that $\alpha + \beta$ and $\alpha\beta$ are non-zero coprime rational integers and $\frac{\alpha}{\beta}$ is not a root of unity.
Lucas numbers - Given a Lucas pair, the corresponding sequence of Lucas numbers is the sequence defined by
$$u_n = u_n(\alpha,\beta) = \frac{\alpha^n - \beta^n}{\alpha - \beta}, \quad \text{ for } n = 0, 1, 2, \ldots$$
Primitive divisor - Given a Lucas pair $(\alpha, \beta)$ and corresponding Lucas numbers $(u_n)$, a primitive divisor for $u_n$ is a prime divisor $p$ of $u_n$ such that
$$p | u_n \quad\text{ and }\quad p \not| (\alpha-\beta)^2 u_1 u_2\dots u_{n-1}$$
In 2001, Bilu, Hanrot and Voutier proved$\color{blue}{^{[1]}}$ that, in general, for any sequence of Lucas numbers $(u_n)$ and any $n > 30$, $u_n$ has a primitive divisor. As a consequence, $|u_n| > 1$ for $n > 30$.
For our sequence at hand, we have
$$a_n = \frac{\alpha^n - \beta^n}{\alpha - \beta}\quad\text{ with }\quad
\begin{cases}
\alpha &= 1 + \sqrt{-2}\\
\beta &= 1 - \sqrt{-2}
\end{cases}
$$
and it is easy to check this pair of $(\alpha,\beta)$ is a Lucas pair.
This implies $|a_n| \ne 1$ for $n > 30$.
By brute force, one can verify that for the remaining $n \le 30$, $|a_n| = 1$ when and only when $n = 1,3$. From this, we can conclude $|a_n| \ne 1$ except for $n = 1, 3$.
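The brute-force step is a few lines (a sketch):

```python
vals = [0, 1]                                    # a_0, a_1
for _ in range(2, 31):
    vals.append(2 * vals[-1] - 3 * vals[-2])     # a_{n+2} = 2 a_{n+1} - 3 a_n
print([n for n, v in enumerate(vals) if abs(v) == 1])   # [1, 3]
```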
Method 2 - Modular arithmetic.
Since OP asks for an easier approach, here is an elementary one.
The idea is extracted from a paper by B.Sury$\color{blue}{^{[2]}}$ on a
related equation $x^2 + 2 = y^n \color{blue}{^{[3]}}$.
The OP has handled the case $a_n = -1$ already and it is simple to check
$a_n \ne \pm 1$ for small $n$ other than $n = 1,3$. We will limit our proof to the case $n > 5$
and $a_n = 1$.
Let's say there is a $n > 5$ such that
$$a_n = \sum_{k=0}^{\left\lfloor\frac{n-1}{2}\right\rfloor} (-2)^k \binom{n}{2k+1} = 1$$
Comparing the parity of both sides, we get $n \equiv 1 \pmod 2$. This implies $n$ is an odd number.
Rewrite $n$ as $2^a b + 1$ where $a > 0$, $b$ odd. We will first look at everything modulo $2^{a+1}$.
Let $\nu_2 : \mathbb{Q} \to \mathbb{Z}$ be the 2-adic order of rational numbers.
Notice for $k \ge 3$,
$$\nu_2(k) \le \lfloor\log_2 k \rfloor < k - 1
\quad\implies\quad
\nu_2\left(\frac{2^k}{2k}\right) > 0$$
This leads to
$$\nu_2\left((-2)^k \binom{n}{2k+1}\right)
=\nu_2\left(\frac{(-2)^k n(n-1)}{(2k+1)(2k)}\binom{n-2}{2k-1}\right) >
\nu_2(n-1) = a$$
whenever $k \ge 3$. As a result,
$$1 = a_n \equiv \sum_{k=0}^2 (-2)^k \binom{n}{2k+1}
= n - 2\binom{n}{3} + 4\binom{n}{5}
\pmod{2^{a+1}}\tag{*1}$$
It is not hard to see
$$n \equiv 2^a + 1 \pmod {2^{a+1}}\quad\text{ and }\quad
2\binom{n}{3} = \frac{n(n-1)(n-2)}{3}\equiv 2^a \pmod {2^{a+1}}$$
For the third term,
$$\begin{align}
\nu_2\left(4\binom{n}{5}\right)
&= \nu_2\left(\frac{n(n-1)(n-2)(n-3)(n-4)}{30}\right)
= \nu_2\left(\frac{(n-1)(n-3)}{2}\right)\\
&= \nu_2\left(2^a b(2^{a-1}b - 1)\right)
\quad\rightarrow\quad \begin{cases} = a,& a > 1\\ > a, &a = 1\end{cases}
\end{align}\\
{\Large\Downarrow}\\
4\binom{n}{5} \equiv
\begin{cases} 2^a,& a > 1\\ 0,& a = 1\end{cases}
\pmod {2^{a+1}}
$$
Combine these three terms, $(*1)$ becomes
$$1
\equiv 1 + 2^a + 2^a + \begin{cases}2^a,& a > 1\\0, & a = 1\end{cases}
\equiv \begin{cases} 1 + 2^a, & a > 1\\1, & a = 1\end{cases}
\pmod {2^{a+1}}
$$
This forces $a = 1$. As a result, $n$ has the form $2^c d + 3$ for some $c > 1$ and $d$ odd.
Next, let us look at everything modulo $2^{c+1}$. Once again, we can show that
for $k \ge 3$,
$$\nu_2(k(k-1)) < k - 1\quad\implies\quad \nu_2\left(\frac{2^k}{2k(k-1)}\right) > 0$$
For such $k$, we have
$$\begin{align}\nu_2\left((-2)^k\binom{n}{2k+1}\right)
&= \nu_2\left(\frac{(-2)^k n (n-1)(n-2)(n-3)}{(2k+1)(2k)(2k-1)(2k-2)}\binom{n-4}{2k-3}\right)\\
&= \nu_2\left(\left(\frac{2^k}{2k(k-1)}\right)\left(\frac{(2^c d + 2)(2^c d)}{2}\right) \binom{n-4}{2k-3}\right) > c
\end{align}
$$
As a consequence,
$$1 = a_n \equiv \sum_{k=0}^2 (-2)^k \binom{n}{2k+1}
= n - 2\binom{n}{3} + 4\binom{n}{5}
\pmod{2^{c+1}}\tag{*2}$$
With a little bit of algebra, one can check that
$$\begin{array}{rclcl}
n &=& 2^c d + 3
&\equiv& 2^c + 3 & \pmod {2^{c+1}}\\
2\binom{n}{3} &=&
\frac13 (2^c d + 3)(2^c d + 2)(2^c d + 1)
&\equiv& 2^c + 2 & \pmod {2^{c+1}}\\
4\binom{n}{5} &=&
\frac{1}{30}(2^c d + 3)(2^c d + 2)(2^c d + 1)(2^c d)(2^c d -1)
&\equiv& 2^c & \pmod {2^{c+1}}
\end{array}$$
This leads to a contradiction that
$$1 \equiv (2^c + 3 ) - (2^c + 2) + 2^c \equiv 2^c + 1 \pmod {2^{c+1}}$$
As a result, there is no $n > 5$ such that $a_n = 1$.
Notes
$\color{blue}{[1]}$ Y. Bilu, G. Hanrot, P.M. Voutier, Existence of primitive divisors of Lucas and Lehmer numbers, J. Reine Angew. Math. 539 (2001), 75-122.
(a copy for an earlier version can be found here)
$\color{blue}{[2]}$ B. Sury, On the Diophantine equation $x^2 + 2 = y^n$,
Arch. Math. (Basel) 74 (2000), 350–355.
$\color{blue}{[3]}$ Equation of the form
$x^2 + C = y^n$
is a special case of something known as Lebesgue-Nagell equation. Notice
$$a_n = \frac{\alpha^n - \beta^n}{\alpha - \beta} = \pm 1
\quad\implies\quad
\begin{cases}
\alpha^n &= x \pm \sqrt{-2}\\
\beta^n &= x \mp \sqrt{-2}
\end{cases}
\quad\text{ for some } x \in \mathbb{Z}\\
{\Large\Downarrow}\\
x^2 + 2 = (x \pm \sqrt{-2})(x \mp \sqrt{-2}) = \alpha^n\beta^n = 3^n$$
Every solution of $a_n = \pm 1$ corresponds to a solution for the equation $x^2 + 2 = y^n$. |
First year french college. Is there a finite set of points of the plane A. Finite set of point A such as... | Answer, to be proved in detail:
Such a finite set of points $A$ can't exist.
Denote by $P_1, P_2, P_3$ three points of $A$ whose circumscribed circle $\mathcal C$ achieves the minimum radius $R$ among all circumscribed circles. Let $O$ be its center. By hypothesis $O \in A$.
Claim to be proven: one of the triangles $(O \ P_1 \ P_2)$, $(O \ P_2 \ P_3)$ or $(O \ P_1 \ P_3)$ has a circumscribed circle of radius strictly less than $R$, a contradiction. |
Variance of an integral of Brownian Motion | Yes, it is a consequence of Fubini's theorem. By Fubini, we have
$$\text{var}(I(T)) = \mathbb{E} \left( \int_0^T W_u \, du \int_0^T W_v \, dv \right) = \int_0^T \int_0^T \underbrace{\mathbb{E}(W_u W_v)}_{\text{cov}(W_u,W_v)} \, du \, dv.$$
Using again Fubini's theorem, we find
$$\begin{align*} \int_0^T \int_0^T \text{cov}(W_u,W_v) \, du \, dv &= \int_0^T \int_0^v\text{cov}(W_u,W_v) \, du \, dv + \int_0^T \int_v^T \text{cov}(W_u,W_v) \, du \, dv \\ &= \int_0^T \int_0^v\text{cov}(W_u,W_v) \, du \, dv + \int_0^T \int_0^u \text{cov}(W_u,W_v) \, dv \, du\\ &= 2 \int_0^T \int_0^r \text{cov}(W_r,W_t) \, dt \, dr \end{align*}$$
Combining both identities and using $\text{cov}(W_r,W_t)=\min(r,t)=t$ for $0 \le t \le r$, we get $$\text{var}(I(T)) = 2\int_0^T \int_0^r t \, dt \, dr = \int_0^T r^2 \, dr = \frac{T^3}{3},$$ which finishes the proof. |
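A Monte Carlo sanity check of the result (a sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, paths = 2.0, 2000, 20000
dt = T / n
# Brownian paths as cumulative sums of independent N(0, dt) increments
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(paths, n)), axis=1)
I = W.sum(axis=1) * dt               # Riemann-sum approximation of the integral
print(I.var(), T**3 / 3)             # ~2.67 vs 2.666...
```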
Meaning of Exact Transformation and $K$-Automorphism in the context of Ergodic Theory/Mixing | These definitions can be found in this article, page 11.
$\ $
Definition: A transformation $T$ of $(X, \mathcal B, \mu)$ is exact if $\cap^{\infty}_{n=0}T^{-n}\mathcal B$ consists entirely of null sets and sets of measure $1$.
$ \ $
Definition: An invertible measure-preserving transformation $T$ of $(X, \mathcal B, \mu)$ is said to be $K$ (for Kolmogorov) if there is a sub-$\sigma$-algebra $\mathcal F$ of $\mathcal B$ such that:
$\cap_{n=1}^{\infty}T^{-n}\mathcal F$ is the trivial $\sigma$-algebra up to sets of measure $0$ (i. e. the intersection consists only of null sets and sets of full measure).
$\vee_{n=1}^{\infty}T^n\mathcal F = \mathcal B$ (i.e. the smallest $\sigma$-algebra containing $T^n \mathcal F$ for all $n>0$ is $\mathcal B$).
I would not worry that much about these, since they do not appear anywhere else in Charles' lecture notes for Ergodic Theory. |
Relationship between differential forms and cross products | Note that both $\wedge$ and $\times$ are antisymmetric and bilinear (the former at least for 1-forms). However, $\times$ is only defined in 3 dimensions. In general we have an exterior product, also denoted by $\wedge$, where $u \wedge v$ can be identified with the subspace spanned by $u$ and $v$ together with an orientation of this subspace and a number $|u| |v| \sin\theta(u,v)$, where $\theta(u,v)$ is the angle between $u$ and $v$. Then $\times$ can be seen as exchanging the subspace, which is a plane, with its normal given by the orientation, times the number.
The identification of $dx \wedge dy$ with $\vec i \times \vec j = \vec k$ follows if we identify $dx$ with $\hat x = \vec i$ and $dy$ with $\hat y = \vec j$. |
Combinatorics and trigonometry identity | First rewrite $\cos(x)$ as $\cos(x) = \dfrac{e^{ix}+e^{-ix}}{2}$. Then we also have $\dfrac{1}{2}(4+e^{ix}+e^{-ix}) = \cos(x) + 2$. We can factorize the left-hand side of the last equality as $\dfrac{1}{2}(4+e^{ix}+e^{-ix}) = \dfrac{1}{2}\left(a+\dfrac{e^{ix}}{a}\right)\left(a+\dfrac{e^{-ix}}{a} \right)$ where $a^2+\dfrac{1}{a^2} = 4$. This gives four solutions for $a$, so I will pick the positive one, which is $\dfrac{1+\sqrt{3}}{\sqrt{2}}$.
Now notice that this allows us to evaluate the left-hand side of $(1)$. We have that $\dfrac{1}{2a^2}(a^2+e^{ix})(a^2+e^{-ix}) = \cos(x) + 2$. Then make the substitution $x = \dfrac{n\pi}{180}$ to get that $$\dfrac{1}{2a^2}(a^2+e^{\frac{in\pi}{180}})(a^2+e^{-\frac{in\pi}{180}}) = \cos \left ( \dfrac{n\pi}{180} \right) + 2,$$ which we can use to rewrite the left-hand side of $(1)$. Thus, $$\displaystyle \prod_{n=1}^{180} \left (\cos \left( \dfrac{n\pi}{180} \right) + 2 \right) = \prod_{n=1}^{180} \left(\dfrac{1}{2a^2}(a^2+e^{\frac{in\pi}{180}})(a^2+e^{-\frac{in\pi}{180}}) \right).$$
Then see that the numbers $e^{\frac{in\pi}{180}}$ for $n = -180,-179,\ldots,-1,0,1,\ldots,179$ are the roots of the equation $x^{360} = 1$. Now, we see that $$\prod_{n=1}^{180} \left(\dfrac{1}{2a^2}(a^2+e^{\frac{in\pi}{180}})(a^2+e^{-\frac{in\pi}{180}}) \right) = \dfrac{1}{2^{180}a^{360}}\dfrac{(a^{720}-1)(a^2-1)}{a^{2}+1}.$$
To calculate the RHS, we see that $\displaystyle \sum_{n = 0}^{89} \binom{180}{2n+1} \left( \dfrac{3}{4} \right)^n = \dfrac{\left(1+\dfrac{\sqrt{3}}{2}\right)^{180} - \left(1-\dfrac{\sqrt{3}}{2}\right)^{180}}{\sqrt{3}}.$ It remains to show that $$\dfrac{1}{2^{180}a^{360}}\dfrac{(a^{720}-1)(a^2-1)}{a^{2}+1} = \dfrac{\left(1+\dfrac{\sqrt{3}}{2}\right)^{180} - \left(1-\dfrac{\sqrt{3}}{2}\right)^{180}}{\sqrt{3}}.$$
To see this, first observe that $1-\dfrac{\sqrt{3}}{2} = \dfrac{1}{2a^2}$, $\dfrac{a^2}{2} = 1+\dfrac{\sqrt{3}}{2}$, and $$\dfrac{1}{2^{180}a^{360}}\dfrac{(a^{720}-1)(a^2-1)}{a^{2}+1}=\left[\left(\dfrac{a^2}2\right)^{180}-\left(\dfrac1{2a^2}\right)^{180}\right]\cdot\dfrac{a^2-1}{a^2+1}.$$ Then finally we see that $\dfrac{a^2-1}{a^2+1} = \dfrac{1}{\sqrt{3}}$, proving the desired result. |
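A floating-point check of the identity (a sketch; the two sides are around $10^{48}$, still comfortably inside double precision):

```python
import numpy as np
from math import comb

n = np.arange(1, 181)
lhs = np.prod(np.cos(n * np.pi / 180) + 2)
rhs = sum(comb(180, 2*k + 1) * (3/4)**k for k in range(90))
print(lhs, rhs, np.isclose(lhs, rhs))   # agree up to float rounding
```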
Basis vectors in information geometry | I'm not at all an expert, but here's what I think I've understood from a little (not-so) light reading---correct me if I've gotten anything wrong.
Let $(X,\mu)$ be a $\sigma$-finite measure space and (by Jensen's inequality) let
$$
\log\operatorname{PDF}(X,\mu) = \left\{c \in L^1(X,\mu) \mid \text{$c(x) \in \mathbb{R}$ for $\mu$-a.e. $x \in X$},\; \int_X \exp c \, d\mu = 1 \right\},
$$
which is bijective in the obvious way with the set of all $\mu$-a.e. positive PDFs on $(X,\mu)$. Then an $n$-dimensional statistical manifold is a subset $S$ of $\log\operatorname{PDF}(X,\mu)$ with the structure of an $n$-dimensional smooth manifold defined by a $C^\infty$ atlas $\{(\Xi_k,\ell_k)\}$, such that for every local chart $\ell_k : \Xi_k \to S$ (which you should think of as the pointwise logarithm $\ell_k = \log p_k$ of a map $p = \exp \ell$ from $\Xi_k \subset \mathbb{R}^n$ to the set of all $\mu$-a.e. positive PDFs on $(X,\mu)$),
the map $\ell_k$ is $C^\infty$ as a map $\Xi_k \to L^1(X,\mu)$, and
if $f : \Xi_k \to L^1(X,\mu)$ is a (pointwise) polynomial in $\ell_k$, $\exp \ell_k$, and any finite number of (higher) partial derivatives of $\ell_k$, then for every $\{i_1,\dotsc,i_n\} \subset \mathbb{N}$, $\tfrac{\partial^n}{\partial \xi^{i_1} \cdots \partial \xi^{i_n}}\int_X f(x,\cdot)\,d\mu(x) = \int_X \tfrac{\partial^n}{\partial \xi^{i_1} \cdots \partial \xi^{i_n}}f(x,\cdot)\,d\mu(x)$.
However, what seems to get assumed but swept under the rug is that one usually wants a strengthened version of condition 1., namely, that each $\ell_k$ actually defines a smooth embedding of the open subset $\Xi_k$ of $\mathbb{R}^n$ into the Banach space $L^1(X,\mu)$, viewed trivially as a Banach manifold modelled on $L^1(X,\mu)$. This, then, implies that the inclusion of $S$ into $L^1(X,\mu)$ is itself a smooth embedding, i.e., that $S$ really is an $n$-dimensional submanifold of $L^1(X,\mu)$. Thus, as far as I can tell, you should be able to rewrite this slightly strengthened definition of $n$-dimensional statistical manifold in the following way:
An $n$-dimensional statistical manifold is a smooth $n$-manifold $S$ together with a smooth embedding $\ell : S \to L^1(X,\mu)$ for some $\sigma$-finite measure space $(X,\mu)$, such that:
1. for all $s \in S$, the measurable function $\exp \ell(s)$ defines a $\mu$-almost everywhere positive PDF on $(X,\mu)$, so that $\exp \ell(s)(x) > 0$ for $\mu$-almost every $x \in X$ and $\int_X \exp\ell(s) \,d\mu = 1$;
2. if $f : S \to L^1(X,\mu)$ is a (pointwise) polynomial in $\ell$, $\exp\ell$, and some finite number of (iterated) directional derivatives of $\ell$, then for every $m \in \mathbb{N}$ and every set of $m$ vector fields $a_1,\dotsc,a_m$ on $S$, we have $\int_X a_1 \cdots a_m(f)(x,\cdot)\,d\mu(x) = a_1 \cdots a_m\left(\int_X f(x,\cdot)\,d\mu(x)\right)$.
Now, given the trivial Banach manifold structure on $L^1(X,\mu)$, one can canonically identify $T_\phi L^1(X,\mu)$ with $L^1(X,\mu)$ for any $\phi \in L^1(X,\mu)$ (just as one identifies $T_v \mathbb{R}^n$ with $\mathbb{R}^n$ for all $v \in \mathbb{R}^n$), so that the derivative $d\ell : TS \to TL^1(X,\mu)$ can be identified with an $L^1(X,\mu)$-valued $1$-form on $S$, allowing one, for instance, to define the Fisher metric as
$$
\forall a, b \in \Gamma(TS), \; \forall s \in S \quad g(a,b)_s := \int_X d\ell(a)(s) d\ell(b)(s) \exp\ell(s) \, d\mu
$$
the moment that $\ell$ is sufficiently well-behaved for $g(a,b)_s$ to always converge. In terms of a local chart $\ell_k : \Xi_k \to S \subset L^1(X,\mu)$ as in the first main paragraph, for each $\xi \in \Xi_k$, this simply boils down to
$$
d(\ell_k)_\xi : \mathbb{R}^n \cong T_\xi \Xi_k \to T_{\ell_k(\xi)} S \subset T_{\ell_k(\xi)} L^1(X,\mu) \cong L^1(X,\mu), \quad v \mapsto v^i \partial_i \ell_k(\cdot,\xi);
$$
in particular, we see that the partial derivatives $\partial_i \ell_k(\cdot,\xi)$ do live in $T_{\ell_k(\xi)}S$ when viewed as a real linear subspace of $L^1(X,\mu)$. |
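To make the coordinate formula concrete, here is a small numerical sketch of my own (not from any of the sources above), taking the Gaussian location–scale family as the statistical manifold: the expression $g_{ij} = \int_X \partial_i\ell\,\partial_j\ell\,\exp\ell\,d\mu$ recovers the classical Fisher information $\operatorname{diag}(1/\sigma^2,\ 2/\sigma^2)$.

```python
import numpy as np
from scipy.integrate import quad

mu, sigma = 0.7, 1.3
logp = lambda x: -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
d_mu    = lambda x: (x - mu) / sigma**2                    # partial of l w.r.t. mu
d_sigma = lambda x: ((x - mu) ** 2 - sigma**2) / sigma**3  # partial of l w.r.t. sigma

# g_ij = integral of (d_i l)(d_j l) exp(l) dx over the real line
g = [[quad(lambda x: a(x) * b(x) * np.exp(logp(x)), -np.inf, np.inf)[0]
      for b in (d_mu, d_sigma)] for a in (d_mu, d_sigma)]
print(np.round(g, 6))  # ~ [[1/sigma^2, 0], [0, 2/sigma^2]]
```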
Are truth tables valid for universal statements? Why or why not? | Suppose $\mathcal{U}$ is some arbitrary set of infinite cardinality.
I think the issue that the book is touching on is that for an arbitrary statement containing a universal quantifier (like $\forall x \in \mathcal{U} \ p(x)$), although it does have a truth value, you cannot use a truth table to find that value directly by testing all values of $x$, since you would need an infinitely large table to cover all of the cases.
What made your example doable with a truth table was that in that particular case you did not have to consider every possible $x$ (of which there could be infinitely many); you only needed to care about $p(x), q(x), p(x) \vee q(x)$, which can take on only finitely many combinations of truth values.
Problem in Understanding the following steps in an integral | Just looking at the antiderivative
$$I=\int\left(\frac{1-\cos (\theta )}{(1-a) \cos (\theta )+a+3}\right)^r\frac {d\theta}{1-\cos (\theta )}$$
making
$$\Phi=\frac{1-\cos (\theta )}{(1-a)\cos (\theta )+a+3} \implies \theta=\cos ^{-1}\left(\frac{(a+3) \Phi -1}{(a-1) \Phi -1}\right)$$ I leave you the pleasure of computing $d\theta$.
Replacing
$$I=\frac 1{2\sqrt 2}\int\frac{\Phi ^{r-\frac{3}{2}}}{ \sqrt{1-(a+1) \Phi}}\,d\Phi$$
Now $t=(a+1) \Phi$ to get
$$I=\frac {(a+1)^{\frac{1}{2}-r} }{2\sqrt 2} \int \frac{t^{r-\frac{3}{2}}}{\sqrt{1-t}}\,dt$$ |
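As a numerical sanity check of the substitution (my addition; the values $a=\tfrac12$, $r=2$ and the limits of integration are arbitrary), the $\theta$-integral and the transformed $\Phi$-integral agree over corresponding limits:

```python
import numpy as np
from scipy.integrate import quad

a, r = 0.5, 2.0
Phi = lambda th: (1 - np.cos(th)) / ((1 - a) * np.cos(th) + a + 3)

f_theta = lambda th: Phi(th) ** r / (1 - np.cos(th))                       # original integrand
f_Phi   = lambda p: p ** (r - 1.5) / (2 * np.sqrt(2) * np.sqrt(1 - (a + 1) * p))

th0, th1 = 0.3, 1.2
print(quad(f_theta, th0, th1)[0], quad(f_Phi, Phi(th0), Phi(th1))[0])      # equal values
```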
Why are contravariant functors called contravariant? | The definition of a functor is self-dual. If you reverse all the arrows in the definition of a functor $\mathsf{C} \to \mathsf{D}$, what you get is a functor $\mathsf{C}^\mathrm{op} \to \mathsf{D}^\mathrm{op}$, which is exactly the same thing as a functor $\mathsf{C} \to \mathsf{D}$. So in this sense a cofunctor is just a functor.
Now a contravariant functor is a functor too, but between different categories: a contravariant functor $\mathsf{C} \to \mathsf{D}$ is the same thing as a functor $\mathsf{C}^\mathrm{op} \to \mathsf{D}$, which is exactly the same thing as a functor $\mathsf{C} \to \mathsf{D}^\mathrm{op}$. This is rather unrelated to functors $\mathsf{C} \to \mathsf{D}$.
Note that some people do use the word "cofunctor" for contravariant functors; I guess it's just a matter of taste, as long as every word is defined.
How to find the determinant of this matrix? (A spherical-Cartesian transformation Jacobian matrix) | If you were really clever (e.g., if you already knew the answer, or thought hard about what a Jacobian in a different coordinate system represents), you could compute $\det(A)$ by computing $\det(B)\det(A) = \det (BA)$, where $\det B$ was particularly easy.
Picking
$$
B = \pmatrix{\cos \phi & \sin \phi & 0 \\
-\sin \phi & \cos \phi & 0 \\
0 & 0 & 1}
$$
generates a matrix $BA$ whose form is rather simpler than that of $A$ (there's no $\phi$, for instance!), while $\det B$ is evidently $1$.
But I think the question-asker's intent here is that you're just supposed to do the algebra and practice trig simplification.
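For reference, a hedged sympy check of the end result (this assumes $A$ is the Jacobian of the standard spherical-to-Cartesian map, with polar angle $\theta$ and azimuth $\phi$; the question's exact conventions may differ):

```python
import sympy as sp

rho, theta, phi = sp.symbols('rho theta phi', positive=True)
x = rho * sp.sin(theta) * sp.cos(phi)
y = rho * sp.sin(theta) * sp.sin(phi)
z = rho * sp.cos(theta)

A = sp.Matrix([x, y, z]).jacobian([rho, theta, phi])
print(sp.simplify(A.det()))  # rho**2*sin(theta)
```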
Dimension of the rationals over the integers | No set of more than one rational number is independent, so there is no basis, the rationals are not a free module over the integers.
However, your argument that there is no finite generating set is correct. |
cauchy estimates for derivatives | Cauchy's derivative-integral formula:
$$f'(0) = \frac{1}{2\pi \imath} \int_{|\xi|=r} \frac{f(\xi)}{\xi^2} \, d\xi$$
where $0<r<1$. Thus
$$2f'(0)= \frac{1}{2\pi \imath} \int_{|\xi|=r} \frac{f(\xi)-f(-\xi)}{\xi^2} \, d\xi \\
\Rightarrow 2|f'(0)| \leq \frac{1}{2\pi} \cdot \frac{2\pi r}{r^2} \cdot \sup\{|f(\xi)-f(-\xi)|; |\xi|=r\} \\ \leq \frac{1}{r} \cdot \sup\{|f(z)-f(w)|; z,w \in D\}$$
Now let $r \to 1$...
Clearly the equality holds if $f$ is linear. On the other hand: Since $f(z) = \sum_{n \geq 0} a_n \cdot z^n$ we have
$$\sup_{w,z \in D} |f(z)-f(w)| = \sup_{w,z \in D} \left| f'(0) \cdot (z-w)+\sum_{n \geq 2} a_n \cdot (z^n-w^n) \right| \stackrel{!}{=} 2|f'(0)|$$ and this can only hold if $a_n =0$ for all $n \geq 2$. |
Let [a, b] be divided into n equal parts, and let y1 be the value of f(x) at the midpoint of the ith subinvterval.... | Let's assume that $f$ is Riemann integrable.
Then write
$\sum_{i=1}^{n}f(x_i^{*})\Delta x_{i}=\sum_{i=1}^n y_{i}\left ( \frac{b-a}{n} \right )=(b-a)A_{n}$,
where we have used the evident partition obtained from the data in the exercise.
Now it only remains to take the limit as $n\rightarrow \infty $ and use the definition of the Riemann integral.
Applying existential instantation | Prove $\bigcup_{i \in I} \big(A_i\smallsetminus B_i\big)\subseteq \Big(\bigcup_{i \in I} A_i \smallsetminus \bigcap_{i \in I} B_i\Big)$
Well $\quad\bigcup_{i\in I} \big(A_i\smallsetminus B_i\big) ~=~ \{x\mid \exists i\in I~(x\in A_i\wedge x\notin B_i)\}$
And $\quad\Big(\bigcup_{i \in I} A_i \smallsetminus \bigcap_{i \in I} B_i\Big) ~=~ \{x\mid \exists i\in I~(x\in A_i)~\wedge~\neg\forall i\in I~(x\in B_i)\} $
You have stated that you have a proof for: $$\forall x~\Big(\exists i\in I~(x\in A_i\wedge x\notin B_i)~\to~\big(\exists i\in I~(x\in A_i)~\wedge~\exists j\in I~(x\notin B_j)\big)\Big)$$
(And yes, clearly so.)
So that leaves you needing to demonstrate that the converse may possibly be false.
Just construct a counterexample:
$$(\{1,2\}\smallsetminus\{1\})\cup(\{3\}\smallsetminus\{2,3\})=\{2\}$$
$$(\{1,2\}\cup\{3\})\smallsetminus(\{1\}\cap\{2,3\})=\{1,2,3\}$$ |
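For what it's worth, the counterexample is easy to machine-check; Python's set operations mirror the notation exactly:

```python
print(({1, 2} - {1}) | ({3} - {2, 3}))   # {2}
print(({1, 2} | {3}) - ({1} & {2, 3}))   # {1, 2, 3}
```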
Function whose limit does not exist at all points | As suggested in the comments, define $f:\mathbb{R}\rightarrow \mathbb{R}$ by $f(x)=0$ if $x$ is irrational and $f(x)=1$ if $x$ is rational. Let's prove it doesn't have a limit at any point. Let $y\in \mathbb{R}$, suppose $y$ is irrational, and suppose there exists $L$, the limit of $f$ at $y$. Then given any $\epsilon<1/2$, there is $\delta>0$ such that $x\in (y-\delta, y+\delta)\setminus\{y\}$ implies $|f(x)-L|<1/2$. Now, since there exist $x_0,x_1\in (y-\delta, y+\delta)\setminus\{y\}$ such that $x_0$ is rational and $x_1$ is irrational, we get that $1=|1-0|=|f(x_0)-f(x_1)|\leq |f(x_0)-L|+|f(x_1)-L|<1/2+1/2=1$, a contradiction. So the limit does not exist at any irrational $y$. The same argument applies to any rational $y$, so the limit does not exist at any point.
$R ≅ R/I$ prove that for any two-sided ideals $A$ and $B$ we have $A⊆B $ or $B⊆A$ | Let me state a more general problem and then explain the solution.
Problem. Show that if $A$ is a nontrivial algebraic structure that
is isomorphic to each of its nontrivial quotients, then the lattice
of congruences on $A$ is a well order.
[Here $A$ is nontrivial means $|A|>1$. A congruence on $A$ is a
kernel of a homomorphism.]
[Before giving the solution, here is the terminology: an algebraic structure is simple if its lattice of congruences is a 2-element chain. It is pseudosimple if it is not simple, but it is isomorphic to each of its nontrivial quotients. So the problem is to show that if an algebra is simple or pseudosimple, then its lattice of congruences is a well order.]
Solution. Choose $a\neq b$ in $A$ and let $\alpha$ be a congruence
on $A$ maximal for separating these elements. Then $A/\alpha$ is a nontrivial
quotient of $A$, since it contains the distinct elements $a/\alpha$ and
$b/\alpha$, hence $A\cong A/\alpha$. By the maximality of $\alpha$, $A/\alpha$ has a smallest nonzero
congruence, namely the one that identifies $a/\alpha$ and $b/\alpha$.
This implies that the zero congruence of $A/\alpha$ is completely meet irreducible.
Since $A\cong A/\alpha$,
the zero congruence of $A$ is also completely meet irreducible.
Since $A\cong A/\theta$ whenever $|A/\theta|>1$, it follows that
the zero congruence on any
nontrivial quotient of $A$ is completely meet irreducible.
By the Correspondence Theorem, every proper congruence $\theta$ on $A$ is completely
meet irreducible.
This already implies that the lattice of congruences of $A$ is a well order.
First, it must be a linear order, since if it had incomparable elements
$\beta$ and $\gamma$, then $\theta=\beta\cap\gamma$ would be a proper congruence that
is not completely meet irreducible. Next, it has DCC, since
if $(\delta_n)_{n\in\omega}$ is a strictly decreasing $\omega$-chain
in the lattice of congruences, then
$\theta=\bigcap_{n\in\omega} \delta_n$ is not completely meet irreducible.
Let me respond to the questions of Batominovski:
(Would $A\cap B$ be the maximal congruence $C$ that separates $A$ and $B$?) No, we are separating elements of $R$, not congruences/ideals of $R$. In ring notation, choose $a\neq 0$ in $R$, and then let $A$ be an ideal of $R$ that is maximal for the property that $a\notin A$. Such $A$ exists by Zorn's Lemma.
[Now I am taking $R$ for my algebra, $a$ and $0$ for my two elements to be separated, and congruence modulo $A$ for the separating congruence.]
(How do we show that the zero congruence of $R$ is completely meet irreducible?) It suffices to show that $R$ has a nontrivial quotient that has completely meet irreducible zero congruence, since $R$ is isomorphic to any such ring. So take a subdirectly irreducible quotient. This is what is being described in the first 2-3 sentences of the Solution. If $a\neq 0$ in $R$ and $A$ is maximal in $R$ for the property $a\notin A$, then $R/A$ is nontrivial,
since $\bar{a}\neq \bar{0}$ in this quotient, yet any nonzero ideal of $R/A$ contains $\bar{a}$ by the maximality of $A$. Thus the complete meet of nonzero ideals of $R/A$ also contains $\bar{a}$, which means this complete meet is not zero. This shows that the zero ideal of $R/A$ is completely meet irreducible.
(Do you know any ring $R$ with the required property?) As Tsemo Aristide pointed out, if $R$ has a maximal ideal and the required property, then $R$ must be simple, not pseudosimple. This will happen if $R$ has an identity element, as rschwieb has already mentioned. Hence any simple ring has the required property, and these are the only examples when $R$ has a maximal ideal. As for pseudosimple rings,
I do not know if there are any noncommutative examples. But it is a not-too-difficult exercise to show that a commutative pseudosimple ring must be a ring with zero multiplication and pseudosimple underlying additive group, and therefore any commutative pseudosimple ring is isomorphic to a Prufer group $\mathbb Z_{p^\infty}$ equipped with zero multiplication.
The pseudosimple commutative semigroups are classified in
Boris M. Schein, Pseudosimple commutative semigroups, Monatshefte für Mathematik
March 1981, Volume 91, Issue 1, pp 77-78.
and pseudosimple lattices with congruence lattices of every allowable order type are constructed in
Graetzer, G.; Schmidt, E. T.
Two notes on lattice-congruences.
Ann. Univ. Sci. Budapest. Eotvos. Sect. Math. 1 1958 83-87. |
Integrability of function on unit square | For any partitions $P=\{x_{0},...,x_{n}\}, Q=\{y_{0},...,y_{m}\}$ of $[0,1]$, denote the partition $P\times Q=\{[x_{i-1},x_{i}]\times[y_{j-1},y_{j}]\}_{1\leq i\leq n, 1\leq j\leq m}:=\{R_{i,j}\}_{1\leq i\leq n,1\leq j\leq m}$ of $[0,1]\times[0,1]$, then
\begin{align*}
&U(h,P\times Q)-L(h,P\times Q)\\
&=\sum_{1\leq i\leq n, 1\leq j\leq m}\left(\sup_{R_{i,j}}h-\inf_{R_{i,j}}h\right)|R_{i,j}|\\
&=\sum_{1\leq i\leq n, 1\leq j\leq m}(f(x_{i})g(y_{j})-f(x_{i-1})g(y_{j-1}))(x_{i}-x_{i-1})(y_{j}-y_{j-1})\\
&=\sum_{1\leq i\leq n, 1\leq j\leq m}(f(x_{i})g(y_{j})-f(x_{i})g(y_{j-1}))(x_{i}-x_{i-1})(y_{j}-y_{j-1})\\
&~~~~~~~~+\sum_{1\leq i\leq n,1\leq j\leq m}(f(x_{i})g(y_{j-1})-f(x_{i-1})g(y_{j-1}))(x_{i}-x_{i-1})(y_{j}-y_{j-1})\\
&\leq\sum_{1\leq i\leq n,1\leq j\leq m}f(1)(g(y_{j})-g(y_{j-1}))(x_{i}-x_{i-1})(y_{j}-y_{j-1})\\
&~~~~~~~~+\sum_{1\leq i\leq n,1\leq j\leq m}g(1)(f(x_{i})-f(x_{i-1}))(x_{i}-x_{i-1})(y_{j}-y_{j-1})\\
&=\sum_{1\leq j\leq m}f(1)(g(y_{j})-g(y_{j-1}))(y_{j}-y_{j-1})+\sum_{1\leq i\leq n}g(1)(f(x_{i})-f(x_{i-1}))(x_{i}-x_{i-1})\\
&=f(1)(U(g,Q)-L(g,Q))+g(1)(U(f,P)-L(f,P)),
\end{align*}
which can be made arbitrarily small.
Re-scaling the outputs of a matrix completion algorithm | You can alter the units of your points so that all distance values will be between $(0,1)$.
That is, if your original plane has coordinates $[0,50]\times [0,50]$, make a new coordinate system with corresponding set $[0,1/\sqrt{2}]\times [0,1/\sqrt{2}]$. So, if your original point is $(x,y)$, where $x,y\in[0,50]$, then create a new coordinate $(x',y')=(x/(50\sqrt{2}),y/(50\sqrt{2}))$. Then, the distance between each pair of points will be in $[0,1]$.
Once you have the completed distance matrix, multiply all values by $50\sqrt{2}$ to convert back to your original coordinate system. |
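A minimal sketch of the rescaling in code (my illustration; the $[0,50]^2$ square and the point count are assumptions, and the matrix-completion step itself is left as a placeholder):

```python
import numpy as np

scale = 50 * np.sqrt(2)               # largest possible distance in [0, 50]^2
pts = np.random.rand(10, 2) * 50      # hypothetical points in the original square
pts_scaled = pts / scale              # all pairwise distances now lie in [0, 1]

# ... complete the (partial) distance matrix of pts_scaled here ...
# then rescale a completed distance matrix D back to original units:
# D_original = D * scale
```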
Linear function in normed vector space (Folland, Real Analysis Exercise) | Your proof is almost complete. If $S_1\neq S_2$, then there is a $0\neq y \in X/N$ such that $S_1(y)\neq S_2(y)$. There is also $x\in X\setminus N$ such that $\pi (x)=y$. Then $S_1\circ \pi (x)\neq S_2\circ \pi (x)$ and thus the two operators $S_i\circ \pi$ are not equal (and thus not both equal to $T$), so $S$ is indeed unique. |
If $X$ is a CW complex, then the path components of $X$ are the components of $X$. | I think one can show that a CW-complex is locally path-connected.
See here (the proof looks non-trivial) or these notes, which look more accessible.
And in a locally path-connected space $X$ path-components are the same as components:
First observe that if $P_x$ is the path-component of $x$, $P_x$ is open in $X$ (see here e.g.).
Directly: let $y \in P_x$, then $y$ has a path-connected neighbourhood $N_y$. But all points in $P_x \cup N_y$ can be reached via a path from $x$ (for $N_y$ we go via $y$ and use the path-connectedness of $N_y$), so by maximality $P_x \cup N_y \subseteq P_x$, which implies $N_y \subseteq P_x$, so $y$ is an interior point of $P_x$; as $y$ was arbitrary, $P_x$ is open.
As the whole space $X$ is a disjoint union of its path-components all path-components are closed as well (the complement of a path-component is also open, as a union of the other path-components).
If $C_x$ is the component of $x$, then $P_x \subseteq C_x$ as $P_x$ is connected and contains $x$, and $C_x$ is the maximal set with that property. As $P_x$ is clopen in $X$ and $C_x$ is connected, the inclusion cannot be proper, or $P_x$ and its complement would disconnect $C_x$. So $P_x = C_x$. |
Is it true that a 3rd order polynomial must have at least one real root? | It is true that a cubic polynomial with real coefficients must have a real root. Since the lead coefficient is not $0$, we have that
$$
\lim_{x\to-\infty}ax^3+bx^2+cx+d=\begin{cases}-\infty&\text{if }a>0\\+\infty&\text{if }a<0\end{cases}
$$
and
$$
\lim_{x\to+\infty}ax^3+bx^2+cx+d=\begin{cases}+\infty&\text{if }a>0\\-\infty&\text{if }a<0\end{cases}
$$
Since a polynomial is continuous, by the Intermediate Value Theorem, if it takes a positive value and a negative value, it must take every value in between, in particular $0$. |
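The IVT argument is effectively an algorithm; here is a small bisection sketch (my addition; the coefficients are arbitrary, and the endpoints are taken large enough that the signs differ):

```python
def cubic(x, a=2.0, b=-3.0, c=1.0, d=5.0):   # arbitrary coefficients, a != 0
    return a * x**3 + b * x**2 + c * x + d

lo, hi = -1e6, 1e6        # cubic(lo) and cubic(hi) have opposite signs
while hi - lo > 1e-9:
    mid = (lo + hi) / 2
    if cubic(lo) * cubic(mid) <= 0:
        hi = mid          # sign change is in [lo, mid]
    else:
        lo = mid          # sign change is in [mid, hi]
print(lo)                 # a real root, approximately -0.93
```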
$\forall n \ge 1, \sum _{i=1}^n \frac 1 {i^2} \le 2$ mathematical induction | Since $a_n=\sum_{k=1}^{n}\frac{1}{k^2}$ is obviously an increasing sequence (I prefer to avoid the letter $i$ as a summation index, since it can easily be mistaken for the imaginary unit), it is enough to show that $\zeta(2)=\sum_{n\geq 1}\frac{1}{n^2}$ is less than two. Fourier theory ensures that $$\frac{\pi-x}{2}=\sum_{n\geq 1}\frac{\sin(nx)}{n}$$
for any $x\in(0,2\pi)$, with the identity holding uniformly on compact subsets of $(0,2\pi)$. Since
$$ \int_{0}^{2\pi}\sin(nx)\sin(mx)\,dx = \pi \delta(m,n)$$
we have
$$ \int_{0}^{2\pi}\left(\frac{\pi-x}{2}\right)^2\,dx = \pi \zeta(2)$$
and the given claim is equivalent to
$$ \int_{0}^{2\pi}(\pi -x)^2\,dx = 2\int_{0}^{\pi}(\pi -x)^2\,dx = 2\int_{0}^{\pi}x^2\,dx < 8\pi $$
or to
$$ \pi < 2\sqrt{3} $$
which is equivalent to the statement that the perimeter of the regular hexagon circumscribed about a unit circle is greater than the perimeter of the unit circle. This follows from the fact that if $A,B$ are bounded convex sets in $\mathbb{R}^2$ with $B\subsetneq A$, then $\mu(\partial B)<\mu(\partial A)$.
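A numerical footnote (my addition): the partial sums indeed stay below $\pi^2/6 \approx 1.6449 < 2$:

```python
import math
print(sum(1 / n**2 for n in range(1, 10**6)), math.pi**2 / 6)  # both ~ 1.6449, below 2
```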
Algorithm to tell if a partial recursive function is 0 everywhere | We can find a (total) recursive (indeed quite simple) function $f(c,n)$ with the following properties:
(i) If $c$ is the index of a Turing machine, then so is $f(c,n)$
(ii) The Turing machine with index $f(c,n)$, on input anything other than $n$, halts and gives result $0$.
(iii) On input $n$, the Turing machine with index $f(c,n)$ halts and gives result $0$ if the machine with index $c$ halts. Otherwise, the machine with index $f(c,n)$ does not halt.
Then the machine with index $c$ halts on input $n$ iff the machine with index $f(c,n)$ computes the identically $0$ function.
It is well known that the Halting Problem for Turing machines is not Turing machine solvable. But if there were an algorithm (Turing machine) for determining whether a machine computes the identically $0$ function, then by applying that algorithm to the machine with index $f(c,n)$ we would have an algorithmic solution to the Halting Problem.
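In pseudo-Python, the reduction looks as follows (a sketch only: `simulate` and `make_machine` are hypothetical stand-ins for the universal machine and the s-m-n construction, not real library calls):

```python
def f(c, n):
    def new_machine(x):
        if x != n:
            return 0
        simulate(c, n)                     # run machine c on input n; may never return
        return 0
    return make_machine(new_machine)       # index of new_machine

# If is_identically_zero(e) were computable, then
#   halts(c, n)  ==  is_identically_zero(f(c, n)),
# contradicting the unsolvability of the Halting Problem.
```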
$S_3$ is indecomposable but not simple | Yes, your argument is right! There are only two groups of order 6 (up to isomorphism): one is $\Bbb{Z}_6$, which is cyclic, and the other is $S_3$, which is non-abelian. So $S_3$ cannot be written as a direct product of two of its proper subgroups (such a product would have factors of orders $2$ and $3$, hence be abelian); otherwise $S_3$ would be isomorphic to $\Bbb{Z}_6$!
What is the characteristics polynomial of some element of the extended field over the base field? | My knowledge on this subject is a bit rough, but here's what I think is going on.
Let $L/K$ be a finite field extension and let $\alpha \in L$. Consider the $K$-linear map $m_\alpha\colon L\to L: x\mapsto \alpha x$. The characteristic polynomial of the element $\alpha\in L$ is then defined to be the characteristic polynomial of $m_\alpha$.
In particular, if $\alpha\in K$, then $m_\alpha=\alpha\,\mathrm{Id}_L$ and thus the characteristic polynomial is just $(X-\alpha)^n$ where $n=\dim_K(L)$, which is consistent with the formula you give for Galois extensions.
So the only thing going on here to extend the usual definition of characteristic polynomial to an element, is to view said element as an operator acting by multiplication by said element. |
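An illustrative check (my example): for $K=\Bbb Q$, $L=\Bbb Q(\sqrt2)$ and $\alpha=\sqrt2$, multiplication by $\alpha$ in the basis $\{1,\sqrt2\}$ has matrix $\begin{pmatrix}0&2\\1&0\end{pmatrix}$, whose characteristic polynomial is $X^2-2$:

```python
import sympy as sp

X = sp.symbols('X')
M = sp.Matrix([[0, 2], [1, 0]])     # m_alpha in the basis {1, sqrt(2)}
print(M.charpoly(X).as_expr())      # X**2 - 2
```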
Convexity of $|A^TA|$ | We have
$$
|A^\top B|^2 = \operatorname{trace}(B^\top A A^\top B) = \operatorname{trace}(B B^\top A A^\top) = (B B^\top, A A^\top)_F \le |B B^\top| \, |A A^\top| = |B^\top B| \, |A^\top A|,$$ where the inequality is Cauchy–Schwarz for the Frobenius inner product, and the last equality holds because $BB^\top$ and $B^\top B$ (likewise $AA^\top$ and $A^\top A$) have the same nonzero eigenvalues.
From this, we get
$$|A^\top B + B^\top A| \le |A^\top B| + |B^\top A| \le 2 \, |B^\top B|^{1/2} \, |A^\top A|^{1/2} \le |A^\top A| + |B^\top B|.$$ |
Let $S$ be a nonempty subset of $\mathbb{R}$. Prove the following are equivalent? | Note that the following two statements are equivalent:
For all $\epsilon>0$ there exists some $s \in S$ such that $0 < |s-x| < \epsilon$.
For all $n \in \mathbb{N}$ there exists some $s \in S$ such that $0 < |s-x| < {1 \over n}$.
Also, $|s_n-x| \to 0$ iff $s_n \to x$. |
representing a map between two exterior algebras with a matrix | Let $A$ be given by the matrix $(a_{i,j})$. The matrix of $\wedge^2 A$
is given by what used to be known as the second compound matrix
$A^{(2)}$ of $A$. The rows/columns of $A^{(2)}$ are indexed by
pairs $(i_1,i_2)$ with $1\le i_1<i_2\le n$. The entry in row
$(i_1,i_2)$ and column $(j_1,j_2)$ is the determinant
$$\left|\matrix{a_{i_1,j_1}&a_{i_1,j_2}\\a_{i_2,j_1}&a_{i_2,j_2}}\right|.$$
Of course this all extends to higher exterior powers. |
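Here is a small sympy construction of the second compound of a generic $3\times 3$ matrix (my sketch; entry $(r,c)$ is the $2\times 2$ minor on the corresponding row and column pairs):

```python
from itertools import combinations
import sympy as sp

A = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'a{i}{j}'))
pairs = list(combinations(range(3), 2))          # (0,1), (0,2), (1,2)
C2 = sp.Matrix(len(pairs), len(pairs),
               lambda r, c: A.extract(list(pairs[r]), list(pairs[c])).det())
sp.pprint(C2)
```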
Rank of a matrix based on its pivot elements | The number of pivot elements indicates the number of independent rows (equivalently, columns) of the given matrix, which is exactly the rank of the matrix. In your case there are two leading $1$'s, so the rank is equal to $2$.
Let $(X,Y)$ be a uniformly chosen point of a region $A$, given $A$ compute $EX$ | The area of $A$ is given by
$$
\int_0^\infty xe^{-x}-(-xe^{-x})\, dx=2\int_0^\infty xe^{-x}\, dx=2.
$$
To see that $\int_0^\infty xe^{-x}\, dx=1$ use integration by parts. Indeed,
$$
\int_0^\infty xe^{-x}\, dx=\Big[-xe^{-x}\Big]_{0}^\infty+\int_0^\infty e^{-x}\, dx=0+1=1.
$$ Since $(X,Y)$ is uniform on $A$, its joint density is $\frac12$ on $A$, so $EX=\frac12\int_0^\infty x\cdot 2xe^{-x}\,dx=\int_0^\infty x^2e^{-x}\,dx=2$.
An open cover of regular surface | The role that it plays can only be judged if we know what is done with the $U_{\mathbf {x}_{i}}$. So let us see how to get them.
Although it is not really important, let us note that it is possible that $\partial S= \emptyset$. In that case $K_n = S \cap D(0,n)$ where $D(0,n)$ denotes the closed ball with center $0$ and radius $n$.
As you know, each $K_n$ is compact. We can therefore find finitely many $\varphi^n_{\mathbf {x}^n_{i}} : U_{\mathbf {x}^n_{i}} \to V_{\mathbf {x}^n_{i}}$, $i = 1,...,k(n)$, such that $K_n \subset \bigcup_{i=1}^{k(n)} V'_{\mathbf {x}^n_{i}}$. Now set $p_r = \Sigma_{n=1}^r k(n)$ and arrange the $\mathbf {x}^n_{i}$ to a sequence via $\mathbf {x}_{i} = \mathbf {x}^r_{i - p_{r-1}}$ for $p_{r-1} < i \le p_r$. Hence, for each $i$ there exist a unique $r(i)$ such that $\mathbf {x}_{i} = \mathbf {x}^{r(i)}_{i - p_{r(i)-1}}$
Define $U_i = U_{\mathbf {x}^{r(i)}_{i - p_{r(i)-1}}}$, $V_i = V_{\mathbf {x}^{r(i)}_{i - p_{r(i)-1}}}$ and $\varphi_i = \varphi_{\mathbf {x}^{r(i)}_{i - p_{r(i)-1}}}$. The $U_i$ are open subsets of $\mathbb{R}^2$, but they need not be pairwise disjoint. Let us adjust them. There is a diffeomorphism $h : B(0,1) \to \mathbb{R}^2$, where $B(a,r)$ = open ball with center $a$ and radius $r$ (take for example $h(x) = \frac{x}{1 - \lVert x \rVert^2}$). Moreover, the translations $t_i : \mathbb{R}^2 \to \mathbb{R}^2, t_i(\mathbf{x}) = \mathbf{x} + (2i,0)$ are diffeomorphisms.
Replace $U_i$ by $U'_i = t_i(h^{-1}(U_i))$ and $\varphi_i$ by $\varphi'_i = \varphi_i \circ g_i$, where $g_i = h \circ t_{-i} : U'_i \to U_i$.
Then the $U'_i$ are pairwise disjoint because they are contained in $B((2i,0),1)$. Moreover, each ball $B$ is contained in some $B(0,2i)$ and $U'_j \cap B \subset B((2j,0),1) \cap B(0,2i) = \emptyset$ for $j > i$. |
Show that these identities are true. | Here's $1$:$$\begin{align} \frac{\text d}{\text dx}x^{-n}J_n(x)&=\frac{\text d}{\text dx}\sum_{j=0}^\infty \frac{(-1)^jx^{2j}}{j!(n+j)!2^{n+2j}}\\
&=\sum_{j=1}^\infty\frac{2j(-1)^jx^{2j-1}}{j!(n+j)!2^{n+2j}}\\
&=\sum_{j=1}^\infty\frac{(-1)^jx^{2j-1}}{(j-1)!(n+j)!2^{n+2j-1}}\\
&=\sum_{j=0}^\infty\frac{(-1)^{j+1}x^{2j+1}}{j!(n+j+1)!2^{n+2j+1}}\\
&=-x^{-n}\sum_{j=0}^\infty\frac{(-1)^{j}}{j!(n+1+j)!}\left(\frac x2\right)^{n+1+2j}\\
&=-x^{-n}J_{n+1}(x)\end{align}$$ |
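A numerical spot check of the identity (my addition; $n$ and the evaluation point are arbitrary):

```python
from mpmath import mp, besselj, diff

mp.dps = 30
n, x0 = 3, 1.7
lhs = diff(lambda x: x**(-n) * besselj(n, x), x0)   # d/dx of x^(-n) J_n(x) at x0
rhs = -x0**(-n) * besselj(n + 1, x0)
print(lhs - rhs)                                     # ~ 0 to working precision
```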
best strategy for probabilities problem | Let the first presentation be at time $1$, the second at time $2$, etc. We want to maximize the time that our hero is expected to do his/her presentation, i.e.,
$$\frac{n_1+n_2+n_3+n_4+n_5+n_6}{6},$$
where $n_i\in\{1,2,3,4,5,6,7,8,9\}$ (with all $n_i$s being different from each other) are the times possible after our hero's three mates have chosen their spots. But clearly for this expression to be as big as possible, we should choose the $n_i$s to be as large as possible, so the three friends should volunteer to do the first three presentations.
This holds no matter how many students and friends there are (the friends should always go first). |
Trace of a product of two PSD symmetric matrices being zero means this product being a zero matrix? | Since $P$ and $Z$ are semidefinite, $P=S^TS$ and $Z=YY^T$ for some $S$ and $Y$. Then
$$
\mathrm{tr}(PZ)=\mathrm{tr}(S^TSYY^T)=\mathrm{tr}(Y^TS^TSY)=\mathrm{tr}[(SY)^T(SY)]=\|SY\|_F^2.
$$
Hence $SY=0$ and $PZ=S^T(SY)Y^T=S^T(0)Y^T=0$. |
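A quick numerical illustration of the factorization $\operatorname{tr}(PZ)=\|SY\|_F^2$ (my addition, with random matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
S, Y = rng.normal(size=(4, 4)), rng.normal(size=(4, 2))
P, Z = S.T @ S, Y @ Y.T
print(np.trace(P @ Z), np.linalg.norm(S @ Y) ** 2)   # the two numbers agree
```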
Nonzero solutions to $\mathbb E\left[e^{\theta X}\right] = 1$? | Expanding on @Did's comment, if $\mathbb P(X>0)=1$ then $\mathbb P(\theta X>0)=1$ for $\theta>0$ so $\mathbb P(e^{\theta X}>1)=1$ and $\mathbb E[e^{\theta X}]>1$. By the same logic, $\mathbb E[e^{\theta X}]<1$ for $\theta <0$.
If $\mathbb P(X<0)=1$ then by symmetry we see that $\mathbb E[e^{\theta X}]<1$ when $\theta>0$ and $\mathbb E[e^{\theta X}]>1$ when $\theta<0$.
If $\mathbb P(X>0)$ and $\mathbb P(X<0)$ are both positive, then $\mathbb E[e^{\theta X}]\to\infty$ as $\theta$ approaches a boundary point of its region of convergence. So if $\mathbb E[e^{\theta|X|}]$ is finite for some $\theta>0$ (and $\mathbb EX\ne0$, so that the convex function $\theta\mapsto\mathbb E[e^{\theta X}]$ does not attain its minimum at $0$), we conclude by continuity that there exists $\theta^\star\ne0$ such that $\mathbb E[e^{\theta^\star X}]=1$.
Finitely Generated Algebra over Commutative Ring | Yes. When $A$ is commutative, this is known as the Artin-Tate lemma, and the same proof works in general. In detail, let $x_1,\dots,x_n$ generate $A$ as an $R$-algebra and let $y_1,\dots,y_m$ generate $A$ as a $B$-module. We can then choose scalars $b_{ij},b_{ijk}\in B$ such that $$x_i=\sum_j b_{ij}y_j$$ and $$y_iy_j=\sum_k b_{ijk} y_k.$$ Let $B_0\subseteq B$ be the $R$-subalgebra generated by the $b_{ij}$ and $b_{ijk}$, and note that the equations above imply $A$ is still generated by $y_1,\dots,y_m$ as a $B_0$-module (here we use the fact that $B$ is central in $A$, to write any finite product of the $x_i$ as a $B_0$-linear combination of the $y_j$ using the equations above since we can freely pull out the coefficients). Moreover, since $B_0$ is a finitely generated commutative $R$-algebra, it is a Noetherian ring by the Hilbert basis theorem, and hence $B$ is also finitely generated as a $B_0$-module. Since $B_0$ is a finitely generated $R$-algebra, this implies $B$ is also a finitely generated $R$-algebra. |
Second derivative is what? | The Jacobian matrix is the best linear approximation to $f$ at a particular point. However, if you change the point, you get a different Jacobian. The second derivative quantifies how the Jacobian changes as the point of approximation changes - the "change of the change".
To that end, we can think of the derivative $D$ as a mapping from the domain of the original function to the space of linear maps $\mathcal{L}(X,Y)$ with domain $X$ and range $Y$ the same as the original function:
\begin{align}
f:& X \rightarrow Y \\
Df:& X \rightarrow \mathcal{L}(X,Y)
\end{align}
The derivative of $f$, $Df$, is a function where you put in a point and it gives you a linear function,
$$Df(x_0) = \text{best linear function approximating $f$ near }x_0.$$
In matrix form, $Df(x_0)$ is the Jacobian matrix $J$ at $x_0$: $Df(x_0)(y) = J|_{x_0} y$.
Since $Df$ is itself a function, we can take its derivative, and so on, getting a tower of higher and higher derivatives as follows:
\begin{align}
f:&\mathbb{R}^n \rightarrow \mathbb{R}^m \\
Df:&\mathbb{R}^n \rightarrow \mathcal{L}(\mathbb{R}^n, \mathbb{R}^m) \\
D(Df) = D^2f:&\mathbb{R}^n \rightarrow \mathcal{L}(\mathbb{R}^n, \mathcal{L}(\mathbb{R}^n, \mathbb{R}^m)) \\
D(D^2f) = D^3f:&\mathbb{R}^n \rightarrow \mathcal{L}(\mathbb{R}^n,\mathcal{L}(\mathbb{R}^n, \mathcal{L}(\mathbb{R}^n, \mathbb{R}^m))) \\
\dots
\end{align}
Now this gets confusing fast (spaces of linear maps mapping to spaces of linear maps mapping to... ack!!). Luckily there is an isometric isomorphism theorem saying that everything just boils down to multilinear maps:
$$\mathcal{L}^n(X,\mathcal{L}^m(X,Y)) \cong \mathcal{L}^{n+m}(X,Y),$$
where $\mathcal{L}^k(X,Y)$ is the space of $k$-linear maps from $X$ to $Y$, and $\cong$ denotes an isometric isomorphism of function spaces. In more detail, what it means for $g$ to be in $\mathcal{L}^k(X,Y)$ is that $g : X \times \dots \times X \rightarrow Y$, and $g$ is independently linear in each of its entries:
$$g(x_a + x_b,z,w) = g(x_a,z,w) + g(x_b,z,w),$$
$$g(x,y_a+y_b,w) = g(x,y_a,w) + g(x,y_b,w),$$
and so on.
So, now we can simplify our tower of derivatives using spaces of multilinear functions:
\begin{align}
f:&\mathbb{R}^n \rightarrow \mathbb{R}^m \\
Df:&\mathbb{R}^n \rightarrow \mathcal{L}(\mathbb{R}^n, \mathbb{R}^m) \\
D^2f:&\mathbb{R}^n \rightarrow \mathcal{L}^2(\mathbb{R}^n, \mathbb{R}^m) \\
D^3f:&\mathbb{R}^n \rightarrow \mathcal{L}^3(\mathbb{R}^n, \mathbb{R}^m) \\
\dots
\end{align}
So, from this picture it is pretty clear what the second derivative of your function $f:X \rightarrow Y$ is at a point. It is a bilinear map from $X \times X$ to $Y$. You put in two vectors from $X$, and it gives out a vector in $Y$, and does so in a way that is linear in each input independently.
If you have a basis $\{ b_i\}$ of $n$ vectors for $X$ and basis $\{e_i\}$ of $m$ vectors for $Y$, you could completely characterize the second derivative by a 3D $n$-by-$n$-by-$m$ array of numbers $T_{ijk}$ where the $(i,j,k)$'th entry is found by applying the bilinear function with $b_i$ in the first argument and $b_j$ in the second argument, and then taking the component of the vector you get out in the $e_k$ direction:
$$T_{ijk} = e_k^T D^2f(x_0)(b_i,b_j).$$ |
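As a concrete illustration (mine, not the answer's), here is that array of numbers for a small $f:\mathbb{R}^2\to\mathbb{R}^2$, computed with jax in the standard basis:

```python
import jax
import jax.numpy as jnp

def f(x):   # f : R^2 -> R^2
    return jnp.array([x[0] ** 2 * x[1], jnp.sin(x[0]) + x[1] ** 3])

x0 = jnp.array([1.0, 2.0])
D2f = jax.jacfwd(jax.jacfwd(f))(x0)   # shape (2, 2, 2); D2f[k, i, j] = d^2 f_k / dx_i dx_j
print(D2f)
```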
open sets of reals are uncountable | Every non-empty open set in $\Bbb R$ contains an open interval. Given an open interval $O$, there is a bijection from $O$ to $(0,1)$ (or $\Bbb R$; use the inverse tangent function appropriately altered), which is uncountable.
Informally, one such bijection from $(0,1)$ (picture a semicircle projecting onto the line) to $\Bbb R$ is $x\mapsto\tan\big(\pi\big(x-\tfrac12\big)\big)$, and $x\mapsto\frac{x-a}{b-a}$ is a bijection from $(a,b)$, with $0<b-a<1$, to $(0,1)$. [The original answer illustrated both with pictures.]
blowing-up preserves the first Betti number? | I'll first answer the second question, and I assume you work over the complex numbers. You only need to understand what happens for a single blow-up $X' \to X$ at $p \in X$ and can even look at the restriction to a small neighbourhood of $p$. For example, to understand the case when $X$ is smooth, it is enough to see why the blow-up of $\Bbb C^2$ is simply connected.
In your situation, you need to look at rational double points, where this is also true because the exceptional divisor is simply connected (it's a graph of rational curves without cycles, since it corresponds to ADE Dynkin diagrams), so you can use Mayer-Vietoris, or you can by hand retract an ADE singularity onto the exceptional divisor.
For the first question, this is true since a K3 is simply connected so again you can use Mayer-Vietoris.
I don't know if the second question holds in full generality, I guess not but can't think of a counter-example now. |
Mendelson's book, truth function of n argument | A truth function of $n$ variables (arguments) is technically a function from $\{T,F\}^n$ to $\{T,F\}$. That is, it's a function of the form $f(x_1,x_2,\ldots,x_n)$, where the allowed inputs for $x_1,\ldots,x_n$ are $T$ and $F$, and the output of the function is $T$ or $F$. The inputs are the truth values of the statement letters of interest, and the output is the truth value of some logical combination of them.
As an example, the truth function for the logical connective $\leftrightarrow$ (‘if and only if’) is a function of two variables defined as follows:
$$\begin{align*}
&f(T,T)=T\\
&f(T,F)=F\\
&f(F,T)=F\\
&f(F,F)=T\,.
\end{align*}$$
To find the truth value of an expression $A\leftrightarrow B$, for instance, we let $x$ be the truth value of $A$ and $y$ the truth value of $B$; then $f(x,y)$ is the truth value of $A\leftrightarrow B$.
The truth function for the connective $\neg$ (‘not’) is a function of one variable defined as follows:
$$\begin{align*}
&g(T)=F\\
&g(F)=T\,.
\end{align*}$$
With $x$ and $y$ as before, the truth value of $(\neg A)\leftrightarrow B$ is $f\big(g(x),y\big)$, because $g(x)$ is the truth value of $\neg A$. |
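In code (my illustration; Python's booleans stand in for $T$ and $F$):

```python
def f(x, y):        # truth function of the biconditional
    return x == y

def g(x):           # truth function of negation
    return not x

for A in (True, False):
    for B in (True, False):
        print(A, B, f(g(A), B))   # truth value of (not A) <-> B
```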
Method of Characteristics for traffic flow equation | A good resource is Evans' book on PDE; he has a section on the method of characteristics. The basic idea is as follows. Given any suitable PDE with boundary conditions, we may turn the PDE into a system of ODEs. We pick a point $(x_0, 0)$ on the boundary (so here, the "boundary" is time $t = 0$). It would really be great if there were some path $y(s)$ connecting $(x_0, 0)$ to an arbitrary point $(x, t)$ in the interior (i.e. for time $t > 0$), and then set up an ODE along this path.
It turns out that for this specific PDE, what happens is that along this path $y$, the solution $u$ is constant. What this means is the following. Take any $(x, t) \in \mathbb{R} \times (0, \infty)$. Then if we can connect $(x, t)$ to $(x_0, 0)$ along one of these characteristic curves $y$, we know that $u(x, t) = u(x_0, 0)$, since $u$ is constant on these curves.
Again, the general theory is helpful to see why, but that is a whole section in Evans' text. I will try to give a very brief introduction here.
Let $G(q, z, y)$ be a real-valued function. Into the variable $q$ here, we will input the tuple $(u_x, u_t)$; so $q$ represents the data of the gradient of $u$. The variable $z$ here will represent $u(x(s))$, where $x : I \to \mathbb{R} \times [0, \infty)$ is our characteristic curve (which is at this point unknown). The variable $y$ is just $y = (x, t)$. This function $G$ is the PDE function we are solving; that is, we are solving the equation $G = 0$. The characteristic equations that concern us here are
$$
\dot{z}(s) = q\cdot D_qG, \ \ \ \dot{x}(s) = D_qG.
$$
Well, for the problem at hand, $G(q, z, y) = (1 - 2z, 1) \cdot q$, as
\begin{align*}
G(q, z, y) &= u_t + (1 - 2u)u_x \\
&= (1 - 2u, 1) \cdot (u_x, u_t) \\
&= (1 - 2z, 1)\cdot q.
\end{align*}
Thus, $D_qG$ is just the vector $(1 - 2z, 1)$. So the second characteristic equation is $\dot{x} = (1 - 2z, 1)$. So our characteristic curve $x$ satisfies this ODE.
Now the magic happens with $\dot{z}$, as this reduces to
$$
\dot{z}(s) = q\cdot D_qG = (u_x, u_t)\cdot (1 - 2z, 1) = 0;
$$
this is zero, remember, because this is exactly the PDE you are solving for! Since $z(s) = u(x(s))$, what this is literally saying is that the solution $u$ is constant (zero derivative!) along the characteristic curve $x$.
It is now a matter of integrating $\dot{z}$ along these curves to find the solution $u$. For a very good illustration of the method of characteristics, try solving the transport equation with constant coefficients, also in Evans.
As for shocks, I'll just suggest Evans, since this answer is already very long. |
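To see what the method produces here, a short sketch of mine (the initial profile $u_0$ is hypothetical): along each characteristic $x(t) = x_0 + (1 - 2u_0(x_0))\,t$ the solution keeps its initial value, so we can simply transport sample points:

```python
import numpy as np

u0 = lambda x: np.exp(-x**2)        # hypothetical initial profile u(x, 0)
x0 = np.linspace(-3, 3, 13)
for t in (0.0, 0.5, 1.0):
    x = x0 + (1 - 2 * u0(x0)) * t   # u is constant along x(t) = x0 + (1 - 2 u0) t
    print(t, np.round(x, 2))
```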
Solving cumbersome "quadratic" equation | HINT:
We have $$3x\sqrt{1-2x^2}=3x^2-1$$
Square both sides to form a Quadratic Equation in $x^2$
Solve it & identify the extraneous roots introduced due to squaring |
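Following the hint with sympy (a sketch; sympy's `solve` checks candidate solutions and discards the extraneous roots introduced by squaring):

```python
import sympy as sp

x = sp.symbols('x', real=True)
sols = sp.solve(sp.Eq(3 * x * sp.sqrt(1 - 2 * x**2), 3 * x**2 - 1), x)
print(sols)   # only the roots satisfying the original (unsquared) equation
```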
Independence of Rotation Matrix Definitions | $A A^T = I$ already imposes $6$ independent constraints: $3$ saying that the rows should be orthogonal to each other, and $3$ saying that each row should be a vector of length $1$ (in particular the last three conditions you wrote down are redundant). These conditions also imply that
$$\det(A) \det(A^T) = \det(A)^2 = 1$$
hence that $\det(A) = \pm 1$. The result we get is a $(9 - 6 = 3)$-dimensional manifold, namely the group $\text{O}(3)$ of rotations and reflections. The final condition $\det(A) = 1$ is an independent constraint but does not drop the dimension; it just singles out the connected component containing the rotations. |
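A quick numerical companion to the constraint count (my addition): any orthogonal matrix, e.g. the Q factor of a QR decomposition, satisfies the six constraints $AA^T = I$ and has $\det = \pm 1$; the extra condition $\det = 1$ then selects the rotation component:

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))     # a random element of O(3)
print(np.allclose(Q @ Q.T, np.eye(3)))           # True: the 6 constraints hold
print(np.linalg.det(Q))                          # +1 or -1; det = +1 gives SO(3)
```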