title | upvoted_answer
---|---
If $Q$ is nilpotent and commutes with $A$, then $A + Q$ is invertible if and only if $A$ is invertible | If $A + Q$ is invertible then $-Q$ is nilpotent and commutes with $A + Q$, so $(A + Q) + (-Q) = A$ is invertible. |
Diophantine equation: $x^4+4=py^4$ | It is easy to see that $x$ is odd (otherwise $4$ divides $py^4$ and $x, y$ are both even, which gives a contradiction mod $16$). We have the factorization $x^4+4=(x^2+2x+2)(x^2-2x+2)$, and for odd $x$ the $\gcd$ of $x^2 + 2x + 2$ and $x^2 - 2x + 2$ is $1$. Therefore we must have \begin{eqnarray}x^2 + 2x + 2 &=& u^4\\x^2 - 2x + 2 &=& pv^4\end{eqnarray} or \begin{eqnarray}x^2 + 2x + 2 &=& pu^4\\x^2 - 2x + 2 &=& v^4.\end{eqnarray} In the first case, we have $(x + 1)^2 + 1 = (u^2)^2$, which is impossible for positive $x$. In the second case, we have $(x - 1)^2 + 1 = (v^2)^2$, which forces $x = 1$, and hence $y = 1$ and $p = 5$. |
Is the Cantor Pairing function guaranteed to generate a unique real number for all real numbers? | Even for positive reals the answer is no, the result is not unique. You can choose any $x,y,$ compute $f(x,y)$, then choose any $x'\lt x$ and solve $\frac 12(x'+y')(x'+y'+1)+y'=f(x,y)$ for $y'$ The only reason for the $x'$ restriction is to make sure you get a positive square root. For example, let $x=3,y=5,x'=2$. We have $f(3,5)=41$ so want $\frac 12(2+y')(3+y')+y'=41$, which has solutions $y'=\frac 12(-7\pm\sqrt{353})\approx -12.8941,5.8941$ so $f(3,5)=f(2,\frac 12(-7+\sqrt{353}))$ in the positive reals. You can allow any of $x,y,x'$ to be other than integers. $y'$ will usually not be integral. |
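A quick numerical check of the worked example above (Python, for illustration):

```python
import math

def f(x, y):
    # the Cantor pairing formula, applied to real inputs
    return 0.5 * (x + y) * (x + y + 1) + y

y2 = 0.5 * (-7 + math.sqrt(353))   # the y' solved for above
# f(3, 5) and f(2, y2) both come out to 41 (up to rounding),
# so the pairing is not injective on the positive reals
```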
Concept problem of the binary relation fo set | A binary relation on $A$ is a subset of $A\times A$.
As such it is a set whose elements are pairs.
If $a_1,a_2\in A$ then $(a_1,a_2)$ is a pair of elements of $A$, hence an element of $A\times A$; therefore it makes sense to ask whether or not $(a_1,a_2)$ is an element of that subset $R$ of $A\times A$.
Thus $(x,x)$ with $x\in A$ is a special case of such pairs, namely a pair where both components are the same.
If all such pairs are in $R$, then $R$ is called reflexive.
In formal symbolism, this is written as $\forall x\in A\colon (x,x)\in R$.
Thus $R$ is reflexive iff $\forall x\in A\colon (x,x)\in R$ is true.
We are concerned with ordered pairs here; that is, a pair whose first component is $x$ and second component is $y$ is different from a pair whose first component is $y$ and second component is $x$ (at least if $x\ne y$).
Some authors make a strict distinction between $\Rightarrow$ and $\rightarrow$ (cf. for example some discussion at Wikipedia), some don't. From the level of your question I consider it safe to assume that no distinction is necessary. Thus in short: Both symbols are used to denote implication ("if ... then").
The meaning of antisymmetric is precisely given by the formal definition stated. Once you have digested these definitions a bit, the superficial difference between them will go away. Symmetric means that $(x,y)$ and $(y,x)$ are always both or neither in $R$; antisymmetric means that they are never both in $R$ (with the exception of pairs with $x=y$).
In the definition of transitivity, there is just no room for $y$ on the right side. Remember that $R$ contains pairs, not triples, of elements of $A$. Here again, digesting a few examples might make things clearer.
If "Jack is taller than Jill" and "Jill is taller than Joe", you can conclude that "Jack is taller than Joe" and there is no need to mention Jill in that result. |
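For finite sets these definitions translate directly into code; a sketch (Python, with a relation stored as a set of ordered pairs — the example relation below is hypothetical):

```python
def is_reflexive(A, R):
    return all((x, x) in R for x in A)

def is_symmetric(R):
    return all((y, x) in R for (x, y) in R)

def is_antisymmetric(R):
    # (x, y) and (y, x) both in R forces x == y
    return all(x == y for (x, y) in R if (y, x) in R)

def is_transitive(R):
    return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

A = {1, 2, 3}
R = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 3), (1, 3)}  # "less than or equal" on A
# R is reflexive, antisymmetric and transitive, but not symmetric
```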
Probability question about only one person out of four being able to choose the right box. | Not a probability buff, so let me know if correct or where wrong.
A : Probability of success = 1/5,
B : Probability of success: 4/5 × 1/4 = 1/5,
C: Probability of success: 4/5 × 3/4 × 1/3 = 1/5,
D: Probability of success: 4/5 × 3/4 × 2/3 × 1/2 = 1/5. |
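The chain of products can be verified exactly with rational arithmetic (a Python sketch, assuming the setup is one winning box among five, drawn in turn without replacement):

```python
from fractions import Fraction as F

# one winning box among five; A, B, C, D draw in turn without replacement
pA = F(1, 5)
pB = F(4, 5) * F(1, 4)
pC = F(4, 5) * F(3, 4) * F(1, 3)
pD = F(4, 5) * F(3, 4) * F(2, 3) * F(1, 2)
# all four products reduce to 1/5
```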
Under what assumptions on $p$ is $\mathcal{O}_K^* \simeq \mathbb{Z}_p^{*} \oplus \mathbb{Z}_p^{*}$ | I think that this never happens.
For, $\Bbb Z_p^*$ always has a nontrivial torsion subgroup $T_p$, of order $p-1$ if $p\ne2$, and of order two in case $p=2$. Thus $\Bbb Z_p^*\oplus\Bbb Z_p^*$ will always have a finite noncyclic subgroup $T_p\oplus T_p$. But a finite subgroup of the multiplicative group of a field must be cyclic, so $\Bbb Z_p^*\oplus\Bbb Z_p^*$ cannot be found inside the multiplicative group $\mathcal O^*$. |
Prove $\lim_{x\rightarrow 0}\cos (x)=1$ with the epsilon-delta definition of limits | We want to show that for every $\epsilon\gt 0$, there is a $\delta\gt 0$ such that if $0\lt |x-0|\lt \delta$, then $|1-\cos x|\lt \epsilon$.
To do this, we need an estimate of $|1-\cos x|$ for $x$ close to $0$.
Suppose that $|x|\lt 1$.
Multiplying and dividing by $1+\cos x$ (which is $\gt 1$ in our interval), we get
$$|1-\cos x|=\frac{1-\cos^2 x}{1+\cos x}=\frac{\sin^2 x}{1+\cos x}.\tag{1}$$
In our interval, we have $\cos x\gt 0$, and therefore
$$\frac{\sin^2 x}{1+\cos x}\le \sin^2 x.\tag{2}$$
It is always true that $|\sin x|\le |x|$. It follows from (1) and (2) that in our interval,
$$|1-\cos x|\le x^2\le |x|.\tag{3}$$
Finally, we have the required estimate. Let $\delta=\min(1,\epsilon)$. Making sure that $\delta\le 1$ is just to make sure that we are working in the interval $|x|\le 1$, since our inequality was derived under the assumption $|x|\le 1$. It is protection against someone giving us a ridiculous $\epsilon$, such as $\epsilon=10$.
If $|x|\lt \delta$, then by Inequality (3) we have $|1-\cos x|\le |x|\lt \delta\le\epsilon$, so $|1-\cos x|\lt \epsilon$.
In fact we have the stronger inequality $|1-\cos x|\le x^2$. So we could have chosen $\delta$ to be the minimum of $1$ and $\sqrt{\epsilon}$.
Remark: An approach similar to the one in the answer may be intended. There are, however, issues, since the inequality was obtained by using "informal" geometric facts. For a fully formal approach, we should have a formal definition of $\cos x$, say via the power series. But then the result follows from general facts about power series. |
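The key estimate $|1-\cos x|\le x^2$ is easy to sanity-check numerically (Python, for illustration):

```python
import math

# check |1 - cos x| <= x^2 on a grid of points in [-1, 1]
for i in range(-1000, 1001):
    x = i / 1000.0
    assert abs(1 - math.cos(x)) <= x * x
```

In fact $1-\cos x = 2\sin^2(x/2)\le x^2/2$, so the check passes with plenty of margin.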
Does such a family of sets exist? | It's a well known theorem (of Sierpiński I believe) that the real line does not admit a nontrivial partition into countably many closed sets; see my answer to this question.
Now assume for a contradiction that the plane is the union of a (necessarily countable) collection of closed discs (of positive radius) with disjoint interiors, plus countably many single points.
Let $L$ be a line in the plane which does not pass through any of the countably many points where two of those closed discs touch. The intersection of each of the given discs with $L$ is either a closed interval or a single point. Thus $L$ is the union of a countable disjoint collection of closed intervals and singletons, contradicting the theorem mentioned above.
Remark. If $S$ is a family of closed discs of positive radius with disjoint interiors in $\mathbb R^2$, then $\mathbb R^2\setminus\bigcup S$ is an uncountable $G_\delta$ set, so it has cardinality $2^{\aleph_0}$. |
Proof of small part of Euclid's proof of pythagorean theorem | Remember the area is
$$ \tfrac{1}{2}\,\text{base} \cdot \text{height} $$ for all triangles, with the vertex sliding parallel to the base, and
$$ \text{base} \cdot \text{height} $$ for all parallelograms, with the parallel side sliding similarly.
"Parallel" means that the height remains the same. |
The spreading game and its expansion | Per OP's request, I am writing a solution for the game on an $m\times n$ board under the teacher's rule (the government and the virus can occupy anywhere on the board). When $m\equiv n\pmod{2}$, the government has a winning strategy that guarantees that it occupies at least $\left(\frac{m+n}{2}\right)\min\{m,n\}$ unit squares. The virus has a strategy to infect at least $\left(\frac{|m-n|}{2}\right)\min\{m,n\}$ unit squares.
Wlog, suppose that $m>n$. The government's first move is to occupy the central $2\times 2$ square when $m$ and $n$ are even, or the central $1\times 1$ square when $m$ and $n$ are odd. Then the government copies the virus's moves symmetrically about the board's center. This is a winning strategy for the government.
However, the government can maximize its occupied territory by making its first move the occupation of the central $n\times n$ square. This leaves $(m-n)n$ unit squares to be dealt with, and the strategy of symmetric moves against the virus guarantees that the government can take at least half of the remaining unit squares. That is, the government can aim to occupy at least $$n^2+\frac{(m-n)n}{2}=\left(\frac{m+n}{2}\right)n$$ unit squares.
On the other hand, the virus plays the greedy algorithm. That is, in every turn, it infects a square of the largest possible size. If the first move by the government is to take over a $k\times k$ square (clearly, $k\le n$), then the virus can take at least half of the remaining area. Hence, the virus has a strategy to infect at least
$$\frac{mn-k^2}{2}\geq \frac{mn-n^2}{2}=\left(\frac{m-n}{2}\right)n$$
unit squares.
P.S. The situation where $m\not\equiv n\pmod{2}$ is a bit complicated. For $(m,n)=(2k,1)$, the virus wins (and each player occupies half of the board). But for $(m,n)=(3,2)$, the government wins and is able to occupy all but one unit square. I have no idea about this problem if the OP's rule (the players start from opposite corners of the board, and each can only play a square adjacent to previously played squares) is used. |
Eigenvalues of matrix satisfying squared equation | If the roots of the polynomial $\lambda^2-\frac{1}{2}\lambda-2$ are in the field where you are working so that
$$
\lambda^2-\frac{1}{2}\lambda-2=(\lambda-\lambda_1)(\lambda-\lambda_2),
$$
then
$$
(A-\lambda_1 I)(A-\lambda_2 I)X = 0.
$$
Assume $X\ne 0$. Either $AX=\lambda_2 X$ or $AY=\lambda_1 Y$ where $Y=(A-\lambda_2I)X\ne 0$. So, either $\lambda_1$ or $\lambda_2$ is an eigenvalue, assuming $\lambda_1$, $\lambda_2$ are in your field. |
If $X$ has Unif[0,2] distribution, then what is the density of $Y=X1_{(0,1)}(X)$ | If $X\ge 1$ then the indicator function becomees zero, so $Y=0$. That happens with probability one half. In the other case, $Y$ becomes uniform on $(0,1)$. So the distribution of $Y$ is a mixture of a point mass and a uniform distribution, with mixture weights $(1/2, 1/2)$.
A dominating measure for this is the sum of Lebesgue measure and a point mass at zero. |
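A quick simulation illustrating the mixture (Python sketch; seed and sample size are arbitrary):

```python
import random

random.seed(0)
n = 200000
ys = []
for _ in range(n):
    x = random.uniform(0, 2)
    ys.append(x if 0 < x < 1 else 0.0)   # Y = X * 1_(0,1)(X)

p_zero = sum(1 for y in ys if y == 0.0) / n   # point mass: should be near 1/2
mean_y = sum(ys) / n                          # E[Y] = (1/2)(1/2) = 1/4
```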
Exterior measure does not satisfy additivity. | Take the famous Vitali Set $\mathcal V$. Then, if you set $F=[0,1]\backslash \mathcal N$, you'll get $$m^*(F\cup\mathcal V)\neq m^*(\mathcal V)+m^*(F).$$
To prove it, I give you a hint: if $m^*(F)<1$, there is an open set $O\supseteq F$ s.t. $m^*(O)<1$. |
Archimedes's quadrature of a parabola: why point B (half the horizontal lenth across) | Let $F(x)=ax^2+bx+c$ and $G(x)=kx+m$. Their intersection points are given by,
$$ax^2+(b-k)x +c -m =0$$
Then, by Vieta's formulas, the $x$-coordinate of the midpoint of the two intersection points is
$$x_m = \frac{x_1+x_2}{2}= -\frac {b-k}{2a}$$
Thus,
$$F'(x_m)= 2a x_m + b = k =G'(x_m)$$ |
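A numerical spot-check of the tangent-parallel-to-chord fact (Python; the coefficients are arbitrary sample values, not from the question):

```python
# sample coefficients, chosen so that F and G really intersect twice
a, b, c = 2.0, -3.0, 1.0    # F(x) = a x^2 + b x + c
k, m = 1.0, 2.0             # G(x) = k x + m

disc = (b - k) ** 2 - 4 * a * (c - m)
assert disc > 0             # two real intersection points

x_m = -(b - k) / (2 * a)    # midpoint of the two intersection x-coordinates
slope = 2 * a * x_m + b     # F'(x_m)
# the tangent at the midpoint is parallel to the chord: slope == k
```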
$B \cong \operatorname{End}_{A^\text{op}}(N) \cong e\operatorname{End}_{A^\text{op}} (A^m)e$ | Let $M=A^m$ be decomposed as $M=N\oplus N'$ with $e=\iota_N\pi_N:M\to M,\ n+n'\mapsto n$ where $\iota_N:N\to M$ is the inclusion and $\pi_N:M\to N$ is the projection.
Then for an endomorphism $f:M\to M$, we get an endomorphism on $N$ by considering $f':=\pi_Nf\iota_N$.
Conversely, for any endomorphism $g:N\to N$ we can take $\tilde g:=\iota_Ng\pi_N:M\to M$.
Note that $e\tilde ge=\tilde g$.
I let you prove that these operations are inverses to each other when restricted to endomorphisms of $M$ in the form of $efe$. |
Is it possible to pull constant out of this summation | When you pull the $\alpha$ from the inner summation, I think you've misplaced your parenthesis:
$$ \sum_{m=1}^M \bigl(\alpha^{\omega_m} \sum_{i=1}^N x_i^{\omega_m}\bigr)^{1/\omega_m} = \alpha\sum_{m=1}^M \bigl(\sum_{i=1}^N x_i^{\omega_m}\bigr)^{1/\omega_m}\qquad(\alpha>0) $$ |
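A quick numerical check that pulling $\alpha$ out this way is an identity for $\alpha>0$ (Python, with arbitrary sample values):

```python
alpha = 2.5
omegas = [1.0, 2.0, 3.0]
xs = [0.5, 1.2, 2.0]

for w in omegas:
    inner = sum(x ** w for x in xs)
    lhs = (alpha ** w * inner) ** (1 / w)   # alpha kept inside the inner sum
    rhs = alpha * inner ** (1 / w)          # alpha pulled all the way out
    assert abs(lhs - rhs) < 1e-9 * rhs
```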
Equivalent optimization problems | Because $A$ is Hermitian, it has real spectrum. It is easy to see that $\max_{x^\ast x = 1} x^\ast A x = \lambda$. The maximum is attained for any unit norm $x$ vector from the eigenspace $V_\lambda$, corresponding to the eigenvalue $\lambda$.
Now, assuming $\lambda > 1$, let $y = \frac{1}{\sqrt{\lambda}} x$, then $y^\ast A y = 1$ and $\frac{1}{y^\ast y} = \lambda$. I hope you see the equivalence now, and what goes wrong if $A$ has no positive eigenvalue. |
Show $f$ is integrable in each subinterval, $[x_{i-1}, x_i]$, and further, $\int_a^bf\,dx=\sum\limits_{k=1}^{n}\int\limits_{x_{k-1}}^{x_k}f\,dx$. | I'm assuming Q2 is asking how to prove
$$\sum_{i=1}^n \int_{x_{i-1}}^{x_i} f\, dx = \int_{a}^{b} f\, dx$$
Q1 Using indicator functions (like here or here):
$1_{[x_{i-1}, x_i]}$ is integrable on $[a,b]$ (I guess because $1$ is integrable on $[x_{i-1}, x_i]$).
Finite products of functions that are integrable on $[a,b]$ are integrable on $[a,b]$ (Not sure if we are allowed to use this).
Hence, $f1_{[x_{i-1}, x_i]}$ is integrable on $[a,b]$
$\to f1_{[x_{i-1}, x_i]}$ is integrable on $[x_{i-1}, x_i] (*)$
Now, on $[x_{i-1}, x_i], f1_{[x_{i-1}, x_i]} = f$ so $f$ is integrable on $[x_{i-1}, x_i]$.
Q2 Denote such integral $\int_{x_{i-1}}^{x_i} f dx$
Now observe that
$$\sum_{i=1}^n \int_{x_{i-1}}^{x_i} f dx \stackrel{(**)}{=} \int_{x_0}^{x_n} f dx = \int_{a}^{b} f dx$$
Q1 Alternatively, we could directly use Cauchy Criterion (see p.8 of UCDavis - The Riemann Integral):
$f$ is integrable on $[a,b]$ if
$$\forall \varepsilon > 0, \exists P \ \text{s.t.} \ U(f;P) - L(f;P) < \varepsilon$$
where $P = \{x_0, \cdots, x_n\}$ is a partition of $[x_0,x_n] = [a,b]$
We are given that
$f$ is integrable on $[a,b]$
$\to f$ is bounded on $[a,b]$
$\to f$ is bounded on $[x_{i-1},x_i]$ (obvious but knowing your prof, I'd probably justify this)
Now since $f$ is bounded, by CC, $f$ is integrable on $[x_{i-1},x_i]$ if
$$\forall \varepsilon > 0, \exists Q \ \text{s.t.} \ U(f;Q) - L(f;Q) < \varepsilon$$
where $Q = \{y_0, \cdots, y_m\}$ is a partition of $[y_0,y_m] = [x_{i-1},x_i]$
Now we can do one of two things:
Prove that if $f$ is integrable on a closed interval $[a,b]$, then $f$ is integrable on any closed subinterval $[c,d]$ (see p.10 in TAMU Lecture 19), and then choose the closed subinterval to be $[x_{i-1},x_i]$.
Pf: Let $P' = P \cup \{c,d\}$
Then $$U(f,P') - L(f,P') \le U(f,P) - L(f,P) < \varepsilon$$ since $P'$ is finer than $P$ (the needed justification: refining a partition cannot increase the upper sum or decrease the lower sum, so it cannot increase their difference).
Now let $Q = P \cap [c,d]$
Then $$U(f,Q) - L(f,Q) \le U(f,P') - L(f,P') < \varepsilon$$
QED
So if you've already proven such a fact in class, just apply to it $[x_{i-1},x_i]$. Otherwise, prove it as given remembering to prove the 'further justification needed' part.
Follow the proof of 1 to prove that if $f$ is integrable on a closed interval $[a,b]$, then $f$ is integrable on any closed subinterval whose endpoints are adjacent elements in some (ordered?) partition $P$ of $[a,b]$.
Pf:
Here
$$P' = P \cup \{c,d\} = P \cup \{x_{i-1},x_i\} = P$$
so
$$U(f,P') - L(f,P') \color{red}{=} U(f,P) - L(f,P) < \varepsilon$$
$Q$ consists of only two elements:
$$Q = P' \cap [c,d] = P \cap [x_{i-1}, x_i] = \{x_{i-1}, x_i\}$$
so $U(f,Q)$ and $L(f,Q)$ each consist of only one term:
$$U(f,Q) - L(f,Q) := M_i(x_i - x_{i-1}) - m_i(x_i - x_{i-1}) = (M_i - m_i)(x_i - x_{i-1})$$
$$ \le \sum_{k=1}^{n} (M_k - m_k)(x_k - x_{k-1}) := U(f,P') - L(f,P')$$
where the last inequality follows because $M_k \ge m_k$ and $x_k \ge x_{k-1}$.
QED
$(*),(**)$ I'm not sure if we're allowed to do this. I think I made use of p. 11 in TAMU Lecture 19 whose first line of proof relies on p.10 which we are trying to prove. I guess it depends on the textbook. Does the text in your book prove p.11 in TAMU Lecture 19 without making use of what we're trying to prove? |
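The additivity in Q2 can also be illustrated numerically with Riemann sums (a Python sketch using $f(x)=x^2$ on $[0,2]$ as an example):

```python
def riemann(f, a, b, n=10000):
    # midpoint Riemann sum of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x * x
whole = riemann(f, 0.0, 2.0)                          # integral over [a, b]
parts = riemann(f, 0.0, 1.0) + riemann(f, 1.0, 2.0)   # sum over subintervals
# both approximate the exact value 8/3
```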
Generalised derivative and derivative of functions of bounded variation | Actually, this problem doesn't depend on knowing much about $f$. If $f$, $g$ are locally integrable on $\mathbb{R}$, and if
$$
\int f\varphi'\,dx = -\int g\varphi\,dx,\;\;\; \varphi\in C_{0}^{\infty}(\mathbb{R})
$$
then $f$ is equal a.e. to a continuous function $\tilde{f}$, and $\tilde{f}$ is absolutely continuous with $\tilde{f}'=g$ a.e..
To see this, choose $\varphi'$ to converge to $\frac{1}{\epsilon}\chi_{[a-\epsilon,a]}-\frac{1}{\delta}\chi_{[b,b+\delta]}$ for $0 < \epsilon,\delta$ and $a \le b$, and $\varphi$ to converge to the integral of this function (which has $0$ total integral.) Then you get
$$
\frac{1}{\epsilon}\int_{a-\epsilon}^{a}f\,dx-\frac{1}{\delta}\int_{b}^{b+\delta}f\,dx =-\int_{a-\epsilon}^{b+\delta}g\left[\int_{a-\epsilon}^{x}\frac{1}{\epsilon}\chi_{[a-\epsilon,a]}-\frac{1}{\delta}\chi_{[b,b+\delta]}\,dt\right]dx.
$$
The inner integral on the right converges to $\chi_{[a,b]}$ as $\epsilon\downarrow 0$ and $\delta\downarrow 0$, and it remains uniformly bounded by $1$ in the process. So the limit of the expression on the right exists as $\epsilon \downarrow 0$, $\delta\downarrow 0$, whether one at a time, or together. That means that the limits on the left also exist at every $a$, $b$. We know that the limits on the left are left- and right-hand derivatives of $\int_{0}^{x}f\,dt$, and by the Lebesgue differentiation theorem, those limits are equal a.e. to $f$. So, we have the a.e. equality
$$
f(a)-f(b) = -\int_{a}^{b}g\,dx.
$$
The function on the right is continuous in $a$, $b$, which means that $f$ is equal a.e. to a continuous function $\tilde{f}$, and $\tilde{f}$ is absolutely continuous with $\tilde{f}'=g$ a.e..
Mollifier: You are concerned that the weak relation
$$
\int_{\mathbb{R}}f'\varphi d\mu = -\int_{\mathbb{R}}f\varphi' d\mu,\;\; \varphi\in\mathcal{C}^{\infty}_{0}(\mathbb{R})
$$
cannot necessarily be strengthened to allow $\varphi$ to be a compactly supported absolutely continuous function instead. You can prove that such a thing can be done by finding $\eta \in \mathcal{C}^{\infty}_{0}(\mathbb{R})$ which is
non-negative, non-vanishing and constant in a neighborhood of $x=0$,
symmetric about $x=0$,
supported in $[-1,1]$,
bounded between $0$ and $\eta(0)$, and
normalized so that $\int \eta d\mu =1$.
Then one defines
$$
\eta_{n}(x) = n\eta(nx)
$$
so that $\int_{\mathbb{R}}\eta_{n}\,d\mu =1$ for all $n$. For any compactly supported absolutely continuous function $\varphi$, define
$$
\varphi_{n} = \int_{\mathbb{R}}\eta_{n}(x-y)\varphi(y)\,d\mu(y).
$$
The function $\varphi_{n}$ is in $\mathcal{C}^{\infty}_{0}(\mathbb{R})$. Because $\eta_{n}$ is supported in $[-1/n,1/n]$, and $\varphi$ is continuous, then $\varphi_{n}$ converges uniformly to $\varphi$ as $n\rightarrow\infty$. And, because $\varphi$ is absolutely continuous, then
$$
\varphi_{n}' = \int_{\mathbb{R}}\eta_{n}'(x-y)\varphi(y)\,d\mu(y)=
-\int_{\mathbb{R}}\eta_{n}(x-y)\varphi'(y)\,d\mu(y).
$$
In the case at hand $\varphi'$ is piecewise continuous, and the right side then converges pointwise everywhere to the mean of the left- and right-hand limits of $\varphi'$, and it remains uniformly bounded by any bound for $\varphi'$. Thus, your weak equation
$$
\int f'\varphi_{n}d\mu = -\int f\varphi_{n}'d\mu
$$
becomes the following in the limit
$$
\int f'\varphi d\mu = -\int f \varphi' d\mu.
$$
For a general absolutely continuous $\varphi$, you can show that $\varphi_{n}$ converges uniformly to $\varphi$ and $\varphi_{n}'$ converges in $L^{1}(\mathbb{R})$ to $\varphi'$.
Mollifiers are very useful for extending integral equations for $\mathcal{C}^{\infty}_{0}$ functions to more general functions. If $\varphi$ has $k$ continuous derivatives that are compactly supported, then $\varphi_{n}$ and all $k$ derivatives converge uniformly to the corresponding derivatives of $\varphi$. |
Sequences, Bolzano Weierstrass theorem | There are a variety of methods for proving the Bolzano-Weierstrass theorem.
One (such as the one given on wikipedia) relies on extracting a monotone subsequence.
Another, like the one you are reading I suspect, relies on the nested interval theorem: Given any nested sequence of closed bounded intervals $I_0 \supset I_1 \supset I_2 \supset \cdots$ there exists a point in the intersection $\cap_{k=0}^\infty I_k$.
A sketch of the rest of the proof: take a point (by the nested interval theorem) in the intersection of all the intervals you're considering. Then show that this point is in fact the limit of the subsequence, since the length of the intervals is going to zero.
I'll give some more details. The proof is as follows:
Let $M$ be an upper bound on the absolute values of the terms in your sequence. Let $I_0 = [-M,M]$. Let $a_0$ be the first term in your sequence. Note that infinitely many terms in the sequence are in $I_0$ (in fact they all are).
We inductively construct a subsequence. Given $I_k$ such that infinitely many terms in the sequence are in $I_k$, let $I_{k+1}$ be the closed left half of $I_k$ if infinitely many terms of the sequence lie in that closed left half. Otherwise, let $I_{k+1}$ be the closed right half of $I_k$ (which then necessarily contains infinitely many terms). Let $a_{k+1}$ be the first term of the sequence that lies in $I_{k+1}$ and comes after all terms already chosen; this is possible because $I_{k+1}$ contains infinitely many terms.
Lemma: given $x \in I_k$, we have, for $j \geq k$, that $|x - a_j| \leq \frac{2M}{2^k}$ (the length of $I_k$; both $x$ and $a_j$ lie in $I_k$).
Let $y \in \cap_{k=0}^\infty I_k$, which exists by the nested interval theorem.
For $j \geq k$, we have $|y - a_j| \leq \frac{2M}{2^k}$ by the lemma, since $y \in I_k$. Thus the subsequence we have constructed converges to $y$.
One proof of the nested interval theorem: the left endpoints of the intervals form a monotone non-decreasing sequence, bounded above by every right endpoint, so it converges; the limit is in every one of the intervals.
While it's a slightly longer proof than the monotone subsequence proof, I find it highly intuitive, and the nice thing about this proof is that it generalizes well to higher dimensions. |
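The halving construction can be turned into code on a finite prefix of a sequence; a sketch (Python; "infinitely many terms in a half" is approximated by "more of the remaining terms"):

```python
import math

def bw_indices(seq, M, steps=10):
    """Indices of a convergent subsequence of a bounded sequence.

    Finite-prefix stand-in for the proof: at each step keep the half
    interval containing more of the remaining terms, then pick the
    earliest unused term inside it.
    """
    lo, hi = -M, M
    last = -1
    indices = []
    for _ in range(steps):
        mid = (lo + hi) / 2
        left = [i for i in range(last + 1, len(seq)) if lo <= seq[i] <= mid]
        right = [i for i in range(last + 1, len(seq)) if mid <= seq[i] <= hi]
        if len(left) >= len(right):
            hi, chosen = mid, left
        else:
            lo, chosen = mid, right
        last = chosen[0]
        indices.append(last)
    return indices

seq = [math.sin(n) for n in range(50000)]   # bounded, non-convergent
idx = bw_indices(seq, 1.0)
vals = [seq[i] for i in idx]
# indices increase, and from step k on all chosen terms lie
# in an interval of length 2*M / 2**(k+1)
```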
If the differential equation $\frac{d^2x}{dt^2} = f\left(x, \frac{dx}{dt}\right)$ has no constant solutions, it has no periodic solutions either | A periodic solution $x(t)$ results in a closed curve $(x(t),\dot x(t))$ in the phase space. The index of the vector field along that curve, its winding number, is $\pm1$. This is an invariant under homotopic deformations of the curve, as long as the homotopy remains in the region where the vector field is non-zero. Now contract the periodic solution to a point inside its enclosed region. The winding number along a point is always zero, so the contraction has to cross a zero of the vector field.
As another possibility, consider the vector field obtained by rotating the original one by $90^\circ$, so that it points inward along the periodic cycle. Then the solution of the rotated vector field starting on the periodic solution stays inside forever, so it either converges to another limit cycle or to a zero of the vector field. In the first case, we have the same situation as at the start, only for a smaller region. |
Independence random variables | The statement $(a)$ is true.
$X,Y$ are independent r.v.s if, and only if, the events $ \{ X \le a \} $ and $ \{ Y \le b \} $ are independent for all $a,b$.
Once the images are countable you can write
$$ \{ X \le a \} = \bigcup_{d \le a}\{X=d\} $$
$$ \{ Y \le b \} = \bigcup_{c \le b}\{Y=c\} $$
Now, note that each union is a disjoint union. So,
$$ P(\bigcup_{c \le b}\{Y=c\}, X =d) = P( \bigcup_{c \le b}(\{Y=c\} \cap \{ X =d \})) $$
Using that the union is disjoint together with the hypothesis, you get that $ \{X = d\}$ and $\{ Y \le b \}$ are independent. Repeat the same argument for $\bigcup_{d \le a}\{X=d\} \cap \{ Y\le b\}$ and, using what you have above, you get the result. |
Determining which $\arg\max$ is larger | If I understand your statement correctly, the conjecture is true but in the opposite direction (?)
I assume here that in the integral the exponential factor is $e^{-rx}$ and not $e^{-rt}$; otherwise the problem may be reduced to one without the last term. We assume everything is differentiable, and we want to study the monotonicity of the arg max of:
$$ f_r(t) = e^{-rt} B(t) + \int_0^t e^{-rx} A(x) dx = B_r(t) + \int_0^t A_r(x)dx$$
where we have introduced $B_r(t) = e^{-rt} B(t)$ and $A_r(x)=e^{-rx}A(x)$.
Consider what happens when replacing $r$ by $r+h$. We have:
$$ f_{r+h}(t) = e^{-ht} B_r(t) + \int_0^t e^{-hx} A_r(x) dx$$
which has as derivative:
$$ f_{r+h}'(t) = e^{-ht} (B_r'(t) - h B_r(t) + A_r(t))= e^{-ht} (f_r'(t)-h B_r(t))$$
Suppose for a given $r>0$ that $f_r$ has a nice global (concave) maximum at $t=t^*_r$, so that in particular
$f_r'(t^*_r) =0$ and $f_r''(t^*_r) < 0$.
Setting $t=t_r^*+u$ we get by Taylor expansion in $t$ of $f'_{r+h}(t)$ (noting that $f'_r(t^*_r)=0$):
$$ f_{r+h}'(t_r^* + u) = e^{-h(t_r^*+u)} ( f_r''(t^*_r) u - h B_r(t^*_r) + o(h,u)).$$
Here $f''<0$ and $B_r>0$ so
we find that there is an extremum when $u = \lambda h + o(h)$ where:
$ \lambda = B_r(t^*_r)/f''_r(t^*_r) < 0 $
so under the above assumptions, $t^*_r$ is decreasing with $r$ and we have the formula for its derivative:
$$ \frac{d t^*_r}{dr} = \frac{B_r(t^*_r)}{f''_r(t^*_r)}<0$$
The above gives the behaviour of a local max under the concavity assumption.
To get a global result note that the derivative w.r.t. $r$ is given by
$$ \frac{\partial f_r}{\partial r}(t) = - t B_r(t) - \int_0^t x A_r(x)\, dx .$$
Thus at two values $0\leq t_1 < t_2<+\infty$ we have (writing e.g. $B_1=B_r(t_1)$,..):
$$ \frac{\partial f_r}{\partial r}(t_1) - \frac{\partial f_r}{\partial r}(t_2) =
t_2 B_2 - t_1 B_1 + \int_{t_1}^{t_2} x A_r(x)dx =
t_2 B_2 - t_1 B_1 + \xi \int_{t_1}^{t_2} A_r(x) dx$$
for some $\xi \in (t_1,t_2)$ (by the MVT).
Now, since $f_r(t)=B_r(t)+ \int_0^t A_r(x)dx$ we get:
$$ \frac{\partial f_r}{\partial r}(t_1) - \frac{\partial f_r}{\partial r}(t_2) =
(t_2-\xi)B_2 + (\xi-t_1)B_1 + \xi (f_r(t_2)-f_r(t_1)) $$
In particular as $(t_2-\xi)B_2 + (\xi-t_1)B_1 >0$ we may conclude that if
$f_r(t_2)\leq f_r(t_1)$ then for every $r'>r$ we must have:
$f_{r'}(t_2) < f_{r'}(t_1)$. This implies that the arg max (whatever choice you make among possible max values) is always a decreasing function of $r$. |
Covariance with transformations of random variables | $YZ = 0 \implies \mathbb{E}(YZ)= 0$.
$\operatorname{cov}(Y,Z) = \mathbb{E}(YZ) - \mathbb{E}(Y) \mathbb{E}(Z) = \,...?$ |
Array Indexing Notation Trouble | You'd find out how this works by running your code, but let's run through the rules anyway.
array[a][b] means (array[a])[b], i.e. array lists the individual array[a]s. So depending on the value of $a$, array[a] is either $\{0,\,2\}$, $\{1,\,3\}$ or undefined, whence e.g. array[1][0] means {1, 3}[0], i.e. $1$.
My guess is that any implementation of $x_i^{(j)}$ you'll use from a library, or are expected to write with this book's guidance, treats $x$ as a list of the $x^{(j)}$s, so you'd need x[j][i]. But check with an example when you get there. Similarly, $x_{i,\,j}^{(k)}$ would be x[k][i][j].
Having said all that, arrays may not be the right approach anyway, if you want very efficient calculations. I'm not an expert on the Java implications (but see here), so I'll talk about more general issues.
In practice, machine learning often relies on a data type other than standard arrays, so we can do calculations faster. The language used for machine learning may therefore be determined by the availability of suitable types. Python is slower than Java ceteris paribus due to being an interpreted language, but is popular in machine learning because of "numpy arrays", which are the basis of scipy, scikit, tensorflow etc. Not that you need Python to take advantage of such techniques: Java has equivalents.
If you ever make use of such software, there are indexing complications. You'll be allowed to rewrite x[j][i] as x[j, i] and x[k][i][j] as x[k][i, j], and I expect you'd be allowed to use x[k, i, j] too. But more importantly, the most efficient way to do operations such as matrix multiplication wouldn't be the usual sum-over-a-loop syntax.
Response to a comment, which is too long to be a comment
As for the case $x_{l,\,u}^{(j)}$, see Chapter 6 for enlightenment on that.
In Sec 6.1's inline equation $a_{l,\,u}=\mathbf{w}_{l,\,u}\mathbf{z}+b_{l,\,u}$, we can rewrite the first term as $\sum_kw_{l,\,u,\,k}z_k$ or $\sum_kw_{k,\,l,\,u}z_k$, so we expect index lists containing l, u to place them together. But do we sum over a leftmost or rightmost $k$ index? Comparing two inline expressions in Sec. 6.2.2 answers that. The $t$th item in $\mathbf{X}$, with indexing starting at $1$ instead of $0$, is denoted $\mathbf{x}^t$, and later we see $h_{l,\,u}^t$. It seems we want to place l, u last, as per your first guess and my answer.
In the notation $x_{l,\,u}^{(j)}$, this is the usual Western down-and-right reading order. To relate this to the phrase "input feature $j$ of unit $u$ in layer $l$", I can only recommend watching the prepositions: in layer $l$ there is a unit $u$ which has a feature $j$, so l, u, j is a more natural ordering than your alternative suggestion of l, j, u, but for efficient dot-product calculations it's been changed by a cyclic permutation to j, l, u. |
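The basic indexing rule from the start of this answer can be checked directly (Python used here for illustration; the rule is the same in Java and other C-family languages):

```python
array = [[0, 2], [1, 3]]
assert array[1] == [1, 3]   # array[a] is itself a list
assert array[1][0] == 1     # (array[a])[b]

# the x[j][i] layout discussed above: x is a list of examples x^(j),
# each of which is a list of features indexed by i
x = [[10, 11], [20, 21], [30, 31]]
assert x[2][1] == 31        # feature 1 of example 2
```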
Understanding length of schemes. | Presumably you've left off the condition that $Z$ should be a closed subscheme. As closed subschemes of noetherian schemes are noetherian, $Z$ is a zero-dimensional noetherian scheme which must be finite discrete (see here, for example - or it is not so hard to prove yourself). This means it's actually affine, as it's a finite coproduct of affine schemes. In particular, each point is open, so the stalk at a point is just the value of the sheaf on the open which consists of just that point, and picking a global section is equivalent to specifying the value in the stalk at each point. |
Find a plane that goes through three given points | You can always write $z$ in terms of the other values. Maybe it's easier to explain in 2D. Assume that you have the equation $ax+by+c=0$. This is the equation of a line. You can still write $y$ in terms of $x$ as $$y=-\frac ab x-\frac cb$$ The only requirement is that $b\ne 0$. Your 3D problem is equivalent, you just add one more parameter. The equation of your plane will become $$z=\alpha x+\beta y+\gamma$$
If you plug in the three triplets in the above equation, you get a system of three linear equations with three unknowns ($\alpha,\ \beta,\ \gamma$). |
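A minimal sketch of that linear solve (Python, via Cramer's rule on the $3\times 3$ system; the three sample points are arbitrary):

```python
def plane_through(p1, p2, p3):
    """Coefficients (alpha, beta, gamma) of z = alpha*x + beta*y + gamma
    through three points, via Cramer's rule on a 3x3 system."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    pts = [p1, p2, p3]
    A = [[x, y, 1.0] for (x, y, z) in pts]
    b = [z for (x, y, z) in pts]
    d = det3(A)   # zero iff the (x, y) projections of the points are collinear
    sols = []
    for j in range(3):
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = b[i]
        sols.append(det3(Aj) / d)
    return tuple(sols)

alpha, beta, gamma = plane_through((1, 0, 1), (0, 1, 2), (0, 0, 3))
# for these sample points the plane is z = -2x - y + 3
```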
How do you turn an irrational, non-transcendental number, like 1.618... back to its form of (a + sqrt(b))/c. | A quadratic irrational will have an eventually periodic continued fraction. |
$\lim_nP(X_n \in [x_n,u_n])=P(X \leq u)?$ | Let $F_X$ be the cumulative density function of $X$. $F_X$ is monotonically increasing (in the sense of "non-decreasing"), and $\lim_{x\to-\infty} F_X(x)=0$.
Let $\varepsilon > 0$. Then there exists a $B\in\Bbb R$ such that $F_X(x)<\frac\varepsilon2$ for all $x\le B$. By definition of convergence in distribution, we have $$\lim_{n\to\infty} F_{X_n}(x)=F_X(x) \text{ for all continuity points } x \text{ of }F_X.$$
Since $F_X$ is monotonic, it can have at most countably many points of discontinuity. Let $x\le B$ be a continuity point of $F_X$.
We can choose an $N\in\Bbb N$ such that $ |F_{X_n}(x)-F_X(x)|<\frac\varepsilon2$ for all $n\geq N$.
Now we see that $|F_{X_n}(x)|\le |F_{X_n}(x)-F_X(x)|+|F_X(x)|<\frac\varepsilon2+\frac\varepsilon2=\varepsilon$ for all $n\geq N$. We can fix a $\tilde N \geq N$ such that $x_n \le x$ for all $n\geq \tilde N$ (since the $x_n$ tend to $-\infty$).
For $n\geq\tilde N$ we then have, since the $F_{X_n}$ are increasing, $|F_{X_n}(x_n)|\le|F_{X_n}(x)|<\varepsilon$. Thus, $$\lim_{n\to\infty}\Bbb P(X_n < x_n)\overset{\text{Def.}}=\lim_{n\to\infty} F_{X_n}(x_n)=0.$$ |
$X,Y \sim \exp(1)$ and $U \sim [0,1]$ are independent, then what is $U(X+Y)$? | Let $M_X$, $M_Y$ be the moment-generating functions of $X$ and $Y$, respectively.
Then
$$M_{X+Y}(t) = M_X(t)M_Y(t)=\left(\dfrac{1}{1-t}\right)^2$$
so $X+Y \sim \text{Gamma}(2, 1)$ - i.e.,
$$f_{X+Y}(v) = \dfrac{1}{\Gamma(2)1^{2}}v^{2-1}e^{-v/1}=ve^{-v}\text{, } v > 0\text{.}$$
Write $V = X + Y$ for now. We wish to find the density of $W = UV$. According to this, we have
$$f_{W}(w) = \int_{-\infty}^{\infty}f_{V}(v)f_{U}\left(\dfrac{w}{v} \right) \cdot \dfrac{1}{|v|}\text{ d}v\text{.}$$
Observe that $$f_{U}(u) = \begin{cases}
1, & u \in [0, 1] \\
0, & u \notin [0, 1]
\end{cases}$$
so
$$f_{U}\left(\dfrac{w}{v}\right)=\begin{cases}
1, & 0 < w/v < 1 \Longleftrightarrow 0 < w < v \\
0, & w/v \leq 0 \text{ or }w/v \geq 1 \Longleftrightarrow w \leq 0\text{ or } w\geq v\text{.}
\end{cases}$$
hence (note that the variable of integration is $v$)
$$f_{W}(w)=\int_{w}^{\infty}ve^{-v}\cdot 1 \cdot \dfrac{1}{v}\text{ d}v$$
and note that $\dfrac{1}{|v|}$ turns into $\dfrac{1}{v}$ because we have $v > 0$. At last,
$$f_{W}(w)=\int_{w}^{\infty}ve^{-v}\cdot 1 \cdot \dfrac{1}{v}\text{ d}v =\int_{w}^{\infty}e^{-v}\text{ d}v = e^{-w}$$
so $W \sim \text{Exp}(1)$. |
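A quick Monte Carlo sanity check (not part of the proof) agrees with $W = U(X+Y) \sim \text{Exp}(1)$: the sample mean and the tail probability $\mathbb{P}(W>1)$ match $1$ and $e^{-1}\approx 0.3679$.

```python
import random

random.seed(0)
N = 200_000
# W = U * (X + Y) with U uniform on [0,1], X, Y independent Exp(1)
samples = [random.random() * (random.expovariate(1.0) + random.expovariate(1.0))
           for _ in range(N)]

mean_w = sum(samples) / N                      # Exp(1) has mean 1
tail_w = sum(w > 1 for w in samples) / N       # P(W > 1) = e^{-1}
```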
Why do I get a big relative error for my function? (Numerical Analysis - floating point) | This happens because you are trying to divide two very small numbers, and the denominator has a lot more error in it. Let $x \to 0$, so then $\sin x \approx x$, and $\cos x \approx 1-x^2/2$, so
$$ f(x) \approx \frac{x^2}{(1-\frac{1}{2}x^2)^2-1} = \frac{x^2}{-x^2+\frac{1}{4}x^4}$$
When $x=10^{-8}$, $x^4$ and $x^2$ differ by a factor of $10^{16}$, which exceeds the precision of double floating point numbers, so $-x^2+\frac14 x^4$ evaluates to exactly $-x^2$, and the whole expression to $-1$.
You can evaluate this more robustly by using the Taylor series below a certain value of $|x|$:
double x2 = x*x;
if (fabs(x) < tol) {
    /* near zero: use the Taylor expansion to avoid cancellation */
    return -1 - (2./3)*x2;
} else {
    double sx = sin(x);
    double csx = cos(sx);
    return x2 / (csx*csx - 1);
}
You can determine the value of tol using the error bound for an alternating series, but it will be more stringent than necessary. Empirically, a value around $10^{-4}$ or $10^{-5}$ seems appropriate, by asking WolframAlpha to plot the difference between $f(x)$ and its truncated Taylor series.
In practice, such kinds of functions are numerically evaluated using Chebyshev polynomial approximations or Pade rational polynomial expansions which can achieve very low uniform error with on the order of 10th order polynomials. |
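For reference, here is the same guarded evaluation in Python (assuming, as the C snippet suggests, that the function in question is $f(x)=x^2/(\cos^2(\sin x)-1)$, which matches the Taylor coefficient $-2/3$ above):

```python
from math import sin, cos, fabs

def f_naive(x):
    # direct evaluation: catastrophic cancellation in the denominator near 0
    c = cos(sin(x))
    return x*x / (c*c - 1)

def f_robust(x, tol=1e-4):
    # switch to the Taylor expansion -1 - (2/3)x^2 below |x| = tol
    x2 = x*x
    if fabs(x) < tol:
        return -1 - (2.0/3.0)*x2
    c = cos(sin(x))
    return x2 / (c*c - 1)
```

Away from zero the two agree exactly (same code path); near zero only the robust version returns a meaningful value.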
What are good resources to learn about convergence spaces? | Some suggestions:
Basic Properties of Filter Convergence spaces
On the theory of convergence spaces
and references therein. I'm also quite interested to learn about new resources, especially ones with lots of (worked) examples to get a feel for the difference between, say, a pseudotopological and a pretopological convergence, etc.
The motivation for their study often seems category-theoretical (convergence spaces are Cartesian closed, e.g.). I would also like to see a good list of open problems, e.g.
This page offers a general overview of "topology-like" classes of spaces, and I once read parts of the book: "G. Preuss, "Topological structures—An approach to categorical topology" , Reidel (1988)", which mostly talks about category constructions (like initial and final structures), it seems appropriate in view of your tags. |
Functions from $ \kappa $ to $ 2 $ ordered lexicographically | Let $\beta\leq \kappa$ be least such that $\langle g_{\alpha}\restriction \beta:\alpha<\kappa^+\rangle$ is strictly increasing and has size $\kappa^+$. For each $\alpha<\kappa^+$ pick $\beta(\alpha)$ such that $g_{\alpha}\restriction \beta(\alpha)=g_{\alpha+1}\restriction \beta(\alpha)$ and $g_{\alpha}(\beta(\alpha))=0, g_{\alpha+1}(\beta(\alpha))=1$. For each $\alpha<\kappa^+$ we have $\beta(\alpha)<\beta$. Now to show there can't be any strictly increasing sequence of length $\kappa^+$, define $h:\kappa^+\to \beta$ by $h(\alpha)=\beta(\alpha)$ and notice that this function must be constant on a set of size $\kappa^+$: if not, then each $A_{\alpha}=\{\gamma<\kappa^+:h(\gamma)=\alpha\}$ has size at most $\kappa$, so $\kappa^+=\bigcup_{\alpha<\beta} A_{\alpha}$ is a union of at most $\kappa$ sets each of size at most $\kappa$, giving $\kappa^+\le\kappa$, a contradiction.
Taking out absolute value on the solution to integral equation | The rationale for determining the unique solution is embedded in the general solution
$$-\log |1-y|=\frac12(x^2-4)\tag 1$$
Note that to arrive at $(1)$, we already imposed the condition $y(2)=2$. Therefore, we are seeking solutions for which $y$ actually attains the value of $y=2>1$.
To resolve this further, let's solve $(1)$ for $y(x)$, disregarding the requisite condition. We find that
$$y(x)=
\begin{cases}
1+e^{-(x^2-4)/2},&y>1 \tag 2\\\\
1-e^{-(x^2-4)/2},&y<1
\end{cases}$$
is multi-valued. Upon inspection, we see that for $x=2$, only the first equation of $(2)$ satisfies either the condition $y(2)=2$ or the initial integral equation. That is, by assumption, the second solution must have values less than $1$ and can therefore, never attain the value $2$.
Therefore, the unique solution to the integral equation is
$$y(x)=1+e^{-(x^2-4)/2}$$
for which $y>1$.
Finally, we can write the equivalent solution as
$$-\log(y-1)=\frac12x^2-2$$
without need for the absolute value sign. |
$X_i = \mathcal{N}(0, \sigma_i^2)$, with $\sigma_1^2 \geq \sigma_2^2 \geq \dots \geq 0$ and $\sum_i \sigma^2_i = 1$ | Lemma 1 (Levy's Equivalence Theorem): Let $X_1,X_2,\cdots,X_n,\cdots$ be independent random variables and $S_n=\sum_{i=1}^{n}{X_i}$. Then
$$S_n\overset{a.s.}{\rightarrow}S\Leftrightarrow S_n\overset{\mathbb{P}}{\rightarrow}S\Leftrightarrow S_n\overset{d}{\rightarrow}S \ .$$
Lemma 2 : Let $\left( \Omega,\mathcal{F},\mu \right)$ be a finite measure space and $1\leq p<\infty$. Suppose $f$ and $\left\{ f_n \right\}$ are functions in $L^p\left( \Omega \right)$ and $f_n \rightarrow f$ almost everywhere or in measure. Then
$$f_n\overset{L^p}{\rightarrow}f \Leftrightarrow \left\| f_n \right\|_p \rightarrow \left\| f \right\|_p \ .$$
Now we can easily show that $S_n$ converges to a $\mathcal{N}(0,1)$ random variable $S$ in distribution by using characteristic function. Then by lemma 1, we have that $S_n\overset{a.s.}{\rightarrow}S$.
By lemma 2, if we want to prove that $S_n\overset{L^2}{\rightarrow}S$, we only need to prove that $$\lim_{n\rightarrow\infty}{\mathbb{E}\left( S_n^2 \right)}=\mathbb{E}\left( S^2 \right)=1\ .$$
This is immediate, since $\mathbb{E}\left( S_n^2 \right)=\sum_{i=1}^{n}{\sigma_i^2}\rightarrow 1$ by assumption.
Characterization of noetherian modules via short exact sequences (understanding a step in the proof) | Restrict $g$ to a map $g':M'\to N$. The kernel of $g$ is $M_0$
so by the First Isomorphism Theorem $g'$ sets up an isomorphism $M'/M_0\to g'(M')=g(M')$, a submodule of $N$. |
Prove that sum of polynomials is equal to 3 | Hint: $4bc-a^2 = -(b-c)^2$, $bc+2a^2=(b-a)(c-a)$, $\ldots$ |
Defining Category of Adjunctions | (note: I had misread the construction in the OP when writing this)
You can't get away from the size issues; an easy way to see definition 2 doesn't help is that you can extract from it the paradoxical 1-category simply by taking its 0 and 1-cells.
It is standard to define Adj on the same 'level' as Cat; e.g. given a cardinal $\kappa$, if you are working with the 2-category $\mathbf{Cat}_\kappa$ of $\kappa$-small categories, functors, and natural transformations, then you would normally take $\mathbf{Adj}_\kappa$ to be the 2-category of $\kappa$-small categories, adjunctions, and natural transformations. Or if you are working with Cat as a 1-category, then you do the same with Adj.
As Derek Elkins points out in comments, Cat isn't special here; given any 2-category $\mathcal{C}$ you can construct the 2-category of adjunctions in $\mathcal{C}$. |
Showing that $f(x)$ is increasing on $(0,+\infty)$ | You can use the Bernoulli inequality $(1+x)^\alpha \geq 1+\alpha x$ for real $x>-1$ and $\alpha \geq 1$. Then for $x>0$ and $\alpha \geq 1$
$$
\alpha x \log(1+\frac{1}{\alpha x}) = x\log(1+\frac{1}{\alpha x})^\alpha \geq x\log(1+\frac{1}{x}).
$$
Edit: See E.Lim's comment above. The use of Bernoulli's inequality could be questionable. |
Is a tensor a multilinear map to the underlying field? | A $(k, l)$ tensor on a vector space $V$ over the field $\mathbb{F}$ is a multilinear map $(V^*)^k\times V^l \to \mathbb{F}$.
If $L(V_1, V_2)$ denotes the vector space of linear maps $V_1 \to V_2$, note that there is an isomorphism $L((V^*)^k\times V^l, (V^*)^a\times V^b) \cong L((V^*)^{k+b}\times V^{l+a}, \mathbb{F})$ and hence any linear map $(V^*)^k\times V^l \to (V^*)^a\times V^b$ can be regarded as a $(k + b, l + a)$ tensor.
Example: If $\mathbb{F} = \mathbb{R}$ and $V = \mathbb{R}^3$, then the cross product defines a multilinear map $T : V\times V \to V$ given by $(v_1, v_2) \mapsto v_1\times v_2$. This can be viewed as a multilinear map $\widetilde{T} : V^*\times V\times V \to \mathbb{R}$ given by $(\varphi, v_1, v_2) \mapsto \varphi(v_1\times v_2)$. That is, we can view the cross product as a $(1, 2)$ tensor on $\mathbb{R}^3$. |
What will be the difference of the roots of this given equation? | $$x^2-10x-45=y$$ then
$$\frac{1}{y+16}+\frac{1}{y}=\frac{2}{y-24}$$
$$(2y+16)(y-24)=(y+16)2y \rightarrow y=-6 $$
Can you finish? |
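Finishing numerically (a check, not the intended pen-and-paper ending): $y=-6$ gives $x^2-10x-39=0$, whose roots differ by $16$.

```python
from math import sqrt

# y = x^2 - 10x - 45 = -6  =>  x^2 - 10x - 39 = 0
a, b, c = 1, -10, -39
disc = b*b - 4*a*c
r1 = (-b + sqrt(disc)) / (2*a)
r2 = (-b - sqrt(disc)) / (2*a)
diff = abs(r1 - r2)

def lhs_minus_rhs(x):
    # the original equation, rewritten via y = x^2 - 10x - 45
    y = x*x - 10*x - 45
    return 1/(y + 16) + 1/y - 2/(y - 24)
```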
Symplectic form and wedge sum | There's a reason $\dfrac{\omega^n}{n!}$ shows up all over complex geometry as the induced volume form. You are correct and there is an error in whatever you're reading. |
inverse of $y=\frac{5x-3}{2x+1}$ | $\displaystyle y = \frac{5x-3}{2x+1}$
$\displaystyle (2x+1)y=5x-3$
$\displaystyle 2xy+y=5x-3$
$\displaystyle x(2y-5)=-y-3$
$\displaystyle x(5-2y)=y+3$
$\displaystyle \boxed{x=\frac{y+3}{5-2y}}$
Hope this helped. Have a nice day :D
Both answers are correct. Multiply by $\displaystyle \frac{-1}{-1}$ to get from one answer to the other. |
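A round-trip check of both forms:

```python
def f(x):
    return (5*x - 3) / (2*x + 1)

def f_inv(y):
    return (y + 3) / (5 - 2*y)

def f_inv_alt(y):
    # the other correct form, differing by a factor (-1)/(-1)
    return (-y - 3) / (2*y - 5)
```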
Showing that the first hitting time of a closed set is a stopping time. | Since $G_i \supseteq B$, we have $\tau_i \leq t$ and therefore, as you already noted,
$$\sup_{i \in \mathbb{N}} \tau_i \leq \tau.$$
It remains to show that
$$\tau \leq \sup_{i \in \mathbb{N}} \tau_i.$$
Fix $\omega \in \Omega$. Without loss of generality, we may assume that the right-hand side is finite (otherwise there is nothing to prove), i.e.
$$T(\omega):= \sup_{i \in \mathbb{N}} \tau_i(\omega) <\infty.$$
Since $\tau_1 \leq \tau_2 \leq \ldots$, it follows from the very definition of "sup" that $\lim_{i \to \infty} \tau_i(\omega) = T(\omega)$. Hence, as $(X_t)_{t \geq 0}$ has continuous sample paths,
$$X_{T(\omega)}(\omega) = \lim_{i \to \infty} X_{\tau_i(\omega)}(\omega).$$
Since
$$|X_{T(\omega)}(\omega)-b| \leq |X_{T(\omega)}(\omega)-X_{\tau_i(\omega)}(\omega)| + |X_{\tau_i(\omega)}(\omega)-b|$$
for any $b \in B$, we find
$$d(X_{T(\omega)}(\omega),B) \leq |X_{T(\omega)}(\omega)-X_{\tau_i(\omega)}(\omega)| + \underbrace{d(X_{\tau_i(\omega)}(\omega),B)}_{\leq i^{-1}} \xrightarrow[]{i \to \infty} 0,$$
i.e.
$$d(X_{T(\omega)}(\omega),B)=0.$$
As $B$ is closed, this entails $X_{T(\omega)}(\omega) \in B$. By the very definition of $\tau$, this means that $\tau(\omega) \leq T(\omega)$.
As the proof shows we just need left-continuity of the sample paths and not necessarily continuity. |
Proving that dual space of $L^\infty(\mathbb{R})$ is not separable | If the dual of $L^{\infty}$ were separable, then $L^{\infty}$ would be separable as well.
The fact that if $X^*$ is separable, also $X$ is follows from the Hahn-Banach theorem. Here is a proof of this fact. |
How to calculate the number of tie combinations? | You are looking for the number of ways to pick the locations of heads, which is$${A \choose A/2}=\frac {A!}{[(A/2)!]^2}$$ This is the central binomial coefficient |
How can this sum from 1 to infinity be equal to an integral from 0 to 2? | The sum is a Riemann sum for that integral, over the uniform partition $[0,2/n,4/n,\ldots,2]$ of $[0,2]$. Thus, in the limit it becomes the integral by definition of the Riemann integral.
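Concretely, with an illustrative integrand (the original $f$ isn't shown here), the right-endpoint sum $\sum_{k=1}^{n} f(2k/n)\cdot\frac{2}{n}$ converges to $\int_0^2 f(x)\,dx$:

```python
def riemann_sum(f, n):
    # right-endpoint Riemann sum over the uniform partition of [0, 2]
    return sum(f(2*k/n) for k in range(1, n + 1)) * (2/n)

approx = riemann_sum(lambda x: x*x, 100_000)   # integral of x^2 over [0,2] is 8/3
```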
Partial Converse to "Pushout of a cofibration is a cofibration" | In full generality, it's not true, even if $E_1 \to E_2$ is an inclusion. Take $E_1 \to E_2$ to be your favorite inclusion that's not a cofibration and let $B_1 = B_2 = *$. Then $E_i \to B_i= *$, $i = 1, 2$, are fibrations (every space is fibrant), the identity $B_1 \to B_2$ is a cofibration, and the diagram commutes, but $E_1 \to E_2$ is not a cofibration by construction.
However, if $E_1$ and $E_2$ are CW complexes, then it's probably true, since every (cellular) inclusion of CW complexes is a cofibration.
EDIT: The example I gave above ignores the "pushout" condition. If we assume (per OP's comment) that the square is a pullback, then the answer seems to be affirmative. See theorem 14.1 in Strom's Modern Classical Homotopy Theory. |
Is it possible for a triple integral to give a negative result? | For the sake of giving you some intuition, let $V=[0,1]^3$ and consider $$\iiint \limits_V-1\,dxdydz$$
The answer: yes, it is possible. |
In how many ways a student can get $2m $ Marks | Let us consider the marks in all the $4$ papers as $a,b,c,d$; $0\le a,b,c,d\le m$
$$a+b+c+d=2m$$
Now we need to find the integer solution of the above equation in the given condition.
Number of solutions $=\dbinom{2m+4-1}{4-1}=\dbinom{2m+3}{3}$, but note that this also counts those solutions in which some variable is greater than $m$.
Hence we need to subtract those solutions with $a\ge m+1$: let $t=a-(m+1),\ t\ge0$
$a+b+c+d=2m\implies t+b+c+d=m-1$.$$\dbinom{m-1+4-1}{4-1}=\dbinom{m+2}{3}$$and for $b\ge m+1,c\ge m+1, d\ge m+1$. Finally, $\dbinom{2m+3}{3}-4\times \dbinom{m+2}{3}$ |
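The inclusion–exclusion count can be checked by brute force for small $m$ (note that two variables can never simultaneously exceed $m$, since $2(m+1)>2m$, so one subtraction per variable suffices):

```python
from itertools import product
from math import comb

def count_formula(m):
    # C(2m+3, 3) - 4*C(m+2, 3), as derived above
    return comb(2*m + 3, 3) - 4*comb(m + 2, 3)

def count_brute(m):
    # marks a, b, c, d each in 0..m, summing to 2m
    return sum(1 for marks in product(range(m + 1), repeat=4)
               if sum(marks) == 2*m)
```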
A non-denumerable set has non-denumerably many cluster points? | Let Y be the set of limit points of X.
Since X-Y does not contain any of its limit points, X-Y is discrete.
We need that all discrete subsets are (at-most-)countable.
With that, since $X\subseteq (X-Y)\cup Y$, $Y$ is not (at-most-)countable.
A discrete subset of a separable metric space is always (at-most-)countable. |
Sigma-algebra of two subsets | If $A,B$ are "most general" (not $A \subseteq B$ or reversely, non-empty intersection and $A \cup B \neq X$) we get in the $\sigma$-algebra all sets $ A \cap B, A \setminus B, B \setminus A, X\setminus (A \cup B)$, which are 4 (non-empty, as I assumed) sets that form a disjoint partition of $X$. So if we just add all possible unions of those $4$ we are done generating the $\sigma$-algebra, because we cannot add any more by intersections (except $\emptyset$ which we already get as the empty union, i.e. the union of $0$ sets from the partition, anyway) and complements of unions are unions already (in a partition). So we get $2^4$ sets in the generated $\sigma$-algebra (or just algebra, as we're never going to get infinite unions or intersections anyway, starting from finitely many sets).
As a result I get $$\{\emptyset, A^c \cap B^c, A^c \cap B, A^c, A \cap B, (A \Delta B)^c, B, A^c \cup B, A \cap B^c, B^c, A \Delta B, A^c \cup B^c, A, B^c \cup A, A \cup B, X\}$$ |
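This enumeration can be done mechanically: form the four atoms and take all $2^4$ unions. A sketch (with an illustrative choice of $X$, $A$, $B$):

```python
from itertools import combinations

def generated_algebra(X, A, B):
    # atoms of the partition of X generated by A and B
    atoms = [A & B, A - B, B - A, X - (A | B)]
    atoms = [a for a in atoms if a]          # drop empty atoms
    sets = set()
    for r in range(len(atoms) + 1):
        for combo in combinations(atoms, r):
            sets.add(frozenset().union(*combo))
    return sets

# "most general" position: all four atoms non-empty
X = frozenset(range(1, 5))
A = frozenset({1, 2})
B = frozenset({2, 3})
algebra = generated_algebra(X, A, B)
```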
What does $\forall$ mean? | The symbol $\forall$ means "for all." The symbol $\in$ means "in" or is an element of.
In your context, it means for all $a$ and $b$ in $S$. |
Gaussian LU and Crout's Method give me different answers | First of all, when you are decomposing $A=LU$ and if
i) $L$ is lower triangular with all diagonal entries as 1, then it is Doolittle's decomposition.
ii) $U$ is upper triangular with diagonal entries as 1, then it is Crout's method.
In your case Doolittle method will give decomposition $A=LU$, where
$
L=
\begin{bmatrix}
1 & 0 & 0\\
2 & 1 & 0 \\
-1 & -3 & 1
\end{bmatrix}
$
and $
U=
\begin{bmatrix}
2 & 3 & -1\\
0 & -2 & -1 \\
0 & 0 & -5
\end{bmatrix}
$ |
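A quick multiplication check confirms the factors (the original matrix $A$ is recovered as $LU$):

```python
L = [[1, 0, 0],
     [2, 1, 0],
     [-1, -3, 1]]
U = [[2, 3, -1],
     [0, -2, -1],
     [0, 0, -5]]

def matmul(P, Q):
    # plain triple-loop matrix product
    n, m, p = len(P), len(Q), len(Q[0])
    return [[sum(P[i][k]*Q[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = matmul(L, U)   # recovers the original matrix
```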
Constant vector field on the torus $\mathbb{T}^{2n}$ is symplectic | Yes, that's right, you do have $\phi_{X}^t(y) = y + v t$ where we're using the additive group structure of the torus: this group structure is what allows us to identify $T_x \mathbb{T}^{2n}$ with $\mathbb{R}^{2n}$ at every point $x \in X$. Then given $A,B \in T_x \mathbb{T}^{2n}$ we can calculate the Lie derivative from the definition:
$$(\mathcal{L}_X \omega)_x(A,B) = \frac{\mathrm{d}}{\mathrm{d}t}\bigg|_{t=0} ((\phi_{X}^t)^{\ast}\omega)_x(A,B) = \frac{\mathrm{d}}{\mathrm{d}t}\bigg|_{t=0} \omega_{x+ tv}( D_x \phi_{X}^{t} (A), D_x \phi_{X}^{t}(B))$$
Under the identifications of the tangent spaces $T_x \mathbb{T}^{2n}$ and $T_{x+tv} \mathbb{T}^{2n}$ with $\mathbb{R}^{2n}$, the derivative $D_x \phi_{X}^{t}$ is simply the identity map (by definition) and $\omega_{x+tv}$ is $\omega_x$. Therefore when we differentiate a constant, we simply get zero:
$$(\mathcal{L}_X \omega)_x(A,B) = \frac{\mathrm{d}}{\mathrm{d}t}\bigg|_{t=0} \omega_{x}(A,B) = 0$$ |
Finding Coefficients Terms | The coefficient of $x^{28}y^{30}$ must be zero; from this we get two contradictory values, $k=16$ and $k=15$.
moment generating function uniquely determines distribution | First, notice that from the definition of $c_j$ we have $-1 \le c_j \le 1$ for all $j$. Now suppose $c_0 \ne 0$, and WLOG $c_0 > 0$. Then we have
\begin{align*}
0 &= \sum_{j=0}^\infty e^{tj}c_j = c_0 + \sum_{j=1}^\infty e^{tj}c_j
\end{align*}
so $c_0 = -\sum_{j=1}^\infty e^{tj}c_j$ for all $t \in \mathbb{R}$. Since each $c_j$ is at least $-1$ this implies $$ 0 < c_0 \le \sum_{j=1}^\infty e^{tj} = \sum_{j=1}^\infty (e^t)^j,$$
but this is a geometric series and can be made arbitrarily small by choosing $t \ll 0$, contradicting $c_0 > 0$. This implies $c_0 = 0$, and we can use the same reasoning to show that $c_j = 0$ for all $j$ by induction.
Question on integral identities: constant vanishing with uniformly bounded integral | By writing the given identity as
$$
\int_{-\infty}^{+\infty} f(x,y)\;dx=\text{Constant}-Cy,\qquad y \in \mathbb{R},
$$the right hand side is not bounded for $y \in \mathbb{R}$, unless $C=0$. So if the left hand side is uniformly bounded for $y \in \mathbb{R}$, you get the result. |
Constructing covering space of surfaces | To elaborate on Mike Miller's comment, it is easy to see that if $p:X\to Y$ is an $n$-sheeted cover, then $p$ induces a covering $q:X\#Z\#\dots\#Z\to Y\#Z$, where there are $n$ copies of $Z$ on the left. Indeed, if you remove a ball $B$ from $Y$, then $p$ restricts to a covering map from $X$ with $n$ balls removed to $Y\setminus B$; gluing in copies of $Z$ with a ball removed everywhere then yields the covering map $q$. To get your desired covering map, you now just have to let $X=Y=T^2$ and $Z=S_{g-1}$, and observe that there exists an $n$-sheeted cover $T^2\to T^2$ for all $n$. |
First-Order Linear Difference Equation with Constraint | If I understood correctly, the terms in your post can be summarized as
$$
\left\{ \matrix{
y_{\,t} - y_{\,t - 1} = \left( {1 - \lambda } \right)\left( {x_{\,t - 1} - y_{\,t - 1} } \right)\quad \left| {\;n \le \forall t} \right. \hfill \cr
y_{\,t} = \sum\limits_{1\, \le \,k\, \le \,t} {x_{\,k} } \hfill \cr} \right.
$$
Now, the starting point for $t$ is not clear. Supposing it is $0$ then:
$$
\left\{ \matrix{
y_{\,0} = 0 \hfill \cr
y_{\,1} = x_{\,1} \hfill \cr
\quad \vdots \hfill \cr
y_{\,n - 1} = \sum\limits_{1\, \le \,k\, \le \,n - 1} {x_{\,k} } \hfill \cr
\left\{ \matrix{
y_n - y_{\,n - 1} = \left( {1 - \lambda } \right)\left( {x_{\,n - 1} - y_{\,n - 1} } \right) \hfill \cr
y_{\,n} = \sum\limits_{1\, \le \,k\, \le \,n} {x_{\,k} } \hfill \cr} \right. \hfill \cr
\left\{ \matrix{
y_{\,t} - y_{\,t - 1} = \left( {1 - \lambda } \right)\left( {x_{\,t - 1} - y_{\,t - 1} } \right)\quad \hfill \cr
y_{\,t} = \sum\limits_{1\, \le \,k\, \le \,t} {x_{\,k} } \hfill \cr} \right.\left| {\;n < t} \right. \hfill \cr} \right.
$$
that is
$$
\left\{ \matrix{
y_{\,0} = 0 \hfill \cr
y_{\,1} = x_{\,1} \hfill \cr
\quad \vdots \hfill \cr
y_{\,n - 1} = \sum\limits_{1\, \le \,k\, \le \,n - 1} {x_{\,k} } \hfill \cr
\left\{ \matrix{
y_n - y_{\,n - 1} = x_{\,n} = \left( {1 - \lambda } \right)\left( {x_{\,n - 1} - y_{\,n - 1} } \right) = - \left( {1 - \lambda } \right)\left( {\sum\limits_{1\, \le \,k\, \le \,n - 2} {x_{\,k} } } \right) \hfill \cr
y_{\,n} = \sum\limits_{1\, \le \,k\, \le \,n} {x_{\,k} } \hfill \cr} \right. \hfill \cr
\left\{ \matrix{
x_{\,t} = - \left( {1 - \lambda } \right)\left( {\sum\limits_{1\, \le \,k\, \le \,t - 2} {x_{\,k} } } \right) \hfill \cr
y_t - y_{\,t - 1} = - \left( {1 - \lambda } \right)y_{\,t - 2} \hfill \cr} \right.\quad \left| {\;n < t} \right. \hfill \cr} \right.
$$
At this point, if $n=2$, we shall put $x_{\,2} = 0,\;y_2 = y_1 = x_{\,1} $, is that compatible with your problem ?
In any case for $3 \le n$ we get separated equations for $x$ and $y$, and I do not see any bound related with $\lambda$: so probably I did not catch some point about your problem. |
How to prove that $\pi \in \mathbb{R}$? | An answer similar in spirit to your example could be based on Leibniz' series for $\frac{\pi}{4}$ which requires a bit of calculus, in particular knowledge of the Taylor series for the $\tan^{-1}$ function. Leibniz' series is $$\frac{\pi}{4}=\sum_{k=0}^{\infty}\frac{(-1)^k}{2k+1}$$
As the series alternates the partial sums will be above and below the sum $\frac{\pi}{4}$, so we are indeed approaching $\frac{\pi}{4}$ with rational sequences from above and from below (but quite slowly).
$$
\begin{align}
s_0 &= 1 \\
s_1 &= 1-\frac{1}{3} = \frac{2}{3} \\
s_2 &= 1-\frac{1}{3} +\frac{1}{5} = \frac{13}{15} \\
&\dots
\end{align}$$
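The partial sums can also be computed exactly with rationals:

```python
from fractions import Fraction

def leibniz_partial(n):
    # s_n = sum_{k=0}^{n} (-1)^k / (2k + 1), as an exact rational
    return sum(Fraction((-1)**k, 2*k + 1) for k in range(n + 1))

s = [leibniz_partial(n) for n in range(6)]
```

The even-indexed sums stay above $\pi/4$ and the odd-indexed ones below, as claimed.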
Difference of coprimes | It should, in principle, but appears to be impractical: I got$$13=2\cdot3^2\cdot5-7\cdot11=5\cdot7-2\cdot3\cdot11,$$$$17=2\cdot7\cdot13-3\cdot5\cdot11$$and$$19=2^2\cdot3\cdot5\cdot17-7\cdot11\cdot13,$$but got stumped at $23$. On the other hand, your approach can more easily generate other primes between $p_{n+1}$ and $p_{n+1}^2$ because a difference (or sum) of two coprime numbers can't be divisible by any of the numbers' factors; as examples, using the primes $2,3,5,7,11$, you can produce \begin{align}2\cdot3\cdot5^2-7\cdot11&=73,\\
2^2\cdot7^2-3\cdot5\cdot11&=31,\\
2\cdot3^3\cdot7-5^2\cdot11&=103,\\
2\cdot3\cdot5\cdot7-11^2&=89,\\
3\cdot5\cdot7\cdot11-2^{10}&=131,\\
2^5\cdot11-3^2\cdot5\cdot7&=37,\\
3\cdot5\cdot7-2^3\cdot11&=17,
\end{align}as well as any difference (less than $13^2$) resulting from experimentation.
Some (perhaps all) can be written with fewer primes...the examples above, respectively, can be rewritten as \begin{align}73&=3\cdot5^2-2=2^4\cdot5-7;\\
31&=3\cdot11-2=7^2-2\cdot3^2;\\
103&=3\cdot5\cdot7-2=11^2-2\cdot3^2=5^3-2\cdot11;\\
89&=2^2\cdot5^2-11=2^5\cdot3-7;\\
131&=3\cdot7^2-2^4=3^3\cdot5-2^2=2^2\cdot5\cdot7-3^2;\\
37&=2^4\cdot5-3=2^2\cdot11-7=7^2-2^2\cdot3;\\
17&=5^2-2^3=3\cdot7-2^2=2\cdot11-5=2^2\cdot5-3.
\end{align}In these cases, each representation found rules out a set of primes as divisors of its number; finding any with the excluded primes will lead to the number's primality. |
If $a,b,c, a_1, b_1, c_1$ are rational and equations $ax^2+bx+c=0$ and $a_1x^2+b_1x+c_1=0$ have only one root in common then | @Aditya you can begin by noting that if $x_0$ is the common root then one has
$$ ax_0^2 + b x_0 + c = a_1 x_0^2 + b_1 x_0 + c \iff ax_0^2 + b x_0 = a_1 x_0^2 + b_1 x_0 \iff ax_0 + b = a_1 x_0 + b_1 $$,
where in the last equality we have considered that $x_0 \neq 0$. |
symmetric group maximality | I suppose you take $\;S_n:=\left\{\,\sigma\in S_{n+1}\;/\;\sigma(n+1)=n+1\,\right\}\le S_{n+1}\;$ ...though this is completely unimportant.
Suppose then that there exists $\;H\lneqq S_{n+1}\;$ s.t. $\;S_n\lneqq H\implies \exists\,g\in H\setminus S_n\;$ , and thus using the hint we get that
$$S_{n+1}=S_n\cup S_ngS_n\le S_n\cup S_n HS_n\le S_n\cup H=H\;,\;\;\text{ contradiction.}$$ |
Why can't I just find the residue of the function? | When you're doing contour integration, you need to be worried that your function decays sufficiently rapidly that the integrals around all your pieces stay bounded, otherwise you may not be able to do the $R \to \infty$ limit or similar that you need to do to finish the computation. What goes awry here is that $\sin(z)$ doesn't decay anywhere: it blows up exponentially fast as $\mathrm{Im}(z)$ grows in either direction. This means that you have a highly nontrivial problem in trying to understand the asymptotics of the integral along the circular arc.
But for $e^{iz}$, it decays exponentially fast with $\mathrm{Im}(z)$ in the upper half plane, so the integral along the arc goes to zero, and you can then take the imaginary part at the end to get the desired result. You could just as well use $\sin(z)=-\mathrm{Im}(e^{-iz})$ with a contour in the lower half plane, if you wanted. |
Connection between covariant derivative and basis vectors. | As a hint, try re-writing the time-dependent basis vectors $\hat{r}$ and $\hat{\theta}$ in terms of the time-independent basis vectors $\hat{i}$ and $\hat{j}$. Compute the derivative $\frac{\partial}{\partial\theta}$ and see if the resulting expression can be written in terms of the original basic vectors $\hat{r}$ and $\hat{\theta}$. A visual aid is included below ($\hat{e}_r=\hat{r}$ and $\hat{e}_\theta=\hat{\theta}$). |
Show for a monotonic function $f$ on $(a,b)$ is $\omega_f(x_0)=|\lim\limits_{x \searrow x_0}f(x)-\lim\limits_{x \nearrow x_0}f(x)|$ | You have set this up correctly, but you have not actually proved the final conclusive statement.
If $\delta$ is sufficiently small we have $\bar{U}_\delta(x_0) \subset (a,b)$ -- that is the closed ball of sufficiently small radius is contained in the open interval -- and $f$ is defined on $\bar{U}_\delta(x_0)$. Thus, for all $x,y \in \bar{U}_\delta(x_0)$ it follows that $f(x_0-\delta) \leqslant f(x), f(y) \leqslant f(x_0 + \delta)$ and
$$|f(x) - f(y)| \leqslant f(x_0+ \delta) - f(x_0-\delta)$$
Whence,
$$\sup_{x,y \in U_\delta(x_0) \cap (a,b)}|f(x) - f(y)| \leqslant f(x_0+ \delta) - f(x_0-\delta)$$
Since, $f$ is monotonic we have existence of the limits
$$\lim_{x \searrow x_0}f(x) = \lim_{\delta \to 0+}f(x_0 + \delta), \quad\lim_{x \nearrow x_0}f(x) = \lim_{\delta \to 0+}f(x_0 - \delta)$$
Thus,
$$\lim_{x \searrow x_0}f(x)- \lim_{x \nearrow x_0}f(x) \leqslant\\ \liminf_{\delta\to 0+}\sup_{x,y \in U_\delta(x_0) \cap (a,b)}|f(x) - f(y)|\leqslant \limsup_{\delta\to 0+}\sup_{x,y \in U_\delta(x_0) \cap (a,b)}|f(x) - f(y)|\\ \leqslant\lim_{x \searrow x_0}f(x)- \lim_{x \nearrow x_0}f(x) ,$$
which implies that the limit of $\displaystyle \sup_{x,y \in U_\delta(x_0) \cap (a,b)}|f(x) - f(y)|$ as $\delta \to 0+$ exists and
$$\omega_f(x_0) = \lim_{\delta \to 0+}\sup_{x,y \in U_\delta(x_0) \cap (a,b)}|f(x) - f(y)| \leqslant\lim_{x \searrow x_0}f(x)- \lim_{x \nearrow x_0}f(x)$$ |
Find the value of parameter a, so the polynomial has root with multiplicity | For a double root, the discriminant
$$\Delta=(a+3)^2 \,(4 a-15)$$ must equal $0$.
Have a look here. |
How to take the Laplace transform of floor(t) | Consider the sum:
$\begin{align*}
S(s)
&= \sum_{n \ge 0} e^{-n s} \\
&= \frac{1}{1 - e^{-s}}
\end{align*}$
So your mystery sum is just:
$\begin{align*}
-S'(s)
&= \sum_{n \ge 0} n e^{- n s} \\
&= \frac{e^s}{(1 - e^s)^2}
\end{align*}$
The frobbing is valid as long as $e^{-s} < 1$, i.e., $s > 0$. |
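A numerical check of both closed forms (and of the identity $\frac{e^s}{(1-e^s)^2}=\frac{e^{-s}}{(1-e^{-s})^2}$) at $s=1$:

```python
from math import exp

def partial_S(s, N=200):
    # partial sum of S(s) = sum_{n>=0} e^{-ns}
    return sum(exp(-n*s) for n in range(N))

def partial_negSprime(s, N=200):
    # partial sum of -S'(s) = sum_{n>=0} n e^{-ns}
    return sum(n*exp(-n*s) for n in range(N))

s = 1.0
closed_S = 1/(1 - exp(-s))
closed_negSprime = exp(s)/(1 - exp(s))**2
```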
Proof by contradiction with two assumptions | One of my favorite proofs uses this technique twice as parts of a proof by contradiction. Let
$A=$ "If $r, s$ are irrational, then $r^s$ is irrational."
$B=$ "$\sqrt{2}^\sqrt{2}$ is rational."
We get an immediate contradiction, so we know $\neg A\lor\neg B$. Now let's do the same thing with
$A=$ "If $r, s$ are irrational, then $r^s$ is irrational."
$\neg B=$ "$\sqrt{2}^\sqrt{2}$ is irrational."
Now from $A$ and $\neg B$ we can conclude that $(\sqrt{2}^\sqrt{2})^\sqrt{2} = 2$ must be irrational, which is again a contradiction, so we can conclude $\neg A\lor B$.
Finally, we have shown that $\neg A\lor\neg B$ and $\neg A\lor B$ so we can conclude $\neg A$, in other words, there is at least one pair of irrationals $r, s$ such that $r^s$ is rational. The cute part is that this proof doesn't tell us what $r$ and $s$ are, just that there is such a pair. |
The number of subgroups of $\mathbb{Z}_3\times \mathbb{Z}_{16}$ | Hint: $\mathbb{Z}_3\times \mathbb{Z}_{16} \cong \mathbb{Z}_{48} $ and calculate the number of divisors of 48. |
Smallest $n$ such that $H_{n,2^{-1}} \geq n$ where $H_{n,2^{-1}}$ are generalized Harmonic numbers | Note that
$$H_{n,2^{-1}}=\int_1^{n+1}\frac{dt}{\sqrt{\lfloor t\rfloor}}$$
So
$$\int_1^{n+1}\frac{dt}{\sqrt{t}}\le H_{n,2^{-1}}\le1+\int_2^{n+1}\frac{dt}{\sqrt{t-1}}$$
That is,
$$2(\sqrt{n+1}-1)\le H_{n,2^{-1}}\le 1+2(\sqrt{n}-1)$$
For example, $1998 \le H_{999,999,\,2^{-1}} < 1999$. This means that $\mathfrak a(1998)\le 999,999 < \mathfrak a(1999)$.
But
$$\begin{align}\sum_{n\le 1999}&\left(\left\lfloor\frac{n+2}4\right\rfloor+\left\lfloor\frac{n+1}4\right\rfloor\right)\\
&=\sum_{n\le 1999}\frac{2n+3}4\\
&-\left(\frac12+\frac34\right)\cdot499\qquad\text{for $n=4k+1$}\\
&-\frac34\cdot499\qquad\text{for }n=4k+2\\
&-\frac14\cdot499\qquad\text{for }n=4k+3\\
&-\left(\frac12+\frac14\right)\cdot498\qquad\text{for }n=4k\\
&=999,413<999,999
\end{align}$$ |
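The integral sandwich $2(\sqrt{n+1}-1)\le H_{n,2^{-1}}\le 1+2(\sqrt n-1)$ above is easy to confirm numerically:

```python
from math import sqrt

def H_half(n):
    # H_{n, 1/2} = sum_{k=1}^{n} 1/sqrt(k)
    return sum(1/sqrt(k) for k in range(1, n + 1))

checks = [(2*(sqrt(n + 1) - 1), H_half(n), 1 + 2*(sqrt(n) - 1))
          for n in (10, 100, 10_000)]
```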
Primitive $r/(1+r^2)$ without abs() | In principle there should be, but since $r^2+1>0$ for all $r\in\mathbb{R}$, you can remove them. In any case, if I recall correctly, WA never includes abs even when it's necessary. -- Git Gud Apr 16 at 10:54
Closed form of maximum of reciprocal distances | Hint
$$\left| \frac{1}{e^{ix} - e^{i(x + \omega)}} - \frac{1}{\lambda - e^{i(x + \omega)}} \right|{=\left| \frac{e^{ix}}{e^{ix} - e^{i(x + \omega)}} - \frac{e^{ix}}{\lambda - e^{i(x + \omega)}} \right|\\=\left| \frac{1}{1 - e^{i\omega}} - \frac{1}{\lambda e^{-ix} - e^{i\omega}} \right|\\=\left|{\lambda e^{-ix}-1\over (1-e^{i\omega})\cdot ({\lambda e^{-ix} - e^{i\omega}})}\right|}$$then the problem converts to minimizing $$|(1-e^{i\omega})\cdot ({\lambda e^{-ix} - e^{i\omega}})|^2{=(2-2\cos \omega)\cdot \Bigg(1+a^2-2a\cos (\omega-\theta)\Bigg)}$$where$$\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\lambda e^{-ix}\triangleq a e^{i\theta}\quad\text{(the polar representation)}$$ |
Problem involving the factor and remainder theorem | Your way is fine, and is the standard way to approach the problem. After getting $$f(3)=27+3a+b=16 \iff 3a+b=-11$$
Use the factor theorem to get that $$f(-1)=-1-a+b=0 \iff b=a+1$$
Putting this back into our original equation, we get $$4a+1=-11 \iff a=-3$$
And thus $a=-3, b=a+1=-2$. Thus $$f(x)=x^3-3x-2$$
Done. |
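A quick verification of the result:

```python
def f(x):
    return x**3 - 3*x - 2

# remainder 16 on division by (x - 3), and (x + 1) is a factor
conditions = (f(3) == 16, f(-1) == 0)
```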
Proof of "the continuous image of a connected set is connected" | $f^{-1}(A) \cup f^{-1}(B)$ may be larger than $E$, but it must contain $E$: if $x \in E$, $f(x) \in f(E) = A \cup B$. Then either $f(x) \in A$ or $f(x) \in B$. In the first case, $x \in f^{-1}(A)$, and in the second $x \in f^{-1}(B)$. (The fact that $f^{-1}(A) \cup f^{-1}(B)$ may be larger than $E$ is the reason for intersecting with $E$ when defining $G$ and $H$.) |
Linear differential equations of the $n$th order | I think I have got an idea how to decode the message, or at least its major part.
I start with a system of linear differential equations
$$
\dot z(t)=A(t)z(t)+f(t),\quad z(0)=z_0\in\mathbb{R}^n.\tag1
$$
If $\Phi(t)$ denotes the fundamental matrix of the corresponding homogeneous system, i.e.
$$
\dot\Phi(t)=A(t)\Phi(t),\qquad\Phi(0)=I,\tag2
$$
then the solution to (1) is given by the formula (take derivative to see it)
$$
z(t)=\Phi(t)z_0+\Phi(t)\int_0^t\Phi(s)^{-1}f(s)\,ds.\tag3
$$
The linear differential equation of order $n$
$$
x^{(n)}(t)+a_1(t)x^{(n-1)}(t)+\ldots+a_n(t)x(t)=b(t)
$$
can be in the standard way converted to the form (1) if we denote
$$
z(t)=\left[\matrix{x(t)\\ x'(t)\\ \vdots\\x^{(n-2)}(t)\\x^{(n-1)}(t)}\right]\quad\Rightarrow
\quad
\dot z=\left[\matrix{0 & 1 & 0 & \ldots & 0\\
0 & 0 & 1 & \ldots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \ldots & 1\\
-a_n & -a_{n-1} & -a_{n-2} & \ldots & -a_1}\right]z+
\left[\matrix{0\\ 0\\ \vdots\\ 0 \\b}\right].
$$
The fundamental matrix for the homogeneous part now will be the (scaled) Wronski matrix
$$
\Phi(t)=W(\phi_1(t),\ldots,\phi_n(t))W(\phi_1(0),\ldots,\phi_n(0))^{-1}=W(t)W(0)^{-1}.
$$
Now to get the general solution I am going to use (3), but I will a) move the initial time from zero to $\alpha$, and b) take only the first coordinate of $z$, which is $x=[1\ 0\ \ldots\ 0]z$, since the other coordinates are not interesting - just the derivatives of $x$.
$$
x(t)=[1\ 0\ \ldots\ 0]W(t)z_\alpha+[1\ 0\ \ldots\ 0]W(t)\int_\alpha^tW(s)^{-1}\left[\matrix{0\\ \vdots\\ 0 \\b(s)}\right]\,ds.
$$
The rest is pretty technical. With $z_\alpha=\gamma$ (not quite get the connections to $U(\phi)=\gamma$ here) the first term becomes
$$
\sum_{i=1}^n\phi_i(t)\gamma_i
$$
and in the second term an explicit calculation of $W^{-1}=\frac{\text{adj}\,W}{\det W}$ is done. Actually, we need only the last column of the inverse, since the others are multiplied by zeros anyway. This is how we obtain the algebraic complements $W_{ni}$ and (NB!) $\det W$ in the denominator.
Finally, the function $K^*(t,s)$. We need only the last element of
$$
[\phi_1(t)\ \phi_2(t)\ \ldots\ \phi_n(t)]W(s)^{-1}=[*\ *\ \ldots\ K^*(t,s)]
$$
since the others are multiplied by zeros. Post-multiplying by $W(s)$ and taking transpose yields
$$
W(s)^T\left[\matrix{*\\*\\\vdots\\ K^*(t,s)}\right]=
\left[\matrix{\phi_1(t)\\\phi_2(t)\\\vdots\\ \phi_n(t)}\right].
$$
Now Cramer's rule gives
$$
K^*(t,s)=\frac{1}{\det(W(s))}\left|\matrix{\phi_1(s) & \phi_2(s) & \ldots &\phi_n(s)\\\phi_1'(s) & \phi_2'(s) & \ldots &\phi_n'(s)\\
\vdots & \vdots & \ddots & \vdots\\
\phi_1^{(n-2)}(s) & \phi_2^{(n-2)}(s) & \ldots &\phi_n^{(n-2)}(s)\\
\phi_1(t) & \phi_2(t) & \ldots &\phi_n(t)}\right|.
$$ |
Markov chain - Regular transition matrix | A stochastic matrix is regular if it's irreducible and has at least one non-zero entry on its main diagonal. It's easy to show that your matrix is irreducible, since every state communicates with state $1$, and state $\ i\ $ communicates with state $\ i+1\ $ for $\ i=1,2,3,4\ $, and the first entry on its main diagonal is non-zero. Therefore it's regular.
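The matrix itself is not reproduced in this answer, so as an illustration take a hypothetical 5-state chain with exactly the structure described (each state steps forward to $i+1$ or falls back to state $1$, with a positive first diagonal entry) and confirm regularity directly: some power of $P$ must be strictly positive.

```python
# hypothetical 5-state chain matching the structure described above:
# from state i, move to i+1 or back to state 1, and P[0][0] > 0
P = [
    [0.5, 0.5, 0.0, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0, 0.0],
    [0.5, 0.0, 0.0, 0.5, 0.0],
    [0.5, 0.0, 0.0, 0.0, 0.5],
    [1.0, 0.0, 0.0, 0.0, 0.0],
]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# regular <=> some power of P has strictly positive entries
Q = P
for _ in range(10):
    Q = matmul(Q, P)
assert all(x > 0 for row in Q for x in row)
```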
Optimization-Cost function | Hint: The optimal prices are those which maximise total profit, so the first-order optimality conditions are not
$$
\frac{\partial TC}{\partial p_1}= \frac{\partial TC}{\partial p_2}=0\ ,
$$
but
$$
\frac{\partial TP}{\partial p_1}= \frac{\partial TP}{\partial p_2}=0\ ,
$$
where
\begin{eqnarray}
TP &=& p_1q_1 + p_2q_2 - q_1 - kq_2\\
&=& \frac{1}{p_1^2p_2^3} + \frac{1}{p_1^3p_2^2}-\frac{1+k}{p_1^3p_2^3}
\end{eqnarray} |
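As a numeric sanity check (with an illustrative choice $k=2$, not from the original problem): hand-solving the first-order conditions for this $TP$ gives $p_1=p_2=3(1+k)/5$, which finite differences confirm is a stationary point.

```python
# assuming TP as above, with illustrative k = 2; solving dTP/dp1 = dTP/dp2 = 0
# by hand gives p1 = p2 = 3(1 + k)/5, checked here by central differences
def TP(p1, p2, k=2.0):
    return p1**-2 * p2**-3 + p1**-3 * p2**-2 - (1 + k) * p1**-3 * p2**-3

k = 2.0
p = 3 * (1 + k) / 5                      # = 1.8
h = 1e-6
d1 = (TP(p + h, p) - TP(p - h, p)) / (2 * h)
d2 = (TP(p, p + h) - TP(p, p - h)) / (2 * h)
assert abs(d1) < 1e-6 and abs(d2) < 1e-6
assert TP(p, p) > TP(1.6, 1.6) and TP(p, p) > TP(2.0, 2.0)  # looks like a maximum
```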
Finding the derivative of a rational function with limit definition | If you want to find $$\lim_{h\rightarrow 0}\frac{-45 + 12h}{17 (h^2 - 8h + 17)}$$
as $h$ approaches $0$ then you can see that it becomes (since the numerator and denominator are such that when $h$ is $0$ we don't end up with an indeterminate form):
$$ -\frac{45}{17\times17} =-\frac{45}{289}$$
Which if you work out the derivative normally you can see that this is the case since:
$$ f'(x)=-\frac{3 \left(x^2-1\right)}{\left(x^2+1\right)^2} \;\; \therefore \;\; f'(4)=-\frac{45}{289}$$
Just as an extension to the above you can see that in general you have:
$$f'(x)=\frac{\frac{3 (h+x)}{(h+x)^2+1}-\frac{3 x}{x^2+1}}{h} \;\;\; \text{which can simplify to}\;\; -\frac{3 (x (h+x)-1)}{\left(x^2+1\right) \left((h+x)^2+1\right)} $$
which again if you let $h=0$ then you can see that it becomes:
$$f'(x)=-\frac{3 (x (0+x)-1)}{\left(x^2+1\right) \left((0+x)^2+1\right)}=-\frac{3 (x^2-1)}{\left(x^2+1\right) \left(x^2+1\right)}=-\frac{3 \left(x^2-1\right)}{\left(x^2+1\right)^2}$$ |
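The value can also be checked numerically: evaluate the limit expression at $h=0$ and compare against a central-difference approximation of $f'(4)$ for $f(x)=3x/(x^2+1)$.

```python
# direct check at x = 4 for f(x) = 3x/(x^2 + 1)
f = lambda x: 3 * x / (x * x + 1)
limit_expr = lambda h: (-45 + 12 * h) / (17 * (h * h - 8 * h + 17))

assert abs(limit_expr(0.0) - (-45 / 289)) < 1e-15

h = 1e-6
central_diff = (f(4 + h) - f(4 - h)) / (2 * h)
assert abs(central_diff - (-45 / 289)) < 1e-8
```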
proof regarding the commutativity of an arbitrary oddball binary operator? | [Edit: this answer is for an older version of the question.]
It is not necessarily commutative if the two numbers have different numbers of digits. For example, $91 \# 1=2$ but $1 \# 91=11$. [For another counterexample without one-digit numbers, $103 \# 19 = 5$ but $19 \# 103=14$.]
If the two numbers have the same number of digits (say $a=a_na_{n-1} \cdots a_0$ and $b=b_n b_{n-1} \cdots b_0$), then the first stage of the computation of $a\#b$ will give
$$(a_0+b_n)10^n+(a_1+b_{n-1} )10^{n-1}+\cdots + (a_n+b_0)10^0,\tag{1}$$
while the respective sum for $b \# a$ is
$$(a_n+b_0)10^n+(a_{n-1}+b_{1} )10^{n-1}+\cdots + (a_0+b_n)10^0\tag{2}.$$
We just need to show that these two numbers (when written in decimal form) will have digits summing to the same number. This is true [even if terms like $a_1+b_{n-1}$ are greater than $9$ and you need to carry!] because you end up summing all the digits. Specifically, I mean that the sum of the digits of the decimal number in (1) is the same as the sums of the digits of all the decimal numbers $a_0+b_n, a_1+b_{n-1},\ldots, a_n+b_0$, and the same for (2). |
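The original definition of $\#$ is not restated in this answer; under one reading consistent with all four counterexamples above (reverse the decimal digits of $a$, add $b$, then sum the digits of the result), the stated values check out:

```python
# assumption: a # b means "reverse the decimal digits of a, add b, then sum the
# digits of the result" -- the reading consistent with the examples in the answer
def op(a, b):
    reversed_a = int(str(a)[::-1])
    return sum(int(d) for d in str(reversed_a + b))

assert op(91, 1) == 2 and op(1, 91) == 11        # different digit counts
assert op(103, 19) == 5 and op(19, 103) == 14    # same, without one-digit numbers
```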
Examples of topologies in which all open sets are regular? | Well, for starters, if your space is $T_1$ (so that one-point sets are closed), then it must be discrete.
Proof: Suppose $X$ is $T_1$ and every open subset of $X$ is regular. Let $y \in X$. Then $X \backslash \{y\}$ is open. Now $X \backslash \{y\}$ must also be closed. For if not, then its closure is necessarily $X$, whose interior is $X$, not $X \backslash \{y\}$, contradicting regularity. Since $X \backslash \{y\}$ is closed, $\{y\}$ is open.
This suggests to me that there are not going to be very many interesting topologies with this property.
Addendum: Another possibility would be to consider spaces for which the regular open sets form a basis for the topology. In some sense, this ensures that there are "enough" regular open sets. Such spaces include $\mathbb{R}^n$, all topological manifolds, and all Banach spaces. I can't offhand think of an example of a space without this property; can anyone? If there are interesting necessary and/or sufficient conditions for this property, better still. |
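On a finite set the original question can be settled exhaustively. The sketch below (my own check, not from the answer) enumerates all topologies on a 3-point set and keeps those in which every open set is regular open ($\operatorname{int}(\operatorname{cl}(U))=U$): the discrete and indiscrete topologies qualify (so non-$T_1$ examples do exist), while a Sierpiński-style topology does not.

```python
from itertools import chain, combinations

X = frozenset({0, 1, 2})
subsets = [frozenset(c) for r in range(4) for c in combinations(sorted(X), r)]

def is_topology(T):
    return (frozenset() in T and X in T
            and all(a | b in T and a & b in T for a in T for b in T))

def interior(T, A):
    return frozenset(chain.from_iterable(U for U in T if U <= A))

def is_regular_open(T, U):
    closure = X - interior(T, X - U)     # cl(U) = complement of int of complement
    return interior(T, closure) == U     # regular open: int(cl(U)) == U

families = (frozenset(f) for r in range(len(subsets) + 1)
            for f in combinations(subsets, r))
topologies = [T for T in families if is_topology(T)]
good = [T for T in topologies if all(is_regular_open(T, U) for U in T)]

discrete = frozenset(subsets)
indiscrete = frozenset({frozenset(), X})
sierpinski_like = frozenset({frozenset(), frozenset({0}), X})

assert len(topologies) == 29                      # all topologies on 3 points
assert discrete in good and indiscrete in good    # indiscrete: non-discrete example
assert sierpinski_like not in good                # {0} is open but not regular there
```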
Intuitive computation of localizations | Ok, a lot to say here.
1) I'm not sure how to answer this question. Just allow yourself to invert things in $S$. $S\newcommand{\inv}{^{-1}}\inv A$ is just $A$ where you allow yourself to divide by things in $S$, or fractions with numerators in $A$ and denominators in $S$. Not sure what you're looking for exactly here.
2) No. Note that $k(T)\ne k[T,T\inv]$. $\frac{1}{T+1}\not\in k[T,T\inv]$.
3) Note that $\newcommand{\Spec}{\operatorname{Spec}}\Spec k[x,y]$ is an integral scheme. Hence it has a well-defined field of rational functions, $k(x,y)$. An element of $\newcommand{\calO}{\mathcal{O}}\calO(U)$ is a collection of pairs $(f_i/g_i,D(g_i))$ of rational functions that agree on their overlaps, i.e. such that $f_i/g_i=f_j/g_j$ on $D(g_i)\cap D(g_j)$.
I.e. as an element of the field of rational functions, $f_i/g_i$ is well defined, call it $F$. Now because $k[x,y]$ is a UFD, there is a unique
smallest denominator $g$ such that $Fg = f\in k[x,y]$. Hence $D(g_i)\subset D(g)$ for all $g_i$. Then we must have that $U\subset D(g)$, or that the zeros of $g$ are at most the origin. However, over an
algebraically closed field, no polynomial in two variables has only one maximal ideal as a zero, in fact if $g\not\in k$, then $g$ will have a zero at $(p)$ for every prime factor of $g$ anyway. Hence $g$ must have no zeroes, so $g\in k$. But then $F\in k[x,y]$. Hence $k[x,y]\subset \calO(U)\subset k[x,y]$, so $\calO(U)=k[x,y]$. But $U$ is missing a point compared to $\Spec\calO(U)$, so $U$ is not affine. |
Software for organising mathematics | Tilo has probably got a solution now but for others.....
Windows-based outliner software like SEO note and TreePad are excellent programs to collect disparate and unorganized information into one document.
There is a tree structure you can create to organize and navigate your stuff.
You can create hyper links to documents (maple files, spreadsheets, pdf math books, scanned images, etc) on your hard disk or webpages on the net.
HyperLinks to start external programs (for typesetting or theorem provers in Tilo's case) is very easy to setup as well.
You can copy/paste scanned images of your handwritten notes into the outliner document and add some keyword text so that these notes can be located through the search function.
There are many helpful features, but some features and functionality that would make these programs even better would be text folding, direct image scanning into the document, graphics tablet support, revision history, LaTeX support and syntax highlighting. It's been a while since I looked into these programs. Perhaps some of this functionality exists in some outliners now. That Wikipedia link has a list.
Largest set $B$ such that $|A\cap (B-B)|=p$ | Edit. From the comments below, there was a typo which has been fixed. Note that this answer only works if $k^2-k+1\leq n$. I will see if this can be improved, although from the remark below that there are cases with $k^2-k+1>n$ such that there is a counterexample.
Write $[n]:=\{1,2,3,\ldots,n\}$. Note that $|A|\leq \dfrac{n-1}{k}$. Presumably, $$B-B:=\{b-b'\,|\,b\text{ and }b'\text{ are in }B\}\,.$$
We shall determine elements $b_1<b_2<b_3<\ldots<b_k$ of $B\subseteq [n]$ inductively. First, set $b_1:=1$. Suppose that $l$ is a positive integer such that $l\leq k$ and $b_1,b_2,\ldots,b_{l-1}$ have been defined.
Consider $$T_{l-1}:=\{b_1,b_2,\ldots,b_{l-1}\}\cup\displaystyle\bigcup\limits_{r=1}^{l-1}\,\big(b_r+A\big)\,,$$ where $x+A:=\{x+a\,|\,a\in A\}$ for any $x\in \mathbb{Z}$. We note that $$\begin{align}|T_{l-1}|&\leq \big|\{b_1,b_2,\ldots,b_{l-1}\}\big|+\sum_{r=1}^{l-1}\,\big|b_r+A\big|=(l-1)+\sum_{r=1}^{l-1}\,|A|
\\&=(l-1)+(l-1)\,|A|\leq (l-1)+(l-1)\,\frac{n-1}{k}\\&=(l-1)\,\frac{n-1+k}{k}\leq n-1\,,\end{align}\tag{*}$$
because $$(n-1)(k-l+1)\geq n-1\geq k^2-k=(k-1)k\geq (l-1)k\,.$$
Hence, $[n]\setminus T_{l-1}$ is nonempty. Let $b_l$ be the least element of $[n]\setminus T_{l-1}$. Because $T_1\subseteq T_2\subseteq \ldots \subseteq T_{l-1}$, we conclude that $$b_1<b_2<\ldots<b_{l-1}<b_l\,.$$
By induction, the set $B:=\{b_1,b_2,\ldots,b_k\}$ has the property that $|B|=k$ and $A\cap (B-B)=\emptyset$. The proof is now complete.
Example. Let $n:=7$, $k:=3$, and $A:=\{1,4\}$. Then, the procedure above produces
$b_1=1$ with $T_1=\{1,2,5\}$;
$b_2=3$ with $T_2=\{1,2,3,4,5,7\}$;
$b_3=6$ with $T_3=\{1,2,3,4,5,6,7,10\}$.
Hence, $B=\{1,3,6\}$ satisfies the requirement. Note that
$$B-B=\{-5,-3,-2,0,2,3,5\}$$
is disjoint from $A=\{1,4\}$.
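The greedy construction from the proof can be checked mechanically; on the example above it reproduces $B=\{1,3,6\}$ with $A\cap(B-B)=\emptyset$.

```python
def build_B(n, k, A):
    # greedy construction from the proof: b_l = least element of [n] not in T_{l-1}
    B = []
    for _ in range(k):
        T = set(B) | {b + a for b in B for a in A}
        B.append(min(x for x in range(1, n + 1) if x not in T))
    return B

B = build_B(7, 3, {1, 4})
assert B == [1, 3, 6]
diffs = {b - c for b in B for c in B}          # B - B
assert diffs.isdisjoint({1, 4})                # A ∩ (B - B) = ∅
```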
Remark. Let $k\geq 3$ be an integer. Suppose that $n=(k-1)^2$. Then $k^2-k+1>n$. Note that $\left\lfloor\dfrac{n-1}{k}\right\rfloor=k-2$, so we take $A:=[k-2]$. We shall prove that the set $B$ does not exist. Partition $[n]$ into $$X_i:=\big\{(i-1)(k-1)+1,(i-1)(k-1)+2,\ldots,i(k-1)\big\}$$ for $i=1,2,\ldots,k-1$. Observe that, if $B$ exists, then $B$ can contain at most one element from each of the sets $X_i$ for $i=1,2,\ldots,k-1$. This means $|B|\leq k-1$, which leads to a contradiction.
P.S. If $|A|\leq \dfrac{n}{k^2}$, then we only need $k\leq n$ (the condition $k^2-k+1\leq n$ is unnecessary). The inequality (*) can be improved and become
$$|T_{l-1}|\leq (l-1)+(l-1)\frac{n}{k^2}\,.$$
Since $$\begin{align}n\left(k^2-l+1\right)&\geq n(k^2-k+1)\geq k(k^2-k+1)\\&> k(k^2-k)=k^2(k-1)\geq k^2(l-1)\,,\end{align}$$
we conclude that $|T_{l-1}|<n$. By a similar argument, the set $B$ exists. |
Non-standard geometry problem | Here is a sketch of a geometric solution that gives an area bound of 2. For simplicity, assume that the boundary has a unique tangent at every point.
Now I claim that there exists a halving chord $c$ such that the tangents at its two endpoints are parallel. The simplest way to see this is to start with an arbitrary halving chord and draw tangents at its endpoints. Continuously move it so that it always stays a halving chord. The chord must return to its original position in the opposite orientation. During this time, the angle between the tangents must go from $\theta$ to $-\theta$ so the tangents must be parallel at some point during this process.
Now the figure is sandwiched between two lines at unit distance. Since every halving chord intersects $c$, every point on the figure is at most distance 1 from $c$. Thus, the figure lies inside a $1\times 2$ rectangle. |
Can we construct a universal Turing machine using only Peano's axioms for addition? | Presumably what you describe as Peano's axioms for addition has been studied as Presburger arithmetic, which is known to be consistent and complete (therefore decidable). This seems to disallow the "embedding" of a universal Turing machine in that theory since then the halting problem would be decidable. |
Proof of Proposition 2.12 in Neukirch ANT | Identifying $\mathfrak a'$ as $\mathbb{Z}^n$, choose some basis $v_1,...,v_n\in\mathbb{Z}^n$ for $\mathfrak a$, and let $A$ be the matrix with columns $v_i$. Choosing another basis for $\mathfrak a$ is equivalent to multiplying $A$ from the right by the basis changing matrix which is in $GL_n(\mathbb{Z})$. Similarly, multiplying from the left corresponds to a basis change in $\mathfrak a$.
Prove to yourself that for any matrix $A$ over $\mathbb{Z}$, there are invertible matrices $P,Q$ in $GL_n(\mathbb{Z})$ such that $PAQ$ is diagonal. The determinant does not change (up to sign), of course. The fact that the new matrix is diagonal means that you can find a basis $u_1,...,u_n$ for $\mathfrak a'$ such that $d_1u_1 ,..., d_n u_n$ is a basis for $\mathfrak a$, where the $d_i$ are the elements on the diagonal. Now it is easy to see that the quotient group is the product of $\mathbb{Z}_{d_i}$, which implies that the index is the product of the $d_i$, which is (up to sign) exactly the determinant.
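The index-equals-determinant fact can be sanity-checked numerically in the case $n=2$, for a sublattice of my own choosing: the index $[\mathbb{Z}^2 : A\mathbb{Z}^2]$ equals the number of integer points in the half-open fundamental domain $\{At : t\in[0,1)^2\}$, which should match $|\det A|$.

```python
from fractions import Fraction

def index_by_counting(A):
    # count integer points in the half-open fundamental domain {A t : t in [0,1)^2};
    # this count is the index [Z^2 : A Z^2], which should equal |det A|
    (a, b), (c, d) = A
    det = a * d - b * c
    lim = abs(a) + abs(b) + abs(c) + abs(d) + 1
    count = 0
    for x in range(-lim, lim + 1):
        for y in range(-lim, lim + 1):
            # t = A^{-1} (x, y), computed exactly with rationals
            t1 = Fraction(d * x - b * y, det)
            t2 = Fraction(-c * x + a * y, det)
            if 0 <= t1 < 1 and 0 <= t2 < 1:
                count += 1
    return count, abs(det)

count, det = index_by_counting([[2, 1], [0, 3]])
assert count == det == 6
```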
$AB$ is diameter of a semicircle and $S$ is incircle of $ABC$. Find $\angle ASB$. | Observe that the triangle is a right triangle with ∠A + ∠B = 90. Also, since S is the incenter, SA and SB are the angle bisectors of ∠A and ∠B, respectively.
Thus,
$$\angle ASB = 180 - \frac{\angle A + \angle B}{2} = 180 - \frac{90}{2}=135 $$ |
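A quick coordinate check (unit circle, $A=(-1,0)$, $B=(1,0)$, $C$ anywhere on the upper semicircle, incenter as the side-length weighted average of the vertices) confirms the angle is $135$ regardless of where $C$ sits:

```python
import math

def angle_ASB(theta):
    # right triangle on the diameter AB of the unit circle, C = (cos t, sin t)
    A, B, C = (-1.0, 0.0), (1.0, 0.0), (math.cos(theta), math.sin(theta))
    a, b, c = math.dist(B, C), math.dist(A, C), math.dist(A, B)  # opposite sides
    s = a + b + c
    S = ((a * A[0] + b * B[0] + c * C[0]) / s,   # incenter = side-length
         (a * A[1] + b * B[1] + c * C[1]) / s)   # weighted vertex average
    u = (A[0] - S[0], A[1] - S[1])
    v = (B[0] - S[0], B[1] - S[1])
    cos_ang = (u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))
    return math.degrees(math.acos(cos_ang))

# 135 degrees no matter where C sits on the semicircle
assert all(abs(angle_ASB(t) - 135.0) < 1e-6 for t in (0.5, 1.0, 2.0, 3.0))
```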
Why Do Imaginary Numbers Exist | When you are asking about why they exist, I take it you mean why they were developed? Because if you're really asking about whether numbers exist, that becomes a philosophical and rather complicated question about our ontological commitments to mathematical entities.
They were first noticed possibly when mathematicians were solving quadratic polynomials, i.e. $ax^2+bx+c=0$. You'll quickly notice that sometimes we get solutions involving taking the square root of negative value. Mathematicians dismissed this as being absurd until they began to work on finding a formula for the roots of the general cubic polynomial, i.e. $ax^3+bx^2+cx+d=0$.
As for what they do, they have a lot of applications within and outside of mathematics. We're able to solve a lot of problems which appear to be firmly fixed in the real numbers using complex numbers. Within mathematics, this can be seen in geometry, calculus, etc. Outside of mathematics, it is extremely useful to physics and thus useful to engineering, particularly electrical engineering.
If our goal was to "get rid" of the negative, sure, multiplying by a negative number would get rid of it symbolically but then that changes our equation algebraically. |
The timestep of Forward–backward algorithm | As I understand the forward-backward algorithm, you need at least one of your summands to be smooth (this appears to be $g$ in your case?) with an $L$-Lipschitz continuous gradient. I'll assume this holds.
The forward-backward iteration
$$x_{n+1}=\textrm{prox}_{(\Delta t) f}\big(x_n-(\Delta t)\,\nabla g(x_n)\big)$$
is proven to converge whenever a solution exists and $\Delta t\in\,]0,2/L[$. There is usually no way to know "a priori" which choice of $\Delta t$ is best, since this varies from problem to problem. Based on heuristics and numerical considerations, I usually avoid very small values of $\Delta t$. I usually try out $\Delta t=1/L$ first.
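Here is a minimal one-dimensional sketch of the iteration on a toy problem of my own choosing, $f=|\cdot|$ and $g=\tfrac12(x-3)^2$ (so $L=1$); any $\Delta t\in\,]0,2[$ reaches the minimizer $x^*=2$.

```python
def soft(v, t):
    # proximal operator of t*|.| (soft-thresholding)
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def forward_backward(x0, grad_g, lam, step, iters=500):
    x = x0
    for _ in range(iters):
        x = soft(x - step * grad_g(x), step * lam)  # forward step, then prox
    return x

# minimise f(x) + g(x) = |x| + 0.5*(x - 3)^2; grad g is 1-Lipschitz, so any
# step in (0, 2) converges; optimality 0 ∈ sign(x) + (x - 3) gives x* = 2
for step in (0.3, 1.0, 1.9):
    x_star = forward_backward(10.0, lambda x: x - 3.0, lam=1.0, step=step)
    assert abs(x_star - 2.0) < 1e-6
```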
Lagrange multipliers to optimize sum of cubes of roots of a quadratic | This is a solution without Lagrange Multiplier method. See my edit in the end for Lagrange Multiplier method.
$\alpha + \beta = \lambda-2, \alpha\beta = 10 - \lambda$
$f(\alpha, \beta) = \alpha^3 + \beta^3 = (\alpha + \beta)^3 - 3 \alpha \beta(\alpha + \beta)$
So, $f(\alpha, \beta) = (\lambda-2)^3 - 3 (10-\lambda)(\lambda-2) = \lambda^3 - 3 \lambda^2 - 24 \lambda + 52$
Now we take derivative with respect to $\lambda$ and equate to $0$,
$\lambda^2 - 2\lambda-8 = 0 \implies \lambda = -2, 4$
Second derivative test confirms a local minimum at $\lambda = 4$. We also observe that the function $f$ is monotonically decreasing for $-2 \lt \lambda \lt 4$ and monotonically increasing for $\lambda \gt 4$. So in the given domain $\lambda \geq - 2$, the absolute minimum occurs at $\lambda = 4$.
And so we have,
$\alpha + \beta = \lambda - 2 = 2, \alpha \beta = 10 - \lambda = 6$
Solving, $\alpha = 1 \pm i \sqrt 5, \beta = 1 \mp i \sqrt5$, which are complex roots of the quadratic for $\lambda = 4$.
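A quick numeric check of the first method: sampling $f(\lambda)=\lambda^3-3\lambda^2-24\lambda+52$ on the domain $\lambda\ge-2$ locates the minimum at $\lambda=4$ with value $f(4)=-28$.

```python
def f(lam):
    # alpha^3 + beta^3 written as a function of lambda
    return lam**3 - 3 * lam**2 - 24 * lam + 52

grid = [-2 + 0.01 * i for i in range(3001)]   # sample lambda in [-2, 28]
best = min(grid, key=f)
assert abs(best - 4) < 0.011                  # minimiser on lambda >= -2
assert f(4) == -28                            # minimum value of alpha^3 + beta^3
```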
For Lagrange Multiplier method,
Say $\alpha + \beta = x, \alpha \beta = y$
You found $\alpha^2 + \beta^2 = - 8$. We also have $y = 10 - \lambda = 8 - x$.
$x^2 = - 8 + 2y = - 8 + 16 - 2x \implies x^2 + 2x - 8 = 0 \ $
(where $x = \alpha + \beta$)
Solving $x = 2, y = 6; x = - 4, y = 12$.
We test $\alpha^3 + \beta^3 = x^3 - 3 xy$ for both and obtain $x = 2, y = 6$ to be the minima.
$(\alpha-\beta)^2 = x^2 - 4y = - 20 \implies \alpha-\beta = \pm \sqrt{-20}$ |
What is the probability of this? | There are $\binom{25}{6}$ ways of choosing the sample. You will reject unless $0$ or $1$ defects are found. The probability of accepting is therefore
$$
\frac{\binom{17}{6} \binom{8}{0}+ \binom{17}{5} \binom{8}{1}}{\binom{25}{6}} = \frac{442}{1265}
$$
and the probability of rejecting is $$\frac{823}{1265} \approx 0.650593$$ |
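Reading the binomial coefficients as a sample of $6$ from $25$ items of which $8$ are defective, the arithmetic can be verified exactly with rationals:

```python
from math import comb
from fractions import Fraction

# accept iff the sample of 6 (from 25, with 8 defective) contains 0 or 1 defects
accept = Fraction(comb(17, 6) * comb(8, 0) + comb(17, 5) * comb(8, 1), comb(25, 6))
assert accept == Fraction(442, 1265)
assert 1 - accept == Fraction(823, 1265)
assert abs(float(1 - accept) - 0.650593) < 1e-6
```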
What is $\lim_{n \to \infty} \frac{x^n-x^{-n}}{x^n+x^{-n}}$ when $0\lt x \lt 1$? | The correct answer (and method) is the first one, of course.
In the second case you cannot use L'Hôpital's Rule: it is NOT an indeterminate form $0/0$. It is a very simple determinate form, $(-1)/1$.
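Numerically the limit is easy to see: multiplying through by $x^n$ gives $(x^{2n}-1)/(x^{2n}+1)$, and for $0<x<1$ the term $x^{2n}\to 0$, so the quotient tends to $-1$.

```python
def g(x, n):
    # equivalently (x^(2n) - 1) / (x^(2n) + 1) after multiplying through by x^n
    return (x**n - x**-n) / (x**n + x**-n)

# for 0 < x < 1, x^(2n) -> 0, so the limit is -1
assert abs(g(0.5, 60) - (-1.0)) < 1e-12
assert abs(g(0.9, 600) - (-1.0)) < 1e-12
```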
Family of functions that are bounded in $L^1$ but *NOT* Uniformly Integrable | The thing which breaks this (and other things like martingale convergence) is the fact that you can have things be very large on small sets. If you have $L^p$ bounds for $p > 1$, this just won't happen. But for $p = 1$, it can! Neal provided a good answer, but allow me to add one that won't require much thinking for you!
$X_n = 2^{n} 1_{[0,2^{-n}]}$
Notice that $\|X_n\|_{L^1}=1$, but no matter how small a set you take, you can find an $n$ large enough where all of your mass will still be there.
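These two facts are immediate to compute: the $L^1$ norm is always $1$, yet for any cutoff $M$ the tail integral $\int_{\{X_n>M\}}X_n$ is still $1$ once $2^n>M$, so uniform integrability fails.

```python
def l1_norm(n):
    # ||X_n||_{L^1} = 2^n * |[0, 2^-n]| = 1 for every n
    return 2**n * 2.0**-n

def tail_mass(n, M):
    # integral of X_n over {X_n > M}: all of the mass once 2^n > M
    return l1_norm(n) if 2**n > M else 0.0

assert all(l1_norm(n) == 1.0 for n in range(50))
# uniform integrability fails: for every cutoff M, some X_n keeps full mass above M
assert tail_mass(30, 10**6) == 1.0
```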
Let $f$ be a cont. on $\mathbb{R}$ and define $G(x)=\int_0^{\sin (x)}f(t) dt $. Show that $G$ is differentiable on $\Bbb{R}$ and compute $G'$. | We have
$$
\begin{align}
G(x)
&=F(\sin(x))\\
&=\int_0^{\sin(x)}f(t)\,\mathrm{d}t\tag{1}
\end{align}
$$
where
$$
\begin{align}
F(u)
&=\int_0^uf(t)\,\mathrm{d}t\\
&=\underbrace{\int_{-1}^uf(t)\,\mathrm{d}t}_{\substack{\text{differentiable}\\\text{for $u\ge-1$}}}-\underbrace{\int_{-1}^0f(t)\,\mathrm{d}t}_\text{constant}\tag{2}
\end{align}
$$
The Fundamental Theorem of Calculus says
$$
F'(u)=f(u)\tag{3}
$$
The Chain Rule says that
$$
\begin{align}
G'(x)
&=F'(\sin(x))\cos(x)\\
&=f(\sin(x))\cos(x)\tag{4}
\end{align}
$$
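Formula (4) is easy to spot-check for a concrete continuous $f$ of my own choosing: with $f(t)=t^2$ we get $G(x)=\int_0^{\sin x}t^2\,dt=\sin^3(x)/3$ in closed form, and a central difference agrees with $f(\sin x)\cos x$.

```python
import math

# take f(t) = t^2, so G(x) = ∫_0^{sin x} t^2 dt = sin(x)^3 / 3 in closed form
f = lambda t: t * t
G = lambda x: math.sin(x) ** 3 / 3
G_prime = lambda x: f(math.sin(x)) * math.cos(x)   # formula (4)

h = 1e-6
for x in (0.3, 1.0, 2.5):
    central_diff = (G(x + h) - G(x - h)) / (2 * h)
    assert abs(central_diff - G_prime(x)) < 1e-8
```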
Technical Point: To apply Theorem 28.4 to Theorem 34.3 without any further work, we should choose the lower bound of integration in $(2)$ to be lower than $-1$.
However, with a bit of extra work, we can show that $f$ need only be continuous on $[-1,1]$. The only problem arises when computing $F'(\pm1)$. Since $F(u)$ will only see $u\in[-1,1]$, we only need to consider the one-sided derivative at $\pm1$.
Suppose $u_0=\pm1$, then for $u\in(-1,1)$ we can apply the Mean Value Theorem to find a $\xi$ between $u_0$ and $u$, hence in $(-1,1)$, so that
$$
\begin{align}
\frac{F(u)-F(u_0)}{u-u_0}
&=F'(\xi)\\
&=f(\xi)\tag{5}
\end{align}
$$
Therefore, since $f$ is continuous on $[-1,1]$, we have the one-sided derivative
$$
\begin{align}
F'(u_0)
&=\lim_{u\to u_0}\frac{F(u)-F(u_0)}{u-u_0}\\
&=\lim_{\xi\to u_0}f(\xi)\\
&=f(u_0)\tag{6}
\end{align}
$$
You are thinking correctly about applying the Fundamental Theorem of Calculus and the Chain Rule.
I am not sure why they are showing that $G$ is continuous. Since they have shown its derivative exists, it is already continuous.
The handwritten proof looks okay. It is a different approach, but it is valid. You don't need the Chain Rule with that approach. |
How do I calculate the integral of inequalities | It is not an "integral of inequalities." It is instead an integral with limits.
$$\int\limits_{x=0}^1 (\sqrt{x} - x^4)\ dx = \left( \frac{2 x^{3/2}}{3} - \frac{x^5}{5} \right) \Bigg|_{x=0}^1 = \frac{7}{15}$$ |
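The antiderivative evaluation is exact rational arithmetic, and a crude midpoint rule confirms it:

```python
from fractions import Fraction

exact = Fraction(2, 3) - Fraction(1, 5)    # [2 x^(3/2)/3 - x^5/5] from 0 to 1
assert exact == Fraction(7, 15)

# crude midpoint-rule check of the same integral
n = 200_000
approx = sum(((i + 0.5) / n) ** 0.5 - ((i + 0.5) / n) ** 4 for i in range(n)) / n
assert abs(approx - 7 / 15) < 1e-6
```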
A a PID, then dimension of a free A-module well defined. | Suppose $\{x_i\}_{i \in I}$ is a basis for $M$ and $\sum a_i x_i$ is in $pM$ where $a_i \in A$ and all but finitely many are non zero. Then we can write $\sum a_i x_i = pm = \sum pb_i x_i$ where $m=\Sigma b_i x_i$. Then we have $\sum (a_i-pb_i)x_i = 0$ and $a_i-pb_i=0$. The statement that two elements of the basis belong to the same equivalence class would be $x_i-x_j \in pM$. In this case $a_i=1$, $a_j=-1$, and zero otherwise. This then implies $b_k=0$ for $k \neq i,j$ and we have the following equalities
$$1=pb_i$$
$$-1=pb_j$$
However, by definition $p$ cannot be a unit, since it is a prime element. This results in a contradiction, so two elements of the basis cannot be in the same equivalence class.
Prove: $aH = bH \iff Ha^{-1} = Hb^{-1}$ | You are on the right way. In the last step you should arrive at: $Ha^{-1} \subset Hb^{-1}$ (not an equality). In a similar way you show $Hb^{-1} \subset Ha^{-1}$; you start with "there exists $h \in H$ such that $a = bh$" and then take the inverse of $b$ from the left...
Solving $u''(x)-\frac{u'(x)^2}{u(x)}+\frac{u'(x)}{x}=u(x)^2$ | If you have a non-linear ODE such as this, there is one last hope of finding a solution: plugging in a monomial $x^{\alpha}$ with $\alpha\in\mathbb{R}$, especially if the explicit dependence on $x$ occurs only through monomials, as in the third summand.
By trying this, you have to count the de-/increase of homogeneity in each term, i.e. $u''$ has homogeneity $\alpha-2$, $\frac{(u')^2}{u}$ has $2(\alpha-1)-\alpha$, $\frac{u'}{x}$ has $(\alpha-1)-1$ and finally $u^2$ has homogeneity $2\alpha$. Since every term on the L.H.S. provides the same homogeneity $\alpha-2$, we are looking for a monomial $x^\alpha$ satisfying $\alpha-2=2\alpha$, i.e. $\alpha=-2$.
We still need to find the right coefficient $\beta\in\mathbb{R}$ such that $u(x)=\beta x^ {-2}$ actually solves the equation. Again, applying yields $6\beta-4\beta^2-2\beta=\beta^2$ and since we exclude $\beta=0$ we get a solution with $\beta=\frac{4}{5}$. |