How did $\pi$ originate? | In around 250 BC, Archimedes expressed $\pi$ as a limit. He constructed sequences of inscribed and circumscribed polygons whose perimeters were lower and upper bounds of the value of $\pi$ respectively, such that the perimeters converged to the same value.
I do not know how rigorously the ancient Greeks were capable of proving that the perimeters truly were upper and lower bounds on $\pi$, that the difference in the perimeters converged to zero, and that the limits existed.
They did, however, understand the squeeze theorem (under the name "method of exhaustion"), so they understood in their own way that this construction expressed $\pi$ as a limit.
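For illustration only, here is a minimal Python sketch of the squeeze (it cheats by using `math.pi` to construct the angles, so it demonstrates the convergence rather than Archimedes' recursive doubling):

```python
# Perimeters of regular n-gons inscribed in / circumscribed about a
# circle of diameter 1: n*sin(pi/n) < pi < n*tan(pi/n).
import math

for n in [6, 12, 24, 48, 96]:
    lower = n * math.sin(math.pi / n)  # inscribed polygon perimeter
    upper = n * math.tan(math.pi / n)  # circumscribed polygon perimeter
    print(f"n={n:3d}: {lower:.6f} < pi < {upper:.6f}")
```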
Reference: Wikipedia |
Prove that the following program terminates | In 1972, J. H. Conway proved that a natural generalization of the Collatz problem is algorithmically undecidable.
Specifically, he considered functions of the form
$$g(n) = a_i n + b_i \quad\text{if } n\equiv i \pmod P$$
where $$a_0,b_0,\dots,a_{P-1},b_{P-1}$$ are rational numbers which are so chosen that g(n) is always integral.
The standard Collatz function is given by $$P=2, a_0 = 1/2, b_0 = 0, a_1 = 3, b_1 = 1.$$ Conway proved that the problem:
Given g and n, does the sequence of iterates $$g^k(n)$$ reach 1?
is undecidable, by representing the halting problem in this way. Closer to the Collatz problem is the following universally quantified problem:
Given g does the sequence of iterates $$g^k(n)$$ reach 1, for all n>0?
Modifying the condition in this way can make a problem either harder or easier to solve (intuitively, it is harder to justify a positive answer but might be easier to justify a negative one). Kurtz and Simon proved that the above problem is, in fact, undecidable and even higher in the arithmetical hierarchy, specifically $\Pi^0_2$-complete. This hardness result holds even if one restricts the class of functions $g$ by fixing the modulus $P$ to $6480$.
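For illustration, here is a minimal Python sketch of iterating such a $g$ with the standard Collatz parameters (the starting value $27$ is an arbitrary example):

```python
# Iterate g(n) = a_i*n + b_i for n = i (mod P), here with the standard
# Collatz data P=2, a_0=1/2, b_0=0, a_1=3, b_1=1.
from fractions import Fraction

P = 2
a = [Fraction(1, 2), Fraction(3)]
b = [Fraction(0), Fraction(1)]

def g(n):
    i = n % P
    return int(a[i] * n + b[i])

n, steps = 27, 0
while n != 1:
    n = g(n)
    steps += 1
print(steps)  # 111 iterates to reach 1 from 27
```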
Source: Wikipedia |
what's the difference between variable and process from a statistical point of view? | First, random and stochastic are synonyms.
A stochastic process is a collection of random variables $X=(X_i)_{i\in I}$.
You have two important cases:
the discrete case: $I=\mathbb{N}$ or $\mathbb{Z}$ (or a subset of those). Then the process is a sequence of random variables.
the continuous case : $I=\mathbb{R}$ or $[0,+\infty )$ for example. Then the stochastic process is a random function.
As you may know, a random variable is a measurable function $\Omega\rightarrow S$ where $S$ is a measurable space. Then there are two points of view regarding stochastic processes. You can see them as a collection of random variables indexed by the set $I$. Or you can see them as a single random variable with a bigger $S$.
For example, let's say you have a stochastic process $X=(X_n)_{n\in\mathbb{N}}$ with $X_n\in\mathbb{R}$. Then you can see $X$ as a random variable with $S$ the set of all real-valued sequences.
You can write $X=X(\omega ,n)$. If you fix $\omega$, you obtain a real-valued sequence which is called a realization of $X$ (or a trajectory). If you fix $n$ (the time, for example), you obtain the real-valued random variable $X_n$.
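To make the two viewpoints concrete, here is a small Python sketch (the simple random walk is just an illustrative choice of process):

```python
# A simple random walk viewed as X(omega, n): fixing omega (the seed)
# gives one realization; fixing n gives samples of the random variable X_n.
import random

def trajectory(seed, length=10):
    rng = random.Random(seed)
    x, path = 0.0, []
    for _ in range(length):
        x += rng.choice([-1.0, 1.0])
        path.append(x)
    return path

print(trajectory(seed=0))                         # one realization (omega fixed)
print([trajectory(seed=s)[5] for s in range(4)])  # samples of X_5 (n fixed)
```

Here the seed plays the role of $\omega$. |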
The characteristic function of the random time $N$ | The C.F. of $N$ is the following:
$\varphi_N(t) = E[e^{itN}]=\sum_{k=1}^\infty e^{itk}P(N=k)$
Where $P(N=k)=pq^{k-1}$.
This can be deduced by computing $P(N=1)$, $P(N=2)$, $P(N=3)$, and so on, in the following manner:
$P(N=1)= P(X_1=0) = \frac{1}{2}$
$P(N=2)= P(X_1\neq 0,X_2=0) = P(X_1 \neq 0)P(X_2=0)= \frac{1}{2}\frac{1}{2}$
$P(N=3)= P(X_1\neq 0,X_2 \neq 0,X_3 = 0) = P(X_1 \neq 0)P(X_2 \neq 0)P(X_3=0)= \frac{1}{2}\frac{1}{2}\frac{1}{2}$
And if one keeps on going one should be able to see that all this equates to:
$P(N=k)=(1-P(X_k =0))^{k-1}P(X_k=0)=pq^{k-1}$
where $q=(1-P(X_k=0))$ and $p=\frac{1}{2}$
This then gives:
$\varphi_N(t) = E[e^{itN}]=\sum_{k=1}^\infty e^{itk}pq^{k-1}$, which equals the C.F. of the Fs(1/2) distribution:
$\varphi_{Fs(1/2)} =\frac{\frac{1}{2}e^{it}}{1-\frac{1}{2}e^{it}}$
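As a numerical sanity check (a sketch; the value $t=0.7$ and the $200$-term truncation are arbitrary choices), a partial sum of the series matches the closed form:

```python
# Compare a truncated version of the series E[e^{itN}] with the closed
# form for Fs(1/2), at an arbitrary point t = 0.7.
import cmath

t, p = 0.7, 0.5
q = 1 - p
series = sum(cmath.exp(1j * t * k) * p * q ** (k - 1) for k in range(1, 200))
closed = 0.5 * cmath.exp(1j * t) / (1 - 0.5 * cmath.exp(1j * t))
print(abs(series - closed))  # ~0 (the tail is geometrically small)
```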
Also I want to thank Did for the help and patience. |
How to prove that $\frac{\cos\left(x\right)}{1+\sin\left(x\right)} = \frac{1-\sin\left(x\right)}{\cos\left(x\right)}$ | It's
$$\cos^2x=(1-\sin{x})(1+\sin{x})$$ or
$$\cos^2x=1-\sin^2x$$ or
$$\sin^2x+\cos^2x=1,$$ which is obvious. |
unique solution for mod equation | $$b-a=mc\\ d-e=nc\\ \gcd(b-a,d-e)=c\cdot\gcd(m,n)=1\\ \implies c=\gcd(m,n)=1$$ |
Prove the following divisibility statements without use of induction | Hints with modular arithmetic:
$$(a)\;\;3^3\equiv2\pmod 5\implies \left(3^{3}\right)^n\cdot 3\equiv 2^n\cdot 3\implies 3^{3n+1}+2^{n+1}\equiv 2^n(3+2)\equiv 0\pmod 5$$
$$(b)\;\;5^2\equiv4\pmod {21}\implies 4^{n+1}+5^{2n-1}\equiv4^n(4+5^{-1})\pmod{21}\;,\;$$
$$\text{but}\;5^{-1}\equiv-4\pmod{21}\ldots$$
Now you try something similar as the above for the third one |
Logarithms. Solve for m | $xy^m=yx^3\implies \log x + m \log y=\log y + 3 \log x \implies (m-1)\log y=2\log x$
$\implies m-1=\dfrac{2\log x}{\log y}\implies m=\dfrac{2\log x}{\log y}+1$ |
Robinson's Consistency Theorem for first order languages | This is a short answer. (I am afraid it requires some background knowledge.)
Let $M_1\models T_1$ and $M_2\models T_2$ be saturated structures of the same cardinality. The reducts of $M_1$ and $M_2$ to $L$ are saturated models of the same complete theory $T$, hence there is an $L$-isomorphism $f:M_1\to M_2$. There is a (unique) expansion of $M_2$ that makes $f$ an $L_1$-isomorphism. This expansion is a model of $T_1\cup T_2$. |
Find all the solutions such that $\frac{∂^2g}{∂u∂v} = 0$ | $\frac{\partial^2g}{\partial u \partial v} = 0$
Integrate with respect to $v$:
$\frac{\partial g}{\partial u} = f(u)$
Integrate again, this time with respect to $u$:
$g(u,v) = h(u)+s(v)$, where $s$ is any function of $v$ and $h$ (an antiderivative of $f$) is any function of $u$. |
Factor group of profinite group | There are some details to think about in the following, but this is a hint:
You want, by definition, to prove that every connected component is a singleton. This follows if you show that the connected component that contains the identity is a singleton. Let $C$ be this connected component containing $1$ in $G/N$. You want to show that $C= \{1\}$.
Let
$$\pi : G \to G / N
$$
be the projection map.
Now then, let $x\in G/N$, $x\neq 1$. The quotient $G/N$ is Hausdorff, so there is an open neighborhood $U$ of $1$ such that $x\notin U$. So then $\pi^{-1}(U)$ is an open neighborhood of $1$ in $G$. Now $G$ is profinite, so $G$ has a neighborhood basis of $1$ consisting of compact open subgroups. Let $V$ be a compact open subgroup of $G$ such that $V \subseteq \pi^{-1}(U)$.
Now $\pi(V)$ is open and compact in $G/N$. And $\pi(V)$ contains the connected component $C$. And $x \notin \pi(V)$.
So we have shown that given any $x\neq 1$ in $G/N$, $x$ is not contained in the connected component $C$ of $1$. So $C = \{1\}$. |
Example on domains of holomorphy in $\mathbb C^n$ | After translating and rotating, you may as well assume $p = 0$ and $\ell(z) = z_{n}$ (i.e., $S_{p}(\partial D) = \mathbf{C}^{n-1} \times \{0\}$), so that $f(z) = \frac{1}{z_{n}}$. Since $f$ is unbounded in every neighborhood of the origin $0$, $f$ does not extend holomorphically to $0$. |
Let S be a finite linearly independent set of vectors in V, and let T be a subset of S. Show that T is a linearly independent set of vectors in V. | Take any linear combination of elements of $T$ which turns out to be zero. Add the rest of the elements of $S-T$ with coefficient $0$. Now can you complete it? |
Find a>1 s.t. $a^x = x$ has a unique solution | Note that $a^x = x \iff a = x^{1/x}$. So, there will be a solution to your problem if and only if $a$ is in the image of the function $f(x) = x^{1/x}$ (over the domain $x > 0$). Note that the graph $y = f(x)$ achieves a maximum somewhere, then levels off to its asymptote at $y = 1$.
This problem is a bit easier to solve with calculus. In particular, it suffices to find the maximum value of $f(x)$.
Note that $f(x)$ achieves its maximum iff $\ln(f(x))$ achieves its maximum. So, we consider the function
$$
g(x) = \ln(f(x)) = \frac{\ln x}{x}
$$
We find
$$
g'(x) = \frac{1 - \ln x}{x^2}
$$
Thus, $g$ has a unique critical point when $x = e$, which means that this must be where $g$ achieves its maximum. Thus, $f$ achieves its maximum at $x = e$.
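A quick numerical check (a sketch) of this:

```python
# f(x) = x**(1/x) should peak at x = e with value e**(1/e) ~ 1.44467.
import math

f = lambda x: x ** (1 / x)
print(f(math.e), math.e ** (1 / math.e))              # both ~1.444668
print(f(math.e - 0.1) < f(math.e) > f(math.e + 0.1))  # True
```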
Thus, the maximum value of $a$ such that $f(x) = a$ has a solution is
$$
a = e^{1/e} \approx 1.44467
$$ |
Do Gaussians have a Compact Support | You are right: Gaussians do not have compact support.
With regards to your paper, it is not rare in statistical application to find cases where methods are applied outside their domain of rigorous validity.
Of course, this does not necessarily yield results that are incorrect from a practical point of view.
Imagine taking a Gaussian and truncating it beyond some threshold. What error would such an operation induce?
Under suitable conditions, it might well be fully negligible.
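For instance, here is a small Python sketch (using a standard normal for definiteness) of how little mass truncation at $\pm c$ discards:

```python
# P(|Z| > c) for Z ~ N(0,1): the mass lost by truncating at +-c.
import math

def tail_mass(c):
    return math.erfc(c / math.sqrt(2))

for c in [2, 4, 6]:
    print(c, tail_mass(c))  # ~4.6e-2, ~6.3e-5, ~2.0e-9
```

Even a few standard deviations out, the discarded mass is negligible for many practical purposes. |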
Finding the matrix of the adjoint transformation | After you changed basis back to $B$, you wrote:
Now, as B is orthonormal, we have $A_{f∗,B} = A^∗_{f,B} = \begin{pmatrix} −2 & 1 \\ -1 & 2 \end{pmatrix}$.
This was not correct. You switched the signs on the diagonal entries of the matrix. This is not the transpose of the matrix you had on the line before. It should instead be $\begin {pmatrix} 2 & 1 \\ -1 & -2 \end{pmatrix}$. Now when you conjugate by $T$ you will get the matrix that your teacher said should be the answer. |
Proposition 17.11 - Tu's Introduction to Manifolds | It would be helpful to write the pullback definition more explicitly. If $\omega$ is a smooth $1$-form on $M$ and $F:N \to M$ is smooth, then $F^*{\omega}$ is a $1$-form defined by: for all $p \in N$ and all $X_p \in T_pN$,
\begin{align}
(F^*\omega)(p)(X_p) &:= \omega(F(p))[F_{*,p} X_p]
\end{align}
I believe you're having issues because you haven't indicated where the forms are being evaluated. Recall that since $\omega$ is a one-form on $M$ it assigns to each point $q \in M$ an element $\omega(q) \in T^*_qM$. So, you first have to evaluate a one-form at a point of the manifold, and after that apply the whole thing to a tangent vector; the final result being a real number.
So, for all $p \in N$ and all $X_p \in T_pN$, we have
\begin{align}
\bigg(F^*(g \omega)(p) \bigg)(X_p) &:= \bigg((g \omega)(F(p)) \bigg)[F_{*,p} X_p] \\
&:= \bigg(g(F(p)) \cdot \omega(F(p)) \bigg) [F_{*,p} X_p] \\
&:= g(F(p)) \cdot \bigg( \omega(F(p))[F_{*,p} X_p]\bigg) \\
&:= (F^*g)(p) \cdot (F^*\omega)(p)[X_p] \\
&:= \bigg( (F^*g)(p) \cdot (F^*\omega)(p) \bigg)[X_p] \\
&:= \bigg( \left( F^*g \cdot F^* \omega \right) (p) \bigg)[X_p]
\end{align}
Thus, it follows that $F^*(g \omega) = F^*g \cdot F^* \omega$. I'll leave it to you to figure out why I put in so many equal signs. Each of them is true either by definition of pullback $^*$, or by definition of product of a function and form, or by definition of scalar multiplication in the cotangent space $T_{F(p)}^*M$ etc. Hopefully the bracketing makes it clearer what is being evaluated on what (although for convenience, I may have dropped some brackets). |
Show that $E=\{(x,\alpha)\mid 0\leq \alpha<|f(x)|\}$ is measurable if $f$ is measurable. | I think there is a typo in your proof: instead of
$E=\bigcup_{i=1}^n\big((\mathbb R\cap F_i)\times [0,a_i[\big)\cup \bigcap_{i=1}^n \big((\mathbb R \cap F_i)\times \{0\}\big)$
it should read
$E=\bigcup_{i=1}^n\big((\mathbb R\cap F_i)\times [0,a_i[\big)\cup \bigcap_{i=1}^n \big((\mathbb R \cap F_i^{\color{red}{c}})\times \{0\}\big).$
(Edit: The OP has fixed it.)
Here is an idea for an alternative proof: Consider the measurable space $(\mathbb{R}^2,\mathcal{B}(\mathbb{R}^2))$. Since $f$ is measurable, it is not difficult to see that
$$(\mathbb{R}^2,\mathcal{B}(\mathbb{R}^2)) \ni (x,\alpha) \mapsto |f(x)| \in (\mathbb{R},\mathcal{B}(\mathbb{R}))$$
is measurable. Moreover, also the mapping
$$(\mathbb{R}^2,\mathcal{B}(\mathbb{R}^2)) \ni (x,\alpha) \mapsto \alpha \in (\mathbb{R},\mathcal{B}(\mathbb{R}))$$
is measurable. Hence,
$$(\mathbb{R}^2,\mathcal{B}(\mathbb{R}^2)) \ni (x,\alpha) \mapsto g(\alpha,x) := |f(x)| -\alpha \in (\mathbb{R},\mathcal{B}(\mathbb{R}))$$
is measurable. This implies that
$$E = \{(x,\alpha); g(x,\alpha)>0\} \cap \{(x,\alpha); \alpha \geq 0\} = g^{-1}((0,\infty)) \cap (\mathbb{R} \times [0,\infty))$$
is measurable. |
Supremum of measurable functions | As Nate Eldredge pointed out, the difference is that the first supremum is over a finite set while the second is over a countable set. The second statement can be deduced from the first one using monotonicity, but it is a priori stronger.
It is not a big problem if we know that $f\colon X\to\mathbb R$ is Borel measurable if and only if $\{x\mid f(x)\leqslant a\}$ is measurable for any $a\in\mathbb R$. Then
$$\left\{x\mid \sup_nf_n(x)\leqslant a\right\}=\bigcap_n\{x\mid f_n(x)\leqslant a\}$$
and the RHS is a countable intersection of measurable sets, hence a measurable set. |
Hyperbolic geometry when the curvature is constant and negative but not -1 | The quick way to get the correct answer is dimensional analysis: to get an area from a dimensionless angle defect you need to multiply by something with units length$^2$, so Wikipedia's formula $A = (\pi - \sum\theta)R^2$ is correct.
Sommerville is a very old text and uses $k$ for the negative inverse curvature - see e.g. page 75, where it states "The formulae of hyperbolic trigonometry become those of euclidean plane geometry as $k \to \infty$."
The proof is going to depend on how you are constructing things - from the Riemannian perspective you just calculate how curvatures and areas behave under multiplication of the metric by a constant: covariant derivatives are unchanged, so Ricci curvature is unchanged, so scalar curvature scales as the inverse metric. Area scales as the metric. Thus area is inversely proportional to curvature under scaling. |
"All phase plane solution points remain stationary as $t$ increases"? | The matrix $A=0$ makes every initial condition to be stationary.
The solution of the start edo is $y(t)=e^{At}y_0$, then for every point be stationary we need that $e^{At}=I$, this means that $At=0$, for every $t$, then the matrix $A=0$. |
Is there a matrix $X$ possible such that $AXB=O$? | It might be helpful to think about it this way: in the expression
$$
AXB,
$$
we have a composition of three maps. First, $B$ is a map from $3$-dimensional space ($\mathbb{R}^3$) to $7$-dimensional space ($\mathbb{R}^7$). That means the output of $B$, the range, must only be at most a $3$-dimensional subspace of $\mathbb{R}^7$ (in particular there are lots of "unused" dimensions in $\mathbb{R}^7$ that don't occur in the output of $B$). Next, $X$ is a map from $\mathbb{R}^7$ to $\mathbb{R}^4$, and $A$ is a map from $\mathbb{R}^4$ into $\mathbb{R}^9$.
In order to force the whole product to be $0$, what should $X$ be? It should send the output of $B$ to $0$, as that way any vector that we apply $AXB$ to will end up $0$ ($v$ gets sent to $Bv$, which then gets sent to $0$, and $A$ applied to $0$ is still $0$.) But the output of $B$ is at most $3$ dimensions out of $7$ that we have to work with; so we can have $X$ do something else on the other $4$ dimensions.
Therefore, the answer is yes, this is possible with $X \ne 0$: just pick $X$ so that it sends the output of $B$ to $0$, but sends the other $4$ dimensions to something nonzero.
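Here is a minimal NumPy sketch of this construction (assuming NumPy is available; the random matrices and the seed are arbitrary choices):

```python
# Build X != 0 with A X B = 0: compose any map with the orthogonal
# projection that annihilates range(B).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((9, 4))
B = rng.standard_normal((7, 3))
M = rng.standard_normal((4, 7))   # an arbitrary map R^7 -> R^4

P = np.eye(7) - B @ np.linalg.inv(B.T @ B) @ B.T  # kills range(B)
X = M @ P                                         # sends range(B) to 0

print(np.linalg.norm(X) > 0)              # True: X is nonzero
print(np.linalg.norm(A @ X @ B) < 1e-10)  # True up to rounding
```

Generically $X=MP$ is nonzero on the remaining $4$ dimensions, exactly as described above. |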
Why a infinite dimensional vector space over $\mathbb F_2$ is uncountable? | This is false.
Let $V = \bigoplus_{n=1}^{\infty} \Bbb F_2$. Then $V$ is infinite-dimensional over $\Bbb F_2$.
Let $A_n = \{ (a_j)\in V: a_j=0$ for $j \geq n\}$. Then each $A_n$ is finite (with cardinality $2^{n-1}$). Moreover $V = \bigcup_n A_n$. Thus $V$ is countable. |
Dirac delta function in polar coordinates | In short: For any $\varphi \in C_c^\infty(\mathbb R^2)$ you want
$$
\varphi^{\text{rect}}(x_0, y_0)
= \int_{x=-\infty}^{\infty} \int_{y=-\infty}^{\infty} \delta_{(x_0,y_0)}^{\text{rect}}(x-x_0, y-y_0) \, \varphi^{\text{rect}}(x, y) \, dx \, dy \\
= \int_{r=0}^{\infty} \int_{\theta=0}^{2\pi} \delta_{(r_0,\theta_0)}^{\text{polar}}(r,\theta) \, \varphi^{\text{polar}}(r,\theta)\,r\,dr\,d\theta,
$$
where the superscripts $\textit{rect}$ and $\textit{polar}$ denote rectangular and polar representations.
Since
$$
\varphi^{\text{rect}}(x_0, y_0) = \varphi^{\text{polar}}(r_0, \theta_0) = \int_{r=0}^{\infty} \int_{\theta=0}^{2\pi} \delta(r-r_0)\,\delta(\theta-\theta_0)\,\varphi^{\text{polar}}(r,\theta)\,dr\,d\theta
$$
it follows that you should have
$$
r\,\delta_{(r_0,\theta_0)}^{\text{polar}}(r,\theta) = \delta(r-r_0)\,\delta(\theta-\theta_0),
$$
i.e.
$$
\delta_{(r_0,\theta_0)}^{\text{polar}}(r,\theta)
= r^{-1} \delta(r-r_0)\,\delta(\theta-\theta_0)
= r_0^{-1} \delta(r-r_0)\,\delta(\theta-\theta_0)
.
$$ |
Circular permutations in a particular order | Yep, your work looks good to me.
Another way to do the first problem is to just count all the ways they can sit in order. There are $6$ choices for person $A$ to sit, and for each choice of seat there are only $2$ viable arrangements of the rest of them. Hence the probability is $\dfrac{12}{6!} = \dfrac{1}{60}$. |
Is there a non trivial normal subgroup of a group $G$, where $|G|=pm, \ \gcd(p,m)=1$? | Answer to your second question: Since $H$ is normal in $G=Gal(K/\mathbb{Q})$, $H$ corresponds to a field $E$ such that $\mathbb{Q} \subseteq E \subseteq K$ (more precisely $E=K^{H}$). Now $H$ being normal in $G$, we must have that $E$ is a normal extension of $\mathbb{Q}$. (Because $\bigcap_{g\in G} gHg^{-1}$ corresponds to $\prod_{\sigma \in G}\sigma(E)$, and $H$ being normal in $G$ we have $\prod_{\sigma \in G}\sigma(E)=E$, which forces $E/\mathbb{Q}$ to be a normal extension.) And as $K/\mathbb{Q}$ is itself separable (as it is a Galois extension), $E/\mathbb{Q}$ is also separable and hence Galois of degree $p$.
So now by the Primitive Element theorem there exists $b\in E$ such that $E=\mathbb{Q}(b)$. Now if $b$ is a root of $f(x)$, then since $E/\mathbb{Q}$ is normal, $E$ would become a splitting field of $f(x)$, forcing $m=1$. So WLOG $b$ is not a root of $f(x)$. Now pick a root $a\in F$ of $f(x)$ and consider $\mathbb{Q}(a,b)/\mathbb{Q}$.
Note that $[\mathbb{Q}(b):\mathbb{Q}]=p$, so if we can show $[\mathbb{Q}(a,b):\mathbb{Q}(b)]=p$ then we will get $p^2 \mid [K:\mathbb{Q}]$, which is a contradiction from the first part. So we're just left to prove $[\mathbb{Q}(a,b):\mathbb{Q}(b)]=p$.
Suppose not, which means $f(x)$ is reducible over $\mathbb{Q}(b)$. Now note that $f(x)$ is irreducible over $\mathbb{Q}$ and $\operatorname{char}(\mathbb{Q})=0$, so $f(x)$ is separable, so it has no multiple roots in any extension of $\mathbb{Q}$, and so $f$ must split into linear factors over $\mathbb{Q}(b)[x]$ (if not, then it has a factor of the form $g(x)^r$ with $r>1$ and $g$ irreducible over $\mathbb{Q}(b)[x]$, but then any root of $g$ in some extension comes as a root of $f$ with multiplicity more than $1$, contradiction), and so $b$ is a root of $f$, which contradicts our assumption that $b$ is not a root of $f$, and so we're done. |
Ergodic action on quotient | Let $\{U_n\}_n$ be a countable base (of open sets). Then, $Hx$ is dense if and only if for each $n$, $x \in HU_n$. So it suffices to show $\cap_n HU_n$ has measure $1$, and so it suffices to show each $HU_n$ has measure $1$. But $H(HU_n) = HU_n$, so by ergodicity, the measure of $HU_n$ is $0$ or $1$, so it's $1$. |
Prove $\Delta=\left|\begin{smallmatrix} 3x&-x+y&-x+z\\ -y+x&3y&-y+z\\-z+x&-z+y&3z\end{smallmatrix}\right|=3(x+y+z)(xy+yz+zx)$ | As you have mentioned, $x+y+z$ is a factor. As the determinant is symmetric, the factor other than $x+y+z$ must also be symmetric. As it is quadratic, it must be of the form $a(x^2+y^2+z^2)+b(xy+yz+zx)$. (We don't have to guess: $a(x^2+y^2+z^2)+b(xy+yz+zx)$ is the general form of a symmetric quadratic polynomial in $x$, $y$, $z$.)
Put $x=1$, $y=z=0$, we have $a=0$.
Put $x=y=z=1$, we have $b=3$. |
Why is $\cos((\omega+\alpha\cos(\omega' t))t)$ the wrong model for frequency modulation? | Let $\theta(t) = \omega t + t \alpha \cos (\xi t)$, then
$\theta'(t) = \omega + \alpha \cos (\xi t) - t \alpha \xi \sin (\xi t)$, so you
can see that the instantaneous frequency is unbounded. |
10 and 12 as the order of permutations in $S_7$ | To directly answer your question, the product of cycles
$$(12)(34567)$$
is an element of order $10$ in $S_7$ and the product of cycles
$$(123)(4567)$$ is an element of order $12$.
To see why, you can compute it directly. Or you can reason that since $(123)$ has order $3$, $(4567)$ has order $4$, and $(123)$ and $(4567)$ commute, the order of $(123)(4567) = \operatorname{lcm}(3,4)$.
In general for $S_n$, to answer this question you are looking for partitions of $n$. That is, lists of whole numbers that add up to $n$. For $n=7$, the partitions are $\{7\}, \{6,1\}, \{5,2\}, \{5,1,1\}, \ldots$ and so on. Each of these corresponds with a choice of cycle sizes for a permutation in $S_7$: $\{7\}$ corresponds with permutations like $(1234567)$ and $\{5,2\}$ corresponds with permutations like $(12)(34567)$, etc. The least common multiple of a given partition gives you the order of elements with the corresponding cycle decomposition.
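Here is a short Python sketch (the partitions generator is just an illustrative implementation) enumerating the attainable orders:

```python
# Element orders in S_7 are exactly the lcms of partitions of 7.
from math import lcm

def partitions(n, largest=None):
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

print(sorted({lcm(*p) for p in partitions(7)}))
# [1, 2, 3, 4, 5, 6, 7, 10, 12] -- 10 from {5,2}, 12 from {4,3}
```

The maximal element order in $S_7$ is therefore $12$. |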
Solve the equation $28^x = 19^y+87^z$ in integers | This is number $59$ in the PEN H problems session. It has solutions online (due to TomciO):
I will only argue for non-negative integers (for negatives it's nothing interesting).
It's clear that $ x \geq 1$ and $ z \geq 2$ (working mod $ 9$; keep in mind that $ 87 = 3 \cdot 29$). Suppose now that $ y \geq 1$ and $ z \geq 3$.
From the binomial expansion it easily follows that $ 27|(9k+1)^l - 1$ iff $ 3|l$. Therefore we must have $ 3|y$ since $ 19 = 18 + 1$, $ 28=27+1$ and $ 27|87^z$.
Now we look mod $ 29$. Because $ 29|87$ we have the congruence $ (-1)^x \equiv 19^y \pmod{29}$. It's easy to check that $ 19^y \equiv 1 \pmod{29}$ is equivalent to $ 28|y$ (we have to try $ y=2, 4, 7, 14$) and it follows that in order to have $ 19^y \equiv \pm 1 \pmod{29}$ we must have $ y \equiv 0 \pmod{28}$ or $ y \equiv 14 \pmod{28}$, anyway we have $ 14|y$.
So we have obtained that $ 42|y$. In particular $ 19^y \equiv 1 \pmod{7}$. $ 87 \equiv 3 \pmod{7}$ and we check that $ 3^z \equiv -1 \pmod{7}$ is equivalent $ z \equiv 3 \pmod{6}$.
Now mod $ 13$. We have $ 2^x \equiv 6^y + 9^z \equiv 6^y + 3^{2z} \pmod{13}$. Since $ 3|z$ and $ 3^3 \equiv 1 \pmod{13}$ it follows that $ 3^{2z} \equiv 1 \pmod{13}$. There are two possibilites: $ y \equiv 6 \pmod{12}$ or $ y \equiv 0 \pmod{12}$. In first case we get that the right side is divisible by $ 13$ which is impossible, and in the second we have $ 2^x \equiv 2 \pmod{13}$ and this is equivalent to $ x \equiv 1 \pmod{12}$.
We finish the solution with mod $ 19$. Since $ 3|z$ and $ 11^3 \equiv 1 \pmod{19}$ we have that $ 87^z \equiv 11^z \equiv 1 \pmod{19}$. It follows that $ 28^x \equiv 1 \pmod{19}$. But since $ x \equiv 1 \pmod{6}$ it follows that $ 28^x \in \{28, 28^7, 28^{13}\} \pmod{19}$ but this numbers give residues: $ 9, 4, 6$ which is a contradiction.
We are left with the cases $ y=0$ and $ z=2$. Let's start with the first one.
From the above reasoning the relation $ 3|z$ still holds (mod $ 7$). So let $ z=3k$, then
$ 28^x = 87^{3k}+1 = (87^{k}+1)(87^{2k} - 87^{k} + 1)$.
It's easy to see that $ 87^{k} + 1$ and $ 87^{2k}-87^{k}+1$ are coprime, so
$ 87^{k} + 1 = 4^x$ and $ 87^{2k} - 87^{k}+1 = 7^x$
because the second number is bigger. But:
$ 87^{2k} - 87^{k} + 1 = (4^x - 1)^2 - (4^x-1) + 1 = 4^{2x} - 3 \cdot 4^x + 3 = 7^x$,
which is a contradiction mod $ 8$, since obviously $ x \geq 2$.
Now consider the case $ z=2$. We still have that $ 14|y$ (mod $ 29$), so in particular $ 2|y$. But that's a contradiction mod $ 4$ since the right side is congruent to $ 2 \pmod{4}$.
So the equation doesn't have a solution.
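As a quick sanity check (a sketch only; a finite search box proves nothing by itself), a brute-force search finds no solutions in a small range:

```python
# Brute-force check: no solutions of 28**x == 19**y + 87**z in a small box.
sols = [(x, y, z)
        for x in range(1, 15)
        for y in range(0, 15)
        for z in range(2, 10)
        if 28 ** x == 19 ** y + 87 ** z]
print(sols)  # []
```

This is consistent with the impossibility proof above. |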
Integration with $e^{\cos(t)}$ | It looks like you are trying to calculate
$$
\int_\gamma P\,dx+Q\,dy=\int_\gamma(2xy+y^2e^x+2x)\,dx+(x^2+2ye^x+1)\,dy
$$
where $\gamma$ is a quarter circle of radius one, starting in $(1,0)$ ending in $(0,1)$.
Now, since it happens that
$$
\frac{\partial Q}{\partial x}=\frac{\partial P}{\partial y}
$$
you will actually have a potential function, which simplifies the calculations very much. Can you proceed by this hint, or do you need further help?
Edit
The next step is to find a potential function, i.e. a function $U$ that satisfies
$$
\frac{\partial U}{\partial x}=P,\quad\text{and}\quad \frac{\partial U}{\partial y}=Q.
$$
In this case, one can almost find one by staring. Your book surely gives a method on how to do it in general. One potential is:
$$U(x,y)=x^2y+y^2e^x+x^2+y$$
Once that is done, your integral is calculated by
$$
\int_\gamma(2xy+y^2e^x+2x)\,dx+(x^2+2ye^x+1)\,dy=U(0,1)-U(1,0).
$$
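A quick verification sketch with SymPy (assumed available) that $U$ is indeed a potential, and an evaluation of the difference:

```python
# Check dU/dx = P, dU/dy = Q, and evaluate U(0,1) - U(1,0).
import sympy as sp

x, y = sp.symbols('x y')
P = 2*x*y + y**2*sp.exp(x) + 2*x
Q = x**2 + 2*y*sp.exp(x) + 1
U = x**2*y + y**2*sp.exp(x) + x**2 + y

print(sp.simplify(sp.diff(U, x) - P))               # 0
print(sp.simplify(sp.diff(U, y) - Q))               # 0
print(U.subs({x: 0, y: 1}) - U.subs({x: 1, y: 0}))  # 1
```

So the line integral equals $U(0,1)-U(1,0)=1$. |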
sum function and differential equation | \begin{align*}
y&=\sum_{n=0}\dfrac{3^{n}}{7^{n+1}}x^{2n}\\
y'&=\sum_{n=1}\dfrac{2\cdot 3^{n}n}{7^{n+1}}x^{2n-1}\\
-3x^{2}y'&=\sum_{n=1}\dfrac{-6\cdot 3^{n}n}{7^{n+1}}x^{2n+1}\\
(7-3x^{2})y'&=\dfrac{6}{7}x+\sum_{n=1}\dfrac{2\cdot 3^{n+1}(n+1)}{7^{n+1}}x^{2n+1}-\sum_{n=1}\dfrac{6\cdot 3^{n}n}{7^{n+1}}x^{2n+1}\\
&=\dfrac{6}{7}x+\sum_{n=1}\left(\dfrac{2\cdot 3^{n+1}(n+1)}{7^{n+1}}-\dfrac{6\cdot 3^{n}n}{7^{n+1}}\right)x^{2n+1}\\
&=\dfrac{6}{7}x+\sum_{n=1}\dfrac{6\cdot 3^{n}n+6\cdot 3^{n}-6\cdot 3^{n}n}{7^{n+1}}x^{2n+1}\\
&=\dfrac{6}{7}x+\sum_{n=1}\dfrac{6\cdot 3^{n}}{7^{n+1}}x^{2n+1}\\
&=\sum_{n=0}\dfrac{6\cdot 3^{n}}{7^{n+1}}x^{2n+1}\\
-6xy&=\sum_{n=0}\dfrac{-6\cdot 3^{n}}{7^{n+1}}x^{2n+1}\\
(7-3x^{2})y'-6xy&=0.
\end{align*} |
Determination of a constant based on continuity | If we require continuity we have
$$\lim_{x\to0^+}f(x)=b=\lim_{x\to0^-}f(x)=\frac{-b}{b+1}$$
Can you solve this equation to find the value of $b$? |
One-point compactification problem | Thank you all for your answers. I now write my answer and hope someone will give me some feedback.
$A=[0,1)\cup[2,3)$. Then I chose $f:[0,2]\to A^+$. I want to show that the map is bijective and continuous.
Bijective because:
$f:[0,2]\to A^+$
$[0,1)\to[0,1)$
$(1,2]\to[2,3)$
$1\mapsto x_0$
where $x_0$ is the point at infinity in the one-point compactification $A^+$ of $A$.
Is this right?
Continuous:
Take $U$ open in $[0,2]$; I want to show that the image is open in $A^+$.
But from here I am confused. |
Integral of $\frac{1}{\sin z}$ along a path | You can write $$\frac{1}{\sin z} = \frac{g(z)}{z}$$ where $g(z) := \frac{z}{\sin z}$ is a holomorphic function inside of $\gamma$ if we set $g(0)$ to be $1$. (L'Hopital's formula, etc.).
Cauchy's formula gives $$\int_{\gamma} \frac{dz}{\sin z} = \int_{\gamma} \frac{g(z)}{z} = 2\pi i g(0) = 2\pi i$$ |
$\lim_{n \rightarrow \infty} f_n(x) = n^2 \left( 1- \cos \frac{x^3 - 1}{n} \right)$ | Assuming that $x\neq 1$ (the limit is easy to calculate for $x=1$), you can calculate the limit by writing $f_n(x)$ as $$f_n(x) = \frac{1-\cos\frac{x^3-1}{n}}{\frac{1}{n^2}},$$
then applying L'Hospital's rule twice.
Alternatively, you can use Taylor's expansion of $\cos$ to cancel some terms out. |
How invariance is formulated mathematically? | One way of saying that an operation on smooth manifolds, such as the operation of considering its tangent bundle or its space of differential forms, is really invariant under change of coordinates is to say that it's functorial with respect to smooth maps. This implies, but is strictly stronger than, saying that it behaves nicely under diffeomorphisms, which I think is something like what what invariance means in general relativity.
In some sense, in mathematics the trick is that if you never work in coordinates then you aren't even allowed to say things that aren't invariant, so you get invariance for free. Once you introduce coordinates, you've made an extra choice, and now you might have to worry about how what you're saying changes when you change that extra choice. |
Show that If $B\subset R$ be a Borel set then $t+B$ also be Borel set. for any $t\in R$. where R is real numbers | $t+B=f^{-1}(B)$ where $f:x\mapsto x-t$ is clearly measurable (sum of the identity map $x\mapsto x$ and the constant map $x\mapsto -t$), hence $t+B$ is a Borel set. |
Solving $e^{\sin(z)}=1$ in the complex plane | To continue with your line of reasoning (which so far is correct), you need to solve $e^{2iz} + 4 k \pi e^{iz} - 1 = 0$. With $x = e^{iz}$, this becomes the quadratic equation $x^2 + 4k\pi x - 1 = 0$. The discriminant is $\Delta = (4k\pi)^2 + 4 = 4 (4k^2\pi^2 + 1)$, which is always positive ($k$ is real). The two solutions are therefore $x = -2k\pi + \sqrt{4k^2\pi^2+1}$ and $x = -2k\pi - \sqrt{4k^2\pi^2 + 1}$.
In the first case we want to solve $e^{iz} = -2k\pi + \sqrt{4k^2\pi^2+1}$. Since $4k^2\pi^2+1 = (-2k\pi)^2 + 1> (-2k\pi)^2$, this is a positive real number, and so the first batch of solutions is
$$z = i\log(-2k\pi + \sqrt{4k^2\pi^2+1}) + 2 \pi l, \text{ for some integers } k, l.$$
In the second case, the equation to solve is $e^{iz} = -2k\pi - \sqrt{4k^2\pi^2+1}$. This is a negative real number, and thus the second batch of solutions is:
$$z = \pi + i\log(2k\pi+\sqrt{4k^2\pi^2+1}) + 2 \pi l, \text{ for some integers } k, l.$$
And this is the complete set of solutions.
PS: You can express that a bit more concisely by noticing that $\log(2k\pi + \sqrt{4k^2\pi^2+1}) = \operatorname{argsinh}(2k\pi)$, and so the set of solutions becomes
$$e^{\sin z} = 1 \iff z \in \{ i\operatorname{argsinh}(-2k\pi) + 2 \pi l \mid k,l \in \mathbb{Z} \} \cup \{ \pi + i\operatorname{argsinh}(2k\pi) + 2 \pi l \mid k,l \in \mathbb{Z} \}.$$ |
$f$ is monotone increasing in $\mathbb{R}$ . Prove that the set $\{x: \forall \epsilon > 0, f(x+\epsilon)> f(x-\epsilon) \}$ is closed. | Hint: show instead that the set $$\left\{x\in\mathbb R\,\big{|}\,\exists\,\varepsilon>0:f(x+\varepsilon)\leq f(x-\varepsilon)\right\}$$ is open. |
Calculate odds from statistics on multi-factor events | As the comments say, there is not enough information about the interaction between individuals.
Here is a possible approach which gives answers (probably wrong, but perhaps good enough for your purposes).
Suppose
the probability of a goal with an average kicker and average goalie is $a$, so the probability of a save is $1-a$, and the odds of a goal are $\frac{a}{1-a}$
the probability of a goal with a particular kicker and an average goalie is $k$: then the odds of a goal are $\frac{k}{1-k}$
the probability of a save (rather than of a goal) with a particular goalie and an average kicker is $g$: then the odds of a goal are $\frac{1-g}{g}$
then you could guess that, with that particular kicker and particular goalie, the odds of a goal might be close to $\dfrac{\frac{k}{1-k}\frac{1-g}{g}}{\frac{a}{1-a}}$ and the corresponding probability of a goal $\dfrac{k(1-g)(1-a)}{(1-k)ga + k(1-g)(1-a)}$
The following table then gives the calculated probabilities of goals associated with your example plus an example average kicker and goalie (numbers with four decimal places are rounded)
              I        J        K        L
   a=0.72   g=0.8    g=0.3    g=0.1    g=0.28
 A k=0.6    0.1273   0.5765   0.84     0.6
 B k=0.3    0.04     0.28     0.6      0.3
 C k=0.1    0.0107   0.0916   0.28     0.1
 D k=0.72   0.2      0.7      0.9      0.72
Note that both B v. J and C v. K, each with $k=g$, give probabilities using this approach of $1-a$ rather than the naively intuitive $0.5$.
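A small Python sketch of the guessed formula, reproducing two entries of the table (function name is illustrative):

```python
# p = k(1-g)(1-a) / ((1-k)ga + k(1-g)(1-a)), the combined goal probability.
def p_goal(k, g, a=0.72):
    num = k * (1 - g) * (1 - a)
    return num / ((1 - k) * g * a + num)

print(round(p_goal(0.3, 0.8), 4))  # 0.04   (row B, column I)
print(round(p_goal(0.6, 0.3), 4))  # 0.5765 (row A, column J)
```

As a check, `p_goal(0.3, 0.3)` returns $0.28 = 1-a$, matching the remark above. |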
Extending transvections/generating the symplectic group | Assume that $V$ is the whole space, of dimension $2n$. Then the orthogonal space you mentioned (call it $W$) is a $(2n-2)$-dimensional symplectic vector space (it is very easy to check the conditions). Then take another space spanned by $v$ and $w$ such that $v, w \in W$ and $\omega(v,w)=1$. The space orthogonal to this new space is a symplectic vector space of dimension $2n-4$. You know the rest. |
Constructive expectancy in a hotel | Is it not $2 \left(\frac{18}{25}\right) + 1\left(\frac{7}{25}\right)=\frac{43}{25}$? |
Hamel basis dense in the unit sphere | The second answer is obviously false, for large enough $\lambda$.
For $\lambda>\frak c$ note that the dimension of $X$ must be $\lambda$, therefore it has a basis of size $\lambda$.
Let $B$ be such a basis, and take $B'=\left\{\frac1{\|v\|}v\mid v\in B\right\}$; then $B'$ is a subset of the unit sphere, and it is a basis of $X$, hence linearly independent. |
Show $ \int \frac{1}{\sin^{4}x+\cos^{4}x}dx \ = \frac{1}{\sqrt{2}}\arctan\frac{\tan2x}{\sqrt2}+c$ | Original post was to evaluate $$\int\frac{\,dx}{\sin^4(x)\cos^4(x)}$$ but I believe that the intended integral was $$\int\frac{\,dx}{\sin^4(x)+\cos^4(x)}$$ which I evaluate after the original request.
$$\begin{align}I=&\int\frac{\,dx}{\sin^4(x)\cos^4(x)}\\&=2^4\int\frac{\,dx}{\bigr[2\sin(x)\cos(x)\bigr]^4}\\&=2^4\int\frac{\,dx}{\sin^4(2x)}\\&=2^4\int\csc^4(2x)\,dx\\&=2^3\int \csc^4(u)\,du\end{align}$$
Using the reduction formula for cosecant, $$\int\csc^{m}(x)\,dx=-\frac{\csc^{m-1}(x)\cos(x)}{m-1}+\frac{m-2}{m-1}\int\csc^{m-2}(x)\,dx$$ or more concisely, $$J_m= -\frac{\csc^{m-1}(x)\cos(x)}{m-1}+\frac{m-2}{m-1}J_{m-2} $$ then we can obtain $$J_4=-\frac{\csc^{3}(u)\cos(u)}{3}+\frac{2}{3}J_2$$ where $J_2=-\cot(u)+C$
Let $$J_4=\int\csc^4{u}\,du$$ so that $$\begin{align}I&=2^3J_4\\&=2^3\bigg[-\frac{\csc^{3}(u)\cos(u)}{3}-\frac{2}{3}\cot(u)\bigg] +C\\&=-\frac{2^3}{3}\bigg[\csc^{3}(2x)\cos(2x)+2\cot(2x)\bigg] +C\\&=-\frac{2^3}{3}\cot(2x)\bigg[\csc^2(2x)+2\bigg]+C\end{align}$$
If you don't know the reduction formula off-hand or don't feel like deriving it, expand the integrand as $$\csc^4(u)=\csc^2(u)\csc^2(u)=\csc^2(u)(1+\cot^2(u))=\csc^2(u)+\csc^2(u)\cot^2(u)$$ which is fairly simple to integrate.
Now to evaluate $$I=\int\frac{\,dx}{\sin^4(x)+\cos^4(x)}$$
$$\begin{align}I&=\int\frac{\,dx}{\cos^4(x)-2\sin^2(x)\cos^2(x)+\sin^4(x)+2\sin^2(x)\cos^2(x)}\\&=\int\frac{\,dx}{[\cos^2(x)-\sin^2(x)]^2+2\sin^2(x)\cos^2(x)}\\&=\int\frac{\,dx}{\cos^2(2x)+\frac{\sin^2(2x)}{2}}\\&=\int\frac{\sec^2(2x)}{1+\frac{\tan^2(2x)}{2}}\,dx\\&=\int\frac{\sec^2(2x)}{1+\bigg(\frac{\tan(2x)}{\sqrt{2}}\bigg)^2}\,dx \end{align} $$
Let $u=\frac{\tan(2x)}{\sqrt{2}}$ and $\,du=\frac{2}{\sqrt{2}}\sec^2(2x)\,dx$. Then $$\begin{align}I&=\frac{\sqrt{2}}{2}\int\frac{\,du}{1+u^2}\\&=\frac{1}{\sqrt{2}}\arctan(u)+C\\&=\frac{1}{\sqrt{2}}\arctan\bigg(\frac{\tan(2x)}{\sqrt{2}}\bigg)+C \end{align} $$ |
Nice formula for a sum product | Another one could be
$$a_1 ( 1+ a_2 ( 1+ a_3 (\dots (1+ a_{n-1}(1+a_n)) \dots ))).$$
Computationally speaking, this is better than $\sum_{i=1}^n \prod_{j=1}^ia_j$ evaluated naively, since the first one uses only $O(n)$ multiplications, while the second uses $O(n^2)$ multiplications.
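A small Python sketch contrasting the two evaluation schemes (function names are illustrative):

```python
# Naive prefix-product sum (O(n^2) multiplications) versus the nested
# form (O(n) multiplications); they agree.
from math import prod

def naive(a):
    return sum(prod(a[:i + 1]) for i in range(len(a)))

def nested(a):
    out = 0
    for x in reversed(a):
        out = x * (1 + out)
    return out

a = [2, 3, 5, 7]
print(naive(a), nested(a))  # 248 248
```

The nested form is essentially Horner's scheme applied to this sum. |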
Prove that $f$ is uniformly continuous | An example is $f(x)=x^{\frac{1}{n}}$; basically, you have to construct a function whose limit exists at the endpoints and whose derivative is unbounded in $(0,1)$. |
Problem with a proof on Conway's book | Writing this answer to sum up the comment thread I had with @SamM, of which I post 1 and 2 screenshots ("it seems" link here), just in case someone decides to prune it as I have seen done on other posts. I credit him/her for the hints of those comments, and here is what I gathered out of them.
Firstly, it seems my doubt was from Conway having a different (but equivalent) definition of Stone-Čech compactification. My definition of $\beta X$ is by the universal property that if $K$ is a compact Hausdorff space (any one, not just real intervals) and $f:X\to K$ is continuous, then it extends to $\tilde f:\beta X\to K$, a continuous extension. Conway's definition, which I'll call $X'$, is that for any $f\in C_b(X)$ (which appears to be the space of continuous bounded real-valued functions on $X$) there exists an extension $\tilde f:X'\to\mathbb{F}$, where $\mathbb{F}$ is just a fancy notation to mean $\mathbb{R}$. Well, technically it is the field of scalars for a vector space, but here we are talking of $C_b$, which is a real Banach space.
It is theorem (6.1) on pp. 137 of Conway (screenshot) that $X$ is homeomorphic to a subset of the unit ball of $C_b(X)^\ast$, and it is Theorem (6.2) on pp. 137-138 of Conway (1 and 2) that the weak-* closure of that set is Conway's S-Č compactification $X'$. Long story short, once you have established that $\Delta:x\mapsto\delta_x$ is a homeomorphism onto its image (first theorem), the weak-* closure of $\Delta(X)$ is closed in the unit ball, which is weak-* compact by Banach-Alaoglu, hence that closure is compact, and if $f\in C_b$, then it defines a linear continuous functional on $C_b^\ast$, and in particular a continuous function on $X'$, and of course $\langle\delta_x,f\rangle=f(x)$ so this can be viewed as the required extension.
Exercise 4 on p. 141 is precisely showing that the more general property of $\beta X$ holds for $X'$. Of course, the property of $X'$ holds for $\beta X$, since $f\in C_b(X)$ is a continuous map with real values that is bounded, and thus can be seen as taking values in a compact interval of the real line, and the extension then follows from the property of $\beta X$.
Before pointing me to exercise 4, he suggested I take the inclusion of $X$ into $X'$ and extend it to $\beta X$ by the universal property, since $X'$ is compact Hausdorff -- compact since it's closed in the compact ball, Hausdorff because the weak-* topology is always Hausdorff --, and show that the extension is a homeomorphism. I managed to prove it is continuous, surjective, open and closed, but I was stuck on injectivity, as is seen in the comment:
Let $i$ be the identity and $i'$ the inclusion of $X$ into $X'$ the weak-* closure of $X$. $f:=i'\circ i$ extends to $\tilde f:\beta X\to X'$. $\tilde f$ is continuous, hence has closed image. $X'$ is metrizable, hence Hausdorff, hence $\tilde f(\beta X)$ is closed. But $i'(X)=\tilde f(i_\beta(X))\subseteq\tilde f(\beta X)$, where $i_\beta$ is the inclusion of $X$ into $\beta X$. Then again, $i'(X)$ is dense in $X'$, so $\tilde f(\beta X)$, being closed and containing a dense subspace, must be $X'$. So $\tilde f$ is a continuous map from a compact space to a Hausdorff space, hence closed. …
To do exercise 4, I first of all need to know that for any $x\neq y$ in $X$ there is $f\in C_b(X)$ such that $f(x)\neq f(y)$. Compact Hausdorff implies Urysohn's lemma holds, so I can find $f\in C_b:f(x)=0,f(y)=1$.
Now of course if we want $f:X\to\Omega$ with $\Omega$ compact Hausdorff to extend to $\tilde f:X'\to\Omega$ continuous map, we will need to use nets. In particular, $X$ is dense in $X'$, by definition, so -- well first of all $\tilde f(x)=f(x)$ for $x\in X\subseteq X'$, or it isn't an extension of $f$ -- for $x\in X'$ we take a net $x_\alpha$ converging to $x$ from inside $X$. We need to prove $f(x_\alpha)$ converges in $\Omega$. For any $g\in C_b$ we know $g(f(x_\alpha))$ converges because $g\circ f$ extends by theorem (6.2) of Conway. If $f(x_\alpha)$ did not converge, it would need to have two subnets $f(x_{\alpha_\beta}),f(x_{\alpha_\gamma})$ converging to distinct points $y_1,y_2$. Take $g\in C_b:g(y_1)=0,g(y_2)=1$. Then $g(f(x_{\alpha_\beta}))\to g(y_1)=0$ and $g(f(x_{\alpha_\gamma}))\to g(y_2)=1$, because continuous maps preserve net convergence. But that is absurd since $g(f(x_\alpha))$ converges and so all its subnets must converge to the same number. Compactness is equivalent to every net having a convergent subnet, so I have at least one converging subnet for $f(x_\alpha)$, hence $f(x_\alpha)$ converges. Let $\tilde f(x_\alpha)=\lim_\alpha f(x_\alpha)$.
Next, we need to prove that any two nets $x_\alpha,x_\beta\to x$ give the same limit. Suppose $f(x_\alpha)\to y_1,f(x_\beta)\to y_2$. Then for any $g\in C_b$ we have $g(f(x_\alpha))\to g(y_1),g(f(x_\beta))\to g(y_2)$. Take a $g:g(y_1)=0,g(y_2)=1$. Then $g(f(x_\alpha))\to0,g(f(a_\beta))\to1$. But $g\circ f$ extends to $X'$ continuously, and $x_\alpha,x_\beta\to x$, so $g\circ f(x_\alpha),g\circ f(x_\beta)\to\widetilde{g\circ f}(x)$, impossible. Hence, the above extension is well-defined.
Now let us prove that if $g\in\mathcal{C}(\Omega)$ then $g\circ\tilde f=\widetilde{g\circ f}$. Suppose it isn't true. Then we have $x\in X'$ such that $g\circ\tilde f(x)\neq\widetilde{g\circ f}(x)$. If $x\in X$ this is impossible, since $\widetilde{g\circ f}(x)=g(f(x))=g(\tilde f(x))$. But then we have a net $x_\alpha\to x$ from inside $X$. $\tilde f$ is defined so that $\tilde f(x_\alpha)\to\tilde f(x)$, and $g$ is continuous, so $g\circ\tilde f(x_\alpha)\to g\circ\tilde f(x)$. $\widetilde{g\circ f}$ is defined as a continuous extension, hence $\widetilde{g\circ f}(x_\alpha)\to\widetilde{g\circ f}(x)$. But then:
$$\widetilde{g\circ f}(x)\leftarrow\widetilde{g\circ f}(x_\alpha)=g\circ\tilde f(x_\alpha)\to g\circ \tilde f(x),$$
which is again a contradiction. So $g\circ\tilde f=\widetilde{g\circ f}$ on all of $X'$.
Now let $x_\alpha$ be a net in $X'$ that converges to $x\in X'$. No idea where the $x_\alpha$s lie, they might bounce in and out of $X$ and converge to any point in $X$ or $X'\smallsetminus X$. We want to show $\tilde f(x_\alpha)\to\tilde f(x)$. We certainly know that:
$$g\circ\tilde f(x_\alpha)=\widetilde{g\circ f}(x_\alpha)\to\widetilde{g\circ f}(x)=g\circ\tilde f(x),$$
for any $g\in C_b$. Suppose we have $\tilde f(x_\alpha)\not\to \tilde f(x)$. Since it has a convergent subnet by compactness of $\Omega$, and if every convergent subnet converged to $\tilde f(x)$ we would know $\tilde f(x_\alpha)$ has to converge to $\tilde f(x)$, we conclude there exists a subnet $\tilde f(x_{\alpha_\beta})$ which converges to a point $y\neq\tilde f(x)$. $g\circ\tilde f(x_\alpha)\to g\circ\tilde f(x)$ for any $g\in C_b$. But for $g\in C_b$ we have that $g\circ\tilde f(x_{\alpha_\beta})\to g(y)$. So we pick $g$ that separates $y$ and $\tilde f(x)$, as usual, and reach the same old contradiction, proving $\tilde f$ preserves net convergence, and this implies its continuity, ending the exercise.
Btw hidden in the above is that the topology on $\Omega$ is $\sigma(\Omega,C_b)$, or that $x_\alpha\to x$ if and only if $f(x_\alpha)\to f(x)$ for all $f\in C_b$.
As another extra, the theorem says $C_b$ is separable iff $X$ is a compact metric space, so $C_0=C_b$ will be nonseparable if $X$ is compact and nonmetrizable, as for example is $\{0,1\}^{\mathbb{R}}$, an example suggested by this post, a compact non-first-countable (hence non-metrizable) space. |
Find $E(Y|X) $ for $f(x,y)=e^{-y}$, where $0 \leq x \leq y$ | You have to take care here (it's okay to use any dummy variable you want in integrating, but you have to make sure to plug it in for all instances). Also we have $y\ge x$ as the support for $y$ so all the integrals go from $x$ to $\infty:$ $$ E(Y|X=x) = \int_x^\infty yf(y\mid x) \;dy = \int_x^\infty y \frac{f(y,x)}{f(x)}\; dy = \frac{1}{\int_x^\infty e^{-y}dy}\int_x^\infty ye^{-y}dy$$ |
First year calculus student: why isn't the derivative the slope of a secant line with an infinitesimally small distance separating the points? | Because from my understanding, in order for it to be a tangent line, it intersects the curve at one point only, however Δx approaches zero, it never reaches it, so Δx must be greater than zero, however infinitesimally small, correct?
You're right. We don't ever reach that point. We take a limit.
The colloquialism, "reaching the point" is a good anthropomorphic description. Limits allow us to stretch the constraints of the real numbers by pushing towards the infinite and infinitesimal. Technically, though, to venture into such territory, we need to properly define limits. This is often introduced with the epsilon-delta formalization.
Say there exists a limit $f'(x)=\lim_{\Delta x\rightarrow0}\frac{f(x+\Delta x)-f(x)}{\Delta x}$. Then for every $\epsilon>0$, there exists some $\delta>0$ such that whenever $0<|\Delta x|<\delta$, we find $|f'(x) - \frac{f(x+\Delta x)-f(x)}{\Delta x}|<\epsilon$.
We can heuristically think of the last paragraph as the following: our derivative exists if for every positive number $\epsilon$ and $\delta$, including the most ridiculously small numbers you can ever imagine, whenever $\Delta x$ is trapped between zero and any of these ridiculously small numbers, the difference between our derivative and the original expression is imperceptible.
But wait a minute, you say
...Δx must be greater than zero, however infinitesimally small, correct?
The epsilon-delta definition seems to hint that as well, but there's a catch: $$|f'(x) - \frac{f(x+\Delta x)-f(x)}{\Delta x}|<\epsilon$$
This is not less than some real positive number $\epsilon$. This is less than ANY POSSIBLE real positive number $\epsilon$. Such a concept only exists within the formalism of a limit, and is by no means a measurable quantity. That's what is meant by infinitesimal.
Due to the limit, then, the derivative cannot represent any possible secant line. There are no two points corresponding to $x+\Delta x$ and $x$ that are indistinguishable! The value we reach has converged to that which represents the slope of the tangent.
Added note:
$\Delta x\rightarrow 0$ doesn't just imply that $\Delta x$ is running through the positive numbers towards zero. For the limit to exist, we typically require it to be two-sided, meaning that $\Delta x\rightarrow0^+$ and $\Delta x\rightarrow0^-$ must produce the same result. In either case, the difference between $\Delta x$ and zero becomes vanishingly small. |
Density of $C^\infty(\overline{\Omega})$ in $L^2(\Omega)$: can we find a bounded sequence approximating $a \in L^2(\Omega)$ | John suggested one approach in a comment: convolution with a bump function preserves the inequalities such as $c\le a_n\le C$.
But maybe you are using another way to construct the initial sequence of smooth functions $a_k\to a$. In that case you can use smooth truncation by means of composition with a function $\phi : \mathbb R\to [c,C]$. For definiteness, let's consider $c=-1$, $C=1$. Consider the sequence of functions
$$\phi_n(x) =\alpha_n^{-1} \int_0^x \frac{1}{1+t^{2n}} \, dt,\quad \text{where } \alpha_n = \int_0^\infty \frac{1}{1+t^{2n}} \, dt $$
which converges uniformly to $\max(-1,\min(1,x))$. Note that $|\phi_n|<1$.
Thus, for any smooth $a_k$, the composition $\phi_n\circ a_k$ is a smooth function, and $\phi_n\circ a_k \to \max(-1,\min(1,a_k))$ uniformly, hence in $L^p$.
Also,
$$\| a - \max(-1,\min(1,a_k))\|_{L^p} \le \|a-a_k\|_{L^p} $$
because $|a|\le 1$; the integral on the left is smaller pointwise.
Thus, by choosing large $k$ and then large $n$, we get a smooth function approximating $a$ and bounded between $\pm 1$. |
Probability formulas | I believe what you mean is you want the number of unordered $7$-tuples $(S_1, \ldots, S_7)$ where each $S_i$ is a subset of $\{1,\ldots, 7\}$ with cardinality $3$, and each intersection $S_i \cap S_j$ has cardinality at most $1$. Enumerate the ${7 \choose 3} = 35$ subsets of $\{1,\ldots,7\}$ with cardinality $3$ as $T_1,\ldots,T_{35}$. Form the graph $G$ with $35$ vertices and an edge $\{i,j\}$ if $|T_i \cap T_j| \le 1$. Then we want to count all $7$-cliques in this graph.
I don't think this is easy to do. But the answer is certainly more than $1$.
One possible $7$-clique corresponds to the $7$ sets
$$ [ \left\{ 1,2,3 \right\} , \left\{ 1,4,5 \right\} , \left\{ 1,6,7
\right\} , \left\{ 2,4,6 \right\} , \left\{ 2,5,7 \right\} , \left\{
3,4,7 \right\} , \left\{ 3,5,6 \right\} ]
$$
But for any permutation $\pi$ of $\{1,\ldots, 7\}$, we get another $7$-clique by replacing each $i$ by $\pi(i)$. In many cases this will give us a different $7$-clique. For example, interchanging $1$ and $2$ gives us another $7$-clique corresponding to the sets
$$ [ \left\{ 1,2,3 \right\} , \left\{ 2,4,5 \right\} , \left\{ 2,6,7
\right\} , \left\{ 1,4,6 \right\} , \left\{ 1,5,7 \right\} , \left\{
3,4,7 \right\} , \left\{ 3,5,6 \right\} ]$$
In fact using permutations we get $30$ such $7$-cliques. I don't know whether those are all the possible $7$-cliques. |
Differential $k$-form and integrating factor | $\omega \wedge \omega=0$ is only true for forms of odd degree.
Observe $$(dx_1\wedge dx_2 +dx_3 \wedge dx_4) \wedge (dx_1\wedge dx_2 +dx_3 \wedge dx_4)=2 dx_1 \wedge dx_2 \wedge dx_3 \wedge dx_4 \neq 0.$$ |
Image of the set $\{z \in \mathbb{C}:|z|>r\}$ by the complex exponential function | Define the set $A:=\{z\in\mathbb{C}:|z|>r\}=\{x+iy:x,y\in\mathbb{R}\,\wedge\,\sqrt{x^2+y^2}>r\}\subset\mathbb{C}$. From what I understand, $r$ is some real number greater than zero, and you have a map
\begin{align*}
\phi:A&\longrightarrow\mathbb{R} \\
x+iy&\longmapsto e^x(\cos(y)+\sin(y))
\end{align*}
of which you want to know the image, $\mathrm{im}(\phi)$. Let $n_r\ge 1$ be the unique integer such that $n_r-1<r\le n_r$.
Clearly $\mathrm{im}(\phi)\subset\mathbb{R}$. Let $a\in\mathbb{R}$. If $a=0$, then $r-i\frac{\pi}{4}\in A$ is such that $\phi(r-i\frac{\pi}{4})=0$, so $0\in\mathrm{im}(\phi)$. If $a<0$, then $\ln(-a)+in_r\frac{3\pi}{2}\in A$ is such that $\phi(\ln(-a)+in_r\frac{3\pi}{2})=a$. If $a>0$, then $\ln(a)+in_r\frac{3\pi}{2}\in A$ is such that $\phi(\ln(a)+in_r\frac{3\pi}{2})=a$, so $\mathbb{R}\subset\mathrm{im}(\phi)$ and thus $\mathrm{im}(\phi)=\mathbb{R}$.
In the more probable case that you meant what @Trouble mentioned, i.e. the map
\begin{align*}
\psi:A&\longrightarrow \mathbb{C} \\
x+iy&\longmapsto e^x(\cos(y)+i\sin(y)),
\end{align*}
then it is clear that $\psi$ maps an element $z=x+iy\in A$ to a complex number of norm $e^x$ and angle $y$. Then if $x=0$ it must hold that $|y|>r$, so any angle can be achieved. However, the norm will always be greater than zero since $e^x>0$ for all $x\in\mathbb{R}$, so $\mathrm{im}(\psi)=\mathbb{C}\setminus\{0\}$. |
Prove formula for $\int \frac{dx}{(1+x^2)^n}$ | Use integration by parts,
$$I=\int\frac{dx}{(1+x^2)^n}=\int\frac{1}{(1+x^2)^n}\cdot 1\ dx $$
$$I=\frac{x}{(1+x^2)^n}-\int x\left((-n)\frac{2x}{(1+x^2)^{n+1}}\right)dx$$
$$I=\frac{x}{(1+x^2)^n}+2n\int \frac{(1+x^2)-1}{(1+x^2)^{n+1}}\,dx$$
$$I=\frac{x}{(1+x^2)^n}+2n\int \left(\frac{1}{(1+x^2)^{n}}-\frac{1}{(1+x^2)^{n+1}}\right)dx$$
$$I=\frac{x}{(1+x^2)^n}+2n\int \frac{dx}{(1+x^2)^{n}}-2n\int \frac{1}{(1+x^2)^{n+1}}dx$$
$$I=\frac{x}{(1+x^2)^n}+2nI-2n\int \frac{1}{(1+x^2)^{n+1}}dx$$
$$0=\frac{x}{(1+x^2)^n}+(2n-1)I-2n\int \frac{1}{(1+x^2)^{n+1}}dx$$
$$2n\int \frac{1}{(1+x^2)^{n+1}}dx=\frac{x}{(1+x^2)^n}+(2n-1)I$$
$$\int \frac{dx}{(1+x^2)^{n+1}}=\frac{x}{2n(1+x^2)^n}+\frac{(2n-1)}{2n}\int \frac{dx}{(1+x^2)^{n}}$$
Replacing $n$ by $n-1$:
$$\int \frac{dx}{(1+x^2)^{n}}=\frac{x}{(2n-2)(1+x^2)^{n-1}}+\frac{(2n-3)}{2n-2}\int \frac{dx}{(1+x^2)^{n-1}}$$ |
Do there exist two perfect numbers $N_1$ and $N_2$ such that $N_1 + N_2$ is also perfect? | Well, let's see. Even perfect numbers are all of the form $2^{n-1}(2^n-1)$, where $2^n-1$ is a Mersenne prime. As such, they are completely determined by the highest power of 2 dividing them. This value in a sum of two such numbers will be the same as in the smallest of them. Really, let $N_1=2^{n_1-1}(2^{n_1}-1) < N_2=2^{n_2-1}(2^{n_2}-1),\;n_1<n_2$. Clearly, $N_1+N_2$ is divisible by $2^{n_1-1}$, and the quotient is odd. Thus its highest power of 2 is the same as in $N_1$, but it is not $N_1$ itself, hence it can't be perfect.
As for the odd perfect numbers, I'd rather wait till I see one of them. |
Graph Theory: (Open) Ear Decomposition Has Number of Ears Equal to Circuit Rank / Nullity / Cyclomatic Number | Both the circuit rank and the number of ears in an ear decomposition can be computed from the number of vertices and edges in the graph.
Suppose the graph $G$ is $2$-connected, and has $n$ vertices and $m$ edges.
The maximum number of edges in an acyclic graph on $n$ vertices is $n-1$, and we can reach that by deleting all edges outside a spanning tree of $G$. If we do that, we are deleting $m-(n-1) = m-n+1$ edges, so the circuit rank is $m-n+1$.
Suppose that $G$ has an ear decomposition that begins with a cycle and adds $k-1$ more ears. In the cycle, the number of vertices equals the number of edges. Adding an ear of length $\ell$ adds $\ell$ edges and $\ell-1$ vertices, increasing the difference $|E|-|V|$ by $1$. Therefore after adding $k-1$ ears, the difference is $k-1$; but we know the difference is $m-n$. So $k-1 = m-n$, or $k = m-n+1$. |
How I can solve this equation in $\mathbb{C}$? | Solutions consist of those $s$ satsifying $f(s)=0$ or $g(s)=0$. Without knowing what $f$ and $g$ are, you can't say anything more than that.
Note: this has nothing to do with complex-analysis. |
How to show that this algorithm needs $\Omega(n \log n)$ comparisons | Suppose that we have $n$ elements that need to be sorted and we can only do pairwise comparisons. There are $n!$ ways to arrange these elements, and if we assume the elements are unique, then there is only one way to sort them.
Let's think of each pairwise comparison as a yes/no question. Each pairwise comparison rules out some set of the $n!$ permutations that cannot be the correctly sorted order. Before answering yes or no, there is a set of possible permutations assuming the answer is yes and there is a set of possible permutations assuming the answer is no. These sets are disjoint because the elements being compared are unique, and have only one correct ordering.
If the "yes" set is bigger than the "no" set or vice versa, then we can define our ordering so that it answers the pairwise comparison that gives us the bigger set. Therefore, our algorithm can at best choose a comparison that makes the "yes" set and "no" sets equal. Upon the answer, our choice of permutations is cut in half.
This implies that any comparison-based algorithm requires $\Omega(\log_2(n!))$ comparisons, because there are $n!$ permutations and, in the worst case, each comparison eliminates at most half of the remaining choices. Notice that
$$ \log_2(n!) = \sum_{i=1}^n \log_2(i) \ge \sum_{i = \lceil n/2 \rceil}^n \log_2(i) \ge \frac{n}{2} \log_2\left(\frac{n}{2}\right) \in \Omega(n\log(n))$$
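A quick numerical illustration (a sketch) of the bound:

```python
# log2(n!) versus the lower bound (n/2) * log2(n/2).
import math

for n in [8, 64, 512]:
    log2_fact = math.lgamma(n + 1) / math.log(2)  # log2(n!)
    print(n, round(log2_fact, 1), round((n / 2) * math.log2(n / 2), 1))
```

The bound indeed grows like $n\log n$. |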
The univalent domain of $\cos z$ | This is not true as stated. For example, $\cos z$ is not univalent in $\{z: -\pi/2<\operatorname{Re}z<\pi/2\}$ because $\cos (-\pi/4)=\cos(\pi/4)$ (or, to give another reason, $(\cos z)'=-\sin z$ vanishes at $z=0$).
What is true is that for every integer $k$ the cosine is univalent in $$\{z: \pi k<\operatorname{Re}z<\pi (k+1) \}\tag{1}$$ A direct way to see this is to use the identity
$$\cos z-\cos \zeta = -2\sin \left(\frac{z+\zeta}{2}\right)\sin \left(\frac{z-\zeta}{2}\right)\tag{2}$$
which you can easily verify by converting everything into exponential form.
The sine function vanishes only at the integer multiples of $\pi$. For any two distinct elements $z,\zeta$ in the domain (1) we have $$\pi k<\operatorname{Re}\left(\frac{z+\zeta}{2}\right)<\pi (k+1) $$
so the sine in (2) is nonzero. Also, $z-\zeta\ne 0$ and $$\left|\operatorname{Re}\left(\frac{z-\zeta}{2}\right)\right|<\frac{\pi}{2}$$
which together imply that the second sine in (2) is nonzero. |
Does $\sum_{n=2}^{\infty}\frac{n^4}{\log(1)+\log(2)+\log(3)+\cdots+\log(n)}$ converge? | Without Stirling.
Note that the denominator is $$
\sum_{k=1}^n \log k \leq \sum_{k=1}^n \log n = n\log n
$$
from which you can lower bound the general term of your series by $$a_n\stackrel{\rm def}{=} \frac{n^4}{\log 1+\log 2+\dots+\log n} \geq \frac{n^3}{\log n}\xrightarrow[n\to\infty]{} \infty$$
and therefore the series $\sum_n a_n$ diverges, as its general term does not even go to $0$. |
Find f ae-differentiable with $f´\in L^1(0,1)$ but not in $BV$... | I just came out with a counterexample.
One produces a function similar to $x\sin(1/x)$ (which is not in $BV$),
but with the oscillating blocks made of Cantor staircases (hence continuous and with $f'(x)=0$ ae).
Here are the details:
Call $h(x)$ the usual Cantor staircase, ie $h:[0,1]\to[0,1]$, continuous, increasing, onto and with $h'(x)=0$ at almost every $x\in(0,1)$.
Next, dilate this function to $h_n:[\frac1{n+1},\frac1n]\to [0,\frac1n]$
by
$$ h_n(x)= \frac1n h\Big(\frac{x-\frac1{n+1}}{\frac1n-\frac1{n+1}}\Big),\quad \mbox{if $n$ odd}$$
or
$$ h_n(x)= \frac1n h\Big(1-\frac{x-\frac1{n+1}}{\frac1n-\frac1{n+1}}\Big),\quad \mbox{if $n$ even.}$$
Finally, define $f(x)=h_n(x)$ if $x\in [\frac1{n+1},\frac1n]$ and $f(0)=0$.
It is straightforward to verify that $f\in C[0,1]$ and $f'(x)=0$ a.e. $x\in(0,1)$.
Finally, $f\not\in BV[0,1]$ since
$$\sum_{n=1}^{2N}\big|f(\frac1n)-f(\frac1{n+1})\big|=\sum_{n=1}^N\frac1{2n+1}\to\infty,\quad \mbox{as }N\to\infty.$$ |
Existence of Unit-speed parametrization. | Consider the arc length function $L:t\mapsto s$ of the curve $C$. Its derivative is just the length element, which is non-zero since $C$ is regular. It follows by the inverse function theorem that $L^{-1}:s\mapsto t$ exists, and thus the arc length parametrisation $\alpha(s)=C(L^{-1}(s))$ exists. |
Area of integral with $e^{-x^2}$ | Your answer is correct.
For $a>b$, and given a function $f$ which has a first derivative continuous over the interval $(a,b)$, the Fundamental Theorem of Calculus provides
$$
\int_{b}^{a} f'(x) dx = f(a) - f(b)
$$
In this example
$$
f(x) = x^{2} e^{-x^{2}}
$$
so
$$
\int_{-2}^{1} f'(x) dx = f(1) - f(-2) = \frac{1}{e}-\frac{4}{e^4}
$$
(The original answer includes a plot of $f(x)=x^{2}e^{-x^{2}}$ over the interval of integration.)
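As a quick numeric check (a sketch, assuming SciPy is available), the integral of $f'$ agrees with $\frac{1}{e}-\frac{4}{e^4}\approx 0.2946$:
```python
# Integrate f'(x) = d/dx [x^2 e^(-x^2)] = (2x - 2x^3) e^(-x^2) from -2 to 1
# and compare with f(1) - f(-2).
import math
from scipy.integrate import quad

f = lambda x: x**2 * math.exp(-x**2)
fprime = lambda x: (2*x - 2*x**3) * math.exp(-x**2)  # product rule

numeric, _ = quad(fprime, -2, 1)
exact = f(1) - f(-2)  # = 1/e - 4/e^4
print(numeric, exact)  # both approximately 0.294617
```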
Does a solution in $\mathbb{Q}(\sqrt{2})$ imply solution in $\mathbb{Q}$ for every system of linear equations? | The opposite of what you wrote is true.
$\mathbb Q$ is a subfield of $\mathbb Q(\sqrt2)$.
$\sqrt2\in\mathbb Q(\sqrt2)$ but $\sqrt2\not\in\mathbb Q$, i.e., $\sqrt2$ is irrational. |
Drawing two perpendicular tangent line from the origin to $y=x^2-2x+a$ | You don't need to calculate the tangent lines concretely, you can do it by the process of elimination:
The three values $-\frac{5}{4}, -\frac{3}{4}$ and $\frac{3}{4}$ are not possible. Why? For each of them the vertex of the parabola lies below the $x$-axis. That means one tangent line has to point below the $x$-axis, while the other tangent line has to reach the left side of the parabola and therefore go beyond the $y$-axis to the left. That means no angle of $90$ degrees is possible.
In fact, for the first two values, you cannot even draw any tangent from $(0,0)$ as this point will lie above the parabola.
That means the only possibility is $a = \frac{5}{4}$. (The original answer included a GeoGebra image showing that this really works out.)
If $f=0$ is the unique function for which the Fourier coefficients are zero then the set $\{\phi_{1}, \phi_{2},...\}$ is complete | In your definition of completeness, in the integral it should be the absolute value squared, if you also want to deal with complex-valued functions.
Suppose $f$ is a square-integrable function with all Fourier coefficients zero. Then by your definition of completeness,
$$0=\lim_{N\to\infty}\int_a^b|f-\sum_{n=1}^NC_n\phi_n|^2\,dx=\int_a^b|f|^2\,dx$$
which implies that $f=0$ in $L^2(a,b)$.
It follows that the only function orthogonal to all the basis elements is the zero function.
Edit:
Following a comment by the OP. Apparently the OP takes as definition the fact that the only function orthogonal to all the $\phi_n$ in $L^2(a,b)$ is the zero function, and wants to prove that $\sum_{n=1}^{\infty}C_n\phi_n$ converges to $f$ for every $f\in L^2(a,b)$, where $C_n=(f,\phi_n)/\|\phi_n\|^2$. Note that as of now, 6.8.2018, 6:13 a.m. GMT, the question suggests exactly the opposite direction. Anyway, the two being equivalent, here is the other direction:
Assume that the only function orthogonal to $\phi_n$ in $L^2(a,b)$ is the zero function. We want to prove that if $C_n=(f,\phi_n)/\|\phi_n\|^2$, then
$$\sum_{n=1}^{\infty}C_n\phi_n=f,$$
which is equivalent to
$$\lim_{N\to\infty}\|\sum_{n=1}^NC_n\phi_n-f\|=0$$
We are assuming that $\{\phi_n\}_{n=1}^{\infty}$ is an orthogonal system, not yet complete. Then it follows from Bessel's inequality that the series $\sum_{n=1}^{\infty}C_n\phi_n$ converges in $L^2(a,b)$. The only thing left is to show that it converges to $f$ itself. To show this, consider the function
$$g=f-\sum_{n=1}^{\infty}C_n\phi_n$$
It is easy to check that $(g,\phi_m)=0$ for all $m$, and so by hypothesis, $g$ is the zero function, and we are done. |
Direct and inverse limits of sheaves | It doesn't really make sense to ask "is the direct limit of sheaves a sheaf?" I assume you mean to ask whether or not the natural choice for a presheaf direct limit of a directed system of sheaves is a sheaf. The answer is no in general. You have to sheafify. That is, given a directed system of sheaves $(\mathscr{F}_i,\varphi_{ij})$, the presheaf defined by $U\mapsto\varinjlim_i\mathscr{F}_i(U)$, taken with respect to the maps $\varphi_{ij}(U)$, is in general not a sheaf. The associated sheaf is what we call $\varinjlim_i\mathscr{F}_i$, and you can verify that it has the properties to be the (i.e. categorical) direct limit of the $\mathscr{F}_i$. I believe one case where the direct limit presheaf is already a sheaf is when $X$ is a Noetherian topological space (i.e. every subset is quasi-compact).
For an inverse system of sheaves $(\mathscr{F}_i,\varphi_{ij})$, the presheaf $U\mapsto\varprojlim_i\mathscr{F}_i(U)$ actually is a sheaf, $\varprojlim_i\mathscr{F}_i$, and it is the categorical inverse limit of the $\mathscr{F}_i$.
In general, for colimits of diagrams of sheaves, one takes the natural presheaf (take colimits open set by open set), and then has to sheafify, while for limits, the natural presheaf is already a sheaf. |
Almost sure convergence and lim sup | Note the definition of lim sup for a sequence $(A_n)$ of sets:
$$ \limsup_{n\to\infty} A_n=\bigcap_{k=1}^\infty \bigcup_{n=k}^\infty A_n.$$
This translates to:
$$ \omega\in\limsup_{n\to\infty} A_n \Leftrightarrow \forall k\, \exists n\ge k\colon \omega\in A_n.$$
Or, in everyday language: There are arbitrarily large $n$ for which $\omega\in A_n$. If you collect all such $n$ in a set and number them consecutively, you get an increasing sequence $(n_k)$ with $\omega\in A_{n_k}$ for all $k$.
That is the sequence mentioned in the proof, where $A_n=\{|X_n-X|\ge\epsilon\}$.
It matters whether the lim sup is inside or outside the sets, because the two kinds of lim sup are entirely different beasts! But they are clearly related.
The difference between sharp and non-sharp inequalities arise for $\omega$ such that $\limsup|X_n-X|=\epsilon$ exactly. Such an $\omega$ belongs in the set on the right end of $(*)$, but not in the left set. And it may or may not be in the middle set, depending on whether $|X_n-X|\ge\epsilon$ infinitely often or not. Both are possible, under the current assumption. |
Why does P(A,B) notation sometimes cause an addition and sometimes cause a multiplication? | If this is on one whiteboard, the instructor made a typographical error, or someone transcribed or interpreted the whiteboard incorrectly. There is variation in the meaning of $P(A,B)$, but those are clearly two different meanings.
The first "$A,B$" means "$A$ or $B$", or $A \cup B$, i.e. $A$ union $B$, for events that are not necessarily disjoint.
The second "$A,B$" means "$A$ and $B$" or $A\cap B$, i.e. $A$ intersect $B$, in the case where $A$ and $B$ are independent.
Sometimes people write "$A$ and $B$" or $A\cap B$ as $AB$. Sometimes people use the comma for this. Sometimes comma is used for conditional probability. I've never seen the comma for disjunction or union, as in the first example, but as I said, there is variation.
(To visualize the second equation, don't use circles. Use a rectangle divided vertically and horizontally. What's on one side of the vertical line is $A$, and what's on one side of the horizontal line is $B$. That's independence. Non-independence would be represented using a diagonal line or a line with steps in it.) |
How to find the maximum of this equation $x(1 - (1 - \frac{1}{x})^K)$ | Seems unlikely. If you differentiate, set the derivative to zero, and simplify, you get a polynomial equation of degree $K-1$. If $K\ge6$ I wouldn't expect there to be a formula for $x$ in terms of radicals and the four arithmetical operations.
Missing premise in uniqueness portion of theorem | If $y \in B$ is not in the range of $g$,
it obviously doesn't matter what you take for $h(y)$.
So, assuming of course $C$ is not a singleton, what you need for $h$ to be unique is that $g$ is surjective. |
Close to diagonal implies derivative is close to 1? | No, this isn't true. Fix a large integer $N$ and consider a function $f$ which satisfies $f(k/N)=k/N$ for all integers $k$ between $0$ and $N$, and between $k/N$ and $(k+1)/N$ is piecewise linear, with one piece of slope $1/2$ and one piece of slope $2$. You can see that $f$ is absolutely continuous and $|f(x)-x|<1/N$ for all $x$, but $|f'(x)-1|\geq 1/2$ everywhere $f'$ is defined. |
Proof that $A_c = \{(x_1,x_2) \in \mathbb{R^2}| f(x_1,x_2) < c\}$ is open when $f$ is continuous | Your reasoning is rigorous and correct. Apart from minor matters of writing style that may leave room for discussion, I read and followed your proof easily. In fact, among the similar questions I have seen here, your work is the nicest one; you know what you are talking about.
By the way, as a commenter has pointed out, sometimes you may want to make some preliminary observations on a given problem first, in order to solve it economically and elegantly. "Blindly" overworking every problem without seeing the whole picture may not be a good habit in the long run, though no doubt this "overworking" habit is good for self-training.
Example of entire function on $\mathbb C$ that omits exactly one value in $\mathbb C$ | Sure. If $a\in\mathbb C$, take $f(z)=e^z+a$. Then the range of $f$ is $\mathbb{C}\setminus\{a\}$ and $f$ is entire.
Homework - Showing any continuous functions on a compact subset of $\mathbb{R}^3$ can be approximated by a polynomial. | One approach is to use the more general Stone-Weierstrass theorem which applies to every compact space, and is not much harder to prove than the Weierstrass theorem for the interval. But if you don't have this result at your disposal, the following is one possible strategy:
Let's say a function $g$ is radial if it is of the form $g(x,y,z)=\varphi( x^2+y^2+z^2 )$ for some continuous $\varphi$. Since $\varphi$ can be uniformly approximated by polynomials on any interval $[0,R]$, it follows that $g$ can be uniformly approximated by polynomials on any closed disk.
Also, the shifted function $g(x-x_0,y-y_0,z-z_0)$ can be approximated by polynomials.
It remains to approximate $f$ by a linear combination of radial functions with shifts. One way to do this is to show
$$f(x,y,z) = \lim_{r\to 0} \frac{c}{r^{3}} \int_X f(u,v,w) \exp\left(-\frac{(x-u)^2+(y-v)^2+(z-w)^2}{r^2}\right)\,du\,dv\,dw $$
and then approximate the integral by a Riemann sum. You may have used this convolution integral in the proof of one-dimensional result, possibly with a different kernel. |
Diophantine equation: $x^2-y^2=z$ for a fixed integer $ z$ | Write $z$ as $z=ab$. Then one solution of $x^2-y^2 = ab$ comes from solving $x+y = a$ and $x-y = b$, i.e. $x=\frac{a+b}{2}$ and $y=\frac{a-b}{2}$. Note that these are integers only when $a$ and $b$ have the same parity, so the factorization $z=ab$ must be chosen accordingly.
I am sure you can take it from here. |
Find all critical points and classify them | \begin{align*}
df(x,y)=(\vec{\nabla} f)\cdot(dx\, \hat i+ dy\, \hat j)
\end{align*}
where $\vec{\nabla} f=\frac{\partial f}{\partial x}\hat i+ \frac{\partial f}{\partial y} \hat j$ is the gradient of $f(x,y)$. It should be set to zero to find the critical points.
Proceed yourself now. As a check, you should obtain $x_1=\pm 1$ and $x_2=\pm \frac 1{\sqrt 2}$. |
How do you prove that the determinant of a 3 by 3 matrix with entries of either 1 and -1 (assume linearly independent rows) will always be 4 or -4? | HINT:
Add the first column to columns $2$ and $3$. The determinant does not change. Now every entry of the new columns $2$ and $3$ is divisible by $2$, so the determinant is divisible by $4$.
The determinant has $6$ terms, each $\pm 1$, so it lies between $-6$ and $6$. Since the rows are assumed linearly independent, the determinant is nonzero, hence it must be $4$ or $-4$.
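Since there are only $2^9=512$ such matrices, the claim is also easy to confirm by brute force; here is a minimal sketch (assuming NumPy):
```python
# Every 3x3 matrix with entries in {1, -1} has determinant in {-4, 0, 4}.
import itertools
import numpy as np

dets = set()
for entries in itertools.product([1, -1], repeat=9):
    m = np.array(entries).reshape(3, 3)
    dets.add(round(np.linalg.det(m)))

print(sorted(dets))  # [-4, 0, 4]
```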
range of a polynomial modulo prime numbers | Consider the polynomial
$$P(x)=x^4-5x^2+4=(x^2-1)(x^2-4)$$
We have that
$P(-x)=P(x)$ for each $x$, and so modulo any prime $p$ we see that $P(x)$ takes at most $\frac{p+1}{2}$ values modulo $p$.
But $P(-1)=P(1)=P(-2)=P(2)=0$, and so we see that $P(x)$ actually only takes at most $\frac{p-1}{2}$ values modulo $p$ for any prime $p>3$.
So the conjecture is not true in general.
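A quick computational check of this counterexample (a sketch, assuming Python 3; the helper name is mine): the number of distinct values of $P(x)$ modulo $p$ indeed stays at or below $\frac{p-1}{2}$ for $p>3$.
```python
# Count the distinct values of P(x) = x^4 - 5x^2 + 4 modulo p.
def num_values(p):
    return len({(x**4 - 5 * x**2 + 4) % p for x in range(p)})

for p in [5, 7, 11, 13, 101]:
    print(p, num_values(p), (p - 1) // 2)  # count never exceeds (p-1)/2
```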
Edit
We could then, of course, ask if one can classify for which polynomials the conjecture is true. This seems like a very difficult problem and I do not have any ideas on how to approach it.
I can show the following results, however:
If $P(x)$ satisfies the conjecture, then so does $aP(bx+c)+d$ for any integers $a,b,c$ and $d$ such that $ab \neq 0$.
This is fairly straightforward, since as long as $p$ does not divide $ab$, $ax+d$ and $bx+c$ both just permute the elements of $\mathbb{Z}_p$.
$P(x)=x^n$ satisfies the conjecture for any $n\in\mathbb{N}^+$.
This is slightly less straightforward (as far as I can tell), and I will make use of the following results:
The Chinese Remainder Theorem
Dirichlet's Theorem on primes in arithmetic progressions
The existence of primitive roots modulo $p$ (i.e. that $\mathbb{Z}_p^*$ is cyclic)
For any $n$, we first show that there are infinitely many primes $p$ such that $\gcd(n, p-1)\leq 2$. If $n$ is odd then $\gcd(n, 2)=1$, and so by Dirichlet's Theorem, there are infinitely many primes $p$ such that $p \equiv 2 \mod n$. Then $p - 1 \equiv 1 \mod n$ and so $\gcd(n, p-1)=1$.
Now suppose that $n$ is even, and let $d$ be the largest odd divisor of $n$. Consider the following system of congruences:
$$\begin{align*}
x \equiv 3 \mod 4\\
x \equiv 2 \mod d
\end{align*}$$
By the Chinese Remainder Theorem, this has a unique solution $X$ modulo $4d$. Clearly $X$ and $4d$ are relatively prime, and so by Dirichlet's Theorem, there are infinitely many primes $p$ such that $p \equiv X \mod 4d$.
Such a prime $p$ has $p-1 \equiv 2 \mod 4$ and $p-1 \equiv 1 \mod d$, and so we see that $\gcd(n, p-1)=2$.
Now let $p$ be any prime such that $\gcd(n, p-1)\leq 2$ (we just showed that there are infinitely many), and let us consider the possible values of $x^n$ modulo $p$.
Let $g$ be a primitive root modulo $p$. Then any non-zero value of $x$ modulo $p$ can be written as $g^k$ for some $0 \leq k < p-1$, and so the non-zero values of $x^n$ modulo $p$ are precisely
$g^{kn}$ for some $k$. Two such values will be equal iff
$$g^{kn} \equiv g^{mn} \mod p$$
which is equivalent to
$$g^{(k-m)n} \equiv 1 \mod p$$
or
$$ p-1\mid (k-m)n $$
Since $\gcd(n,p-1)\leq 2$, we see that this implies
$$ \frac{p-1}{2} \mid (k-m) $$
and so for any $k$, there can be at most one value $m\neq k$ such that $g^{kn} \equiv g^{mn} \mod p$ (Where $k$ and $m$ are each less than $p-1$)
We see that this implies that $x^n$ takes at least $\frac{p-1}{2}$ non-zero values modulo $p$. If we then include $0$, we see that $x^n$ takes at least $\frac{p+1}{2}$ values modulo $p$, and so $x^n$ satisfies the conjecture.
Any linear polynomial and any quadratic polynomial satisfies the conjecture
This final observation is effectively just a combination of the previous two, but some more care is taken with the quadratic case.
For the linear case, if $P(x)=ax+b$, then for any prime $p$ such that $p$ does not divide $a$ (so any prime greater than $a$, for example) we have that $P(x)$ takes all $p$ possible values modulo $p$.
For the quadratic case, let $P(x)=ax^2+bx+c$. Let $p$ be any prime which does not divide $2a$.
We can write
$$P(x)=a\left(x+\frac{b}{2a}\right)^2+c-\frac{b^2}{4a}$$
Modulo $p$, this is equal to
$$a(x+m)^2+n$$
where $m \equiv b(2a)^{-1} \mod p$ and $n \equiv c-b^2(4a)^{-1} \mod p$. Both $2a$ and $4a$ have inverses modulo $p$ since $p$ does not divide $2a$ (in particular, $p$ is odd).
But $x^2$ takes exactly $\frac{p+1}{2}$ values modulo $p$, and hence so does
$(x+m)^2$, and hence also $a(x+m)^2+n$. This then implies that $P(x)$ also satisfies the conjecture. |
Integral of exponential function with polynomial argument | The answer to your question is very simple: in fact, the integral does not converge when $c>0$.
Is the module of homomorphisms between graded modules also a graded module? | If by $\text{Hom}_R$ you mean graded homomorphisms (those that preserve the grading), then no. However, there is a "graded Hom" where the $i^{th}$ graded component consists of homomorphisms which raise degree by $i$, and the zeroth graded component of the graded Hom is the ordinary Hom. The general keyword here is internal Hom. |
what are properties of 1 divided by a root of unity? | Hint: $$\omega^k=1\iff\frac1{\omega^k}=1\iff\left(\frac{1}{\omega}\right)^{\!k}=1$$So $\frac1\omega$ is still a $k$th root of unity, just (in general) a different one. In particular, if $\omega_a=e^{2\pi ia/k}$ is the $a$th of the $k$th roots of unity, then $\frac1{\omega_a}=e^{2\pi i(k-a)/k}$ is the $(k-a)$th one, as Euler's formula shows.
Why are partial orderings important? | Lots of sets one encounters in practice have naturally defined relations that want to be orders but are not total. For example, the subsets of a set are naturally comparable by inclusion, but that relation is not total at all in general. Similarly, integers come with the very natural relation of divisibility which behaves like an order but, again, is not total. There are many, many examples... |
Equivalent to Riemann Hypothesis | Online lists of equivalents of the Riemann hypothesis include:
http://aimath.org/WWN/rh/ (section C, "Equivalences to RH")
http://empslocal.ex.ac.uk/people/staff/mrwatkin/zeta/RHreformulations.htm
http://en.wikipedia.org/wiki/Riemann_hypothesis#Criteria_equivalent_to_the_Riemann_hypothesis |
Why are the coefficients zero if $n$ contains a prime with prime power $1$ that is not split in $K$? | $$\zeta_K(s)=\sum_{c\in C_K} \zeta_{K,c}(s)$$
we know the Euler product of the LHS in terms of split/inert/ramified primes
and the coefficients of $\zeta_{K,(1)}$ are those of your theta series. |
Finding the expectation of a random variable that is a function of another random variable | First of all note that the given function is not a valid joint pdf, since it integrates to $1/2$, not to $1$:
$$
\int_0^1 \int_0^y 3x\,dx\,dy = \frac12.
$$
So I suppose that it should be $6x$ instead of $3x$.
Next, $f_Y(y)$ depends on $y$, not on $x$. In order to find it, you should integrate the joint pdf over the whole range of $x$. For any $y\in(0,1)$, $x$ ranges over $[0,y]$, so
$$
f_Y(y) = \int_0^y f_{X,Y}(x,y)\,dx.
$$ |
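A small symbolic check (a sketch, assuming SymPy) of the normalization and the resulting marginal:
```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
print(sp.integrate(3 * x, (x, 0, y), (y, 0, 1)))  # 1/2: not a valid joint pdf
print(sp.integrate(6 * x, (x, 0, y), (y, 0, 1)))  # 1: the corrected density
print(sp.integrate(6 * x, (x, 0, y)))             # marginal f_Y(y) = 3*y**2
```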
Proving $(V\oplus W)^*\cong V^*\oplus W^*$ | Expanding on Jose27's comment and my initial attempt:
Note that
$$(V\oplus W)^*=\{\varphi|\varphi:V\oplus W\rightarrow\mathbb{F}, \varphi \mbox{ is linear}\},$$
$$V^*\oplus W^*=\{(\psi,\theta)|\psi:V\rightarrow\mathbb{F},\theta:W\rightarrow\mathbb{F},\psi,\theta \mbox{ are linear}\}.$$
Let $\psi\in V^*$ and $\theta\in W^*$ and define
$$L_{(\psi,\theta)}:V\oplus W\rightarrow\mathbb{F}$$
by
$$L_{(\psi,\theta)}(v,w)=\psi(v)+\theta(w).$$
We will show that the assignment $(\psi,\theta)\mapsto L_{(\psi,\theta)}$ defines an isomorphism from $V^*\oplus W^*$ to $(V\oplus W)^*$; its linearity in $(\psi,\theta)$ is immediate from the definition, so it remains to check that each $L_{(\psi,\theta)}$ is itself linear (hence lies in $(V\oplus W)^*$) and that the assignment is a bijection. First, observe that since $\psi,\theta$ are linear by definition, it follows that
$$L_{(\psi,\theta)}(c_1(v_1+v_2),c_2(w_1+w_2))=c_1\psi(v_1)+c_1\psi(v_2)+c_2\theta(w_1)+c_2\theta(w_2)$$
for all $c_1,c_2\in\mathbb{F}$, $v_1,v_2\in V$, and $w_1,w_2\in W$. Hence, $L_{(\psi,\theta)}$ is linear. To see that the assignment is injective, suppose
$$L_{(\psi_1,\theta_1)}(v,w)=L_{(\psi_2,\theta_2)}(v,w)$$
for some $\psi_1,\psi_2\in V^*$ and $\theta_1,\theta_2\in W^*$, and all $v\in V$ and $w\in W$. Then
\begin{align*}
L_{(\psi_1,\theta_1)}(0,w)&=L_{(\psi_2,\theta_2)}(0,w)\\
\Rightarrow\psi_1(0)+\theta_1(w)&=\psi_2(0)+\theta_2(w)\\
\Rightarrow\theta_1(w)&=\theta_2(w)
\end{align*}
for all $w\in W$. Hence $\theta_1=\theta_2$. It is similarly shown that $\psi_1=\psi_2$. Hence the assignment is injective. To see that it is surjective, given $\varphi\in(V\oplus W)^*$, define
$$\psi(v):=\varphi(v,0), ~~ \theta(w):=\varphi(0,w),$$
for each $v\in V$, and $w\in W$. It is clear that $\psi$ and $\theta$ are linear, hence, $\psi\in V^*$, $\theta\in W^*$. Therefore,
\begin{align*}
L_{(\psi,\theta)}(v,w)&=\psi(v)+\theta(w)\\
&=\varphi(v,0)+\varphi(0,w)\\
&=\varphi(v,w)
\end{align*}
The desired result follows. |
General way to find out whether a curve is positively oriented | The question you are asking is how to prove that the turning number of a simple closed curve is equal to one. This is the same as saying that the winding number of the curve is one around any point lying inside the curve.
There are several ways to do this, and none of them is particularly fast. Here are some possibilities:
One basic possibility is to make a graph of $\arg(\gamma'(t))$. If the argument increases by $2\pi$ as you go once around the curve (ignoring any jumps from $\pi$ to $-\pi$ on your graph), then the curve is positively oriented. Similarly, if $p$ is any point lying inside the curve, you could make a graph of $\arg(\gamma(t) - p)$ and do the same thing. Of course, both of these involve drawing a graph, which you may prefer not to do.
On a related note, given a closed curve $\gamma\colon [a,b]\to\mathbb{C}$, if you can find a continuous function $\varphi\colon [a,b]\to \mathbb{C}$ so that $\gamma'(t) = e^{\varphi(t)}$, then the curve is positively oriented if and only if $\varphi(b)-\varphi(a)=2\pi i$, and negatively oriented if and only if $\varphi(b)-\varphi(a)=-2\pi i$. This is certainly the fastest way of dealing with the example you gave.
Similarly, if $p$ is any point inside the curve, you could find a function $\varphi(t)$ so that $\gamma(t) - p =e^{\varphi(t)}$, and perform the same test.
Assuming the curve $\gamma\colon [a,b]\to \mathbb{C}$ is twice differentiable (with the derivatives at $a$ and $b$ matching up) and has no critical points, you can integrate the curvature of the curve along its length:
$$
\frac{1}{2\pi}\int_a^b \frac{d}{dt}[\arg(\gamma'(t))]\,dt \;=\; \frac{1}{2\pi}\int_a^b \frac{\mathrm{Im}[\overline{\gamma'(t)}\,\gamma''(t)]}{|\gamma'(t)|^2}\, dt.
$$
This integral will come out to $+1$ if the curve is positively oriented, and $-1$ if the curve is negatively oriented.
If $p$ is a point lying inside the curve, you can use an integral to compute the winding number around $p$. This requires that the curve $\gamma\colon [a,b]\to \mathbb{C}$ be differentiable:
$$
\frac{1}{2\pi}\int_a^b \frac{d}{dt}[\arg(\gamma(t)-p)]\,dt \;=\; \frac{1}{2\pi}\int_a^b \frac{\mathrm{Im}[\overline{(\gamma(t)-p)}\,\gamma'(t)]}{|\gamma(t)-p|^2}\, dt.
$$
Again, this integral will come out to $+1$ if the curve is positively oriented, and $-1$ if the curve is negatively oriented. By the way, if this integral comes out to $0$ it means that the point $p$ you chose does not lie inside the curve. Indeed, you can use this integral to test whether a given point lies inside a given closed curve.
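Here is a numeric version of this winding-number test (a sketch, assuming NumPy; the function name is mine), applied to the positively oriented unit circle:
```python
import numpy as np

def winding_number(gamma, dgamma, p, n=100_000):
    # Mean of d/dt arg(gamma(t) - p) over [0, 2*pi) equals
    # (1/2pi) * integral of Im[conj(gamma - p) * gamma'] / |gamma - p|^2 dt.
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    g = gamma(t) - p
    return np.mean(np.imag(np.conj(g) * dgamma(t)) / np.abs(g) ** 2)

gamma = lambda t: np.exp(1j * t)          # positively oriented unit circle
dgamma = lambda t: 1j * np.exp(1j * t)
print(winding_number(gamma, dgamma, 0))   # ~ +1: positively oriented
print(winding_number(gamma, dgamma, 3))   # ~ 0: the point lies outside
```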
You can compute the degree of $\gamma'$ as a sum of local degrees. Specifically, let $\theta$ be any fixed angle, and let $t_1,\ldots,t_n$ be the values of $t$ for which $\arg(\gamma'(t)) = \theta$. For each $k\in\{1,\ldots,n\}$, let
$$
d_k \;=\; \begin{cases}+1 & \text{if }\mathrm{Im}(e^{-i\theta}\gamma''(t_k))>0 \\[6pt] -1 & \text{if }\mathrm{Im}(e^{-i\theta}\gamma''(t_k))<0 \end{cases}
$$
(If the imaginary part ever comes out to zero, you should probably just choose a different value of $\theta$.)
$$
d_1 + \cdots + d_n
$$
will be $+1$ if the curve is positively oriented, and $-1$ if the curve is negatively oriented.
This test is easier to apply than it looks. For many curves, there will only be one value of $t$ for which $\arg(\gamma'(t)) = \theta$, so you only need to check whether $\mathrm{Im}(e^{-i\theta}\gamma''(t))$ is positive or negative for this one value of $t$.
Similarly, if $p$ is a point lying inside the curve, you can compute the winding number of $\gamma$ around $p$ using a sum of local degrees. Specifically, let $\theta$ be any fixed angle, and let $t_1,\ldots,t_n$ be the values of $t$ for which $\arg(\gamma(t)-p) = \theta$. For each $k\in\{1,\ldots,n\}$, let
$$
d_k \;=\; \begin{cases}+1 & \text{if }\mathrm{Im}(e^{-i\theta}\gamma'(t_k))>0 \\[6pt] -1 & \text{if }\mathrm{Im}(e^{-i\theta}\gamma'(t_k))<0 \end{cases}
$$
(If the imaginary part ever comes out to zero, you should probably just choose a different value of $\theta$.)
$$
d_1 + \cdots + d_n
$$
will be $+1$ if the curve is positively oriented, and $-1$ if the curve is negatively oriented.
Again, this test is easier to apply than it looks, since there is often just one value. As with test #3, if the sum comes out to $0$ it means that the point $p$ does not in fact lie in the interior of the curve.
Given a differentiable curve $\gamma\colon [a,b] \to \mathbb{C}$, find a partition of $[a,b]$ into intervals $I_1,\ldots,I_n$ with the following property: for each $k$, the image of $\gamma'$ on the closure of $I_k$ lies in one of the following half-planes:
$$
H_0 = \{\mathrm{Re}(z) > 0\},\quad H_1 = \{\mathrm{Im}(z) > 0\},\quad H_2 = \{\mathrm{Re}(z) < 0\},\quad H_3 = \{\mathrm{Im}(z) < 0\}.
$$
Note that these half-planes overlap, so you don't need to be particularly careful in choosing your partition. For each $k$, let $m_k\in \{0,1,2,3\}$ be the number of the half-plane containing $I_k$. For each transition $(m_k,m_{k+1})$, label it $+1$ if $m_{k+1} \equiv m_k +1 (\mathrm{mod}\;4)$, and $-1$ if $m_{k+1} \equiv m_k - 1 (\mathrm{mod}\;4)$. Then your curve is positively oriented if there are more $+1$ transitions than $-1$'s and negatively oriented if there are more $-1$ transitions than $+1$'s.
A similar approach can be used on $\gamma(t)-p$ instead to compute the winding number, where $p$ is a point inside of the curve. |
$ \sin^{2000}{x}+\cos^{2000}{x} =1$ equation explanation | Hint:
You always have $$\sin^2(x)+\cos^2(x)=1$$
Now how do $\sin^2(x)+\cos^2(x)$ and $\sin^{2000}(x)+\cos^{2000}(x)$ relate if $(\sin(x),\cos(x))\neq (\pm1,0)$ and $(\sin(x),\cos(x))\neq (0,\pm1)$?
Edit (to give a complete solution after the discussion):
If $(\sin(x),\cos(x))\notin\{(\pm1,0),(0,\pm 1)\}$, then
$$\sin^{2000}(x)+\cos^{2000}(x)<\sin^2(x)+\cos^2(x)=1$$
so no solution is of this form.
If $(\sin(x),\cos(x))\in\{(\pm1,0),(0,\pm 1)\}$, then the equation is clearly satisfied and you get the solutions as listed in the question. |
System of linear simultaneous equations | The procedure you're looking for is Cramer's rule. |
Understanding proof of $\tau \ (n) =(k_1+1)(k_2+1)\dots(k_r+1)$ and for $\sigma\ (n)$ also. | The key to both proofs is that if $n=ab$ with $a$ and $b$ relatively prime, then any divisor $d$ of $n$ can be written uniquely as a product $d=d_ad_b$ where $d_a\mid a$ and $d_b\mid b$.
This in turn means that:
$$\begin{align}\tau(n)&=\sum_{d\mid n} 1\\
& = \sum_{d_a\mid a,d_b\mid b} 1 \\
&= \left(\sum_{d_a\mid a} 1\right)\left(\sum_{d_b\mid b} 1\right)
\\ &= \tau(a)\tau(b)\\
\sigma(n)&=\sum_{d\mid n} d \\
&= \sum_{d_a\mid a,d_b\mid b} d_ad_b \\
&= \left(\sum_{d_a\mid a} d_a\right)\left(\sum_{d_b\mid b} d_b\right) \\
&= \sigma(a)\sigma(b)
\end{align}$$
This lets us conclude, by induction on $k$, that if $p_1,p_2,\dots,p_k$ are distinct primes then: $$\tau\left(p_1^{a_1}p_2^{a_2}\cdots p_k^{a_k}\right) = \tau\left(p_1^{a_1}\right)\cdots \tau\left(p_k^{a_k}\right)\\
\sigma\left(p_1^{a_1}p_2^{a_2}\cdots p_k^{a_k}\right) = \sigma\left(p_1^{a_1}\right)\cdots \sigma\left(p_k^{a_k}\right)$$
So now all you have to do is convince yourself that $\tau(p_i^{a_i})=a_i+1$ and $\sigma(p_i^{a_i})=1+p_i+p_i^2+\dots + p_i^{a_i}$.
A function $f$ is said to be "multiplicative" if $f(ab)=f(a)f(b)$ when $a,b$ are relatively prime. There is a general rule that if $f$ is multiplicative, then:
$$g(n)=\sum_{d\mid n} f(d)$$
is also multiplicative.
In the above two cases, we take $f(d)=1$ for $\tau$ and $f(d)=d$ for $\sigma$.
Any multiplicative function can therefore be computed from its values at prime powers, i.e. from the prime factorization of its argument.
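For instance, here is a minimal sketch (assuming SymPy for the factorization; the helper names are mine) of computing $\tau$ and $\sigma$ this way, checked against brute-force divisor sums:
```python
from math import prod
from sympy import factorint

def tau(n):
    # tau(p1^a1 * ... * pk^ak) = (a1 + 1) * ... * (ak + 1)
    return prod(a + 1 for a in factorint(n).values())

def sigma(n):
    # sigma(p^a) = 1 + p + ... + p^a = (p^(a+1) - 1) / (p - 1)
    return prod((p ** (a + 1) - 1) // (p - 1) for p, a in factorint(n).items())

n = 360  # = 2^3 * 3^2 * 5
print(tau(n), sum(1 for d in range(1, n + 1) if n % d == 0))    # 24 24
print(sigma(n), sum(d for d in range(1, n + 1) if n % d == 0))  # 1170 1170
```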
Equation of tangent on Cartesian plane given center and radius of a circle | Here's how I was taught to get tangents to conic sections in high school, half a century ago. If you want to get the tangent at the point $(x_0,y_0)$, you take the equation of the conic, and replace "half" of the $x$'s by $x_0$ and "half" of the $y$'s by $y_0$.
In this case, the equation of the curve is $(x-h)^2+(y-k)^2=r^2$, so the tangent at $(x_0,y_0)$ will be $(x_0-h)(x-h) + (y_0-k)(y-k)=r^2$.
A little algebra, using the fact that $(x_0-h)^2 + (y_0-k)^2 = r^2$, shows that this can also be written in the form $(x_0 -h)(x-x_0) + (y_0 -k)(y-y_0) = 0$.
This works with any conic section curve, and it gives you tangent planes of quadric surfaces, too.
Of course, you can apply the same process even if the point $(x_0,y_0)$ doesn't lie on the curve. In this case, the line you get is called the polar line of the point. It just happens that, for any point lying on the curve, the polar line is the tangent at the point.
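A quick symbolic check of this rule for the circle (a sketch, assuming SymPy):
```python
import sympy as sp

x, y, h, k, r, t = sp.symbols('x y h k r t', real=True)
x0, y0 = h + r * sp.cos(t), k + r * sp.sin(t)            # a point on the circle
line = (x0 - h) * (x - h) + (y0 - k) * (y - k) - r**2    # the claimed tangent

# The point (x0, y0) lies on the line:
print(sp.simplify(line.subs({x: x0, y: y0})))            # 0
# The line's normal vector (x0 - h, y0 - k) is the radius direction,
# so the line is perpendicular to the radius, i.e. tangent to the circle.
```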
I'm curious -- do they still teach this stuff? |
DE with separable method | Well, for $k=0$ your solution equals the one in the book. The solution in the book is a solution; yours is more general and is also correct.
The $\pi k$ simply shifts the function left or right by $\pi k$, and because the tangent is periodic with period $\pi$, the shift by $\pi k$ doesn't change the solution.
Integrating a matrix function involving a determinant and exponential trace | As you pointed out, the integrand is invariant under the action of the orthogonal group. Thus it suffices to integrate over diagonal matrices, and multiply the result by the volume of the orthogonal group.
Suppose $X=diag(x_1,\ldots,x_k)$. Then the integral becomes
$$
\int_{\mathbb{R}^k} \prod_{i=1}^k\left(1+\frac{g}{a}x_i^2\right)^{-d/2}e^{-ax_i^2/2}\ dx_1\cdots dx_k=\left[\int_{\mathbb{R}}\left(1+\frac{g}{a}x^2\right)^{-d/2}e^{-ax^2/2}\ dx\right]^k.
$$
The remaining one-dimensional integral may be expressed in terms of moments of the Gaussian distribution.
Laplace Transform IVP problem $y'''(t)-y(t) = 2$ with initial conditions $y(0)=0$, $y'(0)=0$, $y''(0)=1$. | With $y'''-y=2$, taking the Laplace transform and using the initial conditions gives
\begin{align}
{\cal L}(y)
&= \dfrac{s+2}{s(s^3-1)} \\
&= -\dfrac{2}{s}+\dfrac{1}{s-1}+\dfrac{s+\frac12-\frac12}{(s+\frac12)^2+\frac34} \\
&= -\dfrac2s+\dfrac{1}{s-1}+\dfrac{s+\frac12}{(s+\frac12)^2+\frac34}-\frac12\dfrac{2}{\sqrt{3}}\dfrac{\frac{\sqrt{3}}{2}}{(s+\frac{1}{2})^2+\frac{3}{4}} \\
y&= -2+e^t+e^{-\frac12t}\cos\dfrac{\sqrt{3}}{2}t-\dfrac{1}{\sqrt{3}}e^{-\frac12t}\sin\dfrac{\sqrt{3}}{2}t
\end{align} |
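A direct symbolic check of this closed form (a sketch, assuming SymPy):
```python
import sympy as sp

t = sp.symbols('t')
y = (-2 + sp.exp(t) + sp.exp(-t / 2) * sp.cos(sp.sqrt(3) * t / 2)
        - sp.exp(-t / 2) * sp.sin(sp.sqrt(3) * t / 2) / sp.sqrt(3))

print(sp.simplify(y.diff(t, 3) - y - 2))  # 0, so the ODE is satisfied
print(y.subs(t, 0),
      sp.simplify(y.diff(t).subs(t, 0)),
      sp.simplify(y.diff(t, 2).subs(t, 0)))  # 0 0 1: the initial conditions
```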
(Proof verification) Showing that the two $\limsup$ definitions are equivalent | This looks valid to me! You show that $\lim_{n \to \infty}\sup\{x_n, x_{n+1},\dots\}$ is equal to $\sup(T)$. This shows the two definitions are equal.
Your steps look good. You show that $\beta \leq \alpha$ by establishing a cutoff point after which $x_{n_k} \leq \sup\{x_k, x_{k+1}, \dots\}$. You then take the limit of each side of this inequality. This is valid since the left side is less than or equal to the right side for all $x_{n_k}$.
You then show that $\alpha \leq \beta$ by constructing a subsequence of $x_n$ that converges to $\alpha$ [which establishes $\alpha$ as a cluster point of $(x_n)$] and then using the fact that $\beta$ is the supremum of all cluster points of $(x_n)$.
I was unable to find any flaws in these steps. |
Diagonalization of an infinite matrix | Since you are in infinite dimensions, you would first need to specify in which space the operator $A$ is supposed to act, then you can try to prove that it fulfils the assumptions for the spectral theorem.
If we first look at the action of $A$ on an arbitrary sequence of real (I'm assuming that you are working in $\mathbb{R}$) numbers $a=(a_1, a_2,...)$ we see that $A(a) = (a_1, a_1, ...)$ which won't be e.g. in $l^2$, the natural Hilbert space of sequences.
Actually, $A$ might seem to only make sense in $l^\infty$ (but it doesn't, as @MartinArgerami points out, because we don't have a countable basis with which to interpret what the action of $A$ on an arbitrary vector $u \in l^\infty$ is), and this is definitely not Hilbert. Because we then lack the notion of a scalar product, we cannot define what orthogonal eigenspaces would be, hence no orthogonal diagonalisation.
Note however that we can formally find another "infinite matrix" $P$ such that $P^{-1}$ "exists" in some sense and $D = P A P^{-1}$ is a diagonal infinite matrix, namely
$$D = \left(\begin{array}{ccccc}
1 & & & & \\
0 & 0 & & & \\
0 & 0 & 0 & & \\
0 & 0 & 0 & 0 & \\
\vdots & & & & \ddots
\end{array}\right)$$
with
$$P^{- 1} = \left(\begin{array}{ccccc}
1 & & & & \\
1 & 1 & & & \\
1 & 0 & 1 & & \\
1 & 0 & 0 & 1 & \\
\vdots & & & & \ddots
\end{array}\right),\ \ P = \left(\begin{array}{ccccc}
1 & & & & \\
- 1 & 1 & & & \\
- 1 & 0 & 1 & & \\
- 1 & 0 & 0 & 1 & \\
\vdots & & & & \ddots
\end{array}\right).$$
Edit: If you are wondering where those matrices came from, it was basically this: it is natural to see how $A$ acts on the canonical basis, and one immediately sees that $A(e_1)=u=(1,1,1,1,...)$ is an eigenvector with eigenvalue 1 and that $A(e_i)=0$ for all $i>1$, so $e_i$ are eigenvectors with eigenvalue 0. You want $P$, $P^{-1}$ such that $D=P A P^{-1}$, where $P^{-1}$ is a change from the "new" basis of eigenvectors into the "old", i.e. the matrix with columns $u, e_2, e_3, ...$ Compute its "inverse" $P$, see if $D$ is all zeros except in the first entry, and you are done. But again, this is all formal and quite wrong, since we don't have a basis to begin with. See Martin's answer for more. |
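One can at least verify the formal claim on finite truncations; here is a minimal sketch (assuming NumPy):
```python
# Truncate A, P, P^{-1}, D to n x n and check that P A P^{-1} = D.
import numpy as np

n = 6
A = np.zeros((n, n)); A[:, 0] = 1        # A(a) = (a_1, a_1, a_1, ...)
Pinv = np.eye(n);     Pinv[:, 0] = 1     # columns u, e_2, e_3, ...
P = np.eye(n);        P[1:, 0] = -1      # the matrix P given above
D = np.zeros((n, n)); D[0, 0] = 1

assert np.allclose(P @ Pinv, np.eye(n))  # P really inverts P^{-1}
assert np.allclose(P @ A @ Pinv, D)
print("P A P^{-1} = D holds for the", n, "x", n, "truncation")
```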
Field and vector spaces | A field, roughly, is any algebraic structure with two laws of composition (addition and multiplication) satisfying the usual laws you know to be true for rational, real and complex numbers. In particular, every non-zero element has a multiplicative inverse. Other examples of fields are: rational functions over the real or complex numbers (more generally, over any field); $\mathbf Z/p\mathbf Z$ for any prime number $p$, which gives finite fields.
$\mathbf R$ is a $\mathbf Q$-vector space, but it is not finite-dimensional, not even countable dimensional, since otherwise, $\mathbf R$ would be countable, which is not true. A basis of $\mathbf R$ over $\mathbf Q$ is called a Hamel basis, but no one has ever seen one. Actually its existence is a consequence of the axiom of choice. |