title | upvoted_answer
---|---|
Showing a harmonic function takes its minimum on the boundary | Let $f(z)=u+iv$ be analytic in $A$. Then $g(z)=e^{f(z)}$ is also analytic and nonzero in $A$.
Now since $|g(z)|=e^u$, if $u$ attained its minimum inside $A$, then so would $|g(z)|$, violating the minimum modulus principle for analytic functions. So $u$ must attain its minimum on the boundary of $A$, unless it is constant. |
Is this a new characteristic function for the primes? | As $n\in\Bbb Z^+$, if $\chi_{\Bbb P}(n)=1$ then $$\chi_{\mathbb{P}}(n)=\frac{(-1)^{2\Gamma(n)/n}-1}{(-1)^{-2/n}-1}=\frac{(-1)^{2(n-1)!/n}-1}{(-1)^{-2/n}-1}=1\implies (-1)^{2(n-1)!/n}=(-1)^{-2/n}$$ which is equivalent to $$\frac{2(n-1)!}n\equiv-\frac2n\pmod2\implies2\cdot\left(\frac{(n-1)!+1}{n}\right)\equiv0\pmod2$$ which is true if the term in brackets is an integer; that is, if $n\mid (n-1)!+1$, which in turn is equivalent to Wilson's Theorem.
Note that on the other hand, $$\chi_{\Bbb P}(n)=0\implies\frac{2(n-1)!}n\equiv0\pmod2\implies n\mid(n-1)!$$ so $n$ cannot be prime. |
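As a numerical sanity check of the argument (a Python sketch, not part of the original answer; it uses the principal branch of $(-1)^q$ and reduces the exponent mod $2$ with exact integer arithmetic; $n=4$ is the usual Wilson-type exception, since $4\nmid 3!$):

```python
import cmath
from math import factorial

def chi(n):
    # (-1)^q = exp(i*pi*q); reduce q = 2*(n-1)!/n modulo 2 exactly before
    # going to floating point, so large factorials don't destroy the phase
    q = (2 * factorial(n - 1)) % (2 * n) / n
    num = cmath.exp(1j * cmath.pi * q) - 1
    den = cmath.exp(-1j * cmath.pi * 2 / n) - 1
    return num / den

primes = {2, 3, 5, 7, 11, 13, 17, 19, 23}
results = {n: chi(n) for n in range(2, 25) if n != 4}
```

For every $n$ tested (excluding $4$), the formula returns $1$ exactly at the primes and $0$ at the composites, matching Wilson's theorem.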
Finding the Laurent series of a given function | As you found already, $f(z) = \frac{2}{2+z} - \frac{1}{1+z}$
Let's look at each fraction separately:
$-\frac{1}{1+z}$ is analytic in $|z| \gt 1$ and since $|\frac{1}{z}| \lt 1$ we have:
$$
-\frac{1}{1+z} = -\frac{1}{z}\cdot\frac{1}{1-(-\frac{1}{z})} = -\frac{1}{z}(1-\frac{1}{z}+\frac{1}{z^2}-\dots) = -\frac{1}{z}+\frac{1}{z^2}-\frac{1}{z^3}+\dots
$$
$\frac{2}{2+z}$ is analytic in $|z|\lt2$; since $|\frac{z}{2}| \lt 1$ we can represent it by its Taylor series:
$$
\frac{2}{2+z} = \frac{1}{1-(-\frac{z}{2})} = 1 -\frac{z}{2}+\frac{z^2}{4}-\dots
$$
Therefore the Laurent series
$$
\dots -\frac{1}{z^3}+\frac{1}{z^2}-\frac{1}{z}+1-\frac{z}{2}+\frac{z^2}{4}-\dots
$$
converges in $1 \lt |z| \lt 2$. |
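One can sanity-check this expansion numerically by truncating both parts of the series and comparing against $f(z)=\frac{2}{2+z}-\frac{1}{1+z}$ at a point of the annulus (a Python sketch, not part of the original answer):

```python
def f(z):
    return 2 / (2 + z) - 1 / (1 + z)

def laurent(z, N=60):
    # principal part: -1/z + 1/z^2 - 1/z^3 + ... = sum_{k>=1} (-1)^k z^(-k)
    s = sum((-1) ** k * z ** (-k) for k in range(1, N + 1))
    # analytic part: 1 - z/2 + z^2/4 - ... = sum_{k>=0} (-z/2)^k
    s += sum((-z / 2) ** k for k in range(N + 1))
    return s

z = 1.3 + 0.4j  # |z| ~ 1.36, inside the annulus 1 < |z| < 2
err = abs(f(z) - laurent(z))
```

With $N=60$ terms on each side the truncation error is far below $10^{-6}$ at this point.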
Solution of inhomogeneous Fredholm integral equation of the first kind with symmetric rational kernel | The equation
$$
f(x)=\frac{1}{\pi}\int_{0}^{\infty}\frac{g(y)}{x+y}\mathrm{d}y
$$
has solution
$$\begin{align}
g(x) &= \frac{1}{2 i} \lim_{\epsilon \to 0^+} \left\{f(-x-i\epsilon)-f(-x+i\epsilon)\right\} \\
&= \frac{1}{\sqrt{x}} \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k)!} \left(\frac{\pi}{x} \frac{\mathrm{d}}{\mathrm{d}x}\right)^{2k} \left\{\sqrt{x}f(x)\right\}.
\end{align}$$
Source: Polyanin and Manzhirov, Handbook of Integral Equations, section 3.1-3, #17.
Numerous other sources are cited below the entry there. |
interpretation of tangent space of level curve manifold | As you said so yourself, the tangent vectors can be interpreted as directional derivatives of $f$.
So in particular, if $\forall v \in T_xM: v(f) = 0$, then the directional derivatives of $f$ in any direction $v$ will be $0$, so $D_x f(v) = 0$.
Alternatively, since you're in Euclidean space, you can work in local coordinates: $v(f)$ can be expressed as $\partial_v f = (D_x f)v$ (if you're not clear on why this is so, let me know and I'll explain more). |
Why is it that the level curves of $g$ are orthogonal to these curves of $f$ here? | Note that the gradient vectors $$< f_x, f_y>$$ and $$< g_x, g_y>$$ are perpendicular to each other.
That implies that the level curves meet at a $90$-degree angle.
Thus the level curves of one are the flux curves of the other. |
Conditional entropy of linear transformation of random variables | This is true quite generically. Let $(X,Z)$ be an arbitrary pair of random variables with a reasonably nice joint density $p_{XZ}$. Recall that $$ h(X|Z) = -\int p_{XZ}(x,z) \log \frac{p_{XZ}(x,z)}{p_Z(z)} \mathrm{d}x \mathrm{d}z$$
By the standard transformation of random variables, for any invertible $A$, if $(U,V) = (AX, AZ) = B(X,Z),$ where $B = \mathrm{diag}(A,A)$ is a block diagonal matrix with blocks $A$, then $$ p_{UV}(u,v) = |\det B|^{-1} p_{XZ}(A^{-1} u, A^{-1}v).$$ Further, due to block diagonality, observe that $\det B = \det^2 A$.
By an identical calculation, $p_V(v) = |\det A|^{-1} p_Z(A^{-1}v)$.
Thus, \begin{align}
h(U|V) &= -\int p_{UV}(u,v) \log \frac{p_{UV}(u,v)}{p_V(v)} \mathrm{d}u \mathrm{d}v \\
&\overset{(u,v) = B(x,z)}{=} -\int p_{UV}(Ax, Az) \log \frac{p_{UV}(Ax, Az)}{p_V(Az)} |\det B| \, \mathrm{d}x \mathrm{d}z\\
&= -\int p_{XZ}(x,z) \log \frac{p_{XZ}(x,z) |\det B|^{-1}}{p_Z(z) | \det A|^{-1}}
\mathrm{d}x \mathrm{d}z \\ &= h(X|Z) + \log |\det A|,\end{align} where I've used that $|\det B| = |\det A|^2$.
Your case follows on setting $X = X$ and $Z = (X+Y)$ and $\det A = 1$ in the above calculation (I chose slightly wrong notation in the beginning, because I thought that you defined $Z = X+Y$ instead of $Z = A(X+Y)$. There was too much to edit when I realised the notational snafu, so we get some mismatch. My apologies).
Notice that we made no assumptions about the joint law of $(X,Z)$ (besides regularity assumptions like $p(x,z)/p(z)$ have integrable logarithm), or about $A$ besides the fact that it is invertible. All that is really happening in this situation is that you're blowing up two random vectors with $A$, which is changing the common scale of the problem by a factor of $A$. |
Random Sample ¿Random variables or realizations of the same random variable? | It is possible to set up our model either way. Personally I like to think of a random variable as a measurement resulting from an experiment. In your example, I would consider $n$ different (independent) experiments consisting of measuring the weight of a random university student and each random variable will be associated with a single experiment. So the random sample will be made up of different random variables. |
How can I construct envelope of unity? | The maximum of a finite family of continuous functions is continuous. Since the supports of the $\varphi_i$ form a locally finite family, this implies that
$$\varepsilon := \sup_{i\in I} \varphi_i$$
is a continuous function on $B$, and since $\sum_i \varphi_i \equiv 1$, we have $\varepsilon(x) > 0$ for all $x \in B$.
Now define
$$\eta_i(x) := \min \left\lbrace 1, \frac{\varphi_i(x)}{\varepsilon(x)}\right\rbrace.$$
$\eta_i$ is continuous, and $\operatorname{supp} (\eta_i) = \operatorname{supp} (\varphi_i)$.
We have $\max\limits_{i \in I} \eta_i(x) = 1$ for all $x \in B$:
Since all $\eta_i$ are bounded above by $1$, it is clear that the maximum is at most $1$.
Since the family of supports is locally finite, $x$ has a neighbourhood $V$ that intersects only finitely many of the $\operatorname{supp} (\varphi_i)$, say for $i \in I_0$. Let $i_0 \in I_0$ be an index with $\varphi_{i_0}(x) = \max\limits_{i \in I_0} \varphi_i(x)$. Then $\varepsilon(x) = \varphi_{i_0}(x)$, and hence $\eta_{i_0}(x) = 1$, so the maximum is at least $1$ at every point. |
Permutations $\{1, ..., n\}$ with all cycles even | Any permutation is a product of disjoint cycles. If we say that "all cycles are even", then you might want to seek clarification: this could mean all cycles have even length, or all cycles are even permutations and thus have odd length. |
Is this an equivalent definition of cluster point? | No, take $x_n=a$ for all $n\in \mathbb{N}$. Then, any neighbourhood of $a$ contains every element of the sequence, and so, in particular, infinitely many terms. As such, $a$ is a cluster point of $(x_n)_{n\in \mathbb{N}}$.
In general, we want it to be true that if $(x_n)_{n\in \mathbb{N}}$ is convergent, then it has a unique cluster point: The limit. |
probability without replacement when numbers are drawn | So, you first draw a marble: what is the probability that the number on it is 5 or 6 (because we want more than 4)? Of course $2/6$. So the probability of getting
more than 4 is $2/6$. The second marble has the same probability, so the answer will be
$(2/6)^4$. |
triviality of tensor product of vector bundles | The Möbius real line bundle $\xi$ over the circle $S^1$ is not trivial, but its complexification $\xi\otimes_\mathbb R \mathbb C$ is trivial, like all complex line bundles over $S^1$.
[This last fact is due to complex line bundles on the circle being classified by $H^2(S^1,\mathbb Z)=0$] |
Is the sum of unbounded, symmetric operators with same domain self-adjoint? | Let $H_1=-\frac{d^2}{dx^2}$ on the domain $\mathcal{D}_1$ consisting of all twice absolutely continuous functions $f$ on $[0,1]$ with $f''\in L^2[0,1]$ and $f(0)=f(1)=0$. Let $H_2 = -\frac{d^2}{dx^2}$ on the domain $\mathcal{D}_2$ consisting of all twice absolutely continuous functions $f$ on $[0,1]$ with $f''\in L^2[0,1]$ and $f'(0)=f'(1)=0$. Both of these operators are densely-defined, non-negative selfadjoint linear operators. And $H_1+H_2$ is symmetric positive-definite and densely-defined on $\mathcal{D}=\mathcal{D}_1\cap\mathcal{D}_2$. It is even true that $H_1+H_2$ is closed on this domain, but it is not selfadjoint. |
Schur polynomials on 1's and -1's | Posting an answer in case anyone was interested. There doesn't seem to be a nice compact expression as in the all 1's case. But there is a simpler approach that doesn't involve Littlewood-Richardson coefficients, but instead just requires locating the right expression in Macdonald's 'Symmetric Functions and Hall Polynomials' text.
The Schur polynomial may be written as
$$
s_\lambda(x_1,\ldots,x_d) = \sum_{\rho \vdash k} z^{-1}_\rho \chi^\lambda_\rho \, p_\rho(x_1,\ldots,x_d)
$$
where $\chi^\lambda_\rho$ is the character $\chi^\lambda(\sigma)$ evaluated on elements of $S_k$ of cycle-type $\rho$, $z_\lambda = \prod_i i^{m_i} m_i!$, where $m_i$ is the multiplicity of $i$ in the partition, and $p_\rho$ is the power-sum symmetric polynomial on the partition $\rho$. Knowing this, we can simply write
$$
s_\lambda(1,\ldots,1,-1,\ldots,-1) = \sum_{\rho \vdash k} z^{-1}_\rho \chi^\lambda_\rho \, \textstyle\prod_{\rho_i} (a+(-1)^{\rho_i} b)\,,
$$
which gives a polynomial in $a$ and $b$, as desired. For example, the partition $\lambda=\{1,1\}$ gives $s_\lambda(1_a,-1_b)=\frac{1}{2}((a - b)^2-(a+b))$. |
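This last example is easy to verify by brute force, since $s_{(1,1)}=e_2$, the second elementary symmetric polynomial (a Python sketch, not part of the original answer):

```python
from itertools import combinations
from fractions import Fraction

def e2(xs):
    # s_{(1,1)} = e_2 = sum of all products x_i * x_j with i < j
    return sum(x * y for x, y in combinations(xs, 2))

checks = []
for a in range(5):
    for b in range(5):
        xs = [1] * a + [-1] * b  # a ones and b minus-ones
        predicted = Fraction((a - b) ** 2 - (a + b), 2)
        checks.append(e2(xs) == predicted)
```

Every pair $(a,b)$ with $0\le a,b\le 4$ matches the closed form $\tfrac12((a-b)^2-(a+b))$.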
For orthogonal vectors $x$ and $y \in \mathbb{R}^3$, if $x$ has norm $4$ and $y$ has norm $4$ then what is the norm of $x+y$? | Since the vectors are orthogonal, they meet at a right angle, so we have
$$|| x+y || = \sqrt{||x||^2 + ||y||^2}$$
by the Pythagorean theorem. With $\|x\|=\|y\|=4$, this gives $\|x+y\|=\sqrt{16+16}=4\sqrt{2}$. |
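A quick numerical illustration with one concrete orthogonal pair (the particular vectors are just one hypothetical choice):

```python
import math

# one concrete orthogonal pair in R^3, each of norm 4
x = (4.0, 0.0, 0.0)
y = (0.0, 4.0, 0.0)
s = [xi + yi for xi, yi in zip(x, y)]
norm_sum = math.sqrt(sum(c * c for c in s))
```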
Question about proof that euclidean metric and square metric are the product topology | From the geometry, it is sort of obvious that $B_{d}\left(x, \epsilon\right) \subset B_{p}\left(x, \epsilon\right)$
However, you asked for a proof. First let us show $y \in B_{d}\left(x, \epsilon\right) \Rightarrow y \in B_{p}\left(x, \epsilon\right)$:
$$
y \in B_{d}\left(x, \epsilon\right) \\
\Leftrightarrow d(x, y) < \epsilon \\
\Rightarrow p(x, y) < \epsilon \\
\Leftrightarrow y \in B_{p}\left(x, \epsilon\right)
$$
where the third line uses the inequality $p(x, y) \le d(x, y)$. To show that $y \in B_{p}\left(x, \epsilon\right) \nRightarrow y \in B_{d}\left(x, \epsilon\right)$, consider the point $y=\left( x_1 + \alpha\epsilon, x_2 + \alpha \epsilon, \dots \right)^T$; then $p(x, y)=\alpha\epsilon$, while $d(x, y)=\sqrt{n}\alpha\epsilon$, so for $\frac{1}{\sqrt{n}} < \alpha < 1$, $y \in B_p\left(x, \epsilon\right)$ but $y \not\in B_d\left(x, \epsilon\right)$. |
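The two inequalities used here, $p(x,y)\le d(x,y)\le\sqrt n\, p(x,y)$, are easy to sanity-check numerically on random points (a Python sketch, not part of the original answer):

```python
import math
import random

def p(x, y):  # square (sup) metric
    return max(abs(a - b) for a, b in zip(x, y))

def d(x, y):  # Euclidean metric
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

random.seed(0)
n = 5
ok = True
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(n)]
    y = [random.uniform(-10, 10) for _ in range(n)]
    ok &= p(x, y) <= d(x, y) <= math.sqrt(n) * p(x, y) + 1e-12
```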
The boundary orientation coincides with the preimage orientation | Since the sphere is connected, it suffices to check that the orientations agree at a point. If $e_1,\dots e_{k+1}$ is a positively oriented orthonormal basis of $\mathbb{R}^{k+1}$, then at the point $e_1 \in S_k$, the basis $e_2, \dots e_{k+1}$ for $T_{e_1}S^k$ is positively oriented according to the boundary orientation.
We can think of the unit sphere as the preimage of $1$ under $g$, that is, $S^k=g^{-1}(1)$. So we set $Z$ to be the singleton $\{1\}\subset \mathbb{R}$ with its tangent space the positively oriented zero vector space (usually when a point is not given an explicit orientation, you can just take it to be positively oriented).
Now the tangent vectors pointing out from the sphere are mapped by $dg$ to positive real numbers and the vectors pointing into the sphere are mapped by $dg$ to negative real numbers. Any positive real number is a positively oriented basis for $\mathbb{R}=T_1\mathbb{R}$, so using the formulas in the linked post, you can see that indeed the preimage orientation is the same as the boundary orientation: if the outward normal followed by a basis for $T_xS^k$ is positively oriented for the whole ball, that basis is oriented for the sphere. |
Showing this set is closed in $\mathbb{C}$ | Let $(z_n)$ be a sequence in $B$ that converges, and denote its limit by $\ell$. Suppose $|\ell-a|>r$. Let $\varepsilon>0$ and pick $N$ such that $|z_N-\ell|<\varepsilon.$ Then
$$r<|\ell-a|\leq|\ell-z_N|+|z_N-a|<\varepsilon+r$$
Since $\varepsilon>0$ is arbitrary, we get $|\ell-a|\leq r$, which is a contradiction. Therefore $\ell\in B$ and the claim follows. |
Giving an proof on a combinatorial statement | Rewrite as $$\binom{a+b}{2}=\binom{a}{2}\binom{b}{0}+\binom{a}{1}\binom{b}{1}+\binom{a}{0}\binom{b}{2}$$ and note that both sides count the number of ways to choose a pair of people from $a$ men and $b$ women. The left hand side is clear. The right hand side performs the count according to three cases:
$2$ men and $0$ women
$1$ man and $1$ woman
$0$ men and $2$ women |
Calculate $\int_0^1\frac{(\ln x)^2}{\sqrt{x-x^2}}dx=?$ | This is a simple integral. By Euler's Beta function
$$ \int_{0}^{1}\frac{x^\alpha}{\sqrt{x(1-x)}}\,dx = \frac{\Gamma\left(\alpha+\tfrac{1}{2}\right)}{\Gamma(\alpha+1)}\sqrt{\pi}\tag{1}$$
for any $\alpha>-\frac{1}{2}$, hence it is enough to apply $\frac{d^2}{d\alpha^2}$ to both sides of $(1)$, then evaluate at $\alpha=0$.
It is practical to write $\frac{dg}{d\alpha}$ as $g\cdot\frac{d}{d\alpha}\log g$ and recall that $\psi=\frac{d}{d\alpha}\log\Gamma$ and
$$ \psi(1)=-\gamma,\quad \psi\left(\tfrac{1}{2}\right)=-\gamma-\log(4),\quad \psi'(1)=\frac{\pi^2}{6},\quad \psi'\left(\tfrac{1}{2}\right)=\frac{\pi^2}{2}$$
to get
$$ \int_{0}^{1}\frac{\log^2(x)}{\sqrt{x(1-x)}}\,dx = \color{red}{\frac{\pi^3}{3}+4\pi\log^2(2)}.\tag{2}$$
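The value in $(2)$ can be confirmed numerically: substituting $x=\sin^2 t$ turns the integral into $8\int_0^{\pi/2}\log^2(\sin t)\,dt$, which a plain midpoint rule handles well (a Python sketch, not part of the original answer):

```python
import math

# substitute x = sin(t)^2: dx = 2 sin t cos t dt, sqrt(x(1-x)) = sin t cos t,
# so the integral becomes 2 * int log(sin^2 t)^2 dt = 8 * int log(sin t)^2 dt on [0, pi/2]
N = 200_000
h = (math.pi / 2) / N
I = 8 * h * sum(math.log(math.sin((k + 0.5) * h)) ** 2 for k in range(N))

expected = math.pi ** 3 / 3 + 4 * math.pi * math.log(2) ** 2
```

The midpoint sum agrees with $\frac{\pi^3}{3}+4\pi\log^2 2\approx 16.3735$ to a few decimal places.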
This can be proved by Fourier-Legendre series expansions, too. Indeed, the hypergeometric functions mentioned by Raffaele in the comments have simple closed forms at $x\in\left\{0,\frac{1}{2},1\right\}$. Have a look at page 39 here.
Yet another (brutally efficient) approach is to apply Parseval's theorem to the Fourier series of $\log\sin$, which can be derived from the identity $\sum_{n\geq 1}\frac{\cos(n\theta)}{n}=-\log\left|2\sin\frac{\theta}{2}\right|$. This also explains why the Euler-Mascheroni constant $\gamma$ disappears from the RHS of $(2)$.
Additionally, the similar integral $\int_{0}^{1}\frac{\log^3(x)}{\sqrt{1-x^2}}\,dx$ is computed at page 76 of my notes through the same technique used above (Feynman's trick and special values for $\Gamma,\psi,\psi',\psi''$). Here there are no simple ways for applying Parseval's theorem, hence the approach by differentiation under the integral sign is a bit more general, even if Fourier series solve OP's problem sooner. |
What's wrong with this proof that ZFC is consistent | No, you can't do that. The most basic reason why is that the question, "Is $\varphi$ independent of $T$?" is not decidable - given an arbitrary first-order sentence $\varphi$ and a finite set of first-order sentences $T$, there is no way to tell if $\varphi$ is independent of $T$. Specifically, the set of pairs $$\{(\varphi, T): \mbox{$\varphi$ is a sentence, $T$ is a finite set of sentences, $T\not\vdash\varphi$, and $T\not\vdash\neg\varphi$}\}$$ is (when appropriately coded as a set of natural numbers) not computably enumerable.
Concretely, this means that - fixing some background recursive consistent set of axioms $S$ extending (say) ZFC - there is some finite theory $T$, and some sentence $\varphi$, such that (1) $\varphi$ is independent of $T$, but (2) $S$ doesn't prove that. At some point in your process, the following will happen: you've listed sentences $\varphi_1, . . . \varphi_n$ that you're confident are consistent with each other, and you ask - "Is $\psi$ consistent with $\varphi_1\wedge . . .\wedge\varphi_n$?" Unfortunately, your theory can't answer this question, and so your process gets stuck.
Now, what about if you run your process without regard to computational complexity? In this case you can indeed define a consistent, complete theory $T$! The problem is, this $T$ isn't computable. So the existence of such a $T$ doesn't contradict Godel's theorem at all.
ADDED: You might well ask, "What do such $T$ look like?" Well, bad news first: without specifying the order in which you look at sentences, I can't tell you much.
Good, and very surprising, news: as long as the order you look at sentences in is computable, I can tell you some things! Specifically, I can tell you that $T$ is wrong: $T$ will have axioms which are false statements about the natural numbers. (Ignoring for the moment the philosophical questions around what I mean by "false" - for now, just assume we have something we agree is the "real" set of natural numbers.) This is because of computability theory: the theory $T$ will be $\Delta^0_2$, but the true theory of arithmetic is much more complicated than that. And we know this without understanding any of the details about $T$!
Neat, huh?
One more comment: ZFC has infinitely many axioms. Even if we could do this for each finite subset of ZFC, we would need to be able to prove - in ZFC - that we could do this for each finite subset of ZFC! But "$\forall$" and "$\vdash$" don't commute: just because, for each $n$, $ZFC$ proves $p_n$, doesn't mean $ZFC$ proves "$\forall n\, p_n$."
One example of this is that ZFC (assuming it's consistent of course :P) proves "There is no contradiction in the ZFC axioms of length $n$" for each $n$. But, by Godel, ZFC doesn't prove "For every $n$, there is no contradiction in the ZFC axioms of length $n$."
In fact - and this is a point where ZFC is special (by contrast, everything I've said above is true for every reasonable theory of set theory and arithmetic) - ZFC can do more! For each finite $F\subset ZFC$, ZFC proves F is consistent! This is called the reflection theorem (see e.g. https://mathoverflow.net/questions/18787/montagues-reflection-principle-and-compactness-theorem). The problem is, the proof that ZFC can do this takes more than ZFC, so ZFC can't prove "all my finite fragments are consistent." This is very weird, and takes some time to understand. |
Is this definition of a polynomial adequate? If not, how do I fix it? | Your definition of polynomial doesn't agree with what people usually want to model.
Note that taking $\mathbb F=\{0,1\}$, according to your definition, setting $p(x)=x^2+1$ and $q(x)=x+1$, one has $p(x)=q(x)$ for all $x\in \{0,1\}$, so $p=q$. But usually one wants $p\neq q$.
What you've defined is a polynomial function.
The operator that takes polynomials to its polynomial functions isn't injective in general (see example above), therefore you can't equate polynomial function with polynomial.
But it works sometimes, namely in fields of characteristic $0$. |
Function that is in $L^1(1,\infty)$ but not in $L^2(1,\infty)$ | Just consider e.g. $$f(x) := \begin{cases} \frac{1}{\sqrt{x-5}}, & x \in (5,6), \\ 0, & \text{otherwise}. \end{cases}$$ Then $f \in L^1((1,\infty))$, but $f \notin L^2((1,\infty))$. |
How to find the volume by triple integral? | $$\int_0^1\int_0^{\sqrt{1-z^2}}\int_0^ydxdydz=\int_0^1\int_0^{\sqrt{1-z^2}}ydydz=\int_0^1\frac{(1-z^2)}{2}dz=\left[\frac z2\right]_0^1-\left[\frac{z^3}6\right]_0^1=\frac12-\frac16=\frac13$$ |
Is it possible to simplify the sum $\sum\limits_{n = 0}^{\lfloor {\frac{M}{2}}\rfloor } {\frac{{M!}}{{{2^n} \times ( {n!} )\times( {M - 2n})!}}}$? | Making the problem more general
$$S_M=\sum\limits_{n = 0}^{\left\lfloor {\frac{M}{2}} \right\rfloor } {\frac{{M!}}{{ {n!} \, \left( {M - 2n} \right)!}}}x^n$$
$$S_{2m}=(-1)^m (4x)^m \,U\left(-m,\frac{1}{2},-\frac{1}{4 x}\right)$$
$$S_{2m+1}=(-1)^m (4x)^m \,U\left(-m,\frac{3}{2},-\frac{1}{4 x}\right)$$ where $U$ is the confluent hypergeometric function.
If $x=\frac 12$, these generate the sequences
$$\{2,10,76,764,9496,140152,2390480,46206736,997313824,23758664096\}$$ which correspond to
$$S_{2m}=2^m m!\, L_m^{-\frac{1}{2}}\left(-\frac{1}{2}\right)$$ and
$$\{4,26,232,2620,35696,568504,10349536,211799312,4809701440,119952692896\}$$ which correspond to
$$S_{2m+1}=2^m m!\, L_m^{\frac{1}{2}}\left(-\frac{1}{2}\right)$$ where $L_m^{(\alpha)}$ are the generalized Laguerre polynomials.
Have a look at the $OEIS$ for very interesting information. |
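For reference, both sequences are easy to regenerate by evaluating the original sum directly at $x=\tfrac12$ (a Python sketch, not part of the original answer; each term is an integer, since it counts involutions with $n$ two-cycles):

```python
from math import factorial

def S(M):
    # the original sum with x = 1/2, i.e. 2^n in the denominator
    return sum(
        factorial(M) // (2 ** n * factorial(n) * factorial(M - 2 * n))
        for n in range(M // 2 + 1)
    )

even_terms = [S(2 * m) for m in range(1, 11)]
odd_terms = [S(2 * m + 1) for m in range(1, 11)]
```

The output reproduces the two lists of values quoted above.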
Showing $R$ is a local ring if and only if all elements of $R$ that are not units form an ideal | The book "Introduction to Commutative Algebra" by Atiyah-Macdonald has in Corollary 1.5:
Every non-unit of $R$ is contained in a maximal ideal.
So
Let $R$ be local. If $m$ is the only maximal ideal of $R$, then $m$ will be the set of non-units.
Conversely:
Let $n$ be the set of non-units of $R$ (an ideal by assumption). Let $m$ be a proper ideal s.t. $n\subseteq m$. Since $m$ is proper, no element of $m$ can be a unit, and so $m\subseteq n$. So $n$ is maximal.
Update:
This is also Lemma 3.13 of the book "Steps in Commutative Algebra" by Sharp (in that terminology, quasi-local means your "local"). |
Proof involving matrix equation | Adding the identity matrix $I$ on both sides, we find $(A+I)(B+I) = I$. Hence $A+I$ and $B+I$ are inverses of each other. It follows that $(B+I)(A+I) = I$ as well. Expanding gives $BA + B + A = 0$, hence $AB = BA$. |
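A concrete check with exact rational arithmetic (a Python sketch, not part of the original answer; it assumes the original equation was $AB+A+B=0$, as the expansion in the proof indicates, and builds $B=(A+I)^{-1}-I$ from an arbitrary $A$ with $A+I$ invertible):

```python
from fractions import Fraction as F

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def inv2(X):  # inverse of a 2x2 matrix
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / det, -X[0][1] / det], [-X[1][0] / det, X[0][0] / det]]

I2 = [[F(1), F(0)], [F(0), F(1)]]
negI2 = [[-F(1), F(0)], [F(0), -F(1)]]
A = [[F(1), F(2)], [F(3), F(5)]]   # an arbitrary A with A + I invertible
B = add(inv2(add(A, I2)), negI2)   # B = (A+I)^{-1} - I solves AB + A + B = 0
zero = [[F(0), F(0)], [F(0), F(0)]]
```

One verifies both that $AB+A+B=0$ holds by construction and that $AB=BA$, as the proof predicts.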
Application of linear systems | You are missing the positivity constraints for $t$ and $p$.
The inequality $b+j \leq 120000$ is already part of your first equation (assuming positivity for $t$ and $p$), while $b+j\geq 0$ is directly dependent on the positivity of $b$ and $j$ and does not add new information. |
Complex Number Geometry 5 | Let $z=a+bi$. Then we have that $$\frac{1}{z} = \frac{1}{a+bi} = \frac{a-bi}{(a+bi)(a-bi)}= \frac{a}{a^2+b^2} - \frac{b}{a^2+b^2}i$$ The real part is equal to $1/6$, so we have $$\frac{a}{a^2+b^2} = \frac{1}{6}$$ Plot this on a a-b plane and you'll get a nice circle. The area is what you're looking for. $$\frac{a}{a^2+b^2} = \frac{1}{6} \Rightarrow a^2+b^2=6a \\ \Rightarrow (a-3)^2+b^2=3^2$$ This is a circle with radius $3$, thus the area is simply $\pi (3)^2 = 9\pi$. |
Find the domain of the polar curve $r(\theta)=2\,\cos{2\theta}$ | The book seems to be assuming the domain is a subset of $[0,2\pi]$, and is simply removing the values of $\theta$ for which $\cos2\theta$ is negative. When you require $r(\theta)\ge0$, you only get the right- and left-pointing lobes in your graph. The upper and lower lobes come by allowing $r(\theta)\lt0$: the lower lobe is swept out as $\theta$ runs from $\pi/4$ to $3\pi/4$, and the upper lobe is swept out as $\theta$ runs from $5\pi/4$ to $7\pi/4$. |
Normal vector to ellipsoid surface | An ellipsoid is given by the equation
$$ \frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} = 1, $$
for some constants $a,b,c \in \mathbb{R}_+$. If you take any curve $\alpha(t) = (x(t),y(t),z(t))$ on the ellipsoid, you know that it satisfies
$$ \frac{x(t)^2}{a^2} + \frac{y(t)^2}{b^2} + \frac{z(t)^2}{c^2} = 1. $$
Differentiating with respect to $t$ gives
$$ \frac{2x\dot x}{a^2} + \frac{2y \dot y}{b^2} + \frac{2z \dot z}{c^2} = 0
\iff \left\langle \left( \frac{x}{a^2}, \frac{y}{b^2}, \frac{z}{c^2} \right), \dot \alpha \right\rangle = 0. $$
Since $\alpha$ was arbitrary, this shows that $$(x,y,z) \mapsto\left( \frac{x}{a^2}, \frac{y}{b^2}, \frac{z}{c^2} \right)$$ is orthogonal to any tangent of the ellipsoid, and is therefore normal to the ellipsoid.
After inserting the spherical coordinates, this becomes
$$(\theta, \varphi) \mapsto\left( \frac{\cos\theta\cos\varphi}{a}, \frac{\cos\theta\sin\varphi}{b}, \frac{\sin\theta}{c} \right).$$ |
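One can verify this orthogonality numerically with finite-difference tangents along the coordinate curves (a Python sketch, not part of the original answer; the semi-axes $a,b,c$ and the base point are arbitrary choices):

```python
import math

a, b, c = 2.0, 3.0, 5.0  # arbitrary semi-axes

def point(t, p):
    # spherical parametrization of the ellipsoid
    return (a * math.cos(t) * math.cos(p), b * math.cos(t) * math.sin(p), c * math.sin(t))

def normal(x, y, z):
    return (x / a ** 2, y / b ** 2, z / c ** 2)

t, p, h = 0.7, 1.1, 1e-6
x0 = point(t, p)
n = normal(*x0)
dots = []
for dt, dp in ((h, 0.0), (0.0, h)):
    tangent = [(u - v) / h for u, v in zip(point(t + dt, p + dp), x0)]
    dots.append(sum(ni * ti for ni, ti in zip(n, tangent)))
```

Both dot products vanish to within the finite-difference error.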
MLE of variance for a spherical Gaussian | Yes, I think the reviewers, the typesetter, or the author, Pelleg, missed these two points, and it left me confused as well.
Your first point is correct. Since the solution is applicable to an arbitrary number of dimensions (Pelleg uses two in his examples), the norm would be a better notation, while still referring to the Euclidean norm.
After going through the log-likelihood derivation myself, I think a few steps were skipped, leaving the formulas mangled. The correct derivation might look more like this:
The probability that a sample $i$ in cluster $n$ will be at position $x_i$ is given by
\begin{align}
p(x_i│μ_n,σ_n^2 )= \frac{1}{\sqrt{2\pi} σ_n^M} exp(-\frac{ ‖x_i-μ_n‖^2}{2σ_n^2 })
\tag{1}
\end{align}
(Why Pelleg uses the factor $\frac{R_{(i)}}{R}$ with this is a mystery. It looks like he is showing the joint pdf, but this probability assumes $x_i$ is part of the set with the given mean and variance parameter values. This is the assumption for a particular model, which is the basis for the subsequent derivation of the log-likelihoods.)
The likelihood for a single cluster $n$ (where $1 \leq n \leq K$) is given by
\begin{align}
L_n(μ_n,σ_n^2|x_i) & = \prod_{i \in n}\frac{1}{\sqrt {2\pi} σ_n^M}exp(-\frac{ ‖x_i-μ_n‖^2}{2σ_n^2 })
\tag{2}
\end{align}
where $i \in n$ are the indices only of the samples in the cluster $n$.
The log-likelihood for this is
\begin{align}
l_n(μ_n,σ_n^2|x_i) = -\frac{R_n}{2}log2\pi -\frac{MR_n}{2}log(σ_n^2) - \frac{1}{2σ_n^2}\sum_{i \in n}{‖x_i-μ_n‖^2}
\tag{3}
\end{align}
where, using Pelleg's notation, $R_n$ is the number of samples in cluster $n$.
This is the easiest place to substitute our unbiased best estimate for $\sigma_n^2$. (Pelleg's paper makes an error referring to this as the MLE of $\sigma^2$.) Using $\hat{\sigma_n}^2=\frac{1}{R_n-1}\sum_{i \in n}{‖x_i-μ_n‖^2}$, then
\begin{align}
\hat{l}_n(μ_n,σ_n^2|x_i)
&= -\frac{R_n}{2}log2\pi -\frac{MR_n}{2}log(\hat{σ_n}^2) - \frac{1}{2\hat{σ_n}^2}(R_n-1)\hat{σ_n}^2 \\
&= -\frac{R_n}{2}log2\pi -\frac{MR_n}{2}log(\hat{σ_n}^2) - \frac{1}{2}(R_n-1) \\
\tag{4}
\end{align}
The log-likelihood for K clusters becomes
\begin{align}
\hat{l}(μ_1,...,μ_K,σ_1^2,...,σ_K^2|x_i) &= \sum_{1 \leq n \leq K}\hat{l}_n \\
&= -\frac{log2\pi}{2}\sum_{1 \leq n \leq K}R_n -\sum_{1 \leq n \leq K}\frac{MR_n}{2}log(\hat{σ}_n^2) - \frac{1}{2}\sum_{1 \leq n \leq K}(R_n-1) \\
&= -\frac{R}{2}log2\pi -\frac{M}{2}\sum_{1 \leq n \leq K}R_nlog(\hat{σ}_n^2)-\frac{1}{2}(R-K)
\tag{5}
\end{align}
where $R=\sum R_n$, or the total number of samples.
This somewhat resembles Pelleg's formula for $\hat{l}_n$. I believe an oversight explains how the $K$ mysteriously shows up in this partial calculation in Pelleg's paper for just one cluster:
I explained previously at equation (1) why I think the last two terms should not be present. |
Implicit Integrating Factor | $$y - 3y^3 = \left(y^4 + 2x\right)\frac{dy}{dx}$$
Your calculation is wrong. You wrote:
$$u=e^{\int\frac{3y^2-1}{y^4+2x}}$$
which does not make sense, because it is not specified with respect to which variable ($x$, $y$, or both) the integration is to be carried out.
Note that $\int\frac{3y^2-1}{y^4+2x}dx \neq \frac{(3y^2-1)(ln|y^4+2x|)}{2}$ because $y(x)$ is not constant.
A more direct approach is to consider the inverse function $x(y)$. The ODE becomes:
$$(y - 3y^3)x'-2x = y^4 $$
This is a first order linear ODE that you can solve with the standard method.
If you don't see it at first, change of variables : $X=y$ and $Y=x$ which leads to
$$(X-3X^3)=(X^4+2Y)\frac{dX}{dY} \quad\to\quad (X-3X^3)\frac{dY}{dX}-2Y=X^4$$
The solution is:
$$x=\frac{y^4}{2-6y^2}+C\frac{y^2}{1-3y^2}$$
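One can check this solution against the ODE $(y-3y^3)x'-2x=y^4$ with a central finite difference (a Python sketch, not part of the original answer; the constant $C$ and the sample points are arbitrary, chosen to avoid the pole at $y=1/\sqrt3$):

```python
def x_sol(y, C=1.7):
    return y ** 4 / (2 - 6 * y ** 2) + C * y ** 2 / (1 - 3 * y ** 2)

h = 1e-6
residuals = []
for y in (0.2, 0.4, 1.0, 2.0):
    xp = (x_sol(y + h) - x_sol(y - h)) / (2 * h)  # central difference for x'(y)
    residuals.append((y - 3 * y ** 3) * xp - 2 * x_sol(y) - y ** 4)
```

The residuals are numerically zero at every sample point.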
I am sure that you can take it from here. |
Condensation point in Lindelöf space | Hint: Try to prove by contrapositive. Suppose that some $A$ doesn't have a condensation point; then for every point $x\in X$, there is some open neighborhood that contains at most countably many elements of $A$. Use these neighborhoods to construct $\mathscr{U}$ and $U_m$, can you see how this shows that $A$ is countable? |
Prove/Disprove that $\left|\phi^{-1}(\{a\})\right| = \left|\phi^{-1}(\{b\})\right|$ for every $a, b \in f(G)$ | Yes, it's true, and I think the hypothesis of finite groups is unnecessary.
If $\phi\colon G\to H$ is a homomorphism then $G/\ker \phi\cong\phi(G)$, which means that $\phi^{-1}(\{\phi(g)\})=g\ker\phi$.
So $|\phi^{-1}(\{\phi(g)\})|=|\ker\phi|=|\phi^{-1}(\{e\})|$ for all $g\in G$. |
Prove that $a_n$ is eventually zero? | $a_{n+1}(a_{n+1} +\alpha) \le a_n^2$
$(a_{n+1} + \alpha) \le \frac {a_n^2}{a_{n+1}}$ for the $a_{n+1} > 0$.
If we say $a_{n+1} \le \beta a_n^2$ then $\frac {a_n^2}{a_{n+1}} \ge \frac 1\beta$, and this is certainly possible.
Let $a_{n+1} = \frac 12 a_n^2$ with $a_1 < 1$; then we want
$\frac 14a_n^4 + \alpha \frac 12a_n^2 \le a_n^2$, which holds for $\alpha = 1$ (since $a_n^2 \le 2$). |
inner product space questions | First, prove that
$$(A+B)^\perp=A^\perp\cap B^\perp:$$
$$x\in (A+B)^\perp\implies \langle x,a+b\rangle=0\;,\;\;\forall\,a\in A\,,\,b\in B$$
Now, choose first $\;b=0\;$ and then $\;a=0\;$ in the above, getting
$$\begin{cases}\langle x,a\rangle=0\;,\;\;\forall a\in A\iff x\in A^\perp\\{}\\\langle x,b\rangle =0\;,\;\;\forall b\in B\iff x\in B^\perp\end{cases}\implies\;x\in A^\perp\cap B^\perp$$
I leave to you the second, opposite inclusion.
Finally, to obtain your first inclusion, put simply $\;W=A^\perp\;,\;\;U=B^\perp\;$ and pass to orthogonal complements.
Added on request:
$$\forall\,x=T^*v\in\text{Im}\,T^*\;,\;\;\forall\,u\in\ker T:$$
$$\langle u,x\rangle=\langle u,T^*v\rangle=\langle Tu,v\rangle=\langle0,v\rangle=0\implies u\in \left(\text{Im}\,T^*\right)^\perp$$ |
$L^p(\mathbb R^n,\mu)$ and $L^p(\mathbb R^n\setminus\{0\},\mu)$ | If $\nu$ is a measure such that $\nu(\{0\})=0$, then the spaces $L^p(\mathbb R^n,\nu)$ and $L^p(\mathbb R^n \setminus \{0\},\nu)$ are naturally isometric, and thus can be identified.
Lacking any references, it seems impossible to tell why someone somewhere would use the second notation. |
Can someone simplify this? It's a closed form for the approximation of $(\ln(2))!.$ | $$\sqrt{\frac{\pi\ln\left(2\right)}{2}}\ln\left(2\right)^{\ln\left(2\right)}\exp(2\sqrt[3\pi]{e^{\pi\left(11-2e-8\pi\right)+3}}\pi^{1+e}\cos^{2}\pi e)$$
Will this do the trick? |
Looking for energy functional | A partial answer in the case $p=1$:
$$
E(u) = \frac{1}{2} \int_D |\nabla u|^2 \, dx - \ln \left(\int_D e^u \ dx\right).
$$ |
Prove there isn't a continuous surjection $f: [0, 1] \to \Bbb R$ (without compactness) | Let's cheat and only use that $[0,1]$ is closed and bounded:
For each $k\in\mathbb N$ pick $x_k\in[0,1]$ with $f(x_k)=k$.
Starting with $I_0=[0,1]$, which contains all $x_k$, we repeatedly split $I_n=[a_n,b_n]$ into the two subintervals $[a_n,\frac{a_n+b_n}2]$ and $[\frac{a_n+b_n}2,b_n]$; one of these contains infinitely many $x_k$, and we let $I_{n+1}$ be that interval.
Then the intersection $\bigcap_{n\in\mathbb N} I_n$ is a singleton set $\{a\}$, where $a\in[0,1]$. A suitable subsequence $x_{k_n}$ of the $x_k$ converges to $a$, hence by continuity $f(x_{k_n})\to f(a)$, which is absurd since $f(x_{k_n})=k_n\to\infty$. |
Riemann Sphere/Surfaces Pre-Requisites | try
J. Jost, Compact Riemann Surfaces, third edition, Springer Universitext. (I think it is a very good book.)
http://www.zbmath.org/?q=an:05044797
and
S. Donaldson Riemann Surfaces. (this is beautiful but it is more "concentrated")
http://www.zbmath.org/?q=an:05900831
Also you may find interesting this fantastic book about topology of manifolds.
Milnor, Topology from the Differentiable Viewpoint.
http://www.zbmath.org/?q=an:01950480 |
Show that rays of the form $(-\infty, a)$ and $(b, \infty)$ ; $a,b \in \mathbb R$, are a sub-basis for the standard topology on $\mathbb R$? | What you should check is the following:
Let's write $\mathcal T$ for the topology generated by the sets of the form $(a,b)$.
Given $(a,b)$ with $a<b$, you immediately see $(a,b)=(-\infty,b)\cap (a,+\infty)\in \mathcal T_S$, since a topology is closed under finite intersections. Hence $\mathcal T_S$ contains the basis of $\mathcal T$, hence
$$\mathcal T\subseteq\mathcal T_S.$$
The other inclusion is as follows: write $$(-\infty,a)=\bigcup_{n=0}^\infty(a-n-2,a-n)$$ and $$(b,+\infty)=\bigcup_{n=0}^\infty(b+n,b+n+2).$$ In this way you immediately see that $\mathcal T$ contains a basis of $\mathcal T_S$ (topologies are closed under arbitrary unions) hence $$\mathcal T_S\subseteq \mathcal T,$$ and you are done. |
Homeomorphism of Interior of Convex Polygon to Open Unit Disk | First, your quantity $b$ is not a constant, it depends on the ray $R$, and so ostensibly $b$ depends on $x$. But it should be clear that $b=b(\theta)$ really only depends on the angle $\theta$ of the ray $R$. Let me denote $R(\theta)$ as the ray of angle $\theta$ emanating from $c$, and $p(\theta)$ as the point on $R(\theta) \cap P$ furthest from $c$, so $b(\theta) = |p(\theta)-c|$.
The key is to prove that $b(\theta)$ is continuous at each $\theta_0$. Continuity of $f(x)$ away from $c$ then follows, because $\theta = \theta(x)$ has an obviously continuous formula near each $x \ne c$, involving the arctangent function.
By convexity, there exists a closed half-plane $H$ with boundary line $L$ containing $p(\theta_0)$ such that $P \cap H \subset L$. Also, $c \not\in H$ by non-degeneracy. Let $\overline H$ be the opposite half-plane to $H$ with the same boundary line $L$. It follows that for angles $\theta'$ near $\theta_0$ we have $p(\theta') \in \overline H$. From this it follows that $b(\theta)$ is upper semicontinuous.
Consider any angle $\theta'$ near $\theta_0$ (any $\theta'$ not pointing opposite $\theta_0$ will do). By convexity, for each $q \in \overline{p(\theta_0) p(\theta')}$ the interior of the segment $\overline{cq}$ must be contained in $P$. From this it follows that $b(\theta)$ is lower semicontinuous. |
Every proper maximal subgroup of a $p$-group $P$ is normal and has index $p$. | I think the proof by induction is a good idea. But perhaps you should induct on the exponent of the group. So, if $|P|=p^1,$ then the problem holds. Assume true for $|P|=p^n,$ then for $|P|=p^{n+1},$ we have that the center of $P$ (which we can call $Z$) is non-trivial (and also a $p$ group) so $Z$ contains an element of order $p$, call it $x$. But since $Z$ is characteristic and abelion, then the group $X=<x>$ is normal in $P$. Thus $|P/X|=p^{n-1}.$ So any maximal subgroup of the quotient group is normal by the inductive hypothesis. I think from here the rest of the proof is pretty straight forward. |
Probabilistic inequalities involving random variables on both sides? | As has already been stated, your approach does not work. Here is an idea: Since $h,g$ are independent (this has not directly been stated in the question but I assume it's true), the following should hold: \begin{align*}\textbf{P}(h\geq A+B)&=\textbf{P}(h\geq(1+e^{sh})(a+bg))=\textbf{P}(h\geq(1+e^{sh})(a+bg))\\&=\textbf{P}(h/(1+e^{sh})\geq(a+bg))=\textbf{E}[\textbf{P}(h/(1+e^{sh})\geq(a+bg)|h)]\\&=\textbf{E}[F_g((h/(1-e^{sh})-a)/b)]\end{align*} where $F_g$ is the CDF of $g$. This is the case if $b>0$. Cases $b=0$ and $b<0$ follow analogously. |
How to deal with $\bar{x}$ when solving complex-variable linear equation(s) of x? | There are two ways. One, which is what you thought of, is to replace $x_k$ with $a_k+ib_k$ where $a_k,b_k$ are real. (By the way, it's best to avoid $i$ as an index in this context.) Then separate the real and imaginary part in every equation. Now you have twice as many variables and twice as many equations as you started with. But everything is over the real scalars, and all $a_k,b_k$ are independent variables.
Another, which takes less re-writing, is to conjugate every equation and add the conjugated forms to the system. Then solve it, treating $x_k$ and $\bar x_k$ as independent variables. Magically, the solution will be such that the value of $\bar x_k$ is the conjugate of the value of $x_k$.
Random example: let's solve
$$\begin{split} z+(1-i)w +2i\bar z - \bar w &= 4+2i \\
iz + (2+3i)w - 2\bar z + 3i \bar w &= 3-i
\end{split} $$
The conjugated equations are
$$\begin{split} \bar z+(1+i)\bar w - 2i z - w &= 4 -2i \\
-i \bar z + (2-3i)\bar w - 2 z - 3i w &= 3+i
\end{split} $$
In matrix form, the entire $4\times 4$ system is
$$\begin{pmatrix} 1 & 1-i & 2i & -1 \\ i & 2+3i & -2 & 3i \\
-2i & -1 & 1 & 1+i \\ -2 & -3i & -i & 2-3i
\end{pmatrix} \vec x = \begin{pmatrix} 4+2i \\3-i \\ 4-2i \\3+i \end{pmatrix} $$
This is easier to type than it looks, because a lot of it is copy-paste-conjugate. Notice in particular the block structure of the matrix on the left. Solution (found numerically, with A\b in Scilab):
$$ \begin{pmatrix} - 4.0909091 + 2.4545455\,i \\
- 1.3636364 + 3.1818182\,i \\
- 4.0909091 - 2.4545455\,i \\
- 1.3636364 - 3.1818182\,i \end{pmatrix}$$
The top two rows are $z,w$; the bottom two are their conjugates. |
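For anyone without Scilab at hand, the same computation can be reproduced with NumPy; the matrix and right-hand side below are copied from the display above, with the unknowns ordered $z, w, \bar z, \bar w$:

```python
import numpy as np

# The 4x4 system above, unknowns ordered (z, w, zbar, wbar).
A = np.array([
    [1,    1 - 1j,  2j,     -1     ],
    [1j,   2 + 3j, -2,       3j    ],
    [-2j, -1,       1,       1 + 1j],
    [-2,  -3j,     -1j,      2 - 3j],
])
b = np.array([4 + 2j, 3 - 1j, 4 - 2j, 3 + 1j])

x = np.linalg.solve(A, b)
z, w, zbar, wbar = x

# Rows 3 and 4 of the solution really are the conjugates of rows 1 and 2:
assert np.isclose(zbar, np.conj(z)) and np.isclose(wbar, np.conj(w))
print(z, w)   # z ≈ -4.0909 + 2.4545j,  w ≈ -1.3636 + 3.1818j
```

(The exact values are $z=-\frac{45}{11}+\frac{27}{11}i$ and $w=-\frac{15}{11}+\frac{35}{11}i$.)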
Finding indefinite integral by partial fractions | HINT:
We need to use Partial Fraction Decomposition
Method $1:$
As $x^4-1=(x^2-1)(x^2+1)=(x-1)(x+1)(x^2+1),$
$$\text{Put }\frac1{x(x^4-1)}=\frac Ax+\frac B{x-1}+\frac C{x+1}+\frac {Dx+E}{x^2+1}$$
Method $2:$
$$I=\int \frac1{x(x^4-1)}dx=\int \frac{xdx}{x^2(x^4-1)} $$
Putting $x^2=y,2xdx=dy,$
$$I=\frac12\int \frac{dy}{y(y^2-1)}$$
$$\text{ Now, put }\frac1{y(y^2-1)}=\frac A y+\frac B{y-1}+\frac C{y+1}$$
Method $3:$
$$I=\int \frac1{x(x^4-1)}dx=\int \frac{x^3dx}{x^4(x^4-1)} $$
Putting $x^4=z,4x^3dx=dz,$
$$I=\frac14\int \frac{dz}{z(z-1)}$$
$$\text{ Now, put }\frac1{z(z-1)}=\frac Az+\frac B{z-1}$$
$$\text{ or by observation, }\frac1{z(z-1)}=\frac{z-(z-1)}{z(z-1)}=\frac1{z-1}-\frac1z$$
Observe that the last method is susceptible to generalization.
$$J=\int\frac{dx}{x(x^n-a)}=\int\frac{x^{n-1}dx}{x^n(x^n-a)}$$
Putting $x^n=u,nx^{n-1}dx=du,$
$$J=\frac1n\int \frac{du}{ u(u-a)}$$
$$\text{ and }\frac1{u(u-a)}=\frac1a\cdot\frac{u-(u-a)}{u(u-a)}=\frac1a\left(\frac1{u-a}-\frac1u\right)$$ |
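As a sanity check (not needed for the hint itself), SymPy will carry out these decompositions for you:

```python
import sympy as sp

x, z = sp.symbols('x z')

# The observation at the end of Method 3: 1/(z(z-1)) = 1/(z-1) - 1/z
assert sp.simplify(sp.apart(1/(z*(z - 1)), z) - (1/(z - 1) - 1/z)) == 0

# The five-term decomposition of Method 1, done automatically:
dec = sp.apart(1/(x*(x**4 - 1)), x)
print(dec)
assert sp.simplify(dec - 1/(x*(x**4 - 1))) == 0
```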
Suspicious corollary of Lusin’s Theorem | The flaw is in assuming $f|_C$ is continuous. As a friend pointed out to me, a function that is continuous on an exhaustive sequence of closed sets need not be continuous: you’d need to show that given $x \in C$ and $\epsilon>0$, we have $|f(x) - f(y)| < \epsilon$ for all $y$ sufficiently close to $x$ in the set $C$. But if we have $x \in C_n$, then we could have $|x-y|<\delta$ for small $\delta>0$ but $y \in C_m$, $m \neq n$. Since there are infinitely many $C_m$, this can happen infinitely often. |
Use Pohlig-Hellman to solve discrete log | I will assume you have access to a better description of the algorithm than that web site. If not, refer to:
A Course in Number Theory and Cryptography, 2nd Ed., N. Koblitz
An Introduction to Cryptography, R. A. Mollin
An Introduction to Mathematical Cryptography, J. Hoffstein, J. Pipher, J. H. Silverman
We are asked to use the Pohlig-Hellman algorithm to solve a Discrete Log Problem and find $x$ for:
$$7^x = 166 \pmod{433}$$
Using the notation:
$$g^x = h \pmod p$$
We have:
$$g = 7, h = 166, p = 433, N = p - 1 = \prod q_i^{e_i} = q_1^{e_1} \cdot q_2^{e_2} = 432 = 2^4 \cdot 3^3$$
We can summarize the necessary algorithm calculations in a handy table as:
$$\begin{array}{|c|c|c|c|c|} \hline
\large q & \large e & \large g^{(p-1)/q^e} & \large h^{(p-1)/q^e} & \mbox{Solve}~ \large \left(g^{(p-1)/q^e} \right)^x = ~ \large h^{(p-1)/q^e}~ \mbox{for} ~ \large x \\ \hline
2 & 4 & 265 & 250 & \mbox{Calculation I = ?}\\
\hline
3 & 3 & 374 & 335 & \mbox{Calculation II = ?}\\ \hline
\end{array}$$
Calculation I:
We want to solve:
$$x \equiv x_0 + x_1q + \ldots + x_{e-1}q^{e−1} \pmod {2^4} \equiv x_0 + 2x_1 + 4x_2 + 8x_3 \pmod {2^4}$$
Solve $(265)^x = 250 \pmod {433}$ for $x_0, x_1, x_2, x_3$.
$x_0: (265^{2^3})^{x_0} = 250^{2^3} \pmod {433} \implies (432)^{x_0} = 432 \implies x_0 = 1$
$x_1: (265^{2^3})^{x_1} = (250 \times 265^{-x_0})^{2^2} \pmod {433} = (250 \times 265^{-1})^{2^2} \pmod {433} = (250 \times 250)^{2^2} \pmod {433} \implies (432)^{x_1} = 432 \implies x_1 = 1$
$x_2: (265^{2^3})^{x_2} = (250 \times 265^{-x_0-2x_1})^{2^1} \pmod {433} = (250 \times 265^{-3})^{2^1} \pmod {433} = (250 \times 195)^{2^1} \pmod {433} \implies (432)^{x_2} = 432 \implies x_2 = 1$
$x_3: (265^{2^3})^{x_3} = (250 \times 265^{-x_0-2x_1-4x_2})^{2^0} \pmod {433} = (250 \times 265^{-7})^{2^0} \pmod {433} = (250 \times 168)^{2^0} \pmod {433} \implies (432)^{x_3} = 432 \implies x_3 = 1$
Thus, our first result is:
$$x \equiv x_0 + 2x_1 + 4x_2 + 8x_3 \pmod {2^4} \equiv 1 + 2 + 4 + 8 \pmod {2^4} \equiv 15 \pmod {2^4}$$
Calculation II:
We want to solve:
$$x \equiv x_0 + x_1q + \ldots + x_{e-1}q^{e−1} \pmod {3^3} \equiv x_0 + 3x_1 + 9x_2 \pmod {3^3}$$
Solve $(374)^x = 335 \pmod {433}$ for $x_0, x_1, x_2$.
$x_0: (374^{3^2})^{x_0} = 335^{3^2} \pmod {433} \implies (234)^{x_0} = 198 \implies x_0 = 2$. Note: you only needed to test $x_0 = 0, 1, 2$, so it is clear which one $x_0$ is.
$x_1: (374^{3^2})^{x_1} = (335 \times 374^{-x_0})^{3^1} \pmod {433} = (335 \times 374^{-2})^{3^1} \pmod {433} = (335 \times 51)^{3^1} \pmod {433} = 1 \pmod{433} \implies (234)^{x_1} = 1 \pmod {433} \implies x_1 = 0$
$x_2: (374^{3^2})^{x_2} = (335 \times 374^{-x_0-3x_1})^{3^0} \pmod {433} = (335 \times 374^{-2})^{3^0} \pmod {433} = (335 \times 51)^{3^0} \pmod {433} = 198 \pmod{433} \implies (234)^{x_2} = 198 \pmod {433} \implies x_2 = 2$. Note: you only needed to test $x_2 = 0, 1, 2$, so it is clear which one $x_2$ is.
Thus, our second result is:
$$x \equiv x_0 + 3x_1 + 9x_2 \pmod {3^3} \equiv 2 + 0 + 9 \times 2 \pmod {3^3} \equiv 20 \pmod {3^3}$$
Next, we have to use the Chinese Remainder Theorem to solve the simultaneous congruences:
$$x \equiv 15 \pmod {2^4} , ~~ x \equiv 20\pmod {3^3}$$
This yields:
$$x = 47$$
We check our answer by computing:
$$7^{47} = 166 \pmod {433} ~~ \checkmark$$ |
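Every number in the table and both sub-results can be checked mechanically with Python's built-in modular exponentiation; this is only a verification script, not the algorithm itself:

```python
p, g, h = 433, 7, 166

# Table entries g^((p-1)/q^e) and h^((p-1)/q^e) for q^e = 2^4 and 3^3:
assert pow(g, (p - 1) // 16, p) == 265 and pow(h, (p - 1) // 16, p) == 250
assert pow(g, (p - 1) // 27, p) == 374 and pow(h, (p - 1) // 27, p) == 335

# The CRT step, done by brute force over 0..431:
x = next(k for k in range(p - 1) if k % 16 == 15 and k % 27 == 20)
print(x)                    # 47
assert pow(g, x, p) == h    # 7^47 ≡ 166 (mod 433)
```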
Example for fiber sequence which is not exact int the following sense | I feel the bottom end of the usual exact sequence of a fibration is best understood by considering the family of exact sequences of a fibration of groupoids as explained in Topology and Groupoids Chapter 7. A reason for this is that a fibration $p: E \to B$ of spaces gives rise to a fibration of groupoids $\pi_1 p: \pi_1 E \to \pi_1 B$ of fundamental groupoids. |
Neighborhoods of $0$ in the co-countable topology on $\mathbb{R}$ | Suppose that there is a countable base for the topology, $(V_n)_{n\in\mathbb N}$. We can assume all $V_n$ non-empty. For each $n$, pick $x_n\in V_n$. Then $C=\{x_n\}_{n\in\mathbb N}$ is a closed set, i.e. $V=\mathbb R\setminus C$ is open and non-empty. But there is no $V_n\subset U$ (since $x_n\notin U$), which is a contradiction. |
Find multi-variable function that will make the statements true. | $f(x,y)= A_{produced}-A_{degrade}$
$A_{produced}(y)=\frac{n}{1+y}$
$A_{produced}(0)=3$
$3=\frac{n}{1+0}$
$n=3$
$A_{degrade}= x$
$f(x,y)= \frac{3}{1+y}-x$ |
Number of normal subgroups | Hint: Let $N\triangleleft \prod_{i=1}^n G$, and let $\pi_n$ be the projection onto the $n$th component.
Prove that $\pi_j(N)\triangleleft G_j$, and so it must be either trivial or the whole thing. Reduce to the case where all projections are surjective. Now consider $N\cap G_i$ (identifying $G_i$ with the subgroup of the product with trivial components in all other coordinates). This is also normal in $G_i$, hence is either trivial or the whole thing.
Prove it must be the whole thing. |
Basic notions on contraction mapping theorem | For question $1$, you've said nothing about a sequence of functions, which suggests that the standard interpretation of "uniform contraction applies.
In other words, $\|f(x)-f(y)\| \leq c \|x-y\|$ for some fixed $c<1$ and all $x,y \in [0,1]^n$.
For the hint: use the bounded derivative to show that your function is locally Lipschitz, with constant less than $1$ by the hint. Furthermore, since the set is compact, deduce that the function is globally Lipschitz, with constant less than $1$: you can choose a finite subcover, and hence a maximal constant $K < 1$ that bounds your function.
More formally, show that in some $\delta$-neighborhood of $x$ we have $\|f(x)-f(y)\| \leq K_{\delta}\|x-y\|$ with $K_{\delta} <1$ because of the condition on the derivative (think about the one-dimensional case to motivate the argument), and then use these neighborhoods to cover $[0,1]^n$ and invoke compactness.
Arranging $\mathbb N$ into a two-dimensional array to prove a countably infinite collection of countable sets is countable. | Since each $A_n$ is countable, there exists, for each $n$, a bijective function $f_n : \mathbb N \to A_n$.
Using these $f_n$'s, we can define a function $f : \mathbb N \times \mathbb N \to \cup_{n=1}^\infty A_n$, which sends each $(n, m) \in \mathbb N \times \mathbb N$ to the element $f_n(m) \in A_n\subset \bigcup_{n = 1}^\infty A_n$. Clearly, this function $f$ is surjective.
So we have exhibited a surjective function $f$ from the countable set $\mathbb N \times \mathbb N$ onto the set $\bigcup_{n = 1}^\infty A_n$. Hence $\bigcup_{n=1}^\infty A_n$ is countable.
[The set $\mathbb N \times \mathbb N$ in this answer is the "two-dimensional array" referred to in your hint.] |
Rewriting solution in terms of hyperbolic trigs | Use the fact that
$$e^{At}(B\cosh Ct + D\sinh Ct) = \tfrac{B+D}{2}e^{(A+C)t} + \tfrac{B-D}{2}e^{(A-C)t}
$$
Compare this to your expression. The coefficients give you
$$\left\{
\begin{array}{cc}\tfrac{B+D}{2}=\tfrac23 \\
\tfrac{B-D}{2}=\tfrac13
\end{array}\right.
$$
The exponents give you
$$\left\{
\begin{array}{cc}A+C=-4 \\
A-C=2
\end{array}\right.
$$
From these systems, you can easily see that
$$A=-1,B=1, C=-3, D=\tfrac13$$
so that you can write your solution as
$$e^{-t}(\cosh 3t - \tfrac13\sinh 3t)
$$ |
p-adic numbers and group characters | A character is a continuous group homomorphism $(G,+)\to\Bbb T$ (with $\Bbb T\subset\Bbb C^\times$ the circle group).
The dual group $\widehat{G}$ is the space of all characters on $G$ equipped with pointwise multiplication.
Let $\psi$ be a character of $\Bbb Z_p$. Since $\Bbb Z\subset\Bbb Z_p$ is dense, the values $\psi(\Bbb Z)$ determine $\psi$, and as $\Bbb Z=\langle 1\rangle$ this in turn means that $\psi$ is determined by $\psi(1)$. Since $p^r\to0$ in $\Bbb Z_p$, the values $\psi(p^r)=\psi(1)^{p^r}$ must converge to $\psi(0)=1$. If $\psi(1)$'s complex phase were not in $2\pi\Bbb Q$, then $\psi(1)^{p^r}$'s phase would never settle down - furthermore it must be $p$-torsion mod $2\pi$ to settle down, so $\psi(1)$ is some $p$-power root of unity. Thus $\widehat{\Bbb Z_p}\cong\Bbb Z(p^\infty)$ via $\psi\leftrightarrow\psi(1)$ (with $\Bbb Z(p^\infty)$ the Prüfer $p$-group).
(KCd has a nice related blurb on the character group of $\Bbb Q$, which motivates the adeles.)
Note this is analogous to $\Bbb Z$ and $\Bbb R/\Bbb Z$ being a dual pair, in view of the fact $\,\Bbb Z(p^\infty)\cong\Bbb Q_p/\Bbb Z_p$.
The group $\Bbb Z(p^\infty)$ can be thought of as $\Bbb Z[p^{-1}]/\Bbb Z$ under addition, so every element of the Prüfer group may be represented by the rationals expressible finitely as $0.\square\square\square\cdots\square$ in base $p$.
The topology of $\Bbb Z_p$ is that of a (countably infinite depth $p$-ary rooted) "tree": draw a point, then draw $p$ child nodes from that point, then $p$ child nodes from that point, and so on. The $p$-adic integers will be all "leaves" (infinite paths through the tree from the root). The metric balls are obtained by picking a node on the tree and collecting all leaves that run through that node.
(More on this in the note pictures of ultrametric spaces.)
An equivalent way of representing the topology of the $p$-adics is used here. Draw one big ball, then draw $p$ balls inside, then draw $p$ balls inside each of those, and so on indefinitely. To select an integer from $\Bbb Z_p$, make an infinite sequence of selections of these balls, one choice representing each digit of $\Bbb Z_p\ni x$'s $p$-adic expansion. In your picture, the gray nest of balls represents $\Bbb Z_3$.
A number of three-adic integers are chosen from the gray urn, and correspond to colored circles wreathed around the outside. Each one of these outer colored circles represents the Prüfer $3$-group, in particular each "leaf" is an element. The biggest leaves are $0/3,1/3,2/3$ and the second biggest leaves are $1/9,2/9,4/9,5/9,7/9,8/9$ (so basically the rationals $k/9$ with $0\le k<9$ not counting the ones already listed, $0/3,1/3,2/3$). The $r$th biggest leaves correspond the the rationals expressible as $0.\square\cdots\square$ with $r$ digits in base $p$, not already listed (i.e. with last square nonzero).
The idea of "duality" is not only do elements of $\Bbb Z(p^\infty)$ act as characters on $\Bbb Z_p$, but conversely elements of $\Bbb Z_p$ act as characters on $\Bbb Z(p^\infty)$. To each $p$-adic integer $x\in\Bbb Z_p$, the colors of the leaves $a\in\Bbb Z(p^\infty)$ (on the associated circle outside) correspond to the value of $x$ applied to $a$ as a character, which will always end up being a $p$-power root of unity (as seen above). |
Computing $\lim_{x \to 1}\frac{x^\frac{1}{5}-1}{x^\frac{1}{6} -1}$ | Just to avoid L' Hospital's rule, consider the following:
$$\frac{x^{1/5}-1}{x^{1/6}-1}=\frac{x^{6/30}-1}{x^{5/30}-1}=\frac{(x^{1/30})^6-1}{(x^{1/30})^5-1}=\frac{(x^{1/30}-1)(x^{5/30}+x^{4/30}+\dots+1)}{(x^{1/30}-1)(x^{4/30}+x^{3/30}+\dots+1)}$$
So:
$$\lim_{x\to1}\frac{x^{1/5}-1}{x^{1/6}-1}=\lim_{x\to1}\frac{(x^{1/30}-1)(x^{5/30}+x^{4/30}+\dots+1)}{(x^{1/30}-1)(x^{4/30}+x^{3/30}+\dots+1)}=\lim_{x\to1}\frac{x^{5/30}+x^{4/30}+\dots+1}{x^{4/30}+x^{3/30}+\dots+1}=\frac{6}{5}$$
Hope this provided an alternative! :) |
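For what it's worth, SymPy agrees with the algebra:

```python
import sympy as sp

x = sp.symbols('x')
L = sp.limit((x**sp.Rational(1, 5) - 1) / (x**sp.Rational(1, 6) - 1), x, 1)
print(L)   # 6/5
assert L == sp.Rational(6, 5)
```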
Unique solution for $x = y + Tx$ if $T(x_1,x_2,\dots) = (\frac{1}{2}x_2,\frac{1}{3}x_3,\dots)$ | You are correct and the proof looks sufficiently rigorous. However, $\|T\|_\text{op}$ can be computed exactly. That is, $\|T\|_\text{op}=\dfrac12$. To show this, let $z=(z_1,z_2,z_3,\ldots)\in\ell^\infty$. Then,
$$T(z)=\left(\frac{z_2}{2},\frac{z_3}{3},\frac{z_4}{4},\ldots\right)$$
so that
$$\big\|T(z)\big\|_{\infty}=\sup\left\{\frac{|z_k|}{k}\,\Big|\,k=2,3,4,\ldots\right\}\leq \sup\left\{\frac{\|z\|_\infty}{k}\,\Big|\,k=2,3,4,\ldots\right\}=\frac{\|z\|_\infty}{2}\,.$$
Note that the equality holds for $z=(0,1,0,0,0,\ldots)$. This implies $\|T\|_{\text{op}}= \dfrac{1}{2}$.
You can write an explicit solution $x\in\ell^\infty$ to $x=y+T(x)$. That is, $$x=(1-T)^{-1}y=\left(\sum_{k=1}^\infty\,\frac{y_k}{k!},\sum_{k=2}^\infty\,\frac{2!y_k}{k!},\sum_{k=3}^\infty\,\frac{3!y_k}{k!},\ldots\right)$$
Nonetheless, you did sufficient and good work. I was just making additional comments. |
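Here is a small numerical check of the explicit solution formula on sequences truncated to finite length (the tail terms decay factorially, so on the first $n-1$ components the truncated equation holds exactly):

```python
import math
import numpy as np

n = 40                      # truncation length
rng = np.random.default_rng(0)
y = rng.uniform(-1, 1, n)   # a bounded sequence standing in for y in l^infty

# Explicit solution (1-indexed): x_j = sum_{k >= j} j! * y_k / k!
x = np.array([sum(math.factorial(j) / math.factorial(k) * y[k - 1]
                  for k in range(j, n + 1)) for j in range(1, n + 1)])

# T shifts and scales: (T x)_j = x_{j+1} / (j+1)
Tx = x[1:] / np.arange(2, n + 1)

# x = y + T x holds componentwise (up to truncation at the last entry):
assert np.allclose(x[:-1], y[:-1] + Tx)
```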
Find a closed subspace $M$ and a non-closed subspace $N$ of a Banach space $\mathbf{B}$ such that $M\oplus N=\mathbf{B}$ | Big hint: If $\lambda$ is an unbounded linear functional on $\bf B$ and $N$ is the nullspace of $\lambda$ then $N$ is not closed, but now if $M=\dots$ then $M$ is closed and ${\bf B}=M\oplus N$.
So you only need to fill in the dots above and then show that if $\bf B$ is infinite-dimensional there exists an unbounded linear functional on $\bf B$ (hint: Hamel basis). |
The Image of Unitary Representation in the Space of Bimodules | I forgot to post one of the possible answers. So with some delay here it goes:
Let $\Delta: \mathcal{L} G \to \mathcal{L} G \otimes \mathcal{L} G$ be the natural comultiplication. We can associate to it a correspondence $H(\Delta) = _{N}H(\Delta)_{N \otimes N} \in \mathrm{Corr}(N, N \otimes N)$, where $N = \mathcal{L} G$ given by $L^2(G \times G)$ with the actions given by
$$
x \cdot \xi \cdot y = \Delta(x) \xi y.
$$
Similarly, given two correspondences $_NH_M$ and ${}_{M}{K}_R$, there is a module tensor product $_{N} {(H \otimes_{M} K)}_R$, see again (Section 13.2. http://www.math.ucla.edu/~popa/Books/IIun-v13.pdf). Then, the image of the arrow $(iv)$ satisfies that
$$
H(\varphi) \otimes_N H(\Delta) = H(\Delta) \otimes_{N \otimes N} \big( L^2(N) \otimes_{\mathbb{C}} H(\varphi) \big),
\tag{$\star$}
$$
where the equality doesn't mean isomorphism but the fact that the isomorphism is induced by the natural map. A reciprocal follows and the map $\varphi(x) = L^\ast_\xi \, x \, L_\xi$ associated to a correspondence satisfying $(\star)$ is a Fourier multiplier. |
Poisson distribution | Hint 1:
The probability of $k$ events happening in an interval of time $t$ is
$$
P(k)=\frac{(\lambda t)^k}{k!}e^{-\lambda t}
$$
where $\lambda$ is the event rate ($1800/600$ events per minute).
Hint 2:
The expected number of occurrences is the probability of one occurrence times the number of trials ($600$ one minute trials). |
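Putting the two hints together numerically (assuming the question's figures of $1800$ events in $600$ minutes, so $\lambda t = 3$ for a one-minute interval; the choice $k=1$ below is just for illustration):

```python
import math

lam = 1800 / 600   # event rate from Hint 1: 3 events per minute

def poisson_pmf(k, mean):
    """P(k events) in an interval whose expected count is `mean` = lam*t."""
    return mean**k / math.factorial(k) * math.exp(-mean)

# Probability of exactly one event in a one-minute interval (t = 1):
p1 = poisson_pmf(1, lam * 1)
print(p1)          # 3*e^(-3) ≈ 0.1494

# Hint 2: expected number of the 600 one-minute trials showing one event:
print(600 * p1)    # ≈ 89.6
```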
Prove that $\log_a x < x$ for $x > 0$ | One way to approach this question is to consider the minimum of $x - \log_a x$ on the interval $(0,\infty)$. For this we can compute the derivative, which is
$1 - 1/(x\log_e a)$. Thus the derivative is zero at a single point, namely $x = 1/\log_e a,$ and is negative to the left of that point and positive to the right. Thus $x - \log_a x$ decreases as $x$ approaches $1/\log_e a$ from the left, and then increases as we move away from this point to the right. Thus the minimum
value is achieved at $x = 1/\log_e a$. (Here I'm assuming that $a > 1$, so that $\log_e a > 0$; the analysis of the problem is a little different if $a < 1$, since then for $x < a < 1$, we have $\log_a x > 1 > x,$ and the statement is
not true.)
Now this value is equal to $1/\log_e a + (\log_e \log_e a)/\log_e a,$ and you want this to be $> 0 $. This will be true provided $a > e^{1/e}$ (as noted in the comments). |
How can I calculate a polynomial trend line where `y` always increases as `x` increases? | If your trend line must go through those points, then you're probably stuck. If you just want a "trend line" that gives the "general trend", then plot a literal line. That is, tell Excel to give you a linear fit, $y=mx+b$, no quadratic term. |
Problem book on differential forms wanted | You may like Chapter 10 "Differential Forms, Integral Formulae, De-Rham Cohomology" of Mishchenko, Solovyev, Fomenko "Problems in Differential Geometry and Topology" (English translation, Mir Publishers, 1985). I have not seen a newer version of it that may be even better: A.T. Fomenko, A.S. Mischenko, Y.P. Solovyev: Selected problems in differential geometry and topology. Cambridge Scientific Publishers, Cambridge, 2006
Update 1. There is also a very interesting collection of problems by Prof. W.-H. Steeb. See Ch.4 Differential Forms and Applications in "Problems and Solutions in Differential Geometry"
Update 2. A comprehensive set of problems on differential geometry can be found in Analysis and Algebra on Differentiable Manifolds: A Workbook for Students, by P. M. Gadea, J. Munoz Masqué, see Ch.2 "Tensor Fields and Differential Forms". |
A positional number system for enumerating fixed-size subsets? | The combinatorial number system that Henry already linked to is the best you can ask for. Asking for it to be a positional number system is contradictory to your requirement that only fixed size subsets are encoded. In a positional number system any string of legal digits can occur (corresponds to a different number), including those with repeated digits that you don't want to consider. You could of course take your $n$-bit numbers and consider only those with exactly $k$ bits equal to $1$, but that would have the same problem that lots of digit-sequences do no correspond to an appropriate subset.
An indirect encoding in which the digit string has to undergo further processing in order to be turned into a $k$-combination can be defined, but is not satisfactory. As an extreme case you could take ordinary decimal notation, and use the numeric value to look up a combination in a (lexicographic) list of all $k$-combinations; that does not seem to be what you are asking for. Note that even the factorial number system does not directly encode permutations, but just their Lehmer code; converting from these to permutations is quite straightforward, but not even computationally easy (i.e., doable in $O(n)$ time for permutations of $n$) using any simple data structure.
If you want to learn all about such stuff, and lots more, read volume 4A of Knuth's Art of Computer programming (and do all the exercises). |
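To make the combinatorial number system concrete, here is a short sketch of ranking and unranking $k$-subsets of $\{0,\dots,n-1\}$ (the induced order on subsets is colexicographic, and the ranks run through exactly $0,\dots,\binom nk-1$):

```python
from itertools import combinations
from math import comb

def rank(c):
    """Rank of a k-subset c (any order) in the combinatorial number
    system: N = C(c_1, 1) + C(c_2, 2) + ... + C(c_k, k), c_1 < ... < c_k."""
    return sum(comb(ci, i + 1) for i, ci in enumerate(sorted(c)))

def unrank(N, k):
    """Invert rank: greedily peel off the largest binomial that still fits."""
    c = []
    for i in range(k, 0, -1):
        ci = i - 1
        while comb(ci + 1, i) <= N:
            ci += 1
        N -= comb(ci, i)
        c.append(ci)
    return sorted(c)

# Round trip over all 3-subsets of {0,...,5}: ranks are exactly 0..C(6,3)-1.
ranks = sorted(rank(c) for c in combinations(range(6), 3))
assert ranks == list(range(comb(6, 3)))
assert all(unrank(rank(c), 3) == list(c) for c in combinations(range(6), 3))
```

The greedy loop in `unrank` peels off the largest binomial coefficient that still fits, which is exactly the defining property of the system.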
Examples and definition of cocompact objects | I prefer to call the quoted categorical concept of compactness strong compactness, since it is stronger than the usual notion of compactness for topological spaces.
We can express the definition of strong cocompactness of an object $D$ as the requirement that any morphism $\lim_{i \in I}C_i \to D$, with $I$ a cofiltered diagram, should factor through one of the limit projections $\pi_j: \lim_{i \in I}C_i \to C_j$. We see from this reformulation that if any subset/subspace of a set/space $X$ fails to be strongly cocompact, we can therefore deduce that $X$ is not strongly cocompact either; we use this to help us in our search.
In $\mathbf{Set}$, consider $\prod_{n \in \mathbb{N}} 2$ as a cofiltered limit of copies of finite products of the two-element set, $2$. Given any set $X$ with two or more distinct elements, pick two such elements and call them $0$ and $1$; label the elements of $2$ the same way. Then we have a function $f: \prod_{n \in \mathbb{N}} 2 \to X$ sending a sequence to $0$ if the sequence contains a $1$ and the first $1$ in the sequence is followed by a $0$; and to $1$ otherwise. By construction, this cannot factor through the projection from $\prod_{n \in \mathbb{N}} 2$ to any finite product. Clearly any morphism from a cofiltered limit to the one-element set $1$ factors through one (indeed any) of its projection maps. However, contrary to what Zhen Lin said in the comments, $0$ is not strongly cocompact in $\mathbf{Set}$, since if we consider the codirected limit of subsets $[n,\infty) \subseteq \mathbb{N}$, whose intersection is empty, we see that this is not preserved by $\mathrm{Hom}(-,\emptyset)$.
By passing across the forgetful-indiscrete adjunction $(U \dashv I)$ between $\mathbf{Top}$ and $\mathbf{Set}$, we can use the same argument as above to deduce that any indiscrete space with more than 2 points, or indeed any non-$T_0$ space, cannot be strongly cocompact in $\mathbf{Top}$. If the subspace of $X$ on the points $0$ and $1$ is homeomorphic to the Sierpinski space, such that $0$ is open, the argument still works, since the set $f^{-1}(0)$ of sequences described above is the union of opens $$\prod_{i=0}^{n-1} \{0\}_i \times \{1\}_n \times \{0\}_{n+1} \times \prod_{i=n+2}^{\infty} 2$$
in $\prod_{n \in \mathbb{N}} 2$. This union is disjoint and there can be arbitrarily many non-trivial projection indices, so $f$ cannot factor through any of the projections to finite products of $2$.
The argument fails more generally, though, since $f$ cannot be continuous for any choice of points of $X$ when $X$ is Hausdorff, say, since the subspace consisting of the points $\{0,1\}$ is necessarily discrete in that case, and $f^{-1}(1)$ is not open in $\prod_{n \in \mathbb{N}} 2$.
Instead, consider
$$Y_0 := \{(x,y) \in \mathbb{Z} \times \mathbb{Z} \mid x+y \geq 0\},$$
topologized with basic open sets $U_{a,b} := \{(x,y) \mid x \geq a, \, y \geq b\}$. Then we have a directed collection of subspaces,
$$Y_n := \{(x,y) \in Y_0 \mid 2^n \text{ divides } x+y \};$$
these subspaces are all connected, but their intersection is the subspace
$$Y_{\infty} := \{(x,y) \in \mathbb{Z} \times \mathbb{Z} \mid x+y = 0\},$$
homeomorphic to $\mathbb{N}$ with the discrete topology. Suppose $X$ has a pair of points $\{0,1\}$ such that the subspace on these points is discrete. Then we have a continuous map $Y_{\infty} \to X$ sending $(x,y)$ to $0$ if $y \geq x$ and to $1$ otherwise. This function cannot factor through any of the $Y_n$, though, since the inverse images of $0$ and $1$ would have to non-trivially disconnect $Y_n$, which is impossible.
In summary: Since we have exhausted the possibilities for two-point subspaces, and the argument for the emptyset carries across, we have shown that in $\mathbf{Top}$, just as in $\mathbf{Set}$, the singleton space is the only strongly cocompact space.
As a final observation, if $R$ is a functor having a left adjoint $L$ which preserves cofiltered limits (if $L$ has a further left adjoint, for example) then $R$ preserves strongly cocompact objects. Indeed, letting $X$ be strongly cocompact and $I$ be a cofiltered indexing diagram, we have:
$$\mathrm{Hom}(\lim_{i \in I}Y_i,R(X)) \cong \mathrm{Hom}(L(\lim_{i \in I}Y_i),X) \cong \mathrm{Hom}(\lim_{i \in I}L(Y_i),X)$$
$$\cong \mathrm{colim}_{i \in I}\mathrm{Hom}(L(Y_i),X) \cong \mathrm{colim}_{i \in I}\mathrm{Hom}(Y_i,R(X)),$$
so even without the explicit calculations in $\mathbf{Top}$, we could have deduced that the terminal space $1$ is strongly cocompact, since it is the image of the one-element set under the indiscrete functor $\mathbf{Set} \to \mathbf{Top}$ mentioned earlier. |
Eigenvalues of $AB$ and $BA$ | Hint:
If $\lambda$ is an eigenvalue of $AB$, that is, there exists $x\neq 0$ with
$$
AB(x)=\lambda x
$$
$BAB(x)=BA(Bx)=\lambda (Bx)$
Then $Bx$ is eigenvector for eigenvalue $\lambda$.
The only case we have to worry about is when $Bx=0$. In that case $\lambda x = AB(x) = 0$, which forces $\lambda = 0$ since $x\neq 0$; so for $\lambda\neq 0$ the vector $Bx$ is indeed a nonzero eigenvector.
Semisimple submodule | A submodule of a direct sum of modules is given by specifying a submodule of each direct summand (think about the composition of the inclusion $N\to M$ with the projection maps onto each summand of $M$). In this case, each summand has only two possible submodules. |
Prove that $f_{n}$ is not uniformly convergent | That's correct! For any uniform convergence to zero problem you can just find a point of each function that is a fixed height above zero, which means their supremum norms can never converge to zero. |
Chasing the diagram of a natural transformation - I'm lost, please help | Let's call the vertical map $i_V :V\to V^{**}$ (and similarly for $W$). By definition,
$$i_V( v ) (x^*)= x^*(v) \ \ \ \ \forall\ x^* \in V^*, v\in V.$$
Then
$$(f^{**} \circ i_V) (v) (y^*) = i_V (v) (f^* y^*) = f^*y^* (v) = y^*(fv)$$
and
$$(i_W \circ f)(v) (y^*) = i_W(fv) (y^*) = y^*(fv).$$
Thus the diagram commutes. |
$95\,\%$ confidence interval for geometric distribution | In frequentist inference, the proper confidence interval depends on what you consider your sample space: If you were to repeat your experiment/observational scheme, what would remain constant, what would be allowed to change. Specifically, did you always intend to observe N geometric RV's or was that just when the experiment ended? Are you always observing the same number of trials? If neither is controlled, then its hard to specify what type of process you've observed and how an interval would behave under repeated sampling.
Here are a couple suggestions:
Turn your observed geometric variables into an equivalent Bernoulli series and apply binomial inference procedures to it. This assumes you had no pre-specified number of successes you needed to achieve (i.e. the sample size was not specified in advance).
Apply negative binomial confidence procedures to it, with a known number of successes, i.e., r=N, and you want an interval for p given N. This assumes you pre-specified the sample size.
Perform bootstrap resampling on your observed data vector, each time re-calculating the estimate for p, and see the distribution of your estimator relative to the actual value of the estimator for the original sample. Look up bootstrap confidence intervals (percentile and BCa methods).
Matrix times Vector where the elements are vectors | $\def\o{{\tt1}}$The dimension $M$ must be factorable such that the $b$ vector can be arranged as a block vector of $p$ partitions each of length $N$, i.e. $M=pN.\;$ In your example $\;p=3,\;N=M/p$.
Using all-ones vectors $({\rm eg}\;\o_p\in{\mathbb R}^{p})$,
Kronecker products $(\otimes)$, the identity matrix $\,\left(I_N\in{\mathbb R}^{N\times N}\right)$, and vectorization (aka column stacking),
the desired operation can be written as
$$\eqalign{
&{\rm vec}\Big(\big(\o_p\otimes I_N\big)^T\left(A\odot b\o_L^T\right)\Big)\\
&\quad={\rm vec}\Big(\big(\o_p\otimes I_N\big)^T\,{\rm Diag}(b)\,A\Big) \\
}$$ |
Convergence in probability of X/(2^i) | Hint: use Chebychev's inequality:
$$P(|X_i/2^i| > \epsilon) \le \epsilon^{-2} E[(X_i/2^i)^2] = \frac{1}{2^{2i} \epsilon^2} E[X^2_i]$$ |
Why is a differential a dual basis vector (i.e. why $dx^i \frac{\partial}{\partial x^j} =\delta^i_j$)? | By definition, for any real-valued function $f$, the differential form $df$ is a map that takes a derivative operator to that derivative of $f$.
So $dx^i\frac{\partial}{\partial x^j}=\frac{\partial x^i}{\partial x^j}$ by definition. |
Prime elements in $\mathbb{Z}[[X]]$ | The ring morphism $\mathbb Z[[X]]\to \mathbb Z:\sum_{j=0}^\infty a_jX^j\to a_0$ has as kernel the ideal $\langle X\rangle $, hence that ideal is prime since $\mathbb Z[[X]]/\langle X\rangle \cong \mathbb Z$ is a domain.
But the ideal $\langle X\rangle$ being prime is equivalent to the element $X$ being prime, so that, yes, $X$ is prime. |
Prove that $\mathbb{Z}[i]/\langle 2+i \rangle = \{\overline{0}, \overline{1}, \overline{2}, \overline{3}, \overline{4}\}$. | The easiest way to see this is using the ring isomorphism theorems.
$$\frac{\mathbf Z[i]}{(2+i)}\stackrel{(1)}{\cong} \frac{\mathbf Z[X]/(X^2+1)}{(2+X)}\stackrel{(2)}{\cong} \frac{\mathbf{Z}[X]/(2+X)}{(X^2+1)}\stackrel{(3)}{\cong} \frac{\mathbf Z}{(5)}\cong \mathbf{F}_5$$
$(1)$: We can write $\mathbf{Z}[i]\cong \mathbf{Z}[X]/(X^2+1)$.
$(2)$: This is the third isomorphism theorem.
$(3)$: Here we send $X\mapsto -2$. |
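One can sanity-check the composite isomorphism numerically: sending $i\mapsto -2$ (the image of $X$ under step $(3)$) and reducing mod $5$ gives a ring homomorphism $\mathbf Z[i]\to\mathbf F_5$ that kills $2+i$. A small illustrative script (not part of the original answer):

```python
import random

def residue(a, b):
    # i = -2 (mod (2+i)), since (-2)^2 + 1 = 5 = (2+i)(2-i)
    return (a - 2 * b) % 5

# (2+i) itself maps to 0
assert residue(2, 1) == 0

# The map is a ring homomorphism Z[i] -> Z/5Z
rng = random.Random(1)
for _ in range(100):
    a, b, c, d = (rng.randrange(-20, 20) for _ in range(4))
    # additivity
    assert residue(a + c, b + d) == (residue(a, b) + residue(c, d)) % 5
    # multiplicativity: (a+bi)(c+di) = (ac - bd) + (ad + bc)i
    assert residue(a * c - b * d, a * d + b * c) == \
        (residue(a, b) * residue(c, d)) % 5

# Exactly five residue classes, represented by 0, 1, 2, 3, 4
assert {residue(a, b) for a in range(5) for b in range(5)} == set(range(5))
```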
Quick question about relation between Nullspace and Eigenspace | A matrix $A$ is singular, as you recalled, if there exists a $v\neq 0$ for which $Av=0=0\cdot v$. This shows it is the same as saying $v$ is an eigenvector for the eigenvalue $0$. Thus
$$A\enspace\text{singular}\iff \ker A \neq\{0\}\iff 0\enspace\text{is an eigenvalue for}\enspace A.$$ |
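A small numerical illustration of this equivalence — the matrix below is an arbitrary rank-deficient example:

```python
import numpy as np

# An arbitrary singular matrix: the third row is the sum of the first two
A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [5., 7., 9.]])

eigvals = np.linalg.eigvals(A)
# 0 is (numerically) an eigenvalue of A ...
assert min(abs(eigvals)) < 1e-10
# ... equivalently, the kernel is nontrivial: A v = 0 for v = (1, -2, 1) != 0
v = np.array([1., -2., 1.])
assert np.allclose(A @ v, 0)
```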
Normal Subgroups in $\mathbb{Z_9}$ | You can try to search for the following facts:
(i) If $G$ is abelian, then every subgroup of $G$ is normal.
(ii) Let $S$ be a subset of $G$, then $\langle S\rangle$ is a subgroup of $G$, namely the subgroup of $G$ generated by $S$. |
Solve the differential equation $xy'=y +\sqrt{y^2 -x^2}$ | To make explicit the method put forth by projectilemotion, with the substitution $v=\frac{y}{x}$, the equation becomes $$ y' = v + \sqrt{v^2-1} $$ Now, by the chain rule we have $$\frac{\text{d}v}{\text{d}x} = \frac{\text{d}}{\text{d}x} \frac{y}{x} = \frac{y'x - y}{x^2} \Rightarrow \frac{y' - v}{x} = v' \Rightarrow y' = v'x + v$$ Subbing this into the diffeq gives $$ v'x + v = v + \sqrt{v^2 - 1} $$ and then $$ \frac{\text{d}v}{\text{d}x} = \frac{\sqrt{v^2-1}}{x} $$ which is a separable differential equation. Solving it in the regular way, we get $$\frac{1}{\sqrt{v^2-1}} \text{d}v = \frac1x \text{d}x \Rightarrow \ln x + c = \ln \left(v + \sqrt{v^2-1}\right) $$ where I've just ignored the absolute value to get a solution for positive $x$ only, which usually suffices. You could work out the casework for the other case too. This gives $$ cx = v + \sqrt{v^2-1}$$ which gives $$(cx-v)^2 = v^2-1$$ so that $$ v = \frac{1}{2}\left(cx + \frac{1}{cx}\right)$$ Subbing back in our definition of $v$ gives us the family of solutions $$ \boxed{y = \frac{1}{2}cx^2 + \frac{1}{2c}}$$ where $c$ is an arbitrary constant. |
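As a spot-check of the boxed family — the values of $c$ and $x$ below are arbitrary, with $x$ large enough that $\frac{1}{2}cx^2\ge\frac{1}{2c}$ so the positive square-root branch applies:

```python
import math

def residual(c, x):
    """|x*y' - y - sqrt(y^2 - x^2)| for the candidate y = c x^2/2 + 1/(2c)."""
    y = 0.5 * c * x**2 + 0.5 / c
    yp = c * x                      # derivative of the candidate solution
    return abs(x * yp - y - math.sqrt(y**2 - x**2))

for c in (0.5, 1.0, 2.0):
    for x in (2.0, 3.0, 5.0):
        assert residual(c, x) < 1e-9
```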
Morphism from a scheme to the spectra of global section | I want to respond to one particular statement you made, but first I'll make things a bit more precise.
We have the adjunction $$\newcommand\Hom{\operatorname{Hom}}\newcommand\Spec{\operatorname{Spec}}\newcommand\calO{\mathcal{O}}\Hom(X,\Spec A)\simeq \Hom(A,\Gamma(X,\calO_X)),$$ and in particular this tells us that
$$\Hom(X,\Spec \Gamma(X,\calO_X))\simeq \Hom(\Gamma(X,\calO_X),\Gamma(X,\calO_X)).
$$
The identity map $\Gamma(X,\calO_X)\to\Gamma(X,\calO_X)$ therefore gives us a map of schemes $X\to \Spec\Gamma(X,\calO_X)$ as you noticed, and this is indeed the canonical map. However, you seem to be under the impression that this map is therefore an isomorphism.
In general this cannot possibly be true, since if the map were an isomorphism, $X$ would necessarily have to be affine. However, if $X$ is affine, this map is indeed an isomorphism.
Let's be a little more clear how this map works then.
In fact let's be a little more clear how it works in general. Let $\phi : A\to \Gamma(X,\calO_X)$ be a ring morphism. Let's try to understand the induced map $f : X\to \Spec A$.
Let $U$ be an affine open in $X$. Then we have the maps
$$\newcommand\toby\xrightarrow A\toby{\phi}\calO_X(X)\toby{r_{XU}} \calO_X(U).$$
Taking $\Spec$ of this sequence gives
$$U\toby{\Spec r_{XU}} \Spec \calO_X(X) \toby{\Spec \phi} \Spec A.$$
Gluing these maps together gives the desired map from $X$ to $\Spec A$.
Observe then that if $\phi=\newcommand\id{\operatorname{id}}\id$, that the map $X\to \Spec \Gamma(X,\calO_X)$ is the result of gluing the maps obtained from applying the Spec functor to the restrictions $r_{XU}:\calO_X(X)\to \calO_X(U)$.
If $X$ is affine, then we can take $U=X$, and there's no need to glue, the map $X\to \Spec\Gamma(X,\calO_X)$ is $\Spec \id=\id$. On the other hand, if $X$ is not affine, for example, if $X$ is a projective $k$-scheme with $k$ algebraically closed, then $\Gamma(X,\calO_X)=k$, and $X\to \Spec\Gamma(X,\calO_X)$ is the $k$-scheme structure morphism $X\to \Spec k$, which in general is clearly not an isomorphism. |
Help needed in proving a question in Sequences of real numbers (Convergent, bounded sequences) | Hint: here are four different methods to solve question b); you can choose the one you prefer.
First method: Prove that the sequence $(v_n)$ defined by $v_n = u_{n+1} - u_n$ is a geometric sequence. Deduce its general term, and then deduce the general term of $(u_n)$.
Second method: Prove that the sequence $(w_n)$ defined by $w_n = u_n + \frac{u_{n-1}}{2}$ is constant. Deduce that $(u_n)$ is arithmetico-geometric and find its general term.
Third method: Prove that the sequences $(a_n)$ and $(b_n)$ defined by $a_n= u_{2n}$ and $b_n =u_{2n+1}$ are adjacent. Then calculate the limit of $(a_n)$, noticing that $a_{n+1}-a_n = \frac{1}{4}(a_n - a_{n-1})$ and applying the same method as the first one.
Fourth method: Solve the recurrence directly to find an explicit general formula for $u_n$. |
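The original recurrence isn't quoted above, but the hints are all consistent with $u_{n+1}=\frac{1}{2}(u_n+u_{n-1})$; assuming that, a quick script can confirm methods 1 and 2 and the limit $\frac{1}{3}(u_0+2u_1)$:

```python
def u_seq(u0, u1, n_terms):
    """Iterate the (assumed) recurrence u_{n+1} = (u_n + u_{n-1}) / 2."""
    u = [u0, u1]
    for _ in range(n_terms - 2):
        u.append((u[-1] + u[-2]) / 2)
    return u

u = u_seq(0.0, 1.0, 60)

# Method 1: v_n = u_{n+1} - u_n is geometric with ratio -1/2
v = [b - a for a, b in zip(u, u[1:])]
assert all(abs(v[i + 1] + 0.5 * v[i]) < 1e-12 for i in range(20))

# Method 2: w_n = u_n + u_{n-1}/2 is constant
w = [u[i] + u[i - 1] / 2 for i in range(1, 30)]
assert all(abs(x - w[0]) < 1e-12 for x in w)

# The limit is (u_0 + 2 u_1) / 3
assert abs(u[-1] - (0.0 + 2 * 1.0) / 3) < 1e-12
```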
If a complex function $f$ is Entire, and there exist $k,R > 0$, $n \in N$ such that $|f(z)| > k|z|^n$ for all $|z| > R$, then f is a polynomial. | Such a function can have only finitely many zeros. Let's say the zeros are $\zeta_1,\dotsc, \zeta_r$, with multiplicities $\mu_1,\dotsc, \mu_r$. Then consider
$$P(z) = \prod_{\rho = 1}^r (z - \zeta_{\rho})^{\mu_{\rho}}$$
and
$$g(z) = \frac{P(z)}{f(z)}.$$
After removing the removable singularities at the $\zeta_{\rho}$, $g$ is an entire function, and $g$ has no zeros (since the zeros of $f$ precisely cancel the zeros of $P$). And $g$ satisfies a growth condition
$$\lvert g(z)\rvert \leqslant k'\cdot \lvert z\rvert^{\deg P - n}$$
for $\lvert z\rvert > R'$. So $g$ is a polynomial, and since it has no zeros, it is constant. That means $f(z) = c\cdot P(z)$. |
Backwards Euler Method/ Implicit Euler Scheme - Difference Equation + Stability | We have $$y(t_{n+1})=y(t_n)-hy(t_{n+1})$$
Hence,
$$(1+h)y(t_{n+1})=y(t_n)$$
$$y(t_{n+1}) = \frac1{1+h}y(t_n)$$
$$y(t_{n})=\left(\frac1{1+h} \right)^n y(t_0)$$
Note that we have $0<\frac1{1+h}<1$ for $h>0$, it is unconditionally stable. |
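A few lines of code confirming the closed form and the unconditional stability (the test problem $y'=-y$ comes from the answer; the step sizes are illustrative):

```python
def backward_euler(y0, h, n):
    """Implicit Euler for y' = -y: each step solves y_{n+1} = y_n - h*y_{n+1}."""
    y = y0
    for _ in range(n):
        y = y / (1 + h)
    return y

y0, h, n = 1.0, 0.1, 50
# matches the closed form y_n = (1/(1+h))^n * y_0
assert abs(backward_euler(y0, h, n) - y0 / (1 + h) ** n) < 1e-12
# unconditionally stable: the iterates decay even for a huge step size
assert abs(backward_euler(1.0, 100.0, 10)) < 1e-3
```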
Check if a homogeneous system of linear equations has infinite solutions (not solving it) without calculating a determinant. | For larger $n$, the Gaussian elimination method requires fewer operations. (For manual computation, Bareiss might be more attractive.)
For $n=4$, that makes little difference. Standard determinant computation involves $24$ terms that are products of $4$ factors. |
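For completeness, here is a sketch of the elimination-based test: row-reduce $A$ with partial pivoting and report a nontrivial kernel as soon as a pivot vanishes (the tolerance and the example matrices are my own choices):

```python
def has_nontrivial_solution(M, eps=1e-12):
    """A x = 0 has infinitely many solutions iff A is (numerically) singular."""
    A = [row[:] for row in M]
    n = len(A)
    for col in range(n):
        # pick the largest pivot in this column (partial pivoting)
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        if abs(A[piv][col]) < eps:
            return True          # rank deficient -> nontrivial kernel
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
    return False

singular = [[1.0, 2.0, 3.0, 4.0],
            [2.0, 4.0, 6.0, 8.0],   # twice row 1
            [0.0, 1.0, 0.0, 1.0],
            [1.0, 0.0, 1.0, 0.0]]
regular = [[2.0, 0.0, 0.0, 0.0],
           [0.0, 3.0, 0.0, 0.0],
           [0.0, 0.0, 4.0, 0.0],
           [0.0, 0.0, 0.0, 5.0]]
assert has_nontrivial_solution(singular)
assert not has_nontrivial_solution(regular)
```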
Annihilator ideal of image sheaf | As anticipated, this is indeed straightforward. The point is simply the following:
$f_*(\mathscr{O}_X)$ is $\mathscr{O}_Y$-coherent, implying it is an $\mathscr{O}_Y$-algebra. But what makes it into an $\mathscr{O}_Y$-algebra is precisely the ring-homomorphism $\tilde f$, by defining $ab :=\tilde f(a)b, a\in \mathscr{O}_Y, b \in f_*(\mathscr{O}_X)$. From this, the statement easily follows. |
Probability of getting a $7$ in Minesweeper | Here is the average number of $7$'s, which is slightly different.
There are $14\times28 =392$ places to put a $7$.
There are eight places to put the non-mine.
There are $9$ squares involved with the $7$, so $480-9=471$ other squares.
These other squares contain the $92$ other mines. So the number of grids with a $7$ at a particular spot is $$8\times {471\choose 92}.$$ That is out of a total of ${480\choose 99}$ different grids.
The chance of a $7$ in any one of those is
$$\frac{{8\choose1}{480-9\choose 92}}{480\choose99}\approx 0.00006928$$
so the average number of $7$s is $392$ times that, or approximately
$$0.02716$$
The average number of $8s$ would be
$$\frac{392{471\choose91}}{480\choose99}\approx 0.0008219$$ |
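These numbers are easy to reproduce with exact binomial coefficients (`math.comb`); the tolerances below just allow for the rounding in the quoted values:

```python
from math import comb

places = 14 * 28                                  # 392 interior positions
p7_one_spot = 8 * comb(471, 92) / comb(480, 99)   # a 7 at one particular spot
avg_sevens = places * p7_one_spot
avg_eights = places * comb(471, 91) / comb(480, 99)

assert abs(p7_one_spot - 0.00006928) < 1e-7
assert abs(avg_sevens - 0.02716) < 5e-4
assert abs(avg_eights - 0.0008219) < 5e-6
```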
How to solve this differential equation? $y=\left(y'\right)^2+4\left(y'\right)^3$ | Set $y'=p$, so that $y=p^2+4p^3$. Differentiating with respect to $x$ gives $p=y'=(2p+12p^2)p'$, so either $p=0$ or $p'=\frac{1}{2+12p}$. In the latter case, separating variables gives $2p+6p^2=x+C$, i.e. $p=\pm\sqrt{\frac{1}{6}x+C}-\frac{1}{6}$. Hence $y=C$ or $y=\pm\frac{\sqrt{6}}{9}(x+C)^{\frac{3}{2}}-\frac{1}{6}x+C_1$. |
Solving homogeneous linear congruence recurrence relation with variable coefficients | Note that you require $c_{0,1}\equiv 0$ and $c_{0,0}\not\equiv 0$ to solve for $a_n$ in terms of $a_{n-1},a_{n-2},\dots,a_{n-d}$ for all $n$.
If you really want such "linear algebra method": Note that the recurrence is the same after shift by $p$, so you can just do $\mathbf{x}_n=(a_{np+p-1},a_{np+p-2},\dots,a_{np})$, $\mathbf{x}_{n+1}=C^{(p)}\mathbf{x}_n$, $C^{(p)}=C_{p-1}C_{p-2}C_{p-3}\dots C_0$ and recover a constant-coefficient recurrence (assuming $p\geq d$, otherwise you need to use $kp$ where $kp\geq d$). Of course this traded recurrence of larger order, so if $p\gg d$ this is impractical. |
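To make the construction concrete, here is a toy period-$2$, order-$2$ instance (the modulus and coefficients are invented for illustration): the product $C^{(2)}=C_1C_0$ advances the state vector two steps at a time, recovering a constant-coefficient recurrence.

```python
import numpy as np

m = 7                      # modulus
# order-2 recurrence a_n = c1(n) a_{n-1} + c2(n) a_{n-2} (mod m),
# with coefficients periodic of period p = 2
C0 = np.array([[2, 3],     # companion matrix used at even steps
               [1, 0]])
C1 = np.array([[5, 1],     # companion matrix used at odd steps
               [1, 0]])

def step(n_steps, x0):
    """Apply the variable-coefficient recurrence n_steps times."""
    x = np.array(x0)
    for k in range(n_steps):
        C = C1 if k % 2 else C0
        x = (C @ x) % m
    return x

# Lifted constant-coefficient recurrence: x_{n+1} = C^(2) x_n, C^(2) = C1 C0
C2 = (C1 @ C0) % m
x = np.array([1, 1])
for _ in range(5):
    x = (C2 @ x) % m
assert np.array_equal(x, step(10, [1, 1]))
```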
Squaring a complex exponential that represents a real number | The problem lies in the fact that you cannot deduce from $a=\operatorname{Re}(z)$ that $a^2=\operatorname{Re}(z^2)$, which is what you did. For instance, $1=\operatorname{Re}(1+i)$, but $1^2\ne\operatorname{Re}\bigl((1+i)^2\bigr)=0$. |
How to show $A=\{(x,y)\in R^2:4x^2+9y^2=36\}$ is path connected and compact? | Find a path between two points, and you will have shown that the set is path connected.
Note that the parameterization
$$
x = 3 \cos t; \quad y = 2 \sin t; \quad t \in \mathbb{R}
$$
yields a path through the entire space. Use this to "connect" any two points.
Alternatively: $A$ is the continuous image of $S^1$ under some map (which you should find). Since continuous maps preserve path connectedness, it suffices to note (or show) that $S^1$ is path connected. |
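Both observations are easy to check numerically: the parameterization stays on $A$, and composing it with a line segment in $t$ connects any two points of $A$.

```python
import math

def on_ellipse(x, y, tol=1e-9):
    return abs(4 * x**2 + 9 * y**2 - 36) < tol

def gamma(t):
    return (3 * math.cos(t), 2 * math.sin(t))

# the parameterization lands on A for every t ...
for k in range(100):
    assert on_ellipse(*gamma(0.0628 * k))

# ... so a path from gamma(t0) to gamma(t1) is s -> gamma((1-s)*t0 + s*t1)
t0, t1 = 0.3, 4.0
path = lambda s: gamma((1 - s) * t0 + s * t1)
assert path(0.0) == gamma(t0) and path(1.0) == gamma(t1)
```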
How to transfer linear map into a matrix? | If $M = (m_{ij})$ is the $m \times n$ matrix of our map $f:\Bbb R^n \to \Bbb R^m$ relative to the standard bases $\{e_1,\dots,e_n\}$, then the coefficients $m_{ij}$ satisfy
$$
f(e_j) = \sum_{i=1}^m m_{ij}e_i
$$
For instance: in order to find column $2$ of the matrix $M$ of your map (for which $m = 1$), we need a coefficient $m_{12}$ such that
$$
f(e_2) = m_{12}
$$
That is, the second column of your matrix $M$ will simply be the entry $m_{12} = f(e_2) = \langle a, e_2 \rangle = a_2$. |
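The column-by-column recipe translates directly to code; the vector $a$ below is an arbitrary example for the map $f(x)=\langle a,x\rangle$:

```python
import numpy as np

def matrix_of(f, n, m):
    """Matrix of a linear map f: R^n -> R^m in the standard bases:
    column j is f(e_j)."""
    M = np.zeros((m, n))
    for j in range(n):
        e_j = np.zeros(n)
        e_j[j] = 1.0
        M[:, j] = f(e_j)
    return M

a = np.array([2.0, -1.0, 4.0])
f = lambda x: np.array([a @ x])      # f(x) = <a, x>, a map R^3 -> R^1

M = matrix_of(f, 3, 1)
assert np.allclose(M, [[2.0, -1.0, 4.0]])
# and M reproduces f on arbitrary vectors
x = np.array([1.0, 2.0, 3.0])
assert np.allclose(M @ x, f(x))
```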
Equation of a phase curve | Maybe you can eliminate $e^{-t}$ instead:
$$
\frac{y}{c_2} = e^{-3t} = \left(e^{-t}\right)^3 = \left(\frac{x}{c_1}\right)^3
$$
By plugging in the initial conditions $(x(0),y(0)) = (x_0,y_0)$, we see $c_1 = x_0$ and $c_2 = y_0$.
Thus $y = \frac{y_0}{x_0^3} x^3$ is an equation for the solution curve. If $x_0$ is positive, the actual orbit is the right half of the curve (excluding the origin), and if $x_0$ is negative, the left half.
If $x_0 = 0$ and $y_0 \neq 0$, the equations are $x = 0$, $y=y_0 e^{-3t}$. The orbit is the positive $y$-axis if $y_0 > 0$, and the negative $y$-axis if $y_0 < 0$.
If $y_0 = 0$ and $x_0 \neq 0$, the equations are $x = x_0 e^{-t}$. The orbit is the positive $x$-axis if $x_0 > 0$, and the negative $x$-axis if $x_0 < 0$.
If both $x_0=0$ and $y_0 = 0$, the orbit is the origin.
Your solution
$$\frac{dy}{dx}= \frac{-3y}{-x} = c \implies y = \frac{c}{3} x$$
does not make sense to me. How do you know $\frac{3y}{x}$ is constant? From the differential equation with $y$ and $x$, I would separate the variables:
\begin{align*}
\frac{dy}{y} &= \frac{3\,dx}{x}
\end{align*}
Integrating each side, we have
\begin{align*}
\ln y &= 3 \ln x + C \\\implies
y &= e^{3\ln x + C} = e^C e^{3\ln x} = C' x^3
\end{align*}
We have assumed $x$ and $y$ were positive, but those other cases can be accounted for. |
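A quick check that $y/x^3$ really is constant along the solutions $x=x_0e^{-t}$, $y=y_0e^{-3t}$ (the initial values are arbitrary):

```python
import math

x0, y0 = 2.0, 5.0
c = y0 / x0**3

for t in (0.0, 0.5, 1.3, 2.7):
    x = x0 * math.exp(-t)
    y = y0 * math.exp(-3 * t)
    # along the orbit, y = (y0/x0^3) x^3 holds identically
    assert abs(y - c * x**3) < 1e-9
```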
Countability of Topologies and Product Topology | The open unit disc in $\mathbb{R}^2$ is an example, as is $\{(x,y): x \neq y\}$. The plane has the product topology with respect to the usual topology on $\mathbb{R}$. These sets are open but not Cartesian products of two sets.
The product of two finite topologies is always finite: it has a finite base, and we can only form finitely many unions from it.
It surely is uncountable. Prove it. The standard base is countable, but in most cases there will be uncountably many different unions formed from it.
It can and often will be uncountable, e.g. in the case of the Cantor cube $\{0,1\}^\mathbb{N}$. |
Roots of a polynomial with irrational coefficients | The roots of a polynomial with algebraic coefficients are all algebraic, and a monic polynomial whose roots are all algebraic has algebraic coefficients.
So a monic polynomial with some transcendental coefficient must have at least one transcendental root (and vice versa), but it can also have algebraic roots (for example, $0$ is a non transcendental root of $X^2- \pi X = 0$). |
How to find limit of the sequence given by $f_{n+1}=\frac{3}{7}f_n+8$ | $\mu_0$ is probably $f_0$, the first term of the sequence. As already said, if the limit exists ($\exists \lim_{n \to \infty}f_n = F$), then we can write $$\lim_{n \to \infty}f_{n+1} = {3 \over 7}\lim_{n \to \infty}f_n + 8 \\
F = {3 \over 7}F + 8 \\
F = 14$$
The problem is with if part. There are several ways to prove that $f_n$ converges for any $f_0$, but the simplest would be to take
$$g_n = f_n-14, g_0 = f_0-14 = -28 \\
g_{n+1}+14 = {3 \over 7}(g_n+14) + 8 \\
g_{n+1} = {3 \over 7}g_n = ({3 \over 7})^2g_{n-1} = ... = ({3 \over 7})^{n+1}g_0$$
Thus, $\exists \lim_{n \to \infty}g_n = 0$ and $\exists \lim_{n \to \infty}f_n = 14$. |
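Iterating the recurrence from a few arbitrary starting values confirms that everything is pulled to the fixed point $F=14$:

```python
def iterate(f0, n):
    """Apply f_{n+1} = (3/7) f_n + 8 a total of n times."""
    f = f0
    for _ in range(n):
        f = (3 / 7) * f + 8
    return f

# The fixed point F = (3/7)F + 8 gives F = 14, for any starting value
for f0 in (-100.0, 0.0, 3.5, 1e6):
    assert abs(iterate(f0, 200) - 14.0) < 1e-9
```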
Is there an inverse function for $h=(-6y+100)\sin(\arctan(y/100))$ | Hint: Notice that $$\sin (\arctan u)={u\over\sqrt{1+u^2}}$$Can you finish now? |
Algorithmic question regarding permutations | Well, you're
assigning to the variable $i$ the values from $1$ to $n$ -> $n$ steps
assigning to the variable $\pi(i)$ the value $i$ for all $i$ -> $n$ steps
assigning to the variable $i$ the value $n-1$ -> $1$ step
so $2n+1$ assignments in total |