title | upvoted_answer
---|---|
Density of gradient of restricted-function in $L^p$. | This is not true: We have $\nabla \times v=0$ for all $v\in V$. So these functions in $V$ cannot approximate non-curl-free functions. |
$[G:Z(G)] = n$ prove that each conjugacy class has at most n elements | OK, so from looking at the comments:
$Z(G)\le C(x_i)$ implies that $[G:C(x_i)]\le[G:Z(G)]$,
so because $[G:Z(G)]=n$ we get $[G:C(x_i)]\le n$.
Since the conjugacy class of $x_i$ has exactly $[G:C(x_i)]$ elements, each conjugacy class has at most $n$ elements.
Thanks N.S. for the help |
Automorphisms of a structure as a powerful tool for studying the structure | Lascar says often, not always. The philosophy -- which you correctly link back to Klein's Erlangen Program; I think you would do well to study that and read what Klein has to say about it -- certainly works better for structures which have large, rich automorphism groups. Klein's original motivation was the observation that many of the features of a geometry are determined by its group of isometries, which is usually a Lie group of positive dimension. Geometries with nonisomorphic isometry groups are profoundly different. Geometries with isomorphic isometry groups often have some deep commonalities.
I think it is too broad a question to ask for anything like a comprehensive description of the structures for which the automorphism group plays a key role. Here are two important examples:
The classification of fiber bundles on a (sufficiently nice) topological space depends only on the automorphism group of the fiber, not on the homeomorphism type of the fiber. This leads to a description/construction of fiber bundles on a space in terms of $1$-cocycles with coefficients in the automorphism group. Also the notion of "reduction of the structure group" is a key one in the study of fiber bundles. E.g. a paracompact differentiable manifold is orientable if the structure group of the tangent bundle can be reduced to $\operatorname{GL}_n(\mathbb{R})^+$. It admits a Riemannian metric iff the structure group can be reduced to $O_n(\mathbb{R})$. In fact the latter is always the case, and that can be understood (among other ways) by understanding the way that $O_n(\mathbb{R})$ sits inside $\operatorname{GL}_n(\mathbb{R})$.
A closely related principle is the principle of Galois descent: if $V_{/K}$ is an algebraic structure defined over a field (examples: a $K$-algebra, a variety, a group variety), then the set of twisted forms of $V_{/K}$ -- namely those objects $W_{/K}$ which become isomorphic to $V$ over the algebraic closure of $K$ -- are again parameterized by
$1$-cocycles with coefficients in the automorphism group of $V$. In particular, when two objects have isomorphic automorphism groups, there is a bijection between their sets of twisted forms. Here the case of a trivial automorphism group is not a trivial case: when the automorphism group is trivial, there are no twisted forms other than $V$.
One could go on forever in the above manner, so I'll limit myself now to graph theory.
For every finite graph $G$, there is a nonisomorphic finite graph $G'$ such that $\operatorname{Aut}(G) \cong \operatorname{Aut}(G')$.
This follows easily from the fact that you mentioned: for all sufficiently large $n$, there is a connected graph $R_n$ on $n$ vertices with trivial automorphism group (and indeed the proportion of such graphs among all connected graphs approaches $1$ as $n$ approaches infinity). Then given a graph $G$ with $n$ vertices, $G \coprod R_m$ for sufficiently large $m > n$ has the same automorphism group as $G$: since $R_m$ has more vertices than $G$, it is isomorphic to no connected component of $G$, so $\operatorname{Aut}(G \coprod R_m) \cong \operatorname{Aut}(G) \times \operatorname{Aut}(R_m) \cong \operatorname{Aut}(G)$.
Let $G$ be a nonempty simple graph on $n$ vertices with automorphism group $S_n$. Then $G = K_n$ is the complete graph on $n$ vertices.
We may assume $n \geq 2$, and then $S_n$ acts doubly transitively on $\{1,\ldots,n\}$, in other words, given any two pairs of distinct vertices, there is a graph automorphism taking one to the other. Since the graph is nonempty, there are two vertices $v_1$ and $v_2$ connected by an edge. It follows that every pair of vertices is connected by an edge.
By the way, the fact that most finite graphs have trivial automorphism group only means that one cannot use the automorphism group as a tool to classify finite graphs (which is a rather hopeless problem anyway). It certainly does not mean that the automorphism group of a graph is not a highly interesting and useful invariant. In practice, the individual graphs of most interest tend to be those with a large, interesting automorphism group (and recall that by a theorem of Frucht, every group occurs up to isomorphism as an automorphism group of some graph). For instance Cayley graphs of groups are extremely important, and there the automorphism group acts simply transitively. Conversely, a celebrated theorem of G. Sabidussi says that any graph which admits a simply transitive subgroup $G$ of automorphisms must be a Cayley graph on the group $G$. So the automorphism group of a graph can tell us important information about the graph, especially when regarded as a permutation group on the set of vertices rather than as an abstract group. |
Continuity from a topology to itself | Let $a,b\in \Bbb R^+.$ Then $a\in (-\infty, a+b)\in T.$ If $f$ were continuous at $a$ then for some $t\in T$ we would have $a\in t$ and $\{f(x):x\in t\} \subset (-\infty,a+b).$
But if $a\in t\in T$ then for some $c\in \Bbb R^+$ we have $t\supset (-\infty,a+c)\supset (-\infty,0)$ so $$\{f(x):x\in t\}\supset \{f(x):x<0\}=\{x^2:x<0\}=\Bbb R^+\not \subset (-\infty,a+b).$$ |
$C^{\infty}$ path-connected | It is $C^{\infty}$ path connected. After your polygon-path is constructed, the only "bad" points are the vertices of the path. Now choose a small ball around each vertex. All you need to do is construct a $C^{\infty}$ path inside the ball that connects the two given points $C^{\infty}$-smoothly. This can be done (for the formula, use a cube rather than a ball; then you can construct your function coordinate-wise, because all you need is a $C^{\infty}$ function that connects to linear functions). |
Proving simple form of Picard Existence Theorem | You know exactly what the right-hand side is, so you have an explicit bound on $\|Tf_1-Tf_2\|$. However, you still need $c<1$ in order to have a contraction. This means, in some sense, that the epsilon-interval had already been adjusted from the beginning in the exercise. |
Define the following distribution in $\mathbb{R}^2$ | OK I will use the heuristic integral notation for the pairing of a distribution with a test function, i.e., what I said I would avoid in https://mathoverflow.net/questions/72450/can-distribution-theory-be-developed-riemann-free
Here $f(x)$ is a generalized function of the single variable $x\in\mathbb{R}$.
Now the quantity $\langle F,\phi\rangle$ defined by $\langle f,\phi_y\rangle$ should be
$$
\int_{\mathbb{R}} f(x)\phi(x,y)\ dx\ .
$$
This is not a number, but a function of the variable $y$ which has not been "integrated over". This function belongs to $\mathscr{D}(\mathbb{R})$, i.e., is a smooth compactly supported function of $y$. This is part of the statement of Fubini's Theorem for distributions, in the $\mathscr{D},\mathscr{D}'$ variant (see my answer to the above MO question for the $\mathscr{S},\mathscr{S}'$ variant).
Now what lcv proposes makes perfect sense but is something different: the application of the distribution $f\otimes 1\in\mathscr{D}'(\mathbb{R}^2)$ on the test function $\phi\in\mathscr{D}(\mathbb{R}^2)$ which indeed produces a number. Heuristically this is
$$
\int_{\mathbb{R}^2} f(x)\phi(x,y)\ dx\, dy
$$
where this time $y$ has been integrated over. |
Assuming a particle is at rest/stationary | You are correct that the particle is at rest at point $C$; the velocity there is $0$. Otherwise you don't have enough information to solve the problem. I think you have an error in your formula, in the term on the right-hand side: there should not be $1.2$ in the denominator. The work done by the elastic force is $k(AC-AB)^2/2$. |
Simple matrix reasoning | If $Ax=(1,1,1)$ doesn't have any solutions, then $Ax=(0,0,0)$ does NOT have a unique solution. |
Problem with definition of covering space | No, there is no problem and this is why: for every $x\in X$ we need a neighborhood $U$ such that $p^{-1}(U)=\sqcup_{i\in I}\widetilde{U}_i$ for some open sets $\widetilde U_i$ in $\widetilde X$ and some index set $I$. In the case that $p^{-1}(U)$ is empty, we are taking the index set $I$ to be the empty set, rather than having $I$ contain some elements and each $\widetilde U_i$ be the empty set. It's therefore vacuous that each $\widetilde U_i$ map homeomorphically onto $U$, since there are no $i\in I$ to begin with.
As for the second part of your question, you're right that saying "for every $x\in p(\widetilde X)$" would be just as correct. It really just doesn't matter and I guess in the author's head, saying "for every $x\in X$" looks nicer. |
How to show that the random variable is finite almost everywhere? | Since $T$ is nonnegative, it is sufficient to show that it has finite expectation. Using the tail formula:
$\mathbb{E}[T]=\sum_{k=0}^\infty \mathbb{P}(T>k)=\sum_{k=0}^\infty \frac{1}{k!}<\infty$ |
$\mathbb{C} \cup \{ \infty \}$ and $\mathbb{R} \cup \{ -\infty, +\infty \}$ | "Complex infinity" is near to numbers of large magnitude in $\mathbb{C}$. The extended complex numbers $\mathbb{C}\cup\{\infty\}$ can be visualised as a sphere (called the Riemann Sphere) with the two poles being $0$ and $\infty$. Say $\infty$ is the north pole. Then complex numbers of large magnitude are near to the north pole, including large negative reals. |
Find all real solutions to $8x^3+27=0$ | You are working too hard. Note that
$$8x^3+27=0\iff x^3=\frac{-27}{8}\iff x=-\sqrt[3]{27/8}\iff x=-\frac{3}{2}$$
and so the only real solution is $x=-3/2$. |
Let $(R,M)$ be a local ring. Suppose that $R$ is noetherian and let $I,J \unlhd R$ such that $J \subseteq I$. Prove that the following are equivalent. | Let $R$ be a local ring. We denote by $\mu_R(M)$ the minimal number of generators of a finitely generated $R$-module. This turns out to be $\dim_{R/\mathfrak m}M/\mathfrak mM$, where $\mathfrak m$ denotes the maximal ideal of $R$. If $\mathfrak a$ is an ideal of $R$, $\mathfrak a\subseteq\mathfrak m$, then $\mu_R(M)=\mu_{R/\mathfrak a}(M/\mathfrak aM)$ (why?).
1) $\Rightarrow$ 2) Let $x_1\dots,x_m$ be a minimal system of generators in $J$. This is equivalent to $\overline x_1,\dots,\overline x_m$ is an $R/\mathfrak m$-basis in $J/\mathfrak mJ$. Now define $\varphi:J/\mathfrak mJ\to I/\mathfrak mI$ by $\varphi(\overline a)=\widehat a$. Since there is $y_1,\dots,y_n$ in $I$ such that $x_1\dots,x_m,y_1,\dots,y_n$ is a minimal system of generators for $I$ we get that $\widehat x_1\dots,\widehat x_m,\widehat y_1,\dots,\widehat y_n$ is an $R/\mathfrak m$-basis in $I/\mathfrak mI$. This shows that $\varphi$ is injective (why?), so $\ker\varphi=(0)$, that is, $J\cap\mathfrak mI=\mathfrak mJ$.
2) $\Rightarrow$ 3) Set $L=\mathfrak m$.
3) $\Rightarrow$ 4) Let $x_1,\dots,x_m$ be a minimal system of generators in $J$. Let $y_1,\dots,y_n\in I$ be such that their classes form an $R/\mathfrak a$-basis in $I/(\mathfrak aI+J)=\dfrac{I/J}{\mathfrak a(I/J)}$. They are also a minimal system of generators for $I/J$ and then $x_1\dots,x_m,y_1,\dots,y_n$ is a system of generators for $I$. Let's show that this is minimal.
Since $\mathfrak aI\cap J=\mathfrak aJ$ we have a short exact sequence $$0\to J/\mathfrak aJ\to I/\mathfrak aI\to I/(\mathfrak aI+J)\to 0.$$ But $I/(\mathfrak aI+J)$ is a free $R/\mathfrak a$-module, so the sequence is split. Then $$\mu_{R/\mathfrak a}(I/\mathfrak aI)=\mu_{R/\mathfrak a}(J/\mathfrak aJ)+\mu_{R/\mathfrak a}(I/(\mathfrak aI+J)),$$ so $\mu_R(I)=\mu_R(J)+\mu_R(I/J)$, and we are done.
4) $\Rightarrow$ 1) Suppose $\mu(I)=\mu(J)+\mu(I/J)$ and let $x_1,\dots,x_m$ be a minimal system of generators for $J$ and $\overline y_1,\dots,\overline y_n$ a minimal system of generators for $I/J$. Then $x_1,\dots,x_m,y_1,\dots,y_n$ is a system of generators for $I$ and it is minimal since $\mu(I)=m+n$. |
How can I solve $\displaystyle{\lim_{x \to \infty} \frac{1}{(1+\frac{k}{x})^x}}$? | Note that I'll be assuming $k > 0$.
Observe that
$$\begin{align}\lim_{x\to\infty}\left(1 + \dfrac{k}{x}\right)^x &= \lim_{x\to\infty}\left(1 + \dfrac{k}{x}\right)^{kx/k}\\
&\overset{y=x/k}{=} \lim_{y\to\infty}\left(1 + \dfrac{1}{y}\right)^{ky}\\
&= \lim_{y\to\infty}\left(\left(1 + \dfrac{1}{y}\right)^{y}\right)^k\\
&= \left(\lim_{y\to\infty}\left(1 + \dfrac{1}{y}\right)^{y}\right)^k\\
&= e^k
\end{align}$$
The interchange of $(\cdot)^k$ and $\lim$ was possible because the function $x\mapsto x^k$ is continuous.
The answer to your question should now be clear. (The function $x \mapsto 1/x$ is continuous on $(0, \infty)$ and thus, you should get $e^{-k}$.)
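As a quick numerical sanity check (not part of the argument; $k=3$ is an arbitrary choice):
```python
# 1/(1 + k/x)^x should approach e^{-k} as x grows. k = 3 is arbitrary.
import math

k = 3.0
for x in (1e2, 1e4, 1e6):
    print(x, 1 / (1 + k / x) ** x, math.exp(-k))
```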
Edit: Adding the case $k \le 0$ as well.
Nothing needs to be said for $k = 0$. It is clearly $1$.
Assume that $k < 0$. Before proceeding further, let us prove the following lemma.
Lemma. $\displaystyle\lim_{y \to -\infty}\left(1 + \dfrac{1}{y}\right)^{y} = e.$
Proof. Consider the limit
$$L = \lim_{y\to-\infty}y\ln\left(1 + \dfrac{1}{y}\right) = \lim_{y\to-\infty}\dfrac{\ln\left(1 + \dfrac{1}{y}\right)}{1/y}.$$
We can evaluate it using L'Hospital since the rightmost limit is of the form $0/0$. This gives us
$$L= \lim_{y\to-\infty}\dfrac{\left(1 + \dfrac{1}{y}\right)^{-1}(-1/y^2)}{-1/y^2} = \lim_{y\to-\infty}\left(1 + \dfrac{1}{y}\right)^{-1} = 1.$$
From this, it follows that the original limit was $e^L = e$.
With this lemma in place, the result for $k < 0$ follows by again making the substitution $y = x/k$ but now noting that $y \to -\infty$ instead.
Interestingly, the answer in all these cases turns out to be $\boxed{e^{-k}}$. |
How can I compute the residue at this order-2 pole? | you may find it easier in this case to use the formula for a pole of order $n$:
$$
Res_{z_0}f(z) = \lim_{z\to z_0} \frac1{(n-1)!}\frac{d^{n-1}}{dz^{n-1}}(z-z_0)^nf(z)
$$
with $z_0=ia$ and the pole of order 2 we therefore require
$$
r = \lim_{z\to ia}\frac{d}{dz}\frac{\cos z}{(z+ia)^2}
$$
since
$$
\frac{d}{dz}\frac{\cos z}{(z+ia)^2} = \frac{-(z+ia)\sin z-2\cos z}{(z+ia)^3}
$$
the limit is:
$$
r=(4ia^3)^{-1}(ia\sin ia + \cos ia) \\
=-\frac{i}{8a^3}(a(e^{-a}-e^a) +e^{-a}+e^{a})\\
=-\frac{i}{8a^3}( (1-a)e^a +(1+a)e^{-a})
$$
(note that this $r$ is purely imaginary, which signals that $\cos z$, being unbounded in the upper half-plane, is not the right function for closing the contour; the standard fix is to apply the same residue formula to $e^{iz}/(z^2+a^2)^2$ and take real parts at the end)
giving:
$$
\int_{-\infty}^{\infty} \frac {\cos x}{(x^2+a^2)^2}dx =\operatorname{Re}\left(2\pi i\operatorname{Res}_{z=ia}\frac{e^{iz}}{(z^2+a^2)^2}\right) = \frac{\pi(1+a)e^{-a}}{2a^3}
$$ |
Fourier series for $f(x)=\sin(ax)$ where $a$ is not an integer? | Development in cosine series means your function has to be even, so you define $f(x)=\sin(a|x|)$ on $[-\pi,\pi]$, and complete on $\Bbb R$ by periodicity. Then, your function is $2\pi$-periodic, continuous and piecewise $C^1$, hence the Dirichlet conditions apply.
Compute the Fourier coefficients:
$$a_n=\frac{1}{\pi}\int_{-\pi}^\pi \sin(a|x|)\cos(nx) \mathrm dx=\frac{2}{\pi}\int_{0}^\pi \sin(ax)\cos(nx) \mathrm dx$$
You have the identity $\sin(ax)\cos(nx)=\frac12[\sin(a+n)x+\sin(a-n)x]$, thus
$$a_n=\frac{1}{\pi}\int_{0}^\pi (\sin(a+n)x+\sin(a-n)x) \mathrm dx
\\=\frac1\pi\left(\frac{1-\cos(a+n)\pi}{a+n}+\frac{1-\cos(a-n)\pi}{a-n}\right)
\\=\frac{2a}{\pi(a^2-n^2)}\left[1-(-1)^n\cos(a\pi)\right]$$
For the last simplification, notice that $\cos(a+n)\pi=\cos(a-n)\pi=(-1)^n\cos(a\pi)$.
Then, for $x\in[0,\pi]$,
$$\sin (ax)=\frac{a_0}{2}+\sum_{n=1}^\infty a_n\cos (nx)=\frac{1-\cos(a\pi)}{a\pi}+\frac{2a}{\pi}\sum_{n=1}^\infty \frac{1-(-1)^n\cos(a\pi)}{a^2-n^2}\cos (nx)$$
Apart from the constant term, it's the same as your book's answer, in a slightly different form. Either you mistyped the answer, or there is a typo in your book.
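If you want a numerical sanity check, here is a minimal sketch (with arbitrary test values of $a$ and $x$) comparing a truncation of this series against $\sin(ax)$:
```python
# Minimal numerical check of the cosine series for sin(a|x|) on [0, pi].
# a = 0.7 and x = 1.3 are arbitrary test values; N is the truncation order.
import math

a, x, N = 0.7, 1.3, 2000
s = (1 - math.cos(a * math.pi)) / (a * math.pi)  # the constant term a_0/2
for n in range(1, N + 1):
    an = 2 * a / (math.pi * (a * a - n * n)) * (1 - (-1) ** n * math.cos(a * math.pi))
    s += an * math.cos(n * x)
print(s, math.sin(a * x))
```
The two printed values should agree to three or four decimal places. |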
Finding the integral $\int_0^1 \frac{x^a - 1}{\log x} dx$ | Call your integral $I(a)$. Then
$$
I'(a) = \int_0^1 x^a dx = \frac{1}{a+1}
$$
as long as $a \geq 0$. Now you need to solve the differential equation
$$ I'(a) = \frac{1}{a + 1}.$$
This is a very easy differential equation to solve, and the solution is
$$ I(a) = \log(a+1) + C $$
where $C$ is some constant. Now we ask, what is that constant? Notice that
$$ I(0) = \int_0^1 \frac{1 - 1}{\log x} dx = 0,$$
so we need
$$ I(0) = \log(1) + C = 0,$$
or rather $C = 0$. So we conclude that
$$
\int_0^1 \frac{x^a - 1}{\log x} dx = \log(a + 1),
$$
as you suggested. $\diamondsuit$
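A quick numerical confirmation of this (a sketch, with an arbitrary test value of $a$):
```python
# Numerical check that int_0^1 (x^a - 1)/log(x) dx equals log(a + 1).
# a = 2.5 is an arbitrary nonnegative test value.
import math
from scipy.integrate import quad

a = 2.5
val, _ = quad(lambda x: (x**a - 1) / math.log(x), 0, 1)
print(val, math.log(a + 1))
```
The two values should agree to quad's default tolerance. |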
Approximation of stochastic integrals by Riemann sums | The first important thing to notice is that we cannot expect convergence for fixed $\omega$, i.e. the sum
$$\sum_{k=1}^K f(t_k) (X_{t_k}(\omega)-X_{t_{k-1}}(\omega))$$
does in general not converge as $K \to \infty$. Recall the following lemma:
Lemma: Let $\alpha: [a,b] \to \mathbb{R}$. If $$A(f) := \int_a^b f(t) \, d\alpha(t) := \lim_{K \to \infty} \sum_{k=1}^K f(t_{k-1}) (\alpha(t_k)-\alpha(t_{k-1}))$$ exists for all continuous $f$, then $\alpha$ is of bounded variation.
Since a Brownian motion $(B_t)_{t \geq 0}$ has infinite variation, this lemma shows that the Riemann sums
$$\sum_k f(t_k) (B_{t_k}(\omega)-B_{t_{k-1}}(\omega))$$
do (in general) not even converge for continuous $f$.
That's why Itô came up with a new idea: Let's consider the $L^2$-limit, i.e. define the stochastic integral as
$$\int_0^T f(t) \, dB_t := L^2-\lim_{K \to \infty} \sum_{k=1}^K f(t_{k-1}) (B_{t_k}-B_{t_{k-1}}).$$
Using martingale techniques, one can show that this limit does indeed exist for a large class of functions $f$ and today that's the usual way to define stochastic integrals with respect to stochastic processes. Note that $L^2$-convergence implies convergence in probability, so we have in particular
$$\sum_{k=1}^K f(t_{k-1}) (B_{t_k}-B_{t_{k-1}}) \stackrel{\mathbb{P}}{\to} \int_0^T f(t) \, dB_t.$$
The so-defined stochastic integral is well-defined for all functions $f:[0,\infty) \times \Omega \to \mathbb{R}$ which are progressively measurable and satisfy the integrability condition
$$\mathbb{E} \left( \int_0^T f(s)^2 \, ds \right) < \infty.$$ |
Simplest Way to Evaluate Lengthy Integration Equation with Succinct Answer | Use computer algebra (e.g., Mathematica) which gives a very simple answer:
$$\frac{\pi b^5}{30 \sqrt{b^2+1}}$$ |
Deriving the BBP identity for $\pi$ | Begin by writing
$$
\begin{align}
\int_0^{\frac{1}{\sqrt{2}}} &\frac{4\sqrt{2}-8x^3-4\sqrt{2}x^4-8x^5}{1-x^8} dx =
\\&
4 \sqrt{2}\int_0^{\frac{1}{\sqrt{2}}} \frac{x^0}{1-x^8} dx
-
8\int_0^{\frac{1}{\sqrt{2}}} \frac{x^3}{1-x^8} dx
\\ & \qquad -
4 \sqrt{2}\int_0^{\frac{1}{\sqrt{2}}} \frac{x^4}{1-x^8} dx
-
8\int_0^{\frac{1}{\sqrt{2}}} \frac{x^5}{1-x^8} dx
\end{align}
$$
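Expanding each $\frac{1}{1-x^8}$ as a geometric series and integrating term by term then yields the standard BBP series; a minimal numerical sketch of that series:
```python
# The BBP series obtained by expanding 1/(1 - x^8) and integrating termwise.
import math

s = sum(
    (4 / (8 * k + 1) - 2 / (8 * k + 4) - 1 / (8 * k + 5) - 1 / (8 * k + 6)) / 16**k
    for k in range(20)
)
print(s, math.pi)
```
The partial sums agree with $\pi$ (the value of the original integral) to double precision after a dozen or so terms. |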
Question about a singular matrix | You have the sum of all eigenvalues being equal to the trace; hence $k_1+k_2+k_3+k_4=4$. This'll give you $k_4$. |
Probability of winning a prize in a raffle (each person can only win once) | From what I understand of your problem, even if I buy all $2600$ tickets, I can get only one prize, i.e. although there are $42$ prizes, all needn't get distributed.
So we needn't bother about how many persons have bought how many tickets,
we can easily compute the $Pr$ that you win a prize.
You win a prize if all $42$ prizes don't fall in the $2590$ tickets not with you.
P(you win a prize) $= 1$ - P(you don't win a prize)
$= 1 - \dfrac{\binom{2590}{42}}{\binom{2600}{42}} \approx 0.1505$
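A quick check of that arithmetic (a sketch; the ratio only depends on your holding $2600-2590=10$ tickets):
```python
# 1 - C(2590, 42)/C(2600, 42): the chance that at least one of the 42
# prizes lands on one of your 10 tickets out of 2600.
from math import comb

print(1 - comb(2590, 42) / comb(2600, 42))  # ~0.1505
```
This confirms the value above. |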
Finding the coordinates of the vertices of an equilateral triangle. | Hint: $(x,y)$ is the centroid (center) of the triangle. |
The probability of $n$ being a square, given the units-digit in its decimal representation | The probabilities should not sum up to $\frac1{\sqrt N}$. Contemplate the problem with "square" replaced with "even", where you should have $p=1$ or $p=0$, depending on the unit digit being even/odd, whereas the overall probability is $\frac12$.
Instead, your $K$ should be just $\frac1{\sqrt N}$, do you see why? |
Is it possible to solve sudoku without backtracking? | Using only the techniques you mention in your initial post (i.e. only Naked Singles and Hidden Singles), you can solve about 29.17% of all the minimal 9x9 Sudoku puzzles.
However, much more "pure logic" or "pattern-based" solving (without any backtracking) can be done by using more complex resolution rules, either "classical" ones (Pairs, Triplets, ...) or more recently defined ones (whips, braids, ...).
There is a fundamental classification parameter for puzzles, the Trial-and-Error depth. For all the known 9x9 puzzles, it is 0, 1 or 2.
If it is 0, the puzzle can be solved by (Naked and Hidden) Singles only.
If it is 1, it can be solved by braids only.
If it is 2, which may happen for about 1 minimal puzzle in 30,000,000, more complex patterns are necessary (B-braids).
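To make "Singles only" concrete, here is a minimal, hypothetical solver sketch that applies nothing beyond Naked Singles and Hidden Singles, i.e. exactly the depth-0 technique set:
```python
# Depth-0 solver sketch: only Naked Singles (a cell reduced to one candidate)
# and Hidden Singles (a digit with a single possible cell in a unit).
def solve_singles(grid):  # grid: list of 81 ints, 0 = empty cell
    rows = [[9 * r + c for c in range(9)] for r in range(9)]
    cols = [[9 * r + c for r in range(9)] for c in range(9)]
    boxes = [[9 * (3 * br + r) + 3 * bc + c for r in range(3) for c in range(3)]
             for br in range(3) for bc in range(3)]
    units = rows + cols + boxes
    cand = [set(range(1, 10)) if v == 0 else {v} for v in grid]
    progress = True
    while progress:
        progress = False
        for unit in units:
            # eliminate placed digits from the other cells of the unit
            placed = {next(iter(cand[i])) for i in unit if len(cand[i]) == 1}
            for i in unit:
                if len(cand[i]) > 1 and cand[i] & placed:
                    cand[i] -= placed
                    progress = True
            # hidden single: a digit that fits in only one cell of the unit
            for d in range(1, 10):
                spots = [i for i in unit if d in cand[i]]
                if len(spots) == 1 and len(cand[spots[0]]) > 1:
                    cand[spots[0]] = {d}
                    progress = True
    return [next(iter(c)) if len(c) == 1 else 0 for c in cand]
```
Whatever this loop leaves unfinished requires the heavier patterns (pairs, triplets, whips, braids, ...) discussed above.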
Needless to say, all the puzzles you can find in newspapers or in apps on your smartphone require much less than the full power of depth 1.
The same classification works for larger puzzles, but the maximum Trial-and-Error depth is higher and the proportion of puzzles with small depth decreases fast with size.
(For more detailed information, see my book "Pattern-Based Constraint Satisfaction and Logic Puzzles". A free pdf version is available.) |
Equations and Solving Them | There is a canonical formula for solving a quadratic equation, and it is
$$
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
$$
when $p(x) = ax^2 + bx + c$. The reason for this is that
$$
a\left( x - \left( \frac{-b - \sqrt{b^2 - 4ac}}{2a} \right) \right) \left( x - \left( \frac{-b + \sqrt{b^2 - 4ac}}{2a} \right)\right) = ax^2 + bx + c.
$$
(I leave the fact that this is true for you as an exercise. Just expand the product.) Thus, the solutions to your equation are
$$
\frac{98 \pm \sqrt{98^2 - 4 \cdot 40 \cdot 49}}{2 \cdot 49}.
$$
This formula is very well known and very useful so you should remember it if you are going to do mathematics for a while.
Hope that helps, |
Conditional probability exercise does not match my intuition | Your intuition is correct this far:
Intuitively, I would say that the only prime number possible, given that 6 heads occur is 7, hence the experiment returned heads either 6 or 7 times,
Where you go wrong is in the part that comes after.
You assume since there are two possible outcomes,
they must be equally likely.
But you actually are much more likely to toss $6$ heads on $7$ coins than to toss all heads.
In fact, $6$ heads is exactly $7$ times as likely as all heads,
because only one sequence of tosses is all heads whereas there are $7$ different sequences that have exactly $6$ heads, and the sequences are equally likely. |
Cycling sequence of 0 and 1 - is cycle length even? | Assuming you are talking about the sequence of bitstrings repeating in cycles, and the period of that,
Hint: You either increase the length of the bitstring by 1 or reduce it by 1. |
$\sin(nx)$ does not contain Cauchy subsequence in $L^p([0,2\pi]) $ for $1\leq p < \infty$ | Your argument is fine. You can also do a direct variant of it like this. If $\{f_{n_k}\}_{k=1}^{\infty}$ were a Cauchy sequence, then $\lim_{k \rightarrow \infty} ||f_{n_{k+1}} - f_{n_k}||_{L^p[0,2\pi]}$ would be zero. But by Holder's inequality
$$\int_0^{2\pi}|f_{n_{k+1}} - f_{n_k}|^2 \leq||f_{n_{k+1}} - f_{n_k}||_{L^p[0,2\pi]}\,\,\,\,||f_{n_{k+1}} - f_{n_k}||_{L^{p'}[0,2\pi]}$$
But $||f_{n_{k+1}} - f_{n_k}||_{L^{p'}[0,2\pi]}$ is bounded by a fixed constant $C$ just by taking absolute values of the integrand and integrating. So you have
$$\int_0^{2\pi}|f_{n_{k+1}} - f_{n_k}|^2 \leq C||f_{n_{k+1}} - f_{n_k}||_{L^p[0,2\pi]}$$
Taking limits as $k \rightarrow \infty$ gives zero on the right hand side, but you can directly compute the left-hand integral to be $2\pi$ for all $k$, giving a contradiction. |
Geometric series for values between 0 and 1 | The series is given by
$$\sum_{k=0}^\infty (-x)^k = \frac1{1+x} \quad \text{for } |x| < 1$$
Your problem is that the last value in each row of the output is the error between the truncated sum and the limit $\frac1{1+\frac12} = \frac23$.
The output is formatted like this:
$$\text{Term } k \qquad \sum_{j=0}^k (-\frac12)^j \qquad \Bigg| \sum_{j=0}^k (-\frac12)^j - \underbrace{\sum_{j=0}^\infty (-\frac12)^j}_{= \frac23} \Bigg|$$
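Here is a small sketch (my reconstruction, not your original program) that reproduces that output:
```python
# Partial sums of sum_{j>=0} (-1/2)^j and their distance from the limit 2/3.
limit = 1 / (1 + 0.5)
partial = 0.0
for k in range(8):
    partial += (-0.5) ** k
    print(k + 1, partial, abs(partial - limit))
```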
As per request: calculation of Terms $1$ and $2$:
I denote
$$S_n := \sum_{k=0}^{n-1} (-\frac12)^k$$
The output of your program then gives $S_n$ and $\epsilon_n := |S_n - \frac23|$.
$$S_1 = \sum_{k=0}^{1-1} (-\frac12)^k = (-\frac12)^0 = 1$$
so
$$\epsilon_1 = |1-\frac23| = \frac13$$
Now for term $2$:
$$S_2 = \sum_{k=0}^{2-1} (-\frac12)^k = (-\frac12)^0 + (-\frac12)^1 = 1 - \frac12 = \frac12$$
so that
$$\epsilon_2 = |\frac12- \frac23| = |-\frac16| = \frac16$$ |
$I(X_1\cap X_2)=\sqrt{I(X_1)+I(X_2)}$ | We have$$I(X_1\cap X_2)=I(V(I(X_1))\cap V(I(X_2)))=I(V(I(X_1)+I(X_2)))=\sqrt{I(X_1)+I(X_2)}$$
The only non-trivial equation is the last one which follows from the Nullstellensatz ($I(V(J))=\sqrt{J}$) |
Dumbbell Contour? $\int_0^1 \log(x)\log(1-x)dx$ via complex methods. | The reason that the dog bone contour is inapplicable here is that branch cuts from, say, $0$ to $\infty$ and $1$ to $\infty$ do not "collapse" into the "slit" from $0$ to $1$.
To see this, we note that for $z=x+iy$, $x>1$ and $y\to 0^+$ we have $\arg(z)=0$ and $\arg(1-z)=-\pi$, while for $x>1$ and $y\to 0^-$, we have $\arg(z)=2\pi$ and $\arg(1-z)=\pi$.
Therefore, on the upper part of the coalescing branch cuts for which $x>1$
$$\begin{align}
f(z)&=\log(z)\log(1-z)\\\\
&=(\log(x)+i0)(\log(|1-x|)-i\pi)\\\\
&=\log(x)\log(|1-x|)-i\pi \log(x)
\end{align}$$
while on the lower part of the coalescing branch cuts for which $x>1$
$$\begin{align}
f(z)&=\log(z)\log(1-z)\\\\
&=(\log(x)+i2\pi)(\log(|1-x|)+i\pi)\\\\
&=\log(x)\log(|1-x|)+i\pi \log(x)+i2\pi \log(|1-x|)-2\pi^2
\end{align}$$
Therefore, the function $f(z)$ is not continuous across the coalescing branch cuts and they do not, therefore, collapse into a "slit."
NOTE:
This situation is different from the case for which $f(z)=z^{1/2}(1-z)^{1/2}$. Following the preceding analysis, we find that on the upper part of the coalescing branch cuts
$$\begin{align}
f(z)&=\sqrt{z} \sqrt{1-z} \\\\
&=\sqrt{x}\sqrt{x-1}e^{-i\pi/2}\\\\
&=-i\sqrt{x}\sqrt{x-1}
\end{align}$$
while on the lower part of the coalescing branch cuts for which $x>1$
$$\begin{align}
f(z)&=\sqrt{z}\sqrt{1-z}\\\\
&=\sqrt{x}e^{i\pi}\sqrt{x-1}e^{i\pi/2}\\\\
&=-i\sqrt{x}\sqrt{x-1}
\end{align}$$
Therefore, the function $f(z)$ is continuous across the coalescing branch cuts and they do, therefore, collapse into a "slit." |
Finding a system of linear equations which result is a given matrice | Given $x$ and looking for $M$ such that $Mx=b$ where
1) $b$ is arbitrary: Pick any $M$ and put $b=Mx$
2) $b$ is given: take $M=bx^{\dagger}+Z(I-xx^{\dagger})$, where $x^{\dagger}=x^T/(x^Tx)$ is the Moore-Penrose pseudo-inverse of the vector $x$ and $Z$ is arbitrary.
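A small numerical sketch of case 2 (all concrete values below are arbitrary choices):
```python
# Check that M = b x^+ + Z (I - x x^+) satisfies M x = b for any Z,
# where x^+ = x^T/(x^T x). The vectors and Z below are arbitrary.
import numpy as np

x = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0])
xp = x[None, :] / (x @ x)           # 1x3 row: the pseudo-inverse of column x
Z = np.arange(6.0).reshape(2, 3)
M = b[:, None] @ xp + Z @ (np.eye(3) - x[:, None] @ xp)
print(np.allclose(M @ x, b))        # True
```
Every choice of $Z$ yields a valid $M$; that freedom is exactly what the second term parameterizes. |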
Points of intersection of two parametric curves | I believe you should represent each of the parametric variables as different entities and then solve, e.g.:$$\begin{cases}
t_1+1= 3t_2+1 \\
t_1^2= t_2^2+1 \\
\end{cases}$$Solve for $t_1$ and $t_2$ here.
$t_1$ will give you the coordinates of $C_1$ and $t_2$ will give you the coordinates of $C_2$ at the points of intersection. Note that both $C_1$ and $C_2$ should come out with the same $x$ and $y$ coordinates at the points of intersection. |
Solving the Definite Integral $\int_0^{\infty} \frac{1}{t^{\frac{3}{2}}} e^{-\frac{a}{t}} \, \mathrm{erf}(\sqrt{t})\, \mathrm{d}t$ | Represent the erf as an integral and work a substitution. To wit, the integral is
$$\frac{2}{\sqrt{\pi}} \int_0^1 dv \, \int_0^{\infty} \frac{dt}{t} e^{-(a/t+v^2 t)} $$
To evaluate the inner integral, we sub $y=a/t+v^2 t$. Then the reader can show that
$$\int_0^{\infty} \frac{dt}{t} e^{-(a/t+v^2 t)} = 2 \int_{2 v \sqrt{a}}^{\infty} \frac{dy}{\sqrt{y^2-4 a v^2}} e^{-y}$$
The latter integral is easily evaluated using the sub $y=2 v \sqrt{a} \cosh{w} $ and is equal to
$$2 \int_{2 v \sqrt{a}}^{\infty} \frac{dy}{\sqrt{y^2-4 a v^2}} e^{-y} = 2 \int_0^{\infty} dw \, e^{-2 v \sqrt{a} \cosh{w}} = 2 K_0 \left ( 2 v \sqrt{a} \right )$$
where $K_0$ is the modified Bessel function of the second kind of zeroth order. Now we integrate this expression with respect to $v$ and multiply by the factors outside the integral to get the final result:
$$\begin{align} \int_0^{\infty} dt \, t^{-3/2} e^{-a/t} \operatorname{erf}{\left ( \sqrt{t} \right )} &= \frac{4}{\sqrt{\pi}} \int_0^1 dv \, K_0 \left ( 2 v \sqrt{a} \right ) \\ &= 2 \sqrt{\pi} \left [K_0 \left ( 2 \sqrt{a} \right ) \mathbf{L}_{-1}\left ( 2 \sqrt{a} \right ) + K_1 \left ( 2 \sqrt{a} \right ) \mathbf{L}_{0}\left ( 2 \sqrt{a} \right ) \right ] \end{align}$$
where $\mathbf{L}$ is a Struve function.
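One can check the intermediate $K_0$ identity numerically before the final Struve-function step (a sketch, with an arbitrary $a>0$):
```python
# Check: int_0^inf t^(-3/2) e^(-a/t) erf(sqrt(t)) dt
#      = (4/sqrt(pi)) int_0^1 K_0(2 v sqrt(a)) dv.  a = 0.7 is arbitrary.
import numpy as np
from scipy.integrate import quad
from scipy.special import erf, k0

a = 0.7
lhs, _ = quad(lambda t: t**-1.5 * np.exp(-a / t) * erf(np.sqrt(t)), 0, np.inf)
rhs, _ = quad(lambda v: k0(2 * v * np.sqrt(a)), 0, 1)
print(lhs, 4 / np.sqrt(np.pi) * rhs)
```
The two values should agree. |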
Existence of sequence for an integral function in $\mathbf{R}$. | Assume that no sequence as requested exists.
Then for each sequence $x_n\to \infty$ we have $S((x_n)_n):=\limsup |x_n f(x_n)|>0$.
Let $s$ be the infimum of all $S((x_n)_n)$ when $(x_n)_n$ runs over all such sequences.
If $s=0$, then for each $m\in \mathbb N$ there is a sequence $x_n^{(m)}\to \infty$ with $|x_n^{(m)}f(x_n^{(m)})|<\frac1m$ infinitely often.
Define recursively $y_0=0$ and $y_m=x_n^{(m)}$ with $n$ chosen big enough to guarantee both $|x_n^{(m)}f(x_n^{(m)})|<\frac1m$ and $x_n^{(m)}>y_{m-1}+1$.
Then $y_n\to\infty$ and $|y_nf(y_n)|<\frac 1n$, i.e. $y_nf(y_n)\to0$, contrary to assumption.
Hence $s>0$. Then $|f(x)|>\frac s{2x}$ for all sufficiently big $x$ (for all $x>a$, say).
Indeed, if for every $n\in \mathbb N$ there were an $x_n>n$ with $|f(x_n)|\le\frac s{2x_n}$, then for this sequence $(x_n)_n$ we have $S((x_n)_n)\le \frac s2 < s$, contradiction.
Therefore $$\int_{\mathbb R}|f(x)|dx\ge \int_{a}^\infty \frac s{2x}dx=\infty$$
and $f$ is not $L^1$.
If I'm not mistaken, the proof above can easily be generalized to:
Assume $f\in L^1(\mathbb R)$, $a,b\in \mathbb R$, $b>0$, $g:(a,\infty)\to(0,b)$ and $\int_a^\infty g(x)dx=\infty$. Then there exists a sequence $x_n\to \infty$ with $\frac{f(x_n)}{g(x_n)}\to 0$. |
Proving a function converges to 0 when we know something about its generalized integral | You have that$$f(x)=f(a)+\int_a^xf'(t)\,\mathrm d t.$$
Since $\int_a^\infty f'(x)\,\mathrm d x$ converges, the limit $$\ell:=\lim_{x\to \infty }f(x)$$
exists. Suppose that $\ell\neq 0$, say $\ell>0$. Let $M\geq a$ be such that $f(x)\geq \frac{\ell}{2}$ for all $x\geq M$. In particular, if $x>M$, then $$\int_a^x f(t)\,\mathrm d t=\int_a^Mf(t)\,\mathrm d t+\int_M^x f(t)\,\mathrm d t\geq \int_a^M f(t)\,\mathrm d t+\frac{(x-M)\ell}{2}\underset{x\to \infty }{\longrightarrow }\infty ,$$
which is a contradiction. Do the same if $\ell<0$ and conclude. |
Continuity and complex derivative | Hint: If we write $z = x + iy$ with $x, y$ real numbers, then the function is
$$f(z) = (x + iy)(x) = x^2 + i xy$$
Let $u = x^2$ and $v = xy$ be the real and imaginary parts of $f$, respectively. Then by the Cauchy-Riemann equations,
$$u_x = v_y$$ and $$u_y = - v_x$$
The first equation leads to $2x = x$, and the second leads to $0 = -y$, so.... |
Geometry, Find sides of a triangle | You don't know the distance from the two vertices of the triangle, you only know the difference between the distances, right? In that case, you cannot deduce the location of the sound source. For example, the red line in the following illustration is the locus of all possible placements of the sound source that would lead to the same difference.
The lower branch would have the sound arriving at the lower corner first, so you are only interested in the upper branch. The red curve is a hyperbola, with the corners of your triangle as foci. That's one defining property of a hyperbola.
Different positions on the upper branch of the hyperbola would lead to different angles $\alpha$. So the angle isn't known either. In many situations, the most reasonable thing you can do is concentrate on the case where the distance between sensors and sound source is much larger than the distance between the two sensors. So you could use the direction of the asymptote as an approximation.
To do that, consider a right triangle constructed between your two sensors. The idea is that sound waves would be parallel to one of its legs, and travel in the direction of the other leg. Construct it in such a way that the length of that second leg (which is the leg incident with the sensor where the sound arrived later) matches the difference in distance that you calculated.
From that you can read off the angle of the direction where a sound source far away would be located. |
Mathematically choose the better discount | Answer:
Draw out the equation for a price range from 100 to 4000 in increments of 100 and work out the discounted price; you will find that the 20% discount on the total would be worse than the 30% on every 100 dollars. Chart the equations in Excel and calculate for different prices and compare them. You will find the result self-evident, as you see below. |
Use of "for all" in definition of reflexive and symmetric relations. | The statement should be read as:
For all $a_1, a_2 \in A$: if $(a_1, a_2) \in R$, then $(a_2, a_1) \in R$.
Hence we don't consider all pairs $(a_1, a_2) \in A$, only $(a_1, a_2) \in R$. However, every element $a_i \in A$ may be used to "try" to form these pairs. |
Proof of Tychonoff's Theorem with Ultrafilter in Dudley's "Real Analysis and Probability" | We can apply 2.2.5 to $p_i(\mathcal{U})$ because it is an ultrafilter on $K_i$ and $K_i$ is compact. Once you accept that $p_i(\mathcal{U})$ is an ultrafilter on $K_i$, 2.2.5 says it has to converge to some $x_i \in K_i$.
If $A \subseteq K_i$, either $p_i^{-1}[A]$ is in $\mathcal{U}$, so $p_i[p_i^{-1}[A]] \in p_i(\mathcal{U})$ and as $p_i[p_i^{-1}[A]] \subseteq A$ by definition, the superset property of filters says that $A \in p_i(\mathcal{U})$ as well, or $(\prod_i K_i) - p_i^{-1}[A] \in \mathcal{U}$ (as $\mathcal{U}$ is an ultrafilter) but then $p_i[ (\prod_i K_i) - p_i^{-1}[A] ] \subseteq K_i - A$ and by the same reasoning, $K_i - A \in p_i(\mathcal{U})$. As $A$ is arbitrary, $p_i(\mathcal{U})$ is an ultrafilter on $K_i$.
The definition of the product topology is used in the fact that the collection of all sets of the form $\bigcap \{p_i^{-1}[U_i] \mid i \in F\}$, where $F \subseteq I$ is finite and all $U_i$ are open in $K_i$ for all $i \in F$, is a base for the product topology and so suffice to determine the convergence of filters: a filter converges to $x$ iff all basic open sets that contain $x$ are in that filter. This is a simple general fact about filter convergence.
Any neighbourhood $U$ of $x$ contains such a basic neighbourhood by the definition of the product topology. It follows in essence from the fact that the product topology is the minimal topology such that all projections $p_i$ are continuous: we need these finite intersections to be open for that and we can stop there for minimality reasons. |
Prove $\frac{a}{b}+\frac{b}{c}+\frac{c}{a} \geq \frac{a+b}{a+c}+\frac{b+c}{b+a}+\frac{c+a}{c+b}.$ | Assume $$\dfrac{a}{b}=x,\dfrac{b}{c}=y,\dfrac{c}{a}=z$$
So for instance
$$\dfrac{a+c}{b+c}=\dfrac{1+xy}{1+y}=x+\dfrac{1-x}{1+y}$$
And the problem would be transformed to:
$$\dfrac{x-1}{y+1}+\dfrac{y-1}{z+1}+\dfrac{z-1}{x+1}\ge0$$
$\equiv(x^2-1)(z+1)+(y^2-1)(x+1)+(z^2-1)(y+1)\ge0$
$\equiv \sum{x^2z}+\sum{x^2}\ge\sum{x}+3$
We have $xyz=1$, hence :$$\sum{x^2z}\ge3$$
And also $x+y+z\ge3$, so $$\sum{x^2}\ge\dfrac{(\sum{x})^2}{3}\ge\sum{x}$$
Problem solved |
Calculating derivatives (applying chain rule) | First, in case you're unfamiliar, $f \circ g$ is an alternate notation for the composite function $f(g(x))$.
$f_i\circ f_j \circ f_k$ is the same as $f_i ( f_j (f_k (x)))$. Using the circle notation is a bit cleaner. By stating that $\{i,j,k\}$ represents all permutations of $\{1,2,3\}$, we're listing all six arrangements of these numbers into the composites
$\quad f_1\circ f_2 \circ f_3 = \sin(\frac{1}{\ln x})$
$\quad f_1\circ f_3 \circ f_2 = \sin(\ln(\frac1x))$
and so on for the remaining four permutations.
Now you want to take the derivatives of all six, applying the chain rule. For the first of the two above, we get:
$\quad \frac{d}{dx}\sin(\frac{1}{\ln x})$ $\quad= \cos(\frac{1}{\ln x})\cdot\frac{d}{dx}(\ln x)^{-1} $ $\quad= \cos(\frac{1}{\ln x})\cdot(-1)(\ln x)^{-2}\frac{d}{dx}\ln x $ $\quad= \cos(\frac{1}{\ln x})\cdot(-1)(\ln x)^{-2}\cdot(\frac{1}{x}) $ $\quad= \frac{-\cos(\frac{1}{\ln x})}{x \ln^2 x}$
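If you want to check this (and the remaining five compositions) mechanically, a computer-algebra sketch:
```python
# Symbolic check of the derivative computed above.
import sympy as sp

x = sp.symbols('x', positive=True)
expr = sp.sin(1 / sp.log(x))
by_hand = -sp.cos(1 / sp.log(x)) / (x * sp.log(x) ** 2)
print(sp.simplify(sp.diff(expr, x) - by_hand))  # prints 0
```
The same few lines verify the other five permutations. |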
Finding a group with a prescribed normal subgroup and quotient group | Abelian case
There's a tool to work with group extensions, at least in the abelian context. It is called the Ext functor. Generally non-equivalent (which is a bit weaker than non-isomorphic; non-equivalent extensions can be isomorphic) extensions of $N$ via $Q$ correspond to elements of
$$\text{Ext}^1_{\mathbb{Z}}(Q, N)$$
The direct sum $Q\oplus N$ always corresponds to the trivial element in $\text{Ext}^1(Q, N)$.
Now if $Q$ is projective (as a $\mathbb{Z}$-module, i.e. free abelian group) then $\text{Ext}^1(Q, N)=0$ which implies that the answer to the question
Let $G$ be such that there is $N\subseteq G$ with $N\simeq\mathbb{Z}$ and $G/N\simeq\mathbb{Z}$. Are there solutions other than the obvious $G\simeq\mathbb{Z}\oplus\mathbb{Z}$?
is no.
Now lets consider $Q=\mathbb{Z}_2$ and $N=\mathbb{Z}$.
There are the two obvious Abelian solutions
Yes, this follows because
$$\text{Ext}^1(\mathbb{Z}_2,\mathbb{Z})\simeq\mathbb{Z}_2$$
(which you can calculate from general properties of the Ext functor). Therefore there are 2 non-equivalent extensions and thus at most 2 non-isomorphic. You found two non-isomorphic solutions which completes the classification.
Trivial non-abelian (i.e. semidirect product)
Now non-abelian case is generally hard. But luckily when we work with $\mathbb{Z}$ and $\mathbb{Z}_p$ (for prime $p$) everything becomes easier.
The trivial non-abelian example is the semidirect product $N\rtimes Q$. I call it "trivial" because it corresponds to a short exact sequence $1\to N\to G\to Q\to 1$ that splits. The reverse also holds: if such sequence splits then $G$ is a semidirect product of $N$ and $Q$.
Since we are working in non-abelian case then by "semidirect product" I'm going to understand "non-abelian semidirect product" from now on. Note that the only abelian semidirect product is the direct product.
For $Q=\mathbb{Z}_2$ and $N=\mathbb{Z}$ this gives the infinite dihedral group. There are no other possibilities of semidirect product of $N$ and $Q$ because there's only one non-trivial homomorphism $Q\to\text{Aut}(N)$. Note that $\text{Aut}(\mathbb{Z})\simeq\mathbb{Z}_2$.
For $Q=N=\mathbb{Z}$ we have that the only non-trivial semidirect product $N\rtimes Q$ is, as a set, $\mathbb{Z}\times\mathbb{Z}$ with group multiplication given by
$$(a,b)(c,d)=(a+ (-1)^bc, b+d)$$
Non-trivial non-abelian?
There are none. Thanks to @Derek Holt for hints to solve that case.
Indeed, the case $N=Q=\mathbb{Z}$ is quite simple. Every short exact sequence of the form
$$1\to N\to G\to\mathbb{Z}\to 1$$
has to split because $\mathbb{Z}$ is not only free-abelian but also free among all groups. Therefore if $f:G\to\mathbb{Z}$ is an epimorphism and $f(g)=1$ for some $g\in G$, then $h:\mathbb{Z}\to G$ given by $h(1)=g$ is a right inverse of $f$, i.e. a splitting.
The case $N=\mathbb{Z}$ and $Q=\mathbb{Z}_p$ (for prime $p$) is a bit more difficult. We will show that every such extension has to split as well. Or equivalently that $G$ is a semidirect product of $N$ and $Q$.
So let $g\in G$ be such that $g\not\in N$ (for simplicity I assume that $N\subseteq G$). Define $H=\langle g\rangle$. Since $G/N\simeq\mathbb{Z}_p$, the subgroups $N$ and $H$ together generate all of $G$. Since $N$ is normal, $G=NH$. And since $G/N\simeq\mathbb{Z}_p$, we have $g^p\in N$. Let $e\in G$ be the neutral element. Now we have two cases:
$g^p=e$. Then $N\cap H=\{e\}$ (the assumption about $p$ being prime kicks in here) and therefore (since $G=NH$) $G$ is the semidirect product of $N$ and $H$.
$g^p\neq e$. In that case consider the automorphism $\varphi_g:G\to G$ given by $\varphi_g(x)=gxg^{-1}$. Since $N$ is normal then we can restrict it to $\varphi_g:N\to N$. Since $N\simeq\mathbb{Z}$ then $N$ has only two automorphisms: the identity and the inverse. But both cases are impossible. Indeed, if $\varphi_g(x)=x$ then $gxg^{-1}=x$ and thus $gx=xg$ which contradicts $G$ being non-abelian. If on the other hand $\varphi_g(x)=x^{-1}$ then $gg^pg^{-1}=g^{-p}$ and thus $g^p=g^{-p}$, i.e. $g^p$ is an element of order at most $2$ in $N$. This is impossible because $N\simeq\mathbb{Z}$ and the only finite-order element in $\mathbb{Z}$ is the neutral element. But we assumed that $g^p\neq e$.
This proves that there are no non-split extensions of the form
$$1\to \mathbb{Z}\to G\to\mathbb{Z}_p\to 1$$ |
I need help finding integral of $\frac{1}{x^2+x^4}$ | Hint:$$\frac{1}{x^2+x^4}=\frac{1}{x^2} - \frac{1}{x^2 + 1}$$ |
Wilkinson's polynomial very simple misunderstanding | As long as your computer can hold the exact coefficients of the polynomial, you will be able to get some sort of stability in calculating the roots, even with such a crude method as interval bisection. However, this isn't the point of Wilkinson's polynomial. Rather, what the polynomial demonstrates is that a tiny error in the data for a problem, namely in the coefficients of the polynomial, can lead to a big error in the solution values (the roots). A consequence of this, for example, would be big errors in obtaining the eigenvalues of a matrix by calculating the roots of its characteristic polynomial, where the matrix entries may not be exact even though specified to pretty good precision.
Wilkinson's polynomial is ill conditioned because of the huge ratio of the error in the solution to an error in the data. The ratio you cite is another matter. |
How to find the logical equivalent statement of $(P \lor Q) \lor (P \land R)$ without using truth table from given options. | We have $P \wedge R \leq P \leq P \vee Q$ for any assignment of truth values to $P$, $Q$, and $R$, and so $(P \vee Q) \vee (P \wedge R) = P \vee Q$. |
Proving the isomorphy of two modules | Denote by $\pi_i \colon M_i \rightarrow M_i / N_i$ and consider the map $\psi \colon M_1 \times M_2 \rightarrow M_1 / N_1 \times M_2 / N_2$ given by $\psi(m_1,m_2) = (\pi_1(m_1), \pi_2(m_2))$. This map is clearly linear and onto with kernel
$$ \ker(\psi) = \{ (m_1, m_2) \, | \, \pi_1(m_1) = \pi_2(m_2) = 0 \} = \{ (m_1, m_2) \, | \, m_1 \in N_1, m_2 \in N_2 \} = N_1 \times N_2 $$
and so by the first isomorphism theorem you get
$$ (M_1 \times M_2) / (N_1 \times N_2) = (M_1 \times M_2) / \ker(\psi) \approx \operatorname{im}(\psi) = M_1 / N_1 \times M_2 / N_2. $$ |
Candies drawn from both bowls | Let's fix a particular candy $C$ that is in both bowls. You draw $m_1$ out of the $N_1$ candies in the first bowl, so your chance of drawing $C$ is $\frac{m_1}{N_1}$. Independently, your chance of drawing it from the second bowl is $\frac{m_2}{N_2}$. So your probability of getting a pair of $C$ is $\frac{m_1m_2}{N_1N_2}$. Now you just need to sum this over all the possible values of $C$ to get the expected number of pairs is $p\frac{m_1m_2}{N_1N_2}$ |
Quotient of a finitely generated torsion free abelian group | Provided that the subgroup you quotient out by is pure, then yes it is. |
How to calculate reflected light angle? | Since you've stated all three angles in similar terms, and want a formula that works in all cases, lets use the angle with the x axis, in 360 degree terms, which accomplishes both purposes and is good for computation. So here's the picture, with z the angle you are seeking, and a the angle of reflection...
Then, using the two triangles with the x axis as base and the fact that an exterior angle of a triangle is the sum of the other interior angles, you get
z = x + a
y = $\pi$ - 2a + z
And solving those for z in terms of x and y gives
z = $\pi$ + 2x - y
OK, this is actually far from a general analysis -- angles x, y and z could occur in any order on the x axis, but we may assume that y is to the right of x without losing generality, so there are only two other cases. Also angle x could be > $\frac{\pi}{2}$, but since the lines are undirected, we may assume that x < $\pi$. Finally, angle x could be 0.
Hmmm... thinking about this some more. When z falls to the right of y, the same formula follows because the roles of z and y are interchanged. But when z falls to the left of x, it's because the line of the reflected light intersects the x axis "backwards". And then the geometry yields z = 2x - y, or the prior formula if you take the angle of the reflected light as the supplement of z.
So we really need vectors for the light rays, not undirected lines, and/or the original problem is not altogether well-formed, that is the notion "angle" of a light ray needs to be defined as the direction of its vector. If you do that, then the angle labeled z in my diagram is not the correct vector directional angle. It should be $\pi$ + z, so the true formula, in vector direction is z = 2x - y, and that works for all cases. (Haven't checked the degenerate cases, but no reason to expect them to fail). |
Extrema of $f(x,y)=\sqrt{x^2+y^2} \cdot e^{-(x^{2}+y^{2})}$ | Hint:
note that the function is symmetric around the $z$ axis, so it can be better studied in cylindrical coordinates.
Using $\sqrt{x^2+y^2}= r$, the function becomes:
$$
z=re^{-r^2}
$$
and the derivative
$$\frac{\partial z}{\partial r}=e^{-r^2}(1-2r^2)$$
is simpler.
Can you continue from here? |
Probability that integer n is even given integer k is in [1,10] and n is in [1,k]. | You have two options in this situation, the first is to simply brute force the probabilities (which given you have at most a few hundred outcomes, would be fairly straightforward to write in python or R or something).
A more analytic approach would be to begin breaking the problem down. In this case you have 2 possible situations: $k$ is even, or $k$ is odd. If $k$ is even then the probability of $n$ being even is $.5$, and if $k$ is odd then the probability of $n$ being even is $\frac{(k-1)/2}{k}$, which you can then average over the ten equally likely values of $k$.
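The brute-force option really is only a few lines (a sketch; it averages over the ten equally likely values of $k$):
```python
# P(n even) = (1/10) * sum over k=1..10 of (number of evens in 1..k)/k
print(sum((k // 2) / k for k in range(1, 11)) / 10)  # ~0.4106
```
Both routes give the same answer, about $0.41$. |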
Can element of a set be a logical sentence? | A propositional function is what we get from a sentence of our language, like e.g. "Socrates is a philosopher" removing the name Socrates and using instead a "place holder" : a variable like $x$.
The resulting expression : "$x$ is a philosopher", is like a mathematical function: assigning to $x$ a "value" : Plato, Napoleon, what we get is a sentence : either true (for Plato) or false (for Napoleon).
In predicate logic, a propositional function $\phi(x)$ is called : open formula.
See : Kazimierz Kuratowski & Andrzej Mostowski, Set theory, North Holland (1968), page 45 :
we shall consider propositional functions. They are expressions which contain
variables. If each variable is replaced by the name of an arbitrary element,
then the propositional function becomes a sentence. For instance,
$x> 0, x^2 < 5, X$ is a non-empty set
are examples of propositional functions. By substitution we obtain, e.g.,
the following sentences:
$1 > 0, 25 < 5$, the set of prime numbers is a non-empty set.
An open formula $\phi(x)$ is what is used in the set-builder notation :
$\{ x \mid \phi(x) \}$
to define a set: the set of all and only those objects such that $\phi(x)$ holds of them.
Regarding propositional calculus, we start with a language (or alphabet), i.e. a set of propositional symbols : $p_0,p_1,\ldots$ and the connectives : $\lor, \land, \lnot$.
We define expressions as finite strings of symbols.
Finally, we have :
The set $\text {Prop}$ of propositional formulas [also : well-formed formulas] is the smallest set $X$ with the properties :
(i) $p_i ∈ X$,
(ii) if $ϕ,ψ ∈ X$, then $(ϕ ∨ ψ), (ϕ \land ψ) ∈ X$,
(iii) if $ϕ ∈ X$, then $(¬ϕ) ∈ X$. |
Find all holomorphic functions with $\Re(f(x+iy))=2xy$ | The first step is right, $f$ must be differentiable. But you missed some constants, we have - as you write (let me name the remaining function of $x$ $Q_1$ to see the difference to $Q$) -
$$ Q(x,y) = y^2 + Q_1(x) $$
and
$$ Q(x,y) = -x^2 + Q_2(y) $$
Equating both terms gives
$$ y^2 + Q_1(x) = -x^2 + Q_2(y) \iff x^2 + Q_1(x) = Q_2(y) - y^2 $$
Note that the left hand side depends on $x$ only, the right hand side only on $y$, hence, both sides must be constant, say equal $c \in \mathbf R$. So
$$ x^2 + Q_1(x) = c \iff Q_1(x) = c - x^2 \implies Q(x,y) = y^2-x^2 + c $$
Hence all functions
$$ f(x+iy) = 2xy + i(y^2 - x^2) + ic, \quad c \in \mathbf R $$
are solutions to your problem. |
Binary division, with reminder | Both calculators are computing in GF(2), the Galois field of order $2$. In this field, subtraction does not involve carries -- subtraction is accomplished by XORing the operands.
For instance, at your first link, the last "subtraction" is $1100 \underline{\vee} 1001 = 101$.
An actual binary calculator gets the result you are expecting.
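For concreteness, here is a minimal sketch of this kind of GF(2) long division on bit-polynomials, where every "subtraction" is an XOR:
```python
# Long division of bit-polynomials over GF(2): subtraction is XOR, no borrows.
def gf2_div(dividend: int, divisor: int):
    deg = divisor.bit_length() - 1
    quotient = 0
    for shift in range(dividend.bit_length() - deg - 1, -1, -1):
        if dividend >> (shift + deg) & 1:
            dividend ^= divisor << shift   # the carry-free "subtraction"
            quotient |= 1 << shift
    return quotient, dividend              # (quotient, remainder)

print(gf2_div(0b1100, 0b1001))  # (1, 0b101): 1100 XOR 1001 = 101
```
This reproduces the step quoted above. |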
Why need characteristic p? | Number theory, at it's heart, deals with problems over $\Bbb Z$--and of course more general things that come from motivation from $\Bbb Z$. And one of the most beloved subjects is solving quadratic forms. Thanks to Hensel's Lemma and the celebrated Hasse-Minkowski Theorem we know that solving a given integral quadratic form over $\Bbb Z$ is equivalent to solving it over $\Bbb F_p$ for all odd $p$ and $\Bbb Z/8$ for $p=2$ and over $\Bbb R$. This alone is an enormous number-theoretic reason which would justify studying $\Bbb F_p$ all by itself.
Even in elementary number theory, we use modular arithmetic, which--in the prime case--is arithmetic over $\Bbb F_p$, so I would also dispute the underlying claims you make in your problem statement. Congruences are an enormously important part of study in number theory. Dealing with modular arithmetic integers and Dirichlet characters leads us to a result on $\Bbb Z$ by using $\zeta$-functions to prove Dirichlet's Theorem on Arithmetic Progressions.
There are many, many more reasons to care, even about problems from elementary theory, but I think these two are some of the most central, the latter is closer to pure questions on modular arithmetic rather than specifically primes, but I think in general it illustrates that rings with non-zero characteristic are both interesting and relevant even in the classical theory. |
A set of n equations that equal each other | This is an example of the Chinese Remainder Theorem. It says you can find a solution as long as the multipliers of $a,b,c$ are coprime. The solutions will recur at intervals of $75 \cdot 101 \cdot 163$. The Wikipedia page gives several approaches to find a solution.
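For instance, with sympy (a sketch; the right-hand sides below are placeholder residues, since they depend on your particular $a,b,c$):
```python
# Solve x = r1 (mod 75), x = r2 (mod 101), x = r3 (mod 163) by CRT.
# The residues 10, 20, 30 are placeholders.
from sympy.ntheory.modular import crt

x, period = crt([75, 101, 163], [10, 20, 30])
print(x, period)  # smallest solution and the interval 75*101*163 = 1234725
```
Any two solutions differ by a multiple of that period. |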
Difficult expected value problem | Consider a particular bucket and the $m$ balls with a particular number. The probability that bucket never receives a ball with that number is $\left(\dfrac{m-1}{m}\right)^m$
So the expected total number of buckets which never receive a ball with that number is $m\left(\dfrac{m-1}{m}\right)^m$. That is then equal to the expected number of balls with that number put in the garbage, since they did not go into any bucket
So the total expected number of balls put in the garbage is $m^2\left(\dfrac{m-1}{m}\right)^m$
which for large $m$ is about $m^2 e^{-1}$ |
Equivalence of $\operatorname{nullity}= 0$ and injectivity | If we have linear $T\colon U\to V$ where $U$, $V$ are vector spaces (of arbitrary, possibly infinite, possibly different dimension), then
$$ \dim\ker T=0\iff T\text{ is injective}$$
You showed $\implies$. The other direction is even simpler: If $T$ is injective and $Tu=0$, then from $T0=0$ and injectivity $u=0$, so $\ker T=\{0\}$. |
3d solid identification | I agree that the figure you care about isn't a tetrahedron (the picture in the same question yesterday). I agree that it's an interesting figure. I'm pretty sure it doesn't have a name. Maybe I'm wrong and someone will know the name.
I think that topologically your figure is an orbifold.
Here's an image with one circle flattened:
https://www.math.toronto.edu/drorbn/Gallery/Symmetry/Tilings/22S/PlasticBag.html
There's no way to express in purely topological terms the fact that the other flattening is rotated 90 degrees. You need some kind of metric information for that. |
Are these the correct residues? | You have an addition mistake in your solution.
\begin{align}
\oint\frac{z+1}{z(z-2)}dz &= \oint\frac{(z+1)/z}{z-2}dz+\oint\frac{(z+1)/(z-2)}{z}dz\\
&= 2\pi i\bigl[f_1(2) + f_2(0)\bigr]
\end{align}
where $f_1(z) = \frac{z+1}{z}$ and $f_2(z)=\frac{z+1}{z-2}$. Then
$$
\oint\frac{z+1}{z(z-2)}dz = 2\pi i(3/2-1/2) = 2\pi i
$$
We could also do this problem using Residue theory. We have simple poles inside the contour at $z=0,2$.
Then
\begin{align}
\oint\frac{z+1}{z(z-2)}dz &= 2\pi i\sum\text{Res}\\
&=2\pi i\biggl[\lim_{z\to 0}z\frac{(z+1)}{z(z-2)}+\lim_{z\to 2}(z-2)\frac{(z+1)}{z(z-2)}\biggr]\\
&= 2\pi i(-1/2+3/2)\\
&= 2\pi i
\end{align} |
Subfield of $\mathbb{Q}(\sqrt[n]{a})$ | Let $\alpha=\sqrt [n]a\in \mathbb R_+$ be the real positive $n$-th root of $a$, so that $K=\mathbb Q(\alpha)$.
Consider some intermediate field $\mathbb Q\subset E\subset K$ (with $d:=[E:\mathbb Q]$) and define $\beta=N_{K/E}(\alpha)\in E$.
We know that $\beta=\Pi _\sigma \sigma (\alpha) $ where $\sigma$ runs through the $E$-algebra morphisms $K\to \mathbb C$.
Now, $\sigma (\alpha)=w_\sigma \cdot\alpha$ for some suitable complex roots $w_\sigma$ of $1$ so that $$\beta=\Pi _\sigma \sigma (\alpha)=(\Pi _\sigma w_\sigma) \cdot\alpha^e\quad (\operatorname { where } e=[K:E]=\frac nd)$$ Remembering that $\beta\in E\subset K\subset \mathbb R$ is real and that the only real roots of unity are $\pm 1$ we obtain $\Pi _\sigma w_\sigma=\frac {\beta}{\alpha^e}=\pm 1$ and $\beta=\pm \alpha^e=\pm \sqrt [d]a$.
Thus we have $\mathbb Q(\beta)=\mathbb Q(\sqrt [d]a)\subset E$ with $ \sqrt [d]a$ of degree $d$ over $\mathbb Q$.
Since $[E:\mathbb Q]=d$ too we obtain $E=\mathbb Q(\sqrt [d]a)$, just as claimed in the exercise. |
How do I prove $\frac{\sin x + \csc y}{\sin y + \csc x} = \csc y \sin x$? | If $\sin x=p,\sin y=q$
$\dfrac{p+\dfrac1q}{q+\dfrac1p}=\dfrac pq$ if $pq+1\ne0$ |
finding the minimal distance of 2 cars without trigonometry | For $\angle ABC=90^o$: $$\min_{0 \leq t \leq \frac{70}{8}}{(700-80t)^2+(100t)^2}$$ this can be rewritten as $$700^2+\min_{0 \leq t \leq \frac{70}{8}}{(80^2+100^2)t^2-2\cdot80\cdot700t}$$ As a quadratic with roots $t=0$ and $t=\frac{2\cdot80\cdot700}{80^2+100^2}$ it is minimized at the midpoint of the roots, that is in $$t^*=\frac{80\cdot700}{80^2+100^2}=3.41$$ |
An example of bounded linear operator | In the first comment I suggested the following strategy: write $T=\sum_j T_j$, where $T_j$ is a linear operator defined by $T_jx=\{k_jx_{n-j}\}$. You should check that this is indeed correct, i.e., summing $T_j$ over $j$ indeed gives $T$. Next, show that $\|T_j\|=|k_j|$ using the definition of the operator norm. Finally, use the triangle inequality $\|Tx\|_{\ell^p}\le \sum_j \|T_jx\|_{\ell_p}$. |
Construction of a vector valued function that maps from $\mathbb{R}^n$ to an embedded disc | By the gluing lemma, the map
$$
g : x \in \mathbb{R}^n \mapsto \begin{cases}x &\text{if $\|x\| \leq 1$}\\\|x\|^{-1}x&\text{if $\|x\| \geq 1$}\end{cases} \in \mathbb{R}^n
$$
is continuous and its image lies in the disc $\mathbb{D}^n = \{p \in \mathbb{R}^n: \|p\| \leq 1\}$. On the other hand, since $| \cdot | : t \in \mathbb{R} \mapsto |t| \in \mathbb{R}$ is continuous, we have that
$$
h = | \cdot | \times \cdots \times | \cdot | : (x_1,\dots,x_n) \in \mathbb{R}^n \mapsto (|x_1|, \dots,|x_n|) \in \mathbb{R}^n
$$
is continuous. Now, the function $f = hg : \mathbb{R}^n \to \mathbb{R}^n$ is continuous and $f(\mathbb{R}^n) = B$, it suffices to corestrict it to $B$ to obtain the desired mapping. |
Linear transformation problem from R^4 to R^2 | All you need to show is that $T$ satisfies $T(cA+B) =cT(A) +T(B)$ for any vectors $A,B$ in $\mathbb{R}^4$ and any scalar from the field, and $T(0) =0$. It looks like you got it. That should be sufficient proof. |
How to compute $\sum_{n\text{ odd}}\frac{1}{n\sinh n\pi\sqrt 3}$? | The identities:
$$\sum_{n\geq 1}\frac{(-1)^n}{n^2+m^2} = -\frac{1}{2m^2}+\frac{\pi}{2m\sinh(m\pi)},\tag{A}$$
$$ \frac{1}{m^2+n^2}=\int_{0}^{+\infty}\frac{\sin(nx)}{n}e^{-mx}\,dx,\tag{B}$$
$$ \sum_{n\geq 1}(-1)^n\frac{\sin(nx)}{x}=-\frac{x}{2}+\pi\left\lfloor\frac{x+\pi}{2\pi}\right\rfloor\tag{C} $$
give a wide range of possibilities to evaluate our series. For instance, $(A)$ gives:
$$\begin{eqnarray*}\sum_{k\geq 0}\frac{1}{(2k+1)\sinh(\pi(2k+1))}&=&\frac{1}{\pi}\sum_{k\geq 0}\frac{1}{(2k+1)^2}+\frac{2}{\pi}\sum_{k\geq 0}\sum_{n\geq 1}\frac{(-1)^n}{n^2+(2k+1)^2}\\&=&-\frac{\pi}{8}+\frac{2}{\pi}\sum_{h\geq 1}\frac{r_2(h)\cdot\eta(h)}{h}\tag{1}\end{eqnarray*}$$
where $\eta(h)$ equals $-1$ if $h\equiv 2\pmod{4}$, $1$ if $h\equiv 1\pmod{4}$, zero otherwise, and:
$$ r_2(h) = \#\{(n,k)\in\mathbb{N}^2: h=n^2+(2k+1)^2\}.\tag{2} $$
Now it is well-known that $a^2+b^2$ is the only reduced binary quadratic form of discriminant $-4$, hence:
$$\begin{eqnarray*} \#\{(a,b)\in\mathbb{Z}^2:a^2+b^2=n\} &=& 4\left(\chi_4 * 1\right)(n)\\&=& 4\left(d_{1(4)}(n)-d_{3(4)}(n)\right) \tag{3}
\end{eqnarray*}$$
so that we can evaluate the RHS of $(1)$ through Dirichlet convolution.
Since we have class number one also in the case $a^2+3b^2$, the situation is almost the same for the other series.
The Mellin transform gives another chance. See, for instance, this related problem. |
Conditional Probability Semantics | calculate the probability of both Marie and Jack calling when the alarm is ringing but neither a burglary nor an earthquake.
This means the probability sought is $P(J,M,A,\neg B,\neg E)$, with comma denoting "and" or "intersection". By the Chain Rule:
\begin{eqnarray*}
P(J,M,A,\neg B,\neg E) &=& P(J \vert M,A,\neg B,\neg E) P(M \vert A,\neg B, \neg E) P(A \vert \neg B,\neg E) P(\neg B \vert \neg E) P(\neg E) \\
&& \\
&=& P(J \mid A) P(M \mid A) P(A \mid \neg B,\neg E) P(\neg B) P(\neg E).
\end{eqnarray*}
That last step is due to the inherent independence and conditional independence built into the system. E.g. given $A$, events $J$ and $M$ are independent. Events $B$ and $E$ are (unconditionally) independent. These are assumptions made when constructing the system, presumably to match with reality.
The Chain Rule is actually a series of steps combined into one. It might help to break it down:
\begin{eqnarray*}
P(J,M,A,\neg B,\neg E) &=& P(J \mid M,A,\neg B,\neg E) P(M,A,\neg B, \neg E) \\
&& \\
&=& P(J \mid M,A,\neg B,\neg E) P(M \mid A,\neg B, \neg E) P(A,\neg B,\neg E).
\end{eqnarray*}
So one of the terms we need to evaluate is $P(A,\neg B,\neg E)$. Continuing with the Chain Rule:
\begin{eqnarray*}
P(A,\neg B,\neg E) &=& P(A \mid \neg B,\neg E) P(\neg B,\neg E) \\
&& \\
&=& P(A \mid \neg B,\neg E) P(\neg B \mid \neg E) P(\neg E). \\
&& \\
&=& P(A \mid \neg B,\neg E) P(\neg B) P(\neg E) \qquad\text{(by independence)}. \\
\end{eqnarray*}
"Doesn't me already including the term $P(A \mid \neg B \wedge \neg E)$ imply $\neg B$ and $\neg E$?"
No, it gives the probability of $A$ if they happened. The "they did happen" part is supplied by multiplying by $P(\neg B,\neg E)$. Otherwise, what you're really saying is that $P(A,\neg B,\neg E)$ and $P(A \mid \neg B,\neg E)$ are the same thing, which they are not.
Maybe one final way to illustrate the difference. The event $A\cap B \cap E$ is extremely unlikely. It has probability $0.95 \times 0.01 \times 0.02 = 0.00019$. However, $P(A\mid B \cap E)$ is very likely at $0.95$.
I hope that helps. |
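If you want to check such a computation mechanically, here is a minimal sketch of the factorized product. Only $P(B)=0.01$ and $P(E)=0.02$ appear above; the remaining CPT entries below are placeholders to be replaced by the values from your own network.

```python
# Factorized joint probability P(J, M, A, not-B, not-E) for the alarm network.
P_B, P_E = 0.01, 0.02        # from the answer above
P_A_given_nB_nE = 0.001      # placeholder: P(A | not-B, not-E)
P_J_given_A = 0.90           # placeholder: P(J | A)
P_M_given_A = 0.70           # placeholder: P(M | A)

joint = (P_J_given_A * P_M_given_A
         * P_A_given_nB_nE
         * (1 - P_B) * (1 - P_E))   # P(not-B) P(not-E) by independence
print(joint)
```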
The approximation function of $\frac{x}{y}$ | Though the question is unspecific about what constitutes an "approximation", the answer appears to be "no".
As lulu notes in the comments, an approximation $\frac{x}{y} \approx f(x) + f(y)$ leads (for $x = y$) to
$$
1 = \frac{x}{x} \approx f(x) + f(x) = 2f(x)\quad\text{for all $x$.}
$$
so $f$ would have to be (approximately) the constant $\tfrac12$, which clearly cannot reproduce $\frac{x}{y}$ in general. Similarly, an approximation $\frac{x}{y} \approx f(x) - f(y)$ leads (for $x = y$) to the absurdity
$$
1 = \frac{x}{x} \approx f(x) - f(x) = 0.
$$
From the other direction (i.e., starting with customary notions of approximation and seeing where they lead):
If $y_{0} \neq 0$, then for $|y - y_{0}| < |y_{0}|$ the geometric series gives the first-order approximation
\begin{align*}
\frac{x}{y} &= \frac{x}{y_{0} + (y - y_{0})}
= \frac{x}{y_{0}} \cdot \frac{1}{1 + (\frac{y - y_{0}}{y_{0}})} \\
&= \frac{x}{y_{0}} \cdot \left[1 - \frac{y - y_{0}}{y_{0}} + \biggl(\frac{y - y_{0}}{y_{0}}\biggr)^{2} - \cdots\right] \\
&\approx \frac{x}{y_{0}} - \frac{x(y - y_{0})}{y_{0}^{2}},
\end{align*}
which is not of the form you seek. |
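To see the quality of that first-order approximation numerically, here is a sketch (the sample values $x=3$, $y_0=2$ are arbitrary):

```python
# Compare x/y with the linearization x/y0 - x(y - y0)/y0^2 near y0.
x, y0 = 3.0, 2.0
for y in (1.9, 2.0, 2.1):
    approx = x / y0 - x * (y - y0) / y0 ** 2
    print(y, x / y, approx)  # the error shrinks like (y - y0)^2
```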
How to prove a knot with genus larger than 1 is prime, such as Miller Institute Knot? | As you guessed, there are a lot of ways to show a knot is prime, and depending on the knot, certain ways are much easier than others. For this knot, the easiest way that I know of is to realize it has bridge number $2$: from the diagram the bridge number is $\leq 2$ since there are $2$ maxima, and it is $\geq 2$ since the unknot is the only knot with bridge number less than $2$. And it is known that all $2$-bridge knots are prime.
~~Off-hand, I know of an algorithmic way to test if a knot is composite or prime, but it is from a paper not published and therefore does not lend well to being explained here. Maybe someone else here knows of a more classical method.~~
Actually, I believe that the crossed out statement is false. I don't think there is an algorithmic way, or at least not one that is computationally viable for any given knot. Again, maybe someone else knows better than I. |
Local-global test of algebraicity | Sure: if $\alpha$ is not algebraic, take $\beta\in \Bbb{C}_p\setminus\overline{\Bbb{Q}}_p$ and $\sigma\in \operatorname{Aut}(\Bbb{C})$ with $\sigma(\alpha)=j^{-1}(\beta)$.
(These constructions require the axiom of choice.) |
Prove by induction that $n^3 + 11n$ is divisible by $6$ for every positive integer $n$. | For any positive integer $n$, let $S(n)$ denote the statement
$$
S(n) : 6\mid (n^3+11n).
$$
Base step: For $n=1, S(1)$ gives $1^3+11(1) = 12 = 2\cdot 6$. Thus, $S(1)$ holds.
Inductive step: Let $k\geq 1$ be fixed, and suppose that $S(k)$ holds; in particular, let $\ell$ be an integer with $6\ell = k^3+11k$. Then
\begin{align}
(k+1)^3 + 11(k+1) &= (k^3+3k^2+3k+1) + (11k+11)\tag{expand}\\[0.5em]
&= \color{red}{k^3+11k}+3k^2+3k+12\tag{rearrange}\\[0.5em]
&= \color{red}{6\ell} + 3k(k+1)+2\cdot 6.\tag{by ind. hyp.}
\end{align}
Since one of $k$ and $k+1$ is even, the term $3k(k+1)$ is divisible by $6$, and so the last expression above is divisible by $6$. This proves $S(k+1)$ and concludes the inductive step $S(k)\to S(k+1)$.
By mathematical induction, for each $n\geq 1$, the statement $S(n)$ is true. $\blacksquare$ |
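As a quick numerical spot check (of course not a substitute for the induction):

```python
# Spot check: n^3 + 11n is divisible by 6 for the first 10^4 positive integers.
assert all((n**3 + 11 * n) % 6 == 0 for n in range(1, 10**4))
```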
Modulo on some interval | The question seems to be: given $a,b,x,z \in \mathbb{R}$ with $x \in [a,b]$, provide a function $f$ with $f(x+z) \in [a,b]$.
Even though $x \in [a,b]$, because $z$ can be any real number there is no essential difference between $z$ and $x+z$: they both range over all of $\mathbb{R}$. Perhaps $z \geq 0$ is what was intended, in which case $x+z \geq a$.
Of course, multiple definitions of $f$ are possible. $f$ needs to map a domain of infinite length into a range of finite length. The reciprocal function does that. Using the interval length as a modulus is another way. The arctangent also maps an infinite domain onto a finite range. Most any function that maps an infinite domain to a finite range could be adopted, with appropriate scaling and translation of both the argument and the result.
Using the interval length as a modulus could be expressed as $f(x)= ((x+z) \bmod (b-a)) +a$. Since $b-a$ is a positive real, $v \bmod (b-a)$ can be understood as the unique number $v - k(b-a)$ with $k \in \mathbb{Z}$ and $0 \leq v - k(b-a) < b-a$. A small tweak would be required if $f$ must be able to yield $b$ (which it can't with this definition).
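In code, the modulus-based mapping is a one-liner (a sketch; Python's `%` already returns a representative in $[0,d)$ for $d>0$):

```python
# Wrap x + z into [a, b) using the interval length b - a as the modulus.
def wrap(x, z, a, b):
    return (x + z) % (b - a) + a

print(wrap(1.5, 10.3, 1.0, 2.0))  # always lands in [1, 2)
```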
Using the reciprocal, and assuming $z \geq 0$ and $a > 0$, one can take $f(x) = a + \frac{a(b-a)}{x+z}$. Note that $f$ decreases from $b$ to $a$ as $x+z$ increases from $a$ to infinity. A tweak would be required if $f$ must be able to yield $a$ (which it can't with this definition). |
Integrate by parts elementary | You can let $y=\frac1 2\phi^2 x$.
You will then have something like:
$$\int_0^\infty e^{-y}y^{-1-\alpha/2}\,dy=\Gamma(-\alpha/2),$$ valid when the limits of integration are $0$ and $\infty$ (and $\alpha<0$, so that the integral converges). |
Why is an alternating $2$-form decomposable if and only if its self-wedge vanishes? | There is a canonical form: there is a basis $e_1,\dots,e_n$ of $V$ and a $k$ such that $w=e_1\wedge e_2+\dots +e_{2k-1}\wedge e_{2k}$. Notice that if $k>1$ then $w\wedge w$ contains the term $2e_1\wedge e_2\wedge e_3\wedge e_4$, hence $w\wedge w\neq 0$. We can conclude that if $w\wedge w=0$ then $k\leq 1$, so the canonical form is $w=e_1\wedge e_2$ (or $w=0$), which is decomposable. |
Markov Chain with two components | Yes, you can. Actually this Markov chain is reducible, with two communicating classes (as you have correctly observed),
$C_1=\{1,2,3,4\}$, which is not closed and therefore any stationary distribution assigns zero probability to it and
$C_2=\{5,6,7\}$ which is closed.
As stated for example in this answer,
Every stationary distribution of a Markov chain is concentrated on the closed communicating classes.
In general the following holds
Theorem: Every Markov Chain with a finite state space has a unique stationary
distribution unless the chain has two or more closed communicating classes.
Note: If there are two or more communicating classes but only one closed then the stationary
distribution is unique and concentrated only on the closed class.
So, here you can treat the second class as a separate chain but you do not need to. No matter where you start you can calculate the steady-state probabilities and they will be concentrated on the class $C_2$. |
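To see this numerically, here is a sketch with a made-up $7$-state transition matrix of the shape described (the actual matrix from the question is not reproduced here); states $0$-$3$ play the role of the non-closed class and $\{4,5,6\}$ the closed one:

```python
import numpy as np

# Hypothetical reducible chain: states 0-3 leak into the closed class {4,5,6}.
P = np.array([
    [0.5, 0.3, 0.1, 0.0, 0.1, 0.0, 0.0],
    [0.2, 0.5, 0.2, 0.1, 0.0, 0.0, 0.0],
    [0.1, 0.2, 0.5, 0.1, 0.0, 0.1, 0.0],
    [0.0, 0.1, 0.2, 0.6, 0.0, 0.0, 0.1],
    [0.0, 0.0, 0.0, 0.0, 0.3, 0.4, 0.3],
    [0.0, 0.0, 0.0, 0.0, 0.5, 0.2, 0.3],
    [0.0, 0.0, 0.0, 0.0, 0.2, 0.4, 0.4],
])

pi = np.ones(7) / 7        # arbitrary initial distribution
for _ in range(10000):     # power iteration toward the steady state
    pi = pi @ P
print(pi.round(6))         # mass on states 0-3 is (numerically) zero
```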
How can I fold paper into 3 x 4 grid? Or prove that it can't be done? | Let the paper be $[0,a]\times[0,b]$
By halving you find $(\frac a2,0)$; make a crease along the line through $(\frac a2,0)$ and $(0,b)$. This crease intersects the diagonal from $(0,0)$ to $(a,b)$ at $(\frac a3,\frac b3)$. Creasing through that point parallel to the short side gives the fold line $x=\frac a3$, and folding the far edge onto it gives $x=\frac{2a}3$, so you have thirds in one direction; two successive halvings give quarters in the other. |
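For completeness, the intersection computation: $$\frac{x}{a/2}+\frac{y}{b}=1,\qquad y=\frac{b}{a}x\ \Longrightarrow\ \frac{2x}{a}+\frac{x}{a}=1\ \Longrightarrow\ x=\frac{a}{3},\quad y=\frac{b}{3}.$$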
Ordinary Generating functions for $b_n$ | So for the first one there are a couple of issues: when you change the summation from starting at $n = 3$ to starting at $n=0$, you added $3$ to the power but not to the index of the coefficient; both must be shifted together.
Also, when you factor the $x^3$ out of the sum, the terms inside pick up a factor of $x^{-3}$. Fix these things and see how far you can get.
For the second I'd recommend looking at $f(-x)$ and comparing that to $f(x)$. |
Spectrum of an operator | Every nonzero spectral value of a compact operator is an eigenvalue.
So one method to find the spectrum of $A$ is to determine its eigenvalues. (There aren't many.)
What does an identity
$$\lambda x(t) = \int_0^t x(s)\,ds$$
for all $t\in [0,1]$ imply about $x$? |
Find a power series for $\frac{z^2}{(4-z)^2}$. | For $|z|<4$ we have: $$\frac{1}{4-z}=\sum_{n=0}^{\infty}\frac{z^n}{4^{n+1}}\implies \frac{1}{(4-z)^2}=\sum_{n=0}^{\infty}\frac{(n+1)z^n}{4^{n+2}}\implies\frac{z^2}{(4-z)^2}=\sum_{n=0}^{\infty}\frac{(n+1)z^{n+2}}{4^{n+2}}$$ (the second series follows from the first by differentiating term by term);
and for $|z|>4$ similarly: $$\frac{z^2}{(4-z)^2}=\sum_{n=0}^{\infty}\frac{(n+1)4^n}{z^{n}}$$ |
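As a quick check of the $|z|<4$ expansion (a sketch using sympy), one can compare the coefficients against $\frac{n+1}{4^{n+2}}$:

```python
from sympy import symbols, series, Rational

z = symbols('z')
g = series(z**2 / (4 - z)**2, z, 0, 10).removeO()
for n in range(0, 8):
    # Coefficient of z^(n+2) should be (n+1)/4^(n+2).
    assert g.coeff(z, n + 2) == Rational(n + 1, 4**(n + 2))
```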
Why do we use both sets and predicates? | The nifty thing about sets is that they are things. We can talk about functions whose values are sets, or functions that take sets as inputs, or make sets of sets, or sets of sets of sets ...
This is quite useful, and in fact absolutely ubiquitous in higher mathematics.
The downside of this is that when sets are "things", there can't be a set for any property of "things" that we can define. That's what Russell's paradox shows: if $\{x\mid x\notin x\}$ were a thing, then it would neither be nor not be an element of itself, which is absurd -- so it can't be a thing.
Of course that doesn't mean that we should refrain from ever speaking of a property that we have defined willy-nilly by stating what the condition for it is. We just need to remember that such a property is not a thing: it cannot necessarily be the value of a function, an element of a set, and so forth.
It is now convenient to have two different words for the two different rulesets we might be working under for the concepts we name.
The ones where we work by "is a thing but cannot be defined just by fiat" we call sets -- and then we need some explicit formal rules for how we can define a set, which are the axioms of set theory.
The ones where we work for "can be defined every which way but is not a thing", we call predicates. For historical reasons we often notate named predicates with a slightly different syntax than sets, but within set theory it is also common to use the same $\in$ notation as for sets, in which case we call them classes rather than predicates -- but it's the same concept in either case. In either case we need few axioms for them, because it's understood that a named predicate/class is formally just an abbreviation for the particular formula that defines it.
This works pretty well in practice.
Higher-order logic offers a different way out, where the same concept can at once be a "thing" at one level of the type hierarchy while being defined by free-wheeling fiat over lower levels of the hierarchy. At first glance this can look like it gives us the best of both worlds in one package. But in practice it turns out to be cumbersome to use for actual mathematics -- for example, if you take a quotient of a group, the elements of the quotient group are naturally sets of elements of the original group, so the quotient group exists on a different level of the type hierarchy. The fact that there's no level in the hierarchy that contains all groups we'll ever be interested in makes it difficult to formalize theorems about the relation between groups.
There are recent developments in type theory (where "recent" means the last 50 years or so) that aim at making this less cumbersome by allowing the system to internalize the meta-theorems one would otherwise need to state results that work on more than one type level. So far the results of this have not won general acceptance as a foundation for all of mathematics, though the approach is pretty widely used in the areas of automated theorem proving and machine-verified proof, for example with the HOL theorem prover.
A possible reason for this is that the formal rules of a useful non-cumbersome higher-order logic system are a lot more complex to state and explain than those of first-order logic and set theory. This puts the higher-order approach at a considerable didactic disadvantage -- but it may change in the future if machine-verified proofs really take off. |
Reference on a corollary for applying Deduction Theorem multiple times in FOL. | The gist of the proviso in the Deduction Theorem for predicate logic is to avoid the invalid step of quantifying a variable that is free in $B$, because $P(x) \to \forall x P(x)$ is not valid.
As already discussed, we have two derivations: the original one: $\Gamma, B \vdash C$ and the new one: $\Gamma \vdash B \to C$, and the DT gives us instructions how to produce the new one from the original one.
In the proof of Prop.2.5, we have to consider the case regarding the use of $\text {Gen}$ in the original derivation.
The text says:
Finally, suppose that there is some $j < i$ such that $D_i$ is $(\forall x_k)D_j$. By the inductive hypothesis, $\Gamma \vdash B \to D_j$, and, by the hypothesis of the theorem, either $D_j$ does not depend upon $B$ or $x_k$ is not a free variable of $B$.
The two sub-cases are considered and instructions are given for writing the corresponding steps in the new derivation.
In the first sub-case we have $\Gamma \vdash (\forall x_k)D_j$ by $\text {Gen}$. Thus, we have an application of $\text {Gen}$ in the new derivation, corresponding to the application of $\text {Gen}$ in the original one. The application is to $D_j$ that does not depend upon $B$ in the original derivation by induction hypotheses; thus, also the application in the new derivation is to a formula that does not depend upon $B$.
In the second sub-case we have $\Gamma \vdash (\forall x_k)(B \to D_j)$ by $\text {Gen}$, and we supposed that $x_k$ is not free in $B$.
In conclusion, in both sub-cases the applications of $\text {Gen}$ in the new derivation satisfy the condition:
no application of $\text{Gen}$ to a wf that depends upon $B$ has as its quantified variable a free variable of $B$ [in first sub-case: no dependency; in the second sub-case: $x_k$ not free in $B$].
But this means that if in the original derivation there were some application of $\text{Gen}$ to a wf depending upon a wf $E$ of $\Gamma$, then in the new derivation there is a corresponding application of $\text{Gen}$ that involves the same quantified variable and is applied to a wf that depends upon $E$.
This is so because applications of $\text {Gen}$ in the first derivation that are not involved in the derivation of $D_j$ are left unchanged into the new derivation and the new applications of $\text{Gen}$ discussed above generate no new dependencies. |
Isometries on complete metric spaces map to dense subspaces | Suppose $X=[0,\infty)$ and $g(x)=x+1$, then $g$ is an isometry, but the image is $Y=[1,\infty)$, which is not dense in $X$.
Note that, in fact, $Y$ must be closed in $X$, since $g^{-1}$ maps Cauchy sequences to Cauchy sequences. Hence, if $y_n=g(x_n)$ in $Y$ converges to $y$ in $X$, then $x_n$ is Cauchy with limit $x$. However, this gives $$
g(x)=\lim_{n\to\infty} g(x_n)=\lim_{n\to\infty} y_n=y$$
So $Y$ is simply a subspace of $X$ which is isometrically isomorphic to $X$. |
A continuous real function $f:[2, \infty] \to \mathbb R$ such that $|f|^p$ is integrable only for a given $p >1$. | Let $a,b>0$. Note that $x^{-1/a} \chi_{(0,1)}(x)$ is in $L^q$ exactly for $q \in [0,a)$. Similarly, $x^{-1/b} \chi_{(1,\infty)}(x)$ is in $L^q$ exactly for $q \in (b,\infty]$. Summing these gives a function in $L^q$ exactly for $q \in [0,a) \cap (b,\infty]=(b,a)$.
Now take your favorite $\epsilon_n \to 0^+$, with $\sup \epsilon_n \leq p$. Take $f_n \in L^q$ for $q \in (p-\epsilon_n,p+\epsilon_n)$ using the construction in the previous paragraph. Then $f=\sum_{n=1}^\infty 2^{-n} \frac{f_n}{\| f_n \|_{L^p}}$ does the job.
This construction is for the domain $(0,\infty)$ or a superset thereof, but it is easy to translate it to any other infinite interval.
This can be adapted to create a function continuous on $[0,\infty)$, including at the left endpoint. The second building block is easy to deal with: just replace $x^{-1/b} \chi_{(1,\infty)}(x)$ by $\begin{cases} 1 & x \in [0,1] \\ x^{-1/b} & x \not \in [0,1] \end{cases}$. Or you can take pretty much any other bounded, continuous extension.
The first building block is not so easy to deal with, since $x^{-1/a}$ blows up at $0$. Intuitively you need to place spikes that get taller but narrower as $x \to \infty$. Such a spike centered at $0$ looks like $f(x;M,\delta)=\begin{cases} 0 & x \not \in [-\delta,\delta] \\ \frac{M}{\delta}(x+\delta) & x \in [-\delta,0] \\ M-\frac{M}{\delta} x & x \in [0,\delta] \end{cases}$. One can then sum such spikes, centered at, say, $n=0,1,\dots$, with growing $M$ and shrinking $\delta$, to get a function in $L^p$ exactly for $p \in [0,a)$ again. It will help to recall the convergence/divergence behavior of the "$p$-series".
Then you use modifications of these two functions in the same way I suggested in the second paragraph. |
Contraction mapping does not hold in metric space | Ok, first you can rewrite $f(x)$ as $f(x) = \frac{x^2+2}{2x}$ (bring $x$ into the fraction: $x-\frac{x^2-2}{2x}=\frac{2x^2-(x^2-2)}{2x}=\frac{x^2+2}{2x}$). Then you can notice that for rational $x$ both the numerator and the denominator are rational, since you only multiply and add rational numbers. So the whole fraction is rational.
Now: is it in $[1,2]$?
Assuming that $x \gt 0$ we have:
$\frac{x^2+2}{2x} \ge 1$ iff $x^2+2 \ge 2x$ iff $x^2 - 2x + 2 \ge 0$ iff $(x - 1)^2 + 1 \ge 0$ which is true for any $x$.
Is it smaller than 2?
$\frac{x^2+2}{2x} \le 2$ iff $x^2 + 2 \le 4x$
Hint for that: analyse the maximum and minimum values of appropriate sides of the inequality.
Now you want to show that it is a contraction. That means you want it to be Lipschitz with Lipschitz constant smaller than one. In these situations the easiest way is to use the mean value theorem (Lagrange's theorem). The derivative is a simple monotone function ($f'(x) = \frac12 - \frac{1}{x^2}$, as wolfram confirms), so you can easily check that the supremum of its modulus on $[1,2]$ is $\frac12$, which is smaller than $1$.
And finally a less calculus-like part of the task: why is there no fixed point?
If you look at $f$ as a function on all of $[1,2]$ (without intersecting with the rational numbers), then all the above calculations are still true. Call this function $F$. Since $[1,2]$ is a complete metric space, a contraction on it must have a fixed point, and that fixed point is unique. You can easily calculate that $F(x)=x$ holds for $x=\heartsuit$, so the only fixed point of $F$ is $\heartsuit$. Since $\heartsuit$ is not a rational number, we know that $f$ has no fixed points. |
What is the meaning of this sequence of random variables? | Here is a concrete example which may help to understand the meaning of $N_k$: you have customers who come to a shop, and the $i$th of them spends a certain amount of money $X_i$ between $1$ and $d$. The random variable $N$ may be the number of customers entering the shop in a certain interval of time, say one hour. Then $N_k$ is the number of customers who have spent exactly $k$ euros, say. |
Formula for the number of integer solutions of an equation (using generating functions) | Since you seem to have run out of steam, I'll finish off what you've started. First note that you have a sign error in the second to last term: it should be $+ \frac{2}{9} \sum (n+1) x^n$. Thus the series expansion for the generating function is
\begin{align*}
\frac{-7x^2 - x + 8}{27} \sum_{n \geq 0} x^{3n}
&+ \frac{2x^3 - 3x^2 + 1}{9} \sum_{n \geq 0} (n+1) x^{3n}\\
&+ \frac{7}{27} \sum_{n \geq 0} x^n
+ \frac{2}{9} \sum_{n \geq 0} (n+1) x^n + \frac{1}{18} \sum_{n \geq 0} (n^2 + 3n + 2) x^n \, .
\end{align*}
The coefficient of $x^m$ in these series depends on whether $m \equiv 0, 1$ or $2$ mod ${3}$. If $m \equiv 0 \pmod{3}$, then the coefficient is
\begin{align}
&\frac{8}{27} + \frac{1}{9}\left(2 \left(\frac{m-3}{3} + 1\right) + \frac{m}{3} + 1 \right) + \frac{7}{27} + \frac{2}{9}(m+1) + \frac{1}{18}(m^2 + 3m + 2)\\
&= \frac{1}{18}m^2 + \frac{1}{2}m + 1 \, .
\end{align}
(Note that the second sum contributes two terms since $3n+3 \equiv 0 \pmod{3}$ as well.) The last $3$ terms in the first line of the above formula remain the same whether $m \equiv 0, 1$ or $2$ mod ${3}$.
[Edit: A bit more on how I got this. Let's consider the coefficient of $x^m$ in the first sum $\frac{-7x^2 - x + 8}{27} \sum_{n \geq 0} x^{3n}$. Distributing, this sum is equal to
$$
\frac{-7}{27}\sum_{n \geq 0} x^{3n+2} + \frac{-1}{27} \sum_{n \geq 0} x^{3n+1} + \frac{8}{27} \sum_{n \geq 0} x^{3n} \, .
$$
If $m \equiv 0 \pmod{3}$, then the first two sums do not contribute to the coefficient of $x^m$. Why? Because all their powers of $x$ are congruent to $2$ and $1 \pmod{3}$, respectively. Only the last sum contributes $\frac{8}{27}$ to the coefficient of $x^m$, when $n = m/3$. Similarly, only the second sum contributes when $m \equiv 1 \pmod{3}$, and the first when $m \equiv 2 \pmod{3}$. The same considerations work for finding the coefficient of $x^m$ in $\frac{2x^3 - 3x^2 + 1}{9} \sum_{n \geq 0} (n+1) x^{3n}$.]
Similar calculations for $m \equiv 1, 2 \pmod{3}$ imply that the coefficient of $x^m$ is
$$
\begin{cases}
\frac{1}{18}m^2 + \frac{1}{2}m + 1 & \text{if } m \equiv 0 \pmod{3}\\
\frac{1}{18}m^2 + \frac{7}{18}m + \frac{5}{9} & \text{if } m \equiv 1 \pmod{3}\\
\frac{1}{18}m^2 + \frac{5}{18}m + \frac{2}{9} & \text{if } m \equiv 2 \pmod{3}
\end{cases} \, .
$$
As vonbrand points out in his answer, we could've instead factored the quadratic $1 + x + x^2$ into linear factors with third roots of unity, which would have led to the same periodic behavior with period $3$.
Finally, as you pointed out, we still haven't taken care of that $x^{15}$. It's very simple, though: to obtain the coefficient of $x^m$ in $\frac{x^{15}}{(1 - x)^3 (x^2 + x + 1)^2}$, we simply take the coefficient of $x^{m-15}$ in $\frac{1}{(1 - x)^3 (x^2 + x + 1)^2}$. For instance, to compute the coefficient of $x^{22}$, we substitute $m = 22 - 15 = 7$ into the formula for $m \equiv 1 \pmod{3}$, which yields $6$.
It turns out that these formulas can be simplified dramatically. Using the formulas above (or Wolfram Alpha), we find that the Maclaurin series for the generating function is
$$
x^{15}+x^{16}+x^{17}+3 x^{18}+3 x^{19}+3 x^{20}+6 x^{21}+6 x^{22}+6 x^{23}+10 x^{24}+10 x^{25}+10 x^{26}+15 x^{27}+15 x^{28}+15 x^{29}+ \cdots \, .
$$
These coefficients are the triangular numbers $T_n = \binom{n+2}{2} = \frac{(n+2)(n+1)}{2}$ repeated $3$ times each. This can also be seen from the formulas above: letting
\begin{align*}
f(m) &= \frac{1}{18}m^2 + \frac{1}{2}m + 1\\
g(m) &= \frac{1}{18}m^2 + \frac{7}{18}m + \frac{5}{9}\\
h(m) &= \frac{1}{18}m^2 + \frac{5}{18}m + \frac{2}{9}
\end{align*}
then
$$
f(3n) = g(3n+1) = h(3n+2) = \frac{1}{2} n^2 + \frac{3}{2}n + 1 = \binom{n+2}{2} \, .
$$
Thus a simpler formula for the coefficient of $x^n$ is
$$
\frac{1}{2} \left(\left\lfloor \frac{n-15}{3} \right\rfloor + 2\right) \left(\left\lfloor \frac{n-15}{3} \right\rfloor + 1\right) \, .
$$ |
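As a sanity check (a sketch using sympy, not part of the original argument), one can compare the simplified formula against the Maclaurin coefficients directly:

```python
from sympy import symbols, series

x = symbols('x')
f = x**15 / ((1 - x)**3 * (x**2 + x + 1)**2)
g = series(f, x, 0, 30).removeO()

for n in range(15, 30):
    k = (n - 15) // 3  # the floor in the closed-form expression
    assert g.coeff(x, n) == (k + 2) * (k + 1) // 2
```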
Can we always express every closed and bounded set $S\subseteq[0,1]$ as a countable union of intervals? | You have not clearly stated what you are asking: taking $S=[0,1]$ is permitted, and $(0,1)$ is an open interval contained in $S$. What you presumably intend is to talk about some sort of maximal subintervals, or maybe to restrict to disjoint unions.
In any case, the answer is no, if $S$ is e.g. a Cantor set then the only intervals contained in $S$ are closed degenerate intervals (i.e. points). So if $S$ is the countable union of intervals, then it is the countable union of points, so it is countable. But Cantor sets are uncountable. |
Use mean value theorem on $f(x) = x^{1/5}$, to show that $2< \sqrt[5]{33}<2.0125$ | The mean value theorem, in your situation, says that
$$\frac{{\root5\of{33}}-2}{33-32}=\frac{1}{5c^{4/5}}$$
for some $c$ between $32$ and $33$. This can be rewritten
$${\root5\of{33}}=2+\frac{1}{5c^{4/5}}\ .$$
Since $c>32$ we have
$$\frac{1}{c^{4/5}}<\frac{1}{32^{4/5}}=\frac{1}{2^4}\ .$$
See if you can finish the problem for yourself. |
Injectivity of a complex function with conjugate | Directly by definition (fill in details):
$$h(z)=h(w)\iff\frac1{\overline z+i}=\frac1{\overline w+i}\iff\overline w+i=\overline z+i\iff \overline w=\overline z\iff w=z$$ |
A power series that converges for $|x| \leq 1$ and diverges otherwise. | You don't need the comparison test. If $|z|=1$ then $\sum z^n/n^2$ converges absolutely, so it converges; for $|z|>1$ the terms do not tend to $0$, so the series diverges. |
Doubt in Fourier Series | The cosine is $2\pi$ periodic and has maxima at $2\pi k$ for $k \in \mathbb Z$ and minima at $\pi (2k+1)$ for $k \in \mathbb Z$. So $\cos n \pi$ is always at a maximum or minimum for $n \in \mathbb Z$; indeed, $\cos n\pi = (-1)^n$.
I think if you draw the graph of $\cos x$ and mark the integer multiples of $\pi$, you will see it even better. |
Does there exist a r.v. $X$ with $E[X]= \mu$ and $Var[X]=\sigma^2$ for some fixed $\mu$ and $\sigma$ | As I said in the comments, for the first question you can just take $X\sim\mathcal N(\mu,\sigma^2)$ if $\sigma^2>0$, $X\equiv\mu$ if $\sigma^2=0$, and otherwise it is impossible. For the second question, recall that given $R:T\times T\to\mathbb R$ there exists a mean-zero Gaussian process $(X_t)_{t\in T}$ such that $E(X_sX_t)=R(s,t)$ if and only if $R$ is symmetric and positive semidefinite, that is, $R(s,t)=R(t,s)$ and $\sum_{i,j=1}^nR(t_i,t_j)x_ix_j\ge0$ for all $t_1,\ldots,t_n\in T$ and all $x\in\mathbb R^n$. In your case this follows from e.g. the fact that $R(t,t)>0$ and $\sum_{s\neq t}R(s,t)<R(t,t)$ for all $t$. |
Fountains of Coins and Fibonacci Numbers | Let us call $Fib(x)$ the generating function associated to the Fibonacci numbers $(F_k)$. We know that:
$$Fib(x)=\frac{x}{1-x-x^2} $$
Now, to produce a series containing only the $F_{2k+1}$, it seems like a good idea to compute:
$$Fib(x)-Fib(-x)=2\sum_{k=0}^{\infty}F_{2k+1}x^{2k+1} $$
$$Fib(x)-Fib(-x)=2x\sum_{k=0}^{\infty}F_{2k+1}(x^2)^k $$
Hence, we see that if $G(y)$ is the generating function for the $(F_{2k+1})$ we must have that :
$$G(x^2)=\frac{Fib(x)-Fib(-x)}{2x} $$
Now it suffices to compute :
$$Fib(x)+Fib(-x)=\frac{x}{1-x-x^2}+\frac{x}{1+x-x^2} $$
$$Fib(x)-Fib(-x)=\frac{2x-2x^3}{1-3x^2+x^4}$$
Hence we have that :
$$G(x^2)=\frac{1-x^2}{1-3x^2+x^4} $$
Hence :
$$G(y)=\frac{1-y}{1-3y+y^2} $$
Now the generating function for the $F_{2k-1}$ is exactly $yG(y)$ hence :
$$\sum_{k\geq 1}F_{2k-1}y^k=\frac{y-y^2}{1-3y+y^2} $$
On the other hand compute :
$$F(y)-1=\frac{1-2y}{1-3y+y^2}-1=\frac{y-y^2}{1-3y+y^2} $$
Then you see that $F(y)-1=yG(y)$ this is exactly saying that the $k$-coefficient of $F(y)$ for $k\geq 1$ is $F_{2k-1}$. |
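As a quick check (a sketch; sympy's built-in `fibonacci` is used), one can verify the first several coefficients of $F(y)=\frac{1-2y}{1-3y+y^2}$:

```python
from sympy import symbols, series, fibonacci

y = symbols('y')
F = (1 - 2 * y) / (1 - 3 * y + y**2)
g = series(F, y, 0, 8).removeO()

assert g.coeff(y, 0) == 1
for k in range(1, 8):
    # The k-th coefficient should be the odd-indexed Fibonacci number F_{2k-1}.
    assert g.coeff(y, k) == fibonacci(2 * k - 1)
```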
A closed Riemannian manifold of nonpositive sectional curvature with finite $\pi_1 (M)$ | By the Cartan-Hadamard theorem a manifold with a complete metric of non-positive sectional curvature has universal cover diffeomorphic to $\mathbb{R}^{n}$. If $M$ is compact this implies that $\pi_{1}(M)$ is infinite, since it acts on $\mathbb{R}^{n}$ with a compact quotient space, and a finite group cannot act on the non-compact space $\mathbb{R}^{n}$ with compact quotient. |
Getting an equation involving logarithm into explicit form | $\def\inv#1{\frac{1}{#1}}$
$$t+c=\inv{\sqrt{2}}\log\left(\frac{x}{2+\sqrt{4-2x^2}}\right)$$
$$e^{\sqrt 2(t+c)}=\frac{x}{2+\sqrt{4-2x^2}}$$
As suggested in a comment, we set $K=e^{\sqrt 2(t+c)}$:
$$K(2+\sqrt{4-2x^2})=x$$
$$\frac{x-2K}{K}=\sqrt{4-2x^2}$$
Now we square both sides
$$\frac{x^2-4Kx+4K^2}{K^2}=4-2x^2$$ |
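Carrying the algebra one step further (and discarding the root $x=0$ introduced by squaring, which lies outside the domain of the original logarithm): $$x^2-4Kx+4K^2=K^2(4-2x^2)\ \Longrightarrow\ (1+2K^2)x^2=4Kx\ \Longrightarrow\ x=\frac{4K}{1+2K^2}.$$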