title | upvoted_answer |
---|---|
Is a transitive and Euclidean relation necessarily symmetric? | My simpler counterexample for "Is a transitive and Euclidean relation necessarily symmetric?" is $\{\langle a,b\rangle, \langle b,b\rangle\}$. So, no.
A Euclidean relation is one that is both right Euclidean and left Euclidean, as I learnt it. Wikipedia doesn't agree with me; I guess the right form is the one given here. Or maybe they meant "if it's Euclidean and reflexive, it's an equivalence." |
Finding a generator for $\mathbb{Z}_{2}\times \mathbb{Z}_{3}$ | Look, for example, at
$$4(1,1)=(4\pmod2,\,4\pmod3)=(0\pmod2,\,1\pmod 3)=(0,1)$$
where the rightmost expression is shorthand: each coordinate is taken modulo the corresponding factor.
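As a quick check (a minimal sketch in Python, not part of the original answer), one can list the multiples of $(1,1)$ and see that they exhaust the group:
```python
# Multiples of (1,1) in Z_2 x Z_3: k*(1,1) = (k mod 2, k mod 3)
multiples = [(k % 2, k % 3) for k in range(6)]
print(multiples)                 # six distinct pairs
print(len(set(multiples)) == 6)  # True: (1,1) generates the whole group
```
Since all six elements appear, $(1,1)$ is a generator. |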
f Lebesgue integrable iff improper Riemann integral of |f| converges | For the restated problem: This will follow from
1. If $f$ is Riemann integrable on $[c,d]$ then $f$ is Lebesgue integrable on $[c,d],$ and the two integrals agree.
2. If $f$ is Lebesgue integrable on $[a,b],$ then
$$\lim_{\beta \to b^-}\int_a^\beta f(x)\,dx = \int_a^b f(x)\,dx.$$
To prove 2., note it suffices to treat a sequence $\beta_n \to b^-.$ Set $f_n = \chi_{[a,\beta_n]}\cdot f$ and apply the dominated convergence theorem.
Previous answer
This is false:
$$\int_0^1 \frac{\sin(1/(1-x))}{1-x}\, dx$$
converges as an improper Riemann integral, although
$$ \int_0^1\left | \frac{\sin(1/(1-x))}{1-x}\right |\, dx= \infty.$$ |
Show that there exists a polynomial such that.. | We have $f(x)-p(x) = f(a)-p(a) + \int_a^x (f'(t)-p'(t)) dt$, so
$|f(x)-p(x)| \le |f(a)-p(a)| + (b-a) \|f'-p'\|_\infty$ or
$\|f-p\|_\infty \le |f(a)-p(a)| + (b-a) \|f'-p'\|_\infty$.
Note that $p(x) = p(a) + \int_a^x p'(t) dt$ is a polynomial and you can choose
$p(a)$ however you wish. |
Proving that $\mathbb{R}_K$ is connected with the topology $\tau$ | Suppose that $A$ and $B$ are disjoint clopen sets that partition $\mathbb{R}$. Suppose that there is an integer $n$ such that the interval $(1/(n+1),1/n)$ contains points from both $A$ and $B$: this is a contradiction, since the intervals of that form are clearly connected (the induced topology on them is the standard one). Now suppose that there is $n$ such that $(1/(n+2),1/(n+1))\subset A$ and $(1/(n+1),1/n)\subset B$. Again, this is a contradiction, since the point $1/(n+1)$ could not belong to either $A$ or $B$ (every one of its basic open neighbourhoods intersects both $A$ and $B$). So all of the intervals $(1/(n+1),1/n)$ belong to, say, $A$, and so do all of the points $1/n$. But then $0$ belongs to $A$ as well, since every open neighborhood of $0$ intersects $A$.
This means that the whole interval $[0,1)$ is in $A$, so we can restrict our attention to $\mathbb{R}\setminus [0,1)$. But since on this set the topology is the usual one, we have a contradiction. |
A set of stationary points of the function consists of isolated points only. | Just a short hint:
Examine the second derivative. It should be nonzero at a critical point, hence your desired statement follows. |
Determine whether the vectors span $\mathbb{R}^3$ | You just need to show
$$v_4 = \alpha_1 \cdot v_1 + \alpha_2 \cdot v_2 + \alpha_3 \cdot v_3$$ |
Prove the general case of Common tangents | $$p^2 - q^2 \;=\; \left(\; d^2 - (b-a)^2 \;\right)^2 - \left(\;d^2 - (b+a)^2\;\right)^2 \;=\; 4 a b \;=\; 2a \cdot 2b$$
For tangent circles, one can take $q=0$ in the above to get the geometric mean property. There's also this simple construction showing that the "half-tangent" is the geometric mean of the radii: |
can a geometric sequence have ratio of 1 | A geometric sequence $u_n$ is a sequence defined recursively (given $u_1$) by
$$u_{n+1}=ru_n,\quad r\in \Bbb R $$
where the ratio $r$ is a given number. The sum formula is
$$\sum_{k=1}^nu_k=u_1\frac{1-r^n}{1-r},\;\text{if}\, r\ne1$$
and
$$\sum_{k=1}^nu_k=nu_1,\;\text{if}\, r=1$$ |
Equivalent irreducible linear rep iff determinant is same | It's not true that irreducible representations which have the same determinant are equivalent. To see an example where this fails in a maximal possible way, consider a perfect finite group $G$ (i.e. $[G,G]=G$). For instance, any non-abelian simple group is perfect. Then the abelianization of $G$ is trivial, so all 1-dimensional representations of $G$ are trivial. But for a representation $\pi$ of $G$, $\det \pi$ is a one-dimensional representation by multiplicativity of determinants. Thus $\det \pi \equiv 1$ for any representation $\pi$. In particular, all irreducible representations of $G$ have the same determinant. |
How to find an integrating factor for this ODE? | $$ \left( x + (1-y^{1/2}) \tan{(x-y)}\right)dx - \left( x-y^{1/2} \tan{(x-y)} - \frac{1}{2} y^{-1/2} \right) dy =0$$
In order to simplify the search for an integrating factor, make the change of variable: $\quad t=x-y \quad\to\quad x=y+t \quad\to\quad dx=dy+dt$
$$ \left( y+t + (1-y^{1/2}) \tan(t)\right)(dy+dt) - \left( y+t-y^{1/2} \tan(t) - \frac{1}{2} y^{-1/2} \right) dy =0$$
After simplification:
$$ \left(\tan(t) + \frac{1}{2} y^{-1/2} \right)dy +\bigg(y+t+(1-\sqrt{y})\tan(t) \bigg)dt =0$$
$$ \left(\sin(t) + \frac{1}{2} y^{-1/2}\cos(t) \right)dy +\bigg((y+t)\cos(t)+(1-\sqrt{y})\sin(t) \bigg)dt =0$$
$$\frac{\partial}{\partial t} \left(\sin(t) + \frac{1}{2} y^{-1/2}\cos(t) \right) =\frac{\partial}{\partial y}\bigg((y+t)\cos(t)+(1-\sqrt{y})\sin(t) \bigg)=$$
$$=\cos(t)-\frac{1}{2} y^{-1/2}\sin(t) $$
Hence $ \left(\sin(t) + \frac{1}{2} y^{-1/2}\cos(t) \right)dy +\big((y+t)\cos(t)+(1-\sqrt{y})\sin(t) \big)dt $ is a total differential.
$\begin{cases}
\int \left(\sin(t) + \frac{1}{2} y^{-1/2}\cos(t) \right)dy =y\sin(t)+\sqrt{y}\cos(t)+f(t) \\
\int \big((y+t)\cos(t)+(1-\sqrt{y})\sin(t) \big)dt= y\sin(t)+\sqrt{y}\cos(t)+t\sin(t)+g(y)
\end{cases}$
\end{cases}$
which implies $\quad g(y)=c=$ constant$\quad$ and $\quad f(t)=t\sin(t)+c$.
The solution is expressed in the form of an implicit equation:
$$y\sin(t)+\sqrt{y}\cos(t)+t\sin(t)=C$$
$$y\sin(x-y)+\sqrt{y}\cos(x-y)+(x-y)\sin(x-y)=C$$
$$\sqrt{y}\cos(x-y)+x\sin(x-y)=C,$$ where the sine terms were combined using $y+(x-y)=x$.
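As a sanity check (a sketch using sympy, not part of the original derivation), one can verify that this potential function differentiates back to the original coefficients multiplied by the integrating factor $\cos(x-y)$:
```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
F = sp.sqrt(y)*sp.cos(x - y) + x*sp.sin(x - y)  # candidate potential

# dx and dy coefficients of the original ODE, times the factor cos(x-y)
M = (x + (1 - sp.sqrt(y))*sp.tan(x - y)) * sp.cos(x - y)
N = -(x - sp.sqrt(y)*sp.tan(x - y) - 1/(2*sp.sqrt(y))) * sp.cos(x - y)

print(sp.simplify(sp.diff(F, x) - M))  # 0
print(sp.simplify(sp.diff(F, y) - N))  # 0
```
Both differences simplify to $0$, confirming the exactness. |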
Show that there is a surjective mapping $f:\mathbb N\to A$ such that $f(m)\leq f(n)$ if $m\leq n$ | It looks like you have the right idea! Formally, you should probably prove that $f$ is increasing (unless previous results justify it for you), rather than just saying it's easy to do. Also, you should probably justify that $p_0$ is defined and prove that $s(p_0)=p$ (unless previous results justify them for you), each of which is straightforward. |
Fourier transform of a probability measure, and fourier transform of density | Maybe you are more familiar with the concept of Radon-Nikodym derivative. Basically the "density" is just the RN-derivative in the context of probability (see the "Application" section on the linked wiki page).
The claim is stating that if you can write $d\eta(x)$ as $f(x) dx$, where $dx$ is the Lebesgue measure, then
$$ \hat \eta (t) = \int e^{itx} d\eta(x) = \int e^{itx} f(x) dx = \hat f(t) $$ |
finding out the right intercepts of slope | The following figure describes the condition given in the question.
When you consider option a), the figure looks like this:
Notice here that the x-intercept of line m is positive (if you extend the line, it will cut the x-axis at a positive value).
Now, consider option b).
As you might have noticed, the slope of line m is still between 0 and 1. Rotating the line m gives you something like this:
Since your main problem is with option $E)$, I'm going to consider only that.
Now, in the above figure (where line m is rotated), you may notice that the y-intercept is positive and the x-intercept is negative. So, basically the condition goes like this: $$\text{When the y-intercept is positive, the x-intercept is negative,} \\
\text{and when the y-intercept is negative, the x-intercept is positive.} $$
So the product of the x-intercept and the y-intercept will never be positive (it is negative in general, though it can be zero in the degenerate cases). |
Ideal generated by initial forms in the associated graded ring. | An example is given in Bruns and Herzog, Exercise 4.6.12(b). Let $R=K[[X,Y,Z]]$ and $I=(X^2,XY+Z^3)$. Then $XZ^3\in I^*$, but $XZ^3\notin(X^2,XY)$.
The associated graded ring $\mathrm{gr}_{\mathfrak m}(R)$ is isomorphic to $K[X,Y,Z]$, and therefore it is an integral domain. |
Proofs that: $\text{Sp}(2n,\mathbb{C})$ is Lie Group and $\text{sp}(2n,\mathbb{C})$ is Lie Algebra | On the Lie algebra part. By definition, $sp(2n,\mathbb C)$ is the real vector space of complex matrices
$$sp(2n,\mathbb C)=\{ A\in Mat_{2n}(\mathbb C): \exp(tA)\in Sp(2n,\mathbb C), \forall t\in \mathbb R \}.$$
$\exp(tA)$ denotes the exponential of the matrix $tA$. By definition of
$Sp(2n,\mathbb C), $ we have that
$$J=\exp(tA)^{t}J\exp(tA)=\exp(tA^{t})J\exp(tA), (*)$$
where the second equality can be proved using the definition of the exponential of a matrix.
Using ($\exp(0)=1$)
$$\frac{d\exp(tA)}{dt}|_{t=0}:=\lim_{t\rightarrow 0}\frac{\exp(tA)-1}{t}=
\lim_{t\rightarrow 0}\frac{(1+tA+\frac{1}{2!}t^2A^2+\dots)-1}{t}=A$$
we arrive at
$$0=(\text{using (*)})=\frac{d}{dt}(J-\exp(tA)^{t}J\exp(tA))\big|_{t=0}=0-A^{t}J-JA,$$
i.e. $A^{t}J+JA=0$, as $J$ is independent of $t$. |
Parametric equations of a line | The equation of the line is $x=1+at$, $y=1+bt$, $z=ct$. The vector $\vec{d}=(a,b,c)$ is parallel to the line, so it must be orthogonal to $\vec{n}=(2,3,1)$. So $2a+3b+c=0$. A parallel vector to the line $\frac{x-1}{-2}= \frac{y}{3}=-z-2$ is $(-2,3,-1)$, so $(a,b,c)\cdot (-2,3,-1)=0$. Hence $-2a+3b-c=0$. Combining this with $2a+3b+c=0$ and adding, we get $6b=0$, so $b=0$. We also get $c=-2a$. We can let $a=1$ and $c=-2$, giving $x=1+t$, $y=1$, $z=-2t$. I hope I made no mistake!
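A quick dot-product check of the direction vector (a small sketch, not in the original answer):
```python
d = (1, 0, -2)   # direction of the proposed line
n = (2, 3, 1)    # must be orthogonal to this normal
v = (-2, 3, -1)  # and to this direction

dot = lambda u, w: sum(a*b for a, b in zip(u, w))
print(dot(d, n), dot(d, v))  # 0 0
```
Both dot products vanish, as required. |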
How to find $ \sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)^2}$? | The value of the series is known as Catalan's constant. Its value is approximately $0.915\,965\,594$. At the moment there is no known method to evaluate the series in closed form. |
Using Dirichlet's hyperbola method and Dirichlet's formula | Following the notation from the
Planetmath Article
we note that
$$\frac{\tau(n)}{n} = 1/n * 1/n =
\sum_{d|n} \frac{1}{d} \left(\frac{n}{d}\right)^{-1}.$$
Substituting this into the formula for the method we obtain
$$\sum_{n\le x} \frac{\tau(n)}{n} =
\sum_{a\le\sqrt{x}} \sum_{b\le x/a}
\frac{1}{a} \frac{1}{b}
+ \sum_{b\le\sqrt{x}} \sum_{a\le x/b}
\frac{1}{a} \frac{1}{b}
- \sum_{a\le\sqrt{x}} \sum_{b\le\sqrt{x}}
\frac{1}{a} \frac{1}{b}
\\ =
\sum_{a\le\sqrt{x}} \frac{1}{a} \sum_{b\le x/a}
\frac{1}{b}
+ \sum_{b\le\sqrt{x}} \frac{1}{b} \sum_{a\le x/b}
\frac{1}{a}
- \sum_{a\le\sqrt{x}} \frac{1}{a} \sum_{b\le\sqrt{x}}
\frac{1}{b}
\\ =
\sum_{a\le\sqrt{x}} \frac{1}{a} (\log x - \log a + \gamma)
+ \sum_{b\le\sqrt{x}} \frac{1}{b} (\log x - \log b + \gamma)
- (\log\sqrt{x} + \gamma)^2
\\ = 2\log x (\log\sqrt{x} + \gamma)
+ 2\gamma (\log\sqrt{x} + \gamma)
- 2 \sum_{q\le\sqrt{x}} \frac{\log q}{q}
- (\log\sqrt{x} + \gamma)^2
\\ = 2\log x (\log\sqrt{x} + \gamma)
+ \gamma^2
- 2 \sum_{q\le\sqrt{x}} \frac{\log q}{q}
- \frac{1}{4} \log^2 x
\\ = \log^2 x + 2\gamma\log x
+ \gamma^2
- 2 \sum_{q\le\sqrt{x}} \frac{\log q}{q}
- \frac{1}{4} \log^2 x.$$
To evaluate the remaining sum term we could use the fact that
$$\left(\frac{1}{2} \log^2 x\right)' = \frac{\log x}{x}$$
but we need the constant term which is given in terms of the
Stieltjes constants
as $$\sum_{q=1}^n \frac{\log q}{q} \sim
\frac{1}{2} \log^2 n + \gamma_1.$$
This finally yields the formula
$$\log^2 x + 2\gamma\log x
+ \gamma^2
- 2 \left(\frac{1}{2} \log^2\sqrt{x} + \gamma_1\right)
- \frac{1}{4} \log^2 x
\\ = \log^2 x + 2\gamma\log x
+ \gamma^2
- 2 \left(\frac{1}{8} \log^2 x + \gamma_1\right)
- \frac{1}{4} \log^2 x
\\ = \log^2 x + 2\gamma\log x
+ \gamma^2
- \frac{1}{4} \log^2 x - 2\gamma_1
- \frac{1}{4} \log^2 x
\\ = \frac{1}{2} \log^2 x + 2\gamma\log x
+ \gamma^2 - 2\gamma_1.$$
The equalities hold up to implicit lower-order terms.
Addendum. The bound on the error terms follows from the harmonic number asymptotics $H_n = \log n + \gamma + \frac{1}{2n} + \cdots.$ We get from the first term $\sum_{a\le \sqrt{x}} \frac{1}{a} \frac{1}{2x/a} = \frac{1}{2x} \sum_{a\le \sqrt{x}} 1,$ which is $O(1/\sqrt{x}).$ The second term is the same. The third term contributes $\log\sqrt{x} \times 1/\sqrt{x},$ which is $O(\log x/\sqrt{x}).$ We see that the third lower-order term dominates the first two, which gives exactly the formula proposed by the OP.
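A numerical sketch of the final asymptotic (assuming Python with mpmath for $\gamma$ and the Stieltjes constant $\gamma_1$; the cutoff $N$ is an arbitrary choice):
```python
from mpmath import mp, euler, stieltjes, log, mpf

mp.dps = 20
N = 10**4

# sieve tau(n) for n <= N, then sum tau(n)/n directly
tau = [0] * (N + 1)
for d in range(1, N + 1):
    for m in range(d, N + 1, d):
        tau[m] += 1
lhs = sum(mpf(tau[n]) / n for n in range(1, N + 1))

g, g1 = euler, stieltjes(1)
rhs = log(N)**2 / 2 + 2*g*log(N) + g**2 - 2*g1
print(lhs, rhs)  # agree up to the O(log N / sqrt(N)) error term
```
The two printed values agree to within the stated error term. |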
Is the cardinality of $\mathcal{C}[0,1]$ the same as the cardinality of $\mathbb{R}$? | Each continuous function on $[0,1]$ is determined by its values on the countable
set $\Bbb Q\cap[0,1]$. Therefore
$$|C[0,1]|\le|\Bbb R|^{\aleph_0}=(2^{\aleph_0})^{\aleph_0}
=2^{\aleph_0\times\aleph_0}=2^{\aleph_0}=|\Bbb R|.$$
Conversely, the constant functions already give $|\Bbb R|\le|C[0,1]|$, so the two cardinalities are equal. |
Prove all values are zero | I don't know if this is acceptable to you, as it uses polynomials. It is the most natural way to me. Given any polynomial $p$ with $p(0)=0$, i.e, $p(z)=\sum_{k=1}^ra_kz^k$, we have
$$\tag1
\sum_{j=1}^np(\lambda_j)=\sum_{j=1}^n\sum_{k=1}^ra_k\lambda_j^k=\sum_{k=1}^ra_k\left(\sum_{j=1}^n\lambda_j^k\right)=0.
$$
In particular, if $\lambda_j\ne0$ for some $j$ we can choose $p$ with $p(0)=0$ and $p(\lambda_j)=1$ for all $j$ such that $\lambda_j\ne0$. But then the left-hand-side in $(1)$ would be positive, giving a contradiction. So $\lambda_1=\cdots=\lambda_n=0$.
The above also shows that it is enough to have the equalities up to $m=n$, or even less if there are repetitions in the list. |
Prove that $\frac{\pi}{\phi^2}<\frac65 $ | If we express $\pi$ and $\phi$ as simple continued fractions, it is known that
$$\pi = [3;7,15,1,292,1,1,1,2,\ldots]
\quad\text{ and }\quad
\phi = [1;1,1,1,\ldots].$$
The first few convergents of $\pi$ and $\phi$ are
$$3, \frac{22}{7}, \frac{333}{106}, \frac{355}{113}, \ldots
\quad\text{ and }\quad
1, \frac{2}{1}, \frac{3}{2}, \frac{5}{3} \ldots, \frac{F_k}{F_{k-1}} \ldots$$
where $F_k$ are the Fibonacci numbers. It is also known that these convergents sandwich the values of $\pi$ and $\phi$ in alternating fashion.
$$3 < \frac{333}{106} < \pi < \frac{355}{113} < \frac{22}{7}$$
$$1 < \frac{3}{2} < \cdots < \frac{F_{2k}}{F_{2k-1}} < \phi < \frac{F_{2k+1}}{F_{2k}} < \cdots < \frac{5}{3} < \frac{2}{1}$$
In particular, $\pi < \frac{355}{113}$ and $
\frac{377}{233} = \frac{F_{14}}{F_{13}} = \frac{\phi^{14} - \phi^{-14}}{\phi^{13}+\phi^{-13}} < \phi
$, this implies
$$\frac{5\pi}{6\phi^2} < \frac{5\left(\frac{355}{113}\right)}{6\left(1+\left(\frac{377}{233}\right)\right)} = \frac{82715}{82716} < 1$$ |
Bit strings of length $n$ in which any two consecutive $1$'s are separated by an even number of $0$’s | Here's another approach. Use the graph as in my previous answer, but don't bother with the starting state $S$, we can start at $A$ (we start with no zeros and then can have a zero or a one so it's OK). The graph looks like this:
Let's define $a_n, b_n, c_n, d_n, e_n$ to be the number of walks on the graph that start from $A$ and after $n$ steps end up in $A, B, C, D, E$. What we want to calculate is the total number of approved strings:
$$x_n = a_n+b_n+c_n+d_n.$$
We notice these facts:
$a_n = 1$
$b_n = b_{n-2}+b_{n-3}$, and $b_0=0, b_1=b_2 = 1$
$c_n = c_{n-1}+c_{n-2}-c_{n-4}$, and $c_0=c_1=0, c_2=c_3=1$
$d_n = c_{n-1}$, and $d_0=0$
The first is obvious, since only the string $00\dots0$ can end up in $A$.
Also, the last since we can only arrive to $D$ from $C$.
Let's prove the other two inductively on $n$. First notice a helpful equation from the fact that we can arrive to $C$ from $B$ or from $D$:
$\pmb{c_{m}} = b_{m-1} + d_{m-1} = \pmb{b_{m}+b_{m-1}-1}$, since $b_{m} = d_{m-1}+1$ (remember $a_m = 1$)
Let's now do the induction step for $b_{n+1}$:
$$b_{n+1} = a_{n}+ d_{n} = 1 + c_{n-1} = b_{n-1}+b_{n-2}$$
Then for $c_{n+1}$. I found it easier to start with what we want, that is:
$$c_n+c_{n-1}-c_{n-3} \\
= b_n+b_{n-1}-1 + b_{n-1}+b_{n-2}-1 - (b_{n-3}+b_{n-4}-1)\\
= b_n+b_{n-1}-1 + b_{n-1}+b_{n-2}-1 - b_{n-1} +1 \\
= b_n-1+ b_{n-1}+b_{n-2}\\
= b_n-1+ b_{n+1}\\
= c_{n+1}.
$$
The base cases can be checked easily (for example with the matrix approach), here are how the sequences begin
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
0, 1, 1, 1, 2, 2, 3, 4, 5, 7, 9, 12, 16, 21, 28...
0, 0, 1, 1, 2, 3, 4, 6, 8, 11, 15, 20, 27, 36, 48...
0, 0, 0, 1, 1, 2, 3, 4, 6, 8, 11, 15, 20, 27, 36...
Then finally we also find a recursion for the sum of these; it is actually the same recurrence as for $c$ and $d$:
$$x_n = x_{n-1}+x_{n-2}-x_{n-4} \text{ and } x_0=1,\; x_1=2,\; x_2=3,\; x_3=4$$
Let's prove this with induction; here too it is easier to start with the right-hand side. The $c$ and $d$ terms gather nicely, $a$ contributes just the $1$ that survives, and noticing that $b_{n}-b_{n-3}=b_{n-2}$ and then that $b_{n-2}+b_{n-3}=b_{n}$ proves the result.
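A brute-force check of this recurrence (a sketch; it assumes the reading that adjacent $1$'s are forbidden, i.e. a separating gap of zero $0$'s does not count, which is what reproduces the counts $1,2,3,4,6,8,\ldots$ above):
```python
from itertools import product

def ok(s):
    ones = [i for i, c in enumerate(s) if c == '1']
    # every gap of 0's between consecutive 1's must be even and nonzero
    return all(g > 0 and g % 2 == 0
               for g in (j - i - 1 for i, j in zip(ones, ones[1:])))

def x_rec(n):
    xs = [1, 2, 3, 4]  # x_0 .. x_3
    while len(xs) <= n:
        xs.append(xs[-1] + xs[-2] - xs[-4])
    return xs[n]

for n in range(1, 12):
    brute = sum(ok(''.join(s)) for s in product('01', repeat=n))
    print(n, brute, x_rec(n), brute == x_rec(n))
```
The two counts agree for every length tested. |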
Writing conditional logic using mathematics forumla | It's possible if you allow the absolute value 'operation'. This works except for the case when $x=y$:
$$f(x,y)=y-\dfrac12\left(1-\dfrac{y-x}{|{y-x}|}\right)(2y-x)$$
I've no idea what to do if the inputs were real numbers, but because you've restricted them to integers we can hack it thus:
$$f(x,y)=y-\dfrac12\left(1-\dfrac{y-x-\frac12}{\left|{y-x-\frac12}\right|}\right)(2y-x)$$
Mathematically, it's perfect (for integer inputs). In the real world of computer arithmetic, we'd need $x$ and $y$ to be promoted to real numbers for the inner division, with the result converted back to an integer immediately afterwards $\dots$ the rest of the operations are done as integers, with the multiplication by $\dfrac12$ turned into $\div 2$. Depending on the precision of your real and integer data types, this could fail for very large positive or negative values of $y-x$.
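Here is the integer version in Python (a sketch; reading off the formula, the conditional being encoded returns $y$ when $y>x$ and $x-y$ otherwise):
```python
def f(x: int, y: int) -> float:
    # branchless sign: (y - x - 1/2) is never zero for integer x, y
    s = (y - x - 0.5) / abs(y - x - 0.5)
    return y - 0.5 * (1 - s) * (2*y - x)

# returns y when y > x, and x - y otherwise
for x, y in [(1, 5), (5, 1), (3, 3), (-2, 7)]:
    print((x, y), f(x, y))
```
|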
Fitting Shape in Circle for Shape Classification | (This is not an answer)
This is a very good, and "open", problem.
The objective function should be invariant with respect to reparametrizing the boundary curve $t\mapsto \alpha(t)$; but your proposed $e$ does not have this property. Even insisting on $t$ being arc length is not enough, because arc length is not affine invariant.
Here is a pointer to something affine invariant: For every compact set $A\subset{\mathbb R}^2$ there is a unique ellipse $E_{\rm loew}(A)\supset A$ of minimal area, called the Loewner ellipsoid of $A$. There might be numerical algorithms around to find $E_{\rm loew}(A)$ from data about $A$. Mapping $E_{\rm loew}(A)$ to the unit disk $D$ gives an affine invariant representation of $A$ which is unique up to a rotation of $D$. |
Determine the unit digit of a number | The unit digits of the powers of $3$ form a cyclic sequence: $1,3,9,7,1,3,9,7,\ldots$ In particular, the unit digit of $3^n$ is $3$ if $n$ is of the form $4k+1$. And the unit digit of any power of $6$ is $6$. Since $7\,005$ is of the form $4k+1$, the answer to your question is $8(=3\times6\pmod{10})$.
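A one-line check (a sketch, assuming the question concerns a product of $3^{7005}$ with a power of $6$):
```python
print(pow(3, 7005, 10))           # 3, since 7005 = 4*1751 + 1
print(pow(3, 7005, 10) * 6 % 10)  # 8
```
|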
For what primes $p$ is $ (x + y)^{13} \equiv x^{13} + y^{13} \pmod{p}$, $\forall x,y \in \mathbb{Z}_p$? | If it must hold for every $x,y\in\mathbb Z_p$ then in particular it must hold for $x=y=1$, and this is only true if $p\mid 2^{13}-2=8190$. So $p\in \{2,3,5,7,13\}$ and you can check these by trying all possible $x,y$.
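A minimal brute-force check (a sketch in Python, not part of the original answer):
```python
def holds(p):
    # test (x+y)^13 == x^13 + y^13 (mod p) for all x, y in Z_p
    return all(pow(x + y, 13, p) == (pow(x, 13, p) + pow(y, 13, p)) % p
               for x in range(p) for y in range(p))

print([p for p in (2, 3, 5, 7, 13) if holds(p)])  # all five candidates pass
```
|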
Within the second Fraenkel model, show that any function from $A\rightarrow A$ has the following property | Your proposition cannot possibly be true, since we can consider the function $f$ such that if $P_n=\{a,b\}$, then $f(a)=b$ and $f(b)=a$. Namely, rotate each pair. This function has no fixed points and it is not constant either.
But it is true that modulo the pairs, the functions are finitary in some sense. To see this you can either work in the setting of the model itself, or prove something better and more general.
Suppose that $A$ is a Dedekind-finite set which is the countable union of pairs, $P_n$. Let $p(a)=n$ if $a\in P_n$. If $f\colon A\to A$ is a function, then $p(f(a))=p(f(b))$ whenever $f(a)=f(b)$, up to a finite mistake.
Otherwise, there are infinitely many pairs such that $f$ "separates" the members of the pair; from each such pair we could choose the member sent to the pair with the lower index, which would give a choice function on infinitely many of the pairs. So that's impossible. |
Is it possible to evaluate $-\int_0^\infty \log(1-\cosh(x))\frac{x^2}{e^x}\,dx$? | $\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
&-\int_{0}^{\infty}\ln\pars{1 - \cosh\pars{x}}\,{x^{2} \over \expo{x}}\,\dd x =
-\int_{0}^{\infty}\bracks{\ln\pars{\cosh\pars{x} - 1} + \ic\pi}\,
x^{2}\expo{-x}\,\dd x
\\[5mm] = &\
-2\pi\ic - \int_{0}^{\infty}\ln\pars{{\expo{x} + \expo{-x} \over 2} - 1}
\,x^{2}\expo{-x}\,\dd x
\\[5mm] \stackrel{\substack{x\ =\ -\ln\pars{t}\\[0.25mm] t\ =\ \expo{-x}}\\ }{=}\,\,\,&
-2\pi\ic - \int_{1}^{0}\ln\pars{{1/t + t \over 2} - 1}\ln^{2}\pars{t}\,
t\,\,{\phantom{-}\dd t \over -t}
\\[5mm] = &\
-2\pi\ic - \int_{0}^{1}\ln\pars{\bracks{1 - t}^{2} \over 2t}\ln^{2}\pars{t}\,
\dd t
\\[5mm] = &\
-2\pi\ic - 2\
\underbrace{\int_{0}^{1}\ln^{2}\pars{t}\ln\pars{1 - t}\,\dd t}
_{\ds{-6 + {\pi^{2} \over 3} + 2\,\zeta\pars{3}}}\ +\
\ln\pars{2}\ \underbrace{\int_{0}^{1}\ln^{2}\pars{t}\,\dd t}_{\ds{2}}\ +\
\underbrace{\int_{0}^{1}\ln^{3}\pars{t}\,\dd t}_{\ds{-6}}\label{1}\tag{1}
\\[5mm] = &\
\bbx{6 + 2\ln\pars{2} - {2\pi^{2} \over 3} - 4\zeta\pars{3} - 2\pi\ic}
\approx -4.0015 + 6.2832\,\ic
\end{align}
Note that the integrals in \eqref{1} are evaluated as follows:
$$
\left\{\begin{array}{rcl}
\ds{\int_{0}^{1}\ln^{2}\pars{t}\ln\pars{1 - t}\,\dd t} & \ds{=} &
\ds{\left.\partiald[2]{}{\mu}\partiald{}{\nu}
{\Gamma\pars{\mu + 1}\Gamma\pars{\nu + 1} \over
\Gamma\pars{\mu + \nu + 2}}\right\vert_{\ \mu =\ \nu\ =\ 0}}
\\[5mm]
\ds{\int_{0}^{1}\ln^{k}\pars{t}\,\dd t} & \ds{=} &
\ds{\left.\partiald[k]{}{\mu}\int_{0}^{1}t^{\mu}\,\dd t\,
\right\vert_{\ \mu\ =\ 0} = \pars{-1}^{k}\, k!}
\end{array}\right.
$$ |
Limits of a twice differentiable function. | By the Taylor-Lagrange Formula for $b=x+1$ and $a=x$, we have
$$f(x+1)=f(x)+f^{\prime}(x)+\frac{1}{2}f^{\prime\prime}(c_x)$$
for some $c_x\in (x,x+1)$. We multiply by $x$:
$$xf(x+1)=xf(x)+xf^{\prime}(x)+\frac{1}{2}\frac{x}{c_x}c_xf^{\prime\prime}(c_x)$$
Now if $x\to +\infty$, $c_x\to +\infty$, and $c_xf^{\prime\prime}(c_x)\to 0$. We have $\displaystyle \frac{x}{x+1}\leq \frac{x}{c_x}\leq 1$, hence $\displaystyle \frac{x}{c_x}\to 1$. So the last term $\to 0$ as $x\to+\infty$. Since $xf(x)\to 0$ and $xf(x+1)\to 0$ as well, we get that $xf^{\prime}(x)\to 0$. |
Product of two polynomials formula proof | Here is an additional step:
$$\sum_{i=0}^n \sum_{j=0}^m a_i b_j x^{i+j} \cdot 1 = \sum_{i=0}^n \sum_{j=0}^m a_i b_j x^{i+j} \sum_{k=0}^{m+n} \delta(k-(i+j)) = \sum_{k=0}^{m+n} \sum_{i=0}^n \sum_{j=0}^m a_i b_j x^{i+j} \delta(k-(i+j))$$
Because $\sum_{k=0}^{m+n} \delta(k-(i+j)) = 1$ for every $i,j$ with $0\leq i+j \leq m+n$ (all terms are zero but one).
Now for
$$ \sum_{j=0}^m b_j x^{i+j} \delta(j-(k-i))$$
only the terms with $j=k-i$ survive, all others vanish, and thus you can set $j=k-i$ and drop the summation sign.
Also there is a more intuitive way to "prove" (or rather see) this, which is maybe easier to remember:
This formula is a special case of the "Cauchy product" formula, which holds for absolutely convergent power series (in fact it is sufficient that one of the series is absolutely convergent); such a series is a sum of the form $\sum_{i=0}^{\infty} a_i x^i$. If you set $a_k=0$ for $k>n$ you get an ordinary polynomial $\sum_{i=0}^n a_i x^i$. In this case the product formula can nicely be visualized. Maybe you noticed that the product is a sum consisting of all combinations of the coefficients, i.e. $$\left(\sum_{i=0}^{\infty} a_i x^i\right)\left(\sum_{j=0}^{\infty} b_j x^j\right)=\sum_{i=0}^{\infty} \left(\sum_{j=0}^{\infty} a_i b_j x^{i+j}\right)$$
If you now look at
$$ \begin{array}{ccccc}
a_0 b_0 x^0& a_1b_0 x^1& a_2b_0 x^2 & a_3b_0 x^3 & \cdots \\
a_0b_1 x^1& a_1b_1 x^2& a_2b_1 x^3 & a_3b_1 x^4 & \cdots \\
a_0b_2 x^2& a_1b_2 x^3& a_2b_2 x^4& a_3b_2 x^5& \cdots \\
a_0b_3 x^3& a_1b_3 x^4& a_2b_3 x^5& a_3b_3 x^6& \cdots \\
\vdots &\vdots &\vdots &\vdots & \ddots
\end{array}$$
you can see that the above sum is the sum of all columns. Another way to sum these elements is to collect all the terms with the same powers of $x$; these are the diagonals in the diagram. For $x^k$ the coefficient is $a_0b_k+a_1b_{k-1}+ \dots + a_k b_0 = \sum_{i=0}^{k} a_i b_{k-i}$. Multiplying with $x^k$ and summing over $k$ gets you the formula.
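The diagonal summation is exactly discrete convolution of the coefficient lists; a small check (a sketch using numpy, with arbitrary example polynomials):
```python
import numpy as np

p = [1, 2, 3]  # 1 + 2x + 3x^2 (coefficients, lowest degree first)
q = [4, 5]     # 4 + 5x

# c_k = sum_i a_i * b_{k-i}: the diagonal sums above
print(np.convolve(p, q))  # [ 4 13 22 15], i.e. 4 + 13x + 22x^2 + 15x^3
```
|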
CDF of a non-decreasing function of random variable | If $f$ is not strictly increasing then this is usually not true, though it has the right idea, which is to write the event $f(X) \leq f(x_0)$ in the form $X \leq z(x_0)$ for some suitable $z(x_0)$.
Specifically:
$$z(x_0)=\inf \{ x : f(x) \geq f(x_0) \}.$$
Then $\{ X \leq z(x_0) \}$ and $\{ f(X) \leq f(x_0) \}$ are just the same event.
For a counterexample to your version, consider $f(x)=\begin{cases} 0 & x<1/2 \\ 1 & x \geq 1/2 \end{cases}$, $X \sim U(0,1)$ and $x_0=1/4$. Then $F_{f(X)}(f(1/4))=F_{f(X)}(0)=1/2 \neq F_X(1/4)=1/4$.
As far as I know there is no general term relating $f(x)$ to $g(y)=\inf \{ x : f(x) \geq y \}$ when $f$ is right-continuous and nondecreasing. But when $f$ is a CDF, $g$ is called the quantile function; cf. "Inverse" of nondecreasing, right-continuous function? |
Improper Integral Question: $ \int_0 ^ \infty\frac{x\log x}{(1+x^2)^2} dx $ | André's solution is very clever. Another way to solve it is exploiting the properties of odd functions. Let $x=e^u$, so that
$$\int\limits_0^\infty {\frac{{x\log x}}{{{{\left( {1 + {x^2}} \right)}^2}}}dx} = \int\limits_{ - \infty }^\infty {\frac{u}{{{{\left( {{e^{ - u}} + e^u} \right)}^2}}}du} $$
Note that $u$ is odd, and $e^u+e^{-u}$ is even, so that the integrand itself is odd. We also know any integral over $[a,\infty)$ or $(-\infty,b]$ exists because of exponential decay, and since $$\int\limits_0^\infty {\frac{u}{{{{\left( {{e^{ - u}} + e^{u}} \right)}^2}}}du}+\int\limits_{-\infty}^0 {\frac{u}{{{{\left( {{e^{ - u}} + e^{u}} \right)}^2}}}du} =0 $$ we have $$\int\limits_{ - \infty }^\infty {\frac{u}{{{{\left( {{e^{ - u}} + e^{u}} \right)}^2}}}du} =0$$ |
A map without fixed points - two wrong approaches | To compute the Lefschetz number, you just look at the induced map on homology. $H_0(S^n)=H_n(S^n)=\mathbb Z$ and all other homology groups are zero. The induced map $H_0(f)\colon \mathbb Z\to\mathbb Z$ is the identity, as is always the case for a map from a connected space to itself. Since $f$ is invertible, $H_n(f)=\pm\mathrm{id}$. To figure out which one, we need to figure out whether it preserves or reverses orientation. If you write down charts and an orientation in these charts, you can see that $f$ preserves orientation iff $n$ is even. (In fact, $f$ is the suspension of the antipodal map on $S^{n-1}$.) So $H_n(f)=(-1)^n\mathrm{id}$. Thus the Lefschetz number is
$$L(f)=(-1)^0Tr(H_0(f))+(-1)^nTr(H_n(f))=1+(-1)^n(-1)^n=2.$$ |
What would be the integral of the zeta function or $ \sum\limits_{n=1}^{\infty} \frac {1}{n^x} $? | [Rough Calculation] You may take it as an answer. I have been calculating the integration in the usual way, assuming $x$ to be real. $C$ is arbitrary constant.
$\displaystyle \int \zeta(x)dx=x-(\frac{1}{2^x\log 2}+\frac{1}{3^x\log 3}+\dots)+C$
Now, I claim that the infinite sum converges.
Since, for all $n>1$ and $x>1$ we have $\displaystyle\frac{1}{n^x}>\frac{1}{n^x\log n}$ summing over $n=2$ to $\infty$ we get,
$$\zeta(x)-1>\sum_{n=2}^\infty\frac{1}{n^x\log n}$$ [I have used a strict inequality, since I have not worked out when the two sides could be equal.]
Convergence follows by the comparison test. As an overview we can say, $$\int\zeta(x)dx>x+1-\zeta(x)$$ (up to the constant of integration).
Hope this works.
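A numerical sketch of the claimed antiderivative (truncating the series at an arbitrary $N$; assuming mpmath):
```python
from mpmath import mp, quad, zeta, log

mp.dps = 15

def antideriv(x, N=100000):
    # x - sum_{n=2}^{N} 1/(n^x log n); the tail beyond N is ~ 1/(N log N)
    return x - sum(1 / (n**x * log(n)) for n in range(2, N))

print(quad(zeta, [2, 3]))           # direct numerical integral on [2, 3]
print(antideriv(3) - antideriv(2))  # matches up to the truncation error
```
|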
Simplify a trigonometric equation | HINT: $\sin(x+y)\sin(x-y)=\sin^2x-\sin^2y$; try to find this expression in your equation and simplify. |
How to find the probability of variables with joint density? | You're correct on the first two, and three is already answered.
Solve $\int_0^1 \int_{-1}^1 f(x,y) \, \mathrm{d}x \, \mathrm{d}y = 1$ for $c$.
So, $$\int_0^1 \int_{-1}^1 c (x^2 + y) \, \mathrm{d}x \, \mathrm{d}y = 5c/3$$ implies $c = 3/5.$
Compute $\int_0^{0.6} \int_{-1}^1 f(x,y) \, \mathrm{d}x \, \mathrm{d}y$.
$$\int_0^{0.6} \int_{-1}^1 f(x,y) \, \mathrm{d}x \, \mathrm{d}y = 0.76\,c = 0.456$$
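A quick numerical confirmation (a sketch assuming scipy; note that `dblquad` integrates a function of $(y,x)$, with the outer limits applying to $x$):
```python
from scipy.integrate import dblquad

c = 3/5
f = lambda y, x: c * (x**2 + y)   # dblquad expects func(y, x)

total, _ = dblquad(f, -1, 1, 0, 1)    # x in [-1, 1], y in [0, 1]
prob, _  = dblquad(f, -1, 1, 0, 0.6)  # x in [-1, 1], y in [0, 0.6]
print(total, prob)                    # 1.0 and 0.456
```
|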
Minimization problem with latent function and splines | The function $y(x)$ looks monotonic and continuous. At the same time, there is noise due to the discretization. The best approach is the least-squares method.
$$\color{brown}{\textbf{Linear model}}$$
Let $f(x) = ax+b$; then the function to be minimized is
$$G(a,b)=\sum_{i=0}^{n-1}\left((ax_i+b)(ay_i+b)-1\right)^2$$
$$=\sum_{i=0}^{n-1}\left(a^2p_i+abs_i+b^2-1\right)^2,$$
or
$$G(a,b)= a^2(W_{20}a^2+2W_{11}ab+W_{02}b^2)+2a(b^2-1)(W_{10}a+W_{01}b)+W_{00}(b^2-1)^2,$$
where
$$
p_i=x_iy_i,\quad s_i = x_i +y_i,\\[4pt]
\begin{pmatrix}
W_{00} \\ W_{01} \\ W_{10} \\ W_{02} \\ W_{11} \\ W_{20}
\end{pmatrix}=\sum_{i=0}^{n-1}
\begin{pmatrix}
1 \\ s_i \\ p_i \\ s_i^2 \\ p_is_i \\ p_i^2
\end{pmatrix}=
\begin{pmatrix}
91\\
189.336\\
78.15834\\
400.527798\\
161.52697\\
67.315936\\
\end{pmatrix}
$$
Minimizing leads to the solution with the positive $a:$
$$a_0\approx1.069214,\quad b_0\approx 0.006625$$
(see also Wolfram Alpha).
The plot with the values of $\mathbf{\color{lightgreen}{x_i}},$ $\mathbf{\color{blue}{y_i}}$ and $\mathbf{\color{red}{f(x_i)f(y_i)}}$ is shown below. The dotted plots are $f(x_i)$ and $f(y_i).$
$$\color{brown}{\textbf{Quadratic model}}$$
Let $f(x) = ax^2+bx+c$; then the function to be minimized is
$$G(a,b,c)=\sum_{i=0}^{n-1}\left((ax_i^2+bx_i+c)(ay_i^2+by_i+c)-1\right)^2$$
$$=\sum_{i=0}^{n-1} \left((a^2p_i+abs_i+b^2)p_i+ac(s_i^2-2p_i)+bcs_i+c^2-1\right)^2$$
$$=\sum_{i=0}^{n-1} \left((a^2p_i+abs_i+b^2-2ac)p_i+(as_i+b)cs_i+c^2-1\right)^2$$
$$= a^4W_{40}+2a^3bW_{31}+a^2b^2W_{22}+2(b^2-2ac)(a^2W_{30}+abW_{21})+(b^2-2ac)^2W_{20}$$
$$+2a^2(acW_{22}+bcW_{21}+(c^2-1)W_{20}))$$
$$+2ab(acW_{13}+bcW_{12}+(c^2-1)W_{11}))$$
$$+2(b^2-2ac)(acW_{12}+bcW_{11}+(c^2-1)W_{10}))$$
$$+a^2c^2W_{04}+2abc^2W_{03}+b^2c^2W_{02}+2(c^2-1)(acW_{02}+bcW_{01})+(c^2-1)W_{00},$$
where
$$
\begin{pmatrix}
W_{00} \\ W_{01} \\ W_{02} \\ W_{03} \\ W_{04} \\ W_{10} \\ W_{11} \\ W_{12} \\ W_{13} \\ W_{20} \\ W_{21} \\ W_{22} \\ W_{30} \\ W_{31} \\ W_{40} \\
\end{pmatrix}=\sum_{i=0}^{n-1}
\begin{pmatrix}
1 \\ s_i \\ s_i^2 \\ s_i^3 \\ s_i^4 \\ p_i \\ s_ip_i \\ s_i^2p_i \\ s_i^3p_i \\ p_i^2 \\ s_ip_i^2 \\ s_i^2p_i^2 \\ p_i^3 \\ s_ip_i^3 \\ p_i^4 \\
\end{pmatrix}=
\begin{pmatrix}
91\\
189.336\\
400.527798\\
863.931994\\
1905.452187\\
78.15834\\
161.52697\\
338.997886\\
724.521647\\
67.315936\\
138.255465\\
288.018453\\
58.12748\\
118.699206\\
50.313345\\
\end{pmatrix}
$$
Minimizing via MathCAD with the initial point
$$a_1=0,\quad b_1=a_0\approx1.069214,\quad c_1=b_0\approx 0.006625$$
leads to the solution
$$a_1\approx 0.314097,\quad b_1\approx 0.75514,\quad c_1\approx 0.006678.$$
The plot with the values of $\mathbf{\color{lightgreen}{x_i}},$ $\mathbf{\color{blue}{y_i}}$ and $\mathbf{\color{red}{f(x_i)f(y_i)}}$ is shown below. The dotted plots are $f(x_i)$ and $f(y_i).$
The values of the quadratic residuals are $g_0=0.228178$ for the linear model and $g_1 = 0.006671$ for the quadratic one.
$$\color{brown}{\textbf{Exponential models}}$$
Let $f(x) = e^{h(x)}$; then the condition $f(x)f(y)=1$ transforms to
$$h(x)+h(y)=0.$$
Let $h(x)=ax^3+bx^2+cx+d$; then one can minimize the function
$$G(a,b,c,d)=\sum_{i=0}^{n-1}\left(a(x_i^3+y_i^3)+b(x_i^2+y_i^2)+c(x_i+y_i)+2d\right)^2$$
$$=\sum_{i=0}^{n-1}\left(as_i^3+bs_i^2+cs_i+2d-(3as_i+2b)p_i\right)^2,$$
or
$$G(a,b,c,d)= 9a^2W_{22}+12abW_{21}+4b^2W_{20}-2\big(3a^2W_{14}+5abW_{13}+(2b^2+3ac)W_{12}+(2bc+6ad)W_{11}$$
$$+4bdW_{10}\big)+a^2W_{06}+2abW_{05}+(2ac+b^2)W_{04}+(4ad+2bc)W_{03}+(4bd+c^2)W_{02}+4cdW_{01}+4d^2W_{00},$$
where
$$
p_i=x_iy_i,\quad s_i = x_i +y_i,\\[4pt]
\begin{pmatrix}
W_{00} \\ W_{01} \\ W_{02} \\ W_{03} \\ W_{04} \\ W_{05} \\ W_{06} \\ W_{10} \\ W_{11} \\ W_{12} \\ W_{13} \\ W_{14} \\ W_{20} \\ W_{21} \\ W_{22} \\
\end{pmatrix}=\sum_{i=0}^{n-1}
\begin{pmatrix}
1 \\ s_i \\ s_i^2 \\ s_i^3 \\ s_i^4 \\ s_i^5 \\ s_i^6 \\ p_i \\ p_is_i \\ p_is_i^2 \\ p_is_i^3 \\ p_is_i^4 \\ p_i^2 \\ p_i^2s_i \\ p_i^2s_i^2 \\
\end{pmatrix}=
\begin{pmatrix}
91\\
189.336\\
400.527798\\
863.931994\\
1905.452187\\
4308.022154\\
10003.30059\\
78.15834\\
161.52697\\
338.997886\\
724.521647\\
1581.452913\\
67.315936\\
138.255465\\
288.018453
\end{pmatrix}
$$
Minimizing via MathCAD with the initial point
$$a_2=b_2=d_2=0,\quad c_2=1$$
leads to the solution
$$a_2\approx 0.00886,\quad b_2\approx-0.075331,\quad c_2\approx0.241154,\quad d_2\approx
-0.168262.$$
The plot with the values of $\mathbf{\color{lightgreen}{x_i}},$ $\mathbf{\color{blue}{y_i}}$ and $\mathbf{\color{red}{f(x_i)f(y_i)}}$ is shown below. The dotted plots are $f(x_i)$ and $f(y_i).$
The values of the quadratic residuals are $g_1 = 0.006671$ for the quadratic model and $g_2=0.000016$ for the exponential one.
$$\color{brown}{\textbf{Conclusions.}}$$
The most effective model is the quadratic one. The linear model has low accuracy, and the exponential one does not use the negative coefficients. |
Are curl and divergence local properties? | Classically both, as well as the gradient, are local operators. Applied to any smooth function they depend only on the values in the vicinity of a point. Generalizing to weak derivatives things are a bit different, with these derivatives being defined in terms of functionals, typically as integrations of test functions over a region with specific boundary behavior.
ADDED
But as you guess, and correctly so, the classical derivative, curl, and divergence can assign different values at different points. |
Find all $x,y,z \in \mathbb{Q}.x^2+y^2+z^2+x+y+z=1$ | You want $u^2 + v^2 + w^2 = 7 t^2$ where $u/t = 2x+1$, $v/t = 2y+1$, $w/t=2z+1$ and $u,v,w,t$ are integers with no common factor and $t \ne 0$. Now consider this mod $8$. |
Matrix multiplication as a matrix | As you probably know, matrix multiplication isn't commutative in general. The kernel of $\operatorname{ad}(A)$ consists of matrices $B$ which commute with $A$ and its dimension measures in some sense how "many" matrices commute with $A$. On one extreme, if $A = I$, then $A$ commutes with all other matrices so $\dim \ker \operatorname{ad}(A) = n^2$. On the other extreme, if you take $A$ to be a diagonal matrix whose entries are all distinct, the only matrices which commute with $A$ must be diagonal so $\dim \ker \operatorname{ad}(A) = n$. It turns out that for other matrices, we will have $n \leq \dim \ker \operatorname{ad}(A) \leq n^2$ and a precise formula for the dimension of the kernel can be given over an algebraically closed field using the information from the Jordan form of $A$. For a "random" matrix over $\mathbb{C}$, $\dim \ker \operatorname{ad}(A) = n$ because $A$ is diagonalizable with distinct eigenvalues so if $\dim \ker \operatorname{ad}(A) > n$, the matrix $A$ will be "special" in some sense (it will have "more symmetries").
Since the largest possible rank of $\operatorname{ad}(A)$ corresponds to the smallest possible dimension of $\ker \operatorname{ad}(A)$, by asking you to find out the largest rank, you are equivalently asked to find the minimal dimension of the subspace of matrices which commute with $A$. For $n = 2$, given a $2 \times 2$ matrix $A$, you are guaranteed to have at least a two-dimensional subspace of matrices which commute with $A$.
Note that your generalization from $n = 2$ to arbitrary $n$ is false: the maximal possible rank of $\operatorname{ad} A$ is $n^2 - n$. For $n=2$ this happens to coincide with your calculation, since $n^2-n=2$. |
Linear Transformations of Data (Econometrics) | Typically, the term "non-singular" is only used to describe square matrices, so presumably we know that $A$ is square. If $X$ is $n \times k$ and if the product $XA$ is conformable, then $A$ must have $k$ rows.
We conclude that $A$ must be $k \times k$. |
G additive group isomorphic to $\mathbf{Z}^{n}, \mathbf{Z}^{m}$ | In short, you want to show that if $\mathbf{Z}^n\cong\mathbf{Z}^m$, as additive groups (with $\mathbf{Z}$ being the additive group of integers), then $n=m$.
One route is the one given in the statement. There are a couple of ways of showing it.
First:
Claim 1. If $G$ is abelian, and $G=H\times K$, then for any positive integer $n$ we have $nG=nH\times nK$.
Proof. If $(nh,nk)\in nH\times nK$, then $(nh,nk) = n(h,k)\in nG$. Conversely, if $(x,y)\in nG$, then $(x,y) = n(h,k) = (nh,nk)$ for some $h\in H$, $k\in K$, so $(x,y)\in nH\times nK$.
Claim 2. If $G = H\times K$, $M\triangleleft H$, and $N\triangleleft K$, then $M\times N\triangleleft G$, and $G/(M\times N) \cong (H/M)\times (K/N)$.
Sketch. Consider the map $G\to (H/M)\times (K/N)$ given by $\varphi(x,y) = (xM,yN)$. Prove that this is an onto group homomorphism. By the Isomorphism Theorems,
$$\frac{G}{\mathrm{ker}(\varphi)} \cong \frac{H}{M}\times \frac{K}{N}.$$
Prove that $\mathrm{ker}(\varphi) = M\times N$ to complete the proof.
Now show that the above claims hold for any number of finite factors (in fact, they hold for any number of factors, finite or infinite). Together they will show that $G/2G\cong (\mathbf{Z}/2\mathbf{Z})^n$ and $G/2G\cong (\mathbf{Z}/2\mathbf{Z})^m$.
So, if $G\cong\mathbf{Z}^n$ and $G\cong\mathbf{Z}^m$, then applying the ideas above, it follows on the one hand that $G/2G$ has $2^n$ elements, and on the other that it has $2^m$ elements. But $G/2G$ does not depend on how you write $G$, it just depends on $G$. So we must have $2^n=2^m$.
The following is just basic combinatorics:
If $A$ has $k$ elements and $B$ has $\ell$ elements, then $A\times B$ has $k\ell$ elements:
Proof. The number of elements in $A\times B$ is the number of ordered pairs $(a,b)$ with $a\in A$ and $b\in B$. There are $k$ possibilities for the first entry, $\ell$ possibilities for the second entry, so by the Multiplication Rule, the total number of possibilities is $k\times \ell = k\ell$.
By induction, if $A_1,\ldots,A_n$ are all finite, then
$$|A_1\times\cdots\times A_n| = |A_1|\times\cdots\times |A_n|.$$
(In fact, for possibly infinite sets, one defines the product of the cardinalities to be the cardinality of the cartesian product, $|A|\cdot|B|=|A\times B|$)
So if $A_i = \mathbb{Z}/2\mathbb{Z}$ for each $i$, then since $|\mathbb{Z}/2\mathbb{Z}|=2$... |
combinatorial question (sum of numbers) | EDIT: See after the break for what may be the interpretation that was intended.
I'll assume $M=2r$ is even (otherwise I don't know what you mean by $M/2$).
So, you're adding $r$ numbers, and you want the sum to be odd. This will happen if, and only if, the number of odd numbers in your sum is odd.
So, suppose $k$ of your numbers are odd, and the other $2r-k$ are even.
Then you could choose $1$ odd number and $r-1$ even numbers, in ${k\choose1}{2r-k\choose r-1}$ ways.
You could choose $3$ odd numbers and $r-3$ even, in ${k\choose3}{2r-k\choose r-3}$ ways.
And so on. Then add up all the different ways of choosing that you will have worked out, and there's your answer (assuming I have found the correct interpretation for your question).
It appears that the problem is, given $m$ and $M=2r$, find the number of ways of choosing $m_1,\dots,m_{2r}$ such that $\sum_1^rm_i$ is odd and $\sum_1^{2r}m_i=2m$.
First let's note the standard formula: the number of solutions in non-negative integers to $a_1+\cdots+a_s=n$ is $n+s-1\choose s-1$.
Now we want, for each odd $q$, the number of ways to get $r$ numbers to sum to $q$, and the other $r$ numbers to sum to $2m-q$. So the answer is $$\sum_{q\rm\ odd}{q+r-1\choose r-1}{2m-q+r-1\choose r-1}$$ I don't know whether there is a closed form for this sum. A numerical experiment I tried ($m=5$, $r=2$) makes me suspect there is no simple closed form. |
Is my proof by natural deduction for $(p\rightarrow (q\rightarrow r))\rightarrow (q\rightarrow (p\rightarrow r))$ correct? | eliminating 4,5 and 6 which were unnecessary, the correct proof is:
$\quad\bullet\; \left(p\rightarrow\left(q\rightarrow r\right)\right)$ --- premise
$\quad\bullet \quad\bullet\;q$ --- assumption
$\quad\bullet\quad\bullet\quad\bullet\;p$ --- assumption
$\quad\bullet\quad\bullet\quad\bullet\;q\rightarrow r$ --- by $\rightarrow$-elim from 1 and 3
$\quad\bullet\quad\bullet\quad\bullet\;r$ --- by $\rightarrow$-elim from 2 and 4
$\quad\bullet\quad\bullet\;p\rightarrow r$ --- by $\rightarrow$-Intro from 3 and 5
$\quad\bullet\;q\rightarrow \left(p\rightarrow r\right)$ --- by $\rightarrow$-Intro from 2 and 6
$\; \left(p\rightarrow\left(q\rightarrow r\right)\right)\rightarrow \left(q\rightarrow \left(p\rightarrow r\right)\right)$ --- by $\rightarrow$-Intro from 1 and 7 |
Groups in an abstract algebra | If you are not limited to bounded shapes, you can think of shapes that have some translation symmetry; this immediately makes their symmtry group infinite. There are plenty examples of such shapes: a line, a horizontal infinite strip of finite width, a discrete infinite lattice of points.
Basically it is best to think first of which infinite symmetry group you want to have, and then adapt your shape to that. One general method is to take one chose subset of the plane (a single point will do) and add all the transforms of it by the group. For the rotation group this results in such stuff as one or more concentric circles or regions bounded by such circles. These shapes in fact get additional reflection symmetry for free, but that doesn't hurt.
You can have fun with other infinite groups. Think of the group generated by a single rotation by an angle that is irrational to the full rotation by $2\pi$. |
Unique Monic Polynomial | Hint: use the fact that $p(a)=0$ if and only if there exists a polynomial $q$ such that
$$
p(x)=(x-a)q(x).
$$
That's a very useful characterization of roots of polynomials.
One direction is trivial, the other one follows for instance from the identity $x^k-a^k=(x-a)(x^{k-1}+x^{k-2}a+\ldots+xa^{k-2}+a^{k-1})$ for $k\geq 1$. |
$f$ is locally integrable and continuous almost anywhere. Does that guarantee the existence of an interval $(a, b)$ where $f$ is continuous? | A. Pongrácz and I came up with this example simultaneously: given an enumeration $q_n$ of the rationals, let $f(x)=\sum_{n:q_n\le x} 2^{-n}$. This function takes on values between $0$ and $1$, so is locally integrable. Every rational is a non-removable discontinuity, so $f$ is a counterexample to the OP's conjecture.
What is a removable discontinuity? A function $f$ has, according to wikipedia, a removable discontinuity at $x$ if the left and right hand limits of $f(t)$ as $t\to x$ are equal, but unequal to $f(x)$. A classical example is the function equal to $0$ everywhere except at $0$, where it equals $1$. The
left and right hand limits at $0$ are both equal to $0$, but the function itself equals $1$ there. A classical example of a function with a non-removable discontinuity at $0$ is the Heaviside function $H(x)$ taking the value $H(x)=0$ for all negative $x$ and the value $H(x)=1$ for all non-negative $x$. Here the left and right-hand limits differ, and no redefinition of $H(0)$ makes the function continuous there.
The KL-Pongrácz $f$ is in fact a linear combination of shifts of the Heaviside function: $f(x)=\sum 2^{-n}H(x-q_n)$, which probabilists will recognize as the cumulative distribution function of a discontinuous random variable $X$ taking on the rational value $q_n$ with probability $P[X=q_n] = 2^{-n}$, so that $f(x)=P[X\le x]$. Like all cumulative distribution functions it is nondecreasing and right-continuous. The distribution functions of discontinuous random variables have non-removable jump discontinuities at their atoms. In the case at hand, the jump discontinuity at $q_n$ is of magnitude $2^{-n}$. |
Putting negations before predicates using quantifiers | Yes, you seem to have a clear understanding of bringing the negation into what is being quantified.
For your first step, you could have started with $$\exists x\, \lnot\exists y\,\big(P(x, y) \lor Q(x, y)\big).$$
But either way, the steps you demonstrate are correct. |
How to find the inverse laplace transform of [F(s)/s]^n | A related problem. Here is a start for the case $n=2$,
Let
$$ g(t)=\int_{0}^{t} f(x)dx .$$
Now, recalling the fact
$$ \mathcal{L}(g*h) = \mathcal{L}(g) \mathcal{L}(h), $$
we have
$$ \mathcal{L}^{-1}\left\{\left(\frac{F(s)}{s}\right)^2\right\}=(g*g)(t)=\int_{0}^{t}g(\tau)g(t-\tau)d\tau .$$
You can simplify the above integral by interchanging the order of integration and see what you get. Try to generalize this answer.
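A concrete instance of the $n=2$ case (a sketch using sympy, with the arbitrary choice $f(t)=e^{-t}$, so $F(s)=1/(s+1)$):
```python
import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)

f = sp.exp(-t)                                 # F(s) = 1/(s+1)
g = sp.integrate(f.subs(t, tau), (tau, 0, t))  # g(t) = 1 - exp(-t)
conv = sp.integrate(g.subs(t, tau) * g.subs(t, t - tau), (tau, 0, t))

ilt = sp.inverse_laplace_transform(1/(s*(s + 1))**2, s, t)
ilt = ilt.subs(sp.Heaviside(t), 1)             # t > 0

print(sp.simplify(conv - ilt))                 # 0
```
|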
Proving recurrence relation by mathematical induction | Another way to write your suspected formula is $T\left(2^k\right) = 2^k(k+1)$.
When $k=0$, we have $T(1) = 1$ so that's okay.
Now, assuming the formula holds for $k$, we would have by the recurrence relationship that
\begin{align}
T\left(2^{k+1}\right)
&= 2T\left(2^k\right) + 2^{k+1}
\\&= 2\left(2^k(k+1)\right)+2^{k+1}
\\&= 2^{k+1}(k+1)+2^{k+1}
\\&= 2^{k+1}(k+2)
\end{align}
which agrees with the suspected formula.
So by induction, the suspected formula is correct.
Notice, however, that it is only defined for powers of $2$.
This is because the recurrence relation itself is not defined for other integers.
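A short check of the closed form against the recurrence (a sketch; the recurrence $T(n)=2T(n/2)+n$ with $T(1)=1$ is read off from the computation above):
```python
def T(n):
    # T(n) = 2*T(n/2) + n for powers of 2, with T(1) = 1
    return 1 if n == 1 else 2 * T(n // 2) + n

for k in range(8):
    n = 2**k
    print(n, T(n), n * (k + 1), T(n) == n * (k + 1))
```
|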
Half tangent representation | Hint. Use
$$\tan(2\theta)=\frac{2\tan\theta}{1-\tan^2\theta}$$
and let $\theta=x/2$. |
How can I tell that the sequence $a_n=\frac {\ln(n)} {n}$ converges and to what it converges? | You made a mistake
$$\left|\frac{\log(m)}{m}-\frac{\log(n)}{n}\right| \leq \left|\frac{\log(m)}{m}\right| - \left|\frac{\log(n)}{n}\right|$$
is wrong, the rhs could be smaller zero the lhs not.
The function is monotone for $n$ large enough (for $n>e$), so just prove monoticity for $n>3$. |
Graph Theory: Tree has at least 2 vertices of degree 1 | You haven't actually used anything here that requires that you are discussing a tree; that should make you very suspicious! But, you've got just about the right idea.
Let $T$ be our tree, and $P=(p_0,p_1,\ldots,p_k)$ a path of longest length. Suppose that $p_0$ has degree strictly greater than $1$; let $v$ be a neighbor of $p_0$ other than $p_1$.
Consider two cases, and show that each is a contradiction:
Case 1: The vertex $v$ is not in the path $P$. Then you can extend the path to get $P'=(v,p_0,p_1,\ldots,p_k)$; this is a path of longer length, as you pointed out, which contradicts our assumption that $P$ is maximal.
Case 2: The vertex $v$ is already in the path. Here's where you're going to need to use the fact that $T$ is a tree to find a contradiction. Hint: Trees cannot contain cycles. |
What is the difference between first and second right eigenvectors of a row stochastic matrix and their meaning? | Consider a row stochastic matrix $T$ whose eigenvalues are all distinct and satisfy $\lvert\lambda_1\rvert>\lvert\lambda_2\rvert>\lvert\lambda_j\rvert,$ $j\ge3.$ It is not necessary to distinguish left from right eigenvalues because left and right eigenvalues are the same. Define the left eigenvectors
$u_jT=\lambda_ju_j$ and right eigenvectors $Tv_j=\lambda_jv_j.$ Here the $u_j$ are row vectors and the $v_j$ are column vectors. Left and right eigenvectors corresponding to different eigenvalues are orthogonal. This follows by computing $u_jTv_k$ in two different ways:
$$\begin{aligned}(u_jT)v_k&=\lambda_ju_jv_k\\
u_j(Tv_k)&=\lambda_ku_jv_k,\end{aligned}$$
from which one concludes that $(\lambda_j-\lambda_k)u_jv_k=0.$ If $j\ne k,$ then, since the eigenvalues are assumed distinct, $\lambda_j-\lambda_k\ne0$. Hence $u_jv_k=0$. Suppose that it is possible to normalize the $u_j$ and $v_k$ so that the matrices built from the $u_j$ and the $v_k$ are inverses of each other:
$$\begin{bmatrix}u_1\\ u_2\\ \vdots\\ u_n\end{bmatrix}\begin{bmatrix}v_1 & v_2 & \cdots & v_n\end{bmatrix}=I=\begin{bmatrix}v_1 & v_2 & \cdots & v_n\end{bmatrix}\begin{bmatrix}u_1\\ u_2\\ \vdots\\ u_n\end{bmatrix}.$$
Subject to the above assumptions, I will describe how one might understand the roles played by $u_1,$ $u_2,$ $v_1,$ and $v_2$ in the situation where $T$ is the transition matrix of a Markov chain. This means that for a stochastic vector $x$ whose elements represent the probabilities of finding the Markov chain in each of its $n$ states, the stochastic vector $xT$ represents the corresponding probabilities after the system has undergone one transition, and $xT^k$ represents the probabilities after the system has undergone $k$ transitions.
If $T$ satisfies certain conditions, which we won't concern ourselves with here, then both $xT^k$ and the rows of $T^k$ will tend toward $u_1$ as $k$ gets large. For this reason, $u_1$ is known as the stable probability vector. Why does this happen? The Perron-Frobenius theorem says that a nonnegative matrix satisfying our conditions has a unique largest eigenvalue, and that the corresponding eigenvector (either left or right) has all positive entries. Moreover, this is the only eigenvector with all positive entries. Since a row stochastic matrix has a right eigenvector with all entries equal to $1$ and eigenvalue $1,$ the largest eigenvalue is $1.$ In other words, $\lambda_1=1$ and $v_1$ is a scalar multiple of the all-ones vector. It is then clear that the corresponding left eigenvector, $u_1$ is unchanged on multiplication by $T,$ that is, we get the stable-vector condition, $u_1T=u_1.$
By our assumptions above, we have
$$\begin{bmatrix}u_1\\ u_2\\ \vdots\\ u_n\end{bmatrix}T=\begin{bmatrix}\lambda_1 & 0 & \ldots & 0\\ 0 & \lambda_2 & \ldots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \ldots & \lambda_n\end{bmatrix}\begin{bmatrix}u_1\\ u_2\\ \vdots\\ u_n\end{bmatrix}$$
and
$$T\begin{bmatrix}v_1 & v_2 & \cdots & v_n\end{bmatrix}=\begin{bmatrix}v_1 & v_2 & \cdots & v_n\end{bmatrix}\begin{bmatrix}\lambda_1 & 0 & \ldots & 0\\ 0 & \lambda_2 & \ldots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \ldots & \lambda_n\end{bmatrix}.$$
Using our assumption on the normalization of $u_j$ and $v_k,$ we obtain
$$\begin{bmatrix}u_1\\ u_2\\ \vdots\\ u_n\end{bmatrix}T\,\begin{bmatrix}v_1 & v_2 & \cdots & v_n\end{bmatrix}=\begin{bmatrix}\lambda_1 & 0 & \ldots & 0\\ 0 & \lambda_2 & \ldots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \ldots & \lambda_n\end{bmatrix}$$
and
$$T=\begin{bmatrix}v_1 & v_2 & \cdots & v_n\end{bmatrix}\begin{bmatrix}\lambda_1 & 0 & \ldots & 0\\ 0 & \lambda_2 & \ldots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \ldots & \lambda_n\end{bmatrix}\begin{bmatrix}u_1\\ u_2\\ \vdots\\ u_n\end{bmatrix}.$$
Hence
$$T^k=\begin{bmatrix}v_1 & v_2 & \cdots & v_n\end{bmatrix}\begin{bmatrix}\lambda_1 & 0 & \ldots & 0\\ 0 & \lambda_2 & \ldots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \ldots & \lambda_n\end{bmatrix}^k\begin{bmatrix}u_1\\ u_2\\ \vdots\\ u_n\end{bmatrix}.$$
Therefore
$$T^k=\begin{bmatrix}v_1 & v_2 & \cdots & v_n\end{bmatrix}\begin{bmatrix}\lambda_1^k u_1\\ \lambda_2^k u_2\\ \vdots\\ \lambda_n^k u_n\end{bmatrix}=\lambda_1^kv_1u_1+\lambda_2^kv_2u_2+\ldots+\lambda_n^kv_nu_n.$$
This is a sum of rank-$1$ matrices. By our assumption on the relative magnitudes of the eigenvalues, the second term decays exponentially quickly with respect to the first, and the third and higher terms decay exponentially quickly with respect to the second. Since $\lambda_1=1$ and $v_1$ is the all-ones vector, we have
$$T^k=\begin{bmatrix}u_1\\ u_1\\ \vdots\\ u_1\end{bmatrix}+\lambda_2^k\begin{bmatrix}v_{21}u_2\\ v_{22}u_2\\ \vdots\\ v_{2n}u_2\end{bmatrix}+\text{exponentially smaller terms,}$$
where the coefficients $v_{2j}$ are the elements of $v_2.$
Therefore the significance of the first right eigenvector $v_1$ being all ones is that $T^k$ tends toward a matrix with all rows the same and equal to the first left eigenvector $u_1.$ If $\lvert\lambda_2\rvert$ is close to $1,$ then $T^k$ approaches this stable form slowly, whereas if $\lvert\lambda_2\rvert$ is close to $0,$ the stable form is reached quickly. The dominant correction to each row is a multiple of the second left eigenvector $u_2,$ with the relative sizes of the corrections to the $i^\text{th}$ and $j^\text{th}$ rows given by the relative sizes of the $i^\text{th}$ and $j^\text{th}$ elements of the second right eigenvector $v_2.$ I believe this may be what the authors of the article you link to, Atlas of Economic Complexity, page 24, are referring to when they state that $v_2$
is the eigenvector that captures the largest amount of variance in the system and is our measure of economic complexity.
However, I am unsure of the precise meaning of their statement, and have no idea about its economic interpretation; I would welcome clarification or alternative interpretations from anyone reading this. (See Addendum below for my own improvements to the explanation.)
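To make the decay concrete, here is a small numerical sketch (an arbitrary $3\times3$ row-stochastic example; the normalization $UV=I$ is obtained by inverting the matrix of right eigenvectors):
```python
import numpy as np

T = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])  # row-stochastic, distinct real eigenvalues

w, V = np.linalg.eig(T)          # columns of V are right eigenvectors
order = np.argsort(-np.abs(w))   # put lambda_1 = 1 first
w, V = w[order], V[:, order]
U = np.linalg.inv(V)             # rows of U are left eigenvectors, UV = I

u1 = U[0] / U[0].sum()           # stable probability vector
k = 25
Tk = np.linalg.matrix_power(T, k)
print(Tk[0], u1)                 # every row of T^k is close to u1

# dominant correction to row i of T^k is lambda_2^k * V[i,1] * U[1]
print(Tk[0] - u1 - w[1]**k * V[0, 1] * U[1])  # residual ~ lambda_3^k
```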
Some remarks on formulating the model in the article as a Markov chain. The model is based on a bipartite graph in which some vertices represent countries and other vertices represent products. There is an edge joining country $c$ and product $p$ if $c$ manufactures $p.$ This graph is described by an adjacency matrix $M$ whose element $M_{cp}$ equals $1$ if $c$ manufactures $p$ and $0$ otherwise. The transitions of the system are the following.
From country vertex $c,$ move to a vertex $p,$ chosen at random from the set of products manufactured by $c$.
From product vertex $p,$ move to a vertex $c,$ chosen at random from the set of countries that manufacture $p.$
Let $s_R$ denote the vector of row sums of $M$ and let $s_C$ denote the vector of column sums of $M$. The elements of $s_R$ and $s_C$ are denoted $k_{c,0}$ and $k_{p,0}$ in the article. Write $S_R=\text{diag}(s_R)$ and $S_C=\text{diag}(s_C).$ These are the diagonal matrices whose diagonal elements are the row and column sums of $M.$ The first type of transition is then described by the transition matrix $S_R^{-1}M$ and the second type by the transition matrix $S_C^{-1}M^T.$
Starting from a country vertex, the system will reach another country vertex after an even number of transitions and will reach a product vertex after an odd number of transitions. A Markov chain on country vertices only can be formulated by means of the transition matrix
$$\widetilde{M}=S_R^{-1}MS_C^{-1}M^T$$
in which a transition of the first type is followed by a transition of the second type. Similarly, a Markov chain on product vertices only could be formulated by means of the transition matrix
$$S_C^{-1}M^TS_R^{-1}M$$
in which a transition of the second type is followed by a transition of the first type. It is $\widetilde{M}$ that plays the role of $T$ in the definition of the Economic Complexity Index. Similarly, $S_C^{-1}M^TS_R^{-1}M$ plays the role of $T$ in the definition of the Product Complexity Index.
Addendum. Here's a bit of clarification on the limiting process that leads to consideration of the second right eigenvector. Again using the notation $s_R$ for the row-sum vector, the authors are looking at the limiting behavior of $k^{(C)}_{2\ell}:=\widetilde{M}^\ell s_R.$ (The notation $k^{(C)}_{2\ell}$ is mine; elements of this vector are denoted $k_{c,2\ell}$ in the article. Again, the economic interpretation is unclear to me.) As $\ell\rightarrow\infty,$ this approaches a constant vector. But it's not this constant vector that they're interested in; it's how the elements of $k^{(C)}_{2\ell}$ are distributed around their central value. (Again, don't ask me why.) They quantify this by subtracting the mean value of the elements and dividing by the standard deviation:
$$\lim_{\ell\rightarrow\infty}\frac{k_{c,2\ell}-\langle k^{(C)}_{2\ell}\rangle}{\text{stddev}(k^{(C)}_{2\ell})}.$$
Returning to the expression for the $\ell^\text{th}$ power of a transition matrix,
$$\widetilde{M}^\ell=\begin{bmatrix}u_1\\ u_1\\ \vdots\\ u_1\end{bmatrix}+\lambda_2^\ell\begin{bmatrix}v_{21}u_2\\ v_{22}u_2\\ \vdots\\ v_{2n}u_2\end{bmatrix}+\text{exponentially smaller terms,}$$
we find that
$$k_{c,2\ell}=u_1s_R+\lambda_2^\ell v_{2,c}u_2s_R+\ldots.$$
The first term in this expression is constant - it doesn't depend on $c$ - and so it cancels when the mean is subtracted. It also does not contribute to the standard deviation since it amounts to a constant shift of all the elements of the vector. This leaves the second term as the dominant contribution (the only contribution in the $\ell\rightarrow\infty$ limit, since others are exponentially smaller). Furthermore, the quantity $\lambda_2^\ell u_2s_R$ is constant as well, and so cancels when you divide by the standard deviation. This leaves
$$\lim_{\ell\rightarrow\infty}\frac{k_{c,2\ell}-\langle k^{(C)}_{2\ell}\rangle}{\text{stddev}(k^{(C)}_{2\ell})}=\frac{v_{2,c}-\langle v_2\rangle}{\text{stddev}(v_2)}.$$ |
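To make the whole pipeline concrete, here is a minimal NumPy sketch of my own (the $3\times 4$ matrix $M$ is made up purely for illustration; real applications use trade data) that builds $\widetilde{M}$ and extracts the standardized second right eigenvector, i.e. the Economic Complexity Index:

```python
import numpy as np

# Toy binary country-by-product matrix (rows: countries, cols: products).
M = np.array([[1, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 1, 0]], dtype=float)

S_R_inv = np.diag(1.0 / M.sum(axis=1))   # inverse of diag(row sums)
S_C_inv = np.diag(1.0 / M.sum(axis=0))   # inverse of diag(column sums)
M_tilde = S_R_inv @ M @ S_C_inv @ M.T    # country-to-country transition matrix

vals, vecs = np.linalg.eig(M_tilde)
order = np.argsort(-vals.real)           # vals[order[0]] is 1, the Perron eigenvalue
v2 = vecs[:, order[1]].real              # second right eigenvector

ECI = (v2 - v2.mean()) / v2.std()        # standardized; the overall sign is a free choice
print(ECI)
```

Note that $\widetilde{M}$ is a product of row-stochastic matrices and hence row stochastic itself, so its leading eigenvalue is $1$ with constant right eigenvector, exactly as used above.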
$\mathbb{Z}\ast\mathbb{Z}\ast\mathbb{Z}$ is an index two subgroup of $\mathbb{Z}\ast\mathbb{Z}$ | Directly:
Define $\color{red}{\Bbb Z}*\color{green}{\Bbb Z}*\color{blue}{\Bbb Z}\to \color{red}{\Bbb Z}*\color{green}{\Bbb Z} $ by $\color{red}1\mapsto \color{red}2$, $\color{green}1\mapsto \color{green}2$, $\color{blue}1\mapsto \color{red}1\cdot\color{green}1$.
If you use $a$, $b$, and $c$ as the standard generators of $\mathbb{Z\ast Z\ast Z}$, and $x$ and $y$ for $\mathbb{Z\ast Z}$, then this map is given by
\begin{align}
a &\mapsto x^2 \\
b &\mapsto y^2 \\
c &\mapsto xy
\end{align} |
given that $f(1)=2, f(2)=8$ and $f(a+b)-f(a)=kab-2b^2$, find $f^\prime (x)$ | Along with the equation, the information that $f(1) = 2$ and $f(2) = 8$ allows you to determine the value of $k$ (think of explicit values you can set $a$ and $b$ to that allows you to exploit this).
With $k = k_0$ determined, as OscarRascal mentions, from the equation you are given you then have
$$
f(a + b) - f(a) = k_0ab - 2b^2
$$
or perhaps in more familiar notation,
$$
f(x + h) - f(x) = k_0xh - 2h^2.
$$
Divide both sides by $h$ and let $h\to0$ to then find the general formula for $f'(x)$. |
Equation with sine and cosine - coefficients | You're being quite vague in a way that large numbers of students are. They see something like
$$
(3-3b^2)\sin(bx)+3a\cos(2x)=6\cos(2x) \tag 1
$$
and think that this specifies a problem to be solved. But the words you wrote after that make it clear that whoever posed this said more than what you've told us.
A set of exercises in a textbook may say "Solve the following equations for $x$." If it had said that, that would be a very different problem from what you write about in your paragraph beginning with "The method employed". There is no reason why the coefficient of the sine term should not be $0$ if it says "Solve this equation for $x$."
But if it said that $(1)$ is an identity true of all values of $x$, and asks you to find the coefficients, then, because of the word all, we can conclude that the coefficient of the sine term is $0$.
If $x=0$ then the sine term vanishes and the cosine terms are equal to $1$, and we get $3a=6$ so $a=2$. Then if $x$ has some value that makes the sine nonzero, we get
$$
(3-3b^2)\cdot(\text{some number other than 0}) + 6\cos(2x) = 6\cos(2x).
$$
Subtracting $6\cos(2x)$ from both sides, we get
$$
(3-3b^2)\cdot(\text{some number other than 0}) = 0,
$$
and from that we can conclude that the coefficient is $0$. |
Eigenvalues of a matrix with no calculation | It follow from your remark that the matrix has rank $1$. Therefore, $0$ is an eigenvalue and there can be only another eigenvalue.
Now, simply notice that the product of your matrix by the vector $(1,2,3,4)$ is $(30,60,90,120)$. |
Meaning of Hasse-Arf theorem | The question on the "interest" of a notion or a result can present many aspects. In the specific case here, fix a base local field $K$ in the sense of Serre's book. Then:
1) For a Galois extension $L/K$ with group $G$ (not necessarily finite), two filtrations can be defined on $G$, the lower filtration $G_u$ (for real $u\ge -1$), and the upper filtration $G^v=G_{\psi(v)}$. As stressed by Serre somewhere in his chap. IV, the lower numbering is adapted to subgroups, in the sense that $H_u=H\cap G_u$, whereas the upper numbering is adapted to quotients, $(G/H)^v=G^vH/H$. In a remark after the proof of prop. 14 of IV, 3, Serre even gives a unified numbering $G(t)$.
2) The Hasse-Arf thm. states that, if $G$ is abelian, then a jump $v$ in the upper filtration must be an integer; in other terms, if $G_u \neq G_{u+1}$, then $\phi (u)$ is an integer. This gives nontrivial "congruential" information on the lower jumps. For instance, just try to give a direct proof in the simple example of a finite cyclic $p$-extension, and you'll see that this is far from being obvious.
3) More important are applications to local CFT, see chap. XV. If $G$ is abelian (not necessarily finite), the reciprocity homomorphism $\theta: K^* \to G$ sends the filtration ${U_K}^v$ of the unit group $U_K$ onto the filtration $G^v$. More precisely, if $G$ is abelian finite, $\theta$ induces an iso. $K^*/N(L^*) \cong G$, where $N$ is the norm of $L/K$, and if moreover $L/K$ is totally ramified, an iso $\theta_n : {U_K}^n/{U_K}^{n+1}N({U_L}^{\psi (n)}) \cong G^n/G^{n+1}$ for any integer $n$. A specific application of the Hasse-Arf thm. concerns the so called explicit reciprocity laws. Independently of CFT, using polynomials of a certain type, one gets a normic iso. $N_n : {U_L}^{\psi (n)}/{U_L}^{\psi (n+1)} \to {U_K}^n/{U_K}^{n+1}$ (V, 6), as well as an iso. $\delta_n : {U_K}^n/{U_K}^{n+1}N({U_L}^{\psi (n)})\cong G_{\psi(n)}/G_{\psi(n) +1}$ (XV, 2). By definition $G_{\psi(n)}=G^n$, and by the Hasse-Arf thm. $G_{\psi(n) +1}=G^{n+1}$. It follows that $\delta_n$ and $\theta_n$ have the same target, and it can be shown that actually $\theta_n (x)=\delta_n (x^{-1})$, hence an explicit reciprocity law.
I refrain from evoking other applications, such as the conductor of a Galois extension, which would bring us too far ./.
Probability that number is divisible by 5 | Note that if $x$ is not a multiple of $5$ then $x^4-1$ is divisible by $5$ (why?).
This implies that if $x$ and $y$ are not a multiple of $5$ then $x^4-y^4=(x^4-1)-(y^4-1)$ is divisible by $5$.
Can you take it from here?
P.S. Finally you will find that $x^4-y^4$ is NOT divisible by $5$ iff the set $\{x,y\}$ (out of $5n(5n-1)/2$ equally likely pairs) contains a multiple of $5$ ($n$ choices) and a non-multiple of $5$ ($4n$ choices). Hence the probability is
$$1-\frac{(n)\cdot (4n)}{\frac{5n(5n-1)}{2}}.$$ |
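A brute-force check of this count (my own sketch, assuming $\{x,y\}$ is a uniformly random pair of distinct numbers from $\{1,\dots,5n\}$):

```python
from itertools import combinations
from fractions import Fraction

n = 3
N = 5 * n
divisible = sum(1 for x, y in combinations(range(1, N + 1), 2)
                if (x**4 - y**4) % 5 == 0)
total = N * (N - 1) // 2

print(Fraction(divisible, total))      # 23/35
print(1 - Fraction(n * 4 * n, total))  # 23/35 again, from the formula
```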
nCr question choosing 1 - 9 from 9 | Line up the items in front of you, in order. To any of them you can say YES or NO. There are $2^9$ ways to do this. This is the same as the number of bit strings of length $9$.
But you didn't want to allow the all NO's possibility (the empty subset). Thus there are $2^9-1$ ways to choose $1$ to $9$ of the objects.
Remark: There are $\dbinom{9}{k}$ ways of choosing exactly $k$ objects. Here $\dbinom{n}{k}$ is a Binomial Coefficient, and is equal to $\dfrac{n!}{k!(n-k)!}$. This binomial coefficient is called by various other names, such as $C(n,k)$, or ${}_nC_k$, or $C^n_k$.
So an alternate, and much longer way of doing the count is to find the sum
$$\binom{9}{1}+\binom{9}{2}+\binom{9}{3}+\cdots +\binom{9}{9}.$$ |
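Both counts are easy to confirm numerically (a quick sketch):

```python
from math import comb

print(2**9 - 1)                               # 511
print(sum(comb(9, k) for k in range(1, 10)))  # 511, the long way
```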
Symmetry argument in an integral | Here's a rough sketch of what I think is going on. I'll pretend that there are minus signs in all the exponents, so that the integrals make sense; otherwise, the problem admits the trivial solution that one divergent integral is just as non-existent as another.
Starting with your original integral, imagine expanding the product of $N+1$ factors. You get $2^{N+1}$ terms, half with $+$ signs and half with $-$ signs. Concentrate for the moment on just one of those terms. It's a product of $N+1$ exponentials, which you can write as the exponential of a sum of $N+1$ terms, each of the form $(x_j\pm x_{j-1})^2$. By suitably changing variables, changing some (suitably chosen) $x_j$'s to $-x_j$, you can make all the summands look like $(x_j-x_{j-1})^2$, or, if you prefer, you can make them all look like $(x_j+x_{j-1})^2$. Do the former (resp. the latter) if the exponential you're working with had a positive (resp. negative) sign when you expanded the original product. Now I'll describe what to do with the positive terms; the analog applies to the negative ones. The integrands in all the positive terms look like $\exp(\sum_j(x_j-x_{j-1})^2)$, but the range of the variables $x_j$ is different in different terms: If you changed variables from $x_j$ to $-x_j$, then the (new) $x_j$ ranges from $0$ to $-\infty$, and furthermore there's a minus sign arising from the change of variables. You can remove those minus signs by having those $x_j$'s range from $-\infty$ to $0$. The other $x_j$'s, the ones you didn't change in the integral, still range from $0$ to $+\infty$. So you end up with an integral over one orthant; some variables range from $0$ to $\infty$ and the rest range from $-\infty$ to $0$. What needs to be checked now (and my omitting this check is one reason that this is only a sketch), is that these integrals over the various orthants fit together into just two integrals over the whole space $\mathbb R^N$, one for the positive integrands and one for the negative. Those should give you the rewritten form in the question. |
primes of the form $p = x^2 + ny^2$: primitive element for the Hilbert class field of $\mathbb{Q}(\sqrt{-n})$ is a real algebraic integer? | I was missing something obvious. Following the suggestion of @franzlemmermeyer, I found the following proposition in Chapter 2 of J. S. Milne's course notes for Algebraic Number Theory:
Let $A$ be an integral domain and $L$ be a field containing $A$. Then we have
Proposition 2.6. Let $K$ be the field of fractions of $A$ and let $L$ be a field containing $K$. If $\alpha \in L$ is algebraic over $K$, then there exists a $d \in A$ such that $d\alpha$ is integral over $A$.
Taking $A = \mathbb{Z}$, $K = \mathbb{Q}$, and $L \cap \mathbb{R} = \mathbb{Q}(\alpha)$ the field containing $K$ allows us to select $\alpha \in \mathscr{O}_L \cap \mathbb{R}$. |
Finding equation from a word problem | The wording of the question is ambiguous and unclear, but I'll take a stab at providing an interpretation. When Tracy and Kelly arrive at the airport, the meter in the cab shows a fare of $F$, but there is an additional $4$-dollar airport fee, and then a $10\%$ surcharge on everything. Thus the total cost of the ride is
$$1.1(F+4)$$
so if Tracy and Kelly split the total cost equally (even though Tracy got picked up first), they each pay
$$1.1(F+4)/2$$
Now I'm going to assume that $x$ refers only to an equal share of what the meter says the fare is, i.e., $x=F/2$. Under this interpretation, the answer is
$$1.1x+2.2$$
for each woman's equal share of the total cost of the ride to the airport. |
Computing generators for a finitely generated module | If $R$ is a ring then clearly is generated as a $R$ Module by $1\in R$. Considering $R^n$ the direct sum of n copies of $R$ we know that there are canonical injections of $i_i: R \hookrightarrow R^n$ sending $r \mapsto (0,0,..r,..)$. We denote $e_i=i_i(1)$ with this is clear that the set $\{e_i\}_{i=1}^n$ generates $R^{n}$
For your question: since you have an epimorphism, given $m \in M$ there exists some $a \in R^n$ such that $\varphi(a)=m$. But $a=\sum r_j e_j$, so $M$ is generated by $\{ \varphi(e_j) \}$.
Define a probability distribution which satisfies independence conditions | As Misch said, in each row of the table you should write $\mathbb{P}( P = y, G= x, T = z )$, where $x,y,z$ equal $0$ or $1$. Then:
All eight probabilities should sum to $1$ - they represent all possibilities.
You calculate $\mathbb{P}(P = y )$ by taking the sum over the four rows where $P = y$. This sum should equal $0.5$ for both $y=0$ and $y=1$.
You calculate $\mathbb{P}(G = x )$ by taking the sum over the four rows where $G = x$, and find $\mathbb{P}(T = z )$ in the same way.
You calculate $\mathbb{P}(G = x, T = z )$ by taking the sum over the two rows where both $G=x$ and $T=z$.
To show that $G$ and $T$ are independent, you must show that $\mathbb{P}(G = x ) \mathbb{P}(T = z ) = \mathbb{P}(G = x, T =z )$ for all four possible values of $(x,z)$. To show that $G$ and $T$ are not conditionally independent given $P$, it is enough to find one set of values for $(x,y,z)$ where $\mathbb{P}( G = x | P = y) \mathbb{P}( T = z | P = y) \neq \mathbb{P}( G = x, T = z | P = y)$.
As there are infinitely many solutions, it may help if you add your own constraints. For instance, you may set $\mathbb{P}(G = x ) = 0.5$ for $x=0,1$, and $\mathbb{P}(T = z ) = 0.5$ for $ z=0,1$. |
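For concreteness, here is one solution of my own construction, checked in Python: $P$ is fair; given $P=1$ the coins $G$ and $T$ are fair and always equal, and given $P=0$ they are fair and always opposite.

```python
from fractions import Fraction
from itertools import product

quarter = Fraction(1, 4)
joint = {key: Fraction(0) for key in product((0, 1), repeat=3)}  # keys: (g, t, p)
joint[(0, 0, 1)] = joint[(1, 1, 1)] = quarter   # P = 1: G == T
joint[(0, 1, 0)] = joint[(1, 0, 0)] = quarter   # P = 0: G != T

def prob(pred):
    return sum(v for (g, t, p), v in joint.items() if pred(g, t, p))

# G and T are (unconditionally) independent:
for a, b in product((0, 1), repeat=2):
    assert prob(lambda g, t, p: g == a and t == b) == \
           prob(lambda g, t, p: g == a) * prob(lambda g, t, p: t == b)

# ... but they are not conditionally independent given P = 1:
pP = prob(lambda g, t, p: p == 1)
lhs = prob(lambda g, t, p: g == 0 and t == 0 and p == 1) / pP
rhs = (prob(lambda g, t, p: g == 0 and p == 1) / pP) * \
      (prob(lambda g, t, p: t == 0 and p == 1) / pP)
print(lhs, rhs)   # 1/2 versus 1/4
```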
Calculate the commutator of the vector fields $A=x\partial_y-y\partial_x$ and $B=x\partial_x+y\partial_y$ | The easiest way I know to evaluate the commutator or Lie bracket $[A, B]$ of two vector fields such as
$A=x\dfrac{\partial}{\partial y}-y\dfrac{\partial}{\partial x} \tag{1}$
and
$B=x\dfrac{\partial}{\partial x}+y\dfrac{\partial}{\partial y} \tag{2}$
is to apply it to some differentiable function $f$, and then simply work out the result by successive differentiation and application of the Leibniz rule for products, etc. This method works because $[A, B]$ is determined by its action on functions; it is simple to apply and to remember. So if $f$ is any sufficiently (probably twice) differentiable function, we have
$A[f] = (x\dfrac{\partial}{\partial y}-y\dfrac{\partial}{\partial x})[f] = xf_y - yf_x, \tag{3}$
$B[f] = (x\dfrac{\partial}{\partial x}+y\dfrac{\partial}{\partial y})[f] = xf_x + yf_y, \tag{4}$
where we have used the subscript notation for partial derivatives, $f_x = \partial f / \partial x$ etc. Since (3) and (4) show that $A[f]$ and $B[f]$ are themselves functions, we can apply $B$ to (3) and $A$ to (4), obtaining
$BA[f] = B[A[f]] = (x\dfrac{\partial}{\partial x} + y\dfrac{\partial}{\partial y})[xf_y - yf_x] = x\dfrac{\partial}{\partial x}[xf_y - yf_x] + y\dfrac{\partial}{\partial y}[xf_y - yf_x]$
$=x(f_y + xf_{xy} - yf_{xx}) + y(xf_{yy} - f_x - yf_{yx}) \tag{5}$
and
$AB[f] = A[B[f]] = (x\dfrac{\partial}{\partial y}-y\dfrac{\partial}{\partial x})[xf_x + yf_y] = x\dfrac{\partial}{\partial y}[xf_x + yf_y] - y\dfrac{\partial}{\partial x}[xf_x + yf_y]$
$= x(xf_{yx} + f_y + yf_{yy}) - y(f_x + xf_{xx} + yf_{yx}); \tag{6}$
it is now a relatively simple algebraic maneuver to subtract the right-hand sides of (5) and (6), obtaining
$[A, B][f] = (AB - BA)[f] = 0, \tag{7}$
or
$[A, B] = 0, \tag{8}$
since as we have said vector fields are determined by their application to functions; thus (7) implies (8), and the commutator $[A, B]$ of $A$ and $B$ vanishes. Of course, the preceding calculation is a little bit of a grind and I for one have to credit Han de Bruijn for having the insight to realize that we must have $[A, B] = 0$ for purely geometrical reasons.
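If you would rather let a machine do the grind, SymPy reproduces (7) directly; a small sketch:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)

A = lambda g: x * sp.diff(g, y) - y * sp.diff(g, x)   # A = x d/dy - y d/dx
B = lambda g: x * sp.diff(g, x) + y * sp.diff(g, y)   # B = x d/dx + y d/dy

print(sp.simplify(A(B(f)) - B(A(f))))   # prints 0, i.e. [A, B] = 0
```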
As for the "difference" between the first and second methods of calculation mentioned in the question, a careful scrutiny of the two equations shows that these methods are in fact the same, once on realizes that the second-order derivative operators entirely wash out of the first expression, leaving only first order operators behind to form $[A, B]$. Indeed, expanding out the first two terms of the first equation yields
$x\dfrac{\partial}{\partial y}\left(x\dfrac{\partial}{\partial x}\right)-x\dfrac{\partial}{\partial x}\left(x\dfrac{\partial}{\partial y}\right) = x\dfrac{\partial}{\partial y}(x)\dfrac{\partial}{\partial x} + x^2\dfrac{\partial^2}{\partial y \partial x} - x\dfrac{\partial}{\partial x}(x)\dfrac{\partial}{\partial y} - x^2\dfrac{\partial^2}{\partial x \partial y}$
$= x\dfrac{\partial}{\partial y}(x)\dfrac{\partial}{\partial x} - x\dfrac{\partial}{\partial x}(x)\dfrac{\partial}{\partial y}, \tag{9}$
in agreement with the second. The reason is, of course, that the second order operators $\partial^2 / \partial y \partial x$ and $\partial^2 / \partial x \partial y$ agree on any sufficiently differentiable function $f$; since vector fields are defined by their action on such functions, it is clear the second method of calculation is essentially the same as the first as far as vector fields are concerned.
Vector fields do not in general commute. To see an easy example, just consider $X =\partial / \partial x$ and $Y = x \partial / \partial y$; then for any suitable function $g$ we have
$[X, Y]g = X[Y[g]] - Y[X[g]] = \dfrac{\partial}{\partial x}(xg_y) - x\dfrac{\partial}{\partial y}g_x = g_y + xg_{xy} - xg_{yx} = g_y; \tag{10}$
since (10) holds for all sufficiently smooth $g$, we see that in fact
$[X, Y] = \dfrac{\partial}{\partial y}. \tag{11}$
Coordinate vector fields, however, do commute, for example
$[\dfrac{\partial}{\partial x}, \dfrac{\partial}{\partial y}] = 0 \tag{12}$
by virtue of the fact that $g_{xy} = g_{yx}$. Furthermore, it is in fact possible to calculate the commutator of any two vector fields $V = V^x\frac{\partial}{\partial x} + V^y\frac{\partial}{\partial y}$ and $W = W^x\frac{\partial}{\partial x} + W^y\frac{\partial}{\partial y}$ where $V^x, V^y, W^x, W^y$ are functions of $x$ and $y$:
$[V, W]g = V[W[g]] - W[V[g]]$
$= (V^x\dfrac{\partial}{\partial x} + V^y\dfrac{\partial}{\partial y}) (W^x\dfrac{\partial}{\partial x} + W^y\dfrac{\partial}{\partial y})[g] - (W^x\dfrac{\partial}{\partial x} + W^y\dfrac{\partial}{\partial y})(V^x\dfrac{\partial}{\partial x} + V^y\dfrac{\partial}{\partial y})[g]$
$= (V^x\dfrac{\partial}{\partial x} + V^y\dfrac{\partial}{\partial y})(W^xg_x + W^yg_y) - (W^x\dfrac{\partial}{\partial x} + W^y\dfrac{\partial}{\partial y})(V^xg_x + V^yg_y)$
$= V^x\dfrac{\partial}{\partial x}(W^xg_x + W^yg_y) + V^y\dfrac{\partial}{\partial y}(W^xg_x + W^yg_y)$
$- W^x\dfrac{\partial}{\partial x}(V^xg_x + V^yg_y) - W^y\dfrac{\partial}{\partial y}(V^xg_x + V^yg_y)$
$=V^x(W^x_xg_x + W^xg_{xx} + W^y_xg_y + W^yg_{yx}) + V^y(W^x_yg_x + W^xg_{xy} + W^y_yg_y + W^yg_{yy})$
$-W^x(V^x_xg_x + V^xg_{xx} + V^y_xg_y + V^yg_{yx}) - W^y(V^x_yg_x + V^xg_{xy} + V^y_yg_y + V^yg_{yy}). \tag{13}$
A careful inspection of the terms occurring on the extreme right (last two lines) of (13) reveals that every one containing a second derivative of $g$, $g_{xx}$, $g_{yx}$, etc., cancels out and we are left with terms containing only the first derivatives of $g$; such cancellation, of course, depends on the fact that $g_{xy} = g_{yx}$, which is one reason we stipulate that $f, g$ be at least $C^2$ functions. If we gather the first derivative terms of (13) together and group them by independent variable, then we will obtain an expression for $[V, W]$ in terms of the basis vector fields $\partial / \partial x$ and $\partial / \partial y$; indeed we have
$[V, W]g = V[W[g]] - W[V[g]]$
$= (V^xW_x^x + V^yW_y^x - W^xV^x_x - W^yV^x_y)g_x + (V^xW^y_x + V^yW_y^y - W^xV^y_x - W^yV_y^y)g_y$
$= ((V[W^x] - W[V^x])\dfrac{\partial}{\partial x} + (V[W^y] - W[V^y])\dfrac{\partial}{\partial y})[g], \tag{14}$
in which the expressions $V[W^x]$ etc. are simply derivatives of the $x$ and $y$ component functions of $V$ and $W$ in the $V$ and $W$ directions, as $V[g]$ is that of $g$ in the direction $V$. (14) shows that
$[V, W] = (V[W^x] - W[V^x])\dfrac{\partial}{\partial x} + (V[W^y] - W[V^y])\dfrac{\partial}{\partial y} \tag{15}$
itself is in fact a first order differential operator or vector field. The calculations (13)-(14), a rather long-winded, grungy-but-you-might-as-well-see-the-whole-mess-at-least-once lot, may in fact be considerably streamlined if one adopts certain standard identities which apply to the Lie bracket or commutator operation:
$[X + Y, Z] = [X, Z] + [Y, Z] \tag{16}$
and
$[fX, Y] = f[X, Y] - Y[f]X; \tag{17}$
of these, the first is virtually self-evident and so I leave its demonstration to the reader. The second is almost as easily seen if we apply $[fX, Y]$ to some function $g$:
$[fX, Y][g] = fX[Y[g]] - Y[fX[g]]$
$= fX[Y[g]] - Y[f]X[g] - fY[X[g]] = f[X, Y][g] - Y[f]X[g]; \tag{18}$
it should further be noted that if we negate each side of (17) and then use the identity $[V, W] = -[W, V]$ we obtain
$[Y, fX] = f[Y, X] + Y[f]X; \tag{19}$
we apply (16), (17), (19) to $[V, W]$ with $V = V^x\frac{\partial}{\partial x} + V^y\frac{\partial}{\partial y}$:
$[V, W] = [ V^x\dfrac{\partial}{\partial x} + V^y\dfrac{\partial}{\partial y}, W] = [V^x\dfrac{\partial}{\partial x}, W] + [V^y\dfrac{\partial}{\partial y}, W]$
$= V^x[\dfrac{\partial}{\partial x}, W] - W[V^x]\dfrac{\partial}{\partial x} + V^y[\dfrac{\partial}{\partial y}, W] - W[V^y]\dfrac{\partial}{\partial y}, \tag{20}$
and now we repeat the process with $W = W^x\frac{\partial}{\partial x} + W^y\frac{\partial}{\partial y}$:
$[\dfrac{\partial}{\partial x}, W] = [\dfrac{\partial}{\partial x}, W^x\dfrac{\partial}{\partial x} + W^y\dfrac{\partial}{\partial y}]$
$= [\dfrac{\partial}{\partial x}, W^x\dfrac{\partial}{\partial x}] + [\dfrac{\partial}{\partial x},W^y\dfrac{\partial}{\partial y}] = W_x^x\dfrac{\partial}{\partial x} + W_x^y\dfrac{\partial}{\partial y}, \tag{21}$
and likewise
$[\dfrac{\partial}{\partial y}, W] = W_y^x\dfrac{\partial}{\partial x} + W_y^y\dfrac{\partial}{\partial y}, \tag{22}$
(21) and (22) holding since $[\frac{\partial}{\partial x}, \frac{\partial}{\partial y}] = 0$, and $[Z, Z] = 0$ for any vector field $Z$, always. Bringing together (20), (21), and (22) we see that
$[V, W]$
$= V^x(W_x^x\dfrac{\partial}{\partial x} + W_x^y\dfrac{\partial}{\partial y}) - W[V^x]\dfrac{\partial}{\partial x} + V^y(W_y^x\dfrac{\partial}{\partial x} + W_y^y\dfrac{\partial}{\partial y}) - W[V^y]\dfrac{\partial}{\partial y}; \tag{23}$
if the terms of (23) are gathered together and regrouped then it is easy to arrive at
$[V, W] = (V[W^x] - W[V^x])\dfrac{\partial}{\partial x} + (V[W^y] - W[V^y])\dfrac{\partial}{\partial y}, \tag{24}$
i.e., (15). Systematic deployment of the identities (16), (17), (19) allows us to find the expression (15), (24) for $[V, W]$ in terms of the coordinate basis $\partial / \partial x$, $\partial / \partial y$ in a somewhat more streamlined manner than the derivation (13), (14) which takes everything back to basic definitions in terms of the differential operators $\partial / \partial x$, $\partial / \partial y$.
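Formula (15)/(24) is also easy to put to work computationally; here is a small SymPy sketch of my own in which a vector field is represented by its pair of component functions:

```python
import sympy as sp

x, y = sp.symbols('x y')

def bracket(V, W):
    """Components of [V, W] for V = V^x d/dx + V^y d/dy, and likewise W."""
    Vx, Vy = V
    Wx, Wy = W
    dV = lambda F: Vx * sp.diff(F, x) + Vy * sp.diff(F, y)   # F -> V[F]
    dW = lambda F: Wx * sp.diff(F, x) + Wy * sp.diff(F, y)   # F -> W[F]
    return (sp.simplify(dV(Wx) - dW(Vx)), sp.simplify(dV(Wy) - dW(Vy)))

print(bracket((1, 0), (0, x)))   # (0, 1): recovers [d/dx, x d/dy] = d/dy
print(bracket((-y, x), (x, y)))  # (0, 0): recovers [A, B] = 0
```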
The above remarks pertain, of course, to vector fields on manifolds in the context of differential topology/geometry. When one turns to quantum mechanics, however, the situation is somewhat different. Though both the Lie theory we have discussed above and the theory of operators on Hilbert spaces, which is the framework for much of quantum mechanics, have much in common, there are significant differences. Consider things from the point of view of the nature of the operators and the spaces on which they are defined. In the Lie approach to vector fields (at least in the stream-lined version presented here), they are construed to be first-order differential operators on an appropriate function space, which here is taken to be $C^\infty(\Bbb R^2, \Bbb R)$; in this way we assure the necessary property $f_{xy} = f_{yx}$ which allows the theory to fly in the sense that then $[X, Y]$ will be a first-order operator if $X$ and $Y$ are, as has hopefully been made (perhaps painfully) evident in the above discussion. In the quantum mechanical case, however, the underlying space is a Hilbert space, which may be given concrete form by taking it to be, for example, $L^2(\Bbb R^2, \Bbb C)$. The operators are then linear maps defined either on all of $L^2(\Bbb R^2, \Bbb C)$ or on some dense subspace, as is the case with $p_x = i \hbar (\partial / \partial x)$ or $H = -(\hbar / 2m)\nabla^2$ etc. And though in either case we may define bracket operations $[A, B]$, the precise definitions differ, though there are evident similarities. So from a purely computational point of view, it is likely best to stick with the first method, which keeps track of all derivatives until the very end, rather than the second, which uses a short-cut which as far as I can tell depends on the domain being $C^\infty$. Indeed, since $L^2$ contains non-$C^2$ functions, it is not clear exactly how the commutator of two first derivative maps will be one itself, e.g. what is $[p_x, xp_y]$ going to do, exactly, to a non-$C^2$ element of $L^2$? Though I think the quantum mechanics community (and here I refer to the practitioners of the subject) has developed answers which depend on the theory of unbounded operators on Hilbert spaces. And that's as far as I can take these things in this post.
Hope this helps. Cheerio,
and as always,
Fiat Lux!!! |
Expected Value eiffel tower problem | The number of visitors being foreigners is a binomially distributed random variable with $p=0.62$ and $n=7.$ Its expected value is
$$\mu=n p=7\cdot0.62=4.34.$$
Your answer is right. |
What is weight lattice modulo coroot lattice? | In Theorem 3.3.6 the authors of said book mention that $Q^\vee$ is embedded into $\mathfrak h^*$ via the form $\langle , \rangle$. Here $\langle, \rangle$ denotes an invariant bilinear form on $\mathfrak g$ such that $\langle \alpha,\alpha\rangle = 2$ for long roots (see the beginning of section 1.4).
In this way $kQ^\vee$ is viewed as a sublattice of $P$. Actually, the authors state in the proof of Theorem 3.3.20 that $W^a = W\ltimes kQ^\vee$ acts on $P \subseteq \mathfrak h^*$; as such this means that $P/kQ^\vee$ is the set of orbits of the action of $kQ^\vee$ on $P$ (which is the same thing). |
Is $\frac{x^2+x}{x+1}$ a polynomial? | You seem to know well enough how it is with this function: It equals $x$ whenever $x$ is not $-1$, and the expression is not defined for $x=-1$.
Whether it makes sense to care about the missing point depends on what you're doing!
Sometimes it is important to consider this to be a different function from the identity. This is especially the case in school algebra classes where a point is made of teaching the students not to be too cavalier about whether the expressions they manipulate are defined at the points they need to be.
At other times the difference is not really worth caring about. Sometimes it is even explicitly declared not to matter at all -- such as if we're speaking about the field of rational functions in one variable, where $x^2+x$ divided by $x+1$ is simply $x$, no ifs or buts.
It's up to you to know enough about what you're trying to achieve to make an intelligent choice between these two approaches. |
Why can we write any integer $n$ in the form $n=2^q(2p+1)$? | ** Hint **: think about the case when $n$ is odd.
If $n$ is even, divide by $2$. |
Is the radical of the product of ideals equal to the product of the radicals? $ \sqrt{I}\cdot \sqrt{J} = \sqrt{I \cdot J} $? | Let $R$ be a noetherian domain which is not a field and $\mathfrak m$ a maximal ideal. By using Nakayama's lemma on the extended ideals in the ring $R_{\mathfrak m} $, we have $\mathfrak m\ne\mathfrak m^2$, therefore $$\sqrt{\mathfrak m}\cdot\sqrt{\mathfrak m}=\mathfrak m\cdot\mathfrak m=\mathfrak m^2\ne \sqrt{\mathfrak m^2}=\mathfrak m$$ |
Primitive Pythagorean triples | As has been pointed out in the answer by Zubin Mukerjee, the result as stated is not correct. We state and prove a correct version.
Theorem: Let $u$ and $v$ be positive integers, with $v\lt u$. Let $x=2uv$, $y=u^2-v^2$, and $z=u^2+v^2$. Then $(x,y,z)$ is a primitive Pythagorean triple if and only if $\gcd(u,v)=1$ and $u$ and $v$ are of opposite parity.
It is clear that $x$, $y$, and $z$ are positive integers. It is also easy to verify that $(2uv)^2+(u^2-v^2)^2=(u^2+v^2)^2$.
(i) We show that if $\gcd(u,v)\ne 1$ or $u$ and $v$ are of the same parity, then the triple $(x,y,z)$ is not primitive. Suppose that $\gcd(u,v)=d\gt 1$. Then $d^2$ divides all of $2uv$, $u^2-v^2$, and $u^2+v^2$, so the triple $(x,y,z)$ is not primitive.
Next we show that if $u$ and $v$ are of the same parity, then $(x,y,z)$ is not primitive. This is because in that case all of $2uv$, $u^2-v^2$, and $u^2+v^2$ are even.
(ii) Next we show that if $\gcd(u,v)=1$ and $u$ and $v$ are of opposite parity, then $(x,y,z)$ is primitive.
Suppose to the contrary that some $d\gt 1$ divides both $y$ and $z$. Then some prime $p$ divides both $y$ and $z$. So $p$ divides $u^2-v^2$ and $p$ divides $u^2+v^2$. Note that since $u$ and $v$ are of opposite parity, it follows that $u^2+v^2$ is odd. So $p$ is odd.
Since $p$ divides $u^2-v^2$ and $u^2+v^2$, it follows that $p$ divides their sum and difference $2u^2$ and $2v^2$. since $p$ is odd, $p$ divides $u^2$ and $v^2$, and since $p$ is prime, $p$ divides $u$ and $v$. This contradicts the fact that $\gcd(u,v)=1$, and completes the proof. |
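The theorem translates directly into a generator for primitive triples; a short sketch of my own:

```python
from math import gcd

def primitive_triples(bound):
    """Primitive Pythagorean triples from coprime u > v of opposite parity."""
    for u in range(2, bound):
        for v in range(1, u):
            if gcd(u, v) == 1 and (u - v) % 2 == 1:
                x, y, z = 2 * u * v, u * u - v * v, u * u + v * v
                assert x * x + y * y == z * z and gcd(x, gcd(y, z)) == 1
                yield x, y, z

print(list(primitive_triples(5)))
# [(4, 3, 5), (12, 5, 13), (8, 15, 17), (24, 7, 25)]
```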
In how many ways can $4$ colas, $3$ iced teas, and $3$ orange juices be distributed to $10$ graduates if each grad is to receive $1$ beverage? | Yes. The answer and solution is correct.
Another way to do it is:
You have $10$ objects of which $4$ are colas, $3$ are iced teas and $3$ are orange juices. Basically, you have to permute all of them, because all $10$ need to be handed out.
For permutations of $n$ objects of which $P$ are alike, $Q$ are alike and $R$ are alike, such that $P+Q+R=n$, the count is given by:
$$\frac{n!}{P!\,Q!\,R!}$$
Hence, here $n=10, P=4, Q=3$ and $R=3.$
On substituting the values, we get $4200$ as the answer. |
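A one-line check of the arithmetic:

```python
from math import factorial

print(factorial(10) // (factorial(4) * factorial(3) * factorial(3)))  # 4200
```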
How to split the polynomial . | Note that $x^2-5 = x^2$, which is already factorised. |
Shortcut for determining equivalence relations? | Yes there is, check Bell number.
It gives you a Recurrence relation for calculating number of equivalence relations on a set having $n$ elements. |
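For instance, here is a short sketch of my own computing Bell numbers from the Bell-triangle recurrence; $B_n$ counts the equivalence relations on an $n$-element set:

```python
def bell_numbers(n):
    """Return the Bell numbers B_0, ..., B_n via the Bell triangle."""
    bells, row = [1], [1]
    for _ in range(n):
        new_row = [row[-1]]   # each row starts with the previous row's last entry
        for entry in row:
            new_row.append(new_row[-1] + entry)
        row = new_row
        bells.append(row[0])  # B_k appears as the first entry of row k
    return bells

print(bell_numbers(6))  # [1, 1, 2, 5, 15, 52, 203]
```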
Hatcher 2.2.31 Invoke Mayer-Vietoris to wedge sum. | This is almost right except you've taken an essentially redundant union (of $X$ and $U$, and $Y$ and $V$). What you want to use as your two spaces is $W=X\cup V$ and $Z=Y\cup U$ which have intersection equal to $U\cap V$ which clearly deformation retracts first to $U$ (or $V$), and then to the wedge point. You can also see that $W$ is homotopy equivalent to $X$, and $Z$ is homotopy equivalent to $Y$. |
$G_\delta$ subset of a $G_\delta$ subset is still a $G_\delta$ subset | Yes, since
$$Y = \Big(\bigcap_{k \in \mathbb N} U_k \Big) \cap \Big(\bigcap_{\ell \in \mathbb N} W_\ell \Big) = \bigcap_{(k,\ell) \in \mathbb N^2} (U_k \cap W_\ell)$$
and each $U_k \cap W_\ell$ is open in $S$, then $Y$ is a countable intersection of open sets in $S$. |
Positivity of a certain sum of Stirling numbers | For the moment at least, I can just individuate the first step of an approach which
might be possibly interesting.
The sum can be rewritten as
$$
\eqalign{
& S(q,n,m) = \sum\limits_{\left( {0\, \le } \right)\,\,i\,\,\left( { \le \,n - m - 1} \right)}
{\;\sum\limits_{\left( {0\, \le } \right)\,j\, \le \,q - 1} {\left( { - 1} \right)^{\,i + j} \left( \matrix{ n \cr j \cr} \right)\left( {q - j} \right)^{\,m}
\left[ \matrix{ j \cr j - i \cr} \right]\left[ \matrix{ n - j \cr m + 1 + i - j \cr} \right]} } = \cr
& = \sum\limits_{\left( {0\, \le } \right)\,\,k\,\,\left( { \le \,m + 1} \right)}
{\;\sum\limits_{\left( {0\, \le } \right)\,j\, \le \,q - 1} {\left( { - 1} \right)^{\,k} \left( \matrix{ n \cr j \cr} \right)\left( {q - j} \right)^{\,m}
\left[ \matrix{ j \cr k \cr} \right]\left[ \matrix{ n - j \cr m + 1 - k \cr} \right]} } = \cr
& = \sum\limits_{\left( {0\, \le } \right)\,j\, \le \,q - 1}
{\left( \matrix{ n \cr j \cr} \right)\left( {q - j} \right)^{\,m} \sum\limits_{\left( {0\, \le } \right)\,\,k\,\,\left( { \le \,m + 1} \right)} {\left( { - 1} \right)^{\,k}
\left[ \matrix{ j \cr k \cr} \right]\left[ \matrix{ n - j \cr m + 1 - k \cr} \right]} } \cr}
$$
where putting the bounds in parentheses is meant to underline that they are implicit in the binomial / Stirling numbers,
which is a useful indication for dealing with convolutions.
Since
$$
x^{\,\overline {\,n\,} } x^{\,\overline {\,m\,} } = \sum\limits_{\left( {0\, \le } \right)\,k\,\left( { \le \,n + m} \right)} {\sum\limits_{\left( {0\, \le } \right)\,j\,\left( { \le \,k} \right)}
{\left[ \matrix{ n \cr
j \cr} \right]\left[ \matrix{
m \cr
k - j \cr} \right]x^{\,k} } }
$$
where $x^{\,\underline {\,k\,} } ,\quad x^{\,\overline {\,k\,} } $ represent respectively the
Falling and Rising Factorial
then the inner sum above can be written as
$$
\eqalign{
& \sum\limits_{\left( {0\, \le } \right)\,\,k\,\,\left( { \le \,m + 1} \right)}
{\left( { - 1} \right)^{\,k} \left[ \matrix{ j \cr k \cr} \right]\left[ \matrix{ n - j \cr m + 1 - k \cr} \right]}
= \left[ {x^{\,m + 1} } \right]\left( {\left( { - x} \right)^{\,\overline {\,j\,} } x^{\,\overline {\,n - j\,} } } \right) = \cr
& = \left[ {x^{\,m + 1} } \right]\left( {\left( { - 1} \right)^j x^{\,\underline {\,j\,} } x^{\,\overline {\,n - j\,} } } \right)
= \left[ {x^{\,m + 1} } \right]\left( {\left( { - 1} \right)^j x^{\,\underline {\,j\,} } \left( {x + n - 1 - j} \right)^{\,\underline {\,n - j\,} } } \right)
\quad \left| \matrix{ \;1 \le n \hfill \cr \;j \le n \hfill \cr} \right. \cr}
$$
thus giving
$$ \bbox[lightyellow] {
S(q,n,m) = \left[ {x^{\,m + 1} } \right]\sum\limits_{\left( {0\, \le } \right)\,j\, \le \,q - 1} {\left( { - 1} \right)^j
\left( \matrix{ n \cr j \cr} \right)
\left( {q - j} \right)^{\,m} x^{\,\underline {\,j\,} } x^{\,\overline {\,n - j\,} } } \quad \left| {\;1 \le n} \right.
}$$
The function on RHS can be further rewritten as
$$
\eqalign{
& F(q,n,m,x) = \sum\limits_{\left( {0\, \le } \right)\,j\, \le \,q - 1}
{\left( { - 1} \right)^j \left( \matrix{ n \cr j \cr} \right)\left( {q - j} \right)^{\,m} x^{\,\underline {\,j\,} } x^{\,\overline {\,n - j\,} } } = \cr
& = n!\sum\limits_{\left( {0\, \le } \right)\,j\, \le \,q - 1}
{\left( { - 1} \right)^j \left( {q - j} \right)^{\,m} \left( \matrix{ x \cr j \cr} \right)\left( \matrix{ x + n - 1 - j \cr n - j \cr} \right)} = \cr
& = n!\sum\limits_{\left( {0\, \le } \right)\,j\, \le \,q - 1}
{\left( {q - j} \right)^{\,m} \left( \matrix{ j - x - 1 \cr j \cr} \right)\left( \matrix{ x + n - 1 - j \cr n - j \cr} \right)} \cr}
$$ |
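As a sanity check of the boxed identity (my own script, not part of the derivation), one can compare it numerically against the original double sum, using SymPy's unsigned Stirling numbers of the first kind together with its falling and rising factorials:

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x = sp.symbols('x')

def S_direct(q, n, m):
    total = 0
    for i in range(n - m):          # 0 <= i <= n - m - 1
        for j in range(q):          # 0 <= j <= q - 1
            k1, k2 = j - i, m + 1 + i - j
            if 0 <= k1 <= j and 0 <= k2 <= n - j:   # bounds implicit in the brackets
                total += ((-1) ** (i + j) * sp.binomial(n, j) * (q - j) ** m
                          * stirling(j, k1, kind=1) * stirling(n - j, k2, kind=1))
    return total

def S_boxed(q, n, m):
    expr = sum((-1) ** j * sp.binomial(n, j) * (q - j) ** m
               * sp.ff(x, j) * sp.rf(x, n - j) for j in range(q))
    return sp.expand(expr).coeff(x, m + 1)

for q, n, m in [(2, 4, 1), (2, 5, 2), (3, 6, 3)]:
    assert S_direct(q, n, m) == S_boxed(q, n, m)
print("identity checked")
```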
$AA^{-1} \subseteq A^{-1}A$ for every infinite subset $A$ of $G$ | There are infinite non-abelian groups $G$ with the property that, for any two elements $x$ and $y$, either $xy=yx$ or $x^2=y^2$. For example, take $G$ to be the direct product of a quaternion group of order $8$ and infinitely many cyclic groups of order $2$.
If $G$ is such a group and $A\subseteq G$, then for any $x,y\in A$, either $xy^{-1}=y^{-1}x$ or $xy^{-1}=x^{-1}y$, so $AA^{-1}\subseteq A^{-1}A$. |
Why when a series converges to L, then its harmonic mean converges to L? | 1 can be proven, for example using Stolz–Cesàro theorem for $b_n=n$.
3 and 5 come from the following lemma:
If $x_n \to x$ and $x \neq 0$, then $\frac{1}{x_n} \to \frac{1}{x}$.
Proof:
If $x_n \to x \neq 0$, then for large $n$ we have $|x_n|>\frac{|x|}{2}$ (because $|x-x_n| + |x_n| \geq |x|$ and $|x-x_n| \to 0$). So note that:
$$\left|\frac{1}{x}-\frac{1}{x_n}\right|=\left|\frac{x_n-x}{x_nx}\right|\leq \left|\frac{2(x-x_n)}{x^2}\right|=\frac{2}{x^2}|x-x_n|$$
But you know that $|x-x_n| \to 0$, $\frac{2}{x^2}$ is constant, so $\frac{2}{x^2}|x-x_n| \to 0$. |
A Lebesgue measure question involving a dense subset of R, translates of a measurable set, etc. | Hints:
1) $m(D \Delta (D + x))$ is a continuous function of $x$.
2) If $x$ and $y$ are Lebesgue points of $D$ and $D^c$ respectively, what can you say about $m(D \Delta (D + x - y))$? |
Dimension space of modular forms of weight $2k$. | This is a very standard argument, see e.g. Corollary 2.16 of William Stein's book "Modular Forms: A Computational Approach". This is available as a free e-book; the relevant section is here:
http://modular.math.washington.edu/books/modform/modform/level_one.html#structure-theorem-for-level-1-modular-forms |
Analytic function that for every $z$ exist a derivate of some order that is zero in that point | By "region" $D$ I assume you mean that $D$ is a non-empty connected open set. Let $S(n)=\{z\in D| f^{(n)}(z)=0\}.$ Then $D=\cup_{n\in \mathbb N}S(n)$ is uncountable, so $S(n)$ must be uncountable for some $n.$ Any uncountable subset of $\mathbb C$ has an uncountable bounded subset. An analytic function, such as $f^{(n)}$, which is $0$ on an infinite bounded subset of $D$, is $0$ on all of $D.$ |
Improper integral of piece-wise rational function | An integral from $0$ to $\infty$ with $\mathrm dv/v$ in it tends to be susceptible to a substitution of the form $u=\lambda v$, which leaves all of that invariant. In the present case, you can write $u=xv$ to get
$$
\int_0^\infty \left( \frac{1}{ 1 + x^\alpha \vert v-1 \vert^\alpha} - \frac{1}{ 1 + x^\alpha \vert v+1 \vert^\alpha} \right) \frac{\mathrm d v}{v}
=
\int_0^\infty \left( \frac{1}{ 1 + \vert u-x \vert^\alpha} - \frac{1}{ 1 + \vert u+x \vert^\alpha} \right) \frac{\mathrm d u}{u}\;.
$$
The integrand is even, so you can simplify things by calculating the integral from $-\infty$ to $\infty$ instead.
The denominators are of the form $z^\alpha+1$, which for odd $\alpha$ factorizes into $\prod_i(z+z_i)$, where $z_i$ are the $\alpha$-th roots of unity. So we have
$$\frac12\int_{-\infty}^\infty \left( \frac{1}{\prod_i(\vert u-x \vert+z_i)} - \frac{1}{\prod_i(\vert u+x \vert+z_i)} \right) \frac{\mathrm d u}{u}\;.$$
The rest is an exercise in partial fractions, with the two complex conjugate solutions combining into a quadratic denominator; I think you'll need to resolve the absolute values first; let me know if you want me to write it out further.
P.S.: You can further simplify things by substituting $t=u-x$ in the first term and $t=u+x$ in the second; that yields
$$\frac12\int_{-\infty}^\infty \frac{1}{\prod_i(\vert t \vert+z_i)}\left(\frac1{t+x}-\frac1{t-x} \right) \mathrm d t\;.
$$ |
Finding inverse Fourier Transform | Hint:
$$u(x,z) = \int_{-\infty}^{\infty}U(x,2\pi q)e^{2\pi i q z} dq$$
Note:
Typically
$$F(\omega; \cdot) = \int_{-\infty}^{\infty}f(t; \cdot)e^{- 2\pi i \omega t} dt $$
and
$$f(t; \cdot) = \int_{-\infty}^{\infty}F(\omega; \cdot)e^{2\pi i \omega t} d\omega $$
So I rewrite your $p$ as $p=2\pi q$ |
Is the vector difference of two subspaces also a linear subspace? | You can show that if $-w\in W$ then $w\in W$ by writing $-w=-1\times w$, and then use the result that $V+W$ is a vectorial subspace of $\mathbb{R}^n.$ |
How do I adjust one percent in a set and maintain the remaining proportions in the set? | Solve
newIng1 / (newIng1 + oldIng2 + oldIng3 + oldIng4) = 5%
to find newIng1.
Then you divide each of newIng1, oldIng2, oldIng3, oldIng4 by the sum
newIng1 + oldIng2 + oldIng3 + oldIng4
to get 100%. |
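In code (a sketch of my own, storing shares as fractions of $1$):

```python
def set_share(shares, index, target):
    """Rescale shares[index] to `target` of the new total; the remaining
    ingredients keep their mutual proportions. Shares sum to 1."""
    rest = sum(s for i, s in enumerate(shares) if i != index)
    new_val = target / (1 - target) * rest   # solves new / (new + rest) = target
    adjusted = list(shares)
    adjusted[index] = new_val
    total = sum(adjusted)                    # renormalize back to 100%
    return [s / total for s in adjusted]

print(set_share([0.10, 0.30, 0.40, 0.20], 0, 0.05))
# [0.05, 0.3167, 0.4222, 0.2111] -- the last three are still in ratio 3 : 4 : 2
```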
What's wrong with my "proof" that branch cuts are not arbitrary? | If a contour goes around both potential branch cuts, and the branches of the function are chosen to be the same on that contour, then the integral around the contour doesn't depend on the choice of branch cut. That contour can then be deformed to other contours (such as those in your picture), and the integral still doesn't change. |
A problem about uniform distribution and Rayleigh distribution | You have
$$
f_{X,Y}(x,y) = \frac{1}{2\pi}\exp\left ( -\frac{1}{2} (x^2+y^2)\right )
$$
Note further that $X = R \sin \Theta, Y=R \cos \Theta$
Now, we can use the density transform formula to write
$$
f_{R,\Theta}(r,\theta) = f_{X,Y}(r \sin \theta, r\cos\theta)|J|^{-1}
$$
where $|J|^{-1}$ is the inverse Jacobian of the transformation, you should be able to show $|J|^{-1} = r$. Putting this together we have
$$
f_{R,\Theta}(r,\theta) = \frac{1}{2 \pi} r e^{-r^2/2} \mathbb{1}\{r >0\} \mathbb{1}\{0\le\theta \le 2\pi\}
= \left ( \frac{1}{2 \pi} \mathbb{1}\{0\le\theta \le 2\pi\} \right ) \left (r e^{-r^2/2} \mathbb{1}\{r >0\} \right)
$$
and the result follows immediately (independence holds due to the fact that the density factorizes) |
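A Monte Carlo sanity check of the two marginals (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10**6)
y = rng.standard_normal(10**6)
r = np.hypot(x, y)
theta = np.arctan2(y, x) % (2 * np.pi)

print(theta.mean(), np.pi)                 # uniform on [0, 2*pi): mean ~ pi
print((r <= 1).mean(), 1 - np.exp(-0.5))   # Rayleigh CDF at 1: ~ 0.3935
print(np.corrcoef(r, theta)[0, 1])         # ~ 0, consistent with independence
```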
Proof that $2^n-1$ does not always generate primes when primes are plugged in for $n$? | $$\Large 2^{11}-1=23\cdot 89$$
Take a look at the Wikipedia page on Mersenne primes. There are (currently) only $48$ known prime numbers $p$ such that $2^p-1$ is prime, after people have used computers to check millions. |
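A quick way to see which prime exponents survive (a sketch using SymPy):

```python
from sympy import isprime, factorint

for p in [2, 3, 5, 7, 11, 13]:
    m = 2**p - 1
    print(p, m, "prime" if isprime(m) else factorint(m))
```

For $p=11$ this prints the factorization $\{23: 1,\ 89: 1\}$, i.e. $2047=23\cdot 89$.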
[High School]How to find out the other two equations? | K is a Brocard point and we have the associated relation $\cot A+\cot B + \cot C = \cot \omega$ where $\omega$ is the Brocard angle.
Here this yields $\cot A+\cot B + \cot C=\sqrt 3$.
We further have that in any triangle $\cot A+\cot B + \cot C \ge \sqrt 3$, with equality exactly when $\triangle ABC$ is equilateral.
The conclusion now follows.
Is the application $D(R_p\circ \imath)(e):\mathfrak{h}\rightarrow T_p\mathcal{L}_p$ an isomorphism? | Yes.
Right multiplication by $p$ is a diffeomorphism $r_p\colon G \to G$, so restricts to a diffeomorphism $H \to Hp$. Thus its differential takes any tangent space $T_h H$ isomorphically onto $T_{hp}(Hp)$.
In more detail, right multiplication also bijectively takes curves $\gamma$ in $H$ starting at $e \in G$ to curves $r_p \circ \gamma\colon t \mapsto \gamma(t)p$ in $Hp$ starting at $p$, so to each tangent vector $\gamma'(0) \in \mathfrak{h}$ corresponds the tangent vector $(r_p \circ \gamma)'(0) = (Dr_p)(e)(\gamma'(0))$ in $T_p(Hp)$, and $(Dr_p)(e)$ sends the one to the other.
This happens essentially because of the definition of tangent vectors and the chain rule. I am sweeping your inclusion $\imath$ under the rug, somewhat, however. Closed subgroups of Lie groups turn out to be regular submanifolds, so that questions of smoothness are the same whether or not one composes with the inclusions. |
Prove or disprove: For all positive integers n and for all integers a and b, if a ≡ b mod n, then a^2 ≡ b^2 mod n. | You can disprove the claim if you find just one counter example.
However, to prove it, you will have to show it is true for every possible $a,b,n$. Luckily, that is sometimes not as onerous as it sounds.
Here, for the first one, note $a-b$ is a multiple of $n$ immediately gives you $(a-b)(a+b)=a^2-b^2$ is always a multiple as well.
For the second one, can you see a counter example? |
not always $A \models \phi$ or $A \models \neg \phi$ example | Consider the structure $\mathbb N$ of natural numbers and consider the formula $(x=0)$.
For a valuation $v$ such that $v(x)=0$ we have:
$\mathbb N, v \vDash (x=0)$
while obviously, for a valuation $v'$ such that $v'(x)=1$ we have: $\mathbb N, v' \nvDash (x=0)$.
Thus, if we define: $A \vDash \phi$ as "$A,v \vDash \phi$, for every valuation $v$", we have that:
neither: $\mathbb N \vDash (x=0)$, because we have the valuation $v'$ above such that $\mathbb N, v' \nvDash (x=0)$,
nor: $\mathbb N \vDash \lnot (x=0)$, because we have the valuation $v$ above such that $\mathbb N, v \nvDash \lnot (x=0)$. |
Non- linear recurrent relation (exponential term) | We would likely expect no closed form solution to exist in terms of standard functions; the main reason for this is because, in the special case where $\alpha=0$ and $\beta=1$, we have
$$X_0=0$$
$$X_1=-1$$
$$X_2=-e$$
$$X_3=-e^{e}$$
$$X_4=-e^{e^e}$$
and so on. This is essentially the same as tetration, but that causes an issue, since there's no particularly well-accepted way to define tetration in any closed form (for instance, there are various ways to choose smooth functions that satisfy the recurrence).
If we want to be particularly rigorous, the rate of growth that this function possesses is a problem for any elementary function I can think of. If we considered the smallest set of functions $F$ in $\mathbb{N\rightarrow N}$ such that
Any composition of a pair of functions in $F$ is in $F$.
Any function of the form $g(n)=\sum_{i=0}^{m(n)}f(n)$ for $f,m\in F$ has $g\in F$. Similarly for products.
Any power or quotient of two functions in $F$ is in $F$.
The identity, logarithm, factorial, etc. - the sort of functions you'd encounter in a simple calculus class - are in $F$.
So, basically, $F$ is the set of functions which have some form in terms of algebraic operations, summations, and "elementary" functions. So, stuff like
$$f(n)=\sum_{i=0}^{2^n}e^{n!}$$
would be in this class. The problem is that no such function can grow faster than $O(n^{n^{\cdot^{\cdot^{\cdot^{n}}}}})=O(^{k}n)$, where the latter notation is tetration (a power tower of $n$'s of height $k$) for some finite $k$ (this is provable with structural induction - it's true of the elementary functions, and any "legal" combination of two functions satisfying the condition*). This is a smaller growth class than what $X$ exhibits, which is $O(^{n}k)$ and which, for large enough $n$, outgrows any power tower of fixed height. Thus, the sequence grows too quickly to have a closed form in elementary terms.
(*Example: If $f(n)\in O(^{k}n)$, then $f(n)!\leq f(n)^{f(n)}$ so $f(n)!\in O(^{2k}n)$. You could do this with every sort of combination of functions I list, noting that if the given $k$ is finite for $f$, no transformation of $f$ makes it infinite/non-existent - so, unless we start with some "elementary" function that is in a big enough growth class, we'll never create one by combining old ones)
The above tells us that it is difficult to nest exponents; that is, there's no particular "short-cut" to calculating power towers. However, we could extend this line of thought (though doing so rigorously eludes me): Why should we be able to find a shortcut to calculate $X_n$? Its definition still uses a power tower. Sure, there are plenty of particular cases in which the power tower can be simplified (e.g. if $\alpha=1$ and $\beta=0$), but, to some degree, the number of times we use an exponent in the definition of a variable shouldn't be possible to simplify in general. What if we considered the class of numbers $C_n(x,\alpha,\beta)$ which could be written as an elementary function $f(x,\alpha,\beta)$ employing at most $n$ exponents (or functions like $\sin$ and $\cos$ that essentially invoke an exponent). If $X$ could be written in closed form, then somehow, there would be some $n$ such that $X$ never left $C_n(0,\alpha,\beta)$ for any choice of $\alpha$ and $\beta$ - that somehow, there is always a relation between $X_{n+1}$ and some function using fewer exponents. This would basically imply that there is no purpose in applying the exponent more than $n$ times, since everything beyond there would be expressible in other ways. This result appears far too strong to be true.
We can also note that every function defined globally in elementary terms is analytic. If a nice elementary function could be given as a closed form, and was analytic, why would we expect it to work at some locations, but not others, despite there never being a singularity?
Any proof will be difficult though; there are cases where $X_n$ is writable in closed form - clearly, any conditions making $X$ eventually periodic will yield a closed form. However, I doubt any other conditions exist where $X$ is writable in closed form, because otherwise we need to deal with nesting exponents. |
Topology problem on Lie Transformation groups. | Assume $G$ is given a topology compatible with the group structure, with the given restriction to $G^*$, and that $G^*$ is open.
It is clear that $\varphi V$ is an open neighborhood of $\varphi\in G$ if $V\subseteq G^*$ is an open neighborhood of the identity.
Conversely, if $U\subseteq G$ is open and $\varphi\in U$, then $V:=\varphi^{-1}U\cap G^*$ is an open neighborhood of the identity within $G^*$, and $\varphi\in\varphi V\subseteq U$.
So the sets $\varphi V$ indeed form a basis for the topology of $G$: every open set is a union of these. |
Volume of revolution of $x=a(\theta+\sin \theta)$ and $y=a(1+cosθ)$ | $$ \int \pi y^2 \, dx =\int\, \pi y(\theta)^2 \, (dx/d\theta) \,d\theta$$
So one need not eliminate $\theta$ a priori.
How do you substitute integers 0-9 in this equation to solve it? | Running this Matlab code, which takes a few minutes:
M = perms([0 1 2 3 4 5 6 7 8 9]);   % all 10! assignments of the digits 0-9
lx = length(M);
for I = 1:lx
    D = M(I,:);
    % one distinct digit per letter of WORMHOLE, IDIOT, HEM
    [w o r m h l e i d t] = deal(D(1),D(2),D(3),D(4),D(5),D(6),D(7),D(8),D(9),D(10));
    x = [w o r m h o l e]*[10000000;1000000;100000;10000;1000;100;10;1];  % WORMHOLE
    y = [i d i o t]*[10000;1000;100;10;1];                                % IDIOT
    z = [h e m]*[100;10;1];                                               % HEM
    if x == y*z
        x, y, z, D   % display any solution found
    end
end
Running it, you find that there is no solution to the problem.
The closest fits are:
wormhole idiot hem wormhole-idiot*hem
60418093 72705 831 238
49538970 61692 803 294
43957382 60631 725 -93
42067298 53521 786 -208
41837160 59512 703 224
48765823 90981 536 7
32895210 64627 509 67
38697820 54581 709 -109
31876190 52514 607 192
25043581 79756 314 197
21473196 58510 367 26
17432790 85876 203 -38
10492078 36305 289 -67
9183957 24296 378 69
7485726 13179 568 54 |