Tangent space of an immersed submanifold | Your idea to use the figure-eight immersed submanifold is OK. To compute $T_{(0,0)}S$ as a subspace of $T_{(0,0)}M$ recall that $T_{(0,0)}S$ is generated by $\frac{\partial}{\partial t} \Bigr|_{(0,0)}$ and mapping it by $di_{\big|(0,0)}$ we get that:
$$di_{\big|(0,0)}\left(\frac{\partial}{\partial t} \Bigr|_{(0,0)}\right) = 2\cos(0)\frac{\partial}{\partial x} \Bigr|_{(0,0)} + \cos(0)\frac{\partial}{\partial y} \Bigr|_{(0,0)} = 2\frac{\partial}{\partial x} \Bigr|_{(0,0)} + \frac{\partial}{\partial y} \Bigr|_{(0,0)} $$
And so $T_{(0,0)}S = \text{span}\left\{2\frac{\partial}{\partial x} \Bigr|_{(0,0)} + \frac{\partial}{\partial y} \Bigr|_{(0,0)}\right\}$
The trick now is to find a curve which is smooth in $M$ and lies inside $S$, but is not smooth in $S$. Such a curve is obtained by running through the figure eight in the opposite direction: if $\gamma(t) = (\sin 2t, \sin t)$ parametrizes the submanifold, take $\gamma_1(t) = (\sin 2t, -\sin t)$
Now take any $f \in C^{\infty}(M)$ s.t. $f$ is $0$ on $S$. So we have $0 = f \circ \gamma_1(t) = f(\sin 2t,-\sin t) \in C^{\infty}(\mathbb{R})$. Take the derivative at $t=0$ to get:
$$0 = f_x(0,0) \cdot \frac{d(\sin 2t)}{dt}(0) + f_y(0,0) \cdot \frac{d(-\sin t)}{dt}(0) = \left(2\frac{\partial}{\partial x} \Bigr|_{(0,0)} - \frac{\partial}{\partial y} \Bigr|_{(0,0)}\right)f$$
So if the characterization was right then $v = 2\frac{\partial}{\partial x} \Bigr|_{(0,0)} - \frac{\partial}{\partial y} \Bigr|_{(0,0)} \in T_{(0,0)}S$, which is obviously wrong, as it's not in the span of $2\frac{\partial}{\partial x} \Bigr|_{(0,0)} + \frac{\partial}{\partial y} \Bigr|_{(0,0)}$. Hence we derive a contradiction. |
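A quick sanity check of the two velocity vectors, a sketch assuming sympy is available:

    import sympy as sp

    t = sp.symbols('t')
    gamma  = (sp.sin(2*t), sp.sin(t))    # parametrizes S
    gamma1 = (sp.sin(2*t), -sp.sin(t))   # same image, opposite direction

    v  = [sp.diff(c, t).subs(t, 0) for c in gamma]   # velocity of gamma at 0
    v1 = [sp.diff(c, t).subs(t, 0) for c in gamma1]  # velocity of gamma1 at 0
    print(v, v1)  # [2, 1] [2, -1]: v1 is not a multiple of v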
The structure $(\mathbb N, <, f)_{f\in \mathcal F}$ seems to have no elementary extension | There seems to be some confusion here. Let $\mathcal{M} = (\mathbb{N}, <, f)_{f \in \mathcal{F}}$. We want to show that if $\mathcal{M} \prec \mathcal{N}$ is a proper elementary extension, then $|\mathcal{N}| > \aleph_0$ (by upward Löwenheim–Skolem/ultrapower construction, we know such $\mathcal{N}$ exists). Any element $a \in \mathcal{N} \setminus \mathcal{M}$ will be greater than any natural number (above, you say that it must be less than any natural number or in between two of them).
Now, one has to show that $|\mathcal{N}| > \aleph_0$. Choose $a \in \mathcal{N} \setminus \mathcal{M}$. We know that $\mathcal{M} \models \forall x \exists y\, f(x) = y$ for every $f \in \mathcal{F}$. Since $\mathcal{M} \prec \mathcal{N}$, we know that for every $f \in \mathcal{F}$, $f(a) \in \mathcal{N}$. Now, it's your job to show that $|\mathcal{N}| > \aleph_0$.
(As a hint, you need to somehow incorporate the ordering.) |
Not sure how to find the limit of this inequality? | Using the Sandwich Theorem, we have
$\lim_{x \to 4} 4x-9 = 7$ and $\lim_{x \to 4} x^2-4x+7 = 7$, and since $f(x)$ is sandwiched in between, $\lim_{x \to 4} f(x) = 7$. |
Weird quadratic residue question. | If $p$ is an odd prime, $\frac{p-1}{2}$ is a quadratic residue iff $-2$ is a quadratic residue.
Let us assume $p\equiv 3\pmod{8}$ and consider the splitting field of $\Phi_8(x)=x^4+1$ over $\mathbb{F}_p$.
Its degree over $\mathbb{F}_p$ is given by the least $k\in\mathbb{N}^+$ such that $8\mid(p^k-1)$, i.e. $2$.
It follows that $\Phi_8$ factors over $\mathbb{F}_p$ as the product of two quadratic irreducible polynomials.
Let us denote through $i$ and $\sqrt{2}$ the elements of $\mathbb{F}_p$ or $\mathbb{F}_{p^2}$ solving $x^2+1=0$ and $x^2-2=0$.
Let us consider the irreducible factor of $\Phi_8$ vanishing at $\frac{1+i}{\sqrt{2}}$. By Frobenius automorphism, the conjugated root is $\left(\frac{1+i}{\sqrt{2}}\right)^p = \frac{-1+i}{\sqrt{2}}$ where the last equality follows from $p\equiv 3\pmod{8}$.
Thus we have that one of the irreducible factors of $\Phi_8$ is
$$ \left(x-\frac{-1+i}{\sqrt{2}}\right)\left(x-\frac{1+i}{\sqrt{2}}\right)=x^2-\sqrt{-2}\,x-1 $$
and since the coefficients of this polynomial belong to $\mathbb{F}_p$, $-2$ is a quadratic residue $\!\!\pmod{p}$.
$\frac{p-1}{2}$ being a prime is irrelevant. |
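A computational sanity check of the conclusion, a sketch assuming sympy: for every prime $p\equiv 3\pmod 8$ below $200$, both $\frac{p-1}{2}$ and $-2$ are quadratic residues, and $\Phi_8$ splits into two quadratics.

    from sympy import primerange, Symbol, factor_list, degree
    from sympy.ntheory import is_quad_residue

    x = Symbol('x')
    for p in primerange(3, 200):
        if p % 8 != 3:
            continue
        # (p-1)/2 and -2 are both quadratic residues mod p
        assert is_quad_residue((p - 1)//2, p) and is_quad_residue(p - 2, p)
        # Phi_8 = x^4 + 1 splits into two quadratic factors mod p
        degs = sorted(degree(f, x) for f, _ in factor_list(x**4 + 1, modulus=p)[1])
        assert degs == [2, 2]
    print("checked all p = 3 (mod 8) below 200")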
Unknown number of colours Bernoulli Urn | The problem is solvable. To guide your thinking:
You know:
There are $N$ balls in the urn; $N$ is unknown but $1\le N$.
There are $k$ colours in the urn; $k$ is unknown but $1\le k \le N$.
That's it, you don't know anything else - so now you need to hypothesize about the distribution of $N$ and $k$ - it is these hypotheses that you will test.
When you draw the first ball and it is color $C_1$ you learn nothing. You can either put it back (Multinomial distribution) or put it aside (Multivariate hypergeometric distribution); the second is more complex but will reject your hypothesis faster.
You draw a second ball; there are 3 possibilities:
There isn't one to draw (only if you didn't replace).
It is colour $C_2$.
It is colour $C_1$ again.
For the multinomial approach, you can say
Can't happen.
$2\le N$ and $2\le k \le N$ with certainty.
$1\le N$ and $1\le k \le N$ with certainty (i.e. no new information).
For the Multivariate hypergeometric approach, you can say
$N=1$ and $k=1$ with certainty.
$2\le N$ and $2\le k \le N$ with certainty.
$2\le N$ and $1\le k \le N-1$ with certainty.
Using this you refine your hypotheses and move on.
How likely is it?
Assume you draw $n$ balls with replacement getting $x_i$ balls of colour $i$ with $k$ colours in all. The probability of extracting a ball of colour $i$ is $p_i$ which is equal to the proportion of balls of that colour in the bag and is unknown.
Now,
$$\sum_{i=1}^{k}p_i \le 1$$
with the difference between the total and $1$ being the total probability of the colours not yet selected.
And, from the multinomial distribution,
$$p=n!\prod_{i=1}^k\frac{p_i^{x_i}}{x_i!}$$
We know this is maximised when $p_i=\frac{x_i}{n}$ giving
$$\begin{align}
p_{max}&=n!\prod_{i=1}^k\frac{\left(\frac{x_i}{n}\right)^{x_i}}{x_i!}\\
&=\frac{n!}{n^n}\prod_{i=1}^k\frac{x_i^{x_i}}{x_i!}\\
\end{align}$$
Since $\sum_{i=1}^kx_i=n$
Now you could differentiate with respect to each $x_i$ but given that these are discrete a simple difference equation will do. You draw another ball of colour $a$ where $1\le a\le k+1$, what is your new $p_{max}$ and how much is it different from the old? |
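A minimal sketch (the function name is mine) of the $p_{max}$ computation for given colour counts, so the difference equation can be explored numerically:

    from math import factorial

    def p_max(counts):
        # maximized multinomial likelihood: n!/n^n * prod_i x_i^{x_i}/x_i!
        n = sum(counts)
        p = factorial(n) / n**n
        for x in counts:
            p *= x**x / factorial(x)
        return p

    counts = [3, 1, 1]          # e.g. 5 draws, 3 colours
    print(p_max(counts))        # likelihood under the MLE proportions
    print(p_max([4, 1, 1]))     # after one more ball of colour 1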
Show that $Z(G)$ is contained in $C(a)$. | In fact $Z(G) = \bigcap_{a \in G} C_G(a)$. |
$\bigcap\limits_{n=1}^\infty A_n$ vs $\bigcap\limits_\infty^{n = 1} A_n$ | Usually we define merely
$$\bigcap_{i\in I}A_i=\{x\mid \forall i\in I\colon x\in A_i\}$$
where $I$ is some nonempty index set.
Alternate notations are common for two special cases:
(i) If $I=\{n, n+1, \ldots, m\}$, we write
$\bigcap_{i=n}^m A_i$ for $\bigcap_{i\in I}A_i$.
(ii) If $I=\{i\in\mathbb N\mid i\ge n\}$, we write $\bigcap_{i=n}^\infty A_i$ for $\bigcap_{i\in I}A_i$.
I have never encountered the notation you exhibit. It doesn't match notations for e.g. sums with $\Sigma$ either. |
Covering and Heine-Borel theorem | It has to be closed because otherwise the statement is false. If $S$ is not closed and if $z\in\overline S\setminus S$, consider the set$$\left\{\left\{w\in\mathbb{C}\,\middle|\,\lvert w-z\rvert>\frac1n\right\}\,\middle|\,n\in\mathbb{N}\right\}.$$It is an open cover of $S$, but you can check that it has no finite subcover. |
Continuity of a function at $(0,0)$ | It's hard for me to see what you are trying to achieve. The sine is not monotone everywhere, so you need to be careful if you apply it. You seem to reverse an inequality there, and I don't see why.
And, if you are going to use polar coordinates, you can use them directly, you gain nothing by the inequalities you try to use first.
Finally, you don't really need polar coordinates: using the easy inequality $|\sin t|\leq|t|$,
\begin{align}
\left|\frac{\sin(xy)}{|x|+|y|}\right|&=\frac{|\sin xy|}{|x|+|y|}\leq\frac{|xy|}{|x|+|y|}
\leq\frac12\,\frac{x^2+y^2}{|x|+|y|}\\ \ \\
&=\frac12\,\left(\frac{x^2}{|x|+|y|}+\frac{y^2}{|x|+|y|} \right)\\ \ \\
&\leq \frac12\,\left( \frac{x^2}{|x|}+\frac{y^2}{|y|}\right)\\ \ \\
&=\frac12\,(|x|+|y|).
\end{align}
The estimate can actually be made a lot simpler (maybe switching the roles of $y$ and $x$ if $x=0$):
\begin{align}
\left|\frac{\sin(xy)}{|x|+|y|}\right|&=\frac{|\sin xy|}{|x|+|y|}\leq\frac{|xy|}{|x|+|y|}=\frac{|x|\,|y|}{|x|+|y|}\leq\frac{|x|\,|y|}{|x|}=|y|.
\end{align}
Note that if $x=0$ or $y=0$, then no inequalities are needed as already $\sin xy=0$. |
A complex matrix commuting with all diagonalizable matrices is a scalar matrix | This is just a matter of grinding through:
let $\Lambda = \operatorname{diag} \{1,...,n \}$ and compute
$[\Lambda A - A \Lambda]_{ij}$ to show that $A$ is diagonal.
Pick $i \neq j$ and let $V=e_i e_j^T + e_j e_i^T$. Note that $V$ is symmetric hence diagonalisable and compute
$VA-AV$ to show that $\lambda_i=\lambda_j$. |
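A numeric illustration of the first computation, assuming numpy: entrywise $[\Lambda A - A\Lambda]_{ij} = (\lambda_i-\lambda_j)A_{ij}$, so commuting with $\Lambda$ forces the off-diagonal entries of $A$ to vanish.

    import numpy as np

    n = 4
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    Lam = np.diag(np.arange(1, n + 1).astype(float))

    C = Lam @ A - A @ Lam
    i, j = np.indices((n, n))          # zero-based indices
    print(np.allclose(C, (i - j) * A))  # True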
Thinking of $\int f(x) g(nx) dx$ as an integral with respect to the measure $g(nx)dx$ | Since $\lim_{n\to\infty} \int_I g(nx)\, dx = 0$ for all intervals $I \subset [0,1]$,we get
$$\lim_{n\to\infty}\int_0^1f(x)g(nx)\,dx=0$$
for any step function $f(x)=\sum_{k=1}^n a_k\chi_{I_k}$ where $I_k\subset[0,1]$ is an interval.
Now suppose $f\in L^1$. Since step functions are dense in $L^1$, for any $\epsilon>0$ we can write $f=f_1+f_2$ where $f_1$ is a step function and $\int_0^1|f_2|\,dx<\epsilon$. Using what we've proven for step functions and the boundedness of $g$ gives that
$$\lim_{n\to\infty}\int_0^1f(x)g(nx)\,dx=0.$$ |
Understanding the proof of Möbius inversion formula | The switch of summation is just a change of indices (which does not really use what $\mu$ is):
$$\sum_{d\mid n}\mu\left(\frac nd\right)\sum_{d'\mid d}g(d')=\sum_{d'\mid d \mid n}g(d')\mu\left(\frac nd\right)$$
Now you observe that, if $d',\ d$ are divisors of $n$, then $d'\mid d$ if and only if $\frac n{d}\mid \frac n{d'}$. For the same reason, $m\mid \frac{n}{d'}$ if and only if $d'\mid \frac nm\mid n$.
So, if you set $m:=\frac nd$ and sum over a fixed $d'$, you get
$$\sum_{d'\mid d \mid n}g(d')\mu\left(\frac nd\right)=\sum_{d'\mid n}g(d')\sum_{m\mid \frac n{d'}}\mu(m)$$ |
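A brute-force check of the inversion formula, assuming sympy:

    from sympy import divisors
    from sympy.ntheory import mobius

    def f(n):           # any arithmetic function will do, e.g.
        return n * n + 1

    def g(n):           # g(n) = sum over divisors d of n of f(d)
        return sum(f(d) for d in divisors(n))

    for n in range(1, 200):
        assert f(n) == sum(mobius(n // d) * g(d) for d in divisors(n))
    print("inversion verified for n < 200")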
Non-isomorphic subgroups w/ same number of elements | The group $(\mathbb{Z}/2\mathbb{Z})\times(\mathbb{Z}/4\mathbb{Z})$ contains a subgroup isomorphic to $\mathbb{Z}/4\mathbb{Z}$ as well as a subgroup isomorphic to $(\mathbb{Z}/2\mathbb{Z})\times(\mathbb{Z}/2\mathbb{Z})$, which both have 4 elements but are not isomorphic to each other. See if you can find the subgroups yourself though. |
Calculus of variations, constraint in a point | I probably misunderstood the question, as I believed the point $c$ is not known a priori. In other words, my understanding was that you are looking for a minimiser of $J[u]$ such that $u(c) = u_c$, $u_c$ given, and $c$ is allowed to vary.
A sort of obstacle problem, whereby the minimiser has to attain the value $u_c$ for a $c$, $a<c<b$, and we are looking for the optimal $c$.
In this case the solution could proceed along the following lines.
For fixed $c$, look for an extremal that satisfies the Euler-Lagrange equation in both $[a,c]$ and $[c,b]$, as RRL suggested in his answer.
The solution will depend on $c$, so the functional will be parametrised according to $c$, $J[u] = \hat{J}[u; c]$.
The latter is to be interpreted as a function over $c$: given $c$, it computes the functional value, whose argument is the function $u$, extremal for the given $c$.
Minimising over $c$ is then akin to a one-dimensional function minimisation. |
Show that the PDE has vertical lines as a family of characteristic curves for $y=0$. | $$\frac{dy}{dx}=\frac{-S\pm \sqrt{S^2-4RT}}{2R}=\pm \frac{1}{-\sqrt{y}}$$
$$\implies \frac{2}{3}y^{3/2}=\pm x+c$$ if $y=0$ then $x=\pm c$, hence vertical lines as characteristics. |
Cesàro operator is bounded for $1<p<\infty$ | Using Hardy's inequality one may see that
$$
\Vert T(x)\Vert_p=
\left(\sum\limits_{k=1}^\infty \left|\frac{1}{k}\sum\limits_{j=1}^k x_j\right|^p\right)^{1/p}\leq
\left(\sum\limits_{k=1}^\infty \left(\frac{1}{k}\sum\limits_{j=1}^k |x_j|\right)^p\right)^{1/p}\leq
$$
$$
\left(\left(\frac{p}{p-1}\right)^p\sum\limits_{k=1}^\infty |x_k|^p\right)^{1/p}=
\frac{p}{p-1}\left(\sum\limits_{k=1}^\infty |x_k|^p\right)^{1/p}=
\frac{p}{p-1}\Vert x\Vert_p
$$
This means that
$$
\Vert T\Vert\leq\frac{p}{p-1}
$$ |
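A quick numeric illustration of the inequality on a long random vector, assuming numpy:

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.standard_normal(10000)
    p = 1.5

    Tx = np.cumsum(x) / np.arange(1, x.size + 1)   # Cesaro means
    lhs = np.sum(np.abs(Tx)**p)**(1/p)
    rhs = (p / (p - 1)) * np.sum(np.abs(x)**p)**(1/p)
    print(lhs <= rhs, lhs, rhs)  # True, with room to spare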
Complex equation gives unexpected roots | The condition for invertibility is that $\omega^2ac-(a+c)\omega+1\ne0$. I.e. $(1-\omega a)(1-\omega c)\ne 0$, i.e. $a\ne\omega^2\land c\ne\omega^2$.
Which means $2$ matrices. |
What are the first few values of this function? | The recurrence can be rewritten as,
$$(1) \quad a_{n+1}={2 \over {n+1}}-a_n$$
$$\Rightarrow (2) \quad a_n=(-1)^n \cdot a_0+\sum_{k=0}^{n-1} {2 \over {n-k}} \cdot (-1)^k$$
No derivation is given, but $(2)$ is easy enough to check. Comparing to known functions yields,
$$(3) \quad a_n=2 \cdot \Phi(-1,1,n+1)+(a_0-\ln(4)) \cdot (-1)^n$$
Where $\Phi(a,b,c)$ is the Lerch Transcendent. The limit as $n$ approaches infinity doesn't exist. However, it should be noted that the Lerch term goes to $0$. |
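A numeric check of $(2)$ and $(3)$ against the recurrence, assuming mpmath (whose lerchphi is the Lerch transcendent):

    from mpmath import lerchphi, log, mpf

    a0 = mpf('0.3')
    a, seq = a0, [a0]
    for n in range(0, 30):
        a = mpf(2)/(n + 1) - a          # a_{n+1} = 2/(n+1) - a_n
        seq.append(a)

    for n in (5, 12, 30):
        closed = 2*lerchphi(-1, 1, n + 1) + (a0 - log(4))*(-1)**n
        print(n, seq[n], closed)        # the two columns agree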
Find the polynomials which satisfy the condition $f(x)\mid f(x^2)$ | The polynomials with $f(x)\mid f(x^2)$ are closed under multiplication. In fact, if $f$ is any such polynomial and $g(x)\mid f(x^2)/f(x)$, then $f(x)g(x)$ is also such a polynomial.
WLOG assume $x\nmid f(x)$. The relation $f(x)\mid f(x^2)$ implies
$$ \{\alpha:f(\alpha)=0\}\subseteq \{\beta:f(\beta^2)=0\}=\{\pm\sqrt{\beta}:f(\beta)=0\}.$$
Let $\alpha$ be a zero. Then $\alpha=\pm\sqrt{\beta}$ for some other zero $\beta$, or equivalently $\alpha^2=\beta$. Put another way, the square of any zero is also a zero, so the set of zeros is closed under squaring. Therefore we have a sequence of zeros $\alpha,\alpha^2,\alpha^4,\cdots$ which must eventually repeat since $f$ has finitely many zeros, in which case $\alpha^{2^n}=\alpha^{2^m}$ eventually, so $\alpha^{2^r(2^s-1)}=1$ and thus $\alpha$ is a root of unity.
We can restrict our attention to $f$ that cannot be written as a nontrivial product of other polynomials with this property. I don't think there's a very nice characterization of the possible set of zeros of $f$ beyond "start with a root of unity and keep squaring until you get a repeat." For example, over $\mathbb{C}$ we have that $f(x)=(x-i)(x+1)(x-1)$ is such a polynomial; it includes a kind of "cycle" of length two $\{-1,1\}$ in its zero set, but it also has a kind of "hangnail" at the front, namely $i$. If we think about this in terms of integers mod $n$, we can write $n=2^km$ and use the Chinese Remainder Theorem to track what $x\mapsto 2x$ does to an integer mod $n$; the sequence is eventually periodic but at the beginning the $\mathbb{Z}/2^k\mathbb{Z}$ coordinate may be nonzero.
To get the $f$ with real coefficients, just make sure the set $\{\alpha,\alpha^2,\alpha^4\cdots\}$ is closed under conjugation; if it isn't, then adjoin all their conjugates to construct an $f$ with real coefficients.
And to get $f$ with integer coefficients, if $f$ has an $n$th root of unity as a zero then $f$ is divisible by the cyclotomic polynomial $\Phi_n(x)$. If $n$ is even, then squaring primitive $2n$th roots of unity yields primitive $n$th roots of unity, meaning both $\Phi_{n}(x)$ and $\Phi_{n/2}(x)$ are factors. Writing $n=2^km$, this means it is divisible by $\Phi_{2^km}(x)\Phi_{2^{k-1}m}(x)\cdots\Phi_m(x)$. One can check these polynomials satisfy the condition. |
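Both examples can be checked mechanically, assuming sympy:

    from sympy import symbols, I, expand, div, cyclotomic_poly

    x = symbols('x')

    f = expand((x - I)*(x + 1)*(x - 1))        # the complex example above
    q, r = div(f.subs(x, x**2), f, x)
    print(r)  # 0, so f(x) divides f(x^2)

    g = cyclotomic_poly(4, x)*cyclotomic_poly(2, x)*cyclotomic_poly(1, x)  # x^4 - 1
    q, r = div(g.subs(x, x**2), g, x)
    print(r)  # 0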
finding the limit of = $\ln(\frac{2x}{\pi})\cdot e^{\frac{1}{\cos x}}$ | Let $x=\frac{\pi}2-y$ with $y\to 0^+$ then
$$\displaystyle \lim_{x \rightarrow \frac{\pi}{2}^-} \ln\left(\frac{2x}{\pi}\right)\cdot e^{\frac{1}{\cos x}}=\lim_{y \rightarrow 0^+} \ln\left(1-\frac{\pi y}{2}\right)\cdot e^{\frac{1}{\sin y}}=-\infty$$
indeed
$$\ln\left(1-\frac{\pi y}{2}\right)\cdot e^{\frac{1}{\sin y}}=\frac{\ln\left(1- \frac{\pi y}{2} \right)}{\frac{\pi y}{2}}\cdot \frac{\pi y}{2}e^{\frac{1}{\sin y}}$$
and
$$\frac{\ln\left(1- \frac{\pi y}{2} \right)}{\frac{\pi y}{2}}\to -1$$
$$\frac{\pi y}{2} e^{\frac{1}{\sin y}}=\frac{\pi}{2}\frac{e^{\frac{1}{\sin y}}}{\frac{1}{\sin y}}\frac{y}{\sin y}\to +\infty$$ |
Eigenvalues of $AB$ and $BA$ where $A$ and $B$ are square matrices | Here is a proof similar to what the OP has tried:
Let $\lambda$ be any eigenvalue of $AB$ with corresponding eigenvector $x$. Then
$$ABx = \lambda x \Rightarrow \\
BABx = B\lambda x \Rightarrow\\
BA(Bx) = \lambda (Bx)
$$
which implies that $\lambda$ is an eigenvalue of $BA$ with a corresponding eigenvector $Bx$, provided $Bx$ is non-zero. If $Bx = 0$, then $ABx = 0$ implies that $\lambda = 0$.
Thus, $AB$ and $BA$ have the same non-zero eigenvalues. |
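A quick numeric illustration, assuming numpy (here $B$ is made singular on purpose, so $0$ is among the shared eigenvalues):

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))
    B[:, 0] = 0                      # make B singular

    ev_AB = np.sort_complex(np.linalg.eigvals(A @ B))
    ev_BA = np.sort_complex(np.linalg.eigvals(B @ A))
    print(np.allclose(ev_AB, ev_BA))  # True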
Principle of uniform boundedness | The family $T(t), t\in [0,1]$ is a family of continuous operators.
For every $x\in X$, the image of $f_x$ is compact since $f_x$ is continuous and the image of a compact set by a continuous map is compact. We deduce that $f_x([0,1])=\{T(t)(x),t\in [0,1]\}$ is bounded since it is compact. We can apply the uniform boundedness principle to show that the family $T(t)$ is bounded. |
Two dice conditional probability | You've made a conceptual error, not a calculation error. In the second method, you have treated $(1\text{st } 2\mid 2\text{nd } 4) \,\,$ and $\,\,(2\text{nd } 2\mid 1\text{st }4)$ as if they are events which can have a union or an intersection, which can both occur or not both occur, etc, in other words, as events that live in the same sample space. But they do not.
Suppose we call $\Omega$ the original sample space with $36$ outcomes. The confusing thing (when you're first getting acquainted with the subject) is that while $(1\text{st } 2\mid 2\text{nd } 4) \,\,$ and $\,\,(2\text{nd } 2\mid 1\text{st }4)$ are both subsets of $\Omega$, they're conditioned on different things, so they actually live in different probability spaces. That's why you can't use inclusion/exclusion to calculate the probability of $(1\text{st } 2\mid 2\text{nd } 4) \,\, \bigcup \,\,(2\text{nd } 2\mid 1\text{st }4)$. The probability function that eats things like $(1\text{st } 2\mid 2\text{nd } 4)$ is a completely different probability function than the one that eats things like $(2\text{nd } 2\mid 1\text{st }4)$. So you can no longer do the same tricks with probabilities of their union, intersection, etc.
More concretely, you overcounted the probability because by separating into the two cases, you treated it as if we rolled twice, and the first time we knew that the second die was a four and the second time we knew that the first die was a four, and as if we were looking for the probability that we get a two and a four exactly once. But that's not the information we have, of course.
You could accurately calculate
$$\mathbb{P}(2 \text{ and } 4 \mid \text{ One is a }4) = \mathbb{P} (2 \text{ and } 4 \mid \text{ First is }4) \cdot \mathbb{P}(\text{First is a }4 \mid \text{ One is a }4)$$
$$ + \mathbb{P} (2 \text{ and } 4 \mid \text{ First is not a }4) \cdot \mathbb{P}(\text{First is not a }4 \mid \text{ One is a }4)$$
$$= \frac16 \cdot \frac{6}{11} + \frac{1}{5}\cdot \frac{5}{11}= \frac{2}{11}.$$ |
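Brute-force enumeration of the $36$ outcomes confirms $\frac{2}{11}$:

    from fractions import Fraction
    from itertools import product

    outcomes = list(product(range(1, 7), repeat=2))
    one_is_4 = [o for o in outcomes if 4 in o]            # 11 outcomes
    two_and_4 = [o for o in one_is_4 if sorted(o) == [2, 4]]
    print(Fraction(len(two_and_4), len(one_is_4)))        # 2/11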
Reference for an unknotting move | The way that I would approach this problem would be using the machinery of claspers. Below, I use clasper language freely because I know that you are familiar with it. Your move implies that clasper edges behave like combinatorial objects- I can delete twists in them, and pass clasper edges through one another.
Begin by untying the knot using clasper surgery (for example using Y-claspers only, by unknotting using delta-moves, as in Murakami-Nakanishi/ Matveev). I don't need to remember twisting and linkage of edges thanks to your move- only the position of the leaves, and the combinatorial structure of the clasper (uni-trivalent graphs which end on the leaves) matters.
Next, I notice that the result of a Ck-move, if it happens inside a small ball with one unknotted line segment and no other clasper leaves inside, is ambient isotopic to a line segment whatever the combinatorial ordering of the leaves I think (draw it! The picture unravels "from the left". An illustration is Diagram 32 of http://www.math.kobe-u.ac.jp/publications/rlm15.pdf). [Edit: This is true for some orderings and not others, so more work is needed at this step] I also notice that I can pass one leaf through another "at the cost" of introducing a clasper-move with one more trivalent vertex, and that I can perform a "topological IHX" move inside a clasper to reduce it to "comb form", in which it represents a Ck move. This is enough- I choose a small ball, choose a clasper C, and pull all leaves of C inside the small ball. IHX so it becomes a Ck-move (maybe with leaves arranged in a strange order), and cancel it. I get left with a diagram with one fewer clasper (although the remaining claspers may be more complicated). Induction finishes. [Edit: It isn't clear that this process "converges"- see comments.]
This is one thing that clasper machinery is really well suited for, I think- it's the right language to discuss unknotting moves. Choose a clasper decomposition of the knot or link (replacing it by the unknot, with some tangled web of claspers inside it), identify moves on claspers induced by moves on knots, and show that they suffice to untangle the web, by pulling leaves into standard positions. To my taste, this leads to the nicest proof of "delta moves unknot". |
Why are factor groups important? | Among many other reasons:
Factor groups of $G$ capture all possible images of $G$ under homomorphisms (via the Isomorphism Theorems);
Factor groups of $G$ are "smaller" than $G$, so one may hope that they may be simpler to study than $G$ itself. The Isomorphism Theorems allow us to "import" a certain amount of information from factor groups to the groups themselves.
If you think of groups in terms of Galois Theory, with $G$ being the Galois group of a Galois field extension $K/F$, then the factor groups of $G$ correspond precisely to the Galois groups of subextensions $E/F$, $E\subseteq K$, that are Galois over $F$. |
Solve equations $x^2+x+1\equiv 0(\mod 7)$ and $2x-4\equiv 0(\mod 6)$ | As $(7,4)=1$, multiplying by $4$ gives
$$x^2+x+1\equiv0\pmod7\iff(2x+1)^2\equiv-3\equiv4$$
$\implies$
either $2x+1\equiv2\iff2x\equiv1\equiv8\iff x\equiv4\pmod7\ \ \ \ (1)$
or $2x+1\equiv-2\iff2x\equiv-3\equiv4\iff x\equiv2\pmod7\ \ \ \ (2)$
and we have $2x\equiv4\pmod6\iff x\equiv2\pmod3\ \ \ \ (3)$
Apply CRT on $(1),(3)$
and combining $(2)$ and $(3)$: $x\equiv2$ both $\pmod 3$ and $\pmod 7$, so $\operatorname{lcm}(3,7)=21\mid(x-2)$ |
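The CRT step can be checked mechanically, assuming sympy; brute force over one period mod $21$ agrees:

    from sympy.ntheory.modular import crt

    print(crt([7, 3], [4, 2]))   # (11, 21): x = 11 (mod 21) from (1),(3)
    print(crt([7, 3], [2, 2]))   # (2, 21):  x = 2  (mod 21) from (2),(3)

    sols = [x for x in range(21)
            if (x*x + x + 1) % 7 == 0 and (2*x - 4) % 6 == 0]
    print(sols)                  # [2, 11]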
Unif~(-1,1) and finding min and max for two independent random variables | Hint: Use the independence to get the cumulative distribution function:
$$P(\max(X,Y) \leq z) = P(\{X \leq z\}\cap \{Y \leq z\}) = P(X \leq z) P(Y \leq z).$$
Differentiate it when you are done to get the probability density function. |
Finding an orthonormal basis for a matrix | I think you are confused about the definition of "matrix representation of $E$" with respect to a basis. We regard $E$ as a linear transformation and ask "What does it do to the basis vectors of a given basis?" For the standard basis,
$$
E(1,0)=1/25(9,12)
$$
and
$$
E(0,1)=1/25(12,16).
$$
These vectors (regarded as column vectors) are the columns of the matrix representation of $E$ with respect to the standard basis for $\mathbb{R}^2$:
$$1/25
\begin{pmatrix}
9&12\\
12&16
\end{pmatrix}.
$$
Consider what $E$ does to the eigenvectors (still with respect to the standard basis):
$$
E(3/5,4/5)=(3/5,4/5)
$$
and
$$
E(-4/5,3/5)=(0,0).
$$
If we form a new basis from these eigenvectors,
we have
$$
(3/5,4/5)=1(3/5,4/5)+0(-4/5,3/5),
$$
so that $(3/5,4/5)$ is represented by $(1,0)$ in the new basis.
Similarly,
$$(-4/5,3/5)=0(3/5,4/5)+1(-4/5,3/5),$$ so that it is represented by
$(0,1)$ in the new basis.
Then, with respect to the new basis $E(1,0)=(1,0)$ and $E(0,1)=(0,0)$. The matrix that represents $E$ with respect to the new basis is then the matrix with these vectors as columns:
$$
\begin{pmatrix}
1&0\\
0&0
\end{pmatrix}.
$$ |
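A numeric check, assuming numpy: conjugating by the change-of-basis matrix $P$ (eigenvectors as columns) produces exactly the second matrix.

    import numpy as np

    E = np.array([[9, 12], [12, 16]]) / 25
    P = np.array([[3/5, -4/5], [4/5, 3/5]])   # new basis vectors as columns

    print(np.linalg.inv(P) @ E @ P)           # ~ [[1, 0], [0, 0]]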
Find the length of the pool given the following conditions | $W - - - - - - - ->\bullet<= = 50m = = E$
$W <= = = = = = = = = = \bullet - - - - - - >E$
$W = = 20m = =>\bullet<- - - - - - - - E$
It can be solved creatively with a minimum of algebra.
From the diagram, it should be clear that when they first meet, together they have covered the length of the pool, and then twice the length the next time they meet.
Since their respective speeds are constant,
each has travelled twice the distance on the 2nd leg compared to the first leg.
Let the length of the pool be $L$ m, and we will track the $= = = =>$ swimmer.
In the first leg she swam $50m$, so in the 2nd leg she must have swum $100m$
In the 2nd leg, she swam $L-50+20,$ so equating
$L - 50 + 20 = 100$, which yields $L = 130m$ |
If $p$ is a prime number greater than $2$ and $k$ is a natural number so that $k<p$, how can I prove that? | Hint: Do you know that ${p \choose k}=\frac {p!}{k!(p-k)!}?$ |
Cardinality of equivalence relations in $\mathbb{N}$ | Your approach is sound. I think you can choose a better injection from $P(\mathbb{N})$ into equivalence relations. Let us say $0 \in \Bbb N$ and we will inject $P(\Bbb N \setminus\{0\})$ into the equivalence relations on $\Bbb N$. I would suggest you take a subset of $\Bbb N \setminus\{0\}$ to the equivalence relation that groups all members of the subset and $0$ into an equivalence class and leaves all the rest of $\Bbb N$ under the identity.
If we don't do the trick with $0$, all singleton subsets will be mapped to the identity relation and you don't have an injection. |
Partially ordered sets with specific property | A partially ordered set, in which each pair of elements has a least upper bound and a greatest lower bound, is called a lattice. (Unfortunately, the term lattice is also used in other senses, even in mathematics.) |
Find the derivative of equation in matrix form | Here is a sketch of the solution.
First, notice that the derivative of $E$ with respect to $\alpha$ is really the gradient $\nabla _\alpha E$.
We can also expand the expression
$$
E=\alpha^TL\alpha+\lambda(\alpha−\beta)^TD(\alpha−\beta)
$$
into
$$
E=\alpha ^T L \alpha + \lambda (\alpha ^T D \alpha-\alpha ^T D \beta-\beta^T D\alpha+\beta ^TD\beta )
$$
The terms without $\alpha$ in them will fall away, so we can ignore the $\beta^T D \beta$ term. Now we just have some inner products and quadratic forms. For the derivative of a quadratic form, suppose we take $a^T X a$ and take the gradient with respect to $a$. That would mean we take
$$
\begin{split}
\lim _{h \to 0}& \frac{(a+h\vec v)^TX(a+h\vec v)-a^TXa}h\\
&=\lim _{h \to 0} \frac{a^TXa+h\vec v^TXa+ha^TX\vec v+h^2\vec v^TX\vec v-a^TXa}h\\
&=\lim _{h \to 0} \frac{h\vec v^TXa+ha^TX\vec v+h^2\vec v^TX\vec v}h\\
&=\lim _{h \to 0} \left(\vec v^TXa+a^TX\vec v+h\vec v^TX\vec v\right)\\
&=\vec v^TXa+a^TX\vec v
\end{split}
$$
Now in order to obtain the gradient, we would simply take $\vec v$ to be each of the standard basis vectors, as that would be taking the partial with respect to each component of $\alpha$.
This gives us that
$$
\frac{\partial}{\partial a_i} a^TXa = [X^T a + X a]_i
$$
as we have reduced the derivative of the quadratic form to taking the derivative of a dot product, which will simply be the $i$th component of the vector we are taking the dot product with. You can apply this formula, and the derivative of the dot product, on the expanded expression to obtain the derivative of your energy function. |
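A finite-difference sanity check of $\nabla_a(a^TXa)=(X+X^T)a$, assuming numpy:

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.standard_normal((5, 5))
    a = rng.standard_normal(5)

    f = lambda a: a @ X @ a
    h = 1e-6
    # central differences along each standard basis vector
    num = np.array([(f(a + h*e) - f(a - h*e)) / (2*h) for e in np.eye(5)])
    print(np.allclose(num, (X + X.T) @ a, atol=1e-5))  # True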
Question about Logical implication | Suppose $A = \varnothing$. Then, for any truth value assignment $w$, $w(\psi) = 1$ (vacuously) for each and every $\psi \in A = \varnothing$
Why?
Because there are no $\psi \in A = \varnothing$. So the hypothesis is satisfied vacuously, because it is not possible for there to exist any $\psi \in \varnothing$ such that $w(\psi) \neq 1$.
Hence $\varnothing \models \varphi$ if and only if for each truth value assignment $w$, $w(\varphi) = 1$, too. This holds if and only if $\varphi$ is a tautology. |
Is there an orthonormal basis for $L^2([0,1])$ consisting of only even-degree polynomials? | The functions $1$, $x^2$, $x^4,\ldots,x^{2n},\ldots$ are linearly independent, so the Gram-Schmidt process does yield an orthonormal sequence with
$f_n(x)=\sum_{i=0}^n a_{n,i}x^{2i}$, and the $a_{n,i}$ real. Is this system complete?
The system is complete iff the polynomials in $x^2$ are dense in
$L^2[0,1]$. By the Stone-Weierstrass theorem, continuous functions on $[0,1]$
can be uniformly approximated by polynomials in $x^2$, and so also approximated
in $L^2[0,1]$ by polynomials in $x^2$. As the continuous functions
are dense in $L^2[0,1]$ then so are the polynomials in $x^2$. The orthonormal
sequence is complete. |
Monomorphism in the category of schemes versus the category of $S$-schemes | This works very generally in any category. Let $\mathcal{C}$ be a category and $S$ be an object in $\mathcal{C}$. Then given objects $a:A\to S$ and $b:B\to S$ in the slice category over $S$, a map $i:A\to B$ is monic in the slice category iff it is monic in $\mathcal{C}$. Indeed, suppose $f,g:C\to A$ are such that $if=ig$. Consider $C$ as an object of the slice category via the map $bif=big:C\to S$. Note then that $f$ and $g$ become morphisms in the slice category, since $a=bi$ so $af=bif$ and $ag=big$. So every pair of parallel arrows in $\mathcal{C}$ coequalized by $i$ gives a pair of parallel arrows in the slice category coequalized by $i$, and so $i$ is monic in $\mathcal{C}$ iff it is monic in the slice category. |
How do multiply the nabla operator by $f$? | So what I understand is that you have trouble calculating the curl of a given vector field
$$
F_x = x\qquad F_y=-y \qquad F_z = 0
$$
In Wikipedia, for example, you can find the quite straightforward formula for the curl:
$$
\nabla\times F = \left( \frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}\ \right) \hat{i}
+\left( \frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x}\ \right) \hat{j}
+ \left( \frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\ \right) \hat{k}
$$
As you see, there are no terms $\frac{\partial F_x}{\partial x}$ or $\frac{\partial F_y}{\partial y}$, so therefore the curl is zero. |
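The same computation done symbolically, assuming sympy:

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    Fx, Fy, Fz = x, -y, sp.Integer(0)

    curl = (sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y))
    print(curl)  # (0, 0, 0)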
On absolutely continuous measures.. | No. For example, the Lebesgue measure is absolutely continuous with respect to the gaussian probability measure $$\frac1{\sqrt{2\pi}}e^{-x^2/2}dx.$$ |
Convergence of a family piecewise linear approximants | Assume that $f'$ is bounded, i.e., there exists an $L>0$, such that
$$
|\,f'(x)|\le L,
$$
for all $x\in\mathbb R$.
Fix $\varepsilon>0$ and let $x\in\mathbb R$, then $x\in[i\varepsilon,(i+1)\varepsilon]$, for some $i\in\mathbb Z$.
$$
f^\varepsilon (x)-f(x) = f(i\varepsilon) + (x - i\varepsilon) \frac{f((i+1)\varepsilon) - f(i\varepsilon)}{\varepsilon}-f(x)
=-\int_{i\varepsilon}^x f'(t)\,dt+
\frac{x-i\varepsilon}{\varepsilon}\int_{i\varepsilon}^{(i+1)\varepsilon}f'(t)\,dt \\=\frac{1}{\varepsilon}\left((x-i\varepsilon)\int_{i\varepsilon}^{x}f'(t)\,dt
+(x-i\varepsilon)\int_{x}^{(i+1)\varepsilon}f'(t)\,dt-\varepsilon\int_{i\varepsilon}^xf'(t)\,dt\right)\\= \frac{1}{\varepsilon}\left((x-i\varepsilon)\int_{x}^{(i+1)\varepsilon}f'(t)\,dt-\big((i+1)\varepsilon-x\big)\int_{i\varepsilon}^xf'(t)\,dt\right),
$$
and hence
$$
|\,f^\varepsilon (x)-f(x)|\le \frac{2L|x-i\varepsilon||(i+1)\varepsilon-x|}{\varepsilon}\le \frac{L\varepsilon}{2},
$$
and hence uniform convergence. |
$f\colon [a,b]\to\mathbb{R}$ Borel-measurable function: What does that mean? | f is borel measurable if the set $$\left\{x \in [a,b] : f(x) <t \right\} \in \mathcal{B}(\mathbb{R})$$ $\forall t \in \mathbb{R}$
Thats another way of looking at it |
Showing that $(A_{ij})=\left(\frac1{1+x_i+x_j}\right)$ is positive semidefinite | Edit: I first foolishly proved that $A$ is positive definite in general. But that's obviously wrong, as $x_1=\ldots=x_n=0$ yields a rank $1$ matrix. So I removed the wrong argument.
Consider the matrix
$$
A_t=\left(t^{x_i+x_j}\right)_{i,j}
$$
and show this is a continuous path of symmetric positive semidefinite matrices for $t\in[0,1]$.
To see this, introduce the matrix
$$
B_t=\left(\frac{t^{x_j}}{\sqrt{n}}\right)_{i,j}
$$
and observe that
$$
A_t=B_t^*B_t.
$$
Then, integrating between $0$ and $1$, you find
$$
\int_0^1A_tdt=A
$$
where integration is performed componentwise.
Note that $A$ is obviously symmetric.
Now fix a vector $X$. Since $(X,A_tX)\geq 0$ for all $t\in[0,1]$, the integration of this continuous function yields
$$\int_0^1(X,A_tX)dt=(X,AX)\geq 0$$
for all $X$.
So $A$ is positive semidefinite. |
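A quick numeric check, assuming numpy and nonnegative $x_i$ (so the integral representation applies):

    import numpy as np

    rng = np.random.default_rng(4)
    x = rng.uniform(0, 5, size=6)
    A = 1.0 / (1.0 + x[:, None] + x[None, :])

    print(np.linalg.eigvalsh(A).min() >= -1e-12)  # True: positive semidefinite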
Nonexistence of the lcm(p, px) | I don't think your conclusion is correct that you have to be able to find an infinite decreasing series of common multiples of $p$ and $px$. You're forgetting an important property of least common multiples!
Hint: If $Q(x)$ is a least common multiple of $p$ and $px$, then by definition it would have the property that for any common multiple $S(x)$ of $p$ and $px$, necessarily $Q\mid S$ within the ring $R$. Can you think of an element of $R$ that is a multiple of both $p$ and $px$, but is not divisible by $p^2x$? |
Another beautiful arctan integral $\int_{1/2}^1 \frac{\arctan\left(\frac{1-x^2}{7 x^2+10x+7}\right)}{1-x^2} \, dx$ | Substituting $x\to\frac{1-x}{1+x}$ is usually a first reaction of mine. Here it works surprisingly well: the integral is equivalent to
$$I=\int_{1/2}^1 \frac{\tan^{-1}\left(\frac{1-x^2}{7 x^2+10x+7}\right)}{1-x^2} \, dx=\frac12\int_0^{\frac13}\frac{\tan^{-1}\left(\frac{x}{x^2+6}\right)}{x} dx.$$
Integrating by parts,
$$I=-\frac12\ln3 \tan^{-1}\left(\frac{3}{55}\right)-\frac12\int_0^{\frac13} \frac{6-x^2}{x^4+13x^2+36}\,\ln x \, dx$$
This looks manageable. We can decompose $\displaystyle \, \frac{6-x^2}{x^4+13x^2+36}=\frac{2}{x^2+4}-\frac{3}{x^2+9} $
so that
$$\int_0^{\frac13} \frac{6-x^2}{x^4+13x^2+36}\,\ln x \, dx=2\int_0^{\frac13} \frac{\ln x}{x^2+2^2}\,dx-3\int_0^{\frac13}\frac{\ln x}{x^2+3^2}\,dx$$
Now these integrals may also be done by partial fractions (whence emerge the complex stuff) and the antiderivative of the remaining parts is easily found in terms of logarithms and dilogarithms.
So the logarithm and $\tan^{-1}$ cancel at the end, leaving:
$$ I=\frac12\Im\operatorname{Li_2}\left(\frac1{9i}\right)-\frac12\Im\operatorname{Li_2}\left(\frac1{6i}\right)$$
Edit
The final cancellation and Chris' comment below suggest a detour to the integration by parts and partial fractions decomposition:
We just notice that $\displaystyle \tan^{-1}\left(\frac{x}{x^2+6}\right)=\tan^{-1}\left(\frac{x}{2}\right)-\tan^{-1}\left(\frac{x}{3}\right)$, whence
$$I=\frac12\int_0^{\frac13}\frac{\tan^{-1}\left(\frac{x}{x^2+6}\right)}{x} dx=\frac12\int_0^{\frac13}\frac{\tan^{-1}\left(\frac{x}{2}\right)-\tan^{-1}\left(\frac{x}{3}\right)}{x}dx\\=\frac12\int_0^{\frac16}\frac{\tan^{-1}x}{x}dx-\frac12\int_0^{\frac19}\frac{\tan^{-1}x}{x}dx=\frac{1}{2}\operatorname{Ti}_2\left(\frac{1}{6}\right)-\frac12\operatorname{Ti}_2\left(\frac{1}{9}\right).$$ |
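A numeric confirmation of the final closed form, assuming mpmath (with $\operatorname{Ti}_2(t)=\Im\operatorname{Li}_2(it)$):

    from mpmath import mp, quad, atan, polylog, mpf, im, j

    mp.dps = 30
    f = lambda x: atan((1 - x**2) / (7*x**2 + 10*x + 7)) / (1 - x**2)
    print(quad(f, [mpf(1)/2, 1]))

    Ti2 = lambda t: im(polylog(2, j*t))
    print(Ti2(mpf(1)/6)/2 - Ti2(mpf(1)/9)/2)   # agrees with the integral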
How do we compute $\sqrt[3]{x_1} +\sqrt[3]{x_2} $ using the fact that $x_1 + x_2 = 4 , x_1x_2 = -1$? | Note that: $(\sqrt[3]{x_1}+\sqrt[3]{x_2})^3=(x_1+x_2)+3\sqrt[3]{x_1x_2}(\sqrt[3]{x_1}+\sqrt[3]{x_2})$
Putting $t= \sqrt[3]{x_1}+\sqrt[3]{x_2}$ yields: $t^3+3t-4=0$. The last cubic equation has one real root $t=1$ which gives the needed result. |
Existence of integer solution to 63x+70y+15z=2010 | Solving the equation certainly gives an affirmative answer.
$$
z=-\frac{21x}{5}-\frac{14y}{3}+134,
$$
hence
the solution in integers is $x=5n$, $y=3m$, $z=-21n-14m+134$, where $n$, $m\in\mathbb{Z}$. |
Have "pure" multisets been studied before? | I wouldn't say it's been studied extensively, to my knowledge, but this paper approaches the subject by surveying the literature, defining a first-order theory, and proving relative consistency with $\mathsf{ZFC}$. |
Probability that are solutions-modulo | The equation has a solution when $\gcd(a, 15)$ divides $b$, and it has a unique solution when $\gcd(a, 15) = 1$. Can you proceed from here? |
How to prove that $f(x) = \frac{-2x+1}{(2x-1)^2-1}$ is one-to-one on $(0,1)$? | You can't prove that, because it is not true. For instance,$$f\left(\frac{1-\sqrt5}4\right)=f\left(\frac{1+\sqrt5}4\right)=1.$$ |
Verifying a Series Solution to Dirichlet's Problem via separation of variables | $\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
The general solution of the $\ds{2D}$-Laplace Equation is given by
\begin{align}
\mrm{u}\pars{r,\theta} & =
\pars{A\,{\theta \over 2\pi} + B}\bracks{C\ln\pars{r \over p} + D}
\\[5mm] & +
\sum_{n = 1}^{\infty}\left\{\bracks{%
A_{n}\pars{r \over p}^{n} +
B_{n}\pars{p \over r}^{n}}\sin\pars{n\theta}\right.
\\[3mm] & \left.\phantom{\sum_{n = 1}^{\infty}\!\!\braces{}}+
\bracks{%
C_{n}\pars{r \over p}^{n} +
D_{n}\pars{p \over r}^{n}}\cos\pars{n\theta}\right\}\label{1.a}\tag{1.a}
\end{align}
where $\ds{A, B, C, D, \braces{A_{n}}, \braces{B_{n}}, \braces{C_{n}}\ \mbox{and}\ \braces{D_{n}}}$ are constants which are determined by the boundary conditions.
Since the solution must be invariant under $\ds{\theta \mapsto \theta + 2\pi}$, I'll conclude that $\ds{A = 0}$. In such a case, $\ds{B}$ is 'redundant' such that I can set $\ds{B = 1}$. \eqref{1.a} is reduced to:
\begin{align}
\mrm{u}\pars{r,\theta} & =
C\ln\pars{r \over p} + D +
\sum_{n = 1}^{\infty}\left\{\bracks{%
A_{n}\pars{r \over p}^{n} +
B_{n}\pars{p \over r}^{n}}\sin\pars{n\theta}\right.
\\[3mm] & \left.\phantom{\sum_{n = 1}^{\infty}\!\!\braces{}AAAAAAAAAAA\,}+
\bracks{%
C_{n}\pars{r \over p}^{n} +
D_{n}\pars{p \over r}^{n}}\cos\pars{n\theta}\right\}
\label{1.b}\tag{1.b}
\end{align}
Then,
\begin{align}
\mrm{f}\pars{\theta} & =
-C\ln\pars{p} + D +
\sum_{n = 1}^{\infty}\bracks{%
\pars{A_{n}p^{-n} + B_{n}p^{n}}\sin\pars{n\theta} +
\pars{C_{n}p^{-n} + D_{n}p^{n}}\cos\pars{n\theta}}\label{2}\tag{2}
\\[2mm]
\mrm{g}\pars{\theta} & = D +
\sum_{n = 1}^{\infty}\bracks{%
\pars{A_{n} + B_{n}}\sin\pars{n\theta} +
\pars{C_{n} + D_{n}}\cos\pars{n\theta}}
\label{3}\tag{3}
\end{align}
Integrating both members of \eqref{2} and \eqref{3} over $\ds{\pars{0,2\pi}}$:
\begin{equation}
\left\{\begin{array}{rcrcl}
\ds{-\ln\pars{p}C} & \ds{+} & \ds{D} & \ds{=} & \ds{{1 \over 2\pi}\int_{0}^{2\pi}\mrm{f}\pars{\theta}\,\dd\theta}
\\[2mm]
&& \ds{D} & \ds{=} & \ds{{1 \over 2\pi}\int_{0}^{2\pi}\mrm{g}\pars{\theta}\,\dd\theta}
\end{array}\right.
\end{equation}
Those relations determine $\ds{C\ \mbox{and}\ D}$.
Similarly,
\begin{align}
\int_{0}^{2\pi}\mrm{f}\pars{\theta}\sin\pars{n\theta}\,{\dd\theta \over \pi} & =
A_{n}p^{-n} + B_{n}p^{n}
\\[2mm]
\int_{0}^{2\pi}\mrm{f}\pars{\theta}\cos\pars{n\theta}\,{\dd\theta \over \pi} & =
C_{n}p^{-n} + D_{n}p^{n}
\\[2mm]
\int_{0}^{2\pi}\mrm{g}\pars{\theta}\sin\pars{n\theta}\,{\dd\theta \over \pi} & =
A_{n} + B_{n}
\\[2mm]
\int_{0}^{2\pi}\mrm{g}\pars{\theta}\cos\pars{n\theta}\,{\dd\theta \over \pi} & =
C_{n} + D_{n}
\end{align}
Those equations determine $\ds{\braces{A_{n}},\braces{B_{n}},\braces{C_{n}}\ \mbox{and}\ \braces{D_{n}}}$. |
Proving that every product of $n$ consecutive integers contains its prime factors at least as many times as $n!$ does | As shown in equation $(3)$ of this answer, the number of factors of $p$ in $n!$ is given by Legendre's Formula:
$$
v_p(n!)=\frac{n-\sigma_p(n)}{p-1}\tag{1}
$$
where $\sigma_p(n)$ is the sum of the digits of the base-$p$ representation of $n$. Therefore, the number of factors of $p$ in $\frac{(n+k)!}{k!}$ is
$$
v_p\left(\frac{(n+k)!}{k!}\right)=\frac{n-\sigma_p(n+k)+\sigma_p(k)}{p-1}\tag{2}
$$
The difference of $(1)$ and $(2)$ is
$$
v_p\left(\frac{(n+k)!}{k!}\right)-v_p(n!)=\frac{\sigma_p(n)+\sigma_p(k)-\sigma_p(n+k)}{p-1}\tag{3}
$$
and $(3)$ is the number of carries when adding $n$ and $k$, and therefore, greater than or equal to $0$. |
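A small check of $(1)$-$(3)$ and the carry count, in plain Python:

    def digit_sum(m, p):
        # sum of the base-p digits of m
        s = 0
        while m:
            s += m % p
            m //= p
        return s

    def carries(n, k, p):
        # number of carries when adding n and k in base p
        return (digit_sum(n, p) + digit_sum(k, p) - digit_sum(n + k, p)) // (p - 1)

    p, n, k = 3, 25, 17
    lhs = (n - digit_sum(n + k, p) + digit_sum(k, p)) // (p - 1) \
          - (n - digit_sum(n, p)) // (p - 1)      # (2) minus (1)
    print(lhs, carries(n, k, p))                  # equal, and >= 0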
How can you find the cubed roots of $i$? | Using Euler's formula, which states
$$
e^{i \theta} = \cos \theta + i \sin \theta
$$
we will see that
$$
i = 0 + i \cdot 1 = \cos \left( \frac{\pi}{2} + 2n \pi \right) + i \sin \left( \frac{\pi}{2} + 2n \pi \right) = e^{i \left(\frac{ \pi}{2} + 2n \pi \right)}
$$
for all integers $n$. Thus, if $z^3 = i$, then
$$z = \exp\left[ i \left(\frac{\pi}{6}+\frac{2n\pi}{3}\right)\right]$$ for all integers $n$. |
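The three distinct roots, computed numerically for $n=0,1,2$:

    import cmath

    roots = [cmath.exp(1j*(cmath.pi/6 + 2*n*cmath.pi/3)) for n in range(3)]
    for z in roots:
        print(z, z**3)   # each cube is i (up to rounding)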
A Group That Span A Space | The argument is correct - but note that although the number of element
in a basis is minimal the sentence "The basis is defined to be that
smallest group that span that space" is a bit wrong since there
are many sets of minimal cardinality that spans the space and not
only one.
Indeed - if $A$ spans $V$ then $\alpha A:=\{\alpha a\mid a\in A\}$
also spans $V$ for all $\alpha\in\mathbb{F}$ |
Does the dual basis to some basis of $T^*_pM$ looks localy like a coordinate chart? | On the first quadrant in $\mathbb{R}^2$, let $\alpha_1 = y dx$ and $\alpha_2 = x dy $. The form a basis since $x\neq 0 \neq y$ in the first quadrant.
Use the identity chart to compute $X_1$ and $X_2$, getting $X_1 = y \partial_x$ and $X_2 = x \partial_y$.
For any smooth $f$, we compute: \begin{align*} [X_1,X_2]f &= X_1(X_2(f)) - X_2(X_1(f))\\ &= X_1(x f_y) - X_2(y f_x) \\ &= yf_y + yxf_{yx} -(xf_x + xyf_{xy})\\ &= yf_y - xf_x.\end{align*}
So, $[X_1,X_2] =y \partial_y - x\partial_x$, which is nonzero at every point in our manifold. |
Alternate complex binomial series sum | $\text{Lemma: } \sum_{k=0}^{n-1} k x^k=\frac{(n-1)x^n+1}{(x-1)}-\frac{(x^n-1)}{(x-1)^2}
$$\text{Proof: }\sum_{k=0}^{n-1} x^k=\frac{x^n-1}{x-1}\implies\sum_{k=0}^{n-1} k x^{k-1}=\frac{n(x-1)x^{n-1}-(x^n-1)}{(x-1)^2}$
$$\sum^{2n-1}_{r=1}(-1)^{r-1}\cdot r\cdot \frac{1}{\binom{2n}{r}}=\sum^{2n-1}_{r=1}(-1)^{r-1}\cdot r\cdot(2n+1)\int_0^1t^{2n-r}(1-t)^{r}dt$$$$=-(2n+1)\int_0^1t^{2n}\sum_{r=1}^{2n-1}r\Big(1-\frac{1}{t}\Big)^rdt$$$$=-(2n+1)\int_0^1t^{2n}\bigg(\frac{(2n-1){\Big(1-\frac{1}{t}\Big)}^{2n}+1}{\frac{-1}{t}}-\frac{{\Big(1-\frac{1}{t}\Big)}^{2n}-1}{(\frac{-1}{t})^2}\bigg)dt$$
$$=(2n+1)\int_0^1\bigg({(2n-1)t{(1-t)}^{2n}+t^{2n+1}}+t^2{(1-t)}^{2n}-t^{2n+2}\bigg)dt$$
$$=(2n+1)\bigg(\frac{(2n-1)}{(2n+2)(2n+1)}+\frac{1}{2n+2}+\frac{2}{(2n+1)(2n+2)(2n+3)}-\frac{1}{2n+3}\bigg)=\frac{n}{n+1}$$
$\blacksquare$ |
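The identity is easy to confirm with exact rational arithmetic:

    from fractions import Fraction
    from math import comb

    for n in range(1, 15):
        s = sum(Fraction((-1)**(r - 1) * r, comb(2*n, r)) for r in range(1, 2*n))
        assert s == Fraction(n, n + 1)
    print("identity holds for n < 15")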
Is there a way to solve $ax + \sin(bx+c) = d$ analytically? | There is no closed form in terms of any generally-accepted mathematical functions, as far as I am aware. In particular, the fixed point of the cosine function, i.e. the sole real solution of
$$\cos(x) = x$$
which is equivalent to
$$\sin\left(\frac{\pi}{2} - x\right) = x$$
is famously non-explicit. Hence, there can be no explicit formula for general values of $a$, $b$, $c$, and $d$ for the solutions of the equation you give. |
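For what it's worth, the fixed point itself is easy to compute numerically, e.g. with mpmath:

    from mpmath import mp, cos, findroot

    mp.dps = 25
    print(findroot(lambda x: cos(x) - x, 0.7))  # ~ 0.7390851332151606...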
Multivariable local maximum proof | Property 2 is a statement about a matrix, but in your proof, you say 'the Hessian is smaller than $0$'. This doesn't quite follow.
If you want some details to fill out, you can start there - assuming you know that "a critical point of $f$ is a maximum if the determinant of the Hessian is smaller than zero" how can you connect property 2 to a negative determinant?
If you really wanted to tear this apart, you could start from scratch, and prove "a critical point of $f$ is a maximum if the determinant of the Hessian is smaller than zero". |
Delayed differential equations - conditions for a steady-state solution | To have a steady state response we need a stable system. Now calling $u(t) = x(t)-\frac{c}{a+b}$ (assuming $a +b \ne 0$ etc.) we have
$$
\dot{u}(t)=au(t)+bu(t-\tau)
$$
and after Laplace transforming
$$
U(s) = \frac{u(0)}{s-a-be^{-\tau s}}
$$
so $u(t)$ is stable as long as the poles of $U(s)$ lie in the left complex half-plane.
As $s-a-be^{-\tau s} = x-a-b e^{-\tau x}\cos(\tau y) + i\left(y+be^{-\tau x}\sin(\tau y)\right)$ we can construct some graphics to show the poles dependence on $a,b,\tau$
A first plot (omitted) shows the case $a = 1, b = -2, \tau = 1$,
and a second one the case $a = -2, b = 1, \tau = 1$.
The poles are at the intersections between the blue and red curves. In the first case we can verify that the response is unstable and in the second case the response is stable. In delayed DEs infinitely many poles appear, and we should ensure that none invades the positive-real-part half of the complex plane.
It seems that for this differential equation $a + b < 0$ is necessary for stability (otherwise $s-a-be^{-\tau s}$ has a real zero $s \ge 0$), though the first case above shows it is not sufficient. |
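A rough forward-Euler simulation of $\dot u(t)=au(t)+bu(t-\tau)$ with constant history (a sketch of mine, not a rigorous stability test) reproduces the two behaviours:

    def simulate(a, b, tau=1.0, T=30.0, dt=0.001):
        m = int(tau / dt)                 # delay expressed in steps
        u = [1.0] * (m + 1)               # history u = 1 on [-tau, 0]
        for k in range(int(T / dt)):
            u.append(u[-1] + dt * (a * u[-1] + b * u[-1 - m]))
        return u[-1]

    print(simulate(a=1, b=-2))   # huge in magnitude: unstable
    print(simulate(a=-2, b=1))   # near 0: stable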
Exists polynomial satisfying following? | If $\mathbb k$ is a field and the degree $d$ of the polynomial is not pre-specified, the answer to your question is affirmative.
Let us first consider the special case where $s$ is a scalar multiple of the identity matrix, say, $s=\lambda I$. In this case you are essentially asking if there exists a polynomial $f$ such that $f(0)=0$ and $f(a)=\lambda I$ when $a=u+\lambda I$.
If $\lambda=0$, we can simply take $f=0$. If $\lambda\ne0$, then $a$ is invertible. Therefore, by Cayley-Hamilton theorem, $a^{-1}=g(a)$ for some polynomial $g$. Let $f(x)=\lambda xg(x)$, we see that $f(0)=0$ and $f(a)=\lambda I$.
Now, consider the general case. Since $s$ and $u$ commute, by permutation we may assume that $s$ is a direct sum of sub-blocks of scalar matrices and $u$ is a block diagonal matrix such that its block structure conforms with $s$ and each sub-block is strictly upper triangular. In other words, we may assume that $s=\bigoplus_{j=1}^k \lambda_jI_{n_j}$ for some distinct $\lambda_j$s and that $u=\bigoplus_{j=1}^k u_j$, where each $u_j$ is a strictly upper triangular matrix of the same size as $I_{n_j}$.
For each $j$, let $a_j=u_j+\lambda_jI_{n_j}$ and let $f_j$ be a polynomial such that $f_j(0)=0$ and $f_j(a_j)=\lambda_j I_{n_j}$. We can now construct $f$ in the spirit of Lagrange interpolation. For any fixed $j$, define
$$
p_j(x)=\prod_{\stackrel{i=1}{i\ne j}}^k (x-\lambda_i)^{n_i}
.$$
It follows that $p_j$ annihilates every $a_i$ when $i\ne j$, and $p_j(a_j)$ is invertible. Therefore the matrix inverse of $p_j(a_j)$ is equal to $q_j(a_j)$ for some polynomial $q_j$. Now, if we define $f$ as
$$
f(x) = \sum_{j=1}^k p_j(x)q_j(x)f_j(x),
$$
we get $f(0)=0$ and $f(a_j)=\lambda_j I_{n_j}$ for each $j$. Consequently, $f(u+s)=s$. |
How to show $P(|X-E(X)|\leq x)=1\implies V(X)\leq x^2$ | Let $Y=X-\mathrm{E}[X]$. Then
$$
V(X)=\mathrm{E}[Y^2]=\mathrm{E}[Y^2\mathbf{1}_{|Y|\leqslant x}]\leqslant \mathrm{E}[x^2\mathbf{1}_{|Y|\leqslant x}]=x^2P(|Y|\leqslant x)=x^2.
$$ |
Simplest Optimization Framework for this Problem | Here is a simple nested loop which scans all region boundaries and selects the one with the highest number of region intersections:
// requires: using System.Collections.Generic;
int[] MaximizeIntersects(int[] x, int[] e)
{
int n = x.Length;
SortedSet<int> limits = new SortedSet<int>();
int[] y = { 0, 0 };
int intersects = 0;
for(int i = 0; i < n; i++)
{
limits.Add(x[i] - e[i]);
limits.Add(x[i] + e[i]);
}
foreach(int r in limits)
{
int ix = 0;
for(int i = 0; i < n; i++)
{
if (((x[i]-e[i]) <= r) && ((x[i]+e[i]) >= r))
{
ix++; // increment intersect count
}
}
if (ix > intersects)
{
// keep most intersecting boundary so far
y[0] = r;
intersects = ix;
}
else if (ix == intersects)
{
// the boundary following the best so far
y[1] = r;
}
}
return y;
}
Alternative solution with sub-quadratic complexity:
// requires: using System.Collections.Generic; using System.Linq;
int[] MaximizeIntersects(int[] x, int[] e)
{
    int n = x.Length;
    SortedSet<int> allBoundaries = new SortedSet<int>();
    SortedSet<int> boundaryEnds = new SortedSet<int>();
    Dictionary<int, int> startCounts = new Dictionary<int, int>();
    Dictionary<int, int> endCounts = new Dictionary<int, int>();
    int[] y = { 0, 0 };
    int intersects = 0;
    for (int i = 0; i < n; i++)
    {
        allBoundaries.Add(x[i] - e[i]);
        allBoundaries.Add(x[i] + e[i]);
        boundaryEnds.Add(x[i] + e[i]);
        increment(startCounts, x[i] - e[i], +1);
        increment(endCounts, x[i] + e[i], +1);
    }
    int ix = 0;
    int c;
    foreach (int r in allBoundaries)
    {
        if (startCounts.TryGetValue(r, out c))
        {
            ix += c; // regions starting at r cover it from here on
        }
        if (ix > intersects)
        {
            // keep most intersecting boundary so far
            y[0] = r;
            intersects = ix;
        }
        if (endCounts.TryGetValue(r, out c))
        {
            ix -= c; // regions ending at r cover nothing beyond it
        }
    }
    // end of region directly follows start of region
    y[1] = boundaryEnds.Where(be => be > y[0]).First();
    return y;
}
void increment(Dictionary<int, int> dic, int key, int delta)
{
int value;
if (!dic.TryGetValue(key, out value))
{
dic.Add(key, delta);
}
else
{
dic[key] = value + delta;
}
}
Alternative solution:
A MiniZinc constraint solver model:
% fixed input parameters
int: n = 3;
array[1..n] of int: x = [2, 4, 19];
array[1..n] of int: e = [1, 2, 0];
int: rMin = min(x) - max(e);
int: rMax = max(x) + max(e);
set of int: Bounds = rMin .. rMax;
% Region boundaries caclulated as set comprehensions
set of Bounds: BoundaryStarts = {x[i] - e[i] | i in 1..n};
set of Bounds: BoundaryEnds = {x[i] + e[i] | i in 1..n};
% single decision variable; must be start of a region
var BoundaryStarts: y;
function var int: no_of_intersects(var BoundaryStarts: r) =
sum([(r >= (x[i] - e[i])) /\ (r <= (x[i] + e[i]))| i in 1..n]);
solve maximize(no_of_intersects(y));
output [show(y) ++ " .. " ++ show(min([if r > fix(y) then r else rMax endif | r in BoundaryEnds]))];
Short but not exactly simple. Another drawback is the restriction to integer vectors. |
Foundation of Formal Logic | Here is a possible approach. Consider the minimalist language Brainfuck. This extremely stripped-down language comprises only eight commands and an instruction pointer. The first step is to train yourself to be able to execute Brainfuck programs. This does require some amount of intuition; you need the ability to read and write, at least well enough to be able to read and understand a description of the Brainfuck language. You need to be able to recognize a Brainfuck program when you see one, parse the symbols, and remember how to execute the commands using whatever writing method you have mastered. But you don't have to understand what a "symbol" is in general or what a "rule" is in general. You just have to master a very specific set of symbols and rules.
If you get this far, then in principle you have more than enough power to reproduce any formal logical proof, since Brainfuck is Turing complete. Of course, someone still has to code up logical axioms and rules in Brainfuck, but that's their problem, not yours. If they do their part, you can do your part by executing their Brainfuck programs. In this way, you can formally reproduce all of logic and mathematics, with a minimal reliance on non-formal intuition. (You won't understand anything, of course, but that doesn't seem to be your question.) |
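For concreteness, here is a minimal Brainfuck interpreter sketch in Python (my illustration, with no input command implemented); the sample program adds $2$ and $5$ and prints the digit 7:

    def run(prog, tape_len=30000):
        tape, ptr, pc, out = [0]*tape_len, 0, 0, []
        jumps, stack = {}, []
        for i, c in enumerate(prog):          # pre-match the brackets
            if c == '[': stack.append(i)
            elif c == ']': j = stack.pop(); jumps[i], jumps[j] = j, i
        while pc < len(prog):
            c = prog[pc]
            if   c == '>': ptr += 1
            elif c == '<': ptr -= 1
            elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
            elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
            elif c == '.': out.append(chr(tape[ptr]))
            elif c == '[' and tape[ptr] == 0: pc = jumps[pc]
            elif c == ']' and tape[ptr] != 0: pc = jumps[pc]
            pc += 1
        return ''.join(out)

    # cell0 = 2, cell1 = 5, add them, scale to ASCII '7' (55), print
    print(run('++>+++++[<+>-]++++++++[<++++++>-]<.'))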
Finding the maximum number with a certain Euler's totient value | This could be a hard problem, in general, but let's look at 1000.
$n$ can't be odd, since we'd have $2n\gt n$ and $\phi(2n)=\phi(n)$. Let $n=2m$.
$m$ can't have more than 3 distinct prime factors, since that would force $\phi(n)$ to be divisible by $2^4$, which it isn't.
If $p$ is prime and $p^2$ divides $n$ then $p$ divides $\phi(n)$ so $p$ is 2 or 5.
Now you can start looking at the numbers $2^r5^s$ for $r=1,2,3$ and $s=0,1,2,3$ to see which ones are $\phi(p)$ for some prime $p$ (other than 2 and 5, which can be treated separately).
So for example, $\phi(11)=10$, $\phi(101)=100$, $\phi(41)=40$, and so on. Once you have the whole (finite) list, you can test the ways of putting together products of these numbers (and powers of 2 and/or 5) to see how to get $\phi(n)=1000$ and which way maximizes $n$.
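Since $\phi(n)\ge\sqrt{n/2}$, any $n$ with $\phi(n)=1000$ satisfies $n\le 2\cdot 10^6$, so a plain totient sieve settles it (a sketch in Python):

    N = 2_000_000
    phi = list(range(N + 1))
    for p in range(2, N + 1):
        if phi[p] == p:                    # p is prime (untouched so far)
            for m in range(p, N + 1, p):
                phi[m] -= phi[m] // p      # multiply phi[m] by (1 - 1/p)

    hits = [n for n in range(1, N + 1) if phi[n] == 1000]
    print(max(hits), hits)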
It's a bit ad hoc, but I'm not sure that can be helped. |
X ~ Uniform distribution | X ~ Uniform(5, 10)
X ~ Uniform(a, b)
$$P(X > 6 \vert X < 9) = \frac{P(X > 6 \cap X < 9)}{P(X < 9)} = \frac{P(6 < X < 9)}{P(X < 9)}=\frac{\frac{9-6}{10-5}}{\frac{9-5}{10-5}}=0.75 $$
$$P(c < X < d) = \frac{(d - c)}{(b - a)}$$
$$P(6 < X < 9) = \frac{(9 - 6)}{(10 - 5)}$$
$$P(X < 9) = \frac{(9 - 5)}{(10 - 5)}$$
"c" gets value of "a" when not present. |
How to do statistic test if I know only one changed parameter? | From your description, I suppose you have $x = 13$ successes among $n = 15$ subjects and for the success probability $p$ you want to test $H_0: p = 0.5$ against $H_a: p > 0.5.$
Exact binomial test. If you are doing an exact binomial test, then the null distribution is $\mathsf{Binom}(n = 15, p = 1/2)$ and the
P-value of the test is the probability, under this distribution,
of getting 13 or more successes. You could use the binomial
pdf to find this probability as
$$P(X \ge 13) = \frac{{15\choose 13}}{2^{15}}+
\frac{{15\choose 14}}{2^{15}}+\frac{{15\choose 15}}{2^{15}} = 0.0037.$$
So the P-value of the test is $0.0037 < 0.01 = 1\%$ and you
can reject $H_0$ at the 1% level in favor of $H_a,$ concluding
that the therapy has a significantly positive effect.
In R statistical software you can compute this P-value directly
or by using the binomial PDF function dbinom or the binomial
CDF function pbinom as shown below:
sum(choose(15, 13:15)/2^15)
[1] 0.003692627
sum(dbinom(13:15, 15, .5))
[1] 0.003692627
1 - pbinom(12, 15, .5)
[1] 0.003692627
In R, the procedure binom.test gives the same P-value along
with additional information.
binom.test(x=13, n=15, alt="greater")
Exact binomial test
data: 13 and 15
number of successes = 13, number of trials = 15, p-value = 0.003693
alternative hypothesis:
true probability of success is greater than 0.5
95 percent confidence interval:
0.6365582 1.0000000
sample estimates:
probability of success
0.8666667
Test using normal approximation. With $n = 15$ subjects you have enough data to use a normal approximation
to test $H_0: p = 0.5$ against $H_a: p > 0.5.$
If $X \sim \mathsf{Binom}(n=15, p=0.5),$ then (as you say) $\mu = E(X) = np = 7.5$ and $\sigma = SD(X) = \sqrt{np(1-p)} = 1.936492.$ Then you can find the approximate P-value by
standardizing and using printed tables of the standard normal CDF. [The use of 12.5 instead of 13 is called a 'continuity correction'.]
$$P(X \ge 13) = P(X > 12.5) =
P\left(\frac{X - \mu}{\sigma}> \frac{12.5 -7.5}{1.936492}\right)\\
\approx P(Z > 2.58) = 0.0049.$$
The approximate P-value is not quite the same as the exact one above, but it still indicates rejection of $H_0$ at the 1% level.
The figure below shows the PDF of $\mathsf{Binom}(15, 0.5)$ along with the density curve of the approximating normal distribution. The exact P-value is the sum of the heights of
the blue bars to the right of the vertical dotted line.
The approximate P-value is the area under the normal density curve to the right of that line.
R code for figure:
x = 0:15; PDF = dbinom(x, 15, .5)
hdr = "PDF of BINOM(15, 0.5) with Normal Approx."
plot(x, PDF, type="h", ylim=c(0,.25), col="blue", lwd=2, main=hdr)
abline(h=0, col="green2")
abline(v=0, col="green2")
curve(dnorm(x, 15/2, sqrt(15/4)), add=T, col="red")
abline(v = 12.5, lwd=2, lty="dotted") |
Double Integral $\int\limits_0^a\int\limits_0^a\frac{dx\,dy}{(x^2+y^2+a^2)^\frac32}$ | Both the function that you are integrating as the region over which you are integrating it get unchanged if you exchange $x$ with $y$. Therefore, your integral is equal to$$2\int_0^a\int_0^x\frac1{(a^2+x^2+y^2)^{3/2}}\,\mathrm dy\,\mathrm dx.$$You can compute this integral using polar coordinates: $\theta$ can take values in $\left[0,\frac\pi4\right]$ and, for each $\theta$, $r$ can take values in $\left[0,\frac a{\cos\theta}\right]$. And\begin{align}\int_0^{\pi/4}\int_0^{a/\cos(\theta)}\frac r{(a^2+r^2)^{3/2}}\,\mathrm dr\,\mathrm d\theta&=\int_0^{\pi/4}\frac{1-\frac1{\sqrt{\sec ^2(\theta)+1}}}a\,\mathrm d\theta\\&=\frac1a\left(\frac\pi4-\int_0^{\pi/4}\frac1{\sqrt{\sec ^2(\theta)+1}}\,\mathrm d\theta\right)\\&=\frac\pi{12a}.\end{align}Note that the final equality is equivalent to$$\int_0^{\pi/4}\frac1{\sqrt{\sec^2(\theta)+1}}\,\mathrm d\theta=\frac\pi6.$$This can be justified as follows: you do $\theta=\arccos\left(\sqrt x\right)$ and $\mathrm d\theta=-\frac1{2\sqrt{x-x^2}}\,\mathrm dx$. Doing this, you will get\begin{align}\int_0^{\pi/4}\frac1{\sqrt{\sec^2(\theta)+1}}\,\mathrm d\theta&=\int_1^{1/2}-\frac1{2\sqrt{1-x^2}}\\&=\frac12\int_{1/2}^1\frac1{\sqrt{1-x^2}}\,\mathrm dx\\&=\frac12\left(\arcsin\left(1\right)-\arcsin\left(\frac12\right)\right)\\&=\frac12\left(\frac\pi2-\frac\pi6\right)\\&=\frac\pi6.\end{align} |
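Since the original integral is twice the value just computed, it equals $\frac{\pi}{6a}$; a numeric check for $a=1$, assuming mpmath:

    from mpmath import mp, quad, pi

    mp.dps = 20
    val = quad(lambda x, y: (x**2 + y**2 + 1)**(-1.5), [0, 1], [0, 1])
    print(val, pi/6)   # both ~ 0.5235987755982988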
how to show $-\int_{-\infty }^{\infty }\!b{{\rm e}^{b-{{\rm e}^{b}}}}\,{\rm d}b= \gamma$? | It works. The first integral in G+R, 8.367.4, is
$$ - \gamma = \int_0^\infty e^{-t} \log t \, dt $$
This is also the first example in https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant#Integrals
Substitute
$$ t = e^x $$
The one above is pretty famous. I will see if I can decipher their attribution; it is a sort of bibliographic code. Alright: FI was a three-volume calculus book in Russian, 1947-1949.
ADDED some good proofs of the "famous" item here, with further references: Integral representation of Euler's constant |
Inverse of a product of $n$-tuple in abstract algebra | This is trivial when $n=1$. Let $n>1$ and suppose it is true for $n-1$.
Then $(a_1\cdots a_n)^{-1}=((a_1\cdots a_{n-1})a_n)^{-1}=a_n^{-1}(a_1\cdots a_{n-1})^{-1}$. Now, by the inductive hypothesis we have $(a_1\cdots a_{n-1})^{-1}=a_{n-1}^{-1}\cdots a_1^{-1}$, so we are done. |
Evaluate $\int{\frac{4x}{(x^2-1)(x-1)}dx}$ | Your initial setup is incorrect, as there can never be constants $A,B,C$ for which
$$\frac{4x}{(x^2 -1)(x-1)} = \frac{Ax+B}{x^2 -1} + \frac{C}{x-1}$$
This you can see by multiplying by $x^2-1$. You will see that the right side becomes a polynomial, while the left side does not.
The standard techniques of partial fractions try to get the denominator in the form $$(x-a_1)^{r_1} (x - a_2)^{r_2} ... (x-a_n)^{r_n}$$
with the $\displaystyle a_i$ being distinct: this is crucial.
So in your case, $\displaystyle (x^2-1)(x-1)$ becomes $\displaystyle (x-1)(x+1)(x-1) = (x-1)^2(x+1)$ |
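With the repeated root handled correctly, the decomposition reads
$$\frac{4x}{(x-1)^2(x+1)}=\frac{A}{x-1}+\frac{B}{(x-1)^2}+\frac{C}{x+1}.$$
Clearing denominators gives $4x=A(x-1)(x+1)+B(x+1)+C(x-1)^2$; setting $x=1$ yields $B=2$, setting $x=-1$ yields $C=-1$, and comparing constant terms yields $A=1$. Hence
$$\int\frac{4x}{(x^2-1)(x-1)}\,dx=\ln|x-1|-\frac{2}{x-1}-\ln|x+1|+K.$$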
How is $x = -\text{sign}(c_i)e_i$ the solution to minimize $c^T x$ subject to $\|x\|_1 \le 1$? | Since $\Vert x\Vert_1\leq1$, this means $\sum_j\vert x_j\vert\leq 1$. Now let $x_0=-\mathrm{sign}(c_{i})e_i$ as in your question, where $i$ is an index with $|c_i|=\max_j|c_j|$. For an arbitrary $x$ we then have:
$$
|c^Tx|=\left\vert\sum_{j} c_j x_j\right\vert\leq\sum_j|c_j||x_j|\leq|c_i|\sum_j|x_j|=|c_i|\Vert x\Vert_1\leq|c_i|
$$
Because $c^Tx_0=-|c_i|$, we have that this is indeed the best possible. |
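A quick numeric sanity check in R, with a made-up cost vector:
cvec <- c(3, -1, 2)
i <- which.max(abs(cvec))                       # index of the largest |c_j|
x0 <- rep(0, length(cvec)); x0[i] <- -sign(cvec[i])
sum(cvec * x0)                                  # -3, i.e. -max_j |c_j|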
Does the Langlands program preserve CFT's distinction between local and global theories? | There is a local aspect to the Langlands program, known as the local Langlands correspondence. In fact, Langlands conjectured the existence of such a correspondence for each local field $F$ and each reductive group $G$ over $F$.
He proved his local conjecture when $F$ is archimedean and $G$ is arbitrary. The case when $G = \mathrm{GL}_n$ and $F$ is non-archimedean is now solved (by Laumon, Rapoport, and Stuhler in the function field case, and by Harris and Taylor in the case of $p$-adic fields). There are many results known for other groups $G$ as well, but the full local conjectures for arbitrary $G$ are not yet settled, as far as I know.
In the $\mathrm{GL}_n$ case, the correspondence, roughly speaking, gives a bijection between (typically infinite dimensional!) irreducible representations of the group $\mathrm{GL}_n(F)$ and $n$-dimensional representations of the Galois group of $F$. (So, unlike the abelian case, there is no isomorphism of groups, but rather a certain bijection between certain kinds of representations of rather different groups.)
The case of general $G$ is more involved to state, and indeed, it is not so easy to find the precise conjecture in the literature. In any case, it involves many complications, the most significant of which is probably so-called endoscopy. Note that Ngo won the Fields medal this summer for his work on this topic.
Just as with local CFT, the motivations for the local conjecture come from the global theory. On the one hand, representations of matrix groups over local fields arise naturally in the theory of automorphic forms (this is a generalization/reformulation of the classical theory of Hecke operators in the theory of modular forms), and, on the other hand, (at least certain) automorphic Hecke eigenforms are supposed to be related to representations of Galois groups of global fields by global reciprocity laws. Restricting these global Galois representations to decomposition groups, one should recover the conjectural local correspondence. (And as I noted in my answer to your earlier question, this is in fact the mechanism by which those local correspondences are normally constructed, in the cases when they can be constructed.)
Finally, I'm not sure if I understand your last question about equivalence correctly, but if you are asking whether the existence of the global Langlands correspondence (whatever exactly that means) should follow from the local correspondence, the answer is no. Consider the case of CFT: knowing all the local Artin maps lets you write down the global Artin map, but you still have to prove the global Artin reciprocity law. Similarly, knowing the local Langlands correspondence allows one to write down a candidate for the global correspondence, but to prove that this candidate actually does the job is another, even more difficult, matter.
(As one example: before the modularity theorem for elliptic curves was proved, people knew how to write down the
candidate $q$-expansion that should be the weight $2$ modular form attached to an elliptic curve over $\mathbb Q$; this was because the relevant local issues were all completely understood. The problem was then to prove that this actually was a weight $2$ modular form; this was a global issue, which was completely open until Wiles and Taylor, Breuil, Conrad and Diamond solved it.) |
Complex stationary point of $\frac{z}{1-e^{-z}}+z$? | There is even a closed form solution to the equation $p'(z)=0$ in terms of Lambert's W function.
They are given by:
$z_n=W_n(e)-1$
where $n$ labels the different branches of $W$. With this knowledge you can try to give approximate solutions using the different asymptotic formulas in the link given above!
Modular arithmetic with polynomial | Hints:
Chinese remainder theorem, to go from $p,q$ to $n$.
Expand the polynomial $P$ to see that $P(z \bmod m)\equiv P(z)\pmod{m}$.
Banach's fixed point: how to find q | If $A$ is a closed interval in $ \mathbb R$ and if $f:A \to A$ is a differentiable function with $q:= \sup \{|f'(x)| : x \in A\} < \infty$, then we get by the mean value theorem:
$$|f(x)-f(y)| \le q|x-y|$$
for all $x,y \in A$. If $q<1$ then, by Banach's theorem, there is a unique $x_0 \in A$ such that $f(x_0)=x_0$. |
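As a concrete illustration (my own example, not from the question): take $A=[0,1]$ and $f(x)=\cos x$, so $q=\sup_{x\in[0,1]}|\sin x|=\sin 1\approx 0.84<1$. In R, iterating converges to the unique fixed point:
f <- function(x) cos(x)
x <- 0.5
for (k in 1:50) x <- f(x)
x    # 0.7390851..., the unique solution of cos(x) = x in [0,1]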
Are unidirectional vector fields conservative? | In addition to the computational answer of 'Lord Shark the Unknown', you can also look at this graphical example from Wikipedia:
Suppose we now consider a slightly more complicated vector field:
$${\mathbf{F}}(x,y,z) = - x^2 \hat{\mathbf{y}}$$
(Plot omitted: the field points in the $-\hat{\mathbf{y}}$ direction, with magnitude growing like $x^2$.)
We might not see any rotation initially, but if we look closely at the right, we see a larger field at, say, $x = 4$ than at $x = 3$. Intuitively, if we placed a small paddle wheel there, the larger "current" on its right side would cause the paddlewheel to rotate clockwise, which corresponds to a curl in the negative $z$ direction. By contrast, if we look at a point on the left and place a small paddle wheel there, the larger "current" on its left side would cause the paddlewheel to rotate counterclockwise, which corresponds to a curl in the positive $z$ direction. [...]
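The curl makes this precise:
$$\nabla\times\mathbf F=\left(\partial_y F_z-\partial_z F_y,\;\partial_z F_x-\partial_x F_z,\;\partial_x F_y-\partial_y F_x\right)=(0,\,0,\,-2x)\neq\mathbf 0,$$
so this unidirectional field is not conservative.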
A question related to the card game "Set" | Mathematicians call these "independent sets" capsets, and they're fairly widely studied. But surprisingly, we don't know much about their asymptotic behavior! We have a lower bound of order c^n for some c > 2; and we have an upper bound of order about 3^n/n (although this is not in the least trivial to prove!) Most mathematicians seem to suspect that the upper bound can't be improved to c^n for c strictly less than 3, but no one's sure how to prove this.
For more information (perhaps a little out-of-date, but I don't think much has changed) see Terry Tao's discussion. |
If $\frac{d}{dx}e{^x} = e{^x}$, then why does $\frac{d}{dx}e^{-14}$ = 0? | $e^x$ is a function that depends on $x$. $e^{-14}$ is a constant. |
Prove or disprove that any three members of a family of parallelograms intersect | Hint:
We will say that a family of parallelograms has property $P$ if the fact that any two of its elements intersect implies that any three of its elements intersect. We want to prove that all families of parallelograms have this property.
Use a shear mapping (a kind of linear transformation) to make all the parallelograms into rectangles. Prove that a family of parallelograms has property $P$ if and only if its transformed instance has property $P$.
To prove that this is true for families of rectangles, introduce a coordinate system such that the sides of rectangles are parallel to the axes. Observe that two rectangles intersect if and only if both their orthogonal projections onto axes intersect.
Note that because the sides are parallel to the axes the two dimensions are independent. Solve the problem for a single dimension (i.e. segments on a line), and combine twice to get result for the rectangles.
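The one-dimensional step is short: if segments $[a_i,b_i]$ pairwise intersect, then $a_i\le b_j$ for all $i,j$, hence $\max_i a_i\le\min_j b_j$, and the point $\max_i a_i$ lies in every segment; in particular any three of them intersect.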
In fact we could have skipped the transformation step, but then you would have to take a non-standard coordinate system and non-orthogonal projections (i.e. projections along the vector parallel to the sides). Pick the solution which is more convenient for you.
I hope this helps $\ddot\smile$ |
Is $x*y=xy+1$ an Abelian group? | $$a(bc+1)+1 = (ab+1)c + 1$$
reduces to
$$a=c.$$
To get a counterexample, just pick $a \ne c$.
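For instance, with $a=1$, $b=0$, $c=2$: $(1*0)*2=(1\cdot 0+1)*2=1*2=3$, while $1*(0*2)=1*(0\cdot 2+1)=1*1=2\ne 3$, so $*$ is not associative and we do not even get a group, let alone an Abelian one.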
Almost everywhere convergence and $L^1$ convergence | The answer may depend on the measure space $\left(S,\mathcal F,\mu\right)$ we consider. If $S$ is a finite set and $\mu\left(S\right)$ is finite, then almost everywhere convergence is equivalent to convergence in $L^1$.
However, suppose that $\left(S,\mathcal F,\mu\right)$ is such that there exists a sequence $\left(f_n\right)_{n\geqslant 1}$ which converges in $L^1$ to $0$ but not almost everywhere.
Suppose that there exists a metric $d$ on the space of $\mathcal F $-measurable functions such that for each sequence $\left(g_n\right)_{n\geqslant 1}$, $g_n\to g$ almost everywhere is equivalent to $d\left(g_n,g\right)\to 0$. For each subsequence of $\left(f_n\right)_{n\geqslant 1}$, due to convergence in $L^1$, you can extract a subsequence which converges almost everywhere to $0$. Using the quoted fact in the opening post, we derive that $d\left(f_n,0\right)\to 0$ hence convergence of $\left(f_n\right)_{n\geqslant 1}$ almost everywhere to $0$, a contradiction. |
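A standard example of such a sequence on $([0,1],\text{Lebesgue})$ is the 'typewriter' sequence: enumerate the dyadic intervals $[j2^{-k},(j+1)2^{-k}]$, $0\le j<2^k$, $k\ge 1$, and let $f_n$ be the indicator of the $n$-th interval in this list. Then $\int f_n\,d\mu=2^{-k}\to 0$, so $f_n\to 0$ in $L^1$, but for every $x$ we have $f_n(x)=1$ infinitely often and $f_n(x)=0$ infinitely often, so $\left(f_n\right)_{n\geqslant1}$ converges nowhere.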
Does increasing differences imply convexity of a function? | There exist discontinuous additive functions $f:\mathbb R \to \mathbb R$. Since convex functions are necessarily continuous it follows that these functions are not convex. But they obviously satisfy the hypothesis. |
adjoint of operator? | $g\in \mathcal{D}(A^{\star})$ iff the following holds for all $f\in\mathcal{D}(A)$:
$$
\int_{0}^{1}g(\overline{Af})dt=\int_{0}^{1}(A^{\star}g)\overline{f}dt \\
\int_{0}^{1}g(-i\overline{f}')dt=\int_{0}^{1}(A^{\star}g)\overline{f}dt.
$$
Therefore, if $f\in C^1[0,1]$ and $f(0)-\overline{\lambda}f(1)=0$, then
$$
\int_{0}^{1}gf'dt=\int_{0}^{1}(iA^{\star}g)fdt.
$$
In particular, the above holds for all $f\in \mathcal{C}_{c}^{\infty}(0,1)$. So $g$ has a weak derivative, namely $g'=-iA^{\star}g$, or $A^{\star}g=ig'$, which means that $g$ may be modified on a set of measure $0$ to become absolutely continuous on $[0,1]$, and $A^{\star}g=ig'$. So $\mathcal{D}(A^{\star})\subseteq\mathcal{AC}[0,1]$ with $g'\in L^2[0,1]$. However $\mathcal{D}(A^{\star})\ne \mathcal{AC}[0,1]$ because
\begin{align}
0 = (Af,g)-(f,A^{\star}g) & =i\int_{0}^{1}\{f'\overline{g}+f\overline{g}'\}dt \\
& =i\{f(1)\overline{g(1)}-f(0)\overline{g(0)}\}.
\end{align}
If $(g(0),g(1))=\alpha(1,\overline{\lambda})$ for some scalar $\alpha$, then the right side reduces to $i\bar{\alpha}\{ \lambda f(1)-f(0)\}=0$. Equivalently,
$$
\overline{\lambda}g(0)-g(1)=0.
$$
Therefore,
$$
\mathcal{D}(A^{\star})=\{ g \in \mathcal{AC}[0,1] : g(1)=\overline{\lambda}g(0) \} \\
\mathcal{D}(A^{\star\star})=\{ f \in \mathcal{AC}[0,1] :
f(0)=\lambda f(1) \}
$$
These domains are the same iff $|\lambda|=1$, which is equivalent to $A$ being essentially selfadjoint. |
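To spell out the last equivalence: the boundary conditions $g(1)=\overline{\lambda}g(0)$ and $f(0)=\lambda f(1)$ (the latter being $f(1)=\lambda^{-1}f(0)$, assuming $\lambda\neq0$) describe the same subspace precisely when $\overline{\lambda}=\lambda^{-1}$, i.e. when $|\lambda|^2=1$.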
How would you reverse this double-variable equation? | Let $\mathbb N_0=\{0,1,2,\ldots\}$ and $\mathbb N=\{1,2,3,\ldots\}$.
The mapping $(a,b)\mapsto f(a,b)$ is a bijection from $\mathbb N_0\times\mathbb N_0$ to $\mathbb N_0$. Hence it is also a bijection from $\mathbb N\times\mathbb N$ to $\mathbb N\setminus D$ where $D=\left\{\frac12n(n+i)\mid n\in\mathbb N,i\in\{1,3\}\right\}$ collects the images $f(a,0)=\frac12a(a+3)$ and $f(0,b)=\frac12b(b+1)$ for $a$ and $b$ in $\mathbb N$.
To see this and how to reconstruct $(a,b)$ from $f(a,b)$, assume that $a$ and $b$ are nonnegative and call $s$ the largest integer $k$ such that $k^2\leqslant2f$, which can also be defined by the inequalities
$$
s^2\leqslant2f\leqslant s(s+2).
$$
Note that $2f=(a+b)^2+3a+b$ implies $(a+b)^2\leqslant2f\lt(a+b+2)^2$ hence $s=a+b$ or $s=a+b+1$.
Case $s=a+b$: this means that $2f\leqslant(a+b)(a+b+2)$, that is, $3a+b\leqslant2a+2b$, that is, $a\leqslant b$. Then, $a+b=s$ and $3a+b=2f-s^2$ hence $a=f-\frac12s(s+1)$ and $b=\frac12s(s+3)-f$. Then the condition $a\leqslant b$ holds and $a$ and $b$ are indeed integers.
Case $s=a+b+1$: this means that $2f\geqslant(a+b+1)^2$, that is, $3a+b\geqslant2a+2b+1$, that is, $a\geqslant b+1$. Then, $a+b+1=s$ and $3a+b=2f-(s-1)^2$ hence $a=f-\frac12s(s-1)$ and $b=\frac12s(s+1)-f-1$. Then the condition $a\geqslant b+1$ holds and $a$ and $b$ are indeed integers.
The first case applies when $s(s+1)\leqslant2f\leqslant s(s+2)$, the second when $s^2\leqslant 2f\lt s(s+1)$. To sum up:
For every integer $f\geqslant0$, let $s$ denote the unique integer such that $s^2\leqslant2f\leqslant s^2+2s$.
If $s^2\leqslant 2f\leqslant s^2+s-1$, then $a=f-\frac12s(s-1)$ and $b=\frac12s(s+1)-f-1$ (additionally, in this case $a\geqslant b+1$).
If $s^2+s\leqslant2f\leqslant s^2+2s$, then $a=f-\frac12s(s+1)$ and $b=\frac12s(s+3)-f$ (additionally, in this case $a\leqslant b$).
Example: If $f=15123$ as in a comment, then $2f=30246$, $s=173$, $s^2+s=30102$, $2f\geqslant s^2+s$ hence we apply the second item above, which yields $a=15123-\frac12\cdot173\cdot174=72$ and $b=\frac12\cdot173\cdot176-15123=101$.
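A direct transcription into R (a sketch; exact for moderate $f$, though the floating-point square root may need care for very large inputs):
unpair <- function(f) {
  s <- floor(sqrt(2 * f))                 # unique s with s^2 <= 2f <= s^2 + 2s
  if (2 * f >= s * (s + 1)) {             # case a <= b
    c(a = f - s * (s + 1) / 2, b = s * (s + 3) / 2 - f)
  } else {                                # case a >= b + 1
    c(a = f - s * (s - 1) / 2, b = s * (s + 1) / 2 - f - 1)
  }
}
unpair(15123)   # a = 72, b = 101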
$\int_0^\infty e^{-x}x^{n-1}dx$ is convergent for $n>0$ | The proof in the OP fails when $n>1$, since for $x>1$ and $n>1$ we have $x^{n-1}>1$.
But we can assert that $e^x\ge \frac{x^{\lfloor n\rfloor+1}}{(\lfloor n\rfloor +1)!}$ for $x \ge 1$. So, for $L\ge1$
$$\begin{align}
\left|\int_1^L e^{-x}x^{n-1}\,dx\right|&\le \left(\lfloor n\rfloor+1\right) ! \int_1^L x^{n-\lfloor n\rfloor -2}\,dx\\\\
&=\frac{\left(\lfloor n\rfloor+1\right) !}{\left(n-\lfloor n\rfloor-1\right) }\left(L^{\left(n-\lfloor n\rfloor-1\right) }-1\right)\tag{1}
\end{align}$$
Since $n-\lfloor n\rfloor -1<0$, $\lim_{L\to\infty}\left(\frac{\left(\lfloor n\rfloor+1\right) !}{\left(n-\lfloor n\rfloor-1\right) }\left(L^{\left(n-\lfloor n\rfloor-1\right) }-1\right)
\right)=\frac{\left(\lfloor n\rfloor+1\right) !}{1-\left(n-\lfloor n\rfloor\right) }$, and the integral on the left-hand side of $(1)$ converges. And we are done! |
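A numeric sanity check in R, using the sample value $n=1/2$, where the integral equals $\Gamma(1/2)=\sqrt{\pi}$:
integrate(function(x) exp(-x) * x^(0.5 - 1), 0, Inf)$value   # 1.772454
gamma(0.5)                                                   # sqrt(pi) = 1.772454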
Do these converge for $p>0$? | The first series converges for all $p > 0$ by Leibniz's (alternating series) test.
For the second, let $$f(x) = \frac{p\ln x}{x}$$
then, $$f'(x) = \frac{p(1-\ln x)}{x^2}$$
So, $f(x)$ is decreasing for all $x \ge e$. Again by Leibniz's test, the series converges when $p>0$.
Distinguishing between bulbs with long and short exponential lifetimes | The solution uses Bayes' Theorem. I'll try to get you started.
(1) Either you bought short-lived bulbs $S$ or long-lived bulbs $L$.
Your prior probabilities (before testing the bulbs at home) are $P(S)=P(L) = 1/2.$
(2) Let $E$ be the event that five bulbs tested (presumably out of five)
are still alive after 300 hours.
(3) You seek $P(L|E).$
Bayes' Theorem says
$$P(L|E) = \frac{P(L\cap E)}{P(E)}
= \frac{P(L)P(E|L)}{P(L)P(E|L)+P(S)P(E|S)}.$$
Knowing what you do about the exponential distribution, you should be able to find $P(E|L)$ and $P(E|S)$ in order to finish the problem.
What is the
probability one 'long-lived' bulb lasts for at least 300 hours? What is the
probability all of five long-lived bulbs last for at least 300 hours?
And so on with the short-lived bulbs.
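For concreteness, with made-up means (hypothetical; the actual rates come from the original problem statement): if long-lived bulbs have mean lifetime $1000$ hours and short-lived ones $200$ hours, then
$$P(E|L)=\left(e^{-300/1000}\right)^5=e^{-1.5}\approx 0.2231,\qquad
P(E|S)=\left(e^{-300/200}\right)^5=e^{-7.5}\approx 0.00055,$$
and since the equal priors cancel,
$$P(L|E)=\frac{0.2231}{0.2231+0.00055}\approx 0.9975.$$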
Mafia game probability | Great question :-)
Let $N$ be the event that no-one was killed, and $C$ the event that Aidan is crazy. Then you want the conditional probability
\begin{align}
\textsf{Pr}(C\mid N)
&=\frac{\textsf{Pr}(C\cap N)}{\textsf{Pr}(N)}
\\
&=\frac{\textsf{Pr}(C\cap N)}{\textsf{Pr}(C\cap N)+\textsf{Pr}(\overline{C}\cap N)}\\
&=\frac{\frac12\cdot1}{\frac12\cdot1+\frac12\cdot\frac13}\\
&=\frac34\;.
\end{align}
So the chance wasn't quite as high as you thought, but it was still the right choice.
You calculated
\begin{align}
1-\textsf{Pr}(\overline{C}\cap N)&=\textsf{Pr}((\overline C\cap\overline N)\cup(C\cap N)\cup(C\cap\overline N))\\
&=\textsf{Pr}((\overline C\cap\overline N)\cup C)\;,
\end{align}
the unconditional probability that either Aidan is Mafia and someone was killed or he's crazy (and hence no-one was killed). I don't think this corresponds to any conditional probability given your knowledge of $N$.
Regarding your question "how can it be a $50\%$ chance he's a crazy person, but actually an $80\%$ chance" (in fact $75\%$), the former is the unconditional probability for him to be a crazy person and the latter is the conditional probability, given your knowledge of $N$.
(By "unconditional", I mean conditional only on the facts that you regarded as definitely settled: That the three of you were all townspeople and Aidan was either Mafia or crazy. Of course the original unconditional probability for Aidan to be crazy was only $\frac15$.) |
Left and right continuity | I think you're concluding what you want to prove without actually proving it. You might resort to epsilon-delta proofs. You will have a "left" delta and a "right" delta, so you will just need to let the "two-sided" delta be their minimum.
You're right, sometimes the obvious things are surprisingly elusive as proofs go. |
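Sketch of the argument: given $\varepsilon>0$, left continuity at $a$ gives $\delta_-$ with $|f(x)-f(a)|<\varepsilon$ for $a-\delta_-<x\le a$, and right continuity gives $\delta_+$ with $|f(x)-f(a)|<\varepsilon$ for $a\le x<a+\delta_+$. With $\delta=\min(\delta_-,\delta_+)$ we get $|f(x)-f(a)|<\varepsilon$ whenever $|x-a|<\delta$, which is ordinary (two-sided) continuity at $a$.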
matrixgroup is a differentiable submanifold of $M_4(\mathbb{R})$ | A more direct way to see this which has no need for any explicit calculations by hand: For every symmetric $A$ and $Y$, we can solve $MA+AM^T = Y$ for $M$ by finding any matrix $\widetilde{Y}$, symmetric or otherwise, that satisfies $\widetilde{Y}+\widetilde{Y}^T = Y$ (for example $\widetilde{Y}_{ij} := \begin{cases} Y_{ij} & i<j \\ \frac{1}{2}Y_{ii} & i=j \\ 0 & i>j \end{cases}$ does it) and solving $MA = \widetilde{Y}$ for $M$ instead. If $A$ is invertible, the latter is always solvable. |
Interpretation of the relation between regularized least-squares and minimum-norm solution for an underdetermined system | In the underdetermined case you may think of the two extreme cases:
* $ \mu \to 0 $ - Then there is a set of solutions $ \mathcal{X} = \left\{ \boldsymbol{x} \mid A \boldsymbol{x} = \boldsymbol{y} \right\} $. You may choose $ \hat{\boldsymbol{x}} = \arg \min_{\boldsymbol{x} \in \mathcal{X}} {\left\| \boldsymbol{x} \right\|}_{2}^{2} $.
* $ \mu \to \infty $ - Then the solution is the zero vector $ \boldsymbol{0} $.
Since the path goes through solutions which minimize $ {J}_{1} + \mu {J}_{2} $, it means that for any solution with $ \mu > 0 $ its norm will be smaller than the norm of $ \hat{\boldsymbol{x}} $.
Proof that this set is/isn't convex | I wanted to give a complete answer to the question now that I have figured it out; credit to TSF for starting me off with the first part.
a.
\begin{aligned}
& \forall x, y \in \Omega \Rightarrow T(x, w) > 0, T(y, w) > 0 \\\\
& \lambda \in [0, 1] \\\\
& T(\lambda x + (1-\lambda)y, w) \\\\
& = (\lambda x_1 + (1-\lambda)y_1) + ... + (\lambda x_n + (1-\lambda)y_n) \cos((n-1)w) \\\\
& = \lambda T(x, w) + (1 - \lambda)T(y, w) \\\\
& > 0 \text{ -> as shown in the first line} \\\\
& \text{meaning that all points between x and y are in the set and it is therefore convex.}
\end{aligned}
b.
\begin{aligned}
& \forall x, y \in \Omega \Rightarrow T(x, w) > 0, T(y, w) > 0 \\\\
& \lambda \in [0, 1] \\\\
& - \intop_{0}^{2 \pi} \log T(\lambda x + (1-\lambda) y, w) dw \\\\
& = \intop_{0}^{2 \pi} -\log \left( \lambda T(x, w) + (1-\lambda) T(y, w) \right) dw \text{ -> by the linearity shown in the first part} \\\\
& \leq \intop_{0}^{2 \pi} -\lambda \log T(x, w) - (1-\lambda) \log T(y, w) \, dw \text{ -> -log is convex since log is concave} \\\\
& = -\lambda \intop_{0}^{2 \pi} \log T(x, w) dw - (1-\lambda) \intop_{0}^{2 \pi} \log T(y, w) dw \\\\
& \text{which is exactly the convexity inequality for the function, so it is convex.}
\end{aligned} |
Find the partial derivative of a sphere with equation $x^2+y^2+z^2=4$ | The way I understand it is you have the equation
$$
x^2+y^2+z^2=4
$$
which is equivalent to
$$
f(x,y)=z=\pm \sqrt{4-x^2-y^2},
$$
therefore
$$
\frac{\partial{f}}{\partial{x}}=\mp \frac{x}{\sqrt{4-x^2-y^2}}
$$
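Equivalently, implicit differentiation avoids the sign split: differentiating $x^2+y^2+z^2=4$ with respect to $x$ gives
$$2x+2z\frac{\partial z}{\partial x}=0\quad\Longrightarrow\quad\frac{\partial z}{\partial x}=-\frac{x}{z},$$
which agrees with the formula above on either branch.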
Perhaps more context on where this question comes from could help clarify things. |
$n^{th}$ minimum/maximum notation | I think the descriptive way is the best. You can define $\min\limits_n S$ as "the $n$-th smallest element of $S$" if you want, but you still need that description.
However, it might be less confusing to define $m(S,n)$ as "the $n$-th smallest element of $S$", since $\min\limits_n S$ is a standard notation for "minimum of $S$ over all possible values of $n$".
If you want to mess things up by avoiding the description completely, then you can define
$$f:\ \mathbb{N} \to S, \quad f(k) = \begin{cases}
\min S, & k = 1, \\
\min \{s:\ s \in S, s > f(k-1)\}, & k > 1.
\end{cases}$$
Of course, I advise against it, because it is overly cryptic and the descriptive way works just fine.
A (sometimes wrong) alternative
You might consider saying
Let $S = \{ s_1, s_2, \dots \}$, where $s_i < s_j$ for all $i < j$.
and then dealing with $s_n$, which is obviously the $n$-th smallest value of $S$. But, this is generally wrong, as it assumes that $S$ is enumerable. As an example, consider $S = \mathbb{N} \cup [ x, y ]$ for some $x < y$. Here, the first $n = \lfloor x \rfloor$ minimal numbers are well defined, but you cannot write $S$ in the above manner.
If $S$ is finite or infinite but enumerable, the above notation may be useful (and is used fairly often, for example for $n$-th smallest/biggest singular value of a given matrix).
However, a side description is still useful. For example, if using the above notation, I'd still write something like
... Let $s_n$ be the $n$-th smallest element of $S$. ...
That way, you point the reader in the right direction, while also being mathematically precise. |
Evaluating $\int_{0}^{\infty} \frac{W^{-n}}{1+\exp\left(\frac{W-\alpha}{\beta}\right)} dW$ with $0<n<1$ | $$I=\int_{0}^{\infty}\frac{x^{-n} dx}{1+ge^{x/b}}, \quad c=a/b,\ g=e^{-c} \implies I=\int_{0}^{\infty} \frac{ x^{-n}\, g^{-1} e^{-x/b}\,dx}{1+g^{-1} e^{-x/b}}.$$
Let $y=x/b$, then $$I=- b^{1-n}\int_{0}^{\infty} \sum_{k=1}^{\infty} y^{-n} (-g^{-1})^ke^{-ky} dy= -b^{1-n} \Gamma(1-n) \sum_{k=1}^{\infty} \frac{(-g^{-1})^k }{k^{1-n}}.$$ $$\implies I =-b^{1-n}~ \Gamma(1-n) ~Li_{(1-n)}(-e^{a/b})~~if~~ n <1. $$
Here $Li_{p}(z)$ is the polylogarithm function.
rank of an abelian group and its embedment into vectorspace | If $\{x_1,\dots,x_m\}$ is a maximal linearly independent subset of a finitely generated abelian group $G$ and $H=\mathbb{Z}x_1+\dots+\mathbb{Z}x_m$, then $G/H$ is torsion.
Otherwise, let $y+H$ be an element in $G/H$ with zero annihilator. Then $\{x_1,\dots,x_m,y\}$ is linearly independent (easy check), contradicting maximality.
Then we have, after tensoring with $\mathbb{Q}$ the exact sequence $0\to H\to G\to G/H\to0$,
$$
G\otimes\mathbb{Q}\cong H\otimes\mathbb{Q}\cong
\mathbb{Z}^m\otimes\mathbb{Q}\cong\mathbb{Q}^m
$$
Now suppose $G\cong\mathbb{Z}^n \oplus \mathbb{Z}_{q_1} \oplus \cdots \oplus \mathbb{Z}_{q_t}$.
Tensoring it with $\mathbb{Q}$, we obtain
$$
G\otimes\mathbb{Q}\cong
\mathbb{Z}^n\otimes\mathbb{Q}\cong\mathbb{Q}^n
$$
The isomorphisms are as $\mathbb{Q}$-vector spaces, so $m=n$. |
How can I find the sum? | $$
\frac{d}{da}a^{-i} = -ia^{-i-1}\implies -a\frac{d}{da}a^{-i} = ia^{-i}
$$
we can write your sum as
$$
a^m\sum_{i=1}^n -a\frac{d}{da}a^{-i} = -a^{m+1}\frac{d}{da}\sum_{i=1}^n \left(a^{-1}\right)^i
$$
Can you take it from here and replace $a$ with $2$? |
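For completeness, carrying this through: with $\sum_{i=1}^n a^{-i}=\frac{1-a^{-n}}{a-1}$, differentiating and setting $a=2$ gives
$$\sum_{i=1}^{n} i\,2^{m-i}=-2^{m+1}\left.\frac{d}{da}\left(\frac{1-a^{-n}}{a-1}\right)\right|_{a=2}=2^{m}\left(2-(n+2)\,2^{-n}\right).$$
(Check with $m=0$, $n=3$: $\tfrac12+\tfrac24+\tfrac38=\tfrac{11}{8}=2-5\cdot 2^{-3}$.)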
Surface Integral of the Partial Derivative of a Harmonic Function | $\frac{\partial f}{\partial n}$ is shorthand for $ (\nabla f) \cdot n $, i.e. the directional derivative in the direction $n$. Really, what this question wants you to do is use the divergence theorem,
$$ \int_V \nabla \cdot F \, dV = \int_S F \cdot n \, dS, $$
where here you have $F=\nabla f$, and the left-hand side is the integral of the Laplacian. |
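In particular, since $f$ is harmonic, $\nabla\cdot\nabla f=\Delta f=0$, and the theorem gives
$$\int_S \frac{\partial f}{\partial n}\,dS=\int_V \Delta f\,dV=0.$$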
Find the vector normal to a curve at some point | $1=-2yy'$, which gives $y'=-\frac{1}{2y}$.
Thus, the slope of the normal is $-\frac{1}{y'}=2y=2\cdot\frac{\sqrt3}{2}=\sqrt3$.
Thus, the equation of the normal is $y-\frac{\sqrt3}{2}=\sqrt3(x-0)$ or $y=\sqrt3x+\frac{\sqrt3}{2}$.
An equation of the tangent is indeed $$y-\frac{\sqrt3}{2}=-\frac{1}{\sqrt3}x$$ or
$$\frac{1}{\sqrt3}x+y-\frac{\sqrt3}{2}=0,$$
which gives a normal vector $\frac{1}{\sqrt3}\vec{i}+\vec{j}$.
How to construct a dense subset of $\mathbb R$ other than rationals. | First note that you can "cheat" by taking any subset and take a union with the rationals.
Second note that you can always cheat by taking some real number $x$ and considering the set $\{x+q\mid q\in\mathbb Q\}$. If $x$ is irrational then the set is not the rationals.
Now more seriously, you can note that the irrationals ($\mathbb R\setminus\mathbb Q$) are dense, as well as the irrational algebraic numbers ($\sqrt2$ and such). More interestingly, the set $\{\sin n\mid n\in\mathbb N\}$ is dense in $[-1,1]$, so it can be stretched (or multiplied) into a dense set of $\mathbb R$.
However an important fact is that every countable dense linear order is isomorphic to the rationals, so if your dense set is countable it will not differ too much from the rationals. |
1-Associated Stirling Number of the Second Kind identity verification | Note: The answer to your first question is affirmative: Yes, your calculations are correct.
The answer to your second question is: Sorry, no deep insight from my side, but hopefully some helpful hints for further investigations.
A short look at your first part:
First part:
Since $A_n^{(m)}$ is used with a slightly different meaning in OP's first part and in OP's second part, I will use here $\widetilde{A}_n^{(m)}$ and keep the original notation $A_n^{(m)}$ for the second part, which seems to be of more interest to the OP.
In OP's referenced question he introduced a sort of generalisation of Nörlund polynomials, denoted $A_n^{(m)}$. A slight modification of these polynomials is $\widetilde{A}_n^{(m)}$, given via the following generating function:
Let
\begin{align*}
\sum_{n=0}^{\infty}\widetilde{A}_n^{(m)}\frac{x^n}{n!}=\frac{\left(\frac{x^2}{2}\right)^{\frac{m}{2}}}{\left(e^x-1-x\right)^m}\tag{1}
\end{align*}
Next, we look at certain Associated Stirling Numbers of the Second Kind $b(1;n,m)$ given via
$$\sum_{n=0}^{\infty}b(1;n,m)\frac{x^n}{n!}=\frac{\left(e^x-1-x\right)^m}{m!}$$
We observe, since
$$\left(e^x-1-x\right)^m=\left(\frac{x^2}{2!}+\frac{x^3}{3!}+\ldots\right)^m=2^{-m}x^{2m}\left(1+\frac{x}{3}+\frac{x^2}{12}+\ldots\right)^m$$
the right hand side starts with $x^{2m}$ and so:
The following is valid for the numbers $b(1;n,m)$
$$b(1;n,m)=0\qquad\qquad n<2m$$
and we can write
\begin{align*}
\sum_{n=2m}^{\infty}b(1;n,m)\frac{x^n}{n!}=\frac{\left(e^x-1-x\right)^m}{m!}\tag{2}
\end{align*}
We can now proceed with:
First part: Calculation
We use $-m$ in (1) and we observe:
\begin{align*}
\sum_{n=0}^{\infty}\widetilde{A}_n^{(-m)}\frac{x^n}{n!}&=\frac{\left(e^x-1-x\right)^m}{\left(\frac{x^2}{2}\right)^{\frac{m}{2}}}\\
&=\frac{\left(\sqrt{2}\right)^mm!}{x^m}\frac{\left(e^x-1-x\right)^m}{m!}\\
&=\frac{\left(\sqrt{2}\right)^mm!}{x^m}\sum_{n=2m}^{\infty}b(1;n,m)\frac{x^n}{n!}\qquad\qquad\qquad\qquad\text{by (2)}\\
&=\frac{\left(\sqrt{2}\right)^mm!}{x^m}
\sum_{n=m}^{\infty}b(1;n+m,m)\frac{x^{n+m}}{(n+m)!}\qquad\qquad n\rightarrow n+m \\
&=\left(\sqrt{2}\right)^mm!\sum_{n=m}^{\infty}b(1;n+m,m)\frac{x^{n}}{(n+m)!}\\
&=\left(\sqrt{2}\right)^m\sum_{n=m}^{\infty}b(1;n+m,m)\binom{n+m}{n}^{-1}\frac{x^{n}}{n!}
\end{align*}
Comparison of coefficients gives:
The following is valid:
\begin{align*}
\widetilde{A}_n^{(-m)}&=\left(\sqrt{2}\right)^mb(1;n+m,m)\binom{n+m}{n}^{-1}& n\geq m\tag{3}\\
&=0& n < m
\end{align*}
and we conclude with:
First Part: Result:
We see that (3) coincides with the result of OP's first part, and we therefore conclude that OP's calculations are correct (apart from a missing range specification).
We further see that $\widetilde{A}_n^{(-m)}=0$ if $n<m$.
Now, let's have a look at the second part. Here we use $A_n^{(m)}$ as it was given by the OP in the second part, namely
Let
\begin{align*}
\sum_{n=0}^{\infty}A_n^{(m)}\frac{x^n}{n!}=\frac{\left(\frac{x^2}{2}\right)^{m}}{\left(e^x-1-x\right)^m}\tag{4}
\end{align*}
We can already start with:
Second part: Calculation
We use $-m$ in (4) and we observe:
\begin{align*}
\sum_{n=0}^{\infty}A_n^{(-m)}\frac{x^n}{n!}&=\frac{\left(e^x-1-x\right)^m}{\left(\frac{x^2}{2}\right)^{m}}\\
&=2^mm!x^{-2m}\frac{\left(e^x-1-x\right)^m}{m!}\\
&=2^mm!x^{-2m}\sum_{n=2m}^{\infty}b(1;n,m)\frac{x^n}{n!}\qquad\qquad\qquad\text{by (2)}\\
&=2^mm!\sum_{n=2m}^{\infty}b(1;n,m)\frac{x^{n-2m}}{n!}\\
&=2^mm!\sum_{n=0}^{\infty}b(1;2m+n,m)\frac{x^{n}}{(2m+n)!}\quad\qquad n\rightarrow 2m+n\tag{5} \\
&=2^mm!\sum_{n=0}^{\infty}b(1;2m+n,m)\frac{n!}{(2m+n)!}\frac{x^{n}}{n!} \\
\end{align*}
Similarly to part 1 we compare the coefficients of corresponding powers of $x^n$ to get:
\begin{align*}
A_n^{(-m)}&=2^m\frac{m!n!}{(2m+n)!}b(1;2m+n,m)\qquad\qquad n\geq 0\\
&=\frac{2^mm!}{(2m)!}\binom{2m+n}{n}^{-1}b(1;2m+n,m)\tag{6}\\
\end{align*}
Since
$$(2m)!=(2m)!!(2m-1)!!=2^mm!(2m-1)!!$$
we observe:
Second part: Result
The following identity is valid:
\begin{align*}
b(1;2m+n,m)=(2m-1)!!\binom{2m+n}{n}A_n^{(-m)}\qquad\qquad n \geq 0\tag{7}
\end{align*}
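As a quick sanity check of (7), take $m=1$. From (2), $b(1;n,1)=1$ for all $n\ge 2$, while $\frac{2}{x^2}\left(e^x-1-x\right)=\sum_{n\ge 0}\frac{2}{(n+2)!}\,x^n$ gives $A_n^{(-1)}=\frac{2\,n!}{(n+2)!}=\binom{n+2}{n}^{-1}$; since $(2\cdot 1-1)!!=1$, both sides of (7) are equal to $1$.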
Note: It seems that the difference between the result (7) and OP's suggestion $$k^mb(1;2m+n,2m)\binom{2m+n}{n}^{-1},$$ besides the factor $k^m$, is due to the fact that the index shift of the sum on the RHS was $n \rightarrow n+2m$ instead of $n \rightarrow n+m$ as used in the first part of OP's calculation.
Hint: I suppose that the paper Explicit formulas for the Nörlund polynomials $B_n^{(x)}$ and $b_n^{(x)}$ is helpful for further investigations. The polynomial $b(1;n,m)$ used here corresponds with $b(n,m)$ in the paper. Interesting formulas with $b(n,m)$ are stated in (1.14) and in the proof section. |
Meaning of superscript (1) in "$v_1(M) = \arg\min_x \|M - xx^T\|_2, x^{(1)}\ge0$" | My guess is that it is the first component of the vector. This is so that the problem has a well-defined (unique) solution, since $xx^T = (-x)(-x)^T$. This can still fail if $x^{(1)}=0$, but depending on the matrices involved this could be very unlikely.