Given the Taylor series, find the function it derives from? | Hint: Let $w=1+x$. Note that $\dfrac{n+1}{n}=1+\dfrac{1}{n}$.
So our sum is
$$\sum_1^\infty w^n +\sum_1^\infty \frac{1}{n}w^n.$$
The first sum will be very familiar. For the second, note that $\dfrac{w^n}{n}$ is an antiderivative of $w^{n-1}$.
For convergence, you are interested in showing that the interval is $-1\lt w\lt 1$. Ratio test will do it, except that you need to show also that we do not have convergence at $w=\pm 1$. |
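For completeness, here is where the hint leads (assuming the series in question is $\sum_{n\ge1}\frac{n+1}{n}(1+x)^n$, consistent with the splitting above): with $w=1+x$,
$$\sum_{n=1}^\infty w^n=\frac{w}{1-w},\qquad \sum_{n=1}^\infty \frac{w^n}{n}=-\ln(1-w),$$
so the sum equals $\frac{w}{1-w}-\ln(1-w)=-\frac{1+x}{x}-\ln(-x)$ on $-2<x<0$.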
Robust feasibility with halfspace? | I read the beginning of your question and the comment. :-) It seems that the function $f$ should be affine, by the following. If $a_1\not=a_2$, then there is a vector $x=\lambda(a_1-a_2)\in\mathbb R^n$ (with $\lambda>0$ large enough) such that $a_1^\top x+b_1>a_2^\top x+b_2$. Moreover, if $ a^\top x + b_1 \leq f(x) \leq a^\top x + b_2$ for all $x \in \mathbb{R}^n$ and $f$ is convex, then $f$ should be affine, because $f$ is affine on each one-dimensional subspace of $\mathbb R^n$. Indeed, let $g$ be a convex function on $\mathbb R$ such that for all $x\in\mathbb R$ we have $ax + b_1 \leq g(x) \leq ax + b_2$ for some given $a,b_1,b_2\in\mathbb R$. Then the function $h:\mathbb R\to\mathbb R$ defined by $h(x)=g(x)-ax$ for every $x\in\mathbb R$ is a convex bounded function on $\mathbb R$ and hence constant. |
Graph Theory, graph and complement question | If $d$ is degree for $v$ in $G$ then the degree $d'$ of $v$ in $G'$ is $n-1-d$.
So, if $$d_1 \equiv d_2\equiv \cdots \equiv d_n \equiv 0\pmod 2$$ then $$n-1-d_1\equiv n-1-d_2\equiv \cdots \equiv n-1-d_n \equiv n-1\pmod 2$$ |
What is the formula for Combination with repetition? | Yes, there is a formula: the number of combinations of $n$ types taken $k$ at a time with repetition is $\binom{n+k-1}{k}$. Apply it with $n=26$ and $k=3$ to get $\binom{28}{3}=3276$. |
What is $z_o$ in the equation of parabola in complex plane? | Let $z = x + iy$. Then the equation $|z - z_0| = \Re(z)$, which has solutions for $\Re(z) \geq 0$, is equivalent to
$$
y = \Im(z_0) \pm \sqrt{2\Re(z_0)(x - \Re(z_0)/2)}
$$
You will observe that $z_0$ serves to shift the parabola vertically by its imaginary part, shifts it horizontally by half its real part, and also scales it by its real part. If $\Re(z_0) = 0$, then the parabola reduces to a line parallel to the real axis, originating at $(0,\Im(z_0))$ and extending into the right half plane. As $\Re(z_0)$ increases from zero, the parabola shifts to the right and "fans out". Since the expression under the radical sign must be nonnegative, it is necessary that $\Re(z_0) \geq 0$. |
Estimate value using Lagrange's MVT | $f(x) = \sqrt{x}, \ f'(x) = \frac{1}{2\sqrt{x}} $
Using the MVT on the interval $[49,51]$, $f(51) = f(49) + 2f'(c)$, where $c\in [49,51]$. Because $f(x)$ is a strictly increasing function, $f(51) > f(49)$. Also because $f'(x)$ is decreasing it has a maximum value at $49$ (on the interval $[49,51]$). We use $49$ as the starting point of the interval because it has a natural square root, $7$.
\begin{align}
7 < f(51) = 7 + 2f'(c) &\leq 7 + 2f'(49) = 7 + \frac{1}{\sqrt{49}} = 7 + \frac{1}{7}\\
7 < \sqrt{51} &\leq 7.142857142..
\end{align}
A linear approximation of a function is an estimate of the function value $f(x_0 + \Delta x)$ given the value $f(x_0)$ and the first derivative $f'(x_0)$:
$$
f(x_0 + \Delta x) \approx f(x_0) + f'(x_0)\Delta x
$$
This has a nice geometric interpretation.
For your example the linear approximation of $f(51)$ is
$$
f(49 + 2) \approx f(49) + 2f'(49) = 7 + \frac{1}{7} = 7.142857142..
$$
Same as the upper bound of the value by the MVT.
The actual value is
$$
\sqrt{51} = 7.141428..
$$
So you can see that for small $\Delta x$ a linear approximation can be pretty accurate. |
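A quick numeric comparison of the linear approximation and the true value (a minimal Python sketch):

```python
import math

f = math.sqrt
fprime = lambda x: 1 / (2 * math.sqrt(x))

linear = f(49) + 2 * fprime(49)   # MVT upper bound / linear approximation: 7 + 1/7
print(linear, f(51))              # 7.142857... vs 7.141428...
```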
Analog of $f(f^{-1}(B))=f(X)\cap B$ for $f^{-1}(f(A))$ | Let $X$ and $Y$ be sets.
Fix $f:X\to Y$.
Remark. For each $A\subseteq X$, $$A\subseteq f^{-1}(f(A))=A\cup\{x\in\mathrm{domain}(f):(\exists a\in A)[f(x)=f(a)]\}.$$
Furthermore, equality holds for every $A\subseteq X$ if and only if $f$ is injective.
Remark. For each $B\subseteq Y$, $$ B\cap\mathrm{range}(f)=f(f^{-1}(B))\subseteq B.$$
Furthermore, equality holds for every $B\subseteq Y$ if and only if $f$ is surjective. |
Geometric meaning of reflexive and symmetric relations | Your description of reflexivity is correct.
For symmetry it means that the subset $R$ is "symmetric" about the line $y = x$: for any point $(a, b)\in R$, its mirror point $(b, a)$ (the point you get by reflecting in the line $y=x$) is also in $R$. In other words, either neither of the two points $(a, b)$ and $(b, a)$ is included in $R$, or both of them are.
Thus the "graph" of $R$ is the same as its mirror image when doing reflection in $y=x$. |
Complex integration of a function around a triangle. | $\gamma_1: z = t\\
\gamma_2: z = a + ti\\
\gamma_3: z = a - t + (a-t)i$
Take the derivatives to find $dz$ for each contour
$\int_0^a t \ dt + \int_0^a a (i \ dt) + \int_0^a (a-t)(-1-i)\ dt$ |
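A numeric sanity check of the three contour integrals (a sketch assuming the integrand is $f(z)=\Re z$, which is what the integrals above correspond to, taking $a=1$):

```python
import numpy as np

a = 1.0
t = np.linspace(0.0, a, 100_001)

# (z(t), dz/dt) for each side of the triangle
sides = [(t + 0j, np.ones_like(t) + 0j),                   # gamma_1: z = t
         (a + 1j*t, 1j*np.ones_like(t)),                   # gamma_2: z = a + ti
         ((a - t)*(1 + 1j), -(1 + 1j)*np.ones_like(t))]    # gamma_3

total = sum(np.trapz(z.real * dz, t) for z, dz in sides)
print(total)   # approximately 0.5j, i.e. i*a^2/2
```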
Continuity in topological spaces in terms of closures. | Yes, it is indeed an equivalence that $f$ is continuous iff $f(\bar{A}) \subseteq \overline{f(A)}$ for any $A \subseteq X$ (by $\bar{A}$ I denote $cl_X(A)$).
So indeed, you do need to check that all the points of $A$ are mapped to the closure of $f(A)$, but that's not enough. You also need to make sure that this holds for all the points of the closure of $A$ (which are not necessarily in $A$). |
Convergence/divergence | Let $u_n$ be the general term.
$$\ln(u_n)=n^2\ln(1+\frac 1n)-n$$
$$\ln(1+\frac 1n)=\frac 1n -\frac{1}{2n^2}+\frac{1}{n^2}\epsilon(n)$$
$$\ln(u_n)=-\frac 12+\epsilon(n)$$
thus
$$\lim_{n\to+\infty}u_n=\frac{1}{\sqrt{e}}\ne0$$
the series diverges by the term test, since the general term does not tend to $0$. |
Check if for families $\mathcal A$ and $\mathcal B$: $\bigcup (\mathcal A \cap \mathcal B) = \bigcup \mathcal A \cap \bigcup \mathcal B$ | The counterexample you provided is solid and disproves the theorem right away, so there must be an error in your equivalence chain proving the theorem. The error is in the statement
\begin{align}
&(\exists X)((X\in \mathcal A \land X\in \mathcal B )\land x \in X)\\
\iff&(\exists X)(X \in \mathcal A\land x\in X) \land (\exists X)(X \in \mathcal B \land x\in X).
\end{align}
Those expressions are not equivalent. The first expression states that there exists a set $X$ which is in both $\mathcal A$ and $\mathcal B$ such that also $x\in X$. Whereas the second expression says that there exists a set $X$ from just $\mathcal A$ which contains $x$, but also a (possibly different) set $X$ from just $\mathcal B$ that contains $x$.
So the point is that in the second expression you don't relate families $\mathcal A$ and $\mathcal B$ in any way. A way to perhaps make this clearer is to realise that the statement $$(\exists X)(X \in \mathcal A\land x\in X) \land (\exists X)(X \in \mathcal B \land x\in X)$$
is equivalent to
$$(\exists X)(X \in \mathcal A\land x\in X) \land (\exists Y)(Y \in \mathcal B \land x\in Y),$$
which can't be equivalent to
$$(\exists X)((X\in \mathcal A \land X\in \mathcal B )\land x \in X).$$ |
Fibonacci sequence terms $\le a$ | Let $\phi=\frac{\sqrt5+1}{2}$. Then use formula
$$\frac{\phi^{n-\frac1n}}{\sqrt5}<F_n<\frac{\phi^{n+\frac1n}}{\sqrt5}$$ |
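A small Python sketch using this bound (assuming the task is to count the Fibonacci numbers $F_n\le a$; the bound gives a starting guess, which is then corrected by a step or two):

```python
import math

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a                      # F_1 = F_2 = 1 convention

def count_fib_leq(a):
    phi = (math.sqrt(5) + 1) / 2
    n = max(1, int(math.log(a * math.sqrt(5), phi)))   # guess from the bound
    while fib(n + 1) <= a:        # correct the guess upward if needed
        n += 1
    while fib(n) > a:             # or downward
        n -= 1
    return n

print(count_fib_leq(100))         # 11, since F_11 = 89 <= 100 < 144 = F_12
```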
$a$ and $b$ are factors of $6^6$ and $a$ is a factor of $b$ | It suffices to consider powers of $2,3$ separately.
If $v_2(a)=r$ then $r\le v_2(b)\le 6$. Of course, $v_2(a)\in \{0,1,2,3,4,5,6\}$. If $v_2(a)=0$ there are $7$ possibilities for $v_2(b)$. If $v_2(a)=1$ there are $6$ possibilities for $v_2(b)$, and so on. Thus, considering only powers of $2$, we get $$7+6+5+4+3+2+1=\frac {7\times 8}2=28$$ possible pairs.
The same calculation works for the powers of $3$.
As we can sort out powers of $2,3$ independently, we get $$28\times 28 =\fbox {784}$$ |
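A brute-force check of the count (a quick sketch):

```python
N = 6**6
divisors = [d for d in range(1, N + 1) if N % d == 0]
# pairs (a, b) with a | b and b | 6^6
print(sum(1 for b in divisors for a in divisors if b % a == 0))  # 784
```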
Pseudo-inverse of a matrix as a projection | What does the pseudoinverse of $A$ do? It takes a vector $b$ as input, and returns as output the vector $x$ of least 2-norm such that $Ax = \hat{b}$,
where $\hat{b}$ is the projection of $b$ onto the column space of $A$.
Strang's book Linear Algebra and Its Applications has a good presentation of this topic. |
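A minimal numpy illustration of this description (the particular matrix and vector are just an example chosen for this sketch):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 0.0]])       # rank-deficient on purpose
b = np.array([2.0, 3.0, 0.0])    # not in the column space of A

x = np.linalg.pinv(A) @ b        # least-norm solution of A x = b_hat
b_hat = A @ x                    # projection of b onto col(A)
print(x, b_hat)                  # x = [2, 0], b_hat = [2, 0, 0]
```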
How find this postive integer $a$ such $a(x^2+y^2)=x^2y^2$ always have roots | Take any two positive coprime integers $u$ and $v$, and a third positive integer $t$.
Let $q$ be the square-free part of $u^2 + v^2$: thus $u^2 + v^2 = qs^2$ for some positive integer $s$.
Let $a = u^2 v^2 t^2 q$.
Then the equation $a(x^2+y^2) = x^2y^2$ has the non-trivial solution $x = tsqu$ and $y = tsqv$ because
$a(x^2+y^2) = u^2v^2t^2q(tsq)^2 (u^2+v^2) = u^2v^2t^2(tsq)^2q^2s^2 = (tsq)^4u^2v^2 = x^2y^2.$
Conversely, if there is a non-trivial solution to this equation for some $a$, then $a$ must necessarily have the form given above.
To see this, let $z = \gcd(x,y)$ and write $x = zu$ and $y = zv$ for some coprime positive integers $u,v$. Then $a(u^2+v^2) = z^2u^2v^2$. Since $u^2v^2$ and $u^2+v^2$ cannot share a prime factor, $a = bu^2v^2$ for some positive integer $b$. Let $q$ be the square-free part of $u^2+v^2$ and write $u^2 + v^2 = qs^2$ for some positive integer $s$. Then $b q s^2 = z^2$ forces $q$ to divide $b$. So $b = qw$ and $w (qs)^2 = z^2$; hence $w = t^2$ is a perfect square. Putting everything together gives $a = u^2v^2t^2q$ as claimed. |
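A quick numeric check of the construction (a sketch; `squarefree_part` is a helper written for this check, and $u=1$, $v=2$, $t=1$ is just a sample choice):

```python
def squarefree_part(n):
    """Return (q, s) with n = q * s^2 and q square-free."""
    q, s, d = n, 1, 2
    while d * d <= q:
        while q % (d * d) == 0:
            q //= d * d
            s *= d
        d += 1
    return q, s

u, v, t = 1, 2, 1                   # any coprime u, v and any t
q, s = squarefree_part(u*u + v*v)   # u^2 + v^2 = q * s^2
a = u*u * v*v * t*t * q
x, y = t*s*q*u, t*s*q*v
assert a * (x*x + y*y) == x*x * y*y
print(a, x, y)                      # 20 5 10
```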
How to find inverse Laplace transform? | Hint:
Use convolution theorem:
$$\mathcal{L}\{f(t)*g(t)\}=F(s)G(s)$$
where $F(s)=\mathcal{L}\{f(t)\}$ and $G(s)=\mathcal{L}\{g(t)\}$. The convolution of two functions $f$ and $g$ is defined as
$$(f*g)(t)=\int_0^{t}f(x)g(t-x)dx$$
Let $F(s)=\frac{2}{s^2+4}$, $f(t)=\sin (2t)$, $G(s)=\frac{3}{s^2+4}$, and $g(t)=\frac{3}{2}\sin (2t)$.
It follows that
\begin{align*}
\mathcal{L}^{-1}\{\frac{2}{s^2+4} \frac{3}{s^2+4}\}&=\mathcal{L}^{-1}\{F(s)G(s)\}\\
&=\int_{0}^tf(x)g(t-x)dx\\
&=\frac{3}{2}\int_0^t\sin(2x)\sin[2(t-x)]dx
\end{align*} |
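A sympy sketch checking that this convolution integral has the closed form $\frac38(\sin 2t - 2t\cos 2t)$ (that closed form is my addition for the check, not part of the hint above):

```python
import sympy as sp

t, x = sp.symbols('t x', positive=True)
conv = sp.Rational(3, 2) * sp.integrate(sp.sin(2*x) * sp.sin(2*(t - x)), (x, 0, t))
expected = sp.Rational(3, 8) * (sp.sin(2*t) - 2*t*sp.cos(2*t))
print(sp.simplify(conv - expected))  # 0
```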
Compact polynomial operator | Not sure if there is a more elementary argument, but here is one:
Since $A$ is positive semidefinite, $\sigma(A) \subset \mathbb{R}_+$. Let $p(t) = \lambda_0 + \lambda_1t + \ldots + \lambda_nt^n$, and let $0\leq i\leq n$ be any integer such that $\lambda_i\neq 0$. Then we claim that $A^i$ is compact. From this, one can conclude that $\lambda_0 = 0$ (since $I$ is not compact) and that $A^n$ is compact (since $\lambda_n \neq 0$).
To prove the claim, let $f(t) = \lambda_it^i$, then it suffices to show that $f(A)$ is compact. Note that
$$
p(t) \geq f(t) \geq 0 \quad\forall t\in \sigma(A)
$$
So by the spectral theorem
$$
K = p(A) \geq f(A) \geq 0
$$
Let $(P_n)$ be a sequence of finite rank projections such that $P_nS\to S$ for every compact operator $S$. Then
$$
(I-P_n)K(I-P_n) \geq (I-P_n)f(A)(I-P_n) \quad\forall n\in \mathbb{N}
$$
Hence, $\|(I-P_n)K(I-P_n)\| \geq \|(I-P_n)f(A)(I-P_n)\|$ for all $n\in \mathbb{N}$. Using the identity $\|S^{\ast}S\| = \|S\|^2$, it follows that
$$
\|f(A)^{1/2}(I-P_n)\| \leq \|K^{1/2}(I-P_n)\|
$$
Since $K^{1/2}$ is compact, this second term goes to $0$, and so $f(A)^{1/2}$ is compact, whence $f(A)$ is compact.
Hope this helps. |
$N_1,N_2,N_3 \unlhd G, N_i\cap N_j =\{e\}, G = N_iN_j$. Want to show that $G$ is abelian, $N_i$ are isomorphic. | Let $x_i \in N_i$ and $y_j \in N_j$ with $i \neq j$. Then $[x_i,y_j]=(x_iy_jx_i^{-1})y_j^{-1}=x_i(y_jx_i^{-1}y_j^{-1})$ so $[x_i,y_j] \in N_i \cap N_j$ and $[x_i,y_j]=1$. Now let $x,y \in N_i$; because $G=N_jN_k$ for $i,j,k$ pairwise non-equal, $x=x_jx_k$ with $x_j \in N_j$ and $x_k \in N_k$; but $y$ commutes with $x_j$ and $x_k$ by the remark above, therefore $y$ commutes with $x$ and $G$ is abelian.
Then, you can show that $G/N_i \simeq N_j$ and $G/N_i \simeq N_k$ with $i,j,k$ pairwise non-equal to conclude that $N_i \simeq N_j \simeq N_k$.
In fact, $G$ doesn't need to be finite. |
How is this connection between the groups made? | This is proved by observing that if $f : [0,1] \to S^1$ is given by the formula
$$f(x) = (\cos(2 \pi n x), \sin(2 \pi n x))
$$
then the unique lifting of $f$ which starts at $0$ is given by the formula
$$\tilde f(x) = nx
$$
One way to discover this, perhaps, is to ask yourself: what function starting at $0$, when composed with $p(x)=(\cos(2 \pi x), \sin(2 \pi x))$, yields $f(x) = (\cos(2 \pi nx), \sin(2 \pi nx))$? |
Arity of Primitive Recursive Functions | You can indeed define $g(n)=h(n,f(n))$ (as I assume you intended to write) -- but in order to argue that this $g$ is primitive recursive, you need to already know that $f$ (as well as $h$) is primitive recursive, and for that you need to apply the primitive recursion rule, which depends on knowing that $h$ is primitive recursive.
Note well that what the primitive recursion rule demands as a premise is that $h$ is primitive recursive as a two-argument function. That is the function that describes how to combine $n$ and $f(n)$ in order to find the number you want to be $f(n+1)$. In principle this $h$ needs to be applicable to every pair of numbers, not just ones where the second element happens to be $f$ applied to the first one. If you can't give such a general rule for $h$, the primitive recursion construction does not -- by definition -- necessarily produce a primitive recursive $f$.
In response to the added material headed "edit": The construction you're quoting seems to be arguing for the theorem that every constant function is primitive recursive. This conclusion is certainly true, but the argument you're quoting does not use the primitive recursion rule at all. It works by induction at the metalevel, but does not use recursion as a building block for functions.
In fact the same argument would work to prove this:
Define the class of supersimple functions by the following rules:
The zero function is supersimple.
The successor function is supersimple.
All $n$-ary projection functions are supersimple.
Every function that arises by composition of supersimple functions is supersimple.
No other functions are supersimple.
Theorem. Every constant function $\mathbb N^n\to\mathbb N$ (where $n\ge 0$) is supersimple.
You should recognize rules 1-4 as exactly the same as the corresponding parts of the definition of primitive recursive functions. The primitive recursion rule is missing, but it should be clear that every "supersimple" function is necessarily also "primitive recursive".
And the reason why the constant functions are supersimple is exactly the same as the reason why they are primitive recursive. |
What is the closed form representation of the sum of the first $\text{int}(n/2)$ terms of binomial expansion $(f+(1-f))^n$? | You can express it using hypergeometric functions. If $n = 2m$ is even, it is (according to Maple)
$$ {2\,m\choose m+1}{f}^{m+1} \left( 1-f \right) ^{m-1}
{\mbox{$_2$F$_1$}(1,-m+1;\,m+2;\,{\frac {f}{-1+f}})}
$$
while if $n=2m+1$ is odd,
$${2m+2 \choose m+1} \dfrac{{\mbox{$_2$F$_1$}(1,2\,m+2;\,m+2;\,f)}{f}^{m+1} \left( 1-f
\right) ^{m+1}}{2}
$$ |
$v ∈ V$ is an Eigen vector of $T$ corr. to the Eigen value $c$$\iff$ $[v]_B$ is an Eigen vector of $[T]_B$ | Choose a basis $B=(e_1,e_2,...,e_n)$ of $V$ such that
$$T(e_1,e_2,...e_n)=(e_1,e_2,...,e_n)[T]_B$$
$$v=(e_1,e_2,...,e_n)[v]_B$$
So $$Tv=(e_1,e_2,...e_n)[T]_B[v]_B$$
While $$cv=(e_1,e_2,...,e_n)c[v]_B$$
From $Tv=cv$, we get
$$(e_1,e_2,...e_n)[T]_B[v]_B=(e_1,e_2,...,e_n)c[v]_B$$
If we rewrite the above equation in the form $\sum_{i=1}^na_ie_i=0$, then by the linear independence of the $e_i$ every coefficient satisfies $a_i=0$, i.e.
$$[T]_B[v]_B=c[v]_B$$
Conversely, from $[T]_B[v]_B=c[v]_B$, we can also see $Tv=cv$.
Thus, we have proved the equivalence. |
Unwanted curvature in Fourier series | This issue was caused by using the wrong integration algorithm. Just using the trapezoidal rule (https://en.wikipedia.org/wiki/Trapezoidal_rule) with small steps (1e-4) MASSIVELY improves the results. |
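For reference, a minimal numpy sketch of trapezoidal-rule Fourier coefficients (the interval $[0,2\pi]$, the coefficient convention, and the test function are assumptions made for illustration):

```python
import numpy as np

def fourier_coeffs(f, n_max, n_steps=62_832):          # step size ~ 1e-4 on [0, 2*pi]
    x = np.linspace(0.0, 2*np.pi, n_steps + 1)
    y = f(x)
    a = [np.trapz(y * np.cos(k*x), x) / np.pi for k in range(n_max + 1)]
    b = [np.trapz(y * np.sin(k*x), x) / np.pi for k in range(n_max + 1)]
    return a, b                                        # a[0] is twice the mean

a, b = fourier_coeffs(np.sin, 3)
print(np.round(a, 6), np.round(b, 6))                  # b[1] ~ 1, the rest ~ 0
```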
Percentiles of sums of random variables | No. You can't. Because it is not true.
Because it is possible to get a sum less than $75$ in other ways as well. For example, $X_1=10$ and $X_2=64$ give $X_1+X_2<75$ even though $X_2\ge 40$.
$$P(X_1+X_2<35+40) = 0.95^2 $$
$$\implies P(X_1+X_2<35+40 \land X_1<35 \land X_2<40) $$$$+ P(X_1+X_2<35+40 \land X_1<35 \land X_2\ge40) $$$$+ P(X_1+X_2<35+40 \land X_1\ge35 \land X_2<40) $$$$+ P(X_1+X_2<35+40 \land X_1\ge35 \land X_2\ge40) $$$$= 0.95^2$$
In this, the fourth term is obviously 0.
In the first term, $X_1<35 \land X_2<40 \implies X_1+X_2<35+40$. So the first term becomes $P(X_1<35 \land X_2<40)$ which is nothing but $ 0.95^2$.
Substituting the first and fourth term, we get
$$ 0.95^2 + P(X_1+X_2<35+40 \land X_1<35 \land X_2\ge40) $$$$+ P(X_1+X_2<35+40 \land X_1\ge35 \land X_2<40) +0 $$$$ = 0.95^2 $$
Which means
$$ P(X_1+X_2<35+40 \land X_1<35 \land X_2\ge40) + P(X_1+X_2<35+40 \land X_1\ge35 \land X_2<40) =0 $$
Which is obviously not true.
What could be true is
$$P(X_1+X_2<35+40) > 0.95^2 $$ |
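A quick numeric illustration with two independent standard normals (the distribution is an assumption for this check; the argument above is distribution-free):

```python
import numpy as np
from scipy.stats import norm

q = norm.ppf(0.95)                        # 95th percentile of each X_i
p_sum = norm.cdf(2*q, scale=np.sqrt(2))   # X_1 + X_2 ~ N(0, 2)
print(0.95**2, p_sum)                     # 0.9025 vs ~0.990, so P > 0.95^2
```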
Value of transcendental function $f(x)$ at rational point $a$ | Let $g$ be any transcendental function over $\mathbb{Q}(x)$ with the unit circle as natural boundary (or whatever reasonable assumption you want to put). Since $g$ is not constant, $g$ assumes some value that is algebraic at some point in the disc, say $g(a) = x$ where $x$ is algebraic.
Define
$$
f(z) = g(m(z))
$$
where $m$ is a biholomorphic automorphism of the disk taking some rational positive number $r$ to $a$. Then $f(r) = x$, so in general this fails for a very large class of transcendental functions. |
Does flatness imply components map dominantly? | Let $X=X_1 \cup \cdots \cup X_r$ be the decomposition of $X$ into irreducible components. Now let $U_i = X_i - \bigcup_{j \neq i} X_j$, an open dense subset of $X_i$ which is also irreducible. The map $U_i \to X \to Y$ is flat and therefore open. So $f(U_i)$ is open and irreducible in $Y$ and therefore an open subset of a certain irreducible component $Y_j \subseteq Y$. As such it is dense in $Y_j$, and all the more is $f(X_i)$ dense in $Y_j$. So dominance is true, as you asked in the title, but "onto" is not necessarily true, as I showed in the comment. |
Show $\int_a^b f(x)dx = - \int_b^a f(x)dx$. | For the usual definition to make sense, you need a non-empty interval $[a,b]$ (which implies $a<b$) and then you have a definition for the definite integral of $f$ over $[a,b]$, denoted as:
$$\int_a^bf$$
Notice that this definition does not make sense for $b=a$ (since there is no interval to partition), nor for $b<a$ (since the partitioning requires $a<b$; that's how intervals work).
Using this definition, you can prove the following important property:
$$\int_a^bf = \int_a^cf + \int_c^bf \tag{$*$}$$
With the usual definition, this property holds only for $a<c<b$. To allow more flexibility, one could want to give meaning to integrals of the form $\int_a^af$ and $\int_b^af$ (with $a<b$).
If we define them as:
$$\int_a^af := 0 \quad \mbox{and} \quad \int_b^a f:=-\int_a^b f$$
then the formula $(*)$ remains valid and holds in general, for all $a,b,c$.
How to show $\int_a^b f(x)dx = - \int_b^a f(x)dx$ using the definition of Riemann integration?
Long story short: this is usually a definition, not a property you prove. We define it in such a way that another important property remains valid in general. |
A problem about an intersecting family of subsets of a set that is an antichain | There is an injective function $f$ from $\Delta$ to $\Gamma$ such that for all $x\in \Delta$ we have $x\subseteq f(x)$.
Proof: Define $P_i$ as the set of $i$-subsets of $A$. For each $i$ such that $i+1\leq n/2$ consider the bipartite graph with parts $P_i$ and $P_{i+1}$, and where the edges are given by inclusion. Notice all the vertices in a given side have the same degree, so there is a matching that saturates $P_i$ (provable by hall easily).
Now consider the graph with vertex set $P_1\cup P_2\cup\dots\cup P_{\lfloor n/2 \rfloor}$ and select one matching for each layer; notice that if we only take those edges, the graph breaks up into a bunch of disjoint paths, and for each subset $x\subseteq A$ of size $k$ and each $j>k$ there is exactly one vertex of size $j$ connected to it. If we are given $\Delta$ and $k$ we can define $f(x)$ in this way, and we will only have $f(x) = f(y)$ if $x$ contains $y$ or vice versa.
Notice we do not need to use that $\Delta$ is intersecting; it suffices that it is an antichain. See Dilworth's theorem for a lot of related material. |
Find the Prime Factorization of $\varphi(11!)$ | \begin{align}
& \bigg\lfloor \frac{11}{2} \bigg\rfloor+\bigg\lfloor \frac{11}{4} \bigg\rfloor+\bigg\lfloor \frac{11}{8} \bigg\rfloor=8 \\
& \bigg\lfloor \frac{11}{3} \bigg\rfloor+\bigg\lfloor \frac{11}{9} \bigg\rfloor=4 \\
& \bigg\lfloor \frac{11}{5} \bigg\rfloor=2 \\
& \bigg\lfloor \frac{11}{7} \bigg\rfloor=1 \\
&\bigg\lfloor \frac{11}{11} \bigg\rfloor=1 \\
\end{align}
$$\varphi(11!)=\varphi ({{2}^{8}}\times {{3}^{4}}\times {{5}^{2}}\times {{7}^{1}}\times {{11}^{1}})=\varphi ({{2}^{8}})\varphi ({{3}^{4}})\varphi ({{5}^{2}})\varphi ({{7}^{1}})\varphi ({{11}^{1}})$$
Note that if $p$ is prime then
$$\varphi(p^n)=p^n-p^{n-1}$$
Hence $\varphi(2^8)=2^7$, $\varphi(3^4)=2\cdot 3^3$, $\varphi(5^2)=2^2\cdot 5$, $\varphi(7)=2\cdot 3$, $\varphi(11)=2\cdot 5$, and so $\varphi(11!)=2^{12}\times 3^4\times 5^2$. |
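A one-line sympy check of this factorization:

```python
from sympy import factorint, totient, factorial

print(factorint(totient(factorial(11))))   # {2: 12, 3: 4, 5: 2}
```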
Choice function and well ordering: Proof | $\{a,b\} = a \cup b$ violates the axiom of regularity (the axiom of foundation). |
Stalk and local property of a scheme | To give a simple answer, a lot of these properties say that if $f = g$ at $x$ then $f = g$ in a neighbourhood of $x$. For instance nilpotent: $f^m = 0$ at $x$ or invertible: $fg = 1$ at $x$. The reason for this comes down to the definition of the stalk as a direct limit.
Quick refresher on direct limits if you need it:
$$ \mathcal{O}_{X,x} = \lim_{\substack{\longrightarrow \\ U \ni x}} \mathcal{O}_X(U) = \bigsqcup_{U \ni x} \mathcal{O}_X(U) \Big/\sim.$$
You should think of a directed limit roughly as a disjoint union where if $a \in A \cap B$ then $a$ as an element of $A$ is equivalent to $a$ as an element of $B$. In this specific case, the equivalence is
$$ (f \in \mathcal{O}(U)) \sim (g \in \mathcal{O}(V)) \text{ if } f = g \text{ on } U \cap V. \tag{1} $$
We write an element of $\bigsqcup \mathcal{O}(U)$ as $(f, U)$ where $f \in \mathcal{O}(U)$ (so we remember which open set it comes from) and $f_x \in \mathcal{O}_x$ is the equivalence class of some $(f, U)$.
You can see that $(1)$ captures the idea that if $f_x = g_x$ then $f = g$ in a neighbourhood of $x$. Namely if $(f, U)$ represents the equivalence class $f_x$ and $(g, V)$ represents $g_x$ then $(f, U) \sim (g, V)$ means that $f|_{U \cap V} = g|_{U \cap V}$. |
Good text on quantum groups. | If you have never seen anything about Hopf algebras I recommend perhaps looking at Section 2.2 of my own thesis. It is a very leisurely introduction in the technically easy finite dimensional case.
Perhaps for a first look at $\mathrm{C}^*$-algebraic quantum groups these notes of Roland Vergnioux might be a good idea:
Haar integrals on finite and compact quantum group
These notes motivate the definition really well and relate it to the commutative situation.
An overarching reference might be:
- Thomas Timmermann, An Invitation to Quantum Groups and Duality - From Hopf Algebras to Multiplicative Unitaries and Beyond
However perhaps use this as a reference and instead look at graduate lecture notes such as (in no particular order):
Teo Banica, Free Quantum Groups and Related Topics
Adam Skalski, Quantum Symmetry Groups and Related Topics
Amaury Freslon, Introduction to compact matrix quantum groups and their combinatorics
Moritz Weber, Introduction to compact (matrix) quantum groups
and Banica–Speicher (easy) quantum groups
Uwe Franz, Adam Skalski, Piotr Soltan, Introduction to compact and discrete quantum groups
Between these you are in good nick. |
For which values of $\alpha \in \mathbb{R}$, does the series $\sum_{n=1}^\infty n^\alpha(\sqrt{n+1} - 2 \sqrt{n} + \sqrt{n-1})$ converge? | Hint: $\sqrt{n+1}-2\sqrt{n}+\sqrt{n-1} = \frac{-2}{(\sqrt{n}+\sqrt{n+1})(\sqrt{n-1}+\sqrt{n+1})(\sqrt{n}+\sqrt{n-1})}$
So for big $n$ this term is approximately $-\frac14 n^{-\frac{3}{2}}$ (each factor in the denominator is about $2\sqrt n$), hence the general term is of order $n^{\alpha-\frac32}$ and the series converges exactly when $\alpha<\frac12$. |
Expected time to type a certain sequence | My set up is quite similar to yours, but I get an expected value of $27$, which I believe is correct (by calculating it a different way).
Let $E$ be the expected number of steps until ABC appears.
Let $E_A$ be the expected number of steps after an A, and $E_{AB}$ be the expected number of steps after an AB.
Then, on the first step, either an A appears (probability 1/3) or it doesn't (probability 2/3). If it doesn't, then we've taken one step, and are back where we started, so
$$
E = \frac{1}{3}(1+E_A) + \frac{2}{3}(1+E).
$$
If the current step was an A, then one of three things happens: we type a B, an A, or neither, each with 1/3 probability. Hence:
$$
E_A = \frac{1}{3}(1+E_{AB}) + \frac{1}{3}(1+E_A) + \frac{1}{3}(1+E).
$$
If we've reached the AB state, then one of three things happens: we type a C (and we're done), we type an A, or neither, each with probability 1/3. Hence:
$$
E_{AB} = \frac{1}{3}(1) + \frac{1}{3}(1+E_A) + \frac{1}{3}(1+E)
$$
where $E$ is the expected number of steps until $ABC$, $E_A$ is the expected number of steps following an $A$, and $E_{AB}$ is the expected number of steps following $AB$.
Solving this system, I find $E=27$.
In your system, you have some expressions like $E(T)+2$ and $E(T)+3$ which should be $E(T)+1$, since you have already counted the other steps by earlier $E(T)+1$ expressions. |
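For what it's worth, a small numpy check, solving the three equations above as a linear system in $(E, E_A, E_{AB})$:

```python
import numpy as np

# Rearranged equations, unknowns ordered (E, E_A, E_AB):
# (1/3)E - (1/3)E_A = 1, etc.
M = np.array([[ 1/3, -1/3,  0.0],
              [-1/3,  2/3, -1/3],
              [-1/3, -1/3,  1.0]])
E, E_A, E_AB = np.linalg.solve(M, np.ones(3))
print(E, E_A, E_AB)   # 27.0 24.0 18.0
```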
Proving a result concerning compact sets. | $C \subset E^{c}$ because $C \cap E = \emptyset$.
Let $x \in C$ and suppose that $x\notin E^{c}$; this means $x \in C$ and $x \in E$, i.e. $x \in C\cap E$, which is a contradiction. Thus $x \in E^{c}$, and hence $C \subseteq E^{c}$. |
One to one function behaviour | If I understand you correctly, you are asking a question that has caused a lot of consternation over the centuries.
If two sets $X$ and $Y$ have different cardinalities, with $\vert X\vert >\vert Y\vert$, then any mapping $f:X\to Y$ from the larger to the smaller must be non-injective:
$$
\exists x_1,x_2\in X \, \big((x_1\neq x_2) \& \left(f(x_1)=f(x_2)\right)\big).$$
This is, as you note, the pigeonhole principle.
So, what happens when we consider $f:[0,2]\to [0,1]$ defined by $f(x)=x/2$? This function is clearly injective:
$$\left(\frac{x_1}{2}=\frac{x_2}{2}\right)\Rightarrow (x_1=x_2).$$
So, what is going on? Well, the answer given by Cantor is that despite the fact that $[0,2]\supsetneq [0,1]$, these two sets have the same cardinality. In fact, this insight leads to the central definition that undergirds all of modern set theory:
If $X$ and $Y$ are two sets and there exists a bijection $f:X\to Y$, then $X$ and $Y$ have the same cardinality (written $\vert X\vert=\vert Y\vert$).
Relatedly:
If an injection from $X$ to $Y$ exists, then we can say $\vert X\vert\leq \vert Y\vert.$
This second definition allows us to conclude that
$$X\subseteq Y \Rightarrow \vert X\vert \leq \vert Y\vert.$$
Now, things get tricky when one notes that "infinite" is not a cardinality according to this definition! It is provable that there exist pairs of infinite sets between which no bijection can exist! Famously, Cantor's diagonalization argument showed that $\vert \mathbb{Q}\vert\neq\vert\mathbb{R}\vert$. More generally, no set has the same cardinality as its power set.
This is a good thing really, since it tells us that defining cardinality in terms of bijections is not trivial. The fact that $\big\vert [0,1]\big\vert=\big\vert [0,2]\big\vert$ is not merely a consequence of the fact that both sets are infinite: there is a more meaningful relationship between the sets than that.
FYI The Cantor–Schröder–Bernstein theorem is an important tool for showing that sets have the same cardinality: it states that if there is an injection from $X$ to $Y$, and another injection from $Y$ to $X$, then there must be a bijection between $X$ and $Y$. Equivalently:
$$
\big(\vert X\vert\leq\vert Y\vert \& \vert Y\vert\leq\vert X\vert\big) \Rightarrow \vert X\vert=\vert Y\vert$$ |
Describe the complex function $f(z) = z + 1/z$. | Im$\left(a+b\color{blue}i+\dfrac{a-b\color{blue}i}{a^2+b^2}\right)=b-\dfrac b{a^2+b^2}=b\left(1-\dfrac1{a^2+b^2}\right)=0$
$\implies b=0$ or $a^2+b^2=1$, so $z+\dfrac1z$ is real when $z\in\mathbb R$ or $z=e^{i\theta}$ for $\theta\in\mathbb R$. |
Convergence in $L_p$ and a.s. but with different limits possible? | The answer is yes, because the convergence in $L^p$ implies the convergence a.s. up to passing to some subsequence.
Thus we would get
$$
X_{n_k}\stackrel{a.s.}{\to} X
$$
but
$$
X_n\stackrel{a.s.}{\to} Y
$$
implies that
$$
X_{n_k}\stackrel{a.s.}{\to} Y
$$
thus
$$
X=Y
$$
a.s. |
If $1,\omega,\omega^2,.....\omega^{n-1}$ are the n, $n^{th}$ roots of unity, then $(1-\omega)(1-\omega^2)..(1-\omega^{n-1})$ equals? | The polynomial $x^n-1$ has $n$ complex roots, given by the $n^{th}$ roots of unity, which are
$$
1,\omega,\omega^2,\ldots,\omega^{n-1},
$$
where $\omega=e^{2\pi i/n}$. Thus it has the factorization
$$
x^n-1=(x-1)(x-\omega)(x-\omega^2)\ldots(x-\omega^{n-1}).
$$
If you substitute $x=1$ on both sides, you obtain $0=0$, since $x=1$ is a root. If you want to obtain a non-trivial product when $x=1$, you first need to get rid of the $x-1$ on both sides, like so:
$$
\frac{x^n-1}{x-1}=(x-\omega)(x-\omega^2)\ldots(x-\omega^{n-1}).
$$
If you take the limit as $x\to 1$ in this equation (the left-hand side tends to $n$, e.g. by L'Hôpital's rule, or because it is the derivative of $x^n$ at $x=1$) you will find that
$$
n=(1-\omega)(1-\omega^2)\ldots(1-\omega^{n-1}).
$$ |
Is $\left\{ u\in H^{1}\left(\Omega\right)\left|\,a<u\left(x\right) <b\:\text{a.e.}\right.\right\}$ open in $H^1(\Omega)$ | The complement of
$$
\{ u\in H^1: \ b\le u(x) \ a.e.\}
$$
is not equal to
$$
\{ u\in H^1: \ b> u(x) \ a.e.\}.
$$
So (2) is right, (1) is wrong. |
Integrate with square root in square | OK
$$
t = \sqrt{\frac{x-1}{x+1}} \\
x = \frac{1+t^2}{1-t^2} \\
dx = \frac{4t}{(1-t^2)^2}\;dt = \frac{4t}{(1-t)^2(1+t)^2}\;dt \\
\int\left(1+\sqrt{\frac{x-1}{x+1}}\right)^2 dx =
\int (1+t)^2\frac{4t}{(1-t)^2(1+t)^2}\;dt
= \int \frac{4t}{(1-t)^2}\;dt \\
\qquad = \int\left(\frac{4}{t-1} + \frac{4}{(t-1)^2}\right) dt
= 4\log|t-1| - \frac{4}{t-1} +C
$$
and substitute back to get the answer in terms of $x$. |
Flat Connection in $\mathbb{R}^n$ | Parallel in the old fashion Euclidean sense and parallel in the Riemannian geometry sense have little to do with one another.
In $\mathbb{R}^2$, consider the vector field which always points right and has unit length. That is, $v_{(x,y)} = (1,0)$ at every point $p\in \mathbb{R}^2$.
First, let $X_{(x,y)} = (e^x , 0)$. This is a vector field which always points right but as you get larger $x$ values, the arrows get longer. In the classical geometry sense, $X$ and $v$ are parallel at every point. However, if you compute, you'll see that $\nabla_v X \neq 0$, so these are not parallel in the Riemannian geometry sense.
Second, let $X_{(x,y)} = (0,1)$. This is a vector field which always points up with length $1$. In the classical geometry sense, since $v$ points right and $X$ points up, there is no way they are parallel anywhere.
Nonetheless, $\nabla_v(X) = 0$ so they are parallel in the Riemannian geometry sense.
The idea of Riemannian parallelism is that, to say $X$ is parallel along $v$ should mean that as you move along $v$, $X$ doesn't change. In the first example, $X$ does change as you move along $v$, while in the second example, it doesn't. |
You have 3 cakes. Everytime you eat one, there's 17% chance the number of cakes is reset to 3. Find average number of cakes eaten? | Let $f(n)$ be the expected number of cakes left to eat when there are $n$ cakes remaining. If there is one cake remaining, we eat it and either have no cakes remaining with probability $0.83$, or go back to three cakes with probability $0.17$. We thus have:
$$f(1) = 1 + 0.17 f(3)$$
Similarly, if there are two cakes remaining, we eat one and either go to one cake with probability $0.83$, or go back to three cakes with probability $0.17$. We thus have:
$$f(2) = 1 + 0.83 f(1) + 0.17 f(3)$$
A similar reasoning can be applied to the case in which we have three cakes remaining:
$$f(3) = 1 + 0.83 f(2) + 0.17 f(3)$$
Plugging $f(1)$ and $f(2)$ into the last equation, we find:
$$f(3) = 1 + 0.83 \left( 1 + 0.83 \left( 1 + 0.17 f(3) \right) + 0.17 f(3)\right) + 0.17 f(3) \iff f(3) \approx 4.405$$ |
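A quick numeric check, solving the three equations as a linear system (unknowns ordered $f(1), f(2), f(3)$):

```python
import numpy as np

p = 0.17
M = np.array([[1.0,        0.0,      -p   ],
              [-(1 - p),   1.0,      -p   ],
              [0.0,       -(1 - p),  1 - p]])
f1, f2, f3 = np.linalg.solve(M, np.ones(3))
print(f3)   # ~4.405
```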
Vector orthogonal projection what's my error | To project $CB$ in the direction of $CA$, you could use:
$$\|CD\| = \frac{CB \cdot CA}{\|CA\|} $$
and using $\|AD\| = \|AC\| - \|CD\|$, you should get $\dfrac{2}{ \sqrt{5}}$ after a bit of simplifying. |
Differentiable function question about pair of points | Here's a counter example: $f(x) = x^3$ and $x_0 = 0$. The derivative is zero but the difference quotient cannot be zero, since the function is strictly monotonic. In fact any strictly monotonic function with a zero derivative somewhere suffices as a counter example.
(By the way, is the numerator in that order ($f(a)-f(b)$) intentional?) |
Finding real numbers such that $(a-ib)^2 = 4i$ Prove that $(a^2 - b^2) = 0$ | You're doing great. Two complex numbers are equal when their real and imaginary parts agree. Hence you need to solve the system of equations $$\{a^2-b^2=0,~~ -2ab=4\}$$ |
Given V =$(x_1,x_2......., x_{100}) $ with conditions. Find dim (V) | I assume that $V$ is the set of all vectors $x\in \mathbb R^{100}$ such that if $$x=[x_1,x_2,\dots, x_{100}],$$
then the two equations are satisfied.
In that case, the easiest way to find the dimension of $V$ is to prove that $V$ is the nullspace of the matrix
$$\begin{bmatrix}
1 & -2 & 0 & 0 & \dots & 0 & 0 & 0 & 0 & \dots & 0\\
1 & 0 & -3 & 0 & \dots & 0 & 0 & 0 & 0 & \dots & 0\\
0 & 0 & 0 & 0 & \dots & 0 & 1 & -1 & -1 & \dots & -1
\end{bmatrix}$$ |
Necessary stability condition for a second order discrete time system $x(k+2) = Ax(k+1) + Bx(k)$ | You can rewrite this system as
$$\begin{bmatrix}x(k+1)\\x(k+2)\end{bmatrix}=\begin{bmatrix}0&I\\B&A\end{bmatrix}\begin{bmatrix}x(k)\\x(k+1)\end{bmatrix}$$
Now this system is stable if and only if all eigenvalues of $\begin{bmatrix}0&I\\B&A\end{bmatrix}$ lie strictly inside the unit circle. |
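A quick numeric illustration (the scalar $A$ and $B$ here are sample values chosen purely for demonstration):

```python
import numpy as np

A = np.array([[0.5]])
B = np.array([[0.3]])
n = A.shape[0]
M = np.block([[np.zeros((n, n)), np.eye(n)],
              [B,                A        ]])
print(np.abs(np.linalg.eigvals(M)))   # all < 1 here, so the system is stable
```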
Eigenvalue problem corresponding to a univariate differential operator | The function $f$ will be a polynomial when $\lambda\in\mathbb N$. That can be seen from the fact that if we substitute a series $y=\sum_0^\infty a_nx^n$, we get the recursions
$$\tag1
a_{2n+2}=\frac{(2n-\lambda)(2n-2-\lambda)\cdots(2-\lambda)\lambda}{(2n+2)!}\,a_0,
$$
$$\tag2
a_{2n+1}=\frac{(2n-1-\lambda)(2n-3-\lambda)\cdots(1-\lambda)}{(2n+1)!}\,a_1,
$$
that can only stop when $a_0=0$ and $\lambda\in\mathbb N$ is odd, or when $a_1=0$ and $\lambda\in\mathbb N$ is even.
If, on the other hand, $\lambda\not\in \mathbb N$, either $\lambda<0$ or $\lambda\in(2k,2k+2)$ for some $k\in \mathbb N\cup \{0\}$. Consider separately the analytic solutions $\sum_na_{2n}x^{2n}$ and $\sum_na_{2n-1}x^{2n-1}$. Then, with $r_k=(2k+2-\lambda)\cdots(2-\lambda)$, and $x>0$,
$$
\left|\sum_{n\geq k}a_{2n}x^{2n}\right|
=|r_k|\,\sum_{n\geq k}\frac{(2n-2-\lambda)\cdots(2k+4-\lambda)\,x^{2n}}{(2n)!}
\geq|r_k|\,\sum_{n\geq k}\frac{x^{2n}}{(2n)!}.
$$
So the solution behaves like an exponential, and for $p$ big enough the power will overcome the $e^{-x^2}$ from the measure, and the solution cannot be in $L^2(\mu)$. A similar argument can be applied to the other solution.
So the only eigenvalues are the positive integers, and the eigenvectors are polynomials satisfying the recursions $(1)$ or $(2)$. |
Limit of an integral on [0,1] | If by "integrable" you mean Riemann integrable, then an integrable function must be bounded, and so
$$\left|\int\limits_{0}^{1}{x^nf(x)\text{ d}x}\right|\le\left(\sup\limits_{x\in[0,1]}{|f(x)|}\right)\int\limits_{0}^{1}{x^n\text{ d}x}. $$
Note that $\int\limits_{0}^{1}{x^n\text{ d}x} = \frac{1}{n+1}$.
If by "integrable" you mean Lebesgue integrable, then the functions $f_n(x) = x^nf(x)$ converges pointwise to zero almost everywhere, and $|f_n(x)|\le|f(x)|$ everywhere, so we can apply the Dominated Convergence Theorem. |
Finding nth permutation in dictionary order with repeats | Obviously you can't apply the rule for the $n$th permutation of unique objects in
a cookbook fashion. Instead, go back to first principles.
The number of permutations that start with $A$ is just the number of permutations
of $(A, B, B, B, C, D, D)$. The number of permutations that start with $B$
is just the number of permutations of $(A, A, B, B, C, D, D)$, and so forth.
Let $P(x)$ be the number of permutations that start with the subsequence $x.$
The two numbers in the previous paragraph are $P(A)$ and $P(B).$
But $P(A, A)$ is the number of permutations starting with $A,A,$
which is the number of permutations of $(B, B, B, C, D, D),$
and $P(A, B)$ is the number of permutations starting with $A,B.$
To find the first letter, start adding up $P(A),$ $P(B),$ and so forth until the
cumulative sum is at least $n.$
Suppose this shows the first letter is $C.$ Now, starting with $P(A) + P(B)$
(the number of permutations starting with subsequences earlier than $(C)$),
add $P(C,A),$ $P(C,B),$ and so forth to the total until you have at least $n.$
The point where you stop tells you your second letter.
Unfortunately, $P(A,A) \neq P(A,B),$ for example, so you can't divide $n$ by the
number of permutations of seven of the letters to find the first letter.
The number of permutations of seven letters depends on which seven letters they
are, which depends on what letter was removed from the set.
There may be a way to make the calculations more convenient than the procedure I laid out
above, but it's not obvious to me. |
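Here is a Python sketch of the procedure described above (the helper names `multiset_perms` and `nth_permutation`, the 1-based index, and the sample multiset $(A,A,B,B,B,C,D,D)$ are all choices made for this sketch):

```python
from math import factorial
from collections import Counter

def multiset_perms(counts):
    """Number of distinct permutations of a multiset with the given counts."""
    total = sum(counts.values())
    result = factorial(total)
    for c in counts.values():
        result //= factorial(c)
    return result

def nth_permutation(letters, n):
    """The n-th (1-based) distinct permutation of `letters` in dictionary order."""
    counts = Counter(letters)
    out = []
    for _ in range(len(letters)):
        for letter in sorted(c for c in counts if counts[c] > 0):
            counts[letter] -= 1
            block = multiset_perms(counts)   # P(current prefix + letter)
            if n <= block:                   # the target lies in this block
                out.append(letter)
                break
            n -= block                       # skip the whole block
            counts[letter] += 1
    return "".join(out)

print(nth_permutation("AABBBCDD", 500))
```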
convex, twice-differentiable vector-valued function properties | How should one define a convex function $f:\mathbb{R}^n\to\mathbb{R}^m$ for $m>1$? Some inequality is needed. For $f:\Bbb{R}^n\to\Bbb{R}$ (twice differentiable), positive semidefiniteness of the Hessian matrix is the condition equivalent to convexity. How to explain this? See the proof. |
The change in the percentage decrease | The $-4\%$ would be the maximum decrease.
If you have $\$100$ and change it $-2\%$, you then have $100-0.02\times 100$ or $\$98$.
If you have $\$100$ and change it $-4\%$, you then have $100-0.04\times 100$ or $\$96$.
To deal with the sign, you could say "a change of $-4\%$" or "a decrease of $4\%$". They mean the same thing. (Just like "a change of $+4\%$" and "an increase of $4\%$" mean the same thing.) |
Laurent Series Coefficients Problem. | The function $e^{z+\frac{1}{z}}$ is holomorphic everywhere in $0 < |z| < \infty$ and, therefore, has a Laurent series expansions
$$
e^{z+\frac{1}{z}} = \sum_{n=-\infty}^{\infty}a_n z^n, \;\;\; 0 < |z| < \infty.
$$
The Laurent series coefficients $a_n$ are given by
\begin{align}
a_n&=\frac{1}{2\pi i}\oint_{|z|=1}e^{z+\frac{1}{z}}\frac{1}{z^{n+1}}dz \\
&= \frac{1}{2\pi }\int_{-\pi}^{\pi}e^{e^{i\theta}+e^{-i\theta}}e^{-i(n+1)\theta}e^{i\theta}d\theta \\
&= \frac{1}{2\pi }\int_{-\pi}^{\pi}e^{2\cos\theta}e^{-in\theta}d\theta \\
&= \frac{1}{2\pi }\left(\int_{-\pi}^{0}+\int_{0}^{\pi}\right)e^{2\cos\theta}e^{-in\theta}d\theta \\
&= \frac{1}{2\pi }\left(
- \int_{\pi}^{0}e^{2\cos(-\theta)}e^{in\theta}d\theta+\int_{0}^{\pi}e^{2\cos\theta}e^{-in\theta}d\theta\right) \\
&= \frac{1}{2\pi }\int_{0}^{\pi}e^{2\cos\theta}(e^{in\theta}+e^{-in\theta})d\theta \\
&= \frac{1}{\pi}\int_{0}^{\pi}e^{2\cos\theta}\cos(n\theta)d\theta.
\end{align} |
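Numerically, these coefficients match the modified Bessel function $I_n(2)$, via the standard integral representation $I_n(x)=\frac1\pi\int_0^\pi e^{x\cos\theta}\cos(n\theta)\,d\theta$; this identification is an aside added here, not part of the derivation above:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv

for n in range(4):
    integral, _ = quad(lambda th: np.exp(2*np.cos(th)) * np.cos(n*th), 0, np.pi)
    print(integral / np.pi, iv(n, 2))   # the two columns agree
```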
Square root of strictly monotonic function | If $f(x)=e^x$ when $x\neq 0$ and $f(0)=-1$, then $f$ reaches its infimum and $g(x)=e^{2x}$ is strictly monotone.
Similarly, if $f(x)=-e^x$ when $x\neq 0$ and $f(0)=1$, then $f$ reaches its supremum.
However, it is not possible for both to occur. This is because $|f|=\sqrt{g}$ is strictly monotone, so its supremum is not reached, and this supremum is either the supremum of $f$ or minus the infimum of $f$ (or both). |
What's the limit of this expression: $\lim\limits_{M \to \infty}1/(\sum_{i=0}^{\infty}\frac{M!}{\left(M+i\right)!}x^{i})$ | As $1/x$ is continuous, you need to calculate
$$\lim_{M\to \infty} \sum_{i=0}^{\infty}\frac{M!}{\left(M+i\right)!}x^{i}.$$
We have
$$ \frac{1}{M^i} \geq \frac{M!}{(M+i)!} $$
independent of $x$. Thus,
$$\frac{M}{M-x}=\sum_{i=0}^\infty \frac{1}{M^i} x^i \geq\sum_{i=0}^{\infty}\frac{M!}{\left(M+i\right)!}x^{i} .$$
With $M\to\infty$, we find that the limit of the sum is at most $1$ for all $x$.
To find a lower bound, we just take the term corresponding to $i=0$ (all terms are positive), and we have
$$\sum_{i=0}^{\infty}\frac{M!}{\left(M+i\right)!}x^{i} \geq 1.$$
Concluding, we have that $$\lim_{M\to \infty} \sum_{i=0}^{\infty}\frac{M!}{\left(M+i\right)!}x^{i}=1$$
so your limit is also 1 independent of $x$. |
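A small numeric illustration of this limit (taking $x=2$ and truncating the sum, both choices made for this sketch):

```python
def partial_sum(M, x, terms=200):
    s, ratio = 0.0, 1.0          # ratio = M!/(M+i)!
    for i in range(terms):
        s += ratio * x**i
        ratio /= (M + i + 1)
    return s

for M in [10, 100, 1000]:
    print(M, partial_sum(M, 2.0))   # tends to 1 as M grows
```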
Finding torsion subgroups of elliptic curves over finite fields | To answer your specific question, no, you cannot "combine" points of order 2 and order 4 to create points of order 3. On the other hand, it's rather easy to find the points of order 3. Simply use the duplication formula to write $$x(2P)=x(P).$$ Clearing denominators will give you an equation to solve for $x(P)$. (In general, you'd get a quartic equation, but since you're looking for $p$-torsion in characteristic $p$, the degree will be lower.) If you get a constant, the curve is supersingular, and $E[3]=0$. If you get a non-constant equation, then $E[3]\cong\mathbb{Z}/3\mathbb{Z}$. |
Permutations Integers 1 to 9 all even numbers stay in their natural positions | To 1:
You have five places where you can put the five odd numbers (1, 3, 5, 7 and 9):
_ 2 _ 4 _ 6 _ 8 _
So you don't need to bother about the even numbers. The number of different permutations will be: 5! = 120
To 3:
You have four places where you can put the four even numbers (2, 4, 6 and 8):
1 _ 3 _ 5 _ 7 _ 9
So you don't need to bother about the odd numbers. The number of different permutations will be: 4! = 24 |
If $u = e^{x+y} + \ln (x^3+y^3-x^2y-xy^2)$, find the value of - | If, as commented by Neeraj Bhauryal, you write $$u=e^{x+y} + \ln (x^3+y^3-x^2y-xy^2)=e^{x+y}+\ln (x+y)+2\ln(x-y)$$ and take into account the "symmetry", the problem of partials is quite simple $$u'_x=e^{x+y}+\frac{1}{x+y}+\frac{2}{x-y}$$ $$u'_y=e^{x+y}+\frac{1}{x+y}-\frac{2}{x-y}$$ $$u''_{xx}=e^{x+y}-\frac{1}{(x+y)^2}-\frac{2}{(x-y)^2}$$ $$u''_{yy}=e^{x+y}-\frac{1}{(x+y)^2}-\frac{2}{(x-y)^2}$$ $$u''_{xy}=e^{x+y}-\frac{1}{(x+y)^2}+\frac{2}{(x-y)^2}$$ and, after a few simplifications, the expression simplifies to $$e^{x+y} (x+y) (x+y+1)$$ |
Minimize error function with integer constraints | Instead of trying all combinations of $h$ and $m$ values you could loop through all the possible $h$ and try just $m=\left\lfloor \frac{128y}{h} +\frac12 \right\rfloor$ for each of them.
You'll need to special-case $h=0$, but zeroes will be involved if and only if $y\le\frac1{256}$, so you can handle that once and for all at the beginning.
You can optimize further by trying only $h$ in the interval $1$ to $\Bigl\lfloor \sqrt{128y}\Bigr\rfloor$. |
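A sketch of that search in Python (the error being minimized, $|y - hm/128|$, the ranges of $h$ and $m$, and the helper name `best_hm` are my assumptions, since the original error function isn't shown here):

```python
def best_hm(y, h_max=255, m_max=255):
    if y <= 1 / 256:                            # handle h = 0 once, up front
        return 0, 0
    best = (float("inf"), 0, 0)
    for h in range(1, h_max + 1):
        m = min(m_max, int(128 * y / h + 0.5))  # best m for this h
        err = abs(y - h * m / 128)
        if err < best[0]:
            best = (err, h, m)
    return best[1], best[2]

print(best_hm(0.7))
```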
Manipulating log function | Note that for any constants $a,b,c\in\mathbb{R}$, $a\ln(b(x+c))=a\ln(b)+a\ln(x+c)=\ln(b^{a})+\ln((x+c)^{a})$
Therefore, in this case we have:
$$2\ln(x-1)-\frac{x}{2}+c=2\ln(-1(1-x))-\frac{x}{2}+c=\ln((-1)^{2}(1-x)^{2})-\frac{x}{2}+c$$
And using the fact that $(-1)^{2}=1$, and $\ln(1)=0$ we have:
$$2\ln(x-1)-\frac{x}{2}+c=2\ln(1-x)-\frac{x}{2}+c$$
And then as amWhy points out, you can have $c=\frac{1}{2}+k$ for restricted values of $x$. |
Notation regarding Subsets of Inner Product Spaces | Well, an inner product space is a vector space equipped with an inner product. So we have an addition operation.
So $A + B := \{ a + b \mid a \in A, b \in B \}$. In other words, $A+ B$ is the set of all sums where the first element is from $A$ and the second element is from $B$.
If $A = \{1, 3\}$ and $B = \{2, 4\}$, then $A+ B = \{1 + 2, 1 + 4, 3 + 2, 3 + 4\}$. |
If $\lim_{|z|\to 1^-}u(z)=0$ then $u\equiv 0$ | We are given that $u$ is a complex-valued harmonic function for $|z|<1$ with $u\to 0$ as $|z|\to 1^{-}$.
The Maximum Modulus Principle guarantees that the maximum (and minimum) for $u$ must occur on the boundary $|z|=1$. Inasmuch as $\lim_{|z|\to 1^{-}}u(z)=0$, then $u=0$ for all $|z|<1$.
We can also view this in the context of real analysis. Since $u$ is harmonic, both its real and imaginary parts satisfy Laplace's equation in the interior of the unit circle and vanish on the boundary. The uniqueness theorem for the Dirichlet Problem applied to Laplace's Equation immediately leads to the expected answer. |
Functional equation of non-negative function | Substitute in $x=y=0$, we get that $ f(0) f(0) = f(0)$. Since $f(0) \neq 0$, thus $f(0) = 1$.
Substitute in $x=x, y = 2$, $0 = f(x\cdot 0) \cdot 0 = f(x+2)$. Hence for $x \geq 2, f(x) = 0 $.
We now focus our attention to the region $x\leq 2$.
Substitute in $x=2-y, y = y < 2$. We get that $ f[ (2-y) f(y) ] f(y) = f(2) = 0$. Since $f(y) \neq 0$ thus $(2-y) f(y) \geq 2$, or that $f(y) \geq \frac{2}{2-y}$.
Suppose that there exists a value $y$ such that $f(y) > \frac{2}{2-y}$. Then, take the value $x$ such that $ \frac{2}{f(y)} < x < 2-y$. We get that
$$ 0 = f( x f(y) ) f(y) = f(x+y) \neq 0,$$
which is a contradiction. Hence, at best, $f(y) = \frac{2}{2-y}$.
It is easy to check that $ f(x) = \begin{cases} \frac{2}{2-x}& 0\leq x < 2 \\ 0 & 2 \leq x \\ \end{cases}$ is a solution to the functional equation. (The simplest approach is to condition on $x+y<2$ and $x+y \geq 2$). Hence, it is the only solution. |
Showing the distribution of a poisson process | This is called thinning of a Poisson Process. Suppose that $\Phi$ is the set of decay points in the real line following Poisson with intensity $\lambda$.
Note that the expected value of $D(t)$ is equal to $\lambda t$.
The problem with your original idea, which I prefer too, is that the conditional distribution is indeed a binomial distribution, because you have to choose the $n$ decaying points out of the total $m$ points. The following is the correct solution:
$$
\Pr(N(t)=n)=\sum_{m=n}^\infty \Pr(N(t)=n\mid D(t)=m)\Pr(D(t)=m)\\
=\sum_{m=n}^\infty \binom{m}n p^n(1-p)^{m-n}\frac{(\lambda t)^me^{-\lambda t}}{m!}\\
=\sum_{m=n}^\infty \frac {m!}{n!(m-n)!} ({p}\lambda t)^n\frac{((1-p)\lambda t)^{m-n}e^{-\lambda t}}{m!}\\
=\frac {1}{n!} ({\lambda pt})^n e^{-\lambda p t}.
$$ |
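A Monte Carlo check of the thinning result, i.e. that $N(t)\sim\text{Poisson}(p\lambda t)$ (the parameter values are sample choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, p, t, trials = 3.0, 0.4, 2.0, 200_000

D = rng.poisson(lam * t, size=trials)   # total decay points D(t)
N = rng.binomial(D, p)                  # each point detected with probability p
print(N.mean(), lam * p * t)            # both ~ 2.4
```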
logic symbol for 'unlike, differing from' | There is nothing unprofessional about using words like "unlike" in your mathematical narrative. If you were working on a modal logic with a modality for "likeness" and "unlikeness" you might be justified in adopting a new symbol for it, but otherwise it is best to stick with well-known notation. |
Continuity and differentiability of a function defined by a Lebesgue integral | I guess you can read a nice solution in these notes. |
Show a complex equation has one or two roots | You're going about it the wrong way. Try this:
Divide through by $a$, to get an equation of the form $z^2+Bz+C = 0$, which has the same roots as the original equation. (This step is not strictly necessary, but it simplifies the algebra.)
Take the two roots $\alpha, \beta$ that you know about (if the equation has only one root $\alpha$, set $\beta = \alpha$). Show (by direct algebraic calculation) that $(z-\alpha)(z-\beta) = z^2+Bz+C$.
Deduce that $z^2+Bz+C$ can't be zero unless $z=\alpha$ or $z=\beta$. |
Three fields $F\leq F_1,~F_2$ and $F_1\neq F_2$ but $F_1\cong F_2$ | You can show, that for any ring $ R $ (commutative, unitary) there exists at most one morphism $ \Bbb{Q} \to R $. This implies that there can be at most one subring of R isomorphic to $ \Bbb{Q} $. Now it is easy to construct counterexamples. Just consider the following field extensions of $ \Bbb{Q} $ in $ \Bbb{C} $:
$$ \Bbb{Q}[\sqrt[3]{3}] \quad \text{and} \quad \Bbb{Q}[\zeta \sqrt[3]{3}], $$
where $ \zeta $ is a non-trivial third root of unity. Now those are field extensions of $ \Bbb{Q} $ and they are isomorphic because $ \sqrt[3]{3}, \sqrt[3]{3} \zeta $ have the same minimal polynomial $ X^{3} - 3 $. Furthermore they are not the same because $ \Bbb{Q}[\sqrt[3]{3}] \subset \Bbb{R} $, but $ \zeta \sqrt[3]{3} \in \Bbb{C}\backslash\Bbb{R} $.
Hint for the first statement: This is a fun little exercise. If you haven't seen it, I can recommend it. First you can show that there exists exactly one morphism $ \Bbb{Z} \to R $. For this you just have to remember what the symbols in $ \Bbb{Z} $ actually stand for. To show, that there is at most one way to extend this to a morphism $ \Bbb{Q} \to R $ do the same: remember what the symbols in $ \Bbb{Q} $ stand for. |
Proof by contradiction, status of initial assumption after the proof is complete. | We are not deriving $P$ from the assumption $\lnot P$. What we are doing is that we are deriving a contradiction from the assumption $\lnot P$. The contradiction can be something like $P \land \lnot P$, or it could be something else which contradicts an already proven statement.
The fact that we have arrived at a contradiction in turn implies that our initial assumptions have to be false. This is how we derive that $\lnot P$ must be false and hence $P$ must be true.
To reiterate, we are not proving $P$ with $\lnot P$ as an assumption. We are using the assumption to arrive at a contradiction which then implies $P$. |
A question about continued fractions and Gauss map | I've decided to put my comments together in an answer so some readers can get a feel for what the question is about, and see the $n=1,2,3$ cases. I should also mention that this question's subject area seems relevant to the topic of dynamical zeta functions, though I don't know any details.
Setting the simple continued fraction expansion $x=[a_1,a_2,\cdots]$, we find that $T^nx=x$ means
$$[a_1,a_2,a_3,\cdots]=[a_{n+1},a_{n+2},\cdots],$$
and hence the SCFE is $a_1,\cdots,a_n$ repeated over and over again for such a fixed point; note the $a_i$s can be arbitrary positive integers for each $1\le i\le n$. Therefore we have
$$\zeta_n(t)=\sum_{T^nx=x}\frac{1}{q_n^t}=\sum_{k=1}^\infty\frac{f_n(k)}{k^t},$$
where the $q_n$'s are functions of the $x$'s and $f_n(k)$ counts the number of rationals in $(0,1)$ with denominator $k$ (in reduced form) expressible as $x=[a_1,\cdots,a_n]$. When $n=1$, the only rational number $x=[a_1]=1/a_1$ with denominator $k$ is $1/k$, so $f_1(k)=1$ and $\zeta_1(t)=\zeta(t)$ is the familiar Riemann zeta function. When $n=2$, look at the fractions of the form
$$[a,b]=\cfrac{1}{a+\cfrac{1}{b}}=\frac{b}{ab+1}.$$
Note $\gcd(ab+1,b)=\gcd(1,b)=1$ by the Euclidean algorithm, so the numerator and denominator as depicted share no common factor and the above is in reduced form. Hence
$$\zeta_2(t)=\sum_{k\ge2}\frac{1}{k^t}\sum_{ab+1=k}1=\sum_{k\ge2}\frac{\sigma_0(k-1)}{k^t}$$ where the divisor function $\sigma_0(m)$ counts the number of positive integer divisors of $m$. For $n=3$ the situation becomes a bit more complicated; the fractions look like
$$[a,b,c]=\cfrac{1}{a+\cfrac{1}{b+\cfrac{1}{c}}}=\cfrac{1}{a+\cfrac{c}{bc+1}}=\frac{bc+1}{a+abc+c}.$$
Note again $\gcd(a(bc+1)+c,bc+1)=\gcd(c,bc+1)=\gcd(c,1)=1$ by the Euclidean algorithm and hence this is again in reduced form as depicted. Therefore we obtain the coefficients
$$f_3(k)=\#\{1\le a,b,c\in\Bbb N: a+abc+c=k\}.$$
So far I haven't had any luck obtaining a closed form for this; I've looked at a couple substitutions and at modular systems $k\equiv \bar{a}$ mod $\bar{c}$, $\bar{c}$ mod $\bar{a}$, to no avail. At any rate, while there's probably a closed-form for $q_n$ as a polynomial in $a_1,\cdots,a_n$ I haven't looked at, the $f_n$ functions will correspond to ever more complicated multivariable Diophantine equations involving more and more terms; I do not immediately see any clever way to tackle all of the $f_n$'s simultaneously. |
If $D$ be a division ring and $D^*$ be finitely generated group then $D^*$ is abelian group? | This isn't really an answer, but it looks as though the question might be hard.
There's a proof for the case where $D$ is finitely generated over its centre in Theorem 1 of
Akbari, S.; Mahdavi-Hezavehi, M., Normal subgroups of $\text{GL}_n(D)$ are not finitely generated, Proc. Am. Math. Soc. 128, No.6, 1627-1632 (2000). ZBL0951.20036,
and Conjecture 1 in the same paper is that it's true in general.
Incidentally, it also seems to be an open problem to decide whether an infinite division ring can be finitely generated as a ring (which of course would follow if its group of units were finitely generated as a group). This is referred to as the "Latyshev problem" in these slides of a 2014 talk by Agata Smoktunovicz. |
Finding the value of the line integral $\frac25\int_{T} x \,\mathrm{d}s,$ | HINT
parametrize the curve $T$ by trigonometric functions $(x(t),y(t),z(t))$, $t\in [0,\pi/2]$
set up and calculate the line integral $\frac25\int_{t_1}^{t_2} x(t)\sqrt{x'(t)^2+y'(t)^2+z'(t)^2}dt$ |
find matrix such that $ Ax=(1,1,1)^t$ has exactly three distinct solutions | No:
$Ax=\begin{pmatrix}1\\1\\1\end{pmatrix}$ is consistent implies the system possesses either exactly one solution or infinitely many solutions, since if $x_0$ is a solution, then all solutions are obtained by adding $x_0$ to the general solution of the associated homogeneous system $Ax = 0.$ |
How to prove $A^2=0$ if $AB-BA=A$ | You say that $tr(A) = tr(AB)-tr(BA)=0$. Therefore the Cayley–Hamilton theorem tells us that $A^2 = -\det(A) I_2$.
On the other hand we have $A^2= A^2B-ABA=ABA-BA^2$. Therefore $2A^2 = A^2B-BA^2=0$ since $A^2$ is a multiple of the identity. |
Why aren't vectors curved? | Well, a short answer is that we use vectors to do linear algebra and geometry, and we use limits and calculus to apply those concepts to curved objects. So, a vector is a basic building block, and shouldn't be too broadly defined.
Here's a non-answer that might help, or might be confusing: if vectors are curved, how will you take a dot product: that is, how will you measure the angle between two vectors numerically?
However, I will try to answer what I think you're getting at as I think it's a very interesting question. If you think of a "vector" as "a magnitude and direction" and would like to use a vector to say how far one has traveled in a given direction, then if you're talking about directions on a curved object like the surface of the Earth, for example, it makes sense to talk about them being curved. The best way of encapsulating this in my opinion is the notion of geodesic, which is like a line segment but it is curved to fit the space. (Essentially, a geodesic segment between points $A$ and $B$ is the shortest path from $A$ to $B$; in a flat space that's a line segment.) Unfortunately, geodesic segments with a direction don't behave like vectors in very important ways, one of which is: you can't add them.
Yes, you can fit two geodesic segments head to tail. Well, maybe--in a non-flat world, it's hard to agree on a consistent idea of "direction", so moving vectors around is suddenly complicated. But even if you pick some coordinates that line up nicely, under the "fit head of $\vec w$ to tail of $\vec v$" definition of addition, then in curved spaces, for most vectors, $\vec v + \vec w \ne \vec w + \vec v$. In fact, differential geometers use these discrepancies to measure curvature! (As well as to decide how coordinate systems should work for curved spaces.) Although I think to start out learning about geodesics and curvature it is easier to read about how Gauss thought of curvature for surfaces, specifically about the angles of triangles.
What we do instead for curved spaces is use tangent vectors, which I think of as the idea of a direction "if you could go 'straight' in that direction". We use derivatives to relate tangent vectors from one place to another, and integrals (line integrals) to measure the distance traveled along a curve. I would argue we use vectors to define not only coordinates but what the idea of "straight" vs "curved" means in the first place.
Finally, while I personally think of linear algebra as geometric, and vectors as geometric objects, even if they're in many dimensions, and I encourage others to do so: for many many applications, vectors and matrices are simply useful ways of organizing numbers that belong to some data. And matrix multiplication, dot products, and other concepts of linear algebra, which are geometric, have other applications. So, you can think of vectors as just a bunch of coordinates, or as objects with a magnitude and direction, but to make those definitions agree we need something else for curves. |
To prove Heine-Borel theorem for $\mathbb R^n$ with usual Euclidean topology | I will prove this for $n=1$, i.e. for $\mathbb{R}$. The general case is similar but needs more attention to detail.
Let $A \subset \mathbb{R}$ be totally bounded and closed.
We want to prove that $A$ is compact.
Proof:
We know that $A$ is totally bounded. So: $$\exists M>0: A \subset [-M,M]$$
We show that $[-M,M]$ is compact, so that the closed subset $A$ of it is compact too.
Let $$[-M,M] \subset \mathop{\bigcup_{\alpha \in I}} {G}_{\alpha}$$
be an arbitrary open covering of $[-M,M]$.
Denote $S_{0} = [-M,M]$. Assume that $S_{0}$ cannot be covered by finitely many of the $G_{\alpha}$. Then at least one of $[-M,0]$, $[0,M]$ cannot be covered by finitely many of the $G_{\alpha}$. Call this interval $S_{1}$. Notice: $S_{1} \subset S_{0}$.
Repeating this bisection, we construct a chain of sets:
$$S_{0} \supset S_{1} \supset ...\supset S_{n} \supset ...$$
and no set in this chain can be covered by finitely many of the $G_{\alpha}$.
Now $d(S_{n})=\frac{2M}{2^{n}} \rightarrow 0$, where $d$ denotes the diameter.
Because the $S_{n}$ are nested closed intervals with diameters tending to $0$, there exists an element $x \in \mathbb{R}$ such that (see below why)
$$\mathop{\bigcap_{n \in \mathbb{N}}{S_{n}}} = \{x\}$$
We know: $$x \in S_{0}=[-M,M] \subset \mathop{\bigcup_{\alpha \in I}} {G}_{\alpha}$$ So:
$$\exists \alpha \in I: x \in G_{\alpha}$$
$G_{\alpha}$ is open, so: $\exists \epsilon>0: B(x,\epsilon) \subset G_{\alpha}$. Because $d(S_{n}) \rightarrow 0$ and $x \in S_{N}$ for every $N$, we can take $N \in \mathbb{N}$ big enough so that $S_{N} \subset B(x,\epsilon) \subset G_{\alpha}$.
We see that $S_{N}$ is covered by a single $G_{\alpha}$. This contradicts the assumption that $S_{N}$ cannot be covered by finitely many of the $G_{\alpha}$. So $[-M,M]$ can be covered by finitely many of the $G_{\alpha}$, and hence $[-M,M]$ is compact.
This concludes the proof.
I will now prove the 'see below why' step.
Let $$S_{0} \supset S_{1} \supset ...\supset S_{n} \supset ...$$
be a chain of closed intervals in $\mathbb{R}$ with $d(S_{n}) \rightarrow 0$.
Then $$\exists x \in \mathbb{R}: \mathop{\bigcap_{n \in \mathbb{N}}{S_{n}}} = \{x\}$$
Proof:
Denote $S_{n} = [a_{n},b_{n}]$.
We directly see that $(a_{n})$ is an increasing sequence in $\mathbb{R}$ and $(b_{n})$ is a decreasing sequence in $\mathbb{R}$ with $a_{n} \leq b_{n}$; both sequences are bounded, so both converge. Denote: $$\mathop{\lim_{n \rightarrow \infty}{a_{n}}} = a$$
$$\mathop{\lim_{n \rightarrow \infty}{b_{n}}} = b$$
Trivially, $S_{n} \supset [a,b]$ for each $n \in \mathbb{N}$, so $$\forall n \in \mathbb{N}: b-a \leq d(S_{n}).$$
Taking the limit as $n \to \infty$ gives $b-a = 0$, so $a=b$.
We also see: $$a \in \mathop{\bigcap_{n \in \mathbb{N}}{S_{n}}}$$
Let $c \in \mathop{\bigcap_{n \in \mathbb{N}}{S_{n}}}$. Then for each $n \in \mathbb{N}$, we have $c \in S_{n}$. By a similar argument as earlier:
$$ \forall n \in \mathbb{N}: |c-a| \leq d(S_{n})$$
So $c = a$, and hence
$$\mathop{\bigcap_{n \in \mathbb{N}}{S_{n}}} = \{ a \}$$ |
Find Intersection of two tangent of arc | If $\alpha$ and $\beta$ are the start and end point angles, then the intersection of the two tangents at the endpoints lies at the angle $(\alpha+\beta)/2$ and at a distance $d=R/\cos\big(\frac{\beta-\alpha}{2}\big)$ from the center, where $R$ is the radius.
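A quick check (a sketch in R): the tangents at $\alpha=0$ and $\beta=\pi/2$ on the unit circle are the lines $x=1$ and $y=1$, which meet at $(1,1)$:

    R <- 1; alpha <- 0; beta <- pi / 2
    d <- R / cos((beta - alpha) / 2)          # distance from center to intersection
    c(d * cos((alpha + beta) / 2),
      d * sin((alpha + beta) / 2))            # (1, 1), as expected |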
Expected Value: Flipping A Fair Coin Until $10 Is Used Up | Set up a recursive relationship by conditioning on the first toss. The probability that one of them, say Abraham WLOG, loses is given by
$$
P(A\;\text{loses game})\\
=P(A\;\text{loses game}\mid A\;\text{lost first round})\,P(A\;\text{lost first round})+P(A\;\text{loses game}\mid A\;\text{won first round})\,P(A\;\text{won first round})
$$
Using the above relationship, you should be able to come up with the probability of losing in terms of the initial endowments; see the sketch below.
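For a concrete illustration, here is a sketch in R (my assumptions: 1-dollar stakes per fair flip and a combined bankroll of 20 dollars, i.e. 10 each; `p[i + 1]` stands for the probability that $A$ loses given $A$ currently holds $i$ dollars):

    n <- 20                        # total money in play
    p <- numeric(n + 1)            # p[i + 1] = P(A loses | A has i dollars)
    p[1] <- 1                      # A has 0 dollars: already lost
    p[n + 1] <- 0                  # A has all 20 dollars: already won
    for (iter in 1:5000) {         # iterate p_i = (p_{i-1} + p_{i+1}) / 2 to a fixed point
      for (i in 2:n) p[i] <- (p[i - 1] + p[i + 1]) / 2
    }
    p[11]                          # P(A loses | starts with 10) = 0.5, as symmetry demands
                                   # (for a fair coin the exact solution is p_i = 1 - i/20) |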
Let $a, b, c \in (0, 1)$ and $a+b+c+ab+bc+ca = 1+a b c$. Prove that: | Put $a=\tan A,\ b=\tan B,\ c=\tan C$ with $A,B,C\in(0,\frac{\pi}{4})$.
The constraint rearranges to $a+b+c-abc=1-(ab+bc+ca)$, so $\tan(A+B+C)=\frac{a+b+c-abc}{1-(ab+bc+ca)}=1$, hence $A+B+C=\frac{\pi}{4}$.
Now, $$\frac{1+a}{1+a^2}=\frac{1+\cos2A+\sin2A}{2}=\frac{1+\sqrt2\sin(2A+\frac{\pi}{4})}{2}$$
If $x=2A+\frac{\pi}{4}$, as $0<A<\frac{\pi}{4}\implies \frac{\pi}{4}<x<\frac{3\pi}{4}$
If $f(x)=\sin x,f'(x)=\cos x, f''(x)=-\sin x<0$ as $\frac{\pi}{4}<x<\frac{3\pi}{4}$
So, $\sin x$ is concave function in $(\frac{\pi}{4},\frac{3\pi}{4})$.
So using Jensen's inequality,
$$\sum \sin(2A+\frac{\pi}{4})≤3\sin\left(\frac{2A+\frac{\pi}{4}+2B+\frac{\pi}{4}+2C+\frac{\pi}{4}}{3}\right)$$
$$=3\sin\frac{5\pi}{12}=3\sin(\frac{\pi}{4}+\frac{\pi}{6})=\frac{3(\sqrt3+1)}{2\sqrt2}$$
So,
$$\sum\frac{1+a}{1+a^2}=\sum\frac{1+\sqrt2\sin(2A+\frac{\pi}{4})}{2}$$
$$=\frac{3}{2}+\frac{1}{\sqrt2}\sum\sin(2A+\frac{\pi}{4})$$
$$≤\frac{3}{2}+\frac{1}{\sqrt2}\frac{3(\sqrt3+1)}{2\sqrt2}$$
$$=\frac{3}{2}+\frac{3(\sqrt3+1)}{4}=\frac{3\sqrt3}{4}+\frac{9}{4}$$
Alternatively, $$\frac{1+a}{1+a^2}=\frac{1+\cos2A+\sin2A}{2}=\frac{1+\sqrt2\cos(2A-\frac{\pi}{4})}{2}$$
If $y=2A-\frac{\pi}{4}$, as $0<A<\frac{\pi}{4}\implies -\frac{\pi}{4}<y<\frac{\pi}{4}$
If $f(y)=\cos y,f'(y)=-\sin y, f''(y)=-\cos y<0$ as $-\frac{\pi}{4}<y<\frac{\pi}{4}$
So, $\cos y$ is concave function in $(-\frac{\pi}{4},\frac{\pi}{4})$.
So using Jensen's inequality,
$$\sum \cos(2A-\frac{\pi}{4})≤3\cos\left(\frac{2A-\frac{\pi}{4}+2B-\frac{\pi}{4}+2C-\frac{\pi}{4}}{3}\right)$$
$$=3\cos(-\frac{\pi}{12})=3\cos(\frac{\pi}{6}-\frac{\pi}{4})=\frac{3(\sqrt3+1)}{2\sqrt2}$$ and so on.
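A numerical spot check (a sketch in R): at $A=B=C=\frac{\pi}{12}$, i.e. $a=b=c=\tan\frac{\pi}{12}$, the constraint is satisfied and the bound is attained with equality:

    a <- tan(pi / 12)
    c(3 * a + 3 * a^2, 1 + a^3)        # constraint: both sides ~ 1.019238
    c(3 * (1 + a) / (1 + a^2),
      3 * sqrt(3) / 4 + 9 / 4)         # sum vs. bound: both ~ 3.549038 |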
Is this a valid proof of Young's theorem? | Your approach starts in the right direction. But the interchange of the order of limits is in general not correct. Instead, we will use the Mean-Value Theorem to facilitate the development.
First, note that as it is defined the limits of $\frac{\Delta (h,k)}{hk}$ are
$$\lim_{h\to 0}\lim_{k\to 0}\frac{\Delta(h,k)}{hk}=\frac{\partial }{\partial x}\frac{\partial f}{\partial y}(a,b)$$
and
$$\lim_{k\to 0}\lim_{h\to 0}\frac{\Delta(h,k)}{hk}=\frac{\partial }{\partial y}\frac{\partial f}{\partial x}(a,b)$$
Next, we exploit the Mean-Value Theorem and write
$$\begin{align}
\frac{\Delta(h,k)}{hk}&=\frac{\frac{\partial f}{\partial x}(a+\theta h,b+k)\,h-\frac{\partial f}{\partial x}(a+\theta h,b)\,h}{hk}\\\\
&=\frac1k \left(\frac{\partial f}{\partial x}(a+\theta h,b+k)-\frac{\partial f}{\partial x}(a+\theta h,b)\right)\\\\
&=\frac{\partial }{\partial y}\frac{\partial f}{\partial x}(a+\theta h,b+\eta k) \tag 1
\end{align}$$
where $0<\theta<1$ and $0<\eta<1$.
Analogously, we can exploit the Mean-Value Theorem and write
$$\begin{align}
\frac{\Delta(h,k)}{hk}&=\frac{\frac{\partial f}{\partial y}(a+ h,b+\nu k)\,k-\frac{\partial f}{\partial y}(a,b+\nu k)\,k}{hk}\\\\
&=\frac1h \left(\frac{\partial f}{\partial y}(a+ h,b+\nu k)-\frac{\partial f}{\partial y}(a,b+\nu k)\right)\\\\
&=\frac{\partial }{\partial x}\frac{\partial f}{\partial y}(a+\phi h,b+\nu k) \tag 2
\end{align}$$
where $0<\nu<1$ and $0<\phi<1$.
Using $(1)$ and $(2)$ reveals that for each $h$ and $k$ (with $(a,b)$ fixed), there exist four numbers $\theta$, $\eta$, $\nu$, and $\phi$ (all between $0$ and $1$) such that
$$\frac{\partial }{\partial y}\frac{\partial f}{\partial x}(a+\theta h,b+\eta k)=\frac{\partial }{\partial x}\frac{\partial f}{\partial y}(a+\phi h,b+\nu k) \tag 3$$
Under the assumption that both mixed partial derivatives of $f$ exist and are continuous at $(a,b)$, then we can take the limit as $(h,k)\to (0,0)$ of both sides of $(3)$ and obtain the coveted equality of mixed partials. |
how is sum of sines related to digamma? | $$
\begin{array}{rcl}
&& \sum_{k=1}^{\infty}\frac{\sin\left (\frac{\pi k}{3}\right )}{k^{2}} \\
&=& \small\sum_{k=0}^{\infty}\left(
\frac{\sin\left (\frac{\pi (6k+1)}{3}\right )}{(6k+1)^{2}}+
\frac{\sin\left (\frac{\pi (6k+2)}{3}\right )}{(6k+2)^{2}}+
\frac{\sin\left (\frac{\pi (6k+3)}{3}\right )}{(6k+3)^{2}}+
\frac{\sin\left (\frac{\pi (6k+4)}{3}\right )}{(6k+4)^{2}}+
\frac{\sin\left (\frac{\pi (6k+5)}{3}\right )}{(6k+5)^{2}}+
\frac{\sin\left (\frac{\pi (6k+6)}{3}\right )}{(6k+6)^{2}}
\right) \\
&=& \frac{\sqrt{3}}{2} \sum_{k=0}^{\infty}\left(
\frac{1}{(6k+1)^{2}}+
\frac{1}{(6k+2)^{2}}-
\frac{1}{(6k+4)^{2}}-
\frac{1}{(6k+5)^{2}}
\right)
\end{array}$$
(The grouping has to start at $k=0$ so that no terms are lost; the $6k+3$ and $6k+6$ terms drop out since $\sin(m\pi)=0$.)

Now the connection to the digamma: since the trigamma function $\psi^{(1)}$ (the derivative of the digamma $\psi$) satisfies $\psi^{(1)}(z)=\sum_{k=0}^{\infty}\frac{1}{(k+z)^{2}}$, each of the four remaining sums is a trigamma value, e.g. $\sum_{k=0}^{\infty}\frac{1}{(6k+1)^{2}}=\frac{1}{36}\,\psi^{(1)}\!\left(\frac{1}{6}\right)$. Altogether,
$$\sum_{k=1}^{\infty}\frac{\sin\left(\frac{\pi k}{3}\right)}{k^{2}}=\frac{\sqrt{3}}{72}\left(\psi^{(1)}\!\left(\tfrac{1}{6}\right)+\psi^{(1)}\!\left(\tfrac{1}{3}\right)-\psi^{(1)}\!\left(\tfrac{2}{3}\right)-\psi^{(1)}\!\left(\tfrac{5}{6}\right)\right)$$
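A quick numerical cross-check (a sketch in R; `trigamma` is base R's $\psi^{(1)}$):

    k <- 1:10^6
    sum(sin(pi * k / 3) / k^2)        # partial sum: ~ 1.014942
    sqrt(3) / 72 * (trigamma(1/6) + trigamma(1/3) -
                    trigamma(2/3) - trigamma(5/6))
                                      # trigamma combination: ~ 1.014942 |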
The definition of interpretation in a Kripke model collides with my intuition of what it should do | The $w$ indexes elements in $W$, where the elements of $W$ are traditionally called possible worlds.
In each world $w \in W$ you have an "interpretation function" $I_w$, which interprets each constant $c$ of the language as an "object" $I_w(c) \in D$ and each $n$-ary predicate constant $P$ as an $n$-ary relation on $D$.
Please notice that, since $P$ is an $n$-ary predicate, its interpretation must be an $n$-ary relation on $D$, i.e. a subset of $D^n$.
There are no "deviations" from the standard semantics for f-o languages; the only addition is that there is no longer a single domain (a "world") but a whole family of worlds.
From: Alan Berger (editor), Saul Kripke (2011), Chapter 5: Kripke Models by John Burgess, pages 119 and following. See page 134:
Now a Kripke model for modal predicate logic will consist of five components, $\mathcal M = ( X , a , R , D , I )$. Here, as with modal sentential logic, $X$ will be a set of indices [your $W$], $a$ a designated index [your $w_0$], and $R$ a relation on indices [the "accessibility" relation]. As for $D$ and $I$, the former will be a function assigning to each $x \in X$ a set $D_x$, the domain at index $x$, while the latter will be a function assigning to each $x \in X$ and each predicate $F$ a relation $F_x^I$, the interpretation of $F$ at $x$, of the appropriate number of places [emphasis mine]. |
A compass moving on two straight lines | Choose a direction on $a$ and $b$ and set $a_n=\pm OA_n$, $b_n=\pm OB_n$, where the sign is positive if $A_n$ (respectively $B_n$) lies on the positive side of $O$.
As $B_n$ is on the perpendicular bisector of $A_nA_{n+1}$, and $A_{n+1}$ is on the perpendicular bisector of $B_nB_{n+1}$,
we have the recursive relations:
$$
b_n={a_n+a_{n+1}\over2\cos\theta},\quad
a_{n+1}={b_n+b_{n+1}\over2\cos\theta}.
$$
We can then eliminate $b_n$ to find the single recursion:
$$
a_{n+2}=2\cos2\theta\,a_{n+1}-a_n.
$$
This can be solved to yield (notice that I start with $n=0$):
$$
a_n=a_0\cos2n\theta+{a_1-a_0\cos2\theta\over\sin2\theta}\sin2n\theta.
$$
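A quick numerical check that this closed form solves the recursion (a sketch in R; $\theta$, $a_0$, $a_1$ are arbitrary test values of my own):

    theta <- pi / 7
    a <- numeric(12); a[1] <- 1.3; a[2] <- 0.4     # a_0 and a_1
    for (j in 1:10) a[j + 2] <- 2 * cos(2 * theta) * a[j + 1] - a[j]
    n <- 0:11
    closed <- a[1] * cos(2 * n * theta) +
      (a[2] - a[1] * cos(2 * theta)) / sin(2 * theta) * sin(2 * n * theta)
    max(abs(a - closed))                           # ~ 1e-15: the formula matches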
Finally, we can substitute $a_1$ in the above equation with its expression in terms of $a_0$ and $d$ (of course one needs $d\ge a_0\sin\theta$):
$$
a_1=a_0\cos2\theta\pm2\cos\theta\sqrt{d^2-a_0^2\sin^2\theta},
$$
obtaining:
$$
a_n=a_0\cos2n\theta\pm{\sqrt{d^2-a_0^2\sin^2\theta}\over\sin\theta}\sin2n\theta.
$$
The choice of the sign depends on the choice made at the first step, because given $A_0$ we have two possible choices for $B_0$. Notice that this formula also works for $\theta=\pi/2$, even if the original formula didn't.
It is then apparent that the sequence $(a_n)$ takes only finitely many values if and only if $\theta=\pi/k$, with $k\in\mathbb{N}$, $k\ge2$. |
Proof regarding a subgroup of a cyclic group. | Perhaps a change in notation might help.
Here $$H=\{\xi\in G\mid \xi^a=e\}.$$
Let $x\in H$. Then, by definition, $x\in G$ and $x^a=e$, since $x$ is one such $\xi$. |
How to calculate the inter arrival time of Poisson process having mean as binomial random variable? | Suppose you are 'merging' two Poisson processes in this way, one with rate $a$ and one with rate $b$. This might be a model for a counter that has two radioactive sources of different intensities in its range.
Then you have $X \sim \mathsf{Pois}(rate=ap)$ and $Y \sim \mathsf{Pois}(rate=b(1-p))$, so $T = X + Y \sim \mathsf{Pois}(\lambda),$ where $\lambda = ap + b(1-p).$ The interarrival times are then
$\mathsf{Exp}(rate = \lambda)$ and the average interarrival time is $1/\lambda.$
I confess I can't quite visualize your intended process looking at your code.
Here is a brief simulation in R indicating that $T$, based on $\lambda=16$ and $p = .4$, behaves as described.
m = 10^6; lam = 16; p = .4                     # replications and parameters
x = rpois(m, lam*p); y = rpois(m, lam*(1-p))   # the two component counts
t = x + y; mean(t); sd(t)                      # merged count T = X + Y
## 15.99342 # aprx E(T) = 16
## 4.001156 # aprx SD(T) = 4.
The histogram below is the simulated distribution of $T.$
The red dots are exact probabilities from $\mathsf{Pois}(\lambda =16).$ |
Distance between powers of 2 and 3 | Note first that if $3^{2n}-1=2^r$ then $(3^n+1)(3^n-1)=2^r$, so both factors are powers of $2$. The two factors differ by $2$, and the only powers of $2$ differing by $2$ are $2$ and $4$; so $3^n-1=2$, which forces $n=1$.
Now suppose that $3^n-1=2^r$ and $n$ is odd. Now $3^n\equiv -1$ mod $4$, so $3^n-1\equiv 2$ mod $4$ is not divisible by $4$; hence $r=1$ and $n=1$.
Now suppose $3^n=2^{2r}-1=(2^r+1)(2^r-1)$. Both factors must be powers of $3$, but they differ by $2$ and therefore cannot both be divisible by $3$; hence $2^r-1=1$, and only $r=1$ is possible (giving $3=2^2-1$).
The final case is $3^n=2^{k}-1$ where $k$ is odd. Since $2\equiv -1$ mod $3$, the right hand side is $\equiv -2 \equiv 1$ mod $3$, while the left hand side is divisible by $3$ unless $n=0$. So only $k=1$ is possible, with $n=0$ (if permitted).
---
The previous version of this answer was overcomplicated; I was trying to do things in a hurry. |
Fourier Transform Sin & Cos | You got it right until the end. In fact $\frac{1}{2}(\delta (w-w_{o})+\delta (w+w_{o}))=\frac{w}{2w_{o}}(\delta (w-w_{o})-\delta (w+w_{o}))$, because $w\,\delta(w-w_o)=w_o\,\delta(w-w_o)$ and $w\,\delta(w+w_o)=-w_o\,\delta(w+w_o)$. |
Normal distribution governed by a Bernoulli distribution | I was fooled by randomness: after running a quick computer simulation, I got a result extremely close to what I had posted as a comment, i.e. $\require{enclose}\enclose{horizontalstrike}{{\color{red}{\mathrm{Var}(Y)=\sigma_1^2\times p + \sigma_0^2 \times (1-p)}}}.$ @Just_to_Answer pointed out that this was incorrect, since the problem asks for the variance of a mixture distribution - a mistake I confirmed by simply re-running the simulation with different parameters.
There is nothing I can add to the post on the topic by @whuber here, and you can credit him appropriately. So you can take this as an extended comment.
The variance does indeed contain the formula above, plus a factor that accounts for the dispersion of the means:
$$\mathrm{Var}(Y) =\color{red}{ \sigma_1^2 \times p + \sigma_0^2\times (1-p)}\color{black}{+\Big[
\mu_1^2\times p +\mu_0^2\times (1-p) -
\big(\mu_1 \times p + \mu_0 \times(1-p) \big)^2\Big]}$$
And since making the same mistake twice is so human, I ran a simulation again (this time with different settings) to "confirm" the correct equation:
> n = 1e6 # Number of simulations
> p = .7 # Probability of the Bernoulli experiment
> Bern = rbinom(n, 1, p) # The actual simulation
>
> mean_zero = 400 # If Heads, we draw from a N(400, 33):
> sd_zero = 33
>
> mean_one = 14 # if Tails, we draw from a N(14,1):
> sd_one = 1
>
> Y_zero = rnorm(sum(Bern==0),mean_zero,sd_zero)
> Y_one = rnorm(sum(Bern==1),mean_one,sd_one)
> Y = c(Y_zero, Y_one) # And combine the results into a single vector.
>
> var(Y) # Empirical variance
[1] 31639.79
>
> var(Y_one) * p + (1 - p) * var(Y_zero) +
+ (p * mean_one^2 + (1 - p) * mean_zero^2 - (p * mean_one + (1 - p) * mean_zero)^2)
[1] 31614.8 # Calculated variance. |
Which compact (orientable) surfaces are parallelizable? | If $S$ is a parallelizable surface, then it has a flat Riemannian metric -- just choose a smooth global frame and declare it to be orthonormal. If in addition $S$ is compact, Gauss-Bonnet implies that it has Euler characteristic zero. The only compact orientable surface with $\chi(S)=0$ is the torus. |
Real estate problem - local maxima | If the correct answer is $1100$ or $1125$, maybe the function that should be maximized is
$$
\max[(900+25x)(50-x)-75(50-x)].\tag1
$$
I guess that since $x$ units are vacant, the vacant apartments do not need maintenance and repair each month, so only the $(50-x)$ occupied units incur the monthly maintenance cost. Although the question said 'all units', maybe there is a 'mistake'. Assume that equation $(1)$ is the one to be maximized. Hence
$$
\begin{align}
\frac{d}{dx}((900+25x)(50-x)-75(50-x))&=0\\
25(50-x)-(900+25x)+75&=0\\
50x&=25(50)-900+75\\
x&=\frac{25(50)-900+75}{50}\\
&=\frac{17}{2}\\
&=8.5.
\end{align}
$$
Thus, $8$ or $9$ units are vacant, and the rent the real estate office should charge to maximize profit is
$$
900+25(8)=\boxed{\color{blue}{1100}}
$$
or
$$
900+25(9)=\boxed{\color{blue}{1125}}
$$
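As a quick sanity check (a sketch in R; `profit` implements equation $(1)$ directly):

    profit <- function(x) (900 + 25 * x) * (50 - x) - 75 * (50 - x)
    profit(8)   # 43050
    profit(9)   # 43050 -- the two integer candidates tie, since the vertex sits at x = 8.5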
$$\large\color{blue}{\text{# }\mathbb{Q.E.D.}\text{ #}}$$ |
An example for a stable harmonic map which is not a local minimizer | Take the surface of revolution $M\subset \mathbb{R}^3$ obtained by rotating the curve
$$
x^{2n} + y^{2n}=1
$$
($n\ge 2$) around the $x$-axis. On the surface $M$ take the closed curve $C$ (the "waist" of $M$, lying in the plane $x=0$):
$$
y^2+z^2=1
$$
parameterized by its arclength. This curve will give you a stable harmonic map $S^1\to M$ which is not a local minimum of the energy functional. |
Conditional expected value of sum of exponential r.v.s | Your answer is correct.
2) $$ \mathbb E[S_n^2 \mid X_1] = \mathbb E[(S_n-X_1 + X_1)^2 \mid X_1] = \mathbb E[(S_n-X_1)^2 \mid X_1] + 2\,\mathbb E[X_1(S_n-X_1)\mid X_1] +\mathbb E[X_1^2\mid X_1] $$
$$ = \mathbb E\Big[\Big(\sum_{j=2}^n X_j\Big)^2\Big]+ 2X_1\,\mathbb E[S_n-X_1\mid X_1] + X_1^2 = \sum_{2\leq i,j\leq n } \mathbb E[X_iX_j] +2X_1\frac{n-1}{\lambda} + X_1^2 $$
$$ = (n-1)\, \mathbb E[X_2^2] +\frac{(n-1)(n-2)}{\lambda^2} + 2X_1\frac{n-1}{\lambda} + X_1^2 $$
where $\mathbb E[X_2^2] = \operatorname{Var}(X_2) + (\mathbb E X_2)^2 = \frac{1}{\lambda^2}+\frac{1}{\lambda^2} = \frac{2}{\lambda^2}$.
3) $$ \mathbb E[S_n|S_k] = \mathbb E[(S_n-S_k)+ S_k | S_k] = \frac{n-k}{\lambda} + S_k $$
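A simulation sketch in R for part 3 (the parameters $\lambda=2$, $n=5$, $k=2$ and the conditioning window around $S_k\approx 1$ are my own illustrative choices):

    set.seed(2); lam <- 2; n <- 5; k <- 2
    X <- matrix(rexp(5e5 * n, rate = lam), ncol = n)   # rows = iid samples of (X_1,...,X_n)
    Sk <- rowSums(X[, 1:k]); Sn <- rowSums(X)
    mean(Sn[abs(Sk - 1) < 0.01])    # E[S_n | S_k ~ 1], empirically ~ 2.5
    (n - k) / lam + 1               # formula (n-k)/lambda + S_k = 2.5 |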
Is there a bijection between the set of prime ideals of norm $~ q~$, and the set of ring maps to $~F_q~$? | Prime ideals of norm $q$ are in bijection with surjections $R \to \mathbb{F}_q$ up to postcomposition with Frobenius, for exactly the reason you state. I don't know whether your text has a typo or whether you have misread it, but it is not true that prime ideals of norm $q$ are in bijection with the surjections $R \to \mathbb{F}_q$ themselves (without passing to Frobenius orbits). |
Is it possible to define a ring as a category? | A monoid is a category with exactly one object. A homomorphism of monoids is just a functor of the corresponding categories.
A ring is a linear category with exactly one object. A homomorphism of rings is just a linear functor between the corresponding linear categories.
More generally, if $R$ is a commutative ring, then an $R$-algebra is an $R$-linear category with exactly one object, and $R$-algebra homomorphisms correspond to $R$-linear functors.
This offers a compact definition of modules: A left module over $R$ is just a linear functor $R \to \mathsf{Ab}$. Actually this has led to the following more general notion (which has found applications in category theory and algebraic topology): If $C$ is any linear category, then a $C$-module is a linear functor $C \to \mathsf{Ab}$. Homomorphisms of $C$-modules are natural transformations. Thus, we obtain a category of $C$-modules.
Besides, it shows that the category of monoids or rings is actually a bicategory: If $f,g : M \to N$ are homomorphisms of monoids (or rings), then a $2$-morphism $f \to g$ is an element $n \in N$ such that $n f(m) = g(m) n$ for all $m \in M$. |
Maclaurin series and taylor | You know the Maclaurin series for $e^s$? Now you can just plug in $s=0.15t^2$.
Then evaluate the integral as a sum of powers of $t$.
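For instance, assuming the integral in question is $\int_0^x e^{0.15t^2}\,dt$ (an assumption on my part, since the original problem statement isn't shown), this gives
$$e^{0.15t^2}=\sum_{n=0}^{\infty}\frac{(0.15)^n\,t^{2n}}{n!},
\qquad
\int_0^x e^{0.15t^2}\,dt=\sum_{n=0}^{\infty}\frac{(0.15)^n\,x^{2n+1}}{n!\,(2n+1)}.$$ |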
Are two vectors pointing in the same direction regarding a surface? | You can use determinants for that purpose. If you have an $n$-dimensional space, then you have $n-1$ vectors defining $S$. Create an $n\times n$-matrix which has those vectors as the first $n-1$ rows, and the last row is $a$ or $b$ resp. If the determinants of the resulting matrices differ in sign, then $a$ and $b$ point in different directions; if they have the same sign, they point in the same direction (regarding the plane or line $S$).
If you know the Hesse normal form of the plane or line, it is even easier. Then you simply compare the signs of the dot products $n_0\cdot a$ and $n_0\cdot b$, where $n_0$ is the normal vector in the Hesse normal form.
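A minimal sketch in R of the determinant test, assuming a 3-dimensional space with $S$ the plane through the origin spanned by two vectors $u$ and $v$ (the function name and test vectors are my own illustration):

    u <- c(1, 0, 0); v <- c(0, 1, 0)        # S is the xy-plane
    same_side <- function(a, b) {
      sa <- sign(det(rbind(u, v, a)))       # orientation of a relative to S
      sb <- sign(det(rbind(u, v, b)))       # orientation of b relative to S
      sa != 0 && sa == sb
    }
    same_side(c(0, 0, 2), c(1, 1, 5))       # TRUE: both on the +z side of S
    same_side(c(0, 0, 2), c(1, 1, -5))      # FALSE: opposite sides |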
What we can do if the inverse of our function can not be determined explicitly | $$y=x \cos (x)\quad\quad \quad\mbox{ for }\;x\in [0, \frac12]$$
The function is so close to linear that a Taylor series built around $x=0$ is very tempting. Now, using series reversion,
$$x=y+\frac{y^3}{2}+\frac{17 y^5}{24}+\frac{961 y^7}{720}+\frac{116129
y^9}{40320}+\frac{3488503 y^{11}}{518400}+\frac{7935695921
y^{13}}{479001600}+O\left (y^{15} \right)$$ is almost at the level of machine $\epsilon$.
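A quick numerical check of the reversed series (a sketch in R):

    y <- 0.3
    x <- y + y^3/2 + 17*y^5/24 + 961*y^7/720 + 116129*y^9/40320 +
         3488503*y^11/518400 + 7935695921*y^13/479001600
    x * cos(x) - y    # tiny residual: the first omitted term is O(y^15)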
If we use the expansion of $x$ up to $O\left (y^{2n+1} \right)$ in $y-x \cos(x)$, the result is $O\left (y^{2n+3} \right)$.
Similarly, if $T_n$ is the Taylor expansion of $x \cos(x)$ to $O\left (x^{2n+1} \right)$, computing the norm
$$\Phi_n=\int_0^{\frac 12} \big[x \cos(x)-T_n\big]^2\,dx$$ we have the following results
$$\left(
\begin{array}{cc}
n & \log(\Phi_n) \\
0 & -8.2167 \\
1 & -16.393 \\
2 & -26.272 \\
3 & -37.328 \\
4 & -49.290 \\
5 & -61.988 \\
6 & -75.306 \\
7 & -89.161 \\
8 & -103.49 \\
9 & -118.24 \\
10 & -133.37
\end{array}
\right)$$ |
Finding the Distribution of N and a Probability Value | If $N=j$ then exactly $j$ out of $10$ of the observations are less than $x$ and exactly $10-j$ of the observations are at least $x$. Hence $$N\sim \text{Binomial}\Big(10,F(x)\Big)$$ For the second part, notice how $E_i:=\{X_{(i)}<m<X_{(i+1)}\}$ occurs iff exactly $i$ observations are less than $m$ and exactly $10-i$ observations are larger than $m$, where $1\leq i \leq 9$ is a fixed positive integer. Therefore $\mathbb{P}(E_i)={10 \choose i}(1/2)^{10}$ and so $$\mathbb{P}\Big(X_{(2)}<m<X_{(8)}\Big)=\mathbb{P}(E_2)+\dots +\mathbb{P}(E_7)=(1/2)^{10}\sum_{i=2}^7{10 \choose i}$$
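A simulation sketch in R (assuming the $X_i$ are iid from a continuous distribution with median $m$; Uniform(0,1) with $m=1/2$ is my illustrative choice):

    set.seed(1)
    hits <- replicate(1e5, { x <- sort(runif(10)); x[2] < 0.5 && 0.5 < x[8] })
    mean(hits)                      # empirical probability, ~ 0.935
    sum(choose(10, 2:7)) / 2^10     # exact: 957/1024 ~ 0.9346 |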