Almost sure convergence of sequences that are equal in distribution | I don't have a simple counterexample at the moment but you can argue using Skorohod's Theorem: let $(Y_n)$ tend to $0$ in probability but not almost surely. Since $Y_n \to 0$ in distribution, Skorohod's Theorem tells us that there exist random variables $X_1,X_2,\dots$ such that $X_n \to 0$ almost surely and $X_n$ has the same distribution as $Y_n$ for each $n$.
Reference: https://eventuallyalmosteverywhere.wordpress.com/2014/10/13/skorohod-representation-theorem/
A better example: consider $[0,1)$ with Lebesgue measure. Arrange the intervals $[\frac {i-1} {2^{n}},\frac i {2^{n}})$, $1 \leq i \leq 2^{n}$, $n \geq 1$, in a sequence using the 'natural' ordering. Let $Y_1,Y_2,\dots$ be the indicator functions of these intervals. Now form $X_1,X_2,\dots$ by replacing $[\frac {i-1} {2^{n}},\frac i {2^{n}})$ by $[0,\frac 1 {2^{n}})$ for each $i$. Then $X_n \to 0$ almost surely, $Y_n$ does not tend to $0$ almost surely, and $X_n$ has the same distribution as $Y_n$ for each $n$. |
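The contrast in the example above is easy to see numerically (the enumeration helper below is my own sketch): for a fixed point $x$, the 'typewriter' sequence $Y_k(x)$ equals $1$ once per dyadic level, hence infinitely often, while $X_k(x)$ is eventually $0$.

```python
def dyadic_enumeration(N):
    """First N pairs (n, i) in the 'natural' order: level n lists
    the 2**n intervals [(i-1)/2**n, i/2**n), i = 1..2**n."""
    seq, n = [], 1
    while len(seq) < N:
        for i in range(1, 2**n + 1):
            seq.append((n, i))
            if len(seq) == N:
                break
        n += 1
    return seq

x = 0.3                      # an arbitrary sample point of [0, 1)
seq = dyadic_enumeration(1000)
Y = [int((i - 1) / 2**n <= x < i / 2**n) for n, i in seq]  # Y_k = 1 on the k-th interval
X = [int(x < 1 / 2**n) for n, i in seq]                    # X_k = 1 on [0, 1/2**n)

assert sum(Y) >= 5        # x lies in one interval of every level: Y_k(x) = 1 infinitely often
assert sum(X[10:]) == 0   # but X_k(x) = 0 as soon as 2**(-n) < x, so X_k(x) -> 0
```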
How to find eigenvalues from a set of equations of a linear transformation | Actually, you can represent $A$ by a special circulant matrix $J$, namely
\begin{pmatrix}
0 & 1 & 0 &\cdots & 0\\
0 & 0 & 1 &\cdots & 0\\
\vdots & \vdots & \vdots &\ddots & \vdots\\
0 & 0 & 0 & \cdots & 1 \\
1 & 0 & 0 & \cdots & 0
\end{pmatrix}
The eigenvalues of $J$ are obvious: they are the $n$-th roots of unity.
The characteristic polynomial is $f(\lambda)=\lambda^n-1$.
Hope that my short answer may help.
Correction: $A^T=J$, but it doesn't matter because $A \sim A^T$ ($\sim$ means similar). |
Angle definition confusion in Rodrigues rotation matrix | The problem is that `K**2` is squaring the matrix componentwise, not using matrix multiplication. |
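A minimal NumPy sketch of the distinction (function and variable names are mine): Rodrigues' formula $R = I + \sin\theta\, K + (1-\cos\theta)K^2$ needs the matrix product `K @ K`; the elementwise `K**2` is a different matrix and does not produce a rotation.

```python
import numpy as np

def rotation_matrix(axis, theta):
    """Rodrigues' formula R = I + sin(theta) K + (1 - cos(theta)) K^2,
    where K^2 must be the matrix product K @ K, not the entrywise K**2."""
    k = np.asarray(axis, dtype=float)
    k /= np.linalg.norm(k)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

R = rotation_matrix([0, 0, 1], np.pi / 2)      # quarter turn about the z-axis
assert np.allclose(R @ [1, 0, 0], [0, 1, 0])   # x-axis maps to y-axis
assert np.allclose(R @ R.T, np.eye(3))         # R is orthogonal

# The entrywise square is a genuinely different matrix:
K = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
assert not np.allclose(K**2, K @ K)
```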
Convergence almost everywhere | Let $f_n:[0,1]\to\Bbb{R}$ be defined, for $x\ne r_n$, by $f_n(x)=\dfrac{1}{\sqrt{\vert x-r_n\vert}}$.
Clearly, for every $n$ we have
$$
\Vert f_n \Vert_1=\int_0^{r_n}\frac{dx}{\sqrt{r_n-x}}+
\int_{r_n}^1\frac{dx}{\sqrt{x-r_n}}=2(\sqrt{r_n}+\sqrt{1-r_n})\leq 2\sqrt{2}
$$
Now the function $f:[0,1]\to [0,+\infty]$ defined by $f=\sum_{n\geq 1}\frac{1}{n^2}f_n$ belongs to $L^1([0,1])$ since this series converges normally in $L^1([0,1])$. But the fact that $f\in L^1([0,1])$ implies that the series $\sum_{n\geq 1}\frac{1}{n^2}f_n(x)$ is almost everywhere convergent. |
How to formulate that there is no politician who is not ambitious in predicate logic? | It is predicate logic; thus, we need predicates: $P(x)$ for "... is a Politician" and $A(x)$ for "... is Ambitious".
The formula will be :
$\lnot \exists x \ (P(x) \land \lnot A(x))$.
The formula is equivalent to $\forall x \ (P(x) \to A(x))$, which reads:
"Every Politician is Ambitious". |
Fundamental lemma for variational calculus in dual space | Let $u \in E$. For every function ${\theta} \in {\mathcal{C}}_{c}^{\infty } \left({\mathbb{R}}^{n} , \mathbb{R}\right)$ one has ${\theta} u \in {\mathcal{C}}_{c}^{\infty } \left({\mathbb{R}}^{n} , E\right)$
and
$$0 = \int_{{\mathbb{R}}^{n}}^{}f \left(x\right) \left[{\theta} \left(x\right) u\right] d x = \int_{{\mathbb{R}}^{n}}^{}{\theta} \left(x\right) f \left(x\right) \left[u\right] d x$$
It follows that $f \left(x\right) \left[u\right] = 0$ a.e.
Assuming $E$ is separable, let ${\left({u}_{n}\right)}_{n \in \mathbb{N}}$ be a countable dense family in $E$. Applying the above to each $u_n$ and taking the union of the corresponding null sets, it follows that $f \left(x\right) \left[{u}_{n}\right] = 0$ for all $n$
and all $x$ outside a set of measure $0$; by density, $f \left(x\right) = 0$ a.e. |
How do you write 5000 in scientific notation/standard form? | How accurately have you measured?
If you know that your answer is correct to the nearest whole number, then you should write $5.000 \times 10^3$
If you know that your answer is correct to the nearest ten, then you should write $5.00 \times 10^3$
If you know that your answer is correct to the nearest hundred, then you should write $5.0 \times 10^3$
If you know that your answer is correct to the nearest thousand, then you should write $5 \times 10^3$
If you know that your answer is correct to the nearest 0.1, then you should write $5.0000 \times 10^3$ |
Finding Laurent series in annulus $1<|z|<2$ | Far more simply, observe that $$\frac{1}{(z-1)(z-2)}=\frac{(z-1)-(z-2)}{(z-1)(z-2)}=\frac1{z-2}-\frac1{z-1}.\tag{$\star$}$$
Note that for $z\neq0,2,$ we can write $$\frac1{z-2}=-\frac1{2-z}=-\frac12\cdot\cfrac1{1-\left(\frac{z}{2}\right)}$$ and $$\frac1{z-2}=\frac1{z}\cdot\cfrac1{1-\left(\frac{2}{z}\right)}.$$
Now, one of these can be expanded as a multiple of a geometric series in the disk $|z|<2,$ and the other can be expanded as a multiple of a geometric series in the annulus $|z|>2$. That is, we will use the fact that $$\frac1{1-w}=\sum_{k=0}^\infty w^k$$ whenever $|w|<1$. You should figure out which one works in the disk $|z|<2,$ as that is the relevant one.
Likewise, we can rewrite $\frac1{z-1}$ in two similar forms, one of which is expandable in $|z|<1$ and one of which is expandable in $|z|>1.$ You should figure out which one works in the annulus $|z|>1,$ as that is the relevant one.
Using $(\star)$ with the expansions you found above will give you the desired Laurent series. |
Letter combinations in two subsets of a set of size n | As noted by barak manos, a set where elements may repeat is a multi-set.
Let's say that in the multi-set $M$ there are $k_1$ occurrences of symbol $a_1$; ...; $k_n$ occurrences of symbol $a_n$.
Step 1: Count all sub-multi-sets of $M$. There are $k_1+1$ choices for how many times $a_1$ occurs in our sub-multi-set (since $0$ up to $k_1$ are all possibilities); similarly for $a_2, \dots, a_n$. So there are $(k_1+1)\cdot (k_2+1)\cdots (k_n+1)$ sub-multi-sets of $M$.
Step 2: Count $2$-set partitions of $M$. Each sub-multi-set in step 1, and its complement form a two-set partition, except that we exclude the pair $\{\emptyset,M\}$ since the subsets of a partition must be nonempty. Also in the case that all $k_i$ are even, then there is one sub-multi-set that is self-complementary (note also that in this case the total number of sub-multi-sets in step 1 will be odd).
So we have a total of $\lceil \frac{(k_1+1)\cdot (k_2+1)\cdots (k_n+1)-2}{2}\rceil$ partitions. (The outer delimiters are the ceiling function; the subtracted $2$ in the numerator gets rid of the sub-multi-sets $\emptyset$ and $M$; and the ceiling function takes care of the case where there is one self-complementary sub-multi-set.)
In the example provided in the original problem, the $k$-values are $2,2,1$. So the above gives $\lceil\frac{3\cdot 3\cdot 2-2}{2}\rceil=8$ (which matches the list given by OP, although the number is then miscounted/misreported as $7$ in OP's post). |
Questions about absolutely continuous function | You're overthinking this. Let $\varepsilon>0$, and choose the corresponding $\delta$. Then for any $\eta<\delta$ such that $a+\eta\leqslant b$, we have
$$|f(a+\eta)-f(a)|<\varepsilon, $$
and hence $f$ is continuous at $a$. $f$ is continuous at $b$ by a similar argument. |
Collatz Conjecture Algorithm | I did some "independent and unpaid research" on this (much against all advice an undergrad student in mathematics receives regarding this particular problem) and arrived at the following:
The Collatz conjecture is true if and only if all positive integers of the forms $4r+3$ or $8r+3$, $r$ being odd, eventually reach lower values by iterating the Collatz function.
So indeed, you may restrict your attention to odd numbers, specifically to those of the form I just mentioned, and drop the computations when they reach lower values.
(My analysis can be pushed even further to filter away more odd numbers that will reach lower points, but I saw no clear pattern in my analysis, so I paused working on it.)
A short version of my proof of the above statement: We just show that all other positive integers eventually reach lower values. For even numbers this is clear. For odd numbers, write them in the form $2^kn+1$, $n$ odd and $k>0$. Iterate the Collatz function. For even exponents $k$, 3 iterations will be enough (details left to the reader). For odd exponents other than 1, 3 iterations will again suffice. Thus the conjecture is true iff all positive integers of the form $2n+1$, $n$ odd, reach lower values.
Now, substitute $n$ by $2^xy+1$, $y$ odd and $x>1$, repeat the above analysis.
As for what happens with $4r+3$ and $8r+3$, $r$ odd…I realized that I probably wouldn't be able to finish any kind of proof with this, but that I could go on forever with these substitutions…
Also, as all others have said, please make sure that all integers less than those that you consider have already been verified, otherwise you might fool yourself (even though I feel it wouldn't matter in the long run, but it could cause some delays in finding the truth in those scenarios…). |
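The reduction above can be spot-checked in code (a sketch of mine, not the poster's program): numbers $\equiv 1 \pmod 4$ drop below their starting value in exactly three steps of the basic Collatz map, while numbers of the excluded forms, such as $27 = 8\cdot 3+3$ with $r=3$ odd, can take much longer.

```python
def steps_to_drop(m):
    """Number of Collatz iterations (n -> 3n+1 if odd, n/2 if even)
    until the value first falls below the starting value m."""
    n, steps = m, 0
    while n >= m:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
        if steps > 10**6:          # safety cutoff; never hit in this range
            raise RuntimeError(f"{m} did not drop")
    return steps

# m = 4k+1  ->  12k+4  ->  6k+2  ->  3k+1 < m: exactly three steps.
assert all(steps_to_drop(m) == 3 for m in range(5, 5000, 4))

# A number of the form 8r+3 with r odd can take far longer, e.g. 27:
assert steps_to_drop(27) > 50
```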
How to compute Legendre symbol $\Bigl(\frac{234987}{9086}\Bigr)$? | It seems that the intention of this problem is to use the Kronecker symbol. It generalizes the Jacobi and Legendre symbols so you can evaluate $\left(\frac{a}{n}\right)$ for all $n\in\Bbb Z$. If $n = u\cdot p_1^{e_1}p_2^{e_2}\cdots p_k^{e_k}$ is the prime factorization of $n$ with $u = \pm 1$ (a unit), then
$$
\left(\frac{a}{n}\right) := \left(\frac{a}{u}\right)\prod_{i = 1}^k\left(\frac{a}{p_i}\right)^{e_i},
$$
where if $p_i$ is odd, then $\left(\frac{a}{p_i}\right)$ is the Legendre symbol, and
\begin{align*}
\left(\frac{a}{1}\right)&:= 1,\\
\left(\frac{a}{-1}\right)&:= \begin{cases}-1, \quad a < 0\\ 1, \quad a\geq 0\end{cases},\\
\left(\frac{a}{2}\right)&:= \begin{cases}0, \quad a\in 2\Bbb Z\\ 1, \quad a\equiv \pm 1\mod 8\\ -1,\quad a\equiv\pm 3\mod 8\end{cases},\\
\left(\frac{a}{0}\right)&:=\begin{cases}1, \quad a = \pm 1\\ 0, \quad a\neq\pm 1\end{cases}.
\end{align*}
It also satisfies the following reciprocity law:
Suppose $n = 2^e n'$, $m = 2^f m'$ where $n', m'\in 2\Bbb Z + 1$ (for $n = 0$, $n' = 1$), and let $n^* = (-1)^{(n' - 1)/2}n$ (with $m^*$ defined analogously). Then if $n\geq 0$ or $m\geq 0$, we have
$$
\left(\frac{n}{m}\right) = \left(\frac{m^*}{n}\right) = (-1)^{\left(\frac{n'-1}{2}\right)\left(\frac{m'-1}{2}\right)}\left(\frac{m}{n}\right).
$$
So the Kronecker symbol shares many properties (which can be found at the wikipedia page) with the Legendre and Jacobi symbols (under certain restrictions), but one needs to be a bit careful, as certain things are different. |
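A compact Python sketch of the symbol (my own implementation of the definition above, reducing the odd part with the standard Jacobi-symbol reciprocity loop) evaluates the symbol in the title to $-1$:

```python
def kronecker(a, n):
    """Kronecker symbol (a/n) for arbitrary integer n."""
    if n == 0:
        return 1 if abs(a) == 1 else 0
    if n < 0:
        return (-1 if a < 0 else 1) * kronecker(a, -n)
    # split off the power of 2 in n and use the (a/2) rule
    e = 0
    while n % 2 == 0:
        n //= 2
        e += 1
    if e and a % 2 == 0:
        return 0                      # gcd(a, n) even => symbol is 0
    result = 1
    if e % 2 == 1 and a % 8 in (3, 5):
        result = -1                   # (a/2) = -1 iff a = +-3 (mod 8)
    # Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity
    a %= n
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result      # (2/n) factor
        a, n = n, a                   # reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0    # a common factor remained => 0

assert kronecker(234987, 9086) == -1
# agrees with the Legendre symbol for odd primes, e.g. mod 7:
assert [kronecker(a, 7) for a in (1, 2, 3)] == [1, 1, -1]
```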
How do I prove $\int_0^\pi\frac{(\sin nx)^2}{(\sin x)^2}dx = n\pi$? | If $I_n=\int_0^\pi\dfrac{\sin^2nx}{\sin^2x}dx$
$I_0=0,I_1=\pi$
$$I_{m+1}-I_m=\int_0^\pi\dfrac{\sin^2(m+1)x-\sin^2mx}{\sin^2x}dx=\int_0^\pi\dfrac{\sin(2m+1)x}{\sin x}dx=J_m\text{(say)}$$
Now $$J_{r+1}-J_r=\int_0^\pi\dfrac{\sin(2r+3)x-\sin(2r+1)x}{\sin x}dx=2\int_0^\pi\cos\bigl(2(r+1)x\bigr)\ dx$$
For $r+1\ne0,$ $$J_{r+1}-J_r=\cdots=0$$
For $r\ge0,$
$$\implies J_r=J_0$$
$$J_0=I_1-I_0=\pi$$
For $m\ge0,$
$$I_{m+1}-I_m=J_0=\pi$$
Can you take it from here? |
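The closed form $I_n=n\pi$ that this telescoping yields is easy to sanity-check with a midpoint rule (pure-Python sketch of mine; the integrand extends continuously to $0$ and $\pi$):

```python
import math

def I(n, steps=20000):
    """Midpoint-rule approximation of the integral of sin^2(nx)/sin^2(x) over [0, pi]."""
    h = math.pi / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h          # midpoints avoid the removable singularities at 0, pi
        total += (math.sin(n * x) / math.sin(x)) ** 2
    return total * h

for n in range(1, 6):
    assert abs(I(n) - n * math.pi) < 1e-3
```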
Transform line equation in one co-ordinate system X'Y'Z' into XYZ | There is a subtlety you are missing: $z^\prime$ is a parameter. The line is formed by varying $z^\prime$.
So, parameterize the line more generally as a vector-valued function of some parameter, $t \in (-\infty,+\infty)$. Then $\mathbf{P} \equiv \mathbf{P}(t)$ which can be represented in the primed basis as $\mathbf{P}(t) = x^\prime(t)\mathbf{x^\prime} + y^\prime(t)\mathbf{y^\prime} + z^\prime(t)\mathbf{z^\prime}$. Let $z^\prime(t) = t$, so that $x^\prime(t) = A_0 + A_1t$, $y^\prime(t) = B_0 + B_1t$, and $z^\prime(t) = t$. Now, if you know the transformation $(x^\prime,y^\prime,z^\prime) \rightarrow (x,y,z)$, all you have to do is apply it to $(x^\prime(t),y^\prime(t),z^\prime(t))$ to obtain $(x(t),y(t),z(t))$, which will have a similar form ($x(t) = a_0 + a_1t$, $y(t) = b_0 + b_1t$, and $z(t) = c_0 + c_1t$), which you can write in a form similar to the equation you have above in the primed basis.
$$\begin{pmatrix}
P_{x}\\
P_{y}\\
P_{z}\end{pmatrix} =
\begin{pmatrix}
a_{0}\\
b_{0}\\
c_{0}
\end{pmatrix} + \begin{pmatrix}
a_{1}\\
b_{1}\\
c_{1}
\end{pmatrix} t
$$ |
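A small NumPy illustration of the procedure above (the particular rotation is my own stand-in for a known primed-to-unprimed map): since the transformation is linear, applying it to the line $\mathbf{P}(t)=\mathbf{P}_0+\mathbf{P}_1 t$ amounts to transforming the two coefficient vectors.

```python
import numpy as np

# Example transformation from primed to unprimed coordinates: a rotation
# about the z-axis by 30 degrees (any known linear map R would do).
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Line in the primed frame: x' = A0 + A1 t, y' = B0 + B1 t, z' = t
A0, A1, B0, B1 = 1.0, 2.0, -1.0, 0.5
P0 = np.array([A0, B0, 0.0])   # constant part  (a0, b0, c0) in the primed frame
P1 = np.array([A1, B1, 1.0])   # direction part (a1, b1, c1) in the primed frame

a, b = R @ P0, R @ P1          # transformed coefficient vectors

# Transforming any point of the line gives the same result:
for t in (-2.0, 0.0, 3.5):
    assert np.allclose(R @ (P0 + P1 * t), a + b * t)
```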
Does $\Delta u=0$ for $u:\Bbb{R}^n\to \Bbb{R}$ imply that $u$ must be a hyperplane? | No: consider the example $u(x,y)=x^2-y^2$. We see $\Delta u=0$, but if you plot the graph you don't get a hyperplane in $\mathbf{R}^3$. |
Show that if $x_n ≥ -1$ for all n $\in \mathbb{N}$ and $ \lim_{n \to \infty}(x_n)=0 $ | Here's another approach. Since $x_n>-1$ and $0<1/p\leq1$, you can use Bernoulli's inequality:
$$(1+x_n)^{1/p}\leq1+\frac{x_n}{p}$$
If $x_n\geq0$, one obtains
$$1\leq(1+x_n)^{1/p}\leq 1+x_n/p$$
On the other hand, if $-1<x_n<0$, one has
$$1-|x_n|<(1-|x_n|)^{1/p}\leq1-|x_n|/p$$
Consequently, one takes the limit $n\rightarrow\infty$ and obtains 1. |
How to integrate $(x^2 - 1)/(x^2 + 1)$? | Hint:
$$\int\frac{x^2-1}{1+x^2}dx=\int\left(1-\frac2{1+x^2}\right)dx=\ldots$$ |
the number of integer solutions to $y^p = x^2 +4$ | As you noted, you can prove that $x$ is odd and hence $\gcd(x+2i,x-2i) = 1$, and now you get $4i = a^p-b^p = (a-b) \sum_{k=0}^{p-1} a^k b^{p-1-k}$ for some Gaussian integers $a,b$. Immediately we get $|a-b| \in \{1,2,4\}$. I think it should now be possible to slowly eliminate the cases one by one for sufficiently large $|a|$. |
Center and axis of Quadratic Surface | The Hessian matrix $H$ of second partials picks out the quadratic form terms. We can name a column vector $p$ and write your polynomial as
$$ \frac{1}{2} x' H x + p' x + d, $$
where $x', p'$ are the transposes of the column vectors $x,p.$
If $H$ is invertible there is a center. Written as a column vector, the gradient of the polynomial is
$$ H x + p. $$ The center is at $- H^{-1}p.$ Indeed, if we write
$$ x = y - H^{-1} p, $$ your polynomial comes out to
$$ \frac{1}{2} y' H y + \left( d - \frac{1}{2} p' H^{-1} p \right) $$
At this point you still have an eigenvector problem to find an axis.
If $H$ is not invertible various things may happen... |
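A quick numerical sketch of the recipe above (the example polynomial is mine): with the quadric written as $\frac12 x' H x + p' x + d$, the center $-H^{-1}p$ is where the gradient $Hx+p$ vanishes, and candidate axis directions come from the eigenvectors of $H$.

```python
import numpy as np

# f(x, y) = x^2 + 2 y^2 - 2 x - 8 y + 1  =  (1/2) x' H x + p' x + d
H = np.array([[2.0, 0.0], [0.0, 4.0]])   # Hessian of f
p = np.array([-2.0, -8.0])
center = -np.linalg.solve(H, p)          # center = -H^{-1} p
assert np.allclose(center, [1.0, 2.0])
assert np.allclose(H @ center + p, 0)    # the gradient vanishes at the center

# Axis directions: eigenvectors of H
eigvals, eigvecs = np.linalg.eigh(H)
assert np.allclose(sorted(eigvals), [2.0, 4.0])
```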
Why does $K_{\chi(H)}(|H|)$ contain the graph H? | How to find a copy of $H$ in $K_{\chi(H)}(|H|)$:
Label each of the vertex classes of $K_{\chi(H)}(|H|)$ as $V_{1},...,V_{\chi(H)}$. Start by coloring the vertices of $H$ with $\chi(H)$ colors (a proper coloring), i.e. colors $\{1, 2, 3, ..., \chi(H)\}$. Map each color to a vertex class, and map each vertex that is colored with that color to a distinct vertex in the class. I.e. every vertex that is colored $i$ gets mapped to a distinct vertex in $V_{i}$. This can be done since $H$ can be colored with $\chi(H)$ colors and no more than $|H|$ vertices get mapped to any one class $V_{i}$. Now, the vertices that you've mapped to form a complete $\chi(H)$-partite graph. So every edge possible is there (except edges between vertices of the same color), but you know in $H$ vertices of the same color did not have an edge between them to begin with. |
For every irreducible representation V of G, dimV $ \leq$ g/a. | I think you need the following ingredients for the proof:
For every subgroup $H \subseteq G$ of $G$ and every irreducible representation $V$ of $G$ there exists an irreducible representation $V'$ of $H$ such that $V$ is a subrepresentation of the induced representation $Ind^G_H V'$ (Hint: Restrict $V$ to $H$ and take $V'$ an irreducible quotient of this restriction, then use Frobenius reciprocity.)
The dimension of any induced representation $Ind^G_H V'$ is $\frac{|G|}{|H|} \dim V'$.
The irreducible representations of finite abelian groups all have dimension $1$. |
If $f : (0, \infty) \to \mathbb{R}^n$ is continuous on $[a, \infty) , \forall a>0$ then continuous on $(0, \infty)$ | It's true. You need to show continuity at any point $b$ say in $(0,\infty)$. Pick $a=b/2$ and from what you have assumed, you are there. Which perhaps justa neater way of writing what you have already done.
NB: if altered to 'then $f$ is continuous on $[0,\infty)$' it becomes false, which is perhaps what the question should have read in the first place. Similarly, if it had said 'then $f$ is uniformly continuous on $[0,\infty)$'. |
Finding the mass of an wire using a line integral | Solving the system $
\begin{cases}
z=2-x^2-y^2\\
z=x^2
\end{cases}$
yields $\;x^2=2-x^2-y^2\; \Rightarrow\; x^2+\frac{y^2}{2}=1.$
In other words the projection of the curve in the $xy$ plane is the ellipse parametrized by
$$
\begin{cases}
x(t)=\cos t\\
y(t)=\sqrt{2}\sin t
\end{cases}
$$
with $t\in [0,2\pi]$. So $C$ can be parametrized by
$$
\begin{cases}
x(t)=\cos t\\
y(t)=\sqrt{2}\sin t\\
z(t)=x^2(t)=\cos^2t
\end{cases}
$$
It follows that the mass equals
$$
m=\int_{(0,1,0)}^{(1,0,1)} xy\; dr = \int_{t_1}^{2\pi} \sqrt{2}\sin t \cos t \sqrt{\sin^2t+2\cos^2t+4\sin^2t\cos^2t}\; dt
$$
where $(x(t_1),y(t_1),z(t_1))=(0,1,0)$. Such a $t_1$ does not exist. Also, the parametrization you proposed does not satisfy the equations. Are you sure about the equations in the question? |
Asymptotics of the large eigenvalues of a differential equation (WKB) approximation | Not a full solution, but key steps.
The energy ($E=\lambda^2$) you search for is defined at the turning point ($x=\alpha$):
$E=\alpha^4+\alpha^2$
You are using the semiclassical approximation and $\lambda^2\gg1$ in your notations. Thus, we can drop $x^2$ and work only with $x^4$ to get the main asymptotic term: $E\approx\alpha^4$
Now, let's consider Bohr–Sommerfeld quantization conditions:
$$I(\alpha)=\int_{-\alpha}^{\alpha}\sqrt{\lambda^2-x^2-x^4}dx\approx\int_{-\alpha}^{\alpha}\sqrt{\alpha^4-x^4}dx$$$$=\alpha^3\int_{-1}^{1}\sqrt{1-t^4}dt=\frac{\alpha^3}{2}\int_0^1\sqrt{1-u}u^{-\frac{3}{4}}du$$
$$I(\alpha)\approx\frac{\alpha^3}{2}B\,(\frac{3}{2};\frac{1}{4})=\frac{\alpha^3}{2}\frac{\Gamma(\frac{3}{2})\Gamma(\frac{1}{4})}{\Gamma(\frac{7}{4})}=\frac{\alpha^3}{3}\frac{\sqrt{\pi}\,\Gamma(\frac{1}{4})}{\Gamma(\frac{3}{4})}=\frac{\alpha^3}{3}\frac{\sin\frac{\pi}{4}\,\Gamma\,^2(\frac{1}{4})}{\sqrt{\pi}}$$
At the given level of approximation $E\approx\alpha^4\approx\lambda^2$, and $\alpha$ is defined by$$I(\alpha)\approx\frac{\alpha^3}{3}\frac{\sin\frac{\pi}{4}\,\Gamma\,^2(\frac{1}{4})}{\sqrt{\pi}}\approx(n+1/2)\pi$$
Please check the calculations.
For the reference I would recommend "Mathematical Methods for Physics and Engineering" of K.F. RILEY, M.P. HOBSON and S. J. BENCE.
You may look through section 25.7 (WKB method), pp. 900-902, where a similar example is reviewed in detail. |
In a UFD in which each maximal ideal is principal, ideal generated by two relatively prime elements is whole ring | Suppose the ideal $(u_1,u_2)$ is proper. Then it is contained in a maximal ideal, which by hypothesis is principal, generated by some $d$. Thus $d$ divides both $u_1$ and $u_2$; since $\gcd(u_1,u_2)=1$, we deduce that $d$ is a unit, a contradiction. Hence $(u_1,u_2)$ is the whole ring. |
$2^n=7x^2+y^2$ solutions | Reducing mod $7$ gives $2^n \equiv y^2 \pmod 7$, and the powers of $2$ mod $7$ are $1$, $2$ and $4$, so $y^2$ can only take these values. Only $1$ and $4$ are acceptable if $y$ is an integer, that is $y=\pm 1$ or $y=\pm 2$. With these values of $y$ some powers $n$ give integers for $x$, for example $y=2$, $x=6$ for $n=8$, or $x=y=2$ for $n=5$.
This is the reason that there exists only a single pair $x$, $y$ for every $n$. |
Line Search for Nonlinear System of Equations with Newton-Raphson | I think what I have in my edit is correct; it seems to work! Any check would be great, of course.
I have written in my notes that we can use Newton-Raphson to solve $G(s_i) = 0$ in a similar fashion as we solve the original problem, although this requires $G'(s_i)$ and the tangent stiffness $N'$.
$$G(s_{i+1}) = G(s_i)+G'(s_i)\Delta s_i = 0$$
$$\Delta d_i^T (F^{n+1}-N(d_i+s_i\Delta d_i)) - \Delta d_i^TN'(d_i+s_i\Delta d_i)\Delta d_i \Delta s_i = 0$$
$$\Delta s_i = \dfrac{\Delta d_i^T (F^{n+1}-N(d_i+s_i\Delta d_i))}{\Delta d_i^TN'(d_i+s_i\Delta d_i)\Delta d_i}$$
This can be initialized with $s_i = 1$ and stopped when $|G(s_i)|\leq .5|G(0)|$. *
*As recommended by H. Matthies and G. Strang, "The solution of nonlinear F.E. equations," IJNME, Vol. 14, 1613-1626 (1979). |
Is Jacobi Theta function same as Heat Kernel ? How to derive Jacobi Theta from Heat Kernel? | I recommend the book A Brief Introduction to Theta Functions by Richard Bellman reprinted by Dover Publications. The first expansion you wrote is the Fourier series of the theta function. Using the heat kernel you sum over a lattice of periods to match the periodicity of the theta function. Something like $\sum_{k\in \mathbb{Z}} \Phi (x+2\pi k,t)$. Notice the Fourier transform connection between the two exponentials in your two equations. The summation over a period lattice is equivalent to convolution by a periodic sum of shifted Dirac delta functions (a Dirac comb), and the Fourier transform of this is the point-wise product of the Fourier transform of the heat kernel with another Dirac comb. |
sup and inf of $A=\left\{\frac{m^4+2n^2}{2m^2-m^2n+n^2}:m,n \in\Bbb N\right\}$ | For $n=3$ we have $\frac{m^4+2n^2}{2m^2-m^2n+n^2}\to-\infty$ as $m\to+\infty$, hence $\inf A=-\infty$. |
If every cyclic subgroup of a group G be normal in G, prove that every subgroup of G is normal in G. | Let $H$ be a subgroup of $G$, $h \in H$ and $g \in G$. You have to show that $g^ {-1}hg \in H$. Since $H$ is a subgroup, the cyclic group $\langle h \rangle$ generated by $h$, is a subgroup of $H$ and hence of $G$. Your premise tells us that $\langle h \rangle$ is normal, so $g^ {-1}hg \in \langle h \rangle$, since of course $h \in \langle h \rangle$. Hence $g^ {-1}hg=h^i$ for some $i$, and $h^i \in H$. |
Prove that $\sup{\alpha A} = \alpha \sup{A}$ | Induction has nothing to do with this.
Hint: the map $x\mapsto \alpha x$ is increasing if $\alpha>0$.
Further hints as spoilers.
Spoiler 1
Let $s=\sup A$; then, for each $x\in A$, $x\le s$; therefore, for each $x\in A$, $\alpha x\le\alpha s$. Hence $\alpha s$ is an upper bound for $\alpha A$
Spoiler 2
Suppose $t$ is an upper bound for $\alpha A$. Then $\alpha^{-1}t$ is an upper bound for $A$ (why?). Therefore $\alpha^{-1}t\ge s$ and so $t\ge \alpha s$. |
Proving $[x \in A_i \land x \in (\forall 1\le k\le i-1: {A_k}^c)] \land [x \in A_j \land x \in (\forall 1\le k\le j-1: {A_k}^c)]$ =$\emptyset$ | $$B_k=A_k\backslash \bigcup_{i=1}^{k-1}A_i\subset A_k.$$
Then, if $i\neq j$, the fact that $B_i\cap B_j=\emptyset$ follows. Indeed, if $i<j$, then $$B_j=A_j\backslash \underbrace{\bigcup_{k=1}^{j-1}A_k}_{\supset B_i}.$$ |
Differentiating condition for two infinite graphs | The first one contains a cycle, but the second one does not. |
Closed form for duration formula | $$1 + \cfrac1r - (1+r) \cdot \cfrac{cT(1+r)^{-T-1} +(1+r)^{-T} -rT(1+r)^{-T-1}}{c(1-(1+r)^{-T}) +r(1+r)^{-T}} \\
= 1 + \cfrac1r - (1+r) \cdot \cfrac{cT(1+r)^{-T-1} +(1+r)^{-T} -rT(1+r)^{-T-1}}{c-c(1+r)^{-T} +r(1+r)^{-T}} \cdot \cfrac{(1+r)^T}{(1+r)^T} \\
= 1 + \cfrac1r - (1+r) \cdot \cfrac{cT(1+r)^{-1} +1 -rT(1+r)^{-1}}{c(1+r)^T-c +r} \\
= 1 + \cfrac1r - \cfrac{cT +(1+r) -rT}{c(1+r)^T-c +r} \\
= 1 + \cfrac1r + \cfrac{T(r-c) -(1+r) }{c((1+r)^T-1) +r}$$ |
$\frac{\phi(m)}{m}$ is dense in $[0,1]$ | This follows from a much more general theorem.
Theorem: Let $(x_n)$ be a sequence of positive numbers such that $x_n\to 0$ as $n\to\infty$ but $\sum_{n=1}^\infty x_n=\infty$. Then for any $0\leq a<b\leq \infty$, there is a finite set $A\subset\mathbb{N}$ such that $\sum_{n\in A}x_n\in(a,b)$.
To solve your problem from this theorem, let $x_n=-\log(1-1/p_n)$, where $p_n$ is the $n$th prime. Then $(x_n)$ satisfies the hypotheses of the theorem: $x_n\to 0$ since $p_n\to \infty$, and $\sum x_n$ diverges since $\log(1+x)\approx x$ for $x$ small and $\sum 1/p_n$ diverges. Now just apply the theorem with $a=-\log\beta$ and $b=-\log\alpha$ and let $m=\prod_{n\in A} p_n$.
To prove the theorem, choose $N$ such that $x_n<b-a$ for all $n\geq N$. Let $M\geq N$ be minimal such that $\sum_{n=N}^Mx_n>a$ (such an $M$ exists since $\sum_{n=N}^\infty x_n=\infty$). Then $$\sum_{n=N}^Mx_n= x_M+\sum_{n=N}^{M-1}x_n<(b-a)+a=b$$ by minimality of $M$. Thus $A=\{N,N+1,\dots,M\}$ works. |
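The proof is constructive and can be turned into a short program (my sketch; it uses a greedy variant of the argument, with exact arithmetic via `fractions`): multiply factors $1-1/p$ over successive primes, skipping any factor that would drop the product to $\alpha$ or below, until the product lands in $(\alpha,\beta)$.

```python
from fractions import Fraction

def primes():
    """Naive incremental prime generator (trial division)."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n**0.5) + 1)):
            yield n
        n += 1

def find_m(alpha, beta):
    """Return (m, phi(m)/m) with alpha < phi(m)/m < beta, for 0 < alpha < beta < 1."""
    prod, m = Fraction(1), 1
    for p in primes():
        if prod < beta:                  # landed inside (alpha, beta): done
            return m, prod
        f = Fraction(p - 1, p)
        if prod * f > alpha:             # accept p only if we stay above alpha
            prod *= f
            m *= p                       # m stays squarefree, so phi(m)/m = prod

m, ratio = find_m(Fraction(1, 2), Fraction(3, 5))
assert (m, ratio) == (15, Fraction(8, 15))   # phi(15)/15 = (2/3)(4/5) = 8/15

m, ratio = find_m(Fraction(3, 10), Fraction(31, 100))
assert Fraction(3, 10) < ratio < Fraction(31, 100)
```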
How prove that $10(a^3+b^3+c^3)-9(a^5+b^5+c^5)\le\dfrac{9}{4}$ | EDIT: The original proof contained an error. It is (hopefully) fixed now.
Consider the function $f(x) = 10x^3 - 9x^5$. Then our goal is to show that $f(a) + f(b) + f(c) \leq 9/4$.
We will need the following two claims
Claim1: $f(x) + f(1-x) \leq 9/4$ for all $x \in [0,1]$.
Proof: A straightforward calculation shows that the maximum is attained at $x = 0.5 \pm \frac{1}{2\sqrt{3}}$ and equals $9/4$.
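Claim 1 can be double-checked numerically with a grid search (sketch mine):

```python
f = lambda x: 10 * x**3 - 9 * x**5
N = 10**5
best = max(f(k / N) + f(1 - k / N) for k in range(N + 1))
assert best <= 9 / 4 + 1e-12       # f(x) + f(1-x) never exceeds 9/4 on [0, 1] ...
assert abs(best - 9 / 4) < 1e-6    # ... and the bound is attained, at x = 1/2 +- 1/(2*sqrt(3))
```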
Claim2: For all $0 \leq a \leq b$ such that $a+b \leq 2/3$ we have $f(a)+f(b) \leq f(a+b)$
Proof: The claim is trivial if $a=0$. Therefore, we shall assume that $a>0$. We need to prove that
$$
10a^3 - 9a^5 + 10b^3 - 9b^5 \leq 10(a+b)^3 - 9(a+b)^5.
$$
Opening the parenthesis on the RHS, and reducing we get that the above is equivalent to
$$
0 \leq 10(3a^2b + 3ab^2) - 9(5a^4b + 10a^3b^2 + 10a^2b^3 + 5ab^4)
$$
Since $a,b > 0$, we can divide by $15ab$, and so, by moving sides it is enough to show that
$$
3(a^3 + 2a^2b + 2ab^2 + b^3) \leq 2(a + b).
$$
Adding $3(a^2b+ab^2)$ to both sides we get
$$
3(a+b)^3 \leq (2+3ab)(a + b).
$$
Since $a,b \geq 0$, we can divide by $a+b$ to get
$$
3(a+b)^2 \leq 2+3ab,
$$
or equivalently
$$
3a^2+3b^2 + 3ab \leq 2
$$
It is easy to check that the inequality holds if $a+b \leq 2/3$.
We now turn to the proof. Let us assume that $a \leq b \leq c$.
Since $a+b \leq 2/3$, by Claim2 we have $f(a)+f(b) \leq f(a+b)$, and therefore, by Claim1 we get $f(a)+f(b)+f(c) \leq f(a+b) + f(c) = f(1-c) + f(c) \leq 9/4$, as required. |
A proof about Automorphism in congruence class | To show that the map is injective and surjective is equivalent to showing that the map has a two-sided inverse.
The extended Euclidean algorithm yields that there are numbers $m', n'$ such that
$$
m m' + n n' = 1.
$$
Consider the map $G : Z_{n} \to Z_{n}$ given by $G([b]) = m' [b]$. Then for all $a$ one has
$$
G \circ F([a]) = G(F([a])) = G(m [a]) = m'm [a] = (1 - n' n) [a]
= [a],
$$
as $n [a] = [0]$. Similarly $F \circ G([b]) = [b]$ for all $b$. |
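In code (a small sketch of mine): the extended Euclidean algorithm produces $m'$, and $G$ inverts $F$ exactly as above, e.g. with $n = 26$, $m = 7$:

```python
def egcd(a, b):
    """Extended Euclid: return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

n, m = 26, 7                      # coprime, as in the hypothesis
g, m_inv, _ = egcd(m, n)          # m*m_inv + n*(...) = 1
assert g == 1

F = lambda a: (m * a) % n         # F([a]) = m[a]
G = lambda b: (m_inv * b) % n     # G([b]) = m'[b]

# G is a two-sided inverse of F, so F is a bijection on Z_n
assert all(G(F(a)) == a for a in range(n))
assert all(F(G(b)) == b for b in range(n))
```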
Fourier transform of $\exp(-t^2)$ using contour integration. | We can write
$$\exp (i\omega t)\exp(-t^2) = \exp\left(-(t-i\omega/2)^2\right)\exp\left((i\omega/2)^2\right).$$
The factor $\exp (-\omega^2/4)$ can be pulled out of the integral, and we are left with
$$\int_{-\infty + i0}^{+\infty+i0} \exp \left(-(z-i\omega/2)^2\right)\,dz.$$
Using the Cauchy integral theorem, we can shift the integration to
$$\int_{-\infty + i\omega/2}^{+\infty+i\omega/2}\exp \left(-(z-i\omega/2)^2\right)\,dz,$$
and parametrising as $z = t + i\omega/2$, that becomes
$$\int_{-\infty}^{\infty} \exp(-t^2)\,dt.$$
The shift of the contour of integration is justified by applying the Cauchy integral theorem to a rectangle with vertices $-R,\, R,\, R+i\omega/2,\, -R+i\omega/2$ and taking the limit $R\to +\infty$. |
Is integration by substitution a special case of Radon–Nikodym theorem? | There are measure-theoretic versions of integration by substitution, a couple of which can be found on the page you linked to (search the page for "Lebesgue"). Another version is an exercise in Royden's Real analysis (page 107 of the 2nd edition) which says that if $g$ is a monotone increasing, absolutely continuous function such that $g([a,b])=[c,d]$, and if $f$ is a Lebesgue integrable function on $[c,d]$, then $\displaystyle{\int_c^d f(y)dy=\int_a^bf(g(x))g'(x)dx}$. This is also an exercise in Wheeden and Zygmund's Measure and integral (page 124). One of the versions on the Wikipedia page generalizes this to subsets of $\mathbb{R}^n$ in the case where the change of variables is bi-Lipschitz.
I don't see how it would be. The Radon-Nikodym theorem says that if $\nu$ and $\mu$ are measures on $X$ such that $\nu$ is absolutely continuous with respect to $\mu$, then there is a $\mu$-integrable function $g$ such that $\int_X fd\nu=\int_Xfgd\mu$ for all $\nu$-integrable $f$. Both integrals are over the same set, with no change of variables. Maybe I'm not seeing what you have in mind.
However, you can at least derive the formula for linear change of variables for Lebesgue measure using the Radon-Nikodym theorem, and maybe there's more to this than I initially thought. If $T:\mathbb{R}^n\to\mathbb{R}^n$ is an invertible linear map and $m$ is Lebesgue measure, then $\int_{\mathbb{R}^n}fdm=|\det(T)|\int_{\mathbb{R}^n}f\circ Tdm$ for all integrable $f$. A proof of this (without Radon-Nikodym) is given in a 1998 article by Dierolf and Schmidt, and they mention that in the proof they could also have used the Radon-Nikodym theorem. They don't pursue this, but the idea is that $f\mapsto\int_{\mathbb{R}^n}f\circ Tdm$ corresponds to an absolutely continuous measure on $\mathbb{R}^n$, so there is a $g$ such that $\int_{\mathbb{R}^n}f\circ Tdm=\int_{\mathbb{R}^n}fgdm$. In particular, considering $f=\chi_E$ shows that $m(T^{-1}(E))=\int_Egdm$ for all measurable $E$. From this you can show that $g$ must be constant, and the constant must be the measure of the image of the unit $n$-cube under $T^{-1}$, which is $|\det(T^{-1})|=\frac{1}{|\det(T)|}$. |
Fourier transform of a cosine | Through Euler, we have:
$$A\cos(2\pi f_0t)=\frac{A}{2}\left[e^{j2\pi f_0t}+e^{-j2\pi f_0t}\right]$$
Ok, now there is an important property of Dirac's Delta function regarding its integration:
$$\int_{-\infty}^{+\infty}\delta(x\pm x_0)f(x)dx=f(\mp x_0)$$
Then, by inspection of the Euler form of the cosine, we have:
$$e^{j2\pi f_0t}=\int_{-\infty}^{+\infty}\delta(f-f_0)e^{j2\pi ft}df$$
and
$$e^{-j2\pi f_0t}=\int_{-\infty}^{+\infty}\delta(f+f_0)e^{j2\pi ft}df$$
Which are, clearly, the inverse Fourier Transforms of $\delta(f-f_0)$ and $\delta(f+f_0)$. So:
$$\frac{A}{2}\left[e^{j2\pi f_0t}+e^{-j2\pi f_0t}\right]\stackrel{\mathcal{F}}\longleftrightarrow \frac{A}{2}\left[\delta(f-f_0)+\delta(f+f_0)\right]$$
This is also the procedure used for $A\sin(2\pi f_0t)$, giving:
$$\frac{A}{2j}\left[e^{j2\pi f_0t}-e^{-j2\pi f_0t}\right] \stackrel{\mathcal{F}}\longleftrightarrow \frac{A}{2j}\left[\delta(f-f_0)-\delta(f+f_0)\right]$$ |
Problem involving permutations with repetition | Consider the balanced die with sides: $1,2,3,4,5_1,5_2$.
Then the number of ways to obtain $1,2,3,4,5_1$ or $1,2,3,4,5_2$ in any order is $2\cdot 5!$.
On the other hand, the number of possible outcomes is $6^5$ (in this counting we are considering $5_1$ and $5_2$ as two different values).
Hence the required probability is
$$p=\frac{2\cdot 5!}{6^5}=\frac{5}{162}.$$ |
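The count can be confirmed by exhaustive enumeration, modeling the die as having faces $1,2,3,4,5,5$ (sketch mine):

```python
from fractions import Fraction
from itertools import product

faces = [1, 2, 3, 4, 5, 5]        # balanced die with sides 1, 2, 3, 4, 5_1, 5_2
favorable = sum(
    1 for roll in product(faces, repeat=5)   # 6^5 equally likely ordered outcomes
    if sorted(roll) == [1, 2, 3, 4, 5]
)
assert favorable == 2 * 120       # 2 * 5! ordered favorable outcomes
assert Fraction(favorable, 6**5) == Fraction(5, 162)
```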
How to find the PDF of a function of two random variables | First of all you have to find the joint density $f_{XY}(x,y)$
You know:
Marginal $$f_Y(y)=\frac{1}{L}\mathbb{1}_{[0;L]}(y)$$
Conditional $$f_{X|Y}(x|y)=\frac{1}{y}\mathbb{1}_{[0;y]}(x)$$
So your joint density is
$$f_{XY}(x,y)=\frac{1}{yL}\mathbb{1}_{[0;L]}(y)\mathbb{1}_{[0;y]}(x)$$
...now you can proceed as you explained; please post your efforts.
EDIT:
First observe that $z \in [0;1]$
Then, using the definition of CDF you get
$$F_Z(z)=\mathbb{P}[Z \leq z]=\mathbb{P}[Y >\frac{X}{z}]=$$
$$=\int_0^{Lz}dx\int_{\frac{x}{z}}^L \frac{1}{Ly}dy...$$
Some intermediate results:
$$\frac{1}{L}\int_0^{Lz}dx\,[\ln y]_{\frac{x}{z}}^L=\frac{1}{L}\int_0^{Lz}[\ln L-\ln(x/z)]dx=\dots=z\ln L-(\ln L-1)z=z$$
Note: the integral $\int_0^{Lz}\ln(x/z)dx$ is solved by parts.
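The conclusion $F_Z(z)=z$, i.e. that the ratio $Z=X/Y$ is uniform on $[0,1]$, can be spot-checked by simulation (my sketch; note the answer is independent of $L$):

```python
import random

random.seed(0)
L, N = 4.0, 200_000
quantiles = (0.25, 0.5, 0.75)
count_below = [0, 0, 0]
for _ in range(N):
    y = random.uniform(0, L)       # Y ~ Uniform(0, L)
    while y == 0.0:                # avoid the measure-zero event y = 0
        y = random.uniform(0, L)
    x = random.uniform(0, y)       # X | Y=y ~ Uniform(0, y)
    z = x / y
    for j, zq in enumerate(quantiles):
        count_below[j] += (z <= zq)

# F_Z(z) = z: the empirical CDF should match the uniform one
for j, zq in enumerate(quantiles):
    assert abs(count_below[j] / N - zq) < 0.01
```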
particular solution of $y''+y'=xe^{-x}$ | It's often good to start with finding the solution to the homogeneous equation
$$y_h'' + y_h' =0$$
which, as you know, is $y_h = c_1 + c_2e^{-x}$. Thus you don't need to have an $e^{-x}$-term when you look for the particular solution, since this is already included in the homogeneous solution (if you include it in your particular solution, it will cancel out).
Second, if you are prone to small mistakes, you can take the terms in your particular solution one by one, since differentiation is linear. Start with $Axe^{-x}$:
$$\frac{d^2}{dx^2} Axe^{-x} + \frac{d}{dx} Axe^{-x} = -Ae^{-x}$$
then $Bx^2e^{-x}$:
$$\frac{d^2}{dx^2} Bx^2e^{-x} + \frac{d}{dx} Bx^2e^{-x} = 2Be^{-x} - 2Bxe^{-x}$$
You can see that if you have $y_p = Bx^2e^{-x} + Axe^{-x}$ you will get
$$y_p'' + y_p' = (2B - A)e^{-x} - 2Bxe^{-x}$$
thus $B = -\frac{1}{2}$ and $2B - A = -1 -A=0$, which gives $A = -1$, which gives you your answer.
A rule which is convenient when making these kinds of calculations is
$$\frac{d^n}{dx^n} \left( f(x) e^{ax} \right) = e^{ax} \left( \frac{d}{dx} + a \right)^n f(x)$$
where $\left(\frac{d}{dx}\right)^n$ is interpreted as $\frac{d^n}{dx^n}$.
For example, using this on $Axe^{-x}$ (writing $D$ for $\frac{d}{dx}$):
$$\begin{align}
\frac{d^2}{dx^2} Axe^{-x} + \frac{d}{dx} Axe^{-x} &= Ae^{-x}( (D-1)^2 x + (D-1) x )= Ae^{-x} (D-1)((D-1)x + x)) = \\
&= Ae^{-x} (D-1)(1) = -Ae^{-x}
\end{align}$$
You can quickly see that the term $Axe^{-x}$ is not enough. |
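A quick numeric spot-check of the resulting particular solution $y_p=-\frac12x^2e^{-x}-xe^{-x}$ (i.e. $A=-1$, $B=-\frac12$), using finite differences; the step size and sample points are arbitrary choices:

```python
import math

# Central differences approximate y_p'' + y_p'; compare with the
# right-hand side x*e^{-x} of the ODE at a few sample points.
def y_p(x):
    return -0.5 * x**2 * math.exp(-x) - x * math.exp(-x)

def residual(x, h=1e-4):
    d1 = (y_p(x + h) - y_p(x - h)) / (2 * h)
    d2 = (y_p(x + h) - 2 * y_p(x) + y_p(x - h)) / h**2
    return d2 + d1 - x * math.exp(-x)

max_err = max(abs(residual(x)) for x in (0.5, 1.0, 2.0, 3.0))
```

The residual is zero up to discretisation error.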
Find $x$ such that $4\sqrt x+\sqrt 2=3\sqrt 2$ | You have $f(x)$.You have to find a suitable $f(t)$.This means when you plug in $t$ for every value of $x$ so that the result becomes $3\sqrt2$.
If $f(x)=4\sqrt x+\sqrt 2$ then $f(t)=4\sqrt t+\sqrt 2$.
Now,solve.
$$4\sqrt t+\sqrt2=3\sqrt2$$
$$4\sqrt t=2\sqrt2$$
$$\sqrt t=\frac {1}{\sqrt2}$$
$$t=\frac 12$$.
You can back-calculate to check $f(\frac12)=3\sqrt2$ |
Hamel basis and Banach spaces | I can answer this question when the cardinality $\lambda$ of Hamel basis of $X$ is not less than $\mathfrak{c}$.
For any infinite dimensional Banach space its cardinality and cardinality of its Hamel basis are equal. See theorem 3.5 from The cardinality of Hamel bases of Banach spaces by Lorenz Halbeisen , Norbert Hungerbühler.
Consider the Banach space $\ell_\infty(\lambda)$. Clearly $\operatorname{Card}(\ell_\infty(\lambda))=\lambda\times \operatorname{Card}(\mathbb{C})=\lambda$, so the cardinalities of the Hamel bases of $\ell_\infty(\lambda)$ and $X$ coincide. Therefore, there is some linear isomorphism between $X$ and $\ell_\infty(\lambda)$. This isomorphism induces a complete norm on $X$. |
Proving Negative of Standard Normal is Standard Normal | You need to know that the distribution is symmetric to prove your result.
You can show it is symmetric by looking at the density $$\phi(-x)=\dfrac{1}{\sqrt{2\pi}}e^{-(-x)^2/2}=\dfrac{1}{\sqrt{2\pi}}e^{-x^2/2}=\phi(x)$$ |
Shortest distance along line within a certain distance of point | I'm assuming you want a programmatic approach instead of a purely analytic one, so I would attack this problem using standard transformations.
First, translate so $(x_0,y_0)$ lands on the origin - this corresponds to subtracting $(x_0,y_0)$ from each point.
Second, rotate by $-\theta$ so that the line $L$ lands on the $x$-axis - this corresponds to multiplying by the rotation matrix $$\begin{bmatrix} \cos \theta & \sin \theta\\-\sin \theta & \cos \theta\end{bmatrix}$$
Now, $(x_1, y_1)$ has been transformed to $(x_1^\prime , y_1^\prime)$, and we want to find the point on the $x$-axis within a distance of $2$ of it that has the smallest $x$ coordinate. Assuming $|y_1^\prime| \leq 2$:
If ${x^\prime_1}^2 + {y^\prime_1}^2 \leq 4$ then $(0,0)$ is the nearest point.
Otherwise:
If $x^\prime_1 > 0$, then $(x^\prime_1 - \sqrt{2^2 - {y_1^\prime}^2},0)$ is the nearest point.
If $x^\prime_1 < 0$, then $(x^\prime_1 + \sqrt{2^2 - {y_1^\prime}^2},0)$ is the nearest point.
Now, just reverse the transformations to find the coordinates you need. |
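A minimal sketch of the whole procedure in Python (the function name and the radius parameter are illustrative, not from the original post):

```python
import math

# Translate so (x0,y0) is at the origin, rotate by -theta so the line L
# lies on the x-axis, pick the nearest admissible axis point (within
# distance r of the transformed target point), then undo both transforms.
def nearest_on_line(x0, y0, theta, x1, y1, r=2.0):
    dx, dy = x1 - x0, y1 - y0
    c, s = math.cos(theta), math.sin(theta)
    xp, yp = c * dx + s * dy, -s * dx + c * dy    # rotate by -theta
    if abs(yp) > r:
        return None                               # no axis point is within r
    if xp * xp + yp * yp <= r * r:
        qx = 0.0                                  # the origin already works
    else:
        shift = math.sqrt(r * r - yp * yp)
        qx = xp - shift if xp > 0 else xp + shift
    # reverse the transforms: rotate by +theta, then translate back
    return (x0 + c * qx, y0 + s * qx)
```

For instance, with the line along the $x$-axis through the origin and target $(5,0)$, the sketch returns $(3,0)$, exactly distance $2$ short of the target.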
$n$ people sitting on a circular table without repeating neighbour-sets | $\def\FF{\mathbb{F}}\def\PP{\mathbb{P}}$I can achieve the bound $\binom{n-1}{2}$ if $n=q+1$ for a prime power $q$.
Outline of method: Suppose that $X$ is a set of size $n$ and $G$ is a group acting on $X$ in a sharply three transitive manner. This means that, if $(x_1, x_2, x_3)$ and $(y_1, y_2, y_3)$ are two ordered triples of distinct elements of $X$, there is precisely one $g \in G$ with $g(x_i)=y_i$ for $i=1$, $2$, $3$. (So, in particular, we have $|G| = n(n-1)(n-2)$.) Suppose furthermore that there is an element $\gamma$ in $G$ which acts by an $n$-cycle on $X$, and $\gamma$ is conjugate to $\gamma^{-1}$.
Then I claim $n$ is achievable. Let $C \subset G$ be the conjugacy class of $\gamma$. We note that the only permutations in $S_n$ which commute with an $n$-cycle are powers of that $n$-cycle, so $Z(\gamma) = \langle \gamma \rangle$ and the size of the conjugacy class $C$ is $|G|/|Z(\gamma)| = n(n-1)(n-2)/n=(n-1)(n-2)$.
I claim that, for any distinct elements $y_1$, $y_2$, $y_3$ in $X$, there is a unique $\delta \in C$ with $\delta(y_1) = y_2$ and $\delta(y_2) = y_3$. We first see that there is at least one $\delta$: Fix $x_1 \in X$, define $x_2 = \gamma(x_1)$ and $x_3 = \gamma(x_2)$, and let $h \in G$ be such that $h(x_i) = y_i$. Then $\delta = h \gamma h^{-1}$ does the job. But $|C| = (n-1) (n-2)$ and each $\delta \in C$ gives rise to $n$ triples $(y_1, y_2, y_3)$ with $\delta(y_1) = y_2$, $\delta(y_2) = y_3$. So, if each triple occurs at most once, then each triple occurs exactly once.
We have shown that the elements of $C$ are $(n-1)(n-2)$ oriented $n$-cycles, with each $y_1 \to y_2 \to y_3$ occurring in exactly one. But also, we assumed $\gamma$ conjugate to $\gamma^{-1}$, so for every cycle in $C$ its reverse is also in $C$. Identifying these, we get $\binom{n-1}{2}$ unoriented cycles, where for each $y_2$, and each pair of neighbors $y_1$, $y_3$, there is exactly one cycle where they occur.
Details Take $X = \PP^1(\FF_q)$ and $G = PGL_2(\FF_q)$ with the obvious action. It is well known that $G$ is sharply triply transitive. Choose an identification of $(\FF_q)^2$ with the field $\FF_{q^2}$, choose a generator $\theta$ for the cyclic group $\FF_{q^2}^{\times}$ and let $\gamma \in GL_2(\FF_q)$ be the matrix of multiplication by $\theta$. Then the image of $\gamma$ in $PGL_2(\FF_q)$ has order $q+1$. Moreover, in $PGL_2(\FF_q)$, every element is conjugate to its inverse. This proves the claim. $\square$.
Zassenhaus classified the sharply triple transitive permutation groups and, for all of them, $|X|=q+1$ for a prime power $q$. So there aren't any other $n$ achievable in this way. |
Each ordered semigroup is cancellative: reference? | EDIT. This is an answer to the initial question of the OP, which has been changed later on.
This is not true. Actually, the fact that $+$ respects order just means that $a \leqslant b$ implies $a+c \leqslant b+c$.
For a counterexample to your claim, take $M = \{0, 1\}$ under the usual multiplication and the usual order $0 < 1$.
EDIT. The answer to the new question is "yes", but the proof should be
Suppose that $c \leqslant b$. Then $a+c \leqslant a+b$ as + respects the
order. The claim follows by contraposition. |
Integral of $1/z$ over the unit circle | The problem is that the complex logarithm is not continuous on the whole of $\mathbb{C}$. The formula goes
$ \log(z) = \log(|z|) + i \arg(z)$
where you have to define $\alpha < \arg(z) \leq 2 \pi + \alpha$ for some angle $\alpha$. Typically people take the choice $\alpha = 0$ or $\alpha = -\pi$, and a choice of $\alpha$ is called a branch cut, because it defines a ray in the complex plane along which the logarithm is discontinuous.
When you evaluate your integral in terms of the complex logarithm, you have to keep in mind that you are evaluating $\log(1)$ from two different directions in the complex plane, from above the real axis and below it. If you define $\alpha = 0$ as above, then you'll find that the argument of real numbers right above the real axis and right below it differ by $2\pi$, so you get the right answer. |
The fastest method of finding the inverse of a 1024 bit number (N) i.e. 1/N? | I asked a more general version of this question a while ago, and going by the answer I got, to get the length of the minimal period you first factor out all factors $2$ from $N$ to get $N'$ (which is feasible, especially in binary).
The answer is then equal to the multiplicative order of $2$ modulo $N'$. Whether there is a good algorithm for this, I don't know, but it's such a common problem that someone ought to have thought of something clever. |
Property of Convolutions | No but what you get is this:
$$h*g(x) = \int h(y)g(x-y)dy = \int f(cy)g(x-y)dy.$$
Substitute $z=cy$ and get
$$h*g(x) = \frac{1}{c}\int f(z)g((cx - z)/c)dz.$$
In other words: With $\phi(x) = g(x/c)$ it holds
$$h*g(x) =\frac{1}{c} f*\phi(cx).$$ |
Question about bounded variation and continuity | Hints: For bounded variation, take the partition $P:\{0,\frac{1}{2n},\frac{1}{2n-1},\dots,\frac13,\frac12,1\}$.
$$V(P,f)=|f(\tfrac{1}{2n}) - f(0)|+|f(\tfrac{1}{2n})-f(\tfrac{1}{2n-1})|+\dots+|f(\tfrac13)-f(\tfrac12)|+|f(\tfrac12)-f(1)|$$
$$=\tfrac{1}{2n} +\tfrac{1}{2n}+\tfrac{1}{2n-2}+\dots+\tfrac12$$
$$=\tfrac12\left(1+\tfrac12+\tfrac13+\dots+\tfrac{1}{n-1}+\tfrac1n+\tfrac1n\right).$$ Clearly the right-hand side goes to infinity as $n$ goes to infinity. |
How to solve this hard sum problem? | The answer is $\color{green}{-1}$.
Indeed, the sum converges very quickly (the terms are $\Theta(x^{-13})$) and the inverses of the partial sums are
$$55741.9354839\\
55312.3888560\\
55297.2897327\\
55296.1612133\\
55296.0276428\\
55296.0059708\\
55296.0015387\\
55296.0004559\\
55296.0001513\\
55296.0000551\\
\cdots$$
Obviously, this tends to $4\cdot4!^3$. |
Algorithm to generate a fraction of winners based on a maximum number of players | Easiest way to do it [in my opinion], if you need k% winners out of n players, is to populate a list, shuffle it using the Fisher-Yates shuffle, and then pick the first k%:
1. populate a list of size n containing: [1,2,...,n]
2. shuffle the list
3. take the sublist containing the first k% of elements -
it is the indexes of the winners out of all players
(*) p.s. I am not familiar with objective-c, but I can only assume an implementation of Fisher-Yates already exists in it. |
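For illustration, here is a minimal Python sketch of the three steps (Python's `random.shuffle` implements the Fisher-Yates shuffle internally):

```python
import random

# pick_winners: the three steps from the answer, with illustrative names.
def pick_winners(n, fraction, rng=random):
    players = list(range(1, n + 1))   # step 1: [1, 2, ..., n]
    rng.shuffle(players)              # step 2: Fisher-Yates shuffle
    k = int(n * fraction)
    return players[:k]                # step 3: first k% are the winners

winners = pick_winners(100, 0.10, random.Random(42))
```

With a seeded generator the draw is reproducible; the result is always a set of distinct player indices of the requested size.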
Name That Statistical Function | This is $$\frac1{18}(\operatorname{Tr}M)^2-\frac16\operatorname{Tr}M^2\;.$$ |
Can we find functions $g_0,g_1, \dots, g_n$ such that $f_0g_0+f_1 g_1 + \cdots + f_n g_n =1$ | Holomorphic functions with pointwise addition and multiplication are an integral domain, so when at least one $f_n$ is not zero we can chose $g_n$ as it's meromorphic inverse.
So the trivial case in which it is possible is when we have an $f_k$ without a zero in the unit circle (as $g=1/f_k$ will be holomorphic too)
The trivial case in which it doesn't work is when all the $f_k$ have a common zero.
When every $f_k$ has a zero somewhere it is complicated. When the $f_k$ are polynomials it will work, by choosing the $g_k$ as polynomials. For example, when $f_0=x+1$ and $f_1= x$ we choose $g_0=1-x$ and $g_1=x$ and have
$$-x^2+1+x^2=1$$
Bad luck, this doesn't work as well for Laurent series, as essentially we used the finite degree of a polynomial. |
Uniform Bivariate Probabilities | As per the fact that X and Y are continuous rv's it is self evident that
$$\mathbb{P}[X+Y=c]=0$$
$\forall c$
It would be different if you wanted to derive the density of $X+Y$ but also in this case it is not possible to solve the problem unless you have enough information about the dependence structure between X and Y (i.e. X,Y independent)
So Let's assume X,Y independent and let's suppose we are interested in calculating $f_Z(z)$, say the law of $Z=X+Y$ and let's suppose $a>b$
First let's note that $f_Z(z)$ is the pdf of the rv, say a probability density and not a "probability function". Known the pdf we can calculate the probability of any interval of Z.
To calculate f it is easier to calculate $F_Z(z)$ first, using the definition.
Before begin the calculation it is very useful to do a drawing of the problem
As you should know, the CDF of the sum is the area under the $Y=z-X$ line, where $z \in [0;a+b]$ multiplied by the joint density $f(x,y)=\frac{1}{ab}$
It is very easy to calculate it in the following way
$$ F_Z(z)=\mathbb{P}[Z \leq z] =
\begin{cases}
0, & \text{if $z <0$} \\
\frac{z^2}{2ab}, & \text{if $0 \leq z <b$} \\
\frac{2z-b}{2a}, & \text{if $b \leq z <a$} \\
1-\frac{(a+b-z)^2}{2ab}, & \text{if $a \leq z <a+b$} \\
1, & \text{if $z \geq a+b$}
\end{cases}$$
If you want to derive $f(z)$ you only have to differentiate $F$. If you set $a=b$ the calculations are easier; if you set $a<b$ the reasoning is the same but reversed.
How F is calculated in the various intervals?
Just as an example, in the interval $b\leq z<a$ the CDF is
$$F_Z(b)+\frac{z-b}{ab}b=\frac{b}{2a}+\frac{z-b}{ab}b$$
that is (area of the first triangle + area of the parallelogram) on the total area $ab$
.... and so on |
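As a sanity check, the piecewise CDF can be compared against a deterministic grid count of the region $x+y\le z$ (a sketch with the illustrative values $a=3$, $b=1$):

```python
# X ~ U(0,a), Y ~ U(0,b) independent, Z = X + Y; F is the CDF derived
# above (for a > b), grid_cdf counts midpoints of an n x n grid on the
# rectangle [0,a] x [0,b] that satisfy x + y <= z.
def F(z, a, b):
    if z < 0:       return 0.0
    if z < b:       return z * z / (2 * a * b)
    if z < a:       return (2 * z - b) / (2 * a)
    if z < a + b:   return 1 - (a + b - z) ** 2 / (2 * a * b)
    return 1.0

def grid_cdf(z, a, b, n=400):
    hits = sum(
        1
        for i in range(n) for j in range(n)
        if (i + 0.5) * a / n + (j + 0.5) * b / n <= z
    )
    return hits / (n * n)

a, b = 3.0, 1.0
max_gap = max(abs(F(z, a, b) - grid_cdf(z, a, b)) for z in (0.5, 2.0, 3.5))
```

The grid count agrees with all three branches of the formula up to the $O(1/n)$ discretisation error.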
Find positive, coprime integers whose reciprocals sum to 1. | Say the integers are $a_1, a_2, ..., a_n$.
$\sum_{i=1}^{n}\frac{1}{a_i}=1$
$\frac{\sum_{i=1}^{n}{\prod_{j\neq i}{a_j}}}{\prod_{i=1}^{n}{a_i}}=1$
$\sum_{i=1}^{n}{\prod_{j\neq i}{a_j}}= \prod_{i=1}^{n}{a_i}$
$\prod_{j \neq 1}{a_j}+ \sum_{i=2}^{n}{\prod_{j\neq i}{a_j}}= \prod_{i=1}^{n}{a_i} $
The first term on the left-hand side of the equation is the only one not divisible by $a_1$ (the $a_i$ being pairwise coprime), while the right-hand side is divisible by $a_1$. Therefore such sets of reciprocals do not exist. |
Solve for $P$ in $M=P\cdot C$ with $C=diagflat(P^{-1}\cdot W)$ | I'm not familiar with some of your terminology, e.g., "colourspace primaries matrix" and $M$ being "normalized". So I'll just solve the more general purely linear algebraic problem. I'll assume everything in sight is non-singular.
You wish to solve
$$
(*)\ \ \ \ \ M = P\ \ \text{diagflat}(P^{-1} W).
$$
for $P$ where $M$ and $W$ are known. Let
$$
(**)\ \ \ \ \ A = \text{diagflat} (P^{-1} W).
$$
Then $A$ is a $3\times 3$ diagonal matrix. Let $A_1$, $A_2$, and $A_3$ be the diagonal entries of $A$.
Observe $(*)$ can be rearranged as
$$
P^{-1} = A M^{-1}
$$
and therefore
$$
P^{-1} W = A M^{-1} W.
$$
Recalling the definition of $A$ in $(**)$, we have
\begin{eqnarray}
A &=& \text{diagflat}(P^{-1} W) \\
&=& \text{diagflat} (A M^{-1} W).
\end{eqnarray}
But this means
$$
A_i = A_i (M^{-1} W)_i
$$
for $i=1,2,3$ where $(M^{-1} W)_i$ is the $i^{\text{th}}$ member of $M^{-1} W$. That means you'd better have
$$
M^{-1} W = \mathbf{1}
$$
where $\mathbf{1}$ is the $3\times 1$ matrix of $1$'s. This is equivalent to $W = M\mathbf{1}$, i.e., the sum of the $i^{\text{th}}$ row of $M$ is the $i^{\text{th}}$ member of $W$ for all $i$. (I see this is true in your example, so I assume your problem domain somehow imposes the constraint $W = M\mathbf{1}$.)
Assuming $W = M\mathbf{1}$, there are infinitely many solutions $P$ for your problem. Just take
$$
P = M A^{-1}
$$
for any non-singular diagonal matrix $A$. I assume you need $P$ to conform to some additional problem-specific constraint. Choose $A$ accordingly. |
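A quick numeric sketch of this conclusion, done in $2\times2$ for brevity (the algebra above is dimension-independent); exact rational arithmetic avoids rounding issues:

```python
from fractions import Fraction as Fr

# With W = M·1, any nonsingular diagonal A should give a solution
# P = M A^{-1} of M = P diagflat(P^{-1} W).  M and A here are arbitrary
# illustrative choices.
M = [[Fr(2), Fr(1)], [Fr(1), Fr(3)]]
W = [sum(row) for row in M]           # W = M·1, the required constraint
A = [Fr(2), Fr(4)]                    # diagonal entries of an arbitrary A

P = [[M[i][j] / A[j] for j in range(2)] for i in range(2)]   # P = M A^{-1}

# invert P (2x2 formula) and form the diagonal of diagflat(P^{-1} W)
det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
Pinv = [[P[1][1] / det, -P[0][1] / det], [-P[1][0] / det, P[0][0] / det]]
d = [sum(Pinv[i][j] * W[j] for j in range(2)) for i in range(2)]

reconstructed = [[P[i][j] * d[j] for j in range(2)] for i in range(2)]
```

The reconstruction returns $M$ exactly, and the recovered diagonal $d$ equals the chosen $A$, as the derivation predicts.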
Convergence of sum of reciprocals of binomial coefficients | I think that, for $n>1$, you are entering the world of hypergeometric functions.
Just have a look at the table and notice the patterns
$$\left(
\begin{array}{cc}
n & S_n \\
2 & \frac{1}{4} \, _3F_2\left(1,2,2;\frac{3}{2},\frac{3}{2};\frac{1}{16}\right) \\
3 & \frac{1}{8} \, _4F_3\left(1,2,2,2;\frac{3}{2},\frac{3}{2},\frac{3}{2};\frac{1}{64}\right) \\
4 & \frac{1}{16} \, _5F_4\left(1,2,2,2,2;\frac{3}{2},\frac{3}{2},\frac{3}{2},\frac{3}{2};\frac{1}{256}\right) \\
5 & \frac{1}{32} \, _6F_5\left(1,2,2,2,2,2;\frac{3}{2},\frac{3}{2},\frac{3}{2},\frac{3}{2},\frac{3}{2};\frac{1}{1024}\right)
\end{array}
\right)$$ What is interesting is that $\log(S_n)$ is almost a linear function of $n$ (almost $\log(S_n)=-n \log(2)$).
To show $f$ conatined in oval is constant | Think about the Open Mapping Theorem. Note that the image of $f$ is contained in a curve. |
Unbiased estimator of the $l$-norm of a probability vector | For $n\geq l$,
$$\mathbb E\left[Y_1(Y_1-1)\ldots(Y_1-l+1)\right]$$
$$=\sum_{k=l}^n k(k-1)\ldots(k-l+1) \binom{n}{k} \bigl(p(1)\bigr)^k \bigl(1-p(1)\bigr)^{n-k}$$
$$
=n(n-1)\ldots(n-l+1)\cdot \bigl(p(1)\bigr)^l\cdot\sum_{k-l=0}^{n-l} \binom{n-l}{k-l} \bigl(p(1)\bigr)^{k-l} \bigl(1-p(1)\bigr)^{(n-l)-(k-l)}
$$
$$
=n(n-1)\ldots(n-l+1)\cdot \bigl(p(1)\bigr)^l.
$$
So,
$$\mathbb E\left[\frac{Y_1(Y_1-1)\ldots(Y_1-l+1)}{n(n-1)\ldots(n-l+1)}\right]=\bigl(p(1)\bigr)^l$$ |
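A quick exact check of this factorial-moment identity for small illustrative values ($n=7$, $l=3$, $p(1)=2/5$), using rational arithmetic:

```python
from fractions import Fraction
from math import comb

# E[Y(Y-1)...(Y-l+1)] = n(n-1)...(n-l+1) p^l for Y ~ Binomial(n, p).
n, l, p = 7, 3, Fraction(2, 5)

def falling(m, l):
    out = 1
    for i in range(l):
        out *= m - i
    return out

lhs = sum(
    falling(k, l) * comb(n, k) * p**k * (1 - p)**(n - k)
    for k in range(n + 1)
)
rhs = falling(n, l) * p**l
```

Terms with $k<l$ contribute zero automatically, since the falling factorial vanishes there.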
Calculating the odds of winning a card game | Calculate the probability of each event and then use the expected value. |
Is the class of subsets of integers countably infinite? | No, the set of all subsets of the integers is not countable. Since $\mathbb{Z}$ has the same cardinality as $\mathbb{N}$, it suffices to consider all subsets of $\mathbb{N}$.
For each subset $X$ of $\mathbb{N}$ consider the characteristic function $\chi_X$ defined by
$\chi_X(z) = \begin{cases}
1 & \quad z \in X \\
0 & \quad z \notin X
\end{cases}$
In this way you associate, injectively and surjectively, each subset $X$ of $\mathbb{N}$ with a function in $2^{\mathbb{N}}$. $2^\mathbb{N}$ has cardinality strictly larger than $\mathbb{N}$. This is proved by the typical Cantor diagonalization argument.
Also, Cantor Diagonalization and the function I wrote above can be used to show more generally that the set of all subsets of a given set has cardinality strictly greater than the given set.
In response to comment :
You can think of a function from $\mathbb{N} \rightarrow 2$ as an infinite binary string of $0$'s and $1$'s. Assume that $2^{\mathbb{N}}$ is countable. That is, there is a bijection $\sigma$ from $\mathbb{N}$ to $2^\mathbb{N}$. Then define the function $h : \mathbb{N} \rightarrow 2$ as follows
$h(n) = \begin{cases}
1 & \quad (\sigma(n))(n) = 0 \\
0 & \quad (\sigma(n))(n) = 1
\end{cases}$
Informally, this is the familiar argument: form a new binary string by going down the diagonal and switching $0$ for $1$ and $1$ for $0$. Now this is a perfectly good binary string, hence it should appear as $\sigma(k)$ for some $k$ if $\sigma$ is indeed a bijection. However, it cannot be $\sigma(k)$ for any $k$ since it differs from $\sigma(k)$ in at least the $k^\text{th}$ entry.
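A finite illustration of the diagonal construction (truncating the strings to length $N$; the particular listing is arbitrary):

```python
# sigma plays the role of an enumeration of binary strings (truncated to
# length N); flipping the diagonal yields a string h that differs from
# the k-th listed string in position k, for every k.
N = 8
sigma = [[(i * j + j) % 2 for j in range(N)] for i in range(N)]
h = [1 - sigma[n][n] for n in range(N)]
differs = all(h[k] != sigma[k][k] for k in range(N))
```

Of course, a finite check proves nothing about $2^{\mathbb N}$; it only makes the mechanics of the diagonal step concrete.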
I hope this helps. |
Show that $\exp: \mathfrak h \to \mathfrak H$ is a bijection. | In this case, the exponential map is just the map$$\begin{bmatrix}0&x&z\\0&0&y\\0&0&0\end{bmatrix}\mapsto\begin{bmatrix}1&x&z+\frac12xy\\0&1&y\\0&0&1\end{bmatrix}$$and it is quite easy to prove that this map is a bijection from $\mathfrak h$ onto $\mathfrak H$. |
Evaluate Fouries transform using properties | The Fourier transform of a product is the convolution between the Fourier transforms of the factors. Since:
$$t\left(\frac{\sin t}{\pi t}\right)^2 = \frac{\sin(t)}{\pi}\cdot\frac{\sin(t)}{\pi t},$$
the Fourier transform of $\frac{\sin t}{t}$ is a multiple of $\mathbb{1}_{[-1,1]}$ and the Fourier transform of $\sin t$ is the difference of two Dirac deltas, the claim follows. |
Binomial coefficient equivalence | Think of it this way: Say we have a collection of $n$ things and will be taking $k$ of them with us. Then we can either choose which $k$ to take with us--which we can do in $\binom{n}{k}$ ways--or choose which $n-k$ not to take with us--which we can do in $\binom{n}{n-k}$ ways. Both approaches yield exactly the same result, so the counts are the same. |
How to choose a basis for the kernel of a matrix? | Hint: You have already figured out that the kernel has dimension 2. The vectors $x$ in the kernel satisfy $-x_1+x_2+x_4=0$ and $4 x_3=0$.
You just need to find two vectors in the kernel that are linearly independent. They will form a basis of the kernel. |
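A concrete sketch of one valid choice, with a mechanical check of both constraints and of independence (the particular basis vectors are one choice among many):

```python
# Coordinates x = (x1, x2, x3, x4); the kernel conditions from the hint
# are -x1 + x2 + x4 = 0 and 4*x3 = 0, i.e. x3 = 0 and x1 = x2 + x4.
basis = [(1, 1, 0, 0), (1, 0, 0, 1)]

def in_kernel(x):
    return -x[0] + x[1] + x[3] == 0 and 4 * x[2] == 0

ok = all(in_kernel(v) for v in basis)

# independence of two vectors: the 2x2 minor taken from the coordinates
# (x2, x4) is the identity, hence nonsingular.
minor_det = basis[0][1] * basis[1][3] - basis[0][3] * basis[1][1]
```

Both vectors satisfy the two equations, and the nonzero minor shows neither is a multiple of the other.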
Eigen values and self adjoin operator | Since $K$ is self-adjoint, we have $\langle Ku,v\rangle=\langle u,Kv\rangle$. If $Ku=\lambda u$ and $u=v\ne0$, then
$$
\lambda \langle u,u\rangle=\langle \lambda u,u\rangle=\langle u,\lambda u\rangle=\overline\lambda\langle u,u\rangle,
$$
where the second identity comes from the fact that $K$ is self-adjoint. Since $\langle u,u\rangle=\|u\|^2\ne0$, we conclude that $\lambda =\overline\lambda$ and so $\lambda$ is real. |
How many subset of $A=\{1,2,3...,35\} $ divisible by $5$. | Let $\Gamma$ be the set of all $26$-element subsets of $\mathbb{Z}_{35}$. Define the partition
$$ \Gamma=\Gamma_0\sqcup \Gamma_1\sqcup\Gamma_2\sqcup\Gamma_3\sqcup\Gamma_4 $$
where $\Gamma_k\subseteq\Gamma$ consists of those $A\in\Gamma$ with $\sum_{a\in A}a\equiv k\bmod 5$.
There is a function $\tau$ on $\Gamma$ defined by $\tau(A)=\{a+1\mid a\in A\}$. You can show $\tau(\Gamma_k)=\Gamma_{k+1}$ (with the index interpreted $\bmod 5$). Therefore, each $\Gamma_k$ has the same size, which must therefore be $1/5$th that of $\Gamma$.
In conclusion, $|\Gamma_0|=\frac{1}{5}\binom{35}{26}$. |
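The original count is too large to enumerate directly ($\binom{35}{26}\approx 7\cdot10^7$), but the same shift argument can be checked on a small analogue: $6$-element subsets of $\{0,\dots,9\}$, where shifting every element by $1$ adds $6\equiv1\bmod 5$ to the sum, so the five residue classes should be equinumerous.

```python
from itertools import combinations
from math import comb

# Classify all 6-element subsets of {0,...,9} by their sum mod 5.
counts = [0] * 5
for A in combinations(range(10), 6):
    counts[sum(A) % 5] += 1
```

Each class indeed contains $\frac15\binom{10}{6}=42$ subsets.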
Wedge product is zero | There is the theorem (Theorem 3 in Wedge Product) saying that $v_1\wedge v_2\wedge \cdots \wedge v_k=0$ if and only if $$v_j=\sum_{i=1,i\ne j} a_i v_i$$
for some $j$, $1\le j\le k$. Applying it to (1) we get the most general solution in the form
$$\mathcal{Y}=f dC,$$
with some arbitrary function $f$. The solution (2) is correct ONLY if we require additionally $d\mathcal{Y}=0$, which gives $f=f(C)$. |
Decay of a convolution from its Fourier transform | My idea for a solution:
Let $x=(x_0,\vec{x})\in\mathbb{R}^4$ and $\operatorname{supp} f\subset O$. Then
$\int f(y)\Delta_0(x-y)d^4y=\int_O f(y)\Delta_0(x-y)d^4y$.
I am interested in the behavior for fixed $x_0$ and $|\vec{x}|\rightarrow\infty$.
For $|\vec{x}|$ large enough, we leave $\operatorname{singsupp}\Delta_0=\{x\in\mathbb{R}^4:x^2=x_0^2-|\vec{x}|^2=0\}$ and we can treat $\Delta_0$ as a smooth function:
$\left|\int_O f(y)\Delta_0(x-y)d^4y\right|\leq\int_O\left|f(y)\Delta_0(x-y)\right|d^4y\leq\sup_{y\in O}\frac{1}{|\vec{x}-\vec{y}|^2-(x_0-y_0)^2}\int_O |f(y)|d^4y$
The asymptotic behavior is governed by the supremum term. |
Show that $x^2+2 \equiv 3 \mod 4$ and deduce that there exists a prime $p$ with $p|x^2+2$ and $p \equiv 3 \mod 4$. | Assuming $x$ is odd, $x\equiv1\pmod2$ so $2|x-1$ so $2|(x-1)+2=x+1,$
so $4|(x-1)(x+1)=x^2-1=x^2+2-3, $ so $ x^2+2\equiv3\pmod4$.
Let $p$ be a factor of $x^2+2$. $p$ must be odd because $x$ and therefore $x^2+2$ is.
If all such factors were $\equiv1\pmod4$ then their product would be $\equiv1\pmod4$, a contradiction.
So $x^2+2$ has a prime factor $\equiv3\pmod4$. |
If $A\subseteq B$, then is $\langle A \rangle \le \langle B \rangle$? | Let $G$ be the group that you are working with. The group $\langle A\rangle$ is the intersection of all subgroups of $G$ containing $A$. Since $A\subset B$, $\langle B\rangle$ is one such subgroup, and therefore $\langle A\rangle$ is a subgroup of $\langle B\rangle$. |
Prove that column vectors of a matrix $A $ {span} $(\mathbb{R^m})$ | If you can solve the system $Ax=b$ for every $b$ then the application $f: \Bbb{R}^n \to \Bbb{R}^m$ given by $f(x)=Ax$ is surjective. Try to prove that the column vectors of $A$ span $\Bbb{R}^m$ if and only if the application defined above is surjective. (in the end, if you do the computations, you arrive exactly at the definition) |
For two standard uniform RVs is there a way to find the $E[|U_1-U_2|]$ using the linearity rule, ie $E[X+Y] =E[X] + E[Y]$? | "What about the absolute value screws up the linearity of expected values?" This issue has nothing fundamental to do with expected values at all; it's just a property of how numbers behave, and in particular how absolute values do not play nicely with $+$ or $-$. Note, for instance, that $|5 - 7| \neq |5| - |7|$.
"Is there a way to solve this problem using linearity of expectations?" Not an easy one, no. The best you can do is break this up into cases; consider one case where $U_1 > U_2$, and another where $U_2 > U_1$. You will quickly notice that this strongly resembles your direct calculation via double integral.
It's not the linearity of expected values that's burning you here, as they are indeed always linear; it's the (non)-linearity of absolute values. |
Solving a minimization of the minimum problem | It's a linear programming problem. If $c_1$ and $c_2$ are fixed, then just solve the two problems separately and take the smaller of the two optima as your solution.
If $c_1$ and $c_2$ are not fixed, I think your question is
$$\min\ c^Tx \quad \text{s.t.}\quad Ax=b,\ x\geq0,\ c\in \mathbb{R}^n$$
Assume there exists an $x$ satisfying $Ax=b,\ x\geq0$; then we can always find a $c$ to make the objective function smaller. Therefore, the minimum of this problem is $-\infty$. |
Is the space of positive (semi- or not) definite correlation matrices Polish? | Let me do it this way:
The symmetric matrices are closed in all $p \times p$-matrices (obvious), hence they are Polish.
The positive definite matrices are an open cone in the symmetric matrices, hence they are Polish as well (open subsets of a Polish space are Polish — this is a bit easier to prove than the fact on $G_{\delta}$'s).
Finally, since the functions $f_{i}:A \mapsto a_{ii}$ are continuous, their simultaneous pre-image of $1$ is closed in the positive definite matrices, hence the positive definite correlation matrices are Polish.
The positive semi-definite case is even simpler, as the conditions are all closed.
Finally, I can only recommend working through the first few sections of Kechris, as most of these arguments become rather simple once one gets used to them. |
Prove that for any real numbers $a,b$ we have $\lvert \arctan a−\arctan b\rvert\leq \lvert a−b\rvert$. | You want to prove that $\lvert \arctan a−\arctan b\rvert\leq \lvert a−b\rvert$. This is equivalent to proving that
$$
\frac{\lvert \arctan a−\arctan b\rvert}{\lvert a−b\rvert}\leq 1.
$$
Now let $f(x) = \arctan(x)$. Then (make a note of why you can say this) by the Mean Value Theorem you have a $c\in (a,b)$ such that
$$
f'(c) = \frac{\lvert \arctan a−\arctan b\rvert}{\lvert a−b\rvert}.
$$
All that is left for you to do is to prove that for any such $c$, $f'(c)\leq 1$. |
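A quick numeric spot-check of the inequality on random pairs (this of course proves nothing, but it catches sign slips; the sample range is arbitrary):

```python
import math
import random

# |arctan a - arctan b| <= |a - b| follows from f'(c) = 1/(1+c^2) <= 1;
# the tiny slack absorbs floating-point rounding.
rng = random.Random(0)
holds = all(
    abs(math.atan(a) - math.atan(b)) <= abs(a - b) + 1e-15
    for a, b in ((rng.uniform(-50, 50), rng.uniform(-50, 50))
                 for _ in range(1000))
)
```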
Line integral of $v_1$ along $\gamma$? | Hint
You have to replace $$\cos^2(t)-t$$
by
$$\cos^3(t)-t$$
and
use the fact that
$$\cos(3t)=4\cos^3(t)-3\cos(t)$$
to compute
$$\int_0^{2\pi}\cos^3(t)dt.$$ |
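Both ingredients are easy to confirm numerically (midpoint rule; the sample size is an arbitrary choice):

```python
import math

# Check the triple-angle identity cos(3t) = 4cos^3(t) - 3cos(t) pointwise,
# and that the integral of cos^3 over a full period vanishes.
n = 10000
ts = [(i + 0.5) * 2 * math.pi / n for i in range(n)]
identity_err = max(
    abs(math.cos(3 * t) - (4 * math.cos(t) ** 3 - 3 * math.cos(t)))
    for t in ts
)
integral = sum(math.cos(t) ** 3 for t in ts) * 2 * math.pi / n
```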
Limit of $\frac{n! +n^n}{1+\ln(n)}$ | $a_n=o(n^{n})$ and $b_n=o(n)$ does not imply that $\frac {a_n} {b_n} \to \infty$.
Just use the fact that the numerator is $>n^{2}$ and the denominator is $< n$ (for $n>1$), so the ratio is $>n$. |
What does it mean for a $T(x,y)$ to be one to one and how to show it? | Typically, a function $f$ is one-to-one if it is injective.
This means that if $f(x) = f(y)$, then $x=y$.
For example, the function $(y_1, y_2)\mapsto (y_1, y_1y_2)$ is injective because if $(x_1, x_1x_2) = (y_1, y_1y_2)$, then $(x_1,x_2) = (y_1,y_2)$.
We can do the same test for any function. Here, a transformation just means function. |
Inequality for expectation-value of commutator | From
$$|\langle \psi |[A,B]| \psi \rangle|^2 + |\langle \psi |\{A,B\}| \psi \rangle|^2 = 4|\langle \psi |AB| \psi \rangle|^2$$
You have
$$|\langle \psi |[A,B]| \psi \rangle|^2 =- |\langle \psi |\{A,B\}| \psi \rangle|^2 + 4|\langle \psi |AB| \psi \rangle|^2$$
Which means that
$$|\langle \psi |[A,B]| \psi \rangle|^2 \leq 4|\langle \psi |AB| \psi \rangle|^2$$
Now, using Cauchy-Schwarz, the inequality you wanted to prove follows:
$$|\langle \psi |[A,B]| \psi \rangle|^2 \leq 4|\langle \psi |AB| \psi \rangle|^2 \leq 4\langle \psi |A^2| \psi \rangle\langle \psi |B^2| \psi \rangle$$ |
Divergence of $ \sum_n\sqrt{2\pi}^{-1}{n^{-r^2/2}}\left(\frac{1}{r\sqrt{\log n}} - \frac{1}{r^3(\log n)^{3/2}}\right)$. | Sketch: There are constants and other stuff floating around that make this series seem difficult, but the problem amounts to showing if $r^2/2\le 1,$ then
$$\sum_{n=2}^{\infty} \frac{1}{n^{r^2/2}(\ln n)^{1/2}}=\infty.$$
The terms above are at least $1/[n^1(\ln n)^{1/2}].$ So we're done if we show
$$\sum_{n=2}^{\infty} \frac{1}{n(\ln n)^{1/2}} = \infty.$$
A nice way to finish is to use the integral test; the Cauchy condensation test will also work. |
How do i prove that Mobius transformation is continuous at its pole? | It won't work if $a/b=c/d$ for reasons you can figure out.
Suppose you prove $az+b$ approaches some number in $\mathbb C$ other than $0$ as $z\to-d/c$.
Then the problem is to prove that $1/(cz+d)\to\infty$ as $z\to-d/c$.
$$
\frac{1}{cz+d} = \frac 1 c \cdot \frac{1}{z+\frac d c}. \tag 1
$$
(Maybe you can also think about what happens if $c=0$, since $c$ is in two denominators here.)
Given $R>0$, you need to find $\delta>0$ such that the second fraction on the right in $(1)$ has modulus bigger than $R$ if $0<|z+\frac d c |<\delta$. Maybe $\delta=1/R$ works. |
The complex structure of a complex torus | The number $\tau$ is a global constant associated to your torus $X$, and is (essentially) uniquely determined by the "complex structure" of $X$ if you insist on $|\tau|\geq1$, $0\leq{\rm Re}(\tau)\leq{1\over2}$. This number $\tau$, e.g., governs the length of closed geodesics on $X$.
Contrasting this the matrix $J$ is a purely local object that somehow encodes the CR equations of the admissible conformal parameters $z$. |
Definition of $ p $-supersoluble group. | It means that every chief factor of the group either has order $p$ or it has order coprime to $p$. So a (finite) group is supersoluble if and only if it is $p$-supersoluble for all primes $p$. |
Prove that $\{1, 11, 1001,\dots\}$ is an irregular language | The pumping lemma is your friend as it states that if $L$ were regular, there'd be a length $n$ such that all $w\in L$ with $|w|>n$ can be split into $w=xyz$ with $y\ne\epsilon$ such that $xy^kz\in L$ for all $k\in\mathbb N_0$. We do not know $n$, but consider one such word $w$ and let $a,b,c$ be the numbers represented by the binary strings $x,y,z$ and let $N_k$ be the number rpresented by $xy^kz$.
Then $$N_{k+1}=(N_k-c)\cdot 2^{|y|}+b\cdot 2^{|z|}+c$$
and as $N_k\to\infty$ (because $x$ begins with a $1$), we have $\frac{N_{k+1}}{N_k}\to 2^{|y|}$ as $k\to\infty$. On the other hand, $\frac{N_{k+1}}{N_k}$ is always a power of $3$. Thus for sufficiently big $k$, we find $m$ with $|3^m-2^{|y|}|<\frac12$, i.e. $3^m=2^{|y|}$. This is only possible if $m=|y|=0$, contradicting $y\ne\epsilon$. |
Better way of getting segment length from triangle with offset and extension | Welcome to MSE! I got the same result for $X_2$. You can sort of simplify it into
$$X_2=\frac TB\left(\sqrt{A^2+B^2}+A\right).$$
For $X_1$, I think you made an error in your calculations. I found that $X_1=X_{2a}-X_{2b}$, so
$$X_1=\frac TB\left(\sqrt{A^2+B^2}-A\right).$$
I don't think there is any way to get simpler expressions of these two lengths. |
Calculating the mass of a surface? | If you have a small section of your surface of area $\Delta S$ where the density doesn't change much, then its mass is going to be roughly the density times the area, i.e. $g \Delta S$. Summing over all such small pieces gives us an approximate total mass of $\sum g\Delta S$, and taking the limit as $\Delta S \rightarrow 0$ leaves us with:
Mass = $\displaystyle \int_S g \, dS$ |
Proving the differentiability of a function in chunks | Somewhat tacky to hijack comments from analysis. However, OP's post-query comment indicates work.
To prove: $\lim_{x\to 0} \frac{f(x) - f(0)}{x} = 0.$
For $x \neq 0$,
$$\frac{f(x) - f(0)}{x} = \begin{cases} x & x \text{ rational} \\ 0 & x \text{ irrational.} \end{cases}$$
Thus, for $x \neq 0$, $|\frac{f(x) - f(0)}{x}| \leq |x|.$
For $\epsilon > 0,$ choose $\delta = \epsilon.$
Then, $0 < |x| < \delta = \epsilon ~\Rightarrow~ |\frac{f(x) - f(0)}{x}| < \epsilon. $ |
What are the minimum requirements on X and Y for $\mathbb P(\vert X \vert \mathbb P(\vert X +Y\vert < \epsilon)$ to hold? | If you have $$\mathbb P(|X|<\epsilon) > \mathbb P(|X+c| < \epsilon)$$ for any $c>0$, then you also have the inequality you want. For some $Y$, maybe you can get the inequality you want without also having this inequality for every $c>0$, but I suspect this would lead to very awkward conditions on $Y$.
One sufficient condition is for $X$ to have a unimodal distribution centered at $0$. (That is, the PDF of $X$ is increasing at negative values and decreasing at positive values.) In fact, we don't need the "decreasing at positive values" half of being unimodal. |
Why is the term for $k_0=0$ the only remaining one for this Discrete Fourier Transform? | By calculating the DFT of $e^{\frac{i 2 \pi n k_0}{N}}=\left(\frac{1}{\omega_N}\right)^{n k_0}=\omega_N^{-n k_0}$, we obtain
$\sum_{r=0}^{N-1} e^{\frac{i 2 \pi r k_0}{N}}\omega_{N}^{rk}=\displaystyle\sum_{r=0}^{N-1} \omega_N^{-r k_0}\omega_{N}^{rk}$
$=\sum_{r=0}^{N-1} \omega_{N}^{r(k-k_0)}$
We have
$\delta_N[k-k_0]=\displaystyle\frac{1}{N}\sum_{r=0}^{N-1} e^{-i 2\pi \frac{(k-k_0)}{N}r}$
$=\displaystyle\frac{1}{N}\sum_{r=0}^{N-1} \omega_{N}^{(k-k_0)r}$
So the first sum:
$\displaystyle\sum_{r=0}^{N/2-1} \omega_{N/2}^{kr}$
$=\frac{N}{2}\frac{1}{N/2}\displaystyle\sum_{r=0}^{N/2-1} \omega_{N/2}^{kr}$
$=\frac{N}{2}\delta_{N/2}[k]$
(setting $k_0=0$) |
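A quick numerical check of the spike identity (my own sketch, using the convention $\omega_N = e^{-i 2\pi/N}$ from above): the DFT of $\omega_N^{-n k_0}$ should be $N\,\delta_N[k-k_0]$.

```python
import cmath

def dft(x):
    """DFT with kernel omega_N^{nk} = exp(-2j*pi*n*k/N), matching the
    convention omega_N = e^{-i 2 pi / N} used above."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

N, k0 = 8, 3
x = [cmath.exp(2j * cmath.pi * n * k0 / N) for n in range(N)]  # omega_N^{-n k0}
X = dft(x)

# The spectrum is N * delta_N[k - k0]: a single spike of height N at k = k0.
for k in range(N):
    expected = N if k == k0 else 0.0
    assert abs(X[k] - expected) < 1e-9
print([round(abs(v), 6) for v in X])   # spike of size 8 at index 3, zeros elsewhere
```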
Interior point and Minkowski functional | Unless you make additional assumptions on the convex set, this is not true in general. Consider the set $$A = \{ (x,0) \colon x \in [-1,1] \}$$ in $\mathbb R^2$. The corresponding Minkowski functional $p_A$ evaluates to $p_A(x_0) = 1/2$ at $x_0 = (1/2, 0)$, yet $x_0$ is not an interior point of $A$ (since $A$ has no interior). |
$\lim_{n\rightarrow\infty}\frac{a_{n}}{n}=1$ implies that $\lim_{n\rightarrow\infty}\sup_{0\leq k\leq n}\frac{|a_{k}-k|}{n}=0$ | Let $\varepsilon>0$ be arbitrary. Then there exists $N_{1}\in\mathbb{N}$
such that $\left|\frac{a_{n}-n}{n}\right|<\varepsilon$ whenever $n\geq N_{1}$.
Let $M=\max_{0\leq k<N_{1}}|a_{k}-k|$. Choose $N_{2}\in\mathbb{N}$
such that $\frac{M}{N_{2}}<\varepsilon$. Let $N=\max(N_{1},N_{2})$.
Let $n\geq N$ be arbitrary. Let $0\leq k\leq n$. If $0\leq k<N_{1}$,
we have that $\left|\frac{a_{k}-k}{n}\right|\leq\frac{M}{N}<\varepsilon$.
If $N_{1}\leq k\leq n$, then $\left|\frac{a_{k}-k}{n}\right|\leq\left|\frac{a_{k}-k}{k}\right|<\varepsilon$
because $k\geq N_{1}$. It follows that $\sup_{0\leq k\leq n}\left|\frac{a_{k}-k}{n}\right|<\varepsilon$.
Hence $\lim_{n}\sup_{0\leq k\leq n}\left|\frac{a_{k}-k}{n}\right|=0$. |
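To illustrate the statement numerically (with the hypothetical sequence $a_n = n + \sqrt{n}$, which satisfies $a_n/n \to 1$): here the supremum in question equals $\sqrt{n}/n = n^{-1/2}$, which indeed tends to $0$.

```python
import math

def sup_dev(n, a):
    """sup_{0 <= k <= n} |a(k) - k| / n."""
    return max(abs(a(k) - k) for k in range(n + 1)) / n

a = lambda k: k + math.sqrt(k)   # a_n / n -> 1

for n in [10, 100, 10_000]:
    print(n, sup_dev(n, a))      # decreases like 1/sqrt(n)
```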
Implications of some sort of $l^2$/uniform convergence | Let $f(x) := 0$ for all $x \in \mathbb{N}$, $v=1$ and define
$$f_n(x) := \begin{cases} 0 & x=1 \\ \frac{x-1}{n} & 2 \leq x \leq n+1 \\ 1 & x>n +1\end{cases}.$$
Then
$$\begin{align*} \mathcal{F}(f_n-f) + o_v(f_n-f) &= \mathcal{F}(f_n) = \sum_{k=1}^{\infty} (f_n(k+1)-f_n(k))^2 = n \cdot \frac{1}{n^2} = \frac{1}{n} \to 0, \end{align*}$$
i.e. $f_n \to f$ in $D$. On the other hand, $$1 = \lim_{n \to \infty}f_n(n) \neq \lim_{x \to \infty} f(x) = 0.$$ |
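The computation $\mathcal{F}(f_n) = 1/n$ can be checked directly (a sketch; the truncation bound $K$ is my own choice, and the truncation is exact once $K > n+1$ since the increments of $f_n$ vanish beyond that point):

```python
def F(f, K=10_000):
    """Sum of squared increments of f over the naturals, truncated at K."""
    return sum((f(k + 1) - f(k)) ** 2 for k in range(1, K))

def make_fn(n):
    def fn(x):
        if x == 1:
            return 0.0
        if x <= n + 1:
            return (x - 1) / n
        return 1.0
    return fn

for n in [1, 2, 5, 10, 100]:
    print(n, F(make_fn(n)))   # equals 1/n (up to rounding)
```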
What would be the value of y-intercept for the following scenario | Any pair of coordinates $(x,y)$ that does not satisfy the above inequality must satisfy the negation of the statement (opposite), i.e., $-6x\ge y+a$. Another way of writing this inequality is $y\le -6x-a$.
If you draw this inequality $y\le -6x-a$ for various values of $a$, you will see that the only possible values for $a$ that will include the origin are when $a\le 0$. |
$\left(\frac{a}{n}\right)=1$ does not necessarily imply that, $a$ is a quadratic residue $\mod n$ | For $\left(\frac{a}{b}\right)$ to be a Legendre symbol, $b$ must be an odd prime. Legendre symbols do indeed determine whether $a$ is a quadratic residue mod $b$.
What you have with $b=15$ is a Jacobi symbol, which is a generalization of Legendre symbol. As you have discovered, if the Jacobi symbol is $1$, that tells us nothing, since it could be the product of an even number of $-1$'s. However if the Jacobi symbol is $-1$, then $a$ is a quadratic NONresidue mod $b$.
More info, as requested. Write $\left(\frac{a}{b}\right)=\left(\frac{a}{p_1}\right)\left(\frac{a}{p_2}\right)\cdots \left(\frac{a}{p_k}\right)$ where $b=p_1p_2\cdots p_k$ is a factorization into primes. If any of the Legendre symbols are $-1$, then $a$ is a nonresidue mod $b$. If all of the Legendre symbols are $1$, then $a$ is a residue mod $b$. |
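For concreteness, here is a from-scratch Jacobi-symbol sketch (the standard binary algorithm via quadratic reciprocity; my own illustration, not claiming it is the method used above). It confirms that $\left(\frac{8}{15}\right)=1$ even though $8$ is a nonresidue mod $15$, while a symbol of $-1$ always certifies a nonresidue.

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:            # (2/n) = -1 iff n ≡ 3, 5 (mod 8)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                  # reciprocity: flip sign iff both ≡ 3 (mod 4)
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

residues_mod_15 = {x * x % 15 for x in range(15)}
print(jacobi(8, 15))          # 1, yet...
print(8 in residues_mod_15)   # False: 8 is a nonresidue mod 15

# The -1 direction is reliable: a Jacobi symbol of -1 forces a nonresidue.
for a in range(1, 15):
    if jacobi(a, 15) == -1:
        assert a not in residues_mod_15
```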
Path connected subsets of disjoint union | If it isn't entirely clear, you should imagine gluing the disk along the map specified by $f$.
> A is path connected. If I could show that $A$ is homeomorphic to $X\cup_fD\setminus \{(0,0)\}$...
You're making this too complicated. If you have two points, both of which are in $X$, we're done (because $X$ is path connected). Similarly, if they're both in $D\setminus \{(0,0)\}$, we're also done. If one is in $X$ and the other is in $D\setminus \{(0,0)\}$, drive to the boundary of $D\setminus \{(0,0)\}$ (which is also in $X$ by the identifications), then drive to the point in $X$ via path connectedness.
> $A \cap B$ is path connected...
Looks good to me.
> $A$ is open.
Choose a point $a \in A$. If $a \in D\setminus \{(0,0)\}$, it's easy to see that there is some open $U \ni a$ so that $a \in U \subseteq A$. If the point is not in $D\setminus \{(0,0)\}$, choose $U$ to be all of $X$, along with a small open strip around the boundary of $D$. By quotient topology shenanigans, $U$ is open, and we have $a \in U \subseteq A$.
Then, for all $a \in A$, there is some open $U$ so that $a \in U \subseteq A$, meaning $A$ is open.
> Assuming you're using this for fundamental group calculations....
The way you want to think about this result is that whenever you "glue a disk" to a space you already know, you are effectively killing the homotopy type of the attaching map, as you can homotope the loop across the disk to make it null-homotopic. This fact is very convenient for calculating the fundamental group if you know the CW structure. |