title | upvoted_answer
---|---
Poisson process. Compute $E[N_z-N_t | N_s=3]$ | As Henry pointed out, $E[N_z-N_s] = E[N_{z-s}] = \lambda (z-s)$ (assuming $\lambda$ is the intensity). Also $E[N_s-N_t | N_s=3] = \frac{3(s-t)}{s}$ (since given $N_s=3$ the arrival times are uniformly distributed in $[0,s]$). By independence of increments, adding the two pieces gives $E[N_z-N_t \mid N_s=3] = \lambda(z-s) + \frac{3(s-t)}{s}$ for $t<s\le z$. |
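A quick Monte Carlo sanity check of both pieces (the parameter values below are arbitrary, chosen just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, t, s, z = 2.0, 1.0, 3.0, 5.0   # hypothetical intensity and times t < s < z

# Given N_s = 3, the three arrival times are i.i.d. Uniform(0, s).
arrivals = rng.uniform(0, s, size=(200_000, 3))
print((arrivals > t).sum(axis=1).mean(), 3 * (s - t) / s)          # both ≈ 2.0

# N_z - N_s is an independent Poisson(lam * (z - s)) increment.
print(rng.poisson(lam * (z - s), 200_000).mean(), lam * (z - s))   # both ≈ 4.0
```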
True/False: If the Wronskian of n functions vanishes at all points on the real line then these functions must be linearly dependent in R. | The answer is no. For instance, the functions $f_1(x) = x^2$ and $f_2(x) = x \cdot |x|$ are continuous with continuous derivatives, have a Wronskian that vanishes everywhere, but fail to be linearly dependent.
The Wronskian Wikipedia page has a good discussion about this. Note that if the set of functions considered is analytic, then their dependence over an interval is indeed equivalent to their having a Wronskian that is identically zero. |
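For the skeptical reader, the counterexample is easy to verify on each half-line separately (a sympy sketch; on $x>0$ we have $x\,|x| = x^2$ and on $x<0$ we have $x\,|x| = -x^2$):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f1 = x**2
for f2 in (x**2, -x**2):   # x*|x| equals x**2 for x > 0 and -x**2 for x < 0
    W = sp.simplify(f1 * sp.diff(f2, x) - f2 * sp.diff(f1, x))
    print(W)   # 0 on each half-line, so the Wronskian vanishes on all of R
```

Yet no single constant $c$ satisfies $f_2 = c f_1$ on both half-lines, so the pair is linearly independent.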
Determine if the statement below is (always) true. If true, justify your answer. If false, give a counterexample. | Assuming $A,B,C \in GL_{n}(\mathbb{R})$,
$$\det(B) ~=~ \det(B)\bigg[\frac{\det(A)\cdot\det(C)}{\det(C)\cdot\det(A)}\bigg] ~=~ \frac{\det(ABC)}{\det(CA)}$$ |
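A numerical spot check of the identity (random matrices of this size are invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))
lhs = np.linalg.det(B)
rhs = np.linalg.det(A @ B @ C) / np.linalg.det(C @ A)
print(np.isclose(lhs, rhs))   # True
```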
Proving the convergence of $a_n = \frac{n}{n+\sqrt n}$ | Show that it is increasing and bounded.
To show that it is increasing, study the function
$$x \mapsto \frac{x}{x+\sqrt{x}} = \frac{1}{1+\frac{1}{\sqrt{x}}}$$
Boundedness is trivial since $n \le n + \sqrt{n}$. |
Compactness of a subspace of $L^{2}([0,1])$ | Hint: First, let $E$ be the set of $f\in L^2([0,1])$ such that $|f|\le1$. Then $E$ is not compact (hint: trig functions...)
Now, there exists a rectangle $[a,b]\times[c,d]$ contained in $\{(t,s):0\le t\le 1,\sin(t)\le s\le t\}$. Hence, given a sequence $f_n$ in $E$ with no convergent subsequence you can construct a sequence in $K$ with no convergent subsequence, by simply... |
Finding number of cycles in permutation corresponding to translation in a group [University Olympiad] | $\mathbb{Z}_p^k$ acts on $F$ by $a \cdot f$ = $f(x + a).$ This satisfies $(a + b) \cdot f(x) = a \cdot (b \cdot f(x))$ as $f(x + a + b) = b \cdot f(x + a).$ Furthermore, $0 \cdot f(x) = f(x).$ Hence, $\mathbb{Z}_p^k$ has a well defined group action on $F.$
Fix $a \in \mathbb{Z}_p^k \setminus \{0\}.$ Note that any group acts on itself by translation; consider this action of $\mathbb{Z}_p^k$ on itself. Because $a \neq 0,$ the number of orbits of the subgroup $\langle a \rangle$ in $\mathbb{Z}_p^k$ is $p^{k - 1}.$ To see this, we can use Burnside's lemma, which tells us the number of orbits is $\frac{1}{|\langle a \rangle|} \sum_{x \in \mathbb{Z}_p^k} |\operatorname{Stab}_{\langle a \rangle} (x)| = \frac{1}{p} p^k = p^{k - 1},$ as the order of $a$ in $\mathbb{Z}_p^k$ is $p$ and $\operatorname{Stab}(x) = \{0\}$ for each $x$.
Now if all the elements in any given orbit of $\langle a \rangle$ in $\mathbb{Z}_p^k$ map to the same element under a given function $f,$ then the orbit $\langle a \rangle \cdot f = \{f\}.$ This is clear, as the effect of $a \cdot f = f(x + a)$ is simply to change the image of $x$ to the image of another member of its orbit. If instead, for a given $f,$ two elements of some orbit of $\langle a \rangle$ map to different elements, then the orbit of $f$ has size $p.$
Hence it remains to count the functions that send every element of each orbit of $\langle a \rangle$ in $\mathbb{Z}_p^k$ to the same element, and those that do not. Since there are $p^{k - 1}$ orbits, the former case allows $n^{p^{k - 1}}$ distinct choices, and the latter has $n^{p^k} - n^{p^{k - 1}}$ choices, as there are $n^{p^k}$ functions in $F$ altogether. Hence, by our reasoning, the number of cycles, i.e. the number of distinct orbits of $\langle a \rangle$ acting on $F,$ is $n^{p^{k - 1}} + (n^{p^k} - n^{p^{k - 1}})/p$.
You can use the fact that $p \mid n^{p^k} - n^{p^{k - 1}}$ and use induction. Maybe there is a cleaner and more direct argument, but I don't know it...
The main thing in formality for proof writing is making sure nothing is hidden and you have clear logical steps. I edited my answer to show you how I would write the proof. I'm by no means an expert at writing proofs but this is the main idea behind proof writing and it should be the same for olympiads. |
Weakened versions of Word and Isomorphism Problems in group theory | The answer to both of your questions is yes. In the first case, either the algorithm that outputs "Yes, $w=_G 1$", or the algorithm that outputs "no, $w \ne_G 1$" correctly determines whether $w =_G 1$. The fact that we might not know which of these is the correct algorithm does not alter the fact that such an algorithm exists.
A similar argument applies to the second question (and also to questions like "Is Goldbach's Conjecture true"). |
$L^1$ limit of indicator functions must be an indicator | Hint: if $\{x: \text{dist}(f(x), \{0,1\}) > \epsilon\}$ has measure $\eta > 0$, then
$\|f - \chi_{E_n}\|_1 \ge \ldots$. |
Lower bounds for integer factorization | One way to compute $p_a$ given $n$,$a$,$b$ is to factor $n$. If $a$ and $b$ are extremely close together (like $b=a+1$) then the fastest method may well be a simple Fermat factorization. If $a$ and $b$ are roughly of the same size, then this would fall under a general-purpose algorithm like Number Field Sieve, which takes
$O(\exp((\log n)^{1/3 + \epsilon}))$ time.
However, if $b$ is substantially larger than $a$ (say, on the order of $a^2$, perhaps) then $n$ will also be substantially larger than $a$, and it may be more efficient to use something like Lenstra ECM factorization, which is faster for small $a$.
At some point (when $b$ is much larger than $a$, maybe exponentially larger) there would be a crossover where it makes sense to ignore $n$ and just focus on computing the $a$th prime. Computing the $a$th prime function is essentially the same as computing $\pi(x)$, the prime-counting function: given either function, you can use binary search to compute the other, so that the running times are at most a $O(\log a)$ factor apart.
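As a sketch of that binary-search reduction (using sympy's `primepi`; the helper name is mine):

```python
from sympy import primepi

def nth_prime(n):
    """Smallest x with primepi(x) >= n; that x is the n-th prime (1-indexed)."""
    lo, hi = 2, 2
    while primepi(hi) < n:        # grow an upper bound geometrically
        hi *= 2
    while lo < hi:                # binary search on the monotone function primepi
        mid = (lo + hi) // 2
        if primepi(mid) < n:
            lo = mid + 1
        else:
            hi = mid
    return lo

print(nth_prime(10))   # 29
```

Each probe costs one evaluation of $\pi(x)$, and $O(\log a)$ probes suffice, which is the claimed overhead.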
Algorithms for computing the prime-counting function $\pi(x)$ are pretty well-studied. In practice this is done by modern forms of the Meissel-Lehmer algorithm which runs in time $O(x^{2/3 + \epsilon})$, which is faster than computing all primes up to $x$ at least for larger $x$, but notably slower than factoring methods for numbers of similar size. See Deleglise and Rivat's paper for more details.
In theory it is possible to compute $\pi(x)$ using high-precision complex arithmetic in time $O(x^{1/2 + \epsilon})$, but this may not be very practical. You might find some informative discussion of this method in William Galway's thesis. |
Simple probability problem | HINT:
Break it into two parts--first find the probability of getting heads with a "guaranteed" coin, then the probability of getting heads with a fair coin. You can do one or the other, so you add those two probabilities.
Answer:
I'm not entirely sure about the sample space to consider, but the way I would approach this problem is as follows:
You can pick any one of the $m$ coins that are double-headed, and be guaranteed a head. So, the probability of getting heads this way is the number of double-headed coins over the total number of coins:
$$\displaystyle\frac{m}{2m+1}$$
You can pick any one of the remaining $m+1$ coins, and have $50:50$ chances of getting heads or tails. So, we first find the probability of picking a fair coin, then multiply it by the chances of getting heads with that coin:
$$\displaystyle\frac{m+1}{2m+1}\cdot\frac{1}{2}$$
So, the overall probability of getting a head is:
$$\displaystyle\underbrace{\frac{m}{2m+1}}_{\text{double headed}} + \underbrace{\frac{1}{2}\cdot\frac{m+1}{2m+1}}_{\text{flip a fair head}} = \frac{3m+1}{4m+2}$$
We can tell that $\frac{14}{19}$ won't fit this form, but perhaps try a non-reduced form of the fraction. For example: $\frac{28}{38},\, \frac{42}{57},\, \frac{56}{76},\ldots$ |
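A quick search over non-reduced forms confirms this (Python's `Fraction` reduces automatically, so the equality test works):

```python
from fractions import Fraction

target = Fraction(14, 19)
print([m for m in range(1, 100) if Fraction(3*m + 1, 4*m + 2) == target])
# [9]  -- i.e. 28/38 = 14/19, so m = 9 coins of each kind fits
```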
Expected value when there are two random variables | Simply estimate the expected value:
$$\mathbb{E}(\#_B)=1\cdot ab+2\cdot ab(1-b)+3\cdot ab(1-b)^2+...$$
or
$$\mathbb{E}(\#_B)=\lim\limits_{n\rightarrow\infty}\frac{ab}{1-b}\sum_{i=1}^n i(1-b)^i.$$
This sum is the so-called low-order polylogarithm $\operatorname{Li}_{-1}$, and we know that
$$\lim\limits_{n\rightarrow\infty}\sum_{i=1}^n i(1-b)^i=Li_{-1}(1-b)=\frac{1-b}{b^2}.$$
Hence the result: $\mathbb{E}(\#_B)=\frac{ab}{1-b}\cdot\frac{1-b}{b^2}=\frac{a}{b}$. |
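A numerical check of the polylogarithm identity for an arbitrary value of $b$:

```python
b = 0.3
s = sum(i * (1 - b)**i for i in range(1, 5000))   # truncated series
print(s, (1 - b) / b**2)                          # both ≈ 7.7778
```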
Limits preserve weak inequalities | $|f(x)-A|<\epsilon \Rightarrow A-\epsilon < f(x) < A+\epsilon$
If $A<0$ then choose $\epsilon$ less than $|A|$ which forces $f(x)<0$. |
No open set in $\mathbb{R}^n$ has measure zero in $\mathbb{R}^n$? | It's a common "mistake" / "simplification" in mathematics to ignore obvious exceptions to statements, both in theorems and in exercises. At any rate, it says "non-trivial open set" in the paperback edition. |
Difference between $\limsup_{z\to a}\lvert f(z)\rvert$ and $\limsup \lvert f(z_n)\rvert$ | No, $\limsup_{n\to\infty} f(z_n)$ considers only values of $f$ for the values of $z$ taken by the sequence $z_n$ while $\limsup_{z\to a} f(z)$ considers all values of $z$ in neighborhoods of $a$.
Take for example $f=(1-\chi_\mathbb Q)$ that is $0$ for rational numbers and $1$ otherwise. And let $z_n = 1/n$. Then $f(z_n) = 0$ (since $1/n$ is rational) so $\limsup f(z_n)=0$ however $\limsup_{z\to0} f(z) = 1$ since there exists non-rational numbers arbitrarily near $0$ (and for those $f$ is $1$).
Of course this function is not very analytic, but it at least shows the difference... |
How many 3 character combinations can be made using letters AND numbers? | Well, there are $36$ choices ($26$ letters plus $10$ digits) for each character, and they're all independent: $36\times36\times36 = 46656$. |
Proving the Column Correspondence Principle | To answer this, you probably should also tell us a bit about how your professor describes how to transform a matrix to its reduced row echelon form? That way the answer may be a little more meaningful to you.
Here is a sketch of how I would explain it to my students:
Let's say $ A $ is the original matrix and $R $ the reduced echelon form.
The reduction to reduced echelon form can be viewed as the application (multiplication) of a sequence of matrices to $ A $:
$
D L_{n-1} \cdots L_0 A = R.
$ The matrices $ L_i $ are called Gauss transforms and the matrix $ D $ is diagonal. Importantly, those matrices are square and invertible (nonsingular).
Next, let's take the linearly independent columns of $ R $ and create a matrix $ \widetilde R $ with those and let's take the corresponding columns of $ A $ and make a matrix $ \widetilde A $ with those.
Then $ D L_{n-1} \cdots L_0 \widetilde A = \widetilde R $.
Now, if the columns of $ \widetilde R $ are linearly independent, then we know that $ \widetilde R x = 0 $ implies that $ x = 0 $. Let's see what this tells us about $ \widetilde A $: Assume $ \widetilde A x = 0 $. Then $ \widetilde A x =
L_0^{-1} \cdots L_{n-1}^{-1} D^{-1} \widetilde R x $ which implies that $ \widetilde R x = 0 $. But that means that $ x = 0 $. So, if columns in $ R $ are linearly independent, then the corresponding columns in $ A $ are linearly independent. The converse can be proven similarly.
Now, if your professor did not introduce the concept of Gauss transform, or relate the reduction to echelon form to multiplication by matrices, then it becomes a bit harder to answer your question... |
Using polar coordinates to solve a double integral | You're on the right track, but by the multivariate change of variables formula you need to account for the Jacobian matrix of this change of variables and tweak your bounds a bit, //i.e.//
\begin{align*}
\iint_Rx^2y^2\,dxdy &= \iint_{\{(r,\theta)\,:\,0\le r\le 1,\ 0\le\theta\le2\pi\}} (ar\cos\theta)^2(br\sin\theta)^2\left|\begin{pmatrix}\frac{\partial}{\partial r}(ar\cos\theta) & \frac{\partial}{\partial r}(br\sin\theta) \\ \frac{\partial}{\partial \theta}(ar\cos\theta) & \frac{\partial}{\partial \theta}(br\sin\theta)\end{pmatrix}\right|\,dr d\theta\\
&= \int_0^{2\pi}\int_0^1 a^3b^3r^5\cos^2\theta\sin^2\theta\,drd\theta\\
&= \ldots
\end{align*}
Note here that you can make the $\theta$ bound range from $0$ to $2\pi$ (not just $\pi$) to account for the whole disk $R$ rather than think about negative $r$! |
If the triangle A,B,C has a right angle at the corner A, what is x? | $$\vec{AB}(4-(-2),9-(-2),-10-3)$$ or
$$\vec{AB}(6,11,-13).$$
$$\vec{AC}(0,x+2,-2).$$
By the given $$\vec{AB}\cdot\vec{AC}=0.$$
Thus, $$6\cdot0+11(x+2)+(-13)(-2)=0,$$ which gives $$x=-\frac{48}{11}.$$ |
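A one-line check of the computed value:

```python
import numpy as np

x = -48 / 11
AB = np.array([6.0, 11.0, -13.0])
AC = np.array([0.0, x + 2, -2.0])
print(AB @ AC)   # 0.0 (up to rounding), so the angle at A is right
```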
Gradient and Hessian of a function defined in terms of matrix inverse | First, let's define $$Q(x)=P + D(x) = P + \mathop{\textrm{diag}}(x) \otimes I_m$$ where $\mathop{\textrm{diag}}$ maps the vector $x$ to the corresponding diagonal matrix, and $\otimes$ denotes the Kronecker product. This makes it a bit simpler to see that
$$\frac{\partial Q(x)}{\partial x_i} = \frac{\partial D(x)}{\partial x_i}
= e_ie_i^T \otimes I_m = (e_i \otimes I_m) (e_i^T \otimes I_m)$$
where $e_i$ is the unit vector with a $1$ in the $i$th element. Expressed without the Kronecker product, $e_ie_i^T\otimes I_m$ is the block matrix with $I_m$ in the $(i,i)$ block position and $0_m$ everywhere else; and $e_i \otimes I_m$ is the $i$th block column of that matrix.
Armed with this information, we proceed per the aforementioned Matrix Cookbook. From equation 40, we have
$$\begin{aligned}
\frac{\partial Q(x)^{-1}}{\partial x_i} &= -
Q(x)^{-1} (e_ie_i^T \otimes I_m) Q(x)^{-1} \\
&= - Q(x)^{-1} (e_i \otimes I_m) (e_i^T \otimes I_m) Q(x)^{-1}
\end{aligned}$$
We can obtain a very convenient interpretation of this quantity if we
apply a corresponding block structure to $Q(x)^{-1}$. Let's call this quantity $R$, so that
$$R \triangleq Q(x)^{-1} = \begin{bmatrix}
R_{11} & R_{12} & \dots & R_{1n} \\ R_{21} & R_{22} & \dots & R_{2n} \\
\vdots & \vdots & \ddots & \vdots \\ R_{n1} & R_{n2} & \dots & R_{nn}
\end{bmatrix}$$
With this structure in place, we see that right-multiplying $Q(x)^{-1}$ by $e_i \otimes I_m$ obtains the $i$th block column. Let's denote that column by $R_{i:}$.
This leads us to this first derivative:
$$\begin{aligned}
\frac{\partial f(x)}{\partial x_i} &= -
v^TQ(x)^{-1} (e_ie_i^T \otimes I_m) Q(x)^{-1}v \\
&= - v^TQ(x)^{-1} (e_i \otimes I_m) (e_i^T \otimes I_m) Q(x)^{-1}v \\
&= - v^T R_{i:} R_{i:}^T v = -\|R_{i:}^T v\|_2^2\end{aligned}$$
The second partial derivatives are
$$\begin{aligned}
\frac{\partial^2 f(x)}{\partial x_i x_j} &=
2 v^TQ(x)^{-1} (e_ie_i^T \otimes I_m)Q(x)^{-1} (e_je_j^T \otimes I_m) Q(x)^{-1}v \\
&= 2 v^TQ(x)^{-1} (e_i \otimes I_m)(e_i^T \otimes I_m)Q(x)^{-1} (e_j\otimes I_m)(e_j^T \otimes I_m) Q(x)^{-1}v \\
&= 2 v^T R_{i:} R_{ij} R_{j:}^T v
\end{aligned}$$where $R_{ij}$ is the $(i,j)$ block of $Q(x)^{-1}$. |
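A finite-difference sanity check of the gradient formula, assuming (as the block notation suggests) that $P$, and hence $Q(x)$, is symmetric; the sizes and data below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
N = n * m
G = rng.standard_normal((N, N))
P = G @ G.T + N * np.eye(N)                 # symmetric positive definite
v = rng.standard_normal(N)
x = rng.random(n) + 1.0

Q = lambda x: P + np.kron(np.diag(x), np.eye(m))
f = lambda x: v @ np.linalg.solve(Q(x), v)

R = np.linalg.inv(Q(x))
grad = np.array([-np.sum((R[:, i*m:(i+1)*m].T @ v)**2) for i in range(n)])

eps = 1e-6
fd = np.array([(f(x + eps*np.eye(n)[i]) - f(x - eps*np.eye(n)[i])) / (2*eps)
               for i in range(n)])
print(np.allclose(grad, fd, atol=1e-5))     # True
```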
Comparing sample and population standard deviation | Wikipedia has a discussion of this. For a large sample, your sample standard deviation is very close to the population standard deviation. I would be more concerned about bias from taking the first samples. It would be better to take a random sample in case the first data is somehow different from the rest. |
Writing an expression of the form $X^2 − A^2$ | '$x$' isn't the same as '$X$', so $(x+4)^2 − 7$ can be written in the form $X^2 - A^2$. Particularly, $(x+4)$ can be considered as $X$, and $\sqrt{7}$ can be considered as $A$. So, the expression in the designated form would be $(x+4)^2 - (\sqrt{7})^2$. |
What is the probability that the sum of n numbers between 0 and k is less than k? | Discrete Case
The number of ways to sum $n$ integers in $[0,k]$ and get $m$ is $\left[x^m\right]\left(x^0+x^1+x^2+\cdots+x^k\right)^n$. Thus, the number of ways to get $m\in[0,k]$ is
$$
\begin{align}
&\sum_{m=0}^k\left[x^m\right]\left(x^0+x^1+x^2+\cdots+x^k\right)^n\\
&=\sum_{m=0}^k\left[x^m\right]\left(\frac{1-x^{k+1}}{1-x}\right)^n\\
&=\sum_{m=0}^k\left[x^m\right]\sum_{j=0}^n(-1)^j\binom{n}{j}x^{j(k+1)}
\sum_{i=0}^\infty(-1)^i\binom{-n}{i}x^i\\
&=\sum_{m=0}^k\left[x^m\right]\sum_{j=0}^n(-1)^j\binom{n}{j}x^{j(k+1)}
\sum_{i=0}^\infty\binom{n+i-1}{i}x^i\\
&=\sum_{m=0}^k\sum_{j=0}^n(-1)^j\binom{n}{j}\binom{n+m-j(k+1)-1}{m-j(k+1)}\\
&=\sum_{m=0}^k\binom{n+m-1}{m}\\
&=\binom{n+k}{k}\\
\end{align}
$$
Therefore, the probability is
$$
\frac1{(k+1)^n}\binom{n+k}{k}
$$
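A brute-force check of the discrete count for small parameters:

```python
from itertools import product
from math import comb

n, k = 3, 4
count = sum(1 for t in product(range(k + 1), repeat=n) if sum(t) <= k)
print(count, comb(n + k, k))       # 35 35
print(count / (k + 1)**n)          # the probability, 35/125
```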
Continuous Case
The continuous case is given by the volume of
$$
\sum_{j=1}^nx_j\le1
$$
where $x_j\ge0$, which is $\frac1{n!}$. The answer is independent of $k$ since the volumes of both the sample space and the success space are multiplied by $k^n$.
Note that the limit of the discrete case as $k\to\infty$ is
$$
\begin{align}
\lim_{k\to\infty}\frac1{(k+1)^n}\binom{n+k}{k}
&=\lim_{k\to\infty}\frac1{(k+1)^n}\frac{(k+n)(k+n-1)\cdots(k+1)}{n!}\\
&=\frac1{n!}
\end{align}
$$ |
$\lambda$-commutativity and commutativity of operators | One idea would be to consider Hilbert-Schmidt operators
$$
\mathcal B_2(\mathcal H)=\lbrace B\in\mathcal B(\mathcal H)\,|\,\operatorname{tr}(B^\dagger B)<\infty\rbrace\subset\mathcal B(\mathcal H)
$$
which form a Hilbert space under the scalar product $\langle A,B\rangle=\operatorname{tr}(A^\dagger B)$ (this can be seen as the "natural extension" of matrices to infinite dimensions in some sense). Then we can use the trace argument from finite dimensions: for $T,S\in\mathcal B_2(\mathcal H)$ the product $TS$ is trace class, so $\operatorname{tr}(TS)$ is well defined, and $TS=\lambda ST$ for some $\lambda\in\mathbb C$ together with $\operatorname{tr}(TS)\neq0$ implies
$$
\operatorname{tr}(TS)=\operatorname{tr}(\lambda ST)=\lambda \operatorname{tr}(ST)=\lambda \operatorname{tr}(TS)\ \Rightarrow\ \lambda=1
$$
so $T$ and $S$ commute.
As an introduction to the trace on infinite-dimensional Hilbert spaces, Hilbert-Schmidt operators etc. I recommend Chapter 3.4 of the book "Analysis Now" (1989) by Pedersen. |
Frechet Differentiation and Equivalent Norm examples | Theorem. Let $\mathrm{V}$ and $\mathrm{W}$ be two normed spaces, let $\| \cdot \|_1$ and $\| \cdot \|_a$ be two equivalent norms on $\mathrm{V}$ and, likewise, let $\| \cdot \|_2$ and $\| \cdot \|_b$ be a pair of equivalent norms on $\mathrm{W}.$ Suppose $f$ is defined on an open subset $\mathrm{A}$ of $\mathrm{V}$ into $\mathrm{W}$ and let $v$ be a point of $\mathrm{A}.$ Then, $f$ is differentiable at $v$ with respect to the pair $(\| \cdot \|_1, \| \cdot \|_2)$ if and only if it is differentiable at $v$ with respect to the pair $(\| \cdot \|_a, \| \cdot \|_b).$
Proof. There are constants $c, d, p, q$ all positive such that
$$c \| \cdot \|_1 \leq \| \cdot \|_a \leq d \| \cdot \|_1$$
and
$$p \| \cdot \|_2 \leq \| \cdot \|_b \leq q \| \cdot \|_2.$$
Then
$$\dfrac{c}{q} \dfrac{\|f(v + h) - f(v) - Th\|_b}{\|h\|_a} \leq \dfrac{\|f(v + h) - f(v) - Th\|_2}{\|h\|_1} \leq \dfrac{d}{p} \dfrac{\|f(v + h) - f(v) - Th\|_b}{\|h\|_a}$$
and the proof is complete.
How does this theorem imply the exercise? Easily: clearly $\| \cdot \|$ is equivalent to itself. |
Random Variable Problem with unrestricted Parameters Worded Problem | (1)- yes.
(2)- in the general case
$\begin{align}
\because \sum_{x=0}^\infty p (1-p)^x &= \frac{p}{1-(1-p)} & \forall\, 0<p\leq 1:\text{ the series converges}
\\ & = 1
\\[2ex]
\therefore P(X=x) &= p (1-p)^x \;:\; x\in\{0,1,2,3,\ldots\} & \text{is a probability mass function}
\\[2ex] \text{Then use the following:}
\\[2ex]
\mathsf{Var}[X] & := \mathsf{E}[X^2]-\mathsf{E}[X]\;^2 & \text{definition of variance}
\\[2ex]
\mathsf{E}[X] &= p \sum_{x=0}^\infty x(1-p)^x & \text{the expectation of }X
\\[1ex]
\mathsf{E}[X^2] & = p\sum_{x=0}^\infty x^2(1-p)^x & \text{the expectation of }X^2
\\[2ex]
\therefore \ldots
\end{align}$ |
Is this equation solvable? Bases | $16^0 - 10^0 =1-1 = 0$
$16-10 = 6$
And $16^2 - 10^2 = 156$
And $16^3 - 10^3 = 3096$
And $16^4 - 10^4 = 55,536$
And $16^5 - 10^5 = 948,576$
So you want to solve $55,536a + 3,096b + 156c + 6d + 0e = 400,000$ where $0 \le a,b,c,d,e \le 9$. There may not be a solution but if there is it is unique (except for $e$) as each coefficient is more than ten times the previous.
$400,000 = 7*55,536 + 11,248$
$11,248 = 3*3,096 + 1960$.
$1960 > 10*156$.
so there is no solution.
It's worth noting that $(16 -10)=6$ will always divide $16^k -10^k$ so there can only be a solution if $6|y$. As $6\not \mid 400,000$ there is no solution for $y = 400, 000$.
But having $y$ divisible by $6$ is not enough to guarantee there will be a solution. |
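A brute-force confirmation of the case analysis (the digit $e$ is free, since its coefficient is $0$):

```python
sols = [(a, b, c, d)
        for a in range(10) for b in range(10)
        for c in range(10) for d in range(10)
        if 55536*a + 3096*b + 156*c + 6*d == 400000]
print(sols)   # [] -- no digit assignment works, as argued above
```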
Law of total expectation: $\mathbb{E}( X | A )$ and $p_{A|B}(a|b)$ | I would have said something like $$\mathbb{E}(X | B=b) =\sum_{a \in dom(A)} \mathbb{E}(X | A=a, B=b) \;\Pr(A=a|B=b)$$ and this is not necessarily the same as your expression unless $\mathbb{E}(X | A=a, B=b)=\mathbb{E}(X | A=a)$, which would be the case if $X$ and $B$ were conditionally independent given $A$. |
Conditional Probability drug testing | However, the solution then states $Pr(E_1\mid F)Pr(E_2\mid F)=Pr(E_1\mid F)^2$ i.e $Pr(E_1\mid F)=Pr(E_2\mid F)$ Why is this? Surely the fact that the first test was positive, affects the probability that the second is also?
You need to be careful what is meant by event $E_2$. It helps to assume the second test will be done even if the first test was negative. Then $E_1$ and $E_2$ are conditionally independent given $F$ and are identically distributed (so that $P(E_1\mid F)= P(E_2\mid F)$). If you don't have this assumption then $E_1$ must occur for $E_2$ to occur; that is, $E_2 \subseteq E_1$, and then $E_1$ and $E_2$ are no longer conditionally independent.
This allows us to proceed as follows:
\begin{eqnarray*}
P(F\mid E_1\cap E_2) &=& \dfrac{P(E_1\cap E_2\mid F)P(F)}{P(E_1\cap E_2\mid F)P(F) + P(E_1\cap E_2\mid F^c)P(F^c)} \\
&& \\
&=& \dfrac{P(E_1\mid F)P(E_2\mid F)P(F)}{P(E_1\mid F)P(E_2\mid F)P(F) + P(E_1\mid F^c)P(E_2\mid F^c)P(F^c)} \\
&& \\
&=& \dfrac{P(E_1\mid F)^2P(F)}{P(E_1\mid F)^2P(F) + (1-P(E_1^c\mid F^c))^2(1-P(F))} \\
&& \\
&=& \dfrac{0.99^2\times 0.0002}{0.99^2\times 0.0002 + (1-0.98)^2(1-0.0002)} \\
&& \\
&\approx& 0.3289 \\
\end{eqnarray*} |
Eliminate contradiction in the definition of group of order $p^2$ with a normal subgroup of order $p$. | The proof you give is essentially correct, except every time you say something has order $x$, you should say it has order at most $x$, or even order dividing $x$.
A corrected proof goes as follows: the subgroup generated by $a$ is normal, and it has order dividing $p$. The quotient has order dividing $p$ as well, so the whole group has order dividing $p^2$. It follows that $G$ is abelian (I'm guessing $p$ is meant to be a prime, judging by your link, although I think we can still make things work even if $p$ is not a prime), from which we conclude that $a$ is the identity. The whole presentation then collapses to $G=\langle b|b^p=e\rangle$, so $G$ has order $p$. |
Is $\lim\limits_{m\to\infty}\frac{1}{m}\sum_{n=1}^m \cos\left(\frac{2\pi nj}{s}\right)$ "useful"? | Yes.
Weyl's criterion states that a sequence $a_{k}$ is equidistributed modulo 1, meaning its fractional part is uniformly distributed in the region [0,1], if for all non-zero integers $p$ this holds
$$\lim\limits_{m \to \infty} \frac{1}{m} \sum\limits_{k=1}^{m} e^{2\pi i p a_{k}} = 0$$
Your equation is the real part of this expression for the sequence $\frac{x}{b},\frac{2x}{b},\frac{3x}{b},\cdots,\frac{nx}{b},...$ and $p=1$. |
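A numerical illustration of that averaged cosine (with hypothetical integers $j$, $s$): the average tends to $0$ unless $s \mid j$, in which case every term is $1$.

```python
import numpy as np

def avg_cos(j, s, m=100_000):
    n = np.arange(1, m + 1)
    return np.cos(2 * np.pi * n * j / s).mean()

print(avg_cos(3, 7))    # ≈ 0   (7 does not divide 3)
print(avg_cos(14, 7))   # = 1.0 (7 divides 14)
```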
Is it true, that $H^1(X,\mathcal{K}_{x_1,x_2})=0$? - The cohomology of the complex curve with a coefficient of the sheaf of meromorphic functions... | The answer is yes: for a non-compact Riemann surface $H^1(X, \mathcal K_{x_1,x_2})=0$.
The key is the exact sequence of sheaves on $X$:$$0\to \mathcal K_{x_1,x_2} \to \mathcal K \xrightarrow {truncate } \mathcal Q_1\oplus \mathcal Q_2\to 0$$ where $\mathcal Q_i$ is the sky-scraper sheaf at $x_i$ with fiber the Laurent tails (locally of the form $\sum_{j=0}^Na_jz^{-j}$).
Taking cohomology we get a long exact sequence $$\cdots \mathcal K(X) \xrightarrow {\text {truncate}} \mathcal Q_1(X) \oplus \mathcal Q_2(X)\to H^1(X, \mathcal K_{x_1,x_2})\to H^1(X, \mathcal K) \to \cdots $$
The vanishing of the cohomology group $H^1(X, \mathcal K_{x_1,x_2})$ then follows from the two facts:
1) $H^1(X, \mathcal K)=0$
2) The morphism $ \mathcal K(X) \xrightarrow {\text {truncate}} \mathcal Q_1(X) \oplus \mathcal Q_2(X)$ is surjective because of the solvability of the Mittag-Leffler problem on a non-compact Riemann surface.
For a compact Riemann surface of genus $\geq1$ the relevant Mittag-Leffler problem is not always solvable, so that we have $H^1(X, \mathcal K_{x_1,x_2})\neq 0$ (however for the Riemann sphere $H^1(\mathbb P^1, \mathcal K_{x_1,x_2})=0$) |
How do you rotate a vector by a unit quaternion? | To answer the question simply, given:
P = [0, p1, p2, p3] <-- point vector
R = [w, x, y, z] <-- rotation
R' = [w, -x, -y, -z]
For the example in the question, these are:
P = [0, 1, 0, 0]
R = [0.707, 0.0, 0.707, 0.0]
R' = [0.707, 0.0, -0.707, 0.0]
You can calculate the resulting vector using the Hamilton product H(a, b) by:
P' = RPR'
P' = H(H(R, P), R')
Performing the calculations:
H(R, P) = [0.0, 0.707, 0.0, -0.707]
P' = H(H(R, P), R') = [0.0, 0.0, 0.0, -1.0 ]
Thus, the example above illustrates a rotation of 90 degrees about the y-axis for the point (1, 0, 0). The result is (0, 0, -1). (Note that the first element of P' will always be 0 and can therefore be discarded.)
For those unfamiliar with quaternions, it's worth noting that the quaternion R may be determined using the formula:
a = angle to rotate
[x, y, z] = axis to rotate around (unit vector)
R = [cos(a/2), sin(a/2)*x, sin(a/2)*y, sin(a/2)*z]
See here for further reference. |
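A minimal sketch of the whole procedure (the function names are mine), reproducing the worked example above:

```python
from math import cos, sin, pi

def hamilton(a, b):
    """Hamilton product of quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return [w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2]

def rotate(p, r):
    """Rotate the point p = (px, py, pz) by the unit quaternion r via R P R'."""
    r_conj = [r[0], -r[1], -r[2], -r[3]]
    return hamilton(hamilton(r, [0.0, *p]), r_conj)[1:]   # drop the zero w-part

a = pi / 2                                    # 90 degrees about the y-axis
R = [cos(a/2), 0.0, sin(a/2), 0.0]
print(rotate((1.0, 0.0, 0.0), R))             # ≈ [0.0, 0.0, -1.0]
```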
Periodic solution of differential equation $y′=f(y)$ | Suppose first that $f(x) \neq 0$ when $x \neq 0$ and consider the map $S^1 \to S^1$ given by restricting $f$ to the set $|x| = 1$ and normalizing it. Because $f(x) = f(-x)$, the degree of this map is even. This means there are no periodic solutions $y$ that wind around origin, since those have degree 1.
If $f(x) = 0$ for some $x \neq 0$ then $f(kx) = 0$ for all $k \in {\bf R}$. So in this case there can't be any periodic solution winding around origin either.
Consider now the map $\pi_C : {\bf R}^2 \setminus \{0\} \to S^1$ given by projecting to the unit circle around origin. The above discussion tells us that $\pi_C \circ y : S^1 \to S^1$ is not surjective. But it is continuous, so its image is connected and therefore an interval in $S^1$. Denote this interval by $[a,b]$ and let $x \in y(S^1)$ be one of the points in the pre-image of $a$, that is $\pi_C x = a$. Denote by $X$ the closure of the connected component containing $x$ in $\pi_C^{-1}(a)$. This is an interval $[c,d]$ lying in the half-ray with the slope $a$. Consider now points $s, t \in y(S^1)$ that are close to $c$ and $d$ respectively and both lying on a half-ray with the slope $a + \epsilon$. Because they belong to the same half-ray, $t = ks$ for some $k \in {\bf R}$ and so the vectors $f(t) = k^2 f(s)$ are linearly dependent. But this is impossible since the curve $y$ must have distinct tangent vectors at $s$ and $t$ -- one leans towards the half-ray with slope $a$ and the other away from it. |
Show that for all real numbers $a$ and $b$, $\,\, ab \le (1/2)(a^2+b^2)$ | Remember that for any $x \in \mathbb{R}$, we have that $x^2 \geq 0$ i.e. square of any real number is non-negative.
Hence, $$(a-b)^2 \geq 0$$ since $a,b \in \mathbb{R}$. Expand $(a-b)^2$ and rearrange to get what you want.
\begin{align}
(a-b)^2 & \geq 0\\
a^2 + b^2 - 2ab & \geq 0\\
a^2 + b^2 & \geq 2ab
\end{align} |
Simplify Squared Expression | HINT :
$$A^2-B^2=(A-B)(A+B).$$ |
Ratio of segments formed by intersection of three cevians | Use the substitution $ a = \frac{1}{ 1 + \frac{1}{x} }$, or that $ x = \frac{a}{1-a}$, and $ 1 + 2x = \frac{ 1+a}{1-a}$.
Rephrasing what you already have
WTS $ \prod \frac{ 1 + a } { 1 - a } \geq 8 $ subject to $ a + b + c = 1$.
This is much easier to work with, give it a try.
Jensen's on the product: Show that $ \ln \frac{ 1+a}{1-a}$ is convex, so $\prod \frac{1+a}{1-a} \geq ( \frac{ 1 + 1/3 } { 1 - 1/3} ) ^3 = 8 $.
AM-GM: substituting in $ a+b+c = 1$, $\prod \frac{ (a+b) + (c+a) } { (b+c)} \geq \prod \frac{ 2 \sqrt{ (a+b)(c+a) }} { (b+c) } = 8.$ |
Factor groups of matrices | The quickest way is to define $\varphi:G_1\to \mathbb (R^*,\times)$ by $\varphi(A,B)=\det(A)/{\det(B)}$. Then just observe that $\ker \varphi=G_2$. |
(Non-continuous) solutions to $f\big(f(x)\big)=kx$ and $f\left(x^2\right)=xf(x)$ | Let's suppose for a non-zero constant $k$ and every real number $x$, we have:
$$f\big(f(x)\big)=kx\tag0\label0$$
$$f\big(x^2\big)=xf(x)\tag1\label1$$
By \eqref{0} you can find out that $f$ is injective. Now, by \eqref{0} and \eqref{1} you have:
$$f\left(f(x)^2\right)=f(x)f\big(f(x)\big)=kxf(x)=kf\left(x^2\right)=f\bigg(f\Big(f\left(x^2\right)\Big)\bigg)=f\left(kx^2\right)$$
$$\therefore\quad f(x)^2=kx^2\tag2\label2$$
Therefore by \eqref{2} we have $k=f(1)^2$ and hence $k$ is positive. Again by \eqref{2}, for every real number $x$ we get $f(x)=\pm\sqrt kx$.
Now let's define $K^\pm:=\big\{x\ne0\big|f(x)=\pm\sqrt kx\big\}$. So we get $\mathbb R=\{0\}\cup K^+\cup K^-$. By \eqref{1} we have $f(0)=0$. You can reformulate \eqref{0} and \eqref{1} this way:
$$x\in K^\pm\quad\text{iff}\quad\sqrt kx\in K^\pm$$
$$x\in K^\pm\quad\text{iff}\quad x^2\in K^\pm$$
It's easy to see that every function satisfying $f(0)=0$ and $f(x)=\pm\sqrt kx$ for $x\in K^\pm$ is a solution, where $K^\pm$ satisfy the above conditions.
The trivial cases happen when one of $K^\pm$ is empty, and give us the linear solutions $f(x)=\sqrt kx$ and $f(x)=-\sqrt kx$. For a nontrivial case, you can take $K^+=\{\pm k^q\vert q\in\mathbb Q\}$ and $K^-=\{\pm k^q\vert q\in\mathbb R\backslash\mathbb Q\}$.
EDIT:
As noted by @DanielWainfleet in the comments, the way my example above is presented, it only works for $k\ne1$. In fact, I could simply write $K^-=\mathbb R\setminus(K^+\cup\{0\})$, and then it would work for all $k>0$. But @DanielWainfleet has also provided an interesting example that works for $k=1$: $K^+=\left\{\pm 2^{2^n}\big\vert n\in\mathbb Z\right\}$. |
Why is $\nabla \cdot u=0$? (If $u=v\cos(k\cdot x)$) | I like to avoid indices altogether whenever possible, and would use the product rule for divergence instead:
$$\nabla \cdot \left[f(x)v(x)\right] = \langle \nabla f(x), v(x)\rangle + f(x) \nabla \cdot v(x).$$
In particular if $v$ is constant, the second term vanishes and you just get
$$\nabla \cdot u = v \cdot \nabla \cos(k\cdot x) = -(v\cdot k) \sin(k\cdot x) = 0.$$ |
Fermat's Last Theorem: implications (there is no new proof) | Actually, FLT has very few consequences even in Number Theory. Its importance has always been in the methods that were developed in the effort to settle it, and what else could be done with those methods. |
Number of ways to write n as a sum of m non-negative integers each less than k | Using generating functions is one way to go!
In your example, we want the coefficient of $x^5$ in the expansion of $(1+x+x^2+x^3)^3$. The coefficient does check out to be $12$. (I will let you check for yourself)
In general, you would want the coefficient of $x^n$ from the expansion of $(1+x+ \dots + x^{k-1} )^m$ |
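A quick sympy check of the example:

```python
import sympy as sp

x = sp.symbols('x')
expr = sp.expand((1 + x + x**2 + x**3)**3)
print(expr.coeff(x, 5))   # 12
```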
Radius of convergence of $ f(z)=\frac{z^2} {e^z+1} $ without expansion | Yes. |
Monkey typing ABRACADABRA and gamblers | With the new strategy, the total payoff isn't always $\$26^{11}+\ldots+\$26$. Say the sequence of letters is ABRABRACADABRA. The gamblers who enter on the 4th and 5th letter will both bet on the 5th letter being `C.' This means nobody will win $\$26^{11}$ or $\$26^{10}$. |
How do I simplify $8t^7 \left(\sqrt{t }-9\right) + t^8 \left(\frac{1}{2\sqrt{t}}\right)$? | It's a bit unclear to me exactly what is causing the problem. But I am assuming it's the square roots? You can rewrite them as normal exponents.
$$
\sqrt{t} = t^{\frac{1}{2}}
$$
And
$$
\frac{1}{2\sqrt{t}} \;=\; \frac{1}{2}\cdot\frac{1}{\sqrt{t}} \;=\; \frac{1}{2}\cdot\frac{1}{t^{\frac{1}{2}}} \;=\; \frac{1}{2}\cdot t^{-\frac{1}{2}}
$$ |
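If the goal is a single combined expression, a sympy sketch of the result after rewriting:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
expr = 8*t**7*(sp.sqrt(t) - 9) + t**8 / (2*sp.sqrt(t))
print(sp.simplify(expr))   # a form equivalent to 17*t**(15/2)/2 - 72*t**7
```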
3x3 Determinant, Solving for Eigenvalues | $$\det\begin{pmatrix}
-BI - b-\lambda& -BS& 0\\
BI &BS - r - b-\lambda& 0\\
0&r&-b-\lambda
\end{pmatrix}=$$
$$=(-b-\lambda)\det\begin{pmatrix}
-BI - b-\lambda&-BS\\
BI &BS - r - b-\lambda
\end{pmatrix}$$ |
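The cofactor expansion along the third column can be verified with a sympy check:

```python
import sympy as sp

B_, I_, S_, b, r, lam = sp.symbols('B I S b r lambda')
M = sp.Matrix([[-B_*I_ - b - lam, -B_*S_, 0],
               [B_*I_, B_*S_ - r - b - lam, 0],
               [0, r, -b - lam]])
minor = M[:2, :2]   # delete row 3 and column 3
print(sp.simplify(M.det() - (-b - lam) * minor.det()))   # 0
```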
Combinatorics on a chessboard | You can have the following moves:
R, DR, DDR
Let $x_0, x_1,x_2$ represent the number of moves made R, DR, or DDR during the path to get to the bottom right (these will become repetition numbers for a multiset that we will permute).
You want $x_0+x_1+x_2=7$ because you want a total of 7 moves to the right. Additionally, you want $x_1+2x_2=7$ because you want a total of seven down movements.
For $x_2$ you can have the values $0,1,2,3$. Then for $x_1$, you have $x_1=7-2x_2$. And $x_0=7-x_2-x_1 = 7-x_2-(7-2x_2) = x_2$.
So, you have $x_0=x_2$ and $x_1 = 7-2x_2$. Since there are only 4 cases, we can just enumerate them.
Case $x_0=x_2=0$: Find the number of permutations of the multiset: $\{ DR\cdot 7\}$. There is only 1.
Case $x_0=x_2=1$: Find the number of permutations of the multiset: $\{ R\cdot 1, DR\cdot 5, DDR\cdot 1\}$. There are $\dfrac{7!}{1!5!1!}$.
Case $x_0=x_2=2$: Find the number of permutations of the multiset: $\{ R\cdot 2, DR\cdot 3, DDR\cdot 2\}$. There are $\dfrac{7!}{2!3!2!}$.
Case $x_0=x_2=3$: Find the number of permutations of the multiset: $\{ R\cdot 3, DR\cdot 1, DDR\cdot 3\}$. There are $\dfrac{7!}{3!1!3!}$.
I get a total of 393 possible paths that avoid three or more consecutive down moves. This only counts paths whose final move is an R. Next, apply a similar process for when the final moves are RD, then RDD.
If the final moves are RD, then you have:
R, DR, DDR, and $x_0+x_1+x_2=7$, $x_1+2x_2=6$. Now $x_0=x_2+1$ and $x_1=6-2x_2$. You can have values $0,1,2,3$ for $x_2$. That gives multisets:
$$\{R\cdot 1, DR\cdot 6\}, \{R\cdot 2, DR\cdot 4, DDR\cdot 1\}, \{R\cdot 3, DR\cdot 2, DDR\cdot 2\}, \{R\cdot 4, DDR\cdot 3\}$$
This adds $$\dfrac{7!}{1!6!}+\dfrac{7!}{2!4!1!}+\dfrac{7!}{3!2!2!}+\dfrac{7!}{4!3!}$$
Next, if it ends with RDD, you have R, DR, DDR and $x_0=x_2+2, x_1=5-2x_2$, and $x_2=0,1,2$.
This gives multisets
$$\{R\cdot 2, DR\cdot 5\}, \{R\cdot 3, DR\cdot 3, DDR\cdot 1\}, \{R\cdot 4, DR\cdot 1, DDR\cdot 2\}$$
So, this final case adds:
$$\dfrac{7!}{2!5!}+\dfrac{7!}{3!3!1!}+\dfrac{7!}{4!1!2!}$$
Now, the total is 1016. |
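A dynamic-programming cross-check of the final count (tracking the current streak of consecutive down moves):

```python
from functools import lru_cache

def count_paths(rights=7, downs=7):
    @lru_cache(None)
    def go(r, d, run):                       # run = current streak of D's
        if r == rights and d == downs:
            return 1
        ways = 0
        if r < rights:
            ways += go(r + 1, d, 0)          # an R resets the streak
        if d < downs and run < 2:
            ways += go(r, d + 1, run + 1)    # at most two consecutive D's
        return ways
    return go(0, 0, 0)

print(count_paths())   # 1016, matching the case analysis above
```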
Perfect squares using 20 1's, 20 2's and 20 3's. | Hint: The resulting number is divisible by $3$ but not by $9$, since the digit sum is $120$. |
Find all group homomorphisms from $\mathbb{Z}^n$ to $ \mathbb{Z}^n$ | if the epic morphism were not monic then it would have a non-trivial kernel $K$. then, we would have $Z^n \cong Z^n \oplus K$ |
Prove that $\angle AQP = \angle ABC$ | Hint. If triangle is isosceles, as can be seen in my drawing , right triangles APQ and BPC are equal because $BP= PQ$ and $PC=PA$, therefore $\angle PBC=\angle PQA$. For other cases we can do following methods:
1-Extend AQ to cross BC at D, also extend PC to intersect AB at K. Now if $\angle QDC=\angle BKQ$ then $\angle QDB + \angle BKQ=180^o$, So $\angle KQD+\angle KBD=180^o$ that results in $\angle KBC=\angle PQA$.
2-Take a point $K$ on $AB$ such that $BK=BP$. Draw a perpendicular to $AB$ from $K$; it intersects $BC$ at $F$. If we show $PA=KF$ then the right triangles PAQ and KBF are equal, and then $\angle ABC=\angle PQA$. |
Maps circle to "edge" of square region | You already have explicit homeomorphisms that work.
Let me be somewhat more general and denote $\|(x,y)\|_\infty=\max\{|x|,|y|\}$ and $\|(x,y)\|_2=\sqrt{x^2+y^2}$. Then the closed disk is described as $\{x\in\Bbb R^2\,|\, \|x\|_2\leq 1\}$ and the circle as $\{x\in\Bbb R^2\,|\, \|x\|_2=1\}$, while the closed square is $\{x\in\Bbb R^2\,|\, \|x\|_\infty\leq1\}$ and its boundary is $\{x\in\Bbb R^2\,|\, \|x\|_\infty=1\}$. Your maps can be written as
$$x\mapsto x/\|x\|_\infty,\quad x\mapsto x/\|x\|_2$$
Quick check shows that $\|x/\|x\|_\infty\|_\infty = 1$ and $\|x/\|x\|_2\|_2 = 1$ so these really do map circle to boundary of square and vice versa.
More generally, you can consider mapping whole disk to square and square to disk. In that case you would define maps with:
$$x\mapsto\frac {\|x\|_2}{\|x\|_\infty}x,\quad x\mapsto \frac {\|x\|_\infty}{\|x\|_2}x $$
(except at $x=0$ where this is not well defined - extend them by sending $0$ to $0$ in both cases)
Now, for $\|x\|_2\leq 1$ you have $$\left\|\frac {\|x\|_2}{\|x\|_\infty}x\right\|_\infty = \|x\|_2 \leq 1$$ and for $\|x\|_\infty\leq 1$ you have $$\left\|\frac {\|x\|_\infty}{\|x\|_2}x\right\|_2 = \|x\|_\infty \leq 1$$ so everything checks out.
If you restrict those maps to boundaries, you will get your original maps: for $\|x\|_2 = 1$, $\frac {\|x\|_2}{\|x\|_\infty}x = x/\|x\|_\infty$ and similarly for $\|x\|_\infty = 1$.
What these maps do is scale appropriate vectors. Make sure that you understand why both are continuous and confirm that they are inverses. |
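A tiny numerical check that the two restricted maps really are mutually inverse (function names are mine):

```python
import numpy as np

to_square = lambda p: p / np.max(np.abs(p))     # circle -> square boundary
to_circle = lambda p: p / np.linalg.norm(p)     # square boundary -> circle

p = np.array([0.6, 0.8])                        # a point on the unit circle
q = to_square(p)
print(q, np.max(np.abs(q)))                     # [0.75 1.0], sup-norm 1
print(np.allclose(to_circle(q), p))             # True
```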
Given Least Squares of a set, compute Least Square of the set excluding a single element | Assuming that I properly understand.
If you use normal equations for the regression $y=a+bx$, $a$ and $b$ are given solving the equations $$S_y=n a+b S_x$$ $$S_{xy}=a S_x+b S_{xx}$$ Now, you want to add point $n+1$ and remove point $1$ from the regression set. So, the equations are now $$S'_y=n a'+b' S'_x$$ $$S'_{xy}=a' S'_x+b' S'_{xx}$$Computing the terms we have $$S'_y=S_y+y_{n+1}-y_1$$ $$S'_x=S_x+x_{n+1}-x_1$$ $$S'_{xy}=S_{xy}+x_{n+1}y_{n+1}-x_1y_1$$ $$S'_{xx}=S_{xx}+x_{n+1}^2-x_1^2$$ which make the updating process quite simple.
For sure, when this is done, do not forget to make the update $S=S'$ and to shift the indices $k=k-1$ to be ready for the next step. |
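A minimal sketch of that sliding-window update (variable names are mine; `fit` just solves the two normal equations):

```python
import numpy as np

def fit(Sx, Sy, Sxy, Sxx, n):
    """Solve the normal equations for y = a + b*x."""
    b = (n * Sxy - Sx * Sy) / (n * Sxx - Sx**2)
    a = (Sy - b * Sx) / n
    return a, b

rng = np.random.default_rng(0)
x = rng.random(20)
y = 2 + 3 * x + 0.01 * rng.standard_normal(20)
n = 10

# sums over the initial window: points 0 .. n-1
Sx, Sy = x[:n].sum(), y[:n].sum()
Sxy, Sxx = (x[:n] * y[:n]).sum(), (x[:n] ** 2).sum()

# slide the window: add point n, remove point 0
Sx  += x[n] - x[0];             Sy  += y[n] - y[0]
Sxy += x[n]*y[n] - x[0]*y[0];   Sxx += x[n]**2 - x[0]**2

print(fit(Sx, Sy, Sxy, Sxx, n))                   # updated (a, b)
print(np.polyfit(x[1:n+1], y[1:n+1], 1)[::-1])    # same, recomputed from scratch
```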
Laurent series expansion of $f(z)=\frac{1}{1-2^{1-z}}$ in the annulus $D_r=\{\ z\in\mathbb C\ |\ r<|z-z_0|<R\ \}$ | $f(z)=\frac{1}{1-2^{1-z}}$,so $f(z)=1+\left(\frac{1}{2^{z-1}-1}\right)=1+\left(\frac{1}{e^{(z-1)\log2}-1}\right)=1+\sum_{n=1}^\infty \left(\frac{1}{(z-1)^n(\log2)^n}\right)$.
$0<|z-1|<R$,for any $R\gt0$. |
Find the equation of the locus of $Q$ | Assuming that the circle always passes through the origin, here is a proposed solution.
Observe that the tangent $RQ$ is a horizontal tangent and horizontal tangents are possible only at points directly above and below centre on the circle. Therefore, coordinates of $R$ are $(u,1+v)$. Also, perpendicular from centre bisects a chord, therefore, $P$ is $(2u,0)$. Now, $OP=QR$, hence, $Q$ is $(3u,1+v)$.
Also, distance of centre from origin is $1$, so, $u^2+v^2=1$.
So, locus of $Q$ is an ellipse.
Hope it helps:) |
Unsolvability Degree in Turing's Proof 1 | I am answering this question that I posted in 2012 since there were no responses. The short answer is "Yes" - the calculation has been done before; also some papers published in the 2012/2013 period (discussed below) add some further results to this topic.
In his article in the publication (The Universal Turing Machine, R. Herken, OUP 1988) celebrating the 50th anniversary of the Turing Machine, Davis discusses the 1936 paper argument. In footnote 5 he makes the following comment:
A remark for the knowledgeable: the problem of determining whether a given Turing machine is circle-free is complete of degree $\mathbf{0''}$ ...
Now a theorem of recursion theory (Post's theorem) gives:
$B\in\Delta_{n+1}$ iff $B\le_T \mathbf0^{(n)}$
So the circle-free condition is $\Delta_{3}$ as indicated in the question.
The significance of this result is that the Diagonalisation lemma is stating what is excluded for a Turing Machine: it cannot solve $\Delta_{3}$ problems.
Of course the Halting theorem is based on the alternative Post (1936, 1947) model of computation and Halting is $\mathbf{0'}$ complete - one Turing degree lower than Turing's own result demonstrated. Thus a Post machine cannot solve
$\Delta_{2}$ problems - the familiar kind of "non-algorithmic" functions discussed by Penrose and textbooks generally.
Thus Turing's own result is actually showing that his machines are more powerful than the "Textbook Turing Machine". This idea has been exploited indirectly and latterly more directly in papers on extended computation. An example is the paper published in 2013:
"The Computational Power of Turing's Non-terminating Circular A-Machines"
(here ) by van Leeuwen and Wiedermann in Alan Turing His Work and Impact.
The first sentence in this paper begins:
For readers familiar with the concept of Turing machines as described in contemporary textbooks, reading the definition of a Turing machine in Turing's original paper (Turing 1936) may present a surprise.
After I found this result I was myself surprised that another icon of 1930s Logic was not all that it seemed as evidenced in my Stack Question from 2014 (Constructiveness of Proof of Gödel's Completeness Theorem). |
Proving equivalencies of these sets | Indicator functions are a nice way to prove such results. For this case, one need only note that
\begin{align*}
\mathbf 1_{A\cap B}=\mathbf 1_A \cdot \mathbf 1_B, \qquad \mathbf 1_{A\mathbin\triangle B} = \mathbf 1_A+\mathbf 1_B \qquad \text{and} \qquad \mathbf 1_{A\setminus B} = \mathbf 1_A(1+\mathbf 1_B).
\end{align*}
Thus we have
\begin{align*}
\mathbf 1_{(A\triangle C)\mathbin\triangle (A\setminus B)} &= \mathbf 1_{A\mathbin\triangle C}+\mathbf 1_{A\setminus B}\\
&= \mathbf 1_{A}+\mathbf 1_{C} + \mathbf 1_A(1+\mathbf 1_B)\\
&= \underbrace{\mathbf 1_A + \mathbf 1_A}_{=0} + \mathbf 1_C + \mathbf1_A\cdot\mathbf 1_B\\
&= \mathbf1_A\cdot\mathbf 1_B + \mathbf 1_C \\
&= \mathbf 1_{A\cap B}+\mathbf 1_C \\
&= \mathbf 1_{(A\cap B)\mathbin\triangle C},
\end{align*}
so it follows that $x\in (A\triangle C)\mathbin\triangle (A\setminus B)\iff x\in (A\cap B)\mathbin\triangle C$, as required. |
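These identities are easy to spot check on random sets (`^` is symmetric difference, `-` is set difference, `&` is intersection):

```python
import random

U = range(30)
A, B, C = ({u for u in U if random.random() < 0.5} for _ in range(3))
print(((A ^ C) ^ (A - B)) == ((A & B) ^ C))   # True
```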
Single fixed deposit vs multiple fixed deposits | From the formula $F=P(1+i)^n$ where $F$ is the future value of the deposit, $P$ is the amount deposited, $i$ is the interest per paymentperiod, and $n$ is the number of payment periods.
With $a$ and $b$ being amounts invested separately or together,
$F_{a+b} = (a + b)(1+i)^n = a(1+i)^n + b(1+i)^n = F_{a}+F_{b}$
As such, you should invest the 300k as soon as you get it. Once receiving the additional 50k you have the option of either creating a new deposit which is separate from the first, or combining the two together. There will be no difference whatsoever in the amount of interest you receive from the formula.
Where there IS a difference however, is in what additional fees you have to deal with. Most investment firms will ask for money for creating a new deposit or cancelling a previous deposit. For information on that, you would need to look at the fine print of whatever contract you sign. |
Subvector and related subspace | Strictly speaking $Y_k$ ( for $k<n$) is not a subspace of Y.
A vector space is defined as a quadruple $Y=(V,K,+,\cdot)$ where $V$ is a set, $K$ a field, and $+:V\times V \rightarrow V$ and $\cdot: K\times V \rightarrow V$ are operations that satisfy certain axioms (see here).
A subspace is defined as a subset of $V$ that is a vector space with the same field $K$ and the same operations.
In your case the set $V$ for the space $Y$ is the set of $n$-tuples $(y_1,\cdots, y_n)^T$ and the elements of $Y_k$ are $k$-tuples, so $Y_k$ is not a subset of $V$. But the subspaces of $Y$ of dimension $k$ are isomorphic to $Y_k$ (if the field $K$ and the operations are the same). |
Finding the local extrema of $y = \frac{\ln x} {\sqrt{x}}$ | For $x>0$,
As you found
$$f'(x)=\frac{2-\ln(x)}{2x\sqrt{x}}$$
the deniminator is $>0$, the sign of $ f'(x) $ is given by the sign of the numerator $ 2-\ln(x) $.
$$f'(x)=0\iff \ln(x)=2$$
$$\iff x=e^2$$
$$f'(x)>0\iff 2>\ln(x)$$
$$\iff 0<x<e^2$$
thus $ f $ is increasing on $ (0,e^2] $ and decreasing on $ [e^2,+\infty)$.
$ f $ attains its maximum at $ x=e^2$.
on the other hand
$$\lim_{x\to 0^+}f(x)=-\infty$$ and
$$\lim_{x\to+\infty}f(x)=0$$
So, it has no minimum. |
Showing $m^*(A-E)=m^*(A)$ when $m^*(E)=0$ | As suggested by the comment, use the subadditivity of the outer measure to finish the question.
$$m^{\star}(A \setminus E) \le m^{\star}(A) \le m^{\star}(A \setminus E) +m^{\star}(E) = m^{\star}(A \setminus E) \implies m^{\star}(A \setminus E) = m^{\star}(A)$$
Here the first inequality is monotonicity ($A\setminus E\subseteq A$), the second is subadditivity, and the last equality uses $m^{\star}(E)=0$.
To prove the subadditivity of the outer measure, let $\epsilon > 0$, and $A, B$ be two subsets of $\Bbb{R}$ having countable open interval covers $(I_i)_{i=1}^{\infty}$ and $(J_j)_{j=1}^{\infty}$ respectively so that $\sum_{i=1}^{\infty} \ell(I_i) < m^{\star}(A) + \frac\epsilon2$ and $\sum_{j=1}^{\infty} \ell(J_j) < m^{\star}(B) + \frac\epsilon2$. (Such covers exist since $m^\star$ is the infimum of the sum of length of covering open intervals.)
Clearly, $\{(I_i)_{i=1}^{\infty},(J_j)_{j=1}^{\infty}\}$ covers $A \cup B$, and the sum of length of covering open intervals
$$\sum_{i=1}^{\infty} \ell(I_i) + \sum_{j=1}^{\infty} \ell(J_j) < m^{\star}(A) + m^{\star}(B) + \epsilon$$
To finish the proof, on the LHS, take infimum over all countable open interval covers for $A \cup B$ so that $m^\star(A\cup B)$ appears on the LHS.
$$m^\star(A\cup B) < m^{\star}(A) + m^{\star}(B) + \epsilon$$
Since the choice of $\epsilon > 0$ is arbitrary, we conclude that
$$m^\star(A\cup B) \le m^{\star}(A) + m^{\star}(B).$$ |
When is a quotient group $G/H$ abelian? | Yes, as mentioned in the comments you can always take the quotient by the so-called commutator subgroup. That is, the group generated by all commutators $[x,y]=xyx^{-1}y^{-1}$ of elements of $G$.
This quotient is always abelian, and is referred to as the abelianization of $G$.
Now, the answer to your title question is that the quotient $G/H$ will be abelian iff $H$ contains the commutator subgroup, aka the "first derived subgroup": $[G,G]\le H$. |
Show that $\mathrm{E}[g(X)]=g(0)+\int_{0}^{\infty }g'(x)P(X>x) \mathop{}\!d x$ | In terms of the CDF $F$,$$\begin{align}E[g(X)]-\int_0^\infty g^\prime(x)P(X>x)dx&=\int_0^\infty[g(x)\,dF(x)-g^\prime(x)(1-F(x))dx]\\&=\color{red}{[g(x)F(x)]_0^\infty}-\color{blue}{\int_0^\infty g^\prime(x)dx}\\&=\color{red}{\lim_{x\to\infty}g(x)}-\left(\color{blue}{\lim_{x\to\infty}g(x)-g(0)}\right)\\&=g(0).\end{align}$$Your approach can be made to work too, but since $g(0)$ may be positive, your first step should have read$$E[g(X)]=g(0)+\int_{g(0)}^\infty P(g(X)>x)dx.$$(You can see this, for example, by writing $g(x)=g(0)+g_0(x)$.) |
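A numerical illustration with hypothetical choices $X\sim\mathrm{Exp}(1)$ and $g(x)=x^2+1$ (so $E[g(X)]=E[X^2]+1=3$), assuming scipy is available:

```python
import numpy as np
from scipy import integrate

g = lambda x: x**2 + 1
gp = lambda x: 2 * x                      # g'
# X ~ Exp(1) has survival function P(X > x) = exp(-x)
val, _ = integrate.quad(lambda x: gp(x) * np.exp(-x), 0, np.inf)
print(g(0) + val)                         # 3.0, matching E[g(X)]
```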
Compact surface - Tube/Torus | This will be very hand-wavy, so forgive me if it doesn't make sense:
WLOG, suppose $\gamma$ is the unit circle in the $xy-$plane $\gamma(s)=(\cos(s),\sin(s),0), 0\leq s\leq 2\pi$, and let $a=1$. Then we have a surface $\sigma$ that is exactly the product $S^1\times S^1$ (in this case, the normal vectors $n,b$ can be easily calculated explicitly). |
Which of the following are true for continuous functions $f_n$ on $[0,\infty)$ | For 1, you sorta have the right idea, but you need to be careful with how you construct your counterexample. By "joining the points (0,0), (0,1) and (1/n,0)." you get a triangle with base $\tfrac{1}{n}$ and height $1$, which has an area of $\tfrac{1}{2n}$. Thus, $\displaystyle\int_{0}^{\infty}f_n(x)\,dx = \dfrac{1}{2n} \to 0 = \int_{0}^{\infty}f(x)\,dx$ as $n \to \infty$. Instead, try joining the points $(0,0)$, $(\tfrac{1}{n},n)$, and $(\tfrac{2}{n},0)$. Then you will get a triangle with base $\tfrac{2}{n}$ and height $n$, which has an area of $1$. Hence, $\displaystyle\int_{0}^{\infty}f_n(x)\,dx = 1$ for all $n$ while $\displaystyle\int_{0}^{\infty}f(x)\,dx = 0$.
For 2 and 4, consider something like $f_n(x) = \dfrac{1}{n}$ for all $x \in [0,\infty)$. |
$ V_{ \kappa}$ ( $ \kappa $ inaccessible ) models there is a countable model of ZFC | Once you have a model $(M,{\in})$ where $M$ is countable, you also have a bijection $f:\omega\to M$ -- because that's what it means for $M$ to be countable. Now let
$$ E = \{ (a,b)\in\omega\times\omega \mid f(a)\in f(b) \} $$
This makes $(\omega,E)$ isomorphic to $(M,{\in})$. Since $M$ is known to satisfy ZFC, so does $(\omega,E)$. |
Differentiability of vector valued function? | Lemma: Given $\phi:\mathbb{R}^2 \to \mathbb{R}$ and $\psi: \mathbb{R}^2 \to \mathbb{R}$ differentiable, then $\phi \cdot \psi$ is differentiable.
Proof: $\phi \cdot \psi=m \circ (\phi \times \psi),$ where $m(x,y)=x \cdot y$. Therefore, by the chain rule, $\phi \cdot \psi$ is differentiable.
$f=(g \circ \pi_1) \cdot (h \circ \pi_2)$. Since projections are linear, they are differentiable. Since composition of differentiable functions is differentiable (by the chain rule), then the result follows from the lemma. |
Linear Algebra, matrix for a linear transformation | The polynomial $P = X^3-X = (X-1)(X+1) X$ is divisible by the minimal polynomial $\mu_T$ of $T$.
Moreover, from the other hypotheses, $X^2-1=(X-1)(X+1)$ and $X^2-X=(X-1) X$ are not divisible by $\mu_T$.
There are two options remaining:
$\mu_T= X(X+1)(X-1)$ then $\dim(\ker(T))=1$.
$\mu_T=X (X+1)$ so the only eigenvalues are $-1$ and $0$ and you can conclude from $\dim(\ker(T))$. |
Nonparametric vs Parametric distributions | In statistics the term nonparametric is most often used to describe inferential procedures. The Wikipedia article may be useful. However, because a lot of
the common statistical procedures assume that data are normal, colloquial
usage of 'nonparametric' sometimes just indicates that the normality assumption is not made.
Sign tests, rank-based tests (e.g., one and two-sample Wilcoxon, Kruskal-Wallis, and Friedman tests) and in many cases confidence intervals
associated with these tests are often found in the chapter of an elementary or intermediate-level statistics text called 'Nonparametric Tests'. While the rank-based tests do not assume sampling from a normal distribution, they do assume
continuous data (to the extent that there are few, if any, ties).
When data are reduced to ranks, information is lost. So it is not a good idea
to use rank-based nonparametric tests when data are normal or nearly normal.
Some permutation tests are also called nonparametric, and they do not necessarily require continuous data. Density estimation is largely nonparametric.
Please leave a comment if you want more detail on any of this, or if you need an explanation that goes in another direction entirely. |
Express this vector as a linear combinations by given vectors. | $\alpha$ can be obtained from the area ratios as follows,
$$\frac{\alpha}{1-\alpha} =\frac{\triangle BAS}{\triangle RAS}=\frac{\frac{2}{3} \cdot \frac{1}{2}\cdot\frac{1}{2}}{\frac{1}{2}}=\frac{1}{3}
$$
where we observe that △BAS and △RAS share the same base AS. This allows their area ratio to be expressed as $\alpha/(1-\alpha)$, which is proportional to their heights. Furthermore, from the side partitions given, △RAS is $\frac{1}{2}$ of the area PQRS and, similarly, △BAS is $\frac{1}{6}$ of the area PQRS.
Thus,
$$\alpha=\frac{1}{4}$$
and $$\vec{PC}=\frac{1}{4}\vec{PQ}+\frac{5}{8}\vec{PS}$$ |
The longest repeating decimal that can be created from a simple fraction | The numerator doesn't matter (for this question), so you might as well let it be $1$. The denominator should be the largest prime under $10000$ which has $10$ as a primitive root. I don't know offhand what that prime is, but I'm sure such primes are tabulated and shouldn't be hard to locate.
The table at the Online Encyclopedia doesn't go far enough. There is an applet which claims to find these primes, but I couldn't make it work --- maybe you'll have better luck. |
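A sketch of that search with sympy ($10$ is a primitive root mod $p$ exactly when its multiplicative order is $p-1$, which is also the period of the decimal expansion of $1/p$):

```python
from sympy import primerange, n_order

best = max(p for p in primerange(3, 10000)
           if p != 5 and n_order(10, p) == p - 1)   # skip 5: gcd(10, 5) > 1
print(best)   # the desired denominator; 1/best repeats with period best - 1
```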
Suppose $e^A = A$, prove that $A$ is diagonalizable | You write
$$D+N=e^{D}e^{N}=e^{D}+e^{D}(e^{N}-I)$$
As the matrix $e^{D}(e^{N}-I)$ is nilpotent (because it is of the form $e^{D}NQ(N)$ and everything commutes), $e^D$ is diagonalizable (because $D$ is), and these two matrices commute, from uniqueness in the Dunford decomposition you get:
$$D=e^D, \;\;\;\;N=e^{D}(e^{N}-I)$$
Therefore, multiplying by $N^{k-1}$, where $k$ is the smallest integer such that $N^{k+1}=0$ (and assuming by contradiction that $k\geq 1$), you get $$N^k=DN^k,$$ in other words
$$(I-D)N^k=0.$$
Now, is $(I-D)$ invertible? Yes: as $D=e^D$, $1$ cannot be an eigenvalue of $D$, so you conclude $N^k=0$, which by the definition of $k$ is a contradiction. Hence $k=0$, i.e. $N=0$, and the proof is finished. |
Resources like "How to solve it" by Polya | For online resources
https://artofproblemsolving.com/ - Check the resources link
https://terrytao.wordpress.com/career-advice/
https://brilliant.org/
For books
The Art and Craft of Problem Solving - Paul Zeitz
Anything by Titu Andreescu (2 examples being: 103 Trigonometry Problems or Putnam and Beyond)
Problem Solving Strategies - Arthur Engel
That should be a good start. You can also check out any book pertaining to preparation for the International Mathematical Olympiad, those also have good problem solving tips.
Hope this helps! |
Example of a non invertible fractional ideal | Hint: Consider the ring $\mathbb{Z}[\sqrt{5}]$. Since this is a non-maximal order in the Dedekind domain $\mathbb{Z}[\varphi]$, you know it must be not integrally closed, and so must have a non-invertible fractional ideal.
To actually find one, a good place to start might be at $I=(2)$ (why?). Indeed, let $\mathfrak{P}=(2,1-\sqrt{5})$. Show that $\mathfrak{P}$ is maximal (just compute the quotient ring). Then show that $\mathfrak{P}\ne I$, but
$$\mathfrak{P}^2=I\mathfrak{P}$$ |
Showing that the Poisson process is characterized by five properties | We have, as $p_{n,k}=\frac{\lambda}n$ when $k\leq n$ and $0$ otherwise,
$$\sum_{k=1}^{+\infty}p_{n,k}^2=\sum_{k=1}^n\frac{\lambda^2}{n^2}=\frac{\lambda^2}n,$$
which gives the wanted result, unless I am missing something. |
What is the simplest way to compute $f_Z(z)$ when $Z = \min(X, Y), X \sim U(0, 5), Y \sim U(0, 10)$? | Is there an easier approach to what I have taken here?
I suggest you a simple graphical approach:
First of all, observe that
$$f_X(x)=\frac{1}{5}$$
$$f_Y(y)=\frac{1}{10}$$
on the rectangle $(0,5)\times(0,10)$.
Thus, by definition
$$\mathbb{P}[Z>z]=\mathbb{P}[X>z;Y>z]=\frac{(5-z)(10-z)}{50}$$
(area of the purple rectangle $\times f(x,y)$)
and thus
$$F_Z(z)=1-\frac{(5-z)(10-z)}{50}$$
Differentiating, you get your density
$$f_Z(z)=\frac{15-2z}{50}\cdot\mathbb{1}_{(0;5)}(z)$$
... this result matches with yours but I think this procedure is very fast. |
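A quick Monte Carlo check of the density via the mean (since $\int_0^5 z\,\frac{15-2z}{50}\,dz=\frac{25}{12}$):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
z = np.minimum(rng.uniform(0, 5, N), rng.uniform(0, 10, N))
print(z.mean(), 25 / 12)   # both ≈ 2.0833
```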
Can all sets be totally ordered (not well-ordered) in ZF? | I wrote a somewhat technical answer to the question several years ago. The answer is no, and you can find a proof in that link.
Let me not entirely flesh out the whole proof, but instead rely on several well-known examples.
Russell sets, or generally, when you cannot choose from a family of finite sets. Suppose that you have a family of sets, each of them is finite, say of size $2$. Let $A$ be the union of all these sets. If $A$ can be linearly ordered, this provides a uniform linear ordering of all your finite sets (and then some), letting you pick the least member of each of these finite sets.
And indeed, it is consistent that there is a countable set of pairwise disjoint pairs, which does not admit a choice function. Therefore, the union of these pairs cannot be linearly ordered.
Amorphous sets. These are sets which cannot be split into two infinite sets. Clearly their existence contradicts even the most basic principles of choice. If $A$ is an amorphous set, it cannot be linearly ordered. Suppose $<$ was in fact a linear ordering of an amorphous set. Then every point defines a cut, one side of which is finite, and the other is infinite. If all but finitely many points have finitely many predecessors, then this linear ordering is isomorphic to an ordinal of the form $\omega+n$, where $n$ is finite. But then it is countable, so no. If all but finitely many points have infinitely many predecessors, consider the reverse linear ordering and derive a contradiction yet again.
$\mathcal P(\omega)/\rm fin$, the equivalence classes of sets of natural numbers modulo finite changes. This one is trickier, and harder to prove. Andrés Caicedo gave a very good proof on MathOverflow to this fact. In short, if this set can be linearly ordered, there is a set without the Baire Property and a set which is not Lebesgue measurable. Since it is consistent that every set has the Baire Property, or if you're willing to accept inaccessible cardinals into your life, that every set is Lebesgue measurable, it follows that $\mathcal P(\omega)/\rm fin$ is consistently without a linear ordering.
There are many other possible examples, but these are the three "obvious" examples. |
Flaw in the proof that $M$ is noetherian given exact sequence? | The assumption that $f$ is injective or that $g$ is surjective aren't necessary, all that matters is that $M'\to M\to M''$ is exact.
Indeed suppose you know the case where $f$ is injective and $g$ is surjective, and let $M'\to M\to M''$ be an exact sequence. Then $0\to \mathrm{im}(f)\to M\to M/\mathrm{im}(f)\to 0$ is exact; $\mathrm{im}(f)\cong M'/\ker(f)$ is noetherian as a quotient of a noetherian module, and $M/\mathrm{im}(f)=M/\ker(g)\cong \mathrm{im}(g)$ is noetherian as a submodule of the noetherian module $M''$; and the rest follows.
This shows that it's not surprising that you didn't use the injectivity of $f$.
Your proof is weird: who is $N$? I assume it's the submodule of $M$ under consideration, but then it should be $f^{-1}(N)$ at the beginning, not $g^{-1}(N)$.
Also you don't need $g$ to be surjective for $g(N)$ to be a submodule of $M''$.
Other than that, your proof is fine, and as noted, you didn't use injectivity of $f$, or surjectivity of $g$, but that's not a problem as I pointed out. |
Is there any algorithm for finding the minimum distance to the complement of a convex set? | If the nom is polyhedral $(l_1, l_\infty)$ it can be computed with a finite number of linear program (the problem is polynomial in the $l_1$ case) |
Determinant of eigenvalues | After edit by the OP: Yes. Assuming your $x$ is real, the matrix that you (now) give us has complex eigenvalues. The below discussion was from when you gave us a matrix which actually had real eigenvalues; it points the way to finding real matrices with complex eigenvalues. It still describes a way to find simpler examples.
If you want to find a $2\times 2$ matrix which has real coefficients but complex eigenvalues, consider the following: for the matrix
$$
A:=\begin{bmatrix}a & b\\c & d\end{bmatrix},
$$
we have
$$
\det(A-\lambda I)=(a-\lambda)(d-\lambda)-bc=\lambda^2-(a+d)\lambda+(ad-bc)=0
$$
precisely when
$$
\lambda=\frac{(a+d)\pm\sqrt{(a+d)^2-4(ad-bc)}}{2}.
$$
In order for this to have no real roots (and hence two complex roots), it is then necessary and sufficient to choose $a$, $b$, $c$, and $d$ such that
$$
(a+d)^2-4(ad-bc)<0.
$$
So, for instance, you could choose $a=d=0$ and $b=1$, $c=-1$. You get the matrix
$$
\begin{bmatrix}0 & 1\\-1 & 0\end{bmatrix},
$$
which you can check has eigenvalues $\pm i$.
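A one-line numerical confirmation (my own addition):

```python
import numpy as np

# eigenvalues of the matrix above: expect +i and -i
print(np.linalg.eigvals(np.array([[0.0, 1.0], [-1.0, 0.0]])))
```

which prints the pair $\pm i$ (possibly in either order). |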
Integrating $\frac{x^2+1}{x^4+1}$ | Factoring isn't hard: $$x^4+1=x^4+2x^2+1-2x^2=(x^2+1)^2-(\sqrt{2}x)^2=(x^2-\sqrt{2}x+1)(x^2+\sqrt{2}x+1),$$ so
$$\frac{x^2+1}{x^4+1}=\frac12\left(\frac1{x^2+\sqrt{2}x+1}+\frac1{x^2-\sqrt{2}x+1}\right),$$
because $$\frac1{x^2+\sqrt{2}x+1}+\frac1{x^2-\sqrt{2}x+1}=\frac{(x^2-\sqrt{2}x+1)+(x^2+\sqrt{2}x+1)}{(x^2-\sqrt{2}x+1)(x^2+\sqrt{2}x+1)}=\frac{2(x^2+1)}{x^4+1}.$$
That means $$\int\frac{x^2+1}{x^4+1}\,dx=\frac12\left(\int\frac1{x^2+\sqrt{2}x+1}\,dx+\int\frac1{x^2-\sqrt{2}x+1}\,dx\right).$$
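For completeness (my own continuation of the computation): completing the square as $x^2\pm\sqrt{2}x+1=\left(x\pm\frac{\sqrt{2}}{2}\right)^2+\frac{1}{2}$, each piece is an arctangent, and
$$\int\frac{x^2+1}{x^4+1}\,dx=\frac{1}{\sqrt{2}}\left(\arctan\left(\sqrt{2}\,x+1\right)+\arctan\left(\sqrt{2}\,x-1\right)\right)+C,$$
which can be checked by differentiating. |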
Equation of line in 3d space passing in two points in a form of ax+by+cz+d=0 | In 3d space your equation describes a plane, not a line.
The usual parametric form for a line is $\vec{v} + t\vec{w}$, where $t$ is a scalar parameter.
In your case you can take $\vec{v} = (x_1,y_1,z_1)$ and
$\vec{w} = (x_2,y_2,z_2) - (x_1,y_1,z_1)$, for example. |
Fourier transformation of sin, cos, sinh and cosh | Because there are formulae:
$$\sin x=\frac{e^{ix}-e^{-ix}}{2i}\quad\cos x=\frac{e^{ix}+e^{-ix}}2$$
$$\sinh x=\frac{e^x-e^{-x}}2\quad\cosh x=\frac{e^x+e^{-x}}2$$
Use them and the Fourier transform of $e^{bx}$.
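For example (my own addition; conventions vary), with the transform defined as $\hat f(\omega)=\int_{-\infty}^{\infty}f(x)e^{-i\omega x}\,\mathrm{d}x$, the distributional identity $\mathcal{F}\left[e^{iax}\right](\omega)=2\pi\,\delta(\omega-a)$ together with the first two formulae gives
$$\mathcal{F}[\cos ax](\omega)=\pi\big(\delta(\omega-a)+\delta(\omega+a)\big),\qquad \mathcal{F}[\sin ax](\omega)=-i\pi\big(\delta(\omega-a)-\delta(\omega+a)\big).$$
For $\sinh$ and $\cosh$, note that $e^{\pm x}$ is not a tempered distribution, so their Fourier transforms do not exist even distributionally; one works instead in a more general framework, e.g. with the bilateral Laplace transform. |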
What conditions ensure that a functor has a left / right inverse. | Usually what you ask for is that a fully faithful functor $F$ has a left adjoint $L$ (and in that case, the counit $LF \to 1$ of the adjunction is an isomorphism), or a right adjoint $R$, and in that case the unit $1\to RF$ of the adjunction is an isomorphism.
Point is, you need $F$ to be fully faithful both times! |
Evaluate $\int_0^{\pi} \frac{\cos m\theta-\cos m \phi}{\cos \theta - \cos \phi} \text{d}\theta$ | Here is my approach to the problem. It is natural to think of complex analysis. Let $\sqrt{-1}=i$. Now we will prove that:
$$I_m:=\int_{0}^{\pi} \frac{\cos mx}{\cos x - \cos \phi}\,\mathrm{d}x = \pi\cdot \frac{\sin m\phi}{\sin \phi},$$
where the integral is understood as a principal value (the integrand has a singularity at $x=\phi$). Since the integrand is even in $x$,
$$I_m=\frac{1}{2}\int_{-\pi}^{\pi} \frac{\cos mx}{\cos x - \cos \phi}\,\mathrm{d}x=\frac{1}{2}\int_{-\pi}^{\pi} \frac{e^{ix}\left(e^{imx}+e^{-imx}\right)}{\left(e^{ix}-e^{i\phi}\right)\left(e^{ix}-e^{-i\phi}\right)}\,\mathrm{d}x,$$
using $\left(e^{ix}-e^{i\phi}\right)\left(e^{ix}-e^{-i\phi}\right)=2e^{ix}\left(\cos x-\cos\phi\right)$.
Let $z=e^{ix}$, so $\mathrm{d}z = iz\,\mathrm{d}x$. By partial fractions,
$$\frac{1}{\left(z-e^{i\phi}\right)\left(z-e^{-i\phi}\right)}=\frac{1}{2i\sin\phi}\left(\frac{1}{z-e^{i\phi}}-\frac{1}{z-e^{-i\phi}}\right),$$
and hence
$$I_m = \frac{-1}{4\sin\phi}\int_{\vert z\vert=1}\left(z^m+z^{-m}\right)\left(\frac{1}{z-e^{i\phi}}-\frac{1}{z-e^{-i\phi}}\right)\mathrm{d}z=A_m+B_m,$$
where
$$A_m:=\frac{-1}{4\sin\phi}\int_{\vert z\vert=1}z^m\left(\frac{1}{z-e^{i\phi}}-\frac{1}{z-e^{-i\phi}}\right)\mathrm{d}z,$$
$$B_m:=\frac{-1}{4\sin\phi}\int_{\vert z\vert=1}z^{-m}\left(\frac{1}{z-e^{i\phi}}-\frac{1}{z-e^{-i\phi}}\right)\mathrm{d}z.$$
Both poles $e^{\pm i\phi}$ lie on the unit circle, so these contour integrals are again principal values: a simple pole on the contour contributes $\pi i$ times its residue, while a pole strictly inside contributes the full $2\pi i$.
The integrand of $A_m$ has no pole at the origin, so only the half residues on the contour contribute:
$$A_m=\frac{-1}{4\sin\phi}\cdot\pi i\left(e^{im\phi}-e^{-im\phi}\right)=\frac{-\pi i}{4\sin\phi}\cdot 2i\sin m\phi=\frac{\pi}{2}\cdot\frac{\sin m\phi}{\sin\phi}.$$
For $B_m$ with $m\geq1$, the integrand also has a pole of order $m$ at $z=0$. From $\frac{1}{z-a}=-\sum_{k\geq0}\frac{z^k}{a^{k+1}}$, the residue of $\frac{z^{-m}}{z-a}$ at the origin is $-a^{-m}$, so
$$\int_{\vert z\vert=1}\frac{z^{-m}}{z-e^{\pm i\phi}}\,\mathrm{d}z=2\pi i\left(-e^{\mp im\phi}\right)+\pi i\,e^{\mp im\phi}=-\pi i\,e^{\mp im\phi},$$
and therefore
$$B_m=\frac{-1}{4\sin\phi}\left(-\pi i\,e^{-im\phi}+\pi i\,e^{im\phi}\right)=\frac{\pi}{2}\cdot\frac{\sin m\phi}{\sin\phi}=A_m.$$
Adding the two pieces gives
$$I_m=A_m+B_m=\pi\,\frac{\sin m\phi}{\sin\phi},$$
which also covers $m=0$, where both sides vanish ($I_0=0$).
Your result then follows, since
$$\int_0^{\pi} \frac{\cos m\theta-\cos m \phi}{\cos \theta - \cos \phi}\, \mathrm{d}\theta=I_m - \cos(m\phi)\, I_0 =\pi\, \frac{\sin m\phi}{\sin \phi},$$
using $I_0=0$. (Unlike $I_m$ itself, this last integral needs no principal value: the numerator vanishes together with the denominator at $\theta=\phi$, so the singularity is removable.)
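A quick numerical check of the identity (my own addition; $m$ and $\phi$ are arbitrary test values):

```python
import numpy as np
from scipy.integrate import quad

m, phi = 5, 1.2  # arbitrary test values

def integrand(t):
    # the singularity at t = phi is removable: numerator and denominator vanish together
    return (np.cos(m * t) - np.cos(m * phi)) / (np.cos(t) - np.cos(phi))

val, _ = quad(integrand, 0, np.pi, points=[phi])
print(val, np.pi * np.sin(m * phi) / np.sin(phi))
```

The two printed values agree to quadrature precision. |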
Determine the nature of this series? | Hint : $$\ln(n!)=\sum_{k=1}^n \ln(k) \leq n \ln(n)$$ |
Matrix norm via singular values | In a finite dimensional space $A$ has an adjoint $A^*$ and $||A^*|| = ||A||$
Let $B = AA^*$ then $||B|| = ||A A^*|| = ||A||^2$
$B$ is self adjoint and non-negative therefore has a spectrum of real non-negative eigenvalues {$\lambda_i$}, corresponding orthogonal eigenspaces {$E_i$}, and projections {$P_i$} onto those eigenspaces. Furthermore $B = \Sigma \lambda_i P_i$.
Then by your given initial result, $||B|| = max(|| \lambda_i P_i||) = max(|\lambda_i|.|| P_i||) $. But for each projection, $||P_i|| = 1$.
So, $||B|| = max(|\lambda_i| )$ and $||A|| = ||B||^{1/2} = (max(|\lambda_i|) ^{1/2} = max(|\lambda_i|^{1/2}) $.
Since the eigenvalues of $B$ are non-negative then $||A|| = max(\lambda_i^{1/2})$ which by definition is the maximum of the singular values of $A$ |
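This is easy to verify numerically (my own addition): NumPy's `np.linalg.norm(A, 2)` computes the operator $2$-norm, and `np.linalg.svd` returns the singular values.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))

# operator 2-norm versus largest singular value: the two coincide
print(np.linalg.norm(A, 2))
print(np.linalg.svd(A, compute_uv=False).max())
```

Both lines print the same number. |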
Proving that a symmetric matrix is positive definite iff all eigenvalues are positive | If $A$ is symmetric and has positive eigenvalues, then, by the spectral theorem for symmetric matrices, there is an orthogonal matrix $Q$ such that $A = Q^\top \Lambda Q$, with $\Lambda = \text{diag}(\lambda_1,\dots,\lambda_n)$. If $x$ is any nonzero vector, then $y := Qx \ne 0$ and
$$
x^\top A x = x^\top (Q^\top \Lambda Q) x = (x^\top Q^\top) \Lambda (Q x) = y^\top \Lambda y = \sum_{i=1}^n \lambda_i y_i^2 > 0
$$
because $y$ is nonzero and $A$ has positive eigenvalues.
Conversely, suppose that $A$ is positive definite and that $Ax = \lambda x$, with $x \ne 0$. WLOG, we may assume that $x^\top x = 1$. Thus,
$$0 < x^\top Ax = x^\top (\lambda x) = \lambda x^\top x = \lambda, $$
as desired.
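A small numerical illustration of the equivalence (my own addition; the matrix is built to be positive definite by construction):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)  # symmetric and positive definite by construction

print(np.linalg.eigvalsh(A))    # all eigenvalues are positive
x = rng.standard_normal(4)
print(float(x @ A @ x) > 0)     # the quadratic form is positive
```

All printed eigenvalues are positive, and the quadratic-form test returns `True`. |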
A group with 3 subgroups. prove it is cyclic | Let $H$ be the third subgroup, so by assumption $\{e\}\ne H\ne G$. Then there exists $x\in G\setminus H$. Then $\langle x\rangle$ must be one of $\{e\}, H, G$ and it certainly is neither of the first two. (The argument can be generalized as follows: A group with a proper subgroup that contains all proper subgroups is cyclic) |
Operator norm of a functional | In general, if $V$ and $W$ are normed linear spaces, and $U: V \to W$ is a linear operator, we define the norm of $U$ as
$$\| U \| = \inf \{M>0 \mid \| U(x) \| \le M \| x \| \ \forall x \in V \} = \sup _{\| x \| \le 1} \| U(x) \| = \sup _{\| x \| = 1} \| U(x) \| = \sup _{x \neq 0} \frac {\| U(x) \|} {\| x \|}$$
(all of these values are equal).
In particular, if $W = \Bbb C$ (or $\Bbb R$), just replace the norm in $W$ with the usual absolute value. |
Calculate average time | I see two approaches for this. If there's some time of day when nothing happens, e.g. 4 a.m., you can let the wraparound occur at that time; for instance times from 1 a.m. to 4 a.m. would become times from 25:00 to 28:00.
If there's no such natural cutoff, you could use directional statistics; from the Wikipedia article:
Other examples of data that may be regarded as directional include statistics involving temporal periods (e.g. time of day, week, month, year, etc.), [...]
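A minimal sketch of the circular-mean idea (my own illustration; the times are made up): map each time of day to an angle, average the corresponding unit vectors, and map the resulting angle back to a time.

```python
import numpy as np

times = np.array([23.5, 0.25, 1.0, 22.75])  # hypothetical times of day, in hours
angles = 2 * np.pi * times / 24              # wrap the 24-hour day onto the circle

# circular mean: average the unit vectors, then take the angle of the mean vector
mean_angle = np.arctan2(np.sin(angles).mean(), np.cos(angles).mean())
mean_time = (24 * mean_angle / (2 * np.pi)) % 24
print(mean_time)  # roughly 23.9, i.e. just before midnight
```

The naive arithmetic mean of these four times is about $11.9$ (near noon), while the circular mean correctly lands just before midnight. |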
Solve(Using Logarithms) | You need to use the following identity:
$$\ln a + \ln b = \ln ab$$
$$\ln (a+1)^{\frac{1}{2}} + \ln 5 = 1$$
$$\ln 5(a+1)^{\frac{1}{2}} = 1$$
$$5(a+1)^{\frac{1}{2}} = e^1$$
$$25(a+1) = e^2$$
$$a = \frac{e^2}{25}-1$$ |
Where do you see cyclic quadrilaterals in real life? | I can't think of any applications, and I doubt any satisfactory ones exist - for example, as noted in the comments there may well have been connections to astronomy, but I think it's fair to suggest that almost no-one who is being taught circle theorems is going to use them in their life at any point.
Thus, I'm going to interpret this question as:
Why would you bother learning a theorem that has no application in real life?
And I think there are two good answers to this question:
1. It is interesting
This is, really, the only reason you're taught anything in your life other than how to pay taxes. Geometry is something that lots of people over a long period of time have found to be intrinsically interesting. The reasons for this are complicated - it's a good intellectual exercise, and for many people intellectual exercises are something they enjoy doing.
2. It forces you to think logically
The patterns of thought people generally use in mathematics are valuable. Logical arguments are important in all walks of life, and being able to understand and interpret them is an extremely valuable life skill which you really should want to have.
I have a lot of sympathy with this question, for the following reason: you are probably taught mathematics very badly. The arguments I give above really rely on the idea that you are taught how to prove theorems (and Euclidean geometry is a fantastic exercise in proof). Without that, I would claim that learning geometry really has no value. I would even go so far as to say you shouldn't bother going so far as to learn basic trigonometry (unless you need it to be an engineer or something), unless you study its proof. That really is where all the value, and all the fun, is.
This is not your fault. But there is something you can do about it. Look up a proof, try to understand it, and if you're lucky you'll get a little intellectual buzz from the 'aha!' moment of it all coming together. But, I'm sorry to say, you'll probably have to do this yourself. Mathematics teaching is woeful in the vast majority of schools, and statistically speaking you are unlikely to even have a teacher capable of explaining to you why these results are true, let alone interesting or useful.
So, on the off chance that this answer has piqued your curiosity, I recommend writing another question, called "How do you prove interesting facts about cyclic quadrilaterals?", and you might get a more satisfying answer. |
An upper bound for number of edges | I'll use two claims to prove this:
1. $\frac{\vert V(G)\vert}{2}\leq\Delta(G)$
2. $\vert E(G)\vert\leq \frac{\vert V(G)\vert }{2}\cdot\Delta(G)$
Using that, we get:
$$\vert E(G)\vert \leq \frac{\vert V(G)\vert}{2}\cdot\Delta(G)\leq \Delta(G)^2$$
which is what we wanted to prove.
Proof of 1:
If $\bar G$ (the complement of $G$) is disconnected, it has more than one connected component (so at least two).
Therefore, one of those connected components has size at most $\frac{\vert V(G)\vert}{2}$.
Denote it by $A$.
Now, let $v$ be some vertex in $A$.
Since no edge of $\bar G$ leaves the component $A$, $v$ is adjacent (in $G$) to every vertex $u\notin A$.
Therefore
$$\deg_G(v)\geq\vert V(G)\vert-\vert A\vert\geq \frac{\vert V(G)\vert}{2},$$
and since $\Delta(G)\geq \deg_G(v)$, we get $\frac{\vert V(G)\vert}{2}\leq\Delta(G)$.
Proof of 2:
$$2\vert E(G)\vert = \sum_{v\in V(G)} \deg(v)\leq\sum_{v\in V(G)} \Delta(G)=\vert V(G)\vert\cdot\Delta(G)\implies \\
\vert E(G)\vert\leq \frac{\vert V(G)\vert}{2}\cdot\Delta(G)$$
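As a sanity check of the resulting bound $\vert E(G)\vert\leq\Delta(G)^2$ (my own addition; it assumes, as in the proof of claim 1, that the complement $\bar G$ is disconnected), one can test it on random graphs with NetworkX:

```python
import networkx as nx

# test |E(G)| <= Delta(G)^2 on random graphs whose complement is disconnected
for n in range(2, 9):
    for seed in range(50):
        G = nx.gnp_random_graph(n, 0.7, seed=seed)
        if nx.is_connected(nx.complement(G)):
            continue  # the bound is only claimed when the complement is disconnected
        delta = max(d for _, d in G.degree)
        assert G.number_of_edges() <= delta ** 2
print("bound holds on all sampled graphs")
```

The assertion never fires on the sampled graphs. |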
Pointwise convergent, increasing sequence | At least for the pointwise convergence it is definitely wrong. For $f_{n}(x)=x^{n}$, $x\in[0,1]$, these are absolutely continuous, but the pointwise limit is not even continuous. |
Number of open sets in a metric space | Hint: show that in any finite metric space, all singletons (sets with a single element) are open. From there, it is easy to show that every subset of a finite metric space is open. |
0th-order differential operator vs. 1st-order differential operator on a vector bundle $(E, \pi,M)$ | I couldn't follow your derivation that $D(u\mu) - uD(\mu)=0$. But I think you're misinterpreting the definition. It means that $D$ is a first-order differential operator if and only if for each $u\in C^\infty(M)$, the map $F_u\colon \Gamma E \to \Gamma E$ defined by $F_u(\mu) = D(u\mu) - u D(\mu)$ is linear over $C^\infty(M)$ in $\mu$. More specifically, this means that the following identity holds for all $u,f\in C^\infty(M)$ and $\mu\in \Gamma E$:
$$
D(uf\mu) - u D(f\mu) = f\big(D(u\mu) - u D(\mu)\big).
$$
As for your second question, how you would prove it depends on specifically what the operator $D$ is. Here's an example. Let $E = TM$, the tangent bundle of $M$, and let $X$ be a smooth vector field on $M$. Consider the map $\mathscr L_X\colon \Gamma(TM)\to \Gamma(TM)$, i.e., Lie differentiation by $X$. A straightforward computation based on the properties of the Lie derivative shows that for all $u,f\in C^\infty(M)$ and all $Y\in \Gamma(TM)$,
$$
\mathscr L_X(ufY) - u \mathscr L_X(fY) = f\big( \mathscr L_X(uY) - u \mathscr L_XY\big).
$$
Thus $\mathscr L_X$ is a first-order differential operator. |
Prove that if $A$ is a square matrix with integer entries and $\det(A) = \pm 1$, then $A^{-1}$ contains all integer entries. | Hint: $A^{-1}=\frac{1}{\det(A)}\text{adj}(A)$. |
Mean Value Theorem continuity | $\tau$ is actually not only continuous but even differentiable. Just write down the definition of the derivative of $\tau$ at $\alpha$ and of the Gateaux derivative of $F$ at $x+\alpha t$ and compare. |
Limit calculation and discontinuity | What is $x^2 - 4$ at $x = 2$? You still have a denominator of zero.
As $x \to 2$, the limit of $\dfrac {x+3}{x -2}$ does not exist.
$$\lim_{x\to 2^-} \dfrac{x+3}{x-2} = -\infty$$
$$\lim_{x\to 2^+} \dfrac{x+3}{x-2} = +\infty$$
Edit: Note that the functions are equivalent everywhere except at $x=-2$. The original (second function in the post) is not defined at $x = -2$, whereas the subsequent is defined there. Since they have different domains on which each is defined, they are not strictly equivalent.
Recall that when finding the limit as $x\to -2$, we are not interested in the behavior of the function at $x = -2$, only as $x$ approaches $-2$, and for this purpose (finding the limiting behavior of the function as $x\to -2)$, the functions are equivalent. |
Prove that $f$ is continuous on $H$ if and only if... | For the first direction, I think you can show it in an easier way.
Take $x$ to be a limit point of $T$, and let $(x_n)$ be a sequence in $T$ with $x_n\to x$. By continuity of $f$, $f(x_n)\to f(x)$. Hence $f(x)\in \overline{f(T)}$. |
Simplify Inverse Trigonometry Expressions | Let $\displaystyle y=\tan\left(\cos^{-1}\left(\frac{x}{4}\right)\right)\;,$ Now Put $\displaystyle \cos^{-1}\left(\frac{x}{4}\right)=\phi\;,$
Then $\displaystyle \frac{x}{4} = \cos \phi$, so $\displaystyle \sec \phi = \frac{4}{x}$, and $\displaystyle \tan \phi = \pm \sqrt{\sec^2 \phi -1} = \pm \frac{\sqrt{16-x^2}}{x}.$
Since $\phi = \cos^{-1}\left(\frac{x}{4}\right) \in [0,\pi]$, we have $\sin\phi \geq 0$, so the positive root applies, and we get $\displaystyle y = \tan \phi = \frac{\sqrt{16-x^2}}{x}$ (the sign of $y$ is then carried by the $x$ in the denominator). |