Solve an equation with complex numbers | $$z- iz=5 +i$$
$$z=\frac{5+i}{1-i}$$
$$z=\frac{5+i}{1-i}\frac{1+i}{1+i}$$
$$z=2+3i$$ |
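As a quick sanity check (my own addition, not part of the original answer), Python's built-in complex type reproduces this:

```python
# Verify z = (5+i)/(1-i) equals 2+3i, and that it solves z - i*z = 5+i.
z = (5 + 1j) / (1 - 1j)
residual = z - 1j * z - (5 + 1j)
```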
How can I make this two parameter piecewise function continuous? | Let's start from the very basics. Say you have a piecewise defined function and in its definition, at some point, say $c$, the function is not continuous. The reasons for this might be
1) lhl≠rhl≠f(c)
2) either limit is not defined or is oscillatory.
3) lhl=rhl≠f(c)
Case 3 can be tackled easily by redefining the function at $c$ so that lhl = rhl = f(c).
But in your problem you have to deal with two variables and have only one equation, so you can do no more than parametrize your ordered pairs $(a,b)$ by one variable. |
This function seems to defy the integral test, where am I going wrong? | Since:
$\displaystyle\int_1^2f(x)\,\mathrm dx<f(1)$
$\displaystyle\int_2^3f(x)\,\mathrm dx<f(2)$
$\displaystyle\int_3^4f(x)\,\mathrm dx<f(3)$
and so on, you have$$\int_1^{+\infty}f(x)\,\mathrm dx<\sum_{n=1}^\infty f(n).$$ |
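Here is a small numeric illustration of the same inequality; the concrete decreasing function $f(x)=1/x^2$ is my own choice, not from the question:

```python
# For a decreasing f, each unit integral int_n^{n+1} f is below f(n),
# so the full integral stays below the series. Illustrate with f(x) = 1/x^2.
f = lambda x: 1.0 / x**2

def integral(a, b, steps=10000):
    # midpoint rule; for convex f this slightly underestimates, which is safe here
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

N = 200
series = sum(f(n) for n in range(1, N + 1))   # partial sum of sum f(n)
integ = integral(1, N + 1)                    # partial integral
```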
Show that resolvent is analytic outside the spectrum | Let $\lambda_0 \in \mathbb{C}\setminus \sigma(T)$. For $|\lambda - \lambda_0| < \frac1{\|R(\lambda_0)\|}$ we will show that $$R(\lambda) = \sum_{n=0}^\infty (\lambda - \lambda_0)^n R(\lambda_0)^{n+1}$$
Verify that in general we have $R(\alpha) - R(\beta) = (\alpha - \beta)R(\alpha)R(\beta)$. This implies
$$R(\lambda)\big(I- (\lambda - \lambda_0)R(\lambda_0)\big) = R(\lambda_0)$$
Note that $\|(\lambda - \lambda_0)R(\lambda_0)\| < 1$ by assumption. Hence $I- (\lambda - \lambda_0)R(\lambda_0)$ is invertible with $$\big(I- (\lambda - \lambda_0)R(\lambda_0)\big)^{-1} = \sum_{n=0}^\infty (\lambda - \lambda_0)^n R(\lambda_0)^{n}$$
Finally it follows
$$R(\lambda) = \big(I- (\lambda - \lambda_0)R(\lambda_0)\big)^{-1}R(\lambda_0) = \sum_{n=0}^\infty (\lambda - \lambda_0)^n R(\lambda_0)^{n+1}$$
which implies that $R$ is analytic on $\mathbb{C}\setminus \sigma(T)$. |
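A numerical sketch of this Neumann-series argument, using an arbitrary $2\times 2$ matrix of my own choosing and the convention $R(\lambda)=(T-\lambda I)^{-1}$ (which is consistent with the resolvent identity above):

```python
# Check R(lam) = sum_n (lam - lam0)^n R(lam0)^{n+1} for T with spectrum {2, 3}.
T = [[2.0, 1.0], [0.0, 3.0]]

def mat_sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(A):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]

def resolvent(lam):
    return mat_inv(mat_sub(T, [[lam, 0.0], [0.0, lam]]))

lam0, lam = 0.0, 0.5                  # lam is well inside the disc of convergence
R0 = resolvent(lam0)
series = [[0.0, 0.0], [0.0, 0.0]]
power = R0                            # holds R(lam0)^{n+1}, starting at n = 0
for n in range(80):
    c = (lam - lam0) ** n
    series = [[series[i][j] + c * power[i][j] for j in range(2)] for i in range(2)]
    power = mat_mul(power, R0)

direct = resolvent(lam)
err = max(abs(series[i][j] - direct[i][j]) for i in range(2) for j in range(2))
```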
Expected maximum number of collisions for universal hash function | For $a$ the point is that $Prob[A_j] \le nProb[A_j^1]$ because each slot has the same chance to have at least $j$ entries. It will actually be less than this because the right side counts cases where two slots each have $j$ entries twice while the left side counts them once. We were asked for an upper bound, so this is not a problem.
For $b$ the idea is to assume that each key maps to a slot randomly, so the chance a given key maps to slot $1$ is $\frac 1n$
For $c$ you just use $b$ and multiply by the number of subsets of size $j$ |
Maximum x-intercept with a parabola and differentiation? | That should be
$$x=v_0\cos(\theta)t$$
$$y=v_0\sin(\theta)t+\frac{1}{2}(-9.8) t^2$$
$\dot x=v_0\cos(\theta)$ is the constant horizontal velocity, which carries the projectile toward $x \rightarrow \infty $
To find $ x_{max}$ at starting height just let $y=0$ in the equation of parabolic trajectory.
To find the maximum height reached, set $\dot y = 0 $, treating $\theta$ as a constant; to find the angle that maximizes the range, next treat $\theta$ as the variable of differentiation, getting $45^\circ.$ |
find the derivative of function at given point | Notice $f(x) = \sin x$, $f'(x) = \cos x$, $f''(x) = - \sin x$, $f'''(x) = - \cos x$ and $f^{(IV)}(x) = \sin x$. How can you use this information to find the nth derivative of $f$ ? This is a hint. |
how to find differential of $\sin^{-1}x$ using first principle? | \begin{align*}
\theta &= \sin^{-1} (x+h) \\
\phi &= \sin^{-1} x \\
\sin (\theta-\phi) &= \sin \theta \cos \phi-\sin \phi \cos \theta \\
&= (x+h)\sqrt{1-x^{2}}-x\sqrt{1-(x+h)^{2}} \\
&= \frac{(x+h)^{2}(1-x^{2})-x^{2}[1-(x+h)^{2}]}
{(x+h)\sqrt{1-x^{2}}+x\sqrt{1-(x+h)^{2}}} \\
&= \frac{h(2x+h)}{(x+h)\sqrt{1-x^{2}}+x\sqrt{1-(x+h)^{2}}} \\
\lim_{h\to 0} \frac{\sin^{-1}(x+h)-\sin^{-1} x}{h} &=
\lim_{h\to 0} \frac{1}{h}\sin^{-1}
\left( \frac{2hx}{2x\sqrt{1-x^{2}}} \right) \\
&= \lim_{h\to 0} \frac{1}{h}
\sin^{-1} \left( \frac{h}{\sqrt{1-x^{2}}} \right) \\
&= \frac{1}{\sqrt{1-x^{2}}} \lim_{h\to 0}
\frac{\sin^{-1} \left( \frac{h}{\sqrt{1-x^{2}}} \right)}
{\frac{h}{\sqrt{1-x^{2}}}} \\
&= \frac{1}{\sqrt{1-x^{2}}}
\end{align*} |
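A quick numeric check of the conclusion (my own addition): the difference quotient of $\sin^{-1}$ should approach $1/\sqrt{1-x^2}$.

```python
import math

# Difference quotient of arcsin at a few points vs the claimed derivative.
def dq(x, h=1e-6):
    return (math.asin(x + h) - math.asin(x)) / h

errs = [abs(dq(x) - 1.0 / math.sqrt(1.0 - x * x)) for x in (0.0, 0.3, -0.5, 0.8)]
```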
Proofs by Induction - Showing that a > b and b > c means a > c. | It seems to me you are worried about three things.
1) If $a' < a$ and $b' < b$ and $c' < c$ can we say $a' + b' + c' < a + b + c$
The answer is yes. If $x < y$ then $x + a < y + a$ is a basic axiom of order so
$a' < a \implies a' + b' < a + b'$
$b' < b \implies a + b' < a + b$
And $c' < c \implies a' + b' + c' < a + b' + c' < a + b + c' < a + b + c$.
which brings up issue two
2) If $a < b$ and $b < c$ can we say $a < c$.
The answer is yes. Order is defined/axiomatically assumed to be transitive. This is a given-- for this exercise or in any context where order is presumed.
3) if $a < b$ then is $xa < xb$ (e.g. if $7/8 < 1$ so is $2^k(7/8) < 2^k *1$?)
The answer is CONDITIONALLY yes. If $x> 0$ and $a<b$ then $xa < xb$. This is an axiom of ordered fields (of which the real numbers are one), so you may take it as given.
If, however $x = 0$ then $0a = 0 = 0b$ of course and if $x < 0$ then $(-x)a < (-x)b$ so $(-x) a + (x)b < 0$ (see 1) and $xb < xa$. So if $x < 0$ then $xb < xa$. This is a basic proposition (not an axiom but easily proven). For this exercise you can take it as a given.
====
In some classes you will be given a formal and abstract definition of "<" and a formal and abstract definition of an "ordered field" of which the real numbers are one example, and given some basic definitions and axioms you will be asked to prove very basic propositions. (For example that if $x<y$ then $-y < -x$ or $1/y < 1/x$ or that $x^2 \ge 0$ and so on.)
But for now, we can simply take these all as given. |
Proof of a theorem about oscillation | The difference between $x$ and $y$ satisfies the equation
$$\frac{\text d}{\text dt}(x(t)-y(t))=A(x-y)+R_2(x).$$
This can be solved, formally, by variation of parameters, to give
$$x(t)-y(t)=\int_0^t e^{A(t-\tau)}R_2(x(\tau))\text d\tau.$$
Their separation is then bounded by
$$\begin{align}|x(t)-y(t)|
\leq\int_0^t | e^{A(t-\tau)}R_2(x(\tau))|\text d\tau
\leq\int_0^t e^{|A|(t-\tau)}|R_2(x(\tau))|\text d\tau.\end{align}$$
Now, $R_2(x)=O(x^2)$, which means that there exists a constant $M$ and a distance $X$ such that $|R_2(x)|\leq Mx^2$ whenever $|x|\leq X$. To be able to use this, we need to ensure that $|x(t)|$ will be no greater than some $\delta$-controllable constant for all times $t\in[0,T]$, and our tool to achieve this is making the initial condition $x(0)=x_0$ small enough. This is essentially the statement that the solution is continuous with respect to the initial condition (i.e. around the solution $x(t)\equiv0$ for $x(0)=0$).
This continuity is a standard fact in differential equations but I'll sketch a proof here. Any solution $x(t)$ must be differentiable, hence continuous, and hence bounded. This means the condition $|R_2(x(t))|\leq M_{x_0}x(t)^2$ will hold for all $t\in[0,T]$, for some suitable constant $M_{x_0}$. The right-hand side of the differential equation for $x(t)$ is therefore Lipschitz continuous:
$$|Ax(t)+R_2(x(t))|\leq L|x(t)|\tag1$$
for some $L>0$. (Alternatively, you might impose this to begin with.) Setting $u(t)=|x(t)|^2$, you have
$$
\frac{\text du}{\text dt}
\leq\left|\frac{\text du}{\text dt}\right|
= \left|2x(t)\frac{\text dx}{\text dt}\right|
\leq 2L |x(t)|^2=2L u(t).
$$
By Gronwall's inequality this implies that $|u(t)|\leq|u(0)|e^{2L t}$, which means that $|x(t)|\leq |x_0| e^{LT}$ for all $t\in[0,T]$.
So, in a nutshell: if you can assume or prove that (1) holds with $L$ independent of $x_0$, then your solution $|x(t)|$ will be bounded by $|x_0| e^{LT}$ for all $t\in[0,T]$. Additionally, this means that if you choose $|x_0|<\delta\leq Xe^{-LT}$, you can ensure that $|x(t)|<X$ for all $t\in[0,T]$.
Let's now return to the difference $|x(t)-y(t)|$. We can now apply the smallness bound on $R_2$, which gives us
$$\begin{align}
|x(t)-y(t)|
&\leq\int_0^t e^{|A|(t-\tau)}|R_2(x(\tau))|\text d\tau
\leq\int_0^t e^{|A|(t-\tau)} M|x(\tau)|^2\text d\tau
\\& \leq M\int_0^T e^{|A|T}|x_0|^2 e^{2LT}\text d\tau
\leq \delta^2 MT e^{(2L+|A|)T}
\end{align}$$
for all $t\in[0,T]$.
Thus, if you're given $\xi>0$ and $T>0$, then choosing $\delta=\min\{Xe^{-LT},\xi e^{-(2L+|A|)T}/MT\}$ you can ensure that
$$|x(t)-y(t)|<\xi\delta$$
for all $t\in[0,T]$.
Finally, note that while this is phrased in one-dimensional language, it holds with trivial extensions (i.e. simply with appropriately defined norms) for the multidimensional case, and therefore for higher-than-first-order ODEs. |
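To see the $O(|x_0|^2)$ separation concretely, here is a rough numerical sketch in one dimension; the choices $A=-1$, $R_2(x)=x^2$, $x_0=0.01$ and the Euler scheme are my own illustration, not part of the argument:

```python
# Euler-integrate x' = a*x + x^2 (full equation) and y' = a*y (linearization)
# from the same small initial condition and record their separation,
# which should be on the order of x0^2, as in the estimate above.
a, x0, T, dt = -1.0, 0.01, 1.0, 1e-3
steps = int(T / dt)
x = y = x0
max_sep = 0.0
for _ in range(steps):
    x += dt * (a * x + x * x)
    y += dt * (a * y)
    max_sep = max(max_sep, abs(x - y))
```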
Summation from a real number to a complex number | There is no standard definition for the expression
$$
\sum_{n=1}^i n,
$$
where $i$ is the square root of $-1$. You could choose to give it your own definition, but if you are going to do so then be sure to clearly mention what your definition of this summation is.
If you happened to encounter such a summation in some context without additional information, my best guess would be that $i$ is in fact not the square root of $-1$, and instead just some positive integer, so that
$$
\sum_{n=1}^i n = 1 + 2 + 3 + \dots + i,
$$
where $i$ is some positive integer. |
Given a mass distribution for a discrete random variable X, find alpha | You have correctly determined that $\alpha = 1$.
Then $E X = \sum_{i=1}^\infty i P[X=i] = \sum_{i=1}^\infty i {1 \over 2^i}$.
If $|x|<1$ then $\sum_{k=0}^\infty x^k = {1 \over 1-x}$ and for $x$ inside
the radius of convergence we can differentiate to get
$\sum_{k=1}^\infty k x^{k-1} = {1 \over (1-x)^2}$, and so $\sum_{k=1}^\infty k x^{k} = {x \over (1-x)^2}$. Setting $x={1 \over 2}$ will give the desired result. |
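A quick numeric confirmation (my own addition) that the closed form and the partial sums both give $E X = 2$ at $x = 1/2$:

```python
# Partial sums of sum_k k*x^k at x = 1/2 approach x/(1-x)^2 = 2,
# which is E[X] for P[X = i] = 1/2^i.
x = 0.5
partial = sum(k * x**k for k in range(1, 200))
closed = x / (1 - x) ** 2
```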
Evaluating the integral $\int_{-1}^1(5x^4-4x^3)\mathrm dx$ | The result is indeed $2$ and probably there's a typo in the book. Your solution is correct.
As you can see from the graph of $f(x) = 5x^4 - 4x^3$ there is no way the integral can be $0$:
(The graph is omitted here.) |
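A small midpoint-rule check (my own addition) that the integral really is $2$:

```python
# Numerically integrate 5x^4 - 4x^3 over [-1, 1]; the exact value is 2.
f = lambda x: 5 * x**4 - 4 * x**3
steps = 100000
h = 2.0 / steps
approx = sum(f(-1 + (i + 0.5) * h) for i in range(steps)) * h
```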
Unique Lifting Property of the Circle Proof | I presume you have some covering map $\epsilon:\Bbb R\to S^1$ and you want
$\varphi=\pi\circ\widetilde\varphi$ etc. Probably your $\epsilon$ maps
$x$ to $\exp(2\pi ix)$, and for definiteness' sake I'll assume this.
You have $z=\epsilon(r_1)=\epsilon(r_2)$. Then $r_2=r_1+k$
where $k$ is a nonzero integer. Take say, $\widetilde U_1=(r_1-\frac12,r_1+\frac12)$ and $\widetilde U_2=(r_2-\frac12,r_2+\frac12)$ and $U
=\epsilon(\widetilde U_1)=\epsilon(\widetilde U_2)$. Then each
of $\widetilde U_i$ is a component of $\epsilon^{-1}(U)$ and the argument
goes through. |
Generating functions for unimodal sequences | You’ve basically already correctly described how this generating function is derived. I would suggest to use different summation indices on the two sides of the equation, as they have different roles. On the left, $n$ is the number being composed. On the right, $n+1$ is the largest part of the strictly unimodal composition: the peak. This accounts for the factor $q^{n+1}$, and $(-q;q)_n=\prod_{k=1}^n\left(1+q^k\right)$ is the generating function for partitions with distinct parts of size at most $n$. Every strictly unimodal composition with peak $n+1$ can be uniquely split into a strictly increasing composition with parts of size at most $n$, the peak, and a strictly decreasing composition with parts of size at most $n$. Both strictly increasing compositions and strictly decreasing compositions are in bijection with partitions. |
Give an example of an equicontinuous family that does not converge uniformly | Take the functions $f_n(x)=x/n$ on $\Bbb R$; they converge pointwise to $f(x)=0$, but not uniformly on $\Bbb R$. |
Solve a nonlinear system of equations in 3 variables | It is a linear algebra problem ! So multiply the first by $x(y+z)$ to get
$$x+y+z=-\frac{2}{15}x(x+z)$$ similar for all the rest. Let $a=x+y+z$.
Then you have the linear system
$$\begin{pmatrix}\frac{2}{15}&\frac{2}{15}&0\\
\frac{2}{3}&0&\frac{2}{3}\\
0&\frac{1}{4}&\frac{1}{4}
\end{pmatrix}\begin{pmatrix}xy\\xz\\yz\end{pmatrix}=\begin{pmatrix}-1\\-1\\-1\end{pmatrix}a$$
Solving this by row reduction we get
$$2xy=-5a$$
$$xz=-5a$$
$$yz=a$$
Substituting and canceling we have
$$x=-5y$$
$$z=2y$$
from which we see by adding those equations that $$a:=x+y+z=-2y$$
And then we substitute again to have
$$yz=-2y$$ and thus
$$z=-2.$$ |
No well-ordered set is isomorphic to any of its initial segments | Hint: We will derive a contradiction to $m$ being the $\trianglelefteq$-minimum element of $B$. As $f(m) \neq m$, then either $f(m) \triangleleft m$ or $m \triangleleft f(m)$. In the former case compare $f(f(m))$ and $f(m)$, and in the latter case consider the unique $x \in A$ such that $f(x) = m$. |
Anti Derivatives of Product Rule | A couple of notes:
$$\int[f(x)+g(x)]\,dx=\int f(x)\,dx+\int g(x)\,dx$$
$$\frac{d}{dx}[F(x)G(x)]=F(x)G'(x)+F'(x)G(x)=F(x)g(x)+f(x)G(x)$$ |
Find closed form of $f(a,b,c)$ | Note that $|x-y|+x+y=2\max\{x,y\}$ then $$f(a,b,c)=\left|\left|\dfrac{1}{a}-\dfrac{1}{b}\right|+\dfrac{1}{a}+\dfrac{1}{b}-\dfrac{2}{c}\right|+\left|\dfrac{1}{a}-\dfrac{1}{b}\right|+\dfrac{1}{a}+\dfrac{1}{b}+\dfrac{2}{c}=$$$$=|2\max\{1/a,1/b\}-\dfrac{2}{c}|+2\max\{1/a,1/b\}+\dfrac{2}{c}=$$$$=2\max[2\max\{1/a,1/b\};2/c]=4\max[\max\{1/a,1/b\};1/c]=4\max\{1/a,1/b,1/c\}.$$ |
The probability that a linear Brownian motion will hit a curve | This is very much only a partial answer but the problem intrigued me and it wouldn't fit in a comment, so just see it as inspiration for others who have more time maybe.
Suppose the curve is piecewise linear, i.e. there exist times $$0 = t_0 < t_1 < ... $$ and values $f_k \in \mathbb{R}$ with $f_0 > 0$ such that
$$
f(t) = \sum_{k=0}^\infty \chi_{[t_k, t_{k+1}]}(t) \cdot \underset{:= g_k}{\underbrace{(f_k + (t - t_k) \frac{f_{k+1} - f_k}{t_{k+1} - t_k})}}.
$$
Then
$$
\mathbb{P}(\exists t > 0: B_t - f(t) = 0) = \mathbb{P}\left(\bigcup_{k = 0}^\infty \{\exists t \in [t_k, t_{k+1}]: B_t - g_k(t) = 0\} \right).
$$
Define
$$W^k_t = B_{t + t_k} - B_{t_k}$$
We know that $W^k$ is a BM.
Define furthermore
$$c_k = \frac{f_{k+1} - f_k}{t_{k+1} - t_k}.$$
Then for $t \in [0, t_{k+1} - t_k]$
$$
B_{t + t_k} - g_k(t + t_k) = B_{t_k} - f_k + W^k_t - t c_k
$$
Then
$$
\mathbb{P}(\exists t \in [t_k, t_{k+1}]: B_t - g_k(t) = 0 ) = \mathbb{P} (\exists t \in [0,t_{k+1} - t_k]: B_{t_k} - f_k + W^k_t - t c_k = 0)
$$
$$ = \mathbb{E} \left[ \chi_{\exists t \in [0,t_{k+1} - t_k]: B_{t_k} - f_k + W^k_t - t c_k = 0} \right]
= \mathbb{E} \left[ \mathbb{E} \left[ \chi_{\exists t \in [0,t_{k+1} - t_k]: B_{t_k} - f_k + W^k_t - t c_k = 0} \vert \sigma(B_{t_k})\right] \right]
$$
$$
= \mathbb{E}_{- f_k} \left[ \mathbb{E} \left[ \chi_{\exists t \in [0,t_{k+1} - t_k]: B_{t_k} + W^k_t - t c_k = 0} \vert \sigma(B_{t_k})\right] \right]
$$
Now we use the Strong Markov property (or maybe the weak one suffices here?)
$$
= \mathbb{E}_{- f_k} \left[ \mathbb{E}_{B_{t_k}} \left[ \chi_{\exists t \in [0,t_{k+1} - t_k]: W^k_t - t c_k = 0} \right] \right]
$$
$$
= \mathbb{E}_{- f_k} \left[ \mathbb{P}_{B_{t_k}} \left( T_0^{c_k} \leq t_{k+1} - t_k \right) \right],
$$
where $T_0^{c_k}$ is the hitting time of $0$ for a Brownian motion with drift $-c_k$. This can be computed easily, as the density of this hitting time is known; you should be able to find it online.
For clarity let me rewrite which process starts where
$$
= \mathbb{E}_{B_0 = - f_k} \left[ \mathbb{P}_{W^k_0 = B_{t_k}} \left( T_0^{c_k} \leq t_{k+1} - t_k \right) \right].
$$
So this quantity should be computable. But the problem is that the term
$$
\mathbb{P}\left(\bigcup_{k = 0}^\infty \{\exists t \in [t_k, t_{k+1}]: B_t - g_k(t) = 0\} \right)
$$
is not easily reduced to these probabilities. However from here you can maybe start making estimates.
Then to get to smooth functions you approximate with a piecewise linear function and also make some estimates. |
Expression For $u_{n+1}=u_n^2+u_n$ | There is no simple formula for $u_n$ in terms of $a$. Indeed, $u_n$ is a polynomial expression on $a$ of degree $2^{n-1}$.
If you seek a simple formula for $u_n$ to understand its convergence, then this is hopeless because convergence is explained by Julia sets, which are quite complicated. In this case, the Julia set of $f(z)=z^2+z$:
(Image computed with WA) |
Pattern Recognition with basic probability | In answer to your first question, I assume that the problem is that you want to decide whether $\omega_1$ or $\omega_2$ happened given an observation of $x$, and you want to decide this in such a way as to minimize the cost. Your total cost for deciding $\alpha_j$ is just
$$R(\alpha_j|x)=\sum_{i=1}^2\lambda(\alpha_j|\omega_i)p(\omega_i|x).$$
I assume that you know how to compute $p(\omega_i|x)$ using Bayes rule, and I do not know how your $\lambda$ matrix is arranged, whether the decisions are in the rows or columns, so I'll let you do some work before continuing.
For your second question, let me preface my remarks by saying that I am not an academic, I am just a code monkey teetering on the brink of geezerdom, and one of the many mental infirmities bestowed by geezerdom is an irresistible urge to give advice that one knows will go unheeded.
First: Your best bet for figuring out how best to understand pattern recognition is to ask your professor or TA. They are best suited to this purpose, as they know the strengths and weaknesses of the prerequisite courses, and your strengths and weaknesses as well, and they know both much better than some random fool on the internet. Amazingly, your university probably mandates that both your TA and professor sets aside some time for just this purpose.
Note: Approaching your professor and blaming your lack of success on his/her teaching style is not likely to be a good strategy. Most professors are human, or nearly so, and are not likely to exhibit the desired amount of compassion if you blame your problems on them.
Second: Review your prerequisites, in this case probability, particularly conditional probability and Bayes rule. This means picking up your probability textbook and doing as many exercises as you can stand, doing at least enough so that the tedium outweighs the difficulty. In mathematics there are usually two reasons for difficulty, lack of facility with the prerequisites, and not paying enough attention to the details. In your case, I suspect both, but you cannot fix the second until you fix the first.
Third: Learn to use google. In about a minute I found this which contains a worked example much like your problem. |
A family of analytic functions that map to the open unit disk | Let $s = \sup \{ \lvert f'(a)\rvert : f \in F\}$, and pick a sequence $(f_n)$ in $F$ with
$$\lim_{n\to\infty} \lvert f_n'(a)\rvert = s.$$
Since $F$ is a normal family, the sequence $(f_n)$ has a subsequence that converges locally uniformly to a holomorphic function $g$ on $U$. Without loss of generality, suppose the entire sequence converges to $g$ locally uniformly. By the pointwise convergence, it follows that $\lvert g(z)\rvert \leqslant 1$ for all $z\in U$. Since $g(a) = \lim f_n(a) = 0$, it follows that $g$ is not a constant of modulus $1$, hence $\lvert g(z)\rvert < 1$ for all $z\in U$ and $g\in F$. By one of Weierstraß' theorems, we have
$$g'(a) = \lim_{n\to\infty} f_n'(a),$$
hence
$$\lvert g'(a)\rvert = s,$$
so a) $s < +\infty$, and b) the supremum is in fact a maximum. |
Poisson Distribution of Underfilled Bottles | My guess is you were told something about the number of bottles in each case and you should have used the binomial distribution rather than the Poisson distribution. |
Prove $n < 2^n$ using principle of well order | If $C\neq\emptyset$, then it has a smallest element $c$, which cannot be $0$, since $0<2^0$. And it cannot be $1$, since $1<2^1$.
On the other hand, since $c-1<c$, $c-1<2^{c-1}$. But then$$c=(c-1)+1\leqslant2(c-1)<2\times2^{c-1}=2^c,$$which is impossible, since $c\in C$. |
Probability Limit of a Variable | So it's something like this.
Consider a random variable
$S = \sum_{i=1}^m a_i(n)\mathbb{1}_{z \leq x_i(n) < y}$.
You should check for yourself that the expectation of this is exactly $F(y) -F(z)$, i.e.
$E[S] = F(y) - F(z)$.
Now think about how to show the convergence in probability: Chebyshev's inequality comes to mind.
$\mathbb{P}(|S-E[S]| > t) \leq \frac{\operatorname{Var}(S)}{t^2} = \frac{\operatorname{Var}(I_{x_i \in [z,y]})\sum_{i=1}^n a_i^2}{t^2} \leq \frac{n\max_i(a_i^2)\operatorname{Var}(I_{x_i\in[z,y]})}{t^2}$. If $\max_i a_i \rightarrow 0$ and the $a_i$ all add up to $1$, then for $n$ large enough $a_i <\frac{1}{n}$, so $a_i^2 \leq \frac{1}{n^2}$, concluding the argument. |
Modify a Dehn presentation | The answer to your question is positive. The relevant references are:
A.Yu. Olshanski, SQ-universality of hyperbolic groups, Sb. Math. 186 (1995), no. 8, 1199–1211.
T. Delzant, Sous-groupes distingues et quotients des groupes hyperboliques, Duke Math. J. 83 (1996), no. 3, 661–682.
If you cannot access these papers, read the version for relatively hyperbolic groups in
G. Arzhantseva, A. Minasyan, D. Osin, The SQ-universality and residual properties of relatively hyperbolic groups. |
prove $\sum_{i=0}^{n}\binom{2n+1}{i}=2^{2n}$ | Consider subsets of the set $[1,\ldots,2n,\theta]$ of size $\leq n$. Given such a subset $T$, map it to a subset of $[1,\ldots,2n]$ in the following way. If $\theta\notin T$, then $T$ itself is a subset of $[1,\ldots,2n]$, so send it to itself. If $\theta\in T$, then send $T$ to its complement. This mapping gives a bijection between subsets of $[1,\ldots,2n,\theta]$ of size $\leq n$ and subsets of $[1,\ldots,2n]$ of any size. |
The distribution of the x-coordinate on unit circle | If you generate the points uniformly on the unit circle, you will get the same distribution as if you generate them uniformly on the upper half circle. The fraction of points in the interval $[a,b]$ will be the fraction of the arc length between $x=a$ and $x=b$. The angle at $x=a$ is $\arccos a$ and at $x=b$ is $\arccos b$, so the fraction in $[a,b]$ is $\frac 1{\pi}|\arccos a - \arccos b|$ |
Proving $\lim\limits_{n→+∞}\sin(2\pi \sqrt{n^2+\sqrt{n}})=0$ | Since $\sin$ is periodic with period $2\pi$, you know that\begin{align}(\forall n\in\mathbb N):\sin\left(2\pi\sqrt{n^2+\sqrt n}\right)&=\sin\left(2\pi\sqrt{n^2+\sqrt n}-2\pi n\right)\\&=\sin\left(2\pi\left(\sqrt{n^2+\sqrt n}-n\right)\right).\end{align}But$$\lim_{n\to\infty}\sqrt{n^2+\sqrt n}-n=0.$$ |
Number of subsets without objects being adjacent | Part 1
Take out the $3$ chosen people, $17$ are left with $18$ gaps, as shown below:
$\large\uparrow\bullet\large\uparrow\bullet\large\uparrow\bullet\large\uparrow\bullet\large\uparrow\bullet\large\uparrow\bullet\large\uparrow\bullet\large\uparrow\bullet\large\uparrow\bullet\large\uparrow\bullet\large\uparrow\bullet\large\uparrow\bullet\large\uparrow\bullet\large\uparrow\bullet\large\uparrow\bullet\large\uparrow\bullet\large\uparrow\bullet\large\uparrow$
Replace the chosen people in any of $\binom{18}{3}$ ways
Try Part 2 yourself. |
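Part 1 can be cross-checked by brute force (my own addition, assuming the $20$ people sit in a row):

```python
from itertools import combinations
from math import comb

# Number of ways to pick 3 of 20 positions in a row with no two adjacent,
# versus the gap argument C(18, 3).
n, k = 20, 3
count = sum(1 for c in combinations(range(n), k)
            if all(c[i + 1] - c[i] >= 2 for i in range(k - 1)))
expected = comb(n - k + 1, k)  # C(18, 3)
```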
Minimum steps adding edges to form a complete graph | As mentioned in comments there is a trivial lower bound on $m$: $m \ge \left\lceil\frac{t(t - 1)}{n(n - 1)}\right\rceil$. This bounds is not tight even for $t = 4$ and $n = 3$ when $m = 3$ while the bound is $2$.
Let's think deeper: $\deg_{K_t} v = t - 1$ for any $v \in K_t$, and initially $\deg_G v = 0$. Each move $M$ increases the degree of $v$ by at most $n - 1$. Then each of the $t$ vertices requires at least $\left\lceil\frac{t - 1}{n - 1}\right\rceil$ moves. And each move affects exactly $n$ vertices. Thus we get $m \ge \left\lceil\left\lceil\frac{t - 1}{n - 1}\right\rceil\cdot\frac{t}{n}\right\rceil$. It is exact up to $(t, n) = (7, 4)$ when $m = 5$ and the bound is $4$. (I mean my bound is exact for $t \le 6$ and for $t = 7$ and $n \le 3$.)
The most trivial upper bound for $n \ge 2$ is $\frac{t(t - 1)}2$, the number of edges. Let's try to get something better. To cover all edges incident to a single vertex it is enough to make $\left\lceil\frac{t - 1}{n - 1}\right\rceil$ moves $M$. After that we can remove this vertex and get $K_{t - 1}$ on all other vertices. And so on, until we are left with only $n$ vertices, which need only one move:
$$m \le \left\lceil\frac{t - 1}{n - 1}\right\rceil + \left\lceil\frac{t - 2}{n - 1}\right\rceil + \cdots + \left\lceil\frac{n - 1}{n - 1}\right\rceil.$$
To simplify this bound let's use $k = \left\lfloor\frac{t - 2}{n - 1}\right\rfloor$ and $r = (t - 2) - k(n - 1)$. Noting that $\left\lceil\frac xy\right\rceil = \left\lfloor\frac{x + y - 1}{y}\right\rfloor$ for integers $x$ and $y$, we get:
$$m \le \left\lfloor\frac{t + n - 3}{n - 1}\right\rfloor + \left\lfloor\frac{t + n - 4}{n - 1}\right\rfloor + \cdots + \left\lfloor\frac{n + n - 3}{n - 1}\right\rfloor\\
= (k + 1) \cdot (r + 1) + k \cdot (n - 1) + (k - 1)\cdot(n - 1) + \cdots + 2 \cdot (n - 1) + 1\\
= (k + 1) \cdot (r + 1) + \left(\frac{k(k + 1)}{2} - 1\right)(n - 1) + 1\\
= \frac{k(k + 1)}{2}(n - 1) + kr + k + r - n + 3.$$
Here I combine these two bounds into one table:
$$\begin{array}{c|c|c|c|c|c|c|c}
n\backslash t & 2 & 3 & 4 & 5 & 6 & 7 & 8\\\hline
2 & 1 / 1 & 3 / 3 & 6 / 6 & 10 / 10 & 15 / 15 & 21 / 21 & 28 / 28\\\hline
3 & & 1 / 1 & 3 / 3 & \color{red}{5} / 4 & \color{red}{8} / 6 & \color{red}{11} / 7 & \color{red}{15} / 11\\\hline
4 & & & 1 / 1 & 3 / 3 & \color{red}{5} / 3 & \color{red}{7} / \color{red}{4} & \color{red}{10} / 6\\\hline
5 & & & & 1 / 1 & 3 / 3 & \color{red}{5} / 3 & \color{red}{7} / \color{red}{3}\\\hline
6 & & & & & 1 / 1 & 3 / 3 & \color{red}{5} / 3\\\hline
7 & & & & & & 1 / 1 & 3 /3\\\hline
8 & & & & & & & 1 / 1
\end{array}$$
Non-exact bounds are marked in red. As you can see, both bounds agree for $n = 2$, $n = t$ and $n = t - 1$, and the lower bound is much closer to the answer. |
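The two bounds can be reproduced programmatically (my own sketch); the spot-checked values come from the table above, which lists entries as upper/lower:

```python
from math import ceil

# Lower bound: ceil(ceil((t-1)/(n-1)) * t / n)
# Upper bound: sum of ceil(j/(n-1)) for j = n-1 .. t-1
def lower(n, t):
    return ceil(ceil((t - 1) / (n - 1)) * t / n)

def upper(n, t):
    return sum(ceil(j / (n - 1)) for j in range(n - 1, t))

# (n, t) -> (lower, upper), taken from the table
checks = {
    (2, 5): (10, 10),   # for n = 2 both bounds give t(t-1)/2
    (3, 5): (4, 5),
    (4, 7): (4, 7),
    (5, 7): (3, 5),
}
ok = all((lower(n, t), upper(n, t)) == (lo, up)
         for (n, t), (lo, up) in checks.items())
```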
How to show two definitions of Borel pointclass $\Sigma_2^0$ are equivalent? | If I understand correctly what the first definition is supposed to be, it will imply the second definition because projecting along a coordinate indexed by $\omega$ amounts to taking a countable union. Proving the converse will amount to observing that because $\omega$ is discrete, any $\omega$-sequence of closed subsets of $\chi$ corresponds to a closed subset of $\omega \times \chi$. |
Smallest number of factors to achieve every number up to (and including) x | You will need to include $1$ as the only way to get it would be to have it already in the set.
Let's now say $p$ is a prime number less than or equal to $x$. You will need to include $p$ because this is the only way to obtain it as a product. If now $p^2\le x$, you will need to include $p^2$, because the only way to have $p^2$ is if it is included in the set. (You cannot have it as $p\cdot p$ because you have disallowed repetition.) However, now that you've included $p^2$, if it happens that $p^3\le x$, you can make $p^3=p\cdot p^2$. However, if you don't include $p^3$, and if $p^4\le x$, you must include $p^4$, etc.
Needless to say, if we have such a set that enables us to see every prime power not greater than $x$ as a product, then every other composite number not greater than $x$ can also be seen as a product - simply break it up as a product of prime powers and make out those prime powers as products.
This suggests the following strategy:
1) Solve a simpler problem: given $n$, find the smallest subset of $\{1,2,\ldots,n\}$ so that every number from $1$ to $n$ is a sum of different numbers from that subset. This is because, as we multiply prime powers, we add their exponents.
2) Use the solution of 1) above for every prime number $p\le x$, taking $n=\lfloor\log_p(x)\rfloor$ - the exponent of the highest power of $p$ not larger than $x$.
3) At the end, don't forget to add the number $1$ to the set.
So, for example, for $x=20$, you have primes $2,3,5,7,11,13,17,19$ and the highest powers of those not exceeding $20$ are $4,2,1,1,1,1,1,1$ respectively, so:
For prime $p=2$ we have seen we need to choose three numbers (either $2^1, 2^2, 2^3$ or $2^1,2^2,2^4$)
For prime $p=3$ we have to choose two numbers ($3^1, 3^2$)
For other primes, we have to choose them.
The set will end up being either $\{1,2,3,4,5,7,8,9,11,13,17,19\}$ or $\{1,2,3,4,5,7,9,11,13,16,17,19\}$
Of course, we have not solved the problem 1 above, but it looks plausible that, if you choose the numbers that are themselves powers of $2$ you will be able to make any number as a sum of those. (Let's say $n=15$ and you can choose numbers $1,2,4,8$ and write every number from $1$ to $15$ as a sum of those - using the binary representation.)
This can be shown to be the minimum. Note if $n$ is given, take the numbers $1,2,2^2,2^3,\ldots,2^k$ so that $2^{k+1}>n\ge 2^k$. There are $k+1$ of those numbers, and again recalling binary representation, you can make up any number up to not just $n$ but $2^{k+1}-1\ge n$ as a sum of some of those numbers. However, if you took fewer ($k$) numbers, you cannot possibly make up all the sums up to $n$ because the number of (nonempty) sums of $k$ numbers is up to $2^k-1<n$. As we have seen before, this may not be the only minimum ($n=4$ admits the minimum $\{1,2,3\}$ as well as $\{1,2,4\}$) but it is certainly one of the minima.
Summary Given $x$, find all prime numbers not bigger than $x$. For each such prime number $p$ find the highest degree $n=\lfloor\log_p(x)\rfloor$ such that $p^n\le x$. Take the set of numbers $1,2,2^2\ldots,2^k\le n\lt 2^{k+1}$ (i.e. $k=\lfloor\log_2(n)\rfloor$). Your set will now consist of:
The number $1$, and
All the sets $\{p, p^2, p^{2^2},\ldots, p^{2^k}\}$ for all those prime $p$'s |
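For the worked example $x=20$, the claimed set can be verified by brute force (my own sketch, using the first of the two sets listed above):

```python
from itertools import combinations

# Verify every m in 1..20 is a product of *distinct* elements of S.
x = 20
S = [1, 2, 3, 4, 5, 7, 8, 9, 11, 13, 17, 19]

def prod(t):
    r = 1
    for v in t:
        r *= v
    return r

reachable = set()
for k in range(1, len(S) + 1):
    for c in combinations(S, k):
        p = prod(c)
        if p <= x:
            reachable.add(p)
covered = all(m in reachable for m in range(1, x + 1))
```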
Clarifying topological concepts: induced/weak/initial topology and weakly compact | Your interpretations for 1 and 2 are correct (except that $f : A \to \text{codomain}(f)$, not $f : \tau \to \text{codomain}(f)$). But they hide a very important part of the process, and from what you write I cannot tell whether you are aware of this.
A topology $\tau$ must satisfy some axioms: (1) the intersection of any two elements of $\tau$ is an element of $\tau$; (2) the union of any collection of elements of $\tau$ is an element of $\tau$; (3) the entire set, and the empty set, are elements of $\tau$.
Your version of 1, for example, says that $\tau$ is the set of all subsets of $A$ the form $f^{-1}(U)$ where $U$ is an element of the topology of $\text{codomain}(f)$.
This raises a big question: Does $\tau$ satisfy the axioms it is supposed to satisfy?
The answer is: Yes it does, but that requires some proof.
The outline of the proof is: Each of the three axioms (1), (2), and (3) for the topology on $\text{codomain}(f)$ implies the same axiom for $\tau$ itself.
Your interpretation for 3 has an error, the correct statement would be that $B$ is weakly compact if it is compact with respect to the subspace topology on $B$ that is induced from $\tau$.
Added in response to a comment from the OP: I don't know how to express compactness of $B$ directly in terms of $\tau$ other than to repeat the standard definition: for every collection of elements $\{U_i\}_{i \in I}$ of $\tau$ such that $B \subset \cup_i U_i$, there exists a finite subset $\{i_1,...,i_N\} \subset I$ such that $B \subset \cup_{n=1}^N U_{i_n}$. This does not imply that $B \in \tau$. Think about the compact subset $[0,1] \subset \mathbb R$ which is not an open subset of $\mathbb R$ (i.e. is not an element of the usual topology on $\mathbb R$). |
Compare 2 lists, t-test problem? | You can also use a permutation test. |
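A minimal sketch of such a permutation test (my own illustration; the sample data and the mean-difference statistic are made up):

```python
import random

# Two-sided permutation test for a difference in means between two lists.
def perm_test(a, b, n_perm=10000, seed=0):
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                  # random relabeling of the pooled data
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)         # p-value with add-one correction

# Clearly separated samples should give a small p-value,
# near-identical samples a large one.
p_diff = perm_test([5.1, 5.3, 5.0, 5.2, 5.4, 5.1],
                   [4.0, 4.2, 4.1, 3.9, 4.3, 4.0])
p_same = perm_test([5.1, 5.3, 5.0, 5.2, 5.4, 5.1],
                   [5.2, 5.0, 5.3, 5.1, 5.2, 5.4])
```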
How to define a group for a complex rational function | As it turns out we can use the natural matrix structure for the rational mapping $$\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}$$
to help show that the rational mapping is a group. The identity of the matrix is $$\begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix}.$$ The inverse of the matrix, when the determinant is nonzero, is, up to the scalar factor $\frac{1}{ad-bc}$ (which does not change the associated rational map), $$\begin{pmatrix}
d & -b \\
-c & a
\end{pmatrix}.$$
Finally, we know that matrix multiplication is associative. |
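A small numeric check (my own addition) that composing the rational maps $z\mapsto\frac{az+b}{cz+d}$ corresponds to multiplying their matrices; the sample matrices and the test point are arbitrary:

```python
# Rational map associated to a matrix, and a check that matrix
# multiplication matches composition of the maps.
def mobius(M):
    (a, b), (c, d) = M
    return lambda z: (a * z + b) / (c * z + d)

def matmul(M, N):
    return [[M[0][0] * N[0][0] + M[0][1] * N[1][0],
             M[0][0] * N[0][1] + M[0][1] * N[1][1]],
            [M[1][0] * N[0][0] + M[1][1] * N[1][0],
             M[1][0] * N[0][1] + M[1][1] * N[1][1]]]

M1 = [[1, 2], [3, 4]]        # det = -2, nonzero
M2 = [[2, 1], [1, 1]]        # det =  1
z = 0.3 + 0.7j
lhs = mobius(matmul(M1, M2))(z)      # map of the product matrix
rhs = mobius(M1)(mobius(M2)(z))      # composition of the maps

I = [[1, 0], [0, 1]]
ident_ok = abs(mobius(I)(z) - z) < 1e-12
inv_M2 = [[1, -1], [-1, 2]]          # adjugate of M2; det(M2) = 1
inv_ok = abs(mobius(matmul(M2, inv_M2))(z) - z) < 1e-12
```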
Showing a function one one | For $z, w \in G, z \ne w$
$$
f(w) - f(z) = \int_\gamma f'(\zeta) \, d\zeta
$$
where $\gamma(t) = z + t(w-z)$, $0 \le t \le 1$, is the straight line
between $z$ and $w$. Then
$$
f(w) - f(z) = (w-z) \int_0^1 f'(z + t(w-z)) \, dt
$$
and therefore
$$
\operatorname{Re}\left( \frac{f(w)-f(z)}{w-z} \right)
= \int_0^1 \operatorname{Re} \bigl(f'(z + t(w-z)) \bigr) \, dt > 0
$$
so that in particular, $f(w) \ne f(z)$. |
Easier way of solving betting probability problem? | If you win a dollar every time you guess correctly, the expected payout is $2.8\bar 3$.
If instead you win a dollar only at the very end, when all of your guesses were correct, the expected payout is just $1/6$.
In both cases, there are 6 routes you can go down. |
$\frac{dy}{dx}=\left(\frac{-2xy}{x^2-y^2}\right)$ | The substitution $y=u x$ gives
\begin{eqnarray*}
\frac{dy}{dx} =u + x \frac{du}{dx} =\frac{2u}{u^2-1} \\
x\frac{du}{dx} =\frac{2u}{u^2-1}-u=\frac{3u-u^3}{u^2-1} \\
\frac{u^2-1}{u(3-u^2)}\,du =\frac{dx}{x}\;,\qquad \frac{u^2-1}{u(3-u^2)} = \frac{2u}{3(3-u^2)} - \frac{1}{3u}
\end{eqnarray*}
Should be easy to complete from here ? |
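A quick numerical spot-check of the partial-fraction decomposition used above (my own addition, evaluated at a few sample points away from the poles):

```python
# Spot-check of (u^2 - 1)/(u (3 - u^2)) = 2u/(3 (3 - u^2)) - 1/(3u)
# at a few points away from u = 0 and u = +-sqrt(3)  (my own check).

def lhs(u):
    return (u**2 - 1) / (u * (3 - u**2))

def rhs(u):
    return 2 * u / (3 * (3 - u**2)) - 1 / (3 * u)

for u in (0.5, 1.2, -0.7, 2.5):
    assert abs(lhs(u) - rhs(u)) < 1e-12
```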
Reducibility of multi-variable polynomials | It is perhaps easier to say that $$x^2+y^2=(x+iy)(x-iy)$$ is the factorization into irreducible elements of $x^2+y^2$ in $\mathbb C[x,y]$. Obviously $x^2+y^2$ cannot be factored in $\Bbb R[x,y]$.
On the other hand, $x^2 +y^2 + z^2$ is already irreducible in $\mathbb C [x,y,z]$.
Regular covering corresponding to a kernel | This is a core idea in the theory of covering spaces. Allen Hatcher's Algebraic Topology, Chapter 1, is a good place to start.
Here's an example to build intuition (and your intuition is already correct in your "Edit"): Let $X$ be the unit circle. Its fundamental group is $\mathbb{Z}$. Every subgroup is normal (since the group is abelian) and so is the kernel of some homomorphism. To the kernel of the map which reduces an integer modulo $n > 1$ corresponds the $n$-fold covering of $X$ by itself, i.e. $p: X \to X$ is given by $p(z) = z^n$, where $X$ is identified with the unit complex numbers. Every loop in the total space is mapped to a loop which winds around the origin zero times (modulo $n$) in the base space. So, elements of the fundamental group of the total space are mapped to elements in the kernel $n \mathbb{Z}$. Conversely, a loop whose homotopy class is in the kernel lifts to a loop in the total space. (In general, if you choose any loop, you do not expect it to lift to a loop, but rather just to a path.)
In general, elements of a given subgroup of the fundamental group of the base space are represented by loops which lift to loops in the corresponding covering space. And loops which do not lift to loops (but rather the lift has distinct endpoints) do not belong to the given subgroup.
To do this correctly and rigorously, it is essential to keep track of the basepoints. However, if the given subgroup is normal, then you can more or less ignore the basepoints since the covering space is "regular" (sometimes called normal or Galois). There are other subtleties as well, especially the existence of coverings and the correct notion of what it means for two (based) covering spaces to be equivalent, so that there is indeed a 1-1 correspondence between subgroups and covering spaces.
First term of a series with two zeros and a constant second difference | We have
$$\Delta(\Delta A)=\{1,1,1,\dots\}$$
$$\Delta A=\{b,b+1,b+2,\dots\}$$
$$A=\{c,c+b,c+2b+1,c+3b+3,\dots\}$$
For a fixed $b$ and $c$ we have
$$a_n=c+(n-1)b+\frac{(n-1)(n-2)}2$$
The condition that $a_{19}=a_{94}=0$ translates to
$$a_{19}=c+18b+153=0$$
$$a_{94}=c+93b+4278=0$$
Subtract the first equation from the second:
$$75b+4125=0;\ b=-55$$
$$c=-153-18b=-153-18(-55)=837$$
Hence $a_1=c=837$. |
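The arithmetic can be double-checked in a few lines (my own verification):

```python
# Verify that b = -55, c = 837 give a_19 = a_94 = 0 for
# a_n = c + (n-1) b + (n-1)(n-2)/2, with second difference constantly 1.

def a(n, b=-55, c=837):
    return c + (n - 1) * b + (n - 1) * (n - 2) // 2

assert a(19) == 0
assert a(94) == 0
assert a(1) == 837
# second differences are constantly 1:
assert all((a(n + 2) - a(n + 1)) - (a(n + 1) - a(n)) == 1 for n in range(1, 100))
```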
Roots of quaternions in space | Somewhat more generally: for any unit vector ${\bf v} \in \mathbb R^3$, identified as the quaternion $v_1 {\bf i} + v_2 {\bf j} + v_3 {\bf k}$, let $V$ be the
2-dimensional subspace of quaternions of the form $x + y \bf v$, for $x, y \in \mathbb R$. This subspace is closed under multiplication, since
${\bf v}^2 = - v_1^2 - v_2^2 - v_3^2 = -1$.
In particular every polynomial $P$ with real coefficients maps $V$ into itself, and $P(x + y {\bf v}) = \alpha + \beta {\bf v}$ if and only if $P(x + y i) = \alpha + \beta i$ in the complex numbers.
The result is that the roots of the polynomial equation $P(q) = \alpha + \beta {\bf v}$ (where $q$ is a quaternion variable) are not terribly interesting: if the complex roots of $P(z) = \alpha + \beta i$ are $r_j = x_j + y_j i$, then
if $\beta \ne 0$ they are the corresponding points $x_j + y_j {\bf v}$ in the plane $V$, while if $\beta = 0$ the non-real roots sweep out spheres of the form $\{x_j + y_j{\bf w}: {\bf w} \in \mathbb R^3,\; \|{\bf w}\|=1\}$.
Solve the following system of equations using Gaussian Elimination Method | Since adding first two you are getting last one the system has infinite solutions.
From $ y=1-4z$ you get, by puting it in (2) $x =5z$. So your system has a solution $$(x,y,z) = (5t,1-4t,t)$$
where $t$ is an arbitray real number. |
Probability of the occurrence of a random event x in a population X where n = $\infty$? | There is no uniform discrete probability distribution on an infinite set $X$, for suppose $f:X\to\mathbb R$ is the probability mass function for such a distribution. Then $f$ should satisfy
$$\sum_{x\in X}f(x)=1$$
and
$$(\forall x,y\in X)(f(x)=f(y))$$
Pick $x_0\in X$. Then by the second requirement, for all $x\in X$ we have $f(x)=f(x_0)$. Then by the first requirement,
$$\sum_{x\in X} f(x_0)=1$$
But this is impossible to satisfy. For if $f(x_0)=0$, then the sum comes out to zero; but if $f(x_0)\neq 0$, then the sum doesn't converge to a finite value. |
Morphism of ringed spaces not induced by homomorphism of rings | I'm honestly getting a little lost in your argument. For one thing, at the beginning you include $\Bbb C(T)$ into $\Bbb C$, which is possible, but not natural and I don't see the benefit (nowhere do you need to perform such an unnatural operation to see the counterexample!). In fact, the map you're trying to write down is not the counterexample of the linked question at all: the counterexample map from the other question does not have the same domain and codomain, as you do here! So I'll try to explain the counterexample of the linked question in more detail, and while this won't address your argument explicitly, I hope it can clear up some of your confusion.
Let $(A,\mathfrak{m})$ be a DVR, so that $\operatorname{Spec} A = \{s,\eta\}$ ($s$ is the closed point corresponding to the ideal $\mathfrak{m}$, and $\eta$ is the generic point corresponding to $(0)$), and let $K = \operatorname{Frac}A$, so $\operatorname{Spec} K = \{\sigma\}$. As in the example in the other question, we define the map of ringed spaces $f : (\operatorname{Spec} K,\mathcal{O}_K)\to(\operatorname{Spec} A,\mathcal{O}_A)$ as follows:
\begin{align*}
f : \sigma &\mapsto s,\\
f^\sharp(\operatorname{Spec} A) : \mathcal{O}_A(\operatorname{Spec} A)\cong A&\to K\cong f_\ast\mathcal{O}_K(\operatorname{Spec} A)\quad\textrm{is the natural inclusion }\iota,\\
f^\sharp(\{\eta\}) : \mathcal{O}_A(\{\eta\})\cong K&\to 0\cong f_\ast\mathcal{O}_K(\{\eta\})\quad\textrm{is the unique map to $0$},\\
f^\sharp(\emptyset) : \mathcal{O}_A(\emptyset)\cong 0&\to 0\cong f_\ast\mathcal{O}_K(\emptyset).
\end{align*}
Because the restriction map on the structure sheaf of $\operatorname{Spec} A$ from $\operatorname{Spec} A$ to $\{\eta\}$ is the inclusion $\iota : A\to K$, we see that this is indeed a map of ringed spaces.
However, this is not a map of locally ringed spaces. Since the only open set containing $s$ is all of $\operatorname{Spec} A$, the stalk map
$$
f^\sharp_\sigma : \mathcal{O}_{A,f(\sigma)}\to\mathcal{O}_{K,\sigma}
$$
is simply the map on global sections; i.e.,
$$
f^\sharp_\sigma = f^\sharp(\operatorname{Spec} A) : A\to K.
$$
This is a ring homomorphism, but the maximal ideal $\mathfrak{m}\subseteq A$ is not mapped into the maximal ideal $(0)\subseteq K$. Hence, the map of ringed spaces defined above cannot be induced by a ring morphism $\phi : A\to K$, because any ring homomorphism induces a morphism of locally ringed spaces $(\operatorname{Spec} K,\mathcal{O}_K)\to(\operatorname{Spec} A,\mathcal{O}_A)$.
Another way to argue that the map $f$ is not induced by a ring homomorphism $\phi : A\to K$ is to first show that if $f$ is induced by $\phi$, then the map of structure sheaves on global sections $f^\sharp(\operatorname{Spec} A) : \mathcal{O}_A(\operatorname{Spec} A)\to f_\ast\mathcal{O}_K(\operatorname{Spec} A)$ is in fact the map $\phi$. That is, the diagram
$$
\require{AMScd}
\begin{CD}
\mathcal{O}_A(\operatorname{Spec} A) @>{f^\sharp(\operatorname{Spec} A)}>> f_\ast\mathcal{O}_K(\operatorname{Spec} A);\\
@VVV @VVV \\
A @>{\phi}>> K;
\end{CD}
$$
whose vertical arrows are the natural isomorphisms commutes. However, the map on global sections $f^\sharp(\operatorname{Spec} A)$ for the map defined above is the inclusion, and the map on topological spaces induced by the inclusion sends $\sigma$ to $\eta$. |
Suppose $f(x)$ is a polynomial with real coefficients and $f(x)\ge 0$ for all $x \in \mathbb R$; let $g(x)=f(x)+f'(x)+f''(x)+\cdots$. Prove that $g(x)\ge 0$ for all $x\in \mathbb R$ | Let $$f(x)=ax^2+bx+c,\quad a>0, \quad b^2-4ac\le 0$$
Then $$g(x)=f(x)+f'(x)+f''(x)= ax^2+bx+c+2ax+b+2a= ax^2+(2a+b)x+(2a+b+c)$$
The discriminant of this quadratic is $D=(2a+b)^2-4a(2a+b+c)=b^2-4ac-4a^2\le 0$, so $g(x)$ is also non-negative for all real values of $x$.
Require a Proof for Intelligenti pauca's method to Compute an Ellipse | That trick is based on Pascal's Theorem:
If six arbitrary points are chosen on a conic (which may be an
ellipse, parabola or hyperbola) and joined by line segments in any
order to form a hexagon, then the three pairs of opposite sides of the
hexagon (extended if necessary) meet at three points which lie on a
straight line, called the Pascal line of the hexagon.
You can see the theorem at work in the figure below: hexagon $A'ABEDC$ is inscribed in an ellipse and its three pairs of opposite sides (having the same colour in the figure) meet at points $F$ (intersection of $AB$ and $CD$), $G$ (intersection of $A'C$ and $BE$) and $H$ (intersection of $A'A$ and $DE$), which lie then on the same line.
Suppose now you let $A'$ approach $A$ closer and closer: in the limit $A'\to A$ line $AA'$ becomes the line tangent to the ellipse at $A$ (see second figure).
This gives a method to construct the tangent at $A$ to a conic passing through points $ABCDE$: it is the line through $A$ and $H$, the latter being the intersection point of lines $FG$ and $DE$. Points $F$ and $G$ are constructed as explained above but with $A'$ replaced by $A$: $F$ is the intersection of $AB$ and $CD$, $G$ is the intersection of $AC$ and $BE$. |
What is the full algebra of Euler products? | In short, there is the only way to obtain a ring of multiplicative Dirichlet series (up to minor modifications) $$\log F(s) = \log \prod_p (1+\sum_{k \ge 1} a_F(p^k) p^{-sk}) = \sum_{p^k} \frac{b_F(p^k)}{k} p^{-sk}$$
$$\log F(s)+\log G(s) = \sum_{p^k} \frac{b_F(p^k)+b_G(p^k)}{k} p^{-sk}$$
$$\log F(s) \otimes \log G(s) = \sum_{p^k} \frac{b_F(p^k)b_G(p^k)}{k} p^{-sk}$$
If $F,G$ both come from automorphic forms then it is natural to replace $\otimes$ by the Rankin-Selberg convolution $$F(s) \times G(s)= \prod_p (1+\sum_{k \ge 1}a_F(p^k)a_G(p^k) p^{-sk})$$
Problem on limits of sequences | $$a_n = \frac{\sqrt[n]{n!}}{n}\frac{b_n - 1}{\ln b_n} \ln b_n^n$$
For the first term on the right,
$$\lim_{n \to \infty} \frac{\sqrt[n]{n!}}{n}=e^{-1}\tag{1}$$
For the second term, by L'Hôpital's rule,
$$\lim_{n \to \infty} \frac{b_n - 1}{\ln b_n}=\lim_{x \to 1} \frac{x-1}{\ln x}=\lim_{x \to 1} x = 1 \tag{2}$$
Also, we have
\begin{align}\lim_{n \to \infty} b_n^n &= \lim_{n \to \infty} \frac{(n+1)!}{n!}\cdot\frac{1}{\sqrt[n+1]{(n+1)!}} \\
&= \lim_{n \to \infty} \frac{n+1}{\sqrt[n+1]{(n+1)!}} \\
&=e\end{align}
Hence $$\lim_{n \to \infty} \ln b_n^n=1\tag{3}$$
To know the limit of $a_n$, you just have to multiply the $3$ terms up. |
Number of solutions for $x[1] + x[2] + \ldots + x[n] =k$ | This is a so-called stars-and-bars problem; the number that you want is
$$\binom{n+k-1}{n-1}=\binom{n+k-1}k\;.$$
The linked article has a reasonably good explanation of the reasoning behind the formula. |
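For small cases the formula is easy to confirm by brute force (my own check; counting non-negative integer solutions):

```python
# Brute-force check of the stars-and-bars count C(n+k-1, n-1) for the number
# of nonnegative integer solutions of x_1 + ... + x_n = k  (my own check).
from itertools import product
from math import comb

def count_solutions(n, k):
    return sum(1 for xs in product(range(k + 1), repeat=n) if sum(xs) == k)

for n, k in [(2, 5), (3, 4), (4, 3)]:
    assert count_solutions(n, k) == comb(n + k - 1, n - 1)
```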
Compute the area bounded by two curves. | Plotting these functions, you will notice that $r=\frac1{\sin\phi}$ is the line $y=1$. You can even verify this: $y=r\,\sin\phi=\frac1{\sin\phi}\sin\phi=1$. The other curve lies below this line, although it approaches it for small values of $\phi$.
Now your main problem is probably the fact that both areas are infinite, so taking the difference of the two integrals won't work. Instead, you have to move the difference into the integral:
$$\frac12\int_0^{\frac\pi2}\left(\left(\frac1{\sin\phi}\right)^2-\left(\frac1\phi\right)^2\right)\,\mathrm d\phi
= \frac1\pi$$
The result was computed using Wolfram Alpha. If you have questions about how to do the integration here, please say so (in a comment to this answer or in an edit to your question), and perhaps some other answer to your question will include details on that as well. A useful step along the way might be knowing the anti-derivative, since with it you can at least verify that the result is correct.
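For what it's worth, here is a small numerical verification of the value $1/\pi$ (my own addition). Note that the two terms are separately non-integrable at $0$, but their difference stays bounded (it tends to $1/3$ as $\phi\to 0$), so a midpoint rule works fine:

```python
# Midpoint-rule check that (1/2) * Int_0^{pi/2} (1/sin^2(phi) - 1/phi^2) dphi = 1/pi.
from math import sin, pi

N = 50_000
h = (pi / 2) / N
total = 0.0
for i in range(N):
    phi = (i + 0.5) * h          # midpoint of the i-th subinterval
    total += (1 / sin(phi)**2 - 1 / phi**2) * h

assert abs(0.5 * total - 1 / pi) < 1e-6
```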
Relation between $1-(n^{p-1}\mod p)$ and Riemann $\zeta$ | just to help to close the case.
seems the answer to my question is clear: the calculation is correct but the result is trivial. |
Prove that $S_\tau$ is a random variable | Hint $$\{S_{\tau}>a\} = \bigcup_{n \in \mathbb{N}} \{S_n > a\} \cap \{\tau=n\}.$$ |
Find all positive integers $n$ such that.. | Your proof of
$$(n+1)! < n\sum_{k=1}^n k!$$
is correct when you at some point state that in it $n > 2$ is assumed. For $n = 2$, you have equality, and for $n = 1$, the inequality is in the other direction.
For the proof of $(\ast\ast)$, the case $n = 3$ is a simple verification, and then you can inductively see
$$\begin{align}
n\sum_{k=1}^{n+1} k! &= n(n+1)! + \underbrace{(n-1)\sum_{k=1}^n k!}_{<(n+1)!\text{ by induction}} + \underbrace{\sum_{k=1}^n k!}_{<(n+1)!\text{ also}}\\
&< n(n+1)! + (n+1)! + (n+1)!\\
&= (n+2)(n+1)!\\
&= (n+2)!
\end{align}$$
that indeed
$$n-1 < \frac{(n+1)!}{\sum_{k=1}^n k!} < n$$
for all $n > 2$. |
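A quick numerical confirmation of the two-sided bound (my own addition):

```python
# Check  (n-1) * (1! + 2! + ... + n!) < (n+1)! < n * (1! + 2! + ... + n!)
# for a range of n > 2  (my own check).
from math import factorial

for n in range(3, 40):
    s = sum(factorial(k) for k in range(1, n + 1))
    assert (n - 1) * s < factorial(n + 1) < n * s
```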
Do I have enough information to solve for the angle of a plane intersecting two cones? | Think about it this way: Draw an arbitrary angle $\theta$. Use dice or some other random means to choose $a$ through $d$ (potentially satisfying some implicit inequalities), and mark these lengths on the legs of that angle. Connect them and intersect those connections with the vertical line through the apex of the angle. Does the resulting figure satisfy all the constraints from your given problem?
Do the randomized dice rolls convey any information about the angle that you picked independently? Of course not. So it would seem that unless there is something you didn't mention (or I failed to read), the information provided can be consistent with a (possibly bounded) range of angles, and is not enough to pick a single angle. |
Half-life time computation, and percentage of isotope remaining after X years. | If you start out with an amount of radioactive material with half-life $T$denoted by $N_0$, then the amount remaining at time $t$, denoted by $N(t)$, is given by:
$$N(t) = N_0\cdot (\frac{1}{2})^{(\frac{t}{T})}$$
The percentage of radioactive material left at a given time is given by:
$$\frac{N(t)}{N_0} \times 100\%$$
Can you proceed from there?
By the way, I don't agree with your book's reference solution. |
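As a worked illustration of the formula (my own numbers, not the book's; the half-life of carbon-14 is used purely as an example):

```python
# N(t)/N_0 = (1/2)^(t/T): fraction of radioactive material left after time t.
# The half-life value below is carbon-14's, used only as an illustration.

def remaining_fraction(t, T):
    """Fraction N(t)/N_0 left after time t for half-life T."""
    return 0.5 ** (t / T)

T = 5730.0   # half-life in years
assert abs(remaining_fraction(T, T) - 0.5) < 1e-12        # one half-life -> 50%
assert abs(remaining_fraction(2 * T, T) - 0.25) < 1e-12   # two half-lives -> 25%
print(f"{remaining_fraction(2000.0, T) * 100:.1f}% left")  # about 78.5%
```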
Proving existence of a finite projective plane | You cannot use this theorem; at least, if I were teaching the course I would not accept it. This is typical in higher level courses, you present a major result in class that you don't have time to prove; then as an exercise, you assign students to prove a weaker version of the result so they can see how someone might go about proving the more complete version.
It is likely you will have to prove the result yourself, and not rely on a theorem that gives it to you. To have a projective plane, you need "points" and "lines" which satisfy certain conditions. What you have is a design, containing "varieties" and "blocks" meeting their own conditions. Try to find a way to interpret the objects of your design as points/lines, and show they satisfy the projective plane properties. |
If $a$, $b$, $c$, $d$ are positive reals such that $(a+c)(b+d) = 1$, prove that the following sum is greater than or equal to $\frac {1}{3}$. | By C-S $$\sum_{cyc}\frac{a^3}{b+c+d}=\sum_{cyc}\frac{a^4}{ab+ac+ad}\geq\frac{(a^2+b^2+c^2+d^2)^2}{\sum\limits_{cyc}(ab+ac+ad)}\geq$$
$$\geq\frac{a^2+b^2+c^2+d^2}{\sum\limits_{cyc}(ab+ac+ad)}\geq\frac{1}{3},$$
where the middle step uses $a^2+b^2+c^2+d^2\geq\frac{(a+c)^2+(b+d)^2}{2}\geq(a+c)(b+d)=1$, and the last inequality is just $$\sum_{sym}(a-b)^2\geq0.$$
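A randomized sanity check of the inequality (my own addition; random points are rescaled so that the constraint $(a+c)(b+d)=1$ holds):

```python
# Monte Carlo sanity check of  sum a^3/(b+c+d) >= 1/3  under (a+c)(b+d) = 1.
import random

random.seed(0)
for _ in range(1000):
    a, b, c, d = (random.uniform(0.01, 3) for _ in range(4))
    scale = ((a + c) * (b + d)) ** -0.5   # rescale so that (a+c)(b+d) = 1
    a, b, c, d = a * scale, b * scale, c * scale, d * scale
    assert abs((a + c) * (b + d) - 1) < 1e-9
    lhs = (a**3 / (b + c + d) + b**3 / (c + d + a)
           + c**3 / (d + a + b) + d**3 / (a + b + c))
    assert lhs >= 1/3 - 1e-12
```

Equality occurs at $a=b=c=d=\frac12$, so the bound $\frac13$ is sharp.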
If $C|a$ and $C|b$ then $C|(ax+by)$ | Assume that $c$ divides both $a$ and $b$. Then $a = nc$ and $b = mc$ for some integers $n, m$. Then, assuming $x$ and $y$ are integers,
$$
ax + by = (nc)x + (mc)y = c(nx + my)
$$
Hence, $c$ divides $ax + by$. |
Are there "variables/unknowns" for operations? | Sure, but $\circ$ isn't a good choice; usually it denotes function composition. I would use $\star$, for example, which doesn't have an existing widely-used meaning. |
The sample distribution (pdf) of the sample mean retrieved from gamma distribution | If $X_1, X_2, \ldots, X_n$ is a random sample drawn from a gamma distribution with shape $a$ and scale $b$, then their sum $T = X_1 + \cdots + X_n$ is also gamma with the same scale but with shape $an$. Then the distribution of the sample mean is given by a simple monotone transformation: $\bar X = T/n$ has density $$f_{\bar X}(x) = n f_T(nx),$$ which of course is gamma with shape $an$ and scale $b/n$. This makes sense because the expectation of a single observation is $\operatorname{E}[X_i] = ab$; the expectation of their sum must then be $\operatorname{E}[T] = abn$; and the expectation of the sample mean must be the same as that for a single observation: $$\operatorname{E}[\bar X] = (an)(b/n) = ab.$$ Yet we require the variance of the sample mean to be a decreasing function of the sample size $n$, which is indeed the case: $$\operatorname{Var}[X_i] = ab^2,$$ but $$\operatorname{Var}[\bar X] = (an)(b/n)^2 = \frac{ab^2}{n} \le ab^2$$ for $n \ge 1$. |
If $f(x)=\frac{1}{\pi}\left(\arcsin x+\arccos x+\arctan x\right)+\frac{x+1}{x^2+2x+10}\;,$ Then $\max$ value of $f(x)$ | The domain should be $x\in [-1,1]$.
We have
$$f(x)=\frac{1}{\pi}\left(\frac{\pi}{2}+\arctan x\right)+\frac{x+1}{x^2+2x+10}$$
so
$$f'(x)=\frac{1}{\pi(1+x^2)}+\frac{9-(x+1)^2}{(x^2+2x+10)^2}$$
This is positive because of $(x+1)^2\le 4$.
Since $f(x)$ is increasing, the answer is $f(1)=\color{red}{47/52}$. |
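A numerical check of the conclusion (my own addition):

```python
# f is increasing on [-1, 1], so its maximum is f(1) = 3/4 + 2/13 = 47/52.
from math import asin, acos, atan, pi

def f(x):
    return (asin(x) + acos(x) + atan(x)) / pi + (x + 1) / (x**2 + 2*x + 10)

xs = [-1 + 2 * i / 10_000 for i in range(10_001)]   # grid on [-1, 1]
assert abs(max(f(x) for x in xs) - f(1.0)) < 1e-12  # increasing => max at x = 1
assert abs(f(1.0) - 47/52) < 1e-9
```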
Prime ideal $P$ in $R$ coprime to the conductor plus the localization $R_{P}$ is a DVR implies that $P$ is invertible | Let's use the invertibility criterion you mentioned (changing the notation):
$I$ is invertible iff $IB_Q$ is a principal fractional ideal for every maximal ideal $Q$ of $B$.
Since $\dim B=1$ there is no maximal ideal containing $P$ excepting $P$ itself (of course, the trivial case $P=(0)$ can be excepted), so $PB_Q=B_Q$ for all $Q\ne P$. Is it clear now? |
How to integrate $\int_{3\sqrt{2}}^6 1/\big(t^3\sqrt{t^2-9}\big)\;dt$ | If you use the substitution $ t = 3\sec(u) $, then the integral becomes
$$\int _{3\sqrt{2} }^{6} \frac{1}{t^3\sqrt{t^2-9}} {dt}= \frac{1}{27}\,\int _{\pi/4 }^{\pi/3 }\! \cos^2( u )
{du}$$
Note: To find the limits of integration, we have for $t=3\sqrt{2}$
$$ t=3\sec(u) \implies \sec(u) = \sqrt{2} \implies \cos(u)=\frac{1}{\sqrt{2}}\implies u=\frac{\pi}{4}. $$
Just do the same with the other one. |
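Here is a numerical confirmation that the substitution is consistent (my own addition, using a simple midpoint rule on both sides):

```python
# Check numerically that Int_{3 sqrt 2}^{6} dt/(t^3 sqrt(t^2 - 9))
# equals (1/27) Int_{pi/4}^{pi/3} cos^2(u) du.
from math import sqrt, cos, pi

def midpoint(f, lo, hi, n=50_000):
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

left = midpoint(lambda t: 1 / (t**3 * sqrt(t*t - 9)), 3 * sqrt(2), 6.0)
right = midpoint(lambda u: cos(u)**2, pi / 4, pi / 3) / 27

assert abs(left - right) < 1e-7
```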
Sum involving the number of zeros of $k$ | Let $e(n)$ be the number of non-zero digits of $n$ then:
$a_0(k)=\lfloor \log_{10}(k)\rfloor +1- e(k)$, since $k$ has $\lfloor \log_{10}(k)\rfloor+1$ digits in total; the constant $+1$ only contributes $\sum_{k\ge 1}\frac{1}{k(k+1)}=1$, so the idea is to compute:
$$s_1=\sum_{k=1}^{\infty}{\frac{\lfloor \log_{10}(k)\rfloor}{k\left(k+1\right)}}\quad \quad s_2=\sum_{ k=1}^{\infty}{\frac{e(k)}{k\left(k+1\right)}}$$
The second sum is computable by considering:
$$P(x,q)=\prod_{k=0}^{\infty}{\left(1+x\left(q^{10^k}+…+q^{9\cdot 10^k}\right)\right)}$$
For the first sum, a shift of index (in order to replace $\lfloor \log_{10}(k)\rfloor$ by a variable $n$) produces:
$$s_1=\sum_{k=1}^{\infty}{\frac{\lfloor \log_{10}(k)\rfloor}{k\left(k+1\right)}}=\sum_{n=0}^{\infty}n\sum_{k=10^n}^{10^{n+1}-1}{\frac{1}{k\left(k+1\right)}}=\sum_{n=0}^{\infty} \left(\frac{n}{10^n}-\frac{n}{10^{n+1}}\right)$$
I hope that this is helpful. |
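The index shift for the $\lfloor \log_{10}\rfloor$ sum can be checked on finite truncations, where it is an exact identity (my own addition):

```python
# Summing floor(log10 k)/(k(k+1)) over k < 10^N equals
# sum_{n=0}^{N-1} n (10^{-n} - 10^{-(n+1)}); both sides are finite sums.

def floor_log10(k):
    return len(str(k)) - 1                    # exact floor(log10 k) for k >= 1

def direct(N):
    return sum(floor_log10(k) / (k * (k + 1)) for k in range(1, 10**N))

def telescoped(N):
    return sum(n * (10.0**-n - 10.0**-(n + 1)) for n in range(N))

for N in (2, 3, 4):
    assert abs(direct(N) - telescoped(N)) < 1e-9
```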
Distribution functions of a probability measure on a probability space $(\mathbb{R},\mathcal{B})$ | By definition,
$$F(x)=P\left((-\infty,x]\right)\;,\;\;\;x\in\mathbb{R}$$
Now, $P(\{a\}) = \lim_{n \to \infty} P\left( (a - 1/n, a] \right) = F(a) - F(a-)$
where $F(a-)$ is the left limit of $F$ at $a$. |
Direct proof that closed 1-form on $\mathbb{R}^2$ is exact | You are complicating the issue by considering an arbitrary chart $h$. You are in $\mathbb{R}^2$, you don't need to choose a chart. (Or rather, you simply use the identity if you'd like to face it that way. Furthermore, the data is already being given in terms of the identity chart.)
You can do the computation as follows.
\begin{align*}
df&=dx_1(\int_0^1a_1(tx)dt)+x_1((\int_0^1\partial_1a_1(tx)tdt)dx_1+(\int_0^1\partial_2a_1(tx)tdt)dx_2) \\
&+dx_2(\int_0^1a_2(tx)dt)+x_2((\int_0^1\partial_1a_2(tx)tdt)dx_1+(\int_0^1\partial_2a_2(tx)tdt)dx_2) \\
&=\left(\int_0^1a_1(tx)dt+\int_0^1\partial_1a_1(tx)tx_1dt+\int_0^1\partial_1a_2(tx)tx_2dt\right)dx_1 \\
&+\left(\int_0^1a_2(tx)dt+\int_0^1\partial_2 a_1(tx)tx_1dt+\int_0^1\partial_2a_2(tx)tx_2dt\right)dx_2 \\
&=\left(\int_0^1a_1(tx)dt+\int_0^1t\partial_1a_1(tx)x_1dt+\int_0^1t\partial_2a_1(tx)x_2dt\right)dx_1 \\
&+\left(\int_0^1a_2(tx)dt+\int_0^1t\partial_1 a_2(tx)x_1dt+\int_0^1t\partial_2a_2(tx)x_2dt\right)dx_2 \\
&=\left(\int_0^1(t \cdot a_1(tx))'dt\right)dx_1 +\left(\int_0^1(t \cdot a_2(tx))'dt\right)dx_2 \\
&=a_1(x)dx_1+a_2(x)dx_2.
\end{align*}
The fact that $d\omega=0$ is used when we use that $\partial_2a_1=\partial_1a_2$.
Some things are worth noting.
It should be clear that this applies to all open sets of $\mathbb{R}^2$ that are star-shaped with respect to zero, and consequently, by translation, to all star-shaped open sets of $\mathbb{R}^2$.
This can be adapted to the case of $\mathbb{R}^n$ and any closed $k$-form, proving Poincaré lemma. This adaptation and the intuition behind it can be seen here, for example. |
Ordinary Differential Equation Solution. | Consider first the equation
$$2yy'= -y+y^3$$ Switch variables to make
$$2\frac y{x'}=-y+y^3 \implies x'=\frac{2}{y^2-1}\implies x+C=\log \left(\frac{1-y}{1+y}\right)$$ that is to say
$$y=\frac{1-K e^x}{1+K e^x}$$ Now, trying the variation of parameters, this would give $\color{red}{\text{No hope}}$ |
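As a sanity check (my own addition), the closed form does satisfy $2yy'=-y+y^3$, verified here with a central finite difference:

```python
# Check numerically that y = (1 - K e^x)/(1 + K e^x) solves 2 y y' = -y + y^3.
from math import exp

def y(x, K=0.5):
    return (1 - K * exp(x)) / (1 + K * exp(x))

h = 1e-6
for x in (-1.0, 0.0, 0.7, 2.0):
    yp = (y(x + h) - y(x - h)) / (2 * h)      # central difference for y'
    assert abs(2 * y(x) * yp - (-y(x) + y(x)**3)) < 1e-8
```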
Proving an isomorphism using the isomorphism theorems | It's basically correct. However, some things you wrote requires proper justification or explanation. Namely,
Show that $\ker f\lhd N_1$.
What exactly do you mean by $G_2/\, (N_1/\ker f)$?
For a more direct approach, consider the arising homomorphism $G_1\to G_2/N_2$. |
Confusion regarding lattices | No, ${\mathbb Z}^{d+1} \cap H_d \ne H_d$. For example, if $d=2$, $(1/3,1/3,-2/3) \in H_2$ but is not in ${\mathbb Z}^{d+1}$: its entries are not integers.
$A^*_d$ is the set of vectors $\vec{x}$ such that $\vec{x}\cdot \vec{1} = 0$ and
$\vec{x} \cdot \vec{y}$ is an integer for all $\vec{y} \in A_d$. Since the entries of
$\vec{y}$ are integers, every member of $A_d$ is in $A^*_d$, but there are others.
For example, if $d=2$, $(1/3,1/3,-2/3) \in A^*_d$. To see this, note that $(1/3,1/3,-2/3) \cdot (y_1,y_2,y_3) = (y_1 + y_2 - 2 y_3)/3 = - y_3$, using $y_1+y_2+y_3=0$.
$\int \sin^{-1} \sqrt {\frac {x} {x+1}}\; dx$ | Let $$u'=1,\qquad v=\arcsin\left(\sqrt{\frac{x}{x+1}}\right)$$ then
$$u=x,\qquad v'=\frac{1}{\sqrt{1-\frac{x}{x+1}}}\times \frac{1}{2}\left(\frac{x}{x+1}\right)^{-1/2}\times \frac{x+1-x}{(x+1)^2}$$
Sum of infinite geometric series within probability generating function question | There were two sign errors that I just corrected. The idea in the last step is to rewrite in the form of a geometric series:
$$\frac{a}{1-x}=\sum_{n=0}^\infty a x^n.$$
To get the $1$ where we want it, divide both numerator and denominator by $2-p$:
$$\frac{2(1-p)}{2-p-pz}=\frac{2(1-p)/(2-p)}{(2-p-pz)/(2-p)}=\frac{2(1-p)/(2-p)}{1-pz/(2-p)}.$$
Now take $a=2(1-p)/(2-p)$ and $x=pz/(2-p)$ in the formula for geometric series. |
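A quick numerical check of the expansion (my own addition; with $|pz/(2-p)|<1$ the truncated series matches the closed form to machine precision):

```python
# Check 2(1-p)/(2-p-pz) = sum_n [2(1-p)/(2-p)] * [p z/(2-p)]^n, truncated.
p, z = 0.4, 0.9

closed = 2 * (1 - p) / (2 - p - p * z)
a = 2 * (1 - p) / (2 - p)          # the constant factor
x = p * z / (2 - p)                # the geometric ratio, |x| < 1 here
series = sum(a * x**n for n in range(200))

assert abs(closed - series) < 1e-12
```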
number-theoretic? probabilities associated with $e^{-n}$ and $1/\Gamma(n)$ | This isn't exactly what you asked for, but with $H_n = \sum_{k=1}^n \frac{1}{k}$ the $n^{th}$ harmonic number, it turns out that $e^{-H_n}$ is asymptotically the probability that a permutation of a large set has no cycles of length at most $n$. This generalizes the well-known asymptotic for derangements when $n = 1$. I think you can prove it by inclusion-exclusion, but there is a much cleaner proof using the exponential formula. |
Is this a Homeomorphism in Topology? | If there were a homeomorphism $\phi:\mathbb{R}^3 \rightarrow \mathbb{R}$, then there would also be a homeomorphism $\mathbb{R}^3 \setminus \{x \} \rightarrow \mathbb{R} \setminus \{\phi(x)\}$ for any $x \in \mathbb{R}^3$ given by the restriction of $\phi$ to that subspace, but this cannot exist because the former is connected whereas the latter is not (continuous maps preserve connectedness). So $\mathbb{R}$ and $\mathbb{R}^3$ cannot be homeomorphic.
As path-connectedness is also preserved under continuous maps, you can also get this result by noting that there exists connected-but-not-path-connected sets in $\mathbb{R}^3$, but not in $\mathbb{R}$ (cf. the topologist's sine curve).
In greater generality, $\mathbb{R}^n$ and $\mathbb{R}^m$ are not homeomorphic whenever $n \neq m$, but showing this requires more sophisticated machinery. |
Convergence of a sequence of sets $A_n:=\{1+ \frac{m^2}{n^2}: m \in \mathbb{N} \}$ | To find whether the limit of a sequence of the sets exists we need to calculate $$ \liminf_{n\to\infty} A_n = \bigcup_{n\ge 1} \bigcap_{k\ge n} A_k$$ and $$ \limsup_{n\to\infty} A_n = \bigcap_{n\ge 1} \bigcup_{k\ge n} A_k$$
If both of these sets are equal then they define $\lim_{n\to\infty} A_n$.
We have, for any $n\in \mathbb N$: \begin{align} \bigcap_{k\ge n}A_k &= \bigcap_{k\ge n} \left\{z: \exists m\in\mathbb N: z=1+\frac {m^2}{k^2} \right\}=\\&=\left\{z: \forall k\ge n \exists m\in\mathbb N: z=1+\frac {m^2}{k^2} \right\}= \\ &= \left\{1+q^2: \forall k\ge n \exists m\in\mathbb N: q=\frac mk \right\} =\\&= \left\{1+q^2: \forall k\ge n : kq \in\mathbb N\right\} =\\&= \left\{1+q^2: q \in\mathbb N\right\}\end{align}
(for the last equality: if $kq\in\mathbb N$ for all $k\ge n$, then $q=(n+1)q-nq\in\mathbb N$), so
$$ \liminf_{n\to\infty} A_n = \bigcup_{n\ge 1} \bigcap_{k\ge n} A_k = \left\{1+q^2: q \in\mathbb N\right\}$$
We also have
$$ \forall n\in\mathbb N \quad \forall q\in\mathbb Q_+ \quad \exists k\ge n \quad\exists m\in\mathbb N : q=\frac mk$$
$$ \forall n\in\mathbb N \quad \forall q\in\mathbb Q_+ \quad \exists k\ge n : 1+q^2\in A_k$$
$$ \forall n\in\mathbb N \quad \forall q\in\mathbb Q_+ : 1+q^2\in \bigcup_{k\ge n} A_k$$
$$ \forall n\in\mathbb N :\{1+q^2:q\in\mathbb Q_+\} \subset \bigcup_{k\ge n}A_k$$
On the other hand it is obvious that
$$ \forall k\in\mathbb N: A_k \subset \{1+q^2:q\in\mathbb Q_+\}$$
so
$$ \forall n\in\mathbb N: \bigcup_{k\ge n}A_k \subset \{1+q^2:q\in\mathbb Q_+\}$$
and that means that
$$ \forall n\in\mathbb N: \bigcup_{k\ge n}A_k = \{1+q^2:q\in\mathbb Q_+\}$$
$$ \bigcap_{n\ge 1} \bigcup_{k\ge n}A_k = \{1+q^2:q\in\mathbb Q_+\}$$
$$ \limsup_{n\to\infty} A_n = \{1+q^2:q\in\mathbb Q_+\}$$
We have
$$ \liminf_{n\to\infty} A_n = \left\{1+q^2: q \in\mathbb N\right\} \neq \left\{1+q^2:q\in\mathbb Q_+\right\} = \limsup_{n\to\infty} A_n $$
so $\lim_{n\to\infty} A_n $ does not exist. |
a conjecture on norms and convex functions over polytopes | It's false. To see that, introduce the new constraint $10b - 7a \leq 17$ to your example to create a new polytope $P'$. This constraint cuts off the point $N$ but keeps $M$ in $P'$. Thus $M$ minimizes $f$ over $P'$. The closest point to $x$ on $M$'s facet is now $(1.5, 2.75)$, which is the intersection of the lines $10b-7a=17$ and $2b-a=4$. However, the point $(1,2.4)$, which is on the new facet defined by $10b-7a=17$, is closer to $x = (0,4)$ than is $(1.5, 2.75)$ (a distance of $\approx 1.89$ vs. $\approx 1.95$).
For example, the image below shows the new polytope $P'$. The minimizer of $f$ over $P'$ is $M$, which is in blue. The point $(1,2.4)$, in black, is closer to $(0,4)$, in red, than is any point on $M$'s facet.
How to show $\lim_{n\to\infty}n\cdot \sum_{m=1}^{\infty}\Big(1-\frac{1}{m}\Big)^n\cdot \frac{1}{m^2}=1.$ | Here is a rough idea, but by no means rigorous. Think that the sum approximates a Riemann integral with step size $\frac1n$. Then it approximates
$$ \int_0^\infty \left(1-\frac1{nx}\right)^n \frac{dx}{x^2} .$$
Then the limit as $n \to \infty$ is
$$ \int_0^\infty \exp\left(-\frac1x\right) \frac{dx}{x^2} = 1 .$$
But since we are working with two limiting processes at the same time, I think this answer is not rigorous.
Let's try to make it rigorous. Since $1-x \le e^{-x}$, we know that
$$ n \sum_{m=1}^\infty \left(1-\frac1m\right)^n \frac1{m^2} \le n \sum_{m=1}^\infty \exp\left(-\frac nm\right) \frac1{m^2} $$
And we can definitely apply the Riemann sum idea to the right hand side. So the lim sup is bounded above by $1$.
So what about a lower bound? It seems to me (by considering Taylor's series of order 2) that there is a constant $c>0$ such that for $\alpha>0$ sufficiently small, that $(1-x) \ge e^{-(1+\alpha)x}$ for $x \in [0,c\alpha]$.
So
$$ n \sum_{m=1}^\infty \left(1-\frac1m\right)^n \frac1{m^2} \ge n \sum_{m=1/(c \alpha)}^\infty \exp\left(-(1+\alpha)\frac nm\right) \frac1{m^2} . $$
The limit of this is bounded below by
$$ \int_{0}^\infty \exp\left(-\frac{1+\alpha}{x}\right) \frac{dx}{x^2} .$$
Thus the lim inf is bounded below by this quantity for all $\alpha>0$, and hence the lim inf is bounded below by $1$.
Here is another approach, inspired by the answer given by @Claude Leibovici. Consider the function
$$ f(x) = \frac n{n+1}\left(1-\frac1 x\right)^{n+1} .$$
Use Taylor's series of order 1 with the remainder term:
$$ f(m+1) - f(m) = n\left(1-\frac1m\right)^n \frac1{m^2} + \tfrac12 f''(\zeta_m) ,$$
where $\zeta_m \in [m,m+1]$. Sum it up, and use a telescoping series. Then you just have to estimate
$$ \sum_{m=1}^\infty f''(\zeta_m) .$$
If you split the sum into two parts:
$$ \sum_{1 \le m \le n^\theta} + \sum_{n^\theta < m < \infty} $$
where $\theta < 1$ is close to $1$ (probably $\theta = 9/10$ will work), you should see that this converges to $0$ as $n \to \infty$. |
Let $S$ be the set of all subsets of $\{1,...,n\}$ with size $k$. Find the number of other elements that have an intersection of size $l$ with a vertex | Fix $A\subseteq \{1,\ldots,n\}$ with $\#A=k$. Fix a subset $B\subseteq A$ with $\#B=\ell$ (this can be done in $\binom{k}{\ell}$ ways). Then you need to find all subsets $C\subseteq \{1,\ldots,n\}\setminus A$ with $\#C=k-\ell$ (this can be done in $\binom{n-k}{k-\ell}$ ways). Since the sets $B\cup C$ are exactly the wanted sets, the answer is
$$
\binom{k}{\ell}\binom{n-k}{k-\ell}.
$$ |
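For small parameters the count is easy to confirm by brute force (my own check):

```python
# Brute-force verification of C(k, l) * C(n-k, k-l): subsets B of {1..n} with
# |B| = k and |A intersect B| = l, for a fixed k-subset A.
from itertools import combinations
from math import comb

n, k, l = 7, 3, 1
A = set(range(1, k + 1))                       # any fixed k-subset works
count = sum(1 for B in combinations(range(1, n + 1), k)
            if len(A & set(B)) == l)

assert count == comb(k, l) * comb(n - k, k - l)
```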
Question on proving validity in predicate logic | I looked at your book and the system it uses seems to be the same as the Fitch system I like to use. The key to this proof is to set it up as a Proof by Contradiction: |
Non orientable manifold. | See Andrew Hwang's answer to this question: Manifold is not orientable
That's really all you need.
(The question isn't very well phrased, but the answer is still the tool you need!) |
Why is $\frac{\operatorname dy'}{\operatorname dy}$ zero, since $y'$ depends on $y$? | As a mathematician, when I see
$$
\frac{dy'(t)}{dy}
$$
I think the most natural definition is
$$
\frac{d\frac{dy}{dt}}{dy} = \frac{d^2y}{dt^2}\frac{dt}{dy}
$$
which decidedly is not (usually) zero.
We can imagine a dynamical system modeling a particle on a line with three coordinates, modeling every potential "state" the particle has, where a "state" is its current position and current velocity. So the variables are named "v, x, and t."
For example, if at time 0, the particle is 3 meters to the west of the origin and moving east at 4 meters per second, its coordinates are $t = 0, x = -3, v = 4$.
In such a system there are invariants of the particle that do not depend on its location, but do depend on its velocity and on what time it is. If we call such an invariant $L$ then it is clear that $\frac{\partial L(v, t)}{\partial x} = 0$.
I imagine that something like this (possibly in higher dimension) is what your notation refers to. |
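For a concrete illustration of the chain-rule reading above, take the trajectory $y = t^2$ (an arbitrary choice): then $\frac{dy'}{dy} = \frac{y''}{y'} = \frac{1}{t}$, which is decidedly not zero. A quick sympy check:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
y = t**2                       # a concrete trajectory, chosen only for illustration
yp = sp.diff(y, t)             # y' = 2t
# d(y')/dy via the chain rule: (d y'/dt) · (dt/dy) = y'' / y'
dyp_dy = sp.diff(yp, t) / sp.diff(y, t)
```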
Cartesian product with a set within a set | Define $X \times Y = \{ (x,y) : x \in X, y \in Y \}$.
Then let $A = \{a, \{b, \emptyset \}\}$ and $B = \{\emptyset \}$.
We have $a \in A$ and $\emptyset \in B$ so $(a,\emptyset) \in A \times B$.
We also have $\{b, \emptyset \} \in A$ and $\emptyset \in B$ so $(\{b, \emptyset \},\emptyset) \in A \times B$.
This means $A \times B = \{(a,\emptyset),(\{b, \emptyset \},\emptyset)\}$.
Notice that the second solution is impossible as it contains a non-pair. |
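The same computation can be sketched in Python, modeling the sets as frozensets (with $a$ and $b$ stood in by strings, an arbitrary encoding choice) and pairs as tuples:

```python
from itertools import product

# Model the sets with hashable frozensets; ∅ is frozenset().
empty = frozenset()
A = frozenset({'a', frozenset({'b', empty})})
B = frozenset({empty})

# A × B as a set of ordered pairs
AxB = set(product(A, B))
```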
Solving imaginary equation $z^3 = 5i + 5$ | $$z^3=5(1+i)$$
$$=5\sqrt{2}(\frac{1}{\sqrt{2}}+\frac{i}{\sqrt{2}})$$
$$=5\sqrt{2}(\cos(\frac{\pi}{4})+i\sin(\frac{\pi}{4}))$$
Do you see how to proceed now? Apply de Moivre's theorem to extract the three cube roots. |
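Once the modulus $5\sqrt{2}$ and argument $\frac{\pi}{4}$ are in hand, the three cube roots follow from de Moivre's theorem; a numerical sketch:

```python
import cmath

w = 5 + 5j
r, theta = abs(w), cmath.phase(w)   # modulus 5√2, argument π/4
# the three cube roots: r^(1/3) · exp(i (θ + 2πk)/3), k = 0, 1, 2
roots = [r ** (1 / 3) * cmath.exp(1j * (theta + 2 * cmath.pi * k) / 3)
         for k in range(3)]
```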
Measurability of closed opeartor | In the separable case, it follows from the following standard result from descriptive set theory:
Theorem. Let $X,Y$ be Polish spaces, let $f : X \to Y$ be a Borel function, and let $B \subset X$ be a Borel set. If the restriction of $f$ to $B$ is one-to-one, then $f(B)$ is a Borel subset of $Y$, and the restriction of $f^{-1}$ to $f(B)$ is a Borel function.
See for instance Proposition 4.5.1 of Srivastava, A Course on Borel Sets.
Now to apply this theorem, let $X = E \oplus F$, let $Y = E$, and let $f = \pi_E : E \oplus F \to E$ be the projection onto $E$, which is continuous and in particular Borel. Let $B$ be the graph of $A$, which by assumption is closed and in particular Borel. Note $\pi_E(B) = D(A)$. Since $B$ is a graph, $\pi_{E}|_B$ is one-to-one, so by the theorem above, $D(A)$ is Borel and the restriction of $\pi_E^{-1}$ to $D(A)$, which is simply the map $x \mapsto (x, Ax)$, is Borel. Then $A$ is just the composition of this map with the continuous map $\pi_F$. |
Travelling Salesman Problem on the unit sphere | Searching for "spherical TSP" one finds a number of resources on this problem. For example, Ben Nolting's blog post A Traveling Salesman on a Sphere: Pitbull's Arctic Adventure contains Mathematica-based solution. I quote the main steps here:
positionVec[{u_, v_}] := {Cos[v °] Cos[u °], Sin[v °] Cos[u °], Sin[u °]};
distance[{u1_, v1_}, {u2_, v2_}] := VectorAngle[positionVec[{u1, v1}], positionVec[{u2, v2}]];
tour = FindShortestTour[citylocations, DistanceFunction -> distance] |
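For readers without Mathematica, the distance function translates directly to Python (a sketch; the names `position_vec` and `distance` are mine, and `FindShortestTour` has no standard-library counterpart, so the resulting metric would be handed to whatever TSP solver you prefer):

```python
import math

def position_vec(lat_deg, lon_deg):
    """Unit vector on the sphere; mirrors positionVec (angles in degrees)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lon) * math.cos(lat),
            math.sin(lon) * math.cos(lat),
            math.sin(lat))

def distance(p, q):
    """Central angle between two (lat, lon) points — the unit-sphere metric."""
    u, v = position_vec(*p), position_vec(*q)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))  # clamp roundoff
    return math.acos(dot)
```

`VectorAngle` of two unit vectors is exactly the arccosine of their dot product, which is what `distance` computes.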
Help with proof : $Cl_F$ with only one equivalance class implies $D_F$ is a PID | Hint $\rm\ \ a I = (b)\:\Rightarrow\: b\in aI\:\Rightarrow\: b = ai\:$ thus $\rm\:aI = a(i)\:\Rightarrow\: I = (i)\:$ by cancelling $\rm\:a\ne 0$.
Principal ideals are invertible, so cancellable, thus we can cancel $\rm\,(a)\ne 0\:$ from $\rm\,(a)I = (a)(i)\,$ above. If you're not familiar with invertible or fractional ideals then you can easily prove this directly: suppose $\rm\:aI = aJ\ne 0.\:$ To show $\rm\:I\subseteq J\:$ note $\rm\:i\in I\:\Rightarrow\:ai\in aI\subseteq aJ,\:$ so $\rm\:ai = aj\:$ for some $\rm\:j\in J.\:$ Cancelling $\rm\:a\ne 0\:$ yields $\rm\:i = j\in J,\:$ so $\rm\:I\subseteq J.\:$ By symmetry, $\rm\:J\subseteq I,\:$ therefore $\rm\:I = J.$ |
How many binary numbers length of 17 exists (with extra conditions) | It seems you are asking for the number of binary tuples, $(x_1,x_2 \cdots x_{17})$ such that $x_i \le x_{i+1}$ for $i=1\cdots 16$
If so, then you tuple must be of the form $(0,0 \cdots0,0,1,1\cdots 1)$ that is, $n$ zeroes followed by $m$ ones, with $n+m=17$ and $n\ge0$, $m\ge 0$. Then the answer is 18. |
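An exhaustive check over all $2^{17}$ tuples confirms the count:

```python
from itertools import product

# count length-17 binary tuples that are non-decreasing left to right
count = sum(1 for bits in product((0, 1), repeat=17)
            if all(a <= b for a, b in zip(bits, bits[1:])))
```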
Which of the following series is uniformly convergent? | Number two is not correct: Let $S_{n}(x)=\displaystyle\sum_{k=1}^{n}\dfrac{1}{(x+\pi)^{2}}\dfrac{1}{k^{2}}=\dfrac{1}{(x+\pi)^{2}}\displaystyle
\sum_{k=1}^{n}\dfrac{1}{k^{2}}$, $S(x)=\displaystyle\sum_{k=1}^{\infty}\dfrac{1}{(x+\pi)^{2}}\dfrac{1}{k^{2}}=\dfrac{1}{(x+\pi)^{2}}\displaystyle
\sum_{k=1}^{\infty}\dfrac{1}{k^{2}}$, then $|S_{n}(x)-S(x)|=\dfrac{1}{(x+\pi)^{2}}\displaystyle
\sum_{k\geq n+1}\dfrac{1}{k^{2}}$, if it were uniformly convergent, then for some $N$, we have $|S_{n}(x)-S(x)|<1$ for all $n\geq N$ and $x\in(-\pi,\pi)$, then fix the $n$, take $x\rightarrow-\pi^{+}$ and the expression blows up.
To complement @spaceisdarkgreen's answer: $\left|\dfrac{\pi}{n}\right|^{n}\leq\left(\dfrac{n^{1/2}}{n}\right)^{n}=\dfrac{1}{n^{n/2}}\leq\dfrac{1}{n^{2}}$ for large $n$, so the series converges. |
Normalizer of the diagonal torus in $Sp_{2n}$ | I did this awhile ago, but didn't write it down. However, I can tell you how to calculate the generators at least modulo the diagonal torus. It might take you awhile.
Let $z = \textrm{antidiag}(1,-1,1,-1, ... , 1, -1)$. And let $$G = \textrm{Sp}_{2n} = \{ x \in \textrm{GL}_{2n} : x^t z x = z \}$$
The standard maximal torus of $G$ is $D = \textrm{diag}(t_1, ... , t_n, t_n^{-1}, ... , t_1^{-1})$. If you take for granted that $G$ is semisimple, you can at least find a set of generators for $N_G(D)$ modulo $D$, which isn't quite what you want, but close.
Let $X(D)$ be the free abelian group of rational characters of $D$, which has basis $e_1, ... , e_n$ (where $e_i[\textrm{diag}(t_1, ... , t_n, t_n^{-1}, ... , t_1^{-1})] = t_i$). Let
$$\Delta = \{e_1 - e_2, ... , e_{n-1} - e_n, 2e_n \}$$
Let $\alpha \in \Delta$. Modulo $D$, there is a unique element $w_{\alpha}$ in $N_G(D)$ which does not lie in $D$ and which commutes with the elements in $(\textrm{Ker } \alpha)^0$, the connected component of the kernel of $\alpha$. In other words, $[N_{Z_G((\textrm{Ker } \alpha)^0)}(D) : D] = 2$. For all the $\alpha$ except $2e_n$, $\textrm{Ker } \alpha = (\textrm{Ker } \alpha)^0$.
For example, $w_{e_1-e_2}$ is going to be something like
$$\begin{pmatrix} 0 & -1 \\ 1 & 0 \\ & & 1 \\ & & & \ddots \\ & & & & 1 \\ & & & & & 0 & -1 \\ & & & & & 1 & 0 \end{pmatrix}$$
although you may need to tweak the signs $\pm 1$ in a few places to make it actually lie in $G$.
Anyway, the general theory of root systems tells you that $N_G(D)/D$ is generated by the images of these elements $w_{\alpha} : \alpha \in \Delta$. |
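A numerical sanity check for $n=2$, with one particular choice of signs (so the claims are verified only for this candidate matrix, which I picked to match the displayed $w_{e_1-e_2}$):

```python
import numpy as np

# n = 2: z = antidiag(1, -1, 1, -1), and G = {x : xᵀ z x = z}
z = np.array([[ 0, 0,  0, 1],
              [ 0, 0, -1, 0],
              [ 0, 1,  0, 0],
              [-1, 0,  0, 0]])

# candidate w_{e1 - e2}: swaps the first two torus coordinates, with signs as above
w = np.array([[0, -1, 0,  0],
              [1,  0, 0,  0],
              [0,  0, 0, -1],
              [0,  0, 1,  0]])

d = np.diag([2.0, 3.0, 1 / 3, 1 / 2])     # a sample point of the torus D
conj = w @ d @ np.linalg.inv(w)           # conjugation should permute torus entries
```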
Factorizing Determinants | Add the two last columns to the first one and then subtruct the first row from the two other rows. Now developp along the first column to find
$$2(a+b+c)\left[-(a-b)^2-(b-c)(a-c)\right]$$
Repeat the same idea as in 1. |
Integrating Factor for Multidimensional Sturm-Liouville Problem | $$
\phi_{xx}-x\phi_{x}+\phi_{yy}+y\phi_{y}+\lambda \phi = 0.
$$
Multiplying by $e^{-x^2/2+y^2/2}$ leads to the desired form:
$$
(e^{-x^2/2+y^2/2}\phi_{x})_{x}+(e^{-x^2/2+y^2/2}\phi_{y})_{y}+\lambda e^{-x^2/2+y^2/2}\phi = 0 \\
\nabla\cdot(e^{-x^2/2+y^2/2}\phi_{x},e^{-x^2/2+y^2/2}\phi_{y})+\lambda e^{-x^2/2+y^2/2}\phi =0 \\
\nabla\cdot(e^{-x^2/2+y^2/2}\nabla\phi)+\lambda e^{-x^2/2+y^2/2}\phi = 0.
$$ |
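One can confirm the product-rule bookkeeping with sympy, checking that multiplying the original operator by the factor really yields the divergence form:

```python
import sympy as sp

x, y = sp.symbols('x y')
phi = sp.Function('phi')(x, y)
mu = sp.exp(-x**2 / 2 + y**2 / 2)   # the proposed integrating factor

# ∇·(μ ∇φ) versus μ (φ_xx − x φ_x + φ_yy + y φ_y)
divergence_form = sp.diff(mu * sp.diff(phi, x), x) + sp.diff(mu * sp.diff(phi, y), y)
original_form = mu * (sp.diff(phi, x, 2) - x * sp.diff(phi, x)
                      + sp.diff(phi, y, 2) + y * sp.diff(phi, y))
residual = sp.simplify(divergence_form - original_form)
```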
2D system of ODEs and a constraint | The system is overdetermined. Two differential equations, an algebraic equation and one initial condition.
Divide the differential equations:
$$
\frac{N'}{S'}=\frac{12/49-3\,S^2}{3\,N\,S}.\tag1
$$
Differentiating the algebraic relation we get
$$
12\,N\,N'=30\,S\,S'\implies \frac{N'}{S'}=\frac{5\,S}{2\,N}.\tag2
$$
From (1) and (2)
$$
\frac{12/49-3\,S^2}{3\,N\,S}=\frac{5\,S}{2\,N}\implies S^2=\frac{8}{343}.
$$
This implies that $N$ and $S$ are constants, but the constant solution of the system is
$$
N=0,\quad S^2=\frac{4}{49}.
$$ |
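The arithmetic can be double-checked with exact fractions:

```python
from fractions import Fraction

S2 = Fraction(8, 343)
# (12/49 − 3S²)/(3NS) = 5S/(2N)  ⇔  2(12/49 − 3S²) = 15S²  after cancelling N and S ≠ 0
lhs = 2 * (Fraction(12, 49) - 3 * S2)
rhs = 15 * S2
```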
for every $n\ge{1} , \binom{2n}{n}\ge \frac{2^{2n}}{4\sqrt{n}+2} $ | We begin with the product representations
$${2n\choose n}{1\over 2^{2n}}={1\over 2n}\prod_{j=1}^{n-1}\left(1+{1\over 2j}\right)=\prod_{j=1}^n\left(1-{1\over2j}\right),\quad n\geq1.$$
From
$$ \prod_{j=1}^{n-1}\left(1+{1\over 2j}\right)^{\!\!2}=\prod_{j=1}^{n-1}\left(1+{1\over j}+{1\over 4j^2}\right)\geq \prod_{j=1}^{n-1}\left(1+{1\over j}\right)=n,$$
we see that
$$\left({2n\choose n}{1\over 2^{2n}} \right)^{2} = {1\over (2n)^2}\, \prod_{j=1}^{n-1}\left(1+{1\over 2j}\right)^{\!\!2}
\geq {1\over 4n^2}\, n ={1\over 4n},\quad n\geq1.$$
so by taking square roots, ${2n\choose n}{1\over 2^{2n}}\geq \displaystyle{1\over 2\sqrt{n}}\ge \frac{1}{4\sqrt{n}+2}.$ |
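A direct numerical check of both the intermediate bound and the stated one, over a modest range of $n$:

```python
from math import comb, sqrt

def bounds_hold(n):
    c = comb(2 * n, n)
    # C(2n,n) ≥ 4ⁿ/(2√n)  (the sharper bound)  and  C(2n,n) ≥ 4ⁿ/(4√n + 2)
    return c * 2 * sqrt(n) >= 4**n and c * (4 * sqrt(n) + 2) >= 4**n

ok = all(bounds_hold(n) for n in range(1, 200))
```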
Showing that $\displaystyle\prod_{k = 0}^{\infty} \frac{1+kx}{2+kx} = 0$ for any fixed $x \in \mathbb{R}_{>0}$ | $$ \prod_{k = 0}^{\infty} \frac{1+kx}{2+kx} = 0$$
Proof:
$$\frac{1+kx}{2+kx}=1-\frac{1}{2+kx} \leqslant\exp \left( \frac{-1}{2+kx} \right) \leqslant \exp \left( \frac{-(2+x)^{-1}}{k} \right),$$
where the last step uses $2+kx \leqslant k(2+x)$ for $k \geqslant 1$.
So,
$$ \prod_{k = 1}^{N} \frac{1+kx}{2+kx} \leqslant \prod_{k = 1}^{N} \exp \left( \frac{-(2+x)^{-1}}{k} \right) = \exp\left(-(2+x)^{-1}H_N\right)$$
where $H_N$ is the $N^{th}$ harmonic number.
Using the simple estimate $H_N > \log N$, together with the factor $\frac12$ from the $k=0$ term, we have that
$$\prod_{k = 0}^{N} \frac{1+kx}{2+kx} \le \frac{1}{2} \exp\left(-(2+x)^{-1}\log N\right)=\frac{1}{2N^{1/(2+x)}}\to 0$$ |
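A numerical illustration that the product does drift to $0$: for $x=1$ the partial product telescopes exactly, $\prod_{k=0}^{N}\frac{k+1}{k+2}=\frac{1}{N+2}$, and for other $x>0$ it still decays, just more slowly:

```python
def partial_product(x, N):
    """Π_{k=0}^{N} (1 + kx)/(2 + kx)."""
    p = 1.0
    for k in range(N + 1):
        p *= (1 + k * x) / (2 + k * x)
    return p

# for x = 1 the product telescopes to 1/(N + 2)
telescoped = partial_product(1.0, 10**4) * (10**4 + 2)   # should be ≈ 1
decays = partial_product(0.5, 10**5) < partial_product(0.5, 10**4) < 1e-2
```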
expectation calculation problem | You want the expected time until the earliest component failure, of seven i.i.d. components. This is the seventh least order statistic.
$$\begin{align}
\mathsf E[X_{(7)}]
& = \binom{7}{1}\int_1^\infty x\cdot f_{X}(x)\cdot (1-F_X(x))^6 \operatorname d x
\\ & = 7\int_1^\infty x \cdot\frac {3}{x^4}\cdot \left(\int_x^\infty \frac {3}{y^4}\operatorname d y\right)^6\operatorname d x
\end{align}$$ |
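The integral evaluates in closed form — the inner integral is the survival function $x^{-3}$, so the minimum again has a power-law tail $x^{-21}$ and the expectation comes out to $\frac{21}{20}$. A sympy check:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = 3 / x**4                                       # component-lifetime density on [1, ∞)
survival = sp.integrate(3 / y**4, (y, x, sp.oo))   # P(X > x) = x^{-3}
E_min = 7 * sp.integrate(x * f * survival**6, (x, 1, sp.oo))
```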
How to find geodesics in metric spaces | It seems likely to me that this question has little in the way of a completely general answer. In the setting of Riemannian geometry, the comment of @Aretino is a good suggestion with some generality. One can also read Riemannian geometry textbooks for proofs of existence theorems for geodesics; those proofs are pretty constructive.
But to address your example question, the most elementary answer in Euclidean geometry is that for each $x \ne y \in \mathbb R^n$ with distance $D = \|x-y\|$, one simply uses coordinate geometry to guess at the formula for the unique geodesic $\gamma : [0,D] \to \mathbb R^n$ such that $\gamma(0)=x$ and $\gamma(D)=y$, namely:
$$\gamma(t) = \left(1-\frac{t}{D}\right) x + \frac{t}{D} y
$$
One must of course check that $\gamma$ is correct, and in fact that it is unique: simply use coordinate geometry to prove that for each $t \in [0,D]$, $\gamma(t)$ is the unique point in $\mathbb R^n$ whose distance to $x$ equals $t$ and whose distance to $y$ equals $D-t$.
There are several other very homogeneous geometries for which this "guess and check" method also works pretty well, for example the unit $n$-sphere $\mathbb S^n$ embedded in $\mathbb R^{n+1}$ and the $n$-dimensional hyperbolic space $\mathbb H^n$ in the hyperboloid model. |
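A numerical check of the guessed formula (random endpoints in $\mathbb R^3$, an arbitrary choice): each point of the segment sits at distance $t$ from $x$ and $D-t$ from $y$.

```python
import numpy as np

def gamma(t, x, y):
    """The straight-line geodesic from x to y, parametrized by arc length on [0, D]."""
    D = np.linalg.norm(x - y)
    return (1 - t / D) * x + (t / D) * y

rng = np.random.default_rng(0)
x, y = rng.standard_normal(3), rng.standard_normal(3)
D = np.linalg.norm(x - y)
ok = all(np.isclose(np.linalg.norm(gamma(t, x, y) - x), t) and
         np.isclose(np.linalg.norm(gamma(t, x, y) - y), D - t)
         for t in np.linspace(0, D, 11))
```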
Prove $(-a+b+c)(a-b+c)(a+b-c) \leq abc$, where $a, b$ and $c$ are positive real numbers | Case 1. If $a,b,c$ are lengths of triangle.
Since
$$
2\sqrt{xy}\leq x+y\qquad
2\sqrt{yz}\leq y+z\qquad
2\sqrt{zx}\leq z+x
$$
for $x,y,z\geq 0$, then multiplying this inequalities we get
$$
8xyz\leq(x+y)(y+z)(z+x)
$$
Now substitute
$$
x=\frac{a+b-c}{2}\qquad
y=\frac{a-b+c}{2}\qquad
z=\frac{-a+b+c}{2}
$$
Since $a,b,c$ are the side lengths of a triangle, $x,y,z\geq 0$, so the substitution is valid. We obtain
$$
(-a+b+c)(a-b+c)(a+b-c)\leq abc\tag{1}
$$
Case 2. If $a,b,c$ are not the side lengths of a triangle.
Then at least one factor on the left-hand side of inequality $(1)$ is negative. In fact exactly one factor is negative. Indeed, without loss of generality assume that $a+b-c<0$ and $a-b+c<0$; then $a=0.5((a+b-c)+(a-b+c))<0$, a contradiction, hence exactly one factor is negative. As a consequence, the left-hand side of inequality $(1)$ is negative while the right-hand side is positive, so $(1)$ obviously holds. |
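A randomized sanity check of inequality $(1)$ over positive reals, triangle or not (the small tolerance only guards against floating-point roundoff):

```python
import random

random.seed(1)

def holds(a, b, c):
    """(−a+b+c)(a−b+c)(a+b−c) ≤ abc, up to float roundoff."""
    return (-a + b + c) * (a - b + c) * (a + b - c) <= a * b * c + 1e-9

ok = all(holds(*(random.uniform(0.01, 10.0) for _ in range(3)))
         for _ in range(10_000))
```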