Isomorphic groups vs. isomorphic subgroups | No, you won't be able to do this in general.
For an example, take $G$ to be the dihedral group of order $8$, $G=\langle r,s\mid r^4=s^2=1,\ sr=r^3s\rangle$, let $H=\langle r^2\rangle$ and $K=\langle s\rangle$. Both $H$ and $K$ are cyclic of order $2$, and thus abstractly isomorphic to one another. Note that not only are they not conjugate in $G$, but also they are not conjugate in the holomorph of $G$, $G\rtimes\mathrm{Aut}(G)$, since $H$ is the center of $G$ and so always maps to itself under an automorphism.
In particular, the image of $\varphi^{(f)}\theta\hat{\epsilon}\hat{f}$ (that is, of $H$) will necessarily lie in the center of $\varphi^{(f)}(G)$. But the image of $\varphi^{(f)}\theta\bar{\epsilon}\bar{f}$ (which is the image of $K$) is not central in $\varphi^{(f)}(G)$, and so they cannot be equal. |
What is a 100th percentile? | I cannot comment there. So I am posting an answer.
When the sample space is infinitely large compared to the point/interval you are referring to, the 100th percentile makes sense.
For instance, the value $y(0)=1$ of the function $y=e^{-x^2}$ is a 100th percentile. |
How to re-write this function | Note that $a^{n-n}=a^0=1$.
Let $S=a^{n-1}+a^{n-2}+\cdots+a^{n-(n-1)}+a^{n-n}=a^{n-1}+a^{n-2}+\cdots+1$. Then
$$
aS=a^n+a^{n-1}+\cdots +a.
$$
Now
$$
aS-S = a^n-1.
$$
Hence
$$
S=\frac{a^n-1}{a-1}.
$$
You could try $n=4$ to see what is going on above.
When $n=4$,
$$
S=a^3+a^2+a+1,\quad aS = a^4+a^3+a^2+a.
$$
Thus
$$
(a-1)S=aS-S = (a^4+a^3+a^2+a)- (a^3+a^2+a+1)=a^4-1.
$$
So
$$
a^3+a^2+a+1=S=\frac{a^4-1}{a-1}.
$$ |
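As a quick sanity check of the closed form (this snippet is my own illustration, not part of the original answer):

```python
a, n = 3, 4                       # sample values, chosen arbitrarily
S = sum(a**k for k in range(n))   # a^0 + a^1 + ... + a^(n-1)
assert S == (a**n - 1) // (a - 1) == 40

# the identity holds for other (a, n) as well
assert all(sum(a**k for k in range(n)) == (a**n - 1) // (a - 1)
           for a in (2, 5, 10) for n in (1, 3, 7))
```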
Transition Probability Matrix and Stationary Distribution | Ad a) We have two states, let '$0$' be the first and '$1$' the second one. Assuming such a labelling, the transition matrix is
$$P= \left( \begin{array}{cc}
p & 1-p \\
1-q & q \\
\end{array} \right)$$
as, for example, the $[2,1]$ cell represents the probability of a mistake while sending the '$1$' message (i.e. the probability of going from state '$1$' to state '$0$' in one step; in other words, receiving '$0$' when '$1$' is transmitted over the given link).
Ad b) Since our Markov chain is irreducible and both states are aperiodic, the stationary distribution $\pi=[\pi_1, \pi_2]$ exists and is unique. We can find it by solving the system of equations $$\left[ \begin{array}{cc} \pi_1 & \pi_2\end{array}\right] \left( \begin{array}{cc}
p & 1-p \\
1-q & q \\
\end{array} \right) =\left[ \begin{array}{cc} \pi_1 & \pi_2 \end{array} \right]$$ together with $\pi_1+\pi_2=1.$ My computation yields $\pi_1=\frac{1-q}{2-(p+q)}$ and $\pi_2=\frac{1-p}{2-(p+q)}$.
Ad c) This is the question of the probability of going from '$0$' to '$0$' in exactly two steps. Such a scenario can be realized in two ways:
'$0$' $\to$ '$0$' $\to$ '$0$' which has a probability $\mathbb{P}(\text{'0'} \to\text{'0'})\mathbb{P}(\text{'0'} \to\text{'0'})= p^2$,
'$0$' $\to$ '$1$' $\to$ '$0$' which has a probability $\mathbb{P}(\text{'0'} \to\text{'1'})\mathbb{P}(\text{'1'} \to\text{'0'})= (1-p)(1-q)$.
Hence, the answer is $p^2+(1-p)(1-q)$. Alternatively, you can compute the $[1,1]$ cell of $P^2$, the two-step transition probability matrix.
Ad d) One can consider $50$ a sufficiently large number of steps, so that $p_{11}^{(50)}$ approximates $\lim\limits_{n\to \infty}p_{11}^{(n)}$, which is equal to $\pi_1$ (see the limiting distribution theorem). Hence, the answer is $\frac{1-q}{2-(p+q)}$. |
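A short numerical check of parts b) and c) (the sample values $p=0.9$, $q=0.8$ are my own, not from the question):

```python
p, q = 0.9, 0.8  # hypothetical channel reliabilities

# Closed-form stationary distribution from part b)
pi1 = (1 - q) / (2 - (p + q))
pi2 = (1 - p) / (2 - (p + q))
assert abs(pi1 + pi2 - 1) < 1e-12

# pi is stationary: pi * P == pi (row-vector convention)
assert abs(pi1 * p + pi2 * (1 - q) - pi1) < 1e-12
assert abs(pi1 * (1 - p) + pi2 * q - pi2) < 1e-12

# Part c): the [1,1] cell of P^2 equals p^2 + (1-p)(1-q)
P = [[p, 1 - p], [1 - q, q]]
P2_00 = sum(P[0][m] * P[m][0] for m in range(2))
assert abs(P2_00 - (p**2 + (1 - p) * (1 - q))) < 1e-12
```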
Finding original sample size from result percentage (sales from returned units) | Since $317$ is a prime number, $317/1000$ is in lowest terms. That means the only fractions
$$
\frac{x}{y} = \frac{31.7}{100}
$$
with integers $x$ and $y$ will have $y$ a multiple of $1000$.
If the percentage (after moving the decimal point to get to an integer) has no factors of $2$ or $5$ the same argument will work. If it does have those factors, divide by them to find the fraction in lowest terms. That will tell you the minimum possible number of units.
For example, if the return percentage was $18$, writing
$$
\frac{x}{y} = \frac{18}{100} = \frac{9}{50}
$$
as a fraction in lowest terms
says that the minimum possible number of units is $50$. |
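Python's `fractions` module performs exactly this reduction to lowest terms (the snippet is my illustration of the argument):

```python
from fractions import Fraction

# 31.7% = 317/1000 is already in lowest terms (317 is prime),
# so the sample size must be a multiple of 1000.
assert Fraction(317, 1000).denominator == 1000

# 18% = 18/100 reduces to 9/50, so at least 50 units are needed.
assert Fraction(18, 100) == Fraction(9, 50)
assert Fraction(18, 100).denominator == 50
```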
The steps of simplifying a fraction? | The complete general way is to prime factorize the numerator and denominator. To factorize a number into primes is to find a product of primes that equals the given number. This may seem slightly challenging, but as long as the numbers aren't that large, it is usually not too difficult.
Let me give you an example of a prime factorization of, say, $78$. We first note that it is an even number, so $2$ is a factor, i.e. $78 = 2\cdot 39$. Since $2$ is prime we cannot reduce it further, but $39$ is not prime. We find the factors of $39 = 3\cdot13$. Therefore $78 = 2\cdot3\cdot13$, where all the factors are now prime.
In the fraction we just cancel each common prime factor, i.e.
$$\require{cancel}\frac{36}{60}=\frac{2\cdot 18}{2\cdot 30}=\frac{2\cdot2\cdot9}{2\cdot2\cdot15}= \frac{\cancel{2\cdot2\cdot3}\cdot3}{\cancel{2\cdot2\cdot3}\cdot5}=\frac{3}{5}.$$ |
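A minimal trial-division sketch of the procedure (the helper function and names are mine, not from the answer):

```python
import math
from collections import Counter

def prime_factors(n):
    """Prime factorization by trial division, e.g. 78 -> [2, 3, 13]."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors

assert prime_factors(78) == [2, 3, 13]

# Cancel the common prime factors of 36/60, as in the display above.
num, den = Counter(prime_factors(36)), Counter(prime_factors(60))
common = num & den          # the shared factors 2*2*3
num, den = num - common, den - common
simplified = (math.prod(num.elements()), math.prod(den.elements()))
assert simplified == (3, 5)
```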
Show that if $z_0$ is a solution to $(2z-1)^{2014}=(2z+1)^{2014}$, then $\Re(z_0)=0$ | Since $|2z-1| = |2z+1|$, this tells us that $2z$ lies on the perpendicular bisector of the segment from $-1$ to $1$, i.e., it lies on the imaginary axis, so $\Re(z) = 0$. |
Can $C_{10}$ be isomorphic to $C_5\times C_2$? | Write $C_5=\langle r\rangle$, $C_2=\langle f\rangle$, and $C_{10}=\langle a\rangle$, and define a map $\phi : C_5 \times C_2 \to C_{10}$ by $\phi(r^u,f^v) = a^{2u + 5v}$. Show now that $\phi$ is an isomorphism of groups. |
Double integral $(x-y)^2\sin(x+y)$ on a given parallelogram | Your calculation is correct, because if we look at the integrand $f(x,y) = (x-y)^2 \sin (x+y)$, it obeys the property $$f(2\pi - y, 2\pi - x) = -f(x,y),$$ i.e., $f$ is antisymmetric upon reflection about the line $x + y = 2\pi$. Since the domain of integration is also symmetric about this line, it follows that the integral is zero.
However, if we modify the integrand to be $$g(x,y) = (x-y)^2 \sin^2 (x+y),$$ then you would get the answer $\pi^4/3$. So perhaps there is a typographical error. |
Given is relation $R$. Create a digraph for equivalence relation | Given your description of
$$R = \{ (1,1), (1,5), (2,4), (3,3), (4,1), (5,4) \},$$
I suppose this is a relation on the set
$$\{ 1, 2, 3, 4, 5 \}.$$
If $h_{equiv}(R)$ is the equivalence relation generated by $R$, then you can obtain this relation by following the procedure:
It must be reflexive, so you must add the pairs $(2,2), (4,4), (5,5)$.
It must be symmetric, so from the existence of pairs $(1,5), (2,4), (4,1), (5,4)$, you also need $(5,1), (4,2), (1,4), (4,5)$.
It must be transitive, so given that $(1,4), (4,2) \in h_{equiv}(R)$ it follows that $(1,2) \in h_{equiv}(R)$ (and also $(2,1)$); from $(5,4), (4,2) \in h_{equiv}(R)$ it follows that $(5,2) \in h_{equiv}(R)$ (and $(2,5)$ too).
So
\begin{align}
h_{equiv}(R) &= \{ (1,1), (1,2), (1,4), (1,5), (2,1), (2,2), (2,4), (2,5), \\
&(3,3), (4,1), (4,2), (4,4), (4,5), (5,1), (5,2), (5,4), (5,5) \}
\end{align}
Its graph is the complete digraph (with loops) on the set $\{1,2,4,5\}$, together with a loop at the isolated vertex $3$. |
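The three closure steps can also be run mechanically. Here is a hypothetical sketch (the fixed-point loop and names are mine):

```python
def equivalence_closure(R, S):
    """Smallest equivalence relation on S containing R."""
    R = set(R) | {(x, x) for x in S}             # reflexive closure
    R |= {(b, a) for (a, b) in R}                # symmetric closure
    while True:                                  # transitive closure (fixed point)
        new = {(a, d) for (a, b) in R for (c, d) in R if b == c}
        if new <= R:
            return R
        R |= new

R = {(1, 1), (1, 5), (2, 4), (3, 3), (4, 1), (5, 4)}
closure = equivalence_closure(R, {1, 2, 3, 4, 5})

# Two classes, {1, 2, 4, 5} and {3}:  4*4 + 1 = 17 pairs
assert len(closure) == 17
assert (1, 2) in closure and (2, 5) in closure and (2, 3) not in closure
```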
A visual interpretation of probability? | While I think that that visual is not particularly helpful, in no small part because the exterior is nonsensical, there is a way to interpret it reasonably. In the first case, you can pick any information from the clothing axis, and gain no more information about the weather. Similarly, you can pick any information from the weather axis and gain no more information about the clothing. This is to say that if we compare the whole graph to a subset obtained by focusing on part of one axis, the graph is largely unchanged.
On the other hand, this process fails in the second diagram. If I restrict to "coat" section, the graph is telling me that it is more likely to be raining - I have gained more knowledge about the weather variable by being more specific in my investigation of the clothing variable.
This is the crux of independence and dependence. Two variables are independent if adding information about one does not give you new information about the other. |
cross ratio definition | You didn't need to check if they were collinear. They were in general position (and the other 4 points also were in general position), so (after multiplying the coordinates by some constants) you had two projective frames. The fundamental points of these two projective frames are the projectivization of two bases of $\Bbb{R}^3$. We have a theorem that states that $f$ exists and its associated isomorphism is the one you find when you change the coordinates between the two bases.
(Plus the cross ratio (as far as we know) only works in $\Bbb{P}^1$) |
How to prove $\sum_{i=1}^n |a_i|^r\leq (\sum_{i=1}^n|a_i|)^r$ | For $r\ge1$:
It's enough to do it for finite sums. The case $\sum_{j=1}^n$ follows from the case $n=2$ by induction. For $n=2$ you can assume $a_1=1$.
So you only need to show that $$(1+t)^r\ge1+t^r\quad(t\ge0).$$Let $\phi(t)=(1+t)^r-(1+t^r)$. Then $\phi(0)=0$ and $\phi'(t)\ge0$ for $t\ge0$. |
Can we define $f$ in $0$ so that $f$ is continuous in $0$? | You have an infinite discontinuity at $x = 0$. As $x \to 0$, $\arctan(-x^3) \to 0$, so its reciprocal tends to $\pm\infty$ (with opposite signs from the two sides). Hence there is no value you can assign to $f(0)$, even piecewise, that makes the limit exist and equal $f(0)$. |
How to find the $S_\kappa$ elements in the product $ \sum_{\kappa=1}^{K} S_\kappa Q_\kappa(r, \theta,\phi) = 1$? | Select random $(r,\theta,\phi)$ triples and note down corresponding $\mathbf Q$ vectors. When you have enough $Q$ points that they define a $(K-1)$-dimensional hyperplane, calculate its equation and normalize so the constant term is $1$. The coefficients of the $Q_\kappa$s are then your desired $\mathbf S$.
This only works if $\mathbf S$ is unique. If there are several possible $\mathbf S$s, then you can't ever be sure that you have found all the possible solutions without knowing something about how the $Q$ functions work. It might be that there's a single $(r,\theta,\phi)$ that would reject one of them, but you just haven't tried it yet. |
A few questions about a question involving mathematical induction | I don't really understand your reasoning. You know that $d_1=0$, which checks out with $d_1=2^1-2$. Now assume that the relation holds up to $n$; this is your induction hypothesis. Then $$ d_{n+1}=2d_n+2$$ and you just have to use your induction hypothesis. I'll let you go on from here. |
Length of normal. chord... | "Subtends right angle"? Do you mean a line segment $AB$ such that $A$ and $B$ lie on the parabola and angle $AOB$ is a right angle? If $A$ and $B$ are $(x, 2\sqrt{x})$ and $(x,-2\sqrt{x})$, then the length of the line segment is, of course, just the difference in $y$ values, $4\sqrt{x}$. That would be the case when the two sides of the angle make $45$ degree angles with the $x$-axis.
But that is not the only possible case. If, say, the upper side makes angle $\theta$ with the $x$-axis, it can be written $y= \tan(\theta)x$ and crosses the parabola where $\tan(\theta)x= 2\sqrt{x}$, so $\sqrt{x}= 2\cot(\theta)$, $x= 4\cot^2(\theta)$, $y= 2\sqrt{x}= 2(2\cot(\theta))= 4\cot(\theta)$. The other line, which is perpendicular to the first and so has slope $-\frac{1}{\tan(\theta)}= -\cot(\theta)$, has equation $y= -\cot(\theta)x$ and crosses the parabola where $-\cot(\theta)x= -2\sqrt{x}$, so $\sqrt{x}= 2\tan(\theta)$, $x= 4\tan^2(\theta)$, $y= -4\tan(\theta)$. Can you find the distance between the points $A= (4\cot^2(\theta), 4\cot(\theta))$ and $B= (4\tan^2(\theta), -4\tan(\theta))$? |
Solving a first-order nonlinear ordinary differential equation | Hint: first express $y'$ as a function of $y$. The result should be a separable differential equation. |
Heat partial differential equation initial condition | $$U_t=4U_{xx}$$
Particular solution on the form $U=f(x)g(t)$
$$f(x)g'(t)=4f''(x)g(t) \quad\to\quad \frac{g'(t)}{g(t)}=4\frac{f''(x)}{f(x)}=\text{constant}=C$$
because a function of $x$ cannot be equal to a function of $t$ for all $x$ and all $t$, unless both are constant.
Since the condition $U(x,0)$ contains only cosines, we keep only cosines in the solution :
$$f_\nu(x)=\cos(\nu x) \quad\to\quad C=-4\nu^2 \quad\to\quad g_\nu(t)=e^{Ct}=e^{-4\nu^2 t}$$
where $\nu$ is any constant.
Before taking account of the conditions, the solution of the ODE is :
$$U(x,t)=\sum_{\text{any }\nu} A_\nu f_\nu(x)g_\nu(t)= \sum_{\text{any }\nu} A_\nu \cos(\nu x)e^{-4\nu^2 t} \qquad(1)$$
where the coefficients $A_\nu$ are any constants.
Condition : $\quad U(x,0)=3+2\cos(x)-\cos(3x)$
$$U(x,0)=\sum_{\text{any }\nu} A_\nu\cos(\nu x)e^{0}=\sum_{\text{any }\nu}A_\nu\cos(\nu x)=3+2\cos(x)-\cos(3x)$$
Among the infinitely many possible values of $\nu$, only three occur:
$\begin{cases}
\nu=0 \quad\to\quad A_0=3 \\
\nu=1 \quad\to\quad A_1=2 \\
\nu=3 \quad\to\quad A_3=-1 \\
\end{cases}$
No other values of $\nu$ appear, so all other $A_\nu=0$.
Bringing the three remaining terms into equation $(1)$ leads to :
$U(x,t)= 3 \cos(0)e^{0 t}+2 \cos(1 x)e^{-4(1^2) t}+(-1) \cos(3 x)e^{-4(3^2) t}$
$$U(x,t)= 3+2 \cos(x)e^{-4 t}- \cos(3 x)e^{-36 t}$$
You can verify that the condition $U_x(0,t)=U_x(\pi,t)=0$ is satisfied: the derivative $U_x$ contains only sine terms, which vanish at $x=0$ and $x=\pi$. This is the consequence of "keeping only cosines in the solution" at the beginning of the calculation. |
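A finite-difference spot check of the final solution (sample points, step size, and tolerances are my own choices):

```python
import math

def U(x, t):
    return 3 + 2*math.cos(x)*math.exp(-4*t) - math.cos(3*x)*math.exp(-36*t)

h = 1e-4
for (x, t) in [(0.3, 0.05), (1.1, 0.2), (2.5, 0.5)]:
    U_t  = (U(x, t + h) - U(x, t - h)) / (2 * h)
    U_xx = (U(x + h, t) - 2*U(x, t) + U(x - h, t)) / h**2
    assert abs(U_t - 4 * U_xx) < 1e-3            # heat equation U_t = 4 U_xx

# Initial condition U(x,0) = 3 + 2 cos(x) - cos(3x)
assert abs(U(0.7, 0) - (3 + 2*math.cos(0.7) - math.cos(2.1))) < 1e-12

# Neumann conditions U_x(0,t) = U_x(pi,t) = 0
for t in (0.1, 1.0):
    assert abs((U(h, t) - U(-h, t)) / (2 * h)) < 1e-3
    assert abs((U(math.pi + h, t) - U(math.pi - h, t)) / (2 * h)) < 1e-3
```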
Use the principle of mathematical induction to show that the given statement is true for all natural numbers n. | We want to prove that $$
11 + 23 + 35 + \ldots + (12n - 1) = n(6n + 5) \text{.}
$$
We start with the base case $n=1$, i.e. we have to validate $$
11 \overset?= 1(6\cdot 1 + 5)
$$
which is indeed true.
Now we assume that the statement is true for some $n$ (the induction hypothesis), and using that assumption prove that it's also true for $n+1$. This is the induction step. In other words, we have to show that $$
11 + \ldots + (12n - 1) = n(6n + 5) \Rightarrow \underbrace{11 + \ldots + (12(n+1) - 1)}_{A} = \underbrace{(n+1)(6(n+1) + 5)}_{B} \text{.}
$$
We do that by observing that $$\begin{eqnarray}
A &=& 11 + \ldots + (12(n+1) - 1) \\
&=& \underbrace{11 + \ldots + (12n - 1)}_{=n(6n + 5) \,(\star)} + (12(n+1) - 1) \\
&=& 6n^2 + 5n + 12n + 12 - 1 \\
&=& 6n^2 + 17n + 11
\end{eqnarray}$$
and that also $$\begin{eqnarray}
B &=& (n+1)(6(n+1) + 5) \\
&=& (n+1)(6n + 11)\\
&=& 6n^2 + 11n + 6n + 11 \\
&=& 6n^2 + 17n + 11 \text{.}
\end{eqnarray}$$
So $A=B$, which completes the proof of the induction step. $(\star)$ is where we used the induction hypothesis. |
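The identity is also easy to spot-check by brute force (this snippet is mine and is, of course, not a substitute for the induction proof):

```python
# Check 11 + 23 + ... + (12n - 1) == n(6n + 5) for the first 200 values of n.
for n in range(1, 201):
    lhs = sum(12 * k - 1 for k in range(1, n + 1))
    assert lhs == n * (6 * n + 5), n
```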
Find a basis for the orthogonal complement of the plane $2x + 3y + 4z = 0$. | Hint:
The orthogonal complement of a subspace $U$ is the collection of all vectors $v$ such that $v\cdot u=0$ for all $u\in U$. Let $v$ be a vector in the orthogonal complement, given by $(a,b,c)$. Let $u$ be a vector in the plane given by $(x,y,z)$. Then $v\cdot u=0$, or $ax+by+cz=0$. What values could $a$, $b$, and $c$ have? |
what is the complexity of recursive summation | Since this problem is tagged dynamic-programming, here are my two cents on how this can be solved in $O(n)$ memory and $O(n)$ time using dynamic-programming.
The key-idea is maintaining a prefix-sum sequence (as an array in memory). It is a sequence that satisfies the property pre[i] = f[0] + f[1] + ... + f[i]. Consider the base-states: pre[i] := 0 for i < 0 and pre[0] = f[0] = 1. Now we recurse:
for i in range(1, n + 1):
    f[i] = pre[i - 1] - (pre[i - k - 1] if i - k - 1 >= 0 else 0)
    pre[i] = pre[i - 1] + f[i]
If you're not familiar with prefix-sums, note that I employed the fact that f[i] + f[i + 1] + ... + f[j] is the same as pre[j] - pre[i - 1]. |
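Putting the pieces together, here is a self-contained version with an $O(nk)$ brute force as a cross-check (the wrapper function and the choice of recurrence $f[i] = f[i-1] + \dots + f[i-k]$, $f[0]=1$, are my reading of the setup):

```python
def f_upto(n, k):
    """f[0] = 1 and f[i] = f[i-1] + ... + f[i-k] (negative indices count as 0),
    computed in O(n) with the running prefix sum pre[i] = f[0] + ... + f[i]."""
    f, pre = [0] * (n + 1), [0] * (n + 1)
    f[0] = pre[0] = 1
    for i in range(1, n + 1):
        lo = pre[i - k - 1] if i - k - 1 >= 0 else 0
        f[i] = pre[i - 1] - lo          # window sum f[i-k] + ... + f[i-1]
        pre[i] = pre[i - 1] + f[i]
    return f

# For k = 2 the recurrence is Fibonacci-like.
assert f_upto(6, 2) == [1, 1, 2, 3, 5, 8, 13]

def brute(n, k):
    """Direct O(n*k) summation, for comparison."""
    f = [1] + [0] * n
    for i in range(1, n + 1):
        f[i] = sum(f[j] for j in range(max(0, i - k), i))
    return f

assert f_upto(30, 4) == brute(30, 4)
```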
Calculus: Find $\lim\limits_{h \to 0}{\frac{f(x+h)-f(x)}{h}}$ for $f(x)=\cos(x^2)$ | This is the definition of the derivative of $f(y)=\cos (y^2)$ at the point $y=x$. So,
$$
\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}=f'(x)=-2x\sin(x^2)
$$ |
Viral growth (variable rate) | Let's say that on the $k$-th day you have the following distribution of users:
$n_k^4$ - number of users with 4 more days to bring new users
$n_k^3$ - number of users with 3 more days to bring new users
$n_k^2$ - number of users with 2 more days to bring new users
$n_k^1$ - number of users with 1 more day to bring new users
$n_k^0$ - number of users that no longer bring new users
The total number of users on the $k$-th day is $\sum_{i=0}^4n_k^i$.
On the next day:
$n_{k+1}^4 = 0.5 n_{k}^4 + 0.3 n_{k}^3 + 0.4 n_{k}^2 + 0.1 n_{k}^1$
$n_{k+1}^3 = n_{k}^4$
$n_{k+1}^2 = n_{k}^3$
$n_{k+1}^1 = n_{k}^2$
$n_{k+1}^0 = n_{k}^0 + n_{k}^1$
Initial conditions are:
$n_0^4=1$, $n_0^3=n_0^2=n_0^1=n_0^0=0$
The above set of rules can be converted into a fairly simple code that will calculate the number of users on any given day. |
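A minimal simulation of the rules above (one possible version of the "fairly simple code"; the function and variable names are my own):

```python
def total_users(days):
    """n[i] = number of users with i more days to bring new users."""
    n = [0.0, 0.0, 0.0, 0.0, 1.0]        # day 0: one fresh user (n^4 = 1)
    for _ in range(days):
        nxt = [0.0] * 5
        # newcomers recruited today start with 4 productive days
        nxt[4] = 0.5*n[4] + 0.3*n[3] + 0.4*n[2] + 0.1*n[1]
        nxt[3], nxt[2], nxt[1] = n[4], n[3], n[2]
        nxt[0] = n[0] + n[1]             # retired recruiters accumulate
        n = nxt
    return sum(n)

assert total_users(0) == 1.0
assert total_users(1) == 1.5             # the single user brings 0.5 new users
```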
What is the limiting sum of $\frac{1}{1\left(3\right)}+\frac{1}{3\left(5\right)}+\frac{1}{5\left(7\right)}+···+\frac{1}{(2n-1)(2n+1)}=\frac{n}{2n+1}$ | Hint:
$$\frac{1}{(2n-1)(2n+1)}=\frac12\frac{(2n+1)-(2n-1)}{(2n-1)(2n+1)}=\frac12\left[\frac1{2n-1}-\frac1{2n+1}\right]$$ |
$f(z) = xy + iy$ is nowhere analytic? | Note: you are confusing complex differentiability with analyticity. Being analytic at a point means being complex differentiable on a disk around that point. Your function is not.
It is true that on open sets analytic means the same as complex differentiable. But in general analytic is a stronger requirement. Your set is NOT open. |
Find $P(A∩B)$ and $P(A'∪B')$ | Yes, $P(A\cap B) = P(A)+P(B) - P(A\cup B)$. In this case, that means $$P(A\cap B) = 0.6+0.8 - 1.0 = 0.4$$
For the second question, the answer to the first question is very useful, because the second event is essentially the complement of the first.
For determining $P(A'\cup B')$, we use DeMorgan's Rule for sets which tells us $A'\cup B' = (A\cap B)'$. So we are needing to find $P((A\cap B)')$ which is equal to $$P(A'\cup B') = P((A\cap B)') = 1-P(A\cap B) = 1- 0.4= 0.6$$ |
Inequality for definite integral. | $$\left| \int_0^{\varepsilon}te^{ut}\,du\right| \le |t| \int_0^{\varepsilon}|e^{ut}|\,du = |t| \int_0^{\varepsilon}e^{ut}\,du $$
Consider $$f(\varepsilon) := \int_0^\varepsilon e^{tu} \ du.$$ This is a convex function, and by the fundamental theorem of calculus $f'(\varepsilon) = e^{\varepsilon t}$.
Fix $\varepsilon_0 > 0$, then for all $0\le \varepsilon \le \varepsilon_0$ you have $f(0) \ge f(\varepsilon) + f'(\varepsilon)(0-\varepsilon)$ so
$$\int_0^\varepsilon e^{tu}\ du \le \varepsilon \ e^{\varepsilon t} \le \varepsilon \ e^{\varepsilon_0 t}$$
Concluding
$$\left| \int_0^{\varepsilon}te^{ut}\,du\right| \le |t| \varepsilon \ e^{\varepsilon_0t} \le |t| |\varepsilon| \left( \ e^{\varepsilon_0t} + e^{-\varepsilon_0t}\right)$$
since in the last step I just added a strictly positive term. |
Represent any number with two factorials | For $x, y > 1$, $x! - y!$ is always an even number, so the only odd numbers representable in this form are numbers that are smaller than $n!$ by $1$ for some $n$. Taking $k = 3$, as $k+1 = 4 \ne n!$ for any $n$, we see that $3$ is not representable in the form $x! - y!$. |
Show that $f(z)$ has a zero in $\mathbb{D}$ and that $|z_{0}|>\frac{1}{M}$ | Your invoking the minimum modulus principle is fine, but the subsequent argument using the maximum principle does not produce $|f(z)|=1$ on the boundary (if there's an argument that can prove it, then you should include it in your proof). I suggest instead noting that $|f(0)|=1<|f(z)|$ for all $|z|=1$. It says that the minimum of $|f(z)|$ cannot occur on $|z|=1$. This leads to a contradiction.
About the second part, it looks fine except that the equality sign in
$$
\left|\frac{1}{M}\right|\leq|z_0|
$$ can be dropped since $|f(z)|=M$ cannot occur on the boundary.
Here, I will present another way in which the Schwarz lemma can be applied. Define
$$
g:\overline{\mathbb{D}}\ni w\mapsto f\left(\frac{w+z_0}{1+\overline{z_0}w}\right).
$$ Then we can see that $g(0)=f(z_0)=0$ and $|g(w)|<M$ for all $|w|=1$. Schwarz's lemma implies $$
|g(w)|<M|w|
$$ for all $0<|w|\le 1$. Finally the conclusion comes from noticing that
$$
|g(-z_0)|=|f(0)|=1<M|-z_0|=M|z_0|.
$$ |
$(\mathbb{R}^n,\|\cdot\|_{p})$ is isometrically isomorphic to $(\mathbb{R}^n,\|\cdot\|_{q})$ iff $p=q,$ | In the article Isometries of Finite-Dimensional Normed Spaces by F. C. Sanchez and J.M.F. Castillo, the authors prove the following result.
Theorem. For $1\le p,q\le\infty$ and $n\ge 2$, $(\mathbb{R}^n,\|\cdot\|_p)$ and $(\mathbb{R}^n,\|\cdot\|_q)$ are isometrically isomorphic if and only if $p=q$ or if $n=2$ and $p,q\in\{1,\infty\}$.
The argument splits in several cases.
The first one is to rule out any isometric isomorphisms between $\ell^n_p$ and $\ell^n_q$ if $q\not=p'$ where $p'$ is the Hölder dual of $p$ defined by $\frac1p+\frac1{p'}=1$.
This is a very beautiful argument, which I will now try to reproduce. For the remaining cases, see the paper.
For simplicity let us also only consider $1<p,q<\infty$.
Denote $(\mathbb{R}^n,\|\cdot\|_p)$ by $\ell^n_p$.
Say $\phi:\ell^n_p\to \ell^n_q$ is an isometric isomorphism.
Recall that, by Hölder's inequality (and its condition for equality), the dual space of $\ell^n_p$ is given by $(\ell^n_p)^*=\ell^{n}_{p'}$ (up to isometric isomorphism).
The dual map $\phi^*:\ell_{q'}^n\to \ell_{p'}^n$ is also an isometric isomorphism (this is a general fact, see appendix below). Unpacking definitions, we see that there exists an invertible matrix $A\in\mathbb{R}^{n\times n}$ such that
$$ \|Ax\|_q = \|x\|_p$$
and
$$ \|A^* x\|_{p'}=\|x\|_{q'}$$
hold for all $x\in\mathbb{R}^n$, where $A^*$ is the dual matrix (so the transpose) of $A$.
The question is: What does such a matrix look like?
Observe that the unit vectors have norm $1$ in all the $p$-norms. So setting $x=e_j$ in both equations we get
$$1=\|Ae_j\|^q_q = \sum_{i=1}^n |A_{ij}|^q$$
and
$$1=\|A^T e_j\|^{p'}_{p'}=\sum_{i=1}^n |A_{ji}|^{p'}$$
Adding both equations over $j=1,\dots,n$ we obtain
$$n=\sum_{i,j=1}^n |A_{ij}|^q = \sum_{i,j=1}^n |A_{ij}|^{p'},$$
so the entry-wise $q$ norm of $A$ equals the entry-wise $p'$ norm and both equal $n$.
Observing also that $|A_{ij}|\le 1$ (by the previous), and that for $t\in[0,1]$ one has $t^q=t^{p'}$ only when $t\in\{0,1\}$ (here we use $q\not=p'$), we see that each $A_{ij}$ can only be $0$ or $\pm 1$ and the number of non-zero entries is exactly $n$. Since the matrix is also invertible, we now know exactly what it looks like: it has exactly one $\pm 1$ in each row and each column. That is, $A$ is a "generalized" permutation matrix (the "generalized" hinting at the fact that $-1$ is also allowed).
(the beauty is that we have now "accidentally" classified all the isometries $\ell^n_p\to \ell^n_p$ if $p\not=2$!, which would have been an interesting follow-up question)
From here it is easy to conclude that we must have $p=q$. Consider the vector $x=e_1+e_2$. Then $Ax=\epsilon_1 e_i + \epsilon_2 e_j$ for some $i,j$ and choices of signs $\epsilon_1,\epsilon_2\in\{\pm 1\}$, so
$$2^{1/p}=\|x\|_p=\|Ax\|_q = 2^{1/q}.$$
So $p=q$.
Appendix.
Lemma. Let $X,Y$ be normed vector spaces and $\phi:X\to Y$ an isometric isomorphism. Then also the dual map
$$\phi^*:Y^*\to X^*,$$
given by $\phi^*(w)(x)=w(\phi(x))$ is an isometric isomorphism.
Proof.
We have $$\|\phi^* w\|_{X^*}=\sup_{x\not=0} \frac{|w(\phi(x))|}{\|x\|_X}=\sup_{x\not=0} \frac{|w(\phi(x))|}{\|\phi(x)\|_Y} = \sup_{y\not=0} \frac{|w(y)|}{\|y\|_Y}=\|w\|_{Y^*},$$
where the penultimate step uses that $\phi$ is bijective.
$\square$ |
Use characteristic functions to prove normal distribution of independent random variables $X$ and $Y$ , where $X+Y$ and $X-Y$are independent | Here's one way that I found, though it may not be the nicest. It starts with the observation that you made, namely:
$$ \phi_X(t) = \phi_X(t/2) \, \phi_X(t/2) \, \phi_Y(t/2) \, \phi_Y(-t/2) . $$
You could interpret this as saying that $X$ has the same distribution as
$$ \frac{X_1 + X_2 + Y_1 - Y_2}{2}, $$
where $X_1, X_2, Y_1, Y_2$ are all independent and where the $X_i$'s have the same distribution as $X$ (resp., the $Y_i$'s have the same distribution as $Y$). Now the idea is that we may further divide each $X_i$ and $Y_i$ into a sum of four independent random variables, so that $X$ has the same distribution as a sum of sixteen random variables. Half of these are $X$'s, half are $Y$'s, and some have plus signs while others have minus signs. This suggests that $X$ may be normal, since it can be expressed as an "average of many random variables," akin to the Central Limit Theorem. The problem is that the CLT itself does not apply. Now, if we were in the nicer situation of knowing
$$ X \overset{d}{=} \frac{X_1 + \dotsb + X_n}{\sqrt n}$$
for any $n$, where again the $X_i$'s are independent copies of $X$, then we could invoke the CLT to say that the right-hand side converges in distribution to a standard normal $Z \sim N(0,1)$; but $X$ is equal to this limit in distribution, so $X$ is standard normal.
However, for our situation of mixed $X$'s and $Y$'s with mixed signs, we need a more general result. Specifically, the Lindeberg CLT tells us when a "large average of random variables converges in distribution to normal," where the r.v.'s need not have the same distribution (a priori, $\pm X$ and $\pm Y$ don't). However, we must ensure that we don't have some random variables contributing too much to the variance of the average: this is the idea behind the "Lindeberg condition" in the link above. To apply it to this problem, take the sequence $s_1 Z_1, \dotsc, s_{4^n} Z_{4^n}$ (remember that we obtained sixteen random variables from $n=2$ division operations), where each $Z_i$ has either the distribution of $X$ or $Y$, and where $s_i = \pm 1$, you'll get $\mu_i = 0$ and $\sigma_i^2 = \mathrm{Var}(s_i Z_i) = 1$ for each $i$, so that
$$ s_{4^n}^2 = \sum_{i=1}^{4^n} 1 = 4^n .$$
You can then verify the Lindeberg condition to conclude that $X$, which is equal in distribution to
$$ \frac{1}{s_{4^n}} \sum_{i=1}^{4^n} s_i Z_i = \frac{s_1 Z_1 + \dotsb + s_{4^n} Z_{4^n}}{2^n} ,$$
also converges in distribution to the standard normal distribution. |
Prove that $\sqrt{3} + \sqrt{7}$ is a primitive element of $\mathbb{Q}(\sqrt{3},\sqrt{7})$ | Cubing $\alpha=\sqrt 3+\sqrt 7$, you obtain the linear system
\begin{cases}
\alpha=\sqrt 3+\sqrt 7, \\[1ex]
\alpha^3=24\sqrt 3+16\sqrt7,
\end{cases}
from which one can easily deduce $\sqrt 3$ and $\sqrt 7$ as linear combinations of $\alpha$ and $\alpha^3$. |
How does gaussian quadrature work for this example? | $1)$ It should be $\frac{1}{3}=w_0+w_1$, since $f(x)=x^0=1$ is constant. Next, you impose exactness on $f(x)=x,$ which gives a second equation $$\int\limits_0^1 x\cdot x^2\, dx=w_0\cdot 0+w_1\cdot a,$$ or $$\frac{1}{4}=aw_1.$$ Now, you have a $2\times 2$ system of linear equations that you can solve to find $w_0,$ $w_1.$
$2)$ A Gaussian quadrature rule with $n$ nodes will have a degree of precision of $2n-1.$ If $n=2,$ then the degree of precision would be $3$. However, you need to check if these nodes came from performing Gaussian quadrature, as was claimed. Not every set of nodes is going to give you a Gaussian quadrature rule. It seems like what you’re referring to is the process of undetermined coefficients to find the weights; Gaussian quadrature is more substantial and involves choosing the nodes appropriately. If you want to check exactness in the given rule, with $a$ provided to you, do the process outlined in step 1) to find the weights, then test on $x^2,\cdots$. If you want to choose the weights and one of the nodes, see the other answer. |
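To make step 1) concrete, here is a hedged sketch that also solves for the free node $a$ by additionally imposing exactness on $x^2$. It assumes, as the displayed equations suggest, the weight $x^2$ on $[0,1]$, so the moments are $\int_0^1 x^2\cdot x^k\,dx = \frac{1}{k+3}$; the exact values below are my own computation. Note this fixed-node-at-$0$ construction is a Radau-type rule, not full Gaussian quadrature (which would also choose the node $0$ freely), matching the caveat in 2):

```python
from fractions import Fraction as F

# Moments of the weight x^2 on [0,1]: m_k = integral of x^2 * x^k = 1/(k+3)
m0, m1, m2, m3 = F(1, 3), F(1, 4), F(1, 5), F(1, 6)

# Rule: integral ~ w0*f(0) + w1*f(a).  Exactness on 1, x, x^2 gives
#   w0 + w1 = m0,   w1*a = m1,   w1*a^2 = m2
a  = m2 / m1              # divide the last two equations
w1 = m1 / a
w0 = m0 - w1

assert (a, w1, w0) == (F(4, 5), F(5, 16), F(1, 48))

# Degree of precision is 2, not 3: the rule is not exact on x^3.
assert w1 * a**3 != m3
```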
Solve for the exponent of a matrix | Hint :
Using this you get the diagonalization $U=PDP^{-1}$; then $U^x = PDP^{-1} \cdots PDP^{-1} = P D^x P^{-1}$, where $D^x$ is the diagonal matrix whose entries are those of $D$ raised to the power $x$. From this you can solve for the last element being greater than $0.99$.
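A numerical illustration of the hint with a made-up diagonalizable matrix $U$ (the values are hypothetical, not from the question):

```python
import numpy as np

U = np.array([[0.9, 0.1],
              [0.2, 0.8]])             # made-up example; eigenvalues 1 and 0.7

vals, P = np.linalg.eig(U)             # U = P D P^{-1}, D = diag(vals)
x = 5
Ux = P @ np.diag(vals**x) @ np.linalg.inv(P)
assert np.allclose(Ux, np.linalg.matrix_power(U, x))

# vals**x also makes sense for non-integer x, e.g. a "half step" U^(1/2):
half = P @ np.diag(vals**0.5) @ np.linalg.inv(P)
assert np.allclose(half @ half, U)
```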
Question about Cauchy sequences are convergent in $\mathbb{R}^k$ | Let K be a countable subset of R$^n$.
Q1. If p is a limit point of K, U is an
open nhood of p, then U $\cap$ K is infinite.
If not, then an open nhood of p can be constructed that misses K except possibly at p.
So p is not a limit point of K.
If K is the set of points of Cauchy sequence s, then K is finite iff s is eventually constant.
Q2. K is not open because open subsets of
R$^k$ are uncountable. K is closed iff it
contains all of it limit points. |
No Lie algebra over $\Bbb R$ or $\Bbb C$ can have a unit element. | If $[x,e]=[e,x]$, then what does the antisymmetry of the Lie bracket tell you? |
Why is the higher order Taylor series less accurate for higher values of z? | The original function is bounded, the Taylor polynomials are not (they are non-constant polynomials after all). The fourth degree approximation grows $\sim z^4$, the fifth degree approximation $\sim z^5$ as $z\to \infty$. Clearly, the $z^5$ term is larger than the $z^4$ term for $z$ large enough. |
Proving projective equivalence of Auslander Transpose | I found an answer here:
http://arxiv.org/pdf/math/9809121v2 |
Determinant of a matrix with odd diagonal and even entries | The determinant is a sum of signed products, each taking exactly one entry from each row and each column. All of these products are even except one: the product of the diagonal entries. Thus $\det B$ is odd. Alternatively, you could use induction and expand along the first row. |
What is the motivation behind this (tested, working) 2d coordinate transform? | There are several parts to a complete answer to your question, so I'll approach them one at a time.
Imagine that a point is "fixed in the world" in the same fashion as a building is fixed with respect to the ground. Now, you could have different coordinate systems to describe the location of that building (e.g., street numbers, postal addresses, GPS, lat/long, and more) but the building itself never moves.
So consider two different coordinate systems, connected by the equations: (I'll limit myself to 2 dimensions but this is all valid for more dimensions as well, with some modifications)
$$
\begin{array}{rcl}
x' &=& x + c \\
y' &=& y + d
\end{array}
$$
Now imagine that the origin of the non-primed coordinate system is where the building (or point) in question is located, i.e., $x = y = 0$. Then the position of the building or point, in the primed coordinate system, is $(x',y') = (c,d)$. What that means is that the two coordinate systems are translated with respect to one another, by the vector $(c,d)$. In other words, to get from the origin of one coordinate system to the origin of the other, you need to move right (or left) by $c$ units and up (or down) by $d$ units. Of course, in general, your building or point isn't at the origin of either coordinate system (as in the figure below) but that's fine. I used that just as a simplifying step in the argument.
Next, instead of having the two coordinate systems translated with respect to one another, you could have them share the same origin but be rotated with respect to one another, as in the figure below:
The figure gets a bit messy if I label all the coordinates so I didn't but I think you get the idea from the color-coding of the previous picture. In any case, now, the equations connecting the two coordinate systems are
$$
\begin{array}{rcl}
x' &=& \cos(\phi)\,x + \sin(\phi)\,y \\
y' &=& \cos(\phi)\,y - \sin(\phi)\,x
\end{array}
$$
where $\phi$ (the Greek letter 'phi') is the angle between the $x$-axis and the $x'$-axis. Note that we could also write these as
$$
\begin{array}{rcl}
x' &=& {} + a\,x + b\,y \\
y' &=& {} - b\,x + a\,y
\end{array}
$$
provided that $a = \cos(\phi)$ and $b = \sin(\phi)$. Also, because $a$ and $b$ must be set like so, it follows that $a^2 + b^2 = 1$ (because $\sin^2\phi + \cos^2\phi = 1$ for any angle $\phi$). So, any two real numbers $a$ and $b$ such that $a^2 + b^2 = 1$ can be used to define a pair of coordinate systems that are rotated with respect to one another, while sharing the same origin.
A third possibility is having the two coordinate systems share the same origin but have the transformation equations:
$$
\begin{array}{rcl}
x' &=& u\,x \\
y' &=& v\,y
\end{array}
$$
where $u$ and $v$ are real numbers. What these represent is a scaling (stretching or shrinking) of the coordinate axes. Note that if $u$ and $v$ have the same values, that scaling is the same for both axes. An example of this would be having one coordinate system measure distances in meters and the other measure distances in kilometers, in which case the common scaling factor would be 1000.
So, finally, what if you combined all of these, that is, what if you had
$$
(*)\quad
\begin{array}{rcl}
x' &=& u\,( a\,x + b\,y) + c \\
y' &=& v\,(-b\,x + a\,y) + d
\end{array}
$$
That would represent two coordinate systems that are translated, rotated, and scaled with respect to one another. Then, in the case of equal scaling for both axes (say, $u$), you'd have
$$
\begin{array}{rcl}
x' &=& u\,( a\,x + b\,y) + c \\
y' &=& u\,(-b\,x + a\,y) + d
\end{array}
$$
and you could write, instead,
$$
\begin{array}{rcl}
x' &=& A\,x + B\,y + c \\
y' &=& -B\,x + A\,y + d
\end{array}
$$
where $A = ua$ and $B = ub$. This is what you have in your example, i.e., two coordinate systems that are translated, rotated, and equally scaled with respect to one another.
Note: On a second look at what you have, it appears that your $y'$ axis is reversed with respect to the $y$ axis, which corresponds to a choice of $v = -u$ in the $(*)$ expression above, resulting in
$$
\begin{array}{rcl}
x' &=& A\,x + B\,y + c \\
y' &=& B\,x - A\,y + d
\end{array}
$$
I find that an unusual choice of coordinate systems, but that's ok if that's what you want. |
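To make the combined transform concrete, here is a small Python sketch (my own illustration; the function name and the sample values are made up). It applies $(*)$: rotate by $\phi$, scale the axes by $u$ and $v$, then translate by $(c,d)$. With equal scaling $u=v$ you can read off $A=u\cos\phi$ and $B=u\sin\phi$, and hence $A^2+B^2=u^2$.

```python
import math

def transform(x, y, u, v, phi, c, d):
    # the (*) transform: rotate by phi, scale the axes by u and v, translate by (c, d)
    a, b = math.cos(phi), math.sin(phi)
    xp = u * (a * x + b * y) + c
    yp = v * (-b * x + a * y) + d
    return xp, yp

# equal scaling: x' = A x + B y + c with A = u*cos(phi), B = u*sin(phi)
u, phi, c, d = 2.0, math.pi / 6, 3.0, -1.0
A, B = u * math.cos(phi), u * math.sin(phi)
x, y = 1.5, -0.5
xp, yp = transform(x, y, u, u, phi, c, d)
```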
Connected component in $\mathbb{R}^\omega$ | HINT: Let $B$ be the set of bounded sequences in $\Bbb R^\omega$. Show that $B$ is the component of the zero sequence; you can do this by showing that it’s clopen and path-connected. Then consider $x+B$. |
Basic discrete combinatorics questions I have problems with | a. As mentioned by @Unwisdom, you should sum the number of possible passwords of length 18 and 19, rather than multiply them.
b. This is phrased somewhat poorly, but your interpretation seems correct.
c. Your answer is correct.
d. You have 26 total letters, and will be using 13 of them. How many ways can you choose this? Next, consider how many ways you can arrange these 13 letters, and a procedure for constructing a 6-letter password and a 7-letter password. See the comment made by @JMoravitz for elaboration on the second part. As for the colors, you are correct in saying the answer doesn't change, as the passwords are of two different lengths.
e. You have 26 letters, and need to choose 29 distinct letters to create your two passwords. Is this possible?
f. See a comment made by @JMoravitz for this.
g. Note that now, you can repeat letters. When creating each password of length 6, you have 26 options per letter. Therefore, there are $26^6$ options for password 1, and $26^6$ options for password 2. As the passwords are the same length, coloring them will increase the count. |
Example of Quasi-circular domain | Symmetrized bidisc is a quasi-circular domain that isn't a circular domain.
$$\mathbb{G}=\{(z_1+z_2,z_1z_2)\in \mathbb{C}^2: z_1,z_2\in B(0,1)\}$$
Clearly, this is a (1,2)-quasi-circular domain.
To see that this is not a circular domain, notice that $(2,1)\in \overline{\mathbb{G}}$ but $(-2,-1)\not\in \overline{\mathbb{G}}$.
$\int_{0}^{1} t^2 \sqrt{1+4t^2} dt$ plugging in limits assist | There are 2 mistakes.
$$\begin{align}
\int t^2 \sqrt{(1+4t^2)}dt&=\frac{1}{8}\int{\sinh^2(x)}\sqrt{1+\sinh^2(x)}\cosh(x)dx\\
&=\frac{1}{8}\int{\sinh^2(x)}\cosh^2(x)dx=\frac{1}{8\times 32}\left(\sinh(4x)-4x\right) +C \\
&= \frac{1}{256}\left((4\sinh(x)\cosh^3(x) + 4\sinh^3(x)\cosh(x)) - 4x\right) + C\\
&=\left[\frac{1}{256}\left(4(2t)(\sqrt{1+4t^2})^3 + 4(2t)^3\sqrt{1+4t^2} - 4\sinh^{-1}(2t)\right)\right]_{0}^{1} \approx 0.6063
\end{align}$$ |
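As a sanity check (my own addition, not part of the original answer), the closed form evaluates to $\frac{72\sqrt5-4\sinh^{-1}2}{256}\approx0.6063$, which matches a direct numerical integration of the original integrand:

```python
import math

def f(t):
    return t * t * math.sqrt(1 + 4 * t * t)

# composite Simpson's rule for the integral on [0, 1]
n = 1000
h = 1.0 / n
s = f(0) + f(1) + sum((4 if k % 2 else 2) * f(k * h) for k in range(1, n))
numeric = s * h / 3

# closed form: sinh(x) = 2 and cosh(x) = sqrt(5) at the upper limit t = 1
closed = (72 * math.sqrt(5) - 4 * math.asinh(2)) / 256
```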
How can I solve a/b by removing log? | $\log a=\log (b^{0.5})$, so $a =b^{0.5}$. The ratio $\frac a b$ cannot be determined uniquely from the given equation.
Isometry for polynomials of degree at most $n$ | This is my approach
The mapping $F_q$ being an isometry from $(P_3, \langle \cdot, \cdot\rangle)$ to $(P_5, \langle \cdot, \cdot\rangle)$ means that $\forall p,p' \in P_3$ $$\langle F_q(p), F_q(p')\rangle_{P_{5}} = \langle p, p'\rangle_{P_3}$$
So we can simplify matters by choosing particular values of $p,p' \in P_3$ and deriving
equations in $b_0,b_1,b_2$.
With $p=1,p'=1 \rightarrow 1=b_0^{2}+b_1^{2}+b_2^{2}$
With $p=1,p'=x \rightarrow 0=b_0b_1+b_1b_2$
With $p=1,p'=x^{2}+1 \rightarrow 1=b_0^{2}+b_1^{2}+b_2^{2}+b_0b_2$
With $p=x,p'=x+1 \rightarrow 1=b_0^{2}+b_1^{2}+b_2^{2}+b_0b_1+b_1b_2$
Therefore $b_0b_2=0$ and $b_0b_1+b_1b_2=0$; since $b_0=0$ or $b_2=0$, it follows that $b_0b_1=b_1b_2=b_0b_2=0$, together with $1=b_0^{2}+b_1^{2}+b_2^{2}$
$\rightarrow$ two of $b_0,b_1,b_2$ must be $0$, and the remaining one must equal $1$ or $-1$
So $q$ can be $1,-1,x,-x,x^{2},-x^{2}$ (substituting back confirms that each of these works)
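Assuming the inner product here is the coefficient (dot-product) inner product, the six candidates can be verified by brute force (my own sketch, not part of the original answer):

```python
def mul(p, q):
    # multiply two polynomials given as coefficient lists (lowest degree first)
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def dot(p, q):
    # coefficient inner product, padding with zeros to a common length
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return sum(a * b for a, b in zip(p, q))

basis = [[1], [0, 1], [0, 0, 1], [0, 0, 0, 1]]  # 1, x, x^2, x^3 spanning P_3
candidates = [[1], [-1], [0, 1], [0, -1], [0, 0, 1], [0, 0, -1]]  # 1, -1, x, -x, x^2, -x^2
all_isometries = all(
    dot(mul(p1, q), mul(p2, q)) == dot(p1, p2)
    for q in candidates for p1 in basis for p2 in basis
)
```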
$(P \wedge Q \rightarrow R) \leftrightarrow (P \wedge \lnot R \rightarrow \lnot Q)$ | Almost there!
Isolate the $P$ from $P \land \neg R$ by $\land$ Elim.
Combine this with the $Q$ from the Assumption to get $P \land Q$ using $\land $ Intro
Now you can use $\rightarrow$ Elim with $(P \land Q) \rightarrow R$ and $P \land Q$ to get $R$, and that will contradict the $\neg R$ that you can isolate from the $P \land \neg R$ by $\land$ Elim.
Here it is more formally (and inside the larger $\leftrightarrow$ proof): |
Conditional expectations on von Neumann algebras-change of state | The second question admits (I think) a straightforward solution. Pick $x \in M_+$ and notice that, if $E$ preserves $\varphi$, we have that
$$
\tau(y \, x) = \varphi(x) = \varphi(E(x)) = \tau( y \, E(x) ) = \tau( E(y) \, x)
$$
since that holds for every positive $x \in M$ we have that $y = E(y)$ and so $y \in L^1(N)$. The other direction is trivial.
Question 1 has a very clear answer when $\varphi(x) = \tau(\delta \, x)$ is a trace, i.e. $\delta$ is central, and $\delta$ is invertible, i.e. $0 < \kappa 1 \leq \delta$ for some $\kappa > 0$. In that case, the natural choice is
$$
E_\varphi(x) = {E}_\tau(\delta)^{-\frac12} \, {E}_\tau( \delta^\frac12 \, x \, \delta^\frac12 ) \, {E}_\tau(\delta)^{-\frac12}.
$$
the map is ucp and, by the centrality of $\delta$ it satisfies that $E \circ E = E$. It also holds that $\varphi$ is preserved. Removing the invertibility is just a technicality that can be solved by taking $\delta'_\epsilon = \delta + \epsilon 1$ and normalize it to $\delta_\epsilon = \delta_\epsilon'/\| \delta_\epsilon' \|_1$. Then, you can take the limit of $E_\epsilon$ as $\epsilon \to 0$ in the pointwise weak-$\ast$ topology.
I do not have a formula for the general case of $\delta$ noncentral. |
When will Andrea arrive before Bert? | Let me try to solve simply.
There are $17\cdot16 = 272$ possible combinations of clock times (points in time) for Andrea & Bert
There are $12\cdot12 = 144$ possible points, when both can be present, and Andrea can only be later in this range.
Of these, on $12$ , they will arrive simultaneously, and Andrea will be later on half of the remaining $132$, thus not earlier on $78$ of the total points
Hence Pr = $\dfrac{(272-78)}{272} = \dfrac {97}{136}$
Your answer is right ! |
How can a square root be defined since it has two answers? | Take it this way: $a = \sqrt{16}$ is just a symbol denoting something.
It denotes (by definition) the non-negative root of $x^2 = 16$.
Since $16 = a^2 = (-a)^2$, it means the other solution of this equation is $-a = -\sqrt{16}$ |
Are the reals $\mathbb{R}$ a subset of $\mathbb{R}^2$? | No the real numbers are not a subset of $\mathbb{R}^2$.
In some contexts it may be useful to identify the real number $x$ with, for example, the couple $(x,0)$. (But you see there is no actual reason why $(x,0)$ rather than $(0,x)$ or maybe $(x,x)$.) Thus, this would need to be mentioned. |
Extension of vector bundles on $\mathbb{CP}^1$ | I think that you can use duality (Hartshorne, III Thm 7.1),
$$ \mathrm{Ext}^1(\mathcal{O}_{\mathbb{P}^1}(2),\mathcal{O}_{\mathbb{P}^1}(-2)) \simeq \mathrm{H}^0(\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1}(2))^{\vee} $$
Identify $\mathcal{O}_{\mathbb{P}^1}(-2) = \mathcal{O}_{\mathbb{P}^1}(-p -q)$, where $p$ and $q$ are distinct points. Let $A_{\lambda}$ be a homogeneous quadratic polynomial corresponding to $\lambda$.
You can use $p$, $q$ and $A_{\lambda}$ to construct $E_{\lambda}$.
If $A_{\lambda}(p)=A_{\lambda}(q)=0$, then $a_{\lambda}=0$.
If $A_{\lambda}(p)=0 $ and $A_{\lambda}(q)\neq 0$, then $a_{\lambda}=1$.
If $A_{\lambda}(p)\neq 0 $ and $A_{\lambda}(q)\neq 0$, then $a_{\lambda}=2$.
Note that choosing $p$ and $q$ is the same as choosing a basis for $\mathrm{H}^0(\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1}(1))$, which determines a basis for $\mathrm{H}^0(\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1}(2))$ that you have to fix to construct an isomorphism
$$ \mathrm{H}^0(\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1}(2)) \simeq \mathrm{H}^0(\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1}(2))^{\vee} $$ |
Show that there exist unbounded sequences, $x_n\neq y_n$, such that $x_n-y_n\rightarrow 0$ as $n\rightarrow \infty$ | Work backwards. Choose a sequence $z_n$ that tends to 0 and any unbounded sequence $y_n$. Define $x_n = y_n + z_n$. Then by construction $x_n - y_n = (y_n + z_n) - (y_n) = z_n \to 0$. |
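For a concrete instance (my own illustration, not part of the original answer): take $y_n=n$ and $z_n=1/n$, so $x_n=n+1/n$; then $x_n\ne y_n$ for every $n$, both sequences are unbounded, and the difference tends to $0$:

```python
def y(n):
    return n            # unbounded

def z(n):
    return 1 / n        # tends to 0

def x(n):
    return y(n) + z(n)  # unbounded, and x_n != y_n for every n

diffs = [abs(x(n) - y(n)) for n in (10, 100, 1000, 10 ** 6)]
```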
Strange differential equation (update) | Let $g(x) = P(x,f(x))$. Then you have
$f'(x) = f(x) g(x)$. Look at the power series around $x=0$. Suppose
$f(x) = \sum_{n=0}^\infty f_n x^n$ and similarly for $g$.
You have $(n+1)\,f_{n+1} = \sum_{k=0}^n f_k g_{n-k}$.
It follows by induction that if $f_0=0$, then $f_n=0$ for all $n$. |
Show that ||f|| attains its maximum on a compact subset | Prove that the norm $\|\cdot\| : \mathbb R^d\to [0,\infty)$ is continuous. Then everything follows from 1D analysis since $\|f\| = \|\cdot\|\circ f$. |
Eigenvalues for T if and only if it is also eigenvalue of T inverse | $$Tv = \lambda v \iff v = T^{-1}\lambda v = \lambda T^{-1} v \iff T^{-1}v = \lambda^{-1}v$$ |
Exercise on pointwise convergence | This is immediate from 'Cantor's diagonal procedure'. There is a subsequence $(n^{(1)}_k)$ such that $f_{n^{(1)}_k}(x_1)$ converges. [ Because any bounded sequence in $\mathbb R$ has a convergent subsequence]. Now look at $f_{n^{(1)}_k}(x_2)$. There is a subsequence $(n^{(2)}_k)$ of $(n^{(1)}_k)$ such that $f_{n^{(2)}_k}(x_2)$ converges. Note that $f_{n^{(2)}_k}(x_1)$ also converges. Repeat this process. You get subsequences $(n^{(j)}_k), j=1,2,...$. Define $n_j$ to be $n^{(j)}_j$. This subsequence has the required property.
How to prove a Minimal Surface minimizes Surface Tension | It follows from the laws of physics, and in that sense no separate proof is needed. The quantity to examine is the mean curvature: whether it is zero or constant.
Surface tension is a physical property, an invariant constant that cannot be changed; it is the area that is minimized.
In the language of the calculus of variations, the conditions 1) minimal (or maximal) and 2) stationary amount to the same thing: locally or globally, the first variation must vanish.
If $N$ denotes force per unit length (aka surface tension) and $p$ the outside pressure, then with $1/R_1= \kappa_1,\ 1/R_2=\kappa_2$, summing the forces per unit length in two perpendicular normal planes against the pressure normal to the surface gives equilibrium:
$$ \dfrac{N_1}{R_1} + \dfrac{N_2}{R_2} =p $$
For an isotropic film we have
$$N_1= N_2 = N ,\qquad \dfrac{ \kappa_1 + \kappa_2}{2}= H ,$$
hence
$$ H= \dfrac{p}{2N} \, . $$
When the pressure differential $p=0$, we get $H=0$: minimal surfaces, the free soap films. When $p= \text{const}$ we get constant mean curvature $(CMC)$ soap films, under the action of uniform pressure on one of the two sides. The surfaces of revolution in the latter situation $(H= \text{const})$ are the Delaunay unduloids.
As a side note, the unduloid profile is also the locus traced by the focus of an ellipse rolling on a line, and such a surface encloses the maximum volume for a given surface area.
If you read German: Wilhelm Blaschke, Differentialgeometrie, 1921, first edition.
looking for reference or nice proof of trig lemma | No one has answered in a long time, so I'll answer myself. I'll start as I indicated in my question. It actually is not that bad. Using standard trig identities,
$${\vec{QP}}\cdot{\vec{QV}} = (\cos(\frac{1}{4}\theta^2)-\cos(\frac{1}{2}\theta))^2-
(\sin(\frac{1}{2}\theta)-\sin(\frac{1}{4}\theta^2))\sin(\frac{1}{4}\theta^2)=\ldots=
\frac{3}{2}+\frac{1}{2}\cos(\theta)-\frac{1}{2}\cos(\frac{\theta}{2}+\frac{\theta^2}{4})-\frac{3}{2}\cos(\frac{\theta}{2}-\frac{\theta^2}{4}). $$
Using the Maclaurin series of $\cos$ and properties of alternating series,
$1-\frac{x^2}{2}<\cos x < 1-\frac{x^2}{2}+\frac{x^4}{24}$ for all $x \in (0,1)$. Both
$\frac{\theta}{2}+\frac{\theta^2}{4}$ and $\frac{\theta}{2}-\frac{\theta^2}{4}$ are between $0$ and $1$. Therefore
$${\vec{QP}}\cdot{\vec{QV}} <
\frac{3}{2}+\frac{1}{2}\left(1-\frac{\theta^2}{2}+\frac{\theta^4}{24}\right) -
\frac{1}{2}\left(1-\frac{1}{2}\left(\frac{\theta}{2}+\frac{\theta^2}{4}\right)^2\right)-
\frac{3}{2}\left(1-\frac{1}{2}\left(\frac{\theta}{2}-\frac{\theta^2}{4}\right)^2\right) =
-\frac{1}{8}\theta^3+\frac{1}{12}\theta^4<0,$$
where the last inequality holds because $\theta<\frac{3}{2}$ (indeed $\frac{\theta}{2}+\frac{\theta^2}{4}<1$ forces $\theta<\sqrt5-1$).
Finding expectation of negative powers of the standard normal distribution | Your calculation is not mathematically correct, but it becomes correct if you start with $E[(Z^{-1})^{+}]$. [As usual $x^{+}=\max \{x,0\}$.] A similar argument shows that $E[(Z^{-1})^{-}]=\infty$ as well, and we can conclude that $EZ^{-1}$ does not exist.
$EZ^{-c}$ does exist if $0<c<1$. In particular it exists for $c=\frac 1 3$. This is because $\int \frac 1 {|x|^{c}} e^{-x^{2}/2}dx$ exists in this case. Integrability near $0$ follows from integrability of $\frac 1 {|x|^{c}}$, and integrability near $\pm \infty$ is clear. The value of $EZ^{-1/3}$ is $0$ because the distribution of $Z^{-1/3}$ is symmetric.
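To see the integrability claim numerically (my own sketch, not part of the original answer): substituting $x=u^3$ in $\int_0^\infty x^{-1/3}e^{-x^2/2}\,dx$ turns the integrand into the smooth $3u\,e^{-u^6/2}$, and the integral equals $2^{-2/3}\Gamma(1/3)$, a finite value:

```python
import math

def g(u):
    # integrand of the half-line integral of x^{-1/3} e^{-x^2/2} after x = u^3
    return 3 * u * math.exp(-u ** 6 / 2)

# composite Simpson's rule on [0, L]; the tail beyond L = 6 is negligible
n, L = 4000, 6.0
h = L / n
s = g(0) + g(L) + sum((4 if k % 2 else 2) * g(k * h) for k in range(1, n))
numeric = s * h / 3

closed = 2 ** (-2 / 3) * math.gamma(1 / 3)  # exact value of the integral
```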
Can there be a formula $\psi$ such that $(\psi \rightarrow (¬\psi))$ is a theorem of L? | Yes, it can be theorem, let $\psi=(p\land (\neg p))$. The truth table for this will verify that $(\psi \rightarrow (¬\psi))$ is true and completeness will ensure it is a theorem.
The problem with your argument is that you want to attribute truth values to the formula $\psi$, but you should be attributing them to the propositional letters that occur in $(\psi \rightarrow (¬\psi))$. This is how valuations are defined. |
Different ways of calculating the conditional probability in the continuous case | Your intuition is on the right track. For continuous random variables $X$ and $Y$ with joint pdf $f_{X,Y}$, let's require the event $B$ to not be a null set (i.e. $P(B)\ne 0$). Then, by definition, we have for events $A$ and $B$,
$$
P\left(\,\left(X,Y\right)\in A\,\Big\vert\,\left(X,Y\right)\in B\right){}:={}\dfrac{P(A\,\cap\,B)}{P(B)}{}={}\dfrac{\displaystyle\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}{\bf 1}_{A\cap B}\,f_{X,Y}\,\mathrm dx\mathrm dy}{\displaystyle\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}{\bf 1}_{B}\,f_{X,Y}\,\mathrm dx\mathrm dy}{}={}\displaystyle\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}{\bf 1}_{A}\,\left(\frac{{\bf 1}_{B}\,f_{X,Y}}{\displaystyle\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}{\bf 1}_{B}\,f_{X,Y}\,\mathrm dx\mathrm dy}\right)\mathrm dx\mathrm dy,
$$
where you have (or want to?) defined
$$
f_{X,Y}(x,y\,\vert B){}:={}\dfrac{{\bf 1}_{B}\,f_{X,Y}}{\displaystyle\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}{\bf 1}_{B}\,f_{X,Y}\,\mathrm dx\mathrm dy}\,.
$$
By inheriting all of its properties from $f_{X,Y}$, one can check that this function is a bona-fide Radon-Nikodym derivative; it is a pdf (with respect to the Lebesgue measure) for the probability measure $P(\,\cdot\,\vert\,\left(X,Y\right)\in B)$. For instance, replacing the event $A$, above, with $\mathbb{R}^2$ (the entire sample space) shows that this function integrates to $1$ over $\mathbb{R}^2$ and, therefore, over $B$.
In passing, I also note that as a Radon-Nikodym derivative, this function is uniquely defined up to events of Lebesgue measure zero, so changing the value of this function over such events does not change the probabilities evaluated using this function. |
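As a concrete, made-up illustration of this conditional pdf: let $(X,Y)$ be uniform on $[0,1]^2$ and $B=\{x+y<1\}$, so $P(B)=\frac12$ and $f_{X,Y}(x,y\,\vert B)=2\cdot{\bf 1}_B$. For $A=\{x<\frac12\}$ the formula gives $P(A\mid B)=\frac{3/8}{1/2}=\frac34$, which a quick Monte Carlo check reproduces:

```python
import random

random.seed(0)
N = 200_000
samples = [(random.random(), random.random()) for _ in range(N)]

# condition on B = {x + y < 1}; the conditional pdf is 1_B * f_{X,Y} / P(B) = 2 on B
in_B = [(x, y) for x, y in samples if x + y < 1]

# P(A | B) for A = {x < 1/2}: area(A ∩ B) / area(B) = (3/8) / (1/2) = 3/4
p_est = sum(1 for x, y in in_B if x < 0.5) / len(in_B)
```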
Proof that $\{ e \ | \ \forall p$ prime$: \varphi_e (p) \downarrow \}$ is not $\Delta_2$ | Here is a reduction showing that the set $D$ from the question is $\Pi^0_2$ hard. Let $(\forall n)(\exists w)\phi(m,n,w)$ be an arbitrary $\Pi^0_2$ formula with free variable $m$, in which $\phi(m,n,w)$ is $\Sigma^0_0$.
Given a number $m$, we can make a program $e_m$ which, on each input $p$, immediately halts if $p$ is not prime, and otherwise finds the $n$ for which $p$ is the $(n+1)$st prime. Then $e_m$ searches for a $w$ such that $\phi(m,n,w)$ holds, and halts if and only if such a $w$ is found. (At this point, you may want to stop and complete the argument yourself, if you're studying for an exam.)
Now we have that, for all $m$, $e_m \in D$ if and only if $(\forall n)(\exists w)\phi(m,n,w)$. (We also have that $e_m \in \text{Tot}$ if and only if $(\forall n)(\exists w)\phi(m,n,w)$, by the way.) This completes the reduction.
This shows that $D$ cannot be $\Delta^0_2$, by Post's theorem.
A separate calculation shows that $D$ is itself $\Pi^0_2$, and thus with the previous reduction we have that $D$ is a $\Pi^0_2$ complete set.
The same thing could be obtained by showing that $D$ is $1$-equivalent to the complement of $\emptyset''$, but reducing that specific set to $D$ is somehow more difficult than reducing an arbitrary $\Pi^0_2$ set to $D$.
There is another, heuristic way to see that the set $D$ in the question should not be $\Delta^0_2$. If it was, by Post's theorem it would be computable from $\emptyset'$. But to decide whether a number is in $D$ seems to require asking infinitely many questions to $\emptyset'$. This immediately suggests the reduction method above.
Can you transform a continuous probability space into an equiprobable probability space? | No, the sum of uncountably many positive reals (and therefore also uncountably many positive rationals) is infinite. Therefore, unless your $P$ maps all but countably many $\{\omega\}$ to $0$ (not what you intended), $P$ cannot sum to the finite value of 1 as required to be a probability space. |
Definition of Conditional expectation of Y given X. | One can also define $\mathbb{E}(Y|X=x)$ through the factorization lemma: Since $Z = \mathbb{E}(Y|X)$ is $\sigma(X)$-measurable, there is some measurable $g:\mathbb{R} \to \mathbb{R}$ that is unique on $X(\Omega)$ such that $Z = g\circ X$. Now we can define $\mathbb{E}(Y|X=x) = g(x)$. Note that this depends on the version $Z$ of $\mathbb{E}(Y|X)$ that one takes. |
Understanding the definitions of Embedded Surface and Locally Parametrised Embedded Surface | Before answering the question, I want to point out that these terms are not standard. The standard terms for "regular parametrization," "embedded surface," and "locally parametrized embedded surface" are, respectively: "injective immersion," "embedding," and "embedded submanifold."
As to your question: Technically speaking, according to the definitions you've given: An ES refers to a special kind of function. That is, an ES is a regular parametrization that satisfies a nice property. By contrast, an LPES is a special kind of subset of $\mathbb{R}^n$.
An LPES is a very concrete, geometric object. Here's an example:
Example: The unit sphere $\mathbb{S} = \{(x,y,z) \in \mathbb{R}^3 \colon x^2 + y^2 + z^2 = 1\}$ is an LPES. Indeed, for any point $(x,y,z) \in \mathbb{S}$ on the sphere, you can find an open set $U \subset \mathbb{R}^3$ containing $(x,y,z)$ for which $U \cap \mathbb{S} = f(D)$ for some domain $D \subset \mathbb{R}^2$ and some regular parametrization $f \colon D \to \mathbb{R}^3$.
So, let's say we take $(x,y,z) = (0,0,1)$. Then the upper half-space $U = \{(x,y,z) \colon z > 0\}$ is an open subset in $\mathbb{R}^3$ with $(x,y,z) \in U$, and the intersection $U \cap \mathbb{S}$ is the (open) upper hemisphere. You can then take $D = \{(u,v) \in \mathbb{R}^2 \colon u^2 + v^2 < 1\}$ to be the open unit disk in $\mathbb{R}^2$ and take $f \colon D \to \mathbb{R}^3$ to be $f(u,v) = (u,v, \sqrt{1 - u^2 - v^2})$.
One of the things to take away from this example is that the regular parametrization $f(u,v) = (u,v, \sqrt{1 - u^2 - v^2})$ only gives you points on the upper hemisphere. You will need a different regular parametrization to get points on the lower hemisphere. Even then you haven't covered the entire sphere: the points on the equator will require (at least) one additional regular parametrization. |
Is the union of an increasing family of balls a ball? | To answer your first question, it's not necessarily true in a complete metric space that the union of a chain of open balls is a ball. Here is a counterexample.
Let $M=\{a_i:i\in\mathbb N\}\cup\{b_i:i\in\mathbb N\}\cup\{c\}$ with the following metric:
$d(a_i,a_j)=1$ if $i\ne j$;
$d(b_i,b_j)=2$ if $i\ne j$;
$d(a_i,b_j)=1$ if $j\le i$;
$d(a_i,b_j)=2$ if $j\gt i$;
$d(a_i,c)=d(b_i,c)=2$.
The triangle inequality holds, since all nonzero distances are $1$ or $2$.
The metric is complete, since every Cauchy sequence is eventually constant.
Let $\mathscr B=\{B_n:n\in\mathbb N\}$ where
$B_n=\{x\in M:d(a_n,x)\le1\}=\{x\in M:d(a_n,x)\lt2\}=\{a_i:i\in\mathbb N\}\cup\{b_i:i\le n\}$.
So $\mathscr B$ is a chain of open balls and a chain of closed balls. The union $\bigcup\mathscr B=M\setminus\{c\}$ is not a ball because, for each point $x\ne c$, there is a point $y\ne c$ such that $d(x,y)=d(x,c)=2$.
Regarding your other questions. I'm going to guess that it's true for Banach spaces, false for incomplete normed spaces. |
What is the probability of getting at-least one even digit for 5 trials. | Getting exactly four odds is only one of several ways to get at least one even digit.
What you should be considering is how probable it is to get no even digits, which is the same as getting exactly five odds.
So your answer should be $1-(\frac12)^5$. |
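A brute-force enumeration over all $10^5$ outcomes confirms the complement count (my own check, assuming digits $0$ through $9$, five of which are even):

```python
from itertools import product

# all 10^5 five-digit outcomes
outcomes = list(product(range(10), repeat=5))
with_even = sum(1 for combo in outcomes if any(d % 2 == 0 for d in combo))
prob = with_even / len(outcomes)
```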
Local max and local min | There are examples where $x=0$ can still be a local minimum despite your given conditions. Recall:
If $f(x)$ is a function and $a$ an element of its domain, we say that $a$ is a
local minimum of $f(x)$ if there exists a neighbourhood $U \ni a$ such that $f(a) \leq f(x)$ for all $x \in U$.
We will need to modify your function somewhat to see that this is the case. Consider
$$ f(x) = \begin{cases} 0 & x = 0 \\ \max\{x^4\sin(1/x),0\} & x \neq 0 \end{cases}.$$
By Squeeze Theorem, everything you want is true: $f(x)$ is continuous and differentiable at $0$ and $f(0) = f'(0) = 0$. As $f(x)$ is always non-negative, $x=0$ is a global (and hence local) minimum.
Edit: For a function with continuous derivative, take a look at
$$ f(x) = \begin{cases} x^4\left( \sin\left(\frac1x\right) + e^x\right) & x \neq 0 \\ 0 & x =0 \end{cases}.$$
For $x \geq 0$ this function is always non-negative, and again by Squeeze Theorem, everything you want is true. |
Are these two functions describing spring motion in the same way? | Let’s transform one into the other. Denote $\omega=\sqrt{k\over m}$ and start with
$$x(t)=A\cos(\omega t+\phi)=A\cos\phi\cos\omega t-A\sin\phi\sin\omega t$$
Which is in the first form with $C_1=A\cos\phi$ and $C_2=-A\sin\phi$ |
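A quick numeric check of the identity (my own addition, with arbitrary sample values): with $C_1=A\cos\phi$ and $C_2=-A\sin\phi$, the two forms agree at every $t$:

```python
import math

A, phi, omega = 1.7, 0.6, 2.3  # arbitrary sample values
C1, C2 = A * math.cos(phi), -A * math.sin(phi)

def form1(t):
    return C1 * math.cos(omega * t) + C2 * math.sin(omega * t)

def form2(t):
    return A * math.cos(omega * t + phi)

max_gap = max(abs(form1(t / 10) - form2(t / 10)) for t in range(100))
```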
How to prove that this stochastic process converges in mean square as $t \to \infty$ | Since $\mathbb{E}[X_t^2] \rightarrow C < \infty$, we have that $\sup_{t} \mathbb{E}[X_t^2] < \infty$ so $(X_t)$ is uniformly integrable (this is discrete time, but I'm not certain that this exact argument goes through in continuous time without some additional assumptions). Doob's martingale convergence theorem then guarantees that there exists $X_\infty$ such that $(X_t) \rightarrow X_\infty$ almost surely, and since $(X_t)$ is uniformly integrable this implies $(X_t) \rightarrow X_\infty$ in $L^2$. I'll also prove that for completeness: Fix $\varepsilon > 0$. Then
\begin{align*}
\mathbb{E}[|X_t - X_\infty|^2] &= \mathbb{E}[|X_t - X_\infty|^21_{|X_t - X_\infty| > \varepsilon}] + \mathbb{E}[|X_t - X_\infty|^21_{|X_t - X_\infty| \le \varepsilon}] \\
&\le 2(\mathbb{E}[|X_t|^21_{|X_t - X_\infty| > \varepsilon}] + \mathbb{E}[ |X_\infty|^21_{|X_t - X_\infty| > \varepsilon}]) + \varepsilon^2.
\end{align*}
Since $(X_t) \rightarrow X_\infty$ almost surely, we can choose $T > 0$ to make $\sup_{t \ge T} \mathbb{P}(|X_t-X_\infty| > \varepsilon)$ arbitrarily small, so by uniform integrability we can choose $T> 0$ such that $\mathbb{E}[|X_t|^21_{|X_t - X_\infty| > \varepsilon}] + \mathbb{E}[ |X_\infty|^21_{|X_t - X_\infty| > \varepsilon}] \le \frac{\varepsilon}{2}$ for all $t \ge T$. Thus we conclude
\begin{align*}
\lim_{t \rightarrow \infty}\mathbb{E}[|X_t - X_\infty|^2] &\le \lim_{t \rightarrow \infty}2(\mathbb{E}[|X_t|^21_{|X_t - X_\infty| > \varepsilon}] + \mathbb{E}[ |X_\infty|^21_{|X_t - X_\infty| > \varepsilon}]) + \varepsilon^2 \\
&\le \varepsilon + \varepsilon^2
\end{align*}
and since $\varepsilon$ was arbitrary, we conclude $\lim_{t \rightarrow \infty}\mathbb{E}[|X_t - X_\infty|^2] = 0.$ |
Find $2\times2$ matrices such that $CD=-DC$, with CD different from $0$ | The best way is to use the so called elementary matrices. Let's say you have a matrix $A$, to make it easier let's assume it is really of size 2x2 like you need. It is a known fact that multiplying its first row by a scalar $\lambda$ is equivalent to multiplying $A$ by the matrix $P=\begin{pmatrix}\lambda&0\\0&1\end{pmatrix}$ on the left side. (which means $PA$ is the product that you get). In the same way multiplying the first column of $A$ by $\lambda$ is equivalent to multiplying $A$ by the same matrix $P$ on the right side. (you get the matrix $AP$).
So if you know that then it's easy to see that in your exercise you should take $\lambda=-1$, the matrix $D=\begin{pmatrix}-1&0\\0&1\end{pmatrix}$ and then you only have to find the matrix $C$. Take $C=\begin{pmatrix}0&1\\1&0\end{pmatrix}$. And we really get:
$DC=\begin{pmatrix}-1&0\\0&1\end{pmatrix}\begin{pmatrix}0&1\\1&0\end{pmatrix}=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$
$CD=\begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}-1&0\\0&1\end{pmatrix}=\begin{pmatrix}0&1\\-1&0\end{pmatrix}=-DC$
So as you can see the theory works. $DC$ is $C$ with the first row multiplied by $-1$, $CD$ is $C$ with the first column multiplied by $-1$. That's why it is very useful to know the properties of elementary matrices. Of course I still had to guess the matrix $C$ but it was much easier than if I had to guess both matrices from the beginning. |
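The whole check can also be run in a few lines of Python (my own addition):

```python
def matmul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

D = [[-1, 0], [0, 1]]
C = [[0, 1], [1, 0]]
CD = matmul(C, D)
DC = matmul(D, C)
```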
Finding an integral formula for intertwined recursive sequences | For any $a,b>0$, let $HG(a,b)$ denote the common limit of the two intertwined sequences, so that $HG(a,b)=HG\left(\frac{2ab}{a+b},\sqrt{ab}\right)$, and for any $x>1$ let $f(x)=HG(1,x)$.
We have:
$$f(x) = HG(1,x) = HG\left(\frac{2x}{1+x},\sqrt{x}\right) = \frac{2x}{1+x}\,HG\left(1,\frac{1+x}{2\sqrt{x}}\right)\\= \frac{2x}{1+x}\,f\left(\frac{1+x}{2\sqrt{x}}\right)\tag{A} $$
and we may notice that the map $g:x\mapsto\frac{1+x}{2\sqrt{x}}$ sends the interval $(1,+\infty)$ into itself.
In particular the given problem is equivalent to finding an invariant measure $\mu$ such that
$$\forall x>1,\qquad \mu((1,x)) = \frac{2x}{1+x}\,\mu\left(\left(1,\frac{1+x}{2\sqrt{x}}\right)\right)$$
Additionally, the sequence $x,g(x),g(g(x)),g(g(g(x))),\ldots$ converges really fast to $1$ for any $x>1$.
In particular
$$ f(x) = \frac{4x}{(1+\sqrt{x})^2}\;f(g(g(x))) \tag{B}$$
leads to $f(x)\approx \frac{4x}{(1+\sqrt{x})^2}$ and
$$ HG(a,b) = a\cdot f\left(\tfrac{b}{a}\right) \approx \frac{4ab}{\left(\sqrt{a}+\sqrt{b}\right)^2}=H\left(\sqrt{a},\sqrt{b}\right)^2\tag{C}$$
where $H$ stands for the usual harmonic mean. We may also notice that
$$ h(x)=\text{AGM}(1,x)=\text{AGM}\left(\sqrt{x},\frac{1+x}{2}\right) = \sqrt{x}\,h\left(\frac{1+x}{2\sqrt{x}}\right) $$
hence
$$ \frac{f}{h}(x) = \frac{1}{g(x)}\cdot \frac{f}{h}(g(x))\tag{D} $$
and the problem of finding a closed form for $\text{HG}$ boils down to the problem of finding a closed form for the infinite product
$$ g(x)\cdot g(g(x))\cdot g(g(g(x)))\cdot\ldots $$
On the other hand:
$$ h(x) = \sqrt{x} h(g(x)) = \sqrt{x}\sqrt{g(x)} h(g(g(x))) = \ldots = \sqrt{x}\sqrt{g(x)g(g(x))\cdot\ldots} $$
hence $f(x)=\frac{x}{h(x)}$ and
$$\boxed{ HG(a,b) = \frac{ab}{\text{AGM}(a,b)} = \color{red}{\frac{2}{\pi}\int_{0}^{+\infty}\frac{dx}{\sqrt{\left(1+\frac{x^2}{a^2}\right)\left(1+\frac{x^2}{b^2}\right)}}}.}\tag{E}$$
Further proof of $(E)$: it is enough to check that $\frac{ab}{\text{AGM}(a,b)}$ is invariant with respect to the replacements $b\to\sqrt{ab}, a\to\frac{2ab}{a+b}$.
$$\begin{eqnarray*}\frac{\sqrt{ab}\frac{2ab}{a+b}}{\text{AGM}\left(\sqrt{ab},\frac{2ab}{a+b}\right)}&=&\frac{2ab\sqrt{ab}}{\text{AGM}\left((a+b)\sqrt{ab},2ab\right)}\\&=&\frac{2ab}{\text{AGM}\left(a+b,2\sqrt{ab}\right)}\\&=&\frac{ab}{\text{AGM}\left(\frac{a+b}{2},\sqrt{ab}\right)}=\frac{ab}{\text{AGM}(a,b)}\;\large\checkmark\end{eqnarray*} $$ |
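The boxed identity $(E)$ is easy to test numerically (my own check, not part of the derivation): iterate the intertwined harmonic/geometric means and compare with $ab/\operatorname{AGM}(a,b)$:

```python
import math

def hg(a, b, iters=60):
    # intertwined harmonic-geometric mean iteration
    for _ in range(iters):
        a, b = 2 * a * b / (a + b), math.sqrt(a * b)
    return a

def agm(a, b, iters=60):
    # classical arithmetic-geometric mean iteration
    for _ in range(iters):
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

a, b = 1.0, 7.0
gap = abs(hg(a, b) - a * b / agm(a, b))
```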
Growth condition for Ito diffusions | Let's consider the simplest case: let $f : \mathbb{R} \to \mathbb{R}$ satisfy the global Lipschitz condition
$$ |f(x) - f(y)| \leq L |x - y|, \quad x, y \in \mathbb{R}$$
with Lipschitz constant $L$. Then by triangle inequality, we have
$$ |f(x)| \leq |f(x) - f(0)| + |f(0)| \leq L|x| + |f(0)| \leq C(|x| + 1)$$
for any $C \geq \max(L, |f(0)|)$. The same argument applies to this case.
why does this simple function converge to f(x) pointwise | Consider a nonnegative function $f$ with the given approximating simple functions $f_n$. If $f(x) \leq n$, then $|f_n(x)-f(x)| \leq 2^{-n}$; if this were not true, then $f_n(x)$ would be bigger or smaller, as follows from its definition. If $f$ is a real valued function (i.e. it never takes the value $+\infty$) then for all sufficiently large $n$ you have $f(x) \leq n$. So $|f_n(x)-f(x)| \leq 2^{-n}$ for all sufficiently large $n$, which does the job.
Visually, you are drawing horizontal lines $y=0,y=2^{-n},y=2\cdot 2^{-n},y=3 \cdot 2^{-n},\dots,y=n,y=+\infty$, then you are grouping all the points where the graph of $f$ is between two adjacent horizontal lines. You then take a simple function which takes on the lower value in each of these subsets. By contrast Riemann integration effectively draws vertical lines, but is otherwise analogous. |
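Here is the construction in code (my own sketch), with the standard dyadic levels $k\,2^{-n}$ capped at $n$:

```python
import math

def simple_approx(f, n):
    # round f down to the grid {0, 2^-n, 2*2^-n, ...}, capped at n
    def fn(x):
        v = f(x)
        return n if v >= n else math.floor(v * 2 ** n) / 2 ** n
    return fn

def f(x):
    return x * x

f3 = simple_approx(f, 3)
gap = f(1.3) - f3(1.3)  # f(1.3) = 1.69 <= 3, so the gap is at most 2^-3
```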
What's the probability that a sum of dice is prime? | For (1.), Rosser and Schoenfeld's non-asymptotic estimates bound $\pi(x)$ between functions of the form $x/(\log x + C)$, and this should be enough.
For (2), your integrals are rapidly decaying and incredibly close to the integrals on the whole real line. $O(n)$ standard deviations from the average is quite an unlikely event.
Is it possible to compute homology groups of a space given the Pontryagin ring? | If you know the Pontryagin ring $H_*(X ; \mathbb{Z})$ of the $H$-space $X$, then you automatically know $H_n(X; \mathbb{Z})$ for all $n$ as $H_n(X; \mathbb{Z})$ is merely the $n$th graded component of $H_*(X;\mathbb{Z})$. Then you can use Universal Coefficients to determine $H_n(X; A)$ for any abelian group $A$. |
Show that $\frac {1}{\sqrt{5}}[(\frac {1}{x+r_+}) - (\frac {1}{x+r_-})] = \frac {1}{\sqrt{5}x}[(\frac {1}{1-r_{+}x}) - (\frac {1}{1-r_{-}x})] $ | From your last step:
$$\frac{1}{\sqrt{5}}\left(\frac{r_{+}}{1 - r_{+}x} - \frac{r_{-}}{1 - r_{-}x}\right)$$
$$= \frac{1}{\sqrt{5}x}\left(\frac{r_{+}x}{1 - r_{+}x} - \frac{r_{-}x}{1 - r_{-}x}\right)$$
$$= \frac{1}{\sqrt{5}x}\left(\left[\frac{1}{1 - r_{+}x} - 1\right] - \left[\frac{1}{1 - r_{-}x} - 1\right]\right)$$
$$= \frac{1}{\sqrt{5}x}\left(\frac{1}{1 - r_{+}x} - \frac{1}{1 - r_{-}x}\right)$$ |
Prove set equality using truth tables | You know that:
$A=B$ if and only if $A \subseteq B$ and $B \subseteq A$.
However, as indicated in the comments, your tables don't show $A \subseteq B$ and $B \subseteq A$, so you can't really use the above.
Fortunately, the tables do show the truth-value of $x \in A \rightarrow x \in B$ and $x \in B \rightarrow x \in A$ for some arbitrary $x$.
So, let's rephrase set identity in terms of elements:
$A = B$ iff for all $x: x \in A \leftrightarrow x \in B$
And that is the same as:
$A = B$ iff for all $x: (x \in A \rightarrow x \in B) \land (x \in B \rightarrow x \in A)$
So ... do you now see what you need to do? |
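If it helps, the equivalence used here can be verified mechanically: for an arbitrary $x$, write $p$ for "$x\in A$" and $q$ for "$x\in B$" (this little script is my own sketch, not part of the exercise):

```python
implies = lambda p, q: (not p) or q

for p in (False, True):        # p stands for "x in A"
    for q in (False, True):    # q stands for "x in B"
        both_ways = implies(p, q) and implies(q, p)
        print(p, q, both_ways == (p == q))  # always True
```

So the conjunction of the two conditionals agrees with the biconditional on every row of the truth table.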
Drawing a graph that is flat, but then spikes | Sometimes such functions are called triangular or tent functions.
The most basic example centered at $0$ looks like:
$$
f(x) =
\begin{cases}
1- |x|, & \text{if }|x|<1\\
0, & \text{ else}
\end{cases}
$$ |
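As a small sketch in code (the name `tent` is mine), the piecewise definition translates directly:

```python
def tent(x: float) -> float:
    """The basic triangular ('tent') function centered at 0."""
    return 1 - abs(x) if abs(x) < 1 else 0.0

print(tent(0.0), tent(0.5), tent(-2.0))  # 1.0 0.5 0.0
```

A spike centered at $c$ with half-width $w$ is then `tent((x - c) / w)`, which stays flat at $0$ away from $c$.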
Does the $\lim_{z \to -1} \sqrt{|z|} \,e^{i\operatorname{Arg}(z)/2}=i$? | " I came across a Wolfram Alpha article which said that
$\arg(−1)=\pi$. I am assuming this holds true for the principal
argument as well. "
$\pi$ is the principal argument of $-1$, i.e., $\text{Arg}(-1)=\pi.$
But in general, $\arg(−1)=(2k+1)\pi,$ where $k\in\mathbb{Z}.$
" I have always thought about finding $\arg(z)$ by applying the
$\tan^{−1}\left(\frac{y}{x}\right).$ "
This formula doesn't generally work, since
$\tan^{-1}\left(\frac{y}{x}\right) \in
\left(-\frac{\pi}2,\frac{\pi}2\right),$ which doesn't even span the
principal range $\left(-\pi, \pi\right]$ of $\arg(z)$:
$$\displaystyle\text{Arg}(-1-i)=-\frac34\pi \\\neq \frac{\pi}4
=\tan^{−1}\left(\frac{-1}{-1}\right).$$
The given limit $$\lim_{z\to-1} \sqrt{|z|} \exp\left(i\frac{\text{Arg}(z)}2\right)$$ doesn't exist because $\displaystyle\lim_{z\to-1}\frac{\text{Arg}(z)}2$ (and in fact, $\displaystyle\lim_{z\to-1}\frac{\arg(z)}2$) has two non-overlapping representations $\displaystyle\pm\frac{\pi}2$ on the Argand diagram, as hinted by F. Tomas in the comments. |
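The two-sided behaviour is easy to see numerically: approaching $-1$ from the upper half-plane drives the expression toward $i$, and from the lower half-plane toward $-i$, so no limit exists. `cmath.phase` returns exactly the principal argument in $(-\pi,\pi]$:

```python
import cmath

def f(z):
    # sqrt(|z|) * exp(i * Arg(z) / 2), with Arg the principal argument
    return cmath.sqrt(abs(z)) * cmath.exp(1j * cmath.phase(z) / 2)

print(f(complex(-1, 1e-9)))   # approximately  i
print(f(complex(-1, -1e-9)))  # approximately -i
```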
upper bound of the product of a matrix norm and its inverse norm | Yes: it's $1$ for $1\times1$ matrices and $+\infty$ for $m\times m$ matrices with $m\ge2$: in fact, consider the sequence $$A_n=\begin{pmatrix}n&0&0\\ 0&\frac1n&0\\ 0&0&I_{m-2}\end{pmatrix}$$ |
Is there Any Homomorphism Between Vector Spaces that is not Linear? | Consider $\mathbb C$ as a complex vector space in the usual sense. Then the conjugation is a group homomorphism from $(\mathbb{C},+)$ into itself which is not linear: $\overline{i.1}\neq i.\overline1$. |
If a sequence is bounded, are its subsequences bounded as well? | Let $S = \{ A_n\}_{n=1}^{\infty}$ be a bounded sequence, i.e. $\exists k,m \in \mathbb{R}$ such that $\forall n \in \mathbb{N},\ m < A_n < k$. Consider the subsequence $T = \{ A_{2n}\}_{n=1}^{\infty}$. Since $T \subseteq S$, every element of $T$ is also in $S$. Therefore, we can pick $k$ and $m$ to be $\dots$
$m$-elements subsets from a $n$-elements set, definition of $\binom nm$, committee problem and binomial theorem | After further reading I feel comfortable answering my own question. As pointed out in the comments, this is exactly the definition of combinations.
So, first of all, an $r$-permutation of a set $S$ containing $n$ elements is an ordered selection from $S$ containing $r$ elements. We have $n$ possibilities for our first selection, then $(n-1)$ for the next selection, and so on.
Thus it can be shown, using the principle of mathematical induction, that the $i^{th}$ selection can be made in exactly $n-(i-1)$ ways, so that, by the general combinatorial principle, the total number $_nP_r$ of possible $r$-element ordered selections that can be made from a set $S$ containing $n$ elements is
$$_nP_r\,=\,n(n-1)(n-2)\ldots(n-r+1)\,=\, \frac{n!}{(n-r)!}$$
An $r$-combination from a set $S$ containing $n$ elements is a subset of $S$ containing $r$ elements. The number $_nC_r$ of possible $r$-element subsets of a set $S$ containing $n$ elements is
$$_nC_r\,=\,\binom nr \,=\, \frac {n(n-1)(n-2) \ldots (n-r+1)}{r!} = \frac {_nP_r}{r!}$$
We can make $_rP_r = r!$ different ordered selections of $r$ elements from a set containing $r$ elements. In a combination, the order of the elements does not matter. Hence we get the number of $r$-element subsets of the set $S$ containing $n$ elements by dividing the number of possible $r$-element ordered selections from $S$ by the number $_rP_r$ of different ways we can order such a selection.
Therefore,
$$_nC_r\,=\,\binom nr \,=\, \frac{_nP_r}{_rP_r}\,=\, \frac{n!}{(n-r)!\,r!}$$
Finally, we see that since $_nP_r$ counts all possible orderings of $r$-element selections from the set $S$ containing $n$ elements, this number is greater than the number of distinct $r$-element subsets of $S$ whenever $r \ge 2$, so that $_nP_r \,\gt\, _nC_r$.
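These identities can be checked directly with Python's standard library (`math.perm` and `math.comb`, available since Python 3.8); the values $n=10$, $r=4$ are my own example:

```python
from math import comb, factorial, perm

n, r = 10, 4
assert perm(n, r) == factorial(n) // factorial(n - r)   # nPr = n! / (n-r)!
assert comb(n, r) == perm(n, r) // perm(r, r)           # nCr = nPr / rPr
assert perm(n, r) > comb(n, r)                          # ordered > unordered for r >= 2
print(perm(n, r), comb(n, r))  # 5040 210
```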
Epimorphism affect on Ideals | Consider $\Bbb Z\times \Bbb Z$ and the homomorphism $f:\Bbb Z\times \Bbb Z\to \Bbb Z$ given by $f(a,b)=a$, and the ideal $\Bbb Z\times 2\Bbb Z$. |
Finding correct variation for $\rho$ in spherical coordinate integration | In spherical coordinates:
$$
D=\{(\rho,\theta,\phi)\;|\;0\le \theta \le 2\pi, 0\le \rho \le a, 0\le \phi \le \pi/6\}
$$
Careful, $\rho$ is always positive, and $a$ is the radius of the sphere. $\tan \phi = 1/\sqrt{3}$ will give you the right angle, as you suggested.
In cylindrical coordinates:
$$
D=\{(r,\theta,z) \;|\;0\le \theta \le 2\pi, 0\le r \le a/2, \sqrt{3}r\le z \le \sqrt{a^2-r^2}\}
$$ |
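A numeric cross-check (with the illustrative value $a=1$, chosen by me): integrating the cylindrical description as $V=\int_0^{a/2} 2\pi r\,\bigl(\sqrt{a^2-r^2}-\sqrt3\,r\bigr)\,dr$ by the midpoint rule reproduces the closed form $\frac{2\pi a^3}{3}\bigl(1-\cos\frac{\pi}{6}\bigr)$ that the spherical description yields immediately:

```python
from math import cos, pi, sqrt

a = 1.0
N = 100_000
h = (a / 2) / N

# Midpoint rule on the cylindrical-coordinates volume integral.
V_cyl = sum(2 * pi * r * (sqrt(a**2 - r**2) - sqrt(3) * r) * h
            for r in (h * (k + 0.5) for k in range(N)))

# Closed form from spherical coordinates: (a^3/3) * 2*pi * (1 - cos(pi/6)).
V_sph = 2 * pi * a**3 / 3 * (1 - cos(pi / 6))
print(abs(V_cyl - V_sph) < 1e-6)  # True
```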
finding the limit superior and inferior of the sequence $\frac 1 n \cos{\frac{n\pi}{2}}$ | Use the squeeze theorem. Since $-1 \le \cos\frac{n\pi}{2} \le 1$ for every $n$, we have
$$-\frac 1 n \le \frac 1 n \cos\frac{n\pi}{2} \le \frac 1 n.$$
As $n\to\infty$, both bounds tend to $0$, so the sequence converges to $0$; hence the limit superior and the limit inferior both equal $0$.
How do I study these two sequences? | We have $b_n=\frac{2 a_{n+1}-a_n}{a_n}=2\frac{a_{n+1}}{a_n}-1$. Since $a_n\to 2/3\neq 0$, we have $\frac{a_{n+1}}{a_n}\to 1$, so $b_n\to 2\cdot 1-1=1$. Being convergent, $(b_n)$ is Cauchy as well.
Can we prove that $t$ must be a Mersenne prime? | If both
$$t^2-1=4p \tag{1}$$
$$s=4p+t+1 \tag{2}$$
hold, we can deduce:
Proposition 1. $t$ is odd.
Easy to see, since $(1)$ gives $t^2=4p+1$.
Proposition 2. If $t=2k+1$ then
$$k(k+1)=\varphi(2k+1)\varphi(k+1)=p$$
Since $\gcd(t,t+1)=1$ and $\varphi(n)$ is multiplicative, we have
$$p=\varphi\left(\frac{t(t+1)}{2}\right)=\varphi\left(t\right)\varphi\left(\frac{t+1}{2}\right)$$
then $(1)$ becomes
$$t^2=4\varphi\left(t\right)\varphi\left(\frac{t+1}{2}\right)+1 \tag{3}$$
Substituting $t=2k+1$ then gives
$$k(k+1)=\varphi(2k+1)\varphi(k+1)$$
Proposition 3. $t$ is square-free.
If a prime $q$ satisfies $q^2 \mid t$, then $q \mid \varphi(t)$. But then $(3)$ gives $q \mid 1$, a contradiction.
Proposition 4. $\frac{t(t+1)}{2}$ is a perfect number.
From $(2)$, $t+1=s-4p$,
and
$$(t-1)(t+1)=4p \iff (t-1)(s-4p)=4p \iff t(s-4p)-s+4p=4p \iff \\
t=\frac{s}{s-4p}$$
But $t$ is odd (Propositions 1 and 2):
$$2k+1=\frac{s}{s-4k(k+1)} \iff
2k=\frac{4k(k+1)}{s-4k(k+1)} \iff \\
s-4k(k+1)=2(k+1) \iff s=4k(k+1)+2(k+1)=2(k+1)(2k+1)$$
or
$$s=\sigma\left(\frac{t(t+1)}{2}\right)=2\cdot\frac{t(t+1)}{2}$$
Proposition 5. $t$ is a Mersenne prime.
From Propositions 1 and 3, $t$ is odd and square-free, i.e. its prime factorisation is $t=q_1\cdot q_2\cdot ...\cdot q_r,\ q_i>2,\ q_i\ne q_j$ for $i\ne j$. $\sigma(n)$ is multiplicative and $\gcd(t,t+1)=1$, thus
$$\sigma\left(\frac{t(t+1)}{2}\right)=\sigma(t)\sigma\left(\frac{t+1}{2}\right)=
\sigma\left(\frac{t+1}{2}\right)\prod\limits_{i=1}^r(q_i+1)=2^r\cdot Q$$
If $r\geq 2$ then, from $(2)$, $4 \mid (t+1)$, and (from Proposition 4) $\frac{t(t+1)}{2}$ is an even perfect number; thus $t$ is a Mersenne prime.
If $r=1$, then $t$ is prime and $\varphi(t)=t-1=2k$, and from Proposition 2
$$k(k+1)=\varphi(2k+1)\varphi(k+1)=2k\varphi(k+1) \Rightarrow 2\mid k+1=\frac{t+1}{2}$$
and (from Proposition 4) $\frac{t(t+1)}{2}$ is an even perfect number; thus $t$ is a Mersenne prime.
Done. It might be more difficult (if possible at all) to deduce this from $(1)$ or $(2)$ individually.
Remark: This book, page 72, has a short proof of
Theorem 1.51. An even positive integer $n$ is perfect if and only if
$n = 2^{k−1}M_k$ for some positive integer $k$ for which $M_k$
(Mersenne number) is a prime. |
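A brute-force check of the conclusion, under the reading that $p=\varphi\!\left(\frac{t(t+1)}{2}\right)$ and $s=\sigma\!\left(\frac{t(t+1)}{2}\right)$ (the quantities used in Propositions 2 and 4): searching odd $t$ for which both $(1)$ and $(2)$ hold recovers exactly the Mersenne primes in range.

```python
def phi(n):
    # Euler's totient by trial-division factorization.
    result, d, m = n, 2, n
    while d * d <= m:
        if m % d == 0:
            while m % d == 0:
                m //= d
            result -= result // d
        d += 1
    if m > 1:
        result -= result // m
    return result

def sigma(n):
    # Sum of divisors, naively.
    return sum(d for d in range(1, n + 1) if n % d == 0)

hits = []
for t in range(3, 200, 2):
    n = t * (t + 1) // 2
    p, s = phi(n), sigma(n)
    if t * t - 1 == 4 * p and s == 4 * p + t + 1:  # conditions (1) and (2)
        hits.append(t)
print(hits)  # [3, 7, 31, 127], the Mersenne primes below 200
```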
Show that a set of positive semidefinite (PSD) matrices is a convex set | The determinant being nonnegative does not imply that a $2 \times 2$ matrix is PSD.
You also need the diagonal elements to be nonnegative. Your set $P$ is not convex.
For example, $(10, 10, 1)^T$ and $(-10, -10, 1)^T$ are in $P$ but $\frac{1}{2}
(10, 10, 1)^T + \frac{1}{2} (-10, -10, 1)^T$ is not. |
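Concretely, assuming (as the notation suggests) that a triple $(x,y,z)$ encodes the symmetric matrix $\begin{pmatrix}x & z\\ z & y\end{pmatrix}$ and $P=\{(x,y,z)\,:\,xy - z^2 \ge 0\}$, the midpoint fails the determinant test even though both endpoints pass it, and $(-10,-10,1)$ shows that det $\ge 0$ alone does not give PSD:

```python
def det(v):
    x, y, z = v
    return x * y - z * z  # determinant of [[x, z], [z, y]]

a, b = (10, 10, 1), (-10, -10, 1)
mid = tuple((u + w) / 2 for u, w in zip(a, b))  # (0.0, 0.0, 1.0)

print(det(a) >= 0, det(b) >= 0)  # True True:  both endpoints lie in P
print(det(mid) >= 0)             # False:      the midpoint leaves P, so P is not convex
print(b[0] >= 0)                 # False:      (-10,-10,1) has det >= 0 but is not PSD
```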
How to find this Differential Equation? | Hint: Find the equation of the circle. The equation of a circle with centre $(g,h)$ is
$$(x-g)^2+(y-h)^2=r^2$$
Substitute $x=1,y=1$ and $x=-1,y=1$ to calculate $g$ and $h$ in terms of $r$. Differentiate both sides w.r.t. $x$; $r$ will be a constant.
computing the chromatic polynomial for a graph resulted from merging $n$ forests | No, it already fails in very small cases, even with $k=1$.
As an example: $K_4$ is the union of 2 spanning paths, but it is equally easy to take the union of 2 $P_4$'s and obtain a cycle, or a paw.
So without detailed information about how the forests overlap, this will not be possible.
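For instance (the vertex labels and the particular spanning paths below are my own choices), brute-force counting of proper $3$-colourings distinguishes the two possible unions, so their chromatic polynomials differ:

```python
from itertools import product

def colorings(edges, n, k):
    """Count proper k-colourings of a graph on vertices 0..n-1 by brute force."""
    return sum(all(c[u] != c[v] for u, v in edges)
               for c in product(range(k), repeat=n))

path1 = {(0, 1), (1, 2), (2, 3)}           # spanning path 0-1-2-3
k4 = path1 | {(2, 0), (0, 3), (3, 1)}      # union with path 2-0-3-1 gives K4
c4 = path1 | {(3, 0), (0, 1), (1, 2)}      # union with path 3-0-1-2 gives C4

print(colorings(k4, 4, 3), colorings(c4, 4, 3))  # 0 18
```

$K_4$ admits no proper $3$-colouring, while $C_4$ has $(k-1)^4+(k-1)=18$ of them at $k=3$.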
I have ignored your requirement that each component must be an edge. This would imply that $V$ must have an even number of vertices, since you also require that each forest has $V$ as its vertex set. But even if this requirement stands, the answer is still no.
$K_6$ has a 1-factorization, so it is a union of five spanning forests whose components are all edges. Many other graphs can be made from five such forests, by simply letting them overlap in different ways, and these graphs do not all have the same chromatic polynomial.
ln integration (differential equations problem) | Hint
By changing the variable you get:
$$\int\frac{dx}{x+\sqrt{x}}\underbrace{=}_{x=t^2} \int\frac{2dt}{t+1}=2\ln |t+1|+c\underbrace{=}_{t=\sqrt{x}}2\ln (\sqrt{x}+1) +c.$$
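A numeric spot-check on the interval $[1,4]$ (the interval is my own choice): the midpoint rule for $\int_1^4 \frac{dx}{x+\sqrt x}$ matches $2\ln(\sqrt4+1)-2\ln(\sqrt1+1)=2\ln\frac32$:

```python
from math import log, sqrt

N = 100_000
a, b = 1.0, 4.0
h = (b - a) / N

# Midpoint rule for the integral of 1/(x + sqrt(x)) on [1, 4].
numeric = sum(h / (x + sqrt(x)) for x in (a + h * (k + 0.5) for k in range(N)))
exact = 2 * log(3 / 2)  # 2 ln(sqrt(4)+1) - 2 ln(sqrt(1)+1)
print(abs(numeric - exact) < 1e-8)  # True
```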
Dependent Bernoulli trials | The most flexible structure is the one that assigns to each of the $2^n$ possible binary vectors $\left( x_1, \ldots, x_n \right)$ a probability $$P \left[ x_1 = i_1, \ldots, x_n = i_n \right] = p_{i_1, \ldots, i_n}$$ such that
$$ \sum_{i_1 = 0}^1 \cdots \sum_{i_n = 0}^1 p_{i_1, \ldots, i_n} = 1.$$
Thus, you have to specify $2^n - 1$ parameters. (This is much more complicated than your i.i.d. case, where you specify $n$ parameters $p_1, \ldots, p_n$.)
There are many ways to simplify the problem. |
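For concreteness, here is a minimal sketch of the fully general parameterization above for $n=3$ (the particular probability values are arbitrary illustrative choices, not part of the answer):

```python
from itertools import product

n = 3
# One probability per binary vector; 2**n - 1 of them are free parameters
# (the last is fixed by normalization).  The values here are arbitrary.
joint = {bits: 1 / 8 for bits in product((0, 1), repeat=n)}
joint[(1, 1, 1)] = 1 / 4
joint[(0, 0, 0)] = 0.0  # shifting mass creates dependence among the trials

assert abs(sum(joint.values()) - 1) < 1e-12  # normalization constraint

# Any event probability follows by summing cells, e.g. the marginal P(x1 = 1):
p_x1 = sum(p for bits, p in joint.items() if bits[0] == 1)
print(p_x1)  # 0.625
```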
Exact differential equation $\frac{y}{(y+x)^2}dx+ ( \frac{1}{y} - \frac{x}{(x+y)^2})dy=0$ | Both solutions are correct. They differ by a constant, which you can see by pulling out $-1$ from the general constant:
$$\frac x{x+y}+C=\frac x{x+y}-1+C=\frac{x-(x+y)}{x+y}+C=-\frac y{x+y}+C$$ |
Measure on non -discrete locally compact group | Claim 1. Given an open neighborhood $O$ of the identity in a nondiscrete locally compact group, construct a sequence $O_{1},O_{2},\ldots$ of nonempty pairwise disjoint open sets contained in $O$ such that $\lambda(O_{n})<2^{-n}$.
(Note that you must want each $O_{n}$ to be nonempty or the situation is trivial. Also if you know $O$ is relatively compact then each $O_{n}$ has compact closure too.)
Proof: It suffices to fix $\epsilon>0$ and find nonempty disjoint open subsets $U$ and $V$ of $O$ such that $\lambda(U)<\epsilon$; for if this is possible, then the sets $O_{n}$ can be constructed by induction.
Nondiscreteness of $G$ means that around any point we can find an open set of arbitrarily small measure.
So start with $O$ and choose two distinct $x,y\in O$ (these exist since $O$ is not a singleton). Choose $U$ and $V$ pairwise disjoint open sets with $x\in U$ and $y\in V$. We can intersect with $O$ to assume $U$ and $V$ are subsets of $O$. By the nondiscreteness property we can further intersect $U$ with an open neighborhood of $x$ of measure less than $\epsilon$, and thus ensure $\lambda(U)<\epsilon$. |
Upper bound on cardinality of a field | Yes, there are arbitrarily large fields. This follows from a result in logic, namely the Upward Löwenheim–Skolem theorem (see http://en.wikipedia.org/wiki/L%C3%B6wenheim%E2%80%93Skolem_theorem).
To be more concrete, consider the countable algebraically closed field $\bar{\mathbb{Q}}$. Every element of this field is algebraic (meaning it is a root of a polynomial). However, we can begin adding "transcendental" elements, like $\pi$ and $e$. These don't affect the "field properties" of the field. Then we can add any cardinality of transcendentals to construct a field of that particular size.
The Galois group of two irreducible polynomials | If I know the Galois group of $p(x)$ is $S_3$ and the Galois group of $f(x)$ is $S_2$, can I say the Galois group of $p(x)f(x)$ is $S_3 \times S_2$?
This is true as long as the intersection of the splitting fields of $p$ and $f$ gives the base field$^\dagger$. If this is the case, and if the Galois group for $f$ is $G_1$ and the Galois group for $p$ is $G_2$, then the Galois group of $pf$ is $G_1 \times G_2$.
You can see this by considering the fact that the Galois group of a polynomial $f$ can be thought of as a subgroup of the permutation group $S_{\deg(f)}$ that acts on the set of that polynomial's roots. In particular, one can show that roots of $f$ can be sent only to other roots of $f$ (this action is transitive $\iff$ the polynomial is irreducible). Put another way, given any algebraic element $\alpha \in \overline{F}$ over a field $F$, automorphisms of $\overline{F}$ can send $\alpha$ only to other roots of its minimal polynomial (its Galois conjugates).
This has the implication that the action of $\text{Gal}(pf)$, given $p$ and $f$ are irreducible, sends roots of $p$ only to other roots of $p$, and likewise for $f$. Therefore, any element of $\text{Gal}(pf)$ is going to permute some roots of $p$, or it's going to permute some roots of $f$, or it's going to be some composition of those two options (roots of $p$ can't be sent to roots of $f$ or vice-versa). This is characteristic of a direct product.
$^\dagger$ If the intersection of the two splitting fields is larger than the base field, then we will have automorphisms that cannot be expressed as a mere composition of two automorphisms, one of which moves around only the roots of $f$, and the other only roots of $p$. For example, if $f(x) = x^3 - 2$ and $p(x) = x^3 -3$, then complex conjugation is such an automorphism. More generally, if the polynomials are irreducible over $F$ and the intersection of the two splitting fields is some $E/F$, then the elements of $\text{Aut}(E/F)$ cannot be decomposed in this way. |