Measure theory exercise | First note that what you want to show is not true in general: you forgot to take the $\sigma$-algebra generated by $\{A\cap Y\mid \dots\}$ instead of just $\{A\cap Y\mid \dots\}$ itself.
To prove the corrected statement, use the good set principle, i.e. set
$$
G = \{E\in \Sigma \mid E\cap Y\in \sigma(\{A\cap Y\mid A\in \mathcal{A}\})\}.
$$
Show that $G$ is a $\sigma$-algebra and that $\mathcal{A}\subset G$.
Then think about why that implies your claim. |
Finding $\int\frac{\sqrt{1-t^2}}{1+t^2}dt$ | $$\int\frac{\cos^2\theta}{1+\sin^2\theta}d\theta$$
multiply the numerator and denominator by $\sec^4\theta$:
$$\int\frac{\sec^2\theta}{\sec^4\theta+\sec^2\theta\tan^2\theta}d\theta$$
apply the trigonometric identity $\sec^2\theta=1+\tan^2\theta$ to the denominator:
$$\int\frac{\sec^2\theta}{2\tan^4\theta+3\tan^2\theta+1}d\theta$$
substitute $u=\tan\theta$ (so that $du=\sec^2\theta\,d\theta$):
The rest is partial fraction expansion + inverse trigonometric functions |
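A sketch of those remaining steps (my own completion, with $t=\sin\theta$ the original variable): since $2u^4+3u^2+1=(2u^2+1)(u^2+1)$,
$$\int\frac{du}{(2u^2+1)(u^2+1)}=\int\left(\frac{2}{2u^2+1}-\frac{1}{u^2+1}\right)du=\sqrt{2}\arctan\left(\sqrt{2}\,u\right)-\arctan u+C,$$
and substituting back $u=\tan\theta=\frac{t}{\sqrt{1-t^2}}$ gives
$$\sqrt{2}\arctan\frac{\sqrt{2}\,t}{\sqrt{1-t^2}}-\arcsin t+C.$$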
Why is this proof wrong? | It uses a hasty generalization, a fallacy that looks like this: $$\exists x \exists yPxy\to\forall x\forall yPxy,$$ which is not valid.
Let $Pxy$ be "$x+y$ is a rational number". |
How do I force Sage to order my expressions a particular way, in LaTeX code? | Hmm, Sage isn't necessarily designed to be a typesetting tool. Sage (via Pynac) sorts these symbolic expressions in a certain canonical way for output. It's not so easy to change this, nor desirable from Sage's point of view.
If you are looking for something a little more programmatic and your only use case is LaTeX, then I highly suggest using SageTeX and putting the pieces you need Sage to do in there. Then you have complete control over order (because it's just LaTeX and you can decide exactly what Sage should compute) but you also can use Sage to compute some of the more tedious expressions.
To be specific, you could do various computations in a sagesilent block, and then insert them in the 'right' places using LaTeX's normal facilities for this (along with the \sage stuff). Think of it as string formatting; you really need to do this by hand, but once you know what you want it to look like, then you can do a million of them at once by just replacing the variables with your list of inputs.
Hope this helps; perhaps your use case is more complex and this might not work, but in your example it definitely would. |
strictly finer topologies and bases | Your question is a bit confusing, e.g. "finer" and "greater" (better: "coarser") aren't even defined for bases, and I think you also have it backwards in your explanation. Also, a topological space can have more than one base. Finally, if you think what you're saying is wrong, you should be able to provide a counterexample.
I'm assuming you meant to say this: If $T_1$ and $T_2$ are both topologies on the same set $X$ and if $T_1$ is strictly coarser than $T_2$, i.e. $T_1 \subsetneq T_2$, and if $B_i$ is a base for $T_i$ ($i=1,2$), does it follow that $B_1 \subseteq B_2$ or even $B_1 \subsetneq B_2$?
Counterexample: Take $X=\{1,2,3,4\}$, $T_1=\{\{\}, \{2\}, \{1, 2\}, \{2, 3\}, \{1, 2, 3\}, \{1, 2, 3, 4\}\}$ and $T_2={\cal P}(X)$. Then $T_1$ is strictly coarser than $T_2$. Now, $B_1=T_1$ is a basis for $T_1$ while $B_2=\{\{1\},\{2\},\{3\},\{4\}\}$ is a basis for $T_2$. But clearly not $B_1 \subseteq B_2$. |
To calculate $\nabla f$ and $\nabla f(0,0)$ | The question is answered in the comment, let me just briefly summarize:
You have correctly calculated the gradient vector $\nabla f$ using one of the correct definitions. Concerning $\nabla f(0,0)$, there seems to be some confusion in notation. As pointed out in the comment, $\nabla f(0,0)$ means that one first calculates $\nabla f = \nabla f(x, y)$, then evaluates at $(x, y) = (0,0)$. Since
$$ \nabla f \left(x,y\right)=\left(30 x^{2} - 10 x + 5 y,5 x + 10 y\right),$$
we have
$$ \nabla f \left(0,0\right)=\left(30 (0)^{2} - 10 (0) + 5 (0),5 (0)+ 10 (0)\right) = (0,0).$$ |
Prove that a 5 digit even number consisting of distinct even digits can't be a perfect square. | The remainder by $3$ is $2+4+6+8+0 \equiv 20 \equiv 2$ (using the divisibility rule mod $3$: it's the digit sum); but all squares have remainder $0$ or $1$ mod $3$. |
Representing classes of objects and type with number so they can be dismantled | Caution: I have limited experience with radix-conversion so there may be one or more errors here. However, the principle (enumerating all possible combinations) is sound.
For each integer $0\le n< 6\cdot5^7$ let $n_0,\ldots, n_7$ be the unique sequence such that...
$$n=\sum_{i=0}^7n_{i}\cdot5^{7-i}\qquad:\qquad 0\le n_0<6,\quad 0\le n_1,\ldots, n_7<5$$
Use $n_0$ to indicate the person, and $n_1,\ldots,n_7$ to indicate each of the seven properties. If I've done everything correctly (and I'm not sure that I have), then the string of digits $n_0n_1\cdots n_7$ will give person, hat, shirt, etc.
This is basically representing each combination as an eight-digit mixed-radix number, then converting that into decimal. In principle the highest such number should equal the total number of possible combinations, minus one ($``54444444"=6\cdot5^7-1$). This is necessarily the smallest maximum value that can be used.
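To make the scheme concrete, here is a small sketch of the encode/decode round trip in Scala (my own, not part of the original answer; the object name and helpers are mine, and the digit bounds follow the formula above):

object MixedRadix {
  // radix of each digit: n0 in [0,6), n1..n7 in [0,5)
  val radices = Seq(6, 5, 5, 5, 5, 5, 5, 5)

  // digits (most significant first) -> single integer
  def encode(digits: Seq[Int]): Int =
    digits.zip(radices).foldLeft(0) { case (acc, (d, r)) =>
      require(0 <= d && d < r)
      acc * r + d
    }

  // single integer -> digits (most significant first)
  def decode(n: Int): Seq[Int] =
    radices.reverse.foldLeft((n, List.empty[Int])) { case ((rest, ds), r) =>
      (rest / r, (rest % r) :: ds)
    }._2

  def main(args: Array[String]): Unit = {
    // the highest code is "54444444" = 6 * 5^7 - 1, as claimed
    assert(encode(Seq(5, 4, 4, 4, 4, 4, 4, 4)) == 6 * 78125 - 1)
    assert(decode(encode(Seq(3, 1, 4, 0, 2, 2, 1, 0))) == Seq(3, 1, 4, 0, 2, 2, 1, 0))
  }
}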
Matrix multiplication: is C(AB) the same as (CA)B? | Your reasoning is correct. However, it doesn't really make sense to ask whether $\mathbf C(\mathbf A\mathbf B)=\mathbf C\mathbf A\mathbf B$, since the right-hand side isn't defined unless either you already know that multiplication is associative or you've specified an order of evaluation (which is usual in programming languages but not in mathematics). The appropriate question is whether $\mathbf C(\mathbf A\mathbf B)=(\mathbf C\mathbf A)\mathbf B$, which is what you have in fact used in your derivation. This is called associativity; it holds for matrices; and it allows you to write $\mathbf C\mathbf A\mathbf B$ for either of the products. |
In how many ways 3 persons can solve N problems. | I don't think any formula will be able to capture those rules, for non-trivial values of $N$ and $k$.
What is certainly doable is to write an algorithm to determine that number; or even better, the infinite sequence of ways for each natural $n$, given a fixed $k$.
Such an algorithm basically computes each element of the series "inductively" from the previous element. Each element is internally represented in a split form, with a few useful partitions, one for each possible state of the current problem-solving situation.
Allow me to further explain.
Finite state machine
One instance of your sequence of N problem solutions can be imagined as a sequence of N+1 states of progress through the problem solving. The things you want to remember, during that process, are just
$$
a \in [0..k[ = \text{how many problems A solved so far (in modulo k)}\\
b \in [0,1] = \text{1 iff the last problem was solved by B, else 0}\\
c \in [0,1] = \text{1 iff C already solved any problem, else 0}
$$
By only knowing these facts (or variables) you are able to determine which are all possible further states, and if one of them is an ending state (i.e. if the current sequence of problem solvers will be one of the ways counted in $w(N)$).
For example, if $a=0, b=0, c=0$ I know that I can go to, depending on the next person who will solve the problem,
$A \to a=1, b=0, c=0$ (not an ending state)
$B \to a=0, b=1, c=0$ (not an ending state)
$C \to a=0, b=0, c=1$ (an ending state)
Track all states
Now, notice that all possible states are a finite number, and actually relatively few. The exact amount is $4k$, and in the worst case in which k = 10 (thanks for binding k), you will only have 40 states.
If you know, for each of the $4k$ states, in how many ways you could have got there in exactly n steps, you can determine in how many ways you can get to each of the $4k$ states after step n+1.
If you start with 0 problems, with 1 only in $w_0(0,0,0)$ (you can only start from this state in which A solved 0, B didn't solve the last one, and C didn't solve any), and compute the amount of ways $w_i$ for each state, for each i up to n, you will simply have to sum together the ways $w_n(0,0,1) + w_n(0,1,1)$ to get your ways in which n problems were solved and all your rules were honored.
The code
I coded this in Scala, I hope you don't mind the language choice.
Beware that your number grows pretty quickly, and if its value overflows the maximum Java Long value ($2^{63}-1$), you will have to replace the Long variables with BigInteger or something similar.
import collection.Map // generic Map interface

object ProblemSolverz {
  def main(args: Array[String]): Unit = {
    // C
    println(new ProblemSolverz(3).ways(1))
    // CC, BC, CB
    println(new ProblemSolverz(3).ways(2))
    // CCC, BCC, CBC, CCB, BCB
    println(new ProblemSolverz(3).ways(3))
    // CCCC
    // BCCC, CBCC, CCBC, CCCB
    // BCBC, BCCB, CBCB
    // AAAC, AACA, ACAA, CAAA
    println(new ProblemSolverz(3).ways(4))
    // a LOT of ways
    println(new ProblemSolverz(3).ways(20))
  }
}

class ProblemSolverz(k: Int) {
  def ways(N: Int) =
    // number of ways to end in a state in which
    // A solved a number of problems equal to 0 modulo k,
    // C solved at least one problem,
    // B whatever
    states(N - 1)((0, 0, 1)) + states(N - 1)((0, 1, 1))

  val states = makeNextStates(Map((0, 0, 0) -> 1L)) // initial state

  def makeNextStates(previousStates: Map[(Int, Int, Int), Long]): Stream[Map[(Int, Int, Int), Long]] = {
    // initialize all states to 0 (makes for cleaner code later)
    val states = scala.collection.mutable.Map((for {
      a <- 0 until k
      b <- 0 to 1
      c <- 0 to 1
    } yield ((a, b, c), 0L)): _*)
    // for each previous state, compute its successors
    previousStates.foreach({
      case ((a, b, c), ways) => {
        // A could solve this problem
        states(((a + 1) % k, 0, c)) += ways
        // B could solve this problem, if he is rested
        if (b == 0)
          states((a, 1, c)) += ways
        // C could solve this problem
        states((a, 0, 1)) += ways
      }
    })
    // contribute this state to the stream
    Stream.cons(states, makeNextStates(states))
  }
} |
Continuity of $x^2$ over $\overline{\mathbb{R}}$ | Define a map $f:[-\pi/2,\pi/2]\to\overline{\mathbb R}$ by $f(\pm \pi/2)=\pm\infty$ and $f(x)=\tan x$ for interior $x$. By definition of the metric on $\overline{\mathbb R}$, $f$ is an isometric bijection. Thus, it preserves all metric properties of the spaces under consideration. A function $g : \overline{\mathbb R}\to \overline{\mathbb R}$ is continuous if and only if $f^{-1}\circ g\circ f$ is continuous on $[-\pi/2,\pi/2]$ in the usual metric. With $g(x)=x^2$ we get
$$f^{-1}\circ g\circ f(x) = \begin{cases}\arctan \tan^2 x,\quad &|x|<\pi/2 \\ \pi/2,\quad &|x|=\pi/2\end{cases}$$
To show continuity, we only need to check that $\arctan \tan^2 x\to \pi/2$ as $x\to\pm \pi/2$, which is easy. |
Showing this metric space is complete | HINT:
The map $x \mapsto \frac{1}{x}$ from your space to $[1, \infty)$ with the usual metric is an isometry. |
Everyday life examples of hyperbolic rotations | The terminology "hyperbolic rotation" is somewhat ambiguous. Let me try to explain how a geometer thinks of this (which is probably different than how a physicist thinks of it).
In $n+1$-dimensional space-time, Lorentz transformations can be understood geometrically as transformations of the subspace $t^2 - x_1^2 - \cdots - x_n^2 = 1$. The infinitesimal Lorentz metric, when restricted to that subspace, is isometric to the $n$-dimensional hyperbolic space $\mathbb{H}^n$; this is known as the hyperboloid model of $\mathbb{H}^n$.
The restriction of any Lorentz transformation is an isometry of $\mathbb{H}^n$, and any isometry of $\mathbb{H}^n$ is the restriction of a Lorentz transformation. So your question requires understanding the isometries of hyperbolic space.
The subgroup of isometries of $\mathbb{H}^n$ that fixes a point is actually isomorphic to the subgroup of isometries of Euclidean space $\mathbb{E}^n$ that fixes a point. To a geometer, these are what one ordinarily thinks of as "hyperbolic rotations". Infinitesimally, there is no difference between the two cases.
Regarding the example in your question, to a geometer this example is not a "hyperbolic rotation" but is instead a "hyperbolic translation". In the $1+1$ dimensional example you are talking about, $\mathbb{H}^1$ is just a line, and the action on this line is a translation, displacing all points on that line the same distance in the same direction. In $n+1$ dimensions, a "translation" of $\mathbb{H}^n$ has a unique line that is preserved, and it displaces points along that line the same distance and the same direction; other points, that do not lie on that line, will be moved a greater distance. This is unlike the situation of a translation of $\mathbb{E}^n$, in which all points are displaced in the same direction and by the same distance. |
Why is $ \left( \frac1n \sum_{i=1}^n x_i^{p+1} \right) \geq \left( \frac1n \sum_{i=1}^n x_i \right)^{p+1} $ true? | I try to follow from the definition of convex/concave functions, but it's not working out.
Yes, it is. If the function $u$ is convex, then
$$
u\left(\frac1n\sum_{k=1}^nx_k\right)\leqslant\frac1n\sum_{k=1}^nu(x_k).
$$
For $u:x\mapsto x^{p+1}$ on $x\geqslant0$ this is your first inequality. For $u:x\mapsto-x^{p}$ on $x\geqslant0$ this is your second inequality. |
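A quick numerical sanity check (my own): with $n=2$, $p=1$ and $x=(1,3)$, the first inequality reads $\frac{1+9}{2}=5\geqslant\left(\frac{1+3}{2}\right)^{2}=4$.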
Elements in partial ordered set. | Let $G$ be the partial order consisting only of the elements $a,b$ and $c$, with $a\leq c$ and $b\leq c$ but $a$ and $b$ unrelated. As $P(a)$ holds and $a\leq c$, necessarily $P(c)$ has to hold. Thus we know that
$$P(a)=true\>\>\>\> P(b)=false \>\>\>\> P(c)=true$$
1. is true, since for each element $x\in G$ such that $x\neq b$ we have $x = a$ or $x=c$, and then $P(x)=true$.
2. is also true, since if $x\neq a$ and $x\neq c$ then $x=b$, thus $P(x)=false$.
3. is also true, since for each $x$ such that $x\geq b$ and $x\neq c$, the only option available for $x$ is $x=b$, and thus we know that $P(x)=false$.
On the other hand,
4. can be true if we do not have any element which is greater than both $a$ and $b$, something which is not necessary in a partial order. So for instance choose the order consisting of three disjoint elements which are unrelated to each other. If we however assume that $c$ is a greatest element, then indeed it cannot be true. |
$x$ and $y$ are natural numbers $x+y+21=3xy$. Find maximum possible integral power of $6$ in $(xy)$! | Hint:
$$1+63=(3x-1)(3y-1)$$
Now what are the positive divisors $(\equiv-1\pmod3)$ of $64$? |
Value of $\sum\sin\frac{1}{2^i}$ | It's not a very nice series, I don't think you will get a closed-form analytic expression. However, you can compute its value to any desired accuracy numerically. You can also bound it from above:
$$\sum_{n=0}^\infty \sin 2^{-n}<\sum_{n=0}^\infty 2^{-n}=2$$
This also means that you can compute it much more accurately than just summing the first few terms: you can approximate the rest with a geometric sum.
Example:
$$\sum_{n=0}^\infty \sin 2^{-n}<\sin 1+\sin \frac12 + \sum_{n=2}^\infty 2^{-n}\approx$$
$$0.841471+0.479426+\frac{1}{2}=1.8209$$
The accurate value to 5 decimal places (Wolfram Alpha) is $1.81793\ldots$ |
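For what it's worth, here is a tiny numeric cross-check in Scala (my own sketch, not part of the original answer; the constants are just the two sine values used above):

object SinSeriesCheck {
  def main(args: Array[String]): Unit = {
    // direct partial sum of sin(2^-n); 60 terms is far more than double precision needs
    val direct = (0 until 60).map(n => math.sin(math.pow(2.0, -n))).sum
    // first two terms kept exact, tail n >= 2 replaced by the geometric sum 1/2
    val approx = math.sin(1.0) + math.sin(0.5) + 0.5
    println(f"direct = $direct%.5f, upper bound = $approx%.5f") // 1.81793, 1.82090
  }
}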
Derivation of mutual information's closed-form analytical solution | Let $(X, Y) \sim \mathcal{N}(0, K),$ where
$$
K=\left[\begin{array}{cc}
\sigma^{2} & \rho \sigma^{2} \\
\rho \sigma^{2} & \sigma^{2}
\end{array}\right]
$$
Then $$h(X)=h(Y)=\frac{1}{2} \log (2 \pi e) \sigma^{2}$$ and $$h(X, Y)=\frac{1}{2} \log (2 \pi e)^{2}|K|=
\frac{1}{2} \log (2 \pi e)^{2} \sigma^{4}\left(1-\rho^{2}\right),$$ and therefore
$$
I(X ; Y)=h(X)+h(Y)-h(X, Y)=-\frac{1}{2} \log \left(1-\rho^{2}\right)
$$
If $\rho=0$, $X$ and $Y$ are independent and the mutual information is $0$. If $\rho=\pm 1$, $X$ and $Y$ are perfectly correlated and the mutual information is infinite. |
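For a concrete number (my own arithmetic, not part of the original answer): with $\rho=\frac12$ the formula gives $I(X;Y)=-\frac{1}{2}\log\left(1-\frac14\right)=\frac{1}{2}\log\frac43\approx 0.144$ nats, taking $\log$ to be the natural logarithm.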
Permutation that gives a sequence of non-negative partial sums | Consider the finite sequence of partial sums $\bigl(s_1(k)\bigr)_{1 \leqslant k \leqslant n}$. There is at least one $m$ with
$$s_1(m) = \min \left\{ s_1(k) : 1 \leqslant k \leqslant n\right\}.$$
Then the cyclic permutation $\sigma_{m+1}$ (with the identification $\sigma_{n+1} = \sigma_1$ if $m = n$) has the desired property. |
How to define exponentiation of real numbers with negative base (when the result is real)? | Short answer:
When $y$ is rational, you use the "root" definition,
$$z=x^{p/q}\iff z^q=x^p$$
where $p/q$ is irreducible (no need to recall what a natural power is). For even $q$, $x$ and $z$ must be positive.
We need to add
$$x^{-p}=\frac1{x^p},\\x^0=1.$$
When $y$ is irrational, you don't define anything.
The reason why we don't define powers of negative numbers for irrational exponents is that we don't know what sign to choose to make it coherent with the usual properties of exponentiation. It doesn't seem possible to assign a "parity" to the reals. |
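To see why $p/q$ must be irreducible, here is an illustration of my own: with $x=-8$, the reduced exponent gives $(-8)^{1/3}=-2$ since $(-2)^3=-8$; but naively using the unreduced form $2/6$ would require $z^6=(-8)^2=64$, and the positivity rule for even $q$ would then force $z=2$, a different answer.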
How to solve an equation with a root in the divisor | HINT
For $\sqrt{z}\ne 3$, i.e. $z\ne 9$,
$$
\frac{4-x^2}{3-\sqrt{z}} - 6 = 0 \iff 4-x^2 = 6(3-\sqrt{z})
$$
can you solve for $\sqrt{z}$ and square both sides? |
The relation $y^2=f(x)$ | If you are sketching over $\Bbb R$, the graph of $y$ has no points wherever $f(x)<0$. $y$ will have the same zeroes and poles as $f(x)$, as well as a vertical tangent line where $f(x)=0$. This is because $$y=\pm\sqrt {f(x)}$$ so $$\frac{dy}{dx}=\pm \frac{f'(x)}{2\sqrt{f(x)}}$$
Only if $f'(x)=f(x)=0$ will there not necessarily be a vertical tangent, since $\frac{dy}{dx}$ will be indeterminate. For each value of $f(x)$ there will be two values of $y$, due to $y^2$. |
$\mu,\nu$ $\sigma$-finite and $\nu \le \mu$. Then there exists a $\mu$-almost-everywhere unique function $f$ with $0 \le f \le 1 \ \mu \ a.e.$ | Well, I presume $\nu\leq \mu$ means $\nu(A) \leq \mu(A)$ for each $A\in \mathcal A$. In that case, let's assume the contrary to the statement you need to prove, i.e. $\mu(B) > 0$ where $B = \{f > 1\}$. We get
$$
\nu(B) - \mu(B) = \int_B(f - 1)\mathrm d\mu > 0
$$
which contradicts the fact that $\nu\leq \mu$. Now, the fact that the Lebesgue integral of a function which is positive on a set of positive measure is itself positive, I hope you can show yourself. As a hint: split this set into the subsets $\left\{f-1\geq\frac1n\right\}$ where $n$ runs over the positive integers. |
Algebra review for Spivak Calculus | I would recommend going to Khan Academy's list of math courses. They have three courses on algebra at the high school level, and it looks like a pretty thorough collection of topics. You can brush up on your prealgebra, trig, and precalculus skills if necessary as well, and you might also find it useful to have the Khan calculus videos as you go through Spivak.
focusing on very extreme questions where it is impossible to rely on a shallow understanding of the rules
For this, pick up a number theory textbook. I think Dover has one on Amazon for cheap. Try to do the exercises. They will test your ability to solve problems algebraically. |
norm of vector + element of subspace | Let $u_0$ be a nonzero vector in $U$. Show that $\phi(\alpha) = ||x + \alpha u_0||$ is a continuous function from $\mathbf R$ to $\mathbf R$. |
Integral $\ 4\int_0^1\frac{\chi_2(x)\operatorname{Li}_2(x)}{x}\ dx+\int_0^1\frac{\log(1-x)\log^2(x)\log(1+x)}{x}\ dx$ | Using the relation between the Chi function and Dilogarithm we can rewrite the first integral as:
$$4\int_0^1\frac{\chi_2(x)\operatorname{Li}_2(x)}{x}dx=2\int_0^1\frac{\operatorname{Li}^2_2(x)}{x} dx-2\int_0^1\frac{\operatorname{Li}_2(x)\operatorname{Li}_2(-x)}{x} dx$$
You solved the first part here.
$$\int_0^1\frac{\operatorname{Li}_2^2(x)}{x}dx=2\zeta(2)\zeta(3)-3\zeta(5)$$
And the second one is found here:
$$\int_0^1\frac{\operatorname{Li}_2(x){\operatorname{Li}_2(-x)}}{x}dx =-\frac54\zeta(2)\zeta(3)+\frac{59}{32}\zeta(5)$$
Combining the two results from above yields:
$$\boxed{4\int_0^1\frac{\chi_2(x)\operatorname{Li}_2(x)}{x}dx=\frac{13}{2}\zeta(2)\zeta(3)-\frac{155}{16}\zeta(5)}$$
The second integral is solved here.
$$\boxed{\int_0^1\frac{\ln(1-x)\ln^2 x\ln(1+x)}{x}dx=\frac34 \zeta(2)\zeta(3)-\frac{27}{16}\zeta(5)}$$
Combining the two boxed results gives:
$$4\int_0^1\frac{\chi_2(x)\operatorname{Li}_2(x)}{x} dx+\int_0^1\frac{\ln(1-x)\ln^2(x)\ln(1+x)}{x} dx=\frac{29}4\zeta(2)\zeta(3)-\frac{91}8\zeta(5)$$
Remark.
We know from above that:
$$\int_0^1\frac{\chi_2(x)\operatorname{Li}_2(x)}{x}dx=\frac{13}{8}\zeta(2)\zeta(3)-\frac{155}{64}\zeta(5)$$
But expanding $\chi_2$ as a series and integrating by parts also gives us:
$$\sum_{n=0}^\infty \frac{1}{(2n+1)^2}\int_0^1 x^{2n}\operatorname{Li}_2 (x)dx$$$$\overset{IBP}=\sum_{n=0}^\infty \frac{\operatorname{Li}_2(1)}{(2n+1)^3}+\sum_{n=0}^\infty \frac{1}{(2n+1)^3}\int_0^1 x^{2n}\ln(1-x)dx$$
$$=\frac{7}{8}\zeta(2)\zeta(3) -\sum_{n=0}^\infty \frac{H_{2n+1}}{(2n+1)^4},$$ since $\int_0^1 x^{2n}\ln(1-x)\,dx=-\frac{H_{2n+1}}{2n+1}$.
Which results in:
$$\sum_{n=0}^\infty \frac{H_{2n+1}}{(2n+1)^4}=\frac{155}{64}\zeta(5)-\frac34\zeta(2)\zeta(3)$$
Alternatively one can compute that sum in a different way to find the value of the first integral. |
Complex integral over all of $\mathbb{R}$ | There are four cases.
Case 1: $x>0$ and $\lambda<0$
Case 2: $x<0$ and $\lambda<0$
Case 3: $x>0$ and $\lambda>0$
Case 4: $x<0$ and $\lambda>0$
We shall examine Case 1 for which $x>0$ and $\lambda<0$.
Observe the integral $I$ given by
$$I=\oint_C\frac{e^{ixz}}{(\lambda-iz)^n}dz$$
where $C$ is comprised of the real-line from $-R$ to $R$ plus a semi-circle of radius $R$ in the upper-half $z$-plane.
The contribution to the integral from the integration over the semi-circle tends to zero as $R \to \infty$. Thus,
$$\begin{align}
I&=\oint_C\frac{e^{ixz}}{(\lambda-iz)^n}dz=\int_{-\infty}^{\infty}\frac{e^{ixz}}{(\lambda-iz)^n}dz\\\\
&=2\pi i \text{Res}\left(\frac{e^{ixz}}{(\lambda-iz)^n}, z=-i\lambda\right)
\end{align}$$
For a pole of order $n$, the residue is given by
$$\begin{align}
\text{Res}\left(\frac{e^{ixz}}{(\lambda-iz)^n}, z=-i\lambda\right)&=\frac{1}{(n-1)!}\lim_{z\to -i\lambda}\frac{d^{n-1}}{dz^{n-1}}\left((z+i\lambda)^n\frac{e^{ixz}}{(\lambda-iz)^n}\right)\\\\
&=\frac{1}{(n-1)!}\frac{(ix)^{n-1}e^{\lambda x}}{(-i)^n}
\end{align}$$
Thus, the integral of interest is
$$I=2\pi \frac{(-1)^n\,x^{n-1}}{(n-1)!}e^{\lambda x}$$ |
Intuitive logic behind the following equivalences in propositional logic | One issue is that in all your explanations, you only show that the RHS follows from the LHS, and not that the LHS follows from the RHS.
Also, I am not sure that your explanations are really all that more 'intuitive' than going through a truth-table or using boolean algebra. What might help more, is to come up with some concrete examples.
For example, for the second equivalence $(p \to r) \land (q \to r) \equiv (p \lor q) \to r $ you could say that the LHS says that if you are a fruit ($p$) then you are delicious ($r$), and also being a vegetable ($q$) makes you delicious ($r$), whereas the RHS says that if you are either a fruit or a vegetable ($p \lor q$) then you are delicious ($r$). And these two claims are really saying the same thing: both fruits and vegetables are delicious! |
Prove a cyclic subgroup is normal? | Knowing that $r,s$ generate $G$, we only need to check $rHr^{-1}=sHs^{-1}=H$ to show a subgroup $H$ is normal in $G$.
This is clear for your subgroups. But the second subgroup does not seem to be cyclic. |
Can every element in the stalk be represented by a section in the top space? | No: take the sheaf of holomorphic functions $S=\mathcal O$ on $X=\mathbb C$. Then the germ at $x=1$ of $1/z$ does not come from $\mathcal O(\mathbb C)$.
@Arturo: no you are not misremembering. Here is an example. Take $X=\mathbb P_n (\mathbb C)$ and $S=\mathcal O (-1)$= tautological bundle. Then every stalk $S_x$ is isomorphic with $\mathcal O_x$ so is not zero but $S(\mathbb P_n (\mathbb C))=0$. (By the GAGA principle of Serre you can choose to think in holomorphic way or algebraic way in this example)
@Robin Chapman: Yes, there are sheaves $S$ with all morphisms $S(X) \to S_x$ surjective, that are not flabby. For example, take $X=\mathbb R$ and $S=C$ = sheaf of continuous functions. Then for $x\in \mathbb R$ and $f_x \in C_x$ take a bump function $\phi$ equal to $1$ in vicinity of $x$ and support smaller than the domain of definition of $f$. Then $\phi f$ (extended by zero) is a global function in $C(\mathbb R)$ and its germ at $x$ is equal to $f_x$. But of course $C$ is not flabby. |
Chinese remainder theorem as sheaf condition? | Each ideal $I\subset R$ corresponds to a closed subscheme of $\mathrm{Spec}(R)$, intersection of ideals corresponds to union of subschemes, and sum of ideals corresponds to intersection of subschemes. If a finite collection of closed subschemes is an open cover of their union (which is the case if and only if the union is disjoint), then indeed your diagram is an equalizer, precisely because of the sheaf condition. But in general the sheaf condition won't hold for coverings by closed sets.
A counterexample: let $k$ be a field, $R=k[x,y]$, $I_1=(x)$, $I_2=(y)$, $I_3=(x-y)$. Then $\mathrm{Spec}(R)$ is the affine plane and the ideals $I_1$, $I_2$, and $I_3$ correspond to the lines $L_1:x=0$, $L_2:y=0$, and $L_3:x=y$. The statement that your diagram is an equalizer is equivalent to the statement
the data of a regular function on $L_1\cup L_2\cup L_3$ is the same as the data of a triple of regular functions $f_1$, $f_2$, $f_3$ on $L_1$, $L_2$, $L_3$ resp., such that all three functions take the same value at the origin.
This is false because we need an extra condition on the functions $f_1$, $f_2$, $f_3$, namely for these functions to determine a function on $L_1\cup L_2\cup L_3$, the derivative of $f_3$ in the direction $(1,1)$ needs to equal the sum of the derivative of $f_1$ in the direction $(0,1)$ and the derivative of $f_2$ in the direction $(1,0)$.
The data of $\prod R/I_j$ and the maps to $\prod R/(I_i+I_j)$ corresponds to data of a bunch of closed subschemes and their pairwise intersections. This is not enough to determine a scheme. For instance, the union $L_1\cup L_2\cup L_3$ is not isomorphic to the union of the coordinate axes in $\mathbb{A}^3$, because the tangent space to the former at the singular point is $2$-dimensional, while the tangent space to the latter is $3$-dimensional. But both schemes are the union of three lines, such that the pairwise intersection is a single point. For the example $I_1,I_2,I_3\subset R$ above, the equalizer of your diagram is the coordinate ring of the union of the coordinate axes in $\mathbb{A}^3$, rather than of $L_1\cup L_2\cup L_3$. |
Further from Cauchy inequality | $$M(R) \le \sum_{n=0}^{\infty}|a_n|R^n= \sum_{n=0}^{\infty}|a_n|(2R)^n/2^n \le \sum_{n=0}^{\infty}A(2R)/2^n = 2A(2R).$$ |
Approximation of fiber bundle isomorphisms | This is true. Here is a high-powered proof. There are proofs in the flavor of Hirsch; I don't know a reference, but with some gusto you could prove them.
$C^k$ bundles with fiber $M$ are the same as fiber bundles with structure group $\text{Diff}^k(M)$. They are thus classified by maps $X \to B\text{Diff}^k(M)$, as long as $X$ is paracompact. Then your question follows by proving that the map $B\text{Diff}^\infty(M) \to B\text{Diff}^1(M)$ is a homotopy equivalence; because taking the loop space of this we recover the inclusion homomorphism, it suffices to show that $\text{Diff}^\infty(M) \to \text{Diff}^1(M)$ is a homotopy equivalence. But this follows from what's written in Hirsch:
Let $f: S^k \to \text{Diff}^1(M)$ be a sphere's worth of $C^1$ diffeomorphisms of $M$. By extending the smooth approximation theory in Hirsch to the case when the codomain is a Banach manifold, we may homotope $f$ to be $C^1$; this means that the induced map $f: S^k \times M \to M$ is $C^1$, not just $C^1$ in the $M$-direction. Now this is homotopic to a smooth map $f': S^k \times M \to M$; because $f(x,-)$ was a diffeomorphism and $f'$ is chosen sufficiently close to $f$, $f'(x,-)$ is also a diffeomorphism. (Diffeomorphisms are open in the $C^1$ topology.) Thus the map $\text{Diff}^\infty(M) \to \text{Diff}^1(M)$ is surjective on $\pi_k$. A similar argument in the case with boundary shows that it is injective on homotopy groups. Because $\text{Diff}^k(M)$ are metrizable manifolds, a theorem of Palais says that a weak homotopy equivalence between them is actually a homotopy equivalence, as desired. |
Lagrange multipliers of $f(x,y,z,w) =x+y-z-w$ | Careful in your procedure. If $g_1 = c_1$ and $g_2 = c_2$ are constraints, then don't combine them into one equation $g = g_1 + g_2$ as you have done here. Rather, the Lagrange equations are
$$\nabla(f + \lambda_1 g_1 + \lambda_2 g_2) = 0$$ |
Number game and divisibility: what is the proof that this trick (divisible by 8) works? | Note that $1000$ is divisible by $8$, so we can just consider $H$, $T$, and $U$.
In fact, let's group $T$ and $U$ together into $R$, i.e. $R=10T+U$.
So, we now need to consider $100H+R$.
If $R$ is divisible by $8$ and $H$ is odd, then the remainder of $100H+R$ modulo $8$ equals that of $100H = 100(2n+1) = 200n+100$, which reduces to $100$ and then to $4$.
If $R$ is divisible by $4$ but not by $8$, and $H$ is odd, then we can write $R=4(2p+1)$ and $H=2h+1$, so the remainder would be $200h+100+8p+4=8(25h+p+13) \to 0$.
One can consider the other two cases to prove the algorithm. |
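A brute-force sanity check in Scala (my own sketch; I am assuming the trick being proved is: the number is divisible by $8$ iff $R$ is divisible by $8$ and $H$ is even, or $R\equiv 4\pmod 8$ and $H$ is odd):

object DivBy8Check {
  def main(args: Array[String]): Unit = {
    // thousands and above are divisible by 8, so H in 0..9 and R in 0..99 is exhaustive
    val ok = (for (h <- 0 to 9; r <- 0 to 99) yield {
      val trick = (r % 8 == 0 && h % 2 == 0) || (r % 8 == 4 && h % 2 == 1)
      ((100 * h + r) % 8 == 0) == trick
    }).forall(identity)
    println(s"trick agrees with direct division: $ok") // prints true
  }
}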
Prove $x^n + x < (x^n)x$ using induction. | Let $P(n): x^n+x< x^{n+1}, n \in \mathbb{N}, x > 2$.
Check $P(1)$ is true. $x^1+x = 2x < x^2 \iff x(x-2) > 0$ is true since $x > 2$.
Assume $P(n)$ is true, i.e. $x^n + x < x^{n+1}$, prove $P(n+1)$ is true.
$x^{n+1} + x = x\cdot x^n + x < x(x^{n+1}-x) + x = x^{n+2} - x^2 + x < x^{n+2} - x^2 + x^2 = x^{n+2}$, using the induction hypothesis in the first inequality and $x^2 > x$ in the second. Thus $P(n+1)$ is true, and by the principle of mathematical induction the statement is true. |
Finding Existence of This Limit | The limit doesn't exist. If you go along the line $y=(x/9.3)^2$, the denominator is zero, while the numerator is
$$
3x^3-6x^3/9.3=(3-6/9.3)x^3,
$$
nonzero for any $x\ne0$. |
Covariant derivative of $Ricc^2$ | We want to compute
$$(\nabla(R\cdot Rc))(X)$$
Connections commute with contractions, so we start by considering the expression (where we contract $X$ with $Rc$, obtaining $Rc(X)$)
$$\nabla(R\cdot Rc\otimes X) = (\nabla (R\cdot Rc))\otimes X + R\cdot Rc\otimes (\nabla X)$$
Taking the contraction and moving terms around, we get
$$(\nabla(R\cdot Rc))(X) = \nabla(R\cdot Rc(X)) - R\cdot Rc(\nabla X)$$
For the second one, notice that
$$Rc\circ Rc = Rc\otimes Rc$$
after contraction, so we proceed as before and start with
$$\nabla[Rc\otimes Rc\otimes Y] = [\nabla(Rc\otimes Rc)]\otimes Y + Rc\otimes Rc\otimes \nabla Y$$
We contract and move terms to get
$$[\nabla(Rc\circ Rc)](Y) = \nabla(Rc(Rc(Y))) - Rc(Rc(\nabla Y))$$
Now notice that using twice the first equality we derived we obtain
$$\nabla(Rc(Rc(Y))) = (\nabla Rc)(Rc(Y)) + Rc((\nabla Rc(Y))) =$$
$$= (\nabla Rc)(Rc(Y)) + Rc((\nabla Rc)(Y) +Rc(\nabla Y)))$$
Putting all together we obtain
$$\nabla(Rc\circ Rc) = (\nabla Rc)\circ Rc + Rc\circ(\nabla Rc)$$
This was an explicit derivation of everything. You can also use the various rules on the connections on the bundles associated to the tangent bundle to get directly
$$\nabla(R\cdot Rc) = dR\otimes Rc + R\nabla Rc$$
(which agrees with the first result, if you work it out a bit) and
$$\nabla(Rc\circ Rc) = \nabla(Rc\otimes Rc) = (\nabla Rc)\otimes Rc + Rc\otimes(\nabla Rc) = (\nabla Rc)\circ Rc + Rc\circ(\nabla Rc)$$
where there is a contraction of the tensors in the intermediate steps. |
How is the law of total probability used in deriving absorption probabilities? | Let $A_0=\{\text{absorption in }0\}$. Then
\begin{align}
\mathsf{P}(A_0\mid X_0=i)&=\frac{\mathsf{P}(A_0, X_0=i)}{\mathsf{P}(X_0=i)} \\
&=\sum_k \frac{\mathsf{P}(A_0, X_0=i,X_1=k)}{\mathsf{P}(X_0=i)} \\
&=\sum_k \mathsf{P}(A_0\mid X_1=k, X_0=i)\mathsf{P}(X_1=k\mid X_0=i) \\
&=\sum_k \mathsf{P}(A_0\mid X_1=k)\mathsf{P}(X_1=k\mid X_0=i),
\end{align}
where the 4th line follows from the Markov property. Namely, let $\tau_0:=\inf\{n\ge 1:X_n=0\}$. Then $A_0=\{\tau_0<\infty\}$ and
\begin{align}
\mathsf{P}(A_0\mid X_1=k,X_0=i)&=\sum_{n\ge 1}\mathsf{P}(\tau_0=n\mid X_1=k,X_0=i) \\
&=\sum_{n\ge 1}\mathsf{P}(X_n=0,X_{n-1}\ne 0,\ldots\mid X_1=k,X_0=i) \\
&=\sum_{n\ge 1}\mathsf{P}(X_n=0,X_{n-1}\ne 0,\ldots\mid X_1=k) \\
&=\mathsf{P}(A_0\mid X_1=k),
\end{align}
where the 3rd line follows from the Markov property (sometimes it's called the Extended Markov property, i.e. $\{X_n,X_{n+1},\ldots\}$ is conditionally independent of $\{X_0,\ldots, X_{n-1}\}$ given $X_n$. See, for example, Theorem 4.1.5 in these notes). |
Function notation when $f^n=g^n$ at some $n$? | To answer the question in the body, we say that "$64$ is in the orbit of $2$ under $f$".
More generally, if $f^n(x_0)=y$, then say that "$y$ is in the orbit of $x_0$ under $f$". |
How do you solve this system of equations? | Hint:
From the first equation you have
$$
\gamma=\frac{1-\beta^2}{v\beta}
$$
so the second equation becomes:
$$
c^2(1-\beta^2)=-v^2\beta^2
$$ |
Show the following ring does not have nontrivial two sided nilpotent ideal | A finite-dimensional algebra over a field is semisimple if and only if it has no nontrivial nilpotent two sided ideals, so the statement follows from your previous question Show the following ring is semisimple.
Concerning the above statement about the connection between semisimplicity and nilpotent two-sided ideals:
A finite-dimensional algebra $A$ is semisimple if and only if its radical ideal $\text{rad}(A)$, the intersection of all maximal left ideals, is trivial (the vanishing of $\text{rad}(A)$ is equivalent to the existence of an injection $A\rightarrowtail M_1\oplus\ldots\oplus M_n$ for $M_i$ simple left $A$-modules, which in turn is equivalent to the semisimplicity of $A$). Since $\text{rad}(A)$ is a two-sided, nilpotent ideal (nilpotency follows from Nakayama's Lemma), this proves "$\Leftarrow$". Conversely, suppose $A$ is semisimple and let $I\lhd A$ be a nontrivial two-sided ideal. By semisimplicity of $A$, there exists a complementary left ideal $J\lhd A$, that is, $A = I\oplus J$. Writing $1 = e_1 + e_2$ for $e_1\in I, e_2\in J$ then gives $e_1 = e_1\cdot 1 = e_1^2 + e_1 e_2$, so we get $e_1 e_2\in I\cap J=\{0\}$ and we get $e_1 = e_1^2$. Further, $e_1\neq 0$ since for $x\in I\setminus\{0\}$ we have $x = x\cdot 1 = x e_1 + x e_2$, so $x e_2 = 0$ and $0\neq x = x e_1$ by the same reasoning. Hence, any nontrivial two-sided ideal of $A$ contains a nontrivial idempotent, hence cannot be nilpotent. |
Infinite summation involving exponents | Hint:
$$\dfrac{2^r}{5^{2^r}-1}-\dfrac{2^r}{5^{2^r}+1}=\dfrac{2^{r+1}}{5^{2^{r+1}}-1}$$
Use Telescoping series |
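If the sum in question is $\sum_{r=0}^\infty\frac{2^r}{5^{2^r}+1}$ (my reading of the title), the hint rearranges to
$$\dfrac{2^r}{5^{2^r}+1}=\dfrac{2^r}{5^{2^r}-1}-\dfrac{2^{r+1}}{5^{2^{r+1}}-1},$$
so the partial sums telescope: $\sum_{r=0}^N\frac{2^r}{5^{2^r}+1}=\frac{1}{5-1}-\frac{2^{N+1}}{5^{2^{N+1}}-1}\to\frac14$.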
There exist linear operators $G$ and $H$ such that $T \circ G=0$ and $H \circ T=0$ if $T$ is not invertible | By assumption $T$ is not invertible, hence (as $V$ is finite-dimensional) not injective, so the kernel of $T$ is some subspace $W\not=\{0\}$ of $V$. There exists a basis $\{e_1,...,e_n\}$ of the whole space $V$ such that the first $k$ elements are a basis of $W$.
Can you from there construct a linear map $G:V\rightarrow V$ which is the identity in $W$ and such that $T\circ G=0$? It is enough to decide what $G$ does on the basis $\{e_1,...,e_n\}$ and extend by linearity.
The other assertion works similarly, $T$ is not surjective so its range is a proper subspace $W\not=V$. Construct an appropriate basis and then a linear map $H$ such that $H\circ T=0$. |
Differentiation inside a conditional expectation | I would answer your questions under the following additional conditions:
$$\biggl|\dfrac{\partial Y(x,\omega)}{\partial x}\biggr|\le Z(\omega),\quad \forall x\in U,\qquad
\mathsf{E}[Z(\omega)]<\infty. \tag{1}$$
In the following we always consider the continuous versions of $Y(x)$, $\mathsf{E}[Y(x)|\mathcal{G}]$, $\frac{\partial Y(x)}{\partial x}$ and $\mathsf{E}[\frac{\partial Y(x)}{\partial x}|\mathcal{G}]$. (c.f. here) Using (1), we have
$$ \int_{a}^x\mathsf{E}\bigg[\dfrac{\partial Y(u)}{\partial u} \biggm| \mathcal{G} \biggr]\,du \stackrel{(*)}=\mathsf{E}\biggl[\int_{a}^x\dfrac{\partial Y(u)}{\partial u} \,du\biggm| \mathcal{G}\biggr]=\mathsf{E}[Y(x)|\mathcal{G}]-\mathsf{E}[Y(a)|\mathcal{G}],\quad a,x\in U. $$
This means that $\mathsf{E}[Y(x)|\mathcal{G}]$ is differentiable in $x$ and
$$\dfrac{\partial \mathsf{E}[Y(x)|\mathcal{G}]}{\partial x}=\mathsf{E}\biggl[\frac{\partial Y(x)}{\partial x}\biggm|\mathcal{G}\biggr].
$$
To prove (*) it suffices to use Fubini's theorem and the definition of conditional expectation. |
What are the convergent sequences in the cofinite topology | Let $X$ be an infinite set and $(x_n)$ be a sequence in $X$ containing points that are all distinct (like the sequence $x_n = \frac{1}{n}$).
Let $x \in X$ Then $x_n \rightarrow x$ in the cofinite topology on $X$.
Proof: let $O = X\setminus F$, where $F$ is finite, be an arbitrary non-empty open set that contains $x$. All the $x_n$ are distinct by assumption and $F$ is finite, so only finitely many $x_n$ lie in $F$; let $N$ be the largest integer $n$ such that $x_n \in F$ (and $N=0$ if there is none). Then for all $n > N$, $x_n \notin F$, so $x_n \in O$ for all such $n$. As $O$ was arbitrary, $x_n \rightarrow x$.
Also this is recommended reading for studying convergence in the co-finite topology.
So any injective sequence in the cofinite topology converges to every point of $X$.
In fact as the above link showed:
the only convergent sequences are those that either
1. take any value only finitely many times. So $\forall n \left|\{ m: a_m = a_n\}\right| < \infty$.
2. have exactly one value that occurs infinitely many times: $\left|\{n: \left|\{m: a_n = a_m\}\right| = \infty \}\right| = 1$
Sequences from 1 include all injective sequences, the second class contains among others all eventually constant sequences.
Sequences of class 1 converge to any point of $X$, those of class 2 have the single infinitely-occurring value as their limit.
No other sequence is convergent. |
Limit of the inverse of a sum | Let $i\in[\![1,N]\!]$, then $f(i,j)>0$ for $j\neq i$ thus $\lim\limits_{t\rightarrow +\infty}e^{-tf(i,j)}=0$. We then have
$$ \sum_{j=1}^N e^{-tf(i,j)}=1+\sum_{j\neq i}e^{-tf(i,j)}\underset{t\rightarrow +\infty}{\longrightarrow}1 $$
Summing this for $i\in[\![1,N]\!]$ we get $$ \lim\limits_{t\rightarrow +\infty}\sum_{i=1}^N\left(\sum_{j=1}^N e^{-tf(i,j)}\right)^{-1}=N $$ |
Does the derivative of $\arctan(x) = \frac{1}{1+x^2} = \cos^2(y)$, where $x = \tan(y)$? | If $x = \tan(y)$, then
$$\cos^2(y) = \frac1{\sec^2(y)} = \frac1{1+\tan^2(y)} = \frac1{1+x^2}.$$ |
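To tie this to the derivative itself (a step of my own): since $x=\tan(y)$ gives $\frac{dx}{dy}=\sec^2(y)$, we get $\frac{dy}{dx}=\frac{1}{\sec^2(y)}=\cos^2(y)=\frac{1}{1+x^2}$, so yes, all three expressions agree.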
Example of an open set $G\subset \mathbb C$ such that $f$ is holomorphic on $G$ with $D\subset G$ but $\bar{D}\not\subset G$ | Take the open set to be $U = \{z \mid |z| < 1\}$ and the open disc to be $D = \{ z \mid |z - 1/2| < 1/2\}$. The point $1$ is in the closure of $D$ but not in $U$. |
Is quasi-isomorphism of $A_{\infty}$ algebras invertible | In Keller's "Introduction to $A_\infty$-algebras" this is essentially part b) of the last theorem of chapter 3 (directly above the header of chapter 4), as it says there that a morphism is a quasi-isomorphism if and only if it is a homotopy equivalence, i.e. one only needs to kill homotopy, and not adjoin new inverses or add roofs.
In particular you find a $g$ such that $f \circ g \sim id \sim g \circ f$, where $\sim$ denotes homotopy. Which is precisely what you asked for.
The hands-on proof might be tricky, but I think one can circumvent the calculations by using model categories. So if you read through Lefèvre's "Théorie de l'homotopie des $A_\infty$-algèbres", that just pops out immediately (assuming your ground structure is semisimple, which it clearly is if you are working over a field).
However that might need that $k$ is a field, but I think you should be happy with that.
This is very similar to a quasi-isomorphism over a field being the same as a homotopy equivalence. And also that is what is used in the proof, i.e. that every object and morphism has a representative. |
Weak Differentiability of Holder functions | I think that this post might help you. It is an interesting example and more than you need. |
Why a specified chart is smooth | Note that the first four coordinates of the mapping give a $4$-to-$1$ mapping near the origin. We have to use the last two coordinates to get a $1$-to-$1$ mapping. In particular, the inverse function theorem tells us that
$$\phi(x,y) = \big(x\sqrt{1-x^2-y^2},y\sqrt{1-x^2-y^2}\big) = (v,w)$$
has a smooth inverse in a neighborhood of the origin, even if we cannot write down the explicit formula. Then you get a local smooth inverse of your mapping by taking $\phi^{-1}\circ\pi$, where $\pi$ is projection onto the last two coordinates. Both these maps are smooth. |
Laplace transform of function's derivative | You should check again, but if, for instance, $f$ is a probability density function then the Laplace transform is well defined only if $e^{-st}f(t) \to 0$ for $t \to \infty$ which gives you line 3. |
Showing that $4b^2+4b = a^2+a$ has no non-zero integer solutions? | Multiply by $4$ and complete the square on both sides. This gives
$(4b+2)^2-4=(2a+1)^2-1$
$(4b+2)^2-(2a+1)^2=3$
What are the only two squares differing by exactly $3$? |
Number of combinations in summation. | The monomials come from $\prod_{i=1}^mr_i^{\alpha_{ij}}$ at a given $j$. Each $j$ can give you a different monomial. There are $p$ choices of $j$. There are many more possible monomials, but given your $r$'s and $\alpha$'s you get only one per value of $j$. |
Weird and difficult integral: $\int\sqrt{1+\frac{1}{3x}} \, dx$ | Method 1: Let's try integration by parts
\begin{eqnarray*}
\int\sqrt{u^2+1}\,du&=&\int1\cdot\sqrt{u^2+1}\,du=u\sqrt{u^2+1}-\int u\frac{2u}{2\sqrt{u^2+1}}\,du=\\
&=&u\sqrt{u^2+1}-\int \frac{u^2+1-1}{\sqrt{u^2+1}}\,du=
u\sqrt{u^2+1}-\int\left(\sqrt{u^2+1}-\frac{1}{\sqrt{u^2+1}}\right)\,du=\\
&=&u\sqrt{u^2+1}-\underbrace{\int\sqrt{u^2+1}\,du}_{\text{same}}+\int\frac{1}{\sqrt{u^2+1}}\,du.
\end{eqnarray*}
The unknown integral can be solved now as
$$
2\int\sqrt{u^2+1}\,du=u\sqrt{u^2+1}+\int\frac{1}{\sqrt{u^2+1}}\,du
$$
which gives you the answer if you know how to calculate the last integral (quite standard). If you don't then you can try
Method 2: The standard substitution for "square root of the square plus constant"
$$
t=u+\sqrt{u^2+1}
$$
that gives after some algebra
$$
u=\frac{t^2-1}{2t}=\frac{t}{2}-\frac{1}{2t},\quad du=\frac{t^2+1}{2t^2}\,dt,\quad \sqrt{u^2+1}=t-u=\frac{t^2+1}{2t}
$$
$$
\int\sqrt{u^2+1}\,du=\int\frac{(t^2+1)^2}{4t^3}\,dt=\int\frac{t^4+2t^2+1}{4t^3}\,dt=\frac{1}{4}\int\left(t+\frac{2}{t}+\frac{1}{t^3}\right)\,dt
$$
which I believe you can manage yourself.
P.S. When doing the final substitution back to $u$ it is beneficial to note that
$$
\frac{1}{t}=\frac{1}{\sqrt{u^2+1}+u}=\frac{1}{\sqrt{u^2+1}+u}\cdot\frac{\sqrt{u^2+1}-u}{\sqrt{u^2+1}-u}=\frac{\sqrt{u^2+1}-u}{u^2+1-u^2}=\sqrt{u^2+1}-u.
$$
P.P.S. Can you now calculate the last integral in the method 1 by the method 2? |
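For the record, here is that last integral of Method 1 done with the Method 2 substitution (my own check): with $t=u+\sqrt{u^2+1}$ we have $dt=\left(1+\frac{u}{\sqrt{u^2+1}}\right)du=\frac{t}{\sqrt{u^2+1}}\,du$, hence
$$
\int\frac{du}{\sqrt{u^2+1}}=\int\frac{dt}{t}=\ln t+C=\ln\left(u+\sqrt{u^2+1}\right)+C.
$$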
Prove that $(1+z+z^2)(1+z+z^3)(1+z+z^4)=z(1+z)$ | If you actually expand $(1+z+z^2)(1+z+z^3)(1+z+z^4) - z(1+z)$ out, you will get:
$$z^9+z^8+2z^7+3z^6+4z^5+4z^4+4z^3+3z^2+2z+1$$
$$=5z^4+5z^3+5z^2+5z+5 \quad \text{(why?)}$$
$$=5 \left( \frac{z^5-1}{z-1} \right)$$
$$=0$$
Therefore $(1+z+z^2)(1+z+z^3)(1+z+z^4) = z(1+z)$.
In addition, you can use your idea of making the substitution $t=1+z$, so that way you only have $8$ things to multiply. |
Bonferroni’s Principle discussed in Mining of Massive Data Sets book | We are studying $10^9$ people.
On any day, we should expect $1\%$ of them to visit a hotel, which would be $10^9 \cdot 10^{-2}=10^7$ of them deciding to visit a hotel.
Each hotel can hold $100=10^2$ people, so to handle the $10^7$ people we need $\frac{10^7}{10^2}=10^5$ hotels in the market.
Now for your second question, the probability that two particular people would decide to visit some hotel on the same day, independently, is $(10^{-2})^2=10^{-4}$. Remember that we have $10^5$ hotels, hence the chance of visiting the same hotel on the same day would be $(10^{-5})(10^{-4})=10^{-9}$. The probability that they will visit the same hotel on $2$ different days would then be $(10^{-9})^{2}=10^{-18}$. |
$k$ points of contact for percolation | He establishes a result that implies this in Lemma (7.9):
If $\theta(p) > 0$ and $\eta > 0$, there exists integers $m = m(p, \eta)$ and $n = n(p, \eta)$ such that $2m < n$ and
$$P_p(B(m) \leftrightarrow K(m,n) \, \textrm{in}\, B(n)) > 1 - \eta$$
Is there an issue you are having with the proof of that lemma? |
The module $M/M_n$ is of finite length | Use the short exact sequence $0\rightarrow M_1/M_2 \rightarrow M/M_2 \rightarrow M/M_1 \rightarrow 0$, where all homomorphisms are naturally defined. |
An inequality of real numbers | Not the neatest completion, but you can consider the sign of $abc$.
If $abc \geq 0$, then the required result follows.
If $abc < 0$, it suffices to show that $(1+a)(1+b)(1+c) \geq 0$. |
Is this inversion correct? | $1+\sqrt x\cos y=e^X\implies \sqrt x\cos y=e^X-1\qquad(1)$
$Y=2\sqrt x\,e^X\sin y\implies \sqrt x\sin y=\frac Y{2e^X}\qquad(2)$
Squaring & adding we get $x(\cos^2y+\sin^2y)=(e^X-1)^2+\left(\frac Y{2e^X}\right)^2$,
or $x=(e^X-1)^2+\frac {Y^2}{4e^{2X}}$
Divide $(2)$ by $(1)$ (assuming $x\ne 0$ and $e^X\ne1$): $$\tan y=\frac{\frac Y{2e^X}}{e^X-1}=\frac Y{2e^X(e^X-1)}$$ |
Proving associativity of matrix multiplication | Your proof is fine.
We can change the order of summation as the sum is finite. When we say that multiplication is associative, we should specify multiplication of which objects, e.g. multiplication of real or complex numbers. |
Proving $\sum_{k=0}^n(-1)^k\binom nk=0$ | Here is an alternate way:
$$0=(1-1)^n=\sum_{k=0}^{n} (-1)^{k} \binom{n}{k}$$
by the binomial theorem. |
Converting definite integral from one form to another $\int_{\pi/4}^{\pi/2}(2\csc(x))^{17}dx$ | You need to do the following substitution : $$\csc (x)= \frac{e^u+e^{-u}}{2}$$
The integral will convert to :
$$(A) \int_{0}^{\ln(1+\sqrt2)}2(e^u+e^{-u})^{16}du$$ |
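A sketch of where $(A)$ comes from (my own verification of the substitution): with $\csc(x)=\cosh u$ we have $\cot(x)=\sqrt{\csc^2(x)-1}=\sinh u$ on $[\pi/4,\pi/2]$, and differentiating gives $-\csc(x)\cot(x)\,dx=\sinh u\,du$, i.e. $dx=-\frac{du}{\cosh u}$. The limits $x=\pi/2$ and $x=\pi/4$ map to $u=0$ and $u=\ln(1+\sqrt2)$, so
$$\int_{\pi/4}^{\pi/2}(2\csc(x))^{17}dx=\int_0^{\ln(1+\sqrt2)}\frac{(e^u+e^{-u})^{17}}{\cosh u}\,du=\int_0^{\ln(1+\sqrt2)}2(e^u+e^{-u})^{16}\,du.$$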
Fractional Sobolev spaces are Banach spaces | Ok, I don't need the answer anymore. I found it in Demengel's Functional spaces for the theory of elliptic partial differential equations. Thank you anyway! :) |
Has anyone discovered a convex space-filling 15-faced polyhedron? | According to Olaf Delgado-Friedrichs and Michael O’Keeffe, "Isohedral simple tilings: binodal and by tiles with <16 faces" --
"For simple polyhedra with 14, 15 and 16 faces, there are respectively 10, 65 and
434 that fill space and a total of 23, 136 and 710 distinct tilings."
Not all of these are convex, though.
After reading through a lot of papers by Goldberg, I wrote the following for MathWorld's Space-filling polyhedron
In the period 1974-1980, Michael Goldberg attempted to exhaustively catalog space-filling polyhedra. According to Goldberg, there are 27 distinct space-filling hexahedra, covering all of the 7 hexahedra except the pentagonal pyramid. Of the 34 heptahedra, 16 are space-fillers, which can fill space in at least 56 distinct ways. Octahedra can fill space in at least 49 different ways. In pre-1980 papers, there are forty 11-hedra, sixteen dodecahedra, four 13-hedra, eight 14-hedra, no 15-hedra, one 16-hedron originally discovered by Föppl (Grünbaum and Shephard 1980; Wells 1991, p. 234), two 17-hedra, one 18-hedron, six icosahedra, two 21-hedra, five 22-hedra, two 23-hedra, one 24-hedron, and a believed maximal 26-hedron. In 1980, P. Engel (Wells 1991, pp. 234-235) then found a total of 172 more space-fillers of 17 to 38 faces, and more space-fillers have been found subsequently. P. Schmitt discovered a nonconvex aperiodic polyhedral space-filler around 1990, and a convex polyhedron known as the Schmitt-Conway biprism which fills space only aperiodically was found by J. H. Conway in 1993 (Eppstein). A modern survey would be welcome.
So far, I know of no modern survey. I'll believe all these tilers when I can see them in an interactive 3D program. Goldberg's results need to be cataloged, and Engel's results added in, and then things like this 13-facer can be considered. |
Theorem 41.7 in Munkres Topology | I think this is a subtle point. Let $W_x$ be one of the sets in the collection $\{W_\alpha\}$ such that $x \in W_x$. Let $W_\alpha$ be another set in the same collection. For simplicity of notation, let $S_\alpha$ be the support of $\psi_\alpha$. The subtlety is that $S_\alpha$ is a closed set: it is the smallest closed set containing the open set $O_\alpha = \psi_\alpha^{-1}(\mathbb{R}\setminus\{0\})$.
We have the following relationships:
$$W_\alpha \subset \overline{W_\alpha} \subset O_\alpha \subset V_\alpha.$$
Suppose that $y \in W_x \cap S_\alpha$. If $y \in O_\alpha$, then $y \in V_x \cap V_\alpha$. If $y$ belongs to the frontier (sometimes called the boundary) of $O_\alpha$, then since $W_x$ is an open neighborhood of $y$ it must intersect both $O_\alpha$ and its complement, and so $W_x \cap O_\alpha \neq \emptyset$; therefore $V_x \cap V_\alpha \neq \emptyset$ in this second case as well.
Since the collection $\{V_\alpha\}$ is locally finite, there can only be finitely many such pairs of indices $x$ and $\alpha$ such that $W_x \cap S_\alpha \neq \emptyset$. |
If $(e^{\alpha(t)})'=\alpha'(t)e^{\alpha(t)}$ then $\alpha(t), \alpha'(t)$ commute? | Here's a counterexample. Take
$$\alpha(t)=\gamma\begin{pmatrix}-1&t\\0&1\end{pmatrix}\in\mathbb C^{2\times 2}$$
where $\gamma$ is a non-zero solution to $\gamma e^\gamma=\sinh\gamma.$ To be concrete take $\gamma=\tfrac12(W_1(-e^{-1})+1)$; here $W_1$ is a particular non-principal branch of Lambert's W function. According to Wolfram Alpha $\gamma\approx -1.04 + 3.73 i.$
$\alpha(t)$ can be thought of as living in $M_4(\mathbb R)$ by considering $\mathbb C^2$ as a real vector space.
By direct computation it does not commute with $\alpha'(t).$ I will show that $(e^{\alpha(t)})'=\alpha'(t)e^{\alpha(t)}.$
Since $\alpha(t)^2=\gamma^2I$ where $I$ is the identity matrix, and the power series for $\cosh(z)$ and $z^{-1}\sinh(z)$ only use even powers of $z,$
\begin{align}
e^{\alpha(t)}
&=\cosh(\alpha(t))+\sinh(\alpha(t))\\
&=\cosh(\alpha(t))+\alpha(t)\alpha(t)^{-1}\sinh(\alpha(t))\\
&=\cosh(\gamma)I+\alpha(t)\gamma^{-1}\sinh(\gamma)\\
&=\cosh(\gamma)I+\begin{pmatrix}-1&t\\0&1\end{pmatrix}\sinh(\gamma).\\
\end{align}
Hence
\begin{align}
\alpha(t)'e^{\alpha(t)}
&=\gamma\begin{pmatrix}0&1\\0&0\end{pmatrix}\Bigl(\cosh(\gamma)I+\begin{pmatrix}-1&t\\0&1\end{pmatrix}\sinh(\gamma)\Bigr)\\
&=\gamma\begin{pmatrix}0&1\\0&0\end{pmatrix}\cosh(\gamma)+\gamma\begin{pmatrix}0&1\\0&0\end{pmatrix}\sinh(\gamma)\\
&=\gamma\begin{pmatrix}0&1\\0&0\end{pmatrix}e^\gamma\\
&=(e^{\alpha(t)})'
\end{align}
as required. |
How to write dihedral group in cycle notation? | The dihedral group $D_n$ is a subgroup of $S_n$ for all $n\ge 3$, see also here, so we can write the elements in cycle notation. More precisely, $D_n$ is generated by an $n$-cycle $\sigma$ and a $2$-cycle $\tau$ satisfying the conditions $\sigma^n=\tau^2=1$, $\sigma\tau=\tau\sigma^{n-1}$. This gives a systematic way how to write the elements of $D_n$ in cycle notation. For example, for $n=3$, take $\sigma=(123)$ and $\tau=(23)$. Then $D_3=\{id,(12),(13),(23),(123),(132) \}$. |
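For instance (a worked example of my own), for $n=4$ take $\sigma=(1234)$ and $\tau=(13)$. Then $D_4=\{id,(1234),(13)(24),(1432),(13),(24),(12)(34),(14)(23)\}$, where the first four elements are the rotations $\sigma^k$ and the last four are the reflections $\tau,\sigma^2\tau,\sigma^3\tau,\sigma\tau$.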
How to construct a joint PDF table? | \begin{align*}
P(K = k | N = n) = \begin{cases}
\frac{1}{2n} & k \in \{1, 2, \ldots, 2n \}\\
0 & \text{ otherwise }
\end{cases}
\end{align*}
Hence, \begin{align*}
P(K = k, N = n) &= P(N = n) P(K = k | N = n)\\
\\
&= \begin{cases}
\frac{1}{4\cdot 2^n} & n \in \{1,2, \ldots \}, k \in \{1,2, \ldots, 2n \}\\
0 & \text{ otherwise }
\end{cases}
\end{align*}
The above is the PMF table!
The table has an infinite number of entries, so you can only draw a few of them.
Verify that \begin{align*}
\sum_{n = 1}^\infty \sum_{k = 1}^{2n} \frac{1}{4\cdot 2^n} = 1
\end{align*}
Some clarification in response to comments:
The joint PMF $P_{N, K}(n, k)$ sums to one when summed over all possible values of $n$ and $k$
while the conditional PMF $P_{K|N}(k | n)$ sums to one when summed over all possible values of $k$ given $n$, i.e. $$\sum_{k = 1}^{2n} P_{K|N}(k | n) = 1$$ |
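For the verification (my own arithmetic):
$$\sum_{n = 1}^\infty \sum_{k = 1}^{2n} \frac{1}{4\cdot 2^n}=\sum_{n = 1}^\infty \frac{2n}{4\cdot 2^n}=\frac12\sum_{n = 1}^\infty \frac{n}{2^n}=\frac12\cdot 2=1,$$
using $\sum_{n\geq 1} n x^n=\frac{x}{(1-x)^2}$ at $x=\frac12$.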
Basic geometry question and proof that $dT/d\Theta = 1 + T^2$ | Since $d\theta $ is small, he is approximating the side of the triangle with the length of the curve which is $L d\theta$ when $\theta$ is in radians. |
I need some help with this derivative problem | Multiply by $\displaystyle\lim_{x\to2}(x-2)$ to get
$$\lim_{x\to2}\Big(f(x)+x\Big)=0$$
$$\implies f(2)=-2$$
Can you use this in your original equation, and get your answer now?
NOTE: If you edit and provide context and/or your work, I will provide you the whole solution. This is the policy of MathSE. |
Finding alternate form of entire function? | $F(z)\ne1/z$ is the same as $1-zf(z)\ne0$; $f(z)$ entire implies $1-zf(z)$ entire; so we have an entire function $1-zf(z)$ which is never zero. I take it you know this implies $1-zf(z)=e^{g(z)}$ for some entire $g$, so you get $$f(z)={1-e^{g(z)}\over z}$$ Now $g(0)$ had better be $2\pi in$ for some integer $n$, lest this formula force an unwanted pole at zero on $f$, so $g(z)=2\pi in+zh(z)$ for some entire $h$, and away we go. |
A space whose dual is $F^m$ | Let $e_1, \dots, e_m$ a basis of $F^m$.
Let $f:F^m \to F$ be a linear form, that is an element of the dual space.
Then for any $v= a_1e_1 + \dots + a_me_m$ we have $f(v)= f(a_1e_1 +\dots + a_me_m) = a_1f(e_1) +\dots + a_mf(e_m)$. Thus $f$ is uniquely determined by
$f(e_1), \dots, f(e_m)$, that is, by $m$ elements of $F$, viz. an element of $F^m$.
Conversely for every $(b_1, \dots, b_m) \in F^m$ we can define an associated linear form via defining $g(e_i)=b_i$ and thus $g(a_1e_1 +\dots + a_me_m) = a_1b_1 +\dots + a_mb_m$.
It follows the elements of the dual space are in bijection with the elements of $F^m$.
It is not hard to show that this bijection is linear, and thus an isomorphism.
Note: the bijection depends on the basis. This is why one says there is no canonical isomorphism, but there still is an isomorphism.
However the spaces are not literally equal. The one is made up of $m$-tuples of elements of $F$, the other of linear maps from $F^m$ to $F$, that is, certain subsets of the Cartesian product $F^m \times F$.
However, when somebody says "the dual of $V$ is {something}", then usually this {something} is just isomorphic to the dual not literally the sets of linear maps from $V$ to $F$.
For the supplementary question that motivated the question:
Suppose the function $\Gamma$ is injective. This means that the only form that vanishes on all of $v_1, \dots , v_m$ is the form that maps the whole space to $0$. But this means that $v_1, \dots, v_m$ must span $V$, for if not then we could define a form by saying it is $0$ on the span of $v_1, \dots, v_m$ and $1$ for $w_1, \dots, w_n$ where $w_1, \dots, w_n$ is an independent set that completes $v_1, \dots, v_m$ to a generating set of the space. |
what is condition for equations to be incompatible in system of linear equations? | The equations are compatible, because $\det A = 100 \neq 0$, so you can get an inverse for $A$:
$$
A = \begin{bmatrix}
15 & -5 & 0 \\
4 & 2 & 3 \\
7 & 1 & 5 \\
\end{bmatrix}\\
A^{-1} = \begin{bmatrix}
\frac{7}{100} & \frac{1}{4} & \frac{-3}{20} \\
\frac{1}{100} & \frac{3}{4} & \frac{-9}{20} \\
\frac{-1}{10} & \frac{-1}{2} & \frac{1}{2} \\
\end{bmatrix}\\
$$
Hence:
$$
AX=B\\
A^{-1}AX=A^{-1}B\\
IX = A^{-1}B
$$
And now calculating the matrix multiplication, you get:
$$
X = \begin{bmatrix}
\frac{7}{100} & \frac{1}{4} & \frac{-3}{20} \\
\frac{1}{100} & \frac{3}{4} & \frac{-9}{20} \\
\frac{-1}{10} & \frac{-1}{2} & \frac{1}{2} \\
\end{bmatrix}
\begin{bmatrix}
20 \\
9 \\
10 \\
\end{bmatrix} =
\begin{bmatrix}
\frac{43}{20} \\
\frac{49}{20} \\
\frac{-3}{2} \\
\end{bmatrix}
$$
So it is compatible and has a solution! |
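As a quick sanity check (my own arithmetic), multiplying back recovers $B$:
$$
AX = \begin{bmatrix}
15\cdot\frac{43}{20}-5\cdot\frac{49}{20}+0 \\
4\cdot\frac{43}{20}+2\cdot\frac{49}{20}+3\cdot\left(-\frac{3}{2}\right) \\
7\cdot\frac{43}{20}+1\cdot\frac{49}{20}+5\cdot\left(-\frac{3}{2}\right) \\
\end{bmatrix} =
\begin{bmatrix}
20 \\
9 \\
10 \\
\end{bmatrix} = B
$$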
Find the distribution of $Y$, | Hint: Since $X$ is uniform on $(0,1)$,
$$\Pr(Y=y) =\int_0^1 \binom{n}{y} x^y(1-x)^{n-y}\,dx.$$
The $\binom{n}{y}$ comes out, but we still have an unpleasant integral to do. One can get at it by establishing a reduction formula. But luckily, it is a well-known integral, and has already been done for you.
When the smoke clears, you will find that a "miracle" has occurred, and ultimately we end up with a discrete uniform on $[0,n]$. |
How do we compute Aut(Z2 x Z2)? | Note that $ℤ_2 \times ℤ_2=\langle(0,1),(1,0)\rangle=\langle(1,0),(1,1)\rangle=\langle(1,1),(0,1)\rangle$, and these are the only generating sets with the minimum number of elements (analogous to bases of a vector space).
Now $f$ is an automorphism of $ℤ_2 \times ℤ_2$ iff $f$ sends any of these three basis-sets to another basis-set. We have $3$ choices for the image basis-set, and for each such choice there are $2$ ways to assign the individual elements, i.e. there are $6$ automorphisms. (For example, consider $f(\{(0,1),(1,1)\})=\{(0,1),(1,0)\}$: the right-hand side can be chosen in $3$ ways, and each case splits into $2$ according to whether $f((0,1))=(0,1)$ or $(1,0)$.)
So $|\text{Aut}(ℤ_2 \times ℤ_2)|=6$.
Also note that the maximum order of an element of this group is $3$, so this group of order $6$ is not cyclic, and hence
$\text{Aut}(ℤ_2 \times ℤ_2) \cong S_3$ |
Topology generated by the circles on the plane with their centers on a line | No, the topology cannot be discrete: Note that any circle centered on the $x$-axis that passes through a point $(x, y)$ also passes through $(x, -y)$, and if $(x, y)$ is not on the $x$-axis (that is, if $y \neq 0$) then these points are distinct.
Your argument shows, however, that each of the sets $\{(x, y), (x, -y)\}$ is open in the topology $T$ that these circles determine.
So the sets in $T$ are precisely the arbitrary unions of such sets: for $y = 0$ these are singletons, and for $y \neq 0$ they are pairs of points. |
Why is choosing from the remainder incorrect in solving this combination question? | You are double counting some teams. Consider two volleyball players $V_1$ and $V_2$: you can choose $V_1$ as part of the $\binom{5}{1}$ and $V_2$ as part of the $\binom{12}{1}$, but equally $V_2$ can be chosen as part of the $\binom{5}{1}$ and $V_1$ as part of the $\binom{12}{1}$. The same team is therefore counted twice. |
Condition for a,b in common density $ax+by$ | As pointed out in the comments, $0<x, y< 1$ usually means that both $x$ and $y$ are in the open interval $(0, 1)$. With these domains, you get
$$
\begin{align}
\int_0^1\int_0^1(ax+by)\,dx\,dy&=\int_0^1\int_0^1ax\,dx\,dy+\int_0^1\int_0^1by\,dx\,dy\\
&=a\left[\frac{x^2}{2}\right]_0^1+b\left[\frac{y^2}{2}\right]_0^1\\
&=\frac{a+b}{2}
\end{align}
$$
Setting this equal to $1$, the condition that $a$ and $b$ should satisfy is $a+b=2$ (together with $a,b\geq0$, so that $ax+by\geq0$ on the square and the function is a genuine density).
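A short SymPy sketch confirming the normalization condition:

```python
from sympy import symbols, integrate, solve, Eq

x, y, a, b = symbols('x y a b')
total = integrate(a*x + b*y, (x, 0, 1), (y, 0, 1))
print(total)                   # a/2 + b/2
print(solve(Eq(total, 1), a))  # [2 - b], i.e. a + b = 2
```

SymPy agrees that normalization forces $a+b=2$. |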
Continued Fractions to calculate at-bats | One of the essential facts about continued fractions is that their convergents give the best possible rational approximations to irrational numbers; also, the best possible approximations with small denominators to a rational number with a large denominator. The very best approximations are found by truncating the continued fraction just before a particularly large partial quotient.
So, the batting average is
$$0.3085443=\frac{3085443}{10000000}\ .$$
If we take this literally, the player will have had $3085443$ successes from $10000000$ times at bat (or perhaps even $6170886$ from $20000000$, etc). This doesn't seem likely. So we assume that the number of times at bat was actually smaller, and the given decimal is a rounded number. We use continued fractions to work out the details. We can calculate
$$0.3085443=\frac1{3+\displaystyle\frac1{4+\displaystyle\frac1{6+\displaystyle\frac1{1+\displaystyle\frac1{2+\displaystyle\frac1{1+\displaystyle\frac1{1+\displaystyle\frac1{658+\cdots}}}}}}}}$$
The first convergent is $\frac13$, meaning one success from three times at bat. This is clearly not right as the average would have been rounded to $0.3333333$ not $0.3085443$. The next convergent is
$$\frac1{3+\displaystyle\frac1{4}}=\frac4{13}=0.3076923\cdots$$
which is wrong for the same reason. Keep trying until you find one that works. I haven't gone any further but my guess would be that the answer is
$$\frac1{3+\displaystyle\frac1{4+\displaystyle\frac1{6+\displaystyle\frac1{1+\displaystyle\frac1{2+\displaystyle\frac1{1+\displaystyle\frac1{1}}}}}}}$$
found by truncating the continued fraction just before a particularly large partial quotient.
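Carrying out the computation (a sketch of my own), the convergents can be generated with the standard recurrence $p_k=a_kp_{k-1}+p_{k-2}$, $q_k=a_kq_{k-1}+q_{k-2}$, stopping at the first one that rounds back to the published average:

```python
from fractions import Fraction

r = Fraction(3085443, 10**7)

# Partial quotients of the continued fraction of r.
a, x = [], r
while x:
    q = x.numerator // x.denominator
    a.append(q)
    x = 0 if x == q else 1 / (x - q)

# Convergents p/q, stopping at the first one that rounds to 0.3085443.
p0, p1, q0, q1 = 1, a[0], 0, 1
for ak in a[1:]:
    p0, p1 = p1, ak * p1 + p0
    q0, q1 = q1, ak * q1 + q0
    if round(p1 / q1, 7) == 0.3085443:
        print(p1, q1)   # 195 632
        break
```

This confirms the guess: truncating just before the partial quotient $658$ gives $\frac{195}{632}=0.30854430\ldots$, which indeed rounds to $0.3085443$, so a plausible record is $195$ hits in $632$ at-bats. |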
A Game of Coin and Die | The first flip and the toss of the die are independent events, as are the first and second flips in the case where the first flip is tails.
So use multiplication:
P(head on the first flip and 6 on the toss) = P(head on the first flip) $\cdot$ P(6 on the toss) $=\frac{1}{2}\cdot\frac{1}{6}=\frac{1}{12}$
P(tails on both flips) $=\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{4}$
You win the game if either one of these two (mutually exclusive) events happens, so use addition:
P(winning the game) = P(head on the first flip and 6 on the toss) + P(tails on both flips) $=\frac{1}{12}+\frac{1}{4}=\frac{1}{3}$
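A Monte Carlo sketch of the game (the rules as described: heads followed by a six wins, or tails followed by tails wins):

```python
import random

trials, wins = 10**6, 0
for _ in range(trials):
    if random.random() < 0.5:            # heads: toss the die
        wins += random.randint(1, 6) == 6
    else:                                # tails: flip again
        wins += random.random() < 0.5    # win on a second tail
print(wins / trials)                     # ~0.333
```

The empirical frequency settles near $0.333$, matching the exact answer $\frac13$. |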
How can we visualize that $2^n$ gives the number of ways binary digits of length n? | A bit has value $1$ or $0$, so $2$ values. You can prove the general count by induction: suppose that with $k$ bits there are $2^k$ strings, and add one more bit. Then you get those $2^k$ strings for each of the $2$ values of the new bit, so $2\cdot2^k=2^{k+1}$ strings in total.
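A two-line sketch that makes the count visible for small $n$:

```python
from itertools import product

for n in range(1, 6):
    print(n, len(list(product('01', repeat=n))))   # 2, 4, 8, 16, 32 = 2**n
```

The counts double with each extra bit, which is precisely the induction step. |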
Finite at every point but unbounded on every interval | Yes; the Conway base 13 function is an example of such a function. It even satisfies the intermediate value property!
Granted, the function itself is defined only on the interval $(0,1)$, but by agreeing to send integers to themselves and taking translates of the function we can easily extend it to the entire real line.
One can also just precompose this function with any homeomorphism of $\Bbb R$ with $(0,1)$. |
Example of a continuous map having a connected codomain but a disconnected domain. | Your example is a perfectly good one, and it absolutely does not violate the generalized IVT. The IVT, in this form, says that the continuous image of a connected set is connected. It says nothing about the continuous preimage of a connected set, which, as your example shows, can be disconnected. |
Motivation of eigenvalues and eigenvectors | There’s a very simple geometric question that can motivate introducing eigenvectors and eigenvalues:
Are there any lines through the origin that are mapped to themselves
by the linear transformation $T$?
Every vector on such a line gets mapped to some scalar multiple of itself and since the transformation is linear, this scalar must be the same for every vector on the line. So, finding invariant lines becomes a problem of finding vectors $\mathbf v$ and scalars $\lambda\ne0$ such that $T(\mathbf v)=\lambda\mathbf v$. This equation of course has the trivial solution $\mathbf v=0$, but that doesn’t generate a line, so we sharpen this condition up a bit and require that $\mathbf v\ne0$.
From this simple beginning, we can generalize and expand the question. Allowing $\lambda=0$, for instance, finds lines that are collapsed to the origin by $T$. We can also ask if there are higher-dimensional subspaces that are invariant under $T$, and so on.
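As a small numerical illustration (the matrix is an arbitrary example of mine, not from the question), NumPy's `eig` solves exactly this invariant-line problem:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
vals, vecs = np.linalg.eig(A)
print(vals)                             # [3. 2.]
v = vecs[:, 0]                          # a vector spanning an invariant line
print(np.allclose(A @ v, vals[0] * v))  # True: T(v) = lambda * v
```

The printed eigenvalues and eigenvector columns are exactly the $\lambda$ and $\mathbf v$ of the discussion above. |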
Quotient of Ideals as Vector space | Notice that $\mathfrak{m}_{i+1}\mathfrak{a_i} = \mathfrak{a}_{i+1} $ so that $\mathfrak{m}_{i+1} \frac{\mathfrak{a}_i}{\mathfrak{a}_{i+1}} = 0$.
Whenever you have an $R$ - module, $M$, that is annihilated by an ideal $I$ of $R$, you can view $M$ as an $\frac{R}{I}$ module where the scalar multiplication is $(r + I)m = rm$ for all $r+I \in \frac{R}{I}$, $m \in M$. It is straightforward to check that this is a well-defined scalar multiplication.
Therefore, since $\frac{\mathfrak{a}_i}{\mathfrak{a}_{i+1}}$ is annihilated by $\mathfrak{m}_{i+1}$, you can view $\frac{\mathfrak{a}_i}{\mathfrak{a}_{i+1}}$ as a $\frac{R}{\mathfrak{m}_{i+1}}$-module. Since $\mathfrak{m}_{i+1}$ is a maximal ideal, $\frac{R}{\mathfrak{m}_{i+1}}$ is a field, so this module is a vector space.
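For a concrete instance (my own example, under the assumption that the $\mathfrak{m}$'s are maximal): take $R=\mathbb{Z}$, $\mathfrak{m}_{i+1}=(p)$ for a prime $p$, $\mathfrak{a}_i=(p^i)$ and $\mathfrak{a}_{i+1}=(p^{i+1})$. Then $\mathfrak{m}_{i+1}\mathfrak{a}_i=(p^{i+1})=\mathfrak{a}_{i+1}$, and $\frac{\mathfrak{a}_i}{\mathfrak{a}_{i+1}}=p^i\mathbb{Z}/p^{i+1}\mathbb{Z}$ is a one-dimensional vector space over the field $\mathbb{Z}/p\mathbb{Z}$. |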
Finding the general term of two related recurrence relations | Assuming $h\neq0$ is constant, from the first equation we get: $b_n=\frac1h\left(a_{n+1}-a_n\right)$. Substituting into the second, we have:
$$b_{n+1}=\frac1h\left(a_{n+2}-a_{n+1}\right)=\frac1h\left(a_{n+1}-a_n\right)-ha_n \Rightarrow\\0= a_{n+2}-a_{n+1}-a_{n+1}+a_n+h^2a_n=a_{n+2}-2a_{n+1}+(h^2+1)a_n\Rightarrow\\
a_{n}-2a_{n-1}+(h^2+1)a_{n-2}=0, \hspace{10pt} a_0=0, a_1=0+h=h$$
Now, following my explanation here, we will solve the last recurrence relation:
The polynomial is $t^2-2t+h^2+1$, hence $t_{1,2}=1\pm ih$.
So $a_n=\alpha(1+ih)^n+\beta(1-ih)^n$. Using the values of $a_0,a_1$, we can find that $\alpha=-\frac i2$ and $\beta=\frac i2$.
So $$\begin{array}{l}
a_n=\frac i2\left((1-ih)^n-(1+ih)^n\right)\\
\begin{align*}b_n=&\frac1h\left(a_{n+1}-a_n\right)=\frac i{2h}\left((1-ih)^{n+1}-(1+ih)^{n+1}-(1-ih)^n+(1+ih)^n\right)\\
=&\frac i{2h}\left(-ih(1-ih)^n-ih(1+ih)^n\right)=\frac12\left((1-ih)^n+(1+ih)^n\right)\end{align*}
\end{array}$$
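A quick numeric sketch checking the closed forms against the system (reading off from the derivation that $a_{n+1}=a_n+hb_n$ and $b_{n+1}=b_n-ha_n$, with $a_0=0$, $b_0=1$):

```python
h = 0.3  # an arbitrary nonzero test value
a_closed = lambda n: (1j / 2) * ((1 - 1j*h)**n - (1 + 1j*h)**n)
b_closed = lambda n: ((1 - 1j*h)**n + (1 + 1j*h)**n) / 2

a, b = 0.0, 1.0
for n in range(20):
    assert abs(a - a_closed(n)) < 1e-9 and abs(b - b_closed(n)) < 1e-9
    a, b = a + h*b, b - h*a   # simultaneous update: old a, b on the right
print("closed forms match the recurrences")
```

Both closed forms reproduce the recursion exactly, up to floating-point error. |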
Describe the domain in the plane $\mathbb{R}^2$ | Yes, you are correct. More generally, if $|a|>1$ then
$$\left|\frac{\bar{a}z-1}{z-a}\right|<1\Leftrightarrow |z|<1,$$
because
$$\begin{align}
\left|\frac{\bar{a}z-1}{z-a}\right|<1 &\Leftrightarrow
|\bar{a}z-1|^2<|z-a|^2\Leftrightarrow |a|^2|z|^2-2\text{Re}(\bar{a}z)+1<|z|^2-2\text{Re}(\bar{a}z)+|a|^2\\&\Leftrightarrow (1-|a|^2)(1-|z|^2)<0 \Leftrightarrow |z|<1.\end{align}$$
In a similar way, if $|a|<1$ then
$$\left|\frac{\bar{a}z-1}{z-a}\right|<1\Leftrightarrow |z|>1.$$
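A randomized sanity check of the equivalence (a sketch; $a=2+i$ is an arbitrary choice with $|a|>1$):

```python
import random

a = 2 + 1j
random.seed(0)
for _ in range(100_000):
    z = complex(random.uniform(-3, 3), random.uniform(-3, 3))
    if z == a:
        continue                # avoid the pole of the Moebius map
    lhs = abs((a.conjugate() * z - 1) / (z - a)) < 1
    assert lhs == (abs(z) < 1)
print("equivalence holds on all samples")
```

No sample violates the equivalence, just as the algebra above predicts. |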
Tangent space for product of submanifolds | From $X$ we have the two projections $\pi_i : X \rightarrow X_i$, and these induce linear maps $d\pi_i : T_{(p_1,p_2)}X \rightarrow T_{p_i}X_i$. But we also have canonical inclusions $\iota_1 = \mathrm{id} \times p_2 : X_1 \rightarrow X$ and $\iota_2 = p_1 \times \mathrm{id} : X_2 \rightarrow X$, which induce $d\iota_i: T_{p_i}X_i \rightarrow T_{(p_1,p_2)}X$.
Now, we can define the following:
$$\begin{align*}
\pi: T_{(p_1,p_2)}X &\rightarrow T_{p_1}X_1 \oplus T_{p_2}X_2 \\
v &\mapsto (d\pi_1(v),d\pi_2(v))
\end{align*}
$$
And
$$\begin{align*}
\iota: T_{p_1}X_1 \oplus T_{p_2}X_2& \rightarrow T_{(p_1,p_2)}X \\
(v,w) &\mapsto d\iota_1(v)+d\iota_2(w)
\end{align*}
$$
Because $\pi_i \circ \iota_i = \mathrm{id}$, while $\pi_1 \circ \iota_2$ is constantly $p_1$ and, analogously, $\pi_2 \circ \iota_1$ is constantly $p_2$ (so their differentials vanish), we have:
$$\pi \circ \iota(v,w) = (d\pi_1(d\iota_1(v)+d\iota_2(w)),d\pi_2(d\iota_1(v)+d\iota_2(w)))= (v,w)$$
$$\iota \circ \pi(v) = d \iota_1(d\pi_1(v))+d \iota_2(d\pi_2(v)) = v, $$
where the last equality can be checked in a product chart, in which $d\pi_i$ and $d\iota_i$ are just the coordinate projections and inclusions.
So $T_{(p_1,p_2)}X \cong T_{p_1}X_1 \oplus T_{p_2}X_2$ |
If we place $1$ to $n^2$ in an $n\times n$ table, what is the smallest $s$ where $s$ is the max of $a+b$ where $a,b$ are numbers in adjacent cells? | Indeed, a minimum of $n^2 + \lfloor \frac n 2\rfloor + 1$ can be arranged for any $n \geq 3$, as follows, where $m = \lfloor \frac n 2 \rfloor$:
\begin{matrix}
n^2, & 1, & n^2 - 1, & 2, & n^2 - 2, & \dots \\
m + 1, & n^2 - n + m, & m + 2, & n^2 - n + m + 1, & m + 3, &\dots \\
n^2 - n, & n + 1, & n^2 - n - 1, & n + 2, & n^2 - n - 2, & \dots \\
n + m + 1, & n^2 - 2n + m, & n + m + 2, & n^2 - 2n + m + 1, & n + m + 3, & \dots \\
\dots & \dots & \dots & \dots & \dots & \dots
\end{matrix}
As for the optimality, it seems that the following is true:
On a $2k \times 2k$ grid, if we mark $k^2$ cells such that no two are adjacent to each other, then the number of cells that are adjacent to at least one marked cell is at least $k^2 + k$.
If this statement is true, then the above arrangement for $n = 2k$ is optimal: we consider the largest $k^2$ numbers $4k^2, 4k^2 - 1, \dots, 3k^2 + 1$. If two of them are adjacent, then the sum is at least $6k^2 + 3$; otherwise, by the above statement, there are at least $k^2 + k$ numbers adjacent to them, among which the largest one must be at least $k^2 + k$. Since this number is adjacent to a number at least $3k^2 + 1$, their sum is at least $4k^2 + k + 1$.
However, I'm not able to prove the above statement, although it seems very likely to be true.
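For what it's worth, the claimed minimum can be confirmed by exhaustive search for very small $n$; here is a brute-force sketch for $n=3$ (the search space grows like $(n^2)!$, so this only works for tiny boards):

```python
from itertools import permutations

n = 3
adj = [((i, j), (i, j + 1)) for i in range(n) for j in range(n - 1)] + \
      [((i, j), (i + 1, j)) for i in range(n - 1) for j in range(n)]

best = None
for p in permutations(range(1, n * n + 1)):
    g = [p[i * n:(i + 1) * n] for i in range(n)]
    s = max(g[i][j] + g[k][l] for (i, j), (k, l) in adj)
    best = s if best is None else min(best, s)
print(best)   # 11 = n**2 + n//2 + 1
```

For $n=3$ the search returns $11=n^2+\lfloor n/2\rfloor+1$, consistent with the claimed minimum. |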
Why natural numbers don't have supremum? | Let's review several senses in which $\Bbb R$ is complete:
Each nonempty set with an upper bound has a least upper bound: This works in $\Bbb N$ because $\Bbb N\subseteq\Bbb R$, so if $S\subseteq\Bbb N$ then $S\subseteq\Bbb R$. Narrowly interpreted, your question boils down to overlooking the phrase "with an upper bound": $\Bbb N$ itself has no upper bound in $\Bbb R$, so completeness does not give it a supremum. Broadly interpreted, viz. the comments, other respects in which $\Bbb R$ is complete are worth comparing with $\Bbb N$.
Each Dedekind cut is generated by a real number: Of course, cuts are not in general generated by natural numbers. (You could, I suppose, think of naturals as something similar to Dedekind cuts on themselves, but you'd gain nothing from it.)
Each Cauchy sequence converges to a value in $\Bbb R$: Any Cauchy sequence in $\Bbb N$ is eventually constant, converging to some natural number.
The intersection of intervals nested in the way the nested intervals theorem specifies is a $1$-element set: Since naturals differ by at least $1$ (crucial also in the discussion of the IVT below), you eventually can't nest proper-subset intervals further.
Monotone convergence: In analogy with the least-upper-bound point above, this follows because every sequence in $\Bbb N$ is also in $\Bbb R$.
Bolzano-Weierstrass: Ditto.
Intermediate value theorem: This fails for $\Bbb N$ because if $f(a)<0<f(b)$ then there might not be values between $a$ and $b$. |
How to convert equation of a plane into an equivalent coordinate transformation? | First step is to find a basis of the plane you are interested in. First find any vector $\mathbf{v}$ in your plane, which can be anything orthogonal to $\mathbf{n} = \left(\begin{array}{c}n_1 \\ n_2 \\ n_3 \end{array} \right)$. You should choose it to have length 1 to make things easier. We will use this as your new $\hat{\mathbf{x}}'$, which will point along the $x$-axis in your new reference plane.
To find your $\hat{\mathbf{y}}'$, simply take the cross product $\mathbf{v} \times \mathbf{n}$. This is guaranteed to be orthogonal to both $\mathbf{n}$ and $\mathbf{v}$ by standard properties of the cross product. Being orthogonal to $\mathbf{n}$ means it also lives in your plane. You should again rescale so it has length 1.
Then $\hat{\mathbf{z}}'$ will just be $\mathbf{n}$ itself (or rather the unit vector pointing in the direction of $\mathbf{n}$), since it already points "up" relative to your plane.
Now form the matrix $A$ whose columns are $\hat{\mathbf{x}}'$, $\hat{\mathbf{y}}'$, and $\hat{\mathbf{z}}'$. This matrix will send $\hat{\mathbf{x}}$ to $\hat{\mathbf{x}}'$, and so on... This sends the $x,y$ plane to the plane through the origin that is parallel to your desired plane. Finally, you simply add $\mathbf{r}$ to translate everything away from the origin.
Summary: if you have a vector $\mathbf{a} = \left(\begin{array}{c} x \\ y \\ 0 \end{array} \right)$ in the $x,y$-plane, then the corresponding point in your new plane will be $A \, \mathbf{a} + \mathbf{r}$.
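A sketch of the whole recipe in NumPy (the helper-vector trick for picking $\mathbf{v}$ is my own detail; any unit vector orthogonal to $\mathbf{n}$ works):

```python
import numpy as np

def plane_map(n, r):
    # Returns a map sending x,y-plane coordinates (x, y, 0) into the
    # plane through r with normal n, following the construction above.
    n_hat = n / np.linalg.norm(n)
    helper = np.array([1.0, 0.0, 0.0])
    if abs(n_hat[0]) > 0.9:              # avoid a helper parallel to n
        helper = np.array([0.0, 1.0, 0.0])
    x_hat = np.cross(helper, n_hat)
    x_hat /= np.linalg.norm(x_hat)       # v, the new x-axis inside the plane
    y_hat = np.cross(x_hat, n_hat)       # v x n, the new y-axis (already unit)
    A = np.column_stack([x_hat, y_hat, n_hat])
    return lambda a: A @ a + r

f = plane_map(np.array([0.0, 0.0, 2.0]), np.array([1.0, 2.0, 3.0]))
print(f(np.array([1.0, 1.0, 0.0])))      # lands in the plane z = 3
```

For the example normal $(0,0,2)$ every image point lands in the plane $z=3$ through $\mathbf{r}$, as expected. |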
Area between two circles equal to half the area of one of the circles | I don't use analytic geometry.
Let $A$ and $B$ be the points of intersection of the circles and $C$ the center of the circle of radius $r$.
Let $x=\angle ACB$ be our unknown. Then $$r=2R \cos \frac x2.$$ Simple facts of geometry and trigonometry give the equation $$\sin x - x \cos x= \frac \pi2,$$ so $x$ is approximately $1.9056957$ and $r/R$ is approximately $1.1587285$.
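Numerically (a sketch using SciPy's bracketing root finder; the bracket $[1,3]$ works because $f(1)<0<f(3)$):

```python
from math import sin, cos, pi
from scipy.optimize import brentq

f = lambda x: sin(x) - x * cos(x) - pi / 2
x = brentq(f, 1, 3)
print(x, 2 * cos(x / 2))   # 1.9056957...  1.1587285... = r/R
```

Both printed values agree with the approximations above.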
existence of CW complex construction | There are plenty of tests for seeing if a space is NOT a CW-complex, like checking to see if it fails to be normal, Hausdorff, locally contractible, etc.
Usually, one only cares about a space being homotopy equivalent to a CW-complex. There is a statement in Hatcher's book (Proposition A.11) that says if $Y$ is a space, $X$ is a CW-complex, and there are maps $i \colon Y \to X$ and $r\colon X \to Y$ so that $ri \simeq \mathrm{id}_Y$, then $Y$ is homotopy equivalent to a CW-complex. |
Solving inequality. Did I do it right? | Your calculations are correct. The last result ($-22\geqslant-20$, which is false) tells you that no $y\in\mathbb{R}$ satisfies the inequality $\displaystyle 5(y-2)-3(y+4)\geqslant2y-20$.
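For the record, SymPy reaches the same conclusion, since the two sides differ by the constant $-2$:

```python
from sympy import symbols

y = symbols('y')
# SymPy evaluates the relational outright: 2*y - 22 >= 2*y - 20 is never true.
print(5*(y - 2) - 3*(y + 4) >= 2*y - 20)   # False
```

It prints `False`, confirming the empty solution set. |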
Are these definitions of a continuous random variable equivalent? | First, some references would prefer the term "absolutely continuous" for what you are calling continuous. They would instead use the word "continuous" to refer to r.v.s whose CDF is continuous; the Cantor distribution is "continuous" in this sense but not in your sense.
At any rate, there are r.v.s with uncountably infinite range that are not even continuous in this weaker sense. For instance you can have $X=B U + 1-B$, where $B$ is Bernoulli(1/2), $U$ is uniform on $(0,1)$, and the two are independent. Then $X$ can take on any value between $0$ and $1$ but its CDF is not continuous at $1$.
As for your second question, $<$ vs. $\leq$ does not matter because in this situation single points have probability zero.
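A simulation sketch of the example $X=BU+1-B$, showing the point mass at $1$ that breaks continuity of the CDF:

```python
import random

random.seed(1)
N = 10**6
xs = [random.random() if random.random() < 0.5 else 1.0 for _ in range(N)]
print(sum(x == 1.0 for x in xs) / N)   # ~0.5:  P(X = 1) = 1/2, a jump in the CDF
print(sum(x <= 0.5 for x in xs) / N)   # ~0.25: the continuous uniform part
```

About half the samples sit exactly at $1$: that point mass is what makes the CDF discontinuous. |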
Sum of two dependent vectors is dependent | Let $v_3 = v_1 + v_2$. Then the set $\{ v_1, v_2, v_3 \}$ is linearly dependent since
$$v_1 + v_2 - v_3 = v_1 + v_2 - (v_1 + v_2) = 0.$$
Remark: It does not make sense to say that a vector is linearly dependent without specifying what other vectors it is linearly dependent on. Therefore, we typically talk about linearly dependent sets. |
Reference request for differential geometry/quantum chaos text | "Quantum Chaos - Between Order and Disorder" - Casati and Chirikov (no differential geometry)
"Foundations of Mechanics" - Ralph Abraham and Jerry Marsden (plenty of differential geometry; classical mechanics and beyond)
"A First Course in Dynamics" - Anatole Katok (ergodic theory)
I'm sure there are many others; perhaps a few more suggestions would help.
Oh, and Predrag Cvitanovic, "Chaos: Classical and Quantum", website: ChaosBook.org |