title | upvoted_answer
---|---
A question on Vitali convergence theorem | Let $\mu$ be a Dirac mass at $x=0$ in $\mathbb{R}$. Let $f(0) = \infty$ and $f(x) =0$ otherwise. Define $f_n(x)= n \chi_{\{0\}}(x)$. It is clear that $\{f_n\}$ is uniformly integrable: if $\epsilon>0$, take $\delta = 1/2$. Then, if a Borel set $A$ has $\mu(A)<\delta$, it must be that $\mu(A) = 0$ (the Dirac measure assigns only the values 0 and 1). Hence, $\int_A |f_n|\,d\mu = 0 < \epsilon$. We also have that $f_n(x) \to f(x)$ as $n\to \infty$, and $\mu(\mathbb R)=1<\infty$. But $\int_\mathbb{R} f\,d\mu = \infty$, so $f\notin L^1(\mu)$. |
Singular matrix in derivation of stationary distribution of AR(1) process | I realized that the given AR(1) process is not wide-sense stationary, since $|A| = 1$. As a result, the variance of $\mathbf{x}_t$ depends on the time lag $dt$ and diverges as $dt$ goes to infinity. |
Evaluating a double integral using normal density | Here is a quick way.
Observe that $ \int_{-\infty}^{\infty}\int_{-\infty}^{y}$ covers exactly half of the $(x,y)$-plane. Since the integrand depends on $x^2 + y^2 = r^2$ only, the result is $1/2$ of the integral over the full $(x,y)$-plane, and this can be conveniently computed in polar coordinates:
$$
\frac{1}{2\pi} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-\frac{1}{2}(x^2+y^2)} dx dy = \\
\frac{1}{2\pi} \int_{0}^{\infty} e^{-\frac{1}{2} r^2} r dr \int_{0}^{2 \pi}d\phi = \\
\int_{0}^{\infty} e^{-\frac{1}{2} r^2} d(\frac{r^2}{2}) = -e^{-\frac{1}{2} r^2} |_0^{\infty} = 1
$$
So, yes, your integral equals $1/2$. |
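As a quick numeric sanity check (my own sketch, not part of the original answer; the SciPy call and names are mine), one can integrate the standard bivariate normal density over the half-plane $x \le y$ and confirm the value is about $1/2$:
```python
# Minimal sketch: numerically integrate the bivariate standard normal
# density over {x <= y} and compare with the claimed value 1/2.
import numpy as np
from scipy import integrate

density = lambda x, y: np.exp(-0.5 * (x**2 + y**2)) / (2 * np.pi)

# Outer variable y runs over R; for each y, the inner variable x runs
# from -infinity up to y.
val, err = integrate.dblquad(density, -np.inf, np.inf,
                             lambda y: -np.inf, lambda y: y)
print(val)  # ~ 0.5
```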
Question about affine plane minus origin not being affine | I always find this example to be annoyingly and confusingly explained.
(NB: I assume you have a typo and that $U=\mathbb{A}^2_k-\{0\}$, with $x$ and $y$ the coordinates on $\mathbb{A}^2_k$.)
The point is this. We have the natural open embedding $j:U\hookrightarrow \mathbb{A}^2_k$ and not only is $\mathcal{O}(U)\cong \mathcal{O}(\mathbb{A}^2_k)$ as abstract $k$-algebras but, in fact (as the 'Algebraic Hartog's lemma' in Vakil shows) the induced map
$$j^\sharp:\mathcal{O}(\mathbb{A}^2_k)\to \mathcal{O}(U)$$
is an isomorphism. In particular, if $U$ were affine then this would imply that $j$ is an isomorphism (since $j^\sharp$ is), which, in particular, would imply that $j$ is bijective. But, of course, this is false.
What Vakil is saying then is that since it's $j^\sharp$ that is an isomorphism one would have that the 'point' of $U$ corresponding to $0$ would be a point $p$ of $U$ such that $j(p)$ agrees with $0$. Indeed, by $0$ in $U$ he really means
$$\ker(\mathcal{O}(U)\to \mathcal{O}(\mathbb{A}^2_k)=k[x,y]\twoheadrightarrow k[x,y]/(x,y)\cong k)$$
but this just means that $j(p)$ in $\mathbb{A}^2_k$ is
$$\ker(\mathcal{O}(\mathbb{A}^2_k)=k[x,y]\twoheadrightarrow k[x,y]/(x,y)=k)$$
which is just $0$. But, of course, no point $p$ can exist since $j^{-1}(0)$ is empty. |
Expected value of the number of die rolls for both 1 and 2 appeared? | In that case you are dealing with $X=N+M$ where $N$ has geometric distribution with parameter $\frac13$ and $M$ has geometric distribution with parameter $\frac16$.
Here $N$ denotes the number of trials needed to arrive at $1$ or $2$.
$M$ denotes the number of remaining trials needed to arrive at the element of $\{1,2\}$ that has had no appearance yet in the first $N$ trials.
Consequently $$\mathbb EX=\mathbb EN+\mathbb EM=3+6=9$$ |
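A quick simulation (my own sketch, not from the original answer; the helper name is mine) supports the value $9$:
```python
# Minimal sketch: simulate rolling a die until both a 1 and a 2 have
# appeared, and compare the empirical mean with the claimed expectation 9.
import random

def rolls_until_1_and_2():
    seen, count = set(), 0
    while not {1, 2} <= seen:
        count += 1
        seen.add(random.randint(1, 6))
    return count

n = 200_000
print(sum(rolls_until_1_and_2() for _ in range(n)) / n)  # ~ 9.0
```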
Hamming cube has two minimal cuts $S_1, S_2$ such that $E = S_1 \cup S_2$ | Here's a rough outline of something to try.
Consider the following two cuts (red and blue) of $H_2$:
Each cut divides the cube into two equal pieces that are symmetric if you swap all the $0$s with $1$s and vice versa. This is easy to work with, so we try to generalise this to cuts of $H_d$. We note that $H_{d+1}$ is the cartesian product $H_{d+1}= H_d \times K_2$, so we get $H_{d+1}$ by taking two copies of $H_d$ and joining the corresponding vertices with edges.
So to get $H_3$ with the right cuts, take two copies of $H_2$ (call them copy $0$ and copy $1$), reflect copy $1$ about the $y$-axis for ease of display, and swap the colours of the cuts in copy $1$:
Now you want to assign the edges between copy $0$ and copy $1$ to the two cuts in a way that gives you one big pair of cuts that are again symmetric. This is easy enough: pick a vertex $u$ in copy $0$ of $H_2$, and colour the edge $\{0u, 1u\}$ red. Then for every neighbour $v$ of $u$, colour the edge $\{0v, 1v\}$ blue, and so on, until all the in-between edges are coloured (such that no neighbours in a copy of $H_2$ have the same colour edge going between the two copies).
This procedure scales. So if you have these two minimal, symmetric cuts $S_1$ (red) and $S_2$ (blue) in $H_d$, you can get two minimal symmetric cuts for $H_{d+1}$ by taking the two copies of $H_d$, swapping the cuts in one, and giving the edges between the two copies 'alternating' colours. To see that this works in general, it helps a lot to notice that $H_d$ is bipartite. |
minimum number of subsets of $\{1, 2, 3, ... , n\}$, each of cardinality $r$, required such that their intersection is $\{1, 2, 3, ... , m\}$ | First have a look at my comment on your question. In this answer I focus on the special case $M=\varnothing$.
Find the minimal number of subsets of $\left\{ 1,\dots,n\right\} $
with cardinality $r$ such that their intersection is empty.
Thinking of their complements this can be rephrased as:
Find the minimal number of subsets of $\left\{ 1,\dots,n\right\} $
with cardinality $n-r$ such that their union is $\left\{ 1,\dots,n\right\} $.
If $k$ denotes this number then $k$ is the smallest integer with $k(n-r)\geq n$.
Applying this to the general case where $M=\{1,\dots,m\}$, we must take $n-m$ instead of $n$ and $r-m$ instead of $r$. So $k$ is the smallest integer with $k(n-r)\geq n-m$. |
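In code (my own sketch; the helper name is mine, not from the answer), the answer is just a ceiling:
```python
# Minimal sketch: minimal number k of r-subsets of {1,...,n} whose
# intersection is {1,...,m}, via the condition k(n - r) >= n - m.
from math import ceil

def min_subsets(n, r, m):
    return ceil((n - m) / (n - r))

print(min_subsets(10, 7, 1))  # k * 3 >= 9, so k = 3
```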
Selection of four distinct non-consecutive natural number | You can do this without the inclusion-exclusion principle, since the form of the given answer certainly suggests that way.
First, select 4 arbitrary balls from 17 white balls that are lined up, by marking them red. Now insert one white ball after the first, second, and third red ball. This makes the total 20. Number the balls from 1 to 20. Do the numbers of the red balls satisfy the requirement (i.e. non-consecutive)? Do they cover all the possible cases?
Indeed, the "non-consecutive" requirement can always be removed this way. |
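A brute-force check of this bijection (my own sketch, not part of the answer):
```python
# Minimal sketch: count 4-element subsets of {1,...,20} with no two
# consecutive elements; the bijection says this equals C(17, 4).
from itertools import combinations
from math import comb

count = sum(1 for s in combinations(range(1, 21), 4)
            if all(b - a >= 2 for a, b in zip(s, s[1:])))
print(count, comb(17, 4))  # both are 2380
```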
Does $\int_0^\infty |f'(x)| dx < \infty$ conclude $\lim_{x\to \infty} f(x)<\infty $ | Consider the auxiliary function
$$F(x):=\int_0^x|f'(t)|\>dt\qquad(x\geq0)\ .$$
By assumption, $\lim_{x\to\infty} F(x)$ exists. By Cauchy's criterion it follows that for each $\epsilon>0$ there is an $M\geq0$ with
$$|f(y)-f(x)|\leq\int_x^y|f'(t)|\>dt=F(y)-F(x)<\epsilon\qquad(M<x\leq y)\ .$$
The "essential part" of Cauchy's criterion then allows to conclude that $\lim_{x\to\infty} f(x)$ exists (and is finite). |
Prove $R < ( 1 + (\sum |a_{i}|^{p} )^{q/p})^{1/q}$ for absolute value of polynomial zeros | We have to show that $P(x) = 0$ implies $|x| < \left( 1 + A_{p}^{q} \right)^{1/q}$.
If $A_p = 0$ then $P(x) = x^n$ and the implication clearly holds.
If $A_p > 0$ and $|x| \le 1$ then the desired inequality holds as well.
Therefore it suffices to consider the case that $A_p > 0$, $P(x) = 0$ and $|x| > 1$. Using Hölder's inequality we get
$$
|x|^n = | a_0 + a_1 x + \ldots + a_{n-1}x^{n-1}| \le \left( \sum_{k=0}^{n-1} |a_k|^p \right)^{1/p} \left( \sum_{k=0}^{n-1} |x^k|^{q} \right)^{1/q}
= A_p \left( \sum_{k=0}^{n-1} |x|^{kq} \right)^{1/q} \\
\implies |x|^{nq} \le A_p^q \left( \sum_{k=0}^{n-1} |x|^{kq} \right) \, .
$$
To simplify the notation we set
$$
u = |x|^q \, , a = A_p^q \, .
$$
Then
$$
u^n \le a (1 + u + \ldots + u^{n-1}) \\
\implies 1 \le a \left( \frac 1u + \ldots + \frac{1}{u^{n}}\right)
< a \sum_{k=1}^\infty \frac{1}{u^k} = \frac{a}{u-1} \\
\implies u-1 < a
$$
or $u < 1+a$, and that is exactly the desired estimate $|x| < \left( 1 + A_{p}^{q} \right)^{1/q}$.
Remark: We haven't used the fact that $P$ has only real coefficients and roots, so this estimate actually holds for arbitrary (real or complex) zeros of arbitrary complex polynomials. |
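A numeric spot check of the bound (my own sketch; the random polynomial and names are mine):
```python
# Minimal sketch: for a random monic polynomial, every root x should
# satisfy |x| < (1 + A_p^q)^(1/q), with A_p = (sum |a_k|^p)^(1/p)
# and 1/p + 1/q = 1.
import numpy as np

rng = np.random.default_rng(0)
p = q = 2.0                            # Hölder conjugates
a = rng.normal(size=6)                 # a_0, ..., a_5
roots = np.roots(np.r_[1.0, a[::-1]])  # x^6 + a_5 x^5 + ... + a_0
A_p = np.sum(np.abs(a) ** p) ** (1 / p)
bound = (1 + A_p ** q) ** (1 / q)
print(np.abs(roots).max(), "<", bound)
```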
What does Determinant of Covariance Matrix give? | I would like to point out that there is a connection between the determinant of the covariance matrix of (Gaussian distributed) data points and the differential entropy of the distribution.
To put it in other words: let's say you have a (large) set of points which you assume to be Gaussian distributed. If you compute the determinant of the sample covariance matrix then you measure (indirectly) the differential entropy of the distribution, up to constant factors and a logarithm. See, e.g., Multivariate normal distribution.
The differential entropy of a Gaussian density is defined as:
$$H[p] = \frac{k}{2}(1 + \ln(2\pi)) + \frac{1}{2} \ln \vert \Sigma \vert\;,$$
where $k$ is the dimensionality of your space, i.e., in your case $k=3$.
$\Sigma$ is positive semi-definite (covariance matrices always are), which means $\vert \Sigma \vert \geq 0$; you will never see $\vert \Sigma \vert < 0$.
The larger $\vert \Sigma \vert$, the more dispersed your data points are. If $\vert \Sigma \vert = 0$, it means that your data points do not 'occupy the whole space', i.e., they lie, e.g., on a line or a plane within $\mathbb{R}^3$. Somewhere I have read that $\vert \Sigma \vert$ is also called the generalized variance. Alexander Vigodner is right, it captures the volume of your data cloud.
Since the sample covariance matrix is defined as $$\Sigma = \frac{1}{N-1} \sum_{i=1}^N (\vec{x}_i - \vec{\mu})(\vec{x}_i - \vec{\mu})^T\;, $$ it follows that you do not capture any information about the mean. You can verify that easily by adding some large constant vectorial shift to your data; $\vert \Sigma \vert$ should not change.
I don't want to go into too much detail, but there is also a connection to PCA. Since the eigenvalues $\lambda_1, \lambda_2, \lambda_3$ of $\Sigma$ correspond to the variances along the principal component axes of your data points, $\vert \Sigma \vert$ captures their product, because by definition the determinant of a matrix is equal to the product of its eigenvalues.
Note that the largest eigenvalue corresponds to the maximal variance in your data (direction given by the corresponding eigenvector, see PCA). |
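A quick illustration of the shift invariance and the entropy formula (my own sketch; the data and names are invented):
```python
# Minimal sketch: |Sigma| is unchanged by a constant shift of the data,
# and plugs into the Gaussian differential entropy formula.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
S1 = np.cov(X, rowvar=False)
S2 = np.cov(X + 100.0, rowvar=False)          # large constant shift
print(np.linalg.det(S1), np.linalg.det(S2))   # essentially equal

k = 3
H = 0.5 * k * (1 + np.log(2 * np.pi)) + 0.5 * np.log(np.linalg.det(S1))
print(H)  # differential entropy estimate
```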
How to ensure to find all solutions of a trigonometric equation? | We have:
$$\tan(\pi-\theta)=\tan\left(\theta+\frac{\pi}{6}\right)$$
What you did is conclude from this that $\pi-\theta=\theta+\frac{\pi}{6}$. However, that step is not valid: $\tan$ is $\pi$-periodic, so the two arguments need only agree up to a multiple of $\pi$. The following is true:
$$(\pi-\theta)-\left(\theta+\frac{\pi}{6}\right)=\pi n \text{ for some } n \in \Bbb{Z}$$
Now, we need to solve for $\theta$ in terms of $n$ to get all of the solutions. Simplify the left side:
$$-2\theta+\frac{5\pi}{6}=\pi n \text{ for some } n \in \Bbb{Z}$$
Solve for $\theta$:
$$\theta=\frac{5\pi}{12}-\frac{\pi}{2} n \text{ for some } n \in \Bbb{Z}$$
Now, here are some of the solutions:
$$n=1 \implies \theta=-\frac{\pi}{12}$$
$$n=0 \implies \theta=\frac{5\pi}{12}$$
$$n=-1 \implies \theta=\frac{11\pi}{12}$$
$$n=-2 \implies \theta=\frac{17\pi}{12}$$
If we are talking about the range of $\theta \in [0, \pi]$, $n=0$ and $n=-1$ are the only solutions since $n=1$ and $n=-2$ are outside of this range. However, for $\theta \in \Bbb{R}$, there are an infinite number of solutions. |
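A numeric check of the solution family (my own sketch):
```python
# Minimal sketch: each theta = 5*pi/12 - (pi/2)*n should satisfy
# tan(pi - theta) = tan(theta + pi/6).
import math

for n in range(-2, 3):
    theta = 5 * math.pi / 12 - math.pi / 2 * n
    lhs = math.tan(math.pi - theta)
    rhs = math.tan(theta + math.pi / 6)
    print(n, math.isclose(lhs, rhs, abs_tol=1e-9))
```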
Evaluate $x$ for $2x^2 - 3x - 2<0$ | Factorizing, we get $$f(x) = 2x^2-3x-2=(2x+1)(x-2).$$
Its roots are $-\frac12$ and $2$. Using a tabular method, we have
\begin{array}{|c|c|c|c|}
\hline
\text{Interval} & 2x+1 & x-2 & \text{Sign of}\ f(x)\\
\hline
x<-\frac12 & - & - & +\\
\hline
x=-\frac12 & 0 & - & 0\\
\hline
\color{blue}{-\frac12 < x < 2} & + & - & \color{blue}{-}\\
\hline
x=2 & + & 0 & 0\\
\hline
x>2 & + & + & +\\
\hline
\end{array}
Therefore, the answer should be $\boxed{-\frac12<x<2}$. |
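A quick sanity check of the sign table (my own sketch):
```python
# Minimal sketch: sample f(x) = 2x^2 - 3x - 2 around the roots -1/2 and 2;
# it should be negative exactly between them.
f = lambda x: 2 * x**2 - 3 * x - 2
for x in (-1, -0.5, 0, 1, 2, 3):
    print(x, f(x), f(x) < 0)
```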
If $ \lim_{n\to \infty} \sqrt[n]{a_n} = e$ then the sequence $(a_n \cdot 3 ^{-n})$ converges to $0$ | If $\lim_{n\to \infty} \sqrt[n]{a_n} = e$, then there is an $n_0$ such that $\sqrt[n]{a_n}<\frac{e+3}{2}$ for $n\ge n_0$. Thus:
$$a_n3^{-n}\le\left(\frac{e+3}{6} \right)^n \to 0 \quad \text{for } n\ge n_0,$$ since $\frac{e+3}{6}<1$.
(there are, in principle, details to cover re: whether $a_n$ is positive or not; these are not difficult to incorporate) |
Classifying functions that satisfy $|f(x)-f(y)| \leq M|x-y|^{\alpha}$ | For $x = a$ and $y = a + h$ we get $\left|\dfrac{f(a+h) - f(a)}{h}\right| \le M\cdot |h|^{\alpha - 1}$. Since $\alpha > 1$ here, the right-hand side tends to $0$ as $h \to 0$, so $|f'(a)| = 0$, and hence $f'(a) = 0$ for every $a$; thus $f$ must be a constant function. |
How can I prove that this operator is not hermitian? | A Hermitian operator has to satisfy
$$\langle x,A y\rangle = \langle A x, y\rangle.$$
However, if you choose $x = v_1$ and $y = v_2$, you would have
$$\langle x,A y\rangle = \langle v_1, v_1\rangle = 1.$$ On the other hand,
$$\langle A x, y\rangle = \langle v_1, v_2\rangle = 0.$$
Therefore, $A$ is non-Hermitian. |
Exact sequences of projective modules | This result is known as Schanuel's lemma.
A quick proof is given by introducing the pullback of $f$ and $f'$. It is the submodule of $P\oplus P'$ given by
$$X = \{(p,p')\in P\oplus P'\mid f(p) = f'(p')\}.$$
Then, the following sequences are exact:
$$0 \longrightarrow \ker(f')\simeq K' \longrightarrow X \longrightarrow P \longrightarrow 0$$
and
$$0 \longrightarrow \ker(f)\simeq K \longrightarrow X \longrightarrow P' \longrightarrow 0,$$
where $X \longrightarrow P$ and $X \longrightarrow P'$ are the natural projections.
Since $P$ and $P'$ are projective, these sequences are split, and one has:
$$K'\oplus P \simeq X \simeq K\oplus P'$$ |
If $A+B+C=π$, prove that | I edited to prove
$$1 -\cos^2 A - \cos^2 B - \cos^2 C-2\cos A\cos B\cos C= 0$$
I believe you need the cosine law. Let $a$, $b$, $c$ be the sides opposite angles $A$, $B$, and $C$ respectively; then
$$-\cos C = \frac{c^2 - a^2 - b^2}{2ab} $$
$$-\cos B = \frac{b^2 - a^2 - c^2}{2ac} $$
$$-\cos A = \frac{a^2 - b^2 - c^2}{2bc} $$
so we have
$$\cos A \cos B \cos C = -\frac{(c^2 - a^2 - b^2)(b^2 - a^2 - c^2)(a^2 - b^2 - c^2)}{8(abc)^2}$$
$$1 -\cos^2 C - \cos^2 A - \cos^2 B = \frac{4(abc)^2 -c^2(c^2 - a^2 - b^2)^2 - b^2(b^2 - a^2 - c^2)^2 - a^2(a^2 - b^2 - c^2)^2}{4(abc)^2}$$
To prove the original identity, we just need to prove:
$$4(abc)^2 -c^2(c^2 - a^2 - b^2)^2 - b^2(b^2 - a^2 - c^2)^2 - a^2(a^2 - b^2 - c^2)^2 = -(c^2 - a^2 - b^2)(b^2 - a^2 - c^2)(a^2 - b^2 - c^2)$$
Expanding both sides verifies the identity (from Wolfram, the identity holds). |
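A symbolic check of the final polynomial identity (my own sketch, using SymPy):
```python
# Minimal sketch: verify the polynomial identity by expansion.
import sympy as sp

a, b, c = sp.symbols('a b c', positive=True)
lhs = (4 * (a*b*c)**2
       - c**2 * (c**2 - a**2 - b**2)**2
       - b**2 * (b**2 - a**2 - c**2)**2
       - a**2 * (a**2 - b**2 - c**2)**2)
rhs = -(c**2 - a**2 - b**2) * (b**2 - a**2 - c**2) * (a**2 - b**2 - c**2)
print(sp.expand(lhs - rhs))  # 0
```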
Volume integral help | Just a picture, a nice problem. I would find the centroid of the 2-D shape that results from the intersection, rotate it around the z-axis, and then use the theorem of Pappus to find the volume; but that is just another way of doing the problem, not what you are being asked to do. Hope this helps. |
A Representation Theory Problem in Putnam Competition | It seems to me that the proposers of this problem went an extra mile to be misunderstood: Most reasonable mathematicians would read "group under matrix multiplication" as "group whose multiplication is matrix multiplication and whose identity is the identity of matrix multiplication". Under this interpretation, the $M_i$ do define a representation of $G$. But apparently the problem did not mean to require the identity of the group to be the identity of matrix multiplication, and so you do not know if the matrices are actually invertible as matrices.
But ambiguity is not a one-player game. Consider $r = 3$, $M_1 = I_2$, $M_2 = \operatorname{diag}\left(1, -1\right)$ and $M_3 = \operatorname{diag}\left(1, -1\right)$. Oh, $M_1, M_2, \ldots, M_r$ are supposed to be distinct? Good to know. I am wondering how often this came up on appeal.
Anyway the solution you quoted is overkill. The problem straightforwardly generalizes to matrices over any field of characteristic $0$ instead of real matrices; good luck defining Hermitian forms over arbitrary fields. A solution that generalizes (and is a lot shorter and more elementary than the one in the original post) proceeds as follows (very roughly sketched):
We have
\begin{align}
\left(M_1 + M_2 + \cdots + M_r\right)^2 = \sum_{i, j} M_i M_j = \sum_{k} \sum_{\substack{i, j ;\\ \ M_i M_j = M_k}} M_k
\end{align}
(since the $M_i$ form a group, so each $M_i M_j$ equals some $M_k$),
where all indices in sums range over $\left\{1,2,\ldots,r\right\}$. Now, for every $1 \leq k \leq r$, there exist precisely $r$ pairs $\left(i, j\right)$ such that $M_k = M_i M_j$ (again since the $M_i$ form a group). Hence, for every $1 \leq k \leq r$, we have
\begin{align}
\sum_{\substack{i, j ;\\ \ M_i M_j = M_k}} M_k = r M_k .
\end{align}
Thus,
\begin{align}
\left(M_1 + M_2 + \cdots + M_r\right)^2
&= \sum_{k} \underbrace{\sum_{\substack{i, j ;\\ \ M_i M_j = M_k}} M_k}_{=r M_k} = \sum_{k} r M_k \\
&= r\left(M_1 + M_2 + \cdots + M_r\right) .
\end{align}
This readily yields that $\dfrac{1}{r}\left(M_1 + M_2 + \cdots + M_r\right)$ is an idempotent. But the trace of an idempotent matrix equals its rank (this is a well-known fact), and this particular idempotent matrix $\dfrac{1}{r}\left(M_1 + M_2 + \cdots + M_r\right)$ has trace $0$. How many matrices with rank $0$ are there? |
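A tiny numeric illustration of the key facts (my own sketch; the two-element group here is my choice, not the problem's):
```python
# Minimal sketch: for the matrix group {I, diag(1,-1)} the averaged sum
# is idempotent, and an idempotent's trace equals its rank.
import numpy as np

Ms = [np.eye(2), np.diag([1.0, -1.0])]
P = sum(Ms) / len(Ms)
print(np.allclose(P @ P, P))                   # True: idempotent
print(np.trace(P), np.linalg.matrix_rank(P))   # trace == rank == 1
```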
Evaluate $z^4-(1-i)^2=0$ | $$z^4 = (1-i)^2=(\sqrt{2}\exp\left(\frac{-i\pi}{4} \right))^2=2\exp\left(\frac{-i\pi}{2} \right)=2\exp\left(i\left(\frac{-\pi}{2}+2k\pi \right)\right)$$
$$z=2^\frac14\exp\left(i\left(\frac{-\pi}{8}+\frac{k\pi}2 \right)\right)$$
where $k=0,1,2,3$. |
definite integral which has a $\sin$ function. | Hint: Begin with
$$
\displaystyle \int^{v}_{u}x^2\ln(\sin x)dx=\displaystyle \int^{v}_{u}\Big(\frac{x^3}{3}\Big)'\ln(\sin x)dx
$$
and use integration by parts.
Of course, you will have to prove the convergence when $u\to 0$ and $v\to \pi$.
You can also use the Fourier series for $0<x<\pi$
$$
-\log(\sin(x))=\sum_{k=1}^\infty\frac{\cos(2kx)}{k}+\log(2)
$$
If needed, look at the very elegant derivation of it here by user17762.
And then use, as was indicated, the technique of Jacky Chong
For your information, computer algebra gives (same as Dr. Sonnhard Graubner)
$$
-\frac{\pi^3}{6}\log(4) - \frac{\pi}{2}\zeta(3)\ .
$$ |
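A numeric check of that closed form (my own sketch, using mpmath):
```python
# Minimal sketch: compare the integral over (0, pi) with the quoted
# closed form -(pi^3/6) log 4 - (pi/2) zeta(3).
import mpmath as mp

val = mp.quad(lambda x: x**2 * mp.log(mp.sin(x)), [0, mp.pi])
closed = -mp.pi**3 / 6 * mp.log(4) - mp.pi / 2 * mp.zeta(3)
print(val, closed)  # both ~ -9.05
```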
Real world definition of the inverse of a matrix | You start with a supposedly real-world example and then have us suppose three hypothetical species that depend on each other only sequentially, which is not how the real world works. It is therefore a tricky business to define what is real or not in these modeling problems. I wouldn't attach such tags to problems if I were you.
Additionally, you have not finished the problem definition. You gave us three groups of species and two matrices. Should we multiply these or is it a dynamical system representation?
For example, this is what I understand from the relation you are imposing only because the size of the involved variables match:
$$
v=Ah \quad ,\quad h=Bc \implies v=ABc
$$
where $c,v,h$ are the column vectors consisting of the $v_i$, $c_i$, and $h_i$ stacked on top of each other.
But what are the quantities? What is the unit of, say, an entry of $A$: $\frac{\text{plant}}{\text{animal}}$? If I multiply $Ah$, the result is the total number of plants eaten by the herbivores, and that is not the variable $v$ that you defined in the beginning. |
Solve $T(n)=16T(n/2)+2n^4$ | To gain intuition (and get the "right" solution), it is simpler to first consider the case where $n$ is a power of $2$, i.e. $n=2^k$ for some integer $k \geq 0$. (In what follows, all logarithms are in base $2$, because computer science.)
We can write
$$\begin{align}
T(2^k) &= 16 T(2^{k-1}) + 2\cdot 2^{4k} = 2^4 T(2^{k-1}) + 2^{4k+1} \\
&= 2^4 \left(2^4 T(2^{k-2}) + 2^{4(k-1)+1}\right) + 2^{4k+1} \\
&= 2^{2\cdot 4} T(2^{k-2}) + 2^{4k+1} + 2^{4k+1}
= 2^{2\cdot 4} T(2^{k-2}) + 2\cdot 2^{4k+1}\\
&= 2^{3\cdot 4} T(2^{k-3}) + 3\cdot 2^{4k+1} \qquad\qquad\hfill\text{(Same substitution)}\\
&\vdots \\
&= 2^{\ell\cdot 4} T(2^{k-\ell}) + \ell\cdot 2^{4k+1} \\
&\vdots \\
&= 2^{k\cdot 4} T(2^{0}) + k\cdot 2^{4k+1} = 2^{4k} T(1) + 2k\cdot 2^{4k}
\end{align}$$
so that $T(2^k) = 2^{4k}\left( 2k+\alpha \right)$ for $\alpha\stackrel{\rm def}{=} T(1)$. In other terms,
$$
T(n) = n^4\left(2\log n + \alpha\right)
$$
whenever $n$ is a power of two. Generalizing to general $n$ (I assume in this case the relation is missing $\lfloor\cdot\rfloor$ around the $\frac{n}{2}$) is standard, under the natural assumption that $T$ is non-decreasing.
As far as big-Oh notations are concerned, this leads to
$$
T(n) = \Theta( n^4\log n)
$$
although we did obtain a tighter expression. |
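A quick check that the closed form solves the recurrence (my own sketch; the base value is arbitrary):
```python
# Minimal sketch: verify T(2^k) = 2^{4k} (2k + T(1)) against the
# recurrence T(n) = 16 T(n/2) + 2 n^4 on powers of two.
T1 = 5  # arbitrary T(1)

def T(n):
    return T1 if n == 1 else 16 * T(n // 2) + 2 * n**4

for k in range(8):
    assert T(2**k) == 2**(4*k) * (2*k + T1)
print("ok")
```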
Is this space locally compact? | Just go back and forth using $f$. Start with a point $y\in Y$; you want it to be in a neighborhood that's relatively compact. Well, the stuff we know about is in $X$. Since $f$ is onto, there exists at least one preimage; pick one arbitrarily, say $x\in X$ with $f(x)=y$. Now using the local compactness of $X$ you get a neighborhood $x\in U$ where the closure of $U$ is compact. Since $f$ is an open map, $f(U)$ is open, and $y\in f(U)$. Moreover, since $f$ is continuous and continuity preserves compactness, $f(\overline U)$ is compact. To conclude, all you need to show is that $f(\overline U)=\overline {f(U)}$. |
Calculate volume in a 3D sort of space using cartesian coordinates | Or by using polar coordinates instead, as below: $$\int_{r=0}^1\int_{\theta=0}^{\pi/2}\int_{z=0}^{r\sin(\theta)}r\,dz\,d\theta\, dr=1/3$$ |
Books on complex analysis (Ahlfors, Conway and Lang) | I have a hard time avoiding blatant self-promotion here...
I don't know Lang. Ahlfors is of course a classic. I have a lot of issues with Conway. (My complaints are with the first volume, which it turns out he wrote as a student! The second volume is full of great stuff.) Conway was the standard text here for years - I hated it so much I started using my own notes instead, which eventually became Complex Made Simple (oops. Well, there are things in there that are not in any other elementary text that I know of.)
Two examples that spring to mind regarding Conway:
He spends almost a page using the power series for $\log(1+z)$ to show that $\lim_{z\to0}\log(1+z)/z=1,$ evidently not recalling the definition of the derivative.
There's a chapter or at least a section on the Perron solution to the Dirichlet problem. There's an exercise, like the first or second exercise in the chapter, which a few decades ago I was unable to do. I sent him a letter explaining why it was harder than he seemed to think.
In the next edition the words "This exercise is hard" were added. A year or so later I realized the exercise was not just hard, it was impossible: it asks us to prove something false.
Seems very unimpressive - I complain I don't know how to do the exercise and he doesn't even bother to make sure it's correct. |
Help explain why the proof works - Gradient Estimate Using the Maximum Principle | I do not suppose this to be an answer, but comments are not permitted without login. Maybe you should specify which side of the inequality makes the trouble.
Obviously the argument follows a standard one, which I just met on page 14
(Theorem 8.1) of http://arxiv.org/pdf/math/0309021 . Does looking at that short proof help?
In the definition of $w$, I suggest replacing $|u|^2$ by $|u-x_0|^2$.
This of course does not change values of second order differential operator $L$.
(BTW, I suppose a derivative is missing in the definition of $L$.)
I do not understand the exponential term with $\beta$, therefore let us omit that for the purpose of this note
(or set $\beta=0$).
This way (with minus $x_0$ and no $\beta$) we see that for any fixed $x_0$ we have $|Du(x_0)|^2 = w(x_0) \le \sup_\Omega w$, which is the left-hand side of the inequality. The other side of the inequality is then supposed to be less than $\sup _ {\partial \Omega} |Du|^2$,
which is, in the PDF file cited above, achieved by a different form of $w$ (zeroing the other terms on the boundary).
Thus for any fixed $x_0$ we estimated $|Du(x_0)|^2$. (Or we should/would.)
Might this be half an answer? |
Joint Probability 3 dice | The first question is easy: since colour does not matter, it is simply the probability of rolling exactly $3$ ones in $15$ attempts. This is given by a binomial distribution with parameters $15$ (number of rolls) and $\frac 16$ (success probability), so the answer is just $\binom{15}{3} \left(\frac 16\right)^3 \left(\frac 56\right)^{12}$, as expected.
For the second one, well, what we do know is that there were $5$ ones rolled. The probability of this event is given again by a binomial expression similar to the above, namely $\binom{15}{5} \left(\frac 16\right)^5 \left(\frac 56\right)^{10}$.
Now, suppose two $1$s came on the blue die, and three on the others. What is the probability of getting two ones when rolling a die six times? It is just $\binom{6}{2} \left(\frac 16\right)^2 \left(\frac 56\right)^{4}$.
Then, three ones must come on the other rolls, but we do not care how many come on which coloured dice. So the answer here is just $\binom{9}{3} \left(\frac 16\right)^3 \left(\frac 56\right)^{6}$.
Putting these together gives you the answer $\frac{\binom 62 \binom 93}{\binom {15}5}$, which is exactly what you have written, and which if you like is $0.41958...$ or $\frac{60}{143}$. So you have your concept absolutely right, if anything the textbook has got it wrong. |
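A quick exact computation of that conditional probability (my own sketch):
```python
# Minimal sketch: the conditional probability is hypergeometric,
# C(6,2) * C(9,3) / C(15,5) = 60/143.
from math import comb
from fractions import Fraction

p = Fraction(comb(6, 2) * comb(9, 3), comb(15, 5))
print(p, float(p))  # 60/143 ~ 0.41958
```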
Maximum number of equilateral triangles in a circle | It is easy to give an upper bound for this problem:
The total area of the triangles must not exceed the area of the circle.
\begin{align}
n \, A_t & \le A_c \iff \\
n \left( \frac{1}{2} b \, h \right) & \le \pi x^2 \iff \\
n \left(
\frac{1}{2}
\left(
1 \sqrt{1-\left(\frac{1}{2}\right)^2}
\right)
\right)
& \le \pi x^2 \iff \\
n \frac{\sqrt{3}}{4} & \le \pi x^2 \iff \\
n & \le \frac{4\pi}{\sqrt{3}} x^2
\end{align}
The exact solution I do not know, e.g. see the link given by Gerry Myerson in the comments:
It seems the numbers $n(x)$ might be one of those many discrete quantities that are hitherto unpredictable and are just found by trial.
Here is a visualization:
The case $n=6$ has minimal waste of the displayed cases. |
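The area bound is easy to evaluate (my own sketch; the function name is mine):
```python
# Minimal sketch: the upper bound n <= 4*pi/sqrt(3) * x^2 for unit
# equilateral triangles inside a circle of radius x.
import math

bound = lambda x: 4 * math.pi / math.sqrt(3) * x**2
print(bound(1.0))  # ~ 7.26, so at most 7 unit triangles fit when x = 1
```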
Unbounded entire function - Little Picard Theorem and Identity theorem - contradiction? | $\Re f$ is never holomorphic (unless $f$ is constant), therefore the fact that it vanishes on a non-discrete closed set means nothing: the essential fact is that it shouldn't vanish on any open set.
In fact, the set $(\Re f)^{-1}(0)$ is ideally a nice "curve" in $\Bbb R^2$; the set $(\Im f)^{-1}(0)$ is, likewise, another nice curve. And since their intersection $f^{-1}(0)$ is very likely to be a discrete set, the identity theorem will stay put and behave. |
Green's function of first order differential operator in two dimensions | You can write $f$ as $\vec a\cdot \nabla F$ for some function $F$ (just integrate $f$ along lines in the direction of $\vec a$). Then the PDE becomes $$\vec a\cdot \nabla (e^F u)=0$$ from where the general solution is clear: $u(x)=e^{-F (x,y)}\,h(P(x,y))$ where $P$ is the projection onto $\{\vec a\}^\perp$ (or onto another hyperplane transverse to $\vec a$).
Green's function is pretty weird, because there is no diffusion in the direction orthogonal to $\vec a$. Any source term only affects the solution on the $\vec a$-line through the location of the source. When the source is $\delta$ placed at some point $\vec p$, we get Green's function. It is a singular distribution supported on the line $\ell=\{\vec p+t\vec a\}$; more precisely,
$$G(\vec x,\vec p)=ce^{-F(\vec x)}H((\vec x-\vec p)\cdot \vec a) \, m^1_{\ell}$$ where $m^1_{\ell}$ is the linear measure on $\ell$, and $H$ is the Heaviside function. The constant $c$ is chosen so that you get $\delta$ on the right side of equation; I think it is equal to $e^{F(\vec p)}/|\vec a|$. |
Show that $n^4-6n^3+11n^2-6n$ is divisible by $4$ for every integer $n$. | Hint: Use that $$n^4-6n^3+11n^2-6n=(n-3)(n-2)(n-1)n$$ |
Is this a second grade system? | Based on your comment, your definition of an equation's grade seems to be that of polynomial degree. If so, an equation with non-integer exponents has no grade, so the latter system has no grade. |
Sketching functions with an uncountable number of turning points (sinusoids) | Substitute $t = 3x$ to get $$\sin t + t\cos t = 0\tag{1}$$ and note that if $t_0$ is a solution to the above, then $\cos t_0 \neq 0$. Assume the contrary, i.e. $\cos t_0 = 0$. Then we have $$0 = \sin t_0 + t_0\cos t_0 = \sin t_0,$$ but $\sin^2 t_0 + \cos^2 t_0 = 1$, so we can't have $\sin t_0 = \cos t_0 = 0$ and we arrive at a contradiction.
Now, if $t_0$ is a solution of $(1)$, since $\cos t_0\neq 0$, we can write $$0 = \sin t_0 + t_0\cos t_0 = \cos t_0(\tan t_0 + t_0)$$ which implies that $$\tan t_0 + t_0 = 0.$$ This means that any solution of $(1)$ is also a solution of $$\tan t = - t\tag{2}$$ and conversely, any solution of $(2)$ must be a solution to $(1)$.
We conclude that equations $(1)$ and $(2)$ are equivalent, i.e. have the same set of solutions.
There is a great answer on how to approach this kind of problem here. |
Convert from one format to other | $$(1 - t)P1 + tP2$$
$$\Leftrightarrow P1-tP1 + tP2~~~Expand ~bracket$$
$$\Leftrightarrow P1+tP2-tP1 ~~~~Rearrange$$
$$ \Leftrightarrow P1 +t(P2-P1) ~~~~ Factorise ~t~$$ |
Why square the time unit from average rate of change: $\frac{units}{time^2}$ | I think your average rate of change during $[a,b]$ is wrong.
What you are calculating is the average rate of change of the rate of change, i.e. the acceleration of widget production. What you should calculate is the total number of widgets produced divided by the change in time.
Therefore, the correct formula for the average rate of change is $\frac{F(b)-F(a)}{b-a}$, where $F$ is an antiderivative of $f$ (i.e. $F'(x)=f(x)$).
Thus, the unit should still be widgets/hour which is the same as that of the rate at which widgets are produced. |
Expansion Constant of a Metric Space | As for $[0,1]$ with the Euclidean metric, the expansion constant is $1$, which means that every family of pairwise-intersecting closed intervals has nonempty intersection. This is a consequence of the following: given two families $\{a_i\}_{i\in I}$ and $\{b_j\}_{j\in J}$ of real numbers such that $a_i\leq b_j$ $\forall i,j$, then $\sup_{i\in I} a_i\leq \inf_{j\in J} b_j$. Apply this to any family of intervals $[a_i,b_i]$ to find that any point in $[\sup a_i,\inf b_i]\neq \emptyset$ is in the intersection.
Assuming that the space $X$ is compact (or just that bounded subsets are relatively compact), there is a simple proof of $\mu\leq 2$ using the finite intersection property: we just need to prove that any finite subfamily $\{\overline B(x_1,r_1),\ldots,\overline B(x_n,r_n)\}$ is such that $\bigcap_{i=1}^n \overline B(x_i,2r_i)\neq\emptyset$. To this aim, use the triangle inequality to prove that the center of a ball of least radius among these is in the intersection.
For the general case, if a metric space $X$ is not complete then it does not admit any finite expansion constant.
Indeed, suppose $X$ is not complete. Then there is a non-convergent Cauchy sequence $(x_n)_n$. Consider the family of balls $\{\overline B(x_n,r_n)\}_n$ with $r_n=\sup_{m\geq n}d(x_n,x_m)$. The pairwise intersections are nonempty because, of any pair, the bigger ball contains the center of the other one. Assume by contradiction that there is a point $x\in\bigcap_n \overline B(x_n,\mu r_n)$. Then $d(x,x_n)\leq \mu r_n\to 0$, which means that $x_n\to x$, a contradiction. This proves that a non-complete space cannot have a finite expansion constant.
I don't know about the optimality of $\mu =2$. For Euclidean spaces the worst case seems to be $\mu=\sqrt 2$. |
Finding a sound and complete verification function for a proof system. | As you present it you're right: You can just set
$$ \phi(s,p) = \begin{cases} 1 & \text{if $s=p$ and $s$ has at least four prime factors} \\ 0 & \text{otherwise} \end{cases} $$
and that will obviously be sound and complete.
If that is not a sufficient answer, it must be because there are additional conditions on $\phi$ that you have not reproduced in your question. What they might be is anyone's guess, though.
It is common to require that $\phi$ is computable, but that is obviously the case here -- we can easily check whether $s$ equals $p$, and we can also compute the prime factorization of $s$ and sum the exponents.
One also sometimes sees the stronger requirement that $\phi$ is primitive recursive, but that too is the case here (it takes a slight bit of ingenuity to show this, but not much).
Other than that we get into guesswork territory. One hypothesis could be that the textbook you got the exercise from might want to be able to verify proofs in polynomial time, in which case the above function won't do unless $s$ is given in unary notation. Then it would make some sense as an exercise, because $p$ would need to encode data about the factorization that cannot quickly be extracted from $s$ itself. (One option would be to let $p$ be an encoding of four factors $\geq 2$ that multiply together to yield $s$.)
Alternatively there may be a hidden requirement that $\phi$ be specified in a particular formalism or notation. |
Prove that if $ f : D(0,1) \to D(0,1) $ is analytic with $ f(0) = 0 $, then $\frac{f(z)}{z} $ has a removable singularity at 0 | Yes, that's one way to see it. In general, if $f$ has a zero of order $m$ at $a$, then $f(z)=(z-a)^mg(z)$ for some analytic function $g$ with $g(a)\neq 0$, and $g$ is the analytic extension of $\frac{f(z)}{(z-a)^m}$ whose domain includes $a$. The statement that $f$ maps into the unit disk is irrelevant.
In this case, you could also use the definition of the derivative to observe that $\displaystyle{\lim_{z\to 0}g(z)=f'(0)}$.
You write, "it's already been removed." That is not quite accurate, but this is just a technicality due to the fact that $\frac{f(z)}{z}$ can't be evaluated directly at $0$; you would get $\frac{0}{0}$. It is removable because there is a limit at $0$, and defining $g(0)$ to be that limit gives the unique analytic extension to all of $D(0,1)$. |
Is it possible to know the sum of the digits of a number (in base 10), without knowing the digits? | There is a formula for the sum of digits of $n$:
$$s(n) = n -9\sum_{k=1}^\infty\left\lfloor\frac{n}{10^k}\right\rfloor$$
Note that the sum is finite, since $\left\lfloor\frac{n}{10^k}\right\rfloor = 0$ for $k$ big enough.
I doubt this formula simplifies enough to calculate the sum of digits of numbers of the form $2^n$. |
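A quick check of the formula (my own sketch; the helper name is mine):
```python
# Minimal sketch: s(n) = n - 9 * sum_{k>=1} floor(n / 10^k), checked
# against a string-based digit sum.
def s(n):
    total, k = n, 10
    while k <= n:
        total -= 9 * (n // k)
        k *= 10
    return total

assert all(s(n) == sum(map(int, str(n))) for n in range(1, 10_000))
print(s(2**100))  # digit sum of 2^100 via the formula
```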
Surface of a polynomial | This is a quadric surface, degenerate because it has no constant term. The corresponding matrix
$$ \pmatrix{1 & 0 & -1\cr
0 & -1 & 0\cr
-1 & 0 & 0\cr} $$
has eigenvalues $-1$ and $(1\pm\sqrt{5})/2$, two negative and one positive. So it is an elliptic cone. A parametrization is
$$ \eqalign{x &= (1+\sqrt{5}) (\cos(\theta)-1) r\cr
y &= (1+\sqrt{5}) 5^{1/4} \sin(\theta) r\cr
z &= \left((3 + \sqrt{5}) \cos(\theta) + 2\right) r\cr} $$
And here's a picture:
Of course this is only part of it: the cone extends infinitely in both directions. |
Moment Generating Function of beta ( Hard ) | For any random variable $X$ with $|X| \leq C$ with probability $1$ we have $Ee^{tX}\leq Ee^{|t||X|} \leq e^{C|t|} <\infty$ for any real number $t$. In particular, Beta random variables are bounded, so their MGFs exist.
About the series $\sum \frac {t^{k}} {(k!)(2k+1)}$ just observe that $|\frac {t^{k}} {(k!)(2k+1)}| <\frac {|t|^{k}} {(k!)}$ so the series is convergent for all $t$. ($\sum \frac {|t|^{k}} {(k!)}=e^{|t|}$). |
I am looking for an idea research question for a 5000 word essay in Multi Variable Calculus | De Rham cohomology could be an interesting topic, as it underlies the relationship between divergence, gradient and curl. |
Coprimes - assumption and required proof | Let $e=abc$. Since there are infinitely many primes $\{p_i\}_{\infty}$ with $m>n \implies p_m>p_n$, we may choose $j$ such that $p_j>e$. Then for any $k,l>j$ you get $\gcd(a,p_kp_l)=1$, $\gcd(b,p_kp_l)=1$ and $\gcd(c,p_kp_l)=1$.
Note that you don't need any assumptions about $a$, $b$, $c$.
Also the $\{p_i\}_\infty$ should just be thought of as the enumeration of all primes in ascending order. |
problem of number theory N. Sato | If $n$ is even, $S$ is empty and $\prod_{x\in S} x = 1$.
Suppose $n$ is odd. We have to prove that $\prod_{x\in S} x \equiv 1 \mod p^k$, for every prime $p$ and $k \in \mathbb{N}$ such that $p^k|n$.
Let $p_1^{e_1}p_2^{e_2}...p_m^{e_m}$ be the prime factorisation of $n$. By the Chinese Remainder Theorem there are exactly $a = p_1^{e_1-1}(p_1-2)p_2^{e_2-1}(p_2-2)...p_m^{e_m-1}(p_m-2)/p_l^{e_l-1}(p_l-2)$ numbers $1 \leq x \leq n$ which are congruent to neither $0$ nor $-1$ modulo $p_k$ for all $k \neq l$, and congruent to some specific integer modulo $p_l^{e_l}$. This $a$ is an odd number, since all prime factors of $n$ are odd.
Therefore $$\prod_{x\in S} x \equiv \left(\prod_{x\in\{1,...,p_l^{e_l}\}\cap\{y:p_l\nmid y(y+1)\}} x \right)^a \equiv \left(\left(\prod_{x\in\{1,...,p_l^{e_l}\}\cap\{y:p_l\nmid y\}} x\right) \left(\prod_{x\in\{1,...,p_l^{e_l}\}\cap\{y:p_l\mid y+1\}} x \right)^{-1}\right)^a\equiv \left(-1\cdot((p_l-1)(2p_l-1)...(p_l^{e_l}-1))^{-1}\right)^{a}$$
The inverses of $p_l-1,2p_l-1,...,p_l^{e_l}-1$ modulo $p_l^{e_l}$ are $p_l-1,2p_l-1,...,p_l^{e_l}-1$ in some order, because their product should be $1 \bmod p_l$.
So $\left((p_l-1)(2p_l-1)...(p_l^{e_l}-1)\right)^2 \equiv 1 \mod p_l^{e_l}$.
So $p_l^{e_l} \mid \left((p_l-1)(2p_l-1)...(p_l^{e_l}-1)-1\right)\left((p_l-1)(2p_l-1)...(p_l^{e_l}-1)+1\right)$. The two factors differ by $2$, so they can't both be divisible by $p_l$. Therefore $(p_l-1)(2p_l-1)...(p_l^{e_l}-1) \equiv 1 \mod p_l^{e_l}$ or $(p_l-1)(2p_l-1)...(p_l^{e_l}-1) \equiv -1 \mod p_l^{e_l}$. Since $(p_l-1)(2p_l-1)...(p_l^{e_l}-1) \equiv -1 \mod p_l$, we conclude $(p_l-1)(2p_l-1)...(p_l^{e_l}-1) \equiv -1 \mod p_l^{e_l}$.
So
$$\prod_{x\in S} x \equiv \left(-1\cdot((p_l-1)(2p_l-1)...(p_l^{e_l}-1))^{-1}\right)^{a} \equiv 1^a \equiv 1 \mod p_l^{e_l}$$ |
What is the Lebesgue measure of a following set? | Picture (in a case where $0 < a < b, 0 < c < d$): |
Prove closed of dimension one of $X\times I$. | Thanks to @DanielMcLaury's comment, I finally figured it out after his very wise guidance.
Submanifold:
When $dF=0$ and $d^2 F \neq 0$, we know the Jacobian of $dF$ is not singular. Hence, by the inverse function theorem, it is a submanifold.
Closed:
The given set $\{(x,t) \in X \times I : d(f_t)_x=0\}$ is closed, since all its limit points satisfy the condition and are therefore included in the set.
1-dimensional: By the Morse function condition $d^2 F \neq 0$, given $df_x = 0$ we have $df_{x^\prime} \neq 0$ for all $x^\prime \neq x$ in a small neighborhood of $x$. Therefore, the set can only vary with $t$, which suggests it is 1-dimensional. |
Finding arithmetic sequence first term | Hint: If the starting term were $0$, the sum of $40$ terms would be eleven times the sum of the numbers from $0$ to $39$. Can you do that? Then increasing the starting term by $1$ increases all the terms by $1$ and therefore the sum by $40$. |
Definition of a bounded subset of the cone of positive semidefinite matrices | The inner product of matrices is just the (restriction of) the standard inner product in $\mathbb{R}^{n(n+1)/2}$, so it defines a norm, and you can just as well use that norm to define bounded sets. This will be exactly the one you describe as "surrounded by a Euclidean ball". Since on a finite-dimensional vector space all norms are equivalent, it doesn't matter if you take this or the spectral norm - the bound $\alpha$ will only differ by a constant factor (depending on $n$). |
Finding $\displaystyle\lim_{x \to 2} (x^3 - x) = 6$ using epsilon-delta. | We have $|x^3-x-6|=|x-2||x^2+2x+3|$. For $x$ near $2$ we have $1 \le x \le 3$, hence
$|x^3-x-6| \le |x-2|(9+6+3)=18|x-2|$.
Your turn! |
Number $N>6$, such that $N-1$ and $N+1$ are primes and $N$ divides the sum of its divisors | Looking at the list http://oeis.org/A007691/b007691.txt of the first 1600 multiply perfect numbers, you find that the smallest number you are looking for seems to be
$$
\begin{align}
n&=1928622300236318049928258133164032 \\
&=2^{33}\cdot3^4\cdot7\cdot11^3\cdot31\cdot61\cdot83\cdot331\cdot43691\cdot131071
\end{align}
$$
which is $4$-perfect as well, with $n-1$ and $n+1$ being prime.
(The next $n$ in that sequence appears to be 20736673935772776371907300845135221460949851029611211752485268322919135405936727005685049658102947046034932245258365859207952308374437205033138073857921732017237200100173505791123852481233523248079738393931546624000000000 - don't ask me for $\sigma(n)/n$ for that one, though!) |
Prove that $\int \limits_0^{2a}f(x)dx=\int_0^a[f(x)+f(2a-x)]dx$ | Essentially,
this formula splits $[0, 2a]$
into $[0, a]$ and $[a, 2a]$
and then fiddles with the
integral over the second interval.
More explicitly,
$\begin{align}
\int_0^{2a} f(x) dx
&= \int_0^{a} f(x) dx+\int_a^{2a} f(x) dx\\
&= \int_0^{a} f(x) dx+\int_0^{a} f(x+a) dx\\
&= \int_0^{a} f(x) dx-\int_a^{0} f((a-x)+a) dx
\quad \text{ (replace } x \text{ by }a-x)\\
&= \int_0^{a} f(x) dx-\int_a^{0} f(2a-x) dx\\
&= \int_0^{a} f(x) dx+\int_0^a f(2a-x) dx
\quad \text{ (reverse direction of integration)}\\
&= \int_0^{a} (f(x)+f(2a-x))\, dx\\
\end{align}
$
Note that this also proves that
$$\int_0^{2a} f(x) dx
= \int_0^{a} (f(x) + f(x+a)) dx
$$ |
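A quick numeric check of the identity (my own sketch, with $f(x)=e^x$ and $a=1$):
```python
# Minimal sketch: both sides should agree for a concrete f and a.
from math import exp
from scipy.integrate import quad

a = 1.0
lhs, _ = quad(exp, 0, 2 * a)
rhs, _ = quad(lambda x: exp(x) + exp(2 * a - x), 0, a)
print(lhs, rhs)  # equal up to quadrature error
```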
Showing set is closed | I assume we're thinking of this as a subset of the real line with the usual topology.
One way would be to show that its complement is the infinite union of intervals $$\bigcup_{n = 1}^\infty\left(\frac{1}{n+1}, \frac{1}{n}\right),$$ together with the intervals $(1, \infty)$ and $(-\infty, 0)$. |
Conditions for being an unramified extensions | No. Let $K=\Bbb Q_2$, and $\alpha=\sqrt2$. Then $K(\alpha)/K$ is ramified. |
What is the probability that all three positions are filled by girls? | This is picking without replacement. We want the probability of a female president, a female v.p., and a female treasurer.
probability for a female president $= 4/8$
probability for a female v.p. $= 3/7$
probability for a female treasurer $= 2/6$
total probability $= \frac{4}{8} \cdot \frac{3}{7} \cdot \frac{2}{6}$
answer $= 1/14$
I hope I got it right. |
Different types of Set Theory | Naive set theory is a general term for set theory where the axioms are not thoroughly introduced and studied. We describe some properties of sets, and usually get by with the (provably inconsistent) comprehension axiom schema, which essentially says:
Every definable collection is a set.
But as the parenthetical remark points out, that axiom schema is inconsistent: one can construct all sorts of collections which cannot be sets. In naive set theory we casually toss this worry aside, and rely on the fact that the sets we are going to meet are going to be sets. By that virtue, naive set theory is often concerned with finite sets, countable sets, or with "arbitrary sets" which are assumed to exist.
Despite its name, and its sore philosophical limitation of being inconsistent, you can develop quite a lot of mathematics within naive set theory. This is good because this development can be later carried out formally in the [not yet inconsistent] axiomatic set theory.
So what is axiomatic set theory? In axiomatic set theory the student first has to have some basic knowledge of logic, and we begin by writing down axioms. Then we prove from these axioms all sort of theorems and deduce more and more information. These axioms describe in a formal language what properties we expect sets to have. For example, we expect that if $X$ is a set then its power set is also a set itself. So we can formalize that as an axiom.
Axiomatic set theory is concerned with what statements we can prove from what axioms. For example, one cannot prove that $A\notin A$ for every set $A$, without appealing to the axiom of foundation (or some variant thereof). The proof of this unprovability is a common, and very nice, exercise in axiomatic set theory books and courses.
Where naive set theory is often given as some general outline of how sets should behave, with some basic understanding of the connection between set theory and general mathematics, axiomatic set theory investigates the sets themselves: things like the axiom of choice, the continuum hypothesis, cardinal and ordinal arithmetic, infinitary combinatorics, and so on. These have applications to general mathematics outside of set theory, but they are still investigated for their own sake.
Finally, both naive set theory and its axiomatic version come in "flavours", but whereas naive set theory is mild, in the sense that one pays less attention to assumptions such as "real numbers are sets" or "real numbers are not sets", and so on; in axiomatic set theory one pays very close attention to the starting assumptions and the language in which one expresses these assumptions in.
For example, $\sf ZFC$, which is one of the common (if not the common) set theories, is stated in the language where objects are sets, and we only have $\in$ to define things (from that we can define $\subseteq,\varnothing,\mathcal P(\bullet)$ and so on, but formally we only have $\in$ and $=$). On the other hand, one of its extensions, $\sf NBG$ - which you have mentioned - lives in a slightly larger language which allows objects which are not sets, called proper classes, to exist in a "meaningful way" (whatever that means). There are subtle differences and similarities between these two theories. Often in axiomatic set theory we study extensions of $\sf ZFC$.
There are other theories based on a whole other approach to sets, like theories in which the basic notion is $\subseteq$ rather than $\in$; or theories where the atomic notion is "function" rather than "an element of". All these are very very different set theories, even if sometimes we can prove they end up proving "pretty much" the same statements about sets. These are also topics of axiomatic set theory, even if less mainstream and conventional. |
$f:[0,\infty) \to [0,\infty)$ is continuous and $f(0)=0$ then $f$ is bijective | Continuous and injective implies that $f$ is strictly monotonic. Necessarily $f$ is strictly increasing (Why?). Verify by induction that $f(n) \geq n$ for all $n$. The range of $f$ is an interval (by IVP) and it is unbounded. It contains $0$ so the range must be $[0,\infty)$. |
Continuity of a multi-variate real function | It is not true. Take$$\begin{array}{rccc}f\colon&\mathbb R^2&\longrightarrow&\mathbb R\\&(x,y)&\mapsto&\begin{cases}\dfrac{xy}{x^2+y^2}&\text{ if }(x,y)\neq(0,0)\\0&\text{ otherwise.}\end{cases}\end{array}$$Then $f$ is not continuous (since $\lim_{x\to0}f(x,x)=\frac12\neq f(0,0)$), but, for each $a\in\mathbb R$, the maps $x\mapsto f(x,a)$ and $x\mapsto f(a,x)$ are continuous. |
Is the collection of atlases on a set $X$ a set? | If the charts map to $\mathbb{R}^k$ then it is a subset of $\mathcal{P}(\mathcal{P}((\mathbb{R}^k)^X\times \mathcal{P}(X)))$, by identifying a chart with a pair $(f \colon X \to \mathbb{R}^{k},U \subset X)$. If the dimension is not fixed a priori, then replace $\mathbb{R}^k$ with $\mathbb{R}^* = \bigsqcup_{k \ge 0} \mathbb{R}^k$. |
Function Proof that deals with Set Theory | Hint: Consider $H$ to be a constant function from natural numbers into the natural numbers (or any set with at least two elements). |
Maximum and minimum of $\frac13x^3 - \frac32y^2 + 2x $ such that $x-y=0$ | The condition is $g(x,y) = 0 \Leftrightarrow x=y$.
So, the problem becomes a simple single-variable calculus problem. You want to find the maximum and minimum of $h(x) :=f(x,x) = \frac13x^3 - \frac32x^2 + 2x$.
This is a cubic polynomial, so it does not have a global maximum or a global minimum since $$\lim\limits_{x\rightarrow \infty}h(x) = \infty \text{ and } \lim\limits_{x\rightarrow - \infty}h(x) = -\infty.$$
So, your book is right.
Now, $h'(x) = x^2-3x+2$, which vanishes at $x=1$ and $x=2$. These are the local maximum and the local minimum of $h$, respectively. |
Which Option is more expensive? | A European put option gives you the right to sell an underlying asset at a certain price (the strike price) at a certain time (the expiry date). You would want to sell an asset at a higher price. Therefore the put option with a higher strike price would be more expensive, all other things being equal.
A European call option gives you the right to buy an underlying asset at a certain price (the strike price) at a certain time (the expiry date). You would want to buy an asset at a lower price. Therefore the call option with a lower strike price would be more expensive, all other things being equal. |
Convergence of integral $\int_{17}^\infty \frac{(\ln{\ln{x})}\sin{(e^{ax}})}{x+e}$ | If $\;a<0,\;$ then $\;\sin(e^{ax})<e^{ax},\;$ and
$$I<\int\limits_{17}^\infty e^{ax}\,\text dx,$$
i.e. converges.
If $\;a=0,\;$ then
$$I \ge \dfrac{17\sin1}{17+e} \int\limits_{17}^\infty \dfrac{\ln\ln x}xdx
= \dfrac{17\sin1}{17+e} \int\limits_{17}^\infty \ln\ln x\,\text d\ln x,$$
i.e. diverges.
If $\;a>0,\; x\ge 17 > e^e,\;$ then
$$\;\dfrac{d}{dx}\dfrac{\ln \ln x}{x+e} = \dfrac{e-x(\ln x\cdot\ln\ln x-1)}{x(x+e)^2\ln x} < 0.\tag1$$
Applying the substitution $\;x=\dfrac1a\ln y,\; k_0=\left\lceil\dfrac{e^{17a}}{2\pi}\right\rceil,\;$ it is easy to get
$$I = \dfrac1a\int\limits_{\large e^{17a}}^\infty\; \dfrac{\ln\ln\left(\dfrac1a\ln y\right)}{\dfrac1a\ln y+e}\,\dfrac{\sin y}y\,\text dy,$$
$$aI = \int\limits_{e^{17a}}^{2k_0\pi}\; \dfrac{\ln\ln\left(\dfrac1a\ln y\right)}{\dfrac1a\ln y+e}\,\dfrac{\sin y}y\,\text dy
+ \sum\limits_{k=k_0}^\infty\;\int\limits_{2\pi k}^{2\pi k+2\pi}\; \dfrac{\ln\ln\left(\dfrac1a\ln y\right)}{\dfrac1a\ln y+e}\,\dfrac{\sin y}y\,\text dy.\tag2 $$
From $(1)$ it follows that the sinusoids in the integrals under the sum have decreasing weight, and that for $\;y\in(2k\pi,2k\pi+\pi),\;k=k_0,\dots,\infty,\;$
$$\dfrac{\ln\ln\left(\dfrac1a\ln y\right)}{\dfrac1a\ln y+e}\,\dfrac{\sin y}y
\ge\left|\dfrac{\ln\ln\left(\dfrac1a\ln(y+\pi)\right)}{\dfrac1a\ln(y+\pi)+e}\,\dfrac{\sin (y+\pi)}{y+\pi}\right|.$$
Then
$$\int\limits_{2\pi k}^{2\pi k+2\pi}\; \dfrac{\ln\ln\left(\dfrac1a\ln y\right)}{\dfrac1a\ln y+e}\,\dfrac{\sin y}y\,\text dy>0.$$
Therefore, there exist constants $\;C_1,\,C_2\;$ such that
$$I < C_1 + C_2\int\limits_{2\pi k_0}^\infty\dfrac{\sin y}y\,\text dy,\tag3$$
i.e. converges.
Finally, the given integral converges if $\;a\in\mathbb R\setminus \{0\}.$ |
Finding the left and right cosets of H = {(1), (12), (34), (12) ○ (34)} in S4 | Assume your group is $G$ and the subgroup is $H$. By definition $gH=\{gh \mid h\in H\}$ is a left coset of $H$ with respect to $g$ in $G$, and $Hg=\{hg \mid h\in H\}$ is a right coset of $H$ with respect to $g$ in $G$. Here your group is $S_4=\{(),(3,4),(2,3),(2,3,4),(2,4,3),(2,4), (1,2),(1,2)(3,4),(1,2,3),(1,2,3,4),(1,2,4,3),(1,2,4),(1,3,2),(1,3,4,2),(1,3), (1,3,4),(1,3)(2,4), (1,3,2,4), (1,4,3,2), (1,4,2), (1,4,3), (1,4), (1,4,2,3), (1,4)(2,3)\}$, and $H$ is as you pointed out. According to group theory, the number of right cosets of a subgroup in its group, called the index, is $\frac{|G|}{|H|}$. $|S_4|=4!$ and $|H|=|\langle(1,2),(3,4)\rangle|=4$, so you have exactly $\frac{4!}{4}=6$ cosets, right or left, for the subgroup. It does not matter which $g$ is taken in the group $G$. For example, if you take $(1,2,4,3)$ in the group, then $(1,2,4,3)H=\{(1,2,4,3)(),(1,2,4,3)(1,2),(1,2,4,3)(3,4),(1,2,4,3)(1,2)(3,4)\}$. Hope this helps. |
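A computational cross-check of the coset count (my own sketch, using SymPy's permutation groups; note SymPy indexes from $0$):
```python
# Minimal sketch: H = <(1 2), (3 4)> has index 6 in S4; collect the left
# cosets as frozensets and count them.
from sympy.combinatorics import Permutation, PermutationGroup, SymmetricGroup

G = SymmetricGroup(4)
swap01 = Permutation([1, 0, 2, 3])   # the transposition (1 2), 0-indexed
swap23 = Permutation([0, 1, 3, 2])   # the transposition (3 4), 0-indexed
H = PermutationGroup([swap01, swap23])
cosets = {frozenset(g * h for h in H.generate()) for g in G.generate()}
print(len(cosets))  # 6
```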
Matrix Kintchine inequality proof Exercise 5.4.13 | It is a straightforward integration of the tail bound. Just note that you should upper bound the tail probability by $1$ for small $t$.
For notational convenience, let $Z = \left\|\sum_i \epsilon_i A_i\right\|_{op}$ and we seek to upper bound $(\mathbb{E} Z^p)^{1/p}$. Write
$$
\mathbb{E} Z^p = \int_0^\infty \Pr\{Z^p \geq t\} dt = \int_0^T \Pr\{Z^p \geq t\} dt + \int_T^\infty \Pr\{Z^p \geq t\} dt,
$$
where $T$ is to be determined. It follows that
\begin{align*}
(\mathbb{E} Z^p)^{1/p} &\leq \left(\int_0^T \Pr\{Z^p \geq t\} dt\right)^{1/p} + \left(\int_T^\infty \Pr\{Z^p \geq t\} dt\right)^{1/p} \\
&\leq T^{1/p} + \left(\int_T^\infty \Pr\{Z^p \geq t\} dt\right)^{1/p}.
\end{align*}
Note that when $t \geq \sqrt{2/c}\cdot \sigma\sqrt{\ln n}$ we have that
$$
\Pr\{Z \geq t\} \leq 2n\exp\left(-\frac{ct^2}{\sigma^2}\right) \leq 2 \exp\left(-\frac{c}{2}\cdot \frac{t^2}{\sigma^2}\right),
$$
which agrees with the tail bound of some subgaussian variable $Y$ with $\|Y\|_{\psi_2}\leq c''\sigma$. Setting $T = (\sqrt{2/c}\cdot \sigma\sqrt{\ln n})^p$, we have
$$
\left(\int_T^\infty \Pr\{Z^p \geq t\} dt\right)^{1/p} \leq \left(\int_T^\infty \Pr\{|Y|^p \geq t\} dt\right)^{1/p} \leq (\mathbb{E} |Y|^p)^{1/p} \leq C\sqrt{p}\sigma.
$$
for some absolute constant $C$. It follows immediately that
$$
(\mathbb{E} Z^p)^{1/p} \leq \sqrt{\frac{2}{c}}\cdot \sqrt{\ln n}\cdot \sigma + C\sqrt{p}\sigma \leq C''(\sqrt{p + \ln n})\sigma
$$
as desired. |
When do you get rid of parentheses in any equation? When do I know whether parentheses are necessary to stay or not? | Parentheses are generally used for one of two reasons in a mathematical expression. The first reason is to indicate precedence. In this case, parentheses are used to ensure that operations are performed in the intended order. According to the order of operations, parts of an expression inside parentheses are to be computed before the rest of the expression.
To see this in action, take a look at the following two expressions side by side. They are identical except for the parenthesis.
$1 + 1 \cdot 2 = 3$
$(1 + 1)\cdot 2 = 4$
According to the order of operations the first expression evaluates to $3$, because the multiplication is done first, followed by the addition. The second expression, however, evaluates to $4$, because the parentheses indicate that the addition should be done first, followed by the multiplication.
So in this way, parentheses are used to specify the order in which expressions are evaluated. In the above expressions, the use of parentheses changed the meaning of the expression. In these situations it is imperative that if you $\textit{mean}$ to use parentheses you keep them in the expression, since the meaning changes when they are dropped. There are other situations, however, where the presence of parentheses does not change the intended meaning. For instance:
$1 + (2 + 3) + 1 = 7$
$1\cdot3\cdot(2\cdot4) = 24$
$2 \cdot 4 + (1 \cdot 3) = 11$
If the parentheses are dropped in any of these expressions, the results do not change (try it!). In the first expression this is because addition is associative, meaning that when you are adding more than $2$ numbers together it doesn't matter which numbers you decide to add first (it doesn't matter, for instance, whether I first do $2 + 3$ or $1 + 2$, etc.; in the end the answer will be the same). The same goes for multiplication, which is why the second expression above is also independent of the placement of parentheses.
In situations like this, where the parentheses do not change the meaning of the expression, you may drop them because they are unnecessary. Note that the above expressions are not the only situations in which this occurs; they are just examples. Spend some time writing out different expressions with and without parentheses. See if you can get a feel for when they matter and when they do not.
So, as we have seen, the first use of parentheses is to indicate precedence. The second is to use parentheses as an alternative to $\cdot$ or $\times$ when indicating multiplication. For instance, we have that
$$
2\cdot3 = 2 \times 3 = 2(3)
$$
These are all equivalent ways of indicating multiplication. This becomes quite common, since you get used to writing things like $2x, 5y, 4(x+2), xy$, etc., where simply writing two elements beside each other indicates multiplication. This works fine in the above expressions, where at least one of the factors is a variable. But it wouldn't really make sense to do the same when both factors are numbers: $23$, for instance, is clearly just read as the number twenty-three. To remedy this you may just throw in parentheses around the $3$ to indicate multiplication. |
$f$ is continuous and satisfies the equality given for all $0 \leq x $ | Using the first condition, with $x=2,$ you have
$$2^2(1-2)=\int_0^{f(2)} t^2 dt=\frac{(f(2))^3}{3},$$ that is, $(f(2))^3=-12.$ So $f(2)=\sqrt[3]{-12}.$
To deal with the second condition consider $F(x)=\int_0^x f(t)dt.$ Then $F(x^2(1-x))=\int_0^{x^2(1-x)}f(t)dt=x.$ Taking derivatives, you get $1=f(x^2(1-x))(-3x^2+2x).$ Thus, $\displaystyle f(x^2(1-x))=\frac{1}{-3x^2+2x}.$ Solving the equation $x^2(1-x)=2$ (whose only real solution is $x=-1$) you get the answer. |
If a $3\times 3$ matrix is not invertible, how do you prove the rest of the invertible matrix theorem? | So now you need to pick three other statements in the invertable matrix theorem and show that they don't hold for that matrix. For example, we can compute that the determinant of your matrix to find that it is $(1+0+0)-(1+0+0)=0$ by multiplying across the diagonals. For the other two, you could look at the row rank of the matrix, or show that it's non-surjective.
As mentioned in the comments, many many things are equivalent to a matrix being invertable, so double check to make sure you're using ones that you've talked about in class! A list of 23 potential statements can be found here. |
Lights Out with custom rules set | Call your matrix $A$. You want to use switch $1$ $x_1$ times, switch $2$ $x_2$ times and so on. Call the vector with the $x_i$ $\bar{x}$. So you obtain that $A \bar{x} = \bar{1}$. Do you know how to proceed from there? |
Integral: $\int_0^{\pi/12} \ln(\tan x)\,dx$ | Using the Fourier series of $\ln(\tan{x})$,
\begin{align}
&\int^\frac{\pi}{12}_0\ln(\tan{x})\ {\rm d}x\\
=&-2\sum^\infty_{n=0}\frac{1}{2n+1}\int^\frac{\pi}{12}_0\cos\Big{[}(4n+2)x\Big{]}\ {\rm d}x\\
=&-\sum^\infty_{n=0}\frac{\sin\Big[(2n+1)\tfrac{\pi}{6}\Big{]}}{(2n+1)^2}\\
=&\color{#E2062C}{-\frac{1}{2}\sum^\infty_{n=0}\frac{1}{(12n+1)^2}}\color{#6F00FF}{-\sum^\infty_{n=0}\frac{1}{(12n+3)^2}}-\color{#E2062C}{\frac{1}{2}\sum^\infty_{n=0}\frac{1}{(12n+5)^2}}\\
&\color{#E2062C}{+\frac{1}{2}\sum^\infty_{n=0}\frac{1}{(12n+7)^2}}\color{#6F00FF}{+\sum^\infty_{n=0}\frac{1}{(12n+9)^2}}\color{#E2062C}{+\frac{1}{2}\sum^\infty_{n=0}\frac{1}{(12n+11)^2}}\\
=&\color{#6F00FF}{-\frac{1}{9}\underbrace{\sum^\infty_{n=0}\left[\frac{1}{(4n+1)^2}-\frac{1}{(4n+3)^2}\right]}_{G}}\color{#E2062C}{-\frac{1}{2}G-\frac{1}{2}\underbrace{\sum^\infty_{n=0}\left[\frac{1}{(12n+3)^2}-\frac{1}{(12n+9)^2}\right]}_{\frac{1}{9}G}}\\
=&\left(-\frac{1}{9}-\frac{1}{2}-\frac{1}{18}\right)G=\large{-\frac{2}{3}G}
\end{align}
Things could be made clearer if we explicitly write out the terms of the sums. For the red sums,
\begin{align}
&-\frac{1}{2}\left(\frac{1}{1^2}+\frac{1}{5^2}-\frac{1}{7^2}-\frac{1}{11^2}+\cdots\right)\\
=&-\frac{1}{2}\left(\frac{1}{1^2}-\frac{1}{3^2}+\frac{1}{5^2}-\frac{1}{7^2}+\frac{1}{9^2}-\frac{1}{11^2}+\cdots\right)-\frac{1}{2}\left(\frac{1}{3^2}-\frac{1}{9^2}+\frac{1}{15^2}-\cdots\right)\\
=&-\frac{1}{2}G-\frac{1}{2}\cdot\frac{1}{9}\left(\frac{1}{1^2}-\frac{1}{3^2}+\frac{1}{5^2}-\cdots\right)=-\frac{5}{9}G
\end{align} |
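As a numerical sanity check (my addition, using mpmath's built-in Catalan constant for the $G$ above), one can compare direct quadrature against the closed form $-\tfrac{2}{3}G$:

```python
# Verify the closed form -2G/3 by high-precision quadrature; mpmath's
# tanh-sinh scheme handles the log singularity of the integrand at x = 0.
from mpmath import mp, quad, log, tan, pi, catalan

mp.dps = 30
numeric = quad(lambda x: log(tan(x)), [0, pi / 12])
closed = -2 * catalan / 3
print(numeric, closed)  # both ≈ -0.610643729...
```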
Is $x\ln|x|$ analytic at $x=0$? | In order to be analytic at $x=0$, the function and all of its derivatives must exist in a neighborhood of $x=0$. However, if $f(x)=x\log(|x|)$ and $f(0)=0$, we see that $f'(0)=\lim_{h\to 0}\frac{h\log(|h||)}{h}$ fails to exist. Therefore, $f$ is not analytic at $x=0$. |
Polynomial growth implies locally Lipschitz? | The Weierstrass function $W(x)$ presented in baby Rudin is a continuous bounded function on $\mathbb R$ such that
$$\limsup_{h\to 0}\left |\frac{W(x+h)-W(x)}{h}\right| =\infty$$
for every $x\in \mathbb R.$ The function $f(x) = xW(x)$ satisfies $|f(x)|\le c|x|$ on $\mathbb R.$ It also satisfies
$$\limsup_{h\to 0}\left |\frac{f(x+h)-f(x)}{h}\right| =\infty$$
for all $x\ne 0.$ Exercise: Prove this. It follows that $f$ fails to be Lipschitz in every nonempty open interval, including the ones containing $0.$ This function is therefore "nowhere locally Lipschitz". |
If $U$ is separable and $V \subset U$, then $V$ is separable | $W_1$ is countable because $Y$ is countable (which is countable because it is a subset of the countable set $\Bbb N\times\Bbb N$).
To show that $W_1$ is dense in $V$ we might show that its intersects every non-empty subset of $V$. But the sequence criterion used here works as well: We show that for every point $v\in V$, there is a sequence of points in $W_1$ that converges to $v$.
Remark: Note that the whole proof does not really make use of normed linear space. The very same argument works for metric space. |
What is the mean and variance of $1/X$ if $X$ is skew normal distributed? | If $X$ is skew normal, then $Y = 1/X$ does not possess a first moment. Thus mean and variance are not defined. |
Homeomorphism & inverse, between $U=\{ (x,y) \in \mathbb{R^2} :|x|+|y|\leqslant 2 \}$ and $V=\{(x,y) \in \mathbb{R^2} : \max(|x|, |y|)\leqslant 3\}$ | This is more of a hint and a correction than an answer.
You need to be careful with your sets: $U$ is the closed diamond (a square rotated by $45^\circ$) with vertices $(\pm 2,0)$ and $(0,\pm 2)$, while $V$ is the axis-aligned square $[-3,3]\times[-3,3]$.
I would agree with User8128 in this and say that you do want to use the combination of two invertible matrices, one a rotation and the other a scaling, which you can find in many places on the internet, but it is not a scaling by 1.5. Be careful and see if you can figure out what the proper amount you need to scale by is, and the corresponding matrices for both transformations.
Prove inequality using Cauchy-Schwarz | $\int_0^{1}u(x)^{2}dx=\int_0^{1} (\int_0^{x} u'(t)dt)^{2}dx$ and $(\int_0^{x} u'(t)dt)^{2}\leq x\int_0^{x} u'(t)^{2}dt \leq x\int_0^{1} u'(t)^{2}dt$. Can you finish? |
Forming similar differential/integral equations | I am not sure exactly what you want (and too long for a comment) but lets have a go.
Take the derivative of the second equation
$$
y'' +\frac{\cos y}{\int \cos y\, dx}y' -\frac{\sin y}{\int \cos y\, dx}\frac{\cos y}{\int \cos y\, dx} = 3\sin^2 y \cos y\, y'
$$
This leads to
$$
\left[y'-\frac{\sin y}{\int \cos y\, dx}\right]\frac{\cos y}{\int \cos y\, dx}=-y''+3\sin^2 y \cos y \,y'=-y'\dfrac{d}{dy}y' +y'\dfrac{d}{dy}\sin^3 y = -y'\dfrac{d}{dy}\left(y'-\sin^3 y\right)
$$
Then using Eq. 2 again we have
$$
y'-\sin^3 y = -\frac{\sin y}{\int \cos y\, dx}
$$
so we get
$$
y'-\frac{\sin y}{\int \cos y\, dx} =-y'\frac{\frac{d}{dy}\left(-\frac{\sin y}{\int \cos y\, dx}\right)}{\frac{\cos y}{\int \cos y\, dx}}
$$
or
$$
y'-\frac{\sin y}{\int \cos y\, dx} =\left(\sin^3y-\frac{\sin y}{\int \cos y\, dx}\right)\frac{\frac{d}{dy}\left(\frac{\sin y}{\int \cos y\, dx}\right)}{\frac{\cos y}{\int \cos y\, dx}} = \left(\sin^3y-\frac{\sin y}{\int \cos y\, dx}\right)\left(1+\frac{\sin y}{\cos y}\frac{\sin y}{\int \cos y \, dx}\right)
$$ |
Is the Epsilon-Delta definition of a limit not precise enough? | The bottom line is:
Limits are only defined on limit points. Continuity is only defined on the domain.
If we have a point $p$ on the domain such that $p$ is a limit point, then continuity at $p$ is equivalent to the limit being the value. However, if we have a point not on the domain, continuity does not make sense (although we can enlarge the domain and define it at the point in order to force continuity, but I digress), and if we have a point on the domain which is not a limit point, then limit does not make sense.
The problem is explicitly the following:
The definition of limit is: Let $x_0$ be a limit point of the domain. Then, we say $f(x) \stackrel{x \to x_0}{\to} L$ if for all $\epsilon>0$, there exists $\delta>0$ such that $0<|x-x_0|< \delta \implies |f(x)-L| < \epsilon$.
If $x_0$ is not a limit point, there exists a $\delta$ such that there is no $x$ with $|x-x_0|<\delta$. Therefore, any value of $L$ can be the limit. If we want to be really forceful and withstand the non-uniqueness of limit on metric spaces, then we can indeed let this be the case: every point on the codomain is a limit of $f(x)$ as $x \to x_0$ if $x_0$ is not a limit point. Then, for instance, it would hold that $f$ is continuous at $x_0$ if and only if $f(x) \stackrel{x \to x_0}{\to} f(x_0)$ even if $x_0$ is not a limit point, but that would be a very degenerate case.
What we conclude is that the case of $x_0$ not being a limit point results in a nuisance of non-uniqueness. Continuity is different:
The definition of continuity at $x_0$ is: $f$ is continuous at $x_0$ if for all $\epsilon>0$ there exists $\delta>0$ such that $|x-x_0|<\delta \implies |f(x)-f(x_0)| < \epsilon$.
There is no vacuous problem. Rather, the definition gives immediately that if $x_0$ is not a limit point, then $f$ is automatically continuous at $x_0$ (go through the definition and verify this). There is a certain degree of triviality, but no vacuity.
Summing up, your function $f$ is continuous, but limits on the points of the domain are not well-defined (at least, not unique when viewed under the usual definition).
PS: I acknowledge that there may be different definitions of continuity. However, the one used here is the special case of the definition used in topology, which is by far the most useful and well-behaved. |
Show that $\{(x,x) \ | \ x \in \mathbb{R}\} \in \mathcal{P}(\mathbb{R})\otimes\mathcal{P}(\mathbb{R})$ | $$\{(x,x)|x\in\mathbb R\}=\bigcap_{q\in\mathbb Q}\big[(-\infty,q)\times(-\infty,q)\cup[q,+\infty)\times[q,+\infty)\big]$$ |
What would the equation for this word problem be? | A $12\%$ increase changes the value to
$$
\$100 \times 1.12 = \$112.
$$
Then a $12\%$ decrease makes it
$$
\$100 \times 1.12 \times 0.88 = \$100 \times 0.9856 =\$98.56.
$$
Each two day cycle reduces the value to $98.56\%$ of what it was. After $89$ cycles (half a year) it's worth
$$
\$100 \times 0.9856^{89} = \$27.50.
$$ |
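A two-line check of the arithmetic (taking the $89$ two-day cycles above as given):

```python
# Each two-day cycle multiplies the value by 1.12 * 0.88 = 0.9856.
value = 100.0
for _ in range(89):
    value *= 1.12 * 0.88
print(f"{value:.2f}")  # 27.50
```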
Star mapping in Non-standard analysis | In general $^*X\supseteq\{^*x;x\in X\}$ for every set X. The equality holds if and only if $X$ is finite (because for $X$ infinite there is always a sequence not in equivalence class of any constant sequence).
So $^*\mathbb{N}$ contains some (actually many) elements other than $^*n$ for $n\in\mathbb{N}$. Such elements are usually called nonstandard natural numbers.
Because $\mathbb{N}$ is an integer part of $\mathbb{R}$, the same thing holds for their $^*$-images. That is, $^*\mathbb{N}$ is an integer part of $^*\mathbb{R}$. For the same reasons $^*\mathbb{N}$ is a subsemiring of $^*\mathbb{R}$, it satisfies mathematical induction, and so on ...
What exactly lies in $^*\mathbb{N}$ depends on the details of how $^*$ was constructed (more different approaches exist). But for virtually all purposes this is unimportant. Equally unimportant is whether $^*x$ is the sequence $(x,x,x,\ldots)$ or some other object. What matters are the properties of $^*$ (nonstandard principles). |
Real Analysis - Subspace of separable metric spaces is closed and open | The phrase "subspace of a seperable space is seperable" means that it is seperable with the subspace topology. In the subspace topology on A, A is closed and open, but it isn't in the topology on X. |
Is there an example of a graph with an edge space that is not generated by its cycles and cuts? | I think your proof is correct. I found the same question when I was reading Diestel's Graph Theory. In the hint to exercise 26, he mentioned that considering a cycle is useful.
However, your proof can perhaps be simplified a little. You only need to show that $(1,1,0,0),(0,1,1,0),(0,0,1,1),(1,0,0,1)$ cannot span $(1,0,0,0)$, as in $C_4$ the dimension of the edge space is $4$. One quick way to see this: each of these four vectors has even weight, and over $\mathbb{F}_2$ any sum of even-weight vectors again has even weight, so the odd-weight vector $(1,0,0,0)$ is not in their span.
Hope this helps! |
How to determine for what value of P does A have a higher probability than B | Ok, so we need at least 3 of $A$s engines to be running and at least 2 of $B$s engines. So for fixed $P$ we want to find out the probability of these happening, and compare them.
So now the question becomes, what is the probability? Well if we label $A$s engines $a,b,c,d,e$ then we want at least 3 of these to run, so we could have $\{a,b,c\}$, $\{a,b,d\}$, $\{a,b,e\}$... but counting these separately is a bother. The quick way is to use the binomial distribution. So we have
$$\mathbb{P}(\text{at least 3 engines run})=\binom{5}{3}p^3(1-p)^2+\binom{5}{4}p^4(1-p)^1+\binom{5}{5}p^5(1-p)^0
$$
Now you do the same thing for plane $B$ and find for which choice of $p$ is the $A$ probability bigger than the $B$ probability.
Another helpful insight may be to re-write the problem. E.g. say you have a biased coin: what range of biases would make it more likely to throw at least 3 heads out of 5 than at least 2 heads out of 3?
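If you want to check your algebra numerically, here is a small sketch (my own; it assumes plane $A$ flies with at least $3$ of $5$ engines and plane $B$ with at least $2$ of $3$):

```python
from math import comb

def at_least(k, n, p):
    """P(at least k successes in n independent trials with success prob p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for p in (0.25, 0.5, 0.75):
    a, b = at_least(3, 5, p), at_least(2, 3, p)
    print(f"p={p}:  A={a:.4f}  B={b:.4f}")
# The two curves cross at p = 1/2, where both probabilities equal 0.5.
```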
If $P$ is the set of all distributions, the only sufficient subfield is the trivial one | Let $\mathcal{A}'$ be a sufficient subfield of $\mathcal{A}$. We'll show that $\mathcal{A}\subseteq\mathcal{A}'$. Let $B\in\mathcal{A}$. Since $\mathcal{A}'$ is sufficient, there's some $\mathcal{A}'/\overline{\mathfrak{B}}$-measurable $\varphi_B:\Omega\rightarrow\overline{\mathbb{R}}$ such that $p(B)=\int_\Omega fdp$ for all $p\in P$.
Now let $\omega\in\Omega$ and set $p:=\delta_\omega$, where $\delta_\omega$ is the Dirac measure on $\left(\Omega,\mathcal{A}\right)$. Then $p(B)=\mathbb{1}_B\left(\omega\right)$ and $\int_\Omega fdp=f\left(\omega\right)$. Since $\omega$ was arbitrary, $f=\mathbb{1}_B$. So $B\in\mathcal{A}'$.$\square$ |
Proof: How to prove $n$ is odd if $n^2 + 3$ is even | $n^2+3$ is even $\iff$ $\ n^2$ is odd $\iff$ $\ n$ is odd. |
Show that every integer eigenvalue of $A$ divides the determinant of $A$. | Let $P(x)=\sum_{k=0}^na_kx^k$ be the charasteristic polynomial. Then $|P(0)|=|\det A|$. Moreover, the coefficients of $P$ are integer. If $\lambda$ is an integer eigenvalue, then
$$0=P(\lambda)=P(0)+\sum_{k=1}^na_k\lambda^k$$
Therefore
$$|\det A|=|P(0)|=|\lambda|\cdot\left|\sum_{k=1}^na_k\lambda^{k-1}\right|$$
Hence $\lambda$ divides $\det A$. |
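As a quick illustration (my own example): for $A=\begin{pmatrix}2&1\\0&3\end{pmatrix}$ we have $\det A = 6$, and the integer eigenvalues $2$ and $3$ indeed both divide $6$.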
Confidence intervals review | Use a $t$-test. Compute a solution $c$ of the equation $$F(c) = \frac{1}{2}(1 + 0.98)$$ where $F(c)$ is the distribution function of the Student's t-distribution (use a table to solve this equation), and compute the sample std. deviation $s$ and the sample mean ${x}$. The required confidence interval is simply $$\left( x - \frac{sc}{\sqrt{n}}, x + \frac{sc}{\sqrt{n}} \right)$$ where $n$ is the number of samples; here $n=4$. |
Proof that 1-P(B|C)=P(~B|C). Is everything correct? | Your are correct, and the proof is rather simple (not requiring the wall of text you wrote :)
$$\begin{align}P(\neg B|C)&=\frac{P(\neg B \land C)}{P(C)} &\text{by definition}
\\&= \frac{P(C) - P(B\land C)}{P(C)} & \text{Because $B\land C$ and $\neg B\land C$ form a partition of $C$}
\\&=\frac{P(C)}{P(C)}-\frac{P(B\land C)}{P(C)}&\text{Algebraic manipulation}
\\&=1-P(B|C)&\text{by definition}\end{align}$$
Note: I assume here that $P(C)>0$, i.e. that $C$ is not an impossible event. Things can get complicated quickly if we look at a more general solution. |
Cardinality of $A$ and the Natural Numbers | Hint:
You can list all the elements of $\mathbb{N}$ in something like an infinite matrix whose entries are indexed by $A=\{1,2,3\}\times \mathbb{N}=\{(x,n): x=1,2,3 \,\text{ and }\, n\in\mathbb{N}\}$ in the following way
$$\begin{bmatrix}1 & 4 & 7 & \cdots \\2 & 5 & 8 & \cdots \\ 3 & 6 & 9 &\cdots\end{bmatrix}$$
So, you're sweeping natural numbers column-wise, i.e. you associate to each entry $(i,j)$, a natural number defined by $3(j-1)+i$ for $i=1,2,3$ and $j \in \mathbb{N}$. For example, we have $(1,2) \mapsto 4$, $(2,1) \mapsto 2$, $(3,3) \mapsto 9$, so on so forth.
Addendum:
If you are interested in a closed form solution, define $f: A \to \mathbb{N}$ by
$$f(x,n) = 3(n-1)+x$$
Then, since $x=1,2,3$, we have that $f(x,n)=m$ if and only if
$$3(n-1)+1 \leqslant m \leqslant 3(n-1)+3$$
$$n-1<(n-1)+\frac{1}{3} \leqslant \frac{m}{3} \leqslant (n-1)+1 = n$$
$$n-1 < \frac{m}{3} \leqslant n$$
which implies that $n = \big\lceil \frac{m}{3}\big\rceil$ and hence, $x = m - 3\big(\big\lceil \frac{m}{3}\big\rceil -1\big)$.
So, $f^{-1}: \mathbb{N} \to A$, is defined by
$$f^{-1}(m) = \bigg(m - 3\big(\lceil \frac{m}{3}\rceil -1\big), \lceil \frac{m}{3}\rceil\bigg)$$
where $\big\lceil x \big\rceil$ denotes the ceiling function, i.e. the smallest integer $n$ such that $x \leqslant n$.
So, $f: A \to \mathbb{N}$ has an inverse and therefore, it's a bijection between $A$ and $\mathbb{N}$, proving that they have the same cardinality. |
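A brief round-trip check of these formulas (my own sketch):

```python
from math import ceil

def f(x, n):                 # {1,2,3} x N  ->  N
    return 3 * (n - 1) + x

def f_inv(m):                # N  ->  {1,2,3} x N
    n = ceil(m / 3)
    return m - 3 * (n - 1), n

# f_inv inverts f, and f inverts f_inv, on a large initial range.
assert all(f_inv(f(x, n)) == (x, n) for x in (1, 2, 3) for n in range(1, 200))
assert all(f(*f_inv(m)) == m for m in range(1, 600))
```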
Degree or valency of a Cayley graph | The answer may depend on definitions of a Cayley graph and vertex degree. Namely, if we consider the directed Cayley graph then in-degree of its each vertex equals to its out-degree and equals to $|S|$. If we consider the undirected Cayley graph then we assume that $S$ is symmetric and without the identity $\{e\}$ and then degree of its each vertex equals $|S|$. |
Explain why it makes sense that the rate of change of the area increases as the area increases. | First note that the area of the circular ripple is $A=\pi r^2$
$\dfrac{dA}{dt}=2\pi r\dfrac{dr}{dt}$
From the question we have, $\dfrac{dr}{dt}=6$ inches per second
So, the rate of change of area with respect to time is $$\dfrac{dA}{dt}=2\pi r(6)=12\pi r$$
Now, to find the rate of change of the area at each radius you were given, plug the values of $r$ into the rate equation above.
Can you take it from here? |
The notation $f o(g)$ from the Landau family of asymptotic notations | My interpretation: this is not a notation anybody seems to be using, and I take it as the result of a sloppily written part of a Wikipedia article.
In more detail: This is the very first time I have ever seen these notations $<o(\cdot)$ and $> o(\cdot)$ (I do research in theoretical computer science); they are not even defined in the very Wikipedia article which uses them once; and I hope never to see them again, as they are at best confusing, at worst nonsensical (with regard to how $o(\cdot)$ is defined).
Now, to try and assign a meaning to them based on this part of the Wikipedia article which uses them without defining them, let's look at $f < o(g)$ (at $+\infty$). It is stated that "$f(x) = \Omega_R(g(x))$ is the negation of $f(x) < o(g(x))$", so going backwards, based on the definition of the former as
$$
\limsup_{x\to\infty} \frac{f(x)}{g(x)} > 0
$$
the negation corresponding to $f(x) < o(g(x))$ would be something like
$$\forall C>0 \exists A\, \forall x> A,\ f(x) < Cg(x)$$
if I did not get the quantifiers wrong. Again, I see absolutely no point in using this notation $f(x) < o(g(x))$, since it's not only confusing, but by its very definition would be equivalent to the much more understandable $f(x) \neq \Omega_R(g(x))$.
PS: note that all that is in the Hardy–Littlewood sense: it is basically never used in computer science, for which $\Omega(\cdot)$ takes a different meaning to begin with. (I am writing this as you included the {computer-science} tag in your question) |
maximizing expectation of exponential of a function | Note that $e^{-x}$ is convex. So we can apply Jensen's inequality, e.g. $f(E(X))\le E(f(X))$ with $f(x)=e^{-x}$
$$E_\tau \left(e^{-F(\theta+\tau)}\right)\ge e^{-E_\tau(F(\theta+\tau))},$$
so $e^{-E_\tau(F(\theta+\tau))}$ is a lower bound on the objective, and maximizing this bound amounts to
$$\arg\max_\theta e^{-E_\tau(F(\theta+\tau))}=\arg\min_\theta E_\tau(F(\theta+\tau)).$$
Aligning matrices, normalization. Calculating coefficients. | Least squares via normal equations
The linear system is
$$
\begin{align}
\mathbb{A} c & = \mathbb{B} \\
%
\left(
\begin{array}{cc}
1 & 1 \\
1 & 2 \\
1 & 3 \\
1 & 4 \\
\end{array}
\right)
%
\left(
\begin{array}{cc}
b \\
a \\
\end{array}
\right)
%
&=
%
\left(
\begin{array}{cc}
3 \\
4 \\
5 \\
6 \\
\end{array}
\right)
%
\end{align}
$$
The normal equations are
$$
\begin{align}
%
\mathbb{A}^{T}\mathbb{A}\, c &= \mathbb{A}^{T}\mathbb{B} \\[2pt]
\left(
\begin{array}{cc}
4 & 10 \\
10 & 30 \\
\end{array}
\right)
%
\left(
\begin{array}{cc}
b \\
a \\
\end{array}
\right)
%
&=
%
\left(
\begin{array}{cc}
18 \\
50 \\
\end{array}
\right)
%
\end{align}
%
$$
with the solution
$$
\boxed{
\left(
\begin{array}{cc}
b \\
a \\
\end{array}
\right)_{LS}
=
\left(
\begin{array}{cc}
2 \\
1 \\
\end{array}
\right)
}
$$
The residual error vector is
$$
r_{LS} = \mathbf{0},
$$
so the fit is exact, which implies that the data vector $\mathbb{B}$ is in the column space of $\mathbb{A}$. The prescription in terms of column vectors is $\mathbb{B}=2\mathbb{A}_{1} + \mathbb{A}_{2}$.
Least squares via calculus
The target of minimization, the merit function, sums over the rows $r$ and columns $c$:
$$
M(a,b) = \sum_{r=1}^{2}\sum_{c=1}^{2} \left( \mathbf{A}_{rc} - a \mathbf{B}_{rc} - b\right)^{2}
$$
which (with $r$ now running over the four rows of the stacked system $\mathbb{A}c=\mathbb{B}$ above) leads to
$$
\begin{align}
%
\left(
\begin{array}{cc}
%
\sum^{4}_{r=1}\mathbb{A}_{r,1}\mathbb{A}_{r,1} &
\sum^{4}_{r=1}\mathbb{A}_{r,1}\mathbb{A}_{r,2} \\
%
\sum^{4}_{r=1}\mathbb{A}_{r,2}\mathbb{A}_{r,1} &
\sum^{4}_{r=1}\mathbb{A}_{r,2}\mathbb{A}_{r,2} \\
%
\end{array}
\right)
%
%
\left(
\begin{array}{cc}
b \\
a \\
\end{array}
\right)
%
&=
%
\left(
\begin{array}{cc}
%
\sum^{4}_{r=1}\mathbb{A}_{r,1}\mathbb{B}_{r} \\
%
\sum^{4}_{r=1}\mathbb{A}_{r,2}\mathbb{B}_{r} \\
%
\end{array}
\right)
%
\end{align}
%
$$
which yields the same solution.
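Both routes can also be cross-checked numerically; a short sketch using NumPy's least-squares routine:

```python
import numpy as np

# Design matrix and data vector from the normal-equations section above.
A = np.array([[1, 1], [1, 2], [1, 3], [1, 4]], dtype=float)
B = np.array([3, 4, 5, 6], dtype=float)

c, *_ = np.linalg.lstsq(A, B, rcond=None)
print(c)            # [2. 1.]  ->  b = 2, a = 1
print(B - A @ c)    # residual vector is (numerically) zero
```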
Complete metric space, with floor function. | Since $\langle\mathbb R,d\rangle$ isn’t complete, you’d be better off looking for a Cauchy sequence that doesn’t converge. For both parts it may help to prove the following facts.
If $x\in\mathbb{Z}$ and $0<\epsilon\le 1$, $$B(x,\epsilon)=[x,x+\epsilon)\;.$$
If $x\in\mathbb R\setminus\mathbb Z$, and $0<\epsilon\le \min\Big\{x-\lfloor x\rfloor,\lceil x\rceil-x\Big\}$, $$B(x,\epsilon)=(x-\epsilon,x+\epsilon)\;.$$
Note that this is almost what happens in $\mathbb R$ with the usual metric; the only points that behave differently are the integers. For (1) you could try to find a $d$-Cauchy sequence that would converge to some integer $n$ in the usual metric but not in this one.
For (2), show that for each integer $n$, the interval $[n,n+1)$ has exactly the same compact subsets in $\langle\mathbb R,d\rangle$ as it has in the usual topology. Is it the union of countably many of these compact subsets? |
Being second countable is invariant under perfect mapping | I'll copy here a proof of a more general result from the book Engelking, Ryszard (1989): General Topology. Heldermann Verlag, Berlin. ISBN 3885380064.
Theorem 3.7.19. If there exists a perfect mapping $f \colon X \to Y$ onto a space $Y$, then $w(Y) \le w(X)$.
Here $w(X)$ is the weight of a topological space, i.e.
$$w(X)= \min \{|\mathcal B|; \text{$\mathcal B$ is a base for $X$}\}+\aleph_0.$$
Proof. Let $w(X) = \mathfrak m$. Since the validity of our theorem is obvious for $\mathfrak m < \aleph_0$, we
can assume that $\mathfrak m \ge \aleph_0$. Let $\{U_s\}_{s\in S}$ be a base for $X$ such that $|S| = \mathfrak m$ and let $\mathcal T$ be the
family of all finite subsets of $S$. Since $|\mathcal T| = \mathfrak m$ it suffices to show that the family $\{W_T\}_{T\in\mathcal{T}}$,
where $W_T = Y \setminus f(X \setminus \bigcup_{s\in T} U_s)$, is a base for $Y$. It follows from the definition that the
sets $W_T$ are open. Let us take a point $y \in Y$ and a neighbourhood $W \subset Y$ of $y$. The
inverse image $f^{-1}(y)$ is a compact subset of $f^{-1}(W)$; thus there exists a $T \in \mathcal T$ such that
$f^{-1}(y)\subset \bigcup_{s\in T} U_s \subset f^{-1}(W)$.
Clearly $y \in W_T$, and since
$$Y\setminus W = f(X\setminus f^{-1}(W)) \subset f(X\setminus \bigcup_{s\in T} U_s)$$
we have $W_T\subseteq W$.
The above is taken verbatim from Engelking's book. But in the place where he writes $w(X)<\aleph_0$, he probably means $|S|<\aleph_0$. (Since $w(X)$ is infinite by definition.)
If a topological space has a finite base, then the topology is finite. The map $f$ is quotient, since every surjective closed continuous map is quotient. This means that for every open set $U\subseteq Y$ the preimage $f^{-1} (U)$ is open. The assignment $U\mapsto f^{-1}(U)$ is an injection from the topology of $Y$ to the topology of $X$. Hence $Y$ has, in this case, only finitely many open sets. (And thus it has a finite base.)
How to get a lower bound of the number of numbers left after a sieve? | It depends on what you mean by simple. Each sieve is different, and different methods apply. The short answer is no --- for instance, one cannot deduce a nontrivial lower bound on an application of the Sieve of Eratosthenes when applied to twim primes. [Doing so would lead to a proof of the twin prime conjecture].
A good book for figuring out how sieves work is Sieve Methods by Halberstam. The first sieve considered in detail is the Sieve of Eratosthenes and its various applications. |
Probability, objects going around a table | For question $4$, it is the same as the probability that at time $F-1$ the red ball is at position $n$ and the green ball is at position $F-(n+1)$, multiplied by the probability that the green ball moves to the left, i.e.
$$
p_n^s
= \binom{F-1}{n}p^nq^{F - (n+1)}\cdot q
= \binom{F-1}{n}p^n q^{F - n}.
$$
For question $5$, this is the probability that the balls either meet at position $n$ with the red ball arriving first, i.e. $p_n^s$, or the balls meet at some point $k > n$, thus it is
$$
p_n^s + \sum_{k=n+1}^F p_k
= \binom{F-1}{n}p^n q^{F - n} + \sum_{k=n+1}^F\binom{F}{k}p^k q^{F - k},
$$
which is the term on the right of your last equation. |