title | upvoted_answer
---|---
Finding a span from an equation | To begin, note that you are trying to solve the simultaneous system: $\begin{cases} \begin{bmatrix}1&-2&3&4\end{bmatrix}\bullet \begin{bmatrix}a&b&c&d\end{bmatrix}=0\\\\
\begin{bmatrix}3&-5&7&8\end{bmatrix}\bullet \begin{bmatrix}a&b&c&d\end{bmatrix}=0\end{cases}$
This can be rewritten as a matrix equation $\begin{bmatrix}1&-2&3&4\\3&-5&7&8\end{bmatrix}\begin{bmatrix}a\\b\\c\\d\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}$
Via row reductions of the augmented matrix, you arrived at $\left[\begin{array}{cccc|c}1&0&-1&-4&0\\0&1&-2&-4&0\end{array}\right]$
Interpreting this as a system of equations, this is:
$\begin{cases} 1a+0b-1c-4d=0\\ 0a+1b-2c-4d=0\end{cases}$ which rewritten is $\begin{cases} a = c+4d\\b=2c+4d\end{cases}$
Including the often forgotten lines that $c=c$ and $d=d$, this implies:
$\begin{cases} a = 1c+4d\\b=2c+4d\\c=1c+0d\\d=0c+1d\end{cases}$ or written another way $\begin{bmatrix}a\\b\\c\\d\end{bmatrix} = \begin{bmatrix}c+4d\\2c+4d\\1c+0d\\0c+1d\end{bmatrix}$
Rewritten one last time, that is $\begin{bmatrix}a\\b\\c\\d\end{bmatrix} = c\begin{bmatrix}1\\2\\1\\0\end{bmatrix}+d\begin{bmatrix}4\\4\\0\\1\end{bmatrix}$
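As a quick numerical sanity check (a small numpy sketch; the coefficient matrix is the one from the row reduction above), each basis vector should lie in the null space:

```python
import numpy as np

# coefficient matrix of the homogeneous system above
A = np.array([[1, -2, 3, 4],
              [3, -5, 7, 8]])
basis = [np.array([1, 2, 1, 0]), np.array([4, 4, 0, 1])]
# each basis vector should satisfy A v = 0
assert all(np.all(A @ v == 0) for v in basis)
```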
That is to say, the solution set to the original question of finding which vectors are perpendicular to $u_1$ and $u_2$ is the set of vectors spanned by the basis $\begin{bmatrix}1\\2\\1\\0\end{bmatrix}$ and $\begin{bmatrix}4\\4\\0\\1\end{bmatrix}$, the vectors appearing in the image. |
Show that orthogonal projection is diagonalizable | Let $u_1,\ldots,u_d$ be an orthonormal basis of $V$ so that the first $k$ basis vectors lie in the subspace $S$. Then $P_S(u_j)=u_j$ for $j\le k$. Also, $P_S(u_j) = 0\cdot u_j$ for $j > k$.
More details: A linear transformation $T:V\rightarrow V$ is diagonalizable if there is a basis of $V$ consisting of eigenvectors of the transformation. An orthogonal projection $P_S$ acts as the identity on the subspace $S$ and maps any element of $S^\perp$ (the vectors orthogonal to $S$) to $0$. $P_S$ is defined by $P_S^2=P_S$ and $P_S^*=P_S$. The image of the orthogonal projection $P_S$ will be $S\subset V$ and the kernel will be $S^{\perp}$.
Because we know that $\dim(S)+\dim(S^{\perp}) = \dim(V)$, and we know that $P_{S}$ acts as the identity on $S$ and acts as $0$ on $S^{\perp}$, we can diagonalize $P_{S}$ by any basis $u_1,\ldots, u_d$ with the first $\dim(S)$ elements in $S$ and the last $\dim(S^{\perp})$ elements in $S^{\perp}$. Such a basis always exists, for instance by extending a basis of $S$ to a basis of $V$, then applying the Gram Schmidt process.
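A small numpy sketch of this diagonalization, using a hypothetical random $2$-dimensional subspace $S\subset\mathbb{R}^4$:

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 2)))  # orthonormal basis of a random S
P = Q @ Q.T                                       # orthogonal projection onto S
assert np.allclose(P @ P, P)                      # P_S^2 = P_S
assert np.allclose(P, P.T)                        # P_S^* = P_S
# eigenvalues are 1 (on S) and 0 (on S-perp), so P_S is diagonalizable
assert np.allclose(np.sort(np.linalg.eigvalsh(P)), [0, 0, 1, 1], atol=1e-8)
```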
Note that $P_S$ is actually unitarily/orthogonally diagonalizable, since we can diagonalize it with an orthogonal basis. |
Diffeomorphism of tangent spaces | For each fixed $(p,q)\in M\times N$ you have an isomorphism $T_{(p,q)}(M\times N)\cong T_p M\times T_q N$. The isomorphism is merely $X\mapsto (\pi^1_\ast(X),\pi^2_\ast(X))$ where $\pi^i$ is the projection of $M\times N$ onto the $i$th coordinate.
So, now consider the map $T(M\times N)\to TM\times TN$ which, on fibers, is precisely the map described above. If we can prove that this is smooth, then evidently it is a bijective bundle map, and so a bundle isomorphism.
But, check in coordinates (the proper coordinates--think about how you can combine coordinates of $M$ and $N$ to get coordinates of $M\times N$) that this map has a very nice coordinate representation (it's constant). |
If the coordinates of the vertices $B$ and $D$ are $(7,3)$ and $(2,6)$ respectively, find the coordinates of the vertices $A$ and $C$. | They tell you $B$ and $D$, and they tell you that it is a rectangle with sides parallel to the coordinate axes. So I would start by plotting $B$ and $D$ and drawing a rectangle using those vertices. Since $C$ has to touch $D$ with a straight line, the $x$ coordinates must match, which is $2$. Since $C$ has to touch $B$ with a straight line, the $y$ coordinate must match, which is $3$. So $C$ has to be $(2,3)$. Use similar reasoning to find $A$. |
Prove $\int_{-\infty}^{\infty} \frac{dx}{(1+x^2)^{n+1}} = \frac{(1)(3)(5)...(2n-1)}{(2)(4)(6)...(2n)} \pi \ \ \ \forall n \in \mathbb{N}$ | Another way to use residue calculus is to look at the generating function (for very small $t$) $$\sum_{n=0}^{\infty} \int_{-\infty}^{\infty} \frac{t^{n+1}}{(1+x^2)^{n+1}} \, \mathrm{d}x = \int_{-\infty}^{\infty} \frac{t / (1+x^2)}{1 - \frac{t}{1+x^2}} \, \mathrm{d}x = t \int_{-\infty}^{\infty} \frac{1}{1+x^2 - t} \, \mathrm{d}x.$$ This has its simple pole at $z = i \sqrt{1-t}$ with residue $-\frac{i}{2\sqrt{1-t}}$, so $$\sum_{n=0}^{\infty} \Big(\int_{-\infty}^{\infty} \frac{1}{(1+x^2)^{n+1}} \, \mathrm{d}x \Big) t^{n+1} = \frac{\pi t}{\sqrt{1-t}} = \sum_{n=0}^{\infty} \pi \binom{-1/2}{n} (-1)^n t^{n+1}$$ and you get your integral by comparing coefficients. |
Is my book wrong? Fractional word problem. | You're correct (I will write $10\frac12$ as $10.5$ for clarity).
You need $\dfrac16$ of $10.5 \text{ acres}$ for the roads. So you're left with $10.5-(10.5)\cdot\dfrac16=(10.5)\cdot\dfrac56$ acres of land.
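This arithmetic, including the division into quarter-acre lots, can be verified with exact fractions (a short Python sketch):

```python
from fractions import Fraction

total = Fraction(21, 2)            # 10 1/2 acres
remaining = total - total * Fraction(1, 6)
assert remaining == total * Fraction(5, 6)
lots = remaining / Fraction(1, 4)  # number of quarter-acre lots
assert lots == 35
```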
To obtain the number of $\dfrac14\text{ acre}$ lots contained in $(10.5)\cdot\dfrac56\text{ acres}$, divide the former by the latter: $\dfrac{(10.5)\cdot\dfrac56\text{ acres}}{\dfrac14\text{ acres}}=10.5\cdot\dfrac{20}{6}=\dfrac{21}2\cdot\dfrac{20}{6}=35$. |
Changing a number between arbitrary bases | Let $x$ be a number. Then if $b$ is any base, $x \% b$ ($x$ mod $b$) is the last digit of $x$'s base-$b$ representation. Now integer-divide $x$ by $b$ to amputate the last digit.
Repeating this procedure yields the digits of $x$ from least significant to most: it comes out "little end first."
EDIT: Here is an example to make things clear.
Let $x = 45$ and $b = 3$.
 x    x mod 3
45       0
15       0     (integer-divide x by 3 at each step)
 5       2
 1       1
Read the last column from bottom to top to get the base-3 expansion you seek: we see that $45 = 1200_3$. Let us check.
$$1\cdot 3^3 + 2\cdot 3^2 + 0 + 0 = 27 + 18 = 45.$$
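The procedure translates directly into code; a minimal Python version, peeling off digits little end first exactly as in the table above:

```python
def to_base(x: int, b: int) -> str:
    # collect digits least significant first via repeated mod / integer division
    digits = []
    while x > 0:
        digits.append(x % b)
        x //= b
    # reverse so the most significant digit comes first
    return "".join(str(d) for d in reversed(digits)) or "0"

assert to_base(45, 3) == "1200"
```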
I hope this helps you. |
Revolutions per second? | This involves precisely the same notions as your other recent questions. The circumference of the grinding wheel (in feet) is $\pi$. So if the thing is going at $r$ revolutions per second, the linear speed of the outer part of the grinding surface is $\pi r$ feet per second. This speed should be $\le 6000$ feet per second. So the largest allowed $r$ is the one for which $\pi r=6000$. Now solve for $r$.
By the way, $6000$ feet per second seems awfully fast! |
Distribution of $X-Y$ where $X,Y$ iid ~U[0,a] | Let's do it for special case $a=1$.
Observe that $f(x-y)g(y)$ takes values in $\{0,1\}$ and: $$f(x-y)g(y)=1\iff f(x-y)=1\text{ and } g(y)=1\iff 0<x-y<1\text{ and }-1<y<0\iff$$$$ x-1<y<x\text{ and }-1<y<0$$
That inspires us to distinguish the cases:
$x\leq-1$ resulting in $\int^{\infty}_{-\infty} f(x-y)g(y)dy=0$ because the integrand is $0$ for every $y$.
$-1<x<0$ resulting in $\int^{\infty}_{-\infty} f(x-y)g(y)dy=\int_{-1}^xdy=1+x$
$0<x<1$ resulting in $\int^{\infty}_{-\infty} f(x-y)g(y)dy=\int_{x-1}^0dy=1-x$
$1\leq x$ resulting in $\int^{\infty}_{-\infty} f(x-y)g(y)dy=0$ because the integrand is $0$ for every $y$.
So actually we found that $h(x)=1-|x|$ if $-1<x<1$ and $h(x)=0$ otherwise.
This is the result for $a=1$.
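A deterministic numerical check of this triangular density, via a Riemann-sum approximation of the convolution integral (an illustrative Python sketch):

```python
import numpy as np

ys = np.linspace(-1.5, 1.5, 300001)
dy = ys[1] - ys[0]
f = lambda t: ((0 < t) & (t < 1)).astype(float)    # density of X ~ U[0,1]
g = lambda t: ((-1 < t) & (t < 0)).astype(float)   # density of -Y
for x in (-0.5, 0.0, 0.25, 0.75):
    h = float(np.sum(f(x - ys) * g(ys)) * dy)      # Riemann sum for the convolution
    assert abs(h - (1 - abs(x))) < 1e-3            # matches 1 - |x|
```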
We can find the more general case by observing that if $Z$ has PDF $h$, then $aZ$ has PDF $\frac1{a}h(\frac{x}{a})$; and $X-Y$ for $X,Y\sim U[0,a]$ is distributed as $a$ times the difference of two independent $U[0,1]$ variables, so its PDF is $\frac1{a}h(\frac{x}{a})=\frac1a\left(1-\frac{|x|}{a}\right)$ for $|x|<a$. |
Basic definition of numerical analysis | https://people.maths.ox.ac.uk/trefethen/NAessay.pdf
This is the essay my numerical analysis professor always hands out at the start of the semester. It has a good explanation. |
Can someone explain why acceleration is not negative in this problem? | For part $a$, you need to solve for $v_0$. At maximum height, $v = 0$. So $0 = v_0^2 + 2(-9.8)(46)$, or $v_0 = \sqrt{2(9.8)(46)} \approx 30$ m/s. |
Finishing my $\epsilon - \delta$ proof | What you want to show is that,
for any $\delta > 0$
there is an $\epsilon > 0$
such that
if
$|x+1| < \epsilon$
then
$|\dfrac{x^2-1}{x^2+x}
-2| < \delta
$.
You have worked out
very nicely that
$|\dfrac{x^2-1}{x^2+x}
-2|
=|\frac{x+1}{x}|
$.
Suppose
$|x+1| < \epsilon$.
To make sure that
the $x$ in the denominator
does not cause problems,
we want to choose $x$ so it is
not close to zero.
If we choose
$\epsilon < \frac12$,
then,
since
$-1-\epsilon < x
< -1+\epsilon
$,
then
$x < -\frac12$,
so
$|x| > \frac12$.
Therefore,
$|\frac{x+1}{x}|
<\frac{\epsilon}{\frac12}
=2\epsilon
$.
This shows that
if
$|x+1| < \epsilon$
and
$\epsilon < \frac12$,
then
$|\dfrac{x^2-1}{x^2+x}
-2| < 2\epsilon
$.
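The bound can be spot-checked numerically (a small Python sketch with an illustrative $\delta$, sampling points with $0 < |x+1| < \epsilon$):

```python
def f(x):
    return (x * x - 1) / (x * x + x)

delta = 0.1
eps = min(0.5, delta / 2)
for sign in (-1, 1):
    for i in range(1, 1000):
        x = -1 + sign * eps * i / 1000   # points with 0 < |x+1| < eps
        assert abs(f(x) - 2) < delta
```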
Therefore, to make $\left|\dfrac{x^2-1}{x^2+x}-2\right| < \delta$, choose $\epsilon$ so that $2\epsilon < \delta$, i.e. $\epsilon < \min\left(\frac12,\frac{\delta}{2}\right)$. |
Minimization problem, both terms in function positive | 1- Compute the minimum of your problem by derivating $f(x,y)=10x+3y$
2- Evaluat the minimum value you can have of the edges
3- Take the best solution
For more details here |
lim of integration of a non-negative function: $\lim\limits_{n\rightarrow\infty} \left[\int_a^b (f(x))^n \, dx\right]^\frac{1}{n} = M$ | Hints:
1) Recall that for any positive number $\alpha$, one has
$$ \lim\limits_{n\rightarrow\infty} \alpha^{1/n}=1.$$
2) One has the estimate:
$$\biggl(\int_a^b (f(x))^n \,dx\biggr)^{1/n}\le
\biggl(\int_a^b M^n \,dx\biggr)^{1/n}
= (b-a)^{1/n}\cdot M .$$
3)
Let $\epsilon>0$. By continuity of $f$, choose a non-degenerate interval $[c,d]$ such that $f(x)\ge M-\epsilon$ for all $x\in [c,d]$. Note
$$
\biggl(\int_a^b (f(x))^n\, dx\biggr)^{1/n}\ge \biggl(\int_c^d (M-\epsilon)^n\,dx\biggr)^{1/n} =(M-\epsilon) (d-c)^{1/n}.
$$
Be careful not to imply that the limit exists before proving that it indeed does exist. (From 2), you can show the $\limsup$ is at most $M$; and from $3$, you can show that the $\liminf$ is at least $M$.) |
How to prove inequality with stricly increasing concave function? | If $u\colon [0,+\infty) \to \mathbb{R}$ is a concave function, with $u(0) \geq 0$, then, setting $\mu := x_i/(x_i+x_j) \in (0,1)$,
$$
u(\mu R) = u ((1-\mu) 0 + \mu R) \geq
(1-\mu) u(0) + \mu u(R) \geq \mu \, u(R).
$$
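A numerical illustration with the concave function $u(x)=\sqrt{x}$ (which satisfies $u(0)=0$):

```python
import math

u = math.sqrt    # concave on [0, infinity) with u(0) = 0
R = 2.0
for k in range(1, 100):
    mu = k / 100                      # mu in (0, 1)
    assert u(mu * R) >= mu * u(R)     # u(mu R) >= mu u(R)
```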
If $u$ is strictly concave, the inequality is strict, i.e. $u(\mu R) > \mu u(R)$.
In addition, if $u$ is positive, then the above inequality gives the required one. |
How to solve $k\ \cos(x)-(k+1)\sin(x) = k$ such that the solutions differ by $\frac{\pi}{2}$? | You want to achieve
$$\begin{cases}k\cos(x)\ \ \ \ \ \ \ \ -(k+1)\sin(x)\ \ \ \ \ \ \ = k,\\
k\cos(x+\frac\pi2)-(k+1)\sin(x+\frac\pi2)= k\end{cases}$$
or
$$\begin{cases}\ \ \ k\cos(x)-(k+1)\sin(x) = k,\\
-k\sin(x)-(k+1)\cos(x) = k.\end{cases}$$
By summing the squares,
$$k^2+(k+1)^2=2k^2,$$
i.e.
$$k=-\frac12.$$
This is a necessary condition. |
Analytically Understanding The Definite Integral As A Limit Of Sums | First partition the interval $[a,b]$ into $n$ equal parts. Denote the length of each subinterval by $h$ and note that
$$h = \frac{b - a}{n}$$
Now the author wishes to show you the lower sum of the function over these subintervals. If $f : [a,b] \to \mathbb{R}$ is an increasing function then the lower sum may be written as
$$ L(f, [a,b]) = \sum_{k = 0}^{n - 1} f(a + kh)\,h$$
Note that what we are doing here is taking the value of the function at the left edge of each subinterval, and multiplying it by the width of that slice. This is what the equation in the curly braces is trying to tell you; the author just skipped the part where he states that he uses the function $f(x) = x$ for this purpose! Now note that if $f$ is defined by $f(x) = x$ then we have
$$ L(f, [a,b]) = h \sum_{k=0}^{n-1}(a + kh) = h\left(na + h \sum_{k=0}^{n-1}k\right)$$
The sum on the right is the sum of the natural numbers from $1$ to $n - 1$, so we have
$$ \sum_{k=0}^{n-1} k= \sum_{k=1}^{n-1}k = \frac{n(n - 1)}{2} $$
To verify this, first derive the fact that
$$\sum_{k=1}^N k = \frac{N(N+1)}{2}$$
and then set $N = n - 1$. The lower sum then becomes
$$L(f, [a,b]) = h \left( na + h \frac{n(n-1)}{2}\right) = \frac{hn}{2} \left( 2a + h(n-1)\right)$$
Now by substituting the value of $h$ we have
$$ L(f, [a,b]) = \frac{b-a}{2} \left( 2a + \frac{(b-a)(n-1)}{n}\right) = \frac{b-a}{2} \left( 2a + (b-a)\left(1 - \frac{1}{n}\right)\right) = \frac{b^2 - a^2}{2} - \frac{(b-a)^2}{2n}$$
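Before taking the limit, the finite-$n$ lower sum can be sanity-checked numerically against $\frac{b^2-a^2}{2}$, with illustrative values $a=1$, $b=3$, $n=1000$ (the gap shrinks like $\frac1n$):

```python
a, b, n = 1.0, 3.0, 1000
h = (b - a) / n
# left-endpoint (lower) sum for f(x) = x on [a, b]
lower = sum((a + k * h) * h for k in range(n))
# the lower sum approaches (b^2 - a^2)/2 from below
assert abs(lower - (b * b - a * a) / 2) < (b - a) ** 2 / (2 * n) + 1e-9
```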
Then the result follows from $n \to \infty$. |
complex matrices integration | Because $H,G$ are constant (and sums and limits are linear), you have
$$
\int_0^{+\infty} H\, \exp(tF)\, G \exp(-st)\ dt=H\,\left(\int_0^{+\infty} \, \exp(tF)\, \exp(-st)\ dt\right)\,G.
$$
Now
$$
e^{tF}e^{-st}=e^{-(sI-F)t}
$$
As $sI-F$ is invertible, the antiderivative of $e^{-(sI-F)t}$ is $-(sI-F)^{-1}e^{-(sI-F)t}$.
Then
$$
\int_0^{m} \, \exp(tF)\, \exp(-st)\ dt=\left.\vphantom{\int}-(sI-F)^{-1}e^{-(sI-F)t}\right|_{t=0}^{t=m}=(sI-F)^{-1}(I-e^{-(sI-F)m}).
$$
Now you want to take the limit as $m\to\infty$. This doesn't always work. For instance, if $F=0$ and $s=-1$, the limit clearly doesn't exist. You need $F$ to satisfy some condition that guarantees that $\lim_{m\to\infty} e^{-(sI-F)m}=0$. This works, for example, if every eigenvalue of $sI-F$ has positive real part (in particular, if $sI-F$ is positive definite).
Solve quadratic, but only one solution allowed | Notice that putting $x=a$ in $p(x)=x^2-mx+ma-a^2$ gives zero. Hence $x=a$ is always a root of $p(x)$, and you can use the factor theorem to say that
$$p(x)=(x-a)q(x),$$
where $q(x)$ is some polynomial of lower degree than $p$. Staring at $p$ makes it clear that $q(x)$ has degree $1$, and if we look harder, we see it is of the form $(x-b)$ for some $b$. Expanding out,
$$ p(x)=(x-a)(x-b)=x^2-(a+b)x+ab, $$
and therefore we see that $a+b=m$, and $ab=ma-a^2$, which are both true when $b=m-a$. Hence the factorisation of $p$ is
$$ x^2-mx+ma-a^2=(x-a)(x+a-m), $$
and then it's clear that you need to have $a=m-a$, or $m=2a$, to have one root.
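The factorisation is an identity in $x$, $m$, and $a$, which a brute-force check over a grid of integer values confirms (a quick Python sketch):

```python
# p(x) = x^2 - m x + (m a - a^2) should equal (x - a)(x + a - m) identically
for m in range(-3, 4):
    for a in range(-3, 4):
        for x in range(-5, 6):
            assert x * x - m * x + m * a - a * a == (x - a) * (x + a - m)
```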
The book is saying that since $x=a$ is always a solution, for the equation to have only one solution, the other solution must also be $x=a$. |
Prove that if there are $vw ∈ E(G)$ such that $f(v) \neq f(w)$ then $G$ contains the graph $P_4$ as an induced subgraph | Suppose that $vw\in E$ is such that $f(v)\ne f(w)$. Since $x\ne y$, we may assume without loss of generality that $x<y$.
Now $d_G(v,w)=1\le 2$, so $x\le w$ and $y\le v$. Moreover, $d_G(w,w)=0\le 2$, so $y\le w$. Since $x<y$, it follows that $x\ne v$ and $x\ne w$. Suppose that $d_G(x,v)=1$; then the edges $xv$ and $vw$ are a path of length $2$ from $x$ to $w$, so $d_G(x,w)\le 2$, and therefore $y\le x<y$, which is absurd. Thus, $d_G(x,v)=2$, and there is a vertex $z$ distinct from $x$ and $v$ such that $xz,zv\in E$. To show that the vertices $x,z,v,w$ and the edges $xz,zv,vw$ are a $P_4$, you need only show that $w\notin\{x,z,v\}$. We already know that $w\ne x$ and $w\ne v$, so you really need only show that $w\ne z$.
HINT: If $w=z$, then $d_G(x,w)\le 2$. |
How do I show that $\kappa^+ \le 2^\kappa$ for every cardinal $\kappa$? | In $\sf ZFC$ one can argue that since every two cardinals are comparable, either $\kappa^+\leq2^\kappa$, or $2^\kappa\leq\kappa^+$. However from Cantor's theorem $\kappa<2^\kappa$, and by definition we have $\kappa<\lambda\rightarrow\kappa^+\leq\lambda$, and this gives us the wanted proof. But we can prove some part of this without the axiom of choice, and then only use the axiom of choice for the final punch. Here are the details.
This is a combination of the following two theorems. One using the axiom of choice, and another not.
Theorem. ($\sf ZF$) Suppose that $\kappa$ is a well-ordered cardinal, then there exists a surjection from $2^\kappa$ onto $\kappa^+$.
Proof. Note that $\kappa$ is equipotent with $\kappa\times\kappa$ (without an appeal to the axiom of choice, this is proved from the definition of Hessenberg sums for ordinals). We define the following map $\pi\colon\mathcal P(\kappa\times\kappa)\to\kappa^+$: $$\pi(A)=\begin{cases}\alpha & A\text{ is a well-ordering of its domain, and its order type is }\alpha\\0&\text{otherwise}\end{cases}$$
Since every ordinal $\alpha<\kappa^+$ can be injected into $\kappa$, we have that every $\alpha<\kappa^+$ appears in the range of $\pi$, and therefore this is a surjection, as wanted. $\square$
Now to the part where the axiom of choice is invoked.
Theorem. ($\sf ZFC$) Every surjection admits an injective inverse.
Proof. Suppose $f\colon A\to B$ is a surjection, then $\{f^{-1}(b)\mid b\in B\}$ is a family of non-empty sets. From the axiom of choice it follows that there is a function $g\colon B\to A$ such that $g(b)\in f^{-1}(b)$, as wanted. $\square$
Corollary. ($\sf ZFC$) If $\kappa$ is a cardinal, then $\kappa^+\leq2^\kappa$.
Proof. There exists a surjection from $2^\kappa$ onto $\kappa^+$, and therefore an injection from $\kappa^+$ into $2^\kappa$. $\square$ |
Deciding to place a bet on outcome of a dice roll based on the probability | In order to answer the second part, you need to make sure you understand the concept of "expected value". The implied assumption of part b) is that you're playing the game over and over and want to see what's the average profit/loss you make over all these rounds.
With this in mind, the expected value of a bet with two outcomes is just
$$
\begin{align}
E &= (\text{Win Amt.}) \times P(\text{Win}) - (\text{Loss Amt.}) \times P(\text{Loss}) \\
&= 10(.03549) - 2 (.96451) \\
&= .3549 - 1.92902 \\
&\approx -1.6
\end{align}
$$
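Plugging in the numbers from the display above (a one-line check):

```python
p_win = 0.03549
expected = 10 * p_win - 2 * (1 - p_win)
assert abs(expected - (-1.57412)) < 1e-9
assert expected < 0   # a losing bet on average
```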
Hence on average, you lose about $1.6 for every round of the game. Should you play? |
Velocity dependent force with arbitrary power | To find the general solution of the differential equation
$$m\frac{dv}{dt} = -kv|v|^{n-1}$$
Consider the substitution $u = |v|^{1-n}$ for the integration.
The solution of the above should be
$$|v(t)|=\left( \frac{kt}{m} (n-1) + v_0^{1-n} \right) ^{1 \over 1-n}$$
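The closed form can be checked against a direct numerical integration of the equation of motion; here with the illustrative values $n=3$, $k=m=1$, $v_0=1$, for which the formula gives $|v(t)|=(1+2t)^{-1/2}$:

```python
def rhs(v):
    # m dv/dt = -k v |v|^(n-1) with m = k = 1, n = 3
    return -v * abs(v) ** 2

v, dt = 1.0, 1e-4
for _ in range(10000):                # RK4 steps up to t = 1
    k1 = rhs(v)
    k2 = rhs(v + dt * k1 / 2)
    k3 = rhs(v + dt * k2 / 2)
    k4 = rhs(v + dt * k3)
    v += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
assert abs(v - (1 + 2 * 1.0) ** -0.5) < 1e-8
```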
To find $x(t)$, consider both cases of the absolute value.
You should find that $n=2$ is a special case for $x(t)$ as well (the integration produces a logarithm there). |
Defining continuity in real analysis | First, let's note that we can make the two definitions of continuity you have above agree, regardless if the domain of $f$ is an open subset of $\mathbb{R}$ or not. Indeed, consider generalizing the sequential definition:
Let $U \subset \mathbb{R}$, let $f : U \to \mathbb{R}$ be a function, and suppose $x_0 \in U$. Then $f$ is continuous at $x_0$ when for all sequences $\{x_n\}_{n=1}^{\infty}$ in $U \setminus \{x_0\}$ which converge to $x_0$ it holds that $\lim\limits_{n \to \infty} f(x_n) = f(x_0)$.
This definition agrees with the $\epsilon-\delta$ definition of continuity when the domain and codomain of $f$ are metric spaces. In particular, it works for isolated points, since if $\{x_0\}$ is isolated in $U$, then there are no sequences in $U \setminus\{x_0\}$ converging to $x_0$, so there's nothing to check. (This is a manifestation of the logic that anything we say about the elements in the empty set is true, since there are no elements in the empty set to check). Notice as well that this definition implies that $f(x) = \sqrt{x}$ is continuous at $0$ as a map $[0,\infty) \to \mathbb{R}$, since the only sequences $\{x_n\}_{n=1}^{\infty}$ in the domain $[0,\infty) \setminus \{0\} = (0,\infty)$ must converge to $0$ from the right, and hence $\sqrt{x_n}$ is defined and $\sqrt{x_n} \to 0 = \sqrt{0}$ as $n \to \infty$.
For right and left continuity, we can easily adjust the above definition as follows:
Let $U \subset \mathbb{R}$, let $f : U \to \mathbb{R}$ be a function, and suppose $x_0 \in U$. Then $f$ is right (resp. left) continuous at $x_0$ when for all sequences $\{x_n\}_{n=1}^{\infty}$ in $U \setminus \{x_0\}$ which converge to $x_0$ from the right (resp. left) it holds that $\lim\limits_{n \to \infty} f(x_n) = f(x_0)$.
As before, this definition works when the domain is an arbitrary subset of $\mathbb{R}$ (no need for the domain to be an interval); however, unlike the metric space generality we gained with definition for continuity above, the right and left continuity definition needs the ordering on $\mathbb{R}$ to make sense (how could we discuss a sequence converging from the right or left when there is no sense of left-to-right ordering that the number-line has?). The usefulness of considering continuity from the right/left at $x_0$, rather than simply continuity at $x_0$, typically occurs when there are sequences within the domain $U \setminus \{x_0\}$ converging to $x_0$ from both the right and left (such as when $x_0$ is an interior point of $U$), since then it could happen that the function is only right (or only left) continuous at $x_0$. Of course, if a function is continuous at a point from the right and from the left, then it is continuous there too. |
Expected value of a random variable - probability of having a girl and a boy | Every outcome (i.e. sequence of children) has exactly $1$ girl, so $P(G=1) = 1$ and we have
$$E(G) = 1\cdot P(G=1) = 1.$$
Now to calculate $E(B)$.
\begin{eqnarray*}
P(B=n) &=& P(\underbrace{bb\cdots b}_{n\text{ times}}\,g) = \left(\frac{1}{2}\right)^{n+1} \\
&& \\
\therefore E(B) &=& \sum_{n=0}^\infty{nP(B=n)} \\
&=& \sum_{n=0}^\infty{n\left(\frac{1}{2}\right)^{n+1}} \\
\end{eqnarray*}
This sum can be evaluated with a little calculus. For $0 \lt x \lt 1$:
$$\dfrac{1}{1-x} = \sum_{n=0}^\infty{x^n}.$$
Differentiating both sides:
$$\dfrac{1}{(1-x)^2} = \sum_{n=0}^\infty{nx^{n-1}}.$$
Making use of this, we have:
\begin{eqnarray*}
E(B) &=& \dfrac{1}{4} \sum_{n=0}^\infty{n\left(\frac{1}{2}\right)^{n-1}} \\
&=& \dfrac{1}{4} \dfrac{1}{(1-\frac{1}{2})^2} \\
&=& 1.
\end{eqnarray*}
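A quick numerical check of this sum, truncating the series far into its tail:

```python
# partial sum of n * (1/2)^(n+1); the tail beyond n = 200 is negligible
s = sum(n * 0.5 ** (n + 1) for n in range(200))
assert abs(s - 1.0) < 1e-12
```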
Alternative method: a shorter method uses the Law of Total Expectation.
Conditioning on the gender of the first child, denoted by $b$ and $g$:
\begin{eqnarray*}
E(B) &=& E(B \mid g)P(g) + E(B \mid b)P(b) \\
&=& 0\cdot \frac{1}{2} + (1 + E(B))\frac{1}{2} \\
\frac{1}{2}E(B) &=& \frac{1}{2} \\
E(B) &=& 1.
\end{eqnarray*}
The reasoning at line $2$ is that given the first child is a girl, it is certain that there will be $0$ boys. If the first child is a boy we have $1$ boy and are then in the same situation as at the start, so that $E(B \mid b) = 1 + E(B)$. |
Monotone subsequence in a random permutation | As Michael noted in a comment, the expected number of consecutive monotonic subsequences of length $m$ in a permutation of length $n$ is $2(n-m+1)/m!$. The factor $2$ is for the options "decreasing" and "increasing", $n-m+1$ counts the number of positions in which the subsequence can occur, and $1/m!$ is the probability for a given subsequence to be in increasing order. The probability of having at least one such subsequence is lower than this expected value.
Since $\log n$ usually isn't an integer, your question doesn't make sense when taken literally, so I'll assume that you're interested in the asymptotic behaviour. With Stirling's approximation,
$$
(\log_2 n)!\approx\sqrt{2\pi\log_2 n}\left(\frac{\log_2 n}{\mathrm e}\right)^{\log_2 n}=\sqrt{2\pi\log_2 n}\,n^{\log_2\log_2 n-1/\log2}\;,
$$
so $2(n-\log n+1)/(\log n)!$ will eventually be less than $\frac1n$ for sufficiently large $n$. We can plot the functions to find that $2(n-\log_2n+1)/(\log_2n)!=\frac1n$ around $n=676$, and indeed simulations show that the probability is greater than $\frac1n$ for $n=2^9=512$ but less than $\frac1n$ for $n=2^{10}=1024$. (Here's the code.) |
Limiting distribution of $\frac1n \sum_{k=1}^{n}|S_{k-1}|(X_k^2 - 1)$ where $X_k$ are i.i.d standard normal | Let $(X_n)_{n\geq 1}$ be a sequence of i.i.d. standard normal variables. Let $(S_n)_{n\geq 0}$ and $(T_n)_{n\geq 0}$ be given by
$$ S_n = \sum_{i=1}^{n} X_i \qquad\text{and}\qquad T_n = \sum_{i=1}^{n} (X_i^2 - 1). $$
We will also fix a partition $\Pi = \{0 = t_0 < t_1 < \cdots < t_k = 1\}$ of $[0, 1]$. Then define
$$ \begin{gathered}
Y_n = \frac{1}{n}\sum_{i=1}^{n} | S_{i-1} | (X_i^2-1), \\
Y_{\Pi,n} = \frac{1}{n} \sum_{j=1}^{k} |S_{\lfloor nt_{j-1}\rfloor}| (T_{\lfloor nt_j\rfloor} - T_{\lfloor nt_{j-1} \rfloor}).
\end{gathered}$$
Ingredient 1. If $\varphi_{X}(\xi) = \mathbb{E}[\exp(i\xi X)]$ denotes the characteristic function of the random variable $X$, then the inequality $|e^{ix} - e^{iy}| \leq |x - y|$ followed by Jensen's inequality gives
\begin{align*}
\big| \varphi_{Y_n}(\xi) - \varphi_{Y_{\Pi,n}}(\xi) \big|^2
&\leq \xi^2 \mathbb{E}\big[ (Y_n - Y_{\Pi,n})^2 \big] \\
&= \frac{\xi^2}{n^2}\sum_{j=1}^{k} \sum_{i \in (nt_{j-1}, nt_j]} 2 \mathbb{E} \big[ \big( | S_{\lfloor n t_{j-1} \rfloor} | - | S_{i-1} | \big)^2 \big].
\end{align*}
From the reverse triangle inequality, the inner expectation is bounded by
\begin{align*}
2 \mathbb{E} \big[ \big( | S_{\lfloor n t_{j-1} \rfloor} | - | S_{i-1} | \big)^2 \big]
\leq 2 \mathbb{E} \big[ \big( S_{i-1} - S_{\lfloor n t_{j-1} \rfloor} \big)^2 \big]
= 2(i-1-\lfloor nt_{j-1} \rfloor),
\end{align*}
and summing this bound over all $i \in (nt_{j-1}, nt_j]$ yields
$$ \big| \varphi_{Y_n}(\xi) - \varphi_{Y_{\Pi,n}}(\xi) \big|^2
\leq \frac{\xi^2}{n^2} \sum_{j=1}^{k} (\lfloor n t_j \rfloor - \lfloor n t_{j-1} \rfloor)^2
\xrightarrow[n\to\infty]{} \xi^2 \sum_{j=1}^{k} (t_j - t_{j-1})^2. \tag{1} $$
Ingredient 2. From the Multivariate CLT, we know that
$$
\Bigg( \frac{S_{\lfloor nt_j\rfloor} - S_{\lfloor nt_{j-1}\rfloor}}{\sqrt{n}}, \frac{T_{\lfloor nt_j\rfloor} - T_{\lfloor nt_{j-1}\rfloor}}{\sqrt{n}} : 1 \leq j \leq k \Bigg)
\xrightarrow[n\to\infty]{\text{law}} ( W_{t_j} - W_{t_{j-1}}, N_j : 1 \leq j \leq k ),
$$
where $W$ is the standard Brownian motion, $N_j \sim \mathcal{N}(0, 2(t_j - t_{j-1}))$ for each $1 \leq j \leq k$, and all of $W, N_1, \cdots, N_k$ are independent. By the continuous mapping theorem, this shows that
$$ Y_{\Pi,n} \xrightarrow[n\to\infty]{\text{law}} \sum_{j=1}^{k} W_{t_{j-1}} N_j. $$
Moreover, conditioned on $W$, the right-hand side has normal distribution with mean zero and variance $2\sum_{j=1}^{k} W_{t_{j-1}}^2 (t_j - t_{j-1}) $, and so,
$$ \lim_{n\to\infty} \varphi_{Y_{\Pi,n}}(\xi) = \mathbb{E}\left[ \exp\bigg( -\xi^2 \sum_{j=1}^{k} W_{t_{j-1}}^2 (t_j - t_{j-1}) \bigg) \right]. \tag{2} $$
Ingredient 3. Again let $W$ be the standard Brownian motion. Since the sample path $t \mapsto W_t$ is a.s.-continuous, we know that
$$ \sum_{j=1}^{k} W_{t_{j-1}}^2 (t_j - t_{j-1}) \longrightarrow \int_{0}^{1} W_t^2 \, \mathrm{d}t $$
almost surely along any sequence of partitions $(\Pi_k)_{k\geq 1}$ such that $\|\Pi_k\| \to 0$. So, by the bounded convergence theorem,
$$ \mathbb{E}\left[ \exp\bigg( -\xi^2 \sum_{j=1}^{k} W_{t_{j-1}}^2 (t_j - t_{j-1}) \bigg) \right]
\longrightarrow \mathbb{E}\left[ \exp\bigg( -\xi^2 \int_{0}^{1} W_t^2 \, \mathrm{d}t \bigg) \right] \tag{3} $$
as $k\to\infty$ along $(\Pi_k)_{k\geq 1}$.
Conclusion. Combining $\text{(1)–(3)}$ and letting $\|\Pi\| \to 0$ proves that
$$ \lim_{n\to\infty} \varphi_{Y_n}(\xi) = \mathbb{E}\left[ \exp\bigg( -\xi^2 \int_{0}^{1} W^2_t \, \mathrm{d}t \bigg) \right], $$
and therefore $Y_n$ converges in distribution to $\mathcal{N}\big( 0, 2\int_{0}^{1} W_t^2 \, \mathrm{d}t \big)$ as desired. |
Cycloid arc length without integration | Doing it without using the calculus is somewhat complicated, and unfortunately difficult to explain without drawing pictures.
There is a very large literature on the problem. Begin your search by looking for "rectification of the cycloid." (Rectification is a largely obsolete term for finding arclength.)
Famous early rectifications of the cycloid are by Christopher Wren, by Roberval, and by Huygens. All of these rectifications come before the official birth of the calculus. |
Finding the length of a parameterized curve | Hint:
$$\int_0^1\sqrt{2+e^{2t}+e^{-2t}}dt=\int_0^1\sqrt{\left(e^t+e^{-t}\right)^2}dt$$ |
Number of positive integral solutions of $a+b+c+d+e=20$ such that $a<b<c<d<e$ and $(a,b,c,d,e)$ is distinct | If $a<b<c<d<e$ it is trivial that $a,b,c,d,e$ are distinct numbers, so you can shorten the title. We may set $a=x_1$ and $b=a+x_2$, $c=b+x_3$, $d=c+x_4$, $e=d+x_5$, so we are looking for the solutions of
$$ 5x_1+4x_2+3x_3+2x_4+x_5 = 20 $$
with $x_1,\ldots,x_5\in\mathbb{N}^+$. By setting $x_k=1+y_k$, we are looking for the solutions of
$$ 5y_1+4y_2+3y_3+2y_4+y_5 = 5 $$
with $y_1,\ldots,y_5\in\mathbb{N}$. The number of such solutions is the coefficient of $z^5$ in the Taylor series at the origin of
$$ \frac{1}{(1-z^5)(1-z^4)(1-z^3)(1-z^2)(1-z)} $$
i.e. the number of integer partitions of $5$, $p(5)=\color{red}{7}$. |
Number Theory: Divisibility | Hint $\,\ 30a\!-\!16b\, =\, 4(\underbrace{4a\!+\!3b}_{\Large \color{#c00}{14}\,c}) + \color{#c00}{14}(a\!-\!2b)$ |
Is $K\subset\mathbb{R}^2$ homeomorphic to an interval if $K$ is connected but $K\setminus\{x\}$ is not for any $x\in K$? Must it have empty interior? | Copying & expanding comments:
1 is false: take the union of two axes. You can find a topological characterization of intervals in Analytic Topology by Whyburn.
2 is correct: removing an interior point does not disconnect the set, because a disk minus a point is still connected. |
Eigenvectors and Kronecker product | This is in general false. Consider $v_A = v_B = (1, 1)^t$ and $C = \begin{pmatrix}1 & 2 & 3 & 4 \\ 2 & 1 & 4 & 3 \\ 3 & 4 & 1 & 2 \\ 4 & 3 & 2 & 1\end{pmatrix}$. Then $v_A \otimes v_B = (1, 1, 1, 1)^t$ and $C (v_A \otimes v_B) = 10 (v_A \otimes v_B)$, but clearly $C$ is not of the form $A \otimes I_2$.
In general, a single eigenvector doesn't say much about the structure of a matrix. I suppose a version of this statement might be true if there exists a whole eigenbasis of the form $u_i \otimes v_j$ with suitable vectors $u_i, v_j$. |
Finding the orthogonal basis, picture included! | what you have is one orthogonal basis for the column space of $A.$ here is another one $\pmatrix{0\\1\\0}, \pmatrix{1\\0\\1}.$ |
Prove that $L_{2}^{D}$ is an inner product space. | Let $(x_n)_{n=1}^\infty$, $(y_n)_{n=1}^\infty$ be two sequences in $\ell^D_2$ such that $x_n = 0$ for $n \in K$ and $y_n = 0$ for $n \in L$, where $K, L \subseteq \mathbb{N}$ are two sets with finite complements.
Then we have $x_n + y_n = 0$ for $n \in K \cap L$, whose complement is finite so $(x_n)_{n=1}^\infty + (y_n)_{n=1}^\infty = (x_n + y_n)_{n=1}^\infty \in \ell^D_2$.
Similarly, for $\lambda \ne 0$ we have $\lambda x_n = 0$ for $n \in K$ so $\lambda(x_n)_{n=1}^\infty = (\lambda x_n)_{n=1}^\infty \in \ell^D_2$. For $\lambda = 0$, obviously we have $\lambda(x_n)_{n=1}^\infty = 0 \in \ell^D_2$.
Thus, $\ell^D_2$ is a vector subspace of $\ell_2$.
Now, let $\langle\cdot, \cdot\rangle$ be the inner product on $\ell_2$. To show that its restriction is an inner product on $\ell^D_2$, we have to show:
$$\langle x,x\rangle \ge 0, \quad\forall x \in \ell^D_2$$
$$\langle x,x\rangle = 0 \implies x = 0, \quad\forall x \in \ell^D_2$$
$$\langle x,y\rangle = \overline{\langle y,x\rangle}, \quad\forall x,y \in \ell^D_2$$
$$\langle \lambda x + \mu y,z\rangle = \lambda \langle x,z\rangle + \mu \langle y,z\rangle, \quad\forall x,y,z \in \ell^D_2, \forall \lambda, \mu \in \mathbb{C}$$
But every one of these properties already holds for all vectors in $\ell_2$, so it certainly holds for all vectors in $\ell^D_2 \subseteq \ell_2$.
Therefore, $\langle \cdot, \cdot\rangle\Big|_{\ell^D_2}$ is an inner product on $\ell^D_2$. |
If $A$ is connected, is at least one of the sets $\mathrm{Int}A$ and $\mathrm{Bd}A$ connected? | Thanks to Daniel Fischer for his correction!
Consider the union $X$ of two full triangles $T,T'$ in the plane that meet at a vertex, and remove a small disk from the interior of $T$. Then $X$ is connected, and neither the interior nor the boundary of $X$ are connected. |
Problems Using Wolfram Alpha | a) x^2-2x+1
b) 3x^2+6x+2
c) 1+Abs[x+2]
d) 2/(x-1)^2
Directly inputting these into Wolfram Alpha should turn something up. I linked up the first example so you can see what it looks like when entered. |
Different definitions of codimension | $\Rightarrow$ Let $B$ be a basis for $V$ and let $([w_1],\ldots,[w_n])$ be a basis of $X/V$, for each $i$, $[w_i] = \{x + w_i: x \in X\}$.
Let $B' = \{w_1,\ldots,w_n\} \subset V$ be a set of representatives of $[w_1],\ldots,[w_n]$. If $\sum_i \alpha_i w_i = 0$ then $\sum_i [\alpha_i w_i] = [0]$ and then $\sum_i \alpha_i [w_i] = [0]$, but since $[w_i]$ are linearly independent, $\alpha_i = 0$ for $1\leq i \leq n$ and $w_1,\ldots,w_n$ are also linearly independent. Now, if $v \in V$ we know that $[v] = \sum_i \alpha_i [w_i]$ for scalars $\alpha_i$ but then $v - \sum_i \alpha_i w_i \in X$ and there are $x_1,\ldots,x_m$ vectors of $B$ such that $v-\sum_{i=1}^n \alpha_i w_i = \sum_{j=1}^m \beta_j x_j$ and then $x$ is a linear combination of elements of $B$ and $B'$. Since $v$ was generic, every element of $V$ is a linear combination of elementos of $B \cup B'$. Showing that $B\cup B'$ is linearly independent can be accomplished passing to the quotient (like we did to B').
$\Leftarrow$ The hypothesis can be translated as: There is a basis $B$ of $V$ such that $B = B' \cup B''$ where $B'$ is a basis for $X$ and $B''$ has $n$ elements. Now, it's a simple matter of passing to the quotient to show that $\dim (X/V) = n$ |
If $49^n + 16^n +k$ is divisible by $64$ then find $k$. | No such $k$ exists.
Notice that we would need $k\equiv -(49^n+16^n)\bmod 64$ for all $n\in\mathbb N$.
However $49^n+16^n$ already takes two different values $\bmod 64$ for $n=1,2$. |
Relation between de Rham cohomology and integration | Let $T^{n}$ denote the $n$-torus* and $\pi_{i} : T^{n} \rightarrow S^{1}$ denote the $i$-th projection. Let us denote the pull-back of the top form by $\pi_{i}^{\ast}(d\theta) = \omega_{i}$ (if you do not take the normalized top form, i.e. $\int_{S^{1}} d\theta = 1$, you will get some $2\pi$ factors). Then each $\omega_{i}\in H^{1}_{DR}(T^{n})$, and by a dimension count and the fact that they are linearly independent (as can be seen by integrating on a suitable component $S^{1}$) they together generate $H^{1}_{DR}(T^{n})$.
Now assume the ranks of all de Rham cohomology groups of $T^{n}$ are known. The cohomologies $H^{k}(T^{n})$ are generated by wedges of $k$-subsets of the $\omega_{i}$. That they are linearly independent (in cohomology) can be seen by integrating over $k$-cycles consisting of the respective component $S^{1}$ factors. Then a dimension count gives the isomorphism. (There is a more rigorous proof via the compatibility of wedge and cup products, using the comparison of singular/simplicial cohomology with de Rham cohomology.)
The integral of each such wedge is just the product of the integrals of the constituent $\omega_{i}$ over the factors $(S^{1})_{i}$.
Hence the basis element $\sum_{I \subset \{1,2,\ldots,n\}, |I| = k} (\wedge_{i \in I}\omega_{i})$ integrates on the $k$-cycle $\sum_{I \subset \{1,2,\ldots,n\}, |I| = k}\pm [\prod_{i \in I}(S^{1})_{i}]$ to give you the ranks. The plus-minus signs are important due to the rule of signs in wedge products.
*Torus as per your question. Too many things are called torus in too many different contexts. |
Exam Question from multivariable calculus. | Hint. Apply the Mean Value Theorem to the function
$$
g(t)=f(x_0+th), \quad t\in [0,1].
$$
Then
$$
f(x_0+h)-f(x_0)=g(1)-g(0)=g'(s), \quad \text{for some}\,\,\,s\in (0,1).
$$ |
Measure of $A = \{n \in \mathbb{N} : n = k + \tau(k)\}$ over $\mathbb{N}$ | We define:
$$A=\{k+\tau(k) \mid k \in \mathbb{N}\}$$
Claim: The set $A$ does not have full measure, i.e.
$$|\overline{A}| = \infty$$
Assume the contrary. We have:
$$S=\overline{A}=\{n \in \mathbb{N} \mid \nexists\, k \in \mathbb{N} \text{ s.t. } n = k+\tau(k)\}$$
where $|S| = m$ is finite.
Let $N \in \mathbb{N}$ be sufficiently large. We consider the values $k+\tau(k) \space (1 \leqslant k \leqslant N)$. From $1$ to $N$, there are only $m$ numbers which cannot be expressed as $k+\tau(k)$, and thus the remaining numbers can be expressed as $k+\tau(k)$. For $n=k+\tau(k)$ where $n \leqslant N$, we will also have $k \leqslant N$: if $k>N$, then $n=k+\tau(k)>k>N$, a contradiction.
This means that out of the $N$ values $k+\tau(k) \space (1 \leqslant k \leqslant N)$ :
$N-m$ values are the elements of $\{1,2,3,\ldots,N\}-S$
The remaining $m$ values are numbers less than $2N$ as $k+\tau(k) <2N$.
Thus:
$$\sum_{k=1}^N (k+\tau(k)) < \sum_{i=1}^N i - \sum_{t \in S} t + (2N)m \leqslant \frac{N(N+1)}{2}-\frac{m(m+1)}{2}+2Nm$$
But we also have:
$$\sum_{k=1}^N (k+\tau(k)) = \sum_{k=1}^N k + \sum_{k=1}^N \tau(k) = \frac{N(N+1)}{2}+
\sum_{i=1}^N \bigg\lfloor \frac{N}{i} \bigg \rfloor$$
$$\sum_{k=1}^N (k+\tau(k)) > \frac{N(N+1)}{2}+N\sum_{i=1}^N \frac{1}{i}-N>\frac{N(N+1)}{2}+N\log{N}-N$$
Thus, we have:
$$\frac{N(N+1)}{2}-\frac{m(m+1)}{2}+2Nm>\frac{N(N+1)}{2}+N\log{N}-N$$
$$2Nm-\frac{m(m+1)}{2}>N\log{N}-N \implies (2m+1)N>N\log{N} \implies 2m+1>\log{N}$$
This is obviously a contradiction as $N$ can get arbitrarily large. Thus, our claim is true and we have proved the required. |
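A numerical illustration (not part of the proof): listing the integers that are never of the form $k+\tau(k)$ and watching their count grow, exactly as the claim predicts.

```python
def tau(k):
    """Number of divisors of k."""
    return sum(1 for d in range(1, k + 1) if k % d == 0)

def gaps(N):
    """Integers in [1, N] that are not of the form k + tau(k)."""
    # k + tau(k) > k, so only k <= N can produce a value <= N.
    hit = {k + tau(k) for k in range(1, N + 1)}
    return [n for n in range(1, N + 1) if n not in hit]

print(gaps(30))                         # the first few non-representable numbers
print(len(gaps(200)), len(gaps(400)))   # the count of gaps keeps growing
```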
Is it possible to make this function into a continuous one on the entire $\mathbb{R}$? | The map $f$ is well defined and continuous on the domain $\mathbb{R}-\{1\}$.
It is known that :
$$\lim_{t\to+\infty}\arctan(t)=\frac{\pi}{2}\quad\mathrm{and}\quad\lim_{t\to-\infty}\arctan(t)=-\frac{\pi}{2}$$
It follows that :
$$\lim_{x\to1^-}f(x)=-\frac{\pi}{2}\quad\mathrm{and}\quad\lim_{x\to1^+}f(x)=\frac{\pi}{2}$$
So $f$ cannot be "extended" (that's probably the word you were looking for) into any continuous map defined on $\mathbb{R}$.
Recall that given $\alpha\in\mathbb{R}$ and a map $f:\mathbb{R}-\{\alpha\}\to\mathbb{R}$, the continuity of $f$ at $\alpha$ is equivalent to the existence of finite left and right limits at $\alpha$, those limits being moreover equal.
What does "relation induced by a partition" mean? | There is a particular natural connection between equivalence relations and partitions. Each partition corresponds to an equivalence relation, and each equivalence relation corresponds to a partition. Going back and forth along this correspondence will get you back where you started.
The correspondence is this: Given a partition, each element is related to each element in the same part, and nothing else. The other way: Given an equivalence relation, a part of the partition is given by a maximal set of elements all related to one another.
This correspondence is what they mean by "the equivalence relation induced by the partition".
I suspect that they use the word "induce" because the correspondence is based on a concrete construction to get from one to the other. Not all such natural correspondences are like that. |
$U$ open, in closure of $A$ implies $\overline{A \cap U} $ is dense in $U$ | Use the very useful theorem: open U implies
$U \cap \bar A \subseteq \overline {U \cap A}$.
Proof.
Assume x in $U \cap \bar A$.
Thus for all open V nhood x, exists y in V $\cap$ A.
As x in open U $\cap$ V, exists y in V $\cap$ U $\cap$ A.
Hence x in $\overline {U \cap A}$, QED. |
"Geometrically, it can be viewed as the scaling factor of the linear transformation described by the matrix." | The absolute value of the determinant is the factor by which the volume (Lebesgue measure, more generally) changes when mapping sets under a given transformation. For example, a determinant $3$ transformation on $\mathbb{R}^2$ will map shapes to other shapes with triple the area.
The determinant is not linear; finding the volume of a shape mapped under $T_1 + T_2$ is not simply a matter of smooshing the image under $T_1$ with the image under $T_2$! It's not difficult to see, in fact, that $I$ has determinant $1$ (it doesn't change the measure of shapes), but $I + I$ has determinant $2^n$, where $n$ is the dimension of the space. |
Problem about ideals of the localization of a ring | Let $J$ be an ideal of $R_P$. Consider
$$
I=\{x\in R: x/1\in J\}
$$
It's easy to prove that $I$ is an ideal in $R$: obviously $0\in I$; if $x,y\in I$, then
$$
\frac{x+y}{1}=\frac{x}{1}+\frac{y}{1}\in J
$$
and therefore $x+y\in I$; if $x\in I$ and $r\in R$, then
$$
\frac{xr}{1}=\frac{x}{1}\frac{r}{1}\in J
$$
and thus $xr\in I$.
Suppose $x\in I$ and $x\notin P$. Then, by definition, $x/1\in J$; since $1/x\in R_P$, we have that
$$
\frac{1}{1}=\frac{x}{1}\frac{1}{x}
$$
and so $J=R_P$. Since $J$ is a proper ideal (see below), we have a contradiction. Hence $I\subseteq P$.
Note The given hint is too complicated. Also the statement is wrong as written: one has to assume that $J$ is a proper ideal, because for no ideal $I$ of $R$ with $I\subseteq P$ do we have $i(I)R_P=R_P$.
Let us now show that $i(I)R_P=J$.
Let $x/s\in J$, with $x\in R$ and $s\in R\setminus P$; then
$$
\frac{x}{1}=\frac{x}{s}\frac{s}{1}\in J
$$
so $x\in I$ and so
$$
\frac{x}{s}=i(x)\frac{1}{s}\in i(I)R_P
$$
Hence $J\subseteq i(I)R_P$.
Now suppose
$$
z=\sum_{k=1}^n i(x_k)\frac{r_k}{s_k}\in i(I)R_P
$$
with $x_k\in I$. By definition, $i(x_k)=x_k/1\in J$, so $z\in J$. |
Naturals $\leq 10^6$ that are either squares or cubes | If $c$ is both the square of some natural number and the cube of some (other) natural number, then for every prime factor that divides $c$, that prime must divide $c$ to a power that is even (since $c$ is a square) and at the same time divisible by $3$ (since $c$ is a cube). That means that any such prime must divide $c$ by a power that is divisible by $6$. Therefore, $c$ is the sixth power of some natural number. |
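As a brute-force check of the counting consequence: below $10^6$ there are $1000$ squares and $100$ cubes, and by the argument above their overlap is exactly the $10$ sixth powers, so inclusion-exclusion gives the count of the union.

```python
# Inclusion-exclusion: 1000 squares + 100 cubes - 10 sixth powers = 1090.
N = 10**6
squares = {k * k for k in range(1, 1001)}    # the 1000 squares <= 10^6
cubes = {k ** 3 for k in range(1, 101)}      # the 100 cubes <= 10^6
sixth = squares & cubes                      # numbers that are both

# The overlap is exactly the sixth powers, as the argument predicts.
assert sixth == {k ** 6 for k in range(1, 11)}
print(len(squares | cubes))  # 1090
```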
non sequentially compact space of sequences | $d(x,y)= \sum_i 2^{-i}\frac{|x_i-y_i|}{1+|x_i-y_i|}$
If the enumeration of indices $i$ starts from $1$, then $d(x,0)<1$ for each sequence $x$, so $M$ is empty. Thus I’ll assume that it starts from $0$.
For $n>1$ put $k_n=(n+1,2/n,0,0,\dots)$. Then each $k_n$ belongs to $M$, and the sequence $\{k_n\}$ clearly has no convergent subsequence.
Show that $T\neq{T^*}$ | Following up my first comment above, trying to make two integrands equal (for all $x$) is an overconstrained problem. If you could, then yes, that would be an easy path to finding the adjoint, but this shortcut is not available here.
You have that $\langle Tp, q\rangle = \tfrac12 a_1 b_0 + \tfrac13 a_1 b_1 + \tfrac14 a_1 b_2$. Whatever $T^*\!q$ is, it must satisfy $\langle p,T^*\!q\rangle = \tfrac12 a_1 b_0 + \tfrac13 a_1 b_1 + \tfrac14 a_1 b_2$. In particular:
$$\langle 1, T^*\!q \rangle = \langle x^2, T^*\!q \rangle = 0,\quad \langle x, T^*\!q\rangle = \tfrac12 b_0 + \tfrac13 b_1 + \tfrac14 b_2.$$
The first two constraints imply that for any $q\in V$, $T^*\!q$ is a scalar multiple of the polynomial $3 - 16x + 15x^2$ (this isn't immediately obvious, but can you calculate why?). Computing the behaviour of $T^*\!$ then amounts to seeing how the constant in front of $(3-16x+15x^2)$ depends on $b_0$, $b_1$ and $b_2$.
To some extent, the calculations in this problem look more complex because the natural basis $\{1,x,x^2\}$ is not orthogonal with respect to the chosen inner product $\int_0^1$. |
Find the determinant of the $n\times n$ matrix $A_n$ with $(A_n)_{i,j}={n\choose |i-j|}$. | As you noted, your matrix is circulant. In particular, if we take $P$ to be the permutation matrix described here, then we have
$$
A = \sum_{k=0}^{n-1} \binom nk P^k = (I + P)^n - I
$$
Thus, we may compute your determinant as the product of all eigenvalues, namely
$$
\det(A) = \prod_{j=0}^{n-1} [(1 + e^{(2 \pi j/n) i})^n - 1]
$$ |
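A quick numerical check of the eigenvalue-product formula against a direct determinant computation, for a few small sizes (sizes divisible by $6$ make the matrix singular, since the factor at $j=n/3$ vanishes, and are skipped here only to keep the naive elimination code simple):

```python
import cmath
from math import comb

def det(M):
    """Determinant via Gaussian elimination with partial pivoting (complex)."""
    M = [row[:] for row in M]
    n, d = len(M), 1.0 + 0j
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # pivot row
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n):
                M[r][j] -= f * M[c][j]
    return d

for n in (2, 3, 4, 5, 7):
    A = [[complex(comb(n, abs(i - j))) for j in range(n)] for i in range(n)]
    formula = 1
    for k in range(n):
        formula *= (1 + cmath.exp(2j * cmath.pi * k / n)) ** n - 1
    assert abs(det(A) - formula) < 1e-6 * max(1.0, abs(formula))
print("eigenvalue-product formula matches the direct determinant for n = 2,3,4,5,7")
```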
Beast ball league ways to get 10 points | Let $B_{k, W}$ be the number of ways to get $k$ points ending with a Win.
Let $B_{k, L}$ be the number of ways to get $k$ points ending with a Lose.
We have the recurrence relations:
$ B_{k, W } = B_{k-3, W } + B_{k-2, L} $
$ B_{k, L} = B_{k-1, W} + B_{k-1, L}$
The starting conditions are $B_{1, W} = 0 , B_{1, L } = 1, B_{2,W} = 1, B_{2, L} = 1, B_{3,W} = 1, B_{3,L} = 2$.
From here, we can find $ B_{10, W} + B_{10, L}$, which is left to the reader. |
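Tabulating the recurrence with the given starting conditions (a short sketch) finishes the count:

```python
# W[k], L[k] = number of ways to reach k points ending in a Win / a Loss.
W = {1: 0, 2: 1, 3: 1}
L = {1: 1, 2: 1, 3: 2}
for k in range(4, 11):
    W[k] = W[k - 3] + L[k - 2]
    L[k] = W[k - 1] + L[k - 1]
print(W[10] + L[10])  # 62
```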
Possible values of $m$ and $M$ | For $x$ in $[0.5,1]$,
$$
f(x)=f(0.5)+\int_{0.5}^x f'(t)dt=\int_{0.5}^x f'(t)dt.
$$
Moreover, since $\sin$ varies between $-1$ and $1$, we know that its fourth power varies between $0$ and $1$, so
$$
\frac{192x^3}{3}\leq f'(x)\leq\frac{192x^3}{2}.
$$
Therefore,
$$
\int_{0.5}^x 64t^3dt\leq f(x)\leq \int_{0.5}^x 96t^3dt.
$$
and so
$$
16(x^4-0.5^4)\leq f(x)\leq 24(x^4-0.5^4).
$$
Substituting this into the original expression, we have that
$$
\int_{0.5}^1 16(x^4-0.5^4)dx\leq \int_{0.5}^1 f(x)dx\leq \int_{0.5}^1 24(x^4-0.5^4)dx
$$
Evaluating the integrals gives
$$
2.6\leq \int_{0.5}^1 f(x)dx\leq 3.9
$$
Therefore, I get that $(d)$ is the right answer. |
How to exactly write down a proof formally (or how to bring the things I know together)? | Since this is an "if and only if" kind of proof, slow down and try to write down both implications one at a time. Try to understand every step. For example:
Let's show that $f$ surjective implies the other thing. Assume that $g \circ f = h \circ f$. This means that for any element $m \in M$, $g(f(m)) = h(f(m))$. We need to show that for any element $n \in N$, we have $g(n) = h(n)$. But $f$ is surjective, so any $n$ has a preimage $m$: $$\exists m: n = f(m)$$ Thus we in fact have $g(n) = g(f(m)) = h(f(m)) = h(n)$, as required!
Can you try to do the other direction?
Here's what you have to do: assume that $g \circ f = h \circ f \implies g = h$. Now, take any element $n \in N$, and show that there exists a $m \in M$ such that $f(m) = n$.
Edit: This last part is perhaps the less straightforward of the two, so let's see if I can be a little more helpful. Assume on the contrary that $f$ is not surjective. Thus, there exists $n_0 \in N$ that possesses no preimage. Now, take $L = \{a,b\}$, any two-element set. Set $g(n) = a$, and
$$ h(n) = \begin{cases}a & n \neq n_0 \\ b & n = n_0 \end{cases}$$
Now can you find a contradiction? |
Split a differential $(k+1)- form$ into two parts | In this notation, we are writing the coordinates in $U\subseteq\Bbb R^n$
as $x_1,\ldots,x_n$ and in $\Bbb R$ as $t$ so that the coordinates
on $U\times \Bbb R$ are $x_1,\ldots,x_n,t$. So $dx_1,\ldots,dx_n,dt$
are the basic $1$-forms.
A typical $3$-form might then be say
$$t\,dx_1\wedge dx_3-x_1^2dx_2\wedge dx_3+dt\wedge(\sin t\,dx_2-e^{x_2}\,dx_3).$$ |
Is $f(x,y) = x^\beta/y$ quasi-convex for positive $x,y$ for any real $\beta \geq 1$? | I realized I can use the fact that a function $f$ is quasi-convex if and only if $g(f)$ is quasi-convex for some increasing function $g$. So I can consider $g(w) = w^n$ for a sufficiently large positive integer $n$ and get $g(f(x,y)) = x^\alpha / y^\gamma$ where $\alpha > \gamma - 1$. Then it is easy to show that $g(f(x,y))$ is convex by computing the Hessian, hence $g(f(x,y))$ is quasi-convex, so $f(x,y)$ is quasi-convex as well. |
Find the number of rational roots of $f(x)$ | You are right that all three roots must be real; but then you jump to claim that this implies all three roots (counting multiplicity) are rational. Is that really the case?
Since it has a double root, $f(x)$ (up to scaling by a rational factor) can be written as $f(x) = (x-a)^2(x-b)$. If $a=b$ then $f(x)$ satisfies the hypothesis, and you are correct that in this case $a$ is rational. But why? Because the coefficient of $x^2$ would be $-3a$, and if $-3a$ is rational, then $a$ is rational.
If $a\neq b$, then the coefficient of $x^2$ is $-(2a+b)$, the coefficient of $x$ is $a^2+2ab$, and the constant coefficient is $-a^2b$. If $a$ is rational, then so is $b$ since $2a+b$ is rational; and if $b$ is rational then so is $2a$, hence so is $a$.
Can we have $a$ and $b$ both irrational, and yet have $2a+b$, $a^2+2ab$, and $a^2b$ all rationals? You need to show this is impossible to conclude that all three roots are rational. |
Finding supremum (Uniform convergence and integration). | Why does $\sup_x |f_n(x)-0| \to 0$? $f_n(x) \to 0$ for each $x$ does not imply that the supremum tends to $0$. You have to find the maximum of $f_n(x)$ by setting the derivative equal to $0$ which gives $x=\frac 1 {\sqrt {2n+1}}$. The maximum value turns out to be $\frac n {\sqrt {2n+1}} (1-\frac 1 {2n+1})^{n}$ and the limit of this is $\infty$. |
Equivalent Definitions of a Topological Manifold: Are Open Sets in $R^n$ homeomorphic to $R^n$? | Open balls in $\Bbb R^n$ are homeomorphic to $\Bbb R^n$, but it's not true in general that (non-empty) open sets in $\Bbb R^n$ are homeomorphic to $\Bbb R^n$: $\Bbb R^n$ and its open balls are connected, but there are lots of open sets in $\Bbb R^n$ that are not connected. However, if $U$ is an open nbhd of $x$ in $\Bbb R^n$, then there is an open ball $B$ such that $x\in B\subseteq U$, so if every point of $M$ has a nbhd homeomorphic to some open $U\subseteq\Bbb R^n$, then it automatically has one homeomorphic to an open ball in $\Bbb R^n$. The other direction is trivial, since every open ball in $\Bbb R^n$ is an open set in $\Bbb R^n$.
Finally, to prove that an open ball in $\Bbb R^n$ is homeomorphic to $\Bbb R^n$ itself, it suffices to prove it for the open unit ball centred at the origin. Consider the map from the open unit ball to $\Bbb R^n$ that sends $x$ to $\left(\tan\frac{\pi|x|}2\right)x$. |
Abstract Algebra, group theory | There are $4$ subgroups of order $3$.
Remember that in a group of order $3$ there are $2$ generators.
What is the value for angle EGB? | Let $\angle BAE=\alpha$ and $\angle DAF=\beta$. Then $\alpha+\beta=90^\circ-20^\circ=70^\circ$.
Note that $\triangle ABE\cong\triangle DCE$ and $\triangle ADF\cong \triangle BCF$.
So $\angle CDE=\angle BAE=\alpha$ and $\angle CBF=\angle DAF=\beta$.
$\angle BEG=\angle BAE+\angle BCD=\alpha+90^\circ$.
$\angle EGB=180^\circ-\angle CBF-\angle BEG=180^\circ-(\alpha+90^\circ)-\beta=90^\circ-\alpha-\beta=20^\circ$. |
How does the following equation get the new Un results after they let Un= Tn +1. Can’t understand the intermediate steps! | Remember that $U_{n-1}=T_{n-1}+1$. Then,
$$T_n+1=2T_{n-1}+2\\T_n+1=2(T_{n-1}+1) \\ U_n=2U_{n-1}$$ |
If $f \circ g$ is onto then $f$ is onto and if $f \circ g$ is one-to-one then $g$ is one-to-one | This post intends to remove this question from the Unanswered list.
As noted in the comments, all of your assertions are correct. |
How many points are needed to uniquely define an ellipse? | The equation $\left(\frac{x-h}{a}\right)^2 + \left( \frac{y-k}{b}\right)^2 = 1$ is the equation for an ellipse with major and minor axes parallel to the coordinate axes. We expect such ellipses to be unchanged under horizontal reflection and under vertical reflection through their axes. In this equation, these reflections are effected by $x \mapsto 2h - x$ and $y \mapsto 2k -y$.
This means, if all you have is one point on the ellipse and the three reflected images of this point, you do not have $8$ independent coordinates; you have $2$, plus uninformative reflections forced by the equation.
We can see this by plotting two ellipses at the same center (same $h$ and $k$), intersecting at $4$ points, with, say, semiaxes of length $1$ and $2$.
These clearly have four points of intersection. But as soon as you know an ellipse is centered at the origin and contains any one of the four points of intersection, by the major and minor axis reflection symmetries, it contains all four. This is still true if you use generic ellipses, which can be rotated.
Remember that the reflections are through the major and minor axes, wherever they are.
Of course, there are other ways for two ellipses to intersect at four points.
So just knowing those four points are on an ellipse cannot possibly tell you which one is intended.
Returning to the first diagram, corresponding to the diagram you have where the four known points are the vertices of a square... Symmetries force the center of the ellipse to be the center of the square, but that's not a very strong constraint. |
Prove that x and y commute | If $x$ and $y$ commute, clearly we have
$$
xyxy=xxyy=x^2y^2
$$
if instead
$$
xyxy=x^2y^2
$$
then multiplying both sides on the left by $x^{-1}$ and on the right by $y^{-1}$ yields
$$
xy=yx
$$ |
Behavior of infinite limits | Yes, this is right:
Let $m>0$ be given.
Since $\displaystyle\lim_{x\to0^+}f(x)=\infty$, there is a $\delta_1>0$ such that $\displaystyle 0<x<\delta_1\implies f(x)>\frac{2m}{C}$.
Since $\displaystyle\lim_{x\to0^+}g(x)=C$, there is a $\delta_2>0$ such that $\displaystyle 0<x<\delta_2\implies |g(x)-C|<\frac{C}{2}$.
If $\displaystyle\delta=\min(\delta_1, \delta_2),$ then $\displaystyle 0<x<\delta\implies f(x)g(x)>\left(\frac{2m}{C}\right)\left(\frac{C}{2}\right)=m$. |
The longest sum of consecutive primes that add to a prime less than 1,000,000 | The sum of the first 536 primes is $958577$ which is prime. The sum of the first 547 primes is $1001604$. That leaves a small number of possibilities to check. |
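Those remaining possibilities can be settled by brute force. The sketch below assumes the classic formulation (the run of consecutive primes may start anywhere, not just at $2$) and searches every window whose sum stays below $10^6$:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Primes up to 10000 are more than enough: a window of ~540 primes with
# sum below 10^6 can only involve primes of a few thousand at most.
primes = [p for p in range(2, 10000) if is_prime(p)]
prefix = [0]
for p in primes:
    prefix.append(prefix[-1] + p)

best = (0, 0)  # (window length, prime sum)
for i in range(len(primes)):
    for j in range(i + best[0] + 1, len(primes) + 1):  # only longer windows
        s = prefix[j] - prefix[i]
        if s >= 10**6:
            break
        if is_prime(s):
            best = (j - i, s)
print(best)  # (543, 997651): 543 consecutive primes starting at 7
```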
Help calculating intersection and unions of sets | For the union you must consider the union of $[-1,1],[-2,2],\dots[-N,N]$ as you said. This is nothing but $[-N,N]$. For the intersection note that $[-1,1]$ is precisely the set which is contained in each of the sets in question and hence the intersection is $[-1,1]$. |
Upper bound on $\chi(G)$ for a triangle-free graph | Here's a sketch:
Start with a lemma: Every triangle-free graph contains an independent set of size $\sqrt{n}$. This can be proven by degree considerations: if there exists a vertex of degree $\geq \sqrt{n}$, consider its neighbors. Otherwise the maximum degree is at most $\sqrt{n}-1$, and apply the greedy algorithm.
To prove the result, apply induction. Let $S$ be an independent set of size $\sqrt{n}$, and let $H=G\setminus S$. Then $H$ is triangle free and has at most $n-\sqrt{n}$ vertices, and so can be properly colored using at most $2\sqrt{n-\sqrt{n}}+1$ colors. Conclude that $G$ can be properly colored using $2\sqrt{n}+1$ colors. |
Some stuff with binary operators | Every such $f$ has the form $f(a, b) = g(a, b) + g(b, a)$ for some arbitrary function $g : \mathbb{Z} \times \mathbb{Z} \to \mathbb{R}$. And no. For example, take $f(a, b) = a^2 b^2$. |
The trace of a matrix in characteristic p | Let $A$ have Jordan form $A = VJV^{-1}$. Then, $A^p = VJ^pV^{-1}$.
If the diagonal entries of $J$ are $\lambda_1, \cdots, \lambda_n$, then the diagonal entries of $J^p$ are $\lambda_1^p, \cdots, \lambda_n^p$.
Hence, $\text{tr}(J^p) = \lambda_1^p + \cdots + \lambda_n^p = (\lambda_1+\cdots+\lambda_n)^p = (\text{tr}(J))^p$.
Finally, since the trace of a matrix is invariant under similarity transforms, $\text{tr}(J^p) = (\text{tr}(J))^p$ becomes $\text{tr}(VJ^pV^{-1}) = (\text{tr}(VJV^{-1}))^p$, i.e. $\text{tr}(A^p) = (\text{tr}(A))^p$, as desired. |
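The identity can also be checked directly modulo $p$, without the Jordan form, on random matrices over $\mathbb{F}_p$ (a quick sketch):

```python
import random

def matmul(A, B, p):
    """Matrix product modulo p."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]

def matpow(A, e, p):
    """Matrix power A^e modulo p by repeated squaring."""
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            R = matmul(R, A, p)
        A = matmul(A, A, p)
        e >>= 1
    return R

random.seed(0)
for p in (2, 3, 5, 7, 11):
    for _ in range(20):
        n = random.randint(1, 4)
        A = [[random.randrange(p) for _ in range(n)] for _ in range(n)]
        tr = sum(A[i][i] for i in range(n)) % p
        tr_Ap = sum(matpow(A, p, p)[i][i] for i in range(n)) % p
        assert tr_Ap == pow(tr, p, p)
print("tr(A^p) = (tr A)^p holds mod p on all samples")
```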
Value of $c$ for which Continuity holds in $\mathbb R^2$. | If $f$ is continuous on $\mathbb{R^2}$, then its projections $f_1 = f(x,0)$, and $f_2 = f(0,y)$ must be continuous on $\mathbb{R}$. We have: $f_1 = \sqrt{1-x^2}, x^2 \leq 1$, and $f_1 = c, x^2 > 1$. And $\displaystyle \lim_{x \to 1^{-}} f_1 = 0 = \lim_{x \to 1^{+}} f_1 = c$. We have $c = 0$. |
Is the 3-sphere isomorphic to Spin(3)? | Yes. This group appears to possess a bunch of alternative definitions, like
The unit quaternions
The special unitary group $\text{SU}(2)$
The spin group $\text{Spin}(3)$
See https://en.wikipedia.org/wiki/Spin_group#Accidental_isomorphisms |
Taking a linear operator inside an integral | I think what you have to use is just the fact that integration is linear.
By looking at the $i$th coordinate you get
$\begin{align} \left(A \int f(x) dx\right)_i
&= \sum_{j} A_{i,j} \left(\int f(x) dx\right)_j\\
&= \sum_{j} A_{i,j} \int f(x)_j dx\\
&= \int \sum_{j} A_{i,j}f(x)_j dx\\
&= \int (A f(x))_i dx \\
&= \left(\int A f(x) dx \right)_i
\end{align}$
and this gives you $A \int f(x) dx = \int A f(x) dx$. |
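A numerical illustration of the same computation, with a Riemann sum standing in for the integral (the matrix and integrand below are arbitrary choices):

```python
import math

A = [[1.0, 2.0], [3.0, 4.0]]
f = lambda t: [math.cos(t), t * t]      # an arbitrary vector-valued map
ts = [k / 1000 for k in range(1000)]    # left Riemann sum over [0, 1]
dt = 1 / 1000

# A applied to the (approximate) integral of f ...
int_f = [sum(f(t)[i] for t in ts) * dt for i in range(2)]
lhs = [sum(A[i][j] * int_f[j] for j in range(2)) for i in range(2)]

# ... versus the (approximate) integral of A f.
Af = lambda t: [sum(A[i][j] * f(t)[j] for j in range(2)) for i in range(2)]
rhs = [sum(Af(t)[i] for t in ts) * dt for i in range(2)]

print(lhs, rhs)  # equal up to floating-point rounding
```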
Is there a closed form for the sum $\sum_{k=2}^N {N \choose k} \frac{k-1}{k}$? | The Maple command $$ sum(binomial(N, k)*(k-1)/k, k = 2 .. N) $$ outputs $$ 1/4\,{\mbox{$_3$F$_2$}(2,2,-N+2;\,3,3;\,-1)}N \left( N-1 \right) $$ in terms of the hypergeometric function. |
Maximum number of components in a Graph Containing $n$ vertices and $k$ edges | In the worst case the edges are used to form a clique. Let $i$ be the smallest integer $\ge 1$ s.t. $\frac{i(i-1)}{2} \ge k$. There are $n-i+1$ connected components in that case, which is the maximum possible.
Why do two symmetrical points form a perpendicular line to their reflection axis? | Let $A,B$ be symmetric with respect to line $\ell$. Then each point on $\ell$ is at equal distance from $A$ and $B$, so $\ell$ is the bisector of segment $AB$, which is well known to be perpendicular to $AB$.
Expansive mapping has convergent subsequence | Hint: I denote the $n$-th iterate of $T$ by $T^{[n]}$. With your notations, by applying $T^{[n_k]}$ you have for $p\geq 1$
$$d(T^{[n_{k+p}-n_k]}(z),z)\leq d(T^{[n_{k+p}]}(z),T^{[n_k]}(z))=d(x_{n_{k+p}},x_{n_k})$$
so for $\varepsilon>0$ and $k,p$ large you have $d(x_m, z)<\varepsilon$ with $m=n_{k+p}-n_k$. |
Existence of Rotation-Type Unitary Transformations | You may as well assume $|x| = |y| = 1$. Try to construct orthonormal bases $\{x_1,\ldots,x_n\}$ and $\{y_1,\ldots,y_n\}$ with $x_1 = x$ and $y_1 = y$. The matrices $X = [x_1|\cdots|x_n]$ and $Y = [y_1|\cdots|y_n]$ are unitary, so $U = YX^{-1}$ is too, and $Ux = Y (X^{-1}x_1) = Y e_1 = y_1 = y$ where $e_1$ is the standard first basis vector. |
Evalutating the following integral: $\int\frac{\ln(x) dx}{x+4x\ln^2(x)}$ | Hint
With a change of variable (find it) the integral becomes:
$$\int\frac{u}{1+4u^2}du$$ |
Calculate correlation coefficient | Note that
$$
EXY=E(\sin2 \pi U\cos2 \pi U)=\int_{\mathbb{R}}\sin2 \pi u\cos2 \pi uf_{U}(u)\, du=\int_{0}^1
\sin2 \pi u\cos2 \pi u\, du
$$
where $f_{U}$ is the density of a uniform and we have used the LOTUS. |
An extension of the birthday problem | I have no idea for your main question, but I know how you got, for the uniform distribution case, a formula distinct from that of Wikipedia -- because I made the same mistake earlier today.
You select $n$ values out of a space of size $d$, each selection being uniformly random. We assume that $n \ll d$ (but not necessarily that $n^2 \ll d$, which is the crucial point). The probability of there being no collision is:
$$ p(n,d) = \frac{d!}{d^n(d-n)!} $$
(there are $\frac{d!}{(d-n)!}$ possible selections with no collisions, out of $d^n$ if we allow collisions).
Using Stirling's formula ($k! \approx \sqrt{2\pi k}\left(\frac{k}{e}\right)^k$), we get:
$$ p(n, d) \approx d^{-n} \sqrt{\frac{d}{d-n}} \left(\frac{d}{e}\right)^d\left(\frac{d-n}{e}\right)^{n-d} $$
Since $n \ll d$, the part with the square root is very close to $1$, so we ignore it. The expression then becomes:
$$ p(n, d) \approx e^{-n} \left(1-\frac{n}{d}\right)^{n-d} = e^{-n + (n-d)\ln (1-\frac{n}{d})}$$
Now we replace the log with its Taylor approximation, and (that's the tricky point) you have to use degree 2. This means that:
\begin{eqnarray}
-n + (n-d)\ln (1-\frac{n}{d}) &=& -n + (n-d)\left(-\frac{n}{d} - \frac{n^2}{2d^2} + O\left(\frac{n^3}{d^3}\right)\right) \\
&=& -\frac{n^2}{d} - \frac{n^3}{2d^2} + \frac{n^2}{2d} + O\left(\frac{n^3}{d^2}\right) \\
&=& -\frac{n^2}{2d} + O\left(\frac{n^3}{d^2}\right)
\end{eqnarray}
Hence the result:
$$ p(n,d) \approx e^{-\frac{n^2}{2d}} $$
which is the formula given in the Wikipedia page on the birthday "paradox". In the expression above, when we replace the log with its approximation, the degree 1 terms cancel out, which is why we have to go to degree 2. Stopping at degree 1 was my mistake and, I presume, yours too. |
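A numerical comparison, using the classical $n=23$, $d=365$ birthday setting (numbers chosen for illustration, not taken from the question): the degree-2 formula is close to the exact probability, while stopping the expansion at degree 1, which yields $e^{-n^2/d}$, is visibly off.

```python
import math

def p_exact(n, d):
    """Exact probability of no collision: d!/(d^n (d-n)!)."""
    prob = 1.0
    for k in range(n):
        prob *= (d - k) / d
    return prob

n, d = 23, 365
print(p_exact(n, d))                   # ~0.4927, the classical birthday value
print(math.exp(-n * n / (2 * d)))      # ~0.4845, the degree-2 approximation
print(math.exp(-n * n / d))            # ~0.2347, the (wrong) degree-1 version
```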
Polytope and vertex points | Even more is true. Every $x\in P$ is a convex combination of the extreme points $x_e \in P$:
\begin{align}
\sum_e \lambda_e x_e &= x\\
\sum_e \lambda_e &= 1\\
\lambda_e &\ge 0
\end{align} |
Solving $Ax \approx b$ with $0 < x_i < 1$? | Formulate it as a quadratic programming problem.
Choose $\epsilon > 0$ to be small.
$$\min \left\|Ax - b \right\|^2$$
subject to
$$\epsilon \le x_i \le 1-\epsilon , \forall i \in \{1, \ldots, n\}.$$ |
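A minimal sketch of that program in pure Python, using projected gradient descent (the data, step size, and iteration count below are arbitrary illustrations; in practice one would hand this to a dedicated bounded least-squares or QP solver such as scipy.optimize.lsq_linear):

```python
EPS = 1e-3  # the epsilon of the constraints

def solve_box_lsq(A, b, iters=20000, lr=1e-3):
    """Minimize ||Ax - b||^2 subject to EPS <= x_i <= 1 - EPS."""
    m, n = len(A), len(A[0])
    x = [0.5] * n                        # start at the center of the box
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [2 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step, then project back onto the box
        x = [min(1 - EPS, max(EPS, x[j] - lr * g[j])) for j in range(n)]
    return x

A = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
b = [1.0, 2.0, 5.0]
x = solve_box_lsq(A, b)
print(x)  # every coordinate lies in [EPS, 1 - EPS]
```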
Reference for: simple closed curves generate the fundamental group | Some obvious things are less obvious than others....
Let $X$ be a rank two graph consisting of a disjoint pair of circles plus an arc with one endpoint on each circle. Then no matter what $v \in X$ you pick, $\pi_1(X,v)$ is not generated by simple closed curves through $v$. |
Minimize $3\sqrt{5-2x}+\sqrt{13-6y}$ subject to $x^2+y^2=4$ | Comment: you may use this modification:
We rewrite final relation as:
$$A=\sqrt{[3(x-1)=a]^2+(3y=b)^2}+\sqrt{(x=a')^2+(y-3=b')^2}$$
and use this inequality:
$\sqrt{a^2+b^2}+\sqrt{a'^2+b'^2}\geq\sqrt{(a+a')^2+(b+b')^2}$
we get:
$A\geq\sqrt{16(x^2+y^2)-24(x+y)+18}$
$x^2+y^2=4$
$\Rightarrow$
$A\geq \sqrt{82-24(x+y)}$
If $x=y=\sqrt 2$ then $A\geq\sqrt{82-48\sqrt 2}\approx 3.8$
Update: Wolfram says the minimum is $2\sqrt {10}\approx 6.32$ at $(x,y)=\left(\frac25+\frac{3\sqrt6}5,\ \frac 65-\frac{\sqrt6}5\right)\approx(1.87,\,0.71)$. If we put this in $\sqrt{82-24(x+y)}$ we get $4.47$. Mind you, $x$ and $y$ must satisfy $x^2+y^2=4$, and what Wolfram gives does: $1.87^2+0.71^2=4$. Hence $4$ cannot be the minimum.
Limes of $a_n = i^n$ | Yes, you need a partially ordered set to make sense of suprema and infima. You need this for defining $\limsup$ and $\liminf$.
If you consider the real part note that $$\text{Re } a_n = \text{Re } i^n = \begin{cases}0 &\text{ if } n \text{ odd} \\ (-1)^{\frac n2 } &\text{ if } n \text{ even}\end{cases} $$.
Then it is easy to see that $\limsup \text{Re } a_n = 1$ and $\liminf \text{Re } a_n = -1$.
Analogously one gets $\limsup \text{Im } a_n = 1$ and $\liminf \text{Im } a_n = -1$.
Homeomorphism of $K$ and $K\cup \{0\}$ | Let's collect some facts:
Any map $f:(X,T_\text{discrete})\to (Y,T)$ is continuous.
Any map $f:(X,T)\to (Y,T_{\text{codiscrete}})$ is continuous.
If $f:X\to Y$ is continuous and $X$ is compact, then the image $f(X)$ is compact too.
Can you conclude now? |
Showing that the product and metric topology on $\mathbb{R}^n$ are equivalent | The product topology is induced by this norm.
$$\|x\|_{\rm prod} = \max\{|x_k|, 1\le k \le n\}$$
Let us use $\|\cdot\|$ for the Euclidean norm. Then
$$\|x\| = \left(\sum_{k=1}^n x_k^2\right)^{1/2}\le \left(\sum_{k=1}^n \|x\|_{\rm prod}^2\right)^{1/2} = \|x\|_{\rm prod}\sqrt{n}.$$
Now for a reverse inequality.
We have
$$|x_k |\le \left(\sum_{k=1}^n x_k^2\right)^{1/2}, \qquad 1\le k \le n,$$
so $$\|x\|_{\rm prod} \le \|x\|.$$
The norms are equivalent. |
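A numerical spot-check of the two inequalities $\|x\|_{\rm prod} \le \|x\| \le \sqrt n\,\|x\|_{\rm prod}$ on random vectors:

```python
import math
import random

random.seed(1)
for _ in range(100):
    n = random.randint(1, 6)
    x = [random.uniform(-10, 10) for _ in range(n)]
    prod = max(abs(v) for v in x)                 # the product-topology norm
    eucl = math.sqrt(sum(v * v for v in x))       # the Euclidean norm
    assert prod <= eucl + 1e-12 and eucl <= math.sqrt(n) * prod + 1e-12
print("both inequalities hold on all samples")
```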
Proof of e as a limit | The x-h notations are equivalent.
Note that $\ln 1=0$ thus
$$\lim_{x \to 0} \frac{\ln(1+x) - \ln(1)}{x}=\lim_{x \to 0} \left(\frac{1}{x} \cdot \ln(1+x)\right)$$
Since the exponential function is continuous, $$e^{\lim f(x)}\equiv \lim e^{f(x)}$$
Help with hard complex numbers | For the first, it is equal to $i^{-i}.$ So, the log is equal to $-i(\pi i/2 + 2ki\pi) = \pi/2 +2 k \pi.$
The second, before you square, you have the real part of $x=\omega + \omega^2 + \omega^4,$ where $\omega$ is a primitive seventh root of unity. Notice that the conjugate of this expression is $\omega^6 + \omega^5 + \omega^3 = -1-x,$ since the six nontrivial seventh roots of unity sum to $-1.$ Hence $x+\overline{x}=-1,$ so the real part of $x$ is $-1/2,$ and its square is $1/4.$
Are operations on ideals and direct sums/products of abelian groups related? | Yes. In general, $I+J\simeq (I \oplus J)/\{(x, -x) \;|\; x \in I\cap J\}$ as $R$-modules.
The direct product does not seem to be directly related to the product of ideals. The closest thing to it is the tensor product - namely, there is always an epimorphism $I \otimes_R J \twoheadrightarrow IJ$. I think it is not injective in general, although I am not sure now.
The connection is the same as in 1. More precisely, one has always an epi $\bigoplus_j I_j \rightarrow \sum_j I_j$ and the kernel is spanned by tuples of elements that sum up to $0$.
Hard to say. The product of ideals comes from the product of elements, whereas the product of $R$-modules is named after the cartesian product (of sets), so in that sense I guess yes. |
Integrating $\sin(\sqrt{16-x^2})$ with respect to x | You can use trapezium rule
$$\int_{a}^b f(x) \ dx \approx \frac{h}{2}\Big[(y_0+y_n)+2(y_1+y_2+\cdots+y_{n-1})\Big]$$
where $h=\frac{b-a}{n}$ and $y_k=f(x_k)$ with $x_k=a+kh$.
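In code, with the interval $[0,4]$ chosen purely for illustration (the bounds of the original integral are not stated here):

```python
import math

def trapezium(f, a, b, n):
    """Composite trapezium rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return h * s

# Guard against tiny negative arguments from floating-point rounding at x = 4.
f = lambda x: math.sin(math.sqrt(max(0.0, 16 - x * x)))
print(trapezium(f, 0, 4, 1000))
```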
What are the properties of divisibility for an arbitrary commutative ring? | The divisibility poset of a ring $R$ is a lattice iff every pair of elements has a gcd and a lcm.
This does not happen in every commutative ring.
For instance, in the ring $\mathbb Z[\sqrt{-5}]$ there is no gcd for $6$ and $2(1+\sqrt{-5})$ (see this).
A class of rings that have this property is GCD domains, which generalize UFDs. |
Finding $\int_{1/4}^4 \frac{1}{x}\sin(x-\frac{1}{x})\,dx $ | If we substitute $t=x-\frac{1}{x}$ (we are allowed since $g(x)=x-\frac{1}{x}$ is increasing and differentiable over $\left[\frac{1}{4},4\right]$), we are left with:
$$ I = \int_{-15/4}^{15/4}\sin(t)\frac{dt}{\sqrt{4+t^2}}$$
hence we are integrating an odd integrable function over a symmetric domain with respect to the origin, so the value of the integral is simply zero. |
$T$-invariant subspace and minimal polynomial | Hint
We express $T^n(v)$ in the basis $\mathcal B_v$ by
$$T^n(v)=-a_0v-a_1T(v)-\cdots- a_{n-1}T^{n-1}v$$
and we write the matrix of $T$ relative to the basis $\mathcal B_v$ so we find the companion matrix $C(p)$. We prove by calculating $\det(tI_n-C(p))$ that the characteristic polynomial $\chi_T$ is equal to $a_0+a_1t+\cdots+a_{n-1}t^{n-1}+t^n$ and it's equal to the minimal polynomial $\mu_T$. |
In Arrow's theorem, why not $n=2$? | It's false in the case $n=2$ - just pick the standard voting model (sum up the votes between the two candidates), and it satisfies all three of Arrow's axioms, which you can check. |
flaws in this proof? uniform continuity | Here are a few points:
It seems that you mistook the function $f$ for the particular sine function.
To formalize your proof, you should say that $\delta=1$ works for the particular $\epsilon$ used in the uniform continuity on the interval $[0,P+1]$.
Other than that, I think that's a great proof. |
Books that state column vectors are linearly dependent if determinant is $0$? | Does the following book help you?
https://www.amazon.com/Linear-Algebra-2nd-Kenneth-Hoffman/dp/0135367972
Please inform me whether it was helpful or I should delete the answer. |
Find the set of 'fair' groupings from non-square binary matrix and cost vector | Thanks to the edited tags on this question I now see it is a variation of the 'fair allocation problem', and there is a good resource here on the topic: http://recherche.noiraudes.net/resources/papers/comsoc-chapter12.pdf
Often it's just a case of finding out what your problem's name is :-) |